The interaction of two spherical particles in simple-shear flows of yield stress fluids

Mohammadhossein Firouznia, Bloen Metzger, Guillaume Ovarlez, Sarah Hormozi (email: [email protected])

HAL preprint hal-01768663, 2018: https://hal.science/hal-01768663/file/JNNFM_two_particle_PREPRINT.pdf

Keywords: Yield stress materials, PIV visualization, Low-Reynolds-number flows, Simple-shear flow, Elastoviscoplastic materials, Noncolloidal yield stress suspensions
Introduction
The flows of non-Newtonian slurries, often suspensions of noncolloidal particles in yield stress fluids, are ubiquitous in many natural phenomena (e.g. flows of slurries, debris and lava) and industrial processes (e.g. waste disposal, concrete, drilling muds and cuttings transport, food processing). Studying the rheological and flow behaviors of non-Newtonian slurries is therefore of high interest. The bulk rheology and macroscopic properties of noncolloidal suspensions are related to the underlying microstructure, i.e., the arrangement of the particles. Therefore, investigating the interactions of particles immersed in viscous fluids is key to understanding the microstructure, and consequently, to refine the governing constitutive laws of noncolloidal suspensions. Here, we study experimentally the interaction of two particles in shear flows of yield stress fluids.
There exists an extensive body of research on the hydrodynamic interactions of two particles in shear flows of Newtonian fluids. One of the most influential studies on this subject was performed by Batchelor and Green [START_REF] Batchelor | The hydrodynamic interaction of two small freely-moving spheres in a linear flow field[END_REF], who then used the knowledge of two-particle trajectories and stresslets to scale up the results and provide a closure for the bulk shear stress in a dilute noncolloidal suspension to second order in the solid volume fraction, φ [START_REF] Batchelor | The determination of the bulk stress in a suspension of spherical particles to order c 2[END_REF]. Moreover, they showed that due to the fore-aft symmetry of the particle trajectories, Stokesian noncolloidal suspensions do not exhibit any normal stress difference.
The work of Batchelor and Green was followed by subsequent attempts [START_REF] Jeffrey | Calculation of the resistance and mobility functions for two unequal rigid spheres in low-reynolds-number flow[END_REF][START_REF] Kim | The resistance and mobility functions of two equal spheres in low-reynolds-number flow[END_REF][START_REF] Jeffrey | The calculation of the low reynolds number resistance functions for two unequal spheres[END_REF][START_REF] Kim | Microhydrodynamics: principles and selected applications[END_REF] to develop accurate functions describing the hydrodynamic interactions between two particles, which built a foundation for further analytical studies [START_REF] Batchelor | The effect of brownian motion on the bulk stress in a suspension of spherical particles[END_REF][START_REF] Brady | Microstructure of strongly sheared suspensions and its impact on rheology and diffusion[END_REF][START_REF] Zarraga | Normal stress and diffusion in a dilute suspension of hard spheres undergoing simple shear[END_REF] and powerful simulation methods such as Stokesian Dynamics [START_REF] Brady | Stokesian dynamics[END_REF]. A large body of theoretical and numerical studies has been done to solve the relative motion of two spherical particles in order to obtain the quantities required for the calculation of the bulk parameters, such as mean stress and viscosity in suspensions with a wide range of solid fractions (dilute to semi-dilute) [START_REF] Batchelor | The hydrodynamic interaction of two small freely-moving spheres in a linear flow field[END_REF][START_REF] Brenner | On the stokes resistance of multiparticle systems in a linear shear field[END_REF][START_REF] Wakiya | Particle motions in sheared suspensions xxi: Interactions of rigid spheres (theoretical)[END_REF][START_REF] Lin | Slow motion of two spheres in a shear field[END_REF][START_REF] Guazzelli | A physical introduction to suspension dynamics[END_REF].
The Stokes regime without any irreversible forces leads to symmetric particle trajectories, and consequently, a symmetric Pair Distribution Function (PDF), i.e., the probability of finding a particle at a certain position in space with respect to a reference particle. These result in a Newtonian bulk behavior without any development of normal stress differences in shear flows. However, even in Stokesian suspensions the PDF is not symmetric [START_REF] Blanc | Experimental signature of the pair trajectories of rough spheres in the shear-induced microstructure in noncolloidal suspensions[END_REF][START_REF] Blanc | Microstructure in sheared non-brownian concentrated suspensions[END_REF][START_REF] Brady | Microstructure of strongly sheared suspensions and its impact on rheology and diffusion[END_REF][START_REF] Parsi | Fore-and-aft asymmetry in a concentrated suspension of solid spheres[END_REF][START_REF] Gao | Direct investigation of anisotropic suspension structure in pressure-driven flow[END_REF] and the loss of symmetry can be related to contact, due to roughness [START_REF] Blanc | Kinetics of owing dispersions. 9. doublets of rigid spheres (experimental)[END_REF][START_REF] Cunha | Shear-induced dispersion in a dilute suspension of rough spheres[END_REF][START_REF] Pham | Particle dispersion in sheared suspensions: Crucial role of solid-solid contacts[END_REF][START_REF] Rampall | The influence of surface roughness on the particle-pair distribution function of dilute suspensions of noncolloidal spheres in simple shear flow[END_REF] or other irreversible surface forces (e.g., repulsive force leads to an asymmetric PDF in a similar fashion to how a finite amount of Brownian motion does [START_REF] Brady | Microstructure of strongly sheared suspensions and its impact on rheology and diffusion[END_REF]).
The microstructure affects the macroscopic properties of noncolloidal suspensions leading to non-Newtonian effects (i.e., normal stress differences) and phenomena such as shear induced migration of particles [START_REF] Morris | A review of microstructure in concentrated suspensions and its implications for rheology and bulk flow[END_REF][START_REF] Singh | Normal stresses and microstructure in bounded sheared suspensions via stokesian dynamics simulations[END_REF][START_REF] Sierou | Rheology and microstructure in concentrated noncolloidal suspensions[END_REF][START_REF] Stickel | Fluid mechanics and rheology of dense suspensions[END_REF]. Thus, the development of accurate constitutive equations requires considering the connection between the microstructure and macroscopic properties either explicitly [START_REF] Phan-Thien | A new constitutive model for monodispersed suspensions of spheres at high concentrations[END_REF][START_REF] Stickel | A constitutive model for microstructure and total stress in particulate suspensions[END_REF][START_REF] Stickel | Application of a constitutive model for particulate suspensions: Time-dependent viscometric flows[END_REF] or implicitly through the particle phase stress [START_REF] Miller | Suspension flow modeling for general geometries[END_REF][START_REF] Morris | Curvilinear flows of noncolloidal suspensions: The role of normal stresses[END_REF][START_REF] Morris | Pressure-driven flow of a suspension: Buoyancy effects[END_REF][START_REF] Morris | A review of microstructure in concentrated suspensions and its implications for rheology and bulk flow[END_REF][START_REF] Nott | Pressure-driven flow of suspensions: simulation and theory[END_REF].
A yield stress fluid deforms and flows when it is subjected to a shear stress larger than its yield stress. In ideal yield stress models, such as the Bingham or Herschel-Bulkley models [START_REF] Huilgol | Fluid mechanics of viscoplasticity[END_REF], the state of stress is undetermined when the shear stress is below the yield stress and the shear rate vanishes. In the absence of inertia, the solutions to flows of ideal yield stress fluids have the following features: (i) uniqueness, (ii) nonlinearity of the equations, and (iii) symmetries of the domain geometry which, coupled with reversibility, are reflected in the solutions [START_REF] Putz | Creeping flow around particles in a bingham fluid[END_REF]. Therefore, flows around obstacles, such as spheres, should lead to symmetric unyielded regions and to symmetric flow lines in the yielded regions, as observed in simulations [START_REF] Beris | Creeping motion of a sphere through a bingham plastic[END_REF][START_REF] Liu | Convergence of a regularization method for creeping flow of a bingham material about a rigid sphere[END_REF][START_REF] Blackery | Creeping motion of a sphere in tubes filled with a bingham plastic material[END_REF][START_REF] Beaulne | Creeping motion of a sphere in tubes filled with herschel-bulkley fluids[END_REF][START_REF] Deglo De Besses | Sphere drag in a viscoplastic fluid[END_REF].
However, recent studies report phenomena such as the loss of fore-aft symmetry under creeping conditions and the formation of a negative wake behind particles, which cannot be explained under the assumption of an ideal yield stress fluid [START_REF] Putz | Settling of an isolated spherical particle in a yield stress shear thinning fluid[END_REF][START_REF] Holenberg | Ptv and piv study of the motion of viscous drops in yield stress material[END_REF]. While these behaviors had previously been attributed to the thixotropy of the material [START_REF] Gueslin | Flow induced by a sphere settling in an aging yield-stress fluid[END_REF], recent simulations show similar behaviors for non-thixotropic materials when elastic effects are considered [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF][START_REF] Fraggedakis | Yielding the yield stress analysis: A thorough comparison of recently proposed elasto-visco-plastic (evp) fluid models[END_REF]. Therefore, elastoviscoplastic (EVP) models have been proposed which consider the contributions of elastic, plastic and viscous effects simultaneously in order to analyze the material behavior more accurately [START_REF] Saramito | A new constitutive equation for elastoviscoplastic fluid flows[END_REF][START_REF] Saramito | A new elastoviscoplastic model based on the herschel-bulkley viscoplastic model[END_REF][START_REF] Dimitriou | Describing and prescribing the constitutive response of yield stress fluids using large amplitude oscillatory shear stress (laostress)[END_REF]. The study of inclusions (i.e., solid particles, fluid droplets and air bubbles) in yield stress fluids is not as advanced as that in Newtonian fluids. The main challenges lie in the nonlinearity of the constitutive laws of yield stress fluids and in resolving the structure of unyielded regions, where the stress is below the yield stress (for more details see [START_REF] Hormozi | Visco-plastic sculpting[END_REF]). To locate the yield surfaces that separate unyielded from yielded regions, two basic computational methods are used: regularization and the Augmented Lagrangian (AL) approach [START_REF] Liu | Convergence of a regularization method for creeping flow of a bingham material about a rigid sphere[END_REF].
On the experimental front, techniques such as PIV [START_REF] Gueslin | Flow induced by a sphere settling in an aging yield-stress fluid[END_REF][START_REF] Putz | Settling of an isolated spherical particle in a yield stress shear thinning fluid[END_REF][START_REF] Gueslin | Sphere settling in an aging yield stress fluid: link between the induced flows and the rheological behavior[END_REF][START_REF] Holenberg | Particle tracking velocimetry and particle image velocimetry study of the slow motion of rough and smooth solid spheres in a yield-stress fluid[END_REF][START_REF] Holenberg | Ptv and piv study of the motion of viscous drops in yield stress material[END_REF][START_REF] Ahonguio | Influence of surface properties on the flow of a yield stress fluid around spheres[END_REF], PTV [START_REF] Holenberg | Particle tracking velocimetry and particle image velocimetry study of the slow motion of rough and smooth solid spheres in a yield-stress fluid[END_REF][START_REF] Holenberg | Ptv and piv study of the motion of viscous drops in yield stress material[END_REF], Nuclear Magnetic Resonance (NMR) [START_REF] Van Dinther | Suspension flow in microfluidic devicesa review of experimental techniques focussing on concentration and velocity gradients[END_REF][START_REF] Ovarlez | Flows of suspensions of particles in yield stress fluids[END_REF], X-ray [START_REF] Heindel | A review of x-ray flow visualization with applications to multiphase flows[END_REF][START_REF] Gholami | Timeresolved 2d concentration maps in flowing suspensions using x-ray[END_REF], Magnetic Resonance Imaging (MRI) [START_REF] Powell | Experimental techniques for multiphase flows[END_REF] are used to study the flow field inside the yielded region as well as determining the yield surface.
Generally speaking, studies of single and multiple inclusions (i.e., rigid particles and deformable bubbles and droplets) in yield stress fluids are abundant.
These studies mainly focus on resolving important physical features when dealing with yield stress suspending fluids, e.g. buoyant inclusions can be held rigidly in suspensions [START_REF] Bhavaraju | Bubble motion and mass transfer in non-newtonian fluids: Part i. single bubble in power law and bingham fluids[END_REF][START_REF] Potapov | Motion and deformation of drops in bingham fluid[END_REF][START_REF] Tsamopoulos | Steady bubble rise and deformation in newtonian and viscoplastic fluids and conditions for bubble entrapment[END_REF][START_REF] Singh | Interacting two-dimensional bubbles and droplets in a yield-stress fluid[END_REF][START_REF] Dimakopoulos | Steady bubble rise in herschel-bulkley fluids and comparison of predictions via the augmented lagrangian method with those via the papanastasiou model[END_REF][START_REF] Lavrenteva | Motion of viscous drops in tubes filled with yield stress fluid[END_REF][START_REF] Holenberg | Interaction of viscous drops in a yield stress material[END_REF][START_REF] Maleki | Macro-size drop encapsulation[END_REF][START_REF] Chaparian | Yield limit analysis of particle motion in a yield-stress fluid[END_REF]; multiple inclusions appear not to influence each other beyond a certain proximity range [START_REF] Singh | Interacting two-dimensional bubbles and droplets in a yield-stress fluid[END_REF]; flows may stop in finite time [START_REF] Chaparian | Yield limit analysis of particle motion in a yield-stress fluid[END_REF]; etc. Other studies exist which address the drag closures, the shape of yielded region, the role of slip at the particle surface and its effect on the hydrodynamic interactions [START_REF] Deglo De Besses | Sphere drag in a viscoplastic fluid[END_REF][START_REF] Jossic | Drag and stability of objects in a yield stress fluid[END_REF][START_REF] Holenberg | Ptv and piv study of the motion of viscous drops in yield stress material[END_REF][START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF].
Progressing beyond a single sphere and tackling the dynamics of multiple particles in a Lagrangian fashion is a much more difficult task. Therefore, another alternative is to address yield stress suspensions from a continuum-level closure perspective. The fundamental objective is then to characterize the rheological properties as a function of the solid volume fraction (φ) and properties of the suspending yield stress fluid. Recent studies show that adding particles to a yield-stress fluid usually induces an enhancement of both yield stress and effective viscosity while leaving the power-law index intact [START_REF] Chateau | Homogenization approach to the behavior of suspensions of noncolloidal particles in yield stress fluids[END_REF][START_REF] Mahaut | Yield stress and elastic modulus of suspensions of noncolloidal particles in yield stress fluids[END_REF][START_REF] Ovarlez | Shear-induced sedimentation in yield stress fluids[END_REF][START_REF] Ovarlez | A physical model for the prediction of lateral stress exerted by self-compacting concrete on formwork[END_REF][START_REF] Vu | Macroscopic behavior of bidisperse suspensions of noncolloidal particles in yield stress fluids[END_REF][START_REF] Dagois-Bohy | Rheology of dense suspensions of non-colloidal spheres in yield-stress fluids[END_REF][START_REF] Ovarlez | Flows of suspensions of particles in yield stress fluids[END_REF].
Unlike the case of the settling of particles in yield stress fluids, no attention has been paid to the study of pair interactions of particles in simple flows of yield stress fluids. Knowledge of this fundamental problem is essential to form a basis for further studies of suspensions of non-Brownian particles in yield stress fluids. To this end, we present an experimental study on the interaction of two small freely-moving spheres in a Couette flow of a yield stress fluid. Our main objective is to understand how the nonlinearity of the suspending fluid affects the particle trajectories, and consequently, the bulk rheology. This paper is organized as follows. Section 2 describes the experimental methods, materials and particles used in this study, along with the rheology of our test fluids. In Section 3, we present our results on establishing a linear shear flow in the absence of particles, the flow around one particle, and the interaction of particle pairs in different fluids: Newtonian, yield stress and shear thinning. Finally, we discuss our conclusions and suggestions for future work in Section 4.
Experimental methods and materials
In this section we describe the methodology and materials used in this study.
Experimental set-up
The schematic of the experimental set-up is shown in Fig. 1. It is designed to produce a uniform shear flow within the fluid enclosed by a transparent belt. The belt is tightened between two shafts, one of which is coupled with a precision rotation stage (M-061.PD from PI Piezo-Nano Positioning) with high angular resolution (3 × 10⁻⁵ rad), while the other shaft rotates freely. The rotation generated by the precision rotation stage drives the belt around the shafts and hence applies shear to the fluid maintained in between. In order to have maximum optical clarity along with the mechanical strength to withstand the tension, Mylar sheets (polyethylene terephthalate films from Goodfellow Corporation) of 0.25 mm thickness are used to make the belt. The set-up is designed to reach large enough strains (γ ≈ 45) to ensure the steady-state condition. The design is inspired by Rampall et al. [START_REF] Rampall | The influence of surface roughness on the particle-pair distribution function of dilute suspensions of noncolloidal spheres in simple shear flow[END_REF] and the Couette apparatus is the same as that used by Metzger and Butler in [START_REF] Metzger | Clouds of particles in a periodic shear flow[END_REF].
The flow field is visualized in the plane of shear (xy plane) located in the mid-plane between the free surface and the bottom of the cell. A fraction of the whole flow domain is illuminated by a laser sheet, which is formed by a line generator mounted on a diode laser (2 W, 532 nm). The fluid is seeded homogeneously with fluorescently labeled tracer particles, which reflect the incident light (see Sec. 2.2). Tracer particles should be small enough to follow the flow field without any disturbance and large enough to reflect the light needed for image recording. The thickness of the laser sheet is tuned to be around its minimum in the observation window with a plano-convex cylindrical lens. Images are recorded from the top view via a high-quality magnification lens (Sigma APO-Macro-180 mm-F3.5-DG) mounted on a high-resolution digital camera (Basler Ace acA2000-165um, CMOS sensor, 2048 × 1080 pixels, 8 bit). The reflected light is filtered with a high-pass filter (590 nm), through which the direct reflection (from the particle surface) is eliminated. A transparent window made of acrylic is carefully placed on the free surface of the fluid in order to eliminate the deformation of the fluid surface, which significantly improves the quality of the images. The imaging system is illustrated schematically in Fig. 1.
Particles
Particles used in this study are transparent and made of PMMA (polymethyl methacrylate, Engineering Laboratories Inc.) with a radius of a = 1 mm, a density of 1.188 g/cm³ and a refractive index of 1.492 at 20 °C. They are dyed with Rhodamine 6G (Sigma-Aldrich), which enables us to perform PTV and PIV at the same time. To dye the particles, the procedure proposed by Metzger and Butler in [START_REF] Metzger | Clouds of particles in a periodic shear flow[END_REF] is followed: PMMA particles are soaked for 30 minutes in a mixture of 50% wt. water and 50% wt. ethanol with a small amount of Rhodamine 6G maintained at 40 °C. They are rinsed with an excess amount of water afterwards to ensure that there is no excess fluorescent dye on their surface and that the coating is stable.
The surfaces of particles from the same batch have previously been observed by Phong [START_REF] Pham | Origin of shear-induced diffusion in particulate suspensions: Crucial role of solid contacts between particles[END_REF][START_REF] Pham | Particle dispersion in sheared suspensions: Crucial role of solid-solid contacts[END_REF] and Souzy [START_REF] Souzy | Mélange dans les suspensions de particules cisaillées à bas nombre de reynolds[END_REF] using an Atomic Force Microscope (AFM) and a Scanning Electron Microscope (SEM). The root mean square and peak values of the roughness are measured to be 0.064 ± 0.03 µm and 0.6 ± 0.3 µm, respectively, after investigating an area of 400 µm² [START_REF] Pham | Particle dispersion in sheared suspensions: Crucial role of solid-solid contacts[END_REF]. Moreover, in order to perform PIV, the fluid is seeded with melamine resin particles dyed with Rhodamine B, with a diameter of 3.87 µm, provided by Microparticle GmbH.
Fluids
In this study, three different fluids are used: a Newtonian fluid, a yield stress fluid and a shear thinning fluid. Each is described in the following sections.
Newtonian fluid
The Newtonian fluid is designed to have its density and refractive index (RI) matched with those of the PMMA particles. Any RI mismatch could lead to refraction of the laser light when it passes the particle-fluid interface, which decreases the quality of the images and makes the post-processing very difficult or even impossible. However, we only have one or two particles in our experiments and, therefore, a slight refractive index mismatch does not result in a poor quality image. The fluid consists of 76.20% wt. Triton X-100, 14.35% wt. zinc chloride, 9.31% wt. water and 0.14% wt. hydrochloric acid [START_REF] Souzy | Stretching and mixing in sheared particulate suspensions[END_REF], with a viscosity of 4.64 Pa·s and a refractive index of 1.491 ± 10⁻³ at room temperature. A small amount of hydrochloric acid prevents the formation of zinc hypochlorite and thus enhances the transparency of the solution. Water is first added to zinc chloride gradually and the solution is stirred until all solid particles dissolve in the water. Since the process is exothermic, we let the solution cool down to room temperature. After adding the hydrochloric acid to the cooled solution, Triton X-100 is added and mixed until the final solution is homogeneous.
Yield stress fluid
Here we limit our study to non-thixotropic yield-stress materials with identical static and dynamic yield-stress independent of the flow history [START_REF] Balmforth | Yielding to stress: recent developments in viscoplastic fluid mechanics[END_REF][START_REF] Ovarlez | On the existence of a simple yield stress fluid behavior[END_REF].
To this end, we chose Carbopol 980, which is a cross-linked polyacrylic acid with high molecular weight and is widely used in industry as a thickening agent. Most of the experimental works studying the flow characteristics of simple yield-stress fluids utilize Carbopol since it is highly transparent and its thixotropy can be neglected. Carbopol 980 is available in the form of an anhydrous solid powder with micrometer-sized grains. When mixed with water, the polymer chains hydrate, uncoil and swell, forming an acidic solution with pH ≈ 3-4.
When neutralized with a suitable basic agent such as sodium hydroxide, the microgels swell up to 1000 times their initial size (a 10 times larger radius) and jam (depending on the concentration), forming a structure which exhibits yield-stress and elastic behavior [START_REF] Gutowski | Scaling and mesostructure of carbopol dispersions[END_REF][START_REF] Lee | Investigating the microstructure of a yield-stress fluid by light scattering[END_REF]. The rheological properties of Carbopol gels depend on both concentration and pH. At intermediate concentrations, both the yield stress and the elastic modulus increase with pH until they reach their peak values around the neutral point, where they are least sensitive to pH. A comprehensive study on the microstructure and properties of Carbopol gel is provided by Piau in [START_REF] Piau | Carbopol gels: Elastoviscoplastic and slippery glasses made of individual swollen sponges: Meso-and macroscopic properties, constitutive equations and scaling laws[END_REF].
In order to make a Carbopol gel with a density matched with that of the PMMA particles mentioned in Sec. 2.2, first, a solution of 27.83% wt. deionized water and 72.17% wt. glycerol (provided by ChemWorld) is prepared, which has the same density as the PMMA particles. Then, depending on the concentration needed for the experiment (in the range of 0.07-0.2% wt. in this study), the corresponding amount of Carbopol 980 (provided by Lubrizol Corporation) is added to the solution while it is being mixed by a mixer. The dispersion is left to mix for hours until all Carbopol particles hydrate and the dispersion is homogeneous. A small amount of sodium hydroxide (provided by Sigma-Aldrich) is then added in order to neutralize the dispersion. It is suggested to add all of the neutralizer at once, or at least in a short amount of time, since the viscosity increases drastically as the pH increases, which would increase the mixing time. The solution becomes more transparent as it reaches neutral pH. The refractive index of the Carbopol gels used in this study varies in the range of 1.3705 ± 10⁻³. By investigating the rheological properties of the gel at different pHs, we found pH ≈ 7.4 to be a stable point with the highest yield stress and elastic modulus. The solution is then covered and mixed for more than eight hours.
The final solution is transparent, homogeneous with no visible aggregates. Also, the rheometry results of all samples taken from different parts of the solution batch collapse. The compositions of all Carbopol gels used in this study are described in Table 1.
Shear thinning fluid
In order to investigate the effects of yield stress and shear thinning individually, it is necessary to study the problem with a shear thinning fluid with no yield stress. Therefore, we chose Hydroxypropyl Guar, which is a derivative of guar gum, a polysaccharide made from the seeds of guar beans. Jaguar HP-105 (provided by Solvay Inc.), which is widely used in cosmetics and personal care products [START_REF] Inc | Jaguar, product guide for personal care solutions[END_REF], is used in this study. It is transparent when mixed with water and exhibits negligible yield stress at low to moderate concentrations. The refractive index of the guar gum solutions used in this study varies in the range of 1.3685 ± 10⁻³.
In order to make a solution of Jaguar HP-105 with the same density as the particles, we follow the same scheme described earlier for the Carbopol gel in Sec. 2.3.2. First, a solution of 27.83% wt. deionized water and 72.17% wt. glycerol (provided by ChemWorld) is prepared. While it is being mixed by a mixer, and depending on the desired concentration (0.3-0.6% wt. in this study), the corresponding amount of Jaguar HP-105 is added gradually to the solution. The dispersion is covered and mixed for 24 hours until a homogeneous solution is achieved. Homogeneity is tested by comparing rheometry results performed on samples taken from different spots in the container. The compositions of the guar gum solutions used in this study are described in Table 1.
Rheometry
Unlike Newtonian fluids, the effective viscosity of non-Newtonian fluids depends on the shear rate and the flow history. Here, we explain the rheological tests performed to characterize the non-Newtonian behaviors. For each test, the procedure is described, followed by the results and their interpretation. All measurements shown in this section are carried out using serrated parallel plates with a stress-controlled DHR-3 rheometer (provided by TA Instruments) on samples of Carbopol gels and guar gum solutions, referred to as "YS1-2" and "ST", respectively. The rheological properties of all test fluids used in this study are described in Table 1.
A logarithmic shear rate ramp with γ̇ ∈ [0.001, 10] s⁻¹ is applied to samples of the test fluids for a duration of 105 s in order to find the relation between shear rate and shear stress, τ = f(γ̇) (see Fig. 2). During the increasing shear ramp, the material is sheared from rest. The behavior of the yield stress material is hence similar to a Hookean solid until the stress reaches the yield stress. Beyond the yield stress, the material starts to flow like a shear thinning liquid. On the contrary, during the decreasing shear ramp, the yield stress material is already flowing and the stress asymptotes to the yield stress at low shear rates (see Fig. 2a). The values of the yield stress obtained from the increasing and decreasing ramps are identical. This is the typical behavior of non-thixotropic yield-stress materials (more information can be found in [START_REF] Uhlherr | The shear-induced solid-liquid transition in yield stress materials with chemically different structures[END_REF][START_REF] Coussot | Rheometry of pastes, suspensions, and granular materials: applications in industry and environment[END_REF]). The measurements of the increasing and decreasing ramps overlap beyond the yield stress and show no sign of hysteresis. The rheological behavior of the Carbopol gel is well described by the Herschel-Bulkley model (Eq. 1), as shown in Fig. 2 (increasing ramps, decreasing ramps and the corresponding Herschel-Bulkley fits described in Table 1; the inset of Fig. 2b presents the variation of viscosity versus shear rate for ST):
τ = τ_y + K γ̇ⁿ    (1)
where τ_y is the yield stress, K is the consistency and n is the power-law index.
These values, calculated for YS1-2 over the range γ̇ ∈ [0.01, 10] s⁻¹, are reported in Table 1.
Fig. 2b shows the rheology of the guar gum solution, ST, in the plane of shear stress versus shear rate. The Carreau-Yasuda model has generally been adopted to describe the rheological behavior of guar gum solutions [START_REF] Risica | Rheological properties of guar and its methyl, hydroxypropyl and hydroxypropylmethyl derivatives in semidilute and concentrated aqueous solutions[END_REF][START_REF] Szopinski | Structure-property relationships of carboxymethyl hydroxypropyl guar gum in water and a hyperentanglement parameter[END_REF]. The inset of Fig. 2b shows the viscosity of the guar gum solution versus shear rate following the Carreau-Yasuda model. We see that the viscosity presents a plateau, η₀ ≈ 12.2 Pa·s, in the limit of small shear rates, γ̇ ≲ 0.1 s⁻¹. At γ̇ ≳ 0.1 s⁻¹ the viscosity decreases with shear rate until it reaches another plateau at higher shear rates. Here, we adopt a power-law model, which properly describes the rheological behavior of the material in the range of shear rates of our experiments. The values of the consistency and power-law index are reported in Table 1.
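As an illustration of how such fit parameters can be obtained, the sketch below performs a least-squares fit of the Herschel-Bulkley model (Eq. 1) and of a power-law model to a measured flow curve. It is a minimal sketch: the data file name, column layout and initial guesses are assumptions for illustration, not part of the measurement protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical flow-curve file: two columns, shear rate [1/s] and shear stress [Pa],
# e.g. exported from the decreasing ramp of the DHR-3 measurement.
gamma_dot, tau = np.loadtxt("flow_curve_YS1.txt", unpack=True)

def herschel_bulkley(gd, tau_y, K, n):
    """Herschel-Bulkley model, Eq. (1): tau = tau_y + K * gd**n."""
    return tau_y + K * gd**n

def power_law(gd, K, n):
    """Power-law model adopted for the guar gum solution (ST)."""
    return K * gd**n

# Fit over the same range used for Table 1, 0.01 <= gamma_dot <= 10 1/s.
mask = (gamma_dot >= 0.01) & (gamma_dot <= 10.0)

# Initial guesses are placeholders (tau_y ~ 1 Pa, K ~ 1 Pa.s^n, n ~ 0.5).
(tau_y, K, n), _ = curve_fit(herschel_bulkley, gamma_dot[mask], tau[mask],
                             p0=[1.0, 1.0, 0.5])
(K_pl, n_pl), _ = curve_fit(power_law, gamma_dot[mask], tau[mask], p0=[1.0, 0.5])

print(f"Herschel-Bulkley: tau_y = {tau_y:.3g} Pa, K = {K:.3g} Pa.s^n, n = {n:.3g}")
print(f"Power law: K = {K_pl:.3g} Pa.s^n, n = {n_pl:.3g}")
```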
Practical yield-stress fluids exhibit viscoelastic behavior as well. Therefore, it is expected that the shear history has an impact on the behavior of the material. We have adopted two experimental procedures to evaluate the effect of shear history. In the first procedure, we shear the material ensuring that the strain is sufficient to break the micro-structure of gel and reach a steady state.
Then, we rest the material for one minute (zero stress) and apply the shear in the same direction as the pre-shear (hereafter called positive pre-shear). In the second procedure, we reverse the direction of the applied shear after imposing a pre-shear on the material (hereafter called negative pre-shear) and a rest period. Fig. 3a shows that, under a constant applied shear stress, the yield stress material reaches its steady state after a larger strain when a negative pre-shear is applied. However, the shear history does not affect the behavior of the guar gum solution, as shown in Fig. 3b. These procedures helped us design the experimental protocol for our Couette flow experiments (see Sec. 3.3.2).
One can conclude that a pre-shear in the same direction as the shear imposed subsequently in the experiments is appropriate for obtaining a behavior close to that of an ideal visco-plastic material.
In order to characterize the viscoelasticity of the test fluids further, the shear storage modulus, G′, and the shear loss modulus, G″ (representing the elastic and viscous behavior of the material, respectively), are measured during oscillatory tests. The dynamic moduli of YS1 and ST are shown in Fig. 4 as a function of the strain amplitude, γ₀ ∈ [10⁻¹, 10³] %, at constant frequency, ω = 1 rad/s. We observe that the behavior is linear up to γ₀ ≈ 1% in YS1, while it remains linear up to larger strain amplitudes, γ₀ ≈ 10%, in ST. Elastic effects are dominant (i.e., G′ > G″) at strain amplitudes lower than γ₀ ≈ 100% in the yield stress material, YS1 (see Fig. 4a). At γ₀ > 100%, the shear loss modulus becomes larger than the shear storage modulus in YS1, indicating that viscous effects take over. On the other hand, elastic and viscous effects are equally important in ST in the linear viscoelastic regime, as the shear loss and shear storage moduli have identical values below γ₀ ≈ 100% (see Fig. 4b). At larger strain amplitudes, however, the shear loss modulus becomes larger, implying larger viscous effects. The values of G′ and G″ reported in Table 1 are measured at ω = 1 rad/s and γ₀ = 0.25%. In Fig. 5 the variation of the dynamic moduli is given as a function of frequency for the Carbopol gel, YS1, and the guar gum solution, ST. Different curves correspond to different strain amplitudes (γ₀ = 1, 5, 20, 50, 100 %).
Post-processing
The PMMA particles are tracked during their motion via Particle Tracking Velocimetry (PTV) to extract their trajectories. Images are recorded at strain increments of γ_rec ≈ 0.6% to ensure high temporal resolution. In each image, the center and radius of each particle are detected via the Circular Hough Transform [START_REF] Peng | Detect circles with various radii in grayscale image via hough transform[END_REF][START_REF] Duda | Use of the hough transformation to detect lines and curves in pictures[END_REF]. Due to the small strain difference between two consecutive images, and consequently the small displacement of the PMMA particles, the same particles can be identified and labeled in both images. Applying this methodology to all images, we obtain the particle trajectories.
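A minimal sketch of the particle-detection step is given below, using the circular Hough transform from scikit-image rather than the MATLAB routine cited above; the frame file name and the expected radius range in pixels are illustrative assumptions.

```python
import numpy as np
from skimage import io, feature, transform

# Hypothetical frame from the recorded sequence (8-bit grayscale image).
frame = io.imread("frame_0001.tif", as_gray=True)

# Edge map feeding the circular Hough transform.
edges = feature.canny(frame, sigma=2.0)

# Range of candidate radii in pixels (assumed magnification: a = 1 mm ~ 140 px).
radii = np.arange(130, 151)
hough = transform.hough_circle(edges, radii)

# Keep the two strongest circles, i.e. the two PMMA particles in the frame.
accums, cx, cy, detected_radii = transform.hough_circle_peaks(
    hough, radii, total_num_peaks=2)

for x, y, r in zip(cx, cy, detected_radii):
    print(f"particle center = ({x}, {y}) px, radius = {r} px")
```

Because the strain increment between frames is small, linking detections across frames reduces to nearest-neighbor matching of the detected centers.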
Particle Image Velocimetry (PIV) is employed to measure the local velocity field from successive images recorded from the flow field. It is worth mentioning that in this method we calculate the two dimensional projection of the velocity field in the plane of shear (xy plane).
We have used the MatPIV routine with minor modifications in order to analyze PIV image pairs [START_REF] Sveen | An introduction to matpiv v. 1.6. 1[END_REF]. Each image is divided into multiple overlapping sub-images, also known as interrogation windows. The PIV algorithm goes through three iterations of FFT-based cross-correlation between corresponding interrogation windows in two successive images in order to calculate the local velocity field. The velocity field measured in each iteration is used to improve the accuracy during the next iteration, where the interrogation window size is reduced by one half. Window sizes of 64 × 64, 32 × 32 and 16 × 16 pixels (≈ a/9) with an overlap of 50% are selected for the first, second and third iterations, respectively. Following each iteration, spurious vectors are identified by different filters such as signal-to-noise ratio, global histogram and local median filters. Spurious vectors are then replaced via linear interpolation between surrounding vectors. Since less than 3.1% of our data is affected, we do not expect a significant error due to the interpolation process. The measured velocity is ignored if the interrogation window overlaps with the particle surface (detected earlier via the PTV algorithm). The size independence of the velocity measurements is verified by comparing the results with those obtained when we increase the interrogation window size to 32 × 32 pixels (≈ a/4.5).
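The core operation of each PIV pass is an FFT-based cross-correlation of corresponding interrogation windows. The sketch below shows that single operation for one window pair; window extraction, the iterative refinement, sub-pixel peak fitting and the outlier filters implemented in MatPIV are omitted.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the pixel displacement of the tracer pattern between two
    corresponding interrogation windows via FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Cross-correlation theorem: corr(a, b) = IFFT( conj(FFT(a)) * FFT(b) )
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = np.array(corr.shape) // 2
    # Offset of the correlation peak from the window center is the mean shift.
    return peak_x - cx, peak_y - cy

# Synthetic check: a 32x32 window whose pattern moves by (+3, +1) pixels.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
win_a = img[16:48, 16:48]
win_b = img[15:47, 13:45]   # same pattern displaced by dx = 3, dy = 1
print(window_displacement(win_a, win_b))  # -> (3, 1)
```

Dividing the displacement by the time separating the two frames (set by the imposed strain increment) gives the local velocity vector at that window.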
Experimental results
Establishing a linear shear flow in the absence of particles
The first step is to establish a linear shear flow field within the experimental set-up. Any deviation from a linear velocity profile across the gap of the Couette-cell affects the flow field around one particle, or the interaction of two particles. Our Couette-cell has finite dimensions, bounded by a wall at the bottom, an acrylic window at the top and two rotating cylinders on the sides (see Fig. 1). It is essential to show that a linear shear flow is achievable in the middle of the set-up and is not affected by the boundaries. The Reynolds number is defined as:
Re = ρ (2U/H) a² / µ    (2)
which is of the order O(10⁻⁵) in our experiments, implying that inertial effects are negligible. Here, a and H are the particle radius and the gap width, respectively, U is the maximum velocity across the gap, ρ is the density and µ is the viscosity of the fluid. Moreover, according to the aspect ratio of the Couette-cell (50 cm long versus 2 cm wide), the central region where measurements are made is far from the shafts. In the absence of inertia and boundary effects, the solution of the momentum equations gives a linear velocity profile in our configuration, independent of the rheology of the test fluids. In this section, we present our experimental results showing how a linear shear flow field is established within the Couette-cell for the different suspending fluids: Newtonian, yield stress and shear thinning.
In the case of the Newtonian fluid, Fig. 6a shows the velocity profile across the gap for different shear rates imposed at the belt. The velocity field is averaged along the x-direction (flow direction). We normalize the velocity with the maximum velocity across the PIV window, u_c, and show that all velocity profiles collapse onto a master curve (see Fig. 6b), confirming that a linear shear flow is established with the Newtonian fluid.
When we deal with a yield stress test fluid, additional dimensionless numbers arise besides the bulk Reynolds number, including the Bingham number (B), which is the ratio of the yield stress (τ_Y) to the viscous stress (K γ̇ⁿ) in the flow:
B = τ_Y / (K γ̇ⁿ)    (3)
Another important dimensionless number is the Deborah number, which is the ratio of the material time scale to the flow time scale. For elastoviscoplastic materials, the relaxation time λ, the elastic modulus G′ and the apparent plastic viscosity η_p are related via η_p = λ G′, where the so-called plastic viscosity is defined as follows [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF]:
η_p = (τ − τ_Y) / γ̇    (4)
Comparing Eq. (4) with Eq. (1), we conclude η_p = K γ̇ⁿ⁻¹. Therefore, the Deborah number is:
De = λ γ̇ = K γ̇ⁿ / G′    (5)
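For orientation, the dimensionless groups in Eqs. (2), (3) and (5) can be evaluated directly from the fitted rheology and the imposed belt speed; the numerical values below are illustrative placeholders, not the parameters of Table 1.

```python
# Illustrative parameters (placeholders, not the values reported in Table 1).
tau_y   = 2.0      # yield stress [Pa]
K       = 3.0      # consistency [Pa.s^n]
n       = 0.45     # power-law index [-]
G_prime = 40.0     # shear storage modulus G' [Pa]
rho     = 1188.0   # density of the density-matched fluid [kg/m^3]
a       = 1.0e-3   # particle radius [m]
H       = 2.0e-2   # gap width [m]
U       = 1.0e-3   # maximum velocity across the gap [m/s]

gamma_dot = 2.0 * U / H                               # imposed shear rate [1/s]
mu_app = tau_y / gamma_dot + K * gamma_dot**(n - 1)   # apparent HB viscosity [Pa.s]

Re = rho * gamma_dot * a**2 / mu_app    # Eq. (2), with mu the apparent viscosity
B  = tau_y / (K * gamma_dot**n)         # Bingham number, Eq. (3)
De = K * gamma_dot**n / G_prime         # Deborah number, Eq. (5)

print(f"gamma_dot = {gamma_dot:.3g} 1/s, Re = {Re:.2e}, B = {B:.2f}, De = {De:.3f}")
```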
Velocity fields obtained via PIV measurements are averaged along the flow direction. Fig. 7a shows the measured velocity profiles across the gap, normalized by the maximum velocity across the PIV window, u_c. Next, shear rate profiles are calculated from the averaged velocity profiles according to Eq. (6) and are used to calculate the shear stress profiles via the Herschel-Bulkley model (shown in Fig. 7b and 7c, respectively). Shear rate profiles are normalized by the average shear rate across the gap, γ̇_c, while stress profiles are normalized by the average stress across the gap, τ_c.

γ̇_loc = √( 2 (∂u_d/∂x)² + 2 (∂v_d/∂y)² + (∂u_d/∂y + ∂v_d/∂x)² )    (6)
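On a gridded PIV field, Eq. (6) can be evaluated with finite differences; a minimal sketch is shown below, where u and v are the two in-plane velocity components on a uniform grid and dx, dy are the (assumed) vector spacings.

```python
import numpy as np

def local_shear_rate(u, v, dx, dy):
    """Evaluate Eq. (6) on a 2D velocity field.

    u, v : 2D arrays of the velocity components (axis 0 along y, axis 1 along x).
    dx, dy : grid spacings of the PIV vector field.
    """
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    return np.sqrt(2.0 * dudx**2 + 2.0 * dvdy**2 + (dudy + dvdx) ** 2)
```

As discussed later for the single-particle fields, differentiating experimental data amplifies noise, which is why many PIV realizations are averaged before this step.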
It is evident that as we increase the Bingham number, the velocity profile deviates from a linear shape, and consequently, the shear rate is not constant. This is quite a unique observation for a yield stress fluid, and the rheology of the fluid can explain this puzzle. Let us take a closer look at the variation of stress with respect to the shear rate shown in Fig. 2 for the yield stress test fluids used in the experiments. We can see that at low shear rates (i.e., high Bingham numbers), such as 0.01 < γ̇ < 0.1 s⁻¹, a small variation in the shear stress projects onto a large variation in the shear rate. On the contrary, at higher shear rates, γ̇ > 1 s⁻¹ (i.e., low Bingham numbers), the same amount of stress variation corresponds to a significantly smaller variation in the shear rate. Fig. 7c shows that the variation of stress across the gap is of the same order for all Bingham numbers, while the resulting shear rate profiles are significantly different in terms of inhomogeneity. This implies that a small stress inhomogeneity due to any imperfection of the set-up and the test fluid (finite dimensions of the set-up, slight inhomogeneity in the test fluid, etc.) projects onto a larger shear rate inhomogeneity as the Bingham number increases. This stress inhomogeneity is estimated from Fig. 7c to be ≈ 2% in our set-up.
Both the characteristic length of the inhomogeneity and its amplitude increase as the Bingham number increases. Our results show that for B ≲ 2 the shear rate inhomogeneity is minimal (comparable to that of the Newtonian test fluid), and we can establish a linear velocity profile in the set-up for the case of a yield stress fluid. Therefore, all the experiments in this work are performed for B < 2.
One particle in a linear shear flow
This section is aimed at studying a linear shear flow around one particle in the limit of zero Re when we have different types of fluids including Newtonian, yield stress and shear thinning. A theoretical solution is available for a particle in a Newtonian fluid subjected to a linear shear flow field. We use the theoretical solution to validate our experimental results. The effect of a non-Newtonian fluid on the flow field around one particle is then investigated experimentally. Studying the disturbance fields around one particle is key to understanding the hydrodynamic interaction of two particles, and consequently, the bulk behavior of suspensions of noncolloidal particles in non-Newtonian fluids.
Stokes flow around one particle in a linear shear flow of a Newtonian fluid: comparison of theory and experiment
First, we compare our PIV measurements with the available theoretical solution for the Stokes flow around one particle in a linear shear flow of a Newtonian fluid [START_REF] Leal | Advanced transport phenomena: fluid mechanics and convective transport processes[END_REF]. The normalized velocity field obtained from the theoretical solution is illustrated in Fig. 8a along with the velocity field measured via PIV in Fig. 8b, both normalized by the velocity at the belt. A quantitative comparison is given in Figs. 8d-8f, where dimensionless velocity profiles are compared at cross sections located at different distances from the particle center, x/a = 2.5, 1, 0.
It is worth mentioning that the PIV measurements are available at distances r/a ≥ 1 + δ, where r is the distance from the particle center and δ ≈ 0.1 is set by the resolution of the PIV interrogation window. The close agreement of our velocity measurements with the theoretical prediction allows us to employ our method for the case of yield stress fluids, where a theoretical solution is unavailable. Our experimental data can be used as a benchmark for these fluids.
Creeping flow around one particle in a linear shear flow: Newtonian and non-Newtonian suspending fluids
We present our PIV measurements of creeping flows around one particle in linear shear flows of Newtonian, shear thinning (guar gum solution) and yield stress (Carbopol gel) suspending fluids. About 100 PIV measurements (i.e., 100 PIV image pairs) are averaged afterwards to reduce the noise. The origin of the coordinate system, (x, y, z), is fixed on the center of the particle and translates with it (non-rotating). We subtract the far-field velocity profile from the experimentally-measured velocity field in order to calculate the disturbance velocity field around one particle:
u_d = (u_d, v_d) = u − u_∞    (7)
where u_d and v_d are the components of the disturbance velocity vector along the flow direction and the gradient direction, respectively. The disturbance velocity field is then normalized by the maximum disturbance velocity in the PIV window. Fig. 9 shows the normalized disturbance velocity field around one particle in linear shear flows of a Newtonian fluid (theory: Fig. 9a and experiment: Fig. 9b), a yield stress fluid (experiment with Carbopol gel: Fig. 9c) and a shear thinning fluid (experiment with guar gum solution: Fig. 9d). The shear flow is established as u_∞ = (γ̇ y, 0, 0) with γ̇ > 0. The disturbance velocity field is normalized by the maximum disturbance velocity in the field. Although a theoretical solution exists for the case of a single rigid sphere in a simple-shear flow of a Newtonian fluid, there is no theoretical solution in the case of a yield stress fluid. Therefore, our experimental measurements shown in Fig. 9c serve as the first set of information about simple-shear flows around a spherical particle. Fig. 10 shows the colormaps of the shear rate around one particle in linear shear flows of a Newtonian fluid (theory: Fig. 10a and experiment: Fig. 10b), a yield stress fluid (experiment with Carbopol gel: Fig. 10c) and a shear thinning fluid (experiment with guar gum solution: Fig. 10d). The magnitudes of the local shear rates are calculated by taking the spatial derivatives of the disturbance velocity fields based on Eq. (6). Although taking the derivative of experimental data (i.e., PIV measurements of the velocity field) amplifies the noise, averaging over more than 100 PIV measurements reduces the noise and allows us to see the qualitative features.
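In practice, Eq. (7) amounts to subtracting the imposed linear profile from the measured field and rescaling; a short sketch is given below, where y is the array of y-coordinates of the PIV grid and gamma_dot the imposed shear rate (both assumed known from the calibration of the linear flow).

```python
import numpy as np

def disturbance_field(u, v, y, gamma_dot):
    """Subtract the far-field profile u_inf = (gamma_dot * y, 0) from a measured
    PIV field (Eq. 7) and normalize by the peak disturbance magnitude."""
    u_d = u - gamma_dot * y      # y has the same shape as u (2D coordinate array)
    v_d = v.copy()               # the far field has no y-component
    scale = np.sqrt(u_d**2 + v_d**2).max()
    return u_d / scale, v_d / scale
```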
For the Newtonian fluid, our experimental results shown in Fig. 9b are in very close agreement with the theoretical solution illustrated in Fig. 9a. We can see that the disturbance velocity has fore-aft symmetry and decays as we move away from the particle surface. Unlike the Newtonian fluid, the fore-aft symmetry is broken for our non-Newtonian test fluids (see Figs. 9c and 9d). The fore-aft asymmetry is significantly larger for the Carbopol gel (Fig. 9c). As mentioned in Section 1, the loss of fore-aft symmetry is not predicted for the flow field around one particle if we use ideal visco-plastic constitutive models, e.g. the Herschel-Bulkley and Bingham models [START_REF] Beris | Creeping motion of a sphere through a bingham plastic[END_REF][START_REF] Liu | Convergence of a regularization method for creeping flow of a bingham material about a rigid sphere[END_REF][START_REF] Blackery | Creeping motion of a sphere in tubes filled with a bingham plastic material[END_REF][START_REF] Beaulne | Creeping motion of a sphere in tubes filled with herschel-bulkley fluids[END_REF]. However, practically speaking, both the guar gum solution and the Carbopol gel are polymer-based solutions with slight elasticity, and consequently, they are not ideal visco-plastic fluids. Elastic effects are thus responsible for the fore-aft asymmetry observed in Figs. 9c and 9d. For viscoelastic fluid flows, uniqueness and nonlinearity are present but symmetry and reversibility are missing. We should mention that by adopting an appropriate pre-shear procedure in our experiments (described in Section 2.4), we eliminated possible effects due to the shear history.
Despite the loss of fore-aft symmetry, which is evident in Figs. 9c and 9d, we note that the velocity disturbance field is symmetric with respect to the center of the particle (symmetric with respect to a point). This is indeed expected. Assume two fluid elements are moving towards the particle, located at the top left and bottom right of the flow field but at the same vertical distance from the particle. Both fluid elements experience the same shear history during their motion (e.g., compression, extension, rotation), resulting in a flow field that is symmetric with respect to the center of the particle. The Deborah number is calculated based on the values of the shear storage modulus measured at the frequency ω = 1 rad/s with low strain amplitude, γ₀ = 0.25% (see Table 1). In the experiment with the Carbopol gel, YS1 (Fig. 9c), the Deborah number is De = 0.15, while it is De = 1.03 in the case of the guar gum solution, ST (Fig. 9d).
Although the Deborah number is relatively small in our experiments, it clearly affects the flow field around the particles. This is consistent with the results of Fraggedakis et al. [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF], who observed that slight elasticity in a yield stress fluid has a significant effect in establishing the flow field around a single particle settling in a stationary column of a yield stress fluid. Despite the smaller value of the De number for the Carbopol gel compared to the guar gum solution, we see that the fore-aft asymmetry is larger. This may be due to the interplay between plastic and elastic effects in the Carbopol gel, which is an elastoviscoplastic material. Further investigation is required to reveal the role of plastic and elastic effects, individually and mutually, in establishing the flow field over a wide range of Bingham and Deborah numbers. This can be explored via a computational study, since practical limitations exist in tackling this problem experimentally. For example, it is not possible to change the Deborah number in our experiments independently of other parameters such as the Bingham number. Also, it is not feasible to increase the Deborah number significantly with conventional yield stress fluids such as Carbopol gels.
The variation of the disturbance velocity around one particle at fixed distances from the particle center (fixed r) is illustrated in Figs. 11a and 11b. It shows more clearly the fore-aft asymmetry in the Carbopol gel compared to that of the Newtonian fluid. The velocity is normalized with its maximum value at each distance, u_c,r, in Figs. 11a and 11b.
The disturbance field shows how the regions around a particle are affected by its presence. Where the disturbance velocity is zero or very small, the region lies outside the zone influenced by the particle. Studying the disturbance fields around one particle is thus essential to predict the interaction of two particles, and consequently, the bulk behavior of dilute suspensions. The extent of the disturbance is better seen on the velocity profiles. Figs. 11c and 11d show the variation of the disturbance velocity around one particle along different directions (fixed θ), normalized with the maximum disturbance velocity along each direction, u_c,θ. It is evident that the disturbance velocity decays more rapidly in the case of the yield stress fluid and the shear thinning fluid.
The maximum decay occurs in the flow of Carbopol gel around one particle. This means two particles will feel each other at a farther distance in a Newtonian fluid than in a generalized Newtonian fluid.
Interaction of two particles in a linear shear flow
In this section we study experimentally the interaction of two spherical PMMA particles in a linear shear flow of Newtonian, yield stress and shear thinning fluids. First, we compare our experimental results for the case of a Newtonian suspending fluid with the existing models [START_REF] Cunha | Shear-induced dispersion in a dilute suspension of rough spheres[END_REF] and analytical solutions [START_REF] Batchelor | The hydrodynamic interaction of two small freely-moving spheres in a linear flow field[END_REF] describing the relative motion of two particles in a linear shear flow without inertia. We then proceed to study the non-Newtonian effects on the interaction of particles in a linear shear flow.
Interaction of two particles in a linear shear flow of a Newtonian fluid: theory and experiment
Fig. 12 shows the schematic of a particle trajectory around a reference particle in a linear shear flow. Depending on the initial offset, y₀/a, the particles follow different trajectories. If the initial offset is small enough, the two particles collide and separate further apart on the recession side (symmetry is broken). However, if the initial offset is large enough that they do not make contact, the corresponding trajectory is expected to be symmetric due to the symmetry of the Stokes equations. It is worth mentioning that in the case of smooth particles with no surface roughness, contact is not possible due to the divergence of lubrication forces. However, in practice contact occurs due to the unavoidable roughness of the particle surfaces. For more details see the theoretical [START_REF] Cunha | Shear-induced dispersion in a dilute suspension of rough spheres[END_REF][START_REF] Batchelor | The hydrodynamic interaction of two small freely-moving spheres in a linear flow field[END_REF] and experimental works [START_REF] Darabaner | Particle motions in sheared suspensions xxii: Interactions of rigid spheres (experimental)[END_REF][START_REF] Blanc | Kinetics of owing dispersions. 9. doublets of rigid spheres (experimental)[END_REF][START_REF] Rampall | The influence of surface roughness on the particle-pair distribution function of dilute suspensions of noncolloidal spheres in simple shear flow[END_REF][START_REF] Blanc | Experimental signature of the pair trajectories of rough spheres in the shear-induced microstructure in noncolloidal suspensions[END_REF].
The interaction of two particles can be described at different ranges of separation by accurate hydrodynamic functions based on the works of Batchelor [START_REF] Batchelor | The hydrodynamic interaction of two small freely-moving spheres in a linear flow field[END_REF] and Da Cunha [START_REF] Cunha | Shear-induced dispersion in a dilute suspension of rough spheres[END_REF]. It is assumed that inertial and Brownian effects are negligible and that the particles are neutrally buoyant and spherical. The appropriate set of hydrodynamic functions must be chosen according to the separation of the two particles, r, and the roughness, ε. Using the aforementioned hydrodynamic functions, we calculated the relative trajectories of two particles via a 4th-order Runge-Kutta scheme to march in time. The results are plotted in Fig. 13a. The trajectories fall into two categories, asymmetric or symmetric, depending on whether or not a contact occurs.
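A sketch of this trajectory integration is given below. It is only illustrative: the mobility functions A(s) and B(s) are represented by their leading-order far-field forms for equal spheres, whereas the trajectories of Fig. 13a use the full near-field expressions from the cited references, and the effect of roughness is reduced here to a hard cut-off on the separation at s = 2 + ε.

```python
import numpy as np

EPS = 5.5e-4   # dimensionless roughness used as a hard-contact cut-off

def A(s):
    """Leading-order far-field approximation of the radial mobility function
    for equal spheres (full tabulated forms are given in the cited references)."""
    return 5.0 / s**3

def B(s):
    """Leading-order far-field approximation of the transverse mobility function."""
    return 16.0 / (3.0 * s**5)

def relative_velocity(r):
    """Relative velocity of two freely suspended spheres in the shear flow
    u_inf = (y, 0, 0); lengths are scaled by a and times by the inverse shear rate."""
    s = np.linalg.norm(r)
    rhat = r / s
    E = np.array([[0.0, 0.5, 0.0], [0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])   # rate of strain
    W = np.array([[0.0, 0.5, 0.0], [-0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])  # rotation
    Er = E @ r
    radial = rhat * (rhat @ Er)
    return W @ r + Er - A(s) * radial - B(s) * (Er - radial)

def rk4_trajectory(r0, dt=1e-2, n_steps=20000):
    """March the separation vector in time with 4th-order Runge-Kutta,
    enforcing a minimum separation of 2 + EPS to mimic contact of rough spheres."""
    r = np.array(r0, dtype=float)
    path = [r.copy()]
    for _ in range(n_steps):
        k1 = relative_velocity(r)
        k2 = relative_velocity(r + 0.5 * dt * k1)
        k3 = relative_velocity(r + 0.5 * dt * k2)
        k4 = relative_velocity(r + dt * k3)
        r = r + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        s = np.linalg.norm(r)
        if s < 2.0 + EPS:          # crude contact rule for rough spheres
            r *= (2.0 + EPS) / s
        path.append(r.copy())
    return np.array(path)

# Example: a pair starting far upstream with initial offset y0/a = 0.5.
trajectory = rk4_trajectory([-8.0, 0.5, 0.0])
```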
Here, we present our experimental results for two particles suspended in a linear shear flow of a Newtonian fluid. The experimental trajectory map of two particles is shown in Fig. 14. In addition, we have compared the experimental trajectory map with that calculated from the theoretical solutions in Fig. 13b. The best match is achieved by manually setting the roughness to ε_theo = 5.5 × 10⁻⁴ in the model, which is close to the peak value of the roughness, ε_exp = 6 ± 3 × 10⁻⁴, reported by Phong in [START_REF] Pham | Particle dispersion in sheared suspensions: Crucial role of solid-solid contacts[END_REF] for particles from the same batch. We see very good agreement between the theoretical and experimental trajectory maps. The relative trajectories are symmetric with respect to the y axis between the approach and recession sides if the two particles do not come into contact. However, at lower initial offsets, when the particles come into contact due to the unavoidable roughness of their surfaces, the two particles separate further apart on their recession.
Consequently, the particle trajectories are fore-aft asymmetric. It is evident that all trajectories along which the particles come into contact collapse onto each other downstream after separation.
Particles are tracked via PTV while the flow field is simultaneously measured via PIV. We can therefore link the particle trajectories to the information obtained from the flow field. Fig. 15 illustrates a typical example of a trajectory line with its corresponding velocity and local shear rate colormaps at different points along the trajectory for two particles in a linear shear flow of a Newtonian fluid. The second particle approaches the reference particle from x/a < 0. When the particles are far from each other, the distribution of shear rate around them resembles that of a single particle, i.e., the particles do not see each other. The particles interact as they approach, and the shear rate distribution and velocity field around them change correspondingly. After they come into contact, they appear to lock together and rotate like a single body (between points B and D in Fig. 15) before separating from each other. Shear rate fields are normalized by the far-field shear rate.
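For reference, local shear rate maps of this kind can be built from the measured velocity field as the second invariant of the rate-of-strain tensor. The sketch below assumes the PIV velocities are stored on a regular grid with rows along the velocity gradient (y) direction; it is an illustration rather than the exact processing pipeline used here.

# Illustrative sketch (Python): normalized local shear-rate map from a gridded
# 2-D PIV velocity field (u, v), with grid spacings dx and dy.
import numpy as np

def normalized_shear_rate(u, v, dx, dy, gamma_dot_applied):
    dudy, dudx = np.gradient(u, dy, dx)   # axis 0 assumed to be y, axis 1 to be x
    dvdy, dvdx = np.gradient(v, dy, dx)
    Exx, Eyy = dudx, dvdy                 # rate-of-strain components
    Exy = 0.5 * (dudy + dvdx)
    gamma_local = np.sqrt(2.0 * (Exx**2 + Eyy**2 + 2.0 * Exy**2))
    return gamma_local / gamma_dot_applied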
3.3.2. Interaction of two particles in a linear shear flow of a yield stress fluid: experiment

In this section we present our experimental results on the interaction of two PMMA spherical particles in a linear shear flow of Carbopol gel, which is a yield stress fluid (see Sections 2.3.2 and 2.4). In this case, no theoretical solution exists owing to the nonlinearity of the governing equations of motion, even in the absence of inertia. While the majority of experimental and numerical studies have focused on the settling of particles in yield stress fluids, to our knowledge no simulation or experimental work on the interaction of two particles in a linear shear flow of a yield stress fluid has been reported. However, a two-dimensional numerical study of the interaction of particle pairs in an ideal Bingham fluid is under review at the time of writing; our experimental results are compared qualitatively with those simulations where relevant [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF].
In the absence of inertia, knowledge of the roughness and the initial offset is sufficient to predict the interaction, and consequently the relative trajectory, of two particles in a Newtonian fluid. However, more parameters influence the interaction of two particles in a yield stress fluid. In particular, we expect the value of the Bingham number to strongly affect the relative motion of the two particles.
Moreover, viscoelastic effects are not always negligible when dealing with non-ideal yield stress fluids, and their contribution must be evaluated (see [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF][START_REF] Fraggedakis | Yielding the yield stress analysis: A thorough comparison of recently proposed elasto-visco-plastic (evp) fluid models[END_REF]). Given the range of Deborah numbers in our experiments, De ∈ [0.04, 1.3], we believe that viscoelastic effects can play an important role, which is consistent with [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF].
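For the record, the dimensionless numbers quoted throughout this section can be recovered from the Herschel-Bulkley parameters (τ_y, K, n) and the elastic modulus G′ of Table 1. The short sketch below assumes B = τ_y/(K γ̇ⁿ) and a relaxation time λ = K γ̇ⁿ⁻¹/G′ for the Deborah number; these choices reproduce the (B, De) pairs quoted in the text, but the authors' exact definitions may differ.

# Illustrative sketch (Python): Bingham and Deborah numbers from the Table 1
# parameters (definitions assumed as stated above).
def bingham_number(tau_y, K, n, gamma_dot):
    return tau_y / (K * gamma_dot**n)

def deborah_number(K, n, G_prime, gamma_dot):
    return K * gamma_dot**n / G_prime   # lambda * gamma_dot with lambda = K*gamma_dot**(n-1)/G'

# Carbopol gel YS1: tau_y = 3.3 Pa, K = 4.6 Pa.s^n, n = 0.50, G' = 17.9 Pa
print(bingham_number(3.3, 4.6, 0.50, 0.34))    # ~1.2
print(deborah_number(4.6, 0.50, 17.9, 0.34))   # ~0.15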
In addition, the shear history affects the interaction of two particles because of strain hardening in the non-ideal yield stress test fluids. As discussed earlier in Sec. 2.4, a sample of Carbopol gel undergoes different transient flow states depending on the applied shear history. Our results show that when the material is pre-sheared in the negative direction, the trajectories experience a relatively longer transient regime (results not included). This is consistent with our results in Fig. 3, which suggest that the material reaches a steady state at larger strains under negative pre-shear. In the course of this study, we apply the same shear history in all of the experiments by adopting the positive pre-shear procedure, in order to avoid strain hardening and to remain as close as possible to a model plastic behavior. We should mention, however, that the dimensions of our Couette-cell are large enough to apply a sufficient amount of pre-strain to reach a steady state regardless of the shear history. While shearing the material, we study the interaction of the particles and the flow field by performing PTV and PIV, respectively. Fig. 16 shows the trajectory map of the particles in a Carbopol gel at γ̇ = 0.34 s⁻¹ (B = 1.23, De = 0.15).
Two features are evident. First, fore-aft asymmetry exists for all trajectories, including those with no particle collision. When the initial offset is large enough that there is no contact, the particles experience a negative drift along the y-direction after passing each other (i.e., y_f − y_0 < 0). We think that this pattern can be attributed to the elasticity of the test fluid, since no such behavior is observed in simulations where the fluid is considered ideally viscoplastic (e.g., the Bingham model) [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF]. Second, for trajectories with small initial offsets, the second particle moves downward along the velocity gradient direction on the approach side while it moves upward on the recession side. The same pattern is observed in the simulations of Fahs et al. [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF] for yield stress fluids as well as Newtonian fluids. These local minima in the trajectories disappear in their results for the Newtonian fluid when the domain size is increased from 24a × 12a to 96a × 48a. For the yield stress fluid (with B = 10), however, this pattern only disappears at a larger domain size, 192a × 96a. Hence, this behavior might be due to the interplay of wall effects and non-Newtonian rheology.
Fig. 17 shows the trajectories of two particles in a Carbopol gel at two different Bingham numbers, starting from approximately equal initial offsets. As expected, the particle trajectories strongly depend on the Bingham number. As the Bingham number increases, the second particle approaches the reference particle to a closer distance and separates with a larger upward drift. This can be related to the stronger decay of the disturbance velocity around a single particle at larger Bingham numbers (see Sec. 3.2). This feature, which has also been observed in the simulations of Fahs et al. [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF], implies a larger asymmetry in the PDF, and consequently larger normal stress differences in yield stress suspensions as the Bingham number increases.
Fig. 18 shows a typical example of a trajectory line with its corresponding velocity and local shear rate colormaps at different points along the trajectory for two particles in a linear shear flow of a yield stress fluid. Shear rate fields are normalized by the applied shear rate at the belt. The second particle approaches the reference particle from x/a < 0. We see that the particles interact as they approach, and the shear rate distribution and velocity field around them change (see the colormaps associated with point A, Figs. 18b and c). After they come into contact, they appear to lock together and rotate like a single body (between points B and C in Fig. 18). They subsequently separate on the recession side.
3.3.3. Interaction of two particles in a linear shear flow of a shear thinning fluid: experiment

A Carbopol gel exhibits both yield stress and shear thinning effects. In order to investigate the effect of each non-Newtonian behavior individually, we perform similar experiments with a shear thinning test fluid without a yield stress. We use a Hydroxypropyl Guar solution, which is transparent and exhibits negligible thixotropy at low concentrations (see Sections 2.3 and 2.4).
The relative trajectory map of two particles in a linear shear flow of the guar gum solution, ST, is illustrated in Fig. 19. Unlike for yield stress suspending fluids, the trajectories do not exhibit downward and upward motions in the approach and recession zones, respectively. A slight asymmetry exists when the particles do not come into contact, but it is much smaller than that observed with yield stress suspending fluids. When contact occurs, the trajectories are all asymmetric.
Fig. 20 illustrates a sample trajectory with its corresponding velocity and shear rate fields at different points along the trajectory line for two particles in the guar gum solution ST (see Table 1). The second particle approaches the reference particle from x/a < 0. Shear rate fields are normalized by the applied shear rate at the belt.
Particle trajectories versus streamlines
As mentioned earlier in Section 3.2.2, the disturbance velocity decays more rapidly in the non-Newtonian fluids considered in this study. In other words, the influence zone around a single particle is smaller for yield stress and shear thinning fluids than for the Newtonian fluid. In Fig. 21 we compare the trajectories of two particles subjected to a shear flow with the streamlines around a single particle (obtained from the experimental velocity field). The two overlap down to closer distances in the Carbopol gel and the guar gum solution.
The streamlines around one particle can be viewed as the limiting form of the pair trajectories when the two particles are far apart, or when one particle is much smaller than the other. The discrepancy between the fluid-element streamlines and the particle trajectories is related to lubrication and to the contact of the particles. Fig. 21 shows that this discrepancy is minimal when the initial offset is large, i.e., when the pairwise interaction does not occur. Further computational and theoretical investigations are needed to build trajectory maps of particle pairs in complex fluids from the flow field around a single particle.
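A natural starting point for such a construction is to integrate streamlines directly from the measured single-particle velocity field and compare them with the pair trajectories, as done in Fig. 21. The sketch below assumes the PIV field is available on a regular grid and simply advects a seed point through the interpolated field; it is meant as an illustration of the comparison, not as the exact routine used in this work.

# Illustrative sketch (Python): streamline integration through a gridded velocity
# field u(x, y), v(x, y) measured around a single particle.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def streamline(x_grid, y_grid, u, v, seed, ds=0.01, n_steps=5000):
    """u and v are 2-D arrays of shape (len(y_grid), len(x_grid)); seed = (x0, y0)."""
    fu = RegularGridInterpolator((y_grid, x_grid), u, bounds_error=False, fill_value=np.nan)
    fv = RegularGridInterpolator((y_grid, x_grid), v, bounds_error=False, fill_value=np.nan)
    points = [np.array(seed, dtype=float)]
    for _ in range(n_steps):
        x, y = points[-1]
        vel = np.array([fu([(y, x)])[0], fv([(y, x)])[0]])
        speed = np.linalg.norm(vel)
        if not np.isfinite(speed) or speed == 0.0:
            break                                        # left the measured field of view
        points.append(points[-1] + ds * vel / speed)     # arc-length (Euler) step
    return np.array(points)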
Discussion and conclusions
In this work, we have developed an accurate experimental technique to study the interaction of two spherical particles in linear shear flows of Newtonian, yield stress and shear thinning fluids. We have made use of PIV and PTV techniques to measure the velocity fields and particle trajectories respectively. Rheometry is employed in order to characterize the behavior of our test fluids.
We showed in Section 3.1 that a linear velocity profile can be established in our Newtonian and non-Newtonian test fluids. In addition, for yield stress fluids, we observed that the stress inhomogeneity (naturally present due to any imperfection in the set-up or the test fluid) is amplified into a larger shear rate inhomogeneity as the Bingham number increases. By restricting the range of Bingham numbers to B ≤ 2, we managed to eliminate this effect and achieve a linear shear flow in the Couette device.
Next, we studied the flow around one particle subjected to a linear shear flow. Our results are in very close agreement with the theoretical solution for a Newtonian suspending fluid. Moreover, the length scale over which the disturbance velocity varies is significantly smaller in yield stress fluids than in Newtonian fluids. This affects the interaction of two particles, and consequently the bulk rheology of suspensions of noncolloidal particles in shear thinning and yield stress fluids.
We provided the first direct experimental measurement of the flow disturbance around a sphere in a yield stress fluid. This can serve as a benchmark for simulations of suspensions of noncolloidal particles in yield stress fluids. Our study shows that Carbopol gel exhibits significant viscoelastic behavior, which affects the particle interactions. We observed that even the disturbance field around a single particle in a shear flow cannot be explained without considering viscoelastic effects. Hence, employing elastoviscoplastic (EVP) constitutive models [START_REF] Saramito | A new constitutive equation for elastoviscoplastic fluid flows[END_REF] [47] is necessary when accurate simulations are considered [START_REF] Fraggedakis | Yielding the yieldstress analysis: a study focused on the effects of elasticity on the settling of a single spherical particle in simple yield-stress fluids[END_REF]. Due to experimental limits, further theoretical and computational studies are required to characterize the contributions of elastic and plastic effects in establishing the flow field around a single particle.
In the next step, we studied the interaction of a pair of neutrally buoyant particles in linear shear flows of Newtonian, yield stress and shear thinning fluids. In the case of a Newtonian suspending fluid, we observed very close agreement between our measurements and the available theoretical solution, which demonstrates the merit of our experimental method. Subsequently, the same method was employed to study the problem with yield stress and shear thinning suspending fluids, for which no theoretical solutions are available. As is evident in Fig. 22, fore-aft asymmetry is enhanced for trajectories of particles in yield stress fluids (also observed in the simulations of Fahs et al. [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF]) and shear thinning fluids. A slight asymmetry is observed even in trajectories with no collision. These observations imply greater asymmetry in the PDF and stronger normal stress differences in yield stress suspensions.
It is worth noting that for yield stress suspending fluids, even in the absence of inertia, the interaction of the particles depends on various parameters such as the Bingham number, the Deborah number, the shear history, the initial offset and the roughness. Hence, obtaining the entire trajectory space is not experimentally feasible for yield stress fluids. Nevertheless, the overall trends and patterns can be understood from a limited number of systematic measurements; the effect of the different parameters on the interaction of the particles is investigated in this study.
As mentioned in Section 3.2.2, in the guar gum solution and the Carbopol gel, variations along the trajectory lines are confined to a closer neighborhood of the particle. We can link this observation to the variation of the disturbance velocity field around one particle in yield stress fluids, where the length scale of the decay is smaller than that in Newtonian suspending fluids (see Figs. 9, 11c and 11d). This feature has also been observed in the numerical simulations of Fahs et al. [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF]. It means that two particles feel each other's presence only at closer distances, and when they do, the interactions are more severe. One can conclude that short-range interactions are more important when dealing with yield stress suspending fluids. Due to the limited resolution of the experimental measurements close to the particles, especially when they are touching or very close (separations of the order of the size of the interrogation window), accurate simulations with realistic constitutive models are required to understand and characterize the short-range hydrodynamic interactions, particularly the lubrication forces.
Another distinct feature observed during the motion of two particles in a yield stress fluid is the downward and upward motion of the second particle along the velocity gradient direction during approach and recession, respectively. This phenomenon could affect the microstructure, and consequently the PDF, of yield stress suspensions. The same pattern has been observed experimentally for shear thinning suspending fluids in [START_REF] Snijkers | Hydrodynamic interactions between two equally sized spheres in viscoelastic fluids in shear flow[END_REF]. Similar behavior is also observed for both Newtonian and yield stress fluids in the simulations of Fahs et al. [START_REF] Fahs | Pair-particle trajectories in a shear flow of a bingham fluid[END_REF]. By increasing the gap size, w/a, the downward and upward motion disappeared in their results for the Newtonian fluid; for the yield stress fluid, however, it only disappeared at larger gap sizes. We have not observed this feature for two particles in the shear thinning fluid in the course of this project, perhaps because it is present only at initial offsets smaller than the range covered in our experiments. Confinement effects might be responsible for this behavior, and their extent could be amplified in the presence of yield stress fluids. Further investigations are needed to properly understand the underlying mechanisms.
Figure 1: Schematic of the planar Couette-cell and the imaging system: the left shaft is driven by a precision rotation stage while the right shaft rotates freely. Walls are made from transparent acrylic, which allows the laser to illuminate the flow field (styled after Fig. 1 of [74]).

Figure 2: Stress versus shear rate for a cycle of logarithmic shear rate ramps applied on samples of YS1 (a) and ST (b): increasing ramps, decreasing ramps and the corresponding Herschel-Bulkley fits described in Table 1. The inset of (b) presents the variation of viscosity versus shear rate for ST.

Figure 3: Normalized stress versus strain for samples of the yield stress and shear thinning test fluids under a constant shear rate with different shear histories: (a) YS1 at γ̇ = 0.129 s⁻¹ (B, De) = (2.0, 0.09), (b) ST at γ̇ = 0.26 s⁻¹, De = 1.03. Triangle markers represent negative pre-shear while square markers indicate positive pre-shear.

Figure 4: Elastic and viscous moduli with respect to strain amplitude in strain amplitude sweep tests with an angular frequency of 1 rad s⁻¹ on samples of YS1 (a) and ST (b). G′ starts to decrease beyond a critical strain below which it is nearly constant.

Figure 5: Dynamic moduli, G′ (left column) and G″ (right column), for samples of YS1 (first row) and ST (second row) during frequency sweeps from 0.1 to 100 rad s⁻¹. Different markers correspond to different strain amplitudes, γ_0 = 1, 5, 20, 50, 100%.

Figure 6: (a) Velocity profiles averaged along the x-direction for the Newtonian fluid when subjected to different shear rates of γ̇ = 0.18, 0.26, 0.35, 0.44, 0.52, 0.61, 0.70, 0.79 s⁻¹. (b)

Figure 7: (a) Normalized velocity profiles across the gap when YS2 undergoes shear flows at different Bingham numbers, (B, De) = (4.6, 0.05), (3.2, 0.07), (2.3, 0.10), (2.2, 0.10) and (2.0, 0.11), compared to that of the Newtonian fluid, NWT. (b) The corresponding dimensionless shear rate profiles and (c) stress profiles.

Figure 8: (a) Normalized velocity field obtained from the theoretical solution for a Newtonian fluid. (b) Normalized velocity field for the Newtonian fluid NWT measured via PIV at γ̇ = 0.27 s⁻¹. (c) Schematic of the particle and locations where velocity profiles are compared with the theory. (d-e) Comparison between velocity profiles obtained from theory and experimental measurements at different locations.

Figure 9: Normalized disturbance velocity fields around one particle in the shear flow of different fluids: (a) theoretical solution for a Newtonian fluid, (b) experimental results for a Newtonian fluid at γ̇ = 0.27 s⁻¹, (c) experimental results for the Carbopol gel, YS1, at γ̇ = 0.34 s⁻¹ (B = 1.23, De = 0.15), (d) experimental results for the guar gum solution, ST, at γ̇ = 0.26 s⁻¹ (De = 1.03).

Figure 10: Normalized shear rate fields around one particle in the shear flow of different fluids: (a) theoretical solution for a Newtonian fluid, (b) experimental results for a Newtonian fluid at γ̇ = 0.27 s⁻¹, (c) experimental results for the Carbopol gel, YS1, at γ̇ = 0.34 s⁻¹ (B = 1.23, De = 0.15), (d) experimental results for the guar gum solution, ST, at γ̇ = 0.26 s⁻¹ (De = 1.03).

Figure 11: Variation of the disturbance velocity at fixed distances ((a) r/a = 1.8, (b) r/a = 2.3) around one particle in different test fluids: NWT at γ̇ = 0.27 s⁻¹, YS1 at γ̇ = 0.34 s⁻¹ (B = 1.23, De = 0.15), and ST at γ̇ = 0.26 s⁻¹ (De = 1.03). Variation of the disturbance velocity along different directions in different test fluids: (c) θ = 45°, (d) θ = 135°.

Figure 12: Schematic of two particles subjected to a shear flow and the general shapes of their trajectory: a) trajectory when the two particles pass each other with no collision; b) trajectory when the two particles collide.

Figure 13: a) Relative trajectory map calculated via Da Cunha's model [20], ε = 5.5 × 10⁻⁴. b) Relative trajectories obtained from the theoretical solution compared with those measured from the experiment (dashed colored lines) with the same initial offsets (y/a with x < 0).

Figure 14: Trajectory map of two particles in a linear shear flow of the Newtonian fluid. The reference particle is located at the origin and the second particle is initially at x/a < 0.

Figure 15: (a) Trajectory line of two particles in the Newtonian fluid subjected to a shear rate of γ̇ = 0.27 s⁻¹. (b-k) The left column shows the velocity fields at different points marked along the trajectory line (A to E) while the right column shows the corresponding normalized shear rate fields.

Figure 16: Trajectory map of two particles in a shear flow of the Carbopol gel YS1 at γ̇ = 0.34 s⁻¹ (B = 1.23, De = 0.15).

Figure 17: Relative trajectories of two particles in the Carbopol gel YS1 with similar initial offsets at two different Bingham numbers. The dashed line corresponds to γ̇ = 1.70 s⁻¹.

Figure 18: (a) Trajectory line of two particles in the Carbopol gel YS1 at γ̇ = 0.34 s⁻¹ (B = 1.23, De = 0.15). (b-k) The left column shows the velocity fields at different points marked on the trajectory line (A to E) while the right column shows the corresponding normalized shear rate fields.

Figure 19: Trajectory map of two particles subjected to a shear flow of ST at (γ̇, De) = (0.26 s⁻¹, 1.03).

Figure 20: (a) Trajectory line of two particles in the guar gum solution ST at γ̇ = 0.26 s⁻¹ (De = 1.03). (b-k) The left column shows the velocity fields at different points marked on the trajectory line (A to E) while the right column shows the corresponding normalized shear rate fields.

Figure 21: Two-particle trajectories (solid lines) compared with the streamlines around one particle (dashed lines) in shear flows of different fluids: (a) NWT at γ̇ = 0.27 s⁻¹, (b) YS1.

Figure 22: a-d) Relative trajectories of two particles in shear flows of different test fluids with similar initial offsets: y_0/a = 0.63 (a), 0.75 (b), 1.05 (c), 2.12 (d). Test fluids include NWT at γ̇ = 0.27 s⁻¹, YS1 at γ̇ = 0.34 s⁻¹ (B = 1.23, De = 0.15), and ST at γ̇ = 0.26 s⁻¹ (De = 1.03).
Table 1: Composition, pH and rheological properties of the test fluids used in this study: NWT (Newtonian fluid), YS1-2 (yield stress fluids) and ST (shear thinning fluid). The dynamic moduli G′ and G″ are measured at ω = 1 rad s⁻¹, γ_0 = 0.25%.

Materials (% wt.)     ST        YS1       YS2       NWT
Water                 71.764    71.969    75.004    9.31
Glycerol              27.707    27.876    24.766    -
Carbopol 980          -         0.116     0.170     -
Jaguar HP-105         0.529     -         -         -
Sodium hydroxide      -         0.039     0.060     -
Triton X-100          -         -         -         76.20
Zinc chloride         -         -         -         14.35
Hydrochloric acid     -         -         -         0.14
pH                    -         7.40      7.44      -
τ_y (Pa)              0         3.3       46.6      0
K (Pa s^n)            6.7       4.6       18.7      4.6
n                     0.46      0.50      0.30      1
G′ (Pa)               3.5       17.9      213.5     -
G″ (Pa)               3.5       3.3       18.9      -
Acknowledgments
This research was supported by the National Science Foundation (Grant No. CBET-1554044-CAREER) via the research award (S.H.).
01768670 | en | ["spi"] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01768670/file/Batchelor_JFM_hal.pdf | Mathieu Souzy, Imen Zaier, Henri Lhuissier, Tanguy Le Borgne, Bloen Metzger

Mixing lamellae in a shear flow
Introduction
Mixing is a key process in many industrial applications and natural phenomena [START_REF] Ottino | The Kinematics of Mixing: Stretching, Chaos, and Transport[END_REF]. Practical examples include glass manufacture, processing of food, micro-fluidic manipulations, and contaminant transport in the atmosphere, oceans and hydrological systems. During the last decades, substantial progresses have been made in the description of mixing in systems as complex as turbulent flows [START_REF] Warhaft | Passive scalars in turbulent flows[END_REF][START_REF] Shraiman | Scalar turbulence[END_REF][START_REF] Falkovich | Particles and fields in fluid turbulence[END_REF], Duplat & Villermaux 2008[START_REF] Kalda | Simple model of intermittent passive scalar turbulence[END_REF], oceanic and atmospheric flows [START_REF] Rhines | How rapidly is a passive scalar mixed within closed streamlines?[END_REF], the earth mantle [START_REF] Allègre | Implications of a two-component marble-cake mantle[END_REF], porous media [START_REF] Dentz | Mixing, spreading and reaction in heterogeneous media: A brief review[END_REF][START_REF] Villermaux | Mixing by porous media[END_REF][START_REF] Borgne | The lamellar description of mixing in porous media[END_REF] and sheared particulate suspensions [START_REF] Souzy | Stretching and mixing in sheared particulate suspensions[END_REF]. In particular, the conceptualization of scalar mixtures as ensembles of 'lamellae' evolving through stretching, diffusion and aggregation [START_REF] Ranz | Applications of a stretch model diffusion, and reaction in laminar and turbulent flows[END_REF], Villermaux & Duplat 2003) has allowed deriving quantitative accurate theoretical predictions for the evolution of the full concentration Probability Density Functions (PDF) for a broad range of flows. Within this framework (valid for lamellae thinner than the smallest scale within the flow), mixing is driven by three processes occurring simultaneously: i) stretching a scalar field by a given flow creating elongated lamellar structures, ii) diffusion and compression that compete to define concentration gradients and iii) lamella coalescence leading ultimately to the final homogeneity of the system (Duplat & Villermaux 2008).
In complex configurations such as those listed above, the evolution of the concentration distribution is sensitive to a number of characteristics of the system: the distribution of stretching rates, the macroscopic dispersion rate, the scalar molecular diffusion coefficient and the rate at which lamellae aggregate. While their influence on the dynamics of mixing is clear from a theoretical point of view, they can rarely be observed independently. Moreover, in spite of the numerous experimental studies on mixing, the question of the spatial resolution required to quantify the evolution of a concentration field has received little attention. This latter point is obviously crucial since under-resolved images, by artificially broadening the concentration distribution of isolated lamellae or, conversely, by sharpening the concentration distribution of bundles of adjacent lamellae, lead to an erroneous estimate of the mixing rate.
Here, we present highly resolved experiments where the basic mechanisms governing mixing are quantified individually: the formation of elongated lamellae by stretching, the establishment of a Batchelor scale by competition between compression and diffusion, the enhancement of diffusive mixing by stretching and the diffusive aggregation of lamellae. We consider for this a lamella formed by photo-bleaching of a fluorescent dye in a laminar shear flow where the concentration distribution can be quantified using a well controlled experimental set-up, built specifically to resolve small length scales. This benchmark experiment was chosen precisely because it is well established theoretically. Unambiguous conclusions can therefore be drawn regarding the spatial resolution required to capture the evolution of the concentration distribution of an isolated lamella, and more generally within any mixing protocols. Moreover, it illustrates with an unprecedented resolution and level of details the basic lamellar theories for mixing at the scale of a single lamella. Last, we investigate the coalescence of two nearby lamellae specifically focusing on its impact on the evolution of the concentration distribution. The theoretical prerequisites are recalled in § 2. After presenting the experimental set-up in § 3, the measurements are reported in § 4. Conclusions are drawn in § 5.
Mixing in a laminar shear flow
General picture
We consider a lamella of dye of length l_0 and half width s_0 initially positioned perpendicular to the flow, as illustrated in Figure 1.a. As the lamella is advected by a laminar shear flow with shear rate γ̇, its length increases as l(t) = l_0 √(1 + (γ̇t)²). Thus, considering the sole effect of the advection field (for now neglecting that of molecular diffusion), the half width of the lamella s_A(t) decreases following s_A(t) = s_0/√(1 + (γ̇t)²) (see Figure 1.b), since in this two-dimensional flow mass conservation prescribes s_0 l_0 = s_A(t) l(t). The effect of the shear flow can thus be quantified by a compression rate −(1/s_A) ds_A/dt, which describes how fast the lamella transverse dimension thins down owing to its stretching by the advection field. Conversely, molecular diffusion tends to broaden the lamella with a diffusive broadening rate D_0/s_A², where D_0 is the molecular diffusion coefficient of the scalar. Balancing these two rates, −(1/s_A) ds_A/dt ∼ D_0/s_A², and assuming γ̇t ≫ 1, naturally defines a time scale, also called the Batchelor time, t_B ∼ Pe^(1/3)/γ̇, where Pe = γ̇ s_0²/D_0 denotes the Péclet number. This time corresponds to the onset of the homogenization of the concentration levels within the system, beyond which the concentration levels within the lamella start to significantly decay.
Exact solution
The complete description of the evolution of the lamella of dye can be found by directly solving the full advection-diffusion equation
Figure 1. a) Schematic of a lamella of dye of initial length l0 and half width s0 advected in a laminar shear flow. b) Effect of the advection field alone: the strain γt has stretched the lamella and thinned down its transverse dimension to 2sA(t). c) Effect of both advection and molecular diffusion: at the same strain, the half width of the lamella is denoted sAD(t). Inset: schematic of the gaussian concentration field of the lamella with its concentration profiles along the flow C(x, t) and transverse to the lamella C(n, t).
∂C/∂t + u·∇C = D_0 ∇²C,    (2.1)
where u = (γ̇y, 0, 0) denotes the advection field and C the concentration field. This equation can be simplified [START_REF] Ranz | Applications of a stretch model diffusion, and reaction in laminar and turbulent flows[END_REF][START_REF] Rhines | How rapidly is a passive scalar mixed within closed streamlines?[END_REF][START_REF] Ottino | The Kinematics of Mixing: Stretching, Chaos, and Transport[END_REF][START_REF] Meunier | How vortices mix[END_REF] if written in a moving frame (n, z) aligned with the directions of maximal compression and stretching of the lamellae (see Figure 1.c), and by using Ranz's transform [START_REF] Ranz | Applications of a stretch model diffusion, and reaction in laminar and turbulent flows[END_REF], which uses warped time units, τ = ∫₀ᵗ D_0/s_A²(t′) dt′, and distances normalized by the lamella transverse dimension, ξ = n/s_A(t). Equation (2.1) then reduces to a simple diffusion equation
∂C/∂τ = ∂²C/∂ξ²,    (2.2)
whose solution for an initial Gaussian concentration profile with maximum C_0 is

C(ξ, τ) = C_0/√(1 + 4τ) exp(−ξ²/(1 + 4τ)).    (2.3)
The maximum concentration of the lamella thus decays with time according to
C_max(t) = C_0/√(1 + 4τ).    (2.4)
Since the half width of the lamella in Ranz's units is σ_ξ = (1 + 4τ)^(1/2) (see equation 2.3), the transverse dimension of the lamella s_AD(t), accounting for both the effect of advection and diffusion, and expressed in standard units of length, is

s_AD(t) = σ_ξ s_A(t) = s_0 (1 + 4τ)^(1/2)/√(1 + (γ̇t)²).    (2.5)
The latter expression gives access to the half width of the lamella along the flow (x-direction)
σ_x(t) = s_AD(t)/cos θ = σ_0 (1 + 4τ)^(1/2),    (2.6)

with σ_0 = s_0 and cos θ = 1/√(1 + (γ̇t)²) (see Figure 1).
We now have access to a more accurate estimation of the Batchelor time, which by definition is reached when τ = 1 [START_REF] Ranz | Applications of a stretch model diffusion, and reaction in laminar and turbulent flows[END_REF]. Since τ = D_0(t + γ̇²t³/3)/s_0², the Batchelor time, for Pe ≫ 1, is

t_B ≈ (3Pe)^(1/3)/γ̇.    (2.7)
At this time, the transverse dimension of the lamella, from now on referred to as the Batchelor scale [START_REF] Batchelor | Small-scale variation of convected quantities like temperature in a turbulent fluid. part 1. general discussion and the case of small conductivity[END_REF], is found to be
s_AD(t_B) = s_0 √5 (3Pe)^(−1/3).    (2.8)
Last, the evolution of the concentration distribution P(C, t) of the lamella can be derived. The change of variables P(C)dC = P(x)dx yields P(C) = P(x)|dC/dx|⁻¹, which can easily be expressed from

C(x, t) = C_max(t) exp(−x²/(σ_0²(1 + 4τ))),    (2.9)
and the uniformity of P (x). Considering the range of concentration C th ≤ C ≤ C max (t), above any arbitrary threshold concentration C th (larger than the experimental background noise), one obtains
P(C, t) = 1/(βC √(ln(C_max/C))),    (2.10)
where β is a normalizing prefactor ensuring that ∫ from C_th to C_max(t) of P(C, t) dC = 1.
The above set of equations fully describes the evolution of a lamella of dye advected in a laminar shear flow.
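The full time dependence of these expressions is easy to evaluate numerically. The sketch below is given only for illustration: it uses the parameter values quoted in § 3 (γ̇ = 0.01 s⁻¹, s_0 = 25 µm, D_0 ≈ 2.66 × 10⁻¹³ m² s⁻¹) to compute the warped time, the decay of C_max, the transverse width s_AD and the distribution of equation (2.10), whose prefactor β is obtained by numerical normalization.

# Illustrative sketch (Python): evaluating the lamella model of Section 2.
import numpy as np

gamma_dot = 0.01       # applied shear rate (1/s)
s0 = 25e-6             # initial half width of the lamella (m)
D0 = 2.66e-13          # molecular diffusivity of the dye (m^2/s)
Pe = gamma_dot * s0**2 / D0

def warped_time(t):
    return D0 * (t + gamma_dot**2 * t**3 / 3.0) / s0**2

def c_max(t):                               # C_max(t)/C_0, equation (2.4)
    return 1.0 / np.sqrt(1.0 + 4.0 * warped_time(t))

def s_AD(t):                                # equation (2.5)
    return s0 * np.sqrt(1.0 + 4.0 * warped_time(t)) / np.sqrt(1.0 + (gamma_dot * t)**2)

t_B = (3.0 * Pe)**(1.0 / 3.0) / gamma_dot   # Batchelor time, equation (2.7)
print(Pe, t_B, s_AD(t_B) / s0, np.sqrt(5.0) * (3.0 * Pe)**(-1.0 / 3.0))

def pdf_concentration(c, t, c_th=0.1):
    """P(C/C_0) for c_th <= C/C_0 <= C_max(t)/C_0, equation (2.10); beta from numerical normalization."""
    cm = c_max(t)
    grid = np.linspace(c_th, cm * (1.0 - 1e-6), 10_000)
    raw = 1.0 / (grid * np.sqrt(np.log(cm / grid)))
    beta = np.trapz(raw, grid)
    return 1.0 / (beta * c * np.sqrt(np.log(cm / c)))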
Experimental set-up
The experimental set-up is shown in Figure 2. It consists of a shear-cell made of two parallel plexiglass plates sandwiching a fluid cell 85 mm long, 25 mm wide and of height h = 3 mm. The fluid is sheared by setting the two plates into opposite motion with two high-resolution translation stages (not shown on the schematic). The travel-range of the plate enables a maximum strain of 30. The cell is sealed on both ends by two PTFE plates (grey in Figure 2), and on the sides by two transparent side walls (not shown on the schematic) mounted on the bottom moving plate.
The fluid is a Newtonian mixture of Triton X-100 (77.4 wt%), zinc chloride (13.4 wt%) and water (9.2 wt%), with density ρ = 1.19 g cm⁻³ and viscosity η = 4.2 Pa s, which ensures laminar flow conditions (over the full range of shear rates investigated, Re = ργ̇h²/η ≤ 10⁻³). Prior to filling the cell, a fluorescent dye (Rhodamine 6G) is thoroughly mixed into the fluid at a concentration of 2 × 10⁻⁶ g mL⁻¹. The molecular diffusion coefficient of this dye was measured in the cell with the fluid at rest using a technique similar to that reported in [START_REF] Souzy | Super-diffusion in sheared suspensions[END_REF]. We find D_0 ≈ 2.66 × 10⁻¹³ m² s⁻¹ at a temperature of 22 °C.
The initial lamella is generated using fluorescence recovery after photo-bleaching [START_REF] Axelrod | Mobility measurement by analysis of fluorescence photobleaching recovery kinetics[END_REF]. A high-power laser-diode beam (Laser Quantum Gem512-2.5W) is shaped into a thin laser sheet using the combination of lenses set in their 'bleaching' configuration, as shown in Figure 2.a. This sheet, oriented in the yz plane (perpendicular to the flow) and used at full intensity, locally changes the conformation of the rhodamine molecules, which irreversibly become unable to fluoresce.
Then, the set-up is switched to its 'visualization' configuration by rotating the cylindrical lens by 90°: this orients the laser sheet along the xy plane (along the flow). The lamella then appears as a thin dark vertical line, see Figure 2.b. Note that the initial lamella is in fact a transverse sheet, rather than just a line. This avoids diffusion in the z-direction and enables us to study a purely 2D process. This photo-bleaching technique, relative to a direct injection of fluorescent dye, also ensures well-controlled initial conditions: the lamella is uniform and its initial concentration profile is Gaussian owing to the Gaussian nature of the impacting laser beam.
Immediately after bleaching, the fluid is sheared at the desired shear rate and images of the evolution of the lamella are acquired using a camera (Basler Ace2000-50gm, 2048×1080 pixels, 12 bit) coupled to a high-resolution macroscope (Leica Z16 APO×1). The set-up was designed to provide an image resolution (0.8 µm/pixel) small enough to resolve the Batchelor scale (equation 2.8). For instance, with γ̇ = 0.01 s⁻¹ and s_0 = 25 µm, we have s_AD(t_B) = s_0 √5 (3γ̇s_0²/D_0)^(−1/3) ≈ 10 µm.
A high-pass filter (590 nm) is positioned between the sample and the camera to eliminate direct light reflections. To avoid photobleaching during the image acquisition, the intensity of the laser is lowered to 100 mW and the image acquisition is synchronized with a shutter that opens only during acquisition times. Note that all experiments are performed at T = 22 ± 0.05 °C by setting the temperature of the water running through the bottom moving plate with a cryo-thermostat.
Experimental results
A single lamella
Figure 3.a shows successive pictures of a lamella undergoing a laminar shear (see also supplementary movie 1). Initially vertical and highly contrasted, the lamella progressively tilts under the effect of the shear flow while blurring under the effect of molecular diffusion. Accurate measurements of the lamella's concentration profile along the flow (x-direction) are obtained by averaging over all horizontal lines of pixels after translating these lines to make their maximum concentration coincide. The resulting average concentration profile of the lamella is shown in Figure 3.b for successive strains: the maximum concentration decays while the width increases. These trends are well captured by fitting each concentration profile with a Gaussian of the form C(x, t) = C_max(t) exp(−x²/σ_x²(t))
(see Figure 3.b).
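As an illustration of this processing step, the profile extraction and Gaussian fit can be written as follows (a sketch only: the image is assumed to be a 2-D array of dye concentration with the lamella roughly vertical, and the function names are ours, not those of the actual analysis code).

# Illustrative sketch (Python): mean concentration profile and Gaussian fit.
import numpy as np
from scipy.optimize import curve_fit

def mean_profile(image):
    """Average the rows after shifting each one so that its maximum coincides."""
    ref = image.shape[1] // 2
    rows = [np.roll(row, ref - np.argmax(row)) for row in image]
    return np.mean(rows, axis=0)

def gaussian(x, c_max, x0, sigma_x):
    return c_max * np.exp(-((x - x0) / sigma_x) ** 2)

def fit_profile(profile, pixel_size):
    x = np.arange(profile.size) * pixel_size
    p0 = [profile.max(), x[np.argmax(profile)], 10 * pixel_size]
    (c_max, x0, sigma_x), _ = curve_fit(gaussian, x, profile, p0=p0)
    return c_max, sigma_x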
The resulting maximum concentration C_max(t) and width σ_x(t) are plotted in Figures 3.c and 3.d versus time for experiments performed at different Péclet numbers (4.5 ≤ Pe ≤ 1190). The Péclet number was varied by repeating the experiment at various shear rates, γ̇ = [6 × 10⁻⁴ – 0.3] s⁻¹. The agreement with equations 2.4 and 2.6 is very good for both C_max(t) and σ_x(t). Note that in both cases γ̇, s_0 and D_0 are fixed by the experimental conditions; there is thus no adjustable parameter. When plotted as a function of the dimensionless time t/t_B, where t_B = (3Pe)^(1/3)/γ̇ is the Batchelor time, these data are found to collapse, for all Pe, on the same master curve, see Figures 3.e and f. For t < t_B, C_max and σ_x remain constant. Then, when the effect of molecular diffusion becomes significant, i.e. for t > t_B, C_max (respectively σ_x) starts to decrease (respectively increase) following the power law t^(−3/2) (respectively t^(3/2)), consistently with the long-time trends of equations 2.4 and 2.6. These measurements clearly illustrate how mixing is accelerated by imposing an external macroscopic shear: larger applied shear rates (larger Péclet numbers) result in earlier mixing times.
We have so far probed the lamella along the direction of the flow. However, further insight into the mixing process, specifically into the advection-diffusion coupling presented above, is provided by probing the lamella width along its transverse direction (along n, see Figure 1). Figure 4.a shows the evolution of s_AD(t) measured experimentally. At intermediate times, the thickness of the lamella is found to decrease like t⁻¹. After reaching a minimum, it increases like t^(1/2). These trends precisely illustrate the expected interplay between advection and diffusion. The lamella width initially decreases as imposed by the kinematics of the flow, following the intermediate-time trend (for t < t_B) of equation 2.5, s_AD(t) ∼ s_0(γ̇t)⁻¹. However, this compression of the lamella progressively steepens its concentration gradients, which beyond the Batchelor time eventually makes the broadening effect of molecular diffusion dominant. The transverse dimension of the lamella then re-increases diffusively like t^(1/2). At the Batchelor time t_B, the lamella typically reaches its minimum thickness, which is equal to the Batchelor scale s_AD(t_B) (within 3%). As shown in Figure 4.b, this direct measurement of the Batchelor scale, obtained for various Péclet numbers, matches the expected prediction s_AD(t_B) = s_0 √5 (3Pe)^(−1/3), see equation 2.8. To fully describe the mixing process, we also measure the evolution of the lamella's concentration distribution P(C, t). The distribution P(C) is obtained from the histogram of intensities collected from all the pixels constituting the lamella which are above a threshold C_th = 0.1 C_0. This discards the background image noise, enabling us to focus on the peak of interest, that of high concentration. Changing the value of the threshold above the background noise changes the extent of the spatial domain over which P(C) is computed, but it does not affect its shape. We obtain concentration distributions which have a characteristic U-shape [START_REF] Ottino | The Kinematics of Mixing: Stretching, Chaos, and Transport[END_REF], see Figure 4.c. The distribution's maximum probability, initially located at C = C_0, progressively drifts to lower values as the lamella is stretched and diffuses. The prediction provided by equation 2.10 is in good agreement with the measured concentration distributions obtained for all Péclet numbers and, again, without adjustable parameters.
The distributions shown above are well resolved since the Batchelor scale is always larger than 10 pixels. However, in most studies on mixing, such a large resolution is not achieved. Moreover, when mixing is investigated in complex systems such as turbulent flows [START_REF] Villermaux | Coarse grained scale of turbulent mixtures Physical review letters[END_REF], porous media (Le [START_REF] Borgne | The lamellar description of mixing in porous media[END_REF] or recently in sheared particulate suspensions [START_REF] Souzy | Stretching and mixing in sheared particulate suspensions[END_REF], the initial lamella of dye does not evolve towards a single lamella with uniform thickness but instead, towards a large number of lamellae having widely distributed thicknesses. In such situations, it is important to know which part of the distribution of lamellae can be resolved experimentally.
To address this point, we systematically investigate how the spatial resolution of the set-up can bias the measured concentration distribution of a single lamella. This can be achieved by a simple coarsening protocol: the reference concentration distribution (blue dotted line in Figure 5) is obtained from a highly resolved image where the lamella half width spans 8 pixels. When coarsening the pixels of this reference image (merging 2×2 pixels into one, 4×4 pixels into one and so on), we find that the concentration distribution remains unchanged for images having 4 and 2 pixels across the lamella half width (green dotted lines in Figure 5). Conversely, larger coarsening, i.e. images having 1 pixel or less across the lamella half width, yields erroneous results: the concentration distribution departs from the reference one (red dotted lines in Figure 5). The limit of 1 pixel per lamella half width is surprisingly small and comes as encouraging news for experimentalists. This limit holds as long as the concentration profile is smooth (e.g. Gaussian) and there is a finite tilt between the lamella and the lines of pixels. In such a case, the progressive drift of the lamella position relative to that of the pixels scans all the possible concentration levels, thereby providing results consistent with the fully resolved PDF.
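The coarsening protocol itself amounts to block-averaging the reference image and recomputing the concentration histogram, as sketched below (an illustration with hypothetical function names, not the actual analysis code).

# Illustrative sketch (Python): digital coarsening of an image and recomputation
# of the concentration distribution.
import numpy as np

def coarsen(image, k):
    """Average k x k blocks of pixels (edges are cropped if not divisible by k)."""
    ny, nx = image.shape
    ny, nx = ny - ny % k, nx - nx % k
    return image[:ny, :nx].reshape(ny // k, k, nx // k, k).mean(axis=(1, 3))

def concentration_pdf(image, c0, threshold=0.1, bins=100):
    values = image[image > threshold * c0] / c0
    pdf, edges = np.histogram(values, bins=bins, range=(threshold, 1.0), density=True)
    return pdf, 0.5 * (edges[:-1] + edges[1:])

# compare the reference PDF with the PDFs of the 2x, 4x and 8x coarsened images:
# for k in (2, 4, 8): concentration_pdf(coarsen(reference_image, k), c0)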
Two Lamellae
As mentioned in the introduction, mixing in complex systems generally involves coalescence between dispersed lamellae. This latter step is crucial for the system to reach its final homogeneity. Here, the photo-bleaching technique is used to produce two adjacent lamellae and investigate a single coalescence event. Figure 6.a shows images of two parallel lamellae captured at successive strains (see also supplementary movie 2). The two lamellae are initially distinct from each other and, as they are stretched by the flow and diffuse, they progressively merge to eventually form one single entity. The evolution of their concentration profiles measured along the flow x-direction is shown in Figure 6.b. The two maxima of concentration, corresponding to each lamella, decay (blue arrows) while the minimum located in between increases (green arrow). This goes on until the two lamellae merge into one single lamella whose maximum subsequently decreases (yellow arrow). This evolution can easily be predicted owing to the linearity of the diffusion equation: the concentration profile of a set of two lamellae, with individual profiles C_1(x, t) and C_2(x, t), is simply obtained from the summation C(x, t) = C_1(x, t) + C_2(x, t) [START_REF] Fourier | Théorie Analytique de la Chaleur[END_REF]. Using, for each lamella, equation 2.9 with the initial experimental maximum intensity and width, we thereby obtain the concentration profiles shown in Figure 6.d.
To anticipate the impact of this coalescence on the evolution of the concentration distribution, let us recall that P(C) ∼ |dC/dx|⁻¹. Thus, each value of C(x, t) where the concentration profile presents a horizontal slope gives rise to a peak in the concentration distribution. This can clearly be observed on the experimental concentration distributions shown in Figure 6.c. As long as the two lamellae are distinct, one observes not only two peaks at large concentrations (corresponding to the maximum concentration of each lamella), but also one peak at small concentration (corresponding to the minimum concentration located in between the two lamellae). Then, as the lamellae are stretched and diffuse, the two peaks corresponding to the two concentration maxima move to the left (blue arrow in Figure 6.c). Conversely, as the concentration in between the two lamellae increases, the small-concentration peak moves to the right (green arrow in Figure 6.c). Coalescence occurs when the three peaks collide; the distribution eventually recovers its characteristic U-shape once the lamellae have coalesced into a single entity. The same phenomenology can be observed on the concentration distributions obtained by numerically computing the slope of the predicted concentration profiles, see Figure 6.e.
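The predicted distributions of Figure 6.e can be generated by superposing two Gaussian profiles of the form of equation 2.9 and histogramming the resulting C(x) on a fine grid, as sketched below (the spacing d between the lamellae and the other parameter values are placeholders, not the experimental ones).

# Illustrative sketch (Python): concentration PDF of two coalescing Gaussian lamellae.
import numpy as np

def profile(x, t, x_c, c0, sigma0, gamma_dot, D0):
    tau = D0 * (t + gamma_dot**2 * t**3 / 3.0) / sigma0**2
    return c0 / np.sqrt(1.0 + 4.0 * tau) * np.exp(-(x - x_c)**2 / (sigma0**2 * (1.0 + 4.0 * tau)))

def two_lamellae_pdf(t, d=100e-6, c0=1.0, sigma0=25e-6, gamma_dot=0.01, D0=2.66e-13):
    x = np.linspace(-10 * d, 10 * d, 200_000)
    c = (profile(x, t, -d / 2, c0, sigma0, gamma_dot, D0)
         + profile(x, t, +d / 2, c0, sigma0, gamma_dot, D0))
    pdf, edges = np.histogram(c[c > 0.1 * c0], bins=200, density=True)
    return pdf, 0.5 * (edges[:-1] + edges[1:])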
Discussion and conclusions
We have explored experimentally the basic mechanisms of mixing in stirred flows by thoroughly investigating the evolution of an initially segregated scalar (a lamella of fluorescent dye) stretched within a laminar shear flow. A high-resolution set-up, using a photo-bleaching technique to generate and control the shape of the initial lamella, was built in order to resolve the length scales at which diffusion plays a significant role. Our measurements of the evolution of the lamella concentration profiles and concentration distributions are, without adjustable parameters, in excellent quantitative agreement with the theoretical predictions for Pe = [4.5 – 1190].
We also investigated the evolution of the lamella's transverse dimension, which conspicuously illustrates the advection-diffusion coupling, yielding at intermediate times a t⁻¹ compression of the lamella dominated by the kinematics of the flow, followed, after the Batchelor time, by a t^(1/2) broadening dominated by molecular diffusion. The Batchelor scale, which to our knowledge had been observed experimentally only once [START_REF] Meunier | Transport and diffusion around a homoclinic Point[END_REF], was here measured systematically for various Péclet numbers and found to follow the expected behavior, s_AD(t_B)/s_0 = √5 (3Pe)^(−1/3). Most importantly, through a coarsening protocol, we determined the minimal experimental spatial resolution required to resolve the concentration distribution of a single lamella: its half width must be larger than 1 pixel. This requirement is general and constrains the measurement of any mixing protocol. Indeed, for all stretching protocols, lamellae reach their minimum thickness at the Batchelor time, while they are still isolated individual entities. Resolving P(C) at this time, which requires resolving each individual lamella, is therefore the most demanding in terms of spatial resolution: the half width of the lamella at the Batchelor time, s_AD(t_B), must span at least 1 pixel. Note that measuring P(C) at longer times can be less stringent. Indeed, beyond the Batchelor time, lamellae diffuse sufficiently that, when their spacing becomes sufficiently small, nearby lamellae start merging together. The concentration gradients then no longer vary over the lamellae transverse dimension but instead over a larger length scale, which reflects the size of the bundles of merging lamellae. In the context of turbulent mixtures for instance, this 'coarse-grained scale' was proposed to follow η = L Sc^(−2/5), where L denotes the stirring length and Sc the Schmidt number [START_REF] Villermaux | Coarse grained scale of turbulent mixtures Physical review letters[END_REF]. However, the latter length scale is only relevant after the lamellae have merged, which occurs after the Batchelor time. Therefore, resolving P(C) at all times requires an experimental spatial resolution satisfying s_AD(t_B) > 1 pixel.
Finally, we have investigated the coalescence between two lamellae and its impact on the evolution of the concentration distribution. The overlap of the lamellae gives rise to a non-trivial peak at low concentration. This observation is important as it may be relevant for interpreting more complex situations. To conclude, the high-resolution experimental techniques developed for the present study, and the determination of their limitations, open promising perspectives for future studies on mixing.
Figure 2. (color version online) a) Schematic of the set-up. b) Typical image of a lamella obtained by photo-bleaching.

Figure 3. (color version online) a) Successive images of a lamella undergoing shear (see also supplementary movie 1). b) Corresponding averaged concentration profiles along the flow (x-direction). The black lines are fitted Gaussian profiles. c) Normalized maximum concentration C_max/C_0 and d) normalized half width of the concentration profiles σ_x/σ_0 versus time for experiments performed at different Péclet numbers. The black lines correspond to equation 2.4 in c) and equation 2.6 in d). In both cases, γ̇, s_0 and D_0 are set and fixed by the experimental conditions. e) and f) Same data plotted versus t/t_B.

Figure 4. (color version online) a) Evolution of the transverse dimension of the lamella s_AD(t)/s_0 versus time for experiments performed at different Péclet numbers. The black lines correspond to equation 2.5. The dotted line corresponds to the solution in the absence of shear, i.e. in the pure diffusion limit (Pe = 0). b) Corresponding Batchelor scale s_AD(t_B)/s_0 versus Péclet number; the black line corresponds to equation 2.8. c) Concentration distribution P(C/C_0) measured at successive strains, γ̇t, for a lamella sheared at Pe = 4.5. The black lines correspond to equation 2.10. In all cases, γ̇, s_0 and D_0 are fixed by the experimental conditions.

Figure 5. (color version online) Distributions of concentration P(C/C_max) obtained from the progressive digital coarsening of an experimental image.

Figure 6. (color version online) a) Coalescence of two nearby lamellae advected in a laminar shear flow (see also supplementary movie 2). Evolution of the corresponding experimental b) concentration profiles and c) concentration distributions obtained at successive strains γ̇t. Predictions for the evolution of d) the concentration profiles and e) the concentration distributions at the same strains. Again, γ̇, s_0 and D_0 are fixed by the experimental conditions.
We thank E. Villermaux and P. Meunier for their inspiring comments and suggestions. We would also like to thank Sady Noel for machining the experimental set-up. This work was supported by ANR JCJC SIMI 9, by the Labex MEC ANR-10-LABX-0092, by ANR-11-IDEX-0001-02 and by ERC consolidator grant ReactiveFronts 648377.
01768671 | en | ["spi"] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01768671/file/2016_Souzy_JFM.pdf | M. Souzy, H. Lhuissier, E. Villermaux, B. Metzger

Stretching and mixing in sheared particulate suspensions
Introduction
Sheared particulate suspensions represent a quasi-unique system where efficient dispersion spontaneously occurs even under low-Reynolds-number flow conditions. For instance, the transfer of heat [START_REF] Sohn | Heat Transfer enhancement in laminar slurry pipe flows with power law thermal conductivities[END_REF], Metzger 2013) or mass [START_REF] Wang | Hydrodynamic diffusion and mass transfer across a sheared suspension of neutrally buoyant spheres[END_REF][START_REF] Wang | Augmented Transport of Extracellular Solutes in Concentrated Erythrocyte Suspensions in Couette Flow[END_REF][START_REF] Souzy | Super-diffusion in sheared suspensions[END_REF] across a suspension of non-Brownian particles is significantly enhanced when the suspension is submitted to a macroscopic shear. This would not happen in a pure Newtonian fluid, where the laminar streamlines remain perpendicular to the scalar (heat or concentration) gradients. In a sheared suspension, the macroscopically steady imposed shear results, at the particle scale, in an unsteady flow: particles constantly collide with one another, change streamlines and thus generate disturbances within the fluid which promote the dispersion of the scalar, prelude to its subsequent mixing. Two mechanisms have been identified to explain the origin of the transfer enhancement. First, the particle translational shear-induced diffusivity, a phenomenon which has been widely investigated over the last decades [START_REF] Eckstein | Self-diffusion of particles in shear flow of a suspension[END_REF][START_REF] Arp | The Kinetics of Flowing Dispersions IX. Doublets of Rigid Spheres (Experimental)[END_REF][START_REF] Cunha | Shear-induced dispersion in a dilute suspension of rough spheres[END_REF][START_REF] Breedveld | Measurement of the full shear-induced self-diffusion tensor of noncolloidal suspensions[END_REF][START_REF] Sierou | Shear-induced self-diffusion in non-colloidal suspensions[END_REF], Metzger 2013). Second, the particle rotation, whose impact is particularly important at the boundaries, where particles disrupt the diffusive boundary layer by a 'rolling-coating' effect [START_REF] Souzy | Super-diffusion in sheared suspensions[END_REF]. These studies mainly focused on the rate of transfer across sheared suspensions, which is customarily characterized by an effective diffusion coefficient much larger than the scalar molecular diffusivity. Another aspect of transport enhancement concerns the mixing properties of the system, namely its ability, starting from a given spatial scalar distribution, to reach homogeneity. Figure 1 shows how a blob of dye with initial size s_0 diffuses while it is deformed by the complex flow in the interstitial fluid of a suspension. The important question which naturally arises is to understand how this initially segregated system reaches homogeneity, and particularly how long this process takes. By essence, it involves both advection by the flow and molecular diffusion of the scalar. Such a problem has been studied in a wide range of situations involving a single fluid phase, such as shear flows [START_REF] Ranz | Applications of a stretch model diffusion, and reaction in laminar and turbulent flows[END_REF], vortex flows (Meunier 2003), turbulent jets [START_REF] Duplat | A nonsequential turbulent mixing process[END_REF], or flows in porous media (Le [START_REF] Borgne | The lamellar description of mixing in porous media[END_REF].
These studies all underline the crucial importance of the rate at which fluid material lines are elongated by the flow [START_REF] Villermaux | On dissipation in stirred mixtures[END_REF]. The knowledge of these 'stretching laws' allows one to estimate the mixing time: the time when the scalar concentration fluctuations start to decay significantly [START_REF] Batchelor | Small-scale variation of convected quantities like temperature in a turbulent fluid. part 1. general discussion and the case of small conductivity[END_REF]. For instance, in a simple shear flow with shear rate γ, the material lines grow as γt. In the limit of large Péclet number Pe = γs₀²/D, the mixing time for a scalar blob of initial size s₀ is t_mix ∼ γ⁻¹Pe^(1/3), where D denotes the molecular diffusivity of the dye. In chaotic flows, where the stretching rate is maintained, the material lines stretch exponentially, as e^(γt), and t_mix ∼ γ⁻¹ ln Pe.
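As a rough illustration of these two estimates, the orders of magnitude can be compared with a few lines of code (a sketch added for illustration only; the numerical values are the typical ones quoted in this paper, and order-one prefactors are omitted):

```python
import numpy as np

# typical values quoted in this study (placeholders for illustration)
gamma_dot = 0.15   # macroscopic shear rate [1/s]
s0 = 2e-3          # initial blob size [m]
D = 1.44e-13       # molecular diffusivity of the dye [m^2/s]

Pe = gamma_dot * s0**2 / D                 # Peclet number
t_shear = Pe**(1 / 3) / gamma_dot          # simple shear: t_mix ~ Pe^(1/3)/gamma_dot
t_chaotic = np.log(Pe) / gamma_dot         # exponential stretching: t_mix ~ ln(Pe)/gamma_dot

print(f"Pe ~ {Pe:.1e}")
print(f"simple shear : strain at mixing ~ {gamma_dot * t_shear:.0f}")
print(f"chaotic flow : strain at mixing ~ {gamma_dot * t_chaotic:.0f}")
```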
In spite of their crucial importance for mixing issues, stretching laws in particulate suspensions have never been studied experimentally, nor has the general question of the mixing time in such a system been addressed. Stretching in particulate suspensions has been addressed indirectly using numerical simulations, through the measurement of the suspension's largest Lyapunov exponent [START_REF] Dasan | Stress fluctuations in sheared Stokesian suspensions[END_REF][START_REF] Drazer | Deterministic and stochastic behaviour of non-Brownian spheres in sheared suspensions[END_REF], Metzger 2012, Metzger 2013). In such a chaotic system, the mean stretching rate of fluid elements can be identified with the largest Lyapunov exponent. The reported positive Lyapunov exponents indicate that the stretching laws must be exponential. Stretching has also been explored theoretically with the motivation of understanding the rheology of such systems when the suspending fluid is viscoelastic. It was shown that the expected exponential stretching of the polymers should affect the pressure drop in fixed beds of spheres or fibres [START_REF] Shaqfeh | Polymer stretch in dilute fixed beds of fibres or spheres[END_REF] or the viscosity of freely suspended fibres in a simple shear flow [START_REF] Harlen | Simple shear flow of a suspension of fibres in a dilute polymer solution at high Deborah number[END_REF].
In this paper, we specifically address the question of the stretching kinematics by performing experiments on non-Brownian spherical particles suspended in a viscous Newtonian fluid that is steadily and uniformly sheared. In this limit, the flow kinematics is independent of both the shear rate γ and the molecular diffusivity. The sole parameter expected to affect the stretching process is the particulate volume fraction φ. We investigate the stretching laws in particulate suspensions varying the volume fraction over a wide range, 20% ≤ φ ≤ 55%, for which collective effects between particles are present but the suspension still flows easily, since it is still far from jamming. After presenting the experimental set-up in § 2, we first compare the evolution of a blob of dye sheared in a pure fluid (without particles) to that of a blob sheared in a suspension (§ 3). This experiment illustrates the complexity of the advection field induced by the presence of the particles. Then, following the Diffusive Strip Method of Meunier 2010, accurate velocity field measurements of the fluid phase (§ 2.3) are used to determine the stretching laws. Material lines are found to stretch, on average, exponentially with time (§ 4), at a rate which agrees with the largest Lyapunov exponents reported in 3D Stokesian dynamics simulations [START_REF] Drazer | Deterministic and stochastic behaviour of non-Brownian spheres in sheared suspensions[END_REF][START_REF] Dasan | Stress fluctuations in sheared Stokesian suspensions[END_REF]. Beyond the mean, we tackle the complete statistics of stretching, that is to say, the distributions of elongation as a function of strain and particle volume fraction, which are found to converge towards log-normal distributions. In § 5, we present a model, based on a multiplicative stretching process, which quantitatively explains the experimental distributions of the material line elongation and their dependence on γt and φ. Finally, the crucial implications of these findings for scalar mixing are developed and discussed in § 6, before we conclude in § 7.
Experimental set-up
The experimental set-up is shown in figure 2. It aims at steadily and uniformly shearing a viscous particulate suspension, injecting a small blob of dyed fluid, and observing both the flow and the mixing of the dye. The set-up consists of a transparent cell in which a transparent mylar belt is tightly mounted at the top of the cell on two cylinders and at the bottom on two ball bearings. One cylinder is entrained by a rotating stage (M-061.PD from PI Piezo-Nano Positioning) with high angular resolution (3×10 -5 rad). The motion of the belt generates in its central region a linear shear flow. The suspension is allowed to flow below the cylinders and a constant spacing between the belt and the inner wall of the cell is maintained all around the cell. This specific design, which is an evolution of that used in Metzger 2012, minimizes secondary flows and ensures a velocity profile with constant shear rate within the belt.
Particles and liquid
The particles and the liquid are carefully chosen to allow the visualization of the dye and of the flow inside the suspension, as well as to ensure a purely viscous flow without buoyancy effects. This requires using a transparent medium, matching both the density and the refractive index of the particles, and using a fairly viscous liquid.
To fulfill the above requirements, we use mono-disperse spherical particles (PMMA from Engineering Laboratories Inc.) with density ρ = 1.18 g cm⁻³ (1180 kg m⁻³) and diameter d = 2 mm, especially chosen for their smooth surface and good transparency. The liquid is a Newtonian mixture of Triton X-100 (77.4 wt%), zinc chloride (13.4 wt%) and water (9.2 wt%) with viscosity η = 3 Pa s, having the same density as the particles at room temperature. Its composition is optimized to match both the refractive index and the density of the particles. A small amount of hydrochloric acid (≈ 0.05 wt%) is added to the solution to prevent the formation of zinc hypochlorite precipitate, thereby significantly improving the optical transparency of the solution. Last, to finely tune the index matching between the particles and the liquid, the temperature of the set-up is adjusted with a water bath surrounding the shear cell.
The solid volume fraction φ of the suspension is varied between 20 and 55%. To ensure that inertial effects are negligible, the shear rate γ is set to typically 0.15 s -1 , which corresponds to a particulate Reynolds number ρ γd 2 /η ∼ 10 -4 .
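For reference, the quoted particulate Reynolds number follows directly from these values; a one-line check (with the density expressed in SI units) gives:

```python
rho, d, eta, gamma_dot = 1180.0, 2e-3, 3.0, 0.15     # kg/m^3, m, Pa s, 1/s
print(f"Re_p = {rho * gamma_dot * d**2 / eta:.1e}")  # ~2e-4, i.e. of order 1e-4
```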
Imaging
The suspension is observed in the flow-gradient plane (xy plane): a slice of suspension is illuminated by a laser sheet across the transparent belt and imaged from the top (see figure 2).
The laser sheet is formed by reflecting a laser beam (2 W, 532 nm) on a standard laser-printer mirror (rotating at ∼ 10000 rpm). This technique was found to produce a light sheet with a better spatial homogeneity than that obtained with classical cylindrical or Powell lens techniques. The sheet is collimated and focused to a thickness of ∼ 60 µm with the help of two perpendicular plano-convex lenses. Last, a long-pass filter (590 nm) eliminates direct light reflections. The suspension is imaged with a high-resolution camera (Basler Ace2000-50gm, 2048x1080 pixel², 12 bit) coupled to a high-quality magnification lens (Sigma APO-Macro-180 mm-F3.5-DG). To prevent the particles from distorting the free surface of the suspension through which the visualization is realized, a small plexiglass window is positioned on the free surface, above the region of interest, which locally ensures a flat interface. The window has a small hole allowing the injection of a blob of dyed fluid with a syringe.
Velocity field measurements
The velocity field in the suspending liquid is measured in the plane of the laser sheet (xy), at half-distance between the bottom and the free surface (see figure 3), by performing particle image velocimetry (PIV). This yields the two-dimensional velocity field {u, v}, which does not necessarily satisfy incompressibility. To perform PIV, the liquid is seeded with small passive fluorescent tracers (3.23 µm PMMA B-particles from MF-Rhodamine) at a very low volume fraction (∼ 10 -5 φ). These small and dilute tracers do not affect the flow but allow its visualization and quantification, as shown in figure 3 and Movie 1. The large (2 mm) particles of the suspension do not interact with the laser sheet and appear as black discs. Note that all the particles have the same size; the apparent size differences arise from their different vertical positions relative to the laser sheet plane. The PIV routine is adapted from a Matlab code developed by Meunier 2003. Images are captured every 0.1 s, which corresponds to a strain increment of 0.015. To perform PIV, the images are divided into equally spaced and overlapping sub-images with a typical size of d/20 (32 pixels). The local velocity field is computed by cross-correlating successive sub-images. The presence of a particle in a sub-image is detected with the help of two filters (on the correlation maximum and on the standard deviation of the sub-images), in which case the corresponding velocity vector is not used (see figure 3b).
The independence of the measured velocity field from the PIV sub-image size was verified by decreasing the latter to ∼ d/40 (16 pixels). Apart from an increase in noise, no significant effect on the measured velocities was found.
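For readers unfamiliar with PIV, the core of the measurement, the cross-correlation of sub-images, can be sketched as follows. This is an illustrative re-implementation only (the actual processing used an adapted version of the Matlab routine of Meunier 2003), with a simplified rejection criterion:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Return the integer displacement (dx, dy) maximizing the cross-correlation
    of two sub-images taken from consecutive frames, and a crude validity flag."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.fftshift(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real)
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = iy - corr.shape[0] // 2, ix - corr.shape[1] // 2
    valid = corr.max() > 3 * corr.std()   # weak peak: likely a particle in the window
    return dx, dy, valid

# usage sketch: divide two consecutive frames (recorded 0.1 s apart) into
# overlapping 32x32 pixel windows and call piv_displacement on each window pair.
```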
Molecular diffusivity measurements
The molecular diffusion coefficient of the dye (rhodamine 6G) is measured by observing the spreading, in the absence of flow, of a slice of liquid depleted in dye. A small Hele-Shaw cell (100 µm thick) is filled with dye-doped suspending liquid without particles. A thin slice of liquid is initially depleted in dye by bleaching the dye with a high power laser sheet across the cell (see figure 4). The depleted slice appears as a dark line having a gaussian profile which diffuses with the diffusion coefficient of the dye. The spatial variance of the gaussian profile χ² is measured over one day, and the diffusivity is determined from D = [χ²(t) − χ²(0)]/2t ≈ 1.44 × 10⁻¹³ m² s⁻¹. This value is consistent with that of 4.14 × 10⁻¹⁰ m² s⁻¹ found by [START_REF] Culbertson | Diffusion coefficient measurements in microfluidic devices[END_REF] for the diffusivity of the same dye in water, given that water is 3000 times less viscous than the suspending liquid and that, according to the Einstein-Sutherland law, D ∝ 1/η.
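The fit yielding D can be reproduced schematically as follows (the variance values below are synthetic placeholders with the same order of magnitude as the data of figure 4, not the measured ones):

```python
import numpy as np

t = np.array([0.0, 4800.0, 21600.0, 64800.0])          # times [s]
chi2 = np.array([1.00e-8, 1.14e-8, 1.62e-8, 2.87e-8])  # spatial variances [m^2]

# chi^2(t) - chi^2(0) = 2 D t  ->  least-squares slope gives 2D
slope = np.polyfit(t, chi2 - chi2[0], 1)[0]
print(f"D = {slope / 2:.2e} m^2/s")   # ~1.4e-13 m^2/s
```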
General observations
To illustrate the influence of particles on mixing in a shear flow, we first compare the evolution of a blob of dye sheared in a pure fluid (without particles) to that of a blob sheared in the suspension of particles, see Movie 2. A cylindrical blob of dyed fluid is injected, at rest and at t = 0, in the middle of the shear cell. Initially, the blob has a diameter s₀ ≈ 2 mm, is aligned with the vorticity direction and is centered on the neutral velocity plane. This results in a macroscopically two-dimensional initial configuration, and ensures that the blob does not drift with the flow but only deforms. Figure 5 shows, for a Péclet number Pe ≈ 10⁶, how mixing proceeds in the two sheared media, from the initial segregated state up to a strain γt = 20. In the pure liquid, the blob of dye stretches homogeneously. Its length increases linearly with time and the blob transverse dimension thus decreases as 1/t. In the suspension, the situation is markedly different: the fluctuations induced by the particles in the fluid phase strongly impact the evolution of the blob. Several conspicuous features deserve to be highlighted: i) the dispersion and the unfolded length of the blob are significantly enhanced by the particles; ii) both the translational diffusivity (transverse undulations of the blob, see figure 5b) and the rotation (blob winding around particles, see figure 5c) of the particles contribute to these enhancements; iii) the blob stretching is highly inhomogeneous: at some locations, its transverse dimension becomes much thinner and at others larger than in the pure fluid case, revealing regions of enhanced stretching and regions of compression; iv) at large strains (figure 5d), the blob has separated into several filaments, which means that some regions of the blob have already mixed, while in the pure liquid (without particles) mixing has not occurred yet; v) in some regions, the blob evolves into bundles composed of several nearly overlapping filaments [START_REF] Duplat | Mixing by random stirring in confined mixtures[END_REF]. This suggests an underlying stretching/folding mechanism similar to the well known baker's transform [START_REF] Ottino | The Kinematics of Mixing: Stretching, Chaos, and Transport[END_REF].
The above features are generic to the flow of a viscous suspension at large Péclet number. Since inertial effects are negligible, these features are independent of the rate γ at which the suspension is sheared. Similarly, the value of the Péclet number does not influence the general stretching pattern of the blob, but only prescribes the strain γt at which diffusion starts to become effective.
This direct comparison clearly illustrates how the liquid velocity fluctuations generated by the particles dramatically accelerate the blob deformation and dispersion. This acceleration is apparent here from the beginning of the shear, because the blob size s 0 is similar to the particle size d. It is however crucial to realize that the strain at which this acceleration establishes is expected to depend on the ratio s 0 /d. If the initial size of the blob s 0 is larger than d, the blob is essentially not stretched by the particle fluctuation motions. It is thus essentially stretched by the linear macroscopic shear until the blob transverse size has thinned down to d, after a typical strain s 0 /d. From that strain on, the particulate fluctuations are expected to contribute directly to the blob stretching.
In stirred flows, such as the one considered here, mixing results from the coupling between advection and molecular diffusion. In the experiment described above, the blob of dye is stretched by the local velocity field: the blob is stretched along its own longitudinal direction and conversely compressed along its transverse direction. The blob thus evolves towards a topology constituted of sheets, or filaments [START_REF] Ottino | The Kinematics of Mixing: Stretching, Chaos, and Transport[END_REF][START_REF] Buch | Experimental study of the fine-scale structure of conserved scalar mixing in turbulent shear flows[END_REF]. In contrast with the effect of advection, molecular diffusion tends to broaden the filaments. This diffusive broadening will at some point counter-balance the rate of compression of the blob caused by the advection. As we already mentioned, this naturally sets a time-scale called the mixing time, t_mix, beyond which the concentration levels drop significantly. The mixing time, a key element to understand the overall mixing process, can be estimated from the sole knowledge of the dye molecular diffusion coefficient and from the history of the transverse dimension of the blob. If one assumes that the flow is two-dimensional (this assumption is discussed in § 5.3), incompressibility and mass conservation relate at any time the transverse size of the blob to its length l, through s₀l₀ = s(t)l(t). The mixing time can therefore be estimated from the characterization of the evolution of l(t). Our goal in the following is thus to determine the so-called 'stretching laws', i.e., the time dependence of l in sheared particulate suspensions.
Experimental stretching laws
Our first attempt to measure the unfolded length of the blob l(t) was naturally to perform direct image analysis on images such as those shown in Figure 5. However, the intrinsic dispersion process rapidly distorts the blob into bundles of very close (sometimes merging) filaments, which renders image analysis ineffective above strains of typically 5.
To overcome these limitations, we adopted a different approach inspired from the Diffusive Strip Method. This method happens to be a very powerful experimental tool allowing the determination of the stretching laws over unlimited strains. The key idea is to use the experimental fluid velocity field to numerically advect passive material lines representing portions of the blob. The lines are initially composed of three passive tracers separated by a distance d/20 which discretize a fluid material line. The lines are randomly located in the two-dimensional velocity field with a random orientation (see figure 6a). Each tracer with coordinate x is advected independently from each other according to the local fluid velocity v(x) (obtained by linear interpolation of the instantaneous PIV velocity field) as x(t + ∆t) = x(t) + v(x)∆t, where ∆t is the time between consecutive measurements of the velocity field. As a material line is advected, it is refined by adding more tracers when its length increases or when its local curvature becomes too large (see Meunier 2010 for a detailed description of the refinement procedure).
Figure 6 shows the evolution of two material lines up to a strain of 15, see also Movie 3. The red line successively stretches and folds very similarly to what is observed in the blob experiments (figure 5). Interestingly, the blue line behaves very differently. Although it sustains the same macroscopic strain as the red one, it experiences a much weaker stretching, only because it started from a different initial location. These different stretching histories reveal the stochastic nature of the stretching induced within particulate suspensions. The stretching laws therefore have to be sought in a statistical sense by repeating the advection procedure over a large number of independent material lines. However, as the material lines lengthen, they may reach the boundaries of the measured velocity field, which limits the maximum strain that can practically be investigated (typically γt < 10). This problem is easily circumvented by realizing that, as far as the stretching laws are concerned, the object of interest is not the material line as a whole but rather the small segments which compose this line and which all stretch differently from each other. We thus perform a new set of calculations focusing on segments: i) initial segments (composed of two tracers) with length d/20 are positioned and oriented randomly in the flow, ii) each time the length of a segment doubles, it is split into two individual segments that are subsequently advected independently, iii) if a segment reaches the boundary of the velocity field, it is re-injected near the center of the velocity field, iv) when a segment overlaps with a particle, where the velocity field is undefined (as can happen due to the finite time ∆t), it is frozen until the particle has moved away. Owing to these rules, virtually unlimited strains can be considered, and the stretching history of each segment that has been created over this strain can be determined. We define the elongation of these segments as the ratio
ρ(t) ≡ δl(t)/δl 0 (4.1)
of their current length δl(t) to their initial length δl₀, where δl₀ = (d/20)/2ⁿ, with n the number of times the sub-segment was split in two. Note that to compute the distributions of elongations we present below, the contribution of each segment is weighted by its initial length. Note also that times for which a segment is frozen are not considered. The distribution of elongations at time t therefore represents the portion of the blob that has reached a given elongation after being advected for a duration t. It was built from the stretching histories of 25000 segments advected over 3 independent experimental velocity fields, each of them recorded for a total strain of 20 (typically 4000 images).
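The segment rules i)-iv) can be summarised by the following sketch. It is an illustrative re-implementation rather than the code actually used: `interp_velocity(point, step)` stands for the bilinear interpolation of the measured PIV fields (assumed here to return None inside a particle), and the statistical weighting of each segment by its initial length is omitted for brevity:

```python
import numpy as np

def advect_segments(segments, interp_velocity, dt, n_steps, box):
    """Track the elongation rho of material segments advected by the measured field.

    Each segment is [p0, p1, L_ref, rho_ref]: its two end points, its length at the
    last splitting event, and its cumulated elongation at that event, so that the
    current elongation is rho = rho_ref * |p1 - p0| / L_ref.
    """
    for step in range(n_steps):
        updated = []
        for p0, p1, L_ref, rho_ref in segments:
            v0, v1 = interp_velocity(p0, step), interp_velocity(p1, step)
            if v0 is None or v1 is None:              # overlaps a particle: freeze it
                updated.append([p0, p1, L_ref, rho_ref]); continue
            p0, p1 = p0 + v0 * dt, p1 + v1 * dt       # advect both end points
            if np.any(p0 < 0) or np.any(p0 > box):    # left the field of view:
                shift = box / 2 - (p0 + p1) / 2       # re-inject near the centre
                p0, p1 = p0 + shift, p1 + shift
            L = np.linalg.norm(p1 - p0)
            if L > 2 * L_ref:                         # length doubled: split in two
                mid = (p0 + p1) / 2
                rho_now = rho_ref * L / L_ref
                updated += [[p0, mid, L / 2, rho_now], [mid, p1, L / 2, rho_now]]
            else:
                updated.append([p0, p1, L_ref, rho_ref])
        segments = updated
    return [r * np.linalg.norm(p1 - p0) / L for p0, p1, L, r in segments]
```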
Figure 7 shows the experimental stretching laws obtained for a suspension with φ = 35%, which is generic to the volume fraction range 20 to 55% investigated. It presents the mean value ⟨ρ⟩ and the standard deviation σρ ≡ √(⟨ρ²⟩ − ⟨ρ⟩²) of the elongation for strains up to 20. At γt = 20, the segments have on average lengthened by typically 10³, which is about one hundred times larger than in the case of a pure liquid. The striking result is that the presence of particles in a shear flow changes the very nature of the stretching laws from linear to exponential. Indeed, the elongation of a material line in a simple shear (without particles) follows
ρ_lin(θ, t) = [1 + 2 cos θ sin θ γt + sin²θ γ²t²]^(1/2), (4.2)
where θ denotes the angle between the line's initial orientation and the flow direction. On averaging ρ²_lin(θ, t) over all possible orientations, we obtain
⟨ρ_lin⟩(t) = [1 + γ²t²/2]^(1/2), (4.3)
which is only of order 10 for a strain of 20, and increases linearly with time for large strains. Equation (4.3) is plotted in figure 7 to illustrate the contrast with the elongations actually measured in particulate suspensions: the mean elongation in suspensions is different both in magnitude and in law. Moreover, by contrast with the pure fluid case, the stretching variability of individual material lines is very broad, as evidenced by the exponential growth of the standard deviation σρ. These results corroborate the preliminary blob experimental visualizations where, in the suspension, many filaments having very different transverse thickness can be observed while the pure fluid case solely exhibits one uniform thickness (figure 5d). More precisely, figure 7b shows the distributions of the relative elongations P(ρ/⟨ρ⟩) at successive strains. The distribution of elongations broadens rapidly such that at a strain γt = 20, it spans more than eight decades. At that strain, the right tail of the distribution contains segments elongated 10⁴ times relative to the average ⟨ρ⟩, which corresponds to an absolute elongation of ρ ∼ 10⁷, in stark contrast with the uniform average elongation of 10 obtained in a simple shear. As figure 7b shows, these distributions are found to be well fitted by log-normal distributions (shown as dashed lines). Note that the apparent absence of data on the left hand side of the distributions is fully consistent with log-normal distributions. Indeed, for broad distributions, the statistical weight of the left hand side of the distribution vanishes. Our data thus fully resolve the meaningful part of the distribution.
The advective strip method presented above was repeated with velocity fields measured in suspensions with different volume fractions φ ranging from 20% to 55%. The same trends as those detailed for φ = 35 % are systematically observed. As shown in figure 8, it is moreover found that larger particulate volume fractions increase both the growth rate of the average elongation ρ and that of the standard deviation σ ρ . This indicates that a larger volume fraction results in larger fluid disturbances which, in turn, induce a faster and more random elongation of the fluid material lines. Fitting these curves with exponential growths in strain e κ γt yields for ρ , κ ρ = 0.09 + 0.74 φ, and for σ ρ , κ σρ = 0.12 + 1.03 φ. In the range of volume fraction investigated, the growth rates are found to increase linearly with φ. No measurements could be performed above 55% as the large normal stress built in the suspension starts to deform the belt.
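The growth rates κ are simply the slopes of ln⟨ρ⟩ and ln σρ versus strain; an illustrative fit (on synthetic data generated with the reported law, since the measured curves are not reproduced here) reads:

```python
import numpy as np

phi = 0.35
strain = np.linspace(2, 20, 50)
kappa_true = 0.09 + 0.74 * phi                       # reported law for kappa_rho
rho_mean = np.exp(kappa_true * strain) * (1 + 0.05 * np.random.randn(strain.size))

kappa_fit = np.polyfit(strain, np.log(rho_mean), 1)[0]   # slope of ln<rho> vs strain
print(f"kappa_rho (fit) = {kappa_fit:.3f}, 0.09 + 0.74*phi = {kappa_true:.3f}")
```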
To summarize, by kinematically advecting passive segments using the experimental velocity fields of the fluid phase, we measured the elongation of fluid material lines in sheared particulate suspensions. Two important features characterize these elongations: i) the mean and the standard deviation grow exponentially, ii) the distribution converges to a log-normal. In the following, using two measurable properties of the fluid velocity field, namely the local shear rate distribution and the Lagrangian correlation time, we present a mechanism accounting for these observations.
Origin of the stretching laws
Principle
We consider the elementary component of a fluid material line: the segment, see figure 9. At that scale, much smaller than the particle size, the local shear γloc is uniform.
Considering the broad distribution of the segment orientations, we assume that the local shear rate has a random orientation with respect to the segment. Therefore, as long as the local shear rate γloc persists, the average elongation of the segment is (see equation 4.3)
⟨ρ⟩ = [1 + γloc² t²/2]^(1/2). (5.1)
Note that an individual segment can be stretched or compressed depending on whether it is located in a diverging or a compressive region of the flow, respectively. However, once averaged over all possible orientations, the segment net elongation is strictly larger than unity. Two questions then naturally emerge: what are the local shear rates? and how long do these shear rates persist? In the following two sections, we address these questions by providing information about the local shear rates and the Lagrangian correlation time of the velocity field.
Local shear rate
We measure the local shear rate from the experimental two-dimensional velocity fields. To this end, we define the local shear rate by the norm of the symmetric part of the velocity gradient tensor:
γloc = [2(∂u/∂x)² + 2(∂v/∂y)² + (∂u/∂y + ∂v/∂x)²]^(1/2), (5.2)
where {u, v} are the {x, y} components of the velocity field. This definition disregards the rotational part of the velocity gradient tensor. For a simple shear, one has γloc = const = γ. Figure 10a shows a typical local shear rate map, obtained in a suspension with volume fraction φ = 35%. The color-scale represents the γloc amplitude normalized by the applied macroscopic shear rate γ, that is to say the amplification of the shear due to the presence of particles. The local shear rate is highly non-uniform and its value can greatly exceed the macroscopic shear rate. Interestingly, large local shear rates occur preferentially in the vicinity of the particles; however, there is no apparent correlation between large local shear rates and small inter-particle distances.
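On the gridded PIV data, equation (5.2) can be evaluated by finite differences; a minimal sketch (assuming `u` and `v` are 2-D arrays on a regular grid of spacing `dx`, with the values inside the particles masked beforehand) is:

```python
import numpy as np

def local_shear_rate(u, v, dx):
    """Norm of the symmetric part of the velocity gradient, equation (5.2)."""
    dudy, dudx = np.gradient(u, dx)   # derivatives along (rows, cols) = (y, x)
    dvdy, dvdx = np.gradient(v, dx)
    return np.sqrt(2 * dudx**2 + 2 * dvdy**2 + (dudy + dvdx)**2)

# sanity check: a pure simple shear u = gamma*y, v = 0 gives gamma_loc = gamma everywhere
y, x = np.mgrid[0:64, 0:64] * 1e-4
print(local_shear_rate(0.15 * y, np.zeros_like(y), 1e-4).mean())   # ~0.15
```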
More quantitatively, we report in figure 10b the distribution of normalized local shear rates obtained for various volume fractions. Clearly, the local shear rate exceeds the imposed macroscopic shear rate most of the time, sometimes by one order of magnitude, and this trend accentuates with increasing volume fractions. The mean normalized value ⟨γloc⟩/γ is plotted versus φ in the inset of figure 10c. It is found to be well fitted by ⟨γloc⟩/γ = A/(φc − φ)^δ. Fixing φc = 0.58, this yields A ≈ 0.56 and δ ≈ 0.7. Note that the last point, corresponding to φ = 55%, was not included in the fitting procedure since we suspect that it is biased by the deflection of the belt mentioned above. Note also that PIV using smaller boxes resulted in very similar local shear rate distributions, with less than 6% difference in the average. The trends discussed above may also be interpreted in terms of a macroscopic viscosity. In that case, the relevant quantity to investigate is the second moment of the local shear rate distribution, ⟨γloc²⟩ [START_REF] Chateau | Homogenization approach to the behavior of suspensions of noncolloidal particles in yield stress fluids[END_REF], Lerner 2012, Dagois-Bogy 2015). Values of this quantity have recently been obtained by Trulsson et al. from numerical simulations of dense frictional suspensions [START_REF] Trulsson | Effect of Friction on Dense Suspension Flows of Hard Particles[END_REF]. They report that ⟨γloc²⟩^(1/2)/γ ∼ (J/µ)^(−1/3), where J = γηf/P is the viscous number, with P the confining pressure, and µ the suspension macroscopic friction coefficient. Since ηs/ηf = σ/(γηf) = µ/J, this results in ⟨γloc²⟩^(1/2)/γ ∼ (ηs/ηf)^(1/3). Combining this with ηs/ηf ∼ (φc − φ)^(−2) and using φc = 0.58 [START_REF] Boyer | Unifying suspension and granular rheology[END_REF]) leads to ⟨γloc²⟩^(1/2)/γ ∼ (φc − φ)^(−2/3), which is in fairly good agreement with the measured scaling, as figure 10c shows.
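The fitted amplification law can be evaluated directly; for instance, with the values of A, δ and φc quoted above (this is just an evaluation of the reported fit, not new data):

```python
import numpy as np

A, delta, phi_c = 0.56, 0.7, 0.58
for phi in np.arange(0.20, 0.51, 0.05):
    amp = A / (phi_c - phi)**delta        # fitted <gamma_loc>/gamma
    rms = (phi_c - phi)**(-2.0 / 3.0)     # (phi_c - phi)^(-2/3) scaling (no prefactor)
    print(f"phi = {phi:.2f}:  mean amplification ~ {amp:.2f},  rms scaling ~ {rms:.2f}")
```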
Lagrangian correlation time
The second important quantity of the suspending liquid flow is the persistence time of the velocity fluctuations induced by the particles. Figures 11a and b show the transverse Lagrangian velocity V (perpendicular to the flow) of a passive tracer advected by the fluid at a low and a large volume fraction, respectively. Consistently with the magnitude of the local shear, more concentrated particulate suspensions develop velocity fluctuations with larger amplitudes. However, these fluctuations are found to persist over a much shorter time as φ increases. The duration t_c for which a segment is coherently stretched by the flow is directly prescribed by this persistence time, which we define from the Lagrangian velocity auto-correlation functions. As shown in figure 11c, these functions decorrelate exponentially with strain. In the range of volume fraction investigated, the dimensionless correlation time γτ, inferred from this exponential decay, decreases linearly with φ as
γτ ≈ 0.62 − 1.08 φ, (5.3)
(see figure 11d). We expect t_c to be of the order of τ and thus write
t_c = ατ, (5.4)
with α an order-one constant. Note that, as shown in figure 7, this persistence time (∼ γ⁻¹) is much shorter than the observation period (≳ 10 γ⁻¹).
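The correlation strain γτ is obtained from an exponential fit of these auto-correlation functions; schematically (the signal below is a synthetic exponentially correlated series standing in for a measured Lagrangian velocity, so the numbers are only indicative):

```python
import numpy as np

def correlation_strain(V, dt, gamma_dot):
    """Dimensionless Lagrangian correlation time from <V(t)V(t+s)> ~ exp(-s/tau)."""
    V = V - V.mean()
    acf = np.correlate(V, V, mode="full")[V.size - 1:]
    acf /= acf[0]
    n = np.argmax(acf < np.exp(-1.0))        # first crossing of 1/e gives tau/dt
    return gamma_dot * n * dt

rng = np.random.default_rng(0)
dt, gamma_dot, tau = 0.1, 0.15, 1.5          # tau chosen so that gamma*tau ~ 0.22
a = np.exp(-dt / tau)
V = np.zeros(20000)
for i in range(1, V.size):                   # AR(1) process with correlation time tau
    V[i] = a * V[i - 1] + np.sqrt(1 - a**2) * rng.standard_normal()
print(correlation_strain(V, dt, gamma_dot))  # ~0.2, to be compared with 0.62 - 1.08*phi
```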
Multiplicative stretching process
With information about the local shear rates and their persistence time at hand, we now explain the elongations of fluid material lines as a sequence of uncorrelated cycles of stretching. During the first cycle of duration t_c, a given segment of a material line is elongated by the local shear rate γloc,1, resulting in a stretching
∆ρ₁ = [1 + (γloc,1 t_c)²/2]^(1/2), (5.5)
where γloc,1 is a local shear rate whose probability is prescribed by the distribution P(γloc/γ), cf. figure 10b. After the duration t_c, the local velocity field de-correlates and the local shear rate map is entirely redistributed. The segment then experiences a new local shear rate γloc,2, which at t = 2t_c yields ρ = ∆ρ₁ [1 + (γloc,2 t_c)²/2]^(1/2), and so on. The total elongation at time t, after N = t/t_c cycles, is the product of all the elementary elongations occurring at each cycle, ρ(t) = ∏_{i=1}^{N=t/t_c} ∆ρ_i. The logarithm of this expression can be written as a sum
ln ρ ≡ Σ_{i=1}^{t/t_c} ln ∆ρ_i = (1/2) Σ_{i=1}^{t/t_c} ln[1 + (γloc,i t_c)²/2]. (5.6)
Since the elementary stretchings are independent, the distribution of ln ρ is expected, by virtue of the central limit theorem, to be normal. This multiplicative stretching model thus predicts ρ to converge, after a few t/t_c cycles, towards a log-normal distribution. This prediction is in agreement with the experimental results shown in figure 7b. The distribution of ln ρ, i.e. the normal distribution, reads
P(x = ln ρ) = (1/(√(2π) σ)) exp[−(x − µ)²/(2σ²)], (5.7)
with a non-zero mean
µ ≡ ⟨ln ρ⟩ = (⟨ln ∆ρ⟩/γt_c) γt, (5.8)
and variance
σ² ≡ ⟨ln²ρ⟩ − µ² = [(⟨ln²∆ρ⟩ − ⟨ln ∆ρ⟩²)/γt_c] γt. (5.9)
Both the mean and the variance of the distribution of ln ρ increase linearly with time.
They also vary with the particulate volume fraction due to the φ-dependence of γloc and t_c. This variation with φ is better appreciated by recasting equations (5.8) and (5.9) into
µ = f(φ) γt, (5.10)
σ² = g(φ) γt, (5.11)
with f(φ) ≡ ⟨ln ∆ρ⟩/γt_c and g(φ) ≡ (⟨ln²∆ρ⟩ − ⟨ln ∆ρ⟩²)/γt_c depending only on φ. Note that f(φ) and g(φ) are crucial quantities. Since the time dependency is known, they contain all the information about the asymptotics of the stretching laws in suspensions. The multiplicative stretching model not only explains the origin of the log-normal distributions of elongations measured experimentally, but also the exponential increase of the mean elongation ⟨ρ⟩ and variance σ²ρ shown in figure 7. Indeed, the mean and variance of the (log-normal) distribution of ρ can be deduced from the mean and the variance of the (normal) distribution of ln ρ following
⟨ρ⟩ = e^((f + g/2) γt), (5.12)
and
σ²ρ = (e^(g γt) − 1) e^((2f + g) γt) ≈ e^(2(f + g) γt), (5.13)
the last simplification in σ²ρ becoming valid after a few t_c. Furthermore, the particulate volume fraction dependence of f(φ) and g(φ) can be computed from the persistence time t_c and the distribution of local shear rates, using equations (5.10) and (5.11) together with equations (5.8) and (5.9). In the experimental range 20% < φ < 55%, this yields
f(φ) ≈ 0.104 + 0.298 φ, (5.14)
and
g(φ) ≈ −0.069 + 0.810 φ, (5.15)
with the structure constant α, set once and for all φ, to 0.3 for computing µ, and to 3.9 for computing σ 2 . These rates f and g both increase with φ, in agreement with the experimental trends. Note that this dependence on the volume fraction is non-trivial, since f and g result from the product of γloc and t c , which have opposite trends with φ: the former increases whereas the latter decreases with increasing φ.
The predictions of the multiplicative stretching model are compared to the experimental stretching laws obtained by the Diffusive Strip Method in figure 12. The agreement is good for all volume fractions and all strains, which suggests that the multiplicative stretching model presented above captures the relevant mechanisms at the origin of the stretching laws.
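The multiplicative mechanism can also be checked with a toy Monte Carlo calculation, in which the local shear rate of each cycle is drawn from a broad distribution. The log-normal shape used below is only a convenient surrogate for the measured P(γloc/γ), so the numbers are indicative; the point is that the mean and the variance of ln ρ grow linearly with strain, as in equations (5.10)-(5.11):

```python
import numpy as np

rng = np.random.default_rng(1)
gamma_dot, tc = 1.0, 0.25                    # gamma*tc = 0.25, of the order of figure 11d
n_seg, n_cycles = 50000, 80                  # 80 cycles -> total strain gamma*t = 20

# surrogate distribution of local shear rates (broad, mean amplification ~1.6)
gloc = 1.6 * gamma_dot * rng.lognormal(mean=-0.25, sigma=0.7, size=(n_seg, n_cycles))

dlnrho = 0.5 * np.log(1 + (gloc * tc)**2 / 2)     # elementary stretch, equation (5.5)
lnrho = dlnrho.cumsum(axis=1)                     # cumulated log-elongation, equation (5.6)

strain = gamma_dot * tc * np.arange(1, n_cycles + 1)
print("d<ln rho>/d(strain)     =", np.polyfit(strain, lnrho.mean(axis=0), 1)[0])
print("d var(ln rho)/d(strain) =", np.polyfit(strain, lnrho.var(axis=0), 1)[0])
# both grow linearly with strain: ln(rho) is Gaussian, rho is log-normal, and
# <rho> and its variance grow exponentially with strain, cf. equations (5.12)-(5.13)
```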
Comments on the stretching process
Stretching of material elements in nature, whether they are passive, as in the present case, or have internal restoring forces, like polymers [START_REF] Shaqfeh | Polymer stretch in dilute fixed beds of fibres or spheres[END_REF][START_REF] Afonso | Nonlinear elastic polymers in random flow[END_REF], may have different origins. The stochastic models describing them usually involve a net drift and a random noise term. The relative amplitudes of these two contributions are, in our analysis, given by f(φ) and g(φ), respectively (see equations (5.10) and (5.11)). The first term sets the growth of ⟨ln ρ⟩, while the second sets that of ⟨ln²ρ⟩ − ⟨ln ρ⟩².
Figure 12. Comparison between the experimental stretching (extracted from figure 8) and the multiplicative stretching model. (a-b) Mean elongation ⟨ρ⟩ and standard deviation σρ. The values from the advective strip method are plotted versus those predicted by the multiplicative stretching model. (c) Comparison between the exponential rates of the mean elongation κρ obtained from the advective strip method (see figure 8a) and the model prediction f + g/2 (5.12). (d) Comparison between the exponential rates of the variance of the elongation 2κσρ (see figure 8b) and the model prediction 2(f + g) (5.13).
At a microscopic level, the growth of a given material line depends on its orientation relative to that of the local velocity gradient. The line length l(t) may increase or decrease depending on whether it is aligned with a diverging or a compressive region of the flow. For instance, in a flow corresponding to the pure Brownian motion limit [START_REF] Cocke | Turbulent Hydrodynamic Line-stretching: The Random Walk Limit[END_REF], for which (1/ρ) dρ/dt = B(t) with B(t) a zero-mean, delta-correlated noise, i.e. ⟨B(t)⟩ = 0 and ⟨B(t′)B(t″)⟩ = (1/τ₀) δ(t′ − t″), these two contributions are balanced and the net line growth d ln ρ(t)/dt = (1/l) dl/dt is, on averaging over all directions, identically zero. In that case, representative of t_c → 0, the material lines only grow through the contribution of the fluctuations of B(t), which results in d⟨ln ρ⟩/dt = 0 and ⟨(ln ρ)²⟩ ∼ 2t/τ₀: the logarithm of the elongation diffuses.
In particulate suspensions, the situation is different since, although the direction of the stretch indeed changes at random, it has a finite persistence time. In that case, it has been shown that material lines tend to preferentially align in the direction of elongations (see [START_REF] Cocke | Turbulent Hydrodynamic Line Stretching: Consequences of Isotropy[END_REF][START_REF] Orszag | Comments on 'Turbulent Hydrodynamic Line Stretching: Consequences of Isotropy[END_REF][START_REF] Girimaji | Material-element Deformation in Isotropic Turbulence[END_REF][START_REF] Duplat | Persistence of Material Element Deformation in Isotropic Flows and Growth Rate of Lines and Surfaces[END_REF] and also equation 5.5). Thus, over an observation period larger than the (non-zero) correlation time t_c, we expect ⟨ln ρ⟩ to grow: as observed in figure 13 (see in particular the shift of the distributions in figure 13(b)), on average, the logarithm of the elongations ln ρ increases with time. Material lines in particulate suspensions thus grow from the contribution of both a drift and a noise. The stretching process thus corresponds to a noisy multiplicative sequence of correlated motions, like the random Sine Flow [START_REF] Meunier | The diffusive strip method for scalar mixing in two dimensions[END_REF], or porous media flows (Le Borgne 2015). Porous media and sheared particulate suspensions have similar exponential stretching laws. This is true in 3D systems, as in both cases the fluid trajectories are chaotic. Note however that for 2D systems the implications of steadiness change the picture qualitatively. In a 2D porous medium, the flow is steady and there are only two degrees of freedom: the flow is thus not chaotic. The elongation of material lines in 2D synthetic porous media has been shown to grow algebraically rather than exponentially (Le [START_REF] Borgne | The lamellar description of mixing in porous media[END_REF]. Conversely, in 2D sheared suspensions, the time dependence of the flow allows the system to be chaotic (Metzger 2013). One therefore expects to observe exponential stretching laws in sheared particulate suspensions also in purely 2D configurations.
Further remarks
We would like to point out certain limitations of the present study. First, the present findings and their analysis are restricted to the particulate volume fraction range 20% ≤ φ ≤ 55%, for which material lines in the suspending liquid stretch exponentially with strain. This is not necessarily the case outside of this range. In particular, as φ → 0, this exponential trend must cross over to linear, since the elongation of material lines in a simple shear is linear with strain. We however anticipate that the exponential trend could hold down to fairly low volume fractions but only emerge after increasingly large strains, since the velocity correlation time in the dilute limit should follow τ ∼ (γφ)⁻¹ and diverge at low φ. Further investigations are needed to characterize this dilute regime (φ < 20%). Second, the PIV measurements performed here are two-dimensional and provide the fluid velocity projected in the (xy) plane only. They therefore neglect part of the stretching of the material lines, namely that involving deformations in the vorticity direction (z). However, we believe that they resolve the stretching mechanism and most of its magnitude for the following reasons: i) these measurements resolve the fluid displacements in the gradient direction (y), which is the only direction for which displacements couple with the main shear flow to produce an enhanced stretching. The fluctuations in the vorticity direction are thus expected to produce less stretching than those occurring in the gradient direction. ii) Particles in a shear flow rotate mainly about the vorticity axis, thereby inducing fluid disturbances mostly in the velocity-gradient plane, which we consider. Here again, the effects of the velocity disturbances induced by the particle rotation should be smaller in the vorticity direction than those occurring in the velocity-gradient plane. iii) More quantitatively, the stretching rates f + g/2 predicted by the present model based on 2D data are in good agreement with the largest Lyapunov exponents obtained from 3D Stokesian simulations, see figure 14. From the above considerations, it is likely that the mechanisms at the origin of the scalar dispersion, stretching and subsequent mixing are well characterized by the present measurements, even though those are limited to the information contained in the xy plane. Third, as already mentioned in section 3, the stretching of material lines is exponential at every scale, but the stretching of a material blob with thickness s₀ is expected to follow that of material lines only if its thickness is smaller than the correlation scale of the fluid motion, which is of order d (in the other case, the blob is first essentially stretched by the macroscopic shear γ until s ≈ d). In the following, we will therefore only consider the relevant case s₀ ≲ d.
The latter considerations have important consequences for the estimation of the blob thickness s, hence for the mixing time that we will address in the next section. For an arbitrary elongation w/w₀ in the vorticity direction (z), mass conservation gives s₀l₀w₀ = s(t)l(t)w(t). However, in light of the above discussion, the flow is assumed to be two-dimensional, with w/w₀ ≪ l/l₀. Mass conservation thus results in
s₀l₀ = s(t)l(t). (5.17)
Using direct image analysis, we have checked that this is experimentally verified. A blob with initial surface s₀l₀ being converted into a strip with length l(t) and thickness s(t)
indeed obeys, before it starts mixing, equation (5.17), suggesting that the flow is indeed area-preserving.
Implications for mixing
In such an area-preserving flow, the thickness s(t) of a distorted blob decreases in inverse proportion to its length l(t) according to equation (5.17). As recalled in the introduction, the mixing time for a given blob portion of thickness s is reached when its compression rate ṡ/s is balanced by its rate of diffusive broadening D/s². At that time, called the mixing time, the scalar concentration carried by that portion of the blob starts to decay significantly, i.e., to mix. Since in particulate suspensions ρ = l(t)/l₀ = e^(κγt), the mixing time reads t_mix ≈ γ⁻¹ ln(κPe)/(2κ). We also found that the logarithm of the elongations of an ensemble of such material lines is normally distributed, with a mean and a variance growing linearly with time following µ = ⟨ln ρ⟩ = f(φ)γt and σ² = g(φ)γt (see equations (5.10) and (5.11), respectively). These results are illustrated in figure 15a. Since, similarly to the logarithm of the elongations, the stretching rates κγ = ln ρ/t are normally distributed, the median mixing time, obtained for the mean stretching rate, i.e. for κγ = ⟨ln ρ⟩/t = µ/t = f(φ)γ, is
t_mix^med ≈ [1/(2f(φ)γ)] ln(f(φ)Pe). (6.1)
Considering a blob distorted in such a way that it samples all the possible elongations in the global statistics, the above estimate provides the time at which half of the blob has reached its mixing time. The logarithmic dependence of the mixing time on the Péclet number is different from that obtained in a simple shear flow (without particles), for which ρ ≈ γt yields t_mix ≈ γ⁻¹Pe^(1/3). Introducing particles in a viscous fluid therefore becomes more and more efficient at reducing the mixing time as the Péclet number increases. In the present study, the Péclet number is Pe ∼ 10⁶. The median mixing time for φ = 35% is thus t_mix^med ≈ 30/γ, which has to be compared with t_mix ≈ 100/γ in a pure shear flow. Note that varying the volume fraction from 20% to 55% increases f(φ) only by a typical factor of 2, which decreases the median mixing time by about the same moderate factor.
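These two numbers follow directly from equation (6.1) and the simple-shear estimate; a minimal evaluation (using the f(φ) law of equation (5.14) and Pe = 10⁶) is:

```python
import numpy as np

Pe, phi = 1e6, 0.35
f = 0.104 + 0.298 * phi                 # equation (5.14)

strain_med = np.log(f * Pe) / (2 * f)   # equation (6.1), expressed as a strain gamma*t
strain_pure = Pe**(1 / 3)               # simple shear estimate, ~Pe^(1/3)

print(f"suspension : gamma*t_mix ~ {strain_med:.0f}")   # ~30
print(f"pure fluid : gamma*t_mix ~ {strain_pure:.0f}")  # ~100
```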
In practical situations, mixing half of the scalar may not be the relevant question, precisely because in particulate suspensions elongations are, as seen in figure 5, broadly distributed. So are mixing times. To address this point, we estimate, for the same conditions as previously, the mixing times for the portions of the blob that undergo the largest and the lowest stretching rates respectively, i.e. the mixing times corresponding to both tails of the distribution (highlighted in figure 15a). The 3% most strongly stretched portions of the blob are bounded by ln ρ = µ + 2σ. The expression ṡ/s = D/s² results in 2f(φ)γt + 4√(g(φ)γt) = ln[(f(φ) + √(g(φ)/γt))Pe], which yields the mixing time t_mix^3% ≈ 14/γ. On the other end of the distribution, the less stretched portions of the blob, bounded by ln ρ = µ − 2σ, reach their mixing time at t_mix^97% ≈ 64/γ, later than if they were sheared in a pure fluid. In figure 15b, the median (blue line), the most stretched t_mix^3% (green line), and the less stretched t_mix^97% (red line) dimensionless mixing times are plotted as a function of the Péclet number. This shows that if the concern is to mix essentially all the scalar, large Péclet numbers (≳ 10⁵) are required before mixing in a suspension becomes more efficient than in a pure fluid. Persistent poorly stretched regions are detrimental. The relative width σ/µ of the stretching rate distribution decreases in time like t^(−1/2), but this only mildly decreases the spreading of the mixing times as Pe increases, since t_mix ∝ ln Pe. For instance, at Pe = 10²⁰, the mixing times remain fairly distributed, with t_mix^97%/t_mix^3% > 2. Finally, the results obtained on the stretching laws must be related to the overall dispersion of the blob. In a random flow, line stretching and dispersion are two different things: because the extent of the area occupied by the blob grows more slowly than the area where the scalar constituting the blob is dispersed, the blob will at some point unavoidably reconnect and merge by overlapping onto itself [START_REF] Duplat | Mixing by random stirring in confined mixtures[END_REF]. Let us see how: after the mixing time, a scalar blob with length growing like l(t) = l₀e^(γt) has a transverse concentration profile whose width is confined to the Batchelor scale √(Dt). The area A occupied by the scalar is thus A = √(Dt) l₀e^(γt), growing exponentially in time. Now, the spatial support of the blob undergoes a dispersion induced by the particle effective dispersion coefficient D_eff ∼ γd² [START_REF] Eckstein | Self-diffusion of particles in shear flow of a suspension[END_REF]. The total area explored by the blob of dye, within which the blob folds, is typically (see also [START_REF] Taylor | Dispersion of soluble matter in solvent flowing slowly through a tube[END_REF] in a related, but different context) Σ ∼ (l₀ + √(D_eff t)) × (s₀ + √(D_eff t)) γt ∼ d²(γt)², growing algebraically in time. Because an exponential will always beat a power law, there will necessarily be a time at which the area occupied by the scalar overcomes that visited by the blob (i.e. Σ/A < 1), and from that instant of time, overlaps of the folded scalar filaments will be unavoidable. Such an event is illustrated in figure 16. These overlaps will locally delay the mixing process and therefore affect the whole route of the mixture towards homogenization.
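The two tail estimates quoted above are the solutions of the transcendental balance written for ln ρ = µ ± 2σ; they can be obtained numerically, for instance as follows (a sketch using scipy root bracketing; the lower bracket of the second branch is taken beyond the initial stage where the stretching rate of the least stretched portions is still negative):

```python
import numpy as np
from scipy.optimize import brentq

Pe, phi = 1e6, 0.35
f = 0.104 + 0.298 * phi        # equation (5.14)
g = -0.069 + 0.810 * phi       # equation (5.15)

def balance(x, sign):
    """x = gamma*t; compression/diffusion balance for ln(rho) = f*x + sign*2*sqrt(g*x)."""
    rate = f + sign * np.sqrt(g / x)                       # d(ln rho)/d(gamma*t)
    return 2 * f * x + sign * 4 * np.sqrt(g * x) - np.log(rate * Pe)

t3 = brentq(balance, 1.0, 100.0, args=(+1,))    # most stretched 3% of the blob
t97 = brentq(balance, 10.0, 500.0, args=(-1,))  # least stretched 3% of the blob
print(f"gamma*t_mix (3%)  ~ {t3:.0f}")   # ~14
print(f"gamma*t_mix (97%) ~ {t97:.0f}")  # ~64
```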
This aspect, and more generally all aspects regarding the concentration content of the mixture and its evolution, are left for future research.
Conclusions
Motivated by the need to understand on a firm basis the mixing properties of particulate flows, we have provided a complete characterization of the kinematics of stretching and of the resulting elongations of material lines in non-Brownian particulate suspensions under a simple macroscopic shear. Our observations rely on high-resolution PIV measurements of the interstitial fluid velocity field, and our findings are as follows:
i) Following the Diffusive Strip Method of Meunier 2010, we used the experimentally measured velocity fields to numerically advect passive segments in order to reconstruct the stretching histories of fluid material lines. In agreement with previous theoretical predictions and simulation results, we observe that adding particles in a shear flow changes the very nature of the stretching laws from linear to exponential in strain. The growth rate of the mean elongation is found to closely agree with the largest Lyapunov exponent obtained from 3D numerical simulations [START_REF] Drazer | Deterministic and stochastic behaviour of non-Brownian spheres in sheared suspensions[END_REF][START_REF] Dasan | Stress fluctuations in sheared Stokesian suspensions[END_REF]. Besides the mean, our analysis also provides the full statistics of the material line elongation: the variances of the elongations also grow exponentially in strain and the distributions of elongations converge toward log-normals. These statistics of elongation were characterized for a large range of volume fractions, 20% ≤ φ ≤ 55%.
ii) Using the same velocity fields, we determined the distribution of the local shear rate intensities and their persistence time. From these, we have shown how the fluid material lines undergo a multiplicative stretching process consisting of a noisy multiplicative sequence of correlated motions. We also discussed the important role of the finite correlation time of the velocity field. The model quantitatively predicts the evolution of the mean and the variance of the elongations of the fluid material lines as well as their evolution towards a log-normal distribution.
iii) We have discussed the importance of this characterization of the flow kinematics for understanding how mixing proceeds in sheared particulate suspensions. The exponential stretching results in a mixing time increasing logarithmically with the Péclet number. Moreover, the broad distribution of stretching rates implies a broad distribution of mixing times. The stochastic nature of the stretching process thus allows stretching rates that are smaller than in a pure shear flow. However, our analysis shows that the occurrence of such events becomes negligible at large Péclet number (≳ 10⁵), as mixing then occurs at larger deformations.
The present study opens the way for a complete description of the mixing process occurring in sheared particulate suspension. In particular, it allows the prediction of the evolution of the concentration distribution P (C, t) [START_REF] Duplat | A nonsequential turbulent mixing process[END_REF]. A quantitative verification of these predictions requires a specific experimental device that resolves the Batchelor scale s(t mix ) which corresponds to the transverse dimension of the filaments at the time when diffusion significantly modifies the concentration levels. Such challenging measurements will be addressed in future studies.
We would like to thank P. Meunier for letting us use his DSM code, S. Dagois-Bohy, S. Gallier, E. DeGiuli, M. Wyart and O. Pouliquen for thoughtful discussions, and P. Cervetti, S. Noel and S. Martinez for helping us build the experimental set-up. This work was supported by ANR JCJC SIMI 9 and by the Labex MEC ANR-10-LABX-0092 and ANR-11-IDEX-0001-02.
Figure 1. Some dye, initially confined to a small blob in a flowing particulate suspension (a), mixes with the rest of the suspension (b) by diffusing while the blob is stretched in the complex micro-flow generated by the particles.
Figure 2. Schematics of the set-up.
Figure 3. a) Flow streamlines in the bulk of a suspension. b) Slice of a sheared suspension illuminated by a laser sheet. The small fluorescent tracers seeding the suspending fluid appear as bright whereas the particle intersections with the laser sheet appear as dark, see also Movie 1. c) Magnified view of the suspending liquid velocity field obtained from the PIV (the velocity is not computed in the particles).
Figure 4. a) Schematics of the set-up used to measure the molecular diffusivity D of the dye (rhodamine 6G) in the suspending liquid (Triton X-100 + ZnCl2 + H2O). b) Diffusive thickening of the bleached line at t = 0, 4800, 21600 and 64800 s (the image width is 5 mm). c) Concentration profiles at successive times (0 s < t < 64800 s). d) Increase of the spatial variance of the concentration χ²(t) − χ²(0) versus time. Its fit to 2Dt yields D = 1.44 ± 0.2 × 10⁻¹³ m² s⁻¹.
Figure 5. Comparison of the stretching processes of a blob of dye sheared at high Péclet (∼ 10⁶) and low Reynolds numbers (∼ 10⁻⁴), in a pure fluid (top), and in a particulate suspension with volume fraction φ = 35% (bottom). The dye appears as dark, and the beads appear as bright, see also Movie 2.
Figure 6. Example of stretching for two material lines numerically advected using the experimental fluid velocity field, see also Movie 3.
Figure 7. Stretching laws measured for a suspension with volume fraction φ = 35%. a) Mean value ⟨ρ⟩ and standard deviation σρ = √(⟨ρ²⟩ − ⟨ρ⟩²) of the distribution of elongations versus macroscopic strain in a semilogarithmic representation. The dashed line corresponds to the mean elongation in a pure fluid, ⟨ρ_lin⟩(t) = [1 + γ²t²/2]^(1/2). b) Distribution of the normalized elongations ρ/⟨ρ⟩ at different strains. The dashed curves are log-normal distributions built from the mean value ⟨ρ⟩ and standard deviation σρ of the experimental elongation distributions.
Figure 8. Mean elongation ⟨ρ⟩ (a) and standard deviation σρ (b) versus strain for increasing volume fractions ranging from 20 to 55%. Insets: growth rate κ of the exponential fit e^(κγt) to the main curves, as a function of φ. The lines show κρ = 0.09 + 0.74 φ (a), and κσρ = 0.12 + 1.03 φ (b).
Figure 9.
Figure 10. a) Typical local shear rate map for a suspension with volume fraction φ = 35%. b) Experimental distributions of normalized local shear rate P(γloc/γ) for different volume fractions (the solid line is not a fit but the experimental data, sparse markers are used for sake of clarity). c) ⟨γloc²⟩^(1/2)/γ versus φc − φ. The best fit ⟨γloc²⟩^(1/2)/γ ∼ (φc − φ)^(−β), with φc = 0.58, yields β = 0.601 (see text). Inset: mean normalized local shear rate ⟨γloc⟩/γ versus φ. The line is the best fit by A/(φc − φ)^δ.
Figure 11. (a-b) Lagrangian velocity transverse to the flow of a tracer passively advected by the suspending liquid, V, as a function of the strain γt. a) φ = 20% and b) φ = 50%. c) Average Lagrangian velocity auto-correlation function ⟨V V⟩ obtained for different volume fractions versus strain. The velocity auto-correlation functions fit well e^(−γt/γτ), where τ denotes the correlation time. d) Correlation strain γτ versus φ and corresponding linear fit γτ = 0.62 − 1.08 φ.
Figure 13. a) Mean logarithm of the material line elongations, ⟨ln ρ⟩, versus γt for a suspension of volume fraction φ = 35%. b) PDF of the logarithm of the material line elongations P(ln ρ) at successive times.
Figure 14. Comparison between the stretching rates obtained in the present study and the largest Lyapunov exponent obtained from 3D Stokesian dynamics simulations [START_REF] Dasan | Stress fluctuations in sheared Stokesian suspensions[END_REF][START_REF] Drazer | Deterministic and stochastic behaviour of non-Brownian spheres in sheared suspensions[END_REF].
Figure 15. a) Evolution of the distribution P(ln ρ) of the logarithm of the material line elongations in a particulate suspension. The mean of the distribution µ ∼ t and its standard deviation σ ∼ √t. b) Dimensionless mixing times γt_mix in a suspension (φ = 35%) as a function of Pe. The median (blue line), the most stretched t_mix^3% (green line), and the less stretched t_mix^97% (red line) dimensionless mixing times can be compared to the dimensionless mixing time ∼ Pe^(1/3) expected in a pure fluid (dashed line).
Figure 16. Picture illustrating the complexity of folding of the stretched blob of dye and the potential interaction (merging) of nearby filaments.
Sylvie Cointe
Éric Rhéaume
Catherine Martel
Olivier Blanc-Brude
Evemie Dubé
Florence Sabatier
Francoise Dignat-George
Jean-Claude Tardif
Arnaud Bonnefoy
Françoise Dignat-George
Thrombospondin-1-Derived Peptide RFYVVMWK Improves the Adhesive Phenotype of CD34 + Cells from Atherosclerotic Patients with Type 2 Diabetes
Keywords: Thrombospondin-1 (TSP-1), Atherosclerosis, Type 2 diabetes (T2D), CD34, CD47
INTRODUCTION
CD34 is a marker of hematopoietic stem cells (HSCs) that is also expressed on several non-HSCs including endothelial and epithelial progenitors, embryonic fibroblasts, multipotent mesenchymal stromal cells (MSCs), and interstitial dendritic cells. The plasticity of CD34 + cells and their paracrine-stimulating properties on the endothelium during hypoxia make these cells potential candidates for cell transplant therapies combating ischemic diseases such as cardiac failure 1 .
Circulating progenitor cells are known to contribute to neoangiogenesis during numerous processes including healing, lower limb ischemia, vascular graft endothelialization, atherosclerosis, post-myocardial infarction, lymphoid organ neovascularization, and tumoral growth 2 . Clinical trials have demonstrated that intracoronary administration of bone marrow-derived mononuclear cells (BM-MNCs) from autologous progenitor cells or peripheral blood (PB) CD34 + cells mobilized by granulocyte colony-stimulating factor improves the left ventricular ejection fraction and reduces the size of the infarcted area [START_REF] Caballero | Ischemic vascular damage can be repaired by healthy, but not diabetic, endothelial progenitor cells[END_REF][START_REF] Blanc-Brude | Abstract 895: CD47 activation by thrombospondin peptides enhances bone marrow mononuclear cell adhesion, recruitment during thrombosis, endothelial differentiation and stimulates pro-angiogenic cell therapy[END_REF] . However, the benefit of cell therapy is dampened by the negative impact of cardiovascular risk factors, such as atherosclerosis, obesity, and type 2 diabetes (T2D), on the number and function of progenitor cells, thereby jeopardizing their use for autologous treatment 5,6 . Interestingly, a reduced progenitor cell adhesion capacity has been reported in T2D 5,7 . Moreover, strategies aiming at preconditioning CD34 + cells or endothelial progenitor cells prior to injection in animal models of ischemia were reported to improve revascularization in vivo, together with increased adhesion capacity of cells in in vitro models 8 .
The ability of CD34 + cells to adhere and engraft onto damaged vessel walls is crucial to initiate neovascularization 9 . During this process, activated platelets are instrumental in targeting CD34 + cell recruitment to injured vessels via stromal cell-derived factor-1 (SDF-1a) secretion and chemotaxism 10 . Platelets also stimulate the "homing" of CD34 + cells, via CD62-CD162 interaction 11 and their differentiation into mature endothelial cells 8 . One of the most abundant proteins secreted by activated platelets is thrombospondin-1 (TSP-1), which is a multifunctional matricellular glycoprotein bearing proatherogenic, prothrombotic, as well as both pro-and antiangiogenic properties 12 . The interaction of TSP-1, through its COOH terminal RFYVVMWK sequence, with the transmembrane protein CD47 [integrin-associated protein (IAP)] occurs following a conformational reorganization of the C-terminal domain of TSP-1 13 .This interaction positively modulates the function of several integrins including CD51/CD61, CD41/CD61, and CD49b/CD29, thereby modulating cellular functions including platelet activation and adhesion, leukocyte adhesion, migration, and phagocytosis through heterotrimeric G i protein signaling [14][15][16][17] .
We have previously observed that TSP-1-deficient mice exhibit a significant drop in the vessel wall recruitment of BM-MNCs in a FeCl 3 -induced intravascular thrombosis mouse model 18 . We also found that ex vivo RFYVVMWK preconditioning of mouse BM-MNCs stimulates their recruitment to sites of intravascular thrombosis induced by FeCl 3 19 . Indeed, RFYVVMWK increased BM-MNC-to-vessel wall interactions and decreased their rolling speeds to the damaged vessel wall, leading to a 12-fold increase in permanent cell engraftment 19 .
The goal of the present study was to analyze the proadhesive effects of RFYVVMWK preconditioning on CD34 + progenitor cells isolated from PB of atherosclerotic patients with T2D. We first explored their "proengraftment" phenotype through the measurement of a panel of biomarkers including cell adhesion receptors, platelet/CD34 + conjugates, and apoptotic markers. We next investigated whether this preconditioning could improve their capacity to adhere to stimulated endothelial cells and subendothelial components.
MATERIAL AND METHODS
Patients
Blood samples were drawn from participants after obtaining informed consent as part of a protocol approved by the ethics committee of the Montreal Heart Institute and in accordance with the recommendations of the Helsinki Declaration. A total of 40 adult males (>18 years old) with stable coronary artery disease or stable angina documented by angiography, all treated with antiplatelet agents and statins, were included in the study. Among these patients, 20 had T2D (T2D group) and 20 were nondiabetic (non-T2D group). The patients were predominantly hypertensive (n = 27), dyslipidemic (n = 38), overweight (n = 25), and with a smoking history (n = 27). Diabetic patients received biguanide (metformin) monotherapy (n = 10), biguanide + sulfonylureas (glyburide or glimepiride) bitherapy (n = 6), biguanide + sulfonylureas (glyburide) + DPP-4 inhibitor (gliptin) tritherapy (n = 2), or no medication (diabetes was controlled by diet). Exclusion criteria were acute coronary syndrome (ACS) or stroke within the past 6 months, treatment with insulin, treatment with peroxisome proliferator-activated receptors (PPARs; pioglitazone and rosiglitazone), extra cardiac inflammatory syndromes, surgery within the last 8 weeks, kidney or liver failure, use of systemic corticosteroids, cancer in the last 5 years, chronic anticoagulation, heart failure [NYHA class 3 or 4 and/or left ventricular ejection fraction (LVEF) <40%], and hemoglobin <100 g/L. Six healthy adult males [healthy donors (HD)] who showed no cardiovascular disease or known T2D were also recruited if they had not taken any medication during the past 15 days before blood sampling. All samples were analyzed in a single-blind manner with respect to the group (T2D or non-T2D).
Isolation of CD34 + and CD34 -Peripheral Blood Mononuclear Cells (PBMCs)
One hundred milliliters of blood was collected by venipuncture into syringes containing ethylenediaminetetraacetic acid (EDTA; 1.8 mg/ml of blood) (Sigma-Aldrich, St. Louis, MO, USA), dispensed into 50-ml conical tubes, and centrifuged at 400 ´ g for 15 min at 20°C, to remove a maximum quantity of platelets while minimizing PBMC loss. EDTA was used throughout the isolation process to avoid platelet binding to CD34 + cells. The platelet-rich plasma (PRP; upper phase) was removed, and the remaining blood components were diluted 1:1 in phosphate-buffered saline (PBS) containing 2 mM EDTA and 0.5% fetal bovine serum (FBS) (Sigma-Aldrich) (PBS/EDTA/FBS). Ficoll at a density of 1.077 g/ml (Amersham Biosciences, Little Chalfont, UK) was added to samples in a ratio of 1:3 and centrifuged at 400 ´ g for 40 min at 20°C (without brakes). The resulting mononuclear cell ring was collected at the Ficoll/plasma interface. Cells were then washed twice with PBS/EDTA/FBS and incubated for 10 min at 4°C with 100 µl of FcR blocking reagent (Miltenyi Biotec, Bergisch Gladbach, Germany) to remove FcR-specific binding antibodies. Cells were then incubated for 30 min at 4°C with 100 µl of magnetic beads bearing anti-CD34 monoclonal antibodies (Microbead; Miltenyi Biotec). After washing with PBS/EDTA/FBS, cells were filtered (30-µm nylon cell strainer; Miltenyi Biotec) to remove cell aggregates or other large contaminants and loaded on a MACS magnetic column (Miltenyi Biotec). Unbound CD34 -cells were collected, while CD34 + PBMCs were retained on the column. After three washes with PBS/ EDTA/FBS, CD34 + cells were recovered in 1 ml of PBS/ EDTA/FBS. To increase the purity of CD34 + cells, this step was repeated once on a new column with the retained fraction. Finally, cell viability was measured with trypan blue (Sigma-Aldrich).
Cell Preconditioning With TSP-1-Derived Peptides
CD34 + and CD34 -cells were diluted either at a concentration of 1,000 cells/µl for adhesion assays or at a concentration of 4,000 cells/µl for flow cytometry assays. Cells were then preincubated with either 30 µM of the CD47 interacting peptide RFYVVMWK (amino acid sequence: Arg-Phe-Tyr-Val-Val-Met-Trp-Lys) (4N1-1; Bachem, Bubendorf, Switzerland), 30 µM of the RFYVVM truncated peptide devoid of CD47-binding activity (Arg-Phe-Tyr-Val-Val-Met) (4N1-2; Bachem), or saline (vehicle) for 30 min at 37°C.
Phenotyping of Preconditioned Cells
The phenotype of preconditioned cells (with TSP-1 peptides or the vehicle, as previously described) was analyzed by flow cytometry using fluorescent-labeled antibodies directed against biomarkers grouped in four panels: panel 1 with CD47 (clone B6H12; R&D Systems, Minneapolis, MN, USA) and TSP-1 (clone A4.1; Santa Cruz Biotechnology, Santa Cruz, CA, USA); panel 2 with the adhesion molecules CD29 (clone TS2/16; eBioscience, San Diego, CA, USA), CD51/CD61 (clone 23C6, eBioscience), and CD162 (clone KPL-1; BD Biosciences, Franklin Lakes, NJ, USA); panel 3 with CD62P (clone P.seK02.22; BD Biosciences); and panel 4 with the apoptosis and cell death markers phosphatidylserine (annexin V labeling), 4¢,6¢-diamidino-2-phenylindole (DAPI), and propidium iodide (PI) (BD Biosciences). Each panel also included antibodies against CD34 (clone 581; BD Biosciences), CD42b (platelet marker; clone HIP1; BioLegend, San Diego, CA, USA), and DAPI to discriminate living cells.
Cell suspension (4 × 10³ cells/µl) was incubated with each antibody panel (previously centrifuged at 2 × 10³ × g for 2 min to remove aggregates of antibodies) for 30 min at room temperature in the dark. Immunophenotyping of CD34 + cells was performed on an LSR II flow cytometer (BD Biosciences) and analyzed with Kaluza software (Beckman Coulter, Miami, FL, USA).
Detection of Integrin Polarization and Platelet/CD34 + Conjugates
CD29 and CD51/CD61 distribution on cell surfaces and platelet (CD42b + )/CD34 + cell conjugates was visualized by confocal microscopy (Zeiss Observer Z1 equipped with a Yokogawa CSU-X1 confocal head QuantEM 512SC camera; Intelligent Imaging Innovations, Denver, CO, USA).
Cell Adhesion Onto Collagen-Vitronectin Matrices
Ninety-six-well plates (Sarstedt, Nümbrecht, Germany) were coated overnight at 4°C in PBS containing a mixture of 0.3 µg/ml vitronectin (Sigma-Aldrich) and 1 µg/ml type I collagen (Sigma-Aldrich). The wells were then saturated with 0.1% gelatin [American Type Culture Collection (ATCC), Manassas, VA, USA] for 1 h at room temperature and washed with PBS. Twenty thousand cells in 200 µl of endothelial basal medium-2 (EBM-2; Lonza, Walkersville, MD, USA) were pretreated with either the vehicle, RFYVVMWK, or RFYVVM for 5 min at 150 ´ g at room temperature to quickly spin down the cells onto the matrix. Plates were then incubated for 30 min at 37°C and gently washed with EBM-2. Finally, 100 µl of 2% paraformaldehyde (PFA) and 100 µl of DAPI were sequentially added. Nuclei were counted using an inverted epifluorescence microscope (Axiovert 200M, camera AxioCam MRm; Zeiss, Stockholm, Sweden) coupled with the image analysis software ImageJ [National Institutes of Health (NIH), Bethesda, MD, USA]. Results were expressed as the number of cells adhered per 20 ´ 10 3 cells originally loaded per well.
Cell Adhesion Onto HUVEC Monolayers
Human umbilical vein endothelial cells (HUVECs; PromoCell, Heidelberg, Germany) between passage 4 and 8 were seeded into 96-well plates for 48 h at a density of 25 ´ 10 3 cells/well. After 36 h at 37°C, 5% CO 2 concentration, and 95% relative humidity, HUVECs were stimulated for 18 h with 1 ng/ml tumor necrosis factor-a (TNF-a; R&D Systems) or 10 ng/ml interleukin-1b (IL-1b; Sigma-Aldrich). Cells were then washed twice with Hank's balanced salt solution (HBSS). To differentiate PBMCs from HUVECs during cell counting, PBMCs were prelabeled with 0.5 µg/ml calcein-AM (Sigma-Aldrich) in EBM-2 for 1 h at 37°C and then washed and resuspended in EBM-2 before seeding onto HUVEC monolayers. As for matrix adhesion assays, microplates were centrifuged for 5 min at 150 ´ g at room temperature to quickly spin down the cells onto the HUVECs and then incubated for 1 h at 37°C. After two washes with HBSS, the cells were fixed with 2% PFA. Calcein-AM-labeled PBMCs were then counted by fluorescence microscopy and ImageJ software. Results were expressed as the number of adherent cells per 10 ´ 10 3 cells originally loaded in each well. All experiments were performed in duplicate.
Statistics
Analyses were performed using the GraphPad Prism software v.5.01 (GraphPad Software, San Diego, CA, USA). Data were expressed as mean ± standard error of the mean (SEM). The Kruskal-Wallis nonparametric test was used to compare the three preconditioning treatments (vehicle, RFYVVMWK, and RFYVVM) in cell adhesion assays and the Mann-Whitney nonparametric test to compare biomarkers. Values of p < 0.05 were considered statistically significant.
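For readers who want to reproduce these comparisons outside GraphPad Prism, the same nonparametric tests are available in SciPy; the snippet below is only an illustrative sketch, and the group arrays are placeholders rather than the study data.

```python
from scipy import stats

# Adhesion counts per preconditioning treatment (placeholder values, one entry per well)
vehicle = [52, 61, 58, 66]
rfyvvmwk = [240, 310, 275, 290]
rfyvvm = [55, 63, 60, 70]

# Kruskal-Wallis test across the three preconditioning treatments
h_stat, p_kruskal = stats.kruskal(vehicle, rfyvvmwk, rfyvvm)

# Mann-Whitney test comparing a biomarker between two groups (e.g. T2D vs. non-T2D)
t2d_marker = [0.8, 1.1, 0.9, 1.3]
non_t2d_marker = [1.4, 1.6, 1.2, 1.8]
u_stat, p_mannwhitney = stats.mannwhitneyu(t2d_marker, non_t2d_marker, alternative="two-sided")

print(p_kruskal < 0.05, p_mannwhitney < 0.05)  # significance threshold used in the study
```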
RESULTS
Patient Characteristics
Both T2D and non-T2D had similar demographic characteristics (Table 1). Almost all participants (95%) had dyslipidemia and high cholesterol levels. The majority was also overweight (BMI > 27 kg/m 2 ) with a smoking history (n = 31). Beside a hypoglycemic therapy in T2D, there were no significant differences in drug regimen, with all patients being treated with antiplatelet drugs and statins. As expected, blood glucose (+40%; p < 0.001) and glycated hemoglobin (+19%; p < 0.001) were significantly higher in T2D participants. T2D also had higher triglyceride levels compared to non-T2D participants (+70%; p < 0.002).
Stimulation of Platelet/CD34 + Conjugate Formation by RFYVVMWK
Circulating CD34 + and CD34 -PBMCs were isolated from HD, T2D, and non-T2D blood by immunomagnetic separation and quantified. The purity of the isolated CD34 + cells was 92 ± 4%. No significant difference was found in total PBMCs (167.7 ± 50.2 ´ 10 6 PBMC/100 ml blood in T2D vs. 141.5 ± 32.5 ´ 10 6 in non-T2D and 136. 4 ± 38.8 ´ 10 6 in HD; p = 0.2) and CD34 -cells (77.2 ± 3.1 ´ 10 6 CD34 - cells/100 ml blood in T2D vs. 62.0 ± 2. 3 ´ 10 6 in non-T2D and 66.6 ± 2.5 ´ 10 6 in HD; p = 0.15). However, twice the amount of CD34 + PBMCs were retrieved from the blood of T2D participants compared to non-T2D and HD participants [218.9 ± 124.1 ´ 10 3 cells per 100 ml of blood vs. respectively, 101.6 ± 29.0 ´ 10 3 cells (p < 0.001) and 117.5 ± 49.8 ´ 10 3 (NS)]. The CD34 + /total PBMC ratio was also significantly higher in cell fractions isolated from T2D participants [0.13 ± 0.06% in T2D vs. 0.075 ± 0.03% in non-T2D (p = 0.0011), and 0.1 ± 0.07% (p = 0.06) in HD]. Although a double enrichment process and extensive wash were used during the purification of CD34 + cells, platelets were still detectable in the final cell preparations (average of 23.5 ´ 10 3 platelets/10 3 CD34 + cells). Using flow cytometry, we measured the extent of platelet/CD34 + cell conjugate formation (hereafter referred to as CD42b + /CD34 + conjugates) in samples and the effect of TSP-1 peptide preconditioning. RFYVVM preconditioning had no significant effect on conjugate formation, with 1.4% CD42b + /CD34 + conjugates in T2D and 2% in the non-T2D participants (NS). RFYVVMWK increased the percentage of CD42b + /CD34 + conjugates up to 11% in T2D (p < 0.0001 vs. RFYVVM) and 9% in non-T2D participants (p < 0.0001 vs. RFYVVM). Progenitor cellplatelet conjugate formation following RFYVVMWK treatment was confirmed by assessing the expression of CD42b and CD34 antigens by confocal microscopy (Fig. 1A).
Expression of CD47 and its Ligand TSP-1
CD47 expression was not modified by the RFYVVMWK preconditioning compared with the nonactive RFYVVM control peptide (Fig. 1B). We observed a 30% lower expression on T2D CD34 + cells compared to non-T2D cells (p < 0.01). CD47 expression was higher on CD42b + /CD34 + conjugates compared to CD42b -/CD34 + cells, probably due to the presence of CD47 on platelets (p < 0.05). TSP-1 was barely expressed on CD42b + /CD34 + conjugates and CD42b -/CD34 + preincubated with RFYVVM (Fig. 1C). RFYVVMWK induced a high expression of TSP-1 on both T2D and non-T2D CD34 + cells, in the presence or absence of platelets (all p < 0.001).
Expression of CD62P and CD162
RFYVVMWK induced a significant increase in CD62P on T2D and non-T2D CD42b + /CD34 + conjugates [+146% (p < 0.001) and +129% (p < 0.001) vs. RFYVVM, respectively], and a low increase in CD42b -/CD34 + cells [+26% (p < 0.01) vs. +25% (p < 0.001)] (Fig. 1D). By contrast, CD162 expression remained unchanged (Fig. 1E).
Expression of Adhesion Receptors CD29 and CD51/CD61
RFYVVMWK preconditioning of CD34 + cells significantly increased the expression of CD29 in both T2D and non-T2D participants [74% (p < 0.01) and 42% (p < 0.05) in CD42b + /CD34 + conjugates, respectively] (Fig. 2A). A strong increase in CD51/CD61 expression was also measured in T2D and non-T2D CD34 + cells [+2,715% (p < 0.001) and +3,260% (p < 0.001) in CD42b + /CD34 + conjugates, respectively] (Fig. 2B). Similar results were observed with CD42b -/CD34 + cells. Integrin polarization and clustering, which are indicators of integrin activation state, were also detected in RFYVVMWK stimulated cells by confocal microscopy (Fig. 2C).
Effect of RFYVVMWK on CD34 + Cell Viability
RFYVVMWK induced increased phosphatidylserine exposure (PI -/annexin V + cells) in both non-T2D CD34 + (7.2% vs. 0.15% with RFYVVM, p < 0.001) and T2D CD34 + cells (12.4% vs. 0.04% with RFYVVM, p < 0.001) (Fig. 3A). The percentage of PI + /annexin V + cells in response to RFYVVMWK was also significantly higher compared to RFYVVM but remained negligible (0.25% in non-T2D and 0.17% in T2D, p < 0.001 vs. RFYVVM) (Fig. 3B).
Cell Adhesion to Vitronectin-Collagen Matrix
CD34 + cells preincubated with vehicle (saline) adhered with values reaching 59.9 ± 8 cells/20 × 10³ seeded cells, compared to 18.8 ± 2.9 CD34 - cells (p < 0.0001) (Fig. 4A). RFYVVMWK strongly increased the adhesion of CD34 + and CD34 - cells, respectively, by +368% and +468%. RFYVVM peptide had no significant effect, giving results comparable to vehicle. T2D CD34 + cells showed 67% less basal adherence (30 ± 4 cells) compared to non-T2D cells (90 ± 14 cells, p < 0.0001) (Fig. 4B). RFYVVMWK strongly increased the adhesion of T2D and non-T2D CD34 + cells by +786% (266 ± 54 cells) and +232% (296 ± 31 cells), respectively (p < 0.0001 compared to vehicle-treated cells).
Cell Adhesion to HUVEC Monolayers Stimulated With TNF-α or IL-1β
We next measured the adhesion of CD34 + cells on HUVEC monolayers prestimulated with TNF-a or IL-1b. T2D and non-T2D CD34 + cells equally adhered to prestimulated HUVEC monolayers. Neither RFYVVMWK nor RFYVVM had a significant effect on cell adhesiveness (Fig. 4C andD).
DISCUSSION
Proangiogenic cell therapy offers many potential applications in regenerative medicine for the treatment of patients with ischemic diseases, particularly in cardiology. Promising preclinical studies have prompted the initiation of numerous clinical trials based on administration of progenitor cells. Several cellular functions are involved in the process of neoangiogenesis such as homing and recruitment of cells, proliferation, endothelial differentiation, and survival. However, cardiovascular risk factors such as T2D were associated with dysfunctional progenitor cells, including impaired adhesiveness 5,6 , which undermines their therapeutic value in autologous cell therapies 20 . CD34 + PBMC recruitment to damaged vessels is a crucial step to initiate the process of vascular repair and neovascularization 9 . In ex vivo settings, we observed a lower basal adhesion of T2D CD34 + cells to vitronectin-collagen matrix compared to the non-T2D CD34 + cells. We thus sought to investigate whether stimulating the adhesiveness of CD34 + PBMCs was feasible in an attempt to improve cell therapy efficiency in T2D.
The transmembrane protein CD47, TSP-1 receptor, associates with CD51/CD61, CD41a/CD61, CD49d/CD29, and CD49b/CD29 integrins to mediate cell adhesion and motility 17 . Herein we provide evidence that prestimulating CD34 + PBMCs with RFYVVMWK, a TSP-1-related peptide that activates CD47, restores and amplifies their adhesiveness to vitronectin-collagen matrix beyond the basal adhesion values obtained in non-T2D patients. In addition, we showed a strong increase in surface expression of CD29 and CD51/CD61 integrins following RFYVVMWK stimulation, thereby providing a possible mechanism to the increased adhesion of CD34 + cells to the subendothelial matrix components. Confocal microscopy strengthened this hypothesis by revealing polarization of integrin at the cell surface, consistent with the clustering process occurring during integrin activation.
The endothelial expression of CD51/CD61 (a v b 3 integrin) and its interaction with extracellular matrix components are crucial during angiogenesis 21 . This interaction triggers vascular endothelial growth factor (VEGF)-A-mediated full activation of VEGF receptor 2 (VEGFR-2), but also yields a strong antiapoptotic effect through the suppression of the p53 and the p53-inducible cell cycle inhibitor p21WAF1/CIP1 activities and the increase in the Bcl-2/Bax ratio 22,23 . Consistent with the later functions of CD51/CD61, RFYVVMWK priming did not compromise CD34 + cell survival, as assessed by annexin V/PI labeling. CD29 (b 1 ) integrin subsets diversely contribute to angiogenesis. Li and collaborators have associated CD29 expression levels with the rate of implantation and colonization of ischemic limbs with bone marrow-derived endothelial precursors, which is of critical importance for inducing therapeutic angiogenesis by cell implantation 24 .
Albeit standardized CD34 + PBMC isolation and purification techniques were used in this study, there were still platelet remnants in the positive fraction, as recurrently reported in the literature addressing CD34 + cell isolation and enrichment 25 . We observed a significant increase in platelet-CD34 + conjugate formation upon RFYVVMWK stimulation, along with an increased expression of CD62P restricted to platelet-CD34 + conjugates. These results are consistent with the previously reported activating effect of RFYVVMWK on platelets 26 . This activation, concomitant to CD34 + cell stimulation, induces platelet secretion and surface CD62P expression, thereby enabling platelets to interact with CD162 (PSGL-1) on CD34 + cells. As previously described by others, platelets are instrumental in neovascularization by targeting CD34 + cell recruitment to injured vessels and promoting their homing and maturation 8,10,11 .
Consistent with this rationale, RFYVVMWK stimulation had no significant effect on CD34 + PBMC adhesion on HUVEC monolayers, stimulated with either TNF-a or IL1-b. These results echo our observations in a TSP-1 knockout mouse model of FeCl 3 -induced intravascular thrombosis, in which we observed that TSP-1 was essential for bone marrow cell (BMC) recruitment to vascular injury sites 18 . We also reported that CD47 preactivation with RFYVVMWK strongly stimulated BMC adhesion and specific recruitment to sites of thrombosis in vivo 19 . The present findings suggest that stimulation by RFYVVMWK confers to CD34 + PBMCs an increased adhesiveness restricted to most damaged and de-endothelialized vascular areas exposing the matrix components, with limited stickiness to healthier areas.
It has previously been suggested that increased expression of adhesion molecules [including CD11a/CD18 (LFA-1), CD49d/CD29 (VLA-4), CD54 (ICAM-1), CD51/ CD61, and CD162] on CD34 + or endothelial progenitor cells and/or increased adhesiveness in vitro could translate into enhanced endothelial repair or neovascularization capacity in vivo [27][28][29] . In coherence with these observations, we have previously reported that priming BM-MNCs with RFYVVMWK results in increased proangiogenic activity in a mouse model of hindlimb ischemia and cell therapy 19 . However, additional studies are required to demonstrate whether the priming of CD34 + cells isolated from PB improves vascularization in vivo.
In a recent study, Albiero et al. suggested that increased adhesiveness of stem cells may hamper their ability of being mobilized from the bone marrow 30 . Our results are in line with the prospect of using the peptide ex vivo as a pretreatment strategy prior to administration of an autologous cell-based therapy product, rather than using RFYVVMWK in vivo. Thus, we anticipate that the endogenous mobilization of stem cells would not be affected. In addition, since the majority of current cell-based therapy strategies are using local injection in ischemic areas, it is unlikely that RFYVVMWK preconditioning of cells can favor homing of injected CD34 + cells into the bone marrow.
RFYVVMWK induced surface expression of TSP-1. This neo-expression was observed even in CD42b -/ CD34 + cells. The timeline of our experimental conditions suggest that TSP-1 originated from exocytosis or platelet secretion rather than from neosynthesis per se. The consequences of TSP-1 expression on CD34 + cells are difficult to anticipate as TSP-1 induces both positive and negative modulation of endothelial cell adhesion, motility, and growth through its interaction with a plethora of cell adhesion receptors, including CD47, CD36, CD51/CD61, and CD29 integrins, and syndecan 12 .
CD47 expression was not modulated upon RFYVV MWK stimulation. CD47 interaction with signal regulatory protein a (SIRP-a), expressed on macrophages and dendritic cells, negatively regulates phagocytosis of hematopoietic cells 31 . Interestingly, we observed that T2D CD42b + /CD34 + conjugates express significantly less CD47 on their surface compared to non-T2D cells. This lower expression of CD47 may contribute to a higher susceptibility of T2D CD34 + to phagocytosis in vivo. Yet, we could not demonstrate lower amounts of CD34 + in T2D PB as previously observed by others 32 . Surprisingly, using a singleblinded counting approach, we measured significantly higher levels of CD34 + cells recovered from T2D patients (n = 20) compared to non-T2D (n = 20). This could be due to the fact that counting of CD34 + cells was performed after enrichment of cells with an immunomagnetic CD34 antibody column rather than on the PB of patients, which may have introduced an unexpected bias in the quantification. In addition, several studies have suggested that glycemic control could impact circulating progenitor cell levels in diabetic patients. Indeed, oral antidiabetics were shown to attenuate the quantitative deficit and improve angiogenic function of progenitor cells in diabetics 33,34 . The underlying mechanisms probably involve reduction in inflammation, oxidative stress, and insulin resistance. Furthermore, a recent study reported a positive correlation between circulating CD34 + cell count and serum triglycerides in nonhypertensive elderly Japanese men, suggesting that triglycerides may stimulate an increase in circulating CD34 + by inducing vascular disturbance 35 . In our patient cohort, triglycerides were significantly higher in T2D patients despite statin treatment. In agreement with the study by Shimizu and collaborators 35 , CD34 + cell count significantly correlated with triglyceride levels in nonhypertensive patients (n = 11; Spearman test; r = 0.81; p < 0.004), but also to a lesser extent in the hypertensive group (n = 27; Spearman test; r = 0.43; p < 0.03).
In conclusion, priming CD34 + PBMCs from T2D patients with the TSP-1 carboxy-terminal peptide RFYVVMWK restores and amplifies their adhesion properties without compromising their viability. These findings may be instrumental to improve proangiogenic autologous cell therapy in several disease settings such as T2D.
1 Figure 1. (A) Examples of CD42b + (red)/CD34 + (green) conjugates formed after stimulation with RFYVVMWK observed by confocal microscopy. Scale bars: 5 µm. (B-E) Expression of CD47 (B), TSP-1 (C), CD62P (D), and CD162 (E) on CD34 + /CD42b + conjugates and CD34 + /CD42b -cells after RFYVVM or RFYVVMWK preconditioning. TSP-1, thrombospondin-1; MFI, mean fluorescence intensity; T2D, type 2 diabetes (gray bars); non-T2D, nondiabetic (white bars); NS, not significant. *p < 0.05; **p < 0.01; ***p < 0.001 [analysis of variance (ANOVA)].
2 Figure 2. Expression of CD29 (A) and CD51/CD61 (B) on CD34 + /CD42b + conjugates and CD34 + /CD42b -cells after RFYVVM or RFYVVMWK preconditioning. (C) Examples of CD29 (top) and CD51/CD61 (bottom) distribution on RFYVVM (left)-and RFYVVMWK (right)-stimulated cells observed by confocal microscopy are shown. MFI, mean fluorescence intensity; T2D, type 2 diabetes (gray bars); non-T2D, nondiabetic (white bars); NS, not significant. *p < 0.05; **p < 0.01; ***p < 0.001 (ANOVA).
3 Figure 3. Percentage of CD34 + annexin V + /PI -and annexin V + /PI + cells after preconditioning with RFYVVM or RFYVVMWK. T2D, type 2 diabetes (gray bars); non-T2D: nondiabetic (white bars); AnV, annexin V; PI, propidium iodide. **p < 0.01; ***p < 0.001 (ANOVA).
4 Figure 4. Effect of peptide preconditioning on the adhesion of CD34 -and CD34 + cells onto vitronectin-collagen matrix. (A) All patients (diabetic plus nondiabetic). (B) Diabetic (gray bars) versus nondiabetic patients (white bars). Results are expressed as number of adherent cells per 2 ´ 10 3 seeded cells. ***p < 0.001 (Kruskal-Wallis test). Effect of preconditioning with TSP-1 peptides on the adhesion of CD34 + (nonhashed bars) and CD34 -cells (hashed bars) in diabetic (gray bars) versus nondiabetic patients (white bars) on HUVEC monolayers prestimulated by TNF-a (C) or IL-1b (D). Results are expressed as number of adherent cells per 10 3 seeded cells (p > 0.05, Kruskal-Wallis test). HUVEC, human umbilical vein endothelial cell; TNF, tumor necrosis factor; TSP, thrombospondin; T2D, type 2 diabetes; non-T2D, nondiabetic; NS, not significant. ***p < 0.001 (ANOVA).
Table 1. Patient Characteristics
Characteristics    Nondiabetics (n = 20)    Type 2 Diabetics (n = 20)    p
Age (years) 70.1 ± 1.9 69 ± 1.6 0.34
BMI (kg/m 2 ) 29 ± 1.1 30 ± 1.6 0.2
Hypertension (%) 15 (75) 12 (60) 0.5
Dyslipidemia (%) 19 (95) 19 (95) 1
Former smoking (%) 12 (60) 15 (75) 0.46
Active smoking (%) 2 (10) 2 (10) 1
Blood glucose (mmol/L) 5.61 ± 0.1 7.9 ± 0.5 <0.001
HbAIc (mmol/mol) 39 ± 0.9 51 ± 3.6 <0.001
Total cholesterol (mmol/L) 3.8 ± 0.2 3.7 ± 0.2 0.7
LDL cholesterol (mmol/L) 2 ± 0.2 1.8 ± 0.2 0.08
HDL cholesterol (mmol/L) 1 ± 0.1 1.1 ± 0.1 0.8
Triglycerides (mmol/L) 1 ± 0.1 1.8 ± 0.2 0.002
Platelets (G/L) 199 ± 11.4 181 ± 6.3 0.3
Statins (%) 20 (100) 20 (100) NS
Antiaggregant (%) 20 (100) 20 (100) NS
Oral antidiabetics (%) 0 18 (90) <0.001
Results are expressed as means ± SEM. BMI, body mass index; HbA1c, glycosylated hemoglobin; LDL, low-density lipoprotein; HDL, high-density lipoprotein.
ACKNOWLEDGMENTS: This work was supported in part by
the Agence Nationale de la Recherche (Grant No. ANR-07-PHYSIO-025-02). The authors declare no conflicts of interest. | 31,986 | [
"780869",
"1118884",
"958995",
"172613"
] | [
"2966",
"182181",
"1361",
"81503",
"302452",
"417872",
"458410",
"182181",
"445519",
"323994"
] |
01768819 | en | [
"info"
] | 2024/03/05 22:32:16 | 2010 | https://hal.science/hal-01768819/file/LREC_Tahon_2010.pdf | Marie Tahon
Agnès Delaborde
Claude Barras
Laurence Devillers
A corpus for identification of speakers and their emotions
This paper deals with a new corpus, called corpus IDV for "Institut De la Vision", collected within the framework of the project ROMEO (Cap Digital French national project founded by FUI6). The aim of the project is to construct a robot assistant for dependent person (blind, elderly person). Two of the robot functionalities are speaker identification and emotion detection. In order to train our detection system, we have collected a corpus with blind and half-blind person from 23 to 79 years old in situations close to the final application of the robot assistant. This paper explains how the corpus has been collected and shows first results on speaker identification.
Introduction
The aim of the project ROMEO (Cap Digital French national project founded by FUI6, http://www.projetromeo.com) is to design a robotic companion (1m40) which can play different roles: a robot assistant for dependent person (blind, elderly person) and a game companion for children. The functionalities that we aim to develop are speaker identification (one speaker among N, impostor) and emotion detection in every day speech. The main challenge is to develop a strong speaker detection system with emotional speech and an emotion detection system knowing the speaker. All our systems are supposed to be real time systems.
In the final demonstration, the robot assistant will have to execute some tasks as defined in a detailed scenario. The robot is in an apartment with its owner, an elderly and blind person. During the whole day, the owner will have some visitors. The robot will have to recognize who are the different characters: his little children (two girls and a boy), the doctor, the house-keeper and an unknown person. In the scenario the robot will also have to recognize emotions. For example, Romeo would be able to detect how the owner feels when he wakes up (positive or negative) and to detect anger in the little girl's voice.
To improve our detection systems (speaker and emotion)
we need different corpora, the closer to final demonstration they are, the better the results will be. We focused on blind or half-blind speakers (elderly and young person) and children voices while they interact with a robot [START_REF] Delaborde | A Wizard-of-Oz game for collecting emotional audio data in a children-robot interaction[END_REF] in order to have real-life conditions. However, emotions in real-life conditions are complex and the different factors involved in the emergence of an emotional manifestation are strongly linked together [START_REF] Scherer | Vocal communication of emotion: a review of research paradigms[END_REF].
In this paper, we will describe the IDV corpus which was collected with blind and half-blind person: acquisition protocol, scenarii involved. Then we explain the annotation protocol. And in section 4, we give our first results on speaker identification (identify a speaker from a set of known speakers).
IDV corpus
The part of the final scenario that concerns the IDV corpus, which we aim to demonstrate at the end of the project, consists in: identifying a speaker from a set of known speakers (children or adults); recognizing a speaker as unknown and, in this case, providing its category (child, adult, elderly) and gender (for adults only); and detecting positive or negative emotions.
Speaker identification and emotion detection are real time tasks. For that objective, we have collected a first corpus called IDV corpus with blind and half-blind French people from 23 to 79 years old. This corpus has been collected without any robot but a Wizard-of-oZ which simulates an emotion detection system. This corpus is not fully recorded yet; further records are scheduled with the IDV. A second corpus will be collected in the context of the scenario: at the IDV (Institut de la Vision in Paris) with the robot ROMEO.
Corpus characteristics
So far, we recorded 10h48' of French emotional speech.
28 speakers (11 males and 17 females) were recorded with a lapel-microphone at 48kHz. In accordance with the Romeo Project target room, the recordings took place in an almost empty studio (apart from some basic pieces of furniture), which implies a high reverberation time.
The originality of this corpus lies in the selection of speakers: for a same scientifically controlled recording protocol, we can compare both young voices (from 20 years old) to voices of older person (so far, the oldest in this corpus is 89).
Acquisition protocol
Before the recording starts, the participant is asked some profile data (sex, location, age, type of visual deficiency, occupation and marital status). An experimenter from the LIMSI interviews the volunteer following three sequences described below in 2.3. Some parasite noise happened to be audible in the studio (guide dog walking around, people working outside, talking, moving in the corridor, …). When overlapping the speaker's speech, these parts were discarded.
Sequences description
Each recording is divided into three sequences.
The first one is an introduction to the Romeo project: we explain the participant that we need him to provide us with emotional data, so that we can improve our emotion detection system in a future robot. We take advantage of this sequence to calibrate the participant's microphone.
Since there is no experimental control over the emotions that could be expressed by the participant, this part is discarded in the final corpus and will not be annotated.
In the second sequence, called "words repetition" (table 1), the experimenter asks the participant to repeat after him orders that could be given to the robot. The participant is free to choose the intonation and the expression of his or her production. This sequence gives us a sub-corpus where lexicon is determined and emotions mainly neutral.
Viens par ici! (come here!)
Mets le plat au four! (put the dish in the oven!)
Arrête-toi! (stop here!)
Descends la poubelle! (Bring down the bin!)
Stop! (stop!)
Va chercher le courrier! (Bring back the mails!)
Ecoute-moi! (listen to me!)
Va chercher à boire! (Bring back some water!)
Approche! (come near!)
Aide-moi à me lever! (help me to get up!)
Va-t-en! (go away!)
Aide-moi à marcher! (help me to walk!)
Donne! (give it!)
Roméo, réveille-toi! (Romeo, wake up!)
Ramasse ça! (pick it up!)
Table 1: List of words and expressions in French
In the third sequence, called "scenarii", the experimenter presents six scenarii (see table Scenarii) in which the participant has to pretend to be interacting with a domestic robot called Romeo. For each presented scenario, the experimenter asks the participant to act a specific emotion linked to the context of the scenario :
for instance Joy, "Your children come to see you and you appreciate that, tell the robot that everything is fine for you and you don't need its help", or Stress, "You stand up from your armchair and hit your head in the window, ask Romeo to come for help", or Sadness, "You wake up and the robot comes to ask about your health. You explain it that you're depressed". The participant has to picture himself or herself in this context and to speak in a way that the emotions are easily recognizable. He or her knows that the lexicon he uses is not taken into account;
the emotion has to be heard in his or her voice.
At the end of each of his or her performance, the experimenter runs a Wizard-of-Oz emotion detection tool, that tells aloud the recognized emotion.
Corpus annotations
Emotion labels
Segmentation and annotation of the data are done with the Transcriber annotation tool1 on the scenario sequences.
The participant utterances are split into emotional segments. These segments mark the boundary of the emotion: when a specific emotion expression starts, and when it comes to an end.
On each segment, three labels describe the emotion. The first label corresponds to the most salient perceived emotion.
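For illustration, the segment boundaries saved by Transcriber (.trs files are XML) can be read back with standard-library code; this is a minimal sketch assuming the default Transcriber markup (Turn elements carrying startTime/endTime attributes), not the actual annotation pipeline used for the corpus.

```python
import xml.etree.ElementTree as ET

def read_turns(trs_path):
    """Return (speaker, start_s, end_s, text) tuples from a Transcriber .trs file."""
    root = ET.parse(trs_path).getroot()
    turns = []
    for turn in root.iter("Turn"):
        text = " ".join(t.strip() for t in turn.itertext() if t.strip())
        turns.append((turn.get("speaker"),
                      float(turn.get("startTime")),
                      float(turn.get("endTime")),
                      text))
    return turns
```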
IDV emotional content
As the emotional annotation of the IDV corpus is not finished yet, all results on emotion annotation are based on a set of 15 speakers.
IDV corpus is divided in two different corpora: spontaneous and acted, according to the task (as defined in part 3). The results of the emotion scores are reported in table 4.
The spontaneous corpus contains 736 instances of 0.5s to 5s. The most important emotional label is "interest" (51%). This corresponds to the agreement of the volunteer with what the interviewer asked him to do.
Positive emotions (18%) are more numerous than negative emotions (6%). The volunteer has accepted to be recorded, so he is not supposed to express displeasure, he will more probably be nice with the LIMSI team.
Macro-class "fear" is also quite important (10%). It corresponds to embarrassment or anxiety, playing the actor is not an easy task.
The acted corpus contains 866 instances of 0.5s to 6s.
The results correspond to what was expected: the main emotions are well represented. Positive emotions (21%, mainly "satisfaction"), negative emotions (24%, mainly "irritation"), fear (10%, mainly anxiety) and sadness (8%, "deception" and "sadness").
IDV first results
In this section, speaker identification scores are presented. All the results presented here were obtained with the same method based on GMM (Gaussian Mixture Models) speaker models [START_REF] Reynolds | Speaker verification using adapted Gaussian mixture models[END_REF].
First we have studied the different parameters of the GMM model, then the evolution of scores in function of the sex and the age of speakers.
Global speaker identification scores
This section aims at choosing the experimental setup for studying the influence of the age, gender and emotional expression. Experiments are performed with the "repeating words" sequence of the corpus. It contains 458 audio segments of varied duration. 26-dimensional acoustic features (13 MFCC and their first-order temporal derivatives) are extracted from the signal every 10 ms using a 30 ms analysis window. For each speaker, a training set is constructed by the concatenation of segments up to a requested duration Ntrain; a Gaussian mixture model (GMM) with diagonal covariance matrices is then trained on this data through maximum likelihood estimation with 5 EM iterations. The remaining segments, truncated to a duration Ntest, are used for the tests. For a given duration, the number of available segments is limited by the number of segments already used for training and by the minimal test duration required (the longer the duration, the fewer audio files are available). For each test segment, the most likely speaker is selected according to the likelihood of the speaker models. In order to optimize the number of train and test files, we have chosen the following parameters: a test duration of 1 s (225 files), a train duration of 10 s (179 files), and a speaker model with a mixture of 6 Gaussians. The error rate is 34.7% (+/-6.5%) when recognizing one speaker among 28. This extremely short test segment duration is due to constraints on segment counts in the database, and the improvement of the performance as a function of the segment length will be studied later in the course of the project.
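As an illustration of this experimental setup, the sketch below shows how such a per-speaker diagonal-covariance GMM pipeline could be reproduced with off-the-shelf tools (librosa for the 13 MFCC and their first-order derivatives, scikit-learn for the mixtures). It is only an assumed reconstruction of the procedure described above, not the code used in the project; the segment lists and file paths are placeholders.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_delta(wav_path, sr=48000):
    """13 MFCC + first-order deltas on 30 ms windows with a 10 ms hop (26-dim frames)."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=int(0.030 * sr), hop_length=int(0.010 * sr))
    return np.vstack([mfcc, librosa.feature.delta(mfcc)]).T   # (n_frames, 26)

def train_models(train_segments, n_components=6):
    """train_segments: {speaker_id: [wav paths]} holding about 10 s of speech per speaker."""
    models = {}
    for spk, paths in train_segments.items():
        X = np.concatenate([mfcc_delta(p) for p in paths])
        models[spk] = GaussianMixture(n_components=n_components,
                                      covariance_type="diag", max_iter=5).fit(X)
    return models

def identify(models, wav_path):
    """Pick the speaker whose GMM gives the highest average log-likelihood on a ~1 s test segment."""
    X = mfcc_delta(wav_path)
    return max(models, key=lambda spk: models[spk].score(X))
```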
Age influence
In this part, we show that speaker identification is easier on elderly person voices than on young voices. Two subcorpora of the IDV corpus, composed of the 8 older volunteers (4 male, 4 female, from 52 to 79 years old) and of the 8 younger volunteers (4 male, 4 female, from 23 to 46 years old), are studied separately. Of course, the number of segments is quite low, which may be a bias of the experiment. The results are reported in table 5: error rate, number of segments for test and trust interval (binomial distribution test). As a result, speaker identification (one speaker among N) is better with elderly person voices. Our hypothesis is that voice qualities are much more different among elderly person voices than among young voices. In figure 1, we have plotted the MFCC2 Gaussian model for the first four older persons (blue) and for the first four younger persons (red): while the red curves are quite similar, the blue ones are more separated from one another.
Figure 1: Distribution of the 4th MFCC coefficient according to a Gaussian model for old (blue, plain) and young speakers (red, dashed)
Sex influence
The results below are based on the whole "repeating words" corpus, which contains 28 speakers: we compute the confusion matrix sorted by sex, without taking into account the age of the speakers anymore. A female voice is correctly recognized in 96% of cases and a male voice in 82%; female voices therefore have better identification scores.
Figure 2: Confusion matrix between male (1) and female (2)
Emotional speech influence
The results presented in this part are based on both sequences "repeating words" and "scenario", with the 15 speakers corresponding to the emotional annotation of the sequence "scenario". Identification scores are better with the "words" corpus (lexically controlled) than with the "acted" corpus; the "spontaneous" corpus gives intermediate results. The scores are always better when the train and the test are made on the same corpus. Speaker models were tested directly in mismatch conditions without any specific adaptation. The very high error rates observed are of course due to the very short train and test duration constraints in our experiments, but they also highlight the necessity of an adaptation of the speaker models to the emotional context, which will be explored during the ROMEO project.
Conclusion
This corpus IDV is interesting for many reasons. First, as it presents a sequence of words, lexically determined by the protocol and quite neutral, and a sequence of emotional speech, with the same speakers, recorded in the same audio conditions, it allows us to compare speaker identification scores between neutral speech and emotional speech. Secondly, the corpus collection has been made with blind and half-blind volunteers from 23 to 79 years old, so that we can compare scores across speaker age. Moreover, we have the opportunity to work with elderly persons, who often have specific voice qualities.
Table 2: Scenarii. Table 2 summarizes the 6 different scenarii and the emotions asked to the participant.
Scenarii Emotions
Medical emergency Pain, stress
Suspicious noises Fear, anguish, anxiety
Awaking (good mood) Satisfaction, joy
Awaking (bad health) Pain, irritation, anger
Awaking (bad mood) Sadness, irritation
Visit from close relations Joy
The system is presented as being under development, and most of the time it does not correctly recognize the emotion: it can recognize an emotion that is of the opposite valence of what the participant was supposed to express (the experimenter selects Anger when Joy has been acted); it can recognize no emotion at all (the experimenter selects Neutral when a strong Anger was expressed, or when the emotion has not been acted intensely enough); or it can recognize an emotion that is close to what is expected, but too strong or too weak (Sadness instead of Disappointment). The participant is asked to act the emotion again, either until it is correctly recognized by the system, or until the experimenter feels that the participant is tired of the game.
Emotional data acquired through acting games obviously do not reflect real-life emotional expressions. However, the strategies that are being used through our Wizard-of-Oz emotion detection tool allow us to elicit emotional reactions in the participant. An example: the participant is convinced that he expressed Joy, but the system recognizes Sadness. The participant's emotional reactions are amusement, or frustration, boredom, irritation. Our corpus is then made of both acted emotions and spontaneous reactions to controlled triggers. The distinction between acted and spontaneous expressions will be spotted in our annotations; this distinction is really important to have an estimation of how natural the corpus is [START_REF] Tahon | Acoustic measures characterizing anger across corpora collected in artificial or natural context[END_REF].
We can also question the relevancy of having the participant imagine the situation, instead of having him live it in an experimental setting. We should note that for obvious ethical reasons we cannot put them in a situation of emergency such as "being hurt, and asking for immediate help": we can only have them pretend it. Another obvious reason for setting this kind of limited protocol is a matter of credibility of the settings: currently, the only available prototype does not fit the target application characteristics (Nao is fifty centimeters high, and its motion is still under development).
Table 5: Speaker identification, age influence: error rate, number of segments and trust interval
Old person Young person
Error rate 17.00% 38.00%
Number of segment 66 63
Trust interval 9.18% 12.24%
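For reference, the trust intervals reported above are consistent with a simple binomial confidence interval on the error rate; the back-of-the-envelope check below assumes a 95% normal approximation, which is only one plausible reading of the authors' "binomial distribution test".

```python
import math

def binomial_ci_95(error_rate, n_segments):
    """Half-width of a 95% normal-approximation binomial confidence interval."""
    return 1.96 * math.sqrt(error_rate * (1 - error_rate) / n_segments)

print(binomial_ci_95(0.17, 66))  # ~0.091, close to the 9.18% reported for older speakers
print(binomial_ci_95(0.38, 63))  # ~0.120, close to the 12.24% reported for younger speakers
```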
Table 5 below shows the error rate for speaker identification (1 among 15) across the 3 corpora: "repeating words", "scenario spontaneous" and "scenario acted". The parameters we have chosen for the Gaussian model are the following: 5 Gaussians, a train duration of 10 s and a test duration of 1 s.
TRAIN \ TEST     "Words"    "Spontaneous"    "Acted"
"Words"          28.60%     78.60%           88.00%
"Spontaneous"    X          45.10%           60.20%
"Acted"          X          X                56.30%
Table 5: Error rates for speaker identification across the three corpora
http://trans.sourceforge.net/en/presentation.php | 17,764 | [
"9821",
"17217",
"999093"
] | [
"247329",
"247329",
"247329",
"247329"
] |
01768827 | en | [
"info"
] | 2024/03/05 22:32:16 | 2012 | https://hal.science/hal-01768827/file/LREC_Tahon_2012.pdf | Marie Tahon
email: [email protected]
Agnes Delaborde
Laurence Devillers
Corpus of Children Voices for Mid-level Markers and Affect Bursts Analysis
Keywords: Audio Signal Processing, Emotion Detection, Human-Robot Interaction
This article presents a corpus featuring children playing games in interaction with the humanoid robot Nao: children have to express emotions in the course of a storytelling by the robot. This corpus was collected to design an affective interactive system driven by an interactional and emotional representation of the user. We evaluate here some mid-level markers used in our system: reaction time, speech duration and intensity level. We also question the presence of affect bursts, which are quite numerous in our corpus, probably because of the young age of the children and the absence of predefined lexical content.
Introduction
In the context of Human-Robot Interaction, the robot usually evolves in real-life conditions and then faces a rich multimodal contextual environment. While spoken language constitutes a very strong communication channel in interaction, it is known that lots of information is conveyed nonverbally simultaneously to spoken words [START_REF] Campbell | On the use of nonverbal speech sounds in human communication[END_REF]. Experimental evidence shows that many of our social behaviours and actions are mostly determined by the display and interpretation of nonverbal cues without relying on speech understanding. Among social markers, we can consider three main kinds of markers: interactional, emotional and personality markers. Generally-speaking, social markers are computed as long-term markers which include a memory management of the multi-level markers during interaction. In this paper, we focus on specific mid-level and short-time acoustic markers: affect bursts, speech duration, reaction time and intensity level which can be used for computing the interactional and emotional profile of the user. In a previous study, we have collected a realistic corpus (Delaborde, 2010a) of children interacting with the robot Nao (called NAO-HR1). In order to study social markers, we have recorded a second corpus (called NAO-HR2), featuring children playing an emotion game with the robot Nao. The game is called interactive story game (Delaborde, 2010b). So far, there exist few realistic children voices corpora. The best known being the AIBO corpus [START_REF] Batliner | You stupid tin box"children interacting with the AIBO robot: A cross-linguistic emotional speech corpus[END_REF], in which children give orders to the Sony's pet robot Aibo. Two corpora were collected for studying speech disorders in impaired communication children [START_REF] Ringeval | Automatic prosodic disorders analysis for impaired communication children, 1st Workshop on Child, Computer and Interaction (WOCCI)[END_REF]. In both studies, there are no spoken dialogs with robots; only the children are speaking.
Many previous studies focus on one of the three social markers. Interactional markers can be prosodic as in [START_REF] Breazeal | Recognition of affective communicative intent in Robot-Directed speech[END_REF]: five different pitch contours (praise, prohibition, comfort and attentional bids and neutral) learnt from infant-mother interaction are recognised by the Kismet robot. Mental state markers can also be only linguistic as the number of words, the speech rate (Kalman, 2010). Personality markers can be linguistic and prosodic cues [START_REF] Mairesse | Using linguistic cues for the automatic recognition of personality in conversation and text[END_REF]. Emotional markers can be prosodic, affect bursts and also linguistic. The concept of "affect bursts" has been introduced by Scherer. He defines them as "very brief, discrete, nonverbal expressions of affect in both face and voice as triggered by clearly identifiable events" [START_REF] Scherer | Affect Bursts, in Emotions[END_REF]. Affect bursts are very important for real-life interactions but they are not well recognized by emotion detection systems because of their particular temporal pattern. [START_REF] Schröder | Experimental study of affect bursts[END_REF] shows that affect bursts have a meaningful emotional content. Our hypothesis is that non verbal events and specific affect bursts production are important social cues during a spontaneous Human-Robot Interaction and probably even more with young children.
Section 2 presents the protocol for collecting our second children emotional voices corpus. The content of the corpus NAO-HR2 is described in Section 3: affect bursts, speakers and other interactional information. Section 4 summarizes the values we can expect for some mid-level social cues. Finally, Section 5 presents our conclusion and future work.
Data collection 2.1 Interactive Story Game
We have collected the voices of children playing with the robot Nao and recorded with lapel-microphone. Nao told a story, and two children in front of it where supposed to act the expected emotions in the course of the story. A game session consists in 3 phases: first the robot explains the rules and suggests some examples, the second part is the game itself, and the last part is a questionnaire proposed by an experimenter. The children are presented a board, on which words or concepts are drawn and written (such as "house", or "poverty"). Emotion tags are written in correspondence for each of this word. The player number one knows that, for example, if the notion "poverty" occurs in the course of the story, he will have to express sadness. He can express it the way he wants: he can speak sadly, or do as though he was weeping; children were free to interpret the rules as they wanted to. Once the rules are understood by the two players, Nao starts to tell the story. When it stops speaking, one of the players is supposed to have spotted a concept in the previous sentence, and is expected to play the corresponding emotion. If the robot detects the right emotion, the child wins one point.
Semi-automatic Human-Robot Interaction System
The behaviour of the robot changes in the course of the game. It can be neutral, just saying "Your answer is correct", or "not correct". It can also be empathic "I know this is a hard task", etc. Fuzzy logic rules select the most desirable behaviour for the robot, according to the emotional and interactional profile of each child, and their sex. This profile is built according to another set of fuzzy logic rules which process the emotional cues provided manually by the Wizard experimenter. The latter provides the system with the emotion expressed by the child (a label such as "Happiness", "Anger", "Sadness", etc.), the strength of the emotion (low, average or high activation), the elapsed time between the moment when the child is expected to speak and the time he starts speaking, and the duration of the speaking turn (both in seconds). From these manually captured cues, the Human-Robot Interaction system builds automatically an emotional and interactional representation of each child, and the behaviour of the robot changes according to this representation.
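To make the idea concrete, here is a deliberately simplified sketch of how manually captured cues could be mapped to a robot behaviour with fuzzy-style rules; the thresholds, weights and behaviour names are invented for the example and do not reproduce the actual rule base of the system.

```python
def self_confidence(reaction_time_s, speech_duration_s):
    """Toy fuzzy-style score in [0, 1]: quick, long answers are read as more self-confident."""
    quick = max(0.0, min(1.0, (5.0 - reaction_time_s) / 5.0))
    talkative = max(0.0, min(1.0, speech_duration_s / 4.0))
    return 0.6 * quick + 0.4 * talkative

def choose_behaviour(emotion_label, activation, reaction_time_s, speech_duration_s):
    confidence = self_confidence(reaction_time_s, speech_duration_s)
    if emotion_label in ("Sadness", "Fear") and activation == "high":
        return "empathic"      # e.g. "I know this is a hard task"
    if confidence < 0.3:
        return "encouraging"
    return "neutral"           # e.g. "Your answer is correct" / "not correct"

print(choose_behaviour("Sadness", "high", reaction_time_s=4.4, speech_duration_s=2.2))
```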
The dynamic adaptation of the behaviour of the robot and the design of the profile, based on a multi-level processing of the emotional audio cues, are explained in (Delaborde, 2010b); Table 1 gives an overview of the different levels of processing. The collected audio data is subsequently processed by expert labellers. On each speaker's track, we define speaker turns called instances. The annotation protocol is described in detail in (Delaborde, 2010b). The annotation scheme consists of emotional information (labels, dimensions and affect bursts), but also mental-state and personality information based on different time windows. In this paper, we focus on the study of affect bursts and other mid-level markers such as reaction time and duration, as well as the low-level marker intensity.
Contents of NAO-HR2 corpus
Description of the corpus
The NAO-HR2 corpus is made up of 603 emotional segments for a total duration of 21 min 16 s. Twelve children (from six to eleven years old) and four adults have been recorded (five boys, seven girls, one woman and three men). For this study, we have selected only the speech instances which occur during the story game (not during the questionnaire). Consequently, we obtain 20 emotional answers per gaming session: 10 emotional answers for each speaker. In that way the number of speaker turns is quite similar from one speaker to another.
Affect bursts
An annotation tag indicates the presence or absence of an affect burst in each instance. We notice that a large majority of the corpus is made up of affect bursts. Table 2 summarizes the number of affect bursts (AB) over the total number of instances (TT) for each group of speakers. We have separated the children into two groups of five according to their age: the younger are from 6 to 7 years old, the older are 8 years old and over.
From these results we can conclude that asking a participant to express an emotion without any predefined lexical content leads to a high number of affect bursts.
Children seem to use affect bursts more often than adults, and young children even more so. It seems that they are not at ease with finding words to express an emotion. Both children and adults express happiness by laughing, but only children use "grr" affect bursts for anger in our corpora. Expressions of fear are more often affect bursts for children than for adults. Affect bursts usually contain only a single phoneme, so it is not possible to easily compute a speaking rate.
Results on Social Markers
In this section, we have manually measured the different markers in all game sessions.
An example is shown in Figure 1. Nao says "a lot of sadness"; the word "sadness" is one of the keywords written on the board, and the child has to express the corresponding emotional state, which is sadness. The four social markers we are studying are shown in red: the reaction time is 4.42 s, the speech duration is 2.17 s, the mean intensity is 52.83 dB (after normalization: 28.67 dB) and the mean Harmonics-to-Noise Ratio is 10.95 dB. The reaction time is long for this turn; the mean value for this 10-year-old boy is 3.07 s. Intensity and HNR are also lower than the mean values obtained over his whole session (mean intensity is 32.43 dB and mean HNR is 12.56 dB).
Intensity and HNR values correspond to what is expected when acting sadness; a high reaction time probably means that the boy was not at ease with this specific turn.
Reaction Time
The reaction time (RT) represents the interval between the time when the speaker is expected to speak (when Nao stops telling the story), and the time he indeed starts to speak. In the context of our game, the children were not supposed to call up their knowledge, or to think about the best answer. They were supposed to act the emotion written on the board. The longer the reaction time, the more the speaker postpones the time of his oral production. This parameter is one of the parameters used for the definition of the dimension "self-confidence" of the emotional profile. The shorter the reaction time, the more the speaker tends to be self-confident. Table 3 presents the mean and standard deviation of mean reaction times for each child.
Table 3: Reaction Time — Mean RT = 4.62 s, Std RT = 2.00 s.
Some children are not at ease with the game, and their RT is much longer than the others' (RT = 7.73 s for child n°12, 6 years old). Such a high RT value often means that the child did not find any answer to give to Nao within the allotted time (if the child has not answered after 12.5 s, the robot continues the story). Hesitation is frequent among children with a long RT.
Estimation of Speech Duration
The speech duration (SD) is another parameter used for the emotional profile of the speaker. It corresponds to the duration of speech of the speaker, for each speaking turn. Children included small pauses (from 850 ms to 1.40 s) in their speech. These short silences are not considered as ends of the speaking turn: they can be breathing, hesitating or thinking, after which the speaker resumes speaking.
Table 4: Speech Duration for each turn — Mean SD = 2.01 s, Std SD = 1.30 s.
We notice in Table 4 that the mean SD is generally quite short. The turns are mostly composed of a single syllable. As we have seen before, the proportion of affect bursts is quite high and most of them have short durations. As the players do not have any lexical support except what Nao has just said, they are not stimulated to speak a lot.
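As a side note for readers implementing similar cues, the two timing markers used here (reaction time and speech duration) can be derived from a handful of timestamps: the moment the robot stops speaking, and the start and end times of the child's speech segments, with short within-turn pauses bridged. The sketch below is only an illustration of that logic; the function name, data layout and the 1.4 s pause threshold are our own assumptions, not the project's annotation tools.

```python
def turn_timing(robot_stop, speech_segments, max_pause=1.4):
    """Reaction time (s) and speech duration (s) for one speaking turn.

    robot_stop: time at which Nao stops telling the story.
    speech_segments: sorted list of (start, end) times of the child's speech.
    Pauses shorter than max_pause stay inside the turn (breathing, hesitation).
    """
    if not speech_segments:
        return None, 0.0                      # no answer within the allotted time

    reaction_time = speech_segments[0][0] - robot_stop
    turn_start, turn_end = speech_segments[0]
    for start, end in speech_segments[1:]:
        if start - turn_end <= max_pause:
            turn_end = end                    # bridge the short silence
        else:
            break                             # a longer silence closes the turn
    return reaction_time, turn_end - turn_start

# Example: rt = 4.42 s and sd = 3.08 s for segments (16.72, 17.90) and (18.80, 19.80)
rt, sd = turn_timing(12.30, [(16.72, 17.90), (18.80, 19.80)])
```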
Estimation of Intensity
For each session, both children were recorded with separate microphones which each have their own gain. We compute the mean intensity (Int) normalized to the noise value for each session. It is also possible to estimate the HNR value on voiced parts only. Hesitation is often expressed with a lower intensity: on hesitation turns, the mean intensity is from 45% to 70% lower than the mean intensity for the same child. Figure 2 shows that the mean intensity seems to decrease with RT and HNR to increase with RT. As we have said, a small RT generally signifies good self-confidence; our data show that it is correlated with a high intensity and a small HNR. When the child is at ease, he speaks loudly.
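To make the intensity normalization concrete, the mean intensity of a turn can be expressed in dB and referenced to a noise-floor estimate taken on a silent portion of the same session. The following sketch is a simplified illustration (it does not reproduce Praat's exact intensity algorithm, and the frame length and function names are assumptions).

```python
import numpy as np

def mean_intensity_db(samples, frame=1024):
    """Mean short-term intensity in dB (arbitrary reference level)."""
    n = len(samples) // frame
    frames = np.asarray(samples[:n * frame], dtype=float).reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-12
    return float(np.mean(20.0 * np.log10(rms)))

def normalized_intensity(turn_samples, noise_samples, frame=1024):
    """Intensity of a speaking turn relative to the session noise floor (dB)."""
    return mean_intensity_db(turn_samples, frame) - mean_intensity_db(noise_samples, frame)
```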
Conclusion and Future Works
The NAO-HR2 children voices corpus is composed of French emotional speech collected in the course of a game between two children and the robot Nao. A semi-automatic Human-Robot Interaction system built the emotional and interactional representation of each child and selected the behaviour of the robot, based on the emotions captured manually by an experimenter. The data we collected allow us to study some of the parameters which take part in building the emotional and interactional profile. We have analysed some of the mid-level cues which are used in our Human-Robot Interaction system. Among those cues, reaction time, intensity level and speech duration do make sense in our child-robot interaction game, but speaking rate does not seem to be relevant in that particular context. Indeed, as the children are quite young (from six to eleven years old), and as they are not given any predefined lexical content, they usually express their emotions with affect bursts. The younger the child, the more he/she will use affect bursts. In future work, we will also study the speaking rate in longer turns of child speech. For the needs of our data collection, the affective interactive system was used in a Wizard-of-Oz setting (an experimenter captured the emotional inputs manually); in a next collection, we will use it with automatic detection of the emotions in speech, and then collect more data to confirm our analysis.
Acknowledgement
This work is financed by national funds FUI6 under the French ROMEO project labelled by CAP DIGITAL competitive centre (Paris Region).
Figure 1: An example of social markers during the story game; the markers are collected with Praat.
Figure 2: Intensity and HNR as a function of the reaction time for the 12 children.
Table 1: Multi-level cues and social markers. It gives an overview of the different levels of processing of the emotional audio signal: from low-level cues computed from the audio signal, to high-level markers such as emotions, emotional tendencies, and interactional tendencies. Low-level cues: intensity level, prosody, spectral envelope. Mid-level cues: affect bursts (laughs, hesitation, 'grr'), speech duration, reaction time, speaking rate. High-level social markers: emotion (label, dimension), interactional tendencies (e.g. dominance), emotional tendencies (e.g. extraversion).
Table 5: Intensity and HNR means and std.
| 15,813 | [ "9821", "999093" ] | [ "247329", "247329", "247329" ] |
01768816 | en | [ "chim", "sde" ] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01768816/file/Fauvelle%20et%20al.%2C%202017%20ES%26T.pdf | Vincent Fauvelle
Sarit L Kaserzon
Natalia Montero
Sophie Lissalde
Ian J Allan
Graham Mills
Nicolas Mazzella
Jochen F Mueller
Kees Booij
Dealing with flow effects on the uptake of polar compounds by passive samplers
Passive sampling of polar contaminants in aquatic environments is commonly undertaken using the Polar Organic Chemical Integrative Sampler (POCIS) or the Chemcatcher. Many studies have shown that the sampling rates (Rs) of contaminants increase with increasing water flow velocities (v), and could reach a maximum Rs(max) at high v (several dm s -1 and beyond). [START_REF] Harman | Calibration and use of the polar organic chemical integrative sampler-A critical review[END_REF][START_REF] Li | Controlled field evaluation of water flow rate effects on sampling polar organic compounds using polar organic chemical integrative samplers[END_REF] In situ v are often within the range where flow effects on Rs can persist, and it has thus been concluded that the transfer of most contaminants is (at least partially) under water boundary layer (WBL) control both for the POCIS and Chemcatcher. [START_REF] Harman | Calibration and use of the polar organic chemical integrative sampler-A critical review[END_REF] Two methods have been proposed to account for the effects of v on Rs. The first method adapted the performance reference compounds (PRC) approach for the POCIS. [START_REF] Harman | Calibration and use of the polar organic chemical integrative sampler-A critical review[END_REF] Although several studies indeed showed that higher Rs are associated with higher dissipation rates of some PRCs, this method has not proven to be fully quantitative. [START_REF] Harman | Calibration and use of the polar organic chemical integrative sampler-A critical review[END_REF] The current application of this method for POCIS takes into account the overall mass transfer coefficient (MTC) of the PRC selected, including its transport within the sorbent, which i) is governed by various interaction types (e.g., π-π, dipole-dipole, H-bonding and ionic interactions) depending on the analyte considered, and ii) is often anisotropic. The second method uses passive flow monitors (PFMs) to measure time averaged in situ v from the mass loss rate of calcium sulfate casts. An empirical relationship between the PFM derived v and Rs is established in the laboratory, and is then applied for field exposed PFMs and passive samplers to obtain in situ Rs. [START_REF] Harman | Calibration and use of the polar organic chemical integrative sampler-A critical review[END_REF] A more thorough understanding of the effect of transport through the WBL on Rs of polar compounds can be obtained by considering that the overall resistance to mass transfer (1/ko) equals the sum of the resistances for transport through the WBL, the membrane, and the sorbent:
3 ! " # = % & ' = ! " ( + ! * +( " + + ! * '( " ' (1)
When transport through the membrane is only via the pore space, eq 1 becomes:
1/k_o = δ/D_w + d θ/(φ D_w) + 1/(K_sw k_s)    (2)
where k_o is the overall MTC, k_w, k_m, k_s are the MTCs for the WBL, the membrane, and the sorbent respectively, A is the exposure surface area of the device, and K_mw, K_sw are the sorption coefficients of the membrane and the sorbent. δ and d are the WBL and membrane thicknesses, D_w is the contaminant diffusion coefficient in water, θ is the tortuosity, and φ is the membrane porosity.
During the last International Passive Sampling Workshop (Prague, September 2016), several options were identified to deal with flow effects on Rs: i) increasing the membrane resistance, ii) accepting and quantifying the larger uncertainties associated to low flow conditions, iii) establishing empirical relationships between Rs and v, iv) taking kw explicitly into account during laboratory calibrations and in field exposures. These options are discussed below.
i) increasing the membrane resistance (second term in eqs 1 and 2) reduces the relative importance of k_w, and shifts the occurrence of flow effects to a lower v. Thus, laboratory calibrations will be applicable for a wider range of flow conditions. This approach was chosen for the development of the diffusive gradient in thin films for organics, which employs a 0.8 mm hydrogel membrane. [START_REF] Chen | Evidence and recommendations to support the use of a novel passive water sampler to quantify antibiotics in wastewaters[END_REF] Considering a typical WBL thickness between 10 and 500 µm (1 < k_w < 50 µm s -1), 2 mm thick diffusion layers would decrease flow effects on Rs to less than 20% for all environments (eq 2, adopting θ and φ = 1). Increasing the resistance to diffusion implies smaller Rs, which may or may not be problematic, depending on the analytical detection limits and the concentration of the analytes.
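As a rough numerical check of this statement, eq 2 can be evaluated for an assumed 2 mm diffusion layer and a typical range of WBL thicknesses. The values below (D_w ≈ 5 × 10-10 m2 s-1, negligible sorbent-side resistance, θ = φ = 1) are illustrative assumptions, not data from the cited calibration studies.

```python
# Relative decrease of the overall mass transfer coefficient (eq 2) due to the WBL,
# for a thick (2 mm) diffusion layer. Purely illustrative numbers.
D_w = 5e-10                         # m^2/s, aqueous diffusion coefficient (assumed)
d, theta, phi = 2e-3, 1.0, 1.0      # diffusion-layer thickness, tortuosity, porosity

for delta in (10e-6, 100e-6, 500e-6):                 # WBL thickness (m)
    k_max = 1.0 / (d * theta / (phi * D_w))           # overall MTC when delta -> 0
    k_o = 1.0 / (delta / D_w + d * theta / (phi * D_w))
    print(f"delta = {delta*1e6:5.0f} um  ->  Rs lowered by {100*(1 - k_o/k_max):4.1f} %")
# Even a 500 um WBL lowers Rs by only ~20 % with a 2 mm diffusion layer.
```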
ii) accepting larger uncertainties at low flow conditions may be a relevant option when the differences between Rs under quiescent and fully turbulent conditions are smaller than a chosen value. The available evidence shows that these differences amount to a factor of 2 or more for most compounds. [START_REF] Harman | Calibration and use of the polar organic chemical integrative sampler-A critical review[END_REF] This implies that calibrations should be carried out over a range of v, to determine the smallest flow rate at which the Rs reduction is smaller than a given value. However, if such laborious calibrations are to be done, then flow-Rs relationships for field use may as well be established in the first place, with little extra effort. Differences in Rs between quiescent and turbulent conditions have been used by Poulier et al. for determining confidence intervals for the application of POCIS for the regulatory monitoring of polar pesticides. 5
iii) empirical relationships between Rs and v were used by Li et al., 2
R_s = c v^n    (3)
Where c and n are laboratory-determined fitting parameters. Although this model gave an accurate description of analyte accumulation by POCIS at v between 3 and 37 cm s -1 , it cannot handle the nonzero Rs that are observed at zero flow, nor a limiting Rs value at infinite v.
iv) taking kw explicitly into account. The co-deployment of two devices with different d can give an estimation of δ. Otherwise, the MTC of the WBL can be directly measured in the laboratory and in situ, using alabaster dissolution rates, or the dissipation rates of PRCs from nonpolar samplers. [START_REF] Booij | Method to account for the effect of hydrodynamics on polar organic compound uptake by passive samplers[END_REF]
By also measuring the in situ kw, the in situ Rs can be obtained from
1/R_s(in situ) = 1/R_s(max) + 1/(A k_w)    (5)
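A worked example of how eqs (4) and (5) are combined in practice is given below: the laboratory-derived limiting sampling rate is corrected with a separately measured in situ k_w. All numerical values are placeholders chosen to show the arithmetic, not results from any calibration study.

```python
# In situ sampling rate from a laboratory Rs(max) and a measured WBL MTC (eq 5).
A = 45.8e-4                      # m^2, exposed sampler surface area (assumed)
Rs_max = 0.25                    # L/day, limiting sampling rate at high flow (assumed)
k_w = 5e-6                       # m/s, in situ WBL mass transfer coefficient (assumed)

A_kw = A * k_w * 1000 * 86400    # m^3/s converted to L/day
Rs_insitu = 1.0 / (1.0 / Rs_max + 1.0 / A_kw)
print(f"in situ Rs = {Rs_insitu:.3f} L/day")   # ~0.22 L/day with these numbers
```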
In brief, it needs to be recognized that monitoring organic chemicals using passive sampling relies on balancing considerations related to sensitivity (i.e., rapid uptake rates) and accuracy (i.e., compensating for flow effects on Rs) to provide a best estimate of time integrative concentrations of the chemicals of interest. In recognition of the effect of v on Rs we offer a series of approaches with the aim to decrease uncertainties. Arguably, none of the proposed techniques provides a universal solution for the wide range of applications nor is it likely that the development of novel passive samplers will overcome the issues discussed. Hence, we propose the application of at least one or ideally a combination of the approaches discussed above to improve confidence in results.
Knowledge of kw for the laboratory calibration studies allows quantifying the limiting sampling rate Rs(max) at high v.
1/R_s(max) = 1/(A K_mw k_m) + 1/(A K_sw k_s)    (4)

| 7,668 | [ "177468" ] | [ "191652", "531091", "531091", "531091", "449694", "74243", "353661", "68170", "864", "302049", "531091", "356395" ] |
00176886 | en | [ "phys", "stat" ] | 2024/03/05 22:32:16 | 2005 | https://hal.science/hal-00176886/file/TaPe.pdf | Isabelle Rivals
Léon Personnaz
Costantino Creton
François Simal
Patrice Roose
Steven Van Es
A STATISTICAL METHOD FOR THE PREDICTION OF THE LOOP TACK AND THE PEEL OF PSAS FROM PROBE TEST MEASUREMENTS
We investigated the potential of the probe test as a high throughput test for the rapid screening of a large number of candidate pressure-sensitive-adhesives. The output of a successful screening tool should be usable to predict with some precision important characteristics of adhesives such as loop tack and peel which take longer to measure. The output of an instrumented probe tack test being both reproducible and sensitive to changes in polymer structure or formulation, we developed a statistical procedure which builds a polynomial model of the force values at the relevant time instants only, the number of necessary monomials being established using a test for lack of fit. The performance of the models is then estimated using cross-validation and an independent test set. The prediction results obtained on a data set representative of commercial adhesives show that the force signal recorded during a probe test indeed contains exploitable information about the loop tack and peel forces, and that the proposed statistical procedure is more successful for quantitative predictions than existing alternative approaches.
I. Introduction
Pressure-sensitive-adhesives (PSAs) are increasingly popular for fastening applications due to their safe and easy handling. With the increase in the number and variety of applications come more demanding requirements in terms of properties. In order to speed up the optimization of the properties needed for a specific application, it will be become more and more important to develop a good high throughput test for the rapid screening of a large number of materials, so as to select only the most promising ones [START_REF] Grunlan | [END_REF]] [Kulikov et al. 2005] [Zhang et al. 2005].
Initially, the instrumented probe tack test has been developed to gain more physical insight in the mechanisms of debonding of PSAs [Zosel 1985] [Lakrout et al. 1999] [Shull & Creton 2004]. A careful analysis of the results obtained with the probe tack test has led to significant advances in the understanding of the micromechanisms of debonding of a PSA layer from a rigid substrate, and the main results can be found in a comprehensive review [Shull & Creton 2004]. Most of these advances came from the very detailed information obtained from the probe test, which provides an entire stress-strain curve and not a unique value, coupled with an in situ observation of the deformation mechanisms of the adhesive layer with a video camera [Creton et al. 2001] [Brown et al. 2002] [Creton 2003].
These results have shown that for a very confined PSA layer which is being deformed in traction, failure is initiated by the formation of cavities at the interface with the probe. These cavities then form a foamed structure, which can be easily deformed to large strains in the tensile direction. Final detachment of the adhesive from the probe occurs when the nonlinear elastic properties of the adhesive display a significant strain hardening due to the finite extensibility of the polymer chains forming the backbone of the adhesive.
Despite its obvious advantages, such as speed of execution and reproducibility [Chuang et al. 1997], the probe test is not yet widely accepted in industry, where adhesive properties are still typically tested with standardized industry tests closer to applications, such as loop tack, peel or shear tests [START_REF] Pstc | Test methods for pressure sensitive adhesive tapes[END_REF]]. It would therefore be economically advantageous to be able to predict the result of a loop tack, peel or shear test, from a simple probe test experiment which typically lasts less than a minute. However, the important conceptual advances that the more fundamental investigations brought, did not yet result in a quantitative correlation between the outcome of the probe test (i.e. a curve of stress as a function of strain) and a value such as loop tack, peel force or shear, and further work is needed in that direction.
In this paper, we explore a different but parallel approach to predict the property of interest, loop tack or peel value, from a probe test curve. This approach is based on the assumption that the information needed to predict the property is contained in the curve obtained with the probe tack test, but needs to be extracted with statistical tools. From the experimental point of view, the requirements are simple: the probe test must be sensitive to changes in polymer structure and formulation that will have an effect on the application property, and the probe test must be reproducible to provide reliable data. The sensitivity of the test to changes in chemical structure and in formulation has been shown in several publications on the subject [Zosel 1985] [Brown et al. 2002] [Lakrout et al. 2001], and is summarized in Figure 1 which shows a typical probe test curve for a PSA. The compression and initial part of the tensile curve are sensitive to the linear viscoelastic properties of the adhesive; the peak in tension and the plateau following the peak are much more sensitive to both adhesive properties and non-linear elastic properties.
Unlike a simple rheology experiment in the linear viscoelastic regime, the probe test deforms the adhesive layer, in both the linear (compression) and the non-linear (tension) regime which will be essential to predict debonding energies [Roos & Creton 2004]. Reproducibility is usually better than for standard PSA tests, but can be further optimized by a careful sample preparation and by the choice of a hemispherical probe, which is insensitive to small misalignments [Chuang et al. 1997] [Crosby & Shull 1997].
Therefore a suitable statistical method should be able to extract the most relevant information from the probe test curves and use them to build a predictive model of a specific property such as loop tack or peel value. It is this path that we explore in our paper. The methodology we use has been developed to extract the most significant part of a data set to build a predictive model of the desired property, see [Rivals & Personnaz 2003a,b].
II. Experimental
Probe tests were performed for 26 different materials representative of commercial permanent PSAs based on SIS (Kraton D-1160 andD-1161, from Kraton Polymers) and tackifing resins (Piccotac 95E, Piccotac 212 and Foral 85-E, from Eastman Chemical Company), see Appendix 1. Three to five repeat tests were made for each material in the same conditions. The probe tests were performed on a TA-XT2 i HR texture analyzer (Stable Microsystems) fitted with a spherical probe having a diameter of 25 mm. The probe was brought in contact with the adhesive layer at a velocity of 10 µm/s, until a compressive force of 1.1 N was reached. It was kept in contact for 0.5 second at this load, and next removed at a debonding speed of 10 µm/s.
The whole force-distance curve (compression and traction) was recorded.
The adhesive films were prepared by transfer coating: a toluene solution of polymer and tackifier was applied on silicone paper using an automatic bar coater. The drying conditions were as follows: 30 minutes at room temperature, followed by 3 minutes at 110°C. The dried PSA film (thickness 30 µm) was then coated with a PET film (Mylar, 23 µm thick). The laminate was conditioned for 24 hours under controlled humidity and temperature (23°C ± 2°C, 50% RH ± 5% RH) before testing. The peel tests were performed using FINAT FTM 1 conditions: 25 mm x 90 mm strips of PSA were applied on stainless steel plates using a 1 kg FINAT standard test roller, and the 180° peel force was measured at a peel rate of 300 mm/min, after 20 minutes. The loop tack tests were performed using FINAT FTM 9 conditions: 25 mm x 210 mm strips of PSA were applied on stainless steel plates. The loop was brought in contact and immediately removed at 300 mm/min. No stick-slip phenomenon was observed during the peel and loop tack tests.
The outcome of the experimental part of the study is a series of force curves with the corresponding values of the peel force and of the loop tack. Typical force curves obtained with 4 probe test repeats are shown on Figure 1, with the mean values of the loop tack and of the peel. Note that each repeat was performed on a fresh sample. The whole data set is described in Appendix 2.
III. Design of the predictive models
This section is devoted to the design of predictive models of the loop tack and of the peel from force descriptors, using the previous data set. Briefly, there are five steps: 1) the study of the variability of the loop tack and peel measurements ; 2) the choice of the candidate descriptors, i.e. of the variables that potentially contain the information about the loop tack and the peel ;
3) the construction of nested models of increasing complexity involving the candidate descriptors and/ or fixed functions of them ; 4) the selection of the model of minimal necessary complexity given the variability of the loop tack and peel values, i.e. involving only the most relevant descriptors ; 5) the estimation of the performance of the model using both crossvalidation and an independent test set.
III.1. Variability of the loop tack and of the peel measurements
The data set involves 26 adhesives; however, there are 2 missing values for peel (curves 277 and 279, see Table 5 in Appendix 2). Thus, the data set size is N = 26 for loop tack, and only N = 24 for peel. For the first 16 adhesives, the loop tack and the peel values {y_k} are the mean of M_k = 3 repeat measurements {y_k,j}, but only the means {y_k} are available (see Appendix 2, Table 5). For the last 10 adhesives (shaded cells in Table 5), the repeat measurements {y_k,j} are available, and the loop tack and the peel values are the mean of M_k = 4 or 5 measurements. For adhesive k, with k = 17 to 26, the estimate (s_k)² of the measurement variance (σ_k)² is given by:
s_k² = Σ_{j=1}^{M_k} (y_k,j − y_k)² / (M_k − 1)

The values of the {s_k} are shown in Figure 2. The measurement noise is clearly heteroscedastic, i.e. its variance depends on the adhesive. In principle, this should lead to a weighted least squares approach. However, since the {y_k,j} and hence the {s_k} are available for only 10 adhesives out of 26, such an approach is not feasible in practice, and the model parameters will be estimated using ordinary least squares, see Appendix 3.
In the following, we will use the average estimate of the standard deviation of the output. As a matter of fact, if the variances are all assumed equal to a common value σ², an estimate s² of σ² is computed on the available repeat measurements:

s² = Σ_{k=17}^{26} (M_k − 1) s_k² / (M − 10),  with M = Σ_{k=17}^{26} M_k
Numerically, we obtain s = 1.19 N/25 mm for loop tack and s = 1.23 N/25 mm for peel, see Figure 2. These values will be used later for the selection of the appropriate model complexity with a lack of fit test, see section III.4 and Appendix 3.
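The pooling of the repeat variances is a one-line computation; the snippet below illustrates it with made-up repeat values (the actual measurements are listed in Table 5), together with the estimate v = s/2 used later in the lack of fit test.

```python
import numpy as np

def pooled_std(repeats):
    """Pooled standard deviation from per-adhesive repeat measurements.
    repeats: list of 1-D arrays, one array per adhesive with repeats available."""
    num = sum((len(y) - 1) * np.var(y, ddof=1) for y in repeats)
    dof = sum(len(y) for y in repeats) - len(repeats)
    return np.sqrt(num / dof)

# Illustrative repeats (N/25 mm), not the paper's data
repeats = [np.array([20.1, 21.5, 22.0, 20.8]),
           np.array([14.9, 16.2, 15.5, 15.1, 16.0])]
s = pooled_std(repeats)
v = s / 2.0        # standard error of a mean of m = 4 repeats (v = s / sqrt(m))
```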
III.2. Choice of the candidate descriptors
As stated in the introduction, it is assumed that the information about the loop tack and the peel properties is contained in the force-displacement curves measured during a probe test. Since the displacement is the same for each test and hence for each adhesive, only the force as a function of time is useful for the prediction of the loop tack and of the peel values. In the following, the dimension of the descriptor space is the number of time instants at which the force values are considered, between the contact creation and the time corresponding to the longest test of the dataset. The force measurements are performed every 2 milliseconds, see
Figure 3a which shows one of the curves obtained for the adhesive N°300, the thick curve of Figure 1. We performed a dimension reduction by drastically reducing the time resolution.
One hundred descriptors, i.e. force values, seem enough to grasp the information contained in the force curve, see Figure 3b (each descriptor value is the mean of the 330 corresponding force values). We even consider lower resolutions, see Figure 3c. There is a compromise between losing information and reducing the number of candidate descriptors: as shown in section IV, a good compromise is achieved for a number of descriptors equal to 60.
As stated above, the reproducibility of the probe test is quite high. However, the curves of some adhesives show a significant variability, especially at the end of the curve see Figure 1.
Therefore, we choose to consider the mean of the previous descriptor values over the repeats of the probe test (M k = 3 to 5 repeats).
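In practice this amounts to a block average of each force curve followed by an average over the repeats; the sketch below (our own illustration, with assumed array layouts) summarizes the operation.

```python
import numpy as np

def force_descriptors(repeat_curves, n_desc=60):
    """Reduce probe-test force curves (sampled every 2 ms) to n_desc descriptors.
    repeat_curves: list of 1-D force arrays, one per repeat of the same adhesive."""
    length = min(len(curve) for curve in repeat_curves)
    block = length // n_desc
    reduced = [np.asarray(curve[:n_desc * block], dtype=float)
                 .reshape(n_desc, block).mean(axis=1)
               for curve in repeat_curves]
    return np.mean(reduced, axis=0)        # mean over the 3 to 5 repeats
```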
An alternative choice of descriptors would be that of physically significant characteristics of the curves, such as the force minima and maxima and the corresponding time instants, the area of the curve below and above zero, etc. (see Appendix 4). We also experimented such a choice, but obtained worse results than with the raw force descriptors.
III.3. Construction of nested models involving the most relevant descriptors
We are now in the presence of the descriptors judged potentially relevant for the prediction of the loop tack or of the peel. We denote the candidate descriptors values by the {x i } i=1 to n , the components of the n-descriptor vector x, and by y the mean value of the loop tack or of the peel. The assumption that the information about the property is contained in the candidate descriptors is equivalent to assuming the existence of a model of the form:
y = g(x) + w (1)
where g is an unknown function, and w is a zero mean random variable modeling the output noise studied. The goal is hence to choose a suitable parameterized function f(x, q) and a value q opt of the parameters q such that f(x, q opt ) is close to g(x) in the domain of interest.
A first issue is that, the dimension of x being large (n ∈ [50; 100]), even a simple affine model involves too many parameters (n+1) given the number of measurements (N = 26 for tack and 24 for peel). Thus, the design of the model heavily relies on the selection of the relevant and non redundant descriptors among the candidates. Therefore, the model must be mathematically suitable for the selection of the most relevant descriptors.
Next, the model should also be suitable for the modeling of interactions between different parts of the curve and of possible nonlinearities.
Polynomial models satisfy both conditions: they are linear in their parameters, but nonlinear in the descriptors. A polynomial of degree two is able to represent linear (monomials x_i) and quadratic (monomials (x_i)²) dependencies, as well as interactions between different regions of the curve (cross-product monomials x_i x_j). The value of the degree is often limited to 2 by the available computer memory.¹
A third issue is how to construct polynomials involving an increasing number of the most relevant monomials: this is achieved using the modified Gram-Schmidt orthogonalization procedure [Golub 1996], the question of when to stop this construction being answered in the next subsection. Let us consider a polynomial of fixed degree d of the n candidate descriptors {x i }. For example, if the list of the candidate descriptors is x 1 , x 2 , , x n , the polynomial of degree 2 involves a constant term and the monomials:
x_1, x_2, …, x_n, x_1 x_2, x_1 x_3, …, x_{n-1} x_n, x_1², …, x_n². We denote the N-vector of the mean loop tack or peel values by y, and the N-vectors corresponding to the monomials by the {x_i}. The monomials {x_i} are introduced according to their decreasing contribution to the explanation of y. The monomial j that is considered to have the maximum contribution to the output is the monomial such that |cos(y, x_j)| is the largest; it is also the monomial which decreases most the residual sum of squares, see Appendix 3. The remaining monomials and the output are orthogonalized with respect to this first monomial j using the Gram-Schmidt orthogonalization algorithm. The procedure is repeated in the subspace orthogonal to the first ranked monomial j, and so on.
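For illustration, the two steps, building the degree-2 monomials and ranking them by decreasing |cos| with the output while orthogonalizing the remaining candidates, can be coded in a few lines. This sketch is our own simplified rendition of the procedure (the constant term and the least squares fit of the retained monomials are handled separately, as in Appendix 3), not the authors' implementation.

```python
import numpy as np

def degree2_monomials(X):
    """Columns: x_i, then x_i*x_j for i <= j, for an (N, n) descriptor matrix X."""
    n = X.shape[1]
    cols = [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack(cols)

def rank_monomials(M, y, n_select):
    """Greedy Gram-Schmidt ranking of the columns of M by |cos| with y."""
    M, y = M.astype(float).copy(), y.astype(float).copy()
    selected = []
    for _ in range(n_select):
        cos = np.abs(M.T @ y) / (np.linalg.norm(M, axis=0) * np.linalg.norm(y) + 1e-12)
        cos[selected] = -1.0                       # never re-select a monomial
        j = int(np.argmax(cos))
        selected.append(j)
        q = M[:, j] / np.linalg.norm(M[:, j])
        M -= np.outer(q, q @ M)                    # orthogonalize remaining monomials
        y -= q * (q @ y)                           # and the output
    return selected
```

The selected monomials are then used as the columns of the experiment matrix X of Appendix 3, and ordinary least squares gives the parameters.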
An alternative to this model construction could be a principal component analysis (PCA)
followed by the construction of least squares models having the first principal components as inputs. Such an approach has two main disadvantages. First, PCA is an unsupervised method:
the principal components are computed from the descriptor values only, without using the output values of the property to predict. Thus, the first principal components have no reason to explain the property, whereas, in our procedure, the monomials are introduced precisely in the order of their decreasing correlation with the output property. Next, PCA is a linear method: the principal components are linear combinations of the descriptors. Thus, PCA is not able to capture the influence of nonlinearities or of interactions between descriptors, whereas higher-order monomials do.

¹ As a matter of fact, the polynomial of degree d possesses a number of monomials equal to N_mono(n, d) = Σ_{i=1}^{d} K_n^i = Σ_{i=1}^{d} C_{n+i-1}^i, where K_n^i is the number of i to i combinations of n objects with repetitions, and C_n^i = n!/(i!(n-i)!) is the number of i to i combinations of n objects without repeats. For example, the polynomial of degree d = 2 of n = 100 input variables possesses 5,150 monomials, that of degree d = 3 possesses 176,850 monomials.
In the next subsection, we study when to stop the Gram-Schmidt model construction.
III.4. Selection of the model with minimal necessary complexity
The model with minimal necessary complexity is the model with the smallest number of monomials which achieves a precision, i.e. a mean square error, that is comparable to the variance l² of the output. Each time a monomial is introduced, we can test whether the mean square error of the corresponding polynomial is not too large, using a lack of fit test, whose principle is detailed in Appendix 3. The first polynomial which satisfies the test is selected. The test is performed with a type I risk a = 1%. Since there are M_k = 3 to 5 repeats for each adhesive, we make the approximation that M_k = m = 4, i.e. that l² = σ²/m. This corresponds to an estimate of l equal to v = s/2, i.e. v = 0.59 for loop tack and v = 0.61 for peel.
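In code, this stopping rule reduces to comparing the residual mean square of the current polynomial with the independent variance estimate through a Fisher quantile (Appendix 3). The sketch below is a generic illustration with assumed arguments, not the software used in the study.

```python
import numpy as np
from scipy.stats import f as fisher

def lack_of_fit(residuals, p, v2, q_dof, alpha=0.01):
    """True if the p-monomial model shows a significant lack of fit.
    residuals: errors of the current model on the N adhesives;
    v2: independent variance estimate with q_dof degrees of freedom."""
    r = np.asarray(residuals, dtype=float)
    N = len(r)
    ratio = (r @ r) / (N - p) / v2
    return ratio > fisher.ppf(1.0 - alpha, N - p, q_dof)

# Monomials are added one by one until lack_of_fit(...) first returns False.
```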
In order to quantify the significance of the force descriptors involved in the selected model, we define the sensitivity of its output with respect to each force descriptor as:

d_j = (1/N) Σ_{k=1}^{N} ∂f(x, q_LS)/∂x_j |_{x=x_k} = (1/N) Σ_{k=1}^{N} Σ_{i=1}^{p} q_LS,i ∂x_i/∂x_j |_{x=x_k},   for j = 1 to n

where N denotes the number of PSAs, f(x, q_LS) is the model output, n is the number of descriptors, p is the number of selected monomials, x_i is the i-th selected monomial, and q_LS,i is the corresponding least squares parameter estimate.
III.5. Model performance estimation
The performance index of the model is defined as its mean error over all possible adhesives in the family of interest, i.e. as the root of the expectation of its mean square error E((y − f(x, q_LS))²).
Since the data set is small, we provide a leave-one-out (LOO) cross validation estimate of this performance index. The LOO estimate for a model obtained with N adhesives is defined as follows: the k-th LOO error is the error for the k-th adhesive made by the model obtained when the k-th adhesive is left out for the model parameter estimation, and the LOO estimate is the root of the mean of the N squared LOO errors. These LOO errors can be computed conveniently as functions of the residuals of the model obtained on the whole data set (see Appendix 3). The model sensitivities are shown in Figure 5.
x 24 , x 44 x 45 , x 30 x 57 . Most of the monomials are hence cross-product monomials. This illustrates the benefits of the statistical approach, whose only assumption is that the information is contained in the raw descriptors: it discovers interaction effects between different parts of the curve which were not predictable a priori, and which are still difficult to analyze.
Note also that the tack model involves descriptors located at the end of the curves, i.e. descriptors whose variability is generally high, whereas the peel model does not. Though it may seem surprising, the fact that relevant descriptors are located after the contact of certain tests is over is not pathological. It simply means that these descriptors will not contribute to the output of the model for these tests, whereas for other tests, they will contribute (positively or negatively).
The numbers of parameters of the models selected with the LOF test are too large given the training set size (N = 26 or 24), and the discrepancy between the RMSE_whole and the RMSE_LOO is too large (a factor of 4 for tack, of 2 for peel). As a matter of fact, when increasing the number of descriptors, the correlation will always be better on the training set, but not necessarily on unseen PSAs. Furthermore, we do not have total confidence in the output variance estimate used for the LOF test, since the detail of the measurements is unknown for 16 adhesives out of 26.
IV.2. Achieving a bias/variance compromise
Thus, we retained models with fewer monomials than those selected with the LOF test, but for which the discrepancy between the RMSE_whole and the RMSE_LOO is smaller. In other words, we pay for reducing the model variability (variance) by accepting a lack of fit (bias). For tack, we retain a model with 9 monomials, and for peel, a model with 7 monomials, see Table 2.
IV.3. Achieving a bias/variance compromise and using only 60 descriptors
The results are quite stable when the number of descriptors is in the interval [50 ; 100]. We present the results obtained with 60 descriptors, which are summarized in Table 3
V. Discussion
Let us first summarize the results. Table 3 shows that the peel model achieves RMSE LOO and RMSE test twice as large as the standard deviation, with a reasonably small number of parameters as compared to the available number of adhesives. The tack model is less accurate, with RMSE LOO and RMSE test three times as large as the standard deviation, despite the larger number of parameters. The loop tack may be more difficult to predict for two reasons. First because the relationship between the descriptors and loop tack may be more nonlinear. Second because the tack model involves descriptors located at the end of the curves, where the probe test is less reproducible, whereas the peel model does not. Aside from this assessment of the performance of the model from a purely statistical point of view, we can discuss in more details the advantages and limitations of the approach as presented here. A number of points need to be addressed:
-The adhesives for both the training set and the test set were all of the same family of adhesives (SIS + tackifying resins). It remains to be seen whether a model developed for a specific family of PSA would work to predict peel and loop test for a different family of PSA. Our guess is that as long as the interactions between the PSA and surface do not vary much, the model should be applicable to another family of adhesives such as acrylics.
However more tests would be needed to confirm that.
-We did not discuss in this paper the influence of debonding mechanisms on the peel force values. It is well known that in a certain range of peel velocities, many PSA experience a socalled stick-slip debonding behaviour. In this case the peel force is no longer defined as a single value, but rather two values (an initiation force and an arrest force, which depend among other things on the details of the experimental setup). Clearly this type of mechanism cannot be predicted by our model, but one could envision that a different set of descriptors could be used to build a model able to predict the occurrence of stick slip.
-Although indeed the model does not use physically meaningful descriptors, it certainly assumes that the information necessary to predict the peel force or loop test is contained in the probe test curve. It uses in other terms the probe test curve as a signature of the material (like an infrared spectrum) and extracts the relevant bits of the signature to predict the peel force or loop test result. For this approach to be successful, it is important to minimize the variability of all the parameters that are not the properties of the adhesive itself. This means that all adhesive layers should have the same thickness and surface roughness, that the probe used for the probe tests should be the same for all tests, that the experimental conditions for the probe test (probe velocity, contact force and contact time) and for the loop test and peel test should be kept constant.
VI. Conclusion
The results we obtained confirm our starting hypothesis that the information about the loop tack and the peel is indeed contained in the force signal obtained with a probe test. Although the accuracy of the models is not yet optimal, we feel that the proposed procedure is very promising and is likely to provide increasingly more refined predictive models of standard PSA properties. This approach is complementary to an approach based on intuitively identifying the most relevant physically significant descriptors of the force-displacement curves such as the maximum stress or the adhesion energy. It is worthwhile to note that the most relevant descriptors found by the empirical model are often coupled and may provide ideas for more complex and better physically based descriptors.
As usual with empirical models, the quality of the model depends on the range and representativity of the data available to construct it, and we expect further improvements from a reliable estimation of the measurement variability for each adhesive, which will allow a correct weighting of the data points, and from the use of a larger, more representative data set. Such a model would have a great potential as a development and screening tool. Probe tests results, obtained easily and rapidly even from rather small amounts of materials, would then guide the synthetic chemist to select the most promising synthesis conditions, and to perform a timeconsuming complete set of tests on these selected formulations only.
We also feel that, because it probes the large strain nonlinear elastic properties of the materials, the probe test measurements may contain some of the information necessary to predict the resistance to shear. This is an industrially very important but notoriously difficult property to predict from the molecular structure of the adhesive, and we plan to apply the same statistical methodology in that direction.
VII. Acknowledgements
We are very grateful to Hilde De Muyck for producing the results that motivated this study, as well as to Jan Vaneecke, who performed the major part of the experimental work.
A2. Description of the data set
A3. Least squares and statistics
We consider a data set of N adhesives represented by the couples {x_k, y_k}_{k=1 to N}. The {x_k} are the force values at n sampling instants, and the {y_k} are the corresponding mean values of the considered property, loop tack or peel. To assume that the information about the property is contained in the candidate descriptors {x} is equivalent to assuming the existence of the model:

y = E(y) + w = g(x) + w

where the mean g is an unknown function of x, and w is a zero mean random variable modeling uncertainty. We further assume homoscedasticity, i.e. that the outputs {y_k} are uncorrelated and have the same variance l².
Polynomial model
The goal is to choose a suitable parameterized function f(x, q) and a value q opt of the parameters q such that f(x, q opt ) is close to g(x) in the domain of interest. Here, we consider a polynomial model with p monomials and parameter p-vector q. We denote the monomials of adhesive k by x k , which is a p-vector, and whose first component is equal to one. For example:
f(x_k, q) = q_1 + q_2 x_15^k + q_3 (x_45^k)² + q_4 x_10^k x_31^k = q_1 x_1^k + q_2 x_2^k + q_3 x_3^k + q_4 x_4^k = x_k^T q
The estimates of this model for the N adhesives of the training set are hence given by:
[ x_1^T ; … ; x_N^T ] q = [ x_1  x_2  …  x_p ] q ≡ X q
where the {x i } are N-vectors. The (N,p) matrix X is known as the experiment matrix.
Assuming that N ≥ p and rank(X) = p, i.e. that the {x_i} are linearly independent, the least squares (LS) parameter estimate q_LS minimizing ||y − X q||² is given by: q_LS = (X^T X)^{-1} X^T y
The residual N-vector r is defined as the N-error vector of the LS model, that is:
r = y − X q_LS
The residual sum of squares is equal to r T r.
Null hypothesis
Let us consider the hypothesis that there exists an unknown parameter vector q* such that:
H_0 : y = E(y) + w = X q* + w, and hence E(w) = 0. This hypothesis is called the null hypothesis (H_0): it is the hypothesis that the monomials {x_i} are sufficient to explain the output property y, i.e. that the model is unbiased. In other words, the subspace spanned by the {x_i} (the image of the matrix X) contains the mean of y. If the monomials {x_i} are not sufficient, H_0 is false, which means that the model is biased. Since we search for an unbiased model, H_0 is the hypothesis we want to test.
If H 0 is true, E(q LS ) = q*. Moreover, the noise being homoscedastic, the residuals provide an unbiased estimate of the variance l 2 of the outputs {y k }:
E[ r^T r / (N − p) ] = l²
Lack of fit test
Suppose that we dispose of an independent estimate v 2 of the variance l 2 that is reliable, i.e.
unbiased: E(v 2 ) = l 2 . Intuitively, if H 0 is true, the residual vector r is essentially due to uncertainty and the following ratio f should be close to 1:
f = [ r^T r / (N − p) ] / v²
On the other hand, if H 0 is false, the model is too simple (biased) and r is not only due to uncertainty but also to a model bias, and the ratio f will be much larger than 1.
If we assume Gaussian outputs, it is easy to test whether f is significantly larger than 1, see [START_REF] Draper | Applied regression analysis[END_REF]] [Rivals & Personnaz 2003b]. Then, if v 2 is obtained from q independent values:
q v² / l² ~ χ²(q)
where c 2 (q) denotes the Pearson distribution with q degrees of freedom. If moreover H 0 is true, we have:
r^T r / l² ~ χ²(N − p)
and hence:
f = [ r^T r / (N − p) ] / v² ~ Fisher(N − p, q)

This leads to the lack of fit (LOF) test. Let us choose a value a for the probability of rejecting H_0 when it is true, the type I risk. Then, we will decide to reject H_0 when:
f = [ r^T r / (N − p) ] / v² > F_{N−p, q}(1 − a)
where F_{N−p, q}(1 − a) denotes the 1 − a quantile of the Fisher distribution with N − p and q degrees of freedom. In other words, if the value of f is too large given a, we will decide that the model is biased, with a risk a of being wrong.
Independent variance estimate
The available individual measurements {y_k,j}_{k=17 to 26} of the property are assumed to have a common variance σ². Hence, we have the following unbiased estimate s² of σ²:

s² = Σ_{k=17}^{26} (M_k − 1) s_k² / (M − 10) = Σ_{k=17}^{26} Σ_{j=1}^{M_k} (y_k,j − y_k)² / (M − 10)

where, for each adhesive k, M_k is the number of repeats, y_k,j is the value of the j-th repeat, y_k is the mean value over the repeats, and M = Σ_{k=17}^{26} M_k.
Assuming Gaussian measurements:
(M − 10) s² / σ² ~ χ²(M − 10)
The outputs being a mean of M_k = 3 to 5 repeats, we choose to approximate M_k ≈ m = 4.
Thus, we can consider l² = σ²/m as the common variance of the outputs {y_k}, and v² = s²/m as its estimate, obtained with M − 10 degrees of freedom.
Leave-one-out errors
The LS parameter vector q_LS of the model is obtained using the whole data set of size N. The residual vector of this model is r = [r_1 … r_N]^T = y − X q_LS. The k-th leave-one-out (LOO)
error is the error made on adhesive k by the model obtained when adhesive k is left out from the data set. The latter does not need to be computed, since the k-th LOO error is a simple function of the k-th residual r k :
e_LOO^k = r_k / (1 − h_k)

where h_k is the k-th diagonal term of the (N,N) orthogonal projection matrix X (X^T X)^{-1} X^T on the image of X, or hat matrix, see [Rivals & Personnaz 2003a].
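This shortcut is straightforward to check numerically: fit the model once on the whole data set, form the hat matrix, and deduce the LOO errors from the residuals and the leverages. The snippet below uses random data purely as an illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(24), rng.normal(size=(24, 6))])   # experiment matrix
y = rng.normal(size=24)

H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix (orthogonal projector on im(X))
r = y - H @ y                             # residuals of the model fitted on all data
e_loo = r / (1.0 - np.diag(H))            # closed-form leave-one-out errors
rmse_loo = np.sqrt(np.mean(e_loo ** 2))
```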
A4. Alternative choice of physical descriptors
We constructed 18 descriptors of the shape of the force signal as a function of time. These new inputs are the following :
1) The area below zero.
2) The area above zero.
3) The initial slope of the curve.
4) The slope of a line joining the point at which the force reaches 1N and the point at which the force reaches its first maximum.
5) The slope of a line joining the point at which the force reaches its first maximum and the point at which the force reaches the next minimum.
6) The slope of a line joining the point at which the force reaches this minimum and the point at which the force reaches the second maximum.
7) The slope of a line joining the point at which the force reaches its second maximum and the point at which the force reaches zero.
8) The instant when the force reaches 1N.
9) The instant when the force crosses zero.
10) The instant when the force reaches its first maximum.
11) The instant when the force reaches the next minimum.
12) The instant when the force reaches its second maximum.
13) The instant when the force reaches zero.
14) The value of the first maximum of the force.
15) The value of the next minimum of the force.
16) The value of the second maximum of the force.
17) The number of force maxima.
18) The curvature at the minimum.
Figure 1. Typical force curves obtained with four probe test repeats (curves N°300). The loop tack test provides a mean value of 21.2 N/25 mm, and the peel test a mean value of 15.5 N/25 mm.
Figure 2. Local estimates {s_k} of the standard deviations of the measurements obtained with repeats for the last 10 adhesives of Table 5, and mean estimate s (dashed line): a) loop tack, b) peel.
Figure 3. Thick curve of Figure 1 (N°300): a) force measurements performed every 2 ms; b) 100 candidate descriptors; c) 60 candidate descriptors.
Figure 4. The vector {x_4} is such that |cos(y, x_j)| is the largest, and is introduced first.
Figure 5. Sensitivities of the models selected with the LOF test (100 descriptors).
Figure 6. Sensitivities of the models achieving a bias/variance compromise (100 descriptors).
Figure 7. Sensitivities of the models achieving a bias/variance compromise (60 descriptors).
Figure 8. Predicted/measured tack, for 20 training (crosses) and 6 test adhesives (filled circles).
Figure 7 displays the model sensitivities. Figures 8 and 9 display the outputs of the models for the training and test adhesives. The LOO and test errors of the tack model are even smaller than with 100 descriptors, probably due to the averaging effect on the end of the curve.
Figure 10. Some of the descriptors.
IV. Application to the prediction of the loop tack and peel values
IV.1. Using the lack of fit test
The root mean square error (RMSE) obtained on the whole data set is denoted by RMSE_whole, and that of LOO by RMSE_LOO. We also provide the RMSE obtained when estimating the parameters of the model on a training set of 20 adhesives for tack and 18 adhesives for peel, and testing the model on an independent test set of 6 adhesives corresponding to a series of different formulations (see Appendix 2). The RMSE obtained on the training set is denoted by RMSE_train, and that on the test set by RMSE_test. A good model is such that RMSE_LOO/RMSE_whole and RMSE_test/RMSE_train are not much larger than 1. For loop tack, with a degree 1, 15 monomials are needed for the lack of fit (LOF) test, but the RMSE_LOO and the RMSE_test are large. With a degree 2, only 11 monomials are needed, and all the RMSEs are acceptable. For peel, with a degree 1, 16 monomials are needed for the LOF test, but the RMSE_LOO and the RMSE_test are also large. With a degree 2, only 9 monomials are needed, and all the RMSEs are acceptable. The performance of the tack and peel models of degree 2 is summarized in Table 1. We note that the models obtained with the LOF test possess either a RMSE_LOO or a RMSE_test that is large compared to the RMSE_whole or to the RMSE_train.
Table 1. Model of degree 2 selected with the LOF test (100 descriptors).
output property | nb monomials/descriptors | nb PSAs training/test | output s.e. estimate v (N/25 mm) | RMSE_whole (N/25 mm) | RMSE_LOO (N/25 mm) | RMSE_train (N/25 mm) | RMSE_test (N/25 mm)
loop tack | 11/16 | 20/6 | 0.59 | 0.39 | 1.44 | 0.40 | 0.54
peel | 9/14 | 18/6 | 0.61 | 0.64 | 1.17 | 0.49 | 1.44
The discrepancy between the RMSE_whole and the RMSE_LOO is smaller, and the number of parameters is more reasonable. However, the RMSE_LOO and the RMSE_test of the tack model are still large as compared to the output standard error estimate v.

Table 2. Model of degree 2 achieving the best bias/variance compromise (100 descriptors).
output property | nb monomials/descriptors | nb PSAs training/test | output s.e. estimate v (N/25 mm) | RMSE_whole (N/25 mm) | RMSE_LOO (N/25 mm) | RMSE_train (N/25 mm) | RMSE_test (N/25 mm)
loop tack | 9/13 | 20/6 | 0.59 | 1.26 | 2.86 | 1.07 | 2.43
peel | 7/10 | 18/6 | 0.61 | 0.93 | 1.24 | 0.83 | 1.20
Table 3. Model of degree 2 achieving the best bias/variance compromise (60 descriptors).
output property | nb monomials/descriptors | nb PSAs training/test | output s.e. estimate v (N/25 mm) | RMSE_whole (N/25 mm) | RMSE_LOO (N/25 mm) | RMSE_train (N/25 mm) | RMSE_test (N/25 mm)
loop tack | 9/14 | 20/6 | 0.59 | 1.38 | 2.19 | 1.30 | 2.00
peel | 7/11 | 18/6 | 0.61 | 0.88 | 1.14 | 0.79 | 1.28

The 9 monomials of the tack model are, in the order of relevance: x_2 x_54, x_4 x_39, x_3 x_4, x_27, x_33 x_49, x_1 x_58, x_4 x_33, x_22 x_46, x_16 x_35. The 7 monomials of the peel model are: x_41 x_43, x_3 x_5, x_38 x_43, x_3 x_34, x_45², x_28 x_48, x_35 x_42. Most of them are still cross-product monomials.
Table 5. The available data set. (*) Standard deviations estimated with repeats, if available. The shaded cells correspond to the adhesives for which the standard deviations are available.
| 38,695 | [ "737642" ] | [ "45449", "541848", "45451", "45451", "45451" ] |
01725213 | en | [ "chim" ] | 2024/03/05 22:32:16 | 2014 | https://imt-mines-albi.hal.science/hal-01725213/file/78%20-%20Weiss-Hortala%20-%20depolym%20composite.pdf | Elsa Weiss-Hortala
Yi Rong
Andréa Oliveira Nunes
Yannick Soudais
Radu Barna
Hydrothermal Depolymerization of Carbon-Based Composites
The global demand for carbon fiber is increasing, and therefore an increasing amount of carbon fiber reinforced polymer waste is produced. Carbon fibers are high added value materials which should be recycled at the composite's end of life to offer positive impacts on the environment and on economic development. Hydrothermal treatments in water or in solvents are gaining more attention as a way to recycle carbon fibers and to valorize the polymer matrix into useful materials at the same time. This study focuses on the impact of operating conditions on the depolymerization of the matrix and on the surface quality of the fibers. Carbon fiber reinforced PA6 (thermoplastic polymer, 45 wt. %) is depolymerized in sub- and supercritical water (Tc = 374°C and Pc = 22.1 MPa). The experiments were carried out in batch reactors of 5 mL at various operating conditions. All of the tested operating conditions are suitable to remove the polymer from the carbon fibers. A minimum of 39 wt. % of resin is removed in all cases after 30 min of treatment, while the surface quality of the fibers remains close to their virgin state. The increase of pressure and temperature favors the kinetics; however, reaction time plays an important role in the carbon distribution. Indeed, a repolymerization reaction (carbonization) is presumed to occur and is discussed.
INTRODUCTION
Although composite materials appeared during World War II, Carbon Fiber Reinforced Polymers (CFRP) first found use in the aeronautic industry around the eighties. Currently, CFRP are gaining more attention due to their key use in various industrial sectors such as the aerospace, automotive and electronics industries, as well as the wind energy sector and sports equipment. Thus, the global demand for carbon fiber is increasing worldwide. The production was estimated to be 46 000 tons in 2011 and it is expected to reach 140 000 tons in 2020 [START_REF] Roberts | Materials technologies publications[END_REF]. At the same time, the amount of waste generated increases. However, these hazardous wastes have to follow the EU Directives (1993/31/EC and 2000/53/EC) that regulate their end-of-life. These directives imply a management of the composite wastes, with a special effort to recycle the carbon fibers [START_REF] Conroy | [END_REF]. CFRP can be classified into two categories according to the polymer matrix: thermoplastic and thermosetting. The latter is the most difficult to manage with thermochemical treatments due to its properties. Indeed, an increase of temperature results in a reinforcement of the three-dimensional network of the polymer. On the contrary, thermoplastics are able to melt, which can favor the recycling of the carbon fibers. Carbon fibers are high added value materials, as their price is 50 to 150 times higher than that of glass fiber. Their recycling in circular processes offers highly positive impacts on the environment and on economic development [3]. Indeed, carbon fibers contained in composite materials would be reused after recovery rather than destroyed, e.g. by incineration. Recycling of carbon fibers requires a "deconstruction" process consisting in removing the polymer matrix and returning the embedded fibers to a state close to virgin. New processes are in development to respond to these requests: pyrolysis [4], steam-thermolysis [5] and solvolysis [6]. Among solvolysis processes, sub- and supercritical fluids such as water, methanol, ethanol or propan-1-ol have been studied, either pure or in mixtures [6][7][8], to remove the resin from the carbon fibers. However, carbon fiber recycling could be achieved simultaneously with chemicals or energy recovery (energy vector). Hydrothermal treatments could open an interesting route for the simultaneous recycling of carbon fibers and the valorization of the polymer matrix into building blocks. To achieve the depolymerization process, i.e. resin removal and organic molecules recovered in the liquid phase, water was particularly considered as a benign option compared to other solvents. Some recent studies were carried out in water at sub- or supercritical conditions (Tc = 374°C and Pc = 22.1 MPa) for thermosetting or thermoplastic removal [6,[9][10][11][12][13]. The main focuses of these studies were the resin removal and the properties of the recovered carbon fibers (surface and mechanical properties) at various operating conditions, including alkali catalysts. Oxidation reactions were also pointed out to explain the degradation of the carbon fibers, which results in a degradation of mechanical properties [12]. The literature demonstrates a lack of knowledge as far as chemicals recovery is concerned. Onwudili et al. discussed chemicals recovery in the liquid phase (phenol and aniline) by using an alkali catalyst [13], while Morin et al. mentioned the decomposition of epoxy resin into lower molecular weight organic compounds [6].
Therefore our study concerns the depolymerization of CFRP with regard to carbon fiber and chemicals recovery. The literature reports studies carried out with thermosetting rather than thermoplastic resins [6]. However, the development of thermoplastic composites is nowadays twice as fast as that of thermosetting ones. The present work concerns the use of carbon fiber reinforced Polyamide 6 (PA6, [-NH-C 5 H 10 -CO-] n ), which is an aliphatic thermoplastic. This composite was developed for high-productivity applications in the automotive, sport and leisure, aerospace and industry markets. Experiments were carried out to evaluate the effect of temperature, pressure and reaction time on the resin removal and on the distribution of organic carbon. The present paper discusses only the effect of some operating conditions on the hydrothermal depolymerization of the resin, towards resin removal and the ability to recover chemicals.
MATERIALS AND METHODS
Raw material:
Cut-offs of Carbostamp UD Tape composites (Torayca T700S MOE carbon fibers and PA6 resin equal to 45 wt. %) are used as raw material. Strips of composite, with a thickness of 0.28 mm, are provided by a composite manufacturer. The resin in the cut-off strips represents 40-45 wt. % of the composite. As the resin is not only composed of PA6 but also contains mineral or other organic fillers, the total mass of resin is not only due to PA6. A complementary experiment has been carried out in our laboratory to evaluate the mass of polymer in the composite using chemical degradation in sulfuric acid. The average mass loss reached 42.45 wt. %. This value is lower than that indicated in the composite properties. Two reasons can explain this difference. On the one hand, the raw material comes from cut-offs, meaning that the resin impregnation did not reach its steady state and thus its intended quantity in the composite. On the other hand, the resin (45 wt. %) contains PA6 as well as minerals, metals and other compounds which are not necessarily degraded during the thermochemical or chemical reactions carried out. Thus the total mass loss of the composite is lower than the theoretical value.
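As a minimal illustration of this mass balance (a sketch only, using the figures quoted above and assuming that the acid treatment degrades the polymer but not the fillers):

```python
# Compare the nominal resin content with the measured mass loss in sulfuric acid.
nominal_resin_wt = 45.0      # wt. % of "resin" declared for the composite
measured_mass_loss = 42.45   # wt. % lost during chemical degradation

# If only the polymer is degraded, the difference gives an upper bound on the
# non-degradable part of the resin (mineral/organic fillers, metals, ...).
non_degradable_wt = nominal_resin_wt - measured_mass_loss
print(f"Degradable polymer fraction : {measured_mass_loss:.2f} wt. %")
print(f"Non-degradable fillers (max): {non_degradable_wt:.2f} wt. %")
```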
Batch reactor and experimental protocol:
Experiments in sub- and supercritical water were carried out in stainless steel 316 mini-autoclaves (MA) of 5 mL. A NABERTHERM oven was preheated at the desired temperature (350 or 400°C) before introducing the reactors. The heating rate was around 40°C min -1 and the plateau duration varied from 0 to 120 min. The pressure reached 25 MPa once the temperature stabilized in the reactors. The reactors were filled with ultrapure water and composite. Taking into account the inner volume of the reactors, the mass of composite is about 0.1 g. The volume of water was adjusted to reach the desired temperature/pressure couple. At the end of the experiments the three phases are separated and collected. The mass of the dry solid residue was then compared to its initial mass to evaluate the total mass loss.
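For illustration, the water loading needed to reach a given temperature/pressure couple in a closed, fixed-volume reactor can be estimated from the water density at the target conditions (a minimal sketch; the densities below are approximate steam-table values, not data from this study):

```python
# Estimate the water mass to load so that the 5 mL autoclave reaches 25 MPa
# at the target temperature: m_water ~= rho_water(T, P) * V_reactor.
REACTOR_VOLUME_ML = 5.0

# Approximate water densities at 25 MPa (g/mL), from steam tables.
WATER_DENSITY_AT_25MPA = {350: 0.63, 400: 0.17}

for temperature_c, rho in WATER_DENSITY_AT_25MPA.items():
    water_mass_g = rho * REACTOR_VOLUME_ML
    print(f"{temperature_c} C / 25 MPa -> load about {water_mass_g:.1f} g of water")
# These estimates are consistent with the loadings reported in Tables 1 and 2
# (about 0.83 g at 400 C and 3.13 g at 350 C).
```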
Analyses:
The efficiency of the treatment was evaluated through the analyses of the liquid and solid phases. The carbon fiber clean-up was observed using an Environmental Scanning Electron Microscope (ESEM, Philips XL30 FEG) with a back-scattered electron (BSE) detector. Microanalysis of some regions of the surface was also carried out with an EDS detector. Total Organic Carbon in the liquid phase was measured with a TOC analyzer (Shimadzu TOC-5050). TOC values are "normalized" to the initial mass of composite so that they can be compared to each other.
RESULTS
Prior to presenting and discussing the results, Figure 1 shows ESEM pictures of the composite (a) and of the virgin carbon fibers (b). Thus, the clean-up of the carbon fibers can be successfully observed from ESEM pictures. The efficiency of hydrothermal conversion of organics is basically evaluated by studying the effect of operating conditions such as reaction temperature, pressure, reaction time and catalyst. In the present work, the first three parameters were studied and the focus of this paper is on reaction time and temperature (sub- and supercritical media).
Conversion in supercritical water:
The experiments were carried out at 400°C and 25 MPa in the range of 15-120 min of reaction time. The mass loss of the composite increased rapidly with reaction time, reaching 39 wt. % in 15 min (Table 1). In the range of reaction times studied, the mass loss varied between 39 and 41 wt. %. These mass losses are close to the 42.5 wt. % obtained during the thermal or chemical experiments; however, this maximum mass loss was not reached under the present operating conditions. Although the maximum value was reached at 60 min of reaction time, the differences in mass loss at the various reaction times did not exceed 2%. Thus the trend observed is an increase of the mass loss until 60 min followed by a slight decrease. Total Organic Carbon was also measured in the liquid phase. This value increased rapidly as the depolymerization starts during the first 15 min, then it slightly decreased until 60 min. Between 60 and 120 min, TOC values increased in the liquid phase. Using a TOC value of 24 g L -1 , the amount of organic carbon in the liquid phase represents 20 mg, while an estimation of the organic carbon from the resin would represent less than 25 mg. Thus at 60 min the mass loss is maximum while the amount of carbon in the liquid phase is minimum. This experiment was reproduced 3 times and the result was confirmed. At the end of the experiments considered, the volume of gas produced was too low to be measured. Especially in the case of 60 min reaction time, the high resin removal and the low TOC value would imply a transfer of carbon to the gas phase. The concomitant variations of TOC and mass loss of the composite are not directly related and can be explained using microscopy imaging of the composite surface. Degradations of the carbon fibers, such as micro-fissures, are not highlighted with this procedure. A microanalysis with the EDS detector indicates that some bright spots are due to minerals (platinum, titanium or alkali compounds) while the extended zones contain carbon, oxygen and sometimes alkalis. These latter zones would be attributed to resin or other organic compounds. As the number and the size of bright zones increased with reaction time, one can suppose that resin, modified resin or a new polymer is deposited on the surface. In this last case, repolymerization reactions would be involved, especially due to phenols. However, the increase of TOC in the liquid phase is related to the liquefaction of the whole composite. At long reaction times, cleaned carbon fibers can be attacked by supercritical water. However, ESEM pictures taken with the BSE detector are not able to highlight degradations on the carbon fiber surface. The increase of the TOC value in the liquid could be explained by the conversion of the carbon fibers, but this assumption is not confirmed by the ESEM pictures. The results presented were obtained in supercritical water, which is supposed to improve reaction kinetics. The work is completed at subcritical conditions to study the impact on chemicals recovery.
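For clarity, converting a measured TOC value into a mass of organic carbon is a simple product of concentration and liquid volume (a minimal sketch; the liquid volume is approximated from the initial water mass, assuming a density of 1 g/mL after cooling):

```python
# Mass of organic carbon in the liquid phase from the measured TOC.
toc_g_per_l = 24.0                 # TOC at 400 C / 25 MPa / 60 min (Table 1, rounded)
initial_water_mass_g = 0.8327      # water loading from Table 1
water_volume_l = initial_water_mass_g / 1000.0  # ~0.83 mL, assuming 1 g/mL

carbon_in_liquid_mg = toc_g_per_l * water_volume_l * 1000.0
print(f"Organic carbon in the liquid phase: {carbon_in_liquid_mg:.0f} mg")
# ~20 mg, to be compared with the < 25 mg of organic carbon expected from the resin.
```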
Conversion in subcritical water:
The experiments were carried out at 350°C and 25 MPa in the range of 15-120 min of reaction time. The mass loss of the composite increased with reaction time and reached about 40 wt. % at 30 min (Table 2). In the range of reaction times studied, the mass loss varied between 40.6 and 41.7 wt. %, which is close to the theoretical maximum mass loss of 42.5 wt. %. After 15 min of reaction time, the degradation is about 13 wt. %, which corresponds to about one third of the polymer mass in the composite. Thus 30 min of reaction time is required to reach a high depolymerization efficiency. Total Organic Carbon was also measured in the liquid phase. This value increased with the depolymerization reaction and remained constant from 60 to 90 min. In this case, the mass of carbon recovered in the liquid phase equals 34 mg. One can also notice that the TOC value increased at 120 min together with the resin removal. In the literature, research works discussed that long reaction times and molecular oxygen could result in an attack of the carbon fibers [12]. The near-supercritical water used for these experiments could produce a significant amount of radicals, especially for long reaction times [14]. No gaseous phase was obtained at this temperature, or it was too small to be determined. The ESEM pictures in Figure 3 show that the clean-up is not visibly achieved at 15 min of reaction time; the resin is observed rather than the carbon fibers. However, the morphology of the surface has changed and the process seems to be heterogeneous due to the thickness variability of the remaining resin. The spherical particles observed, which are mainly composed of carbon, suggest that repolymerization reactions could occur before the cooling step. Although some pieces of resin are observed for the other reaction times, the polymer seems to be efficiently removed from 30 min onwards. Even at high magnification, the surface of the fibers does not seem to be affected by acid or basic attack, and the diameter of the carbon fibers did not vary. As for the supercritical conditions, no carbon fiber degradation was highlighted using ESEM. On the contrary, confocal microscopy highlights that a large number of "defects" appeared on the surface of the remaining carbon fibers at 120 min (Figure 4). A chemical analysis of these regions indicates that they are composed of organic compounds which are slightly different from the original resin. This chemical observation could explain why the modification of the surface was not highlighted with ESEM. Moreover, the depth of these defects is about 1 µm while the carbon fiber diameter is roughly 6 µm. These "defects" are also observed on carbon fibers recovered after 30, 45 and 60 min, with a quite similar chemical composition. However, their number increased with reaction time. Complementary work is in progress to determine the nature of these chemical compounds and the statistical distribution of these degradations.
CONCLUSION
Depolymerization of carbon fiber reinforced PA6 was carried out in this study. Among the experiments performed, the effect of temperature and reaction time was discussed. Based on the resin removal, the hydrothermal conversion of the resin is almost complete under a pressure of 25 MPa after 15 and 30 min at 400 and 350°C respectively. These moderate temperatures result in a liquefaction process, as only a small fraction of the resin was converted to the gaseous phase. A long reaction time does not seem to be required to achieve complete resin degradation, as some polymerization reactions could occur and modify the carbon fiber surfaces. The main part of the organic carbon from the resin is transferred to the liquid, and Total Organic Carbon (TOC) values seem to indicate a higher content at 350°C, meaning that the carbon fibers were attacked.
Figure 1: ESEM pictures of (a) composite and (b) virgin carbon fibers.
Figure 2: ESEM pictures of remaining composite after a treatment of: (a) 15 min, (b) 30 min, (c) 60 min and (d) 90 min at 400°C and 25 MPa.
Figure 3: ESEM pictures of remaining composite after a treatment at 350°C and 25 MPa.
Figure 4: Picture of remaining composite using confocal microscopy after a treatment of 120 min at 350°C and 25 MPa.
Remaining composites from the experiments have been observed using ESEM. The images presented in Figure 2 demonstrate that fibers are separated from each other from 15 min of reaction time. Back-scattered electrons (BSE) are analyzed by the detector, meaning that the contrast results from different chemical natures. Thus the bright spots observed are related to chemical compounds that differ from the carbon fibers, such as resin, minerals and so on.

Table 1: Results of composite mass loss as a function of reaction time.
Temperature (°C)  Pressure (MPa)  Reaction time (min)  Initial mass of water (g)  Total mass loss (%)  TOC (g L-1)
400 25 0 0.8327 0.00 0
400 25 15 0.8327 39.00 27.01
400 25 30 0.8327 40.60 27.14
400 25 60 0.8327 41.03 23.75
400 25 90 0.8327 39.35 27.72
400 25 120 0.8327 39.26 26.97
Table 2: Results of composite mass loss as a function of reaction time.
Temperature (°C)  Pressure (MPa)  Reaction time (min)  Initial mass of water (g)  Total mass loss (%)  TOC (g L-1)
350 25 0 3.13 0.00 0.00
350 25 15 3.13 13.31 1.11
350 25 30 3.13 40.62 12.90
350 25 45 3.13 41.27 10.80
350 25 60 3.13 41.21 10.87
350 25 90 3.13 41.04 10.87
350 25 120 3.13 41.74 11.28
While the ESEM pictures confirmed the resin removal, this technique did not highlight degradations on the carbon fibers. Confocal microscopy was then used for the sample obtained at 350°C, 25 MPa and 120 min, and numerous contrasted regions were observed on the carbon fibers. These regions are assumed to be composed of organic molecules (chemical analysis) that are close to the original PA6 resin. This observation would then explain why the carbon fibers seem to be recovered in their original state using ESEM. Mechanical tests were not performed due to the limited size of the fibers recovered. The next step of this study is to determine the characteristic organic functions of this "new" compound and its mechanism of formation (repolymerization reactions or resin transformation directly at the surface), and to study to what extent the mechanical properties are related to these surface modifications. | 18,572 | [
"18950",
"17911",
"748738"
] | [
"242220",
"242220",
"242220",
"242220",
"242220"
] |
01742998 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2014 | https://imt-mines-albi.hal.science/hal-01742998/file/189-Fullpaper-Ducousso.pdf | M Ducousso
E Weiss-Hortala
M J Castaldi
A Nzihou
CHAR AS CATALYST FOR HOT GAS CLEANING AND UPGRADING: ROLE OF THE OXYGENATED SITES ON THE CHAR SURFACE
Keywords: char, catalyst, Temperature Programmed Desorption, methane cracking, oxygenation
The goal of this study is to investigate the oxygenation of the char surface in terms of selectivity and efficiency. Then, the catalytic performances of the functionalized chars are evaluated through methane cracking tests.
The oxygenation process has been carried out at 280°C for different durations (1 h, 2 h, 4 h and 8 h) under a 45 mL/min flow of a mixture composed of 8% O 2 / 92% N 2 . Raw chars and oxygenated chars have been analysed by FTIR and TPD. Methane cracking experiments have been carried out at 700°C over a bed of raw chars and oxygenated chars in a ChemBet instrument. The outlet gas has been analyzed continuously during the experiments via a micro-GC.
The O 2 gas-phase oxygenation process has shown good results, since the global amount of oxygenated groups at the surface has been enhanced. In particular, groups such as anhydride, ether, phenol and lactone have been increased. The longer the oxygenation process, the higher the amount of oxygenated groups added. This emphasizes that a functionalization of the char surface using a mixture of 8% O 2 / 92% N 2 is possible. First results of methane cracking confirmed the catalytic performance of chars, since methane decomposed to hydrogen at a lower temperature than that required for non-catalytic cracking. The efficiency is evaluated with regard to hydrogen production and the impact of the oxygenated groups. However, methane cracking experiments conducted on oxygenated chars did not lead to a clear increase in the hydrogen production. The oxygenated groups which had been added seem to have desorbed before reaching the methane cracking temperature. Further investigation of the impact of the oxygenation on the textural properties needs to be carried out. [START_REF] Klinghoffer | Catalyst Properties and Catalytic Performance of Char from Biomass Gasification[END_REF]
1-INTRODUCTION
The development of biomass gasifiers at the industrial scale is slowed down by scientific and technological bottlenecks. During the gasification process to obtain syngas, some co-products are formed due to the pyrolysis of the lignocellulosic materials, which causes a decrease in the syngas yield and leads to additional, costly clean-up steps. At gasification temperature (ca. 700 to 900°C) all co-products are in their gaseous form, while downstream of the reactor some co-products called tars start to condense, which leads to fouling and blocking of process equipment such as turbines and engines [START_REF] Devi | Primary measures to reduce tar formation in fluidised-bed biomass gasifiers[END_REF]. A non-negligible part of the gas phase is also contaminated by co-products and pollutants. The main products are carbon dioxide, methane and non-condensable tars, which may contribute up to 20% of the volatile material [START_REF] Nzihou | A review of catalysts for the gasification of biomass char, with some reference to coal[END_REF]. To a small extent, pollutants such as nitrogen or sulfur oxides are also present. A significant effort has already been made to solve the issue of hot gas cleaning and two different routes have been developed. It can be achieved either by preventing the formation of the co-products (primary treatment), or by their destruction in a subsequent step (secondary treatment). Unfortunately, when primary methods are successful, the amount of syngas produced is comparatively small [START_REF] Nzihou | A review of catalysts for the gasification of biomass char, with some reference to coal[END_REF]. For secondary treatment, several methods could be used, such as thermal, mechanical (cyclone, ceramic filter or scrubber) or catalytic cracking. In the case of catalytic secondary treatment, three broad groups were identified: dolomite catalysts; alkali metal and other metal catalysts; and nickel catalysts. These mineral and metal catalysts have been selected because they are successful in transforming tars into fixed gases. Yung et al. state: "The main limitation for hot gas catalytic tar cracking is catalyst deactivation. Deactivation occurs from both physical and chemical processes associated with the harsh reaction conditions and impurities in the feed stream. Attrition, coking, and sulfur poisoning are the primary deactivation mechanisms that affect the efficient catalytic conditioning of biomass-derived syngas" [START_REF] Yung | Review of Catalytic Conditioning of Biomass-Derived Syngas[END_REF]. Metal catalysts are expensive materials and their quick deactivation is a major issue for the economic viability of the gasification process.
An alternative process has recently been investigated. It consists in using char, the solid residue from biomass gasification, as a catalyst for hot gas cleaning. Char is a very good candidate as it is a cheap, on-site feedstock. Previous investigations have highlighted its catalytic performance for hydrocarbon degradation [START_REF] Klinghoffer | Utilization of char from biomass gasification in catalytic applications Naomi Klingho ff er[END_REF], [START_REF] Wang | Char and char-supported nickel catalysts for secondary syngas cleanup and conditioning[END_REF]. Its high surface area and porosity are considered to be important factors of its catalytic efficiency. Minerals and metals such as calcium and iron are naturally present in chars and may be active sites for hydrocarbon cracking. However, the literature states that the cracking mechanism at the char surface is very complex and that different active sites may compete, such as metals, oxygen atoms and defects in the carbon matrix.
Extensive use of char as a catalyst first requires understanding the role of the different active sites at the char surface and being able to enhance their amount when needed. This study focuses on the oxygenated groups. The first objective is to investigate a process to oxygenate the char surface and to evaluate its impact on the oxygenated groups.
Then, the catalytic performances of raw and oxygenated chars are evaluated. Methane cracking has been chosen because methane is, with carbon dioxide, the main co-product in the gas phase. Moreover, its cracking upgrades the syngas ratio. Firstly, a parametric study has been performed on raw chars. The impact of temperature, methane concentration, mass of char and total flow rate has been investigated. From these results, kinetic parameters have been determined. Then, the influence of oxygen groups on the methane cracking yield, selectivity and deactivation has been studied.
2-MATERIALS AND METHODS
Generation of chars:
Char has been produced in a fluidized bed. Approximately 65 g of poplar wood have been gasified at 750°C for 30 minutes under a 90% H 2 O / 10% N 2 atmosphere at a flow rate of 1.5 m 3 /h. The particles have then been crushed to a diameter in the range of 100 to 500 μm.
Methane cracking experiments:
Methane cracking experiments have been carried out in a ChemBet Pulsar instrument coupled with an Inficon micro gas chromatograph. A first set of experiments has been carried out only on raw chars to perform a parametric study. Four parameters have been studied: temperature (650, 700 and 800°C), mass of char (20, 50 and 100 mg), methane concentration (7, 15 and 21%) and flow rate (15 and 45 mL/min). A second set consisted in comparing methane cracking experiments on raw chars and oxygenated chars.
Prior to reaction, chars have been degassed up to 900°C at a heating rate of 20°C/min. Degassing has been done before introducing the methane, to be sure that all the hydrogen observed during the reaction comes from methane cracking and not from desorption from the surface caused by the heat treatment. Figure 1 shows that hydrogen and carbon monoxide desorbed from the surface between 700°C and 900°C during the degassing. The methane cracking experiments used for the comparison between raw char and oxygenated char have been performed at 700°C, and the degassing has been done under nitrogen up to 700°C at a heating rate of 20°C/min.
Char oxygenation process: 100 mg of chars have been oxygenated in the ChemBet Pulsar reactor under an 8% O 2 / 92% N 2 mixture at a flow rate of 45 mL/min for different durations.
Chars surface analyses:
TPD: TPD of 50 mg of raw char and oxygenated chars has been performed in the ChemBet Pulsar instrument under 25 mL/min of helium from 25°C to 1100°C with a heating rate of 5°C/min.
FTIR: FTIR analyses have been performed on raw char and oxygenated char with a Shimadzu FTIR 8400S.
BET: BET analyses have been carried out on an ASAP 2010 apparatus from Micromeritics. Adsorption of argon has been monitored at 77 K up to a relative pressure of 1. Prior to argon adsorption, a degassing step of 30 h under high vacuum at 200°C has been performed. The specific surface area has been determined by applying the BET and Langmuir models. Pore size and pore volume have been evaluated using the Horvath-Kawazoe model.
Kinetics parameters calculations:
The apparent initial partial order of reaction with respect to methane and the initial apparent activation energy have been calculated. Methane is assumed to decompose into solid carbon and hydrogen following reaction (1). From this equation, the initial reaction rate can be written as in Equation (2). To obtain the reaction rate constant and be able to determine the activation energy, the differential method has been applied. The magnitudes of the partial pressure of methane were 7%, 15% and 21%. A mass balance on the plug flow reactor allows the determination of the reaction rate for the different partial pressures (see Equation 3). The activation energy was calculated following the Arrhenius law (Equation 4).
Methane cracking reaction equation:
CH 4 (g) → C (s) + 2 H 2 (g)    (1)
Reaction rate equation:
r 0 = k · C CH4,0 n    (2)
Mass balance on the differential plug flow reactor:
r 0 = F CH4,0 · X CH4 / V    (3)
Arrhenius law:
k = k 0 · exp(-Ea / (R·T))    (4)
with C CH4,0 the inlet methane concentration (mol/m 3 ), F CH4,0 the inlet methane molar flow rate (mol/s), V the char bed volume (m 3 ) and Ea the apparent activation energy (kJ/mol).
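As an illustration of the differential method and of the Arrhenius fit described above, the regression can be done in a few lines (a minimal sketch with made-up initial rates, concentrations and rate constants, not the experimental data of this study):

```python
import numpy as np

# Hypothetical initial rates r0 (mol m-3 s-1) measured at three inlet methane
# concentrations C_CH4 (mol m-3) at a single temperature.
c_ch4 = np.array([0.9, 1.9, 2.7])
r0 = np.array([1.1e-4, 2.4e-4, 3.5e-4])

# Differential method: ln(r0) = ln(k) + n * ln(C_CH4) -> linear fit.
n, ln_k = np.polyfit(np.log(c_ch4), np.log(r0), 1)
print(f"apparent order n = {n:.2f}, k = {np.exp(ln_k):.2e}")

# Arrhenius law: ln(k) = ln(k0) - Ea / (R * T), fitted over several temperatures.
R = 8.314                                              # J mol-1 K-1
temperatures_k = np.array([923.0, 973.0, 1073.0])      # 650, 700, 800 C
k_values = np.array([4.0e-5, 1.2e-4, 9.0e-4])          # hypothetical rate constants
slope, intercept = np.polyfit(1.0 / temperatures_k, np.log(k_values), 1)
print(f"Ea = {-slope * R / 1000:.0f} kJ/mol, k0 = {np.exp(intercept):.2e}")
```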
3-RESULTS AND DISCUSSION
The study concerns the use of char, functionalized or not, as a catalyst for methane cracking. The efficiency is evaluated with regard to hydrogen production and to the impact of the oxygenated groups. The results are presented from both points of view.
Methane cracking on raw char
Methane cracking experiments under various operating conditions have been performed. Figure 2 shows the gas concentrations measured over time downstream of the reactor during the experiments. Hydrogen is the main product of methane cracking according to reaction (1). This is in good agreement with the literature [START_REF] Muradov | Catalytic activity of carbons for methane decomposition reaction[END_REF], [START_REF] Serrano | Hydrogen production by methane decomposition: Origin of the catalytic activity of carbon materials[END_REF]. Hydrogen production decreases over the course of the reaction. This decrease could be due to catalyst deactivation related to a decrease of the active surface. Pore blocking by carbon deposits is supposed to be the main cause of catalyst deactivation [START_REF] Klinghoffer | Catalyst Properties and Catalytic Performance of Char from Biomass Gasification[END_REF]. Indeed, our chars are microporous materials with a high BET surface area of ca. 550 m 2 /g, comparable to some activated carbons. Their surface areas have been measured after the methane cracking experiments and the results highlighted that they had been halved [START_REF] Klinghoffer | Utilization of char from biomass gasification in catalytic applications Naomi Klingho ff er[END_REF]. The lower active surface area would explain the decrease in hydrogen production. Kim et al. [START_REF] Kim | Hydrogen production by catalytic decomposition of methane over activated carbons: kinetic study[END_REF] also investigated methane cracking over several carbon-based materials such as activated carbons and coals. The surface areas of these catalysts have been correlated to the initial reaction rate. Thus, the reaction at the surface is the limiting step of the process. Even if the results have shown that pore blocking occurs during the reaction, they stated that there is no clear correlation between the deactivation and the surface area evolution. Moliner et al. [START_REF] Moliner | Thermocatalytic decomposition of methane over activated carbons: influence of textural properties and surface chemistry[END_REF] have done a similar study on activated carbons and reached the same conclusion. These previous results have highlighted that the textural properties of chars are modified during methane cracking experiments and should play a role. However, the surface chemistry needs to be investigated to get better insight into the mechanism.
Surface chemistry of chars related to kinetics of methane cracking
The surface chemistry of the chars is complex, as char is a natural material whose structure is completely dependent on the environmental conditions of the wood growth. However, the results obtained during methane cracking revealed an interesting fact. Some carbon monoxide and carbon dioxide were produced at the early stage of the methane cracking experiments. As a reminder, the char had been previously degassed and no CO or CO 2 was detected by the chromatograph at 700°C prior to the experiments, which means that these two components are produced during the methane cracking reaction. Reactions (5), (6) and (7) might happen between the carbon from methane and the oxygen groups of the char surface:
CH 4 (g) + C(O) (s) → CO (g) + 2 H 2 (g)    (5)
CH 4 (g) + 2 C(O) (s) → CO 2 (g) + 2 H 2 (g)    (6)
CO (g) + C(O) (s) → CO 2 (g) + C (s)    (7)
where C(O) (s) denotes an oxygenated site on the char surface.
Kim et al. [START_REF] Kim | Hydrogen production by catalytic decomposition of methane over activated carbons: kinetic study[END_REF] stated that the methane cracking reaction rate at the early stage was controlled by the oxygen groups present on the activated carbons. This observation is interesting and our results are in agreement with this assessment. From the parametric study, the initial apparent kinetic parameters have been determined (see Figure 3). The initial apparent activation energy has been found to be ca. 192 kJ/mol, which is close to the values obtained for activated carbons by Muradov et al. [START_REF] Muradov | Catalytic activity of carbons for methane decomposition reaction[END_REF] (160-201 kJ/mol). These observations are interesting for different reasons. Firstly, they show that oxygen groups at the char surface are potential active sites. Secondly, if they are active sites, it can be expected that increasing their content may increase the conversion of methane into hydrogen and carbon monoxide. Moreover, if carbon atoms from methane react with oxygen groups from the surface and are released into the gas phase, they will not deposit in the pores of the char, which limits catalyst deactivation. For all these reasons, further investigation of the role of oxygen groups on the char surface has been performed.
Surface chemistry of chars: oxygenated functions
In order to study the role of the oxygenated groups of the char surface in the methane cracking reaction, methods to increase their content have been investigated. Various methods are reported in the literature, using either liquid or gaseous agents. The most common liquid-phase methods are immersion in nitric acid (HNO 3 ) or in hydrogen peroxide (H 2 O 2 ). Gas-phase oxygenation can be carried out using molecular oxygen, ozone, nitrous oxide or steam. Klinghoffer et al. [START_REF] Klinghoffer | Utilization of char from biomass gasification in catalytic applications Naomi Klingho ff er[END_REF] have oxygenated chars by immersion in HNO 3.
Comparison of the TPD results for raw chars and chars oxygenated under these conditions showed that the carboxylic groups were the most increased. However, carboxylic groups desorb at low temperature and they did not impact the methane cracking reaction. Pereira et al. [START_REF] Figueiredo | Modification of the surface chemistry of activated carbons[END_REF] have tested different oxygenation processes on activated carbon. They stated that liquid treatments tend to increase acidic oxygen groups, while gas-phase treatments rather increase basic groups, which are released at high temperature. In our case, chars have been oxygenated under 8% O 2 / 92% N 2 at 280°C for different durations, because we intend to increase the oxygen groups which remain stable at high temperature. FTIR analyses revealed that the oxygenation process was successful. It increased the C=O and C-O bonds (see Figure 4). The longer the oxygenation process, the higher the peaks. However, the peak assignments of the different double-bonded oxygenated groups are very close and it is hard to make an accurate correlation between the peaks observed in FTIR and the nature of the groups (see Table 1).
TPD analysis gives complementary information, as it is possible to correlate the amount of CO or CO 2 desorbed at a given temperature under inert atmosphere to the functional groups present at the char surface. In fact, the energy needed to break a bond varies according to its strength. This means that oxygenated groups with different kinds of bonds will not desorb at the same temperature. The literature provides the temperature range of desorption and the type of gas desorbed according to the oxygenated group (see Figure 5). Groups such as carboxylic and lactone desorb as CO 2 over wide temperature ranges, 100°C-400°C and 200°C-700°C respectively. Anhydrides decompose into both CO and CO 2 at about 650°C. Phenol and ether desorb in the same temperature range as anhydrides but produce only CO.
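As an illustration of this window-based assignment, the desorption profile can be integrated over the temperature ranges quoted above to estimate the contribution of each family of groups (a minimal sketch with a made-up CO 2 profile; real profiles with overlapping windows would normally be deconvoluted into individual peaks rather than simply integrated):

```python
import numpy as np

# Hypothetical CO2-TPD profile: temperature (C) vs CO2 concentration (a.u.).
temperature = np.linspace(25, 1100, 216)
co2_signal = (np.exp(-((temperature - 300) / 80) ** 2)
              + 0.6 * np.exp(-((temperature - 650) / 60) ** 2))

# Temperature windows taken from the literature assignments discussed above.
windows = {
    "carboxylic (CO2, 100-400 C)": (100, 400),
    "lactone (CO2, 200-700 C)": (200, 700),
    "anhydride (CO2, ~600-800 C)": (600, 800),
}

for group, (t_min, t_max) in windows.items():
    mask = (temperature >= t_min) & (temperature <= t_max)
    area = np.trapz(co2_signal[mask], temperature[mask])  # integrated desorption
    print(f"{group}: integrated signal = {area:.1f} a.u.")
```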
Finally, carbonyl and quinone decompose into CO at temperatures up to 800°C. The results of the TPD of raw chars and of chars oxygenated at 280°C for different durations (1 h, 2 h, 4 h, 8 h) are presented in Figure 6. This graph confirms the FTIR results. The longer the oxygenation process, the higher the global amount of oxygenated groups. All oxygenated groups seem to have been impacted by the oxygenation process. However, the oxygenated groups which desorb as CO and/or CO 2 in the temperature range of 400-700°C might have been the most impacted. Figure 6.a) shows a comparison of the CO 2 production during the TPD procedure for the raw chars and the oxygenated chars. Before 400°C, the oxygenated chars desorbed less CO 2 than the raw chars. This could be explained by the fact that they were first heated under inert atmosphere to 280°C prior to oxygenation, which may have desorbed part of the carboxylic groups. Lactones may be one of the groups which have been the most enhanced. In fact, the concentration of CO 2 desorbed by the raw chars is about 0.03% between 400°C and 600°C, while it is ca. 0.06% and 0.08% for chars oxygenated for 2 h and 8 h respectively. At higher temperature, anhydrides start desorbing as CO 2 and CO. In Figure 6.a), this might correspond to the peak between 600°C and 800°C, which reaches 0.06% for the raw char and 0.09% for the char oxygenated at 280°C.
CO production is compared in Figure 6.b). The impact of oxygenation on the groups desorbing as CO is weaker than on those desorbing as CO 2 . No desorption is observed before 450°C, which is in accordance with the literature. Up to 800°C, the process seems not to have strongly impacted the oxygenated groups. However, it is interesting to note that the longer the oxygenation, the sooner the CO desorption starts. Phenol is the first group desorbing as CO in the literature. Long oxygenation processes are therefore favorable to phenol group formation at the char surface. Ether groups, whose desorption peak is around 650°C, have also been enhanced. The CO production results also confirm that anhydrides have been increased at the char surface. Oxygenation of the char surface using an O 2 /N 2 mixture has thus successfully enhanced the amount of oxygenated groups at the char surface. A clear impact of the oxygenation duration is shown by the TPD results. Functionalization of the char surface by oxygen treatment is therefore possible. The next paragraph discusses preliminary results in terms of catalytic application for methane cracking.
Impact of oxygenated functions on methane cracking
Chars oxygenated for 8 h at 280°C and raw chars have been tested for methane cracking at 700°C. The results show that the hydrogen production was very similar in both cases. An explanation is that the oxygenation process mainly increased species desorbing around 500°C to 750°C. In Figure 7, it can be observed that more CO 2 and CO are desorbed during the temperature ramp under pure nitrogen for the oxygenated chars than for the raw chars. Thus, the oxygenated groups added during the oxygenation process have desorbed before the start of the methane cracking reaction. An FTIR analysis of a char which has been oxygenated and then heated under pure nitrogen to 700°C is consistent with this conclusion. In fact, no peaks in the 1000-1300 and 1500-1900 cm -1 regions are observable (see Figure 8). However, precautions have to be taken in discussing the results, as the oxygenation treatment may have affected the textural properties and the carbon content of the chars.
4-CONCLUSIONS
The first goal of this study was to investigate the O 2 gas-phase oxygenation of the char surface. Comparison of the CO and CO 2 production during the TPD of raw and oxygenated chars has shown that the oxygenation successfully increased the global amount of oxygenated groups. Anhydrides, lactones, phenols and ethers may have been enhanced to the greatest extent. The longer the oxygenation, the higher the amount of oxygen atoms added to the surface.
The first results of methane cracking on oxygenated chars did not allow conclusions on the impact of the oxygenated groups on the hydrogen production, as the species enhanced by the oxygenation process were not stable at the methane cracking temperature. In addition, further analyses will be carried out to evaluate the impact of the oxygenation process on the textural properties of the chars. This complementary information will be useful for the interpretation of the methane cracking comparison.
Figure 1: Gas desorption during the temperature ramp to 900°C (heating rate of 20°C/min) under pure nitrogen, prior to the methane cracking reaction at 700°C.
Figure 2: Gas production during a methane cracking experiment over a bed of chars at 700°C, under 15 mL/min of 7% CH 4 / 93% N 2 .
Figure 3: Determination of k 0 and Ea from the experimental points.
Figure 4: Comparison of the 1000-1300 and 1500-1900 cm -1 regions of the FTIR spectra of raw and oxygenated chars.
Figure 5: TPD peak assignments [START_REF] Figueiredo | The role of surface chemistry in catalysis with carbons[END_REF].
Figure 6: TPD results of raw chars and chars oxygenated at 280°C under a mixture of 8% O 2 / 92% N 2 : (a) CO 2 desorption; (b) CO desorption. Operating conditions for TPD: 25 mL/min He at 5°C/min up to 1100°C.
Figure 7: Comparison of the hydrogen production during methane cracking at 700°C over a bed of raw chars and a bed of chars previously oxygenated at 280°C for 2 h under 8% O 2 .
Figure 8:
Table 1: Functional groups on active carbon and their corresponding infrared assignments.
Wave number (cm-1) Functional group Ref
1850-1786 Anhydrides 1880-1740
1740,1724 Lactones (C=O) 1790-1675
1264 Lactones (C-O-C) 1260
1710-1680 Carboxylic (C=O) 1760-1665
1440 Carboxylic (O-H) 1440
1670-1660 Quinone or conjugated ketone 1667-1653
1076-1014 Alcohol (C-O) 1276-1025
1162-1114 Phenol (C-O) and (O-H bend/stretching) 1200-1000
1250-1235 Ether bridges between rings 1250-1230
| 24,254 | [
"174362",
"18950",
"176587"
] | [
"242220",
"242220",
"355659",
"242220"
] |
01725152 | en | [
"spi",
"sde"
] | 2024/03/05 22:32:16 | 2014 | https://imt-mines-albi.hal.science/hal-01725152/file/%23270-proceeding%20catalytic%20conversion%20of%20black%20liquor.pdf | H Boucard
email: [email protected]
M Watanabe
S Takami
E Weiss-Hortala
R Barna
T Adschiri
Catalytic Conversion of Black Liquor under Sub-/Supercritical Conditions
Black liquor is an alkaline liquid residue of the paper industry, containing ~80 wt% of water and ~20 wt% of organic matter and minerals. Our global project explores black liquor conversion using the properties of supercritical water. Exploratory conversion experiments were previously performed without catalyst and revealed the formation of three phases, in particular an interesting gaseous phase with a high proportion of H 2 (~80% at 600°C). A solid residue is also observed; it results from the polymerisation of phenolic compounds together with aldehyde crosslinks. Based on these results, this study is devoted to the catalytic conversion using CeO 2 nanocatalysts to improve gasification and H 2 production as well as to block coke formation. In sub-/supercritical conditions, water splitting occurs on CeO 2 catalysts, producing active hydrogen and oxygen species. These active species then react with the molecules of the medium. Experiments are performed in sub-/supercritical conditions in a batch reactor, for 15-60 min reaction times, with and without catalyst. As expected, the CeO 2 nanocatalyst improved the conversion of black liquor at sub- and supercritical conditions. Hydrogen production was not significantly affected by the catalyst, while the amounts of CO and CO 2 were reduced at short reaction times. The highly basic pH of the raw feedstock and of the remaining solutions favors the dissolution of CO 2 as carbonates. The supercritical water medium and the catalyst clearly enhance the fragmentation of the dissolved lignin compared to subcritical conditions.
INTRODUCTION
Black liquor comes from the Kraft process in the paper industry. It results from the step of cooking wood chips with white liquor (a Na 2 S and NaOH mixture). Delignification of wood corresponds to the dissolution of lignin from the fibrous part (cellulose, hemicellulose); the kraft lignin is recovered in the cooking juice (black liquor) [START_REF] Leask | Pulp and papermanufacture[END_REF]. As white liquor is a basic mixture, black liquor is also a basic aqueous solution (pH ~ 13), containing dissolved lignin, fragments of cellulose and hemicellulose and several minerals and salts (Na, K, Ca, S... as carbonates, sulfates, sulfides...). In industrial facilities of the Kraft process, a large part of the plant is devoted to the concentration of black liquor, followed by the incineration of this concentrated solution for heat and white liquor recovery [START_REF] Demirbas | [END_REF]. Nevertheless, an extra volume of black liquor is produced, and this black liquor should be valorized. An alternative process is therefore to convert black liquor using the properties of supercritical water. Supercritical water was first used to achieve total oxidation of waste; more recently it is used to recycle waste by generating energetic gases, platform molecules or value-added solids. Indeed, water in these conditions (T > 374°C and P > 22.1 MPa) has the distinction of being both a reactant and a solvent. In subcritical conditions (T < 374°C and P < 22.1 MPa), water is a polar solvent in which salts are soluble and organic molecules are poorly soluble. In the supercritical state, the properties of water change drastically. The water molecules form clusters [3], which results in a non-polar solvent, and thus free-radical reactions are favored. Furthermore, supercritical water and gases are miscible, and the diffusion of the molecules in this phase is then improved. Either platform molecules (cresol, guaiacol, …) or gases (H 2 , CO, CO 2 ...) are produced depending on the reaction temperature. As black liquor is a wet biomass (~80 wt% water), this kind of hydrothermal process is suitable: its high water content is used to convert its significant organic content (~140 gC.L -1 ). Preliminary batch conversion experiments were performed without catalyst and revealed the formation of three phases [4]: a gas with a high proportion of H 2 (~80% at 600°C), a rich bouquet of phenolic compounds in the liquid phase and a non-negligible proportion of carbon converted into the solid phase (~20 wt%). Here, we propose the use of catalysts to convert black liquor more efficiently. Cerium oxide is an interesting nanocatalyst to improve hydrogen production and to reduce coke formation, as mentioned in a study on bitumen using cubic CeO 2 [5]. Indeed, CeO 2 has a "redox cycle" allowing it to easily capture and release the oxygen generated from water under supercritical conditions. Different morphologies of CeO 2 exist (cubic, octahedral, …); however, the catalytic activity of cubic CeO 2 is better than that of octahedral CeO 2 [5]. These two studies led us to consider the catalytic conversion of black liquor using cubic CeO 2 to improve the amount of H 2 in the gas phase, to reduce coke formation and to form more building-block molecules in the liquid phase. Thus, this study focuses on the conversion of black liquor with and without catalyst.
MATERIALS AND METHODS
1-Reagents
Black liquor
Black liquor was obtained from Smurfit Kappa Cellulose du Pin in Facture, France. It was collected from the digester after the recovery of tall oil in the Kraft process. The fraction of dried components (organic and mineral) was 23 wt%, while the remainder was water (77 wt%). The organic compounds amounted to 65 wt% of the dry components, which corresponds to 140 g-C.L -1 . In this study, the original solution was diluted to 10 wt% and used in the following experiments.
Cerium oxide (CeO 2 ): CeO 2 nanoparticles were synthesized by Adschiri's group at Tohoku University, Japan [5], [6]. The average size of the nanoparticles was around 8 nm. The catalytic activity of CeO 2 arises from its redox activity via the Ce 4+ /Ce 3+ cycle, which is accompanied by the capture and release of oxygen. CeO 2 is known to have a high oxygen storage capacity (OSC). The OSC of cubic CeO 2 is 340 µg-O/g-cat, which is 3.4 times higher than that of octahedral CeO 2 [5]. Due to this high OSC, cubic CeO 2 nanoparticles were used in this study. The ratio of catalyst to the reactants was calculated and is used as a parameter in the following study.
R = m catalyst ⁄ m reactants    (1)
Equation 1: ratio of catalyst (mass of CeO 2 catalyst divided by the mass of reactants).
2-Experimental protocol
Experiments were performed at either 350°C or 450°C, in sub- or supercritical conditions, using a pressure-resistant batch reactor with an inner volume of 5.0 mL. The diluted black liquor was introduced into the reactor with the cubic CeO 2 nanoparticles (R = 0, 5, 20), then the reactor was capped tightly. After adding N 2 gas into the reactor (~0.26 MPa), the reactor was placed in an electric oven whose temperature was set to either 350°C or 450°C. After the reaction time (15 or 60 min) had passed, the reactor was cooled in iced water to stop the reaction immediately. After quenching, the gas inside the reactor was analyzed by µ-GC without exposure to air. Liquid and solid products were collected after the gas analysis by rinsing the reactor with THF. The collected products were separated by filtration using a PTFE membrane filter with a pore diameter of 0.1 µm. The THF-insoluble fraction is called coke and the THF-soluble fraction forms the liquid phase. The weight of coke was evaluated as the weight loss of the solid products during calcination up to 600°C, because the solid products contained CeO 2 nanoparticles as well as coke.
3-Analysis of gas, liquid and solid phases
Gas, liquid and solid phases were analyzed and characterized after reaction.
Gas phase:
The gas products were analyzed by µ-gas chromatography (Agilent GC-3000) to quantify the following gases: H 2 , O 2 , N 2 , CO, CO 2 and hydrocarbons.
Liquid phase:
The composition of the liquid phase was analyzed by the complementary techniques of Gel Permeation Chromatography (GPC) (HP1100) and Gas Chromatography coupled with Mass Spectrometry (GC-MS) (GC: Agilent 7890A; MS: Agilent 5975C).
Solid phase:
Morphology of solid residue (coke) was observed via transmission electron microscopy (TEM, Hitachi H7650).
RESULTS
1-Gaseous phase
During the hydrothermal processing of black liquor, H 2 is expected to be produced. Demirbas reported that more than 10 mol of H 2 is produced from 1 mol of black liquor if the formula C 10 H 12.5 O 7 Na 2.4 S 0.36 is considered [START_REF] Demirbas | [END_REF]. The use of CeO 2 nanocatalysts possibly results in higher gas production [7]. Figure 1 shows the results of the gas phase analysis at both subcritical (Fig. 1a) and supercritical (Fig. 1b) conditions. In both cases, gaseous carbon species were formed during the reaction. However, gasification of carbon was enhanced under supercritical conditions. CeO 2 acts as a catalyst and splits water into active species [5]. Active hydrogen species stabilize the reactive intermediate species, and active oxygen species are involved in oxidation reactions. Although radical reactions are mainly involved at supercritical conditions, they remain significant at subcritical conditions. These active hydrogen species can directly form H 2 or react with organic molecules to form smaller molecules. Simultaneously, alkenes are formed at subcritical conditions, releasing H 2 in the gas phase by dehydrogenation. The active oxygen species were mainly involved in oxidation reactions. The more the oxidation is achieved, the more CO 2 is produced. The concentration of these active species increases in the presence of the CeO 2 catalyst and thus the carbon conversion was enhanced. Figure 1 shows that the amount of carbon converted to gas (~2 or 3%) is lower at subcritical conditions than at supercritical conditions. This tendency was in accordance with the lower quantity of gaseous products (not shown). The proportion of carbon at 350°C can also be explained by the remaining basic pH of the solution, which dissolves a large amount of CO 2 in the liquid phase. The gas mixtures are composed of the same gases: H 2 , CO, CO 2 and light hydrocarbons. However, the composition changes slightly in the presence of the catalyst. The main part of the carbon in the gas phase was CO and CO 2 , produced from the oxidation and water-gas shift reactions. The amount of CO 2 after quenching the reaction was low, because CO 2 could dissolve into the basic aqueous solution to form carbonates. The amount of CO was also low, due either to its consumption through the water-gas shift reaction, which is promoted by alkaline salts ([4], [8]), or to the strong oxidation strength of the CeO 2 catalyst. Reaction time, as well as the catalyst, also played an important role in the carbon conversion. At 350°C, the influence was not significant. However, longer reaction times resulted in enhanced gasification at 450°C. The catalyst increased the conversion of carbon by promoting the water-gas shift reaction. The quantification of the gaseous phases by µ-GC confirmed this result. Indeed, at 450°C and 15 min, a low quantity of CO was detected without catalyst, while no CO was detected after the experiment with the catalyst. The simultaneous action of the catalyst and the short reaction time may explain why no CO was detected in the gaseous phase, due to the thermodynamic equilibrium toward CO 2 and to the kinetics.
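As an illustration, the proportion of carbon converted to the gas phase can be estimated from the µ-GC mole fractions and the ideal gas law (a minimal sketch; every numerical value below is a hypothetical placeholder, not data from this study):

```python
# Estimate the fraction of the fed carbon that ends up in the gas phase.
R = 8.314                 # J mol-1 K-1
T_headspace = 298.0       # K, gas temperature after quenching (hypothetical)
P_headspace = 3.0e5       # Pa, residual pressure in the reactor (hypothetical)
V_headspace = 3.0e-6      # m3, gas volume left in the 5 mL reactor (hypothetical)

n_gas_total = P_headspace * V_headspace / (R * T_headspace)   # mol of gas
mole_fractions = {"CO2": 0.10, "CO": 0.02, "CH4": 0.01}        # hypothetical micro-GC output
carbon_atoms = {"CO2": 1, "CO": 1, "CH4": 1}

n_carbon_gas = sum(n_gas_total * x * carbon_atoms[s] for s, x in mole_fractions.items())

# Carbon fed to the reactor (hypothetical loading of diluted black liquor).
feed_volume_l = 2.5e-3      # L of diluted feed loaded (hypothetical)
feed_carbon_g_per_l = 60.0  # gC/L of the diluted feed (hypothetical)
carbon_fed_mol = feed_volume_l * feed_carbon_g_per_l / 12.0

print(f"Carbon converted to gas: {100 * n_carbon_gas / carbon_fed_mol:.1f} %")
```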
Figure 2 shows that the amount of produced H 2 significantly increased in the presence of the catalyst at 350°C and 60 min. The other experimental conditions demonstrate a moderate effect of the catalyst. As H 2 is mainly produced from water splitting, its amount is expected to be higher with the catalyst. The active hydrogen species produced from water splitting would easily and continuously react in the gas phase to form H 2 . H 2 was also produced by dehydrogenation of alkanes. However, the increase in active hydrogen species would simultaneously increase the reactions with organic molecules. Thus, the amount of H 2 was related to both the production and the consumption of active hydrogen species. The slight effect of the catalyst seems to indicate an increase of both phenomena. Concerning the experiments carried out at 350°C and 60 min, the amount of H 2 was multiplied by 10 when using CeO 2 as a catalyst. This means that H 2 consumption was higher at long reaction time without catalyst, while H 2 production would be higher with the catalyst. As gas-liquid equilibrium was expected for these experiments due to the subcritical water conditions, the limited diffusion of hydrogen from the gas to the liquid phase could explain its low consumption. The amount of H 2 was not affected by the catalyst ratio (R = 5 or 20, 350°C and 60 min). As the amount of H 2 produced from water splitting would be higher, the mass transfer at the interface would also be enhanced by this high concentration. As a result, the produced H 2 would be used for capping reactions in the liquid phase to produce smaller molecules.
To conclude, the carbon conversion to the gas phase was strongly influenced by the use of the catalyst, which consequently affected the composition of the liquid phase.
2-Liquid phase
The original black liquor is a dark brown liquid with a basic pH (>12). The conversion of black liquor is partially reflected in the colour evolution of the liquid products (Fig. 3). The deep colour is due to the presence of phenolic compounds [9]. In subcritical conditions, at 350°C, the liquid was still brown and no colour variation was observed with reaction time. However, at 450°C, the liquid was yellow and almost transparent. The colour became lighter with increasing reaction time. The observed colour variation was caused by the conversion of oligomer molecules to either smaller colourless molecules (such as acids, aldehydes, alcohols…) and/or solid residues. This observation was supported by gel permeation chromatography (GPC, Fig. 4). GPC separates molecules according to their molecular weight. This first global result is useful to compare the degradation efficiency. Indeed, the curve at 450°C is shifted to lower molecular weights compared to the curve at 350°C, which suggests that the molecules in the liquid phase are smaller at high temperature.
This assumption is in accordance with the increase of the gasification efficiency at high temperature. The presence of the catalyst during the reaction promoted the degradation. As a result, the products showed a lighter colour, as shown in Fig. 3, and had a lower molecular weight distribution. At 450°C, the coupled effect of reaction time and catalyst created a new population of molecules whose molecular weight is centered at log M = 1.77 (M ≈ 59 g/mol). This different composition can then explain the higher release of carbon to the gas phase. At 350°C, the impact of the catalyst at a ratio of 5 is limited with regard to the GPC results. The sharp peak at log M = 2 suggests that one small species was selectively obtained in high quantity. Its disappearance and the shift of the curve for R = 20 mean that other populations of molecules were created. At this temperature, all the curves are superimposable down to log M = 2, so no smaller molecules are formed. This result was confirmed by the GC-MS results and also by the change in the colour of the products. In the presence of the catalyst, the oxygenation of molecules has also to be considered, as more oxidized molecules were detected by GC-MS in the presence of the catalyst.
In particular, at 350°C for both R = 20 and R = 5, the same alkenes are detected, so dehydrogenation reactions should be in competition with capping reactions. This tendency can explain why the amount of H 2 remains the same. Slight amounts of small aliphatic molecules (such as aldehydes) are detected in the presence of the catalyst, while a high amount of aromatic compounds was obtained in the liquid and less coke was formed.

3-Solid phase

Without catalyst, polymerisation of the aromatic compounds occurred as soon as they were produced during the reaction. This polymerisation resulted from the reactions between aromatic molecules and small molecules such as aldehydes. Coke was the result of this polymerisation and was formed at both temperatures. At 350°C, the solid consisted of micrometric carbon particles (Fig. 6), whereas at 450°C the solid was shapeless (Fig. 7). The composition was always the same: carbon, oxygen and minerals, particularly sulfur (darkest spots on the pictures), but the minerals did not play a role in the surface morphology [10]. At both temperatures, coke was formed (Table 1); its quantity increased under supercritical conditions, so at 450°C the amount of coke was higher than at 350°C. The proportion of coke formed decreased when the catalyst ratio R increased at 350°C. The efficiency of the catalyst therefore seems to be higher at low temperature (subcritical conditions). However, for R = 20, the amount of coke formed seems to be significant. A hypothesis is an antagonistic effect of CeO 2 , acting both as a catalyst and as a support for coke formation.
CONCLUSION
The catalytic conversion of black liquor with cubic CeO 2 allows water molecules to be split into active hydrogen and oxygen species. Active hydrogen species lead to H 2 molecules in the gas phase; H 2 molecules can also be formed by dehydrogenation of alkanes. A part of these active species can also react with molecules in the liquid by capping, to form smaller (less complex) molecules. The GPC results and the colour of the liquid attest to this degradation. Active oxygen species oxidized the organic molecules in the liquid phase, which was confirmed by the GC-MS results.
When oxidation was extreme, some CO 2 was released to the gas mixture; CO 2 also resulted from the consumption of CO by the water-gas shift reaction, which was promoted by the alkaline salts and the CeO 2 catalyst. The formation of very small molecules like aldehydes seems to be blocked, which limits the polymerisation of phenolic compounds. Indeed, coke formation was limited, the amount of compounds detected by GC-MS was higher in the presence of the catalyst, and the carbon conversion into gas increased, which points to an improvement of the conversion of black liquor using CeO 2 .
Figure 1: Proportion of carbon converted into the gas phase at 350°C and 450°C.
Figure 2: H 2 produced in the gas phase after reaction at 350°C and 450°C.
Figure 3: Liquid obtained after 15 min and 60 min without catalyst, (a) at 350°C and (b) at 450°C; liquid obtained after 15 min and 60 min with catalyst, (c) at 350°C and (d) at 450°C.
Figure 4: Influence of the temperature, without catalyst, on the molecular weight distribution.
Figure 5:
Figure 6: Solid obtained after 60 min of reaction time at 350°C, without catalyst.
Figure 7: Solid obtained after 60 min of reaction time at 450°C, without catalyst.
Figure 8: Cubic CeO 2 nanocatalyst obtained after reaction.
Table 1: Proportion of coke formed after reaction. | 18,666 | [
"1066171",
"18950",
"748738"
] | [
"242220",
"247408",
"242220",
"242220",
"247408"
] |
01706811 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2016 | https://imt-mines-albi.hal.science/hal-01706811/file/Fiori_HTC.pdf |
Hydrochar from EWC 19.12.12 as a substitute of carbon black
Abstract
Carbon black is a carbonaceous material (more than 97 wt.% colloidal carbon) used for tires and rubber production as well as for plastics, conductive packaging and inks. It must possess a low ash content and, to improve adhesion, suitable amounts of unsaturated functions and graphitic defects. At present, the processes used for carbon black production exploit non-renewable substrates (mainly aromatic hydrocarbons and natural gas) and produce non-negligible amounts of hazardous air pollutants. On the contrary, hydrochar is recognized to have a very low environmental impact. Indeed, hydrochar is a carbon-based material produced from organic residues or waste substrates during hydrothermal carbonization (HTC), a wet thermochemical process. During HTC, the feedstock mainly undergoes dehydration and carbonization, which enhance the formation of colloidal carbon particles. For this purpose, HTC was applied to a specific substrate, i.e. the residue classified by the European Waste Catalogue as EWC 19.12.12. This is a by-product of municipal solid waste (MSW) treatment, supplied by Contarina S.p.A., a company which collects and treats the MSW of the province of Treviso in Italy. The non-recyclable and non-compostable residual fraction of the MSW is treated to produce refuse-derived fuel (RDF). However, a small percentage of this residual is an organic fraction deemed to be contaminated that cannot be used for composting purposes. This fraction (EWC 19.12.12) is therefore removed, currently bio-stabilized and then landfilled. The EWC 19.12.12 was carbonized in a 50 mL batch reactor [START_REF] Basso | Hydrothermal carbonization of off-specification compost: a byproduct of the organic municipal solid waste treatment[END_REF] under different HTC conditions, namely three temperatures (180, 220 and 250 °C) and three residence times (1, 3 and 8 hours). The hydrochar yield depended on the severity of the HTC treatment and ranged from 50% (T=250°C, residence time=8 h) to 64% (T=180°C, residence time=1 h). The main characteristics of the hydrochar were investigated to assess the possibility of introducing it as a substitute of common carbon black [START_REF] Titirici | Hydrothermal carbon from biomass: a comparison of the local structure from poly-to monosaccharides and pentoses/hexoses[END_REF]. Hence, the chemical composition (elemental analysis, inductively coupled plasma), the crystalline structure (X-ray diffraction), the surface area and the particle size of the hydrochar were characterized. The preliminary results seem to support the possibility of exploiting the hydrochar from EWC 19.12.12 as a precursor of, or a substitute for, carbon black.
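A minimal sketch of the mass-yield bookkeeping behind the 50-64% figures; the dry-basis definition and the sample masses below are illustrative assumptions, not values from the study.

def hydrochar_yield(m_hydrochar_dry_g, m_feedstock_dry_g):
    """Hydrochar mass yield on a dry basis, in percent (assumed definition)."""
    return 100.0 * m_hydrochar_dry_g / m_feedstock_dry_g

# Illustrative masses only: a hypothetical 10 g dry feedstock charge.
print(hydrochar_yield(6.4, 10.0))  # 64 %, the mildest condition (180 C, 1 h)
print(hydrochar_yield(5.0, 10.0))  # 50 %, the most severe condition (250 C, 8 h)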
Daniele Basso, Francesco Patuzzi, Elsa Weiss-Hortala, Marco Baratieri, Paolo Conto, Luca Fiori To cite this version: Daniele Basso, Francesco Patuzzi, Elsa Weiss-Hortala, Marco Baratieri, Paolo Conto, et al.. Hydrochar from EWC 19.12.12 as a substitute of carbon black. WasteEng 2016 -6th International Conference on Engineering for Waste and Biomass Valorisation, May 2016, Albi, France. p.608-620. hal-01706811
HYDROCHAR FROM EWC 19.12.12 AS A SUBSTITUTE OF CARBON BLACK
Université de Toulouse, Mines Albi, UMR CNRS 5302, Centre RAPSODEE, Albi, France. *Corresponding author: [email protected], phone: +39 0461 282692, Fax: +39 0461 282672
D. BASSO 1 , F. PATUZZI 2 , E. WEISS-HORTALA 3 , M. BARATIERI 2 and L. FIORI 1,*
1 University of Trento, Department of Civil, Environmental and Mechanical Engineering, via
Mesiano, Trento, Italy.
2 Free University of Bolzano, Faculty of Science and Technology, Bolzano, Italy.
3
L' | 3,774 | [
"18950"
] | [
"242367",
"463159",
"242220",
"242367"
] |
01768949 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01768949/file/L2EP_2017_IECON_ZAHR.pdf | Hussein Zahr
email: [email protected]
Mohamed Trabelsi
email: [email protected]
Member IEEE Eric Semail
email: [email protected]
Member IEEE Ky Ngac
Nguyen
Five-Phase Bi-Harmonic PMSM Control under Voltage and Currents Limits
Keywords: Bi-harmonic machine, control sensitivity, Maximum Torque Per Ampere
For a particular five-phase synchronous machine, this paper investigates the sensitivity of a vectorial control strategy on the required peak phase voltage whose value is fundamental for the choice of the DC bus voltage. The specificity of the machine is that the first and third harmonic components of the back electromotive force (back-emf) have the same amplitude. As a consequence, the torque can be produced by one of them or both with suitable currents. This degree of freedom is interesting for optimizing the efficiency and generating high transient torque. However, using two harmonics having the same amplitude leads to a necessity to analyze the constraints on the required phase machine voltage. Considering a Maximum Torque Per Ampere (MTPA) strategy, the paper examines the impact of some parameters such as the phase shift between currents and back-emfs or the ratio between the third and the first harmonic of current on the torque and maximum voltage value. Experimental tests with a limited DC bus voltage have been carried out and compared to the results obtained by a Finite Element Analysis.
I. INTRODUCTION
Nowadays, multiphase machines are widely used for their fault tolerance and high torque density [START_REF] Barrero | Recent advances in the design, modeling and control of multiphase machines-Part 1[END_REF] especially in critical applications, such as marine [START_REF] Thongam | Trends in naval ship propulsion drive motor technology[END_REF], aerospace [START_REF] Bojoi | Control of Shaft-Line-Embedded Multiphase Starter/Generator for Aero-Engine[END_REF] and automotive traction [START_REF] Parsa | Fault-Tolerant Interior-Permanent-Magnet Machines for Hybrid Electric Vehicle Applications[END_REF]. Thanks to the vector control, these machines have an ability to produce torque without pulsation even with nonsinusoidal back-EMFs and non-sinusoidal currents, in similar way to the classical three-phase machines [START_REF] Wang | Harmonic current effect on torque density of a multiphase permanent magnet machine[END_REF]. Moreover, in low voltage and/or high power applications, a high number of phases leads to a lower current per phase and consequently decreases the power of switches. Nevertheless, a high number of phases impact the cost since the number of drivers and current sensors increases, and the control is more complicated since many harmonics can interfere in torque production.
The work addressed in this paper deals with a five-phase PMSM that appears as a good compromise when used in critical applications such as automotive and aerospace traction. This machine allows torque to be created from both the 1st and 3rd harmonic components [START_REF] Scuiller | Multi-criteria based design approach of multiphase permanent magnet low-speed synchronous machines[END_REF]. The contribution of each harmonic component depends mainly on the amplitude of the harmonic content of the back-EMF [START_REF] Aslan | New 5-phase concentrated winding machine with bi-harmonic rotor for automotive application[END_REF]- [START_REF] Zahr | Comparison of Optimized Control Strategies of a High-Speed Traction Machine with Five Phases and Bi-harmonic Electromotive Force[END_REF]. Generally, the 3rd harmonic is used to improve the torque, which comes essentially from the fundamental harmonic. Therefore, many researchers focus on modifying the control strategies, the design, or both, to achieve this goal. However, the torque production due to the 3rd harmonic always appears to be marginal [START_REF] Mengoni | High-Torque-Density Control of Multiphase Induction Motor Drives Operating Over a Wide Speed Range[END_REF]- [START_REF] Wang | Torque Improvement of Five-Phase Surface-Mounted Permanent Magnet Machine Using Third-Order Harmonic[END_REF].
In this paper, the designed machine has a torque which can be created with an equal sharing between the two harmonics, since they have the same amplitude. This kind of machine is called a bi-harmonic machine. Furthermore, in the machine design, several requirements for traction applications are considered:
1) A high transient torque for a boost can be achieved by using the 3rd harmonic component.
2) Ability to operate over a wide speed range at constant power: for a three-phase machine, flux weakening appears almost as the only solution to increase the speed above the base speed once the maximum voltage imposed by the voltage source inverter is reached. For the bi-harmonic machine, several solutions exist for flux-weakening operation since the 3rd harmonic offers an additional degree of freedom; the total torque/speed characteristic is the sum of the characteristics associated with each harmonic component [8][13].
3) Low eddy-current losses when a high frequency is required at high speed: the winding configuration is selected so that the harmonic content of the MMF is low, and consequently only a low level of losses appears in the magnets. Therefore, a fractional-slot concentrated winding with a number of slots per pole and per phase equal to 0.5 is used [START_REF] Bianchi | Index of rotor losses in three-phase fractional-slot permanent magnet machines[END_REF].
The control strategies applied to multiphase machines are based on the Maximum Torque Per Ampere (MTPA) [START_REF] Zahr | Comparison of Optimized Control Strategies of a High-Speed Traction Machine with Five Phases and Bi-harmonic Electromotive Force[END_REF]. Consequently, a ratio ρ between the 1st and 3rd current harmonic components must be fixed so as to have collinear current and back-EMF (these assertions are detailed in section III). In practice, the actual currents may differ from the optimal (reference) ones, leading to a variation (increase or decrease) of the torque and of the required voltage.
The aim of this paper is to study the sensitivity of the MTPA strategy. Taking into account the several degrees of freedom in the case of the 5-phase bi-harmonic PMSM (5-Φ B-PMSM), there are several possibilities to modify the reference currents by acting on the phase shift (φ) between current and back-EMF and/or the ratio between the 1 st and 3 rd harmonic current components ( ). Consequently, in addition to the classical MTPA method, two new strategies will be investigated and analyzed with constraints on the voltage and currents.
This paper will be organized as follows: Section II presents the structure of the considered prototype. Section III presents the MTPA supply strategies applied to the 5-Φ B-PMSM machine and the other strategies derived from the MTPA but with the variation of the phase shift φ and the ratio ρ=I 3 /I 1 . The obtained characteristics of the studied machine under these conditions are analyzed, taking into account the voltage and current limits.
II. TOPOLOGY AND VECTOR CONTROL OF THE INVESTIGATED 5-Φ BI-HARMONIC PMSM
This section aims to present the prototype of the five phase bi-harmonic PMSM and the adopted control for operating characteristics investigation. The investigated machine can be used in traction applications with capability for developing high torque during transient operation and good efficiency at steady state [START_REF] Boldea | Automotive Electric Propulsion Systems With Reduced or No Permanent Magnets: An Overview[END_REF]- [START_REF] El-Refaie | Motors/generators for traction/propulsion applications: A review[END_REF].
A. 5-Φ Bi-Harmonic PMSM Topology
The prototype is depicted in Fig. 1. It consists of a 5-Φ 40-slot/16-pole bi-harmonic PMSM [START_REF] Zahr | Comparison of Optimized Control Strategies of a High-Speed Traction Machine with Five Phases and Bi-harmonic Electromotive Force[END_REF]. In order to decrease the cogging torque and other torque ripples, the stator is skewed by half a slot pitch. In the next section, thanks to vector control, the ability to produce torque with low ripple even when two harmonics are injected is verified.
The back-EMF harmonic content, obtained for a 500 rpm rotor speed, is depicted in Fig. 3. It comprises two main harmonic components (E1 and E3). It should be noticed that, because of a high winding factor, the 3rd harmonic component of the back-EMF is greater than the fundamental, but both have comparable magnitudes, with a ratio E3/E1 = 1.22. It should also be noticed that, for a 5-Φ 40-slot/16-pole machine, this ratio makes it possible to maximize the torque in the low-speed region and to improve the flux-weakening operation mode [START_REF] Aslan | New 5-phase concentrated winding machine with bi-harmonic rotor for automotive application[END_REF][8].
B. Modeling of the 5-Φ Bi-harmonic PMSM
In this part, the basic knowledge of the reference frame theory applied to a 5-Φ B-PMSM is introduced to highlight the potentialities of this concept used for vector control of multiphase machines. The basics of the vector control for multiphase machines [START_REF] Kestelyn | Vectorial modeling and control of multiphase machines with non-salient poles supplied by an inverter[END_REF]- [START_REF] Kestelyn | A Vectorial Approach for Generation of Optimal Current References for Multiphase Permanent-Magnet Synchronous Machines in Real Time[END_REF], leads to three decoupled subspaces, as shown in Fig. 4.
The first sub-space is associated mainly with the 1st harmonic electrical components (voltage, current, back-EMF), corresponds to the Main fictitious Machine (MM) and is noted with the index (αβ). The second sub-space is associated with the 3rd harmonic component, corresponds to the Secondary fictitious Machine (SM) and is noted with the index (xy). The third sub-space is associated with the 5th harmonic component; because of the star connection with isolated neutral of the PMSM, there is no path for the zero-sequence component of the current. The transformation which allows these fictitious machines to be obtained is the generalized Concordia transform, which gives the equivalent components in the orthogonal sub-spaces αβ and xy: [x_abcde] denotes a variable in the natural frame abcde (voltage, current, back-EMF, ...), [x_αβ] and [x_xy] are its equivalent components in the orthogonal frames, and [x_d1q1] and [x_d3q3] are the variables in the rotating frames. For an easier control of the 5-Φ B-PMSM, the Park transformation given in (5)-(6) is used. In such frames, the currents, the voltages and the fluxes are constant in healthy operation. Consequently, the control of the two-phase fictitious machines can be achieved independently with two PI controllers. For motor modeling, we suppose that magnetic saturation, hysteresis and slot effects, and iron losses are neglected. Under these assumptions, the electrical equations that describe the 5-Φ Bi-harmonic PMSM in the dq rotating frames are defined, for the fictitious main machine (MM), by (7), and for the fictitious secondary machine (SM), by (8):
V_{d1} = R I_{d1} + L_{d1} \frac{dI_{d1}}{dt} + e_{d1} - \omega_m L_{q1} I_{q1}, \qquad V_{q1} = R I_{q1} + L_{q1} \frac{dI_{q1}}{dt} + e_{q1} + \omega_m L_{d1} I_{d1} \quad (7)

V_{d3} = R I_{d3} + L_{d3} \frac{dI_{d3}}{dt} + e_{d3} - 3\omega_m L_{q3} I_{q3}, \qquad V_{q3} = R I_{q3} + L_{q3} \frac{dI_{q3}}{dt} + e_{q3} + 3\omega_m L_{d3} I_{d3} \quad (8)
where, ([v dq1 ], [i dq1 ], [e dq1 ]) and ([v dq3 ], [i dq3 ], [e dq3 ]) are the stator voltages, the phase currents and the back-EMFs linked to the fictitious MM and the fictitious SM, respectively. R is the phase resistance and (L d1 , L q1 ) and (L d3 , L q3 ) are the equivalent self-inductances linked to the MM and SM, respectively.
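A small numerical sketch of the decomposition described above; the matrix below is the usual power-invariant generalized Concordia transform for five phases, which the paper does not print, so its exact normalization here is an assumption.

import numpy as np

delta = 2 * np.pi / 5
k = np.arange(5)
C = np.sqrt(2 / 5) * np.array([
    np.cos(k * delta),           # alpha (1st-harmonic subspace, MM)
    np.sin(k * delta),           # beta
    np.cos(2 * k * delta),       # x     (3rd-harmonic subspace, SM)
    np.sin(2 * k * delta),       # y
    np.full(5, 1 / np.sqrt(2)),  # zero-sequence (no path, isolated neutral)
])

theta = 0.3                                        # arbitrary electrical angle
i_abcde = (np.cos(theta - k * delta)               # 1st-harmonic phase currents
           + 0.5 * np.cos(3 * (theta - k * delta)))  # plus a 3rd harmonic

i_sub = C @ i_abcde
print("alpha-beta:", i_sub[:2])   # carries the 1st harmonic only
print("x-y       :", i_sub[2:4])  # carries the 3rd harmonic only
print("zero-seq  :", i_sub[4])    # ~0 for a balanced star connection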
The electrical fundamental pulsation corresponding to the MM in steady state, ω_m, is given by (9), where p is the number of pole pairs, f_s is the electrical supply frequency and θ_m is the mechanical rotor position. The electrical pulsation ω_s corresponding to the SM in steady state is three times higher than ω_m, as given by (10):

\omega_m = p \frac{d\theta_m}{dt} = 2\pi f_s \quad (9), \qquad \omega_s = 3\,\omega_m \quad (10)

For the considered machine, the torque developed by the five-phase PMSM is the sum of the torques produced by the two fictitious machines:

T_{em} = T_m + T_s = \frac{5}{2}\left(E_1 I_1 \cos\varphi_1 + E_3 I_3 \cos\varphi_3\right) \quad (11)

where (E_1, E_3) and (I_1, I_3) are the 1st and 3rd harmonic amplitudes of the back-emf and of the current respectively, and φ_1 (φ_3) is the phase shift between the 1st (3rd) harmonic of the current and the 1st (3rd) harmonic of the back-emf.
III. OPERATING CHARACTERISTICS OF 5-Φ BI-HARMONIC PMSM UNDER VOLTAGE AND CURRENT LIMITS
In this section, several supply strategies will be presented and examined in order to test the sensitivity of MTPA strategy applied to the 5-Φ Bi-Harmonic PMSM.
A. Maximum Torque Per Ampere strategy (MTPA).
For this control strategy, the maximum torque is achieved when the back-emf and the current vectors are collinear [START_REF] Kestelyn | A Vectorial Approach for Generation of Optimal Current References for Multiphase Permanent-Magnet Synchronous Machines in Real Time[END_REF]. The two following conditions guarantee the collinearity:
1) Based on equation [START_REF] Zhao | Torque density improvement of fivephase PMSM drive for electric vehicles applications[END_REF], the maximum torque is guaranteed by setting :
\varphi_1 = \varphi_3 = 0 \quad (12)
2) The ratio between the 1st and the 3rd harmonic current components must be equal to ρ, given by:

\rho = \frac{I_3}{I_1} = \frac{E_3}{E_1} \quad (13)

Fig. 5. Current and back-emf shapes for φ ≠ 0.
According to Fig. 3, the ratio ρ must be equal to 1.22. Assuming that the machine is supplied with currents characterized by equation (14):
I_1 = \frac{\sqrt{2}}{\sqrt{1+\rho^2}}\, I_{eff}, \qquad I_3 = \rho\,\frac{\sqrt{2}}{\sqrt{1+\rho^2}}\, I_{eff} \quad (14)
This strategy can be used as long as the maximum voltage delivered by the voltage source inverter (VSI) is not reached. When the maximum voltage is reached, the speed can still be increased by varying the ratio ρ and/or the phase shifts φ1 and φ3. This approach is different from the one used in classical three-phase machines, in which flux weakening appears as the only possible solution to increase the speed when the voltage limit is reached. For the bi-harmonic machine, the determination of the current references in the high-speed zone requires the resolution of a four-variable non-linear optimization problem [START_REF] Zahr | Comparison of Optimized Control Strategies of a High-Speed Traction Machine with Five Phases and Bi-harmonic Electromotive Force[END_REF].
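A short numerical sketch, not from the paper, of the MTPA reference-current split and the resulting torque at 500 rpm. It works with RMS quantities, takes the back-EMF values from Table I, and uses an arbitrary total current I_eff; these choices are assumptions made only for illustration.

import numpy as np

E1, E3 = 10.2, 13.0                  # back-EMF (RMS, V) at 500 rpm, Table I
rho = 1.22                           # MTPA ratio used in the paper (Table I values give ~1.27)
omega_mech = 2 * np.pi * 500 / 60    # mechanical speed, rad/s

I_eff = 10.0                         # total RMS phase current, arbitrary example (A)
I1 = I_eff / np.sqrt(1 + rho**2)     # 1st-harmonic RMS current
I3 = rho * I1                        # 3rd-harmonic RMS current

# MTPA: currents collinear with the back-EMFs, i.e. phi1 = phi3 = 0 (Eq. 12)
P_em = 5 * (E1 * I1 + E3 * I3)       # electromagnetic power, W (RMS quantities)
T_em = P_em / omega_mech             # corresponding torque, N.m
print(f"I1 = {I1:.2f} A, I3 = {I3:.2f} A, T = {T_em:.1f} N.m")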
As mentioned in the introduction, two strategies will be considered and compared with the MTPA:
The first strategy: the phase shifts φ1 and φ3 are fixed to zero and the ratio ρ is variable.
The second strategy: the ratio ρ is fixed to its optimal value and the phase shifts φ1 and φ3 vary.
For each control strategy, the required voltage is calculated and compared with the one obtained in the MTPA strategy. The results are mainly presented from Fig. 7 to Fig. 10.
B. Control with fixed ρ and variable phase shift φ.
In this control strategy, the value of ρ is always equal to 1.22, but the phase shifts φ1 and φ3 are changed so that the shape of the current remains the same as the one obtained with the MTPA strategy. Therefore, a global phase shift between the current and the back-emf, noted φ, is introduced, as presented in Fig. 5. Consequently, the phase shifts φ1 and φ3 are expressed as functions of φ as follows:

\varphi_1 = \varphi, \qquad \varphi_3 = 3\varphi \quad (15)

In addition, the machine should deliver the same torque as the one delivered when the MTPA strategy is applied. In these conditions, the torque is given by (16); in MTPA conditions, the ratio is ρ_MTPA = 1.22, which results in (17):

T_{em} = \frac{5}{2}\left(E_1 I_1 + E_3 I_3\right) = T_{MTPA} \quad (16)

I_1 = \frac{T_{MTPA}}{\tfrac{5}{2}\,(E_1 + \rho E_3)} = \frac{\sqrt{2}\, I_{eff,MTPA}\,\left(E_1^2 + E_3^2\right)^{1/2}}{E_1 + \rho E_3} \quad (17)

In Fig. 9, we present the ratio J/J_MTPA between the current density obtained when the ratio ρ is varied and the current density of the MTPA strategy. The results show that more current density must be injected in the machine in order to maintain the same torque as with the MTPA approach, which results in more copper losses in the 5-Φ Bi-Harmonic PMSM. The variation of the required voltage according to ρ is given in Fig. 10. These values are measured experimentally and compared to the ones obtained by FE simulation; they are quite similar. When the ratio ρ is less than 1.22 (the MTPA ratio), the required voltage is generally lower. If ρ = 0, which means that there is no 3rd harmonic current component, the required voltage is 3% less than in the MTPA case at 963 rpm, 11% less at 790 rpm and 12% less at 640 rpm. The lower the contribution of the 3rd harmonic to the total torque, the lower the required voltage; this is mainly because the 3rd harmonic induces a larger voltage drop than the 1st harmonic. Fig. 11 shows a comparison between the phase voltage obtained by FE simulation and the measured voltage obtained after elimination of the PWM harmonics. Finally, by comparing the two approaches developed in sections III.B and III.C, it appears that the required voltage is less sensitive to the variation of ρ than to the variation of φ. Table III summarizes the advantages and drawbacks of each control strategy in terms of peak current, peak voltage, torque and copper losses.
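A sketch of the current penalty discussed around Fig. 9, derived directly from the torque form of (11) with φ = 0 (so that T is proportional to I1(E1 + ρE3) while I_eff is proportional to I1·sqrt(1+ρ²)); the back-EMF amplitudes are taken from Table I, and the derivation, not the paper, is the source of the function below.

import numpy as np

E1, E3 = 10.2, 13.0
rho_mtpa = E3 / E1

def current_ratio(rho):
    """I_eff(rho) / I_eff(rho_mtpa) at constant torque and phi = 0."""
    return ((E1 + rho_mtpa * E3) / (E1 + rho * E3)
            * np.sqrt((1 + rho**2) / (1 + rho_mtpa**2)))

for rho in (0.0, 0.5, 1.0, rho_mtpa, 1.5):
    print(f"rho = {rho:.2f} -> I_eff / I_eff_MTPA = {current_ratio(rho):.3f}")
# The ratio is 1 at the MTPA value and larger everywhere else, i.e. more
# current (and copper losses) is needed to hold the torque, as stated above.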
IV. CONCLUSION
In this paper, the sensitivity of the MTPA strategy was investigated for a particular bi-harmonic machine, for which the torque developed from the 1st harmonic is equivalent to the one from the 3rd harmonic. Given that the MTPA is achieved when ρ_MTPA = I3/I1 = E3/E1 and φ = 0, two sensitivity tests were performed: the first one concerns the phase shift φ (a control parameter) between the current and the back-emf; the second one concerns the ratio ρ = I3/I1 (a design parameter), which in MTPA is equal to E3/E1. The results show that if the phase shift φ varies between -π/10 and π/10, the torque can be reduced by up to 35%. Therefore, in order to obtain the desired torque, more current density has to be injected in the machine. Furthermore, the required voltage can increase by up to 20% (φ < 0) or decrease by 40% (φ > 0) compared to the MTPA values.
On the other hand, the variation of the ratio ρ can slightly decrease the required voltage, by 3% to 12%, while keeping the same torque as the one provided by the MTPA strategy; but for this kind of variation, the current density must be increased.
In conclusion, the sensitivity of the required voltage and current with respect to ρ and φ should be taken into account when sizing the VSI. The presented results make it possible to define the safety margin to be considered when fixing the volt-ampere rating of the VSI. Consequently, the machine remains able to provide the same torque as the one delivered in MTPA even when the values of ρ and φ are not precisely determined.
(a) Stator (b) Rotor Fig. 1. Machine prototype (5-Φ 40-slot/16-pole/48-pole).
Fig. 2. Test bench used for experimental validation.
Fig. 3. Waveforms and harmonic content of the back-EMF obtained by FE and experimentally at 500 rpm.
Fig. 4. Configuration of the 5-Φ PMSM.
TABLE I: Electrical parameters of the 5-Φ B-PMSM. Rs = 0.0324 Ω; Lp = 139 μH; Ls = 178 μH; p = 8. From spectrum analysis at load at 500 rpm, back-EMF amplitudes (RMS): E1 = 10.2 V, E3 = 13 V.
Fig. 6. Variation of torque according to the phase shift angle φ.
Fig. 7. Variation of torque according to the phase shift angle φ.
Fig. 8. Voltage waveform for three values of the phase shift φ and a current density of 20 A/mm², obtained by FE simulation.
TABLE III: Advantages and drawbacks of the two studied strategies
Variation of φ (ρ = 1.22). Advantages: 1) less required voltage than the MTPA strategy if φ > 0; 2) same peak current value as MTPA. Drawbacks: 1) more required voltage than the MTPA strategy if φ < 0; 2) generally, a lower torque than the MTPA one is obtained.
Variation of ρ (φ = 0). Advantages: 1) less required voltage than MTPA for the same torque. Drawbacks: 1) more copper losses than MTPA for the same torque; 2) higher peak current than the MTPA strategy.
ACKNOWLEDGEMENT
This work has been achieved within the framework of CE2I project (Convertisseur d'Energie Intégré Intelligent). CE2I is co-financed by European Union with the financial support of European Regional Development Fund (ERDF), French State and the French Region of Hauts-de-France.
The allowed variation of the phase shift φ belongs to the following interval:-π/10 ≤ φ ≤ π/10. The FE simulation is performed for a current density of 2.5 A/mm 2 , 5A/mm 2 and 20A/mm 2 . Another operating point is considered when the current peak value is equal to 200A. Here, it should be noticed that the maximum value which can be measured by the current sensors reaches 200A.
Each current density is injected in the machine for different speed value so that for φ=0, a peak voltage value equal to the half of DC bus voltage is obtained, for all the considered current densities. In this work, the DC bus voltage is assumed to be 48V. Fig. 6 and Fig. 7 present the variation of torque and voltage respectively according to the variation of phase shift φ. The results are determined using the FE software ANSYS Maxwell 2D. As observed in Fig. 6, the required voltage is greater than 24V when the phase shift φ<0.
Regarding Table. II, it has been shown that the percentage of the additional required voltage is within the band [8%-20%].
On the contrary, the required voltage becomes less than 24 V when φ>0. The percentage of required voltage decrease is within [14%-40%] in comparison to the MTPA value (24V).
In Fig. 8, the impact of the phase shift on the voltage waveform is presented. The FE simulation is performed for three phase-shift values (φ = -π/10, φ = 0 and φ = π/10) with a current density equal to 20 A/mm². The results show that for φ = -π/10 the voltage exceeds the maximum limit of 24 V during several instants. Consequently, an important distortion can appear in the voltage waveform in this case.
In addition, the torque can decrease significantly by 35.9% as depicted in table .II and Fig. 6 when the phase shift φ<0 or φ>0.07. In order to compensate this torque decrease, more current density must be injected which increases significantly the required voltage. Notice that, in all the studied cases, a slight improvement of the torque is possible when φ is within [0-0.07] (up to 1.5 % according to Table. II and to Fig. 4), due to the slight saliency effect of the machine.
C. Control with a variable value of ρ and phase shift φ equal to zero.
In this control strategy, the phase shifts φ1 and φ3 are fixed to zero and the ratio ρ between the 1st and the 3rd harmonic currents is variable. This means that an error on the ratio ρ (<1.22) implies that the torque is mainly produced by the 1st harmonic component.
"770391",
"22157",
"1244471"
] | [
"13338",
"13338",
"13338",
"13338"
] |
01768961 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2010 | https://imt-mines-albi.hal.science/hal-01768961/file/texte%20d%C3%A9finitif.pdf | Elsa Weiss-Hortala
Andrea Kruse
Christina Ceccarelli
Radu Barna
Influence of phenol on glucose degradation during supercritical water gasification
Keywords: supercritical water gasification, phenol, glucose, hydrogen production
Biomass is the ideal alternative renewable energy that might help decrease CO 2 emissions. Super Critical Water Gasification (SCWG) is a recent treatment method which is still being developed as regards wet biomass. Above its critical point, water has specific properties and is able to convert wet biomass into gas, and especially into hydrogen. In order to propose a general reaction scheme of the SCWG as regards the lignocellulosic biomass conversion, the interactions between lignin and cellulose must be highlighted. Lignocellulosic biomass could be modelled with phenol (substitute for lignin) and glucose (substitute for cellulose).
In a continuous flow tubular reactor, the gasification of solutions containing phenol and glucose or either of the two pure compounds is more efficiently performed when in presence of an alkaline catalyst. The comparison of global parameters, such as the Total Organic Carbon content (TOC), the composition of the liquid product phase (glucose, phenol…), the volume and composition of the gas phase (H 2 , CO 2 , CH 4 …), showed that a small quantity of phenol in a glucose solution dramatically decreased the efficiency of the solution's conversion.
In the continuous tubular reactor, the gasification of solutions containing phenol and glucose, or either of the two pure compounds, was carried out in the presence of an alkaline catalyst. The comparison of the gas yields showed that the presence of phenol in a glucose solution dramatically decreased the conversion efficiency of the solution under comparable operating conditions (pressure, temperature, flow rate, catalyst concentration).
Introduction
Nowadays, researchers are getting more involved in the field of renewable or supposedly renewable forms of energies: solar power, wind energy, biogas, synthetic gas… with view to replace fossil energies and decrease carbon dioxide emissions that are ensued from fossil compound processing operations. With consideration of these two points, biomass is the ideal alternative energy, with a carbon dioxide balance close to zero. Most of the currently used techniques are efficient with dry biomass (pyrolysis for example). On the contrary, Super Critical Water Gasification (SCWG) is a recent treatment method which is still being developed [START_REF] Kruse | Hydrothermal biomass gasification[END_REF], and that allows wet biomass to be used, even with a natural water content of up to 80-90 %.
The principle is to convert wet biomass into gas, using high temperatures and pressures.
Above the critical point (22.1 MPa and 374°C), water has specific properties and is able to convert carbon from the biomass into methane and/or carbon dioxide. During this process, hydrogen that was bonded to biomass could also be transformed into dihydrogen. At the present time, hydrogen seems to be the best alternative source of energy thanks to its high energy potential and eco-friendly properties, but it has to be produced in a sustainable way.
Different technologies for H 2 production are under development.
Above its critical point, water is a monophasic system [START_REF] Clifford | Reactions in supercritical water[END_REF], and its physico-chemical properties are no longer the same when it is liquid [START_REF] Cansell | Thermodynamic aspects of supercritical fluids processing: applications to polymers and wastes treatment[END_REF][START_REF] Shaw | Supercritical water-a medium for chemistry[END_REF]. Generally speaking, supercritical fluids are interesting because they form a unique phase; their diffusivity is close to that of gases and their density could be easily adjusted to the desired values [START_REF] Savage | Reactions at supercritical conditions: applications and fundamentals[END_REF]. SCW media are very reactive because they efficiently hydrolyse organic compounds [START_REF] Marshall | Ion product of water substance, 0-1000°C, 1-10,000 bar new international formulation and its background[END_REF][START_REF] Franck | Thermophysical properties of supercritical fluids with special consideration of aqueous systems[END_REF][START_REF] Kruse | Hot compressed water as reaction medium and reactant. Properties and synthesis reactions[END_REF]. At low densities, SCW is a poor solvent for ionic species like inorganic salts [START_REF] Tester | Chemical reactions and phase equilibria of model halocarbons and salts in sub-and supercritical water (200-300 bar, 100-600°C)[END_REF], but it is completely miscible with many organic compounds and gases [START_REF] Kruse | Hot compressed water as reaction medium and reactant. Properties and synthesis reactions[END_REF], allowing the precipitation of inorganic salts on the one hand, and homogenous reaction processes between gases and organic compounds on the other hand. Reactions in supercritical water could be catalysed by acid or base [START_REF] Kruse | Hot compressed water as reaction medium and reactant. Properties and synthesis reactions[END_REF][START_REF] Hunter | Recent advances in acid-and base-catalyzed organic synthesis in high-temperature liquid water[END_REF] or initiated by free radicals [START_REF] Kruse | Hot compressed water as reaction medium and reactant. 2. Degradation reactions[END_REF].
Biomass reacts with water and forms hydrogen and carbon dioxide in a steam reforming reaction [START_REF] Yu | Hydrogen production by stream reforming glucose in supercritical water[END_REF][START_REF] Matsumura | Biomass gasification in near-and super-critical water: status and prospects (review)[END_REF] following Eq. 1. Both Water-Gas Shift reaction (Eq. 2, WGS) and methanation (Eq. 3) are used to determine the composition of the gas.
\mathrm{CH}_x\mathrm{O}_y + (2-y)\,\mathrm{H_2O} \rightarrow \mathrm{CO_2} + \left(2 - y + \tfrac{x}{2}\right)\mathrm{H_2} \quad (1)

\mathrm{CO} + \mathrm{H_2O} \rightleftharpoons \mathrm{CO_2} + \mathrm{H_2} \quad (2)

\mathrm{CO} + 3\,\mathrm{H_2} \rightleftharpoons \mathrm{CH_4} + \mathrm{H_2O} \quad (3)
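As a concrete instance of Eq. (1), writing glucose per carbon atom as CH2O (x = 2, y = 1) gives the overall stoichiometry that appears later as Eq. (6):

\mathrm{CH_2O} + \mathrm{H_2O} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2}
\qquad \text{i.e.} \qquad
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{H_2O} \rightarrow 6\,\mathrm{CO_2} + 12\,\mathrm{H_2}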
Even though water is very reactive, chemical kinetics is a limiting factor in the development of a technical process [START_REF] Kruse | Hydrothermal biomass gasification[END_REF]. Sometimes, char and/or tar can be produced during the process [START_REF] Matsumura | Biomass gasification in near-and super-critical water: status and prospects (review)[END_REF].
Catalysts are used to increase reaction rates and support some reaction pathways, such as the WGS by alkaline salts [START_REF] Matsumura | Biomass gasification in near-and super-critical water: status and prospects (review)[END_REF][START_REF] Sutton | Review of literature on catalysts for biomass gasification[END_REF][START_REF] Yanik | Biomass gasification in supercritical water: II. Effect of catalyst[END_REF]. After cooling and depressurising sequences, both liquid and gas phases are analysed.
Most studies are published on the SCWG of glucose [START_REF] Kruse | Hot compressed water as reaction medium and reactant. 2. Degradation reactions[END_REF][START_REF] Yu | Hydrogen production by stream reforming glucose in supercritical water[END_REF][START_REF] Matsumura | Biomass gasification in near-and super-critical water: status and prospects (review)[END_REF][START_REF] Lee | Gasification of glucose in supercritical water[END_REF][START_REF] Watanabe | Catalytic hydrogen generation from biomass (glucose and cellulose) with ZrO 2 in supercritical water[END_REF][START_REF] Aida | Dehydration of D-glucose in high temperature water at pressures up to 80 MPa[END_REF][START_REF] Sinağ | Key compounds of the hydropyrolysis of glucose in supercritical water in the presence of K 2 CO 3[END_REF][START_REF] Goodwin | Conversion of glucose to hydrogen-rich gas by supercritical water in a microchannel reactor[END_REF], which is an intermediate compound of cellulose decomposition [START_REF] Sasaki | Kinetics of cellulose conversion at 25 MPa in sub-and supercritical water[END_REF][START_REF] Resende | Noncatalytic gasification of cellulose in supercritical water[END_REF]. Some recent studies show that both the presence of lignin in the feedstock and operating conditions have an influence on the gasification efficiency [START_REF] Yoshida | Gasification of cellulose, xylan and lignin mixtures in supercritical water[END_REF][START_REF] Saisu | Conversion of lignin with supercritical water-phenol mixtures[END_REF][START_REF] Yoshida | Gasification of model compounds and real biomass in supercritical water[END_REF][START_REF] Resende | Noncatalytic gasification of lignin in supercritical water[END_REF] and that the lignin seems to modify the conversion efficiency of the carbohydrate conversion [START_REF] Yoshida | Gasification of cellulose, xylan and lignin mixtures in supercritical water[END_REF][START_REF] Yoshida | Gasification of model compounds and real biomass in supercritical water[END_REF][START_REF] Yanik | Biomass gasification in supercritical water: Part 1. Effect of the nature of biomass[END_REF]. In order to propose a general reaction scheme of the SCWG for lignocellulosic biomass, the interactions between these two compounds must be highlighted. In this paper, lignocellulosic biomass was modelled by glucose as substitute for cellulose and phenol for lignin. In our experimental approach, we used two representative molecules. Glucose is the hydrolysis product of cellulose. Phenol is easily obtained in SCW by the degradation of guaiacol [START_REF] Dileo | Supercritical water gasification of phenol and glycine as models for plant and protein biomass[END_REF] and can be considered as one of the structural building bricks of lignin. We have decided to use phenol, the simplest molecule presenting the Ar-OH structure, as a substitute for lignin.
The lignin molecule is a complex arrangement of aromatic rings. In the different molecular forms of lignin that have been proposed, guaiacol (o-methoxy-phenol) is one of the most abundant aromatic rings containing hydroxyl and methoxy substituent. Di Leo et al. [START_REF] Dileo | Gasification of guaiacol and phenol in supercritical water[END_REF] have noted that guaiacol quickly disappears during SCWG and can be substituted by its reaction products, e.g. phenol. During the hydrothermal degradation of lignin, phenol and different substituted phenols are formed as intermediate products of lignin gasification [START_REF] Saisu | Conversion of lignin with supercritical water-phenol mixtures[END_REF][START_REF] Kruse | Biomass conversion in water at 330-410°C and 30-50 MPa. Identification of key compounds for indicating different chemical reaction pathways[END_REF]. Phenol is also a widely encountered industrial pollutant (oil, painting, pesticides, colouring agents and pharmaceutical industries), which is difficult to eliminate from wastewater [START_REF] Weiss | A comparison of electrochemical degradation of phenol on boron doped diamond and lead dioxide electrodes[END_REF][START_REF] Busca | Technologies for the removal of phenol from fluid streams: A short review of recent developments[END_REF].
The oxidation of phenol in supercritical water (SCWO) has been addressed in many publications.
SCWO of phenol was performed in batch or continuous flow tubular reactors [START_REF] Thornton | Phenol oxidation pathways in supercritical water[END_REF][START_REF] Gopalan | A reaction network for phenol oxidation in supercritical water[END_REF][START_REF] Krajnic | On the kinetics of phenol oxidation in supercritical water[END_REF][START_REF] Henrikson | Inhibition and acceleration of phenol oxidation by supercritical water[END_REF][START_REF] Henrikson | Potential explanation for the inhibition and acceleration of phenol SCWO by water[END_REF] in various experimental conditions: temperatures from 300-630°C, pressure range from 21-31
MPa, oxygen excess, various water densities and reaction times. Different global kinetic models have been proposed for phenol degradation, almost pseudo first order for phenol; in general, the oxidation kinetic is considered as an Arrhenius type process. Water density in the mixture seems to play a complex role. In general, with the decrease of water density the main mechanisms change from heterolytic (ionic) to homolytic (radical) as a result of the concomitant change of the dielectric constant of water, of ion product of water Kw, of phenol dissociation, of diffusion coefficients and rates of the diffusion controlled reactions. The experiments showed that water inhibits the reaction rate at low densities and high temperatures and it accelerates the rate at lower temperatures and higher water densities.
Different mechanisms for phenol SCWO have been published [START_REF] Thornton | Phenol oxidation pathways in supercritical water[END_REF][START_REF] Gopalan | A reaction network for phenol oxidation in supercritical water[END_REF][START_REF] Krajnic | On the kinetics of phenol oxidation in supercritical water[END_REF][START_REF] Onwudili | Reaction mechanisms for the decomposition of phenantrene and naphtalene under hydrothermal conditions[END_REF], and among the characterised products of the reaction, the main ones in the liquid phase are phenoxyphenols, biphenol, dibenzofuran, acids (oxalic, acetic, benzoic, salicylic…). The gasification of phenol in supercritical water was also studied, showing a relatively slow degradation, yet more efficient when using an Ni catalyst in a quartz reactor [START_REF] Dileo | Supercritical water gasification of phenol and glycine as models for plant and protein biomass[END_REF][START_REF] Dileo | Gasification of guaiacol and phenol in supercritical water[END_REF].
Alkali salts are used as substitute for inorganic ("ash") components of natural biomass. It was shown that both KHCO 3 and natural inorganic components act as catalysts in the water-gas shift reaction [START_REF] Kruse | Hydrothermal biomass gasification[END_REF][START_REF] Kruse | Hot compressed water as reaction medium and reactant. 2. Degradation reactions[END_REF]. The addition of the salt enables the model system and real biomass to be more efficiently compared.
Materials and methods
Continuous flow reactor and operating conditions
Experiments were performed in a continuous flow tubular reactor (Fig. 1). In all experiments, KHCO3 (Roth Company, 99.5% purity) was used as catalyst at 0.2 wt %.
The pressure was kept constant at 25 MPa. The working temperatures were 400, 450, 500 and 550 °C. The flow rate was set between 0.9 and 3.8 kg h-1. The experimental error, estimated by repeated experiments under the same experimental conditions, is in the range of ± 10 % of the given data.
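The residence times quoted below can be roughly cross-checked from the reaction-zone volume (36.19 mL, see the apparatus description) and the density of supercritical water at the operating conditions; the sketch below is an order-of-magnitude estimate, not the authors' exact calculation, and it does not exactly reproduce the reported values.

# Rough residence-time estimate: tau = V_reactor * rho_water(T, P) / mass_flow
V_reactor_mL = 36.19      # reaction-zone volume (apparatus description)
rho_g_per_mL = 0.089      # pure water at 500 degC and 25 MPa (value quoted later)
m_dot_kg_h = 1.385        # example flow rate used for the glucose runs

m_dot_g_min = m_dot_kg_h * 1000 / 60
tau_min = V_reactor_mL * rho_g_per_mL / m_dot_g_min
print(f"estimated residence time: {tau_min:.3f} min")  # ~0.14 min (paper: 0.1624 min)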
Analyses
After cooling, gas and liquid phases were analysed. No solid phase was observed in these operating conditions. After 30 and 60 minutes of stable state of the reactor, gas and liquid samples were taken for analysis. The obtained results were an average of the analysed samples. To compare the efficiency of the conversion, the only comparison of gas volume or liquid mass is not pertinent, particularly in case of different flow rates. In the outflow, a normalised gas yield (N gas yield), calculated by the volume of gas and the weight of the liquid obtained after the 60 min steady state of the reactor, was defined:
\text{N gas yield}\ (\mathrm{L\,kg^{-1}}) = \frac{\text{gas volume (L)}}{\text{liquid weight (kg)}} \quad (4)
The mass error is 0.1 g and the gas volume error is 10%.
Liquid phase
The efficiency of the organic compounds' conversion was estimated by measuring the Total Organic Carbon (DIMATOC ® 2000) with an error of 2%. To represent this efficiency, the TOC removal will be calculated as follows:
\text{TOC removal}\ (\%) = \frac{TOC_{initial} - TOC_{final}}{TOC_{initial}} \times 100 \quad (5)
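A small helper for the two bookkeeping quantities (4) and (5); the numerical inputs below are placeholders for illustration, not measured values from the paper.

def n_gas_yield(gas_volume_L, liquid_weight_kg):
    """Normalised gas yield, Eq. (4): litres of gas per kg of recovered liquid."""
    return gas_volume_L / liquid_weight_kg

def toc_removal(toc_initial, toc_final):
    """TOC removal in percent, Eq. (5); both TOC values in the same unit."""
    return 100.0 * (toc_initial - toc_final) / toc_initial

print(n_gas_yield(12.0, 1.4))      # placeholder inputs, L per kg of liquid
print(toc_removal(7300.0, 730.0))  # 90 % removal, the order reported for glucose runs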
Residual phenols (phenol and substituted phenols) representing synthesised and/or undecomposed phenol as well as glucose and fructose were also quantified in the liquid phase by using a colorimetric test from Hach-Lange (LCK 346 Phenols), a specific enzymatic test and a UV-method supplied by Roche. The error in the analysis of glucose/fructose concentration was 0.5% and 5% for phenol.
In addition, UV-Vis (DR 5000, Lange-Hach) measurements are conducted with selected samples. The results show the presence of phenols and other aromatic compounds with a single aromatic system. No adsorption at longer wavelength was detected, therefore no hint for polyaromatic systems is found.
After the phenol-glucose mixture reaction, the GC-MS 1 analysis of the selected samples not only shows phenol presence but also that of alkyl-phenols (mainly methylphenols) in the product mixture.
Gas phase
An HP-6890A Series Gas Chromatograph was used to analyse the gas, and the injected volume was 100 µL. Two columns (80/100 Hayesep and 60/80 Molesieve 5A) and two kinds of detectors were placed in series so as to determine the composition and quantification of the gas. The main gases to be quantified were H 2 , CO 2 , CH 4 , CO and some light hydrocarbons (C 2 H 6 , C 3 H 8 …).
Results and discussion
Glucose solutions
Sinağ et al. demonstrated the effect of alkaline salts on glucose gasification in this reactor [START_REF] Sinağ | Formation and degradation pathways of intermediate products formed during the hydropyrolysis of glucose as a model substance for wet biomass in a tubular reactor[END_REF]; they concluded that KHCO3 improved gas generation and decreased the amount of furfural in the liquid phase. The influence of the glucose concentration was studied in the range of 0.25-2 wt % at 25 MPa, 500°C and with a flow rate of 1.385 kg h-1 (residence time equal to 0.1624 min).
Glucose was totally removed from the solution (<0.003 mg L -1 , not shown), meaning that molecules completely react with supercritical water in the considered operating conditions.
Moreover, fructose was also quantified and the values were less than 0.01 mg L -1 . These experimental results are in accordance with the literature [START_REF] Yu | Hydrogen production by stream reforming glucose in supercritical water[END_REF][START_REF] Goodwin | Conversion of glucose to hydrogen-rich gas by supercritical water in a microchannel reactor[END_REF][START_REF] Kabyemela | Kinetics of glucose epimerization and decomposition in subcritical and supercritical water[END_REF]. During the continuous process, no solid particle was observed.
Fig. 2 shows that, in these operating conditions, the normalised gas yield (Eq. 4) linearly increases along with the glucose concentration in the solution. For a glucose concentration in the range of 0.25 -2 wt %, this linear behaviour means that catalyst concentration is not kinetically limitative. This linear behaviour could also indicate that the conversion's efficiency, i.e. mole distribution between the gas and the liquid phases, is nearly the same in all experiments. The evolution of TOC and residual phenol of the liquid phase as function of glucose concentration is also linear (not shown). An increasing linear profile for the TOC indicates that the mineralisation of organic compounds is independent from the concentration in the investigated range. As seen in the literature, phenol is an intermediate product of
glucose degradation [START_REF] Kruse | Hot compressed water as reaction medium and reactant. 2. Degradation reactions[END_REF][START_REF] Sinağ | Key compounds of the hydropyrolysis of glucose in supercritical water in the presence of K 2 CO 3[END_REF][START_REF] Goodwin | Conversion of glucose to hydrogen-rich gas by supercritical water in a microchannel reactor[END_REF][START_REF] Watanabe | Oil formation from glucose with formic acid and cobalt catalyst in hot-compressed water[END_REF]. The concentration of residual phenol in the liquid outflow linearly increases with glucose concentration, meaning that the ratio between the phenol concentration and the initial glucose concentration is the same in all experiments.
Phenol concentration is a balance between phenol formation from glucose and phenol conversion during SCWG. This linear profile could indicate that rates and reaction mechanisms are similar in all experiments. As regards values, TOC removal was close to 90%
(between 88 and 90%) in each experiment, indicating that mineralisation was efficient in these operating conditions. Fig. 3 represents the volume percentage of the main gas (H 2 , CO 2 , CH 4 and CO) in the total volume as function of glucose concentration. The composition of the gas is of about 50 vol.% H 2 , 40 vol.% CO 2 , 10 vol.% CH 4 and 1 vol.% CO. These values are in the same order of magnitude as those obtained in the literature under various operating conditions [START_REF] Yu | Hydrogen production by stream reforming glucose in supercritical water[END_REF][START_REF] Lee | Gasification of glucose in supercritical water[END_REF][START_REF] Goodwin | Conversion of glucose to hydrogen-rich gas by supercritical water in a microchannel reactor[END_REF].
Hydrogen is the main compound in the gas phase. Of course, the presence of the catalyst decreases the amount of CO, and therefore the WGS reaction is promoted. Fig. 3 shows that, when glucose concentration increases, the proportion of H 2 slightly decreases and the proportion of CO 2 slightly increases. This phenomenon could be due to a difference between the kinetics of the reactions. Taking the general and total reaction scheme following Eq.6 into account:
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{H_2O} \rightarrow 6\,\mathrm{CO_2} + 12\,\mathrm{H_2} \quad (6)
the hydrogen volume should be twice the carbon dioxide volume. The different ratios of the gases indicate that the reaction is not complete, even if glucose has totally disappeared, and even if the kinetics of degradation or of other consecutive reactions seem to be different for the intermediate products. In other words: hydrogen reacts with the intermediate products by hydrogenation, and this reaction has a higher reaction order than glucose degradation.
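For reference, complete conversion according to Eq. (6) would give a dry gas with a 2:1 H2:CO2 molar ratio, i.e. roughly 67 vol.% H2 and 33 vol.% CO2 (ignoring CH4 and CO), well above the measured ~50/40 split:

\frac{n_{\mathrm{H_2}}}{n_{\mathrm{CO_2}}} = \frac{12}{6} = 2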
Phenol solutions
According to literature, the reaction rate of phenol gasification in a batch reactor is low without any Ni catalyst, and a temperature of 700°C leads to a complete reaction of phenol in a short time compared to that of 600°C [START_REF] Dileo | Gasification of guaiacol and phenol in supercritical water[END_REF].
Influence of flow rate
As regards phenol solutions, the influence of flow rate (or residence time) was studied in the range of 0.9-3.8 kg h -1 (residence time equals to 0.23-0.06 min) for a phenol concentration of 1 wt % (corresponding to 10 g L -1 ) at 25 MPa and 500°C. Fig. 4 shows that the conversion of phenol is not efficient (less than 50%). Contrary to glucose solutions, the reactivity of phenol in supercritical water is low in these operating conditions. The experiments performed by DiLeo et al. on phenol solutions with or without Ni catalyst [START_REF] Dileo | Supercritical water gasification of phenol and glycine as models for plant and protein biomass[END_REF][START_REF] Dileo | Gasification of guaiacol and phenol in supercritical water[END_REF] showed that the relation between water density and phenol concentration has an influence on the conversion's efficiency. It was concluded that an optimal water density (0.079 g mL -1 ) might exist for homogeneous SCWG of phenol at 600°C with an Ni catalyst [START_REF] Dileo | Supercritical water gasification of phenol and glycine as models for plant and protein biomass[END_REF]. It was also demonstrated that phenol removal was only of 25% without any Ni catalyst while the value reached 93%
with an Ni catalyst in the same period of 10 min [START_REF] Dileo | Gasification of guaiacol and phenol in supercritical water[END_REF]. For all parameters (TOC, phenol and relative gas yield) as function of residence time, the profiles of the curves show an abrupt increase or decrease between the lower residence time and the intermediate one, and the values of the next points are quite similar. According to literature, this low conversion is a consequence of the short reaction times in the experiments hereby presented. The composition of the gas phase is quite stable at the different flow rates, especially as regards H 2 (45%) and CO 2 (32%). With an Ni catalyst, the ratio between H 2 and CO 2 is close to two in optimised conditions [START_REF] Dileo | Supercritical water gasification of phenol and glycine as models for plant and protein biomass[END_REF].
Influence of temperature
To study the influence of temperature in the range of 400-500°C at constant pressure, it has to be considered that a change in the temperature not only influences the reaction rate with a continuous activation energy but also changes the water density [START_REF] Dileo | Supercritical water gasification of phenol and glycine as models for plant and protein biomass[END_REF]. As regards pure water, density equals to 0.166 g mL -1 at 400°C, 0.109 g mL -1 at 450°C and 0.089 g mL -1 at 500°C. Fig. 5 shows the residual phenol as function of the reactor's temperature, for the lowest residence time. The quantity of phenols decreases when the temperature increases. At 500°C, phenol removal reaches 52% compared to 12% at 450°C. This means that higher temperatures promote the reactivity of phenols in supercritical water. Temperature also favours TOC removal, increasing from 1% (400°C) to 9% (450°C) and up to 50% at 500°C (values not shown on Fig. 5).
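The quoted densities can be reproduced from any IAPWS formulation of the water properties; for instance with the CoolProp package (assuming it is installed), which is not used in the original work:

from CoolProp.CoolProp import PropsSI

P = 25e6  # Pa
for T_C in (400, 450, 500):
    rho = PropsSI('D', 'T', T_C + 273.15, 'P', P, 'Water')  # kg/m^3
    print(f"{T_C} degC: {rho / 1000:.3f} g/mL")  # ~0.166, 0.109, 0.090 g/mL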
Glucose and phenol solutions
The first objective is to check the influence of phenol on glucose degradation. The gas volume from the solutions containing only one compound (1 wt %) was compared to that of mixture solutions (1 wt % of each compound). With this kind of representation, the degradation processes of the two components are independent if the values for the mixture are the sum of the two single-compound values. The gas volumes of the three solutions and the theoretical sum are shown in Fig. 6 for a flow rate of 1.385 kg h-1 at T=500°C; the observations are the same at the other flow rates.
Gas quantity is not the only parameter affected by the presence of phenol: TOC and residual phenol values obtained from the mixtures are higher than the sum of the values obtained from the solutions of each compound. Fig. 7 shows the values of TOC as function of the solution's composition at the same flow rate. For glucose solutions of 1 wt % the TOC is very low. On the contrary, for 1 wt % phenol solutions, the TOC value in the liquid phase is 10 times higher. If the TOC of the mixture was simply the sum of the other two values, the TOC would only be a little higher than the value of the phenol solution. As seen on Fig. 7 TOC value of the mixtures is higher, meaning that phenol interacts with the intermediate products of glucose degradation, therefore modifying the glucose degradation process. Similar results concerning TOC and residual phenols are obtained with the other two flow rates: at 0.982 kg h -1 , the theoretical values of TOC and residual phenol would respectively be 5.45 g L -1 and 6.14 g L -1 . The experimentally found values are higher: 6.96 g L -1 of TOC and 8.3 g L -1 of residual phenol. For the flow rate of 3.8 kg h -1 , the experimental values are 1.5 higher than the theoretical sum: 6.1 g L -1 for the experimental TOC and 7.3 g L -1 for the experimental phenol.
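A small sketch making the additivity comparison explicit with the figures quoted above for the 0.982 kg h-1 run (the numbers are taken from the text; the percentage form is only a convenience):

# TOC and residual-phenol additivity check at 0.982 kg/h (values from the text)
toc_sum_theory, toc_mixture = 5.45, 6.96          # g/L
phenol_sum_theory, phenol_mixture = 6.14, 8.3     # g/L

for name, th, exp in (("TOC", toc_sum_theory, toc_mixture),
                      ("residual phenol", phenol_sum_theory, phenol_mixture)):
    excess = 100 * (exp - th) / th
    print(f"{name}: measured {exp} g/L vs additive {th} g/L (+{excess:.0f} %)")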
Other experiments were performed with various compositions of the mixture solution, keeping the total weight of organic compounds equal to 2%. As expected, and in accordance with Yoshida et al. [START_REF] Yoshida | Gasification of cellulose, xylan and lignin mixtures in supercritical water[END_REF] who used a cellulose-lignin mixture in their operating conditions, the comparison of the normalised gas yield as a function of the phenol fraction showed a non-linear profile. The influence of temperature was studied. Fig. 8 shows the influence of temperature on the TOC value for three different compositions of 2 wt % feed solutions. The initial TOC of these solutions was 7.3 g L-1 for glucose, 7.7 g L-1 for 0.1 wt % phenol and 8.3 g L-1 for 0.25 wt % phenol. As in the previous experiments, the glucose concentration in the outflow liquid phase equals zero.
As shown in Fig. 7 for a glucose solution (▲) the TOC is removed from the solution with a high efficiency regardless of the temperature. Glucose degradation depends on temperature:
TOC removal increases from 82 to 91% when increasing the temperature from 450 to 500°C.
For the mixture containing 0.1 wt % of phenol (■), TOC removal linearly increases from 75
to 94% with the temperature. In fact, TOC removal is only of 22% at 450°C but reaches 77% at 550°C. On the contrary, the values of TOC removal dramatically decrease with 0.25 wt % of phenol in the mixture (♦), unlike results with glucose. Fig. 8 shows that the temperature has an influence on the mineralisation's efficiency regardless of the composition, and that a small initial quantity of phenol changes the gasification behaviour of the solution. The residual phenol values also showed a profile close to the profiles observed with TOC values. The analytical method for residual phenol did not separate phenol molecules from their derivates (cresol, oligomers…). However, an estimation of the part of TOC represented by phenol (6 atoms of carbon) was calculated. The part of phenol in the experimental TOC value for the solution at 0.25 wt % of phenol (♦) is around 20% whereas it is more than 50% in the other cases.
When a small amount of phenol is added to a glucose solution, the residual TOC and residual phenol increase significantly, confirming the influence of phenol on glucose SCWG conversion.
To understand the chemistry of phenol gasification, the reaction path from phenol to compounds of higher molecular weight has to be considered; DiLeo et al. [START_REF] Dileo | Gasification of guaiacol and phenol in supercritical water[END_REF] report the formation of, e.g., dibenzofuran and biphenyl from phenol. Compounds with a large conjugated electron system formed in the SCWG process would give relatively stable free radicals and would act as free radical scavengers. This would explain the reduced gasification of glucose in the presence of phenol. As mentioned above, no such compound could be detected here. Still, further experiments have to verify whether these compounds are formed or not; for example, it is possible that they stick to the reactor's wall.
On the other hand, the studies on phenol oxidation mentioned in the introduction show that the free radical chemistry of phenols is complex, and higher molecular weight compounds are also found during oxidation. The addition of an active free radical to phenol, as observed in SCWG and SCWO of phenol, should lead to a less reactive free radical and slow down the free radical chain mechanism. Yet, this effect seems too weak to explain the results presented here.
Further investigations are necessary to clarify this point.
Conclusion
SCWG of glucose and/or phenol in dilute solutions was studied in a continuous flow reactor (25 MPa and 400-550°C). The comparison between single-compound solutions and mixtures showed that the presence of phenol, even in a small amount in a glucose solution, decreases the conversion efficiency. In our experiments, phenol is considered as an intermediate of lignin hydrolysis/gasification. The role played by phenol in glucose degradation can therefore be correlated with literature data on the influence of lignin on the SCWG of carbohydrates.
Phenol reduces the hydrogen yield and particularly the total volume of gas obtained from the conversion of glucose. Consequently, in the presence of phenol, the TOC and residual phenol contents of the liquid phase are higher. TOC removal dramatically decreases in the presence of phenol in the mixture.
One possible explanation is that, similarly to studies on the influence of proteins [START_REF] Yanik | Biomass gasification in supercritical water: Part 1. Effect of the nature of biomass[END_REF][START_REF] Kruse | Influence of proteins on the hydrothermal gasification and liquefaction of biomass. 2. Model compounds[END_REF], phenol acts as a free radical scavenger, which remains to be proven by further experiments.
Captions
Figure 1: Scheme of the apparatus with nitrogen supply (A), feed tank (B), high pressure pump (C), preheater (D), reactor (E), cooler (D), pressure control unit (E), phase separation (F), wet gas meter (G) and gas sampling (H).
The reactor (Fig. 1) is made of Inconel 625. It is an 18 m-long helix with an internal diameter of 1.6 mm and an external diameter of 6.35 mm. The volume of the reaction zone is 36.19 mL. The electric heating was controlled by a thermocouple on the external surface of the reactor. The GraphWorX32 module of the Genesis32 software supplied by ICONICS was used to control and monitor the temperatures. The solution was injected into the reactor with a membrane pump supplied by LEWA (type EL1). After cooling, the product mixture was expanded to ambient pressure by a back-pressure regulator.
Figure 2: Influence of glucose concentration on the normalized gas yield during the SCWG at 500°C, 25 MPa and 1.385 kg h -1 , in presence of 0.2 wt % of KHCO3.
Figure 3: Influence of glucose concentration on gas composition during SCWG at 500°C, 25 MPa and 1.385 kg h -1 , in presence of 0.2 wt % of KHCO3.
Figure 4: Influence of residence time on the residual phenol during the SCWG at 500°C, 25 MPa.
Figure 5: Influence of temperature on the residual phenol for a residence time of 0.06 min.
Figure 6: Comparison of the theoretical sum of gas volumes with the gas volume obtained during the SCWG of the mixture solution.
Figure 7: Comparison of the theoretical sum of TOC with the TOC obtained during the SCWG of the mixture solution.
Figure 8: Influence of temperature on TOC values for 3 compositions of mixture solutions during the SCWG at 25 MPa, 2.19 kg h -1 and in presence of 0.2 wt % of KHCO3. Solutions contain 0, 0.1 or 0.25 wt % of phenol, for a total organic content of 2 wt %.
The feed flow rate was varied from 0.982 to 3.8 kg h -1 . Considering the reactor volume of 36.19 mL and the volumetric flow rate at the temperature and pressure in the reactor, the corresponding residence times ranged from 0.25 down to 0.06 min. The concentration of glucose (C6H12O6·H2O monohydrate, Merck) or phenol (C6H6O, Merck, 99% purity) varied from 0.25 to 2 wt %.
"18950",
"748738"
] | [
"242220",
"150665",
"150665",
"242220"
] |
01769022 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01769022/file/L2EP_ICELMACH_2016_CLENET.pdf | Antoine Tan-Kim
Nicolas Hagen
Vincent Lanfranchi
Stéphane Clénet
Thierry Coorevits
Jean-Claude Mipo
Jérôme Legranger
Frédéric Palleschi
Influence of the Manufacturing Process of a Claw-Pole Alternator on its Stator Shape and Acoustic Noise
Keywords: Acoustic noise, claw-pole alternator, manufacturing tolerances, faults
This paper shows the influence of the manufacturing process of a claw-pole alternator on its acoustic noise. First, the stator welds and the assembly of the stator in the brackets are linked to deformations of the inner diameter of the stator. Then, the influences of these deformations on the magnetic forces and the subsequent acoustic noise are investigated. Results show that the deformations caused by the manufacturing process significantly increase the sound power level of particular orders.
I. INTRODUCTION
Claw-pole alternators are complex electrical machines not only due to their rotor shape but also because of their assembly, as shown in Figure 1. Assembling these parts together as well as manufacturing the parts leads to discrepancies between nominal and real shapes and dimensions. Manufacturing tolerances and process have an influence on torque ripple or back EMF for permanent magnet synchronous machines [START_REF] Vivier | Torque ripple improvement with IPMSM rotor shape modification[END_REF], [START_REF] Khan | Design of experiments to address manufacturing tolerances and process variations influencing cogging torque and back EMF in the mass production of the permanent-magnet synchronous motors[END_REF], [START_REF] Ortega | Analytical model for predicting effects of manufacturing variations on cogging torque in surface-mounted permanent magnet motors[END_REF]. Acoustic noise of electrical machines can also be influenced by tolerances on material parameters [START_REF] Druesne | Modal stability procedure applied to variability in vibration from electromagnetic origin for an electric motor[END_REF] or dimensions. The most commonly studied fault is eccentricity [START_REF] Pellerey | Numerical simulations of rotor dynamic eccentricity effects on synchronous machine vibrations for full run up[END_REF], [START_REF] Kim | Estimation of acoustic noise and vibration in an induction machine considering rotor eccentricity[END_REF]. However, other tolerances, such as stator deformations, can change the airgap width and also affect the performance and the acoustic noise of electrical machines.
Liu [START_REF] Liu | Influence of the stator deformation on the behaviour of a claw-pole generator[END_REF] has investigated the influence of stator deformations of a claw-pole alternator on its no-load flux linkage and torque. A stator with an oval shape as well as a stator with retracted teeth at specific locations were studied. The oval shape was found to have little influence on the no-load characteristics whereas the retracted teeth have a greater influence, especially on the cogging torque. (Paper presented at the XXII International Conference on Electrical Machines (ICEM), Lausanne, September 4-7, 2016 [START_REF] Tan-Kim | Influence of the manufacturing process of a claw-pole alternator on its stator shape and acoustic noise[END_REF].)
A. Tan-Kim, J.C. Mipo, J. Legranger and F. Palleschi are with Valeo Engine and Electrical Systems, Créteil, France (e-mail: [email protected]).
N. Hagen is with Arts et Métiers ParisTech, Centre de Paris, 151 boulevard de l'hopital, 75013 Paris, France.
V. Lanfranchi is with Sorbonne University, Université de Technologie de Compiègne, EA 1006 Laboratoire Electromécanique, Compiègne, France (email: [email protected]).
Offermann et al. [START_REF] Offermann | Uncertainty quantification and sensitivity analysis in electrical machines with stochastically varying machine parameters[END_REF] also studied these machine uncertainties and found a link between the retracted teeth and the 12th and 24th harmonics of the cogging torque. Ramesohl [START_REF] Ramesohl | Influencing factors on acoustical simulations including manufacturing tolerances and numerical strategies[END_REF] found, by simulation, that an oval stator with a maximum increase of 0.1 mm of the airgap width significantly increases the sound power level of the 5th and 6th harmonics (assumed to be the 30th and 36th orders in this study), but the faults were not implemented in a full finite element simulation. To the authors' knowledge, the influence of retracted teeth on the acoustic noise has not been investigated.
This paper aims at investigating the influence of stator deformations on the acoustic noise of claw-pole alternators. It establishes a link between the stator deformations shown in [START_REF] Liu | Influence of the stator deformation on the behaviour of a claw-pole generator[END_REF] and the simulation of acoustic noise of claw-pole alternators shown in [START_REF] Tan-Kim | Vibro-acoustic simulation and optimization of a claw-pole alternator[END_REF]. First, the influence of the manufacturing process on the stator deformations is explained. An analytical model of the airgap flux density, based on permeance and magnetomotive force (MMF) functions taking into account the 3D rotor topology and the stator deformations, is then used to interpret their effect. Then, electromagnetic and vibro-acoustic simulations are carried out taking these deformations into account. Finally, the influence of the deformations on the magnetic forces and the sound power level is analyzed. The manufacturing process of a claw-pole alternator introduces two specific deformations of the stator.
The first deformation is due to the stator manufacturing. The stator stack of a claw-pole alternator is made of an assembly of steel sheets. These steel sheets are held together by six welds equally spaced by 60° on the circumference of the outer diameter, as shown in Figure 2 (left). The heat generated by the welds increases the outer diameter at these six locations. As a consequence, the stator teeth located every 60° are also drawn towards the outer diameter.
The second deformation of the stator is due to the assembly of the alternator. The stator is clamped between two aluminum brackets in the axial direction, and four screws hold the assembly, as shown in red in Figure 2 (right). When the screws are tightened, the stator is squeezed between the two brackets and deformed. This deformation can be investigated with a static mechanical simulation: an increasing pre-stress is applied on the screws to simulate their tightening and the stator is compressed. As a result of this compression, the stator adopts a square shape, as shown in Figure 3. The stator deformations due to the welds and to the stator-brackets assembly have been reported by Liu [START_REF] Liu | Influence of the stator deformation on the behaviour of a claw-pole generator[END_REF]. The measured deformed shapes of the stator inner diameter are shown in black in Figure 4. The first shape on the left is the nominal case: the inner diameter is perfectly circular. The second shape is due to the welds: six teeth are retracted, leading to an increase in the airgap width at these locations. The third shape is oval and was measured after tightening the screws. The difference between the measured oval shape and the simulated square shape may come from the successive tightening of the screws, instead of the simultaneous tightening of the four screws used in the simulation. The last shape on the right was measured on the inner diameter of the assembled stator and includes both the oval shape due to the assembly and the retracted teeth due to the welds.
III. ELECTROMAGNETIC AND VIBRO-ACOUSTIC SIMULATION METHODS
A. Electromagnetic simulation
Electromagnetic simulations are carried out with the finite element software JMAG. The magnetic flux density and the magnetic forces can be computed directly, as shown in Figure 5. However, in order to better understand the influence of the stator deformations on the magnetic forces, the analytical formulas of the magnetic pressures are derived in this section. Radial and tangential magnetic pressures, P_r and P_t, are computed with the following formulas:
P_r = \frac{B_r^2 - B_t^2}{2\mu_0} \qquad (1)
P_t = \frac{B_r B_t}{\mu_0} \qquad (2)
where µ 0 is the magnetic permeability of vacuum and B r and B t are the radial and tangential components of the magnetic flux density.
An analytical expression of the radial magnetic flux density (B r ) can be derived from the magnetomotive force (mmf) and permeance (λ) functions such as:
B_r = mmf \times \lambda \qquad (3)
The magnetomotive force (mmf) and the permeance (λ) functions of a claw-pole alternator must be adapted to take the rotor shape into account [START_REF] Tan-Kim | A hybrid electromagnetic model for acoustic optimization of claw-pole alternators[END_REF]. These equations are detailed below. First, the permeance function is written as:
\lambda = \Lambda_0 + \lambda_s + \lambda_r \qquad (4)
where Λ 0 is a constant value which accounts for the airgap width and λ s and λ r are the permeance functions of the stator and the rotor respectively. λ s is expressed as:
\lambda_s = \sum_{k_s=1}^{\infty} \Lambda_{k_s} \cos(k_s Z_s \alpha) \qquad (5)
with Z s the number of stator teeth and α the angular position.
The rotor permeance function λ r changes depending on the rotor position as shown in Figure 6. Hence, the expression of λ r includes two parameters α 1 and α 2 which depend on the axial position.
\lambda_r = \Lambda_{r0} + \sum_{k_r=1}^{\infty} \frac{2\Lambda_r}{\pi k_r} \cos\left(p k_r (\alpha - \theta)\right) \times \left[\sin(p k_r \alpha_1) - \sin(p k_r \alpha_2)\right] \qquad (6)
with p the number of pole pairs and:
\Lambda_{r0} = \frac{\alpha_1 + (\pi/p - \alpha_2)}{\pi/p} \qquad (7)
Figure 6: Magnetomotive force mmf r and permeance λ r of a claw-pole rotor
Magnetomotive force functions are expressed as follow with mmf s and mmf r the stator and rotor magnetomotive force functions:
mmf = mmf_s + mmf_r \qquad (8)
The stator magnetomotive force function is expressed as:
mmf_s = \sum_{k_s = 6k \pm 1}^{\infty} MMF_{s,k_s} \cos(p\theta \pm k_s p \alpha) \qquad (9)
MMF_{s,k_s} = \frac{3\sqrt{2}\, N_s k_{w,k_s} I_{s,k_s}}{\pi p k_s} \qquad (10)
with θ the angular position of the rotor, N_s the number of stator turns, and I_{s,k_s} and k_{w,k_s} the RMS stator phase current and the winding factor for harmonic k_s, respectively.
Again, the rotor magnetomotive force function changes depending on the rotor position similarly to Equation 6:
mmf_r = MMF_{r0} + \sum_{h_r=1}^{\infty} \frac{2\, MMF_r}{\pi h_r} \cos\left(p h_r (\alpha - \theta)\right) \times \left[\sin(p h_r \alpha_1) + \sin(p h_r \alpha_2)\right] \qquad (11)
with MMF_{r0} and MMF_r defined as:
MMF_{r0} = \frac{\alpha_1 - (\pi/p - \alpha_2)}{\pi/p} \qquad (12)
MMF_r = N_r I_r \qquad (13)
N r and I r the number of rotor turns and the rotor coil current respectively.
The stator deformations depicted in Figure 4 have an influence on the stator permeance function which must be rewritten as:
\lambda_s = \sum_{k_s=1}^{\infty} \left[\Lambda^{Z_s}_{k_s} \cos(k_s Z_s \alpha) + \Lambda^{welds}_{k_s} \cos(6 k_s \alpha)\right] + \Lambda^{oval} \cos(2\alpha) \qquad (14)
with \Lambda^{welds}_{k_s} and \Lambda^{oval} depending on the amplitudes of the stator deformations. Figure 7 shows the permeance function of a deformed 36-slot stator.
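To make the effect of the additional permeance terms concrete, the short sketch below builds a simplified airgap flux density B_r = mmf × λ (fundamental mmf only, rotor frozen at θ = 0) for the nominal and the deformed stator permeance of Eq. (14), and compares the low spatial orders of the resulting radial pressure. All amplitudes are arbitrary placeholders chosen for illustration, not the actual machine parameters.

```python
import numpy as np

p, Zs = 6, 36                                   # pole pairs and stator slots
alpha = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
mu0 = 4e-7 * np.pi

mmf = np.cos(p * alpha)                          # fundamental mmf, rotor at theta = 0
lam_nominal = 1.0 + 0.20 * np.cos(Zs * alpha)    # slotting term of Eq. (5) only
lam_deformed = lam_nominal + 0.08 * np.cos(6 * alpha) + 0.03 * np.cos(2 * alpha)  # welds + oval terms of Eq. (14)

for name, lam in (("nominal", lam_nominal), ("deformed", lam_deformed)):
    b_r = mmf * lam                              # Eq. (3)
    p_r = b_r**2 / (2.0 * mu0)                   # Eq. (1), tangential field neglected
    spectrum = np.abs(np.fft.rfft(p_r)) / alpha.size
    print(name, {k: float(round(spectrum[k], 1)) for k in (2, 6, 12, 24)})
```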
The deformations of the stator shown in Figure 4 only modify the stator permeance λ s compared to the nominal case. Since this function does not depend on θ which is a time variable but only on α which is a space variable, the deformations of the stator do not introduce new "temporal orders" in the magnetic forces but only new "spatial orders". As a result, the force and acoustic spectra contain the same temporal orders for all configurations (i.e. with or without deformations) but differences in amplitudes are expected. The deformations of the stator are taken into account in the finite element model thanks to a deformation of the mesh. This method, called "morphing", consists in changing the coordinates of the nodes of a mesh. As a consequence, the mesh is only modified locally by moving the nodes close to the studied deformations as shown in Figure 8. This method avoids the introduction of possible numerical errors due to a remeshing of the complete geometry.
B. Modal analysis
The modal analysis solves the following equation:
[K - \omega^2 M]\{x\} = 0 \qquad (15)
with ω the angular frequency, {x} the displacement vector, and [M] and [K] respectively the mass and stiffness matrices of the structure. The results of the modal analysis are the mode shapes and natural frequencies of the structure.
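Numerically, Eq. (15) is a generalized eigenvalue problem in the stiffness and mass matrices. The sketch below shows the corresponding extraction of natural frequencies and mode shapes; the matrices are generated randomly here purely for illustration, whereas in practice they come from the finite element assembly of the alternator structure.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 50                                    # illustrative model size
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)               # symmetric positive definite "stiffness"
M = B @ B.T + n * np.eye(n)               # symmetric positive definite "mass"

# [K - w^2 M]{x} = 0  <=>  K x = (w^2) M x : generalized eigenvalue problem
omega_sq, mode_shapes = eigh(K, M)
natural_frequencies_hz = np.sqrt(omega_sq) / (2.0 * np.pi)
print(natural_frequencies_hz[:5])         # lowest five natural frequencies
```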
In order to carry out this modal analysis, a model of the structure is needed. The mechanical model of a claw-pole alternator has been detailed in [START_REF] Tan-Kim | Vibro-acoustic simulation and optimization of a claw-pole alternator[END_REF]. This model accounts for the laminated structure of the stator stack thanks to an updated Young's modulus which is lower than for steel, meaning that the stack is less rigid than a homogeneous block of steel. The heterogeneous composition of the windings, made up with copper wires and varnish, is also accounted for with an equivalent Young's modulus assigned to a solid inside each slot. The contacts between the stator and the brackets also have to be computed. Only contacts near the screws are considered, where pressure between the stator and the brackets is maximum. The rotor is taken into account in the model and linked to the brackets with springs, modeling the stiffness of the bearings. The final model was checked against measurements of physical parts to ensure a good correlation as in [START_REF] Tan-Kim | Vibro-acoustic simulation and optimization of a claw-pole alternator[END_REF]. Figure 9 shows an example of modal shape of the alternator obtained with ANSYS. Figure 9: Simulated alternator mode shape [START_REF] Tan-Kim | Vibro-acoustic simulation and optimization of a claw-pole alternator[END_REF]. Left: complete geometry. Right: shape of the stator outer diameter (blue) compared to its nominal shape (black)
The stator deformations due to the manufacturing process are large in comparison with the airgap width: they reach up to 40% of the airgap width. Therefore, they may have a significant influence on the magnetic forces. However, we assume that these deformations are too small to influence the mechanical dynamic behavior of the alternator. Thus, the mechanical mesh of the alternator used for the modal analysis is not deformed.
C. Vibro-acoustic simulation
Electromagnetic simulation and modal analysis require different mesh densities. Therefore, the magnetic forces must be transferred from the electromagnetic mesh to the structural mesh. This first step of the vibro-acoustic simulation, carried out in LMS Virtual.Lab, is called "mapping". Since the electromagnetic mesh takes the stator deformations into account and the mechanical mesh does not, the geometries of the two meshes are slightly different. However, since the deformations are much smaller than the element size of the mechanical mesh, the mapping of the forces is not affected by this difference.
Vibrations are then computed with a modal superposition method considering a 2% damping coefficient for all modes, which is a typical value for claw-pole alternators [START_REF] Tan-Kim | Contribution à l'étude du bruit acoustique d'origine magnétique en vue de la conception optimale de machines synchrones à griffes pour application automobile[END_REF]. Next, sound pressures are computed based on the vibrations. The sound power of mode m could be analytically calculated for a simple geometry using the following formula [START_REF] Gieras | Noise of polyphase electric motors[END_REF]:
W_m = \frac{1}{2} \rho c S \sigma_m v_m^2 \qquad (16)
with ρ the air density, c the speed of sound in the air, S the outer area of the stator, σ m the radiation factor of the stator for mode m and v m the radial velocity.
Nonetheless, simple analytical formulas cannot be found to compute the radiation factor of parts with complex geometries such as the brackets of the alternator (see Figure 1). Hence, for claw-pole alternators, the sound power level is computed numerically with FEM.
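For reference, a direct evaluation of Eq. (16) for a simple geometry would look as follows; the radiation factor is set to one and the numerical values are arbitrary, since for the actual alternator the radiation is computed numerically as stated above.

```python
import math

def sound_power_level_db(v_rms, surface, sigma=1.0, rho=1.2, c=343.0, w_ref=1e-12):
    """Sound power level (dB re 1 pW) of one mode, following Eq. (16)."""
    w = 0.5 * rho * c * surface * sigma * v_rms**2
    return 10.0 * math.log10(w / w_ref)

# Arbitrary example: 1 mm/s RMS radial velocity over a 0.05 m^2 radiating surface
print(f"{sound_power_level_db(1e-3, 0.05):.1f} dB")
```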
IV. SIMULATION RESULTS AND ANALYSIS
A three-phase alternator with six pole pairs and a 36-slot stator, similar to the one used in [START_REF] Tan-Kim | Influence of temperature on the vibro-acoustic behavior of clawpole alternators[END_REF], is studied in this section. The four stator configurations shown in Figure 4 (nominal, retracted teeth, oval, and oval with retracted teeth) are simulated. The simulations are carried out at 4000 rpm with an excitation current of 4 A, and the alternator is connected to a rectifier bridge and a battery (i.e. load condition). The maximum values of the imperfections measured by Liu [START_REF] Liu | Prise en compte des incertitudes dimensionnelles introduites par les procédés de fabrication dans les modèles numériques de machines électriques[END_REF] for similar alternators are used in the simulations. The deformations reach 40% and 15% of the airgap width for the retracted teeth and the oval stator respectively.
The stator deformations have an influence on the stator permeance, as shown by Equation (14). As a consequence, the magnetic flux density is modified, as well as the subsequent magnetic forces and acoustic noise. In the following sections, the influence of the deformations on the average torque, the output current, the magnetic forces and the sound power level (SPL) is analyzed.
A. Average torque and output current
Stator deformations change the average airgap width and influence "global" characteristics such as the average torque and output current. On the one hand, the oval shape increases and decreases the airgap width depending on the considered angular position. However, the average airgap width is the same as the nominal stator and only small changes of the average torque and output current are expected. On the other hand, the retracted teeth always increase the airgap width. Thus, the average airgap width is larger than for the nominal case and the average torque and the output current are expected to be lower. These results are confirmed by simulations as shown in Table I. In the end, stator deformations slightly decrease the global performances of the alternator.
Table I: Relative difference in average torque and output current compared with the nominal stator (for the nominal stator, torque is -4.5 Nm and output current is 112 A)
Configuration              Average torque   Output current
Retracted teeth                -2.2%            -1.6%
Oval                           -0.3%            -0.3%
Oval + retracted teeth         -0.8%            -0.7%
B. Magnetic forces
In order to investigate the influence of the deformations on the magnetic forces, the sum of the forces on the stator is considered. Since tangential forces have a significant influence on the acoustic noise of claw-pole alternators, as shown in [START_REF] Tan-Kim | A hybrid electromagnetic model for acoustic optimization of claw-pole alternators[END_REF], the forces are separated into their radial and tangential components. Instead of summing the tangential forces on the stator, the rotor torque is considered, as it is the image of the sum of the tangential magnetic forces applied on the stator. The sums of radial and tangential forces do not account for the spatial distribution of these magnetic forces, which has an important influence on the acoustic noise. However, they give a first insight into the evolution of the excitation forces with the stator deformations.
Figure 10 shows the spectrum of the sum of the radial forces applied on the stator and the spectrum of the electromagnetic torque for the four simulated configurations. The temporal order 0 (i.e. null frequency) is hidden as it does not contribute to the noise generation.
The results show that the retracted teeth lead to a significant increase in amplitude of the orders multiple of 12 except for orders 36 and 72. The oval stator has relatively low influence on the magnetic forces. The results of the stator with both deformations, oval and retracted teeth, also show an increase in amplitudes of orders multiple of 12 but to a smaller extent compared with the retracted teeth alone. The ovalization seems to mitigate the effect of the retracted teeth on the sum of the radial forces and the torque.
Figure 10: Spectra of the sum of radial forces acting on the stator and torque for different stators at 4000 rpm
As shown previously in [START_REF] Zhu | Influence of design parameters on cogging torque in permanent magnet machines[END_REF], [START_REF] Zhu | Analytical methods for minimizing cogging torque in permanent-magnet machines[END_REF] and [START_REF] Hanselman | Effect of skew, pole count and slot count on brushless motor radial force, cogging torque and back EMF[END_REF], the fundamental order of the cogging torque and the radial forces on the stator is N_c, the least common multiple (lcm) of the number of poles 2p and the number of stator teeth Z_s:
N_c = lcm(2p, Z_s) \qquad (17)
In the same manner, we can introduce N w , the fundamental order of the forces, radial and tangential, due to the retracted teeth, defined as the least common multiple (lcm) of the number of poles 2p and the number of welds or retracted teeth Z w :
N_w = lcm(2p, Z_w) \qquad (18)
In this case, 2p = 12 and Z w = 6 therefore N w = 12 and the influenced orders are multiples of 12 as shown in Figure 10.
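Both fundamental orders follow directly from Eqs. (17) and (18); the short check below (Python 3.9+ for math.lcm) reproduces the values quoted for the studied machine.

```python
from math import lcm

p, Zs, Zw = 6, 36, 6          # pole pairs, stator teeth, welds / retracted teeth
Nc = lcm(2 * p, Zs)           # fundamental order of cogging torque and radial forces
Nw = lcm(2 * p, Zw)           # fundamental order of the forces due to the retracted teeth
print(Nc, Nw)                 # 36 12
```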
C. Sound power level
Figure 11 shows the sound power level (SPL) for all simulated configurations as well as the measurement. The influence of the stator deformations on the SPL is essentially the same as for the magnetic forces: the influence of the ovalization is relatively small, whereas the influence of the retracted teeth is significant on orders multiple of N_w = 12. With the retracted teeth, the average increase in SPL for orders multiple of 12 reaches about 16 dBA compared to the nominal case.
The strong influence of the retracted teeth on the 54th order is not explained by the radial force and torque spectra. However, the difference in dB between the SPL of the nominal case and that of the retracted-teeth case represents the relative difference between these two levels; as a consequence, the difference may be large on a dB scale even if both values are small.
The simulated SPL with the retracted teeth and the oval shape is closer to the measurement than the simulated SPL with the nominal stator: the average difference between measurement and simulation over all the orders is 8 dB with the final simulation (i.e. retracted teeth and oval shape), whereas the average difference with the nominal simulation is 16.6 dB.
The remaining discrepancies between the measurement and the simulation may be explained by other imperfections which were not taken into account. For instance, the 6 th order is not influenced by any of the simulated imperfections and is largely underestimated.
For claw-pole alternators, the SPL of the main order, in this case the 36th order, is generally much higher than the SPL of the other orders [START_REF] Tan-Kim | Contribution à l'étude du bruit acoustique d'origine magnétique en vue de la conception optimale de machines synchrones à griffes pour application automobile[END_REF]. Consequently, the stator deformations do not influence the overall sound power level here. However, if one of the influenced orders, for example the 12th or 24th order, crosses a natural frequency while the level of the 36th order is low, the overall SPL would be affected.
V. CONCLUSION
This paper has investigated the influence of stator deformations on the acoustic noise of claw-pole alternators. First, the origins of the stator deformations have been linked to the manufacturing process. Two deformations have been studied: an oval stator shape due to the assembly of the stator in two brackets, and retracted teeth at six locations due to the six welds on the stator outer diameter. The equations of the modified stator permeance were then derived to show how they affect the magnetic flux density and the magnetic forces. Simulation results show that the oval stator has little influence on the sound power level of the alternator. However, the retracted teeth have a significant influence on orders multiple of N_w, the least common multiple of the number of poles and the number of retracted teeth or welds. This influence is observed on the magnetic forces as well as on the sound power level, which is increased by 16 dBA on average. Furthermore, the simulations with the stator imperfections are much closer to the measurements compared to the simulations with the nominal stator.
Figure 11: Sound power level spectrum of a claw-pole alternator at 4000 rpm: simulations with different stators and measurement
In the studied case, the influence of the stator deformations on the overall sound power level (SPL) was weak. However, if the level of the influenced orders exceeds the main acoustic order, the overall SPL will also be affected. This could be the case for other electrical machines.
Figure 1: Exploded view of an alternator [START_REF] Tan-Kim | Vibro-acoustic simulation and optimization of a claw-pole alternator[END_REF]
Figure 2: Stator stack with welds circled in red and stator-brackets assembly of a claw-pole alternator (screws in red)
Figure 3: Stator and brackets assembly deformation and stator deformation due to the tightening of the screws
Figure 5: Magnetic flux density and magnetic forces computed with JMAG
Figure 7: Permeance function of the oval stator with retracted teeth
Figure 8: Mesh modification or "morphing" of a stator tooth
S. Clénet is with L2EP, Arts et Métiers ParisTech, Centre de Lille, 8 boulevard Louis XIV -59046 Lille Cedex, France.
T. Coorevits is with MSMP, Arts et Métiers ParisTech, Centre de Lille, 8 boulevard Louis XIV -59046 Lille Cedex, France. | 26,121 | [
"171470"
] | [
"218959",
"301320",
"267205",
"13338",
"211915",
"218959",
"218959",
"218959"
] |
01769073 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01769073/file/L2EP_2018_TMAG_CLENET.pdf | Iterative Kriging-based Methods for Expensive Black-Box Models
Siyang Deng, Reda El Bechari, Stéphane Brisset, and Stéphane Clénet Univ. Lille, Centrale Lille, Arts et Metiers ParisTech, HEI, EA 2697 -L2EP -Laboratoire d'Electrotechnique et d'Electronique de puissance, F -59000 Lille, France
Reliability-Based Design Optimization (RBDO) in electromagnetic field problems requires the calculation of probability of failure leading to a huge computational cost in the case of expensive models. Three different RBDO approaches using kriging surrogate model are proposed to overcome this difficulty by introducing an approximation of the objective function and constraints. These methods use different infill sampling criteria (ISC) to add samples in the process of optimization or/and in the reliability analysis. Several enrichment criteria and strategies are compared in terms of number of evaluations and accuracy of the solution.
Index Terms-Infill sampling criteria, kriging model, reliability analysis, reliability-based design optimization.
I. INTRODUCTION
RELIABILITY-BASED DESIGN OPTIMIZATION (RBDO) approaches can be divided into Double-Loop (DLM), Single-Loop (SLM) and Sequential Decoupled Methods (SDM). They have emerged in the past few decades and become more and more popular in electromagnetics owing to their ability to account for uncertain parameters. However, for expensive black-box models, the computational burden can become unbearable.
To overcome this issue, iterative kriging surrogate models have been proposed to reduce the number of evaluations [START_REF] Lee | A sampling technique enhancing accuracy and efficiency of metamodel-based RBDO: Constraint boundary sampling[END_REF]- [START_REF] Lee | Sampling-based RBDO using the stochastic sensitivity analysis and Dynamic Kriging method[END_REF]. An Infill Sampling Criterion (ISC) was used with the aim of improving the quality of the surrogate model and searching for the solution of the optimization problem. However, in these works the meta-model is established before starting the RBDO, and no enrichment is made during either the optimization or the reliability analysis.
With the purpose of enhancing the efficiency, different strategies, including the choice of the ISC and the positioning of sample enrichments in the optimization process, are investigated in this paper for each of the aforementioned types of RBDO approaches, so that the reliabilities are also analyzed with meta-models. A mathematical example is used to compare with classic RBDO, i.e. without a kriging model, and to highlight the most effective strategy. Then, RBDO of a transformer modelled by a time-consuming model based on the Finite Element method is performed with the most effective strategy.
II. INFILL SAMPLING CRITERIA
Iterative surrogate-based optimization methods start with a small set of initial sampling points to create a preliminary meta-model. Then, the infill sampling criteria are considered as new objective functions to add points to the sample set and update the meta-model until the predicted error is less than a chosen tolerance. A great advantage of this approach is that it enhances the accuracy of the meta-model and searches for the probabilistic optimum simultaneously, with a small number of samples.
Expected Improvement (EI) criterion [START_REF] Jones | Efficient global optimization of expensive black-box functions[END_REF] is widely used for surrogate-based optimizations without constraints.
EI_f = \begin{cases} (f_{min} - \hat{f})\,\Phi(z) + \hat{s}_f\,\phi(z) & \text{if } \hat{s}_f > 0 \\ 0 & \text{if } \hat{s}_f = 0 \end{cases} \qquad (1)
where f_{min} is the best current sampled objective function value, \hat{f} and \hat{s}_f are the predicted value and the mean square error (MSE), \phi(\cdot) and \Phi(\cdot) denote the probability density function and the cumulative distribution function of the standard normal distribution respectively, and z = (f_{min} - \hat{f})/\hat{s}_f. However, as EI is multimodal, more attention should be paid to the infill criterion to be sure to find the global solution. The Weighted EI (WEI) criterion [START_REF] Xiao | Adaptive weighted expected improvement with rewards approach in Kriging assisted electromagnetic design[END_REF] seems more suitable as it adds weights to the EI expression to balance exploration (right part) and intensification (left part).
WEI_f = \begin{cases} \omega\,(f_{min} - \hat{f})\,\Phi(z) + (1 - \omega)\,\hat{s}_f\,\phi(z) & \text{if } \hat{s}_f > 0 \\ 0 & \text{if } \hat{s}_f = 0 \end{cases} \qquad (2)
Choosing a small weight ω prevents WEI from converging to a local minimum if the initial sampling is inside the security domain. This condition is quite difficult to satisfy for many devices, as their security domains may be small and sometimes discontinuous. To avoid this issue, a Modified WEI (MWEI), combined with the surrogate objective function, is proposed:
MWEI_f = WEI_f - \omega\,\hat{f} \qquad (3)
Investigations on the same example as in [START_REF] Xiao | Adaptive weighted expected improvement with rewards approach in Kriging assisted electromagnetic design[END_REF] show that a weight equal to 0.1 leads to the global optimum with fewer iterations.
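A minimal implementation of the three criteria (1)-(3) is sketched below; f_min is the best sampled objective value, while f_hat and s_hat stand for the kriging prediction and the associated standard error at a candidate point (interpreting the ŝ_f of the equations as the square root of the MSE), and the default weight of 0.1 mirrors the value retained above.

```python
from scipy.stats import norm

def expected_improvement(f_min, f_hat, s_hat):
    """EI criterion, Eq. (1)."""
    if s_hat <= 0.0:
        return 0.0
    z = (f_min - f_hat) / s_hat
    return (f_min - f_hat) * norm.cdf(z) + s_hat * norm.pdf(z)

def weighted_ei(f_min, f_hat, s_hat, w=0.1):
    """WEI criterion, Eq. (2): w balances intensification against exploration."""
    if s_hat <= 0.0:
        return 0.0
    z = (f_min - f_hat) / s_hat
    return w * (f_min - f_hat) * norm.cdf(z) + (1.0 - w) * s_hat * norm.pdf(z)

def modified_wei(f_min, f_hat, s_hat, w=0.1):
    """MWEI criterion, Eq. (3): WEI penalised by the predicted objective."""
    return weighted_ei(f_min, f_hat, s_hat, w) - w * f_hat
```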
For constrained problems, an extended method consists in multiplying the value of EI by the probability of feasibility (PF) [START_REF] Schonlau | Computer Experiments and Global Optimization[END_REF]. However, PF may prevent the sampling on the constraint boundary where the deterministic optimum may lie. Another constraint handling method is the Expected Violation (EV) method [START_REF] Audet | A surrogate-model-based method for constrained optimization[END_REF] but the number of candidate points to evaluate can be very large. An alternative method is to use the predicted value of the constraint functions 𝑔 ̂ directly as constraints in the infill sub-problem [START_REF] Sasena | Flexibility and efficiency enhancements for constrained global design optimization with kriging approximations[END_REF].
III. INFILL STRATEGIES FOR RBDO METHODS
RBDO is a combination of deterministic constrained optimizations and reliability analysis. For the first one, the design variables 𝑑 are the mean value of random variables 𝑋 and the standard deviations 𝜎 are constant. For reliability analysis, 𝑑 and 𝜎 are constants and the design variable 𝑥 is a realization of 𝑋. 𝛽 𝑡 is the given target reliability index.
A. Double-Loop Method
DLMs such as the Performance Measure Approach (PMA) [START_REF] Tu | A new study on reliability-based design optimization[END_REF] have a nested structure: the outer loop seeks the optimum and the inner loop searches for the Most Performance Target Point (MPTP) that maximizes the constraint subject to a given reliability index.
There are two places where an ISC can be introduced to improve the accuracy of the kriging model: the outer loop and the inner loop. For the inner loop, the EI of g is used to find the MPTP by solving the optimization problem in Eqs. (4)-(5), as the constraint on the reliability index is an explicit function of the design variables:
x^* = \arg\max_x EI_g(x) \quad \text{s.t.} \quad \|(x - d)/\sigma\| = \beta_t \qquad (4)
EI_g = \begin{cases} (\hat{g} - g_{max})\,\Phi(z_g) + \hat{s}_g\,\phi(z_g) & \text{if } \hat{s}_g > 0 \\ 0 & \text{if } \hat{s}_g = 0 \end{cases} \qquad (5)
where z_g = (\hat{g} - g_{max})/\hat{s}_g, \hat{s}_g is the MSE of the constraint, and g_{max} is the maximum sampled constraint value.
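In practice, the inner-loop problem (4)-(5) is a constrained maximization over the kriging model of the constraint. The sketch below illustrates it for a two-variable case by brute force over the β-sphere; g_model is a placeholder for the kriging predictor, assumed to return the prediction and its standard error at a point.

```python
import numpy as np
from scipy.stats import norm

def ei_g(g_hat, s_hat, g_max):
    """EI on the constraint, Eq. (5)."""
    if s_hat <= 0.0:
        return 0.0
    z = (g_hat - g_max) / s_hat
    return (g_hat - g_max) * norm.cdf(z) + s_hat * norm.pdf(z)

def next_mptp_candidate(g_model, d, sigma, beta_t, g_max, n_angles=360):
    """Maximize EI_g on the circle ||(x - d)/sigma|| = beta_t (2D sketch of Eq. (4))."""
    best_x, best_val = None, -np.inf
    for a in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        x = d + beta_t * sigma * np.array([np.cos(a), np.sin(a)])
        g_hat, s_hat = g_model(x)               # kriging prediction and standard error
        val = ei_g(g_hat, s_hat, g_max)
        if val > best_val:
            best_x, best_val = x, val
    return best_x
```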
For the outer loop, the MWEI criterion is preferred to avoid local solutions, and the implicit inequality constraints are computed by the inner loop. However, as the two loops are nested, the enrichment in the inner loop may lead to thousands of model evaluations. To test this, two strategies are proposed: the first one (PMA1) adds new samples only in the outer loop, whereas the second (PMA2) enriches in both loops.
B. Single-Loop Method
For SLM like Single Loop Approach (SLA) [START_REF] Liang | A single-loop method for reliability-based design optimization[END_REF], the main point is that the inner loop optimization is replaced by an approximation based on a first order Taylor expansion to avoid the numerous evaluations required to find the MPTP.
It is important to note that, due to this approximation, the method itself already loses some precision. Therefore, it is expected that with a surrogate model the two approximation errors will add up and the accuracy will be further reduced.
C. Sequential Decoupled Method
SDM like Sequential Optimization and Reliability Assessment (SORA) [START_REF] Du | Sequential optimization and reliability assessment method for efficient probabilistic design[END_REF] are based on a series of sequential deterministic optimizations and reliability assessments. The main point is to shift the boundaries of constraints inside the feasible domain based on the reliability information obtained in the former iteration. The first optimization aims at searching the global deterministic optimum. Reliability assessment is then conducted to locate the MPTP corresponding to the target reliability index. Finally, new optimizations are carried out by taking into account the shift 𝑡 computed with MPTP.
Three strategies are proposed. In the first one (SORA1), the reliability analysis is the same as in the inner loop of PMA and enrichment is made with EI criterion. For the deterministic optimizations, the constraints are computed with the metamodel 𝑔 ̂(𝑑 -𝑡) and the MWEI is preferred in order to find the global solution.
The second strategy (SORA2) differs from the first one by the fact that enrichment of the kriging models with MWEI criterion is made at first iteration only. For all other iterations, the deterministic optimization is made with the meta-models 𝑓 ̂ and 𝑔 ̂.
For the third strategy (SORA3), if the deterministic optimum d_k found in the k-th cycle is close to an optimum of one of the other k-1 cycles, the former reliability assessments have already added points in this region, so the accuracy is considered sufficient and there is no need to add more samples. The proximity criterion defined in (6) is checked before entering the reliability analysis:
\|(d_k - d_i)/\sigma\| < \beta_t, \quad i = 1, \ldots, k-1 \qquad (6)
where d_i is the deterministic optimum found in the i-th cycle. If (6) is satisfied, the meta-model of the constraints is used directly and only the MPTPs are evaluated. For the deterministic optimizations, the same strategy as in SORA2 is used. The flowchart of SORA3 is shown in Fig. 1, where x_k is the MPTP of the k-th cycle.
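The overall flow of SORA3 can be summarized as follows; every function called in this sketch (deterministic optimization on the meta-model, reliability analysis with or without enrichment) is a placeholder for the corresponding step described above, not an actual library routine.

```python
import numpy as np

def sora3(optimize_on_metamodel, reliability_with_enrichment, reliability_on_metamodel,
          sigma, beta_t, max_cycles=20, tol=1e-3):
    """Sketch of SORA3: boundary shift t and proximity check of Eq. (6)."""
    t = 0.0                                    # constraint shift (a vector in general)
    previous_optima, d_prev = [], None
    for k in range(max_cycles):
        d_k = optimize_on_metamodel(t)         # deterministic optimum of cycle k
        close = any(np.linalg.norm((d_k - d_i) / sigma) < beta_t
                    for d_i in previous_optima)        # proximity criterion (6)
        if close:
            mptp = reliability_on_metamodel(d_k)       # reuse the meta-model, no new samples
        else:
            mptp = reliability_with_enrichment(d_k)    # enrich the kriging model (EI)
        t = d_k - mptp                          # shift for the next deterministic cycle
        if d_prev is not None and np.linalg.norm(d_k - d_prev) < tol:
            return d_k
        previous_optima.append(d_k)
        d_prev = d_k
    return d_prev
```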
IV. COMPARISON OF STRATEGIES
To assess the efficiency of the kriging-based RBDO methods, the mathematical example in [START_REF] Kim | A Single-Loop Strategy for Efficient Reliability-Based Electromagnetic Design Optimization[END_REF] with two variables and three constraints is analyzed. Note that the random variables are Gaussian and their standard deviations are all equal to 0.3. The lower and upper bounds are 0 and 10 respectively for both variables. The target reliability index β_t is chosen equal to 2, so that the target probability of failure is P_t = Φ(-β_t) = 2.28%. The results are given in Table 1 with an initial sampling of 20 points. The probability of failure P_f is calculated by Monte-Carlo Simulation (MCS) with 10^6 samples. For comparison purposes, the results given by the classic RBDO methods, i.e. without kriging, are also presented.
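The Monte-Carlo check of the probability of failure reduces, for Gaussian design variables, to sampling around the candidate design and counting constraint violations on the meta-model. A minimal version is sketched below; g_hat is a placeholder for the (vectorized) kriging predictor of a constraint, and the convention that positive values denote failure is an assumption made only for this sketch.

```python
import numpy as np

def probability_of_failure(g_hat, d, sigma, n_samples=10**6, seed=0):
    """Monte-Carlo estimate of P[g(X) > 0] with X ~ N(d, diag(sigma^2))."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=d, scale=sigma, size=(n_samples, len(d)))
    return float(np.mean(g_hat(x) > 0.0))
```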
All the iterative kriging-based RBDO methods lead to a reduced number of evaluations. SLA with kriging has the minimum number of evaluations but is not accurate enough, as the maximum probability of failure is much greater than P_t due to the approximation used to simplify the reliability analysis. As expected, PMA with infill during the inner loops requires thousands of evaluations. The other PMA strategy is faster but its accuracy is not sufficient. The kriging-based SORA strategies lead to the best result, and the third one, SORA3, is the most efficient. Fig. 2 shows two iterations of SORA3.
As SORA3 seems to be the most efficient meta-model strategy on this mathematical example, it is tested on the RBDO of a transformer with FEM.
V. ELECTROMAGNETIC DEVICE
The electromagnetic device is a single-phase safety isolating transformer with grain-oriented E-I laminations, designed for installation in an electric cabinet [START_REF] Tran | A Benchmark for Multi-Objective, Multi-Level and Combinatorial Optimizations of a Safety Isolating Transformer[END_REF]. The primary and secondary windings are wound around the frame surrounding the central core (Fig. 3, left).
A. Finite Element Models
Thermal and magnetic phenomena are modeled using 3D FEA on one eighth of the transformer, owing to symmetries. There are about 43,000 nodes and 290,000 edges in the model. The right part of Fig. 3 shows the mesh in the magnetic circuit, the insulation, the air gap and the frame, as well as the opposing directions of the currents in the primary and secondary windings that create flux in the gap between the coils (leakage flux).
For the electromagnetic modeling, all magnetic and electric quantities are assumed sinusoidal. Full-load and no-load simulations are used to compute all the characteristics. The iron losses are computed with the Steinmetz formula and the leakage inductances are calculated with the magnetic co-energy. The core magnetic nonlinearity is taken into account.
In the thermal modeling, some assumptions are considered: the insulator between the core and the coils is in perfect contact with both parts; there is no thermal contact between the exterior coil and the magnetic circuit; there is no thermal exchange with the air trapped between the coils and the iron; there is no convection on the upper and lower sides of the coil; there is no temperature gradient in the copper and the iron, and all surfaces have the same convection coefficient.
A magneto-thermal weak coupling is considered, and the computational time is about 10 minutes on a single core of an Intel Xeon CPU E5-2690 at 2.60 GHz. The copper and iron losses are computed with the magnetic AC solver and introduced as heat sources in the thermal static solver. The copper temperature is used to compute the coil resistances introduced in the magnetic solver, and this loop continues until the change in temperature is less than 0.1 °C. Both solvers use the same mesh and are included in the Opera3D software.
B. Analytical Model
In order to motivate the need for a time-expensive 3D FEA model, an Analytical Model (AM) is also used, and the RBDO results obtained with both models are compared.
The physical phenomena within the transformer are electric, magnetic and thermal. The assumptions for the AM are a uniform distribution of the magnetic flux density in the iron core and no voltage drop due to the magnetizing current. The thermal assumptions are the same as for the 3D FEA, except that the temperatures are uniform within the coils and laminations.
The weakest points of AM are the assumption of uniform temperature in copper and iron, and the approximation of the leakage inductance values.
C. Optimization Problem
The optimization problem contains 7 design variables: three parameters (a, b, c) for the shape of the lamination, one for the frame (d), two for the sections of the conductors (S1, S2), and one for the number of primary turns (n1) (Fig. 3, left).
There are 7 inequality constraints in this problem. The copper and iron temperatures T_co and T_ir should be less than 120 °C and 100 °C respectively. The efficiency η should be greater than 80%. The relative magnetizing current I_μ/I_1 and the voltage drop ΔV_2/V_2 should be less than 10%. All these constraints are computed with the FEM or AM model. Finally, the filling factors of both coils, f_1 and f_2, should be lower than 0.5.
The goal is to minimize the mass m_tot of iron and copper materials. Thus, the optimization problem is expressed as: min m_tot(a, b, c, d, S_1, S_2, n_1) subject to the constraints above. For the RBDO, all constraints are considered with a target probability of failure equal to 0.13%, which corresponds to a reliability index of 3. The standard deviation of each design variable is equal to 1% of its lower bound.
D. Results
Table 2 shows the optimal values, the objective, the probabilities of failure calculated by MCS with 10^6 samples on the meta-model, and the number of evaluations. For the AM, SORA without a meta-model is also tested and the number of evaluations is greater than 10,000; it can thus be seen that SORA3 with the kriging meta-model (SORA3 + AM, center column in Table 2) finds a solution almost satisfying all constraints with far fewer evaluations. However, when the same solution is re-evaluated with FEM (FEM reeval., right column in Table 2), the highest probability of failure is 90%, so RBDO cannot be performed with the AM only. The mass computed with FEM is slightly different from the one with the AM because the voltage drop is considered to calculate the number of turns of the secondary coil.
SORA3 with FEM (SORA3 + FEM, left column in Table 2) leads to a probability of failure close to its target value. The objective value is higher with FEM because the AM underestimates the constraints. The initial sampling includes 7,000 points evaluated in parallel on 24 cores in about 49 hours, then the 265 infill sampling points are evaluated sequentially in about 44 hours. The first advantage of SORA3 with FEM is that a significant computing time can be saved, as it reduces the number of evaluations. The second advantage is that the kriging model gives accurate derivatives that enable the use of a fast gradient-based algorithm. Conversely, as FEM provides noisy derivatives, it requires a costly noise-free algorithm when the optimizer is directly connected to it.
VI. CONCLUSION
According to the mathematical example, the third strategy of kriging-based SORA is the most efficient, without losing too much accuracy, among the 6 approaches proposed.
RBDO of a single-phase safety isolating transformer is also performed with FEM, and the kriging-based SORA shows its applicability to this highly constrained problem by reducing the number of evaluations. Moreover, compared with the analytical model of the same device, the approach with FEM obtains a more accurate solution.
Fig. 1. The process of the SORA3 strategy.
Fig. 2. Iterations of SORA3 for the mathematical example (green points are initial sampling, pink points are enrichment samples during deterministic optimizations, yellow ones are added by reliability analysis and blue ones are MPTPs at the current iteration; dashed lines and contours present the real constraints and objectives respectively, while solid ones present the meta-models).
Fig. 3. Design variables of the transformer and mesh.
TABLE 1: RESULTS OF MATHEMATICAL EXAMPLE USING DIFFERENT STRATEGIES
Strategy Number of evaluations Optimal solution Optimal value Maximal 𝑃 𝑓 (%)
SLA (exact model) 165 [2.2512; 1.9677] -1.9953 2.32
PMA/SORA (exact model) 3183/531 [2.2513; 1.9691] -1.9945 2.27
SLA 26 [2.2466; 1.9617] -1.9996 2.59
PMA1 29 [2.2494; 1.9649] -1.9972 2.44
PMA2 1804 [2.2513; 1.9691] -1.9945 2.27
SORA1/2/3 142/97/45 [2.2513; 1.9691] -1.9945 2.27
TABLE 2: RESULTS OF TRANSFORMER OPTIMIZATION WITH META-MODEL
Values SORA3 + FEM SORA3 + AM FEM reeval.
𝑎 12.902 13.153
𝑏 46.042 51.039
𝑐 18.183 16.532
𝑑 42.318 43.098
𝑛 1 659.06 641.75
𝑆 1 0.3254 0.3216
𝑆 2 2.7552 2.8956
𝑚 𝑡𝑜𝑡 2.4028 2.3552 2.3520
𝑃(𝑇 𝑐𝑜 > 120℃) 0% 0% 0%
𝑃(𝑇 𝑖𝑟 > 100℃) 0.1506% 0.1567% 90.05%
𝑃(∆𝑉 2 /𝑉 2 > 0.1) 0% 0% 0%
𝑃(𝐼 μ /𝐼 1 > 0.1) 0.1348% 0.1420% 0.3281%
𝑃(𝑓 1 > 0.5) 0.1327% 0.1236% 71.20%
𝑃(𝑓 2 > 0.5) 0.1282% 0.1307% 0.0014%
𝑃(𝜂 < 0.8) 0% 0% 0%
Evaluations 7265 7242 1 | 19,261 | [
"791252",
"169677"
] | [
"13338",
"13338",
"13338",
"13338"
] |
01769083 | en | [
"sde"
] | 2024/03/05 22:32:16 | 2005 | https://hal.science/hal-01769083/file/Larchevequeatal2005-ApplSoilEcol-HAL.pdf | M Larchevêque
Virginie Baldy
email: [email protected]
Nathalie Korboulewsky
Elena Ormeño
Catherine Fernandez
Compost effect on bacterial and fungal colonization of kermes oak leaf litter in a terrestrial Mediterranean ecosystem
Keywords: Mediterranean ecosystem, Sewage sludge compost, Leaf litter, Ergosterol, Bacterial numbers, Microbial biomass, Quercus coccifera L
published or not. The documents may come L'archive ouverte pluridisciplinaire
Introduction
Soils under Mediterranean climate are undergoing degradation due to water erosion and recurrent fires, which affect their fertility [START_REF] De Luis | Climatic trends, disturbances and short-term vegetation dynamics in a Mediterranean shrubland[END_REF].
Nitrogen, which is often a limiting plant nutrient in soil, is easily lost by volatilisation during wildfires. [START_REF] Guerrero | Reclamation of a burned forest soil with municipal waste compost: macronutrient dynamics and improved vegetation cover recovery[END_REF] pointed out that organic matter addition is a suitable technique for accelerating the natural recovery process of burned soils. The spreading of biosolids, defined as the nutrient rich product of wastewater treatment, can improve the low fertility of soils and constitutes an alternative to landfill disposal. Biosolids are a source of organic matter and plant nutrients [START_REF] Brockway | Forest floor, soil, and vegetation responses to sludge fertilization in red and white pine plantations[END_REF][START_REF] Martinez | Biowaste effects on soil and native plants in semiarid ecosystem[END_REF] and can improve soil physical, chemical and biological properties [START_REF] Mckay | GB experience in the last decade: from research to practice[END_REF][START_REF] Caravaca | Aggregate stability changes after organic amendment and mycorrhizal inoculation in the afforestation of a semiarid site with Pinus halepensis[END_REF]. But biosolids present potential environmental risks. Their use can induce heavy metal and organic contaminants accumulation in soils [START_REF] Brockway | Forest floor, soil, and vegetation responses to sludge fertilization in red and white pine plantations[END_REF], as well as the discharge of nutrients, especially N and P, to surface and ground waters [START_REF] Martinez | Biowaste effects on soil and native plants in semiarid ecosystem[END_REF]. To decrease risks of heavy metal and salt leaching, the organic matter of biosolids can be stabilized by composting [START_REF] Garcia | The influence of composting and maturation processes on the heavy-metal extractability from some organic wastes[END_REF][START_REF] Planquart | Distribution, movement and plant availability of trace metals in soils amended with sewage sludge composts: application to low metal loadings[END_REF]. In addition, mixing biosolids and other organic wastes with large C/N ratios (such as green wastes) can reduce the rate of nitrogen leaching [START_REF] Mckay | GB experience in the last decade: from research to practice[END_REF]. Therefore, compost presents better agronomic potential than biosolids.
Compost amendment has been frequently shown to increase soil fertility [START_REF] Caravaca | Aggregate stability changes after organic amendment and mycorrhizal inoculation in the afforestation of a semiarid site with Pinus halepensis[END_REF][START_REF] Martinez | Biowaste effects on soil and native plants in semiarid ecosystem[END_REF], plant biomass [START_REF] Guerrero | Reclamation of a burned forest soil with municipal waste compost: macronutrient dynamics and improved vegetation cover recovery[END_REF] and nutrition [START_REF] Moreno | Transference of heavy metals from a calcareous soil amended with sewage-sludge compost to barley plants[END_REF]. As a consequence, it could lead to a high nutrient content litter and enhance litter breakdown. Litter breakdown is the principal pathway of nutrients' return to soil in a form available to plants [START_REF] Kavvadias | Litterfall, litter accumulation and litter decomposition rates in four forest ecosystems in northern Greece[END_REF]. This aspect is especially important on Mediterranean nutrient-poor soils where plant communities rely, to a great extent, on the recycling of litter nutrients [START_REF] Can ˜ellas | Litter fall and nutrient turnover in Kermes oak (Quercus coccifera L.) shrublands in Valencia (eastern Spain)[END_REF]. Factors contributing to the litter breakdown are: soil fertility, litter quality and supply and climatic conditions [START_REF] Kavvadias | Litterfall, litter accumulation and litter decomposition rates in four forest ecosystems in northern Greece[END_REF] The litter breakdown process involves three types of organisms: invertebrates, fungi and bacteria. The crucial role of microorganisms is clearly established, and consecutive changes in fungal and bacterial biomass dynamics are a useful way to investigate the impact of factors controlling leaf breakdown [START_REF] Gessner | Importance of stream microfungi in controlling breakdown rates of leaf litter[END_REF][START_REF] Isidorov | Volatile organic compounds from leaves litter[END_REF]. Moreover, the soil microbial biomass is often regarded as an early indicator of changes which may occur in the long term with regard to soil fertility. Likewise, [START_REF] Wardle | Response of soil microbial biomass dynamics, activity and plant litter decomposition to agricultural intensification over a seven-year period[END_REF] showed that microbial biomass responds to addition of fertilizers and of organic residues.
Most studies deal with total microbial biomass (fumigation-incubation, fumigation-extraction, substrate induced respiration and ATP methods; [START_REF] Martens | Current methods for measuring microbial biomass C in soil: potential and limitations[END_REF], i.e. bulked fungi and bacteria [START_REF] Borken | Application of compost in spruce forest: effects on soil respiration, basal respiration and microbial biomass[END_REF][START_REF] Khan | Effects of metal (CD, Cu, Ni Pb or Zn) enrichment of sewage sludge on soil microorganisms and their activities[END_REF][START_REF] Kunito | Copper and Zinc fractions affecting microorganisms in long-term sludge-amended soils[END_REF].
However, the respective roles of bacteria and fungi in the litter breakdown process are different. Fungi are able to decompose and assimilate such refractory compounds as lignin or tannins [START_REF] Criquet | La litière de chêne vert (Quercus ilex L.). Aspects méthodologiques, enzymologiques et microbiologiques. Etude préliminaire des relations in situ entre microorganismes, enzymes et facteurs environnementaux[END_REF], whereas bacteria are not thought to assume notable importance before the leaf material has been partially broken down and decomposed by fungi [START_REF] Jensen | Decomposition of angiosperm tree leaf litter[END_REF]. It is therefore of great interest to study their reactions to compost amendment separately.
In this study, bacterial and fungal biomass were determined on kermes oak (Quercus coccifera L.) leaf litter on a Mediterranean burnt area amended with sewage sludge and greenwaste compost. Kermes oak is one of the most important shrub species in the Mediterranean basin, where it covers more than 2 Mha and accounts generally for 60-70% of the total litter [START_REF] Can ˜ellas | Litter fall and nutrient turnover in Kermes oak (Quercus coccifera L.) shrublands in Valencia (eastern Spain)[END_REF].
Our objectives were to (i) determine the effects of compost amendment on kermes oak leaf litter colonization by bacteria and fungi, (ii) offset the drastic Mediterranean climatic conditions (e.g. drought) against the potential improvement of soil fertility by compost, (iii) provide comprehensive data on leaf litter breakdown in terrestrial Mediterranean ecosystems by separately quantifying fungal and bacterial biomass.
Material and methods
Study site and experimental design
The experiment was carried out over 6000 m² on the plateau of Arbois (Southern Provence, France; 5°18′6″E-43°29′10″N in the WGS-84 geodetic system) at 240 m above sea level and under Mediterranean climatic conditions (Fig. 1). The soil was a silty-clayey chalky rendzina, with a high percentage of stones (77%) and a low average depth (24 cm). The last fire occurred in June 1995 and the site was colonised by typical Mediterranean sclerophyllous vegetation, with 70% total cover; Q. coccifera L. and Brachypodium retusum Pers. were the two dominant species. This natural vegetation belongs to the holm oak (Quercus ilex L.) succession series.
Compost was surface-applied in January 2002. The experimental design was a complete randomised block of twelve plots of 500 m². Four plots did not receive any compost (D0 = control), four received 50 t ha⁻¹ (D50), and four 100 t ha⁻¹ (D100). The compost was produced by a local company (Biotechna, Ensuès, Southern Provence) and is certified to conform to the French norm on composts made with materials of water-treatment origin (NF U 44-095, 2002). This compost was made with greenwastes (1/3 volume), pine barks (1/3 volume), and local municipal sewage sludge (1/3 volume). The mixture was composted for 30 days (windrows submitted to forced ventilation, Beltsville method) at 75 °C to kill pathogenic microorganisms and decompose phytotoxic substances, then sieved (<20-mm mesh) to remove large bark pieces and stored in windrows. The windrows were mixed several times over the next 6 months to promote organic matter humification. The final compost met the French legal standards for pathogenic microorganisms, organic trace elements and heavy metals (order of 08/01/1998 on sewage sludge use in agriculture). Soil and compost characteristics are shown in Table 1.
Field procedures
The humiferous episolum was sampled superficially nine times from June 2002 to October 2003, and sieved to 2 mm. The fraction >2 mm was classed as the litter fraction (O-h >2 ). Chemical analyses were performed on this coarse mixed litter. Each analysed sample was a mix of three samples randomly collected on each plot. Kermes oak entire leaf litter, on which leaf litter breakdown is the most efficient [START_REF] Toutain | Les litières: siège de systèmes interactifs et moteurs de ces interactions[END_REF], was separated from the O-h >2 samples to determine microbial biomass. Green leaves were also collected on three bushes per plot to analyse N and P concentrations, at maximum Q. coccifera litterfall in late May (2002 and 2003) (Cañellas and San Miguel, 1998).
Laboratory procedures
Fungal colonization of litter was estimated using the ergosterol concentration, taking advantage of recent improvements of this method. Ergosterol is a fungal indicator that offers an efficient measure of living fungal biomass [START_REF] Gessner | Extraction and quantification of ergosterol as a measure of fungal biomass in leaf litter[END_REF][START_REF] Gessner | Use of solid phase extraction to determine ergosterol concentrations in plant tissue colonized by fungi[END_REF][START_REF] Cortet | Increasing species and trophic diversity of mesofaune affects fungal biomass, mesofaune structure community and organic matter decomposition processes[END_REF]. Fifty milligrams of leaf litter were roughly crushed and lyophilized. Ergosterol was extracted from leaf litter by 30 min refluxing in an alcohol base [START_REF] Gessner | Extraction and quantification of ergosterol as a measure of fungal biomass in leaf litter[END_REF] and purified by solid-phase extraction [START_REF] Gessner | Use of solid phase extraction to determine ergosterol concentrations in plant tissue colonized by fungi[END_REF]. Final purification and quantification of ergosterol were achieved by high performance liquid chromatography (HPLC; HP series 1050 chromatograph). The system was run with HPLC-grade methanol at a flow rate of 1.5 ml min⁻¹. Ergosterol eluted after 9 min and was detected at 282 nm; peak identity was checked on the basis of retention times of commercial ergosterol (Fluka; >98% purity).
Bacterial colonization was determined by counting bacterial numbers following the general protocol of [START_REF] Porter | The use of DAPI for identifying and counting aquatic microflora[END_REF] modified by [START_REF] Schallenberg | Solutions to problems in enumerating sediment bacteria by direct counts[END_REF]. Samples of fresh litter were stored in 2% formalin before analysis. Bacteria were detached from entire leaves by 2 min probe sonication, according to [START_REF] Buesing | Comparison of detachment procedures for direct counts of bacteria associated with sediment particles, plant litter and epiphytic biofilms[END_REF]. Then, 4′,6-diamidino-2-phenylindole (DAPI) at a concentration of 5 mg l⁻¹ [START_REF] Baldy | Bacteria, fungi and the breakdown of leaf litter in a large river[END_REF] was added for DNA staining. Finally, numbers of bacteria were counted by epifluorescence microscopy.
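The scaling of such direct counts up to numbers of bacteria per gram of litter dry mass is a routine calculation; a minimal sketch is given below, assuming a filtration-based protocol in which a known volume of the sonicated suspension is filtered and a fixed number of microscope fields is counted. All numerical parameters in the example are illustrative placeholders, not values taken from this study.

```python
# Illustrative conversion of DAPI direct counts to bacteria per gram of dry litter.
# Every parameter value below is an example; the actual protocol parameters are
# not reported in this section.

def bacteria_per_g_dm(mean_count_per_field, filter_area_mm2, field_area_mm2,
                      volume_filtered_ml, suspension_volume_ml, litter_dry_mass_g):
    """Scale the mean count per microscope field up to the whole filter,
    then to the whole suspension, and normalise by litter dry mass."""
    per_filter = mean_count_per_field * (filter_area_mm2 / field_area_mm2)
    per_ml = per_filter / volume_filtered_ml
    total = per_ml * suspension_volume_ml
    return total / litter_dry_mass_g

# Example: 45 cells per field, 201 mm2 effective filter area, 0.01 mm2 counting
# field, 1 ml filtered out of a 20 ml suspension, 0.5 g litter dry mass.
print(f"{bacteria_per_g_dm(45, 201, 0.01, 1, 20, 0.5):.2e} bacteria g-1 DM")
```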
Soil, compost (<4 mm) and O-h >2 chemistry (total N, total and exchangeable P, total Cu and Zn) was determined using standard procedures [START_REF] Afnor | Qualité des sols[END_REF]. Exchangeable P was measured by the Olsen method [START_REF] Olsen | Estimation of available phosphorus in soils by extraction with sodium bicarbonate[END_REF]. Humidity of Q. coccifera leaf litter was measured by oven-drying samples at 60 °C for 3 days.
Green leaves of Q. coccifera were washed with demineralised water, oven-dried at 40 °C and crushed (2 mm mesh) in a trace-metal-free grinder (FOSS TECATOR Sample Mill 1093 Cyclotec). Foliar N concentration was determined according to [START_REF] Masson | Simultaneous analysis of nitrogen, potassium, calcium and magnesium in digested plant samples by ion chromatography[END_REF], modified. Samples (250 mg) were digested in H2SO4 and H2O2 at 400 °C for 3 h (Bioblock Scientific Digestion Unit 10401). The solutions were then diluted 500 times, filtered at 0.45 µm and analysed by ion chromatography (Dionex DX120). The eluent used was 26 mmol l⁻¹ methanesulfonic acid. Foliar P concentration was measured by atomic absorption spectrometry (VARIAN VISTA Radial) after digestion in aqua regia.
Statistical analyses
Two-way ANOVAs combined with Tukey tests [START_REF] Zar | Biostatistical Analysis[END_REF] were used to compare the different parameters (ergosterol, bacterial numbers, litter and green-leaf chemistry) according to the two studied factors (sampling date and compost rate). Beforehand, normality and homoscedasticity were verified by Shapiro-Wilk and Bartlett tests, respectively [START_REF] Zar | Biostatistical Analysis[END_REF]. Data were ln-transformed when test assumptions were not met (i.e. bacterial numbers). To relate microbial biomass to the chemical characteristics of leaf litter, linear regressions were performed. The significance level was set at 95%. The software Statgraphics Plus (version 2.1, Statistical Graphics Corporation, 1994-1996) and Minitab (release 13 for Windows, 2000, Minitab Inc., USA) were used.
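As an illustration of this analysis chain, the sketch below reproduces the main steps (normality and homoscedasticity checks, two-way ANOVA, Tukey test, correlation) with the SciPy and statsmodels libraries rather than the software cited above; the data frame and its column names are hypothetical placeholders, not the actual data set.

```python
# Minimal sketch of the statistical workflow on a hypothetical data frame `df`
# with columns 'date', 'rate', 'ergosterol', 'humidity'; not the original scripts.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "date": np.repeat([f"d{i}" for i in range(9)], 12),          # 9 sampling dates
    "rate": np.tile(np.repeat(["D0", "D50", "D100"], 4), 9),      # 3 rates x 4 plots
    "ergosterol": rng.normal(190, 30, 108),
    "humidity": rng.uniform(5, 60, 108),
})

# Test assumptions; ln-transform the variable if they are not met.
print("Shapiro-Wilk p:", stats.shapiro(df["ergosterol"]).pvalue)
print("Bartlett p:", stats.bartlett(*[g["ergosterol"] for _, g in df.groupby("rate")]).pvalue)

# Two-way ANOVA (sampling date and compost rate) followed by a Tukey test on rate.
model = ols("ergosterol ~ C(date) * C(rate)", data=df).fit()
print(anova_lm(model, typ=2))
print(pairwise_tukeyhsd(df["ergosterol"], df["rate"], alpha=0.05))

# Linear relation between a microbial descriptor and a litter characteristic.
print("Pearson:", stats.pearsonr(df["ergosterol"], df["humidity"]))
```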
Results
Q. coccifera green leaves at maximal litterfall and coarse mixed litter (O-h >2 )
N and P concentrations in green leaves increased significantly after compost amendment (Table 2, Fig. 2). There was no difference between the two compost rates for P, whereas N concentration in green leaves was higher in D100 than D50 for both years (2002,2003). Moreover, N concentration in green leaves increased significantly from May 2002 to May 2003, respectively, 5 months and 1.5 years after compost amendment (Table 2, Fig. 2). P concentration in green leaves decreased significantly from 2002 to 2003 (Table 2, Fig. 2).
Compost amendment significantly increased the following physical and chemical parameters of O-h >2 (Table 2): O-h >2 humidity (Fig. 3), exchangeable P (Fig. 3), total N (0.92, 1.58, and 1.67% DM for D0, D50 and D100, respectively, means of the nine sampling dates), total Cu (8.75, 75.4 and 69.8 ppm DM, means of the 2 sampling years), and total Zn (50.9, 137.6 and 139.4 ppm DM, means of the 2 sampling years). However, there was no significant difference between the two compost rates (D50 and D100) for these parameters.
Sampling year also had a significant effect on O-h >2 . Total N and exchangeable P concentrations in O-h >2 had dynamics similar to those of N and P in green leaves from 2002 to 2003 (Table 2). Moreover, N and P concentrations in O-h >2 were positively correlated with the concentrations in green leaves (R = 0.45, p = 0.026 and R = 0.64, p = 0.004, respectively, for N and P). Total N in O-h >2 increased from 1.3% DM in 2002 (mean of the three treatments) to 1.5% DM in 2003, whereas exchangeable P decreased from 859 ppm DM in 2002 (mean of the three treatments) to 309 ppm DM in 2003 (Fig. 3). Total P in O-h >2 remained similar (Table 2) from one year (2002, 0.08% DM) to the next (2003, 0.09% DM).
Humidity of O-h >2 was lowest during the summer 2003 (Fig. 3). Spring, summer, and autumn 2003 were exceptionally dry (Fig. 1), and as a consequence, October 2003 values were found to be similar to summer 2002 values.
Microbial biomass of kermes oak leaf litter
Compost amendment had a significant effect on leaf litter colonization by fungi, whereas it did not affect bacterial numbers (Fig. 4, Table 3). Compost at the highest rate decreased the overall ergosterol concentration in Q. coccifera leaf litter compared to the control (159 and 194 µg g⁻¹ DM in D100 and D0, respectively, means over the studied period). The ergosterol concentration for the D50 rate (189 µg g⁻¹ DM, mean over the studied period) was similar to the control. Moreover, ergosterol was negatively correlated to exchangeable phosphorus in O-h >2 . However, neither bacterial numbers nor ergosterol were correlated to total P, N, Cu or Zn concentrations (Table 4).
Both ergosterol concentration and bacterial number of leaf litter varied significantly according to sampling date within a year. The lowest values corresponded to summers (June and July, both years), when drought is maximum under Mediterranean climatic conditions (Fig. 1). In addition, both bacterial and fungal colonisation were positively correlated with O-h >2 humidity, particularly ergosterol (Table 4). As O-h >2 humidity (month n) is positively correlated (R = 0.76, p = 0.03) to precipitation (month n-1), the highest ergosterol values corresponded to the highest rainfall of the month before.
Discussion
Microbial biomass of kermes oak leaf litter
Compost amendment significantly decreased kermes oak leaf litter fungal colonization at the 100 t ha⁻¹ rate, whereas it had no effect on bacterial colonization. This shows the importance of studying fungi and bacteria separately in order to determine precisely the response of microbial communities after compost amendment. In our study, the two groups of microorganisms exhibited different reactions. The decrease in ergosterol concentrations found within D100 amended plots differs from numerous studies which have found enhancing effects of organic amendments on soil microbial biomass [START_REF] Albiach | Microbial biomass concentration and enzymatic activities after the application of organic amendments to a horticultural soil[END_REF][START_REF] Kunito | Copper and Zinc fractions affecting microorganisms in long-term sludge-amended soils[END_REF][START_REF] Ros | Soil microbial activity after restoration of a semiarid soil by organic amendments[END_REF]. However, the ergosterol concentrations observed on Q. coccifera leaf litter of D100 plots remained similar to those of the rare studies that have attempted to quantify separately bacterial and fungal colonization of Mediterranean litter or soil [START_REF] Barajas-Aceves | Effect of polluants on the ergosterol concentration as indicator of fungal biomass[END_REF][START_REF] Cortet | Increasing species and trophic diversity of mesofaune affects fungal biomass, mesofaune structure community and organic matter decomposition processes[END_REF].

Table 4. Matrix of Pearson correlation analysis between ergosterol, numbers of bacteria and O-h >2 parameters
                          Ergosterol        Bacterial numbers
O-h >2 humidity           R = 0.44 ***      R = 0.25 *
O-h >2 exchangeable P     R = -0.31 *       R = -0.13
O-h >2 total P            R = -0.21         R = -0.02
O-h >2 total N            R = -0.16         R = 0.04
O-h >2 total Cu           R = -0.39         R = -0.05
O-h >2 total Zn           R = -0.38         R = -0.001
* Significant (0.05 < p < 0.01). *** Highly significant (p < 0.001).
In contrast, litter humidity strongly promoted the microbial colonization of Q. coccifera leaf litter. Q. coccifera leaf litter moisture was positively correlated both to ergosterol concentrations and to bacterial numbers, and fungi were more affected than bacteria. Thus, microbial colonization was lowest during the dry months (June and July) in both years, and 2003 values were lower than 2002 values due to exceptional drought. This is in accordance with previous works, such as [START_REF] Papatheodorou | The effects of large-and small-scale differences in soil temperature and moisture on bacterial functionnal diversity and the community of bacterivorous nematodes[END_REF] who reported seasonal patterns for bacterial richness in soil. Likewise, [START_REF] Criquet | Annual variations of phenoloxidase activities in an evergreen oak litter: influence of certain biotic and abiotic factors[END_REF] and [START_REF] Barajas-Aceves | Effect of polluants on the ergosterol concentration as indicator of fungal biomass[END_REF] found that fungal biomass reaches maximum values under humid conditions in Mediterranean areas.
Q. coccifera green leaves at maximal litterfall and coarse mixed litter (O-h >2 )
The compost significantly increased the moisture of Q. coccifera leaf litter in amended plots. This result contrasts with the negative effect of the compost on fungal colonization. Thus, some changes induced by compost amendment may have depressed fungal colonization of kermes oak leaf litter strongly enough to cancel out the effect of increased moisture, even though drought is one of the major limiting factors in Mediterranean areas.
Total N concentration significantly increased in amended plots (D50 and D100) in kermes oak green leaves and in O-h >2 . These two parameters are positively correlated (R = 0.45), which suggests that the N increase in green leaves is involved in the O-h >2 total N enrichment. This species generally provides 60-70% of the total litter in garrigue ecosystems [START_REF] Can ˜ellas | Litter fall and nutrient turnover in Kermes oak (Quercus coccifera L.) shrublands in Valencia (eastern Spain)[END_REF]. As nitrogen is frequently a limiting nutrient in Mediterranean ecosystems [START_REF] Archibold | Mediterranean ecosystems[END_REF], an increase of N in the O-h >2 of amended plots should improve microbial biomass [START_REF] Berg | Fungal biomass and nitrogen in decomposing scots pine needle litter[END_REF]. Moreover, the N enrichment of Q. coccifera litter may decrease its C/N ratio, as well as its polyphenol concentration, leading to an easier breakdown [START_REF] Gosz | Biological factors influencing nutrient supply in forest soils[END_REF]. However, in our experiment, no significant positive correlation was found between microbial colonization of kermes oak litter and total N concentration in O-h >2 . In contrast to the literature, we observed the highest depletion of fungal biomass when total N in O-h >2 was the highest: 2 years after amendment (2003), in D100. Therefore, it is very likely that N accumulates in O-h >2 as a consequence of the fungal biomass decrease in D100 plots.
The reduction of fungal colonization at D100 could be related to another limiting factor in Mediterranean ecosystems [START_REF] Archibold | Mediterranean ecosystems[END_REF]: P is mostly unavailable, because it is associated with calcium in inorganic forms on calcareous soils [START_REF] Khanna | Soil characteristics influencing nutrient supply in forest soils[END_REF]. [START_REF] Thirukkumaran | Microbial activity, nutrient dynamics and litter decomposition in a Canadian Rocky Mountain pine forest as affected by N and P fertilizers[END_REF] reported that microbial variables were unaffected by N addition, whereas an increase of substrate-induced respiration (SIR) was found with P fertilization. In addition, [START_REF] Kwabiah | Response of soil microbial biomass dynamics to quality of plant materials with emphasis on P availability[END_REF] found that phosphorus is the most important quality factor affecting microbial biomass. However, this explanation is not confirmed by our results. Compost amendment led to a marked increase in exchangeable P in O-h >2 , as a result of an increase in P concentration in Q. coccifera green leaves (R = 0.64). Yet exchangeable P in O-h >2 was negatively correlated to fungal colonization (R = -0.31).
In our study, compost amendment induced significant increases in O-h >2 total copper and zinc concentrations. As heavy metals are known to affect growth, morphology and metabolism of microorganisms in soils [START_REF] Dai | Influence of heavy metals on C and N mineralisation and microbial biomass in Zn-, Pb-, Cu-, and Cdcontaminated soils[END_REF], Cu and Zn increases could have decreased the fungal colonization of leaf litter collected on plots amended with 100 t ha À1 . However, neither bacterial numbers, nor ergosterol content of leaf litter were correlated to Cu and Zn total concentrations. On the one hand, total heavy metal concentrations do not reflect exactly their biological effects and available concentrations in the soil have to be taken into account. On the other hand, Cu accumulates more in plant roots [START_REF] Van Den Driessche | Nutrient storage, retranslocation and relationship of stress to nutrition[END_REF] than in shoots. Thus, the Q. coccifera leaf litter may have been slightly contaminated by this element, explaining the absence of Cu effect on microbial colonization. However, Zn (as Cd, Hg and Ni) is a metal reported to be toxic to microorganisms at lower levels than other metals [START_REF] Kabata-Pendias | Trace Elements in Soils and Plants[END_REF]. Moreover, this element is considered to be readily soluble relative to the other heavy metals in soils [START_REF] Kabata-Pendias | Trace Elements in Soils and Plants[END_REF]. Thus, Zn in available form could have reduced fungal colonization of Q. coccifera leaf litter in D100. However, bacteria are known to be less resistant to heavy metals than fungi [START_REF] Dai | Influence of heavy metals on C and N mineralisation and microbial biomass in Zn-, Pb-, Cu-, and Cdcontaminated soils[END_REF] and are not affected by compost amendment in our study. Therefore, the hypothetical negative effect of Zn on fungal colonization of kermes oak litter must be put in perspective. Moreover, it is not easy to predict microbial consequences of soil pollution without determining microbial diversity changes under heavy metal contamination, according to species relative sensibility [START_REF] Dai | Influence of heavy metals on C and N mineralisation and microbial biomass in Zn-, Pb-, Cu-, and Cdcontaminated soils[END_REF].
Thus, fungal biomass was decreased at D100 rate despite an increase of humidity, N and P concentrations of O-h >2 after compost amendment. We hypothesize that amendment led to a redistribution of fungi from O-h >2 to O-h <2 . The compost used in this study contained 57% of sewage sludge (<2 mm fraction), which presents a higher decomposability (rich in low-molecular dissolved organic carbon and in salts, [START_REF] Agassi | Percolation and leachate composition in a disturbed soil layer mulched with sewage biosolids[END_REF] than O-h >2 (containing leaves, rich in inhibiting molecules as polyphenols, especially in the Mediterranean region, [START_REF] Gershenzon | Changes in the level of plant secondary metabolites production under water and nutrient stress[END_REF]. This would induce an accumulation of O-h >2 litter and could explain the total N increase that occurred in O-h >2 of D100 amended plots from 2002 to 2003. Likewise, [START_REF] Borken | Application of compost in spruce forest: effects on soil respiration, basal respiration and microbial biomass[END_REF] showed that compost could induce some redistribution of microbes from one fraction to another. They noted that after compost application, microbial biomass decreased in the organic horizons (O-h >2 and O-h <2 ), whereas it increased in the mineral soil, the latter being enriched by nutrients released from organic horizon.
Conclusion
In conclusion, compost at D100 induced a significant decrease in colonization of Q. coccifera leaf litter by fungi, especially the second year after amendment. Compost had no significant effect on litter colonization by bacteria. Furthermore, compost at D100 increased O-h >2 humidity, as well as N and P concentrations, which should have improved the decomposition of O-h >2 OM. Compost Zn enrichment could be responsible for the observed fungal biomass decrease, but it should have affected bacteria as well. Thus, we hypothesize a redistribution of fungi from refractory components of O-h >2 to easily decomposable O-h <2 . This could be responsible for leaf litter N accumulation in O-h >2 . Therefore, it would be of great interest to study the microbial colonization of O-h <2 and mineral soil.
Fig. 1. Mean air temperature and precipitation from June 2002 to May 2003 (Météo France).
Fig. 2. N and P foliar concentrations in green Quercus coccifera leaves at maximum litterfall (late May 2002 and 2003). Bars denote S.E. (N = 4). D0: control plots; D50: plots amended with 50 t ha⁻¹ of compost and D100: plots amended with 100 t ha⁻¹ of compost. Results of the comparison are given by a letter: values that do not differ at the 0.05 level are noted with the same letter (a < b < c).
Fig. 3. Exchangeable P concentration (in ppm of dry mass) and humidity (in % of dry mass) in the O-h >2 from June 2002 to October 2003. Bars denote S.E. (N = 4). D0: control plots; D50: plots amended with 50 t ha⁻¹ of compost and D100: plots amended with 100 t ha⁻¹ of compost.
Fig. 4. Dynamics of ergosterol concentrations (in µg of ergosterol per gram of dry mass) and bacterial numbers (per gram of dry mass) associated with leaf litter of Quercus coccifera decomposing in control plots (D0), plots amended with 50 t ha⁻¹ (D50) and plots amended with 100 t ha⁻¹ (D100) of compost. Bars denote S.E. (N = 4).
Table 1. Soil (0-24 cm: maximal depth; N = 12) and compost (N = 3) physico-chemical characteristics
Parameter; Soil: mean (S.E.), allowed French limit value before sewage sludge amendment; Compost: mean (S.E.), allowed French limit value (08/01/1998)
pH H2O 7.34 (0.008) 7.7 (0.05)
Humidity (% FM) 4.8 (0.29)
CEC (meq. 100 g⁻¹) 23.12 (0.31)
Total calcareous (% DM) 4.17 (0.13)
OM (% DM) 7.58 (0.12) 46.8 (2.74)
Total N (% DM) 0.36 (0.005) 2.03 (0.03)
C/N 12.42 (0.09) 13.4 (0.78)
Total P (% DM) 0.037 (0.001) 3.24 (0.03)
Exchangeable P (mg kg⁻¹ DM) 23.3 (0.35) 2514.8 (7.82)
Copper (mg kg⁻¹ DM) 19.8 (0.14) 100 144.1 (0.84) 1000
Zinc (mg kg⁻¹ DM) 78.2 (0.24) 300 265.0 (5.49) 3000
Cadmium (mg kg⁻¹ DM) 0.31 (0.002) 2 0.8 (0.0) 15
Chrome (mg kg⁻¹ DM) 67.3 (0.33) 150 27.1 (0.65) 1000
Mercury (mg kg⁻¹ DM) 0.06 (0.001) 1 0.86 (0.06) 10
Nickel (mg kg⁻¹ DM) 45.3 (0.17) 50 16.5 (0.23) 200
Lead (mg kg⁻¹ DM) 43.1 (0.26) 100 57.3 (2.53) 800
DM: dry mass; FM: fresh mass.
Table 2. Results of two-way ANOVA on N and P foliar concentrations in green leaves of Quercus coccifera at maximal litterfall (late May) and on O-h >2 parameters (only statistically significant results are reported)
Parameter Factor F p Tukey's test
Q. coccifera N concentration in green Year 4.88 0.0403 May02 a , May03 b
leaves at maximal litterfall Rate 23.55 <0.001 D0 a , D50 b , D100 c
Q.coccifera P concentration in green Year 8.66 0.0123 May02 a , May03 b
leaves at maximal litterfall Rate 4.13 0.0433 D0 a , D50 ab , D100 b
Q. coccifera litter humidity Date 145.43 <0.001 June02 bc , July02 d , Oct02 e , Dec02 f ,
Mar03 e , Apr03 d , June03 ab , July03 a , Oct03 c ,
Rate 10.8 <0.001 D0 a , D50 b , D100 b
O-h >2 exchangeable P Year 53.77 <0.001 2002 b , 2003 a
Rate 115.46 <0.001 D0 a , D50 b , D100 b
O-h >2 total P Year 0.54 0.464 2002 a , 2003 a
Rate 117.6 <0.001 D0 a , D50 b , D100 b
O-h >2 total N Year 30.06 <0.001 2002 a , 2003 b
Rate 202.2 <0.001 D0 a , D50 b , D100 b
O-h >2 total Cu Year 46.08 <0.001 Mar02 a , Mar03 b
Rate 58.52 <0.001 D0 a , D50 b , D100 b
O-h >2 total Zn Date 31.53 <0.001 Mar02 a , Mar03 b
Rate 20.69 <0.001 D0 a , D50 b , D100 b
Table 3. Results of two-way ANOVA on ergosterol concentration and bacterial numbers associated with kermes oak leaf litter
Parameter Factor F p Tukey's test
Ergosterol Date 10.55 <0.001 June02 bc , July02 b , Oct02 de , Dec02 f , Mar03 ef
Rate 5.54 0.0056 Apr03 bcd , June03 cde , July03 a , Oct03 de
Date rate 1.2 0.2845 D0 a , D50 a , D100 b
Number of bacteria Date 4.52 0.0002 June02 ab , July02 a , Oct02 bcd , Dec02 abc , Mar03 cd
Rate 0.9 0.4096 Apr03 d , June03 ab , July03 a , Oct03 ab
Date rate 0.98 0.4852 D0 a , D50 a , D100 a
Acknowledgements
This research was supported by the Conseil Ge ´ne ´ral des Bouches-du-Rho ˆne, the ADEME (Agence De l'Environnement et de la Maı ˆtrise de l'Energie) and the Agence de l'Eau Rho ˆne-Me ´diterrane ´e-Corse. We thanks the two reviewers for their interesting remarks, and Mr. Michael Paul for English improvement. | 34,559 | [
"18881",
"741356",
"170291",
"18874"
] | [
"834",
"834",
"834",
"834"
] |
01769127 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01769127/file/L2EP_2018_TPEL_MORAES.pdf | Member, IEEE Walter Lhomme
Philippe Delarue, Tiago José Dos Santos Moraes, Ngac Ky Nguyen, Eric Semail, Keyu Chen, Benedicte Silvestre
Integrated Traction/Charge/Air Compression Supply using 3-Phase Split-windings Motor for Electric Vehicle
Keywords: Battery charger, Multiphase drive, Air-compressor, Automotive, Electric vehicle
I. INTRODUCTION
W. Lhomme, P. Delarue, T. J. Dos Santos Moraes, N. K. Nguyen and E. Semail are with the Univ. Lille, Centrale Lille, Arts et Métiers Paris Tech, HEI, EA 2697 L2EP - Laboratoire d'Electrotechnique et d'Electronique de Puissance, F-59000 Lille, France (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]).
Increasing the pure electric driving range, decreasing the battery charge time and reducing vehicle costs are the three main challenges that the automotive industry has to face to develop Electric Vehicles (EVs). To meet these challenges, the key components are battery chargers, electrical machines and their power electronics. In most cases, the charger is installed in the vehicle, allowing the battery to be recharged anywhere a power outlet is available. However, in order to restrict the on-board charger size and weight, its power is limited. To make vehicle components more compact with higher power, many different integrated motor drive / battery charger solutions have been studied [START_REF] Haghbin | Grid-connected integrated battery chargers in vehicle applications: review and new solution[END_REF]- [START_REF] Kolli | Space-vector PWM control synthesis for an H-bridge drive in electric vehicles[END_REF].
These integrated motor drive / battery charger solutions can be classified by single-phase or 3-phase AC supply [START_REF] Haghbin | Gridconnected integrated battery chargers in vehicle applications: review and new solution[END_REF]- [START_REF] Khaligh | Comprehensive topological analysis of conductive and inductive charging solutions for plug-in electric vehicles[END_REF]. The integrated charger using single-phase AC supply is, however, generally limited to low power charge ability. For integrated charger using 3-phase AC supply, several solutions have been proposed. In [START_REF] Cocconi | Combined motor drive and battery recharge system[END_REF] Cocconi has presented an integrated motor drive / battery charger based induction or brushless DC motor with a set of relays. In [START_REF] Lacressonniere | Converter used as a battery charger and a motor speed controller in an industrial truck[END_REF] the authors have proposed an integrated motor drive / battery charger based on a wound-rotor induction motor. Recently a split-stator winding IPMSM has been proposed for an isolated high-power integrated charger [START_REF] Haghbin | An isolated high-power integrated charger in electrified-vehicle applications[END_REF]. Nevertheless, all of the above reference solutions generate motor torque during the charging and need a set of relays to reconfigure the motor between the traction and the charging modes. For safety reasons the generation of torque needs the use of a clutch or a mechanical rotor lock. To avoid rotating field into the motor a multiphase machine has to be used [START_REF] Levi | Advances in converter control and innovative exploitation of additional degrees of freedom for multiphase machines[END_REF]. Subotic et al. have proposed to connect the 3-phase AC supply to the neutral points of isolated 3-phase windings of an N-leg inverter. N is a multiple of 3 with at least 9 phases [START_REF] Subotic | On-board integrated battery charger for EVs using an asymmetrical nine-phase machine[END_REF]. Some solutions based on 6-phase drives have also been proposed. In [START_REF] Subotic | Isolated Chargers for EVs Incorporating Six-Phase Machines[END_REF] two topologies of an isolated charger are proposed according to the stator windings, asymmetrical or symmetrical, of the 6-phase induction machine. The case with an asymmetric 6-phase requires a transformer with dual secondary that creates a 6-phase voltage supply. The machine is supposed to have a sinusoidal magnetomotive force (MMF) and the torque is only created by the 1 st harmonic of currents: a sinusoidal rotating field in the αβ frame. In this work a phase transposition between the output of grid, with or without the transformer, and the 6-phase machine is required to impose a zero torque during the charging/vehicle to grid modes and a hardware reconfiguration is required to switch between the traction and the charging mode. In [START_REF] Diab | A Nine-Switch-Converter-Based Integrated Motor Drive and Battery Charger System for EVs Using Symmetrical Six-Phase Machines[END_REF], a topology for 3-phase charging and 6-phase traction modes has been proposed for a 6-phase symmetrical machine by using only a 9 switch inverter, instead of 12. During the charging mode, controlling the middle switch of the 9-switch inverter and an additional hardware reconfiguration, composed of 9 switches, leads the 6-phase symmetrical machine become a classical 3-phase machine.
According to Levi [START_REF] Levi | Advances in converter control and innovative exploitation of additional degrees of freedom for multiphase machines[END_REF], the most developed integrated motor drive / battery charger is based on the concept of [START_REF] De-Sousa | Combined electric device for powering and charging[END_REF]- [START_REF] Sousa | A combined multiphase electric drive and fast battery charger for electric vehicles[END_REF] with associated optimized controls using numerous degrees of freedom of the structure [START_REF] Bruyere | Rotary drive system, method for controlling an inverter and associated computer program[END_REF]- [START_REF] Sandulescu | Control strategies for open-end winding drives operating in the flux-weakening region[END_REF]. [START_REF] De-Sousa | Combined electric device for powering and charging[END_REF] and [START_REF] De-Sousa | Method and electric combined device for powering and charging with compensation means[END_REF] are the first patents of this concept to present the invention of the automotive supplier Valeo, which describes an integrated 3-phase split-winding electrical machine and on-board battery charger system without static relays. In [START_REF] Sousa | A combined multiphase electric drive and fast battery charger for electric vehicles[END_REF] De Sousa et al. presents, in terms of performances and efficiencies, the comparison of this integrated system with the classical solution using a 3-leg inverter with its 3-phase electrical machine and a battery charger. The concept uses a split-windings AC motor with just a 6-leg inverter instead of at least 9 in [START_REF] Subotic | On-board integrated battery charger for EVs using an asymmetrical nine-phase machine[END_REF]. The middle point of each phase windings is connected to the 3-phase supply to achieve charging mode. This technology ensures that, during charging mode, the rotor of the electrical machine does not vibrate or rotate, because the winding configuration allows decoupling magnetically the rotor and the stator of the electrical machine. No motor reconfiguration and no supplementary static relays are necessary. It may be noted that the traction mode has already been studied in the flux-weakening region [START_REF] Sandulescu | Control strategies for open-end winding drives operating in the flux-weakening region[END_REF] and in degraded mode [START_REF] Béthoux | Real-time optimal control of a 3-phase PMSM in 2-phase degraded mode[END_REF]. A space-vector pulse width modulation (SVPWM) of the 6-leg inverter has furthermore been proposed in [START_REF] Kolli | Space-vector PWM control synthesis for an H-bridge drive in electric vehicles[END_REF].
In this paper, based on the concept of [START_REF] De-Sousa | Combined electric device for powering and charging[END_REF]- [START_REF] Sousa | A combined multiphase electric drive and fast battery charger for electric vehicles[END_REF], a further advancement is proposed to not only combine in a unique system the traction and charging, but also to provide an auxiliary system supply in traction mode: the air-compressor supply of the air-conditioning. It has been demonstrated that several electrical machines could be connected in series with an appropriate connection using a single inverter, with which an independent control for each motor can be implemented [START_REF] Levi | Even-phase multi-motor vector controlled drive with single inverter supply and series connection of stator windings[END_REF]. The middle point of the split-windings of each phase can then be connected to another electrical machine, in this paper, an air compressor during the traction mode. All in all, the 6-leg inverter achieves the functions of propulsion, charging and air-conditioning. As the number of the inverter legs is reduced, the cost, volume and weight are potentially reduced.
The objective of this paper is to show, through experimental results, the feasibility of this integrated system that combines the traction, charge and air-compression supply modes. To the authors' knowledge, this is the first time that experimental results are presented for the three modes of this innovative system. To deal with the complexity of the multiphase system, the Energetic Macroscopic Representation (EMR) is used [START_REF] Bouscayrol | Graphic formalisms for the control of multi-physical energetic systems: COG and EMR[END_REF]- [START_REF] Martinez | Practical control structure and energy management of a testbed hybrid electric vehicle[END_REF]. A unified control scheme is deduced from this EMR to achieve the three operating modes.
Section 2 presents the electrical architecture of the vehicle with the combined electrical drive / battery charger / air-compression supply. The modeling, the description and the overall control of the system are dealt with in section 3. Finally, experimental results with discussions are presented in section 4.
II. ELECTRICAL ARCHITECTURE OF THE INTEGRATED 3-PHASE SPLIT-WINDINGS MOTOR OF THE VEHICLE
A. Studied system
The original concept in [START_REF] De-Sousa | Combined electric device for powering and charging[END_REF]- [START_REF] Sousa | A combined multiphase electric drive and fast battery charger for electric vehicles[END_REF] uses a common power electronic converter to propel the vehicle or charge the battery at a standstill. A 6-leg inverter supplies a 3-phase open-end winding motor with an accessible central point per phase. Six of the connection points (a, a', b, b', c and c' in Fig. 1) are connected to the inverter legs, in order to supply the three phases. The other three connection points (1, 2 and 3) are connected to the 3-phase AC system.
During the charging mode, the motor windings compose the charger system. As the cost and weight of charger windings are directly related to the charging power, this sharing of the motor windings results in a considerable gain of weight and cost compared with other proposed structures in which the charger has its own windings. Moreover, the windings of the system presented in this paper are sized to ensure the traction function of the electric motor, whose power is equivalent to that of a standard high-power charger: 22 kW. Consequently, no supplementary cost or volume is needed to ensure the traction and charging functions. A buck-boost converter is set between the battery and the 6-leg inverter, so that a constant DC bus voltage is ensured. The battery charger constraint of a sufficiently high DC bus voltage is thus satisfied whatever the depth of discharge of the battery. Compared to other charging systems, the proposed concept is an economical and compact on-board solution, compatible with any type of grid, whatever the direction of energy flow: battery charging mode or vehicle-to-grid mode.
The 6-leg inverter is used either for the traction mode or for the charging mode, never for both at the same time. It can be seen as 3 independent 2-leg inverters, each supplying one of the 3 phases of the electrical machine. A classical 3-leg inverter has 8 (= 2³) different states, whereas a 6-leg inverter presents 64 (= 2⁶) different states, which provides more degrees of freedom that can be used to optimize the modulation or to control the zero-sequence current inside the machine. More information regarding the fundamental principles of a 6-leg inverter can be found in [START_REF] Sandulescu | FPGA implementation of a general space vector approach on a 6-leg voltage source inverter[END_REF]. Levi et al. [START_REF] Levi | Even-phase multi-motor vector controlled drive with single inverter supply and series connection of stator windings[END_REF] have moreover demonstrated that it is possible to decouple the control of two series-connected multiphase machines, even though they are supplied by the same inverter. During the propulsion mode, the mid-points of the windings can then be used to supply another electrical machine; in this paper, this machine is the air compressor of the air conditioning. A switch element is required to switch between the charging and the air-compression modes. Both electrical machines used are Permanent Magnet Synchronous Machines (PMSMs).
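As a simple illustration of this extra richness, the sketch below enumerates the 2⁶ = 64 switching states of the 6-leg inverter and the corresponding leg voltages for a given DC-bus value. The voltage convention (leg outputs referenced to the negative DC rail) and the numerical DC-bus value are assumptions of this example, not data from the paper.

```python
# Enumerate the 64 switching states of the 6-leg inverter and the resulting
# leg voltages v_xO (legs a, a', b, b', c, c'), assuming each leg output is
# either 0 or Vbus with respect to the negative DC-bus rail.
from itertools import product

V_BUS = 400.0  # V, illustrative DC-bus voltage
LEGS = ("a", "a'", "b", "b'", "c", "c'")

states = list(product((0, 1), repeat=6))
print(len(states), "switching states")  # -> 64

# Leg voltages for one arbitrary state:
s = states[37]
v_leg = {leg: s_j * V_BUS for leg, s_j in zip(LEGS, s)}
print(s, v_leg)
```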
B. Operating modes
Depending on the state of the switching element, different operating modes can be achieved. Table I summarizes all the operating modes in terms of the currents at the mid-point of winding 1 and of the phase-to-phase voltages. In traction mode, the 3-phase open-end winding PMSM is used by controlling its currents. Since a source or a load can be connected to the mid-points of the 3 phases, the machine currents im of the traction mode are not directly given. In traction mode we have chosen, through the control, to set the machine current im as half of the difference of the two half-winding currents:
\[
\begin{cases} i_m = \tfrac{1}{2}\, T_1\, i_h \\ v_m = T_1\, v_{conv} \end{cases}
\quad\text{with}\quad
T_1 = \begin{bmatrix} -1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 1 \end{bmatrix}
\qquad (1)
\]
\[
\begin{cases} i_h = [\,i_{ha}\; i_{ha'}\; i_{hb}\; i_{hb'}\; i_{hc}\; i_{hc'}\,]^t \\ v_{conv} = [\,v_{aO}\; v_{a'O}\; v_{bO}\; v_{b'O}\; v_{cO}\; v_{c'O}\,]^t \end{cases}
\quad\text{and}\quad
\begin{cases} i_m = [\,i_{ma}\; i_{mb}\; i_{mc}\,]^t \\ v_m = [\,v_{ma}\; v_{mb}\; v_{mc}\,]^t \end{cases}
\]
When the system is in traction mode, the machine phase currents are equal in magnitude to the leg currents:
\[
i_{mX} = i_{hX'} = -\,i_{hX} \quad \text{with } X \in \{a, b, c\}.
\]
III. MODELING AND CONTROL: SYSTEMIC APPROACH
A. Modeling
Change of variables - The aim of the change of variables is to decompose the 3-phase open-end winding machine into two fictitious machines (Fig. 3). The first fictitious machine is a 3-phase 4-wire machine used to create the torque Tm in the traction mode. The second fictitious machine does not create any torque (Tl = 0); it is therefore equivalent to three inductors.
The first change of variables (1) allows calculating, from the six actual currents ih and voltages vconv, the three independent fictitious currents im and voltages vm of the fictitious 3-phase 4-wire machine. The second change of variables (2) gives the relationships at the mid-points of the windings:
\[
\begin{cases} i_{p123} = -\,T_2\, i_h \\ v_p = \tfrac{1}{2}\, T_2\, v_{conv} \end{cases}
\quad\text{with}\quad
T_2 = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{bmatrix}
\quad\text{and}\quad
\begin{cases} i_{p123} = [\,i_{p1}\; i_{p2}\; i_{p3}\,]^t \\ v_p = [\,v_{p1}\; v_{p2}\; v_{p3}\,]^t \end{cases}
\qquad (2)
\]
Table I. Summary of the operating modes (currents at the mid-point of winding 1 and phase-to-phase voltages)
Charging: i_cr1 = 0; i_ma = 0; i_g1 = i_p1 = -2 i_ha = -2 i_ha'; u_g12 = u_p12, u_g32 = u_p32; u_cr12 = 0, u_cr32 = 0
Traction: i_cr1 = i_p1 = 0; i_ma = i_ha' = -i_ha; i_g1 = 0; u_g12 = 0, u_g32 = 0; u_cr12 = u_p12, u_cr32 = u_p32
Air compression: i_cr1 = i_p1 = -2 i_ha = -2 i_ha'; i_ma = 0; i_g1 = 0; u_g12 = 0, u_g32 = 0; u_cr12 = u_p12, u_cr32 = u_p32
Traction + air compression: i_cr1 = i_p1 = -i_ha - i_ha'; i_ma = (i_ha' - i_ha)/2; i_g1 = 0; u_g12 = 0, u_g32 = 0; u_cr12 = u_p12, u_cr32 = u_p32

Due to the architecture, the open-end winding electrical machine is not electrically coupled. The three currents i_ma, i_mb and i_mc then have to be controlled to produce the torque Tm. Since the mid-point connection has three wires, only two control currents are sufficient for control purposes: i_p1 and i_p3. The relationship (2) then has to be rewritten with these currents and the phase-to-phase voltages u_p:
\[
\begin{cases} i_{p123} = T_3^{\,t}\, i_{p13} \\ u_p = T_3\, v_p \end{cases}
\quad\text{with}\quad
T_3 = \begin{bmatrix} 1 & -1 & 0 \\ 0 & -1 & 1 \end{bmatrix}
\quad\text{and}\quad
\begin{cases} i_{p13} = [\,i_{p1}\; i_{p3}\,]^t \\ u_p = [\,u_{p12}\; u_{p32}\,]^t \end{cases}
\qquad (3)
\]
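The decoupling summarized in Table I can be checked numerically. The sketch below builds T1, T2 and T3 as reconstructed above and verifies that equal half-winding currents (charging) give zero machine currents, while opposite half-winding currents (traction) inject nothing at the mid-points; the numerical current values are arbitrary examples.

```python
# Numerical check of the changes of variables (1)-(3) for the split-winding machine,
# using the matrices as reconstructed in this text (half-winding order a, a', b, b', c, c').
import numpy as np

T1 = np.array([[-1, 1, 0, 0, 0, 0],
               [0, 0, -1, 1, 0, 0],
               [0, 0, 0, 0, -1, 1]], dtype=float)
T2 = np.array([[1, 1, 0, 0, 0, 0],
               [0, 0, 1, 1, 0, 0],
               [0, 0, 0, 0, 1, 1]], dtype=float)
T3 = np.array([[1, -1, 0],
               [0, -1, 1]], dtype=float)

def machine_and_midpoint_currents(i_h):
    i_m = 0.5 * T1 @ i_h          # fictitious machine currents, relation (1)
    i_p123 = -T2 @ i_h            # mid-point currents, relation (2)
    return i_m, i_p123

# Charging mode: the two half-windings of each phase carry the same current.
i_h_charge = np.array([10.0, 10.0, -4.0, -4.0, -6.0, -6.0])
print(machine_and_midpoint_currents(i_h_charge))   # i_m = 0 -> no torque

# Traction mode: opposite currents in the two half-windings of each phase.
i_h_tract = np.array([-10.0, 10.0, 5.0, -5.0, 5.0, -5.0])
print(machine_and_midpoint_currents(i_h_tract))    # i_p123 = 0 -> nothing at the mid-points

# Only two mid-point currents are independent: i_p2 = -(i_p1 + i_p3), relation (3).
i_p13 = np.array([7.0, -3.0])
print(T3.T @ i_p13)
```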
Electrical windings - The design of the electrical machine leads to a strong magnetic coupling between the half-phases. Considering six identical windings and neglecting saliency effects, the inductance matrix LEM of the electrical machine, for the half-winding order (a, a', b, b', c, c'), can be written as:
\[
L_{EM} = \begin{bmatrix}
L+l_l & -L & M & -M & M & -M\\
-L & L+l_l & -M & M & -M & M\\
M & -M & L+l_l & -L & M & -M\\
-M & M & -L & L+l_l & -M & M\\
M & -M & M & -M & L+l_l & -L\\
-M & M & -M & M & -L & L+l_l
\end{bmatrix}
\qquad (4)
\]
In relation (4), L is the half-winding inductance, l_l represents the leakage inductance and the mutual inductance between half-windings of different phases is noted M. Applying the changes of variables (1) and (2) to (4) leads to (5) and (6) for the two fictitious machines:
\[
v_m - e_m = \frac{d}{dt}\big(\Lambda_m\, i_m\big) + R_m\, i_m \qquad (5)
\]
\[
u_S - u_p = \frac{d}{dt}\big(\Lambda_p\, i_{p13}\big) + R_p\, i_{p13} \qquad (6)
\]
where $\Lambda_m$ and $R_m$ are the equivalent inductance and resistance matrices seen by the fictitious machine currents (combinations of L, l_l and M), $\Lambda_p$ and $R_p$ are those seen by the mid-point currents, in which only the leakage inductance l_l appears, and $u_S$ denotes the phase-to-phase voltages of the source or load connected to the mid-points.
Relations (5) and (6) form a fifth-order system of differential equations. Nevertheless, the equations of the two fictitious machines are totally decoupled.
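Under the inductance matrix reconstructed in (4), this decoupling can be verified directly: projecting L_EM onto the machine mode and onto the mid-point mode shows that the former involves L and M while the latter is proportional to the leakage inductance only, with no cross-coupling. The sketch below assumes that reconstruction and arbitrary parameter values.

```python
# Check of the magnetic decoupling between the two fictitious machines, assuming
# the reconstructed matrix (4) and arbitrary values of L, l_l and M (in henries).
import numpy as np

L, l_l, M = 2e-3, 0.1e-3, -0.8e-3
blk_same = np.array([[L + l_l, -L], [-L, L + l_l]])   # two half-windings of one phase
blk_other = np.array([[M, -M], [-M, M]])              # coupling with another phase

L_EM = np.block([[blk_same if i == j else blk_other for j in range(3)] for i in range(3)])

# Machine (differential) mode and mid-point (common) mode projections.
T1 = np.array([[-1, 1, 0, 0, 0, 0], [0, 0, -1, 1, 0, 0], [0, 0, 0, 0, -1, 1]], float)
T2 = np.array([[1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0], [0, 0, 0, 0, 1, 1]], float)

L_machine = T1 @ L_EM @ T1.T     # involves L (diagonal) and M (off-diagonal) terms
L_midpoint = T2 @ L_EM @ T2.T    # proportional to the leakage inductance l_l only
print(np.round(L_machine, 6))
print(np.round(L_midpoint, 6))
print("cross-coupling:", np.round(T1 @ L_EM @ T2.T, 9))  # ~0: the two modes are decoupled
```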
Energy sources - The paper is focused on the PMSM control in the different operating modes: traction, charging and air compression. An electrical source connected to the DC bus is considered to represent the battery and the associated chopper. An equivalent mechanical load represents the traction subsystem: the torque Tm of the electrical machine acts on this load and the reaction is the speed Ωtract. The 6-leg converter is supplied by the DC bus with a voltage Vbus, the reaction being the current itot. The system is supplied by the grid voltages ug and reacts by circulating the currents ig.
6-leg power converter - The switch orders are mathematically represented by switching functions sij. These functions are equal to 0 when the switches are open and to 1 when they are closed. The DC bus voltage Vbus and the switching functions sij determine the converter voltages vconv. Likewise, the converter currents ih and the switching functions sij determine the DC bus current iconv and then itot:
\[
i_{tot} = s_{conv}^{\,t}\, i_h, \qquad v_{conv} = s_{conv}\, V_{bus}
\]
with $s_{conv}$ the vector of the six switching functions.
Electromechanical conversion -The electrical machine leads to the torque Tm and the back EMF em. The currents of the machine are noted im and the rotation speed Ωtract. The modeling of the machine can be found in [START_REF] Sandulescu | Control strategies for open-end winding drives operating in the flux-weakening region[END_REF].
B. Energetic Macroscopic Representation
EMR is a functional description of energetic systems for control purpose [START_REF] Bouscayrol | Graphic formalisms for the control of multi-physical energetic systems: COG and EMR[END_REF]- [START_REF] Martinez | Practical control structure and energy management of a testbed hybrid electric vehicle[END_REF]. The system is split into elementary subsystems in interaction. According to the action and reaction principle all subsystems are interconnected. The instantaneous power flow between two subsystems is the product of the action and reaction variables. In EMR, only the integral causality must be used: outputs are integral functions of inputs. This property is described with accumulation elements. Other elements are defined using equations without time dependence. The EMR of the studied architecture, without the air compressor, has been proposed in [START_REF] Lhomme | Control of a combined multiphase electric drive and battery charger for electric vehicle[END_REF] (upper part in Fig. 2). For better clarity all variables are defined as vectors. The DC bus and the grid are considered as electrical sources (green oval pictograms). The air compression and the traction subsystems are considered as mechanical sources. The inverter performs mono-domain conversions (orange square pictograms). The parallel connection couples each leg of the 6-leg inverter and the change of variables is represented by a mono-domain distribution element (overlapping squares). The inductors are accumulation elements (orange rectangle pictograms with diagonal line). The currents of the machine im and the mid-points of the windings ip are the five state variables of the studied system.
C. Inversion-based control scheme
An inversion-based control scheme can be deduced from the EMR using inversion rules. Two kinds of levels are organized: local and global controls. The local control level controls the different subsystems. It is described by light blue parallelograms in Fig. 2. The global control level is the EMS, which stands for Energy Management Strategy. The EMS coordinates the local control to manage the whole system. It is described by dark blue parallelograms in Fig. 2. In this study, there are two main control objectives. The first objective is to impose the torque of the machine Tm through the currents im. The second objective is to impose the grid currents ig for the charging mode, or the torque of the air compressor Tcp for the air-compression mode, through the currents ip. Six tuning variables, the switching functions sconv, are managed to reach this aim. From the objectives to the switching functions, the local control can then be deduced by inverting the EMR. The accumulation elements are inverted with the crossed blue parallelograms, which correspond to closed-loop controls. The conversion elements are inverted with the plain blue parallelograms, which correspond to open-loop controls. The energetic couplings are inverted with the overlapped blue parallelograms.
The control of the three currents im of the electrical machine allows controlling the torque Tm and the magnetic flux φm (EMS traction). To simplify the control calculations, the direct-quadrature-zero transformation is used. Compared to wye-connected structures, this topology has one more degree of freedom (DoF), consisting in the possibility of imposing the zero-sequence current im-o-ref.
Based on this DoF, different studies have been proposed. Indeed, in [START_REF] Sandulescu | Control strategies for open-end winding drives operating in the flux-weakening region[END_REF] the homopolar current is used to extend the range of speed with a higher torque for a 3-phase PMSM during flux weakening operation. In case of open-phase fault, having the homopolar component reduces torque ripples [START_REF] Zhao | Remedial Injected-Harmonic-Current Operation of Redundant Flux-Switching Permanent-Magnet Motor Drives[END_REF]- [START_REF] Bolognani | Experimental faulttolerant control of a PMSM drive[END_REF]. In [START_REF] Flieller | A self-learning solution for torque ripple reduction for nonsinusoidal permanent-magnet motor drives based on artificial neural networks[END_REF], using im-o-ref leads to minimum copper losses for a given torque in case where the back-EMF contains harmonics k*n (n: the number of phases, k=1, 2,…). The work presented in [START_REF] Flieller | A self-learning solution for torque ripple reduction for nonsinusoidal permanent-magnet motor drives based on artificial neural networks[END_REF] is available not only for 3-phase drives but also for multiphase drives.
Currents in rotating frame are controlled by PI controllers 𝐶 𝑚-𝑜𝑑𝑞 since their references are constant. The machine voltages are thus determined as shown in [START_REF] Diab | A Nine-Switch-Converter-Based Integrated Motor Drive and Battery Charger System for EVs Using Symmetrical Six-Phase Machines[END_REF].
\[
\begin{cases}
v_{m\text{-}odq\text{-}ref} = C_{m\text{-}odq}\,\big(i_{m\text{-}odq\text{-}ref} - i_{m\text{-}odq\text{-}est}\big) + e_{m\text{-}odq\text{-}est} \\
v_{m\text{-}ref} = P(\theta)^{-1}\, v_{m\text{-}odq\text{-}ref} \\
i_{m\text{-}odq\text{-}est} = P(\theta)\, i_{m\text{-}mea}
\end{cases}
\qquad (10)
\]
\[
P(\theta) = \sqrt{\tfrac{2}{3}}
\begin{bmatrix}
\cos(\theta) & \cos\!\big(\theta - \tfrac{2\pi}{3}\big) & \cos\!\big(\theta + \tfrac{2\pi}{3}\big)\\
-\sin(\theta) & -\sin\!\big(\theta - \tfrac{2\pi}{3}\big) & -\sin\!\big(\theta + \tfrac{2\pi}{3}\big)\\
\tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}
\end{bmatrix}
\]
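A sketch of this current control in the rotating frame is given below: the Park matrix P(θ) of (10) and one discrete PI regulator per axis. The gains, sampling period handling and numerical values are arbitrary examples and not the tuning used on the test bench.

```python
# Sketch of the dq0 current control of (10): Park transformation and one discrete
# PI controller per axis; gains and operating point are illustrative only.
import numpy as np

def park(theta):
    c, s = np.cos, np.sin
    return np.sqrt(2 / 3) * np.array([
        [c(theta), c(theta - 2 * np.pi / 3), c(theta + 2 * np.pi / 3)],
        [-s(theta), -s(theta - 2 * np.pi / 3), -s(theta + 2 * np.pi / 3)],
        [np.sqrt(2) / 2] * 3,
    ])

class PI:
    def __init__(self, kp, ki, ts):
        self.kp, self.ki, self.ts, self.integ = kp, ki, ts, 0.0
    def step(self, error):
        self.integ += self.ki * self.ts * error
        return self.kp * error + self.integ

ts = 100e-6                                             # 100 us control period
ctrl = [PI(kp=5.0, ki=800.0, ts=ts) for _ in range(3)]  # d, q, 0 axes

def machine_voltage_refs(theta, i_m_meas, i_dq0_ref, e_dq0_est):
    i_dq0_est = park(theta) @ i_m_meas                   # measured currents in dq0
    v_dq0_ref = np.array([c.step(r - m) for c, r, m in zip(ctrl, i_dq0_ref, i_dq0_est)])
    v_dq0_ref += e_dq0_est                               # back-EMF compensation, as in (10)
    return np.linalg.inv(park(theta)) @ v_dq0_ref        # back to phase quantities

print(machine_voltage_refs(0.3, np.array([5.0, -2.0, -3.0]),
                           np.array([0.0, 10.0, 0.0]), np.zeros(3)))
```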
The control of the mid-point currents ip13 depends on the chosen operating mode through the input switch kS.
\[
i_{p\text{-}dq\text{-}ref} = k_S\, i_{g\text{-}dq\text{-}ref} + (1 - k_S)\, i_{cp\text{-}dq\text{-}ref} \qquad (11)
\]
[START_REF] De-Sousa | Combined electric device for powering and charging[END_REF] In the charging mode, when kS = 1, the objective of the control is to manage the charge of the battery from the grid. The two grid current references ig-dq-ref allow controlling the active and reactive power exchanged with the grid. A classical Power Factor Correction is used for this purpose inside the "EMS charge" block. The grid current references are calculated from the grid voltages according to the charging power. Because PI controllers are used for tracking the currents, it is preferable to have constant current references, which is obtained by using a Phase-Locked Loop (PLL) for the frame rotation. The grid currents are only used in the charging case. The voltage references are thus determined as given in [START_REF] De-Sousa | Method and electric combined device for powering and charging with compensation means [END_REF].
\[
\begin{cases}
u_{p\text{-}dq\text{-}ref} = -\,C_{p\text{-}dq}\,\big(i_{p\text{-}dq\text{-}ref} - i_{p\text{-}dq\text{-}est}\big) + u_{S\text{-}est} \\
u_{p\text{-}ref} = P(\theta)^{-1}\, u_{p\text{-}dq\text{-}ref}
\end{cases}
\qquad (12)
\]
[START_REF] De-Sousa | Method and electric combined device for powering and charging with compensation means[END_REF]
The second coupling element of the EMR, "change of variables" in Fig. 2, deduces the two phase-to-phase voltages of the mid-points of the windings up from the three single-phase voltages of the mid-points vp. This non-bijective change of variables introduces the homopolar voltage vM:
\[
v_{p\text{-}ref} = v_{K\text{-}ref} + v_M
\quad\text{with}\quad
v_{K\text{-}ref} = T_4\, u_{p\text{-}ref},
\quad
T_4 = \frac{1}{3}\begin{bmatrix} 2 & -1\\ -1 & -1\\ -1 & 2 \end{bmatrix},
\]
\[
v_{1K\text{-}ref} + v_{2K\text{-}ref} + v_{3K\text{-}ref} = 0
\quad\text{and}\quad
v_{K\text{-}ref} = [\,v_{1K\text{-}ref}\; v_{2K\text{-}ref}\; v_{3K\text{-}ref}\,]^t
\qquad (13)
\]
\[
v_{conv\text{-}ref} = \begin{bmatrix} T_1 \\ \tfrac{1}{2}\, T_2 \end{bmatrix}^{-1} \begin{bmatrix} v_{m\text{-}ref} \\ v_{p\text{-}ref} \end{bmatrix}
\qquad (14)
\]
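The coupling inversion (14) simply stacks T1 and ½T2 into a 6x6 matrix, which is invertible because each phase contributes an independent 2x2 block. A short sketch, assuming the matrices reconstructed above and arbitrary voltage references, is given below.

```python
# Sketch of relation (14): recovering the six leg voltage references from the three
# machine voltage references and the three mid-point voltage references.
import numpy as np

T1 = np.array([[-1, 1, 0, 0, 0, 0], [0, 0, -1, 1, 0, 0], [0, 0, 0, 0, -1, 1]], float)
T2 = np.array([[1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0], [0, 0, 0, 0, 1, 1]], float)

A = np.vstack([T1, 0.5 * T2])            # [T1 ; (1/2) T2], a 6x6 matrix
print("det =", np.linalg.det(A))          # non-zero: the stacking is invertible

v_m_ref = np.array([120.0, -60.0, -60.0])  # machine voltage references (V), example
v_p_ref = np.array([30.0, 0.0, -30.0])     # mid-point voltage references (V), example
v_conv_ref = np.linalg.solve(A, np.concatenate([v_m_ref, v_p_ref]))
print(v_conv_ref)                          # six leg references vaO, va'O, ..., vc'O

# Consistency check: applying (1) and (2) to the result gives back the references.
print(np.allclose(T1 @ v_conv_ref, v_m_ref), np.allclose(0.5 * T2 @ v_conv_ref, v_p_ref))
```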
The homopolar voltage vM can be considered as a DoF to increase the modulation index of the converter using over-modulation techniques in the case of a wye-connection, since there is then no path for the zero-sequence current. In this paper, choosing vM = 0 is necessary to eliminate im-o-ref because of the sinusoidal back-EMF. The coupling inversion element then leads to the six converter reference voltages vconv-ref from the two mid-point reference voltages up-ref, the three motor reference voltages vm-ref and the homopolar voltage vM. The duty cycles of the switching functions are obtained by an inversion of (8); Pulse Width Modulation (PWM) is then classically used to define the switching function references sconv-ref from these duty cycles.
The proposed control of the entire system in Fig. 2 can seem complicated but, in fact, it is no more complex than a classical control of a 3-phase electrical machine for the traction mode and a classical control of a 3-phase PWM converter for the battery charger or for supplying the compressor. Current controllers in rotating frames are used in the control scheme. The current references are obtained from the torque in traction mode and from the required power for the air compressor / charging. Regular PI controllers are used since all currents in the rotating frames are constant. The only differences are the inversions of the coupling relations, implemented by relations [START_REF] De-Sousa | Combined electric device for powering and charging[END_REF] and (14), which are not complicated to implement and do not consume much execution time.
IV. RESULTS AND DISCUSSION
To test the system, the studied structure is presented in Fig. 4. The grid and the air compressor are represented by an electrical drive, which works as a generator in charging mode and as a motor in compression mode. The test bench built for this experiment is shown in Fig. 5. It is composed of: two isolated DC sources; an industrial drive to emulate the load of both emulators; a current measurement box; a dSPACE MicroLabBox to carry out the proposed control; a mechanical load of 10 kW to emulate the grid and the air compressor; and a 3-phase open-end winding PMSM of 15 kW with its 6-leg inverter, connected mechanically to a load drive to emulate the traction subsystem and connected electrically, at the mid-points of the windings, to a 3-phase PMSM. The MicroLabBox is a dSPACE unit with a real-time processor running at up to 2 GHz. During the tests, the fixed-step calculation of the Simulink control scheme has been set to 100 µs and the switching frequency has been fixed at 10 kHz. A test profile has been carried out to examine the different operating modes (Table II). Fig. 6 reports the experimental results for an example functioning cycle with three modes: charging (I), traction (II) and traction plus compression (III).
The speeds of both machines and the DC bus current are shown in Fig. 6a. The emulated grid currents and references of voltage are reported in Fig. 6b and Fig. 6e. The currents and voltage references of the 3-phase open-end machine are given in Fig. 6c and Fig. 6d.
During the charging mode, the torque Tm is controlled to zero, then the speed is also zero. The speed of the machine emulating the 3-phase grid is setting to 80 rad/s. The measured DC bus current ibus, similar to the battery power, is negative showing that the battery is in charging mode; at this time the grid currents are sinusoidal. It can be seen that the relation given in Table I for charging mode is verified by observing the currents of the two machines. Indeed, for example, ip1 = -2 iha = -2 iha' leads to the current ima = 0 and as a consequence, the torque Tm generated by the 3-phase open-end winding machine is equal to zero. The emulated grid voltages are however not perfectly sinusoidal (Fig. 6e left side) due to some harmonics of the back-EMF, mainly the 3 rd one, existing in the 3-phase machine used for the emulation of the grid.
In traction mode (mode II), only the 3-phase open-end winding machine is controlled, to track a reference speed fixed at 50 rad/s. The DC bus current ibus becomes positive in this mode (Fig. 6a). During the traction mode, the currents of the 3-phase open-end winding machine are balanced with a phase shift of 60 degrees. This means that iha = iha', ihb = ihb' and ihc = ihc' (Fig. 6c, right side) and ip1 = ip2 = ip3 = 0 (Fig. 6b, middle). The voltage references of the traction machine are given in Fig. 6d. These voltages are determined by the PI current controllers and can be regarded as the voltages of a symmetric 6-phase wye-connected machine.
In mode III, i.e., when the air compressor is started at 5.56 s during the traction mode, the currents crossing the 3-phase open-end machine are unbalanced due to the different speeds of the two machines. The change of torque of the 3-phase PMSM emulating the air compressor at 7.05 s can be seen in the values of the currents in Fig. 6b, right side. The results presented above confirm the validity of the proposed structure for automotive applications by offering three operating modes through control strategies. The results given in Fig. 6 also confirm that all the calculations are completed in time in the MicroLabBox. To prove the feasibility of the proposed control scheme, the grid and the air compressor have been emulated with a versatile scaled-down prototype using the same PMSM. To better emulate the characteristics of the grid, a large PMSM with a small synchronous reactance and low total harmonic distortion (THD) would be preferable. Nevertheless, the balanced 3-phase currents during the charging mode prove the feasibility even if the voltage of the grid contains harmonics. In the future, the real grid and a PMSM with a power rating matching the air compressor would be necessary to check the effectiveness of the control in real-world application scenarios.
V. CONCLUSION
Using a split-windings AC motor, an integrated motor drive / battery charger / air-compressor supply system has been introduced and its feasibility has been shown by real-time experimentation. This paper describes a unified control scheme that, within the same structure, achieves the three operating modes: charging, traction and air-compressor supply. The integrated system proposed in this paper is expected to increase vehicle compactness and power, and therefore potentially reduces the cost and the battery charging time. As future prospects, further potentialities of this integrated system will be studied, discussed and tested.
Fig. 1. The studied 3-phase open-end winding system
Fig. 3. Equivalence between the 3-phase open-end winding machine and two 3-phase machines
Fig. 4. Experimental set-up scheme
Fig. 6. Experimental results: a) Measured speeds of two machines (left) and the measured current of the DC bus (right); b) Grid currents; c) 6 currents of the 3-phase open-end winding machine; d) 6 references of voltage for the 3-phase open-end machine; e) Grid voltages (emulated by the back-EMF of a 3-phase PMSM).
TABLE I. CURRENTS AND VOLTAGES ACCORDING TO THE OPERATING MODES
$\cdots = \frac{d}{dt}\big[L_m \, i_m\big] + r_m \, i_m$ (4)

with $L_m = 4\begin{bmatrix} L + l_l/2 & M & M \\ M & L + l_l/2 & M \\ M & M & L + l_l/2 \end{bmatrix}$ and $r_m = 2\, r_s \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (5)

$u_S - u_p = \frac{d}{dt}\big[L_p \, i_{p13}\big] + r_p \, i_{p13}$ with $L_p = \frac{l_l}{2}\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$ and $r_p = \frac{r_s}{2}\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$ (6)

$\cdots = I \, i_{conv}$ with $I$ the unitary matrix of size $1 \times 6$ and $i_{conv} = \begin{bmatrix} i_{11} & i_{21} & i_{31} & i_{41} & i_{51} & i_{61} \end{bmatrix}^{t}$ (7)

$\begin{cases} i_{conv} = s_{conv}\, i_h \\ v_{conv} = \big(s_{conv} - \frac{1}{2}\big) V_{bus} \end{cases}$ with $s_{conv} = \begin{bmatrix} s_{11} & s_{21} & s_{31} & s_{41} & s_{51} & s_{61} \end{bmatrix}^{t}$ (8)

where $s_{ij} \in \{0; 1\}$, with $i \in \{1; 2; 3; 4; 5; 6\}$ the number of the leg and $j \in \{1; 2\}$ the number of the switch in the leg.
Switch element - The switch element commutes between the grid currents and the air compressor currents based on the value of the switch input kS:

$\begin{cases} i_{p13} = k_S \, i_g + (1 - k_S)\, i_{cp} \\ u_S = k_S \, u_g + (1 - k_S)\, e_{cp} \end{cases}$ with $k_S \in \{0; 1\}$
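A small numerical sketch of the converter relations (7)-(8) and of the switch element is given below; the switching states and current values are arbitrary examples, not measurements from the test bench.

```python
import numpy as np

# Relations (7)-(8) and the kS multiplexer, with arbitrary example values.
V_bus = 400.0
s_conv = np.array([1, 0, 1, 0, 1, 0], dtype=float)   # one switching state per leg
i_h = 12.0                                            # homopolar current (example)

i_conv = s_conv * i_h                                 # relation (8), first line
v_conv = (s_conv - 0.5) * V_bus                       # relation (8), second line
i_sum = np.ones((1, 6)) @ i_conv                      # relation (7): 1 x 6 "unitary" matrix

def switch_element(k_S, i_g, i_cp, u_g, e_cp):
    """Switch element: grid quantities for k_S = 1, air-compressor ones for k_S = 0."""
    i_p13 = k_S * i_g + (1 - k_S) * i_cp
    u_S = k_S * u_g + (1 - k_S) * e_cp
    return i_p13, u_S
```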
TABLE II. PROFILE TEST
Mode: 3-phase open-end winding machine / 3-phase machine (grid or air compressor)
Charging (Mode I): Speed and torque null / Generator with constant speed and torque
Traction (Mode II): Acceleration then constant speed / Speed and torque null
Traction + Compression (Mode III) Speed constant then deceleration Motor with constant speed and 2 steps of torque | 34,202 | [
"775004",
"1244471",
"22157"
] | [
"13338",
"13338",
"13338",
"13338",
"13338",
"218959",
"218959"
] |
01769160 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2016 | https://hal.science/hal-01769160/file/L2EP_2016_ESARS_ZARH.pdf | Hussein Zahr
email: [email protected]
Franck Scuiller
email: [email protected]
Eric Semail
email: [email protected]
Five-phase SPM machine with electronic pole changing effect for marine propulsion
In this paper, the possibility of designing a five-phase Surface-mounted Permanent Magnet (SPM) machine with 20 slots and 8 poles for a low-power marine propulsion system is examined. Due to its particular winding and surface magnet design, the machine inherently offers an electronic pole changing effect from 3×4 pole pairs at low speed to 4 pole pairs at high speed. At high speed, in the constant power range, according to Finite Element Analysis, the Maximum Torque Per Ampere strategy appears not to be the right solution to minimize the whole machine losses (copper, iron and magnets). In particular, a strategy that favors the 4-pole rotating field at high speed makes it possible to mitigate the magnet losses, thus limiting the risk of magnet overheating.
I. NOMENCLATURE
SPM: Surface-mounted Permanent Magnet
FEA: Finite Elements Analysis
CPR: Constant Power Range
MM: Main Machine (1st harmonic dq-subspace)
SM: Secondary Machine (3rd harmonic dq-subspace)
Ω_m: Mechanical speed (rad/s)
p: Pole pair number
R: Armature resistance
ε_1, ε_3: MM and SM no-load back-emf at 1 rad/s
L_1, L_3: MM and SM cyclic inductances
θ_1, θ_3: MM and SM back-emf to current angles
I_1, I_3: MM and SM currents
V_b, I_b: Base RMS voltage and current
Ω_b, T_b: Base speed and torque
II. INTRODUCTION
Multi-phase motors are widely used in electrical marine propulsion for reasons such as reliability, smooth torque and distribution of power [START_REF] Levi | Multiphase electric machines for variable-speed applications[END_REF]. For low power propulsion system (less than 10kW), the power partition constraint results from the low DC voltage (less than 60V) that supplies the drive. Hence increasing the phase number allows to limit the rating of the power electronic components. In addition, compactness objective can be more easily achieved if the phase number is considered as a design parameter. For instance, with five-phase machine, third harmonic current injection can be performed to boost the torque [START_REF] Semail | Right harmonic spectrum for the back-electromotive force of an n-phase synchronous motor[END_REF], [START_REF] Wang | Torque improvement of fivephase surface-mounted permanent magnet machine using third-order harmonic[END_REF]. Regarding the rotor, Permanent Magnet (PM) structure contribute to enhance the power density [START_REF]Oceanvolt website[END_REF], [START_REF]Torqeedo website[END_REF]. In case of Surfacemounted Permanent Magnet (SPM) rotor, the ripple torque mitigation is facilitated. Furthermore, with five-phase SPM machine, third harmonic current injection can be used to eliminate the pulsating torque [START_REF] Scuiller | Third harmonic current injection to reduce the pulsating torque of a five-phase spm machine[END_REF]. If fractional-slot windings facilitate the reduction of cogging torque for SPM machine [START_REF] Zhu | Influence of design parameters on cogging torque in permanent magnet machines[END_REF], they also generate magnetomotive force harmonics that could result in excessive magnet losses. Machine with 0.5 slots per phase and per pole (𝑠 𝑝𝑝 = 0.5) are known to limit this effect [START_REF] Aslan | General analytical model of magnet average eddy-current volume losses for comparison of multiphase pm machines with concentrated winding[END_REF]. In addition, the slot filling can also be improved with this solution [START_REF] El-Refaie | Fractional-slot concentrated-windings synchronous permanent magnet machines: Opportunities and challenges[END_REF]. Therefore the machine here considered is a five-phase machine with 20 slots and 8 poles (20-8-5 configuration) for a marine propeller.
The 3-phase counterpart of this 20-8-5 machine has 8 poles and 12 slots (12-8-3 configuration). With reference to this 12-8-3 machine, the benefits of the 20-8-5 machine are examined in [START_REF] Zahr | Five-phase version of 12slots/8poles three-phase synchronous machine for marinepropulsion[END_REF] for the same design specifications: rated torque, power and external diameters are identical. With numerical computations of the two machines, this study shows that the 5-phase configuration allows a significant reduction of the magnet losses. In addition, the 5-phase machine facilitates the reduction of the ripple torques (cogging and pulsating) that is of critical importance at low speed.
This paper focuses on another property of the 20-8-5 machine. Due to its particular winding distribution, this machine inherently owns 3×4 pairs of pole and 4 pairs of pole. Hence an electronic pole changing effect can be obtained if the machine is designed to operate at low speed with 3×4 pole pairs or at high speed with 4 pole pairs. More generally the appropriate polarity has to be selected regarding the load demand in transient or steady state, taking into account the inverter rating and the efficiency or torque quality requirements. Pole changing methods by winding switching are well known for induction machine [START_REF] Osama | A new inverter control scheme for induction motor drives requiring wide speed range[END_REF]. For multiphase induction machine, pole phase modulation strategy can be applied [START_REF] Gautam | Variable speed multiphase induction machine using pole phase modulation principle[END_REF]. In [START_REF] Sadeghi | Wide operational speed range of five-phase permanent magnet machines by using different stator winding configurations[END_REF], the speed range of a five-phase PM machine is extended by switching between different stator configurations. A similar procedure is achieved in [START_REF] Nguyen | Different virtual stator winding configurations of open-end winding five-phase pm machines for wide speed range without flux weakening operation[END_REF] but with electronic switching. In this paper, the pole changing effect is electronically ensured by the inverter, depending of the levels of first and third harmonics of current.
The paper is divided into two parts. In the first part, the 20-8-5 machine design is introduced. The magnet layer to obtain the double polarity property is introduced. To master the sizing, numerical field calculations are reported and the torque/speed characteristic is estimated. The second part focuses on the control strategy in the constant power range that corresponds to the steady state operation of the propeller. Two strategies are described and compared regarding the copper, magnet and iron losses (estimated with FEA).
III. MACHINE DESIGN
A. Five-phase machine modeling
If magnetic saturation and the demagnetization issue are not considered, it can be shown that a star-connected five-phase SPM machine behaves as two two-phase virtual machines that are magnetically independent but electrically and mechanically coupled [START_REF] Semail | Vectorial formalism for analysis and design of polyphase synchronous machines[END_REF]. Furthermore, as the rotor saliency can be neglected for SPM machines, the space harmonics are distributed among the two virtual machines: the virtual machine sensitive to the fundamental is called the Main Machine (MM) whereas the one sensitive to the third harmonic is called the Secondary Machine (SM). Each virtual machine is in fact a physical reading of the mathematical subspace built on the linear mapping that describes the phase-to-phase magnetic couplings: this two-dimensional subspace is usually represented with an αβ-axis circuit in the stationary frame or with a dq-axis circuit in the rotating frame. As there is no saliency effect, no distinction has to be made between the d-axis and q-axis inductances.
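For readers who want to reproduce this decomposition numerically, the sketch below builds a generalized Concordia matrix whose first two rows span the MM plane and the next two the SM plane; the power-invariant normalisation is an assumption of this sketch, not a notation taken from the paper.

```python
import numpy as np

# Generalized Concordia decomposition behind the two "virtual machines" of a
# star-connected five-phase machine: 1st-harmonic (MM) plane, 3rd-harmonic (SM)
# plane and homopolar axis, under a power-invariant normalisation (assumed).
alpha = 2 * np.pi / 5
k = np.arange(5)
C = np.sqrt(2 / 5) * np.vstack([
    np.cos(1 * k * alpha), np.sin(1 * k * alpha),   # MM (alpha1-beta1) plane
    np.cos(3 * k * alpha), np.sin(3 * k * alpha),   # SM (alpha3-beta3) plane
    np.full(5, 1 / np.sqrt(2)),                     # homopolar axis
])
print(np.allclose(C @ C.T, np.eye(5)))               # True: the subspaces are orthogonal
```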
B. Design specifications
The machine is intended to be integrated in a pod for an electrical outboard. The propeller is driven by the electrical machine with a mechanical gear that reduces the electrical rotating speed by five. The main machine parameters are listed in table I. The electromagnetic circuit is sketched out in Fig. 1 (over a pole pair): the five-phase winding distribution with the corresponding first and third harmonic winding factors (𝑘 𝑤,1 and 𝑘 𝑤,3 ) can be observed. The magnet layer shape is arranged to make the two virtual machines enable to produce the same torque level. To reach this goal, the rotor pole consists of two radially magnetized magnets, each magnet covering one third of the pole arc, as illustrated by Fig. 1.
Winding factors: k_w,1 = 0.5878, k_w,3 = 0.9811.
C. Field analysis
In this subsection, the electromagnetic behaviour of the machine is estimated with FEA software FEMM [START_REF] Meeker | Finite element method magnetics, version 4.2, users manual[END_REF] under magnetostatic hypothesis. In addition, saturation effects are not taken into account (linear assumption for the materials).
Fig. 2 shows the no-load back-emf waveform and spectrum. The double polarity of the machine can be inferred. The spectrum confirms that first and third harmonic terms are of the same order. The inductance values 𝐿 1 for the MM and 𝐿 3 for the SM are calculated by loading the machine with the rated current:
• L_1 = 3.1 mH for the MM (4 pole pairs)
• L_3 = 4.0 mH for the SM (3 × 4 pole pairs).
Fig. 3 shows the flux lines when loading the machine with the fundamental rated current (for a given rotor position). The flux density values make it possible to check the correct sizing of the electromagnetic circuit: in the yokes and in the stator teeth, the flux density is lower than 1.4 T. Fig. 4 focuses on the cogging torque estimation. As before, the cogging torque is negligible: its amplitude is less than 0.25 Nm, that is, less than 1% of the rated torque.
D. Maximum reachable torque for finite Volt-Ampere rating
In this subsection, the torque/speed characteristic of the machine is calculated: for a given mechanical speed Ω_m (i.e. for a given electrical speed ω), taking into account the maximum DC voltage and the maximum copper losses (driven by the base current I_b), the goal consists in finding the MM and SM current distribution that maximizes the electromagnetic torque [START_REF] Scuiller | Maximum reachable torque, power and speed for five-phase spm machine with low armature reaction[END_REF]. Solving this problem is equivalent to finding the optimal d-axis and q-axis references for each virtual machine. The Maximum Torque Per Ampere (MTPA) solution for a given speed is then obtained. The optimization variable is defined as follows:
𝑧 = [ 𝐼 1 𝜃 1 𝐼 3 𝜃 3 ] 𝑇 (1)
The optimization variable is lower and upper bounded according to the following relations:
$Z_{low} = \begin{bmatrix} 0 \\ -\pi \\ 0 \\ -\pi \end{bmatrix} \le z \le \begin{bmatrix} I_b \\ \pi \\ I_b \\ \pi \end{bmatrix} = Z_{up}$ (2)
The objective is to maximize the electromagnetic torque. This goal is expressed in the following relation where 𝑇 is the average electromagnetic torque:
𝑧 * = 𝑎𝑟𝑔𝑚𝑖𝑛(-𝑇 (𝑧)) (3)
The average electromagnetic torque is the sum of the torque of each virtual machine. The MM torque (due to the fundamental of current) is denoted 𝑇 1 and the SM torque (due to the third harmonic of current) is denoted 𝑇 3 :
$T_1 = 5\,\epsilon_1 I_1 \cos\theta_1$ (4)

$T_3 = 5\,\epsilon_3 I_3 \cos\theta_3$ (5)
The five-phase machine torque is then expressed as follows:
𝑇 = 5𝜖 1 𝐼 1 cos 𝜃 1 + 5𝜖 3 𝐼 3 cos 𝜃 3 (6)
Equation ( 6) is used in relation (3) to track the optimal current repartition 𝑧 * . The non linear constraint regarding the peak phase voltage is written in the following relation:
𝑓 𝑉 (𝑧) = max {𝑣(𝑝Ω 𝑚 𝑡, 𝑧), 𝑝Ω 𝑚 𝑡 ∈ [0..2𝜋]} -𝑉 𝑝𝑒𝑎𝑘 (7)
In equation (7), 𝑣(𝑝Ω 𝑚 𝑡, 𝑧) is the machine phase-to-neutral voltage for the current distribution determined by 𝑧 at speed Ω 𝑚 . It should be noted that the considered voltage contains all the harmonics (i.e. not only the first and the third harmonics). The maximum allowable peak voltage is chosen to be half the bus voltage (thus meaning that linear modulation operation is targeted):
$V_{peak} = \frac{V_{dc}}{2}$ (8)
The constraint relative to the maximum RMS current is defined by the following equation:
$f_I(z) = z(1)^2 + z(3)^2 - I_b^2$ (9)
The following expression summarizes the optimization problem under consideration:
$z^* = \arg\min\big(-T(z)\big)$ with $\begin{cases} Z_{low} \le z \le Z_{up} \\ f_V(z) \le 0 \\ f_I(z) \le 0 \end{cases}$ (10)
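A minimal numerical sketch of problem (10) is given below; L_1 and L_3 reuse the FEA estimates above, but the back-emf coefficients, resistance, current and voltage limits are assumed values, and the voltage constraint uses a crude two-harmonic phasor model instead of the full waveform used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the MTPA problem (10) at one speed. eps1, eps3, R, I_b and V_peak
# are illustrative placeholders (not the prototype values); f_V is a rough
# first/third-harmonic phasor bound, not the full-waveform constraint (7).
eps1, eps3 = 0.05, 0.045            # back-emf coefficients (V.s/rad), assumed
R, L1, L3 = 0.05, 3.1e-3, 4.0e-3    # L1, L3 from the FEA estimates above
I_b, V_peak, p = 100.0, 24.0, 4

def torque(z):                      # relation (6)
    I1, th1, I3, th3 = z
    return 5 * eps1 * I1 * np.cos(th1) + 5 * eps3 * I3 * np.cos(th3)

def f_I(z):                         # relation (9), feasible when <= 0
    return z[0] ** 2 + z[2] ** 2 - I_b ** 2

def f_V(z, omega_m):                # simplified stand-in for relation (7)
    I1, th1, I3, th3 = z
    w1, w3 = p * omega_m, 3 * p * omega_m
    v1 = abs(eps1 * omega_m + (R + 1j * w1 * L1) * I1 * np.exp(-1j * th1))
    v3 = abs(eps3 * omega_m + (R + 1j * w3 * L3) * I3 * np.exp(-1j * th3))
    return np.sqrt(2) * (v1 + v3) - V_peak

omega_m = 3000 * 2 * np.pi / 60     # mechanical speed (rad/s)
res = minimize(lambda z: -torque(z),
               x0=[0.5 * I_b, 0.0, 0.5 * I_b, 0.0],
               bounds=[(0, I_b), (-np.pi, np.pi), (0, I_b), (-np.pi, np.pi)],
               constraints=[{"type": "ineq", "fun": lambda z: -f_I(z)},
                            {"type": "ineq", "fun": lambda z: -f_V(z, omega_m)}],
               method="SLSQP")
print(res.x, -res.fun)              # optimal [I1, theta1, I3, theta3] and torque
```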
Fig. 5 shows the resulting torque/speed characteristic corresponding to the resolution of optimization problem (10). It can be observed that the rated torque (about 34 Nm) is obtained by using the virtual MM and the virtual SM at the same time, which means that the five-phase machine considered here is designed to operate with both the first and the third harmonic of current. In addition, the electronic pole changing effect is obtained since, at low speed, the SM (3 × 4 poles) mainly contributes to the torque whereas, at high speed, the MM (4 poles) torque becomes higher. The isopower line (7.7 kW) drawn in Fig. 5 makes it possible to determine the constant power range, which extends from 2250 rpm to 4500 rpm (as specified in Table I).
IV. CONTROL STRATEGIES IN THE CONSTANT POWER RANGE
This section focuses on the machine operation in the CPR. At low speed (below 2250rpm), the machine is supposed being in transient states and the MTPA strategy previously introduced is performed. At high speeds (between 2250rpm and 4500rpm), the constant power control is used: in this mode, since the torque is not maximized, the constant power can be obtained with different current distributions, depending on the considered objective. Therefore, two objectives will be examined: one that aims to reduce the copper losses and another that aims to maximize the MM torque contribution. These two strategies will be compared regarding the machine losses with FEA.
A. Constant Power Control with minimizing copper losses
For the control described here, the goal consists in minimizing the copper losses for a given electromagnetic power while accounting for the maximum allowable peak voltage. For a given torque, as the copper losses are minimized, this strategy is actually an MTPA one and is called CPR-MTPA in the following. The optimization problem can be written as follows:
$z^* = \arg\min\big(z(1)^2 + z(3)^2\big)$ with $\begin{cases} Z_{low} \le z \le Z_{up} \\ f_V(z) \le 0 \\ f_T(z) = 0 \end{cases}$ (11)
In relation (11), 𝑓 𝑇 (𝑧) is the constraint relative to the torque, simply obtained by dividing the required constant electromagnetic power 𝑃 𝑒𝑚 by the mechanical speed Ω 𝑚 :
$f_T(z) = T(z) - \frac{P_{em}}{\Omega_m}$ (12)
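Reusing the functions of the earlier sketch, switching from (10) to the CPR-MTPA problem (11)-(12) only changes the objective and adds the power equality constraint; the 7.7 kW target below is the isopower value quoted above, while the remaining constants stay illustrative.

```python
# Reuses torque, f_V, I_b and minimize from the sketch after relation (10).
P_em = 7700.0                                  # constant-power target (W)
omega_m = 3000 * 2 * np.pi / 60                # one speed inside the CPR

def f_T(z):                                    # relation (12), enforced as an equality
    return torque(z) - P_em / omega_m

res_cpr = minimize(lambda z: z[0] ** 2 + z[2] ** 2,          # copper-loss image, relation (11)
                   x0=[0.7 * I_b, 0.0, 0.3 * I_b, 0.0],
                   bounds=[(0, I_b), (-np.pi, np.pi), (0, I_b), (-np.pi, np.pi)],
                   constraints=[{"type": "eq", "fun": f_T},
                                {"type": "ineq", "fun": lambda z: -f_V(z, omega_m)}],
                   method="SLSQP")
# For the CPR-h1 strategy (13), replace the objective with
# lambda z: -5 * eps1 * z[0] * np.cos(z[1])   (i.e. -T_1(z) from relation (4)).
```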
The speeds where the CPR can be achieved are deduced from the maximum torque/speed characteristic estimated in the previous subsection (2250 rpm - 4500 rpm). Fig. 6a shows the obtained torque/speed characteristics: at low speed (below 2250 rpm), the reported characteristic is the one obtained with the MTPA strategy (10) whereas, at high speed, the reported characteristic is the one corresponding to the resolution of the CPR problem (11) introduced in this subsection. Again, the torque repartition between the two virtual machines confirms the electronic pole changing effect: at low speed, the SM is mainly used whereas, at high speed, the MM becomes predominant. Fig. 6b gives the corresponding power/speed characteristics. As specified, the CPR is actually obtained between 2250 rpm and 4500 rpm, and the electronic pole changing effect already observed when looking at Fig. 6a is confirmed.
B. Constant Power Control with favoring Main Machine
In order to reinforce the pole changing effect at high speed, the constant power control (between 2250 rpm and 4500 rpm) is calculated so that the Main Machine torque contribution is maximized (while always accounting for the maximum allowable peak voltage). With this strategy, a limitation of the magnet eddy-current losses is expected because the Main Machine has 0.5 slot per pole and per phase [START_REF] Aslan | General analytical model of magnet average eddy-current volume losses for comparison of multiphase pm machines with concentrated winding[END_REF]. This strategy is called CPR-h1 and is obtained by solving the following optimization problem:
$z^* = \arg\min\big(-T_1(z)\big)$ with $\begin{cases} Z_{low} \le z \le Z_{up} \\ f_V(z) \le 0 \\ f_T(z) = 0 \end{cases}$ (13)
In (13), 𝑇 1 denotes the torque produced by the Main Machine, defined by (4). Fig. 7a and Fig. 7b show respectively the resulting torque/speed and power/speed characteristics when solving optimization problem (13) at high speed (at low speed, below 2250 rpm, the MTPA characteristic is given). As aimed for, the torque produced by the Main Machine is maximized in the Constant Power Range. One can observe that, between 3200 and 4200 rpm, the Secondary Machine operates in generator mode to facilitate the Main Machine power increase.
C. Losses analysis
Copper, magnet and iron (stator and rotor) losses are now considered with FE software Ansys Maxwell. Magnet and iron losses are estimated from the time variation of the flux density. The method to calculate the hysteresis, eddy current and excess losses is close to the one detailed in [START_REF] Lin | A dynamic core loss model for soft ferromagnetic and power ferrite materials in transient finite element analysis[END_REF]. For speed belonging to the CPR, the five-phase machine is simulated for the optimized currents corresponding to the CPR-MTPA and CPR-h1 strategies.
Fig. 8a and Fig. 8b give the copper, magnet and iron losses according to the speed for the CPR-MTPA and the CPR-h1 strategies respectively. With the CPR-MTPA strategy, as aimed for, the copper losses are minimized but, in Fig. 8a, it can be observed that copper losses do not represent the major losses. This trend is all the truer as the speed increases. Referring to the CPR-MTPA strategy that allows the lowest copper losses, the CPR-h1 strategy allows to reduce the whole losses up to 3200rpm as shown by Fig. 8b. This advantage is based on the significant decrease of the magnet losses. This result complies with the analysis carried out in [START_REF] Zahr | Five-phase version of 12slots/8poles three-phase synchronous machine for marinepropulsion[END_REF] where it is shown that the asynchronous space harmonics due the third harmonic generate eddy currents in the magnet layer. For SPM machine, magnet losses mitigation is critical because the magnet layer can not be cooled as easily as the stator winding. In steady state, the magnet losses should be carefully controlled to prevent from demagnetization due to overheating.
Fig. 9 gives another insight into the possible loss reduction with the CPR-h1 strategy: the efficiency versus speed is represented for the two strategies, which illustrates the efficiency enhancement obtained up to 3200 rpm with CPR-h1. Finally, for the SPM 20-8-5 machine, the MTPA strategy is not the right solution to maximize the efficiency and to limit the heating of the magnets.
V. CONCLUSION
In this paper, the estimations of the 20-8-5 machine performances are obtained without considering magnetic saturation and the demagnetization issue. With a particular but quite simple design of the magnet layer, the 20-8-5 machine inherently has 3×4 pole pairs and 4 pole pairs. With a single stator and a single rotor structure, a magnetic gear behavior is then obtained by controlling two rotating fields. The torque/speed characteristic calculated by considering the current and voltage limitations imposed by the inverter confirms the electronic pole changing effect: at low speed, the SM (3×4 poles) mainly contributes to the torque whereas, at high speed, the MM (4 poles) torque becomes higher. As the propeller is specified to operate in the constant power range at steady state (between 2250 rpm and 4500 rpm), the machine losses are numerically estimated for two CPR control strategies: CPR-MTPA, which minimizes the copper losses, and CPR-h1, which favors the MM torque contribution. A better efficiency is obtained with the CPR-h1 control up to 3200 rpm, with a significant reduction of the magnet losses over the whole speed range. Finally, for the 20-8-5 machine considered here, the existence of an optimal Maximum Torque Per Losses (MTPL) strategy is thus demonstrated and has to be explored in further studies.
Fig. 1. Electromagnetic circuit of the 5-phase machine
Fig. 2. No load back-emf at 2250rpm mechanical speed (FEA)
Fig. 3. Flux density (for fundamental rated current)
Fig. 5. Maximum reachable torque for the five-phase machine
Fig. 6. Torque and power with CPR-MTPA strategy
Fig. 7. Torque and power with CPR-h1 strategy
89 Fig. 8. Losses according to the speed in the CPR | 21,810 | [
"959914",
"22157"
] | [
"13338",
"13094",
"13338"
] |
01769219 | en | [
"shs"
] | 2024/03/05 22:32:16 | 2017 | https://shs.hal.science/halshs-01769219/file/Regulation%20stem%20cell%20research-%20UK-vf-AM.pdf | The ethical approval is delivered by a Research Ethics Committee (REC) and it must be applied for using the guidance provided by National Research Ethics Service (NRES) at the Health Research Authority2 Tissue banks that have been approved by a REC can provide human tissues to researchers, whom do not need to store them under a Human Tissue Authority licence during the period of the research project, subject to certain requirements. However, specific project approval by a recognised REC will be required, or the samples will need to be stored under a Human Tissue Authority licence, if the research is not carried out in accordance with these requirements.
A Human Tissue Authority establishment licence for human tissue stored outside a specific research project
An establishment licence delivered by the Human Tissue Authority is required to remove and store human material for research in England, Wales and Northern Ireland.
The Human Tissue Authority provides a detailed list of what is to be considered as 'relevant material' under the Human Tissue Act 2004, and as such regulated by the Human Tissue Authority for research notably: https://www.hta.gov.uk/policies/list-materials-considered-be-%E2%80%98relevant-material%E2%80%99-under-human-tissue-act-2004
The Human Tissue Authority's licensing role covers licensing premises to store tissue from the living and licensed establishments for tissue to be removed from the deceased for research. However, exceptions exist and Human Tissue Authority licensing is not required where the relevant material is:
- From a person who died prior to 1st September 2006, where at least one hundred years have elapsed since their death
- Being held 'incidental to transportation' for a period no longer than a week
- Being held whilst it is processed with the intention to extract DNA or RNA, or other subcellular components that are not relevant material (i.e. rendering the tissue acellular), for a period no longer than a week
- If research is undertaken under REC approval from a recognised REC at an establishment that does not operate as a research tissue bank
- And other cases (see Part 2, 16 of the Human Tissue Act 2004 (3) and point 84 of the Human Tissue Authority Code E: research (4))
A Designated Individual (DI) has to be appointed in each licensed establishment. The DI has a statutory responsibility under the HT Act to supervise activities taking place under the licence. The HTA's licensing Standards are grouped under four headings:
- Consent (C)
- Governance and quality systems (GQ)
- Traceability (T)
- Premises, facilities and equipment (PFE)
Complementarity between the Human Tissue Authority establishment licence and ethical approval
Specific research ethics committees (RECs) can give broad ethics approval for research tissue banks. The latter will consequently be required to work under NRES standard operating procedures (SOPs). In that case, there is no need for further individual project-specific approvals as long as a broad specified remit of work is permitted. But Human Tissue Authority licensed premises/establishments are always required for tissues to be stored in these research-based tissue banks.
2) Tissues and cells for research to be transplanted into humans
As long as tissues and cells, including cell lines, may be transplanted into humans, they must be licensed by the Human Tissue Authority under the Human Tissue (Quality and Safety for Human Application) Regulations 2007 (Q&S Regulations)5 .
The Human Tissue Authority regulates the following activities: procurement, testing, processing, storage, distribution and import/export of tissues and cells, including cell lines, which may be transplanted into humans, even where it is for research. These activities, apart from storage, can also be carried out under a third party agreement when: -The establishment carrying out the activity is acting on behalf of a licensed establishment; and -The third party agreement meets the standards set out in the Guide to Quality and Safety Assurance of Human Tissues and Cells for Patient Treatments . 6 When a cell therapy is deemed to be a Medicinal Product (MP), including an Advanced Therapy Medicinal Product (ATMP) or Investigational Medicinal Product (IMP), the Q&S regulations will apply for the donation, procurement and testing of tissues and cells. The Medicines and Healthcare products Regulatory Agency (MHRA) [START_REF]Tissues%20and%20Cells%20for%20Patient %20Treatment.pdf Human Tissue Authority, Code A: Guiding principles and the fundamental principle of consent[END_REF] will regulate the subsequent stages, including manufacture, storage and distribution. Moreover, a clinical trial authorisation and a positive opinion from an ethics committee are required for research. For further information, see: https://www.gov.uk/government/collections/clinical-trialsfor-medicines
B) Ethical and regulatory oversight
The Human Tissue Authority8 regulates under the Human Tissue Act the removal and storage of human tissues for a range of activities (scheduled purposes), including research (establishments/premises licensing in the Research sector). The Human Tissue Authority also regulates under the Q&S Regulations the procurement, testing, processing, storage, distribution and import/export of tissues and cells, including cell lines, intended for human application: which may be transplanted into humans, even where it is for clinical research (establishments/premises licensing in the Human Application sector). The Medicines and Healthcare products Regulatory Agency (MHRA)9 regulates when a cell therapy is deemed to be a Medicinal Product (MP) or Investigational Medicinal Product (IMP). Research Ethics Committees (RECs) deliver approvals for research projects and guidance for applications are provided by National Research Ethics Service (NRES) at the Health Research Authority. 10 The West London and Gene Therapy Advisory Committee 11 and its equivalent in Oxford, York and Edinburgh provides ethical opinions for clinical trials authorisation regarding gene therapy and cell therapy using stem cell lines.
II-Research on human embryonic stem cells
The Human Fertilisation and Embryology Authority (HFEA) 12 regulates the creation and use of human embryos in the derivation of human embryonic stem cell lines. However, its remit ceases at the point the embryo is dissociated. After that, the Human Tissue Authority's remit begins (see above).
A) Current legal position
Research on embryos and human embryonic stem cells is authorised by the Human Fertilisation and Embryology Act 1990, Schedule 2 (13). The Human Fertilisation and Embryology Authority (HFEA) (14) regulates the storage of gametes (eggs and sperm) and embryos. It also grants licences for research projects involving human embryos where the following conditions are met:
- The research project is carried out in suitable premises
- The use of human embryos is necessary and the research fulfils at least one of the purposes set out in the Act:
• Increasing knowledge about serious disease or other serious conditions.
• Developing treatments for serious diseases or other serious medical conditions.
• Increasing knowledge about the causes of congenital diseases.
• Promoting the advances in the treatment of infertility.
• Increasing knowledge about the causes of miscarriages.
• Developing more efficient techniques of contraception.
• Developing methods for detecting gene, chromosome or mitochondrion abnormalities in embryos before implantation.
• Increasing knowledge about the development of embryos.
- The people donating their eggs, sperm or embryos for research provided consent to do so
- The embryos must not be allowed to develop in the laboratory beyond 14 days after fertilisation
- No embryo created or used in research can be transferred to a woman
- When a derived human embryonic stem cell line is fully characterised and cultured to ensure uniform characteristics, it is a condition of all Human Fertilisation and Embryology Authority research licences that the cell line is deposited in the UK Stem Cell Bank (15).
The majority of embryos used in research projects are donated by patients undergoing fertility treatment. But embryos can also be created for research.
CNRS-Aix-Marseille Université-Université de Pau et des Pays de l'Adour-Université de Toulon et du Var, Aix-en-Provence, France
Regulatory Overview, July 2017, published on the EuroStemCell website: http://www.eurostemcell.org/regulation-stem-cell-research-united-kingdom
To be cited as: A. MAHALATCHIMY, Regulation of stem cell research in the United Kingdom, EuroStemCell website, 18 July 2017: http://www.eurostemcell.org/regulation-stem-cell-research-united-kingdom
I-Research on human stem cells
A) Current legal position
1) Tissues and cells for research not to be transplanted into humans
An ethical approval for specific research projects
Human tissue held for a specific research project approved by a recognised Research Ethics Committee (REC) (or where approval is pending) (Section 1(9) of the Human Tissue Act 2004 (1)).
1 http://www.legislation.gov.uk/ukpga/2004/30/contents
2 http://www.hra.nhs.uk/
3 http://www.legislation.gov.uk/ukpga/2004/30/contents
4 https://www.hta.gov.uk/sites/default/files/Code%20E%20-%20Research%20Final_0.pdf
5 https://www.hta.gov.uk/sites/default/files/Q&S_Human_Application_Regs_2007.pdf
6 https://www.hta.gov.uk/sites/default/files/Guide%20to%20Quality%20and%20Safety%20Assurance%20for%20Tissues%20and%20Cells%20for%20Patient%20Treatment.pdf
7 https://www.gov.uk/government/organisations/medicines-and-healthcare-products-regulatory-agency
8 https://www.hta.gov.uk/
9 https://www.gov.uk/government/organisations/medicines-and-healthcare-products-regulatory-agency
10 http://www.hra.nhs.uk/
Acknowledgement: I would like to deeply thank Professor Andrew Webster and the Human Tissue Authority for their useful comments. | 9,945 | [
"18844"
] | [
"239063"
] |
01769217 | en | [
"sde"
] | 2024/03/05 22:32:16 | 2006 | https://hal.science/hal-01769217/file/Pergentetal-2006-BotMAr.-HALpdf.pdf | Ge ´rard Pergent
Vanina Pasqualini
Christine Pergent-Martini
Lila Ferrat
Catherine Fernandez
email: [email protected]
Variability of Ruppia cirrhosa in two coastal lagoons with differing anthropogenic stresses
Keywords: aquatic Magnoliophyta, Mediterranean Sea, monitoring, population dynamics, seagrasses, wetland
The dynamics of Ruppia cirrhosa were studied over two years in two coastal lagoons on the Corsican coast (France, Mediterranean Sea). The lagoons differed in type of eutrophication: (1) Biguglia lagoon (urban and industrial effluent, agriculture, runoff from catchment area) and (2) Santa Giulia lagoon (tourist pressure in summer). Spatio-temporal variability of R. cirrhosa occurrence was monitored on permanent transects. We also monitored temporal changes in density, aboveground/belowground biomass and organic matter. Most of the parameters studied along the transects show variations with season and site. Density and aboveground biomass of R. cirrhosa in Biguglia lagoon were lower when Ulva species were present. This may be related to differences in nutrient availability. During the first year of the study, rainfall was greater with concomitantly higher nutrient inputs, which may account for the higher values of measured parameters in the first year. The results suggest that environmental parameter variations affect the functioning of R. cirrhosa meadows.
Introduction
Aquatic macrophytes, and especially aquatic Magnoliophyta, are prominent components of numerous lagoon habitats [START_REF] Millet | Relationships between benthic communities and physical environment in a lagoon ecosystem[END_REF][START_REF] Sfriso | Seasonal variation in biomass, morphometric parameters and production of seagrasses in the lagoon of Venice[END_REF][START_REF] Mene ´ndez | Net production of Ruppia cirrhosa in the Ebro Delta[END_REF], Sfriso et al. 2002). These plants have essential ecological roles [START_REF] Tamisier | Aquatic bird populations as possible indicators of seasonal nutrient flow at Ichkeul lake, Tunisia[END_REF], and also have economic importance [START_REF] Pearce | Caracte ´ristiques ge ´ne ´rales des zones humides me ´diterrane ´ennes[END_REF]Crivelli 1994, Skinner and[START_REF] Skinner | Fonctions et valeurs des zones humides me ´diterrane ´ennes[END_REF]. They contribute to sedimentary balance and constitute a bioindicator of water quality (Succow and Reinhold 1978[START_REF] Meriaux | Les ve ´ge ´tations aquatiques et subaquatiques. Relations avec la qualite ´des eaux[END_REF][START_REF] Blandin | Bioindicateurs et diagnostic des syste `mes e ´cologiques[END_REF][START_REF] Guilizzoni | The role of heavy metals and toxic materials in the physiological ecology of submersed macrophytes[END_REF]. For many years, strong anthropogenic pressures (e.g., fishing, fish and shellfarming, industrialisation and urbanisation) have disturbed Mediterranean lagoon eco-systems and strongly jeopardise their future survival [START_REF] Bacher | Modelling the impact of a cultivated oyster population on the nitrogen dynamics: the Thau lagoon case (France)[END_REF][START_REF] Bettinetti | Application of an integrated management approach to the restoration project of the lagoon of Venice[END_REF][START_REF] De Casabianca | Impact of shellfish farming eutrophication on benthic macrophyte communities in the Thau lagoon, France[END_REF][START_REF] Deslous-Paoli | Relationship between environment and resources: impact of shellfish farming on a Mediterranean lagoon (Thau, France)[END_REF][START_REF] Kjerfve | Coastal lagoon processes[END_REF]. Many studies highlight the degradation of aquatic Magnoliophyta in response to these disturbances, especially Cymodocea nodosa (Ucria) Ascherson, Zostera marina Linneaus and Zostera noltii Hornemann (Pe ´rez-Llorens and Niell 1993, [START_REF] Perez | Growth dynamics, production, and nutrient status of the seagrass Cymodocea nodosa in a Mediterranean semi-estuarine environment[END_REF][START_REF] Philippart | Seasonal variation in growth and biomass of an intertidal Zostera noltii stand in the Dutch Wadden Sea[END_REF][START_REF] Auby | Seasonal dynamics of Zostera noltii Hornem. in the bay of Arcachon (France)[END_REF][START_REF] Vermaat | Seasonal variation in the intertidal seagrass Zostera noltii Hornem.: coupling demographic and physiological patterns[END_REF][START_REF] De Casabianca | Impact of shellfish farming eutrophication on benthic macrophyte communities in the Thau lagoon, France[END_REF][START_REF] Sfriso | Seasonal variation in biomass, morphometric parameters and production of seagrasses in the lagoon of Venice[END_REF][START_REF] Laugier | Seasonal dynamics in mixed eelgrass beds, Zostera marina L. and Z. noltii Hornem., in a Mediterranean coastal lagoon (Thau lagoon, France)[END_REF][START_REF] Plus | Factors influencing primary production of seagrass beds (Zostera noltii Hornem.) 
in the Thau lagoon (French Mediterranean coast)[END_REF]). However, very few surveys deal with Ruppia cirrhosa (Petagna) Grande, an aquatic magnoliophyte of widespread occurrence in European coastal lagoon waters [START_REF] Verhoeven | The ecology of Ruppia-dominated communities in western Europe. III. Aspects of production, consumption and decomposition[END_REF][START_REF] Viarioli | Relationship between benthic fluxes and macrophyte cover in a shallow brackish lagoon[END_REF][START_REF] Bachelet | Seasonal changes in macrophyte and macrozoobenthos assemblages in three coastal lagoons under varying degrees of eutrophication[END_REF], especially in Mediterranean lagoons [START_REF] Verhoeven | Ruppia communities in the Camargue, France. Distribution and structure in relation to salinity and salinity fluctuations[END_REF][START_REF] Verhoeven | Structure of macrophyte dominated communities in two brackish lagoons on the island of Corsica, France[END_REF][START_REF] Baroli | Ricerche ecologiche nella laguna di s'ena arrubia (sardegna occidentale), variazioni stagionali della composizione delle principali associazioni vegetali e della biomassa delle specie dominanti[END_REF][START_REF] Pergent-Martini | Localisation et e ´volution des peuplements de phane ´rogames aquatiques de l'e ´tang de Berre (Bouches du Rho ˆne, France)[END_REF][START_REF] Ribera | Phytobenthic assemblages of Addaia Bay (Menerca, western Mediterranean): composition and distribution[END_REF][START_REF] Mene ´ndez | Net production of Ruppia cirrhosa in the Ebro Delta[END_REF][START_REF] Agostini | Distribution and estimation of basal area coverage of subtidal seagrass meadows in a Mediterranean coastal lagoon[END_REF][START_REF] Marzano | Distribution, persistence and change in the macrobenthos of the lagoon of Lesina (Apulia, southern Adriatic Sea)[END_REF]. R. cirrhosa is common in large permanent water bodies, in which it is one of the rare macrophytes that survives and exhibits healthy growth in salinities above 20 psu [START_REF] Verhoeven | The ecology of Ruppia-dominated communities in western Europe. I. Distribution of Ruppia representatives in relation to their autoecology[END_REF]. This climax species has extremely high resistance to variations in environmental conditions. We monitored seasonal variations of this species in two lagoons with different types of anthropogenic stress.
Materials and methods
The two lagoons chosen (Biguglia and Santa Giulia) are located along the eastern coast of Corsica (Mediterranean Sea; Figure 1). The environmental conditions vary between the lagoons: there is greater heterogeneity and human pressure in the Biguglia lagoon than in the Santa Giulia lagoon (Table 1). The Biguglia lagoon has been included in the Ramsar list (wetlands of international importance) since 1990, and has been listed as a natural reserve since 1994 (Decree no. 94-688 of August 9, 1994). It lies parallel to the sea over a distance of about 10 km, and is separated from the sea by a lido (recreational beach) not more than 1 km wide (Figure 1). This mesohaline lagoon [START_REF] Sacchi | Le sel de La Palice: re ´flexion sur le paralin me ´diterrane ´en[END_REF], Table 1) communicates with the sea through channels at the north and south ends. There are strong inputs of fresh water in the northern part of the lagoon (e.g., 23-53=10 6 m 3 y -1 of surface waters; [START_REF] Frisoni | L'e ´tang de Biguglia -diagnostic e ´cologique 1991-1992[END_REF]. In the centre of the lagoon and on the landward shore, the substratum is a fine sediment, rich in organic silt. On the seaward shore, the substratum is more sandy, sometimes with shells [START_REF] Agenc | Etang de Biguglia, Haute Corse. Dossier scientifique, propositions pour la cre ´ation d'une re ´serve naturelle[END_REF]. There is fish farming in the lagoon, with an annual mean production of 180 t [START_REF] Boulmer | Plan de gestion de la re ´serve naturelle de Biguglia[END_REF]. Nutrient concentrations in this lagoon are higher than in most other Mediterranean lagoons (e.g., total nitrogen, ammonium; [START_REF] Orsoni | Caracte ´risation de l'e ´tat d'eutrophisation des trois principaux e ´tangs corses (Biguglia, Diana et Urbino), et proposition de renforcement et leur surveillance[END_REF]).
The Santa Giulia lagoon (Figure 1) is located on an estate owned by the ''Conservatoire de l'espace littoral et des rivages lacustres''. This irregularly shaped lagoon is separated from the sea by a narrow sandy strand exposed to strong erosion since 1986 (Gauthier 1992). It is a mesohaline lagoon [START_REF] Sacchi | Le sel de La Palice: re ´flexion sur le paralin me ´diterrane ´en[END_REF], Table 1) with a temporary channel located to the south of the sandy beach. The bottom is constituted mainly of silty sediments [START_REF] Lorenzoni | Description phytosociologique et cartographique de la ve ´ge ´tation de l'e ´tang de Santa Giulia (Corse du Sud)[END_REF]). This lagoon is subjected to tourist pressures in summer.
Monitoring of the vegetation was carried out using the transect technique [START_REF] Corre | La méthode des transects dans l'étude de la végétation littorale[END_REF], with fixed markers set up in July 1997, and checked regularly thereafter. After preliminary studies, the transects (each 100 m long; Figure 1) were set up in the two lagoons in areas where seagrass beds were representative (coverage, distribution, species; Figure 1). Spatio-temporal changes in seagrasses were investigated by recording the plant populations 1 m on either side of these markers for microcartography (precision 20 cm). Presence/absence, cover and health estimation (dead, alive) of populations were recorded along transects. Cover was estimated in four classes: 0-25, 25-50, 50-75 and 75-100%. In parallel, from July 1997 to July 1999, a systematic sampling was done at each sampling station parallel to the transect (5 replicates, every 20 m). A cylindrical corer (15 cm diameter × 50 cm long) was pushed up to 20 cm into the sediment in monospecific stands (5 replicates), in order to follow seasonal variations of Ruppia cirrhosa. For each core, shoots of R. cirrhosa were counted (density per m²), and the aboveground biomass (foliar shoots) and the belowground biomass (rhizomes and roots) were weighed after oven drying (48 h at 60°C). Organic matter content of the sediment was measured in the surface layer (0-10 cm) by weighing the ash after combustion in a muffle furnace (3 h at 500°C). Also, at the Biguglia lagoon, when presence of Ulvophyceae was observed, its cover and biomass were estimated by sampling the species in a 0.4 m² quadrat near each sampling station of R. cirrhosa.
Comparisons of parameters as a function of site or season were processed by paired t-tests or Kruskal-Wallis tests, coupled with a Student-Newman-Keuls test, in order to detect significant differences [START_REF] Zar | Biostatistical analysis[END_REF]. The relation between two parameters was analysed using simple linear regression [START_REF] Zar | Biostatistical analysis[END_REF]. The software Statgraphics Plus (v 2.1) for Windows was used.
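For readers wishing to reproduce this kind of treatment with open tools, the sketch below applies the same tests to synthetic replicate data; the numbers are invented stand-ins for the field measurements, and the Student-Newman-Keuls post hoc test is omitted because it is not part of SciPy.

```python
import numpy as np
from scipy import stats

# Illustrative sketch of the statistical treatment described above (not the
# authors' original Statgraphics workflow): synthetic shoot-density replicates
# for two sites across sampling dates stand in for the field data.
rng = np.random.default_rng(0)
biguglia = [rng.normal(7000, 1500, 5) for _ in range(4)]      # 4 dates, 5 cores each
santa_giulia = [rng.normal(12000, 2500, 5) for _ in range(4)]

# Seasonal effect within one site: Kruskal-Wallis across sampling dates
h, p_kw = stats.kruskal(*biguglia)

# Site effect: paired t-test on the date-averaged values
t, p_t = stats.ttest_rel([d.mean() for d in biguglia],
                         [d.mean() for d in santa_giulia])

# Relation between two parameters: simple linear regression
biomass = np.array([50.0, 120.0, 300.0, 20.0])
temperature = np.array([12.0, 18.0, 26.0, 10.0])
reg = stats.linregress(temperature, biomass)
print(p_kw, p_t, reg.rvalue)
```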
Results
Vegetation distribution along the transects differed according to season and site. In winter, Ruppia cirrhosa was dormant as short quiescent leaf-bearing stolons, which prevented monitoring the transects. At Biguglia, a healthy continuous and rather dense meadow of R. cirrhosa was observed (about 50 m off shore) in July 1997 and in April 1998, with presence of Ulva specimens (Figure 2). In July, R. cirrhosa was replaced by members of the Ulvophyceae that can, locally, constitute very dense formations that form detached free-floating mats near the bottom, with heights up to 20-30 cm (continuous beds of Ulva species, from 35 m to 75 m from shore; 1324 to 2983 g m -2 ; Figure 2). This continuous meadow of R. cirrhosa disappeared almost entirely during this period, with mortalities of R. cirrhosa and Ulva species. Later, R. cirrhosa appeared again with high cover in spring 1999 and Ulva species decreased drastically (2 to 162 g m -2 ; Figure 2). Seagrass was then much more developed than the previous year and it occupied almost the whole transect. At Santa Giulia in April 1998, a sparse meadow of R. cirrhosa was observed on silt, and this meadow became continuous 45 m off shore (Figure 2). Over the whole study period, the only variations were in meadow cover (Figure 2).
Shoot density of Ruppia cirrhosa meadows varied, on average, from 3316"1112 shoots m -2 (Biguglia, July 1998) to 16401"3657 shoots m -2 (Santa Giulia, October 1997; Figure 3A). Density variations occurred over the study period at both sites (Kruskal-Wallis test, p-0.05). Maximal densities for Biguglia were recorded in April 1998 and for Santa Giulia in July 1997 (SNK test, p-0.05). However, no seasonal pattern was observed. For each site, the average density was higher in the first year (July 1997-July 1998) than in the second year (July 1998-July 1999; SNK test, p-0.05). Comparison between sites shows that R. cirrhosa densities were generally significantly higher in the Santa Giulia lagoon (paired t-test, p-0.05; Figure 3A).
Ruppia cirrhosa aboveground biomass varied from 8"4 g DW m -2 (Biguglia, October 1998) to 343"70 g DW m -2 (Biguglia, July 1999; Figure 3B). For the two sites studied, significant seasonal variations were observed (Kruskal-Wallis test, p-0.05), with an increase of biomass in spring and summer, and a decrease in autumn and winter (SNK test, p-0.05). Significant differences between years were observed in April (higher in 1998) and July (higher in 1997 and 1999) for the two sites studied. Comparison of the two sites shows significant variations with a greater aboveground biomass at Santa Giulia than at Biguglia, except in July 1997 and 1999 when the reverse pattern was observed (paired t-test, p-0.05).
The belowground biomass varied from 4"2 g DW m -2 (Biguglia, October 1998) to 87"15 g DW m -2 (Santa Giulia, October 1997; Figure 3C). The variations observed for the study period were significant (Kruskal-Wallis test, p-0.05), even though no seasonal pattern was observed. Biomass was higher in the period July 1997-April 1998 than in the period July 1998-July 1999. In the same way, no significant difference was recorded between the two sites (paired t-test, p)0.05). The aboveground: belowground biomass ratio (AB:BB ratio, Figure 3D) confirms the seasonal trends observed for both lagoons, but highlights an important difference in biomass allocation in July 1998 between the sites. The AB:BB ratio was significantly higher in Santa Giulia (26.21) than in Biguglia (11.39).
Organic matter content in the Ruppia cirrhosa meadows sediment varied from 1.6% (Biguglia, January 1998) to 4.6% (Santa Giulia, July 1999; Figure 4) and differed among sampling times. At Santa Giulia, the organic matter content increased significantly from the beginning of 1998 to the end of the study; thus it was higher in the second year than in the first year (Kruskal-Wallis and SNK tests, p-0.05). On the other hand, at Biguglia the organic matter content remained steady over the whole period of study, except in April 1998 which showed a significantly higher content than the other months (Kruskal-Wallis and SNK tests, p-0.01). Statistical analysis of the results shows that, except in April 1998, the organic matter content was lower in Biguglia than in Santa Giulia (paired t-test, p-0.05). No correlation between the organic matter content in the sediment and the total biomass or the belowground biomass was observed (correlation test, 0.07-r-0.37; p)0.05).
Discussion
Our results on density, aboveground and belowground biomasses are concordant with those reported in the literature [START_REF] Verhoeven | The ecology of Ruppia-dominated communities in western Europe. III. Aspects of production, consumption and decomposition[END_REF][START_REF] Bachelet | Seasonal changes in macrophyte and macrozoobenthos assemblages in three coastal lagoons under varying degrees of eutrophication[END_REF], Calado and Duarte 2000[START_REF] Azzoni | Iron, sulphur and phosphorus cycling in the rhizosphere sediments of a eutrophic Ruppia cirrhosa meadow (Valle Smarlacca, Italy)[END_REF][START_REF] Mene ´ndez | Net production of Ruppia cirrhosa in the Ebro Delta[END_REF]. In particular, the total biomass values at both sites (Biguglia and Urbino) are higher than values observed in the Ebro Delta (Spain; Mene ´ndez 2002) and comparable with those recorded for the Santo Andre ´lagoon (Portugal;Calado and Duarte 2000).
The biotic parameters of the two populations of Ruppia cirrhosa showed seasonal variations. Higher aboveground biomass (attributable to larger leaves) in summer is well known for this species [START_REF] Verhoeven | The ecology of Ruppia-dominated communities in western Europe. III. Aspects of production, consumption and decomposition[END_REF][START_REF] Bachelet | Seasonal changes in macrophyte and macrozoobenthos assemblages in three coastal lagoons under varying degrees of eutrophication[END_REF][START_REF] Azzoni | Iron, sulphur and phosphorus cycling in the rhizosphere sediments of a eutrophic Ruppia cirrhosa meadow (Valle Smarlacca, Italy)[END_REF][START_REF] Mene ´ndez | Net production of Ruppia cirrhosa in the Ebro Delta[END_REF]. These variations in biomass of magnoliophytes seem to be linked to the strong variations in temperature and light over the year [START_REF] Duarte | Temporal biomass variability and production/ biomass relationships of seagrass communities[END_REF], Perez-Llorens and Niell, 1993[START_REF] Laugier | Seasonal dynamics in mixed eelgrass beds, Zostera marina L. and Z. noltii Hornem., in a Mediterranean coastal lagoon (Thau lagoon, France)[END_REF]. Biguglia and Santa Giulia lagoons have strong variations in temperature (Figure 5A), and a significant correlation between aboveground biomass and temperature was observed (rs0.69, p-0.05).
Comparison between the two populations of Ruppia cirrhosa (Biguglia and Santa Giulia) reveals the following differences: (i) a smaller density and aboveground biomass at Biguglia and (ii) more variation in cover at Biguglia. These results could be related to differences in nutrient availability due to inputs of nutrients in each lagoon (Figure 5B-E). The correlation matrix shows a significant correlation between aboveground biomass and ammonium, nitrates (correlation test, rs-0.57 and -0.54, respectively, p-0.05), and between density and phosphates (correlation test rs0.52, p-0.05). Important https://payment.sips-atos.com/cgis-payment/prod/callresource?rsc=creditcard 1/1 inputs in nutrients could cause negative indirect effects on the magnoliophytes through proliferation of plankton or macroalgal blooms [START_REF] Sfriso | Temporal and spatial changes of macroalgae and phytoplankton in a Mediterranean coastal area: the Venice lagoon as a case study[END_REF]. In Biguglia, observations of the green macroalgal bloom (Ulva species) in spring and summer 1998 reveal environmental modifications in the sampling site with a probable increase in nutrients in the water column, and an increase in organic matter in the sediment which could be related to significant rainfall at this site in March and April 1998 (compared to Santa Giulia and to 1999). These modifications seem to allow the development of macroalgae and to limit magnoliophyte development at this location. Macroalgae, such as members of the Ulvophyceae, are able to absorb large quantities of nutrients, eventually producing sudden blooms. These species are primarily nitrogen-limited and stimulated by the loading provided by waste discharge (Giusti and Marsilli-Libelli 2005). These data seem to describe a dystrophic crisis in the Biguglia lagoon in spring-summer 1998, as described by [START_REF] Viaroli | Macrophyte communities and their impact on benthic fluxes of oxygen, sulphide and nutrients in shallow eutrophic environments[END_REF]. These crises are often caused by the combination of nutrient input, high temperature, high oxygen consumption, long periods of calm winds and subsequent reduction in water exchange [START_REF] Orsoni | Caracte ´risation de l'e ´tat d'eutrophisation des trois principaux e ´tangs corses (Biguglia, Diana et Urbino), et proposition de renforcement et leur surveillance[END_REF], which generate a proliferation of opportunistic macroalgae [START_REF] Bachelet | Seasonal changes in macrophyte and macrozoobenthos assemblages in three coastal lagoons under varying degrees of eutrophication[END_REF]. This proliferation leads to a decrease in light intensity received by the magnoliophytes (Da Silva and Asmus 2001). The Biguglia lagoon is eutrophic because of agricultural runoff and wastewater discharge. The environmental modifications observed at the sampling site of this lagoon during spring-summer 1998 were confirmed by the mortality of R. cirrhosa. Algal decomposition may also generate an anoxic phenomenon harmful to the development of the magnoliophytes [START_REF] Giusti | Modelling the interactions between nutrients and the submersed vegetation in the Orbetello Lagoon[END_REF]. Light attenuation and decomposition of macroalgae following collapse of the bloom have strong effects on R. cirrhosa meadows (drop in biomass and densities). The phenomenon of macroalgal proliferation, which led to the regression of R. cirrhosa, has been observed in other Mediterranean lagoons (Viaroli et al. 1996, Bachelet et al. 
2000, [START_REF] Mistri | Variability in macrobenthos communities in the Valli di Comacchio, Northern Italy, a hypereutrophized lagoonal ecosystem[END_REF]. Indeed, the differing physiological requirements of the algae and magnoliophytes tend to produce a mutual exclusion [START_REF] Giusti | Modelling the interactions between nutrients and the submersed vegetation in the Orbetello Lagoon[END_REF]. This phenomenon was reported for other magnoliophytes (e.g., Zostera marina; [START_REF] Coffaro | Resources competition between Ulva rigida and Zostera marina: a quantitative approach applied to the Lagoon of Venice[END_REF].
Comparison of the AB:BB ratio between the sites (Figure 3D) in July 1998 highlights the fact that R. cirrhosa meadows at Biguglia are not able to develop as much aboveground tissue as the populations of Santa Giulia. Indeed, the presence of high densities of macroalgae in Biguglia leads R. cirrhosa to invest its biomass in subterranean tissues at the expense of photosynthetic tissues. These density and biomass results could also be related to differences in nutrient availability in the sediment. The high concentrations of organic matter at the sampling site in Santa Giulia lagoon seem to constitute an important source of nutrients, which may improve Ruppia cirrhosa growth. Sediment composition appears to have a primary influence on establishment and development of Ruppia meadows (Giusti and Marsili-Libelli 2005). The amount of nutrients being supplied by organic matter is a positive factor supporting plant growth, as has already been shown for Zostera marina [START_REF] Kenworthy | The use of fertilizer to enhance growth and transplanted seagrass Zostera marina L. and Halodule wrightii Aschers[END_REF][START_REF] Van Lent | Comparative study on populations of Zostera marina L. (seagrass): in situ nitrogen enrichment and light manipulation[END_REF][START_REF] Peralta | On the use of sediment fertilization for seagrass restoration: a mesocosm study on Zostera marina L[END_REF]. These organic matter concentrations must nevertheless be considered with care, because an excess of organic matter can cause anoxic events in the sediment by mineralisation, and release of hydrogen sulphide with consequent negative effects on plants [START_REF] Tagliapietra | Macrobenthic community changes related to eutrophication in Palude della Rosa (Venetian Lagoon, Italy)[END_REF].
Annual comparisons (1997-1998 and 1998-1999) for both sites (Biguglia and Santa Giulia) show higher values in 1997-1998 for the whole set of parameters measured, together with the development of macroalgae in the sampling site of Biguglia. The available rainfall data (Météo France® data; Figure 5F) indicate that both lagoons benefited from higher inputs of fresh water, which came from their catchment areas in 1997-1998. This could have led to a better mixing of the water and to an enrichment in nutrients able to support the development of Ruppia cirrhosa plants (high density and biomass values for that period) and, in some cases, of macroalgae.
The presence of anthropogenic stresses causing variations in environmental parameters in some Corsican lagoons affects the functioning of Ruppia cirrhosa meadows. Even if this species shows a high resistance to variations in environmental conditions and a capacity for resilience (sensu [START_REF] Connell | On the evidence needed to judge ecological stability or persistence[END_REF]), nutrient input by human activities could jeopardise the future survival of the meadows by facilitating the development of other, more opportunistic species.
Figure 1 Ruppia cirrhosa: location of Biguglia and Santa Giulia lagoons in Corsica (France) and sampling sites.
Figure 3 Ruppia cirrhosa: seasonal variations of density (A), aboveground biomass (B), belowground biomass (C), and ratio aboveground:belowground biomass (D) in Biguglia and Santa Giulia lagoons (mean and 95% confidence interval).
Figure 4 Ruppia cirrhosa: seasonal variations of sediment organic matter in Biguglia and Santa Giulia lagoons (mean and 95% confidence interval).
Figure 5 Ruppia cirrhosa: seasonal variations of water temperature (A), phosphates (B), ammonium (C), nitrites (D), nitrates (E), and rainfall (F) in Biguglia and Santa Giulia lagoons. Data for water temperature, phosphates, ammonium, nitrites and nitrates were obtained from September 1998 to September 1999 for Biguglia lagoon [START_REF] Orsoni | Caractérisation de l'état d'eutrophisation des trois principaux étangs corses (Biguglia, Diana et Urbino), et proposition de renforcement et leur surveillance[END_REF] and from September 1994 to September 1995 for Santa Giulia lagoon [START_REF] Canovas | Diagnostic hydrologique et hydrobiologique de l'étang de Palo, Corse. Rapport IARE[END_REF]. The data for Santa Giulia come from a nearby lagoon with similar characteristics (personal com. G.F. Frisoni). Rainfall data were collected between July 1997 and August 1999 (Météo France®) for the two lagoons.
Table 1 Ruppia cirrhosa: main characteristics of the Biguglia and Santa Giulia lagoons.
(values given as Biguglia; Santa Giulia)
Abiotic parameters
Geographical coordinates: 42°37′ N, 9°27′ E; 41°31′ N, 9°15′ E
Surface area (ha): 1500; 26
Maximum depth (m): 1.8; 1.0
Mean depth (m): 1.5; 0.5
Catchment area (km²): 180; 15.5
Water residence time (month): 1-2; 1
Temperature (°C): 5-26; 12-25
Salinity (psu): 4-26; 12-37
Human pressure sources
Population of catchment area (inhabitants): 18 528; 712
Industrial and touristic activities in the catchment area: effluent, wine making, airport, oil tanks, boilerworks, hotels and restaurants, transport; none
Agricultural activities in the catchment area: orchards, vineyards, pumpkins and squashes, stock rearing; none
Activities at the lido: built-up area, holiday village, hotels and restaurants; holiday village, hotels and restaurants
Fishing and aquaculture: fishing, 180 t/y of fish; none
(Data from Pergent-Martini et al. 1997, Orsoni et al. 2001).
Acknowledgements
This work was carried out under, and funded by, the Programme National d'Océanologie Côtière (PNOC) in partnership with IFREMER and the French Ministry of the Environment, for the valorisation of Mediterranean lagoons. Some of the results were acquired from the European Life and Interreg programmes. The authors would like to thank O. Dumay, J.E. Tomaszewski and C. Segui for their participation in field missions and M. Paul for help with the English translation. The authors also greatly appreciate comments provided by the editor and the reviewers.
| 30,348 | [
"751224",
"909211",
"8002",
"18874"
] | [
"834"
] |
01769282 | en | [
"math"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01769282/file/Chaumont-Malecki.pdf | Loïc Chaumont
email: [email protected]
Jacek Malecki
email: [email protected]
Jacek Małecki
Short proofs in extrema of spectrally one sided Lévy processes
Keywords: Cyclically interchangeable process, spectrally one sided Lévy process, Ballot theorem, Kendall's identity, past supremum, bridge
Introduction
The series of notes from J. Bertrand [START_REF] Bertrand | Solution d'un problème[END_REF], É. Barbier [START_REF] Barbier | Généralisation du problème résolu par M. J. Bertrand[END_REF] and D. André [START_REF] André | Solution directe du problème résolu par M. J. Bertrand[END_REF] which appeared in 1887 has inspired an extensive literature on the famous ballot theorem for discrete and continuous time processes. In the same year, the initial question raised by J. Bertrand was related by himself to the ruin problem. Using modern formalism, it can be stated in terms of the simple random walk (S n ) n≥0 as follows:
P(T_k = n | S_n = -k) = k/n, k, n ≥ 1, (1.1)
where T k = inf{j : S j = -k}. The first substantial extension was obtained in 1962 by L. Takács [START_REF] Takács | A generalization of the ballot problem and its application in the theory of queues[END_REF] who proved that identity (1.1) is actually satisfied if (S n ) n≥0 is any downward skip free sequence with interchangeable increments such that S 0 = 0. Then the same author considered this question in continuous time and proved the identity
P(T_x = t | X_t = -x) = x/t, x, t > 0, (1.2)
where (X s , 0 ≤ s ≤ t) = (Y s -s, 0 ≤ s ≤ t) and (Y s , 0 ≤ s ≤ t) is an increasing continuous time stochastic process with cyclically interchangeable increments and T x = inf{s : X s = -x}, see [START_REF] Takács | On combinatorial methods in the theory of stochastic processes[END_REF]. The first step of this note is to provide a short and elementary proof of a more general result than identity (1.2) which also applies to processes with negative jumps.
Identity (1.2) cannot be extended to all continuous time processes with cyclically interchangeable increments. A problem appears when the process has unbounded variation. In particular, if (X s , 0 ≤ s ≤ t) is a spectrally positive Lévy process with unbounded variation, then we can check that P(T x = t | X t = -x) = 0. However, by considering the process on the whole half line, it is still possible to compare the measures P(T x ∈ dt) dx and P(-X t ∈ dx) dt on (0, ∞) 2 in order to obtain the following analogous result:
P(T_x ∈ dt) dx = (x/t) P(-X_t ∈ dx) dt. (1.3)
Identity (1.3) was first obtained by D. Kendall [START_REF] Kendall | Some problems in the theory of dams[END_REF] in the particular case of compound Poisson processes. It has later been extended by J. Keilson in [START_REF] Keilson | The first passage time density for homogeneous skip-free walks on the continuum[END_REF] and A.A. Borovkov in [START_REF] Borovkov | On the first passage time for a class of processes with independent increments[END_REF] to all spectrally positive Lévy processes. Since then several proofs have been given using fluctuation identities, chap. VII of J. Bertoin's book [START_REF] Bertoin | Lévy Processes[END_REF] or martingale identities and change of measures, K. Borovkov and Z. Burq [START_REF] Borovkov | Kendall's identity for the first crossing time revisited[END_REF]. We shall see in the next section that identity (1.3) can actually be obtained as a direct consequence of (1.2) for Lévy processes with bounded variation and extended to the general case in a direct way.
These results on first passage times will naturally lead us in Section 3 to the law of the past supremum X t of spectrally positive Lévy processes. In a recent work, Z. Michna, Z. Palmowski and M. Pistorius [START_REF] Michna | The distribution of the supremum for spectrally asymmetric Lévy processes[END_REF] obtained the identity
P(sup_{s≤t} X_s > x, X_t ∈ dz) = ∫_0^t ((x-z)/s) p_s(z-x) p_{t-s}(x) ds dz, x > z, (1.4)
where p t (x) is the density of X t . As in [START_REF] Michna | The distribution of the supremum for spectrally asymmetric Lévy processes[END_REF], our proof of identity (1.4) is based on an application of Kendall's identity. However, we show in Theorem 3 that a quite simple computation involving the law of the bridge of the Lévy process allows us to provide a very short proof of (1.4). It is first obtained for the dual process -X and then derived for X from the time reversal property of Lévy processes. As a consequence of this result, we obtain in Corollary 4 an integro-differential equation characterizing the entrance law of the excursion measure of the Lévy process X reflected at its infimum.
Continuous ballot theorem and Kendall's identity
Let D = D([0, ∞)) and for t > 0 let D t = D([0, t]) be the spaces of càdlàg functions defined on [0, ∞) and [0, t], respectively. Denote by X the canonical process of the coordinates, i.e. for all ω ∈ D and s ≥ 0 or for all ω ∈ D t and s ∈ [0, t], X s (ω) = ω(s).
The spaces D and D t are endowed with their Borel sigma fields F and F t , respectively. For any u ∈ [0, t], we define the family of transformations θ u : D t → D t , as follows:
θ_u(ω)_s = ω(0) + ω(s + u) - ω(u), if s < t - u, and θ_u(ω)_s = ω(s - (t - u)) + ω(t) - ω(u), if t - u ≤ s ≤ t. (2.1)
The transformation θ u consists in inverting the paths {ω(s), 0 ≤ s ≤ u} and {ω(s), u ≤ s ≤ t} in such a way that the new path θ u (ω) has the same values as ω at times 0 and 1, i.e. θ u (ω)(0) = ω(0) and θ u (ω)(t) = ω(t). We call θ u the shift at time u over the interval [0,t], see the picture below.
A path ω of D t on the left and the shifted path θ u (ω) on the right.
We say that the process X = (X s , 0 ≤ s ≤ t) has cyclically interchangeable increments under some probability measure P on
(D t , F t ) if θ u (X) (d) = X , for all u ∈ [0, t]. (2.2)
The process (X, P) will be called a CEI process on [0, t]. Let us note that Lévy processes are CEI processes. We define the past supremum and the past infimum of X before time s ≥ 0 by X s = sup u≤s X u and X s = inf u≤s X u , this definition being valid for all s ∈ [0, t] on D t and for all s ≥ 0 on D. For a stochastic process Z defined on D or D t , and x > 0, we define the first passage time at -x by Z,
T x (Z) = inf{s : Z s = -x} ,
with the convention that inf ∅ = ∞. For the canonical process, we will often simplify this notation by setting T x := T x (X).
Here is an extension of Theorem 3 in [START_REF] Takács | On combinatorial methods in the theory of stochastic processes[END_REF], which is known as the continuous time Ballot theorem.
Theorem 1. Let t > 0 and (X, P) be a CEI process on [0, t] such that X 0 = 0 and X t = -x < 0, a.s., then
P(T_x = t) = (1/t) E(λ(E_{t,x})), (2.3)
where λ is the Lebesgue measure on R and E t,x is the random set
E_{t,x} = {s ∈ [0, t] : X_s = inf_{u≤s} X_u and X_s ∈ [X_t, X_t + x)}.
In particular if X is of the form X s = Y s -cs, where Y is a pure jump, non-decreasing CEI process and c is some positive constant, then
P(T_x = t) = x/(ct). (2.4)
Proof. First observe that for all u ∈ [0, t], T_x(θ_u(X)) = t if and only if X_u = inf_{v≤u} X_v and X_u ∈ [X_t, X_t + x). (2.5)
This fact is readily seen on the graph of X, see for instance the picture above. Then let U be a uniformly distributed random variable on [0, t] which is independent of X under P. The CEI property immediately yields that under P,
θ U (X) (d) = X . (2.6)
From (2.5), we obtain {T x (θ U (X)) = t} = {U ∈ E t,x } and from (2.6), we derive the equalities,
P(T_x(X) = t) = P(T_x(θ_U(X)) = t) = P(U ∈ E_{t,x}) = (1/t) E(λ(E_{t,x})).
If X s = Y s -cs, for a pure jump non-decreasing process Y and a constant c > 0, then X has bounded variation and for all t ≥ 0,
X t = t 0 1I {Xu=X u } dX u = u≤t 1I {Xu=X u } (Y u -Y u-) -c t 0 1I {Xu=X u } du.
But since X is càdlàg with no negative jumps, if u is such that
X u = X u , then Y u = Y u-.
Therefore X t = -c t 0 1I {Xu=X u } du, so that on the set {s ∈ [0, t] : X s = X s }, the Lebesgue measure satisfies λ(ds) = ds = -c -1 dX s , and in particular λ(E t,x ) = x/c, a.s. Note that in [START_REF] Takács | On combinatorial methods in the theory of stochastic processes[END_REF], Theorem 1 has been proved for separable processes of the form X s = Y s -s, where Y is a pure jump non decreasing CEI process. Separability implies that the past infimum of the process is measurable and this property can be considered as the minimal assumption for a CEI process to satisfy the ballot theorem. Our proof would still apply, up to slight changes, under this more general assumption. However, since this paper is mainly concerned with Lévy processes, we have chosen the more classical framework of càdlàg processes in which they are usually defined.
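As a purely illustrative check of (2.4), one can simulate the simplest bounded variation case: a compound Poisson process with unit jumps and drift -c, conditioned on its number of jumps at time t, is a CEI process of the required form. The Python sketch below (with arbitrarily chosen t, c and n; the variable names are ours and not part of the text) estimates P(T_x = t | X_t = -x) by Monte Carlo and compares it with x/(ct).

```python
import numpy as np

rng = np.random.default_rng(0)

t, c, n = 1.0, 10.0, 6           # horizon t, drift c, conditioned jump count n < c*t
x = c * t - n                     # terminal level: X_t = n - c*t = -x
n_sim = 200_000

hits_at_t = 0
for _ in range(n_sim):
    tau = np.sort(rng.uniform(0.0, t, size=n))   # jump times given N_t = n are uniform order statistics
    i = np.arange(1, n + 1)
    # X_s = N_s - c*s stays strictly above -x on [0, t) iff, just before every jump,
    # (i-1) - c*tau_i > n - c*t, i.e. tau_i < t - (n - i + 1)/c
    if np.all(tau < t - (n - i + 1) / c):
        hits_at_t += 1

print("Monte Carlo  P(T_x = t | X_t = -x) ~", hits_at_t / n_sim)
print("Ballot value x/(c t)               =", x / (c * t))
```

With the values above the ballot value is 0.4, and the Monte Carlo estimate matches it up to sampling error.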
Let us now focus on two applications of identity (2.3):
1. Let U be a uniformly distributed random variable on [0, t] which is independent of the canonical process X under some probability measure P on (D t , F t ). Let P be the law of θ U (X) under P . Then one easily proves that (X, P) is a CEI process on [0, t]. It is also straightforward to show that for all u ∈ [0, t],
E t,x • θ u = E t,x + t -u, mod (t) , (2.7)
and (2.7) implies that λ(E t,x ) = λ(E t,x • θ u ). It follows from these two observations and (2.3) that
P (T x = t) = P(T x = t) ,
which allows us to provide many examples of CEI processes (X, P) such that P(T x = t) is explicit. Suppose for instance that under P , the canonical process is a.s. equal to the deterministic function
ω(s) = s/2 if 0 ≤ s < t/4, -s/3 if t/4 ≤ s < t/2, -(3t/4 - s)/3 if t/2 ≤ s < 3t/4, (2t - s)/2 if 3t/4 ≤ s ≤ t,
and set ω(t) = inf s≤t ω(s), then
λ({s ∈ [0, t] : ω_s = inf_{u≤s} ω_u and ω_s ∈ [ω(t), ω(t) + x)}) = t/4,
so that from (2.3),
P(T_x = t) = 1/4.
2. For our second application, we assume that (X, P) is the bridge with length t of a Lévy process from 0 to -x < 0 and we set X = -X. Then the process ( X, P) is the bridge of the dual Lévy process from 0 to x. By the time reversal property of Lévy processes,
( X, P) = ((x + X (t-s)-, 0 ≤ s ≤ t), P),
where we set X 0-= X 0 . Hence P(T x = t) = P(inf 0≤s≤t X s ≥ 0) and from (2.3),
P( inf 0≤s≤t X s ≥ 0) = P( sup 0≤s≤t X s ≤ 0) = 1 t E(λ(E t,x )).
Integrating this equality over x with respect to the law P(X t ∈ dx), this shows that, for the Lévy process, sup 0≤s≤t X s ≤ 0 with positive probability if and only if the set {s : X s = X s } has positive Lebesgue measure. Recall that the downward ladder time process is the inverse of the local time defined on this set. Then we have recovered the well-know fact that for a Lévy process, 0 is not regular for (0, ∞) if and only if the downward ladder time process has positive drift, see [START_REF] Doney | Fluctuation theory for Lévy processes[END_REF]. Note that when X has no negative jumps, this is also equivalent to the fact that it has bounded variation.
From now on we will consider stochastic processes defined on the whole positive half line. In particular, X is now the canonical process of D. We shall see in the proof of the following theorem that Kendall's identity is a direct consequence of the Ballot theorem.
Theorem 2. Let (X, P) be a spectrally positive Lévy process such that P(X 0 = 0) = 1. If (X, P) is not a subordinator, then the following identity between measures:
P(T_x ∈ dt) dx = (x/t) P(-X_t ∈ dx) dt (2.8) holds on (0, ∞)².
Proof. Assume first that X has bounded variation, that is X t = Y t -ct, where Y is a subordinator with no drift and c > 0 is a constant. Let f and g be any two Borel positive functions defined on R. It follows directly from (2.4) by conditioning on
X t that E(1I {Xt=X t } f (X t )) = -E Xt ct f (X t )1I {Xt≤0} , so that E ∞ 0 g(t)1I {Xt=X t } f (X t ) dt = - ∞ 0 g(t)E X t f (X t )1I {Xt≤0} dt ct .
(2.9)
Recall from the end of the proof of Theorem 1 (applied to processes defined on [0, ∞)) that dt = -c -1 dX t on the set {t : X t = X t }, so that from the change of variables t = T x ,
E ∞ 0 g(t)1I {Xt=X t } f (X t ) dt = -E ∞ 0 g(t)f (X t ) c -1 dX t = ∞ 0 E g(T x )f (-x)1I {Tx<∞} dx c .
(2.10) Then (2.8) follows by comparing the right hand sides of (2.9) and (2.10). Now if X has unbounded variation and Laplace exponent
ϕ(λ) := log E(e^{-λX_1}) = -aλ + σ²λ²/2 + ∫_{(0,∞)} (e^{-λx} - 1 + λx 1I_{x<1}) π(dx), λ > 0,
then the spectrally positive Lévy process X (n) with Laplace exponent
ϕ_n(λ) := log E(e^{-λX^{(n)}_1}) = -aλ + σ²(λ√n + n(e^{-λ/√n} - 1)) + ∫_{(1/n,∞)} (e^{-λx} - 1 + λx 1I_{x<1}) π(dx)
has bounded variation and the sequence X^{(n)}_t, n ≥ 1, converges weakly toward X_t, for all t. Recall that ϕ and ϕ_n are strictly convex functions. Then let ρ and ρ_n be the largest roots of ϕ(s) = 0 and ϕ_n(s) = 0, respectively. Since X and X^{(n)} are not subordinators, ρ and ρ_n are finite and ρ_n tends to ρ as n → ∞. The first passage time T^{(n)}_x by X^{(n)} at -x has Laplace exponent ϕ_n^{-1}, where ϕ_n^{-1} is the inverse of ϕ_n on [ρ_n, ∞), see chap. VII in [START_REF] Bertoin | Lévy Processes[END_REF]. From these arguments, ϕ_n^{-1} converges toward the Laplace exponent ϕ^{-1} of T_x, so that T^{(n)}_x converges weakly toward T_x, for all x > 0. Since X^{(n)} satisfies identity (2.8) for each n ≥ 1, so does X.
Note that if X = (X s , s ≥ 0) is a stochastic process such that X = (X s , 0 ≤ s ≤ t) is a CEI process for all t > 0, then it has actually interchangeable increments, that is for all t > 0, n ≥ 1 and for all permutation σ of the set {1, . . . , n},
(X kt/n -X (k-1)t/n , k = 1, . . . , n) (d) = (X σ(k)t/n -X (σ(k)-1)t/n , k = 1, . . . , n) .
A canonical representation for these processes has been given in Theorem 3.1 of [START_REF] Kallenberg | Canonical representations and convergence criteria for processes with interchangeable increments[END_REF]. In particular, conditionally on the tail σ-field G = ∩ t≥0 {X s : s ≥ t}, the process X is a Lévy process. By performing again the proof of Theorem 2 under the conditional probability P( • | G) we show that (2.8) is actually valid for all processes with interchangeable increments and no negative jumps.
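For a concrete numerical check of (2.8), one can take X_t = σB_t - ct, a Brownian motion with negative drift, which is a (jump-free) spectrally positive Lévy process that is not a subordinator; both sides of Kendall's identity are then explicit, since the law of T_x is inverse Gaussian and p_t is Gaussian. The following sketch is our own illustration, with arbitrary parameters c, σ and x.

```python
import numpy as np

def p_X(t, y, c=0.5, sigma=1.0):
    """Density of X_t = sigma*B_t - c*t (Gaussian, mean -c*t, variance sigma^2*t)."""
    return np.exp(-(y + c * t) ** 2 / (2 * sigma ** 2 * t)) / (sigma * np.sqrt(2 * np.pi * t))

def first_passage_density(t, x, c=0.5, sigma=1.0):
    """Inverse-Gaussian density of T_x = inf{s : X_s = -x}, x > 0."""
    return x / (sigma * np.sqrt(2 * np.pi * t ** 3)) * np.exp(-(x - c * t) ** 2 / (2 * sigma ** 2 * t))

ts = np.linspace(0.1, 5.0, 50)
x = 1.3
lhs = first_passage_density(ts, x)      # density of P(T_x in dt)
rhs = (x / ts) * p_X(ts, -x)            # (x/t) times the density of X_t at -x
print("max |lhs - rhs| =", np.abs(lhs - rhs).max())   # numerically zero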
The law of the extrema of spectrally one sided Lévy processes
Throughout this section we are assuming that, (i) the process (X, P) is a spectrally positive Lévy process which is not a subordinator and such that P(X 0 = 0) = 1.
(ii) For all t > 0, the law p t (dx) of X t is absolutely continuous with respect to the Lebesgue measure. We shall denote by p t (x) its density.
We recall that under assumption (i), 0 is always regular for (-∞, 0) and that 0 is regular for (0, ∞) if and only if X has unbounded variation, see Corollary 5 in Chap. VII of [START_REF] Bertoin | Lévy Processes[END_REF].
Let us also briefly recall the definition of bridges of Lévy processes. The law P t y of the bridge from 0 to y ∈ R, with length t > 0 of the Lévy process (X, P) is a regular version of the conditional law of (X s , 0 ≤ s ≤ t) given X t = y, under P. It satisfies P t y (X 0 = 0, X t = y) = 1 and for all s < t, this law is absolutely continuous with respect to P on F s , with density p t-s (y -X s )/p t (y), i.e.
P^t_y(Λ) = E[1I_Λ p_{t-s}(y - X_s)/p_t(y)], for all Λ ∈ F_s. (3.1)
Note that from Theorem (3.3) in [START_REF] Sharpe | Zeroes of infinitely divisible densities[END_REF], p t (y) > 0, for all t > 0 and y ∈ R if and only if for all c ≥ 0, the process (|X t -ct|, t ≥ 0) is not a subordinator. But from assumptions (i) and (ii), the later condition is always satisfied in our framework.
Formula (3.3) below was proved in Theorem 2.4 in [START_REF] Michna | The distribution of the supremum for spectrally asymmetric Lévy processes[END_REF], see also [START_REF] Michna | Formula for the supremum distribution of a spectrally positive α-stable Lévy process[END_REF] and Theorem 12 in [START_REF] Kortchemski | Sub-exponential tail bounds for conditioned stable Bienaymé-Galton-Watson trees[END_REF] for the stable case. Here we first prove an analogous formula for the dual process in (3.2) from which (3.3) is immediately derived.
Theorem 3. The laws of (inf_{s≤t} X_s, X_t) and (sup_{s≤t} X_s, X_t) admit the following expressions,
P(inf_{s≤t} X_s < -x, X_t ∈ dz) = ∫_0^t (x/s) p_s(-x) p_{t-s}(x + z) ds dz, -x ≤ z, x > 0, (3.2)
P(sup_{s≤t} X_s > x, X_t ∈ dz) = ∫_0^t ((x-z)/s) p_s(z-x) p_{t-s}(x) ds dz, x > z, x ≥ 0. (3.3)
The process (X, P) has bounded variation if and only if for all t ≥ 0, P(sup_{s≤t} X_s = 0) > 0 and P(X_t = inf_{s≤t} X_s) > 0. In this case, the expressions (3.2) and (3.3) can be completed by the following one,
P(sup_{s≤t} X_s = 0, X_t ∈ dz) = P(X_t = inf_{s≤t} X_s ∈ dz) = -(z/(ct)) p_t(z) dz, z < 0, (3.4)
where -c is the drift of X.
Proof. From (3.1) applied at the stopping time T -x = inf{s : X s = -x}, we obtain
P t z (X t < -x) = P t z (T -x < t) = E 1I {T -x <t} p t-T -x (z -X T -x ) p t (z) = E 1I {T -x <t} p t-T -x (x + z) p t (z) ,
where in the third equality we used the fact that X has no negative jumps. Recalling the definition of the law P t z , we derive from the above equality that
P(X t < -x, X t ∈ dz) = E(1I {T -x <t} p t-T -x (x + z)) dz = t 0 P(T -x ∈ ds)p t-s (x + z) dz .
Then (3.2) is obtained by plunging Kendall's identity (2.8) in the right hand side of the above equality. Identity (3.3) follows by replacing x by x-z in (3.2) and by applying the time reversal property of Lévy processes, that is under P,
(X s , 0 ≤ s < t) (d) = (X t -X (t-s)-, 0 ≤ s < t) . (3.5)
If (X, P) has bounded variation, then 0 is not regular for the half line (0, ∞), so that for all t ≥ 0, P(X t = 0) > 0 and P(X t = X t ) > 0, where the second inequality follows from the time reversal property (3.5). Then (3.4) follows directly from (2.9).
We can derive from Theorem 3 a series of immediate corollaries. First we obtain the distribution functions of X t and X t by integrating identity (3.2), (3.3) and (3.4) over z.
Corollary 1. For all t ≥ 0 and x > 0,
P(inf_{s≤t} X_s < -x) = ∫_0^t x P(X_{t-s} > 0) p_s(-x) ds/s + P(X_t < -x), (3.6)
P(sup_{s≤t} X_s > x) = ∫_0^t E(X^-_s) p_{t-s}(x) ds/s + P(X_t > x). (3.7)
If X has bounded variation with drift -c, then for all t > 0,
P(sup_{s≤t} X_s = 0) = -E(X_t 1I_{X_t≤0})/(ct). (3.8)
Note that we can derive from (2.8) the following simpler expression for the distribution function of the past infimum:
P(inf_{s≤t} X_s < -x) = ∫_0^t x p_s(-x) ds/s. (3.9)
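Formula (3.9) can also be checked numerically in the Brownian case, where the left hand side is given by the reflection principle. The sketch below is an illustration of ours (not part of the original argument): it evaluates the right hand side of (3.9) by quadrature for a standard Brownian motion and compares it with 2Φ(-x/√t); the values of x and t are arbitrary.

```python
import numpy as np
from scipy import integrate, stats

x, t = 0.8, 2.0

def integrand(s):
    # x * p_s(-x) / s, with p_s the N(0, s) density of standard Brownian motion
    return x * stats.norm.pdf(-x, loc=0.0, scale=np.sqrt(s)) / s

lhs, _ = integrate.quad(integrand, 0.0, t)
rhs = 2.0 * stats.norm.cdf(-x / np.sqrt(t))   # reflection principle: P(inf_{s<=t} B_s < -x)
print(lhs, rhs)                                # the two values agree to quadrature accuracy
```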
There exists a huge literature on the law of the extrema of spectrally one sided Lévy processes. First explicit results were obtained for processes with bounded variation in [START_REF] Takács | On the distribution of the supremum for stochastic processes with interchangeable increments[END_REF]. Then the stable case has received particular attention. In [START_REF] Bingham | Maxima of sums of random variables and suprema of stable processes[END_REF] it is proved that X t has a Mittag-Leffler distribution. Then the law of X t was first characterized in [START_REF] Bernyk | The law of the supremum of a stable Lévy process with no negative jumps[END_REF] and was followed by more explicit forms in [START_REF] Hubalek | A convergent series representation for the density of the supremum of a stable process[END_REF], [START_REF] Michna | Formula for the supremum distribution of a spectrally positive α-stable Lévy process[END_REF] and Theorem 12 in [START_REF] Kortchemski | Sub-exponential tail bounds for conditioned stable Bienaymé-Galton-Watson trees[END_REF]. In the general case, one is tempted to derive expressions for the density of the extrema by differentiating (3.6), (3.7) and (3.9) but proving conditions allowing us to do so is an open problem.
Only some estimates of these densities have been given in [START_REF] Chaumont | The asymptotic behavior of the density of the supremum of Lévy processes[END_REF] and [17].
Multiplying each side of identities (3.6) and (3.7) by e -λx or x n and integrating we obtain the following other immediate consequence of Theorem 3.
Corollary 2. The Laplace transform of X t and X t are given for λ ≥ 0 by
E(e λX t ) = -λ t 0 P(X t-s > 0)E(e λXs 1I {Xs≤0} ) ds s + E(e λXt 1I {Xt≤0} ) + P(X t > 0) , E(e -λXt ) = -λ t 0 E(X - s )E(e -λX t-s 1I {X t-s >0} ) ds s + E(e -λXt 1I {Xt>0} ) + P(X t ≤ 0) .
Assume moreover that X admit a moment of order n ≥ 1. Then X t and X t admits a moment of order n and the later are given by,
E((-X t ) n ) = n t 0 P(X t-s > 0)E((-X s ) n-1 1I {Xs<0} ) ds s + E((X - t ) n ) , E(X n t ) = n t 0 E(X - s )E(X n-1 t-s 1I {X t-s ≥0} ) ds s + E((X + t ) n ) .
Then for λ ≥ 0 and z < 0, define the Laplace transform of the function t → t -1 p t (z) by
ϕ(λ, z) = ∫_0^∞ e^{-λt} t^{-1} p_t(z) dt.
Corollary 3. The Laplace transform ϕ(λ, z) satisfies the equation
ϕ(λ, z) = ϕ(0, z) e^{zΦ(λ)}, λ ≥ 0, z < 0, (3.10)
where Φ(λ) = ∫_0^∞ (1 - e^{-λt}) t^{-1} p_t(0) dt.
Proof. Letting x = 0 in identity (3.3), we obtain for z < 0, p_t(z) = ∫_0^t (-z/s) p_s(z) p_{t-s}(0) ds.
Taking the Laplace transform of each side of this identity gives
-(∂/∂λ) ϕ(λ, z) = -z ϕ(λ, z) ∫_0^∞ e^{-λt} p_t(0) dt,
whose solution is given by (3.10).
Recall from [START_REF] Chaumont | On the law of the supremum of Levy processes[END_REF] and [START_REF] Chaumont | The asymptotic behavior of the density of the supremum of Lévy processes[END_REF] the definition of the entrance laws q t (dx) (resp. q * t (dx)) of the excursions reflected at the supremum (resp. at the infimum) of X. Both reflected processes X -X and X -X are homogeneous Markov processes. We denote by n and n * the characteristic measures of the corresponding Poisson point processes of excursions away from 0, see [START_REF] Chaumont | On the law of the supremum of Levy processes[END_REF]. Then q t (dx) and q * t (dx) are defined by
n(f (X t ), t < ζ) = [0,∞) f (x)q t (dx) and n * (f (X t ), t < ζ) = [0,∞) f (x)q * t (dx) ,
where ζ denotes the life time of the excursions and f is any positive Borel function. We also recall that if p t (dx) is absolutely continuous, then so are q t (dx) and q * t (dx), see part (3) of Lemma 1, p. 1208 in [START_REF] Chaumont | On the law of the supremum of Levy processes[END_REF]. We will denote the corresponding densities by q t (x) and q * t (x). Thanks to the absence of negative jumps, the entrance law q t (dx) can be related to the law of X t through the relation,
q_t(x) = (x/t) p_t(-x), (3.11)
which is valid for all t > 0 and x ≥ 0, see (5.10), p.1208 in [START_REF] Chaumont | On the law of the supremum of Levy processes[END_REF]. We now use this fact and Theorem 3, in order to describe the entrance law q*_t(dx).
Corollary 4. The entrance law q*_t(x) satisfies the equation,
∫_0^t ((x-z)/(t-s)) p_{t-s}(z-x) q*_s(x) ds = -(d/dx) ∫_0^t ((x-z)/(t-s)) p_{t-s}(z-x) p_s(x) ds, (3.12)
for all t > 0, x > 0 and z < x.
Proof. Let us recall that from Theorem 6 in [START_REF] Chaumont | On the law of the supremum of Levy processes[END_REF], the law of the couple (sup_{u≤t} X_u, X_t) is given in terms of q_t and q*_t as follows, P(sup_{u≤t} X_u ∈ dx, X_t ∈ dz) = ∫_0^t q*_s(x) q_{t-s}(x - z) ds dx dz, (3.13) for x > 0 and z < x. Then plunging (3.11) into (3.13) and comparing this expression with (3.3), where we performed the time change s → t - s and differentiated in x > 0, we obtain (3.12).
Let us finally point out that actually Theorem 6 in [START_REF] Chaumont | On the law of the supremum of Levy processes[END_REF] gives the following disintegrated version of (3.13), P(g_t ∈ ds, sup_{u≤t} X_u ∈ dx, X_t ∈ dz) = q*_s(x) q_{t-s}(x - z) 1I_{[0,t]}(s) ds dx dz, (3.14)
on (0, ∞) 2 × R, where g t is the unique time at which the past supremum of (X, P) occurs on [0, t]. This result suggests a possibility of disintegrating also (3.3) according to the law of g t . Then comparing this disintegrated form with (3.14) would provide a means to obtain an expression for the density q * t (x) in terms of p t (x). However, this problem remains open.
J. Małecki is supported by the Polish National Science Centre (NCN) grant no. 2015/19/B/ST1/01457. | 23,801 | [
"842062"
] | [
"396",
"461233"
] |
01769390 | en | [
"qfin"
] | 2024/03/05 22:32:16 | 2021 | https://hal.science/hal-01769390/file/Forecasting%20sovereign%20CDS%20volatility%20A%20comparison%20of%20univariate%20GARCH-class%20models.pdf | Saker Sabkha
Christian De Peretti
Dorra Hmaied
Forecasting sovereign CDS volatility: A comparison of univariate GARCH-class models
Keywords: JEL Classification: G15, G17, C58 CDS volatility, Predictability, Forecasting models, Loss functions criteria
Initially overlooked by investors, sovereign credit risk has been reassessed upwards since the 2000's, which has contributed to awakening the interest of speculators in sovereign CDS. The growing need for accurate forecasting models has led us to fill the gap in the literature by studying the predictability of sovereign CDS volatility, using both linear and non-linear GARCH-class models. This paper uses data from 38 worldwide countries, ranging from January 2006 to March 2017. Results show that the CDS markets are subject to periods of volatility clustering, nonlinearity, asymmetric leverage effects and long-memory behavior. Using 7 statistical loss criteria, both heteroskedasticity-robust and non-robust, results show that the fractionally-integrated models outperform the basic GARCH-class models in terms of forecasting ability and that allowing flexibility regarding the persistence degree of variance shocks significantly improves the model's suitability to data. Despite the divergence in the economic status and geographical positions of the countries composing our sample, the FIGARCH and FIEGARCH models are mainly found to be the most accurate models in predicting credit market volatility.
Introduction
Understanding the fluctuations' dynamic of financial assets has always been of a particular interest in the academic and non-academic spheres. The considerable number of studies focusing on the stock prices' mechanism point out several stylized facts characterizing the financial markets such as: the volatility clustering, the non-stationarity. . . (See for example [START_REF] Niu | Volatility clustering and long memory of financial time series and financial price model[END_REF] for a study of the statistical behaviors of the Shanghai Composite Index and Hang Seng Index). Besides the stock markets widely studied, analyzing the characteristics of the credit market, and particularly the sovereign CDS market, is likewise interesting especially when it comes to investigating the impact of financial properties on the suitability of the CDS volatility modeling and forecasting ability.
The curious increase in the empirical studies dealing with modeling CDS data during the last decade can be explained by several reasons: (i) the constantly evolving outstanding amount of the CDS contracts reaching its highest values during the crisis periods, (ii) the need of more clear understanding of the role played by this market in the spread of crises and (iii) and the requirement of identifying the main explaining factors of credit risk. Furthermore, the use of CDS contracts no more as hedging instruments but rather as diversification, trading and speculation instruments has legitimized the usefulness of CDS volatility forecasting to investors for both risk management and portfolio management.
Despite the relevance of volatility forecasts, particularly in the decision process, and the growing interest in predicting credit spreads, the absence of papers in the CDS literature dealing with the ability of GARCH models to accurately forecast CDS volatility is rather surprising [1] . The literature on CDS is mainly composed of studies that focus on the determinants of these credit spreads [START_REF] Oliveira | The determinants of sovereign credit spread changes in the euro-zone[END_REF][START_REF] Costantini | Determinants of sovereign bond yield spreads in the emu: An optimal currency area perspective[END_REF][START_REF] Fontana | An analysis of euro area sovereign cds and their relation with government bonds[END_REF] or the Granger-causal relationship between CDS markets and related markets [START_REF] Coudert | Credit default swap and bond markets: which leads the other[END_REF]; [START_REF] Francis A Longstaff | How sovereign is sovereign credit risk?[END_REF]; [START_REF] Coudert | The interactions between the credit default swap and the bond markets in financial turmoil[END_REF]; [START_REF] Sabkha | International risk spillover in the sovereign credit markets: An empirical analysis[END_REF]. The very few papers that investigate forecasts of CDS spreads [START_REF] Cnv Krishnan | Predicting credit spreads[END_REF][START_REF] Sunila | Oil price uncertainty and sovereign risk: Evidence from asian economies[END_REF][START_REF] Avino | Are cds spreads predictable? an analysis of linear and non-linear forecasting models[END_REF][START_REF] Srivastava | Global risk spillover and the predictability of sovereign cds spread: International evidence[END_REF] only focus on the first moment order, while the predictability of CDS volatility remains understudied. Moreover, these studies try to forecast the CDS spreads based on the commonly known economic and financial determinants and not based on the predictive ability of the econometric models. Considering the foregoing gaps, this study aims to extend the literature by investigating the forecasting performance of 9 GARCH-class models in the sovereign CDS markets from January 2 nd , 2006 to March 31 st , 2017.
Our study contributes to the existing literature in several ways: first, as far as we are aware, none of the previous studies has focused on the predictability of CDS volatility, especially when it comes to the sovereign market. Second, our paper contributes as well to the literature by implementing a larger set of statistical loss function criteria -taking into account the nonzero mean and the heteroscedasticity of the forecast errors -to assess the out-of-sample predictive ability of the models in comparison with existing forecasting papers on financial assets. Third, the comparative study between linear and non-linear ARCH-class models provides a better and clearer comprehension of the in-sample and out-of-sample fit of the CDS data. Finally, our data set allows us to draw more robust and worldwide conclusions, as it is composed of CDS spreads for 38 countries from all over the world covering the recent two economic and financial crises, when the volatility of asset prices reached its highest unexpected levels.
Our empirical findings show that the sovereign CDS market is characterized by the same stylized facts as the stock market: volatility clustering, leverage effects and long memory behavior. The results of the diagnostic tests on the in-sample modeling generally show that no model outperforms all the others in terms of fitting. Based on the results the 7 loss functions, the predictive performance of the fractionally-integrated models seems to be more accurate, emphasizing the importance of taking into account the long-range memory and the nonlinear behavior of CDS spreads while forecasting volatility. Among the fractionallyintegrated models, our results show that the FIGARCH and the FIEGARCH are the most accurate models, providing the best out-of-sample performances in most cases.
The rest of the paper is organized as follows. A brief literature review of the previous studies predicting financial assets is section 2. Section 3 presents the sample and data used to compare the predictive ability of the models and displays the 9 volatility forecasting models studied. Results of the in-sample and out-of-sample analysis are reported is Section 4. Section 5 concludes the paper.
Literature review
Investigating the degree to which financial time series can be accurately forecast has always been in the limelight of researchers' issues. The empirical literature on the modeling and predicting volatility processes is extensive and takes into account more and more financial markets properties. [START_REF] Robert F Engle | Autoregressive conditional heteroscedasticity with estimates of the variance of united kingdom inflation[END_REF] is the first researcher to model financial data through a time-varying stochastic process characterized by a nonconstant correlated variance so-called ARCH model. A generalization of this Autoregressive Conditional Heteroscedaticity model is then proposed by [START_REF] Bollerslev | Generalized autoregressive conditional heteroskedasticity[END_REF] with more parsimonious and less overparametrization and biasedness in the estimates. Some extensions of this model are afterwards proposed, taking into account more stylized facts of the financial markets: leverage effects [START_REF] Daniel B Nelson | Conditional heteroskedasticity in asset returns: A new approach[END_REF][START_REF] Lawrence R Glosten | On the relation between the expected value and the volatility of the nominal excess return on stocks[END_REF], stationarity issues (Engle and [START_REF] Bollerslev | Generalized autoregressive conditional heteroskedasticity[END_REF], long memory [START_REF] Ding | A long memory property of stock market returns and a new model[END_REF][START_REF] Richard T Baillie | Fractionally integrated generalized autoregressive conditional heteroskedasticity[END_REF]; [START_REF] Bollerslev | Modeling and pricing long memory in stock market volatility[END_REF]; Tse (1998); [START_REF] Davidson | Moment and memory properties of linear conditional heteroscedasticity models, and a new model[END_REF]. . . [2] .
These GARCH-class volatility models have been widely used to forecast various financial data, based on their predictive power. The great focus in these studies has been primarily given to stock returns [START_REF] Donald | Predicting returns in the stock and bond markets[END_REF][START_REF] Poon | A practical guide to forecasting financial market volatility[END_REF][START_REF] Guidolin | Non-linear predictability in stock and bond returns: When and where is it exploitable[END_REF][START_REF] Miguel | Forecasting stock market returns: The sum of the parts is more than the whole[END_REF][START_REF] Niu | Volatility clustering and long memory of financial time series and financial price model[END_REF], in which recent past information is found to help forecast the future variance. Similar studies are conducted using commodity market data, especially oil data [START_REF] Agnolucci | Volatility in crude oil futures: a comparison of the predictive ability of garch and implied volatility models[END_REF][START_REF] Wei | Forecasting crude oil market volatility: Further evidence using garch-class models[END_REF][START_REF] Chkili | Volatility forecasting and risk management for commodity markets in the presence of asymmetry and long memory[END_REF][START_REF] Charles | Forecasting crude-oil market volatility: Further evidence with jumps[END_REF]. Generally, these studies show no model outperforms all the others in capturing the time series financial and statistical features, while the non-linear GARCH-class models [2] For an exhaustive survey of the proposed ARCH-class models, see [START_REF] Poon | A practical guide to forecasting financial market volatility[END_REF] are found to be more relevant in terms of forecasting accuracy [3] .
Unlike stock markets, exchange rates and oil market data, not many studies have been conducted to assess the predictive performance of the volatility GARCH-type models using CDS data. Despite Krishnan et al. (2010), Sharma and[START_REF] Sunila | Oil price uncertainty and sovereign risk: Evidence from asian economies[END_REF], [START_REF] Avino | Are cds spreads predictable? an analysis of linear and non-linear forecasting models[END_REF] and [START_REF] Srivastava | Global risk spillover and the predictability of sovereign cds spread: International evidence[END_REF] whose aim is to predict the future changes in the CDS spreads based on some macroeconomic and marketwide variables, the literature on CDS spreads focuses generally on the key drivers and determinants of these credit spreads [START_REF] Oliveira | The determinants of sovereign credit spread changes in the euro-zone[END_REF][START_REF] Costantini | Determinants of sovereign bond yield spreads in the emu: An optimal currency area perspective[END_REF][START_REF] Fontana | An analysis of euro area sovereign cds and their relation with government bonds[END_REF] or rather on the interaction and comovement between CDS markets and the other related financial markets [START_REF] Coudert | Credit default swap and bond markets: which leads the other[END_REF][START_REF] Francis A Longstaff | How sovereign is sovereign credit risk?[END_REF][START_REF] Coudert | The interactions between the credit default swap and the bond markets in financial turmoil[END_REF][START_REF] Sabkha | International risk spillover in the sovereign credit markets: An empirical analysis[END_REF].
Among the first authors who are interested in the prediction of credit spreads, Krishnan et al. ( 2010) construct credit-spread curves, based on several macroeconomic and firm-specific variables, for 241 highly and lowly credit-risky firms from 1990 to 2005. Results show that only the information contained in the riskless yield curve significantly improve the out-ofsample forecasts. Focusing more precisely on the CDS as proxy for the credit risk level, Sharma and Thuraisamy (2013) investigates the forecastability of the CDS spreads of 8 Asian sovereign from 2005 to 2012. In-sample and out-of-sample evidences reveal that the oil price uncertainty provides valuable information for predicting the future fluctuations in the sovereign CDS spreads. [START_REF] Avino | Are cds spreads predictable? an analysis of linear and non-linear forecasting models[END_REF] use some economic and financial factors to investigate whether the iTraxx index spreads are forecastle. Based on the results of the predictive ability of some linear (Structural OLS model and AR(1)) and non-linear (Markov-switching) models, these authors show that the daily changes in the CDS index can be predictable from the yield curve, the equity returns and the changes in the VSTOXX volatility index. Using an error correction model before, during and after the subprime crisis, [START_REF] Srivastava | Global risk spillover and the predictability of sovereign cds spread: International evidence[END_REF] show that the VIX predicts the future changes in 98% of the studied sovereign CDS markets.
These few studies on the forecastability of CDS spreads rely on the information contained in the theoretical determinants -widely used in the empirical literature -and its ability to predict future fluctuations in the CDS market. Yet, he accuracy of these CDS predictions is assessed through some loss functions criteria that are subject to nonzero mean noise and serial correlation (such as RMSE, MAE. . . ). Furthermore, the data studied so far only cover the period of the subrpime crisis and end before or right after the outbreak of the Sovereign debt crisis, which is quite a weak point given that all the unexpected changes in the market behavior are not taken into account in their forecasting models. Finally, the most important shortcoming of the aforementioned studies, is that they focus on the first moment order neglect the Variance in forecasting the CDS spreads.
Data and methodology
This section introduces one of our paper contributions: the sample under study, composed by countries around the world, allowing us to provide international evidences and data time [3] For a complete theoretical and empirical survey on the use of univariate ARCH processes in financial studies, see [START_REF] Bollerslev | Arch modeling in finance: A review of the theory and empirical evidence[END_REF].
line covering the both recent two financial and economic crises. Volatility forecasting models are as well presented in this section.
Sample and data description
Our study focuses on a sample composed by 38 worldwide countries belonging to five different geographical areas: Eastern and Western Europe, North and South America and Asia. Besides the developed countries and the emerging countries, the sample under study in this paper includes some Newly Industrialized Countries (such as Brazil, Mexico, Philippines and Thailand. . . ) and some low economic growth countries with the highest credit risk levels (such as Portugal, Ireland, Greece and Spain. . . ). The sample details with the economic and geographical status of each country are given in Table 1.
The dataset used is composed by daily 5-year sovereign CDS spreads, denominated in US dollars and collected from Thomson Reuters R . The extracted series cover a period spanning from January 2006 to March 2017, during which the world financial and credit markets have been affected by two major crises, namely the Global Financial Crisis and the Sovereign Debt Crisis. Thus, modeling, forecasting the CDS volatility and comparing models performances are particularly interesting during this period during which we observed some unexpected fluctuations on the credit market.
Marginal volatility processes: univariate ARCH-type models
The financial markets are generally characterized by periods of volatility clustering, during which the assets' second moment order remains high before regaining its normal levels. [START_REF] Robert F Engle | Autoregressive conditional heteroscedasticity with estimates of the variance of united kingdom inflation[END_REF] proposes an Autoregressive Conditional Heteroscedasticity (ARCH) model able to capture such financial phenomenon. This volatility persistence is as well observed in the Credit Default Swap market and the use of ARCH-class models to model the variance of the CDS spreads is thus legitimate. As an extension of the ARCH model, [START_REF] Bollerslev | Generalized autoregressive conditional heteroskedasticity[END_REF] proposes a generalized high-order ARCH process that is more parsimonious and allows for less overparametrization and biasedness in the estimates. This GARCH model is given by:
x i,t = µ i,t + u i,t / u i,t = σ i,t ε i,t , ε i,t |P t-1 N (0, 1); σ 2 i,t = V (x i,t |F t-1 ) = α i,0 + q i k=1 α i,k a 2 i,t-1 + p i h=1 β i,h σ 2 i,t-1 .
(1) with x i,t is a financial time series, i is a given country from the sample and µ i,t and σ i,t are respectively conditional mean and conditional volatility. To satisfy the positive-definite condition, some restrictions are imposed:
p ≥ 0, q ≥ 0, α i,k ≥ 0 for k = 1, . . . , q i , β i,h ≥ 0 and α i,0 ≥ 0 for h = 1, . . . , p i .
For sake of simplicity and suitability, only models with process orders (p i and q i ) equal to 1 are estimated. In fact, the simplest GARCH(1,1) specification is the most useful and fitted for financial time series [START_REF] Bollerslev | Generalized autoregressive conditional heteroskedasticity[END_REF][START_REF] Wei | Forecasting crude oil market volatility: Further evidence using garch-class models[END_REF].
The GARCH(1,1) process, as proposed by [START_REF] Bollerslev | Generalized autoregressive conditional heteroskedasticity[END_REF] is given by the following formula: The countries' economic classification is made according to the NU, the CIA World Factbook, he IMF and the World Bank criteria, in order to have a sample with a sufficient number of countries in each category.
σ 2 i,t = α i,0 + α i,1 + α i,1 a 2 i,t-1 + β i,1 σ 2 i,t-1 . (2)
Furthermore to the previous model restrictions, α i,1 and β i,1 parameters must satisfy the condition of α i,1 + β i,1 < 1 to comply with the stationarity in the broad sense.
A more restrictive version of the GARCH(1,1) is proposed by Engle and [START_REF] Bollerslev | Generalized autoregressive conditional heteroskedasticity[END_REF] where the equivalent of the unit root in the mean is included in the variance so we can handle for the stationarity of the variance. The Integrated GARCH(1,1) takes into account the persistence of conditional volatilities [4] . The main difference with the GARCH(1,1) is that the IGARCH requires the parameters α 1 and β 1 to respect the equality of α 1 + β 1 = 1. Thus, the IGARCH(1,1) [5] can be written as follows:
σ 2 i,t = α i,1 a 2 t-1 + (1 -α i,1 )σ 2 i,t-1 . ( 3
)
Besides the aforementioned linear models, there exist some nonlinear GARCH-class of models taking into account the other financial markets properties. The Exponential GARCH, [4] Today's shocks on a financial asset (future contracts for example) have a significant impact on the conditional volatility several periods in the future.
[5] The IGARCH(1,1) is equivalent to the Exponentially Weighted Moving Average(EWMA) model developed by [START_REF] Jp Morgan | [END_REF].
as proposed by [START_REF] Daniel B Nelson | Conditional heteroskedasticity in asset returns: A new approach[END_REF], is one of these models that accounts for the leverage effect and the asymmetry of the error distribution. While the nonnegativity of linear GARCH model is ensured by several parameters restrictions, the EGARCH model proposes another formulation allowing for a positive volatility without any restrictive constraints. The EGARCH(1,1) is expressed as follows:
ln(σ 2 i,t ) = α i,0 + α i,1-1 ln(σ 2 i,t ) + β i,1 g(ε i,t-1 ), where g(ε i,t ) = θ i ε i,t + γ i [| ε i,t | -E(| ε i,t |)]. (4)
The asymmetric relation between assets' fluctuation and volatility changes is depicted by the θ i and γ i representing respectively the sign and the magnitude of ε i,t . [START_REF] Lawrence R Glosten | On the relation between the expected value and the volatility of the nominal excess return on stocks[END_REF] propose a model that allows the sign and the amplitude of the innovations (ε t ) to affect the conditional volatility separately. The asymmetric leverage effect [6] is represented in the following formulation of the GJR-GARCH(1,1) [7] model:
σ 2 i,t = α i,0 + α i,1 a 2 i,t-1 + γ i I i,t-1 a 2 i,t-1 + β i,1 σ 2 i,t-1 , ( 5
)
with I t is a dummy variable equals to 0 when a t is positive and 1 otherwise. The first model accounting for the long-range persistence of financial assets variance is developed by [START_REF] Ding | A long memory property of stock market returns and a new model[END_REF]. This asymmetric power ARCH model allows the volatility to be long-memory [8] . The APARCH(1,1) model is:
σ 2 i,t = α i,0 + α i,1 (| a i,t-1 | -γ i a i,t-1 ) δ + β i,1 σ δ i,t-1 . ( 6
)
where δ depicts the Box-Cox power transformation of the conditional volatility (σ t ) and satisfies the condition of δ ≥ 0.
A more flexible class of GARCH models is proposed by [START_REF] Richard T Baillie | Fractionally integrated generalized autoregressive conditional heteroskedasticity[END_REF] who introduce a new feature of the unit root for the variance. In fact, the fractionally integrated GARCH model (FIGARCH) highlights the fact that -unlike stationary processes where the persistence of volatility shocks is finite -in unit root processes, the impact of lagged errors occurs at a slow hyperbolic rate of decay. The FIGARCH model allows, thus, to capture the long memory in financial volatility with a complete flexibility regarding the persistence degree. In fact, the FIGARCH(1,d,1) formulation depends on fractional integration parameter (d) as follows:
σ 2 i,t = α i,0 + [1 -(1 -β i (L)) -1 (1 -φ(L))(1 -L) d ]a 2 i,t + β i σ 2 i,t-1 . ( 7
)
with 0 < d < 1. When d=1, the FIGARCH(1,d,1) is equivalent to an IGARCH(1,1) where the persistence of conditional variance is supposed to be complete, while when d=0, it is rather equivalent to a GARCH(1,1) and no volatility persistence is taken into consideration. L is the lag operator and (1 -L) d is the financial fractional differencing operator. [6] Positive and negative financial shocks revamp asymmetrically the variance. Furthermore, bad news (shocks) generate greater volatility than good news. [7] The volatility's different reactions to signs and sizes of past innovations are also suggested in the Threshold Heteroskedastic model (TGARCH) of [START_REF] Zakoian | Threshold heteroskedastic models[END_REF]. The major difference is that in the TGARCH model the conditional standard deviation (σ t ) is considered rather than the conditional variance (σ 2 t ). [8] The autocorrelation function of time series returns decreases gradually.
Other ARCH formulations are extended to the fractionally integrated GARCH, including asymmetric leverage effect presented in the EGARCH model. [START_REF] Bollerslev | Modeling and pricing long memory in stock market volatility[END_REF] propose a new class of model combining characteristics of the FIGARCH and the EGARCH models so-called FIEGARCH(p,d,q). Financial assets' volatilities are, thus, better explained and depicted by a mean-reverting fractionally integrated process. The FIE-GARCH(1,d,1) model is written as follows:
ln(σ 2 i,t ) = α i,0 + φ(L) -1 (1 -L) -d [1 + ψ(L)]g(ε i,t-1 ). ( 8
)
where φ(L) and ψ(L) are lag polynomials, and -as in the EGARCH(1,1) [9] g(ε t ) is a quantization function of information flows such as:
g(ε i,t ) = θ i ε i,t + γ i [| ε i,t | -E(| ε i,t |)].
An extension of the conventional fractionally integrated GARCH model is proposed by Tse (1998), the so-called FIAPARCH(1,d,1). This approach combines long-range dependence and the asymmetric impact of lagged positive and negative shocks on future volatilities within a single fractionally integrated model. The FIAPARCH(1,d,1) is written as follows:
\sigma_{i,t}^{\delta} = \alpha_{i,0}(1 - \beta_{i})^{-1} + \left[ 1 - (1 - \beta_{i}(L))^{-1}\phi(L)(1 - L)^{d} \right]\left( |a_{i,t}| - \gamma_{i}\, a_{i,t} \right)^{\delta}. \qquad (9)
More recently, another linear GARCH model, the hyperbolic GARCH (HYGARCH), was proposed by [START_REF] Davidson | Moment and memory properties of linear conditional heteroscedasticity models, and a new model[END_REF], who argues that the impact of lagged errors on the conditional variance displays a near-epoch dependence feature. The main contribution of this model is that the fractional integration parameter enters as (-d) instead of (+d), and that the memory of the process rather increases as d approaches zero [10]. The statistical properties embedded in the HYGARCH make it one of the most successful and widely used approaches among financial practitioners for modeling time series volatilities. The HYGARCH(1,d,1) is defined by the following formulation:
\sigma_{i,t}^{2} = \alpha_{i,0} + \left[ 1 - (1 - \beta_{i}(L))^{-1}(1 - \phi(L))\left[ 1 + \alpha_{i}\left( (1 - L)^{d} - 1 \right) \right] \right] a_{i,t}^{2}. \qquad (10)
The volatility of the CDS log returns of the 38 countries is estimated with 9 GARCH-class models, taking into account, in each case, different financial stylized facts such as long-run properties in the conditional mean, volatility clustering and long-memory behavior in the conditional variance. The BFGS-BOUNDS method [START_REF] Charles G Broyden | The convergence of a class of double-rank minimization algorithms: 2. the new algorithm[END_REF] is used to optimize the likelihood function rather than conventional numerical optimization, in order to respect the parameter constraints, notably the stationarity and non-negativity constraints.
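As an indicative sketch of such constrained maximum-likelihood estimation, the Python arch package can fit part of the model set considered here (GARCH, GJR through its asymmetry term, EGARCH, FIGARCH). This is an assumption about tooling on our side: the FIAPARCH and HYGARCH typically require other econometric software, keyword support may vary across package versions, and log_returns below is a placeholder for one country's return series.

```python
import pandas as pd
from arch import arch_model

def fit_garch_family(returns: pd.Series, dist: str = "t") -> dict:
    """Fit a subset of the GARCH family on (rescaled) CDS log returns."""
    specs = {
        "GARCH":   dict(vol="GARCH",   p=1, o=0, q=1),
        "GJR":     dict(vol="GARCH",   p=1, o=1, q=1),   # o > 0 adds the asymmetry term
        "EGARCH":  dict(vol="EGARCH",  p=1, o=1, q=1),
        "FIGARCH": dict(vol="FIGARCH", p=1, q=1),
    }
    results = {}
    for name, spec in specs.items():
        model = arch_model(100 * returns, mean="Constant", dist=dist, **spec)
        results[name] = model.fit(disp="off")   # constrained ML (positivity/stationarity bounds)
    return results

# Example: aic_table = {name: res.aic for name, res in fit_garch_family(log_returns).items()}
```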
In addition to the widely used Box-Pierce tests and the LM ARCH-effects test, several other diagnostic tests are reported here, namely the Nyblom test, the adjusted Pearson goodness-of-fit test and the Residual-Based Diagnostic (as suggested by [START_REF] Fantazzini | Fractionally integrated models for volatility: A review-empirical appendix: Some examples with r interfaced with the ox package g@rch[END_REF]). The joint Nyblom test ([START_REF] Nyblom | Testing for the constancy of parameters over time[END_REF]) is a stability test under the null hypothesis of joint parameter constancy over time against the alternative of a parameter shift at an undefined breakpoint. According to [START_REF] Franz | Simple diagnostic procedures for modeling financial time series[END_REF], the adjusted Pearson goodness-of-fit test verifies whether the residuals' empirical distribution matches the theoretical distribution (namely Gaussian, Student or G.E.D., depending on the country). The Residual-Based Diagnostic test (Tse, 2002) checks for conditional heteroscedasticity, complementing and filling the gaps of the Box-Pierce Q statistics.
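A minimal sketch of the first two residual checks (the portmanteau Q statistics and the ARCH-LM test) using statsmodels is shown below; the Nyblom, adjusted Pearson and residual-based diagnostics are not reproduced here, and keyword names may differ slightly across statsmodels versions.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch

def basic_residual_checks(std_resid: np.ndarray, q_lags: int = 20, arch_lags: int = 10) -> dict:
    """Q(20) on standardized residuals and their squares, plus the ARCH-LM test."""
    q_levels = acorr_ljungbox(std_resid, lags=[q_lags], return_df=True)
    q_squares = acorr_ljungbox(std_resid ** 2, lags=[q_lags], return_df=True)
    lm_stat, lm_pval, _, _ = het_arch(std_resid, nlags=arch_lags)
    return {
        "Q_levels_p": float(q_levels["lb_pvalue"].iloc[-1]),
        "Q_squares_p": float(q_squares["lb_pvalue"].iloc[-1]),
        "ARCH_LM_p": float(lm_pval),
    }
```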
Loss functions criteria
Following [START_REF] Wei | Forecasting crude oil market volatility: Further evidence using garch-class models[END_REF], the forecasting exercise for the CDS volatility is implemented as follows: the timeline of the 38 CDS time series is divided into two subperiods; the in-sample volatility estimation is conducted from January 2nd, 2006 to March 31st, 2014 (2152 observations), and the out-of-sample forecasts concern the last three years, i.e. from April 1st, 2014 to March 31st, 2017 (783 observations). The twenty-day out-of-sample forecasts are used to assess and compare the predictive performance of the 9 studied models.
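The sketch below reproduces the spirit of this exercise for a single series and a single specification: estimate on the in-sample window and produce a 20-step-ahead variance path with the arch package. The exact rolling or updating scheme of the original study is not detailed here, so this should be read as one plausible implementation; returns is assumed to be a pandas Series indexed by date.

```python
from arch import arch_model

def twenty_day_forecast(returns, split_date="2014-03-31", horizon=20):
    """Fit on the in-sample window and return the h = 1..20 variance forecasts."""
    in_sample = returns.loc[:split_date]
    res = arch_model(100 * in_sample, vol="FIGARCH", p=1, q=1, dist="t").fit(disp="off")
    fcast = res.forecast(horizon=horizon, reindex=False)
    return fcast.variance.iloc[-1]   # columns h.1, ..., h.20
```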
Comparing the forecasting ability of volatility models is not straightforward. Several measures of predictive ability based on loss functions are suggested in the literature. According to [START_REF] Poon | A practical guide to forecasting financial market volatility[END_REF], [START_REF] Wei | Forecasting crude oil market volatility: Further evidence using garch-class models[END_REF] and [START_REF] Pilbeam | Forecasting exchange rate volatility: Garch models versus implied volatility forecasts[END_REF], we cannot conclude with certainty that one model is superior to another in terms of forecasting performance based solely on a single error statistic, since each criterion may be more or less relevant from one case to another [11]. The conclusions of this study are therefore based on a rich set composed of the 7 most popular and relevant criteria, including:
• The Mean Square Error (MSE):
MSE = \frac{1}{N}\sum_{t=1}^{N}\left( \hat{\sigma}_{t} - \sigma_{t} \right)^{2}, \qquad (11)
• The Mean Absolute Error (MAE):
MAE = \frac{1}{N}\sum_{t=1}^{N}\left| \hat{\sigma}_{t} - \sigma_{t} \right|, \qquad (12)
• The Heteroscedasticity-adjusted Mean Square Error (HMSE). As suggested by [START_REF] Bollerslev | Periodic autoregressive conditional heteroscedasticity[END_REF], the HMSE is calculated as follows:
HMSE = \frac{1}{N}\sum_{t=1}^{N}\left( \frac{\sigma_{t}}{\hat{\sigma}_{t}} - 1 \right)^{2}, \qquad (13)
• The Heteroscedasticity-adjusted Mean Absolute Error (HMAE). Andersen et al. (1999) propose a loss function that better accommodates the heteroskedasticity in the forecast bias. The HMAE is calculated as follows:
HMAE = \frac{1}{N}\sum_{t=1}^{N}\left| \frac{\sigma_{t}}{\hat{\sigma}_{t}} - 1 \right|, \qquad (14)
[11] Diebold and Mariano (2002) argue that allowing forecast errors to be non-Gaussian, non-zero-mean and autocorrelated produces better test results.
• The QLIKE loss function (QLIKE). This is a test of the forecast bias implied by a Gaussian likelihood (see [START_REF] Wei | Forecasting crude oil market volatility: Further evidence using garch-class models[END_REF] for further details):
QLIKE = \frac{1}{N}\sum_{t=1}^{N}\left[ \ln(\hat{\sigma}_{t}) + \frac{\sigma_{t}}{\hat{\sigma}_{t}} \right], \qquad (15)
• The R²LOG loss function (R2LOG). This loss function assesses the goodness-of-fit of the out-of-sample forecasts based on the regressions of Mincer and Zarnowitz (1969):
R^{2}LOG = \frac{1}{N}\sum_{t=1}^{N}\left[ \ln\!\left( \frac{\sigma_{t}}{\hat{\sigma}_{t}} \right) \right]^{2}, \qquad (16)
• The Mean Logarithm of Absolute Errors (MLAE). As proposed by [START_REF] Pagan | Alternative models for conditional stock volatility[END_REF], the MLAE criterion is written as follows:
MLAE = \frac{1}{N}\sum_{t=1}^{N}\ln\left| \hat{\sigma}_{t} - \sigma_{t} \right|. \qquad (17)
where N is the number of predicted observations and σ̂_t is the volatility forecast. The latent daily CDS spread volatility σ_t is not observed and is thus proxied by the squared daily logarithmic returns [12]. Previous studies [START_REF] Lopez | Evaluating the predictive accuracy of volatility models[END_REF][START_REF] Poon | A practical guide to forecasting financial market volatility[END_REF] report that the use of such a proxy produces unbiased estimates, even though it remains questionable (it is a noisy estimator because of its asymmetric distribution).
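A direct transcription of Eqs. (11)-(17) is sketched below, with the squared daily log returns used as the proxy sigma and the model output as sigma_hat; the small epsilon guarding the logarithms is our own implementation detail for zero-return days, not part of the criteria themselves.

```python
import numpy as np

def loss_functions(sigma_hat: np.ndarray, sigma: np.ndarray, eps: float = 1e-12) -> dict:
    """Compute the seven comparison criteria of Eqs. (11)-(17)."""
    err = sigma_hat - sigma
    ratio = sigma / sigma_hat
    return {
        "MSE":   float(np.mean(err ** 2)),
        "MAE":   float(np.mean(np.abs(err))),
        "HMSE":  float(np.mean((ratio - 1.0) ** 2)),
        "HMAE":  float(np.mean(np.abs(ratio - 1.0))),
        "QLIKE": float(np.mean(np.log(sigma_hat + eps) + ratio)),
        "R2LOG": float(np.mean(np.log(ratio + eps) ** 2)),
        "MLAE":  float(np.mean(np.log(np.abs(err) + eps))),
    }

# Example proxy: sigma = (np.log(spreads / spreads.shift(1)) ** 2).loc["2014-04-01":].to_numpy()
```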
Empirical results
This section presents the summary statistics for the 38 studied time series, as well as the modeling, estimation and forecasting-ability testing of the 9 GARCH-class models.
Descriptive statistics
Descriptive statistics, displayed in Table 2, show that the studied countries present dissimilar credit risk levels, with CDS spreads ranging from 1 bp to 37081.41 bp. The average daily spreads also highlight this divergence in sovereign financing conditions, with the largest value recorded, as expected, in Greece (9508.85 bp) and the smallest in the USA (24.01 bp). The high levels of standard deviation reveal, on the other hand, that worldwide financial and economic troubles impacted the public finances of the countries under study, doubtlessly with different magnitudes. The least volatile CDS market is Germany (24.50). According to the Augmented Dickey-Fuller test [START_REF] David | Likelihood ratio statistics for autoregressive time series with a unit root[END_REF], all the time series present a unit root, implying that the CDS spreads of the 38 countries are non-stationary at least at the 5% significance level.
[12] More methods exist in the literature to proxy the volatility of financial assets, such as the high-low measure and the realized volatility estimate. For a complete survey of these methods, see [START_REF] Poon | A practical guide to forecasting financial market volatility[END_REF].
Notes to Table 2: The table reports descriptive statistics for the daily sovereign CDS spreads of 38 countries. Min., Max. and Std. Dev. denote respectively the minimum, the maximum and the standard deviation. The Augmented Dickey-Fuller test (individual intercept included in the test equation) is a unit root test that informs about the stationarity of a time series, under the null hypothesis of the presence of a unit root in the process. The Engle ARCH-LM test with 2, 5 and 10 lag orders informs about the presence of ARCH effects in the series under the null hypothesis of no autocorrelation in the squared residuals. GPH is the log-periodogram test of [START_REF] Geweke | The estimation and application of long memory time series models[END_REF] with d-parameter (m=1467). This test is applied to the squared logarithmic returns (as a proxy for unconditional volatility) to detect any long-range dependence under the null hypothesis of no long-memory behavior in the volatility process. *, ** and *** refer to statistical significance at the 10%, 5% and 1% levels respectively.
Focusing on the evolution of the CDS log returns (computed as x_t = log(S_t / S_{t-1})) over the studied period, as presented in Figure 1, some volatility clustering periods are detected. Results of the ARCH-LM test in Table 2 confirm that the data clearly exhibit heteroscedastic properties and support the use of GARCH-class processes to model the conditional volatility. The GPH test ([START_REF] Geweke | The estimation and application of long memory time series models[END_REF]) conducted on the squared CDS log returns rejects the null hypothesis of no long-memory behavior in the series' volatility process, suggesting the use of fractionally integrated models [13]. Figure 2 reports the density estimation and shows that the series composing our international sample exhibit dissimilar statistical behaviors in terms of their empirical distribution. The majority of the return distributions do not overlay the Gaussian reference, which indicates that the residuals should be allowed to follow a Gaussian, a Student or a Generalized Error Distribution (G.E.D.) [14].
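The preliminary checks above can be reproduced in a few lines with statsmodels, as sketched below for one spread series; the GPH log-periodogram regression is not part of statsmodels and is therefore only referenced rather than implemented, and spreads is assumed to be a pandas Series of daily CDS quotes.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import het_arch

def preliminary_tests(spreads):
    """ADF test on the spread levels and ARCH-LM tests on the log returns."""
    log_ret = np.log(spreads / spreads.shift(1)).dropna()               # x_t = log(S_t / S_{t-1})
    adf_stat, adf_pval, *_ = adfuller(spreads.dropna(), regression="c")  # intercept included
    return {
        "ADF": adf_stat,
        "ADF_p": adf_pval,
        "ARCH_LM2_p": het_arch(log_ret, nlags=2)[1],
        "ARCH_LM10_p": het_arch(log_ret, nlags=10)[1],
        "log_returns": log_ret,
    }
```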
Models estimation and diagnostic tests
Results of the 9 GARCH-class model estimates are not reported here but are available upon request. Even though some models are difficult to optimize, no convergence failures are recorded for any time series. However, the major conclusion that can be drawn regarding the estimation process is that taking into account several stylized facts of financial markets (long memory, shock persistence, asymmetric leverage effects, etc.) does not necessarily improve model performance, since the more a model is over-parametrized, the more its computation and convergence are complicated. In fact, inconsistency and inaccuracy of the parameter estimates for some countries and some models can result from the complexity of the models' statistical specifications. By contrast, the models that perform best in terms of numerical convergence and computing time are the GARCH, IGARCH, FIGARCH and FIEGARCH.
Results of the univariate misspecification tests applied to the standardized residuals are presented in Table 5 (Appendix A). The Q portmanteau statistics with 20 lags on both the standardized residuals in levels and their squares show that the null hypothesis of no serial correlation is accepted in most cases for all the studied models. The LM-ARCH test up to 10 lag orders shows, as well, that there is no remaining heteroscedasticity in the conditional variance equations for most time series. The GARCH, IGARCH and FIGARCH models pass this test in 100% of cases, whilst the least performant model in terms of serial correlation is the FIAPARCH, with ARCH effects detected in 6 countries. Moreover, testing for conditional heteroscedasticity through the Residual-Based Diagnostic (RBD) [START_REF] Yiu | Residual-based diagnostics for conditional heteroscedasticity models[END_REF] gives better results, with no serial correlation detected in any series for the APARCH, IGARCH and FIGARCH. Based on the Nyblom test proposed by [START_REF] Nyblom | Testing for the constancy of parameters over time[END_REF], no shifts are detected and the parameter coefficients of the 9 models are found to be constant over time for all countries. One of the recommended steps in modeling financial data is to evaluate the goodness of fit [START_REF] Ralphb | Goodness-of-fit-techniques[END_REF]. The fit of our models is thus assessed, in this paper, through the adjusted Pearson goodness-of-fit test. The statistics indicate that in most cases there is no difference between the empirical distribution of the residuals and the theoretical one. Interestingly, the basic GARCH model has the highest number (12 of the 38 studied series) of discrepancies between the data and the hypothesized probability distribution.
In addition to the diagnostic tests, Table 5 displays the Akaike information criterion (AIC) for each model and country. The results do not allow us to unanimously select a single most appropriate model, as the AIC rankings are mixed across the 38 countries of the sample. By minimizing the AIC, the APARCH turns out to be the best-fitted model for the CDS data of 34% of the sample, while the HYGARCH, IGARCH and FIAPARCH provide the best in-sample fit for respectively 26%, 18% and 11% of the studied countries. However, these results are not in line with the preliminary analysis, where all the studied CDS log returns were found to exhibit long memory in the variance. When focusing only on the fractionally integrated subset of models, the HYGARCH outperforms in 53% of cases, followed by the FIAPARCH in 40% of cases. This divergence in results points out the limits of using the 'minimizing loss of information' technique to compare the appropriateness of models. This approach therefore seems not to be fully consistent in this case and should only be used tentatively, at least if it is not combined with other approaches; hence the importance of relying instead on forecasting ability to select the best performing volatility model.
Forecasting performance
Results of the twenty-day out-of-sample volatility forecasts are reported in Table 3 and Table 4. As mentioned before, the forecasting robustness and reliability of the 9 models are studied through 7 error statistics, namely the MSE, MAE, HMSE, HMAE, QLIKE, R2LOG and MLAE. Even though no model dominates unanimously in terms of forecasting ability according to all the comparison measures, it is clearly seen that the fractionally integrated class of models outperforms the basic GARCH models, which do not take long memory in the volatility process into account. Ranked last by 5 of the 7 criteria, the least accurate forecasting model for CDS volatility is the EGARCH, with the largest recorded errors.
The lowest values of the MSE, MAE and R2LOG are recorded for the FIGARCH, whilst the lowest values of the HMSE, QLIKE and MLAE are reported for the FIEGARCH, making them preferable, in terms of forecasting accuracy, to the other studied models. On the other hand, according to the MSE, MAE, HMAE, R2LOG and MLAE criteria, the HYGARCH produces the highest errors, probably due to its computational complexity.
These findings empirically reveal the nonlinear predictability pattern of CDS volatility. In general, our results are in line with findings from other financial markets: the non-linear GARCH-class models that allow for leverage effects, asymmetric dependencies and long-range memory in the volatility process provide more accurate in-sample performance and more reliable out-of-sample forecasting ability. Improving the forecasting power of the studied models thus depends on their ability to capture a maximum of financial stylized facts while estimating the CDS volatility of future days.
Conclusion
This paper aimed to assess the performances of 9 linear and non-linear volatility models. Using daily sovereign CDS data, the GARCH, IGARCH, EGARCH, GJR, APARCH, FIGARCH, FIEGARCH, FIAPARCH and HYGARCH models are estimated, allowing different financial market properties to be taken into account, such as the leverage effect, the asymmetric reaction to good and bad news, and long-range persistence. Since the performance comparison is made upon several loss function criteria and several diagnostic tests, a certain number of conclusions can be drawn.
Table 3: Results of the loss functions criteria for the twenty-day out-of-sample volatility predictions
First, the in-sample estimation shows that all the models pass almost all diagnostic tests in most cases, and that the smallest Akaike criterion does not allow us to choose a single best-fitted model. Second, none of the volatility models studied in this paper is found to be more relevant than all the others in all situations in terms of forecasting ability: the chosen model varies from one country to another and from one loss function criterion to another. Third, in most cases and according to the majority of the error statistics, the non-linear GARCH-class models that capture the long-memory behavior, the leverage effects and the asymmetric dependencies in the volatility process are more relevant in terms of out-of-sample forecasting ability than the others. Fourth, the FIGARCH and FIEGARCH models are found to be the most relevant and robust forecasting models.
Since comparing the predictive performance of volatility models is of paramount importance in assessing diversifiable risk, in dynamic asset pricing theory and in the optimization of portfolio allocation, the economic implications of our findings concern particularly policymakers, financial practitioners and financial market participants in general. The in-sample performances show that no model clearly outperforms all the others, and since the results are mixed and differ from one country to another, no volatility model should be selected in an arbitrary way. The model selection should rather be based on the particular features of the data used and the country studied. When it comes to forecasting performance, some models are preferable and seem to predict the future volatility of the CDS market accurately and robustly. Thus, after taking transaction costs into account, investors could take advantage of the market's relative inefficiency and generate extra profits by putting in place a simple trading strategy that exploits the predictability of sovereign CDS volatility. Finally, our study shows that improving volatility forecasts requires including the maximum of the CDS market's stylized facts. However, in practice, the implementation of complex models generates additional costs that are not necessarily reflected in our comparison method, which may call into question the usefulness of using better volatility predictive models.
Our research line can be pursued in several ways. First, further investigation of the performances of the volatility models can be carried out through a comparative study based on the superior predictive ability test rather than on the diagnostic tests and loss function criteria used here. Second, our study can be applied to the corporate CDS market, in order to assess whether the nature of the reference entity impacts the performance of the studied models. Third, since there is dynamic segmentation in financial markets, it would be interesting to check the robustness of our findings using a different sample from other regions and/or a CDS term structure with different maturities.
Table 1: Sample and countries classification into economic categories and geographical positions

Developed countries (20): Austria (Western Europe), Belgium (Western Europe), Denmark (Western Europe), Finland (Western Europe), France (Western Europe), Germany (Western Europe), Ireland (Western Europe), Italy (Western Europe), Japan (Asia), Latvia (Eastern Europe), Lithuania (Eastern Europe), Netherlands (Western Europe), Norway (Western Europe), Portugal (Western Europe), Slovakia (Eastern Europe), Slovenia (Eastern Europe), Spain (Western Europe), Sweden (Western Europe), UK (Western Europe), USA (North America).

Newly industrialized countries (6): Brazil (South America), China (Asia), Mexico (North America), Thailand (Asia), Turkey (Asia).

Emerging countries (11): Bulgaria (Eastern Europe), Croatia (Eastern Europe), Czech (Eastern Europe), Hungary (Western Europe), Greece (Western Europe), Indonesia (Asia), Poland (Eastern Europe), Romania (Eastern Europe), Russia (Asia), Ukraine (Eastern Europe), Venezuela (South America).
Table 2: Descriptive statistics and ARCH effect tests for the CDS time series
                 CDS spreads                                      CDS log returns
Country   Obs.   Min   Mean   Max   Std. Dev.   ADF statistics   ARCH-LM (2)   ARCH-LM (5)   ARCH-LM (10)   GPH
Austria 1.40 36.13 132.77 24.96 -2.45 249.75 *** 127.05 *** 72.58 *** 0.29 ***
Belgium 2.05 72.39 398.78 74.62 -1.67 508.94 *** 237.99 *** 120.84 *** 0.18 ***
Brazil 61.50 178.55 606.31 94.86 -2.46 25.01 *** 43.70 *** 37.71 *** 0.11 ***
Bulgaria 13.22 180.37 692.65 121.88 -2.25 12.71 *** 10.36 *** 6.72 *** 0.08 ***
China 10.00 82.44 276.30 43.56 -2.82 * 120.85 *** 63.09 *** 39.00 *** 0.22 ***
Croatia 24.88 244.20 592.50 128.47 -2.15 137.90 *** 58.87 *** 47.62 *** 0.26 ***
Czech 3.41 66.89 350.00 49.54 -2.62 * 62.52 *** 46.01 *** 29.50 *** 0.14 ***
Denmark 11.25 36.65 157.46 32.94 -2.17 87.27 *** 41.66 *** 24.36 *** 0.21 ***
Finland 2.69 26.85 94.00 19.24 -2.33 13.79 *** 7.98 *** 4.43 *** 0.05 ***
France 1.50 54.30 245.27 50.56 -1.71 276.95 *** 120.56 *** 62.86 *** 0.20 ***
Germany 1.40 28.77 118.38 24.50 -2.05 252.46 *** 128.31 *** 73.27 *** 0.29 ***
Greece 5.20 9508.85 37081.41 15351.1 -1.46 5.E-04 4.E-04 6.E-04 -4.E-04
Hungary 17.34 225.98 729.89 153.05 -2.18 14.48 *** 15.20 *** 8.67 *** 0.10 ***
Indonesia 118.09 219.29 1240.00 116.83 -2.63 * 139.82 *** 105.31 *** 61.49 *** 0.23 ***
Ireland 1.75 188.89 1249.30 234.02 -1.36 218.63 *** 103.01 *** 63.33 *** 0.18 ***
Italy 5.575 151.7504 586.7 127.38 -1.79 127.35 *** 60.46 *** 35.18 *** 0.19 ***
Japan 2.13 49.26 152.64 33.28 -1.94 71.53 *** 31.30 *** 21.68 *** 0.13 ***
Latvia 5.50 210.89 1176.30 216.13 -1.62 152.57 *** 68.47 *** 35.36 *** 0.26 ***
Lithuania 6.00 169.21 850.00 154.01 -1.90 56.75 *** 26.91 *** 13.56 *** 0.15 ***
Mexico 64.17 141.89 613.11 59.36 -3.03 * 356.35 *** 160.17 *** 127.50 *** 0.39 ***
Netherlands 7.67 37.13 133.84 29.50 -2.00 10.79 *** 4.33 *** 5.59 *** 0.05 ***
Norway 10.59 30.95 62.00 17.82 -1.68 3.22 *** 2.46 *** 2.06 *** 0.05 **
Philippines 78.30 188.72 840.00 101.70 -1.77 154.83 *** 127.66 *** 90.03 *** 0.23 ***
Poland 7.67 101.35 421.00 73.12 -2.32 311.98 ** 135.64 ** 75.78 ** 0.21 ***
Portugal 4.02 289.89 1600.98 323.68 -1.60 53.57 *** 42.23 *** 22.61 *** 0.17 ***
Qatar 7.8 83.13518 390 53.89 -2.12 37.65 *** 17.33 *** 9.55 *** 0.09 ***
Romania 17.00 204.20 767.70 144.17 -2.09 57.88 *** 33.74 *** 17.50 *** 0.17 ***
Russia 36.88 209.09 1106.01 147.84 -2.95 * 258.09 *** 117.58 *** 65.50 *** 0.29 ***
Slovakia 5.33 77.52 306.01 66.71 -2.03 25.14 *** 24.62 *** 19.31 *** 0.11 ***
Slovenia 4.25 131.24 488.58 114.88 -1.65 13.23 *** 9.82 *** 34.88 *** 0.11 ***
Spain 2.55 144.63 634.35 135.01 -1.56 195.02 *** 78.80 *** 39.98 *** 0.19 **
Sweden 1.63 27.17 159.00 25.70 -2.64 * 69.49 *** 30.82 *** 20.72 *** 0.16 ***
Thailand 51.01 120.94 500.00 41.89 -3.64 * 81.52 *** 120.36 *** 96.33 *** 0.17 ***
Turkey 109.82 217.65 835.01 72.41 -3.72 * 69.04 *** 86.65 *** 46.84 *** 0.21 ***
UK 16.50 42.89 165.00 28.11 -2.07 27.33 *** 21.12 *** 23.19 *** 0.11 ***
Ukraine 1.00 2173.76 15028.76 3969.27 -2.15 60.42 *** 32.53 *** 17.13 *** 0.11 ***
USA 10.02 24.01 90.00 11.11 -3.58 * 94.96 *** 46.67 *** 24.57 *** 0.18 ***
Venezuela 124.62 1771.08 10995.67 1869.79 -2.00 36.17 *** 38.56 *** 22.73 *** 0.11 ***
Table 4: Summary of the number of selected models according to each criterion
GARCH EGARCH GJR APARCH IGARCH FIGARCH FIEGARCH FIAPARCH HYGARCH
MSE 5 1 2 2 7 16 5 13 3
MAE 4 0 1 0 3 14 6 11 4
HMSE 3 3 2 2 2 4 10 7 9
HMAE 4 3 2 4 3 6 7 11 1
QLIKE 6 2 2 3 3 4 10 7 5
R 2 LOG 5 2 2 1 3 9 6 8 5
MLAE 2 3 0 3 0 6 18 5 4
[1] The majority of papers dealing with the predictive power of GARCH models only focus on major stock indexes and exchange rates [START_REF] Poon | A practical guide to forecasting financial market volatility[END_REF].
Table 5: Results of the diagnostic tests for the 38 countries (see Appendix A)
"1014428"
] | [
"489734"
] |
01769391 | en | [
"info",
"math"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01769391/file/Ben%20Ammar%20et%20Al.%20October%202017%20version%204%20FINAL%20without%20modifications.pdf | Oussama Ben-Ammar
Alexandre Dolgui
email: [email protected]
Desheng Dash Wu
Planned lead times optimization for multilevel assembly systems under uncertainties
Keywords: Multi-level assembly systems, Assemble-to-order, Assembly contracting, Stochastic lead-times, Planned lead times optimization, Stochastic modeling, Branch and Bound
Planned lead times are crucial parameters in management of supply networks that continue to be more and more extended with multiple levels of inventory of components and uncertainties. The object of this study is the problem of determining planned lead times in multi-level assembly systems with stochastic lead times of different partners of supply chains. A general probabilistic model with a recursive procedure to calculate all the necessary distributions of probability is proposed. A Branch and Bound algorithm is developed for this model to determine planned order release dates for components at the last level of a BOM which minimize the sum of inventory holding and backlogging costs. Experimental results show the behaviour of the proposed model and optimisation algorithm for different numbers of components at the last level of the BOM and for different numbers of levels and values of holding and backlogging costs. The model and algorithm can be used for assembly contracting in an assembly to order environment under lead time uncertainty.
Introduction
In today's global marketplace, planners have to take appropriate actions in response to supply disruptions [START_REF] Snyder | OR/MS models for supply chain disruptions: a review[END_REF][START_REF] Speier | Global supply chain design considerations: Mitigating product safety and security risks[END_REF][START_REF] Kleindorfer | Managing disruption risks in supply chains[END_REF] and supply uncertainty [START_REF] Flynn | On Theory in Supply Chain Uncertainty and its Implications for Supply Chain Integration[END_REF], Simangungsong et al. 2012[START_REF] Wazed | Uncertainty factors in real manufacturing environment[END_REF]. In comparison to supply-demand coordination uncertainties, Revilla and Sáenz (2013) defined disruption as random, unplanned events that stop operation either completely or partially for a certain duration. [START_REF] Snyder | OR/MS models for supply chain disruptions: a review[END_REF] specified that disruptions can often be viewed as a special case of lead time uncertainty.
In the last few years, academics and decision-makers have recognised that supply chains have become extremely vulnerable due to uncertain lead times, demand prediction and price variability. For planners, it seems difficult to improve the efficiency of the supply chain when lead times frequently have uncertain values [START_REF] Bandaly | Impact of lead time variability in supply chain risk management[END_REF]. They therefore have to manage assembly and delivery as an uncertain process.
In the Assemble-To-Order (ATO) environment, finished products are assembled only after customer orders have been received. This kind of environment enables firms to assemble on customer orders with a specific quantity and due date, and so unwanted inventory of finished products can be zero.
Despite the widespread adoption of the ATO environment, there are considerable weaknesses. Some input data are often considered as deterministic parameters, but in reality are inherently uncertain. For example, the assembly process can be interrupted by machine breakdown and components replenishment lead times may be significantly longer than planned ones. Therefore, the stock-out of one component may delay the delivery of finished products.
The literature reports many investigations into production planning that consider the randomness of the finished product demand (see [START_REF] Peidro | Quantitative models for supply chain planning under uncertainty: a review[END_REF][START_REF] Mula | Models for production planning under uncertainty: A review[END_REF][START_REF] Koh | Uncertainty under MRP-planned manufacture: review and categorization[END_REF], but few studies have examined how to cope with the uncertainty of lead times (see [START_REF] Damand | Parameterisation of the MRP method: automatic identification and extraction of properties[END_REF][START_REF] Guide | A review of techniques for buffering against uncertainty with MRP systems[END_REF]Srivasta 2000). Safety stocks have largely highlighted how to handle different uncertainties, whereas safety lead times have not been sufficiently studied. Interested readers may refer to [START_REF] Van Kampen | Safety stock or safety lead time: coping with unreliability in demand and supply[END_REF], where the cases in which the use of safety stocks and/or safety lead times could be advantageous are explained, or to Jansen and de Kok (2011) for further details on the importance of lead time anticipation. More generally, readers who are interested in supply planning models under uncertainty may turn to [START_REF] Aloulou | A bibliography of non-deterministic lotsizing models[END_REF], [START_REF] Díaz-Madroñero | A review of discrete-time optimization models for tactical production planning[END_REF], [START_REF] Dolgui | A State of the Art on Supply Planning and Inventory Control under Lead Time Uncertainty[END_REF], [START_REF] Ko | A review of soft computing applications in supply chain management[END_REF], [START_REF] Peidro | Quantitative models for supply chain planning under uncertainty: a review[END_REF], [START_REF] Mula | Models for production planning under uncertainty: A review[END_REF] and [START_REF] Koh | Uncertainty under MRP-planned manufacture: review and categorization[END_REF].
Therefore, it is necessary to examine the influence of lead times on supply planning and to develop methods that minimize costs, considering the non-deterministic behaviour of lead times.
In this study, we consider an ATO environment. The whole supply network is configured for a given tailored finished product. This product is customized according to the customers' requests and composed of a given set of personalized components.
Our problem arises at contract negotiation step. A client orders a specific product, we need to design the corresponding supply network and decide both (i) the due date for client delivery and (ii) the date when the overall process is launched at the bottom level. All partners of the supply network (local assembly units or suppliers) are independent enterprises. Thus we cannot coordinate activities inside them. But we are responsible for client delivery at the fixed due date, and we know, at the supply network design stage, the statistics on lead times of all partners. We are thus able to give to the client an estimate of the total lead time, and launch the overall process at the bottom level.
In our case, the demand of clients is not known in advance, and no stocks of finished products or components are planned to anticipate this demand. As stated in [START_REF] Berlec | Predicting order lead times[END_REF], [START_REF] Chandra | Inventory management with variable lead-time dependent procurement cost[END_REF], [START_REF] Hennet | Inventory control in a multi-supplier system[END_REF], [START_REF] Golini | Moderating the Impact of Global Sourcing on Inventories through Supply Chain Management[END_REF], [START_REF] Farhani | Competitive supply chain network design: an overview of classifications, models, solution techniques and applications[END_REF], in this case, the planners need information about the tailored product, personalized components and assembly process to negotiate the delivery time with the customer, select suppliers and plan release dates based on cost and lead times.
Obviously, the lead time are uncertain because of different factors including capacity constraints, machine breakdowns, stochastic variations on operation processing times, etc. However, at the considered stage, we know only the distributions of probability for partners' lead times (based on statistical data). Different components produced by different partners need to be assembled to obtain a finished product. So to decide the client delivery due date and the start times for supply chains, a model based on the probability distributions of the partner lead times is developed. This is a common approach in contracting and planning under uncertainty [START_REF] Song | Contract assembly: Dealing with combined supply lead time and demand quantity uncertainty[END_REF][START_REF] Fiala | Information sharing in supply chains[END_REF][START_REF] Berlec | Predicting order lead times[END_REF][START_REF] Yoo | New product development and the effect of supplier involvement[END_REF], which is commonly used owing to the complexity of the problem (Guide and Srivasta 2000[START_REF] Koh | Uncertainty under MRP-planned manufacture: review and categorization[END_REF][START_REF] Dolgui | Supply planning under uncertainties in MRP environments: A state of the art[END_REF][START_REF] Damand | Parameterisation of the MRP method: automatic identification and extraction of properties[END_REF][START_REF] Dolgui | A State of the Art on Supply Planning and Inventory Control under Lead Time Uncertainty[END_REF][START_REF] Díaz-Madroñero | A review of discrete-time optimization models for tactical production planning[END_REF].
The rest of the paper is organized as follows. Firstly, we make a short review of previous work on the optimization of assembly systems under lead time uncertainty (Section 2). A description of the problem is presented in Section 3. The analytical model is given in Section 4. In Section 5, a technique is given to reduce the initial space of research. It will be used with a Branch and Bound algorithm to optimize the mathematical expectation of the total cost (Section 6). Some results are shown in Section 7. Finally, we outline the work done in a conclusion, and give some perspectives for future research.
Related publications
An analysis of the literature shows that in the case of assembly systems, the lead time is most often considered deterministic and rarely uncertain. To handle the uncertainty of lead times, the studies found in the literature can be split into two categories: one-level and multi-level assembly systems [START_REF] Dolgui | A State of the Art on Supply Planning and Inventory Control under Lead Time Uncertainty[END_REF]. Yano (1987a) was among the first to study assembly systems with supply timing uncertainties after her studies on serial production systems [START_REF] Yano | Setting Planned Leadtimes in Serial Production Systems with Tardiness Costs[END_REF]. In Yano (1987a) only the case of the single-period model was considered. The assembly system is composed of one component at level 1 and two components at level 2. The lead times of these three components are considered stochastic. An algorithm was developed to find optimal planned lead times, which minimize holding and backlogging costs. This study has been cited 148 times and in subsequent publications, models have been limited to one or two-level assembly systems [START_REF] Chu | Supply management in assembly systems[END_REF][START_REF] Tang | The detailed coordination problem in a two-level assembly system with stochastic lead times[END_REF][START_REF] Mohebbi | The impact of component commonality in an assemble-to order environment under supply and demand uncertainty[END_REF], Hnaien et al. 2008a[START_REF] Fallah-Jamshidi | A hybrid multi-objective genetic algorithm for planning order release dates in two-level assembly systems with random lead times[END_REF][START_REF] Dolgui | A State of the Art on Supply Planning and Inventory Control under Lead Time Uncertainty[END_REF], Hnaien et al. 2016[START_REF] Borodin | Component replenishment planning for a single-level assembly system under random lead times: A chance constrained programming approach[END_REF]. [START_REF] Chu | Supply management in assembly systems[END_REF] addressed the same problem, but in the case of one-level assembly systems. The convexity of the expected total holding and backlog costs was proven, and an iterative algorithm was used to minimize it. [START_REF] Dolgui | Planification de systèmes d'assemblage avec approvisionnements aléatoires en composants[END_REF] and [START_REF] Dolgui | A model of joint control of reserves in automatic control systems of production[END_REF] developed an approach based on the coupling of an integer linear programming and a simulation to model a multi-period problem. They studied one-level assembly systems under a deterministic demand and random lead times in the case of the lot-for-lot policy. The authors considered several types of finished product. Several types of component are needed to assemble a finished product, and for each component, an inventory holding cost is considered. In this study, both the number of components to be ordered at the beginning of each period and the number of products to be assembled during each period are determined.
Ould [START_REF] Louly | Generalized newsboy model to compute the optimal planned lead times in assembly systems[END_REF], [START_REF] Dolgui | Supply planning for single-level assembly system with stochastic component delivery times and service level constraints[END_REF] and Ould Louly et al. (2008) were focused on one-level assembly systems for one product under component lead time uncertainties. The demand for the considered product was assumed to be known and fixed and the capacity was considered unlimited. A multi-period under component lead time uncertainties was considered. A generalization of the discrete Newsboy model was suggested to find optimal release dates which maximize the customer service level for the finished product and minimize the expected inventory holding cost for the components for a specific case where the distributions of probability of lead times and holding cost are the same for all components. A Branch and Bound procedure was developed to solve it for a general case of distributions of probability and costs. In [START_REF] Shojaie | A Study on MRP with Using Leads Time, Order Quality and Service Level over a Single Inventory[END_REF], the same model was studied, but for the case of POQ policy, service level constraints and over a single inventory. The authors explain that the proposed model can function with no major restriction on the type of the lead time distribution. However, a concrete example is missing in the study.
A two-level assembly system was studied by [START_REF] Tang | The detailed coordination problem in a two-level assembly system with stochastic lead times[END_REF] in the case of both stochastic lead times and the process time for components at level one of the BOM. Both the demand and the due date are assumed to be known. The capacity is considered unlimited. To determine the optimal safety lead times, which minimize the total backlogging and inventory holding cost, a Laplace transform procedure was introduced. Later, Hnaien et al. (2008a) treated only a one-period demandor a two-level assembly system and developed a genetic algorithm to minimize the total expected cost, which equal to the sum of the backlogging cost for the finished product and the inventory holding costs for components. The authors assumed that the components at level 1 of the BOM were stored and the finished product was assembled only after the given due date. [START_REF] Fallah-Jamshidi | A hybrid multi-objective genetic algorithm for planning order release dates in two-level assembly systems with random lead times[END_REF] exploit the same problem in a multi-objective context. An electromagnetism-like mechanism is proposed to reinforce the GA and to determine minimal expected costs. However, the authors focused on the number of components at the last level and neglected the influence of different costs. The case of a one-period inventory model for a one-level assembly system under stochastic demand and lead times was studied by [START_REF] Hnaien | Single-period inventory model for one-level assembly system with stochastic lead times and demand[END_REF]. A mathematical model and a Branch and Bound procedure were developed to determine optimal quantity and optimal planned lead times for components. Drawing on this work, [START_REF] Borodin | Component replenishment planning for a single-level assembly system under random lead times: A chance constrained programming approach[END_REF] proposed a joint chance constrained model and an equivalent linear formulation to solve this problem.
Recently, Atan et al. (2015) considered a parallel multi-stage process feeding of final assembly process. To determine optimal planned lead times for different stages minimizing the expected cost for a customer order, an iterative heuristic procedure was developed. It could be considered as a special case of our problem.. [START_REF] Bollapragada | Component procurement and end product assembly in an uncertain supply and demand environment[END_REF] examined a multi-product, multi-component, procurement and assembly problem under supply and demand uncertainty. A numerical example illustrates the impacts of lead times and capacity on the performance of the assembly system. However, no resolution method is provided.
To our knowledge, in the literature, there is no other multi-level model that determines optimal order release dates with several levels in the BOM, several types of components and stochastic lead times for each component at each level. The existing models are limited:
to one-level assembly systems, continuous random lead times, real decision variables and one-off demand [START_REF] Chu | Supply management in assembly systems[END_REF],
or to one-level assembly systems, discrete random lead times, discrete decision variables and a constant demand which is the same for all periods (Ould- Louly et al., 2008 andOuld-Louly and[START_REF] Dolgui | A State of the Art on Supply Planning and Inventory Control under Lead Time Uncertainty[END_REF],
or to two-level systems with only one component at level 1 and two components at level 2, continuous random lead times and one period planning (Yano 1987a),
or to two-level assembly systems and several types of components at each level (Hnaien et al., 2008a[START_REF] Fallah-Jamshidi | A hybrid multi-objective genetic algorithm for planning order release dates in two-level assembly systems with random lead times[END_REF].
In the field of project planning, [START_REF] Trietsch | Optimal feeding buffers for projects or batch supply chains by an exact generalization of the newsvendor result[END_REF] performed a study on a problem closest to our problem. It focused on the role of gates by creating safety time cushions, and optimized the later by optimizing gates. These gate times are considered as lower bounds on the actual start times of activities, and the final activity is completed at the earliest on the due date (as in [START_REF] Chu | Supply management in assembly systems[END_REF]Hnaien et al. 2008a). [START_REF] Trietsch | Optimal feeding buffers for projects or batch supply chains by an exact generalization of the newsvendor result[END_REF] considered a centralized model of a supply network. The decision-maker can decide the start times at every level. By contrast, our model was developed for the case of decentralized decisions in the supply networks where all partners are independent enterprises. Thus we cannot decide the start times at all levels. We decide only the due date for the final product and release dates for the components of last level of BOM, i.e. the dates of launching of the corresponding supply chains. The decisions which can be made with our model are useful at the assembly contracting step in ATO decentralized environments. We have selected a modeling approach based only on lead time distributions of probability for all partners. To obtain these distributions of probability, we use statistics and consider that a partner lead time starts when all components for the considered partner are available. Thus the difference from [START_REF] Trietsch | Optimal feeding buffers for projects or batch supply chains by an exact generalization of the newsvendor result[END_REF] is in the modelling approach, which is based on a different objective, and in the system studied (decentralized in our case).
Moreover, in [START_REF] Trietsch | Optimal feeding buffers for projects or batch supply chains by an exact generalization of the newsvendor result[END_REF], the duration of activities is assumed to be continuous random variables (they are discrete in our model), the costs for starting activities earlier are linear, and a linear project tardiness penalty cost is also introduced. The objective function of the proposed project-gating model is convex under gate constraints. An optimal solution is found by simulation. The author does not specify the number of simulations, the size of the studied problem, or the limits of the approach. In addition, to obtain the convexity of the cost function, specific assumptions on costs were proposed. By contrast, in our model, there is no specific assumption on costs and decision variables are discrete.
In the present paper, there is no specific assumption on costs. The supply planning and plan execution activities are decentralized and only the aspect of synchronization between partners of supply chains is considered. We cannot manage the coordination issues inside of each partner. At our decision level, we know only the distributions of probability of partner lead times. At this level of abstraction, considering the available information, all other aspects are assumed to be integrated into the probability distributions of partner lead times. This is a standard assumption made by several scholars worked in this field [START_REF] May | Applying inventory classification to a large inventory management system[END_REF][START_REF] Kuang | A Dynamic Programming Approach to Integrated Assembly Planning and Supplier Assignment with Lead Time Constraints[END_REF][START_REF] Ding | A simulation optimization methodology for supplier selection problem[END_REF][START_REF] Petrovica | Modelling and simulation of a supply chain in an uncertain environment[END_REF]. We assume that the lead time of a partner starts when all needed components are available. A partner manages its own production process considering capacity and coordination issues. But we cannot take into account these decisions because we do not know them at the considered planning stage. We have only the distributions of probability of lead times obtained by using the statistics that are also results of similar past decisions.
To summarize, the available information for us is only the probability distributions of lead times. Based on this information, we search for planned lead times for all components of last supplier level for every new customer order. In other words, taking into account the supply network design and the distributions of probability of partner lead times obtained with statistics or estimated by partners themselves, we need to define the client delivery due date and when at the latest the supply network should start all the supply chain processes.
Based on the approaches presented in the literature, this paper proposes a new generalized model to study multi-level assembly systems under lead time uncertainty in an ATO environment. To the best of our knowledge, our paper reports the first study of a model for multi-level assembly systems with stochastic lead times with the number of level greater than two. Moreover, the model and algorithm developed in this paper are different from the model and Branch and Bound procedure presented in Ould- [START_REF] Louly | Calculating safety stocks for assembly systems with random component procurement lead times: A branch and bound algorithm[END_REF] and [START_REF] Hnaien | Single-period inventory model for one-level assembly system with stochastic lead times and demand[END_REF]. The later have required specific assumptions about costs: (i) the inventory holding cost of a component is greater than the sum of unit inventory holding costs of all the individual components which make it up; and (ii) the unit inventory holding cost of the finished product is greater than the sum of unit inventory holding costs of the individual components which make it up. In our paper, these constraints are lifted.
Notation and background
To get closer to the industrial methods of planning, we consider a discrete temporal environment and integer decision variables. Fig. 1 shows that the finished product is produced from components themselves obtained from components of the next level and so on.
The assembly system is constituted by levels and components at each level ( = 1, … , ). ∑ is the number of components necessary to assemble the finished product. It is necessary to define the order release dates for components , ( = 1, … ,
) at level (the last level). No decision is possible on the start date for intermediate levels. At intermediate levels, , ( = 1, … , , = 1, … , -1) components are assembled.
Without loss of generality, we assume that the finished product demand is known and equal to one, and exactly one unit of each component is required to produce the finished product. The unit backlogging cost and the unit inventory holding cost for the finished product, and the unit inventory holding cost ℎ , for the component , are known. Note that we assume that each supplier or local assembly unit is independent. Overall, we know relatively a little about how they manage production. Nevertheless, we suppose that distribution of probability of lead time for each component and partner are available to assess lead times. They can be obtained from the statistics or estimated by partners. In the case of statistics, the time statistics automatically include not only processing times, but also additional times depending on load, capacity constraints and variations, local planning decision, etc. This is a common approach at the contracting stage to predict lead times under uncertainty. The lead times , of components are modelled as independent random discrete variables with known probability distributions and finite upper values.
Figure 1. A multi-level assembly system
For each level , when all the necessary components are available, level delivers the components to level -1 after a random discrete lead time. When a semi-finished product arrives at the final level (level 0), it undergoes the necessary final assembly operations and afterwards the finished product is delivered to the customer in order to satisfy the demand . The assembly capacity at every level is considered as infinite. It is assumed that each component of level is used to assemble only one type of component at level -1. Thus, for this model, only planned order release dates , for components , at level are unknown parameters and are the set of our decision variables. • Maximum between and the due date : = ( , )
• Minimum between and the due date :
= ( , ) • = ℎ , - ℎ , , ∈ , • = ℎ , + • = - ℎ ,
Mathematical model
We search to know when the overall processes should be launched to satisfy the demand of our customer for a due date. This model is used at the stage of contracting with our customer in an assembly to order environment. Note that for the considered problem, taking into account that all partners of supply networks are independent enterprises and the network will be managed in a decentralized manner, we search only for values of decision variables , .
From the moments , when we order components , from suppliers at the last level , and until the date when we deliver the finished product to our customer, all the processes at different levels are considered to be launched when all necessary components from previous levels are available. The assembly capacity is considered infinite. We know the distributions of probability of lead time for each level and component.. we use the infinite capacity model. There is no decision variable for internal levels (no possibility to take into account the future local decisions only the distributions of probability of lead times of our partners are known).
The production cycle extends from order release dates , of , to the delivery date ( , ) of the finished product.
Inventory holding costs are considered if components arrive before the triggering of the assembly process.
Figure 2. The composition of the total cost (the case of backlog)
Then, the total cost ( , ) is the sum of inventory holding costs for components and inventory holding or backlogging costs for the finished product. An example when the finished product is assembled after the due date is given in Fig. 2.
The proofs of the following theoretical results are in the Appendix.
Property 1.
An explicit form for the total cost is the following: The objective is to find the order release dates for components at level minimizing the expected value of the total cost ( , ). This total cost is a random discrete variable (because the lead times , and assembly dates ( , and ) are random variables, ∀ = 1, … , and ∀ = 1, … , ). Thus, we can calculate the mathematical expectation of the total cost ⟦ ( , )⟧. In Fig. 3, different mathematical expectation costs are presented.
( , ) = ℎ , - , - ℎ , , - ℎ , , + × --× ( - ) (1
From Eq. ( 1), the total expected cost, which is noted by ⟦ ( , )⟧ can be formulated:
⟦ ( , )⟧ = ℎ , ⟦ ⟧ - × , - ℎ , , - ℎ , , + × -+ × ( -⟦ ⟧) (2)
The expressions , ⟦ ⟧, ⟦ ⟧ and , are calculated and the exact expression of ⟦ ( , )⟧ is given in the Appendix.
Figure 3. Composition of the total expected cost
The intervals -, ≤ , ≤are the initial space of research. It depends on maximum and minimum lead times and the number of levels. In the next section, a technique is proposed to obtain upper limits for decisions variables.
Reducing the space of research
The main idea is to decompose the multi-level assembly system (Fig. 1) to (the number of components at level ) multi-level linear supply chains (Fig. 4). Each linear chain , ∈ {1, . . , }, delivers a finished product on a specified delivery date . There are (i) backlogging costs if the finished product is available after the due date and (ii) an inventory holding cost if it arrives before. The optimal order release date for one linear chain will be used to reduce the initial space of research for the corresponding component release date at last the level in the BOM. Therefore, it can be inferred that the total expected cost ( , ) is equal to:
( , ) = × --× -From Eq. (A.7) and Eq. (A.8) we can deduce that = + ∑ 1 -≤ and = ∑ 1 -≤ . Then, the total expected cost can be written as:
( , ) = × 1 - ≤ + × - 1 - ≤ Subsequently, let = ≤
and for each linear chain , * * be the optimal order release date which minimizes this total expected cost ( * * , ) .
, , ,
Finished Product m , , , , Finished Product j , , , , Finished Product 1 , , , , , Property 3.
The optimal order release date * * satisfies the optimality condition for the discrete Newsboy model, where the cumulative distribution function (. ) of the total lead time is used:
- * * -1 ≤ + ≤ - * * (3)
The complete proof is detailed in Appendix A.3.
Remark 1.
It is worthwhile to mention that in our case, each multi-level linear supply chain is composed of levels (successive production stages) with random processing times and without planned production dates (see [START_REF] Elhafsi | Optimal leadtimes planning in serial production systems with earliness and tardiness costs[END_REF] for the case when there are planned production date for each stage). A due date for the final product is known, and each production process starts the moment the previous one is completed. Therefore, inventory-holding costs for intermediary processes do not exist and only backlogging and inventory-holding costs of the finished product appear in expression (3).
Numerical example
To illustrate the procedure for reducing the research space, the example in Fig. 5 is studied. A three-level assembly system is considered (m = 3). Two components constitute the first level, the second one contains four components (N₂ = 4) and the third one contains eight components (N₃ = 8).
The due date T is equal to 15. The unit inventory holding cost of the finished product is equal to 10. The unit backlogging cost of the same product per period is fixed to 11 values, powers of ten ranging from very small to very large.
Table 2. The unit inventory holding costs for components per period
The maximum lead time of each component is equal to 5 and each lead time varies between 1 and 5. The probability distributions are given in Table 3.
Initially, each order release date X_{3,i} varies between 0 and 12. The initial cardinality of the research space is thus equal to 13⁸ solutions.
The multi-level assembly system is decomposed into 8 linear chains. The cumulative distribution function of the total lead time of each linear chain can then be obtained. Only the one of the first chain is presented as follows:
F(3)=0
Table 3. The probability distributions of lead times
For the first supply chain, the optimal release date X**₁ depends on b and r and is given by expression (3). For example, for r = 10 and a small backlogging cost, the condition F(15 − X**₁ − 1) ≤ b/(b + r) ≤ F(15 − X**₁) gives X**₁ = 12. For b = 10⁶, the critical ratio 0.999990 lies between F(13) and F(14), hence X**₁ = 1.
The value of X**₁ is then the upper limit of the order release date of the corresponding component at the last level of the assembly system. Thus, for the smallest backlogging costs, the initial research space of possible solutions [0; 12] remains unchanged, whereas for the largest ones it shrinks to a single value. X**₂, X**₃, …, X**₈ are determined in the same way.
For the same example, the influence of b and r is studied. Table 4 and Fig. 6 present the upper limits of the reduced research space of possible solutions according to the backlogging and inventory holding costs of the finished product. In the first case (the largest ratio b/r), the reduced space of research contains a single solution. In the second case, the proposed technique decreases the space of research by 99.99% (from 13⁸ to 2⁸ solutions). In the seventh case (b/r = 1), the space of research is reduced by 90.57%. In the last one (the smallest ratio), the space of research is not reduced. This seems logical: for a small b/r (the inventory holding cost of the finished product is much greater than its backlogging cost), components have to be ordered as late as possible. The size of the research space of possible solutions can have a significant impact on the choice of resolution methods. For a high unit backlogging cost, a Branch and Bound procedure can be sufficient to solve this problem independently of the system and of the number of components at the last level of the BOM.
Optimization
In this section, a Branch and Bound approach is developed. The total expected cost E[C(X, T)] given in Eq. (2) is minimized under the constraints:
X_{m,i}^{min} ≤ X_{m,i} ≤ X**_i, ∀ i = 1, …, N_m
where X_{m,i}^{min} is the lower limit of the initial search interval of component (m, i).
Branch and Bound procedure
A Branch and Bound procedure was proposed by Ould-[START_REF] Louly | Calculating safety stocks for assembly systems with random component procurement lead times: A branch and bound algorithm[END_REF] and [START_REF] Hnaien | Single-period inventory model for one-level assembly system with stochastic lead times and demand[END_REF] for a one-level assembly problem and a one-level production problem, respectively. Nevertheless, it is only valid if:
h_{l,i} − Σ_{j ∈ children of (l,i)} h_{l+1,j} ≥ 0 for every component (l, i), and r − Σ_{i=1}^{N₁} h_{1,i} ≥ 0
In other words, only if: (i) the inventory holding cost of a component is greater than the sum of unit inventory holding costs of components which make it up; and (ii) the unit inventory holding cost of the finished product is greater than the sum of unit inventory holding costs of components which make it up.
We will use the idea of such a Branch and Bound procedure, adapt it to our problem, and prove upper and lower bounds as well as additional recursive properties in order to obtain an efficient Branch and Bound algorithm for our problem without the restrictive assumptions on costs and system structure that were employed in Ould-[START_REF] Louly | Calculating safety stocks for assembly systems with random component procurement lead times: A branch and bound algorithm[END_REF] and [START_REF] Hnaien | Single-period inventory model for one-level assembly system with stochastic lead times and demand[END_REF].
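For readers less familiar with this type of enumeration, the sketch below gives a generic depth-first Branch and Bound over the release dates of the last-level components; the expected-cost and lower-bound functions are passed as black boxes, so it only reflects the search structure discussed here, not the exact bounds derived below.

    def branch_and_bound(domains, expected_cost, lower_bound, upper_bound):
        """Depth-first search over release-date vectors.
        domains       : list of iterables, admissible dates for each component
        expected_cost : full vector -> E[C(X, T)]
        lower_bound   : partial vector -> optimistic cost of any completion
        upper_bound   : initial incumbent cost (e.g. from the heuristic of Fig. 7)
        """
        best_cost, best_X = upper_bound, None
        stack = [()]                                 # partial assignments
        while stack:
            partial = stack.pop()
            if len(partial) == len(domains):         # leaf: complete vector
                c = expected_cost(partial)
                if c < best_cost:
                    best_cost, best_X = c, partial
                continue
            for x in domains[len(partial)]:
                candidate = partial + (x,)
                if lower_bound(candidate) < best_cost:   # otherwise prune the branch
                    stack.append(candidate)
        return best_X, best_cost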
Upper bound
An upper bound UB is calculated by the following procedure; the algorithm is detailed in Fig. 7. It is equal to the minimum between two candidate costs. As in Section 5, the multi-level assembly system is decomposed into N_m (the number of components at level m) multi-level linear supply chains. Two vectors are considered: Φ, composed of the lower limits of the release dates, and Ψ = (X**₁; X**₂; …; X**_{N_m}), composed of the optimal chain release dates. We start by delaying the first order release date of Φ (respectively by advancing the first one of Ψ); the operation is repeated until E[C(Φ, T)] (respectively E[C(Ψ, T)]) does not decrease anymore, and the same operations are then performed on the order release date of the next component. Each decision variable remains between the lower and upper limits of its search interval. This horizontal dependency can easily be proven.
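One way to read this heuristic is as a coordinate descent on the expected cost, started from the two extreme vectors; the sketch below leaves the cost function and the search intervals abstract and is only meant to illustrate the delaying/advancing mechanism.

    def coordinate_descent(X, expected_cost, lowers, uppers, step):
        """Improve a release-date vector one component at a time.
        step = +1 delays the dates (start from the lower limits),
        step = -1 advances them (start from the optimal chain dates)."""
        X = list(X)
        for i in range(len(X)):
            current = expected_cost(X)
            while lowers[i] <= X[i] + step <= uppers[i]:
                X[i] += step
                trial = expected_cost(X)
                if trial >= current:         # no further improvement on this component
                    X[i] -= step
                    break
                current = trial
        return X, expected_cost(X)

    # the upper bound of Fig. 7 can then be taken as the best of the two runs:
    # UB = min(coordinate_descent(Phi, E, lo, up, +1)[1],
    #          coordinate_descent(Psi, E, lo, up, -1)[1])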
Lower bound
Numerical example
The proposed methods have been implemented in C++. The experiments are executed on a computer with a 2.93 GHz CPU and 4 GB of RAM. We analysed the results obtained on the example introduced above, where the number of components at the last level is equal to 8 (N_m = 8) and the ratio between b (the unit backlogging cost of the finished product per period) and r (the unit inventory holding cost of the finished product per period) is known and variable.
Table 6. Percentage of cut branches at each level of search if RSR is used (%)
Table 5 and Fig. 11 report the results for the case where the Branch and Bound procedure does not use the reduced space of research (RSR), and Table 6 and Fig. 12 give the results with the RSR procedure. For a ratio b/r > 1, the RSR considerably reduces the percentage of branches that remain to be cut (null for the largest ratios and below 3% for b/r = 10). Then, the efficiency of this technique decreases with the ratio b/r, and the percentage of branches to be cut when the RSR is considered (Table 6) tends to the percentage obtained when the RSR is not used (Table 5). This finding is confirmed for the smallest values of b/r.
The heuristic of Fig. 7 calculates an upper bound, equal to the minimum of the two candidate costs. Table 7 shows the gaps between the exact solution provided by the Branch and Bound procedure and each of these two candidate costs. The resulting upper bound is, for all ratios b/r, less than 21% away from the exact solution. We see from Table 7 that the upper bound is generally given by one of the two candidate costs when the unit backlogging cost of the finished product is larger than the unit inventory holding cost of the same product, and by the other one in the remaining cases. It is worthwhile to mention that, for this example, it seems possible to obtain optimal solutions using only the RSR together with a heuristic (such as the one given in Fig. 7, which calculates an upper bound) in less than 0.01 seconds. This assertion is valid only if the unit backlogging cost of the finished product is at least 100 times the unit inventory holding cost of the same product. CPU times (Table 8) vary significantly according to the ratio b/r. For a very large ratio, the space of research is small (see Table 4) and the vast majority of nodes in the tree are cut. When the ratio decreases, the cardinality of the reduced space of research (RSR) tends to the cardinality of the initial space of research. This explains why the CPU times required for the Branch and Bound procedure reinforced by the RSR tend to those required by the Branch and Bound procedure using the initial space of research. In other words, reducing the space of research is not necessary when the unit inventory holding cost is much greater than the backlogging cost of the finished product. Lower limits for the order release dates could possibly be developed, using properties and techniques proposed in [START_REF] Trietsch | Optimal feeding buffers for projects or batch supply chains by an exact generalization of the newsvendor result[END_REF], to further reduce the initial space of research for very small b/r. To understand the effect of lead-time dispersion on the robustness of the solution, the effect of variance is studied. The example of Fig. 5 is employed. The following parameters are unchanged: the due date, the unit inventory holding cost of the finished product, and the unit inventory holding costs of components per period. The unit backlogging cost of the finished product per period is set to 3 values: 1, 10 and 100. To consider a large variation of the lead-time variance, we assume that the probability distributions are the same for all components: P(L = 1) = 0.245, P(L = 2) = 0.48, P(L = 3) = 0.255, P(L = 4) = 0.01 and P(L = 5) = 0.01.
In Table 9 and in Fig. 13, the effect of a variation of the variance between -200% and 200% is studied. The evaluation of the relative solution shows that the variation of the total expected cost remains below 3% when the variance varies between -100% and 100%, and below 13% when it varies between -200% and 200%. This analysis proves that the variability of the lead times (a variance change of less than 200%) only slightly affects the expected total cost, whatever the ratio b/r.
Table 9. The effect of variance on the best solution (%)
Figure 13. The effect of variance of lead times on the variance of the total expected cost
In the next section, we analyse the results obtained on instances where the number of levels of the BOM tree varies between 1 and 5 and the ratio b/r varies over several powers of ten. We also note that the number of components at the last level is fixed to eight (N_m = 8).
Tests on randomly generated examples
The influence of the ratio b/r has been studied for multi-level assembly systems; it varies over a set of powers of ten. We consider the results of 150 examples generated by a randomized algorithm and grouped into five families according to the number of levels (m = 1, 2, …, 5). In each family, 30 different instances are generated. The input data are the due dates, the distribution functions of the component lead times, the unit holding costs of the components, and the unit backlogging and unit inventory holding costs of the finished product.
The 150 examples are generated randomly as follows:
Each component lead time L_{l,i}, ∀ l ∈ {1, …, 5}, ∀ i ∈ {1, …, N_l}, varies between 1 and 5 with a discrete uniform probability distribution; the unit inventory holding cost h_{l,i} of component (l, i), ∀ l ∈ {1, …, 5}, ∀ i ∈ {1, …, N_l}, is drawn between 1 and an instance-dependent upper value;
The unit inventory holding cost for the finished product ranges from 10 to 10 ;
The unit backlogging cost for the finished product is calculated according to .
Table 10 shows the influence of the number of levels of the BOM and of the ratio b/r on the cardinality of the research space (the initial cardinality and the average cardinality of the reduced research space). We note that when the ratio decreases, the reduced cardinality tends to the initial one. Values from Table 10: 3.10×10¹⁰, 2.35×10¹⁰, 1.02×10¹⁰, 2.62×10⁹, 3.74×10⁸, 5.51×10⁷, 6.86×10⁶, 4.37×10⁵.
Table 10. Cardinalities of the initial and reduced spaces of research
Table 11 shows the gap between the upper bound (equal to the minimum of the two candidate costs) and the exact solution provided by the Branch and Bound procedure; it is reported only if optimal solutions are obtained for all instances in the corresponding family. We note that, for each instance, an upper limit of one hour of calculation time was allowed to find the optimal solution. The value of this gap depends on both the ratio b/r and the number of levels m. When b/r tends towards its extreme values, the gap tends to 0. For one-level assembly systems, from a ratio equal to 10 the upper bound is very often equal to the exact solution. Nevertheless, for five-level assembly systems, the ratio b/r has to be considerably larger for the gap to fall below 0.5%.
In other words, the upper bound used in the Branch and Bound procedure has a good quality when the unit backlogging cost for the finished product is greater than the unit inventory cost for the same product.
Table 12 reports the percentage of nodes visited by the Branch and Bound procedure at the last level over the reduced space of research. For one-level assembly systems and for a ratio / equal to 100%, most of the nodes are visited. For five-level assembly systems and for a ratio equal to 10 , this percentage drops to 83.99%.
That proves the importance of the reduced space of research for the performance of the Branch and Bound procedure. On the one hand, when the unit backlogging cost of the finished product is much larger than the unit inventory holding cost of the same product, the RSR reduces the cardinality of the initial space of research immensely and the few remaining nodes are then visited. In the opposite case, the RSR reaches its limits but the Branch and Bound procedure remains effective: a large percentage of branches is cut and a small percentage of nodes is visited. To study the influence of the number of components at the last level, we tested the solution approach on a randomly generated instance set. We created 12 examples grouped into three families according to the number of levels (m = 1, 2, 3). The number of components at the last level varies in {10, 20, 30, 40}. In each family, 10 test instances per example are generated as the instances defined in the first part of Section 7; only the unit backlogging cost differs and is equal to 5 times the unit holding cost for all instances. We note that, for each instance, an upper limit of one hour of calculation time was fixed, and CPU times are given only if optimal solutions are obtained for all instances in the corresponding family. For these instances, the performance of the Branch and Bound procedure decreases with the number of components and the number of levels (Table 14). For three-level assembly systems with more than 10 components, more than one hour is required to find exact solutions. Therefore, metaheuristics should be developed to generate good-quality approximate solutions for larger instances.
Conclusion and perspectives
The paper deals with the modelling and optimization of multi-level assembly systems under uncertainty of components lead times. We have proposed a one-period planning model with infinite assembly capacity at all levels and for a known demand. The model calculates the mathematical expectation of the total cost composed of inventory holding costs for components at all levels and for the finished product and a backlogging cost for the finished product. The proposed lower and upper bounds and recursive function which expresses the dependence among levels, enabled us to study assembly systems with more than two levels and thus extend the results of (Yano 1987a[START_REF] Chu | Supply management in assembly systems[END_REF][START_REF] Tang | The detailed coordination problem in a two-level assembly system with stochastic lead times[END_REF], Hnaien et al. 2008a[START_REF] Fallah-Jamshidi | A hybrid multi-objective genetic algorithm for planning order release dates in two-level assembly systems with random lead times[END_REF], Hnaien et al. 2016[START_REF] Borodin | Component replenishment planning for a single-level assembly system under random lead times: A chance constrained programming approach[END_REF]. Specific techniques were introduced to reduce the initial cardinality of research space, they considerably decreases the percentage of branches to be pruned. In particular, an original technique, based on the Newsboy model, was developed to reduce the initial space of research. In addition, bounds were developed and a Branch and Bound procedure was suggested to determine planned lead times when the component lead times are independent and identically distributed discrete random variables.
The proposed model and optimisation algorithm were developed to support the decision maker during assembly contract negotiation with a customer in an ATO environment, under a complex multi-level structure of the supply network and uncertain partner lead times. They help the decision makers to define a due date for the finished product delivery and release dates for the components of the last level of the BOM.
The numerical results analyse the influences of the number of levels of the BOM, the number of components in each level, and backlogging and the inventory holding costs for the finished product. The efficiency of the proposed algorithm does not depend only on the number of components in the last level, but also on other parameters, such as finished product backlogging and holding costs and the number of levels. The proposed method is efficient for solving small and medium-sized problems, and its performance increases if, the backlogging cost greatly exceeds the inventory cost for the finished product.
Our approach is based on a one-period inventory model: for a given demand and due date, the optimization is done to determine optimal order release dates. This approach can be applied not only at the contract negotiation stage for Assembly-to-Order systems under lead time uncertainty when distributions of probability of lead times are available. For the multi-period inventory model, for example for MRP parameterization, it can be used to obtain approximate values of planned lead time parameters for MRP tables. The recursive functions developed and proved in this paper can be also used to generalize different known models, for example, those proposed by Ould [START_REF] Louly | Generalized newsboy model to compute the optimal planned lead times in assembly systems[END_REF], [START_REF] Dolgui | Supply planning for single-level assembly system with stochastic component delivery times and service level constraints[END_REF] and [START_REF] Shojaie | A Study on MRP with Using Leads Time, Order Quality and Service Level over a Single Inventory[END_REF], for multi-period models of one-level assembly systems under lead time uncertainty. The uncertainty of the demand can be also integrated as in [START_REF] Song | Contract assembly: Dealing with combined supply lead time and demand quantity uncertainty[END_REF].
The contributions of our study are to be seen in the light of the state-of-the-art results, because similar problems have already been studied in the management science literature. Our model is based on existing work, and we have proved that our model is more general, and that our method outperforms the existing ones.
From a practitioner's point of view, the interest in our approach lies in the fact that it can be used in many industrial situations, because there is no assumption on cost functions and distributions of probability of lead times. For example, we worked with ZF in France (Saint-Étienne), a German company that manufactures gearboxes. ZF classifies suppliers according to the statistics, and a safety coefficient is determined for each supplier to set the corresponding planned lead times in their MRP system. This coefficient is multiplied by the contractual lead time to obtain the planned one. This enables the company to anticipate delays by estimating the reliability of suppliers. In other words, the less reliable a supplier, the higher its coefficient. However, suppliers were considered independently, and coefficients were empirical. The synchronization aspects and costs were not taken into account. From the outcome of our investigation it is now possible to use our model to better estimate these coefficients by considering inventory and backlogging costs, the independence (synchronization) of suppliers via the assembly operations, and distributions of probability for supplier lead times.
It is clear that if some assumptions of our model are not respected, the solution obtained will therefore be approximate and not optimal. However, in real applications (with complex structures), decision makers often do not seek optimal solutions; approximate ones may be satisfactory if they propose good quality decisions.
Our future work will focus on the coupling of the analytical method with a genetic algorithm. The Branch and Bound procedure seems to be limited to small and medium sized problems.
To study assembly systems with many more components and levels, metaheuristics are necessary. A second objective is to extend this model and different proposed techniques to study coordination between parameters of different replenishment calculation tables in a complex MRP system, and in particular to calculate parameters of planned lead times for multi-level multi-period case. The main difficulty will be to express the total expected cost for multi-period planning of assembly systems.
Note the following notations:
Parameters:
T — due date for the finished product, T > 0
D — demand (known) for the finished product at the date T; without loss of generality, let D = 1
l — level in the bill of material (BOM), l = 1, …, m
Figure 4. Decomposition of the assembly system to several multi-level linear chains
Figure 5. A three-level assembly system
Figure 6. Upper limits
Figure 7. Calculation of the upper bound
Nodes in the first level of the search tree correspond to different release dates of the first component at level m. Vectors associated with these nodes are defined so that the first value corresponds to the order release date of the first component and the other values correspond to the order release dates of the other components at the last level. The search tree nodes are illustrated in Fig. 8.
Figure 10. Horizontal dependencies between nodes.
Figure 12. Percentage of cut branches at each level of search if RSR is used (%)
Table 4. Upper limits of the space of research
Table 5. Percentage of cut branches at each level of search if RSR is not used
Level of the search tree:   1      2      3      4      5      6      7
                          91.67  96.00  96.00  88.00  94.12  94.12  84.62
                          75.00  88.24  86.67  80.95  92.52  92.52  74.13
                          66.67  78.13  71.65  62.59  92.33  82.30  65.71
                          50.00  70.00  62.26  47.02  92.32  63.37  49.58
                          41.67  54.37  46.23  30.39  84.00  58.45  39.88
                          25.00  38.76  30.80  14.91  74.41  49.67  29.25
                           8.33  38.06  26.98  37.90  79.58  51.90  30.67
                           0.00  30.36  27.27  78.56  80.53  55.96  32.88
                           0.00  27.38  24.41  84.13  78.00  54.34  30.27
                           0.00  27.38  23.78  84.82  77.41  54.12  29.50
                           0.00  27.38  23.78  84.82  77.39  54.21  29.43
(each row corresponds to one of the eleven values of the ratio b/r)
Table 7. The gap between the exact solution provided by the Branch and Bound procedure and the upper bound (%)
Table 8. CPU times (seconds)
Table 11. The gap between the exact solution and the upper bound (%)
Table 12. The percentage of nodes visited by the Branch and Bound algorithm (%)
In Table 13, the average time in which the algorithm finds the exact solution is presented. It depends on the number of levels and on the ratio b/r. For one-level assembly systems, exact solutions are found in less than one second. For five-level systems and small ratios b/r, more than one hour is required to solve the problem. Finally, the CPU times do not depend only on the number of components at the last level, but also on other parameters such as the different costs and the number of levels.
Table 13. Average CPU times
Table 14. Average CPU times for B&B
Acknowledgements: This work is partially supported by Chinese Academy of Science Visiting Professorship for Senior International Scientists, grant number 2013T2J0054, the Ministry of Science and Technology of China under Grant 2016YFC0503606, by NSFC under grant number 71471055 and 91546102.
Let the vector
, are fixed and belong to , ;
,
∀ ∈ [ + 1; ], , are fixed and equal to ,
We introduce the following additional notations:
The component , is necessary to assemble the component , , it is the component in the search tree.
is defined as follows :
(7)
If , is the last component necessary to assemble , then is equal to .
If
, is not the last component necessary to assemble , so is equal to the sum of the number of components necessary to assemble the components , ∀ ∈ [1;
] from ( -1) level of the BOM. . We note that if = , = .
• , ∀ ; , , :
The assembly date of the component , . Components in the last level of the BOM of the assembly subsystem of , have order release dates , equal to or greater than , .
:
The maximum between and with the assembly date of the finished product. The order release dates , belong to the search intervals , ; , with , = -, and , = -.
The maximum between and with the assembly date of the finished product. The order release dates , are detailed in expressions (4), ( 5) and ( 6).
Proposition 4.
The lower bound corresponding to the vector , , is equal to:
The complete proof of the previous lower bound is detailed in Appendix A.4. Please also note that a depth-first search, a reduced space of research and other strategies based on vertical and horizontal dependencies between nodes are used to reduce the search complexity.
Vertical dependencies
Each lower bound is cut into three parts. This technique has already been used to calculate a lower bound for the Knapsack problem [START_REF] Kellerer | Knapsack Problems[END_REF][START_REF] Martello | Knapsack problems: algorithms and computer implementations[END_REF][START_REF] Martello | Knapsack problems: algorithms and computer implementations[END_REF]. The idea is not to redo some calculations where we explore in the depth. Let a node . To determine the lower bound ( ) related to this node, a part ( ) is calculated every time, another part ( ) is recovered from the parent node and a last part is calculated and is added to ( ) ; it constitutes the fixed part related to this node and to be used by son nodes. An illustration is given in Fig. 9.
Figure 9. Vertical dependencies between nodes.
Then, the lower bound ( ) is equal to the sum of ( ), ( ) and ( ) with:
It can be easily proven by devising the upper bound of a given node according to the upper bound of a son node.
Horizontal dependencies
As with vertical dependencies, it is easy to prove that a given node can be composed of fixed and varied parts. We note that is the quantity which is calculated and added to ( ) to determine ( + 1). So, ( + 1) = ( + 1) + ( ) + ( + 1) with:
In the first case, the gap between the upper bound (the minimum of the two candidate costs) and the exact solution is null. The Branch and Bound procedure is not required: the optimal solution is determined directly from the reduced space of research (0, 0, 0, 0, 0, 0, 0, 0).
Figure 11. Percentage of cut branches at each level of search if RSR is not used (%)
Appendix A. Proofs of the theoretical results
A.1. Proof of Property 1
The total cost is the sum of: Inventory holding or backlogging costs for the finished product. There will be a stockout of the finished product if, at least, one type of component at level 1 is delivered after the due date . Then the backlogging cost is equal to:
If all components , ∀ = 1, … , , are available before , the finished product may be assembled and stored. The corresponding inventory holding cost is equal to:
Inventory holding cost for components. There is inventory of the level-1 components during the time period between their arrival and the assembly date of the finished product. The corresponding inventory holding cost is equal to:
There are inventory for components , , = 2, … , -1, , ∈ , , during the time period between their arrival at , + , and , the assembly date of the component , . The corresponding holding cost The components , at the last level are ordered at the date , and are delivered at the date , + , . The assembly of the components , begins when all necessary components , ∈ , are available, i.e. at the date , . The corresponding components holding cost is equal to:
Then, the total cost is the sum of all these terms.
A.2. Proof of Property 2
To calculate the mathematical expectation of the total cost ⟦ ( , )⟧, we calculate , ⟦ ⟧, ⟦ ⟧ and , .
Let the following recursive function ; it allows the expression of the dependence among levels:
Let Z be a positive random discrete variable with a finite number of possible values. Its expected value is equal to:
This expression (A.6) is used to calculate , ⟦ ⟧, ⟦ ⟧ and , .
Knowing that:
And that ∀ = 1, … , , the random variables , + , are independents, so:
is calculated in the same way. By introducing the recursive function, we can easily deduce that ⟦ ≤ ⟧ = , , , 1 .
Then:
We note that ⟦ ⟧ is calculated in the same way and is equal to:
The expression of ⟦ ⟧ is calculated using expressions (A.7) and (A.8) and is equal to:
The assembly dates are positive random discrete variables with a finite number of possible values:
Knowing that:
And ∀ = 1, … , and ∀ = 1, … , -2, the random variables , + , are independents, so:
The recursive function , , , is called each time when it is necessary to determine the probability related to , . The expression of , can be written as follows:
Then, using expressions (A.7, A.8, A.9 and A.10), the total expected cost mentioned in expression (2) can be directly found. It is given by the next expression:
A.3. Proof of Property 3
The expression (3) can also be deduced from expression (11) in Hnaien et al. (2008b) by replacing ℎ by . * * can be an upper limit for , of the component , from the multilevel assembly system and the initial research space of possible solutions -, ;can be reduced to -, ; * * . For this, let us suppose that the vector * = , * , … , , *
, which is composed of order release dates, minimizes ⟦ ( * , )⟧. ∀( , ) ∈ [1;
] , this vector is defined as follows: The optimal solution * * has to satisfy, otherwise a neighbouring solution better than * * exists:
- * * -1 ≤ 0 Then, expression (3) can be proven. Therefore, to say that all order release dates are between -, and * * , we have to prove that ( * , * ) ≥ 0.
with:
Then after some algebraic transformations, we obtain: According to the expression (3), ∀ -≤ < :
We finally deduce that ( * , * ) ≥ 0. ∎
A.4. Proof of Proposition 4
The Lower bound corresponded to the vector whose values are defined in expressions (4), ( 5) and ( 6): | 63,014 | [
"15786",
"8541"
] | [
"489559",
"481384",
"489559",
"481384",
"469160"
] |
01622231 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01622231/file/numerical-pottier.pdf | Mahmoud Harzallah
email: [email protected]
Thomas Pottier
Johanna Senatore
Michel Mousseigne
Guénaël Germain
Yann Landon
Numerical and experimental investigations of Ti-6Al-4V chip generation and thermo-mechanical couplings in orthogonal cutting
The chip formation mechanism of Ti-6Al-4V remains a challenging problem in the machining process, as well as its modeling and simulation. Starting from experimental observations showing that, when machining the titanium alloy Ti-6Al-4V, the ductile fracture involved in chip formation is dominated by shear under high strain rate and temperature, the present work develops a new coupled behavior and damage model for a better representation and understanding of the chip formation process. The behavior and damage of Ti-6Al-4V have been studied via hat-shaped specimens at temperatures up to 900 °C and strain rates up to 1000 s⁻¹. An inverse identification method based on the Finite Element (FE) method is established in order to determine the constitutive law's parameters. The prediction of the segmented chip is analyzed through a 3D finite element orthogonal cutting model, which is validated by in-situ and post-mortem observations of orthogonal cutting. Finally, particular attention is focused on the chip formation genesis, which is described by three steps: Growth, Germination and Extraction.
Introduction
Machining is a common process that is widely used in the manufacturing of industrial parts. During the cutting process, the material is subjected to large strains at high strain rates which induce temperature increase (local heating) and a chip removal in complex conditions. The prediction of the process outputs (machined surface integrity, chip morphology and cutting forces) is an industrially and scientifically challenging task. Indeed the coupled nature of the phenomena requires setting up various models and often in a coupled manner. Moreover, the extreme velocity and temperature result in an additional difficulty related to models identification and validation.
Nowadays, thanks to significant advances in simulation techniques, various methods are used to enhance the quality of simulation predictions. Three main techniques were successfully used in previous studies: Finite Element (FE), Smoothed Particle Hydrodynamics, and Discrete Element Methods [START_REF] Calamaz | Toward a better understanding of tool wear effect through a comparison between experiments and SPH numerical modelling of machining hard materials[END_REF][START_REF] Iliescu | A discrete element method for the simulation of CFRP cutting[END_REF] . Among these techniques, the Finite Element Method is the oldest and was successfully implemented in various metals forming processes. It relies on a spatial discretization of the constitutive equations which may interact in a coupled manner.
The chip formation mechanism of many hard metals and especially Ti-6Al-4V, results in the generation of serrated chips. In such cases, the nature of the thermomechanical conditions drastically interlinks and therefore challenges the existing models. Many researchers addressed comprehensive and exhaustive studies on the physical phenomena involved in cutting such materials (adiabatic shear band, crack propagation) [START_REF] Komanduri | Some clarifications on the mechanics of chip formation when machining titanium alloys[END_REF][START_REF] Nakayama | Machining characteristics of hard materials[END_REF] .
In order to describe the chip formation process, various published papers addressed constitutive behavior and damage laws which provide good descriptions for a wide range of materials concerning finite strain, strain rate and temperature-dependent visco-plasticity [START_REF] Ayed | Experimental and numerical study of laser-assisted machining of Ti6Al4V titanium alloy[END_REF][START_REF] Liu | Evaluation of ductile fracture models in finite element simulation of metal cutting processes[END_REF] . The Johnson Cook behavior model [START_REF] Johnson | A constitutive model and data for metals subjected to large strains, high strain rates and high temperatures[END_REF] is among these laws and is still the most popular phenomenological constitutive law adopted to model the material behavior during the cutting process [START_REF] Ayed | Experimental and numerical study of laser-assisted machining of Ti6Al4V titanium alloy[END_REF][START_REF] Mabrouki | Some insights on the modelling of chip formation and its morphology during metal cutting operations[END_REF][START_REF] Mabrouki | A contribution to a qualitative understanding of thermomechanical effects during chip formation in hard turning[END_REF] . Calamaz [START_REF] Calamaz | A new material model for 2D numerical simulation of serrated chip formation when machining titanium alloy Ti-6Al-4V[END_REF] has shown that within a tight range of strain rate and temperature the Johnson-Cook model is able to fit properly many metal forming processes. Unfortunately, outside of this range, the flow stress is known as poorly extrapolated.
Based on Johnson Cook approach [START_REF] Johnson | A constitutive model and data for metals subjected to large strains, high strain rates and high temperatures[END_REF] , various researchers modified and/or extended the model to describe more accurately the flow behavior. These papers can be sorted into four groups.
The first group intended to modify the viscosity effect term. This term was the first to be changed by Holmquist and Johnson [START_REF] Holmquist | Determination of constants and comparison for various constitutive models[END_REF] to improve its impact at high strain rates by substituting it with a simple power law. Rule and Jones [START_REF] Wk | A revised form for the Johnson-Cook strength model[END_REF] modified this term to better describe the rapid increase of the stress around 10 3 s -1 in the case of copper and aluminum samples. Finally, a quadratic formulation was added to this term by Woo Jong Kang [START_REF] Woo | Crash analysis of auto-body structures considering the strainrate hardening effect[END_REF] to enhance the strain rate sensitivity effect for various sheet steels.
Modifications of the hardening term were addressed by Tan et al. [START_REF] Tan | A modified Johnson-Cook model for tensile flow behaviors of 7050-T7451 aluminum alloy at high strain rates[END_REF] for 7050-T7451 alloy in uniaxial isothermal tensile tests. The authors proposed a revised Johnson-Cook model by introducing a coupling between hardening and strain rate through hardening coefficients. Based on experimental observations, Khan et al. [START_REF] Khan | Quasi-static and dynamic loading responses and constitutive modeling of titanium alloys[END_REF] proposed an alternate formulation of the same coupling to describe the quasi-static and dynamic behavior in the case of titanium alloy Ti-6Al-4V. By considering a strong effect of the thermal softening on the strain hardening, a revised Johnson-Cook model was proposed by Vural and Caro [START_REF] Vural | Experimental analysis and constitutive modeling for the newly developed 2139-T8 alloy[END_REF] which provided a good correlation with the experimental data.
The third group focused on the thermal term. Inspired from Johnson-Cook model, several formulations were proposed by Lin et al. [START_REF] Lin | A modified Johnson-Cook model for tensile behaviors of typical high-strength alloy steel[END_REF] considering the coupling effects between the strain, strain rate and temperature on the flow stress. Li et al. [START_REF] Li | A modified Johnson Cook model for elevated temperature flow behavior of T24 steel[END_REF] proposed a new formulation which considers the coupling between strain and temperature at high strain rate on the flow behavior in the case of a hot compression of T24 steel. For machining process, Bäker [START_REF] Bäker | Finite element simulation of high-speed cutting forces[END_REF] modified the Johnson-Cook model by considering a strong coupling between the hardening and the thermal softening phenomena in order to describe the dominance of this latter at high temperatures. A loss of ductility was observed by Sartkulvanich et al. [START_REF] Sartkulvanich | Determination of flow stress for metal cutting simulation -a progress report[END_REF] in the case of the AISI 1045 in a temperature range between 200 °C and 400 °C. The authors modified the Johnson-Cook thermal term by adding an exponential formulation to reproduce this effect.
Finally, many extensions of the Johnson-Cook flow stress model were proposed by the fourth group. The first extension was added through a multiplicative term by Andrade et al. [START_REF] Andrade | Constitutive description of work-and shock-hardened copper[END_REF] to describe the decrease of the stress caused by the dynamic recrystallization in the case of copper compression tests at a strain rate of 10 -3 s -1 . To improve physical understanding of the chip formation during the cutting process, an extension known as Tanh term was proposed by Calamaz et al. [START_REF] Calamaz | A new material model for 2D numerical simulation of serrated chip formation when machining titanium alloy Ti-6Al-4V[END_REF] and used by Sima and Özel [START_REF] Sima | Modified material constitutive models for serrated chip formation simulations and experimental validation in machining of titanium alloy Ti-6Al-4V[END_REF] in order to describe the strain softening phenomenon effects at high temperature in titanium alloys.
In the present study, the focus on serrated chips has led to pay a special attention to the damage model. Indeed, material separation in such thermomechanical conditions involves complex physical mechanisms.
Because of the widespread applications involving large plastic deformations accompanied by rapid increase of temperature and damage evolution, many works were devoted to ductile fracture.
McClintock [START_REF] Mcclintock | A criterion for ductile fracture by the growth of holes[END_REF] and also Rice and Tracey [START_REF] Rice | On the ductile enlargement of voids in triaxial stress fields *[END_REF] proposed a theory based on the growth of cylindrical and spherical voids the model fracture. Porosity based fracture theories such as developed by Gurson [START_REF] Gurson | Continuum theory of ductile rupture by void nucleation and growth: part I -Yield criteria and flow rules for porous ductile media[END_REF] is another microstructure based model. On the other hand, phenomenological approaches have also been proposed by Cockcroft and Latham [START_REF] Cockcroft | Ductility and the workability of metals[END_REF] and many others since then.
Most of these models exhibited a high dependency to the stress triaxiality. In fact, based on experimental observations, many researchers showed the major impact of the hydrostatic pressure. Using round bar specimens, Bridgman [START_REF] Bridgman | Studies in large plastic flow and fracture: with special emphasis on the effects of hydrostatic pressure[END_REF] was the first to analyze the strain failure sensibility to the hydrostatic pressure in the neck of specimen. Under hydrostatic loading, Rice and Tracey [START_REF] Rice | On the ductile enlargement of voids in triaxial stress fields *[END_REF] described the growth of voids and cavities by a simple exponential expression as a function of the stress triaxiality . Especially, Rice and Tracey's expression became very popular in the fracture application and various researchers extended or modified the model to study the damage evolution. Johnson and Cook [START_REF] Johnson | Fracture characteristics of three metals subjected to various strains, strain rates, temperatures and pressures[END_REF] extended this expression by considering separately the strain rate sensitivity and temperature dependency, which allowed a broad application of this model in a wide array of thermomechanical processes [START_REF] Ayed | Experimental and numerical study of laser-assisted machining of Ti6Al4V titanium alloy[END_REF][START_REF] Mabrouki | Some insights on the modelling of chip formation and its morphology during metal cutting operations[END_REF] . However, the work of Wierzbicki et al. [START_REF] Wierzbicki | Calibration and evaluation of seven fracture models[END_REF] proved that the approximation of the failure strain with a monotonically decreasing function of stress triaxiality is poor, while a strong dependency to the third stress invariant is to be preferred. Wilkins et al. [START_REF] Wilkins | Cumulative-strain-damage model of ductile fracture: simulation and prediction of engineering fracture tests[END_REF] were the first to introduce separately the deviatoric stress and the Lode angle effects in their damage model. As an extension of Wilkins' model and Johnson-Cook's model, Wierzbicki and Xue [START_REF] Wierzbicki | On the effect of the third invariant of the stress deviator on ductile fracture[END_REF] postulated a new formulation by introducing the dependency to these two variables.
It can be mentioned that the dependency in temperature and strain rate is not explicitly addressed in the majority of damage models. By contrast, it is implicitly presented through their behavior law as a thermovisco-plastic evaluation of stress.
The present paper first proposes and details a new formulation of the Johnson-Cook model through its Ludwick hardening term. Its experimental identification through hat-shaped compression tests at various temperatures and strain rates is then presented. Validation is performed through a comparison with the Johnson-Cook model. Secondly, a modified maximum shear damage model and its calibration are detailed. Then, by means of a 3D FE orthogonal cutting model, the chip formation process is investigated and compared to experimental observations. These latter are performed in-situ, during the cutting process, at high magnification and high frame rate. The chip length, height, angles and frequencies are used for comparison purposes. The comparison to recorded cutting forces is also presented. Finally, a comprehensive discussion of the chip generation phenomenon is addressed.
Flow rule and parameter identification
Model description
The main goal of the proposed modeling is to prevent from an always controversial model selection process by assuming the simplest model for hardening namely the Ludwik's law [START_REF] Ludwik | Elemente Der Technologischen Mechanik[END_REF] which is a restriction of the Johnson Cook model. The coupling in strain-rate and temperature is then permitted through material parameters. The flow stress thus becomes:
σ = A(ε̇, T) + B(ε̇, T) · ε^{n(ε̇, T)}    (1)
where A(ε̇, T), B(ε̇, T) and n(ε̇, T) are respectively the yield strength, the hardening modulus and the hardening coefficient. These parameters exhibit a dependency on both the strain rate and temperature.
Mechanical tests
The parameter identification is performed through nine dynamic tests using hat-shaped specimen loaded in compression by a Gleeble 3500 testing machine. Various speed and temperature are investigated, ranging from 10 -1 mm s -1 to 10 3 mm s -1 and from 20 °C to 900 °C respectively. These tests were performed by Germain et al. [START_REF] Germain | Identification of material constitutive laws representative of machining conditions for two titanium alloys: Ti6Al4V and Ti555-3[END_REF] . Raw force/displacement curves are plotted in Fig. 1 .
As shown in Fig. 1 , the force is affected by both temperature and strain rate (since the proportionality between the crosshead speed and strain rate is assumed). As expected, the force decreases with temperature and increases with the strain rate. Particularly, a small effect of the strain rate at ambient temperature is observed between 10 -1 mm s -1 and 10 2 mm s -1 .
Model calibration procedure
The main advantage of the formalism proposed in Eq. ( 1) is that it does not presume any shape for the couplings, and users can fit their experimental points by any suited analytical function. Though a minimum of three tests are required to fully calibrate the proposed model, the explicit nature of the coupling leads to a direct relation between the identification quality and the amount of calibration tests, whereas it is not necessarily true if the interpolant shape is set a priori (e.g. the Johnson-Cook model).
Moreover, the univocal aspect of the calibration procedure ensures that a set of tests leads to one and only one parameter set. On the contrary, in the case of the Johnson-Cook model (and its by-products) several parameter sets can be obtained from the same measured data depending on which order is chosen for the identification (strain-rate term first or temperature term first).
The hat-shaped tests have already been used for analytical identification [START_REF] Hor | Modelling, identification and application of phenomenological constitutive laws over a large strain rate and temperature range[END_REF] . However, such procedure relies on several coarse assumptions such as: (i) rectangular region of interest (ROI), (ii) rigid body behaviors outside of the ROI, (iii) homogeneity of the strain rate over the material and (iv) stress uniaxiality. The strictness of such hypothesis led various authors like Germain et al. [START_REF] Germain | Identification of material constitutive laws representative of machining conditions for two titanium alloys: Ti6Al4V and Ti555-3[END_REF] to consider inverse identification in such cases and these studies proved this choice worthy. In the present paper, finite elements update (FEU) is used to retrieve material parameters from force mean square comparison and a simplex optimization algorithm. More details on the FEU approach can be found in Harzallah et al. [START_REF] Harzallah | A new behavior model for better understanding of titanium alloys Ti-6Al-4V chip formation in orthogonal cutting[END_REF] . The different steps of numerical implementation are summarized in Fig. 2 .
An axisymmetric finite element hat-shaped model is developed on ABAQUS Explicit platform. The sample is meshed by quadrilateral axisymmetric elements, coupled in temperature-displacement in reduced integration calculation (CAX4RT). Both the moving and fixed compression plates are modeled as analytical rigid bodies which are tied to the sample.
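Although the forward model itself runs in ABAQUS, the outer identification loop of Fig. 2 can be summarized by a few lines of driver code. The Python sketch below is only a schematic of that loop: run_fe_simulation stands for a call to the axisymmetric hat-shaped model described above, and the tolerances and iteration limits are illustrative, not those actually used.

    import numpy as np
    from scipy.optimize import minimize

    def run_fe_simulation(params, displacements):
        """Placeholder: launch the axisymmetric hat-shaped FE model with the trial
        parameter set (A, B, n) and return the computed force at each displacement."""
        raise NotImplementedError

    def identify(u_exp, f_exp, initial_guess):
        def cost(params):
            f_num = run_fe_simulation(params, u_exp)
            return np.sum((f_num - f_exp) ** 2)      # least-squares gap on the force
        result = minimize(cost, initial_guess, method="Nelder-Mead",
                          options={"xatol": 1e-3, "fatol": 1e-2, "maxiter": 200})
        return result.x                              # optimized (A, B, n) for this test

    # one (A, B, n) set is identified per test, i.e. per (strain rate, temperature) pair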
The iterative solving procedure leads to a simultaneously optimized parameter set ( A, B, n ) for each of the nine tests at various strain-rates and temperature. On Fig. 3 , these identified parameters are plotted as black dots.
The obtained parameter sets allow to introduce the suited analytical description of the thermo-visco-plastic coupling. It can be seen from Fig. 3 that plane fitting of the all three parameters within the temperature-strain-rate space properly approximates the experimental data. Based on these observations, the evolution of the parameters as a function of temperature and strain rate is described by planes. For each parameter, the plane equation is expressed as ( Eq. 2 ):
P(T, ε̇) = −a · T − b · log(ε̇) − c,   P ∈ {A, B, n}    (2)
where a, b and c are the constitutive parameters of the law. These values are reported in Table 1 .
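To make the evaluation of the coupled law explicit, the sketch below first interpolates each parameter from its plane in the (temperature, log strain-rate) space (Eq. (2)) and then evaluates Eq. (1); the (a, b, c) triplets are placeholders for the values of Table 1, and a base-10 logarithm is assumed for the strain-rate axis.

    import numpy as np

    def plane(coeffs, T, strain_rate):
        """Eq. (2): P(T, strain rate) = -a*T - b*log10(strain rate) - c (log10 assumed)."""
        a, b, c = coeffs
        return -a * T - b * np.log10(strain_rate) - c

    def flow_stress(eps_p, strain_rate, T, coeffs_A, coeffs_B, coeffs_n):
        """Eq. (1): coupled Ludwick law with strain-rate and temperature dependent
        yield stress A, hardening modulus B and hardening exponent n."""
        A = plane(coeffs_A, T, strain_rate)
        B = plane(coeffs_B, T, strain_rate)
        n = plane(coeffs_n, T, strain_rate)
        return A + B * eps_p ** n

    # coeffs_A, coeffs_B, coeffs_n are the (a, b, c) triplets of Table 1 (placeholders here)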
Comparison with the Johnson-Cook model
In order to assess the improvement brought by the proposed model, a comparison with the Johnson-Cook constitutive law is performed. In fact, this latter model describes the flow stress of materials as a multiplication of three terms: hardening term of Ludwick [START_REF] Ludwik | Elemente Der Technologischen Mechanik[END_REF] , viscosity (strain rate) and thermal dependency ( Eq. 3 ).
σ = [A + B ε^n] · [1 + C ln(ε̇/ε̇₀)] · [1 − ((T − T_r)/(T_m − T_r))^m]    (3)
where ε, ε̇ and ε̇₀ are respectively the plastic strain, the strain rate and the reference plastic strain rate, and T, T_r, T_m are the temperature, the room temperature and the melting temperature of the workpiece material. A, B, C, m, n are material parameters to be calibrated. For this task, the same FEU identification procedure applied earlier for A(ε̇, T), B(ε̇, T) and n(ε̇, T) and presented in Fig. 2 is used again to identify the 5 parameters of the Johnson-Cook law at once. A Levenberg-Marquardt algorithm [START_REF] Marquardt | An algorithm for least-squares estimation of nonlinear parameters[END_REF] was used for a simultaneous identification of the set of parameters (A, B, C, m, n) over the whole experimental database at once (i.e. the 9 tests). This latter algorithm is proven worthy when the number of parameters becomes consequent [START_REF] Ponthot | A cascade optimization methodology for automatic parameter identification and shape/process optimization in metal forming simulation[END_REF]. In addition, the quality of the optimal solution was verified by checking that the same optimum set is reached from various initial sets of parameters. The identified Johnson-Cook parameters are reported in Table 2.
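For the comparison of Fig. 4, the Johnson-Cook law of Eq. (3) can be evaluated in the same spirit; the example call below uses placeholder parameter values rather than those of Table 2, and the melting temperature of Ti-6Al-4V is taken as approximately 1660 °C.

    import numpy as np

    def johnson_cook(eps_p, strain_rate, T, A, B, C, m, n,
                     eps0=1.0, T_room=20.0, T_melt=1660.0):
        """Eq. (3); temperatures in deg C, T_melt of Ti-6Al-4V assumed close to 1660 deg C."""
        hardening = A + B * eps_p ** n
        viscosity = 1.0 + C * np.log(strain_rate / eps0)
        thermal = 1.0 - ((T - T_room) / (T_melt - T_room)) ** m
        return hardening * viscosity * thermal

    # example with placeholder parameters (not the identified values of Table 2)
    # sigma = johnson_cook(0.2, 1.0e3, 500.0, A=900e6, B=700e6, C=0.03, m=0.8, n=0.4)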
The optimum set of parameters for the two models (the proposed one and J-C) are used in a finite element simulation of every hat-shaped compression tests. This allows obtaining numerical loading forces along with the corresponding displacements . Both models are compared to experimental data for various temperature and strain rate ( Fig. 4 ).
It can be seen that despite the plane approximation, the proposed model properly predicts the experimental data and gives better results than the Johnson-Cook model under all conditions. Because of the constant parameters of the hardening term in the Johnson-Cook law, the shape of the hardening curves remain the same over all conditions and such hypothesis is here proven quite erroneous.
At ambient temperature (20 °C), the proposed model properly fits the experimental behavior while the Johnson-Cook model shows an acceptable error. But within the wide range of temperature and strain rate covered in this application, the error increases as a function of these phenomena for both models. Nevertheless the magnitude of this error remains smaller for the proposed model than for the JC model.
Damage model implementation
After calibration and validation of the proposed behavior law, a particular attention is paid to the damage model through the coupling between phenomena in the chip formation process.
The experimental observations performed by Pottier et al. [START_REF] Pottier | Sub-millimeter measurement of finite strains at cutting tool tip vicinity[END_REF] proved that a ductile fracture occurs during the machining process that is caused by extensive plastic deformation induced by the shear phenomena in the material. From a micromechanical stand point, it is related to nucleation, growth and coalescence of void provoked by the increase of the density of dislocations under high temperature and strain rate. From a phenomenological stand point, the ductile fracture is described as an accumulation of plastic shear strain induced by the process. The damage modeling classically relies on a cumulative formulation of the damage internal variable D , of which the evolution throughout plasticity is defined by ( Eq. 4 )
Ḋ = ε̄̇_p / ε̄_f    (4)
Such a formalism requires the assessment of the strain at failure ε̄_f. Various relations to the mechanical fields were developed for this purpose. However, the shear nature of the loading and the narrow range of stress triaxiality involved in cutting have led to consider a maximum shear failure criterion as suited for this study. The strain rate and temperature dependency is thus addressed through the material parameters, as proposed above for the hardening law.
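In an explicit FE implementation, the cumulative form of Eq. (4) is integrated increment by increment at each material point; the short sketch below illustrates this update, with failure (e.g. element deletion) assumed to be triggered when D reaches 1.

    def update_damage(D, d_eps_p, eps_failure):
        """One increment of Eq. (4): dD = d(eps_p) / eps_f(lode angle, T, strain rate)."""
        D += d_eps_p / eps_failure
        failed = D >= 1.0          # element deletion / loss of load-carrying capacity
        return D, failed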
The maximum shear (MS) damage criterion in the spherical coordinate system (σ̄, θ, η)
The maximum shear (or Tresca) damage criterion can be expressed in terms of principal stresses by Eq. (5):
τ_max = (σ₁ − σ₃)/2 = τ_f    (5)
From geometrical considerations, the maximum shear damage criterion can be transposed within the (σ̄, θ, η) space using Eq. (6).
S₁ = (2/3) σ̄ cos(θ)
S₂ = (2/3) σ̄ cos(2π/3 − θ)
S₃ = (2/3) σ̄ cos(4π/3 − θ)    (6)
where σ̄ is the second stress invariant (von Mises equivalent stress), θ is the Lode angle and the Sᵢ are the principal components of the stress deviator tensor. More details on these transformations can be found in references [START_REF] Malvern | Introduction to the mechanics of a continuous medium[END_REF][START_REF] Bai | Application of extended Mohr-Coulomb criterion to ductile fracture[END_REF]. Through Eq. (6), an expression of the three principal stresses can straightforwardly be obtained as a function of σ̄, θ and the mean stress σ_m (first invariant) as follows:
σ₁ = σ_m + S₁ = σ_m + (2/3) σ̄ cos(θ)
σ₂ = σ_m + S₂ = σ_m + (2/3) σ̄ cos(2π/3 − θ)
σ₃ = σ_m + S₃ = σ_m + (2/3) σ̄ cos(4π/3 − θ)    (7)
By introducing these expressions of σ₁ and σ₃ into the maximum shear criterion (Eq. 5), the model can be expressed as a function of the normalized Lode angle θ̄ by Eq. (8):
σ̄ = [ (1/√3) cos(π θ̄ / 6) ]⁻¹ · τ_f    (8)
with θ̄ = 1 − 6θ/π. However, damage criteria are usually described and implemented through the equivalent strain at failure ε̄_f. For that purpose, Eq. (8) is combined with the coupled Ludwick flow law presented above. The coupled parameters A, B and n identified previously are reused to enrich the damage description and its dependency to both temperature and strain rate. The maximum shear damage criterion therefore provides an expression of the equivalent strain at failure as a function not only of θ̄ and τ_f but also of T and ε̇, as in Eq. (9):
ε̄_f = [ ( √3 τ_f(ε̇, T) / cos(π θ̄ / 6) − A(ε̇, T) ) / B(ε̇, T) ]^{1/n(ε̇, T)}    (9)
However, Bai and Wierzbicki [START_REF] Bai | Application of extended Mohr-Coulomb criterion to ductile fracture[END_REF] showed that only under the plane stress condition can the stress triaxiality η be related to the Lode angle (Eq. 10).
ξ(η) = cos(3θ) = cos[ (π/2)(1 − θ̄) ] = −(27/2) η (η² − 1/3)    (10)
It can be mentioned that only one additional parameter needs to be calibrated, namely the maximum shear stress at failure τ_f. It can be identified from any kind of test of known Lode angle, through the principal stresses at the point of fracture, and should also depend on strain rate and temperature, τ_f = τ_f(ε̇, T), see reference [START_REF] Wierzbicki | Calibration and evaluation of seven fracture models[END_REF].
Fig. 5. The τ_f(ε̇, T) parameter identified evolution (black dots, in Pa) and its least-square plane fit in the temperature/strain-rate space.
Table 3. The identified parameters of τ_f: a = 7.17×10⁵, b = −9.3×10⁷, c = −6.96×10⁸.
MS damage criterion calibration
As recommended by Wierzbicki et al. [START_REF] Wierzbicki | Calibration and evaluation of seven fracture models[END_REF], choosing shear tests with θ̄ = 0 eases the identification process. For that reason, experimental values of the shear stress at failure are obtained from the hat-shaped tests presented above. An inverse identification is performed to obtain the stress fields corresponding to the global displacement observed at failure. The stress fields are obtained from the identified flow model.
The obtained values of τ_f(ε̇, T) are shown in Fig. 5 and the parameters of the fitted least-square plane are summarized in Table 3. Accordingly, this parameter is described as a function of temperature and strain rate through Eq. (2).
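A least-square plane fit such as the one of Fig. 5 can in principle be reproduced as below; the data points are invented placeholders and the affine form τ_f = a·ε̇ + b·T + c is only an assumed stand-in for Eq. (2), whose exact expression is defined earlier in the paper.

```python
import numpy as np

# Ordinary least-squares plane fit of tau_f over the (strain-rate, temperature)
# space, assuming (illustration only) tau_f = a*eps_rate + b*T + c.
eps_rate = np.array([100.0, 500.0, 1000.0, 500.0])   # s^-1 (placeholder data)
T        = np.array([20.0, 300.0, 600.0, 900.0])     # degC (placeholder data)
tau_f    = np.array([6.0e8, 5.2e8, 4.1e8, 3.0e8])    # Pa   (placeholder data)

M = np.column_stack([eps_rate, T, np.ones_like(T)])
(a, b, c), *_ = np.linalg.lstsq(M, tau_f, rcond=None)
print(a, b, c)
```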
The shape of the fracture locus is presented in Fig. 6a for a given temperature and strain rate. The strain at failure value used for calibration is determined from the optimized hat-shaped numerical simulation under the corresponding loading conditions. No direct dependence on the stress triaxiality ratio appears; this coupling is nevertheless recovered through the coupled nature of the Lode angle and η. By means of Eq. (10), this point is highlighted in Fig. 6b in the case of the plane stress hypothesis, where the fracture locus clearly depends on η.
In addition, as mentioned by Bao and Wierzbicki [START_REF] Bao | On the cut-off value of negative triaxiality for fracture[END_REF], at low triaxiality the nature of the metallurgical phenomena related to damage changes, so that cracks can no longer propagate due to the highly compressive stress state; this leads to experimental observations where the material is clearly affected but exhibits no failure. For this reason, the value of the strain at failure is set to 10 when the loading triaxiality ratio decreases below η_sat = −1/3.
Chip formation simulation
By means of ABAQUS/Explicit and a user material subroutine (VUMAT), the damage and behavior models are implemented into a 3D orthogonal cutting FE model.
In addition to these two constitutive laws, the simulation of the chip formation process also requires a good control and description of many parameters: (i) parts geometries, (ii) boundary conditions, (iii) friction evolution.
Initial geometry, meshes and boundary conditions, and material properties
Thermomechanical fields are calculated using 3D continuum elements with reduced integration (C3D8RT). The mesh is refined in the region of interest in order to enhance the accuracy at the tool tip. The smallest element size in the model is about 25 µm while the coarsest ranges from 200 to 500 µm.
The tungsten carbide tool is considered deformable and modeled by a thermo-elastic law. It has an edge radius of 20 µm, a clearance angle of 11°, and two rake angles are tested, namely −5° and 15°.
The dimensions of the workpiece are 10 mm in length, 1.5 mm in width (w = 3 mm, but a plane of symmetry is defined) and 1.7 mm in height. It is modeled by a single partition in order to avoid restrictive modeling hypotheses, which favors the chip separation and helps the simulation to be more realistic.
As summarized in Fig. 7 , nodes on the back surface of the tool are locked over 6 dof. The bottom surface of the workpiece is only free to translate along the X axis. The displacement of the nodes on the back surface of the workpiece is imposed with a constant velocity that equals the desired cutting speed.
The 3D finite element model is set up under a symmetry condition (Fig. 7) in order to investigate the evolution of the damage mechanism both under the plane strain assumption (center of the chip, y = w/2) and under the plane stress assumption (free side surface, y = 0).
Contact and friction modeling
Another important feature of finite element based cutting simulation is the friction law. It is influenced by many factors such as sliding velocity, local contact pressure, temperature, and the tool and workpiece materials, as shown by Ben Abdelali et al. [START_REF] Abdelali | Experimental characterization of friction coefficient at the tool-chip-workpiece interface during dry cutting of AISI 1045[END_REF]. Due to its simplicity and its availability in all FE codes such as Abaqus [43], the Coulomb friction model is commonly used for this application, as done by Bäker [START_REF] Bäker | Finite element simulation of high-speed cutting forces[END_REF]. Extensive studies of the tool/chip interface were carried out by Puls et al. [START_REF] Puls | Experimental investigation on friction under metal cutting conditions[END_REF], which showed strong material adhesion in the tool tip vicinity. When moving along the rake face, a sliding motion of the chip was observed.
Accordingly, a stick-slip friction model was developed by Zorev [START_REF] Zorev | Interrelationship between shear processes occurring along tool face and on shear plane in metal cutting[END_REF]. It advocates the existence of two distinct contact regions (Fig. 8): a sticking region around the tool tip, where the shear stress τ_f is assumed to be equal to the shear yield stress of the material τ_y, whereas in the sliding region the frictional stress is lower than the shear yield stress. Based on these assumptions, a Coulomb-Tresca model is adopted to define the tool-chip interface contact, described as follows (Eq. 11):
τ_f = µ σ_n    if µ σ_n < τ_y    (sliding region)
τ_f = τ_y      if µ σ_n ≥ τ_y    (sticking region)        (11)
where σ_n is the normal stress; the Coulomb friction coefficient µ is here set to 0.2, as proposed by Zhang et al. [START_REF] Zhang | On the selection of Johnson-Cook constitutive model parameters for Ti-6Al-4V using three types of numerical models of orthogonal cutting[END_REF].
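The contact law of Eq. (11) simply amounts to capping the Coulomb friction stress by the shear yield stress; a one-line sketch follows (the numerical values are arbitrary).

```python
def friction_stress(sigma_n, tau_y, mu=0.2):
    """Coulomb-Tresca contact law (Eq. 11): Coulomb stress mu*sigma_n in the
    sliding region, capped by the shear yield stress tau_y in the sticking region."""
    return min(mu * sigma_n, tau_y)

print(friction_stress(sigma_n=1.0e9, tau_y=5.0e8))   # 2.0e8 Pa -> sliding
print(friction_stress(sigma_n=5.0e9, tau_y=5.0e8))   # 5.0e8 Pa -> sticking
```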
Materials, machining parameters and contact conditions
The physical properties of the tool and the workpiece, as well as the contact conditions, are reported in Table 4, whereas the constitutive and damage model parameters mentioned above are given in Table 1.
Results and discussion
The present section deals with the numerical and experimental results obtained from orthogonal cutting of titanium alloy Ti-6Al-4V. In order to validate the orthogonal cutting model developed in the present work, the numerical results are compared to experimental data. The evolutions of the chip size and cutting forces are thus monitored and compared. Moreover, a particular attention is paid to the chip morphology in terms of segmentation and the physical mechanism governing the particular chip shape generation.
Finite element chip morphology results
The simulations carried out are presented in Fig. 9 . The damage distribution corresponding to material degradation during the chip formation process is presented for four machining conditions in terms of cutting speed (15 and 25 m min -1 ) and rake angle ( -5°and + 15°).
Under all machining conditions, a segmented chip morphology and quasi-periodic cracks are observed. The material separation initiates at the free side surface, close to the tool tip, and propagates within the material. It can be noticed that most of the damage is localized at the tool-chip interface (secondary shear zone) and in the shear band (primary shear zone). The segment morphology is greatly affected by the rake angle, while the cutting speed seems to have less influence. These observations remain to be confirmed by experimental comparison, which is the point of the next section.
Experimental setup
One of the main improvements of this work is the use of a high-speed camera equipped with high-magnification optics, giving access to the cutting region and therefore allowing the chip formation mechanisms to be observed and analyzed.
The orthogonal cutting configuration is obtained through a specific test device (DEXTER) developed for this application. The relative speed between the tool and the workpiece is obtained using a linear axis which allows varying the cutting velocity up to 120 m min⁻¹. The uncoated carbide tools (rake angles −5° and +15°) are fixed on a 6-component dynamometer (Kistler 9257A) in order to measure the force components for each cutting configuration.
The workpiece is polished and etched for 10 s with Kroll's reagent to reveal the Ti-6Al-4V microstructure, thus permitting a better accuracy in the image analysis. A depth of cut of 0.25 mm is selected and two cutting speeds are investigated (15 and 25 m min⁻¹).
The optical device ( Fig. 10 ) consists of a Photron SA3 camera with CMOS sensor, coupled to a reflective Schwarzschild objective (magnification X15). A 120 W halogen light guide is used to illuminate the scene.
The use of a high-speed camera requires a compromise between frame rate, frame resolution and cutting velocity in order to obtain acceptable (unblurred) images. Consequently, these parameters are adjusted for each test, as highlighted in Table 5. These settings ensure a pixel size of 1.133 µm/pixel.
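The compromise can be checked with a simple motion-blur estimate: the distance travelled by the workpiece during one exposure should remain close to one pixel. With the pixel size and the settings of Table 5:

```python
# Motion blur estimate: distance travelled during one exposure, in pixels.
pixel_size_um = 1.133
for v_c_m_min, exposure_us in ((15.0, 5.0), (25.0, 2.5)):
    v_um_per_us = v_c_m_min * 1e6 / 60e6        # m/min -> um/us
    blur_px = v_um_per_us * exposure_us / pixel_size_um
    print(v_c_m_min, "m/min:", round(blur_px, 2), "pixel(s) of blur")
# -> about 1.1 px at 15 m/min and 0.9 px at 25 m/min: roughly one pixel in both cases
```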
Chips morphology
All the performed cutting tests led to serrated chips, generated from periodic cracks propagation ( Fig. 11 ). It is also found that the chip morphology differs whether positive or negative rake angles are used.
It must be mentioned that a significant transversal (out-of-plane) deformation of the chip surface is observed for all cutting conditions. As depicted in Fig. 12, further altimetry investigations by extended field confocal microscopy (EFCM) showed that the magnitude of this deformation increases with negative rake angles. A significant swell is observed in the secondary shear zone and, more specifically, in the sticking region (see Section 4.2). Another bulge is observed atop the segment; it corresponds to the tertiary shear zone of the previous cut. The magnitude of this deformation doubles when the rake angle goes from 15° to −5°. This latter finding suggests that such angles affect the generated surface more deeply and more significantly. However, further investigations need to be conducted to be conclusive on this matter.
Thanks to the fast imaging technique, the serrated chip morphology can be analyzed in more detail and characterized by the segment length (L) and the shear angle, as defined in Fig. 11. These parameters were measured for each segment of each test in order to investigate the evolution of the chip shape with respect to the rake angle and the cutting velocity.
The whole captured sequence for each test is described by its number of frames and the corresponding number of segments, as summarized in Table 6 .
To analyze the chip segmentation phenomenon, several parameters are defined in the literature, such as the chip segmentation frequency (Hz) and the segmentation intensity [START_REF] Atlati | Analysis of a new segmentation intensity ratio "SIR " to characterize the chip segmentation process in machining ductile metals[END_REF][START_REF] Ducobu | Numerical contribution to the comprehension of saw-toothed Ti6Al4V chip formation in orthogonal cutting[END_REF]. However, these do not allow comparisons with other tests performed under different machining conditions. Consequently, in the present work, the metric frequency of segmentation (in segments per millimeter) is preferred.
As it can be seen in Table 6 , the metric segmentation frequency strongly depends on cutting conditions. It increases with both the cutting speed and the rake angle. This parameter is directly linked to the segment length, but may also be influenced by the shear angle. Fig. 13 depicts the distributions and evolutions of the segments lengths and the shear angles.
It can be seen from Fig. 13a, c, e and g that the experimental segment length distributions seem to be normally distributed and exhibit a strong dispersion for all tests. Compared with the numerical results, the average value of the segment length is relatively well simulated. With the increase of the cutting speed (tests 2 and 4), the segment length becomes slightly smaller and, consequently, the metric frequency of segmentation becomes higher (Table 6). Moreover, with the use of a negative rake angle, the segment length is strongly reduced. More precisely, the stiffness degradation in the primary shear zone makes the chip deform in a different way as the shape of the shear plane changes. With a positive rake angle (tests 1 and 2), the shear surface can be described properly by a simple plane whose angle mainly lies between 39° and 45° (Fig. 13b and d). By contrast, the use of a negative rake angle gives rise to an additional compression component in the chip during the process, which induces deformation in the shear plane and consequently widens the dispersion of the shear angle, mostly comprised between 30° and 45° (Fig. 13f and h). These experimental findings show the different natures of the mechanical loadings responsible for the material failure in the primary shear zone.
Fig. 14 shows the last computed state of strain before material failure for many points along the primary shear plane.
It can be seen that the loadings leading to failure differ as the rake angle changes. For a positive rake angle, shear is clearly the leading mode of failure. However, a compressive/shear deformation is observed in the case of the negative rake angle. Near the tool tip, the state of strain is almost strictly compressive, while moving away from the tool tip the shear phenomenon becomes more and more significant. Multiple conclusions can be drawn from these observations and computations: first, the primary shear zone does not only withstand shear; second, the nature of the mechanical loading is highly related to the cutting geometry (rake angle); and third, the stochastic nature of the observed segment shapes is hardly predicted by the computation, and models of a more probabilistic nature should be added at some point.
Based on these observations, it appears that the shear plane is more strongly affected by the rake angle than by the cutting velocity. Consequently, this tends to prove that the shear angle remains mostly a geometric parameter, and it brings to light several open questions about the influence of the hydrostatic pressure on the shape and geometry of the shear band.
Nevertheless, a great dispersion is observed in the chip parameters (shear angle, length) and in the chip shape throughout the tests with a negative rake angle. This highlights the stochastic aspect of the cutting mechanisms and indicates that this aspect is more pronounced for negative rake angles.
For all tests, it can be seen that the simulated results have the same order of magnitude as the experimental values. However, it is very noticeable that the shape distribution of the computed segments does not exhibit the same variation as the observations. This issue may have two possible origins: (i) the geometric restriction in the Lagrangian finite element model, and (ii) the deterministic nature of finite elements, even though numerical instability may lead to some kind of randomness.
Cutting forces
Simulated cutting forces under different machining conditions are presented and compared to experimental data (Fig. 15). It can be noted that the force is affected by the rake angle and the cutting velocity: it increases when the rake angle becomes negative or when the cutting velocity decreases. An important variation (up to 500 N) is observed for the numerical forces. It can be explained by several factors such as the element deletion method, the assumption of a constant cutting velocity and the mesh size.
With a positive rake angle, the measured cutting forces are correctly fitted by the numerical ones. Nevertheless, they are slightly underestimated for the negative rake angle. This can be related to the friction coefficient, which is kept constant over all simulations.
In summary, the simulation results show a good agreement with the experiments. The model predicts the chip morphology and the cutting forces with an acceptable error.
Chip formation mechanisms
The use of a reliable finite element model allows improving the understanding of the serrated chip generation process. More specifically, particular attention is paid here to the chip formation mechanisms. The numerical results presented in Fig. 16 lead to splitting the chip generation process into three successive steps. This description is backed by the experimental observations presented by Pottier et al. [START_REF] Pottier | Sub-millimeter measurement of finite strains at cutting tool tip vicinity[END_REF], who also proposed to consider three different sub-processes leading to serrated chips. Indeed, Fig. 16 shows the evolution of the equivalent von Mises stress, temperature, strain rate and triaxiality ratio during the formation of a single segment. The three successive sub-processes are: • Germination: during this phase, a linear evolution of the von Mises stress is observed due to a constant compressive loading (η ≈ −1/3) from the tool tip. Elastic energy is stored within the segment (no thermal dissipation is observed) while plasticity develops in the primary shear zone. This loading induces (i) an out-of-plane deformation of the chip (bulge), also clearly visible on the videos (see Section 4.3), and (ii) a strong hydrostatic compressive zone at the tool tip. The end of this stage is defined by the onset of a micro crack at the tool tip, i.e. the damage parameter reaches unity in the first element, which is thus deleted.
• Growth: it describes the crack evolution along the primary shear plane. It is characterized by a rapid increase of both temperature (300 °C µs⁻¹) and strain rate (up to 700 s⁻¹). Because of hardening, the von Mises stress remains approximately constant while the thermal dissipation is massive (Taylor-Quinney coefficient set to 0.9). The triaxiality ratio lies within the range [−0.1, 0.1], clearly indicating that shear is the driving mechanism of this phase. These coupled phenomena activate, firstly, the strain accumulation in the shear band (which pushes the segment backward) and, secondly, the crack propagation along the same direction. This crack starts at the tool tip and evolves inside the shear zone toward the free chip surface, as depicted in Fig. 16. The end of this stage is reached when the crack reaches the top surface and no further strain can be accumulated in the shear band.
• Extraction: the von Mises stress decreases as the segment moves upward along the rake face and leaves the loaded zone. The temperature stabilizes as natural convective-radiative cooling starts (in Fig. 16 the temperature remains constant instead of decreasing because adiabatic boundary conditions are prescribed). The damage parameter keeps increasing at the tool/chip interface, leading to a slight increase of the hydrostatic compressive zone. The friction upon the next segment leads to a slightly positive triaxiality ratio that remains constant as the strain rate tends toward zero. The segment is then fully formed.
For each numerical test, an element in the chip segmentation zone was chosen, and the evolution of the heating rate Ṫ (°C µs⁻¹), the maximal temperature T_max (°C) and the strain rate ε̇ (s⁻¹) are reported in Table 7. Despite the change of machining conditions (cutting speed and rake angle), it can be underlined that the chip formation mechanisms remain the same. The magnitudes of the strain rate and temperatures involved change, but the same three sub-processes are observed.
To improve the physical comprehension of the chip formation, complementary SEM observations of the chips were carried out. As shown in Fig. 17, a material crack is observed on the free side of the chip but is not observed in the stress triaxiality zone. From the observation of Fig. 17b, it can be seen that a classical ductile fracture is involved. At another magnification, it is observed on the fracture surface that the material flow tends to converge from the side to the center of the cut (Fig. 17c). These observations are consistent with the numerical results presented above and confirm the ability of the proposed model to predict the specific kind of fracture involved in segmented chip generation. Nevertheless, these evidences of a ductile fracture do not allow one to claim the presence or absence of so-called adiabatic shear bands, such as described by Rittel and Wang [START_REF] Rittel | Thermo-mechanical aspects of adiabatic shear failure of AM50 and Ti6Al4V alloys[END_REF]. It rather seems that both phenomena are consecutive to one another.
Conclusions
This work brings to light the Ti-6Al-4V chip formation problem. The principal aim of this contribution is the comprehension and modeling of the physical phenomena inducing the segmented chip during the machining of the Ti-6Al-4V alloy.
Based on the Johnson-Cook model, a new behavior model is proposed to ensure a good description of the tight coupling between temperature and strain rate. This model is calibrated by an inverse identification method through dynamic hat-shaped tests performed at temperatures up to 900 °C and strain rates up to 1000 s⁻¹. The new behavior model properly predicts the experimental data and gives better results than the Johnson-Cook model under all machining conditions.
Since the chip formation results from plastic deformation induced by shear, the maximum shear damage criterion was chosen in this study to describe the ductile fracture during chip formation. Its Lode angle dependency and its coupling to the behavior model lead to a better description of the damage phenomenon. In addition, it requires only one test for calibration.
A 3D orthogonal cutting model is set up under a symmetry hypothesis in order to exhibit the capability of the model to reproduce the chip formation mechanisms. In addition, to validate this numerical approach, experimental machining tests using a specific test bench were performed and observed by means of a high-speed camera. A good agreement is observed between the numerical and experimental results in terms of cutting forces and chip geometries. Nevertheless, the presented experimental observations lead to considering the chip generation problem as far more stochastic than previously considered in the literature. This matter should be addressed in future works.
Based on orthogonal cutting simulation results and experimental investigations, it appears that the chip formation process results from a coupled development of both adiabatic shear band and crack propagation which start at the tool tip and evolve inside the shear zone toward the free surface. It can be described by three steps: Germination, Growth and Extraction.
Future works will focus on in-situ thermal and strain field observations through the use of high-speed imaging and infrared thermography, in order to reinforce the numerical and experimental findings. To improve the ability of the proposed model to simulate the cutting mechanisms more accurately, further work also has to be done on the development of a friction law that represents more faithfully the phenomena occurring at the tool-chip interface and that takes the machining parameters into account.
Fig. 1. (a) Hat-shaped specimen geometry (mm); (b) Experimental force-displacement curves of titanium alloy Ti-6Al-4V.
Fig. 2. Iterative solving flowchart of the inverse problem.
Fig. 3. Modeling of the coupling for A, B and n. The best fit of each experimental response is plotted as black dots.
Fig. 4. Comparison between experimental data, proposed model and Johnson-Cook model curves under: (a) Temperature (b) Strain rate.
Fig. 6. (a) Shape of the identified fracture locus in the (σ̄, θ, σ_m) space at T = 20 °C and a rate of 1 mm s⁻¹. (b) Restriction to the plane stress case. The calibration point is depicted as a red star (ε_f = 0.213). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 7. Orthogonal cutting model: geometries and boundary conditions.
Fig. 9. Numerical results under all machining parameters.
Fig. 10. Experimental setup.
Fig. 11. Images captured at the four cutting conditions and geometrical parametrization of the chip morphology.
Fig. 12. SEM images of the chips overlaid with the altitude map measured by EFCM: (a) test 2 (b) test 4.
Fig. 13. Segment length and shear angle distributions for each segment.
Fig. 14. State of strain at failure in the shear plane for −5° and 15° rake angles.
Fig. 15. Experimental and numerical cutting forces comparison for all cutting conditions.
Fig. 16. Steps of the chip formation genesis (Vc: 25 m min⁻¹, rake angle: 15°).
Fig. 17. SEM view of the chip (Vc: 25 m min⁻¹, rake angle 15°), B: detail view of crack, C: detail of crack orientation.
Table 1
Identified flow rule parameters for Ti-6Al-4V (coefficients a, b, c of each parameter).
A (Pa): a = 9.36e+05, b = −1.45e+08, c = −8.65e+08
B (Pa): a = 5.57e+05, b = 4.66e+07, c = −6.39e+08
n: a = 9.46e−05, b = 4.23e−02, c = −0.365
Table 2
The identified Johnson-Cook parameters.
A (MPa) = 880, B (MPa) = 582, C = 0.041, n = 0.353, m = 0.6337
Table 4
Material properties and contact conditions [5].
Property | Workpiece | Tool
Density (kg m⁻³) | 4430 | 15,700
Elastic modulus E (GPa) | 110 | 705
Poisson's ratio ν | 0.33 | 0.23
Specific heat C_p (J kg⁻¹ °C⁻¹) | 670 | 178
Thermal conductivity (W m⁻¹ °C⁻¹) | 6.6 | 24
Expansion coefficient (µm m⁻¹ °C⁻¹) | 9 | 5
Contact conditions: room temperature T_room = 20 °C; inelastic heat fraction TQ = 0.9; friction coefficient µ = 0.2; friction energy transformed to heat = 99%.
Fig. 8. Stick-slip contact model.
Table 5
Machining and recording parameters.
Parameter | Test 1 | Test 2 | Test 3 | Test 4
Rake angle (°) | 15 | 15 | −5 | −5
Cutting velocity (m min⁻¹) | 15 | 25 | 15 | 25
Frame rate (fps) | 6000 | 10,000 | 6000 | 10,000
Spatial resolution (pixels) | 512 × 512 | 384 × 352 | 512 × 512 | 384 × 352
Exposure time (µs) | 5 | 2.5 | 4 | 2.5
Table 6
Number of frames and segments for each test.
Test number | Number of frames | Number of measured segments | Metric frequency of segmentation (segments/mm)
1 | 2113 | 306 | 2.49
2 | 1166 | 362 | 3.26
3 | 1761 | 161 | 2.15
4 | 2049 | 251 | 2.89
Table 7
The numerical evolution of the temperature and strain rate over the tests.
Quantity | Test 1 | Test 2 | Test 3 | Test 4
Ṫ (°C µs⁻¹) | 123 | 308 | 145 | 348
T_max (°C) | 532 | 620 | 615 | 860
ε̇ (s⁻¹) | 3896 | 8420 | 4512 | 8823
σ_1 − σ_3 = 2 τ_f    (5)
[START_REF] Ayed | Experimental and numerical study of laser-assisted machining of Ti6Al4V titanium alloy[END_REF] where τ_f, the only parameter to calibrate, stands for the maximum shear stress at failure.
"18962",
"19669",
"19608",
"19090"
] | [
"110103",
"110103",
"110103",
"110103",
"206863",
"110103"
] |
01234982 | en | [
"info"
] | 2024/03/05 22:32:16 | 2015 | https://hal.science/hal-01234982/file/paper.pdf | François Boulier
email: [email protected]
François Lemaire
email: [email protected]
Finding First Integrals Using Normal Forms Modulo Differential Regular Chains
Keywords: First integral, linear algebra, differential algebra, nonlinear system
This paper introduces a definition of polynomial first integrals in the differential algebra context and an algorithm for computing them. The method has been coded in the Maple computer algebra system and is illustrated on the pendulum and the Lotka-Volterra equations. Our algorithm amounts to finding linear dependences of rational fractions, which is solved by evaluation techniques.
Introduction
This paper deals with the computation of polynomial first integrals of systems of ODEs, where the independent variable is t (for time). A first integral is a function whose value is constant over time along every solution of a system of ODEs. First integrals are useful to understand the structure of systems of ODEs. A well known example of first integral is the energy of a mechanical conservative system, as shown by Example 1.
Example 1. Using a Lagrange multiplier λ(t), a pendulum of fixed length l, with a centered mass m submitted to gravity g can be coded by:
(Figure: a pendulum of length l and mass m under gravity g, in the (x, y) plane.)
Σ :  m ẍ(t) = λ(t) x(t)
     m ÿ(t) = λ(t) y(t) + m g        (1)
     x(t)² + y(t)² = l²
A trivial first integral is x(t)² + y(t)² since x(t)² + y(t)² = l² on any solution. A less trivial first integral is (m/2)(ẋ(t)² + ẏ(t)²) − m g y(t), which corresponds to the total energy of the system (kinetic energy + potential energy).
When no physical considerations can be used, one needs alternative methods. Assume we want to compute a polynomial first integral of a system of ODEs. Our algorithm findAllFirstIntegrals, which has been coded in Maple, relies on the following strategy. We choose a certain number of monomials µ1, µ2, . . . , µe built over t, the unknown functions and their derivatives (namely t, x(t), y(t), ẋ(t), ẏ(t), . . . in Example 1), and look for a candidate of the form q = α1 µ1 + · · · + αe µe satisfying dq/dt = 0 on all solutions, where the αi are in some field K. If the αi are constants (i.e. dαi/dt = 0 for each αi), then our problem amounts to finding αi such that dq/dt = α1 dµ1/dt + · · · + αe dµe/dt is zero on all solutions. In Example 1, µ1, µ2 and µ3 could be the monomials ẋ(t)², ẏ(t)² and y(t), and α1, α2 and α3 could be m/2, m/2 and −mg.
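As a purely illustrative aside, the candidate of Example 1 can be checked by hand with a computer algebra system: differentiate the energy candidate, replace the second derivatives using system (1), and use the derivative of the constraint. The SymPy sketch below does exactly that; it is not the normal-form machinery used by findAllFirstIntegrals, only a sanity check of the candidate.

```python
import sympy as sp

t, m, g, l = sp.symbols('t m g l', positive=True)
x, y, lam = (sp.Function(s)(t) for s in ('x', 'y', 'lambda'))
xd, yd = x.diff(t), y.diff(t)

# System (1), solved for the highest derivatives
ode = {x.diff(t, 2): lam*x/m, y.diff(t, 2): lam*y/m + g}

# Candidate first integral: total energy
q = m/2*(xd**2 + yd**2) - m*g*y

dq = q.diff(t).subs(ode)      # use the ODEs for x'' and y''
dq = dq.subs(xd, -y*yd/x)     # derivative of the constraint: x x' + y y' = 0
print(sp.simplify(dq))        # -> 0, i.e. q is constant along every solution
```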
Anticipating Section 4, differential algebra techniques combined with the Nullstellensatz Theorem permit to check that a polynomial p vanishes on all solutions of a radical ideal A by checking that p belongs to A. Moreover, in the case where the ideal A is represented 1 by a list of differential regular chains M = [C 1 , . . . , C f ], checking that p ∈ A can be done by checking that the normal form (see Section 4) of p modulo the list of differential regular chains M , denoted NF(p, M ), is zero. Since the normal form is K-linear (see Section 4), finding first integrals can be solved by finding a linear dependence between NF( dµ1 dt , M ), NF( dµ2 dt , M ), . . . , NF( dµe dt , M ). This process, which is in the spirit of [START_REF] Boulier | Efficient computation of regular differential systems by change of rankings using Kähler differentials[END_REF][START_REF] Faugère | Efficient computation of Gröbner bases by change of orderings[END_REF], encounters a difficulty: the normal forms usually involve fractions. As a consequence, we need a method for finding linear dependences between rational functions. A possible approch consists in reducing all fractions to the same denominator, and solving a linear system based on the monomial structure of the numerators. However, this has disadvantages. First, the sizes of the numerators might grow if the denominators are large. Second, the linear system built above is completely modified if one considers a new fraction. Finding linear dependences between rational functions is addressed in Sections 2 and 3 by evaluating the variables occurring on the denominators, thus transforming the fractions into polynomials. The main difficulty with this process is to choose adequate evaluations to obtain deterministic (i.e. non probabilistic) and terminating algorithms.
This paper is structured as follows. Section 2 presents some lemmas around evaluation and Algorithm findKernelBasis (and its variant findKernelBasisMatrix) which computes a basis of the linear dependences of some given rational functions. Section 3 presents Algorithm incrementalFindDependence (and its variant incrementalFindDependenceLU) which sequentially treats the fractions and stops when a linear dependence is detected. Section 4 recalls some differential algebra techniques and presents Algorithm findAllFirstIntegrals which computes a basis of first integrals which are K-linear combinations of a predefined set of monomials.
In Sections 2 and 3, k is a commutative field of characteristic zero containing R, A is an integral domain with unit, a commutative k-algebra and also a k-vector space equipped with a norm • A . Moreover, K is the total field of fractions of A. In Section 4, we will require, moreover, that K is a field of constants. In applications, k is usually R, A is usually a commutative ring of multivariate polynomials over k (i.e. A = k[z 1 , . . . , z s ]) and K is the field of fractions of A (i.e. the field of rational functions k(z 1 , . . . , z s )). This section presents Algorithms findKernelBasis and findKernelBasisMatrix which compute a basis of the linear dependences over K of e multivariate rational functions denoted by q 1 , . . . , q e . More precisely, they compute a K-basis of the vectors (α 1 , . . . , α e ) ∈ K e satisfying e i=1 α i q i = 0. Those algorithms are mainly given for pedagogical reasons, in order to focus on the difficulties related to the evaluations (especially Lemma 2). The rational functions are taken in the ring K(Y )[X] where Y = {y 1 , . . . , y m } is a finite set of indeterminates and X is another (possibly infinite) set of indeterminates such that Y ∩ X = ∅. The idea consists in evaluating the variables Y , thus reducing our problem to finding the linear dependences over K of polynomials in K[X], which can be easily solved for instance with linear algebra techniques. If enough evaluations are made, the linear dependences of the evaluated fractions coincide with the linear dependences of the non evaluated fractions.
Even if it is based on evaluations, our algorithm is not probabilistic. A possible alternative is to use [START_REF] Stoutemyer | Multivariate partial fraction expansion[END_REF] by writing each rational function into a (infinite) basis of multivariate rational functions. However, we chose not to use that technique because it relies on multivariate polynomial factorization into irreducible factors, and because of a possible expression swell.
Preliminary Results
This section introduces two basic definitions as well as three lemmas needed for proving Algorithm findKernelBasis.
Definition 1 (Evaluation). Let D = {g 1 , . . . , g e } be a set of polynomials of K[Y ] where Y = {y 1 , . . . , y m }. Let y 0 = (y 0 1 , y 0 2 , . . . , y 0 m ) be an element of K m , such that none of the polynomials of D vanish on y 0 . One denotes σ y 0 the ring homomorphism from K[X, Y, g -1 1 , . . . , g -1 e ] to K[X] defined by σ y 0 (y j ) = y 0 j for 1 ≤ j ≤ m and σ y 0 (x) = x for x ∈ X. Roughly speaking, the ring homomorphism σ y 0 evaluates at y = y 0 the rational functions whose denominator divides a product of powers of g i .
Definition 2 (Linear combination application). Let E be a K-vector space. For any vector v = (v 1 , v 2 , . . . , v e ) of E e , Φ v denotes the linear application from
K e to E defined by α = (α 1 , α 2 , . . . , α e ) → Φ v (α) = e i=1 α i v i .
The notation Φ v defined above is handy: if q = (q 1 , . . . , q e ) is (for example) a vector of rational functions, then the set of the vectors α = (α 1 , . . . , α n ) in K e satisfying e i=1 α i q i = 0 is simply ker Φ q . The following Lemma 1 is basically a generalization to several variables of the classical following result: a polynomial of degree d over an integral domain admitting d + 1 roots is necessarily the zero polynomial.
Lemma 1. Let p ∈ K[Y ] where Y = {y 1 , . . . , y m }. Assume deg yi p ≤ d for all 1 ≤ i ≤ m. Let S 1 , S 2 , . . . , S m be m sets of d + 1 points in Q. If p(y 0 ) = 0 for all y 0 ∈ S 1 × S 2 × . . . × S m , then p is the zero polynomial.
Proof. By induction on m. When m = 1, p has d + 1 distinct roots and a degree at most d. Since p is a polynomial over a field K, p is the zero polynomial.
Suppose the lemma is true for m. One shows it is true for m + 1. Seeing p as an element of K(y1, . . . , ym)[y_{m+1}], of degree at most d in y_{m+1}, the Lagrange polynomial interpolation formula [9] gives p = Σ_{i=1}^{d+1} p(y1, . . . , ym, s_i) ∏_{j≠i} (y_{m+1} − s_j)/(s_i − s_j), where S_{m+1} = {s1, . . . , s_{d+1}}.
For each 1 ≤ i ≤ d+1, p(y 1 , . . . , y m , s i ) vanishes on all points of S 1 ×S 2 ×. . .×S m . By induction, all p(y 1 , . . . , y m , s i ) are thus zero, and p is the zero polynomial.
The following lemma is quite intuitive. If one takes a nonzero polynomial g in K[Y ], then it is possible to find an arbitrary large "grid" of points in N m where g does not vanish. This lemma will be applied later when g is the product of some denominators that we do not want to cancel. Lemma 2. Let g be a nonzero polynomial in K[Y ] and d be a positive integer. There exist m sets S 1 , S 2 , . . . , S m , each one containing d + 1 consecutive nonnegative integers, such that g(y 0 ) = 0 for all
y 0 in S = S 1 × S 2 × • • • × S m .
The proof of Lemma 2 is the only one explicitly using the norm • A . The proof is a bit technical but the underlying idea is simple and deserves a rough explanation. If g is nonzero, then its homogeneous part of highest degree, denoted by h, must be nonzero at some point ȳ, hence on a neighborhood of ȳ. Since g "behaves like" h at infinity (the use of • A will make precise this statement), one can scale the neighborhood to prove the lemma.
Proof. Since K is the field of fraction of A, one can assume with no loss of generality that g is in A[Y ].
Denote g = h + p where h is the homogeneous part of g of (total) degree e = deg g. Since g is nonzero, so is h. By Lemma 1, there exists a point ȳ ∈ ℝ^m_{>0} such that h(ȳ) ≠ 0, where ℝ_{>0} denotes the positive reals. Without loss of generality, one can assume that ∥ȳ∥_∞ = 1 since h is homogeneous. There exists an open neighborhood V of ȳ such that V ⊂ ℝ^m_{>0} and 0 ∉ h(V). Moreover, this open neighborhood V also contains a closed ball B centered at ȳ for some positive real ε, i.e. B = {y ∈ ℝ^m_{>0} | ∥y − ȳ∥_∞ ≤ ε} ⊂ V. Since B is a compact set and the functions h and the m_i are continuous, the two following numbers are well-defined and finite: m = min_{y∈B} ∥h(y)∥_A and M = max_{y∈B} (Σ_{i∈I} ∥a_i∥_A m_i(y)) (where p = Σ_{i∈I} a_i m_i with a_i ∈ A and m_i a monomial in Y). Moreover, m > 0 (since by a compactness argument there exists y ∈ B such that ∥h(y)∥_A = m).
Take y ∈ B and s > 1 in ℝ. Then g(sy) = h(sy) + p(sy) = s^e h(y) + p(sy). By the reverse triangular inequality and the homogeneity of h, one has ∥g(sy)∥_A ≥ s^e ∥h(y)∥_A − ∥p(sy)∥_A. Moreover
∥p(sy)∥_A ≤ Σ_{i∈I} ∥a_i∥_A m_i(sy) ≤ Σ_{i∈I} ∥a_i∥_A s^{e−1} m_i(y) ≤ s^{e−1} M.
Consequently, ∥g(sy)∥_A ≥ s^e m − s^{e−1} M. If one takes s sufficiently large to ensure s^e m − s^{e−1} M > 0 and s ε ≥ (d + 1)/2, the ball B′ obtained by uniformly scaling B by the factor s contains no root of g. Since the width of the box B′ is at least d + 1, the existence of the expected S_i is guaranteed.
Roughly speaking, the following lemma ensures that if a fraction q in K(Y )[X] evaluates to zero for well chosen evaluations of Y , then q is necessarily zero. Lemma 3. Take an element q in K(Y )[X]. Then, there exist an integer d, and m sets S 1 , S 2 , . . . , S m , each one containing d+1 nonnegative consecutive integers, such that σ y 0 (q) is well-defined for all y 0 ∈ S = S 1 × S 2 × • • • × S m . Moreover, if σ y 0 (q) = 0 for all y 0 in S, then q = 0.
Proof. Let us write q = p/g where p ∈ K[X, Y ] and g ∈ K[Y ]. Consider d = max 1≤i≤m deg yi (p). By Lemma 2, there exist m sets S 1 , . . . , S m of d+1 consecutive nonnegative integers such that g(y 0 ) = 0 for all y 0 in S = S 1 ×S 2 ו • •×S m . Consequently, σ y 0 (q) is well-defined for all y 0 in S. Let us assume that σ y 0 (q) = 0 for all y 0 in S. As a consequence, one has σ y 0 (p) = 0 for any y 0 in S. By Lemma 1, one has p = 0, hence q = 0.
Algorithm findKernelBasis
We rely on the notion of iterator, which is a tool in computer programming that enables to enumerate all values of a structure (such as a list, a vector, . . . ). An iterator can be finite (if it becomes empty after enumerating a finite number of values) or infinite (if it never gets empty).
In order to enumerate evaluation points (that we take in N m for simplicity), we use two basic functions to generate one by one all the tuples, which do not cancel any element of D, where D is a list of nonzero polynomials of K[Y ]. The first one is called newTupleIteratorNotCancelling(Y, D). It builds an iterator I to be used with the second function getNewTuple(I). Each time one calls the function getNewTuple(I), one retrieves a new tuple which does not cancel any element of D. The only constraint we require for getNewTuple is that any tuple of N m (which does not cancel any element of D) should be output after a finite number of calls to getNewTuple. To ensure that, one can for example enumerate all integer tuples by increasing order (where the order is the sum of the entries of the tuple) refined by the lexicographic order for tuples of the same order. Without this constraint on getNewTuple, Algorithm findKernelBasis may not terminate.
Example 2. Take K = Q(z), Y = {a} and X = {x}. Consider q = (q1, q2, q3) = ( (ax + 2)/(a + 1), z(a − 1 − ax)/(1 + a), 1 ). One can show that the only (up to a constant) K-linear dependence between the qi is −q1 − (1/z) q2 + q3 = 0.
Apply Algorithm findKernelBasis. One has D = [a + 1] at Line 3, so one can evaluate the indeterminate a at any nonnegative integer. At Line 7, S contains the tuple s1 = (0), corresponding to the evaluation a = 0. One builds the vectors v1, v2 and v3 at Line 7 by evaluating q1, q2 and q3 at a = 0. One obtains polynomials in K[x]: v1 = (2), v2 = (−z) and v3 = (1). A basis of ker Φv computed at Line 10 could be B = {(−1/2, 0, 1), (z/2, 1, 0)}. Note that it is normal that both v = (v1, v2, v3) and B live in an algebra over K, which contains z. The test at Line 12 fails because the vector b1 = (−1/2, 0, 1) of B does not yield a linear dependence over the qi. Indeed, −(1/2) q1 + 0 q2 + q3 ≠ 0. Consequently, S is enriched with the new tuple (1) at Line 14 and one performs a second iteration. One builds the vectors v1, v2 and v3 at Line 7 by evaluating q1, q2 and q3 at a = 0 and a = 1. Thus v1 = (2, 1 + x/2), v2 = (−z, −xz/2) and v3 = (1, 1). A basis B computed at Line 10 could be (−1, −1/z, 1). This time, the test at Line 12 succeeds since −q1 − (1/z) q2 + q3 = 0. The algorithm stops by returning the basis (−1, −1/z, 1).
Algorithm 1: findKernelBasis
Proof (Algorithm findKernelBasis). Correctness. If the algorithm returns, then ker Φq ⊃ ker Φv. Moreover, at each loop, one has ker Φq ⊂ ker Φv since Σ αi qi = 0 implies Σ αi σ(qi) = 0 for any evaluation which does not cancel any denominator. Consequently, if the algorithm stops, ker Φq = ker Φv, and B is a basis of ker Φq. Termination. Assume the algorithm does not terminate. The vector space ker Φv is smaller at each step since the set S grows. By a classical dimension argument, the vector space ker Φv admits a limit, denoted by E, and reaches it after a finite number of iterations. From ker Φq ⊂ ker Φv, one has ker Φq ⊂ E. Since the test at Line 12 always fails (the algorithm does not terminate), ker Φq ⊊ E. Take α ∈ E \ ker Φq and consider the set S′ obtained by applying Lemma 3 with q = Σ αi qi. Since the algorithm does not terminate, S will eventually contain S′. By construction of v, one has Σ αi σ_{y0}(qi) = 0 = σ_{y0}(q) for all y0 in S′. By Lemma 3, one has q = Σ αi qi = 0, which proves that α ∈ ker Φq. Contradiction, since α ∉ ker Φq.
Algorithm 2: findKernelBasisMatrix
Input: two lists of variables Y and X; a vector q = (q1, . . . , qe) of e rational fractions in K(Y)[X]. Output: a basis of ker Φq. (For each 1 ≤ i ≤ e, the reduced fraction qi is written fi/gi with fi ∈ K[Y, X] and gi ∈ K[Y].)
A Variant of findKernelBasis
A variant of algorithm findKernelBasis consists in incrementally building a matrix M with values in K, encoding all the evaluations. Each column of M corresponds to a certain q j , and each line corresponds to a couple (s, m), where s is an evaluation point, and m is a monomial of K[X]. This yields Algorithm findKernelBasisMatrix.
Example 3. Apply Algorithm findKernelBasisMatrix on Example 2. The only difference with Algorithm findKernelBasis is that the vectors vi are stored vertically in a matrix M that grows at each iteration. At the end of the first iteration of the while loop, one has y0 = (0), q = (2, −z, 1), L = [1], M = N = ( 2  −z  1 ), and B = {(−1/2, 0, 1), (z/2, 1, 0)}. Another iteration is needed since −(1/2) q1 + 0 q2 + q3 ≠ 0.
At the end of the second iteration, one has y0 = (1), q = (1 + x/2, −xz/2, 1), L = [1, x],
N = [ 1    0     1
      1/2  −z/2  0 ],
and
M = [ 2    −z    1
      1    0     1
      1/2  −z/2  0 ].
A basis B is (−1, −1/z, 1) and the algorithm stops since −q1 − (1/z) q2 + q3 = 0.
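The kernel computed in Example 3 can be checked directly on the final evaluation matrix; a small SymPy sketch (the matrix is the one obtained after the evaluations a = 0 and a = 1):

```python
import sympy as sp

z = sp.symbols('z')
# Matrix M of Example 3 after the two evaluations, rows indexed by
# (evaluation, monomial) and columns by q1, q2, q3.
M = sp.Matrix([[2, -z, 1],
               [1,  0, 1],
               [sp.Rational(1, 2), -z/2, 0]])
print(M.nullspace())   # [Matrix([[-1], [-1/z], [1]])], i.e. -q1 - (1/z) q2 + q3 = 0
```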
Proof (Algorithm findKernelBasisMatrix). The proof is identical to the proof of findKernelBasis. Indeed, the vector v i in findKernelBasis is encoded vertically in the i th column of the M matrix in findKernelBasisMatrix. Thus, the kernel of Φ v in findKernelBasis and the kernel of M in findKernelBasisMatrix coincide.
In terms of efficiency, Algorithm findKernelBasisMatrix is far from optimal for many reasons. First, the number of lines of the matrix M grows excessively. Second, a basis B has to be computed at each step of the loop. Third, the algorithm needs to know all the rational functions in advance, which forbids an incremental approach. Next section addresses those issues.
Incremental Computation of Linear Dependences
The main idea for building incrementally the linear dependences is presented in Algorithm incrementalFindDependence. It is a sub-algorithm of Algorithms findFirstDependence and findAllFirstIntegrals. Assume we have e linearly independent rational functions q 1 , . . . , q e and a new rational function q e+1 : either the q 1 , . . . , q e+1 are also linearly independent, or there exists (α 1 , . . . , α e+1 ) ∈ K e+1 such that e+1 i=1 α i q i = 0 with α e+1 = 0. It suffices to iterate this idea by increasing the index e until a linear dependence is found. When such a dependence has been found, one can either stop, or continue to get further dependences.
Algorithm incrementalFindDependence
The main point is to detect and store the property that the q 1 , . . . , q e are linearly independent, hence the following definition. Moreover, for efficiency reasons, one should be able to update this property at least cost when a new polynomial q e+1 is considered.
Definition 3 (Evaluation matrix)
. Consider e rational functions q 1 , . . . , q e in K(Y )[X], and e couples (s 1 , ω 1 ), . . . , (s e , ω e ) where each s i is taken in N m and each ω i is a monomial in X. Assume that none of the s i cancels any denominator of the q i . Consider the e × e matrix M with coefficients in K, defined by M ij = coeff(σ si (q j ), ω i ) where coeff(p, m) is the coefficient of the monomial m in the polynomial p. The matrix M is called the evaluation matrix of q 1 , . . . , q e w.r.t (s 1 , ω 1 ), . . . , (s e , ω e ).
Example 4. Recall q1 = (ax + 2)/(a + 1) and q2 = z(a − 1 − ax)/(1 + a) from Example 2. Consider (s1, ω1) = (0, 1) and (s2, ω2) = (1, 1). Thus, σ_{s1}(q1) = 2, σ_{s1}(q2) = −z, σ_{s2}(q1) = x/2 + 1 and σ_{s2}(q2) = −zx/2. Then, the evaluation matrix for (s1, ω1), (s2, ω2) is
M = [ 2  −z
      1   0 ].
If ω2 were the monomial x instead of 1, the evaluation matrix would be
M = [ 2    −z
      1/2  −z/2 ].
In both cases, the matrix M is invertible.
The evaluation matrices computed in Algorithms incrementalFindDependence, findFirstDependence and findAllFirstIntegrals will be kept invertible, to ensure that some fractions are linearly independent (see Proposition 1 below).
Proposition 1. Keeping the same notations as in Definition 3, if the matrix M is invertible, then the rational functions q 1 , . . . , q e are linearly independent. Proof. Consider α = (α 1 , . . . , α e ) such that e i=1 α i q i = 0. One proves that α = 0. For each 1 ≤ j ≤ e, one has e i=1 α i σ sj (q i ) = 0, and consequently e i=1 α i coeff(σ sj (q i ), ω j ) = 0. This can be rewritten as M α = 0, which implies α = 0 since M is invertible.
By Proposition 1, each evaluation matrix of Example 4 proves that q 1 and q 2 are linearly independent. In some sense, an invertible evaluation matrix can be viewed as a certificate (in the the computational complexity theory terminology) that some fractions are linearly independent.
Proposition 2. Take the same notations as in Definition 3 and assume the evaluation matrix M is invertible. Consider a new rational function q_{e+1}. If the rational functions q1, . . . , q_{e+1} are linearly dependent, then one necessarily has q_{e+1} + Σ_{i=1}^{e} αi qi = 0 where α is the unique solution of Mα = −b, with b = (. . . , coeff(σ_{sj}(q_{e+1}), ωj), . . .)_{1≤j≤e}.
Proof. Since M is invertible and by Proposition 1, any linear dependence involves q_{e+1} with a nonzero coefficient, which can be assumed to be 1. Assume q_{e+1} + Σ_{i=1}^{e} αi qi = 0.
Then for each j, one has Σ_{i=1}^{e} αi coeff(σ_{sj}(qi), ωj) = −coeff(σ_{sj}(q_{e+1}), ωj), which can be rewritten as Mα = −b.
Example 5. Consider the qi of Example 2. Take D = [a + 1], (s1, ω1) = (0, 1), and the 1 × 1 matrix M = (2), which is the evaluation matrix of q1 w.r.t. (s1, ω1). Apply algorithm incrementalFindDependence on Y, X, D, [q1], [(s1, ω1)], M and q2. The vector b at Line 2 equals (−z) since q2 = z(a − ax − 1)/(a + 1) evaluates to −z when a = 0. Solving Mα = −b yields α = (z/2). Then h = q2 + (z/2) q1 = az(2 − x)/(2(a + 1)) ≠ 0. One then iterates the repeat-until loop until h evaluates to a nonzero polynomial. The value a = 0 is skipped, and the repeat-until loop stops with s2 = 1. Choosing the monomial ω2 = 1 yields the matrix M′ = [ 2  −z ; 1  0 ].
Proof (Algorithm incrementalFindDependence). Correctness. Assume the fractions q1, . . . , q_{e+1} are not linearly independent. Then Proposition 2 ensures that h must be zero, and Case 1 is correct. If the q1, . . . , q_{e+1} are linearly independent, then necessarily h is nonzero. Assume the repeat-until loop terminates (see proof below). Since σ_{s_{e+1}}(h) ≠ 0, a monomial ω_{e+1} such that coeff(σ_{s_{e+1}}(h), ω_{e+1}) ≠ 0 can be chosen. By construction, the matrix M′ is the evaluation matrix of q1, . . . , q_{e+1} w.r.t. (s1, ω1), . . . , (s_{e+1}, ω_{e+1}). One just needs to prove that M′ is invertible to end the correctness proof. Assume M′v = 0 with a nonzero vector v = (α1, . . . , αe, β). If β = 0, then Mα = 0 where α = (α1, . . . , αe), and α ≠ 0 since v ≠ 0 and β = 0. Since M is invertible, α = 0, hence a contradiction. If β ≠ 0, then one can assume β = 1. The e first lines of M′v = 0 imply Mα = −b where α is the vector computed in the algorithm. The last line of M′v = 0 implies that l(α1, . . . , αe, 1) = 0, which implies coeff(σ_{s_{e+1}}(h), ω_{e+1}) = 0 and contradicts the choice of ω_{e+1}.
Termination. One only needs to show that the repeat until loop terminates. This follows from Lemma 3 in the case of the single fraction q e+1 .
Improvement Using a LU-decomposition
In order to optimize the solving of Mα = −b in incrementalFindDependence, one can require an LU-decomposition of the evaluation matrix M. The specification of Algorithm incrementalFindDependence can be slightly adapted by passing an LU-decomposition of M = LU (with L lower triangular with a diagonal of 1, and U upper triangular), and by returning an LU-decomposition of M′ = L′U′ in Case 2. Note that a PLU-decomposition (where P is a permutation matrix) is not needed in our case, as shown by Proposition 4 below.
Finding the First Linear Dependence
The main advantage of Algorithm incrementalFindDependence is to limit the number of computations when one does not know in advance the number of rational fractions needed to obtain a linear dependence. If the rational functions are provided by a finite iterator (i.e. an iterator that outputs a finite number of fractions), one can build the algorithm findFirstDependence, which terminates on the first linear dependence it finds, or fails if no linear dependence exists.
Example 6. Let us apply Algorithm findFirstDependence on Example 2. The first fraction q to be considered is (ax + 2)/(a + 1). The call to incrementalFindDependence returns the 1 × 1 matrix (2) and the couple (s1, ω1) = (0, 1). One can check that q evaluated at a = 0 yields 2 and that (2) is indeed the evaluation matrix for the couple (s1, ω1) = (0, 1).
Continue with the second iteration. One now considers q = z(a − ax − 1)/(a + 1). Example 5 shows that the call to incrementalFindDependence returns the 2 × 2 invertible matrix [ 2  −z ; 1  0 ] and the couple (s2, ω2) = (1, 1).
Finally, the third iteration makes a call to incrementalFindDependence. Line 2 builds the vector b = (1, 1). Line 3 solves M α = -b, yielding α = (-1, -1/z). Line 4 builds h = q 3 -q 1 -(1/z)q 2 which is zero, so incrementalFindDependence returns at Line 6. As a consequence, findFirstDependence detects that a linear dependence has been found at Line 11 and returns ((-1, -1/z), [q 1 , q 2 , q 3 ]) at Line 15.
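The incremental step of Examples 5 and 6 can be replayed numerically: solve Mα = −b, build h, and test whether it is identically zero. A SymPy sketch with the same data follows (LUsolve is used here merely as a stand-in for the linear solve of Line 3).

```python
import sympy as sp

a, x, z = sp.symbols('a x z')
q1 = (a*x + 2)/(a + 1)
q2 = z*(a - 1 - a*x)/(a + 1)
q3 = sp.Integer(1)

M = sp.Matrix([[2, -z], [1, 0]])   # evaluation matrix of q1, q2 (Example 5)
b = sp.Matrix([1, 1])              # coeff of the monomial 1 in q3 at a = 0 and a = 1
alpha = M.LUsolve(-b)              # -> (-1, -1/z)
h = q3 + alpha[0]*q1 + alpha[1]*q2
print(alpha.T, sp.simplify(h))     # Matrix([[-1, -1/z]])  0  -> dependence found
```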
Proof (Algorithm findFirstDependence). Termination. The algorithm obviously terminates since the number of loops is bounded by the number of elements output by the iterator J.
Correctness. One first proves the following loop invariant: M is invertible, and M is the evaluation matrix of Q w.r.t. E. The invariant is true when first entering the loop (even if the case is a bit degenerated since M , Q and E are all empty). Assume the invariant is true at some point. The call to incrementalFindDependence either detects a linear dependence α, or returns an evaluation matrix M and a couple (s, ω). In the first case, M , Q and E are left unmodified so the invariant remains true. In the second case, M , Q and E are modified to incorporate the new fraction q, and the invariant is still true thanks to the specification of incrementalFindDependence. The invariant is thus proven.
When the algorithm terminates, it returns either (α, Q) or FAIL. If it returns (α, Q), this means that the variable bool has changed to true at some point. Consequently a linear dependence α has been found, and the algorithm returns (α, Q). The dependence is necessarily the one with the smallest e because of the invariant (ensuring M is invertible) and Proposition 1. This proves the Case 1 of the specification.
If the algorithms returns FAIL, the iterator has been emptied, and the variable bool is still false. Consequently, all elements of the iterator have been stored in Q, and because of the invariant and Proposition 1, the elements of Q are linearly independent. This proves the Case 2 of the specification.
Remark 1. Please note that Algorithm findFirstDependence can be used with an infinite iterator (i.e. an iterator that never gets empty). However, Algorithm findFirstDependence then becomes a semi-algorithm since it will find the first linear dependence if it exists, but will never terminate if no such dependence exists.
Complexity of the Linear Algebra
When findFirstDependence terminates, it has solved at most e square systems of increasing sizes from 1 to e. Assuming the solving of each system Mα = b has a complexity of O(n^ω) [START_REF] Bunch | Triangular factorization and inversion by fast matrix multiplication[END_REF] arithmetic operations, where n is the size of the matrix and ω is the exponent of linear algebra, the total number of arithmetic operations for the linear algebra is O(e^{ω+1}). If the LU-decomposition variant is used in Algorithm incrementalFindDependence, then the complexity drops to O(e³), since solving a triangular system of size n can be done in O(n²) arithmetic operations. As for the space complexity of algorithm findFirstDependence, it is O(e²), whether the LU-decomposition variant is used or not.
Application to Finding First Integrals
In this section, we look for first integrals for ODE systems. Roughly speaking, a first integral is an expression which is constant over time along any solution of the ODE system. This is a difficult problem and we will make several simplifying hypotheses. First, we work in the context of differential algebra, and assume that the ODE system is given by polynomial differential equations. Second, we will only look for polynomial first integrals.
Basic Differential Algebra
This section is mostly borrowed from [START_REF] Boulier | On the Regularity Property of Differential Polynomials Modulo Regular Differential Chains[END_REF] and [START_REF] Boulier | A normal form algorithm for regular differential chains[END_REF]. It has been simplified in the case of a single derivative. The reference books are [START_REF] Ritt | Differential Algebra[END_REF] and [START_REF] Kolchin | Differential Algebra and Algebraic Groups[END_REF]. A differential ring R is a ring endowed with an abstract derivation δ (in the general setting, differential rings are endowed with finitely many derivations), i.e. a unary operation which satisfies the axioms δ(a + b) = δ(a) + δ(b) and δ(a b) = δ(a) b + a δ(b) for all a, b ∈ R. This paper considers a differential polynomial ring R in n differential indeterminates u_1, …, u_n with coefficients in the field K. Moreover, we assume that K is a field of constants (i.e. δk = 0 for any k ∈ K). Letting U = {u_1, …, u_n}, one denotes R = K{U}, following Ritt and Kolchin. The derivation δ generates a monoid w.r.t. the composition operation. It is denoted Θ = {δ^i, i ∈ N}, where N stands for the set of the nonnegative integers. The elements of Θ are the derivation operators. The monoid Θ acts multiplicatively on U, giving the infinite set ΘU of the derivatives.
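As a small illustration of these two axioms, here is a SymPy sketch (an assumption of this note, not the paper's setting: the indeterminates are modelled as functions of t and δ as d/dt):

```python
# Check additivity and the Leibniz rule of the derivation on a sample.
import sympy as sp

t = sp.symbols('t')
u1, u2 = sp.Function('u1')(t), sp.Function('u2')(t)
delta = lambda p: sp.diff(p, t)          # the derivation

a = u1**2 + 3*u2                          # two differential polynomials
b = u1 * sp.diff(u2, t)

print(sp.simplify(delta(a + b) - (delta(a) + delta(b))))    # 0 : additivity
print(sp.simplify(delta(a*b) - (delta(a)*b + a*delta(b))))  # 0 : Leibniz rule
print(sp.diff(u1, t, 2))                  # delta^2(u1), an element of ThetaU
```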
If A is a finite subset of R, one denotes (A) the smallest ideal containing A w.r.t. the inclusion relation and [A] the smallest differential ideal containing A. Let A be an ideal and S = {s_1, …, s_t} be a finite subset of R, not containing zero. Then A : S^∞ = {p ∈ R | ∃ a_1, …, a_t ∈ N, s_1^(a_1) ⋯ s_t^(a_t) p ∈ A} is called the saturation of A by the multiplicative family generated by S. The saturation of a (differential) ideal is a (differential) ideal [10, chapter I, Corollary to Lemma 1]. Fix a ranking, i.e. a total ordering over ΘU satisfying some properties [10, chapter I, section 8]. Consider some differential polynomial p ∉ K. The highest derivative v w.r.t. the ranking such that deg(p, v) > 0 is called the leading derivative of p. It is denoted ld p. The leading coefficient of p w.r.t. v is called the initial of p. The differential polynomial ∂p/∂v is called the separant of p. If C is a finite subset of R \ K then I_C denotes its set of initials, S_C denotes its set of separants and H_C = I_C ∪ S_C. A differential polynomial q is said to be partially reduced w.r.t. p if it does not depend on any proper derivative of the leading derivative v of p. It is said to be reduced w.r.t. p if it is partially reduced w.r.t. p and deg(q, v) < deg(p, v). A set of differential polynomials of R \ K is said to be autoreduced if its elements are pairwise reduced. Autoreduced sets are necessarily finite [10, chapter I, section 9]. To each autoreduced set C, one may associate the set L = ld C of the leading derivatives of C and the set N = ΘU \ ΘL of the derivatives which are not derivatives of any element of L (the derivatives "under the stairs" defined by C).
In this paper, we need not recall the (rather technical) definition of differential regular chains (see [START_REF] Boulier | A normal form algorithm for regular differential chains[END_REF]Definition 3.1]). We only need to know that a differential regular chain C is a particular case of an autoreduced set and that membership to the ideal [C] : H ∞ C can be decided by means of normal form computations, as explained below.
Normal Form Modulo a Differential Regular Chain
All the results of this section are borrowed from [START_REF] Boulier | A normal form algorithm for regular differential chains[END_REF] and [START_REF] Boulier | On the Regularity Property of Differential Polynomials Modulo Regular Differential Chains[END_REF] The interest of [START_REF] Boulier | On the Regularity Property of Differential Polynomials Modulo Regular Differential Chains[END_REF] is that it provides a normal form algorithm which always succeeds (while [START_REF] Boulier | A normal form algorithm for regular differential chains[END_REF] provides an algorithm which fails when splittings occur).
Let C be a regular differential chain of R, defining a differential ideal A = [C] : H_C^∞. Let L = ld C and N = ΘU \ ΘL. The normal form of a rational differential fraction is introduced in [2, Definition 5.1 and Proposition 5.2], recalled below.
Definition 4. Let a/b be a rational differential fraction with b regular modulo A. A normal form of a/b modulo C is any rational differential fraction f/g such that (1) f is reduced with respect to C, (2) g belongs to K[N] (and is thus regular modulo A), (3) a/b and f/g are equivalent modulo A (in the sense that a g - b f ∈ A).
Recall that the normal form algorithm relies on the computation of inverses of differential polynomials, defined below.
Definition 5. Let f be a nonzero differential polynomial of R. An inverse of f modulo C is any fraction p/q of nonzero differential polynomials such that p ∈ K[N ∪ L] and q ∈ K[N] and f p ≡ q mod A.
Proposition 5. Let a/b be a rational differential fraction, with b regular modulo A. The normal form f/g of a/b exists and is unique. The normal form is a K-linear operation. Moreover, (4) a belongs to A if and only if its normal form is zero, (5) f/g is a canonical representative of the residue class of a/b in the total fraction ring of R/A, (6) each irreducible factor of g divides the denominator of an inverse of b, or of some initial or separant of C.
Normal Form Modulo a Decomposition
This subsection introduces a new definition which, in practice, is useful for performing computations modulo a radical ideal [Σ] expressed as an intersection of differential regular chains (i.e. [Σ] = [C_1] : H_(C_1)^∞ ∩ ⋯ ∩ [C_f] : H_(C_f)^∞).
Such a decomposition can be computed with the RosenfeldGroebner algorithm [START_REF] Boulier | The BLAD libraries[END_REF][START_REF] Boulier | Computing representations for radicals of finitely generated differential ideals[END_REF]. Definition 6 (Normal form modulo a decomposition). Let Σ be a set of differential polynomials, such that [Σ] is a proper ideal. Consider a decomposition of [Σ] into differential regular chains C 1 , . . . , C f for some ranking, that is differential regular chains satisfying
[Σ] = [C_1] : H_(C_1)^∞ ∩ ⋯ ∩ [C_f] : H_(C_f)^∞.
For any differential fraction a/b with b regular modulo each [C_i] : H_(C_i)^∞, one defines the normal form of a/b w.r.t. the list [C_1, …, C_f] by the list [NF(a/b, C_1), …, NF(a/b, C_f)]. It is simply denoted by NF(a/b, [C_1, …, C_f]). Since it is an open problem to compute a canonical (e.g. minimal) decomposition of the radical of a differential ideal, the normal form of Definition 6 depends on the decomposition and not on the ideal.
Proposition 6. With the same notations as in Definition 6, for any polynomial p ∈ R, one has p ∈ [Σ] if and only if NF(p, [C_1, …, C_f]) = [0, …, 0]. Moreover, the normal form modulo a decomposition is K-linear.
First Integrals in Differential Algebra
Definition 7 (First integral modulo an ideal). Let p be a differential polynomial and A be an ideal. One says p is a first integral modulo A if δp ∈ A.
For any ideal A, the set of the first integrals modulo A contains the ideal A. If A is a proper ideal, the inclusion is strict since any element of K is a first integral. In practice, the first integrals taken in K are obviously useless.
Example 7 (Pendulum). Take K = Q(m, l, g). Consider the ranking
⋯ > λ̈ > ẍ > ÿ > λ̇ > ẋ > ẏ > λ > x > y.
Recall the pendulum equations Σ in Equations [START_REF] Boulier | The BLAD libraries[END_REF]. Algorithm RosenfeldGroebner [START_REF] Boulier | Computing representations for radicals of finitely generated differential ideals[END_REF] shows that
[Σ] = [C_1] : H_(C_1)^∞ ∩ [C_2] : H_(C_2)^∞
where C 1 and C 2 are given by:
- C_1 = [ λ̇ = -3ẏgm/l², ẏ² = (-λy²l² + λl⁴ - y³gm + ygml²)/(ml²), x² = -y² + l² ];
- C_2 = [ λ = -ygm/l², x = 0, y² = l² ].
Remark that the differential regular chain C_2 corresponds to a degenerate case, where the pendulum is vertical since x = 0. Further computations show that
NF(δ(m/2 (ẋ² + ẏ²) - mg y), [C_1, C_2]) = [0, 0], proving that p = m/2 (ẋ² + ẏ²) - mg y is a first integral modulo [Σ]. Remark that x² + y² is also a first integral. This is immediate since δ(x² + y²) = δ(x² + y² - l²) ∈ [Σ].
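For readers who wish to check this energy first integral outside the differential-algebra machinery, here is a small SymPy verification. It assumes the usual index-3 pendulum formulation m ẍ = λ x, m ÿ = λ y + m g, x² + y² = l² (with y oriented along gravity), which is consistent with the chains C_1 and C_2 displayed above but is an assumption of this sketch, not a quote from Σ.

```python
import sympy as sp

t, m, g, l = sp.symbols('t m g l', positive=True)
x, y, lam = sp.Function('x')(t), sp.Function('y')(t), sp.Function('lam')(t)

p = m/2*(sp.diff(x, t)**2 + sp.diff(y, t)**2) - m*g*y       # candidate first integral
dp = sp.diff(p, t)

# substitute the second derivatives given by the (assumed) pendulum equations
dp = dp.subs({sp.diff(x, t, 2): lam*x/m, sp.diff(y, t, 2): lam*y/m + g})

# what remains is lam * d/dt(x^2 + y^2)/2, which belongs to [Sigma]
print(sp.simplify(dp - lam*sp.diff(x**2 + y**2, t)/2))       # prints 0
```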
Definition 7 is new to our knowledge. It is expressed in a differential algebra context. The end of Section 4.4 makes the link with the definition of a first integral in an analysis context, through analytic solutions, using results from [START_REF] Boulier | A computer scientist point of view on Hilbert's differential theorem of zeros[END_REF]. Definition 8 (Formal power series solution of an ideal). Consider an n-uple ū = (ū_1(t), …, ū_n(t)) of formal power series in t over K. For any differential polynomial p, one defines p(ū) as the formal power series in t obtained by replacing each u_i by ū_i and interpreting the derivation δ as the usual derivation on formal power series. The n-uple ū = (ū_1, …, ū_n) is called a solution of an ideal A if p(ū) = 0 for all p ∈ A.
Lemma 4. Take a differential polynomial p and an n-uple ū = (ū_1(t), …, ū_n(t)) of formal power series. Then (δp)(ū) = dp(ū)/dt. If p is a first integral modulo an ideal A and ū is a solution of A, then the formal power series p(ū) satisfies dp(ū)/dt = 0.
Proof. Since δ is a derivation, (δp)(ū) = dp(ū)/dt is proved if one proves it when p equals any u_i. Assume that p = u_i. Then (δp)(ū) = (δu_i)(ū) = dū_i/dt = dp(ū)/dt. Assume p is a first integral modulo A and ū is a solution of A. Then δp ∈ A and (δp)(ū) = 0. Using (δp)(ū) = dp(ū)/dt, one has dp(ū)/dt = 0.
Take a system of differential polynomials Σ. By [5, Theorem and definition 3], a differential polynomial p in R vanishes on all analytic solutions (over some open set with coordinates in the algebraic closure of the base field) of Σ if and only if p ∈ [Σ]. Applying this statement to p = δq for some first integral q w.r.t. [Σ], δq vanishes on all analytic solutions of [Σ], so q is also a first integral in the context of analysis, if one excludes the non-analytic solutions.
Algorithm findAllFirstIntegrals
In this section, one considers a proper ideal A given as A = [C_1] : H_(C_1)^∞ ∩ ⋯ ∩ [C_f] : H_(C_f)^∞, where the C_i are differential regular chains. Denote M = [C_1, …, C_f]. Take a first integral modulo A of the form p = Σ_i α_i µ_i, where the α_i's are in K and the µ_i's are monomials in the derivatives. Computing on lists componentwise, we have 0 = NF(δp, M) = Σ_i α_i NF(δµ_i, M). Consequently, a candidate Σ_i α_i µ_i is a first integral modulo A if and only if the NF(δµ_i, M) are linearly dependent over K.
Since the µ i have no denominators, every irreducible factor of the denominator of any NF(w i , C j ) necessarily divides (by Proposition 5) the denominator of the inverse of a separant or initial of C j . As a consequence, the algorithms presented in the previous sections can be applied since we can precompute factors which should not be cancelled.
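The linear-dependence search by evaluation can be illustrated with a toy SymPy computation (the fractions q below are stand-ins for the normal forms NF(δµ_i, M), and the evaluation points are chosen so as not to cancel the precomputed denominators D; none of this is taken from the paper's data):

```python
import sympy as sp

y1, y2, X = sp.symbols('y1 y2 X')
q = [X/y1, y2/y1, (X + y2)/y1]        # here q[0] + q[1] - q[2] = 0
D = [y1]                              # denominators that must not be cancelled

rows = []
for point in [(1, 2), (2, 3), (3, 5)]:            # tuples avoiding the zeros of D
    subs = dict(zip((y1, y2), point))
    evals = [sp.together(f.subs(subs)) for f in q]  # polynomials in X
    for mon in (1, X):                              # one row per monomial of X
        rows.append([sp.Poly(e, X).coeff_monomial(mon) for e in evals])

M = sp.Matrix(rows)
print(M.nullspace())    # one kernel vector, proportional to (1, 1, -1)
```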
Algorithm findAllFirstIntegrals is very close to Algorithm findFirstDependence and only requires a few remarks. Instead of stopping at the first found dependence, it continues until the iterator has been emptied and stores all first dependences encountered. It starts by precomputing a safe set D for avoiding cancelling the denominators of any NF(δw_i, C_j). The algorithm introduces some dummy variables d_1, …, d_f for storing the normal form NF(δµ, M), which is by definition a list, as the polynomial d_1 NF(δµ, C_1) + ⋯ + d_f NF(δµ, C_f). This alternative storage allows us to directly reuse Algorithm incrementalFindDependence, which expects a polynomial.
Example 8 (Pendulum). Take the same notations as in Example 7. Take an iterator J enumerating the monomials 1, y, x, ẏ, ẋ, y², xy, x², ẏy, ẏx, ẏ², ẋy, ẋx, ẋẏ and ẋ². Then Algorithm findAllFirstIntegrals returns the list [1, x² + y², ẏy + ẋx, -2g y + ẋ² + ẏ²]. Note the presence of ẏy + ẋx, which is in the ideal since it is the derivative of (x² + y² - l²)/2. The intermediate computations are too big to be displayed here. As an illustration, the normal forms of δy, δ(ẏ²), δ(ẋẏ) and δ(ẋ²) modulo [C_1, C_2] are respectively [ẏ, 0], [2(λyẏ + mgẏ)/m, 0], [xẏ(λ(2y² - l²) + mgy)/(m(y² - l²)), 0] and [-2λyẏ/m, 0]. When increasing the degree bound, one finds more and more spurious first integrals like ẏy² + ẋxy (which is in the ideal) or some powers of the first integral -2g y + ẋ² + ẏ².
Example 9 (Lotka-Volterra equations). Consider the system
C: ẋ(t) = a x(t) - b x(t) y(t),  ẏ(t) = -c y(t) + d x(t) y(t),  x(t) u̇(t) = ẋ(t),  y(t) v̇(t) = ẏ(t).    (2)
Take K = Q(a, b, c, d) and the ranking
⋯ > u̇ > v̇ > ẋ > ẏ > u > v > x > y.
One can show that C is a differential regular chain for the chosen ranking.
The two leftmost equations of C correspond to the classical Lotka-Volterra equations, and the two rightmost ones encode the logarithms of x(t) and y(t) in a polynomial way. A call to findAllFirstIntegrals with the monomials of degree at most 1 built over x, y, u, v yields [1, -av/d - cu/d + by/d + x], which corresponds (up to the factor d) to the usual first integral -a ln(y(t)) - c ln(x(t)) + b y(t) + d x(t).
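As a quick numerical sanity check (not part of the paper's symbolic computation), that first integral can be verified along a trajectory of the Lotka-Volterra system; the parameter values and initial condition below are arbitrary:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.4, 0.6, 0.2

def lotka_volterra(t, z):
    x, y = z
    return [a*x - b*x*y, -c*y + d*x*y]

sol = solve_ivp(lotka_volterra, (0.0, 30.0), [2.0, 1.0], rtol=1e-10, atol=1e-12)
x, y = sol.y
V = -a*np.log(y) - c*np.log(x) + b*y + d*x   # the first integral found above
print(V.max() - V.min())                      # ~1e-8: constant along the trajectory
```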
Remark 2. The choice of the degree bounds and of the candidate monomials in the first integrals is left to the user, through the use of the iterator J. This makes our algorithm very flexible, especially if the user has some extra knowledge of the first integrals or is looking for specific ones. Finally, this avoids the difficult problem of estimating degree bounds, which can be quite high. Indeed, the simple system ẋ = x, ẏ = -ny, where n is any positive integer, admits x^n y as a first integral, which is minimal in terms of degrees.
Complexity
The complexity of the linear algebra part of Algorithm findAllFirstIntegrals is the same as for Algorithm findFirstDependence: it is O(e^3), where e is the cardinal of the iterator J, if one uses the LU-decomposition variant. However, e can be quite large in practice. For example, if one considers the monomials of total degree at most d involving s derivatives, then e is equal to the binomial coefficient (s+d choose s).
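For concreteness, the count e = (s+d choose s) grows quickly with the degree bound; a one-line check (Python's math.comb is used here purely for illustration):

```python
from math import comb
for s, d in [(4, 2), (6, 3), (9, 4)]:
    print(s, d, comb(s + d, s))   # 15, 84, 715  (s = 4, d = 2 matches the 15 monomials of Example 8)
```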
… page 101] over the ring K(y_1, …, y_m)[y_(m+1)], one has p = Σ_(i=1)^(d+1) p(y_1, …, y_m, s_i) ⋯
Algorithm (computing a basis of ker Φ_q by successive evaluations).
Input: two lists of variables Y and X
Input: a vector q = (q_1, …, q_e) of e rational fractions in K(Y)[X]
Output: a basis of ker Φ_q
begin
  for each 1 ≤ i ≤ e, denote the reduced fraction q_i as f_i/g_i with f_i ∈ K[Y, X], g_i ∈ K[Y];
  D ← [g_1, …, g_e];
  I ← newTupleIteratorNotCancelling(Y, D);
  S ← [getNewTuple(I)];
  while true do
    for each 1 ≤ i ≤ e, denote v_i the vector (σ_(s_1)(q_i), σ_(s_2)(q_i), …, σ_(s_r)(q_i)), where S = [s_1, s_2, …, s_r];
      // each v_i is a vector of r = |S| elements of K[X], obtained by evaluating q_i on all points of S
    denote v = (v_1, …, v_e);
    compute a basis B of the kernel of Φ_v using linear algebra;
      // if ker Φ_q ⊇ ker Φ_v, one returns B
    if Σ_(i=1)^(e) b_i q_i = 0 for all b = (b_1, …, b_e) ∈ B then return B;
    append to S the new evaluation getNewTuple(I);
Algorithm (variant building the evaluation matrix from monomial coefficients; same input and output as above).
begin
  D ← [g_1, …, g_e];
  I ← newTupleIteratorNotCancelling(Y, D);
  M ← the 0 × e matrix;
  while true do
    y_0 ← getNewTuple(I);
    // evaluate the vector q at y_0
    q̄ ← σ_(y_0)(q);
    build L = [ω_1, …, ω_l], the list of monomials involved in q̄;
    build the l × e matrix N = (N_ij), where N_ij is the coefficient of q̄_j in the monomial ω_i;
    M ← the matrix M stacked over N;
    compute a basis B of the right kernel of M;
    if Σ_(i=1)^(e) b_i q_i = 0 for all b = (b_1, …, b_e) ∈ B then return B;
Algorithm 3: incrementalFindDependence.
Input: two lists of variables Y and X, and a list D of elements of K[Y]
Input: a list Q = [q_1, …, q_e] of e rational functions in K(Y)[X]
Input: a list E = [(s_1, ω_1), …, (s_e, ω_e)], with s_i ∈ N^m and ω_i a monomial in X
Input: M, an invertible evaluation matrix of q_1, …, q_e w.r.t. (s_1, ω_1), …, (s_e, ω_e)
Input: q_(e+1), a rational function
Assumption: denote q_i = f_i/g_i with f_i ∈ K[X, Y] and g_i ∈ K[Y] for 1 ≤ i ≤ e+1; each g_i divides a power of elements of D and σ_(s_i)(d_j) ≠ 0 for any s_i and d_j ∈ D
Output (Case 1): (α_1, …, α_e, 1) such that q_(e+1) + Σ_(i=1)^(e) α_i q_i = 0
Output (Case 2): M' and (s_(e+1), ω_(e+1)) such that M' is the evaluation matrix of q_1, …, q_(e+1) w.r.t. (s_1, ω_1), …, (s_(e+1), ω_(e+1)), with M' invertible
begin
  b ← (…, coeff(σ_(s_j)(q_(e+1)), ω_j), …) for 1 ≤ j ≤ e;
  solve M α = -b in the unknown α = (α_1, …, α_e);
  h ← q_(e+1) + Σ_(i=1)^(e) α_i q_i;
  if h = 0 then
    return (α_1, …, α_e, 1);            // Case 1: a linear dependence has been found
  else
    I ← newTupleIteratorNotCancelling(Y, D);
    repeat s_(e+1) ← getNewTuple(I) until σ_(s_(e+1))(h) ≠ 0;
    choose a monomial ω_(e+1) such that coeff(σ_(s_(e+1))(h), ω_(e+1)) ≠ 0;
    l ← (coeff(σ_(s_(e+1))(q_1), ω_(e+1)), …, coeff(σ_(s_(e+1))(q_(e+1)), ω_(e+1)));
    return M', (s_(e+1), ω_(e+1))        // Case 2: q_1, …, q_(e+1) are linearly independent
Proposition 3 (Solving M α = -b). Solving M α = -b is equivalent to solving the two triangular systems L y = -b, then U α = y.
Proposition 4 (The LU-decomposition of the extended matrix). The LU-decomposition of the extended evaluation matrix can be obtained from that of M by: solving γ U = l_(1:e) (in the variable γ), where l_(1:e) denotes the e first components of the line vector l computed in Algorithm incrementalFindDependence; solving L y = -b (in the variable y); and forming l_(e+1) + γ y.
Algorithm findFirstDependence.
Input: two lists of variables Y and X
Input: a list D of elements of K[Y]
Input: a finite iterator J which outputs rational functions q_1, q_2, … in K(Y)[X]
Assumption: denote the reduced fraction q_i = f_i/g_i with f_i ∈ K[X, Y] and g_i ∈ K[Y]; each g_i divides a power of elements of D
Output (Case 1): a shortest linear dependence, i.e. a vector (α_1, …, α_e) and a list [q_1, …, q_e] with Σ_(i=1)^(e) α_i q_i = 0 and e the smallest possible
Output (Case 2): FAIL if no linear dependence exists
begin
  M ← the 0 × 0 matrix;   // M is an evaluation matrix
  Q ← the empty list [];  // Q is the list of the q_i
  …
Algorithm 5: findAllFirstIntegrals.
Input: a list of differential regular chains M = [C_1, …, C_f]
Input: a finite iterator J which outputs differential monomials
Output: a K-basis of the linear combinations of monomials of J which are first integrals w.r.t. [C_1] : H_(C_1)^∞ ∩ ⋯ ∩ [C_f] : H_(C_f)^∞
begin
  result ← the empty list [];
  D ← the empty list [];
  for i = 1 to f do
    I ← the inverses of the initials and separants of C_i modulo C_i;
    append to D the denominators of the elements of I;
  Y ← the list of derivatives occurring in D;
  X ← the list of dummy variables [d_1, …, d_f];
  M ← the 0 × 0 matrix; Q ← the empty list [];
  E ← the empty list []; F ← the empty list [];
  while the iterator J is not empty do
    µ ← getNewDerivativeMonomial(J);
    q ← Σ_(i=1)^(f) d_i NF(δµ, C_i);
    append to X the variables of the numerator of q which are neither in X nor in Y;
    r ← incrementalFindDependence(Y, X, D, Q, E, M, q);
    if r is a linear dependence then
      // a new first integral has been found
      (α_1, …, α_e, 1) ← r;
      append α_1 µ_1 + ⋯ + α_e µ_e + µ to result, where F = (µ_1, …, µ_e);
    else
      append µ to the end of F; append q to the end of Q;
      M', (s, ω) ← r; M ← M'; append (s, ω) to the end of E;
  return result;
| 48,725 | [
"6780",
"6490"
] | [
"410272",
"410272"
] |
01769672 | en | [
"spi",
"shs"
] | 2024/03/05 22:32:16 | 2015 | https://hal.science/hal-01769672/file/Abstract%20Joconde%20EuroMech_V08.pdf | Joseph Gril
email: [email protected]
Bertrand Marcon
Fabrice Brémand
Linda Cocchi
Paolo Dionisi-Vici
Jean-Christophe C Dupré
Cécilia Gauvin
Giacomo Goli
Franck Hesser
Delphine Jullien
P Mazzanti
E Ravaud
M Togni
V Valle
L Uzielli
The Mona Lisa Project: an update on the progress of measurement, monitoring, modelisation and simulation
Introduction
Since 2004 an international research group of wood technologists has been given by the Louvre museum the task of analysing the mechanical situation of the wooden panel on which Leonardo da Vinci painted his "Mona Lisa" (Fig. 1a), possibly between 1503 and 1506. The general purpose of such study was to evaluate the influences that could possibly derive from environmental fluctuations in the showcase where the painting is exhibited as well as outside the showcase for occasional checks, and develop measurements and models to improve its conservation conditions. To acquire data on the mechanical behaviour of the panel, and to feed and calibrate appropriate simulation models, the team has set up a continuous monitoring by means of automatic equipment (Fig. 1c). The "Mona Lisa" is painted on a poplar panel (Populus alba L.) ~790 x 530 mm, ~13 mm thick, which is inserted in an oak frame (called châssis-cadre), and is slightly forced against it by means of four crossbars, holding it flatter than it would be if unconstrained. In turn the châssis-cadre is inserted in a wooden gilded frame (Fig. 1c). In 2006 a book was published [START_REF] Mohen | Mona Lisa: Inside the Painting[END_REF] offering a wealth of scientific studies and researches concerning the "Mona Lisa". Also in 2006 a report about finite elements model was presented [START_REF] Gril | Mona Lisa saved by the Griffith theory: assessing the crack propagation risk in the wooden support of a panel painting[END_REF] and in 2011 a scientific article on the modelisation was published [START_REF] Dureisseix | A partitioning strategy for the coupled numerical simulation of hygromechanical wood structures of Cultural Heritage[END_REF].
Development of measurement and monitoring techniques
Initially, the panel shape has been determined through manual shape measurement by means of a mechanical comparator and a reference bar. This technique is slow and allows surveying only a limited number of points. Then, optical techniques (shadow moiré and fringe pattern profilometry, FPP) have been used to measure [START_REF] Brémand | Mechanical structure analysis by Digital Image Correlation and Fringe Pattern Profilometry -Proceedings of COST Actions IE0601 Non-destructive techniques to study Wooden Cultural Heritage Objects[END_REF], on both front and rear faces, the relief (Fig. 1b) and the out-of-plane deformation field. To obtain the 3D surface displacements at some points of the panel, stereo-optical tracking [START_REF] Brémand | Mechanical structure analysis by Digital Image Correlation and Fringe Pattern Profilometry -Proceedings of COST Actions IE0601 Non-destructive techniques to study Wooden Cultural Heritage Objects[END_REF] has been used (following image contrast and the craquelure pattern). Accurate and reliable data about the forces exerted between panel and crossbars are obtained thanks to self-designed equipment including four sub-miniature load cells incorporated into the crossbar thickness (next to the panel's four corners). The deflection variations along time are measured by means of three deformation transducers, located inside a thin reference aluminium profiled crossbar (carrying data-loggers, transducers and instrumentation), fixed on the châssis-cadre and providing records of both (a) the transversal deflection at the panel centre with reference to the lateral edges, and (b) the longitudinal deflection with reference to the châssis-cadre, see Fig. 1c. The contact forces between panel and châssis-cadre have been localized and estimated on the basis of local contact pressures through a pressure-sensitive foil, as described in [START_REF] Goli | Locating contact areas and estimating contact forces between the "Mona Lisa" wooden panel and its frame[END_REF].
Modelisation and simulation strategies
The results from the above mentioned measurements are being processed to be included into a 3D numerical finite elements model to simulate the panel behaviour under environmental fluctuations, see Fig. 1d. We focus herein on a numerical strategy, taking into account the panel specificities, including its shape and thickness, its sawing pattern (the elastic behaviour is orthotropic, roughly with a cylindrical symmetry), the boundary conditions imposed by the châssis-cadre (contact area and forces [START_REF] Goli | Locating contact areas and estimating contact forces between the "Mona Lisa" wooden panel and its frame[END_REF]), the crack at the upper edge of the panel and the remedial wooden butterfly, the unpainted back face of the panel responsible for the out-ofplane movements caused by moisture gradients across the panel's thickness and resulting moisture-induced expansion during relative humidity fluctuations. Such elements -mostly based on measurements-are essential to propose a model that can simulate with accuracy the Mona Lisa behaviour and are exposed in detail in [START_REF] Dureisseix | A partitioning strategy for the coupled numerical simulation of hygromechanical wood structures of Cultural Heritage[END_REF] and [START_REF] Marcon | Hygromécanique des panneaux en bois et conservation du patrimoine culturel[END_REF]. The aim of this strategy is to improve Mona Lisa's preventive conservation by virtually testing and predicting, via numerical simulation, if the masterpiece will be safe under various scenarios.
Fig. 1 (a) The painted face and the back face; (b) Shape of the panel obtained by FPP; (c) Completed assembly of the Mona Lisa: panel, crossbars, châssis-cadre, gilded frame and the monitoring systems; (d) Mona Lisa numerical finite elements model.
Acknowledgments
The authors wish to thank the Louvre museum and V. Delieuvin, conservator at the Department of Paintings, in charge of the Mona Lisa; they also acknowledge P. Mandron and D. Jaunard, independent restorers, for their essential contribution and fruitful discussions. | 6,315 | [
"173780",
"2064",
"846610",
"177615",
"978144",
"20000",
"176129"
] | [
"693",
"407549",
"454316",
"693",
"118112",
"454316",
"454316",
"347847",
"118112",
"693",
"407549",
"454316",
"118112",
"693",
"407549",
"454316",
"510",
"454316",
"118112",
"454316"
] |
01499475 | en | [
"chim"
] | 2024/03/05 22:32:16 | 2017 | https://amu.hal.science/hal-01499475/file/Manuscript_VSI_ICFIA2016_Talanta_Archimer%2BHAL.pdf | Elodie Mattio
Fabien Robert-Peillard
Catherine Branger
Kinga Puzio
André Margaillan
Christophe Brach-Papa
Joël Knoery
Jean Luc Boudenne
Bruno Coulomb
email: [email protected]
Jean-Luc Boudenne
3D-printed flow system for determination of lead in natural waters
Keywords: Lead determination, 3D-printed MPFS system, stereolithography, natural waters
Introduction
Lead is considered as one of the most toxic heavy metals [START_REF] Garnier | Toxicité du plomb et de ses dérivés[END_REF] in the light of its environmental [START_REF] Dikilitas | Chapter 3 -Effect of Lead on Plant and Human DNA Damages and Its Impact on the Environment[END_REF] and health [START_REF] Juberg | Lead and human health: An update[END_REF][START_REF] Johnson | The genetic effects of environmental lead[END_REF] impacts; it may cause irreversible neurological effects and digestive and kidney malfunctions. Its presence in the environment and more particularly in water can be mainly explained by anthropogenic sources like paints, arms and electronic [START_REF] Pecht | The impact of lead-free legislation exemptions on the electronics industry[END_REF] industries products. For these reasons, the World Health Organization has identified lead as one of the 10 chemicals of major public health concern [START_REF]World Health Organization | Lead poisoning and health[END_REF] and has recommended a guideline value of 10 μg.L -1 of lead in drinking water. It is therefore necessary to quantify lead by a rapid and efficient method to avoid toxic consumption.
Many methods are already available for lead determination such as spectrophotometry [START_REF] Rahman | Determination of lead in solution by solid phase extraction, elution, and spectrophotometric detection using 4-(2-pyridylazo)-resorcinol[END_REF],
voltammetry [START_REF] Fang | Determination of trace lead and cadmium using stripping voltammetry in fluidic microchip integrated with screen-printed carbon electrodes[END_REF][START_REF] Abate | Complexation of Cd (II) and Pb (II) with humic acids studied by anodic stripping voltammetry using differential equilibrium functions and discrete site models[END_REF], graphite furnace atomic absorption spectrometry [START_REF] Bruland | Analysis of seawater for dissolved cadmium, copper and lead: An intercomparison of voltammetric and atomic absorption methods[END_REF], or inductively coupled plasma spectroscopy [START_REF] Ho | Determination of trace metals in seawater by an automated flow injection ion chromatograph pretreatment system with ICPMS[END_REF][START_REF] Zougagh | Automatic on line preconcentration and determination of lead in water by ICP-AES using a TSmicrocolumn[END_REF], but they require costly and sophisticated devices and do not allow real-time and on-site measurements. In this context, mesofluidic and microfluidic systems can help to meet these needs thanks to their intrinsic advantages: miniaturization and low energy consumption, decreased reagent consumption and waste generation. Flow analysis offers many opportunities [START_REF] Cerdà | Flow techniques in water analysis[END_REF][START_REF] Trojanowicz | Advances in Flow Analysis[END_REF], and Flow Injection Analysis (FIA), Sequential Injection Analysis (SIA)
or MultiSyringe Flow Injection Analysis (MSFIA) based systems have been successfully applied for the determination of a wide range of substances in environmental matrices. Among flow analysis systems, pulse flow systems using solenoid diaphragm micropumps (MPFS: Multi Pumping Flow System) present high flexibility, easy configuration and low cost. Another advantage is that the signal peaks are higher compared to other flow techniques, due to the turbulence created by pump diaphragm strokes, which improves mixing between reagents and sample [START_REF] Santos | Multipumping flow systems: The potential of simplicity[END_REF]. Recently, such MPFS systems were developed for phosphorus [START_REF] González | Low cost analyzer for the determination of phosphorus based on open-source hardware and pulsed flows[END_REF] or boron [START_REF] González | Multi-pumping flow system for the determination of boron in eye drops, drinking water and ocean water[END_REF] determination in aqueous samples.
Several units can be combined to create a flow system in full compliance with the analytical needs: solid phase extraction for pre-concentration, photo-oxidation or digestion, membrane or membrane-less separation, detection and many others. To increase the versatility and ease of fabrication of flow systems, 3D printing is increasingly used in flow analysis [START_REF] Bhattacharjee | The upcoming 3D-printing revolution in microfluidics[END_REF][START_REF] Au | 3D-Printed Microfluidics[END_REF]. This printing technology can be divided into three main categories: Fused Deposition Modeling (FDM), where a thermoplastic material is heated and extruded from an XYZ-positionable nozzle; Multi Jet Modeling (MJM), which is based on an inkjet head depositing a liquid photopolymer (plastic resin or casting wax) layer by layer; and finally stereolithography (SL) [START_REF] Au | Mail-order microfluidics: evaluation of stereolithography for the production of microfluidic devices[END_REF], invented in the 1980s, which, like MJM, is a layer manufacturing process with liquid materials [START_REF] Bártolo | Stereolithography: Materials, Processes and Applications[END_REF]. SL is based on a mobile platform which dives in a resin tray, where a laser polymerizes the resin layer by layer. The most frequently used material is poly(methyl methacrylate) (PMMA), which decreases fabrication cost and improves resolution [START_REF] Polzin | Characterization and evaluation of a PMMA-based 3D printing process[END_REF]. The emergence of 3D-printing technologies enables the creation of new types of units [START_REF] Frizzarin | A portable multi-syringe flow system for spectrofluorimetric determination of iodide in seawater[END_REF][START_REF] Cerdà | Chips: How to build and implement fluidic devices in flow based systems[END_REF] for microfluidic systems, which can be imagined and combined according to the analytical needs.
In this paper, the development of a novel 3D-printed multi-pumping flow system for the determination of lead in natural waters is presented. The system is composed of three 3D-printed units: a resin column for lead solid phase extraction, a mixing coil and a classical flow cell for UV spectroscopy. The three modules are connected to one another by a screw system to limit tubing between the units. Lead solid phase extraction is based on a commercial crown-ethers resin (TrisKem Pb resin) and the chromogenic reagent used for the spectrophotometric detection of lead is 4-(2-pyridylazo)resorcinol (PAR). The system was applied to the determination of lead in real river water samples.
Materials and methods
2.1.Reagents and samples
All chemicals used were of analytical grade and used without further purification. Freshwater samples were collected at purposely chosen points in a coastal river "The Arc" in the south-east of France, near industrial or urban effluents discharge points. All samples were UV-photo-oxidized with a 254 nm low-pressure mercury lamp (UVP PenRay, USA) during 30 minutes before analysis allowing liberation of lead linked to natural organic ligands, inorganic ligands or even to anthropogenic organic ligands.
2.2.Apparatus
Flow system
The three units of the MPFS system (Fig. 1) were designed with Rhinoceros® 5.0 3D software (Robert McNeel & Associates Europe, Spain), then printed with a poly(methyl methacrylate) resin on the 3D printer Form1+ (Formlabs, USA). The first unit consisted of a resin column, tightly closed by a screw piece with three entry channels for injection of sample and reagents.
The resin was packed in this unit between two layers of glass wool. To optimize mixing of eluate and chromogenic reagent used for lead detection (PAR), a second unit composed of a serpentine mixing coil (1.5 mm internal diameter, 50 cm length) and of a connection for PAR inlet was added to the system. Finally, the detection step was performed with a classical spectroscopic flow-cell with a 5-cm optical pathlength. Sample and reagents were introduced inside the system by means of four solenoid micro-pumps (Bio-ChemValve Inc., USA) that had a stroke volume of 20 µL and a high frequency of 250 cycles/min. These pumps were computercontrolled by a MCFIA/MPFS system (Sciware, Spain) with eight digital 12V output channels.
For the detection, two FC-UV600 optical fibres (Ocean Optics, USA) were connected at the ends of the optical pathlength, and isolated from the reaction mixture with two tailor-made quartz discs, to guide the light from the source to the spectrophotometric detector. The radiation of the halogen bulb of a DH-2000 UV-Vis light source (Ocean Optics, USA) was transmitted to a USB2000 miniature spectrometer detector (Ocean Optics, USA). The whole system was controlled by AutoAnalysis 5.0 software (Sciware, Spain).
Metal analysis
Graphite furnace atomic absorption spectrometry (GF-AAS) was used to optimize the extraction/elution steps on TrisKem Pb resin and validate lead amounts in real samples (after filtration on a 0.45 µm polyethersulfone membrane). The measurements were carried out on a Thermo Scientific ICE3500 (USA) atomic absorption spectrometer equipped with a lead hollow-cathode lamp operated at 10 mA (wavelength of 217 nm). Argon flow was 0.2 L.min -1 except during atomisation step (no flow). The furnace settings were: drying at 110 °C, ramp for 9 s, hold for 35 s; cracking at 800 °C, ramp for 5 s, hold for 20 s; atomising at 1200 °C, ramp for 1 s and 3 s hold; cleaning at 2500 °C, no ramp and 3 s hold.
Interfering cations were determined by inductively coupled plasma-atomic emission spectrometry (ICP-AES) with a Jobin YVON JY2000 Ultratrace spectrometer, equipped with a CMA spray chamber and a Meinhard R50-C1 glass nebuliser. Determinations were performed with the following settings: power 1000W, pump speed 20 mL.min -1 , plasma flow rate 12 L.min -1 , coating gas flow rate 0.2 L.min -1 , nebuliser flow rate 0.9 L.min -1 and nebuliser pressure 2.08 bar.
2.3.Flow procedure
The MPFS system was operated according to the 6-step procedure given in Table 1: in step 1, 9 mL of nitric acid solution (0.05 mol.L-1) was used for conditioning the Pb resin and washing the system before analysis. Then, 50 mL of acidified sample were introduced into the system (step 2). In step 3, the resin was washed again with nitric acid solution (0.05 mol.L-1) to eliminate potential interfering species extracted by the resin. 5 mL of ammonium oxalate solution (0.025 mol.L-1) and PAR (0.01 mmol.L-1) were simultaneously pumped, through the resin to elute the extracted lead and at the inlet of the mixing coil, respectively (steps 4-5).
Measurements were based on the peak height. The analytical signal was recorded at 520 nm when the elution started. Absorbance spectra were acquired every 0.5 s, with an integration time of 55 ms and an average of 3 spectra.
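To make the sequence concrete, the following short sketch (not from the paper) tabulates the pumped volumes and flow rates of Table 1 and the resulting time budget of one analytical cycle:

```python
# Values copied from Table 1 / the flow procedure described above.
steps = [
    ("Conditioning",        "nitric acid 0.05 mol/L", 9,  5),
    ("Pb extraction",       "acidified sample",       50, 5),
    ("Washing",             "nitric acid 0.05 mol/L", 6,  5),
    ("Elution & detection", "ammonium oxalate + PAR", 5,  4),
]
total = 0.0
for name, reagent, volume_mL, flow_mL_min in steps:
    t = volume_mL / flow_mL_min
    total += t
    print(f"{name:<20s} {reagent:<26s} {volume_mL:>3d} mL  {t:4.1f} min")
print(f"total pumping time per 50 mL sample: {total:.1f} min")   # about 14 min
```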
Results and discussion
3.1. Extraction and washing steps
3.1.1. Influence of concentration of nitric acid
The developed system was based on the spectrophotometric determination of lead in the presence of the PAR reagent. However, it is well known that lead determination by this simple method often suffers from interferences of cadmium, copper and zinc [START_REF] Dagnall | Determination of lead with 4-(2-pyridylazo)resorcinol[END_REF]. Therefore, the concentration of these interfering metals should be reduced before PAR detection to allow selective determination of lead.
Lead determination was thus based on the extraction properties of the TrisKem Pb resin. This resin was initially developed for the separation of 210Pb and 210Po and is constituted by crown-ethers diluted in isodecanol and impregnated on an inert support. The length of the carbon chain of isodecanol facilitates lead elution. Its retention capacity is 29 mg Pb/g of resin. The manufacturer's instructions recommend acidification of samples with 1 mol.L-1 nitric acid before extraction of lead on the TrisKem Pb resin. However, the first experiments carried out with 1 mol.L-1 nitric acid led, after a few injections, to yellowing and cracking of the inlet of the 3D-printed column part. The effect of HNO3 concentration on the extraction of lead and potentially interfering metals was thus tested in the range 0-0.1 mol.L-1 in order to preserve the 3D-printed parts.
The results in Fig. 2 showed that more than 90% of lead was extracted with a HNO3 concentration above 0.02 mol.L-1. The best lead extraction (97%) was obtained with HNO3 0.1 mol.L-1.
Concerning the other metal cations, iron was also partially extracted (20-50%, with the highest extraction at 0.1 mol.L -1 ). Aluminium, copper and zinc were not retained by the resin whatever the acid concentration. For HNO3 concentrations of 0.02 and 0.05 mol.L -1 , a very few amount (≤ 5%) of cobalt, chromium, and cadmium was extracted. It can therefore be concluded that the resin has an excellent selectivity for lead over other metals, except for iron for which extraction is dependent on HNO3 concentration. A nitric acid concentration of 0.05 mol.L -1 appeared to be sufficient to extract lead quantitatively and to limit the extraction rate of iron below 20%.
After metal extraction on the resin, a washing step can improve the selectivity by removing potentially interfering metals without eluting the analyte of interest. This washing step has also been studied with different HNO3 concentrations. Based on the previous results, only iron and lead were studied. The washing solutions were collected and analysed by ICP-AES. As can be seen in Fig. 3 the amount of both iron and lead washed out decreased with increasing HNO3 concentration. With HNO3 0.02 mol.L -1 , all the iron retained within the resin was eliminated, but an important lead elution rate of 42% was also observed. A nitric acid concentration of 0.05 mol.L -1 seems to be the best compromise between iron elimination (75%) and limited lead elution rate (about 10%).
To summarize, 0.05 mol.L-1 HNO3 may be used for both the extraction and washing steps, thereby limiting the number of pumps to be used in the system.
Sample flow-rate
Sample flow rate was studied from 2 to 5 mL.min -1 for a sample volume of 30 mL. Samples were acidified with 0.05 mol.L -1 HNO 3 before extraction. As displayed in supplementary material (Fig. S1), results showed an important increase of lead extraction from 3 mL.min -1 to 5 mL.min -1 . At the highest flow rate (5 mL.min -1 ) the extraction percentage of lead reached 94 %.
To avoid overpressure problem of solenoid pumps or clogging of resin column, the column inner diameter and height has been set respectively at 7.8 mm and 4.5 mm, which are higher than in a traditional packed column. None of the previously mentioned problems were observed during analysis. Although solenoid pumps are not usually suitable for resin solid phase extraction, it seems that high sample flow rates enable fluidizing of the resin bed and thus good lead extraction. Therefore, a 5mL.min -1 flow rate was chosen for the system.
Elution step
Ammonium oxalate is recommended by the resin manufacturer for lead elution. The lead elution has thus been studied at different concentrations of ammonium oxalate. The elution flow rate has been adjusted at 4 mL.min -1 , according to the resin manufacturer instructions.
The results shown in Fig. 4 highlighted that elution efficiency increased with ammonium oxalate concentration up to 0.05 mol.L -1 .
For concentration of ammonium oxalate between 0.025 and 0.1 mol.L -1 , the results obtained were not significantly different. To limit reagent consumption, the concentration of ammonium oxalate was adjusted at 0.025 mol.L -1 for further experiments.
Detection
Lead elution profile has been studied by collecting small fractions (0.5 mL) of ammonium oxalate used for elution with a flow rate of 5 mL.min -1 . As can be seen on Fig. S2 (Supplementary material), elution was complete within one minute (elution rate of 99% of total extracted lead after 5 mL).
Lead was detected by spectrophotometry using a PAR solution pumped simultaneously and mixed with ammonium oxalate in the mixing coil. PAR solution was buffered with borate solution at pH=12 in order to obtain a mixture at pH=9 (optimised for UV-Vis detection) when mixed with ammonium oxalate which pH is around 6.6.
PAR reagent and ammonium oxalate flow rates were studied from 1 to 5 mL.min -1 .
Optimization was carried out with a standard lead solution of 100 µg.L -1 . PAR flow rate was optimized using a fixed ammonium oxalate flow rate of 5 mL.min -1 and inversely. Fig. 5 showed that the best absorbance was obtained with the maximum flow rate of the pumps useable for the two reagents (5 mL.min -1 ). The absorbance increased regularly with increasing PAR flow rate. The absorbance values increased slightly with an ammonium oxalate flow rate between 1 and 4 mL.min -1 and, strongly increased for a flow rate of 5 mL.min -1 . This observation was consistent with results obtained for sample flow rate with better efficiency at high flow rates.
However, eluent flow rate seemed to have less influence than sample flow rate. Flow rates of PAR and ammonium oxalate were thus fixed at 5 mL.min -1 .
Analytical features
Calibration curves have been constructed for various sample volumes (10, 25 and 50 mL). From these data, limits of detection (LOD; 3σ; n=10) and coefficients of variation (CV; n=6) were determined and summarized in Table S1 (Supplementary material). The analytical features obtained for a 50 mL sample volume seemed adapted to typical lead concentrations in natural waters: the LOD was calculated at 2.7 µg.L-1 and the linear range was between 3 and 120 µg.L-1.
The LOD was acceptable for environmental sample analysis, but sample volume can potentially be increased if lower LOD needs to be reached for water samples with very small amounts of lead. An additional experiment has been carried out and a volume of 200 mL of sample has been passed through the TrisKem Pb resin column. The results obtained showed that no traces of lead were detected at the column outlet and that the breakthrough volume of the resin was not reached. Coefficient of variation obtained with the optimal conditions (50 mL sample volume), for a lead concentration of 50 µg.L -1 was 5.4 %.
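For illustration, the 3σ criterion mentioned above amounts to the following computation; the calibration and blank values below are made-up placeholders, not the data of Table S1:

```python
import numpy as np

conc = np.array([0, 10, 25, 50, 75, 100, 120])               # Pb standards, ug/L
absorbance = np.array([0.003, 0.021, 0.049, 0.095, 0.142, 0.188, 0.225])

slope, intercept = np.polyfit(conc, absorbance, 1)            # linear calibration at 520 nm
sigma_blank = np.std([0.002, 0.004, 0.003, 0.005, 0.003,
                      0.002, 0.004, 0.003, 0.004, 0.003], ddof=1)   # n = 10 blank measurements
lod = 3 * sigma_blank / slope
print(f"slope = {slope:.5f} AU per ug/L, LOD = {lod:.1f} ug/L")
```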
A brief comparison of previously reported flow methods for lead determination with proposed 3D-printed flow system is given in Table 2. Some of these methods require complicated or expensive equipment in particular for detection step [START_REF] Mitani | On-line liquid phase micro-extraction based on drop-in-plug sequential injection lab-at-valve platform for metal determination[END_REF][START_REF] Beltran | Determination of lead by atomic fluorescence spectrometry using an automated extraction/pre-concentration flow system[END_REF][START_REF] Anthemidis | On-line sequential injection dispersive liquid-liquid microextraction system for flame atomic absorption spectrometric determination of copper and lead in water samples[END_REF][START_REF] Ampan | Exploiting sequential injection analysis with bead injection and lab-onvalve for determination of lead using electrothermal atomic absorption spectrometry[END_REF]. Compared to other simpler flow procedures using spectrophotometric detection [START_REF] Di Nezio | A sensitive spectrophotometric method for lead determination by flow injection analysis with on-line preconcentration[END_REF][START_REF] Mesquita | A flow system for the spectrophotometric determination of lead in different types of waters using ion-exchange for pre-concentration and elimination of interferences[END_REF], the proposed method has a lower detection limit better suited for analysis of natural water samples.
Validation
In order to validate the 3D-printed optimized system, five samples of freshwater have been collected at purposely chosen points in a coastal river ("The Arc", south of France) in areas close to anthropogenic activities. The samples were UV-photooxidized at 254 nm during 30 minutes and filtered at 0.45 µm before analysis. These samples have been analyzed in duplicate by ICP-AES and developed 3D-printed system with a sample volume of 50 mL, and the results were summarized in Table 3.
The values obtained by the two methods were consistent. The mean difference between the two methods was 5.8% (min -11.2%; max 11.7%). The results obtained by the proposed system were compared (t-test) with the reference method values (ICP-AES) and no significant differences at the 95% confidence level were found.
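Such a comparison can be reproduced with a paired t-test, for instance as sketched below; the concentrations are placeholders standing in for Table 3, which is not reproduced here:

```python
import numpy as np
from scipy import stats

icp_aes = np.array([12.4, 25.1, 38.0, 54.6, 80.2])   # reference method, ug/L
mpfs_3d = np.array([13.1, 23.9, 40.1, 51.8, 83.5])   # proposed flow system, ug/L

t_stat, p_value = stats.ttest_rel(icp_aes, mpfs_3d)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p > 0.05 -> no significant difference between the two methods at the 95% level
```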
Conclusion
A 3D-printed system was developed for the determination of lead in natural waters. Lead quantification was based on the selective solid phase extraction of lead on TrisKem Pb resin, followed by elution with ammonium oxalate and spectrophotometric detection using 4-(2-pyridylazo)resorcinol as chromogenic reagent. Interferences were eliminated by optimisation of the extraction and washing steps on the TrisKem Pb resin. The detection limit obtained (2.7 µg.L-1) was consistent with environmental sample analysis, but the sample volume may be increased if lower detection limits are needed. The proposed flow system was compared to a reference method (ICP-AES) and was satisfactorily applied to natural water samples. The optimized 3D-printed MPFS flow system could be controlled by an open-source microcontroller board to design a low-cost portable on-line analyzer [START_REF] González | Low cost analyzer for the determination of phosphorus based on open-source hardware and pulsed flows[END_REF][START_REF] González | Multi-pumping flow system for the determination of boron in eye drops, drinking water and ocean water[END_REF].
Fig. 1. Schema of the MPFS system with the three 3D printed units. C: resin column, R1: nitric …
Fig. 2. Extraction of several metals on TrisKem Pb resin for different concentrations of nitric acid [multi-metal solution at 0.037 mmol.L-1 (V = 30 mL at 3 mL.min-1), nitric acid at 5 mL.min-1 (V = 9 mL for conditioning step, V = 6 mL for washing step), ammonium oxalate at 0.1 mol.L-1 (V = 10 mL at 4 mL.min-1)].
Fig. 3. … [… (V = 30 mL at 3 mL.min-1), nitric acid at 5 mL.min-1 (V = 9 mL for conditioning step, V = 6 mL for washing step), ammonium oxalate at 0.1 mol.L-1 (V = 10 mL at 4 mL.min-1)].
Fig. 4. Elution of lead at different concentrations of ammonium oxalate [lead solution at 100 µg.L-1 (V = 30 mL at 5 mL.min-1), nitric acid at 0.05 mol.L-1 (V = 9 mL for conditioning step, V = 6 mL for washing step, at 5 mL.min-1), ammonium oxalate at 4 mL.min-1 (V = 10 mL)].
Fig. 5. Absorbance versus PAR and ammonium oxalate flow rates [lead solution at 100 µg.L-1 (V = 30 mL at 5 mL.min-1), nitric acid at 0.05 mol.L-1 (V = 9 mL for conditioning step, V = 6 mL for washing step, at 5 mL.min-1), ammonium oxalate at 0.025 mol.L-1 (V = 10 mL), PAR at 0.01 mmol.L-1 (V = 10 mL)].
Table 1. Flow procedure for lead determination in water
Step | Description | Action | Volume (mL) | Flow-rate (mL.min-1) | P1 | P2 | P3 | P4
1 | Conditioning | Pumping nitric acid | 9 | 5 | ON | OFF | OFF | OFF
2 | Pb extraction | Pumping sample | 50 | 5 | OFF | ON | OFF | OFF
3 | Washing | Pumping nitric acid | 6 | 5 | ON | OFF | OFF | OFF
4 | - | Start data acquisition | - | - | - | - | - | -
5 | Elution & detection | Pumping ammonium oxalate and PAR | 5 | 4 | OFF | OFF | ON | ON
6 | - | Stop data acquisition | - | - | - | - | - | -
Acknowledgment
This work was included in the project "Lab-on-Ship" funded by the French Research Agency (ANR-14-CE04-0004). | 24,023 | [
"17020",
"954716",
"10885",
"10887"
] | [
"220811",
"220811",
"84790",
"84790",
"84790",
"106834",
"106834",
"220811",
"220811"
] |
01405589 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01405589/file/postprint.pdf | B Bernales
email: [email protected]
P Haldenwang
email: [email protected]
P Guichardon
email: [email protected]
N Ibaseta
email: [email protected]
Prandtl model for concentration polarization and osmotic counter-effects in a 2-D membrane channel
An accurate 2-D numerical model that accounts for concentration polarization and osmotic effects is developed for cross-flow filtration in a membrane channel. Focused on the coupling between laminar hydrodynamics and mass transfer, the numerical approach solves the solute conservation equation together with the steady Navier-Stokes equations under the Prandtl approximation, which offers a simplified framework to enforce the non-linear coupling between filtration and concentration polarization at the membrane surface. The present approach is first validated through comparison with classical exact analytical solutions for hydrodynamics and/or mass transfer, as well as with approximate analytical solutions that attempt to couple the various phenomena. The effects of the main parameters in cross-flow reverse osmosis (RO) or nanofiltration (NF) (feed concentration, axial flow rate, operating pressure and membrane permeability) on streamlines, velocity profile, longitudinal pressure drop, local permeate flux and solute concentration profile are predicted with the present numerical model and discussed. With the use of data reported from NF and RO experiments, the Prandtl approximation model is shown to accurately correlate both average permeate flux and local solute concentration over a wide range of operating conditions.
Keywords: Numerical modeling, concentration polarization, osmotic pressure, Starling-Darcy law, Prandtl approximation
Introduction
The first paragraph has been cancelled
In cross-flow filtration, species remain in the axial (or feed) flow, while permeate (or purified transverse flow) leaves the duct by leaking transversally through the membrane. The accumulation of rejected species at the membrane inner wall results in a concentration polarization (i.e. a large solute enhancement) which -combined with osmosis-can induce a substantial reduction in permeation. On the one hand, the basic theoretical description of the cross-flow filtration refers to the seminal contribution by [START_REF] Berman | Laminar flow in channels with porous walls[END_REF]) that treats of the channel flow driven by a uniform leakage at the wall. Other contributions on the case of the "pure solvent" flow proposed several analytical solutions that accounts for the pressure dependence on permeate flux (see [START_REF] Regirer | On the approximate theory of the flow of a viscous incompressible liquid in a tube with permeable walls[END_REF], [START_REF] Haldenwang | Laminar flow in a two-dimensional plane channel with local pressure-dependent crossflow[END_REF], [START_REF] Tilton | Incorporating darcy's law for pure solvent flow through porous tubes: Asymptotic solution and numerical simulations[END_REF], [START_REF] Bernales | Laminar flow analysis in a pipe with locally pressure-dependent leakage through the wall[END_REF]). On the other hand, an exact analytical solution for the solute transfer in Berman flow has been derived in [START_REF] Haldenwang | Exact solute polarization profile combined with osmotic effects in berman flow for membrane cross-flow filtration[END_REF] and accounts for concentration polarization and the subsequent hindrance to permeation in the limit of certain RO/NF configurations.
Permeate flux actually results from the coupling of those transport phenomena (hydrodynamics and mass transfer) upstream and through the membrane.
The present contribution can essentially be seen as an improvement in the numerical efficiency for studying the laminar flows involved in filtration. It is however known that the applicability domain of such an approach can be extended by introducing the concept of turbulent viscosity and turbulent diffusivity (as we shall do in the present Subsection 3.3).
This issue has also been recently identified as a challenge for nanofiltration and reverse osmosis processes [[START_REF] Van Der Bruggen | Drawbacks of applying nanofiltration and how to avoid them: A review[END_REF] and [START_REF] Malaeb | Reverse osmosis technology for water treatment: State of the art review[END_REF]]. The literature is now rich with studies on RO/NF modeling. In the present approach, we hence pay special attention to developing an efficient predictive model that allows us to accurately describe the full coupling between flow, concentration polarization and hindrance to permeation. As discussed below, efficiency and accuracy come from the fact that the solver is fast and has no limitation in terms of numerical degrees of freedom.
Whereas the conservation laws for diluted solutions of salt and water are well-admitted, solvent and solute transport through a membrane remains a quite controversial issue. A large number of investigators have proposed numerous local models derived from different mechanisms. They are based either on the principles of irreversible thermodynamics models (Kedem-Katchalsky model, Spiegler-Kedem model) or on various homogeneous membrane models (Porous model, Solution-Diffusion model). The oldest of these models are analyzed in the contribution by Soltanieh [START_REF] Soltanieh | Review of reverse osmosis membranes and transport models[END_REF]]. More recent models describing the different local transport phenomena within the membranes with an increasing complexity can be found in [START_REF] Wijmans | The solution-diffusion model: a review[END_REF], [START_REF] Gauwbergen | Modelling revrese osmosis by irreversible thermodynamics[END_REF], [START_REF] Weissbrodt | Separation of aqueous organic multi-compount solutions by reverse osmosis -development of a mass transfer model[END_REF], [START_REF] Moresi | Modelling of ammonium fumarate recovery from model solutions by nanofiltration and reverse osmosis[END_REF], [START_REF] Kahdim | Modeling of reverse osmosis systems[END_REF], [START_REF] Mehdizadeh | Modeling of mass transport of aqueous solutions of multi-solute organics through reverse osmosis membranes in case of solute-membrane affinity part1. model development and simulation[END_REF], [START_REF] Mane | Modeling boron rejection in pilot-and full-scale reverse osmosis desalination processes[END_REF], [START_REF] Malaeb | Reverse osmosis technology for water treatment: State of the art review[END_REF]. Below, we shall nevertheless use a minimal model, since it is essential to investigate the associated limitations.
An evident improvement in modeling consists in coupling a local membrane model with the upstream composition of the solution. Several investigations [START_REF] Alvarez | Permeate flux prediction in apple juice concentration by reverse osmosis[END_REF], [START_REF] Jamal | Mathematical modeling of reverse osmosis systems[END_REF], [START_REF] Prabhavathy | Treatment of fatliquoring effluent from a tannery using membrane separation process: Experimental and modeling[END_REF], [START_REF] Hung | Mass-transfer modeling of reverseosmosis performance on 0.5-2% salty water[END_REF], [START_REF] Choi | Modeling of full-scale reverse osmosis desalination system: Influence of operational parameters[END_REF], [START_REF] Qiu | Concentration polarization model of spiralwound membrane modules with application to batch-mode ro desalination of brackish water[END_REF]] have predicted permeate flux by using the combined solution-diffusion/film model also known as the Kimura-Sourirajan model. The integrated or differential film theory equations (convection due to the pressure difference and back diffusion owing to the concentration gradient) have also been used by Urama [START_REF] Urama | Mechanistic interpretation of solute permeation through a fully aromatic polyamide revrese osmosis membrane[END_REF]] and Ahmad [START_REF] Ahmad | Mathematical modeling of multiple solutes system for reverse osmosi process in pal oil mill effluent (pome) treatment[END_REF]] while they applied the Spiegler-Kedem models in the membrane. It is worth noting that all these studies are based only on mass transfer considerations forgetting the influence of hydrodynamics. Such an approach that involves only averaged data all along the membrane is attractive in view of its simplicity and surely convenient for the description of a complex multistage membrane process in transient condition. However, just using averaged conditions assimilated to feed parameters along the membrane length becomes seemingly unrealistic when an industrial membrane module where a spatial variation of pressure and solute concentration appears in the long cross-flow membrane channel is involved.
We hence turn towards 1-D models that account for the local longitudinal variations along the membrane duct. The simplest approach we can mention conceives the feed channel as a black box represented by a series of perfectly mixed cells with exchange; no information is needed on hydrodynamics or transport mechanisms. This is the purpose of [START_REF] Roth | Modelling of stimulus response experiments in the feed channel of spiral-wound reverse osmosis membranes[END_REF] who used the residence time distribution (RTD) method, analyzing the response of spiral-wound membranes to a stimulus injection of tracer. The main drawback of this very simplified method is that it ignores the local solute concentration in the boundary layer, which has a great influence on the permeate flux via the phenomenon of concentration polarization.
In other 1-D studies, the material balance is solved numerically without taking the velocity field into account, the hydrodynamic effects being limited to pressure drop along the membrane, which is generally described by the Hagen-Poiseuille or the Ergun equations [START_REF] Sekino | Precise analytical model of hollow fiber reverse osmosis modules[END_REF], [START_REF] Sekino | Study of an analytical model for hollow fiber reverse osmosis module systems[END_REF], [START_REF] Chatterjee | Modeling of a radial flow hollow fiber module and estimation of model parameters using numerical techniques[END_REF], [START_REF] Senthilmurugan | Modeling of a spiralwound module and estimation of model parameters using numerical techniques[END_REF]]. Material balances have also been considered numerically with the assumption of a negligible pressure drop by [START_REF] Malek | A lumped transport parameter approach in predicting b10 ro permeator performance[END_REF]] who introduced a simple model based on a lumped transport parameter approach combined with the solution-diffusion model. A modified solution-diffusion model [START_REF] Sagne | Modeling permeation of volatile organic molecules through reverse osmosis spiral-wound membranes[END_REF]] was developed to take the sorption pattern into account.
Several approaches investigated the cell model, based on simple material balances: the membrane is divided into several parts that are considered as well-mixed reactors. [START_REF] Voros | Salt ans water permeability in reverse osmosis membranes[END_REF]] neglected the pressure drop in comparison with the applied trans-membrane pressure, while other works treated the influence of the pressure drop [START_REF] Costa | Modelling of modules and systems in reverse osmosis. part i. theoretical system design model development[END_REF], [START_REF] Ali | Modeling the transient behavior of an experimental reverse osmosis tubular membrane[END_REF], [START_REF] Fujioka | Modelling the rejection of n-nitrosamines by a spiral-wound reverse osmosis system: Mathematical model development and validation[END_REF]]. The MATHCAD8 package was also used to solve the mass transport equations across the membrane and the boundary layer [START_REF] Albastaki | Predicting the performance of ro membranes[END_REF]]. Let us also mention that an analytical model treating the membrane as a heterogeneous system [START_REF] Song | Performance of a long crossflow reverse osmosis membrane channel[END_REF]] was obtained under the assumption of a constant driving pressure.
To conclude on the large amount of 1-D models, we stress that these approaches rely on at least one of the following strong simplifications:
- the pressure drop is simplified, derived from the particular case of flow in an impermeable duct;
- permeate flux and concentration polarization are estimated from mass transfer coefficients derived from dimensionless correlations on the Sherwood number;
- hydrodynamics is not investigated; only solute mass transfer is considered, neglecting the influence of the cross-flow on the solution composition.
It is however accepted that the permeate flux has a great influence on the cross-flow, particularly on the pressure drop in the case of highly permeable membranes [START_REF] Mellis | Fluid dynamics in a tubular membrane: Theory and experiment[END_REF], [START_REF] Haldenwang | Pressure runaway in a permeable channel with pressure-dependent leakage[END_REF]]. Concerning the use of mass transfer coefficients, the difficulty lies in the proper choice of the most suitable correlation among the large number of correlations found in the literature, which are often eclectic. As concentration polarization results from the establishment of a solute boundary layer, it is essential to couple hydrodynamics and mass transport.
We now devote what follows to more accurate models that deal with the 2-D variations of pressure, flow velocities and solute concentration in the feed channel.
We first consider the analytical approaches, which remain approximate solutions for the time being, even though exact solutions have been derived in certain limit cases. The first concern of the approximate analytical approaches is how to incorporate a satisfactory velocity field into the mass transfer equation. Then, the film theory is often invoked to describe the coupling between permeate flux and concentration polarization. In this way, an analytical expression of the permeate flux can be established. This was first done for the case of a solid suspension [START_REF] Song | Theory of concentration polarization in crossflow filtration[END_REF]]. Song and Elimelech balanced the convection and diffusion of solute within the concentration polarization layer using the excess of concentration over the feed concentration. A validation analysis of this approach can be found in [START_REF] Kim | Modeling concentration polarization in reverse osmosis processes[END_REF]]. Later, this model was extended and applied to salt solutions specific to RO membranes [START_REF] Song | Concentration polarization in cross-flow reverse osmosis[END_REF]]. The specific case of a long, narrow channel in which concentration polarization develops all along the channel was also investigated [START_REF] Song | Concentration polarization in a narrow reverse osmosis membrane channel[END_REF]]. To account for the variable cross-flow velocity, a total salt balance model was proposed for different kinds of shear flow [START_REF] Song | A total salt balance model for concentration polarization in crossflow reverse osmosis channel with shear flow[END_REF]]. Let us also mention that Sundaramoorthy [START_REF] Sundaramoorthy | An analytical model for spiral wound reverse osmosis membrane modules: Part i -model development and parameter estimation[END_REF]] proposed an analytical model providing explicit expressions for the spatial variations of pressure, fluid velocity and solute concentration on the feed-channel side of a spiral wound RO/NF membrane module.
If the permeation is supposed to remain more or less uniform all along the channel, the flow field can be described by the Berman exact solution [START_REF] Berman | Laminar flow in channels with porous walls[END_REF]]. This expression is then used for solving the mass transport equation in the diffusion layer [START_REF] Agashichev | Modeling the influence of temperature on gelenhanced concentration polarization in reverse osmosis[END_REF]]. [START_REF] Kim | Permeat flux inflection due to concentration polarization in crossflow membrane filtration: A novel analytic approach[END_REF]] solved the convection-diffusion equation with a Berman flow approximated as a linear shear flow in the close proximity of the membrane surface. In the Appendix, we briefly recall the exact analytical solution to the mass transfer in a solution carried by a Berman flow, as obtained by [START_REF] Haldenwang | Exact solute polarization profile combined with osmotic effects in berman flow for membrane cross-flow filtration[END_REF]]. Note that the latter solution precisely describes the concentration polarization in the HP-LR limit (HP-LR for High Pressure - Low Recovery).
In addition to these analytical contributions, numerous numerical studies must be mentioned. Bhattacharyya [START_REF] Bhattacharyya | Prediction of concentration polarization and flux behavior in reverse osmosis by numerical analysis[END_REF]] developed a numerical approach that solves the diffusion-convection equation to compute the concentration profiles throughout a reverse osmosis membrane. Ma and Song [START_REF] Ma | A 2-d streamline upwind petrov/galerkin finite element model for concentration polarization in spiral wound reverse osmosis modules[END_REF]] solved the convection-diffusion equation coupled with the Navier-Stokes equations in the feed channel; this work is based on rigorous mass and momentum balances in both the radial and axial membrane dimensions. We now reach an important aspect of the present modeling: the computation of the hydrodynamics and mass transfer coupling with high accuracy thanks to computational fluid dynamics (CFD) software. Wiley and Fletcher developed a CFD model of concentration polarization and fluid flow in membrane processes [see [START_REF] Wiley | Techniques for computational fluid dynamics modelling of flow in membrane channels[END_REF]]. Later, they extended the model to specific filtration processes [START_REF] Fletcher | A computational fluids dynamics study of buoyancy effects in reverse osmosis[END_REF], [START_REF] Alexiadis | CFD modelling of reverse osmosis membrane flow and validation with experimental results[END_REF]]. A great advantage of CFD methods lies in their ability to treat complex system geometries. Some CFD software is indeed convenient for simulating momentum and mass transport in membrane filtration systems with spacer-filled channels [START_REF] Subramani | Pressure, flow and concentration profiles in open and spacer-filled membrane channels[END_REF]], in order to modify the cross-flow. The fully coupled governing equations for fluid dynamics and mass transfer were investigated in [START_REF] Lyster | Numerical study of concentration polarization in a rectangular reverse osmosis membrane channel: Permeate flux variation and hydrodynamic end effects[END_REF]], and more recently in [START_REF] Salcedo-Diaz | Visualization and modeling of the polarization layer in crossflow reverse osmosis in a slit-type channel[END_REF]], where the CFD software Comsol Multiphysics was used. A 3-D numerical solution of the coupled fluid dynamics and solute transfer equations [START_REF] Lyster | Coupled 3-d hydrodynamics and mass transfer analysis of mineral scaling-induced flux decline in a laboratory plate-and-frame reverse osmosis membrane module[END_REF]] was obtained with the Ansys CFX solver.
It is worthy of note that the numerical approach in cross-flow filtration is faced with an unusual boundary condition that nonlinearly relates permeation flux and concentration at the membrane surface. General CFD software treats this difficulty in an iterative way founded on standard methods for solving large non-linear systems; in practice, this limits the number of nodes in order to obtain a fast convergence. For instance, in [START_REF] Salcedo-Diaz | Visualization and modeling of the polarization layer in crossflow reverse osmosis in a slit-type channel[END_REF]] the number of nodes is about 3×10^4. The aim of the present numerical approach is to develop a tailor-made model which does not have such a limitation (our standard runs involve 2×10^7 nodes and take about ten seconds on a standard Intel i5 processor).
Actually, an elegant manner to save computational effort is to consider the laminar Navier-Stokes equations in the limit of the Prandtl approximation. This approach -fully justified in most laminar channel filtration systems- involves a "time-marching-like" solver along the channel, which eases the treatment of the non-linear boundary condition. The Prandtl approximation applies to all conservation laws for a chemical solution flowing in a cross-flow filtration channel. Therefore, the present approach aims at accounting for concentration polarization and osmotic counter-pressure, in order to predict the permeate flux and the concentration profile at any point along the channel, with a computational cost low enough to allow an exhaustive parametric study. Furthermore, we focus our interest on cross-flow filtration at steady state. The fluid is incompressible and the channel is supposed to be narrow. As a result, this paper takes place within a continuous effort towards predictive filtration models of increasing precision and analysis capacity.
It is organized as follows. In section 2, we present the governing equations and the main hypotheses; we briefly recall why the Prandtl approximation of the conservation laws is justified, and we describe the numerical method that solves the Prandtl differential system. In section 3, we first compare our numerical results with the Berman exact solution and the associated exact solution for mass transfer [START_REF] Haldenwang | Exact solute polarization profile combined with osmotic effects in berman flow for membrane cross-flow filtration[END_REF]. Second, other approximate models (Elimelech's model and the Total Salt Balance model) are compared with the present numerical predictions for the case of a pure solvent and of a single-solute solution. Finally, we present the comparison between our model predictions and several experimental data from reverse osmosis experiments of the literature. In the Appendix, we briefly recall the spirit of the different analytical results needed for the validation step [START_REF] Haldenwang | Exact solute polarization profile combined with osmotic effects in berman flow for membrane cross-flow filtration[END_REF], [START_REF] Song | Theory of concentration polarization in crossflow filtration[END_REF], [START_REF] Song | A total salt balance model for concentration polarization in crossflow reverse osmosis channel with shear flow[END_REF]].
Numerical solution of Prandtl system for filtration
Main hypotheses and conservation laws at steady state
The present 2-D numerical development is focused on modelling the concentration polarization and the osmotic (counter-)pressure that occur in steady cross-flow filtration of a dilute single-solute solution. Reverse osmosis experiments show that the rejection of solute is nearly total in most situations. To simplify the approach -and reduce the number of independent parameters- we suppose that the solute rejection is total.
The Prandtl approximation corresponds to a simplification that produces an important gain in numerical efficiency. The price to pay is the following: by cancelling the streamwise diffusive terms, we renounce describing the details of any recirculating phenomena, such as those occurring between two spacers.
Two semi-permeable parallel walls compose the 2-D channel, as schematically shown in Figure 1. The channel is of length L and spacing 2d. This defines the computational domain as $\{-d < \tilde x < d\} \times \{0 < \tilde z < L\}$. The fluid is supposed to be Newtonian and incompressible, and its physical properties, such as the dynamic viscosity $\mu_0$ and the density $\rho_0$, are supposed uniform. The inlet conditions (at $\tilde z = 0$) are the given axial mean velocity $W_{in}$, the fixed feed concentration $C_{in}$ and the fixed inlet pressure $P_{in}$. To simplify our development further, the effect of partial fouling or cake formation on the permeate flux is supposed to be nonexistent or negligible in comparison with the hindrance due to the osmotic (counter-)pressure. In other words, $I_0$, the membrane resistance, is considered constant and uniform all along the channel. This allows us to define $U_{in}$, the permeation velocity in pure-solvent cross-flow, as
[Figure 1 near here: sketch of the channel showing the inlet conditions $C_{in}$, $P_{in}$, $W_{in}$, the half-spacing d, the coordinates $(\tilde x, \tilde z)$, the solute concentration profile, and the wall permeation $U_W(z) = \left(P_W(z) - P_W^{osm}(z)\right)/I_0$.]
\[ U_{in} = \frac{P_{in}}{I_0} \tag{1} \]
In accordance with Darcy's law for porous media and the van't Hoff law for osmotic pressure, and considering a total rejection by the membrane, the Spiegler-Kedem model leads to the so-called Darcy-Starling law, which relates $U_W(z)$, the local permeation velocity at the wall, to the pressure $P_W$ and the osmotic pressure $P_W^{osm}$ as:
\[ U_W(z) \equiv \frac{P_W(z) - P_W^{osm}(z)}{I_0} = \frac{P_W(z) - iRT\,C_W(z)}{I_0} \tag{2} \]
where $i$ is the number of dissolved ions per molecule of salt, $T$ the temperature of the solution and $R$ the ideal gas constant. We consider a two-dimensional Newtonian flow in Cartesian coordinates $(\tilde x, \tilde z)$ with velocity vector $\{\tilde U, \tilde W\}$. The classical set of conservation laws reads:
\[ \frac{\partial \tilde U}{\partial \tilde x} + \frac{\partial \tilde W}{\partial \tilde z} = 0 \tag{3} \]
\[ \tilde U \frac{\partial \tilde U}{\partial \tilde x} + \tilde W \frac{\partial \tilde U}{\partial \tilde z} = -\frac{1}{\rho_0}\frac{\partial \tilde P}{\partial \tilde x} + \frac{\mu_0}{\rho_0}\left(\frac{\partial^2 \tilde U}{\partial \tilde x^2} + \frac{\partial^2 \tilde U}{\partial \tilde z^2}\right) \tag{4} \]
\[ \tilde U \frac{\partial \tilde W}{\partial \tilde x} + \tilde W \frac{\partial \tilde W}{\partial \tilde z} = -\frac{1}{\rho_0}\frac{\partial \tilde P}{\partial \tilde z} + \frac{\mu_0}{\rho_0}\left(\frac{\partial^2 \tilde W}{\partial \tilde x^2} + \frac{\partial^2 \tilde W}{\partial \tilde z^2}\right) \tag{5} \]
\[ \tilde U \frac{\partial \tilde C}{\partial \tilde x} + \tilde W \frac{\partial \tilde C}{\partial \tilde z} = D_0\left(\frac{\partial^2 \tilde C}{\partial \tilde x^2} + \frac{\partial^2 \tilde C}{\partial \tilde z^2}\right) \tag{6} \]
where $\tilde C$ is the solute concentration field, of molecular diffusivity $D_0$. This set of partial differential equations has to be solved with the following boundary conditions at the porous walls (i.e. $\tilde x = \pm d$, $\forall \tilde z$):
\[ \tilde W = 0 \tag{7} \]
\[ \tilde U \tilde C - D_0 \frac{\partial \tilde C}{\partial \tilde x} = 0 \tag{8} \]
\[ \tilde U = \frac{\tilde P - iRT\,\tilde C}{I_0} \tag{9} \]
and with appropriate inlet/outlet conditions at $\tilde z = 0$ and $\tilde z = L$. Note the nonlinearity present in equation (8).
Non-dimensioning
At this point, several orders of magnitude are already defined: $d$, $U_{in}$, $W_{in}$, $P_{in}$ and $C_{in}$ set the scales for the variations of $\tilde x$, $\tilde U$, $\tilde W$, $\tilde P$ and $\tilde C$. To set a scale for the axial coordinate, we add $L_{de}$, the so-called dead-end length (or exhaustion length for clear water) [START_REF] Haldenwang | Laminar flow in a two-dimensional plane channel with local pressure-dependent crossflow[END_REF],
\[ L_{de} = \frac{W_{in}\, d}{U_{in}} = \frac{I_0\, W_{in}\, d}{P_{in}} \tag{10} \]
Let us now define the following dimensionless unknowns and variables, used throughout the article:
\[ u = \frac{\tilde U}{U_{in}}, \quad w = \frac{\tilde W}{W_{in}}, \quad p = \frac{\tilde P}{P_{in}}, \quad x = \frac{\tilde x}{d}, \quad z = \frac{\tilde z}{L_{de}}, \quad C = \frac{\tilde C}{C_{in}} \tag{11} \]
This process leads us to introduce the six dimensionless numbers that characterize the problem:
\[ R_{in} = \frac{\rho_0 U_{in} d}{\mu_0}, \quad \lambda = \frac{L}{L_{de}}, \quad \alpha = \left(\frac{\mu_0 I_0 W_{in}^2}{P_{in}^2\, d}\right)^{1/2}, \quad \beta = \frac{\mu_0}{I_0\, d}, \quad Pe_{in} = \frac{P_{in}\, d}{D_0 I_0}, \quad N_{osm} = \frac{iRT\,C_{in}}{P_{in}} \tag{12} \]
R in is the "pure solvent" transverse Reynolds number, λ is the dimensionless length of the channel reduced by L de , the exhaustion length (or dead-end length). In practice, experimentalists choose λ < 1. α is the square root of the ratio of Hagen-Poiseuille pressure drop throughout the exhaustion length to trans-membrane pressure and β is the dimensionless permeability of the membrane, the typical values of which are much less than unity (for a more detailed discussion on those numbers, refer to [START_REF] Haldenwang | Laminar flow in a two-dimensional plane channel with local pressure-dependent crossflow[END_REF]). P e in is the "pure solvent" transverse Péclet number. N osm , the ratio of the osmotic pressure in absence of polarization to the operating pressure P in , will be denoted the osmotic number. Note that N osm must be less than 1 in pressure-driven filtration.
We are now able to rewrite the whole system, in the domain ]-1, 1[×]0, λ[, as the following non-dimensional form:
\[ 0 = \frac{\partial u}{\partial x} + \frac{\partial w}{\partial z} \tag{13} \]
\[ -\frac{\partial p}{\partial x} = \beta\left\{ R_{in}\left(u\frac{\partial u}{\partial x} + w\frac{\partial u}{\partial z}\right) - \frac{\partial^2 u}{\partial x^2} - \frac{\beta}{\alpha^2}\frac{\partial^2 u}{\partial z^2} \right\} \tag{14} \]
\[ -\frac{\partial p}{\partial z} = \alpha^2\left\{ R_{in}\left(u\frac{\partial w}{\partial x} + w\frac{\partial w}{\partial z}\right) - \frac{\partial^2 w}{\partial x^2} - \frac{\beta}{\alpha^2}\frac{\partial^2 w}{\partial z^2} \right\} \tag{15} \]
\[ 0 = Pe_{in}\left(u\frac{\partial C}{\partial x} + w\frac{\partial C}{\partial z}\right) - \frac{\partial^2 C}{\partial x^2} - \frac{\beta}{\alpha^2}\frac{\partial^2 C}{\partial z^2} \tag{16} \]
with the boundary conditions at the wall (x = ±1):
\[ w(\pm 1, z) = 0, \qquad u(\pm 1, z) = p - N_{osm}\,C_W, \qquad Pe_{in}\, u\, C = \frac{\partial C}{\partial x} \tag{17} \]
with the appropriate inlet/outlet conditions at z = 0 and z = λ.
Prandtl system for cross-flow filtration
Note that the ratio $\beta/\alpha^2$ reduces to $U_{in}^2/W_{in}^2$, which is, by the very nature of RO/NF cross-flow filtration, a very small quantity (as small as $10^{-6}$).
The incompressibility condition (13) shows that the non-dimensional quantities u and w have variations of the same order of magnitude, so the terms on the RHS of equations (14) and (15) can be compared directly. Therefore, the transverse variation of pressure is $\beta/\alpha^2$ times smaller than its axial variation, and it can easily be deduced that the pressure is constant within a channel cross-section (i.e. $p(x, z) \equiv p(z)$).
The same rationale applies when comparing transverse diffusion with axial diffusion: both magnitudes are in the ratio $U_{in}^2/W_{in}^2$. This incites us to cancel the axial diffusion terms in the differential equations.
Consequently, the following set of equations is valid for a large class of filtration processes. This is the so-called Prandtl approximation of the conservation laws. Note that the mathematical nature of these equations has changed, since we switch from a set of elliptic equations to a set of parabolic equations, in which the variable z plays the same role as time in the heat equation.
\[ \frac{\partial u}{\partial x} + \frac{\partial w}{\partial z} = 0 \tag{18} \]
\[ \frac{\partial p}{\partial x} = 0 \tag{19} \]
\[ R_{in}\, w\frac{\partial w}{\partial z} - \frac{\partial^2 w}{\partial x^2} = -R_{in}\, u\frac{\partial w}{\partial x} - \frac{1}{\alpha^2}\frac{dp}{dz} \tag{20} \]
\[ Pe_{in}\, w\frac{\partial C}{\partial z} - \frac{\partial^2 C}{\partial x^2} = -Pe_{in}\, u\frac{\partial C}{\partial x} \tag{21} \]
In the left-hand-side of equations (20-21), we have gathered the terms of these equations that are mathematically identical to that of heat equation. This Prandtl system for cross-flow filtration calls for the following comments.
Remark 1: If the above inlet data at z = 0 are symmetrical, the laminar solution of the system will develop symmetrically with respect to x = 0. Therefore, assuming symmetrical boundary conditions at x = 0 allows us to gain half of the computational effort, the computational domain reducing to {0 ≤ x ≤ 1} × {0 ≤ z ≤ λ} with the following boundary conditions on the axis (x = 0):
\[ u(0, z) = 0, \qquad \frac{\partial w}{\partial x}(0, z) = 0, \qquad \frac{\partial C}{\partial x}(0, z) = 0 \tag{22} \]
This assumption will hold in all what follows.
Remark 2 : Equation ( 19) expresses the fact that pressure only depends on z, the stream-wise position. Equations ( 20) and ( 21) are clearly of parabolic type, the pressure being -in each section-the Lagrange multiplier that allows us to satisfy the following constraint
\[ \frac{d}{dz}\int_0^1 w\, dx = -u(1, z) = -\left[\,p(z) - N_{osm}\, C_W(z)\,\right] \tag{23} \]
which is obtained by transverse integration of equation (18). The forthcoming numerical method exploits this mathematical property. Furthermore, the parabolic nature of this system only requires boundary conditions at the inlet. Hence, the term "appropriate boundary conditions" previously mentioned after equations (17) is now defined and simply reduces to entrance conditions at z = 0: only suitable entrance x-profiles of w and C are required.
Remark 3: Conservation laws are non-linear by nature. Here, their nonlinearity is increased by the boundary condition $Pe_{in}\,u\,C - \partial C/\partial x = 0$ at x = 1, which non-linearly combines permeation velocity and solute concentration. This coupling is of major importance since it governs polarization and permeation. Therefore, this nonlinearity needs to be enforced numerically through iterations, as described below.
A new numerical approach
Technically, the steady conservation laws, equations (18) to (21), coupled with the boundary conditions (22) and (17), are solved using finite difference methods (FDM) of order two. The computational domain is discretized with a regular mesh in both the transverse and axial directions. A regular mesh is not optimal transversally, but a large number of nodes can be afforded since the numerical cost is low. As a matter of fact, the conservation laws being conceived in the context of the Prandtl approximation, the system is parabolic, the axial coordinate playing the role of time. Hence, for a given transverse section z, we solve the non-linear coupling by an iterative process. One iteration is as follows: we first solve the concentration field, then the axial velocity field together with pressure, and finally the transverse velocity field. The solver iterates until a convergence criterion is satisfied regarding the simultaneous satisfaction of all boundary conditions and couplings, in particular the consistency between the local permeation velocity $U_W$, the local pressure p and the local wall concentration $C_W$. At this convergence stage, we proceed with the computation of the various fields in the next transverse section, until the whole length of the channel is reached.
As the system of equations (18) to (21) is of parabolic type, we now resort to the terminology generally used for the finite-difference discretization of the heat equation. Let us define $w_j^{(n)}$ [resp. $C_j^{(n)}$], the value of the unknown w [resp. C] at point $\{x = j\Delta x,\ z = n\Delta z\}$ for $0 \le j \le J$ and $0 \le n \le N$, where $\Delta x = 1/J$ and $\Delta z = \lambda/N$ are the mesh sizes in both directions. We suppose that all transverse profiles (velocity field $\{u_j^{(n)}, w_j^{(n)}\}$, scalar concentration field $C_j^{(n)}$ and local pressure $p^{(n)}$) are known up to the section $z = n\Delta z$. The purpose of the numerical scheme is to compute all the unknown fields $\{u_j^{(n+1)}, w_j^{(n+1)}\}$, $C_j^{(n+1)}$ and $p^{(n+1)}$ in the next section $z = (n+1)\Delta z$.
In channel section $z = (n+1)\Delta z$, the permeation velocity $u_J^{(n+1)}$ is the keystone of the numerical method. Since the permeation velocity is involved in the non-linear boundary condition, its assessment is obtained as the limit of an iterative process in which $U^{(k)}$, $k = 0, 1, 2, \dots$, represents a series of local permeation estimates. At convergence of the iterative process, we set $u_J^{(n+1)} = \lim U^{(k)}$. The final numerical system to be solved incorporates a discretization of the differential operators. More precisely, for the transverse coordinate, the following centered finite difference operators of order two are chosen:
\[ \left.\frac{\partial w}{\partial x}\right|_j^{(n+1)} \approx \frac{w_{j+1}^{(n+1)} - w_{j-1}^{(n+1)}}{2\Delta x} \tag{24} \]
\[ \left.\frac{\partial^2 w}{\partial x^2}\right|_j^{(n+1)} \approx \frac{w_{j+1}^{(n+1)} - 2w_j^{(n+1)} + w_{j-1}^{(n+1)}}{(\Delta x)^2} \tag{25} \]
As for the axial differential operators, a backward finite difference scheme of order two is used:
\[ \left.\frac{\partial w}{\partial z}\right|_j^{(n+1)} \approx \frac{3w_j^{(n+1)} - 4w_j^{(n)} + w_j^{(n-1)}}{2\Delta z} \tag{26} \]
To perform the axial extrapolations, we selected the second-order Adams-Bashforth scheme:
\[ \hat w_j^{(n+1)} \approx 2w_j^{(n)} - w_j^{(n-1)} \tag{27} \]
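For illustration, the discrete operators (24)-(27) can be written compactly as follows; this is a sketch in Python/NumPy, and the array and function names are ours.

```python
import numpy as np

# Illustrative NumPy forms of the discrete operators (24)-(27).
# w_np1, w_n, w_nm1 denote the transverse profiles at sections n+1, n and n-1.
def d_dx_centered(w_np1, dx):
    """Centered first derivative (24) at interior nodes 1..J-1."""
    return (w_np1[2:] - w_np1[:-2]) / (2.0 * dx)

def d2_dx2_centered(w_np1, dx):
    """Centered second derivative (25) at interior nodes 1..J-1."""
    return (w_np1[2:] - 2.0 * w_np1[1:-1] + w_np1[:-2]) / dx**2

def d_dz_backward(w_np1, w_n, w_nm1, dz):
    """Second-order backward axial derivative (26)."""
    return (3.0 * w_np1 - 4.0 * w_n + w_nm1) / (2.0 * dz)

def adams_bashforth_guess(w_n, w_nm1):
    """Second-order extrapolation (27) used as the initial guess at section n+1."""
    return 2.0 * w_n - w_nm1
```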
The entire procedure starts with the setting of the entrance data. The operating pressure is $p^{(0)} = 1$. The velocity fields $u_{0,\dots,J}^{(0)}$, $w_{0,\dots,J}^{(0)}$ are either of Berman type or of Poiseuille type. The entry concentration field $C_{0,\dots,J}^{(0)}$ is supposed transversally uniform and set to $C_j^{(0)} = 1$. Because we use the extrapolation scheme (27) to determine the guessed next fields, for the first (unknown) transverse section (n = 1) we also need all the fields at n = -1, which are taken as those of the section n = 0. Hence, we set $u_j^{(-1)} = u_j^{(0)}$, $w_j^{(-1)} = w_j^{(0)}$ and $C_j^{(-1)} = C_j^{(0)}$.
Now we are ready to enter the main axial loop. The procedure described below is the same for every transverse section $z = (n+1)\Delta z$, from n = 0 to N-1. For a general section n+1, the above extrapolation provides the initial guesses $\{\hat w_j^{(n+1)}, \hat u_j^{(n+1)}, \hat C_j^{(n+1)}\}$ for $0 \le j \le J$. At iteration k = 0, we use this guess for the permeation velocity by setting $U^{(0)} = \hat u_J^{(n+1)}$. From the discretized form of equation (21), together with the boundary nodes (symmetry condition at j = 0, Robin boundary condition at j = J), we obtain a tri-diagonal system whose inversion gives the solute concentration profile $C_j^{(n+1)}$. We then proceed with the computation of the axial velocity field $w^{(n+1)}$. In the same manner as for the concentration field, equation (20) is discretized. It is worth noting that the no-slip condition $w_J^{(n+1)} = 0$ (i.e. at node j = J) reduces the unknown field $w^{(n+1)}$ to J unknowns; however, the whole system always has J+1 unknowns because of the additional unknown $p^{(n+1)}$. Therefore, a complementary constraint is required: the boundary condition $u = p - N_{osm}C$ holding at the membrane (x = 1), combined with the integration of the incompressibility constraint (18), provides another relationship linking all unknowns ($w_j^{(n+1)}$ and $p^{(n+1)}$) and completes the tri-diagonal system. From the profile $w^{(n+1)}$, the incompressibility constraint allows us to compute the transverse velocity profile $u^{(n+1)}$ and the permeation estimate $U^{(k)}$. At this stage, all fields $C_j^{(n+1)}$, $w_j^{(n+1)}$, $u_j^{(n+1)}$ and $p^{(n+1)}$ are known. The new permeation velocity can then be computed from the net trans-membrane pressure, as
\[ U^{(k+1)} = p^{(n+1)} - N_{osm}\, C_J^{(n+1)} \tag{28} \]
We then check the convergence of the procedure by comparing the new permeation with that of the previous step in the loop. More precisely, we require
\[ \frac{\left|U^{(k)} - U^{(k+1)}\right|}{U^{(k)}} \le E_{conv} \tag{29} \]
where $E_{conv}$ is the convergence criterion. If convergence is not reached, a new iteration is performed for the same section $z = (n+1)\Delta z$ with a new guess of permeation $U^{(k)} \leftarrow U^{(k)} + \omega\left(U^{(k)} - U^{(k+1)}\right)$, where ω is a relaxation factor used to speed up the convergence, until an acceptable permeation velocity $U^{(K)}$ satisfies the convergence criterion. After each iterative loop, all fields $C_j^{(n+1)}$, $w_j^{(n+1)}$, $u_j^{(n+1)}$ and $p^{(n+1)}$ are stored. Once convergence is attained, the computation of the next axial section is prepared by overwriting the memories of section n-1 with the values just obtained for section n, and those of section n with the values of section n+1. We are then in a position to restart the iterative process for the next section, following the same calculation sequence until $z = \lambda = N\Delta z$ (i.e. n = N). All calculation steps are schematically described in Figure 2.
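To make the inner loop concrete, the sketch below implements the concentration sub-step only: the tridiagonal solve of the discretized equation (21) at one section, with the symmetry and Robin conditions (17). It is a simplified illustration in our own notation; in particular the boundary conditions are written with first-order one-sided differences for brevity (the actual solver is second order), and the assembly of (20) with the pressure constraint follows the same pattern.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system a[j]*x[j-1] + b[j]*x[j] + c[j]*x[j+1] = d[j]."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for j in range(1, n):
        m = b[j] - a[j] * cp[j - 1]
        cp[j] = c[j] / m
        dp[j] = (d[j] - a[j] * dp[j - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for j in range(n - 2, -1, -1):
        x[j] = dp[j] - cp[j] * x[j + 1]
    return x

def concentration_step(C_n, C_nm1, u, w, Pe_in, dx, dz):
    """Implicit transverse solve of (21) at section n+1, for a given velocity
    guess (u, w) and the two previous concentration profiles.
    Symmetry at x=0 and the Robin condition (17) at x=1 are imposed with
    first-order one-sided differences (assumes Pe_in*u[J]*dx < 1)."""
    J = len(C_n) - 1
    a = np.zeros(J + 1); b = np.zeros(J + 1); c = np.zeros(J + 1); d = np.zeros(J + 1)
    # interior nodes: Pe_in*(u C_x + w C_z) - C_xx = 0, with (26) for C_z
    for j in range(1, J):
        a[j] = -Pe_in * u[j] / (2 * dx) - 1.0 / dx**2
        b[j] = 3.0 * Pe_in * w[j] / (2 * dz) + 2.0 / dx**2
        c[j] = Pe_in * u[j] / (2 * dx) - 1.0 / dx**2
        d[j] = Pe_in * w[j] * (4.0 * C_n[j] - C_nm1[j]) / (2 * dz)
    b[0], c[0], d[0] = 1.0, -1.0, 0.0                        # dC/dx = 0 on the axis
    a[J], b[J], d[J] = -1.0, 1.0 - Pe_in * u[J] * dx, 0.0    # Pe_in*u*C = dC/dx at the wall
    return thomas(a, b, c, d)
```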
A classical mesh-size dependency study has been performed [START_REF] Bernales | Modélisation de l'hydrodynamique et des transferts dans les procédés de filtration membranaire[END_REF] in both the axial and transverse directions; this numerical check confirms the second-order accuracy expected from the finite difference approximations used.
Validation and results
Even though the numerical approach we have derived is rather light, both the numerical technique and the spirit of the Prandtl approximation need to be validated. The best manner is to resort to exact analytical solutions, even when their relevance to practical filtration is limited. Once this step is completed, we can tackle the numerical prediction of well-documented experiments.
Validation with exact analytical solutions
First of all, the numerical model is validated by comparison with the Berman flow. This validation implies the simulation of a uniform permeation of pure solvent, namely a nearly isobaric pure water flow. In accordance with the validity domain of the Berman solution, this is achieved when the parameter α becomes as small as possible, since the pressure drop is proportional to α². In Table 1, the numerical data obtained for three cases of rapidly decreasing α are gathered. $E_p$ is the maximum of the relative error between the numerical result (average permeate flux) and the reference solution (here the Berman flow). The departure clearly diminishes as α², indicating that the error only comes from the pressure drop, which induces a non-uniform permeation in the numerical simulation.
Test case   α        R_in   N (axial meshes)   E_p
1           10^-2    0.1    10^4               10^-4
2           10^-3    0.1    10^4               10^-6
3           10^-4    0.1    10^4               10^-8

Table 1: Error with respect to the simple case of uniform permeation (Berman exact solution)
We retrieve the fact that the pressure becomes uniform as α² diminishes, as does the permeation (according to Darcy's law).
For the sake of illustration, Figure 3 shows the axial and transverse velocity profiles. Both numerical and analytical (Berman flow) solutions are plotted for three axial sections, with no visible departure. We observe that the axial velocity component is close to a parabola (but slightly flatter at the maximum). As expected, the analytical solution predicts the velocity well (α = 0.01). The reduced pressure p is nearly constant in that case (Figure 4d), leading to a constant local permeate flux that does not depend on the axial position (Figure 4b). Hence, the axial flowrate q decreases linearly, as seen in Figure 4d. There is consequently a very good agreement with Berman's theory in this case, as expected.
For the second case, an important pressure drop was tested (α = 0.5). The reduced pressure p decreases all along the channel (Figure 5d), leading to a decreasing local permeate flux (Figure 5b). The axial flowrate then decreases more slowly than expected (Figure 5d), and the exhaustion length, z = 1, is reached without the axial flow being completely exhausted (Figures 5c, 5d). As seen in Figure 5d, the reduced flowrate q decreases faster than the reduced pressure p; hence, if the channel were longer, axial flow exhaustion (AFE) would occur rather than cross-flow reversal (CFR). This is consistent with a previous work [START_REF] Haldenwang | Laminar flow in a two-dimensional plane channel with local pressure-dependent crossflow[END_REF] predicting this behaviour for these values of α and $R_{in}$ ($R_{in} \approx 0$, $\alpha < 1/\sqrt{3}$). If the relative pressure drop is further increased (α = 0.75), the reduced pressure p decreases faster than the flowrate q, as seen in Figure 6d. At z = 0.8, the reduced pressure becomes negative (that is, the pressure on the retentate side becomes lower than the pressure on the permeate side), leading to a cross-flow reversal (CFR): water flows from the permeate to the retentate, as expected for $R_{in} \approx 0$, $\alpha > 1/\sqrt{3}$ [START_REF] Haldenwang | Laminar flow in a two-dimensional plane channel with local pressure-dependent crossflow[END_REF]). Hence, the reduced axial flowrate q increases again for z > 0.8. Figure 6c confirms this behavior: at z = 0.8 the streamlines change direction, the wall switching from suction to injection.
To investigate the influence of the transverse Reynolds number $R_{in}$, other values of $R_{in}$ (0.2, 0.5), which might correspond to ultrafiltration (UF) or microfiltration (MF) systems, have been evaluated; in both cases the same behavior was found for the same values of the parameter α.
Finally, let us consider the validation of the solute transfer coupled with the fluid motion. The solute concentration profile will follow the exact analytical solution obtained by [START_REF] Haldenwang | Exact solute polarization profile combined with osmotic effects in berman flow for membrane cross-flow filtration[END_REF], if the following conditions are fulfilled: a) α vanishes (negligible pressure drop to get the Berman flow); b) the exact solute boundary layer and Berman flow are initiated at the duct entrance; c) the recovery is low (i.e. the comparison is carried out for small λ).
In Figure 7, the numerically computed concentration profiles, as well as those given by our analytical studies, are plotted for four cases of increasing operating pressure (i.e. increasing $Pe_{in}$). As expected, increasing the operating pressure leads to higher permeate fluxes and hence to a more pronounced concentration polarization. The discrepancy between the numerical and the approximate analytical solutions is hardly discernible in Figure 7.
Comparisons with approximated approaches of the literature
Figures 8 and 9 show the solute concentration at the wall and the axial variation of the permeate flux, respectively. These figures compare the results of the [START_REF] Song | Theory of concentration polarization in crossflow filtration[END_REF] model, the TSB-plug model ([START_REF] Song | Concentration polarization in a narrow reverse osmosis membrane channel[END_REF]) and the TSB-shear model ([START_REF] Song | A total salt balance model for concentration polarization in crossflow reverse osmosis channel with shear flow[END_REF]), described in the Appendix (subsection 5.3), with our numerical model for a RO module. We remark that our numerical model predicts a strong development of the concentration polarization layer and a more rapid decay of the permeate flux in the region very close to the entrance. This prediction can be explained as follows: with permeable walls, the transverse advection strongly modifies the construction of the layer, which is no longer created by diffusion only; our careful iterative process allows us to take this unusual balance into account. Let us compare our model with the TSB models. TSB models uncouple the axial and transverse solute transfer, and their solute concentration profile (50) is then flatter than ours (39). Since the total salt concentration is conserved, the predicted wall solute concentration is weaker than ours, and the local permeate flux is higher. If the membrane is long enough, the higher fluxes in the TSB models eventually lead to higher solute concentrations, and their predicted local permeate flux then decreases faster than ours for Z > 1 m. Recently, Liu et al. (2014) developed a TSB model considering a parabolic velocity profile; their results show an intermediate behaviour between the TSB-plug and the TSB-shear models, with a concentration polarization more intense than predicted by the former, but more attenuated than predicted by the latter.
Comparison of the numerical and analytical solutions shows good agreement for low permeate fluxes and short membranes (Figure 10). It should be kept in mind that the analytical model was obtained for high-pressure conditions. When the inlet pressure $P_{in}$ is increased, the value of α decreases; however, λ increases at the same time, from 0.0286 for 1 bar to 0.571 for 20 bar. Hence, the hypothesis λ ≪ 1 is not valid for high permeate fluxes, which explains the gap between the analytical and the numerical solutions. The analytical solution overestimates the permeate flux because it considers a constant transverse concentration profile (39), whereas the numerical solution considers the more realistic development of the concentration polarization from a homogeneous inlet concentration at z = 0.
For long membranes (0.171 ≤ λ ≤ 3.43, Figure 11), the numerical solution deviates from the analytical solution (even for low permeate fluxes), since the low-recovery hypothesis is not valid. Considering that the recovery is low implies an underestimation of the solute concentration, hence of the osmotic pressure, and finally an overestimation of the local permeate flux. Furthermore, as the TMP increases, the discrepancy between the analytical solution and both numerical models grows. This is of course due to the analytical solution, whose hypotheses are no longer fulfilled as pressure increases. More precisely, in Figure 11 the channel length is large and the hypothesis of low recovery becomes invalid, all the more so as the TMP produces a large permeation.
Comparison of our numerical solution with the TSB-plug model shows a quite good agreement for long membranes (Figure 11). For short membranes (Figure 10), the TSB model predicts higher values than our numerical model; this trend was expected, owing to the major influence, in our numerical model, of the development of concentration polarization at the beginning of the process.
Comparisons with experimental data
At this point, we have demonstrated that the Prandtl approximation offers an accurate alternative to the full conservation laws. We now need to validate the overall approach, i.e. the basic choice of the mathematical model itself. This must be established by comparison with experiments. For this purpose, we choose several experimental contributions that provide careful measurements in both NF and RO configurations. We first compare our averaged permeate fluxes as a function of the operating pressure with the experimental results obtained by [START_REF] Geraldes | The effect on mass transfer of momentum and concentration boundary layers at the entrance region of a slit with a nanofiltration membrane wall[END_REF] in a NF module composed of a CDNF501 commercial membrane with a hydraulic permeability of $1.4 \times 10^{-11}$ m s$^{-1}$ Pa$^{-1}$, working at operating pressures ranging between 10 and 40 bar and for three feed axial flow rates. Even though the NF module had a length of 20 cm, the reported data only included average values obtained over the first 6 cm. Table 3 allows a quantitative comparison between the experimental data and the predictions of the numerical simulation for three aqueous solutions of Na$_2$SO$_4$, sucrose and polyethylene glycol 1000 (PEG1000). For the physical properties of the chemical solutions, we recall that the numerical prediction only uses standard constants taken from the literature.
For both the Na$_2$SO$_4$ and sucrose solutions, the comparison between numerical and experimental results shows an excellent agreement (a discrepancy of the order of 10%); the sign of the discrepancy is however reversed for the sucrose solutions, for which the numerical simulation provides higher values than the experiments.
As for the PEG1000 solutions, our modeling overpredicts the permeation, in particular when the operating pressure is higher than 20 bar. When the feed flow is increased (i.e. when the concentration polarization is weaker), the discrepancy diminishes. As a matter of fact, the Schmidt number of PEG1000 is very large and the polarization at high pressure attains large values, such that an osmotic law increasing linearly with the PEG1000 concentration can no longer be appropriate. To conclude this stage of model validation, the numerical predictions can be considered reliable for both the Na$_2$SO$_4$ and sucrose solutions. For solutes of larger Schmidt number (such as PEG1000), the polarization at high trans-membrane pressure leads to very concentrated layers, for which the physical "constants" have to be modified.
[Table 3: experimental vs. numerical permeate fluxes for the Na$_2$SO$_4$, sucrose and PEG1000 solutions; columns: solute, $P_{in}$ ($10^5$ Pa), $W_{in}$ (m s$^{-1}$), $C_{in}$ (kg m$^{-3}$), Sc, $J_{exp}$ (m s$^{-1}$), $J_{num}$ (m s$^{-1}$).]
We next turn towards a reverse osmosis experiment conducted with salt. More precisely, we now compare two predictions (the TSB-shear model and the present numerical model) with the experiments of [START_REF] Zhou | A numerical study on concentration polarization and system performance of spiral wound ro membrane modules[END_REF]. This experiment presents a particular difficulty for our laminar simulation, since the flow is turbulent due to the effect of spacers. An aqueous solution of NaCl feeds a set of four spiral-wound RO modules of a polyamide composite membrane (TFC 2540SW, Koch Membrane Systems). The pilot had a total length L of 4 m, a distance between membranes 2d of 0.6 mm, and a membrane resistance $I_0 = 8.41 \times 10^{10}$ Pa s m$^{-1}$. Experiments were carried out with a mean inlet axial velocity $W_{in} = 0.075$ m/s and for three different feed concentrations $C_{in}$: 500, 1000 and 3000 mg/l.
To overcome the difficulty due to the presence of turbulent mixing, it was proposed in [START_REF] Zhou | A numerical study on concentration polarization and system performance of spiral wound ro membrane modules[END_REF]] to adopt the concept of an effective solute diffusivity, suggested at the value $D_0 = 1.81 \times 10^{-8}$ m$^2$/s, i.e. about ten times the molecular diffusion coefficient of sodium chloride. We have used the same turbulent diffusivity in our numerical Prandtl approach. Numerical simulations are depicted by solid lines, while TSB model results and experimental data are plotted as dash-dotted lines and symbols, respectively. For all cases, both models agree very well with the experimental fluxes up to a pressure of 10 bar but deviate gently as the pressure increases further. The maximum difference is about 15% for a feed concentration of 500 mg/l at a pressure of 15 bar. Actually, although the simulations considered a membrane 4 m long, the experiments were carried out in four modules 1.016 m long connected in series [START_REF] Zhou | A numerical study on concentration polarization and system performance of spiral wound ro membrane modules[END_REF]. There is no permeate flux along the connection pipes between the modules, which reduces the concentration polarization. This could explain the discrepancy between simulations and experiments, especially at the highest trans-membrane pressures.
We now conduct the last comparison of our numerical predictions with experimental results. Measurements of the concentration polarization profile were reported by Salcedo-Díaz et al. (2010) and consisted in measuring the concentration polarization layer of a Na$_2$SO$_4$ solution within a RO module by digital holographic interferometry. The module was composed of a thin-film membrane (TFM-50, from Hydro Water S.L.) and sized 10 cm × 3 mm × 10 mm (L × W × H). In all three cases the operating pressure $P_{in}$ was 7.2 bar and the feed concentration $C_{in}$ was 8.5 kg/m$^3$, for three different inlet mean velocities $W_{in}$ of 0.2, 0.7 and 1.7 cm/s. Unlike all the previous experimental configurations, which involved two parallel permeable walls, these experiments were carried out in a channel with one permeable wall and one impermeable wall. Hence, the boundary conditions for the axial velocity (22) have been modified:
\[ w(0, z) = 0 \tag{30} \]
Figure 13 shows the concentration polarization profiles obtained with our numerical model and with the TSB model (solid and dotted lines, respectively), compared with the experimental ones (symbols). Measurements were performed at a distance Z = 5 cm from the inlet. A good agreement is found between our numerical predictions and the experimental data. We observe that the modeling slightly overestimates the concentration gradient (and hence the permeate flux). The same trend in the discrepancy also appears in [START_REF] Salcedo-Diaz | Visualization and modeling of the polarization layer in crossflow reverse osmosis in a slit-type channel[END_REF]]. Here, we can propose two possible explanations. First, our model considers a total rejection by the membrane, whereas the experiments showed a rejection of only 97% for Na$_2$SO$_4$. Second, the experimental channel has a width-to-height ratio of 3.3, so the assumption of a two-dimensional geometry might be a slightly inaccurate approximation. Lastly, the TSB model has difficulties in fitting the experimental concentration profiles; this can partly be attributed to the fact that the latter predictions were obtained for a symmetric channel.
To conclude this important stage of validation of the mathematical model against experiments, let us make the following remarks. The Prandtl simulations have, in a general manner, shown an excellent accordance with previous analytical and numerical results, and a good agreement with the experimental results. This holds for both "clean water" filtration and the coupling between hydrodynamics and mass transfer. Furthermore, the possible discrepancies observed when comparing with experiments are likely to be corrected by using concentration-dependent physical constants. Such an improvement does not affect the present overall numerical approach, and will be the object of future contributions. Another remarkable point is that, in all cases, the flow always matches locally a pattern that can be identified with the analytical solution (i.e. [START_REF] Berman | Laminar flow in channels with porous walls[END_REF]). In other words, the flow pattern adapts to permeation, while mass transfer rules the whole system (of course, the flow plays an important role in mass transfer).
Conclusions and final discussion
An efficient two-dimensional numerical model has been developed; it solves the solute conservation equation coupled with the steady Navier-Stokes equations under the Prandtl approximation. This approach of cross-flow membrane filtration takes the pressure-dependent leakage into account, as well as the effect of concentration polarization on the osmotic (counter-)effects in RO/NF filtration. More precisely, the effective trans-membrane pressure depends on the osmotic effects due to concentration polarization at the membrane. Under specific operating conditions, this numerical model can predict the local permeate flux and the solute concentration polarization with an excellent accuracy, as shown during the validation steps, where several exact analytical solutions were retrieved. In comparison with other numerical approaches, our model predicts a rapid development of the polarization boundary layer and hence a faster decline in permeation in the vicinity of the entrance. This prediction, which differs from other numerical models, might be explained by the ability of the present model to accurately enforce the non-linear boundary condition at the very entrance.
In the last section, we have compared our numerical predictions with several RO/NF experimental results. A good accordance was generally obtained. This validates the overall modeling approach, from the choice of the model to the numerical solution method. It is furthermore worthy of note that the present numerical tool can easily be adapted to more sophisticated situations in terms of viscous and transport properties. In the same vein, the numerical model can be extended to other geometries, such as tubular membranes, or to non-linear laws for the osmotic pressure, as realized in [START_REF] Lopes | Prediction of permeate flux and rejection rate in ro and nf membrane processes: Numerical modelling of hydrodynamics and mass transfer coupling[END_REF]]. In the same manner, a model of membrane selectivity with partial solute rejection can be considered, as developed in [START_REF] Lopes | Predicting permeate fluxes and rejection rates in reverse osmosis and tightnanofiltration processes[END_REF]]. The present numerical approach has also easily been adapted to a situation of fouling, a work which is left for a future publication. Lastly, the filtration of multi-species systems (as in the case of sea water desalination) can easily be envisaged from the present work: for each species, a conservation law can be duplicated from the present solute conservation law, and an osmotic number must be introduced for every species.

Appendix

The purpose of this addendum is to present the main properties of the analytical or semi-analytical results that we have used in the validation step of our numerical approach. We shall particularly stress the underlying assumptions that make the analytical approach feasible.
The Berman flow
The Berman flow is a basic solution for the fluid motion in a leaky channel, when the permeation is uniform and fixed all along the porous walls [START_REF] Berman | Laminar flow in channels with porous walls[END_REF]. Under this hypothesis, Berman derived the flow solution $\{u(x,z), w(x,z)\}$ of the steady Navier-Stokes equations complemented with the lateral boundary conditions $u(x = \pm 1, z) = U_W/U_{in} = \pm 1$, as
\[ u(x, z) = B(x; R_{in}), \qquad w(x, z) = (1 - z)\,B'(x; R_{in}) \tag{31} \]
where $B(x; R_{in})$ is the solution of the ordinary differential equation (ODE):
\[ R_{in}\left(B B'' - B'^2\right) - B''' = K(R_{in}) \quad \text{in } 0 < x < 1 \tag{32} \]
with the boundary conditions:
\[ B(0) = 0, \quad B(1) = 1, \quad B'(1) = 0, \quad B''(0) = 0 \tag{33} \]
and $R_{in}$ is the transverse Reynolds number of permeation. For the standard configurations of filtration, a rich literature has shown that the solution of ODE (32) is unique, stable and attractive, in the sense that, if the inlet conditions differ from the Berman x-profiles, the Berman solution is recovered shortly after the entrance. Moreover, in the range of interest (i.e. $R_{in} \le 4$), the following Taylor expansion of the Berman function converges rapidly in the standard situations of filtration (see for instance [START_REF] Haldenwang | Laminar flow in a two-dimensional plane channel with local pressure-dependent crossflow[END_REF]]):
\[ B(x; R_{in}) = \sum_{n=0}^{\infty} \frac{1}{n!}\, f_n(x)\, R_{in}^n \tag{34} \]
where the first two coefficients are:
\[ f_0 = \frac{3x}{2}\left(1 - \frac{x^2}{3}\right); \qquad f_1 = \frac{x}{280}\left(-x^6 + 3x^2 - 2\right) \tag{35} \]
The Berman flow for filtration only makes sense when the pressure drop remains weak enough all along the channel, in order to justify that the permeation is uniform. This condition corresponds to the so-called assumption of "High Pressure" (HP).
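For small $R_{in}$, the two-term truncation of (34)-(35) already reproduces the flow pattern; the sketch below evaluates it numerically (the notation is ours, $f_1$ is taken as transcribed in (35), and $B'$ is estimated by a simple centered difference).

```python
# Two-term evaluation of the Berman function (34)-(35); intended for small R_in.
def berman_B(x, R_in):
    f0 = 1.5 * x * (1.0 - x**2 / 3.0)
    f1 = x / 280.0 * (-x**6 + 3.0 * x**2 - 2.0)
    return f0 + R_in * f1

def berman_velocity(x, z, R_in, dx=1e-6):
    """Transverse and axial components (31): u = B(x), w = (1-z)*B'(x),
    with B' estimated here by a centered difference."""
    u = berman_B(x, R_in)
    w = (1.0 - z) * (berman_B(x + dx, R_in) - berman_B(x - dx, R_in)) / (2.0 * dx)
    return u, w

# example: near the entrance (z = 0.1) at mid-gap for R_in = 0.1
print(berman_velocity(0.5, 0.1, 0.1))
```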
Exact analytical solution for mass transfer in Berman flow
In a previous study, [START_REF] Haldenwang | Exact solute polarization profile combined with osmotic effects in berman flow for membrane cross-flow filtration[END_REF] obtained an exact analytical solution of the solute conservation law when the carrying flow is of Berman type. Let us now consider the solute mass conservation law (21), in which the velocity components {u, w} are fixed and given by expression (31), the transverse Reynolds number being unknown since the permeation depends on the hindrance due to concentration polarization. Let us denote this unknown permeation by $U_0$, the transverse velocity at the wall, which leads us to define $Re_0$ [resp. $Pe_0$], the (unknown) Reynolds [resp. Péclet] number of permeation, as $Re_0 = \rho_0 U_0 d/\mu_0$ [resp. $Pe_0 = U_0 d/D_0$]. An exact solution of this problem has been established in [START_REF] Haldenwang | Exact solute polarization profile combined with osmotic effects in berman flow for membrane cross-flow filtration[END_REF] and reads:
\[ C(x, z) = \frac{\tilde C(x, z)}{C_{in}} = \frac{1}{(1 - z)}\, \exp\left(Pe_0 \int_0^x B(x'; R_0)\, dx'\right) \tag{36} \]
with
\[ \int_0^x B(x'; R_0)\, dx' = \sum_{n=0}^{\infty} \frac{1}{n!}\, F_n(x)\, R_0^n \tag{37} \]
where the first two $F_n(x)$ take the form:
\[ F_0(x) = \frac{3x^2}{4} - \frac{x^4}{8}; \qquad F_1(x) = \frac{1}{280}\left(-\frac{x^8}{8} + \frac{3x^4}{4} - x^2\right) \tag{38} \]
Developing this solution (36) at the lowest order with respect to $R_0$ is sufficient in most RO/NF situations and leads to the following simple expression for the solute concentration field:
\[ C(x, z) = \frac{\tilde C(x, z)}{C_{in}} = \frac{1}{(1 - z)}\, \exp\left[Pe_0\left(\frac{3x^2}{4} - \frac{x^4}{8}\right)\right] \tag{39} \]
This relationship gives a mathematical form to the phenomenon of concentration polarization, and allows us to obtain the concentration at the membrane wall as:
\[ C_W(z) = \frac{\tilde C_W}{C_{in}} \approx \frac{1}{1 - z}\, \exp\left(\frac{5}{8}\, Pe_0\right) \tag{40} \]
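For instance, near the entrance ($z \ll 1$), a transverse Péclet number $Pe_0 = 2$ gives $C_W \approx \exp(1.25) \approx 3.5$: the wall concentration is then about three and a half times the feed concentration.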
Now, let us consider the osmotic pressure provoked by $C_W(z)$. To maintain a uniform leakage, the osmotic pressure must be constant all along the channel. Therefore, we must assume that $(1 - z) \approx 1$; in other words, the channel length is limited enough to keep the concentration polarization uniform all along the wall. This condition corresponds to the so-called "Low Recovery" condition (LR). Therefore, under the "HP-LR" assumption, concentration polarization and permeation satisfy
\[ C_W \approx \exp\left(\frac{5}{8}\, Pe_0\right) \tag{41} \]
which, combined with the Darcy-Starling law (2), yields the implicit relation
\[ Pe_0 = Pe_{in} - Pe_{in}^{osm}\, \exp\left(\frac{5}{8}\, Pe_0\right) \tag{42} \]
where $Pe_{in}$ is the pure-solvent transverse Péclet number and $Pe_{in}^{osm}$ is a dimensionless form of the inlet concentration, defined as:
\[ Pe_{in} = \frac{P_{in}\, d}{I_0 D_0}; \qquad Pe_{in}^{osm} = \frac{iRT\, C_{in}\, d}{I_0 D_0} \tag{43} \]
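As stated just below, relation (42) is implicit in $Pe_0$ and must be solved iteratively. A minimal sketch follows, assuming (42) takes the reconstructed form written above; the logarithmic rearrangement and the damping are implementation choices of ours.

```python
import math

def permeation_peclet(Pe_in, Pe_osm_in, tol=1e-10, itmax=200):
    """Iterative solution of the HP-LR relation, assuming the form
    Pe_0 = Pe_in - Pe_osm_in*exp(5*Pe_0/8) (see (41)-(42) above).
    The relation is iterated in the rearranged logarithmic form
    Pe_0 <- (8/5)*ln((Pe_in - Pe_0)/Pe_osm_in), with mild damping;
    this assumes a positive permeation exists (Pe_in > Pe_osm_in)."""
    Pe0 = 0.0
    for _ in range(itmax):
        Pe0_new = 1.6 * math.log((Pe_in - Pe0) / Pe_osm_in)
        if abs(Pe0_new - Pe0) < tol:
            return Pe0_new
        Pe0 = 0.5 * (Pe0 + Pe0_new)   # damping for robustness
    return Pe0

# high-pressure check: Pe_0 should approach 1.6*ln(Pe_in/Pe_osm_in)
print(permeation_peclet(100.0, 0.5), 1.6 * math.log(100.0 / 0.5))
```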
The permeate flux $U_0$ can be calculated from the implicit equation (42) in $Pe_0$ by using an iterative method. This analytical approach is a useful tool to estimate the permeation velocity $U_0$ in the presence of polarization and osmotic effects. Finally, let us note that, in the limit of high operating pressures $Pe_{in} \gg 1$, expression (42) reduces to $Pe_0 \approx 1.6\, \ln\left[Pe_{in}/Pe_{in}^{osm}\right]$. The latter expression simply demonstrates the classically observed saturation of the permeation when increasing the operating pressure.

Song-Elimelech model

[START_REF] Song | Theory of concentration polarization in crossflow filtration[END_REF] proposed to consider equation (21) under various simplified forms, according to the zone concerned. The first step in the assumption process consists in neglecting the second term in the RHS of equation (21) (the axial mass transport) when considering the domain in the very vicinity of the membrane. After integration, this yields the dimensional form
\[ U_0\left(\tilde C - \tilde C_{in}\right) = -D_0 \frac{d\tilde C}{dX} \tag{44} \]
where $\tilde C_{in}$ is the feed (bulk) solute concentration. This model assumes that the concentration polarization layer is thinner than the spacing of the membrane channel, and considers at steady state a mass balance relationship in the axial direction through the channel:
\[ \int_0^{\infty} \gamma X \left(\tilde C - \tilde C_{in}\right) dX = \tilde C_{in} \int_0^Z U_0(Z')\, dZ' \tag{45} \]
where γ is the shear rate, Z the axial distance from the entrance and X the transverse coordinate. In this model the wall concentration can be determined by solving equations (44) and (45) simultaneously:
\[ \tilde C_W = \tilde C_{in}\left[1 + \frac{U_0^2}{D_0^2\, \gamma} \int_0^Z U_0(Z')\, dZ'\right] \tag{46} \]
If a constant shear rate γ is assumed (Figure 14) an analytical expression of the permeate velocity can be obtained as follows:
\[ U_0(Z) = \frac{U_0(0)}{(1 + AZ/L)^{1/3}} \left\{ \left[\left(\frac{1}{1 + AZ/L} + 4\right)^{1/2} + 2\right]^{1/3} - \left[\left(\frac{1}{1 + AZ/L} + 4\right)^{1/2} - 2\right]^{1/3} \right\} \tag{47} \]
where U_0(0) is the permeate velocity at the inlet, A is a dimensionless operating parameter and L is the membrane channel length. Considering that the total solute flux downstream along the channel is constant, even though both the solute wall concentration and the permeate flux change, leads to the following mass balance relationship:
$$\tilde{C}_m\,W_m = C_{in}\,W_{in} \qquad (53)$$
where W_in and W_m are the axial mean velocity at the entrance and the axial mean velocity at any Z position, respectively. Working with an average axial velocity is equivalent to assuming that the fluid has a constant axial velocity across the channel height (plug flow profile in Figure 14). From the mass conservation principle for the solvent we can establish the following relationship:
$$\frac{dW_m}{dZ} = -\frac{U_0}{d} \qquad (54)$$
The variation of $\tilde{C}_m$ along the membrane channel can be determined by taking the derivative of equation (53) with respect to Z; using equation (54), we obtain:
$$\frac{d\tilde{C}_m}{dZ} = \frac{U_0}{W_m\,d}\,\tilde{C}_m \qquad (55)$$
A numerical method can solve equations ( 55),( 54), ( 2) and ( 52). Discretizing the membrane channel length L into n equal segments (∆Z = L/n), the finite difference forms of these equations are:
$$\tilde{C}_{m,i+1} = \left(1 + \frac{U_{0,i}\,\Delta Z}{W_{m,i}\,d}\right)\tilde{C}_{m,i} \qquad (56)$$
$$W_{m,i+1} = W_{m,i} - \frac{\Delta Z}{d}\,U_{0,i} \qquad (57)$$
$$U_{0,i+1} = \frac{P - \Gamma\,\tilde{C}_{W,i}}{I_0} \qquad (58)$$
$$\tilde{C}_{W,i+1} = C_{in} + (\tilde{C}_{m,i+1} - C_{in})\,\frac{U_{0,i}\,d/D_0}{1 - \exp(-U_{0,i}\,d/D_0)} \qquad (59)$$
The iterative process begins when the feed solute concentration C_in, the axial mean velocity at the entrance W_in and the driving pressure P are given. The mean solute concentration $\tilde{C}_m$, the axial mean velocity W_m, the permeate velocity U_0 and the solute wall concentration $\tilde{C}_W$ can then be obtained progressively throughout the entire membrane channel using the finite difference equations above.
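The marching procedure can be written compactly as below. This is only a sketch of the finite-difference loop defined by equations (56)-(59), with placeholder inlet conditions and property values; the permeation law (equation 2 of the main text) enters through the explicit update (58).

```python
import numpy as np

def march_plug_flow(C_in, W_in, P, Gamma, I0, D0, d, L, n):
    """Forward finite-difference march of Eqs. (56)-(59) along the channel."""
    dZ = L / n
    Cm, Wm, Cw = C_in, W_in, C_in       # mean concentration, axial velocity, wall concentration
    U0 = (P - Gamma * Cw) / I0          # permeate velocity at the inlet, Eq. (58)
    out = []
    for _ in range(n):
        Cm = (1.0 + U0 * dZ / (Wm * d)) * Cm                   # Eq. (56)
        Wm = Wm - dZ / d * U0                                  # Eq. (57)
        U0 = (P - Gamma * Cw) / I0                             # Eq. (58)
        Pe0 = U0 * d / D0
        Cw = C_in + (Cm - C_in) * Pe0 / (1.0 - np.exp(-Pe0))   # Eq. (59)
        out.append((Cm, Wm, U0, Cw))
    return np.array(out)

# Illustrative run with arbitrary values (not the operating conditions of the paper)
profiles = march_plug_flow(C_in=10.0, W_in=0.2, P=8e5, Gamma=2 * 8.314 * 298,
                           I0=1e10, D0=1.5e-9, d=1e-3, L=1.0, n=200)
```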
Total salt balance model with shear flow
In this model a shear flow rate γ for the axial velocity is considered, as shown by the shear flow profile in Figure 14. From Figure 14 we deduce that the shear rate γ for an average axial velocity W_m is:
$$\gamma = \frac{2}{d}\,W_m \qquad (64)$$
The derivative of γ with respect to Z gives:
$$\frac{d\gamma}{dZ} = \frac{2}{d}\,\frac{dW_m}{dZ} = -\frac{2}{d^2}\,U_0 \qquad (65)$$
Substituting equations (64) and (65) into equation (63) yields the following relationship:
$$\frac{d\tilde{C}_m}{dZ} = \frac{U_0}{W_m\,d}\,\tilde{C}_m + \frac{U_0}{W_m\,d}\,A_1\,C_{in} \qquad (66)$$
where A 1 is a new function of the transverse Péclet number:
$$A_1 = \frac{Pe_0\,(1 - \exp(-Pe_0))}{2\,\left(1 - \exp(-Pe_0) - Pe_0\exp(-Pe_0)\right)} - 1 \qquad (67)$$
In this model, the numerical method solves equations (2), (52) and (54) coupled with equation (66) instead of equation (55), using the following forward finite difference forms:
$$\tilde{C}_{m,i+1} = \left(1 + \frac{U_{0,i}\,\Delta Z}{W_{m,i}\,d}\right)\tilde{C}_{m,i} + \frac{U_{0,i}\,\Delta Z}{W_{m,i}\,d}\,A_1\,C_{in} \qquad (68)$$
$$W_{m,i+1} = W_{m,i} - \frac{\Delta Z}{d}\,U_{0,i} \qquad (69)$$
Let us finally remark that the models described above assume a constant driving pressure P through the membrane channel. The variation in operating pressure is obtained by solving the fluid motion. In the latter case, analytically studying this more complex situation is, to say the least, a hard task. Analytical developments for PDEs are never easy, especially when hydrodynamics is strongly coupled with the mass transfer equation. This is why a numerical approach seems more suitable to analyze this situation.
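For the shear-flow variant, the only change with respect to the plug-flow loop above is the extra source term A_1 C_in in the concentration update. A small sketch of this modification is shown below, using the expression of A_1 reconstructed in equation (67); as before, it is an illustration rather than the original implementation.

```python
import numpy as np

def A1(pe0):
    """Shear-flow correction factor of Eq. (67)."""
    e = np.exp(-pe0)
    return pe0 * (1.0 - e) / (2.0 * (1.0 - e - pe0 * e)) - 1.0

def concentration_step_shear(Cm, C_in, U0, Wm, d, dZ, pe0):
    """Forward step of Eq. (68), replacing Eq. (56) of the plug-flow model."""
    k = U0 * dZ / (Wm * d)
    return (1.0 + k) * Cm + k * A1(pe0) * C_in
```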
Prandtl model for concentration polarization and osmotic counter-effects in a 2-D membrane channel
Figure 1: Sketch of cross-flow membrane filtration and concentration polarization
Figure 2: Calculation sequence of the numerical model
Figure 3: Axial and transverse velocity profiles obtained by numerical simulation compared to analytical solutions for a uniform permeation case (α = 0.01, Rin = 0.1, clean water); curves correspond to the symmetric channel (in Fig. 3a, note that both curves for z = 1 and the vertical axis superimpose).
Figure 4: Results obtained by numerical simulation for a pure solvent case with α = 0.01 and Rin = 0.05
Figure 5: Results obtained by numerical simulation for a pure solvent case with α = 0.5 and Rin = 0.05
Figure 6: Results obtained by numerical simulation for a pure solvent case with α = 0.75 and Rin = 0.05
Figure 7: Comparisons of concentration profiles obtained by both numerical and analytical approaches for a uniform permeation case in a symmetric channel for various values of inlet Péclet number (α = 0.001, Nosm = 0.1, z = 0.05).
Figure 8: Comparisons of solute wall concentration along the channel between the Song-Elimelech, TSB-plug and TSB-shear models and our numerical approach for different operating pressures ranging between 6 and 10 bar
Figure 9: Comparisons of permeate flux along the channel between the Song-Elimelech, TSB-plug and TSB-shear models and our numerical approach for different operating pressures ranging between 6 and 10 bar
Figure 10: Channel length L = 1 m
Figure 11: Channel length L = 6 m
Figure 12: Comparisons of the average permeate flux according to the Zhou experiments [Zhou et al. (2006)] and the permeate flux obtained numerically for different operating pressures and NaCl initial concentrations.
Figure 13: Comparisons of concentration profiles between TSB (dotted lines) and the present numerical model (solid lines) with experiments [Fernández-Sempere et al. (2010)] (triangles) for a solution of Na2SO4 operating at Pin = 7.2 bar and a feed initial concentration Cin = 8.5 g/l: (a) Win = 0.2 cm/s, (b) Win = 0.7 cm/s and (c) Win = 1.7 cm/s
α: ratio between Hagen-Poiseuille pressure drop and transmembrane pressure
Γ: osmotic factor, Γ ≡ iRT (J)
γ: shear rate in TSB model (1/s)
λ: channel length reduced by dead-end length L_de, λ ≡ L/L_de
µ_0: dynamic viscosity (kg/m/s)
$\tilde{C}_W$: solute concentration at the membrane wall (moles/m³); $\bar{C}_W = \tilde{C}_W/C_{in}$
˜: transforms variables or certain unknowns into their dimensional form
B(x; R_in): Berman function that gives the dimensionless transverse velocity, i.e. the solution to Eq. (32) combined with boundary conditions (33)
C_in: feed solute concentration (moles/m³)
D_0: solute diffusivity through the solvent (m²/s)
i: number of dissociated entities (ionic or neutral) per solute molecule
I_0: membrane resistance (kg/m²/s)
L: membrane length (m)
L_de: dead-end length, or location where axial flowrate is exhausted, L_de = W_in d/U_in
N_osm: osmotic number, N_osm ≡ P_in^osm/P_in
P: driving pressure (kg/m/s²)
p: dimensionless pressure (p = P/P_in)
P_in^osm: osmotic pressure due to feed concentration, P_in^osm ≡ ΓC_in
Pe_0: actual Péclet number of permeation, Pe_0 ≡ U_0 d/D_0
Pe_in: pure solvent transverse Péclet number
R: perfect gas constant (J/mol/K)
w: dimensionless axial velocity (= W/W_in)
W_m: axial mean velocity at Z (m/s)
W_in: axial mean velocity at entrance (m/s)
X: dimensional transverse coordinate in TSB model (m)
x: dimensionless transverse coordinate (x = X/d)
Z: dimensional axial coordinate in TSB model (m)
z: dimensionless axial coordinate (z = Z/L_de)
5. Appendix: a brief description of the models used for validation
Figure 14: Different profiles of a laminar flow in the membrane channel
Total salt balance model with plug flow
This model takes into account that the concentration polarization layer can develop over the whole height of the membrane channel. The solute concentration at any transverse coordinate can be determined by integrating equation (44):
$$\tilde{C} = C_{in} + (\tilde{C}_m - C_{in})\,\frac{Pe_0\,\exp(-U_0 X/D_0)}{1 - \exp(-Pe_0)} \qquad (50)$$
where $\tilde{C}_m$ is the average solute concentration and Pe_0 is the transverse Péclet number:
$$Pe_0 = \frac{U_0\,d}{D_0} \qquad (51)$$
We can consequently determine the wall concentration $\tilde{C}_W$ by evaluating equation (50) at X = 0:
$$\tilde{C}_W = C_{in} + (\tilde{C}_m - C_{in})\,\frac{Pe_0}{1 - \exp(-Pe_0)} \qquad (52)$$
Therefore the mass balance relationship (53) along the cross-flow direction reads
$$\int_0^{\infty} \gamma X\,\frac{d\tilde{C}}{dX}\,dX = W_{in}\,C_{in}\,d \qquad (60)$$
Replacing the expression (50) for $\tilde{C}$ in equation (60) above leads to equation (61); integration of equation (61) results in
$$W_{in}\,C_{in}\,d = \gamma d^2\left[\frac{1}{Pe_0} - \frac{\exp(-Pe_0)}{1 - \exp(-Pe_0)}\right](\tilde{C}_m - C_{in}) \qquad (62)$$
$$\tilde{C}_{W,i+1} = C_{in} + (\tilde{C}_{m,i+1} - C_{in})\,\frac{Pe_{0,i}}{1 - \exp(-Pe_{0,i})} \qquad (71)$$
Table 2 summarizes the dimensionless parameters of the three tested cases in "clean water" filtration. The influence of parameters α and R_in is reflected in all panels of Figures 4, 5, and 6. Panels a and b illustrate

Table 2: Operating conditions of various test cases in "clean water" filtration

Test Case   P_in (bar)   W_in (m/s)   α      R_in
4           100          0.23         0.01   0.05
5           100          11.2         0.5    0.05
6           100          16.7         0.75   0.05
Table 3: Average permeate flux of experiments carried out by Geraldes et al. (2002) compared with the same quantity obtained by numerical simulations
B. Bernales, P. Haldenwang, P. Guichardon, N. Ibaseta
M2P2 UMR 7340; Aix Marseille Université, CNRS, Centrale Marseille, 38 rue Frédéric Joliot-Curie, 13451 Marseille Cedex 20, France

Highlights
1. An efficient two-dimensional numerical model based on the solute conservation equation coupled with the steady Navier-Stokes equations has been developed.
2. The local permeate flux and the solute concentration polarization are predicted with excellent accuracy.
3. In comparison with other numerical approaches, our model predicts a rapid development of the polarization boundary layer and a faster decline in permeation.
4. An excellent agreement was obtained between our numerical results and several reverse osmosis experimental results.
"770079",
"2744"
] | [
"199963",
"199963",
"199963",
"199963"
] |
01499481 | en | [
"chim",
"sdv",
"sde"
] | 2024/03/05 22:32:16 | 2017 | https://amu.hal.science/hal-01499481/file/MICROC_2017_19_Original_V0-3-30.pdf | Fabien Robert-Peillard
email: [email protected]
Edwin Palacio Barco
Marco Ciulu
Carine Demelas
F Théraulaz1
J.-L Boudenne
Bruno Coulomb
Frédéric Théraulaz
Jean-Luc Boudenne
High throughput determination of ammonium and primary amine compounds in environmental and food samples
Keywords: ammonium, primary amine, microplate, reagents stability
In this paper, an improved spectrofluorimetric method for the simultaneous and direct determination of ammonium and primary amine compounds is presented. The method is based on the derivatization with o-phthaldialdehyde (OPA) / N-acetylcysteine (NAC) reagent using high throughput microplates, and OPA/NAC ratio has been optimized in order to suppress interference of ammonium on primary amine determination. Direct measurement of these two parameters is therefore possible with a global procedure time that does not exceed ten minutes. Excellent limits of detection of 1.32 µM and 0.55 µM have been achieved for ammonium and primary amines, respectively. Reagent stability issues have also been addressed and formulation of reagents solution is described for improved reagents shelf life.
The proposed protocol was finally applied and validated on real samples such as wine samples, compost extracts and wastewater.
Introduction
Amino compounds are widely distributed in the environment and essentially result from metabolic processes of degradation, hydrolysis and excretion at different levels of the food chains. Soil organic matter thus contains about 30% of its nitrogen pool as amino acids [START_REF] Stevenson | Humus chemistry: Genesis, composition, reactions[END_REF], and a significant proportion of NH 4 + can be released from organic matter by microbial hydrolysis [START_REF] Barraclough | The direct or MIT route for nitrogen immobilization: A 15 N mirror image study with leucine and glycine[END_REF]. Anthropogenic activities also contribute to the presence of these compounds and ammonium in the environment, mainly attributed to the incineration of waste [START_REF] Seeber | Determination of airborne, volatile amines from polyurethane foams by sorption onto a high-capacity cation-exchange resin based on poly(succinic acid)[END_REF] and the discharges of waste waters from chemical industry and wastewater treatment plants [START_REF] Seeber | Determination of airborne, volatile amines from polyurethane foams by sorption onto a high-capacity cation-exchange resin based on poly(succinic acid)[END_REF][START_REF] Skarping | Determination of toluenediamine isomers by capillary gas chromatography and chemical ionization mass spectrometry with special reference to the biological monitoring of 2,4-and 2,6-toluene diisocyanate[END_REF].
Among the nitrogen pool, the determination of primary amino compounds is a significant parameter in the food processing and drinking-water treatment industries, as these compounds can react with nitrites or nitrates to form nitrosamines, which are classified as "probably carcinogenic to humans" [START_REF] Hureiki | Chlorination studies of free and combined amino acids[END_REF][START_REF] Brosillon | Chlorination studies of free and combined amino acids[END_REF].
Primary amino compounds (mainly primary amino acids) are also very important regarding the nitrogen management in some specific fields such as wine industry [START_REF] Ugliano | Nitrogen management is critical for wine fl avour and style[END_REF]. Indeed, assaying the total primary amino acids concentration is often considered to be the most convenient method to measure assimilable nitrogen, which is critical for wine flavour and style.
Regarding ammonium, its determination in environmental samples is highly relevant, due to its important micronutrient function in aquatic systems and its use as an important index in composting process studies. As an example, a high concentration in a water body can be an indicator of the environmental impact of human activities, with strong effects on microbiological activities that can potentially lead to eutrophication events [START_REF] Zamparas | Nitrogen management is critical for wine flavour and style[END_REF]. In the composting field, a low ammonium concentration coupled to a low N-NH4+/N-NO3- ratio has been proposed as an indicator of compost stability [START_REF] Bernal | Influence of sewage sludge compost stability and maturity on carbon and nitrogen mineralization in soil[END_REF].
Several methods have been proposed for the analysis of ammonium [START_REF] Molins-Legua | Influence of sewage sludge compost stability and maturity on carbon and nitrogen mineralization in soil[END_REF], such as spectrophotometry [START_REF] Marina | Flow-injection determination of low levels of ammonium ions in natural waters employing preconcentration with a cation-exchange resin[END_REF][START_REF] Oms | Gas diffusion techniques coupled sequential injection analysis for selective determination of ammonium[END_REF][START_REF] Jeong | Determination of NH 4 + in environmental water with interfering substances using the modified Nessler method[END_REF], ion selective electrode [START_REF] Meseguer-Lloret | Ammonium determination in water samples by using OPA-NAC reagent: a comparative study with Nessler and ammonium selective electrode methods[END_REF], fluorimetry [START_REF] Meseguer-Lloret | Determination of ammonia and primary amine compounds and Kjeldahl nitrogen in water samples with a modified Roth's fluorimetric method[END_REF][START_REF] Cao | A new fluorescence method for determination of ammonium nitrogen in aquatic environment using derivatization with benzyl chloride[END_REF] or ion chromatography [START_REF] Shen | The application of a chemical sensor array detector in ion chromatography for the determination of Na + , NH 4 + , K + , Mg 2+ and Ca 2+ in water samples[END_REF]. Primary amines/amino acids are generally analyzed individually by liquid chromatography [START_REF] Krumpochova | Amino acid analysis using chromatography-mass spectrometry: an inter platform comparison study[END_REF][START_REF] Rigas | Post-column labeling techniques in amino acid analysis by liquid chromatography[END_REF], but methods have also been developed to measure the total concentrations of these compounds in order to have a rapid and global assay of these important nitrogen compounds. This can be done for example by spectrophotometric measurements after reaction with ninhydrin [START_REF] Moore | A modified ninhydrin reagent for the photometric determination of amino acids and related compounds[END_REF] or by fluorimetry after reaction with ophthaldialdehyde (OPA) and a thiol compound [START_REF] Jones | Simple method to enable the high resolution determination of total free amino acids in soil solutions and soil extracts[END_REF].
The numerous advantages of this OPA-thiol reagent combination (reaction with ammonium and amines, sensitivity, selectivity, low toxicity and price of reagents...) have led to the development of simultaneous determination methods for ammonium and primary amino compounds. Meseguer Lloret et al. [START_REF] Meseguer-Lloret | Ammonium determination in water samples by using OPA-NAC reagent: a comparative study with Nessler and ammonium selective electrode methods[END_REF] used solution derivatization with OPA-NAC reagent (NAC: N-acetylcysteine), two different excitation and emission fluorescence wavelengths and statistical analysis by multivariate Principal Component Regression in order to separate the responses of ammonium and amines under selected experimental conditions [START_REF] Meseguer-Lloret | Determination of ammonia and primary amine compounds and Kjeldahl nitrogen in water samples with a modified Roth's fluorimetric method[END_REF]. Darrouzet-Nardi et al. [START_REF] Darrouzet-Nardi | Fluorescent microplate analysis of amino acids and other primary amines in soils[END_REF] developed a fluorescent assay with OPA and β-mercaptoethanol for analysis of primary amino compounds, by taking into account potential interference of ammonium.
The main drawback of this method was the necessity to use a 1-h incubation time in order to reduce interference from ammonium.
The aim of the present study is to develop a fast, simple and efficient method for ammonium and primary amino compounds analysis, with no need of statistical analysis, direct measurement from calibration curves and a global procedure time that does not exceed ten minutes. Moreover, this method has been developed as a potential routine analytical method applicable to the large number of samples that an analytical laboratory typically has to deal with (especially for ammonium). Therefore, a high-throughput microplate method based on OPA-NAC reagent was used, with special care on reagents stability which is a key point for routine analysis development (shelf life of at least 3 months without deterioration of analytical performances). The final goal of this study was to conduct a strong validation on complex samples like compost extracts or wastewaters by comparison with a chromatographic reference method, in order to have a robust method for routine analysis of ammonium and primary amino compounds.
Ion exchange chromatography for primary amines determination
Primary amino acid compounds in selected wines were determined by an external laboratory using an automatic amino acid analyzer (Biochrom 30+, Cambridge, England). Wine samples were initially diluted in a sodium citrate buffer (pH 2.2). All amino acids were spectrophotometrically detected after post-column derivatization with ninhydrin reagent at 570 nm. Concentrations of amino acid compounds in unknown samples were determined by comparison with standard peak areas (Sigma-Aldrich amino acid standard kit) and by using norleucine as internal standard. Ion-exchange chromatography analyses on real wine samples were performed on the same day as microplate analyses for validation purposes.

2.3 Analytical protocol for primary amines and ammonium determination

100 µL of sample or standard solution were dispensed into the wells of the microplate, where 30 µL of 13 mM OPA in ethanol-0.15 M carbonate buffer pH 10.5 (10:90, v/v) and 20 µL of a solution of 20 mM NAC and 1.5 mM TCEP in 0.1 M HCl were added. The plate was shaken for 10 min and fluorescence intensity was then recorded, with excitation and emission wavelengths set at λex = 335 nm / λem = 455 nm and at λex = 415 nm / λem = 485 nm for total primary amines and ammonium determination, respectively. Concentrations in unknown samples were determined using the linear calibration curves obtained with standards. All experiments were performed in duplicate.
Results
Optimization of analytical method
3.1.1 OPA/NAC concentration and pH
Initially derivatization of amino acids by OPA-NAC reagent developed by Aswad [START_REF] Aswad | Determination of D-and L-Aspartate in amino acid mixtures by High-Performance Liquid Chromatography after derivatization with a chiral adduct of o-Phthaldialdehyde[END_REF] was carried out with an OPA/NAC ratio of 1:2. Even if the reaction rate and the fluorescence yield are not dependent on the OPA/NAC ratio [START_REF] Zhao | Determination of a-dialkylamino acids and their enantiomers in geological samples by high-performance liquid chromatography after derivatization with a chiral adduct of o-phthaldialdehyde[END_REF], it is nevertheless necessary to use an OPA concentration higher than targeted amino acid concentration [START_REF] Svedas | The interaction of amino acids with o-Phthaldialdehyde: a kinetic study and spectrophotometric assay of the reaction product[END_REF]. In past years OPA/NAC 1:1 ratio solution has often been used as pre-column derivatization reagent for HPLC or capillary electrophoresis separation of amino acids or biological amines.
More recently, this method was adapted for the determination of total primary amine compounds and ammonium in environmental samples by Meseguer Lloret et al. [START_REF] Meseguer-Lloret | Determination of ammonia and primary amine compounds and Kjeldahl nitrogen in water samples with a modified Roth's fluorimetric method[END_REF] with a fluorescence measurement at two ex/em couples of wavelengths to separate the response of the primary amines from that of ammonium. However, this method was based on the use of a
Fig. 1 displays the evolution of fluorescence intensity of the OPA-NAC-ammonium adduct at amino compound wavelengths as a function of reaction time. The increase in OPA and NAC concentration can reduce ammonium interference during primary amine determination. For a concentration greater than 8 mM of OPA and NAC, a reaction time of 600 seconds allows the cross-interference of ammonium over amino compound determination to be fully eliminated. This reaction time is longer than the one used by Meseguer-Lloret et al. [START_REF] Meseguer-Lloret | Determination of ammonia and primary amine compounds and Kjeldahl nitrogen in water samples with a modified Roth's fluorimetric method[END_REF] (120 and 300 seconds respectively for amines and ammonia) but greatly simplifies the calibration process by avoiding the use of statistical tools. The reaction time also depends on the pH used for the derivatization reaction. Fig. 2 shows the fluorescence intensity of the OPA-NAC-ammonium adduct as a function of pH and reaction time. The reaction time previously set at 600 seconds for the simultaneous measurement of ammonium and primary amines is sufficient to obtain a high and stable fluorescence signal for ammonium derivatization by OPA/NAC 8 mM/8 mM at pH = 10.5. This reaction time is significantly lower than that proposed by Darrouzet-Nardi et al. (60 minutes) for an OPA/mercaptoethanol (ME) procedure developed in order to limit interference of ammonium on the measurement of primary amines in soils [START_REF] Darrouzet-Nardi | Fluorescent microplate analysis of amino acids and other primary amines in soils[END_REF].
Buffer
Sodium or potassium tetraborate are certainly the most commonly used buffers for derivatization of ammonia or amines with OPA and NAC or ME in alkaline conditions [START_REF] Meseguer-Lloret | Ammonium determination in water samples by using OPA-NAC reagent: a comparative study with Nessler and ammonium selective electrode methods[END_REF][START_REF] Meseguer-Lloret | Determination of ammonia and primary amine compounds and Kjeldahl nitrogen in water samples with a modified Roth's fluorimetric method[END_REF][START_REF] Jones | Simple method to enable the high resolution determination of total free amino acids in soil solutions and soil extracts[END_REF][START_REF] Darrouzet-Nardi | Fluorescent microplate analysis of amino acids and other primary amines in soils[END_REF][START_REF] Aswad | Determination of D-and L-Aspartate in amino acid mixtures by High-Performance Liquid Chromatography after derivatization with a chiral adduct of o-Phthaldialdehyde[END_REF][START_REF] Zhao | Determination of a-dialkylamino acids and their enantiomers in geological samples by high-performance liquid chromatography after derivatization with a chiral adduct of o-phthaldialdehyde[END_REF][START_REF] Svedas | The interaction of amino acids with o-Phthaldialdehyde: a kinetic study and spectrophotometric assay of the reaction product[END_REF]. However, tetraborate salts have been classified as toxic for reproduction (category 1B) by European regulations since 2008 [START_REF]Parliament and of the council of 16 December 2008 on classification, labelling and packaging of substances and mixtures[END_REF]. Substitution of these products is therefore recommended, especially for analytical procedures that are developed as potential routine methods with high frequency of use for the reagent solutions.
In this study, we replaced the borate buffer solution with a carbonate or CAPS buffer, which have pKa values compatible with the pH used in the derivatization reaction. Experiments showed that a carbonate buffer could replace the borate buffer, but the fluorescence signal obtained decreased by 40%. Nevertheless, despite this significant decrease of fluorescence intensity, the analytical features obtained with the carbonate buffer fit with expected values of ammonium and amines in environmental samples (see 3.3). CAPS, on the other hand, led to a very significant decrease in the fluorescence signal, by 90% (Sup. Fig. 1).
Conservation of reagents
3.2.1 Reducing agent
It is well known that thiol group can easily be oxidized and form disulfide bond. This oxidation reaction will quickly limit NAC reactivity and therefore inhibit derivatization of ammonium or amines by OPA. This usually leads the authors to prepare OPA/NAC solutions daily. However, it may be interesting in an analytical laboratory to keep OPA and NAC solutions for several days, several weeks or even a few months, again especially for analytical methods that are used routinely.
Three reducing agents conventionally used in analytical procedures have been studied: 2,2'-thiodiethanol (TDE), ascorbic acid and tris(2-carboxyethyl) phosphine hydrochloride (TCEP).
Fluorescence intensity of an ammonium standard adduct (100 μM) was measured over a period of 60 days (Fig. 3). Derivatization was carried out with OPA/NAC solution comprising one of the reducing agents mentioned above at a concentration of 1 mM (TCEP) or 10 mM (ascorbic acid, TDE).
The data of Fig. 3 show that ascorbic acid did not enable good preservation of OPA/NAC solutions. TDE and TCEP reduced degradation of the OPA/NAC solution, with an approximately 14% decrease in fluorescence intensity after 30 days in both cases. TCEP was preferred for subsequent experiments because TDE is unpleasant to use due to its malodorous properties. The optimization of the concentration of TCEP was then carried out (Fig. 4). Data showed that a TCEP concentration of 1 mM was sufficient to improve the conservation of the OPA/NAC reagent. Higher concentrations led to an increase in the blank signal. Similar results have been obtained for amino compound derivatization (glycine as a reference).
Conservation mode of reagents
The preservation of reagents was finally optimized by studying their conservation mode: OPA and NAC solutions can be stored in a mixture or separately. It is well known that a formulation at low pH can be used to prevent oxidation of N-acetylcysteine [START_REF] Pearlman | Formulation, characterization, and stability of protein drugs: case histories[END_REF]. We have thus studied the evolution of the slope of calibration curves over time (up to 6 months) either with the two liquid reagents prepared together (condition called 'mixed reagents' on Fig. 5A) or with the two liquid reagents stored separately (condition called 'separated liquid reagents' on Fig. 5B), and also with solid NAC deposited in microplate wells and liquid OPA reagent stored separately (condition called 'solid NAC + liquid OPA' on Fig. 5C). For this last preservation condition of the reagents, NAC was solubilized in acetone and then introduced into microplate wells. Acetone was evaporated at room temperature, then the plate was sealed with a polyethylene film. The OPA reagent was similar to the previous condition (in carbonate buffer at pH 10.5). All the solutions or microplates used in this study were stored in the dark at 4 °C.
For each experimental condition, confidence interval (CI) of slope value was determined at initial time (0 day) from the standard deviation of the b-slope value of the calibration curve s(b), depending on the residual standard deviation of the regression (n=5, P=0.05) [START_REF] Afnor | Protocole d'évaluation d'une méthode alternative d'analyse physico-chimique quantitative par rapport à une méthode de référence[END_REF].
Calculated values of (b ± UM), with UM the uncertainty of measurement, for separated liquid reagents, mixed reagents and solid NAC/liquid OPA were 58.1±8.1, 54.6±1.6 and 49.1±9.2, respectively. In Figure 5, the CI limits are indicated by dotted lines. The slope remained inside the CI for separated liquid reagents up to 4 months, whereas for mixed reagents it fell outside the CI after only 7 days. For the solid NAC/liquid OPA condition, a great variability of the slope value was observed during the experimental period, and one data point was outside the CI after only 4 weeks.
These results prompted us to use the separated liquid reagents condition.
Analytical features
3.3.1 Screening of primary amines
Screening of some biogenic amines and primary amino acids (25 µM) that could be likely present in environmental or food samples was performed using OPA/NAC optimized analytical protocol (Fig. 6). The fluorescence intensity of the different adducts was standardized over that of the glycine adduct (reference 100).
A great diversity of amines and primary amino acids can therefore be detected by the developed method, although their responses are not all similar. Secondary amines (proline) and primary amines involved in a conjugated system (creatine, urea, creatinine) do not react. The method allows for example the control of amino compounds in raw waters at the inlet of treatment plants. Brosillon et al. [6] found that alanine, valine and tyrosine were the main amino acids at the origin of disinfection by-products in drinking water during chlorination.
These 3 amino acids exhibit responses of 103, 82 and 78% compared to glycine. Although individual concentrations of amino acids in raw waters range from 0.2 to 0.9 nM depending on the season, the method is sensitive enough to quantify the total amount of amino compounds. Likewise, the most frequently measured primary amino acids in wine samples (alanine, glutamic acid, arginine) all lead to responses comparable with that of glycine, and a global measurement based on a glycine standard is therefore relevant for this study.
Cross-interference and other interferences
In this section, a compound was considered as interferent when its presence resulted in more than 5% modification of the pure ammonium (100 µM) or glycine (as reference amino acid; 100 µM) response.
The cross-interference of amines over ammonium (λ ex =415 nm / λ em =485 nm) and inversely ammonium over amines (λ ex =335 nm / λ em =455 nm) was evaluated. The results showed that ammonium could be quantified in the presence of a 1000-fold excess of glycine and that primary amines could be quantified in the presence of a 100-fold excess of ammonium.
The interference of various metal cations has also been studied. The presence of Fe 3+ or Cu 2+ at 15 µM resulted in an interference of more than 5%. The addition of 50 mM EDTA to the reaction mixture reduced this interference. Ammonium could thus be quantified in the presence of 750 µM of Fe 3+ or Cu 2+ and the amines could be quantified in the presence of 180 µM of Fe 3+ and 450 µM of Cu 2+ . The other metallic elements tested (Al 3+ , Cd 2+ , Co 2+ , Ni 2+ , Pb 2+ , Zn 2+ ) did not interfere up to 1 mM.
Performance and validation of the analytical method
The developed method was characterized and validated according to the AFNOR procedure XP T 90-210 [START_REF] Afnor | Protocole d'évaluation d'une méthode alternative d'analyse physico-chimique quantitative par rapport à une méthode de référence[END_REF]. Regarding analytical features, calibration curves for ammoniacal nitrogen and amine compounds were constructed with several standards (respectively, n=6 and n=5)
with triplicate measurements for each, and have been obtained for each compound by linear regression of the fluorescence intensity against the concentrations of standards. The calibration range lies between the limit of quantification and a maximum concentration depending on the concentration range allowing to keep linearity between instrumental signal and standard concentrations. The limit of detection (LOD) of the analytical procedure is defined as the lowest amount of analyte in a sample that can be detected and considered as different from the blank value but not necessarily quantified as an exact value, whereas the limit of quantification (LOQ) is the lowest amount of analyte in a sample which can be quantitatively determined with the analytical procedure with a defined variability. The LOD and LOQ were evaluated from the residual standard deviation of the regression (linearity study method) as LOD= 3.s(a)/b and LOQ= 10.s(a)/b, where s(a) is the standard deviation of the a-intercept and b is the slope of the calibration curve.
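A minimal sketch of this calculation is given below: it fits the calibration line by ordinary least squares, estimates the standard deviation of the intercept from the residual standard deviation of the regression, and applies LOD = 3·s(a)/b and LOQ = 10·s(a)/b. The concentrations and signals are placeholder values, not the data of this study.

```python
import numpy as np

def lod_loq(conc, signal):
    """LOD and LOQ from a linear calibration curve (linearity-study method)."""
    n = len(conc)
    b, a = np.polyfit(conc, signal, 1)               # slope b, intercept a
    resid = signal - (a + b * conc)
    s_res = np.sqrt(np.sum(resid**2) / (n - 2))      # residual std dev of the regression
    s_a = s_res * np.sqrt(np.sum(conc**2) / (n * np.sum((conc - conc.mean())**2)))
    return 3 * s_a / b, 10 * s_a / b

conc = np.array([0.0, 10.0, 25.0, 50.0, 75.0, 100.0])          # µM, placeholders
signal = np.array([620.0, 1195.0, 2050.0, 3490.0, 4910.0, 6330.0])
print(lod_loq(conc, signal))   # (LOD, LOQ) in µM
```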
The accuracy of an analytical procedure is defined as the closeness of agreement between the conventional "true" value (obtained by the reference analytical procedure) and the measured value. The accuracy of our microplate procedure with fluorescence detection was assessed by analyzing a great number of samples from various origins. For each sample, the difference (d) between the measured value and the value obtained by cationic chromatography was calculated; the absolute value of the mean of these differences (|d|) and their standard deviation (s) were then used to evaluate accuracy by checking the normal distribution of the data around zero (w = |d|/s < 3; P = 0.01) [START_REF] Afnor | Protocole d'évaluation d'une méthode alternative d'analyse physico-chimique quantitative par rapport à une méthode de référence[END_REF].
The repeatability, i.e. the precision, of the analytical procedures was assessed by calculating the standard deviation of repeatability, taking into account each replicate measurement (n=3) for each sample. A comparison of the variances of repeatability obtained with the two analytical methods on the same samples as previously was performed in order to determine whether they differed significantly (Fisher test, P=0.01) [START_REF] Afnor | Protocole d'évaluation d'une méthode alternative d'analyse physico-chimique quantitative par rapport à une méthode de référence[END_REF].
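The two checks can be scripted as below; the arrays of paired results are placeholders, and the critical value of the Fisher distribution at P = 0.01 is taken here from scipy rather than from statistical tables.

```python
import numpy as np
from scipy import stats

def accuracy_w(method_values, reference_values):
    """w = |mean difference| / std of differences; accuracy accepted if w < 3."""
    d = np.asarray(method_values) - np.asarray(reference_values)
    return abs(d.mean()) / d.std(ddof=1)

def repeatability_f_test(var_method, var_reference, n1, n2):
    """Fisher test on the two variances of repeatability (P = 0.01)."""
    f = max(var_method, var_reference) / min(var_method, var_reference)
    f_crit = stats.f.ppf(1 - 0.005, n1 - 1, n2 - 1)
    return f, (f_crit > f)        # True if no significant difference

# Placeholder paired results (µM) from the microplate assay and chromatography
assay = np.array([48.1, 72.5, 15.3, 91.0])
chrom = np.array([47.0, 74.1, 15.9, 89.5])
print(accuracy_w(assay, chrom))
```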
Ammonium
The calibration curve (y=57.076x+623.96) was linear up to 100 µM with a correlation coefficient of 0.997 (n=6). A LOD of 1.32 µM and a LOQ of 4.41 µM have been obtained.
The working range of this analytical method is then 4.4-100 µM. The relative standard deviation evaluated from a sample containing 50 µM (n=6) was 1.58%.
Validation of the analytical method requires satisfactory results for the accuracy and the repeatability. No significant differences (Fisher test, n=54, P=0.01) were noticed between the variances of repeatability of our new analytical method and cationic chromatography, and the closeness of agreement of the results proved good accuracy (w=0.16 < 3; P=0.01). Figure 7 shows the regression line obtained by comparing the results of 54 samples (surface water, wastewater, water extracts of compost). The slope of the regression line is 0.96±0.05 and the intercept is 1.13±1.88. The result was satisfactory for the intercept and slope, as the confidence intervals of their values included 0 and 1, respectively, in accordance with the previous statistical test for accuracy. The analytical range and the low limit of quantification allow the determination of ammonium in various types of samples such as natural waters, raw or treated wastewaters, and aqueous extracts of soils or wastes.
Primary amines
For primary amines, the calibration curve (y=153.94x+374.89) was linear up to 100 µM with a correlation coefficient of 0.995 (n=5). A LOD of 0.55 µM and a LOQ of 1.84 µM have been obtained. The working range of this analytical method is then 0.55-100 µM. The relative standard deviation evaluated from a sample containing 50 µM (n=6) was 1.25%.
As for ammoniacal nitrogen, the accuracy and the repeatability for amine compounds gave satisfactory results. No significant difference (Fisher test, n=13, P=0.01) was noticed between the variances of repeatability of the developed analytical method and cationic chromatography, and the closeness of agreement of the results proved good accuracy (w=0.22 < 3; P=0.01).
Figure 8 shows the regression line obtained by comparing the results of 13 samples (various wine samples). The slope of the regression line is 0.94±0.23 and the intercept is 178.7±535.2.
Confidence intervals for the intercept and slope were relatively wide, essentially due to the higher residual standard deviation of the regression, but they nevertheless included 0 and 1, respectively, again in accordance with the previous statistical test for accuracy. The analytical range and the low limit of quantification allow the determination of primary amines in various types of samples such as natural waters or food samples.
Conclusion
In this study, we presented the development and validation of a method for the determination of ammonium and primary amine compounds in various environmental and food samples.
Optimization of the reagents ratio and pH buffer enables fast and stable responses with no cross-interferences between these two structurally close analytes. Reagent storage conditions were also evaluated in order to improve reagents shelf life, and we showed that separate storage of the OPA and NAC solutions was the best option. Regarding analytical features, excellent limits of detection of 1.32 µM and 0.55 µM have been achieved for ammonium and primary amines, respectively, with RSD below 2%. Validation on various real samples by comparison with reference methods resulted in very good accuracies, proving the efficiency of this methodology for routine analysis of these nitrogen compounds.

HIGHLIGHTS
An improved spectrofluorimetric method for the simultaneous and direct determination of ammonium and primary amine compounds is presented.
Reagents ratio has been optimized in order to suppress known cross-interferences.
Optimization of reagents formulation led to high reagents stability for routine analysis.
The method was validated on real samples (wine samples, compost extracts or wastewater).
Figure captions
Fig. 1: Fluorescence intensity of OPA-NAC-ammonium adduct at λex = 335 nm / λem = 455 nm as a function of reaction time and OPA/NAC concentration. [NH4+]: 50 µM; borate buffer pH = 10.5.
Fig. 4: Evolution of fluorescence intensity (λex = 415 nm / λem = 485 nm) of an ammonium standard adduct (100 μM) over a period of 70 days as a function of TCEP concentration.
Fig. 5: Evolution of the slope of calibration curves over time depending on the conservation mode of reagents. A: OPA and NAC mixed reagent; B: OPA and NAC separated liquid reagents; C: solid NAC and liquid OPA. Confidence intervals (CI) are represented by dotted lines. [OPA]: 8 mM; [NAC]: 8 mM; [TCEP]: 1 mM; carbonate buffer pH = 10.5.
Fig. 6: Screening of biogenic amines and primary amino acids (25 µM). [OPA]: 8 mM; [NAC]: 8 mM; [TCEP]: 1 mM; carbonate buffer pH = 10.5.
Fig. 7: Regression line for comparison of analytical results between the microplate assay and ion chromatography for ammonium determination.
Fig. 8: Regression line for comparison of analytical results between the microplate assay and ion-exchange chromatography for primary amines determination.
Acknowledgment
This work was financially supported by the French Research Agency (ANR) through the programme "EMERGENCE" [ANR-10-EMMA-038] and by the French Environment and Energy Management Agency (ADEME) through the programme "DOSTE" [1506C0034]. | 31,574 | [
"17020",
"741720",
"10885",
"10887"
] | [
"220811",
"305409",
"220811",
"220811",
"220811",
"220811",
"220811"
] |
01769731 | en | [
"sdv"
] | 2024/03/05 22:32:16 | 2012 | https://insep.hal.science//hal-01769731/file/Desgorces%20et%20al.%20%282012%29%20-%20JEB.pdf | F.-D Desgorces
email: [email protected]
G Berthelot
A Charmantiert
M T Afflet
K Schaal
P Jarnet
J.-F Toussaint
Franc ¸ois-Denis Desgorces
Similar slow down in running speed progression in species under human pressure
Keywords: phenotype, physical performance, running
Running speed in animals depends on both genetic and environmental conditions. Maximal speeds were here analysed in horses, dogs and humans using data sets on the 10 best performers covering more than a century of races. This includes a variety of distances in humans (200-1500 m). Speed has been progressing fast in the three species, and this has been followed by a plateau. Based on a Gompertz model, the current best performances reach 97.4% of maximal velocity in greyhounds to 100.3 in humans. Further analysis based on a subset of individuals and using an 'animal model' shows that running speed is heritable in horses (h 2 = 0.438, P = 0.01) and almost so in dogs (h 2 = 0.183, P = 0.08), suggesting the involvement of genetic factors. Speed progression in humans is more likely due to an enlarged population of runners, associated with improved training practices. The analysis of a data subset (40 last years in 800 and 1500 m) further showed that East Africans have strikingly improved their speed, now reaching the upper part of the human distribution, whereas that of Nordic runners stagnated in the 800 m and even declined in the 1500 m. Although speed progression in dogs and horses on one side and humans on the other has not been affected by the same genetic ⁄ environmental balance of forces, it is likely that further progress will be extremely limited.
Introduction
Locomotor performance is a key trait in mobile species that is closely associated with fitness [START_REF] Irschick | Do lizards avoid habitats in which performance is submaximal? The relationship between sprinting capabilities and structural habitat use in Caribbean anoles[END_REF]. In particular, improving running speed might allow for reaching prey or avoiding predator more efficiently [START_REF] Husak | Does survival depend on how fast you can run or how fast you do run?[END_REF]. It is therefore of importance to better evaluate those factors, whether genetic or environmental, affecting maximum speed. However, experimentally evaluating how far maximum speed can improve is difficult, because assessing it over meaningful time periods is not always feasible. We propose a way to alleviate this issue by using race records in species in which fastest speeds have been monitored over a long time span. Such data are available in horses, dogs (greyhounds) and humans.
Speed progression, that is, the increase in maximal speed over time, in these three species results from both genetic and environmental factors, although to various extents according to species [START_REF] Nevill | Are there limits to running world records?[END_REF][START_REF] Niemi | Mitochondrial DNA and ACTN3 genotypes in Finnish elite endurance and sprint athletes[END_REF][START_REF] Davids | Genes, environment and sport performance. Why the nature-nurture dualism is no longer relevant[END_REF]. Moreover, some variables acting on speed progression depend on training (relating to physiology, psychology, biomechanics, technology or tactics improvement), whereas others are outside the athletes' control (genetics, anthropometric characteristics, climatic conditions; [START_REF] Smith | A framework for understanding the training process leading to elite performance[END_REF][START_REF] Brutsaert | What makes a champion? Explaining variation in human athletic performance[END_REF]. In domesticated animals, sustained selection by breeders has long been known to cause large and accelerated phenotypic changes contributing to short-term evolutionary processes [START_REF] Darwin | Animals and Plants Under Domestication[END_REF][START_REF] Falconer | Introduction to Quantitative Genetics[END_REF]. Racing horses have indeed been selected from the 16th century, based on closed populations and a very small number of founders [START_REF] Willett | An Introduction to the Thoroughbred[END_REF]. Beginning later, similar selection processes have been imposed on greyhounds for dog racing. This means that phenotypic expression (such as running speed) relies on a narrow genetic basis in comparison with the variation available in these species. However, genetic variance for performance has recently been detected in thoroughbred horses [START_REF] Gu | A genome scan for positive selection in thoroughbred horses[END_REF][START_REF] Hill | A sequence polymorphism in MSTN predicts sprinting ability and racing stamina in thoroughbred horses[END_REF].
The improvement in fast-running performances in humans does not (of course) rely on selective breeding, but on the detection of fast-running athletes generally during adolescence or later through national systems that were optimized after World War II [START_REF] Guillaume | Success in developing regions: world records evolution through a geopolitical prism[END_REF]. Importantly, the athlete population has increased in size proportionally to the development of modern sport throughout the last century. For example, 241 athletes from 14 national Olympic committees participated in the first modern Olympic games (Athens, 1896), whereas 10 942 athletes coming from 204 nations attended the 29th games in Beijing (2008). Thus, the best human runners are now selected from a much larger number of countries [START_REF] Guillaume | Success in developing regions: world records evolution through a geopolitical prism[END_REF], presumably over a larger genetic basis for performance [START_REF] Yang | ACTN3 genotype is associated with human elite athletic performance[END_REF][START_REF] Williams | Similarity of polygenic profiles limits the potential for elite human physical performance[END_REF]. This is to be opposed to the limited variation for running capacity observed in dogs and horses [START_REF] Denny | Limits to running speed in dogs, horses and humans[END_REF]. A striking tendency in humans is also related to geographical origins of runners, with, for example, the massive rise of African runners among best performers (BPs) in middle to long distance over the recent decades [START_REF] Onywera | Demographic characteristics of elite Kenyan endurance runners[END_REF].
Other factors relating to runner physiology (e.g. training, running style, nutrition and, sometimes, doping activities) or to environment sensu lato (e.g. climatic conditions, riding style, rules, betting activity, reward) have also played a significant role in improving running performance [START_REF] Norton | Morphological evolution of athletes over the 20th century: causes and consequences[END_REF][START_REF] Noakes | Tainted Glory. Doping and athletic performance[END_REF][START_REF] Pfau | Modern riding style improves horse racing times[END_REF][START_REF] Toutain | Veterinary medicines and competition animals: the question of medication versus doping control. comparative veterinary and pharmacology[END_REF]. The respective influence of genetic and nongenetic factors may therefore largely differ in domesticated animals and in humans [START_REF] Brutsaert | What makes a champion? Explaining variation in human athletic performance[END_REF]. Whatsoever, recent studies have demonstrated that the maximal running speed may soon reach its limits in dogs, horses and humans [START_REF] Berthelot | The citius end: world records progression announces the completion of a brief ultra-physiological quest[END_REF][START_REF] Berthelot | Athlete atypicity on the edge of human achievement: performances stagnate after the last peak[END_REF][START_REF] Denny | Limits to running speed in dogs, horses and humans[END_REF].
Best running performances in humans have been under scrutiny [START_REF] Nevill | Are there limits to running world records?[END_REF][START_REF] Berthelot | The citius end: world records progression announces the completion of a brief ultra-physiological quest[END_REF], but a full analysis of speed progression in greyhounds and horses is not available. The comparison of speed progression in animals (horses and dogs) with that in humans requires long-term data over comparable periods and distances.
We collected data built up over more than a century (up to 118 years) for the 10 BPs in races (450-500 m in dogs, 2200-2800 m in horses and 200-1500 m in humans)the larger range in humans allowing direct comparison with the two other species over similar running times. We also tested the heritability of BPs using pedigrees of dogs and horses, whereas in humans, performance analysis from different geographical regions allowed us to test the progression of maximum running speed according to the geographical origin of runners. Our objectives here are (i) to compare the progression of speed performances in humans, horses and dogs and (ii) to evaluate the potential influence of genetic factors in horses and dogs, based on a so-called animal model approach [START_REF] Kruuk | Estimating genetic parameters in wild populations using the 'animal model[END_REF], and of geographical origin in humans. Our approach is comparative (not experimental) in essence as it is based on observational data over the long term in three species, although it allows to dissect the respective influences of genetic and environmental factors on patterns of speed progression based on fairly large data sets.
Materials and methods
The performance progression of the 10 BPs was recorded yearly throughout the world in flat, thoroughbred horse races (years: 1898-2009; distances: 2200-2800 m; effort duration from 130 to 170 s) and greyhound races (years: 1929-2009; distances: 450-500 m; effort duration from 24 to 28 s). Men's track and field races (years: 1891-2009; distances: 200, 400, 800 and 1500 m; effort duration from 20 to 230 s) were used to express human 10 BPs. The means of 10 BPs from sprint (200 and 400 m) and short to middle distances (800 and 1500 m) were considered separately for a best correspondence to thoroughbred and greyhounds' BP times, respectively. For each species, only the best yearly performance of a single athlete or animal was kept, so that any given athlete appears only once per year in the data sets. The yearly BP records were obtained in thoroughbreds from 30 events at the highest competitive level over the world ('Group 1 and 2' in Europe, Oceania and Asia and 'Stakes' for North America). In greyhounds, we collected the yearly 10 best performances recorded in short race's rings (25 events in Great Britain, Ireland, Spain, United States, Australia and New Zealand). In humans, track and field outdoor best performances were collected. Speed data were collected with similar methodology as recently reported [START_REF] Denny | Limits to running speed in dogs, horses and humans[END_REF][START_REF] Desgorces | From Oxford to Hawaii ecophysiological barriers limit human progression in ten sport monuments[END_REF] from various sources in humans (Associations of Track and Field Statisticians;[START_REF] Rabinovich | Track and Field Statistics[END_REF], horses (Galop course, 2011;Galopp sieger, 2011) and greyhounds (Greyhounds-data, 2011). A total of 6620 performances were gathered (4720 in humans, 1120 in horses and 810 in dogs). The BPs in dogs and humans were male only, whereas the horse data set included some females as well. However, as the fraction of female BPs was limited, we did not account for a sex effect when analysing the data. Speed data were expressed in m s -1
Progression pattern of maximal running speeds
Previous studies reported that best performances' progression over the last century was best described by logistics and ⁄ or multiexponential curves [START_REF] Nevill | Are there limits to running world records?[END_REF][START_REF] Berthelot | The citius end: world records progression announces the completion of a brief ultra-physiological quest[END_REF][START_REF] Denny | Limits to running speed in dogs, horses and humans[END_REF]. [START_REF] Blest | Lower bounds for athletic performance[END_REF], analysing a set of world records in athletics, showed that Gompertz models provided lower standard errors of estimates compared with other models (i.e. antisymmetric exponential and logistic models). They also described well best performances in humans [START_REF] Berthelot | Athlete atypicity on the edge of human achievement: performances stagnate after the last peak[END_REF]. We therefore opted for such a model using the following function:
Y(t) = a·exp(b·exp(c·t)) + d

where a gives the upper asymptote of y, b sets the value of the t displacement, c is the curve steepness and d accounts for the fact that the minimum y value is not 0; a and d take positive values here, whereas b and c are negative. The model parameters were estimated using a nonlinear least-squares method on the uniformized [(0, 1) range] vector of recorded values. The physiological limit for each species was given by computing the year corresponding to 99.95% (1/2000th) of the estimated asymptotic value. Curve fitting was performed using MATLAB (version 7.11.0.584, MathWorks, Natick, MA, USA). The yearly progression of human speed was compared with that of greyhounds (200-400 m in humans) and thoroughbreds (800-1500 m in humans) and expressed as the percentage of a human-animal ratio. The annual speed variation was also observed in each species through coefficient of variation (CV) changes.
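The original fits were performed in MATLAB; an equivalent sketch in Python is shown below, fitting the Gompertz function to a uniformized speed series with a nonlinear least-squares routine. The yearly speeds here are synthetic placeholder values, and the initial parameter guesses are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c, d):
    """Y(t) = a * exp(b * exp(c * t)) + d, with b and c negative."""
    return a * np.exp(b * np.exp(c * t)) + d

# Placeholder series: years rescaled to [0, 1] and best speeds rescaled to [0, 1]
t = np.linspace(0.0, 1.0, 60)
y = 0.9 * np.exp(-2.0 * np.exp(-5.0 * t)) + 0.1 + 0.01 * np.random.randn(t.size)

p0 = (1.0, -2.0, -4.0, 0.0)                # initial guesses for (a, b, c, d)
params, _ = curve_fit(gompertz, t, y, p0=p0, maxfev=10000)
a, b, c, d = params
asymptote = a + d                          # upper limit reached as t grows large
print(params, asymptote)
```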
Geographical origin of human best runners
The full human data set (200-1500 m) was split according to the geographical origin (Africa, America, Asia, Europe and Oceania) of runners. Africa, Asia and Oceania are poorly represented in the 10 BPs over the period considered (1891-2009; Fig. S1). The analysis was further refined for short to middle distance (800-1500 m; year range: 1970-2009) by comparing the five BPs from North European countries (Nordic: Denmark, Norway, Sweden, Finland) and from East Africa (EAf; Djibouti, Eritrea, Ethiopia, Somalia, Tanzania, Kenya, Uganda, Burundi), for which large data sets are available, with runners from the rest of the world (ROW). An analysis of variance was used to determine the regional (Nordic, EAf, ROW) and time (data arranged per decade; 1970-1979; 1980-1989; 1990-1999; 2000-2009) influence on speed progression. When speed differences were detected, Tukey's post hoc tests were used to identify significant differences according to region and decade.
Speed heritability in dogs and horses
For the recent period, genealogical data were available in both dogs and horses, allowing for a quantitative genetics approach. The speeds of the current 10 BPs (2007-2009) were collected and compared with those of their ancestors (individual best performance; Galop course 2011, Galopp sieger 2011, Greyhounds-data 2011). These ancestor data sets bear on 67 dogs and 64 horses including six females in the current 10 BPs (2007-2009). All dogs and horses are part of a pedigree (seven generations in dogs and six in horses) with only male links known.
To decompose the phenotypic variance expressed in the speed of greyhound and horse lineages and to estimate the heritability (h²) of this character, we used two methods: (i) a classic father-midsons regression and (ii) a restricted estimate maximum likelihood procedure to run a mixed model with the software ASReml [START_REF] Gilmour | ASReml User Guide Release 2.0[END_REF]. An 'animal model' [START_REF] Kruuk | Estimating genetic parameters in wild populations using the 'animal model[END_REF] was used in which the year was considered as a fixed effect and the total phenotypic variance (V_P) was broken into two components of variance as follows: V_P = V_A + V_R, where V_A is the additive genetic variance and V_R is the residual variance, consisting of environmental effects, nonadditive genetic effects and error variance. A single speed record was available in 10 years in dog races and 20 years in horses; hence, year could not be entered as a random effect in the model. The full model outlined above was compared with a simpler model where V_A was removed using a chi-squared test, to determine whether speed displayed significant additive genetic variance.
The software package STATISTICA (version 6.1; Statsoft, Maisons-Alfort, France) was used for statistical analyses. The level of significance for all analyses was set at P < 0.05. Data are expressed as mean ± SD.
Results
The Gompertz model fits speed progression with good accuracy in animals (horses: R 2 = 0.56, MSE = 0.169; greyhounds: R 2 = 0.81, MSE = 0.55; Fig. 1a,b) and in humans (800-1500 m 10 BPs: R 2 = 0.97, MSE = 0.006, Fig. 1c; 200-400 m 10 BPs: R 2 = 0.94, MSE = 0.008, Fig. 1d). The asymptotic limit in horses and dogs was 16.59 and 17.76 m s -1 , respectively, that is, 99.0% and 97.4% of the respective Gompertz asymptotes. In humans, the speeds of sprint and middle distances have already reached their estimated maximum based on the Gompertz curves (sprint: 9.54 m s -1 in 1996, 100.3% of asymptotic speed; middle distance: 7.45 m s -1 in 2001, 100.1% of asymptotic speed). Speed progression was evaluated by comparing the first 10 BPs recorded with the 10 BPs ever observed, and it appears similar in the three species (11.1% of initial value in horses, 9.4% in greyhounds, 11.1% in 200-400 m and 14.2% in 800-1500 m in humans; Fig. 1). Even less difference in speed progression was observed in the two animal species when using the asymptotic speed limit of the Gompertz curves rather than the 10 BPs ever observed (12.4% in horses, 11.9% in greyhounds, and 11.1% and 14.2% in humans).
Although the overall progression was similar, the curves differed in shape among species (b and c coefficients). The b/c ratios revealed a monoexponential shape (b/c > 3) in both horses and greyhounds, whereas human performances follow an S-shaped development (b/c < 0.1; Fig. 1). The CVs of dog and human speed were of the order of 0.02 (or less) over the period considered (Fig. 2). In horses, the CV was also 0.02 for the 1940-2009 period, but the 1898-1939 period had larger variance, with CV up to 0.05. Over a comparable distance, humans have increased their mean speed relative to dogs, the human-to-dog speed ratio now plateauing at 53.2% (Fig. 3). The comparison with horses shows a current human-to-horse ratio of 45.2%. Both ratios have been stable for 18 years (dogs) and 42 years (horses).
Heritability estimates based on father-midsons comparisons were significant neither in greyhounds (30 pairs; h 2 = 0.042, SE = 0.34, P = 0.90) nor in horses (34 pairs; h 2 = 0.272, SE = 0.23, P = 0.25). The 'animal model' confirmed this trend in greyhounds (N = 67 data points): the additive variance was low (0.028, SE = 0.039), and model comparison (with and without V A ) showed that the heritability (h 2 = 0.183, SE = 0.240) did not differ from 0 (chi-square = 3.08, P = 0.080). In horses, though (N = 64 data points), more additive variance was detected (0.085, SE = 0.059), and the heritability differed from 0 (h 2 = 0.438, SE = 0.267; chi-square = 6.14, P = 0.013). We note here that our approach, based on BPs, underestimates the available phenotypic variance in both greyhounds (V P = 0.156, SE = 0.003) and horses (V P = 0.193, SE = 0.037).
Human speeds achieved in the 800- and 1500-m races by the five BPs over the 1970-2009 period differ according to performance years and geographical origin of runners (800-m races, F 1,39 = 5.89, P < 0.001 and 1500-m races F 1,39 = 6.66; Fig. 4). EAf and ROW runners clearly increased their speed in both distances, compared with Nordic runners. The Nordic-versus-EAf difference was not significant over the 1970-1989 decades, and both groups performed less well than the ROW group (all P < 0.001). Pairwise comparisons also showed a Nordic/EAf/ROW increasing hierarchy for 800-m races. In the most recent decade (2000-2009), speeds on the 800- and 1500-m races are lower in Nordic than in EAf runners (both P < 0.001), but there was no difference between EAf and ROW runners.
Discussion
Our study shows that the maximum running speed over short to middle distances increased over the last century in dogs, horses and humans, but is currently reaching its asymptotic value. This is a striking result because of the difference in genetic and environmental conditions leading to speed improvement in the three species. The fitted Gompertz curves do not exhibit the same shape (see values of b/c for the three curves in the Results section): the initial plateau in humans is not detected in horses and greyhounds, but this might simply be due to the larger variance in dogs and horses in the first decades of records, or to the fact that data were collected earlier in the speed progression in humans than in dogs and horses. Human speed has indeed already reached its asymptote, whereas horse and greyhound speeds are still 1.0% and 2.6% below their respective asymptotic values. These results are in agreement with recent studies hypothesizing that current speeds are reaching the species' locomotory limits and explain their conclusions [START_REF] Berthelot | The citius end: world records progression announces the completion of a brief ultra-physiological quest[END_REF][START_REF] Denny | Limits to running speed in dogs, horses and humans[END_REF]. However, the similarities do not appear only in the progression patterns over the last century, but also in the progression ranges, which were very close, especially when using the asymptotic speed limits to estimate the maximal speed progression per species. Over the last decades, we also noted that the speed gap between species pairs remained remarkably stable (Fig. 3). The speed CVs among the ten BPs are low as well (Fig. 2) and very similar across species. The large CV recorded in the earlier decades for horses and dogs might be accounted for by the fact that rank, rather than speed, was often recorded in races at the end of the 19th century and beginning of the 20th century. It is also likely that the homogenization of training methods and the selection of similar genetic backgrounds led to reduced CVs.
The similarity in speed patterns (progression, asymptotes, CV) across species is somewhat unexpected because the balance between genetic and environmental forces acting on this pattern is different. Dogs for about 20 generations and horses for 25 generations have been subjected to intense artificial selection over the period considered (artificial selection was indeed initiated much earlier in both species; [START_REF] Willett | An Introduction to the Thoroughbred[END_REF]). Quite clearly, artificial selection has reached a phenotypic limit with regard to the maximal speed and minimal variance across the 10 BPs, at least under the environmental conditions imposed by humans on dog and horse champions. Interestingly, the quantitative genetic analysis indicated some genetic variance and heritability in horses. These genetic effects are larger than those reported recently for lifetime earnings in a larger thoroughbred population [START_REF] Wilson | Breeding racehorses: what price good genes?[END_REF]. This marked genetic influence on speed appears surprising when considering the reduced speed progression over the last 40 years. However, the animal model was run on a much smaller data set than that used to derive the general trend, and the ancestors of the current 10 BPs did not belong to the 10 BPs in their generation. It might also be that training and rearing environments for the best racing horses have become homogenized over time, which would in turn inflate h 2 estimates. The heritability in dogs was on the verge of statistical significance (P = 0.08). Previous studies reported that selective breeding may have led to high homogeneity of predispositions for running [START_REF] Gu | A genome scan for positive selection in thoroughbred horses[END_REF][START_REF] Hill | A sequence polymorphism in MSTN predicts sprinting ability and racing stamina in thoroughbred horses[END_REF], suggesting reduced chance for the occurrence of genetically gifted individuals. Our results highlight that genetic predispositions for running fast in these particular populations are still good indicators of individual speed.
The optimization of running speed in humans probably essentially relies on training and national detection systems, but these might also allow identifying favourable polygenic profiles for elite runners [START_REF] Yang | ACTN3 genotype is associated with human elite athletic performance[END_REF][START_REF] Williams | Similarity of polygenic profiles limits the potential for elite human physical performance[END_REF][START_REF] Bejan | The evolution of speed in athletics: why the fastest runners are black and swimmers white[END_REF]. It remains therefore possible that part of speed progression in humans is due to genetic (evolutionary) processes because there might be some selection to run fast under some conditions. However, this effect might only be minor given the very short time period considered. Our data also suggest that estimated speed limits of the 10 BPs have already been achieved in 200-400 and 800-1500 m, although exceptional athletes may sometimes set atypical performances [START_REF] Berthelot | Athlete atypicity on the edge of human achievement: performances stagnate after the last peak[END_REF]. Over the century, the number of individuals engaged in athletic competitions evolved with societies' development and their emphasis on sport [START_REF] Guillaume | Success in developing regions: world records evolution through a geopolitical prism[END_REF]. America and Europe largely contributed to the 10 BPs with differences according to distance, whereas Asia and Oceania did not. The contribution of Africa to the 10 BPs in middle distance over the most recent decades is noticeable. Initially less engaged, East Africans achieved speed records comparable to those of Nordic athletes on the 1500 m in the 1970s. The striking speed improvement over the last 40 years now placed them in the first ranks, comparable to the other world regions. It is no less striking that running speeds of natives from Nordic countries stagnated over the same period in 800m races and even declined in 1500-m race. Trainers and scientists from Northern Europe were indeed largely involved in the development of modern training methods and scientific results [START_REF] Astrand | Influence of Scandinavian scientists in exercise physiology[END_REF], questioning their real impact on current best performances. On the other hand, Nordic athletes are still successful in other sports (i.e. jumps and throws in athletics, speed skating, cross-country skiing, rowing), suggesting that their genetic potential may be more favoured under the environmental constraints of other sports.
Over the last decades, the maximal human speed has progressed with the involvement of runners from new geographical origins, leading to optimized phenotypes [START_REF] Guillaume | Success in developing regions: world records evolution through a geopolitical prism[END_REF]. The enlargement of the runner population in humans certainly enlarged the genetic pools upon which elite athletes were detected [START_REF] Williams | Similarity of polygenic profiles limits the potential for elite human physical performance[END_REF]. Furthermore, some studies suggested that successful runners from East Africa originated from distinct ethnic and environmental backgrounds compared with the general population of these countries [START_REF] Onywera | Demographic characteristics of elite Kenyan endurance runners[END_REF][START_REF] Scott | Mitochondrial haplogroups associated with elite Kenyan athlete status[END_REF]. Quite clearly, improving speed through artificial selection in dogs and horses or detecting better athletes through an increasing population can only be actually efficient under appropriate environmental conditions, including training and competing conditions [START_REF] Davids | Genes, environment and sport performance. Why the nature-nurture dualism is no longer relevant[END_REF][START_REF] Desgorces | From Oxford to Hawaii ecophysiological barriers limit human progression in ten sport monuments[END_REF]. The decrease in the 10 BPs during the two World Wars [START_REF] Guillaume | Success in developing regions: world records evolution through a geopolitical prism[END_REF] is an indication of the role of environmental conditions. Speed increase requires prosperous societies with a flourishing economy as described for improved health and body size over the 19th and 20th centuries [START_REF] Fogel | The Escape from Premature Death and Hunger[END_REF].
Our analysis is based on patterns exhibited in longterm data sets collected in arguably artificial environments. It would certainly be interesting to use more experimental approaches to test whether other species are not far away from their limit and whether this limit can be experimentally manipulated and increased. Speed increase has indeed been described in nature when environmental conditions allow for the improvement in maximal capacities [START_REF] Miles | The race goes to the swift: fitness consequences of variation in sprint performance in juvenile lizards[END_REF][START_REF] Husak | Does survival depend on how fast you can run or how fast you do run?[END_REF] and recently suggested as describing phenotypic expansion in humans [START_REF] Berthelot | A double exponential dependence toward developmental growth and time degradation explains performance evolution in individual athletes and human species[END_REF]. For example, dietary and thermal conditions experienced both during embryogenesis and early in life may favour the expression of genetic predisposition for running [START_REF] Elphick | Long term effects of incubation temperatures on the morphology and locomotor performance of hatchling lizards (Bassiana duperreyi, Scincidae)[END_REF][START_REF] Galliard | Physical performance and darwinian fitness in lizards[END_REF]. Trade-offs are of course expected between running performances and appropriate behaviour or the ability to migrate towards more favourable environmental conditions [START_REF] Irschick | Do lizards avoid habitats in which performance is submaximal? The relationship between sprinting capabilities and structural habitat use in Caribbean anoles[END_REF][START_REF] Husak | Field use of maximal sprint speed by collared lizards (Crotaphytus collaris): compensation and sexual selection[END_REF]. In natural conditions, such adaptations in animals have been associated with survivorship and reproductive success [START_REF] Miles | The race goes to the swift: fitness consequences of variation in sprint performance in juvenile lizards[END_REF]. It is ironical that behaviour training and migration to more favourable environments also occur in competitive sports.
Conclusion
The parallel progression of maximal running speeds in these three species over the last decades suggests that performances will no longer progress, despite genetic selection in animals and the detection of the best performers in humans. Regardless of differences between species (biological, environmental and competition history), human pressure, which has accelerated the biological adaptations that allow faster running, is a process with limited potential and reduced benefits in the near future.
Fig. 1 Models fitting for speed progression (m s -1 ) between 1891 and 2009, in (a) 10 best thoroughbred performers (R 2 = 0.56); (b) 10 best greyhound performers on short ring races (R 2 = 0.81); (c) mean speed of the 10 best human performers on 800- to 1500-m track races (R 2 = 0.97); (d) mean speed of the 10 best human performers on 200- to 400-m track races (R 2 = 0.94). The calculated a, b, c, d parameters of the Gompertz function were for thoroughbred (respectively, 77.8, 3.77, 0.46, 16.59), greyhound (respectively, 14.99, 1.99, 0.60, 17.7), 800- to 1500-m human races (respectively, 1.45, 0.26, 3.06, 7.44) and for 200- to 400-m human races (respectively, 1.29, 0.20, 3.31, 9.52).
Fig. 2 Coefficients of variation of the 10 best performers in thoroughbreds (1899-2009; grey squares), greyhounds (1929-2009; black squares), human 200- to 400-m races (1891-2009; grey open circles) and human 800- to 1500-m races (1891-2009; black open circles).
Fig. 4 Five best human performances in 800-m (a) and 1500-m (b) races in runners from Nordic countries (open circles), East African countries (grey circles) and the rest of the world (black circles).
Fig. 3 Ratios of human vs. greyhound (black squares, comparison with human 200-400 m mean speed) and human vs. thoroughbred (grey circles, comparison with human 800-1500 m mean speed) speeds. Dashed lines: mean speed gap between humans and animals when stabilized.
Acknowledgments
We thank the INSEP teams for full support.
Supporting information
Additional Supporting Information may be found in the online version of this article: Figure S1 Humans 10 best performances in sprint, up figures: 200 m (grey circles), 400 m (black circles); and middle distance, down figures: 800 m (grey circles) and 1500 m (black circles) according to the geographic origin of runners.
As a service to our authors and readers, this journal provides supporting information supplied by the authors. Such materials are peer-reviewed and may be reorganized for online delivery, but are not copy-edited or typeset. Technical support issues arising from supporting information (other than missing files) should be addressed to the authors. | 34,089 | [
"942002",
"182881",
"741841",
"941997",
"179320",
"1041396"
] | [
"441096",
"415984",
"301664",
"441096",
"415984",
"301664",
"171392",
"441096",
"415984",
"303623",
"441096",
"415984",
"171392",
"441096",
"415984",
"301664",
"306007"
] |
01681108 | en | [
"chim",
"spi"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01681108/file/Gustavo%20CET%202017%20soumis%20after%20review.pdf | Gustavo H Lopes
Nelson Ibaseta
Pierrette Guichardon
email: [email protected]
Pierre Haldenwang
Effects of solute permeability on permeation and solute rejection in membrane filtration
Keywords: permeate flux, concentration polarization, solute permeability, reverse osmosis, solute rejection
Effects of solute permeability on permeation and solute rejection in membrane filtration
Introduction
Reverse osmosis (RO) and nanofiltration (NF) are mainly applied for seawater desalination, wastewater treatment and water reclamation, for ultrapure water production and in process industry niches [START_REF] Fane | Treatise on Water Science[END_REF][START_REF] Schaefer | Nanofiltration Principles and Applications[END_REF]. These processes use membrane separation to achieve high purification and relatively high solvent permeation rates [START_REF] Fane | Treatise on Water Science[END_REF]. The development and optimization of these separation techniques typically rely on abundant, laborious, often costly pilot experiments while the prediction of their performances remains underexplored. In this context, the use of predictive models and the application of the subsequent fundamental knowledge would be valuable.
The performance of pressure-driven membrane filtration processes is determined by an intimate interaction between the transport properties of the membranes and the hydrodynamics and mass transfer taking place inside a membrane-bound channel. It is well known that this interaction gives rise to a reversible accumulation of solute in a mass boundary layer on the filtrating surface of the membrane, the so-called concentration polarization layer [START_REF] Strathmann | Ullmann's Encyclopedia of Industrial Chemistry[END_REF][START_REF] Sablani | [END_REF][5][6]. Combined with osmosis, concentration polarization hinders solvent permeation and increases solute passage into the filtrate (permeate) in addition to playing a part in scaling and fouling [START_REF] Strathmann | Ullmann's Encyclopedia of Industrial Chemistry[END_REF][START_REF] Sablani | [END_REF].
New membranes are under development with the aim of maximizing the permeation rate by enhancing the membrane permeability to the solvent while ensuring highest selectivity, so as to keep the solute concentration in the permeate as low as possible. Unfortunately, the effort is partly wasted, as increased permeation inherently causes enhanced concentration polarization. Reducing this concentration polarization can nonetheless be achieved by disturbing the concentration layers with eddy-promoting spacers put into the flow channel [7]. Notwithstanding, the question may be raised as to whether tailoring the selectivity of the membrane can induce intensified permeation, irrespective of the polarization conditions. The role played by the membrane solute permeability here has not been discussed enough. This is the point we herein investigate.
This work follows the publication by Lopes et al. [8], which demonstrated the adequacy of the numerical model developed by the authors for predicting permeate fluxes and solute rejection obtained experimentally in RO and tight NF in flat and spiral-wound geometries. The current article begins with a summary of the model and a description of the simulations. It then presents the results, discussions and conclusions about the impact of solute permeability at different operating conditions upon the fundamental behavior of an ideal separation process, as well as on its two main outputs: permeate flux and solute rejection. The study considers numerous orders of magnitude of solute permeability from 0 m s -1 to 10 -6 m s -1 and a solvent permeability in the range of RO (10 -12 m Pa -1 s -1 ).
Model description
As the model we used is described extensively in [6,8,9], only its main elements will be given here. The model considers a liquid solution of one solvent and one solute in laminar, steady, incompressible crossflow filtration (tangential filtration) along a two-dimensional planar symmetric channel of axial and transverse coordinates Z and X, respectively, the channel walls being permeable to both solvent and solute. Axial velocity (W), transverse velocity (U), permeate flux (U_w), solute concentration (or concentration polarization) (C), gauge pressure (P) and permeate concentration (C_p) are considered as functions of Z and X. The input parameters are all easily accessible experimentally: two properties of the membrane (permeability to the solvent, I^-1, and permeability to the solute, B), two properties of the membrane filtration module (channel length, L, and channel half-height, d), two operating conditions (feed pressure, P_in, and axial feed velocity, W_in) and four properties of the solution (solute feed concentration, C_in, solution mass density, ρ, solution viscosity, μ, and solute diffusivity in the solvent, D). A schematic is sketched in Fig. 1. Note that subscripts "w", "p" and "in" refer to the membrane surface on the retentate side (wall), the membrane surface on the permeate side, and the inlet (or feed) conditions, respectively. Fouling is not considered while osmotic counter-effects are taken into account. ρ, μ and D are equal to their values in the feed. The solution-diffusion model [START_REF] Fane | Treatise on Water Science[END_REF][10] describes locally the transmembrane solvent and solute fluxes, as in Eq. (1) and Eq. (2), respectively:
U_w(Z) = I^-1 [P(Z) - ΔΠ(Z)] (1)

C_p(Z) U_w(Z) = B [C_w(Z) - C_p(Z)] (2)

where

U_w(Z) = U(d, Z) (3)

C_w(Z) = C(d, Z) (4)

and ΔΠ is the osmotic pressure difference between the retentate and permeate sides of the membrane,

ΔΠ(Z) = Π[C_w(Z)] - Π[C_p(Z)] (5)

calculated based on van't Hoff's osmotic pressure law,

Π = i R T C (6)

where i is the van't Hoff factor, R is the ideal gas constant and T is the solution temperature.
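To make the coupling between Eqs. (1), (2) and (6) concrete, the minimal Python sketch below solves them at a single membrane location for illustrative parameter values (it is not the finite-difference solver described next, and the wall concentration is simply prescribed); a plain fixed-point iteration is assumed.

```python
# Local solution-diffusion balance at one point of the membrane (illustrative only)
A = 5e-12            # solvent permeability I^-1 [m Pa^-1 s^-1]
B = 1e-7             # solute permeability [m s^-1]
i_vh, R, T = 2.0, 8.314, 298.15   # assumed van't Hoff factor for NaCl, gas constant, temperature

def local_fluxes(P, C_w, n_iter=200):
    """Return permeate flux U_w [m/s] and permeate concentration C_p [mol/m^3]."""
    C_p = 0.0
    for _ in range(n_iter):
        U_w = A * (P - i_vh * R * T * (C_w - C_p))   # Eq. (1) with van't Hoff law (6)
        C_p = B * C_w / (U_w + B)                    # Eq. (2) rearranged for C_p
    return U_w, C_p

U_w, C_p = local_fluxes(P=30e5, C_w=300.0)           # 30x10^5 Pa, prescribed wall concentration
print(f"U_w = {U_w:.2e} m/s, C_p = {C_p:.1f} mol/m^3, local rejection = {1 - C_p/300:.3f}")
```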
The numerical solutions are calculated at each grid point in half of the channel via a second-order finite difference scheme which solves the continuity equation, the two-dimensional Navier-Stokes equations and the solute conservation equation simultaneously [6]. The system is handled in dimensionless form under Prandtl's hypotheses [6,8,9], whereby diffusion of momentum and mass in the longitudinal (axial) flow direction is negligible compared with that in the transverse direction. These equations are solved in the filtration channel only (flow domain), whereas membrane transport takes the form of boundary conditions (Eqs. (1) and (2)). As will be seen, the flow in the whole domain will depend on the membrane transport properties.
The simulations consider the desalination of aqueous sodium chloride solutions at 25°C by membranes of solvent permeability I^-1 = 5×10 -12 m Pa -1 s -1 (typical for RO) and solute permeability, B, tuned from 0 m s -1 (membrane impermeable to the solute, i.e., perfectly permselective) through 10 -8 m s -1 and 10 -7 m s -1 (typical for RO) up to 10 -6 m s -1 . The flow channel reproduces the length of an industrial membrane module with six identical membranes arranged in series, thus L = 6 m and d = 5×10 -4 m. As presented in Tab. 1, concentrations ranging from that of low-concentration brackish water up to that of seawater are simulated. P_in varies from 1.5×10 5 Pa to 60×10 5 Pa. W_in is 0.1 m s -1 . Empirical laws are used to estimate ρ, μ and D [11]. The computational domain is discretized into 1000 transverse and 8000 axial grid points.
(Tab. 1)
Results
Permeate fluxes
Let us first consider the behavior of the averaged permeate flux measured by experimentalists, U_av, equivalent to the local permeate flux averaged along the membrane length [8]. The classic influence of the inlet pressure on it is depicted in Fig. 2 for three feed concentrations using the nondimensional "three-Péclet-numbers" representation introduced by Haldenwang et al. [12].
(Fig. 2)
Accordingly, Pe_av is the averaged (subscript av) permeation Péclet number (dimensionless permeate flux), Pe_in is the inlet pure-solvent Péclet number (nondimensional inlet pressure), and Pe_in^osm is the inlet osmotic (superscript osm) Péclet number (dimensionless inlet solute concentration):
Pe_av = U_av d / D (7)

Pe_in = P_in I^-1 d / D (8)

Pe_in^osm = Π_in I^-1 d / D (9)
This representation is convenient because the three Péclet numbers are of the same order of magnitude. Hence, for low solute concentrations, i.e., low values of Pe_in^osm, Pe_av is close to but always lower than Pe_in. As concentration polarization increases, so does the gap between Pe_av and Pe_in. Finally, if the membrane is fully impermeable to the solute, Pe_av will be positive only if Pe_in is higher than Pe_in^osm.
Tab. 2 exemplifies the values of P_in and Pe_in. As expected, Fig. 2 shows that the averaged permeate flux increases with the inlet (or applied) pressure. For the lowest solute concentration (Pe_in^osm = 0.13), concentration polarization remains weak as long as the inlet pressure is low; for Pe_in ≤ 1 in particular, Pe_av ≈ Pe_in. However, as Pe_in rises, concentration polarization becomes significant; the increase in effective operating pressure tends to vanish (term in brackets in Eq. (1)) and so does the permeation. Two observations can be made regarding higher feed concentrations. First, whenever the membrane is fully selective, and more generally when little solute can pass into the permeate, the value of the abscissa-intercept increases with Pe_in^osm because Pe_in must exceed Pe_in^osm, as concluded from Eq. (1). Second, it is reasonable to expect Pe_av to grow linearly with Pe_in - Pe_in^osm provided that concentration polarization is negligible, and to infer that the coupling of polarization and osmotic pressure is significant whenever the growth of Pe_av with Pe_in - Pe_in^osm tends to deviate from linearity even for low pressures. However, the slopes of the linear segments are very different depending on Pe_in^osm:
higher feed concentrations intensify polarization and osmosis, ergo reducing permeation.
(Tab. 2)
It is also worth analyzing the dependence of the permeate flux on the solute permeability value. The units of B allow us to envisage it as the diffusion velocity of the solute across the membrane and thus to express the permeability values in nondimensional form by dividing them by the solute diffusion velocity in the feed channel, D/d. This gives rise to the dimensionless membrane solute permeability, δ, whose correspondence with the values of B in this study is presented in Tab. 3:
δ = B d / D (10)

(Tab. 3)
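As a quick numerical check (a sketch only, using the orders of magnitude quoted above and an assumed NaCl diffusivity, since the study itself takes D from empirical laws [11]), the three dimensionless groups can be evaluated directly:

```python
# Illustrative evaluation of Pe_in, Pe_in^osm and delta (Eqs. 8-10)
d = 5e-4                         # channel half-height [m]
D = 1.5e-9                       # assumed NaCl diffusivity in water [m^2 s^-1]
A = 5e-12                        # solvent permeability I^-1 [m Pa^-1 s^-1]
B = 1e-7                         # solute permeability [m s^-1]
P_in = 30e5                      # feed pressure [Pa]
Pi_in = 2.0 * 8.314 * 298.15 * 171.1   # van't Hoff osmotic pressure of a 1.0 % NaCl feed [Pa]

Pe_in = P_in * A * d / D         # ~5, of the order of the values in Tab. 2
Pe_in_osm = Pi_in * A * d / D    # ~1.5, of the order of the values in Tab. 1
delta = B * d / D                # ~3e-2, of the order of the values in Tab. 3
print(Pe_in, Pe_in_osm, delta)
```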
Fig. 2 evaluates three orders of magnitude of δ and reveals that the more solute-permeable (or the less permselective) the membrane, the higher the permeate fluxes for a given solvent permeability. This tendency is even greater when the inlet concentration rises and at lower feed pressures. In the cases where this tendency is clearest, Pe_av for a membrane of δ ~ 10 -1 is almost 4 times higher than for a membrane of δ ~ 0 when Pe_in^osm is 1.46 and Pe_in is 1.7. Also, Pe_av for δ ~ 10 -1 is more than 3 times higher than for δ ~ 0 when Pe_in^osm is 5.12 and Pe_in is 6.
In fact, as will be demonstrated in the following subsection, an intricate mechanism accounts for the reduction of the osmotic pressure difference between both sides of the membrane. This happens when the membrane becomes more permeable to the solute, leading to higher permeation rates. Under our simulation conditions, the effects of the solute permeability upon this mechanism with respect to Pe_av cannot be neglected unless the order of magnitude of δ is below the threshold of 10 -2 (in which case there is overlap with δ = 0 [6]). In accordance with this rationale, it is worth noticing that the results simulated here for mild polarization conditions (low Pe_in) for the two membranes of lowest solute permeability are comparable to those calculated with equation (41) in Haldenwang et al. [12].
Axial and transverse profiles
Dimensionless concentrations, fluid velocities, fluxes and transverse coordinate are henceforth represented by the same symbols as their homonyms but in the small letter version. Fig. 3 and Fig. 4 represent the nondimensional axial profiles of the concentration on the membrane surface, c_w, the permeate concentration, c_p, the osmotic pressure difference, Os(c_w - c_p), and the local permeate flux, u_w:
c_w = C_w / C_in (11)

c_p = C_p / C_in (12)

Os = Π_in / P_in (13)

u_w = U_w / (P_in I^-1) (14)

(Fig. 3a, Fig. 3b, Fig. 3c) (Fig. 4a, Fig. 4b)

In Eq. (13), Os is the osmotic number [8]. Profiles in Fig. 3 and 4 are represented as functions of the dimensionless axial coordinate, ζ:

ζ = Z / L (15)
We first focus on the case of membranes totally impermeable to the solute (δ = 0): c_p is null and therefore an osmotic pressure develops only in the retentate. As a consequence of the solvent passage into the permeate, the concentration of the retentate grows as it flows forward in the membrane channel, and so does the membrane surface concentration (c_w). Figs. 3a-3c show the increase in c_w along ζ and Figs. 4a-4b the consequent increase in osmotic pressure difference and the reduction in permeate flux. The magnitude of these effects is however dependent on the feed concentration. In fact, depending on the conditions, c_w may reach from a little more than 1 up to 9 times the feed concentration along the flow channel. Figs. 3a-3c and Figs. 4a-4b reveal that the dimensionless permeate flux becomes weaker at higher Pe_in^osm, just as the nondimensional magnitude of concentration polarization.
As a result, the rise in osmotic pressure and the consequent reduction in permeate flux are attenuated. Note that, in nondimensional form (as in Fig. 3b and according to Eq. 11), concentration polarization (and thus the value of c_w) can be proportionally lower for higher feed concentrations than for lower feed concentrations because of a lower nondimensional permeation rate at higher polarization.
The rationale changes considerably if the membrane is permeable to the solute: the transmembrane solute flux has to be taken into account in the calculations. The solute concentration in the permeate is no longer zero. An osmotic pressure develops in the permeate, and not only in the retentate. This alone would tend to increase the permeation rate. Somewhat surprisingly however, Figs. 3a-3c reveal that c_w is higher for a membrane of high solute permeability (δ ~ 10 -1 ) than when δ = 0. This itself would cause u_w to decrease. The final result of these opposing tendencies upon u_w is observed in Figs. 4a-4b. The rise in c_p counterbalances that in c_w, i.e., c_w - c_p diminishes and so does the local osmotic pressure difference, resulting in higher permeate fluxes. This contributes, in turn, to the rise in c_w, as previously observed.
A crossflow is not one-dimensional by definition. While the previous paragraphs insisted on the role of U in the establishment of the polarization layer by advection of solute onto the membrane surface, the influence of the axial shearing generated upon this layer by W should not be neglected [6,8]. Consider:
c = C / C_in (16)

w = W / W_in (17)

x = X / d (18)
Fig. 5a reveals that, for higher δ, the axial velocity, w, is lower at all transverse positions (x) as a consequence of the strong increase in solvent permeation seen in Fig. 5b as an increased transverse velocity, u (crossflow velocity, whose maximum value is u_w, as illustrated in Fig. 1). Hence, for higher δ, less solute is swept downstream by the flow, intensifying the concentration polarization layer, as shown in Fig. 5c. It is interesting to note that there is a certain compensation between w and u, which is reflected by the inverted positions that straight and dashed lines have in Fig. 5a and Fig. 5b.
(Fig. 5a, Fig. 5b, Fig. 5c)

Fig. 5a also allows us to conclude that the retentate flow is increasingly decelerated along the membrane channel. The comparison of Fig. 5b and Fig. 5c furthermore shows that the increase in polarization in the axial direction is accompanied by a sharp reduction in the transverse velocity as the fluid flows along the filtration channel. Moreover, for the same reasons as in Fig. 2, the local values of the crossflow velocity are somewhat larger for higher δ. More details on the transverse profiles of the crossflow velocity are presented in [6].
The model is also capable of calculating the magnitude of concentration polarization at different cross-sections of the flow domain. Fig. 5c depicts the axial evolution of the transverse concentration gradient along the membrane filtration channel. It is evident from Fig. 5c that actually not only c_w but also the whole bulk solution becomes more concentrated along the flow channel. This corroborates the usual experimental finding according to which the retentate stream is more concentrated than the feed solution. In other words, the transverse concentration profiles hypothesized by the film model, according to which the concentration boundary layer is found on the membrane surface only [START_REF] Fane | Treatise on Water Science[END_REF][START_REF] Sablani | [END_REF][6], are only found at the beginning of the flow channel. This remark underlines the limitations of the one-dimensional film model for describing a crossflow in long filtration channels [6,9]. For a membrane of given solute permeability (δ), the difference between the nondimensional values of c at positions x = 0 (center of the filtration channel) and x = 1 (membrane surface) in Fig. 5c is the extent of concentration polarization at each cross-section (ζ). According to Eq. 6, this value also represents the local enhancement of osmotic pressure on the retentate side caused by the buildup of the polarization layer. This enhancement is strongly related to the reduction in solvent permeation and the increase in the solute passage into the permeate, as indicated by Eq. 1 and Eq. 2, respectively. From Fig. 5c, an enhancement of approximately 2 to 3 times occurs depending on the membrane properties and the axial coordinate. The figure also confirms that the osmotic pressure on the retentate side increases with ζ (this can also be concluded from Fig. 4).
The axial pressure profiles had been analyzed beforehand according to Eq. (1), since the permeate velocity, u_w, is proportional to p(ζ) - Os[c_w(ζ) - c_p(ζ)], where p(ζ) = P(Z) / P_in is the local pressure. These profiles are presented in reference [6]. The pressure drop was low in all cases: less than 5% at the lowest feed pressure and negligible otherwise. It is interesting to realize how the influence of membrane selectivity upon the hydrodynamics of the problem extends to the pressure values. Indeed, enhanced permeation at higher values of B is accompanied by a deceleration of the axial flow and so by a lower pressure drop (or increased filtration pressure). This has a retroactive effect on the solvent permeation, intensifying it [6,13,14]. However, the effect was minor under our simulation conditions.
Solute rejection
Together with the averaged permeate flux, the averaged solute rejection [8], TR_av, is of prime importance for experimentalists:

TR_av = 1 - c_p,av (19)

where c_p,av is the mean bulk concentration of the permeate solution reduced by C_in [7].
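As a small illustration of this averaging (made-up axial profiles, assuming the collected permeate is the flux-weighted mixture of the local permeate), Eq. (19) can be evaluated as follows:

```python
import numpy as np

# Illustrative local profiles u_w(zeta) and c_p(zeta) along the membrane
zeta = np.linspace(0.0, 1.0, 200)
u_w = 1.0e-5 * np.exp(-2.0 * zeta)           # local permeate flux, decaying downstream [m/s]
c_p = 0.02 * (1.0 + zeta)                    # local dimensionless permeate concentration

u_av = np.trapz(u_w, zeta)                   # averaged permeate flux over the channel length
c_p_av = np.trapz(c_p * u_w, zeta) / u_av    # mixed-cup (flux-weighted) permeate concentration
TR_av = 1.0 - c_p_av                         # Eq. (19)
print(f"c_p,av = {c_p_av:.4f}, TR_av = {TR_av:.4f}")
```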
Fig. 6a represents TR_av as a function of Pe_in for several values of δ and Pe_in^osm. The solute rejection obviously diminishes when the solute permeability increases. Total rejection can only be admitted for δ < 10 -3 (or B < 10 -8 m s -1 ). This can be more clearly distinguished in Fig. 6b, where all curves correspond to orders of magnitude of c_p,av above 10 -2 . The remarkable finding in Fig. 6 is that the solute rejection exhibits a non-monotonic dependence on the applied pressure. On the one hand, for low pressures leading to relatively low permeate fluxes, any pressure rise will result in a higher permeate flux, which dilutes the solute in the permeate and results in a higher solute rejection. By contrast, for high pressures causing high permeate fluxes, the higher the pressure, the more severe the concentration polarization, which increases the solute transfer across the membrane and reduces the solute rejection. In other words, there is a maximum value for the operating pressure which will lead to a maximum solute rejection (possibly regarded as optimal).
(Fig. 6a, Fig. 6b)
The example in Fig. 7 helps clarify the effect of δ seen in Fig. 6. From Fig. 7, the increase in u_w (due to higher solute permeability) is not sufficient for the soaring transmembrane solute flux (due to higher concentration polarization at higher u_w), c_p × u_w, to be diluted. Indeed, as opposed to c_p × u_w, u_w depends indirectly on δ [6, 8] and its sensitivity is not too high.
(Fig. 7)

Fig. 6 also demonstrates that the maximum value of the solute rejection that can be attained is practically independent of the inlet concentration. It however shows that the corresponding pressure increases with Pe_in^osm.
Concentration polarization affects simultaneously, in opposite directions and to different extents, the driving forces for both solvent and solute transmembrane transport. It modifies their magnitudes and determines their axial evolutions. As such, it influences the whole process behavior and outputs. At the same time, these driving forces are also functions of the membrane transport properties (among which the solute permeability), the operating conditions and the characteristics of the filtration module. That is to say, concentration polarization is "cause and effect" of deeply convoluted aspects. Unless simulations or experiments are carried out for each set of input conditions, it is not possible to quantify concentration polarization and its effects upon solute rejection. The only statements that can be made in advance are the behaviors highlighted in the previous paragraphs leading to the non-asymptotic dependence of the solute rejection on the applied pressure.
Conclusions
This study has pointed out the effects of membrane selectivity on the overall hydrodynamics and bulk mass transfer in the flow channel. The membrane transport properties regarding the solute may modify concentration polarization to such significant extent as to define not only the values of solute rejection but also those of permeate flux. The reason behind this modification is an intertwined coupling between membrane transport properties, hydrodynamics, concentration polarization, osmotic pressures on the feed and permeate sides of the membrane, and pressure drop. As long as the membrane solute permeability is high, the consideration of the transmembrane solute flux, and thus of an integrated treatment coupling permeate flux and permeate concentration, should not be circumvented. This calls for an accurate estimation of the membrane transport parameters [15].
One main finding of this study is the increase in permeate flux with increasing solute permeability. This owes to the coupling between the osmotic pressures on the two sides of the membrane and the concentration polarization. This effect is significant for relatively high values of membrane solute permeability not necessarily in the range of reverse-osmosis membrane permeability values. It is useful to understand that a less selective membrane of higher solute permeability might be a good choice whenever the target of the separation process is to concentrate the retentate instead of obtaining a high-purity permeate.
The second main finding is the existence of an optimal applied pressure leading to maximum solute rejection (or lowest permeate concentration) for a given feed concentration. This is the result of the combination of two regimes: the dilutive regime, where rising pressures will increase the permeate flux and result in higher solute rejection, and the concentrative regime, where even higher pressures will result in lower and even negative solute rejections. The maximum solute rejection depends on the solute permeability but is rather independent of the inlet concentration. Some considerations apply. The effect of higher solute permeability upon the permeate flux would be lessened in modules where concentration polarization is weakened, as for example in spiral-wound modules with feed spacers. Conversely, if the osmotic pressure happens to be higher than that predicted by Eq. ( 6), the effect would be intensified; this is the case for some solutes whose osmotic pressure obeys a polynomial law or power law with exponents higher than one. Finally, higher solute permeability leads to higher concentrations on the membrane surface. This could be a problem in systems prone to fouling, but could be interesting for specific applications.
Acknowledgement
Gustavo H. Lopes thanks the Conseil Régional de Provence-Alpes-Côte d'Azur and the École Centrale de Marseille for financing this work.
Short text for the table of contents section
The coupling of concentration polarization, transmembrane solute and solvent fluxes and osmotic pressures was studied by solving the continuity, Navier-Stokes and solute transfer equations. Solvent permeation was found to rise with the value of membrane solute permeability. Moreover, there is an optimal value of inlet pressure which leads to maximum solute rejection at a given feed concentration.
Graphical abstract
(submitted online)
Fig. 1: Two-dimensional flow channel bound symmetrically by two equally permeable membranes. A crossflow occurs in the channel whereby the feed solution splits into the retentate stream and the solute-containing permeate. Process variables are described locally and vary axially and/or transversally. Sketch out of scale (L ≫ d).

Fig. 2: Three-Péclet-number diagram representing the dimensionless averaged permeate flux (Pe_av) as a function of the nondimensional applied pressure (Pe_in) for three values of dimensionless solute concentration (Pe_in^osm). Results are represented for three values of dimensionless solute permeability (δ).

Fig. 3: Axial profiles of the nondimensional membrane and permeate concentrations, c_w and c_p, respectively, for two values of nondimensional solute permeability (δ). a) M_in = 3.5 %; P_in = 60×10 5 Pa. b) M_in = 1.0 %; P_in = 60×10 5 Pa. c) M_in = 0.1 %; P_in = 1.5×10 5 Pa.

Fig. 4:

Fig. 5: Transverse profiles in the channel half-height at different axial positions along the flow channel during the desalination of a solution of M_in = 1.0 % at P_in = 30×10 5 Pa for two values of solute permeability (δ). a) Axial velocity, w. b) Transverse velocity, u. c) Solute concentration, c.

Fig. 6: Influence of the dimensionless applied pressure (Pe_in) on the overall purification for three feed concentrations (Pe_in^osm) and using membranes of different solute permeabilities (δ). a) Averaged rejection, TR_av. b) Logarithm of c_p,av, the averaged solute concentration.

Fig. 7: Axial evolution of the local solute and solvent fluxes, u_w c_p and u_w, respectively, in the case of a solution of M_in = 1.0 % desalinated at P_in = 60×10 5 Pa for four values of dimensionless solute permeability (δ).
Symbols used

B [m s -1 ] membrane solute permeability
C [mol m -3 ] solute concentration
c [-] dimensionless solute concentration
D [m 2 s -1 ] solute diffusion coefficient in the feed solution
d [m] flow channel half-height
I [m -1 Pa s] membrane resistance to transmembrane solvent flow
I^-1 [m Pa -1 s -1 ] membrane solvent permeability
i [-] van't Hoff's factor
L [m] flow channel length
M [%] solute mass percentage
Os [-] osmotic number
P [Pa] hydrodynamic pressure
Pe [-] Péclet number
R [J K -1 mol -1 ] ideal gas constant
T [K] solution temperature
TR [-] solute rejection
U [m s -1 ] transverse velocity
u [-] dimensionless transverse velocity
W [m s -1 ] axial velocity
w [-] dimensionless axial velocity
X [m] transverse coordinate
x [-] reduced transverse coordinate
Z [m] axial coordinate

Greek symbols

Δ [-] difference between the feed and permeate membrane sides
δ [-] dimensionless membrane solute permeability
ζ [-] dimensionless axial coordinate
μ [Pa s] feed solution viscosity
Π [Pa] osmotic pressure
ρ [kg m -3 ] feed solution density

Sub-/superscripts

av averaged
in inlet (feed, applied) conditions
osm osmotic
p permeate
w wall (membrane surface)

Tables with headings

Tab. 1: Mass percentages (M_in) and molar concentrations (C_in) of sodium chloride and their respective inlet osmotic Péclet numbers (Pe_in^osm).

M_in (%): 0.1, 1.0, 3.5
C_in (mol m -3 ): 17.1, 171.1, 598.9
Pe_in^osm: 0.13, 1.46, 5.12

Tab. 2: Dimensional feed pressure values (P_in) and their nondimensional counterparts (Pe_in). The variation with Pe_in^osm is due to the concentration-dependence of the salt diffusivity [11] in the feed solution.

P_in (10 5 Pa) | Pe_in^osm | Pe_in
0 | 0.13 | 0
0 | 1.46 | 0
0 | 5.12 | 0
15 | 0.13 | 2.36
15 | 1.46 | 2.59
15 | 5.12 | 2.59
30 | 0.13 | 4.72
30 | 1.46 | 5.17
30 | 5.12 | 5.17
45 | 0.13 | 7.09
45 | 1.46 | 7.76
45 | 5.12 | 7.76
60 | 0.13 | 9.45
60 | 1.46 | 10.34
60 | 5.12 | 10.34

Tab. 3: Dimensional solute permeability values (B) studied in the current work and their nondimensional counterparts (δ). The variation with Pe_in^osm is due to the concentration-dependence of the salt diffusivity [11] in the feed solution.

B (m s -1 ) | Pe_in^osm | δ
0 | 0.13 | 0
0 | 1.46 | 0
0 | 5.12 | 0
10 -8 | 0.13 | 3.15×10 -3
10 -8 | 1.46 | 3.45×10 -3
10 -8 | 5.12 | 3.45×10 -3
10 -7 | 0.13 | 3.15×10 -2
10 -7 | 1.46 | 3.45×10 -2
10 -7 | 5.12 | 3.45×10 -2
10 -6 | 0.13 | 3.15×10 -1
10 -6 | 1.46 | 3.45×10 -1
10 -6 | 5.12 | 3.45×10 -1
Abbreviations
NF nanofiltration RO reverse osmosis | 30,538 | [
"2744",
"770079"
] | [
"199963",
"199963",
"199963",
"199963"
] |
00614912 | en | [
"phys",
"spi"
] | 2024/03/05 22:32:16 | 2011 | https://hal.science/hal-00614912/file/article-new.pdf | J M Martinez
A Boukamel
email: [email protected]
S Méo
S Lejeunes
Statistical approach for a hyper-visco-plastic model for filled rubber: experimental characterization and numerical modeling
Keywords: Hyperviscoplastic behavior, Rheological model, Payne effect, Gent-Fletcher effect, Filled elastomer
This paper presents a campaign of experimental tests performed on a silicone elastomer filled with silica particles. These tests were conducted under controlled temperatures (ranging from -55 o C to +70 o C) and under uniaxial tension and in shearing modes. In these two classes of tests, the specimens were subjected to cyclic loading at various deformation rates and amplitudes and relaxation tests at various levels of deformation. A statistical hyper-visco-elasto-plastic model is then presented, which covers a wide loading frequency spectrum and requires identifying only a few characteristic parameters. The method used to identify these parameters consists in performing several successive partial identifications with a view to reducing the coupling effects between the parameters. Lastly, comparisons between modeling predictions and the experimental data recorded under harmonic loading confirm the accuracy of the model in a relatively wide frequency range and a large range of deformations.
Introduction
Elastomers belong to the high polymer family i.e. they consist of macromolecular chains of various lengths, with and without ramifications. This structure confers on these materials a low level of rigidity and a high level of deformability. In addition, the reinforcement of these materials with fillers accentuates their dissipative behavior. Because of these properties, especially their damping capacity, these materials are widely used in industry. The application on which this study focuses is that of the drag dampers for helicopters. These parts connect helicopter blades to the rotor and attenuate the drag movement. Designing these parts, which are often related to safety, imposes a guarantee of high reliability under extreme operating conditions (dynamic loading with multi-frequency and large amplitudes, thermal constraints, etc). Meeting these specifications requires good knowledge of the mechanical behavior of the constitutive materials. In addition, the behavior of an elastomer can depend heavily on the temperature, the degree of cross-linking and the type of particles incorporated (carbon or silica), etc.
During recent decades, several approaches have been used by previous authors to model various behavioral aspects of elastomers:
• To describe the static behavior of the material, a hyperelastic approach was used in: [START_REF] Treloar | The elasticity of a network of long chain molecules i[END_REF]Treloar ( , 1957) ) where statistical models were proposed; and [START_REF] Mooney | A theory of large elastic deformation[END_REF]; [START_REF] Rivlin | Some topics in finite elasticity[END_REF]; [START_REF] Hart-Smith | Elasticity parameters for finite deformations of rubber-like materials[END_REF]; [START_REF] Ogden | Large deformation isotropic elasticity, on the correlation of theory and experiment for incompressible rubberlike solids[END_REF], which involved the use of phenomenological approaches.
• Some authors have used the damage mechanics approach to describe the softening behavior occurring during the first loading cycles, which is known as the Mullins effect, see [START_REF] Mullins | Effect of strectching on the properties of rubber[END_REF]; [START_REF] Harwood | Stress softening in rubbers : a review[END_REF]. A theoretical framework was proposed by Govindjee andSimo (1991, 1992) in the case of a hyperelastic behavior. A similar approach was described in [START_REF] Simo | On a fully three-dimensional finite-strain viscoelastic damage model : fomulation and computational aspects[END_REF]; [START_REF] Miehe | Discontinuous and continuous damage evolution in ogden-type large-strain elastic materials[END_REF] in the case of a viscoelastic material.
• In [START_REF] Holzapfel | Fully coupled thermomechanical behaviour of viscoelastic solids treated with finite elements[END_REF]; Holzapfel and Simo (1996a,b); [START_REF] Lion | On the large deformation behaviour of reinforced rubber at different temperatures[END_REF], a thermomechanical coupling model was developed, which takes into account the temperature dependence of the mechanical characteristics and describes the temperature changes resulting from the mechanical dissipation.
• Furthermore, to model the viscoelastic effects of these materials, a framework of the Finite Non Linear Viscoelasticity has been proposed by several authors. These models can be classified in the following groups, depending on the type of formulation used:
Those using an integral approach, which were mainly developed for modeling non linear materials with evanescent memory. These approaches describe the behavior of the material using equations giving the stress tensor in terms of the strain history. [START_REF] Rivlin | Some topics in finite elasticity[END_REF]; [START_REF] Coleman | Thermodynamics of materials with memory[END_REF]; [START_REF] Christensen | Theory of viscoelasticity, an introduction[END_REF]; [START_REF] Coleman | Foundations of linear viscoelasticity[END_REF]; [START_REF] Lianis | Constitutive equations of viscoelastic solids under finite deformation[END_REF]; [START_REF] Chang | The behaviour of rubber-like materials in moderatly large deformations[END_REF]; [START_REF] Morman | Original contributions. an adaptation of finite linear viscoelasticity theory for rubber-like by use of the generalised strain measure[END_REF].
Those using a differential approach, based on the concept of intermediate states commonly used to describe finite elastic-plastic deformations (see [START_REF] Sidoroff | The geometrical concept of intermediate configuration and elastic finite strain[END_REF][START_REF] Sidoroff | Un modèle viscoélastique non linéaire avec configuration intermédiaire[END_REF]). Defining intermediate states provides the internal variables needed to describe the behavior. This approach can be said to be an extension of rheological models in the case of large strains: [START_REF] Sidoroff | Rhéologie non-linéaire et variables internes tensorielles[END_REF]; Le [START_REF] Tallec | Numerical analysis of viscoelastic problems[END_REF]; Le [START_REF] Tallec | Numerical models of steady rolling for non-linear viscoelastic structures in finite deformations[END_REF]; [START_REF] Leonov | On thermodynamics and stability of general maxwell-like viscoelastic constitutive equations, theoretical and applied rheology[END_REF]. The local state method, [START_REF] Lemaître | Mécanique des matériaux solides[END_REF], provides the theoretical framework of this formulation, and the internal variables are provided by the intermediate states.
And those using micro-physically motivated models for filled elastomers, which are often based on hypotheses about the interactions between the agglomerates of fillers and the gum matrix: Drozdov (2001a,b); Drozdov andDorfmann (2002, 2003); [START_REF] Drozdov | Linear thermo-viscoelasticity of isotactic polypropylene[END_REF], or about the mechanisms underlying the deformation and rearrangement of the macro-molecular network: [START_REF] Tanaka | Viscoelastic properties of physically cross-linked networks. transient network theory[END_REF]; [START_REF] Drozdov | A model for the nonlinear viscoelastic response in polymers at finite strains[END_REF][START_REF] Drozdov | A model of cooperative relaxation in finite viscoelasticity of amorphous polymers[END_REF]; [START_REF] Reese | A micromechanically motivated material model for the thermo-viscoelastic material behaviour of rubber-like polymers[END_REF].
In this paper, a meso-physically motivated approach is used to model the response of the material, in large strain and at various frequencies and temperatures. A statistical approach is then proposed to develop a model based on the generalization of an assembly of rheological models. The advantage of this statistical rheological model is that it can be used to simulate the behavior of the material in a wide frequency range while requiring only a few parameters to be identified.
First we present the results of a series of experimental tests, which were carried out on a silicone elastomer filled with silica. These tests were uniaxial tension and shear tests and were performed under controlled temperature (ranging from -55 o C to +70 o C) and under various loading conditions (Relaxation tests, quasistatic and dynamic loading at various strain rates). The results show the dependence of the behavior of the material on the temperature, as well as on the strain-rate (Fletcher-Gent effect, see [START_REF] Fletcher | Non-linearity in the dynamic properties of vulcanised rubber compounds[END_REF]) and the amplitude of the strain (Payne effect, see [START_REF] Harwood | Stress softening in rubbers : a review[END_REF]). The constitutive model is then developed on the basis of the fundamental principle of thermodynamics of continuous media, adapted to finite strain theory. Using the concept of intermediate configurations (multiplicative decomposition of the deformation gradient) and in line with the theory of thermodynamics of irreversible processes, and under the hypothesis of the normal dissipation depending only on the internal variables, the constitutive equation and the flow rules are obtained. A statistical approach is then applied, in order to extend this rheological model to a wide range of strain rates and to account for the plastic behavior of the material. In the following section, this statistical hyper-visco-plastic model is analyzed in the case of simple loads, with a view to propose a strategy for identifying its parameters. For this purpose, analytical solutions are developed to simulate the relaxation response and the hardening test, respectively. In the case of cyclic loading, a semianalytical response was obtained using a symbolic and numeric computation software. These identifications were performed at various temperatures. Lastly, using the semi-analytical solution under sinusoidal shear loading conditions at various frequencies and amplitudes, the effect of parameters such as the temperature, frequency and loading amplitude on the harmonic response of the elastomer are analyzed.
Experimental analysis
Description of the experimental tests
An experimental campaign was conducted on a silicone (dimethyl-vinyl-siloxan vulcanized by peroxide) reinforced with silica particles. The glass transition temperature of this elastomer is approximately -105 o C.
The following tests were carried out:
• Uniaxial tensile tests on specimens with a dumbbell shape (H2 according to standard NF T46-002), to determine the quasi-static behavior and the relaxation response of the material.
• Shear tests on Double-Shearing specimens (DS, see Figure 1). These specimens were successively subjected to: a quasi-static loading-unloading cycle; relaxation tests at various shearing levels; triangular cyclic loading, at various strain rates (from 0.03 s⁻¹ to 10 s⁻¹) and various amplitudes (12.5%, 25% and 50%).
All these tests were performed under controlled temperatures (ranging from -55 o C to +70 o C) in a climatic chamber cooled by injecting nitrogen and heated with an electrical resistance and the airflow. In the tensile tests, monitoring and deformation measurements were performed with a laser extensometer.
Remark 1 (Mullins effect) To eliminate the Mullins effect (see [START_REF] Mullins | Effect of strectching on the properties of rubber[END_REF]; [START_REF] Harwood | Stress softening in rubbers : a review[END_REF]) and therefore to characterize the behavior of the stabilized material, a softening process was first induced by applying about ten cycles with an amplitude greater than the maximum strain imposed during the series of tests.
Remark 2 (Temperature stabilization) To avoid errors in the temperature measurement, a waiting period of ten minutes was fixed between each characterization test to allow the temperature to reach equilibrium inside the specimen. The characterization time was kept sufficiently short to avoid excessive self-heating of the specimen.
Experimental results
Relaxation tests: In the relaxation tests, the specimen was subjected to various strain levels: 25%, 50% and 100% under tension loading; 20%, 30% and 54% under shear strain. The response of the material is described by the evolution of the normalized stress (the total stress divided by the instantaneous stress) versus time. The curves presented in Figure 2(a) and Figure 2(c) show that at temperatures above the ambient temperature, the evolution of these stresses during relaxation was always independent of the strain amplitude. At these temperatures, the relaxation mechanism seems independent of the strain level, under both tension and shear loading; at lower temperatures, however, the responses do not show the same linearity of the stress with respect to the strain, especially in the case of the uniaxial tension tests (see Figure 2(b)). The graphs in Figure 2(b) and Figure 2(d) show the dependence of the relaxation response on the temperature: the relaxation response is more sensitive to the temperature in the [-55°C, -25°C] range than at higher temperatures (above 25°C).
Quasi-static shear response: The quasi-static test was a loading-unloading test, performed at a low strain rate (γ = 0.03 s⁻¹) and for three shear amplitudes (γmax = 12.5%, 25% and 50%). The stress-strain curves given in Figure 3 show that even at low rates of deformation, the material exhibits dissipative behavior. It will therefore be necessary to take plasticity into account when developing the constitutive model.
Cyclic shear response: Triangular cyclic shear tests were performed under the following conditions:
• Shear rate (γ): 0.03 s⁻¹, 0.1 s⁻¹, 0.3 s⁻¹, 1 s⁻¹, 3 s⁻¹ and 10 s⁻¹.
• Shear amplitude (γmax): 12.5%, 25% and 50%.
Figures 4(a) and 5 show the effects of the loading amplitude on the stabilized response. Qualitatively, these responses show the strong non-linearity at high amplitudes. In addition, it is worth noting the decrease in the global stiffness observed when the strain amplitude increases. These results therefore clearly confirm that this phenomenon, which is known as the Payne effect, is more pronounced at low temperatures (see Figure 5(a)).
In previous studies, this softening has often been attributed to the plastic behavior of elastomers reinforced with fillers.
As with other visco-elastic polymer materials, the influence of the strain rate was more classical (see Figures 4(b) and 6) at all the temperatures tested: an increase in the global stiffness and in the cyclic dissipation with the strain rate was clearly observed.
Lastly, the hysteresis loops presented in Figure 7 show the strong influence of the temperature on the behavior of the material. It can be seen in particular that a softening and a decrease in the cyclic dissipation occur as the temperature increases, and that at low temperatures the hysteresis loop shows a nonlinear behavior characterized by an angular point, a stiffening at the end of the cycle and a contraction of the loop at zero strain. These non-linear features, which are more pronounced at low temperatures, are consistent with plastic behavior. These results therefore confirm that plastic behavior begins to predominate when the material approaches the glass transition point.
In conclusion, the following aspects of the behavior have to be taken into account in the model:
• The geometric non-linearities due to large strains. • The dissipative behavior induced by viscous effects which should be coupled to the hyperelasticity.
• The model must be able to describe the behavior in a wide range of strain rates and, in particular, to reflect the Fletcher-Gent effect.
• The effects of plasticity on the behavior, including the Payne effect in particular.
Other phenomena, such as the Mullins effect and the self-heating of the material, were also observed during these experiments. However, these aspects will not be integrated directly into the model developed in this study, because their analysis has been widely discussed in the literature, see [START_REF] Mullins | Effect of strectching on the properties of rubber[END_REF][START_REF] Mullins | Determination of degree of crosslinking in natural rubber vulcanizates. part i[END_REF][START_REF] Mullins | Determination of degree of crosslinking in natural rubber vulcanizates. part iii[END_REF]; [START_REF] Mullins | Stress softening in rubber vulcanizates. part i, use of a strain amplification factor to describe the elastic of filler-reinforced rubber[END_REF]; Govindjee and Simo (1991, 1992).
Constitutive equations
Some generalities about rheological modeling
Using the concept of the local intermediate configuration, introduced by [START_REF] Sidoroff | Un modèle viscoélastique non linéaire avec configuration intermédiaire[END_REF][START_REF] Sidoroff | Variables internes en viscoélasticité, 2. milieux avec configuration intermédiaire[END_REF], the transformation gradient tensor F is split into a viscous and an elastic parts:
F = F e • F v . (1)
Then, assuming that the Clausius-Duhem inequality can be written in Eulerian terms as follows (neglecting the thermal effects):
φ = σ : D -J -1 ρ 0 ψ, (2)
where ρ 0 is the density in the initial configuration, φ is the intrinsic dissipation, σ is the Cauchy stress tensor and D represents the Eulerian strain rate tensor:
D = 1 2 L + L T with L = Ḟ • F -1 , (3)
J denotes the determinant of the gradient tensor F, and ψ is the free specific energy which is expressed as the sum:
ψ(B, B e ) = ψ v (B e ) + ψ 0 (B) (4)
where B = F • F T and B e = F e • F T e are sets of independent thermodynamic variables. So, one can express ψ as follows:
ψ = ∂ψ 0 ∂B : Ḃ + ∂ψ v ∂B e : Ḃe (5)
The time derivatives of the left Cauchy-Green tensor and the local changes in volume are given by:
Ḃ = L • B + B • L T (6) J = J (1 : L) (7)
where 1 is the identity tensor, and:
Ḃe = L • B e + B e • L T -2V e • D o v • V e (8)
where V e is the purely elastic strain tensor (i.e. coming from the polar decomposition F e = V e • R e ) and
D o v = R e • D v • R T e (9)
is the objective measure of the anelastic strain rate. By injecting equations ( 6), ( 7) and ( 8) in ( 5), the variation of the free energy can be written as follows:
ψ = 2B • ∂ψ 0 ∂B : D + 2B e • ∂ψ v ∂B e : D -2V e • ∂ψ v ∂B e • V e : D o v ( 10
)
with the incompressibility conditions:
D : 1 = 0, D o v : 1 = 0 (11)
Using the assumption of the normal dissipation, (we choose a quadratic pseudo-potential of dissipation, ϕ v , depending only on D o v ), the constitutive equation and the evolution law are obtained as follows 2 :
σ = σ 0 + σ v -p1 with σ 0 = 2ρ 0 J -1 B • ∂ψ 0 ∂B D σ v = 2ρ 0 J -1 B e • ∂ψ v ∂B e D ( 12
)
∂ϕ v ∂D o v = 2ρ 0 J -1 V e • ∂ψ v ∂B e • V e D ( 13
)
where p is the hydrostatic pressure due to the local incompressibility condition. Equations ( 12) and ( 13) can be said to be a generalization of the classical Zener rheological model to the case of finite strain.
A statistical approach for a hyper-visco-plastic model
2 The symbol D stands for the deviatoric operator.
The constitutive model must first reflect the behavior of the material in a wide range of strain rates, but it also has to account for the effects of the plasticity, such as the Payne effect, and the behavior of the material at low temperatures. Previous studies have shown that introducing plasticity gives good agreement between model predictions and experimental data, and that some rheological models are suitable for modeling the behavior under a given range of loads ([START_REF] Olsson | A fitting procedure for a viscoelastic-elastoplastic material[END_REF]; [START_REF] Miehe | Surimposed finite elastic-viscoelastic-plastoelastic stress response with damage in filled rubbery polymers. experiments, modelling and algorithmic implementation[END_REF]; [START_REF] Nedjar | Frameworks for finite strain viscoelastic-plasticity based on multiplicative decomposition. part i: Continuum formulations[END_REF]). Under more complex loading conditions, rheological models with several branches (see Figure 9(a)) seem to account satisfactorily for the behavior of the material. However, the disadvantage of these models is that they require identifying a large number of parameters.
In order to overcome this difficulty, a statistical approach was developed, whereby the assembly of discrete rheological branches is extended to a continuous model with an infinite number of branches. The advantage of this method is that it covers a wide range of retardation times (or a large frequency spectrum). This approach gives the advantages of a multi-branch assembly without increasing the number of parameters in the model.
The model presented here, can be motivated in micro-physical terms by the heterogeneity of reinforced elastomers, especially in the case of a silicone elastomer slightly filled with silica particles. In order to account for this heterogeneity, it is therefore assumed that the elastomeric matrix is dense and that the inclusions, which consist of particles of silica agglomerated together with a thin rubber bond, are supposed to be slightly reticulated (see for instance [START_REF] Drozdov | A micro-mechanical model for the response of filled elastomers at finite strains[END_REF]). Based on these assumptions, the behavior of the elastomer can be defined as follows:
• The behavior of elastomeric matrix is hyperelastic ;
• The behavior of the inclusions is hyper-visco-elastic (an extension of the Maxwell model to large strains);
• The inclusion/matrix interfaces is assumed to have a hyper-elasto-plastic behavior (an extension of the Saint-Venant model to large strains).
Two statistic quantities are introduced, namely:
• ω i , which denotes the activation energy of the inclusion/matrix mechanism, i.e. the energy required to break the links at the interface [START_REF] Drozdov | A model of cooperative relaxation in finite viscoelasticity of amorphous polymers[END_REF]),
• and P i , which represents the probability that a population of inclusions corresponds to the activation energy ω i .
The discrete form of the statistical model can be written:
ψ = ψ 0 (B) + N i=1 ψv (ω i , B e (ω i )) P i + ψ p (B ep ) ϕ = N i=1 φv (ω i , D o v (ω i )) P i + ϕ ⋆ p (σ p ) (14)
where D o p denotes the anelastic objective strain rate of the elasto-plastic branch (same definition as in equation ( 9)), ψ 0 denotes the specific free energy associated with the matrix, whereas ψv and ψ p are the free energies associated with the inclusions and the inclusions/matrix interfaces, respectively. φv and ϕ ⋆ p are the pseudo-potential of dissipation 3 corresponding to the inclusions and to the interface, respectively.
Using the continuous statistical model in Figure 9(b) to generalize this formulation, we obtain:
ψ = ψ 0 (B) + ∞ 0 ψv (ω, B e (ω)) P (ω)dω + ψ p (B ep ) ϕ = ∞ 0 φv (ω, D o v (ω)) P (ω)dω + ϕ ⋆ p (σ p ) ( 15
)
where ω is a random variable associated with the activation energy of a relaxation micromechanism, and P (ω) is the probability that a population of fillers has the given value ω.
Substituting the potentials (15) in equations ( 12) give the constitutive equation of the statistical model 4 :
σ = σ 0 + ∞ 0 σv (ω)P (ω)dω + σ p -p1 with σ 0 = 2ρ 0 J -1 B • ∂ψ 0 ∂B D σv (ω) = 2ρ 0 J -1 B e (ω) • ∂ ψv (ω) ∂B e (ω) D σ p = 2ρ 0 J -1 B ep • ∂ψ p ∂B ep D ( 16
)
3 The symbol ⋆ denotes a Legendre-Fenchel transformation. 4 In the hyperelasto-plastic branch, F is split into an elastic part Fep and a plastic part Fpp, (F = Fep • Fpp). The objective measure of the plastic strain rate D o p is therefore defined in the same way as D o v (see ( 9)).
and the following evolution laws:
∂ φv (ω) ∂D o v (ω) = 2ρ 0 J -1 B e (ω) • ∂ ψv (ω) ∂B e (ω) D ( 17
)
D o p = ∂ϕ ⋆ p ∂σ p D ( 18
)
The various forms of the potentials can be chosen as follows: a neo-Hookean incompressible hyperelastic model for the matrix behavior; a neo-Hookean form for the hyperelasticity of the inclusions and of the inclusions/matrix interface; a quadratic form for the pseudo-potential of viscous dissipation of the inclusions;
and a perfectly plastic pseudo-potential at the interface. These choices can be written as follows:
ρ 0 ψ 0 = C 1 (I 1 (B) -3) ρ 0 ψv (ω) = G(ω) (I 1 (B e (ω)) -3) φv (ω) = η(ω) 2 D o v : D o v ρ 0 ψ p = A p (I 1 (B ep ) -3) ϕ ⋆ p =< σ p -χ > (19)
where C 1 is the coefficient of the neo-Hookean density, G(ω) and η(ω) are two functions of the random variable ω, χ and A p are the parameters involved in the hyper-elasto-plastic branch, < . > denotes the Mac-Cauley brackets and σ p = √ σ p : σ p . To define the distribution function P (ω), a classical Gaussian distribution centred at the origin and characterized by the standard deviation Ω was adopted:
P(\omega) = \frac{1}{\int_{0}^{\infty} \exp\left[-\left(\omega/\Omega\right)^{2}\right] \mathrm{d}\omega}\, \exp\left[-\left(\frac{\omega}{\Omega}\right)^{2}\right] \qquad (20)
To choose the functions G(ω) and η(ω), which describe the variations in the hyperelastic and viscous characteristics depending on ω, several forms were tested. The functions giving the best match with the experimental data were of the form:
G(\omega) = G_{0} \exp[\omega], \qquad \eta(\omega) = \eta_{\infty}\, \frac{\ln\left(\sqrt{\omega}+1\right)}{\omega+1} \qquad (21)
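For illustration, the shapes of these statistical functions and of the resulting branch retardation times (taking η(ω)/(4G(ω)) as the characteristic time suggested by the flow rule (23)) can be evaluated numerically. The following minimal Python sketch uses placeholder parameter values, not the identified ones of Table 2.

```python
import numpy as np
from scipy.integrate import quad

# Placeholder values, for illustration only (the identified values are given in Table 2).
G0, eta_inf, Omega = 0.3e6, 2.0e4, 1.5          # Pa, Pa.s, -

G   = lambda w: G0 * np.exp(w)                                   # eq. (21)
eta = lambda w: eta_inf * np.log(np.sqrt(w) + 1.0) / (w + 1.0)   # eq. (21)
Pun = lambda w: np.exp(-(w / Omega) ** 2)
norm = quad(Pun, 0.0, 10.0 * Omega)[0]          # P(w) is negligible beyond 10*Omega
P   = lambda w: Pun(w) / norm                   # normalized distribution of eq. (20)

w = np.linspace(0.01, 4.0 * Omega, 200)
tau_branch = eta(w) / (4.0 * G(w))              # branch retardation time from the flow rule (23)
# tau_branch decreases with w while P(w) weights the branches, which favors the
# instantaneous elastic response over the delayed one, as noted in the text.
```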
In fact, these expressions for the hyperelastic and viscous characteristics lead to a decreasing evolution of the retardation time depending on ω. Combining this variation with the distribution function ( 20) makes it possible to focus on the instantaneous elastic response rather than on the delayed response. Lastly, by injecting ( 19) in ( 16), we obtain the constitutive equation:
σ = σ 0 + ∞ 0 σv (ω)P (ω)dω + σ p -p1 with σ 0 = 2C 1 B D σv (ω) = 2G(ω)B e (ω) D σ p = 2A p B ep D ( 22
)
and by using ( 19), ( 17) and ( 18) in ( 8), we obtain the following flow rules:
Ḃe (ω) = L • B e (ω) + B e (ω) • L T -4 G(ω) η(ω) B e (ω) • B e (ω) D (23)
Ḃep = L • B ep + B ep • L T -2 < σ p -χ > σ p σ p • B ep (24)
The statistical hyper-visco-plastic model given by expressions (22) therefore includes 6 parameters which have to be identified, 5 of which are deterministic parameters (C 1 , G 0 , η ∞ , A p , χ) and one of which is a statistical parameter (Ω).
Identification of the model parameters
To identify the model parameters by fitting the response of the model to the experimental data, an algorithm implemented in the MATHEMATICA software was used. This algorithm, is based on a minimization of the sum of the squared differences between the experimental data and the analytical or semi-analytical responses. The latter are obtained by simulating the uniaxial tension tests and the double shear tests, in which the responses are assumed to be homogeneous and incompressible.
Analytical forms of tension responses
Under uniaxial tension, the elastic and anelastic gradients of the transformation are written as follows, respectively:
F = \mathrm{diag}\left(\lambda,\ \frac{1}{\sqrt{\lambda}},\ \frac{1}{\sqrt{\lambda}}\right), \qquad F_a = \mathrm{diag}\left(\lambda_a,\ \frac{1}{\sqrt{\lambda_a}},\ \frac{1}{\sqrt{\lambda_a}}\right) \qquad (25)
Substituting these expressions into the constitutive equations and the complementary law ( 22), the response of the material under various loading modes can be obtained using an analytical form or after a numerical solving.
Relaxation test
To obtain the relaxation response, the instantaneous and delayed stresses are written in terms of the axial component of the first Piola-Kirchoff stress tensor Π, (σ = Π 11 ):
σ 0 = 2 λ 3 -1 C 1 + ∞ 0 G(ω)P (ω)dω 1 λ 2 + 3 2 χ λ σ ∞ = 2 C 1 λ 2 λ 3 -1 + 3 2 χ λ σ| t=0 = - 8 3 2λ 3 -λ - 1 λ 3 ∞ 0 G(ω) 2 η(ω) P (ω)dω (26)
where σ 0 is the instantaneous stress response and σ ∞ is the infinite (long-time) stress response.
Hardening test
The axial stress is written here in the quasi-static case, in the form:
\sigma(\lambda) = \frac{2}{\lambda}\left[\frac{C_1}{\lambda}\left(\lambda^{3}-1\right) + A_p\, \frac{\lambda_e^{3}-1}{\lambda_e}\right] \qquad (27)
The plastic and elastic elongations λ p and λ e are given by:
\dot{\lambda}_p = \left\langle \dot{\lambda} \right\rangle H\left(\lambda - \lambda_y\right) \qquad \text{and} \qquad \lambda_e = \frac{\lambda}{\lambda_p} \qquad (28)
where H is a hardening function obtained from the flow rule (24), and λ y is the elongation corresponding to the plastic yield:
\lambda_y = \sqrt[3]{\frac{1}{2} + \sqrt{\frac{1}{4} - \left(\frac{b}{3}\right)^{3}}} + \sqrt[3]{\frac{1}{2} - \sqrt{\frac{1}{4} - \left(\frac{b}{3}\right)^{3}}} \quad \text{if } b \leq \frac{3}{\sqrt[3]{4}}, \qquad \lambda_y = 2\sqrt{\frac{b}{3}}\, \cos\left\{\frac{1}{3}\left[\pi - \arccos\left(-\frac{1}{2}\left(\frac{3}{b}\right)^{3/2}\right)\right]\right\} \quad \text{if } b \geq \frac{3}{\sqrt[3]{4}}, \qquad \text{with } b = \frac{\chi}{2A_p}\sqrt{\frac{3}{2}} \qquad (29)
Lastly, the residual strain at zero stress λ 0 corresponds to the equation σ(λ 0 ) = 0 during unloading (λ̇ < 0), and can be written as a function of (C 1 , A p , χ) and the maximum strain.
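As a consistency check, the yield elongation can also be obtained numerically as the positive root of λ³ − bλ − 1 = 0, which is the equation behind the closed form (29). A minimal Python sketch, with purely illustrative values of A p and χ:

```python
import numpy as np

# Hypothetical values of the plastic parameters, for illustration only.
A_p, chi = 0.3e6, 0.1e6                       # Pa
b = chi / (2.0 * A_p) * np.sqrt(1.5)          # definition of b in eq. (29)

# lambda_y is the unique positive root of lambda^3 - b*lambda - 1 = 0
roots = np.roots([1.0, 0.0, -b, -1.0])
lam_y = np.real(roots[np.isreal(roots) & (np.real(roots) > 0)]).max()

# Cardano branch of eq. (29), valid when b <= 3 / 4**(1/3)
d = 0.25 - (b / 3.0) ** 3
lam_y_closed = np.cbrt(0.5 + np.sqrt(d)) + np.cbrt(0.5 - np.sqrt(d))

print(lam_y, lam_y_closed)   # the two estimates coincide
```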
Shear responses
In the case of shear tests, the gradient tensors are taken to be as follows:
F = \begin{pmatrix} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad F_a = \begin{pmatrix} \lambda_{a1} & \gamma_a & 0 \\ 0 & \lambda_{a2} & 0 \\ 0 & 0 & \frac{1}{\lambda_{a1}\lambda_{a2}} \end{pmatrix} \qquad (30)
Relaxation test
With the approximation, λ ai = 1, the instantaneous and delayed stress relaxation terms (τ = Π 12 ) are given by:
\tau_0 = 2\gamma\left(C_1 + \int_{0}^{\infty} G(\omega)P(\omega)\,\mathrm{d}\omega\right) + \frac{\sqrt{2}}{2}\chi, \qquad \tau_\infty = 2C_1\gamma + \frac{\sqrt{2}}{2}\chi, \qquad \dot{\tau}\big|_{t=0} = -8\gamma \int_{0}^{\infty} \frac{G(\omega)^{2}}{\eta(\omega)}P(\omega)\,\mathrm{d}\omega \qquad (31)
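The spectrum integrals appearing in (31) can be evaluated by numerical quadrature. The following sketch uses placeholder parameter values and a small lower integration bound to avoid the integrable singularity of 1/η(ω) at ω = 0.

```python
import numpy as np
from scipy.integrate import quad

# Placeholder values, for illustration only (the identified values are given in Table 2).
C1, G0, eta_inf, Omega, chi = 0.4e6, 0.3e6, 2.0e4, 1.5, 1.0e5
gamma = 0.3                                        # applied shear strain

G   = lambda w: G0 * np.exp(w)                     # eq. (21)
eta = lambda w: eta_inf * np.log(np.sqrt(w) + 1.0) / (w + 1.0)
Pun = lambda w: np.exp(-(w / Omega) ** 2)          # unnormalized Gaussian of eq. (20)
wmax = 10.0 * Omega                                # P(w) is negligible beyond this point
norm = quad(Pun, 0.0, wmax)[0]
P   = lambda w: Pun(w) / norm

GP   = quad(lambda w: G(w) * P(w), 0.0, wmax)[0]
G2eP = quad(lambda w: G(w) ** 2 / eta(w) * P(w), 1.0e-6, wmax)[0]

tau0    = 2.0 * gamma * (C1 + GP) + np.sqrt(2.0) / 2.0 * chi   # instantaneous stress of eq. (31)
tau_inf = 2.0 * C1 * gamma + np.sqrt(2.0) / 2.0 * chi          # long-time stress of eq. (31)
taudot0 = -8.0 * gamma * G2eP                                  # initial relaxation rate of eq. (31)
```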
Hardening test
The response of the material under quasi-static loading/unloading can be approximated as follows. The shearing stress is written:
τ (γ) = 2C 1 γ + 2A p γ e (32)
The plastic and elastic shear strains γ p and γ e are given by:
\dot{\gamma}_p = \left\langle \dot{\gamma} \right\rangle H\left(\gamma - \gamma_y\right) \qquad \text{and} \qquad \gamma_e = \gamma - \gamma_p \qquad (33)
where γ y is the shear strain corresponding to the plastic yield:
\gamma_y = \sqrt{\frac{1}{2}\left(\sqrt{9 + \frac{3\chi^{2}}{2A_p^{2}}} - 3\right)} \simeq \frac{\sqrt{2}\,\chi}{4A_p} \qquad (34)
Cyclic test
More generally, based on expressions (30), the response to a cyclic shear test can be obtained by writing the constitutive equations ( 22). This leads to a system of differential equations, the solutions of which are {γ a , λ a1 , λ a2 }. These systems can be solved using a Runge-Kutta scheme.
Identification algorithm
The identification of the parameters of the model X = {C 1 , G 0 , η ∞ , A p , χ, Ω} can be reduced to the minimization of the difference between the experimental curves {(λ i , σ i ), i = 1, N T } and {(γ i , τ i ), i = 1, N S } and the theoretical responses (λ, σ(λ, X)) and (γ, τ (γ, X)). This difference is characterized by the least square distance:
E(X) = NT i=1 ξ i (σ i -σ(λ i , X)) 2 + NS i=1 η i (τ i -τ (γ i , X)) 2 , ( 35
)
where ξ i and η i are the weights associated with the tensile and shear tests, respectively.
This minimization problem is solved using Powell's iterative algorithm (see [START_REF] Fletcher | Pratical methods of optimization[END_REF]), which is a conjugate direction method without gradient calculation. This algorithm is combined with a one-dimensional minimization procedure in each direction which is based on a quadratic interpolation of the function to be minimized.
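As an illustration of this procedure, the following Python sketch reproduces the first identification stage (fitting C 1 , A p and χ on a quasi-static shear loading curve) with a derivative-free Powell minimization. The "experimental" data are synthetic, the simplified monotonic form of eqs. (32)-(34) is assumed, and the actual identification also uses the relaxation and cyclic responses.

```python
import numpy as np
from scipy.optimize import minimize

def tau_qs(gamma, C1, Ap, chi):
    """Quasi-static shear stress of eqs. (32)-(34) for monotonic loading:
    the plastic strain only grows once gamma exceeds the yield strain."""
    gamma_y = np.sqrt(2.0) * chi / (4.0 * Ap)     # approximate yield strain of eq. (34)
    gamma_e = np.minimum(gamma, gamma_y)          # elastic part of the shear strain
    return 2.0 * C1 * gamma + 2.0 * Ap * gamma_e  # eq. (32)

# synthetic data standing in for a measured loading curve (illustrative values)
gamma_exp = np.linspace(0.0, 0.5, 50)
tau_exp = tau_qs(gamma_exp, 4.0e5, 3.0e5, 1.0e5) * (1.0 + 0.02 * np.random.randn(50))

def cost(X):                                      # least-squares distance of eq. (35)
    C1, Ap, chi = X
    return np.sum((tau_exp - tau_qs(gamma_exp, C1, Ap, chi)) ** 2)

res = minimize(cost, x0=np.array([1.0e5, 1.0e5, 5.0e4]), method="Powell")
C1_id, Ap_id, chi_id = res.x
```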
In short, the identification procedure consists of:
1. determining of the response of the material with a set of model parameters, under a given loading mode;
2. calculating the least square difference between the modeling predictions and the experimental data;
3. applying the iterative procedure to minimize the least square difference.
Identification strategy
Given the complexity of the model, the number of parameters which have to be identified and the multiplicity of the experimental data required to identify these parameters, it was necessary to develop a strategy for decoupling the various stages in the identification procedure. This identification strategy was based on the distinction between the instantaneous and delayed responses, as well as between the effects of viscosity and plasticity. Based on the analytical or semi-analytical results outlined in the previous paragraphs, the following strategy was therefore adopted:
1. Quasi-static and delayed responses are used to identify the parameters C 1 , A p and χ.
2. The other parameters, G 0 , Ω and η ∞ , can be obtained by fitting the values to the instantaneous or cyclic responses at various strain rates.
Table (1) summarizes the successive steps in the identification strategy.
Results and Discussion
Identification results
The six model parameters (C 1 , G 0 , η ∞ , A p , χ, Ω) were identified successively, at various temperatures (-55°C, -40°C, -25°C, 25°C, 40°C and 70°C) and at various strain rates (3 s⁻¹, 10 s⁻¹). The values obtained are given in Table (2). Table (2) also gives the relative identification error obtained for each temperature and each strain rate. A maximum error of 15% is obtained, and Figure 11 shows two examples of identification results.
A good agreement between the predictions of the model and the experimental data is observed in the shear tests, at various temperatures and strain rates.
Lastly, Figure 12 shows the evolution of the identified model parameters (normalized) versus the temperature. These curves suggest an exponential decay of all the parameters as the temperature increases, except for Ω, which has been fixed at all temperatures, since it characterizes only the range of retardation times to be covered by the model. This result suggests that the present model is consistent, as the evolution of the parameters with temperature is monotonic in the temperature range considered.
Relevance of the model
Comparisons between the results of the model predictions and the experimental data (which has not been used for identification), obtained under sinusoidal shear loading conditions, show that the model accurately predicts the effects observed experimentally, namely, the Payne effect (see [START_REF] Harwood | Stress softening in rubbers : a review[END_REF]), as shown in Figure 14(c), and The Gent-Fletcher effect (see [START_REF] Fletcher | Non-linearity in the dynamic properties of vulcanised rubber compounds[END_REF]) as shown in Figure 14(a).
In addition, figures 14(b) and 14(d) show the existence of good agreement between the simulated and the experimental data on the cyclic dissipation depending on the frequency and the amplitude.
Other comparisons made under multi-harmonic shear loading also show the ability of this model to accurately simulate the behavior of materials subjected to a combination of several loads at different frequencies (see figure (13)). These results show that the present model can successfully predict the complex behavior of a highly dissipative silicone rubber over a large range of strain amplitudes and strain rates with a small number of material parameters.
Conclusion
In this study, a statistical approach was used to develop a hyper-visco-plastic model covering a wide frequency range. This approach has the same advantages as classical multi-branch models, such as the ability to simulate the behavior of the material over several decades of retardation time, but without the same drawback, since only a small number of material parameters is required (6 in the present model).
A series of experiments were conducted under various loading and temperature conditions, in order to identify the parameters involved in the model, using an algorithm developed with the MATHEMATICA software library. To optimize this identification procedure, a relevant strategy was adopted, which consisted in distinguishing between the various stages in the procedure and thus reducing the effects of coupling between the parameters.
The results obtained at various temperatures show the ability of this model to simulate the behavior of the material in a wide range of temperatures. In addition, the comparisons between the modeling predictions and the experimental data recorded at various frequencies and strain amplitudes have shown good agreement.
The present model is therefore capable of reproducing the complex behavior of filled rubber, in particular the Gent-Fletcher and Payne effects.
Figure 1: Double-shearing specimen.
Figure 2: Response of the material in relaxation tests: first Piola-Kirchoff stress vs time. The vertical axis corresponds to the total stress divided by the instantaneous stress (normalized stress); one panel shows the effects of the temperature under shear loading.
Figure 3: Quasi-static responses recorded in loading-unloading shear tests: first Piola-Kirchoff stress vs shear strain (shear rate γ = 0.03 s⁻¹, temperature T = 25°C).
Figure 4: Cyclic responses in triangular shear tests: first Piola-Kirchoff stress vs shear strain (temperature T = 25°C); the first cycles have been removed to keep only the stabilized responses. (a) Stabilized cycles (T = 25°C, γ = 0.3 s⁻¹), effect of the strain amplitude; (b) stabilized cycles (T = 25°C, γmax = 50%), effect of the strain rate.
Figure 6: Effects of the strain rate on the response of the material under triangular cyclic shearing tests at various temperatures (γmax = 50%).
Figure 8: Intermediate configuration.
Figure 9: Statistical hyper-visco-plastic model.
Figure 10: Evolution of the statistical functions (predominance of the instantaneous elasticity) and of the distribution function P(ω).
Figure 11: Comparison between stabilized hysteretic cycles at various temperatures and strain rates. The solid lines show the model response, the points show the experimental results.
Figure 12: Evolution of the parameters vs temperature.
Figure 14: Comparisons between modeling predictions (solid lines) and experimental data (points).
Table 1: Identification strategy: C 1 , A p and χ are identified on the quasi-static and delayed responses; G 0 , Ω and η ∞ are obtained by fitting the instantaneous or cyclic responses at various strain rates.
Table 2: Parameters identified at various temperatures, with the relative identification error (%).
"1833",
"3866"
] | [
"136844",
"230907",
"136844"
] |
01266697 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2016 | https://hal.science/hal-01266697/file/papierTCM.pdf | T A N'guyen
S Lejeunes
email: [email protected]
D Eyheramendy
A Boukamel
A thermodynamical framework for the thermo-chemo-mechanical couplings in soft materials at finite strain
Keywords: polymer, rubber, thermodynamics, irreversible processes
This paper focuses on the modeling of thermo-chemo-mechanical behaviors of soft materials (polymers, rubbers, ...) that can exhibit large strains, within a rigorous thermodynamical framework. As an application, the case of a thermo-viscoelastic material that can undergo chemical reactions described by a kinetic approach is considered. Simple idealized and homogeneous numerical tests are considered and illustrate the ability of this framework to take into account the reciprocal couplings between the different physics.
Introduction
Thermo-chemo-mechanical couplings in soft materials (polymers, rubbers, ...) are an important topic that is linked to many different industrial applications. The most evident one concerns the processing of these materials. A precise simulation of material processing can be of great interest to optimize the process, to predict residual strain/stress due to process, to anticipate material heterogeneities (different grade of material due to different chemical states). Another application concerns the prediction of the behavior of industrial parts submitted to severe operating conditions (in aeronautics or spatial applications for instance). In these applications: thermal and chemical aging can be coupled to mechanical damage phenomena. Concerning modelisation, the thermodynamic of irreversible processes is the main basic tool to address these problematics. The pioneer work of Prigogine and co-authors is very important as it allows the interpretation and the modelisation of chemical reactions in the context of irreversible processes (see for instance [START_REF] Prigogine | Introduction to thermodynamics of irresversible processes[END_REF]). Prigogine has considered that a thermodynamical system can be described by classical state variables (temperature, volume,...) and by supplementary ones which are the extend of reactions in the case of reactive systems. Thermodynamical energy potential (Gibbs or Helmholtz free energy) are then dependent on these new thermodynamical variables. Chemical thermodynamical forces can be defined, called affinity, and linked to earlier work of De Donder (1928). The second law of thermodynamic (entropy production) can be satisfied as these chemical thermodynamical forces derive from the thermodynamical potential of energy. The generalisation of these fundamental concepts to the case of continuum mechanic can be done in a straightforward manner by considering the concept of local state and the introduction of internal variables (see [START_REF] Germain | Continuum thermodynamics[END_REF] and reference therein). Therefore all the thermodynamical variables are function of time and position. Applying Onsager reciprocity concept (see [START_REF] Onsager | Reciprocal Relations in Irreversible Processes[END_REF]) or using the convex framework of standard generalized materials (see [START_REF] Halphen | On generalized standard materials[END_REF]) evolution equations for chemistry or mechanical irreversibilities can be established from the definition of a thermodynamical energy potential and eventually from a pseudo potential of dissipation (in the context of standard generalized materials). These general tools has been used by many authors in the literature in the context of soft material subjected to chemical reactions. [START_REF] Gigliotti | Chemo-mechanics couplings in polymer matrix materials exposed to thermo-oxidative environments[END_REF]; [START_REF] Gigliotti | Assessment of chemo-mechanical couplings in polymer matrix materials exposed to thermo-oxidative environments at high temperatures and under tensile loadings[END_REF] has proposed a chemo-mechanical coupled model in the context of elastic behavior with several reactions for a thermo-oxydation process in polymer matrix. They also consider the diffusion of species inside the materials to predict aging. 
In [START_REF] Lion | On the phenomenological representation of curing phenomena in continuum mechanics[END_REF]; [START_REF] Mahnken | Thermodynamic consistent modeling of polymer curing coupled to viscoelasticity at large strains[END_REF] one can find thermo-chemo-mechanical models for the curing of polymer. Theses models take into account of mechanical, thermal and chemical deformations (dilatation and shrinkage) in a thermo-viscoelastic context. The chemistry is phenomenologically described by a kinetic approach. [START_REF] André | Thermo-mechanical behaviour of rubber materials during vulcanization[END_REF] has proposed a thermo-chemo-mechanical model in small strain to simulate the vulcanization of rubber materials. The mechanical behavior is assumed to be elasto-visco-plastic and two phenomenological chemical reactions are considered to describe the vulcanization process. In [START_REF] Kannan | A thermodynamical framework for chemically reacting systems[END_REF], the authors have developed a framework in finite strain that is applied to vulcanization of rubbers in a viscoelastic context with several chemical reactions. In this contribution, we propose a rigorous thermodynamical framework that can be used as a basis of development for thermo-chemo-mechanical models. As previously mentioned the basic tools are the thermodynamics of irreversible processes and the local state hypothesis. As already done by some authors, we assume that the volume variation is decomposed in a thermal part (dilation) a chemical part (shrinkage) and a mechanical part (compressibility). Hydrostatic pressure is therefore directly related to the chemical, mechanical and thermal states. This decomposition also implies a coupling of the heat capacity and the chemical evolution upon pressure or volume variation. A kinetic (phenomenological) description of chemistry is chosen. However, due to the fact that mechanical parameters (modulus) and volume variation are dependent on the chemical state, the thermodynamical chemical force is also directly dependent on the mechanical state. The hydrostatic pressure and deviatoric strain can therefore have a favorable or a unfavorable effect on chemical evolution. The main originality of this paper lies in the following points. First, it is proposed a new form of chemical potential of energy, this potential takes into account of an initiation temperature under which no reactions take place and energy is stored as heat. This potential is defined so as to have a clear definition and admissible form of the heat capacity. Second, the chemical evolution that is associated to this potential naturally takes into account of the influence of the mechanical state. It is proposed a new evolution law for chemistry that is inspired from visco-plasticity behavior: the chemical evolution rate is assumed to be null if the thermodynamical force becomes negative (therefore chemical reversion is not allowed) and a chemical viscosity parameter is introduced. This parameter allows the balancing of mechanical influence on chemical evolution. The paper is organised as follows: the general thermodynamical framework and conservation equations are presented in a first section. As an application it is derived a phenomenological thermo-chemo-visco-elastic model in a second section. This model is general and can be applied to different problematic. In a third section, some simple idealized examples are provided. 
These examples illustrate both the capability of the proposed model to be applied to different problems (aging or material processing) and the influence of some material parameters on the reciprocal couplings. In the last section, some concluding remarks are given.
Thermodynamical framework
Definition of a chemical internal variable
In this work, it is considered a solid body submitted to thermo-mechanical loadings. This body is viewed as a closed 1 thermodynamical continuum system and therefore no mass exchange with the outside can occur. Furthermore as classically assumed neither creation nor destruction of mass is allowed. Let us assumed that the initial configuration (at t = 0) is defined by a stress and heat flux free state. In this configuration the chemical species are in equilibrium: the reaction rates are null in this state. Using the classical continum mechanics point of view, the body can be considered as a continuum of material points that are defined by an initial position X and a current position x in an euclidean space. The material point can be viewed as an infinitesimal element of volume and it can be defined the current material density ρ(x, t), which evolves in times as the volume evolves (the initial density is denoted ρ 0 ). The closed system hypothesis leads to:
ρ + ρdiv(v) = 0 (1)
where div(v) is the eulerian divergence of the velocity of the material point and ρ is the total time derivative (so called material time derivative2 ) of the density. Assuming that this infinitesimal element of volume is a mixture of all chemical species that compose the material, one can defined the current mass concentration of the i th species relative to the current infinitesimal volume (of the mixture), denoted ρ i (x, t). In the case of null diffusion of species inside the body (i.e. no relative velocity of one species from another) the mass conservation is also defined from:
ρi + ρ i div(v) = m i i = 1, 2, .., n n i=1 m i = 0 (2)
where m i (x, t) is the rate of production of mass of the i th species per unit of current volume of mixture.
Knowing the molar mass of each species, denoted M i equations 2 can be rewritten in terms of volume concentration (mole per unit of current volume) denoted
Y i = ρ i /M i : Ẏi + Y i div(v) = m i M i i = 1, 2, .., n n i=1 m i = 0 (3)
As already mentioned by [START_REF] Kannan | A thermodynamical framework for chemically reacting systems[END_REF] in finite strain it is not a good idea to consider Y i (x, t) as thermodynamical variables that characterize the chemical state because volume variation leads to a change in concentration (in terms of volume). Following aforementioned authors, we prefer to consider n i the number of mole of the ith species per unit of mass of the mixture:
n i (x, t) = Y i /ρ = ρ i /(ρM i ).
Inserting this definition in 3 and using 1, the mass balance becomes:
n i=1 ṅi M i = 0 (4)
Therefore only n -1 variables are sufficient to describe fully the chemical state. If the reactive scheme is known and if this reactive scheme implies r stoichiometric reactions (with the stochiometric ratios ν ir ), the extend ζ r of the rth reaction can be defined from:
ζ r (x, t) = n r i (x, t) -n r i (X, 0) ν ir ∀i, r = 1..m (5)
where n r i (x, t) is the molar concentration of the ith species for the rth reaction per unit of mass of the mixture. Finally if both the initial concentration and the stochiometric coefficients are known the chemical state can be defined by only r independant variables that describe the advance of reactions. In this paper, it is only considered one reaction3 and the chemical state is defined by the following normalized concentration (chemical conversion):
\xi(\mathbf{x}, t) = \frac{n^{1}_{vu}(\mathbf{x}, t) - n^{1}_{vu}(\mathbf{X}, 0)}{n^{1}_{vu\,max}} \qquad (6)
where n 1 vu (x, t) is the current molar concentration of the product of interest (in the case of material processing this could be the vulcanized or polymerized product) and n 1 vumax is the maximum attainable concentration. The previously defined chemical conversion is chosen to be a thermodynamical internal variable in the following.
Kinematic
To define kinematic, it is assumed that the motion of all the material points of the body is described by a bijective vectorial function χ(X, t) such that: x = χ(X, t). The transformation gradient F(X, t) is therefore defined by F = ∂χ/∂X. Following [START_REF] Flory | Thermodynamic relations for highly elastic materials[END_REF], the transformation is split into volumetric and isochoric part:
F = (J 1/3 1) • F (7)
where J = detF is the volume variation and 1 the identity tensor. It is assumed that the volume variation can result form three independent terms: the mechanical compressibility J m , the thermal dilatation J Θ and the chemical shrinkage
J ξ : J = J m J Θ J ξ (8)
The volume variation terms are defined from the following relations:
J Θ = 1 + α(Θ -Θ 0 ) (9)
J ξ = 1 + βg(ξ) (10)
J m = J(J ξ J Θ ) -1 (11)
where α and β are respectively the thermal expansion and chemical shrinkage coefficients, Θ 0 is the temperature in the initial configuration, and g is a shrinkage function. For the sake of simplicity, the thermal dilatation is assumed to be linear (eq. 9). More complex expressions for the thermal dilatation can be considered that depend on the material composition and on the temperature range of the desired application (see for instance [START_REF] Holzapfel | Entropy elasticity of isotropic rubber-like solids at finite strains[END_REF] who have proposed an exponential form). The linearized form of the thermal dilatation is commonly assumed in the case of small thermal variations. However, for some materials this hypothesis seems valid over a large range of temperatures (far from the glass transition temperature). For instance, figure 1 shows the evolution of the density of a partially hydrogenated poly(acrylonitrile-co-1,3-butadiene) with temperature, for various densities of cross-links. To describe inelastic effects, an intermediate configuration is introduced and the isochoric transformation gradient is classically decomposed with the following multiplicative split:
F = F e • F i (12)
where F i represents the inelastic transformation and F e represents elastic one. This decomposition implicitly assumes that inelastic flows are incompressible.
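For illustration, the volumetric split of eqs. (7)-(11) can be evaluated directly from a given deformation gradient. The following minimal Python sketch uses placeholder values for α, β, the temperatures and the current value of the shrinkage function.

```python
import numpy as np

alpha, beta, Theta0 = 2.2e-4, 0.05, 293.0   # placeholder thermal/chemical coefficients
F = np.diag([1.02, 0.99, 1.00])             # some deformation gradient
Theta, g_xi = 330.0, -0.5                   # current temperature and value of g(xi)

J = np.linalg.det(F)                        # total volume variation
J_T  = 1.0 + alpha * (Theta - Theta0)       # thermal dilatation, eq. (9)
J_xi = 1.0 + beta * g_xi                    # chemical shrinkage, eq. (10)
J_m  = J / (J_T * J_xi)                     # mechanical (compressibility) part, eq. (11)
F_bar = J ** (-1.0 / 3.0) * F               # isochoric part of the transformation, eq. (7)
```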
Clausius-Duhem inequality
It is chosen in the following to consider the Helmoltz free specific energy to characterize thermodynamical states. This potential of energy is classical adopted by mechanician whereas chemists are more familiar with Gibbs potential. It is therefore defined ψ the Helmoltz free specific energy:
ψ = e -Θs ( 13
)
where e is the specific internal energy and s is the specific entropy. Combining the first and second laws of thermodynamics together with the previous expression, one can obtain the Clausius-Duhem inequality. In an Eulerian configuration, it can be written as:
φ = σ : D -ρ ψ -ρs Θ -q • grad x Θ Θ ≥ 0 ∀ D, Θ, q (14)
where φ stands for the dissipation, σ(x, t) is the Cauchy stress, D = Ḟ•F -1 the eulerian rate of deformation, q(x, t) is the eulerian heat flux, grad x is the eulerian gradient operator. Introducing the left Cauchy-Green deformation B = FF T and Be = F e F e T and assuming that the free energy is a function of B, Θ, J, Be , ξ, it can be obtained:
ψ = ∂ψ ∂ B : Ḃ + ∂ψ ∂ Be : Ḃe + ∂ψ ∂J J + ∂ψ ∂Θ Θ + ∂ψ ∂ξ ξ ( 15
)
where:
J = J (1 : D) (16)
Ḃ = L • B + B • L T - 2 3 (1 : D) B (17)
Ḃe = L • Be + Be • L T -2 Ve • Do i • Ve - 2 3 (1 : D) Be ( 18
)
with Ve is the pure deformation coming from the polar decomposition of F e = Ve • Re , Do i is the objective rate of inelastic deformation, defined from:
Do i = Re • Di • RT e .
Inserting equations ( 16), ( 17), ( 18) in ( 15) and putting the result in ( 14) and regrouping terms, we have:
φ = σ -2ρ B • ∂ψ ∂ B + Be • ∂ψ ∂ Be D -ρJ ∂ψ ∂J 1 : D + 2ρ Be • ∂ψ Be D : Do i -ρ s + ∂ψ ∂Θ Θ -ρ ∂ψ ∂ξ ξ - grad x Θ Θ • q ≥ 0 (19)
To proceed further, it is made the following hypothesis: entropy is fully defined from the free specific energy variation and there are no dissipation for the thermodynamical force associated with the thermodynamical flux D. This leads to:
s = - ∂ψ ∂Θ (20) σ = 2ρ B • ∂ψ ∂ B D σeq + 2ρ Be • ∂ψ ∂ Be D σneq + ρJ ∂ψ ∂J 1 σ vol (21)
It remains the following terms in the dissipation:
φ = φm 2ρ Be • ∂ψ ∂ Be D : Do i + φ ξ -ρ ∂ψ ∂ξ ξ + φΘ - grad x Θ Θ • q ≥ 0 (22)
where φ m is the intrinsic dissipation, φ ξ the chemical dissipation and φ Θ the thermal dissipation. It is assumed that φ m , φ ξ and φ Θ are independently positive. Interested readers can refer to [START_REF] Germain | Continuum thermodynamics[END_REF] and references therein for a general discussion on thermodynamics of local state models. From eq. (22), one can define a mechanical thermodynamical force σ i = 2ρ( Be • ∂ψ/∂ Be ) D , a chemical force A ξ = -ρ∂ψ/∂ξ and a heat force A Θ = -grad x Θ/Θ.
Heat equation
The heat equation can be obtained from the first thermodynamical principle (energy conservation), which takes the following local form in the eulerian configuration:
ρ ė = σ : D + ρr -div x q (23)
where r is a volumetric heating source term (defined per unit of volume). The material time derivative of eq. (13) leads to: ė = ψ̇ + ṡΘ + sΘ̇ (24). Using eq. (24) in eq. (23), one can obtain:
ρ ṡΘ = σ : D + ρr -div x q -ρ ψ -ρs Θ (25)
The material time derivative of the free energy is given by eq (15) and using eqs ( 16), ( 17), ( 18), ( 20) in ( 25):
ρ ṡΘ = φ m + φ c + ρr -div x q (26)
The material time derivative of the entropy is obtained from eq (20):
ṡ = - ∂ 2 ψ ∂θ 2 Θ- ∂ 2 ψ ∂θ∂ξ ξ -2 B • ∂ 2 ψ ∂Θ∂ B + Be • ∂ 2 ψ ∂Θ∂ Be D : D+2 Be ∂ 2 ψ ∂Θ∂ Be D : Do i -J ∂ 2 ψ ∂Θ∂J (1 : D) (27)
Finally, the heat equation is obtained by replacing eq (27) in eq (26):
ρC Θ = φ m + φ c + l m + l c + ρr -div x q (28)
where C is the specific heat capacity which is defined from:
C = -Θ ∂ 2 ψ ∂θ 2 (29)
the coupling terms l m and l c are defined from:
l m = Θ ∂σ ∂Θ : D -Θ ∂σ neq ∂Θ : Do i (30) l c = -Θ ∂A ξ ∂Θ ξ (31)
Application: a thermo-chemo-visco phenomenological model
One can consider the specific free energy splitting into mechanical, thermal and chemical parts as:
ψ = ψ m ( B, Be , ξ, Θ, J) + ψ Θ (Θ) + ψ ξ (Θ, ξ) (32)
This decomposition is based on the following ideas: (i) the mechanical behavior is strongly dependent on chemical and thermal state even if the behavior is elastic, (ii) without mechanical deformations or mechanical stresses the chemical free energy will mainly depend on the chemical state and on the temperature (classical thermo-kinetic approach), (iii) in a purely thermal process free energy depends only on temperature. Using the previous decomposition leads to an additive splitting of the entropy and eventually to the specific heat capacity splitting:
s = - ∂ψ ∂Θ = s m + s Θ + s ξ C = -Θ ∂ 2 ψ ∂Θ 2 = C m + C Θ + C ξ (33)
Figure 2: rheological scheme of the mechanical model, with branches ψ vol (J, Θ, ξ), (C 10 (Θ, ξ), C 01 (Θ, ξ)) and (G(Θ, ξ), η(Θ, ξ)).
Mechanical part
Without loss of generality, the very simple Zener viscoelastic model is chosen for the mechanical behavior. The model is schematized in figure 2. The mechanical free energy is decomposed into an equilibrium, a non-equilibrium and a volumetric part:
ψ m ( B, Be , ξ, Θ, J) = ψ eq ( B, ξ, Θ) + ψ neq ( Be , ξ, Θ) + ψ vol (J, ξ, Θ) (34)
The equilibrium and non-equilibrium free energy are based on Mooney-Rivlin and neo-Hookean hyperelastic model:
ρ 0 ψ eq = C 10 (Θ, ξ)(I 1 ( B) -3) + C 01 (Θ, ξ)(I 2 ( B) -3) (35) ρ 0 ψ neq = G(Θ, ξ)(I 1 ( Be ) -3) (36)
where I 1 (•) and I 2 (•) are the first and second invariant of the tensorial argument (•), defined by
I 1 (•) = tr(•) and I 2 (•) = 1/2(I 1 (•) 2 -tr(• 2 )).
The material parameter C 10 , C 01 and G, are assumed to depend both on temperature and chemical state. The volumetric part is assumed to be related only to compressibility and the compressibility modulus K v is assumed to be independent of the thermal and chemical state:
ρ 0 ψ vol = K v 2 (J m -1) 2 = K v 2 ( J J Θ J ξ -1) 2 (37)
The stresses are therefore expressed as:
σ eq = 2C 10 (Θ, ξ)J -1 BD + 2C 01 (Θ, ξ)J -1 (I 1 ( B) B -B2 ) D σ neq = 2G(Θ, ξ)J -1 BD e σ vol = K v (J m -1)J -1 Θ J -1 ξ 1 = p1 (38)
The visco-elastic behavior is assumed to be described by the following flow rule:
Do i = 2 η(Θ, ξ) Be • ∂ψ ∂ Be D = 2G(Θ, ξ) η(Θ, ξ) BD e ( 39
)
where η(Θ, ξ) is a viscosity parameter. Using (39) in [START_REF] Onsager | Reciprocal Relations in Irreversible Processes[END_REF], it is obtained the following evolution equation (classical Maxwell viscosity):
Ḃe = L • Be + Be • L T - 2 3 (1 : L) Be - 1 τ (Θ, ξ) BD e • Be ( 40
)
where τ (Θ, ξ) = η(Θ, ξ)/4G(Θ, ξ) is a characteristic time of viscosity. If the dependency of C 01 , C 10 upon temperature is assumed to be linear, the mechanical contribution to the heat capacity is only given by:
ρ 0 C m = ΘK v α 2 (2 -3J m )JJ -3 Θ J -1 ξ -Θ ∂ 2 G ∂Θ 2 (I 1 ( Be ) -3) (41)
This contribution can be either positive or negative depending on the value of J m and the sign of the second derivative of the modulus G. However, C m is usually small compared to C Θ .
Thermal part
For the thermal part of the free energy it is adopted the form proposed by Reese and Govindjee (1997); [START_REF] Behnke | An extended tube model for thermo-viscoelasticity of rubberlike materials: Parameter identification and examples[END_REF] which leads to a linear dependency of heat capacity upon temperature:
ρ 0 ψ Θ (Θ) = C 0 Θ -Θ 0 -Θ log Θ Θ 0 -C 1 (Θ -Θ 0 ) 2 2Θ 0 (42)
An isotropic Fourier transport is assumed for the heat flux in the eulerian configuration:
q = -k t grad x Θ ( 43
)
where k t is the thermal conductivity coefficient and it is neglected the dependence of this coefficient upon temperature and chemical state. One can easily verify that the previous heat flux is thermodynamically consistent, ie. the mechanical dissipation is always positive independently of the thermodynamic state.
The contribution of the thermal part to the heat capacity is therefore:
ρ 0 C Θ = C 0 + C 1 Θ Θ 0 (44)
Chemical part
The choice of a free energy for a chemical reactive system within a rigorous thermodynamical framework is not obvious. For well established reactive systems (when stochiometric equations can be written with a clear reactive scheme), the Gibbs free energy can be written as a function of chemical potentials of each species. For perfect or ideal systems the chemical potentials have simple form which can depend on temperature and hydrostatic pressure. This approach has been used by [START_REF] Kannan | A thermodynamical framework for chemically reacting systems[END_REF] for the vulcanization of natural rubber. The authors consider a complex reactive scheme in isothermal conditions. In [START_REF] Mahnken | Thermodynamic consistent modeling of polymer curing coupled to viscoelasticity at large strains[END_REF], it is proposed to formulate the chemical part of the free energy as a linear function of the chemical conversion and the amount of heat generated during curing. This amount of heat depends on the all history during processing and it is considered as a material parameter in the aforementioned study.
In this work it is proposed a different approach: at constant temperature the free energy has to be the integral of the chemical force for an increment of the chemical conversion. As a kinetic approach is assumed, the chemical force is multiplicatively split into a function of the conversion (ξ) and a function of the temperature. The conversion part is inspired by the work of [START_REF] Prime | Differential scanning calorimetry of the epoxy cure reaction[END_REF]. The temperature part is motivated by the fact that in general it exists a temperature of activation of chemical reactions. Below this temperature no reactions occur and heat is stored by the material. The temperature part also control the thermochemical coupling term in the heat transport equation and eventually the dependency of the heat capacity to the chemical conversion. This two contributions are rarely discussed in the literature but wrong choices may lead to non-physical models. It is proposed to use the following form for the chemical part of the free energy:
ρ 0 ψ ξ (Θ, ξ) = C 2 Θ i log Θ Θ i (1 -ξ) m+1 m + 1 -Θ 0 log Θ 0 Θ i ( 45
)
where C 2 and m are material parameters, and Θ i is an initiation temperature: below this temperature, at a null hydrostatic pressure (the pressure of the stress-free state in the present model), no reaction can occur. The chemical force is therefore given by:
A ξ = -ρ 0 J -1 ∂ψ m ∂ξ -ρ 0 J -1 ∂ψ ξ ∂ξ ρ 0 ∂ψ ξ ∂ξ = -C 2 Θ i log Θ Θ i (1 -ξ) m ρ 0 ∂ψ m ∂ξ = ∂C 10 ∂ξ (I 1 ( B) -3) + ∂C 01 ∂ξ (I 2 ( B) -3) + ∂G ∂ξ (I 1 ( Be ) -3) -β ∂g ∂ξ J -1 ξ p (46)
The chemical force is therefore coupled to the mechanical state through the dependency of the mechanical parameters upon ξ and through the hydrostatic pressure. Furthermore, the sign of this coupling determines the influence of the mechanics upon chemistry: a given state of strain and stress can be favorable or unfavorable (a hydrostatic compression state is a priori favorable to a vulcanization process for rubbers or to polymerization for polymers).
It is assumed the following evolution equation:
ξ = k(Θ) A ξ ( 47
)
where < . > are the Mac-Cauley brackets4 , k(Θ) is a kinetic rate term which is here assumed to follow an Arrhenius law:
k(Θ) = A exp -Ea RΘ ( 48
)
where A is a material parameter, Ea is an activation energy and R is the ideal gas constant (R = 8.314 J/mol/K). The Mac-Cauley brackets are introduced to guarantee a null or positive chemical evolution5 .
If one considers a deformation for which I1 = I2 = 3, the kinetic evolution is therefore defined by:
ξ = A J exp -Ea RΘ C 2 Θ i log Θ Θ i (1 -ξ) m + β ∂g ∂ξ J -1 ξ p (49)
It can be seen from the previous equation that if Θ < Θ i the first term inside the Mac-Cauley brackets is negative, and therefore no reaction can occur if the second term is null (null hydrostatic pressure) or negative (depending on the sign of ∂g/∂ξ and the sign of p). The relative influence of the mechanics on the chemical reaction is parametrized by the products C 2 × A and β × A.
Inserting (47) in the expression of φ ξ leads to a straight forward proof of the admissibility of the previous chemical flow rule:
φ ξ = k(Θ) A ξ A ξ ≥ 0 ∀ξ, Θ, F (50)
The contribution of the chemical free energy to the heat capacity is obtained as follows:
ρ 0 C ξ = -Θ ∂ 2 ψ ξ ∂Θ 2 = C 2 Θ i Θ (1 -ξ) m+1 m + 1 (51)
This chemical heat capacity is a positive decreasing function of ξ. It can be seen that the proposed form of the chemical free energy is a priori compatible with the idea of a heat energy storage to be released during chemical reactions. In the literature, it can be found that the impact of the chemical evolution upon the specific heat capacity is greatly dependent on the material and the chemical process considered. In [START_REF] Likozar | Cross-linking of polymers: Kinetics and transport phenomena[END_REF], authors show that the heat capacity of a HPAB polymer has mainly a linear dependency upon temperature and a decreasing behavior upon the formation of chemical bonds in the polymer (this effect is important: 25% of relative influence on the heat capacity from the uncured to the fully cured state).
In Dimier ( 2003) the vulcanisation of a natural rubber with carbon black fillers is considered, the authors show that the heat capacity evolution upon chemical state is very low and could be neglected (for a Natural Rubber with carbon black).
The shrinkage function g(ξ) must fulfill some restrictions: this function has to be chosen such that -1 < g(ξ) ≤ 0 ∀ξ and g(ξ = 0) = 0, to guarantee that 0 < J ξ ≤ 1. In the present model the following form has been adopted:
g(\xi) = \frac{0.367 - \exp\left[-(1-\xi)^{n}\right]}{0.632} \qquad (52)
where n is a material parameter that controls the chemical shrinkage function, together with the material parameter β which is the shrinkage coefficient (see eq. ( 3)). Figure 3 shows the influence of n on the shrinkage function. It is also assumed in this work that the chemical rate ξ̇ is identically null when ξ = 1, therefore g′(ξ = 1) must be null. This condition is fulfilled if n > 1.
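A short numerical check of the shrinkage function (52) and of its derivative can be written as follows; the chosen values of n are purely illustrative.

```python
import numpy as np

def g(xi, n):
    """Shrinkage function of eq. (52): g(0) = 0 and g(1) is close to -1."""
    return (0.367 - np.exp(-(1.0 - xi) ** n)) / 0.632

def dg(xi, n):
    """Derivative of g with respect to xi."""
    return -n * (1.0 - xi) ** (n - 1.0) * np.exp(-(1.0 - xi) ** n) / 0.632

for n in (1.5, 2.0, 4.0):                  # n > 1 so that g'(1) = 0
    xi = np.linspace(0.0, 1.0, 5)
    print(n, g(xi, n), dg(1.0, n))         # dg(1, n) vanishes for n > 1
```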
Numerical simulation for simple homogeneous tests
The material parameters are synthesized in table 1. These parameters have not been identified from a real material; however, the orders of magnitude of most of them are close to those of a filled rubber material. For the mechanical parameters, the following hypotheses have been adopted:
• the chemical conversion is assumed to stiffen the mechanical behavior
• far from the glass transition temperature, an increase of the temperature leads to a stiffening effect for the elastic part (parameter C 10 ) of the model (entropic elasticity) and to a softening effect for the viscoelastic part (parameter G).
• the characteristic time of viscosity: η/(4G), is a decreasing function of the temperature.
Example 1: cyclic shearing at fixed temperature and fixed chemical conversion
To illustrate the influence of the chemical conversion and of the temperature on the mechanical response, a cyclic simple shear test is considered. The response is assumed to be homogeneous and the deformation gradient is taken as follows (thermal dilatation and chemical shrinkage are neglected):
F(t) = \begin{pmatrix} 1 & \gamma(t) & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \gamma(t) = g\,\sin(2\pi f t) \qquad (53)
where f = 1 Hz is the chosen frequency and g is the amplitude of the signal. The following form is also assumed for the viscoelastic left Cauchy-Green tensor:
\bar{B}_e(t) = \begin{pmatrix} B_{e11}(t) & B_{e12}(t) & 0 \\ B_{e12}(t) & B_{e22}(t) & 0 \\ 0 & 0 & 1/(B_{e11}(t)B_{e22}(t)) \end{pmatrix} \qquad (54)
Table 1: material parameters used for the simulations. Density: ρ 0 = 1000 kg/m³. Thermal part: α = 2.2e-4 K⁻¹, C 0 = 8e5 J/m³/K, C 1 = 1e6 J/m³/K, k t = 0.22 W/m/K. Mechanical part: K v = 1.0e9 Pa, C 10 (Pa) = 1.e6 (0.2 + 1.5e…).
For this test, the shear stress is fully determined by the material behavior model independently of the equilibrium equation. The equations are:
σ 12 = 2γ(t)(C 10 (Θ, ξ) + C 01 ) + 2G(Θ, ξ)B e12 (t) (55) Ḃe = L • Be + Be • L T - 1 τ (Θ, ξ) BD e • Be (56)
Using ( 53) and ( 54) in ( 56), one can obtain a system of three differential equation with three unknowns (B e11 , B e12 , B e22 ). This system is solved using the NDSolve function of Mathematica (Wolfram Research ( 2014)), Θ and ξ are fixed. The results are synthetized on the figures 4. These results clearly show that the chemical conversion has a stiffening effect on the dynamical mechanical response (at a fixed temperature) and that temperature increase leads to a softer and less dissipative dynamical response (at fixed chemical conversion).
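For readers who prefer an open-source alternative to NDSolve, the sketch below integrates the evolution equation (56) for the in-plane components of the viscoelastic tensor under the imposed shear (53) and evaluates the shear stress (55); the parameter values (G, τ, C10, C01 and the strain amplitude) are illustrative placeholders and not the identified values of Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholder parameters
G, tau = 0.3e6, 0.05            # shear modulus (Pa) and relaxation time eta/(4G) (s)
C10, C01 = 0.4e6, 1.0e4         # hyperelastic coefficients (Pa)
g_amp, f = 0.5, 1.0             # shear amplitude and frequency (Hz)

gamma = lambda t: g_amp * np.sin(2.0 * np.pi * f * t)
gammadot = lambda t: g_amp * 2.0 * np.pi * f * np.cos(2.0 * np.pi * f * t)

def rhs(t, y):
    """Right-hand side of eq. (56) for the unknowns (Be11, Be12, Be22)."""
    B11, B12, B22 = y
    B33 = 1.0 / (B11 * B22)                    # det(Be) = 1 is preserved by the flow rule
    Be = np.array([[B11, B12, 0.0], [B12, B22, 0.0], [0.0, 0.0, B33]])
    L = np.zeros((3, 3)); L[0, 1] = gammadot(t)
    BeD = Be - np.trace(Be) / 3.0 * np.eye(3)  # deviatoric part
    Bedot = L @ Be + Be @ L.T - (1.0 / tau) * BeD @ Be
    return [Bedot[0, 0], Bedot[0, 1], Bedot[1, 1]]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 1.0],
                t_eval=np.linspace(0.0, 5.0, 2001), rtol=1e-8)
sigma12 = 2.0 * gamma(sol.t) * (C10 + C01) + 2.0 * G * sol.y[1]   # eq. (55)
print("max |sigma12| =", abs(sigma12).max(), "Pa")
```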
Example 2: Isothermal case with fixed hydrostatic pressure
A block of matter is considered in isothermal conditions and subjected to a hydrostatic pressure. The thermal evolution and the dilatation effect are neglected. The deformation, the temperature and the chemical state are assumed to be homogeneous. In this case, the equations of the model become:
\[ J_\xi = 1 + \beta g(\xi), \tag{57} \]
\[ p = K_v (J_m - 1)\, J_\xi^{-1} = K_v \left(\frac{J}{J_\xi} - 1\right) J_\xi^{-1}, \tag{58} \]
\[ \dot{\xi} = \frac{A}{J} \exp\!\left(-\frac{E_a}{R\Theta}\right) \left[ C_2 \Theta_i \log\frac{\Theta}{\Theta_i}\, (1-\xi)^m + \beta \frac{\partial g}{\partial \xi}\, J_\xi^{-1}\, p \right], \qquad \xi(t=0) = 0. \tag{59} \]
From eqs. (58) and (57), the volume variation can be obtained as a function of the hydrostatic pressure and of the chemical conversion:
\[ J = \frac{p}{K_v}\left(1 + \beta g(\xi)\right)^2 + \left(1 + \beta g(\xi)\right). \tag{60} \]
Considering the case p ≪ K_v, this simplifies to J = J_ξ = 1 + βg(ξ); inserting this result into eq. (59), the chemical conversion is defined by:
\[ \dot{\xi} = \frac{A \exp\!\left(-\frac{E_a}{R\Theta}\right)}{1 + \beta g(\xi)} \left[ C_2 \Theta_i \log\frac{\Theta}{\Theta_i}\, (1-\xi)^m + \beta \frac{\partial g}{\partial \xi} \left(1 + \beta g(\xi)\right)^{-1} p \right], \qquad \xi(t=0) = 0. \tag{61} \]
By noting f(ξ) = -g'(ξ)(1 + βg(ξ))^{-1} the influence of the chemical shrinkage function on the kinetic evolution, eq. (61) can be written as:
\[ \dot{\xi} = \frac{A \exp\!\left(-\frac{E_a}{R\Theta}\right)}{1 + \beta g(\xi)} \left[ C_2 \Theta_i \log\frac{\Theta}{\Theta_i}\, (1-\xi)^m + \beta f(\xi)\, p \right], \qquad \xi(t=0) = 0. \tag{62} \]
The previous differential equation is numerically solved with the NDSolve function of Mathematica.
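An equivalent open-source sketch is given below: eq. (61) is integrated at fixed temperature and fixed hydrostatic pressure with scipy. All chemical parameters (A, E_a, C_2, Θ_i, m, β, n) are illustrative placeholders and not the identified values used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholder parameters
A, Ea, Rg = 2.5e-2, 8.0e4, 8.314          # pre-factor, activation energy (J/mol), gas constant
C2, Theta_i, m_exp = 1.0e6, 380.0, 1.0    # chemical heat parameters (placeholders)
beta, n = 0.03, 2.0                        # shrinkage coefficient and exponent

def g(xi):                                 # eq. (52)
    return (0.367 - np.exp(-(1.0 - xi) ** n)) / 0.632

def dg_dxi(xi):
    return -n * (1.0 - xi) ** (n - 1.0) * np.exp(-(1.0 - xi) ** n) / 0.632

def xi_dot(t, y, Theta, p):
    """Kinetic law of eq. (61) with J = J_xi = 1 + beta*g(xi)."""
    xi = min(max(y[0], 0.0), 1.0)
    Jxi = 1.0 + beta * g(xi)
    thermal = C2 * Theta_i * np.log(Theta / Theta_i) * (1.0 - xi) ** m_exp
    mech = beta * dg_dxi(xi) / Jxi * p     # pressure contribution, sign as in eq. (61)
    return [A / Jxi * np.exp(-Ea / (Rg * Theta)) * (thermal + mech)]

Theta, p = 433.0, -100.0e6                 # 160 C and 100 MPa of hydrostatic compression
sol = solve_ivp(xi_dot, (0.0, 3600.0), [0.0], args=(Theta, p), max_step=5.0)
print("conversion after 1 h:", round(sol.y[0, -1], 3))
```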
Figure 5 shows the response of the model in isothermal conditions at fixed temperature. As observed for rubber systems, the chemical conversion is very sensitive to the temperature. The influence of the hydrostatic pressure can be clearly observed: hydrostatic compression accelerates the conversion, whereas hydrostatic tension can stop the chemical process when a limit is reached. The relative influence of the mechanics on the chemistry can be very sensitive to the shrinkage coefficient, as can be seen in Figure 5(c).
Example 3: adiabatic case with hydrostatic pressure, application to material processing
In this example, the fictitious case of the vulcanization of a block of rubber is considered. The material is subjected to a thermal and pressure cycle that is assumed to be representative of a rubber curing process. No heat exchange with the outside is considered (adiabatic case) and the temperature is assumed to be homogeneous. Thermal diffusion is neglected, whereas thermal dilatation is taken into account. The deformation is assumed to be homogeneous and purely hydrostatic. Figure 6 shows the pressure and heating cycle considered: the pressure is first applied linearly; at the end of this ramp, the heating starts and is then released. The pressure is kept fixed during the chemical conversion, which is followed by a cooling phase. Finally, the pressure is released to zero. In the following, the amplitude of the heat source is the same for each test (max/min value of 6 MJ/m³ of initial volume) and different values of the hydrostatic pressure amplitude and sign are chosen to illustrate the influence of the mechanical couplings. For this problem, as the deformation is purely hydrostatic, the system of equations reduces to:
\[ p = K_v \left( \frac{J}{J_\Theta(\Theta) J_\xi(\xi)} - 1 \right) \frac{1}{J_\Theta(\Theta) J_\xi(\xi)}, \tag{63} \]
\[ \dot{\Theta} = \frac{J \left( \phi_\xi(\Theta,\xi,J) + l_m(\Theta,\xi,J) + l_c(\Theta,\xi,J) + J^{-1}\rho_0 r \right)}{\rho_0\, C(\Theta,\xi,J)}, \qquad \Theta(t=0) = \Theta_0, \tag{64} \]
\[ \dot{\xi} = \frac{A}{J} \exp\!\left(-\frac{E_a}{R\Theta}\right) \left[ C_2 \Theta_i \log\frac{\Theta}{\Theta_i}\,(1-\xi)^m + \beta \frac{\partial g}{\partial \xi}\, J_\xi^{-1}\, p \right], \qquad \xi(t=0) = 0. \tag{65} \]
The previous system of equations is solved numerically with a standard explicit Euler method with a small step size (dt = 0.2 s); each physics is therefore integrated at the same times and with the same scheme (Mathematica is also used for this test). The results of the numerical simulation are presented in Figure 7. The following observations can be made:
- a hydrostatic tension state of stress (p = 50 MPa) leads to the smallest chemical conversion and therefore to the lowest temperature reached during curing;
- the higher the compression, the faster the chemical evolution and, as the reaction is exothermic, the higher the temperature.
The competition between thermal dilatation and chemical shrinkage can be clearly seen in the figure that shows the volume variation. Figure 8 shows that, at the end of curing, after cooling with the same amount of volumetric energy as that used for heating and after pressure release, the final temperature is slightly different from the initial one of 20 °C. The final temperature also depends on the hydrostatic pressure applied during curing. The final shrinkage corresponds to the value given for the coefficient β, i.e. 3% of volume variation at maximum conversion. The final value of the volume variation is therefore directly related to the conversion reached at the end. Figure 9 illustrates the dependency of the heat capacity upon temperature and chemical conversion.
4.4. Example 4: adiabatic case with cyclic shearing and heating, application to chemo-thermal aging under thermo-mechanical loadings
In this example, the case of a homogeneous sinusoidal shear test with an initial volumetric heating (as previously, with an imposed value of ρ_0 r) is considered. No heat exchange is considered and the solution is therefore assumed to be homogeneous. The total deformation gradient is defined from eq. (53) and the viscoelastic Cauchy-Green tensor from eq. (54). The thermal and chemical shrinkages are taken into account; the imposed deformation gradient leads to J = 1 and therefore J_m is explicitly defined as a function of ξ and Θ:
\[ J_m = J_\xi^{-1} J_\Theta^{-1} = \left(1 + \beta g(\xi)\right)^{-1}\left(1 + \alpha(\Theta - \Theta_0)\right)^{-1}. \tag{66} \]
In this case, the hydrostatic pressure is defined by:
\[ p = K_v \left( J_\xi^{-1} J_\Theta^{-1} - 1 \right) J_\xi^{-1} J_\Theta^{-1}. \tag{67} \]
As in the previous example, a volumetric heating is imposed during the first 30 mechanical cycles to initiate the chemical reaction (an initial condition Θ_0 = 20 °C is assumed). The volumetric heating is defined by:
\[ \rho_0 r(t) = \begin{cases} 4.2\times 10^{6} \,(t/30)\ \text{J/m}^3 & t < 30 \\ 0 & t \geq 30 \end{cases} \tag{68} \]
This idealized example can be viewed as a very simplified simulation of the thermo-chemical aging of a polymer material. These phenomena are distinct from the previous virtual curing process; however, it is assumed that the couplings between the different physics can be represented by the same model. As the aim is not to simulate a real material but only to reproduce qualitatively the observed phenomena, the material parameters are kept identical to those of the previous examples.
In the particular case of simple shear, the shear stress is explicitly given by the constitutive equation, and the following set of equations is obtained:
\[ \sigma_{12} = 2\gamma(t)\left(C_{10}(\Theta,\xi) + C_{01}\right) + 2G(\Theta,\xi)\, B_{e12}(t), \tag{69} \]
\[ \dot{\bar{\mathbf{B}}}_e = \mathbf{L}\cdot\bar{\mathbf{B}}_e + \bar{\mathbf{B}}_e\cdot\mathbf{L}^T - \frac{1}{\tau(\Theta,\xi)}\,\bar{\mathbf{B}}_e^D\cdot\bar{\mathbf{B}}_e, \tag{70} \]
\[ \dot{\Theta} = \frac{J}{\rho_0 C(\Theta,\xi,\bar{\mathbf{B}}_e)} \left( \phi_m(\Theta,\xi,\bar{\mathbf{B}}_e) + \phi_\xi(\Theta,\xi,\bar{\mathbf{B}}_e) + l_m(\Theta,\xi,\bar{\mathbf{B}}_e) + l_c(\Theta,\xi,\bar{\mathbf{B}}_e) + J^{-1}\rho_0 r \right), \tag{71} \]
\[ \dot{\xi} = \frac{A}{J} \exp\!\left(-\frac{E_a}{R\Theta}\right) \left[ C_2 \Theta_i \log\frac{\Theta}{\Theta_i}\,(1-\xi)^m - \frac{\partial C_{10}}{\partial \xi}\gamma(t)^2 - \frac{\partial G}{\partial \xi}\left(\mathrm{tr}(\bar{\mathbf{B}}_e) - 3\right) - \beta \frac{K_v}{2}\frac{\partial g}{\partial \xi}\, J_\xi^{-1}\left(J_\xi^{-1}J_\Theta^{-1} - 1\right)^2 \right], \tag{72} \]
\[ \xi(0) = 0, \qquad \Theta(0) = \Theta_0, \qquad \bar{\mathbf{B}}_e(0) = \mathbf{1}. \tag{73} \]
For this example, the mechanical evolution problem has the smallest time scale, and a staged coupling strategy is adopted to save computing time and memory. It is assumed that Θ and ξ do not evolve much during one mechanical period (T = 1 s). Therefore, Θ and ξ are considered as functions of the number of mechanical cycles N. The following quantities are defined:
\[ \Theta(N) = \Theta_N = \frac{1}{T}\int_{\text{cycle}} \Theta(t)\,dt, \qquad \xi(N) = \xi_N = \frac{1}{T}\int_{\text{cycle}} \xi(t)\,dt. \tag{74} \]
The system of equations (69) to (73) is replaced by:
\[ \sigma_{12}(t) = 2\gamma(t)\left(C_{10}(\Theta_N,\xi_N) + C_{01}\right) + 2G(\Theta_N,\xi_N)\, B_{e12}(t), \quad \dot{\bar{\mathbf{B}}}_e(t) = \mathbf{L}(t)\cdot\bar{\mathbf{B}}_e(t) + \bar{\mathbf{B}}_e(t)\cdot\mathbf{L}(t)^T - \frac{1}{\tau(\Theta_N,\xi_N)}\,\bar{\mathbf{B}}_e(t)^D\cdot\bar{\mathbf{B}}_e(t), \quad \bar{\mathbf{B}}_e(0) = \mathbf{1}, \tag{75} \]
\[ \frac{\delta\Theta_N}{\delta N} = \frac{\displaystyle\int_{\text{cycle}} \left( \phi_m + \phi_\xi + l_m + l_c + \rho_0 J^{-1} r \right) dt}{\displaystyle\int_{\text{cycle}} J^{-1}\rho_0\, C(\Theta_N,\xi_N,\bar{\mathbf{B}}_e(t))\,dt}, \qquad \frac{\delta\xi_N}{\delta N} = A \exp\!\left(-\frac{E_a}{R\Theta_N}\right) \int_{\text{cycle}} J^{-1}\left[ C_2 \Theta_i \log\frac{\Theta_N}{\Theta_i}\,(1-\xi_N)^m - \frac{\partial C_{10}}{\partial \xi}\gamma(t)^2 - \frac{\partial G}{\partial \xi}\left(\mathrm{tr}(\bar{\mathbf{B}}_e) - 3\right) - \beta\frac{K_v}{2}\frac{\partial g}{\partial \xi}\, J_\xi^{-1}\left(J_\xi^{-1}J_\Theta^{-1} - 1\right)^2 \right] dt, \qquad \xi_{N=0} = 0, \quad \Theta_{N=0} = \Theta_0. \tag{76} \]
For a given cycle N, the following resolution is adopted: eqs. (75) are integrated with a forward Euler method, with the temperature and the chemical state frozen; all the integral quantities needed to compute system (76) are evaluated numerically with a rectangle rule, and system (76) is then also integrated with a rectangle rule.
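The staged strategy can be summarized by the runnable skeleton below: the fast mechanical problem is stepped over one cycle with Θ and ξ frozen, the cycle integrals are accumulated with a rectangle rule, and the slow variables are updated once per cycle. The thermal and chemical source terms are represented by placeholder functions, since their full expressions involve quantities (φ_m, φ_ξ, l_m, l_c, C) defined earlier in the model; all numerical values are illustrative.

```python
import numpy as np

T_cycle, dt = 1.0, 1.0e-2                    # mechanical period (s) and Euler step (s)
n_sub = int(T_cycle / dt)

def mech_step(Be12, t, Theta_N, xi_N):
    """One forward-Euler step of the fast problem (scalar stand-in for eq. (75))."""
    gdot = 0.5 * 2.0 * np.pi * np.cos(2.0 * np.pi * t)    # illustrative shear rate
    tau = 0.05 * np.exp(0.01 * (Theta_N - 293.0))          # placeholder tau(Theta, xi)
    return Be12 + dt * (gdot - Be12 / tau)

def heat_source(Be12, Theta_N, xi_N):        # placeholder for phi_m + phi_xi + l_m + l_c
    return 1.0e3 * Be12 ** 2

def chem_source(Be12, Theta_N, xi_N):        # placeholder kinetic integrand of eq. (76)
    return 1.0e-6 * (1.0 - xi_N)

rho0C = 2.0e6                                # placeholder heat capacity (J/m^3/K)
Theta_N, xi_N, Be12 = 293.0, 0.0, 0.0
for N in range(3000):                        # slow cycle loop
    q_int = s_int = 0.0
    t = 0.0
    for _ in range(n_sub):                   # fast mechanical loop, Theta and xi frozen
        Be12 = mech_step(Be12, t, Theta_N, xi_N)
        q_int += heat_source(Be12, Theta_N, xi_N) * dt    # rectangle rule
        s_int += chem_source(Be12, Theta_N, xi_N) * dt
        t += dt
    Theta_N += q_int / rho0C                 # one update per cycle, as in eq. (76)
    xi_N = min(1.0, xi_N + s_int)
print("Theta after 3000 cycles:", round(Theta_N, 1), "K, conversion:", round(xi_N, 3))
```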
Figure 10 shows the results of the numerical simulation up to N = 3000 cycles. As expected, the higher the mechanical amplitude, the higher the temperature and the faster the chemical conversion. The chemo-mechanical coupling is controlled by the dependency of the mechanical parameters upon the chemical conversion and by the ratio of A × C_2 to β × A. For the chosen values of C_2 and A, a small change of β can lead to strong differences in the chemical conversion. In this example, and for the chosen material parameters, external heating is needed to initiate the chemical reactions. The proposed model is not limited to this case: chemical reactions activated only by the self-heating due to mechanical loadings could also be considered (more cycles would, however, be necessary).
Remarks and conclusion
In this paper, a rigorous thermodynamical framework for thermo-chemo-mechanical couplings in soft materials at finite strain has been developed. A thermo-viscoelastic model with a kinetic approach for the chemistry is proposed. The main originality resides in the definition of the chemical Helmholtz free energy potential and of the associated chemical flow rule. The free energy potential takes into account an initiation temperature below which energy is stored as heat and above which energy is released to initiate the chemical conversion. The chemical conversion, which is derived from the energy potential, naturally takes the hydrostatic pressure into account. These two points lead to a clear definition of the heat capacity (and of the respective influence of temperature and chemistry on it) and to a clear influence of the hydrostatic pressure on the chemical process. The proposed model is a first step toward analyzing and simulating the complex multiphysics phenomena that take place during the processing or aging of soft materials subjected to large strains. In this paper, only idealized applications have been considered, without any characterization of a real material. Therefore, the chemical behavior (a single phenomenological reaction scheme) and the mechanical behavior (Zener model) were chosen as simple as possible in order to focus on the influence of the chemo-mechanical couplings. For a specific application, the question of material parameter identification will certainly become a central one. In the case of a nonlinear mechanical behavior at finite strain, the identification of a phenomenological model usually requires several experiments to be valid; simple DMA experiments, for instance, are in general not sufficient. Conducting these experiments at controlled (homogeneous in the sample and stable) chemical states seems challenging. Nevertheless, if one considers the curing of polymers (or rubbers), chemorheology can give useful information in the linear regime (see for instance [START_REF] Joshi | Modeling of steady and time-dependent responses in filled, uncured, and crosslinked rubbers[END_REF]). It is therefore necessary to develop new testing methods and experimental procedures that could help to investigate the nonlinear regime or to impose complex mechanical loadings within a rheometer (for instance a precise control of the hydrostatic pressure, see [START_REF] Mackley | Experimental observations on the pressure-dependent polymer melt rheology of linear low density polyethylene, using a multi-pass rheometer[END_REF]). We believe that numerical simulation with thermo-chemo-mechanical models will greatly help to introduce new experimental techniques and will be useful for a better understanding of existing ones. The finite element implementation of the present model requires careful attention to the variational formulation and to the numerical integration schemes (for both the evolution and the conservation equations). The intrinsic times of each physics can be very different depending on the application (curing, aging, ...), on the material (polymers, rubbers, ...) and on the loading conditions. Depending on these characteristics, a monolithic or a staged strategy can be more appropriate. A first attempt has been proposed in Nguyen [START_REF] Nguyen | A finite strain thermo-chemo-mechanical coupled model for filled rubber[END_REF]; [START_REF] Eyheramendy | Advances in symbolic and numerical approaches in computational mechanics[END_REF] and needs further improvements.
These numerical developments will also make it possible to take into account more complex and localized phenomena such as heat conduction (thermal gradients) or chemical diffusion.
Figure 1: Dependence of the density of the HPAB polymer on temperature for different degrees of cure. The legend shows the concentration of cross-links. Issued from [START_REF] Likozar | Cross-linking of polymers: Kinetics and transport phenomena[END_REF].
Figure 2: A viscoelastic model.
Figure 3: Shrinkage function g(ξ).
Figure 4: Mechanical response to a cyclic shear test at constant temperature and constant chemical conversion values. Panel: cyclic shear test at ξ = 0.5 for various temperature values, showing thermal softening and a decrease of the hysteresis area with increasing temperature.
Figure 5: Chemical conversion at fixed temperature and fixed hydrostatic pressure. Panels: isothermal chemical conversion at p = 0 MPa, effect of the temperature; isothermal chemical conversion at Θ = 160 °C and p = -100 MPa, effect of the shrinkage coefficient.
Figure 6: Hydrostatic pressure and volumetric heating cycle.
Figure 7: Results of a virtual material processing test under various hydrostatic pressures.
Figure 8: Results of a virtual material processing test under various hydrostatic pressures: zoom at the end of curing.
Figure 9: Dependency of the heat capacity upon temperature and chemical conversion (see text).
Figure 10: Thermo-chemical aging under cyclic shearing.
Table 1: Material parameters, mechanical and thermal parts.
- Density: ρ_0 = 1000 kg/m³
- Thermal part: α = 2.2e-4 K⁻¹, C_0 = 8e5 J/m³/K, C_1 = 1e6 J/m³/K, k_t = 0.22 W/m/K
- Mechanical part: K_v = 1.0e9 Pa, C_10 = 1.e6 (0.2 + 1.5e-3 (Θ - 273) + 0.35 ξ) Pa, C_01 = 1.e4 Pa
Chemical reactions in soft materials often involve matter (gas) diffusion, and the problem should therefore be considered as an open thermodynamical system; this effect is, however, neglected in this work.
Time derivative of a quantity holding the initial position constant: ρ̇ = (∂ρ(x, t)/∂t)_X.
The proposed thermodynamical framework can easily be extended to more complex reacting systems: one has to define at least the same number of internal variables as the number of reactions.
⟨f⟩ = f if f > 0 and ⟨f⟩ = 0 if f ≤ 0.
A negative chemical evolution (reversion) could be thermodynamically admissible; however, the authors believe that supplementary mechanisms (reactions or damage) leading to reversion have to be introduced for consistency.
"3866",
"930401"
] | [
"420299",
"391104",
"391104",
"1303",
"447427"
] |
01769783 | en | [
"spi"
H Yamaguchi
M T T Ho
Y Matsuda
T Niimi
Irina Martin Graur
Conductive heat transfer in a gas confined between two concentric spheres: From
Keywords: heat transfer, vacuum, Knudsen number, kinetic model, concentric spheres, thermal accommodation coefficient
Introduction
The heat transfer through a rarefied gas confined between two surfaces at different temperatures is a fundamental issue which has been studied for a long time [START_REF] Goodman | Dynamics of Gas-Surface scattering[END_REF][START_REF] Saxena | Thermal Accommodation and Adsorption Coecients of Gases[END_REF]. In parallel to the study of its theoretical aspects, such as its dependence on the gas nature, composition and pressure, various practical applications were found, for example the Pirani sensor, whose principle of operation relies on the dependence of the heat flux on pressure. Despite this long history, several aspects of the heat transfer between two surfaces remain poorly known. One of these problems lies in the interaction of the gas molecules with the surfaces, which significantly affects the intensity of the heat transfer between them. This interaction may be described in terms of a thermal accommodation coefficient. The thermal or energy accommodation coefficient α is defined as [START_REF] Goodman | Dynamics of Gas-Surface scattering[END_REF][START_REF] Saxena | Thermal Accommodation and Adsorption Coecients of Gases[END_REF],
\[ \alpha = \frac{E_i - E_r}{E_i - E_s}, \tag{1} \]
where E is the mean energy of the molecules colliding with a surface; the subscripts i and r correspond to the incident and reflected molecules, respectively, and the subscript s corresponds to molecules fully accommodated to the surface.
This coefficient is useful for the analysis and the management of heat transfer in micro- and nano-devices, where the gas flow should be treated as rarefied even at atmospheric operating pressure, because of the small characteristic dimensions. Additionally, the surface-to-volume ratio of the fluid in micro- and nano-devices becomes much larger than in conventional devices, so the gas-surface interaction plays an essential role.
In this study, we focus on the heat transfer problem between two concentric spheres. This geometry was employed in a novel measurement system for the thermal accommodation coefficient, which characterizes the mean energy transfer through the gas-surface interaction [START_REF] Yamaguchi | Measurement of thermal accommodation coecients using a simplied system in a concentric sphere shells conguration[END_REF]. This measurement system is based on the low-pressure method, first introduced by Knudsen and later employed by many researchers [START_REF] Goodman | Dynamics of Gas-Surface scattering[END_REF]. The method exploits a particular property of the heat transfer at low pressure: the heat transfer between two surfaces maintained at different temperatures is proportional to the pressure between them and to the thermal accommodation coefficient. Therefore, the heat flux through the rarefied gas confined in the concentric spherical shells is measured as a function of pressure, and the thermal accommodation coefficient is then extracted. This novel measurement system is able to measure the thermal accommodation coefficient on non-metal surfaces, which is rarely reported in the literature owing to the specificity of the measurement methods [START_REF] Saxena | Thermal Accommodation and Adsorption Coecients of Gases[END_REF][START_REF] Yamaguchi | Investigation on heat transfer between two coaxial cylinders for measurement of thermal accommodation coecient[END_REF], and which is especially important for micro- and nano-devices because of the materials employed.
Recently, the heat transfer in the two concentric spheres system was also simulated [START_REF] Ho | Heat transfer through rareed gas conned between two concentric spheres[END_REF] on the basis of the nonlinear S-model kinetic equation [START_REF] Shakhov | Generalization of the Krook kinetic relaxation equation[END_REF]. In the free-molecular, slip and continuum flow regimes, analytical expressions are provided for arbitrary temperature and radius ratios of the spheres. In the transitional regime, the S-model kinetic equation is solved numerically. The limits of applicability of the obtained analytical expressions and of the previous empirical relation are then established by confronting the numerical and analytical solutions.
The objective of this study is to evaluate the accuracy of the empirical heat flux expression between the free-molecular and continuum flow regimes, and to revise the expression so that it is valid over the whole range of flow regimes. The analytical heat flux expressions in the free-molecular and continuum flow regimes and the numerical data in the transitional flow regime, obtained in Ref. [START_REF] Ho | Heat transfer through rareed gas conned between two concentric spheres[END_REF], are used to derive this revised heat flux expression. Then, based on the measurements of the heat flux and the pressure between two spherical shells provided in Ref. [3], the values of the thermal accommodation coefficients are derived using the proposed revised expression of the heat flux. Finally, numerical simulations are carried out using the S-model kinetic equation with the obtained thermal accommodation coefficients, and the measured and simulated heat fluxes are compared to validate the revised expression.
Analysis of Experimental Heat Flux Data
The experimental data on the heat flux reported in Ref. [START_REF] Yamaguchi | Measurement of thermal accommodation coecients using a simplied system in a concentric sphere shells conguration[END_REF] are analyzed in the following. The two concentric spherical shells configuration was chosen as the experimental setup to measure the thermal accommodation coefficient on non-metal surfaces. A tiny heater, maintained at temperature T_H by an analog electrical circuit, is fixed at the center of a spherical vacuum chamber.
The tiny heater has a thin flat-plate shape, and the test sample surfaces are attached to the heater. The surface of the vacuum chamber, made of Pyrex, is maintained at the temperature T_C by immersing the chamber in a water bath. The inner radius of the chamber is R_C = 49.5 mm. The heat flux from the tiny sample to the chamber surface is measured together with the pressure in the chamber. The thermal accommodation coefficient is then derived using the expression relating the heat flux between the surfaces to the pressure.
The configuration of the experimental setup is explained in detail in Ref. [START_REF] Yamaguchi | Measurement of thermal accommodation coecients using a simplied system in a concentric sphere shells conguration[END_REF].
The heat flux from a heated surface surrounded by a monatomic gas in the free-molecular flow regime is expressed as
\[ q_{FM} = \frac{\alpha}{2}\,\frac{\bar v}{T}\, p\,(T_H - T_C), \qquad \bar v = \sqrt{\frac{8kT}{\pi m}}, \tag{2} \]
where T, p and \(\bar v\) are the temperature, pressure and mean molecular speed of the gas, respectively; k is the Boltzmann constant and m is the molecular mass of the gas. It is important to underline that the heat flux is proportional to the pressure of the gas and, therefore, to the gas number density. The heat flux between two surfaces maintained at different temperatures in the free-molecular flow regime, Eq. (2), is independent of the geometrical configuration of the system, see Refs. [START_REF] Yamaguchi | Investigation on heat transfer between two coaxial cylinders for measurement of thermal accommodation coecient[END_REF][START_REF] Trott | An experimental assembly for precise measurement of thermal accommodation coecients[END_REF][START_REF] Ho | Heat transfer through rareed gas conned between two concentric spheres[END_REF]. The accommodation coefficient α can be obtained by fitting the heat fluxes measured as a function of pressure with Eq. (2).
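In practice, the extraction in this regime amounts to a linear fit of the measured heat flux against the pressure; the short sketch below illustrates the procedure on synthetic data (gas properties, noise level and the assumed accommodation coefficient are illustrative, and Eq. (2) is evaluated at the gas temperature T_C).

```python
import numpy as np

kB = 1.380649e-23                    # Boltzmann constant (J/K)

def q_fm(p, alpha, T_C, T_H, mass):
    """Free-molecular heat flux of Eq. (2), evaluated at the gas temperature T_C."""
    vbar = np.sqrt(8.0 * kB * T_C / (np.pi * mass))
    return 0.5 * alpha * vbar / T_C * p * (T_H - T_C)

# Synthetic 'measurement' for Ar; illustrative only
m_Ar, T_C, T_H, alpha_true = 39.95 * 1.66054e-27, 293.0, 364.0, 0.87
p = np.linspace(0.1, 1.6, 12)        # Pa
rng = np.random.default_rng(0)
q_meas = q_fm(p, alpha_true, T_C, T_H, m_Ar) * (1.0 + 0.02 * rng.standard_normal(p.size))

slope = np.sum(p * q_meas) / np.sum(p * p)            # least-squares slope of q vs p
vbar = np.sqrt(8.0 * kB * T_C / (np.pi * m_Ar))
alpha_fit = 2.0 * slope * T_C / (vbar * (T_H - T_C))  # invert Eq. (2)
print(f"alpha recovered from the fit: {alpha_fit:.3f}")
```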
However, it is difficult to maintain the free-molecular flow regime (i.e., a low-pressure environment) in a simple apparatus owing to leakage. An alternative is to build a high-vacuum experimental setup; in this case, however, the measurement system becomes huge and costly. In addition, it is also not easy to measure a very small heat flux in a low-pressure environment. To realize the measurement in a low-cost, simple apparatus, a much higher pressure condition is favorable. A more general model for the heat flux through a gas from a heated surface then has to be implemented to extract the accommodation coefficient under this condition extended to higher pressure. For the whole range of flow regimes, i.e. from the free-molecular to the continuum flow regime, the expression of the heat flux q can be approximated by a simple empirical interpolation between the free-molecular limit heat flux q_FM, Eq. (2), and the continuum limit heat flux q_C, as was done in Refs. [START_REF] Yamaguchi | Investigation on heat transfer between two coaxial cylinders for measurement of thermal accommodation coecient[END_REF][START_REF] Trott | An experimental assembly for precise measurement of thermal accommodation coecients[END_REF][START_REF] Springer | Heat transfer in rareed gases[END_REF][START_REF] Sherman | A survey of experimental results and methods for the transitional regime of rareed gas dynamics[END_REF], and it is expressed as
\[ \frac{1}{q} = \frac{1}{q_{FM}} + \frac{1}{q_C}. \tag{3} \]
In the continuum limit, the heat flux q_C is described by Fourier's law; it is independent of pressure and depends on the flow geometry. By making the size of the internal heated surface relatively small compared to the external surface of the vacuum chamber, the heat transfer problem is approximated by a simple spatially symmetric heat transfer between two concentric spherical shells, even though the shape of this internal heated surface is not a sphere but a flat plate, as mentioned above [START_REF] Yamaguchi | Measurement of thermal accommodation coecients using a simplied system in a concentric sphere shells conguration[END_REF]. Following this model of two spherical shells, we can calculate the radius of the internal virtual sphere from the equality of surface areas: the internal virtual sphere of radius R_H has the same surface area as the real heated surface. From this equality, the radius of the virtual internal sphere is estimated as R_H = 4.95 mm. Thus, the radius ratio of the concentric spherical shells, R = R_C/R_H, is equal to 10, which is relatively large. Assuming the concentric spherical shells geometry, the theoretical heat flux in the continuum limit q_C in Eq. (3) is expressed as
\[ q_C = \kappa\,(T_H - T_C)\,\frac{R_C R_H}{R_C - R_H}\,\frac{1}{R_H^2}, \tag{4} \]
where κ is the thermal conductivity of the gas. The temperature inside the spherical vacuum chamber is assumed to be equal to the temperature of the surface of the external spherical shell, T_C [START_REF] Yamaguchi | Measurement of thermal accommodation coecients using a simplied system in a concentric sphere shells conguration[END_REF][START_REF] Yamaguchi | Investigation on heat transfer between two coaxial cylinders for measurement of thermal accommodation coecient[END_REF]. Therefore, in this analysis, the temperature dependence of the thermal conductivity κ along the radial direction is not taken into account, and the thermal conductivity at the temperature of the external spherical shell, κ = κ(T_C), is used for the entire region inside the vacuum chamber. In addition, the pressure is assumed to be constant between the shells.
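The interpolation (3), combined with Eqs. (2) and (4), is straightforward to evaluate; a sketch is given below for the experimental geometry (R_H = 4.95 mm, R_C = 49.5 mm). The thermal conductivity value is an approximate room-temperature figure for Helium and is given for illustration only.

```python
import numpy as np

kB = 1.380649e-23
R_H, R_C = 4.95e-3, 49.5e-3                      # m

def q_total(p, alpha, T_C, T_H, mass, kappa):
    """Empirical interpolation of Eq. (3) between Eqs. (2) and (4)."""
    vbar = np.sqrt(8.0 * kB * T_C / (np.pi * mass))
    q_fm = 0.5 * alpha * vbar / T_C * p * (T_H - T_C)               # Eq. (2)
    q_c = kappa * (T_H - T_C) * R_C * R_H / (R_C - R_H) / R_H**2    # Eq. (4)
    return 1.0 / (1.0 / q_fm + 1.0 / q_c)

m_He = 4.0026 * 1.66054e-27
for p in (0.1, 1.0, 10.0, 100.0):                 # Pa
    print(p, "Pa ->", round(q_total(p, 0.3, 294.0, 364.0, m_He, 0.15), 1), "W/m^2")
```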
To minimize the error which can come from the use of the empirical relation, Eq. (3), the pressure is limited to below 1.6 Pa in the experiment, so that the measurement is realized in the near free-molecular regime.
In order to test this newly developed experimental setup, a platinum sample is used first. A platinum foil with a thickness of 10 µm (Nilaco) is selected as the sample surface. Five values of the hot sphere surface temperature, T_H, are set in the experiments, see Table 1. The cold sphere temperature, T_C, is almost the same for all five cases and equal to room temperature. A number is attributed to each hot temperature value to simplify the reference. The averaged accommodation coefficients, extracted by the procedure described above from three measurements of the heat flux and pressure for each case, are provided in Table 1. The surface temperatures were slightly different for each measurement, and the mean surface temperatures are listed, with any variation from the mean value less than 0.2 K. The relative standard errors of the accommodation coefficient did not exceed 1.6%, showing good repeatability of the measurements.
However, it was not simple to estimate the measurement accuracy of the system [START_REF] Yamaguchi | Measurement of thermal accommodation coecients using a simplied system in a concentric sphere shells conguration[END_REF], and the number of significant digits was decided from the size of the above-mentioned relative standard error.
From Table 1, the accommodation coefficients exceed unity in some cases. This could result from the several assumptions mentioned above that are made in the extraction procedure. Therefore, a more accurate expression for the heat flux is important to improve the existing methodology of accommodation coefficient extraction.
Analytical Solution and Numerical Simulation
In this section, we present detailed analytical and numerical analyses of the heat flux problem between two concentric spherical shells in all flow regimes, following Ref. [START_REF] Ho | Heat transfer through rareed gas conned between two concentric spheres[END_REF]. The assumptions used for the derivation of all analytical expressions are provided. These analyses allow the error made when using the analytical expressions for a given set of physical parameters to be estimated, and they also allow the accuracy of the accommodation coefficient extraction procedure to be improved.
The heat transfer between two spherical shells of arbitrary radii R_H and R_C, for the internal and external shells, respectively, is analyzed for various temperatures T_H and T_C of the shells' surfaces. A detailed description of the developed approaches can be found in Ref. [START_REF] Ho | Heat transfer through rareed gas conned between two concentric spheres[END_REF]. We provide here only the part of the results indispensable for the accommodation coefficient extraction procedure. To characterize the level of gas rarefaction, it is convenient to introduce the rarefaction parameter as follows
\[ \delta_0 = \frac{R_0}{\ell}, \qquad \ell = \frac{\mu_0 v_0}{p_0}, \qquad v_0 = \sqrt{\frac{2kT_0}{m}}. \tag{5} \]
Here R_0 is the reference length of the problem, ℓ is the equivalent mean free path, p_0 is the reference pressure, p_0 = n_av k T_0, n_av is the number density averaged over the physical space, n_av = 3/(R_C³ - R_H³) ∫ n(r) r² dr, r is the radial coordinate of the physical region between the concentric spheres with origin at their common center, n(r) is the gas number density, which depends on the r coordinate only, and μ_0 and v_0 are the gas viscosity and the most probable molecular velocity at the reference temperature T_0, respectively. For convenience, the reference values in Eq. (5) are taken as follows
\[ T_0 = T_C, \qquad R_0 = R_C - R_H. \tag{6} \]
The definition of the rarefaction parameter allows us to choose the appropriate modeling as a function of the value of δ_0. The cases δ_0 = 0 and δ_0 → ∞ correspond to the free-molecular flow and continuum limits, respectively. Note that the gas rarefaction parameter δ_0 in Eq. (5) is inversely proportional to the Knudsen number.
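As an illustration, the rarefaction parameter of Eq. (5) can be evaluated for the experimental geometry; the viscosities used below are approximate room-temperature literature values and serve only to indicate orders of magnitude.

```python
import numpy as np

kB, amu = 1.380649e-23, 1.66054e-27
R0, T0 = 49.5e-3 - 4.95e-3, 294.0                # R_C - R_H and T_C, Eq. (6)

# gas: (molecular mass, approximate viscosity near 294 K in Pa s) -- illustrative values
gases = {"He": (4.0026 * amu, 1.96e-5),
         "Ar": (39.948 * amu, 2.23e-5),
         "Xe": (131.29 * amu, 2.28e-5)}

p0 = 1.6                                         # Pa, upper bound of the measured range
for name, (m, mu0) in gases.items():
    v0 = np.sqrt(2.0 * kB * T0 / m)
    ell = mu0 * v0 / p0                          # equivalent mean free path
    print(f"{name}: delta_0 = {R0 / ell:.1f} at p = {p0} Pa")    # Eq. (5)
```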
Free-molecular flow regime
In the free-molecular flow limit (δ_0 → 0), the Boltzmann equation (or the model kinetic equations) [START_REF] Cercignani | Mathematical methods in kinetic theory[END_REF] can be solved analytically for arbitrary temperature and radius ratios, because the collision term in its right-hand side can be neglected in this case. We assume here complete accommodation of the molecules on the external sphere surface, α_C = 1, following the discussion of the experimental analysis [START_REF] Yamaguchi | Investigation on heat transfer between two coaxial cylinders for measurement of thermal accommodation coecient[END_REF][START_REF] Yamaguchi | Measurement of thermal accommodation coecients using a simplied system in a concentric sphere shells conguration[END_REF]; on the internal sphere surface, Maxwell-type diffuse-specular reflection with α_H = α is assumed. The heat flux at any point between the spheres reads
\[ q_{FM}(r) = \frac{\alpha}{2}\, p_0 \bar v\,(T - 1)\, K_{FM}\, \frac{R_H^2}{r^2}, \qquad K_{FM} = \left[ 1 + \frac{\alpha}{2}\left(\sqrt{T^{-1}} - 1\right)\left(1 - \frac{(R+1)\sqrt{R^2-1}}{R^2+R+1}\right) \right]^{-1}, \tag{7} \]
where T is the temperature ratio, T = T_H/T_C. It is interesting to note that the expression of the heat flux depends not only on the temperature ratio T, but also on the radius ratio R.
Transitional regime
Contrary to the free-molecular flow regime, where an analytical solution of the kinetic equation was obtained, only a numerical solution is possible in the transitional regime. Therefore, in this regime the S-model kinetic equation [START_REF] Shakhov | Generalization of the Krook kinetic relaxation equation[END_REF] was solved numerically for various values of the rarefaction parameter, shells' temperature ratio and radius ratio. The details can be found in [START_REF] Ho | Heat transfer through rareed gas conned between two concentric spheres[END_REF].
Continuum regime
In the continuum limit, the temperature variation between two concentric spheres may be obtained from the energy balance
\[ \frac{\partial}{\partial r}\left(r^2 \kappa \frac{\partial T}{\partial r}\right) = 0. \tag{8} \]
Note that the hypothesis of zero macroscopic gas velocity is used here, so that only conductive heat transfer is considered. Fourier's law can be applied to calculate the heat flux
\[ q = -\kappa \frac{dT}{dr}. \tag{9} \]
For the monatomic gases the gas thermal conductivity is related to the gas viscosity as follows
\[ \kappa = \frac{15}{4}\frac{k}{m}\mu. \tag{10} \]
In order to define the dependence of the viscosity on the temperature, the molecular interaction potential must be specified; we use the inverse power law potential [START_REF] Bird | Molecular Gas Dynamics and the Direct Simulation of Gas Flows[END_REF] in the following. This model leads to a power-law temperature dependence of the viscosity coefficient
\[ \mu = \mu_0 \left(\frac{T}{T_0}\right)^{\omega}, \tag{11} \]
where ω is the viscosity index, equal to 0.5 for the Hard Sphere model and to 1 for the Maxwell model. In the present analysis, the Variable Hard Sphere (VHS) model [START_REF] Bird | Molecular Gas Dynamics and the Direct Simulation of Gas Flows[END_REF] is used, in which the viscosity index varies with the gas nature; it is equal to 0.66, 0.81 and 0.85 for Helium, Argon and Xenon, respectively. Taking into account the relation between the thermal conductivity and the viscosity, Eq. (10), the thermal conductivity has the same temperature dependence as the viscosity.
In the continuum limit, the gas temperature in the vicinity of a wall is equal to the wall temperature, so Eqs. (8) and (9) are solved analytically for arbitrary temperature and radius ratios. The heat flux distribution takes the form:
\[ q_C(r) = \kappa(T_C)\, K_C\,(T_H - T_C)\,\frac{R_C R_H}{R_C - R_H}\,\frac{1}{r^2}, \qquad K_C = \frac{T^{\omega+1} - 1}{(\omega+1)(T-1)}. \tag{12} \]
It is worth noting that, contrary to expression (4), the temperature dependence of the heat conductivity is taken into account in this expression.
Table 2: Recalculated energy accommodation coefficients obtained with the complete heat transfer expression, Eq. (7), with the real radius ratio and taking into account the dependence of the K_FM coefficient on the energy accommodation coefficient. The relative difference from the originally obtained accommodation coefficients, provided in Table 1, is given in brackets.
Accuracy Evaluation of Experimental Analysis
In the experimental analysis, several approximations were made to extract the accommodation coefficient. However, they were not validated in detail because of the limitations of the experimental measurement in the simplified system.
We investigate here several approximations made during the experimental analysis in order to understand in detail the heat flux behavior between the two concentric spherical shells.
Free-molecular heat flux: effects of geometrical and physical parameters
The complete analytical expression of the free-molecular heat flux for arbitrary shells' temperature ratio T and radius ratio R is given by Eq. (7).
If the radius of the external sphere is large compared to that of the internal one (R → ∞), i.e. the case of the heat flux between a sphere and the surrounding gas, the coefficient K_FM in Eq. (7) tends to 1. In this case, Eq. (7) for r = R_H gives the same result as Eq. (2) with p = p_0.
We now evaluate the value of the coefficient K_FM in Eq. (7) for the real radius ratio R = 10 and for several sets of measured temperature ratios and accommodation coefficients given in Table 1. In the case of the smallest temperature ratio, T_min = 1.139 with He, the deviation of the K_FM coefficient from unity is of the order of 0.01%, while in the case of the largest temperature ratio, T_max = 1.543 with Xe, this deviation increases up to 0.15%. Therefore, the a posteriori estimated deviation of the K_FM coefficient from unity is relatively small, so the expression of Eq. (2) gives very accurate results for the radius ratio R = 10 used in the experimental setup.
However, the K_FM coefficient also depends on the accommodation coefficient, Eq. (7), and this might affect the fitting result. Therefore, the experimentally measured heat fluxes are re-analyzed using the complete analytical expression, Eq. (7). From Table 2, it is clearly shown that neither the assumption of an infinitely large radius ratio (R → ∞ when R > 10) nor the dependence of the K_FM coefficient on α affects the results. Therefore, the implementation of Eq. (2) in the experimental analysis of the free-molecular flow regime is completely justified, with a maximum error of less than 0.2%.
Continuum heat flux: effects of geometrical and physical parameters
Let us now analyze the analytical expression of the heat transfer in the continuum flow regime. This expression is part of the empirical fitting formula, Eq. (3), which is used for the extraction of the accommodation coefficient when the experimental pressure range is extended to higher pressures.
Table 3: Recalculated energy accommodation coefficients obtained with the empirical heat transfer expression using the continuum heat flux q_C calculated with Eq. (12). The relative difference from the originally obtained accommodation coefficients, provided in Table 1, is given in brackets.
Even though the heat transfer is measured at relatively low pressures, around the near free-molecular regime, the heat transfer in the continuum limit could affect the results through Eq. (3). Initially, Eq. (4) was used with a constant thermal conductivity, calculated at the temperature T_C, owing to the large surface-area ratio of the external to the internal spherical shells, R², associated with the large radius ratio R [START_REF] Yamaguchi | Measurement of thermal accommodation coecients using a simplied system in a concentric sphere shells conguration[END_REF][START_REF] Yamaguchi | Investigation on heat transfer between two coaxial cylinders for measurement of thermal accommodation coecient[END_REF]. In contrast, the complete analytical expression of the heat transfer in the continuum flow regime, Eq. (12), takes into account the temperature dependence of the heat transfer coefficient. Clearly, only the coefficient K_C makes the difference between the two expressions.
In the limit of a small temperature difference between the spheres' surfaces, T_H - T_C ≪ T_C or T → 1, one has K_C → 1 and expression (12) for r = R_H is equivalent to Eq. (4). In the cases of the small temperature ratio T = 1.139, see Table 1, the discrepancy coming from this K_C factor becomes 4.5%, 5.6% and 5.9% for He, Ar and Xe, respectively. However, for the largest temperature ratio between the surfaces and with Xe, it increases up to 22%. Thus, the approximations of the heat transfer in the continuum limit might be a large source of error in the experimental analysis.
The experimentally measured heat fluxes are re-analyzed using the analytical form, Eq. (12), instead of Eq. (4). The extracted accommodation coefficients are listed in Table 3. The discrepancies from the originally provided values, see Table 1, are also listed in brackets. It is clearly shown that the discrepancies are much larger for the conditions with large temperature differences, since the temperature distribution between the shells affects the results. The difference between the gas species results first from the difference in the thermal conductivity coefficient, which enters the q_C expression, Eq. (12). Then, for the same value of the pressure, the rarefaction parameters δ_0, Eq. (5), and thus the flow regimes, are different for the different gases: the lighter Helium is still in the free-molecular regime (smaller δ_0), whilst the heavier Xenon is already in the transitional regime. Although there is a large difference in the evaluation of the heat transfer in the continuum limit between the analytical form and the original expression used in the experiment, the effect on the accommodation coefficient is less than 5%. This is likely because the heat transfer was measured in a low-pressure environment, below 1.6 Pa. However, it is recommended to use the analytical form q_C(R_H) given by Eq. (12) instead of Eq. (4), since it is more accurate and easy to integrate into the experimental analysis.
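The size of this correction is easy to quantify: the factor K_C of Eq. (12) measures the deviation from the constant-conductivity expression (4). The sketch below reproduces the comparison for the VHS viscosity indices quoted above and the two extreme temperature ratios of Table 1.

```python
def K_C(T_ratio, omega):
    """Correction factor of Eq. (12); K_C -> 1 when T_ratio -> 1."""
    return (T_ratio ** (omega + 1.0) - 1.0) / ((omega + 1.0) * (T_ratio - 1.0))

omegas = {"He": 0.66, "Ar": 0.81, "Xe": 0.85}     # VHS viscosity indices
for name, omega in omegas.items():
    for T_ratio in (1.139, 1.543):
        dev = (K_C(T_ratio, omega) - 1.0) * 100.0
        print(f"{name}, T = {T_ratio}: K_C deviates from 1 by {dev:.1f}%")
```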
Revised expression for the heat flux interpolation
To express the heat flux in the transitional regime, the empirical expression, Eq. (3), is useful and known to give quite good results [START_REF] Yamaguchi | Investigation on heat transfer between two coaxial cylinders for measurement of thermal accommodation coecient[END_REF][START_REF] Yamaguchi | Measurement of thermal accommodation coecients using a simplied system in a concentric sphere shells conguration[END_REF][START_REF] Ho | Heat transfer through rareed gas conned between two concentric spheres[END_REF]. In order to evaluate the accuracy of this empirical expression in detail, the numerical solution of the S-model kinetic equation is evaluated with the Hard Sphere (HS) model for a wide range of parameters: two radius ratios, R = 2 and 10, two temperature ratios, T = 1.1 and 1.5, and three accommodation coefficients, α = 0.6, 0.8 and 1.0, i.e. 12 cases in total. Figure 1 shows the results of the numerical solution of the S-model kinetic equation (markers) and the empirical expression, Eq. (3), with q_FM and q_C calculated by Eq. (2) and Eq. (12), respectively (dotted lines). Note that the dimensionless heat flux q* = q/(p_0 v_0) is plotted. From Fig. 1, the empirical expression coincides with the numerical S-model results up to δ_0 ∼ 1. However, they start to deviate as the rarefaction parameter increases from δ_0 = 10⁻¹ up to 10², i.e. in the major part of the transitional regime up to the continuum flow regime. The heat flux from the empirical expression starts to decrease at smaller δ_0 than the numerical results.
To obtain a better fitting function reproducing the S-model solutions, the following expression, in dimensionless form, is proposed as a modification of the empirical expression, Eq. (3):
\[ \frac{1}{q^*} = \frac{1}{q^*_{FM}} + \frac{1}{\zeta q^*_C}, \qquad \zeta = \frac{1}{1 - \dfrac{c_1}{\delta_0 + c_2}}, \tag{13} \]
where the dimensionless heat fluxes in the free-molecular, q*_FM, and continuum, q*_C, limits are expressed in Eqs. (37) and (23) of Ref. [START_REF] Ho | Heat transfer through rareed gas conned between two concentric spheres[END_REF]. The factor ζ is introduced to improve the fitting quality in the transitional flow regime. The form of Eq. (13) is inspired by the expression of the dimensionless heat flux in the slip flow regime, q*_S, indicated as q_r(r) in Eqs. (14) and (17) of Ref. [START_REF] Ho | Heat transfer through rareed gas conned between two concentric spheres[END_REF]. In that paper, q*_S is obtained by integrating the energy balance equation with the temperature-jump boundary conditions. This heat flux in the slip flow regime, q*_S, was found to differ from q*_C by a factor ζ_s = 1/(1 + c/δ_0), where c is a constant defined by the temperature-jump coefficients, the temperatures and the radii of the shells. However, this factor is not suitable as a correction term to the empirical expression, because ζ_s vanishes in the free-molecular limit (δ_0 → 0), resulting in a finite limit value of 1/(ζ_s q*_C), since q*_C is proportional to 1/δ_0. A constant is therefore added to δ_0 in the ζ factor to avoid this problem, so that 1/(ζ q*_C) now vanishes in the free-molecular limit. The correction factor ζ goes to unity in the continuum limit (δ_0 → ∞).
Here, the coefficients c_1 and c_2 are adjusted to reproduce the dimensionless (q*) S-model results for the 12 different cases, and least-square fits of the dimensionless heat flux suggest c_1 = 1.04αT/R and c_2 = 1.97αT/R. The fits obtained with this revised function are plotted as solid lines in Fig. 1. The agreement between the S-model solutions and the revised fitting function is clearly excellent over the whole range of the rarefaction parameter. It is also important to note that the revised fitting function reproduces the numerical solution very well over a wide range of parameters, 1.1 ≤ T ≤ 1.5 and 2 ≤ R ≤ 10.
Table 4: To check the effect of the heat flux expression for the transitional regime on the accommodation coefficient extraction procedure, the accommodation coefficients are extracted from the heat flux of the S-model solutions by using the empirical expression, Eq. (3), and the revised fitting function, Eq. (13).
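A direct transcription of the revised function, in the dimensionless form used for the fits, is given below; the limiting heat fluxes are passed in as inputs, and the demonstration values in the loop are placeholders.

```python
def q_star_revised(delta0, q_fm_star, q_c_star, alpha, T_ratio, R_ratio):
    """Revised interpolation of Eq. (13) with c1 and c2 from the least-square fits."""
    c1 = 1.04 * alpha * T_ratio / R_ratio
    c2 = 1.97 * alpha * T_ratio / R_ratio
    zeta = 1.0 / (1.0 - c1 / (delta0 + c2))
    return 1.0 / (1.0 / q_fm_star + 1.0 / (zeta * q_c_star))

# Illustrative check of the limiting behaviour (placeholder limit values)
for d0 in (0.01, 0.1, 1.0, 10.0, 100.0):
    qfm, qc = 0.05, 5.0 / d0                      # q*_C is proportional to 1/delta_0
    print(d0, q_star_revised(d0, qfm, qc, alpha=1.0, T_ratio=1.5, R_ratio=10.0))
```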
To validate the fitting procedure used to obtain the accommodation coefficient with the revised function, the accommodation coefficient is extracted from the S-model numerical solutions by the same procedure as that used for the experimental data. The extracted value of the accommodation coefficient should be the same as the one used initially in the numerical simulation, α_S, if the extraction procedure is accurate enough. The revised function, Eq. (13), and the original empirical expression, Eq. (3), are employed as fitting functions and the extracted accommodation coefficients are compared. The data are fitted by the least-squares method in dimensionless form, q* vs. δ_0. The extracted accommodation coefficients are listed in Table 4. The accommodation coefficients used in the numerical simulations are denoted α_S in the table. As expected from the discrepancies between the S-model solutions and the empirical expression in Fig. 1, the accommodation coefficients extracted with the empirical expression cannot reproduce the values originally used in the numerical simulation, α_S. Compared to the empirical expression, the revised function reproduces the accommodation coefficients quite well. Even in the worst case, where both R and T are small, the deviation is less than 1%. Therefore, using the empirical expression, Eq. (3), for the accommodation coefficient extraction procedure in the transitional regime is not highly accurate.
The revised fitting function, Eq. (13), is employed to extract the accommodation coefficients from the experimental results. In the accommodation coefficient extraction procedure, the experimentally measured heat flux is fitted in dimensional form. This is essential to avoid the effect of the measurement error at low pressure [START_REF] Yamaguchi | Investigation on heat transfer between two coaxial cylinders for measurement of thermal accommodation coecient[END_REF]. In the dimensional form of Eq. (13), the dimensional heat fluxes q_FM and q_C, instead of the dimensionless q*_FM and q*_C, are expressed using the dimensional forms derived from Eq. (7) in the free-molecular flow limit and Eq. (12) in the continuum limit. The rarefaction parameter δ = (pR_0)/(μ_0 v_0), based on the experimentally measured pressure p, is employed instead of δ_0. The accommodation coefficients re-calculated with the revised fitting function are listed in Table 5. Compared with the results in Table 3, it is clearly shown that the function used to express the heat flux in the transitional regime is as important as the consideration of the temperature distribution in the continuum limit discussed in Section 4.2.
! " (((((((((( ((((((((( a: !;((( (((((((((( ((((((((( a: !"((( (((((((((( ((((((((( (a) (((((((((( ((((((((( a: !;((( (((((((((( ((((((((( a: !"((( (((((((((( ((((((((( (c Table 5: Re-extracted energy accommodation coecient by using the revised tting function, Eq. ( 13), by using the more accurate expressions of the heat transfer in the free-molecular ow regime, Eq. ( 7), and in continuum ow regime, Eq. ( 12). The relative dierence from the originally obtained accommodation coecients, provided in Table 1, is given in the brackets.
! # ! $ ! % ! & ! ' ! ! ! ' !' ' ' '
T = 1.1, R = 2 !" !#$ !# !%$ !% ! $ ! ! ! % !% % % %
Comparison between measured and simulated heat flux
To validate the expressions of the heat flux between the concentric spherical shells, the energy accommodation coefficients are re-extracted from the experimental data and the heat fluxes are numerically evaluated using the re-extracted results. The comparison between the measured heat flux as a function of pressure and the numerical results for the three gases He, Ar and Xe and the five cases of surface temperatures is shown in Fig. 2. The numerical simulations are carried out using the S-model kinetic equation. Two different sets of the accommodation coefficient are employed: the original values extracted from the measurements, listed in Table 1, and the corrected values, listed in Table 5, extracted using the improvements explained in the previous sections.
In Fig. 2 (a), (c) and (e), the dimensionless curves of the measured and simulated heat flux, q* = q/(p_0 v_0), are presented as a function of the pressure via the rarefaction parameter δ_0. As is clear from (a), the measured pressure range is restricted to the near free-molecular flow regime, where the dimensionless heat flux takes a constant value which depends only on the accommodation coefficient. However, in (c) and (e), the curves of the heat flux start to decrease, which means that the flow is in the transitional, or even in the slip flow regime. From these figures, it is clear that the experimental data are scattered around the lines, especially in the near free-molecular flow regime, owing to the measurement difficulty already explained in Section 4. The corrected accommodation coefficients are also compared with those in the literature [START_REF] Trott | An experimental assembly for precise measurement of thermal accommodation coecients[END_REF][START_REF] Amdur | Thermal accommodation coecients on gascovered tungsten, nickel and platinum[END_REF][START_REF] Thomas | The accommodation coecients of he, ne, a, h2, d2, o2, co2, and hg on platinum as a function of temperature[END_REF] for validation in Fig. 2. Considering that the difference in surface conditions is not known, the corrected values are in good agreement.
Conclusion
The measurement technique, called the low-pressure method, was analyzed by comparing the expressions usually used by this technique with the analytical expressions and the numerical solution of the S-model kinetic equation. In the free-molecular flow limit, it is confirmed that the original expression coincides with the analytical form, and it is reasonable to assume the radius ratio to be infinite when it is about 10. In the continuum limit, it is important to consider the temperature dependence of the thermal conductivity coefficient. In the transitional regime, there is a small discrepancy between the empirical
(f) Xe, dimensional
Figure 2: The heat flux as a function of pressure: comparisons between the experimental and numerical data in dimensionless and dimensional forms for the five cases. The numerical results obtained with the originally extracted accommodation coefficients, listed in Table 1, and with the corrected values, listed in Table 5, are compared.
(Figure 1 panels: T = 1.1; T = 1.5, R = 10.)
Figure 1: The dimensionless heat fluxes q* are plotted as a function of the rarefaction parameter δ_0. The S-model solutions, shown by markers, are compared in all flow regimes with the empirical expression (dotted lines) and the revised fitting function (solid lines). The results for the accommodation coefficient α = 1.0 are plotted in black, those for α = 0.8 in red and those for α = 0.6 in blue.
The experimental data seem to converge in the region where the lines curve. The comparison between the solid and dotted lines indicates that the corrected accommodation coefficients give better agreement with the experimental results. In Figs. 2 (b), (d) and (f), the dimensional quantities are plotted. It is clearly shown that the agreement improves from the original to the corrected accommodation coefficients over the whole pressure range. Therefore, it is confirmed that the revised procedure is well suited for the extraction of the accommodation coefficient. It is also important to note that, even though the flat plate-shaped heater is treated as a sphere in the experiment, as explained in Section 2, the experimental heat flux is well reproduced by the numerical solutions of the S-model kinetic equation, indicating the validity of this assumption.
Table 1: Experimental temperature conditions and mean energy accommodation coefficients based on measurements.
Case            1      2      3      4      5
He  T_H (K)   335    364    395    424    453
    T_C (K)   294    294    294    294    294
    α       0.280  0.292  0.308  0.322  0.338
Ar  T_H (K)   335    364    394    424    453
    T_C (K)   293    293    293    293    294
    α       0.850  0.867  0.856  0.864  0.886
Xe  T_H (K)   335    364    394    424    453
    T_C (K)   294    294    294    294    294
    α       1.024  1.045  1.066  1.053  1.065
Acknowledgments
This work was partially supported by JSPS KAKENHI Grant Number 16K14157. This work was also granted access to the HPC resources of Aix-Marseille Université financed by the project Equip@Meso (ANR-10-EQPX-29-01) of the program Investissements d'Avenir supervised by the Agence Nationale pour la Recherche. The authors would like to thank Mr. Takamasa Imai, Mr. Tadashi Iwai and Mr. Akira Kondo for their support on the measurements.
"1031058"
] | [
"472208",
"949",
"472208",
"472208",
"949"
] |
01586866 | en | [
"spi"
K D Ahose
S Lejeunes
D Eyheramendy
On the thermal aging of a filled butadiene rubber
In this study we investigate the influence of thermal aging on the mechanical properties of a butadiene rubber filled with carbon black. To emphasize the influence of both the crosslink density and the crosslink lengths, we consider three different materials based on the same formulation. Dynamic and quasi-static characterizations are performed periodically at room temperature to study the impact of aging on various mechanical characteristics such as the equilibrium hysteresis, the Payne effect, etc. The crosslink density is followed by swelling tests.
INTRODUCTION
The impact of thermal aging phenomena on physical properties such as mechanical stiffness, tensile strength limit, loss angle, shore hardness, has been studied by many authors, see for instance [START_REF] Tomer | Cross-linking assessment after accelerated ageing of ethylene propylene diene monomer rubber[END_REF][START_REF] Shabani | Thermal and radiochemical of neat and ATH filled EPDM : establishment of structure/properties relationships[END_REF][START_REF] Kumar | Ageing of elastomers: a molecular approach based on rheological characterization[END_REF], Kartout 2016, Ben Hassine 2013 and references therein. Obviously, aging phenomena are material dependent and different issues arrise depending on monomers or vulcanizate agents (see [START_REF] Choi | Accelerated thermal aging behaviors of epdm and nbr vulcanizates[END_REF] for a comparison of EPDM and NBR). These phenomena are also strongly coupled with the environment, chemical reactions can occur with air (oxidation), salt water (see for instance [START_REF] Gac | Ageing mechanism and mechanical degradation behaviour of polychloroprene rubber in a marine environment: Comparison of accelerated ageing and long term exposure[END_REF][START_REF] Rabanizada | Experimental investigation of the dynamic mechanical behaviour of chemically aged elastomers[END_REF], etc. Furthermore, mechanical state during thermal aging may also have an influence on aging phenomena (see for instance [START_REF] Ciutacu | Accelerated thermal ageing of ethylene-propylene rubber in pressurized oxygen[END_REF]. The chemo-physical evolutions involved during aging may lead both to the formation of new crosslinks and to the dissociation of existing ones. For sulfur vulcanized system a commonly admitted mechanism is the degradation of poly-sulfur crosslinks into di-sulfur of mono-sulfur ones. Free sulfurs can eventually migrate and form new crosslinks. This phenomenon is of interest for mechanicians as it can occur during high-cycles fatigue in particular when the heat-build is significant or when the thermal environment plays an important role.
In this study we propose to investigate the consequences of thermal aging on a butadiene rubber reinforced with carbon black fillers and vulcanized with sulfur. To study the influence of the poly-sulfur crosslinks, we consider a single rubber formulation cured under different conditions (temperature and time of cure). This leads to different crosslink networks and different crosslink densities. We then carry out mechanical characterizations with quasi-static and dynamic tests on unaged and aged specimens.
EXPERIMENTAL SETUP
Materials
The material is a polybutadiene rubber reinforced with carbon black and vulcanized with sulfur. For confidentiality reasons, the formulation of this rubber cannot be detailed here. The vulcanization system is efficient (the sulfur-to-accelerator ratio is greater than one) and the filler mass fraction is about 45%. This system was cured at three different temperatures: 130 °C, 150 °C and 170 °C. The time of cure for each temperature was previously determined with the help of a rheometer: the time of cure was taken as the time at which the measured torque reached 98% of the maximum observed at each temperature. We obtained the following times of cure: 170 °C - 7 min, 150 °C - 17 min, 130 °C - 55 min. To simplify, in the following each time/temperature couple is referred to as a different material even if the initial composition is the same. Each material can be considered as fully vulcanized and differs from the others in its network structure: for a high curing temperature, we have longer crosslinks (poly-sulfurs) and a smaller crosslink density than for lower temperatures of cure.
Thermal aging
Thermal aging of H2 tensile specimens was carried out in an oven. Each sample was previously put inside an individual plastic bag from which air was partially removed with a vacuum pump. The objective was to minimize thermo-oxidative phenomena and to limit non-homogeneous aging. Some specimens, two per material, were periodically removed from the oven for mechanical and chemical characterizations at room temperature.
Mechanical characterizations
Mechanical characterizations on tensile specimens consisted of the following sequence: a softening phase to eliminate the Mullins effect, relaxations by steps (relaxations at different increasing and decreasing amplitudes), and cyclic sinusoidal tests with two static amplitudes and three dynamic amplitudes (frequencies of 0.1 Hz, 3 Hz, 6 Hz, 10 Hz). These tests were all done at room temperature.
UNAGED BEHAVIOR
Chemical and thermal characterizations
We carried out swelling tests on small specimens that were put in xylene. Figure 1 shows the evolution of the relative swelling ratio in mass as a function of the time spent in the solvent. We define the relative swelling ratio in mass from:
$q = \frac{m_s - m_0}{m_0}$   (1)
where $m_s$ is the mass of the specimen after swelling and $m_0$ is the initial mass. Swelling tests are good indicators of the crosslink density of the vulcanized network. The crosslink density could be estimated using the equation proposed by [START_REF] Kraus | Swelling of filler-reinforced vulcanizates[END_REF] for filled rubber; however, we did not have access to the volume fraction of unfilled rubber in the swollen rubber phase. Nevertheless, it can be seen from the works of Kraus or Flory-Rehner [START_REF] Flory | Statistical mechanics of crosslinked polymer networks ii. swelling[END_REF] that the crosslink density is related to the inverse of the previously defined swelling ratio. We can therefore admit that the crosslink density is higher for the material cured at 130 °C - 55 min than for those cured at 150 °C - 17 min and 170 °C - 7 min. We also carried out thermal characterizations with a DSC and optical dilatometry with DIC. The glass-transition temperature was measured at -82 °C for each material, and the thermal dilatation behavior was also very close for all materials.
Mechanical behavior
Figures 2 and 3 show some results obtained on unaged samples. It can be seen that, in accordance with the statistical theory of rubber networks, the higher the crosslink density, the higher the stiffness. In the constitutive model, p denotes the hydrostatic pressure. For the equilibrium part, we adopt a Mooney-Rivlin potential defined as:
$\rho_0 \psi_{eq} = s\,C_{10}(I_1 - 3) + s\,C_{01}(I_2 - 3)$   (3)
where $\rho_0$ is the reference density and $I_1$, $I_2$ are the first two strain invariants. The material coefficients $C_{10}$, $C_{01}$ are classical Mooney parameters (identical for each material) and $s$ is a parameter that is fixed to 1 for the material 130-55 and identified for the other materials. The identification of $C_{10}$, $C_{01}$, $s_{150}$ and $s_{170}$ is done by minimizing the least-squares distance from the end-of-relaxation points (extracted from Figure 3) to the predicted ones, for the three materials simultaneously. We obtain the material parameters given in Table 1 (see also Figure 5).
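As an illustration of this identification step, the short sketch below (not taken from the original study) fits C_10, C_01, s_150 and s_170 by least squares to placeholder end-of-relaxation stress/stretch points, using the standard incompressible Mooney-Rivlin uniaxial nominal stress 2(λ - λ^-2)(C_10 + C_01/λ) scaled by s; the data arrays and their values are assumptions for illustration only.

```python
# Minimal sketch of the least-squares identification of the equilibrium parameters.
import numpy as np
from scipy.optimize import least_squares

def mooney_uniaxial(lam, C10, C01, s=1.0):
    """Nominal stress of an incompressible Mooney-Rivlin solid in uniaxial
    tension, scaled by the relative crosslink density s."""
    return 2.0 * s * (lam - lam**-2) * (C10 + C01 / lam)

# Hypothetical end-of-relaxation points (stretch, nominal stress in MPa)
lam = np.array([1.1, 1.3, 1.5, 1.8, 2.0])
P_130 = np.array([0.12, 0.33, 0.52, 0.78, 0.95])   # material 130-55
P_150 = 0.95 * P_130                                # material 150-17 (illustrative)
P_170 = 0.90 * P_130                                # material 170-7  (illustrative)

def residuals(p):
    C10, C01, s150, s170 = p
    return np.concatenate([
        mooney_uniaxial(lam, C10, C01, 1.0)  - P_130,   # s is fixed to 1 for 130-55
        mooney_uniaxial(lam, C10, C01, s150) - P_150,
        mooney_uniaxial(lam, C10, C01, s170) - P_170,
    ])

sol = least_squares(residuals, x0=[0.2, 0.1, 1.0, 1.0])
C10, C01, s150, s170 = sol.x
print(f"C10={C10:.3f} MPa, C01={C01:.3f} MPa, s150={s150:.3f}, s170={s170:.3f}")
```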
It can be remarked that the obtained values of $s$ are nearly equal to the inverse of the relative swelling ratio, so that we recover the hypothesis, made in statistical theories, of proportionality between the rubber elasticity and the crosslink density. We can postulate:
$s_x = \frac{q_{130}}{q_x}$   (4)
where $x$ stands for the material 150-17 or 170-7. The identified values, reported in Table 1, are $C_{10} = 0.185$ MPa, $C_{01} = 0.127$ MPa, $s_{150} = 0.948$ and $s_{170} = 0.9$, to be compared with the swelling ratios $q_{130}/q_{150} = 0.944$ and $q_{130}/q_{170} = 0.898$. For the non-equilibrium part, we adopt a Neo-Hookean potential and a Maxwell viscous flow:
$\rho_0 \psi_{neq} = \sum_{i=1}^{n} (G_i/s)\,\omega_i\,(I_1^{e_i} - 3)$
$\dot{\bar{B}}_{e_i} = \bar{L}\,\bar{B}_{e_i} + \bar{B}_{e_i}\,\bar{L}^T - \frac{1}{s\,\tau_i}\,\bar{B}_{e_i}^{D}\,\bar{B}_{e_i}$
$\dot{\omega}_i = -\frac{1}{h_i}\left(\omega_i - \left(\frac{3}{I_1(\bar{B})}\right)^{r_i}\right), \qquad \omega_i(t=0) = 1$   (5)
where $\bar{B}_{e_i}$ is the $i$-th isochoric elastic left Cauchy-Green tensor coming from the $i$-th multiplicative split of the transformation gradient into viscous and elastic parts, $\bar{L}$ is the isochoric tensor of Eulerian velocities, $\omega_i$ is the $i$-th internal variable accounting for the Payne effect, and $G_i$, $\tau_i$, $h_i$, $r_i$ are material parameters ($i = 1..n$).
The material parameter identification of the non-equilibrium part is done only on the data of the material 130-55 (by setting $s = 1$). We therefore have a viscoelastic model that can account for a variation of the crosslink density through the relation of Eq. (4). A good agreement with the response of the two other materials is observed, as can be seen in Figures 6 and 7.
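The following minimal sketch illustrates how the swelling data enter the model through Eq. (4): the measured ratios q_130/q_x (here the values reported in Table 1) rescale the moduli identified on material 130-55; the script and its variable names are illustrative only.

```python
# Sketch of the crosslink-density scaling of Eq. (4): moduli identified on
# material 130-55 are rescaled by s_x = q_130 / q_x for the other two materials.
q_ratio = {"150-17": 0.944, "170-7": 0.898}   # q_130/q_x from the swelling tests
C10_130, C01_130 = 0.185, 0.127               # MPa, identified on material 130-55

for mat, s_x in q_ratio.items():
    C10_x, C01_x = s_x * C10_130, s_x * C01_130
    print(f"material {mat}: s = {s_x:.3f} -> C10 = {C10_x:.3f} MPa, C01 = {C01_x:.3f} MPa")
```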
AGED BEHAVIOR
We investigate, in this section, the behavior of aged tensile samples that were put in an oven at 90 °C. No significant variation of mass or volume of the samples was observed after aging.
Chemical characterizations
The results of the swelling tests done at different aging times are synthesized in Figure 8. We plotted the inverse of the swelling ratio q relative to the initial (unaged) swelling ratio q_0 for each material. It can be seen that the material with the largest initial mean crosslink length (the largest number of poly-sulfur crosslinks) exhibits the strongest evolution of the crosslink density. After 11 days of aging, the swelling ratio q is very close for all materials, as the initial concentration of sulfur is the same in each material.
Mechanical characterizations
To investigate the results of the relaxation tests, we have plotted the stiffness obtained from the end of the relaxation tests done at the maximum amplitude. It can be seen from Figure 9 that the equilibrium behavior becomes stiffer. Furthermore, the material 170-7 seems to become more rigid than the two others. We can also notice that the dispersion of the results increases with aging.
For the dynamic tests, we calculated the median line and the area of the stabilized hysteresis curves. In the following, the term dynamic stiffness stands for the median line. Typical results are shown in Figures 10 and 11. As for the equilibrium stiffness, the dynamic stiffness increases with aging while the hysteresis area decreases. The dispersion is stronger than for the equilibrium results and seems to increase with aging. Figures 12, 13 and 14 show that the effect of the dynamic loading amplitude (Payne effect) and the effect of the loading frequency are not impacted by aging (at least up to 15 days of aging): we only see
DISCUSSION
Based on the previous experimental results, we can make the following remarks:
• Crosslink density and mean crosslink length.
Under non-oxidative conditions, thermal aging leads to an increase of the crosslink density of a sulfur-vulcanized filled rubber, and this evolution is related to the initial (and current) mean crosslink length.
Figure 14: Hysteresis area on aged samples as a function of the dynamic amplitude at 3 Hz with a 50% elongation preload
• Fillers/rubber network interactions. Aging mechanisms do not seem to significantly impact the filler/network interactions (at least for aging at 90 °C). The effects of loading amplitude and loading frequency are not modified by aging.
• Rubber network. The crosslink density may not be the only parameter to take into account in a model of non-oxidative aging. In this campaign, we show that the aged mechanical behavior can be slightly different between the three materials even though the results of the swelling tests are very close after aging. We surmise that the aged rubber network is not exactly the same for the three materials, even though the initial formulation is exactly the same.
• Dispersion of experimental results. We observed an increase of the dispersion of the experimental results during aging. This effect can have several different origins. It can be due to damage occurring during the mechanical tests: as the limit of chain extensibility is reduced by aging, damage can occur at smaller load amplitudes. It can also be a consequence of non-homogeneous aging, which itself can have different origins: heterogeneity of the temperature in the oven or oxidation (even though we tried to minimize it).
For the modeling part, taking into account the previous remarks, we will have to introduce at least two supplementary variables related to the mean crosslink length and the crosslink density. We have already introduced a relative crosslink density variable in the unaged behavior (Eq. 4). This variable should eventually be complemented with another one.
CONCLUSION
We have investigated the consequences of thermal aging for a sulfur-vulcanized filled rubber. From the results of the experimental campaign, we have shown that aging is strongly related to the initial rubber network (crosslink density and mean crosslink length). As already shown by previous authors, the increase of stiffness, the decrease of dissipated energy and the increase of crosslink density are the three main phenomena, but their kinetics depend on the initial vulcanization system and on the initial crosslink lengths and crosslink densities.
To investigate aging mechanisms and their kinetics further, we need to carry out other experimental tests with different aging temperatures. We will also need to study the impact of a permanent mechanical load during aging, as done by [START_REF] Johlitz | Thermo-oxidative ageing of elastomers: A modelling approach based on a finite strain theory[END_REF]. From the modeling point of view, the challenge is to describe the kinetics with few supplementary variables. We also need a good knowledge of the initial network structure, and swelling tests may not be sufficient for that.
Figure 1: Relative mass solvent absorption upon the swelling time for the three materials on virgin samples (not submitted to thermal aging)
Figure 4: Stabilized hysteresis in tension on non-aged samples for cyclic tests at 3 Hz
Figure 5: Identification of the equilibrium contribution with a Mooney-Rivlin potential
Figure 6: Validation of the identified parameters of the non-equilibrium contribution with the material 170-7 (f = 10 Hz)
Figure 9: Equilibrium stiffness of aged samples
Figure 13: Dynamic stiffness on aged samples as a function of the dynamic amplitude at 3 Hz with 50% elongation preload
Table 1: Material parameters of the equilibrium part
"982019",
"3866",
"8438"
] | [
"136844",
"136844",
"136844",
"456947"
] |
01737079 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2018 | https://imt-mines-albi.hal.science/hal-01737079/file/Publi-8-HAL.pdf | M Harzallah
T Pottier
email: [email protected]
R Gilblas
Y Landon
M Mousseigne
J Senatore
A coupled in-situ measurement of temperature and kinematic fields in Ti-6Al-4V serrated chip formation at micro-scale
Keywords:
The present paper describes and uses a novel bi-spectral imaging apparatus dedicated to the simultaneous measurement of kinematic and thermal fields in orthogonal cutting experiments. Based on wavelength splitting, this device is used to image the small-scale phenomena (over an area of about 500×500 µm) involved in the generation of serrated chips from the Ti-6Al-4V titanium alloy. Small to moderate cutting speeds are investigated at 6000 images per second for the visible spectrum and 600 images per second for the infrared measurements, which allows unblurred images to be obtained. A specific attention is paid to calibration issues, including optical distortion correction, thermal calibration and data mapping. A complex post-processing procedure based on DIC and on the direct solution of the heat diffusion equation is detailed in order to obtain strain, strain-rate, temperature and dissipation fields from the raw data in a finite strain framework. Finally, a discussion closely analyzes the obtained results in order to improve the understanding of the segment generation problem from a kinematic standpoint but also, for the first time, from an energetic standpoint.
Introduction
The constant interest in understanding and mastering machining processes has led researchers to focus more and more closely on the cutting phenomenon. In recent years, various modelling attempts have met with the need for reliable measurements to either discriminate and/or validate models [START_REF] Mabrouki | A contribution to a qualitative understanding of thermo-mechanical effects during chip formation in hard turning[END_REF]. The use of post-mortem analysis such as chip morphology and SEM analysis [START_REF] Ugarte | Machining behaviour of ti-6al-4v and ti-5553 all0oys in interrupted cutting with pvd coated cemented carbide[END_REF][START_REF] Zhang | Role of phase transformation in chip segmentation during high speed machining of dual phase titanium alloys[END_REF][START_REF] Courbon | Further insight into the chip formation of ferritic-pearlitic steels: Microstructural evolutions and associated thermo-mechanical loadings[END_REF][START_REF] Wang | Evolutions of grain size and microhardness during chip formation and machined surface generation for ti-6al-4v in high-speed machining[END_REF][START_REF] Wagner | Relationship between cutting conditions and chips morphology during milling of aluminium al-2050[END_REF] remains the most common approach. Force component measurement in real time through a Kistler dynamometer is also very popular and has become almost dogmatic in machining research [START_REF] Sun | Characteristics of cutting forces and chip formation in machining of titanium alloys[END_REF][START_REF] Klocke | From orthogonal cutting experiments towards easy-toimplement and accurate flow stress data[END_REF]. Temperature measurement using thermocouples inserted either in the part or in the tool insert has also been investigated to capture macro-scale heat generation [START_REF] Haddag | Analysis of the heat transfer at the tool-workpiece interface in machining: determination of heat generation and heat transfer coefficients[END_REF]. The use of such global quantities (chip morphology, cutting force and tool temperature) has proven worthy but limits the understanding of local phenomena such as strain distribution and temperature generation.
To overcome this very issue, some studies have been dedicated to the assessment of in-situ measurements of thermomechanical quantities. The use of Quick Stop Devices has been widely developed in the machining field with some interesting results [START_REF] Buda | New methods in the study of plastic deformation in the cutting zone[END_REF][START_REF] Komanduri | On the mechanics of chip segmentation in machining[END_REF][START_REF] Jaspers | Material behaviour in metal cutting: Strains, strain rates and temperatures in chip formation[END_REF]. More recently, full-field measurement techniques have been successfully implemented for machining purposes. They offer a real-time and in-situ insight into the thermomechanical fields through surface measurements. Such data exhibit of course a different nature from those within the bulk of the material but remain valuable either for phenomena understanding or for model validation. Performing full-field measurements of any nature in cutting conditions exhibits two major difficulties: i) the size of the observed area and ii) the rapidity with which the phenomenon occurs. These papers can be sorted into two main categories: strain measurements and infrared (IR) temperature measurements.
Though strain measurements at micro-scale are well documented in a SEM environment [START_REF] Héripré | Coupling between experimental measurements and polycrystal finite element calculations for micromechanical study of metallic materials[END_REF][START_REF] Guery | Slip activities in polycrystals determined by coupling dic measurements with crystal plasticity calculations[END_REF][START_REF] Stinville | Sub-grain scale digital image correlation by electron microscopy for polycrystalline materials during elastic and plastic deformation[END_REF], the speed requirement in cutting conditions has prevented researchers from adopting this approach to obtain experimental data. Few studies have focused on such measurements, in quasi-static or dynamic conditions, through optical microscopy [START_REF] Içöz | Strain accumulation in tial intermetallics via high-resolution digital image correlation (dic)[END_REF][START_REF] Perrier | Mechanical behaviour analysis of the interface in single hemp yarn composites: Dic measurements and fem calculations[END_REF]. Indeed, optical microscopy can easily be coupled with high speed imaging and thus offers a way to work around the two difficulties mentioned above. The use of Digital Image Correlation (DIC) has enabled the computation of strains and strain rates at micro-scale and low cutting speed (V_c = 6.10^-4 m.min^-1) along continuous chips in [START_REF] Cai | Characterization of the deformation field in large-strain extrusion machining[END_REF] and at higher speed (V_c = 6 m.min^-1) in serrated chips in [START_REF] Calamaz | Strain field measurement in orthogonal machining of a titanium alloy[END_REF][START_REF] Pottier | Sub-millimeter measurement of finite strains at cutting tool tip vicinity[END_REF]. Nevertheless, this technique requires unblurred images and is thus often used at low-to-moderate cutting speed. At higher cutting speed, Particle Image Velocimetry (PIV) is often preferred even though it cannot be used for serrated chips [START_REF] Gnanamanickam | Direct measurement of large-strain deformation fields by particle tracking[END_REF][START_REF] Guo | Controlling deformation and microstructure on machined surfaces[END_REF][START_REF] Guo | Deformation field in largestrain extrusion machining and implications for deformation processing[END_REF]. One noticeable exception is the work of Hijazi and Madhavan [START_REF] Hijazi | A novel ultra-high speed camera for digital image processing applications[END_REF], who developed a complex dedicated device composed of four non-intensified digital cameras set in dual frame mode to perform the acquisition of 4 unblurred images at 1 MHz and thus use DIC to retrieve strains. More recently, the improvement of high-speed camera spatial resolution enabled Baizeau et al. [START_REF] Baizeau | Effect of rake angle on strain field during orthogonal cutting of hardened steel with c-bn tools[END_REF] to perform DIC at higher cutting speed (V_c = 90 m.min^-1); this work is not directly focused on the chip formation but rather on residual stresses in the generated surface and is thus performed at a lower magnification.
Temperature measurements at the tool tip in the infrared waveband have been investigated in the early 2000's by various works, including the pioneering work of [START_REF] Ranc | Temperature measurement by visible pyrometry: Orthogonal cutting application[END_REF] which uses IR pyrometry. However, thermographic studies at high magnification remain very seldom. The completion of thermal measurements by corresponding kinematic measurements at the same location has been addressed at micro-scale by Bodelot et al. [START_REF] Bodelot | Experimental study of heterogeneities in strain and temperature fields at the microstructural level of polycrystalline metals through fully-coupled full-field measurements by digital image correlation and infrared thermography[END_REF] with a spatial resolution of 21 µm/pixel for the IR frames. The work of Arrazola et al. in [START_REF] Arrazola | The effect of machinability on thermal fields in orthogonal cutting of aisi 4140 steel[END_REF][START_REF] Arrazola | Analysis of the influence of tool type, coatings, and machinability on the thermal fields in orthogonal machining of AISI 4140 steels[END_REF] also presents thermal measurements in the mid-wave IR band (3-5 µm) using a home-developed IR microscope. It allowed the measurement of temperature fields with a resolution of 10 µm/pixel and an exposure time of 2 ms, at a cutting speed of 400 m/min. Other authors have developed various experimental apparatus to obtain temperature fields at high cutting speeds (above 100 m.min^-1) for continuous chips [START_REF] Sutter | An experimental technique for the measurement of temperature fields for the orthogonal cutting oin high-speed machining[END_REF][START_REF] Davies | High bandwidth thermal microscopy of machining[END_REF][START_REF] Kazban | Measurements of forces and temperature fields in highspeed machining[END_REF][START_REF] Valiorgue | Emissivity calibration for temperatures measurement using thermography in the context of machining[END_REF][START_REF] Artozoul | Extended infrared thermography applied to orthogonal cutting[END_REF]. At such cutting speeds, the scale of the problem imposes the use of high exposure times; therefore, the obtained images are blurred (time-convoluted) and can thus only be processed when a thermal steady state is reached. The simultaneous measurement of strains and temperatures through the same optical path was first performed by [START_REF] Arriola | Relationship between machinability index and in-process parameters during orthogonal cutting of steels[END_REF][START_REF] Whitenton | An introduction for machining researchers to measurement uncertainty sources in thermal images of metal cutting[END_REF]. The authors use Schwarzschild reflective optics to achieve a 15X magnification, and a cold mirror reflects the visible light to a visible camera and transmits the infrared light to a mid-wave thermal camera. More recently, Zhang et al. [START_REF] Zhang | A study on the orthogonal cutting mechanism based on experimental determined displacement and temperature fields[END_REF] have proposed a dual-camera measurement of serrated chip generation using a two-sided configuration. This technique, well known from tensile tests, captures visible images on one side of the sample and IR fields on the other side. The authors achieve the acquisition of thermal images at 60 Hz with a spatial resolution of 25 µm/pix. Accordingly, only small cutting speeds (2-4 m/min) are investigated.
Finally, the work of [START_REF] Heigel | Infrared measurement of the temperature at the tool-chip interface while machining ti-6al-4v[END_REF] presents thermal measurements at very small integration time (10 µs) and 700 Hz in the mid-wave IR band, which allows obtaining unblurred and transient thermal images with a resolution of 33 µm/pix and a cutting speed up to 100 m.min^-1.
1 Experimental Setup
Orthogonal cutting apparatus
Tests are performed in an orthogonal cutting configuration using a dedicated device made of a fixed tool and a linear actuator. The latter is fixed on the working plate of a conventional milling machine while the tool is fixed on the spindle head (see Fig. 1). The feed is set to f = 250 µm using the Z-axis wheel. The chosen cutting tools are made from uncoated carbide and exhibit a rake angle of 0°. The depth of cut is d = 2.7 mm, the length of cut is 120 mm and the cutting speeds range from 3 m.min^-1 to 15 m.min^-1. The typical geometry of the segments generated at these two cutting speeds is depicted in Fig. 2d. The three components of the cutting force are recorded through a Kistler dynamometer.
Imaging apparatus
The proposed imaging device is inspired by the one presented in [START_REF] Arriola | Relationship between machinability index and in-process parameters during orthogonal cutting of steels[END_REF]. The key feature is a spectral separation of the incoming flux through a cold mirror that enables imaging at two different wavelengths, one dedicated to visible images (for DIC purposes) and the other to IR images (temperature measurements). The chosen configuration is slightly different in the present paper since the IR flux is here focused by reflection along the optical path. An off-axis parabolic mirror is used instead of a germanium tube lens, thus preventing chromatism throughout the IR spectrum (0.9 µm - 20 µm).
Lighting is provided by two high-power LEDs (1040 lm): one is embedded in the imaging system and provides diffuse axial illumination, the other is set outside and focused directly on the sample, thus providing a directional illumination. The latter is also fitted with a low-pass filter to prevent stray IR illumination.
The camera on the visible optical path is a Photron Fastcam SA3 set at 6000 fps with an exposure time of 25 µs. The image resolution is 512 × 512 pixels.
The dimensioning of the thermographic line depends on the expected thermal range. Recent works in the field of orthogonal cutting [START_REF] Artozoul | Extended infrared thermography applied to orthogonal cutting[END_REF][START_REF] Valiorgue | Emissivity calibration for temperatures measurement using thermography in the context of machining[END_REF], with different cutting conditions (speed, angle and material), provide an estimate of the expected temperature range, from T_min = 200 °C to T_max = 550 °C. In order to choose the most suited detector, it is classical to refer to the cross-checking of Planck's laws calculated at the two extreme temperatures of the range (Eq. 1). Indeed, for a blackbody at a given temperature, 95% of the emitted flux lies between 0.5λ_max and 5λ_max, where λ_max is provided by Wien's displacement law.
$\lambda_1 = 0.5\,\frac{2898}{T_{min}}, \qquad \lambda_2 = 5\,\frac{2898}{T_{max}}$   (1)
Therefore, the optimal spectral band for this temperature range is [3.1 - 17.6] µm. Commercially available detectors offer wavelength ranges of [3 - 5] µm for InSb or MCT detectors (Mid-Wave IR) or [8 - 12] µm for microbolometers. In the present paper, the ability of Mid-Wave IR cameras to reach higher acquisition rates, up to 9 kHz in sub-windowing mode, has led to the choice of this type of detector.
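As a quick numerical check of Eq. (1), the following sketch (with the range boundaries converted to kelvin) recovers the [3.1 - 17.6] µm band; it is only an illustration of the dimensioning rule.

```python
# Spectral band selection from Wien's displacement law (Eq. 1), temperatures in kelvin.
T_min, T_max = 200.0 + 273.15, 550.0 + 273.15     # expected temperature range (K)
lam_1 = 0.5 * 2898.0 / T_min                       # lower bound (micrometres)
lam_2 = 5.0 * 2898.0 / T_max                       # upper bound (micrometres)
print(f"optimal band: [{lam_1:.1f} - {lam_2:.1f}] um")   # ~[3.1 - 17.6] um
```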
Accordingly, the camera on the IR optical path is a FLIR SC7000 set at 600 fps with an exposure time of 50 µs. The detector is sub-windowed at 1/4 in order to increase the number of frames per second. It receives radiation through the 1 mm-thick silicon beam-splitter, whose measured average transmittance is T_λ ≈ 0.66 for wavelengths ranging from 3 µm to 5 µm. The IR image resolution is 160 × 128 pixels.
Material and samples
The studied material is the Ti-6Al-4V titanium alloy. It presents two advantages in the scope of this study: it is industrially machined at relatively low cutting speed (typ. below 60 m.min^-1) and it is known to generate serrated chips even at very low cutting speed [START_REF] Pottier | Sub-millimeter measurement of finite strains at cutting tool tip vicinity[END_REF]. This latter feature is related to the poor thermal conductivity of titanium alloys (typ. below 10 W.m^-1.K^-1). The microstructure has been investigated through SEM and exhibits almost equiaxed grains. The observed surface is polished and etched to reveal the microstructure, which is used as a natural speckle for DIC (Fig. 2a-b).
With such a surface treatment, the spectral emissivity of the sample is measured by Fourier transform infrared spectroscopy. The emissivity spectrum is depicted in Fig. 2c. The spectral emissivity ε_λ is here assumed to equal its average value over the wavelength range of the camera, denoted ε. It is also assumed to remain constant within the considered temperature range [START_REF] González-Fernández | Infrared normal spectral emissivity of ti-6al-4v alloy in the 500 -1150 k temperature range[END_REF].
The mass density ρ, the specific heat C_p and the thermal conductivity k are functions of temperature. According to [START_REF] Basak | Measurement of specific heat capacity and electrical resistivity of industrial alloys using pulse heating techniques[END_REF][START_REF] Boivineau | Thermophysical properties of solid and liquid ti-6al-4v (ta6v) alloy[END_REF], they are chosen to evolve linearly between the boundaries given in Tab. 1.
Calibrations and optical characterizations
2.1 Metric calibration
The size of the observed area and the amount of optical transmission from the object to the detectors lead to questioning both the exact magnification of the apparatus and the possible distortions along the visible and IR optical paths. For magnification assessment purposes, images of calibrated lines (50 lpm and 31.25 lpm) were captured by both cameras (see Fig. 3a-d). A Fast Fourier Transform of the image then provides the pixel period of the pattern, which leads to a metric ratio of 1.133 µm/pixel for the visible camera (Photron Fastcam SA3); knowing that the pixel pitch of this camera is 17 µm, the obtained magnification is M_vis ≈ 15.00. The same procedure applied to the IR optical path leads to a metric ratio of 1.981 µm/pixel and a magnification of M_IR ≈ 15.14 (detector pitch ≈ 30 µm).
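The principle of this FFT-based estimation is sketched below on a synthetic line-pattern profile standing in for the real calibrated target (50 lp/mm, i.e. a 20 µm period); the profile, its length and the underlying metric ratio are assumptions used only to illustrate the procedure.

```python
# Sketch of the metric calibration: the dominant spatial frequency of a calibrated
# line pattern gives the pattern period in pixels, hence the metric ratio (um/pixel).
import numpy as np

known_period_um = 1000.0 / 50.0          # 50 line pairs per mm -> 20 um period
true_ratio = 1.133                        # um/pixel, used only to build a synthetic image
n = 512
x_um = np.arange(n) * true_ratio
profile = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * x_um / known_period_um))  # synthetic target

spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
k_peak = np.argmax(spectrum)              # dominant harmonic index
period_px = n / k_peak                    # pattern period in pixels
metric_ratio = known_period_um / period_px
print(f"period = {period_px:.2f} px -> metric ratio = {metric_ratio:.3f} um/pixel")
```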
Distortion correction
For distortion assessment, image pairs exhibiting rigid body motion along the X and Y axes (the horizontal and vertical axes of the image frame, respectively) are captured by both cameras, the sample being translated with a two-axis micrometric translation stage by a prescribed translation of u^pres_x = 50 µm, then u^pres_y = 50 µm. DIC is then performed between these images in order to assess the imposed displacement such that:
$\delta_x(X) = u_x^{meas}(X) - u_x^{pres}, \qquad \delta_y(X) = u_y^{meas}(X) - u_y^{pres}$   (2)
where δ_x(X) and δ_y(X) are the components of the distortion field. Thus, assuming that the optical distortions are zero at the center of the image (δ_x(0, 0) = δ_y(0, 0) = 0), the value of the constant u^pres can be assessed and the distortions estimated. Fig. 3b-e shows the shape and magnitude of δ_x(X) and δ_y(X) for both visible and infrared imaging. Finally, these noisy distortion fields are approximated through the model presented in [START_REF] Weng | Camera calibration with distortion model and accuracy evaluation[END_REF] and used for DIC purposes in [START_REF] Pierré | Unstructured finite element-based digital image correlation with enhanced management of quadrature and lens distortions[END_REF]. The latter proposes to approximate the distortions through the correction of radial, decentering and prismatic components as follows:
$\delta_x(X) = x\,(r_1\rho^2 + r_2\rho^4 + r_3\rho^6) + 2d_1\,xy + d_2\,(3x^2 + y^2) + p_1\,\rho$
$\delta_y(X) = y\,(r_1\rho^2 + r_2\rho^4 + r_3\rho^6) + 2d_2\,xy + d_1\,(x^2 + 3y^2) + p_2\,\rho$   (3)
where $\rho = \sqrt{x^2 + y^2}$ is the distance to the optical center. A simplex optimization algorithm is used to estimate the six parameters of each model, and the resulting distortion fields are depicted in Fig. 3c-f. It can be seen that the optical distortions exhibit a similar order of magnitude for the two imaging wavelengths, although mirror alignment issues lead to different shapes of the distortion fields. All subsequent displacement and temperature measurements presented in this paper are corrected accordingly.
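A possible implementation of this parametric fit is sketched below with a Nelder-Mead simplex, using a synthetic distortion field in place of the DIC-measured one and showing only the x-component of Eq. (3); the parameter values and noise level are illustrative assumptions.

```python
# Sketch of the distortion-model identification (Eq. 3) with a Nelder-Mead simplex.
import numpy as np
from scipy.optimize import minimize

def delta_x(params, x, y):
    r1, r2, r3, d1, d2, p1 = params
    rho2 = x**2 + y**2
    return (x * (r1 * rho2 + r2 * rho2**2 + r3 * rho2**3)
            + 2.0 * d1 * x * y + d2 * (3.0 * x**2 + y**2) + p1 * np.sqrt(rho2))

# Grid of normalized image coordinates and a synthetic "measured" distortion field
x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
true_params = [5e-3, -1e-3, 2e-4, 1e-3, -5e-4, 2e-3]
measured = delta_x(true_params, x, y) + np.random.normal(0, 1e-4, x.shape)  # DIC-like noise

cost = lambda p: np.sum((delta_x(p, x, y) - measured)**2)
fit = minimize(cost, x0=np.zeros(6), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-12})
print("identified parameters:", np.round(fit.x, 5))
```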
Image and speckle quality
No special treatment is applied to the sample surface, and the visible microstructure is used as a speckle for the DIC algorithm. Estimating the measurement uncertainties of speckle-based DIC techniques is known to be a complex task [START_REF] Bornert | Assessment of digital image correlation measurement errors: Methodology and results[END_REF][START_REF] Pan | Study on subset size selection in digital image correlation for speckle patterns[END_REF]. Many studies have been addressed in this field and the point of the present paper is not to discuss this matter any further. Hence, two estimation approaches are addressed here in order to assess the most suited subset size. The first approach tested here is the so-called Mean Intensity Gradient [START_REF] Pan | Mean intensity gradient: An effective global parameter for quality assessment of the speckle patterns used in digital image correlation[END_REF], which provides an estimation of the displacement precision (standard deviation). In the present work the image MIG is δ_f = 10.52; the measurement noise is assessed from the subtraction of two motionless images and its standard deviation equals σ = 3.53 pix. Hence the standard deviation of the displacement can be approximated from:
std(u) ≈ √ 2σ n * δ f ( 4
)
where n is the subset size (in pixel). Hence, assuming a chosen subset size of 16×16 pix, the expected standard deviation of the measured displacements is about 0.03 pix.
The second approach is the use of rigid body motion, as detailed in [START_REF] Triconnet | Parameter choice for optimized digital image correlation[END_REF]. This approach is close to the one used in section 2.2. Two images exhibiting a translation motion are captured, and the displacements are computed and compared to the imposed one. It allows assessing both precision (random error) and accuracy (systematic error), though the latter requires knowing the accuracy of the translation stage. For this reason, only the random error is estimated here and equals 0.035 pix for a subset size of 16 × 16 pix. The two approaches provide approximately the same magnitude of dispersion. Hence, cumulating such an error over 50 images leads to a maximal error of 1.5 pix and therefore to errors on strain below 10% (if a 16 × 16 pix extensometric basis is used).
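A minimal sketch of the first (MIG-based) estimate is given below: the mean intensity gradient δ_f is computed from a reference image and plugged into Eq. (4); the image array is a placeholder, and with the measured values (δ_f = 10.52, σ = 3.53, n = 16) the same formula returns about 0.03 pix.

```python
# Sketch of the MIG-based precision estimate (Eq. 4); `image` is a placeholder array.
import numpy as np

image = np.random.rand(512, 512) * 255.0          # stand-in for the reference image
g0, g1 = np.gradient(image.astype(float))
mig = np.mean(np.sqrt(g0**2 + g1**2))             # mean intensity gradient, delta_f

sigma = 3.53                                       # noise std from two motionless images
subset = 16                                        # subset size (pixels)
std_u = np.sqrt(2.0) * sigma / (subset * mig)      # expected displacement precision (pix)
print(f"delta_f = {mig:.2f}, std(u) ~ {std_u:.3f} pix")
```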
Thermal calibration
Small-scale calibration of IR cameras remains a challenging task for the thermography community. Either a specific lens is designed for microscopic observation and the calibration is then classical [START_REF] Romano | Simultaneous microscopic measurements of thermal and spectroscopic fields of a phase change material[END_REF], or the camera is used without a lens and the incoming radiation is focused on the detector array [START_REF] Justice | Microscopic two-color infrared imaging of nial reactive particles and pellets[END_REF]. However, the approach presented in this latter work performs the calibration of the ratio of thermal fluxes at two different wavelengths (bichromatic thermography), not an absolute one. In the present paper, only one of the two imaging paths is in the infrared spectrum, which prevents the use of the bichromatic approach. The goal is then to perform an accurate thermal calibration (with the smallest interpolation error) at small scale. To do so, several precautions must be respected.
First of all, since each pixel exhibits a different spectral and electronic response than its neighbours, a Non Uniformity Correction (NUC) is necessary. The method used in this paper is the Two Point Correction (TPC), which classically corrects the offset φ and gain K of each pixel with respect to the average response over all pixels. The relation between the output current of a given pixel of coordinate X and the read digital level is given by
I d (X, T ) = K(X)I current (X, T ) + φ(X) (5)
For the calibration, two input images are required: a dark and a bright image. They are obtained by placing an integrating sphere (Optronics Laboratories OL400) in front of the camera and adjusting the lamp supply current (see Fig. 4a).
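A sketch of the two-point correction is given below: per-pixel gains and offsets are computed so that every pixel matches the array-averaged response on the dark and bright reference frames; the reference frames used here are synthetic placeholders.

```python
# Sketch of the Two Point Correction (NUC): per-pixel gain K(X) and offset phi(X)
# computed from a dark and a bright uniform frame (placeholders here).
import numpy as np

dark = 1000.0 + 50.0 * np.random.randn(128, 160)      # dark integrating-sphere frame (DL)
bright = 8000.0 + 200.0 * np.random.randn(128, 160)   # bright integrating-sphere frame (DL)

K = (bright.mean() - dark.mean()) / (bright - dark)   # per-pixel gain
phi = dark.mean() - K * dark                          # per-pixel offset

def apply_nuc(raw):
    """Return the non-uniformity-corrected frame."""
    return K * raw + phi

# After correction, the two reference frames become uniform (standard deviation ~ 0):
print(np.std(apply_nuc(dark)), np.std(apply_nuc(bright)))
```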
The second step of the calibration procedure is the thermal calibration. This step consists in viewing a blackbody at different temperatures with the camera, and in choosing and parametrizing a model which best fits the experimental points (digital level versus temperature). The chosen model is the so-called effective extended wavelength model proposed by [START_REF] Sentenac | Noise effect on interpolation equation for neai infrared thermography[END_REF], which is recalled hereafter:
$I_d^0(T) = K_w \exp\left(-\frac{C_2}{\lambda_x T}\right) \quad \text{with} \quad \frac{1}{\lambda_x} = a_0 + \frac{a_1}{T} + \frac{a_2}{T^2}$   (6)
where $I_d^0(T)$ is the averaged signal provided by the pixels viewing the blackbody at a temperature T, and $K_w$ and the $a_i$ are the radiometric parameters. The parameter identification is then performed using a blackbody set at 10 temperatures $T_i$ ranging from 20 °C to 550 °C (Fig. 5b):
$\log I_d^0(T_i) = \log(K_w) - \frac{C_2\,a_0}{T_i} - \frac{C_2\,a_1}{T_i^2} - \frac{C_2\,a_2}{T_i^3}$   (7)
At small scales, an important source of error is the Size of Source Effect (SSE), which includes the illumination of the detector array by stray light and detector overwhelming [START_REF] Yoon | Methods to reduce the size-of-source effect in radiometers[END_REF]. It is then necessary to install a pinhole between the blackbody and the camera. The image obtained with a 200 µm pinhole is depicted in Fig. 4b; only the illuminated pixels are used for calibration. It was also verified with other pinholes (100 µm and 50 µm) that the incoming flux is no longer affected below 200 µm.
A 3rd-degree polynomial fit is then used to assess the radiometric parameters (the errors with respect to the blackbody measurements are depicted in Fig. 5c). The absolute error is always below 7 °C and is higher at low temperatures, where the detectivity of the pixels becomes very low (i.e. the photoresponse of each pixel behaves non-linearly with the flux).
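Since Eq. (7) is linear in log(K_w) and in the products C_2 a_i, the fit can be written as an ordinary linear least-squares problem; a sketch follows, in which the blackbody digital levels are placeholders and C_2 = 14388 µm.K is the second radiation constant.

```python
# Sketch of the radiometric calibration fit (Eq. 7): linear least squares on
# log(Kw), a0, a1, a2. Blackbody digital levels are placeholders.
import numpy as np

C2 = 14388.0                                   # second radiation constant (um.K)
T = np.linspace(20.0, 550.0, 10) + 273.15      # 10 blackbody temperatures (K)
I0 = np.linspace(1.5e3, 1.4e4, 10)             # measured mean digital levels (placeholder)

# Design matrix of: log I0 = log Kw - C2*a0/T - C2*a1/T^2 - C2*a2/T^3
A = np.column_stack([np.ones_like(T), -C2 / T, -C2 / T**2, -C2 / T**3])
coef, *_ = np.linalg.lstsq(A, np.log(I0), rcond=None)
log_Kw, a0, a1, a2 = coef
print(f"Kw = {np.exp(log_Kw):.3e}, a0 = {a0:.3e}, a1 = {a1:.3e}, a2 = {a2:.3e}")
```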
Once the calibration is performed, the emissivity must be known in the measurement step to infer the true temperature, as defined in the following equation:
$\epsilon(T) = \frac{I_d(T)}{I_d^0(T)} \;\Rightarrow\; I_d^0(T) = \frac{I_d(T)}{\epsilon(T)}$   (8)
Combining Eq.( 7) and Eq.( 8) then gives
$\log\left(\frac{I_d(T)}{\epsilon(T)}\right) - \log(K_w) + \frac{C_2\,a_0}{T} + \frac{C_2\,a_1}{T^2} + \frac{C_2\,a_2}{T^3} = 0$   (9)
where ε(T) = ε is assumed to be constant over the considered temperature range [START_REF] González-Fernández | Infrared normal spectral emissivity of ti-6al-4v alloy in the 500 -1150 k temperature range[END_REF]. The true temperature T at each pixel is obtained from the measured intensity I_d(T) by solving this latter equation. In practice, this is performed through Cardano's method.
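Equation (9) is a cubic polynomial in 1/T, which is why Cardano's closed-form solution applies; the sketch below simply inverts it numerically with a bracketed root search on a physical temperature interval, with placeholder radiometric parameters and emissivity.

```python
# Sketch of the per-pixel temperature inversion of Eq. (9), solved here with a
# bracketed root search instead of Cardano's closed form. All values are placeholders.
import numpy as np
from scipy.optimize import brentq

C2 = 14388.0                                        # um.K
Kw, a0, a1, a2 = 2.0e5, 0.25, 50.0, -4000.0         # placeholder radiometric parameters
eps = 0.55                                          # placeholder average emissivity

def eq9(T, I_d):
    return (np.log(I_d / eps) - np.log(Kw)
            + C2 * a0 / T + C2 * a1 / T**2 + C2 * a2 / T**3)

def true_temperature(I_d, T_lo=300.0, T_hi=1500.0):
    """Invert Eq. (9) for the true temperature (K) on a physical bracket."""
    return brentq(eq9, T_lo, T_hi, args=(I_d,))

print(true_temperature(800.0))   # placeholder digital level
```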
3 Measurement Post-processing
Digital Image Correlation
The DIC computations have been performed with the 7D software [START_REF] Vacher | Bidimensional strain measurement using digital images[END_REF]. The quality of the obtained images has led to the use of incremental correlation [START_REF] Pan | Incremental calculation for large deformation measurement using reliability-guided digital image correlation[END_REF][START_REF] Tang | Large deformation measurement scheme for 3d digital image correlation method[END_REF]. This choice relies on several considerations: very significant strains, microstructure transformation, out-of-plane motion, material decohesion and changing lighting (disorientation). Therefore, the likeness between image no. 1 and image n is poor, which prevents a straightforward use of classical DIC. The matching of images n-1 and n is obtained with better results than that of images no. 1 and n. The counterpart is that such an incremental approach requires several numerical processing steps in order to compute the cumulated displacements and strains. Fig. 6 depicts the resolution scheme used to retrieve the global displacement. In such a resolution scheme, the incremental displacements Δu_k are obtained (in pixels) at the nodes of the correlation grid so that:
$\Delta u_k(X_k)$   (10)
where $X_k = (X, Y)_k$ are the correlation grid coordinates (identical for every image in the image coordinate system but different in the local coordinate system), k being the image number. Accordingly, $X_0 = x_0 = (X, Y)_0$ are the initial coordinates of the tracked points. This information alone does not enable the estimation of strains (whether in the initial or final configuration) since material tracking cannot be achieved: cumulating strains at different locations is simply not correct. The evaluation of the deformed coordinates $x_k = (x, y)_k$ at every step/image of the deformation process is performed incrementally through triangular bi-cubic interpolation.
$x_k = x_{k-1} + \Delta u_k(x_{k-1})$   (11)
with
$\Delta u_k(x_{k-1}) = \sum_{i=1}^{3} \phi_i(x_{k-1}) \times \Delta u_k(X_k)$   (12)
where the $\phi_i$ are classical triangular cubic shape functions. Finally, the total cumulated displacement is obtained as:
$u_k(x_k) = x_k - X_0$   (13)
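A sketch of this incremental cumulation (Eqs. 11-13) is given below; a linear interpolant stands in for the triangular bi-cubic interpolation of the paper, and the per-image incremental displacement fields are placeholders.

```python
# Sketch of the incremental displacement cumulation (Eqs. 11-13). The DIC output
# delta_u_k is known at the fixed grid nodes X_k; it is interpolated at the deformed
# positions x_{k-1} and accumulated. Linear interpolation replaces the bi-cubic one.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Fixed correlation grid (pixel coordinates) and initial positions X_0
gx, gy = np.meshgrid(np.arange(0, 512, 16), np.arange(0, 512, 16))
X_grid = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
x_curr = X_grid.copy()                                     # x_0 = X_0

n_images = 10
for k in range(1, n_images + 1):
    # Placeholder incremental DIC field at the fixed grid nodes (simple shear here)
    du_grid = np.column_stack([1e-3 * k * X_grid[:, 1], np.zeros(len(X_grid))])
    interp = LinearNDInterpolator(X_grid, du_grid, fill_value=0.0)
    x_curr = x_curr + interp(x_curr)                       # Eq. (11)

u_total = x_curr - X_grid                                  # Eq. (13)
print("max cumulated displacement (pix):", np.abs(u_total).max())
```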
Strain Computation
The point of the present experiment is to offer a straightforward comparison of the local mechanical fields between experimental conditions and numerical simulations. For this purpose, strain fields are computed from the observed displacements. In a finite strain framework, the polar decomposition of the strain gradient tensor at instant k states
$F = RU = VR$   (14)
where R is the rotation matrix, and U and V are symmetric matrices describing the deformations. In most finite element codes, explicit solvers return various measures of strain. However, Eulerian strains such as the Hencky strain H (a.k.a. logarithmic strain) or the so-called Swainger strain N (a.k.a. nominal strain) are usually preferred in such a configuration [START_REF]Abaqus, Inc. ABAQUS Theory guide : version 6.12-1[END_REF]. Hence, the experimental strain measures are obtained from:
$H = \ln V \quad \text{and} \quad N = V - I$   (15)
where I stands for the identity matrix. The computation of the strain gradient tensor thus becomes the only prerequisite for strain assessment and is performed from
$F_k = \nabla_{X_0} u_k + I$   (16)
The space derivative in Eq. (16) leads to a significant issue in experimental conditions: the presence of noise on the measure of $u_k$. In order to address this issue, a filtering method should be applied to $u_k$ prior to any calculation. For this purpose, a modal projection approach is chosen, as described in [START_REF] Pottier | Proposition of a modal filtering method to enhance heat source computation within heterogeneous thermomechanical problems[END_REF], which allows the displacement to be approximated by:
$u_k(X_0) \approx \left( \sum_{p=1}^{N} \alpha_{xp}\, Q_p(X_0)\,,\; \sum_{p=1}^{N} \alpha_{yp}\, Q_p(X_0) \right)$   (17)
where the $Q_p(X)$ are the first N eigenmodes of a square shell plate and the $\alpha_p$ are the corresponding modal coordinates. The gradient then reads:
$\nabla_{X_0} u(X_0) \approx \begin{pmatrix} \sum_{p=1}^{N} \alpha_{xp}\,\frac{\partial Q_p(X_0)}{\partial X} & \sum_{p=1}^{N} \alpha_{xp}\,\frac{\partial Q_p(X_0)}{\partial Y} \\ \sum_{p=1}^{N} \alpha_{yp}\,\frac{\partial Q_p(X_0)}{\partial X} & \sum_{p=1}^{N} \alpha_{yp}\,\frac{\partial Q_p(X_0)}{\partial Y} \end{pmatrix}$   (18)
Finally, interpolation is used again to obtain the displacement gradient, and thus the strains, over the deformed grid $x_k$ as
$\nabla_{X_0} u_k(x_k) = \nabla_{X_0} u_k(X_0 + u_k(X_0))$   (19)
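Once the filtered displacement gradient is known at a material point, the strain measures of Eqs. (14)-(15) follow from a polar decomposition; a sketch for a single point is given below, in which the gradient value is a placeholder.

```python
# Sketch of the strain computation at one material point (Eqs. 14-16):
# F = grad(u) + I, polar decomposition F = V R, Hencky H = ln V, Swainger N = V - I.
import numpy as np
from scipy.linalg import polar, logm

grad_u = np.array([[0.10, 0.80],        # placeholder filtered displacement gradient
                   [0.02, -0.05]])
F = grad_u + np.eye(2)                  # Eq. (16)

R, U = polar(F, side="right")           # F = R U
V = F @ R.T                             # F = V R  (left stretch tensor)
H = logm(V)                             # Hencky (logarithmic) strain
N = V - np.eye(2)                       # Swainger (nominal) strain

print("Hencky strain:\n", np.real(H))
print("Nominal strain:\n", N)
```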
Thermal imaging post-treatment
The measurement of the surface temperature as such is of limited interest since it has to be matched with the generated powers involved in the cutting phenomenon. Indeed, as pointed out by many authors [START_REF] Kone | Finite element modelling of the thermo-mechanical behavior of coatings under extreme contact loading in dry machining[END_REF][START_REF] Haddag | Analysis of the heat transfer at the tool-workpiece interface in machining: determination of heat generation and heat transfer coefficients[END_REF], most of the generated heat is extracted along with the chip. It therefore affects the cutting force but only very remotely the generated surface. Accordingly, the thermal information is mostly interesting from an energy balance point of view and in order to investigate the tight couplings between the constitutive equations and the temperature. Let us recall the specific form of the heat diffusion equation in the Lagrangian configuration applied to a 2D thermographic framework (readers should refer to Appendix A for details on the establishment of this relation and its related hypotheses).
$\rho C_p \left( \frac{\partial \theta}{\partial t} + \vec{v} \cdot \vec{\nabla}\theta \right) - k_1 \left\| \vec{\nabla}\theta \right\|^2 - k\,\Delta_2 \theta + \frac{2h\theta}{d} + \frac{2\sigma\epsilon}{d}\left( T^4 - T_r^4 \right) = w_{ch}$   (20)
where:
• θ = T (x, t) -T 0 is the difference between the current surface temperature field and the initial state where the temperature T 0 is assumed to be homogeneous and equal to the room temperature.
• $\rho C_p \left(\frac{\partial \theta}{\partial t} + \vec{v} \cdot \vec{\nabla}\theta\right)$ is the inertial term, which reads the temperature evolution at a given location (x, t); $\vec{v}$ stands for the velocity vector field.
• $-k_1 \|\vec{\nabla}\theta\|^2 - k\,\Delta_2\theta$ is the diffusion (Laplacian) term. Note that $\vec{\nabla}$ is the two-dimensional gradient and $\Delta_2$ the two-dimensional Laplace operator.
• $\frac{2h\theta}{d} + \frac{2\sigma\epsilon}{d}(T^4 - T_r^4)$ represents the convective and radiative heat losses over the front and back faces of the sample. σ stands for the Stefan-Boltzmann constant, h is the heat transfer coefficient, arbitrarily chosen equal to 50 W.m^-2.K^-1 (forced convection), and d is the depth of cut.
The objective here is to compute the left-hand-side terms from the measured quantities (T(x, t) and T_0) and the material parameters in order to provide an estimation of the power involved in the cutting phenomenon, w_ch. In practice, the computation of the Laplace operator over noisy data requires filtering. The chosen approach relies on the same projection as used for the strains (see Eq. 17). In addition, the time derivative requires evaluating the temperature in a Lagrangian framework while thermography provides it in an Eulerian configuration. Hence, a Motion Compensation Technique is used to retrieve the material evolution of the temperature from the fixed-pixel evolution. Such an approach is classical nowadays and is not detailed any further here; readers can refer to [START_REF] Sakagami | A new full-field motion compensation technique for infrared stress measurement using digital image correlation[END_REF][START_REF] Pottier | Study on the use of motion compensation techniques to determine heat sources. Application to large deformations on cracked rubber specimens[END_REF] for more details.
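A sketch of this heat-source estimation on a single pair of (already motion-compensated) temperature frames is given below; finite differences replace the modal filtering, the thermophysical properties are taken constant, and the frames themselves are synthetic placeholders.

```python
# Sketch of the heat-source computation (Eq. 20) from two successive Lagrangian
# temperature frames. Finite differences stand in for the modal filtering, and the
# thermophysical properties are taken constant for simplicity.
import numpy as np

dx, dt = 2.0e-6, 1.0 / 600.0             # pixel size (m) and IR frame period (s)
rho, cp, k = 4430.0, 560.0, 7.0           # placeholder Ti-6Al-4V properties
h, sig, eps, d = 50.0, 5.67e-8, 0.55, 2.7e-3
T0, Tr = 293.0, 293.0

ny, nx = 128, 160
yy, xx = np.mgrid[0:ny, 0:nx]
T_prev = T0 + 250.0 * np.exp(-((xx - 80)**2 + (yy - 64)**2) / 200.0)   # placeholder frames
T_curr = T0 + 280.0 * np.exp(-((xx - 82)**2 + (yy - 64)**2) / 200.0)

theta = T_curr - T0
dtheta_dt = (T_curr - T_prev) / dt                       # material derivative (frames assumed motion-compensated)
gy, gx_ = np.gradient(theta, dx)                         # in-plane gradient
lap = np.gradient(gx_, dx, axis=1) + np.gradient(gy, dx, axis=0)   # 2D Laplacian

w_ch = (rho * cp * dtheta_dt                             # inertial term
        - k * lap                                        # diffusion term (constant k)
        + 2.0 * h * theta / d                            # convective losses
        + 2.0 * sig * eps * (T_curr**4 - Tr**4) / d)     # radiative losses
print(f"peak source ~ {w_ch.max():.2e} W/m^3")
```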
Results
Test at V_c = 3 m.min^-1
The captured images constitute a film depicting the generation of 197 segments. Fig. 8a presents 5 raw images of one single segment generation process (segment no. 101). In addition, it is worth mentioning that no drift of the thermomechanical quantities is observed during the cut. A total of 62 visible images and 6 thermal images are captured for this segment. For reading and comparison purposes, the images are labelled by their number and by the percentage of the segment formation, the first image (0%) being the image where the segment first touches the tool and the last image (100%) being the one exhibiting a displacement of 50 µm after the segment is fully formed. It is seen from the raw images and from Fig. 8b that a crack continuously propagates along the primary shear band (denoted Z_I in the following).
It can be seen from the strain depicted in Fig. 8c that the three stages process described in [START_REF] Pottier | Sub-millimeter measurement of finite strains at cutting tool tip vicinity[END_REF] is clearly visible. The first two images (prior to 30% progression) show a diffuse deformation within the segment bulk, included in-between 0.3 and 0.8. Latter on, strains slowly converge to a localized zone ahead of the tool tip (clearly visible from 40%). Starting from this stage, strains accumulate in Z I and lead to material failure. A crack propagates moving away from the tool tip. From image no.49 (80% and subsequent), the segment is fully formed and is extracted. It therefore undergoes rigid body motion and strains accumulation stops.
The strain rate field depicted in Fig. 8d is of localized nature in the early stages of the process. Strains are accumlated at the same location all along the segment generation process. There is no clear evidence of any motion of the shearing zone conversly to previouly published results [START_REF] Komanduri | On the catastrophic shear instability in high-speed machining of an aisi 4340 steel[END_REF]. The primary shear zone model seems therefore particularly suited from these obsevations. Indeed, strain is generated by the fact that material particules are geometrically forced through this band and exit it with a strain that almost no longer evolves afterward (Fig. 8c).
The temperatures imaging do not prompt any clear localisation within Z I before 60% (i.e. the end of segment generation). The maximum temperature do not evolves significantly during the sequence, only ranging from 317 • C to 360 • C. However, the temperature gradient is significant in space. The min/max range within the same image roughly equals 300 • C. Indeed, the main temperature rise seems to oc- cur during the transit of a material point through the strain rate line (Z I ). It is also worth noticing that the maximum temperature location is not in the direct vicinity of the tool. Such consideration lead to consider that heat generated in Z I (from plasticity and damage) prevails over the heat generated in the secondary and tertiary shear zone (due to friction over the rake or draft face). Conversly to temperature, the computed heat source (Fig. 8f) localizes during the early stage of segementation and almost simultaneously with the strains (depicted in Fig. 8c). The dissipated power evolution also exhibits three stages: at fisrt, up to image no.6, the dissipated power is low (about 1.7 × 10 12 W.m -3 ) and is concentrated close to the tool. Then, from a progression in-between 20% to 60%, the dissipated power increases within Z I from 2 × 10 12 W.m -3 to 3.2 × 10 12 W.m -3 . The source also sligthly progress rightward (i.e. along the shear band from the tool tip to the free surface). Finally, it is seen from image no.49 (80%) that even while the crack propagation is completed, and that strain accumulation stalls, the generated power remains high and peaks around 3.2 × 10 12 W.m -3 . It is also visible from the first image that when the generation of segment n starts the input mechanical power is split in two locations, the first consists in the sliding of the segment n-1 over the segment n and the second is the early stage of generating segment n. Indeed, it is clearly visible that a second source lights up beneath the sliding band and close to the tool tip.
Test at V_c = 15 m.min^-1
At such a cutting speed, 248 segments are generated and, with a frame rate of 6000 FPS, the generation of a single segment lasts 11 images. Fig. 9a depicts 5 images spread over the generation process of segment no. 107. Comparatively, with a capture frequency of 600 FPS, only one thermal image is acquired during the process, corresponding to a 40% progression. For this very reason, the other thermal images presented in Fig. 9e are not obtained from the investigated segment but from other measured segments, at the same stages of the segmentation progression. For this test, the average cutting force over the whole segment formation equals F_c = 1516 N.
The crack progression differs from the one presented for a cutting speed of 3 m.min^-1 (Fig. 8b). Indeed, the crack does not progress from 0% to 40% and then suddenly propagates almost all the way to the free surface at image no. 6 (60%). The phenomenon therefore seems more brutal at high cutting speed. This observation is confirmed by the investigation of the strain field depicted in Fig. 9c. Even though the latter exhibits the same kind of diffuse strain within the segment bulk (about 0.7), the shear localization within Z_I appears later (at 60% instead of 20% or 40%). It is also worth noticing that the total strain reached at 80% (i.e. at failure) is 1.5, which is lower than its counterpart at lower cutting speed (1.7).
The strain rate fields exhibit similar features regardless of the cutting speed. Their magnitude obviously increases with the cutting speed, but in a linear manner. Indeed, one would expect that multiplying the cutting speed by 5 would increase the strain-rate magnitude by the same ratio, which is almost the case here. Hence, the difference in the nature of the crack propagation is explained by the sole fact that the strain at failure decreases as the thermomechanical loading increases.
As expected, the temperature rises higher as the cutting speed increases and reaches a maximum of 548 °C. It is also seen that the localization of the thermal fields is more obvious at such a cutting speed, even at the very early stages. In addition, it is observed that the maximum temperature decreases in the early stage of segment generation (between 10% and 20%) and then increases again while the crack reaches completion. This could be explained by the fact that the heat source image at 10% actually reads two primary shear zones: the one at the tool tip (which leads to the observed segment) and another one, higher along the tool rake face, corresponding to the previous segment.
From this perspective, the thermal dissipation fields depicted in Fig. 9f are consistent with the observations made at 3 m.min^-1. Indeed, the heat source originates from the subsurface at the tool tip, then spreads and moves rightward along Z_I. It is also noticed that the dissipation band is narrower. The magnitude of the source, however, raises questions since it appears to be multiplied by a factor of only about 3 while the cutting speed and the strain rate are increased by a factor of 5. This leads to the conclusion that segmentation in Z_I (where the heat sources are higher) does not develop at iso-energy. In addition, it is worth noticing that the dissipation prompted at 80% has exactly the same shape and magnitude as the image at 0% (not depicted in Fig. 9). This illustrates the cyclic nature of machining under such conditions.
Discussion
The use of coupled measurements provides a valuable insight into the serrated chip generation phenomenon, from which some conclusions can be drawn, but it also raises many subsequent questions. The present discussion aims at summarizing and sorting the obtained information.
Force measurements
First, as seen from the heat source assessment, the generation of segment n partially overlaps in time with the generation of segments n-1 and n+1. This explains the difficulty in post-processing macroscopic measurements such as force measurements. Indeed, the variations of the force applied by a segment on the rake face cannot be measured from dynamometric measurements since the force applied by segment n sums up with the force applied by segment n+1 or n-1 [START_REF] Calamaz | Strain field measurement in orthogonal machining of a titanium alloy[END_REF]. It is also seen that the overall cutting force decreases as the cutting speed increases (from 1611 N down to 1516 N). This result is easily explained by the combination of thermal softening (significant between 300 °C and 550 °C [START_REF] Calamaz | A new material model for 2d numerical simulation of serrated chip formation when machining titanium alloy ti-6al-4v[END_REF]) and the stick-slip nature of the friction phenomenon at low cutting speed [START_REF] Zemzemi | Identification of a friction model at tool/chip/workpiece interfaces in dry machining of aisi4142 treated steels[END_REF].
Kinematic fields
Second, even though the strain fields are heterogeneous in space and in time, it seems that the strains always cumulate at the same location. As seen from the strain rate fields, the straining band is fixed in space and only fluctuates in magnitude over time. In addition, other tests performed with different rake angles (not presented here) suggest that the shape of the straining zone (its angle) is strongly related to the cutting geometry (feed and rake angle). This is clearly a perspective of the present work [START_REF] Harzallah | Numerical and experimental investigations of ti-6al-4v chip generation and thermo-mechanical couplings in orthogonal cutting[END_REF].
The equivalent strain at failure is smaller at high cutting speed even though the temperature in Z_I is higher. A first explanation for this phenomenon would be that the loss of ductility due to the viscous behaviour of the metal overtakes the thermal softening in this range of thermo-mechanical loading. Another explanation is related to the nature of the strain field itself: as seen from the heat source computations, the primary shear band narrows as the strain rate increases, which argues for a strain path closer to pure shear than its counterpart at low cutting speed. This latter explanation seems to be confirmed by the detailed results presented in Fig. 10, in which it is seen that at small cutting speed the deformation in Z_I is a mix of shear and compression, while at 15 m.min⁻¹ shear is clearly the main straining mode.
Thermal information
From a thermal standpoint, the maximum temperature is always observed within Z_I and away from the tool. One can therefore assume that the temperature measured by a thermocouple positioned within the insert does not read the proper temperature of Z_I. Consequently, it is hard to extrapolate from such a measurement the value of the shear friction stress τ_f, and therefore the heat flux generated along Z_II, commonly defined as [START_REF] Haddag | Analysis of the heat transfer at the tool-workpiece interface in machining: determination of heat generation and heat transfer coefficients[END_REF]:
$q = \beta_{sl}\, \eta_f\, \tau_f\, V_{sl}$ (21)
where V_sl is the sliding velocity and η_f the fraction of the frictional work converted into heat. β_sl is the ratio of the frictional power entering the material (1 − β_sl being the ratio entering the tool). Indeed, the thermal softening of the material in Z_I is the leading factor governing the stresses at the tool/chip interface (i.e. Z_II), and a proper knowledge of the temperature distribution in Z_I is therefore a key issue in assessing any energetic quantity in Z_II.
Although most of the heat ultimately ends up in the chip, the temperature spreads significantly beneath the tool (i.e. into the generated surface) for both cutting speeds. Further investigations are required to link this heating to the residual stress distribution in the generated surface.
As expected, multiplying the cutting speed by five does not multiply the temperature variation (θ = T(x, t) − T_0) by five but only by 1.5, while the maximum dissipated power within Z_I (see Fig. 8f and Fig. 9f) is roughly multiplied by 3. This simple observation challenges the adiabaticity hypothesis often assumed within the three shear zones Z_i: if the dissipation is multiplied by 3, then under adiabatic conditions the temperature variation should be multiplied by 3 as well, which is not the case, meaning that heat is diffusing. Indeed, it is also observed that the second term (the so-called Laplace term) of the heat diffusion equation Eq. (20) is clearly not negligible compared with the inertial term (ρC_p θ̇), especially at small cutting speed. As seen in Fig. 11, at 15 m.min⁻¹ these two terms exhibit the same order of magnitude, even though the thermal conductivity of titanium alloys is small.
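The order of magnitude of this competition can be checked with a minimal sketch; all of the numbers below are rough assumptions chosen only to illustrate the comparison between the inertial and Laplace terms, not the measured fields.

```python
# Rough comparison of the inertial and diffusive (Laplace) terms of the heat
# equation within Z_I. Every value below is an order-of-magnitude assumption.
rho, cp, k = 4400.0, 650.0, 8.0   # kg/m^3, J/(kg K), W/(m K)
d_theta = 300.0                   # K, temperature rise across the band (assumed)
dt = 1.0e-3                       # s, duration of one segment generation (assumed)
band = 50.0e-6                    # m, width of the primary shear band (assumed)

inertial = rho * cp * d_theta / dt        # ~ rho * Cp * dtheta/dt
laplace = k * d_theta / band**2           # ~ k * Laplacian(theta)
print(f"inertial ~ {inertial:.1e} W/m^3, Laplace ~ {laplace:.1e} W/m^3")
```

With such values the two terms indeed come out within the same order of magnitude, consistent with the observation made on Fig. 11.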
Local Powers
From an energy standpoint, it is also noticed in Fig. 11 that the convective and radiative thermal losses on the side of the sample are negligible. It is also reasonable to assume that, even when extrapolated to higher cutting speeds (typically 60 m.min⁻¹), they remain negligible despite the presence of the T⁴ term. Like the thermal losses, the variation of kinetic energy is small and will remain so regardless of the cutting speed. Under these hypotheses, and those detailed in Appendix A, the classical power balance becomes (local formulation):
$w_{ext} = -w_{int} + k = w_e + w_a + k \;\Rightarrow\; w_{ext} \approx w_a$ (22)
where w ext and w int are the external and internal specific power, w e and w a are the elastic and anelastic powers and k is the variation of kinetic energy.
Zone I: Heat is mainly generated within Z_I. Indeed, at both cutting speeds, the dissipated powers and the temperatures in Z_II and Z_III are significantly lower than in Z_I. However, plasticity may not, on its own, be responsible for the observed dissipated power in Z_I. Recalling that Fig. 8f and Fig. 9f read the left-hand-side terms of Eq. (20), and assuming that plasticity is the only phenomenon involved in the thermal rise within Z_I, the specific dissipated power classically equals [START_REF] Chrysochoos | An infrared image processing to analyse the calorific effects accompanying strain localisation[END_REF]:
$w_{ch} = \beta w_a = \beta\,(\sigma : \dot\varepsilon^{\,p})$ (23)
where β is the so-called Taylor-Quinney coefficient. Note that the thermo-elastic coupling is neglected here compared with plasticity [START_REF] Boulanger | Calorimetric analysis of dissipative and thermoelastic effects associated with the fatigue behavior of steels[END_REF]. Hence, assuming a constant value of β of 0.8 [START_REF] Mason | On the strain and strain rate dependence of the fraction of plastic work converted into heat: an experimental study using high speed infrared detectors and the kolsky bar[END_REF], the maximal value of the equivalent von Mises stress roughly equals 2.1 GPa for the test at 15 m.min⁻¹ and 3.3 GPa for the test at 3 m.min⁻¹, which seems largely overestimated. Such a calculation is, of course, too coarse, but it leads one to consider that, especially at small cutting speed, other phenomena are involved (damage, segment-over-segment friction, phase transition, recrystallization...) and that other dissipative terms should be added to the right-hand side of Eq. (23).
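The back-of-the-envelope inversion behind these stress estimates can be sketched as follows; the peak dissipation and equivalent strain-rate values are illustrative assumptions standing in for the values read off Figs. 8d/8f and 9d/9f.

```python
# Order-of-magnitude inversion of Eq. (23), w_ch = beta * (sigma : eps_dot_p),
# approximated as w_ch ~ beta * sigma_eq * eps_dot_eq. The peak dissipation and
# strain-rate values below are illustrative assumptions, not the measured maps.
beta = 0.8   # Taylor-Quinney coefficient (assumed constant)

cases = {
    "3 m/min":  {"w_ch": 2.6e12, "eps_dot": 1.0e3},   # W/m^3, 1/s (assumed)
    "15 m/min": {"w_ch": 8.0e12, "eps_dot": 4.8e3},   # W/m^3, 1/s (assumed)
}
for name, c in cases.items():
    sigma_eq = c["w_ch"] / (beta * c["eps_dot"])      # Pa
    print(f"{name}: equivalent von Mises stress ~ {sigma_eq / 1e9:.1f} GPa")
```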
Zones II and III: The presented measurements mainly focused on the primary shear zone. Nonetheless, they provide valuable information on the other dissipative zones (namely Z_II and Z_III). Indeed, from the thermal information it is possible to assess the heat flux transferring between the tool and the material. The Neumann boundary condition provides the incoming surface heat flux, which can be compared with Eq. (21) as:
$q = -k\,\dfrac{\partial \theta}{\partial x} = \beta_{sl}\,\eta_f\,\tau_f\,V_{sl} - h\,(T_w - T_{tool})$ (24)
Hence, evaluating the temperature gradient along the material boundary ∂Ω provides the heat fluxes depicted in Fig. 12 for both Z_II and Z_III. It can be seen that the power generated by friction in Z_II is not sufficient to overcome the natural flux heading from the hot segment toward the (cooler) tool. Increasing the cutting speed slightly increases the outbound heat flux; however, this increase is not proportional to the cutting speed and has to be considered together with the tool temperature (see Eq. (24)). From Eq. (24), it can be concluded that in Z_II the current tool temperature T_tool is of the utmost importance for whoever wants to relate tool wear to the incoming thermal flux. Indeed, if T_tool is no longer an unknown, a given set of parameters β_sl, η_f and h leads to a possible assessment of the last unknown variable, τ_f, a key quantity in machining operations.
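A minimal sketch of this inversion is given below. Every numerical value (gradient, partition coefficients, contact heat-transfer coefficient, temperatures) is an assumption introduced only to show how Eq. (24) would be solved for τ_f once T_tool is available; none of them are the measured quantities of this study.

```python
# Sketch of how Eq. (24) can be inverted for the friction shear stress tau_f once
# the tool temperature is known. Every numerical value below is an assumption.
k = 9.7             # W/(m K), conductivity at high temperature (Table 1)
dtheta_dx = -2.0e5  # K/m, hypothetical temperature gradient normal to the rake face
beta_sl, eta_f = 0.5, 1.0    # power share entering the material, heat conversion ratio
V_sl = 15.0 / 60.0           # m/s, sliding velocity taken equal to the cutting speed
h = 1.0e4                    # W/(m^2 K), contact heat-transfer coefficient (assumed)
T_w, T_tool = 450.0, 350.0   # degC, local wall and tool temperatures (assumed)

q = -k * dtheta_dx                                   # incoming surface heat flux
tau_f = (q + h * (T_w - T_tool)) / (beta_sl * eta_f * V_sl)
print(f"q = {q:.2e} W/m^2  ->  tau_f ~ {tau_f / 1e6:.0f} MPa")
```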
In Z_III, heat is entering the material, meaning that the generated surface is hot and that this heat slowly diffuses into the material. The cutting speed strongly affects the surface temperature beneath the tool, which doubles between 3 m.min⁻¹ and 15 m.min⁻¹ (approximately 140 °C and 300 °C). This ratio of 2 is also observed between the heat fluxes in Z_III depicted in Fig. 12. No discontinuity is observed in the incoming flux, which suggests that there is no contact between the tool and the part along the clearance face.
Global Powers
From a global standpoint, it is worth recalling that the overall input power of the cut is the product of the cutting force and the cutting speed, and that the internal energy balance allows one to write:
$W_{ext} = F_c \cdot V_c = -W_{int} + K = W_e + W_a + K$ (25)
where K is the variation of kinetic energy, W_int the internal power, W_e the elastic power and W_a the anelastic power, all expressed in watts. Hence, by neglecting the elastic power and K compared with the anelastic power, it is possible to evaluate the share of energy consumed in each of the three zones from the spatial integration of the specific dissipation w_ch = βw_a depicted in Fig. 8f and Fig. 9f. This yields:
$\overline{W}_{ext} = \dfrac{d}{\beta}\displaystyle\sum_{i=I}^{IV}\int_{Z_i} w_{ch}\, dS$ (26)
where d is the depth of cut, Z_i is the surface area of the zones Z_I, Z_II and Z_III, and Z_IV is the remaining surface corresponding to the imaged sub-surface of the sample and the bulk of the chip segment, as depicted in Fig. 13. The volume integration is performed under the strong assumption that the heat source is homogeneous along the depth of the sample. Finally, dividing this quantity by the cutting speed yields W̄_ext expressed in J.m⁻¹, the energy required to perform one metre of cut (a numerical sketch of this zone-wise integration is given after the list below). Fig. 13 depicts, for the 5 instants of each test, the shares of energy consumed by each of the three zones (Z_I, Z_II, Z_III) and in the rest of the imaged area Z_IV (denoted bulk & sub-surface in Fig. 13). Various considerations can be made from this representation:
• As expected from the interpretation of the force measurements, the overall consumed energy decreases as the cutting speed increases (which is consistent with classical dynamometric observations).
• The dissipation in the segment bulk and in the subsurface (i.e. outside the three identified and well-known shear zones) is far from negligible. This is consistent with the strain measurements presented above, where it is seen that smaller but significant deformation occurs within the bulk of the segment.
• The dissipation in the segment bulk suddenly drops when the crack propagation reaches completion (see Fig. 13 progression 60% for both tests).
• The energy dissipated through plasticity in the subsurface of Z_II and Z_III slightly increases with the cutting speed, especially for Z_III. The interpretation of Eq. (23) suggests that the plastic deformation in these zones also increases with the cutting speed.
• For most of the investigated instants, the internal energy comes close to the external one, meaning that most of the energy is used within the captured area. However, a part of the input energy is consumed outside the image. Further investigations would be required to determine whether this energy is used to deform the subsurface (left of Z_III) or the top section of Z_I, both unseen by the imaging apparatus.
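The zone-wise integration of Eq. (26) announced above can be sketched as follows; the dissipation map, pixel size, depth of cut and zone masks are synthetic placeholders introduced only to show the computation, not the experimental data.

```python
import numpy as np

# Sketch of Eq. (26): W_ext ~ (d / beta) * sum_i  integral_{Z_i} w_ch dS,
# then divided by V_c to obtain an energy per metre of cut.
beta, d, V_c = 0.8, 4.0e-3, 3.0 / 60.0   # -, m, m/s (depth of cut and speed assumed)
dx = dy = 2.0e-6                         # m, pixel size (assumed)

ny, nx = 256, 256
w_ch = np.random.default_rng(0).uniform(0.0, 1.0e12, (ny, nx))  # W/m^3, placeholder

masks = {name: np.zeros((ny, nx), bool) for name in ("Z_I", "Z_II", "Z_III")}
masks["Z_I"][100:160, 80:200] = True      # crude rectangular zones for the example
masks["Z_II"][60:100, 180:220] = True
masks["Z_III"][200:220, 0:120] = True
masks["bulk & sub-surface"] = ~(masks["Z_I"] | masks["Z_II"] | masks["Z_III"])

for name, m in masks.items():
    P = (d / beta) * w_ch[m].sum() * dx * dy          # W dissipated in the zone
    print(f"{name:>20s}: {P:8.2f} W   {P / V_c:10.1f} J/m")
```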
Conclusions and Perspectives
In this paper, the development and implementation of an original imaging apparatus dedicated to the simultaneous measurement of strain and temperature fields at small scale is presented. The proposed experiment enables the monitoring of a 500 × 500 µm area at the tool tip using both visible and infrared cameras. The study provides novel and valuable insight into the essential thermomechanical couplings and in-process mechanical phenomena involved in the generation of serrated chips, and thus gives a new understanding of the cutting process mechanics occurring during the orthogonal cutting of Ti-6Al-4V. The developed numerical post-processing also constitutes an original contribution, since it provides field information at various time steps of the serrated chip generation. Strain, strain rate, temperature and dissipated power, along with displacement, velocity and crack progression, are obtained at each pixel from both the kinematic and thermal/energy measurements, allowing the assessment of the dissipated powers at stake during material removal processes. The whole measuring chain has been carefully calibrated and special attention has been paid to the dependency of the physical parameters on temperature.
The experimental observations have highlighted the dependency of the physical phenomena on the cutting speed. This work provides valuable experimental evidence on the different nature of the coupling phenomena. First, it appears that the nature of the deformation mechanisms is clearly affected by the cutting speed, in terms of magnitude but also in terms of strain path. At low cutting speed, the loading in the primary shear zone is a mix of shear and compression, while it becomes mainly pure shear at higher cutting speed. Also, the width of the primary shear zone decreases as the cutting speed increases. It has been confirmed that the heat is mainly generated in the primary shear zone, and that the highest temperature is observed in this zone and not along the rake face interface. In addition, the energetic study conducted in this work shows that the adiabaticity hypothesis often assumed within the three shear zones is disputable: as the dissipation is multiplied by 3, the temperature variation is not multiplied by 3 as well, meaning that heat is diffusing. It is also observed that the Laplace term of the heat diffusion equation is clearly not negligible compared with the inertial term, especially at small cutting speed. Moreover, plasticity is not solely responsible for the dissipated power in the primary shear zone; other phenomena may be involved (damage, segment-over-segment friction, phase transition, recrystallization...). The study also showed that a significant power is dissipated outside the three shear zones, mostly within the segment bulk.
Moreover, even though the heat is mainly evacuated within the chip, the temperature evolution of the generated surface is significant and this surface is most probably thermally affected. This constitutes a direct perspective of this work. A proper knowledge of the temperature distribution in this zone is a key issue in assessing any energetic quantity in the chip, in the material and at the tool/chip interface. In particular, acquiring detailed coupled measurements in Z_II and Z_III would be of high interest in order to improve the understanding of tool wear and of the thermal influence on the fatigue behavior of the generated surface.
Appendix A
Let us assume a thermal dependence of the physical constants, namely the thermal conductivity, mass density and specific heat, of the form
$k = k(\theta) = k_1\,\theta(x,t) + k_0$, $\quad \rho = \rho(\theta) = \rho_1\,\theta(x,t) + \rho_0$, $\quad C_p = C_p(\theta) = C_1\,\theta(x,t) + C_0$ (27)
where k_1 is the slope of the linear regression of the data presented in Tab. 1 and k_0 the thermal conductivity at 20 °C (and similarly for the ρ_i and C_i). θ = T(x, t) − T_0(x) is the temperature variation with respect to the initial temperature T_0. The heat diffusion equation with variable conductivity, mass density and specific heat is:
$\rho C_p\,\dfrac{d\theta}{dt} - \vec\nabla\cdot\left(k\,\vec\nabla\theta\right) = w_{ch} + r(\theta)$ (28)
where w_ch is the specific calorific power (i.e. the internal heat source). The thermoelastic coupling is neglected here since the plastic strains largely prevail over the elastic ones; readers can refer to [START_REF] Boulanger | Calorimetric analysis of dissipative and thermoelastic effects associated with the fatigue behavior of steels[END_REF] and [START_REF] Rittel | On the conversion of plastic work to heat during high strain rate deformation of glassy polymers[END_REF] for more details. Another hypothesis assumed here is that r, the volumetric external radiative source, does not depend on the temperature T. It follows that r(T) = r(T_0) and therefore that r(θ) = 0. Using the Kirchhoff transform of the temperature, $\Psi(\theta) = \int_0^{\theta} k(u)\,du$, the heat diffusion equation Eq. (28) becomes $I(\Psi) - D(\Psi) = w_{ch}$ (31) [START_REF] Justice | Microscopic two-color infrared imaging of nial reactive particles and pellets[END_REF], where the left-hand-side term I(Ψ) = ρC_p dθ/dt is called the inertial term and the other one, D(Ψ) = ∆Ψ, the diffusive term. The key point here is that the available experimental data do not capture the third dimension of the temperature field (along z). Indeed, the thermal information is of surface nature and therefore requires integrating relation Eq. (31) over the z direction (through-thickness dimension). For this purpose, the through-thickness average of the field variables must be defined as:
$\bar{\bullet} = \dfrac{1}{d}\int_0^d \bullet\; dz$ (32)
where d is the thickness of the sample (i.e. the depth of cut). This imposes two main hypotheses [START_REF] Chrysochoos | An infrared image processing to analyse the calorific effects accompanying strain localisation[END_REF]:
i) The calorific power w_ch is constant through the thickness: $\overline{w_{ch}} = w_{ch}$.
ii) The heat conduction is much greater than the thermal losses (radiative and convective) at the front and back faces. It follows that the through-thickness averaged temperature can be approximated by the measured surface temperature: $\bar\theta \approx \theta(z = 0) \approx \theta(z = d)$.
The integrated diffusive term then becomes:
$\bar D = \dfrac{1}{d}\int_0^d \left(\dfrac{\partial^2 \Psi}{\partial x^2} + \dfrac{\partial^2 \Psi}{\partial y^2} + \dfrac{\partial^2 \Psi}{\partial z^2}\right) dz = \dfrac{1}{d}\dfrac{\partial^2}{\partial x^2}\int_0^d \Psi\, dz + \dfrac{1}{d}\dfrac{\partial^2}{\partial y^2}\int_0^d \Psi\, dz + \dfrac{1}{d}\left[\dfrac{\partial \Psi}{\partial z}\right]_0^d = \dfrac{\partial^2 \bar\Psi}{\partial x^2} + \dfrac{\partial^2 \bar\Psi}{\partial y^2} + \dfrac{1}{d}\left[k\,\dfrac{\partial \theta}{\partial z}\right]_0^d = \Delta_2 \bar\Psi + \dfrac{1}{d}\left[k\,\dfrac{\partial \theta}{\partial z}\right]_0^d$ (33)
where ∆_2 is the two-dimensional Laplace operator, such that
$\Delta_2 \bar\Psi = \vec\nabla\cdot\left(\vec\nabla\bar\Psi\right) = \vec\nabla\cdot\left(k(\bar\theta)\,\vec\nabla\bar\theta\right) = \vec\nabla\cdot\left(k_1\bar\theta\,\vec\nabla\bar\theta + k_0\,\vec\nabla\bar\theta\right) = k_1\left(\vec\nabla\bar\theta\cdot\vec\nabla\bar\theta + \bar\theta\,\Delta_2\bar\theta\right) + k_0\,\Delta_2\bar\theta = k_1\left\|\vec\nabla\bar\theta\right\|^2 + \bar k\,\Delta_2\bar\theta$ (34)
and ∇ here denotes the two-dimensional gradient. The integrated inertial term Ī is expressed through the partial derivatives of Ψ and the Leibniz integration rule as $\bar I = \rho C_p\left(\dfrac{\partial \bar\theta}{\partial t} + \vec v\cdot\vec\nabla\bar\theta\right)$, where v = v(x, t) is the 2D velocity vector field. Finally, it should be noticed that the right-hand-side term of Eq. (33) reads the boundary condition of the front and back faces of the sample. Therefore, assuming convective/radiative conditions on these faces and a room temperature T_r equal to the initial temperature T_0, one can write:
$\left[k\,\dfrac{\partial\theta}{\partial z}\right]_0^d = 2h\,\bar\theta + 2\sigma\left(\bar T^4 - T_r^4\right)$ (36) [START_REF] Kone | Finite element modelling of the thermo-mechanical behavior of coatings under extreme contact loading in dry machining[END_REF], and the complete heat diffusion equation of the problem at instant t can be expressed as:
$\rho C_p\left(\dfrac{\partial\bar\theta}{\partial t} + \vec v\cdot\vec\nabla\bar\theta\right) - k_1\left\|\vec\nabla\bar\theta\right\|^2 - \bar k\,\Delta_2\bar\theta + \dfrac{2h\bar\theta}{d} + \dfrac{2\sigma}{d}\left(\bar T^4 - T_r^4\right) = w_{ch}$ (37)
Remark: under such a formalism, the use of a Stefan-Boltzmann condition imposes that the temperatures T̄ and T_r = T_0 be expressed in Kelvin (and not in Celsius).
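A minimal sketch of how the source term of Eq. (37) can be evaluated by finite differences on discrete fields is given below. The temperature frames, velocity fields, pixel size, time step, face-loss coefficients and the linearised property slopes (read from Tab. 1) are synthetic or assumed values, not the experimental data of this study.

```python
import numpy as np

# Evaluate the left-hand side of Eq. (37) on synthetic theta / velocity fields.
dx = dy = 2.0e-6     # m, pixel size (assumed)
dt = 1.0e-4          # s, time between two infrared frames (assumed)
d = 4.0e-3           # m, sample thickness (assumed)
h, sigma = 10.0, 5.67e-8   # W/(m^2 K), W/(m^2 K^4): front/back face losses (assumed)
T0 = 293.15                # K, initial / room temperature

# temperature-dependent properties, linearised as in Eq. (27) from the Tab. 1 data
k0, k1 = 6.7, 5.66e-3        # W/(m K), W/(m K^2)
rho0, r1 = 4450.0, -0.109    # kg/m^3, kg/(m^3 K)
cp0, c1 = 560.0, 0.358       # J/(kg K), J/(kg K^2)

rng = np.random.default_rng(1)
theta_prev = rng.uniform(0, 300, (128, 128))                 # K, frame at t - dt
theta = theta_prev + rng.uniform(0, 5, theta_prev.shape)     # K, frame at t
vx = np.full_like(theta, 0.05)                               # m/s, DIC velocity (x)
vy = np.zeros_like(theta)                                    # m/s, DIC velocity (y)

k = k1 * theta + k0
rho_cp = (r1 * theta + rho0) * (c1 * theta + cp0)

dthdt = (theta - theta_prev) / dt
gy, gx = np.gradient(theta, dy, dx)
lap = (np.gradient(np.gradient(theta, dx, axis=1), dx, axis=1)
       + np.gradient(np.gradient(theta, dy, axis=0), dy, axis=0))

w_ch = (rho_cp * (dthdt + vx * gx + vy * gy)
        - k1 * (gx**2 + gy**2) - k * lap
        + 2 * h * theta / d
        + 2 * sigma * ((theta + T0)**4 - T0**4) / d)
print("peak source:", w_ch.max(), "W/m^3")
```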
Figure 1: a) Orthogonal cutting device and imaging apparatus. b) Schematic of the VIS-IR imaging apparatus.
Figure 2: (a) SEM image of a typical Ti-6Al-4V equiaxed microstructure. (b) Microstructure as seen from the visible imaging line; the texture is used as a speckle for DIC. (c) Mid-wave infrared emission spectrum of the sample surface. (d) Typical chip geometry obtained at 3 m.min⁻¹ and 15 m.min⁻¹.
Figure 3: a) and b) Visible and infrared images of calibrated targets of 50 lines per millimeter (lpm) and 31.25 lpm. c) and d) Shapes and magnitudes (in pixels) of the measured distortion fields along the x and y axes for the visible and IR optical paths. e) and f) Approximation of the distortion fields from the model presented in (Eq. 3). Results in pixels.
Figure 4: a) NUC correction using the integrating sphere setup. b) Infrared image of the blackbody at 300 °C through a 200 µm pinhole (image at full frame, 320 × 256 pixels).
Figure 5: a) Experimental setup for blackbody measurement. b) Camera signal as a function of blackbody temperature and corresponding fit. c) Fit error of the blackbody measurements.
Figure 6: Illustration of the computation scheme used to assess the cumulated displacement from incremental correlation.
Figure 7: a) Fluctuations of the cutting force over the generation of segments no. 101 and no. 102 versus time. b) Fast Fourier Transform spectrum of the whole force signal at V_c = 3 m.min⁻¹.
Figure 8: a) Raw images of the chosen segment at 3 m.min⁻¹, images no. 6, no. 13, no. 25, no. 42 and no. 50 (final image is no. 62). b) Crack evolution along the primary shear zone. c) Logarithmic equivalent strain. d) Equivalent strain rate (in s⁻¹). e) Measured temperature (in °C). f) Intrinsic dissipation (specific mechanical power in W.m⁻³).
Figure 9: a) Raw images of the chosen segment at 15 m.min⁻¹, images no. 1, no. 2, no. 4, no. 6 and no. 8 (final image is no. 10). b) Crack evolution along the primary shear zone. c) Logarithmic equivalent strain. d) Equivalent strain rate (in s⁻¹). e) Measured temperature (in °C). f) Intrinsic dissipation (specific mechanical power in W.m⁻³).
Figure 10: a) Trajectory of the considered point, up to material failure, for the test at 3 m.min⁻¹. b) Trajectory of the considered point, up to material failure, for the test at 15 m.min⁻¹. c) Strain path up to failure for both considered points (cutting speeds) in the principal strain plane.
Figure 11: Shares of power contributions for the test at 15 m.min⁻¹ (progression 40%); unit is W.m⁻³. a) Inertial term. b) Diffusive (Laplace) term. c) Boundary condition term. d) Intrinsic dissipation. e) Kinetic term.
Figure 12: Average heat flux entering/exiting the material through the tool/part interfaces of Z_II and Z_III during one segment generation. a) At 3 m.min⁻¹. b) At 15 m.min⁻¹.
Figure 13: Shares of energy consumed per zone of the image (red dots stand for the external energy W̄_ext). a) At 3 m.min⁻¹ and b) at 15 m.min⁻¹. c) Zone locations and legend.
Table 1: Chosen material parameters for Ti-6Al-4V [START_REF] Basak | Measurement of specific heat capacity and electrical resistivity of industrial alloys using pulse heating techniques[END_REF][START_REF] Boivineau | Thermophysical properties of solid and liquid ti-6al-4v (ta6v) alloy[END_REF][START_REF] González-Fernández | Infrared normal spectral emissivity of ti-6al-4v alloy in the 500 -1150 k temperature range[END_REF].
20 • C 4450 0.37 560 6.7
550 • C 4392 0.37 750 9.7 | 67,038 | [
"18962",
"19321",
"19090",
"19608",
"19669"
] | [
"110103",
"110103",
"110103",
"110103",
"110103",
"110103"
] |
01769971 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01769971/file/JFM.pdf | Etienne Jambon-Puillet
Odile Carrier
Noushine Shahidzadeh
David Brutin
Jens Eggers
Daniel Bonn
Spreading dynamics and contact angle of completely wetting volatile drops
The spreading of evaporating drops without a pinned contact line is studied experimentally and theoretically, measuring the radius R(t) of completely wetting alkane drops of different volatility on glass. Initially the drop spreads (R increases), then owing to evaporation reverses direction and recedes with an almost constant non-zero contact angle θ ∝ β 1/3 , where β measures the rate of evaporation; eventually the drop vanishes at a finite-time singularity. Our theory, based on a first-principles hydrodynamic description, well reproduces the dynamics of R and the value of θ during retraction.
Introduction
The evaporation of liquid drops, in the form of dew, rain, or mist generated by breaking waves, must be accounted for accurately in the heat and mass balance of climate models. Evaporation is also important for industrial processes such as spray drying or ink jet printing. As a result, the evaporation of drop has attracted a great deal of attention over the past few years (for recent reviews see [START_REF] Cazabat | Evaporation of macroscopic sessile droplets[END_REF][START_REF] Erbil | Evaporation of pure liquid sessile and spherical suspended drops: A review[END_REF][START_REF] Larson | Transport and deposition patterns in drying sessile droplets[END_REF].
The two situations most studied are (i) the "coffee-stain" problem in which a drop is deposited on a rough substrate to which its contact line remains anchored during evaporation [START_REF] Deegan | Capillary flow as the cause of ring stains from dried liquid drops[END_REF][START_REF] Deegan | Contact line deposits in an evaporating drop[END_REF] and (ii) drop of completely wetting liquids deposited on a perfectly smooth surface (Cachile et al. 2002a,b;[START_REF] Poulard | Rescaling the dynamics of evaporating drops[END_REF][START_REF] Shahidzadeh-Bonn | Evaporating droplets[END_REF]. The latter problem, studied here, has attracted a great deal of attention since it is unclear why a completely wetting liquid exhibits a non-zero contact angle during evaporation [START_REF] Elbaum | Evaporation preempts complete wetting[END_REF][START_REF] Bonn | Comment on "evaporation preempts complete wetting[END_REF]. This problem is difficult because it involves diverging viscous stresses and evaporation rates, which need to be regularized to predict the motion [START_REF] Bonn | Wetting and spreading[END_REF][START_REF] Eggers | Nonlocal description of evaporating drops[END_REF]. In doing so, the shape of the drop is a priori unknown and has to be calculated; however this requires the prediction of the speed of the moving contact line, which is due to a complicated interplay between pinning, thermal activation and viscous dissipation [START_REF] Snoeijer | Moving contact lines: scales, regimes, and dynamical transitions[END_REF][START_REF] Perrin | Defects at the nanoscale impact contact line motion at all scales[END_REF]. In addition, numerous secondary effects [START_REF] Hu | Marangoni effect reverses coffee-ring depositions[END_REF]).
Here we study the relative effect of evaporation and spreading systematically by placing completely wetting drops of alkanes (pentane (C 5 H 12 ) to nonane (C 9 H 20 )), whose volatility varies by two orders of magnitude, on a clean glass surface (see figure 1 (a)(b)). The perfectly circular drop shape indicates that contact line pinning is not important. Our drops are sufficiently small, so that convection in the gas phase is negligible, and the evaporation rate is limited by vapour diffusion into the surrounding gas phase. Moreover, our drops are very thin, which limits temperature gradients, especially for alkanes that do not evaporate too fast.
Previous studies have found that the contact angle of such a completely wetting but evaporating drop can be non-zero [START_REF] Bourges-Monnier | Influence of evaporation on contact angle[END_REF]Cachile et al. 2002a,b;[START_REF] Poulard | Rescaling the dynamics of evaporating drops[END_REF][START_REF] Shahidzadeh-Bonn | Evaporating droplets[END_REF][START_REF] Lee | Spreading and evaporation of sessile droplets: Universal behaviour in the case of complete wetting[END_REF]. The interpretation of such a non-zero contact angle for a completely wetting liquid, which we denote by θ ev , is difficult, since it represents a fundamentally non-equilibrium situation. The presence of stress and evaporative singularities at the contact line, which need to be regularized on a microscopic scale, make the problem inherently multi-scale. A crude regularization as proposed by [START_REF] Poulard | Rescaling the dynamics of evaporating drops[END_REF] allows to understand the formation of such an angle but its exact expression has remained a subject of debate [START_REF] Eggers | Nonlocal description of evaporating drops[END_REF][START_REF] Morris | On the contact region of a diffusion-limited evaporating drop: a local analysis[END_REF]. The recent paper by [START_REF] Saxton | On thin evaporating drops: When is the d 2 -law valid[END_REF] only considers partially wetting liquids (while ignoring pinning of the contact line). The time dependence of the drop radius is also intriguing. Since the fluid is wetting it starts to spread, but at some point, the evaporation starts to dominate, the drop retracts and R eventually vanishes at a time t 0 .
In this article, using the framework proposed by [START_REF] Eggers | Nonlocal description of evaporating drops[END_REF], later developed by [START_REF] Morris | On the contact region of a diffusion-limited evaporating drop: a local analysis[END_REF] we propose a simple parameter free model to describe the spreading dynamics and contact angle of evaporating drops of completely wetting liquids and make a direct comparison with experiments.
Experimental set-up
Experiments were performed at room temperature T ≈ 21 °C by gently depositing a drop on a float glass surface using a microsyringe, and recording either its weight or shape using a precision balance and a drop shape analyser (Kruss Easydrop, see figure 1 (c)(d)). The alkanes used were ultra pure, from Sigma-Aldrich; the substrates were glass microscope slides (Menzel Gläser, 1 mm thick), cleaned with either sulfochromic acid or piranha solution. The equilibrium vapour pressure P_sat of the different alkanes varies over two orders of magnitude, while keeping almost the same density ρ, surface tension γ and viscosity η. The volume of the drops was about 1 µL. The largest Bond number was Bo = ρghR/γ ≈ 0.2, so gravity could be neglected (h is the drop height). The drop profile being well fitted by a spherical cap (figure 1 (c)(d)), the drop volume V and apparent contact angle θ are calculated from the drop height h and radius R assuming a spherical-cap profile.
Result and discussion
Figure 2 shows the mass m of various alkane drops as a function of their radius cubed as they evaporate, giving a straight line. For a thin drop, θ = 4m/(πρR 3 ), so that the slope in figure 2 directly corresponds to the contact angle, which is seen to depend strongly on the chain length of the alkane, despite their similar interfacial properties. The macroscopic contact angle is thus controlled by evaporation rather than the wetting properties; due to their low surface tension in equilibrium the contact angle of all alkanes on the substrate is zero.
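The thin-drop relation used to read the contact angle off the slope of figure 2 can be illustrated with a minimal sketch; the density, mass and radius below are illustrative assumptions, not measured values.

```python
import math

# theta = 4 m / (pi rho R^3) for a thin spherical-cap drop (illustrative values).
rho = 700.0    # kg/m^3, typical alkane density (assumed)
m = 0.5e-6     # kg, drop mass (assumed)
R = 2.0e-3     # m, contact radius (assumed)

theta = 4.0 * m / (math.pi * rho * R**3)
print(f"theta ~ {theta:.3f} rad ({math.degrees(theta):.1f} deg)")
```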
Turning to the drop dynamics, the simplest assumption is that to leading order the drop dynamics are unaffected by evaporation, which enters through the total mass balance only. Thus drop motion is described by Tanner's law [START_REF] Bonn | Wetting and spreading[END_REF]): R ∼ V 3/10 (γt/η) 1/10 , but the total mass flux is proportional to the drop radius [START_REF] Deegan | Contact line deposits in an evaporating drop[END_REF]:
$\dot V = -4\beta R$ .
(3.1)
Here β is the evaporation parameter which can be approximated as β = D (ρ sat -ρ ∞ ) /ρ for thin drops [START_REF] Cazabat | Evaporation of macroscopic sessile droplets[END_REF] with ρ sat the saturation vapour density, ρ ∞ the vapour density far from the drop (ρ ∞ = 0 for alkanes) and D the vapour diffusion coefficient. Solving the resulting differential equation for V , and substituting back into Tanner's law to find R, we find
$R \propto \left(t_0^{11/10} - t^{11/10}\right)^{3/7} t^{1/10}$ . (3.2)
Figure 3 shows (3.2) as the dot-dashed line, with both a prefactor and t 0 as adjustable parameters. Clearly, this simple theory is unable to describe the drop dynamics satisfactorily, demonstrating that evaporation must be included into the description of the contact line dynamics itself, rather than including it phenomenologically.
To do better, one needs to solve the viscous flow problem in the drop coupled to the evaporation which is limited by the diffusion of vapour. For thin isothermal drops, the flow is simplified through the lubrication approximation and the flow profile is parabolic. The evolution of the drop shape is given by mass conservation (in axisymmetric coordinates),
$\dfrac{\partial h}{\partial t} + \dfrac{1}{r}\dfrac{\partial}{\partial r}\left(\dfrac{h^3 r}{3\eta}\,\dfrac{\partial p}{\partial r}\right) = -j_{ev}, \qquad j_{ev} = -\dfrac{D}{\rho}\,\dfrac{\partial \rho_v}{\partial z},$ (3.3)
p is the pressure driving the flow and j ev the local volume flux induced by the diffusion limited evaporation (ρ v is the vapour density). At the macroscopic scale, the pressure is simply the Laplace pressure and the vapour concentration is given by Laplace's equation ∇ 2 ρ v = 0 with boundary condition ρ v = ρ sat at the drop surface and ρ v = ρ ∞ far from the drop. Approximating the thin drop as a disc allows to compute the vapour field ρ v and the volume flux [START_REF] Jackson | Classical Electrodynamics[END_REF]). However, this macroscopic description suffers from the usual viscous stress divergence at the contact line, also present without evaporation [START_REF] Bonn | Wetting and spreading[END_REF][START_REF] Eggers | Singularities: formation, structure, and propagation[END_REF]. In addition, j ev is also singular at r = R (the divergence persists for spherical caps with low contact angle [START_REF] Deegan | Contact line deposits in an evaporating drop[END_REF]). To deal with the problem one has to introduce microscopic effects to regularize the singularities.
$j_{ev} = 2\beta/\left(\pi\sqrt{R^2 - r^2}\right)$
A first attempt was made by [START_REF] Poulard | Rescaling the dynamics of evaporating drops[END_REF] using scaling arguments. They introduce the distance from the contact line where van der Waals forces balance capillary forces and assume that the evaporation rate saturates below this scale. The resulting model being based on scaling arguments, [START_REF] Poulard | Rescaling the dynamics of evaporating drops[END_REF] did not perform a direct comparison with experiments. Nonetheless, their model predicts power laws for R(t) during the retraction stage that agree with the ones observed experimentally.
More recently, [START_REF] Eggers | Nonlocal description of evaporating drops[END_REF] introduced van der Waals forces selfconsistently in the coupled problem through a disjoining pressure term Π = A/ 6πh 3 = γa 2 /h 3 (A is the Hamaker constant, and a is a microscopic length). A consequence is that far from the drop, (attractive) van der Waals interactions compete with evaporation to condensate a microscopic prewetting film whose thickness h f is given by a balance of evaporative to disjoining pressure h 3 f = γa 2 / (ρR s T ln (ρ ∞ /ρ sat )) with R s the specific gas constant. In addition, [START_REF] Eggers | Nonlocal description of evaporating drops[END_REF] included the Kelvin effect which takes the local curvature of the drop into account. They showed that taking these effects into account regularizes the evaporative singularity as it inhibits evaporation. Equation (3.3) then becomes:
∂h ∂t + γ ηr ∂ ∂r h 3 r 3 ∂ ∂r ∂ 2 h ∂r 2 + 1 r ∂h ∂r + a 2 h 3 = -j ev , j ev = β r ∂ ∂r ∞ 0 K(r, r ′ ) ∂ ∂r ′ h f h 3 dr ′ .
(3.4)
The kernel is given by
$K(r, r') = \dfrac{2}{\pi}\begin{cases} r\left[K(r'/r) - E(r'/r)\right], & r' < r \\ r'\left[K(r/r') - E(r/r')\right], & r' > r \end{cases}$ (3.5)
where K and E are the complete elliptic integrals. In the quasi-static limit, which means that when considering evaporation, the time derivative in (3.4) is neglected, [START_REF] Morris | On the contact region of a diffusion-limited evaporating drop: a local analysis[END_REF] shows that the contact region can be described analytically in the case of vanishing L . This allows for the exact computation of the cut-off length introduced by hand by [START_REF] Poulard | Rescaling the dynamics of evaporating drops[END_REF] and the determination of the evaporative angle [START_REF] Morris | On the contact region of a diffusion-limited evaporating drop: a local analysis[END_REF], see appendix B for the derivation of the closed form solution presented below). The result is
$\theta_{ev} = k\left(\dfrac{\eta\beta}{\gamma\, a^{1/2} R^{1/2}}\right)^{1/3},$ (3.6)
where k can be assumed constant; its expression, together with that of the Laplace parameter L, is given in Eq. (3.7) below. Motion is such that the apparent contact angle θ is driven toward θ_ev according to the Cox-Voinov law (3.8) [START_REF] Eggers | Singularities: formation, structure, and propagation[END_REF].
$k = \dfrac{1.47758\; 2^{1/6}}{\pi^{1/3}}\left[W\!\left(\dfrac{4}{L^{3/2}}\right)\right]^{1/6}, \qquad L = \dfrac{2^{2/3}\,\rho_{sat}^{4/3}\,(\eta\beta)^{16/9}}{\pi^{4/9}\, a^{26/9}\, R^{2/9}\, \gamma^{4/9}\left(R_s T \rho\,(\rho_{sat} - \rho_\infty)\right)^{4/3}},$ (3.7)
$\dot R = \dfrac{\gamma}{9B\eta}\left[\left(\dfrac{4V}{\pi R^3}\right)^3 - \theta_{ev}^3\right],$ (3.8)
where B is the usual logarithmic molecular cut-off also present without evaporation (see appendix B).
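With the model now fully stated, its qualitative behaviour (spreading, reversal, finite-time vanishing) can be reproduced by a very simple explicit time integration; the parameter values below are rough, octane-like assumptions, not the fitted experimental ones.

```python
import math

# Explicit Euler integration of the reduced model (3.1), (3.6), (3.8).
gamma, eta = 21.6e-3, 0.55e-3     # N/m, Pa.s
beta = 2.0e-10                    # m^2/s, evaporation parameter (assumed)
a = 4.0e-10                       # m, molecular length scale
B, k = 5.6, 1.62                  # logarithmic cut-off and prefactor of (3.6)

R, V = 1.0e-3, 0.3e-9             # m, m^3: initial contact radius and volume
dt, t, R_max = 1.0e-3, 0.0, 0.0
for _ in range(2_000_000):
    if V <= 0.0 or R <= 2.0e-5:
        break
    theta_ev = k * (eta * beta / (gamma * math.sqrt(a * R))) ** (1.0 / 3.0)
    theta = 4.0 * V / (math.pi * R ** 3)
    R += gamma / (9.0 * B * eta) * (theta ** 3 - theta_ev ** 3) * dt
    V += -4.0 * beta * R * dt
    t += dt
    R_max = max(R_max, R)
print(f"R_max ~ {R_max * 1e3:.2f} mm, drop gone after ~ {t:.0f} s")
```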
The equations (3.1),(3.6),(3.8) derived here are the same as the scaling analysis proposed in [START_REF] Poulard | Rescaling the dynamics of evaporating drops[END_REF]; however because of the nature of their analysis, they were unable to calculate the prefactors and the discussion remained qualitative. Here we have done the full analysis; the evaporative cut-off is calculated by including the effect of disjoining pressure self consistently [START_REF] Morris | On the contact region of a diffusion-limited evaporating drop: a local analysis[END_REF].
We have simulated the complete equations of motion (3.4),(3.5) with parameters that lie in the quasi-static regime, and compared it to the model (3.1),(3.6),(3.8) in figure 4 (lengths are rescaled with the initial drop radius R 0 and time with R 0 η/γ). The model shows a very good agreement with the simulation with a fitted value k = 1.9 close to the one predicted by equation (3.7): k = 1.69. Now, comparing the model to experimental data, we measure the evaporation parameter β using (3.1) and estimate a ≈ 4 Å using Lifshitz theory [START_REF] Israelachvili | Evaporation rates of pure liquids measured using a gravimetric technique[END_REF]); B in (3.8) is calculated from (B 7) and varies between 5.38 and 6.03 (table 2). The range being narrow we use the mean value 5.6 for all our experimental comparisons. Similarly we calculate the parameter k in (3.6) from (3.7). It varies between 1.42 and 1.69 (table 2) and we use an intermediate value of 1.62 for all our experiments.
Figures 3 and 5 compare R(t) from the model with the experimental data for various alkanes. We find excellent agreement for slowly evaporating alkanes, heptane to nonane, without any free parameters, cf. figure 5 (a). In addition, within the experimental accuracy, the macroscopic contact angle θ(t) is also well described by these equations, and approaches a constant steady-state value at late times (inset of figure 5 (a)). For pentane and hexane, for which evaporation is very rapid, the drop hardly spreads and the contact line recedes quickly (figure 5 (b)). During the short spreading time, θ decreases significantly (inset of figure 5 (b)); this means that ∂h/∂t is not small, and the quasi-static assumption used in the model is not valid. Moreover, the cooling due to evaporation increases with the evaporation rate β, and neglecting the temperature gradient and resulting Marangoni flows becomes incorrect (see appendix A for a critical discussion of the model's assumptions). As a result, the model is only able to reproduce these dynamics qualitatively, as it overestimates the spreading motion at short times.
As for the contact angle, the breakdown of the quasi-static and isothermal assumptions means that the measured angle does not necessarily converge to θ ev given by (3.6). Nevertheless, the experimental contact angle θ reaches a steady-state value close to θ ev . We plot this value as a function of β for the different alkanes in figure 6. Within the experimental uncertainties (that become significant when the drop is very thin) one retrieves the 1/3rd power law predicted by (3.6) with the correct prefactor.
Conclusion
In summary, we studied the dynamics of perfectly wetting, volatile fluids on a solid substrate for a wide range of evaporation rates. Taking into account both spreading dynamics as well as using a consistent description of evaporation near the contact line, we were able to obtain a quantitative agreement between our parameter free model and experiments during both spreading and retraction phases for slowly evaporating alkanes. For very volatile liquids, the agreement is only qualitative as temperature gradients and dynamic effects, neglected in the model, become significant. This work received a financial grant from the "French Agence Nationale de la Recherche"; project ANR-13-BS09-0026. (mm) (mm) (10 -9 m 2 s -1 ) (10 -9 m 2 s -1 ) (K) (mm s
$Ma = \dfrac{d\gamma}{dT}\,\dfrac{\Delta T\, R}{\alpha_l\, \eta}$
above a critical value M a c ∼ 10 2 . Here α l is the thermal diffusivity of the liquid and is of the order 10 -7 m 2 s -1 for alkanes while dγ/dT ≈ 10 -4 N m -1 . We evaluate both ∆T and M a for our drops in table 1 using the initial values h 0 , R 0 for the drop shape and our measured value of β. We see that the temperature gradient is very large for pentane drops and still significant for hexane drops. This produces Marangoni numbers well above the instability threshold for these drops. Thus our neglect of the Marangoni stress in the lubrication analysis partly explains the discrepancy between model and experiment for the spreading dynamics and evaporative angle. For heptane drops M a ∼ M a c , however, because the model is able to reproduce the experimental data and the evaluation of M a is approximate, we believe these Marangoni effects are still negligible.
To evaluate the quasi-static assumption we calculate characteristic time scales of our drops and compare them to the evaporation duration t 0 . The characteristic time for heat equilibration inside the drop is t heat ∼ h 2 0 /α l . Because the initial drop height h 0 decreases with the chain length (see table 1), so does t heat , which varies between 0.05 t heat (s) 0.5. The drying time t 0 , however increases with the chain length 5 t 0 (s) 200. Since t 0 >> t heat , we can neglect the time dependence of the temperature in the drop. Similarly the characteristic time for the velocity inside the drop to reach a steady state is t mom ∼ h 2 0 ρ/η. It also decreases with the chain length of the alkane and varies between 5 10 -3 t mom (s) 0.2. Again, since the drying time t 0 is much larger, the quasistatic approach is in general valid. However, the most unfavourable cases are pentane and hexane. For these fast evaporating alkane the spreading motion is very fast, and the reversal of the contact line occurs at t Rmax ∼ 0.5 s. During the spreading motion the system has not yet had time to reach a steady state. At this early stage ∂h/∂t is significantly higher for pentane and hexane (see table 1) and cannot be neglected. the drop θ ev in the asymptotic limit L → 0 (equation (6.3) of Morris ( 2014)) :
θ ev = 1.47758 2 √ 2 π ηβ γa 1/2 R 1/2 1/3 ℓ 1/4 1 . (B 1)
Here L is the Laplace parameter, which is a dimensionless surface tension controlling the coupled problem:
L = 2 2/3 ρ 4/3 sat (ηβ) 16/9 π 4/9 a 26/9 R 2/9 γ 4/9 (R s T ρ (ρ sat -ρ ∞ )) 4/3 , ( B 2)
and {ℓ 1 , h 1 } are the dimensionless location and height at which the capillary and disjoining pressures balance in the wetting film, respectively. These lengths are given by (see equation (5.10) of Morris ( 2014)):
L ℓ 1 h 4 1 = 1 and ℓ 1 = 3 2 ln h 1 2/3 . ( B 3)
Eliminating ℓ 1 from (B 3) we arrive at:
4 L 3/2 = 4 L 3/2 h 6 1 exp 4 L 3/2 h 6 1 , ( B 4)
which can be solved in terms of the Lambert W function, and we obtain:
h 1 = 4 L 3/2 W 4/L 3/2 1/6 , ℓ 1 = W 4/L 3/2 4 2/3 . ( B 5)
Replacing (B 5) in (B 1), we obtain the final closed form equation for the evaporative contact angle:
θ ev = k ηβ γa 1/2 R 1/2 1/3 with k = 1.47758 2 1/6 π 1/3 W 4 L 3/2 1/6 . ( B 6)
We show in table 2 the values obtained for L and k using (3.7) with the measured value of β and R 0 . Since R varies during an experiment, L and thus k are not strictly constant, yet their variations are small so we neglect them. For instance 1.9 10 -3 < L < 3.0 10 -3 and 1.60 < k < 1.62 for the octane drop of figure 5 (the minimum recorded drop radius is 0.3 mm). L ranges between 3 10 -4 for nonane and 8 10 -2 for pentane, which gives k ≈ 1.55 (see table 2). To compute k for the simulation we rescale length with R 0 and time with R 0 η/γ and rewrite (B 2) using dimensionless parameters : L = 2 2/3 β 16/9 h 4 f π 4/9 a 50/9 R 2/9 . In doing so we have used the linearised definition of h f as in [START_REF] Eggers | Nonlocal description of evaporating drops[END_REF] which strictly speaking is not correct when ρ ∞ = 0. The resulting prefactor k = 1.69 is very close to experimental ones despite the very different values of the parameters.
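The closed form for k can be checked with a few lines using SciPy's Lambert W function; the values of L used below are the orders of magnitude quoted in the text and Table 2.

```python
import math
from scipy.special import lambertw

# k = 1.47758 * 2**(1/6) / pi**(1/3) * W(4 / L**1.5)**(1/6), cf. Eq. (B 6).
for L in (3.0e-4, 2.0e-3, 8.0e-2):          # nonane ... octane ... pentane
    W = lambertw(4.0 / L ** 1.5).real
    k = 1.47758 * 2 ** (1 / 6) / math.pi ** (1 / 3) * W ** (1 / 6)
    print(f"L = {L:7.1e}  ->  k = {k:.2f}")
```

The resulting values (about 1.69, 1.62 and 1.42) reproduce the range of k reported in Table 2.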
Equation (B 6) predicts the experimental steady state contact angle quantitatively, except for pentane and hexane (see figure 5 (b)), for which a discrepancy larger than the uncertainties starts to appear. As discussed above some of the model's assumption break down for these drops, the isothermal assumption is incorrect and the quasi-static assumption becomes doubtful. We can also notice that L is much larger such that the limit L → 0 might not be reached. As in all moving contact line problems, the viscous stress in the vicinity of the contact line must be regularized in order to predict the drop radius as a function of time R(t). This is usually done by introducing a cut-off length, which results in a logarithmic prefactor B in the equation of motion in the contact line [START_REF] De Gennes | Wetting: statics and dynamics[END_REF][START_REF] Bonn | Wetting and spreading[END_REF]). In the very beginning of our experiment, spreading is dominant and the drop moves over a prewetting film. We thus use the cut-off length derived for spreading drops without evaporation [START_REF] Bonn | Wetting and spreading[END_REF][START_REF] Eggers | Singularities: formation, structure, and propagation[END_REF]:
$B = \ln\!\left[\dfrac{R}{1.38\, e^2\, a}\left(\dfrac{\eta \dot R}{\gamma}\right)^{2/3}\right]$ (B 7)
and consider it constant throughout the experiment for simplicity (although it was not derived for an evaporating receding contact line). Table 2 shows the values of B we obtain using the initial values of the experiments.
Figure 1. Top (a)-(b) and side (c)-(d) views of evaporating and spreading drops on clean glass surfaces. (a) Pentane drop, scale bar 5 mm, dt = 6 s. (b) Heptane drop, scale bar 2 mm, dt = 10 s. (c) Pentane drop, R = 1.4 mm. (d) Heptane drop, R = 2.6 mm; the images include the drop's reflection on the solid surface, the red line is a spherical cap fit.
Figure 2. Mass of the drop vs. radius cubed; straight lines indicate a constant contact angle. Pentane (C5): green diamonds; hexane (C6): black squares; heptane (C7): blue circles; octane (C8): red triangles.
Figure 3. Spreading and evaporation of a 0.32 µL heptane drop with a best fit of (3.2) as the dot-dashed line (prefactor = 0.36, t0 = 22.6), and our model as the red solid curve. (inset) Same data on a linear scale.
Figure 4. Dimensionless radius as a function of the dimensionless time for an evaporating drop, with β = 5 10⁻³, a = 10⁻³ and h_f = 10⁻⁴. The solid line is the simulation and the dashed line is the theory (B ≈ 2.1 with these parameters).
Figure 5. Dimensionless radius of spreading and evaporating alkane drops. (inset) Measured contact angle for the same data set; the uncertainties are not reported here for clarity (see figure 6). (a) Blue circles: heptane; red triangles: octane; black diamonds: nonane. (b) Pink open squares: pentane; green open triangles: hexane.
Figure 6. Steady-state contact angle as a function of the evaporation parameter β for the different alkanes.
Table 1. Parameters used to assess the validity of our assumptions; the experimental values correspond to the experiment presented in figure 5. D values at T = 22 °C come from [START_REF] Berezhnoi | Binary diffusion coefficients of liquid vapors in gases[END_REF][START_REF] Israelachvili | Evaporation rates of pure liquids measured using a gravimetric technique[END_REF], P_sat values from [START_REF] Carruth | Vapor pressure of normal paraffins ethane through n-decane from their triple points to about 10 mm mercury[END_REF][START_REF] Israelachvili | Evaporation rates of pure liquids measured using a gravimetric technique[END_REF].
...ratio is larger for drops which evaporate faster, we immediately see that they have the largest temperature gradients. These temperature gradients can induce Marangoni flows, which appear for values of the Marangoni number
Table 2. Parameters deduced from the model for figure 5.

          pentane    hexane     heptane    octane     nonane
a (Å)     4.29       3.96       3.95       3.85       3.78
L         8.0 10⁻²   1.8 10⁻²   6.5 10⁻³   2.0 10⁻³   3.0 10⁻⁴
k         1.42       1.51       1.57       1.62       1.69
B         5.38       5.75       5.58       6.03       5.52
† Email address for correspondence: [email protected]
Appendix A
In our model we assume the system to be isothermal (and thus without any Marangoni flows), the drop movement to be quasi-static and the transport of vapour to be purely diffusive (neglecting convection and kinetic effects). We will now discuss the validity of these assumptions using the dimensionless groups as proposed in [START_REF] Larson | Transport and deposition patterns in drying sessile droplets[END_REF].
Because the experiments are carried out in a box to limit air flows around the drop, the only source of convection in our experiment is the natural convection due to the buoyancy of the alkane vapour in the air. The Grashof number Gr, which balances buoyancy with viscous forces, controls the strength of this natural convection. Assuming the gas to be ideal and the vapour concentration to be the saturation concentration, we have
where ρ air is the air density and ν air the air kinematic viscosity. [START_REF] Kelly-Zion | Evaporation of sessile drops under combined diffusion and natural convection[END_REF] studied the effect of natural convection in evaporating sessile drops with pinned contact line. They found the empirical relationship
which differs significantly from the purely diffusive case if 0.310Gr 0.216 > 1. We show in table 1 the Grashof number corresponding to our experiments (using R 0 for the drop radius). According to [START_REF] Kelly-Zion | Evaporation of sessile drops under combined diffusion and natural convection[END_REF], the maximum convective we can expect (as R decreases after the spreading phase) ranges between 0.5 and 0.9 times the diffusive contribution, which is neither dominant nor negligible. Nonetheless our V = f (R) data are fairly linear, but with a coefficient β somewhat higher than Dρ sat /ρ, the value expected in the purely diffusive case (see table 1, ρ ∞ = 0 for alkanes). Although this is difficult to quantify given our experimental uncertainties, there might be a little bit of convection in some of our experiments. Though this is not the reason why the model fails to describe the dynamics for short alkanes. For hexane and pentane the discrepancies are small and replacing (3.1) by (A 1) does not improve the model significantly. Thus we keep the pure diffusion approximation in our model, but we use the measured β directly instead of using the predicted value Dρ sat /ρ. If the diffusion of the vapour in the ambient gas is very fast, for instance under reduced pressure or if the ambient gas is pure vapour, then the evaporation can become affected by kinetic effects: the transfer of molecules from the liquid to the vapour at the interface (given by the Hertz-Knudsen relation). This effect reduces the concentration of vapour at the liquid-vapour interface and thus the overall evaporation rate. However, scaling arguments [START_REF] Cazabat | Evaporation of macroscopic sessile droplets[END_REF][START_REF] Larson | Transport and deposition patterns in drying sessile droplets[END_REF]) and numerical simulations [START_REF] Semenov | Computer simulations of evaporation of pinned sessile droplets: Influence of kinetic effects[END_REF] have shown that this effect is negligible for common liquids in ambient air, except for microscopic droplets. Moreover, pure diffusion predicts our measured evaporation rate satisfactorily so we neglect interfacial kinetic effects.
The possible temperature gradients come from the heat loss due to latent heat of evaporation whose average rate is ρ V ∆H vap /πR 2 , with ∆H vap the heat of vaporisation per unit mass. This flux must be balanced by steady-state heat conduction from the substrate of the order of k l ∆T /h with k l the liquid thermal conductivity. Equating the two allows to evaluate the temperature gradient
For alkanes ∆H vap ≈ 3.5 10 5 J kg -1 and k l ≈ 0.13 W m -1 K -1 , the temperature gradients are thus directly proportional to β times the drop aspect ratio h/R. As the drop aspect
Appendix B
With a local analysis of the coupled equations (3.4) in the vicinity of the contact line and in the quasi-static limit, [START_REF] Morris | On the contact region of a diffusion-limited evaporating drop: a local analysis[END_REF] predicted the macroscopic contact angle of | 29,856 | [
"776701",
"1525"
] | [
"29042",
"29042",
"29042",
"949",
"20498",
"29042"
] |
01176641 | en | [
"info"
] | 2024/03/05 22:32:16 | 2015 | https://theses.hal.science/tel-01176641v2/file/Urban-2015-These.pdf | The overall aim of this thesis is the development of mathematically sound and practically e cient methods for automatically proving the correctness of computer software. More specifically, this thesis is grounded in the theory of Abstract Interpretation, a powerful mathematical framework for approximating the behavior of programs. In particular, this thesis focuses on proving program liveness properties, which represent requirements that must be eventually or repeatedly realized during program execution.
Program termination is the most prominent liveness property. This thesis designs new program approximations, in order to automatically infer sucient preconditions for program termination and synthesize so called piecewisedefined ranking functions, which provide upper bounds on the waiting time before termination. The approximations are parametric in the choice between the expressivity and the cost of the underlying approximations, which maintain information about the set of possible values of the program variables along with the possible numerical relationships between them. This thesis also contributes an abstract interpretation framework for proving liveness properties, which comes as a generalization of the framework proposed for termination. In particular, the framework is dedicated to liveness properties expressed in temporal logic, which are used to ensure that some desirable event happens once or infinitely many times during program execution. As for program termination, piecewise-defined ranking functions are used to infer su cient preconditions for these properties, and to provide upper bounds on the waiting time before a desirable event.
The results presented in this thesis have been implemented into a prototype analyzer. Experimental results show that it performs well on a wide variety of benchmarks, it is competitive with the state of the art, and is able to analyze programs that are out of the reach of existing methods.
Les résultats présentés dans cette thèse ont été mis en oeuvre dans un prototype d'analyseur. Les résultats expérimentaux montrent qu'il donne de bons résultats sur une grande variété de programmes, il est compétitif avec l'état de l'art, et il est capable d'analyser des programmes qui sont hors de la portée des méthodes existantes.
Résumé
L'objectif général de cette thèse est le développement de méthodes mathématiques correctes et e caces en pratique pour prouver automatiquement la correction de logiciels. Plus précisément, cette thèse est fondée sur la théorie de l'Interprétation Abstraite, un cadre mathématique puissant pour l'approximation du comportement des programmes. En particulier, cette thèse se concentre sur la preuve des propriétés de vivacité des programmes, qui représentent des conditions qui doivent être réalisés ultimement ou de manière répétée pendant l'exécution du programme.
La terminaison des programmes est la propriété de vivacité la plus fréquement considérée. Cette thèse conçoit des nouvelles approximations, afin de déduire automatiquement des conditions su santes pour la terminaison des programmes et synthétiser des fonctions de rang définies par morceaux, qui fournissent des bornes supérieures sur le temps d'attente avant la terminaison. Les approximations sont paramétriques dans le choix entre l'expressivité et le coût des approximations sous-jacentes, qui maintiennent des informations sur l'ensemble des valeurs possibles des variables du programme ainsi que les relations numériques possibles entre elles.
Cette thèse développe également un cadre d'interprétation abstraite pour prouver des propriétés de vivacité, qui vient comme une généralisation du cadre proposé pour la terminaison. En particulier, le cadre est dédié à des propriétés de vivacité exprimées dans la logique temporelle, qui sont utilisées pour s'assurer qu'un événement souhaitable se produit une fois ou une infinité de fois au cours de l'exécution du programme. Comme pour la terminaison, des fonctions de rang définies par morceaux sont utilisées pour déduire des préconditions su santes pour ces propriétés, et fournir des bornes supérieures sur le temps d'attente avant un événement souhaitable. In the last decades, software took a growing importance into all kinds of systems. As we rely more and more on software, the consequences of a bug are more and more dramatic, causing great financial and even human losses. A notorious example is the spectacular Ariane 5 failure 1 caused by an integer overflow, which resulted in more than $370 million loss. More recent examples are the Microsoft Zune Z2K bug 2 and the Microsoft Azure Storage service interruption 3 both due to non-termination, the Toyota unintended acceleration 4 caused by a stack overflow, and the Heartbleed security bug 5 .
Testing
The most widespread (and in many cases the only) method used to ensure the quality of software is testing. Many testing methods exist, ranging from blackbox and white-box testing to unit and integration testing. They all consist in executing parts or the whole of the program with selected or random inputs in a controlled environment, while monitoring its execution or its output. Achieving an acceptable level of confidence with testing is generally costly and, even then, testing cannot completely eliminate bugs. Therefore, while in some cases it is acceptable to ship potentially erroneous programs and rely on regular updates to correct them, this is not the case for mission critical software which cannot be corrected during missions.
Formal Methods
Formal methods, on the other hand, try to address these problems by providing rigorous mathematical guarantees that a program preserves certain properties. The idea of formally reasoning about programs dates back to the early history of computer science: program proofs and invariants are attributed to Robert W. Floyd [START_REF] Floyd | Assigning Meanings to Programs[END_REF] and Tony Hoare [START_REF] Antony | An Axiomatic Basis for Computer Programming[END_REF] in the late 1960s, but may be latent in the work of Alan Turing in the late 1940s [START_REF] Turing | Checking a Large Routine[END_REF][START_REF] Morris | An Early Program Proof by Alan Turing[END_REF]. Current methods can be classified into three categories [START_REF] Cousot | A Gentle Introduction to Formal Verification of Computer Systems by Abstract Interpretation[END_REF]: deductive methods, model checking, and static analysis.
Formal deductive methods employ proof assistants such as Coq [START_REF] Bertot | Interactive Theorem Proving and Program Development[END_REF], or theorem provers such as PVS [START_REF] Owre | PVS: A Prototype Verification System[END_REF] to prove the correctness of programs. These methods rely on the user to provide the inductive arguments needed for the proof, and sometimes to interactively direct the proof itself.
Formal methods based on model checking [START_REF] Clarke | Design and Synthesis of Synchronization Skeletons using Branching-Time Temporal Logic[END_REF][START_REF] Queille | Specification and Verification of Concurrent Systems in CESAR[END_REF] explore exhaustively and automatically finite models of programs so as to determine whether undesirable error states are accessible. Various successful model checking algorithms based on SAT-solving, such as IC3 [START_REF] Bradley | SAT-Based Model Checking without Unrolling[END_REF], have been developed to complement the traditional model checking based on Binary Decision Diagrams [START_REF] Bryant | Graph-based Algorithms for Boolean Function Manipulation[END_REF]. A major difficulty of this approach is to synthesize the model from the program and the property to check. In particular, the size of the model is critical for the checking phase to be practical.
Formal static analysis methods analyze directly and without user intervention the program source code at some level of abstraction. Due to decidability and efficiency concerns, the abstraction is incomplete and can result in false alarms or false positives (that is, correct programs may be reported as incorrect) but never false negatives. The programs that are reported as correct are indeed correct despite the approximation. The most prominent static analysis methods are based on Abstract Interpretation [START_REF] Cousot | Abstract Interpretation: a Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints[END_REF].
Abstract Interpretation. Abstract Interpretation [START_REF] Cousot | Abstract Interpretation: a Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints[END_REF] is a general theory for approximating the behavior of programs, developed by Patrick Cousot and Radhia Cousot in the late 1970s, as a unifying framework for static program analysis. It stems from the observation that, to reason about a particular program property, it is not necessary to consider all aspects and details of the program behavior. In fact, reasoning is facilitated by the design of a welladapted semantics, abstracting away from irrelevant details. Therefore, there is no universal general-purpose program semantics but rather a wide variety of special-purpose program semantics, each one of them dedicated to a particular class of program properties and reduced to essentials in order to ignore irrelevant details about program executions.
In the past decade, abstract interpretation-based static analyzers began to have an impact in real-world software development. This is the case, for instance, of the static analyzer Astrée [BCC + 10], which is used daily by industrial end-users in order to prove the absence of run-time errors in embedded synchronous C programs. In particular, by carefully designing the abstractions used for the analysis, Astrée has been successfully used to prove, with zero false alarm, the absence of run-time errors in the primary flight control software of the fly-by-wire systems of the Airbus A340 and A380 airplanes.
We provide a formal introduction to Abstract Interpretation in Chapter 2 and we recall the main results used in this thesis, which are later illustrated on a small idealized programming language in Chapter 3.
Program Properties
Leslie Lamport, in the late 1970s, suggested a classification of program properties into the classes of safety and liveness properties [START_REF] Lamport | Proving the Correctness of Multiprocess Programs[END_REF]. Each class encompasses properties of similar character, and every program property can be expressed as the intersection of a safety and a liveness property.
Safety Properties
The class of safety properties is informally characterized as the class of properties stating that "something bad never happens", that is, a program never reaches an unacceptable state.
Safety properties represent requirements that should be continuously maintained by the program. Indeed, in order to prove safety properties, an invariance principle [START_REF] Floyd | Assigning Meanings to Programs[END_REF] can be used. A counterexample to a safety property can always be witnessed by observing finite (prefixes of) program executions.
Examples of safety properties include program partial correctness, which guarantees that all terminating computations produce correct results, and mutual exclusion, which guarantees that no two concurrent processes enter their critical section at the same time.
Liveness Properties
The class of liveness properties, on the other hand, is informally characterized as the class of properties stating that "something good eventually happens", that is, a program eventually reaches a desirable state.
Liveness properties typically represent requirements that need not hold continuously, but whose eventual or repeated realization must be guaranteed. They are usually proved by exhibiting a well-founded argument, often called ranking function, which provides a measure of the distance from the realization of the requirement [START_REF] Turing | Checking a Large Routine[END_REF][START_REF] Floyd | Assigning Meanings to Programs[END_REF]. Note that, a counterexample to a liveness property cannot be witnessed by observing finite program executions since these can always be extended in order to satisfy the liveness property.
Examples of liveness properties are program termination, which guarantees that all program computations are terminating, and starvation freedom, which guarantees that a process will eventually enter its critical section.
Termination
The halting problem, the problem of determining whether a given program will always finish to run or could potentially execute forever, rose to prominence before the invention of programs or computers, in the era of the Entscheidungsproblem posed by David Hilbert: the challenge to formalize all mathematics into logic and use mechanical means to determine the validity of mathematical statements. In hopes of either solving the challenge, or showing it impossible, logicians and mathematicians began to search for possible instances of undecidable problems. The undecidability of the halting problem, proved by Alan Turing [START_REF] Turing | On Computable Numbers, with An Application to the Entscheidungs Problem[END_REF], is the most famous of those findings. In Appendix A.1, we propose a proof of the undecidability of the halting problem reinterpreted by the linguist Geoffrey K. Pullum.
However, relaxing the halting problem by asking for a sound but not necessarily complete solution, allows us to overcome the barrier of its undecidability. Indeed, in the recent past, termination analysis has benefited from many research advances and powerful termination provers have emerged over the years [BCF13, CPR06, GSKT06, HHLP13, [START_REF] Le | Termination and Non-Termination Specification Inference[END_REF]etc.]. This thesis stems from [START_REF] Cousot | An Abstract Interpretation Framework for Termination[END_REF], where Patrick Cousot and Radhia Cousot proposed a unifying point of view on the existing approaches for proving program termination, and introduced the idea of the inference of a ranking function by abstract interpretation. We recall and revise their work in Chapter 4.
Chapter 5 and Chapter 6 are devoted to the construction of new abstractions dedicated to program termination. More precisely, Chapter 5 presents abstractions based on piecewise-defined ranking functions, while Chapter 6 presents abstractions based on ordinal-valued ranking functions. Some of the results described in Chapter 5 and Chapter 6 have been the subject of publications in international workshops and conferences [Urb13a, Urb13b, UM14a, UM14c, UM14b] and are presented here with many extensions. In Chapter 7, we detail how these abstractions can be used for proving termination of recursive programs. The implementation of our prototype static analyzer [START_REF] Urban | FuncTion: An Abstract Domain Functor for Termination (Competition Contribution)[END_REF] and the most recent experimental evaluation [START_REF] Vijay | Conflict-Driven Abstract Interpretation for Conditional Termination[END_REF] are described in Chapter 8.
Liveness Properties
For reactive programs, which may never terminate, the spectrum of relevant and useful liveness properties is much richer than the single property of termination. For example, it also includes the guarantee that a certain event occurs infinitely many times.
In general, these liveness properties are satisfied only under fairness hypotheses, that is, a restriction on some infinite behavior according to eventual occurrence of some events. A common property of these fairness notions is that they all imply that, under certain conditions, each of a number of alternative or competing events occur infinitely often in every infinite behavior of the system considered. Nissim Francez distinguishes three main subclasses, depending on the condition guaranteeing the eventual occurrence [START_REF] Francez | Fairness[END_REF]: unconditional fairness guarantees that for each behavior each event occurs infinitely often without any further condition; weak fairness implies that an event will not be indefinitely postponed provided that it remains continuously enabled; and strong fairness guarantees eventual occurrence under the condition of being enabled infinitely often, but not necessarily continuously.
In Chapter 9, we generalize the abstract interpretation framework proposed for termination by Patrick Cousot and Radhia Cousot [START_REF] Cousot | An Abstract Interpretation Framework for Termination[END_REF], to other liveness properties. In particular, we focus on two classes of program properties proposed by Zohar Manna and Amir Pnueli [START_REF] Manna | A Hierarchy of Temporal Properties[END_REF]: the class of guarantee properties informally characterized as the class of properties stating that "something good happens at least once", and the class of recurrence properties informally characterized as the class of properties stating that "something good happens infinitely often". In order to effectively prove these properties we reuse the abstractions proposed in Chapter 5 and Chapter 6. The results described in Chapter 9 have been published in [START_REF] Urban | Proving Guarantee and Recurrence Temporal Properties by Abstract Interpretation[END_REF].
Note that the guarantee provided by liveness properties that something good will eventually happen is not enough for mission critical software, where some desirable event is required to occur within a certain time bound. Indeed, these real-time properties are considered safety properties, and there is a general consensus that proving liveness properties is of limited interest for mission critical software. However, specifying and verifying real-time properties can be cumbersome, while liveness properties provide a nice approximation, without distracting timing assumptions, of real-time properties. The well-founded measures used to prove these liveness properties can then be mapped back to a real-time duration. In particular, this is the premise of ongoing research work conducted at NASA Ames Research Center where some of the theoretical work presented in this thesis is being used for the verification of avionics software.
We envision further future directions in Chapter 10.
II Safety
Basic Notions and Notations
In the following, we briefly recall well-known mathematical concepts in order to establish the notation used throughout this thesis.
Sets. A set S is defined as an unordered collection of distinct elements. We write s ∈ S (resp. s ∉ S) when s is (resp. is not) an element of the set S. A set is expressed in extension when it is uniquely identified by its elements: we write {a, b, c} for the set of elements a, b and c. The empty set is denoted by ∅. A set is expressed in comprehension when its elements are specified through a shared property: we write {x ∈ S | P(x)} for the set of elements x of the set S for which P(x) is true. The cardinality of a set S is denoted by |S|.

A set S is a subset of a set S′, written S ⊆ S′, if and only if every element of S is an element of S′. The empty set is a subset of every set. The power set P(S) of a set S is the set of all its subsets: P(S) ≜ {X | X ⊆ S}. The cartesian product of n sets S₁, …, Sₙ is the set of n-tuples S₁ × ⋯ × Sₙ ≜ {⟨x₁, …, xₙ⟩ | x₁ ∈ S₁ ∧ ⋯ ∧ xₙ ∈ Sₙ}, and Sⁿ ≜ {⟨x₁, …, xₙ⟩ | x₁ ∈ S ∧ ⋯ ∧ xₙ ∈ S} is the set of n-tuples of elements of S. A covering of a set S is a set P of non-empty subsets of S such that any element s ∈ S belongs to a set in P: S = ⋃P. A partition of a set S is a covering P such that any two sets in P are disjoint, that is, any element s ∈ S belongs to a unique set in P: ∀X, Y ∈ P : X ≠ Y ⇒ X ∩ Y = ∅.
Relations. A binary relation R between two sets A and B is a subset of the cartesian product A × B. We often write x R y for ⟨x, y⟩ ∈ R.

The following are some important properties which may hold for a binary relation R over a set S:

∀x ∈ S : x R x (reflexivity)
∀x ∈ S : ¬(x R x) (irreflexivity)
∀x, y ∈ S : x R y ⇒ y R x (symmetry)
∀x, y ∈ S : x R y ∧ y R x ⇒ x = y (anti-symmetry)
∀x, y, z ∈ S : x R y ∧ y R z ⇒ x R z (transitivity)
∀x, y ∈ S : x R y ∨ y R x (totality)
An equivalence relation is a binary relation which is reflexive, symmetric, and transitive. A binary relation which is reflexive (resp. irreflexive), antisymmetric, and transitive is called a partial order (resp. strict partial order ). A preorder is reflexive and transitive, but not necessarily anti-symmetric. A (strict) total order is a (strict) partial order which is total.
Ordered Sets. A partially ordered set or poset (resp. preordered set) ⟨D, ⊑⟩ is a set D equipped with a partial order (resp. preorder) ⊑. A finite partially ordered set ⟨D, ⊑⟩ can be represented by a Hasse diagram such as the one in Figure 2.1: each element x ∈ D is uniquely represented by a node of the diagram, and there is an edge from a node x ∈ D to a node y ∈ D if y covers x, that is, x ⊏ y and there exists no z ∈ D such that x ⊏ z ⊏ y. Hasse diagrams are usually drawn placing the elements higher than the elements they cover.
Let ⟨D, ⊑⟩ be a partially ordered set. The least element of the poset, when it exists, is denoted by ⊥: ∀d ∈ D : ⊥ ⊑ d. Similarly, the greatest element of the poset, when it exists, is denoted by ⊤: ∀d ∈ D : d ⊑ ⊤. Note that any partially ordered set can always be equipped with a least (resp. greatest) element by adding a new element that is smaller (resp. greater) than every other element. Let X ⊆ D. A maximal element of X is an element d ∈ X above which no element of X lies: ∀x ∈ X : d ⊑ x ⇒ x = d. When X has a unique maximal element, it is called maximum and it is denoted by max X. Dually, a minimal element of X is an element d ∈ X below which no element of X lies: ∀x ∈ X : x ⊑ d ⇒ x = d. When X has a unique minimal element, it is called minimum and it is denoted by min X. An upper bound of X is an element d ∈ D (not necessarily belonging to X) such that, for each x ∈ X, x ⊑ d. The least upper bound (or lub, or supremum) of X is an upper bound d ∈ D of X such that, for every upper bound d′ ∈ D of X, d ⊑ d′. When it exists, it is unique and denoted by ⊔X (or sup X). Dually, a lower bound of X is an element d ∈ D such that, for each x ∈ X, d ⊑ x. The greatest lower bound (or glb, or infimum) of X is a lower bound d ∈ D of X such that, for every lower bound d′ ∈ D of X, d′ ⊑ d. When it exists, it is unique and denoted by ⊓X (or inf X). Note that, the notions of maximal element, maximum, and least upper bound (resp. minimal element, minimum, and greatest lower bound) are different, as illustrated by the following example.

Example 2.1.1 Consider the poset ⟨P({a, b, c}), ⊆⟩ and let X = {{a}, {b}, {c}, {a, b}, {a, c}, {b, c}}. The maximal elements of X are {a, b}, {a, c}, and {b, c}. Thus the maximum max X of X does not exist, while its least upper bound is sup X = {a, b, c}. Similarly, the minimal elements of X are {a}, {b}, and {c}. Thus, the minimum min X of X does not exist, while its greatest lower bound is inf X = ∅.
A set equipped with a total order is a totally ordered set. A totally ordered subset C of a partially ordered set ⟨D, ⊑⟩ is called a chain. A partially ordered set ⟨D, ⊑⟩ satisfies the ascending chain condition (ACC) if and only if any infinite increasing chain x₀ ⊑ x₁ ⊑ ⋯ ⊑ xₙ ⊑ ⋯ of elements of D is not strictly increasing, that is, ∃k ≥ 0 : ∀j ≥ k : x_k = x_j. Dually, a partially ordered set ⟨D, ⊑⟩ satisfies the descending chain condition (DCC) if and only if any infinite decreasing chain x₀ ⊒ x₁ ⊒ ⋯ ⊒ xₙ ⊒ ⋯ of elements of D is not strictly decreasing, that is, ∃k ≥ 0 : ∀j ≥ k : x_k = x_j. A complete partial order or cpo ⟨D, ⊑⟩ is a poset where every chain C has a least upper bound ⊔C, which is called the limit of the chain. Note that, since the empty set ∅ is a chain, a complete partial order has a least element ⊥ = ⊔∅. Thus, a partially ordered set that satisfies the ascending chain condition and that is equipped with a least element is a complete partial order.
Lattices. A lattice ⟨D, ⊑, ⊔, ⊓⟩ is a poset where each pair of elements x, y ∈ D has a least upper bound, denoted by x ⊔ y, and a greatest lower bound, denoted by x ⊓ y. Any totally ordered set is a lattice. A complete lattice ⟨D, ⊑, ⊔, ⊓, ⊥, ⊤⟩ is a lattice where any subset X ⊆ D has a least upper bound ⊔X and a greatest lower bound ⊓X. A complete lattice has both a least element ⊥ = ⊔∅ and a greatest element ⊤ = ⊔D.

Example 2.1.2 The power set ⟨P(S), ⊆, ∪, ∩, ∅, S⟩ of any set S is a complete lattice.
Functions. A partial function f from a set A to a set B, written f : A ⇀ B, is a binary relation between A and B that pairs each element x ∈ A with no more than one element y ∈ B. The set of all partial functions from a set A to a set B is denoted by A ⇀ B. We write f(x) = y if there exists an element y such that ⟨x, y⟩ ∈ f, and we say that f(x) is defined, otherwise we say that f(x) is undefined. Given a partial function f : A ⇀ B, we define its domain as dom(f) ≜ {x ∈ A | ∃y ∈ B : f(x) = y}. The totally undefined function, denoted by ∅̇, has the empty set as domain: dom(∅̇) = ∅. The join of two partial functions f₁ : A ⇀ B and f₂ : A ⇀ B with disjoint domains, denoted by f₁ ∪ f₂ : A ⇀ B, is defined as follows:

f₁ ∪ f₂ ≜ λx ∈ A.  f₁(x)       if x ∈ dom(f₁)
                   f₂(x)       if x ∈ dom(f₂)
                   undefined   otherwise

where dom(f₁) ∩ dom(f₂) = ∅.
A (total ) function f from a set A to a set B, written f : A ! B, is a partial function that pairs each x 2 A with exactly one element y 2 B. Equivalently, a (total) function f : A ! B is a partial function such that dom(f ) = A. The set of all functions from a set A to a set B is denoted by A ! B.
We sometimes denote functions using the lambda notation λx ∈ A. f(x), or more concisely λx. f(x).

The composition of two functions f : A → B and g : B → C is another function g ∘ f : A → C such that ∀x ∈ A : (g ∘ f)(x) = g(f(x)).
The following properties may hold for a function f : A ! B: 8x, y 2 A : f (x) = f (y) ) x = y (injectivity) 8y 2 B : 9x 2 A : f (x) = y (surjectivity)
A bijective function, also called isomorphism, is both injective and surjective. Two sets A and B are isomorphic if there exists a bijective function f : A → B. The inverse of a bijective function f : A → B is the bijective function f⁻¹ : B → A defined as f⁻¹ ≜ {⟨b, a⟩ | ⟨a, b⟩ ∈ f}.
Let ⟨D₁, ⊑₁⟩ and ⟨D₂, ⊑₂⟩ be partially ordered sets. A function f : D₁ → D₂ is said to be monotonic when, for each x, y ∈ D₁, x ⊑₁ y ⇒ f(x) ⊑₂ f(y). It is continuous (or Scott-continuous) when it preserves existing least upper bounds of chains, that is, for each chain C ⊆ D₁, if ⊔C exists then f(⊔C) = ⊔{f(x) | x ∈ C}. Dually, it is co-continuous when it preserves existing greatest lower bounds of chains, that is, if ⊓C exists then f(⊓C) = ⊓{f(x) | x ∈ C}.

A complete ⊔-morphism (resp. complete ⊓-morphism) preserves existing least upper bounds (resp. greatest lower bounds) of arbitrary non-empty sets.
Pointwise Lifting. Given a complete lattice ⟨D, ⊑, ⊔, ⊓, ⊥, ⊤⟩ (resp. a lattice, a cpo, a poset) and a set S, the set S → D of all functions from S to D inherits the complete lattice (resp. lattice, cpo, poset) structure of D: ⟨S → D, ⊑̇, ⊔̇, ⊓̇, ⊥̇, ⊤̇⟩ where the dotted operators are defined by pointwise lifting:

f ⊑̇ g ≜ ∀s ∈ S : f(s) ⊑ g(s)
⊔̇X ≜ λs. ⊔{f(s) | f ∈ X}
⊓̇X ≜ λs. ⊓{f(s) | f ∈ X}
⊥̇ ≜ λs. ⊥
⊤̇ ≜ λs. ⊤                                        (2.1.1)
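As a concrete illustration of this construction, the following OCaml sketch (our own, not taken from the thesis's prototype; the module names, the use of association lists to represent functions with a finite domain, and the choice of string keys are all assumptions made only for the example) lifts an arbitrary lattice pointwise to finite maps, mirroring the dotted ⊑̇ and ⊔̇ of Equation (2.1.1).

(* A lattice signature and its pointwise lifting to finite maps. *)
module type LATTICE = sig
  type t
  val leq : t -> t -> bool
  val join : t -> t -> t
  val bot : t
end

(* Functions from a finite set of keys to D, as association lists;
   missing keys implicitly map to D.bot. *)
module Lift (D : LATTICE) = struct
  type key = string
  type t = (key * D.t) list

  let get (f : t) (s : key) : D.t =
    try List.assoc s f with Not_found -> D.bot

  (* f ⊑̇ g  iff  f(s) ⊑ g(s) for every key s bound in f *)
  let leq (f : t) (g : t) : bool =
    List.for_all (fun (s, v) -> D.leq v (get g s)) f

  (* (f ⊔̇ g)(s) = f(s) ⊔ g(s), computed key by key *)
  let join (f : t) (g : t) : t =
    let keys = List.sort_uniq compare (List.map fst f @ List.map fst g) in
    List.map (fun s -> (s, D.join (get f s) (get g s))) keys
end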
Ordinals. The theory of ordinals was introduced by Georg Cantor as the core of his set theory [START_REF] Cantor | Beiträge zur Begründung der Transfiniten Mengenlehre[END_REF][START_REF] Cantor | Beiträge zur Begründung der Transfiniten Mengenlehre[END_REF].
A binary relation R over a set S is well-founded when every non-empty subset of S has a least element with respect to R. A well-founded total order is called a well-order (or a well-ordering). A well-ordered set ⟨W, ≤⟩ is a set W equipped with a well-ordering ≤. Every well-ordered set is associated with a so-called order type. Two well-ordered sets ⟨A, ≤_A⟩ and ⟨B, ≤_B⟩ are said to have the same order type if they are order-isomorphic, that is, if there exists a bijective function f : A → B such that, for all elements x, y ∈ A, x ≤_A y if and only if f(x) ≤_B f(y).

An ordinal is defined as the order type of a well-ordered set and it provides a canonical representative for all well-ordered sets that are order-isomorphic to that well-ordered set. We use lower case Greek letters to denote ordinals. In fact, a well-ordered set ⟨W, ≤⟩ with order type α is order-isomorphic to the well-ordered set {x ∈ W | x < α} of all ordinals strictly smaller than the ordinal α itself. In particular, as suggested by John von Neumann [vN23], this property permits to define each ordinal as the well-ordered set of all ordinals that precede it: the smallest ordinal is the empty set ∅, denoted by 0. The successor of an ordinal α is defined as α ∪ {α} and is denoted by α + 1, or equivalently, by succ(α). Thus, the first successor ordinal is {0}, denoted by 1. The next is {0, 1}, denoted by 2. Continuing in this manner, we obtain all natural numbers, that is, all finite ordinals. A limit ordinal is an ordinal number which is neither 0 nor a successor ordinal. The set N of all natural numbers, denoted by ω, is the first limit ordinal and the first transfinite ordinal.

We use ⟨O, ≤⟩ to denote the well-ordered set of ordinals. In the following, we will see that the theory of ordinals is the most general setting for proving program termination.
Fixpoints. Given a partially ordered set hD, vi and a function f : D ! D, a fixpoint of f is an element x 2 D such that f (x) = x. An element x 2 D such that x v f (x) is called a pre-fixpoint while, dually, a post-fixpoint is an element x 2 D such that f (x) v x. The least fixpoint of f , written lfp f , is a fixpoint of f such that, for every fixpoint x 2 D of f , lfp f v x. We write lfp d f for the least fixpoint of f which is greater than or equal to an element d 2 D. Dually, we define the greatest fixpoint of f , denoted by gfp f , and the greatest fixpoint of f smaller than or equal to d 2 D, denoted by gfp d f . When the order v is not clear from the context, we explicitly write lfp v f and gfp v f . We now recall a fundamental theorem due to Alfred Tarski [START_REF] Tarski | A lattice-theoretical fixpoint theorem and its applications[END_REF]:
Theorem 2.1.3 (Tarski's Fixpoint Theorem) The set of fixpoints of a monotonic function f : D ! D over a complete lattice is also a complete lattice.
Proof.
See [START_REF] Tarski | A lattice-theoretical fixpoint theorem and its applications[END_REF].
⌅
In particular, the theorem guarantees that f has a least fixpoint lfp f = ⊓{x ∈ D | f(x) ⊑ x} and a greatest fixpoint gfp f = ⊔{x ∈ D | x ⊑ f(x)}. However, such fixpoint characterizations are not constructive. An alternative constructive characterization is often attributed to Stephen Cole Kleene:

Theorem 2.1.4 (Kleene's Fixpoint Theorem) Let ⟨D, ⊑⟩ be a complete partial order and let f : D → D be a Scott-continuous function. Then, f has a least fixpoint which is the least upper bound of the increasing chain ⊥ ⊑ f(⊥) ⊑ f(f(⊥)) ⊑ f(f(f(⊥))) ⊑ ⋯, i.e., lfp f = ⊔{fⁿ(⊥) | n ∈ N}.
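Read operationally, Kleene's theorem says that the least fixpoint can be reached by iterating f from ⊥ until stabilization. The following OCaml sketch (ours; the representation of sets as sorted lists and the toy transfer function are assumptions for the example) implements this iteration; it of course only terminates when the ascending chain stabilizes after finitely many steps.

(* Kleene iteration: ⊥ ⊑ f(⊥) ⊑ f(f(⊥)) ⊑ ...  The loop stops as soon
   as the next iterate is no longer strictly larger, i.e. f x ⊑ x. *)
let lfp ~leq ~bot (f : 'a -> 'a) : 'a =
  let rec iterate x =
    let y = f x in
    if leq y x then x else iterate y
  in
  iterate bot

(* Example: least solution of X = {0} ∪ {n+1 | n ∈ X, n < 5} over the
   powerset of integers, represented as sorted lists. *)
let () =
  let leq a b = List.for_all (fun x -> List.mem x b) a in
  let f xs =
    List.sort_uniq compare
      (0 :: xs @ List.filter_map (fun n -> if n < 5 then Some (n + 1) else None) xs)
  in
  let fix = lfp ~leq ~bot:[] f in
  List.iter (fun n -> Printf.printf "%d " n) fix;   (* prints 0 1 2 3 4 5 *)
  print_newline ()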
In case of monotonic but not continuous functions, a theorem by Patrick Cousot and Radhia Cousot [START_REF] Cousot | Constructive Versions of Tarski's Fixed Point Theorems[END_REF] expresses fixpoints as limits of possibly transfinite iteration sequences:

Theorem 2.1.5 Let f : D → D be a monotonic function over a complete partial order and let d ∈ D be a pre-fixpoint. Then, the iteration sequence:

f⁰ ≜ d                                             (zero case)
f^(α+1) ≜ f(f^α)                                   (successor case)
f^δ ≜ ⊔{f^α | α < δ}   for a limit ordinal δ       (limit case)

converges towards the least fixpoint lfp_d f.
Transition Systems. The semantics of a program is a mathematical characterization of all possible behaviors of the program when executed for all possible input data. In order to be independent from the choice of a particular programming language, programs are often formalized as transition systems:
Definition 2.2.1 (Transition System) A transition system is a pair h⌃, ⌧ i where ⌃ is a (potentially infinite) set of states and the transition relation ⌧ ✓ ⌃ ⇥ ⌃ describes the possible transitions between states.
In a labelled transition system h⌃, A, ⌧ i, A is a set of actions that are used to label the transitions described by the transition relation ⌧ ✓ ⌃ ⇥ A ⇥ ⌃.
Note that this model allows representing programs with (possibly unbounded) non-determinism. In the following, in order to lighten the notation, a transition ⟨s, s′⟩ ∈ τ (resp. a labelled transition ⟨s, a, s′⟩ ∈ τ) between a state s and another state s′ is sometimes written as s → s′ (resp. s →ₐ s′). In some cases, a set I ⊆ Σ is designated as the set of initial states. The set of blocking or final states is Ω ≜ {s ∈ Σ | ∀s′ ∈ Σ : ⟨s, s′⟩ ∉ τ}. We define the following functions to manipulate sets of program states.

Definition 2.2.2 Given a transition system ⟨Σ, τ⟩, post : P(Σ) → P(Σ) maps a set of program states X ∈ P(Σ) to the set of their successors with respect to the program transition relation τ:

post(X) ≜ {s′ ∈ Σ | ∃s ∈ X : ⟨s, s′⟩ ∈ τ}                               (2.2.1)

Definition 2.2.3 Given a transition system ⟨Σ, τ⟩, pre : P(Σ) → P(Σ) maps a set of program states X ∈ P(Σ) to the set of their predecessors with respect to the program transition relation τ:

pre(X) ≜ {s ∈ Σ | ∃s′ ∈ X : ⟨s, s′⟩ ∈ τ}                                (2.2.2)

Definition 2.2.4 Given a transition system ⟨Σ, τ⟩, post~ : P(Σ) → P(Σ) maps a set of states X ∈ P(Σ) to the set of states whose predecessors with respect to the program transition relation τ are all in the set X:

post~(X) = Σ \ post(Σ \ X) ≜ {s′ ∈ Σ | ∀s ∈ Σ : ⟨s, s′⟩ ∈ τ ⇒ s ∈ X}    (2.2.3)

Definition 2.2.5 Given a transition system ⟨Σ, τ⟩, pre~ : P(Σ) → P(Σ) maps a set of states X ∈ P(Σ) to the set of states whose successors with respect to the program transition relation τ are all in the set X:

pre~(X) = Σ \ pre(Σ \ X) ≜ {s ∈ Σ | ∀s′ ∈ Σ : ⟨s, s′⟩ ∈ τ ⇒ s′ ∈ X}     (2.2.4)
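For a finite transition system these four operators are directly computable. The following OCaml sketch (our own toy encoding, with states as integers, the relation τ as a list of pairs, and sets as sorted lists; none of these choices come from the thesis) implements Definitions 2.2.2–2.2.5 literally.

(* A finite transition system: a universe of states and a relation τ. *)
type 'a ts = { states : 'a list; tau : ('a * 'a) list }

let post ts xs =                       (* successors of states in xs *)
  List.sort_uniq compare
    (List.filter_map (fun (s, s') -> if List.mem s xs then Some s' else None) ts.tau)

let pre ts xs =                        (* predecessors of states in xs *)
  List.sort_uniq compare
    (List.filter_map (fun (s, s') -> if List.mem s' xs then Some s else None) ts.tau)

let diff xs ys = List.filter (fun x -> not (List.mem x ys)) xs

(* post~(X) = Σ \ post(Σ \ X): states all of whose predecessors are in X *)
let post_tilde ts xs = diff ts.states (post ts (diff ts.states xs))

(* pre~(X) = Σ \ pre(Σ \ X): states all of whose successors are in X *)
let pre_tilde ts xs = diff ts.states (pre ts (diff ts.states xs))

let () =
  let ts = { states = [1; 2; 3]; tau = [(1, 1); (1, 2); (2, 3)] } in
  assert (post ts [1] = [1; 2]);
  assert (pre ts [3] = [2]);
  assert (pre_tilde ts [1; 2] = [1; 3]);   (* 3 is blocking; 1's successors are 1 and 2 *)
  ignore (post_tilde ts [1])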
Maximal Trace Semantics
The semantics generated by a transition system is the set of computations described by the transition system. We formally define this notion below.
Sequences. Given a set S, the set Sⁿ ≜ {s₀ ⋯ sₙ₋₁ | ∀i < n : sᵢ ∈ S} is the set of all sequences of exactly n elements from S. We write ε to denote the empty sequence, i.e., S⁰ ≜ {ε}.

In the following, let S* ≜ ⋃_{n∈N} Sⁿ be the set of all finite sequences, S⁺ ≜ S* \ S⁰ be the set of all non-empty finite sequences, S^ω be the set of all infinite sequences, S⁺∞ ≜ S⁺ ∪ S^ω be the set of all non-empty finite or infinite sequences and S*∞ ≜ S* ∪ S^ω be the set of all finite or infinite sequences of elements from S. In the following, in order to ease the notation, sequences of a single element s ∈ S are often written omitting the curly brackets, e.g., we write s^ω and s⁺∞ instead of {s}^ω and {s}⁺∞.

We write σσ′ (or σ • σ′) for the concatenation of two sequences σ, σ′ ∈ S⁺∞ (with σε = εσ = σ, and σσ′ = σ when σ ∈ S^ω), T⁺ ≜ T ∩ S⁺ for the selection of the non-empty finite sequences of T ⊆ S⁺∞, T^ω ≜ T ∩ S^ω for the selection of the infinite sequences of T ⊆ S⁺∞ and T ; T′ ≜ {σsσ′ | s ∈ S ∧ σs ∈ T ∧ sσ′ ∈ T′} ∪ T^ω for the merging of sets of sequences T, T′ ⊆ S⁺∞.
Traces. Given a transition system ⟨Σ, τ⟩, a trace is a non-empty sequence of states in Σ determined by the transition relation τ, that is, ⟨s, s′⟩ ∈ τ for each pair of consecutive states s, s′ ∈ Σ in the sequence. Note that, the set of final states Ω and the transition relation τ can be understood as a set of traces of length one and a set of traces of length two, respectively. The set of all traces generated by a transition system is called partial trace semantics:

Definition 2.2.6 (Partial Trace Semantics) The partial trace semantics τ⁺∞ ∈ P(Σ⁺∞) generated by a transition system ⟨Σ, τ⟩ is defined as follows:

τ⁺∞ ≜ τ⁺ ∪ τ^ω

where τ⁺ ∈ P(Σ⁺) is the set of finite traces:

τ⁺ ≜ ⋃_{n>0} {s₀ ⋯ sₙ₋₁ ∈ Σⁿ | ∀i < n-1 : ⟨sᵢ, sᵢ₊₁⟩ ∈ τ}

and τ^ω ∈ P(Σ^ω) is the set of infinite traces:

τ^ω ≜ {s₀s₁ ⋯ ∈ Σ^ω | ∀i ∈ N : ⟨sᵢ, sᵢ₊₁⟩ ∈ τ}

Example 2.2.7 Let Σ = {a, b} and τ = {⟨a, a⟩, ⟨a, b⟩}. The partial trace semantics generated by ⟨Σ, τ⟩ is the set of traces a⁺∞ ∪ a*b.
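To make the partial trace semantics concrete, the following OCaml sketch (ours; states as characters, and a length bound n are assumptions of the example) enumerates the finite partial traces of length at most n. On Example 2.2.7 it produces exactly the finite prefixes of a⁺∞ ∪ a*b up to that length.

(* Finite traces of length ≤ n generated by a transition relation τ,
   i.e. a bounded under-approximation of τ⁺ from Definition 2.2.6. *)
let traces_up_to (tau : (char * char) list) (states : char list) (n : int) =
  let step trace =
    match List.rev trace with
    | last :: _ ->
        List.filter_map
          (fun (s, s') -> if s = last then Some (trace @ [s']) else None) tau
    | [] -> []
  in
  let rec grow k frontier acc =
    if k >= n then acc
    else
      let next = List.concat_map step frontier in
      grow (k + 1) next (acc @ next)
  in
  let seeds = List.map (fun s -> [s]) states in
  grow 1 seeds seeds

let () =
  let tau = [('a', 'a'); ('a', 'b')] in
  (* prints: a, b, aa, ab, aaa, aab *)
  traces_up_to tau ['a'; 'b'] 3
  |> List.iter (fun t -> List.iter print_char t; print_newline ())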
Maximal Trace Semantics. In practice, given a transition system h⌃, ⌧ i, and possibly a set of initial states I ✓ ⌃, the traces worth of consideration (start by an initial state in I and) either are infinite or terminate with a final state in ⌦. These traces define the maximal trace semantics ⌧ +1 2 P(⌃ +1 ) and represent infinite computations or completed finite computations:
Definition 2.2.8 (Maximal Trace Semantics) The maximal trace semantics τ⁺∞ ∈ P(Σ⁺∞) generated by a transition system ⟨Σ, τ⟩ is defined as:

τ⁺∞ ≜ τ⁺ ∪ τ^ω

where τ⁺ ∈ P(Σ⁺) is the set of finite traces terminating with a final state in Ω:

τ⁺ ≜ ⋃_{n>0} {s₀ ⋯ sₙ₋₁ ∈ Σⁿ | ∀i < n-1 : ⟨sᵢ, sᵢ₊₁⟩ ∈ τ, sₙ₋₁ ∈ Ω}

Example 2.2.9 The maximal trace semantics generated by the transition system ⟨Σ, τ⟩ of Example 2.2.7 is the set of traces a^ω ∪ a*b. Note that, unlike the partial trace semantics of Example 2.2.7, the maximal trace semantics does not represent partial computations, i.e., finite sequences of a ∈ Σ.
In practice, in case a set of initial states I ⊆ Σ is given, only the traces starting from an initial state s ∈ I are considered: {sσ ∈ τ⁺∞ | s ∈ I}.
Remark 2.2.10 It is worth mentioning that not all program semantics are directly generated by a transition system. For example, this is the case of the semantics of programs where the future evolution of a computation depends on the whole computation history.
Example 2.2.11 Let ⌃ = {a, b}. The set of fair traces a ⇤ b, where the event b eventually occurs, is an example of program semantics that cannot be generated by a transition system. As seen in Example 2.2.7 and Example 2.2.9, the transition relation ⌧ = {ha, ai, ha, bi} introduces spurious traces in a +1 .
In fact, transition systems can only describe computations whose future evolution depends entirely on their current state [START_REF] Cousot | Fondements des Méthodes de Preuve d'Invarian ce et de Fatalité de Programmes Parallèles[END_REF]. We will often come back to this remark throughout this thesis.
The following result provides a fixpoint definition of the maximal trace semantics within the complete lattice ⟨P(Σ⁺∞), ⊑, ⊔, ⊓, Σ^ω, Σ⁺⟩, where the computational order is T₁ ⊑ T₂ ≜ T₁⁺ ⊆ T₂⁺ ∧ T₁^ω ⊇ T₂^ω [Cou97]:

Theorem 2.2.12 (Maximal Trace Semantics) The maximal trace semantics τ⁺∞ ∈ P(Σ⁺∞) can be expressed as a least fixpoint in the complete lattice ⟨P(Σ⁺∞), ⊑, ⊔, ⊓, Σ^ω, Σ⁺⟩ as follows:

τ⁺∞ = lfp^⊑ φ⁺∞        φ⁺∞(T) ≜ Ω ∪ (τ ; T)        (2.2.5)

Proof. See [Cou02]. ∎

[Figure 2.2: illustration of the first fixpoint iterates, T₀ = Σ^ω, T₁ = Ω ∪ (τ ; Σ^ω), T₂ = Ω ∪ (τ ; Ω) ∪ (τ ; τ ; Σ^ω), T₃ = Ω ∪ (τ ; Ω) ∪ (τ ; τ ; Ω) ∪ (τ ; τ ; τ ; Σ^ω), …]

In Figure 2.2, we propose an illustration of the fixpoint iterates. Intuitively, the traces belonging to the maximal trace semantics are built backwards by prepending transitions to them: the finite traces are built extending other finite traces from the set of final states Ω, and the infinite traces are obtained selecting infinite sequences with increasingly longer prefixes forming traces. In particular, the ith iterate builds all finite traces of length i, and selects all infinite sequences whose prefixes of length i form traces. At the limit we obtain all infinite traces and all finite traces that terminate in Ω.
Galois Connections
The maximal trace semantics carries all information about a program. It is the most precise semantics and it fully describes the behavior of a program.
Then, usually another fixpoint semantics is designed that is minimal, sound and relatively complete for the program properties of interest.
Program Properties. Following [START_REF] Cousot | Abstract Interpretation: a Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints[END_REF], a property is represented by the set of elements which have this property. By program property we mean a property of its executions, that is a property of its semantics. Since a program semantics is a set of traces, a program property is a set of sets of traces.
The strongest property of a program with semantics S 2 P(⌃ +1 ) is the collecting semantics {S} 2 P(P(⌃ +1 )). This is the strongest property because any program with this property must have the same semantics.
We say that a program with semantics S ∈ P(Σ⁺∞) satisfies a property P ∈ P(P(Σ⁺∞)) if and only if its semantics belongs to the property: S ∈ P. In the following we mostly deal with trace properties, that is, properties that can be represented as a set of traces T ∈ P(Σ⁺∞), satisfied by a program if and only if S ⊆ T. Note that, not all program properties are trace properties. As an example, the non-interference policy is not a trace property, because whether a trace is allowed by the policy depends on whether another trace is also allowed [START_REF] Michael | Hyperproperties[END_REF].
We prefer program semantics expressed in fixpoint form which directly lead, using David Park's fixpoint induction [START_REF] Park | Fixpoint Induction and Proofs of Program Properties[END_REF], to sound and complete proof methods for the correctness of a program with respect to a property:

Theorem 2.2.14 (Park's Fixpoint Induction Principle) Let f : D → D be a monotonic function over a complete lattice ⟨D, ⊑, ⊔, ⊓, ⊥, ⊤⟩ and let d ∈ D be a pre-fixpoint. Then, given an element P ∈ D, we have:

lfp_d f ⊑ P ⇔ ∃I ∈ D : d ⊑ I ∧ f(I) ⊑ I ∧ I ⊑ P.        (2.2.6)

Dually, we have:

P ⊑ gfp_d f ⇔ ∃I ∈ D : I ⊑ d ∧ I ⊑ f(I) ∧ P ⊑ I.        (2.2.7)
The element I 2 D is called inductive invariant.
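Park's principle reduces a fixpoint bound to three inclusions that are easy to check mechanically once a candidate inductive invariant I is supplied; no fixpoint iteration is needed. The following OCaml sketch (ours; the set representation and the toy transfer function are assumptions for the example) packages exactly that check.

(* lfp_d f ⊑ P holds as soon as some I satisfies d ⊑ I, f(I) ⊑ I and I ⊑ P
   (Theorem 2.2.14), so checking a candidate I needs no iteration at all. *)
let proves_by_park_induction ~leq ~f ~d ~p i =
  leq d i && leq (f i) i && leq i p

(* Toy use: sets of integers as sorted lists, f(X) = {0} ∪ {x+2 | x ∈ X, x < 10}. *)
let () =
  let leq a b = List.for_all (fun x -> List.mem x b) a in
  let f xs =
    List.sort_uniq compare
      (0 :: List.filter_map (fun x -> if x < 10 then Some (x + 2) else None) xs)
  in
  let evens = [0; 2; 4; 6; 8; 10] in
  (* the even numbers up to 10 form an inductive invariant proving lfp ⊆ evens *)
  assert (proves_by_park_induction ~leq ~f ~d:[] ~p:evens evens)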
Hierarchy of Semantics.
As mentioned, to reason about a particular program property, it is not necessary to consider all aspects and details of the program behavior. In fact, reasoning is facilitated by the design of a welladapted semantics, abstracting away from irrelevant matters. Therefore, there is no universal general-purpose program semantics but rather a wide variety of special-purpose program semantics, each one of them dedicated to a particular class of program properties and reduced to essentials in order to ignore irrelevant details about program executions. Abstract interpretation is a method for relating these semantics. In fact, they can be uniformly described as fixpoints of monotonic functions over ordered structures, and organized into a hierarchy of interrelated semantics specifying at various levels of detail the behavior of programs [START_REF] Cousot | Constructive Design of a Hierarchy of Semantics of a Transition System by Abstract Interpretation[END_REF][START_REF] Cousot | Constructive Design of a Hierarchy of Semantics of a Transition System by Abstract Interpretation[END_REF].
Galois Connections. The correspondence between the semantics in the hierarchy is established by Galois connections formalizing the loss of information:

Definition 2.2.15 (Galois Connection) Let ⟨D, ⊑⟩ and ⟨D♮, ⊑♮⟩ be two partially ordered sets. A Galois connection ⟨D, ⊑⟩ ⇄(α, γ) ⟨D♮, ⊑♮⟩ is a pair of monotonic functions α : D → D♮ and γ : D♮ → D such that:

∀d ∈ D, d♮ ∈ D♮ : α(d) ⊑♮ d♮ ⇔ d ⊑ γ(d♮)        (2.2.8)

We write ⟨D, ⊑⟩ ↠(α, γ) ⟨D♮, ⊑♮⟩ when α is surjective (or, equivalently, γ is injective), and ⟨D, ⊑⟩ ↣(α, γ) ⟨D♮, ⊑♮⟩ when α is injective (or, equivalently, γ is surjective).

The orders ⊑ and ⊑♮ are called approximation orders. They dictate the relative precision of the elements in the concrete and abstract domain: if α(d) ⊑♮ d♮, then d♮ is also a correct abstract approximation of the concrete element d, although less precise than α(d); if d ⊑ γ(d♮), then d♮ is also a correct abstract approximation of the concrete element d, although the element d provides more accurate information about program executions than γ(d♮).
Remark 2.2.16 Observe that, the computational order used to define fixpoints and the approximation order often coincide but, in the general case, they are distinct and totally unrelated. We will need to maintain this distinction throughout the rest of this thesis.
The function γ ∘ α : D → D is extensive, that is, ∀d ∈ D : d ⊑ γ(α(d)), meaning that the loss of information in the abstraction process is sound. The function α ∘ γ : D♮ → D♮ is reductive, that is, ∀d♮ ∈ D♮ : α(γ(d♮)) ⊑♮ d♮, meaning that the concretization introduces no loss of information.
Given a Galois connection ⟨D, ⊑⟩ ⇄(α, γ) ⟨D♮, ⊑♮⟩, the abstraction function and the concretization function uniquely determine each other:

α(d) ≜ ⊓♮{d♮ | d ⊑ γ(d♮)}        γ(d♮) ≜ ⊔{d | α(d) ⊑♮ d♮}
For this reason, we often provide only the definition of the abstraction function or, indi↵erently, only the definition of the concretization function.
Another important property of Galois connections is that, given two Galois connections ⟨D, ⊑⟩ ⇄(α₁, γ₁) ⟨D♯, ⊑♯⟩ and ⟨D♯, ⊑♯⟩ ⇄(α₂, γ₂) ⟨D♮, ⊑♮⟩, their composition ⟨D, ⊑⟩ ⇄(α₂ ∘ α₁, γ₁ ∘ γ₂) ⟨D♮, ⊑♮⟩ is also a Galois connection. Hence, abstract interpretation has a constructive aspect, since program semantics can be systematically derived by successive abstractions of the maximal trace semantics, rather than just being derived by intuition and justified a posteriori.
Example 2.2.17
The function post : P(Σ) → P(Σ) (cf. Definition 2.2.2) and the function pre~ : P(Σ) → P(Σ) (cf. Definition 2.2.5) form a Galois connection:

⟨P(Σ), ⊆⟩ ⇄(post, pre~) ⟨P(Σ), ⊆⟩
The reachable state semantics, which collects the set of states reachable from the initial states I ⊆ Σ, is an abstraction of the maximal trace semantics by means of the abstraction function α_R : P(Σ⁺∞) → P(Σ):

α_R(T) ≜ I ∪ {s ∈ Σ | ∃s₀ ∈ I, σ ∈ Σ*, σ′ ∈ Σ*∞ : s₀σsσ′ ∈ T}        (2.2.9)

This abstraction, from now on, is called reachability abstraction.

Note that, the approximation order ⊆ of the concrete domain ⟨P(Σ⁺∞), ⊆⟩ differs from the computational order ⊑ used to define the maximal trace semantics in the complete lattice ⟨P(Σ⁺∞), ⊑, ⊔, ⊓, Σ^ω, Σ⁺⟩ (cf. Equation 2.2.5).

The reachable state semantics is thus defined as:

τ_R ≜ α_R(τ⁺∞)

It can be specified in fixpoint form as follows:

τ_R = lfp^⊆ φ_R        φ_R(S) ≜ I ∪ post(S)        (2.2.10)

In this case, the computational order ⊆ used to define the semantics coincides with the approximation order of the abstract domain ⟨P(Σ), ⊆⟩.

Note that, while the traces belonging to the maximal trace semantics are built backwards, the states belonging to the reachable state semantics are built forward from the set of initial states I.
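Equation (2.2.10) is directly executable on a finite transition system. The following OCaml sketch (ours; states as integers, τ as a list of pairs, sets as sorted lists) computes the reachable state semantics by iterating φ_R(S) = I ∪ post(S) from the empty set.

(* τ_R = lfp⊆ φ_R with φ_R(S) = I ∪ post(S): forward reachability. *)
let reachable (tau : (int * int) list) (init : int list) : int list =
  let post xs =
    List.filter_map (fun (s, s') -> if List.mem s xs then Some s' else None) tau
  in
  let phi s = List.sort_uniq compare (init @ post s) in
  let rec iterate s =
    let s' = phi s in
    if List.for_all (fun x -> List.mem x s) s' then s else iterate s'
  in
  iterate []

let () =
  (* states 1 → 2 → 3, with 4 unreachable from the initial state 1 *)
  assert (reachable [(1, 2); (2, 3); (4, 4)] [1] = [1; 2; 3])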
Remark 2.2.19 (Absence of Galois Connection) The use of Galois connections squares with an ideal situation where there is a best way to approximate any concrete property by an abstract property. However, imposing the existence of a Galois connection is sometimes too strong a requirement. In [START_REF] Cousot | Abstract Interpretation Frameworks[END_REF], Patrick Cousot and Radhia Cousot illustrate how to relax the Galois connection framework in order to work with only a concretization function or, dually, only an abstraction function. In practice, concretization-based abstract interpretations are much more used and will be frequently encountered throughout this thesis (cf. Section 3.4 and Chapter 5).
Fixpoint Transfer.
The following theorem provides guidelines for deriving an abstract fixpoint semantics S \ by abstraction of a concrete fixpoint semantics S, or dually, for deriving S by refinement of S \ .
Theorem 2.2.20 (Kleenian Fixpoint Transfer) Let ⟨D, ⊑⟩ and ⟨D♮, ⊑♮⟩ be complete partial orders, let φ : D → D and φ♮ : D♮ → D♮ be monotonic functions, and let α : D → D♮ be a Scott-continuous abstraction function that satisfies α(⊥) = ⊥♮ and the commutation condition α ∘ φ = φ♮ ∘ α. Then, we have the fixpoint abstraction α(lfp^⊑ φ) = lfp^⊑♮ φ♮. Dually, let γ : D♮ → D be a Scott-continuous concretization function that satisfies ⊥ = γ(⊥♮) and the commutation condition φ ∘ γ = γ ∘ φ♮. Then, we have the fixpoint derivation lfp^⊑ φ = γ(lfp^⊑♮ φ♮).

In particular, for the respective iterates of φ : D → D and φ♮ : D♮ → D♮ from ⊥ and ⊥♮ (cf. Theorem 2.1.5) we have: ∀δ ∈ O : α(φ^δ) = φ♮^δ.
Proof.
See [START_REF] Cousot | Constructive Design of a Hierarchy of Semantics of a Transition System by Abstract Interpretation[END_REF].
⌅
When the abstraction function is not Scott-continuous, but it preserves greatest lower bounds, we can rely on the following theorem.
Theorem 2.2.21 (Tarskian Fixpoint Transfer) Let ⟨D, ⊑, ⊔, ⊓, ⊥, ⊤⟩ and ⟨D♮, ⊑♮, ⊔♮, ⊓♮, ⊥♮, ⊤♮⟩ be complete lattices, let φ : D → D and φ♮ : D♮ → D♮ be monotonic functions, and let α : D → D♮ be an abstraction function that is a complete ⊓-morphism and that satisfies φ♮ ∘ α ⊑̇♮ α ∘ φ and the post-fixpoint correspondence ∀d♮ ∈ D♮ : φ♮(d♮) ⊑♮ d♮ ⇒ ∃d ∈ D : φ(d) ⊑ d ∧ α(d) = d♮ (i.e., each abstract post-fixpoint of φ♮ is the abstraction by α of some concrete post-fixpoint of φ). Then, we have α(lfp^⊑ φ) = lfp^⊑♮ φ♮.
Proof.
See [START_REF] Cousot | Constructive Design of a Hierarchy of Semantics of a Transition System by Abstract Interpretation[END_REF].
⌅
In the particular case when the abstraction function α : D → D♮ is a complete ⊔-morphism, there exists a unique concretization function γ : D♮ → D such that ⟨D, ⊑⟩ ⇄(α, γ) ⟨D♮, ⊑♮⟩ is a Galois connection.
Fixpoint Approximation. In case no optimal fixpoint abstraction of a concrete semantics S can be defined, we settle for a sound abstraction, that is, an abstract semantics S♮ such that α(S) ⊑♮ S♮, or equivalently S ⊑ γ(S♮), for the approximation orders ⊑ and ⊑♮.
Widening and Narrowing
We must now address the practical problem of effectively computing these program semantics. In [START_REF] Cousot | Static Determination of Dynamic Properties of Programs[END_REF], Patrick Cousot and Radhia Cousot introduced the idea of using widening and narrowing operators in order to accelerate the convergence of increasing and decreasing iteration sequences to a fixpoint over-approximation. In [START_REF] Cousot | Méthodes Itératives de Construction et d'Appro ximation de Points Fixes d'Opérateurs Monotones sur un Treil lis, Analyse Sémantique de Programmes[END_REF], the dual operators are also considered.
Widening. The widening operator is used to enforce or accelerate the convergence of increasing iteration sequences over abstract domains with infinite or very long strictly ascending chains, or even over finite but very large abstract domains. It is defined as follows:
Definition 2.2.22 (Widening) Let ⟨D, ⊑⟩ be a partially ordered set. A widening operator ▽ : (D × D) → D is such that:

(1) for all elements x, y ∈ D, we have x ⊑ x ▽ y and y ⊑ x ▽ y;

(2) for all increasing chains x₀ ⊑ x₁ ⊑ ⋯ ⊑ xₙ ⊑ ⋯, the increasing chain y₀ ≜ x₀, yₙ₊₁ ≜ yₙ ▽ xₙ₊₁ is ultimately stationary, that is, ∃l ≥ 0 : ∀j ≥ l : yⱼ = y_l.

Intuitively, given an abstract domain ⟨D, ⊑⟩ and a function φ : D → D, the widening uses two consecutive iterates yₙ and φ(yₙ) in order to extrapolate the next iterate yₙ₊₁ ≜ yₙ ▽ φ(yₙ). This extrapolation should be an over-approximation for soundness, that is yₙ ⊑ yₙ₊₁ and φ(yₙ) ⊑ yₙ₊₁ (cf. Definition 2.2.22(1)), and enforce convergence for termination (cf. Definition 2.2.22(2)). In this way, the widening allows computing in finite time an over-approximation of a Kleenian fixpoint:

Theorem 2.2.23 (Fixpoint Approximation with Widening) Let ⟨D, ⊑⟩ be a complete partial order, let φ : D → D be a Scott-continuous function, let d ∈ D be a pre-fixpoint, and let ▽ : (D × D) → D be a widening. Then, the following increasing chain:

y₀ ≜ d        yₙ₊₁ ≜ yₙ ▽ φ(yₙ)

is ultimately stationary and its limit, denoted by φ▽, is such that lfp_d φ ⊑ φ▽. Proof. See [START_REF] Cousot | Comparing the Galois Connection and Widening/Narrowing Approaches to Abstract Interpretation[END_REF].
⌅
In [START_REF] Cousot | Comparing the Galois Connection and Widening/Narrowing Approaches to Abstract Interpretation[END_REF], Patrick Cousot and Radhia Cousot demonstrated that computing in an abstract domain with infinite ascending chains using a widening is strictly more powerful than any finite abstraction. Intuitively, the widening adds a dynamic dimension to the abstraction, which is more flexible than relying only on the static choice of an abstract domain.
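The classic instance is the interval widening, which pushes an unstable bound to infinity so that increasing chains stabilize in at most two steps per bound. The OCaml sketch below is ours, simplified to a single loop counter; the interval type, the operator, and the loop being analyzed ("x := 0; while ? do x := x + 1 od") are assumptions made only for the example.

(* The interval domain over a single variable, with -oo/+oo bounds. *)
type bound = MinusInf | Int of int | PlusInf
type itv = Bot | Itv of bound * bound

let le_b a b = match a, b with
  | MinusInf, _ | _, PlusInf -> true
  | PlusInf, _ | _, MinusInf -> false
  | Int x, Int y -> x <= y

let join a b = match a, b with
  | Bot, x | x, Bot -> x
  | Itv (l1, u1), Itv (l2, u2) ->
      Itv ((if le_b l1 l2 then l1 else l2), (if le_b u2 u1 then u1 else u2))

(* Widening: any bound that grows is pushed to infinity (Definition 2.2.22). *)
let widen a b = match a, b with
  | Bot, x | x, Bot -> x
  | Itv (l1, u1), Itv (l2, u2) ->
      Itv ((if le_b l1 l2 then l1 else MinusInf),
           (if le_b u2 u1 then u1 else PlusInf))

(* Abstract iterates of "x := 0; while ? do x := x + 1 od" at the loop head:
   phi(X) = [0,0] ⊔ (X + [1,1]). *)
let phi x =
  let plus_one = function
    | Bot -> Bot
    | Itv (l, u) ->
        let add = function Int n -> Int (n + 1) | b -> b in
        Itv (add l, add u)
  in
  join (Itv (Int 0, Int 0)) (plus_one x)

let rec iterate_with_widening y =
  let y' = widen y (phi y) in
  if y' = y then y else iterate_with_widening y'

let () =
  (* converges to [0, +oo] instead of climbing 0, 1, 2, ... forever *)
  assert (iterate_with_widening Bot = Itv (Int 0, PlusInf))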
Narrowing. The narrowing operator is used to enforce or accelerate the convergence of decreasing iteration sequences. It is defined as follows:

Definition 2.2.24 (Narrowing) Let ⟨D, ⊑⟩ be a partially ordered set. A narrowing operator △ : (D × D) → D is such that:

(1) for all elements x, y ∈ D, if x ⊒ y we have x ⊒ (x △ y) ⊒ y;

(2) for all decreasing chains x₀ ⊒ x₁ ⊒ ⋯ ⊒ xₙ ⊒ ⋯, the decreasing chain y₀ ≜ x₀, yₙ₊₁ ≜ yₙ △ xₙ₊₁ is ultimately stationary, that is, ∃l ≥ 0 : ∀j ≥ l : yⱼ = y_l.

It is often the case that the limit φ▽ of the iteration sequence with widening is a strict post-fixpoint of φ: φ(φ▽) ⊏ φ▽. Hence, the over-approximation φ▽ can be refined by a decreasing iteration without widening:

y₀ ≜ φ▽        yₙ₊₁ ≜ φ(yₙ)

However, this decreasing sequence can be infinite. The narrowing operator is used to limit the refinement while enforcing termination (cf. Figure 2.3). Intuitively, the narrowing uses two consecutive iterates yₙ and φ(yₙ) in order to compute the next iterate yₙ₊₁ ≜ yₙ △ φ(yₙ). This should be an interpolation for soundness, that is yₙ ⊒ yₙ₊₁ ⊒ φ(yₙ) (cf. Definition 2.2.24(1)), and enforce convergence for termination (cf. Definition 2.2.24(2)). In this way, the narrowing allows refining in finite time an over-approximation of a fixpoint:
Theorem 2.2.25 (Fixpoint Refinement with Narrowing) Let ⟨D, ⊑⟩ be a complete partial order, let φ : D → D be a Scott-continuous function, let d ∈ D be a pre-fixpoint and let a ∈ D be a post-fixpoint such that d ⊑ a, and let △ : (D × D) → D be a narrowing. Then, the decreasing chain

y₀ ≜ a        yₙ₊₁ ≜ yₙ △ φ(yₙ)

is ultimately stationary and its limit, denoted by φ△, is such that lfp_d φ ⊑ φ△. Proof. See [START_REF] Cousot | Comparing the Galois Connection and Widening/Narrowing Approaches to Abstract Interpretation[END_REF].
⌅
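A matching sketch for the narrowing refinement (again ours, and self-contained: intervals are re-encoded with optional bounds, and the loop "x := 0; while x < 10 do x := x + 1 od" is assumed for the example) shows how the decreasing iteration of Theorem 2.2.25 recovers precision lost by widening, using the standard interval narrowing that only refines infinite bounds.

(* Intervals [lo, hi] with None meaning an infinite bound. *)
type itv = { lo : int option; hi : int option }

(* Standard interval narrowing: only refine bounds that are infinite,
   so that decreasing chains stabilize (Definition 2.2.24). *)
let narrow (a : itv) (b : itv) : itv =
  { lo = (if a.lo = None then b.lo else a.lo);
    hi = (if a.hi = None then b.hi else a.hi) }

(* Decreasing iteration y_{n+1} = y_n △ phi(y_n) from a post-fixpoint. *)
let rec refine phi y =
  let y' = narrow y (phi y) in
  if y' = y then y else refine phi y'

let () =
  (* Loop-head semantics of "x := 0; while x < 10 do x := x + 1 od":
     phi(X) = [0,0] ⊔ ((X ⊓ [-oo,9]) + [1,1]).  Widening gave [0,+oo];
     narrowing recovers the precise invariant [0,10]. *)
  let join a b =
    { lo = (match a.lo, b.lo with Some x, Some y -> Some (min x y) | _ -> None);
      hi = (match a.hi, b.hi with Some x, Some y -> Some (max x y) | _ -> None) } in
  let guard_lt10 a =
    { a with hi = Some (match a.hi with Some h -> min h 9 | None -> 9) } in
  let plus_one a =
    { lo = Option.map (fun x -> x + 1) a.lo; hi = Option.map (fun x -> x + 1) a.hi } in
  let phi x = join { lo = Some 0; hi = Some 0 } (plus_one (guard_lt10 x)) in
  let widened = { lo = Some 0; hi = None } in
  assert (refine phi widened = { lo = Some 0; hi = Some 10 })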
Dual Widening. The dual widening operator is used to enforce or accelerate the convergence of decreasing iteration sequences to a (greatest) fixpoint under-approximation. Its definition is the dual of Definition 2.2.22:

Definition 2.2.26 (Dual Widening) Let ⟨D, ⊑⟩ be a partially ordered set. A dual widening operator ▽̄ : (D × D) → D is such that:

(1) for all elements x, y ∈ D, we have x ⊒ x ▽̄ y and y ⊒ x ▽̄ y;

(2) for all decreasing chains x₀ ⊒ x₁ ⊒ ⋯ ⊒ xₙ ⊒ ⋯, the decreasing chain y₀ ≜ x₀, yₙ₊₁ ≜ yₙ ▽̄ xₙ₊₁ is ultimately stationary, that is, ∃l ≥ 0 : ∀j ≥ l : yⱼ = y_l.

The widening and the dual widening are extrapolators: they are used to find abstract elements outside the range of known abstract elements [START_REF] Cousot | Abstracting Induction by Extrapolation and Interpolation[END_REF].
Dual Narrowing. The dual narrowing operator is used to enforce or accelerate the convergence of increasing iteration sequences to a (greatest) fixpoint under-approximation. Its definition is the dual of Definition 2.2.24.
The narrowing and the dual narrowing are interpolators: they are used to find abstract elements inside the range of known abstract elements [START_REF] Cousot | Abstracting Induction by Extrapolation and Interpolation[END_REF].
In conclusion, a sound and fully automatic static analyzer can be designed by systematically approximating the semantics of programs, with an inevitable loss of information, from a concrete to a less precise abstract setting, until the resulting semantics is computable.
The formal treatment given in the previous chapter is language independent. In this chapter, we look back at the notions introduced in the context of a simple sequential programming language that will be used to illustrate our work throughout the rest of this thesis.
A Small Imperative Language
We consider a simple sequential non-deterministic programming language with no procedures, no pointers and no recursion. The variables are statically allocated and the only data type is the set Z of mathematical integers.
In Chapter 7 we will introduce procedures and recursion, while pointers and machine integers and floats will remain out of the scope of this work.
Language Syntax. In Figure 3.1, we define inductively the syntax of our programming language.
A program prog consists of an instruction followed by a unique label l 2 L. Another unique label appears within each instruction. An instruction stmt is either a skip instruction, a variable assignment, a conditional if statement, a while loop or a sequential composition of instructions.
aexp ::= X                                            X ∈ X
       | [i1, i2]        i1 ∈ Z ∪ {-∞}, i2 ∈ Z ∪ {+∞}, i1 ≤ i2
       | - aexp
       | aexp ⋄ aexp                          ⋄ ∈ {+, -, *, /}

bexp ::= ?
       | not bexp
       | bexp and bexp
       | bexp or bexp
       | aexp ⋈ aexp                          ⋈ ∈ {<, ≤, =, ≠}

stmt ::= l skip                                       l ∈ L
       | l X := aexp                          l ∈ L, X ∈ X
       | if l bexp then stmt else stmt fi             l ∈ L
       | while l bexp do stmt od                      l ∈ L
       | stmt stmt

prog ::= stmt l                                       l ∈ L

Figure 3.1: Syntax of our programming language.

Arithmetic expressions aexp involve variables X ∈ X, numeric intervals [a, b] and the arithmetic operators +, -, *, / for addition, subtraction, multiplication, and division. Numeric intervals have constant and possibly infinite bounds, and denote a random choice of a number in the interval. This provides a notion of non-determinism useful to model user input or to approximate arithmetic expressions that cannot be represented exactly in the language. Numeric constants are a particular case of numeric interval. In the following, we often write the constant c for the interval [c, c].
Boolean expressions bexp are built by comparing arithmetic expressions, and are combined using the boolean not, and, and or operators. The boolean expression ? represents a non-deterministic choice and is useful to provide a sequential encoding of concurrent programs by modeling a (possibly, but not necessarily, fair) scheduler. Whenever clear from the context, we frequently abuse notation and use the symbol ? to also denote the numeric interval [-∞, +∞].
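For reference, the grammar of Figure 3.1 transcribes almost one-for-one into an OCaml algebraic data type. This is our own encoding, not the thesis's prototype: labels are kept abstract as integers, infinite interval bounds are encoded with option types, and the constructor names are chosen only for the example.

(* Abstract syntax of the small language of Figure 3.1. *)
type label = int
type var = string

type aexp =
  | Var of var
  | Interval of int option * int option    (* [i1, i2], None encodes -oo / +oo *)
  | Neg of aexp
  | Binop of [ `Add | `Sub | `Mul | `Div ] * aexp * aexp

type bexp =
  | Random                                  (* the non-deterministic choice ? *)
  | Not of bexp
  | And of bexp * bexp
  | Or of bexp * bexp
  | Cmp of [ `Lt | `Le | `Eq | `Neq ] * aexp * aexp

type stmt =
  | Skip of label
  | Assign of label * var * aexp
  | If of label * bexp * stmt * stmt
  | While of label * bexp * stmt
  | Seq of stmt * stmt

type prog = stmt * label                    (* an instruction and its final label *)

(* A small program, "1 x := ?  while 2 (1 < x) do 3 x := x - 1 od 4",
   used as a running example later in this chapter. *)
let example : prog =
  ( Seq
      ( Assign (1, "x", Interval (None, None)),
        While
          ( 2,
            Cmp (`Lt, Interval (Some 1, Some 1), Var "x"),
            Assign (3, "x", Binop (`Sub, Var "x", Interval (Some 1, Some 1))) ) ),
    4 )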
Maximal Trace Semantics
In the following, we instantiate the general definitions of transition system and maximal trace semantics of Section 2.2 with our small imperative language.
Expression Semantics. An environment ρ : X → Z maps each program variable X ∈ X to its value ρ(X) ∈ Z. Let E denote the set of all environments.
⟦X⟧ρ ≜ { ρ(X) }
⟦[a, b]⟧ρ ≜ { x | a ≤ x ≤ b }
⟦- aexp⟧ρ ≜ { -x | x ∈ ⟦aexp⟧ρ }
⟦aexp1 + aexp2⟧ρ ≜ { x + y | x ∈ ⟦aexp1⟧ρ, y ∈ ⟦aexp2⟧ρ }
⟦aexp1 - aexp2⟧ρ ≜ { x - y | x ∈ ⟦aexp1⟧ρ, y ∈ ⟦aexp2⟧ρ }
⟦aexp1 * aexp2⟧ρ ≜ { x * y | x ∈ ⟦aexp1⟧ρ, y ∈ ⟦aexp2⟧ρ }
⟦aexp1 / aexp2⟧ρ ≜ { trnk(x / y) | x ∈ ⟦aexp1⟧ρ, y ∈ ⟦aexp2⟧ρ, y ≠ 0 }

where trnk : R → Z is defined as follows:

trnk(x) ≜ max{y ∈ Z | y ≤ x}   if x ≥ 0
          min{y ∈ Z | y ≥ x}   if x < 0

Figure 3.2: Semantics of arithmetic expressions aexp.
The semantics of an arithmetic expression aexp is a function ⟦aexp⟧ : E → P(Z) mapping an environment ρ ∈ E to the set of all possible values for the expression aexp in the environment. It is presented in Figure 3.2. Note that, the set of values for an expression may contain several elements because of the non-determinism embedded in the expressions. It might also be empty due to undefined results. In fact, this is the case of divisions by zero. The trnk function rounds the result of the division towards zero.
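Under the simplifying assumption that all interval bounds are finite (so that every value set stays finite), the concrete semantics of Figure 3.2 can be executed directly. The OCaml sketch below is ours and self-contained: it re-declares a restricted aexp type with finite bounds and returns the set of possible values as a sorted list.

(* Concrete semantics of arithmetic expressions (Figure 3.2), assuming
   finite interval bounds so that every value set is finite. *)
type aexp =
  | Var of string
  | Interval of int * int
  | Neg of aexp
  | Binop of [ `Add | `Sub | `Mul | `Div ] * aexp * aexp

type env = string -> int

(* trnk rounds the rational result of a division towards zero. *)
let trnk (x : float) : int = int_of_float (Float.trunc x)

let rec eval (e : aexp) (rho : env) : int list =
  let lift2 f xs ys =
    List.sort_uniq compare (List.concat_map (fun x -> List.filter_map (f x) ys) xs) in
  match e with
  | Var x -> [ rho x ]
  | Interval (a, b) -> List.init (max 0 (b - a + 1)) (fun i -> a + i)
  | Neg e0 -> List.sort_uniq compare (List.map (fun v -> -v) (eval e0 rho))
  | Binop (op, e1, e2) ->
      let f x y =
        match op with
        | `Add -> Some (x + y)
        | `Sub -> Some (x - y)
        | `Mul -> Some (x * y)
        | `Div -> if y = 0 then None else Some (trnk (float_of_int x /. float_of_int y))
      in
      lift2 f (eval e1 rho) (eval e2 rho)

let () =
  let rho = function "x" -> 7 | _ -> 0 in
  (* x / [2,3] in an environment where x = 7: {trnk(7/2), trnk(7/3)} = {3, 2} *)
  assert (eval (Binop (`Div, Var "x", Interval (2, 3))) rho = [2; 3])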
Similarly, the semantics ⟦bexp⟧ : E → P({true, false}) of boolean expressions bexp maps an environment ρ ∈ E to the set of all possible truth values for the expression bexp in the environment. It is presented in Figure 3.3. In the following, we write true and false to represent a boolean expression that is always true and always false, respectively.

⟦?⟧ρ ≜ { true, false }
⟦not bexp⟧ρ ≜ { ¬x | x ∈ ⟦bexp⟧ρ }
⟦bexp1 and bexp2⟧ρ ≜ { x ∧ y | x ∈ ⟦bexp1⟧ρ, y ∈ ⟦bexp2⟧ρ }
⟦bexp1 or bexp2⟧ρ ≜ { x ∨ y | x ∈ ⟦bexp1⟧ρ, y ∈ ⟦bexp2⟧ρ }
⟦aexp1 < aexp2⟧ρ ≜ { x < y | x ∈ ⟦aexp1⟧ρ, y ∈ ⟦aexp2⟧ρ }
⟦aexp1 ≤ aexp2⟧ρ ≜ { x ≤ y | x ∈ ⟦aexp1⟧ρ, y ∈ ⟦aexp2⟧ρ }
⟦aexp1 = aexp2⟧ρ ≜ { x = y | x ∈ ⟦aexp1⟧ρ, y ∈ ⟦aexp2⟧ρ }
⟦aexp1 ≠ aexp2⟧ρ ≜ { x ≠ y | x ∈ ⟦aexp1⟧ρ, y ∈ ⟦aexp2⟧ρ }

Figure 3.3: Semantics of boolean expressions bexp.

Transition Systems. A program state s ∈ L × E is a pair ⟨l, ρ⟩ of a program control point l ∈ L and an environment ρ ∈ E; we write Σ ≜ L × E for the set of all program states. In Figure 3.4 we define the initial control point i⟦stmt⟧ ∈ L of each instruction stmt, and in Figure 3.5 its final control point f⟦stmt⟧ ∈ L.

stmt ::= l skip                                i⟦l skip⟧ ≜ l
       | l X := aexp                           i⟦l X := aexp⟧ ≜ l
       | if l bexp then stmt1 else stmt2 fi    i⟦if l bexp then stmt1 else stmt2 fi⟧ ≜ l
       | while l bexp do stmt1 od              i⟦while l bexp do stmt1 od⟧ ≜ l
       | stmt1 stmt2                           i⟦stmt1 stmt2⟧ ≜ i⟦stmt1⟧
prog ::= stmt l                                i⟦stmt l⟧ ≜ i⟦stmt⟧

Figure 3.4: Initial control point i⟦stmt⟧ of instructions stmt.

Example 3.2.1 Let us consider the following program:

1 x := ? while 2 (1 < x) do 3 x := x - 1 od 4
We have the following final program control points:
f⟦1 x := ? while 2 (1 < x) do 3 x := x - 1 od 4⟧ = 4
f⟦1 x := ? while 2 (1 < x) do 3 x := x - 1 od⟧ = 4
f⟦1 x := ?⟧ = 2
f⟦while 2 (1 < x) do 3 x := x - 1 od⟧ = 4
f⟦3 x := x - 1⟧ = 2
stmt ::= l skip                                f⟦l skip⟧ ≜ f⟦stmt⟧
       | l X := aexp                           f⟦l X := aexp⟧ ≜ f⟦stmt⟧
       | if l bexp then stmt1 else stmt2 fi    f⟦if l bexp then stmt1 else stmt2 fi⟧ ≜ f⟦stmt⟧
                                               f⟦stmt1⟧ ≜ f⟦stmt⟧    f⟦stmt2⟧ ≜ f⟦stmt⟧
       | while l bexp do stmt1 od              f⟦while l bexp do stmt1 od⟧ ≜ f⟦stmt⟧
                                               f⟦stmt1⟧ ≜ l
       | stmt1 stmt2                           f⟦stmt1 stmt2⟧ ≜ f⟦stmt⟧
                                               f⟦stmt1⟧ ≜ i⟦stmt2⟧    f⟦stmt2⟧ ≜ f⟦stmt⟧
prog ::= stmt l                                f⟦stmt l⟧ ≜ l

Figure 3.5: Final control point f⟦stmt⟧ of instructions stmt.

Note that, the final control point f⟦stmt⟧ ∈ L of an instruction stmt does not belong to the set of control points appearing in the instruction. A program execution starts at its initial program control point with any possible value for the program variables. Thus, the set of initial states of a program prog is I ≜ {⟨i⟦prog⟧, ρ⟩ | ρ ∈ E}.
It is sometimes useful to assume that a set E ⊆ E of initial environments is given, in which case the program initial states correspond to the initial program control point paired with any initial environment: I ≜ {⟨i⟦prog⟧, ρ⟩ | ρ ∈ E}. The set of final states of a program prog is Ω ≜ {⟨f⟦prog⟧, ρ⟩ | ρ ∈ E}.
Remark 3.2.2 In Section 2.2 we defined the final states to have no successors with respect to the transition relation, meaning that the program halts: Ω ≝ {s ∈ Σ | ∀s′ ∈ Σ : ⟨s, s′⟩ ∉ τ}. This is the case when the program successfully terminates by reaching its final label, or when a run-time error occurs. For the sake of simplicity, the definition of program final states given in this section ignores possible run-time errors silently halting the program.
We now define the transition relation τ ⊆ Σ × Σ. In particular, in Figure 3.6, we define the transition semantics τ⟦ stmt ⟧ ⊆ Σ × Σ of each program instruction stmt. Given an environment ρ ∈ E, a program variable X ∈ X and a value v ∈ ℤ, we denote by ρ[X ↦ v] the environment obtained by writing the value v into the variable X in the environment ρ:

ρ[X ↦ v](x) = v if x = X, and ρ(x) if x ≠ X

τ⟦ l skip ⟧ ≝ {⟨l, ρ⟩ → ⟨f⟦ l skip ⟧, ρ⟩ | ρ ∈ E}
τ⟦ l X := aexp ⟧ ≝ {⟨l, ρ⟩ → ⟨f⟦ l X := aexp ⟧, ρ[X ↦ v]⟩ | ρ ∈ E, v ∈ ⟦aexp⟧ρ}
τ⟦ if l bexp then stmt₁ else stmt₂ fi ⟧ ≝
      {⟨l, ρ⟩ → ⟨i⟦ stmt₁ ⟧, ρ⟩ | ρ ∈ E, true ∈ ⟦bexp⟧ρ} ∪ τ⟦ stmt₁ ⟧
    ∪ {⟨l, ρ⟩ → ⟨i⟦ stmt₂ ⟧, ρ⟩ | ρ ∈ E, false ∈ ⟦bexp⟧ρ} ∪ τ⟦ stmt₂ ⟧
τ⟦ while l bexp do stmt od ⟧ ≝
      {⟨l, ρ⟩ → ⟨i⟦ stmt ⟧, ρ⟩ | ρ ∈ E, true ∈ ⟦bexp⟧ρ} ∪ τ⟦ stmt ⟧
    ∪ {⟨l, ρ⟩ → ⟨f⟦ while l bexp do stmt od ⟧, ρ⟩ | ρ ∈ E, false ∈ ⟦bexp⟧ρ}
τ⟦ stmt₁ stmt₂ ⟧ ≝ τ⟦ stmt₁ ⟧ ∪ τ⟦ stmt₂ ⟧

Figure 3.6: Transition semantics of instructions stmt.
The semantics of a skip instruction simply moves control from the initial label of the instruction to its final label. The execution of a variable assignment l X := aexp moves control from the initial label of the instruction to its final label, and modifies the current environment in order to assign any of the possible values of aexp to the variable X. The semantics of a conditional statement if l bexp then stmt 1 else stmt 2 fi moves control from the initial label of the instruction to the initial label of stmt 1 , if true is a possible value for bexp, and to the initial label of stmt 2 , if false is a possible value for bexp; then, stmt 1 and stmt 2 are executed. Similarly, the execution of a while statement while l bexp do stmt od moves control from the initial label of the instruction to its final label, if false is a possible value for bexp, and to the initial label of stmt 1 , if true is a possible value for bexp; then stmt is executed. Note that, control moves from the end of stmt to the initial label l of the while loop, since l is the final label of stmt (cf. Figure 3.5). Finally, the semantics of the sequential combination of instructions stmt 1 stmt 2 executes stmt 1 and stmt 2 . Note that, control moves from the end of stmt 1 to the beginning of stmt 2 , since the final label of stmt 1 is the initial label of stmt 2 (cf. Figure 3.5).
The transition relation ⌧ 2 ⌃ ⇥ ⌃ of a program prog is defined by the semantics
⌧ J prog K 2 ⌃ ⇥ ⌃ of the program as ⌧ J prog K = ⌧ J stmt l K def = ⌧ J stmt K. Example 3.2.3
Let us consider again the program of Example 3.2.1:
1 x := ?  while 2 (1 < x) do 3 x := x - 1 od 4
The set of program environments E contains functions ρ : {x} → ℤ mapping the program variable x to any possible value ρ(x) ∈ ℤ. The set of program states Σ ≝ {1, 2, 3, 4} × E consists of all pairs of labels and environments; the initial states are I ≝ {⟨1, ρ⟩ | ρ ∈ E}. The program transition relation τ ⊆ Σ × Σ is defined as follows:

τ ≝   {⟨1, ρ⟩ → ⟨2, ρ[x ↦ v]⟩ | ρ ∈ E ∧ v ∈ ℤ}
    ∪ {⟨2, ρ⟩ → ⟨3, ρ⟩ | ρ ∈ E ∧ true ∈ ⟦1 < x⟧ρ}
    ∪ {⟨3, ρ⟩ → ⟨2, ρ[x ↦ ρ(x) - 1]⟩ | ρ ∈ E}
    ∪ {⟨2, ρ⟩ → ⟨4, ρ⟩ | ρ ∈ E ∧ false ∈ ⟦1 < x⟧ρ}

The set of final states is Q ≝ {⟨4, ρ⟩ | ρ ∈ E}.
Note that the definition of final states assumes all possible values for the program variable x, although when executing the program we have x ≤ 1 on program exit.
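As an illustration (ours, not the thesis), the transition relation of this example can be enumerated explicitly in Python once the non-deterministic assignment is restricted to a bounded set of choices; the trace used in Example 3.2.4 is then one path through this relation.

```python
# Transition relation of  1 x := ?  while 2 (1 < x) do 3 x := x - 1 od 4 ,
# with the non-deterministic assignment restricted to a bounded range so that
# successors can be enumerated explicitly.
def successors(state, rand_range=range(-2, 4)):
    label, x = state
    if label == 1:                      # 1 x := ?
        return {(2, v) for v in rand_range}
    if label == 2:                      # 2: test 1 < x
        return {(3, x)} if 1 < x else {(4, x)}
    if label == 3:                      # 3 x := x - 1
        return {(2, x - 1)}
    return set()                        # 4: final label, no successor

# One maximal trace from the initial state <1, x=42> with the choice x := 2:
trace = [(1, 42), (2, 2), (3, 2), (2, 1), (4, 1)]
assert all(t in successors(s) for s, t in zip(trace, trace[1:]))
```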
Maximal Trace Semantics. In the following, we provide a structural definition of the fixpoint maximal trace semantics ⌧ +1 2 ⌃ +1 (Equation 2.2.5) by induction on the syntax of programs.
We recall that a program trace is a non-empty sequence of program states in ⌃ determined by the program transition relation ⌧ .
Example 3.2.4
Let us consider again the program of Example 3.2.3:
1 x := ?  while 2 (1 < x) do 3 x := x - 1 od 4
We write {⟨x, v⟩} to denote the environment ρ : {x} → ℤ mapping the program variable x to the value v ∈ ℤ. The following sequence of program states:

⟨1, {⟨x, 42⟩}⟩ ⟨2, {⟨x, 2⟩}⟩ ⟨3, {⟨x, 2⟩}⟩ ⟨2, {⟨x, 1⟩}⟩ ⟨4, {⟨x, 1⟩}⟩
is an example of program trace determined by the transition relation ⌧ .
We only consider program traces starting from the set I of initial states of a program prog. Accordingly, in Figure 3.7 we define the trace semantics ⌧ +1 J stmt K 2 P(⌃ +1 ) ! P(⌃ +1 ) of each program instruction stmt. Analogously to Equation 2.2.5, the program traces are built backwards: each function ⌧ +1 J stmt K 2 P(⌃ +1 ) ! P(⌃ +1 ) takes as input a set of traces starting with the final label of the instruction stmt and outputs a set of traces starting with the initial label of stmt.
τ+∞⟦ l skip ⟧T ≝ { ⟨l, ρ⟩⟨f⟦ l skip ⟧, ρ⟩σ | ρ ∈ E, σ ∈ Σ*∞, ⟨f⟦ l skip ⟧, ρ⟩σ ∈ T }

τ+∞⟦ l X := aexp ⟧T ≝ { ⟨l, ρ⟩⟨f⟦ l X := aexp ⟧, ρ[X ↦ v]⟩σ | ρ ∈ E, v ∈ ⟦aexp⟧ρ, σ ∈ Σ*∞,
                                                              ⟨f⟦ l X := aexp ⟧, ρ[X ↦ v]⟩σ ∈ T }

τ+∞⟦ if l bexp then stmt₁ else stmt₂ fi ⟧T ≝
      { ⟨l, ρ⟩⟨i⟦ stmt₁ ⟧, ρ⟩σ | ρ ∈ E, true ∈ ⟦bexp⟧ρ, σ ∈ Σ*∞, ⟨i⟦ stmt₁ ⟧, ρ⟩σ ∈ τ+∞⟦ stmt₁ ⟧T }
    ∪ { ⟨l, ρ⟩⟨i⟦ stmt₂ ⟧, ρ⟩σ | ρ ∈ E, false ∈ ⟦bexp⟧ρ, σ ∈ Σ*∞, ⟨i⟦ stmt₂ ⟧, ρ⟩σ ∈ τ+∞⟦ stmt₂ ⟧T }

τ+∞⟦ while l bexp do stmt od ⟧T ≝ lfp⊑ φ+∞
  φ+∞(X) ≝   { ⟨l, ρ⟩⟨i⟦ stmt ⟧, ρ⟩σ | ρ ∈ E, true ∈ ⟦bexp⟧ρ, σ ∈ Σ*∞, ⟨i⟦ stmt ⟧, ρ⟩σ ∈ τ+∞⟦ stmt ⟧X }
           ∪ { ⟨l, ρ⟩⟨f⟦ while ⋯ od ⟧, ρ⟩σ | ρ ∈ E, false ∈ ⟦bexp⟧ρ, σ ∈ Σ*∞, ⟨f⟦ while ⋯ od ⟧, ρ⟩σ ∈ T }

τ+∞⟦ stmt₁ stmt₂ ⟧T ≝ τ+∞⟦ stmt₁ ⟧(τ+∞⟦ stmt₂ ⟧T)

Figure 3.7: Trace semantics of instructions stmt.
The trace semantics of a skip instruction takes as input a set T 2 P(⌃ +1 ) of traces starting with environments paired with the final label of the instruction and, according to the transition relation semantics of the instruction, it prepends to them the same environments paired with its initial label.
The trace semantics of a variable assignment l X := aexp takes as input a set T 2 P(⌃ +1 ) of traces starting with the final label of the instruction and it prepends to them all program states with its initial label that are allowed by the transition relation semantics of the instruction (cf. Figure 3.6).
Similarly, given a conditional instruction if l bexp then stmt 1 else stmt 2 fi, its trace semantics prepends all program states with its initial label that are allowed by its transition relation semantics to the traces starting with the initial labels of stmt 1 and stmt 2 ; these are obtained by means of the trace semantics of stmt 2 and stmt 1 taking as input a set T 2 P(⌃ +1 ) of traces starting with the final label of the conditional instruction.
The trace semantics of a loop instruction while l bexp do stmt od is defined as the least fixpoint of the function φ+∞ : P(Σ+∞) → P(Σ+∞) within the complete lattice ⟨P(Σ+∞), ⊑, ⊔, ⊓, Σ^ω, Σ⁺⟩, analogously to Equation 2.2.5. The iteration sequence, starting from all infinite sequences Σ^ω ∈ P(Σ+∞), builds the set of program traces that consist of an infinite number of iterations within the loop, and it prepends a finite number of iterations within the loop to an input set T ∈ P(Σ+∞) of traces starting with the final label of the loop instruction. In Figure 3.7, we have shortened f⟦ while l bexp do stmt od ⟧ to f⟦ while ⋯ od ⟧. In particular, the function φ+∞ : P(Σ+∞) → P(Σ+∞) takes as input a set X ∈ P(Σ+∞) of traces: initially, X is the set of all infinite sequences Σ^ω and, at each iteration, the function prepends all program states whose label is the initial label of the loop instruction to the traces belonging to the input set T, and to the traces that are obtained by means of the trace semantics of the loop body stmt from the set X. In this way, after the i-th iteration, the set X contains the program traces starting at the initial label of the loop instruction whose prefix consists of from zero up to i - 1 iterations within the loop and whose suffix is a trace in T, together with the sequences whose prefixes are program traces consisting of i - 1 iterations within the loop. With a slight abuse of notation, we depict the fixpoint iterates in Figure 3.8.
X⁰ = Σ^ω
X¹ = (¬bexp · T) ∪ (bexp · stmt · Σ^ω)
X² = (¬bexp · T) ∪ (bexp · stmt · ¬bexp · T) ∪ (bexp · stmt · bexp · stmt · Σ^ω)
…

Figure 3.8: Fixpoint iterates of the trace semantics of a while loop.
Finally, the trace semantics of the sequential combination of instructions stmt 1 stmt 2 takes as input a set T 2 P(⌃ +1 ) of traces starting with the final label of stmt 2 , determines from T the set of traces ⌧ +1 J stmt 2 KT belonging to the trace semantics of stmt 2 , and outputs the set of traces determined by the trace semantics of stmt 1 from ⌧ +1 J stmt 2 KT .
The maximal trace semantics ⌧ +1 J prog K 2 P(⌃ +1 ) of a program prog is the maximal trace semantics ⌧ +1 2 P(⌃ +1 ) defined in Equation 2.2.5 restricted to the traces starting from the program initial states I, and it is defined taking as input the set of program final states Q: Definition 3.2.5 (Maximal Trace Semantics) The maximal trace semantics ⌧ +1 J prog K 2 P(⌃ +1 ) of a program prog is:
τ+∞⟦ prog ⟧ = τ+∞⟦ stmt l ⟧ ≝ τ+∞⟦ stmt ⟧Q   (3.2.1)
where the trace semantics ⌧ +1 J stmt K 2 P(⌃ +1 ) ! P(⌃ +1 ) of each program instruction stmt is defined in Figure 3.7.
Note that, as pointed out in Remark 3.2.2, possible run-time errors are ignored.
More specifically, all traces containing run-time errors are discarded and do not belong to the maximal trace semantics of a program prog.
In the following, we often simply write ⌧ +1 instead of ⌧ +1 J prog K.
Invariance Semantics
The reachable state semantics τ_R ∈ P(Σ) introduced by Example 2.2.18 abstracts the maximal trace semantics τ+∞ ∈ P(Σ+∞) by collecting the possible values of the program variables at each program point and disregarding other information. In the following, we provide an isomorphic invariance semantics defined by induction on the syntax of our small language. Note that the complete lattice ⟨P(Σ), ⊆, ∪, ∩, ∅, Σ⟩ on which the reachable state semantics is defined is isomorphic by partitioning with respect to the program control points and by pointwise lifting (cf. Equation 2.1.1) to ⟨L → P(E), ⊆̇, ∪̇, ∩̇, λl. ∅, λl. E⟩. This can be formalized as a Galois connection

α_I(R) ≝ λl ∈ L. {ρ ∈ E | ⟨l, ρ⟩ ∈ R}        γ_I(I) ≝ {⟨l, ρ⟩ ∈ Σ | ρ ∈ I(l)}   (3.3.1)
This abstraction, from now on, is called invariance abstraction. In this form, the reachable state semantics ⌧ R 2 P(⌃) becomes an invariance semantics ⌧ I 2 L ! P(E) associating to each program control point l 2 L an invariant E 2 P(E) which collects the set of possible program environments for each
τ^post⟦ l skip ⟧E ≝ E
τ^post⟦ l X := aexp ⟧E ≝ {ρ[X ↦ v] | ρ ∈ E, v ∈ ⟦aexp⟧ρ}
τ^post⟦ if l bexp then stmt₁ else stmt₂ fi ⟧E ≝
    τ^post⟦ stmt₁ ⟧{ρ ∈ E | true ∈ ⟦bexp⟧ρ} ∪ τ^post⟦ stmt₂ ⟧{ρ ∈ E | false ∈ ⟦bexp⟧ρ}
τ^post⟦ while l bexp do stmt od ⟧E ≝ {ρ ∈ lfp φ^post | false ∈ ⟦bexp⟧ρ}
    φ^post(X) ≝ E ∪ τ^post⟦ stmt ⟧{ρ ∈ X | true ∈ ⟦bexp⟧ρ}
τ^post⟦ stmt₁ stmt₂ ⟧E ≝ τ^post⟦ stmt₂ ⟧(τ^post⟦ stmt₁ ⟧E)
Figure 3.9: Invariance semantics of instructions stmt.
time the program execution reaches that program control point. We can now define the invariance semantics pointwise within the power set of environments.
In Figure 3.9, for each program instruction stmt, we define its postcondition semantics ⌧ post J stmt K : P(E) ! P(E). Analogously to the states belonging to the reachable state semantics in Example 2.2.18, the environment belonging to the postcondition semantics are built forward : each function ⌧ post J stmt K : P(E) ! P(E) takes as input a set of environments and outputs the set of possible environments at the final control point of the instruction.
The postcondition semantics ⌧ post J prog K 2 P(E) of a program prog outputs the set of possible program environments at the final program control point f J prog K 2 L. It is defined from the set of all program environments E as: Definition 3.3.1 (Postcondition Semantics) Given a program prog, its postcondition semantics ⌧ post J prog K 2 P(E) is:
τ^post⟦ prog ⟧ = τ^post⟦ stmt l ⟧ ≝ τ^post⟦ stmt ⟧E   (3.3.2)
where the postcondition semantics ⌧ post J stmt K 2 P(E) ! P(E) of each program instruction stmt is defined in Figure 3.9.
In this way, we can collect the set of possible program environments at each program control point. At the initial control point, the possible environments are all program environments:
⌧ I (iJ prog K) def = E.
Then, for each program instruction stmt, the set ⌧ I (f J stmt K) 2 P(E) of possible environments at its final control point is defined by the postcondition semantics (cf. Figure 3.9), taking as input the set E 2 P(E) of possible environments at the final control point of the instruction preceding stmt (or all program environments in case stmt is the first instruction of the program):
⌧ I (f J stmt K) def = ⌧ post J stmt KE.
In case of a while loop instruction, the set ⌧ I (iJ while l bexp do stmt od K) 2 P(E) of possible environments at its initial control point is defined as the least fixpoint greater than the input set E of the function post : P(E) ! P(E) defined in Figure 3.9:
⌧ I (iJ while l bexp do stmt od K) def = lfp E post
At each iteration, the function post : P(E) ! P(E) accumulates the possible environments after another loop iteration from a given set of environments X 2 P(E). The set ⌧ I (iJ stmt K) 2 P(E) of possible environments at the initial control point of the loop body is the set of possible environments at the initial control point of the loop that satisfy the boolean expression bexp:
⌧ I (iJ stmt K) def = {⇢ 2 ⌧ I (iJ while l bexp do stmt od K) | true 2 JbexpK⇢} Example 3.3.2
Let us consider again the program of Example 3.2.4:
1 x := ?  while 2 (1 < x) do 3 x := x - 1 od 4
The following are the set of possible program environments collected by the invariance semantics at each program control point:
τ_I(1) = E
τ_I(2) = lfp_{τ^post⟦ 1 x := ? ⟧τ_I(1)} φ^post = E
τ_I(3) = {ρ ∈ τ_I(2) | true ∈ ⟦1 < x⟧ρ} = {ρ ∈ E | 2 ≤ ρ(x)}
τ_I(4) = {ρ ∈ τ_I(2) | false ∈ ⟦1 < x⟧ρ} = {ρ ∈ E | ρ(x) ≤ 1}

where φ^post(X) = X ∪ τ^post⟦ 3 x := x - 1 ⟧{ρ ∈ X | true ∈ ⟦1 < x⟧ρ}.
In the next section, we present further abstractions of the invariance semantics by means of numerical abstract domains.
Numerical Abstract Domains
We consider concretization-based abstractions of the following form:
⟨P(E), ⊆⟩ ←γ_D− ⟨D, ⊑_D⟩

which provide, for each program control point l ∈ L, a sound decidable abstraction τ^♮_I(l) ∈ D of the invariance semantics τ_I(l) ∈ P(E) (cf. Section 3.3). We have τ_I(l) ⊆ γ_D(τ^♮_I(l)), meaning that the abstract invariance semantics τ^♮_I(l) over-approximates the invariance semantics τ_I(l). A numerical abstract domain provides:

• a set D whose elements are computer-representable;
• a partial order ⊑_D together with an effective algorithm to implement it;
• a concretization-based abstraction ⟨P(E), ⊆⟩ ←γ_D− ⟨D, ⊑_D⟩ or, when possible, a Galois connection ⟨P(E), ⊆⟩ ⇄(α_D, γ_D) ⟨D, ⊑_D⟩;
• a least element ⊥_D ∈ D such that γ_D(⊥_D) = ∅;
• a greatest element ⊤_D ∈ D such that γ_D(⊤_D) = E;
• a sound binary operator ⊔_D such that γ_D(d₁) ∪ γ_D(d₂) ⊆ γ_D(d₁ ⊔_D d₂);
• a sound binary operator ⊓_D such that γ_D(d₁) ∩ γ_D(d₂) ⊆ γ_D(d₁ ⊓_D d₂);
• a sound unary operator ASSIGN_D⟦ X := aexp ⟧ : D → D, together with an effective algorithm to handle a variable assignment l X := aexp, such that τ_I⟦ l X := aexp ⟧γ_D(d) ⊆ γ_D(ASSIGN_D⟦ X := aexp ⟧d);
• a sound backward operator B-ASSIGN_D⟦ X := aexp ⟧ : D → D → D, together with an effective algorithm to refine an element d ∈ D given the result r ∈ D of the assignment l X := aexp, such that {ρ ∈ γ_D(d) | τ_I⟦ l X := aexp ⟧{ρ} ⊆ γ_D(r)} ⊆ γ_D((B-ASSIGN_D⟦ X := aexp ⟧d)(r)) ⊆ γ_D(d);
• a sound unary operator FILTER_D⟦ bexp ⟧ : D → D, together with an effective algorithm to handle a boolean expression bexp, such that {ρ ∈ γ_D(d) | true ∈ ⟦bexp⟧ρ} ⊆ γ_D(FILTER_D⟦ bexp ⟧d);
• a sound widening operator ▽_D, when ⟨D, ⊑_D⟩ does not satisfy the ACC;
• a sound narrowing operator △_D, when ⟨D, ⊑_D⟩ does not satisfy the DCC.
The backward assignment operator B-ASSIGN_D⟦ X := aexp ⟧ : D → D → D is not directly used to abstract the invariance semantics. However, it is usually defined in order to allow the combination of forward and backward analyses [START_REF] Cousot | Abstract Interpretation and Application to Logic Programs[END_REF]. It will also be useful in Chapter 5.
Abstract Invariance Semantics. The operators of the numerical abstract domains can now be used to define the abstract invariance semantics.
In Figure 3.10 we define, for each program instruction stmt, its abstract postcondition semantics ⌧ D J stmt K : D ! D. Each function ⌧ D J stmt K : D ! D takes as input a numerical abstraction and outputs the possible numerical abstraction at the final control point of the instruction. For a while loop, O D is the limit of the following increasing chain (cf. Theorem 2.2.23):
y 0 def = d y n+1 def = y n O D D (y n )
The abstract postcondition semantics ⌧ D J prog K 2 D of a program prog outputs the possible numerical abstraction at the final program control point f J prog K 2 L. It is defined taking as input the element > D as:
τ_D⟦ l skip ⟧d ≝ d
τ_D⟦ l X := aexp ⟧d ≝ ASSIGN_D⟦ X := aexp ⟧d
τ_D⟦ if l bexp then stmt₁ else stmt₂ fi ⟧d ≝ F₁(d) ⊔_D F₂(d)
    F₁(d) ≝ τ_D⟦ stmt₁ ⟧(FILTER_D⟦ bexp ⟧d)
    F₂(d) ≝ τ_D⟦ stmt₂ ⟧(FILTER_D⟦ not bexp ⟧d)
τ_D⟦ while l bexp do stmt od ⟧d ≝ FILTER_D⟦ not bexp ⟧ applied to the limit of the widening chain of φ_D
    φ_D(x) ≝ d ⊔_D τ_D⟦ stmt ⟧(FILTER_D⟦ bexp ⟧x)
τ_D⟦ stmt₁ stmt₂ ⟧d ≝ τ_D⟦ stmt₂ ⟧(τ_D⟦ stmt₁ ⟧d)

Figure 3.10: Abstract postcondition semantics of instructions stmt.

τ_D⟦ prog ⟧ = τ_D⟦ stmt l ⟧ ≝ τ_D⟦ stmt ⟧⊤_D   (3.4.1)
where the abstract postcondition semantics ⌧ D J stmt K 2 D ! D of each program instruction stmt is defined in Figure 3.10.
The following result proves the soundness of ⌧ D J prog K 2 D with respect to the postcondition semantics ⌧ post J prog K 2 D (cf. Definition 3.3.1):
Theorem 3.4.3  τ^post⟦ prog ⟧ ⊆ γ_D(τ_D⟦ prog ⟧)

Proof (Sketch).
The proof follows from the soundness of the operators of the numerical abstract domain (cf. Definition 3.4.1) used for the definition of ⌧ D J prog K 2 D.
⌅
In this way, we can collect the possible numerical abstractions at each program control point. At the initial control point, the possible numerical abstraction is the element > D :
⌧ \ I (iJ prog K) def = > D .
Then, for each instruction stmt, the numerical abstraction ⌧ \ I (f J stmt K) 2 D at its final control point is defined by the abstract postcondition semantics (cf. Figure 3.10), taking as input the numerical abstraction d 2 D at the final control point of the instruction preceding stmt (or the element > D in case stmt is the first instruction of the program):
⌧ \ I (f J stmt K) def = ⌧ D J stmt Kd.
For a while loop, the numerical abstraction τ^♮_I(i⟦ while l bexp do stmt od ⟧) ∈ D at its initial control point is defined as the limit of the increasing chain y⁰ ≝ d, y^{n+1} ≝ yⁿ ▽_D φ_D(yⁿ), where d is the input numerical abstraction and the function φ_D : D → D is defined in Figure 3.10:

τ^♮_I(i⟦ while l bexp do stmt od ⟧) ≝ lim_n yⁿ
The numerical abstraction τ^♮_I(i⟦ stmt ⟧) ∈ D at the initial control point of the loop body is the numerical abstraction at the initial control point of the loop filtered by the boolean expression bexp:

τ^♮_I(i⟦ stmt ⟧) ≝ FILTER_D⟦ bexp ⟧τ^♮_I(i⟦ while l bexp do stmt od ⟧)
The following result proves, for each program control point l 2 L, the soundness of the abstract invariance semantics ⌧ \ I 2 D with respect to the invariance semantics ⌧ I (l) 2 P(E) proposed in Section 3.3:
Theorem 3.4.4  ∀l ∈ L : τ_I(l) ⊆ γ_D(τ^♮_I(l))
Proof (Sketch).
The proof follows from the definition of ⌧ I (l) 2 P(E) (cf. Section 3.3) and ⌧ \ I 2 D, and from Theorem 3.4.3.
⌅
Various numerical abstract domains have been proposed in the literature. We refer to [START_REF] Miné | Weakly Relational Numerical Abstract Domains[END_REF] for an overview. In the following, we briefly recall the well-known numerical abstract domains of intervals [START_REF] Cousot | Static Determination of Dynamic Properties of Programs[END_REF], convex polyhedra [START_REF] Cousot | Automatic Discovery of Linear Restraints Among Variables of a Program[END_REF], and octagons [START_REF] Miné | The Octagon Abstract Domain[END_REF]. They are the foundation upon which we develop new abstract domains in Chapter 5 and Chapter 6.
Intervals Abstract Domain
The intervals abstract domain is a non-relational numerical abstract domain. Non-relational abstract domains abstract each program variable independently, thus forgetting any relationship between variables. They are an abstraction of the power set of environments given by the Galois connection ⟨P(E), ⊆⟩ ⇄(α_C, γ_C) ⟨X → P(ℤ), ⊆̇⟩ where the abstraction α_C : P(E) → (X → P(ℤ)) and the concretization γ_C : (X → P(ℤ)) → P(E) are defined as follows:

α_C(E) ≝ λX ∈ X. {ρ(X) ∈ ℤ | ρ ∈ E}        γ_C(f) ≝ {ρ ∈ E | ∀X ∈ X : ρ(X) ∈ f(X)}   (3.4.2)
This abstraction, from now on, is called the cartesian abstraction. Specifically, the intervals abstract domain maintains an upper bound and a lower bound on the possible values of each program variable. It was introduced by Patrick Cousot and Radhia Cousot [START_REF] Cousot | Static Determination of Dynamic Properties of Programs[END_REF] and is still widely used, as it is efficient and yet able to provide valuable information that is crucial to prove the absence of various run-time errors.
In the following, the intervals abstract domain is denoted by ⟨B, ⊑_B⟩. The abstraction is formalized by a Galois connection ⟨X → P(ℤ), ⊆̇⟩ ⇄(α_B, γ_B) ⟨B, ⊑_B⟩.
Most operators of the domain rely on well-known interval arithmetics [START_REF] Moore | Interval Analysis[END_REF]. We do not discuss the details here and we refer instead to [START_REF] Miné | Weakly Relational Numerical Abstract Domains[END_REF].
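As a concrete illustration of the operators listed above, the following Python sketch (ours; a toy implementation, not the abstract domain implementation used in the thesis) analyzes the loop of Example 3.3.2 with intervals and a simple widening, recovering the loop-exit invariant x ≤ 1.

```python
INF = float('inf')
BOT = None                                  # empty interval

def join(a, b):
    if a is BOT: return b
    if b is BOT: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):                            # a widen b
    if a is BOT: return b
    if b is BOT: return a
    lo = a[0] if a[0] <= b[0] else -INF     # unstable lower bound -> -inf
    hi = a[1] if a[1] >= b[1] else INF      # unstable upper bound -> +inf
    return (lo, hi)

def filter_gt1(a):                          # abstract test 1 < x
    if a is BOT or a[1] < 2: return BOT
    return (max(a[0], 2), a[1])

def filter_le1(a):                          # abstract test not(1 < x), i.e. x <= 1
    if a is BOT or a[0] > 1: return BOT
    return (a[0], min(a[1], 1))

def dec(a):                                 # abstract assignment x := x - 1
    return BOT if a is BOT else (a[0] - 1, a[1] - 1)

entry = (-INF, INF)                         # after 1 x := ?
x = BOT
while True:                                 # loop-head fixpoint with widening
    nxt = join(entry, dec(filter_gt1(x)))
    if nxt == x: break
    x = widen(x, nxt)

print(x)                # (-inf, inf): loop-head invariant, as in Example 3.3.2
print(filter_le1(x))    # (-inf, 1):   exit invariant x <= 1 at control point 4
```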
Polyhedra Abstract Domain
The polyhedra abstract domain, introduced by Patrick Cousot and Nicolas Halbwachs [START_REF] Cousot | Automatic Discovery of Linear Restraints Among Variables of a Program[END_REF], is a relational numerical abstract domain. Relational abstract domains are more precise than non-relational ones since they are able to preserve some of the relationships between the program variables. Specifically, the polyhedra abstract domain allows inferring affine inequalities c₁X₁ + … + c_kX_k + c_{k+1} ≥ 0 between the program variables. In the following, the polyhedra abstract domain is denoted by ⟨P, ⊑_P⟩. The abstraction is formalized as a concretization-based abstraction ⟨P(E), ⊆⟩ ←γ_P− ⟨P, ⊑_P⟩. We again omit the details and refer to [START_REF] Cousot | Automatic Discovery of Linear Restraints Among Variables of a Program[END_REF][START_REF] Miné | Weakly Relational Numerical Abstract Domains[END_REF].
Octagons Abstract Domain
The octagons abstract domain is a weakly-relational abstract domain. It was introduced by Antoine Miné [START_REF] Miné | The Octagon Abstract Domain[END_REF] to answer the need for a trade-off between non-relational abstract domains, which are very cheap but quite imprecise, and relational abstract domains, which are very expressive but quite costly.

The octagons abstract domain, in the following denoted by ⟨O, ⊑_O⟩, allows inferring inequalities of the form ±Xᵢ ± Xⱼ ≤ c between the program variables. It is based on the efficient Difference Bound Matrix data structure [START_REF] Kim Guldstrand Larsen | E cient Verification of Real-Time Systems: Compact Data Structure and State-Space Reduction[END_REF].
The abstraction, since our program variables have integer values (cf. Section 3.1), is formalized by a Galois connection ⟨P(E), ⊆⟩ ⇄(α_O, γ_O) ⟨O, ⊑_O⟩.
We refer to [START_REF] Miné | Weakly Relational Numerical Abstract Domains[END_REF][START_REF] Miné | The Octagon Abstract Domain[END_REF] for a more detailed discussion.
Note that the intervals [START_REF] Cousot | Static Determination of Dynamic Properties of Programs[END_REF], convex polyhedra [START_REF] Cousot | Automatic Discovery of Linear Restraints Among Variables of a Program[END_REF], and octagons [START_REF] Miné | The Octagon Abstract Domain[END_REF] numerical abstract domains maintain information about the set of possible values of the program variables, along with the possible numerical relationships between them, using convex sets consisting of conjunctions of linear constraints. The convexity of these abstract domains makes the analysis scalable. On the other hand, it might lead to too harsh approximations and imprecisions in the analysis, ultimately yielding false alarms and a failure of the analyzer to prove the desired program property.

The key to an adequate cost versus precision trade-off is handling the disjunctions arising during the analysis (e.g., from program tests and loops). In practice, numerical abstract domains are usually refined by adding weak forms of disjunctions to increase the expressivity while minimizing the cost of the analysis [CCM10, GR98, GC10a, SISG06, etc.].
An Abstract Interpretation Framework for Termination
In this chapter, we recall and revise the abstract interpretation framework for potential and definite termination proposed by Patrick Cousot and Radhia Cousot [START_REF] Cousot | An Abstract Interpretation Framework for Termination[END_REF]. In particular, we recall their fixpoint semantics for definite termination, which is at the basis of our work. For potential termination, we choose a different fixpoint semantics. Then, we provide the definition of the semantics for definite termination for our small language of Chapter 2.
Ranking Functions
The traditional method for proving program termination dates back to Alan Turing [START_REF] Turing | Checking a Large Routine[END_REF] and Robert W. Floyd [START_REF] Floyd | Assigning Meanings to Programs[END_REF]. It consists in inferring ranking functions, namely functions from program states to elements of a well-ordered set whose value decreases during program execution. Definition 4.1.1 (Ranking Function) Given a transition system ⟨Σ, τ⟩, a ranking function is a partial function f : Σ ⇀ W from the set of states Σ into a well-ordered set ⟨W, ≤⟩ whose value decreases through transitions between states, that is ∀s, s′ ∈ dom(f) : ⟨s, s′⟩ ∈ τ ⟹ f(s′) < f(s).
The best known well-ordered sets are the natural numbers hN, i and the ordinals hO, i, and the most obvious ranking function maps each program state to the number of program execution steps until termination, or some well-chosen upper bound on this number.
Example 4.1.2 Let us consider the following loop:

while 2 (1 < x) do 3 x := x - 1 od 4

The loop terminates whatever the initial value of the variable x. In order to prove it, we define a ranking function f : Σ → ℕ whose domain coincides with the set of program states and whose value is an upper bound on the number of state transitions until termination. We have defined the program transition relation τ ⊆ Σ × Σ in Example 3.2.3. In particular, as in Section 3.3, we partition with respect to the program control points, and we define the ranking function f : L → (E → ℕ):

f(2) ≝ λρ. 1 if ρ(x) ≤ 1, and 2ρ(x) - 1 if 1 < ρ(x)
f(3) ≝ λρ. 2 if ρ(x) ≤ 2, and 2ρ(x) - 2 if 2 < ρ(x)
f(4) ≝ λρ. 0

Note that at the final program control point 4, the program is terminated and thus no transitions are needed.
At the program control points 2 and 3, the value of the ranking function depends on the value of the program variable x in the current environment ρ. As an example, given the state ⟨2, {⟨x, 2⟩}⟩, the transitions needed to reach termination are ⟨2, {⟨x, 2⟩}⟩ → ⟨3, {⟨x, 2⟩}⟩, ⟨3, {⟨x, 2⟩}⟩ → ⟨2, {⟨x, 1⟩}⟩, and ⟨2, {⟨x, 1⟩}⟩ → ⟨4, {⟨x, 1⟩}⟩ (cf. Example 3.2.4). Indeed, f(2){⟨x, 2⟩} = 3, f(3){⟨x, 2⟩} = 2, …

Let us now consider the full program of Example 3.2.1, where the loop is preceded by the non-deterministic assignment 1 x := ?. Since the value assigned to x is unbounded, no natural number can bound the number of execution steps from the initial program control point. Thus, in order to prove that the program terminates, we define a ranking function f : Σ → O mapping the program states to ordinals:

f(1) ≝ λρ. ω

where f(2), f(3), and f(4) are defined as above. Note that the value ω does not provide an upper bound on the number of transitions until termination that depends on the value of the program variable x, but simply testifies that this number is finite, meaning that the program always terminates. Indeed, even when starting from ω, a strictly decreasing function cannot decrease forever, since the ordinals are a well-ordered set.
In the following, we also consider a weaker notion of ranking function, called potential ranking function. The value of a potential ranking function decreases at least along one transition during program execution.
Definition 4.1.3 (Potential Ranking Function) Given a transition system ⟨Σ, τ⟩, a potential ranking function is a partial function f : Σ ⇀ W from Σ to a well-ordered set ⟨W, ≤⟩ whose value decreases through at least one transition from each state, that is ∀s ∈ dom(f) : (∃s′ ∈ dom(f) : ⟨s, s′⟩ ∈ τ) ⟹ ∃s′ ∈ dom(f) : ⟨s, s′⟩ ∈ τ ∧ f(s′) < f(s).
Termination Semantics
In [START_REF] Cousot | An Abstract Interpretation Framework for Termination[END_REF], Patrick Cousot and Radhia Cousot prove the existence of a most precise program ranking function that can be derived by abstract interpreta-tion of the program maximal trace semantics. In the following, we recall from [START_REF] Cousot | An Abstract Interpretation Framework for Termination[END_REF] the results that are most relevant to our work.
Potential and Definite Termination. In presence of non-determinism, we distinguish between potential termination or may-terminate properties and definite termination or must-terminate properties. The property of potential termination requires a program to have at least one finite execution trace. Definition 4.2.1 (Potential Termination) A program with trace semantics S ∈ P(Σ+∞) may terminate if and only if S ∩ Σ⁺ ≠ ∅.
The property of definite termination requires all execution traces of a program to be finite (cf. also Example 2.2.13).
Definition 4.2.2 (Definite Termination) A program with trace semantics S ∈ P(Σ+∞) must terminate if and only if S ⊆ Σ⁺.
In the following, when clear from the context, we will refer to the property of definite termination simply as property of termination.
Termination Trace Semantics
We present a first abstraction of the program maximal trace semantics (cf. Definition 2.2.8 and Equation 2.2.5, and Definition 3.2.5) into a potential termination trace semantics and a definite termination trace semantics.
The definite (resp. potential) termination trace semantics eliminates the program execution traces that are not starting with a state from which the program execution must (resp. may) terminate.
Example 4.2.3 Let us consider the following program:

while 1 ( ? ) do 2 skip od 3

The set of initial states is I ≝ {1} × E. The program transition relation τ ⊆ Σ × Σ is τ ≝ {⟨1, ρ⟩ → ⟨2, ρ⟩ | ρ ∈ E} ∪ {⟨2, ρ⟩ → ⟨1, ρ⟩ | ρ ∈ E} ∪ {⟨1, ρ⟩ → ⟨3, ρ⟩ | ρ ∈ E}.

The maximal trace semantics τ+∞ ∈ P(Σ+∞) of the program (cf. Definition 3.2.5) contains the execution traces starting from the initial states I that enter the while loop a non-deterministic number of times:

τ+∞ ≝ {⟨1, ρ⟩(⟨2, ρ⟩⟨1, ρ⟩)*⟨3, ρ⟩ | ρ ∈ E} ∪ {⟨1, ρ⟩(⟨2, ρ⟩⟨1, ρ⟩)^ω | ρ ∈ E}.
The corresponding potential termination abstraction α_m : P(Σ+∞) → P(Σ⁺) and concretization γ_m are defined as follows:

α_m(T) ≝ T ∩ Σ⁺        γ_m(T) ≝ T ∪ Σ^ω   (4.2.1)
This abstraction, from now on, is called potential termination abstraction. It forgets about non-terminating program executions.
Example 4.2.4
Let T = {ab, aba, ba, bb, ba^ω}. Then, the potential termination abstraction of T is α_m(T) = {ab, aba, ba, bb}.
Definition 4.2.5 (Potential Termination Trace Semantics) The potential termination trace semantics ⌧ m 2 P(⌃ + ) is defined as follows:
τ_m ≝ α_m(τ+∞)
where ⌧ +1 2 P(⌃ +1 ) is the maximal trace semantics (cf. Definition 2.2.8).
The following result provides, by Kleenian fixpoint transfer (cf. Theorem 2.2.20) from the fixpoint maximal trace semantics (cf. Equation 2.2.5), a fixpoint definition of the potential termination trace semantics within the complete lattice ⟨P(Σ⁺), ⊆, ∪, ∩, ∅, Σ⁺⟩: Theorem 4.2.6 (Potential Termination Trace Semantics) The potential termination trace semantics τ_m ∈ P(Σ⁺) can be expressed as a least fixpoint in the complete lattice ⟨P(Σ⁺), ⊆, ∪, ∩, ∅, Σ⁺⟩ as follows:

τ_m = lfp⊆ φ_m      φ_m(T) ≝ Ω ∪ (τ ; T)   (4.2.2)
In this case, the abstraction function α_m : P(Σ+∞) → P(Σ⁺) is a complete ∪-morphism and thus there also exists a Galois connection ⟨P(Σ+∞), ⊑⟩ ⇄(α_m, γ̄_m) ⟨P(Σ⁺), ⊆⟩, where the concretization function γ̄_m : P(Σ⁺) → P(Σ+∞) is defined as γ̄_m(T) ≝ T.

Note that, because of the distinction between the approximation order ⊆ and the computational order ⊑, this Galois connection is different from the potential termination abstraction presented above (cf. Equation 4.2.1).
Example 4.2.7
Let us consider again the program of Example 4.2.3:
while 1 ( ? ) do 2 skip od 3

Its potential termination trace semantics is τ_m ≝ {⟨1, ρ⟩(⟨2, ρ⟩⟨1, ρ⟩)*⟨3, ρ⟩ | ρ ∈ E}
since an execution trace starting from the state h1, ⇢i may terminate (by choosing a transition to the state h3, ⇢i).
Definite Termination Trace Semantics. The definite termination trace semantics ⌧ M 2 P(⌃ + ) eliminates all program execution traces potentially branching, through local non-determinism, to non-termination.
We define the neighborhood of a sequence σ ∈ Σ+∞ in a set of sequences T ⊆ Σ+∞ as the set of sequences σ′ ∈ T with a common prefix with σ:

nbhd(σ, T) ≝ {σ′ ∈ T | pf(σ) ∩ pf(σ′) ≠ ∅}   (4.2.3)

where pf : Σ+∞ → P(Σ+∞) yields the set of prefixes of a sequence σ ∈ Σ+∞:

pf(σ) ≝ {σ′ ∈ Σ+∞ | ∃σ′′ ∈ Σ*∞ : σ = σ′σ′′}.   (4.2.4)
A program execution trace belongs to the definite termination trace semantics if and only if it is finite and its neighborhood in the program semantics consists only of finite traces. The corresponding definite termination abstraction
α_M : P(Σ+∞) → P(Σ⁺) is defined as follows:

α_M(T) ≝ {σ ∈ T⁺ | nbhd(σ, T^ω) = ∅}   (4.2.5)

where T⁺ ≝ T ∩ Σ⁺ and T^ω ≝ T ∩ Σ^ω.
Example 4.2.8 Let T = {ab, aba, ba, bb, ba^ω}. Then, α_M(T) = {ab, aba}, since pf(ab) ∩ pf(ba^ω) = ∅ and pf(aba) ∩ pf(ba^ω) = ∅, while pf(ba) ∩ pf(ba^ω) = {b, ba} ≠ ∅ and pf(bb) ∩ pf(ba^ω) = {b} ≠ ∅.
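The following Python sketch (ours, not part of the thesis) computes this abstraction on the trace set of Example 4.2.8, representing the infinite trace ba^ω by a finite word with a trailing marker; since two sequences share a prefix exactly when they share their first state, the finite representation is sufficient for this purpose.

```python
def prefixes(t):
    """Non-empty finite prefixes of a trace written as a string; a trailing
    'w' marks an infinite repetition and is not itself a state."""
    body = t[:-1] if t.endswith('w') else t
    return {body[:i] for i in range(1, len(body) + 1)}

def alpha_M(T):
    finite = {t for t in T if not t.endswith('w')}
    infinite = T - finite
    # keep the finite traces whose neighborhood among infinite traces is empty
    return {t for t in finite
            if all(prefixes(t).isdisjoint(prefixes(u)) for u in infinite)}

T = {'ab', 'aba', 'ba', 'bb', 'baw'}     # 'baw' stands for b a a a ...
print(alpha_M(T))                        # {'ab', 'aba'}
```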
Definition 4.2.9 (Definite Termination Trace Semantics) The definite termination trace semantics ⌧ M 2 P(⌃ + ) is defined as follows:
τ_M ≝ α_M(τ+∞)
where ⌧ +1 2 P(⌃ +1 ) is the maximal trace semantics (cf. Definition 2.2.8).
The following result provides, by Tarskian fixpoint transfer (cf. Theorem 2.2.21) from the fixpoint maximal trace semantics (cf. Equation 2.2.5), a fixpoint definition of the definite termination trace semantics within the complete lattice ⟨P(Σ⁺), ⊆, ∪, ∩, ∅, Σ⁺⟩: Theorem 4.2.10 (Definite Termination Trace Semantics) The definite termination trace semantics τ_M ∈ P(Σ⁺) can be expressed as a least fixpoint in the complete lattice ⟨P(Σ⁺), ⊆, ∪, ∩, ∅, Σ⁺⟩:

τ_M = lfp⊆ φ_M      φ_M(T) ≝ Ω ∪ ((τ ; T) ∩ ¬(τ ; ¬T))   (4.2.6)

where ¬(τ ; ¬T) stands for Σ⁺ \ (τ ; (Σ⁺ \ T)): the term ¬(τ ; ¬T) eliminates potential transitions towards non-terminating executions.
Example 4.2.11
Let us consider again the program of Example 4.2.3:
while 1 ( ? ) do 2 skip od 3
Its definite termination trace semantics is τ_M ≝ ∅, since for any execution trace starting from the state ⟨1, ρ⟩ there is a possibility of non-termination (by always choosing a transition to the state ⟨2, ρ⟩).
Termination Semantics
We now further abstract the definite (resp. potential) termination trace semantics into a definite (resp. potential ) termination semantics, which is the most precise ranking function (resp. potential ranking function) that can be derived by abstract interpretation of the program maximal trace semantics.
Proposition 4.2.12 A program, whose maximal trace semantics is generated by a transition system ⟨Σ, τ⟩, terminates if and only if the program transition relation τ ⊆ Σ × Σ is well-founded.

When considering a given set I ⊆ Σ of initial states, the program execution traces starting from an initial state are terminating if and only if the program transition relation is well-founded when restricted to reachable states: τ ∩ (α_R(τ+∞) × α_R(τ+∞)), where α_R : P(Σ+∞) → P(Σ) is the reachability abstraction defined in Example 2.2.18.
In the case when the program semantics is not generated by a transition system (cf. Remark 2.2.10), we might consider its transition abstraction given by the Galois connection ⟨P(Σ+∞), ⊆⟩ ⇄ ⟨P(Σ × Σ), ⊆⟩, where:

→α(T) ≝ {⟨s, s′⟩ | ∃σ ∈ Σ*, σ′ ∈ Σ*∞ : σss′σ′ ∈ T}.   (4.2.7)
Note, however, that in this case the condition of Proposition 4.2.12 is su cient but not necessary (i.e., the program execution traces starting from an initial states are terminating if the program transition relation is well-founded when restricted to reachable states), as shown by the following counterexample.
Counterexample 4.2.13 Let Σ = {a, b}. The program whose semantics T is the set of fair traces a*b is terminating, but the transition abstraction →α(T) = {⟨a, a⟩, ⟨a, b⟩} generates the infinite trace a^ω (cf. also Remark 2.2.10). Thus, the transition relation restricted to the reachable states is not well-founded.
An over-approximation R ⊇ α_R(τ+∞) of the reachable states can be computed by abstract interpretation, as we have seen in Section 3.4. The program transition relation restricted to the reachable states τ ∩ (α_R(τ+∞) × α_R(τ+∞)) is well-founded if its over-approximation r ⊆ R × R is well-founded. Moreover, r ⊆ R × R is well-founded if and only if there exists a ranking function f : Σ ⇀ W into a well-ordered set ⟨W, ≤⟩ whose domain is R: dom(f) = R. Thus, for programs whose maximal trace semantics is generated by a transition system, (potential) termination can be proved by exhibiting a (potential) ranking function mapping invariant states to elements of a well-ordered set.
In the following, we consider the set of partial functions from the program states ⌃ into the well-ordered set of ordinals hO, i.
Potential Termination Semantics. The potential termination semantics is the most precise potential ranking function τ^mt : Σ ⇀ O for a program. It is defined starting from the final states in Ω, where the function has value zero, and retracing the program backwards while mapping each program state in Σ potentially leading to a final state (i.e., a program state such that there exists a terminating program execution trace to which it belongs) to an ordinal in O representing the minimum number of program execution steps remaining to termination. The domain dom(τ^mt) of τ^mt is the set of states from which the program execution may terminate: at least one trace branching from a state s ∈ dom(τ^mt) terminates in at least τ^mt(s) execution steps, while all traces branching from a state s ∉ dom(τ^mt) do not terminate.
Intuitively, a potential ranking function f 1 is more precise than another potential ranking function f 2 when it is defined over a smaller set of program states, that is, it can disprove termination for more program states, and when its value is always smaller, that is, the minimum number of program execution steps required for termination is lower. The approximation order is then:
f₁ ⊑ f₂ ⟺ dom(f₁) ⊆ dom(f₂) ∧ ∀x ∈ dom(f₁) : f₁(x) ≤ f₂(x).
τ^mt ≝ α^mrk(α_m(τ+∞)) = α^mrk(τ_m)   (4.2.9)

where the potential ranking abstraction α^mrk : P(Σ⁺) → (Σ ⇀ O) is:

α^mrk(T) ≝ α^mv(→α(T))   (4.2.10)

where the function α^mv : P(Σ × Σ) → (Σ ⇀ O) provides the rank of the elements in the domain of a relation r ⊆ Σ × Σ:

α^mv(∅) ≝ ∅
α^mv(r)s ≝ 0 if ∀s′ ∈ Σ : ⟨s, s′⟩ ∉ r
α^mv(r)s ≝ inf { α^mv(r)s′ + 1 | s′ ∈ dom(α^mv(r)) ∧ ⟨s, s′⟩ ∈ r } otherwise
The following result provides, by Kleenian fixpoint transfer (cf. Theorem 2.2.20) from the fixpoint potential termination trace semantics (cf. Equation 4.2.2), a fixpoint definition of the potential termination semantics within the partially ordered set ⟨Σ ⇀ O, ⊑⟩, where the computational order coincides with the approximation order defined above:

τ^mt = lfp⊑_∅ φ^mt
φ^mt(f) ≝ λs. 0 if s ∈ Ω; inf {f(s′) + 1 | ⟨s, s′⟩ ∈ τ} if s ∈ pre(dom(f)); undefined otherwise   (4.2.11)

Example 4.2.16
Let us consider the following trace semantics:
The fixpoint iterates of the potential termination semantics ⌧ mt 2 ⌃ * O are:
[Diagram omitted: the states of the example transition system are annotated with their rank at successive iterates (ranks 0, 1, and 2 appear as the iteration proceeds).]
where unlabelled states are outside the domain of the function.
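On a finite transition relation, the rank α^mv can be computed by a simple backward value iteration taking the minimum over successors; the Python sketch below (ours, with a hypothetical three-state relation) illustrates how a state from which no terminating trace exists stays outside the domain.

```python
def may_rank(states, tau):
    """Minimum number of steps to a final (successor-free) state, when such a
    state is reachable; states that cannot reach one get no rank."""
    succ = {s: {t for (u, t) in tau if u == s} for s in states}
    rank = {s: 0 for s in states if not succ[s]}          # final states
    changed = True
    while changed:
        changed = False
        for s in states:
            cand = [rank[t] + 1 for t in succ[s] if t in rank]
            if cand and rank.get(s, float('inf')) > min(cand):
                rank[s] = min(cand)
                changed = True
    return rank

# a -> b, a -> c, c -> c : from a one *may* terminate (via b) in one step,
# while from c no terminating trace exists, so c has no rank.
states = {'a', 'b', 'c'}
tau = {('a', 'b'), ('a', 'c'), ('c', 'c')}
print(may_rank(states, tau))    # {'b': 0, 'a': 1}
```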
Note that our definition of the potential termination semantics τ^mt : Σ ⇀ O differs from [START_REF] Cousot | An Abstract Interpretation Framework for Termination[END_REF]: the value of the ranking function for a state is the greatest lower bound (instead of the least upper bound) of the value plus one of the ranking function of its successors. In fact, the following example shows that the definition of [START_REF] Cousot | An Abstract Interpretation Framework for Termination[END_REF] does not guarantee the existence of a least fixpoint.
Example 4.2.17 Let us consider again the program of Example 4.2.7:
while 1 ( ? ) do 2 skip od 3
The iterates of the potential termination semantics ⌧ mt 2 ⌃ * O with respect to the program control points are the following:
f⁰(1) ≝ ∅      f⁰(2) ≝ ∅      f⁰(3) ≝ ∅
f¹(1) ≝ ∅      f¹(2) ≝ ∅      f¹(3) ≝ λρ. 0
f²(1) ≝ λρ. 1  f²(2) ≝ ∅      f²(3) ≝ λρ. 0
f³(1) ≝ λρ. 1  f³(2) ≝ λρ. 2  f³(3) ≝ λρ. 0
f⁴(1) ≝ λρ. 1  f⁴(2) ≝ λρ. 2  f⁴(3) ≝ λρ. 0

Instead, the iterates of the potential termination semantics of [START_REF] Cousot | An Abstract Interpretation Framework for Termination[END_REF] are:

f⁰(1) ≝ ∅      f⁰(2) ≝ ∅      f⁰(3) ≝ ∅
f¹(1) ≝ ∅      f¹(2) ≝ ∅      f¹(3) ≝ λρ. 0
f²(1) ≝ λρ. 1  f²(2) ≝ ∅      f²(3) ≝ λρ. 0
f³(1) ≝ λρ. 1  f³(2) ≝ λρ. 2  f³(3) ≝ λρ. 0
f⁴(1) ≝ λρ. 3  f⁴(2) ≝ λρ. 2  f⁴(3) ≝ λρ. 0
f⁵(1) ≝ λρ. 3  f⁵(2) ≝ λρ. 4  f⁵(3) ≝ λρ. 0
f⁶(1) ≝ λρ. 5  f⁶(2) ≝ λρ. 4  f⁶(3) ≝ λρ. 0
…
f^ω(1) ≝ λρ. ω      f^ω(2) ≝ λρ. ω      f^ω(3) ≝ λρ. 0
f^(ω+1)(1) ≝ λρ. ω + 1   f^(ω+1)(2) ≝ λρ. ω + 1   f^(ω+1)(3) ≝ λρ. 0
…
In particular, note that the value of the potential termination semantics at the program control points 1 and 2 is always increasing.
The potential termination semantics is sound and complete for proving potential termination of a program for a given set of initial states I ⊆ Σ: Theorem 4.2.18 A program may terminate for execution traces starting from a given set of initial states I if and only if I ⊆ dom(τ^mt).
Proof. See Appendix A.2.
∎

Definite Termination Semantics. The definite termination semantics is the most precise ranking function τ^Mt : Σ ⇀ O for a program. It is defined starting from the final states in Ω, where the function has value zero, and retracing the program backwards while mapping each program state in Σ definitely leading to a final state (i.e., a program state such that all program execution traces to which it belongs are terminating) to an ordinal in O representing an upper bound on the number of program execution steps remaining to termination. The domain dom(τ^Mt) of τ^Mt is the set of states from which the program execution must terminate: all traces branching from a state s ∈ dom(τ^Mt) terminate in at most τ^Mt(s) execution steps, while at least one trace branching from a state s ∉ dom(τ^Mt) does not terminate.
Intuitively, a ranking function f 1 is more precise than another ranking function f 2 when it is defined over a larger set of program states, that is, it can prove termination for more program states, and when its value is always smaller, that is, the maximum number of program execution steps required for termination is smaller. The approximation order is then:
f₁ ≼ f₂ ⟺ dom(f₁) ⊇ dom(f₂) ∧ ∀x ∈ dom(f₂) : f₁(x) ≤ f₂(x).   (4.2.12)
τ^Mt ≝ α^Mrk(α_M(τ+∞)) = α^Mrk(τ_M)   (4.2.13)

where the ranking abstraction α^Mrk : P(Σ⁺) → (Σ ⇀ O) is:

α^Mrk(T) ≝ α^Mv(→α(T))   (4.2.14)

where the function α^Mv : P(Σ × Σ) → (Σ ⇀ O) provides the rank of the elements in the domain of a relation r ⊆ Σ × Σ:

α^Mv(r)s ≝ 0 if ∀s′ ∈ Σ : ⟨s, s′⟩ ∉ r
α^Mv(r)s ≝ sup { α^Mv(r)s′ + 1 | s′ ∈ dom(α^Mv(r)) ∧ ⟨s, s′⟩ ∈ r } otherwise
The following result provides a fixpoint definition of the definite termination semantics within the partially ordered set ⟨Σ ⇀ O, ⊑⟩, where the computational order is defined as:

f₁ ⊑ f₂ ⟺ dom(f₁) ⊆ dom(f₂) ∧ ∀x ∈ dom(f₁) : f₁(x) ≤ f₂(x).   (4.2.15)

τ^Mt = lfp⊑_∅ φ^Mt
φ^Mt(f) ≝ λs. 0 if s ∈ Ω; sup {f(s′) + 1 | ⟨s, s′⟩ ∈ τ} if s ∈ p̃re(dom(f)); undefined otherwise   (4.2.16)

Note that the approximation order ≼ and the computational order ⊑ coincide when the ranking functions have the same domain:
Lemma 4.2.22  dom(f₁) = dom(f₂) ⟹ (f₁ ≼ f₂ ⟺ f₁ ⊑ f₂)

Proof.
The proof is immediate from Equation 4.2.12 and Equation 4.2.15.
⌅
The definite termination semantics is sound and complete for proving definite termination of a program for a given set of initial states I ⊆ Σ: Theorem 4.2.23 A program must terminate for execution traces starting from a given set of initial states I if and only if I ⊆ dom(τ^Mt).
Proof.
See [START_REF] Cousot | An Abstract Interpretation Framework for Termination[END_REF].
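The contrast with the potential termination rank is easy to see on a finite relation: replacing the minimum by a maximum, and requiring all successors to be ranked, yields the must-rank. The Python sketch below (ours, on a small hypothetical relation) shows that a state with one diverging successor receives no rank.

```python
def must_rank(states, tau):
    """A state is ranked only if *all* its successors are ranked; its rank is
    the maximum number of remaining steps (mirroring alpha_Mv)."""
    succ = {s: {t for (u, t) in tau if u == s} for s in states}
    rank = {s: 0 for s in states if not succ[s]}          # final states
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in rank and succ[s].issubset(rank):  # all successors ranked
                rank[s] = 1 + max(rank[t] for t in succ[s])
                changed = True
    return rank

# a -> b, a -> c, c -> c, d -> b : a has a may-rank (see the earlier sketch)
# but no must-rank, because one of its successors (c) can loop forever.
states = {'a', 'b', 'c', 'd'}
tau = {('a', 'b'), ('a', 'c'), ('c', 'c'), ('d', 'b')}
print(must_rank(states, tau))   # {'b': 0, 'd': 1}
```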
Denotational Definite Termination Semantics
In this work, we are mostly interested in proving program definite termination.
In the following, we provide a structural definition of the fixpoint definite termination semantics ⌧ Mt 2 ⌃ * O (cf. Equation 4.2.16) by induction on the syntax of programs written in the small language presented in Chapter 3.
In Section 3.3 we partitioned the reachable state semantics τ_R (cf. Equation 2.2.10) into an invariance semantics τ_I ∈ L → P(E). Similarly, we partition τ^Mt with respect to the program control points: τ^Mt ∈ L → (E ⇀ O). The termination semantics τ^Mt⟦ stmt ⟧ : (E ⇀ O) → (E ⇀ O) of each program instruction stmt outputs a ranking function whose domain represents the terminating environments at the initial label of stmt, which is determined taking as input a ranking function whose domain represents the terminating environments at the final label of stmt, and whose value represents an upper bound on the number of program execution steps remaining to termination.
The termination semantics of a skip instruction takes as input a ranking function f : E * O whose domain represents the terminating environments at the final label of the instruction, and increases its value by one to take into account that from the environments at the initial label of the instruction another program execution step is necessary before termination:
τ^Mt⟦ l skip ⟧f ≝ λρ ∈ dom(f). f(ρ) + 1   (4.3.1)

Similarly, the termination semantics of a variable assignment l X := aexp takes as input a ranking function f : E ⇀ O whose domain represents the terminating environments at the final label of the instruction. The resulting ranking function is defined over the environments that, when subject to the variable assignment, always belong to the domain of the input ranking function. The value of the input ranking function for these environments is increased by one, to take into account another execution step before termination, and the value of the resulting ranking function is the least upper bound of these values:

τ^Mt⟦ l X := aexp ⟧f ≝ λρ. sup {f(ρ[X ↦ v]) + 1 | v ∈ ⟦aexp⟧ρ}  if ∃v ∈ ⟦aexp⟧ρ and ∀v ∈ ⟦aexp⟧ρ : ρ[X ↦ v] ∈ dom(f), and undefined otherwise   (4.3.2)
Note that, all environments yielding a run-time error due to a division by zero do not belong to the domain of the termination semantics of the assignment.
Example 4.3.1 Let X ≝ {x}. We consider the following ranking function f : E ⇀ O:

f(ρ) ≝ 2 if ρ(x) = 1, 3 if ρ(x) = 2, and undefined otherwise

and the backward assignment x := x + [1, 2]. The termination semantics of the assignment, given the ranking function, is:

τ^Mt⟦ x := x + [1, 2] ⟧f(ρ) ≝ 4 if ρ(x) = 0, and undefined otherwise
In particular, note that the function is only defined when ρ(x) = 0. In fact, when ρ(x) = -1, we have ⟦x + [1, 2]⟧ρ = {0, 1} and ρ[x ↦ 0] ∉ dom(f). Similarly, when ρ(x) = 1, we have ⟦x + [1, 2]⟧ρ = {2, 3} and ρ[x ↦ 3] ∉ dom(f).
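For finite sets of environments, Equation 4.3.2 can be evaluated directly; the following Python sketch (ours, not part of the thesis) reproduces the result of this example by representing ranking functions as finite dictionaries over the value of x.

```python
def assign_plus_interval(f, lo, hi, xs=range(-5, 6)):
    """Termination semantics of  x := x + [lo, hi]  applied to a ranking
    function f given as a dict mapping values of x to ranks."""
    out = {}
    for x in xs:
        results = [x + v for v in range(lo, hi + 1)]      # possible new values
        if results and all(r in f for r in results):      # every outcome ranked
            out[x] = max(f[r] for r in results) + 1        # sup + 1
    return out

f = {1: 2, 2: 3}                        # f(x) = 2 if x = 1, f(x) = 3 if x = 2
print(assign_plus_interval(f, 1, 2))    # {0: 4}: defined only for x = 0
```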
Given a conditional instruction if l bexp then stmt₁ else stmt₂ fi, its termination semantics takes as input a ranking function f : E ⇀ O and derives the termination semantics τ^Mt⟦ stmt₁ ⟧f of stmt₁, in the following denoted by S₁, and the termination semantics τ^Mt⟦ stmt₂ ⟧f of stmt₂, in the following denoted by S₂. Then, the termination semantics of the conditional instruction is defined by means of the ranking function F[f] : E ⇀ O whose domain is the set of environments that belong to the domain of S₁ and to the domain of S₂, and that due to non-determinism may both satisfy and not satisfy the boolean expression bexp:

F[f] ≝ λρ ∈ dom(S₁) ∩ dom(S₂). sup {S₁(ρ) + 1, S₂(ρ) + 1} if ⟦bexp⟧ρ = {true, false}, and undefined otherwise

and the ranking function F₁[f] : E ⇀ O whose domain is the set of environments ρ ∈ E that belong to the domain of S₁ and that must satisfy bexp:

F₁[f] ≝ λρ ∈ dom(S₁). S₁(ρ) + 1 if ⟦bexp⟧ρ = {true}, and undefined otherwise

and the ranking function F₂[f] : E ⇀ O whose domain is the set of environments that belong to the domain of S₂ and that cannot satisfy bexp:

F₂[f] ≝ λρ ∈ dom(S₂). S₂(ρ) + 1 if ⟦bexp⟧ρ = {false}, and undefined otherwise

The resulting ranking function is defined by joining F[f], F₁[f], and F₂[f]:

τ^Mt⟦ if l bexp then stmt₁ else stmt₂ fi ⟧f ≝ F[f] ∪ F₁[f] ∪ F₂[f]   (4.3.3)

Example 4.3.2 Let X ≝ {x}.
We consider the termination semantics of the conditional statement if bexp then stmt₁ else stmt₂ fi. We assume, given a ranking function f : E ⇀ O, that the termination semantics of stmt₁ is defined as:

τ^Mt⟦ stmt₁ ⟧f(ρ) ≝ 1 if ρ(x) ≤ 0, and undefined otherwise

and that the termination semantics of stmt₂ is defined as:

τ^Mt⟦ stmt₂ ⟧f(ρ) ≝ 3 if 0 ≤ ρ(x), and undefined otherwise

Then, when the boolean expression bexp is for example x ≤ 3, the termination semantics of the conditional statement is:

τ^Mt⟦ if l bexp then stmt₁ else stmt₂ fi ⟧f(ρ) ≝ 2 if ρ(x) ≤ 0, 4 if 3 < ρ(x), and undefined otherwise

Instead, when bexp is for example the non-deterministic choice ?, we have:

τ^Mt⟦ if l bexp then stmt₁ else stmt₂ fi ⟧f(ρ) ≝ 4 if ρ(x) = 0, and undefined otherwise
The termination semantics of a loop instruction while l bexp do stmt od takes as input a ranking function f : E ⇀ O, the domain of which represents the terminating environments at the final label of the instruction, and outputs the ranking function defined as a least fixpoint of the function φ^Mt : (E ⇀ O) → (E ⇀ O) within ⟨E ⇀ O, ⊑⟩, analogously to Equation 4.2.16:

τ^Mt⟦ while l bexp do stmt od ⟧f ≝ lfp⊑_∅ φ^Mt   (4.3.4)

where the computational order is defined as in Equation 4.2.15:

f₁ ⊑ f₂ ⟺ dom(f₁) ⊆ dom(f₂) ∧ ∀x ∈ dom(f₁) : f₁(x) ≤ f₂(x).

The function φ^Mt : (E ⇀ O) → (E ⇀ O) takes as input a ranking function x : E ⇀ O and adds to its domain the environments for which one more loop iteration is needed before termination. In the following, the termination semantics τ^Mt⟦ stmt ⟧x of the loop body is denoted by S. The function φ^Mt is defined by means of the ranking function F[x] : E ⇀ O whose domain is the set of environments that belong to the domain of S and to the domain of the input function f, and that may both satisfy and not satisfy the boolean expression bexp:

F[x] ≝ λρ ∈ dom(S) ∩ dom(f). sup {S(ρ) + 1, f(ρ) + 1} if ⟦bexp⟧ρ = {true, false}, and undefined otherwise

and the ranking function F₁[x] : E ⇀ O whose domain is the set of environments ρ ∈ E that belong to the domain of S and that must satisfy bexp:

F₁[x] ≝ λρ ∈ dom(S). S(ρ) + 1 if ⟦bexp⟧ρ = {true}, and undefined otherwise

and the ranking function F₂[f] : E ⇀ O whose domain is the set of environments that belong to the domain of the input function f and that cannot satisfy bexp:

F₂[f] ≝ λρ ∈ dom(f). f(ρ) + 1 if ⟦bexp⟧ρ = {false}, and undefined otherwise

The resulting ranking function is defined by joining F[x], F₁[x], and F₂[f]:

φ^Mt(x) ≝ F[x] ∪ F₁[x] ∪ F₂[f]   (4.3.5)

Finally, the termination semantics of the sequential combination of instructions stmt₁ stmt₂ takes as input a ranking function f : E ⇀ O, determines from f the termination semantics τ^Mt⟦ stmt₂ ⟧f of stmt₂, and outputs the ranking function determined by the termination semantics of stmt₁ from τ^Mt⟦ stmt₂ ⟧f:

τ^Mt⟦ stmt₁ stmt₂ ⟧f ≝ τ^Mt⟦ stmt₁ ⟧(τ^Mt⟦ stmt₂ ⟧f)   (4.3.6)
The termination semantics ⌧ Mt J prog K 2 E * O of a program prog is a ranking function whose domain represents the terminating environments, which is determined taking as input the zero function:
Definition 4.3.3 (Termination Semantics) The termination semantics τ^Mt⟦ prog ⟧ : E ⇀ O of a program prog is:

τ^Mt⟦ prog ⟧ = τ^Mt⟦ stmt l ⟧ ≝ τ^Mt⟦ stmt ⟧(λρ. 0)   (4.3.7)

where the function τ^Mt⟦ stmt ⟧ : (E ⇀ O) → (E ⇀ O) is the termination semantics of each program instruction stmt.
Note that, as pointed out in Remark 3.2.2 and accordingly to Definition 3.2.5, possible run-time errors silently halting the program are ignored. More specifically, all environments leading to run-time errors are discarded and do not belong to the domain of the termination semantics of a program prog.
The termination semantics τ^Mt⟦ prog ⟧ : E ⇀ O is usually not computable. In the next Chapter 5 and Chapter 6, we present sound decidable abstractions of τ^Mt⟦ prog ⟧ by means of piecewise-defined functions. The soundness is related to the approximation order defined in Equation 4.2.12:

f₁ ≼ f₂ ⟺ dom(f₁) ⊇ dom(f₂) ∧ ∀x ∈ dom(f₂) : f₁(x) ≤ f₂(x).
Piecewise-Defined Ranking Functions
In this chapter, we present a parameterized numerical abstract domain for effectively proving program termination by abstract interpretation of the definite termination semantics presented in Section 4.3. The domain is used to automatically synthesize piecewise-defined ranking functions and infer sufficient preconditions for program termination. The elements of the abstract domain are piecewise-defined partial functions represented by decision trees, where the decision nodes are labeled with linear constraints, and the leaf nodes belong to an auxiliary abstract domain for functions. The abstract domain is parametric in the choice between the expressivity and the cost of the numerical abstract domain which underlies the linear constraints labeling the decision nodes, and the choice of the auxiliary abstract domain for the leaf nodes. We describe various instances based on the numerical abstract domains presented in Section 3.4 for the decision nodes, and affine functions for the leaf nodes.
Piecewise-Defined Ranking Functions
In order to abstract the termination semantics presented in Section 4.3, we consider the following concretization-based abstraction:
hE * O, 4i T hT , 4 T i which provides a sound decidable abstraction ⌧ \ Mt J prog K 2 T of the termina- tion semantics of programs ⌧ Mt J prog K 2 E * O with
respect to the approximation order defined in Equation 4.2.12:
f_1 ⪯ f_2 ≜ dom(f_1) ⊇ dom(f_2) ∧ ∀x ∈ dom(f_2) : f_1(x) ≤ f_2(x).
We have τ_Mt⟦prog⟧ ⪯ γ_T(τ♮_Mt⟦prog⟧), meaning that the abstract termination semantics τ♮_Mt⟦prog⟧ over-approximates the value of the ranking function τ_Mt⟦prog⟧ and under-approximates its domain of definition dom(τ_Mt⟦prog⟧). In this way, an abstraction provides sufficient preconditions for program termination: if the abstract ranking function is defined on a program state, then all program execution traces branching from that state are terminating.
By pointwise lifting (cf. Equation 2.1.1) we obtain the following:
⟨L → (E ⇀ O), ⪯̇⟩ ←γ̇_T− ⟨L → T, ⪯̇_T⟩
The partitioning is dynamic: during the analysis, partitions (resp. decision nodes and constraints) are split (resp. added) by tests, modified by assignments and joined (resp. removed) when merging control flows. In order to minimize the cost of the analysis, a widening limits the height of the decision trees and the number of maintained disjunctions.
The abstract domain is parametric in the choice between the expressivity and the cost of the numerical abstract domain which underlies the linear constraints labeling the decision nodes, and the choice of the auxiliary abstract domain for the leaf nodes. In this chapter, we propose various instances based on the numerical abstract domains presented in Section 3.4 for the decision nodes, and affine functions for the leaf nodes.
The following examples motivate the choice and illustrate the potential of piecewise-defined ranking functions.
Example 5.1.1 Let us consider the following program from [START_REF] Podelski | A Complete Method for the Synthesis of Linear Ranking Functions[END_REF]:
while ^1 (x ≥ 0) do ^2 x := −2x + 10 od ^3
The program is terminating since our program variables have integer values (cf. Section 3.1). In case we admitted non-integer values, the program would not terminate for x = 10/3. However, the program does not have a linear ranking function (cf. [START_REF] Podelski | A Complete Method for the Synthesis of Linear Ranking Functions[END_REF]). As a result, well-known methods to synthesize ranking functions, like [START_REF] Podelski | A Complete Method for the Synthesis of Linear Ranking Functions[END_REF][START_REF] Bradley | The Polyranking Principle[END_REF], are not able to guarantee its termination.
On the other hand, our method is not impaired by the fact that the program does not have a linear ranking function. The decision tree t ∈ T inferred at the initial program control point is depicted in Figure 5.1a. It represents the following piecewise-defined ranking function γ_T(t) : E ⇀ O, which proves that the program is terminating in at most nine program execution steps independently of the initial value of the program variable:
[Figure 5.1a: the inferred decision tree, with decision nodes labeled by the linear constraints x − 6 ≥ 0, x − 4 ≥ 0, x − 3 ≥ 0 and x ≥ 0, and leaves λx. 3, λx. 7, λx. 9, λx. 5 and λx. 1. The leaves of the tree represent partial functions whose domain is determined by the constraints satisfied along the path to the leaf node.]
γ_T(t) ≜ λρ.
  1   if ρ(x) ≤ −1
  5   if 0 ≤ ρ(x) ≤ 2
  9   if ρ(x) = 3
  7   if 4 ≤ ρ(x) ≤ 5
  3   if 6 ≤ ρ(x)
The graphical representation of the ranking function is shown in Figure 5.1b.
γ_T(t) ≜ λρ.
  1         if ρ(r) ≤ −1
  3r + 1    if 0 ≤ ρ(r) ∧ ρ(x) < ρ(y)
  undefined otherwise
which proves that the program is terminating in at most 3r + 1 program execution steps if the initial value of the program variable x is smaller than the initial value of the program variable y. Note that the constraint x < y does not explicitly appear in the program.
The fully detailed analysis of the program proposed in Example 5.1.2 uses polyhedral constraints based on the polyhedra abstract domain (cf. Section 3.4.2) for the decision nodes.
We emphasize that, as shown by the previous example, the partitioning induced by the decision trees is semantic-based rather than syntactic-based: the linear constraints labeling the decision nodes are automatically inferred by the analysis and do not necessarily appear in the program.
Decision Trees Abstract Domain
In the following, we give a more formal presentation of the decision trees abstract domain. To this end, we introduce the family of abstract domains T(D, C, F), parameterized by a family C of auxiliary abstract domains for the linear constraints labeling the decision nodes, and a family F of auxiliary abstract domains for the leaf nodes, both parameterized by a numerical abstract domain D from Section 3.4. Adopting an OCaml terminology, each T is an abstract domain functor: a function mapping the parameter abstract domains D, C and F into a new abstract domain T(D, C, F). T can be applied to various implementations of D, C and F, yielding the corresponding implementations of T(D, C, F), with no need for further programming effort. We first formally define the families C, F and T. Then, we present all abstract operators needed to manipulate decision nodes, leaf nodes, and decision trees.
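Since the functor terminology above is explicitly borrowed from OCaml, a rough OCaml sketch may help picture how the parameterization T(D, C, F) could be organized. All module names and signatures below (NUMERICAL, CONSTRAINT, LEAF, MakeTreeDomain) are our own illustrative assumptions, not the interface of any existing analyzer.

(* A minimal sketch of the functor structure T(D, C, F): the decision tree
   domain is built from a numerical domain D, a constraint domain C and a
   leaf domain F. All names are illustrative assumptions. *)
module type NUMERICAL = sig
  type t                                  (* abstract numerical element *)
  val leq : t -> t -> bool                (* abstract inclusion *)
  val meet : t -> t -> t
end

module type CONSTRAINT = sig
  type num                                (* underlying numerical element *)
  type t                                  (* a linear constraint *)
  val compare : t -> t -> int             (* total order on constraints *)
  val negate : t -> t                     (* negation of a constraint *)
  val to_num : t list -> num              (* abstraction of a constraint set *)
end

module type LEAF = sig
  type num
  type t                                  (* bottom, affine function, or top *)
  val join : num -> t -> t -> t           (* join under a path abstraction *)
end

(* The decision tree domain is a functor of the three parameter domains. *)
module MakeTreeDomain
    (D : NUMERICAL)
    (C : CONSTRAINT with type num = D.t)
    (F : LEAF with type num = D.t) =
struct
  type t = Leaf of F.t | Node of C.t * t * t   (* constraint, left, right *)
end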
Decision Trees
We first dive into some details on the families of auxiliary abstract domains C and F. Then, we formally define the family of abstract domains T.
Linear Constraints Auxiliary Abstract Domain. The family of auxiliary abstract domains C is itself a functor parameterized by D, where D is any of the numerical abstract domains ⟨D, ⊑_D⟩ introduced in Section 3.4.
In the following, let X ≜ {X_1, ..., X_k} be the set of program variables. The elements of the numerical abstract domains of Section 3.4 are equivalently represented as sets (i.e., conjunctions) of linear constraints of the form:
±X_i ≥ c   (intervals abstract domain)
±X_i ± X_j ≥ c   (octagons abstract domain)
c_1X_1 + ... + c_kX_k + c_{k+1} ≥ 0   (polyhedra abstract domain)
where c, c_1, ..., c_k, c_{k+1} ∈ Z. In the following, C_B ≜ {±X_i ≥ c | X_i ∈ X, c ∈ Z} denotes the set of interval constraints, C_O ≜ {±X_i ± X_j ≥ c | X_i, X_j ∈ X, c ∈ Z} denotes the set of octagonal constraints, and C_P ≜ {c_1X_1 + ... + c_kX_k + c_{k+1} ≥ 0 | X_1, ..., X_k ∈ X, c_1, ..., c_k, c_{k+1} ∈ Z} denotes the set of polyhedral constraints. Let C be any of these sets of constraints. We have C_B ⊆ C_O ⊆ C_P. In particular, any interval constraint ±X_i ≥ c is equivalent to the polyhedral constraint 0X_1 + ... ± X_i + ... + 0X_k − c ≥ 0, and any octagonal constraint ±X_i ± X_j ≥ c is equivalent to the polyhedral constraint 0X_1 + ... ± X_i + ... ± X_j + ... + 0X_k − c ≥ 0. In the following, we write every constraint in the polyhedral form, normalized so that its coefficients have no common divisor:
C ≜ { c_1X_1 + ... + c_kX_k + c_{k+1} ≥ 0 | X_1, ..., X_k ∈ X, c_1, ..., c_k, c_{k+1} ∈ Z, gcd(|c_1|, ..., |c_k|, |c_{k+1}|) = 1 }   (5.2.1)
We can easily formalize the correspondence between the elements of the numerical abstract domains of Section 3.4 and sets of linear constraints as a Galois connection ⟨P(C), ⊑_D⟩ ⇄(α_C, γ_C) ⟨D, ⊑_D⟩. The decision tree abstract domain is parametric in the choice between the expressivity and the cost of the set C of linear constraints chosen for labeling the decision nodes, which depends on the corresponding underlying numerical abstract domain ⟨D, ⊑_D⟩. As for boolean decision trees, where an ordering is imposed on all decision variables, we assume the set of program variables X to be totally ordered and we impose a total order <_C on C. As an example, we define <_C to be the lexicographic order on the coefficients c_1, ..., c_k and the constant c_{k+1} of the linear constraints c_1X_1 + ... + c_kX_k + c_{k+1} ≥ 0:
a_1X_1 + ... + a_kX_k + a_{k+1} ≥ 0  <_C  b_1X_1 + ... + b_kX_k + b_{k+1} ≥ 0  ⟺  ∃j > 0 : ∀i < j : (a_i = b_i) ∧ (a_j < b_j)   (5.2.2)
Thus, any set C equipped with <_C forms a totally ordered set ⟨C, <_C⟩.
We define the negation ¬c of a linear constraint c as follows:
¬(c_1X_1 + ... + c_kX_k + c_{k+1} ≥ 0) ≜ −c_1X_1 − ... − c_kX_k − c_{k+1} − 1 ≥ 0   (5.2.3)
Note that we decrement the value of the constant of the negated linear constraint because our program variables have integer values (cf. Section 3.1).
Example 5.2.1 Let X ≜ {x} and let us consider the linear constraint x − 2 ≥ 0. Its negation ¬(x − 2 ≥ 0) is the linear constraint −x + 1 ≥ 0 (i.e., x ≤ 1).
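As a small illustration of Equations 5.2.2 and 5.2.3, the following OCaml sketch represents a polyhedral constraint by its integer coefficients and implements the lexicographic comparison and the negation; the record type and function names are hypothetical, and coefficient arrays are assumed to have the same length.

(* Sketch: a polyhedral constraint  c1*X1 + ... + ck*Xk + c(k+1) >= 0  is
   represented by its integer coefficients; the ordering and the negation
   follow Equations 5.2.2 and 5.2.3. Illustrative code, not a real API. *)
type constr = { coeffs : int array; const : int }   (* c1..ck and c(k+1) *)

(* Lexicographic order on the coefficients and the constant (arrays are
   assumed to have the same length, one cell per program variable). *)
let compare_constr (a : constr) (b : constr) : int =
  let c = compare a.coeffs b.coeffs in
  if c <> 0 then c else compare a.const b.const

(* Negation over integers: not(e >= 0)  is  -e - 1 >= 0. *)
let negate (c : constr) : constr =
  { coeffs = Array.map (fun x -> -x) c.coeffs; const = -c.const - 1 }

(* Example 5.2.1: x - 2 >= 0 over X = {x}. *)
let c1 = { coeffs = [| 1 |]; const = -2 }
let () =
  let n = negate c1 in                               (* -x + 1 >= 0, i.e. x <= 1 *)
  ignore (compare_constr c1 n);
  Printf.printf "negation: %d*x + %d >= 0\n" n.coeffs.(0) n.const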
In order to ensure a canonical representation of the decision trees, we forbid a linear constraint c and its negation ¬c from simultaneously appearing in a decision tree. As an example, between c and ¬c, we keep only the largest constraint with respect to the total order < C . We define the following equivalence relation between a linear constraint and its negation:
c_1 ≡_C c_2 ≜ c_1 = ¬c_2 ∧ c_2 = ¬c_1   (5.2.4)
Let C denote any totally ordered set ⟨C/≡_C, <_C⟩.
Functions Auxiliary Abstract Domain. The family of abstract domains F is also a functor parameterized by any of the numerical abstract domains D introduced in Section 3.4. It is dedicated to the manipulation of the leaf nodes of a decision tree with respect to the set of linear constraints satisfied along their paths from the root of the decision tree.
The elements of these abstract domains belong to the following set:
F ≜ {⊥_F} ∪ (Z^|X| → N) ∪ {⊤_F}   (5.2.5)
which consists of natural-valued functions of the program variables, plus the element ⊥_F and the element ⊤_F, whose meaning will be explained shortly.
In the following, the leaf nodes belonging to F \ {⊥_F, ⊤_F} and to {⊥_F, ⊤_F} are referred to as defined and undefined leaf nodes, respectively. Moreover, the undefined leaf nodes belonging to {⊥_F} are called ⊥_F-leaves and those belonging to {⊤_F} are called ⊤_F-leaves.
As an instance, in the following we consider affine functions:
F_A ≜ { f : Z^|X| → N | f(X_1, ..., X_k) = m_1X_1 + ... + m_kX_k + q } ∪ {⊥_F, ⊤_F}.   (5.2.6)
We now define a computational order ⊑_F and an approximation order ⪯_F where ⊥_F-leaves and ⊤_F-leaves are comparable and incomparable, respectively. These orders are parameterized by a numerical abstract domain element D ∈ D, which represents the linear constraints satisfied along the path to the compared leaf nodes. Intuitively, these orders abstract the computational order ⊑ defined in Equation 4.2.15 and the approximation order ⪯ defined in Equation 4.2.12, respectively. We have observed in Lemma 4.2.22 that ⊑ and ⪯ coincide when the ranking functions have the same domain. Analogously, the approximation order ⪯_F and the computational order ⊑_F between defined leaf nodes are identical and defined as follows:
[Figure 5.3: Hasse diagrams for (a) the approximation order ⪯_F and (b) the computational order ⊑_F between leaf nodes f : Z^|X| → N, ⊥_F and ⊤_F.]
f_1 ⪯_F[D] f_2 ⟺ ∀ρ ∈ γ_D(D) : f_1(ρ(X_1), ..., ρ(X_k)) ≤ f_2(ρ(X_1), ..., ρ(X_k))   (5.2.7)
f_1 ⊑_F[D] f_2 ⟺ ∀ρ ∈ γ_D(D) : f_1(ρ(X_1), ..., ρ(X_k)) ≤ f_2(ρ(X_1), ..., ρ(X_k))   (5.2.8)
Instead, when one or both leaf nodes are undefined, the approximation and computational order are defined by the Hasse diagrams in Figure 5.3a and Figure 5.3b, respectively. Note that, as we mentioned, in the approximation order ? F -leaves and > F -leaves are incomparable.
A leaf node, together with its path from the root of the decision tree, represents a (piece of a) partial ordinal-valued functions of environments. We define the following concretization-based abstraction:
⟨E ⇀ O, ⪯⟩ ←γ_F[D]− ⟨F, ⪯_F⟩
where D ∈ D represents the path to the leaf node. The concretization function γ_F : D → F → (E ⇀ O) is defined as follows:
γ_F[D]⊥_F ≜ ∅
γ_F[D]f ≜ λρ ∈ γ_D(D). f(ρ(X_1), ..., ρ(X_k))
γ_F[D]⊤_F ≜ ∅   (5.2.9)
Note that both ⊥_F and ⊤_F represent the totally undefined function.
The following result proves that, given D ∈ D, γ_F[D] is monotonic:
Lemma 5.2.2 ∀f_1, f_2 ∈ F : f_1 ⪯_F[D] f_2 ⟹ γ_F[D]f_1 ⪯ γ_F[D]f_2.
Proof. See Appendix A.3.
⌅
Note that, for now, we are approximating ordinal-valued functions by means of natural-valued functions. The next Chapter 6, will be dedicated to abstractions based on ordinal-valued functions.
Decision Trees Abstract Domain. We can now use the families of domains D, C and F to build the decision trees abstract domains T(D, C, F).
The elements of these abstract domains belong to the following set:
T ≜ {LEAF : f | f ∈ F} ∪ {NODE{c} : t_1; t_2 | c ∈ C, t_1, t_2 ∈ T}   (5.2.10)
where F is defined in Equation 5.2.5 and C is defined in Equation 5.2.1. A decision tree t ∈ T is either a leaf node LEAF : f, with f an element of F (in the following denoted by t.f), or a decision node NODE{c} : t_1; t_2, such that c is a linear constraint in C (in the following denoted by t.c) and the left subtree t_1 and the right subtree t_2 (in the following denoted by t.l and t.r, respectively) belong to T. In addition, given a decision tree NODE{c} : t_1; t_2, we impose that the linear constraint c ∈ C is always the largest constraint, with respect to <_C, appearing in the tree. In the following, let ⊥_T ≜ LEAF : ⊥_F.
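Equation 5.2.10 translates almost literally into an algebraic data type. The OCaml sketch below, with illustrative names, also encodes the decision tree of Figure 5.1a for Example 5.1.1 as a value of that type.

(* Sketch of the decision tree data type of Equation 5.2.10, with leaves as
   in Equations 5.2.5 and 5.2.6. The names are illustrative assumptions. *)
type constr = { coeffs : int array; const : int }   (* c1*X1+...+ck*Xk+c(k+1) >= 0 *)

type leaf =
  | Bot                                   (* bottom leaf *)
  | Affine of int array * int             (* m1*X1 + ... + mk*Xk + q *)
  | Top                                   (* top leaf *)

type tree =
  | Leaf of leaf
  | Node of constr * tree * tree          (* NODE{c}: t1; t2 *)

let bot_tree = Leaf Bot                   (* the least element of the domain *)

(* The decision tree of Figure 5.1a for Example 5.1.1 (one variable x). *)
let example_5_1_1 : tree =
  Node ({ coeffs = [| 1 |]; const = -6 },             (* x - 6 >= 0 *)
        Leaf (Affine ([| 0 |], 3)),
        Node ({ coeffs = [| 1 |]; const = -4 },       (* x - 4 >= 0 *)
              Leaf (Affine ([| 0 |], 7)),
              Node ({ coeffs = [| 1 |]; const = -3 }, (* x - 3 >= 0 *)
                    Leaf (Affine ([| 0 |], 9)),
                    Node ({ coeffs = [| 1 |]; const = 0 },  (* x >= 0 *)
                          Leaf (Affine ([| 0 |], 5)),
                          Leaf (Affine ([| 0 |], 1))))))
let _ = bot_tree
let _ = example_5_1_1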
A decision tree represents a partial ordinal-valued function of environments. We define the following concretization-based abstraction:
⟨E ⇀ O, ⪯⟩ ←γ_T[D]− ⟨T, ⪯_T⟩
where D ∈ D represents an over-approximation of the reachable environments.
The concretization function γ_T : D → T → (E ⇀ O) is defined as follows:
γ_T[D]t ≜ γ̄_T[γ_C(D)]t   (5.2.11)
where the function γ̄_T : P(C) → T → (E ⇀ O) accumulates into a set C ∈ P(C) (initially equal to γ_C(D)) the linear constraints satisfied along the paths of the decision tree up to a leaf node, where the concretization function γ_F : D → F → (E ⇀ O) (cf. Equation 5.2.9) returns a partially defined function over γ_D(α_C(C)):
γ̄_T[C]LEAF : f ≜ γ_F[α_C(C)]f
γ̄_T[C]NODE{c} : t_1; t_2 ≜ γ̄_T[C ∪ {c}]t_1 ∪ γ̄_T[C ∪ {¬c}]t_2
The approximation order 4 T will be formally defined in the next section.
Binary Operators
In the following, we define the decision tree computational and approximation ordering. Moreover, we define binary operators for the computational and approximation join, and for the meet of decision trees.
The binary operators manipulate elements belonging to the following set:
T_NIL ≜ {NIL} ∪ {LEAF : f | f ∈ F} ∪ {NODE{c} : t_1; t_2 | c ∈ C, t_1, t_2 ∈ T_NIL}   (5.2.12)
A partial decision tree t ∈ T_NIL is either an empty tree NIL, or a leaf node LEAF : f, with f an element of F, or a decision node NODE{c} : t_1; t_2, such that c is a linear constraint in C and the left subtree t_1 and the right subtree t_2 belong to T_NIL. Note that T ⊆ T_NIL. Intuitively, the special element NIL represents the absence of information regarding some partition of the domain of the ranking function. Note that an empty tree NIL is different from an undefined leaf node LEAF : ⊥_F or LEAF : ⊤_F, since an undefined leaf node provides the information that the ranking function is undefined. The NIL nodes serve an algorithmic purpose. They allow carving out parts of decision trees and stitching them up on disjoint domains. In particular, NIL nodes are only temporary. They are inserted in intermediate trees during computations but disappear before the abstract operators return.
Tree Unification. The decision tree orderings and binary operators are based on Algorithm 1 for tree unification: the main function unification, given an over-approximation D 2 D of the reachable environments and two decision trees t 1 , t 2 2 T NIL , calls the auxiliary unification-aux to find a common refinement for the trees.
The function unification-aux accumulates into a set C 2 P(C) (initially equal to C (D), cf. Line 36) the linear constraints encountered along the paths of the decision trees (cf. Lines 12-13, Lines 21-22, Lines 31-32), possibly adding decision nodes (cf. Line 14, Line 23) or removing constraints that are redundant (cf. Line 7, Line 16, Line 26) or whose negation is redundant (cf. Line 9, Line 18, Line 28) with respect to C.
The redundancy check is performed by the function isRedundant which, given a linear constraint c ∈ C and a set of linear constraints C ∈ P(C), tests the inclusion of the corresponding numerical abstract domain elements α_C(C) ∈ D and α_C({c}) ∈ D (cf. Line 2).
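The following OCaml sketch illustrates the redundancy check of Algorithm 1 in the simplest possible setting, a single program variable abstracted by intervals; the types and the encoding of constraints are simplifying assumptions made only for this example.

(* Sketch of the redundancy check of Algorithm 1 over a one-variable
   intervals domain: isRedundant(c, C) holds when the abstraction of the
   accumulated constraints C is included in the abstraction of {c}.
   Constraints are of the form  a*x + b >= 0  with a in {-1, 0, 1}. *)
type constr = { a : int; b : int }                  (* a*x + b >= 0 *)
type itv = { lo : float; hi : float }               (* interval abstraction *)

let top = { lo = neg_infinity; hi = infinity }

(* Abstraction of a single constraint: the interval of x satisfying it. *)
let alpha_one (c : constr) : itv =
  match c.a with
  | 1 -> { lo = float_of_int (-c.b); hi = infinity }     (* x >= -b *)
  | -1 -> { lo = neg_infinity; hi = float_of_int c.b }   (* x <=  b *)
  | _ -> if c.b >= 0 then top else { lo = infinity; hi = neg_infinity }

let meet i1 i2 = { lo = max i1.lo i2.lo; hi = min i1.hi i2.hi }
let leq i1 i2 = i1.lo >= i2.lo && i1.hi <= i2.hi         (* interval inclusion *)

(* Abstraction of a constraint set, then the inclusion test of Algorithm 1. *)
let alpha (cs : constr list) : itv =
  List.fold_left (fun acc c -> meet acc (alpha_one c)) top cs
let is_redundant (c : constr) (cs : constr list) : bool = leq (alpha cs) (alpha_one c)

let () =
  (* x - 3 >= 0 makes x >= 0 redundant. *)
  let c = { a = 1; b = 0 } and cs = [ { a = 1; b = -3 } ] in
  Printf.printf "redundant: %b\n" (is_redundant c cs)    (* true *)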
[Algorithm 1: Tree Unification. The function unification(D, t_1, t_2) returns unification-aux(t_1, t_2, γ_C(D)); the function isRedundant(c, C) returns α_C(C) ⊑_D α_C({c}).]
[Figure 5.4: an example of tree unification, panels (a)-(d): the partial decision trees (a) NODE{x + 1 ≥ 0} : (λx. x + 4); NIL and (b) NODE{x ≥ 0} : NIL; · , and the corresponding unified trees (c) and (d) built over the constraints x ≥ 0 and x + 1 ≥ 0.]
Note that the tree unification does not lose any information. Then, the binary operations are carried out "leaf-wise" on the unified decision trees.
Ordering. The decision tree ordering is implemented by Algorithm 2: the function order is parameterized by the choice of the ordering E between leaf nodes and, given a sound over-approximation D ∈ D of the reachable environments and two decision trees t_1, t_2 ∈ T, calls the function unification for tree unification (cf. Line 11) and then calls the auxiliary function order-aux. The function order-aux accumulates into a set C ∈ P(C) (initially equal to γ_C(D), cf. Line 12) the linear constraints encountered along the paths of the decision trees (cf. Lines 6-7) up to the leaf nodes, which are compared by means of the chosen ordering E (cf. Line 3).
[Algorithm 2: Tree Order, with E ∈ {⪯_F, ⊑_F}; Algorithm 3: Tree Approximation Order, where a-order(D, t_1, t_2) returns order(⪯_F, D, t_1, t_2).]
In particular, the approximation ordering ⪯_T and the computational ordering ⊑_T are implemented by Algorithm 3 and Algorithm 4, respectively: the functions a-order of Algorithm 3 and c-order of Algorithm 4 call the function order of Algorithm 2 choosing respectively the approximation order ⪯_F (cf. Equation 5.2.7 and Figure 5.3a) and the computational order ⊑_F (cf. Equation 5.2.8 and Figure 5.3b).
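A sketch of the "leaf-wise" comparison of Algorithm 2 is given below, under the simplifying assumption that the two trees have already been unified; the leaf ordering passed as a parameter is a crude stand-in for the approximation order between affine leaves (it only compares leaves with the same slope), so this illustrates the recursion scheme rather than the actual ordering.

(* Sketch of Algorithm 2: compare two unified decision trees leaf-wise,
   parameterized by an ordering on leaves. Illustrative types and names. *)
type constr = { coeffs : int array; const : int }
type leaf = Bot | Affine of int array * int | Top
type tree = Leaf of leaf | Node of constr * tree * tree

let negate (c : constr) : constr =
  { coeffs = Array.map (fun x -> -x) c.coeffs; const = -c.const - 1 }

let rec order ~(leaf_leq : constr list -> leaf -> leaf -> bool)
    (path : constr list) (t1 : tree) (t2 : tree) : bool =
  match t1, t2 with
  | Leaf f1, Leaf f2 -> leaf_leq path f1 f2
  | Node (c, l1, r1), Node (_, l2, r2) ->
      (* after unification both trees carry the same constraint c here *)
      order ~leaf_leq (c :: path) l1 l2
      && order ~leaf_leq (negate c :: path) r1 r2
  | _ -> invalid_arg "order: trees are not unified"

(* Crude stand-in for the approximation order of Figure 5.3a: undefined
   leaves are greatest and mutually incomparable; defined leaves are only
   compared when they have the same slope. *)
let approx_leq (_path : constr list) (f1 : leaf) (f2 : leaf) : bool =
  match f1, f2 with
  | Bot, Top | Top, Bot -> false
  | _, Bot | _, Top -> true
  | Bot, _ | Top, _ -> false
  | Affine (m1, q1), Affine (m2, q2) -> m1 = m2 && q1 <= q2

let () =
  let t1 = Leaf (Affine ([| 1 |], 0)) and t2 = Leaf (Affine ([| 1 |], 2)) in
  Printf.printf "t1 below t2: %b\n" (order ~leaf_leq:approx_leq [] t1 t2)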
The following result proves that, given D ∈ D, the concretization function γ_T[D] is monotonic with respect to the approximation order ⪯_T[D]:
Lemma 5.2.4 ∀t_1, t_2 ∈ T : t_1 ⪯_T[D] t_2 ⟹ γ_T[D]t_1 ⪯ γ_T[D]t_2.
Proof. See Appendix A.3. ∎
[Algorithm 5: Tree Join. The function join(·, D, t_1, t_2), with · ∈ {⋎_F, ⊔_F}, computes (t_1, t_2) ← unification(D, t_1, t_2) and returns join-aux(·, t_1, t_2, γ_C(D)); join-aux joins unified leaf nodes as LEAF : t_1.f ·[α_C(C)] t_2.f and descends into decision nodes NODE{c}, accumulating c and ¬c along the paths.]
Join. The join of decision trees is implemented by Algorithm 5: the function join is parameterized by the choice of the join between leaf nodes, which will be presented shortly. Given a sound over-approximation D ∈ D of the reachable environments and two partial decision trees t_1, t_2 ∈ T_NIL, join first calls unification (cf. Line 13) and then calls the auxiliary function join-aux. The latter collects the set C ∈ P(C) (initially equal to γ_C(D), cf. Line 14) of linear constraints encountered up to the leaf nodes (cf. Lines 8-9), which are joined by the chosen operator (cf. Line 5). In case it encounters an empty tree, join-aux favors the possibly non-empty one (cf. Lines 2-3).
We define the approximation join operator ⋎_F and the computational join operator ⊔_F between leaf nodes. The approximation join is the least upper bound for the approximation order (cf. Figure 5.3a and Equation 5.2.7), and the computational join is the least upper bound for the computational order (cf. Figure 5.3b and Equation 5.2.8). These operators are parameterized by a numerical abstraction D ∈ D, which represents the linear constraints satisfied along the path to the leaf nodes (cf. Line 5 of Algorithm 5).
The approximation and computational join differ when joining defined and undefined leaf nodes. In this case, the approximation join is defined as follows:
⊥_F ⋎_F[D] f ≜ ⊥_F   (f ∈ F \ {⊤_F})
f ⋎_F[D] ⊥_F ≜ ⊥_F   (f ∈ F \ {⊤_F})
⊤_F ⋎_F[D] f ≜ ⊤_F   (f ∈ F \ {⊥_F})
f ⋎_F[D] ⊤_F ≜ ⊤_F   (f ∈ F \ {⊥_F})   (5.2.13)
and the computational join is defined as follows:
⊥_F ⊔_F[D] f ≜ f   (f ∈ F)
f ⊔_F[D] ⊥_F ≜ f   (f ∈ F)
⊤_F ⊔_F[D] f ≜ ⊤_F   (f ∈ F)
f ⊔_F[D] ⊤_F ≜ ⊤_F   (f ∈ F)   (5.2.14)
In particular, note that the approximation join is undefined when joining ⊥_F-leaves and ⊤_F-leaves and always favors the undefined leaf nodes. Instead, given two defined leaf nodes f_1, f_2 ∈ F \ {⊥_F, ⊤_F}, their approximation join f_1 ⋎_F[D] f_2 and their computational join f_1 ⊔_F[D] f_2 coincide and they are defined as their least upper bound f ∈ F \ {⊥_F, ⊤_F} or, when such a least upper bound does not exist, ⊤_F:
f_1 ⋎_F[D] f_2 ≜ f if f ∈ F \ {⊥_F, ⊤_F}, and ⊤_F otherwise   (5.2.15)
f_1 ⊔_F[D] f_2 ≜ f if f ∈ F \ {⊥_F, ⊤_F}, and ⊤_F otherwise   (5.2.16)
where f ≜ λρ ∈ γ_D(D). max{f_1(ρ(X_1), ..., ρ(X_k)), f_2(ρ(X_1), ..., ρ(X_k))}.
As an instance, let us consider two affine functions f_1, f_2 ∈ F_A \ {⊥_F, ⊤_F}. Let V ∉ X be a special variable not appearing in any program. We define the hypograph of a given affine function f ∈ F_A \ {⊥_F, ⊤_F} within a given numerical abstraction D ∈ D as f[D]↓ ≜ {(X_1, ..., X_k, V) | γ_C(D), V ≤ f(X_1, ..., X_k)}. The approximation join f_1 ⋎_F[D] f_2 and computational join f_1 ⊔_F[D] f_2 of f_1 and f_2 are identical and defined as the affine function f ∈ F_A \ {⊥_F, ⊤_F} whose hypograph f[D]↓ is the convex hull of the hypographs f_1[D]↓ and f_2[D]↓ of f_1 and f_2, respectively; when such a function does not exist, the result is ⊤_F:
f_1 ⋎_F[D] f_2 ≜ f if f[D]↓ = convex-hull{f_1[D]↓, f_2[D]↓}, and ⊤_F otherwise   (5.2.17)
f_1 ⊔_F[D] f_2 ≜ f if f[D]↓ = convex-hull{f_1[D]↓, f_2[D]↓}, and ⊤_F otherwise   (5.2.18)
To clarify, let us consider the following example:
Example 5.2.5 Let f_1, f_2 ∈ F_A \ {⊥_F, ⊤_F} be two affine functions whose domain of definition D ∈ D is represented by the empty set of linear constraints C ≜ γ_C(D) = ∅. Then, since there is no affine function which is the least upper bound of f_1 and f_2, the result of the join is ⊤_F.
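For a single variable over a bounded interval, the convex-hull construction of Equations 5.2.17 and 5.2.18 reduces to taking, at both endpoints of the interval, the larger of the two values and drawing the chord through them. The OCaml sketch below implements only this special case, with floating-point coefficients and hypothetical names; over an unbounded domain it returns a top leaf unless the two functions are parallel, in the spirit of Example 5.2.5.

(* Sketch of the join of two affine leaves in one variable over a bounded
   interval [lo, hi]: the least affine function above both is the chord
   through the pointwise maxima at the endpoints. Illustrative only. *)
type leaf1 = Bot | Aff of float * float (* x maps to m*x + q *) | Top

let eval m q x = m *. x +. q

let join_affine ~(lo : float) ~(hi : float) (f1 : leaf1) (f2 : leaf1) : leaf1 =
  match f1, f2 with
  | Bot, _ | _, Bot -> Bot                          (* approximation join favors bottom *)
  | Top, _ | _, Top -> Top
  | Aff (m1, q1), Aff (m2, q2) ->
      if Float.is_finite lo && Float.is_finite hi && lo < hi then
        let ylo = Float.max (eval m1 q1 lo) (eval m2 q2 lo) in
        let yhi = Float.max (eval m1 q1 hi) (eval m2 q2 hi) in
        let m = (yhi -. ylo) /. (hi -. lo) in        (* chord slope *)
        Aff (m, ylo -. m *. lo)
      else if m1 = m2 then Aff (m1, Float.max q1 q2) (* parallel: shift the constant *)
      else Top                                       (* no affine upper bound *)

let () =
  match join_affine ~lo:0. ~hi:4. (Aff (1., 0.)) (Aff (-1., 4.)) with
  | Aff (m, q) -> Printf.printf "join: %.1f*x + %.1f\n" m q   (* 0.0*x + 4.0 *)
  | _ -> print_endline "undefined"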
[Figure 5.6: another example of the join of affine functions, over the variables x_1 and x_2 (panels (a)-(c)).]
[Algorithm 6: Tree Approximation Join. a-join(D, t_1, t_2) returns join(⋎_F, D, t_1, t_2).]
Another example of the join of affine functions is proposed in Figure 5.6.
The approximation join ⋎_T and the computational join ⊔_T of decision trees are implemented by Algorithm 6 and Algorithm 7, respectively: the functions a-join of Algorithm 6 and c-join of Algorithm 7 call the function join of Algorithm 5 choosing respectively the approximation join ⋎_F and the computational join ⊔_F between leaf nodes.
Example 5.2.7 Let X ≜ {x}, let D be the intervals abstract domain ⟨B, ⊑_B⟩, let C be the interval constraints auxiliary abstract domain ⟨C_B/≡_C, <_C⟩, and let F be the affine functions auxiliary abstract domain ⟨F_A, ⪯_F⟩. We consider the decision trees represented in Figure 5.4a and Figure 5.4b; their approximation join is the decision tree represented in Figure 5.7.
[Algorithm 7: Tree Computational Join. c-join(D, t_1, t_2) returns join(⊔_F, D, t_1, t_2).]
[Figure 5.7: the decision tree resulting from the approximation join of Example 5.2.7, with decision nodes x ≥ 0 and x + 1 ≥ 0 and the leaf λx. x + 4.]
Meet. The meet of decision trees represents a ranking function defined over the intersection of their partitions. It is implemented by Algorithm 8: the function meet, given a sound over-approximation D ∈ D of the reachable environments and two partial decision trees t_1, t_2 ∈ T_NIL, calls unification (cf. Line 12) and then calls the auxiliary function meet-aux. The latter collects the set C ∈ P(C) (initially equal to γ_C(D), cf. Line 13) of linear constraints encountered up to the leaf nodes (cf. Lines 8-9), which are joined by the approximation join operator ⋎_F (cf. Line 4). However, unlike Algorithm 6, meet-aux favors empty trees over non-empty trees (cf. Line 2).
Unary Operators
We now define the unary operators for handling skip instructions, backward variable assignments and program tests on decision trees.
Skip. The step operator STEP_T : T → T for handling skip instructions is implemented by Algorithm 9: the function step, given a decision tree t ∈ T, descends along the paths of the decision tree (cf. Line 5, where a decision node is mapped to NODE{t.c} : step(t.l); step(t.r)) up to a leaf node, where the leaves step operator STEP_F is invoked (cf. Line 3). The operator STEP_F : F → F, given a function f ∈ F \ {⊥_F, ⊤_F}, simply increments its constant to take into account that one more execution step is needed before termination; undefined leaf nodes are left unaltered:
STEP_F(⊥_F) ≜ ⊥_F
STEP_F(f) ≜ λX_1, ..., X_k. f(X_1, ..., X_k) + 1
STEP_F(⊤_F) ≜ ⊤_F   (5.2.19)
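The STEP operator is simple enough to transcribe directly; the OCaml sketch below, with illustrative type names, walks the tree and adds one to the constant of every defined leaf, as in Equation 5.2.19.

(* Sketch of the STEP operator (Algorithm 9 and Equation 5.2.19): one more
   program execution step is needed before termination. *)
type constr = { coeffs : int array; const : int }
type leaf = Bot | Affine of int array * int | Top
type tree = Leaf of leaf | Node of constr * tree * tree

let step_leaf : leaf -> leaf = function
  | Bot -> Bot
  | Affine (m, q) -> Affine (m, q + 1)     (* increment the constant *)
  | Top -> Top

let rec step : tree -> tree = function
  | Leaf f -> Leaf (step_leaf f)
  | Node (c, l, r) -> Node (c, step l, step r)

let () =
  let t = Node ({ coeffs = [| 1 |]; const = 0 },
                Leaf (Affine ([| 0 |], 1)), Leaf Bot) in
  match step t with
  | Node (_, Leaf (Affine (_, q)), _) -> Printf.printf "new constant: %d\n" q  (* 2 *)
  | _ -> ()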
The following result proves that, for a skip instruction ^l skip, given a sound over-approximation R ∈ D of τ_I(l) and a sound over-approximation D ∈ D of τ_I(f⟦^l skip⟧), the step operator STEP_T is a sound over-approximation of the termination semantics τ_Mt⟦^l skip⟧ defined in Equation 4.3.1:
Lemma 5.2.8 τ_Mt⟦^l skip⟧(γ_T[D]t) ⪯ γ_T[R](STEP_T(t)).
Proof. See Appendix A.3.
⌅
Tree Pruning. The remaining decision tree unary operators rely on Algorithm 10 for pruning a decision tree with respect to a given set of linear constraints. Only the subtrees whose paths from the root of the decision tree satisfy these constraints are preserved, while the other subtrees are pruned and substituted with empty trees. The function prune takes as input a decision tree t ∈ T, a set C ∈ P(C) of linear constraints representing an over-approximation of the reachable environments, and a set J ∈ P(C) of linear constraints that need to be added to the decision tree in order to prune it. When J is empty (cf. Line 2), prune accumulates in C the linear constraints encountered along the paths (cf. Lines 8-9) up to a leaf node (cf. Line 3), possibly removing constraints that are redundant (cf. Line 5) or whose negation is redundant (cf. Line 6) with respect to C. When J is not empty (cf. Line 11), the linear constraints in J are added to the decision tree in descending order with respect to <_C. Note that not all constraints in J are in canonical form, that is, J ∉ P(C/≡_C): at each iteration a linear constraint j ∈ C is extracted from J (cf. Line 12), which is the largest constraint in J with respect to <_C when comparing the constraints in canonical form.
Example 5.2.9 Let J ≜ {x − 1 ≥ 0, −x + 5 ≥ 0}. Note that −x + 5 ≥ 0 is not in canonical form, since its negation x − 6 ≥ 0 is larger with respect to <_C. However, when comparing x − 1 ≥ 0 and −x + 5 ≥ 0, their canonical forms are compared and thus max J is the linear constraint −x + 5 ≥ 0.
Then, the function prune possibly adds a decision node for the linear constraint j or its negation ¬j (cf. Lines 13-19), or continues the descent along the paths of the decision tree (cf. Lines 20-34). In the first case, prune tests j and ¬j for redundancy with respect to C (cf. Lines 14-15): when ¬j is redundant with respect to C the whole decision tree is pruned (cf. Line 15); otherwise, prune adds a decision node for j while pruning its right subtree (cf. Line 17), if j is already in canonical form (cf. Line 16), or it adds a decision node for ¬j while pruning its left subtree (cf. Line 19), if ¬j is the canonical form of j (cf. Line 18). In the second case, prune accumulates in C the encountered linear constraints (cf. Lines 24-25), possibly removing redundant decision nodes (cf. Lines 21-22) and pruning the decision tree when the encountered linear constraints coincide with those appearing in J (cf. Lines 27-30) or their negations (cf. Lines 31-34).
Example 5.2.10 Let X ≜ {x}, let D be the intervals abstract domain ⟨B, ⊑_B⟩, let C be the interval constraints auxiliary abstract domain ⟨C_B/≡_C, <_C⟩, and let F be the affine functions auxiliary abstract domain ⟨F_A, ⪯_F⟩. We consider the decision tree represented in Figure 5.7 from Example 5.2.7, and the set of constraints J ≜ {−x + 1 ≥ 0}. Let d ≜ ⊤_B and let C ≜ γ_C(d) = ∅.
[Figure 5.8: the decision tree of Figure 5.7 pruned with J, with decision nodes x − 2 ≥ 0, x ≥ 0 and x + 1 ≥ 0, NIL subtrees and the leaf λx. x + 4.]
The function prune adds a decision node for the negation x − 2 ≥ 0 of −x + 1 ≥ 0, since −x + 1 ≥ 0 is not in canonical form (cf. Line 18), and prunes the left subtree (cf. Line 19). Then, it continues the descent along the paths of the decision tree, without removing decision nodes. The result is the decision tree represented in Figure 5.8.
Assignments.
A variable assignment might impact some linear constraints within the decision nodes as well as some functions within the leaf nodes.
We define the operator B-ASSIGN_C to handle backward variable assignments by manipulating linear constraints within the decision nodes by means of the underlying numerical abstract domain ⟨D, ⊑_D⟩:
B-ASSIGN_C⟦X := aexp⟧D ≜ λc. α_C(B-ASSIGN_D⟦X := aexp⟧D(γ_C({c})))   (5.2.20)
The operator B-ASSIGN C J X := aexp K : D ! C ! P(C) takes as input a numerical abstraction D 2 D of the reachable environments at the initial control point of the instruction and a linear constraint c 2 C, and produces a set C 2 P(C) of linear constraints that need to be substituted to c in the decision tree. It is often the case that the set C contains a single linear constraint. However, when variable assignments and program tests cannot be exactly represented in the numerical abstract domain, the output set C contains multiple linear constraints, as shown by the following example. We now define the operator B-ASSIGN F to handle backward variable assignments within the leaf nodes of a decision tree. The operator B-ASSIGN F J X := aexp K : D ! D ! F ! F, given a numerical abstraction d 2 D of the reachable environments at the initial control point of the instructions, a numerical abstraction D 2 D of the linear constraints accumulated along the path to the leaf node, and a function f 2 F \ {? F , > F }, substitutes the arithmetic expression aexp to the variable X with the function, and increments the constant of the resulting function to take into account that one more program execution step is needed before termination; when such function does not exist, the result is > F ; undefined leaf nodes are left unaltered:
B-ASSIGN_F⟦X := aexp⟧d[D]⊥_F ≜ ⊥_F
B-ASSIGN_F⟦X := aexp⟧d[D]f ≜ F if F ∈ F \ {⊥_F, ⊤_F}, and ⊤_F otherwise
B-ASSIGN_F⟦X := aexp⟧d[D]⊤_F ≜ ⊤_F   (5.2.21)
where F(X_1, ..., X, ..., X_k) ≜ max{f(ρ(X_1), ..., v, ..., ρ(X_k)) + 1 | ρ ∈ γ_D(R), v ∈ ⟦aexp⟧ρ} and R ≜ B-ASSIGN_D⟦X := aexp⟧d(D).
Note that, all possible outcomes of the backward assignment are taken into account when substituting the arithmetic expression aexp to the variable X.
As an instance, given an affine function f ∈ F_A \ {⊥_F, ⊤_F}, the result of the backward assignment is the affine function f′ ∈ F_A \ {⊥_F, ⊤_F} whose hypograph f′[R]↓ within R is the convex hull of the hypograph F[R]↓ of F; when such an affine function does not exist, the result is ⊤_F:
B-ASSIGN_F⟦X := aexp⟧d[D]f ≜ f′ if f′[R]↓ = convex-hull{F[R]↓}, and ⊤_F otherwise   (5.2.22)
To clarify, consider the following example:
[Algorithm 11: Tree Assignment. On a leaf node, assign-aux⟦X := aexp⟧ returns LEAF : B-ASSIGN_F⟦X := aexp⟧D[α_C(C)]t.f (cf. Line 3); on a decision node, it performs the assignment on the constraint and its negation, joining the subtrees when both resulting constraint sets are empty; the main function assign⟦X := aexp⟧(D, t) returns assign-aux⟦X := aexp⟧(D, t, γ_C(D)).]
Example 5.2.12 Let X ≜ {x} and let D be the intervals abstract domain ⟨B, ⊑_B⟩. We consider the affine function f ∈ F_A \ {⊥_F, ⊤_F} defined as f(x) = x + 1 and the backward assignment x := x + [1, 2]: the result of the assignment is the affine function λx. x + 4.
The operator B-ASSIGN_T⟦X := aexp⟧ for handling backward assignments within decision trees is implemented by Algorithm 11: the function assign⟦X := aexp⟧, given a sound over-approximation D ∈ D of the reachable environments before the assignment and a decision tree t ∈ T, calls the auxiliary function assign-aux⟦X := aexp⟧, which performs the assignment on each linear constraint along the paths in the decision tree (cf. Lines 4-21), and accumulates the encountered constraints into a set C ∈ P(C) (initially equal to γ_C(D), cf. Line 24), up to the leaf nodes, where the assignment is performed by the operator B-ASSIGN_F defined in Equation 5.2.21 (cf. Line 3).
In particular, for each linear constraint c 2 C appearing in the decision trees, the auxiliary function assign-auxJ X := aexp K performs the assignment on both c and its negation ¬c (cf. Lines 5-6) by means of the backward assignment operator B-ASSIGN C defined in Equation 5.2.20. Then, assign-auxJ X := aexp K possibly (cf. Line 15) calls prune to add the resulting sets of constraints I 2 P(C) and J 2 P(C) to the decision tree (cf. Lines 17-19). In case both I and J are empty (cf. Line 7), it means that neither c nor ¬c exist anymore and thus the subtrees of the decision tree are joined by the approximation join A-JOIN (cf. Line 10). Note that, the function prune introduces empty trees. However, they disappear when the subtrees are joined. Indeed, since I and J are sets of constraints resulting from the assignment on complementary linear constraints, they identify adjacent (or, due to non-determinism, overlapping) partitions. In case I is empty and J is an unsatisfiable set of constraints (cf. Line 11), it means that ¬c is no longer satisfiable and thus only the left subtree of the decision tree is kept (cf. Line 12). Similarly, in case I is an unsatisfiable set of constraints and J is empty (cf. Line 13), it means that c is no longer satisfiable and thus only the right subtree of the decision tree is kept (cf. Line 14). When both I and J are unsatisfiable sets of constraints, an error is raised (cf. Line 21).
Example 5.2.13 Let X ≜ {x}, let D be the intervals abstract domain ⟨B, ⊑_B⟩, let C be the interval constraints auxiliary abstract domain ⟨C_B/≡_C, <_C⟩, and let F be the affine functions auxiliary abstract domain ⟨F_A, ⪯_F⟩. We consider the decision tree represented in Figure 5.9a, where α, β ∈ T are leaf nodes, and the backward assignment x := x + [1, 2]. Let d ≜ ⊤_B.
The function assign-aux collects the set C ≜ {−x + 2 ≥ 0, x − 1 ≥ 0} of encountered linear constraints up to the leaf node LEAF : λx. x + 1, where B-ASSIGN_F performs the assignment (cf. Line 3). From Example 5.2.12, the result of the assignment is the leaf node LEAF : λx. x + 4. Similarly, the assignment is performed on the leaf nodes α and β. Then, the function assign-aux performs the assignment on x − 1 ≥ 0 and its negation −x ≥ 0 (cf. Lines 5-6).
[Figure 5.9: Tree assignment on the decision tree (a) of Example 5.2.13, with decision nodes x − 3 ≥ 0 and x − 1 ≥ 0 and leaves α, β and λx. x + 1. The resulting decision tree, with decision nodes x − 2 ≥ 0, x − 1 ≥ 0 and x ≥ 0 and leaves joined with λx. x + 4, is represented in (b).]
The assignment yields the sets of constraints I ≜ {x + 1 ≥ 0} and J ≜ {−x − 1 ≥ 0}, respectively. Note that, because of non-determinism within the variable assignment, the partitions identified by I and J overlap when x = −1. The leaf node LEAF : λx. x + 4 is pruned with I (cf. Line 17) and the result of the assignment to β is pruned with J (cf. Line 19); the resulting decision trees, respectively represented in Figure 5.4a and Figure 5.4b, are joined by the approximation join (cf. Line 20), yielding the decision tree represented in Figure 5.7 (cf. Example 5.2.7).
Finally, assign-aux performs the assignment on x − 3 ≥ 0 and its negation −x + 2 ≥ 0, yielding the sets of constraints I ≜ {x − 1 ≥ 0} and J ≜ {−x + 1 ≥ 0}, respectively. Note that, because of non-determinism, the partitions identified by I and J overlap when x = 1. The result of the assignment to α is pruned with I and the decision tree represented in Figure 5.7 is pruned with J; the resulting decision trees, namely NODE{x − 1 ≥ 0} : (the result of the assignment to α); NIL and the tree of Figure 5.8 (cf. Example 5.2.10), are joined by the approximation join, yielding the decision tree represented in Figure 5.9b.
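The arithmetic of Example 5.2.13 can be replayed on a single leaf. The OCaml sketch below handles only the special case of a backward assignment x := x + [a, b] on a one-variable affine leaf, which is all this example needs; the function name and types are hypothetical.

(* Sketch of the backward assignment of Example 5.2.13 on one affine leaf:
   for  x := x + [a, b]  the new leaf maps x to
   max { f(x + v) + 1 | a <= v <= b }. One variable, constant interval only. *)
type leaf1 = Bot | Aff of int * int  (* x maps to m*x + q *) | Top

let b_assign_add_interval ~(a : int) ~(b : int) (f : leaf1) : leaf1 =
  match f with
  | Bot -> Bot
  | Top -> Top
  | Aff (m, q) ->
      (* f(x + v) + 1 = m*x + (m*v + q + 1); the max over v in [a, b] is
         reached at b when m >= 0 and at a otherwise. *)
      let v = if m >= 0 then b else a in
      Aff (m, m * v + q + 1)

let () =
  match b_assign_add_interval ~a:1 ~b:2 (Aff (1, 1)) with   (* leaf x + 1 *)
  | Aff (m, q) -> Printf.printf "result: %d*x + %d\n" m q    (* 1*x + 4 *)
  | _ -> print_endline "undefined"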
In absence of run-time errors, the following result proves that, for a variable assignment ^l X := aexp, given a sound over-approximation R ∈ D of τ_I(l) and a sound over-approximation D ∈ D of τ_I(f⟦^l X := aexp⟧), the backward assignment operator B-ASSIGN_T⟦X := aexp⟧ is a sound over-approximation of τ_Mt⟦^l X := aexp⟧ defined in Equation 4.3.2:
Lemma 5.2.14 τ_Mt⟦^l X := aexp⟧(γ_T[D]t) ⪯ γ_T[R]((B-ASSIGN_T⟦X := aexp⟧R)(t)).
Tests. In case of a program test, all paths that are feasible in the decision tree are preserved, while all paths that can never be followed according to the tested condition are discarded. We define the operator FILTER_C⟦bexp⟧ : D → P(C), which takes as input a numerical abstraction D ∈ D and, by means of the underlying numerical abstract domain ⟨D, ⊑_D⟩, produces a set C ∈ P(C) of linear constraints that need to be added to prune the decision tree:
FILTER_C⟦bexp⟧D ≜ α_C(FILTER_D⟦bexp⟧D)   (5.2.23)
The operator FILTER_T⟦bexp⟧ : D → T → T for handling tests within decision trees is implemented by Algorithm 12: the function filter⟦bexp⟧, given a sound over-approximation D ∈ D of the reachable environments before the test and a decision tree t ∈ T, reasons by induction on the structure of the boolean expression bexp. In particular, when bexp is a conjunction of two boolean expressions bexp_1 and bexp_2 (cf. Line 6), the resulting decision trees are merged by the function MEET defined in Algorithm 8 (cf. Line 7). Similarly, when bexp is a disjunction of two boolean expressions bexp_1 and bexp_2 (cf. Line 8), the resulting decision trees are merged by the approximation join A-JOIN defined in Algorithm 6 (cf. Line 9). Instead, when bexp is a comparison of arithmetic expressions aexp_1 ⋈ aexp_2 (cf. Line 10), the function step (cf. Algorithm 9) is invoked (cf. Line 12). Then, the resulting decision tree is pruned (cf. Line 12) with the set of constraints J (cf. Line 11) produced by the operator FILTER_C defined in Equation 5.2.23.
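The recursion scheme of Algorithm 12 over boolean expressions can be sketched as an OCaml functor over whatever tree operations are available; the module type TREE_OPS and all names below are assumptions, and comparisons are kept opaque.

(* Sketch of the FILTER recursion: conjunctions go to the meet, disjunctions
   to the approximation join, comparisons prune the stepped tree. *)
type bexp =
  | And of bexp * bexp
  | Or of bexp * bexp
  | Cmp of string                      (* a comparison aexp1 vs aexp2, opaque *)

module type TREE_OPS = sig
  type tree
  type num
  val meet : num -> tree -> tree -> tree
  val a_join : num -> tree -> tree -> tree
  val step : tree -> tree
  val prune : tree -> string -> tree   (* prune with the constraints of a comparison *)
end

module MakeFilter (T : TREE_OPS) = struct
  let rec filter (d : T.num) (b : bexp) (t : T.tree) : T.tree =
    match b with
    | And (b1, b2) -> T.meet d (filter d b1 t) (filter d b2 t)
    | Or (b1, b2) -> T.a_join d (filter d b1 t) (filter d b2 t)
    | Cmp cmp -> T.prune (T.step t) cmp
end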
In absence of run-time errors, the following result provides a sound overapproximation of ⌧ Mt J if l bexp then stmt 1 else stmt 2 fi K defined in Equation 4.3.3, given a sound over-approximation R 2 D of ⌧ I (l) and a sound over-approximation D 2 D of ⌧ I (f J if l bexp then stmt 1 else stmt 2 fi K):
Lemma 5.2.15 Let F♮_1[t] ≜ (FILTER_T⟦bexp⟧R)(τ♮_Mt⟦stmt_1⟧t) and F♮_2[t] ≜ (FILTER_T⟦not bexp⟧R)(τ♮_Mt⟦stmt_2⟧t). Then, for all t ∈ T, we have:
τ_Mt⟦if ^l bexp then stmt_1 else stmt_2 fi⟧(γ_T[D]t) ⪯ γ_T[R](F♮_1[t] ⋎_T[R] F♮_2[t])
Proof. See Appendix A.3.
⌅
Note that, FILTER T introduces empty trees. However, they disappear when the decision trees
F \ 1 [t] and F \ 2 [t] are joined. Indeed, F \ 1 [t] and F \ 2 [t] are obtained from complementary boolean expressions.
Similarly, the next result provides, for a loop while ^l bexp do stmt od, a sound over-approximation φ♮_Mt of φ_Mt defined in Equation 4.3.5, given sound over-approximations R ∈ D of τ_I(l) and D ∈ D of τ_I(f⟦while ^l bexp do stmt od⟧):
Lemma 5.2.16 Let F♮_1[x] ≜ (FILTER_T⟦bexp⟧R)(τ♮_Mt⟦stmt⟧x) and F♮_2[t] ≜ (FILTER_T⟦not bexp⟧R)(t). Then, given t ∈ T, for all x ∈ T we have:
φ_Mt(γ_T[R]x) ⪯ γ_T[R](φ♮_Mt(x))
where φ♮_Mt(x) ≜ F♮_1[x] ⋎_T[R] F♮_2[t].
Proof. See Appendix A.3. ∎
Widening
The widening operator ▽_T requires a more thorough discussion. The widening is allowed more freedom than the other operators, in the sense that it is temporarily allowed to under-approximate the value of the termination semantics τ_Mt (cf. Section 4.3) or over-approximate its domain of definition, or both, in contrast with the approximation order ⪯ (cf. Equation 4.2.12). This is necessary in order to extrapolate a ranking function over the environments on which it is not yet defined. The only requirement is that, when the iteration sequence with widening is stable for the computational order, its limit is a sound abstraction of the termination semantics with respect to the approximation order. In the following, we discuss in detail how the widening guarantees the soundness of the analysis.
As running example, let us consider Figure 5.10. In Figure 5.10a we depict a transition system and the value of the termination semantics for the well-founded part of its transition relation. In Figure 5.10b we represent the concretization of a possible iterate of the analysis: we assume that the first iterate has individuated the states marked with value zero, the second iterate has individuated the states marked with value one, and the widening at the third iterate has extrapolated the ranking function over the states marked with value two. In this case the abstraction both under-approximates the value of the termination semantics (on the second state from the left -case B) and over-approximates its domain of definition (including the first and the last state from the left -case A and C, respectively). In case A, the nonterminating loop is outside the domain of definition of the unsound abstract function, while in case C the loop is inside. The analysis continues iterating until all these discrepancies are solved and, in the following, we explain and justify why this works in general.
For a loop while l bexp do stmt od, given a sound over-approximation R 2 D of ⌧ I (l), we define the iteration sequence with widening as follows:
y_0 ≜ ⊥_T
y_{n+1} ≜ y_n, if φ♮_Mt(y_n) ⊑_T[R] y_n ∧ φ♮_Mt(y_n) ⪯_T[R] y_n
y_{n+1} ≜ y_n ▽_T φ♮_Mt(y_n), otherwise   (5.2.24)
In the following, its limit is denoted by lfp♮ φ♮_Mt. Note that the usual condition for halting the iterations, φ♮_Mt(y_n) ⊑_T[R] y_n, has been strengthened to φ♮_Mt(y_n) ⊑_T[R] y_n ∧ φ♮_Mt(y_n) ⪯_T[R] y_n in order to force the iterations to continue in case of discrepancies as in Figure 5.10b. Indeed, in situations like case A and case B, the usual halting condition φ♮_Mt(y_n) ⊑_T[R] y_n can be satisfied but the iteration sequence should not halt.
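The iteration sequence of Equation 5.2.24 is a plain loop once the tree operations are given. The OCaml sketch below, with hypothetical module names, iterates the abstract loop functional until it is stable for both orders, widening otherwise; convergence of the loop relies entirely on the widening, as discussed in the rest of this section.

(* Sketch of the iteration sequence with widening of Equation 5.2.24. *)
module type LOOP = sig
  type tree
  val bottom : tree
  val phi : tree -> tree                       (* abstract loop functional *)
  val comp_leq : tree -> tree -> bool          (* computational order *)
  val approx_leq : tree -> tree -> bool        (* approximation order *)
  val widen : tree -> tree -> tree             (* widening operator *)
end

module Lfp (L : LOOP) = struct
  let compute () : L.tree =
    let rec iterate y =
      let fy = L.phi y in
      if L.comp_leq fy y && L.approx_leq fy y then y
      else iterate (L.widen y fy)
    in
    iterate L.bottom
end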
The widening ▽_T is implemented by Algorithm 13: the function widen, given a sound over-approximation D ∈ D of the reachable environments and two decision trees t_1, t_2 ∈ T, calls in order the functions caseA (cf. Line 2), left-unification (cf. Line 3), caseBorC (cf. Line 4), and widen-aux (cf. Line 5). In the following, we detail all these functions.
Check for Case A. First, the widening ▽_T has to detect situations like case A in Figure 5.10b, where at some iterate y_i in Equation 5.2.24 the domain of the termination semantics has been over-approximated, including environments from which a non-terminating loop is reachable.
The following result proves, in situations like case A, that, given t ∈ T, a sound over-approximation R ∈ D of τ_I(l) and a sound over-approximation D ∈ D of τ_I(f⟦while ^l bexp do stmt od⟧), an additional iterate of φ♮_Mt removes (a subset of) the incriminated environments from the abstraction:
Lemma 5.2.17 Let w ≜ τ_Mt⟦while ^l bexp do stmt od⟧(γ_T[D]t) and, for some iterate y_i, let dom(γ_T[R]y_i) \ dom(w) ≠ ∅. Then, in case A of Figure 5.10b, we have dom(γ_T[R]φ♮_Mt(y_i)) \ dom(w) ⊊ dom(γ_T[R]y_i) \ dom(w).
Proof. Let dom(γ_T[R]y_i) \ dom(w) ≠ ∅, for some iterate y_i. It means that there exists at least an environment ρ ∈ dom(γ_T[R]y_i) such that the state ⟨l, ρ⟩ ∈ Σ belongs to a non-terminating program execution trace σ ∈ Σ^ω. We assumed to be in case A of Figure 5.10b. Thus, let ⟨l′, ρ′⟩ ∈ Σ, where ρ′ ∉ dom(γ_T[R]y_i), be a state reachable from ⟨l, ρ⟩ on σ. Without loss of generality, we can assume that ⟨l′, ρ′⟩ is the immediate successor of ⟨l, ρ⟩: ⟨⟨l, ρ⟩, ⟨l′, ρ′⟩⟩ ∈ τ. Thus, by definition of φ_Mt (cf. Equation 4.3.5), we have ρ ∉ dom(φ_Mt(γ_T[R]y_i)) and, by Lemma 5.2.16 and by definition of the approximation order ⪯ (cf. Equation 4.2.12), we have dom(γ_T[R]φ♮_Mt(y_i)) ⊆ dom(φ_Mt(γ_T[R]y_i)), which implies ρ ∉ dom(γ_T[R]φ♮_Mt(y_i)). This concludes the proof, for case A, that dom(γ_T[R]φ♮_Mt(y_i)) \ dom(w) ⊊ dom(γ_T[R]y_i) \ dom(w).
∎
Therefore, note that, in situations like case A, the iterate φ♮_Mt(y_i) is a decision tree with more undefined leaf nodes than y_i, that is, y_i ⪯_T[R] φ♮_Mt(y_i). Moreover, when the newly added undefined leaf nodes are ⊥_F-leaves, we have φ♮_Mt(y_i) ⊑_T[R] y_i.
Thus, the widening ▽_T has to detect whether some defined leaf node in y_i has become a ⊥_F-leaf in φ♮_Mt(y_i) and, in that case, turn it into a ⊤_F-leaf, in order to have y_i ⊑_T[R] φ♮_Mt(y_i) but also to prevent successive iterates from mistakenly including again the same environment in their domain. This check is implemented by Algorithm 14: the main function caseA, given a sound over-approximation D ∈ D of the reachable environments and two decision trees t_1, t_2 ∈ T, calls unification for tree unification (cf. Line 11) and then calls the auxiliary function caseA-aux. The latter collects the set C ∈ P(C) (initially equal to γ_C(D), cf. Line 12) of the linear constraints encountered up to the leaf nodes (cf. Lines 6-7), which are compared by the computational order ⊑_F[α_C(C)] defined in Equation 5.2.8 and Figure 5.3b (cf. Line 3) and, in case, turned into a ⊤_F-leaf (cf. Line 4).
Domain Widening - Left Unification. In order to ensure convergence, the widening ▽_T uses Algorithm 15 to limit the size of the decision trees, and thus avoid infinite sequences of partition refinements. Algorithm 15 is a slight modification of Algorithm 1: the main function left-unification is parameterized by the choice of the join between leaf nodes and, given a sound over-approximation D ∈ D of the reachable environments and two decision trees t_1, t_2 ∈ T, calls the auxiliary function left-unification-aux in order to force the structure of t_1 on t_2. Note that in this way, unlike Algorithm 1, Algorithm 15 might lose information.
The function left-unification-aux accumulates into a set C ∈ P(C) (initially equal to γ_C(D), cf. Line 33) the linear constraints encountered along the paths in the first decision tree (cf. Lines 17-18, Lines 27-28), possibly adding decision nodes to the second tree (cf. Line 19) or removing decision nodes from the second tree (cf. Line 10), or removing constraints that are redundant (cf. Line 5, Line 12, Line 22) or whose negation is redundant (cf. Line 7, Line 14, Line 24) with respect to C. When removing a decision node from the second tree, the left and right subtrees are joined by means of join (cf. Line 10): in order to extrapolate the domain of the ranking function over environments on which it is not yet defined, the widening ▽_T invokes left-unification choosing the computational join ⊔_F (cf. Line 3 of Algorithm 13). For this reason, since ⊥_F-leaves might disappear when joining subtrees, it is important to check for situations like case A before the left unification.
Remark 5.2.18 Note that, in order to limit the loss of precision, it is possible to adapt the idea presented in [START_REF] Cousot | Comparing the Galois Connection and Widening/Narrowing Approaches to Abstract Interpretation[END_REF] to design a domain widening parametric in a finite set of thresholds. It is also possible to integrate the state-of-the-art precise widening operators proposed in [START_REF] Bagnara | Precise Widening Operators for Convex Polyhedra[END_REF]. In [START_REF] Vijay | Conflict-Driven Abstract Interpretation for Conditional Termination[END_REF], we used conflict-driven learning to obtain an improvement of the precision that is similar in spirit to using a more precise domain widening. We plan to investigate further possibilities for the domain widening as part of our future work.
Check for Case B and C. Next, the widening O T has to detect situations like case B in Figure 5.10b, where at some iterate y i the value of the termination semantics has been under-approximated, and situations like case C in Figure 5.10b, where the domain of the termination semantics has been over-approximating including a non-terminating loop.
The following result proves, in situations like case B, that, given t ∈ T, a sound over-approximation R ∈ D of τ_I(l) and a sound over-approximation D ∈ D of τ_I(f⟦while ^l bexp do stmt od⟧), an additional iterate of φ♮_Mt strictly increases the value of the abstraction for the incriminated environments:
Lemma 5.2.19 Let w ≜ τ_Mt⟦while ^l bexp do stmt od⟧(γ_T[D]t) and, for some iterate y_i, let γ_T[R]y_i(ρ) < w(ρ) for some ρ ∈ dom(w) ∩ dom(γ_T[R]y_i). Then, for all ρ ∈ dom(γ_T[R]φ♮_Mt(y_i)) ∩ dom(w), we have γ_T[R]y_i(ρ) < γ_T[R]φ♮_Mt(y_i)(ρ).
Proof. For some iterate y_i, let γ_T[R]y_i(ρ) < w(ρ) for some environment ρ ∈ dom(w) ∩ dom(γ_T[R]y_i). It means that the state ⟨l, ρ⟩ ∈ Σ belongs to a program execution trace σ ∈ Σ^+ whose length from ⟨l, ρ⟩ is greater than γ_T[R]y_i(ρ). Let ⟨l′, ρ′⟩ ∈ Σ, where ρ′ ∈ dom(w) ∩ dom(γ_T[R]y_i) and w(ρ′) ≤ γ_T[R]y_i(ρ′), be a state reachable from ⟨l, ρ⟩ on σ. Without loss of generality, we can assume that σ is the longest program execution trace from ⟨l, ρ⟩ and that ⟨l′, ρ′⟩ is the immediate successor of ⟨l, ρ⟩: ⟨⟨l, ρ⟩, ⟨l′, ρ′⟩⟩ ∈ τ. Thus, by definition of φ_Mt (cf. Equation 4.3.5), by Lemma 5.2.16 and by definition of ⪯ (cf. Equation 4.2.12), we have w(ρ) ≤ φ_Mt(γ_T[R]y_i)(ρ) ≤ γ_T[R]φ♮_Mt(y_i)(ρ). This concludes the proof that γ_T[R]y_i(ρ) < γ_T[R]φ♮_Mt(y_i)(ρ).
∎
The following result proves, in situations like case C, that, given t ∈ T, a sound over-approximation R ∈ D of τ_I(l) and a sound over-approximation D ∈ D of τ_I(f⟦while ^l bexp do stmt od⟧), an additional iterate of φ♮_Mt strictly increases the value of the abstraction for the incriminated environments:
Lemma 5.2.20 Let w ≜ τ_Mt⟦while ^l bexp do stmt od⟧(γ_T[D]t) and, for some iterate y_i, let dom(γ_T[R]y_i) \ dom(w) ≠ ∅. Then, for all ρ ∈ dom(γ_T[R]φ♮_Mt(y_i)) \ dom(w) in case C of Figure 5.10b, we have γ_T[R]y_i(ρ) < γ_T[R]φ♮_Mt(y_i)(ρ).
Proof. Let dom(γ_T[R]y_i) \ dom(w) ≠ ∅, for some iterate y_i. It means that there exists at least an environment ρ ∈ dom(γ_T[R]y_i) such that the state ⟨l, ρ⟩ ∈ Σ belongs to a non-terminating program execution trace σ ∈ Σ^ω. We assume to be in case C of Figure 5.10b. For all ρ ∈ dom(γ_T[R]y_i) \ dom(w), by definition of φ_Mt (cf. Equation 4.3.5), by Lemma 5.2.16, and by definition of ⪯ (cf. Equation 4.2.12), we have γ_T[R]y_i(ρ) < φ_Mt(γ_T[R]y_i)(ρ) ≤ γ_T[R]φ♮_Mt(y_i)(ρ). This concludes the proof for case C. ∎
Note that an additional iterate of φ♮_Mt is not able to distinguish between an under-approximation of the value of the termination semantics, as in case B, and an over-approximation of its domain of definition, as in case C.
Therefore, the widening ▽_T has to detect whether the value of some defined leaf node in y_i has increased in φ♮_Mt(y_i) and, in such a case, turn it into a ⊤_F-leaf in order to prevent an indefinite growth. This check is implemented by Algorithm 16: the main function caseBorC, given a sound over-approximation D ∈ D of the reachable environments and two decision trees t_1, t_2 ∈ T, calls the auxiliary function caseBorC-aux, which collects into a set C ∈ P(C) (initially equal to γ_C(D), cf. Line 12) the linear constraints encountered along the paths up to the leaf nodes (cf. Lines 7-8), which are compared (cf. Line 3) and, in case, turned into a ⊤_F-leaf (cf. Line 4).
Remark 5.2.21 Note that, it is also possible to allow the ranking function to increase using a finite set of thresholds instead of returning directly > F . In this way, in situations like case C, it simply delays the iteration that must return > F . However, in situations like case B, it may discover the correct ranking function and avoid losing precision.
Value Widening. Finally, the widening ▽_T extrapolates the value of the ranking function over the environments on which it was not defined before the domain widening. The heuristic that we chose consists in widening leaf nodes with respect to their adjacent leaf nodes. The rationale is that programs often loop over consecutive values of a variable: to infer the shape of the ranking function over the environments on which it was not defined, we use the information available in adjacent partitions of its domain of definition.
We define an extrapolation operator H F [D 1 , D 2 ] between two defined leaf nodes f 1 , f 2 2 F \ {? F , > F }, given the numerical abstractions D 1 2 D and D 2 2 D representing their path from the root of the decision tree.
In particular, given two affine functions f_1, f_2 ∈ F_A \ {⊥_F, ⊤_F}, the operator extrapolates the affine function f ∈ F_A \ {⊥_F, ⊤_F} whose hypograph f[D_2]↓ within D_2 is the convex hull of the hypographs f′[D_2]↓ of the affine functions f′ ∈ F_A \ {⊥_F, ⊤_F} lying on the boundaries of convex-hull{f_1[D_1]↓, f_2[D_2]↓}; when such an affine function does not exist, the result is ⊤_F:
f_1 H_F[D_1, D_2] f_2 ≜ f if f[D_2]↓ = convex-hull{f′[D_2]↓ | f′ ∈ F}, and ⊤_F otherwise   (5.2.25)
where F ≜ {f ∈ F_A \ {⊥_F, ⊤_F} | f[D_1 ⊔_D D_2]↓ ⊇ convex-hull{f_1[D_1]↓, f_2[D_2]↓}} is the minimal set of affine functions whose hypographs within D_1 ⊔_D D_2 determine convex-hull{f_1[D_1]↓, f_2[D_2]↓}, that is, ∀f ∈ F : ∄f′ ∈ F : f[D_1 ⊔_D D_2]↓ ⊋ f′[D_1 ⊔_D D_2]↓ ⊇ convex-hull{f_1[D_1]↓, f_2[D_2]↓} and ⋂_{f ∈ F} f[D_1 ⊔_D D_2]↓ = convex-hull{f_1[D_1]↓, f_2[D_2]↓}.
To clarify, let us consider the following example:
Example 5.2.22 Two affine functions f_1, f_2 ∈ F_A \ {⊥_F, ⊤_F} and their extrapolation f_1 H_F[D_1, D_2] f_2.
The heuristic for widening adjacent leaf nodes within decision trees is implemented by Algorithm 17: the function widen-aux in Algorithm 17 is given as input two decision trees t_1, t_2 ∈ T, a copy t ∈ T of t_1 and a set C ∈ P(C) initially equal to γ_C(D), where D ∈ D is a sound over-approximation of the reachable environments (cf. Line 5 of Algorithm 13). It accumulates into C the linear constraints encountered along the paths up to the leaf nodes (cf. Lines 36-37). The leaf nodes defined in t_2 and undefined in t_1 (cf. Line 29) are extrapolated with respect to their adjacent defined leaf nodes (cf. Line 31) by means of the extrapolation operator H_F (cf. Line 32).
The adjacent leaf nodes are determined by the function adjacent (cf. Line 31). Note that, not all linear constraints appearing in a decision tree necessarily appear along a path in the decision tree. For this reason, given a decision tree t 2 T and a set C 2 P(C) of linear constraints encountered along the path to a leaf node, the function adds to a set K 2 P(C) (initially equal to C, cf. Line 17) all the redundant constraints (cf. Lines 20-21) that appear in the decision tree (cf. Line 18) but are missing from the path represented by C (cf. Line 19). Then, the adjacent leaf nodes are sought negating one by one in K the linear constraint in C, and are collected into a set A (cf. Line 23).
The search for a leaf node determined by a set of constraints J ∈ P(C) is conducted by the function leaf: the function extracts from J the linear constraints in decreasing order (cf. Line 11) and uses them to choose a path in the decision tree while accumulating into an initially empty set C ∈ P(C) the linear constraints effectively encountered (cf. Lines 12-14). In case the path leads to a defined leaf node, the function returns a set containing the leaf node together with the numerical abstract domain element α_C(C) ∈ D representing its path (cf. Line 8). Otherwise, the function returns an empty set (cf. Line 9).
The linear constraints appearing in a decision tree are determined by the function constraints by visiting in-order the decision tree (cf. Line 4).
Remark 5.2.23 Note that, besides establishing relationships between adjacent leaf nodes, other value widening heuristics are possible. An example would be establishing relationships between leaf nodes based on the parity of some variable, or based on numerical relationships between variables. We plan to investigate these possibilities as part of our future work.
The following result provides, for a loop while l bexp do stmt od, a sound over-approximation of ⌧ Mt J while l bexp do stmt od K defined in Equation 4.3.4, given a decision tree t 2 T , a sound over-approximation R 2 D of ⌧ I (l), and a sound over-approximation D 2 D of ⌧ I (f J while l bexp do stmt od K):
Lemma 5.2.24 Let φ♮_Mt(x) ≜ F♮_1[x] ⋎_T[R] F♮_2[t] as defined in Lemma 5.2.16, for any given t ∈ T. Then, we have:
τ_Mt⟦while ^l bexp do stmt od⟧(γ_T[D]t) ⪯ γ_T[R](lfp♮ φ♮_Mt)
where lfp♮ φ♮_Mt is the limit of the iteration sequence with widening y_0 ≜ ⊥_T, y_{n+1} ≜ y_n ▽ φ♮_Mt(y_n) (cf. Equation 5.2.24).
Proof. See Appendix A.3. ∎
Abstract Definite Termination Semantics
The operators of the decision trees abstract domains can now be used to define the abstract definite termination semantics. Note that pure backward analysis is blind with respect to the program initial states. For this reason, in the following, we assume to have, for each program control point l ∈ L, a sound numerical over-approximation R ∈ D of the reachable environments τ_I(l) ∈ P(E): τ_I(l) ⊆ γ_D(R) (cf. Section 3.4).
In Figure 5.12 we define the semantics ⌧ \ Mt J stmt K : T ! T , for each program instruction stmt. Each function ⌧ \ Mt J stmt K : T ! T takes as input a decision tree over-approximating the ranking function corresponding to the final control point of the instruction, and outputs a decision tree defined over a subset of the reachable environments at iJ stmt K, which over-approximates the ranking function corresponding to the initial control point of the instruction.
The abstract termination semantics ⌧ \ Mt J prog K 2 T of a program prog outputs the decision tree over-approximating the ranking function corresponding
τ♮_Mt⟦ℓ skip⟧t ≝ STEP_T(t)
τ♮_Mt⟦ℓ X := aexp⟧t ≝ (B-ASSIGN_T⟦X := aexp⟧R)(t)
τ♮_Mt⟦if ℓ bexp then stmt₁ else stmt₂ fi⟧t ≝ F♮₁[t] ∨_T[R] F♮₂[t]
    F♮₁[t] ≝ (FILTER_T⟦bexp⟧R)(τ♮_Mt⟦stmt₁⟧t)
    F♮₂[t] ≝ (FILTER_T⟦not bexp⟧R)(τ♮_Mt⟦stmt₂⟧t)
τ♮_Mt⟦while ℓ bexp do stmt od⟧t ≝ lfp♮ φ♮_Mt
    φ♮_Mt(x) ≝ F♮[x] ∨_T[R] (FILTER_T⟦not bexp⟧R)(t)
    F♮[x] ≝ (FILTER_T⟦bexp⟧R)(τ♮_Mt⟦stmt⟧x)
τ♮_Mt⟦stmt₁ stmt₂⟧t ≝ τ♮_Mt⟦stmt₁⟧(τ♮_Mt⟦stmt₂⟧t)
Figure 5.12: Abstract termination semantics of instructions stmt.
to the initial program control point i⟦prog⟧ ∈ L. It is defined taking as input the leaf node LEAF: λX₁, …, X_k. 0 as:

Definition 5.3.1 (Abstract Termination Semantics) The abstract termination semantics τ♮_Mt⟦prog⟧ ∈ T of a program prog is:

τ♮_Mt⟦prog⟧ = τ♮_Mt⟦stmt ℓ⟧ ≝ τ♮_Mt⟦stmt⟧(LEAF: λX₁, …, X_k. 0)     (5.3.1)

where the abstract termination semantics τ♮_Mt⟦stmt⟧ ∈ T → T of each program instruction stmt is defined in Figure 5.12.
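As a reading aid, the backward propagation of Figure 5.12 can be sketched in OCaml; the TREES signature below is a hypothetical shorthand for the decision tree operators of this chapter (the parameter R, the widening delay, and the distinction between approximation and computational iterates are omitted).

    type aexp = Num of int | Var of string | Add of aexp * aexp | Random
    type bexp = Lt of aexp * aexp | Not of bexp
    type stmt =
      | Skip
      | Assign of string * aexp
      | If of bexp * stmt * stmt
      | While of bexp * stmt
      | Seq of stmt * stmt

    module type TREES = sig
      type t
      val zero_leaf : t                       (* LEAF: λX1, ..., Xk. 0 *)
      val bot : t                             (* totally undefined tree *)
      val step : t -> t                       (* STEP_T *)
      val b_assign : string -> aexp -> t -> t (* B-ASSIGN_T (R fixed) *)
      val filter : bexp -> t -> t             (* FILTER_T (R fixed) *)
      val a_join : t -> t -> t                (* approximation join *)
      val widen : t -> t -> t                 (* widening *)
      val leq : t -> t -> bool                (* convergence check *)
    end

    module Backward (T : TREES) = struct
      (* τ♮_Mt⟦stmt⟧ : T.t -> T.t, computed backwards as in Figure 5.12 *)
      let rec stmt_sem (s : stmt) (t : T.t) : T.t =
        match s with
        | Skip -> T.step t
        | Assign (x, e) -> T.b_assign x e t
        | If (b, s1, s2) ->
            T.a_join (T.filter b (stmt_sem s1 t))
                     (T.filter (Not b) (stmt_sem s2 t))
        | While (b, body) ->
            let phi x =
              T.a_join (T.filter b (stmt_sem body x)) (T.filter (Not b) t) in
            let rec iterate x =
              let x' = T.widen x (phi x) in
              if T.leq x' x then x else iterate x' in
            iterate T.bot
        | Seq (s1, s2) -> stmt_sem s1 (stmt_sem s2 t)

      (* τ♮_Mt⟦prog⟧: start from the zero leaf at the final control point *)
      let prog_sem (s : stmt) : T.t = stmt_sem s T.zero_leaf
    end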
The following result proves the soundness of the abstract termination semantics τ♮_Mt⟦prog⟧ ∈ T with respect to the termination semantics τ_Mt⟦prog⟧ ∈ E ⇀ O, given a sound numerical over-approximation R ∈ D of the reachable environments τ_I(i⟦prog⟧):

Theorem 5.3.2 τ_Mt⟦prog⟧ ≼ γ_T[R](τ♮_Mt⟦prog⟧)

Proof (Sketch).
The proof follows from the soundness of the operators of the decision trees abstract domain (cf. Lemma 5.2.8, Lemma 5.2.14, Lemma 5.2.15, Lemma 5.2.16, and Lemma 5.2.24) used for the definition of τ♮_Mt⟦prog⟧ ∈ T. ∎

In particular, the abstract termination semantics provides sufficient preconditions for ensuring definite termination of a program for a given over-approximation R ∈ D of the set of initial states I ⊆ Σ:

Corollary 5.3.3 A program must terminate for execution traces starting from a given set of initial states γ_D(R) if γ_D(R) ⊆ dom(γ_T[R](τ♮_Mt⟦prog⟧)).
Examples. In the following, we recall the examples introduced at the beginning of the chapter and we present the fully detailed analyses of the programs using the abstract domain of decision trees. We present the analyses using interval constraints based on the intervals abstract domain (cf. Section 3.4.1) for the decision nodes, and affine functions for the leaf nodes (cf. Equation 5.2.6).
The starting point is the zero function at the program final control point:
3: LEAF: λx. 0
The ranking function is then propagated backwards towards the program initial control point taking the loop into account:
1: NODE{x ≥ 0}: (LEAF: ⊥_F); (LEAF: λx. 1)

The first iterate of the loop is able to conclude that the program terminates in at most one program step if the loop condition x ≥ 0 is not satisfied. Then, at program control point 2, the operator ASSIGN_T replaces the program variable x with the expression −2x + 10 within the decision tree:

2: NODE{x − 6 ≥ 0}: (LEAF: λx. 2); (LEAF: ⊥_F)

Note that ASSIGN_T has also increased the value of the ranking function in order to count one more program execution step to termination. The second iterate of the loop concludes that the program terminates in at most three execution steps, when 6 ≤ x and thus the loop is executed only once, and in at most one program step, when the loop is not entered:

1: NODE{x − 6 ≥ 0}: (LEAF: λx. 3); (NODE{x ≥ 0}: (LEAF: ⊥_F); (LEAF: λx. 1))
In this particular case, there is no need for convergence acceleration and the analysis is rather precise: at the sixth iteration, a fixpoint is reached providing the decision tree shown in Figure 5.1.
1: NODE{r − 1 ≥ 0}:
       NODE{x − y + r − 1 ≥ 0}:
           NODE{2x − 2y + r − 1 ≥ 0}: (LEAF: ⊥_F); (LEAF: λx. λy. λr. 7);
           LEAF: λx. λy. λr. 4
       LEAF: λx. λy. λr. 1
The widening that we defined in Section 5.2.4 extrapolates the ranking function on the partitions over which it is not yet defined:
1: NODE{r − 1 ≥ 0}:
       NODE{x − y + r − 1 ≥ 0}: (LEAF: λx. λy. λr. 7); (LEAF: λx. λy. λr. 4);
       LEAF: λx. λy. λr. 1

Note, in particular, that the ranking function is now temporarily defined even when x ≥ y and the program does not terminate. Indeed, the fourth iterate of the while loop identifies a situation like case C in Figure 5.10b. The fixpoint represents the following piecewise-defined ranking function:

λρ.  1            if ρ(r) ≤ −1
     4            if 0 ≤ ρ(r) ∧ ρ(r) ≤ ρ(y) − ρ(x)
     undefined    otherwise

which proves that the program is terminating in at most 4 program execution steps if the initial value of the program variable r is smaller than or equal to the difference between the initial value of the program variable y and the initial value of the program variable x.
In Remark 5.2.18, we mentioned various possibilities to improve the precision of the widening operator. In particular, an adaptation of the domain widening using the evolving rays technique proposed in [Bagnara et al., Precise Widening Operators for Convex Polyhedra] yields the more precise decision tree shown in Figure 5.2. The same result can be obtained using conflict-driven learning [D'Silva and Urban, Conflict-Driven Abstract Interpretation for Conditional Termination].
Related Work
We conclude the chapter with a discussion of the most relevant related work.
Decision Trees. The use of (binary) decision trees (Binary Decision Diagrams [Bryant], in particular) for verification has been devoted a large body of work, especially in the area of timed-system and hybrid-system verification [Jea02, LPWY99, MLAH99, etc.].
In this thesis, we focus on common program analysis applications and, in this sense, our decision trees abstract domain is mostly related to the ones presented in [CCM10, GC10a]: both ours and these abstract domains are based on decision trees extended with linear constraints. However, the abstract domains proposed in [CCM10, GC10a] are designed for the disjunctive refinement of numerical abstract domains, while our abstract domain is designed specifically in order to manipulate ranking functions. Moreover, while our abstract domain is based on binary decision trees, in [CCM10] the choices at the decision nodes may differ at each node and their number is not bounded a priori.
In general, despite all the available alternatives [BCC + 10, CCM10, GR98, GC10a, GC10b, SISG06, etc.], it seems to us that in the literature there is no disjunctive abstract domain well-suited for program termination. A first (minor) reason is the fact that most of the existing disjunctive abstract domains are designed specifically for forward analyses while ranking functions are inferred through backward analysis (cf. Section 5.3). However, the main reason is that adapting existing widening operators to ranking functions is not obvious due to the coexistence of an approximation and computational ordering in the termination semantics domain (cf. Section 4.3 and Section 5.2.4).
Termination. In the recent past, termination analysis has benefited from many research advances and powerful termination provers have emerged over the years [BCF13, CPR06, GSKT06, HHLP13, LQC15].
Many results in this area [BCC + 07, CPR06, etc.] are based on the transition invariants method introduced in [Podelski and Rybalchenko, Transition Invariants]. In particular, the Terminator prover [Cook et al., Terminator: Beyond Safety] is based on an algorithm for the iterative construction of transition invariants. This algorithm searches within a program for single paths representing potential counterexamples to termination, computes a ranking function for each one of them individually (as in [Podelski and Rybalchenko, A Complete Method for the Synthesis of Linear Ranking Functions]), and combines the obtained ranking functions into a single termination argument. Our approach differs in that it aims at proving termination for all program paths at the same time, without resorting to counterexample-guided analysis. In particular, we emphasize that our approach is able to deal with arbitrary control structures, and thus it is not limited to simple loops as [Podelski and Rybalchenko, A Complete Method for the Synthesis of Linear Ranking Functions] or to non-nested loops as [Bradley et al., Linear Ranking with Reachability]. Moreover, it avoids the cost of explicit checking for the well-foundedness of the termination argument. The approach presented in [Tsitovich et al., Loop Summarization and Termination Analysis] shares similar motivations, but prefers loop summarization to iterative fixpoint computation with widening, as considered in this thesis.

The majority of the literature is based on the indirect use of invariants for proving termination [ADFG10, BCC + 07, BCF13, CS02, LORCR13, etc.]. On the other hand, our approach infers and manipulates ranking functions directly as invariants associated to program control points. In this sense, [ADFG10] is the closest approach to ours: the invariants are pre-computed, but each program point is assigned a ranking function (that also provides information on the execution time in terms of execution steps), as in our approach.
The strength of our approach is being an abstract interpretation of a complete semantics for termination. For any given terminating program, it is always possible to design an abstraction able to prove its termination. In particular, this is stronger than fixing a priori an incomplete reasoning method that can miss terminating programs. For example, various methods to synthesize ranking functions based on linear programming [ADFG10, CS01, PR04a, etc.] are complete for programs with rational-valued variables, but not with integer-valued variables. Indeed, as we have seen in Example 5.1.1 at the beginning of the chapter, there are programs that terminate over the integers but do not terminate over the rationals.

Finally, in the literature, we found only a few works that have addressed the problem of automatically inferring preconditions for program termination. In [CGLA + 08], the authors proposed a method based on preconditions generating ranking functions from potential ranking functions, while our preconditions are inherently obtained from the inferred ranking functions as the set of program states for which the ranking function is defined. Thus, our preconditions are derived by under-approximation of the set of terminating states, as opposed to the approaches presented in [Ganty et al.; Massé] where the preconditions are derived by (complementing an) over-approximation of the non-terminating states.

The abstract domain that we have presented in this chapter has been instantiated only with affine functions, while several methods in the literature use more powerful lexicographic ranking functions. This limitation will be addressed in the next chapter (Chapter 6) using ordinal-valued ranking functions. We postpone the comparison between our work and related work based on lexicographic ranking functions to the end of the next chapter.
Ordinal-Valued Ranking Functions
In this chapter, we address the limitation to natural-valued functions of the decision trees abstract domain presented in Chapter 5. In particular, we propose an auxiliary abstract domain based on ordinal-valued functions. More specifically, these functions are polynomials in ω, where the polynomial coefficients are natural-valued functions of the program variables. The abstract domain is parametric in the choice of the maximum degree of the polynomials, and the types of functions used as polynomial coefficients.
Ordinal-Valued Ranking Functions
In many cases, a single natural-valued ranking function is not sufficient. In particular, this is the case in the presence of unbounded non-determinism, as we have seen in Example 4.1.2. In order to further motivate the need for ordinal-valued ranking functions, we propose the following example. Each loop iteration either decrements the value of y, or decrements the value of x and resets the value of y, until either program variable becomes less than or equal to zero. There is a non-deterministic choice between the branches of the conditional if instruction at program control point 2, and the value of the variable y is chosen non-deterministically at program control point 4. The program always terminates, whatever the initial values for x and y, and whatever the non-deterministic choices during execution.

[Figure 6.1: the program states at control point 1, labeled with the value of the most precise ranking function (in loop iterations): 0 when x ≤ 0 or y ≤ 0; y when x = 1 and y ≥ 1; ω · (x − 1) + (y − 1) when x ≥ 2 and y ≥ 1.]
In the graph of Figure 6.1, each node represents a possible state of the program at program control point 1, and each edge represents a loop iteration.
The nodes with a double outline are blocking states with no successor. We define a ranking function for the program following the intuition behind the definite termination semantics of Section 4.2.2: we start from the blocking states, where we assign the value 0 to the function; then, we follow the edges backwards, and for each state that we encounter (whose successors all belong to the domain of the function) we define the value of the ranking function as one plus the maximum of the values of the function over all successors of the state. Hence, we need a transfinite value whenever we encounter a state leading through unbounded non-determinism to program executions of arbitrary length. In particular, in this case, we need ordinal numbers for all states where x > 1 and y > 0.
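For a system with only bounded branching, this backward construction can be sketched directly; the following OCaml toy (representation and names are ours, not part of the formal development) computes such a ranking over a finite transition relation, and precisely cannot express the transfinite values forced by unbounded non-determinism.

    (* rank s = 0 for blocking states, otherwise 1 + max over the ranks of the
       successors; None when a cycle (an infinite execution) is reachable. *)
    let rec rank (succ : 'a -> 'a list) (visiting : 'a list) (s : 'a) : int option =
      if List.mem s visiting then None               (* infinite path: no rank *)
      else
        match succ s with
        | [] -> Some 0                               (* blocking state *)
        | ss ->
            List.fold_left
              (fun acc s' ->
                 match acc, rank succ (s :: visiting) s' with
                 | Some a, Some b -> Some (max a (b + 1))
                 | _ -> None)
              (Some 0) ss

    (* For example, on the two-state system 0 -> 1 -> (no successor): *)
    let () = assert (rank (fun s -> if s = 0 then [1] else []) [] 0 = Some 1)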
The analysis of the program using our decision trees abstract domain extended with ordinal-valued functions is proposed in Example 6.4.4.

Lexicographic Ranking Functions. It is also possible to prove the termination of the program of Example 6.1.1 using a lexicographic ranking function (x, y). Indeed, a lexicographic tuple (f_n, …, f₁, f₀) of natural numbers is an isomorphic representation of the ordinal ω^n · f_n + ⋯ + ω · f₁ + f₀ [MP96]. However, reasoning directly with lexicographic ranking functions poses the additional difficulty of finding an appropriate lexicographic order. Existing methods [BMS05a, CSZ13, etc.] use heuristics to explore the space of possible orders, which grows very large with the number of program variables. Instead, the coefficients f_n, …, f₁, f₀ (and thus their order) of our ordinal-valued ranking functions are automatically inferred by the analysis. Moreover, there exist programs for which there does not exist a lexicographic ranking function but there is a piecewise-defined ordinal-valued ranking function as considered in this chapter, and we will provide an example in Example 6.4.5. We refer to Section 6.5 at the end of the chapter for further discussion on the comparison between lexicographic and ordinal-valued ranking functions.
Ordinal Arithmetic
The theory of ordinals was introduced by Georg Cantor as the core of his set theory [Cantor]. We recall from Chapter 2 that the smallest ordinal is denoted by 0. The successor of an ordinal α is denoted by α + 1, or equivalently, by succ(α). A limit ordinal is an ordinal which is neither 0 nor a successor ordinal. In the following, we provide the definition and some properties of addition, multiplication and exponentiation on ordinals [Kunen].

Addition. Ordinal addition is defined by transfinite induction:

α + 0 ≝ α                        (zero case)
α + (β + 1) ≝ (α + β) + 1        (successor case)
α + λ ≝ ∪_{β<λ} (α + β)          (limit case)        (6.2.1)

Ordinal addition generalizes the addition of natural numbers. It is associative, i.e. (α + β) + γ = α + (β + γ), but not commutative, e.g. 1 + ω = ω ≠ ω + 1.

Multiplication. Ordinal multiplication is also defined inductively:

α · 0 ≝ 0                        (zero case)
α · (β + 1) ≝ (α · β) + α        (successor case)
α · λ ≝ ∪_{β<λ} (α · β)          (limit case)        (6.2.2)

Exponentiation. We define ordinal exponentiation by transfinite induction:

α^0 ≝ 1                          (zero case)
α^{β+1} ≝ (α^β) · α              (successor case)
α^λ ≝ ∪_{β<λ} (α^β)              (limit case)        (6.2.3)

Cantor Normal Form. Using ordinal arithmetic, we can build all ordinals up to ε₀ (i.e. the smallest ordinal such that ε₀ = ω^{ε₀}):

0, 1, 2, …, ω, ω + 1, ω + 2, …, ω · 2, ω · 2 + 1, ω · 2 + 2, …, ω², …, ω³, …, ω^ω, …

In the following, we use the representation of ordinals based on Cantor Normal Form [Kunen], i.e. every ordinal α > 0 can be uniquely written as

ω^{β₁} · n₁ + ⋯ + ω^{β_k} · n_k

where k is a natural number, the coefficients n₁, …, n_k are positive integers and the exponents β₁ > β₂ > ⋯ > β_k ≥ 0 are ordinal numbers. Throughout the rest of the thesis we will consider ordinal numbers only up to ω^ω.
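To make the arithmetic below ω^ω concrete, here is a small OCaml sketch of our own (the list-based representation is an assumption, not part of the formal development): an ordinal in Cantor Normal Form is the list of the natural coefficients of ω⁰, ω¹, ω², …, and addition absorbs the low-order terms of the left operand.

    (* Ordinals below omega^omega: [c0; c1; c2; ...] stands for
       ... + omega^2*c2 + omega*c1 + c0; e.g. [1; 2] is omega*2 + 1. *)
    type ordinal = int list

    let pad n a = a @ List.init (max 0 (n - List.length a)) (fun _ -> 0)

    (* Index of the leading term, or -1 if the ordinal is 0. *)
    let degree a =
      List.fold_left (fun (i, d) c -> (i + 1, if c > 0 then i else d)) (0, -1) a
      |> snd

    (* alpha + beta in CNF: the terms of alpha strictly below the leading
       degree of beta are absorbed, so 1 + omega = omega but omega + 1 <> omega. *)
    let add alpha beta =
      let n = max (List.length alpha) (List.length beta) in
      let a = pad n alpha and b = pad n beta in
      let k = degree b in
      List.mapi
        (fun i (ai, bi) -> if i > k then ai else if i = k then ai + bi else bi)
        (List.combine a b)

    let () =
      assert (add [1] [0; 1] = [0; 1]);   (* 1 + omega = omega *)
      assert (add [0; 1] [1] = [1; 1])    (* omega + 1 *)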
Decision Trees Abstract Domain
In the previous Chapter 5 we have presented the decision trees abstract domain T(D, C, F), parameterized by a numerical abstract domain D, an auxiliary abstract domain C for the decision nodes, and an auxiliary abstract domain F based on natural-valued functions for the leaf nodes (cf. Section 5.2). In the following, we address the limitations of F by means of an auxiliary abstract domain W(F) based on ordinal-valued functions. Thus, the decision trees abstract domain T(D, C, F) is lifted to T(D, C, W(F)).
Decision Trees
We now formally define the family of auxiliary abstract domains W. Then, we illustrate its integration within the family of abstract domains T.
Ordinal-Valued Functions Auxiliary Abstract Domain. The family of abstract domains W is a functor which lifts any auxiliary abstract domain F introduced in Section 5.2 to ordinal-valued functions. As F, it is dedicated to the manipulation of the leaf nodes of a decision tree with respect to the linear constraints satisfied along their paths from the root of the decision tree.
Let X ≝ {X₁, …, X_k} be the set of program variables. The elements of these abstract domains belong to the following set:

W ≝ {⊥_W} ∪ { Σ_i ω^i · f_i | f_i ∈ F \ {⊥_F, ⊤_F} } ∪ {⊤_W}     (6.3.1)

where F is defined in Equation 5.2.5, which consists of ordinal-valued functions of the program variables, plus the counterparts ⊥_W and ⊤_W of ⊥_F and ⊤_F, respectively. More specifically, the ordinal-valued functions are polynomials in ω (that is, ordinals in Cantor Normal Form), where the coefficients are natural-valued functions of the program variables belonging to F \ {⊥_F, ⊤_F}.

The maximum degree of the polynomials, in the following denoted by M, is another parameter of the abstract domain. The computational order ⊑_W and the approximation order ≼_W are parameterized by a numerical abstract domain element D ∈ D, which represents the linear constraints satisfied along the path to the leaf nodes. According to Lemma 4.2.22, they coincide when the compared leaf nodes are both defined:

Σ_i ω^i · f¹_i ≼_W[D] Σ_i ω^i · f²_i  ⟺  ∀ρ ∈ γ_D(D): Σ_i ω^i · f¹_i(ρ(X₁), …, ρ(X_k)) ≤ Σ_i ω^i · f²_i(ρ(X₁), …, ρ(X_k))     (6.3.2)

Σ_i ω^i · f¹_i ⊑_W[D] Σ_i ω^i · f²_i  ⟺  ∀ρ ∈ γ_D(D): Σ_i ω^i · f¹_i(ρ(X₁), …, ρ(X_k)) ≤ Σ_i ω^i · f²_i(ρ(X₁), …, ρ(X_k))     (6.3.3)

[Figure 6.2: the Hasse diagrams defining (a) the approximation order and (b) the computational order between defined leaf nodes p: Z^|X| → O and the undefined leaf nodes ⊥ and ⊤.]

Instead, when one or both leaf nodes are undefined, the approximation and computational orders are defined by the Hasse diagrams in Figure 6.2a and Figure 6.2b, respectively. Note that, as in Figure 5.3a and Figure 5.3b, ⊥_W-leaves and ⊤_W-leaves are incomparable and comparable, respectively.

We define the following concretization-based abstraction:

⟨E ⇀ O, ≼⟩ ⟵ γ_W[D] ⟵ ⟨W, ≼_W⟩

where D ∈ D represents the path to the leaf node. The concretization function γ_W: D → W → (E ⇀ O) is defined as follows:

γ_W[D]⊥_W ≝ ∅̇
γ_W[D](Σ_i ω^i · f_i) ≝ λρ ∈ γ_D(D). Σ_i ω^i · f_i(ρ(X₁), …, ρ(X_k))
γ_W[D]⊤_W ≝ ∅̇     (6.3.4)

As in Equation 5.2.9, ⊥_W and ⊤_W represent the totally undefined function.

The following result proves that, given D ∈ D, γ_W[D] is monotonic:

Lemma 6.3.1 ∀p₁, p₂ ∈ W: p₁ ≼_W[D] p₂ ⟹ γ_W[D]p₁ ≼ γ_W[D]p₂.
Proof.
See Appendix A.4. ∎

The decision trees of Chapter 5 are now built over ordinal-valued leaf nodes. Specifically, the decision trees now belong to the following set:
T ≝ {LEAF: p | p ∈ W} ∪ {NODE{c}: t₁; t₂ | c ∈ C, t₁, t₂ ∈ T}     (6.3.5)
where W is defined in Equation 6.3.1 and C is defined in Equation 5.2.1.
In the following, we lift to ordinal-valued functions all the abstract domain operators, focusing on the operators used to manipulate leaf nodes.
Binary Operators
The need for ordinals arising from non-deterministic boolean expressions (cf. Example 6.1.1) gets reflected into the binary operator for the computational and approximation join of decision tree.
Join. The join of decision trees is parameterized by the choice of the join between leaf nodes and yields a ranking function defined over the union of their partitions (cf. Algorithm 5). We lift Algorithm 5 to ordinal-valued functions by simply defining the join between ordinal-valued leaf nodes.
The approximation join ∨_W and the computational join ⊔_W are parameterized by a numerical abstraction D ∈ D, which represents the linear constraints satisfied along the path to both leaf nodes, after the decision tree unification (cf. Line 5 of Algorithm 5). They differ when joining defined and undefined leaf nodes. Specifically, the approximation join is defined as follows:

⊥_W ∨_W[D] p ≝ ⊥_W     p ∈ W \ {⊤_W}
p ∨_W[D] ⊥_W ≝ ⊥_W     p ∈ W \ {⊤_W}
⊤_W ∨_W[D] p ≝ ⊤_W     p ∈ W \ {⊥_W}
p ∨_W[D] ⊤_W ≝ ⊤_W     p ∈ W \ {⊥_W}     (6.3.6)

and the computational join is defined as follows:

⊥_W ⊔_W[D] p ≝ p       p ∈ W
p ⊔_W[D] ⊥_W ≝ p       p ∈ W
⊤_W ⊔_W[D] p ≝ ⊤_W     p ∈ W
p ⊔_W[D] ⊤_W ≝ ⊤_W     p ∈ W     (6.3.7)

[Algorithm 18 (w-join): join of two defined ordinal-valued leaf nodes p₁, p₂ ∈ W \ {⊥_W, ⊤_W} with respect to D ∈ D, parameterized by a join ∈ {∨_F, ⊔_F} between natural-valued leaf nodes; the result is accumulated into r ← Σ_i ω^i · (λX₁, …, X_k. 0) while iterating over the degree i from 0 up to M with a carry flag.]

In particular, as in Equation 5.2.13, the approximation join is undefined when joining ⊥_W-leaves and ⊤_W-leaves and always favors the undefined leaf nodes. Instead, the approximation and computational join between defined leaf nodes coincide. Algorithm 18 is a generic implementation parameterized by the choice of the join between natural-valued leaf nodes. Given an ordinal-valued function p ≝ Σ_i ω^i · f_i ∈ W \ {⊥_W, ⊤_W}, we write p[i] to denote the coefficient f_i. The join of two given ordinal-valued functions p₁, p₂ ∈ W \ {⊥_W, ⊤_W} is carried out in ascending powers of ω, joining the coefficients of terms with the same power of ω (cf. Line 7), up to the maximum degree M (cf. Line 6). In case the join of natural-valued leaf nodes yields ⊤_F (cf. Line 8), the function w-join sets the coefficient to equal the zero function (cf. Line 3) and propagates a carry of one execution step (cf. Line 9 and Line 12) to the unification of terms with next higher degree; unless the maximum degree M has been reached, in which case w-join returns ⊤_W (cf. Line 20).
To clarify, let us consider the following example: let p₁ ≝ ω · f¹₁ + f¹₀ and p₂ ≝ ω · f²₁ + f²₀, with f¹₁ ≝ λx. λy. x − 1 and f²₁ ≝ λx. λy. x, and with constant coefficients f¹₀ ≝ λx. λy. x and f²₀ ≝ λx. λy. y. There does not exist a natural-valued affine function which is the least upper bound of f¹₀ and f²₀ within D. Thus, we force their join to equal zero and we carry one to the join of f¹₁ ≝ x − 1 and f²₁ ≝ x, which becomes x + 1 (i.e., x after the join and x + 1 after propagating the carry). Thus, the result of the join of p₁ and p₂ is r ≝ ω · (x + 1).

Intuitively, whenever natural-valued functions are not sufficient, w-join naturally resorts to ordinal numbers: given two terms ω^k · f₁ and ω^k · f₂, forcing their join to equal zero and carrying one to the terms with next higher degree is equivalent to considering their join to be equal to ω and applying the limit case of ordinal multiplication (cf. Equation 6.2.2): ω^k · ω = ω^{k+1} · 1 + ω^k · 0 = ω^{k+1}.

The approximation join ∨_W and the computational join ⊔_W between defined leaf nodes are implemented by Algorithm 19 and Algorithm 20, respectively: the functions a-w-join of Algorithm 19 and c-w-join of Algorithm 20 call the function w-join of Algorithm 18 choosing respectively the approximation join ∨_F and the computational join ⊔_F between natural-valued leaf nodes.
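The carry mechanism can be sketched as follows in OCaml; the array-of-coefficients representation and the parameters join_f, zero_f, succ_f are our own hypothetical stand-ins for the operators on natural-valued leaves, with None playing the role of ⊤_F (and of ⊤_W for the overall result).

    (* A minimal sketch, not Algorithm 18 itself: an ordinal-valued leaf is an
       array of coefficients of omega^0, ..., omega^(M-1); join_f returns None
       when two coefficients have no sound (affine) upper bound. *)
    let w_join ~join_f ~zero_f ~succ_f (p1 : 'f array) (p2 : 'f array)
      : 'f array option =
      let m = Array.length p1 in                     (* maximum degree M *)
      let r = Array.make m zero_f in
      let rec go i carry =
        if i >= m then (if carry then None else Some r)  (* carry past M: top *)
        else
          match join_f p1.(i) p2.(i) with
          | None ->                (* no upper bound: reset to zero, carry one *)
              r.(i) <- zero_f;
              go (i + 1) true
          | Some f ->
              r.(i) <- (if carry then succ_f f else f);  (* propagate the carry *)
              go (i + 1) false
      in
      go 0 false

On the example above, the constant coefficients have no affine upper bound, so the constant term is reset to zero and the carry turns the coefficient of ω from x into x + 1, yielding ω · (x + 1).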
Unary Operators
In the following, we lift to ordinal-valued functions the unary operators for handling skip instructions and backward variable assignments. In particular, the assignment operator reflects the need for ordinals arising from non-deterministic assignments (cf. Example 4.1.2 and Example 6.1.1).

Skip. We lift the step operator for handling skip instructions (cf. Algorithm 9) by defining the step operator STEP_W for ordinal-valued leaves. The operator STEP_W: W → W, given an ordinal-valued function p ∈ W \ {⊥_W, ⊤_W}, simply invokes the step operator STEP_F for natural-valued leaves on the constant term of the polynomial; undefined leaf nodes are unaltered:

STEP_W(⊥_W) ≝ ⊥_W
STEP_W(Σ_i ω^i · f_i) ≝ STEP_F(f₀) + Σ_{i>0} ω^i · f_i = (Σ_i ω^i · f_i) + 1
STEP_W(⊤_W) ≝ ⊤_W     (6.3.8)
Assignments. We now define the operator B-ASSIGN W to handle backward assignments within ordinal-valued leaf nodes, in order to lift the operator B-ASSIGN T (cf. Algorithm 11) to ordinal-valued functions.
The operator B-ASSIGN_W⟦X := aexp⟧: D → D → W → W is implemented by Algorithm 21. The assignment, given a numerical abstraction d ∈ D of the reachable environments at the initial control point of the instruction and a numerical abstraction D ∈ D of the linear constraints accumulated along the path to the leaf node, is carried out in ascending powers of ω, invoking the B-ASSIGN_F operator on the coefficients (cf. Line 7), up to the maximum degree M (cf. Line 6). In case the assignment of natural-valued leaf nodes yields ⊤_F (cf. Line 8), the function w-assign sets the coefficient to equal the zero function (cf. Line 3) and carries one execution step (cf. Line 9 and Line 12) to the term with next higher degree; unless the maximum degree M has been reached, in which case w-assign returns ⊤_W (cf. Line 20). The operator STEP_W is invoked before returning (cf. Line 18) to take into account that one more program execution step is needed before termination.

To clarify, let us consider the backward non-deterministic assignment x := ? applied to p ≝ ω · (λx. λy. x) + (λx. λy. y): the coefficient of ω is reset to zero and a carry of one is propagated to the term with next higher degree ω². In fact, the non-deterministic assignment x := ? allows x (and consequently f₁) to take any value, but there does not exist a natural-valued affine function that properly abstracts all possible outcomes of the assignment. The resulting ordinal-valued function, after the invocation of STEP_W, is r ≝ ω² · 1 + (y + 1).
Widening
The widening operator, unlike the join and assignment operators, is not allowed to introduce ordinals of higher degree to avoid missing cases like case B and case C in Figure 5.10b, where the value of the abstract ranking function increases between iterates. We lift Algorithm 13 to ordinal-valued functions defining an extrapolation operator H W between ordinal-valued leaf nodes.
The extrapolation operator H_W[D₁, D₂] is implemented by Algorithm 22. The extrapolation, given the numerical abstractions D₁ ∈ D and D₂ ∈ D representing the path to the leaf nodes from the root of the decision tree, is carried out in ascending powers of ω, invoking the extrapolation operator H_F[D₁, D₂] on the coefficients (cf. Line 6), up to the maximum degree M (cf. Line 5). In case the extrapolation of natural-valued leaf nodes yields ⊤_F (cf. Line 7), the function w-widen returns ⊤_W (cf. Line 8).

Algorithm 22 (w-widen). Input: D₁, D₂ ∈ D, p₁, p₂ ∈ W \ {⊥_W, ⊤_W}.
 3:  r ← Σ_{i=1}^{M} ω^i · (λX₁, …, X_k. 0)
 4:  i ← 0
 5:  while i < M do        ▷ widening carried out in ascending powers of ω
 6:      f ← p₁[i] H_F[D₁, D₂] p₂[i]
 7:      if f = ⊤_F then
 8:          return ⊤_W
 9:      else
10:          r[i] ← f
11:          i ← i + 1
12:  return r
Abstract Definite Termination Semantics
The lifted operators of the decision trees abstract domain can now be used to lift to ordinal-valued functions the abstract termination semantics of a program instruction stmt, defined in Figure 5.12, and the abstract termination semantics of a program prog, defined in Definition 5.3.1.

The following result proves, for a program instruction stmt, the soundness of the abstract termination semantics τ♮_Mt⟦stmt⟧ with respect to the termination semantics τ_Mt⟦stmt⟧ defined in Section 4.3, given sound over-approximations R ∈ D of τ_I(i⟦stmt⟧) and D ∈ D of τ_I(f⟦stmt⟧):

Lemma 6.4.1 τ_Mt⟦stmt⟧(γ_T[D]t) ≼ γ_T[R](τ♮_Mt⟦stmt⟧t).
Proof.
See Appendix A.4.
∎
Similarly, the following result proves the soundness of the abstract termination semantics τ♮_Mt⟦prog⟧ ∈ T with respect to the termination semantics τ_Mt⟦prog⟧ ∈ E ⇀ O, given a sound numerical over-approximation R ∈ D of the reachable environments τ_I(i⟦prog⟧) at the initial program control point:

Theorem 6.4.2 τ_Mt⟦prog⟧ ≼ γ_T[R](τ♮_Mt⟦prog⟧)

Proof (Sketch).
The proof follows from the soundness of the operators of the decision trees abstract domain (cf. Lemma 6.4.1) used for the definition of τ♮_Mt⟦prog⟧ ∈ T. ∎

In particular, the abstract termination semantics provides sufficient preconditions for ensuring definite termination of a program for a given over-approximation R ∈ D of the set of initial states I ⊆ Σ:

Corollary 6.4.3 A program must terminate for execution traces starting from a given set of initial states γ_D(R) if γ_D(R) ⊆ dom(γ_T[R](τ♮_Mt⟦prog⟧)).
Examples. In the following, we recall the example introduced at the beginning of the chapter and we describe in some detail the analysis of the program using the abstract domain of decision trees. Then, we propose further examples to illustrate the expressiveness of the abstract domain. We present the analysis of the program using interval constraints based on the intervals abstract domain (cf. Section 3.4.1) for the decision nodes, and ordinal-valued functions for the leaf nodes (cf. Equation 6.3.1).
The starting point is the zero function at the program final control point:
6: LEAF: λx. λy. 0
The ranking function is then propagated backwards towards the program initial control point, iterating through the while loop. We use a widening delay of three iterations. At the fourth iteration, the decision tree at program control point 1 represents the following piecewise-defined ranking function:

λρ.  1            if ρ(x) ≤ 0 ∨ ρ(y) ≤ 0
     3y + 2       if ρ(x) = 1
     undefined    otherwise

where 3y + 2 is the result of the widening between adjacent leaf nodes with consecutive values for y (i.e., between LEAF: λx. λy. 5 within ρ(x) = 1 ∧ ρ(y) = 1 and LEAF: λx. λy. 8 within ρ(x) = 1 ∧ ρ(y) = 2). Ordinals appear for the first time at program control point 4 due to the non-deterministic assignment to y:

λρ.  2            if ρ(x) ≤ 0
     ω + 9        if ρ(x) = 1
     undefined    otherwise

In the first case the value of the function is simply increased to count one more program step before termination, but (since y can now have any value) its domain is modified forgetting all constraints on y (i.e., y ≤ 0). In the second case, 3y + 2 becomes ω + 9 due to the non-deterministic assignment.
At the seventh iteration, the decision tree associated with program control point 1 represents the following ranking function:

λρ.  1                              if ρ(x) ≤ 0 ∨ ρ(y) ≤ 0
     3y + 2                         if ρ(x) = 1
     ω + (3y + 9)                   if ρ(x) = 2
     undefined                      otherwise

where ω + (3y + 9) is the result of the widening between LEAF: λx. λy. ω + 12 within ρ(x) = 2 ∧ ρ(y) = 1 and LEAF: λx. λy. ω + 15 within ρ(x) = 2 ∧ ρ(y) = 2. The widening is carried out in ascending powers of ω: from the constants 12 and 15, the widening infers the value 3y + 9; then, since the coefficients of ω are equal to one, the inferred coefficient is again one. Thus, the result of the widening is ω + (3y + 9) within ρ(x) = 2. Finally, at the eleventh iteration, the analysis reaches a fixpoint:

λρ.  1                              if ρ(x) ≤ 0 ∨ ρ(y) ≤ 0
     3y + 2                         if ρ(x) = 1
     ω + (3y + 9)                   if ρ(x) = 2
     ω · (x − 1) + (7x + 3y − 5)    otherwise

where 3y + 2 and ω + (3y + 9) are particular cases (within ρ(x) = 1 and ρ(x) = 2, respectively) of ω · (x − 1) + (7x + 3y − 5) and are explicitly listed only due to the amount of widening delay we used. The ranking function proves that the program is always terminating, whatever the initial values for
x and y, and whatever the non-deterministic choices during execution. The reason why we obtain a different and more complex ranking function with respect to Figure 6.1 is that we count the number of program execution steps whereas, for convenience of presentation, in Figure 6.1 we simply count the number of loop iterations.
Example 6.4.5 Let us consider the following program:

while ¹(x ≠ 0 ∧ 0 < y) do
    if ²(0 < x) then
        if ³( ? ) then
            …

When x is positive, each loop iteration either decrements the value of x, or decrements the value of x and resets the value of y; when x is negative, it either increments the value of x, or decrements the value of y and resets the value of x to any value (possibly positive). The loop exits when x is equal to zero or y is less than or equal to zero.

Note that there does not exist a lexicographic ranking function for the loop. In fact, the variables x and y can be alternatively reset to any value at each loop iteration: the value of y is reset at the program control point 5, while the value of x is reset at the control point 10.

[Figure 6.3: the program states at control point 1, labeled with the maximum number of loop iterations needed to reach a blocking state. There is an edge from any node where x has value k > 0 (and y > 0) to all nodes where x has value k − 1 (and y has any value); there is also an edge from any node where y has value h > 0 (and x < 0) to all nodes where y has value h − 1 (and x has any value). The highlighted nodes require ordinals greater than ω².]
Nonetheless, the program always terminates, regardless of the initial values for x and y, and regardless of the non-deterministic choices taken during execution. Let us consider the graph in Figure 6.3. Whenever y is reset to any value, we move towards the blocking states decreasing the value of x, and whenever x is reset to any value, we move towards the blocking states decreasing the value of y. Moreover, whenever x is reset to a positive value, its value will only decrease until it reaches zero (or y is reset to a value less than zero).
The analysis of the program using interval constraints based on the intervals abstract domain (cf. Section 3.4.1) for the decision nodes, and ordinal-valued functions for the leaf nodes (cf. Equation 6.3.1), yields the following piecewise-defined ranking function at program control point 1:

λρ.  ω² + ω · (y − 1) + (−4x + 9y − 2)    if ρ(x) < 0 ∧ 0 < ρ(y)
     1                                     if ρ(x) = 0 ∨ ρ(y) ≤ 0
     ω · (x − 1) + (9x + 4y − 7)           if 0 < ρ(x) ∧ 0 < ρ(y)

In Figure 6.3, we justify the need for ω². Indeed, from any state where x < 0 and y = h > 0, whenever x is reset at program control point 10, it is possible to jump to any state where y = h − 1. In particular, for example from the state where x = −1 and y = 2, it is possible to jump through unbounded non-determinism to states with value of the most precise ranking function equal to an arbitrary ordinal number between ω and ω², which requires ω² as an upper bound of the maximum number of loop iterations needed to reach a final state. Finally, note the expressions identified as coefficients of ω: where x < 0, the coefficient of ω is an expression in y (since y guides the progress towards the blocking states), and where 0 < x, the coefficient of ω is an expression in x (because x rules the progress towards termination). The expressions are automatically inferred by the analysis without any assistance from the user.
Related Work
Interestingly, ordinal-valued ranking functions already appeared in the work of Alan Turing in the late 1940s [Turing; Morris and Jones]. To the best of our knowledge, the automatic inference of ordinal-valued ranking functions for proving termination of imperative programs is unique to our work.

The approach presented in this chapter is mostly related to [ADFG10]: both techniques handle programs with arbitrary structure and infer ranking functions (that also provide information on the program computational complexity in terms of execution steps) attached to program control points. In [ADFG10], lexicographic ranking functions are obtained by a greedy algorithm based on Farkas' lemma that, analogously to the operators of our abstract domain, constructs the ranking functions by adding one dimension at a time. However, the method proposed in [ADFG10], although complete for programs with rational-valued variables, is incomplete for integer-valued variables. In contrast, there is no completeness limitation to our method, only a choice of relevant abstract domains. In [Gonnord et al.], the authors improve over [ADFG10] and present an SMT-based method for the inference of lexicographic ranking functions. However, both [ADFG10] and [Gonnord et al.] remain focused only on proving termination for all program inputs, while our work naturally deals with proving conditional termination as well.

In a different context, a large amount of research followed the introduction of size-change termination [Lee et al.]. The size-change termination approach consists in collecting a set of size-change graphs (representing function calls) and combining them into multipaths (representing program executions) in such a way that at least one variable is guaranteed to decrease. Compared to size-change termination, our approach avoids the exploration of the combinatorial space of multipaths with the explicit manipulation of ordinals. In [Lee; Ben-Amram and Lee], algorithms are provided to derive explicit ranking functions from size-change graphs, but these ranking functions have a shape quite different from ours, which makes it difficult for us to compare their expressiveness. For example, the derived ranking functions use lexicographic orders on variables while our polynomial coefficients are arbitrary linear combinations of variables. In general, an in-depth comparison between such fairly different methods is an open research topic (e.g., see [Heizmann et al.] for the comparison of the transition invariants and the size-change termination methods).

Finally, we have seen that there exist programs for which there does not exist a lexicographic ranking function (cf. Example 6.4.5). In [Cook et al.], the authors discuss the problem and propose some heuristics to circumvent it. Interestingly, these heuristics rediscover exactly the need for piecewise-defined ranking functions, even if implicitly and in a roundabout way.
Recursive Programs
In the small programming language introduced in Chapter 3 only loops present a challenge when proving termination. In this chapter, we illustrate a simple extension of the language with recursive procedures. We revisit the definition of its maximal trace semantics (cf. Section 3.2) and its definite termination semantics (cf. Section 4.3). Moreover, we propose a sound decidable abstraction for proving termination of recursive programs based on the piecewise-defined ranking functions introduced in Chapter 5 and Chapter 6.
A Small Procedural Language
In Figure 7.1, the syntax of our programming language proposed in Figure 3.1 is extended with possibly recursive procedures.
A program prog consists of a procedure declaration followed by a unique label l 2 L. A procedure declaration mthd is either a main procedure declaration or a sequential composition of declarations. The main procedure is always declared last. A procedure consists in an instruction statement labelled by a unique procedure name M 2 M [ {main}.
The set of language instructions is extended with call and return statements: a call statement branches to the initial control point of the called procedure; the return statement branches back to the control point after the call statement. With the exception of the main procedure, the last instruction of all procedures is a return statement. For simplicity, we assume that any mutual recursion is reduced into a single recursive procedure [Kaser et al.]. Note that this extension of the language also allows us to encode programs containing functions with arguments and return values, thanks to variables. Therefore, we do not explicitly introduce such features in the language.
Maximal Trace Semantics
We now define the transition semantics of our extended language. In particular, since a procedure might be called during the execution of another procedure, we introduce a stack to recover the right calling control point: the procedure call statement pushes the calling control point onto the stack, whereas the procedure return statement pops the last calling control point from the top of the stack and branches to this control point. Then, we define the maximal trace semantics of the language by induction on its extended syntax.
Transition Systems. A stack k ∈ L* is a possibly empty sequence of program control points. Let K denote the set of all stacks.

The set of all program states Σ ≝ L × E is extended to pairs K × Σ consisting of a stack k ∈ K and a program state s ∈ Σ, which in turn consists of a program control point ℓ ∈ L paired with an environment ρ ∈ E.

In Figure 7.2 we define the initial control point of a program prog, a call instruction stmt, and a return instruction ret. The initial control point of a program prog is the initial control point of the main procedure. The initial control point of a call instruction stmt is its label.

stmt ::= …
    | ℓ call M                 i⟦ℓ call M⟧ ≝ ℓ
ret ::= ℓ return                i⟦ℓ return⟧ ≝ ℓ
mthd ::= main: stmt             i⟦main: stmt⟧ ≝ i⟦stmt⟧
    | M: stmt ret mthd₁         i⟦M: stmt ret mthd₁⟧ ≝ i⟦mthd₁⟧
prog ::= mthd ℓ                 i⟦mthd ℓ⟧ ≝ i⟦mthd⟧

Figure 7.2: Initial control point of stmt, ret, and prog.

The final control point f⟦ℓ return⟧ of a return instruction ret cannot be defined statically; it is dynamic information depending on the calling control point on the stack during execution. The final control point of other instructions stmt is unchanged from Figure 3.5:

stmt ::= …
    | ℓ call M                 f⟦ℓ call M⟧ ≝ f⟦stmt⟧
prog ::= mthd ℓ                f⟦mthd ℓ⟧ ≝ ℓ
In the following, the function i: M → L maps procedure names to their initial control point: given a declaration M: stmt ret, i(M) ≝ i⟦stmt⟧. The set of initial states is extended to {ε} × I ≝ {⟨ε, ⟨i⟦prog⟧, ρ⟩⟩ | ρ ∈ E} and the set of final states is extended to {ε} × Q ≝ {⟨ε, ⟨f⟦prog⟧, ρ⟩⟩ | ρ ∈ E}.

We now redefine the transition relation τ ∈ (K × Σ) × (K × Σ). In particular, in Figure 7.4, we define the transition semantics τ⟦stmt⟧ ∈ (K × Σ) × (K × Σ) and τ⟦ret⟧ ∈ (K × Σ) × (K × Σ) of a call instruction and a return instruction, respectively. The semantics of other instructions stmt is defined analogously to Figure 3.6, extending the states with a stack, which is left unchanged by the instruction. The execution of a call instruction pushes the final control point of the instruction onto the stack and branches to the initial control point of the called procedure. The function τ: M → P((K × Σ) × (K × Σ)) maps procedure names to their transition semantics: given a declaration M: stmt ret, τ(M) ≝ τ⟦stmt⟧ ∪ τ⟦ret⟧. The execution of a return instruction branches to the control point popped from the top of the stack.

τ⟦ℓ call M⟧ ≝ {⟨k, ⟨ℓ, ρ⟩⟩ → ⟨f⟦ℓ call M⟧ · k, ⟨i(M), ρ⟩⟩ | k ∈ K, ρ ∈ E} ∪ τ(M)
τ⟦ℓ return⟧ ≝ {⟨r · k, ⟨ℓ, ρ⟩⟩ → ⟨k, ⟨r, ρ⟩⟩ | r ∈ L, k ∈ K, ρ ∈ E}

The transition relation τ ∈ (K × Σ) × (K × Σ) of a program prog is defined by the semantics τ⟦prog⟧ ∈ (K × Σ) × (K × Σ) of the program as follows:

τ⟦prog⟧ = τ⟦mthd ℓ⟧ ≝ τ⟦mthd⟧

where τ⟦main: stmt⟧ ≝ τ⟦stmt⟧ and τ⟦M: stmt ret mthd⟧ ≝ τ⟦mthd⟧.

In other words, the semantics τ⟦prog⟧ of a program prog is defined by the semantics τ⟦main: stmt⟧ ∈ (K × Σ) × (K × Σ) of the main procedure.
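The stack discipline of Figure 7.4 can be pictured with a small OCaml sketch (the types and names below are our own, not part of the formalization): a call pushes the final control point of the call instruction onto the stack, and a return pops it.

    type label = int
    type 'env state = { stack : label list; point : label; env : 'env }

    (* call: push the return point, jump to the entry point of the callee *)
    let step_call ~ret_point ~callee_entry (s : 'env state) : 'env state =
      { s with stack = ret_point :: s.stack; point = callee_entry }

    (* return: pop the calling control point and branch to it; a return with
       an empty stack has no successor (it is a blocking state) *)
    let step_return (s : 'env state) : 'env state option =
      match s.stack with
      | r :: k -> Some { s with stack = k; point = r }
      | [] -> None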
The extended set of initial states is {ε} × I ≝ {⟨ε, ⟨6, ρ⟩⟩ | ρ ∈ E}. The program transition relation τ ∈ (K × Σ) × (K × Σ) is defined as follows:

τ ≝ {⟨ε, ⟨6, ρ⟩⟩ → ⟨ε, ⟨7, ρ[x ↦ v]⟩⟩ | ρ ∈ E ∧ v ∈ Z}
  ∪ {⟨ε, ⟨7, ρ⟩⟩ → ⟨8 · ε, ⟨1, ρ⟩⟩ | ρ ∈ E}
  ∪ {⟨k, ⟨1, ρ⟩⟩ → ⟨k, ⟨2, ρ⟩⟩ | k ∈ K, ρ ∈ E ∧ true ∈ ⟦1 < x⟧ρ}
  ∪ {⟨k, ⟨2, ρ⟩⟩ → ⟨k, ⟨3, ρ[x ↦ ρ(x) − 1]⟩⟩ | k ∈ K, ρ ∈ E}
  ∪ {⟨k, ⟨3, ρ⟩⟩ → ⟨5 · k, ⟨1, ρ⟩⟩ | k ∈ K, ρ ∈ E}
  ∪ {⟨k, ⟨1, ρ⟩⟩ → ⟨k, ⟨4, ρ⟩⟩ | k ∈ K, ρ ∈ E ∧ false ∈ ⟦1 < x⟧ρ}
  ∪ {⟨k, ⟨4, ρ⟩⟩ → ⟨k, ⟨5, ρ⟩⟩ | k ∈ K, ρ ∈ E}
  ∪ {⟨5 · k, ⟨5, ρ⟩⟩ → ⟨k, ⟨5, ρ⟩⟩ | k ∈ K, ρ ∈ E}
  ∪ {⟨8 · ε, ⟨5, ρ⟩⟩ → ⟨ε, ⟨8, ρ⟩⟩ | ρ ∈ E}

The extended set of final states is {ε} × Q ≝ {⟨ε, ⟨8, ρ⟩⟩ | ρ ∈ E}.
Maximal Trace Semantics. The maximal trace semantics τ^{+∞} ∈ P((K × Σ)^{+∞}) of our extended language is generated by the extended transition system ⟨K × Σ, τ⟩ as in Section 3.2. In particular, the following result restates Theorem 2.2.12 and defines the maximal trace semantics in fixpoint form:

Theorem 7.2.2 (Maximal Trace Semantics) The maximal trace semantics τ^{+∞} ∈ P((K × Σ)^{+∞}) can be expressed as a least fixpoint in the complete lattice ⟨P((K × Σ)^{+∞}), ⊑, ⊔, ⊓, (K × Σ)^ω, (K × Σ)^+⟩ as follows:

τ^{+∞} = lfp^⊑ φ^{+∞}        φ^{+∞}(T) ≝ ({ε} × Q) ∪ (τ ; T)     (7.2.1)
In the following, we provide a structural definition of the maximal trace semantics by induction on the extended syntax of programs.
Remark 7.2.3 From now on, we assume that procedures have a single recursive call. We plan to generalize the framework as part of our future work.
We do not explicitly represent the stack. Instead, we use least fixpoints as denotations of recursive procedures. Thus, we have τ^{+∞} ∈ P(Σ^{+∞}).
In Figure 7.5, for any procedure P, we define the semantics τ^{+∞}⟦stmt⟧_P S: P(Σ^{+∞}) → P(Σ^{+∞}) and τ^{+∞}⟦ret⟧_P S: P(Σ^{+∞}) → P(Σ^{+∞}) of a call instruction and a return instruction, respectively. The traces are built backwards: each function τ^{+∞}⟦stmt⟧_P S (resp. τ^{+∞}⟦ret⟧_P S) takes as input a set of traces starting with the final label of the instruction stmt (resp. ret) and outputs a set of traces starting with the initial label of stmt (resp. ret). The parameter set of traces S ∈ P(Σ^{+∞}) is used to handle recursive calls.

τ^{+∞}⟦ℓ call M⟧_M S(T) ≝ {⟨ℓ, ρ⟩⟨i(M), ρ⟩σ | ρ ∈ E, σ ∈ Σ^{*∞}, ⟨i(M), ρ⟩σ ∈ S ; T}

τ^{+∞}⟦ℓ call M⟧_P S(T) ≝ {⟨ℓ, ρ⟩⟨i(M), ρ⟩σ | ρ ∈ E, σ ∈ Σ^{*∞}, ⟨i(M), ρ⟩σ ∈ C}     P ≠ M
    C ≝ lfp^⊑_{Σ^ω} (λS. τ^{+∞}(M)S(T))

τ^{+∞}⟦ℓ return⟧_P S(T) ≝ {⟨ℓ, ρ⟩⟨r, ρ⟩σ | r ∈ L, ρ ∈ E, σ ∈ Σ^{*∞}, ⟨r, ρ⟩σ ∈ T}

Figure 7.5: Maximal trace semantics of instructions stmt and ret.

The trace semantics of a recursive call instruction, when the called procedure coincides with the caller, is parameterized by a set of traces S representing the trace semantics of the following recursive calls, followed by a set of traces T starting with the final label of the instruction. The traces belonging to S ; T start with environments paired with the initial control point of the procedure. Thus, the trace semantics of the instruction simply prepends to them the same environments paired with the initial label of the instruction.

The trace semantics of a non-recursive call instruction, when the called procedure M is different from the caller P, takes as input a set of traces T starting with the final label of the instruction, determines from T the trace semantics C of the callee, and prepends to the initial states of the traces belonging to C the same environments paired with the initial label of the instruction. The called procedure M can be a recursive procedure. Thus, its trace semantics is defined as the least fixpoint of λS. τ^{+∞}(M)S(T) within the complete lattice ⟨P(Σ^{+∞}), ⊑, ⊔, ⊓, Σ^ω, Σ^+⟩, analogously to Equation 2.2.5. Note that we assumed that mutual recursion is reduced into a single recursive procedure. The function τ^{+∞}: M → (P(Σ^{+∞}) → P(Σ^{+∞}) → P(Σ^{+∞})) maps procedure names to their maximal trace semantics: given a declaration M: stmt ret, τ^{+∞}(M)S(T) ≝ τ^{+∞}⟦stmt⟧_M S(τ^{+∞}⟦ret⟧_M S(T)). The iteration sequence, starting from all infinite sequences Σ^ω, builds the set of program traces that consist of an infinite number of recursive calls, and it prepends a finite number of recursive calls to an input set T ∈ P(Σ^{+∞}) of traces starting with the final label of the call instruction.
Finally, the semantics of a return instruction takes as input a set of traces starting from the calling control point, and prepends to them the same initial environments paired with the initial control point of the instruction.
The trace semantics τ^{+∞}⟦stmt⟧_P S: P(Σ^{+∞}) → P(Σ^{+∞}) of other instructions stmt is defined as follows:

τ^{+∞}⟦stmt⟧_P S ≝ τ^{+∞}⟦stmt⟧     (7.2.2)

where τ^{+∞}⟦stmt⟧: P(Σ^{+∞}) → P(Σ^{+∞}) is defined in Figure 3.7.
The maximal trace semantics τ^{+∞}⟦prog⟧ ∈ P(Σ^{+∞}) of a program prog is defined by the maximal trace semantics of the main procedure taking as input the infinite sequences Σ^ω and the final states Q:

Definition 7.2.4 (Maximal Trace Semantics) The maximal trace semantics τ^{+∞}⟦prog⟧ ∈ P(Σ^{+∞}) of a program prog is:

τ^{+∞}⟦prog⟧ = τ^{+∞}⟦mthd ℓ⟧ ≝ τ^{+∞}⟦mthd⟧Σ^ω(Q)     (7.2.3)

where τ^{+∞}⟦main: stmt⟧ ≝ τ^{+∞}⟦stmt⟧_main and τ^{+∞}⟦M: stmt ret mthd⟧ ≝ τ^{+∞}⟦mthd⟧, and Q ≝ {6} × E.

The maximal trace semantics of the program is defined by the maximal trace semantics of the main procedure: τ^{+∞}_main⟦5 call f⟧Σ^ω(Q) (cf. Definition 3.2.5). The semantics of the call statement determines the maximal trace semantics of the procedure f (cf. Figure 7.5). The fixpoint iterates are depicted in Figure 7.6. Thus, the maximal trace semantics of the program contains the traces that, starting from the initial states I, call the procedure f a non-deterministic number of times:

{⟨5, ρ⟩(⟨1, ρ⟩⟨3, ρ⟩)* ⟨1, ρ⟩⟨2, ρ⟩⟨4, ρ⟩⟨8, ρ⟩ | ρ ∈ E} ∪ {⟨5, ρ⟩(⟨1, ρ⟩⟨3, ρ⟩)^ω | ρ ∈ E}

[Figure 7.6: the fixpoint iterates S₀ = Σ^ω, S₁, S₂, …, where S₁ contains the traces through skip and return plus the traces through call f followed by Σ^ω, S₂ unfolds one more recursive call, and so on.]
Definite Termination Semantics
We now extend the structural definition given in Section 4.3 of the fixpoint definite termination semantics τ_Mt ∈ Σ ⇀ O (cf. Equation 4.2.16). Then, we propose a sound decidable abstraction of τ_Mt based on piecewise-defined ranking functions (cf. Chapter 5 and Chapter 6).
Definite Termination Semantics
We partition the definite termination semantics τ_Mt with respect to the program control points: τ_Mt ∈ L → (E ⇀ O). In this way, to each program control point ℓ ∈ L corresponds a partial function f: E ⇀ O, and, for any procedure P, to each program instruction stmt and ret corresponds a termination semantics τ_Mt⟦stmt⟧_P: (E ⇀ O) → (E ⇀ O) and τ_Mt⟦ret⟧_P: (E ⇀ O) → (E ⇀ O). Each function τ_Mt⟦stmt⟧_P F and τ_Mt⟦ret⟧_P F takes as input a ranking function whose domain represents the terminating environments at the final label of the instruction and outputs a ranking function whose domain represents the terminating environments at the initial label of the instruction, and whose value is an upper bound on the number of program execution steps remaining to termination. The parameter F ∈ E ⇀ O is used to handle recursive calls.
The termination semantics of a recursive call, when the called procedure coincides with the caller, is parameterized by a ranking function F: E ⇀ O representing the termination semantics of the following recursive calls, and takes as input a ranking function f: E ⇀ O whose domain represents the terminating environments at the final label of the instruction. The domain of the termination semantics of the instruction is the intersection of the domains of f and F, and its value is the sum of the values of f and F plus one, to take into account that from the environments at the initial label of the instruction another program execution step is necessary before termination:

(τ_Mt⟦ℓ call M⟧_M F)(f) ≝ λρ ∈ dom(f) ∩ dom(F). f(ρ) + F(ρ) + 1     (7.3.1)

The termination semantics of a non-recursive call, when the called procedure M is different from the caller P, takes as input a ranking function f: E ⇀ O, determines from f the termination semantics I: E ⇀ O of the callee, and increases its value to account for another program execution step:

(τ_Mt⟦ℓ call M⟧_P F)(f) ≝ λρ ∈ dom(I). I(ρ) + 1          P ≠ M
    I ≝ lfp^⊑_{∅̇} (λF. (τ_Mt(M)F)(f))     (7.3.2)

The termination semantics of the possibly recursive called procedure M is defined as the least fixpoint of λF. (τ_Mt(M)F)(f) within the partially ordered set ⟨E ⇀ O, ⊑⟩, analogously to Equation 4.2.16. The rationale is that the call from P to M may be a call to a sequence of recursive calls to M, which are collected by the fixpoint. The function τ_Mt: M → ((E ⇀ O) → (E ⇀ O) → (E ⇀ O)) maps procedure names to their termination semantics: given a procedure declaration M: stmt ret, we have (τ_Mt(M)F)(f) ≝ (τ_Mt⟦stmt⟧_M F)((τ_Mt⟦ret⟧_M F)(f)). The iteration sequence, starting from the totally undefined function ∅̇, builds the ranking function whose domain is the set of environments from which a finite number of recursive calls is made.

Finally, the termination semantics of a return instruction takes as input a ranking function whose domain represents the terminating environments at the calling control point and increases its value to take into account another program execution step before termination:

(τ_Mt⟦ℓ return⟧_P F)(f) ≝ λρ ∈ dom(f). f(ρ) + 1     (7.3.3)

The termination semantics τ_Mt⟦stmt⟧_P F: (E ⇀ O) → (E ⇀ O) of other instructions stmt is defined as follows:

τ_Mt⟦stmt⟧_P F ≝ τ_Mt⟦stmt⟧     (7.3.4)

where τ_Mt⟦stmt⟧: (E ⇀ O) → (E ⇀ O)
is defined in Section 4.3. For a conditional statement if l bexp then stmt 1 else stmt 2 fi, F has to be passed to the termination semantics of stmt 1 and stmt 2 . For a loop while l bexp do stmt od, F has to be passed to the termination semantics of the loop body stmt. For a sequential composition of instructions stmt 1 stmt 2 , F has to be passed to the termination semantics of the components stmt 1 and stmt 2 .
The termination semantics ⌧ Mt J prog K 2 E * O of a program prog is a ranking function whose domain represents the terminating environments, which is determined by the termination semantics of the main procedure taking as input the totally undefined function and the zero function:

Definition 7.3.1 (Termination Semantics) The termination semantics ⌧ Mt J prog K 2 E * O of a program prog is:

⌧ Mt J prog K = ⌧ Mt J mthd l K def = (⌧ Mt J mthd K ∅)(λ⇢. 0)    (7.3.5)
where ⌧ Mt J main : stmt
K def = ⌧ Mt J stmt K main and ⌧ Mt J M : stmt ret mthd K def = ⌧ Mt J mthd K,
and where the function
⌧ Mt J stmt K P F : (E * O) ! (E * O)
is the termination semantics of each program instruction stmt.
Abstract Definite Termination Semantics
We propose a sound decidable abstraction of the definite termination semantics ⌧ Mt J prog K 2 E * O based on piecewise-defined ranking functions.
In particular, we supplement the set of operators of the decision trees abstract domain (cf. Chapter 5 and Chapter 6) with a sum operator + T .
Sum. The sum operator + T is implemented by Algorithm 23: the function sum, given a sound over-approximation D 2 D of the reachable environments and two decision trees t 1 , t 2 2 T , first calls unification (cf. Line 10) for tree unification and then calls the auxiliary function sum-aux (cf. Line 11). The latter descends along the paths of the decision trees (cf. Lines 5-6), up to the leaf nodes where the leaves sum operator + F is invoked.
The sum + F between defined and undefined leaf nodes is analogous to the leaves approximation join g F (cf. Equation 5.2.13):

? F + F f def = ? F    f 2 F \ {> F }        f + F ? F def = ? F    f 2 F \ {> F }
> F + F f def = > F    f 2 F \ {? F }        f + F > F def = > F    f 2 F \ {? F }        (7.3.6)

Algorithm 23 (sum):
1: function sum-aux(t 1 , t 2 )
2:   if isLeaf(t 1 ) ^ isLeaf(t 2 ) then
3:     return LEAF : t 1 .f + F t 2 .f
4:   else if isNode(t 1 ) ^ isNode(t 2 ) then
5:     (l 1 , l 2 ) ← sum-aux(t 1 .l, t 2 .l)
6:     (r 1 , r 2 ) ← sum-aux(t 1 .r, t 2 .r)
7:     return (NODE{c} : l 1 ; r 1 , NODE{c} : l 2 ; r 2 )
8:
9: function sum(D, t 1 , t 2 )        . D 2 D, t 1 , t 2 2 T
10:   (t 1 , t 2 ) ← unification(D, t 1 , t 2 )
11:   return sum-aux(t 1 , t 2 )

(⌧ \ Mt J l call M K M T )(t) def = STEP T (t + T T )
(⌧ \ Mt J l call M K P T )(t) def = STEP T (I)        P 6 = M,  I def = lfp \ (λT. (⌧ \ Mt (M )T )(t))
(⌧ \ Mt J l return K P T )(t) def = STEP T (t)
Figure 7.7: Abstract termination semantics of instructions stmt.
In particular, the sum is undefined between ? F -leaves and > F -leaves and always favors the undefined leaf nodes. Instead, given two defined leaf nodes f 1 , f 2 2 F \ {? F , > F }, their sum f 1 + F f 2 is defined as expected:
f 1 + F f 2 def = f 1 + f 2    (7.3.7)

Example 7.3.2 Let X = {x}, and let f 1 def = λx. −x and f 2 def = λx. x + 5 be two affine functions. Then, their sum is the affine function f 1 + F f 2 def = λx. −x + x + 5 = λx. 5.
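The following OCaml sketch illustrates the sum operator under the assumption that the two decision trees have already been unified (same constraints in the same order); the leaf type, with Bot and Top standing for ? F and > F and an array-based encoding of affine functions, is an illustrative choice rather than the analyzer's concrete datatype.

```ocaml
(* Leaves: undefined (Bot), defined affine functions, or Top. *)
type leaf = Bot | Affine of int array * int   (* coefficients and constant *)
          | Top

type 'c tree = Leaf of leaf | Node of 'c * 'c tree * 'c tree

(* Sum of two leaves (cf. Equations 7.3.6 and 7.3.7): undefined leaves are
   absorbing, Bot wins over Top, and defined leaves are added pointwise. *)
let sum_leaf f1 f2 =
  match f1, f2 with
  | Bot, _ | _, Bot -> Bot
  | Top, _ | _, Top -> Top
  | Affine (a1, b1), Affine (a2, b2) ->
      Affine (Array.map2 ( + ) a1 a2, b1 + b2)

(* Sum of two unified decision trees: descend the trees in parallel and add
   the leaves; the unification step (not shown) must be performed first. *)
let rec sum_tree t1 t2 =
  match t1, t2 with
  | Leaf f1, Leaf f2 -> Leaf (sum_leaf f1 f2)
  | Node (c, l1, r1), Node (_, l2, r2) -> Node (c, sum_tree l1 l2, sum_tree r1 r2)
  | _ -> invalid_arg "sum_tree: trees must be unified first"
```

On Example 7.3.2, sum_leaf (Affine ([|-1|], 0)) (Affine ([|1|], 5)) yields Affine ([|0|], 5), i.e., λx. 5.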
Abstract Definite Termination Semantics. In the following, as in Section 5.3, we assume to have, for each program control point l 2 L, a sound numerical over-approximation R 2 D of the reachable environments ⌧ I (l) 2 P(E):
⌧ I (l) ✓ D (R) (cf. Section 3.4).
In Figure 7.7, for any procedure P, we define ⌧ \ Mt J stmt K P T : T ! T and ⌧ \ Mt J ret K P T : T ! T for a call instruction stmt and a return instruction ret, respectively. Each function ⌧ \ Mt J stmt K P and ⌧ \ Mt J ret K P takes as input a decision tree over-approximating the ranking function corresponding to the final control point of the instruction, and outputs a decision tree defined over a subset of the reachable environments R 2 D, which over-approximates the ranking function corresponding to the initial control point of the instruction. The parameter T 2 T is used to handle recursive calls. In particular, the decision tree T abstracts the termination semantics F : E * O of the following recursive calls (cf. Equation 7.3.1). Then, the semantics of a recursive call instruction invokes STEP T (cf. Algorithm 9) on the sum + T (cf. Algorithm 23) between T and the input decision tree t 2 T .
The following result proves that for a recursive call instruction l call M , given sound over-approximations R 2 D of ⌧ I (l), S 2 D of ⌧ I (i(M )), and
D 2 D of ⌧ I (f J l call M K), the abstract semantics ⌧ \ Mt J l call M K M is a sound over-approximation of ⌧ Mt J l call M K M defined in Equation 7.3.1: Lemma 7.3.3 (⌧ Mt J l call M K M T [S]T )( T [D]t) 4 T [R](STEP T (t + T T )).
Proof. See Appendix A.5.
⌅
The semantics of a non-recursive call, when the called procedure M is different from the caller P , invokes the STEP T operator on the semantics I 2 T of the callee. The semantics of the called procedure M is defined as the limit of the iteration sequence with widening (cf. Equation 5.2.24) of λT. ⌧ \ Mt (M )T (t). The function ⌧ \ Mt : M ! (T ! T ! T ) maps procedure names to their semantics: given a procedure declaration M : stmt ret, we have
⌧ \ Mt (M )T (t) def = ⌧ \ Mt J stmt K M T (⌧ \ Mt J ret K M T (t)). The abstract semantics ⌧ \ Mt J l call M K P of a non-recursive call instruction l call M , given sound over-approximations R 2 D of ⌧ I (l), S 2 D of ⌧ I (i(M )), and D 2 D of ⌧ I (f J l call M K)
, is a sound over-approximation of the termination semantics ⌧ Mt J l call M K P defined in Equation 7.3.2:
Lemma 7.3.4 (⌧ Mt J l call M K P T [S]T )( T [D]t) 4 T [R](STEP T (I))
, where
I def = lfp \ ( T. (⌧ \ Mt (M )T )(t)).
Proof. See Appendix A.5.
⌅
Finally, the semantics of a return instruction simply invokes the STEP T operator on the given input decision tree.
The following result proves, given sound over-approximations R 2 D of ⌧ I (l) and D 2 D of ⌧ I (f J l return K), the soundness of the abstract semantics ⌧ \ Mt J l return K P with respect to ⌧ Mt J l return K P defined in Equation 7.3.3:
Lemma 7.3.5 (⌧ Mt J l return K P T [D]T )( T [D]t) 4 T [R](STEP T (t)). Proof. See Appendix A.5.
⌅
The semantics ⌧ \ Mt J stmt K P T : T ! T of other instructions stmt is:
⌧ \ Mt J stmt K P T def = ⌧ \ Mt J stmt K (7.3.8)
where ⌧ \ Mt J stmt K : T ! T is defined in Figure 5.12. For a conditional statement if l bexp then stmt 1 else stmt 2 fi, T has to be passed to the termination semantics of stmt 1 and stmt 2 . For a loop while l bexp do stmt od, T has to be passed to the termination semantics of the loop body stmt. For a sequential composition of instructions stmt 1 stmt 2 , T has to be passed to the termination semantics of the components stmt 1 and stmt 2 . Given sound overapproximations R 2 D of ⌧ I (iJ stmt K) and D 2 D of ⌧ I (f J stmt K), ⌧ \ Mt J stmt K P is a sound over-approximation of ⌧ Mt J stmt K P defined in Equation 7.3.4:
Lemma 7.3.6 (⌧ Mt J stmt K P T [D]T )( T [D]t) 4 T [R]((⌧ \ Mt J stmt K P T )(t)).
Proof (Sketch).
The proof follows from Equation 7.3.4 and Equation 7.3.8 and from the soundness of ⌧ \ Mt J stmt K defined in Figure 5.12 with respect to ⌧ Mt J stmt K defined in Section 4.3 (cf. Lemma 5.2.8, Lemma 5.2.14, Lemma 5.2.15, Lemma 5.2.16, Lemma 5.2.24, and Lemma 6.4.1).
⌅
The abstract termination semantics ⌧ \ Mt J prog K 2 T of a program prog outputs the decision tree over-approximating the ranking function corresponding to the initial program control point iJ prog K 2 L. It is defined by means of the abstract termination semantics of the main procedure taking as input ? T and the leaf node LEAF : λX 1 , . . . , X k . 0:

Definition 7.3.7 (Abstract Termination Semantics) The abstract termination semantics ⌧ \ Mt J prog K 2 T of a program prog is:

⌧ \ Mt J prog K = ⌧ \ Mt J mthd l K def = (⌧ \ Mt J mthd K ? T )(LEAF : λX 1 , . . . , X k . 0)    (7.3.9)
where ⌧ \ Mt J main : stmt
K def = ⌧ \ Mt J stmt K main and ⌧ \ Mt J M : stmt ret mthd K def = ⌧ \ Mt J mthd K,
and where the abstract semantics ⌧ \ Mt J stmt K P T : T ! T of each program instruction stmt is defined in Figure 7.7 and Equation 7.3.8.
The following result proves the soundness of the abstract termination semantics ⌧ \ Mt J prog K 2 T with respect to the termination semantics ⌧ Mt J prog K 2 E * O, given a sound numerical over-approximation R 2 D of the reachable environments ⌧ I (iJ prog K) at the initial program control point:
Theorem 7.3.8 ⌧ Mt J prog K 4 T [R]⌧ \ Mt J prog K Proof (Sketch).
The proof follows from the soundness of the operators of the decision trees abstract domain (Lemma 7.3.3, Lemma 7.3.4, Lemma 7.3.5, and Lemma 7.3.6) used for the definition of ⌧ \ Mt J prog K 2 T .
⌅
In particular, the abstract termination semantics provides sufficient preconditions for ensuring definite termination of a program for a given over-approximation R 2 D of the set of initial states I ✓ ⌃:

Corollary 7.3.9 A program must terminate for execution traces starting from a given set of initial states D (R) if D (R) ✓ dom( T [R]⌧ \ Mt J prog K).
Example. In the following, we recall the example introduced at the beginning of the chapter and we present the fully detailed analysis of the program using the abstract domain of decision trees.
Example 7.3.10 Let us consider again the recursive program of Example 7.2.1:

f : if 1 (1 < x) then 2 x := x − 1 3 call f else

Similarly, the third iterate of the call to f concludes that the procedure terminates in at most eleven execution steps, when it calls itself recursively twice, in at most seven execution steps, when it calls itself recursively only once, and in at most three program steps when 1 < x is not satisfied:

1 : NODE{x − 4 ≥ 0} : LEAF : ? F ; NODE{x − 3 ≥ 0} : LEAF : λx. 11; NODE{x − 2 ≥ 0} : (LEAF : λx. 7); (LEAF : λx. 3)
Then, the widening extrapolates the ranking function on the partitions over which it is not yet defined:
1 : NODE{x − 3 ≥ 0} : LEAF : λx. 4x − 1; NODE{x − 2 ≥ 0} : (LEAF : λx. 7); (LEAF : λx. 3)

yielding a fixpoint for the call to f within the main procedure.
The ranking function associated with program control point 7 is:
7 : NODE{x − 3 ≥ 0} : LEAF : λx. 4x; NODE{x − 2 ≥ 0} : (LEAF : λx. 8); (LEAF : λx. 4)
Finally, the non-deterministic assignment at program control point 6 yields:
6 : LEAF : λx. ω + 8
which proves that the program is always terminating, whatever the initial value of the program variable x.
The results presented in this chapter are only preliminary, more general results being necessary to cover all practical cases.
Implementation
This chapter presents our prototype static analyzer FuncTion, which is based on piecewise-defined ranking functions. We also present the most recent experimental evaluation.
FuncTion
We have implemented a prototype static analyzer FuncTion based on the decision trees abstract domain presented in Chapter 5. It is available online: http://www.di.ens.fr/~urban/FuncTion.html. The prototype accepts programs written in a (subset of) C, without struct and union types. It provides only limited support for arrays and pointers. The only basic data type is mathematical integers, deviating from the standard semantics of C. The prototype is written in OCaml and, at the time of writing, the available numerical abstractions for handling linear constraints within the decision nodes are based on the intervals abstract domain [START_REF] Cousot | Static Determination of Dynamic Properties of Programs[END_REF] (cf. Section 3.4.1), the convex polyhedra abstract domain [START_REF] Cousot | Automatic Discovery of Linear Restraints Among Variables of a Program[END_REF] (cf. Section 3.4.2), and the octagons abstract domain [START_REF] Miné | The Octagon Abstract Domain[END_REF] (cf. Section 3.4.3), and the available abstraction for handling functions within the leaf nodes is based on affine functions. The numerical abstract domains are provided by the APRON library [START_REF] Jeannet | Apron: A Library of Numerical Abstract Domains for Static Analysis[END_REF]. It is also possible to activate the extension to ordinal-valued ranking functions presented in Chapter 6, and to tune the precision of the analysis by adjusting the widening delay.
To improve precision, we avoid trying to compute a ranking function for the non-reachable states: FuncTion runs a forward analysis to over-approximate the reachable states using a numerical abstract domain (cf. Section 3.4). Then, it runs the backward analysis to infer a ranking function, intersecting its domain at each step with the states identified by the previous analysis.
The analysis proceeds by structural induction on the program syntax, iterating loops and recursive procedures (cf. Chapter 7) until an abstract fixpoint is reached. In case of nested loops, a fixpoint on the inner loop is computed for each iteration of the outer loop, following [START_REF] Bourdoncle | E cient Chaotic Iteration Strategies with Widenings[END_REF][START_REF] Muthukumar | Compiletime Derivation of Variable Dependency Using Abstract Interpretation[END_REF]. It is also possible to activate the extension of FuncTion with conflict-driven learning [START_REF] Vijay | Conflict-Driven Abstract Interpretation for Conditional Termination[END_REF].
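To illustrate how the widening delay parameter is used during the fixpoint iterations, here is a hedged OCaml sketch of a generic post-fixpoint computation; the operations leq, join, widen and bottom stand for an arbitrary abstract domain interface and are assumptions of this sketch, not the actual API of FuncTion or APRON.

```ocaml
(* Generic post-fixpoint iteration with delayed widening: the first [delay]
   iterations use the join to gain precision, later ones use the widening to
   force convergence.  The domain operations are assumed, not FuncTion's API. *)
let lfp_with_widening ~leq ~join ~widen ~bottom ~delay (f : 'a -> 'a) : 'a =
  let rec iterate n x =
    let fx = f x in
    if leq fx x then x                       (* abstract post-fixpoint reached *)
    else
      let x' = if n < delay then join x fx else widen x fx in
      iterate (n + 1) x'
  in
  iterate 0 bottom
```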
Experimental Evaluation
We evaluated our prototype implementation FuncTion against 288 terminating C programs collected from the termination category of the 4th International Competition on Software Verification (SV-COMP 2015). FuncTion provides only limited support for arrays and pointers. Therefore, we were not able to analyze 17% of the SV-COMP 2015 benchmark test cases.
The experiments were performed on a system with a 1.30GHz 64-bit Dual-Core CPU (Intel i5-4250U) and 4GB of RAM, and running Ubuntu 14.04.
In Figure 8.1, we compared different parameterizations of FuncTion. The results match the expectations: FuncTion parameterized with interval constraints based on the intervals abstract domain (cf. Section 3.4.1) and a widening delay of two iterations is the fastest but least precise, and FuncTion parameterized with polyhedral constraints based on the polyhedra abstract domain (cf. Section 3.4.2) and a widening delay of three iterations is the slowest but most precise. We observed that delaying the widening further marginally improves precision but significantly increases running times.
We also compared FuncTion to the other tools that participated in the termination category of SV-COMP 2015: AProVE [SAF + 15], the preliminary version of FuncTion [Urb15], HIPTnT+ [START_REF] Le | Termination and Non-Termination Specification Inference[END_REF], and Ultimate [HDL + 15]. FuncTion, with respect to its preliminary version, has been extended with conflict-driven learning [START_REF] Vijay | Conflict-Driven Abstract Interpretation for Conditional Termination[END_REF]. In the comparison, we configured FuncTion to use interval constraints with a widening delay of two iterations and, in case of failure to prove a program terminating, to fall back to polyhedral constraints with a widening delay of three iterations. The reported execution times are for the entire run, which may involve trying both parameterizations. It was not possible to run all the tools on the same system because we did not have access to the competition versions of AProVE, HIPTnT+ and Ultimate. For these tools, we used the results of SV-COMP 2015, even though the competition was conducted on more powerful systems with a 3.40GHz 64-bit Quad-Core CPU (Intel i7-4770) and 33GB of RAM.
Figure 8.2 summarizes our experimental evaluation and Figure 8.3 shows a detailed comparison of FuncTion against each other tool. In Figure 8.2a, the first column reports the total number of programs that each tool could prove terminating, the second column reports the average running time in seconds for the programs where the tool proved termination, and the last column reports the number of time outs. We used a time limit of 180 seconds for each program test case. In Figure 8.2b, the first column (⌅) lists the total number of programs that the tool was not able to prove termination for and that FuncTion could prove terminating, the second column (N) reports the total number of programs that FuncTion was not able to prove termination for and that the tool could prove terminating, and the last two columns report the total number of programs that both the tool and FuncTion were able (⇥) or unable (#) to prove terminating. The same symbols are used in Figure 8.3.
Figure 8.2a shows that FuncTion improves by around 9% the result of its preliminary version. The increase in the execution time is not evenly distributed, and about 2% of the program test cases require more than 20 seconds to be proved terminating by FuncTion (cf. also Figure 8.3a). The reason for this overhead is the heuristic that we have chosen to guide the conflict-driven analysis, which appears to be unfortunate in these cases [START_REF] Vijay | Conflict-Driven Abstract Interpretation for Conditional Termination[END_REF]. From Figure 8.3a it also emerges that, as expected, the previous version of FuncTion gives up earlier when unable to prove termination. Instead, FuncTion persists in the analysis and times out slightly more frequently.
In Figure 8.3b and Figure 8.3d we clearly notice that, despite the fact that AProVE and Ultimate were run on the more powerful machines of SV-COMP 2015, FuncTion is generally faster but able to prove termination of respectively 19% and 9% fewer program test cases (cf. also Figure 8.2a).
HIPTnT+ is able to prove termination of 16% more programs than FuncTion (cf. Figure 8.2a), but FuncTion can prove termination of 52% of the programs that HIPTnT+ is not able to prove terminating (8% of the total program test cases, cf. Figure 8.2b). Note that, when performing the experiments with the previous version of FuncTion [START_REF] Urban | FuncTion: An Abstract Domain Functor for Termination (Competition Contribution)[END_REF] we observed that the SV-COMP 2015 machines provided a 2x speedup. Thus, the comparison of the execution times of FuncTion and HIPTnT+ is currently inconclusive.
Finally, we noticed that 1% of the SV-COMP 2015 program test cases could be proved terminating only by FuncTion (2.7% only by AProVE, 1% only by HIPTnT+, and 1.7% only by Ultimate). None of the tools was able to prove termination for 0.7% of the programs.
In the following, we further detail our experimental evaluation. In particular, we discuss in more detail the comparison of our prototype implementation FuncTion with each of the other tools that participated in the termination category of SV-COMP 2015.
The SV-COMP 2015 program test cases for termination are arranged into four verification tasks, according to the characteristics of the programs: crafted (programs manually crafted by the participants in the competition), crafted-lit (programs collected from the literature), memory-alloca (programs with dynamic memory allocation), and numeric (programs implementing numerical algorithms). For each verification task, a dedicated overview of the experimental evaluation is shown in a separate figure. As in Figure 8.2a, the first column of the tables reports the total number of programs that each tool was able to prove termination for, the second column reports the average running time in seconds for the programs where the tool proved termination, and the last column reports the number of time outs. We used a time limit of 180 seconds for each program test case. As in Figure 8.2b, the first column (⌅) of the tables reports the total number of programs that FuncTion was able to prove terminating and that the tool was not, the second column (N) reports the total number of programs that the tool was able to prove terminating and that FuncTion was not, and the last two columns report the total number of programs that both the tool and FuncTion were able (⇥) or unable (#) to prove terminating. The same symbols are used in the figures. Figure 8.5 shows, for each verification task, a detailed comparison of FuncTion against AProVE [SAF + 15]. In the crafted verification task, the tools are able to prove termination of the same number of programs (cf. also Figure 8.4a), while in the crafted-lit verification task, AProVE performs much better than FuncTion (cf. also Figure 8.6a). In all verification tasks, FuncTion is faster despite running on a less powerful machine than AProVE. In particular, we observe that AProVE times out on most of the programs that only FuncTion is able to prove terminating.
In Figure 8.7, we observe a comparison of FuncTion against its previous version [Urb15]. The numeric verification task is where FuncTion improves the most over its previous version (around 17%, cf. also Figure 8.10a). In the crafted-lit verification task, we clearly see that, as expected, FuncTion spends more execution time and often times out when unable to prove termination (cf. also Figure 8.6a).
The comparison of FuncTion against HIPTnT+ is shown in Figure 8.9. In the crafted-lit verification task HIPTnT+ performs better than FuncTion (cf. also Figure 8.6a), while FuncTion performs better than HIPTnT+ in the memory-alloca verification task (cf. also Figure 8.8a). Note that, as mentioned, when performing the experiments with the previous version of FuncTion [START_REF] Urban | FuncTion: An Abstract Domain Functor for Termination (Competition Contribution)[END_REF] we observed that the more powerful SV-COMP 2015 machines provided a 2x speedup in the execution time. Nonetheless, in the crafted verification task FuncTion is faster than HIPTnT+ (cf. also Figure 8.4a), and in the numeric verification task the execution time of the tools is at least comparable (cf. also Figure 8.10a). Finally, the programs that could be proved terminating only by FuncTion were 4% of the programs for the crafted verification task (4% only by Ultimate), 0.7% of the programs for the crafted-lit verification task (1.5% only by AProVE and HIPTnT+, and 3.1% only by Ultimate), and 1.7% of the programs for the numeric verification task (1.7% only by AProVE). For the memory-alloca verification task, 6.5% and 1.3% of the programs could be proved terminating only by AProVE and HIPTnT+, respectively. All the programs that none of the tools could prove terminating belong to the crafted-lit verification task.
IV
Liveness
A Hierarchy of Temporal Properties
In this chapter, we generalize the abstract interpretation framework proposed for termination by Patrick Cousot and Radhia Cousot [START_REF] Cousot | An Abstract Interpretation Framework for Termination[END_REF] and presented in Chapter 4, to other liveness properties. In particular, with reference to the hierarchy of temporal properties proposed by Zohar Manna and Amir Pnueli [MP90], we focus on guarantee ("something good occurs at least once") and recurrence ("something good occurs infinitely often") temporal properties. Specifically, static analyses of guarantee and recurrence temporal properties are systematically derived by abstract interpretation of the program maximal trace semantics presented in Chapter 2 and Chapter 3. These methods automatically infer sufficient preconditions for the temporal properties by leveraging the abstract domains based on piecewise-defined ranking functions presented in Chapter 5 and Chapter 6. We augment these abstract domains with new operators including a dual widening.
Finally, we describe the implementation of the static analysis methods for guarantee and recurrence temporal properties into our prototype static analyzer FuncTion described in Chapter 8.
A Hierarchy of Temporal Properties
Leslie Lamport, in the late 1970s, suggested a classification of program properties into the classes of safety and liveness properties [START_REF] Lamport | Proving the Correctness of Multiprocess Programs[END_REF]. The class of safety properties is informally characterized as the class of properties stating that "something bad never happens", that is, a program never reaches an unacceptable state. The class of liveness properties is informally characterized as the class of properties stating that "something good eventually happens", that is, a program eventually reaches a desirable state (cf. Section 1.2).
Zohar Manna and Amir Pnueli, in the late 1980s, suggested an alternative classification of program properties into a hierarchy [START_REF] Manna | A Hierarchy of Temporal Properties[END_REF], which distinguishes four basic classes making different claims about the frequency or occurrence of "something good" mentioned in the informal characterizations of the classes proposed by Leslie Lamport: safety properties, guarantee properties, recurrence properties, and persistence properties.
In the following, we only consider program properties expressible by temporal logic. To this end, we assume an underlying specification language, which is used to describe properties of program states.
For instance, for the small imperative language presented in Chapter 3, we define inductively the syntax of the state properties as follows:
' ::= bexp | l : bexp | ' ^' | ' _ ' l 2 L (9.1.1)
where bexp is a boolean expression (cf. Chapter 3). Consider, for instance, a program made of two while loops over a single variable x, whose bodies are built from the assignments x := x + 1 and 6 x := −x, and whose final control point is 7.
The first loop is an infinite loop for the values of the variable x greater than or equal to zero: at each iteration the value of x is increased by one. The second loop is an infinite loop for any value of the variable x: at each iteration, the value of x is increased by one or negated when it becomes greater than ten.
The set of program environments E contains functions ⇢ : {x} ! Z mapping the program variable x to any possible value ⇢(x) 2 Z. The set of program states ⌃ def = {1, 2, 3, 4, 5, 6, 7} ⇥ E consists of all pairs of numerical labels and environments; the initial states are
I def = {h1, ⇢i | ⇢ 2 E}.
An example of state property allowed by the specification language defined in Equation 9.1.1 is the property x = 3. The set of states that satisfy this property is {1, 2, 3, 4, 5, 6, 7} ⇥ {hx, 3i}. Note however, that the states h6, {hx, 3i}i and h7, {hx, 3i}i are not reachable from the initial states.
Another example of a state property allowed by the specification language is 7 : x = 3, which is satisfied only by the unreachable state h7, {hx, 3i}i.
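The grammar in Equation 9.1.1 translates directly into a small OCaml datatype; the label and bexp types below are placeholders for the control points and boolean expressions of the idealized language, not definitions taken from the analyzer.

```ocaml
type label = int            (* program control points, assumed numeric *)
type bexp = string          (* placeholder for boolean expressions *)

(* State properties (cf. Equation 9.1.1): a boolean expression, a boolean
   expression attached to a specific control point, or their conjunctions
   and disjunctions. *)
type state_prop =
  | Bexp of bexp
  | At of label * bexp
  | And of state_prop * state_prop
  | Or of state_prop * state_prop

(* Example: the property 7 : x = 3 mentioned in the text. *)
let example : state_prop = At (7, "x = 3")
```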
The program properties within the hierarchy are then defined by means of the temporal operators always 2 and eventually 3.
Safety Properties
The class of safety properties is informally characterized as the class of properties stating that "something good always happens", that is, a program never reaches an unacceptable state. The safety properties that we consider are expressible by a temporal formula of the following form:
2'
where ' is a state property. The temporal formula expresses that all program states in every program trace satisfy the state property '.
In general, these safety properties are used to express invariance of some program state property over all computations.
A typical safety property is program partial correctness, which guarantees that all terminating computations produce correct results, and which is expressible by the following temporal formula:

2(l e : ψ)

where l e 2 L denotes the program final control point and the formula ψ specifies the postcondition of the program.
Another typical safety property is mutual exclusion, which guarantees that no two concurrent processes can enter their critical section at the same time, and which is expressible by the following temporal formula:
2(l 1 : false _ l 2 : false)
where l 1 2 L and l 2 2 L denote the program control points representing the critical section of the first and second process, respectively. The class of safety properties that we consider is closed under conjunction.
In fact, any temporal formula of the form 2' 1 ^2' 2 is equivalent to the safety property formula 2(' 1 ^'2 ).
Guarantee Properties
The class of guarantee properties is informally characterized as the class of properties stating that "something good happens at least once", that is, a program eventually reaches a desirable state. The guarantee properties that we consider are expressible by a temporal formula of the following form:
3'
where ' is a state property. The temporal formula expresses that at least one program state in every program trace satisfies the property ', but it does not promise any repetition.
In general, these guarantee properties are used to ensure that some event happens once during a program execution.
A typical guarantee property is program termination, which ensures that all computations are finite, expressible by the following temporal formula:
3(l e : true)
where l e 2 L denotes the program final control point. Another typical guarantee property is program total correctness, which ensures that all computations starting in a '-state terminate in a ψ-state.

An example of guarantee property is the formula 3(x = 3), which is satisfied when the program initial states are limited to the set {h1, ⇢i 2 ⌃ | ⇢(x) ≤ 3}. In particular, note that when the program initial states are limited to {h1, ⇢i 2 ⌃ | 0 ≤ ⇢(x) ≤ 3}, the guarantee property is satisfied within the first while loop. Instead, when the program initial states are limited to {h1, ⇢i 2 ⌃ | ⇢(x) < 0}, the guarantee property is satisfied within the second while loop.
Another example of guarantee property is the formula 3(3 ≤ x), which is always satisfied by the program whatever its initial states.
The class of guarantee properties that we consider is closed under disjunction. In fact, any temporal formula of the form 3' 1 _ 3' 2 is equivalent to the guarantee property formula 3(' 1 _ ' 2 ).
The classes of safety and guarantee properties that we consider are not closed under negation. On the other hand, the negation of a safety property formula 2' is a guarantee property formula 3¬'. Similarly, the negation of a guarantee property formula 3' is a safety property formula 2¬'.
Obligation Properties
The program properties that cannot be expressed by either a safety or a guarantee property formula belong to the compound class of obligation properties, which contains program properties expressible as a boolean combination of a safety property formula and guarantee property formula. The obligation properties that we consider are represented by a temporal formula of the form:
2' _ 3ψ
where ' and ψ are state properties.
Recurrence Properties
The class of recurrence properties is informally characterized as the class of properties stating that "something good happens infinitely often", that is, a program reaches a desirable state infinitely often. The recurrence properties that we consider are expressible by a temporal formula of the following form:
23'
where ' is a state property. The temporal formula expresses that infinitely many program states in every program trace satisfy the property '.
In general, these recurrence properties are used to ensure that some event happens infinitely many times during a program execution.
A typical recurrence property is program starvation freedom, which ensures that a process will eventually enter its critical section. The recurrence property represented by the formula 23(x = 3) is satisfied when the program initial states are limited to the set {h1, ⇢i 2 ⌃ | ⇢(x) < 0}. In particular, note that the recurrence property is satisfied only within the second while loop. Instead, the recurrence property 23(3 ≤ x) is always satisfied by the program whatever its initial states.
The class of recurrence properties that we consider is closed under disjunction. In fact, any temporal formula of the form 23' 1 _ 23' 2 is equivalent to the recurrence property formula 23(' 1 _ ' 2 ).
Persistence Properties
The class of persistence properties is informally characterized as the class of properties stating that "something good eventually happens continuously", that is, a program eventually reaches a desirable state and continues to stay in a desirable state. The persistence properties that we consider are expressible by a temporal formula of the following form:
32'
where ' is a state property. The temporal formula expresses that all but finitely many program states (and, in particular, all program states from a certain point on) in every program trace satisfy the property '.
In general, these persistence properties are used to ensure the eventual stabilization of the program into a state. They allow an arbitrary delay until the stabilization, but require that once it occurs it is continuously maintained. An example of persistence property is the formula 32(x = 3), which is never satisfied by the program. Instead, the persistence property represented by the formula 32(3 ≤ x) is satisfied when the program initial states are limited to the set {h1, ⇢i 2 ⌃ | 0 ≤ ⇢(x)}. In particular, note that the persistence property is satisfied only within the first while loop.
The class of persistence properties that we consider is closed under conjunction. In fact, any temporal formula of the form 32' 1 ^32' 2 is equivalent to the persistence property formula 32(' 1 ^' 2 ).
The classes of recurrence and persistence properties that we consider are not closed under negation but, analogously to the classes of safety and guarantee properties, the negation of a formula in one of the classes belongs to the other. The negation of a recurrence property formula 23' is a persistence property formula 32¬'. Similarly, the negation of a persistence property formula 32' is a recurrence property formula 23¬'.
Reactivity Properties
The program properties that cannot be expressed by either a recurrence or a persistence property formula belong to the compound class of reactivity properties, which contains program properties expressible as a boolean combination of a recurrence property formula and a persistence property formula. The reactivity properties that we consider are represented by a temporal formula of the following form:
23' _ 32ψ
where ' and ψ are state properties.
Guarantee Semantics
In the following, we generalize Section 4.2 and Section 4.3 from definite termination to guarantee properties. We define a sound and complete semantics for proving guarantee temporal properties by abstract interpretation of the program maximal trace semantics (cf. Equation 2.2.5). The generalization is straightforward but provides a building block for the next Section 9.3. Then, we propose a sound decidable abstraction based on piecewise-defined ranking functions (cf. Chapter 5 and Chapter 6).
Guarantee Semantics
The guarantee semantics, given a set of desirable states S ✓ ⌃, is a ranking function ⌧ g [S] 2 ⌃ * O defined starting from the states in S, where the function has value zero, and retracing the program backwards while mapping every state in ⌃ definitely leading to a state in S (i.e., a state such that all the traces to which it belongs eventually reach a state in S) to an ordinal in O representing an upper bound on the number of program execution steps remaining to S. The domain dom(⌧ g [S]) of ⌧ g [S] is the set of states definitely leading to a desirable state in S: all traces branching from a state s 2 dom(⌧ g [S]) reach a state in S in at most ⌧ g [S]s execution steps, while at least one trace branching from a state s 6 2 dom(⌧ g [S]) never reaches S.
Note that the program traces that satisfy a guarantee property can also be infinite traces. In particular, guarantee properties are satisfied by finite subsequences of possibly infinite traces. Thus, in order to reason about subsequences, we define the function sq : P(⌃ +1 ) ! P(⌃ + ), which extracts the finite subsequences of a set of sequences T ✓ ⌃ +1 :

sq(T ) def = {σ 2 ⌃ + | 9 σ' 2 ⌃ ⇤ , σ'' 2 ⌃ ⇤1 : σ'σσ'' 2 T }    (9.2.1)
We recall that the neighborhood of a sequence 2 ⌃ +1 in a set of sequences , d ! }. Then, we have ↵ g [S]T = {c, bc}. In fact, let us consider the trace (abcd) ! : the subsequences of (abcd) ! that are terminating with c and never encounter c before are {c, bc, abc, dabc}; for abc, we have pf(ab) \ pf(a ! ) = {a} 6 = ; and, for dabc, we have pf(dab) \ pf(d ! ) = {d} 6 = ;. Similarly, let us consider (cd) ! : the subsequences of (cd) ! that are terminating with c and never encounter c before are {c, dc}; for dc, we have pf(d) \ pf(d ! ) = {d} 6 = ;.
We can now define the guarantee semantics ⌧ g [S] 2 ⌃ * O:

Definition 9.2.2 (Guarantee Semantics) Given a desirable set of states S ✓ ⌃, the guarantee semantics ⌧ g [S] 2 ⌃ * O is an abstract interpretation of the maximal trace semantics ⌧ +1 2 P(⌃ +1 ) (cf. Equation 2.2.5):
⌧ g [S] def = ↵ Mrk (↵ g [S](⌧ +1 )) (9.2.4)
where the abstraction ↵ Mrk : P(⌃ + ) ! (⌃ * O) is the same ranking abstraction defined in Equation 4.2.14.
The following result provides a fixpoint definition of the guarantee semantics within the partially ordered set h⌃ * O, vi, where the computational order is defined as in Equation 4.2.15 as:
f 1 v f 2 () dom(f 1 ) ✓ dom(f 2 ) ^8x 2 dom(f 1 ) : f 1 (x) f 2 (x).
Guarantee Semantics
Theorem 9.2.3 (Guarantee Semantics) Given a desirable set of states S ✓ ⌃, the guarantee semantics ⌧ g [S] 2 ⌃ * O can be expressed as a least fixpoint in the partially ordered set h⌃ * O, vi:

⌧ g [S] = lfp v ; φ g [S]

φ g [S]f def = λs.
  0                                         s 2 S
  sup {f (s') + 1 | hs, s'i 2 ⌧ }           s 6 2 S ^ s 2 f pre(dom(f ))
  undefined                                 otherwise        (9.2.5)

Note that, when the set of desirable states S is the set of final states ⌦, unsurprisingly we rediscover the definite termination semantics presented in Section 4.2, since φ g [⌦] = φ Mt (cf. Equation 4.2.16).
Let ' be a state property. The '-guarantee semantics ⌧ ' g 2 ⌃ * O:
⌧ ' g def = ⌧ g ['] (9.2.6)
is sound and complete for proving a guarantee property 3'.

The guarantee semantics of a skip instruction resets the value of the input ranking function f : E * O for the environments that satisfy ', and otherwise it increases its value to take into account another program execution step to reach ' from the environments at the initial label of the instruction:

⌧ ' g J l skip Kf def = λ⇢.
  0                 hl, ⇢i |= '
  f (⇢) + 1         hl, ⇢i 6 |= ' ^ ⇢ 2 dom(f )
  undefined         otherwise        (9.2.7)
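As an illustration of Equation 9.2.7, the following OCaml sketch implements the guarantee transfer function of skip over the finite-map representation of ranking functions used earlier; the predicate sat_phi, standing for hl, ⇢i |= ', and the finite list all_envs of environments are simplifying assumptions of this sketch.

```ocaml
(* Guarantee semantics of skip (cf. Equation 9.2.7): value 0 where the state
   property holds, f(rho) + 1 where f is defined, undefined elsewhere. *)
let guarantee_skip ~(sat_phi : string -> bool) ~(all_envs : string list)
    (f : ranking) : ranking =
  List.fold_left
    (fun acc rho ->
       if sat_phi rho then EnvMap.add rho 0 acc
       else
         match EnvMap.find_opt rho f with
         | Some b -> EnvMap.add rho (b + 1) acc
         | None -> acc)
    EnvMap.empty all_envs
```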
Similarly, the guarantee semantics of a variable assignment l X := aexp resets the value of the input ranking function f : E * O for the environments that satisfy '; otherwise, the resulting ranking function is defined over the environments that, when subject to the variable assignment, always belong to the domain of the input ranking function. The value of the input ranking function for these environments is increased by one, to take into account another execution step, and the value of the resulting ranking function is the least upper bound of these values:
⌧ ' g J l X := aexp Kf def = λ⇢.
  0                                          hl, ⇢i |= '
  sup{f (⇢[X v]) + 1 | v 2 JaexpK⇢}          hl, ⇢i 6 |= ' ^ 9v 2 JaexpK⇢ ^ 8v 2 JaexpK⇢ : ⇢[X v] 2 dom(f )
  undefined                                  otherwise        (9.2.8)
Note that all environments yielding a run-time error due to a division by zero do not belong to the domain of the termination semantics of the assignment.
Example 9.2.6 Let X def = {x}. We consider the following ranking function f : E * O:
f (⇢) def =
  2            ⇢(x) = 1
  3            ⇢(x) = 2
  undefined    otherwise
the backward assignment x := x + [1, 2] and the guarantee property 3(x = 3).
The guarantee semantics of the assignment, given the ranking function, is:
⌧ x=3 g J x := x + [1, 2] Kf (⇢) def =
  4            ⇢(x) = 0
  0            ⇢(x) = 3
  undefined    otherwise
In particular, note that unlike Example 4.3.1, the function is also defined when ⇢(x) = 3, since the environment satisfies the property x = 3. Given a conditional instruction if l bexp then stmt 1 else stmt 2 fi, its guarantee semantics takes as input a ranking function f : E * O and derives the guarantee semantics ⌧ ' g J stmt 1 Kf of stmt 1 , in the following denoted by S 1 , and the guarantee semantics ⌧ ' g J stmt 2 Kf of stmt 2 , in the following denoted by S 2 . Then, the guarantee semantics of the conditional instruction is defined by means of the ranking function F [f ] : E * O whose domain is the set of environments that belong to the domain of S 1 and to the domain of S 2 , and that may both satisfy and not satisfy the boolean expression bexp:
F [f ] def = ⇢ 2 dom(S 1 ) \ dom(S 2 ). 8 > < > : sup{S 1 (⇢) + 1, S 2 (⇢) + 1} JbexpK⇢ = {true, false} undefined otherwise
and the ranking function F 1 [f ] : E * O whose domain is the set of environments ⇢ 2 E that belong to the domain of S 1 and that must satisfy bexp:
F 1 [f ] def = ⇢ 2 dom(S 1 ).
( S 1 (⇢) + 1 JbexpK⇢ = {true} undefined otherwise and the ranking function F 2 [f ] : E * O whose domain is the set of environments that belong to the domain of S 2 and that cannot satisfy bexp:
F 2 [f ] def = ⇢ 2 dom(S 2 ). ( S 2 (⇢) + 1 JbexpK⇢ = {false} undefined otherwise
The resulting ranking function is defined by joining
F [f ], F 1 [f ], and F 2 [f ],
and resetting the value of the function for the environments that satisfy ':
⌧ ' g J if l bexp then stmt 1 else stmt 2 fi Kf def = ⇢. 8 > > > > < > > > > : 0 hl, ⇢i |= ' G(⇢) hl, ⇢i 6 |= ' ⇢ 2 dom(G) undefined otherwise (9.2.9) where G def = F [f ] [ F 1 [f ] [ F 2 [f ].
Example 9.2.7 Let X def = {x}. We consider the guarantee property 3(x = 3) and the guaran- tee semantics of the conditional statement if bexp then stmt 1 else stmt 2 fi. We assume, given a ranking function f : E * O, that the guarantee semantics of stmt 1 is defined as:
⌧ x=3 g J stmt 1 Kf (⇢) def = 8 > < > : 1 ⇢(x) 0 0 ⇢(x) = 3 undefined otherwise
and that the guarantee semantics of stmt 2 is defined as
⌧ x=3 g J stmt 2 Kf (⇢) def = 8 > > > > < > > > > : 3 0 ⇢(x) < 3 0 ⇢(x) = 3 3 3< ⇢(x) undefined otherwise
Then, when the boolean expression bexp is for example x 3, the guarantee semantics of the conditional statement is:
⌧ x=3 g J if l bexp then stmt 1 else stmt 2 fi Kf (⇢) def = 8 > > > > < > > > > : 2 ⇢(x) 0 0 ⇢(x) = 3 4 3< ⇢(x) undefined otherwise
Instead, when bexp is for example the non-deterministic choice ?, we have:
⌧ x=3 g J if l bexp then stmt 1 else stmt 2 fi Kf (⇢) def = 8 > < > : 4 ⇢(x) = 0 0 ⇢(x) = 3 undefined otherwise
Note that, unlike Example 4.3.2, both functions are also defined when ⇢(x) = 3, since the environment satisfies the property x = 3.
The guarantee semantics of a loop instruction while l bexp do stmt od takes as input a ranking function f : E * O whose domain represents the environments leading to ' from the final label of the instruction, and outputs the ranking function which is defined as the least fixpoint of the function
⌧ ' g J while l bexp do stmt od Kf def = lfp v ; ' g (9.2.10)
where the computational order is defined as in Equation 4.2.15:
f 1 v f 2 () dom(f 1 ) ✓ dom(f 2 ) ^8x 2 dom(f 1 ) : f 1 (x) f 2 (x).
The function ' g : (E * O) ! (E * O) takes as input a ranking function
x : E * O, resets its value for the environments that satisfy ', and adds to its domain the environments for which one more loop iteration is needed before '. In the following, the guarantee semantics ⌧ ' g J stmt Kx of the loop body is denoted by S. The function ' g is defined by means of the ranking function F [x] : E * O whose domain is the set of environments that belong to the domain of S and to the domain of the input function f , and that may both satisfy and not satisfy the boolean expression bexp:
F [x] def = ⇢ 2 dom(S) \ dom(f ). 8 > < > : sup{S(⇢) + 1, f(⇢) + 1} JbexpK⇢ = {true, false} undefined otherwise
and the ranking function F 1 [x] : E * O whose domain is the set of environments ⇢ 2 E that belong to the domain of S and that must satisfy bexp:
F 1 [x] def = ⇢ 2 dom(S).
( S(⇢) + 1 JbexpK⇢ = {true} undefined otherwise and the ranking function F 2 [f ] : E * O whose domain is the set of environments that belong to the domain of the input function f and that cannot satisfy bexp:
F 2 [f ] def = ⇢ 2 dom(f ). ( f (⇢) + 1 JbexpK⇢ = {false} undefined otherwise
The resulting ranking function is defined by joining F [x], F 1 [x], and F 2 [f ], and resetting the value of the function for the environments that satisfy ':
' g (x) def = ⇢. 8 > < > : 0 hl, ⇢i |= ' G(⇢) hl, ⇢i 6 |= ' ^⇢ 2 dom(G) undefined otherwise (9.2.11)
where
G def = F [x] [ F 1 [x] [ F 2 [f ].
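Since the three pieces F [x], F 1 [x] and F 2 [f ] are defined on mutually exclusive sets of environments (bexp may be both true and false, must be true, or cannot be true, respectively), their union is a plain union of partial maps. Here is a small OCaml sketch over the earlier finite-map representation; the callback, which only fires on overlaps, conservatively keeps the larger bound.

```ocaml
(* Union of partial ranking functions with (by construction) disjoint domains;
   if an overlap ever occurred, keeping the larger bound stays on the safe side. *)
let union_ranking (g1 : ranking) (g2 : ranking) : ranking =
  EnvMap.union (fun _rho b1 b2 -> Some (max b1 b2)) g1 g2
```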
Finally, the guarantee semantics of the sequential combination of instructions stmt 1 stmt 2 , takes as input a ranking function f : E * O, determines from f the guarantee semantics ⌧ ' g J stmt 2 Kf of stmt 2 , and outputs the ranking function determined by the guarantee semantics of stmt 1 from ⌧ ' g J stmt 2 Kf :
⌧ ' g J stmt 1 stmt 2 Kf def = ⌧ ' g J stmt 1 K(⌧ ' g J stmt 2 Kf ) (9.2.12)
The guarantee semantics ⌧ ' g J prog K 2 E * O of a program prog is a ranking function whose domain represents the environments always leading to ', which is determined by taking as input the constant function equal to zero for the environments that satisfy ', and undefined otherwise:

Definition 9.2.8 (Guarantee Semantics) The guarantee semantics ⌧ ' g J prog K 2 E * O of a program prog is:

⌧ ' g J prog K = ⌧ ' g J stmt l K def = ⌧ ' g J stmt K (λ⇢. 0 if hl, ⇢i |= ', undefined otherwise)

Note that, as pointed out in Remark 3.2.2, possible run-time errors are ignored. Thus, all environments leading to run-time errors do not belong to the domain of the guarantee semantics of a program prog.
Abstract Guarantee Semantics
In the following, we propose a sound decidable abstraction of the guarantee semantics ⌧ ' g J prog K 2 E * O, which is based on the piecewise-defined ranking functions presented in Chapter 5 and Chapter 6. The abstraction is sound with respect to the same approximation order defined in Equation 4.2.12 as:
f 1 4 f 2 () dom(f 1 ) ◆ dom(f 2 ) ^8x 2 dom(f 2 ) : f 1 (x) f 2 (x).
In particular, we complement the operators of the decision trees abstract domain presented in Chapter 5 with a new unary operator RESET T J ' K, which resets the leaves of a decision tree that satisfy a given property '.
Reset. The operator RESET T J ' K : L ! D ! T ! T for the reset of decision trees is implemented by Algorithm 24: the function reset, given a program control point p 2 L, a sound over-approximation d 2 D of the reachable environments, and a decision tree t 2 T , reasons by induction on the structure of the property '. In particular, when ' specifies the property at a particular program control point l 2 L (cf. Line 55), the decision tree is reset only if l coincides with p (cf. Line 57). Instead, when ' is a conjunction of two properties ' 1 and ' 2 (cf. Line 58), or a conjunction of two boolean expressions bexp 1 and bexp 2 (cf. Line 44), the resulting decision trees are merged by the function MEET defined in Algorithm 8 (cf. Line 59 and Line 47). Similarly, when ' is a disjunction of two properties ' 1 and ' 2 (cf. Line 60), or a disjunction of two boolean expressions bexp 1 and bexp 2 (cf. Line 48), the resulting decision trees are merged by the approximation join A-JOIN defined in Algorithm 6 (cf. Line 61 and Line 51). Finally, when ' is a comparison of arithmetic expressions aexp 1 ./ aexp 2 (cf. Line 52), a set of constraints J is produced by the operator FILTER C defined in Equation 5.2.23 (cf. Line 53), which is used by the auxiliary function reset-aux (cf. Line 54).
The function reset-aux augments a decision tree with a given set of linear constraints. Then, only the subtrees whose paths from the root of the decision tree satisfy these constraints are reset. In particular, the function reset-aux takes as input a decision tree t 2 T , a set C 2 P(C) of linear constraints representing an over-approximation of the reachable environments, and a set J 2 P(C) of linear constraints that need to be added to the decision tree. When J is not empty (cf. Line 12), the linear constraints J are added to the decision tree in descending order with respect to < C : at each iteration a linear constraint j 2 C is extracted from J (cf. Line 13), which is the largest constraint in J with respect to the constraints in canonical form. Then, the function reset-aux possibly adds a decision node for the linear constraint j
⌧ '\ g J l skip Kt def = RESET T J ' K(l, R, STEP T (t))
⌧ '\ g J l X := aexp Kt def = RESET T J ' K(l, R, (B-ASSIGN T J X := aexp KR)(t))
⌧ '\ g J if l bexp then stmt 1 else stmt 2 fi Kt def = RESET T J ' K(l, R, F \ 1 [t] g T [R] F \ 2 [t])
    F \ 1 [t] def = (FILTER T J bexp KR)(⌧ '\ g J stmt 1 Kt)
    F \ 2 [t] def = (FILTER T J not bexp KR)(⌧ '\ g J stmt 2 Kt)
⌧ '\ g J while l bexp do stmt od Kt def = lfp \ φ '\ g
    φ '\ g (x) def = RESET T J ' K(l, R, F \ [x] g T [R] (FILTER T J not bexp KR)(t))
    F \ [x] def = (FILTER T J bexp KR)(⌧ '\ g J stmt Kx)
⌧ '\ g J stmt 1 stmt 2 Kt def = ⌧ '\ g J stmt 1 K(⌧ '\ g J stmt 2 Kt)
Figure 9.1: Abstract guarantee semantics of instructions stmt.
or leaves the decision subtree unmodified when the encountered linear constraints coincide with those appearing in J (cf. Lines 29-32) or with their negations (cf. Lines 33-36). When J is empty (cf. Line 2), reset-aux accumulates in C the linear constraints encountered along the paths (cf. Lines 9-10) up to a leaf node, possibly removing constraints that are redundant (cf. Line 5) or whose negation is redundant (cf. Line 6) with respect to C. Once a leaf node is reached, the operator RESET F J ' K : F ! F is invoked (cf. Line 3): given a function f 2 F, it simply resets its value to zero:

RESET F (f ) def = λX 1 , . . . , X k . 0    (9.2.14)
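A minimal OCaml sketch of the reset, reusing the illustrative leaf and tree types introduced earlier: every leaf of the selected subtrees becomes the constant function 0, as in Equation 9.2.14. The selection of the subtrees whose path constraints satisfy the property (Algorithm 24 proper) is omitted here, and the single-variable constant leaf is an assumption of the sketch.

```ocaml
(* Reset of a leaf (cf. Equation 9.2.14): the value becomes the constant
   function 0, recording that the desirable property holds right here. *)
let reset_leaf (_ : leaf) : leaf = Affine ([| 0 |], 0)   (* lambda X. 0, one variable assumed *)

(* Reset of a whole (sub)tree: RESET_T would apply this only to the subtrees
   whose path constraints imply the state property; that filtering is omitted. *)
let rec reset_tree = function
  | Leaf f -> Leaf (reset_leaf f)
  | Node (c, l, r) -> Node (c, reset_tree l, reset_tree r)
```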
Abstract Guarantee Semantics. The operators of the decision trees abstract domain can now be used to define the abstract guarantee semantics.
In the following, as in Section 5.3 we assume to have, for each program control point l 2 L, a sound numerical over-approximation R 2 D of the reachable environments ⌧ I (l) 2 P(E): ⌧ I (l) ✓ D (R) (cf. Section 3.4).
In Figure 9.1 we define the semantics ⌧ '\ g J stmt K : T ! T , for each program instruction stmt. Each function ⌧ '\ g J stmt K : T ! T takes as input a decision tree over-approximating the ranking function corresponding to the final control point of the instruction, and outputs a decision tree defined over a subset of the reachable environments R 2 D, which over-approximates the ranking function corresponding to the initial control point of the instruction. Note that each function ⌧ '\ g J stmt K invokes the reset operator RESET T . For a while loop, lfp \ '\ g is the limit of the iteration sequence with widening:
y 0 def = ? T
y n+1 def =
  y n                            φ '\ g (y n ) v T [R] y n ^ φ '\ g (y n ) 4 T [R] y n
  y n O T φ '\ g (y n )           otherwise        (9.2.15)
In absence of run-time errors, the abstract semantics ⌧ '\ g J stmt K, given sound over-approximations R 2 D of ⌧ I (iJ stmt K) and D 2 D of ⌧ I (f J stmt K), is a sound over-approximation of the semantics ⌧ ' g J stmt K defined in Section 9.2.2:
Lemma 9.2.9 ⌧ ' g J stmt K T [D]t 4 T [R]⌧ '\ g J stmt Kt.
Proof.
See Appendix A.6.
⌅
The abstract guarantee semantics ⌧ '\ g J prog K 2 T of a program prog outputs the decision tree over-approximating the ranking function corresponding to the initial program control point iJ prog K 2 L. It is defined by taking as input the leaf node LEAF : ? F as: Definition 9.2.10 (Abstract Guarantee Semantics) The abstract guarantee semantics ⌧ '\ g J prog K 2 T of a program prog is:
⌧ '\ g J prog K = ⌧ '\ g J stmt l K def = ⌧ '\ g J stmt KRESET T J ' K(l, R, LEAF : ? F ) (9.2.16
) where the abstract guarantee semantics ⌧ '\ g J stmt K : T ! T of each program instruction stmt is defined in Figure 9.1.
In absence of run-time errors, the following result proves the soundness of the abstract guarantee semantics ⌧ '\ g J prog K 2 T with respect to the guarantee semantics ⌧ ' g J prog K 2 E * O, given a sound numerical over-approximation R 2 D of the reachable environments ⌧ I (iJ prog K):
Theorem 9.2.11 ⌧ ' g J prog K 4 T [R]⌧ '\ g J prog K Proof (Sketch).
The proof follows from the soundness of the operators of the decision trees abstract domain (cf. Lemma 9.2.9) used for the definition of ⌧ '\ g J prog K 2 T .⌅
In particular, the abstract guarantee semantics provides sufficient preconditions for ensuring a guarantee property 3' for a given over-approximation R 2 D of the set of initial states I ✓ ⌃:
Corollary 9.2.12 A program satisfies a guarantee property 3' for execution traces starting from a set of states
D (R) if D (R) ✓ dom( T [R]⌧ '\ g J prog K).
Recurrence Semantics
In the following, following the same approach used in the previous Section 9.2 for guarantee properties, we define a sound and complete semantics for proving recurrence temporal properties by abstract interpretation of the program maximal trace semantics (cf. Equation 2.2.5). Then, we propose a sound decidable abstraction based on piecewise-defined ranking functions.
Recurrence Semantics
In particular, the recurrence semantics reuses the guarantee semantics of Section 9.2 as a building block: from the guarantee that some desirable event happens once during program execution, the recurrence semantics ensures that the event happens infinitely often. A finite subsequence of a program trace satisfies a recurrence property if and only if it terminates in the desirable set of states, and its neighborhood in the subsequences of the program semantics consists only of sequences that are terminating in the desirable set of states, and that are prefixes of traces in the program semantics that reach infinitely often the desirable set of states. The corresponding recurrence abstraction ↵ r [S] : P(⌃ +1 ) ! P(⌃ + ) is parameterized by a set of desirable states S ✓ ⌃ and it is defined as follows:
↵ r [S]T def = gfp ✓ ↵ g [S]T φ r [T, S]        φ r [T, S]T' def = ↵ g [ f pre[T ]T' ∩ S]T    (9.3.1)

where f pre[T ]T' def = {s 2 ⌃ | 8σ 2 ⌃ ⇤ , σ' 2 ⌃ ⇤1 : σsσ' 2 T ⟹ pf(σ') ∩ T' 6 = ;} is the set of states whose successors all belong to a given set of subsequences, and ↵ g [S] : P(⌃ +1 ) ! P(⌃ + ) is the guarantee abstraction of Equation 9.2.2.
To explain intuitively Equation 9.3.1, we use the dual of Kleene's Fixpoint Theorem (cf. Theorem 2.1.4) to rephrase ↵ r [S] as follows:
↵ r [S]T = ⋂ i2N T i+1        T i+1 def = [φ r [T, S]] i (↵ g [S]T )
Then, for i = 0, we get the set T 1 = ↵ g [S]T of subsequences of T that guarantee S at least once. For i = 1, starting from T 1 , we derive the set of states S 1 = f pre[T ]T 1 \ S (i.e., S 1 ✓ S) whose successors all belong to the subsequences in T 1 , and we get the set T 2 = ↵ g [S 1 ]T of subsequences of T that guarantee S 1 at least once and thus guarantee S at least twice. Note that all the subsequences in T 2 terminate with a state s 2 S 1 and therefore are prefixes of subsequence of T that reach S at least twice. More generally, for each i 2 N, we get the set T i+1 of subsequences which are prefixes of subsequences of T that reach S at least i + 1 times, i.e., the subsequences that guarantee S at least i + 1 times. The greatest fixpoint thus guarantees S infinitely often.
⌧ r [S] def = ↵ Mrk (↵ r [S](⌧ +1 ))    (9.3.2)
where the abstraction α_Mrk : P(Σ+) → (Σ ⇀ O) is the same ranking abstraction defined in Equation 4.2.14.

The following result provides a fixpoint definition of the recurrence semantics within the partially ordered set ⟨Σ ⇀ O, ⊑⟩, where the computational order is defined as in Equation 4.2.15:

f₁ ⊑ f₂ ⟺ dom(f₁) ⊆ dom(f₂) ∧ ∀x ∈ dom(f₁) : f₁(x) ≤ f₂(x).

Theorem 9.3.3 (Recurrence Semantics)  Given a desirable set of states S ⊆ Σ, the recurrence semantics τ_r[S] ∈ Σ ⇀ O can be expressed as a greatest fixpoint in the partially ordered set ⟨Σ ⇀ O, ⊑⟩:

τ_r[S] = gfp^⊑_{τ_g[S]} φ_r[S]
φ_r[S]f ≝ λs.
    f(s)        if s ∈ dom(τ_g[ p̃re(dom(f)) ∩ S ])
    undefined   otherwise                                   (9.3.3)
Note that the recurrence semantics can be equivalently simplified: there is no need to recompute the guarantee semantics τ_g[S] at each iterate, since the various subsequences of program traces manipulated by the recurrence abstraction (cf. Equation 9.3.1) have been abstracted into program states.
Proof.
See Appendix A.6. ⌅
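The following OCaml sketch illustrates the fixpoint characterization above on an explicit finite transition system; it is an illustration only, not the symbolic analysis developed later in this chapter, and the transition system and state names are invented for the example.

(* A self-contained sketch of the recurrence fixpoint of Theorem 9.3.3 on an
   explicit finite transition system.  [succ] lists the successors of each state. *)
module IS = Set.Make (Int)

let states = [0; 1; 2; 3]
let succ = function
  | 0 -> [1]        (* 0 -> 1 -> 0 -> ... : visits 1 infinitely often *)
  | 1 -> [0]
  | 2 -> [1; 3]     (* 2 may go to 1 (good) or escape to 3 *)
  | 3 -> [3]        (* 3 loops outside the desirable states *)
  | _ -> []
let s_good = IS.of_list [1]                       (* desirable states S *)

(* States from which every path inevitably reaches [target]
   (domain of the guarantee semantics), as a least fixpoint. *)
let must_reach target =
  let step x =
    IS.union target
      (IS.of_list (List.filter (fun s ->
         succ s <> [] && List.for_all (fun s' -> IS.mem s' x) (succ s)) states))
  in
  let rec lfp x = let x' = step x in if IS.equal x x' then x else lfp x' in
  lfp target

(* p~re: states whose successors all belong to [d]. *)
let pre_tilde d =
  IS.of_list (List.filter (fun s ->
    succ s <> [] && List.for_all (fun s' -> IS.mem s' d) (succ s)) states)

(* Recurrence domain: greatest fixpoint below the guarantee domain. *)
let recurrence_dom () =
  let rec gfp d =
    let d' = IS.inter d (must_reach (IS.inter (pre_tilde d) s_good)) in
    if IS.equal d d' then d else gfp d'
  in
  gfp (must_reach s_good)

let () =
  (* 0 and 1 satisfy []<> (state = 1); 2 and 3 do not. *)
  assert (IS.elements (recurrence_dom ()) = [0; 1])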
Denotational Recurrence Semantics
We now provide a structural definition of the φ-recurrence semantics τ_r^φ ∈ Σ ⇀ O by induction on the syntax of our idealized imperative language. We partition τ_r^φ with respect to the program control points:

τ_r^φ ∈ L → (E ⇀ O)

In this way, to each program control point l ∈ L corresponds a partial function f : E ⇀ O, and to each program instruction stmt corresponds a recurrence semantics τ_r^φ⟦stmt⟧ : (E ⇀ O) → (E ⇀ O). The recurrence semantics τ_r^φ⟦stmt⟧ of each program instruction stmt takes as input a ranking function whose domain represents the environments always leading infinitely often to φ from the final label of stmt, and determines the ranking function whose domain represents the environments always leading infinitely often to φ from the initial label of stmt, and whose value represents an upper bound on the number of program execution steps remaining to the next occurrence of φ. In particular, each function τ_r^φ⟦stmt⟧ behaves as described in Section 9.2.2 and also ensures that each time φ is satisfied, it will be satisfied again in the future: the value of the input ranking function is reset for the environments that satisfy φ only if all their successors by means of the instruction stmt belong to the domain of the input ranking function.
The recurrence semantics of a skip instruction resets the value of the input ranking function f : E ⇀ O for the environments that belong to its domain and satisfy φ, and otherwise it increases its value to take into account another program execution step from the initial label of the instruction:

τ_r^φ⟦l skip⟧f ≝ λρ.
    0           if ⟨l, ρ⟩ ⊨ φ ∧ ρ ∈ dom(f)
    f(ρ) + 1    if ⟨l, ρ⟩ ⊭ φ ∧ ρ ∈ dom(f)
    undefined   otherwise                                   (9.3.6)

Similarly, the recurrence semantics of a variable assignment l X := aexp is defined over the environments that, when subject to the assignment, always belong to the domain of the input ranking function f : E ⇀ O. The value of the input ranking function for these environments is reset when they satisfy φ; otherwise, it is increased by one to account for another execution step, and the value of the resulting ranking function is the least upper bound of these values:

τ_r^φ⟦l X := aexp⟧f ≝ λρ.
    0                                          if ⟨l, ρ⟩ ⊨ φ ∧ ∃v ∈ ⟦aexp⟧ρ ∧ ∀v ∈ ⟦aexp⟧ρ : ρ[X ↦ v] ∈ dom(f)
    sup{f(ρ[X ↦ v]) + 1 | v ∈ ⟦aexp⟧ρ}         if ⟨l, ρ⟩ ⊭ φ ∧ ∃v ∈ ⟦aexp⟧ρ ∧ ∀v ∈ ⟦aexp⟧ρ : ρ[X ↦ v] ∈ dom(f)
    undefined                                  otherwise    (9.3.7)
Example 9.3.6  Let X ≝ {x}. We consider the ranking function f : E ⇀ O of Example 9.2.6, the variable assignment x := x + [1, 2], and the recurrence property □◇(x = 3). The recurrence semantics of the assignment is:

τ_r^{x=3}⟦x := x + [1, 2]⟧f(ρ) ≝
    4           if ρ(x) = 0
    undefined   otherwise

In particular, note that, unlike Example 9.2.6, the function is not defined when ρ(x) = 3: the environment satisfies the property x = 3, but ⟦x + [1, 2]⟧ρ = {4, 5} and ρ[x ↦ 4] ∉ dom(f) and ρ[x ↦ 5] ∉ dom(f).
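The skip and assignment transfer functions above are straightforward to prototype. The OCaml sketch below does so over ranking functions represented as int -> int option (None meaning undefined); the input ranking function used in the demo is a hypothetical stand-in chosen to reproduce the shape of Example 9.3.6, not the exact function of Example 9.2.6.

(* Sketch of Equations (9.3.6) and (9.3.7); all names are illustrative. *)
type rank = int -> int option

let sup_opt xs =                      (* sup of a non-empty list of defined values *)
  List.fold_left (fun acc x -> match acc, x with
    | Some a, Some b -> Some (max a b) | _ -> None) (Some min_int) xs

(* skip: reset to 0 where the property holds, otherwise add one step. *)
let skip_tf ~(prop : int -> bool) (f : rank) : rank =
  fun x -> match f x with
    | Some v -> Some (if prop x then 0 else v + 1)
    | None -> None

(* x := x + [1,2]: defined only if every possible successor is in dom(f). *)
let assign_plus_1_2 ~(prop : int -> bool) (f : rank) : rank =
  fun x ->
    match f (x + 1), f (x + 2) with
    | Some _, Some _ ->
        if prop x then Some 0
        else sup_opt [ Option.map (( + ) 1) (f (x + 1));
                       Option.map (( + ) 1) (f (x + 2)) ]
    | _ -> None

let () =
  let prop x = (x = 3) in
  (* hypothetical input ranking function *)
  let f x = if x = 1 || x = 2 then Some 3 else if x = 3 then Some 0 else None in
  assert (skip_tf ~prop f 1 = Some 4);
  let g = assign_plus_1_2 ~prop f in
  assert (g 0 = Some 4);   (* sup {f 1 + 1, f 2 + 1} = 4, as in Example 9.3.6 *)
  assert (g 3 = None)      (* 4 and 5 are outside dom(f), so no reset at x = 3 *)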
Given a conditional instruction if l bexp then stmt₁ else stmt₂ fi, its recurrence semantics takes as input a ranking function f : E ⇀ O and derives the recurrence semantics τ_r^φ⟦stmt₁⟧f of stmt₁, in the following denoted by S₁, and the recurrence semantics τ_r^φ⟦stmt₂⟧f of stmt₂, in the following denoted by S₂. Then, the recurrence semantics of the conditional instruction is defined by means of the ranking function F[f] : E ⇀ O whose domain is the set of environments that belong to the domain of S₁ and to the domain of S₂, and that may both satisfy and not satisfy the boolean expression bexp:

F[f] ≝ λρ ∈ dom(S₁) ∩ dom(S₂).
    sup{S₁(ρ) + 1, S₂(ρ) + 1}   if ⟦bexp⟧ρ = {true, false}
    undefined                    otherwise

and the ranking function F₁[f] : E ⇀ O whose domain is the set of environments ρ ∈ E that belong to the domain of S₁ and that must satisfy bexp:

F₁[f] ≝ λρ ∈ dom(S₁).
    S₁(ρ) + 1   if ⟦bexp⟧ρ = {true}
    undefined   otherwise

and the ranking function F₂[f] : E ⇀ O whose domain is the set of environments that belong to the domain of S₂ and that cannot satisfy bexp:

F₂[f] ≝ λρ ∈ dom(S₂).
    S₂(ρ) + 1   if ⟦bexp⟧ρ = {false}
    undefined   otherwise

The resulting ranking function is defined by joining F[f], F₁[f], and F₂[f], and resetting the value of the function for the environments that belong to its domain and satisfy φ:

τ_r^φ⟦if l bexp then stmt₁ else stmt₂ fi⟧f ≝ λρ.
    0           if ⟨l, ρ⟩ ⊨ φ ∧ ρ ∈ dom(R)
    R(ρ)        if ⟨l, ρ⟩ ⊭ φ ∧ ρ ∈ dom(R)
    undefined   otherwise                                   (9.3.8)

where R ≝ F[f] ∪ F₁[f] ∪ F₂[f].
Example 9.3.7  Let X ≝ {x}. We consider the recurrence property □◇(x = 3) and the recurrence semantics of the conditional statement if bexp then stmt₁ else stmt₂ fi. We assume, given a ranking function f : E ⇀ O, that the recurrence semantics of stmt₁ is defined as:

τ_r^{x=3}⟦stmt₁⟧f(ρ) ≝
    1           if ρ(x) ≤ 0
    undefined   otherwise

and that the recurrence semantics of stmt₂ is defined as:

τ_r^{x=3}⟦stmt₂⟧f(ρ) ≝
    3           if 0 ≤ ρ(x) < 3
    0           if ρ(x) = 3
    3           if 3 < ρ(x)
    undefined   otherwise

Then, when the boolean expression bexp is, for example, x ≤ 3, the recurrence semantics of the conditional statement is:

τ_r^{x=3}⟦if l bexp then stmt₁ else stmt₂ fi⟧f(ρ) ≝
    2           if ρ(x) ≤ 0
    4           if 3 < ρ(x)
    undefined   otherwise

Instead, when bexp is, for example, the non-deterministic choice ?, we have:

τ_r^{x=3}⟦if l bexp then stmt₁ else stmt₂ fi⟧f(ρ) ≝
    4           if ρ(x) = 0
    undefined   otherwise

Note that, like Example 4.3.2 and unlike Example 9.2.7, both functions are undefined when ρ(x) = 3, even though the property x = 3 is satisfied. In fact, the ranking function for the then branch of the if is undefined when ρ(x) = 3.
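The conditional rule (9.3.8) can be exercised directly on Example 9.3.7. The OCaml sketch below models boolean expressions as three-valued on an environment and reproduces both instantiations of the example; the representation and names are illustrative only.

(* Sketch of the conditional rule (9.3.8), reproducing Example 9.3.7. *)
type rank = int -> int option
type bval = MustTrue | MustFalse | Both

let if_tf ~(prop : int -> bool) (bexp : int -> bval) (s1 : rank) (s2 : rank) : rank =
  fun x ->
    let r =
      match bexp x, s1 x, s2 x with
      | Both, Some v1, Some v2 -> Some (max v1 v2 + 1)   (* F[f]  *)
      | MustTrue, Some v1, _   -> Some (v1 + 1)          (* F1[f] *)
      | MustFalse, _, Some v2  -> Some (v2 + 1)          (* F2[f] *)
      | _ -> None
    in
    match r with Some _ when prop x -> Some 0 | _ -> r

let () =
  let prop x = (x = 3) in
  let s1 x = if x <= 0 then Some 1 else None in                           (* stmt1 *)
  let s2 x = if x = 3 then Some 0 else if x >= 0 then Some 3 else None in (* stmt2 *)
  (* bexp = (x <= 3): deterministic *)
  let det x = if x <= 3 then MustTrue else MustFalse in
  let t = if_tf ~prop det s1 s2 in
  assert (t 0 = Some 2 && t 5 = Some 4 && t 3 = None);
  (* bexp = ? : non-deterministic choice *)
  let nondet _ = Both in
  let t' = if_tf ~prop nondet s1 s2 in
  assert (t' 0 = Some 4 && t' 3 = None && t' 1 = None)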
The recurrence semantics of a loop instruction while l bexp do stmt od takes as input a ranking function f : E ⇀ O whose domain represents the environments leading infinitely often to φ from the final label of the instruction, and outputs the ranking function defined as a greatest fixpoint of the function φ_r : (E ⇀ O) → (E ⇀ O) within ⟨E ⇀ O, ⊑⟩:

τ_r^φ⟦while l bexp do stmt od⟧f ≝ gfp^⊑_G φ_r        (9.3.9)

where G ≝ τ_g^φ⟦while l bexp do stmt od⟧f is the guarantee semantics of the loop instruction defined in Equation 9.2.10, and the computational order ⊑ is defined as in Equation 4.2.15:

f₁ ⊑ f₂ ⟺ dom(f₁) ⊆ dom(f₂) ∧ ∀x ∈ dom(f₁) : f₁(x) ≤ f₂(x).

In essence, from the guarantee that some desirable event eventually happens, the recurrence semantics ensures that the event happens infinitely often. The function φ_r : (E ⇀ O) → (E ⇀ O) takes as input a ranking function x : E ⇀ O, resets its value for the environments that belong to its domain and that satisfy φ, and adds to its domain the environments for which one more loop iteration is needed before the next occurrence of φ. In the following, the recurrence semantics τ_r^φ⟦stmt⟧x of the loop body is denoted by S. The function φ_r is defined by means of the ranking function F[x] : E ⇀ O whose domain is the set of environments that belong to the domain of S and to the domain of the input function f, and that may both satisfy and not satisfy the boolean expression bexp:

F[x] ≝ λρ ∈ dom(S) ∩ dom(f).
    sup{S(ρ) + 1, f(ρ) + 1}   if ⟦bexp⟧ρ = {true, false}
    undefined                  otherwise

and the ranking function F₁[x] : E ⇀ O whose domain is the set of environments ρ ∈ E that belong to the domain of S and that must satisfy bexp:

F₁[x] ≝ λρ ∈ dom(S).
    S(ρ) + 1    if ⟦bexp⟧ρ = {true}
    undefined   otherwise

and the ranking function F₂[f] : E ⇀ O whose domain is the set of environments that belong to the domain of the input function f and that cannot satisfy bexp:

F₂[f] ≝ λρ ∈ dom(f).
    f(ρ) + 1    if ⟦bexp⟧ρ = {false}
    undefined   otherwise

The resulting ranking function is defined by joining F[x], F₁[x], and F₂[f], and resetting its value for the environments that belong to its domain and satisfy φ:

φ_r(x) ≝ λρ.
    0           if ⟨l, ρ⟩ ⊨ φ ∧ ρ ∈ dom(R)
    R(ρ)        if ⟨l, ρ⟩ ⊭ φ ∧ ρ ∈ dom(R)
    undefined   otherwise

where R ≝ F[x] ∪ F₁[x] ∪ F₂[f].

Finally, the recurrence semantics of the sequential combination of instructions stmt₁ stmt₂ takes as input a ranking function f : E ⇀ O, determines from f the recurrence semantics τ_r^φ⟦stmt₂⟧f of stmt₂, and outputs the ranking function determined by the recurrence semantics of stmt₁ from τ_r^φ⟦stmt₂⟧f:

τ_r^φ⟦stmt₁ stmt₂⟧f ≝ τ_r^φ⟦stmt₁⟧(τ_r^φ⟦stmt₂⟧f)        (9.3.11)

The recurrence semantics τ_r^φ⟦prog⟧ ∈ E ⇀ O of a program prog is a ranking function whose domain represents the environments always leading infinitely often to φ, which is determined by taking as input the totally undefined function ∅, since the program final states cannot satisfy a recurrence property:

Definition 9.3.8 (Recurrence Semantics)  The recurrence semantics τ_r^φ⟦prog⟧ ∈ E ⇀ O of a program prog ≝ stmt l is:

τ_r^φ⟦prog⟧ = τ_r^φ⟦stmt l⟧ ≝ τ_r^φ⟦stmt⟧∅        (9.3.12)

where the function τ_r^φ⟦stmt⟧ : (E ⇀ O) → (E ⇀ O) is the recurrence semantics of each program instruction stmt.

As pointed out in Remark 3.2.2, possible run-time errors are ignored. Thus, all environments leading to run-time errors are discarded and do not belong to the domain of the recurrence semantics of a program prog.
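When the set of environments is finite, the greatest fixpoint of Equation 9.3.9 can be computed by plain iteration, and the sequencing and program rules are simple compositions. The OCaml sketch below shows this structure; the toy instantiation at the end is purely illustrative and does not correspond to any program analyzed in this chapter.

(* Sketch of the loop rule (9.3.9) and of rules (9.3.11)-(9.3.12), over ranking
   functions on a small finite set of environments (no widening needed here). *)
type rank = int -> int option
let envs = List.init 7 (fun i -> i - 3)              (* environments x in [-3,3] *)
let equal_rank f g = List.for_all (fun x -> f x = g x) envs

(* gfp of [phi_r] starting from the guarantee semantics [g] of the loop. *)
let loop_gfp (phi_r : rank -> rank) (g : rank) : rank =
  let rec iter f = let f' = phi_r f in if equal_rank f f' then f else iter f' in
  iter g

let seq s1 s2 = fun f -> s1 (s2 f)                   (* rule (9.3.11) *)
let program body : rank = body (fun _ -> None)       (* rule (9.3.12) *)

let () =
  ignore seq; ignore program;
  (* toy phi_r: keeps only the non-negative environments at each step *)
  let phi_r f x = if x >= 0 then f x else None in
  let g x = Some (abs x) in
  let r = loop_gfp phi_r g in
  assert (r 2 = Some 2 && r (-1) = None)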
Abstract Recurrence Semantics
We now propose a sound decidable abstraction of the recurrence semantics τ_r^φ⟦prog⟧ ∈ E ⇀ O, based on the piecewise-defined ranking functions presented in Chapter 5 and Chapter 6. The abstraction is sound with respect to the usual approximation order defined in Equation 4.2.12:

f₁ ≼ f₂ ⟺ dom(f₁) ⊇ dom(f₂) ∧ ∀x ∈ dom(f₂) : f₁(x) ≤ f₂(x).

In particular, we revisit the definition of the unary operator RESET_T⟦φ⟧ and we introduce a dual widening operator ▽̄_T.

Reset. The operator RESET_T⟦φ⟧ : T → T should reset the leaves of a decision tree that not only satisfy a given property φ but also guarantee that the property will be satisfied again in the future. To this end, we redefine the operator RESET_F⟦φ⟧ : F → F invoked when Algorithm 24 reaches a leaf node (cf. Line 3): given a function f ∈ F \ {⊥_F, ⊤_F}, RESET_F⟦φ⟧ simply resets its value to zero; undefined leaf nodes are instead left unaltered:

RESET_F(⊥_F) ≝ ⊥_F        RESET_F(f) ≝ λX₁, …, X_k. 0

Dual Widening. The recurrence semantics of a while loop instruction, as defined in Equation 9.3.9, involves a greatest fixpoint. Greatest fixpoints are solved by iteration using a new dual widening operator ▽̄_T. The dual widening ▽̄_T is implemented by Algorithm 25: the function dual-widen, given a sound over-approximation d ∈ D of the reachable environments and two decision trees t₁, t₂ ∈ T, calls the function left-unification (cf. Line 11) to limit the size of the decision trees in order to ensure convergence. Unlike Algorithm 13, Algorithm 25 invokes left-unification choosing the approximation join ⋎_F. Then, Algorithm 25 calls the auxiliary function dual-widen-aux, which collects into a set C ∈ P(C) (initially equal to γ_C(d), cf. Line 12) the linear constraints encountered along the paths up to the leaf nodes (cf. Lines 6-7), which are compared (cf. Line 3) and, if necessary, turned into a ⊥_F-leaf (cf. Line 4).

In Figure 9.2 we depict an example of dual widening. In essence, the dual widening maintains the value of a piecewise-defined ranking function only on the pieces where it stays defined between two iterates; if a piece where it is defined shrinks, we remove it entirely.
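The following OCaml sketch is a simplified model of this behaviour, not Algorithm 25 itself: it assumes that both iterates are already given over the same partition of the variable's range, so that "a piece shrinks" simply means that it becomes undefined in the second iterate. Piece bounds and affine coefficients are invented for the example.

(* Simplified model of the dual widening of Figure 9.2. *)
type piece = { lo : int; hi : int; value : (int * int) option }  (* (a, b) means a*x + b *)
type pw = piece list                       (* assumed: same partition on both sides *)

let dual_widen (t1 : pw) (t2 : pw) : pw =
  List.map2 (fun p1 p2 ->
      match p1.value, p2.value with
      | Some v, Some _ -> { p1 with value = Some v }   (* piece stays defined: keep it *)
      | _ -> { p1 with value = None })                 (* piece shrank or vanished: drop *)
    t1 t2

let () =
  let t1 = [ { lo = -5; hi = -1; value = Some (-3, 10) };
             { lo = 0;  hi = 3;  value = Some (-2, 6) } ] in
  let t2 = [ { lo = -5; hi = -1; value = Some (-3, 10) };
             { lo = 0;  hi = 3;  value = None } ] in    (* second piece no longer defined *)
  match dual_widen t1 t2 with
  | [ p1; p2 ] -> assert (p1.value = Some (-3, 10) && p2.value = None)
  | _ -> assert false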
Abstract Recurrence Semantics. In the following, we assume to have, for each program control point l ∈ L, a sound numerical over-approximation R ∈ D of the reachable environments τ_I(l) ∈ P(E): τ_I(l) ⊆ γ_D(R) (cf. Section 3.4).

In Figure 9.3 we define the semantics τ_r^φ♮⟦stmt⟧ : T → T for each program instruction stmt. Each function τ_r^φ♮⟦stmt⟧ : T → T takes as input a decision tree over-approximating the ranking function corresponding to the final control point of the instruction, and outputs a decision tree defined over a subset of the reachable environments R ∈ D, which over-approximates the ranking function corresponding to the initial control point of the instruction. Each function τ_r^φ♮⟦stmt⟧ invokes the redefined reset operator RESET_T:

τ_r^φ♮⟦l skip⟧t ≝ RESET_T⟦φ⟧(l, R, STEP_T(t))
τ_r^φ♮⟦l X := aexp⟧t ≝ RESET_T⟦φ⟧(l, R, (B-ASSIGN_T⟦X := aexp⟧R)(t))
τ_r^φ♮⟦if l bexp then stmt₁ else stmt₂ fi⟧t ≝ RESET_T⟦φ⟧(l, R, F₁♮[t] ⋎_T[R] F₂♮[t])
    where F₁♮[t] ≝ (FILTER_T⟦bexp⟧R)(τ_r^φ♮⟦stmt₁⟧t) and F₂♮[t] is defined analogously from stmt₂ and the negation of bexp.

For a while loop, gfp♮ φ_r^φ♮ is the limit of the iteration sequence with dual widening:

y₀ ≝ G(t)        y_{n+1} ≝ y_n ▽̄_T φ_r^φ♮(y_n)

where G is the guarantee semantics of the loop (cf. Figure 9.1).
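The iteration driver itself is small. The following OCaml sketch shows its shape, abstracting over the decision-tree operators, which are assumed to be supplied by the underlying domain; the integer instantiation at the end merely exercises the driver and carries no semantic meaning.

(* Sketch of the iteration sequence with dual widening used for while loops. *)
let gfp_with_dual_widening ~equal ~dual_widen ~(phi : 'a -> 'a) (y0 : 'a) : 'a =
  let rec iterate y =
    let y' = dual_widen y (phi y) in
    if equal y y' then y else iterate y'
  in
  iterate y0

let () =
  (* toy instantiation over integers, just to run the driver *)
  assert (gfp_with_dual_widening ~equal:( = ) ~dual_widen:min
            ~phi:(fun y -> max (y - 1) 0) 5 = 0)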
In the absence of run-time errors, the abstract semantics τ_r^φ♮⟦stmt⟧, given sound over-approximations R ∈ D of τ_I(i⟦stmt⟧) and D ∈ D of τ_I(f⟦stmt⟧), is a sound over-approximation of the semantics τ_r^φ⟦stmt⟧ defined in Section 9.3.2:

Lemma 9.3.9  τ_r^φ⟦stmt⟧ γ_T[D]t ≼ γ_T[R] τ_r^φ♮⟦stmt⟧t.
Proof.
See Appendix A.6.
⌅
The abstract recurrence semantics τ_r^φ♮⟦prog⟧ ∈ T of a program prog outputs the decision tree over-approximating the ranking function corresponding to the initial program control point i⟦prog⟧ ∈ L. It is defined by taking as input the leaf node LEAF : ⊥_F:

Definition 9.3.10 (Abstract Recurrence Semantics)  The abstract recurrence semantics τ_r^φ♮⟦prog⟧ ∈ T of a program prog ≝ stmt l is:

τ_r^φ♮⟦prog⟧ = τ_r^φ♮⟦stmt l⟧ ≝ τ_r^φ♮⟦stmt⟧ RESET_T⟦φ⟧(l, R, LEAF : ⊥_F)        (9.3.15)

where the abstract recurrence semantics τ_r^φ♮⟦stmt⟧ : T → T of each program instruction stmt is defined in Figure 9.3.

In the absence of run-time errors, the abstract recurrence semantics τ_r^φ♮⟦prog⟧ ∈ T, given a sound numerical over-approximation R ∈ D of the reachable environments τ_I(i⟦prog⟧), is sound with respect to τ_r^φ⟦prog⟧ ∈ E ⇀ O:

Theorem 9.3.11  τ_r^φ⟦prog⟧ ≼ γ_T[R] τ_r^φ♮⟦prog⟧

Proof (Sketch). The proof follows from the soundness of the operators of the decision trees abstract domain (cf. Lemma 9.3.9) used for the definition of τ_r^φ♮⟦prog⟧ ∈ T. ∎

In particular, the abstract recurrence semantics provides sufficient preconditions for ensuring a recurrence property □◇φ for a given over-approximation R ∈ D of the set of initial states I ⊆ Σ:

Corollary 9.3.12  A program satisfies a recurrence property □◇φ for execution traces starting from a set of states γ_D(R) if γ_D(R) ⊆ dom(γ_T[R] τ_r^φ♮⟦prog⟧).
Implementation
We have incorporated the static analysis methods for guarantee and recurrence temporal properties presented in this chapter into our prototype static analyzer FuncTion, presented in Chapter 8. When the guarantee or recurrence analysis is selected, the prototype accepts state properties written as C-like pure expressions.

The following examples illustrate the potential and effectiveness of our new static analysis methods. These and additional examples are available from the FuncTion web interface: http://www.di.ens.fr/~urban/FuncTion.html.

Example 9.4.1  Let us consider again the program of Example 9.1.1:

while 1 (x ≥ 0) do 2 x := x + 1 od
while 3 ( true ) do if 4 (x ≤ 10) then 5 x := x + 1 else 6 x := −x fi od 7

and the guarantee property ◇(x = 3). FuncTion, using interval constraints based on the intervals abstract domain (cf. Section 3.4.1) for the decision nodes and affine functions for the leaf nodes (cf. Equation 5.2.6), infers the following ranking function associated with program control point 1:
λx.
    −3x + 10    if x < 0
    −2x + 6     if 0 ≤ x ∧ x ≤ 3
    undefined   otherwise

which bounds the wait (from program control point 1) for the desirable state x = 3 by −3x + 10 program execution steps when x < 0, and by −2x + 6 execution steps when 0 ≤ x ∧ x ≤ 3. The analysis is inconclusive when 3 < x. In this case, when 3 < x, the guarantee property is never satisfied (cf. Example 9.1.3). Thus, the precondition x ≤ 3 induced by the domain of the ranking function is the weakest precondition for ◇(x = 3).

Let us consider now the recurrence property □◇(x = 3). FuncTion infers the following ranking function associated with program control point 1:

λx.
    −3x + 10    if x < 0
    undefined   otherwise

which induces the precondition x < 0 for □◇(x = 3). Indeed, when 0 ≤ x ∧ x ≤ 3, the desirable state x = 3 does not occur infinitely often but only once. Again, x < 0 is the weakest precondition for □◇(x = 3).
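These bounds are easy to double-check by brute force. The OCaml sketch below simulates the program of Example 9.4.1 from control point 1 and counts steps until the first state with x = 3, assuming one execution step per test and per assignment and the loop conditions as written above; it is an illustration, not FuncTion output.

(* Brute-force check of the guarantee bounds inferred at control point 1. *)
let steps_to_x_three x0 =
  let rec run pc x steps =
    if x = 3 then steps
    else match pc with
      | 1 -> run (if x >= 0 then 2 else 3) x (steps + 1)   (* while (x >= 0) *)
      | 2 -> run 1 (x + 1) (steps + 1)                     (* x := x + 1 *)
      | 3 -> run 4 x (steps + 1)                           (* while (true) *)
      | 4 -> run (if x <= 10 then 5 else 6) x (steps + 1)  (* if (x <= 10) *)
      | 5 -> run 3 (x + 1) (steps + 1)                     (* x := x + 1 *)
      | 6 -> run 3 (-x) (steps + 1)                        (* x := -x *)
      | _ -> assert false
  in
  run 1 x0 0

let () =
  assert (steps_to_x_three (-4) <= -3 * (-4) + 10);   (* piece -3x + 10 for x < 0 *)
  assert (steps_to_x_three 1 <= -2 * 1 + 6)           (* piece -2x + 6 for 0 <= x <= 3 *)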
Instead, for both ◇(x = 3) and □◇(x = 3), FuncTion infers the following ranking function associated with program control point 3:

λx.
    −3x + 9     if x ≤ 3
    −3x + 72    if 3 < x ≤ 10
    3x + 12     if 10 < x

which bounds the wait (from program control point 3) for the next occurrence of x = 3 by −3x + 9 execution steps when x ≤ 3, by −3x + 72 execution steps when 3 < x ≤ 10, and by 3x + 12 execution steps when 10 < x.

Example 9.4.2  Let us consider the following program:

1 c := 1 while 2 ( true ) do 3 x := c while 4 (0 < x) do 5 x := x − 1 6 c := c + 1 od od 7

Each iteration of the outer loop assigns to the program variable x the value of some counter c, which initially has value one; then, the inner loop decreases the value of x and increases the value of the counter c until the value of x becomes less than or equal to zero.
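Before looking at the analysis result, the expected bounds can again be checked by a small simulation. The OCaml sketch below runs the program above from the inner loop head (control point 4) and counts steps until the next state with x = 0, under the same one-step-per-test-and-assignment assumption as before; names and encoding are illustrative only.

(* Brute-force check of the ranking function inferred at control point 4. *)
let steps_to_x_zero ~x ~c =
  (* control points: 2 = outer test, 3 = x := c, 4 = inner test, 5 = x := x-1, 6 = c := c+1 *)
  let rec run pc x c steps =
    if steps > 0 && x = 0 then steps
    else match pc with
      | 2 -> run 3 x c (steps + 1)                          (* while (true) *)
      | 3 -> run 4 c c (steps + 1)                          (* x := c *)
      | 4 -> run (if 0 < x then 5 else 2) x c (steps + 1)   (* while (0 < x) *)
      | 5 -> run 6 (x - 1) c (steps + 1)                    (* x := x - 1 *)
      | 6 -> run 4 x (c + 1) (steps + 1)                    (* c := c + 1 *)
      | _ -> assert false
  in
  run 4 x c 0

let () =
  (* inferred bounds: 3c + 2 when x < 0 and 0 < c, 1 when x = 0, 3x - 1 when x >= 1 *)
  assert (steps_to_x_zero ~x:(-2) ~c:4 <= 3 * 4 + 2);
  assert (steps_to_x_zero ~x:0 ~c:3 = 1);
  assert (steps_to_x_zero ~x:5 ~c:7 <= 3 * 5 - 1)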
FuncTion, using interval constraints based on the intervals abstract domain (cf. Section 3.4.1) for the decision nodes and affine functions for the leaf nodes (cf. Equation 5.2.6), is able to prove that the recurrence property □◇(x = 0) is always satisfied by the program. The piecewise-defined ranking function inferred at program control point 1 bounds the wait for the next occurrence of the desirable state x = 0 by five program execution steps (i.e., executing the variable assignment c := 1, testing the outer loop condition, executing the assignment x := c, testing the inner loop condition, and executing the assignment x := x − 1). The analysis infers a more interesting ranking function associated with program control point 4:

λx. λc.
    3c + 2      if x < 0 ∧ 0 < c
    3           if x < 0 ∧ c = 0
    1           if x = 0 ∧ 0 ≤ c
    3x − 1      if (x = 1 ∧ 1 ≤ c) ∨ (2 ≤ x ∧ 2 ≤ c)
    undefined   otherwise

The function bounds the wait for the next occurrence of x = 0 by 3c + 2 execution steps when x < 0 ∧ 0 < c, by 3 execution steps when x < 0 ∧ c = 0 (i.e., testing the inner loop condition, testing the outer loop condition and executing the assignment x := c), by 1 execution step when x = 0 ∧ 0 ≤ c (i.e., testing the inner loop condition), and by 3x − 1 execution steps when (x = 1 ∧ 1 ≤ c) ∨ (2 ≤ x ∧ 2 ≤ c). In the last case there is a precision loss due to a lack of expressiveness of the intervals abstract domain: if x is strictly positive at program control point 4, the weakest precondition ensuring infinitely many occurrences of the desirable state x = 0 is x ≤ c, which is not representable by the intervals abstract domain.

Example 9.4.3  Let us consider the following program:

while 1 ( true ) do 2 x := ? while 3 (x ≠ 0) do if 4 (0 < x) then 5 x := x − 1 else 6 x := x + 1 fi od od 7

Each iteration of the outer loop resets the value of the program variable x with the non-deterministic assignment x := ?; then, the inner loop decreases or increases the value of x until it becomes equal to zero. The recurrence property □◇(x = 0) is clearly satisfied by the program. However, because of the non-deterministic assignment x := ?, the number of execution steps between two occurrences of the desirable state x = 0 is unbounded. FuncTion is able to prove that the property is satisfied using interval constraints based on the intervals abstract domain (cf. Section 3.4.1) for the decision nodes and ordinal-valued functions for the leaf nodes (cf. Chapter 6). The inferred ranking function at program control point 1:

λx. ω + 8

means that, whatever the value of the variable x, the number of execution steps between two occurrences of x = 0 is unbounded but finite.

Example 9.4.4  Let us consider Peterson's algorithm for mutual exclusion. Note that weak fairness assumptions are required to guarantee bounded bypass (i.e., a process cannot be bypassed by any other process in entering the critical section more than a finite number of times). At the moment our prototype FuncTion is not able to directly analyze concurrent programs. Thus, we have modeled the algorithm as a fair non-deterministic sequential program which interleaves execution steps from both processes while enforcing 1-bounded bypass (i.e., a process cannot be bypassed by any other process in entering the critical section more than once). FuncTion, using interval constraints based on the intervals abstract domain (cf. Section 3.4.1) for the decision nodes and affine functions for the leaf nodes (cf. Equation 5.2.6), is able to prove the recurrence property □◇(8 : true), meaning that both processes are allowed to enter their critical section infinitely often.
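The need for an ordinal in Example 9.4.3 can be illustrated with a few lines of OCaml: for every value chosen by x := ?, the wait until the next occurrence of x = 0 is finite, but no single natural number bounds it over all choices. The step-counting model (three steps per inner-loop iteration) is an assumption of this sketch, not FuncTion output.

(* Illustration of the unbounded-but-finite wait in Example 9.4.3. *)
let steps_from_choice x0 =
  (* inner loop: while (x <> 0) do if 0 < x then x := x-1 else x := x+1 fi od *)
  let rec run x steps =
    if x = 0 then steps
    else run (if 0 < x then x - 1 else x + 1) (steps + 3)  (* test + test + assignment *)
  in
  run x0 0

let () =
  (* finite for each choice of x0, but grows without bound with |x0|:
     hence the ordinal-valued bound omega + k. *)
  assert (steps_from_choice 4 = 12);
  assert (steps_from_choice (-100) = 300)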
Related Work
In the recent past, a large body of work has been devoted to proving liveness properties of (concurrent) programs.
A successful approach for proving liveness properties is based on a transformation from model checking of liveness properties to model checking of safety properties [Biere et al., Liveness Checking as Safety Checking]. The approach looks for and exploits lasso-shaped counterexamples. A similar search for lasso-shaped counterexamples has been used to generalize the model checking algorithm IC3 to deal with liveness properties [Bradley et al., An Incremental Approach to Model Checking Progress Properties]. However, in general, counterexamples to liveness properties in infinite-state systems are not necessarily lasso-shaped. Our approach is not counterexample-based and is meant for proving liveness properties directly, without reduction to safety properties.

In [Podelski and Rybalchenko, Transition Predicate Abstraction and Fair Termination], Andreas Podelski and Andrey Rybalchenko present a method for the verification of liveness properties based on transition invariants [Podelski and Rybalchenko, Transition Invariants]. The approach, as in [Vardi, Verification of Concurrent Programs: The Automata-Theoretic Framework], reduces the proof of a liveness property to the proof of fair termination by means of a program transformation. It is at the basis of the industrial-scale tool Terminator [CGP+07]. By contrast, our method is meant for proving liveness properties directly, without reduction to termination. Moreover, it avoids the cost of explicitly checking the well-foundedness of the transition invariants.

A distinguishing aspect of our work is the use of infinite-height abstract domains equipped with (dual) widening. We are aware of only one other such work: in [Massé, Property Checking Driven Abstract Interpretation-Based Static Analysis], Damien Massé proposes a method for proving arbitrary temporal properties based on abstract domains for lower closure operators. A small analyzer is presented in [Massé, Abstract Domains for Property Checking Driven Analysis of Temporal Properties], but the approach remains mainly theoretical. We believe that our framework, albeit less general, is more straightforward and of practical use.

An emerging trend focuses on proving existential temporal properties (e.g., proving that there exists a particular execution trace). The most recent approaches [Beyene et al., Solving Existentially Quantified Horn Clauses; Cook et al., Reasoning About Nondeterminism in Programs] are based on counterexample-guided abstraction refinement [CGJ+03]. Our work is designed for proving universal temporal properties (i.e., valid for all program execution traces). We leave proving existential temporal properties as part of our future work.

Finally, to our knowledge, the inference of sufficient preconditions for guarantee and recurrence properties, and the ability to provide upper bounds on the time before a program reaches a desirable state, is unique to our work.
V
Conclusion
Future Directions
With this thesis, we have proposed new abstract interpretation-based methods for proving termination and other liveness properties of programs.
In particular, our first contribution is the design of new abstract domains suitable for the abstract interpretation framework for termination proposed by Patrick Cousot and Radhia Cousot [Cousot and Cousot, An Abstract Interpretation Framework for Termination]. These abstract domains automatically infer sufficient preconditions for program termination, and synthesize piecewise-defined ranking functions through backward analysis (cf. Chapter 5 and Chapter 6). The ranking functions provide upper bounds on the program execution time in terms of execution steps. The abstract domains are parametric in the choice between the expressivity and the cost of the underlying numerical abstract domain (cf. Section 3.4). They are shown to be effective for proving termination of recursive programs in Chapter 7.

Our second contribution is an abstract interpretation framework for liveness properties, which comes as a generalization of the framework proposed for termination [Cousot and Cousot, An Abstract Interpretation Framework for Termination]. In particular, we have proposed new abstract interpretation-based methods for proving guarantee and recurrence properties (cf. Chapter 9). We have also reused the abstract domains based on piecewise-defined ranking functions (cf. Chapter 5 and Chapter 6) to effectively infer sufficient preconditions for these properties, and to provide upper bounds on the time before a program reaches a desirable state.
Our last contribution is a prototype implementation of a static analyzer based on piecewise-defined ranking functions (cf. Chapter 8). The earlier versions of the prototype participated in the 3rd International Competition on Software Verification (SV-COMP 2014), which featured a category for termination for the first time, and in the 4th International Competition on Software Verification (SV-COMP 2015). We conclude with some perspectives for future research.
Potential Termination and Non-Termination. In Chapter 4 we have introduced the definite and potential termination semantics for a program [Cousot and Cousot, An Abstract Interpretation Framework for Termination]. Then, throughout the rest of the thesis we have focused our attention only on the definite termination semantics, and proposed decidable abstractions in Chapter 5 and Chapter 6. As part of our future work, we plan to do similar work for the potential termination semantics.
We plan to use the same domains based on piecewise-defined ranking functions (cf. Chapter 5 and Chapter 6) but adapt their operators in order to automatically infer necessary preconditions for program termination, which are not very interesting per se but whose complements provide sufficient preconditions for non-termination. In this way, we could complement our analysis method for proving program termination with an analysis method for proving program non-termination. In particular, we envision running the analyses simultaneously in order to improve precision.

Abstract Domains. In Chapter 5 we have proposed the decision trees abstract domain, which we have designed to be parameterized by a convex numerical abstract domain, such as intervals [Cousot and Cousot, Static Determination of Dynamic Properties of Programs], convex polyhedra [Cousot and Halbwachs, Automatic Discovery of Linear Restraints Among Variables of a Program], and octagons [Miné, The Octagon Abstract Domain]. In the future, we would like to also allow non-convex abstract domains, such as congruences [Granger, Static Analysis of Arithmetic Congruences].

Moreover, the decision trees abstract domain has been instantiated with affine functions, in Chapter 5, and ordinal-valued functions, in Chapter 6. It remains for future work to support non-linear functions, such as polynomials [Bradley et al., The Polyranking Principle] or exponentials [Feret, The Arithmetic-Geometric Progression Abstract Domain]. Non-linear functions would provide more precise upper bounds on the program execution time for programs with non-linear computational complexity. On this account, we plan to further explore the potential of our approach in the termination-related field of automatic cost analysis [DLH90, SLH14, Weg75].
In Chapter 5, we also mentioned various ideas for improving the widening of decision trees, by introducing thresholds [Cousot and Cousot, Comparing the Galois Connection and Widening/Narrowing Approaches to Abstract Interpretation] and integrating state-of-the-art precise widening operators [Bagnara et al., Precise Widening Operators for Convex Polyhedra]. We plan to investigate these and further possibilities.

The decision trees abstract domain could also offer an alternative disjunctive refinement of numerical abstract domains [CCM10, GR98, GC10a, SISG06, etc.]. We would also like to explore its potential to be adapted to other program semantics and for proving other program properties.
Fairness and Liveness Properties. In Chapter 9, we have proposed an abstract interpretation framework for guarantee and recurrence temporal properties [Manna and Pnueli, A Hierarchy of Temporal Properties]. The verification of liveness properties is a difficult problem, with not many satisfying solutions. We have made a first step that shows how Abstract Interpretation can also be used for proving liveness properties and that, we believe, opens many possibilities.
We mentioned in Chapter 1 that some of the theoretical work presented in this thesis is being used as part of ongoing research work aimed at the verification of real-time properties of avionics software.
The integration of fairness properties [Francez, Fairness] remains for future work. We also plan to extend the present framework to the full hierarchy of temporal properties [Manna and Pnueli, A Hierarchy of Temporal Properties], and more generally to arbitrary (universal and existential [Beyene et al., Solving Existentially Quantified Horn Clauses; Cook et al., Reasoning About Nondeterminism in Programs]) liveness properties.
Machine Integers and Floats
In this thesis we have only considered programs that operate over mathematical integers. The reality is that most programs operate over variables that range over fixed-width numbers, such as 32-bit integers or 64-bit floating-point numbers, with the possibility of overflow or underflow. In particular, we cannot ignore the fixed-width semantics, as overflow and underflow can for example cause non-termination in programs that would otherwise terminate. We plan, as part of our future work, to make the decision trees abstract domain aware of the low-level memory representation of data-types.
Heap-Manipulating Programs. We also plan to prove termination and liveness properties of more complex programs, such as heap-manipulating programs. We would like to investigate the adaptability of existing methods [Berdine et al., Automatic Termination Proofs for Programs with Shape-Shifting Heaps] and existing abstract domains for shape analysis [Chang and Rival, Modular construction of shape-numeric analyzers], and possibly design new techniques.
Concurrent Programs. Finally, in this thesis we have developed methods for proving termination and liveness properties of sequential programs. A natural future direction is considering concurrent programs. When analyzing concurrent programs, it is necessary to consider all possible interactions between concurrently executing threads. The usual method for proving concurrent programs correct is based on a rely-guarantee or assume-guarantee style of reasoning, which considers every thread in isolation under assumptions on its environment and thus avoids reasoning about thread interactions directly [Gotsman et al., Proving that Non-Blocking Algorithms Don't Block; Miné, Relational Thread-Modular Static Value Analysis by Abstract Interpretation]. We plan to extend these strategies to liveness properties.

⟦aexp⟧ρ}. Moreover, by definition of B-ASSIGN_T (cf. Algorithm 11), we have sup{(γ_T[D]t)(ρ[X ↦ v]) + 1 | v ∈ ⟦aexp⟧ρ} ≤ A(ρ). In fact, Algorithm 11 invokes B-ASSIGN_F, which after the assignment increases the value of the defined leaf nodes of a decision tree (cf. Equation 5.2.21). Thus, C(ρ) ≤ A(ρ), which is absurd. Therefore, we conclude that ∀ρ ∈ dom(A) : C(ρ) ≤ A(ρ).
This concludes the proof.
τ_Mt⟦if l bexp then stmt₁ else stmt₂ fi⟧ γ_T[D]t ≼ γ_T[R](F₁♮[t] ⋎_T[R] F₂♮[t])
Proof of Lemma 5.2.15.
We have ⌧ I J if l bexp then stmt 1 else stmt 2 fi K D (R) ✓ D (D), given that R 2 D is a sound over-approximation of ⌧ I (l) and D 2 D is a sound overapproximations of ⌧ I (f J if l bexp then stmt 1 else stmt 2 fi K). Let us assume, by absurd, that dom(C) ⇢ dom(A). Then, there exists an environment ⇢ 2 E such that ⇢ 2 dom(A) and ⇢ 6 2 dom(C). Since ⇢ 2 dom(A), by definition of 4 T [D] (cf. Algorithm 6) and FILTER T (cf. Algorithm 12), we have ⇢ 2 dom( T [R](⌧ \ Mt J stmt 1 Kt)) or ⇢ 2 dom( T [R](⌧ \ Mt J stmt 2 Kt)), or both. In fact, Algorithm 12 prunes a decision tree (cf. Line 12) and Algorithm 6 favors undefined leaf nodes over defined leaf nodes (cf. Equation 5.2.13). Thus, by structural induction, we have ⇢ 2 dom(⌧ Mt J stmt 1 K T [D]t) or ⇢ 2 dom(⌧ Mt J stmt 2 K T [D]t), or both. In absence of run-time errors, by definition of ⌧ Mt J if l bexp then stmt 1 else stmt 2 fi K (cf. Equation 4.3.3), we have ⇢ 2 dom(C) which is absurd. Therefore, we conclude that dom(C) ◆ dom(A).
Let us assume now, by absurd, that 9⇢ 2 dom(A) : C(⇢) > A(⇢). Let We have, by structural induction, S 1 (⇢) < A(⇢) or S 2 (⇢) < A(⇢) or sup{S 1 (⇢) + 1, S 2 (⇢) + 1} < A(⇢). In fact, Algorithm 12 invokes step which increases the value of the defined leaf nodes of a decision tree (cf. Algorithm 9. Thus, C(⇢) A(⇢), which is absurd. Therefore, we conclude that 8⇢ 2 dom(A) : C(⇢) A(⇢).
This concludes the proof. def = (FILTER T J not bexp KR)(t). Then, given t 2 T , for all x 2 T we have:
Mt ( D [R]x) 4 T [R]( \ Mt (x))
where \ Mt (x)
def = F \ 1 [x] g T [R] F \ 2 [t].
Proof of Lemma 5. Mt J stmt Kx). We have, by structural induction, S 1 (⇢) < A(⇢) or ( T [D]t)(⇢) < A(⇢) or sup{S 1 (⇢) + 1, ( T [D]t)(⇢) + 1} < A(⇢). In fact, Algorithm 12 invokes step which increases the value of the defined leaf nodes of a decision tree (cf. Algorithm 9. Thus, C(⇢) A(⇢), which is absurd. Therefore, we conclude that 8⇢ 2 dom(A) : C(⇢) A(⇢).
This concludes the proof.
∎

Lemma 5.2.24  Let φ_Mt♮(x) ≝ F₁♮[x] ⋎_T[R] F₂♮[t], as defined in Lemma 5.2.16, for any given t ∈ T. Then, we have:

τ_Mt⟦while l bexp do stmt od⟧ γ_T[D]t ≼ γ_T[R](lfp♮ φ_Mt♮)
where lfp \ \ Mt is the limit of the iteration sequence with widening: y 0
def=
{S 0 | S 0 ✓ S}. The union of two sets A and B, written A [ B, is the set of all elements of A and all elements of B: A [ B def = {x | x 2 A _ x 2 B}. More generally, the union of a set of sets S, is denoted by S S: S S def = S S 0 2S S 0 = {x | 9S 0 2 S : x 2 S 0 }. The intersection A \ B of two sets A and B is the set of all elements that are elements of both A and B: A \ B def = {x | x 2 A ^x 2 B}. More generally, the intersection of a set of sets S is denoted by T S: T S def = T S 0 2S S 0 = {x | 8S 0 2 S : x 2 S 0 }. The relative complement of a set B in a set A, denoted by A \ B, is the set of all elements of A that are not elements of B: A \ B def = {x | x 2 A ^x 6 2 B}. When B ✓ A and the set A is clear from the context, we simply write ¬B for A \ B and we call it complement of B. The cartesian product of two sets A and B, denoted by A ⇥ B, is the set of all pairs where the first component is an element of A and the second component is an element of B: A ⇥ B def = {hx, yi | x 2 A ^y 2 B}. More generally, S 1 ⇥
Figure 2.1: Hasse diagrams for the partially ordered set ⟨P({a, b, c}), ⊆⟩.
Let us consider the partially ordered set hP({a, b, c}), ✓i represented in Figure 2.1 and the subset X def
Figure 2.2: Fixpoint iterates of the maximal trace semantics.
equivalently, is surjective), and hD, vi ! ! ↵ hD \ , v \ i when both ↵ and are bijective. The posets hD, vi and hD \ , v \ i are called the concrete domain and the abstract domain, respectively. The function ↵ : D ! D \ is the abstraction function, which provides the abstract approximation ↵(d) of a concrete element d 2 D, and : D \ ! D is the concretization function, which provides the concrete element (d \ ) corresponding to the abstract description d \ 2 D \ . Note that, the notion of Galois connection can also be defined on preordered sets.
Figure 2 . 3 :
23 Figure 2.3: Example of increasing iteration with widening ( O O O !) followed by
pair consisting of a label l 2 L and an environment ⇢ 2 E, where the environment defines the values of the program variables at the program control point designated by the label. Let ⌃ denote the set of all program states. The initial control point iJ stmt K 2 L (resp. iJ prog K 2 L) of an instruction stmt (resp. a program prog) defines where the execution of the instruction (resp. program) starts, and the final control point f J stmt K 2 L (resp. f J prog K 2 L) defines where the execution of the instruction stmt (resp. program prog) ends. The formal definitions are in Figure 3.4 and Figure 3.5.
Figure 3 . 4 :
34 Figure 3.4: Initial control point of instructions stmt and programs prog.
Figure 3 . 5 :
35 Figure 3.5: Final control point of instructions stmt and programs prog.
Figure 3 . 7 :
37 Figure 3.7: Maximal trace semantics of instructions stmt.
Figure 3 . 8 :
38 Figure 3.8: Fixpoint iterates of the trace semantics of a loop instruction.
(E), ✓i where the abstraction ↵ I : P(⌃) ! (L ! P(E)) and the concretization I : (L ! P(E)) ! P(⌃) are defined as follows:
Figure 3 .
3 Figure 3.10: Abstract invariance semantics of instructions stmt.
Let us consider again the loop within the program of Example 3.3.2:
Example 4.2.3 Let us consider the following non-deterministic program: while 1 ( ? ) do 2 skip od 3 The program has no variables: X def = ;. Thus, the set of program environments E only contains the totally undefined function ⇢ : ; ! Z. The set of program states is ⌃ def = {1, 2, 3} ⇥ E, and the set of initial states is
In the following, we abstract such maximal trace semantics into a definite termination trace semantics and a potential termination trace semantics. Potential Termination Trace Semantics. The potential termination trace semantics ⌧ m 2 P(⌃ + ) eliminates all program infinite traces. It can be derived by abstract interpretation of the maximal trace semantics ⌧ +1 2 P(⌃ +1 ) by means of the Galois connection hP(⌃ +1 ), ✓i ! ! ↵ m m hP(⌃ + ), ✓i, where the abstraction function ↵ m : P(⌃ +1 ) ! P(⌃ + ) and the concretization function m : P(⌃ + ) ! P(⌃ +1 ) are defined as follows:
⇥ ⌃), ✓i where the abstraction function ! ↵ : P(⌃ +1 ) ! P(⌃ ⇥ ⌃) extracts from a set of sequences T ✓ ⌃ +1 the smallest transition relation r ✓ ⌃ ⇥ ⌃ that generates T : !
(4.2.8) Definition 4.2.14 (Potential Termination Semantics) The potential termination semantics ⌧ mt 2 ⌃ * O is derived by abstract interpretation of the maximal trace semantics ⌧ +1 2 P(⌃ +1 ):
(4.2.12) Definition 4.2.19 (Definite Termination Semantics) The definite termination semantics ⌧ Mt 2 ⌃ * O is derived by abstract interpretation of the maximal trace semantics ⌧ +1 2 P(⌃ +1 ):
(4.2.15) Theorem 4.2.20 (Definite Termination Semantics) The definite termination semantics ⌧ Mt 2 ⌃ * O can be expressed as a least fixpoint in the partially ordered set h⌃ * O, vi:
Example 4.2.21 Let us consider again the trace semantics of Example 4.2.16: The fixpoint iterates of the definite termination semantics ⌧ Mt 2 ⌃ * O are: are outside the domain of the function.
⌅ 4 .
4 An Abstract Interpretation Framework for Termination
In this way, to each program control point l 2 L corresponds a partial function f : E * O, and to each program instruction stmt corresponds a termination semantics
which provides a concretization-based abstraction of the definite termination semantics ⌧ Mt 2 L ! (E * O) by partitioning with respect to the program control points. No approximation is made on L. On the other hand, each program control point l 2 L is associated with an element t 2 T .Piecewise-Defined Ranking Functions. The elements of the abstract domain hT , 4 T i are piecewise-defined partial functions.Their internal representation is inspired by the space partitioning trees[START_REF] Fuchs | On Visible Surface Generation by a Priori Tree Structures[END_REF] developed in the context of 3D computer graphics and the use of decision trees in program analysis and verification [BCC + 10, Jea02]: the piecewisedefined partial functions are represented by decision trees, where the decision nodes are labeled with linear constraints, and the leaf nodes belong to an auxiliary abstract domain for functions. The decision trees recursively partition the space of possible values of the program variables inducing disjunctions into the auxiliary domain. The elements of the auxiliary domain are functions of the program variables, which provide upper bounds on the computational complexity of the program in terms of execution steps.
Figure 5
5 Figure 5.1: Decision tree representation (a) of the piecewise-defined ranking function (b) for the program of Example 5.1.1. The linear constraints are satisfied by their left subtree, while their right subtree satisfies their negation. The leaves of the tree represent partial functions whose domain is determined by the constraints satisfied along the path to the leaf node.
Figure 5.2: Decision tree representation of the piecewise-defined ranking function for the program of Example 5.1.2. The leaf with value ? explicitly represents the undefined piece of the ranking function determined by the constraints satisfied along the path to the leaf node.
where the abstraction function ↵ C : P(C) ! D maps a set of interval (resp. octagonal, polyhedral) constraints to an interval (resp. an octagon, a polyhedron), and the concretization function C : D ! P(C) maps an interval (resp. an octagon, a polyhedron) to a set of interval (resp. octagonal, polyhedral) constraints. In particular, we define C (> D ) def = ; and C (? D ) def = {? C }, where ? C represents the unsatisfiable linear constraint 1 0.
Figure 5
5 Figure 5.3: Hasse diagrams defining the approximation order 4 F [D] (a) and the computational order v F [D] (b) of the functions abstract domain.
Figure 5 . 4 :
54 Figure 5.4: Tree unification of the decision trees (a) and (b) of Example 5.2.3. The resulting decision trees are represented in (c) and (d), respectively.
Example 5.2.3 Let X def = {x}, let D be the intervals abstract domain hB, v B i, let C be the interval constraints auxiliary abstract domain hC B / ⌘ C , < C i, and let F be the a ne functions auxiliary abstract domain hF A , 4 F i. We consider the decision trees t 1 2 T NIL represented in Figure 5.4a and t 2 2 T NIL represented in Figure 5.4b, where 2 T is a leaf node. Let d def = > B and let C def = C (d) = ;. The function unification-aux adds a decision node for x 0 to t 1 (cf. Line 14) and removes the redundant constraint x+1 0 from the resulting left subtree (cf. Line 8). Moreover, unification-aux adds a decision node for x+ 1 0 to the right subtree of t 2 (cf. Line 23). The result of the tree unification are the decision trees represented in Figure 5.4c and Figure 5.4d, respectively.
Figure 5 . 5 :
55 Figure 5.5: Example of join of two a ne functions of one variable, shown in (a) and (b), respectively. The result is shown in (c). In (d), (e), and (f) are respectively shown the hypographs of (a), (b), and (c).
are shown in Figure 5.5a and Figure 5.5b, respectively. Their domain of definition D 2 D is represented by the set of linear constraints C def = C (D) = {x 1 0, x + 5 0}. In Figure 5.5d and Figure 5.5e are represented the hypographs f 1 [D]# and f 2 [D]# of f 1 and f 2 , respectively. Their convex-hull is shown in Figure 5.5e, and the result of the join is the a ne function f 2 F A \{? F , > F } shown in Figure 5.5c. Example 5.2.6 Let X = {x}, and let f 1 def = x. x and f 2 def = x.
Figure 5
5 Figure 5.6: Example of join of two a ne functions of two variables, shown in (a) and (b), respectively. The result is shown in (c).
Figure 5.4a and t 2 2 T represented in Figure 5.4b, where 2 T is a leaf node. Let d def = > B and let C def = C (d) = ;. From Example 5.2.3, the result of the tree unification are the decision trees represented in Figure 5.4c and Figure 5.4d, respectively. The decision tree returned by the function a-join-aux is represented in Figure 5.7.
Figure 5
5 Figure 5.7: Tree approximation join of Figure 5.4a and Figure 5.4b.
Figure 5
5 Figure 5.8: Tree pruning of Figure 5.7 with respect to { x + 2 0}.
=
{x, y}. We consider, as numerical abstract domain D, the intervals abstract domain hB, v B i. Let c 2 C B be the linear constraint x 0 and let 92 5. Piecewise-Defined Ranking Functions D 2 B be the interval hy, [0, +1]i, which is isomorphic to the set of constraints {y 0}. The backward assignment x := x y produces the set of linear constraints C def = B-ASSIGN C J x := x y KD(c) = {x 0, y 0}, since the linear constraint x y 0, obtained by replacing x with x y in c, cannot be exactly represented in the intervals abstract domain.
5. 2 .
2 Decision Trees Abstract Domain 93 Algorithm 11 : Tree Assignment 1: function assign-auxJ X := aexp K(D, t, C) . D 2 D, t 2 T , C 2 P(C) 2:
C J X := aexp Kd(t.c) 6: J b-assign C J X := aexp Kd(¬t.c) 7:
def = x + 1 and the backward assignment x := x + [1, 2]. Let D 2 B be the interval hx, [1, 2]i and let d 2 B be defined as d def = > B . The backward assignment produces the interval R def = hx, [ 1, 1]i and the a ne function f 0 2 F A \ {? F , > F } defined as f 0 (x) def = x + 4, since substituting the expression x + [1, 2] to the variable x within f gives f (x+[1, 2])+1 = x+[1, 2]+1+1 = x+[3, 4] and max{3, 4} = 4. The operator B-ASSIGN T J X := aexp K : D ! T ! T for handling backward assignments within decision trees is implemented by Algorithm 11: the 94 5. Piecewise-Defined Ranking Functions
Figure 5 .
5 Figure 5.10: Unsound abstraction (b) of a most precise ranking function (a).
are shown in Figure 5.11a and Figure 5.11b, respectively. Their domains of definition D 1 2 D and D 2 2 D are represented by the set of linear constraints C 1 def = C (D 1 ) = {x 6 0, x + 10 0} and C 2 def = C (D 2 ) = { x + 5 0}. In Figure 5.11d and Figure 5.11e are represented the hypographs f 1 [D 1 ]# and f 2 [D 2 ]# of f 1 and f 2 , respectively. Their convex-hull is shown in Figure 5.11e, and the result of the extrapolation is the a ne function f 2 F A \ {? F , > F } shown in Figure 5.11c.
Figure 5
5 Figure 5.11: Example of extrapolation between two a ne functions of one variable, shown in (a) and (b), respectively. The hypographs of (a) and (b) are respectively shown in (d) and (e). Their convex-hull is shown in (f) together with the a ne functions lying on its boundaries. The result of the extrapolation is shown in (c).
Example 5.3. 4
4 Let us consider again the program of Example 5.1.1 [PR04a]: while 1 (x 0) do 2 x := 2x + 10 od 3
NODE{x y+r 1 0} : (LEAF : x. y. r. 10); (LEAF : x. y. r. 4); LEAF : x. y. r. 1 and the widening corrects the ranking function yielding a fixpoint: NODE{x y + r 1 0} : (LEAF : > F ); (LEAF : x. y. r. 4); LEAF : x. y. r. 1
Figure 6 . 1 :x
61 Figure 6.1: Transitions between states at program control point 1 for the program of Example 6.1.1. There is an edge from any node where y has value k > 0 (and y > 0) to all nodes where y has value k 1 (and y has any value).In every node we indicate the maximum number of loop iterations needed to reach a blocking state with no successors.
2.2) Ordinal multiplication generalizes the multiplication of natural numbers. It is associative, i.e. (↵ • ) • = ↵ • ( • ), and left distributive, i.e. ↵ • ( + ) = (↵ • ) + (↵ • ). However, commutativity does not hold, e.g. 2 • ! = ! 6 = ! • 2, and neither does right distributivity, e.g. (! + 1) • ! = ! • ! 6 = ! • ! + !.
Figure 6 . 2 :
62 Figure 6.2: Hasse diagrams for approximation order 4 F [D] (a) and computational order v F [D] (b) of the ordinal-valued functions abstract domain.
Domain. We can now use the family of abstract domains W to lift the decision trees abstract domain to T(D, C, W(F)).
6. 3 .
3 Decision Trees Abstract Domain 123 Algorithm 19 : Ordinal-Valued Function Approximation Join 1: function a-w-join(D, p 1 , p 2 ) . D 2 D, p 1 , p 2 2 W \ {? W , > W } 2: return w-join(g F , D, p 1 , p 2 ) Algorithm 20 : Ordinal-Valued Function Computational Join 1: function c-w-join(D, p 1 , p 2 ) . D 2 D, p 1 , p 2 2 W \ {? W , > W } 2: return w-join(t F , p 1 , p 2 , C) Example 6.3.2 Let X def = {x, y}, let D be the intervals abstract domain hB, v B i, and let F be the a ne functions auxiliary abstract domain hF A , 4 F i. We consider the ordinal-valued functions p 1 def = ! • x + y and p 2def = ! • (x 1) y and we assume that their domain of definition D 2 D is defined as D def = > B . The join of p 1 and p 2 is carried out in ascending powers of ! starting from the coe cients f 1 0 def = y and f 2 0 def
124 6 .
6 Ordinal-Valued Ranking Functions Algorithm 21 : Ordinal-Valued Function Assignment 1: function w-assignJ X := aexp K(d, D, p) 2:
=
{x, y}, let D be the intervals abstract domain hB, v B i, and let F be the a ne functions auxiliary abstract domain hF A , 4 F i. We consider the ordinal-valued functions p def = ! • x + y and the non-deterministic assignment x := ?. We assume that the domain of definition D 2 D and d 2 B are defined as D def = > B and d def = > B . The assignment is carried out in ascending powers of ! starting from the coe cient f 0 def = y, which remains unchanged since the assignment only involves the variable x, whereas the coe cient f 1 def =
126 6 .
6 Ordinal-Valued Ranking Functions Algorithm 22 : Ordinal-Valued Function Extrapolation 1: function w-widen(D 1 , D 2 , p 1 , p 2 ) 2:
Example 6.4. 4
4 Let us consider again the program from[START_REF] Cook | Proving Program Termination[END_REF]:while 1 (0 < x ^0 < y) do if 2 ( ? )
is an involved variation of Example 6.1.1. Each loop iteration, when
Figure 6 . 3 :
63 Figure 6.3: Transitions between states at control point 1 for the program in Figure 6.4.5. There is an edge from any node where x has value k > 0 (and
Figure 7.1: Syntax of our programming language extended with procedures.
Figure 7 . 3 :
73 Figure 7.3: Final control point of stmt and prog.
Figure 7 . 4 :
74 Figure 7.4: Transition semantics of stmt and ret.
8
Let us consider the recursive version of the program of Example 4.1.2: The set of program environments E contains the functions ⇢ : {x} ! Z mapping the program variable x to any possible value ⇢(x) 2 Z. The set of program states ⌃ def = {1, 2, 3, 4, 5, 6, 7, 8}⇥E consists of all pairs of numerical labels and environments; the extended initial states are {"} ⇥ I
6
and where the semantics ⌧ +1 J stmt K P S : P(⌃ +1 ) ! P(⌃ +1 ) of each program instruction stmt is defined in Figure7.5. Example 7.2.5 Let us consider the recursive version of the program of Example 4.2.3: The set of program environments E only contains the totally undefined function ⇢ : ; ! Z, since the program has no variables. The set of all program states is ⌃ def = {1, 2, 3, 4, 5, 6} ⇥ E, while the set of initial states is I def = {5} ⇥ E, and the set of final states is
Figure 7 . 6 :
76 Figure 7.6: Fixpoint iterates of the maximal trace semantics for Example 7.2.5.
NODE{x 3 0} : LEAF : ? F ; NODE{x 2 0} : (LEAF : x. 7); (LEAF : x. 3)
Figure 8 . 1 :
81 Figure 8.1: Comparison of di↵erent parameterizations of FuncTion.
Figure 8 . 2 :
82 Figure 8.2: Overview of the experimental evaluation.
1
http://sv-comp.sosy-lab.org/2015/results/Termination.table.html
Figure 8
8 Figure 8.3: Detailed comparison of our prototype FuncTion against its previous version [Urb15] (a), AProVE [SAF + 15] (b), HIPTnT+ [LQC15] (c), and Ultimate [HDL + 15] (d).
FuncTionFigure 8
8 Figure 8.4: crafted: overview of the experimental evaluation.
Figure 8
8 Figure 8.5: Detailed comparison of FuncTion against AProVE [SAF + 15].
Figure 8
8 Figure 8.6: crafted-lit: overview of the experimental evaluation.
Figure 8 .Figure 8
88 Figure 8.7: Detailed comparison of FuncTion against its previous version [Urb15].
Figure 8
8 Figure 8.8: memory-alloca: overview of the experimental evaluation.
Figure 8 . 9 :
89 Figure 8.9: Detailed comparison of FuncTion against HIPTnT+ [LQC15].
Figure 8 .
8 Figure 8.10: numeric: overview of the experimental evaluation.
Figure 8 .
8 Figure 8.11: Detailed comparison of FuncTion against Ultimate Automizer [HDL + 15].
Let us consider again the program of Example 9.1.1:while 1 (x 0) do 2 x := x + 1 od while 3 ( true ) do if 4 (x 10) then 5 x := x + 1 then 6 x := x fi od 7An example of safety property is the formula 2(x = 3), which is never satisfied by the program. Instead, since the first while loop increases the value of the variable x at each iteration, the safety property 2(3 x) is satisfied when the program initial states are limited to the set {h1, ⇢i 2 ⌃ | 3 ⇢(x)}.
e 2 L respectively denote the initial and final program control point. Example 9.1.3 Let us consider again the program of Example 9.1.1:while 1 (x 0) do 2 x := x + 1 od while 3 ( true ) do if 4 (x 10)
c
: true) where l c 2 L is the program control point representing the critical section. Example 9.1.4 Let us consider again the program of Example 9.1.1: while 1 (x 0) do 2 x := x + 1 od while 3 ( true ) do if 4 (x 10) then 5 x := x + 1 then 6 x := x fi od 7
Let us consider again the program of Example 9.1.1:while 1 (x 0) do 2 x := x + 1 od while 3 ( true ) do if 4 (x 10) then 5 x := x + 1 then 6 x := x fi od 7
.2.5) Example 9.2.4 Let us consider the following trace semantics: where the highlighted states are the set S of desirable states. The fixpoint iterates of the guarantee semantics ⌧ g [S] 2 ⌃ * O are: are outside the domain of the function.
'
g : (E * O) ! (E * O) within hE * O, vi, analogously to Equation 9.2.5:
where the function ⌧ ' g J stmt K : (E * O) ! (E * O) is the guarantee semantics of each program instruction stmt.
Example 9.3.1 Let T def = {(cd) !, ca ! , d(be) ! } and let S def = {b, c, d}. For i = 0, we haveT 1 = ↵ g [S]T = {b, eb, c, d}. For i = 1, we derive S 1 = {b, d}, since c(dc) ! 2 T and pf((dc) ! ) \ T 1 = {d} 6 = ; but ca ! 2 T and pf(a ! ) \ T 1 = ;. We getT 2 = ↵ g [S 1 ]T = {b, eb, d}. For i = 2, we derive S 2 = {b}, since d(be) ! 2 T and pf((be) ! ) \ T 1 = {b} 6 = ; but d(cd) ! 2 T and pf((cd) ! ) \ T 2 = ;. We getT 3 = ↵ g [S 2 ]T = {b,eb} which is the greatest fixpoint: the only subsequences of sequences in T that guarantee S infinitely often start with b or eb. We can now define the recurrence semantics ⌧ r [S] 2 ⌃ * O: Definition 9.3.2 (Recurrence Semantics) Given a desirable set of states S ✓ ⌃, the recurrence semantics ⌧ r [S] 2 ⌃ * O is an abstract interpretation of the maximal trace semantics ⌧ +1 2 P(⌃ +1 ) (cf. Equation 2.2.5): ⌧ r [S]
def
is not need to redefine ⌧ g [S] at each iterate since the various subsequences of a program traces manipulated by the recurrence abstraction (cf. Equation 9.3.1) have been abstracted into program states. The rest of the chapter refers to this simplified version of the recurrence semantics. Example 9.3.4 Let us consider the following trace semantics: where the highlighted states are the set S of desirable states. The fixpoint iterates of the recurrence semantics ⌧ r [S] 2 ⌃ * O are: are outside the domain of the function. Let ' be a state property. The '-recurrence semantics ⌧ ' r 2 ⌃ * O: complete for proving a recurrence property 23': Theorem 9.3.5 A program satisfies a recurrence property 23' for execu- tion traces starting from a given set of states I if and only if I ✓ dom(⌧ ' r ).
Figure 9.2: Example of dual widening of piecewise-defined functions of one variable, shown in (a) and (b), respectively. The result is shown in (c).
Figure 9.3: Abstract recurrence semantics of instructions stmt.
Let C def =
def ⌧ Mt J if l bexp then stmt 1 else stmt 2 fi K T [D]t and let A def = T [R](F \ 1 [t] g T [R] F \ 2 [t]). By structural induction, we have ⌧ Mt J stmt 1 K T [D]t 4 T [R](⌧ \ Mt J stmt 1 Kt) and ⌧ Mt J stmt 2 K T [D]t 4 T [R](⌧ \ Mt J stmt 2 Kt).We prove that dom(C) ◆ dom(A) and 8⇢ 2 dom(A) : C(⇢) A(⇢) (cf. Equation 4.2.12).
S 1
1 denote ⌧ Mt J stmt 1 K T [D]t and let S 2 denote ⌧ Mt J stmt 2 K T [D]t. We have, by definition of ⌧ Mt J if l bexp then stmt 1 else stmt 2 fi, C(⇢) = S 1 (⇢) + 1 or C(⇢) = S 2 (⇢) + 1 or C(⇢) = sup{S 1 (⇢) + 1, S 2 (⇢) + 1}. Let S \ 1 denote T [R](⌧ \ Mt J stmt 1 Kt) and let S \ 2 denote T [R](⌧ \ Mt J stmt 2 Kt).
⌅
Lemma 5.2.16 Let F \ 1 [x] def = (FILTER T J bexp KR)(⌧ \ Mt J stmt Kx) and F \ 2 [t]
2.16. Let C def = Mt ( D [R]x) and let A def = T [R]( \ Mt (x)). By structural induction, we have⌧ Mt J stmt K T [R]x 4 T [R](⌧ \ Mt J stmt Kx).We prove that dom(C) ◆ dom(A) and 8⇢ 2 dom(A) : C(⇢) A(⇢) (cf. Equation 4.2.12).Let us assume, by absurd, that dom(C) ⇢ dom(A). Then, there exists an environment ⇢ 2 E such that ⇢ 2 dom(A) and ⇢ 6 2 dom(C). Since ⇢ 2 dom(A), by definition of 4 T [D] (cf. Algorithm 6) and FILTER T (cf. Algorithm 12), we have⇢ 2 dom( T [R](⌧ \ Mt J stmt Kx)) or ⇢ 2 dom( T [D]t),or both. In fact, Algorithm 12 prunes a decision tree (cf. Line 12) and Algorithm 6 favors undefined leaf nodes over defined leaf nodes (cf. Equation 5.2.13). Thus, by structural induction, we have ⇢ 2 dom(⌧ Mt J stmt K T [R]x) or ⇢ 2 dom( T [D]t), or both. In absence of run-time errors, by definition of Mt (cf. Equation 4.3.5), we have ⇢ 2 dom(C) which is absurd. Therefore, we conclude that dom(C) ◆ dom(A). Let us assume now, by absurd, that 9⇢ 2 dom(A) : C(⇢) > A(⇢). Let S denote ⌧ Mt J stmt K T [R]x. We have, by definition of Mt , C(⇢) = S 1 (⇢) + 1 or C(⇢) = ( T [D]t)(⇢) + 1 or C(⇢) = sup{S 1 (⇢) + 1, ( T [D]t)(⇢) + 1}. Let S \ denote T [R](⌧ \
n O \ Mt (y n ) (cf. Equation 5.2.24).
and only if the property P is implied by the collecting semantics {S} ∈ P(P(Σ+∞)) of the program, that is, if and only if {S} ⊆ P. In practice, a weaker form of program correctness is often used. In fact, the traditional safety and liveness properties are trace properties and thus can be modeled as sets of traces. In this case, a program satisfies a property P ∈ P(Σ+∞) if and only if its semantics S ∈ P(Σ+∞) implies the property P, that is, if and only if S ⊆ P. Let us consider a program with semantics S ∈ P(Σ+∞). We can model the property of program termination as the set of all non-empty finite sequences Σ+. The program terminates if and only if S ⊆ Σ+.
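As a toy illustration of this set-based view (a sketch only: traces are modelled as tuples, and infinite traces are represented symbolically by a trailing marker, an assumption made for the example), program correctness reduces to a set inclusion check:

OMEGA = "..."                                   # marker standing in for an infinite suffix

S_terminating = {("s0", "s1"), ("s0", "s2", "s3")}
S_looping     = {("s0", "s1"), ("s0", "s1", OMEGA)}

def satisfies_termination(S):
    # S ⊆ Σ+ : every trace is finite and non-empty
    return all(len(t) > 0 and OMEGA not in t for t in S)

print(satisfies_termination(S_terminating))     # True
print(satisfies_termination(S_looping))         # False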
Example 2.2.13
is an over-approximation of the set of environments τI(l).
In some cases, there also exists a Galois connection (cf. Remark 2.2.19):

⟨P(E), ⊆⟩ ⇄(αD, γD) ⟨D, ⊑D⟩

By pointwise lifting (cf. Equation 2.1.1) we obtain the following:

⟨L → P(E), ⊆̇⟩ ⟵(γ̇D) ⟨L → D, ⊑̇D⟩

or, when an abstraction function αD : P(E) → D also exists:

⟨L → P(E), ⊆̇⟩ ⇄(α̇D, γ̇D) ⟨L → D, ⊑̇D⟩

which provides a (concretization-based) abstraction of the invariance semantics τI ∈ L → P(E) by partitioning with respect to the program control points. No approximation is made on L. On the other hand, each program control point l ∈ L is associated with an element d ∈ D.
Numerical Abstract Domains. The abstract domain ⟨D, ⊑D⟩ is a numerical abstract domain and it obeys the following signature:

Definition 3.4.1 (Numerical Abstract Domain) A numerical abstract domain is characterized by a choice of:
Theorem 4.2.15 (Potential Termination Semantics) The potential termination semantics τmt ∈ Σ ⇀ O of a program can be expressed as a least fixpoint in the partially ordered set ⟨Σ ⇀ O, ⊑⟩:
Thus, in the following, we consider all linear constraints in C to have the form c1X1 + … + ckXk + ck+1 ≥ 0. Moreover, in order to ensure a canonical representation of the linear constraints, we require gcd(|c1|, …, |ck|, |ck+1|) = 1:
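As a small illustration of this canonical form (a sketch under the assumption of integer coefficients, not code from the analyzer), normalizing a constraint simply divides all coefficients by their greatest common divisor:

from math import gcd
from functools import reduce

def canonicalize(coeffs):
    """coeffs = [c1, ..., ck, c_{k+1}] for the constraint c1*X1 + ... + ck*Xk + c_{k+1} >= 0."""
    g = reduce(gcd, (abs(c) for c in coeffs))
    return list(coeffs) if g == 0 else [c // g for c in coeffs]

print(canonicalize([4, -6, 10]))   # -> [2, -3, 5], i.e. 2*X1 - 3*X2 + 5 >= 0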
Algorithm 1 : Tree Unification
1: function unification-aux(t1, t2, C)    ▷ t1, t2 ∈ T_NIL, C ∈ P(C)
5:   if ¬isNode(t1) ∧ ¬isNode(t2) then return (t1, t2)
6:   else if ¬isNode(t1) ∨ (isNode(t1) ∧ isNode(t2) ∧ t1.c <C t2.c) then
7:     if isRedundant(t2.c, C) then
8:       return unification-aux(t1, t2.l, C)
9:     else if isRedundant(¬t2.c, C) then
10:      return unification-aux(t1, t2.r, C)
11:    else    ▷ t2.c can be added to t1 and kept in t2
12:      (l1, l2) ← unification-aux(t1, t2.l, C ∪ {t2.c})
13:      (r1, r2) ← unification-aux(t1, t2.r, C ∪ {¬t2.c})
14:      return (NODE{t2.c} : l1; r1, NODE{t2.c} : l2; r2)
15:  else if ¬isNode(t2) ∨ (isNode(t1) ∧ isNode(t2) ∧ t2.c <C t1.c) then
16:    if isRedundant(t1.c, C) then
17:      return unification-aux(t1.l, t2, C)
18:    else if isRedundant(¬t1.c, C) then
19:      return unification-aux(t1.r, t2, C)
20:    else    ▷ t1.c can be kept in t1 and added to t2
21:      (l1, l2) ← unification-aux(t1.l, t2, C ∪ {t1.c})
22:      (r1, r2) ← unification-aux(t1.r, t2, C ∪ {¬t1.c})
23:      return (NODE{t1.c} : l1; r1, NODE{t1.c} : l2; r2)
24:  else if isNode(t1) ∧ isNode(t2) then
25:    c ← t1.c    ▷ t1.c and t2.c are equal
26:    if isRedundant(c, C) then
27:      return unification-aux(t1.l, t2.l, C)
28:    else if isRedundant(¬c, C) then
29:      return unification-aux(t1.r, t2.r, C)
30:    else    ▷ c can be kept in t1 and t2
31:      (l1, l2) ← unification-aux(t1.l, t2.l, C ∪ {c})
32:      (r1, r2) ← unification-aux(t1.r, t2.r, C ∪ {¬c})
33:      return (NODE{c} : l1; r1, NODE{c} : l2; r2)
34:
35: function unification(D, t1, t2)    ▷ D ∈ D, t1, t2 ∈ T_NIL
36:
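The following Python sketch illustrates the same unification idea on a deliberately simplified tree representation: constraints are plain strings ordered lexicographically, and the redundancy checks with respect to the accumulated constraints are omitted. It is an illustration of the technique only, not the analyzer's code.

class Leaf:
    def __init__(self, f): self.f = f

class Node:
    def __init__(self, c, left, right): self.c, self.left, self.right = c, left, right

def unify(t1, t2):
    """Return structurally identical copies of t1 and t2 testing the same constraints."""
    if isinstance(t1, Leaf) and isinstance(t2, Leaf):
        return t1, t2
    if isinstance(t1, Leaf) or (isinstance(t2, Node) and t2.c < t1.c):
        l1, l2 = unify(t1, t2.left)          # split t1 on t2's root constraint
        r1, r2 = unify(t1, t2.right)
        return Node(t2.c, l1, r1), Node(t2.c, l2, r2)
    if isinstance(t2, Leaf) or t1.c < t2.c:
        l1, l2 = unify(t1.left, t2)          # split t2 on t1's root constraint
        r1, r2 = unify(t1.right, t2)
        return Node(t1.c, l1, r1), Node(t1.c, l2, r2)
    l1, l2 = unify(t1.left, t2.left)         # equal root constraints
    r1, r2 = unify(t1.right, t2.right)
    return Node(t1.c, l1, r1), Node(t1.c, l2, r2)

a = Node("x>=1", Leaf(1), Leaf(0))
b = Node("y>=0", Leaf(2), Leaf(3))
u1, u2 = unify(a, b)    # both results now test x>=1 and then y>=0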
f
4: 5: else if isNode(t 1 ) ^isNode(t 2 ) then c t 1 .c
6: 7: 8: l r return l order-aux(t 1 .l, t 2 .l, C [ {c}) order-aux(t 1 .r, t 2 .r, C [ {¬c}) ^r
9:
10: function order(E,
D, t 1 , t 2 ) . D 2 D, t 1 , t 2 2 T , E 2 {4 F , v F } 11:
(t 1 , t 2 ) unification(D, t 1 , t 2 )
12:
The join of decision trees represents a ranking function defined over the union of their partitions. It is implemented by Algorithm 5: the function Algorithm 5 : Tree Join 1: function join-aux( , t 1 , t 2 , C) . t 1 , t 2 2 T NIL , C 2 P(C), 2 {g F , t F }
Algorithm 4 : Tree Computational Order
1: function c-order(D, t 1 , t 2 ) 2: return order(v F , D, t 1 , t 2 ) . D 2 D, t 1 , t 2 2 T
2: if isNil(t 1 ) then return t 2
3: else if isNil(t 2 ) then return t 1
4:
else if isLeaf(t 1 ) ^isLeaf(t 2 ) then 5:
return LEAF : t 1 .f g F [↵ C (C)] t 2 .f
88 5. Piecewise-Defined Ranking Functions
Algorithm 8 : Tree Meet
1: function meet-aux(t 1 , t 2 , C) 2: if isNil(t 1 ) _ isNil(t 2 ) then return NIL 3: else if isLeaf(t 1 ) ^isLeaf(t 2 ) then 4: . t 1 , t 2 2 T NIL , C 2 P(C)
5: 6: else if isNode(t 1 ) ^isNode(t 2 ) then c t 1 .c
7: (l 1 , l 2 ) meet-aux(t 1 .l, t 2 .l, C [ {c})
2.19)
8:
(r 1 , r 2 ) meet-aux(t 1 .r, t 2 .r, C [ {¬c})
9:
return (NODE{c} : l 1 ; r 1 , NODE{c} : l 2 ; r 2 ) 10:
11: function meet(D, t 1 , t 2 ) . D 2 D,
t 1 , t 2 2 T NIL 12: (t 1 , t 2 ) unification(D, t 1 , t 2 ) 13: return meet-aux(t 1 , t 2 , C (D)) Algorithm 9 : Tree Step 1: function step(t) . t 2 T 2: if isLeaf(t) then 3: return LEAF : step F (t.f ) 4: else if isNode(t) then 5:
).
96 5. Piecewise-Defined Ranking Functions
Algorithm 12 : Tree Filter
1: function filterJ bexp K(D, t) 2: switch bexp do . D 2 D, t 2 T
3: case ? : return t
4: case not bexp :
5: 6: return filterJ ¬bexp K(D, t) case bexp 1 and bexp 2 :
7: return meet(D, filterJ bexp 1 K(D, t), filterJ bexp 2 K(D, t))
Proof.
See Appendix A.3. ⌅
8:
case bexp 1 or bexp 2 : 9: return a-join(D, filterJ bexp 1 K(D, t), filterJ bexp 2 K(D, t)) 10: case aexp 1 ./ aexp 2 : 11: J filter C J aexp 1 ./ aexp 2 KD 12:
then return t 2 .f
4: else return LEAF : > F 5: else if isNode(t 1 ) ^isNode(t 2 ) then 6: l caseA-aux(t 1 .l, t 2 .l, C [ {t 2 .c}) 7: r caseA-aux(t 1 .r, t 2 .r, C [ {¬t 2 .c}) 8: return NODE{t 2 .c} : l; r 9: 10: function caseA(D, t 1 , t 2 ) . D 2 D, t 1 , t 2 2 T 11: (t 1 , t 2 ) unification(D, t 1 , t 2 ) 12:
, t 2 2 T , C 2 P(C), 2 {g F , t F } 3: if isLeaf(t 1 ) ^isLeaf(t 2 ) then return (t 1 , t 2 ) 4: else if isLeaf(t 1 ) _ (isNode(t 1 ) ^isNode(t 2 ) ^t1 .c < C t 2 .c) then D, t 1 , t 2 2 T , 2 {g F , t F }
5.2. Decision Trees Abstract Domain 102 101 5. Piecewise-Defined Ranking Functions
Algorithm 15 : Tree Left Unification
1: function left-unification-aux( , t 1 , t 2 , C)
2: . t 1 5: if isRedundant(t 2 .c, C) then
30:
31: function left-unification( , D, t 1 , t 2 )
32: . D 2 33: return left-unification-aux(t 1 , t 2 , C (D))
6:
return left-unification-aux(t 1 , t 2 .l, C) 7: else if isRedundant(¬t 2 .c, C) then 8: return left-unification-aux(t 1 , t 2 .r, C) 9: else . t 2 .c should be removed from t 2 10: return left-unification-aux(t 1 , t 2 , join( , ↵ C (C), t 2 .l, t 2 .r)) 11: else if isLeaf(t 2 ) _ (isNode(t 1 ) ^isNode(t 2 ) ^t2 .c < C t 1 .c) then 12: if isRedundant(t 1 .c, C) then 13: return left-unification-aux(t 1 .l, t 2 , C) 14: else if isRedundant(¬t 1 .c, C) then 15: return left-unification-aux(t 1 .r, t 2 , C) 16: else . t 1 .c can be kept in t 1 and added to t 2 17: (l 1 , l 2 ) left-unification-aux(t 1 .l, t 2 , C [ {t 1 .c}) 18: (r 1 , r 2 ) left-unification-aux(t 1 .r, t 2 , C [ {¬t 1 .c}) 19: return (NODE{t 1 .c} : l 1 ; r 1 , NODE{t 1 .c} : l 2 ; r 2 ) 20: else if isNode(t 1 ) ^isNode(t 2 ) then 21: c t 1 .c . t 1 .c and t 2 .c are equal 22: if isRedundant(c, C) then 23: return left-unification-aux(t 1 .l, t 2 .l, C) 24: else if isRedundant(¬c, C) then 25: return left-unification-aux(t 1 .r, t 2 .r, C) 26: else . c can be kept in t 1 and t 2 27: (l 1 , l 2 ) left-unification-aux(t 1 .l, t 2 .l, C [ {c}) 28: (r 1 , r 2 ) left-unification-aux(t 1 .r, t 2 .r, C [ {¬c}) 29:
return (NODE{c} : l 1 ; r 1 , NODE{c} : l 2 ; r 2 )
f
5.2. Decision Trees Abstract Domain 107
x
6 10
(a)
33: return f
34: else return t 2 .f
35: 36: 37: else if isNode(t 1 ) ^isNode(t 2 ) then l widen-aux(t, t 1 .l, t 2 .l, C [ {t 2 .c}) r widen-aux(t, t 1 .r, t 2 .r, C [ {¬t 2 .c})
38:
return NODE{t 2 .c} : l; r
… then
9:       carry ← true
10:     else
11:       if carry then
12:         r[i] ← STEP_F(f)
13:         carry ← false
14:       else
15:         r[i] ← f
16:     i ← i + 1
17:
18:   if ¬carry then return r
19:   else    ▷ maximum degree of the polynomial exceeded
20:
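The carry logic above can be illustrated with a small sketch (an illustration only, not the analyzer's code): an ordinal-valued bound is represented by the list of coefficients of its ω-polynomial, lowest degree first, and a coefficient marked as unbounded (TOP, an assumed marker for this example) forces the increment to carry over to the next degree.

TOP = None   # assumed marker for an unbounded coefficient

def step(coeffs):
    """Increment an ordinal written as c0 + c1*w + c2*w^2 + ..., given as [c0, c1, c2, ...]."""
    r, carry = [], True
    for f in coeffs:                # lowest degree first
        if f is TOP:
            r.append(TOP)
            carry = True            # push the increment to a higher degree
        elif carry:
            r.append(f + 1)         # step this bounded coefficient
            carry = False
        else:
            r.append(f)
    if not carry:
        return r
    raise OverflowError("maximum degree of the polynomial exceeded")

print(step([2, 1]))     # [3, 1]    : (2 + w) stepped in its constant coefficient
print(step([TOP, 4]))   # [None, 5] : the increment carries into the w coefficient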
                     Tot   Time   Timeouts
FuncTion              20   0.1s   1
AProVE [SAF+15]       20   4.5s   3
FuncTion [Urb15]      18   0.2s   0
HIPTnT+ [LQC15]       19   0.3s   1
Ultimate [HDL+15]     22   7.4s   1
(a)
The predicate l : bexp allows specifying a program state property at a particular program control point l ∈ L. When a program state s ∈ Σ satisfies the property φ, we write s ⊨ φ and we say that s is a φ-state. We also slightly abuse notation and write φ to also denote the set {s ∈ Σ | s ⊨ φ} of states that satisfy the property φ.

Example 9.1.1
Let us consider the following program:

while 1 (x ≤ 0) do
  2 x := x + 1
od
while 3 ( true ) do
  if 4 (x ≤ 10) then 5
T ✓ ⌃ +1 is the set of sequences 0 2 T with a common prefix with (cf. Equation 4.2.3). A finite subsequence of a program trace satisfies a guarantee property if and only if it terminates in the desirable set of states (and never encounters a desirable state before), and its neighborhood in the subsequences of the program semantics consists only of sequences that are terminating in the desirable set of states (and never encounter a desirable state before). The corresponding guarantee abstraction ↵ g [S] : P(⌃ +1 ) ! P(⌃ + ) is parameterized by a set of desirable states S ✓ ⌃ and it is defined as follows: S and the function sf : P(⌃ +1 ) ! P(⌃ +1 ) yields the set of su xes of a set of sequences T ✓ ⌃ +1 :
↵ g [S]T def = s 2 sq(T ) 2 S⇤ , s 2 S, nhbd( , sf(T ) \ S+1 ) = ; (9.2.2)
where S def = ⌃ \ sf(T ) def = [ { 2 ⌃ +1 | 9 0 2 ⌃ ⇤ : 0 2 T }. (9.2.3)
Example 9.2.1
Let T def = {(abcd) ! , (cd) ! , a ! , cd ! } and let S
def
= {c}. We have sf(T ) \ S+1 = {a !
In this way, to each program control point l 2 L corresponds a partial function f : E * O, and to each program instruction stmt corresponds a guarantee semantics ⌧ ' g J stmt K : (E * O) ! (E * O). The guarantee semantics ⌧ ' g J stmt K : (E * O) ! (E * O) of each program instruction stmt takes as input a ranking function whose domain represents the environments always leading to ' from the final label of stmt, and determines the ranking function whose domain represents the environments always leading to ' from the initial label of stmt, and whose value represents an upper bound on the number of program execution steps remaining to '.
9.2.2 Denotational Guarantee Semantics
In the following, we provide a structural definition of the '-guarantee seman-
tics ⌧ ' g 2 ⌃ * O by induction on the syntax of programs written in the small language presented in Chapter 3.
We partition ⌧ ' g with respect to the program control points: ⌧ ' g 2 L ! (E * O).
Theorem 9.2.5 A program satisfies a guarantee property ◇φ for execution traces starting from a given set of initial states I if and only if I ⊆ dom(τ^φ_g).
Proof.
See Appendix A.6. ⌅
. . , X k . 0 RESET F (> F ) Algorithm 25 : Tree Dual Widening 1: function dual-widen-aux(t 1 , t 2 , C) . t 1 , t 2 2 T , C 2 P(C) if t 2 .f v F [↵ C (C)] t 1 .fthen return t 2 .f -aux(t 1 .l, t 2 .l, C [ {t 2 .c}) -aux(t 1 .r, t 2 .r, C [ {¬t 2 .c}) function dual-widen(d,t 1 ,t 2 ) . d 2 D, t 1 , t 2 2 T 11: (t 1 , t 2 ) left-unification(g F , d, t 1 , t 2 ) . domain widening
9.3. Recurrence Semantics 191
2: 3: if isLeaf(t 1 ) ^isLeaf(t 2 ) then
4: 5: 6: dual-widen7: else return LEAF : ? F else if isNode(t 1 ) ^isNode(t 2 ) then l r dual-widen8: return NODE{t 2 .c} : l; r
9:
10: 12:
(9.3.13)
def = > F
FILTER T J bexp KR)(⌧ \ Mt J stmt 1 Kt) and F \ 2 [t] def = (FILTER T J not bexp KR)(⌧ \ Mt J stmt 2 Kt).Then, for all t 2 T , we have:
⌅
Lemma 5.2.15 Let F \ 1 [t]
def = (
Abstract Interpretation
A Small Imperative Language
3.2. Maximal Trace Semantics
An Abstract Interpretation Framework for Termination
x := x 1 od
Future Directions
In the following example, the widening is required for convergence. We present the analysis of the program using polyhedral constraints based on the polyhedra abstract domain (cf. Section 3.4.2) for the decision nodes, and a ne functions for the leaf nodes (cf. Equation 5.2.6).
The starting point is the zero function at the program final control point:
3 : LEAF : λx. λy. λr. 0
The ranking function is then propagated backwards towards the program initial control point taking the loop into account:
1 : NODE{r − 1 ≥ 0} : (LEAF : ⊥F); (LEAF : λx. λy. λr. 1)
The first iterate of the loop is able to conclude that the program terminates in at most one program step if the loop condition r > 0 is not satisfied. Then, at program control point 3, the operator ASSIGN_T replaces the program variable r with the expression r − y within the decision tree: 3 : NODE{y − r ≥ 0} : (LEAF : λx. λy. λr. 2); (LEAF : ⊥F) and at program control point 2, it replaces r with r + x:
2 : NODE{x − y + r − 1 ≥ 0} : (LEAF : ⊥F); (LEAF : λx. λy. λr. 3)
The second iterate of the loop concludes that the program terminates in at most four execution steps, when the loop is executed only once, and in at most one program step, when the loop is not entered:
1 : NODE{r − 1 ≥ 0} : NODE{x − y + r − 1 ≥ 0} : (LEAF : ⊥F); (LEAF : λx. λy. λr. 4); LEAF : λx. λy. λr. 1
The third iterate concludes that the program terminates in at most seven execution steps when the loop is executed twice, in at most four steps when the loop is executed once, and in at most one step when the loop is not entered:
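The piecewise-defined ranking function inferred after the second iterate can be read directly off the decision tree above; the following sketch (an illustration, not generated by the tool) evaluates it, with None standing for the undefined leaf ⊥F:

def rank_after_second_iterate(x, y, r):
    if r - 1 >= 0:                   # the loop condition r > 0 may hold
        if x - y + r - 1 >= 0:
            return None              # termination not yet bounded after two iterates
        return 4                     # the loop is executed exactly once
    return 1                         # the loop is not entered

print(rank_after_second_iterate(x=0, y=1, r=1))   # -> 4
print(rank_after_second_iterate(x=5, y=1, r=1))   # -> None
print(rank_after_second_iterate(x=0, y=0, r=0))   # -> 1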
We present the analysis of the program using interval constraints based on the intervals abstract domain (cf. Section 3.4.1) for the decision nodes, and ordinal-valued functions for the leaf nodes (cf. Chapter 6).
The starting point is the zero function at the program final control point:
The ranking function is then propagated backwards towards the program initial control point taking into account the call to the recursive procedure f.
At program control point 5 the semantics of the return instruction simply increases the value of the ranking function:
The semantics of the skip instruction does the same at program point 4:
Instead, the recursive call at program control point 3 returns:
which is propagated by the variable assignment at program control point 2:
2 : LEAF : ⊥F.
Thus, the first iterate of the call to f is able to conclude that the procedure terminates in at most three execution steps when 1 < x is not satisfied:
Then, during the second iterate, the recursive call at program control point 3 increases the value of the ranking function obtained after the first iterate (plus the ranking function obtained at program control point 5):
3 : NODE{x − 2 ≥ 0} : (LEAF : ⊥F); (LEAF : λx. 5) and, at program control point 2, the operator ASSIGN_T replaces the program variable x with the expression x − 1 within the decision tree:
2 : NODE{x − 3 ≥ 0} : (LEAF : ⊥F); (LEAF : λx. 6).
Thus, the second iterate of the call to f is able to conclude that the procedure terminates in at most seven execution steps, when it calls itself recursively only once, and in at most three program steps when 1 < x is not satisfied:
if isEmpty(J) then or its negation ¬j (cf. Lines 14-20), or continues the descent along the paths of the decision tree (cf. Lines 21-36). In the first case, reset-aux tests j and ¬j for redundancy with respect to C (cf. Lines 15-16): when ¬j is redundant with respect to C the whole decision tree is returned unmodified; otherwise, resetaux adds a decision node for j while leaving unmodified its right subtree (cf. Line 18), if j is already in canonical form (cf. Line 17), or it adds a decision node for for ¬j while leaving unmodified its left subtree (cf. Line 20), if ¬j is the canonical form of j (cf. Line 19). In the second case, reset-aux accumulates in C the encountered linear constraints (cf. Lines 26-27), possibly removing redundant decision nodes (cf. Lines 22-24) and leaving the right
The halting problem is undecidable.
Proof (by Geoffrey K. Pullum).
Scooping the Loop Snooper
No general procedure for bug checks will do. Now, I won't just assert that, I'll prove it to you. I will prove that although you might work till you drop, you cannot tell if computation will stop.
For imagine we have a procedure called P that for specified input permits you to see whether specified source code, with all of its faults, defines a routine that eventually halts.
You feed in your program, with suitable data, and P gets to work, and a little while later (in finite compute time) correctly infers whether infinite looping behavior occurs.
If there will be no looping, then P prints out 'Good.' That means work on this input will halt, as it should.
But if it detects an unstoppable loop, then P reports 'Bad!' -which means you're in the soup.
A. Proofs
Well, the truth is that P cannot possibly be, because if you wrote it and gave it to me, I could use it to set up a logical bind that would shatter your reason and scramble your mind.
Here's the trick that I'll use -and it's simple to do. I'll define a procedure, which I will call Q, that will use P's predictions of halting success to stir up a terrible logical mess.
For
Proof of Lemma 5.2.2. Let D 2 D. We reason by cases. First, we consider the case of defined and undefined leaf nodes. Then, we consider the case of defined leaf nodes.
Let 5.3a. Moreover, from Equation 5.2.9, we have 5.2.9, we have dom(
. Moreover, from Equation 5.2.7, for all ⇢ 2 D (D) we have
). Thus, from Equation 4.2.12, we have
This concludes the proof that F [D] is monotonic.
The approximation ordering 4 T between decision trees is implemented by Algorithm 3: the functions a-order calls (the function order of Algorithm 2, which in turn calls) Algorithm 1 for tree unification, and then compares the decision trees "leaf-wise". Algorithm 1 forces the decision trees to have the same structure. Indeed, the missing linear constraints (cf. Line 6 and Line 15) are added to the decision trees (cf. Line 14 and Line 23). Thus, after the tree unification, all paths to the leaf nodes coincide between the decision trees. Let C 2 C denote the set of linear constraints satisfied along a path of the unified decision trees, and let f 1 , f 2 2 F denote the leaf nodes reached following the path C within the first and the second decision tree, respectively. We have
The proof follows from Lemma 5.2.2.
Proof of Lemma 5.2.8. We have
. We prove that dom(C) ◆ dom(A) and 8⇢ 2 dom(A) : C(⇢) A(⇢) (cf. Equation 4.2.12).
Let us assume, by absurd, that dom(C) ⇢ dom(A). Then, there exists an environment ⇢ 2 E such that ⇢ 2 dom(A) and ⇢ 6 2 dom(C). In particular, ⇢ 2 D (R). Thus, since ⌧ I J l skip K D (R) ✓ D (D) and by definition of ⌧ I J l skip K (cf. Figure 3.9), we have D (R) ✓ D (D) which implies ⇢ 2 D (D). Moreover, since ⇢ 2 dom(A) and by definition of STEP T (cf. Algorithm 9), we must have ⇢ 2 dom( T [D]t). In fact, Algorithm 9 simply invokes STEP F for every leaf node of a decision tree, which leaves undefined leaf nodes unaltered (cf. Equation 5.2.19). Thus, by definition of ⌧ Mt J l skip K (cf. Equation 4.3.1), we have ⇢ 2 dom(C), which is absurd. Therefore, we have dom(C) ◆ dom(A).
Let us assume now, by absurd, that 9⇢ 2 dom(A) : C(⇢) > A(⇢). We have, by definition of
Moreover, by definition of STEP T (cf. Algorithm 9), we have ( T [D]t)(⇢) < A(⇢). In fact, Algorithm 9 invokes STEP F , which increases the value of the defined leaf nodes of a decision tree (cf. Equation 5.2.19). Thus, C(⇢) A(⇢), which is absurd. Therefore, we conclude that 8⇢ 2 dom(A) : C(⇢) A(⇢).
This concludes the proof. Let us assume, by absurd, that dom(C) ⇢ dom(A). Then, there exists an environment ⇢ 2 E such that ⇢ 2 dom(A) and ⇢ 6 2 dom(C). In particular, ⇢ 2 D (R). Thus, since ⌧ I J l X := aexp K D (R) ✓ D (D) and by definition of ⌧ I J l X := aexp K (cf. Figure 3.9), we have 8v 2 J aexp K⇢ : ⇢[X v] 2 D (D). Moreover, since ⇢ 2 dom(A) and by definition of B-ASSIGN T (cf. Algorithm 11), we must have 8v 2
for some z 2 J aexp K⇢, must be represented by an undefined leaf node f 2 F\{? F , > F } in t 2 T (cf. Equation 5.2.11 and Equation 5.2.9). Moreover, Algorithm 11 invokes B-ASSIGN F for every leaf node of a decision tree, which leaves undefined leaf nodes unaltered (cf. Equation 5.2.21), and a-join to handle possibly overlapping partitions (cf. Line 10 and Line 20), which favors undefined leaf nodes over defined leaf nodes (cf. Algorithm 6 and Equation 5.2.13). Thus, by definition of ⌧ Mt J l X := aexp K (cf. Equation 4.3.2) and in absence of run-time errors, we have ⇢ 2 dom(C), which is absurd. Therefore, we conclude that dom(C) ◆ dom(A).
Let us assume now, by absurd, that 9⇢ 2 dom(A) : C(⇢) > A(⇢). We have, by definition of
Proof of Lemma 5.2.24.
. We prove that dom(C) ◆ dom(A) and 8⇢ 2 dom(A) : C(⇢) A(⇢) (cf. Equation 4.2.12).
The proof follows from Lemma 5.2.16, Lemma 5.2.17, and Lemma 5.2.19, and Lemma 5.2.20 and from the definition of O T (cf. Algorithm 13). In fact from Lemma 5.2.16, whenever some iterate y n under-approximates the value of the termination semantics or over-approximates its domain of definition (cf. Figure 5.10), it cannot be the limit of the iteration sequence with widening because it violates either \ Mt (y 5.2.24). Moreover, from Lemma 5.2.17, and Lemma 5.2.19, and Lemma 5.2.20 and from the definition of O T (cf. Algorithm 13), we know that further iterates resolve the issue. In fact, Algorithm 13 specifically invokes caseA and caseBorC to this end. Thus, this concludes the proof since the limit of the iteration sequence with widening must over-approximate the value of the termination semantics and under-approximate its domain of definition.
Proof of Lemma 6.3.1. Let D 2 D. We reason by cases. First, we consider the case of defined and undefined leaf nodes. Then, we consider the case of defined leaf nodes.
Let p 1 2 W \ {? W , > W } and let p 2 2 {? W , > W }. We have p 1 4 W [D]p 2 from Figure 6.2a. Moreover, from Equation 6.3.4, we have W [D]p 2 = ;. Thus, since dom( W [D]p 2 ) = ;, we have dom( W [D]p 1 ) ◆ dom( W [D]p 2 ) and, from Equation 4.2.12, we have
Proof of Lemma 6.4.1 (Sketch).
The proof for a skip instruction follows from Lemma 5.2.8 and the definition of the STEP W operator (cf. Equation 6.3.8). The proof for a variable assignment follows from Lemma 5.2.14 and the definition of the
Proof of Lemma 7.3.3.
. We prove that dom(C) ◆ dom(A) and 8⇢ 2 dom(A) : C(⇢) A(⇢) (cf. Equation 4.2.12). Let us assume, by absurd, that dom(C) ⇢ dom(A). Then, there exists an environment ⇢ 2 E such that ⇢ 2 dom(A) and ⇢ 6 2 dom(C). Thus, since ⇢ 2 dom(A) and by definition of the step operator STEP T (cf. Algorithm 9) and the sum operator + T (cf. Algorithm 23), we must have ⇢ 2 dom( T [D]t) and ⇢ 2 dom( T [D]T ). In fact, Algorithm 23 invokes the leaves sum operator + F , which favors undefined leaf nodes over defined leaf nodes (cf. Equation 7.3.6 and Equation 7.3.7) and Algorithm 9 simply invokes STEP F for every leaf node of a decision tree, which leaves undefined leaf nodes unaltered (cf. Equation 5.2.19). Thus, by definition of ⌧ Mt J l call M K M (cf. Equation 7.3.1), we have ⇢ 2 dom(C), which is absurd. Therefore, we have dom(C) ◆ dom(A).
Let us assume now, by absurd, that 9⇢ 2 dom(A) : C(⇢) > A(⇢). We have, by definition of
Moreover, by definition of STEP T (cf. Algorithm 9) and + T (cf. Algorithm 23), we have ( T [D]t)(⇢) + ( T [D]T )(⇢) < A(⇢). In fact, Algorithm 23 invokes + F , which additions the value of the leaf nodes that are defined in both decision trees (cf. Equation 7.3.7), and Algorithm 9 invokes STEP F , which increases the value of the resulting defined leaf nodes (cf. Equation 5.2.19). Thus, C(⇢) A(⇢), which is absurd. Therefore, 8⇢ 2 dom(A) : C(⇢) A(⇢).
This concludes the proof. Proof of Lemma 7.3.4.
The soundness of I follows from the definition of (⌧ Mt J l call M K P (cf. Equation 7.3.2) and from Lemma 5.2.24.
Let
Let us assume, by absurd, that dom(C) ⇢ dom(A). Then, there exists an environment ⇢ 2 E such that ⇢ 2 dom(A) and ⇢ 6 2 dom(C). Thus, since ⇢ 2 dom(A) and by definition of STEP T (cf. Algorithm 9), we must have ⇢ 2 dom( T [D]t). In fact, Algorithm 9 simply invokes STEP F for every leaf node of a decision tree, which leaves undefined leaf nodes unaltered (cf. Equation 5.2.19). Thus, by definition of (⌧ Mt J l call M K P , we have ⇢ 2 dom(C), which is absurd. Therefore, we have dom(C) ◆ dom(A).
Let us assume now, by absurd, that 9⇢ 2 dom(A) : C(⇢) > A(⇢). We have, by definition of (⌧ Mt J l call M K P , C(⇢) = ( T [D]t)(⇢) + 1. Moreover, by definition of STEP T (cf. Algorithm 9), we have ( T [D]t)(⇢) < A(⇢). In fact, Algorithm 9 invokes STEP F , which increases the value of the defined leaf nodes of a decision tree (cf. Equation 5.2.19). Thus, C(⇢) A(⇢), which is absurd. Therefore, we conclude that 8⇢ 2 dom(A) : C(⇢) A(⇢).
This concludes the proof.
. We prove that dom(C) ◆ dom(A) and 8⇢ 2 dom(A) : C(⇢) A(⇢) (cf. Equation 4.2.12).
Let us assume, by absurd, that dom(C) ⇢ dom(A). Then, there exists an environment ⇢ 2 E such that ⇢ 2 dom(A) and ⇢ 6 2 dom(C). Thus, since ⇢ 2 dom(A) and by definition of STEP T (cf. Algorithm 9), we must have ⇢ 2 dom( T [D]t). In fact, Algorithm 9 simply invokes STEP F for every leaf node of a decision tree, which leaves undefined leaf nodes unaltered (cf. Equation 5.2.19). Thus, by definition of ⌧ Mt J l return K (cf. Equation 7.3.3), we have ⇢ 2 dom(C), which is absurd. Therefore, we have dom(C) ◆ dom(A).
Let us assume now, by absurd, that 9⇢ 2 dom(A) : C(⇢) > A(⇢). We have, by definition of ⌧ Mt J l return K, C(⇢) = ( T [D]t)(⇢) + 1. Moreover, by definition of STEP T (cf. Algorithm 9), we have ( T [D]t)(⇢) < A(⇢). In fact, Algorithm 9 invokes STEP F , which increases the value of the defined leaf nodes of a decision tree (cf. Equation 5.2.19). Thus, C(⇢) A(⇢), which is absurd. Therefore, we conclude that 8⇢ 2 dom(A) : C(⇢) A(⇢).
This concludes the proof. | 377,469 | [ "1061085" ] | [ "25027" ] |
01769982 | en | [ "spi" ] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01769982/file/COCIS.pdf | F R Smith
D Brutin
email: [email protected]
Wetting and spreading of human blood: recent advances and applications
Keywords: Drop, evaporation, interfaces, complex fluid, rheology
Investigation of the physical phenomena involved in blood interactions with real surfaces presents new exciting challenges. The fluid mechanical properties of such a fluid are singular due to its non-Newtonian and complex behaviour, which depends on the surrounding ambient conditions and on the donor/victim's blood biological properties. The fundamental research on the topic remains fairly recent, although it finds applications in fields such as forensic science, with bloodstain pattern analysis, or biomedical science, with the prospect of disease detection from dried blood droplets. In this paper, we review the understanding that has been achieved by interpreting blood wetting, spreading and drying when in contact, ex-vivo, with non-coated surfaces. Ultimately, we highlight the applications with the most up-to-date research, future perspectives, and the need to advance further in this topic for the benefit of researchers, engineers, bloodstain pattern analysts, and medical practitioners.
Introduction
The interaction of blood with foreign, non-coated, surfaces is an emerging research subject that finds various applications in fields such as forensic science or the biomedical discipline. Although blood flow circulation is a major subject that has been thoroughly studied, interest in blood ex-vivo, i.e. outside the body, and its interaction with a non-coated surface, is less significant. However, the properties of blood as a complex, biological, colloidal fluid differ from those of more commonly studied, pure or complex, fluids, stimulating a recent curiosity concerning the behaviour of blood with real porous and non-porous surfaces. While applied studies on the topic had already been made, in 2011 the work on the evaporation of blood droplets by Brutin et al. gave a fundamental insight into blood drying pattern dynamics [START_REF] Brutin | Pattern formation in drying drops of blood[END_REF]. This work was later completed by recent studies whose major findings are presented in Table 1. When blood is in contact with a substrate, the two main interface transfers are first the wetting, and then the evaporation. While wetting accounts for the equilibrium of the various applied forces at the triple line (meeting point of blood, surface, and air), evaporation describes the phase change taking place afterwards, as recalled by the review of Sefiane et al. of 2011 [START_REF] Sefiane | Wetting and phase change: Opportunities and challenges[END_REF]. These processes are dependent on variables such as ambient temperature, humidity and pressure, wettability and roughness of the surface, time elapsed since blood deposition, volume, and the blood's clotting response. Indeed, when blood is outside the body, its properties change rapidly due to platelet activation leading to the coagulation cascade response. The importance of understanding this problem for forensic science and biomedical applications appears to be crucial, since it is used as a tool to obtain evidence in crime scene reconstitution or in medical diagnosis. The National Institute of Standards and Technology (NIST) has pointed out, in a very recent report, the urgent need for valid scientific methods before presenting evidence in courtrooms [START_REF] Lund | Likelihood ratio as weight of forensic evidence: A closer look[END_REF]. This illustrates the demand that exists concerning this research. To respond, we investigated the current understanding of blood physical and mechanical properties, and the most up-to-date advances in the field. In this paper, we present first the characteristics of blood, then its spreading behaviour over dry porous and non-porous surfaces, followed by its drying dynamics, and finally we review the physics of impacting droplets. At last, we discuss the implications that the understanding of blood behaviour on real surfaces has for applications, and the future challenges that research is confronted with.
Blood properties
Biological composition
Blood is categorised as a body fluid that accounts for roughly 7% of the human body weight. It has two main constituents: plasma, the fluid medium, and blood cells, the colloids. Plasma is a water-based solution, composed of 92% water, which dissolves and transports organic and inorganic molecules, and of 8% dissolved solutes [START_REF] Bruce | Molecular Biology of the Cell[END_REF]. The solutes are mainly sodium electrolytes; nutrients and organic wastes are found as well in diverse amounts, and finally the proteins, which account for 7-9% of the plasma. These proteins are albumins (80%), globulin (16%) and fibrinogen (4%), the latter being very important in blood clot formation. Fibrinogen is a globulin of very high molecular weight synthesised exclusively by the liver that can be precipitated easily. During coagulation, the fibrin coming from fibrinogen forms a web leading to the clot formation. The blood cells are divided into three categories: the red blood cells (RBCs) that transport oxygen and carbon dioxide, the white blood cells (WBCs) (neutrophils and monocytes, eosinophils, basophils and lymphocytes), and the platelets, which play an important role in the clotting response. Indeed, platelets prevent bleeding by clumping and clotting vessel injuries. Thus, all these constituents are key parameters for understanding this complex, non-Newtonian fluid, since an excess or a lack of these components would alter its wetting and its spreading.
Physical and rheological properties
Blood has a small density variation, between 1020 and 1060 kg.m⁻³, due to the differences in individuals' haematocrit level, which is defined as the ratio of RBCs volume to whole blood volume. The surface tension of blood is known to be similar to that of water, as shown by the work of Brutin et al., who found σ = 69.8 mN.m⁻¹ as the average statistical surface tension. Studies, such as the one of Chao et al. of 2014, account for the shear-thinning properties of blood, since its viscosity decreases to a constant value at high shear rates (η = 4.8 mPa.s), although it spreads like Newtonian fluids of similar viscosity. Rheological properties of a liquid are determined by the liquid viscosity, µ, and the applied shear rate, γ̇, according to the following relationship: µ = k γ̇ⁿ⁻¹, where n < 1 corresponds to pseudoplastic fluids. This is in agreement with previous measurements of blood rheology [START_REF] Chao | Influence of haematocrit level on the kinetics of blood spreading on thin porous medium during dried blood spot sampling[END_REF]. As described by Baskurt et al., the apparent viscosity depends on the existing shear forces, and is determined by its biological properties: haematocrit, plasma viscosity, RBCs aggregation, and the mechanical properties of RBCs [START_REF] Baskurt | Blood rheology and hemodynamics[END_REF]. Alteration of those properties, such as a noteworthy modification of the haematocrit value, accounts for the haemorheological variations. Additionally, the viscoelastic properties of blood come from the high deformability of the RBCs.
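As an illustration of the power-law (pseudoplastic) relationship quoted above, the following minimal sketch computes an apparent viscosity from a shear rate; the consistency index k and flow index n are assumed, representative values chosen only for this example, not parameters reported in the studies cited here.

def apparent_viscosity(gamma_dot, k=0.017, n=0.708):
    """Power-law fluid: mu = k * gamma_dot**(n - 1), gamma_dot in 1/s, k in Pa.s^n."""
    return k * gamma_dot ** (n - 1.0)

for rate in (1.0, 10.0, 100.0, 1000.0):
    # apparent viscosity decreases as the shear rate increases (shear thinning)
    print(f"{rate:7.1f} 1/s -> {apparent_viscosity(rate) * 1e3:.2f} mPa.s")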
Spreading of complex fluids
For a lot of applications (like painting, coating...), a smooth deposition is critical. However, only a few studies were published on the understanding of the influence of complex fluids on the dynamics of spreading. The most obvious way to improve spreading for aqueous solutions is to add surfactants to decrease the liquid-vapour interfacial tension and increase the initial spreading coefficient. Rafaï et al., in 2002 [24], first studied the super-spreading of aqueous surfactant drops on hydrophobic surfaces. The super-spreading is due to a large affinity of the surfactant molecules for the solid substrate. If the surfactant molecules are transported rapidly and efficiently over the (small) height of the droplet in order to saturate the solid surface, this entails a large surface tension gradient over a small distance, and consequently a large Marangoni force. This, in turn, leads to the linear time evolution of the radius. The difference with the 'classical' surfactants observed here is then either a difference in affinity for the solid substrate or a difference in the transport efficiency to that same substrate, a question that remains to be answered. In 2013, Bouzeid and Brutin [START_REF] Bouzeid | Influence of relative humidity on spreading, pattern formation and adhesion of a drying drop of whole blood[END_REF] studied the influence of relative humidity (RH) on the spreading behaviour and pattern formation of a human blood droplet.
The drops of blood of the same volume are deposited on an ultra-clean glass substrate and evaporated inside a humidity chamber with a controlled range of RH between 13.5% and 78.0%. Their experiments show that the contact angle decreases as a function of the RH, which influences the final deposition pattern at the end of the evaporation process (Fig. 1). Due to the effect of RH on the contact angle of the drop of blood, the initial evaporative rate is dependent on RH values. They observed crack patterns at the end of the drying process, which is due to the competition between the drying regime and the gelation inside the drop of blood. Indeed, at first, the Marangoni convection inside the droplet induces the transport of particles towards the rim, and thus favours evaporation at the triple line. This corresponds to a convective evaporation. Later, once the particle concentration reaches a critical point, gelation occurs, and evaporation occurs through the porous media. The transition between the purely convective evaporation regime and the gelation regime appears always at 65% of the total drying time. Thus, controlling the evaporative rate by evaporating drops of blood at different RH levels strongly impacts the wettability properties and the final pattern of drying drops of blood. The influence of RH on the final pattern of a dried drop of human blood is of huge importance for biomedical applications where drops of blood are drying in an open atmosphere. Brutin et al. achieved in 2011 the first observations of the Marangoni flow motion inside a drop of human whole blood using a digital camera [START_REF] Brutin | Pattern formation in drying drops of blood[END_REF]. They showed that the motion of RBCs at the edge of the drop is the main mechanism of blood drop deposition and early evaporation, since Marangoni convection favours RBCs displacement towards the rim and evaporation at the triple line. The mechanisms involved in the blood drop evaporation are confirmed through the evaporation mass flux, which is measured to be in agreement with pure water drop evaporation. Other biological elements (white blood cells, proteins) are transported to the edge of the drop with the RBCs and contribute to the pattern formation and the deposit wettability with the substrate and the mechanical properties. The blood evaporation dynamic is comparable to the evaporation of pure fluids of similar mass concentration in colloids. Totally different patterns are formed according to whether the person is healthy or suffers from anaemia or hyperlipidaemia. A few years later, in 2014, Sobac and Brutin evidenced the influence of the wetting of blood on a substrate on the final pattern [START_REF] Sobac | Desiccation of a sessile drop of blood: Cracks, folds formation and delamination[END_REF].
They showed that all of the mechanisms are linked, and the change of wettability highly affects the deposit shape. The evaporation of a drop of blood with an initial contact angle of 90° revealed the formation of a complex morphology due to the presence of an elastic buckling instability. In this situation, a shell forms during the drying due to the accumulation of particles at the free surface, and is deformed by buckling, leading to the formation of folds. The final shape is no longer axisymmetric. The different behaviours encountered were assimilated to a stability diagram obtained for physical suspensions, and a good level of agreement was observed. The results presented are notably close to those obtained after the evaporation of drops of bovine serum, suspensions of colloidal particles and suspensions of polymers. The practical application of pattern formation in drying drops of whole blood requires a complete understanding of the involved mechanisms such as spreading, wetting, evaporation heat and mass transfer, and crack formation.
Patterns in dried pools
Although the drying of gels or colloidal Newtonian suspensions has been largely studied, research on the drying of blood pools remains scarce. Unlike droplets, in the case of a pool, gravitational forces are more important than surface tension forces; this induces the flattening of the liquid on the top part. One of the first considerations to observe concerning drying blood is that the properties of the blood change when the blood is ex-vivo. Visually, the change of colour from bright red to dark brown, due to the oxidation of the oxy-haemoglobin (HbO2) to methaemoglobin (met-Hb) and haemachrome (HC), is a first indicator. Another important step when bleeding occurs is haemostasis, which involves coagulation, corresponding to the change of blood from a liquid into a gel [START_REF] Gale | Continuing Education Course #2: Current Understanding of Hemostasis[END_REF]. In 2016, Laan et al. investigated the topic and observed the different phases that a drying pool of blood goes through, which they arbitrarily called the coagulation stage, gelation stage, rim desiccation stage, centre desiccation stage, and final desiccation stage [START_REF] Laan | Morphology of drying blood pools[END_REF]. Comparatively to a drying droplet, their experiment showed that again 23% of the original mass remains after drying, which corresponds to the colloids, mainly RBCs, present in blood. When whole blood is deposited on a foreign surface, at first the RBCs are homogeneously distributed. Then coagulation rapidly takes place. During this phase the fibrin coming from the fibrinogen forms a web leading to the clot formation. The evaporation that follows leads to the displacement of the cells not yet trapped in the web towards the pool's rim. The drying front starts at the rim of the pool and goes towards the centre of the pool in a non-uniform way. The stress induced by the water evaporation leads to the noteworthy cracks observed on dried blood pools. In 2017, Ramsthaler et al. compared the drying processes of extra-corporal blood without any additives to blood that had received anticoagulation therapy, for small volume droplets, by studying the wipeability of bloodstains. As a result, they found that the time of the drying process was delayed more efficiently by anticoagulation therapy than it was by antithrombotic drugs or platelet aggregation inhibitors, meaning that the last part of the coagulation process is of high importance on the drying dynamics compared to the primary stages of haemostasis, when platelet activation and aggregation take place [START_REF] Ramsthaler | Effect of anticoagulation therapy on drying times in bloodstain pattern analysis[END_REF].
Drop impact on solid surfaces
Impact theory
When a drop of blood is released onto a dry solid surface from different heights, it impacts with an energy due to the velocity of the fall. Understanding the correlation between a blood drop pattern and its velocity of fall is important to the forensic community as it could answer some crime scene reconstruction questions. In 2014, Laan et al. [START_REF] Laan | Maximum Diameter of Impacting Liquid Droplets[END_REF] described the phenomenon, and how inertial forces favour spreading, but adhesion and viscous forces counter this spreading, so that the drop reaches a maximum spreading diameter commonly named Dmax, as shown on figure 2. It normally remains self-pinned to the surface before evaporation and drying mechanisms take place. The resulting outcome of the drop stain depends upon the drop, the surface and the surrounding properties. To define the impact dynamics, the dimensionless Weber and Reynolds numbers are commonly used since they describe the liquid properties and the impact. The Weber number (We = ρD0U²/σ) is the ratio of inertial to capillary forces, whereas the Reynolds number (Re = ρD0U/η) is the ratio of inertial to viscous forces. ρ and η are respectively the blood density and viscosity, σ is the blood/air surface tension, U stands for the impact velocity of the droplet, and D0 for the initial drop diameter. One of the latest models describing the maximum spreading diameter of a blood droplet is the 2016 model of Lee et al. [START_REF] Lee | Universal rescaling of drop impact on smooth and rough surfaces[END_REF], who presented a universal solution taking into account that energy conservation is the only physical principle needed to describe the impact behaviour of droplets, according to the following equation:
(β_max² − β_0²)^(1/2) · Re^(−1/5) = We^(1/2) / (A + We^(1/2)),    (1)
with β_max the maximum spreading ratio of the droplet (the ratio of the maximum spreading diameter to the initial diameter of the given droplet), β_0 the maximum spreading ratio at zero impact velocity for the same surface (the ratio of the maximum spreading diameter to the initial diameter of the droplet at 0 m.s⁻¹), and A = 7.6 the fitting constant.
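Rearranging Eq. (1) gives β_max explicitly, which the short sketch below evaluates; the blood property values (density, surface tension, high-shear viscosity) are representative assumptions taken from the figures quoted earlier in this review, not data from Lee et al.

from math import sqrt

def beta_max(U, D0, beta0, rho=1050.0, sigma=0.069, eta=4.8e-3, A=7.6):
    """Maximum spreading ratio from Eq. (1): U in m/s, D0 in m."""
    We = rho * D0 * U**2 / sigma          # Weber number
    Re = rho * D0 * U / eta               # Reynolds number
    P = sqrt(We) / (A + sqrt(We))
    return sqrt(beta0**2 + Re**0.4 * P**2)

# e.g. a 2 mm blood drop impacting at 2 m/s on a surface where beta0 = 1.3
print(beta_max(U=2.0, D0=2e-3, beta0=1.3))   # roughly 2.6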
Surface influence
Interest for various industrial applications, led many researches to study the influence of the surfaces prop-290 erties on the impact outcomes of impacting droplets of various Newtonian fluids, as thoroughly described in the recent review of Josserand and Thoroddsen [START_REF] Josserand | Drop Impact on a Solid Surface[END_REF]. Meanwhile, in 2016, Kim al. investigated the influence of the target properties on the trajectory reconstruction of blood 295 droplets impacting different surfaces using glass, polycarbonate bare, aluminium mirror and cardstock [START_REF] Kim | How important is it to consider target properties and hematocrit in bloodstain pattern analysis? For[END_REF]. To quantify the influence, they correlated the Ohnesorge number (Oh = √ W e/Re), which relates the energy of the impact to the resistant work of surface tension and viscous forces, to the spreading ratio. They found that an increase in surface roughness reduces the spreading upon drop impact, which led them to conclude that the stain size cannot directly be related to an impact velocity, which supports the model of Lee et al. that takes into account the initial spreading ratio of the liquid at a zero impact velocity for a given surface, implying that this ratio is substrate dependent. This suggests that not only the spreading could be influenced by the target properties, but the outcome, and whether splashing can be observed or not. Recently Smith et al. investigated the influence of the surface properties on the splashing threshold of blood drip stains [START_REF] Smith | Roughness Influence on Human Blood Drop Spreading and Splashing[END_REF].
In their study they defined the splashing threshold as the minimum drop velocity inducing the drop to break-up and to have at least one secondary droplet. The experimental results showed that an increase in surface roughness leads to an earlier splashing threshold when comparing the Weber number of their impacting drops with the average surface roughness (Sa). Besides, they showed that roughness modifies the deformation of the drip stains (fig. 3) by presenting a map of the different impact outcomes of the blood droplets at different velocities and various surfaces.
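The dimensionless groups used in these impact studies are easy to evaluate; the sketch below does so for the impact conditions of Figure 3 (drop volume 13.8 µL, U = 2.6 m/s), with the same assumed blood properties as in the previous snippet. No specific splashing threshold is implied, since the threshold reported by Smith et al. depends on the surface roughness.

from math import pi, sqrt

def impact_numbers(U, volume_m3, rho=1050.0, sigma=0.069, eta=4.8e-3):
    D0 = (6.0 * volume_m3 / pi) ** (1.0 / 3.0)   # equivalent spherical diameter
    We = rho * D0 * U**2 / sigma
    Re = rho * D0 * U / eta
    Oh = sqrt(We) / Re                           # Ohnesorge number
    return D0, We, Re, Oh

D0, We, Re, Oh = impact_numbers(U=2.6, volume_m3=13.8e-9)
print(f"D0 = {D0*1e3:.2f} mm, We = {We:.0f}, Re = {Re:.0f}, Oh = {Oh:.4f}")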
Applications
Forensic science
Understanding the behaviour of blood on real surfaces finds a major interest in bloodstain pattern analysis, commonly called BPA, which is the forensic field that studies bloodstains found on crime scenes in order to help events reconstruction. The first works on the subject by Eduard Piotrowski, a researcher form the Institute of Forensic Medicine of Krakow, started in the late 1800s, but it took another 50 years for bloodstain evidence to be used in real cases. Since then, with a first formal bloodstain course in 1973 led by MacDonnel in the United States, 335 and the creation ten years later of the International Association of Bloodstain Pattern Analysts, this expertise was brought up to crime scene investigation with trained BPA experts, and the concern of emerging research on the topic. On crimes scenes, if a bloodshed event takes place, blood 340 traces may be found on various surfaces (floor, furniture, clothing...) giving a first piece of information. The classification of the pattern type (drip stain, cast-off pattern, drip trail, pool...) is important to correctly help reconstitution (see fig. 4). In the case of some types of spatter droplets, it 345 will impact a surface with an angle giving stains the form of ellipses. Using the stringing theory, the flight path of such stains is reconstructed, and by combining trajectories together the point of origin of the source of blood can be determined in the three-dimensional space. Nowadays, as 350 presented in the study of Joris in 2015, various automated and virtual 3D scanning methods are used during investigations [START_REF] Joris | HemoVision: An automated and virtual approach to bloodstain pattern analysis[END_REF]. A lack of adequate scientific validity is noticed in some of those methods, such as highlighted by the 2016 study of Taylor et al., where they questioned the reliabil-355 ity of pattern classification on fabrics depending on how biased was the opinion of the expert. In order to prevent doubtful BPA evidence leading to wrongful convictions, many governments currently promote the rigorous use of forensic science. To fulfil this demand, research in the do-360 main is emerging as can be illustrated by some recent studies: in 2015, Laan et al. proposed a fluid dynamic model that, unlike the stringing method, takes into account gravity effect and drag giving a more accurate estimation of path reconstruction [START_REF] Laan | Bloodstain Pattern Analysis: implementation of a fluid dynamic model for position determination of victims[END_REF]; in 2016 Geoghegan et al worked 365 on trajectory reconstruction of blood drops from expirated blood using Particle Image Velocimetry (PIV) and a numerical model [START_REF] Geoghegan | Experimental and computational investigation of the trajectories of blood drops ejected from the nose[END_REF]; in 2017, Comiskey et al. presented a theoretical numerical model describing blood spatters resulting from a blunt bullet gunshot, that they compared to 370 experimental data [START_REF] Comiskey | Hydrodynamics of back spatter by blunt bullet gunshot with a link to bloodstain pattern analysis[END_REF]. Works concerning drip stains on fabrics is thoroughly investigated as well, in 2015 by Dicken et al. on the correlation between impact velocity with the shape and the size of stain on the fabric [START_REF] Dicken | The use of micro computed tomography to ascertain the morphology of bloodstains on fabric[END_REF], in 2016 by De Castro et al. 
who studied the effect of prior laundering 375 on the final stain appearance [START_REF] De Castro | Drip bloodstain appearance on inclined apparel fabrics: Effect of prior-laundering, fibre content and fabric structure[END_REF], again in 2016 by Cho et al. on the differentiation between transfer patterns and spatter patterns [START_REF] Cho | Quantitative bloodstain analysis: Differentiation of contact transfer patterns versus spatter patterns on fabric via microscopic inspection[END_REF], which is a difficult question at the moment. Finally some effort is put as well onto creating a Synthetic Blood Substitute (SBS) of similar rheological 380 properties as illustrated by the research of Totesbury of 2017 [START_REF] Stotesbury | The application of silicon sol-gel technology to forensic 590 blood substitute development: Mimicking aspects of whole human blood rheology[END_REF], in order to provide tools to help research into understanding bloodstain pattern mechanisms, as working with fresh blood without any additives is a complex problem that encounter researchers in the field. digital polymerase chain reaction, also called ddPCR on 390 the maternal peripheral blood [12][13]. In Hudecova study, peripheral maternal blood is collected in EDTA tubes, and then centrifuged twice. For the purpose of the study, samples are frozen and then shipped onto dry ice. Placental DNA is extracted from the maternal blood according to a DNA purification protocol. The authors designed a ddPCR assay to detect two sequence variants based on the maternal mutational status. All ddPCR analyses were carried out on a Droplet Digital PCR system (bio-Rad) which generates up to 20 000 reactions droplets within a single 400 reaction well. Out of the 20 000 droplets generated only 200 to 2000 evidence a positive result. The results must be analysed automatically using software and compared to computer simulations in order to define the diagnostic accuracy. In 216, Laux et al. demonstrated how ultrasonic shear reflectometry could be used, as a non-intrusive tool, to determine the gelation time and the total desiccation time. The only assumption is on the consequences of ultrasonic waves on the cellular matter such as RBCs membranes, which are very fragile vesicles. So far, there is nothing in the literature about the effect of the ultrasonic wave on the blood components. In 2017, Ahmed et al. partly reviewed the situation of wetting and spreading of blood, focusing on porous substrates. They presented two situations of complete and partial wetting cases and detailed the different stages encountered. The main and historical application remains the storage and transport of blood samples using the DBS sampling technique used by the National Health Service (NHS) in the UK. In their review of 2016, Chen et al conclude that the research of medical diagnosis based on blood drop pattern is still at a very early stage. Indeed, significant efforts have to be done before reaching the biomedical application for both humans and livestock. The basis of this technique relies on the postulation that diseases modify the blood properties, such as viscosity, viscoelasticity and cracks formation after drying. For a biomedical device using dried blood drops analysis to be developed and commercialise, improvement of the fundamental knowledge of spreading, wetting, and drying of blood is required. At least one decade will be 430 needed before the appearance of a mobile analysis device using this technology.
Conclusions and prospectives
In order to use the interaction of ex-vivo blood with real surfaces for applications such as forensic science or biomedical diagnosis, an extensive understanding of this complex phenomenon is mandatory. The recent studies on the topic opened a new outlook on this physical and fluid mechanical problem by focusing on blood interactions with non-coated surfaces (i.e. surfaces without any anticoagulant or chemicals that could react with blood). Blood is, first, a non-Newtonian fluid, meaning that predicting its mechanical behaviour is complex, as it will depend on the applied shear stress; secondly, blood is a biological liquid that coagulates and ages when outside the body. This invites researchers and scientists to take up the challenge of relating the mechanical properties of blood to the potential applications and to seek deeper. In order to use the morphological analysis of dried drops of blood for biomedical purposes, we need to be able to predict the spreading diameters of blood droplets created from a finger. This is a fast and cheap blood collection method that medical practitioners are trying to use more and more, but some studies reported that blood test results might vary from drop to drop in fingerprick tests.
Hence, we have to be able to control and predict spreading and wetting of blood onto various, mainly porous, substrates. Moreover, we have to be able to predict as well the time needed to obtain a complete evaporation, since the last phase of drying no longer produces important changes but slowly deforms the cracks/plaques, meaning that a visual criterion for disease detection could no longer be used. Ultimately, we have to understand the link between blood pattern and potential diseases. Similarly, in BPA investigation, deeper correlations between observed patterns and the physical comprehension of blood mechanical properties are crucial. Nowadays, forensic experts are trying to improve their procedures by introducing new technologies and recent findings. Moreover, this field of expertise being fairly recent, many issues that would in appearance seem simple still remain. They are thus trying to find answers to questions such as: how long has this pool of blood been drying for? Was this pattern projected or transferred onto the fabric? From what height was this droplet falling? What volume of blood is present in this droplet/pool? Thus, in order to prevent wrongful convictions and to allow better medical diagnosis, it is urgent to improve our knowledge concerning blood behaviour.
Acknowledgment
This work received a financial grant from the French Agence Nationale de la Recherche in the frame of the project ANR-13-BS09-0026. This research was also supported by the Institut Universitaire de France (IUF) and has received funding from the Excellence Initiative of Aix-Marseille University - A*MIDEX, a French "Investissements d'Avenir" programme. It has been carried out in the framework of the Labex MEC.
Figure 1: Top-view images of drop deposits left after complete evaporation of sessile drops of whole blood (same scale). All experiments are performed for the same drop volume (V = 14.2 µL) and increasing RH (microscope ultraclean glass substrate, room temperature: 23.8 °C, room pressure: 1005 hPa). Reproduced from [2] with permission from Elsevier, copyright 2013.
are drying in an open atmosphere.
4. Patterns in dried drops and pools
4.1. Patterns in dried drops
Martusevich and Bochkareva presented in 2007 some first results concerning the morphology of dried blood serum droplets with viral hepatitis [23]. Out of the 58 people included in the study, 32 were healthy, 14 patients had viral hepatitis B and 12 patients had viral hepatitis C. The authors established that the presence of viral hepatitis in a patient's organism significantly changes the result of free and initiated blood serum crystallization. They concluded on the feasibility of singling out the biological fluid crystallogenesis characteristics according to virus type. Before 2011, published studies were performed on blood serum. Brutin et al. achieved in 2011 the first observations of
Figure 2: (a)-(d) High-speed camera recordings of a single blood droplet impacting at 2 m.s-1 on a stainless-steel substrate. (a) At 0.2 ms prior to droplet impact, the droplet is spherical. (b) At 0.6 ms after droplet impact on the surface, a thin lamella spreads outwards due to the inertial forces. (c) At 2.4 ms after droplet impact, the lamella increases in size. Spreading of the lamella slows down until it reaches its maximum size (Dmax), while there is a buildup of liquid in the outer rim. (d) At 4 ms after droplet impact, the bulk of the liquid is distributed over the entire stain. Reproduced from [START_REF] Laan | Maximum Diameter of Impacting Liquid Droplets[END_REF] with permission from APS, copyright 2014.
Figure 3: Deformation pattern of three blood drip stains of the same initial volume (V = 13.8 x 10-3 mL) impacting at U = 2.6 m.s-1 on surfaces of increasing roughness. Reproduced from [START_REF] Smith | Roughness Influence on Human Blood Drop Spreading and Splashing[END_REF] with permission from APS, copyright 2017.
6.2. Biomedical
In 2017, Ragni M. considered the work of Hudecova et al. (2017), which describes a simple and non-invasive device for prenatal detection of haemophilia by droplet
Figure 4: Picture of an IABPA bloodstain analysis training course, with the tools used to reproduce various specific patterns.
Table 1: References of special interest studies published over the last three years | 32,858 | [
"1525"
] | [
"265426",
"949"
] |
01770001 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2016 | https://hal.science/hal-01770001/file/FSI.pdf | Nick Laan
email: [email protected]
Fiona Smith
Celine Nicloux
David Brutin
Morphology of drying blood pools
Keywords: BPA, blood pools, drying, evaporation, fluid dynamics
HIGHLIGHTS
• Blood pools are often encountered on crime scenes.
• The general knowledge concerning blood pools is very limited.
• During drying a blood pool goes through five separate stages.
• The size of a pool depends on the volume and the contact angle.
Introduction
Bloodstain pattern analysis is a forensic tool used by investigators to determine, among others, what, where and how a crime took place [1]. One of the most common types of bloodstains found on a crime scene following a deadly blood shedding event, is the blood pool (fig. 1). Ante-and postmortem it is often the case that a victim bleeds out, thus accumulating blood in one or multiple areas. Currently, when a blood pool is found, it is classified as such and an investigator can conclude that the blood donor was bleeding at that location for any reasonable period of time for the pool to be created, be it seconds, minutes or even hours. Previous studies have investigated if it was possible to determine what the volume of a blood pool was, to determine if such a loss of blood volume could constitute loss of life [2], or for other crime scene reconstruction purposes [START_REF] Lee | Estimation of original volume of bloodstains[END_REF][START_REF] Sant | Exsanguinated blood volume estimation using fractal analysis of digital images*[END_REF][START_REF] Laan | Volume determination of fresh and dried bloodstains by means of optical coherence tomography[END_REF][START_REF] Laan | Bloodstain pattern analysis: implementation of a fluid dynamic model for position determination of victims[END_REF]. However, almost no studies have been performed concerning the drying of an entire pool of blood. Such studies can be very useful for determining, e.g., the time that the blood shedding event occurred, any actions that may have occurred during the blood shedding event or the physiological state the subject was in. For example, fig. 1) shows two crime scene pictures of the same pool, 22 hours apart. In the first (top) picture, the edges and the bottom of the pool have started drying. In the second picture the pool has completely dried. Information obtained from how fast the blood dried could be crucial to determine when the pool was created. There have been several studies concerning the drying of singular blood droplets [START_REF] Brutin | Pattern formation in drying drops of blood[END_REF][START_REF] Sobac | Structural and evaporative evolutions in desiccating sessile drops of blood[END_REF][START_REF] Zeid | Influence of relative humidity on spreading, pattern formation and adhesion of a drying drop of whole blood[END_REF][START_REF] Zeid | Influence of evaporation rate on cracks formation of a drying drop of whole blood[END_REF][START_REF] Sobac | Desiccation of a sessile drop of blood: cracks, folds formation and delamination[END_REF]. To our knowledge only Ramsthaler et al. investigated the drying of blood pools [START_REF] Ramsthaler | The ring phenomenon of diluted blood droplets[END_REF]. In their study they focused on the drying and morphology of diluted blood droplets and pools to be able to distinguish between diluted and whole blood. In this paper we report on the morphology of drying blood pools. Pools of blood, obtained from healthy volunteers were deposited on linoleum surfaces. Based on our results we are able to distinguish five different stages of drying. In addition, we report the universal properties of drying blood pools, but also distinguish anomalies, which can differ between pools.
Background theory
Once bleeding occurs, blood being ex vivo, it will coagulate and dry. During the coagulation (clotting) process, fibrin strands are formed creating a solid structure of the blood, the clot. During drying water evaporates from the blood pool until only the solid matter, mainly red blood cells (RBC's), remains. Depending on the size of the pool and environmental conditions, the time the pool completely evaporates may take hours to days. On the crime scene, pools can be found in the order of millilitres to litres. We, however, focus on pools in the order of millilitres, simply because pools with a volume of several litres, without any additives like anticoagulants, would require a very large donation of a volunteer which is not a viable option. Prior to our investigation into drying blood pools we require some general knowledge about fluid dynamics of evaporating liquids.
When a droplet is deposited upon a surface it will spread. The area (A) the droplet spreads over depends on the physical properties of both the surface and liquid, where the surface tension and contact angle are the most important parameters (see supplementary materials). The surface tension (γ) is defined as the amount of energy required to increase the surface area by one square meter. In other words, increasing the area of a droplet requires energy and the higher the surface tension, the more energy this takes. As the droplet or pool lays upon a surface, the surface tension acts upon the triple-line (the line around droplet or pool where liquid, surface and air meet, see fig. 2a-b). How a droplet or pool spread upon a surface can be deduced from young's equation [START_REF] De Gennes | Capillarity and wetting phenomena: drops, bubbles, pearls, waves[END_REF]:
S = γ(cos θ -1) (1)
Here, S is the so-called spreading parameter, γ is the surface tension of the liquid-gas interface and θ the contact angle between liquid and surface (see fig. 2c). When the contact angle is much smaller than 90° (S is positive) the surface is wetting, i.e., the liquid can easily spread over the surface. When the contact angle is much larger than 90° (S is negative) the liquid cannot spread over the surface easily and the surface is presumed non-wetting. With a small contact angle, the pool will cover a much larger area and have a larger perimeter, which should significantly increase the rate of evaporation. Therefore, the contact angle is a very important parameter concerning the drying of blood pools. Deegan et al. [START_REF] Deegan | Capillary flow as the cause of ring stains from dried liquid drops[END_REF] demonstrated very accurately the principles of the coffee ring effect that is observed during the drying of a droplet of a colloidal suspension. This study showed how the flow arising from the evaporating liquid induced the characteristic ring formation. A study by Brutin et al. [START_REF] Brutin | Pattern formation in drying drops of blood[END_REF] focused on the drying of sessile whole blood droplets and showed that it is very similar to the drying of a droplet of a colloidal suspension, blood being a colloidal fluid. During the drying of a blood drop the formation of cracks is observed. Moreover, the study showed that a drop of blood dries following two different regimes and goes through five different stages. The first regime (first three stages) is driven by convection, diffusion and then gelation. At the moment the drop is deposited, RBC's are evenly distributed inside the droplet, but then the solvent starts evaporating, inducing an evaporation flux at the interface and an internal flow transporting particles inside the drop. This leads to the formation of a gel once the concentration of particles is high enough. Additionally, this flow induces the formation of the so-called biological deposit on the periphery of the droplet; indeed RBC's are driven from the inner part of the droplet to its rim. Then the transition phase takes place and leads to the gelation of the entire drop. A sharp decrease in the drying rate is observed, whereas gelation is rapid.
The second regime is much slower since it is diffusive. The final two stages correspond to the drying and the formation of cracks that are nucleating and propagating. This extensive work on drying of droplets gives precious information about the process and shows accurately that desiccation starts at the periphery of the drop, and then dries towards the centre of the droplet. The work presented in this study no longer focuses on droplets but on pools. To understand the dynamics occurring during the drying of a pool, the size of the blood pool must be considered. As long as the volume is low, in the case of droplets, the surface tension forces are dominant resulting in a curved surface. In contrast, if the volume is large enough, the gravitational forces will dominate over the surface tension forces producing a flat surface on top of the pool. Similar to a droplet, a pool will have a contact angle with the substrate on the edges, but in contrast is flat otherwise (fig. 2b). The area the pool spreads over is directly dependent on the contact angle (see appendix A). In order to understand the phenomena and the dynamics driving the drying of blood pools, we performed experiments with small blood pools (about 4 ml), which were recorded by taking pictures every two minutes. Foremost, the purpose of our experiment is to identify the different drying stages of blood pools.
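The order of magnitude of the quantities involved is easy to estimate. The Python sketch below uses the classical gravity-flattened puddle approximation e = 2 l_c sin(θ/2), with l_c = sqrt(γ/ρg) the capillary length, and A ≈ V/e; this is only an illustrative approximation, not the exact relation of Eq. S6 in the supplementary materials, and the surface tension and density values used for blood are indicative only.

```python
import numpy as np

def puddle_estimate(volume_m3, theta_deg, gamma=0.055, rho=1060.0, g=9.81):
    """Estimate thickness and area of a gravity-flattened puddle.

    Uses the classical puddle approximation e = 2*l_c*sin(theta/2),
    with l_c = sqrt(gamma/(rho*g)) the capillary length, and A ~ V/e.
    gamma [N/m] and rho [kg/m^3] are indicative values for whole blood.
    """
    l_c = np.sqrt(gamma / (rho * g))        # capillary length [m]
    theta = np.deg2rad(theta_deg)
    e = 2.0 * l_c * np.sin(theta / 2.0)     # puddle thickness [m]
    area = volume_m3 / e                    # flat-puddle area [m^2]
    return e, area

# Example: a 4 mL pool with a 45 degree contact angle
e, A = puddle_estimate(4e-6, 45.0)
print(f"thickness ~ {e*1e3:.1f} mm, area ~ {A*1e4:.0f} cm^2")
```

With these indicative values, a 4 mL pool at 45° gives a thickness of roughly 1.8 mm and an area of a few tens of cm², consistent with the pool sizes studied here.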
Methods and Materials
To follow the drying of a blood pool we required the environment to be monitored and as constant as possible. Therefore each blood pool was created in a glovebox (Jacomex T-Box, V = 700 L). The humidity and temperature were recorded during drying by means of a hygrometer (Testo AG, 175-H2 data logger, Germany). The temperature was constant at 22 ± 0.5 °C during all of the experiments. Within the glovebox, a camera (Nikon D200, resolution: 2592x3872 pixels, or Nikon D300s, resolution: 2848x4288 pixels, both with a 60mm 1:2.8 lens) was suspended directly above the blood pool. The cameras enabled us to take a single picture, every two minutes, with a resolution of roughly 26 to 30 pixels per millimetre. For each blood pool created, blood of a healthy volunteer was drawn by a certified nurse in a 4.5 ml evacuated blood collection tube (VenoSafe, Terumo, France). Immediately after blood collection the tube was emptied above the target substrate (linoleum), creating a pool of blood of roughly 4 ml each time.
Moreover, blood pools were created by slowly dripping blood directly from the tube/needle connected to the arm, to imitate the most realistic blood shedding event possible. Accordingly, the blood pools created had a starting mass of 3.5 g < m i < 7 g. No differences were observed in the drying process of the pools between the two methods of blood deposition. For several blood pools, the substrate was placed on top of a balance (Mettler Toledo, ML802, Switzerland) to determine exactly the mass (m) of the blood pool during the entire drying process (one measurement every minute). By means of a reference length next to the pool it was possible to determine the area of the pool, using a program written in Matlab.
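The original area measurements were made with a Matlab program that is not reproduced here; the following Python sketch is a hypothetical re-implementation of the same idea (segmenting the pool by a simple colour threshold and converting the pixel count to mm² with the reference length). The threshold value and the file name are placeholders.

```python
import numpy as np
from PIL import Image

def pool_area_mm2(image_path, mm_per_pixel, red_threshold=100):
    """Estimate the blood-pool area from a top-view picture.

    The pool is segmented by a simple threshold on the colour channels
    (placeholder values); each selected pixel contributes
    mm_per_pixel**2 to the area. mm_per_pixel is obtained from a
    reference length placed next to the pool.
    """
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=float)
    # Dark red pool: relatively strong red channel, weak green channel.
    mask = (img[..., 0] > red_threshold) & (img[..., 1] < red_threshold)
    return mask.sum() * mm_per_pixel**2

# Example with ~28 pixels per millimetre, as in the experiments:
# area = pool_area_mm2("pool_t0.jpg", mm_per_pixel=1/28)
```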
Results and Discussion
In fig. 3 we show a time-lapse of a drying blood pool deposited on a linoleum surface (see supplementary movie for the complete time-lapse). First of all the simple observation of the pictures obtained in the experimental conditions described previously allowed the identification of five distinct phases (fig. 3): (I) coagulation stage, (II) gelation stage, (III) rim desiccation stage, (IV) centre desiccation stage, (V) final desiccation stage (see supplementary materials for a point-wise summary). (I) When blood is deposited upon a surface, the RBC's are evenly distributed throughout the pool, which then will sediment. At this point the blood has a dark red colour and starts to coagulate [START_REF] Riddel | Theories of blood coagulation[END_REF]. It is possible that due to wetting and capillary action, the area of the blood pool increases as the blood spreads slowly over the surface during the initial 15 to 30 minutes. During this first stage there is a change in colour from dark red to lighter red, mainly due to coagulation. (II) As fluid evaporates from the blood pool red blood cells, that are not constricted in the fibrin web, are transported to the rim of the stain and deposited, due to flow caused by evaporation [START_REF] Deegan | Capillary flow as the cause of ring stains from dried liquid drops[END_REF]. The transition from the fluidal to a gel state is referred to as the gelation front.
The second stage starts when the gelation rim is created around the pool. The gelation front propagates inwards, towards the centre of the stain, as the pool continues to dry. (III) The third stage starts as soon as the rim turns black and starts to crack, indicating that the rim is desiccating. The transition from the red to black colour is referred to as the drying front. During this stage both the gelation and drying front propagate towards the centre of the stain. (IV) Once the gelation front reaches the centre of the stain, the entire stain has gelified and evaporation of fluid is mainly driven by the porous media drying dynamics [START_REF] Brutin | Pattern formation in drying drops of blood[END_REF]. The drying front and cracks propagate towards the middle of the stain. (V) Finally, the drying front reaches the centre of the pool. The pool has almost completely desiccated. During this last stage, the entire pool is black in colour. As the last liquid evaporates, the remains contract and the cracks reach the middle of the stain. Accordingly, flakes are separated and partially or completely detach from the surface. We have observed the five stages described above for every pool we created. However, it should be clear that a pool does not dry in a uniform manner. Instead, one part of the blood pool may be fully desiccating (left side fig. 4) while another part is still in a gel-like state (right side fig. 4). Consequently, the centre of the pool, i.e., the location where all cracks come together, does not necessarily has to be the geometrical centre of the pool. Furthermore, the duration of any one stage and complete desiccation can differ between blood pools, depending on the humidity, temperature, shape and size of the pool and the kind of surface. During the drying of the blood pool, the mass (m) of the pool was measured with a balance, every minute (fig. 5). In the first and second stage the pool loses roughly 40% of its mass. Liquid is evaporating and the height of the pool diminishes over time. A linear function was fitted to the data points for the first six hours of drying, with fitting parameters A the initial droplet mass equal to 4.39 g and B the mass loss per hour equal to 403±3 mg/h. It is clear that the mass loss during the first six hours scales linear with time. Only once the drying front is significantly formed, does the decrease in mass stop scaling linear with time. This effect can be explained as follows. The pool is pinned to the surface, i.e., the contact line cannot move. As liquid from the pool evaporates, the volume diminishes, but because the contact line is pinned, the pool can only decrease in height. Moreover, to compensate for the volume loss at the contact line, there is a flow from the middle of the pool towards the rim. This flow causes particle (RBC's) transport throughout the pool which are deposited at the contact line creating a characteristic rim around the pool. As the pool dries, the contact angle should change which in turn should decrease the evaporation rate [START_REF] Hu | Evaporation of a sessile droplet on a substrate[END_REF]. We, however, have not observed any change in evaporation rate due to this effect. As long as the area of the pool does not change, the evaporation rate does not change. Only once a critical amount of liquid has been depleted from the pool, in this case more than 55%, does the evaporation rate change. 
At this point the liquid and gel areas of the pool are diminishing, and accordingly the evaporation rate diminishes as well, explaining the deviation from the linear trend after six hours in fig. 5. The largest visual change happens during stage III, where the entire stain transforms from a liquid/gel to a solid. Finally, 21% of the initial mass remains in this case. Multiple blood pools were created and recorded with varying haematocrit values and under different humidities. The masses of those pools are shown in fig. 6b (inset). Not surprisingly, it is clear that the larger the mass of a pool, the longer it takes for the pool to dry. The mass decreases linearly with time until roughly 50% to 70% of the mass has been depleted, at which point the slope diminishes and the mass becomes constant as the last liquid evaporates from the pool. Moreover, the left-over mass is dependent on both the initial volume of the pool and the haematocrit value of the blood, which is in accordance with the findings of [START_REF] Laan | Volume determination of fresh and dried bloodstains by means of optical coherence tomography[END_REF][START_REF] Laan | Bloodstain pattern analysis: implementation of a fluid dynamic model for position determination of victims[END_REF]. As the pools were deposited under various environmental humidities, these findings indicate that the drying speed, i.e., the slope of the linear part of the drying curves, becomes steeper with decreasing humidity. In other words, the higher the humidity, the longer the blood pool takes to dry. However, a more in-depth investigation is required to quantify how the drying speed depends on the humidity and other factors, as temperature, contact angle and size of the pool might influence the drying speed a lot. To have a better insight into the drying of the pools, we normalized the mass according to:
m_norm = (m - m_f) / (m_i - m_f) (2)
Here m i is the initial mass, m f is the final mass and t f is the final drying time defined as the point in time where the change in mass is less than 0.01 g/h. In fig. 6a, the normalized mass was plotted as a function of the normalized time to show that independent of humidity, mass, haematocrit value, or total drying time the mass diminishes similarly for every pool. We show that this dimensionless rescaling is valid for a range of masses (3.5 g < m i < 7 g) and a range of humidities (22% < H < 57%). This rescaling is the first step towards a unified theory concerning the drying of blood pools and maybe even pools in general.
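A minimal sketch of this rescaling, assuming the mass has been logged at regular time intervals as in the experiments; m_f is taken as the last recorded mass and t_f as the first time at which the drying rate falls below 0.01 g/h:

```python
import numpy as np

def normalize_drying_curve(t_h, m_g, rate_threshold=0.01):
    """Rescale a mass-vs-time drying curve as in Eq. (2).

    t_h: times in hours, m_g: measured masses in grams.
    Returns (t/t_f, m_norm) with m_norm = (m - m_f)/(m_i - m_f);
    t_f is the first time the drying rate falls below rate_threshold [g/h].
    """
    t_h, m_g = np.asarray(t_h, float), np.asarray(m_g, float)
    m_i, m_f = m_g[0], m_g[-1]
    rate = -np.gradient(m_g, t_h)                  # instantaneous drying rate [g/h]
    below = np.flatnonzero(rate < rate_threshold)
    t_f = t_h[below[0]] if below.size else t_h[-1]
    m_norm = (m_g - m_f) / (m_i - m_f)
    return t_h / t_f, m_norm
```

Plotting the output of this function for every pool is what collapses the curves of fig. 6a onto a single master curve.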
Although the literature concerning the drying and evaporation of blood pools is poor, there has been extensive studies concerning the drying of gels [START_REF] Sherwood | The drying of solidsi[END_REF][START_REF] Moore | The mechanism of moisture movement in clays with particular reference to drying-a concise review[END_REF][START_REF] Macey | Clay-water relationships and the internal mechanisms of drying[END_REF][START_REF] Dwivedi | Drying behaviour of alumina gels[END_REF], which show considerable similarities. The drying process of gels has three distinct drying stages. During the first stage the evaporation rate is constant and the volume decrease of the gel is equal to the volume of liquid lost by evaporation [START_REF] Moore | The mechanism of moisture movement in clays with particular reference to drying-a concise review[END_REF][START_REF] Macey | Clay-water relationships and the internal mechanisms of drying[END_REF]. Once a critical point is reached, the volume of the gel stops to decrease and cracking may occur. According to a study of Dwivedi [START_REF] Dwivedi | Drying behaviour of alumina gels[END_REF], the evaporation rate of water from alumina gels during this first stage is comparable to the evaporation rate of pure water. Subsequent to the first stage, gels undergo two so-called 'falling rate' stages where the evaporation rate decreases towards zero. During the first falling rate stage the liquid flows through partially empty pores, followed by a second falling rate stage, where the liquid diffuses its vapour to the surface. Comparably, drying blood exhibits a similar behaviour to water and a drying gel. After blood pool creation, blood dries with a constant rate of evaporation (fig. 5). Subsequently, the drying rate decreases and cracking occurs, similar to a gel. Consequently, drying blood has analogous characteristics to that of 13 both water and a gel. The size of the pool is dependent on two important parameters. The first one is the volume of the blood, namely, the larger the volume of the blood, the larger the area will be the blood spreads over (fig. 7). The second parameter is the contact angle θ, i.e., the wettability of the surface the blood lies on, which was calculated using the approach described in the supplementary materials (Eq. S6). If a surface is wetting (θ < 90 • ) then the blood spreads over a much larger area then if the surface was non-wetting (θ > 90 • ). Accordingly, the larger the contact angle between blood and surface, the smaller the area covered by the blood. This is reflected well by our results (fig. 7) which show that blood pools with a larger contact angle deviate from the average (dashed line), i.e., they have a smaller area. These factors are very important for the drying dynamics. Namely, a higher volume blood pool takes a longer time to dry, whereas an increase in area and a lower contact angle can both speed up the drying process. It is possible that during stage II, due to clotting, serum is forced out of the main pool, also known as serum separation [1]. In this study, we did not encounter any serum separation from our blood pools. This is not a surprise considering the findings of Ramsthaler et al. [START_REF] Ramsthaler | The ring phenomenon of diluted blood droplets[END_REF] who reported no serum separation unless the volume of the pool was larger than 10 ml, while in our study all pools were smaller than 7 ml.
We have to stress that the five stages of blood pool drying reported in this study are not the same stages as reported for single sessile droplets [START_REF] Brutin | Pattern formation in drying drops of blood[END_REF]. The drying stages observed in our experiments are very similar to those of drying blood droplets, however there are several distinct differences. First of all, the droplets reported by [START_REF] Brutin | Pattern formation in drying drops of blood[END_REF][START_REF] Zeid | Influence of evaporation rate on cracks formation of a drying drop of whole blood[END_REF][START_REF] Sobac | Desiccation of a sessile drop of blood: cracks, folds formation and delamination[END_REF] contain anti-coagulant and therefore do not clot, while clotting is a very important parameter, as it may inhibit particle transport due to evaporative flow. On crime scenes blood almost always clots once it has left the body and the drying dynamics could change a lot if the blood did not clot, e.g., serum separation would not occur at all. In addition, during clotting (stage I) the colour of blood turns from a dark red to a bright red, something which was not observed for non-coagulating blood droplets.
Secondly, pools are much larger than droplets, which changes the evaporation dynamics and the final appearance of the remains. Drying blood droplets show mobile fragmented cracking patterns at the corona and fine cracking patterns at the middle and periphery of the droplet [START_REF] Brutin | Pattern formation in drying drops of blood[END_REF]. The cracking patterns of blood droplets have been extensively investigated and resemble the cracking patterns of colloidal gels [START_REF] Zeid | Influence of evaporation rate on cracks formation of a drying drop of whole blood[END_REF][START_REF] Sobac | Desiccation of a sessile drop of blood: cracks, folds formation and delamination[END_REF][START_REF] Pauchard | Patterns caused by buckle-driven delamination in desiccated colloidal gels[END_REF], i.e., the peripheral cracks of a blood droplet divide the remains into polygonal shaped cells. However, the cracking patterns of an entire (coagulating) blood pool are quite dissimilar from those of a single blood droplet. With blood pools we observe long elongated cracking patterns which propagate towards the middle of the stain and turn black when completely desiccating. Once more, the size and thickness of the pool and the coagulation of the blood can be the main reasons why blood pools show very different cracking patterns from those of single blood droplets. The cracks of a blood pool are similar to those observed by Pauchard et al.
for a colloidal suspension (with a low ionic strength I = 0.4 mol/l) [START_REF] Pauchard | Influence of salt content on crack patterns formed through colloidal suspension desiccation[END_REF] and for a drying suspension of latex particles (0.1 µm) resulting from directional growth of fractures [START_REF] Pauchard | Morphologies resulting from the directional propagation of fractures[END_REF]. Cracks closely follow the drying front and the width between cracks is directly related to the thickness of the sample, with a prefactor depending on gel thickness, physicochemical properties, adhesion onto substrate and desiccation conditions [START_REF] Pauchard | Morphologies resulting from the directional propagation of fractures[END_REF][START_REF] Lazarus | From craquelures to spiral crack patterns: influence of layer thickness on the crack patterns induced by desiccation[END_REF]. Finally, a single blood droplet only shows a gelation front which partly propagates from the rim towards the middle. In contrast, a drying blood pool shows both a gelation front and a drying front that is created at the rim and completely propagates towards the middle of the stain. It is specifically the evolution of the drying fronts that define our different stages.
This study showed clearly that the drying dynamics of a pool of blood could be identified. However many more parameters would need to be investigated in further studies. Such parameters would be the influence of humidity and temperature, but as well the influence of the substrate which would mainly change the contact angle, and thus drying rate since it would change the spreading of the pool. Moreover it would be interesting to consider more the influence of the shape of the pool. Finally, size of the pool is very important. In these experiments, each pool was in the order of 4 ml, while in practice they might be much larger or even much smaller. For the latter, it is necessary to distinguish when a volume of liquid is considered a pool or a droplet. We suggest that a pool should be defined as having a flat surface. Accordingly, in our experiments given a contact angle of roughly 45 • , the minimum volume of a pool should be much more than 170 µl (see supplementary materials, Eq. S11), which is the case in our experiments.
Conclusion
In this study, for the first time the drying dynamics of pools of whole blood were investigated, for a range of 3.5 to 4.5 ml. We were able to distinguish five different drying stages, each with their own characteristics. The mass of a blood pool diminishes in a very reproducible manner, first linearly in time and then approaches a constant value. Additionally, we were able to collapse all mass curves onto a single curve by normalizing the mass and time of drying. The general knowledge concerning blood pools within the field of bloodstain pattern analysis is very limited at most. This work is a step forward in the classification and characterization of blood pools within this field. Prospectively, the results of this study may be used for crime scene reconstruction or for future investigations into determining the time the blood shedding event occurred. Finally, we verified that the size of the blood pools is directly related to its volume and the wettability of the surface. This result could be used to estimate the original volume of a dried blood pool, to answer the question if the amount of blood could constitute loss of life. We anticipate this study to be of considerable importance for forensics and bloodstain pattern analysis as a whole.
Acknowledgments
Of the Institut de Recherche Criminelle de la Gendarmerie Nationale we would like to thank the bloodstain pattern group for helping in this investigation, the nurses from the infirmary for helping us with the blood collection and all volunteers for giving their blood. This work received a financial grant from the French Agence Nationale de la Recherche in the frame of the project ANR-13-BS09-0026. Also, this work has been carried out in the framework of the Labex MEC (ANR-10-LABX-0092) and of the A*MIDEX project (ANR-11-IDEX-0001-02), funded by the "Investissements d'Avenir" French Government program managed by the French National Research Agency (ANR).
[1] S. H. James, P. E. Kish, Sutton, Principles of Bloodstain Pattern Analysis, Theory and Practice, CRC, 2005.
[2] H. F. Bartz, Estimating original bloodstain volume: the development of a new technique relating volume and surface area, Ph.D. thesis, Department of Biology, Laurentian University, Sudbury, Ontario (2003).
Figure 1: Picture of a real pool of blood found on a crime scene, (top) before the body was removed and (bottom) 22 hours later. The yellow liquid is serum which was separated during clotting, and the black mass in the top picture is a large formed clot.
Figure 2: A schematic representation of the cross-section of a) a single droplet, b) a pool and c) three droplets on surfaces varying in wettability.
Figure 3: Time-lapse of a drying pool of blood from a healthy person, at 21 °C with a relative humidity of 32%. For a movie of the drying blood pool, see supplementary material. Note that the elapsed time between pictures is different for each stage. The time of drying for each picture is shown in fig. 5.
Figure 4: Close-up view of a drying pool, with several defined properties of the pool. The yellow dashed and dotted lines represent the gelation and drying front, respectively.
Figure 5: The mass of a blood pool as a function of time, with pictures of the pool at several moments in time. The vertical dashed lines represent the times at which the pictures in fig. 3 were taken. The red line is the mass of the pool, the dashed line a linear fit to the data from t = 0 until t = 8.2 h, the time the pool transitioned into stage III. The percentages are the amount of mass left at that specific point in time.
Figure 6: a) The normalized mass as a function of normalized time of the blood pools and b) the mass of the pools as a function of time (inset), for different humidities and haematocrit values.
Figure 7: The area A as a function of the volume V of the blood pools. The colours represent the contact angle of the blood with the surface: red θ = 37°, orange 40° < θ < 45°, green 45° < θ < 50°, blue 50° < θ < 60°, black θ = 65°. Errors in volume are smaller than the size of the symbols.
"1525"
] | [
"110963",
"949",
"949",
"949"
] |
01770014 | en | [
"spi",
"info"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01770014/file/Channel_Hardening_preprint_final.pdf | Matthieu Roy
Stéphane Paquelet
Luc Le Magoarou
Matthieu Crussière
MIMO Channel Hardening: A Physical Model based Analysis
Keywords: channel hardening, physical model, MIMO
In a multiple-input-multiple-output (MIMO) communication system, the multipath fading is averaged over radio links. This well-known channel hardening phenomenon plays a central role in the design of massive MIMO systems. The aim of this paper is to study channel hardening using a physical channel model in which the influences of propagation rays and antenna array topologies are highlighted. A measure of channel hardening is derived through the coefficient of variation of the channel gain. Our analyses and closed form results based on the used physical model are consistent with those of the literature relying on more abstract Rayleigh fading models, but offer further insights on the relationship with channel characteristics.
I. INTRODUCTION
Over the last decades, multi-antenna techniques have been identified as key technologies to improve the throughput and reliability of future communication systems. They offer a potential massive improvement of spectral efficiency over classical SISO (single-input-single-output) systems proportionally to the number of involved antennas. This promising gain has been quantified in terms of capacity in the seminal work of Telatar [START_REF] Telatar | Capacity of Multi-antenna Gaussian Channels[END_REF] and has recently been even more emphasized with the newly introduced massive MIMO paradigm [START_REF] Marzetta | Fundamentals of massive MIMO[END_REF].
Moving from SISO to MIMO, the reliability of communication systems improves tremendously. On the one hand in SISO, the signal is emitted from one single antenna and captured at the receive antenna as a sum of constructive or destructive echoes. This results in fading effects leading to a potentially very unstable signal to noise ratio (SNR) depending on the richness of the scattering environment. On the other hand in a MIMO system, with appropriate precoding, small-scale multipath fading is averaged over the multiple transmit and receive antennas. This yields a strong reduction of the received power fluctuations, hence the channel gain becomes locally deterministic essentially driven by its large-scale properties. This effect, sometimes referred to as channel hardening [START_REF] Hochwald | Multipleantenna channel hardening and its implications for rate feedback and scheduling[END_REF] has recently been given a formal definition based on the channel power fluctuations [START_REF] Ngo | No downlink pilots are needed in TDD massive MIMO[END_REF]. Indeed, studies on the stability of the SNR are essential to the practical design of MIMO systems, in particular on scheduling, rate feedback, channel coding and modulation dimensioning [START_REF] Marzetta | Fundamentals of massive MIMO[END_REF], [START_REF] Hochwald | Multipleantenna channel hardening and its implications for rate feedback and scheduling[END_REF], [START_REF] Björnson | Random access protocol for massive MIMO: Strongest-user collision resolution (SUCR)[END_REF]. From the definition in [START_REF] Ngo | No downlink pilots are needed in TDD massive MIMO[END_REF], we propose in this paper a comprehensive study on channel hardening through a statistical analysis of received power variations derived from the propagation characteristics of a generic ray-based spatial channel model. Related work. Channel hardening, measured as the channel gain variance, has recently been studied from several points of view. The authors in [START_REF] Martínez | Massive MIMO properties based on measured channels: Channel hardening, user decorrelation and channel sparsity[END_REF] used data from measurement campaigns and extracted the variance of the received power. A rigorous definition of channel hardening was then given in the seminal work [START_REF] Ngo | No downlink pilots are needed in TDD massive MIMO[END_REF] based on the asymptotic behavior of the channel gain for large antenna arrays. This definition was applied to pinhole channels, i.i.d. correlated and uncorrelated Rayleigh fading models [START_REF] Björnson | Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency[END_REF]. Contributions. Complementary to this pioneer work, we propose a non-asymptotic analysis of channel hardening, as well as new derivations of the coefficient of variation of the channel not limited to classically assumed Rayleigh fading models. Indeed, channel hardening is analyzed herein using a physically motivated ray-based channel model widely used in wave propagation. Our approach is consistent with previous studies [START_REF] Ngo | No downlink pilots are needed in TDD massive MIMO[END_REF], [START_REF] Martínez | Massive MIMO properties based on measured channels: Channel hardening, user decorrelation and channel sparsity[END_REF], but gives deeper insights on channel hardening. 
In particular, we managed to provide an expression of the channel hardening measure in which the contributions of the transmit and receive antenna arrays and of the propagation conditions can easily be identified, and thus interpreted.
Notations. Upper-case and lower-case bold symbols are used for matrices and vectors. z* denotes the conjugate of z. u stands for a three-dimensional (3D) vector. <.,.> and a.u denote the inner product between two vectors of C^N and between 3D vectors, respectively. [H]_{p,q} is the element of matrix H at row p and column q. ||H||_F, ||h|| and ||h||_p stand for the Frobenius norm, the Euclidean norm and the p-norm, respectively. H^H and H^T denote the conjugate transpose and the transpose matrices. H̄ denotes the normalized matrix H/||H||_F. E{.} and Var{.} denote the expectation and variance.
II. CHANNEL MODEL
We consider a narrowband MIMO system (interpretable as an OFDM subcarrier) with N_t antennas at the transmitter and N_r antennas at the receiver, such that y = Hx + n, with x in C^(N_t x 1), y in C^(N_r x 1) and n in C^(N_r x 1) the vectors of transmit, receive and noise samples, respectively. H in C^(N_r x N_t) is the MIMO channel matrix, whose entries [H]_{i,j} are the complex gains of the SISO links between transmit antenna j and receive antenna i. The capacity of the MIMO channel can be expressed as [1]
C = log_2( det( I_{N_t} + ρ Q H̄^H H̄ ) ) bps/Hz, (1)
where ρ = (P_t / N_0) ||H||_F^2, with Q in C^(N_t x N_t), P_t and N_0 the input correlation matrix (precoding), the emitted power and the noise power, respectively. C is a monotonic function of the optimal received SNR ρ [START_REF] Loyka | On physically-based normalization of MIMO channel matrices[END_REF], hence ||H||_F^2 directly influences the capacity of the MIMO channel. It is thus of high interest to study the spatial channel gain variations in order to predict the stability of the capacity.
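As a numerical illustration, the sketch below evaluates this capacity expression; the uniform power allocation Q = I/N_t is only an illustrative choice, not part of the model, and the normalization by ||H||_F cancels against ρ so that the argument reduces to I + (P_t/N_0) Q H^H H.

```python
import numpy as np

def capacity_bps_hz(H, pt_over_n0, Q=None):
    """Capacity of Eq. (1): C = log2 det(I_Nt + rho * Q * Hbar^H Hbar).

    Hbar = H/||H||_F and rho = (Pt/N0)*||H||_F^2, so the normalization
    cancels and the argument equals I + (Pt/N0) * Q * H^H H.
    Q defaults to uniform power allocation I/Nt (illustrative choice).
    """
    Nt = H.shape[1]
    if Q is None:
        Q = np.eye(Nt) / Nt
    M = np.eye(Nt) + pt_over_n0 * Q @ H.conj().T @ H
    sign, logdet = np.linalg.slogdet(M)
    return logdet / np.log(2)

# Example: 2x2 Rayleigh channel at 10 dB
# H = (np.random.randn(2, 2) + 1j * np.random.randn(2, 2)) / np.sqrt(2)
# print(capacity_bps_hz(H, pt_over_n0=10.0))
```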
In the sequel, we will consider that the channel matrix H is obtained from the following generic multi-path 3D ray-based model considering planar wavefronts [START_REF] Raghavan | Sublinear Capacity Scaling Laws for Sparse MIMO Channels[END_REF], [START_REF] Zwick | A stochastic multipath channel model including path directions for indoor environments[END_REF], [7, p. 485]
H(f) = sqrt(N_t N_r) * sum_{p=1}^{P} c_p e_r(u_rx,p) e_t(u_tx,p)^H. (2)
Such a channel consists of a sum of P physical paths, where c_p is the complex gain of path p and u_tx,p (resp. u_rx,p) its direction of departure, DoD (resp. of arrival, DoA). In (2), e_t and e_r are the so-called steering vectors associated to the transmit and receive arrays. They contain the path differences of the plane wave from one antenna to another and are defined as [START_REF] Zwick | A stochastic multipath channel model including path directions for indoor environments[END_REF]
e_t(u_tx,p) = (1/sqrt(N_t)) [ e^{2jπ a_tx,1 . u_tx,p / λ}, ..., e^{2jπ a_tx,N_t . u_tx,p / λ} ]^T, (3)
and similarly for e r ( u rx,p ). The steering vectors depend not only on the DoD/DoA of the impinging rays, but also on the topology of the antenna arrays. The latter are defined by the sets of vectors A tx = { a tx,j } and A rx = { a rx,j } representing the positions of the antenna elements in each array given an arbitrary reference. Such channel model has already been widely used (especially in its 2D version) [START_REF] Raghavan | Sublinear Capacity Scaling Laws for Sparse MIMO Channels[END_REF], [START_REF] Zwick | A stochastic multipath channel model including path directions for indoor environments[END_REF], verified through measurements [START_REF] Samimi | 28 GHz Millimeter-Wave Ultrawideband Small-Scale Fading Models in Wireless Channels[END_REF] for millimeter waves and studied in the context of channel estimation [START_REF] Magoarou | Parametric channel estimation for massive MIMO[END_REF]. In contrast to Rayleigh channels, it explicitly takes into account the propagation conditions and the topology of the antenna arrays.
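A minimal sketch of the model of Eqs. (2)-(3), assuming for illustration a ULA laid out along the x-axis (any array geometry can be described through the position sets A_tx and A_rx):

```python
import numpy as np

def ula_positions(n, spacing):
    """Antenna positions of a ULA along the x-axis (3D coordinates)."""
    return np.stack([np.arange(n) * spacing, np.zeros(n), np.zeros(n)], axis=1)

def steering(positions, u, lam):
    """Steering vector e(u) of Eq. (3) for a unit direction u (3D)."""
    n = positions.shape[0]
    return np.exp(2j * np.pi * positions @ u / lam) / np.sqrt(n)

def ray_channel(c, U_tx, U_rx, A_tx, A_rx, lam):
    """Ray-based channel of Eq. (2): sqrt(Nt*Nr) * sum_p c_p e_r e_t^H."""
    Nt, Nr = A_tx.shape[0], A_rx.shape[0]
    H = np.zeros((Nr, Nt), dtype=complex)
    for cp, ut, ur in zip(c, U_tx, U_rx):
        H += cp * np.outer(steering(A_rx, ur, lam), steering(A_tx, ut, lam).conj())
    return np.sqrt(Nt * Nr) * H
```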
In view of the following sections, let c = [|c_1|, ..., |c_P|]^T denote the vector of ray amplitudes. ||c||^2 is the aggregated power from all rays, corresponding to large-scale fading due to path-loss and shadowing.
III. CHANNEL HARDENING
Definition. Due to the multipath behavior of propagation channels, classical SISO systems suffer from a strong fast-fading phenomenon at the scale of the wavelength, resulting in strong capacity fluctuations [START_REF] Telatar | Capacity of Multi-antenna Gaussian Channels[END_REF]. MIMO systems average the fading phenomenon over the antennas so that the channel gain varies much more slowly. This effect is called channel hardening. In this paper, the relative variation of the channel gain ||H||_F^2, called the coefficient of variation (CV), is evaluated to quantify the channel hardening effect, as previously introduced in [4], [START_REF] Björnson | Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency[END_REF]:
CV^2 = Var{||H||_F^2} / (E{||H||_F^2})^2 = ( E{||H||_F^4} - (E{||H||_F^2})^2 ) / (E{||H||_F^2})^2 (4)
In (4) the statistical means are obtained from the model governing the entries of H, given random positions of the transmitter and the receiver. This measure was previously applied to an N_t x 1 correlated Rayleigh channel model h ~ CN(0,R) [START_REF] Ngo | No downlink pilots are needed in TDD massive MIMO[END_REF], [7, p. 231]. In that particular case, (4) becomes
CV^2 = ( E{|h^H h|^2} - Tr(R)^2 ) / Tr(R)^2 = Tr(R^2) / Tr(R)^2, (5)
where the rightmost equality comes from the properties of Gaussian vectors [START_REF] Björnson | Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency[END_REF]Lemma B.14]. This result only depends on the covariance matrix R, from which the influences of antenna array topology and propagation conditions are not explicitly identified. Moreover, small-scale and large-scale phenomena are not easily separated either. In this paper, (4) is studied using a physical channel model that leads to much more interpretable results.
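Eq. (5) is easy to verify numerically; the following sketch draws h ~ CN(0,R) for an arbitrary covariance matrix (chosen here only as an example) and compares the empirical CV^2 of ||h||^2 with Tr(R^2)/Tr(R)^2:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt = 8
A = rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt))
R = A @ A.conj().T / Nt                      # an arbitrary covariance matrix

# Draw h ~ CN(0, R) by colouring i.i.d. CN(0,1) samples with R^(1/2)
L = np.linalg.cholesky(R)
w = (rng.standard_normal((Nt, 100_000)) + 1j * rng.standard_normal((Nt, 100_000))) / np.sqrt(2)
h = L @ w
g = np.sum(np.abs(h) ** 2, axis=0)           # channel gains ||h||^2

cv2_empirical = g.var() / g.mean() ** 2
cv2_theory = np.trace(R @ R).real / np.trace(R).real ** 2
print(cv2_empirical, cv2_theory)             # the two values should agree closely
```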
Assumptions on the channel model. The multipath channel model described in Section II relies on several parameters governed by some statistical laws. Our aim is to provide an analytical analysis of CV while relying on the weakest possible set of assumptions on the channel model. Hence, we will consider that:
• For each ray, gain, DoD and DoA are independent.
• arg(c p ) ∼ U[0,2π] i.i.d.
• u_tx,p and u_rx,p are i.i.d. with distributions D_tx and D_rx.
The first hypothesis is widely used and simply says that no formal relation exists between the gain and the DoD/DoA of each ray. The second one reasonably indicates that each propagated path experiences an independent phase rotation, without any predominant angle.
It has indeed been observed through several measurement campaigns that rays can be grouped into clusters [START_REF] Saleh | A statistical model for indoor multipath propagation[END_REF], [START_REF] Wu | 60-GHz Millimeter-Wave Channel Measurements and Modeling for Indoor Office Environments[END_REF]. Considering the limited angular resolution of finite-size antenna arrays, it is possible to approximate all rays of the same cluster as a unique ray without significantly harming the accuracy of the channel description [START_REF] Magoarou | Parametric channel estimation for massive MIMO[END_REF]. It then makes sense to assume that this last hypothesis is valid for the main DoDs and DoAs of the clusters.
Simulations. A preliminary assessment of the coefficient of variation is computed through Monte-Carlo simulations of (4), using uniform linear arrays (ULA) with an inter-antenna spacing of λ/2 at both the transmitter and the receiver and taking a growing number of antennas. A total of P in {2, 4, 5, 6} paths were randomly generated with complex Gaussian gains c_p ~ CN(0,1), uniform DoDs u_tx,p ~ U_S2 and DoAs u_rx,p ~ U_S2.
Simulation results of CV are reported in Fig. 1 as a function of the number of antennas. It is observed that all curves seem to reach an asymptote around 1/P for large N t and N r . Hence, the higher the number of physical paths, the harder the channel. The goal of the next sections is to provide further interpretation of such phenomenon by means of analytical derivations.
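A self-contained sketch of this Monte-Carlo procedure for half-wavelength ULAs (for a ULA along the x-axis only the x-component of each direction enters the steering phases) reproduces the behaviour of Fig. 1:

```python
import numpy as np

def simulate_cv2(Nt, Nr, P, lam=1.0, trials=2000, seed=0):
    """Monte-Carlo estimate of CV^2 (Eq. 4) for half-wavelength ULAs."""
    rng = np.random.default_rng(seed)
    ax = np.arange(Nt) * lam / 2             # transmit ULA positions (x-axis)
    ar = np.arange(Nr) * lam / 2             # receive ULA positions
    gains = np.empty(trials)
    for k in range(trials):
        c = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2)
        ut = rng.standard_normal((P, 3)); ut /= np.linalg.norm(ut, axis=1, keepdims=True)
        ur = rng.standard_normal((P, 3)); ur /= np.linalg.norm(ur, axis=1, keepdims=True)
        et = np.exp(2j * np.pi * np.outer(ax, ut[:, 0]) / lam) / np.sqrt(Nt)  # Nt x P
        er = np.exp(2j * np.pi * np.outer(ar, ur[:, 0]) / lam) / np.sqrt(Nr)  # Nr x P
        H = np.sqrt(Nt * Nr) * (er * c) @ et.conj().T
        gains[k] = np.linalg.norm(H, 'fro') ** 2
    return gains.var() / gains.mean() ** 2

# e.g. simulate_cv2(16, 16, P=4) flattens near the 1/P floor observed in Fig. 1
```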
IV. DERIVATION OF CV^2
In this section, CV^2 is analyzed analytically starting from (4).
Expectation of the channel gain. From (2) and (3), the channel gain ||H||_F^2 = Tr(H^H H) can be written as
||H||_F^2 = N_t N_r sum_{p,p'} c_p^* c_{p'} γ_{p,p'},
where the term γ_{p,p'} is given by γ_{p,p'} = <e_r(u_rx,p), e_r(u_rx,p')> <e_t(u_tx,p), e_t(u_tx,p')>*. Using the hypothesis arg(c_p) ~ U[0,2π] i.i.d. introduced in the channel model and γ_{p,p} = 1, the expectation of the channel gain can further be expressed as
E{||H||_F^2} = N_t N_r E{||c||^2}. (6)
Thus the average channel gain increases linearly with N_r and N_t, which is consistent with the expected beamforming gain N_t and the fact that the received power linearly depends on N_r.
Coefficient of variation. The coefficient of variation CV is derived using the previous hypotheses and (6). We introduce:
E^2(A_tx, D_tx) = E{ |<e_t(u_tx,p), e_t(u_tx,p')>|^2 }, E^2(A_rx, D_rx) = E{ |<e_r(u_rx,p), e_r(u_rx,p')>|^2 }. (7)
These quantities are the second moments of the inner products of the transmit/receive steering vectors associated to two distinct rays. They represent the correlation between two rays as observed by the system. They can also be interpreted as the average inability of the antenna arrays to discriminate two rays given a specific topology and ray distribution. From such definitions, and based on the derivations given in Appendix A, CV^2 can be expressed as a sum of two terms,
CV^2 = E^2(A_tx, D_tx) E^2(A_rx, D_rx) * E{||c||^4 - ||c||_4^4} / (E{||c||^2})^2 + Var{||c||^2} / (E{||c||^2})^2. (8)
Note that this result only relies on the assumptions introduced in Section II. The second term can be identified as the contribution of the spatial large-scale phenomena, since it simply consists of the coefficient of variation of the previously defined large-scale fading parameter ||c||^2 of the channel. To allow a local interpretation of the channel behavior, conditioning the statistical model on ||c||^2 is required. This cancels the large-scale contribution to CV^2, which reduces to what is called hereafter small-scale fading.
V. INTERPRETATIONS
A. Large-scale fading
The contribution of large-scale fading to CV^2 is basically the coefficient of variation of the total aggregated power ||c||^2 of the rays. To better emphasize its behavior, let us consider a simple example with independent |c_p|^2 of mean µ and variance σ^2. The resulting large-scale fading term is then
Var{||c||^2} / (E{||c||^2})^2 = (1/P) (σ/µ)^2.
It clearly appears that more rays lead to reduced large-scale variations. This stems from the fact that any shadowing phenomenon is well averaged over P independent rays, hence becoming almost deterministic in rich scattering environments. This result explains the floor levels obtained for various P in our previous simulations in Section II and is consistent with the literature on correlated Rayleigh fading channels where high rank correlation matrices provide a stronger channel hardening effect than low rank ones [START_REF] Björnson | Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency[END_REF].
B. Small-scale fading
The coefficient of variation particularized with the statistical conditional model can easily be proven to be:
CV^2|_{||c||^2} = E^2(A_tx, D_tx) E^2(A_rx, D_rx) α^2(c),
where
α^2(c) = 1 - E_{c | ||c||^2}{ ||c||_4^4 } / ||c||^4. (9)
The small-scale fading contribution to CV^2 thus consists of a product of the quantities defined in (7), which depend only on the antenna array topologies (A_tx / A_rx) and ray distributions (D_tx / D_rx), multiplied by a propagation-conditions factor α^2(c) that depends only on the statistics of the ray powers c. Ray correlations. This paragraph focuses on the quantity E^2(A_tx, D_tx) (the study is done only at the emitter, the obtained results being equally valid at the receiver). Eq. (7) yields
E^2(A_tx, D_tx) = (1/N_t^2) E{ | sum_{i=1}^{N_t} e^{2jπ a_tx,i . (u_tx,p - u_tx,p') / λ} |^2 }.
A well-known situation is when the inner sum involves exponentials of independent uniformly distributed phases and hence corresponds to a random walk with N_t steps of unit length. The above expectation then consists of the second moment of a Rayleigh distribution, and E^2(A_tx, D_tx) = 1/N_t. A necessary condition for such a case is to have (at least) a half-wavelength antenna spacing ∆d to ensure that the phases are spread over [0,2π]. On the other hand, phase independence is expected to occur for asymptotically large ∆d. It is however shown hereafter that this assumption turns out to be valid for much more reasonable values of ∆d.
Numerical evaluations of E^2 are performed versus ∆d (Fig. 2) and versus N_t (Fig. 3). Uniformly distributed rays over the 3D unit sphere (D_tx = D_rx = U_S2) and Uniform Linear, Circular and Planar Arrays (ULA, UCA and UPA) are considered. As a reminder, the smaller E(A_tx, D_tx), the better the channel hardening. In Fig. 2, E^2 reaches the asymptote 1/N_t for all array types with ∆d = λ/2 and remains almost constant for larger ∆d. Fig. 3 shows that E^2 closely follows the 1/N_t law whatever the array type. We thus conclude that the independent uniform phases situation discussed above is a sufficient model for any array topology given that ∆d ≥ λ/2. It is therefore assumed in the sequel that
E^2(A_tx, U_S2) ≈ 1/N_t, E^2(A_rx, U_S2) ≈ 1/N_r.
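This approximation is straightforward to check numerically for a given array; the sketch below estimates E^2(A_tx, U_S2) for a half-wavelength ULA by Monte-Carlo and can be compared with the 1/N_t asymptote of Figs. 2 and 3:

```python
import numpy as np

def e2_ula(Nt, spacing_over_lambda=0.5, samples=20000, seed=0):
    """Monte-Carlo estimate of E^2(A_tx, U_S2) = E{|<e_t(u), e_t(u')>|^2} for a ULA."""
    rng = np.random.default_rng(seed)
    ax = np.arange(Nt) * spacing_over_lambda          # positions in wavelengths
    u = rng.standard_normal((samples, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
    v = rng.standard_normal((samples, 3)); v /= np.linalg.norm(v, axis=1, keepdims=True)
    # For a ULA along the x-axis only the x-projection of the directions matters.
    phase = np.outer(u[:, 0] - v[:, 0], ax)           # samples x Nt
    inner = np.exp(2j * np.pi * phase).mean(axis=1)   # <e_t(u), e_t(u')> incl. 1/Nt factors
    return np.mean(np.abs(inner) ** 2)

# e2_ula(64) is expected to be close to 1/64 for half-wavelength spacing
```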
In contrast to the large-scale fading, more rays lead to more small-scale fluctuations. It is indeed well known that a richer scattering environment increases small-scale fading.
Comparison with the simulations. Based on the general formula given in (8), on the interpretations and evaluations of its terms, we can derive the expression of channel hardening for the illustrating simulations of Section II:
CV^2_illustration = (1 / (N_t N_r)) (1 - 1/P) + 1/P.
Simulation and approximated formula are compared in Fig. 4 in which small-scale and large-scale contributions are easily evidenced, as intuitively expected from simulations of Fig. 1.
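As a worked example of the closed form above, N_t = N_r = 16 and P = 4 give CV^2 ≈ 0.75/256 + 0.25 ≈ 0.253, i.e. already dominated by the 1/P floor; a one-line helper (hypothetical) makes such values easy to tabulate against the simulated curves:

```python
def cv2_illustration(Nt, Nr, P):
    """Closed-form CV^2 for the illustrating simulations: (1 - 1/P)/(Nt*Nr) + 1/P."""
    return (1.0 - 1.0 / P) / (Nt * Nr) + 1.0 / P

# cv2_illustration(16, 16, 4) -> ~0.253, close to the 1/P = 0.25 floor of Fig. 1
```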
Comparison with the Gaussian i.i.d. model. This model assumes a rich scattering environment. Using (5) with R = I:
CV^2_iid = 1 / (N_t N_r).
Using the realistic model in a rich scattering environment, the large-scale part of (8) vanishes, leading to a deterministic ||c||^2, and the small-scale variations reach the upper bound of (10). This yields the limit
CV^2 -> CV^2_iid as P -> ∞, (12)
which is coherent with the interpretation of the model.
VI. CONCLUSION
In this paper, previous studies on channel hardening have been extended using a physics-based model. We have separated the influences of antenna array topologies and propagation characteristics on the channel hardening phenomenon. Large-scale and small-scale contributions to channel variations have been evidenced. Essentially, this paper provides a general framework to study channel hardening using accurate propagation models.
To illustrate the overall behavior of channel hardening, this framework has been used with generic model parameters and hypotheses. The scaling laws evidenced for simpler channel models are conserved provided the antennas are spaced by at least half a wavelength. The results are consistent with the state of the art and provide further insights on the influence of array topology and propagation on channel hardening. The proposed expression can easily be exploited with various propagation environments and array topologies to provide a more precise understanding of the phenomenon compared to classical channel descriptions based on Rayleigh fading models.
ACKNOWLEDGMENT
This work has been performed in the framework of the Horizon 2020 project ONE5G (ICT-760809) receiving funds from the European Union. The authors would like to acknowledge the contributions of their colleagues in the project, although the views expressed in this contribution are those of the authors and do not necessarily represent the project.
APPENDIX A: COEFFICIENT OF VARIATION (8)
For the sake of simplicity, an intermediary matrix A is introduced. It is defined by
Fig. 1. Simulated CV^2 for a growing number of rays. Asymptotes are the black dashed lines.
Fig. 2. Numerical evaluation of E(A_tx, U_S2) for various array types and increasing antenna spacing ∆d. The values are normalized so the asymptote is 1.
Fig. 3 .
3 Fig. 3. Numerical evaluation of E(Atx,U S 2 ) for various antenna arrays at the half wavelength. The lower, the better.
Fig. 4 .
4 Fig. 4. Comparison between (8) and simulated CV 2 . Uniform distribution of DoDs and DoAs over the unit sphere and complex Gaussian gains.Propagation conditions. It is now interesting to point out that the propagation factor α(c) introduced in (9) is bounded by 0 ≤ α 2 (c) ≤ 1-1/P. (10) Those bounds are deduced from the following inequality:c 4 2 /P ≤ c 4 4 ≤ c 4 2 .(11)The right inequality comes from the convexity of the square function. Equality is achieved when there is only one contributing ray, i.e. no multipath occurs. In that case CV 2 c 2 = 0 and the MIMO channel power is deterministic. The left part in[START_REF] Samimi | 28 GHz Millimeter-Wave Ultrawideband Small-Scale Fading Models in Wireless Channels[END_REF] is given by Hölder's inequality. Equality is achieved when there are P rays of equal power. Then, taking the expectation on each member in (11) yields[START_REF] Zwick | A stochastic multipath channel model including path directions for indoor environments[END_REF].In contrast to the large-scale fading, more rays lead to more small-scale fluctuations. It is indeed well known that a richer scattering environment increases small-scale fading.Comparison with the simulations. Based on the general formula given in[START_REF] Loyka | On physically-based normalization of MIMO channel matrices[END_REF], on the interpretations and evaluations of its terms, we can derive the expression of channel hardening for the illustrating simulations of Section II:
2 F 4 F
24 [A] p,p = 2|γ p,p |cos(φ p,p ) if p = p 1 if p = p with φ p,p = arg(c * p c p γ p,p ) the whole channel phase dependence. H 2 F can be written using a quadratic form with vector c and matrix A, which can be decomposed into two terms I (identity) and J HN t N r = c T Ac = c T c+c T Jc where J = A-I. E{J} = 0 so: E H (N t N r ) 2 = E c 4 +E (c T Jc) 2 .The ray independence properties yields the following weighted sum of coupled ray powersE (c T Jc) 2 = p =p E |c p | 2 |c p | 2 E [J]2 p,p . Considering i.i.d. rays, all the weights E [J] 2 p,p are identical. Using the weights notations introduced in (7) and the definition of the 4-norm yields the second order moment E H 4 F . With the expectation (6) we derive the result (8). | 24,892 | [
"177777",
"4463",
"13113"
] | [
"435003",
"435003",
"435003",
"185974",
"435003"
] |
01770066 | en | [
"chim"
] | 2024/03/05 22:32:16 | 2017 | https://theses.hal.science/tel-01770066/file/WANG_XUAN_2017.pdf | Enric Garcia
Xuan Wang
These four years of PhD research constitute one of the most important periods in my life. The result of the PhD is not only the academic papers but also, most importantly, the way I think and act in the academic field, which will definitely change
Following the fabrication of the multilayer templates, an in situ and reproducible synthesis of metallic nanoparticles was developed in order to generate nanocomposites selectively inside the P2VP layers. The size of the Au nanoparticles can be well controlled, around 7-10 nm. We also found that the reduction process can influence the particle shape (sphere, triangle or cylinder) and size through the choice of solvent or reducing agent. Because the extraction of accurate optical responses from the spectroscopic ellipsometry data, which comes in the last part, critically relies on a precise knowledge of the sample structure, we used several experimental techniques to access a precise description of the produced materials. In particular, we used a quartz crystal microbalance to 'kinetically' follow the volume fraction of Au loading as the number of gold introduction cycles is increased.
We find that the amount of gold in the composite layers can be varied up to typically 40 volume%.
The optical properties of the nanocomposite films are determined by variable-angle spectroscopic ellipsometry and analyzed with appropriately developed effective medium models. The films are structurally uniaxial and homogeneous, and we can define their dielectric permittivity tensor with ordinary (parallel to the substrate) and extraordinary (normal to the substrate) components. The analysis of the lamellar structures allows the extraction of the components εo and εe, both presenting a resonance close to λ = 540 nm, with a significantly stronger amplitude for εo. When the gold load is high enough and the couplings between particles are strong enough, the values of εo become negative close to the resonance, and the material reaches the so-called hyperbolic regime, which constitutes a step towards applications in hyper-resolution imaging.
Résumé
Unprecedented optical properties are predicted if optical nanoresonators are organized within a material, which can be achieved by the self-assembly of chemically synthesized plasmonic nanoparticles. In this PhD work, we use ordered block copolymer structures to organize plasmonic nanoparticles. We study the link between the structure of the thin-film nanocomposites, in particular the nature, density and organization of the nanoparticles, and their optical properties.

To this end, we first produced lamellar phases of poly(styrene)-block-poly(2-vinylpyridine) (PS-b-P2VP) diblock copolymers as thin films with controlled thickness (100 nm-700 nm) and lamellar period (17 nm-70 nm), and with optimized alignment and homogeneity.

We developed an in situ synthesis within these lamellar films which produces, in a controlled and reproducible way, plasmonic nanoparticles 7-10 nm in diameter selectively in the P2VP domains. We showed that the size and shape of the gold particles formed in situ can be modified by playing on the solvent and on the chemical reducing agent involved. We studied in detail the structure of the formulated nanocomposites, which is in particular necessary for the proper exploitation of the spectroscopic ellipsometry data in order to determine the optical responses. The structure of the samples was studied by different microscopy methods (transmission and scanning electron microscopy, atomic force microscopy), as well as by X-ray scattering. We used a quartz crystal microbalance to follow 'kinetically' the amount of gold introduced into the lamellar matrices as it is progressively increased. The gold content reaches values of 40% by volume.

The optical properties of the nanocomposite films are determined by variable-angle spectroscopic ellipsometry and analyzed using effective medium models. The films are homogeneous and uniaxially anisotropic, and their dielectric permittivity tensor can be defined with an ordinary component εo (parallel to the substrate) and an extraordinary component εe (perpendicular to the substrate). The analysis shows that the two components εo and εe present a resonance
General Introduction
Over the past decades, interest in the fabrication and use of nanocomposites has grown considerably, thanks to advances both in manufacturing and in measurement. Metamaterials are artificial materials structurally designed with the purpose of producing extraordinary optical, electrical or magnetic properties; their principle was proposed in 1968 by Veselago, and they have been a very active field of research for the past twenty years. By combining nanostructuration and anisotropy, it is possible to engineer novel, non-natural dispersion relations in order to control original propagation properties. This is why recent interest has focused on a special case of uniaxial anisotropic metamaterials, called hyperbolic materials. At the nanoscale, the lithographic manufacturing processes used for producing such metamaterials become inadequate because of the complexity of the fabrication process and the limited achievable dimensions. Self-assembly and nano-chemistry provide new ways of manufacturing. This thesis was carried out within the larger project of the "meta group", which aims to formulate new generations of metamaterials with innovative optical properties in the visible light domain. For this purpose, the aim of this PhD thesis is to study methodologies for the fabrication of nanocomposites of block copolymers with gold nanoparticles, specifically lamellar systems loaded with metallic nanoparticles, to control their nanostructure, and to study the relation between the nanoscale structure of these self-assembled materials and their optical properties.
In the first chapter we are interested in the context of our study. We discuss the concepts and fabrication of metamaterials and of hyperbolic metamaterials. The various strategies envisaged for periodically organizing gold nanoparticles in self-assembled diblock copolymers are described. In order to achieve a hyperbolic metamaterial in the visible range, anisotropic plasmonic nanocomposites will be produced by periodically organizing gold particles in nanostructured matrices of block copolymers.

Gold nanoparticles have a well-defined plasmon resonance between 300 and 900 nm, while block copolymers are known for their self-assembly properties and give access to a wide variety of nanostructure sizes and shapes, with characteristic dimensions ranging from 10 to 100 nm. The physicochemical and optical properties of gold nanoparticles, and more particularly the parameters influencing the surface plasmon resonance, will be described. Finally, the thermodynamics and the shaping of lamellar systems of diblock copolymers will also be explained.
In the second chapter, we present the different experimental techniques used to study the structure and the optical responses of the target material. The optical properties of gold nanoparticles dispersed in a copolymer matrix will be probed by spectroscopic measurements. More particularly, spectroscopic ellipsometry will be described; it is an advanced technique that makes it possible to study the optical properties of a material as a function of the photon energy or wavelength of the incident beam. The organization of the nanocomposites and of the gold nanoparticles dispersed in the layers will be analyzed by small-angle X-ray scattering and electron microscopy.
The third chapter is devoted to the fabrication of controlled nanostructures with diblock copolymers, using poly(styrene)-block-poly(2-vinylpyridine) (PS-block-P2VP). We aim to organize features at scales far below the wavelength (typically < λ/10). Their characteristic size and morphology will be determined by small-angle X-ray scattering and by transmission and scanning electron microscopy. Knowing how to control the structural sizes of the lamellae fabricated from PS-block-P2VP (i.e. the thickness of the sample and the thickness of the layers) is required to properly achieve the designed nanostructure.

In the fourth chapter, we study two different ways of fabricating gold nanocomposites. The aim of this chapter is to find a way of selectively organizing plasmonic gold nanoparticles in one of the blocks of the lamellar diblock copolymer. The mechanism of the gold loading and the limitations of the strategy are also discussed. Furthermore, the shape and size of the gold nanoparticles are analyzed by small-angle X-ray scattering, and we provide a method to determine the volume fraction of gold in the nanocomposite with a quartz crystal microbalance. Knowledge of this structural information will be required to properly analyze the optical responses of the nanocomposites.

The fifth chapter is devoted to optimizing the target structure by manipulating the gold loading process chemically. The self-assembly and loading process of chapter four leads to irregular gold particles on the surface of the films.

The aim of this chapter is to find a suitable post-treatment to achieve "better films" for the analysis of the optical properties. Furthermore, the influence of the reducing agent, of the solvent used and of thermal annealing is discussed.

In the sixth chapter, we choose several samples to study how the structure influences the optical responses. Based on spectroscopic ellipsometry data, samples with different thicknesses, layer thicknesses and gold volume fractions are analyzed. The optical indices and the real and imaginary parts of the permittivity are extracted with various models based on the ellipsometry data. More particularly, the influence of the volume fraction of gold in the nanostructure is discussed, and whether the target structure can be a hyperbolic metamaterial is also discussed on the basis of isofrequency dispersion relations.
Chapter I Context and state of the art
The aim of this PhD thesis is to study methodologies for the fabrication of nanocomposites of block copolymers and gold nanoparticles, to control their nanostructure, and to make the link between the nanometric structure of these self-assembled materials and their optical properties. Such structures can be used as optical metamaterials, which are envisioned as breakthroughs in the near future: they are artificial materials structurally designed with the purpose of producing extraordinary optical properties. Their potential for revolutionizing optical technologies has been largely recognized.

In this first chapter, we are interested in the context of our study, i.e. research on nanostructured and plasmonic materials for optics, and in hyperbolic metamaterials that may have new optical properties, especially in the visible light domain. We discuss the experimental strategies envisaged for the fabrication of these new materials.
I.1 Metamaterial
I.1.1 Optics of materials
A metamaterial is an artificial composite material whose internal structure generates electromagnetic properties that do not exist in natural media. "Meta" comes from the Greek word μετά, meaning "beyond" 1,2 . In general, a metamaterial is an artificially engineered structure. It consists of periodically or randomly distributed structured elements whose size and spacing are much smaller than the wavelength (λ) of the electromagnetic (EM) waves. In order to treat the metamaterial as an effective medium and avoid diffractive effects, the length scale of the structures must be much smaller than the wavelength of the EM wave, generally α < λ/10, where α is the characteristic scale and λ is the wavelength of light. In such systems, it is the sub-wavelength features that control the macroscopic electromagnetic properties. Over the past decades, metamaterials have attracted wide attention mostly because of their applications, including subwavelength imaging [3][4][5][6] , hyperlenses 6,7 and optical cloaking 8,9 . Recent advances in nanofabrication technologies allow the production of metamaterial systems in which the dielectric permittivity and magnetic permeability tensors can be designed and engineered at will. This gives the ability to control the material response: metamaterials are made from assemblies of multiple elements fashioned from composite materials such as metals or dielectrics.

The materials are usually arranged in repeating patterns, at scales that are smaller than the wavelengths of the waves they influence. Metamaterials derive their properties not from the properties of the base materials, but from their designed structures. Their precise shape, geometry, size, orientation and arrangement give them their smart properties, capable of manipulating electromagnetic waves by blocking, absorbing, enhancing or bending them, to achieve benefits that go beyond what is possible with conventional materials. Among these realizations, one class of metamaterials is of particular interest to us: hyperbolic metamaterials, which are introduced in more detail in Section I.2.
Figure I. 1 Left is a schematic of an electromagnetic wave, which consist of an oscillating electric field and an oscillating magnetic field. Right is the electromagnetic spectrum, and the position of the visible wavelengths region.
A material is characterized by its complex refractive index 10 , noted ñ (Equation 1-1) and defined from Maxwell's equations in the material.
𝑛 ̃(𝜆) = 𝑛(𝜆) + 𝑖 𝑘(𝜆)
where n is the real part of the complex refractive index, also called the optical refractive index. The imaginary part is the absorption coefficient, noted k, which represents the energy loss of an electromagnetic radiation passing through the medium. 11 From Equation 1-1, we can see that the complex refractive index ñ depends on the wavelength λ of the EM wave (Figure I. 1). If the complex index is real at some wavelength, the electromagnetic wave passes through the medium without being absorbed (k(λ) = 0) and the medium is transparent at this wavelength. For all media, the complex refractive index ñ(λ) can be described by two parameters: the complex electrical permittivity ε(λ) and the complex magnetic permeability μ(λ), which describe the response of the medium (see Equation 1-2) to electrical and magnetic excitation, respectively. The dielectric permittivity ε(λ) and magnetic permeability μ(λ) are the two fundamental parameters characterizing the EM properties of a medium 12,13 . Materials can be classified (see Figure I. 2(a)) according to the signs of the real parts of the electrical permittivity ε(λ) and magnetic permeability μ(λ). Quadrant I, which includes most dielectric materials, covers materials with simultaneously positive permittivity and permeability (DPS stands for "double positive"). Quadrants II and IV correspond to media in which only one of the two parameters is negative: ε-negative (ENG) media, such as metals, ferroelectric materials and doped semiconductors, and μ-negative (MNG) media, such as some ferrite materials. A very interesting part is quadrant III, corresponding to double negative media (DNG), which have both a negative permittivity and a negative permeability, and which cannot be found in nature. The metamaterials concept originates from the 1960's, when Veselago 14 initially discussed the propagation properties of a material with simultaneously negative ε and μ, and showed that it would have a negative refractive index. Interest for such materials actually arose only after Pendry proposed, in the early 2000's, that they could produce perfect lenses 15 as well as cloaking devices 16 . DNG are also called left-handed media or negative refractive index media. DNG present many other interesting phenomena [17][18][19][20][21][22][23] : anomalous refraction, reversed Doppler shift, inverse Cherenkov radiation, opposite group velocity and phase velocity, etc.
When light travels in an ordinary material medium, it is typically slowed down. The amount of slowing depends upon the properties of the medium, and the factor n by which the speed of light is reduced is referred to as the refractive index. According to Snell's law [START_REF] Jackson | Classical electromagnetics[END_REF] (Equation 1-3), when light is incident from a positive-index medium (n1) onto another positive-index medium (n2), the light ray is deflected at an angle θ2, where θ1 is the angle between the incident light ray and the axis perpendicular to the interface (see Figure I. 3, left):

Equation 1-3    n1 sin θ1 = n2 sin θ2

When light travels in a negative-index medium, Snell's law is still satisfied. When a light beam is incident from a positive-index material onto a negative-index one, the refracted beam lies on the same side of the normal as the incident beam. In other words, the refracted light bends "negatively" at the interface (see Figure I. 3, right).
Figure I. 3 Schematic of refraction of ordinary material medium (left) and negative material medium (right)
To obtain negative refraction in a homogeneous and isotropic material, the refractive index must be negative, which requires that the electrical permittivity and the magnetic permeability be simultaneously negative. For an anisotropic material with a complex refractive index, negative refraction can be obtained without necessarily having both negative permittivity and permeability. Such a material is called a hyperbolic metamaterial and is discussed in Section I.2.
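As a simple numerical illustration of this "negative bending", the short sketch below applies Snell's law with an ordinary incidence medium and a hypothetical negative-index medium; the values n1 = 1.0, n2 = -1.5 and the 30° incidence angle are arbitrary choices used only for illustration.

import numpy as np

n1, n2 = 1.0, -1.5          # hypothetical indices: ordinary medium -> negative-index medium
theta1 = np.radians(30.0)   # angle of incidence

# Snell's law: n1 sin(theta1) = n2 sin(theta2)
theta2 = np.arcsin(n1 * np.sin(theta1) / n2)
print(f"refraction angle: {np.degrees(theta2):.1f} deg")
# The negative sign means the refracted ray lies on the same side of the normal
# as the incident ray, i.e. the light bends "negatively" at the interface.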
I.1.2 Basic theories of EM propagation
Maxwell's equations describe all electromagnetic (EM) phenomena. In vacuum and in the absence of sources they read (Equations 1-4 to 1-7):

∇·E = 0,   ∇·B = 0,   ∇×E = -∂B/∂t,   ∇×B = ε0 μ0 ∂E/∂t,

where E is the electric field, B is the magnetic induction, and ε0 and μ0 are the permittivity and permeability of vacuum, respectively.
In a material medium, the equations additionally include the electric charge density ρ, the current density J = σE, where σ is the conductivity, and the relative permittivity and permeability of the medium. Combining Maxwell's equations, the wave propagation equations can be derived (Equation 1-12 and Equation 1-13), with the wavevector k (k² = ε ε0 μ μ0 ω²), which defines the optical index of the medium ñ = √(εμ). More details on the specific propagation properties of hyperbolic metamaterials are given in Chapters I.2 and VI.3. [START_REF] Sun | Applications of Molecular Spectroscopy to Current Research in the Chemical and Biological Sciences[END_REF]
I.2 Hyperbolic metamaterial
I.2.1 Definition
Figure I. 4 (a) Schematic representation of an electron oscillation propagating along a metaldielectric interface. λSPP is the wavelength of surface plasmon-polariton, which is caused by the charge density oscillations and associated to an electromagnetic field. The right coordinate shows the exponential dependence of the electromagnetic field intensity on the distance away from the interface (from Wikipedia) (b) Illustration of the excitation of localized surface plasmon
After the pioneering works of Veselago and Pendry, various nanostructures 12 were demonstrated to exhibit optical properties unobtainable from natural media. Among the various metamaterials proposed in the past decades, hyperbolic metamaterials (HMMs) have received significant attention, due to their ability to provide subwavelength-resolution imaging, as well as to manipulate the near field of a light emitter or a light scatterer.
The near field is defined as the region at distances from the object shorter than the wavelength of light in vacuum. The propagation properties of HMMs originate from the coupling of several surface plasmon modes in their volume [START_REF] Yang | Period reduction lithography in normal UV range with surface plasmon polaritons interference and hyperbolic metamaterial multilayer structure[END_REF][START_REF] Schasfoort | Handbook of surface plasmon resonance[END_REF][START_REF] Maier | Plasmonics: fundamentals and applications[END_REF][START_REF] Pitarke | Theory of surface plasmons and surface-plasmon polaritons[END_REF] . They present the interesting particularity of having non-natural light propagation properties without requiring any engineering of the permeability μ, which is notably more complex than playing on the permittivity ε.

A surface plasmon is a collective oscillation of electrons, which is confined to the interface between a metal and a dielectric and propagates along this interface (see Figure I. 4(a)).
I.2.2 Optics of anisotropic materials
In an anisotropic medium, the relative permittivity and permeability entering Maxwell's equations take the form of tensors. In the present study we consider nonmagnetic media, so μ is the unit tensor throughout this work. The permittivity tensor is assumed to be diagonal (Equation 1-16):

ε = diag(ε_xx, ε_yy, ε_zz).

In Equation 1-16, the three components generally depend on the wavelength, or equivalently on the angular frequency ω. When ε_xx = ε_yy = ε_zz, the material is isotropic.
When 𝜀 𝑥𝑥 = 𝜀 𝑦𝑦 ≠ 𝜀 𝑧𝑧 , the material is known as uniaxial. When 𝜀 𝑥𝑥 ≠ 𝜀 𝑦𝑦 ≠ 𝜀 𝑧𝑧 , the crystal is known as biaxial.
In order to determine the dispersion relation of light in a medium with ε described by Equation 1-16, we combine the anisotropic Maxwell's equations. For a plane wave of the form E ∝ exp[i(k·r - ωt)], the wave propagation equation (Equation 1-12) factorizes into two terms that describe the behavior of waves of different polarizations: polarization in the (x,y) plane for the first term (ordinary, s-polarized waves) and polarization in a plane containing the z direction for the second (extraordinary, p-polarized waves) 34 . When set to zero, the two terms correspond respectively to a spherical and an ellipsoidal isofrequency surface in k-space. The term "hyperbolic" describes uniaxial materials in which the anisotropy is extremely strong, when one of the two components ε_// and ε_z is negative and the other is positive, which is extremely rare [START_REF] Esslinger | Tetradymites as natural hyperbolic materials for the nearinfrared to visible[END_REF] in natural materials in the visible wavelength range. Such materials behave optically as a metal in one direction and as a dielectric in another direction. The first term then either defines an isotropic propagation or vanishes because it is algebraically impossible, while the second term results in a hyperboloidal isofrequency surface, which defines a hyperbolic medium. The latter can be written as:

(k_x² + k_y²)/ε_z + k_z²/ε_// = ω²/c²

There are two choices for achieving the product ε_// ε_z < 0. If ε_// > 0 and ε_z < 0, the hyperbolic medium is called dielectric hyperbolic (with reference to its behavior in the xy plane), or Type I hyperbolic; its isofrequency surfaces and dispersion relation are illustrated in the corresponding figure.
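The difference between the closed (elliptic) and open (hyperbolic) isofrequency surfaces can be visualized numerically. The minimal sketch below solves the extraordinary-wave dispersion relation above for k_z as a function of k_x (with k_y = 0), once for an ordinary anisotropic dielectric and once for a Type I hyperbolic case; the permittivity values and the wavelength are arbitrarily chosen for illustration.

import numpy as np

wavelength = 540e-9                       # illustrative wavelength (m)
k0 = 2 * np.pi / wavelength               # free-space wavevector, omega/c

def kz_extraordinary(kx, eps_par, eps_z):
    """k_z from kx^2/eps_z + kz^2/eps_par = k0^2 (k_y = 0); complex if evanescent."""
    kz2 = eps_par * (k0**2 - kx**2 / eps_z)
    return np.sqrt(kz2.astype(complex))

kx = np.linspace(0, 5 * k0, 6)            # includes wavevectors beyond the free-space light line

# arbitrarily chosen permittivities for illustration
elliptic   = kz_extraordinary(kx, eps_par=2.5, eps_z=4.0)    # ordinary anisotropic dielectric
hyperbolic = kz_extraordinary(kx, eps_par=2.5, eps_z=-4.0)   # Type I hyperbolic (eps_z < 0)

for x, ke, kh in zip(kx / k0, elliptic / k0, hyperbolic / k0):
    print(f"kx/k0={x:4.1f}  elliptic kz/k0={np.round(ke, 2)}  hyperbolic kz/k0={np.round(kh, 2)}")
# In the elliptic case kz becomes imaginary (evanescent) at large kx, whereas in the
# hyperbolic case kz stays real: high spatial frequencies can propagate, which is the
# basis of the hyperlensing and super-resolution properties discussed below.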
I.2.3 Application of HHMs and super-resolution
As we described previously, HHMs are materials with extreme anisotropy, which leads to specific light propagation. Such structures have displayed a variety of promising properties, generating a surge in the activity on the topic over the recent years: negative refraction [START_REF] Yao | Optical negative refraction in bulk metamaterials of nanowires[END_REF] , super-resolution [START_REF] Zhang | Superlenses to overcome the diffraction limit[END_REF] , sub-wavelength modes [START_REF] Narimanov | Optics: beyond diffraction[END_REF] , perfect multi-band absorption [START_REF] Sreekanth | A multiband perfect absorber based on hyperbolic metamaterials[END_REF] , optical topological transition [START_REF] Krishnamoorthy | Topological transitions in metamaterials[END_REF] , epsilon-near-zero light propagation [START_REF] Kurilkina | Features of hyperbolic metamaterials with extremal optical characteristics[END_REF] , spontaneous emission and Purcell effect enhancement [START_REF] Cortes | Quantum nanophotonics using hyperbolic metamaterials[END_REF][START_REF] Jacob | Engineering photonic density of states using metamaterials[END_REF][START_REF] Sreekanth | Large spontaneous emission rate enhancement in grating coupled hyperbolic metamaterials[END_REF][START_REF] Jacob | Broadband Purcell effect: Radiative decay engineering with metamaterials[END_REF][START_REF] Lu | Enhancing spontaneous emission rates of molecules using nanopatterned multilayer hyperbolic metamaterials[END_REF] , thermal emission engineering [START_REF] Biehs | Hyperbolic metamaterials as an analog of a blackbody in the near field[END_REF] including super-Planckian regimes [START_REF] Guo | Broadband super-Planckian thermal emission from hyperbolic metamaterials[END_REF] , or biosensing [START_REF] Kabashin | Plasmonic nanorod metamaterials for biosensing[END_REF][START_REF] Sreekanth | Extreme sensitivity biosensing platform based on hyperbolic metamaterials[END_REF] .
One very attractive property is the so-called "super-resolution" (i.e., sub-diffraction imaging), because super-resolution could benefit several technological fields.

Optical microscopy is indeed an essential tool in many fields such as microelectronics, biology and medicine, but it is hindered by the intrinsic diffraction limitation, as it cannot reach a resolution better than half the wavelength of light. The 'diffraction limit' in optics is well known: whenever an object is imaged by an optical system, features smaller than half the wavelength of the light are unresolved in the image. This loss of information is caused by the fact that the fine features of the object are carried by components with high spatial frequency (large wavevectors), which become evanescent in a usual material. HMMs offer a completely new paradigm to tackle the problem.
When a beam of light hits an object, the object information is transferred to propagating waves and evanescent waves. The propagating waves carry the large-feature information, which can reach the far field, whereas the evanescent waves carry the fine details, which are confined to the near field. Due to the 'diffraction limit', if the scattered light is collected by a conventional glass lens, the evanescent waves are permanently lost before reaching the image plane [START_REF] Zhang | Superlenses to overcome the diffraction limit[END_REF] . Near-field scanning techniques [START_REF] Dürig | Near-field optical-scanning microscopy[END_REF] and fluorescence-based imaging methods [START_REF] Klar | Fluorescence microscopy with diffraction resolution barrier broken by stimulated emission[END_REF] have proposed and validated ways to circumvent this limitation and have brought optical microscopy into the nano-world, with the fascinating and revolutionary goal of visualizing the pathways of individual molecules inside living cells [START_REF] Boettiger | Super-resolution imaging reveals distinct chromatin folding for different epigenetic states[END_REF] . Similar intrinsic resolution limitations also hinder optical lithography, one of the most important tools of the semiconductor industry, which is ubiquitous in our societies.
The principle of subwavelength imaging has been demonstrated in several published works. For example, an extremely thin Ag slab can act as a superlens, which enhances evanescent waves via resonant excitation of surface plasmons 40,56 .
I.2.4 Achievement of HHMs
HHMs are considered amongst the most promising metamaterials, because of their ability to provide a multi-functional platform [START_REF] Ferrari | Hyperbolic metamaterials and their applications[END_REF] to reach different meta-properties.
There are two main approaches to achieving the desired hyperbolic isofrequency surface using metamaterials. 1) The first type consists of lamellar superlattices of deeply subwavelength alternating metallic and dielectric layers. 2) The second type consists of lattices of metallic nanowires embedded in a dielectric matrix, whose response is governed by localized surface plasmons [START_REF] Maier | Plasmonics: fundamentals and applications[END_REF][START_REF] Raether | Surface plasmons on smooth and rough surfaces and on gratings[END_REF] . Both of these systems have been studied theoretically and experimentally [START_REF] Ferrari | Hyperbolic metamaterials and their applications[END_REF][START_REF] Poddubny | Hyperbolic metamaterials[END_REF][START_REF] Wood | Directed subwavelength imaging using a layered metal-dielectric system[END_REF][START_REF] Zhukovsky | Physical nature of volume plasmon polaritons in hyperbolic metamaterials[END_REF] .
Figure I. 7 Schematics of 2 types of HMMs (a) multilayer of deeply subwavelength alternating metallic and dielectric layers (b) lattice of metallic nanowires embedded in a dielectric matrix with subwavelength characteristic dimensions 34
These 2 designs can achieve the required extreme anisotropy and can be tuned to be HHMs in all wavelength ranges from the UV, visible and near-IR to the mid-IR with the appropriate choice of metal and metallic filling fraction, which can be shown using an effective medium theory as follows.
I.2.4.1 Effective medium theory for multilayer structures 41,58
Effective medium theories relate the permittivity of a composite medium to the permittivities of the constituents, with some degree of structural information included in the relation. In the case of multilayer structures, a simple relation has been proposed
and is valid if the thickness of the layers is much smaller than the operating wavelength, typically smaller than λ/10. Then, if the relative permittivities of the dielectric and metal layers are ε_d and ε_m, respectively, the uniaxial components of the dielectric tensor, ε_// and ε_z, are given by Equation 1-24. According to Equation 1-24, the wavelength and strength of the plasmon resonance given by ε_// can be tuned through the fill fraction of the metal.
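A minimal numerical sketch of these mixing rules is given below. It assumes the standard parallel/series effective-medium expressions for a metal/dielectric stack, ε_// = f ε_m + (1 - f) ε_d and ε_z = [f/ε_m + (1 - f)/ε_d]⁻¹, with f the metal fill fraction, and uses a simple Drude permittivity for the metal; the numerical parameters (gold-like plasma energy and damping, ε_d = 2.4) are illustrative assumptions, not fitted values from this work.

import numpy as np

# illustrative Drude parameters for a gold-like metal (assumed values)
eps_inf, hw_p, hgamma = 9.0, 9.0, 0.07      # dimensionless, eV, eV
eps_d = 2.4                                  # dielectric layer (e.g. a polymer, assumed)

def eps_drude(hw):
    return eps_inf - hw_p**2 / (hw**2 + 1j * hgamma * hw)

def multilayer_emt(hw, f):
    """Uniaxial effective permittivity of a metal/dielectric stack (layers << wavelength)."""
    eps_m = eps_drude(hw)
    eps_par = f * eps_m + (1 - f) * eps_d                 # in-plane (ordinary) component
    eps_z = 1.0 / (f / eps_m + (1 - f) / eps_d)           # out-of-plane (extraordinary) component
    return eps_par, eps_z

hw = np.linspace(1.2, 3.0, 10)               # photon energy in eV (~410-1030 nm)
for f in (0.1, 0.3, 0.5):
    eps_par, eps_z = multilayer_emt(hw, f)
    hyperbolic = (eps_par.real * eps_z.real) < 0
    print(f"f = {f}: hyperbolic at", np.round(1240.0 / hw[hyperbolic]), "nm")

Increasing the fill fraction f shifts and widens the spectral window where one tensor component becomes negative, which is the tunability mentioned above.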
Such systems have been fabricated by electron-beam or sputter deposition in vacuum of both the metallic and dielectric constituents. One limitations of these deposition techniques is that there is often a limit to the number of periods that can be stacked while preserving an actual superlattice structure, which may be a problem to achieve bulk samples. However, high number may not be necessary, since as little as six layers were shown to achieve an effective hyperbolic behavior by using an electronbeam evaporator [START_REF] Ishii | Subwavelength interference pattern from volume plasmon polaritons in a hyperbolic medium[END_REF][START_REF] Yang | Experimental realization of threedimensional indefinite cavities at the nanoscale with anomalous scaling laws[END_REF] .
I.2.4.2 Effective medium theory for cylindrical nanowires 62,63
Following a similar approach, an equivalent effective medium representation of the nanowire geometry can be given by the generalized Maxwell-Garnett approach; in the case of perfectly aligned cylindrical nanorods, the permittivity tensor components are given by mixing rules involving the metal fill fraction and the permittivities of the wires and of the host. Such structures have been fabricated, for example, by electrodeposition of Ag or Au inside a self-assembled porous alumina (Al2O3) template [START_REF] Yao | fabrication and characterization of indefinite metamaterials of nanowires[END_REF] [65][START_REF] Evans | Growth and properties of gold and nickel nanorods in thin film alumina[END_REF][START_REF] Sulka | Highly ordered anodic porous alumina formation by self-organized anodizing[END_REF] . Another method to realize nanowire materials is to use arrays of carbon nanotubes as the metallic domains [START_REF] Nefedov | Electromagnetic waves propagating in a periodic array of parallel metallic carbon nanotubes[END_REF] .
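As a sketch of the corresponding expressions, the generalized Maxwell-Garnett result for perfectly aligned cylindrical wires of fill fraction f in a dielectric host is commonly written as ε_z = f ε_m + (1 - f) ε_d along the wires and ε_// = ε_d [(1 + f) ε_m + (1 - f) ε_d] / [(1 - f) ε_m + (1 + f) ε_d] transverse to them. The short code below evaluates these components with the same illustrative Drude metal as in the multilayer sketch; all numerical values are assumptions used only for illustration.

import numpy as np

eps_inf, hw_p, hgamma = 9.0, 9.0, 0.07      # gold-like Drude parameters (assumed)
eps_d = 2.0                                  # host matrix (alumina-like, assumed)

def eps_drude(hw):
    return eps_inf - hw_p**2 / (hw**2 + 1j * hgamma * hw)

def nanowire_emt(hw, f):
    """Maxwell-Garnett tensor for aligned wires (axis = z) of metal fill fraction f."""
    eps_m = eps_drude(hw)
    eps_z = f * eps_m + (1 - f) * eps_d                                   # along the wires
    eps_par = eps_d * ((1 + f) * eps_m + (1 - f) * eps_d) / (
              (1 - f) * eps_m + (1 + f) * eps_d)                          # transverse to the wires
    return eps_par, eps_z

hw = np.linspace(1.2, 3.0, 10)
eps_par, eps_z = nanowire_emt(hw, f=0.15)
for e, p, z in zip(hw, eps_par, eps_z):
    print(f"{1240/e:5.0f} nm  Re(eps_par)={p.real:6.2f}  Re(eps_z)={z.real:6.2f}")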
The chemical processes and the self-organization mechanisms, through which cylindrical porous templates are manufactured and used for nanowire HHMs fabrication, are considered easier and cheaper than the processes used to produce multilayer HHMs. In this thesis, we focus on the development of alternative routes for multilayer HHMs fabrication, possibly allowing large sample production and low cost.
I.2.4.3 Fabrication of multilayer HHMs
One can define two main families of fabrication methodologies to achieve 3-D nanostructured materials, usually called top-down and bottom-up. Top-down refers to methodologies similar to sculpting or machining matter, like lithography or printing. This is very efficient to produce well-defined patterns on surfaces, but is more difficult to apply to bulk materials. In particular, nanolithography has been successful in manufacturing nanostructured surfaces, evidencing meta-material properties, such as hyperlensing 15 and cloaking 16 , at wavelengths larger or close to the visible domain [START_REF] Soukoulis | Past achievements and future challenges in the development of three-dimensional photonic metamaterials[END_REF] .
Bottom-up refer to constructions from elementary molecular or supramolecular bricks into a design assembly. In particular the use of chemistry and self-assembly of metallic nanoparticles, acting as plasmonic resonators, into dense ordered structures, was anticipated as a promising 'bottom-up' fabrication route [START_REF] Baron | Self-assembled optical metamaterials[END_REF][START_REF] Gong | Self-assembly of noble metal nanocrystals: Fabrication, optical property, and application[END_REF] , especially in order to produce large-scale, 3D and tunable metamaterials. We develop below a particular combination of chemical synthesis and mechanisms of self-assembly, which can be used to produce materials with both nano-structuration and anisotropy, and presenting hyperbolic propagation properties in the visible or infrared ranges.
I.3 Self-assembly of block copolymers
I.3.1 Definitions
I.3.1.1 Polymer
A polymer is a macromolecule consisting of a repeated sequence of the same units, called segments, each bound to its neighbors by covalent bonds. Generally, the polymer chains are entangled and therefore in a disordered state. A polymer may be natural or chemically synthesized, and is characterized by its degree of polymerization N, corresponding to the number of segments in the polymer [START_REF] Grulke | Polymer handbook[END_REF][START_REF] Bates | Block copolymer thermodynamics: theory and experiment[END_REF] .

Its molar mass M is equal to M = N·Msegment. Polymer synthesis routes generally lead to macromolecules which have a size distribution. More precisely, the number-average molar mass Mn and the weight-average molar mass Mw (see Equation 1-27) are defined from Ni and Mi, the number and the molar mass of the chains of species i, respectively.
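As a reminder of the usual definitions behind Equation 1-27 (with Ni taken as the number of chains of molar mass Mi), the small helper below computes Mn = Σ NiMi / Σ Ni, Mw = Σ NiMi² / Σ NiMi and the dispersity Mw/Mn for an arbitrary, made-up distribution.

import numpy as np

def molar_mass_averages(N, M):
    """Number- and weight-average molar masses of a chain population.
    N[i]: number of chains with molar mass M[i] (g/mol)."""
    N, M = np.asarray(N, float), np.asarray(M, float)
    Mn = np.sum(N * M) / np.sum(N)
    Mw = np.sum(N * M**2) / np.sum(N * M)
    return Mn, Mw, Mw / Mn

# made-up example distribution (counts versus molar mass in g/mol)
Mn, Mw, PDI = molar_mass_averages(N=[100, 300, 100], M=[30e3, 40e3, 55e3])
print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, dispersity = {PDI:.3f}")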
I.3.2 Thermodynamics of diblock copolymers
I.3.2.1 Phase transition of the diblock copolymers
A diblock copolymer is a polymer consisting of two types of monomers, A and B, linked together by a covalent bond. In the particular case where the two blocks A and B are completely miscible, the properties of the diblock copolymer are intermediate between those of the two blocks. However, in generally the two blocks are immiscible because of the different chemical nature of the two blocks. This incompatibility causes the system to try to minimize the interfacial energy between the two blocks, which leads to a phase separation of the copolymer into domains with a characteristic period. The most characteristic feature of a block copolymer is the strong repulsion between the two unlike blocks even when the repulsion between segments is relatively weak. As the blocks A and B are linked together by a covalent bond, the size of the domains is restricted to spatial scales of the order of the size of the macromolecules, from 10 to 100 nm. The propagation of these ordered domains continues to the macroscopic scale. This self-assembly process is called the phenomenon of microphase separation.
The monomer segments will segregate and form regular, often periodic structures [START_REF] Bates | Block copolymer thermodynamics: theory and experiment[END_REF][START_REF] Ohta | Equilibrium morphology of block copolymer melts[END_REF][START_REF] Bates | Block copolymers-designer soft materials[END_REF][START_REF] Hamley | The physics of block copolymers. 19[END_REF][START_REF] Leibler | Theory of microphase separation in block copolymers[END_REF] .
The microphase separation is controlled by the free energy of the system ΔG, which results from a competition between entropy ΔS and enthalpy ΔH.

The entropy contribution ΔS, which represents the configurational cost of deforming the chains of each of the blocks, is related to the number N of repeat units.
The enthalpy contribution ΔH, which shows the energy cost of a contact between the two blocks A and B, is linked to the Flory-Huggins interaction parameter χAB which reflects the interaction energy between blocks A and B. The parameter χAB depends on the temperature and the chemical nature of the blocks. It can be written in two ways.
With a network model:
Equation 1-30    χ_AB = (Z / (k_B T)) [ ε_AB - (ε_AA + ε_BB)/2 ]
where kB is the Boltzmann constant, T is the temperature, Z is the network coordination number, and εij the interaction energy between two monomer i and j. For example: 𝜀 𝐴𝐴 is the energy interaction between two A monomers.
With the solubility coefficients:
Equation 1-31    χ_AB = V_seg (δ_A - δ_B)² / (N_A k_B T)
where NA is the Avogadro's constant, A and B are the solubility parameters of segment A and segment B, respectively. Vseg is the mean volume occupied by a segment of the polymer.
The microphase separation is stronger when the product χABN is high, which defines the segregation power of the system. When the interaction parameter increases, the incompatibility between the two blocks increases and the phase separation is naturally favored. Also, the increase of the polymer chains size leads to a relative decrease in the entropy gain and also favors the microphase separation.
When χABN becomes greater than a certain critical value denoted as (χABN)OD, a microphase separation occurs (see Figure I. 9). Below this characteristic value (χN)OD, the system is mixed and described as in a disordered state.
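A minimal sketch of this criterion is given below: it estimates χ_AB from the solubility parameters (Equation 1-31, with V_seg taken here as a molar segment volume), computes the segregation power χN, and compares it with the mean-field critical value (χN)_OD ≈ 10.5 for a symmetric diblock. The numerical values (δ_PS ≈ 18.5 MPa^1/2, δ_P2VP ≈ 20.6 MPa^1/2, V_seg = 100 cm³/mol) are rough, literature-like estimates used only for illustration.

# Flory-Huggins interaction from solubility parameters and segregation power chi*N
R = 8.314            # J / (mol K)
T = 298.0            # K
V_seg = 100e-6       # m^3/mol, molar segment volume (rough assumption)
delta_PS, delta_P2VP = 18.5e3, 20.6e3   # Pa^0.5, rough literature-like estimates

chi = V_seg * (delta_PS - delta_P2VP) ** 2 / (R * T)   # Equation 1-31 with a molar V_seg
print(f"estimated chi_AB ~ {chi:.3f}")

for N in (50, 200, 1000):                # total degree of polymerization
    chiN = chi * N
    state = "ordered (microphase separated)" if chiN > 10.5 else "disordered"
    print(f"N = {N:5d}: chi*N = {chiN:6.1f} -> {state}")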
Figure I. 9 Schematic representation of the states of a diblock copolymer
The microphase separation also depends on the temperature. When the temperature is high, the thermal agitation dominates and the repulsive interactions between the segments of the blocks A and B no longer lead to the separation of the blocks. Beyond a certain temperature, denoted as order-disorder temperature TOD, the system is in a disordered state. On the contrary, when the temperature is lower than TOD, the blocks are segregated into microdomains (Figure I. 9). The order-disorder transition temperature TOD can be measured by scattering of electromagnetic radiation, especially small angle X-ray scattering, or rheological techniques [START_REF] Nojima | Effect of molecular weight of added polystyrene on the[END_REF][START_REF] Zin | Phase equilibria and transition in mixtures of a homopolymer and a block copolymer. 1. Small-angle x-ray scattering study[END_REF][START_REF] Roe | Small-angle x-ray diffraction study of thermal transition in styrene-butadiene block copolymers[END_REF][START_REF] Hashimoto | Time-resolved smallangle x-ray scattering studies on the kinetics of the order-disorder transition of block polymers. 2. Concentration and temperature dependence[END_REF][82][START_REF] Han | Determination of the order-disorder transition temperature of block copolymers[END_REF] .
Figure I. 10 Simplified theoretical phase diagram of a diblock copolymer 78
The order-disorder transition is characterized by the parameters (χN)_OD and T_OD, interconnected by Equation 1-32, with two constants a and b.

Equation 1-32    χ_OD = a + b / T_OD
The order-disorder transition can be modified by varying the degree of polymerization N of the copolymer or the interactions between the blocks A and B by changing their chemical nature. In both cases, this requires the synthesis of a novel diblock copolymer. The simplest option to change the order-disorder transition is to vary the temperature of the system.
The order-disorder transition of a diblock copolymer can also be tuned by the volume fraction of the blocks. For an asymmetric diblock copolymer (fA > 0.5)
Figure I. 13 Theoretical phase diagram of a diblock copolymer and corresponding morphologies

These morphologies can be characterized by transmission electron microscopy (TEM) or small-angle X-ray scattering (SAXS), experimental techniques described in Chapter 2.
In small-angle X-ray scattering, the intensity of the signal scattered by the diblock copolymer is recorded as a function of the scattering wave vector q. When the system presents an organization, peaks appear due to the scattering by the structure. The relative positions of these peaks, defined by the ratio q/q*, are characteristic of the morphology (see Table I. 1 and Figure I. 12). [START_REF] Leibler | Theory of microphase separation in block copolymers[END_REF]

Table I. 1 Peak relative positions (expressed as q/q*) of Bragg reflections for various structures
Structure     Ratio q/q*
Lamellar      1, 2, 3, 4, 5, 6, ...
Spherical     1, √2, √3, √4, √5, √6, ...
Cylinder      1, √3, 2, √7, ...
Gyroid        1, √(4/3), √(7/3), √(8/3), √(10/3), √(11/3), ...
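In practice, identifying the morphology amounts to normalizing the measured peak positions by the first-order peak q* and comparing them with the sequences of Table I. 1. The short helper below does exactly that; the list of measured peak positions is made up, for illustration only.

import numpy as np

SEQUENCES = {
    "lamellar":  np.array([1, 2, 3, 4, 5, 6], float),
    "spherical": np.sqrt([1, 2, 3, 4, 5, 6]),
    "cylinder":  np.sqrt([1, 3, 4, 7]),
    "gyroid":    np.sqrt(np.array([3, 4, 7, 8, 10, 11]) / 3.0),
}

def identify_morphology(q_peaks):
    """Compare measured Bragg peak positions (any consistent unit) with Table I.1 ratios."""
    ratios = np.asarray(q_peaks, float) / q_peaks[0]
    scores = {}
    for name, ref in SEQUENCES.items():
        n = min(len(ratios), len(ref))
        scores[name] = float(np.mean(np.abs(ratios[:n] - ref[:n])))
    return min(scores, key=scores.get), scores

# made-up peak positions (nm^-1), for illustration only
best, scores = identify_morphology([0.20, 0.40, 0.60, 0.80])
print("best match:", best)
print({k: round(v, 3) for k, v in scores.items()})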
Our goal is to produce lamellar systems, which is why, in the following, we focus on the specific case of symmetric diblock copolymers, with fA ≈ fB ≈ 0.5.
I.3.2.3 Symmetric diblock copolymers in different segregation regimes
When an A-B symmetric diblock copolymer is in an ordered phase, it has a lamellar morphology with a period (or bilayer thickness) denoted d (Figure I. 11(a)).

The segregation ability of the system depends on the segregation power χN, and we can distinguish four regimes, as shown in Figure I. 14 [START_REF] Tallet | Nanocomposites plasmoniques anisotropes à base de copolymères à blocs et de nanoparticules d'or[END_REF] .
Figure I. 14 The phase separation regimes for diblock copolymers 84
I.3.2.3.1 Strong segregation regime
When the value of χN is very large (typically larger than 100 for a symmetric diblock copolymer), the blocks of the copolymer are very incompatible and form microdomains with sharp interfaces: the system is in the strong segregation limit. As explained in section 1.3.2.1, the free energy of the microphase-separated diblock copolymer is given by the sum of the entropic contribution required to deform the chains away from their random-coil conformation and the interfacial energy between the A and B domains. According to Semenov's theory [START_REF] Semenov | Kinetiki i Reaktsionnoy Sposobnosti[END_REF] , the two corresponding terms in the free energy are given by Equation 1-33:
Equation 1-33    F / (kT) = (3 p d²) / (8 N a²) + (2 p N a / d) (χ/6)^(1/2)

Equation 1-34    d = 2 N^(2/3) a [ (1/3) (χ/6)^(1/2) ]^(1/3)
where d is the lamellar period, N is the total number of segments of statistical length a, p is the number of polymer chains in the system per unit volume and χ=χAB is the interaction parameter. The equilibrium lamellar period d of a diblock copolymer is obtained by minimizing Equation 1-33 and is given by Equation 1-34.
It can be seen that the lamellar period (or bilayer thickness) d depends rather strongly on the degree of polymerization N but only weakly on the interaction parameter. We will control d by controlling the degree of polymerization N of the polymers in Chapter III.
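A direct numerical reading of Equation 1-34 is given below; the statistical segment length a = 0.6 nm and χ = 0.1 are rough, assumed values chosen only to illustrate the d ∝ N^(2/3) χ^(1/6) scaling exploited in Chapter III.

def lamellar_period(N, a=0.6, chi=0.1):
    """Strong-segregation lamellar period (Equation 1-34), with a in nm."""
    return 2 * N ** (2 / 3) * a * ((1 / 3) * (chi / 6) ** 0.5) ** (1 / 3)

for N in (200, 500, 1000, 2000):
    print(f"N = {N:5d}  ->  d ~ {lamellar_period(N):5.1f} nm")

With these assumed parameters, doubling N increases d by a factor of about 1.6, whereas a tenfold change in χ only changes d by about 45%.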
I.3.2.3.2 Low segregation regime
On the contrary, when the value of χN is low (χN ≈ 10 for a symmetric diblock copolymer), the blocks are mixed and the copolymer is in the vicinity of the order-disorder transition. It is subject to composition fluctuations which organize into ordered microdomains with broad interfaces. In this case, the system is in the weak segregation regime and the lamellar period of the microdomains d varies as N^(1/2).
I.3.3 Experimental shaping
The experimental conditions for obtaining ordered symmetrical diblock copolymers are described below, according to two types of evaporation.
I.3.3.1 Slow evaporation
Figure I. 15 Schematic illustration of the shaping of a symmetrical diblock copolymer by slow evaporation of the solvent
The diblock copolymer may spontaneously self-assemble under certain specific conditions.
I.3.4.2 Rapid evaporation
In some preparation processes, like the spin-coating deposition of thin films, the evaporation of the solvent is very rapid, so the system does not have time to reach its thermodynamic equilibrium and the polymer chains do not have time to organize. In this rapid evaporation case, the system is amorphous, that is, in a disordered state, or in an uncontrolled or intermediate state of organization. In order to organize the system, it is necessary to provide mobility to the chains afterwards, for example by thermal annealing, exposure to a swelling solvent or mechanical shear (see Figure I. 16).
I.3.4 Thermodynamics of symmetrical diblock copolymers in thin films
This is particularly the case for thin films produced by spin coating (rapid evaporation, as described in I.3.4.2). Spin-coating is a technique that forms a film in a few seconds (details are given in Chapter III.1). Because of these rapid evaporation conditions, the equilibrium state of the system must be reached through a post-spin-coating treatment, for example thermal annealing or solvent annealing. Both can induce the alignment of the lamellar phase, which may be perpendicular or parallel to the substrate [START_REF] Bates | Block copolymer thermodynamics: theory and experiment[END_REF][START_REF] Sohn | Perpendicular lamellae induced at the interface of neutral self-assembled monolayers in thin diblock copolymer films[END_REF][START_REF] Morkved | Thickness-induced morphology changes in lamellar diblock copolymer ultrathin films[END_REF] .
I.3.4.1 Alignment perpendicular to substrate
It is possible to obtain a diblock copolymer film aligned perpendicularly to the substrate after annealing in the particular situation where the surface of the substrate is chemically neutral with respect to the two blocks. The interfacial interactions between the substrate and each block are then identical and there is no preferential wetting for the two blocks.
Figure I. 17 TEM cross-view of a PS-PMMA film on a substrate consisting of a random mixture of PS and PMMA [START_REF] Sohn | Perpendicular lamellae induced at the interface of neutral self-assembled monolayers in thin diblock copolymer films[END_REF]

During annealing, which is performed to provide mobility to the chains, the latter form microdomains and orient themselves perpendicularly to the neutral substrate.

However, in most cases one of the blocks of the diblock copolymer has a preferential chemical affinity with air; the lamellae then orient parallel to the upper surface of the film, which leads to films with mixed alignment (see Figure I. 17).
I.3.4.2 Alignment parallel to substrate
As we mentioned in I.3.4.1, in most of cases, for thin films of diblock copolymers, the interfacial energies of each block with the substrate and with air generally induce preferential wetting of one of the blocks at the interface of substrate/film and film/air.
During annealing of the thin film, the lamellar morphology propagates from the two interfaces (air / film and film / substrate) to the interior of the film, and parallel alignment to the substrate of a diblock copolymer film can be obtained. Depending on the thickness of the thin film, the configuration adopted after annealing may be symmetrical, unsymmetrical, or more complex.
I.3.4.2.1 Symmetric and unsymmetrical configuration
In the particular case where the thickness L of a thin film of symmetric diblock copolymer is exactly a quantized value set by the lamellar period d, the film adopts a symmetric or asymmetric configuration.
Figure I. 20 Schematic of a thin film of diblock copolymers aligned parallel to the substrate 91
The type of surface topography and the fraction of the surface occupied by islands or holes depend upon the initial thickness. These holes or islands have micron-size lateral dimensions and can be observed by optical microscopy and AFM on our samples, as will be seen in Chapter III and Chapter IV.
I.3.5 Dispersion in Solvents
If the interaction between a solvent and a polymer is favorable, the polymer segments will rather be surrounded by the solvent than by other segments, and the polymer swells and eventually dissolves. If it is unfavorable, the polymer does not swell, or only very little. In intermediate situations, it may swell but not dissolve. The nature of the interaction may be assessed from the difference in solubility parameters δpolym - δsolv.
What happens when an ordered symmetric diblock copolymer film is put in contact with a neutral or selective solvent? For a diblock copolymer, a neutral solvent is a solvent that has a good chemical affinity with the two blocks. When swelling an ordered diblock copolymer film in a neutral solvent, the solvent is uniformly distributed within the blocks, the structure swells and the interfaces deform strongly until the copolymer is completely dissolved in the solvent [START_REF] Lodge | Solvent distribution in weakly-ordered block copolymer solutions[END_REF] . However, for many block copolymer systems, there is no true neutral solvent [START_REF] Peng | Morphologies in solvent-annealed thin films of symmetric diblock copolymer[END_REF] . A selective solvent is a solvent having a preferential chemical affinity for one of the two blocks of the copolymer. By immersing an ordered A-B diblock copolymer in a solvent selective for block B, the solvent swells the B domains of the ordered phase. This results in an increase of the B layer thickness.
Figure I. 21 Scheme of a lamellar copolymer A-B dispersed in a selective solvent of block B (without thermodynamic equilibrium)
At thermodynamic equilibrium, the interfaces of the system rearrange to minimize its free energy, and this induces morphological transitions, in particular when block A is not soluble in the solvent that is selective for block B. Both toluene and THF are good solvents for PS and moderately good solvents for P2VP. THF is a slightly better neutral solvent for the diblock than toluene. P2VP exhibits swelling in contact with ethanol and methanol. Water is a non-solvent for PS, which is strongly hydrophobic, and a bad solvent for P2VP, the unfavorable interactions between the polymer backbone and water being only slightly compensated by the polar nature of the amine [START_REF] Lin | Solvent penetration into ordered thin films of diblock copolymers[END_REF][START_REF] Noshay | Block copolymers: overview and critical survey[END_REF] .
I.4 Gold nanoparticles
A nanoparticle is a solid particle that is at the nanometer scale in at least one dimension. A well-known historical example is the Lycurgus cup: when the light comes from inside the cup it appears red, and it appears green when the light comes from in front of the cup.

The use of metallic nanoparticles can also be found in the stained-glass windows of certain cathedrals (Bourges, Troyes, ...) from the Middle Ages and in certain colored glasses from the Renaissance period. Today, gold is mainly used in jewelry, currency, electronics and dentistry because of the noble character of bulk gold.
Figure I. 23 Schematic of different shape of Au nanoparticles
In this section, we are particularly interested in chemical synthesis of gold nanoparticles (NPs), which includes reduction of gold salts based on chemical or nonchemical reducers.
Reaction based on chemical reducer
The synthesis of gold nanoparticles by chemical reaction is carried out by reaction between a gold salt, a reducing agent and a stabilizing agent. The gold salt is used as a precursor and is most often chloroauric acid (HAuCl4), in which gold is in the oxidation state (III). A reducing agent is added to reduce the gold ions Au(III) to metallic gold Au(0); it is often sodium borohydride (NaBH4), sodium citrate (Na3Ct) or ascorbic acid. As the reaction continues, the solution becomes supersaturated in gold atoms and the particles begin to aggregate and precipitate. The addition of a stabilizing agent (also called a ligand) makes it possible to control the precipitation and to prevent the uncontrolled growth of the nanoparticles. It should be noted that not all ligands have the same affinity for the gold surface 106 (see Table I. 3).
In the majority of syntheses, the shape and size of the nanoparticles can be controlled depending on many parameters: different chemical reducing agents and their concentration, order of addition of the reactants, stirring speed, pH of the reaction solutions, temperature 100,101,103-105,107,108 .
Reaction based on non-chemical reducer
The gold precursor can also be reduced by non-chemical methods such as heating, ultrasound or irradiation (UV, gamma rays). In this case, the reduction is caused by temperature or by ionizing radiation consisting of a light beam, electrons or gamma rays 109 . By controlling the intensity of the beam, the temperature and the exposure time, the size, shape and dispersion of the gold nanoparticles can be well controlled. These methods have the advantage of reducing the gold salts directly in films, but reports 109,110 claim that the mechanisms of the thermal annealing process are complex, involving several side reactions and leading to various shapes of gold nanoparticles (see Figure I. 24). For example, with thermal annealing in the presence of poly(vinylpyrrolidone), which is one of the most frequently used stabilizers, protective agents or reactants, various shapes of gold NPs can be obtained. In this thesis work, we use poly(vinylpyridine), which plays a role somewhat similar to that of poly(vinylpyrrolidone) in the latter study.
I.4.2 Chemical and physical properties of gold nanoparticles
Gold is a noble metal because of the stability of its chemistry and its properties.
The physical -chemistry parameters of gold are listed in Table I. 4.
Different from the bulk gold, gold nanoparticles exhibit unique properties in color, density, melting point (due to the large ratio of surface atoms to inner atoms), mechanical strength and conductivity (due to increases of surface scattering). In addition, even slight change of the environment will cause the change of physical or chemical properties of gold nanoparticles, which make them a good sensor. Also, gold nanoparticle based catalysis can provide better yield, selectivity and are effective even at low temperatures in many reactions 111 due, in particular, to the high surface area/volume ratio. The optical response of gold is dominated by two electronic contributions:
interband and intraband transitions. The complex electrical permittivity of gold is written as the sum of the interband and intraband contributions according to Equation 1-35, ε̃Au = ε̃Au,interband + ε̃Au,intraband. For long wavelengths, the intraband (Drude) term dominates and the first term ε̃Au,interband is negligible. On the contrary, for short wavelengths below 600 nm, the interband contribution ε̃Au,interband cannot be neglected, which causes the disagreement between the experimental (JC) data and the Drude model.
There is no simple analytical model for the interband component of the gold permittivity, so we use the JC data for reference. Note that the absorption response around 400 nm originates from these interband transitions.
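To make the Drude (intraband) picture above concrete, the following is a minimal numerical sketch of the intraband permittivity of bulk gold; the plasma energy, damping constant and ε∞ used here are typical literature orders of magnitude taken as assumptions, not values fitted to the JC data.

```python
import numpy as np

# Minimal sketch of the Drude (intraband) permittivity of bulk gold, in photon-energy units.
# The parameter values below are typical literature orders of magnitude (assumptions).
E_P = 9.0        # assumed plasma energy of gold (eV)
GAMMA0 = 0.07    # assumed bulk damping (eV)
EPS_INF = 9.5    # assumed constant offset accounting for the interband background

def eps_intraband(E_ev):
    """eps_inf - Ep^2 / (E^2 + i*gamma0*E), i.e. the Drude form of the intraband term."""
    E = np.asarray(E_ev, dtype=complex)
    return EPS_INF - E_P**2 / (E**2 + 1j * GAMMA0 * E)

if __name__ == "__main__":
    for E in (0.8, 1.5, 2.0, 3.0):        # eV, within our ellipsometry range
        lam = 1239.84 / E                 # photon energy (eV) -> wavelength (nm)
        print(f"{lam:6.0f} nm : eps = {eps_intraband(E):.2f}")
```

Below about 600 nm (above roughly 2 eV), this intraband term alone is expected to deviate from the measured permittivity, since the interband contribution is missing.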
I.4.3.2 Absorption of gold nanoparticles
The properties of gold nanoparticles differ significantly from that of bulk. In the early 20 th century, Maxwell-Garnett 116,117 explained many of the scattering effects and color changes, while the size dependent optical properties of metal spheres were explained by Mie 118 . Because they exhibit novel properties, theoretical and experimental studies of metallic nanoparticles have been continuous since then.
Due to the extremely high surface area/volume ratio, gold nanoparticles give rise to a phenomenon that is negligible at the macroscopic scale. When a gold nanoparticle is placed in an electromagnetic field whose wavelength is much greater than the particle size, all the conduction electrons can be considered as a plasma (gas of ionized species of high density). The external field induces a collective oscillation of these electrons at the surface of the nanoparticle. When the frequency of the incident field corresponds to the natural frequency ω0 of these oscillations, resonance occurs. This resonance of gold nanoparticles, which takes place at a visible wavelength and induces a strong absorption, gives gold nanoparticles a red-to-green color depending on the size and shape of the NPs 119-121 .
When gold is confined to the nanoscale, the mean free path of the electrons l′ is also modified. In addition to colliding with the crystal lattice, electrons can also collide with the surface of the particle. The total damping is therefore caused by the bulk collision term γ0 and by the surface collision term γconf (see Equation 1-7).
$l' = \dfrac{l \times l_c}{l_c + A \times l}$
From Equation 1-39 and Equation 1-40, the interband component is not modified by the small-size effect, so that the intraband contribution to the complex permittivity of gold nanoparticles is given by Equation 1-41.
Equation 1-41
$\tilde{\varepsilon}_{AuNPs,\,intraband} = \varepsilon_{\infty} - \dfrac{\omega_p^2}{\omega^2 + i\omega(\gamma_0 + \gamma_{conf})}$
From Equation 1-41, ε̃AuNPs,intraband depends on the correction of the mean free path of the electrons, l_c. The value of l_c depends on the size of the nanoparticles.
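As an illustration of this size effect, the sketch below adds the usual surface-scattering correction γconf = A·vF/R to the bulk damping; the value A ≈ 1 and the Drude parameters are assumptions, and vF is the Fermi velocity of gold.

```python
import numpy as np

# Sketch: size-corrected intraband permittivity of a gold nanoparticle of radius R.
# gamma_conf = A * vF / R (surface scattering); A ~ 1 is an assumed proportionality constant.
HBAR_J_S = 1.0546e-34
EV_J = 1.602e-19
V_F = 1.40e6                              # Fermi velocity of gold (m/s)
A = 1.0                                   # assumed constant
E_P, GAMMA0, EPS_INF = 9.0, 0.07, 9.5     # same assumed Drude parameters as in the previous sketch (eV)

def gamma_conf_ev(radius_nm):
    """Confinement damping (eV) for a sphere of radius R."""
    return HBAR_J_S * A * V_F / (radius_nm * 1e-9) / EV_J

def eps_np_intraband(E_ev, radius_nm):
    E = np.asarray(E_ev, dtype=complex)
    gamma = GAMMA0 + gamma_conf_ev(radius_nm)
    return EPS_INF - E_P**2 / (E**2 + 1j * gamma * E)

if __name__ == "__main__":
    for R in (1.0, 5.0, 25.0):            # nm
        print(f"R = {R:5.1f} nm : gamma_conf = {1e3*gamma_conf_ev(R):6.1f} meV, "
              f"eps(2.4 eV) = {eps_np_intraband(2.4, R):.2f}")
```

The damping, and hence the width of the plasmon band, increases as the radius decreases, consistent with the broadening discussed below for particles smaller than 10 nm.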
Due to their surface plasmon resonance (mentioned above), gold nanoparticles exhibit a strong absorption in the visible range, described by Equation 1-46. In this equation, R is the radius of the nanoparticles, and ε_r,Au and ε_i,Au are the real and imaginary parts of the permittivity of the gold nanoparticles. According to Equation 1-46, the plasmonic peak depends on the refractive index n and permittivity of the surrounding medium, and on the size and permittivity of the gold nanoparticles.
The absorbance shows a strong increase when ε_r,Au = -2ε_m, which corresponds to the surface plasmon resonance of spherical gold nanoparticles.
According to Figure I. 28, this resonance is around 520 nm, so the suspension appears red. The interband contribution is also visible on an optical density measurement, between 400 and 450 nm and is independent from the particles size. In our study, we are interested in gold nanoparticles with a diameter of less than 50 nm. For gold nanoparticles with a diameter of between 10 and 50 nm, the plasmonic peak is around 520 nm in an aqueous medium. As the particle diameter increases, the peak becomes finer and more intense (see Figure I. 30). For nanoparticles with a diameter between 2 and 10 nm, the volume accessible to the conduction electrons is very restricted, which leads to a decrease in the mean free path of the electrons. The damping of the electronic oscillation caused by the collisions between the electrons and the walls of the particle is responsible for the widening of the plasmon resonance.
Thus, for nanoparticles with a diameter of less than 2 nm, the confining effects dominate the signal and the plasmonic peak is totally masked.
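The resonance condition ε_r,Au = -2ε_m can be illustrated with the quasi-static (dipolar) absorption cross-section of a small sphere; this is only a sketch, and the Drude-like permittivity and parameter values used are the same assumptions as in the previous sketches, not measured data.

```python
import numpy as np

# Sketch: quasi-static absorption cross-section of a small gold sphere,
# sigma_abs = k * Im(alpha) with alpha = 4*pi*R^3 (eps - eps_m)/(eps + 2*eps_m).
# Resonance occurs near Re(eps) = -2*eps_m, i.e. in the visible for gold.
E_P, GAMMA0, EPS_INF = 9.0, 0.07, 9.5        # assumed Drude parameters (eV)

def eps_gold(E_ev):
    E = np.asarray(E_ev, dtype=complex)
    return EPS_INF - E_P**2 / (E**2 + 1j * GAMMA0 * E)

def sigma_abs_nm2(E_ev, R_nm=10.0, n_medium=1.33):
    eps_m = n_medium**2
    lam_nm = 1239.84 / E_ev
    k = 2.0 * np.pi * n_medium / lam_nm                       # wavevector in the medium (nm^-1)
    alpha = 4.0 * np.pi * R_nm**3 * (eps_gold(E_ev) - eps_m) / (eps_gold(E_ev) + 2.0 * eps_m)
    return k * alpha.imag

if __name__ == "__main__":
    E = np.linspace(1.5, 3.2, 341)
    spectrum = sigma_abs_nm2(E)
    print(f"peak near {1239.84 / E[np.argmax(spectrum)]:.0f} nm "
          "(roughly 500-520 nm in water with these assumed parameters)")
```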
I.4.4 Gold nanoparticles in polymers
Introduction
In this chapter, we present the structural and optical techniques used during this study. This presentation focuses on the use we have made of these techniques for our specific experimental systems: gold nanoparticles, polymer and block copolymer thin films, self-organized block copolymers and their nanocomposites.
II.1 Spectroscopic Ellipsometry
Ellipsometry is an optical measurement technique, based on the measurement of the change in the polarization state of a light beam caused by the reflection on the material surface or the transmission through the material. From the change in polarization, one can deduce the film thickness and/or the optical properties of the material. The principle was discovered already more than a century ago. However, over the past few decades the technique has progressed rapidly due to the availability of computers and thus simulations of high accuracy. Ellipsometry is very efficient and precise for homogeneous films of thickness between a few nm and a few tens of nm.
It has, however, no submicron lateral resolution and the application of ellipsometry to nanostructured samples is still quite recent, and usually requires the use of effective medium models. 1
II.1.1 General Introduction
The name 'ellipsometry' comes from the fact that polarized light often becomes elliptically polarized upon reflection. The relation between the ellipsometric angles Ψ and Δ and the reflection coefficients for parallel and perpendicular polarizations is given by Equation 2-2:
Equation 2-2
$\rho = \dfrac{r_p}{r_s} = \dfrac{|r_p|}{|r_s|}\, e^{i(\delta_p - \delta_s)} = \tan(\Psi)\, e^{i\Delta}$
where $\tan(\Psi) = |r_p| / |r_s|$ with 0° ≤ Ψ ≤ 90°, and $\Delta = \delta_p - \delta_s$ with 0° ≤ Δ ≤ 360°.
In this manuscript, we will also present the ellipsometric data with the so-called "pseudo-permittivity" <ε>, defined by:
$\langle \varepsilon \rangle = \varepsilon_1 + i\varepsilon_2 = \sin^2(\Phi_0)\left\{1 + \left[\dfrac{1-\rho}{1+\rho}\right]^2 \tan^2(\Phi_0)\right\}$
For a single semi-infinite material of index n1, the reflection coefficients for parallel and perpendicular polarization can be expressed using the Fresnel coefficients of the interface as:
Equation 2-3
$r_p = \dfrac{n_1\cos\Phi_0 - n_0\cos\Phi_1}{n_1\cos\Phi_0 + n_0\cos\Phi_1} = |r_p|\, e^{i\delta_p}$
$r_s = \dfrac{n_0\cos\Phi_0 - n_1\cos\Phi_1}{n_0\cos\Phi_0 + n_1\cos\Phi_1} = |r_s|\, e^{i\delta_s}$
where Φ0 is the angle of incidence, Φ1 the angle of refraction, and n0 the refractive index of the surrounding medium.
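A minimal sketch of how Ψ, Δ and the pseudo-permittivity follow from Equations 2-2 and 2-3 for a bare, semi-infinite substrate is given below; the complex index used in the example is a hypothetical value, not one of our measured materials.

```python
import numpy as np

# Sketch: ellipsometric angles and pseudo-permittivity of a bare semi-infinite medium
# (Equations 2-2 and 2-3), for an ambient of index n0 and an angle of incidence Phi0.
def ellipsometry_semi_infinite(n1, phi0_deg, n0=1.0):
    phi0 = np.deg2rad(phi0_deg)
    sin_phi1 = n0 * np.sin(phi0) / n1                 # Snell's law (complex angle allowed)
    cos_phi1 = np.sqrt(1.0 - sin_phi1**2)
    rp = (n1 * np.cos(phi0) - n0 * cos_phi1) / (n1 * np.cos(phi0) + n0 * cos_phi1)
    rs = (n0 * np.cos(phi0) - n1 * cos_phi1) / (n0 * np.cos(phi0) + n1 * cos_phi1)
    rho = rp / rs
    psi = np.degrees(np.arctan(np.abs(rho)))
    delta = np.degrees(np.angle(rho)) % 360.0
    eps_pseudo = np.sin(phi0)**2 * (1.0 + ((1.0 - rho) / (1.0 + rho))**2 * np.tan(phi0)**2)
    return psi, delta, eps_pseudo

if __name__ == "__main__":
    psi, delta, eps = ellipsometry_semi_infinite(n1=3.9 + 0.02j, phi0_deg=70.0)  # hypothetical index
    print(f"Psi = {psi:.2f} deg, Delta = {delta:.2f} deg, <eps> = {eps:.2f}")
```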
The advantages of SE are as follows: high precision (thickness sensitivity: ∼0.1Å), nondestructive, wide application area, various characterizations including optical constants and film thicknesses. The disadvantages of SE are as follows: indirect characterization, an optical model is needed in data analysis and data analysis tends to be complicated.
II.1.2 Set-up of ellipsometry
In general, the spectroscopic ellipsometry measurement is carried out in the near-UV to near-infrared spectral range.
Photoelastic Modulator
Another important element in the UVISEL SE set-up is the photoelastic modulator.
It is an optical device based on the photoelastic effect of an isotropic transparent material becoming birefringent when subjected to a mechanical stress and able to induce a modulation of the state of polarization. The modulator used in the spectroscopic ellipsometer of this thesis is a silica bar (see Figure II. 4). When stress is applied to the silica bar, its optical properties are modified. In its equilibrium state, the modulator is optically isotropic with one index of refraction, and becomes birefringent (with two indices) under mono-axial stress. A cosine variation of the stress, using a piezoelectric transducer, modulates the birefringent state of the silica bar
(frequency 𝑓 = Ω 2𝜋 = 50𝑘𝐻𝑧).
The phase retardation between the two components of the electric field is then modulated in time as δ(t) = A sin(Ωt), where A is the modulation amplitude.
Two configurations
The UVISEL can acquire data for various azimuth settings of the polarizer, modulator and analyzer angles. The two most used configurations in this thesis are described as follows: they are usually referred to as configuration II for P=± 45º, M=0º, M -A=± 45º, and configuration III for P=± 45º, M=45º, M -A=± 45º, where P=angle of the polarizer, M=angle of the modulator, A=angle of the analyzer, with all angles defined with respect to the incidence plane. Configuration II allows an accurate determination of Δ in the whole range but cannot determine Ψ precisely around 45°.
Configuration III allows accurate determination of Ψ in the whole range but cannot determine Δ precisely at around 90° and 270°.
Through the thesis, the measurement conditions are as follows:
- the incident beam is circular with a radius of 250 μm
- several angles of incidence Φ0 are used among: 50º, 55º, 60º, 65º, 70º, 75º
- λ (photon energy) ranges from 260 to 2060 nm (0.6 to 4.0 eV) with an increment of 2 nm (0.025 eV)
- the integration time is 200 ms for each measuring point
- configuration II (M=0º and A=+45º) has been used for most of the studied samples, and both configurations II and III (M=-45º and A=+45º) have been used for a selection of samples
- all the thin films observed by SE are obtained by spin-coating on silicon wafers of optical index (nSi, kSi) covered with a thin layer of SiO2 (2 nm) of optical index (nSiO2, kSiO2).
In the case of the phase modulated ellipsometry, like the UVISEL setup, we acquire the ellipsometric quantities Is=sin2ΨsinΔ, Ic=sin2ΨcosΔ and Ic'=cos2Ψ, which are extracted from the parts of the reflected intensity synchronized with the modulator oscillations:
Equation 2-5 $I(t) = I_0\,[1 + I_s \sin\delta(t) + I_c \cos\delta(t)]$ in configuration II
Equation 2-6 $I(t) = I_0\,[1 + I_s \sin\delta(t) + I_{c'} \cos\delta(t)]$ in configuration III
The quantities Is and Ic (or Ic') will be the quantities calculated and modeled in the model adjustments on the SE data, in order to extract the material characterizations.
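For reference, the small sketch below simply converts a pair (Ψ, Δ) into the measured quantities Is, Ic and Ic'; the numerical values used are arbitrary.

```python
import numpy as np

# Sketch: measured phase-modulated ellipsometry quantities from the ellipsometric angles.
def measured_quantities(psi_deg, delta_deg):
    psi, delta = np.deg2rad(psi_deg), np.deg2rad(delta_deg)
    Is = np.sin(2.0 * psi) * np.sin(delta)    # acquired in both configurations
    Ic = np.sin(2.0 * psi) * np.cos(delta)    # acquired in configuration II
    Icp = np.cos(2.0 * psi)                   # acquired in configuration III
    return Is, Ic, Icp

print(measured_quantities(30.0, 120.0))       # arbitrary example values
```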
II.1.3 Determination of thickness and optical properties of a simple film
We consider here the case of a single sample film on the substrate. The corresponding optical model is sketched in Figure II. 5.
Figure II. 5 Model of samples' determination
As a reminder, the complex index of a material is written according to Equation 2-7, where n is the refractive index and k is the absorption coefficient. For a thin film of polymers or diblock copolymers, the optical properties can be described by several conventional dispersion functions. We use a dispersion function called "New Amorphous" 6 for describing the pure polymers, which is originally derived from the Forouhi-Bloomer function 7 and is shown in Equation 2-8. The new amorphous model works well for amorphous materials exhibiting an absorption in the visible and/or UV range (absorbing dielectrics, semi-conductors, and polymers).
Equation 2-8
$n(\omega) = n_\infty + \dfrac{B\,(\omega-\omega_j) + C}{(\omega-\omega_j)^2 + \Gamma_j^2}$
$k(\omega) = \begin{cases} \dfrac{f_j\,(\omega-\omega_g)^2}{(\omega-\omega_j)^2 + \Gamma_j^2}, & \omega > \omega_g \\ 0, & \omega < \omega_g \end{cases}$
where $B = \dfrac{f_j}{\Gamma_j}\left(\Gamma_j^2 - (\omega_j-\omega_g)^2\right)$ and $C = 2\, f_j\, \Gamma_j\, (\omega_j-\omega_g)$.
In the equations, the term n∞ is the value of the refractive index when ω→∞. 𝑓 𝑗 is the oscillator strength, related to the amplitude of the extinction coefficient peak, Γ 𝑗 is the broadening factor of the absorption peak, 𝜔 𝑗 is the energy at which the extinction coefficient is maximum and 𝜔 𝑔 is the energy band gap, which equals the minimum energy required for a transition from the valence band to the conduction band.
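As an illustration, a short sketch of the "New Amorphous" dispersion of Equation 2-8 is given below; the parameter values are purely hypothetical and only meant to show how n(ω) and k(ω) are generated from (n∞, fj, ωj, ωg, Γj).

```python
import numpy as np

# Sketch of the "New Amorphous" dispersion function (Equation 2-8); parameters are hypothetical.
def new_amorphous(omega, n_inf=1.55, f_j=0.05, omega_j=6.0, omega_g=4.0, gamma_j=1.0):
    B = (f_j / gamma_j) * (gamma_j**2 - (omega_j - omega_g)**2)
    C = 2.0 * f_j * gamma_j * (omega_j - omega_g)
    denom = (omega - omega_j)**2 + gamma_j**2
    n = n_inf + (B * (omega - omega_j) + C) / denom
    k = np.where(omega > omega_g, f_j * (omega - omega_g)**2 / denom, 0.0)
    return n, k

omega = np.linspace(0.6, 4.0, 5)               # photon energies (eV)
print(new_amorphous(omega))
```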
For a nanocomposite thin film made of polymer and gold nanoparticles, no conventional dispersion function can describe the sample. We then have two different methods to describe the optical response of the composite layer.
1) The first method consists in using a mixing law based on an effective medium model, as was explained in chapter IV. Knowing the index of both components, nAu + ikAu and npolym + ikpolym, or equivalently their dispersion functions ε′Au + iε″Au and ε′polym + iε″polym, as analytic functions of the wavelength, we can use an effective medium model to define analytically the optical properties of the composite. For example, we can choose the Maxwell-Garnett effective medium law already presented, and we will use:
Equation 2-9
$\varepsilon_{eff} = \varepsilon'_{polym} + 3f\,\varepsilon'_{polym}\,\dfrac{\varepsilon'_{Au} + i\varepsilon''_{Au} - \varepsilon'_{polym}}{\varepsilon'_{Au} + i\varepsilon''_{Au} + 2\varepsilon'_{polym} - f\,(\varepsilon'_{Au} + i\varepsilon''_{Au} - \varepsilon'_{polym})}$
with $\varepsilon''_{polym} \approx 0$.
2) The second method consists in the build-up of a complex dielectric function by the addition of several units, providing a dispersion function of the appropriate shape for the nanocomposite and including resonances described as Tauc Lorentz 6 (see Equation 2-10) or Lorentz 8 oscillators.
Equation 2-10
$\varepsilon = \varepsilon_1 + i\varepsilon_2$, where
$\varepsilon_1 = \dfrac{2}{\pi}\, P \int_{E_g}^{\infty} \dfrac{\xi\, \varepsilon_2(\xi)}{\xi^2 - E^2}\, d\xi$
$\varepsilon_2 = \begin{cases} \dfrac{1}{E}\cdot\dfrac{A\, E_0\, C\,(E-E_g)^2}{(E^2-E_0^2)^2 + C^2 E^2}, & E > E_g \\ 0, & E \le E_g \end{cases}$
Where:
-E is photon energy;
-Eg is the optical band gap;
-E0 is the peak central energy;
-C is the broadening term of the peak;
-P stands for the Cauchy principal value of the integral.
The real part of the dielectric function 𝜀 1 is derived from the expression of 𝜀 2 using the Kramers-Kronig integration.
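The sketch below evaluates the Tauc-Lorentz ε2 of Equation 2-10 and obtains ε1 by a crude numerical Kramers-Kronig (principal value) integration; the oscillator parameters are hypothetical and the integration scheme is only illustrative.

```python
import numpy as np

# Sketch of a Tauc-Lorentz oscillator (Equation 2-10) with a crude numerical Kramers-Kronig step.
def tl_eps2(E, A=50.0, E0=2.4, C=0.8, Eg=1.8):
    eps2 = A * E0 * C * (E - Eg)**2 / ((E**2 - E0**2)**2 + C**2 * E**2) / np.maximum(E, 1e-9)
    return np.where(E > Eg, eps2, 0.0)

def tl_eps1(E_eval, eps_inf=1.0):
    grid = np.linspace(0.01, 12.0, 6000)                 # integration grid (eV)
    eps2 = tl_eps2(grid)
    eps1 = []
    for E in np.atleast_1d(E_eval):
        integrand = grid * eps2 / (grid**2 - E**2)
        integrand[np.abs(grid - E) < 2e-3] = 0.0         # crude principal-value exclusion
        eps1.append(eps_inf + (2.0 / np.pi) * np.trapz(integrand, grid))
    return np.array(eps1)

E = np.array([1.0, 2.0, 2.4, 3.0])
print(tl_eps2(E), tl_eps1(E))
```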
The model parameters are adjusted by minimizing the following merit function:
$\chi^2 = \min \sum_{i=1}^{n} \left[ \dfrac{(I_s^{th} - I_s^{exp})_i^2}{\Gamma_{I_s,i}} + \dfrac{(I_c^{th} - I_c^{exp})_i^2}{\Gamma_{I_c,i}} \right]$
where Γi is the standard deviation of each data point (generally set to 1) and n is the number of measurement points.
The amount of light scattering by the surface is an additional experimental parameter, which must be determined to assess the quality of the measurements.
Indeed when light is scattered, for instance by surface roughness, it is also depolarized, which induces errors in the ellipsometry analysis. It can also reduce the reflected light intensity. Therefore, the surface roughness of samples has to be rather small.
II.1.4 Dispersion relation of poly(styrene) and poly(2-vinylpyridine)
In this thesis, we are using poly(styrene)-block-poly(2-vinylpyridine) (PS-block-P2VP) diblock copolymers. In order to minimize the number of fitting parameters, films of pure poly(styrene) (PS), pure poly(2-vinylpyridine) (P2VP) and PS-block-P2VP (Mn 25000-25000) were studied first.
II.2 Small angle X-ray scattering
II.2.1 General Introduction
Small angle X-ray scattering (SAXS) [9][10][11] is a technique for obtaining information on the structure (size, shape and spatial organization) of solids, liquids or gels at dimensions between a few nm and a few tens of nm. X-ray scattering is based on the fundamental Bragg relation between the angle of diffraction 2θ and the spacing of a diffracting lattice:
Equation 2-13 $m\lambda = 2d\,\sin\theta$
Where m is an integer; λ is the wavelength of the incident X-ray beam; 2θ is the angle of diffraction (small angle means 2θ < 1º); d is the spacing of the diffracting lattice.
Unlike conventional X-ray diffraction, SAXS can detect large lattice spacings, of the order of hundreds of interatomic distances, which makes it a useful tool for the study of block copolymer morphologies and other macro- or supramolecular systems.
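For orientation, the following small sketch converts a scattering angle into q and into the corresponding real-space spacing d (Equations 2-13 to 2-15), for the Cu Kα wavelength used in this work.

```python
import numpy as np

# Sketch: relations between scattering angle, scattering vector q and real-space spacing d
# for Cu K-alpha radiation (lambda = 1.54 A), following Equations 2-13 to 2-15.
LAMBDA_A = 1.54   # Angstrom

def q_from_angle(two_theta_deg):
    theta = np.deg2rad(two_theta_deg) / 2.0
    return 4.0 * np.pi * np.sin(theta) / LAMBDA_A          # A^-1

def d_from_q(q_inv_A):
    return 2.0 * np.pi / q_inv_A                           # A

for two_theta in (0.2, 0.5, 1.0):                          # degrees (small angles)
    q = q_from_angle(two_theta)
    print(f"2theta = {two_theta:.1f} deg  q = {q:.4f} A^-1  d = {d_from_q(q)/10:.1f} nm")
```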
Figure II. 9 Examples of patterns formed on the detector of Small Angle X-ray Scattering (SAXS)
Like other complementary scattering methods using visible light or neutrons, SAXS is a non-invasive structural technique. Unlike the direct imaging methods, the determination of the sample structure from scattering experiments requires a more complex analysis whether it is fitting to a model or a model-independent analysis. The deduced structural parameters are average values over the entire scattering volume involving a large number of scattering units.
II.2.2 Principle of SAXS
When an X-ray beam encounters a medium, a secondary radiating source is induced by the interaction between the beam and the electrons of the material, producing scattering. In the elastic scattering, the wavelength of the incident beam (wave vector noted by ki) and scattered wave (wave vector noted by ks) are the same.
The magnitudes of the incident and scattered wave vectors are equal, |ki|=|ks|=2π/λ.
When the scattering angle is 2θ, the momentum transfer or scattering vector is q = ks - ki, and its magnitude is |q| = (4π/λ) sin θ (Equation 2-14). In order to extract structural information from I(q), we now present how I(q) is obtained in theory. As a starting point, the total amplitude A(q) of the scattered waves is the sum of the contributions of all the electrons in the sample. When a sample consists of distinct objects, the scattered intensity is proportional to the number of scattering objects N and to their contrast of density relative to the solvent (see Equation 2-17). For a very dilute suspension containing N uniform particles (scatterers) per unit volume, the interparticle interactions can be neglected and I(q) mainly depends on the shape and the size of the particles. The scattering intensity I(q) from scatterers with a scattering length density ρel embedded in a matrix with scattering length density ρmatrix can then be described as follows 12 :
Equation 2-17 $I(\mathbf{q}) = d_N\, \Delta\rho_{el}^2 \int_0^{\infty} N(r)\,[V(r)\,F(\mathbf{q}, r)]^2\, dr$
where dN is the number density, Δρel = ρel - ρmatrix, N(r) is the normalized size distribution function, V(r) is the volume, and F(q,r) is the form factor of the scatterers, containing the information on their shape and size. The form factors can be calculated for some simple scatterer shapes.
When the scatterers are less dilute and present mutual interactions, the scattered signal is the convolution of the previous form factor with the structure factor, containing the information on the inter-particles correlations and the translational order of the system. In periodic structures, the structure factor presents a number of high order scattering maxima, called Bragg peaks, with positions related to the order symmetry.
II.2.3 SAXS performance
In our studies, we use a "Nanostar" set-up from Bruker. The source is a copper anode tube operating at a voltage of 40 kV and a current of 35 mA. An optic consisting of two Göbel mirrors selects the copper Kα line (λ = 1.54 Å). The incident beam is obtained after collimation by three slits; it is circular with a diameter of 700 μm. The sample transmits the X-ray beam to a detector of dimension 22x22 cm² placed at a distance D from the sample. The resolution of the signal is determined by the width at half height of the direct beam without sample and is of the order of 0.003 Å⁻¹. The domains of studied wave vectors q differ according to the configuration, with large or small sample-to-detector distances D giving access to ranges of small or large angles, and define the ranges of dimensions probed in real space. According to Equation 2-14, the scattering vector q in reciprocal space is defined by the scattering angle 2θ. To each scattering vector corresponds a size in real space. Therefore, at large angles (D = 26 cm), for wave vectors between 0.04 and 0.8 Å⁻¹, the corresponding characteristic sizes are very small, of the order of nanometers. At small angles (D = 106 cm), the accessible wave vectors are smaller, between 0.01 and 0.2 Å⁻¹, which corresponds to larger characteristic sizes, of the order of tens of nanometers.
If samples are liquid, they are placed in 1 mm diameter cylindrical capillaries sealed and placed in a sample holder. If samples are solid, they can be directly placed in the sample holder. The holder is then placed in the sample chamber, under vacuum to avoid scattering by air, which is important at small angles. A two-dimensional spectrum is recorded and, if isotropic, is integrated by an angular grouping, in order to obtain the intensity as a function of the scattered vector I(q).
We are going to discuss the use of this technique for the study of the PS-b-P2VP block copolymers in Chapter III and Chapter IV.
II.2.4 SAXS in diblock copolymers
Small Angle X-ray Scattering (SAXS) can be used to analyze the microdomain structure quantitatively. The structure factor is analyzed to extract the order symmetry, and the peak positions in scattering vector q provide the domain spacing d through Equation 2-15, d = 2π/q, in the case of a simple lamellar spacing for diblock copolymers. 12
Table II. 2 Symmetry and scattering maxima relationships
Symmetry | Ratio of q values at the scattering maxima peaks
The ratios of the q values at the scattering maxima are characteristic of each symmetry (for instance, a lamellar phase gives maxima at q0, 2q0, 3q0, ...), and Table II. 2 gives some of them. In the present work, the SAXS technique is employed to determine the structure of the polymers and, in the case of lamellar phases, the bilayer thickness, equal to the lamellar period, in the following chapters.
II.2.5 SAXS used for nanoparticles in solutions
SAXS is also an effective tool for analyzing the size and distribution of nanoparticles in solution. For spherical particles, the scattered intensity I(q) is the product of two components: (1) the form factor P(q), which provides information regarding the mean structural properties of the individual particles (i.e., size and shape), and (2) the structure factor S(q), which provides the positional correlation of the particles.
Equation 2-18
$I(\mathbf{q}) = S(\mathbf{q}) \times P(\mathbf{q})$
The structure factor S(q) is equal to 1 when the particles are dilute (see Equation 2-18). In that case, the analysis of the SAXS spectra reduces to the form factor analysis and gives access to the physical characteristics of the particles. For spherical monodisperse particles with radius R, the form factor can be written as follows (Equation 2-19):
$P(q) = (\Delta\rho\, V_{sp})^2 \left[\dfrac{3\,\left(\sin(qR) - qR\cos(qR)\right)}{(qR)^3}\right]^2$
where $V_{sp}$ is the volume of the spheres and $\Delta\rho = \rho_p - \rho_s$, with $\rho_p$ and $\rho_s$ the scattering length densities of the particles and the solvent, respectively.
For polydisperse particles, the particles size distribution needs to be taken into account, and the form factor is defined as:
Equation 2-20 𝑃(𝑞) = ∫ 𝐷(𝑅)〈|𝐹(𝑞, 𝑅)| 2 〉𝑑𝑅
where R is the radius and D(R) is the size distribution function. In our study, we assume that all the particles are spheres and that the distribution function can be described by a Gaussian function (Equation 2-25):
Equation 2-25
$D(R_0, \sigma) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-(R-R_0)^2 / 2\sigma^2}$
where $R_0$ is the mean radius and $\sigma$ is the standard deviation. The polydisperse form factor can then be written analytically as:
$P_{poly}(q, R_0, \sigma) = \dfrac{\alpha - (\beta\cos 2qR_0 + \gamma\sin 2qR_0)\, e^{-2q^2\sigma^2}}{2q^6}$
where
$\alpha = 1 + q^2(\sigma^2 + R_0^2)$
$\beta = 1 - q^2(\sigma^2 + R_0^2) + 4q^2\sigma^2(1 + q^2\sigma^2)$
$\gamma = (1 + 2q^2\sigma^2)\, 2qR_0$
In our system, we assume that the nanoparticles are dense homogeneous spheres in dilute suspensions.
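As an illustration of this analysis, the sketch below numerically averages the monodisperse sphere form factor over the Gaussian size distribution of Equation 2-25 (a numerical version of Equation 2-20); contrast and number-density prefactors are omitted, and the parameter values are hypothetical.

```python
import numpy as np

# Sketch: numerical average of the sphere form factor over a Gaussian size distribution.
def sphere_amplitude(q, R):
    qR = np.outer(q, R)
    return 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3

def p_poly(q, R0=7.5, sigma=1.0, n_points=201):
    R = np.linspace(max(R0 - 4 * sigma, 1e-3), R0 + 4 * sigma, n_points)
    D = np.exp(-(R - R0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    V = (4.0 / 3.0) * np.pi * R**3
    F2 = (V * sphere_amplitude(q, R))**2       # [V(R) F(q,R)]^2
    return np.trapz(D * F2, R, axis=1)

q = np.linspace(0.02, 0.5, 5)                   # nm^-1 (R0 and sigma given in nm)
print(p_poly(q))
```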
II.3 Electron Microscopy
Direct observations of the samples were performed by electron microscopy.
Scanning Electron Microscopy (SEM) is employed to obtain a global view of the film nanostructure, while the nanoparticles are observed by Transmission Electron Microscopy (TEM).
Electron microscopes 14,15 have electron optical lens systems that are analogous to the glass lenses of an optical light microscope. They are used to investigate the ultrastructure of a wide range of biological and inorganic specimens including microorganisms, cells, large molecules, biopsy samples, metals, and crystals.
II.3.1 Transmission Electron Microscopy
Transmission electron microscopy (TEM) is based on the transmitted electrons (Figure II. 12 ) and allows a magnification of hundreds of thousands of times. A filament generally of tungsten or lanthanum hexaboride is heated in order to produce an electron beam, which is then accelerated by a high voltage of the order of 100 kV. The beam passes through the microscope column under a high vacuum and is focused by magnetic lenses. A thin sample, with a thickness of less than a few tens of nanometers is placed in the electron beam, and the detection of the electrons transmitted through the sample produces an image. Depending on the thickness, chemical nature or electron density of the sample, the electrons are more or less absorbed, which forms the observed image. The limit of resolution of the TEM depends on the wavelength of the electrons; it is of the order of the picometer. Because of the presence of several types of aberration of the lenses, the real resolution is of the order of a few nanometers 14,16 .
Since the TEM image is formed by the transmitted electrons, the dark parts correspond to dense material while the light parts correspond to more transparent material. For example, the dark parts in the TEM images of our samples correspond to the metallic nanoparticles.
II.3.2 Scanning Electron Microscopy
The scanning electron microscope 17,18 present at PLACAMAT in Université de Bordeaux is a JSM 6700F. Most samples were observed under an acceleration voltage of 10.0 kV, a working distance of ca. 8 mm for focusing, and magnifications ranging from 3,000 to 80,000. In back-scattered electron imaging (BSEI), regions containing heavier elements appear brighter in the images. 19 Combining both imaging modes can be a way of determining the number of phases in a material and their mutual textural relationships.
Introduction
In this chapter, we are interested in the fabrication of polymer templates with a lamellar structure and in their structural study, using the characterization techniques detailed in the previous chapter. Indeed, a lamellar multilayer nanocomposite could be one approach to a hyperbolic metamaterial, as detailed in Chapter I.2. In particular, the period size α must be smaller than λ/10, so that the stack can be treated as an effective medium. We would also like to control the optical properties of the material by controlling the structure of the films. In this chapter, we are going to focus on fabricating films of different thicknesses, layers of different thicknesses and different morphologies.
This chapter comprises three parts. The first part will present the film fabrication and thickness control; the second part will present the study of the copolymer morphology; the third part will present the layer thickness study. The spin-coating process can usually be separated into three steps. First of all, the material to deposit must be dissolved in a volatile solvent. In the second step, a small amount of the solution is applied at the center of the substrate. In the third step, the substrate is accelerated up to the desired speed by the spin coater. During this third step, the fluid spreads over the whole substrate because of the centrifugal force, while the volatile solvent evaporates. A film deposits homogeneously on the substrate, except at the edge. 2 Due to the high speed of rotation, droplets form at the edge of the substrate and are flung off. Thus, a small difference in thickness may be found between the center and the fringe at the edge.
III.1 Film preparation by spin-coating
During the spin-coating, several parameters influence the thickness and appearance of the film. They include the characteristics of the coating solution (the solution viscosity, the concentration of the coating material, the volatility of the solvent, etc.), the acceleration and speed of rotation, the time of rotation, and the volume of coating solution used. For each film formation case, we need to optimize the process by varying these parameters in order to obtain a film with the desired thickness and a homogeneous appearance.
The SCS G3 Spin Coater from A KISCO Company was used in this thesis. It provides the access to control rotation speed, spin time, acceleration and deceleration time.
III.1.1.2 Experiment
Material
The diblock copolymers poly(styrene)-block-poly(2-vinylpyridine) (Mn25000-25000, PDI 1.06) were purchased from Polymer Source Inc., denoted as PS25k-P2VP25k. Toluene and tetrahydrofuran (THF) were purchased from Sigma-Aldrich. The polymers, chemicals, and solvent described in this chapter were used without any further purification.
Two types of substrates were employed in this work: silicon wafers and Aclar substrates. Silicon substrates were purchased from Wafer Word inc. (wafer thickness 450±50 μm, P type/Boron, cut 001). Ellipsometric analysis of bare wafers established that the wafer surfaces had a 2 nm silicon oxide layer on them. Aclar substrates (melting point 202 °C) were purchased from Agar Scientific Limited and used for the preparation of samples for TEM imaging, since they can be sectioned without damage with ultramicrotomy knives and are as transparent as glass 3 . Prior to use, the substrates were cleaned by sonication (Branson 1510DTH, AC input 115 V) for 15 min in ethanol, followed by 10 min in water, and dried under a stream of nitrogen.
Substrates were immediately used for the spin-coating.
Film preparation
PS25k-P2VP25k was generally dissolved in toluene to yield 0.8 weight%, 1.5 weight%, 3.0 weight%, 6.0 weight% and 10.0 weight% solutions and stirred overnight at room temperature to ensure complete dissolution.
The spin-coating was accomplished by dropping 55 μL of the polymer solution onto 1 cm² silicon wafers and spinning at 5000 rpm (acc/dec time 3 s) for 30 s at room temperature. This gives homogeneous films.
Measurement
The topography of films is first observed by optical microscopy.
The thickness of the films is measured by variable angle spectroscopic ellipsometry (VASE) in the spectral range [0.6-4.0 eV] in configuration UVISEL II with AOI = 55º, 60º, 65º, 70º and 75º.
III.1.1.3 Results and discussion
After the deposition, the top view of the films was observed by optical microscopy. The film thickness increases with the concentration of the copolymer solution and can be fitted by Equation 3-1, D = 4.868 x² + 18.591 x, where D is the thickness of the film (in nm), x is the concentration of the copolymer solution in weight percent and R² is the coefficient of determination.
As can be seen from the graph and from the value of the coefficient of determination close to 1, the equation is a good fit of the measurements. It can be used for the direct estimation of the thickness of the films from the concentration of PS-P2VP in toluene.
Figure III. 5 Thickness of the films as a function of the concentration of the copolymer solution (PS-P2VP in toluene). The red dots are the experimental data and the black line is the fitting curve described by the equation y = 4.868x² + 18.591x. R² is the coefficient of determination.
In conclusion, the concentration of copolymer in solution controls the thickness of the films. In particular, the thickness of the PS-P2VP films can be anticipated using Equation 3-1. We can control the thickness of the films in the range 20 nm-700 nm.
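To illustrate how such a calibration curve is obtained, the sketch below fits a quadratic law through the origin, as in Equation 3-1, to hypothetical (concentration, thickness) pairs; the data values are invented for illustration and are not our measurements.

```python
import numpy as np

# Sketch: fitting film thickness vs solution concentration with D = a*x^2 + b*x (Equation 3-1).
x = np.array([0.8, 1.5, 3.0, 6.0, 10.0])           # wt% PS-P2VP in toluene
D = np.array([18.0, 39.0, 100.0, 287.0, 673.0])    # nm, illustrative values only

M = np.column_stack([x**2, x])                     # least-squares fit without constant term
(a, b), *_ = np.linalg.lstsq(M, D, rcond=None)
D_fit = M @ np.array([a, b])
r2 = 1 - np.sum((D - D_fit)**2) / np.sum((D - D.mean())**2)
print(f"D = {a:.3f} x^2 + {b:.3f} x   (R^2 = {r2:.4f})")
```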
III.1.2 Effect of different spin-coating conditions on the thickness of the films
The previous section III.1.1 focused on the influence of the concentration of the copolymer solutions. In this section, we study the influence of the spin-coating conditions. Since the spin-coating process involves several parameters (spin speed, volume of solution, and acceleration and deceleration time), the Taguchi optimization methodology was employed to simplify the experiment.
III.1.2.1 Taguchi optimization methodology
This study utilizes the Taguchi optimization methodology (L9 orthogonal array) to optimize the various parameters involved in the spin-coating process (spin speed, volume of solution, and acceleration and deceleration time). The results are statistically analyzed using the variance analysis, referred to as ANOVA [5][6][7] , to determine the percentage contribution of each individual parameter to the response. It is a type of regression analysis, employed in this study to find out the influence of each factor on the thickness of the films. ANOVA is used to calculate the sum of squares (SS), degree of freedom (df), variance, and F value. We first determine the sums K1, K2 and K3 and the mean values K̄1, K̄2 and K̄3 of the measures yi (here the thickness measurements) for level 1, level 2 and level 3 of each parameter, respectively. As examples, for parameter A (volume), K1 = y1+y2+y3 and K̄1 = (y1+y2+y3)/3, because these are the three measurements where the volume is 55 µL (level 1), whereas for parameter B (spin speed), K1 = y1+y4+y7 and K̄1 = (y1+y4+y7)/3. For each parameter, the value Range(K̄), which is the difference between the largest and the smallest values of K̄, is already an indication of the influence of the parameter on the measurements, especially when compared to the value obtained for the blank row, acting like a random selection of measures. We then calculate the total sum of squares SST:
Equation 3-2 $SS_T = \sum_{i=1}^{n} (y_i - \bar{y})^2$, with $T = \sum_{i=1}^{n} y_i$
the sum of squares due to parameter j, SSj:
Equation 3-3 $SS_j = \dfrac{r}{n}\sum_{i=1}^{r} K_i^2 - \dfrac{T^2}{n}$, with $SS_T = \sum_{j=1}^{m} SS_j$
and SS4 = SSe is the sum of squares associated with the blank (error) row. The degrees of freedom are:
Equation 3-4 𝑑𝑓 𝑇 = 𝑛 -1 , 𝑑𝑓 𝑗 = 𝑟 -1
the mean square is MSj = SSj / dfj (Equation 3-5; for example, MSA is the mean square of parameter A), and the variance ratio is:
Equation 3-6 $F_j = \dfrac{MS_j}{MS_e}$
The F ratio is a key criterion used to distinguish the most influential factors. If F is greater than or equal to 90%, the factor has a significant influence on the results. The other terms in the table are discussed in Roy's book 5,6 and will not be detailed in this study.
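A compact sketch of this bookkeeping for an L9 array is given below; the layout of the orthogonal array follows the standard Taguchi L9, and the thickness values are hypothetical.

```python
import numpy as np

# Sketch of the Taguchi / ANOVA bookkeeping (Equations 3-2 to 3-6) for an L9 array.
# Columns: A = volume, B = spin speed, C = acc/dec time, D = blank (error) row.
L9 = np.array([[1,1,1,1],[1,2,2,2],[1,3,3,3],
               [2,1,2,3],[2,2,3,1],[2,3,1,2],
               [3,1,3,2],[3,2,1,3],[3,3,2,1]])
y = np.array([612., 605., 640., 608., 610., 645., 615., 603., 642.])  # hypothetical thicknesses (nm)

n, r = len(y), 3
T = y.sum()
SS = {}
for j, name in enumerate("ABCD"):
    K = np.array([y[L9[:, j] == lev].sum() for lev in (1, 2, 3)])
    SS[name] = (r / n) * (K**2).sum() - T**2 / n
MS = {name: SS[name] / (r - 1) for name in SS}
MS_e = MS["D"]                       # blank row used as the error estimate
F = {name: MS[name] / MS_e for name in "ABC"}
print({k: round(v, 1) for k, v in F.items()})
```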
III.1.2.2 Experiment
"Material" and "film preparation" are the same as described in III.1.1.2, expect for the concentration of polymer solution (a single concentration 6.0 wt. % PS25k-P2VP25k in toluene was used here) and spin-coating conditions. The experiments were carried out according to the experimental conditions as shown in Table III. 3.
Measurement
The topographies of the films are observed by AFM and Optical Microscopy.
The thickness of the films is measured by Variable angle spectroscopic ellipsometry (VASE) with the spectral range [0.6-4.0eV] in configurations UVISEL II and AOI=60º, 65º and 70º.
III.1.2.3 Results and discussion
After the nine designed samples were ready, the samples surface was observed by optical microscope (see Figure III. 6). We can tell that the thicknesses of the nine films are a little different due to the different colors of the surfaces.
Note that the films are not perfectly flat in this series, which is visible from the color stripes they present in the micrographs. We believe this is due to an experimental problem with the solvent used, but we also believe the thickness study presented here is still valid. From the values of Range(K̄) and F, it appears that the volume (parameter A) and the spin speed (parameter B) have a very small influence on the thickness of the films, whereas the acceleration and deceleration time (parameter C) has a strong influence on the thickness of the films, because its F value is 93%.
III.2 Orientation and period size
We showed that the spin-coating process can produce good films of controlled thickness, but the produced films have no controlled nanostructure. This is because the deposition process is too fast to allow the evolution of the copolymer towards its microphase-separated equilibrium nanostructure. In this part, we focus on obtaining controlled lamellar structures in the films. As was explained in Chapter I, block copolymers (BCP) present spontaneous spatial organization depending on the polymerization degree and the composition of the blocks. In our experiments, we restrict the study to symmetric or nearly symmetric diblocks, in order to produce the lamellar morphology. We will then assess how the polymerization degree of PS-block-P2VP influences the obtained morphologies and period size.
III.2.1 Experiment
"Material" is the same as III.1.1.2. MnPS and MnP2VP are the molar mass of the PS and P2VP blocks. N is the degree of polymerization calculated by Equation 3-7, with molar mass of monomer mPS (102.15 g•mol -1 )and mP2VP (103.14 g•mol -1 ).
Samples for electron microscope
Polymers were dissolved in THF to yield 6.0 weight% solutions and stirred overnight at room temperature to ensure complete dissolution. The spin-coating was accomplished by dropping 55 μL (or 65 μL) of the polymer solution onto 1x1 cm² silicon wafers (or 1.5x1.5 cm² Aclar substrates) at 5000 rpm for 30 s at room temperature. This step provided homogeneous films.
Following the deposition, thermal annealing at 180 °C in vacuum (15 hours for samples on silicon wafers, 2 hours for samples coated on Aclar substrates) was used to generate a well-defined periodic structure. Indeed, it is known that the fast solvent evaporation during spin-coating does not allow the copolymer chains to reach the equilibrium phase-separated nanostructures, and that thermal annealing can then provide the chains with the necessary mobility to reach their equilibrium organization.
Samples enhanced by iodine and gold nanoparticles were studied by SEM.
Because the I2 or gold NPs can be selectively located in the P2VP domains, they induce a strong enhancement of the electronic contrast observable in the SEM BSEI mode and make the measurement of the lamellar period more accurate (samples enhanced by iodine were placed in a container with iodine vapor for 1 hour; the gold nanoparticle loading process can be found in the following Chapter IV.2.2). Note that the presence of gold nanoparticles may slightly change the thickness of the layers or films.
The samples are then broken into two pieces, to expose the section of the films, and are ready for SEM observation.
Samples for SAXS
The copolymers were dissolved in toluene or THF to yield 15.0 weight% solutions and were stirred overnight at room temperature to ensure complete dissolution.
Figure III. 8 Scheme of the preparation of the bulk samples of copolymer PS-block-P2VP
Approximately 3 mL of copolymer solution was poured into Teflon molds (Figure III. 8), which were then placed in a glass container with a lid in order to slow down the solvent evaporation, so that the complete drying of the films takes 3-5 days at room temperature. Unlike in the spin-coating process, the slow evaporation used here allows the copolymer chains to keep a significant mobility for an extended period of time and to reach their equilibrium phase-separated nanostructure. The final samples have a thickness of the order of 1 mm. They are then placed in a vacuum chamber at room temperature for one day to ensure that the residual solvent is completely evaporated.
Bulk samples are then ready for SAXS measurements.
III.2.2 Measurement
SAXS, SEM and TEM are used for the morphology study of the diblock copolymers. In order to observe cross-sections of the samples, for the TEM study we use ultramicrotomy to cut the thin films on Aclar substrates into approximately 60 nm thick sections, and for the SEM study the samples on silicon substrates are broken into two pieces so that their edge can be observed.
SAXS
The description of the SAXS principle was given in Chapter II.2.2. Before analyzing the scattered intensities obtained after azimuthal integration of the detector 2D spectra, we corrected the data for background scattering. This background is a combination of scattering by air, by the sample holder (capillary) if any, and of electronic noise, and is usually significant only at high wavevector q where the scattered signal is small. The correction can be done in two ways, direct and indirect. In the direct route, the scattering pattern is first recorded on the detector with a blank sample: empty sample holder or capillary. The recorded scattering pattern then contains the information of the background when there is no sample and can be subtracted from the signal obtained with the sample. In the other route, the background is mathematically corrected, using the two assumptions that the background signal is a constant and that the intensity scattered by the studied samples follows an asymptotic q⁻⁴ trend at high q. Thus, we can write the total measured intensity in the high-q region as:
Equation 3-8 𝐼(𝑞) = 𝐴(𝑞) × 𝑞 -4 + 𝐵
Where I (q) is the total measured scattered intensity;
A(q) is the scattered intensity from the sample, which we want to determine, and which is asymptotically constant at high q;
B is the constant background.
When multiplied by q 4 on both sides, Equation 3-8 transforms into:
Equation 3-9
𝐼(𝑞) × 𝑞 4 = 𝐵 × 𝑞 4 + 𝐴(𝑞)
In Equation 3-9, B represents the slope at high q of an I(q) x q⁴ versus q⁴ plot.
The obtained slope is then subtracted from the measured data to give only the scattering from the sample.
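The sketch below illustrates this background estimation on synthetic data; the synthetic intensity and the chosen high-q cutoff are assumptions made for illustration only.

```python
import numpy as np

# Sketch of the background estimation of Equations 3-8 and 3-9: at high q, a plot of
# I(q)*q^4 versus q^4 is linear with slope B (the constant background), which is subtracted.
def subtract_background(q, I, q_min_highq):
    mask = q > q_min_highq
    x, Y = q[mask]**4, I[mask] * q[mask]**4
    B, intercept = np.polyfit(x, Y, 1)           # slope = constant background B
    return I - B, B

# synthetic test data: Porod-like decay plus a flat background of 2.0 (illustrative only)
q = np.linspace(0.05, 0.8, 200)
I = 1e-3 * q**-4 + 2.0
I_corr, B = subtract_background(q, I, q_min_highq=0.4)
print(f"estimated background B = {B:.3f}")
```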
Ultramicrotomy technique
Because TEM electrons cannot travel across matter over long (>100 nm) distances, the study of the nanometric structures of polymer composites by TEM requires very thin sections prepared by ultramicrotomy. After the resin containing the sample is completely cross-linked, it is cut into the shape of a pencil (see Figure III. 9), and a mm-large, even, flat tip exposing the sample is cut with a glass knife. Once the surface is flat, automatic cutting is performed with a diamond knife (2 mm, knife angle 35º, purchased from Diatome Ltd.) in order to obtain ultrafine sections of the sample embedded in resin with a thickness of 70 nm.
Ultramicrotomy cannot be performed on hard materials and films on silicon wafers cannot be observed with TEM. This is the reason why we use Aclar substrates.
The Ultramicrotomy was performed in the Bordeaux Imaging Center, a service unit of the CNRS-INSERM and Bordeaux University, member of the national infrastructure France BioImaging.
III.2.3 Results
In certain cases, annealing was extended to 3 days to ensure that equilibrium was reached, although there was no significant change in the ordering of the lamellae after 15 h. In order to minimize the surface energy 9 , the P2VP block has a preferential interaction with the polar substrate while the PS block prefers the free interface with air 10 .
This leads to the asymmetric wetting configuration (see Chapter I.3): during the annealing process, the P2VP stays at the substrate interface and the PS stays at the air surface, as can be seen in the following SEM images.
III.2.3.1 Results of electron microscope
Figure III. 10 Section SEM micrographs of a film of PS25k-block-P2VP25k copolymers enhanced by iodine after thermal annealing under (a) SEI and (b) BSEI mode
As we mentioned in Chapter II.3, in TEM images the contrast is dominated by the absorption of the electron beam by the sample, which increases with the electron density, so that metallic nanoparticles appear dark and the polymer appears light. On the contrary, in the SEM images formed by back-scattered electrons (BSEI mode), the dark parts correspond to the polymers and the light parts correspond to the metallic NPs.
Figure III. 11 Section SEM micrographs of films of PS-block-P2VP copolymers of different composition, after thermal annealing and gold impregnation, (the process is explained in Chapter IV.2.2). (a) PS34k-block-P2VP18k (b) PS102k-block-P2VP97k (c) PS25k-block-P2VP25k (d) PS8.2k-block-P2VP8.3k
For pure diblock copolymers, there is no contrast under the electron microscope. For PS106k-block-P2VP75k (fPVP=0.41), it is not easy to obtain lamellar phases parallel to the substrates.
Figure III. 12 Section TEM micrographs of films ofPS-block-P2VP copolymers of different compositions. The sections were prepared by ultramicrotomy of films after thermal annealing and gold impregnation process to increase the contrast of two blocks (see detail in Chapter
IV.2.2)). (a) PS34k-block-P2VP18k (b) PS106k-block-P2VP75k (c) PS102k-block-P2VP97k (d) PS25k-block-P2VP25k (e) PS8.2k-block-P2VP8.3k
The PS block prefers 9,11 the air interface due to its smaller surface energy, while the P2VP stays at the interface between the polymer and the substrate (silicon or Aclar). This can be seen clearly in III. 9.
III.2.3.2 Results of SAXS
Table: Hildebrand solubility parameters (MPa1/2) of toluene, THF, poly(styrene) and poly(2-vinylpyridine) 12
The dry films are set directly in the beam in the sample chamber. The intensity is accumulated for 2.6 h. Based on Equation 3-9, the background B is obtained from the slope of the linear fit of the I(q) x q⁴ versus q⁴ plot in the high-q region. Figure III. 13 shows an example of such data treatment for the case of the PS34k-block-P2VP18k copolymer. The higher-order peaks of I(q) are usually better defined after background subtraction. We compared the SAXS results obtained with copolymers of different polymerization degrees cast from THF or from toluene solutions. The comparison
shows that, in general, samples prepared from THF solutions present better defined peaks than samples prepared from toluene solutions. This could be due to the fact that THF is a slightly more neutral solvent for the diblock copolymer, while toluene is a better solvent for PS than for P2VP 12 , thus swelling the PS domains more and impacting the capacity of the copolymer to reach the equilibrium structure. One of the symmetric samples presents a first-order peak corresponding to a period of 37.2 nm, with a lack of second-order peak probably due to a minimum of the lamellar form factor, as is classical with very symmetrical compositions 13 . Finally, the sample PS8.2k-block-P2VP8.3k presents one well-defined peak, corresponding to a period of 17.0 nm. The absence of higher-order peaks could be due to the fact that the second peak would be near the edge of the q-window, or to the proximity of the sample to the order-disorder transition line for small diblocks. Depending on the publication 13 , the lamellar period can be related to the total polymerization degree as in Equation 3-10, where ā is the average segment size of the monomers, obtained from the segment size of the polystyrene monomer, a_PS = 0.67 nm 14 , that of the poly(2-vinylpyridine) monomer, a_P2VP = 0.71 nm 15 , and their respective volume fractions f_PS and 1 - f_PS (Equation 3-11). N is the degree of polymerization (see Table III. 6) and χ is the Flory-Huggins interaction parameter. In this case ā = 0.69 nm and χ = 0.18 at room temperature 16,17 . In conclusion, we have obtained several lamellar phases with PS-b-P2VP of different total polymerization degrees, when the blocks are equally long (fP2VP ≈ 0.5).
Figure III. 15 Period size from the different measurements and from theory
These phases present lamellar periods over a large size range, so that we can produce lamellar films with a controlled period between 15 nm and 78 nm.
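The measured trend can be compared with the classical strong-segregation scaling d ∝ ā χ^(1/6) N^(2/3), which we assume here to be the content of Equation 3-10; in the sketch below the prefactor is simply calibrated on one symmetric sample (N ≈ 490, d ≈ 37 nm), which is an assumption made for illustration only.

```python
import numpy as np

# Sketch: lamellar period versus total polymerization degree, assuming the scaling
# d = c * a_bar * chi**(1/6) * N**(2/3). The prefactor c is calibrated on a single
# symmetric sample (assumed association: N ~ 490, d ~ 37.2 nm), for illustration only.
a_bar, chi = 0.69, 0.18                 # nm and Flory-Huggins parameter (values from the text)

def period_nm(N, c=1.0):
    return c * a_bar * chi**(1.0 / 6.0) * N**(2.0 / 3.0)

N_ref, d_ref = 490, 37.2                # assumed reference point
c = d_ref / period_nm(N_ref)
for N in (160, 490, 1000, 1940):        # roughly the range of our diblocks
    print(f"N = {N:5d} : d ~ {period_nm(N, c):5.1f} nm")
```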
III.3 Controlling period thickness of lamellar phase by homopolymer addition
As we discussed in the previous section, the lamellar period is set by the polymerization degree of the copolymer; in this section, we investigate the addition of P2VP homopolymer as another way to control the period of the lamellar phase.
III.3.1 Experimental
Material
PS25k-block-P2VP25k (PDI: 1.12) and Poly (2-vinylpyridine) (P2VP, Mn=4500, PDI:
1.04).
Polymer solution
P2VP was mixed at 5.0 weight%, 10.0 weight% and 15.0 weight% into PS25k-block-P2VP25k, and the polymer mixtures were then dissolved in THF to yield 15.0 weight%, 6.0 weight% and 3.0 weight% solutions and stirred overnight at room temperature to ensure complete dissolution.
Samples for SAXS were prepared in the same way as we described in III.2.1" Samples for SAXS".
Samples for SEM were prepared in the same way as described in III.2.1 "Samples for electron microscope", except that 55 μL of the 3.0 weight% and 6.0 weight% solutions were spin-coated on silicon wafers only, with a spin time of 30 seconds, a spin speed of 4500 rpm, and 3 s of acceleration and deceleration time. From the SEM images in Figure III. 17, we can see that as the concentration of P2VP increases from 0 wt% to 5 wt% to 10 wt%, the bilayer thickness clearly increases from 28.1 nm to 31.4 nm to 33.6 nm (measured from the SEM images). Since the resolution of the SEM is not sufficient for thicknesses of less than 10 nm, we have also performed SAXS.
III.3.2 Results
Results of SEM
Results of SAXS
Based on the same analysis process as described above, the SAXS patterns of the mixtures were corrected for background and the peak positions extracted.
Figure III. 18 SAXS results of the mixture of PS-block-P2VP with P2VP in various concentrations in semi-log coordinates. The curves are vertically shifted for clarity.
The layer thickness calculated from q0 is listed in Table III
Figure III. 19 Period size as function of the weight fraction of P2VP in the polymer mixtures
In conclusion, the PS-P2VP diblock copolymers can be organized and aligned into lamellar structures over large areas by thermal annealing. By controlling the concentration of the polymer solution before spin-coating, the film thickness can be well controlled between 30 nm and 800 nm. The bilayer thickness in the films can be controlled by the polymerization degree of the symmetric diblock copolymers, and is a subwavelength size ranging from 17 nm to 70 nm.
Figure IV. 2 PS-PMMA blend with Au-PS; (A) volume fraction of Au is 0.05%; (B) volume fraction of Au is 0.58% 1
There are two main in situ methods we can use. The first one is called the one-step method in this chapter: it consists in first blending the gold precursor and the diblock copolymers in solution and then using the self-assembly properties of the diblock copolymers to achieve the lamellar nanostructure with alternating nanocomposite and polymer layers. The second one is called the impregnation process: it starts from the lamellar template based on the self-assembled diblock copolymers (see Chapter III), impregnates metallic particles inside one of the block domains and then produces the target nanocomposite structure.
In this chapter, we are going to discuss these two methods and find an efficient way to reach the target structure.
IV.1 One-step method IV.1.1 Introduction
In the one-step process, we mix the Au precursor with the diblock copolymers in solution before spin-coating. In the mixed solution, the gold ions from HAuCl4 form gold(III) complexes with the pyridine groups 2 .
After the annealing process (solvent or thermal annealing), we may obtain a nanocomposite structure with lamellar structures. In this study, chloroauric acid (HAuCl4•xH2O) is used as the Au precursor.
IV.1.2 Experimental
Material
The same as described in Chapter III 3.1.1.2 "material". Hydrogen tetrachloroaurate(III) hydrate (HAuCl4·xH2O), purchased from Alfa Aesar, was used without any further purification.
Film preparation
In order to obtain a well-dispersed mixture of the gold precursor and the polymer solution, we dissolved the copolymer and the gold precursor separately and then mixed them together.
Two different polymer solutions were prepared: PS25k-P2VP25k and PS34k-P2VP18k were dissolved in toluene to yield respectively 6.0 weight% and 8.0 weight% solutions, and stirred overnight at room temperature to ensure complete dissolution.
For the gold precursor solution, HAuCl4•xH2O was dissolved in ethanol to yield 3.0 weight% solution.
Two mixed solutions were prepared:
1) 1 volume of gold precursor solution mixed with 6.4 volumes of PS25k-block-P2VP25k solution;
2) 1 volume gold precursor solution mixed with 4.4 volumes of PS34k-block-P2VP18k solution.
All the mixed solutions were stirred overnight at room temperature and protected from light, to obtain well-dissolved solutions, and were then spin-coated onto silicon wafers. The spin time is 30 seconds at a spin speed of 5000 rpm, with 3 seconds of acceleration and deceleration time.
Figure IV. 3 Schematic illustration of the one-step method for Au loading in the films. Mix polymer solution and gold precursor solution to obtain mixed solution. Deposit the mixed solution on silicon wafer by spin coating. Following the deposition, two different annealing processes were used: solvent annealing only or solvent annealing followed by thermal annealing.
After the film coating, two different annealing processes were used:
1) Solvent annealing (8 h in THF, then drying at room temperature) to obtain the lamellar phase.
2) Solvent annealing (8 h in THF, then drying at room temperature) followed by thermal annealing (180 °C in a vacuum oven for 10 hours).
In addition, the solvent annealing was usually followed by one day of slow drying in air. Subsequent removal of any residual solvent is carried out under vacuum for an additional minimum of 4 hours.
Table IV. 1 Sample preparation conditions for the one-step method

Samples   PSm-block-P2VPn Mn (m-n)   Solvent annealing (in THF)   Thermal annealing (180 ºC in vacuum)
OT1       25k-25k                    8 h                          0
OT2       25k-25k                    8 h                          10 h
OT3       34k-18k                    8 h                          0
OT4       34k-18k                    8 h                          10 h

The prepared samples are listed in Table IV. 1.
Sample observations
The upper surface of the samples was observed by optical microscopy at room temperature. For side views of the samples, the films on silicon wafers were broken in half manually and observed by SEM. In conclusion, this one-step method is not efficient for fabricating the nanocomposite lamellar structures, so we consider using the lamellar polymer structures as templates and loading gold afterwards.
IV.1.3 Results
IV.2.2 Experiment
Material used in this study is the same as described in IV.1.2 "Material". Silver nitrate (AgNO3) purchased from Sigma-Aldrich was used without any further purification. The diblock copolymers poly(styrene)-block-poly(vinylpyridine) (PS-block-P2VP) used are listed in Table IV. 2.
Table IV. 2 Diblock copolymers used in this chapter
Film preparation
PS25k-P2VP25k copolymers were dissolved in toluene to yield 0.8 weight%, 1.5 weight%, 3.0 weight%, 6.0 weight% and 10.0 weight% solutions and stirred overnight at room temperature to ensure complete dissolution.
The spin-coating was accomplished by dropping 55 μL of the polymer solution onto 1 cm 2 -silicon wafers and spin it at 5000 rpm with acc/dec time 3s/3s for 30 s at room temperature. This gave homogenous films.
Following the deposition, a thermal annealing at 180 o in vacuum (15 hours for samples on silicon wafers, 2 hours for samples coated on Aclar substrates) was used to generate well-defined lamellar structure.
Au loading process
After we obtained the aligned and organized lamellar phases, we proceed with the following process:
1) We immersed the film in 3.0 wt% HAuCl4 solution in ethanol for 5 minutes following by a gentle rinsing of deionized water several times. 2) The films loaded with HAuCl4 were then immersed into 0.65 wt% NaBH4 solution in water for 30 sec.
We repeat the cycle comprising steps 1) and 2) N times (N is an integer from 0 to 30) to increase the concentration of Au NPs. Figure IV. 7 shows the process of Au loading.
Ag loading process
We used a similar process for loading silver within the films, but we had to change the solvent of the first step, because silver nitrate is not soluble in ethanol. We therefore tried two different solvents to introduce the silver precursor into the films: water and a water/ethanol mixture (1/1 in volume). Starting from the previously aligned and organized lamellar phases, we followed the process below:
1) We immersed the film in 3.0 wt% AgNO3 solution in water or in water/ethanol
(1/1 in volume) for 5 minutes and then rinsed it with deionized water several times.
2) The films with loaded AgNO3 were then immersed into 0.65 wt% NaBH4 solution in water for 30 sec.
We repeated steps 1) and 2) N times (N is an integer from 0 to 15) to increase the concentration of Ag NPs.
Figure IV. 7 Schematic illustration of the in-situ gold loading process. A film with an organized lamellar phase is immersed into a 3 wt% HAuCl4 ethanol solution, and Au salts bind to the amine functions in the P2VP domains; the film loaded with Au salt is then immersed into a 0.65 wt% aqueous NaBH4 solution to reduce the Au salts into Au NPs. We repeat the process N times (N is an integer from 0 to 45) to increase the concentration of Au in the P2VP domains.
Measurements
The surface topography of samples was imaged with optical microscopy (OLYMPUS BX51-P polarizing microscope) at room temperature.
The samples were studied by variable angle spectroscopic ellipsometry (VASE); the experimental details are given in Chapter II.1. We use the UVISEL configuration II with AOI = 55º, 65º and 75º.
For side views of the samples, 1) the films on silicon wafers were broken in half manually and observed by SEM; 2) the films on Aclar substrates were prepared as described in the previous chapter ('Ultramicrotomy technique' in III.2.1). The films were embedded in epoxy and cut into 60 nm thin sections using a diamond knife. The sections were floated on deionized water, picked up on TEM grids and observed in a TEM.
After all the measurements, samples were dissolved in toluene to disperse the resulting Au NPs, which were then observed by TEM and SAXS.
SAXS correction
The solutions containing the dissolved thin films were measured by SAXS. The recorded scattering pattern after background correction contains the scattering from three different objects in the suspensions: the neat gold nanoparticles, gold nanoparticles attached to polymer chains, and single polymer chains. In the case of the film with N=0, the suspension contains only the polymer. We subtracted the intensity scattered by the suspension N=0 from all the other data, in order to remove most of the polymer signal. We then neglected the remaining polymer signal and analyzed the SAXS data with a model of neat gold nanoparticles only, because they present a much larger electronic density than the solvated polymer chains. We also assumed that the particles are all spheres and that their size distribution can be described by a Gaussian function (Equation 4-2). These are approximations, which should provide simple access to the order of size of the formed nanoparticles. The analyzed data are compared with those of the pure polymer suspension. We can see that the polymer scattering is two orders of magnitude smaller than the sample scattering, which is dominated by the gold NPs, for small wavevectors (0.2 < q < 1 nm⁻¹). The polymer signal becomes significant at larger q (q > 1 nm⁻¹), where the nanoparticle scattering vanishes. All the SAXS data were corrected to Ic.
IV.2.3 Results and Discussion
IV.2.3.1 Films with gold nanoparticles
As we discussed in the previous chapter, the P2VP blocks are preferred at the interface with the substrate while the PS blocks are preferred at the interface with air, which leads to an asymmetric configuration of the lamellar phase film (see Chapter I.3).
If the final film thickness after annealing is not exactly equal to nd0 or (n+1/2)d0 (n is a positive integer and d0 is the thickness of the bilayer), it is not possible to obtain a flat surface, and holes or islands form on the free surface with step heights of d0.
Therefore, observation by optical microscopy or AFM of holes or islands formation on the top of the film is a good indication of a multilayered structure of parallel lamellae in the film 4 .
Figure IV. 9 Topography of films PS25K-P2VP25K observed by optical microscope with an increasing number of impregnation cycles N. N=0 is the film after spin-coating and before annealing. The increasing purple color of the film is related to the localized surface plasmon resonance of the nucleated gold nanoparticles
Figure IV. 9 shows the film for N=0, after spin-coating and before annealing: we can see a homogeneous film deposited on the silicon wafer. After thermal annealing, the film is impregnated with gold one time (Figure IV. 9, N=1), and the images show islands and holes on the film topography, which suggests that the film has the parallel lamellae morphology. One of the advantages of this infiltration procedure is that we can slowly increase, in a controlled way, the volume fraction of introduced nanoparticles by repeating the double dipping process N times. From the high magnification images in Figure IV. 11, we can see that the gold nanoparticles are selectively introduced both in thin and thick films, along the whole film depth. This indicates that the gold precursors AuCl4- were reduced to gold nanoparticles in a homogeneous manner within the P2VP layers by the aqueous solutions, and that both the gold precursors AuCl4- and the reducing agent NaBH4 penetrate homogeneously within the multilayer stack 5. The reactants can pass through the stack by two possible mechanisms: (1) the reactants access the P2VP domains from the edges of the films; (2) the reactants access from the top of the films by permeation through the different layers and/or through defects. There is no direct evidence for possibility (1), and the homogeneity of the distribution of gold nanoparticles in each layer over the whole film, including on very large lateral scales, rather favors possibility (2). Indeed, if the reactants came from the sides of the film, we would expect to see some laterally graded composition of the layers due to the diffusion of gold from border to center. This is not seen, even at low concentration of gold (N=5) or high concentration of gold (N=30). We therefore tend to consider that the reactants diffuse from the top, which means they have to permeate through the unfavorable PS layers 6. As displayed on the micrographs, the PS layers appear to have no obvious holes, defects or inhomogeneity, even on large-scale images. We thus conclude that the PS does not block the penetration of the reactants. This is in agreement with the conclusions of other studies in the literature 6.
Concentrated gold nanoparticles were found in the top two layers of both thin and thick films after 20 cycles of impregnation. We can thus propose that the reactants penetrate from the top of the films. Given the homogeneity of the film in the direction perpendicular to the substrate, the PS and P2VP domains do not block the penetration of the reactants (ethanol swells P2VP and penetrates well into the films 6).
A significant increase of gold in the top two layers, indicating that loading proceeds from the top and that the PS layers tend to hinder it, was observed in some of the samples.
We can thus propose that in this loading process the reactants diffuse through all the PS and PVP layers. Further studies related to this point are discussed in the next chapter.
IV.2.3.2 Au nanoparticles in the films
We dissolved in toluene the final films containing loaded gold nanoparticles with different values of N, in order to "kinetically" study the gold nanoparticles inside the layers. The obtained suspensions were then studied by SAXS and TEM. In the Gaussian distribution of Equation 4-2, r is the radius of the gold nanoparticles (2r is the diameter), μ is the mean diameter and σ is the standard deviation. From the diameter distribution of the gold nanoparticles, we can see that the diameter is well controlled and increases from 5 nm to 10 nm as N increases from 5 to 20. For N=30, we observe more inhomogeneous particles.
Figure IV. 15 Experimental SAXS data Ic (blue dots) and fitting results (red dash lines) for the particles suspensions extracted from thin (a)(c) and thick (b)(d) films, for the value of N=5 (a)(b) and N=10 (c)(d). Insets are the diameter Gaussian distributions used in the fitting lines. μ and σ are the mean diameter and standard deviation of the distribution, respectively.
We complemented the TEM images analyses in order to access a more statistical measurement. For this, we analyzed the same nanoparticles suspensions by SAXS to obtain the global view of the size of the nanoparticles and their distributions.
Figure IV. 16 Experimental SAXS data Ic (blue dots) and fitting results (red dashed lines) for the particle suspensions extracted from thin (a)(c) and thick (b)(d) films, for the values N=20 (a)(b) and N=30 (c)(d). Insets are the diameter Gaussian distributions used in the fitting lines. μ and σ are the mean diameter and standard deviation of the distribution, respectively.
Comparing the nanoparticles extracted from thin and thick films, we find similar sizes and size distributions, indicating that the nanoparticle diameter does not depend on the film thickness.
IV.2.3.3 Optical properties of the Au loaded films
The films were "kinetically" measured by spectroscopic ellipsometry as N was gradually increased. N=0 stands for the structured film without any gold particles inside. Considering the film and the silicon wafer as an effective medium, we extract the pseudo-permittivity <ε> from the ellipsometry data (see Figure IV. 18). Note that <ε> is not the real permittivity of the films, but it gives a first idea to understand the film structures.
Figure IV. 18 Real (upper plots) and imaginary (lower plots) parts of <ε> measured at the angle of incidence 65º, as a function of the photon energy for gradually increasing values of the number N of gold loading cycles (from N=0 to 34). (a) shows the thin films, with thickness ca. 300 nm while (b) shows the thick films, with thickness ca.700nm.
As seen on the spectra, the nanocomposite of gold nanoparticles and polymer presents a plasmon resonance due to the metallic nanoparticles 7, and the resonance amplitude increases as the gold volume fraction in the nanocomposite increases. As the resonance deepens, the system will ultimately reach ε<0 in some frequency range: in fact, the Maxwell Garnett Effective Medium Approximation (MG-EMA), although not truthfully applicable for such high fractions, suggests this can occur for gold volume fractions beyond approximately 25%. An accurate measurement of the volume fraction of gold nanoparticles is not easy, so we use this "kinetic" loading process to give another way to approach the volume fraction of accumulated gold. We are going to describe, in IV.3, the two types of measurements we performed to determine the volume fraction of gold nanoparticles.
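As an illustration of this estimate, the sketch below evaluates the Maxwell Garnett effective permittivity of gold spheres in a polymer matrix for a few volume fractions. The gold permittivity is approximated here by a simple Drude term with indicative literature parameters (ħωp ≈ 9.0 eV, ħγ0 ≈ 0.07 eV, ε∞ ≈ 9.8) and the matrix by εm ≈ 2.5; these values, and the threshold obtained with them, are only illustrative since the bare Drude term neglects the interband absorption of gold, which broadens the resonance.

```python
import numpy as np

def eps_gold_drude(E, eps_inf=9.8, Ep=9.0, gamma=0.07):
    # Drude permittivity of gold vs photon energy E (eV); indicative parameters only
    return eps_inf - Ep**2 / (E**2 + 1j * gamma * E)

def eps_maxwell_garnett(eps_i, eps_m, f):
    # MG permittivity of spherical inclusions eps_i at volume fraction f in matrix eps_m
    beta = (eps_i - eps_m) / (eps_i + 2 * eps_m)
    return eps_m * (1 + 2 * f * beta) / (1 - f * beta)

E = np.linspace(1.2, 3.5, 600)      # photon energy range (eV)
eps_m = 2.5                          # assumed polymer permittivity
for f in (0.10, 0.20, 0.25, 0.30):
    eps_eff = eps_maxwell_garnett(eps_gold_drude(E), eps_m, f)
    print(f"f = {f:.2f} -> min Re(eps_eff) = {eps_eff.real.min():+.2f}")
# With tabulated gold data (e.g. Johnson and Christy) instead of the bare Drude
# term, Re(eps_eff) turns negative near the resonance only beyond roughly 25%.
```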
IV.2.3.4 Films with silver nanoparticles
Structures
The structure of the silver films is slightly different from that of the gold ones, due to the solvent used for the impregnation of the precursors. We cannot use the same reaction conditions because silver nitrate is not soluble in ethanol. We therefore used two different solvents to dissolve the silver precursor, water and a water/ethanol mixture (1/1 in volume); the choice of solvent appears to be a key factor for the diffusion and the metal loading. Note, however, that the silver nanoparticles appear significantly more disordered and polydisperse than what we obtained with gold, which means that an optimization study would be necessary to obtain nicer films.
Figure IV. 20 High and low magnification SEM SEI and BSEI images of cross-sections of silver loaded films obtained with the precursor solution in (a) water and (b) water/ethanol (1/1 in volume), with N=20.
As we discussed in the previous section, the results of silver loading confirmed the conclusion that the reactants are diffusing though the bilayers from the top.
Ethanol 2,6 plays an important role in this diffusion, because it is a selective solvent for P2VP. In a previous study 6 of the penetration of solvents in multilayers, it was shown that when the solvent is selective for one block, the other block will retard the diffusion of the solvent. In addition, we can see from the results evidenced on the Figure IV. 20(a), that even if the solvent can also pass through the defects in the stack, there is no efficient penetration if the solvent is not a good solvent for at least one of the blocks.
In order to get a selective loading and a good penetration of the precursor, we will keep ethanol as the solvent of the precursors in the further study (in particular for the reducing agent optimization study).
Optical responses of the Ag loaded films
The films were "kinetically" measured by spectroscopic ellipsometry, with the gradual increase of N. N=0 stands for the structured film without any particles inside.
Considering the film and the silicon wafer as an effective medium, we plot the pseudo-permittivity <ε> (see Figure IV. 21). Note that <ε> is not the real permittivity of the films, but it gives a first idea to understand the film response.
Figure IV. 21 Real (upper plots) and imaginary (lower plots) parts of the pseudo-permittivity <ε> as a function of photon energy for different values of the number N of silver loading cycles (from 0 to 15). (a) shows the system with the silver precursor loaded in water, while (b) shows the water/ethanol (1/1 in volume) system.
Starting from the film without silver, N=0, we observe fringes, which we interpret as interferences related to the film thickness. As the value of N increases (the concentration of silver in the films increases), the fringes present a red shift and the absorption becomes stronger in the frequency range 1.8 eV-2.73 eV (λ ranging from 455 nm to 690 nm), which is likely caused by the plasmon resonance due to the increasing amount of silver in the films.

When one face of a quartz crystal is in contact with a liquid, its resonance frequency f and dissipation D values are affected by the liquid density and viscosity. If the coated film is a purely elastic mass bound tightly to the QCM surface, the Sauerbrey equation (Equation 4-3) is applicable 8. In this study, the samples are rigid, distributed as an even film and sufficiently thin, which allows us to use the classical Sauerbrey relation. We are going to discuss the technique details in the following paragraphs.
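For reference, the standard form of the Sauerbrey relation (Equation 4-3 in this manuscript) is recalled below; the numerical mass-sensitivity constant is the usual value quoted for a 5 MHz AT-cut crystal and is given here only as an indication.

```latex
\Delta f_n = -\frac{2 f_0^{2}}{A\sqrt{\rho_q\,\mu_q}}\;\Delta m
\qquad\Longleftrightarrow\qquad
\Delta m = -\,C\,\frac{\Delta f_n}{n},\qquad
C \simeq 17.7\ \mathrm{ng\,cm^{-2}\,Hz^{-1}}\ \text{(5 MHz crystal)}
```

Here f0 is the fundamental resonance frequency, A the active electrode area, ρq and μq the density and shear modulus of quartz, n the overtone number and Δfn the measured frequency shift at that overtone.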
IV.3.1.2 Rigid film
Figure IV. 23 Diagram illustrates the difference in the oscillation generated by a rigid (red) and soft (green) molecular layer on the sensor crystal (from biolinscientific company technical note).
In order to choose a good model to calculate the mass loading, we need to find out whether the film is rigid or viscoelastic. The dissipation is the quantity used to distinguish between a rigid and a viscoelastic film. When the driving voltage to the sensor is switched off, the energy of the oscillating crystal dissipates, and one can calculate the dissipation factor D = Edissipated/(2π·Estored), where Edissipated is the energy dissipated during one oscillation period and Estored is the energy stored in the oscillating system. The amount of dissipation is extracted from the experimental measurements and will be displayed in the Experimental part. We will then see that the dissipation is not significant in our measurements and that the films can be considered as rigid.
IV.3.2 Experimental
The quartz sensors (Q-sensors) used are coated with silicon dioxide.
IV.3.2.1 Preparation of films on silicon wafer and quartz sensor
PS25k-block-P2VP25k was dissolved in THF to yield solutions of different concentrations (1.5 weight%, 3.0 weight% and a third value given in Table IV. 3). The spin time was 30 seconds with a spin speed of 4000 rpm, and 3 seconds of acceleration and deceleration time. Then thermal annealing (180 °C in vacuum for 15 hours) was used to generate a well-defined and aligned lamellar structure.
After annealing, the block copolymer films are ready for the gold loading study.
The synthesis of gold nanoparticles was carried out according to the chemical reduction method, using hydrogen tetrachloroaurate(III) hydrate (dissolved in ethanol) as the precursor and sodium borohydride (dissolved in water at a concentration of 0.65 weight%) as the reducing agent; the corresponding chemical reaction is given in reference 14.

The homogeneity and roughness of the samples were examined by Atomic Force Microscopy (AFM) carried out under ambient conditions. The volume fraction of gold nanoparticles in the PVP layers of the samples on silicon wafer was investigated by spectroscopic ellipsometry after different numbers of gold loading cycles, and the samples on Q-sensors were subjected to the gold loading cycles within the QCM device, as explained below.
Table IV. 3 samples used in the experiments
IV.3.2.2 Spectroscopic Ellipsometry measurements
Figure IV. 25 Primary VASE data of the film after 5 cycles of the gold loading process (N=5). Left: Is as a function of photon energy for 5 angles of incidence. Right: Ic as a function of photon energy for 5 angles of incidence.
The films on silicon wafers are (i) immersed into 3.0weight% HAuCl4-ethanol for 5 min and then rinsed with deionized water several times (ii) then dipped into 0.65weight% NaBH4-H2O solution for 30sec to reduce the precursors to Au nanoparticles, (iii) then measured by spectroscopic ellipsometry (SE). We repeat the cycle (i)(ii)(iii) for N times (N is between 1 and 10). The "kinetic" VASE information was thus obtained by recording the full spectra in-between each gold loading cycle (step (i) and (ii)).
Five values of the incidence angle, AOI = 50º, 55º, 60º, 65º and 70º, were used.
IV.3.2.3 QCM-D measurements
Figure IV. 28 Schematic illustration of full process of QCM measurement including inside QCM device (i) and outside QCM device (ii). Step (i) Measurement inside the QCM-D cell. The measurement starts in water, then HAuCl4-ethanol is flushed, then water is flushed to rinse the Au 3+ which is not bound with pyridine in the PVP layers. Step (ii) Reducing step outside the QCM-D cell. The sensor is immersed in the reducing agent solution to obtain gold nanoparticles.
Repeat step (i)(ii) for N times.
After stabilization of the second baseline, we can consider that the solvent effects are reversible, and we can start the gold loading study. A given concentration of HAuCl4 in ethanol (1.0 weight%, 3.0 weight% or 6.0 weight%) was flushed inside the cell and kept in contact with the film for 20 min. Then water was flushed for 10 min and kept for 20 min. Then the film-bearing sensor was taken out of the QCM and exposed to a reducing solution (0.65 weight% NaBH4 in H2O) outside the QCM-D device in order to synthesize the gold nanoparticles. The reason for not performing the reduction step inside the device is that the reaction between HAuCl4 and NaBH4 produces H2 bubbles, which dramatically change the environment of the sensor and its resonance frequency and could not be handled reproducibly. We then repeated these steps N times (N≤10) to increase the gold loading. In fact, the QCM-D measurements are related to the mass increase in gold salt and not in gold nanoparticles. In order to access the final gold quantity, we assume that all the gold salt deposited in the PVP layers is then reduced to gold nanoparticles by NaBH4. After the reduction step, the film-bearing Q-sensor is introduced again in the device to proceed with more cycles of gold loading. The parameters used for the analysis are: density of gold, 19.9 g/cm3; volume of PVP, 0.62319x10-5 cm3.
IV.3.3 Results
The results for the thin film of 58 nm thickness are not stable; this may be caused by an uneven distribution of polymer on the wafers due to the very small thickness (2-3 bilayers). In this section, we therefore present the results for the film with a thickness of around 160 nm.
IV.3.3.1 Results of ellipsometry measurements
Figure IV. 30 The ellipsometric model used (b) is built based on the information from the thinsection TEM image (a). PS stands for layer of polystyrene and PVP+Au NPs stands for layer of nanocomposite composed of gold nanoparticles and poly(2-vinylpyridine).
The data were analyzed using the DeltaPsi2 software from Horiba Scientific.
Figure IV. 32 (a)-(j) are the plots of ellipsometry data Is and Ic as a function of the photon energy for different values of the number N of cycles in the impregnation process. They are examples at the AOI=65° of the SE data and fitting results from model (Figure IV. 30(b)). EXP stands for experimental data and Fit stands for fitting lines.
As Figure IV. 32(a)-(j) shows, the fitting lines agree well with the VASE data for N smaller than 4, but are not in good agreement with the experimental data beyond N=5. In particular, as the value of N increases, the fits do not adjust well in the plasmon resonance domain (between 1.5 and 3 eV). This is most likely due to the fact that, even at low concentration of NPs, some coupling exists in such disordered nanocomposites, as was shown earlier 15. An improved effective medium model can then be used, but it did not improve the fitting significantly, and the extracted Au volume fraction was found to be very similar. Another way of avoiding this problem is to fit only the UV range, where the absorption of gold is related to the interband transitions and does not relate much to the shape and structure of the gold domains. Here also, we found a very similar value of the gold volume fraction.
On the Is and Ic plots of Figure IV. 32, we observe that the initial large interference fringes for N=0 are attenuated as the film is loaded with Au particles (increasing N), which is due to the absorption associated with the plasmon resonance of the gold nanoparticles.

The QCM-D measurement was started in water; a considerable decrease of frequency was observed after the first solvent exchange, which is attributed to the swelling of the PVP layers in ethanol. After the second solvent change, the relative frequency always came back to its original value, which can be treated as the initial frequency. When the 3.0 weight% HAuCl4-ethanol solution was pumped in and kept in contact with the film for 20 min, a significant decrease of the relative frequency was observed, which confirmed an efficient deposition of gold salt in the film. After being rinsed with water, the film stayed in water and the final frequency was read. The difference between this final frequency and the original one is caused by the mass of loaded gold salt.
We can then suggest that the PVP layers get loaded with many gold seeds during the first impregnation cycles, and that the seeds grow bigger during the later impregnation cycles. We also studied the influence of the HAuCl4 concentration in the ethanol solution used for the impregnation. From the results, during the first cycle of the process, the gold gain values are 6.97 μg, 6.97 μg and 6.68 μg for the 1.0, 3.0 and 6.0 weight% solutions, respectively, and the differential gain of gold remains almost similar afterwards. In conclusion, the gold loading is independent of the concentration of the Au salt solution in the studied range, and the key factor for the gold loading seems to be the capture capacity of the PVP layers.
Figure IV. 34 (a) is the plot of the cumulative gold mass gaining (left axis) and differential gold mass gaining for each impregnation cycle (right axis) as function of the number of impregnation cycles obtained in various gold ionic concentration of HAuCl4 on the film of thickness 160nm with flow rate of 0.2 ml/min. The dash lines show that (i) the first impregnation cycles adsorb large quantities of gold, and the gold gaining then decreases; (ii) the first loading mass obtained from various concentration of gold salt solution are similar; (iii) after 5 cycles of loading process the mass of gold loading for each process cycle is constant. (b) is the plot of the volume fraction of Au in PVP layer as function of impregnation cycles, measured by QCM (dash lines) and SE (solid lines, analyzed with different model), which shows the good agreement between QCM results and SE results.
The solid lines show the cumulative mass of gold loaded in the film. The gained mass of gold keeps increasing with the number of cycles and does not reach a maximum.
As the gold amount increases not only in the PVP layers but also on the surface of the films (see AFM images in Figure IV. 36), after 10 cycles of impregnation our measurements do not make it possible to distinguish between a mass gain in the layers and a mass gain on the surface. We noticed that for N>10 the QCM-D signal presents an increased dissipation, which could be related, at least in part, to the gold particles accumulating on the surface. A more complex analysis model could perhaps be used to distinguish the amount of gold inside and above the films. In conclusion, we deposited a homogeneous film on the Q-sensor. The concentration of the gold precursor solution HAuCl4 has no influence on the cumulative loading of gold: the loading curves are similar. After 5 impregnation cycles, the gold loading mass for each cycle is constant, taking a value of ca. 3 µg. We can thus reach volume fractions of 10% (resp. 20%) with 5 (resp. 10) cycles of impregnation.
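As an order-of-magnitude check of these volume fractions, the short sketch below converts the cumulative gold mass measured by QCM-D into a volume fraction in the NC layers, using the gold density and PVP volume quoted earlier in this section; the per-cycle mass gains are approximate values read from the QCM-D curves, and the script is only illustrative.

```python
rho_Au = 19.9          # g/cm^3, value used in this work
V_PVP = 0.62319e-5     # cm^3, total PVP volume on the sensor (value quoted above)

# Approximate mass gained per impregnation cycle (µg): ~7 µg for the first
# cycle, then ~3 µg per cycle (read from the QCM-D curves of Figure IV. 34)
gains_ug = [7.0] + [3.0] * 9

m_cum = 0.0
for n, dm in enumerate(gains_ug, start=1):
    m_cum += dm
    V_Au = m_cum * 1e-6 / rho_Au          # cm^3 of gold
    f_Au = V_Au / (V_Au + V_PVP)          # volume fraction of gold in the NC layer
    if n in (5, 10):
        print(f"N = {n:2d}: m = {m_cum:.0f} µg, f_Au = {100 * f_Au:.1f} %")
# Gives roughly 13 % after 5 cycles and 21 % after 10 cycles, of the order of
# the 10 % and 20 % quoted above.
```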
Figure IV. 36 AFM images on the surface of the Q-sensor. (a)(b)(c)(d) are AFM images of the surface of the sensor after 0 cycle of impregnation process, 5 cycles, 7 cycles, and 10 cycles respectively. The bright dots are Au nanoparticles, whose quantity increases as the process cycle number increases. (e) is a thickness profile along part of image (a) displayed to show that the difference in thickness is 25nm (close to a half bilayer thickness) between light and dark domains on the images.
Conclusions
In this chapter, we studied two methods for fabricating multilayer nanocomposites.
The one-step method is not efficient to load gold nanoparticles in multilayer structures, while the impregnation process can load gold nanoparticles inside PVP layer effectively, which also works for silver nanoparticles impregnation.
In the impregnation process, the reactants could be diffusing through all the PS and PVP layers. However, if the solvent of the metallic precursor is changed from ethanol to water, there is no penetration of the reactants into the films. We therefore keep ethanol as the solvent of the gold precursor in the later studies.
The impregnation process used for silver nanoparticle loading needs to be improved. In order to achieve structures as good as the ones with gold nanoparticles, the reducing step needs to be optimized.
Furthermore, the observation of the same reduction behavior at various values of N suggests that the loading of the precursor could reach near-equilibrium: one HAuCl4 per pyridine unit 16. As the value of N increases, the size of the gold nanoparticles increases in situ.
According to the study of the gold nanoparticles inside the films by SAXS and TEM, the gold nanoparticles are nanospheres with a diameter of about 8 nm. The diameter of the nanoparticles is independent of the thickness of the films and of the value of N (for N>5). The first few cycles of impregnation deposit small nanoparticles in the P2VP layers, which then act as gold seeds, and the following loading cycles grow the gold NPs.
After studying the volume fraction of gold with QCM-D and SE, we obtained results in good agreement. The concentration of the gold precursor solution HAuCl4 has no influence on the cumulative loading of gold. After 5 impregnation cycles, the gold loading mass for each cycle is constant, taking a value of ca. 3 µg. We can reach volume fractions of 10% (resp. 20%) with 5 (resp. 10) cycles of impregnation.

This goal leads us to try to understand the processes of impregnation and reduction. In this chapter, we are going to study and optimize the Au NPs synthesis process and try different ways to deal with the irregular particles on the surface. In particular, we will study how the reduction process depends on the choice of solvent and reducing agent.
Introduction
V.1 Impregnation process improvement
Structures
As we discussed in Chapter III, the PS and PVP domains separate into a lamellar phase after thermal annealing. As the film is then dipped into the gold salt solution, the solvent brings the gold salt into the PVP layers. The Au NPs synthesis in the water system works very well: the loading and reduction processes preserve the self-assembled lamellar structure of the block copolymer, and the Au NPs shape (spheres) and size (around 10 nm) are well controlled. In the water/ethanol system, the loading process preserves the self-assembled lamellar structure except for the nanocomposite layer thickness (the thickness increases), and most of the gold particles in the layers are aggregated. We can thus propose that the aggregation could be manipulated through the proportion of ethanol and water. This gives us a possibility to study the optical response caused by the couplings between the metallic particles. In the methanol system, the lamellar structure is well kept but the in-situ synthesis does not work well. However, the gold particles in the layers grow bigger than in the water system, which gives us another possibility to increase the size of the gold particles. As we illustrate in 'Ellipsometric modeling', the MG and EMA models stand for two different dispersion states of the Au NPs in the layer: the MG model stands for dilute non-coupling spheres while the EMA stands for a random mixture of polymer and Au. From Figure V. 6, it appears that the MG fit works better in the water system and the EMA fit works better in the water/ethanol system. The Au NPs in the layers reduced in water correspond better to non-coupling spheres, while in the water/ethanol system the produced gold corresponds better to large Au domains, which is confirmed by the TEM images. For the following simulation of the water system, the NC layer is described by the MG model; on the contrary, the simulation of the water/ethanol system is done with the NC layer described by the EMA model. When pushing the Maxwell-Garnett effective medium approximation (MG-EMA) to higher N, a degraded agreement is naturally expected; it remains nevertheless reasonably good, especially below 2.1 eV (above 580 nm). These partial agreements provide rough estimates of the gold loading concentration: in the water system, N=6 corresponds to an MG-extracted value of f=7%, and N=15 and N=20 to approximately f=23% and f=40%, respectively; in the water/ethanol system, N=6 corresponds to an EMA-extracted value of f=6%, and N=15 and N=20 to approximately f=28% and f=47%, respectively. As the Au NPs loading (fAu) increases, or as the NPs aggregate into thin gold layers, the resonance perpendicular to the substrate, Re(εz), is also modified.
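To make the distinction between the two descriptions explicit, the sketch below compares the Maxwell Garnett expression for isolated spheres with the symmetric Bruggeman mixing rule (which is what we assume the 'EMA' model mentioned here corresponds to) at the same gold fraction; the gold and polymer permittivity values are arbitrary examples, not fitted data.

```python
import numpy as np

def eps_MG(eps_i, eps_m, f):
    # Maxwell Garnett: dilute non-interacting spheres eps_i in matrix eps_m
    beta = (eps_i - eps_m) / (eps_i + 2 * eps_m)
    return eps_m * (1 + 2 * f * beta) / (1 - f * beta)

def eps_bruggeman(eps_i, eps_m, f):
    # Symmetric Bruggeman EMA for a random two-phase mixture of spheres:
    # solve 2*eps^2 - b*eps - eps_i*eps_m = 0 and keep the passive root
    b = (3 * f - 1) * eps_i + (2 - 3 * f) * eps_m
    roots = [(b + s * np.sqrt(b**2 + 8 * eps_i * eps_m)) / 4 for s in (+1, -1)]
    return next(r for r in roots if r.imag >= -1e-12)

eps_Au = -5.0 + 2.0j   # example gold permittivity near the plasmon resonance
eps_pol = 2.5 + 0j
for f in (0.1, 0.3):
    print(f, eps_MG(eps_Au, eps_pol, f), eps_bruggeman(eps_Au, eps_pol, f))
```

The two rules agree at low fraction but diverge as the gold loading increases, which is why they describe differently the isolated spheres obtained in water and the aggregated gold obtained in water/ethanol.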
Results of optical properties
Film preparation is the same as described in V.1.1.2 "Film preparation".
Gold loading process
After we obtained the aligned and organized lamellar phases, 1) we immersed the films in a 3.0 wt% ethanol solution of HAuCl4 for 5 minutes, followed by rinsing with deionized water several times. 2) Afterwards, the films loaded with HAuCl4 were immersed into different 0.65 wt% aqueous solutions of NaBH4, Na3Ct or AA for 30 sec.
We repeat steps 1) and 2) for N cycles (N is an integer from 0 to 20) to increase the concentration of Au NPs.
In the case of Na3Ct, step 2) was carried out both at room temperature and at 70 ºC, in order to check the effect of temperature.
In the case of AA, step 2) was carried out both in the neat AA solution and in an AA solution with pH adjusted to 11 by NaOH addition, with a thermal annealing (180 ºC in vacuum for 2 hours) after the whole loading process.
Measurement
The samples were studied by VASE using the configuration of UVISEL II with AOI=55º, 65º and 75º.
For cross-sectional views of the samples, the films on silicon wafer were broken in half manually and observed by SEM.
After all the measurements, the samples were dissolved in toluene to disperse the resulting Au NPs, which were then observed by TEM.
V.1.2.3 Results
Structures
As we mentioned before, in the BSEI SEM images the white parts correspond to the gold-rich areas and the dark parts correspond to the pure polymer domains (PS). Na3Ct (molecular weight 294.10 g/mol) and AA (molecular weight 176.12 g/mol) have larger molecular weights and seem less capable of penetrating the polymer layers. This tends to confirm that the reducing agent permeates from the surface to the bottom of the film.
Results of optical properties
In order to follow the "kinetic" loading process, we measured the samples by SE at increasing values of N.

The surface particles could in principle be removed by thermal or solvent annealing 14-16. Because solvent annealing is more difficult to control, we have started with thermal annealing. Thermal annealing (at a temperature higher than the glass transition temperature) gives the copolymer chains a chance to move freely and reorganize the lamellar structure, which may "absorb" the Au NPs from the surface into the layers.
V.2.1.2 Experiment
Film preparation follows the same process as described in V.1.1.2, with 0.65 wt% reducing agent NaBH4 and AA in H2O.
Gold loading process
After we obtain the aligned and organized lamellar phases, 1) we immersed the films in a 3.0 wt% HAuCl4 solution in ethanol for 5 minutes, followed by rinsing with deionized water several times. 2) Afterwards, the films loaded with HAuCl4 were immersed into a 0.65 wt% aqueous solution of NaBH4 for 30 sec.
We repeat steps 1) and 2) for N cycles (N is an integer from 0 to 30) to increase the concentration of Au NPs. This is noted as the Normal Process.
Alternatively, we repeat steps 1) and 2) for N cycles (N is an integer from 0 to 30) to increase the concentration of Au NPs, and in addition a thermal annealing (180 ºC in vacuum for 2 hours) is applied to the samples after every 5 cycles of steps 1) and 2), until N=30. This is noted as the Annealing Process.
Measurement
The samples were observed by VASE using the configuration of UVISEL II with AOI=55º, 65º and 75º. For cross-sectional view of samples, the films on silicon wafer were broken in half manually and observed by SEM.
V.2.1.3 Results
Structures
As shown in Figure V. 12, thermal annealing is an efficient method to remove the surface Au nanoparticles and to rearrange the Au nanoparticles inside the layers. The thermal annealing applied every 5 cycles preserves the self-assembled lamellar structure but not the arrangement of the Au NPs in the layers. In order to understand the changes due to this rearrangement, the SE data are analyzed in the following paragraph.
Results of optical properties
We measured by SE the samples prepared by the Annealing Process at N=0, 5 and higher values of N, and compared them to the Normal Process. We observe a loss of fringes in the UV range for both processes. At N=5, little difference is observed. A significant difference between the two processes is observed for photon energies ranging from 2.1 eV to 1.8 eV (λ~590 nm to 688 nm) when N≥10. We can see that as the value of N increases, Re(<ε>) and Im(<ε>) of the samples prepared by the Normal Process increase until N=20, while a decrease of the peak is observed for the samples prepared by the Annealing Process. At N=25, in the case of the Normal Process the peak caused by the resonance of the gold NPs can still be observed but becomes wider and smaller, which may be due to the irregular particles deposited on the film surface; in the case of the Annealing Process, the resonance caused by the gold NPs is hardly visible in the measured signal. At N=30, both processes have lost the resonance.
In conclusion, the Annealing Process can efficiently remove the irregular particles on the film surface. However, the optical responses are impacted because the annealing also modifies the NPs inside the films. Therefore, the Annealing Process as we implemented it is not a satisfying modification process.
V.2.2 Etching Au NPs on surface
V.2.2.1 Introduction
As we discussed in the previous sections, irregular gold particles accumulate on the film surface at high values of N; in this section, we investigate chemical etching as a way to remove them.
Experiment
The film preparation follows the same process as described in Chapter V.1.1.2.
Au loading process:
After we obtained the aligned and organized lamellar phases, 1) we immersed the film in a 3.0 wt% HAuCl4 solution in ethanol for 5 minutes, followed by rinsing with deionized water several times. 2) Afterwards, the film loaded with HAuCl4 was immersed into an aqueous solution of 0.65 wt% sodium borohydride for 30 sec. We repeat steps 1) and 2) for N cycles (N ≥ 30).
MUA solution preparation:
5 mg of 11-MUA and 1 mg of NaOH were dissolved in 5 ml of distilled water.
Removal process:
The sample was immersed in the MUA solution and put in an ultrasonic bath for 5-8 min.
Measurement:
The films on silicon wafer were broken in half manually and observed by SEM.
Results
Because
Measurement:
For cross-sectional views of the samples, the films on silicon wafer were broken in half manually and observed by SEM.
The surfaces were observed by Optical Microscopy (OM).
Results
Aqua regia is aggressive and the reaction with gold is fast. Our observations after 6 cycles of the removal process show that the etching is too aggressive to be well controlled. In order to fix this problem, we plan to use another, milder etching agent, and we chose potassium iodide. The following section presents this study.
V.2.2.4 Potassium iodide KI-I2
Experiment "Material", "Film preparation", and "Au loading process" are the same as V.2.2.1.
Potassium iodide solution preparation: Add 4 g of Potassium Iodide KI, 1 g of Iodine I2 and 40 ml distilled water in a glass container. Then stir until all the solid is dissolved in water.
Removal process:
Put 1 drop of the KI-I2 solution on the sample and rinse with distilled water immediately.
Repeat this process 2 or 4 times in order to get a better surface. Figure V. 19 shows the schematic removal process. Alternatively, immerse the sample in the KI-I2 solution for 10 seconds and rinse with distilled water immediately.
Measurement:
For side views of the samples, the films on silicon wafer were broken in half manually and observed by SEM.
Results
As we mentioned in Chapter II.3, SEI is the secondary electron image and BSEI is the backscattered electron image. In SEI, we can see the topography of the film. In BSEI, white parts are metallic and dark parts are polymer. So, in order to fully understand the structure of the film, we should compare the SEI and BSEI images. We compare here the films treated with drops of etchant and the film immersed for 10 sec. We can see that after treatment with 4 drops there are some "black holes" in the film, which are not real holes: in SEI we can see that the polymer and the lamellar phase are present, but in BSEI the same region appears like a hole.
Comparing the SEI and BSEI images, we can see that this "black hole" is actually polymer without gold, which results from the fact that the etchant solution permeates through the polymer film and removes the Au NPs in the layers. The etching reaction with KI-I2 is mild and controllable. The reaction with drops leads to inhomogeneous surfaces, as in the case of aqua regia. The immersion method is more homogeneous. We can propose, as a conclusion, that immersion with a shorter time and a lower concentration of KI-I2 could be a better method to remove the Au NPs on the surface while optimizing the Au distribution in the layers.
As a conclusion, the solvent used for the introduction of the reducing agent is important for the gold loading method and can tune the structure of the gold nanoparticles. When the value of N is high, irregular particles are present on the film surface, and we showed that they can be removed by an etching solution or a thermal annealing step.
Introduction
As we discussed in the Chapter III "film fabrication", Chapter IV "gold loading process" and Chapter V "film structure optimization", we produced our target nanostructures with well controlled film thickness, bilayer thickness, volume fraction of gold nanoparticles in the PVP layers, shape and size of the gold nanoparticles. Once the fabrication and structural characterization steps are completed, we would like to choose several typical films to study their optical properties and the relation between structure and optical responses.
In this chapter, we are focusing on the question of how the structure influences the optical properties. However, with our self-assembled nanostructures, presenting some degree of disorder, it is often difficult to find a good model to analyze the experimental data. In this chapter, we are going to study the SE data by several models to understand the relations between optical properties and structure parameters.
VI.1 Ellipsometric modelling
In order to extract the effective optical properties of the studied films from the SE data, two different models were built. Model A first allows us to extract εNC from the experimental VASE data, from which we then calculate ε// and εz using Equation 6-2 and Equation 6-3.
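Although Equation 6-2 and Equation 6-3 are not reproduced here, in the standard description of a sub-wavelength lamellar stack they correspond to the usual arithmetic and harmonic averages of the layer permittivities; the sketch below, written with generic variable names, shows how ε// and εz would then be obtained from εNC and εPS and how a hyperbolic region (Re ε// and Re εz of opposite signs) can be detected. It is an assumption-based illustration, not a reproduction of our analysis scripts.

```python
def lamellar_effective_permittivity(eps_NC, eps_PS, f_NC):
    # Sub-wavelength bilayer stack: in-plane (arithmetic) and out-of-plane (harmonic) averages
    eps_par = f_NC * eps_NC + (1 - f_NC) * eps_PS
    eps_z = 1.0 / (f_NC / eps_NC + (1 - f_NC) / eps_PS)
    return eps_par, eps_z

eps_PS = 2.5 + 0j                              # assumed polystyrene permittivity
for eps_NC in (-1.0 + 1.5j, -4.0 + 2.0j):      # example NC-layer permittivities near resonance
    e_par, e_z = lamellar_effective_permittivity(eps_NC, eps_PS, f_NC=0.5)
    hyperbolic = e_par.real * e_z.real < 0
    print(f"eps_NC = {eps_NC}: Re(eps_//) = {e_par.real:+.2f}, "
          f"Re(eps_z) = {e_z.real:+.2f}, hyperbolic: {hyperbolic}")
```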
Model A works efficiently to extract ε// and εz of the films, but not so well at high volume fractions of gold nanoparticles. A spline function is defined as a piecewise polynomial of degree n and is used as an approximating function in mathematics and numerical analysis; the definition 5 is given by Equation 6-4 and Equation 6-5. The second approach used is the Tauc-Lorentz (TL) dispersion model:
ε1(E) = (2/π)·P·∫_{Eg}^{∞} ξ·ε2(ξ)/(ξ² − E²) dξ

ε2(E) = (1/E)·A·E0·C·(E − Eg)² / [(E² − E0²)² + C²·E²] for E > Eg, and ε2(E) = 0 for E ≤ Eg

where:
- E is the photon energy;
- Eg is the optical band gap;
- E0 is the peak central energy;
- C is the broadening term of the peak;
- A is the amplitude of the ε2 peak.

In conclusion, both fittings indicate that the results of the gold loading process we used do not depend on the thickness of the film in the studied range. With the increase of the N value, there might be a small influence of the thickness on the films. Also, the optical properties analysis can be done in the same way for different film thicknesses.
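Coming back to the Tauc-Lorentz dispersion written above, the sketch below shows how ε2(E) can be evaluated and how ε1(E) follows from it through the Kramers-Kronig principal-value integral; the oscillator parameters are arbitrary and only illustrate the shape of such a dispersion, not our fitted values.

```python
import numpy as np
from scipy.integrate import quad

def eps2_TL(E, A, E0, C, Eg):
    # Tauc-Lorentz imaginary part, zero below the gap Eg
    E = np.asarray(E, dtype=float)
    num = A * E0 * C * (E - Eg) ** 2
    den = (E ** 2 - E0 ** 2) ** 2 + C ** 2 * E ** 2
    return np.where(E > Eg, num / (den * E), 0.0)

def eps1_TL(E, A, E0, C, Eg, eps1_inf=1.0, xi_max=20.0):
    # Real part via the Kramers-Kronig principal-value integral; the pole at
    # xi = E is handled by quad's Cauchy weight, writing the integrand as f(xi)/(xi - E)
    f = lambda xi: float(xi * eps2_TL(xi, A, E0, C, Eg) / (xi + E))
    pv, _ = quad(f, Eg, xi_max, weight="cauchy", wvar=E, limit=200)
    return eps1_inf + 2.0 / np.pi * pv

A, E0, C, Eg = 50.0, 2.1, 0.6, 1.4     # arbitrary illustrative parameters (eV units)
print(eps2_TL(2.1, A, E0, C, Eg), eps1_TL(2.1, A, E0, C, Eg))
```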
VI.2.2 Influence of the lamellar period
VI.2.2.1 Samples structure
As was detailed in Chapter III, the samples studied here have different lamellar periods. The difference in the fringes is due to the different film thicknesses: t1 and t2 are ca. 250 nm, while t3 is 520 nm (see Table VI. 4). As we concluded in the previous section, the film thickness has negligible influence on the optical analysis; we can therefore suppose that the difference in thickness will have no effect on our analysis. The fringes are damped in the UV range in the presence of gold, due to the intraband absorption of gold. Moreover, their amplitude around 2.1 eV (λ~590 nm) is markedly modified as N increases, which is due to the plasmon resonance of the gold nanoparticles. However, it is difficult to extract more information from the pseudo-permittivities.
As for the film thickness analysis, we extract the ordinary direction permittivity of the films. From Equation 6-1, f, the volume fraction of gold inside the films, is the only fitting parameter in this case. We can see that for both values N=5 and N=10, when the bilayer thickness increases, the films have a slightly higher uptake of gold. For N=5, gold fractions of 6.2% and 7.3% are found in t1 and t2, whose bilayer thicknesses are 16.4 nm and 31 nm. As the bilayer thickness continues to increase, the structure is not as well defined and the amplitude around the gold plasmon resonance is not as good as for t2.
Furthermore, the MG gives the same position of absorption, which is not exact in the experimental data. This is the limitation of the usage of the MG approach, which is improved by the TL fit. The comparison of the ε// extracted from both modeling functions, is seen on the Figure VI. 16. From N=5 to N=10, the volume fraction of gold increases, which leads to a stronger amplitude ranging from 1.9eV to 2.3 eV (λ ranging from 539nm to 620nm) in ε//, and the red shift of the position in εTL. The εMG presents a fixed resonance position at 2.3eV (λ=540nm), while with εTL the position of the gold plasmon resonance varies a little, and the absorption peak is wider than εMG. As we discussed in previous chapters, our fabrication method provides gold nanoparticles with some polydispersity and packed in a dense and disordered way in the NC layers, which may lead to wide and red-shifted plasmon peak due to inter particle couplings. At this point, the TL fitting seems more appropriate in this system. TL fitting results show that as the value of N increases, the plasmon resonance position does not shift significantly. The absorption position for t1, t2 and t3 with bilayer thickness 17nm, 31nm and 78nm are at 2.05eV, 2.15eV and 1.95 eV. Therefore, the different bilayer thickness can lead to only slightly different wavelengths of plasmon resonance.
In conclusion, both fittings with MG and TL models indicate that as the bilayer thickness increases, a slightly higher uptake of gold occurs. However, once the bilayer thickness is too large (70nm), the nanostructure of the film is not well defined and leads to a lower amplitude of the resonance. The sample with layer thickness 31nm gives the strongest resonance.
VI.3 Different metallic particles and optical properties
As we discussed in Chapter IV, we can introduce gold or silver nanoparticles inside the lamellar films. Also the size and shape of the gold nanoparticles can be modified by the choice of the solvent for the reducing step and/or the addition of a thermal annealing step, as was explained in Chapter V. We present here our study of how these parameters affect the optical properties. We saw in the previous paragraphs that when we fit the SE data, the resonance spectral position and width does not always correspond to what is predicted by a simple MG model of spherical gold nanoparticles in a polymer matrix. We know, from the previous work of Kévin Ehrhardt 12 and Julien Vieaud 13 , that this could be due to either non-spherical shape of the nanoparticles, or interparticle couplings, which can be modeled by non-spherical polarizabilities of the gold NPs. In this section, we are going to give simulations of the SE signal for lamellar nanocomposites with various shapes of ellipsoid.
VI.3.2.2 Ellipsoidal gold nanoparticles
Ellipsoid is a closed quadratic surface that is a three dimensional analogue of an ellipse (see Figure VI. 23), where a1, a2 and a3 are called the semi-principal axes (see Equation 6-6).
Equation 6-6: x²/a1² + y²/a2² + z²/a3² = 1
a1, a2 and a3 correspond to the semi-major and semi-minor axes of the appropriate ellipses (see Figure VI. 23). The sphere is a particular case where a1=a2=a3, while when a1=a2<a3 (a1=a2>a3) the ellipsoid is called prolate (oblate). Of course there are also cases where a1≠a2≠a3; we are going to discuss the simple cases with a revolution symmetry, sphere, prolate and oblate, in this section. The eccentricity e is defined by
e² = 1 − (amin/amax)², where amax and amin are the largest and smallest semi-axes. The simulations show the effect of the depolarization factor on the optical responses of the various ellipsoidal particles. As the value of the depolarization factor approaches 1/3, the particle shape approaches a sphere, and the peak caused by the resonance of gold, between 1.925 eV and 2.1 eV (λ ranging from 644.1 nm to 590.3 nm), is the strongest.
Once the shape of particles tends to be oblate or prolate, the peak has a red shift and the resonance becomes weaker. Using ellipsoidal particles in the model did not, however, improve much the fits obtained in our systems, compared to a TL fit as presented before.
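To relate the particle shape to the resonance position more quantitatively, the sketch below computes the depolarization factor of a prolate spheroid along its long axis and the corresponding quasi-static dipolar resonance condition Re(εAu) = −((1−L)/L)·εm; it is a generic textbook calculation with an assumed host permittivity, not the exact model used in our fits.

```python
import numpy as np

def depolarization_prolate(ratio):
    # Depolarization factor along the long axis of a prolate spheroid,
    # ratio = a3/a1 > 1; the two transverse factors are (1 - L_z) / 2
    e = np.sqrt(1.0 - 1.0 / ratio ** 2)          # eccentricity
    return (1 - e ** 2) / e ** 2 * (np.log((1 + e) / (1 - e)) / (2 * e) - 1)

eps_host = 2.5                                   # assumed polymer permittivity
for ratio in (1.01, 1.5, 2.0, 3.0):
    Lz = depolarization_prolate(ratio)
    eps_res = -(1 - Lz) / Lz * eps_host          # resonance condition along that axis
    print(f"aspect ratio {ratio}: L_z = {Lz:.3f}, resonance at Re(eps_Au) = {eps_res:.1f}")
# L_z tends to 1/3 (sphere) for ratio -> 1; smaller L_z requires a more negative
# Re(eps_Au), i.e. a red-shifted resonance, as observed for elongated or coupled particles.
```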
VI.4 Effect of the volume fraction of gold particles on the optical properties
As we discussed in VI.2, we know that the film thickness has negligible influence on the optical properties and that the film with a bilayer thickness of 31 nm has the strongest resonance for a given number N of impregnation cycles. We also know, from the previous analysis, that the volume fraction of gold particles plays an important role in the optical response. In order to study the complete optical properties and determine whether we can access hyperbolic metamaterials, we chose lamellar nanocomposite structures of 12~14 layers with a characteristic size of ca. 30 nm and various volume fractions of gold, and discuss the possibility for the sample to exhibit hyperbolic properties 14,15, based on the following features: the material presents a high degree of order and a uniaxial symmetry with the lamellar phase in parallel alignment, it has a characteristic size of 30 nm << 500 nm (wavelength of the gold NPs resonance), and it combines a gold nanoparticle response (ε<0) and a polymer layer response (ε>0). In this section, we are going to study how the volume fraction of gold nanoparticles influences the optical properties and also demonstrate that the studied system achieves the hyperbolic metamaterial regime. Figure VI. 29 shows an example of the results obtained by TEM in the case of N=25. We can therefore infer that such 7 nm gold NPs are dispersed with no specific order and surrounded by a P2VP matrix in the NC layers.
VI.4.2 Optical responses
Let us now present and discuss the optical parameters that were extracted from the ellipsometric study of the fabricated nanostructures 24. This part of the analysis was done in collaboration with Morten Kildemo, from the Norwegian University of Science and Technology (NTNU) in Trondheim, using the CompleteEASE analysis software from the J.A. Woollam Company, with spline functions.
VI.4.2.1 Primary results
The "kinetic" VASE information was Finally note that attempting to combine the top Au deposit layer (of thickness dSAu) into the effective uniaxial medium smears its properties and does not produce negative ε//, in contrast with the analyses based on the two used models. Therefore, future studies will aim at lifting off the uncontrolled detrimental gold deposit top layer. In the current analysis, it is regarded as an experimental artifact whose analysis needs to be isolated from the analysis of the targeted nanostructure below it.
In conclusion, the volume fraction of gold nanoparticles in the nanocomposite layers plays an important role for defining the position and amplitude of the resonance.
The results also demonstrate the capacity of our bottom-up self-assembled multilamellar stack to respond as a hyperbolic effective medium in a given region of the visible spectrum (520 < λ < 560 nm).
General conclusion
Hyperbolic metamaterials have appeared about ten years ago. They consist in anisotropic metal-dielectric nanocomposites and they have innovative optical properties for light of wavelength greater than the characteristic size of the internal structure of the material. In order to have access to these new properties in visible light, resonant nano-objects must be organized on a scale of a few tens of nanometers.
Our strategy has been to fabricate such structures by assembling gold nanoparticles in ordered phases of nanostructured diblock copolymers, in order to obtain plasmonic and anisotropic nanocomposites. A state of the art on the different ways of incorporating nanoparticles into a single block of a self-assembled diblock copolymer of lamellar morphology allowed us to define a fabrication methodology taking into account the complexity of these nanocomposite systems and giving access to controlled structures.
Homogeneous thin films of poly(styrene)-block-poly(2-vinylpyridine) (PS-block-P2VP) were obtained by spin-coating on silicon wafer. Their thickness is measured precisely by ellipsometry. For given spin-coating conditions, the thickness of the film can be controlled by the concentration of the polymer solution. We optimized the spin-coating conditions using the Taguchi optimization methodology, and determined that the most important factor influencing the thickness of the films is the acceleration and deceleration time, with longer times producing thicker films. Typical film thicknesses used in our study lie in the range 300-700 nm.
We used commercial PS-block-P2VP symmetrical diblock copolymers with various polymerization degrees. We showed by small-angle X-ray scattering that the PS-block-P2VP copolymers exhibit a lamellar morphology at thermodynamic equilibrium. The results are in good agreement with the generic phase diagram of block copolymers for symmetrical compositions: when annealed, PS102k-block-P2VP96k, PS25k-block-P2VP25k and PS8.2k-block-P2VP8.3k, with volume fractions of the P2VP blocks fPVP≈0.5, form lamellar phases with characteristic sizes α of 78 nm, 37 nm and 16 nm, respectively. The characteristic size is at scales far below the wavelength of visible light (α < λ/10). The film structures were studied by electron microscopy. In order to minimize the surface energy, the PVP block has a preferential interaction with the substrate while the PS block prefers the free interface with air. This leads to the asymmetric wetting configuration. Finally, this process, combining spin-coating of a relatively dilute polymer solution on a selective surface and thermal annealing, produces a flat and homogeneous film with a parallel alignment of the 3-D organized structures throughout the thickness of the film, between 20 and 700 nm. This ordered structure spans an area as large as the spin-coating process can produce (10x10 mm² wafers in our case, or larger).
From the aligned lamellar structures, we fabricate the target multilayer nanocomposites consisting in alternate layers of pure polymer and polymer/nanoparticles composite, using two different in situ synthesis of gold nanoparticles, namely a one-step method and an impregnation process. Both of the methods take advantage of the amine group in the P2VP domains. The amine group can bind with gold salt. The one-step method consists in blending a gold salt solution with the copolymer before casting the film and letting it reach microphase separation at thermodynamic equilibrium. This method is not efficient because the presence of the gold salt modifies the properties of the copolymer, which does not achieve a lamellar morphology. In the impregnation process, the pure copolymer film is first produced and aligned by a thermal annealing process. Then, the film-bearing wafer is dipped into an Au salt (HAuCl4) solution for 5 mins and then into a reducing agent (NaBH4) solution for 30 s. This process mostly preserves the lamellar structure of the diblock copolymers and realizes the target structure on large areas. Due to the strong insolubility of PS in polar solvents, on the one hand, and the strong affinity of P2VP to Au, on the other hand, Au nanoparticles form selectively within the P2VP layers, thus producing a structure of alternating pure PS and (NC) (Au nanoparticles:P2VP nanocomposite) layers. We developed an impregnation cycles procedure, in order to increase, in a controlled way, the volume fraction of nanoparticles in the (NC) layers, by repeating the double dipping process N times. Each cycle increases the purple color of the film, related to the localized surface plasmon resonance of the nucleated gold nanoparticles. Structural and optical studies of the film were performed at different steps along the fabrication process, so that the same film was investigated for different values of N, which we called "kinetic" measurements. Using scanning electron microscopy (SEM), we fully characterize the structure of the studied multilayer films.
In the case of the impregnation process, we studied the reactant loading by varying the metal salt (from gold to silver) and the impregnation solvent. The salt solution diffuses through the PS and PVP layers: the pathway of reactant diffusion is from the top of the samples, passing through the PS and PVP layers easily. The comparison with the same process performed with silver confirms that the reactants diffuse from the top. Ethanol plays an important role in this diffusion, because it is a selective solvent for PVP. In the study of the penetration of solvents in multilayers, when the solvent is selective for only one block, the other block retards the diffusion of the solvent. In addition, even if the solvent also passes through defects, there is no penetration after several layers. In order to get selective loading and good penetration of the precursor, the solvent of the precursors is the key factor.
The gold nanoparticles synthesized in the PS25k-block-P2VP25k films were studied by SAXS and TEM. The nanoparticles in the PVP layers present a controlled size around 10 nm. This size can be controlled by the PVP layer thickness (size ca. 17nm).
The volume fraction of gold nanoparticles in the PVP layers was studied by spectroscopic ellipsometry and QCM-D.
Figure I. 2 (a) The nomenclature of metamaterials, based on the values of the real parts of their permittivity and permeability 24,25. DPS: double positive media, ENG: ε negative media, MNG: µ negative media, DNG: double negative media, ENZ: ε near zero media, MNZ: µ near zero media, ZIM: zero index media. (b) Schematic map of all materials. The green part corresponds to metamaterials. At visible wavelengths, µ is equal to or close to 1, dielectrics have positive ε and metals have negative ε.
Figure I. 2(b), natural materials (in blue) represent only a part of the εr and μr values. All other parts (in green) correspond to the metamaterials. In the visible range, all natural materials have a permeability μ equal to or close to 1, and lie along the red line in Figure I. 2(b). Dielectrics have positive ε and metals have negative ε.
In a medium, the constitutive relations read D = εε0E and B = μμ0H. Equation 1-4 & Equation 1-8 represent Gauss's law, and Equation 1-5 & Equation 1-9 imply that the north and south poles of magnets do not exist separately. Equation 1-6 & Equation 1-10 and Equation 1-7 & Equation 1-11 describe Faraday's induction law and Ampère's law modified by Maxwell, respectively.
When a monochromatic plane wave f(x, t) = g(x) cos ωt propagates in an isotropic, homogeneous medium, it must follow Equation 1-14, and this results in a spatial oscillation of the electric and magnetic components of the plane wave, as shown in Equation 1-15:

Equation 1-14: g''(x) + εε0μμ0ω²·g(x) = 0

Equation 1-15: g(x) = g0·exp(±i·√(εε0μμ0)·ωx)
Propagating surface plasmons can be excited at a flat metal surface (see Figure I. 4(a)). If the metal is a nanoparticle, localized surface plasmons are excited at the surface of the metallic domain (see Figure I. 4(b)).
Equation 1-17: E = E0·e^(−i(ωt − k·r))

Equation 1-18: H = H0·e^(−i(ωt − k·r))
Figure I. 5 Schematics of the isofrequency surface ω(kx, ky, kz) = const. in the 3-dimensional k-space for the 2 types of HMMs: (a) dielectric (type I) HMMs with ε// > 0 and εz < 0, and (b) metallic (type II) HMMs with ε// < 0 and εz > 0 38.
If ε// > 0 and εz < 0, the hyperbolic medium is called dielectric or Type I hyperbolic; its isofrequency surface and dispersion relation are shown in Figure I. 5 (a). If ε// < 0 and εz > 0, the hyperbolic medium is called metallic or Type II hyperbolic; its isofrequency surfaces and dispersion relation are shown in Figure I. 5 (b) [Cortes et al., Quantum nanophotonics using hyperbolic metamaterials; Rytov, Electromagnetic properties of a finely stratified medium].
Deep subwavelength information carried by evanescent waves cannot be transferred to the far field by a conventional lens (Figure I. 6(a)). A hyperlens, made of a strongly anisotropic metamaterial, can transfer the deep subwavelength information into the far field (Figure I. 6(b)): the evanescent waves from the object can become propagating waves. With the help of the hyperlens geometry, the waves gradually reduce their wavevector values along the propagation direction in the metamaterial, and thus the waves can continue to propagate even after leaving the hyperlens 40.
Figure I. 6 Light propagation through (a) a conventional lens and (b) a hyperlens. The blue curves and red curves represent propagating waves and evanescent waves, respectively [Zhang et al., Superlenses to overcome the diffraction limit].
1) The first type consists of metal/dielectric multilayers with subwavelength layer thicknesses, shown in Figure I. 11 (a), in which a coupling of surface plasmons occurs. 2) The second type consists of lattices of metallic nanowires, shown in Figure I. 11(b). For the nanowire geometry, the effective permittivities read:

Equation 1-25: ε// = εd·[(1+ρ)εm + (1−ρ)εd] / [(1−ρ)εm + (1+ρ)εd]

Equation 1-26: εz = ρεm + (1−ρ)εd

In Equation 1-25 and Equation 1-26, ρ is the volume fraction of metal, or equivalently the relative area occupied by the nanowires in an xy section of the medium. As for multilayer HMMs, the response of the medium can be tuned to different regimes (the 2 types of HMMs) and different amplitudes by varying the frequency and the characteristic size of the structure. Most nanowire HMMs are fabricated by electrochemical deposition of Ag or Au in porous alumina membranes.
I.3.1.2
The radius of gyration Rg of a polymer chain describes the size of the polymer coil. The square radius of gyration is the average squared distance of any point in the polymer coil from its mass center. The radius of gyration of an equivalent freely jointed polymer chain is given by Rg² = N·lk²/6, with N being the number of Kuhn segments and lk the Kuhn length, the Kuhn segment usually being approximated to a chemical segment. The polydispersity index PDI equals the ratio of the weight average molecular weight Mw to the number average molecular weight Mn. This index describes the width of the polymer molecular weight distribution.

Equation 1-28: PDI = Mw/Mn

When the PDI is 1.0, all the polymer chains have identical length. For real polymers, PDI is always larger than 1.0. Most polymers have a refractive index n between 1.3 and 1.7. They present a glass transition from a vitreous state (hard and rigid polymer) to a rubbery state (soft and flexible polymer), characterized by a temperature Tg. A useful parameter is the solubility parameter δ, depending on the chemical nature of the polymer segments, which is a quantification of the degree of potential interactions between the polymer and other species. It was simply defined by Hildebrand as the energy required to completely remove a molecule from its neighbors to infinite separation.
Figure I. 8 Schematics of different types of block copolymers
Figure I. 11 Microphase separation into (a) lamellar phase and (b) cylinder phase
Figure I. 12 Typical SAXS patterns observed for equilibrium structures formed in diblock copolymer melts128,129
conditions, for example by slowly evaporating the solvent. At the beginning of the process, the copolymer chains are diluted in a common (or neutral) solvent. As the solvent begins to evaporate slowly, the polymer chains get closer and unfavorable interactions occur. These repulsions between the blocks A and B cause the chains of copolymers to rearrange in order to minimize their contact with each other: the self-assembly begins. The microdomains are then gradually formed and propagate during the evaporation of the solvent until the solvent has completely evaporated. At the end, an ordered sample of lamellar morphology is obtained in the case of a symmetrical diblock copolymer (see Figure I. 15).
(see Figure I. 16).
Figure I. 16 Schematic illustration of the shaping of a symmetrical diblock copolymer by rapid evaporation of the solvent
Figure I. 18 Schematic representation of the alignment of a lamellar diblock copolymer film in symmetrical (a) and asymmetrical (b) configurations.
Figure I. 19 Illustration of the thin film topography formed by diblock copolymers as a function of time
block A is glassy (like poly(styrene)), there is no morphological transition. The solvent then selectively swells the B domains of the ordered phase of the A-B diblock copolymer without affecting the A domains or the curvature at the A/B interfaces. The A domains are then frozen in a state out of thermodynamic equilibrium and they keep their shape and size (see Figure I. 21).
dimension. At this scale, the physicochemical and electronic properties of a nanoparticle are significantly different from the properties of a bulk material of the same chemical composition. In this thesis work, we are interested in the specific case of gold nanoparticles. The first application of the optical properties of gold nanoparticles is found in the 4th-century Lycurgus Cup (see Figure I. 22), which is made of a mixture of Roman glass with gold nanoparticles. It shows a different color according to the lighting conditions.
Figure I. 22 Lycurgus Cup in (a) reflected and (b) transmitted light.[START_REF] Freestone | The Lycurgus cup-a roman nanotechnology[END_REF]
Figure I. 24 SEM images of gold nanoparticles synthesized by thermal annealing in the presence of poly(vinylpyrrolidone) at 60º (a) with 10 g.dm-3 concentration of gold salt, and at 95º (b) & (c) with 2 g.dm-3 & 88 g.dm-3 concentration of gold salt, respectively.110
Equation 1-35: ε̃Au = ε̃Au,interband + ε̃Au,intraband
Equation 1-36: ε̃Au,intraband = ε∞ − ωp² / (ω² + iγ0ω)
Several researchers 112,113 determined experimentally the values of the real (εr) and imaginary (εi) parts of the Au permittivity for wavelengths ranging from 200 to 2000 nm, with some differences between them. In the present thesis, we chose to use the data of Johnson and Christy 109 (JC), which is shown in Figure I. 25 (dots). In the visible red and near-infrared regions, the dielectric function is dominated by the intraband component, which can be well approximated by a free-electron Drude model (continuous lines in Figure I. 25), written according to Equation 1-36114,115. Here ω is the frequency of the photons of the incident beam, ε∞ is a constant which gives the permittivity of gold at very large frequencies and γ0 is the damping rate of gold, which will be discussed later. The plasma frequency ωp of the conduction electrons of gold is defined by ωp² = Ne·e²/(ε0·meff), with the density of electrons Ne, the elementary charge e, the permittivity of vacuum ε0 and the effective mass of an electron meff.
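A minimal sketch of the Drude term of Equation 1-36 follows, with energies expressed in eV; the parameter values below (ħωp ≈ 9.0 eV, γ0 ≈ 0.07 eV, ε∞ ≈ 9.8) are typical literature values assumed here for illustration, not the exact values fitted to the JC data in this work:

import numpy as np

def eps_drude(E_eV, eps_inf=9.8, E_p=9.0, gamma0=0.07):
    """Intraband (Drude) permittivity, Equation 1-36, with energies in eV:
    eps = eps_inf - E_p**2 / (E**2 + i*gamma0*E)."""
    E = np.asarray(E_eV, dtype=complex)
    return eps_inf - E_p**2 / (E**2 + 1j * gamma0 * E)

# visible / near-infrared range
energies = np.linspace(1.0, 3.0, 5)
for E, eps in zip(energies, eps_drude(energies)):
    print(f"E = {E.real:.2f} eV  eps = {eps:.2f}")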
Figure I. 25 Permittivity of gold taken from JC (dots) and compared with the Drude model approximation (continuous curves)
The deviation visible in Figure I. 25, which corresponds to the interband contribution, explains the golden yellow color of the bulk gold.
Figure I. 26 The mean free path of electrons in (a) bulk gold and (b) gold nanoparticles[START_REF] Tallet | Nanocomposites plasmoniques anisotropes à base de copolymères à blocs et de nanoparticules d'or[END_REF]
(Figure I. 26(b)). In Figure I. 27, the dotted lines and continuous lines represent the electrical permittivities in the case of bulk gold and gold nanoparticles (NPs), respectively. The dashed lines in Figure I. 27 represent the adjustment of the JC data by the Drude model, as mentioned before. The continuous lines correspond to the value of ε for nanoparticles of diameter 10.7 nm 122.
Figure I. 27 Permittivity of gold taken from JC (dotted curves), gold nanoparticles (NPs) (continuous lines) and the Drude model approximation (dashed curves).
Figure I. 29 (a) The position of the surface plasmon resonance band depends on the chemical composition of the particle surfactant shell, which changes the refractive index of the surrounding
Figure I. 30 The position of the surface plasmon resonance band depends on (a) 124,125 the size of the nanoparticles, with the color of suspensions of gold nanospheres for diameters ranging from 30 nm to 90 nm (lower plot, from left to right), and (b) 121,123,126 the shape of the nanoparticles (gold nanorods), with the color of suspensions of increasing dimensions (scale bars are 100 nm).
Figure I. 31 Real (left) and imaginary (right) part of the nanocomposite permittivity, which consists of 50 nm poly(vinyl alcohol) with 24% volume fraction of gold nanoparticles (diameter around 14 nm). The blue lines are experimental data, the red lines are fits with a Maxwell-Garnett model.
II.2.5 SAXS used for nanoparticles in solutions
II.3 Electron Microscopy
II.3.1 Transmission Electron Microscopy
II.3.2 Scanning Electron Microscopy
References
The polarization of light generally becomes 'elliptical' upon light reflection. Spectroscopic ellipsometry (SE) is an optical technique used for thin films which measures the change of polarization of the reflected (or transmitted) light as a function of the wavelength 2-5. The two fundamental values in the measurements are Ψ and Δ, which represent the modulus ratio and phase difference between the light waves known as p- and s-polarized waves, respectively (see Figure II. 1). These are defined with respect to the plane of incidence, which contains by definition the incident beam and the normal to the sample surface. The p-polarized light is a wave polarized parallel to the plane of incidence and s-polarized light is a wave polarized perpendicular to the plane of incidence (the 's' comes from the German word "senkrecht").
Figure II. 1 Schematic of the principle of ellipsometry
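As an illustration of how Ψ and Δ encode the p/s reflection ratio ρ = rp/rs = tan(Ψ)·exp(iΔ), here is a minimal sketch for the simplest possible case of a single air/substrate interface, using the Fresnel coefficients; the substrate index below is an assumed illustrative value, not a measured one:

import numpy as np

def psi_delta(n0, n1, aoi_deg):
    """Psi and Delta (degrees) for a single interface between media n0 and n1
    at angle of incidence aoi_deg, from the Fresnel coefficients rp and rs."""
    th0 = np.deg2rad(aoi_deg)
    th1 = np.arcsin(n0 * np.sin(th0) / (n1 + 0j))       # Snell's law, complex-safe
    rp = (n1 * np.cos(th0) - n0 * np.cos(th1)) / (n1 * np.cos(th0) + n0 * np.cos(th1))
    rs = (n0 * np.cos(th0) - n1 * np.cos(th1)) / (n0 * np.cos(th0) + n1 * np.cos(th1))
    rho = rp / rs                                        # rho = tan(Psi) * exp(i*Delta)
    return np.degrees(np.arctan(np.abs(rho))), np.degrees(np.angle(rho))

# assumed example: air over a silicon-like substrate at 70 degrees
print(psi_delta(1.0, 3.88 - 0.02j, 70.0))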
ultraviolet/visible/near infrared region. Measuring the change of polarization of a polarized light after reflection on a surface for a wavelength range, this technique then offers the possibility of determining the optical indices (n, k) of thin films as a function of the wavelength λ.
Figure II. 2 Experimental set-up of a phase-modulated ellipsometer (from the Horiba Scientific UVISEL model)
Figure II. 3 Schematics of a Rochon polarizer. The z-axis is the direction of light propagation.
A = 2πd(ne − no)/λ, where (ne − no) is the induced birefringence of the silica bar and d is the size of the piezoelectric bar. The applied stress is wavelength dependent. It is regulated with a modulation voltage, so that the modulation amplitude is kept constant.
Figure II. 4 Schematic of the Photoelastic Modulator.
The ellipsometric model, shown in Figure II. 5, takes into account the thickness and the optical indices of the two layers: above the semi-infinite silicon substrate, a first layer consists of the silica layer always present on the wafer, and a second layer is the sample to be studied, below the ambient surrounding medium (usually air). Depending on the sample, more layers may be present and included in the ellipsometric model. The model supposes homogeneous and planar layers.
Figure II. 6(a) as a function of the photon energy.
(ωoj² − ω² + iγjω), with ωoj the resonance frequency, fj an amplitude factor and γj the dissipation term. The goodness of fit is represented by the χ² value (Equation 2-12). High χ² values are usually indicative of a poor fit to the experimental data. If we measure and simulate the spectra of Is and Ic, then the χ² value is a comparison of n theoretically calculated pairs (Is th, Ic th) and n experimentally determined pairs (Is exp, Ic exp).
Figure II. 7 (a) Model used for analyzing the pure polymer layers (b) index of refraction n and absorption coefficient k extracted from experiment for polystyrene (PS) in blue and poly(2-vinylpyridine) (P2VP) in red.
Figure II. 8 shows the geometry of the SAXS. Measurements are commonly performed in transmission geometry, using a narrow, collimated, and intense X-ray beam of wavelength λ. The beam impinges on a sample and a two-dimensional
Figure II. 8 Schematic of the Small Angle X-ray Scattering (SAXS) geometry
Figure II. 10 Principle of scattering. X-rays are scattered with an angle 2θ. ki is the incident plane wave, ks is the scattered spherical wave and q is the scattering vector
(Equation 2-15) is related to the Fourier transform of the electron density distribution, noted ρ(r), where the electron density is ρel = NA·d·Σk nk·Zk / Σk nk·Mk, with the Avogadro number NA, the density d, the atomic number Z, the molar mass M and the number n of atoms k.
Equation 2-16: A(q) = A0·rel·P(θ)·(e^{ikR}/R)·∫V ρ(r)·e^{iq·r} dr
where A0 is the amplitude of the incident wave, rel is the classical electron radius, P(θ) is a constant factor related to the polarization of the incident wave, ρ(r)dr is the number of electrons at position r in a volume element dr and R is the distance from the scattering volume V to the observation point (detector).
… : √4 : √7 : √9 : √11 : …
Cubic bcc: 1 : √2 : √4 : √5 : √6 : …
Cubic fcc: 1 : √4/3 : √8/3 : √11/3 : √12/3 : √16/3 : …
Cubic gyroid: 1 : √4/3 : √7/3 : √8/3 : √10/3 : √11/3 : …
Depending on the nature of the microdomain order, the scattering peaks exhibit a specific spatial relationship (see Table
Equation 2-19: P(q) = F²(q, R) = [Δρ·Vsp·3(sin(qR) − qR·cos(qR))/(qR)³]²
by a Gaussian function (see Equation 2-21).
Equation 2-21: D(R, R0, σ) = e^(−(R−R0)²/(2σ²))
Figure II. 11 shows an example of experimental data (EXP) fitted with the monodisperse model (Equation 2-22) with R = 4.2 nm and the polydisperse model (Equation 2-24 and Equation 2-23) with a Gaussian distribution (Equation 2-25 and Figure II. 11 inset) of D = 2R = 8.4 nm and σ = 1.2 nm. The Equation 2-24 used in the polydisperse model was determined by Denis Bendejacq 13.
Figure II. 11 Illustration of an example of SAXS experimental data (EXP) and the fits by monodisperse (Pmono) and polydisperse (Ppoly) form factors for spheres. The diameter is 8.4 nm (2R), the standard deviation is 1.2 nm.
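A minimal numerical sketch of the polydisperse sphere model used above (Equations 2-19 and 2-21): the monodisperse sphere form factor is averaged over a Gaussian distribution of radii. The values R0 = 4.2 nm and σ = 0.6 nm (half of the 1.2 nm quoted on the diameter) are just those of the example of Figure II. 11; the contrast prefactor Δρ is omitted.

import numpy as np

def F_sphere(q, R):
    """Scattering amplitude of a homogeneous sphere (contrast factor omitted)."""
    qR = q * R
    return (4.0 / 3.0) * np.pi * R**3 * 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3

def P_poly(q, R0, sigma, nR=201):
    """Sphere form factor averaged over a Gaussian distribution of radii."""
    R = np.linspace(max(R0 - 4 * sigma, 1e-3), R0 + 4 * sigma, nR)
    D = np.exp(-(R - R0) ** 2 / (2 * sigma ** 2))
    D /= np.trapz(D, R)                                   # normalize the distribution
    return np.array([np.trapz(D * F_sphere(qi, R) ** 2, R) for qi in q])

q = np.linspace(0.05, 5.0, 400)                            # nm^-1
I_model = P_poly(q, R0=4.2, sigma=0.6)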
Figure II. 12 Different signals used for imaging of a specimen in electron microscopy (from the handbook of the JSM6700F published by JEOL LTD.)
In Figure II. 13, the dark parts correspond to the gold-particle-rich layers and the light parts correspond to the polystyrene layers. The transmission electron microscope present at the CRPP is a Hitachi H600, with an acceleration voltage of 75 kV, for magnifications ranging from 700 to 300,000 times. A CCD camera takes digital pictures. A high-resolution transmission electron microscope (HR-TEM), located at the Bordeaux Imaging Center at the Bordeaux University, was also used to study nanometer-scale samples more precisely. The HR-TEM is a Hitachi H7650, with an acceleration voltage ranging from 80 to 120 kV, for magnifications ranging from 4,000 to 600,000 with a resolution of approximately 1 nm.
Figure II. 13 Multilayer of polystyrene and nanocomposite layers (consisting of gold nanoparticles and poly(2-vinylpyridine)) imaged by TEM
Figure II. 14 Schematic of backscattered electron imaging
Figure II. 15 SEM images of the side of a lamellar film consisting of polymer layers and nanocomposite layers (polymer with gold nanoparticles) under (a) SEI and (b) BSEI, at the same position.
III.1.1
Figure III. 1 Scheme of the spin-coating process
(see Figure III. 2). We see that the polymer is deposited homogeneously on the silicon wafer. As the concentration of the polymer solution increases, the colors change markedly, which indicates the increase of the film thickness. The color on the surface of the films is caused by interference of the light reflected from the upper and lower surfaces of the films.
Figure III. 2 Films with different thicknesses under the optical microscope. (a) surface of the silicon wafer; (b) surface of the film made from 0.8 wt.% PS-P2VP in toluene; (c) surface of the film made from 1.5 wt.% PS-P2VP in toluene; (d) surface of the film made from 3.0 wt.% PS-P2VP in toluene; (e) surface of the film made from 6.0 wt.% PS-P2VP in toluene; (f) surface of the film made from 10.0 wt.% PS-P2VP in toluene.
Figure III. 4 (b) and (c) show Is and Ic as a function of the photon energy for various film thicknesses, at AOI=55º. We can see that as the concentration of polymer in the solution increases, the number of oscillations of the curves increases, which demonstrates the increase of the film thickness.
Figure III. 3 Model built for extracting the thickness of the studied copolymer films
The thicknesses of the samples were extracted from the VASE data with the model shown in Figure III. 3. The measured and adjusted Is and Ic at different angles of incidence (AOI) are shown in Figure III. 7. Figure III. 7(g) evidences the slight shift of the Is oscillations with the photon energy, which is due to the slightly different thicknesses among the nine samples.
Figure III. 6 The nine designed samples under the microscope.
Figure III. 7 Ellipsometry results for the nine designed samples. Dots are experimental data and solid lines are fitting curves; (a) and (b) for AOI=60º; (c) and (d) for AOI=65º; (e) and (f) for AOI=70º;
Figure III. 9 Schema of the TEM sample preparation by the ultramicrotomy technique
Comparing Figure III. 10 with the gold-loaded structure of Figure III. 11(c), we can see that the gold nanoparticle loading preserves the structure. The other copolymer morphologies were contrast-enhanced by the gold nanoparticles, and are shown in Figure III. 11 and Figure III. 12.
In Figure III. 12, a half-layer of P2VP attaches to the substrate and an asymmetric lamellar film is formed. The bilayer thicknesses d measured by SEM and TEM are listed in Table III.
Figure III. 13 Example of the SAXS analysis for the PS34k-block-P2VP18k copolymer. (a) Experimentally measured intensity as a function of the scattering vector q in semi-log coordinates; (b) plot of I·q4 as a function of q4. The red line is the linear fit at high q (see Equation 3-9), giving the background, here B=0.92419; (c) background-corrected scattered intensity in semi-log coordinates.
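A minimal sketch of this background-correction step (assuming, as in Equation 3-9, that a flat background B makes I·q⁴ linear in q⁴ at high q, so B is the slope of that plot), for data already azimuthally averaged into arrays q and I:

import numpy as np

def porod_background(q, I, q_min=2.0):
    """Estimate the flat background B from the high-q region (q > q_min, nm^-1)
    by a linear fit of I*q**4 versus q**4; the slope is B, the intercept is the
    Porod constant."""
    mask = q > q_min
    slope, intercept = np.polyfit(q[mask] ** 4, (I * q ** 4)[mask], 1)
    return slope

# usage (q, I from the azimuthal integration of the 2D pattern):
# B = porod_background(q, I)
# I_corrected = I - B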
From the TEM (see Figure III. 12) and SEM images (see Figure III. 11), and in agreement with the copolymer compositions, we know that PS102k-block-P2VP96k, PS25k-block-P2VP25k, and PS8.2k-block-P2VP8.3k are lamellar phases. The sample of largest molecular mass, PS102k-block-P2VP96k, presents a series of Bragg peaks characteristic of a lamellar order at positions 1:2:3:4 and corresponding to a period of
Figure III. 14 SAXS profiles depending on the polymer degrees of polymerization, in semi-log coordinates. The curves are vertically shifted for clarity
a = √(fPS·aPS² + (1 − fPS)·aP2VP²)
For block copolymers with a volume fraction approximately equal to 0.5, the morphologies are lamellar phases based on SAXS (Figure III. 14) and electron microscopy images (see Figure III. 11 and Figure III. 12) (PS102k-block-P2VP97k, PS25k-block-P2VP25k and PS8.2k-block-P2VP8.3k). As shown in the plot of Figure III. 15, the period measured by SAXS is very consistent with the theory of Equation 3-10, while the TEM and SEM results are consistent only for the smaller diblock. This is very likely due to the fact that the presence of the gold increases significantly the measured dimensions for the longest copolymer films.
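As a rough numerical sketch of the period prediction (Equation 3-10 is not reproduced above; the usual strong-segregation scaling L ≈ 1.10·a·χ^(1/6)·N^(2/3) is assumed here, with illustrative values of the segment lengths, χ and N, not the ones fitted in this work):

import numpy as np

def mean_segment_length(f_PS, a_PS, a_P2VP):
    """Average statistical segment length of the diblock (mixing rule above)."""
    return np.sqrt(f_PS * a_PS ** 2 + (1.0 - f_PS) * a_P2VP ** 2)

def lamellar_period(N, chi, a):
    """Strong-segregation estimate of the lamellar period (assumed form of Eq. 3-10)."""
    return 1.10 * a * chi ** (1.0 / 6.0) * N ** (2.0 / 3.0)

# illustrative numbers (assumed): a ~ 0.68 nm for both blocks, chi ~ 0.1, N ~ 480
a = mean_segment_length(f_PS=0.5, a_PS=0.68, a_P2VP=0.68)
print(lamellar_period(N=480, chi=0.1, a=a))   # period in nm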
For this, we studied mixtures of poly(2-vinylpyridine) homopolymer with PS-block-P2VP to increase the layer thickness of P2VP, which is helpful for fine-tuning the layer thickness (see Figure III. 16) and may also affect the gold growth in the systems.
Figure III. 16 Definition of the layer thickness: 1 layer of P2VP + 1 layer of PS.
Figure III. 17 Cross-sections of polymer mixtures with different P2VP fractions under SEM in BSEI. (a) 0 wt.% P2VP in the polymer mixture; 10 layers measure 281 nm, so 1 layer is 28.1 nm; (b) 5 wt.% P2VP in the polymer mixture; 8 layers measure 251 nm, so 1 layer is 31.4 nm; (c) 10 wt.% P2VP in the polymer mixture; 8 layers measure 269 nm, so 1 layer is 33.6 nm.
Following the SAXS data treatment described in section 2.2 "SAXS", Figure III. 18 shows the final curves for the different concentrations of P2VP. As the weight fraction of P2VP grows from 50.0% to 57.5% (0 wt% to 15 wt% added P2VP), the q0 peak shifts to smaller values, which indicates an increase of the bilayer thickness in the lamellar phase. If the fraction of P2VP increases to 65% (30 wt% added P2VP), the lamellar morphology is lost.
Table III. 10 shows the period sizes of all the results. As the P2VP weight fraction increases, both the SAXS and SEM measurements indicate an increase of the bilayer thickness following a similar trend.
Figure III. 19 shows the period size as a function of the P2VP fraction in the mixture. The range of variation of the period size is very limited (3 nm), which is probably due to an incomplete mixing of the homopolymer in the diblock nanostructure.
Introduction
Our goal is to fabricate the target nanostructured multilayers (Figure IV. 1) consisting of alternating nanocomposite layers and polymer layers. We already know how to obtain controllable and aligned lamellar structures (see Chapter III), and we now want to produce similar structures with metallic particles inside some of the layers. Following the work of Clémence Tallet (PhD student, 2010-2013) 1, we decided to study in situ synthesis processes, because the introduction of pre-formed nanoparticles in such lamellar polymer structures is often restricted to small nanoparticle sizes and small quantities. Figure IV. 2 shows, as an example, the nanostructure of a PS-PMMA blend with gold NPs (A: 0.05 volume % and B: 0.58 volume %) covered with a small poly(styrene) ligand.
Figure IV. 1 Target structure. NC layer stands for polymer with metallic nanoparticles. Poly layer stands for a layer consisting of pure polymer.
Figure IV. 4 shows the surface of sample OT2 under optical microscopy after spin-coating (a), after solvent annealing (b), and after thermal annealing (c). From the images we can see that after the solvent annealing, the surface of the sample shows globular structures with dark colors, which are preserved after thermal annealing, while the colors become lighter, which may be caused by a reflection effect of Au(0).
Figure IV. 4 The top view of sample OT2 under the microscope. (a) After spin-coating. (b) After solvent annealing. (c) After thermal annealing.
Figure IV. 5 Side view of samples (a) OT1 (Lamellar/Solvent annealed), (b) OT2 (Lamellar/Solvent+thermal annealed), (c) OT3 (Cylindrical/Solvent annealed), (d) OT4 (Cylindrical/Solvent+thermal annealed), under SEM at low magnification. Left images are BSEI while the right images are SEI. In BSEI images, white parts correspond to gold domains.
Figure IV. 6 Side views of samples (a) OT1, (b) OT2, (c) OT3, (d) OT4, under SEM at high magnification. Left images are BSEI while the right images are SEI. In BSEI images, white parts correspond to gold domains.
Thin (ca. 300 nm) and thick (ca. 700 nm) films with various values of N (N=0, 5, 10, 20 and 30; N=0 stands for the pure polymer used for the treatment of the SAXS data) were placed in capillaries and set directly in the beam in the SAXS sample chamber. The intensity is accumulated for 4 h. The description of the SAXS principle was given in Chapter II.2.2. Before analyzing the scattered intensities obtained after azimuthal integration of the 2D detector spectra, we corrected the data for background scattering following the method detailed in III.2.2. Based on Equation 3-9, the background B is obtained from the slope of a linear fit of I(q)·q4 versus q4 in the high-q region.
Figure IV. 8 An example of SAXS data correction for a film thickness of 300 nm and a value of N=20. I0' stands for the polymer data without gold, I' stands for the experiment after background correction, and Ic stands for the data corrected from the pure polymer.
Equation 4-1: Ic = I' − I0'
where I' = Isample − Bsample is the background-corrected intensity for the sample (N≠0) and I0' = Ipolym − Bpolym is the background-corrected intensity for the pure polymer solution (N=0). For example, Figure IV. 8 shows the SAXS data after correction with background
As shown in Figure IV. 9, each cycle increases the purple color of the film, which is related to the localized surface plasmon resonance of the nucleated gold nanoparticles. When the cycle number increases, the film keeps the islands and holes on the free surface, which means that this gold loading process does not perturb significantly the structure of the films.
Figure IV. 10 Low-magnification SEM BSEI images of (a) thick (~700 nm) and (b) thin (~300 nm) films at different values of N.
Figure IV. 11 High-magnification SEM BSEI micrographs of the section of (a) thick (~700 nm) and (b) thin (~300 nm) films at different values of N.
layers, sometimes in large numbers as in the examples of Figure IV. 11(a) (almost 10 bilayers, D=9.5d0) and (b) (almost 12 bilayers, D=11.5d0) (see Figure IV. 12). TEM micrographs of the gold particles extracted from the films after different values of N were analyzed using the software ImageJ (see Figure IV. 13). We extracted the distribution of the diameters of the particles in the image and drew a histogram (with a step size bin=2 nm, and a count of between 100 and 200 nanoparticles). The fits of these histograms with Gaussian distribution functions (Equation 4-2) are displayed in Figure IV. 14.
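A minimal sketch of this histogram-fitting step, assuming Equation 4-2 is a standard Gaussian of mean μ and standard deviation σ, applied to diameters measured with ImageJ:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(d, A, mu, sigma):
    """Gaussian used to fit the size histograms (assumed form of Equation 4-2)."""
    return A * np.exp(-(d - mu) ** 2 / (2.0 * sigma ** 2))

def fit_size_histogram(diameters_nm, bin_width=2.0):
    """Bin the diameters (bin = 2 nm as in the text) and fit a Gaussian."""
    bins = np.arange(0.0, diameters_nm.max() + bin_width, bin_width)
    counts, edges = np.histogram(diameters_nm, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), diameters_nm.mean(), diameters_nm.std()]
    (A, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)
    return mu, abs(sigma)

# usage: mu, sigma = fit_size_histogram(np.array([...]))  # diameters in nm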
Figure IV. 12 TEM micrographs of the gold nanoparticles extracted from the gold-loaded films with the values of (a) N=5 (b) N=10 (c) N=20 and (d) N=30.
Figure IV. 14 Histograms of the size distributions of the gold particles observed on TEM images. Bin=2 nm. The count stands for the number of particles analyzed. Red lines are the fitted Gaussian distributions with the mean radii μ and the standard deviations σ.
(Figure IV. 12(d) and Figure IV. 13(d)), and we suspect that some of the bigger observed objects come from the uncontrolled gold deposition on the surface of the films (see the irregular particles on the surface for N=20 and N=30 in Figure IV. 11), making a proper analysis difficult. In conclusion, the nanoparticle size can be well controlled around 10 nm.
Figure IV. 15 and Figure IV. 16 show the experimental results and the fitting by a form factor of spheres modulated by a Gaussian distribution of sizes, for various values of N (5, 10, 20, 30) and two different film thicknesses, ca. 300 nm (Figure IV. 11(b)) and 700 nm (Figure IV. 11(a)).
Comparing the thin (a) and thick (b) (or (c) and (d)) films for various values of N, we can see that the thickness of the film has no influence on the nanoparticle sizes and distributions. When N=5 (Figure IV. 15(a) and (b)), the mean value of the gold nanoparticle diameter is μ=5.2 nm and the standard deviation σ=1.4. When N=10 (Figure IV. 15(c) and (d)), μ=5.4 nm and the standard deviation σ=2.4. When N=30 (Figure IV. 16(c) and (d)), μ=6.2 nm and σ=1.8. As the value of N increases, the nanoparticle size increases but is limited to 7 nm, which is also confirmed by the TEM results (see Figure IV. 12), showing that the nanoparticle size increases but is limited to 20 nm. From the plot of the gold nanoparticle size, measured by TEM and SAXS, as a function of the N value (see Figure IV. 17), we can see that the different thicknesses of the films (400 nm difference) have no influence on the size of the gold nanoparticles with this gold loading method. The size of the gold nanoparticles increases as the impregnation process continues and could be confined to ca. 8 nm, which is a good size to obtain a nanoparticle plasmon resonance. Even though both measurements give large error bars, we can thus propose that the gold nanoparticle size can be well controlled by the bilayer thickness in the film (~15 nm). The process appears to be the following: the first several cycles of impregnation deposit small nanoparticles in the P2VP layers, which then act as gold seeds, and the following loading cycles grow the gold NPs.
Figure IV. 17 Comparison of the mean nanoparticle diameter as a function of N, as obtained from TEM and SAXS.
In Figure IV. 18, the pseudo-permittivity <ε> measured on films without gold (N=0) presents fringes regularly spaced on the photon energy scale and closer to one another for the thicker films, which we interpret as interference fringes related to the film thickness. We can see that as the value of N increases, the fringes present a red shift and the absorption becomes stronger in the energy range 0.9 eV-2.01 eV (λ ranging from 617 nm to 1377 nm), which is caused by the loading of gold in the films. The red shift can either be caused by an increase of thickness or by a change in the optical index due to the loading of gold. From Figure IV. 11, we can see that the thickness increase as N increases is limited. So the shift of the <ε> fringes is more probably caused by the increase of the volume fraction of gold nanoparticles. It was shown earlier that a single layer of such a disordered nanocomposite of
The well-aligned copolymer films were immersed in solutions of silver precursor either in water or in water/ethanol (1/1 in volume), whereas the solvent for the gold precursor was ethanol. From the side-section TEM image of ultramicrotomed films with N=5 cycles, Figure IV. 19, we can see that when the loading process is made in water, the silver precursor cannot penetrate the films and accumulates at the surface. From the side-section SEM micrographs (Figure IV. 20) of films with N=20 cycles, we can see that silver nanoparticles were selectively synthesized in the P2VP domains, and that the structure of well-aligned multilayers was preserved. In the case of the water solvent (Figure IV. 20 (a)), the reactant could apparently not penetrate the whole film and the silver synthesis was limited to the top 4 bilayers. Confirming the observations made on the film on Aclar observed by TEM (Figure IV. 19(a)), we thus propose that the silver precursor dissolved in water cannot penetrate the bilayers and that the silver particles present in the top 4 layers resulted from the penetration of precursors through defects in the films. In the water/ethanol case, the silver precursors diffused in the whole film, which indicates that the reactant permeates through both the PS and the P2VP layers. So the solvent used for the impregnation of the metallic precursor is the
Figure IV. 19 Cross-section of the lamellar structure loaded with silver nitrate in water solution with N=5.
Figure IV. 21(a), corresponding to the film with only 4 layers of nanocomposite (see Figure IV. 20(a)), shows a gradual increase of the amplitude and a shift of the curve fringes, which indicates the gaining of silver nanoparticles either in quantity or in size. Figure IV. 21(b), corresponding to the film with a multilayer of nanocomposite spreading over the whole film (see Figure IV. 20(b)), presents an increase in amplitude and a shift of the curves which is quite abrupt as soon as N=1, in the frequency region around 3 eV, which indicates the gaining of silver nanoparticles. For N≥11, the curves do not evolve anymore, which probably indicates a saturation of the silver in the layers. The films produced with the silver precursor loaded from two different solvent systems show significantly different signals. This may be caused both by the difference in size or dispersion of the silver particles in the two systems (smaller in water and larger in water/ethanol) and by the difference in the location of the silver NPs in the multilayers.
IV.3 Study of the volume fraction of Au NPs in the P2VP layers
IV.3.1 Introduction
As we explained (Chapter II), the multilayer templates are fabricated in a controlled way, and gold nanoparticles are selectively synthesized inside the P2VP layers (Chapter IV.2). The extraction of accurate optical responses from the spectroscopic ellipsometry data, which will come in Chapter VI, critically relies on the precise knowledge of the sample structure, and in particular of the gold loading. The quantities of gold nanoparticles in the layers are quite small, which is difficult to measure by conventional methods. In this part, we are going to use a Quartz Crystal Microbalance as a measurement tool to 'kinetically' study the volume fraction of Au upon loading in the films.
IV.3.1.1 Introduction of QCM-D
Quartz Crystal Microbalance with Dissipation monitoring (QCM-D) 8 is an extremely sensitive sensor capable of measuring mass changes in the nanogram range. It provides novel information regarding structural (viscoelastic) properties of adsorbed layers. In the past decade, many QCM 5,9 studies were applied to solution-surface interface systems. This technique is based on the piezoelectricity of the quartz crystal. The piezoelectric effect (Figure IV. 22) is a property of certain materials. When a piezoelectric material is placed under mechanical stress, the center of the positive and negative charges in the material shifts, which results in an induced electric field. Reversely, when a piezoelectric material is placed under an external electric field, it either stretches or compresses.
Figure IV. 22 Schematic illustration of the piezoelectric effect. (a) Piezoelectric effect: induced electric field caused by mechanical stress; (b) reverse piezoelectric effect: compression or stretching caused by an external electric field.
Δf = −2f²Δm / (A√(ρq·μ))
where Δf is the resonant frequency decrease (Hz), f is the intrinsic crystal frequency, Δm is the elastic mass change (g), A is the electrode area (cm²), ρq is the density of quartz (2.65 g/cm³) and μ is the crystal shear modulus (2.95 × 10¹¹ dyn/cm²)
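A minimal sketch of this Sauerbrey conversion, valid for a rigid film; the electrode area below (0.785 cm², i.e. a 1 cm diameter active area) is an assumed illustrative value:

import math

def sauerbrey_mass(delta_f_hz, f0_hz=4.95e6, area_cm2=0.785,
                   rho_q=2.65, mu_q=2.95e11):
    """Elastic mass change (g) from the resonant frequency decrease (Hz),
    using delta_f = -2 f0^2 delta_m / (A sqrt(rho_q mu_q))."""
    return -delta_f_hz * area_cm2 * math.sqrt(rho_q * mu_q) / (2.0 * f0_hz ** 2)

# example: a -20 Hz shift (already divided by the overtone number)
print(sauerbrey_mass(-20.0))   # mass gain in grams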
Equation 4-4: D = Elost / (2π·Estored)
where Elost is the energy lost during one oscillation cycle and Estored is the total energy stored in the oscillator. Figure IV. 23 shows schematically the oscillations for a viscoelastic (green) and a rigid (red) film when the driving voltage is turned off.
Silica-coated quartz sensors (SiO2) with diameter 14 mm and thickness 0.3 mm were used in all experiments. They have a fundamental frequency of 4.95 MHz ± 50 kHz and were purchased from Biolin Scientific. Note that the diameter of the bare surface is 10 mm because the films were coated in the center of the sensor (see Figure IV. 24).
6.0 weight% solutions were stirred overnight at room temperature to ensure complete dissolution. Two different substrates, a silicon wafer and a Q-sensor (Figure IV. 26), were used in the experiments for comparison purposes. 60 μl of three different copolymer solutions were spin-coated on the substrates to form films of three different thicknesses (see Table
Figure IV. 24 Q-sensor with coated silica (from Biolin Scientific)
Concentration of polymer solution Thickness of the film
analyzed simultaneously. The VASE data for one studied film at N=5 are shown in Figure IV. 25, where the ellipsometric parameters Is and Ic are plotted for angles of incidence AOI=50º, 55º, 60º, 65º and 70º. The analysis of such data will be given in the following paragraph IV.3.3.
Figure IV. 26 Illustration of the QCM-D and Q-sensor. The Q-sensors we used consist of a quartz disk with metallic electrodes on both sides. Once an alternating voltage is applied, the crystal vibrates in its fundamental thickness-shear mode.
Figure IV. 29 Example of the dissipations measured for films at various N up to N=10.
The SE data measured on the bare silicon substrate were analyzed using the Si and SiO2 tabulated dielectric functions and yielded a thickness value (2.0 nm) for the native silica layer on the surface, which was fixed in the further analysis. The model (Figure IV. 30 (b)) we used is a multilayer stack using the information on the number and thicknesses of the layers from the thin-section TEM images (an example is displayed in Figure IV. 30(a)). We assume that the layers of gold-polymer nanocomposite can be described as effective media by the Maxwell-Garnett formula (see Chapter II.1, Equation 2-9) and the volume fraction f of Au nanoparticles is the only unknown parameter in the model. The fitting procedure was carried out both on the whole measurement wavelength range (320 nm-1934 nm) and also restricted to the UV range (320 nm-400 nm).
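A minimal sketch of this description (Equation 2-9 is not reproduced here; the standard Maxwell-Garnett mixing formula for spherical inclusions is assumed), which turns a trial gold volume fraction f into an effective NC-layer permittivity; the numerical values in the example are illustrative only:

def maxwell_garnett(eps_np, eps_host, f):
    """Effective permittivity of a host (P2VP) containing a volume fraction f of
    spherical inclusions (Au NPs), standard Maxwell-Garnett mixing formula."""
    beta = (eps_np - eps_host) / (eps_np + 2.0 * eps_host)
    return eps_host * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)

# example with assumed values: Au NP permittivity near resonance and eps_P2VP ~ 2.4
print(maxwell_garnett(eps_np=-8.0 + 1.8j, eps_host=2.4, f=0.10))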
Figure IV. 31 Volume fraction of Au nanoparticles in the composite layers as a function of the number of impregnation cycles.
Figure IV. 31 shows the volume fraction of gold nanoparticles extracted from the SE data using the fits presented above. In order to confirm these data, the results from another technique, QCM-D, are compared in the following.
IV.3.3.2 Results of QCM-D
Using the QCM-D procedure described in IV.3.2, kinetic measurements were performed in order to determine the mechanism of gold impregnation, in particular the mass loading and the volume fraction of particles introduced by each cycle of impregnation.
Figure IV. 33 Primary QCM measurement for N=5; the HAuCl4-ethanol solution is 3.0 weight%, expressed as the relative frequency for the fifth overtone vs time. The measurement starts in water. Solvent exchange between water and ethanol was performed twice to eliminate the frequency changes due to the P2VP swelling. Then the frequency in water is the initial frequency. Gold salt in ethanol is flushed for 20 min and then rinsed with water until the measured frequency reaches stabilization. Δf stands for the change of frequency caused by the change of mass.
Figure IV. 33 shows an example of the primary QCM measurement (N=5 in a 3.0 weight% solution of HAuCl4 in ethanol). The change of frequency Δf is measured
As discussed in IV.3.2, the gained mass consists of Au and Cl in the ratio 1:3, based on the reaction between HAuCl4 and pyridine. Based on Equations 4-6 to 4-8, the change of mass is calculated for the different samples (see Figure IV. 34).
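A minimal sketch of such a conversion (Equations 4-6 to 4-8 are not reproduced here; we only assume, as stated above, that the gained mass contains Au and Cl in a 1:3 molar ratio, and that the film volume is thickness × area; the area in the example is an assumed value):

def gold_from_qcm(delta_m_g, film_thickness_cm, area_cm2,
                  M_Au=196.97, M_Cl=35.45, rho_Au=19.3):
    """Gold mass (g) and Au volume fraction in the film, assuming the gained mass
    is Au + 3 Cl (1:3 molar ratio) and the film volume is thickness * area."""
    m_Au = delta_m_g * M_Au / (M_Au + 3.0 * M_Cl)   # mass of Au in the gained mass
    v_Au = m_Au / rho_Au                            # cm^3 of gold
    f_Au = v_Au / (film_thickness_cm * area_cm2)    # volume fraction in the film
    return m_Au, f_Au

# example: 1.0 microgram gained on a 160 nm film over an assumed 0.785 cm^2 area
print(gold_from_qcm(1.0e-6, 160e-7, 0.785))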
Figure IV. 34(a) shows the cumulative gold mass gain in the film (thickness 160 nm) as a function of the number of impregnation cycles for three different concentrations of gold salt in ethanol solution (1.0 weight%, 3.0 weight%, and 6.0 weight%) with a flow rate of 0.2 ml/min. The dashed lines are the differential gold gains for each cycle of the impregnation process. The first impregnation cycles lead to the adsorption of large quantities of gold, and the differential gained mass of gold then decreases as the number of impregnation cycles increases. After 5 cycles, the differential gain of gold mass remains constant. We can recall the results of IV.2.3.2, showing that the diameter of the gold NPs increases as the number of impregnation cycles increases (Figure IV. 17). We can then suggest that the P2VP layers get loaded with many gold seeds
Figure IV. 35 Schematic illustration of the gold loading process. The left part is the flow of the gold salt solution in the QCM cell; the gold salts stay in the P2VP layer. The right part shows the mechanism of the gold loading: gold nanoparticles with small diameters are obtained in the first several cycles of the impregnation process, and then the mass gain is caused by the growth of the nanoparticle diameter.
As we mentioned in the last chapter, the Au NPs are well introduced into the P2VP layers of the lamellar phase. However, irregular Au NPs are also synthesized on the upper surface of the films for high values of N, and the first two layers are thicker than the other layers (Figure V. 1). These structural irregularities are likely to influence the optical properties of the films, and we have studied the possibility of avoiding them.
Figure V. 1 Cross-section of the film under SEM in SEI (secondary electron image) and BSEI (backscattered electron image). The sample was fabricated with PS25000-block-P2VP25000 and the loading process was repeated for N=30.
V.1.1
V.1.1.1 Introduction
Figure V. 2 The models used for analysis. (a) Model 1: the whole film and silicon wafer are assumed to form an effective bulk medium. (b) and (d): Model 2 for the films fabricated in the water system and in the water/ethanol system, respectively. They are built based on the detailed film nanostructure obtained from the cross-section SEM images, as schematized in (c) for the water system and (e) for the water/ethanol system. PS stands for pure polystyrene and NC for the nanocomposite composed of gold nanoparticles and poly(2-vinylpyridine), of varying composition when N increases. The trihedral in (c) indicates the ordinary ((x,y) or //) and extraordinary (z) directions. The permittivity ε is separated into ε// and εz.
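A minimal sketch of the uniaxial description of such a PS/NC lamellar stack; here the standard thin-layer effective-medium expressions are assumed (a thickness-weighted average for ε// and a thickness-weighted harmonic average for εz), and the numerical values in the example are illustrative only:

def lamellar_uniaxial(eps_NC, eps_PS, d_NC, d_PS):
    """Ordinary (in-plane) and extraordinary (z) permittivities of a PS/NC
    lamellar stack from the layer permittivities and thicknesses."""
    w_NC = d_NC / (d_NC + d_PS)
    w_PS = 1.0 - w_NC
    eps_par = w_NC * eps_NC + w_PS * eps_PS              # in-plane average
    eps_z = 1.0 / (w_NC / eps_NC + w_PS / eps_PS)        # out-of-plane (harmonic)
    return eps_par, eps_z

# example with assumed values: eps_PS ~ 2.5, eps_NC from a Maxwell-Garnett fit
print(lamellar_uniaxial(eps_NC=4.0 + 1.2j, eps_PS=2.5, d_NC=15.0, d_PS=16.0))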
Figure V. 3 shows the side view of the samples prepared with NaBH4 in 3 different solvents: water, water/ethanol = 1/5 in volume, and methanol. In the SEM images, the white parts correspond to the gold-rich areas and the dark parts correspond to the pure polymer domains (PS). The loading process goes well in the aqueous system (Figure V. 3, upper). The gold NPs are well introduced into the P2VP layers and appear homogeneously distributed. In the water/ethanol system (Figure V. 3, middle), the thickness of the film is increased and the gold salts were reduced to gold within the P2VP layers. However, the film is inhomogeneous and the gold particles are obviously bigger than those in the aqueous system. In methanol (Figure V. 3 (a), below), only a few gold particles were synthesized. From Figure V. 4 (a), compared to the samples fabricated in water and methanol, we can see that the thickness of the film dipped in the water/ethanol solution increases significantly, by almost 200 nm, which is likely due to the swelling of the P2VP chains in ethanol. Concerning the water/ethanol system, since there is a large amount of solvent in the layers, several effects can be predicted: (1) a large amount of the reducing agent NaBH4 enters the layers; (2) more particles grow with little binding to P2VP (as we mentioned in Chapter I, gold nanoparticles also bind with hydroxyl groups, even though this ligand is weaker than the one with amine groups); (3) since ethanol can easily penetrate the multilayers, the particles bound with ethanol have more mobility in the layers. All of this results in larger gold particles in the layers and an increase of the layer thickness.
Figure V. 4 (b) and (c) show the TEM images of Au particles in the samples after the film has been dissolved and the resulting NPs have been recovered on a TEM grid, and the distribution of particles as histograms, respectively. In Figure V. 4 (b) and (c), we can see that the Au particles in the film reduced in water are spheres of diameters ranging from 1 to 15 nm. In the case of the film reduced in water/ethanol, the Au particles are irregular and their sizes range from 1 to 230 nm. Finally, the few Au particles in the film reduced in methanol are spheres of diameter ranging from 4 to 46 nm.
Figure V. 3 SEM images of sample cross-sections fabricated in different solvents (N=20): water, water/ethanol=1/5 (volume) and methanol.
Figure V. 4 (a) SEM images of sample cross-sections at high resolution. (b) TEM images of Au NPs in samples synthesized in situ in 3 different solvents (c) the distribution of particles as histograms. Bin=20.
Figure V. 5 Results of Model 1. Real (upper plot) and imaginary (lower plot) parts of the pseudo-dielectric function ‹ε› of the whole sample from the SE study for different values of N=3, 6, 9, 12, 15, 18 and 20. As the value of N increases, the color of the line gets gradually lighter. (a) (b) (c) are Au salts reduced in the water system, the water/ethanol=1/5 (in volume) system and methanol, respectively. The resonance amplitude varies as N increases, due to the increasing introduction of plasmonic NPs.
Figure V. 5 shows the primary results of Model 1: plots of the real (upper plot) and imaginary (lower plot) parts of the apparent dielectric function ‹ε› of the whole sample as a function of the photon energy. The reactions in the water system (Figure V. 5 (a)) and the water/ethanol system (Figure V. 5 (b)) are efficient. Indeed, as the value of N increases, the fringes of the real and imaginary parts of ‹ε› present a red shift. As we know, interference effects for a film of thickness d lead to fringes with extrema at wavelengths verifying 2nd = mλ, where m is an integer or half-integer and the complex refractive index of the film is ñ = n + ik, k being the extinction coefficient. From the SEM images shown previously, we can see that the change of film thickness is very limited, so the progressive red shift of the fringes for an increasing value of N can be attributed to a change of refractive index of the film. In this case, the change of n is mostly caused by the increase of the Au NP volume. Between photon energies of 1.8 and 2.5 eV (corresponding to wavelengths ranging from 689 to 496 nm), the real parts (upper
Figure V. 2. At photon energies from 1.7 eV to 1.8 eV (λ ~ 729 nm to 690 nm), the data of the water system (Figure V. 6(a)) present a peak which is not exactly fitted by the MG model. This is probably because the MG model considers monodisperse and perfectly spherical gold inclusions. In the sample, the Au NPs, although well defined, present a significant polydispersity, as confirmed by the TEM results shown in the left image of Figure V. 4 (c).
The real and imaginary parts of εNC of the NC layers in the lamellar stack extracted from the SE study with Model 2 (Figure V. 2(b) & (d)) are presented in Figure V. 7 for different values of N=6, 15 and 20.
In the NC layers, the real part of εNC corresponding to the Maxwell-Garnett effective medium (Figure V. 7 (a), upper plot) takes large values in the IR range, while the real part of εNC extracted from the NC layer corresponding to the EMA-Bruggeman effective medium (Figure V. 7(a), lower plot) decreases in the IR energy range. Also, the imaginary part increases in the IR energy range.
Figure V. 7(b) shows the comparison of the extracted εNC for the two different types of Au particles in the NC layers. For the small Au NPs (the water system, solid line), εNC presents a stronger resonance, whereas for the large NPs (water/ethanol system, dashed line) εNC presents only a small and spectrally broad (for N=6) or no (for N=15 and 20) resonant behavior, and has a shape close to that of bulk gold.
Figure V. 7 (a) Real (Re, solid line) and imaginary (Im, dashed line) parts of εNC of the NC layers in the lamellar stack, as extracted from the SE study for different values of N between 6 and 20. The Au NP reduction process is done in the water system (upper plot) and the water/ethanol system (lower plot); (b) Comparison of the two systems. Real (upper plot) and imaginary (lower plot) parts of the dielectric function εNC of the NC layers in the lamellar stack. The resonance amplitude varies as N increases, due to the increasing introduction of plasmonic NPs. Au(JC) stands for experimental data of bulk gold from publication7.
Figure V. 8 shows the comparison of the real part (Re) and imaginary part (Im) of the components ε//(λ) and εz(λ) extracted for N=6, 15 and 20, between the spherical gold nanoparticles and bulk gold. As shown by the solid lines of Figure V. 8, for the gold NPs fabricated in the water system (MG-extracted), the dielectric functions ε// and εz both present a resonance at a photon energy of 2.1 eV (or alternatively a wavelength λ≈590 nm), close to the plasmon resonance of the gold nanoparticles. However, the resonance amplitudes of the two components significantly differ. In the case of εz, the
Figure V. 8 Real (upper plots) and imaginary (lower plots) parts of the components ε// (left) and εz (right) of the lamellar nanoplasmonic thin films fabricated in the water system (solid lines) and the water/ethanol system (dashed lines), as computed using Eqs. (4) and (5) from the Model 2 SE extractions for different values of N=6, 15 and 20 (solid lines: MG-extracted fAu between 7% and 40%; dashed lines: EMA-extracted fAu between 7% and 47%). The resonance amplitude varies as N increases for the MG-extracted curves, due to the increasing volume fraction of introduced plasmonic nanoparticles.
Figure V. 10(a) shows the side-view SEM image of the samples prepared with Na3Ct at room temperature and at 70 ºC. Figure V. 10(b) shows the side view of the samples prepared with AA at room temperature. We can see that Au NPs are synthesized on the surface of the samples instead of inside the layers. We can thus propose that the reaction between the gold salt and the organic reducing agents (Na3Ct and AA) works, but that the
Figure V. 10(h), (i) and (j) show the SEM images of the sample, TEM images of the Au NPs in the sample, and the distribution of particle sizes as histograms based on the TEM images, respectively. We can see that gold particles were well synthesized and introduced inside the layers and that they are spheres of diameters ranging from 1 nm to 15 nm. This means NaBH4 can penetrate through the layers and get into contact with the gold salt. We can compare the SEM images (Figure V. 10(k)) of the samples prepared with AA, with and without thermal annealing in the last step. We believe that the gold precursors are well loaded in the P2VP layers. On the surfaces of the samples prepared with Na3Ct (Figure V. 10 (a)) and AA (Figure V. 10(b)), gold NPs are formed. This means the gold salt can be reduced by the organic reducing agents. Therefore, the absence of reduction inside the layers is caused by the fact that the organic reducing agents cannot penetrate the layers to reduce the Au salt inside the samples, rather than by a low reducing ability. Comparing the results of the samples reduced by the different reducing agents (Figure V. 10(k)), we can see that NaBH4 is more efficient than Na3Ct and AA. NaBH4 (molecular weight 37.83 g/mol) has a lower molecular weight, which gives it more mobility through the porosity of the polymer layers. On the contrary, the organic reducing agents, Na3Ct
N=0, 3, 6, 9, 12, 15, 18 and 20. For the samples prepared with Na3Ct and AA, we measured only at N=0, 3, 6, 7 and 8, because as the value of N increased beyond 3, the measurement curves did not change anymore.
Figure V. 11 Real (Re, solid lines) and imaginary (Im, dashed lines) parts of ‹ε› for different values of N between 0 and 20. The different colors indicate different values of N; as the value of N increases, the line becomes gradually lighter. (A) gold salt reduced by Na3Ct at room temperature and at 70 ºC for values of N=0, 3, 6, 7, 8. (B) gold salt reduced by AA for values of N=0, 2, 6, 7, 8. (C) gold salt reduced by AA at pH=11 for values of N=0, 3, 6, 9, 11. From the change of the curves in plots (A), (B) and (C), it appears that the reactions were finished at the first loading step; no further reaction occurred after the first step. The fringe shift between N=0 and N=2 is caused by the increase of the film thickness or by gold NP deposition on the film surface. (D) gold salt reduced by NaBH4 for values of N=0, 3, 6, 9, 12, 15, 18 and 20. The resonance amplitude varies as N increases, due to the increasing introduction of plasmonic NPs.
Figure V. 12 SEM images of (a) N=20 (b) N=30. "Before" means gold loading following the Normal Process; "After" means gold loading following the Annealing Process.
5, 5 (after thermal annealing), 10, 10 (after thermal annealing), 15, 15 (after thermal annealing), 20, 20 (after thermal annealing), 25, 25 (after thermal annealing) and 30, 30 (after thermal annealing), while the samples prepared by the Normal Process were measured at N=0, 5, 10, 15, 20, 25, 30.
Figure V. 13 Comparison of the pseudo-permittivity <ε> before and after thermal annealing for N=5, 10, 15, 20 and 25. Continuous lines (dashed lines) are before (after) annealing, while blue (red) lines are the real (imaginary) part of <ε>
As shown in V.2.1, even though the Annealing Process can remove the irregular surface gold nanoparticles efficiently, the structure of the gold nanoparticles inside the layers is changed and the optical responses are impaired, because the resonance of the Au NPs is mostly lost. Thus, we are going to etch the Au NPs directly with chemicals, so as to preserve the gold nanoparticles inside the layers. Three different chemicals are used here: 11-mercaptoundecanoic acid (11-MUA), aqua regia and potassium iodide. 11-Mercaptoundecanoic acid (11-MUA) 17,18 is a well-known gold-capping and reducing agent, which contains an organosulfur thiol (-SH) group. With metal ions, thiolates behave as ligands to form transition metal thiolate complexes. The structural formula of 11-MUA is shown below in Figure V. 15.
Figure V. 15 The structural formula of 11-MUA
Figure V. 16 Schematic of the removal of surface Au NPs by 11-MUA. The red part is the nanocomposite layer of Au and P2VP, the blue part is PS, the yellow parts are gold, and the black lines are 11-MUA.
Since the reducing agents penetrate from the surface, as we discussed in Chapter V.1, the in situ synthesis of Au NPs in the P2VP layers is inhomogeneous; in particular, the first two layers from the top are sometimes denser than the others. They contain more gold NPs than the other layers, as we discussed previously. Since 11-MUA barely penetrates the film, the gold removal can only happen on the top surface and not inside the layers. This seems confirmed by our observations: in Figure V. 17, we can see that the irregular gold nanoparticles on the surface are well removed by the MUA treatment, that the first two layers from the top still contain more Au NPs than the other layers, and that the structure of the film did not collapse.
Figure V. 17 SEM images of samples before and after treatment with 11-MUA under ultrasound. The middle two images are low-magnification BSEI images. The upper two images are a zoom of part of the sample before treatment with 11-MUA (white square) at high magnification, in SEI and BSEI, respectively. The lower two images are a zoom of part of the sample after treatment with 11-MUA (white square) at high magnification, in SEI and BSEI, respectively.
Figure V. 19 shows the schematic removal process.
Figure V. 20(a)) show that not only the gold NPs on the surface but also those in the layers of the sample were removed. We then reduced the treatment to 4 cycles (see Figure V. 20 (b)) and 2 cycles (see Figure V. 20(c)). The gold NPs on the surface became smaller and the Au NPs in the first two top layers became similar to the NPs in the other layers. This method can remove gold NPs efficiently, but there are two main problems. The first one is shown in Figure V. 20(d): if there are many defects in the film, the gold NPs in the layers will be rinsed to the bottom and will aggregate. The second problem is shown in Figure V. 20(e): because of the fast reaction between aqua regia and Au, we
Figure V. 19 Schematic of the removal process
Figure V. 21 (a), (b) and (c) show the side-view SEM images of the samples treated with the KI-I2 solution using 4 drops, 2 drops and a 10-second immersion, respectively.
Figure V. 21 (a), (b) and (c) are SEM images of side views of the films after treatment with 4 drops of KI-I2 solution, 2 drops of KI-I2 solution and 10 seconds of immersion in the KI-I2 solution, respectively. The bottom of each image shows a large view of the film at lower magnification; the upper two images are zooms of the same place (white squares) in SEI and BSEI. In SEI, we can see the film topography. (d) is the comparison of the different levels of treatment.
Figure V. 21(d) shows the comparison of the three different treatments with the KI-I2
(see Figure VI. 1).
Figure VI. 1 Models used for analyzing the SE data. (a) Model A is a complete multilayer stack, using the SEM information on the number and thicknesses of all the layers; (b) Model B assumes the film is a uniaxial anisotropic layer. PS stands for pure polystyrene and NC for the nanocomposite composed of gold nanoparticles and poly(2-vinylpyridine). The "kinetic" SE data are analyzed so as to extract the N-independent dimensions dNC, dPS, dEM2, and the N-dependent parameters,
Figure VI. 2 Permittivity of the nanoparticle gold εAu=εAuNP' + i εAuNP" and of the polymer εm=εP2VP' as used in the Maxwell-Garnett model presented in the analysis of the NC layer properties. The gold permittivity is obtained by a well-described modification of the Johnson & Christy tabulated data in order to take into account finite-size effects in the nanoparticles4, while the polymer permittivity was obtained independently.
Figure VI. 3 shows the fitting lines using the multilayer model for values of N=5 (a), 20 (b) and 30 (c). We can see that at N=20, the model is not good enough, especially in the gold plasmon resonance region, ca. 1.9 eV to 2.2 eV (λ from 653 nm to 564 nm). So we propose the simpler model of Figure VI. 1(b).
Equation 6-4: y = S(x) = Pi(x) = ai + bi·x + ci·x² + di·x³, for ξi−1 ≤ x ≤ ξi, with ξ0 = −∞ and ξm+1 = +∞
Equation 6-5: Pi^(k)(ξi) = Pi+1^(k)(ξi); k = 0, 1, 2; i = 1, 2, ⋯, m
where Pi^(k) denotes the kth derivative of the ith polynomial piece, n is the degree of the spline function, m is the number of knots and ξi are the positions of the knots. The number of free coefficients of the spline function is m+n+1: each polynomial piece has n+1 coefficients, and the continuity conditions introduce n constraints per knot, leaving (m+1)(n+1) − mn = m+n+1 free coefficients. Figure VI. 4 shows an example of the fitting process. The data treatments follow these rules.
Figure VI. 4 Spline function (solid curve) with two knots, fitted to data points (crosses)
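A minimal sketch of such a cubic-spline fit with a small number of interior knots, in the spirit of Equations 6-4 and 6-5; scipy's least-squares spline is used here, which is an assumption and not necessarily the exact implementation of the ellipsometry software:

import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def spline_fit(x, y, n_interior_knots=2):
    """Least-squares cubic spline (degree 3) with equally spaced interior knots."""
    knots = np.linspace(x.min(), x.max(), n_interior_knots + 2)[1:-1]
    return LSQUnivariateSpline(x, y, t=knots, k=3)

# example: smooth a noisy stand-in spectrum y(x)
x = np.linspace(1.5, 3.0, 200)                        # photon energy (eV)
y = np.sin(3 * x) + 0.05 * np.random.randn(x.size)    # stand-in for measured data
y_smooth = spline_fit(x, y, n_interior_knots=2)(x)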
Figure VI. 6 The permittivities of (a) PS-P2VP and (b) gold nanoparticles used in the model.
Figure VI. 8 Real (upper plots) and imaginary (lower plots) parts of the pseudo-permittivity <ε> extracted from the ellipsometry data for (a) N=0 and (b) N=10. The ellipsometry data are measured at AOI 65°. The thicknesses of D1 and D2 are 287 nm and 655 nm, respectively.
Figure VI. 9 Fitting with the Model B-2 of the ellipsometric data Is (left plots) and Ic (right plots) versus the photon energy (eV). The experimental data (EXP) are in blue, ε// described by the Maxwell-Garnett effective medium (MG) is in black, and by Tauc-Lorentz with one oscillator in red. The ellipsometry data are measured at AOI 65°. The thicknesses of (a) D1 and (b) D2 are 287 nm and 655 nm, respectively.
Figure VI. 10 Real (upper plots) and imaginary (lower plots) parts of the permittivity ε// extracted from the ellipsometry data using the Model B-2. The uniaxial medium is described by (a) the Maxwell-Garnett Effective Medium Approximation and (b) the Tauc-Lorentz dispersion formula with one oscillator. The ellipsometry data are measured at AOI 65°. The thicknesses of D1 and D2 are 287 nm and 655 nm, respectively.
Figure VI. 10(b). We can see that as the thickness of the sample increases, the energy positions of the absorption peak E are similar (2.09 eV and 2.08 eV). From Figure VI. 10, for N=5, as the layer thickness increases, the resonance becomes stronger at 2.1 eV (λ~590 nm), while the resonance positions tend to the same value when N=10. This is in good agreement with the MG fitting. Since the ε//(λ) curves (both εMG, Figure VI. 10(a), and εTL, Figure VI. 10(b)) of D1 and D2 almost overlap, the thickness of the film has little influence on the optical properties.
Figure VI.11 Comparison of the real (a) and imaginary (b) parts of the ordinary permittivity ε// extracted from the ellipsometry data using the Model B-2. The medium in the ordinary direction is described by the Maxwell-Garnett effective medium approximation (MG, dashed lines) and by the Tauc-Lorentz dispersion formula with one oscillator (TL, continuous lines). The film thicknesses of D1 and D2 are 287 nm and 655 nm, respectively. The value of N stands for the number of gold loading process cycles. The curves are vertically shifted for clarity.
Using different molar masses of symmetrical PS-b-P2VP diblock copolymers, we can obtain lamellar films with lamellar periods between 17 nm and 80 nm. The lamellar period extracted by SAXS is obtained on bulk samples without gold.
Figure VI.12 BSEI SEM images of films of PS-b-P2VP 25k-25k after N=10 gold impregnation cycles, with layer thicknesses (a) t1 = 16.4 nm, (b) t2 = 31 nm, (c) t3 = 78.5 nm.
Figure VI.13 Real (a) and imaginary (b) parts of the pseudo permittivity <ε> extracted from the ellipsometry data using the Model A. The ellipsometry data are measured at AOI 65°. The layer thicknesses of t1, t2 and t3 are 16.4 nm, 31 nm and 78.5 nm, respectively.
We extract the ordinary permittivity ε// of the uniaxial layer with Model B-2, using a fixed extraordinary permittivity εz (Figure VI.5), for samples t1, t2 and t3 at N=5 and 10. The ordinary permittivity ε// is modeled either by a Maxwell-Garnett effective medium (MG; the permittivity is noted εMG) or by a Tauc-Lorentz function with one oscillator (TL; the permittivity is noted εTL). The fits are shown in Figure VI.14 and both are good. Comparing the two fits in Figure VI.14, the MG fit (in black) adjusts the experimental data less well, especially in the spectral range of the plasmon resonance of the gold NPs, while the TL fit (in red) adjusts the data better. As said earlier, with the TL function the plasmon resonance spectral position and width are not fixed, unlike with the MG function. For sample t3 (Figure VI.14(c)), the fits are not very good, which is probably related to the poorly defined film (see Figure VI.12(c)).
Figure VI.14 Fitting results of Model B-2 for Is (left plots) and Ic (right plots) versus photon energy (eV). The experimental data (EXP) are in blue, the fit with ε// described by the Maxwell-Garnett approximation (MG) is in black, and the fit by Tauc-Lorentz with one oscillator is in red.
The TL parameters are listed in Table VI.6 and plotted in Figure VI.15(b). As the bilayer thickness of the samples increases from 16 nm to 31 nm, for both N=5 and N=10, the plasmon resonance around 2.1 eV (λ~590 nm) becomes stronger while the resonance position varies only a little. With the increase of the gold volume fraction (comparing N=5 and N=10), all the samples show a red shift of the resonance position, which is also observed for samples D1 and D2.
Figure VI.15 Real (upper plots) and imaginary (lower plots) parts of the permittivity ε// extracted from the ellipsometry data using the Model B-2. The uniaxial medium is described by (a) the Maxwell-Garnett effective medium approximation and (b) the Tauc-Lorentz dispersion formula with one oscillator. The ellipsometry data are measured at AOI 65°. The bilayer thicknesses of t1, t2 and t3 are 16.4 nm, 31 nm and 78.5 nm, respectively.
Figure VI.16 Comparison of the real (a) and imaginary (b) parts of the permittivity ε// extracted from the ellipsometry data using the Model B-2. The medium in the ordinary direction is described by the Maxwell-Garnett effective medium approximation (MG, dashed lines) and the Tauc-Lorentz dispersion formula with one oscillator (TL, continuous lines). The bilayer thicknesses of t1, t2 and t3 are 16.4 nm, 31 nm and 78.5 nm, respectively. The value of N stands for the number of gold loading process cycles. The curves are vertically shifted for clarity.
VI.3.1 Gold and silver nanoparticles
VI.3.1.1 Samples structure
The samples were all fabricated from poly(styrene)-block-poly(2-vinylpyridine) (Mn 25000-25000, PDI 1.06) copolymers, with a lamellar period of ca. 30 nm. The fabrication description can be found in Chapter III.1.1.2. The film thicknesses were 288 ± 2 nm, measured by SE before the gold impregnation process. The metal loading process was performed as described in Chapter IV.2.2. In short, in the gold loading case (samples named Au), the film-bearing wafer is dipped into an Au salt (HAuCl4) solution and then into a reducing agent (NaBH4) water solution. In the silver loading case, the film-bearing wafer is dipped into an Ag salt (AgNO3) solution and then into a reducing agent (NaBH4) water solution (sample named Ag1) or water/ethanol solution (sample named Ag2). We repeat the gold/silver loading processes for N cycles (N from 0 to 15). In the case of gold, we know that N=5 and N=10 correspond to volume fractions of gold in the PVP layers of ca. 10% and ca. 20%, respectively, according to the QCM study (see Chapter IV.3.3).
Figure VI.17 BSEI SEM images of films of PS-b-P2VP 25k-25k after N=10 impregnation cycles: (a) gold nanoparticles in the P2VP layers, (b) silver nanoparticles in the first 4 P2VP layers from the top, (c) silver nanoparticles in all the P2VP layers.
Figure VI.19 Real (a) and imaginary (b) parts of the pseudo permittivity <ε> from the ellipsometry data. The ellipsometry data are measured at AOI 65°. The numbers of metal loading impregnation cycles N=5, N=10 and N=15 are shown with gradually lighter colors. Au stands for the sample with gold NPs (Figure VI.17(a)), Ag1 for the sample with silver NPs in the top 4 layers (Figure VI.17(b)), and Ag2 for the sample with silver NPs in the whole film (Figure VI.17(c)). The curves are vertically shifted for clarity.
Figure VI.20 Real (upper plots) and imaginary (lower plots) parts of the pseudo permittivity <ε> of samples (a) Ag1 and (b) Ag2. EXP stands for the ellipsometry data (blue dots) and MG for the fitting lines of Model B-2 (red curves). We extract the ordinary permittivity ε// of the uniaxial layer with Model B-2, with the extraordinary component εz fixed (Figure VI.5) and ε// described by the Maxwell-Garnett effective medium approximation. The ellipsometry data are measured at AOI 65°. The number of impregnation cycles for metal loading is N=10. Ag1 stands for the sample with silver NPs in the top 4 layers (Figure VI.17(b)), and Ag2 for the sample with silver NPs in the whole film (Figure VI.17(c)).
Figure VI.22 Real (a) and imaginary (b) parts of the pseudo permittivity <ε> from the ellipsometry data using the Model A. The ellipsometry data are measured at AOI 65°. N stands for the normal loading process, while AN stands for the gold loading process with thermal annealing every 5 cycles; the value after N or AN is the number of loading cycles.
Figure VI.23 Type of ellipsoid corresponding to semi-principal axes a, b, and c.
The depolarization factors Ni of an ellipsoid are shown in Figure VI.24. They can be calculated from Equations 6-8 and 6-9 and have the property N1 + N2 + N3 = 1; if N1 = N2 = N3 = 1/3 the particle is a sphere. In the two particular cases of ellipsoids, prolate (a1 > a2 = a3) and oblate (a1 = a2 < a3), the depolarization factors are described by Equation 6-8 and Equation 6-9, respectively.
Figure VI.24 Shape of the ellipsoids as a function of the depolarization factors N1 and N2. The gray area corresponds to the case N1 < N2 < N3. The spheres are located at the central point of the triangle, the intersection of the different line segments (figure from Bohren and Huffman [21]).
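Although Equations 6-8 and 6-9 are not reproduced here, the prolate-spheroid case can be evaluated from the standard expression given by Bohren and Huffman [21]; the sketch below assumes that form and an arbitrary aspect ratio, so it is only an illustration of how the factors behave.

```python
import numpy as np

def prolate_depolarization(aspect_ratio):
    """Depolarization factors (N1, N2, N3) of a prolate spheroid with a1 > a2 = a3.

    aspect_ratio = a1 / a3 > 1. Uses the standard Bohren & Huffman expression;
    the factors satisfy N1 + N2 + N3 = 1 with N2 = N3 = (1 - N1) / 2.
    """
    e = np.sqrt(1.0 - 1.0 / aspect_ratio**2)   # eccentricity of the spheroid
    n1 = (1.0 - e**2) / e**2 * (-1.0 + np.log((1.0 + e) / (1.0 - e)) / (2.0 * e))
    n2 = n3 = (1.0 - n1) / 2.0
    return n1, n2, n3

# For a 2:1 prolate spheroid this gives roughly (0.17, 0.41, 0.41);
# as the aspect ratio tends to 1 the factors tend to the sphere value (1/3, 1/3, 1/3).
print(prolate_depolarization(2.0))
```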
Figure VI.26 shows the real part of the pseudo permittivity <ε> for various values of the depolarization factor N1 (note that this is not the real permittivity); it gives primary indications of the effect of particle shape.
Figure VI.26 Real part of <ε> for various values of the depolarization factor N1, whose values correspond to the shapes of the ellipsoid in Figure VI.24. The curves are vertically shifted for clarity.
Figure VI.27 Backscattering scanning electron microscopy side-view image (SEM) of a 370 nm-thick film of alternating layers of poly(styrene) (PS, appearing black) and Au nanoparticles:P2VP nanocomposite (NC, appearing white). The lower and upper black domains of the micrograph are the substrate and the air, respectively. The trihedral indicates the ordinary ((x,y) or //) and extraordinary (z) directions.
Figure VI.28 Backscattering scanning electron microscopy side-view image (SEM) of the 265 nm-thick film of alternating layers of pure polymer (PS, appearing black) and of Au nanoparticles:P2VP nanocomposite (NC, appearing white), for a number of cycles of gold impregnation and reduction of N=5 (a), 10 (b), 20 (c), 30 (d).
Figure VI.29 Transmission electron microscopy (TEM) image obtained with a grid on which was deposited a drop of the dispersion obtained by full dissolution of a film, at step N=20 of the fabrication process, in a good solvent of the diblock copolymer. The image evidences gold nanoparticles of mean diameter 7 nm. Note that this observation technique being destructive, it was performed on different samples than the ones studied by SEM and VASE.
The SE spectra were obtained by recording the full spectra every five gold loading cycles. This turns out to be very important information about the stacked structure as it progressively evolves upon increased gold absorption in the layers. From Figure VI.30, for the initial lamellar structure (before infiltration, N=0) we see periodic oscillations typical of interferences driven by reflections at the air/film and film/substrate interfaces, as both PS and P2VP polymers are transparent in the studied energy range. As N is increased, the extrema shift toward lower energies and the fringe spectral separation decreases, indicating an evolution possibly combining an increase in the total film thickness and a change in the effective dielectric function of the NC (Au nanoparticles:P2VP) layers. Moreover, their amplitude decreases markedly as N increases.
Figure VI.30 Evolution of the measured ellipsometric angles Ψ and Δ as a function of the photon energy for an angle of incidence of θo=65° and different values of N between 0 and 25. The spectroscopic features are damped as N increases, due to the increasing absorption of the film.
Figure VI.31 (a) Real (upper plot) and imaginary (lower plot) parts of the dielectric function εNC of the NC layers in the lamellar stack, as extracted from the SE study for different values of N between 5 and 25. The dotted line is the dielectric function used for the PS layer, with Im(εPS)=0. The resonance amplitude varies as N increases, due to the increasing introduction of plasmonic NPs.
Figure VI.33 Comparison between the real part of the ordinary components in the cases of the multilayer (Model A, continuous lines) and uniaxial model (Model B-1, dashed lines) for N=15 (a), N=20 (b) and N=25 (c).
In the hyperbolic region, the dispersion relation allows propagative modes for large values of |kx|, potentially providing super-resolution. These hyperbolic modes are however significantly attenuated due to the lossy nature of the medium, in which the hyperbolic property is obtained from the resonant nature of the permittivity. These detrimental losses now need to be minimized by tuning the nature, size, size dispersity and organization of the nanoparticles. Furthermore, the same self-assembly methodology for the fabrication of lamellar stacks could be used with other nanoparticles having a resonance of better quality factor, or with a lamellar system including gain species (fluorophores or quantum dots) which could amplify the plasmons and compensate the losses. In any case, we have demonstrated for the first time the possibility of using a self-assembly methodology for the fabrication of a bulk hyperbolic material, which opens new fabrication routes for metamaterials aiming at super-resolution lenses. Moreover, the self-assembly methodology we have developed could be beneficial for the fabrication of multilayer films of non-planar geometries, such as those used in devices in which hyperbolic metamaterials act as super-resolution enlarging lenses [20].
Figure VI.36 Working principle of the spherical hyperlens: schematic of a spherical hyperlens comprised of nine pairs of silver and titanium oxide layers [20].
The gold content was quantified by spectroscopic ellipsometry (SE) and quartz crystal microbalance (QCM) for different values of the number N of impregnation cycles. The SE data were analyzed with a simple multilayer model of alternating polymer and nanocomposite layers, the latter being modeled with a Maxwell-Garnett effective medium law. We find that the amount of gold in the composite layers can be varied up to typically 40 volume%. The data from QCM measurements were analyzed based on the Sauerbrey relation, valid for solid films. Both methods gave similar volume fractions of gold fAu: fAu ~ 10% when N=5 and fAu ~ 23% when N=10. Finally, we analyzed the optical indices of a sample of bilayer thickness 37 nm with increasing values of N (from 0 to 25), using spectroscopic ellipsometry. Two models were set up to account for the spectroscopic ellipsometry measurements. The Model 1 is a perfect periodic superlattice of polymer and nanocomposite (NC) layers, which is used to determine the optical properties of the individual layers of the stack. The optical properties of the PS layer are fixed to tabulated values. The permittivity εNC of the NC layers is determined by fitting the Model 1 to the SE data using a BSPLINE function available in the CompleteEASE software and using a Maxwell-Garnett effective medium as an initial guess. From N=5 to N=30, the optical properties of the Au-loaded polymer NC layer are dominated by a resonance at λ~580 nm, close to that expected for the plasmon resonance of the gold NPs present in the NC layers.
Equation 1-19: k × (k × E) + ω² μ0 ε0 ε E = 0,
which can be developed into the matrix form of Equation 1-20:
Equation 1-20:
[ k0² εxx - ky² - kz²      kx ky                     kx kz                   ] [ Ex ]
[ kx ky                    k0² εyy - kx² - kz²       ky kz                   ] [ Ey ] = 0
[ kx kz                    ky kz                     k0² εzz - kx² - ky²     ] [ Ez ]
where k0 = ω/c = ω √(μ0 ε0) is the magnitude of the wavevector in vacuum and c is the speed of light in vacuum. In this thesis, we focus our attention on uniaxial media with εxx = εyy ≡ ε// and k// = √(kx² + ky²). Substituting k// and k0² = ε0 μ0 ω² in Equation 1-20 yields the isofrequency dispersion relation in this material (εz = εzz):
Equation 1-21: (k//² + kz² - ε// ε0 μ0 ω²)(k//² ε// + kz² εz - ε// εz ε0 μ0 ω²) = 0
Equation 1-23: ε// = ρ εm + (1 - ρ) εd
Equation 1-24: 1/εz = ρ/εm + (1 - ρ)/εd
where ρ is the fill fraction of the metal in the unit cell, ρ = dm/(dm + dd), if the thickness of the metal layer is dm and the thickness of the dielectric layer is dd. From Equation 1-23 and Equation 1-24, the signs of ε// and εz fix the topology of the isofrequency surface.
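A minimal numerical sketch of Equations 1-23 and 1-24 is given below; the permittivities and fill fraction are placeholder values chosen only to show how the signs of ε// and εz set the elliptic or hyperbolic (type I / type II) character of the medium.

```python
def lamellar_effective_permittivity(eps_m, eps_d, rho):
    """Effective permittivities of a metal/dielectric lamellar stack (Eqs. 1-23 and 1-24).

    eps_m, eps_d : (complex) permittivities of the metal and dielectric layers
    rho          : metal fill fraction d_m / (d_m + d_d)
    """
    eps_par = rho * eps_m + (1.0 - rho) * eps_d            # Eq. 1-23 (ordinary)
    eps_z = 1.0 / (rho / eps_m + (1.0 - rho) / eps_d)      # Eq. 1-24 (extraordinary)
    return eps_par, eps_z

# Illustrative values only (not fitted data): a lossy "metal" layer and a transparent polymer.
eps_par, eps_z = lamellar_effective_permittivity(eps_m=-5.0 + 1.0j, eps_d=2.4, rho=0.5)

if eps_par.real > 0 and eps_z.real > 0:
    regime = "elliptic (dielectric-like)"
elif eps_par.real < 0 < eps_z.real:
    regime = "hyperbolic type II ('metallic', two negative tensor components)"
elif eps_z.real < 0 < eps_par.real:
    regime = "hyperbolic type I"
else:
    regime = "all components negative (no propagating modes)"
print(eps_par, eps_z, regime)
```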
The solubility parameters of the chemicals used are shown in Table I.2. Defects in the multilayers will provide channels through which the solvent can easily migrate. The poly(styrene)-block-poly(2-vinylpyridine) (PS-PVP) is the main diblock used in our study. According to the work of H. Lin et al. [94], a solvent that dissolves both blocks would easily penetrate into the multilayered structure. When the solvent is selective for only one block, the other block will retard the diffusion of the solvent.
Table I.2 Hildebrand solubility parameters for some chemicals
Compound | Hildebrand solubility parameter (MPa)^1/2
Toluene | 18.3
THF | 18.5
Ethanol | 26.2
Methanol | 29.5
Poly(styrene) | 18.3
Poly(2-vinylpyridine) [95] | 21.3
Table I.3 Affinity of several ligands for the gold surface
Stabilization agent | thiol ≈ amines ≈ phosphines ≈ silanes > alkane ≈ halide ≈ alcohols ≈ carboxylic acid
Formula | RSH ≈ RNH ≈ R3P ≈ R3Si > R3C ≈ RX ≈ ROH ≈ COOH
Table I.4 Properties of gold (from Wikipedia)
Atomic number | 79
Standard atomic weight | 196.97 g/mol
Melting point | 1064.18 °C
Boiling point | 2970 °C
Density | 19.30 g/cm³
Electron configuration | [Xe] 4f14 5d10 6s1
I.4.3 Optical properties of gold nanoparticles
I.4.3.1 Permittivity of bulk gold
The damping of the free (Drude) electron motion in bulk gold is due to collisions with the crystal lattice and with imperfections of the material, so γ0 is related to the mean free path of the electrons l, which is the average distance the electrons travel between collisions (see Equation 1-37 and Figure I.26(a)).
Equation 1-37: γ0 = vF / l
with Fermi velocity vF = 1.4 × 10^6 m/s. For bulk gold, γ0 = 0.073 eV and l = 12.5 nm.
Table I.5 Values of the Drude parameters for bulk gold (JC)
Parameter | ε∞ | ωp | γ0 | l
Value | 9.4 | 8.92 eV | 0.073 eV | 12.5 nm
Table I.6 Values of the parameters in the JC model
Parameter | ε∞ | ωp | γ0 | l | lc | l'
Value | 9.4 | 8.92 eV | 0.073 eV | 12.5 nm | 10.7 nm | 5.9 nm
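As an illustration of how the parameters of Tables I.5 and I.6 enter the free-electron part of the gold permittivity, here is a minimal sketch; it includes only the Drude term (the interband contribution of the Johnson & Christy data is omitted), and the size-correction form gamma = gamma0 + hbar*vF/R is an assumption of this sketch, not a quote from the text.

```python
import numpy as np

HBAR_EV_S = 6.582e-16   # reduced Planck constant in eV*s
V_F = 1.4e6             # Fermi velocity of gold, m/s (value quoted with Table I.5)

def drude_permittivity(energy_ev, eps_inf=9.4, omega_p=8.92, gamma0=0.073, radius_nm=None):
    """Free-electron (Drude) part of the gold permittivity versus photon energy (eV).

    If a particle radius is given, the damping is increased to account for the finite
    size limiting the electron mean free path (surface-scattering term hbar*v_F/R).
    """
    gamma = gamma0
    if radius_nm is not None:
        gamma = gamma0 + HBAR_EV_S * V_F / (radius_nm * 1e-9)
    e = np.asarray(energy_ev, dtype=complex)
    return eps_inf - omega_p**2 / (e**2 + 1j * gamma * e)

energies = np.linspace(1.5, 3.0, 4)
print(drude_permittivity(energies))                  # bulk gold
print(drude_permittivity(energies, radius_nm=3.5))   # ~7 nm diameter nanoparticles
```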
Garnett effective medium model. The black lines are fitting lines with an improved MG model [127].
Chapter II Instruments and methods
Contents: Introduction; II.1 Spectroscopic Ellipsometry (II.1.1 General Introduction; II.1.2 Set-up of ellipsometry; II.1.3 Determination of thickness and optical properties of a simple film; II.1.4 Dispersion relation of poly(styrene) and poly(2-vinylpyridine)); II.2 Small angle X-ray scattering (II.2.1 General Introduction; II.2.2 Principle of SAXS; II.2.3 SAXS performance; II.2.4 SAXS in diblock copolymers).
After studying gold nanoparticles suspended in a liquid, we are interested in gold nanoparticles dispersed in a polymer film. The optical parameters can be determined by spectroscopic ellipsometry (see Chapter II.1). Julien Vieaud [122] and Kevin Ehrhardt [127] studied during their theses nanocomposite systems composed of gold nanoparticles randomly dispersed in a polymer film. It has been shown that such systems can exhibit negative permittivity close to the plasmon resonance for dense nanocomposite systems with a volume fraction of gold larger than 24% (see Figure I.31). This negative response gives us the possibility to realize a hyperbolic metamaterial from lamellar stacks of nanocomposite and polymer layers instead of metallic and dielectric layers. The organization of the nanoparticles will influence the optical response, and we plan to organize the nanoparticles using block copolymers as a structuring matrix.
… and scattering properties of gold nanoparticles of different size, shape, and composition: applications in biological imaging and biomedicine. J. Phys. Chem. B 110, 7238-7248 (2006).
127. Ehrhardt, K. Mesures, modélisations et simulations numériques des propriétés optiques effectives de métamatériaux auto-assemblés (2014).
128. Mai, S.-M. et al. Microphase separation in poly(oxyethylene)-b-poly(oxybutylene) diblock copolymers. Macromolecules 31, 8110-8116 (1998).
129. Hamley, I. W. & Castelletto, V. Small-angle scattering of block copolymers: in the melt, solution and crystal states. Prog. Polym. Sci. 29, 909-948 (2004).
… order-disorder transition of styrene-butadiene diblock copolymer.
Table II.1 Parameters of the polymers used in the New Amorphous dispersion relation (Equation 2-8)
Parameter | PS-P2VP | PS | P2VP
n∞ | 1.507 | 1.541 | 1.509
ωg (eV) | 3.833 | 3.713 | 2.895
fj (eV) | 0.055 | 0.124 | 0.025
ωj (eV) | 5.397 | 5.408 | 4.922
Γj (eV) | 0.593 | 1.728 | 0.536
As we see from these results, the optical parameters of the two polymers are very similar, which is due to their close chemical nature and density. When organized in ordered domains, the PS-b-P2VP block copolymer films present a relatively low contrast both in SE and in EM.
Chapter III contents: III.1 Film preparation by spin-coating (III.1.1 Effect of the concentration on the film thickness; III.1.2 Effect of different spin-coating conditions on the thickness of the films); III.2 Orientation and period size (III.2.1 Experiment; III.2.2 Measurement; III.2.3 Results); III.3 Controlling the period thickness of the lamellar phase by homopolymer addition (III.3.1 Experimental; III.3.2 Results); References.
The Taguchi method [4][5][6] is a powerful tool designed by Dr. Genichi Taguchi, which provides a simple, efficient, and systematic approach to optimize operating conditions within designated ranges of all selected parameters. The conventional method used in the optimization of experimental parameters requires a large number of experiments. Compared to a conventional method, the Taguchi experimental strategy can reduce the number of experiments as well as identify and quantify the interactions among parameters and the contribution of individual parameters. For example, to study the effect of four parameters taking three different values, 81 (= 3^4) different experiments are needed with the conventional method, whereas 9 (= L9(3^4)) designed experiments are needed with the Taguchi method. Over the decades, this design method has been used widely in the scientific world, since it can not only simplify experiments but also inspect the interactions among the experimental factors. It is based on orthogonal experimental design tables, denoted Ln(r^m), where n stands for the number of measurements in the Taguchi design, r stands for the level (number of values taken) of each factor, and m stands for the number of factors.
In the present work, we use 3 factors and 3 levels in the experiments. 3^3 = 27 experiments would be needed if we studied all the variable combinations; instead, 9 = L9(3^4) experiments are enough with the Taguchi method. Table III.2 shows the selected factors and their values. Factor A is the volume of solution (Level 1: 55 μl; Level 2: 58 μl; Level 3: 61 μl); Factor B is the spin speed (Level 1: 4500 rpm; Level 2: 4800 rpm; Level 3: 5100 rpm); Factor C is the acceleration and deceleration time (Level 1: 2 s/2 s; Level 2: 3 s/3 s; Level 3: 4 s/4 s). Table III.3 shows the L9 array for the design of the experiment by the Taguchi method, with orthogonal columns. A blank column is introduced as a control in the table in order to compare with the experimental factors.
Table III.2 Selected factors and their levels
Level | Factor A: volume (μl) | Factor B: spin speed (rpm) | Factor C: acc/dec time (s)
1 | 55 | 4500 | 2/2
2 | 58 | 4800 | 3/3
3 | 61 | 5100 | 4/4
Table III.3 L9 array for the design of the experiment by the Taguchi method
Exp. No. | Volume | Spin speed | Acc/dec time | Blank | Measure
1 | 1 | 1 | 1 | 1 | y1
2 | 1 | 2 | 2 | 2 | y2
3 | 1 | 3 | 3 | 3 | y3
4 | 2 | 1 | 2 | 3 | y4
5 | 2 | 2 | 3 | 1 | y5
6 | 2 | 3 | 1 | 2 | y6
7 | 3 | 1 | 3 | 2 | y7
8 | 3 | 2 | 1 | 3 | y8
9 | 3 | 3 | 2 | 1 | y9
Table III.4 Experimental matrix and results obtained following the L9 orthogonal array
Exp. No. | A: Volume (μl) | B: Spin speed (rpm) | C: Acc/dec time (s) | Blank | Thickness (nm) | Noted name
1 | A1 (55) | B1 (4500) | C1 (2/2) | D1 | y1 = 231 | A1B1C1D1
2 | A1 (55) | B2 (4800) | C2 (3/3) | D2 | y2 = 250 | A1B2C2D2
3 | A1 (55) | B3 (5100) | C3 (4/4) | D3 | y3 = 278 | A1B3C3D3
4 | A2 (58) | B1 (4500) | C2 (3/3) | D3 | y4 = 266 | A2B1C2D3
5 | A2 (58) | B2 (4800) | C3 (4/4) | D1 | y5 = 283 | A2B2C3D1
6 | A2 (58) | B3 (5100) | C1 (2/2) | D2 | y6 = 228 | A2B3C1D2
7 | A3 (61) | B1 (4500) | C3 (4/4) | D2 | y7 = 284 | A3B1C3D2
8 | A3 (61) | B2 (4800) | C1 (2/2) | D3 | y8 = 233 | A3B2C1D3
9 | A3 (61) | B3 (5100) | C2 (3/3) | D1 | y9 = 260 | A3B3C2D1
Film thickness results are listed in Table III.4. All the values of the ANOVA are summarized in Table III.5, which is used to determine the percentage contribution of each parameter to the thickness of the films.
Table III.5 Results of the ANOVA for film thickness
Quantity | A (volume) | B (spin speed) | C (acc/dec time) | Blank
K1 | 759 | 781 | 692 |
K2 | 777 | 766 | 776 |
K3 | 777 | 766 | 845 |
K̄1 | 253 | 260.3 | 230.7 |
K̄2 | 259 | 255.3 | 258.7 |
K̄3 | 259 | 255.3 | 281.7 |
Range(K) | 18 | 15 | 153 | 15
Range(K̄) | 6 | 5 | 51 | 5
SSj | 72 | 50 | 3914 | 42
F | 1.71 | 1.19 | 93.19 |
T = 2313; SST = 4078; dfT = 8; dfj = 2 for each factor; dfe = 2; MS(A) = 36; MS(B) = 25; MS(C) = 1957; MSe = 21.
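The range analysis of Table III.5 can be reproduced directly from the thickness results of Table III.4, as sketched below; the level assignments of each run follow the L9 array of Table III.3.

```python
import numpy as np

# Film thicknesses y1..y9 (nm) from Table III.4, in run order of the L9 array.
y = np.array([231, 250, 278, 266, 283, 228, 284, 233, 260], dtype=float)

# Level (1, 2, 3) taken by each factor in runs 1..9 of the L9(3^4) array (Table III.3).
levels = {
    "A (volume)":       [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "B (spin speed)":   [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "C (acc/dec time)": [1, 2, 3, 2, 3, 1, 3, 1, 2],
}

T = y.sum()
for name, lv in levels.items():
    lv = np.array(lv)
    K = np.array([y[lv == i].sum() for i in (1, 2, 3)])  # column sums K1, K2, K3
    ss = (K**2).sum() / 3.0 - T**2 / 9.0                 # sum of squares of the factor
    print(f"{name}: K = {K}, range = {K.max() - K.min():.0f}, SS = {ss:.0f}")
# Reproduces K, Range(K) and SSj of Table III.5 (e.g. SS = 3914 for the acc/dec time).
```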
The diblock copolymers used in this study were purchased from Polymer Source Inc. (listed in Table III.6) and were used without any further purification.
Table III.6 Molecular characteristics of the PS-block-P2VP copolymers
Polymer (PS MnPS-block-P2VP MnP2VP) | Total Mn (×10^3 g mol^-1) | Mw/Mn (PDI) | P2VP weight fraction | Degree of polymerization N
PS34k-block-P2VP18k | 52.0 | 1.12 | 0.35 | 506
PS102k-block-P2VP97k | 199.0 | 1.12 | 0.49 | 1899
PS106k-block-P2VP75k | 181.0 | 1.10 | 0.41 | 1762
PS25k-block-P2VP25k | 50.0 | 1.06 | 0.50 | 487
PS8.2k-block-P2VP8.3k | 16.5 | 1.08 | 0.50 | 161
Equation 3-7: N = MnPS/mPS + MnP2VP/mP2VP
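As a quick consistency check of Equation 3-7, the sketch below evaluates N for one of the copolymers of Table III.6; the monomer molar masses are assumed values (not given in the text), which is why the result differs slightly from the tabulated one.

```python
# Assumed monomer molar masses (g/mol): styrene ~104.15, 2-vinylpyridine ~105.14.
monomer_mass = {"PS": 104.15, "P2VP": 105.14}

def degree_of_polymerization(mn_ps, mn_p2vp):
    """Equation 3-7: N = Mn_PS / m_PS + Mn_P2VP / m_P2VP."""
    return mn_ps / monomer_mass["PS"] + mn_p2vp / monomer_mass["P2VP"]

# PS25k-block-P2VP25k: nominal block masses give N ~ 478, close to the 487 of Table III.6
# (the small difference comes from rounding the block masses to 25k).
print(round(degree_of_polymerization(25_000, 25_000)))
```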
Table III.7 Solubility parameters
From the curves obtained with THF, the peak positions are carefully identified (see Table III.8 and Figure III.14). The period sizes of PS102k-block-P2VP97k, PS25k-block-P2VP25k and PS8.2k-block-P2VP8.3k are listed in Table III.8.
Table III.8 Peak positions and period sizes
Copolymer | Solvent | P2VP fraction | q0 (Å^-1) | Ratio of peaks based on q0 | Period size
PS34k-block-P2VP18k | THF | 0.35 | 0.0128 | 1.0 : 2.0 : 3.0 : 3.83 … | 49.0 nm
PS106k-block-P2VP75k | THF | 0.41 | 0.0127 | 1.0 : 2.0 : 3.0 … | 49.49 nm
PS102k-block-P2VP97k | THF | 0.49 | 0.00800 | 1.0 : 3.0 : 4.0 … | 78.53 nm
PS25k-block-P2VP25k | THF | 0.50 | 0.01707 | 1.0 : 3.06 … | 36.81 nm
PS8.2k-block-P2VP8.3k | THF | 0.50 | 0.03841 | 1.0 : … | 16.36 nm
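The period sizes of Table III.8 follow directly from the first-order peak position through d = 2π/q0; a one-line check with the values copied from the table is sketched below.

```python
import math

# First-order SAXS peak positions q0 in A^-1, copied from Table III.8.
q0 = {
    "PS34k-b-P2VP18k": 0.0128,
    "PS102k-b-P2VP97k": 0.00800,
    "PS25k-b-P2VP25k": 0.01707,
    "PS8.2k-b-P2VP8.3k": 0.03841,
}
for name, q in q0.items():
    d_nm = 2.0 * math.pi / q / 10.0   # d = 2*pi/q0 in Angstrom, divided by 10 to get nm
    print(f"{name}: d = {d_nm:.2f} nm")
# Reproduces the tabulated periods: 49.1, 78.5, 36.8 and 16.4 nm.
```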
Table III.9 Period size from different measurements and from theory
Polymer | N | d (nm) TEM | d (nm) SEM | d (nm) SAXS | d (nm) theory
PS102k-block-P2VP97k | 1899 | 137.6 | 120 | 78.53 | 81.80
PS25k-block-P2VP25k | 487 | 37.74 | 31 | 37 | 33.02
PS8.2k-block-P2VP8.3k | 161 | 13.66 | 13.8 | 16.36 | 15.76
Table III.10 Peak positions and period sizes
Concentration of P2VP in mixture (%) | Morphology | P2VP weight fraction | q0 (Å^-1) | Period size from SAXS (nm) | Period size from SEM (nm)
0 | lamellar | 0.500 | 0.01686 | 37.27 | 28.1
1.0 | lamellar | 0.505 | 0.01617 | 38.86 | -
5.0 | lamellar | 0.525 | 0.01587 | 39.59 | 31.4
10.0 | lamellar | 0.550 | 0.01587 | 39.59 | 33.6
15.0 | lamellar | 0.575 | 0.01563 | 40.20 | -
30.0 | none | 0.650 | - | - | -
Chapter IV contents: Introduction; IV.1 One-step method (IV.1.1 Introduction; IV.1.2 Experimental; IV.1.3 Results); IV.2 Impregnation process (IV.2.1 Introduction; IV.2.2 Experiment; IV.2.3 Results and Discussion); IV.3 Study of the volume fraction of Au NPs in the P2VP layers (IV.3.1 Introduction; IV.3.2 Experimental; IV.3.3 Results); Conclusions; References.
Table IV.4 Parameters used in the calculations for the film of 160 nm thickness
Parameters of the foreign weight: molecular mass of Au (MAu) = 196.97; molecular mass of Cl (MCl) = 35.45; mass fraction of Au in the layer = 64.94%.
Parameters of the film: thickness = 160 nm; area = 0.785 cm²; volume of film = 1.25663 × 10^-5 cm³.
Table VI.2 Volume fraction of gold used in the Maxwell-Garnett effective medium approximation. The thicknesses of D1 and D2 are 287 nm and 655 nm, respectively.
 | D1 | D2
N=5 | 5.8% | 9.8%
N=10 | 16.37% | 17.33%
With the second fit, εTL, the fitting results are listed in Table VI.3 and Figure VI.10(b).
Table VI.3 Tauc-Lorentz and oscillator parameters (Equation 2-10 and Equation 2-11) found through the fit of the ellipsometry data for the samples with N=5 and 10. The thicknesses of D1 and D2 are 287 nm and 655 nm, respectively.
Parameter | D1, N=5 | D1, N=10 | D2, N=5 | D2, N=10
ε∞ | 3.370 | 4.670 | 4.190 | 5.420
Eg | 1.600 | 1.550 | 1.260 | 1.510
A | 5.200 | 20.000 | 2.410 | 16.940
E | 2.210 | 2.090 | 2.280 | 2.080
C | 0.640 | 0.370 | 0.450 | 0.320
f | -0.530 | -1.300 | -0.990 | -1.950
ω0 | 6.430 | 5.160 | 6.300 | 6.130
γ | -7.950 | -3.010 | -5.080 | -3.640
Table VI.4 Characteristics of the samples with different lamellar periods
 | t1 | t2 | t3
Lamellar period (SAXS) | 16.4 nm | 36.8 nm | 78.5 nm
Lamellar period (SEM) | // | 31 nm (7 layers) | 120 nm (4 layers)
Thickness (SE) | 213 nm | 225 nm | 507 nm
Thickness (SEM) | 265.4 nm | 242.6 nm | 519.6 nm
VI.2.2.2 Optical responses
Table VI.5 Volume fraction of gold used in the Maxwell-Garnett effective medium approximation. The bilayer thicknesses of t1, t2 and t3 are 16.4 nm, 31 nm and 78.5 nm, respectively.
 | t1 | t2 | t3
N=5 | 6.2% | 7.4% | 5.1%
N=10 | 10.9% | 13.72% | 12.7%
The second approximation (TL) is a Tauc-Lorentz function with one oscillator (the permittivity ε// is noted εTL in this case); the fitting results are listed in Table VI.6.
Table VI.6 Parameters used in the Tauc-Lorentz function with one oscillator (Equation 2-10 and Equation 2-11). The bilayer thicknesses of t1, t2 and t3 are 16.4 nm, 31 nm and 78.5 nm, respectively.
Parameter | t1, N=5 | t1, N=10 | t2, N=5 | t2, N=10 | t3, N=5 | t3, N=10
ε∞ | 3.37 | 8.71 | 3.53 | 4.68 | 2.74 | 14.30
Eg | 1.78 | 1.16 | 1.86 | 1.59 | 1.68 | 1.31
A | 8.06 | 3.90 | 20.57 | 16.70 | 14.63 | 13.25
E | 2.40 | 2.03 | 2.03 | 2.07 | 2.19 | 1.92
C | 0.50 | 0.43 | 0.30 | 0.36 | 0.69 | 0.52
f | -0.53 | -5.67 | -0.59 | -1.41 | -0.34 | -11.53
ω0 | 5.39 | 16.21 | 5.43 | 6.42 | 4.66 | 13.08
γ | -1.48 | -7.61 | -4.57 | -4.36 | -0.61 | -0.54
Table VI.7 Parameters of PS and P2VP used in the New Amorphous equations
Polymer | n∞ | ωg | fj | ωj | Γj
PS | 1.541 | 3.713 | 0.124 | 5.408 | 1.728
P2VP | 1.460 | 2.059 | 0.017 | 6.922 | 0.608
Acknowledgements
This thesis was carried out at the Centre de Recherche Paul Pascal in Bordeaux. I would like to address special thanks to the LabEx AMADEus, the China Scholarship Council (CSC) and the Université de Bordeaux.
A Q-sensor covered with a lamellar copolymer film was installed in the QCM cell (see Figure IV.26). The measurement started in water to obtain a stable baseline. A first experiment showed that the exchange of solvent (from water to ethanol or ethanol to water) has no influence on a bare wafer: the resonance frequency is only slightly modified, in a reversible manner (Figure IV.27(a)). In the case of the copolymer films, changing the ambient solvent from water to ethanol induces a change of resonance frequency, which we interpret as due to an increased swelling of the PVP domains in ethanol; this change is not completely reversible when water and ethanol are exchanged again. We suppose that constraints in the collapsed conformation of the PVP chains in water relax irreversibly in ethanol. Once the sample has undergone the solvent exchange twice, the frequency changes become reversible. Hence, the frequency of the sensor after the third change back to water can be treated as the proper baseline.
The standard loading process uses ethanol as the solvent for the gold salt and water for the reducing agent. In order to know the influence of the solvents, we plan to change the solvent of the reducing agent and study the film structure and the Au nanoparticle shape and size in the films. We are not going to change the solvent of the gold salt because previous studies [1,2] and our study in Chapter IV show that the affinity of the gold salt for the PVP layers relies on the gold salt penetrating from the top of the film, and the solvent plays an important role in this step. In 1995, H. Lin et al. [3] studied the penetration of solvents in ordered thin films and showed that there is no penetration of the solvent into the underlying multilayered structure if the solvent is a poor solvent for one of the blocks. So manipulating the solvent in this step is ineffective. Thus, in this section we focus on the solvent manipulation in the reducing step.
Ellipsometric modeling used in this section
As mentioned previously, in the Maxwell-Garnett model the effective dielectric function εMG of spherical particles of dielectric constant ε2 embedded in a host medium of dielectric constant ε1 satisfies (εMG - ε1)/(εMG + 2ε1) = f (ε2 - ε1)/(ε2 + 2ε1), where f is the volume fraction of the spheres in the matrix. It is only valid for low volume fractions of particles.
In the EMA (Bruggeman) model, the different components of the mixture are treated equivalently, without preliminary assumption on the relative proportions. This model is self-consistent: the mixture of the different materials forms the host medium, which means that the dielectric function of the host medium is the EMA-Bruggeman dielectric function. In this model the number of phases can exceed 2. The dielectric functions of the different materials are εi (with i = 1, 2, 3, …) with corresponding volume fractions fi, and they verify the equation Σi fi (εi - εEMA)/(εi + 2εEMA) = 0.
In this chapter, two phases are present: the metallic particles of volume fraction f1 and the polymer matrix, f2 = 1 - f1. The dielectric function of the effective medium fulfills f1 (ε1 - εEMA)/(ε1 + 2εEMA) + f2 (ε2 - εEMA)/(ε2 + 2εEMA) = 0, with ε1 the dielectric constant of the gold NPs and ε2 that of the PVP matrix.
The permittivity εNC of the NC layers is determined by fitting the Model 2 to the SE data using the DeltaPsi2 software and using either a Maxwell-Garnett or a Bruggeman effective medium as full fit. This fitting procedure allows to extract εNC(λ) for the different values of N.
Once the NC layers are modeled, the next step is to include these NC layers in the multilayer structure, alternating with pure PS layers of known thickness and permittivity. This is done by considering the effective permittivity of a periodic stack of infinitely thin layers, which gives access to the anisotropic effective permittivity of the lamellar material.
The optical properties of the uniaxial effective medium, ε//(λ) = ε//'(λ) + i ε//"(λ) and εz(λ) = εz'(λ) + i εz"(λ), can be written [1][2][3] as in Equation 5-4 and Equation 5-5. In these equations, the thicknesses of the PS and NC layers are noted dPS and dNC respectively, as usually observed in lamellar block copolymer thin films. The permittivity of the PS layer, noted εPS, is fixed to the tabulated values, and the permittivity of the NC layer, noted εNC, is extracted by the fitting model mentioned in the previous paragraph.
Equation 5-4: ε// = (dPS εPS + dNC εNC) / (dPS + dNC)
Equation 5-5: 1/εz = (dPS/εPS + dNC/εNC) / (dPS + dNC)
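A minimal sketch of the two modelling steps just described (Maxwell-Garnett permittivity of the NC layer, then the thickness-weighted uniaxial averaging of Equations 5-4 and 5-5) is given below; the gold permittivity, volume fraction and layer thicknesses are placeholder values, not the fitted ones.

```python
def maxwell_garnett(eps_matrix, eps_inclusion, f):
    """Maxwell-Garnett effective permittivity of spherical inclusions (volume fraction f) in a matrix."""
    num = eps_inclusion + 2 * eps_matrix + 2 * f * (eps_inclusion - eps_matrix)
    den = eps_inclusion + 2 * eps_matrix - f * (eps_inclusion - eps_matrix)
    return eps_matrix * num / den

def uniaxial_average(eps_ps, eps_nc, d_ps, d_nc):
    """Thickness-weighted ordinary/extraordinary permittivities of the PS/NC stack (Eqs. 5-4, 5-5)."""
    w = d_nc / (d_ps + d_nc)
    eps_par = (1 - w) * eps_ps + w * eps_nc
    eps_z = 1.0 / ((1 - w) / eps_ps + w / eps_nc)
    return eps_par, eps_z

# Placeholder single-wavelength values: gold near its resonance, P2VP and PS matrices.
eps_nc = maxwell_garnett(eps_matrix=2.4, eps_inclusion=-6.0 + 2.0j, f=0.20)
eps_par, eps_z = uniaxial_average(eps_ps=2.5, eps_nc=eps_nc, d_ps=14.0, d_nc=14.0)
print(eps_nc, eps_par, eps_z)
```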
V.1.1.2 Experiment
Film preparation:
The PS25k-P2VP25k was dissolved in toluene at 6.0 wt%. The polymer solutions were stirred overnight and spin-coated onto cleaned wafers. The spin time was 30 seconds at a spin speed of 5000 rpm, with 3 seconds of acceleration and deceleration time. Self-assembly of the lamellar structure in the films was obtained by thermal annealing (180 °C in a vacuum oven) for 15 h or longer.
Au loading process
After we obtained the aligned and organized lamellar phases, 1) we immersed the film in a 3.0 wt% HAuCl4 solution in ethanol for 5 minutes, followed by rinsing with deionized water several times. 2) Afterwards, the films loaded with HAuCl4 were immersed for 30 s into a 0.65 wt% NaBH4 solution in water, in a water/ethanol mixture (1/5 by volume), or in methanol. We repeat steps 1) and 2) for N cycles (N is an integer from 0 to 30) to increase the concentration of Au NPs.
Measurement
The samples were measured by variable angle spectroscopic ellipsometry (VASE); the details are given in Chapter II.1.3. We used the UVISEL II configuration with AOI = 55°, 65° and 75°.
For cross-sectional views of the samples, the films on silicon wafers were broken in half manually and observed by SEM.
After all the measurements, the samples were dissolved in toluene to disperse the resulting Au NPs, which were then observed by TEM.
The resonance of the perpendicular component Re(εz) can be stronger while the parallel Re(ε//) becomes weaker, which can also be seen in the dashed lines of Figure V.8. However, the resonance of Re(εz) is not significant in our case, because the volume fraction of gold in the NC layers is not high enough and the gold nanoparticles are polydisperse. We can suppose that a more significant difference between Re(ε//) and Re(εz) could be observed if more gold particles were aggregated into nearly continuous gold layers. In order to compare the pure gold/polymer multilayer system with the gold nanocomposite prepared in the water/ethanol system, we set up a model with infinite alternating layers of pure PS of thickness 15 nm and pure gold of thickness dAu (from 1 nm to 21 nm). We then simulate the real and imaginary parts of the components ε// and εz of the lamellar thin films shown in Figure V.9(a).
The in-situ synthesis of Au NPs produces a better-defined nanostructure when the reduction step is done in an aqueous system. Tuning the proportion of ethanol and water in the solvent of the reduction step could lead to different structures or couplings of Au particles, which can induce a modulation of the optical properties of the materials.
V.1.2 Influence of reducing agent
V.1.2.1 Introduction
In order to better understand the mechanisms of the Au NP synthesis in the PVP layers, we varied the reducing agent in the reduction step. It is known that different reducing agents produce different shapes and sizes of Au NPs via the reduction of HAuCl4 [8][9][10]. In this study, sodium borohydride (NaBH4), trisodium citrate (Na3Ct) and L-ascorbic acid (AA) are used as reducing agents.
V.1.2.2 Experiment
Material
The following materials were obtained from Aldrich: auric acid (HAuCl4•xH2O), trisodium citrate (Na3Ct), sodium borohydride (NaBH4), L-ascorbic acid (AA). All chemicals and solvents were used without further purification.
V.2 Removal of irregular Au NPs from the surface of films at high N
As discussed in V.1, the best condition we found for the reducing step in the gold loading process is the use of an aqueous solution of sodium borohydride. This method works efficiently, but for large N values uncontrolled gold NPs are gradually deposited on top of the film surface as well as inside the layers. In addition, the first two layers from the surface are sometimes loaded with more gold particles than the other layers. In order to avoid such an inhomogeneous distribution of Au NPs in the layers and to remove the irregular particles on the surface, two different techniques applied after the Au NP loading process have been studied and are presented here: 1. reorganizing the block copolymer by thermal annealing; and 2. etching the Au NPs with chemicals.
V.2.1 Thermal annealing after the Au loading process
V.2.1.1 Introduction
As we know, block copolymers have the ability to spontaneously form periodic morphologies with controllable length scales. The self-assembly of block copolymers can be achieved by annealing the block copolymer thin film in a suitable environment, either at elevated temperature using thermal annealing [11][12][13], or by solvent annealing. This study is explained in the coming sections.
V.2.2.3 Aqua regia
Introduction
As we know, the traditional medium for dissolving gold is aqua regia. It is a mixture of three parts of concentrated hydrochloric acid to one part of concentrated nitric acid. The reactions involved are as follows [20]:
The last reaction is reversible; if the aqua regia solution is diluted with water, then
The unknown, which will be extracted from the fit of Model A to the experimental data, is the permittivity εNC(λ) of the NC layers. Once this function is determined, the optical properties of the uniaxial effective medium, ε//(λ) = ε//'(λ) + i ε//"(λ) and εz(λ) = εz'(λ) + i εz"(λ), can be calculated using the relations [1][2][3] as follows:
VI.1.2 Model B
The second model is a direct uniaxial effective medium approach. As demonstrated by the SEM images, the films are structurally uniaxial and homogeneous, and we can define their dielectric permittivity tensor with its ordinary (parallel to the substrate, εord = ε//) and extraordinary (normal to the substrate, εextraord = εz) components.
VI.1.2.1 Model B-1
The Model B-1 is a direct uniaxial effective medium approach (compared to the indirect approach using the fitted lamellar stack) in which we use the super-lattice results as initial guesses. It combines the surrounding half layers, i.e. the bottom dNC/2 and top dPS/2 layers, into the effective uniaxial layer, in order to account for the whole self-assembled diblock copolymer film. The uniaxial permittivity (ε//, εz) of the medium EM2 is determined by fitting the Model B-1 to the SE data using the BSPLINE function available in the CompleteEASE software and using the results of Model A as an initial guess. This fitting procedure allows us to extract ε//(λ) and εz(λ) for the different values of N.
VI.1.2.2 Model B-2
We assume that the film is a uniaxial anisotropic layer. The extraordinary permittivity εz(λ) used in Model B is shown in Figure VI.5.
P is the Cauchy principal value containing the residues of the integral at poles located on the lower half of the complex plane and along the real axis, with ωoj the resonance frequency, fj an amplitude factor and γj the dissipation term.
In the Maxwell-Garnett effective medium approximation, the resonance position of gold is fixed, which leads to disagreement with the experimental data for some samples. However, εMG gives us an initial guess of the volume fraction of gold inside the layers. The dielectric function constructed with a Tauc-Lorentz function [8] plus one oscillator is more flexible in the resonance position, which may give a better approach.
The data were analyzed using the DeltaPsi2 software from Horiba Scientific.
VI.2 Dimensions and optical properties
As discussed in the previous chapters, we produce well-controlled lamellar structures with film thicknesses from ca. 100 nm to 700 nm and lamellar periods from ca. 17 nm to 70 nm.
VI.2.1 Influence of the film thickness
VI.2.1.1 Samples structure
The studied samples were all fabricated from poly(styrene)-block-poly(2-vinylpyridine) (Mn 25000-25000, PDI 1.06), with a lamellar period of 30 nm. The description of the fabrication can be found in Chapter III.1.1.2, and the film thicknesses were measured by SEM and SE (spectroscopic ellipsometry). The number of impregnation cycles is written as N. Note that the thickness extracted from SE is measured before the gold impregnation process, when the film is not organized yet, while the values from SEM are measured after the gold impregnation process. Also, the SE measurement is an average over a large area of the sample, while the measurements done on the SEM images are very local.
We show here the comparison of the results obtained on films of PS-b-P2VP 25k-25k of two different thicknesses. We repeated the loading process for N=10 cycles (see details of the loading process in Chapter IV.2.2), and the volume fraction of gold in the PVP layers can be estimated at ca. 20% (measurement details are given in Chapter IV.3.3). We can see from Figure VI.22 that the non-annealed (N=0) and annealed (AN=0) samples give the same optical response before gold incorporation (blue lines). After 10 cycles of impregnation (black lines), a broad peak appears at a photon energy of 2 eV (λ=620 nm), which we attribute to the plasmon resonance of the gold nanoparticles, but it is lower in the annealed film (black dashed line) than in the film without annealing (normal gold loading process, black solid line). As the gold loading goes up to 20 cycles of impregnation (red lines), the resonance increases for the film without annealing (red solid line) and shifts to a photon energy of 1.65 eV (λ=750 nm), while it has completely disappeared for the film with the annealing process (red dashed lines), which is probably due to the gold nanoparticles fusing and becoming much less well defined. In the model, the thicknesses of the (PS) and (NC) layers are dpoly = dNC = 14 nm, with an additional (NC) layer of 6 nm along the substrate surface and an additional (PS) layer of 6 nm along the air interface.
Table VI.1 Samples used for analyzing the effect of the film thickness
The additional homogeneous layer of thickness dSAu in the model A is to account for the uncontrolled gold deposit layer on top of films. The thickness of this Au pollution layer (dSAu) was found to increase from 4 nm at N=5, to 20 nm at N=25, which is in agreement with the SEM observations. In fact, for N ≤ 15, the results show very little variation when removing the top layer from the model (dSAu=0), confirming its presence is negligible.
The resulting dielectric functions are consistent with what can be expected for a disordered composite of spherical inclusions within a homogeneous matrix. In fact, such a composite can be described, at least at small gold volume fraction, by the Maxwell-Garnett effective medium function [16] (Equation 6-1). The comparison shows how the extracted values of εNC compare with those of εMG, calculated using Equation 6-1 and a dielectric function for gold modified from the Johnson & Christy tabulated data [4,7] in order to take into account finite size effects, as used before.
For small N ≤ 5 (small gold fraction), the agreement is very good over all the red part of the spectrum, for wavelengths above the resonance value at ~580 nm. When pushing the Maxwell-Garnett effective medium approximation (MG-EMA) to higher N, a degraded agreement is naturally expected; it remains nevertheless reasonably good, especially for wavelengths above 580 nm. These partial agreements provide rough estimates of the gold loading concentration: we find N=5 to correspond to a MG-extracted value of f=7%, and N=10 and N=25 to approximately f=18% and f=31%, respectively. This is in good agreement with the QCM results on similar systems (see Chapter IV.3.3). The optical properties of the NC layers are dominated by a resonance at λ~580 nm, close to that expected [13] for the plasmon resonance of the gold NPs present in the NC layers, with an amplitude increasing with the number of loading cycles N, as expected. In particular, the NC medium presents a pseudo-metallic behavior beyond N=15, with εNC' < 0 over a large wavelength range, of width 80 nm for N=25 and 50 nm for N=20. This "pseudo-metal" behavior, being resonant in nature, is associated with a significant level of losses, as can also be seen in Figure VI.31(a).
VI.4.2.3 Uniaxial effective medium
Figure VI.32 Real (upper plots) and imaginary (lower plots) parts of the components ε// (left) and εz (right) of the lamellar nanoplasmonic thin films, as computed using Equation 6-2 and Equation 6-3 from the Model A SE extractions for different values of N between 5 and 25 (MG-extracted f between 7% and 31%). The resonance amplitude varies as N increases, due to the increasing volume fraction of introduced plasmonic nanoparticles.
Following the second step of Model A, the dielectric constants of the uniaxial effective medium, ε//(λ) = ε//'(λ) + i ε//"(λ) and εz(λ) = εz'(λ) + i εz"(λ), are first determined using the optical properties of the fitted Au-loaded polymer layers (NC) and the PS layer, through Equations 6-2 and 6-3. We consider three regions of strong anisotropy, denoted A: εz' > ε//' > 0 (for example λ=481 nm), B: ε//' < 0 < εz' (λ=539 nm) and C: ε//' > εz' > 0 (λ=670 nm), indicated on the figure.
Continuous and dashed lines correspond to opposite solutions of Equation 1-22. The plots (a)-(d) (resp. (e)-(h)) correspond to fictional elliptical (resp. hyperbolic) cases with ε//' and εz' set to the values for case C (resp. B), and ε//"=4 εz"=0 (a) and (e); ε//"=4 εz"=0.8 (b) and (f); ε//"=4 εz"=4 (c) and (g); and ε//"=4 εz"=8 (d) and (h).
To understand the significance of these dispersion curves, it is useful to recall the ideal lossless cases: the isofrequency surface is then a two-fold hyperboloid, with a forbidden gap in kx, as expected for a "metallic" [18] or type II [19] hyperbolic medium, when the permittivity tensor ε has two negative and one positive components.
To depart from these idealized situations, let us now introduce a small amount of losses in the materials.
| 278,686 | [ "772803" ] | [ "525419" ] |
01683946 | en | [ "chim", "sdu", "sde" ] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01683946/file/Mattei%20et%20al%2C%202018.pdf | Coraline Mattei
Henri Wortham
Etienne Quivet
email: [email protected]
Heterogeneous atmospheric degradation of pesticides by ozone: influence of relative humidity and particle type
Keywords: pesticides, heterogeneous reactivity, ozonolysis, kinetics, silica particles, relative humidity
In the atmosphere, pesticides can be adsorbed on the surface of particles, depending on their physico-chemical properties. They can react with atmospheric oxidants such as ozone, but the parameters influencing the degradation kinetics are not yet well understood. In this study, the heterogeneous ozonolysis of eight commonly used pesticides (i.e., difenoconazole, tetraconazole, cyprodinil, fipronil, oxadiazon, pendimethalin, deltamethrin, and permethrin) adsorbed on hydrophobic and hydrophilic silicas and on Arizona dust was investigated at relative humidities ranging from 0% to 80%. Under the experimental conditions, only cyprodinil, deltamethrin, permethrin and pendimethalin were degraded by ozone. Second-order kinetic constants calculated for the pesticides degraded by ozone ranged from (4.7 ± 0.4) × 10^-20 cm^3 molecule^-1 s^-1 (pendimethalin, hydrophobic silica, 55% RH) to (2.3 ± 0.4) × 10^-17 cm^3 molecule^-1 s^-1 (cyprodinil, Arizona dust, 0% RH). The results obtained can contribute to a better understanding of the atmospheric fate of pesticides in the particulate phase and show the importance of taking humidity and particle type into account for the determination of pesticide atmospheric half-lives.
Introduction
Pesticides are used worldwide to control pests in agricultural production (e.g., vineyards, orchards, and farming) and in many nonagricultural settings (e.g., homes, public spaces, gardens and parks, and industrial areas). The important increase in their utilization and their potentially adverse human health effects make their environmental fate a topic of high interest.
Atmospheric pesticide contamination has been observed in urban and rural areas at concentration levels from a few picograms per cubic meter (pg m-3) to several nanograms per cubic meter (ng m-3) [START_REF] Coscollà | Particle size distributions of currently used pesticides in a rural atmosphere of France[END_REF][START_REF] Coscollà | Particle size distributions of currently used pesticides in ambient air of an agricultural Mediterranean area[END_REF][START_REF] Estellano | Assessing Levels and Seasonal Variations of Current-Use Pesticides (CUPs) in the Tuscan Atmosphere, Italy, Using Polyurethane Foam Disks (PUF) Passive Air Samplers[END_REF][START_REF] Zivan | Airborne organophosphate pesticides drift in Mediterranean climate: The importance of secondary drift[END_REF][START_REF] Zivan | Primary and secondary pesticide drift profiles from a peach orchard[END_REF].
Once in the atmosphere, pesticides partition among the gas, particulate, and liquid phases depending on their physicochemical properties and the environmental conditions [START_REF] Sanusi | Gas-particle Partitioning of Pesticides in Atmospheric Samples[END_REF][START_REF] Scheyer | Analysis of Some Organochlorine Pesticides in an Urban Atmosphere (Strasbourg, East of France)[END_REF][START_REF] Yao | Pesticides in the Atmosphere Across Canadian Agricultural Regions[END_REF][START_REF] Vogel | Pesticides in Rain in Four Agricultural Watersheds in the United States[END_REF]. Of importance, [START_REF] Sauret | Study of the Effects of Environmental Parameters on the Gas/Particle Partitioning of Current-Use Pesticides in Urban Air[END_REF] outlined that pesticides recently placed on the market are less volatile than the older ones (e.g., organochlorine compounds) and are mainly associated to the atmospheric particulate phase. Atmospheric sinks of pesticides include wet or dry deposition [START_REF] Majewski | Pesticides in Mississippi air and rain: A comparison between 1995 and 2007[END_REF], and direct [START_REF] Borras | Atmospheric degradation of the organothiophosphate insecticide -Pirimiphos-methyl[END_REF] or indirect [START_REF] Socorro | Heterogeneous OH Oxidation, Shielding Effects, and Implications for the Atmospheric Fate of Terbuthylazine and Other Pesticides[END_REF] photochemical degradation.
The indirect photochemical lifetime of pesticides in the particulate phase (i.e., adsorbed on atmospheric particles) is mostly governed by their chemical reactivity toward hydroxyl and nitrate radicals, and ozone as photolytic intermediate by-products.
The atmosphere contains particles with concentrations from few thousands to several hundreds of thousands of particles per cubic centimeter. Most pesticides are mainly accumulated in the ultrafine-fine (below 1 µm) size fractions [START_REF] Coscollà | Particle size distributions of currently used pesticides in a rural atmosphere of France[END_REF][START_REF] Coscollà | Particle size distributions of currently used pesticides in ambient air of an agricultural Mediterranean area[END_REF]. Particles are also exposed to water vapor in atmosphere (at a typical relative humidity (RH) range from 30% to 100% at ambient temperature) that produces mono or multilayer water film on their surfaces depending on the RH [START_REF] Keskinen | On-Line Characterization of Morphology and Water Adsorption on Fumed Silica Nanoparticles[END_REF]. These water layers can have important consequences for heterogeneous reactivity in the atmosphere as the kinetics of many reactions is strongly dependent on the phase of aerosol particles [START_REF] Knipping | Experiments and Simulations of Ion-Enhanced Interfacial Chemistry on Aqueous NaCl Aerosols[END_REF]. Atmospheric particles are diverse in origin, size, and chemical composition [START_REF] Seinfeld | Atmospheric Chemistry and Physics: From Air Pollution to Climate Change[END_REF].
They originate from natural sources such as desert dust, volcanic ashes, and marine aerosols, or from anthropogenic sources as vehicular traffic, biomass burning, and fossil fuel combustion [START_REF] Salameh | PM 2.5 chemical composition in five European Mediterranean cities: a 1-year study[END_REF].
Several kinetic studies of heterogeneous ozonolysis of pesticides (Table 1; [START_REF] Meng | Heterogenous ozonation of suspended malathion and chlorpyrifos particles[END_REF][START_REF] El Masri | A mechanistic and kinetic study of the heterogeneous degradation of chlorpyrifos and chlorpyrifos oxon under the influence of atmospheric oxidants: ozone and OH-radicals[END_REF][START_REF] Bouya | Kinetics of the Heterogeneous Photo Oxidation of the Pesticide Bupirimate by OH-Radicals and Ozone under Atmospheric Conditions[END_REF]Socorro et al., 2015 and reference therein) were carried out under different experimental conditions as ozone mixing ratio (e.g., from 0.4 to 86 ppm), RH (e.g., from 0% to 80% RH and uncontrolled), temperature (e.g., from 19°C to 26°C), and type of particles (e.g., hydrophilic and hydrophobic silicas, quartz surfaces, ZnSe crystals, and azelaic acid aerosols). As a result, literature data is often difficult to compare.
The aim of the present work is to better understand the influence of humidity and particle type on the heterogeneous degradation of pesticides by ozone. The experimental protocol and the eight studied pesticides (i.e., difenoconazole, tetraconazole, cyprodinil, fipronil, oxadiazon, pendimethalin, deltamethrin, and permethrin) are identical as those used in previous studies [START_REF] Socorro | Heterogeneous Reactions of Ozone with Commonly Used Pesticides Adsorbed on Silica Particles[END_REF](Socorro et al., , 2016b)). Pesticides are representative of major applications, i.e., herbicides, insecticides, and fungicides, and were chosen for their worldwide utilization but also for their physico-chemical properties. The heterogeneous ozonolysis was carried out at 25°C with a range of relative humidity from 0% to 80% RH. Three kinds of silica particles (i.e., hydrophilic and hydrophobic silicas, and Arizona dust) were used because silica is regarded as one of the main constituents of atmospheric mineral dust and widely used as the model particles for heterogeneous reactions.
Experimental section
Chemicals
Cyprodinil (purity 99.8%), deltamethrin (99.7%), difenoconazole (97.0%), fipronil (97.5%), oxadiazon (99.9%), pendimethalin (98.8%), permethrin (98.3%), and tetraconazole (99.0%) were purchased from Sigma-Aldrich (PESTANAL®, analytical standard) and were used as received. The chemical structures of the pesticides under study are depicted in Fig. SI1 and their physico-chemical properties are given in Table SI1.
Particles
Three commercial silica particles were used to mimick atmospheric mineral aerosols: AEROSIL R812 (purchased from Degussa) thereafter called "hydrophobic silica", AEROSIL 255 (Evonik Industries) called "hydrophilic silica", and ISO 12103-1, A1 Ultrafine Test Dust (Powder Technology Inc.) called "Arizona dust". Composition: Hydrophobic and hydrophilic silicas (> 99.8 wt%), and Arizona dust (69-77 wt%) contains mainly SiO2. Arizona dust is also composed of other various mineral oxides (Al2O3 (8-14 wt%), Fe2O3 (4-7 wt%), Na2O (1-4 wt%), CaO (2.5-5.5 wt%), K2O (2-5 wt%), MgO (1-2 wt%), and TiO2 (0-1 wt%)) and can be thought of as a model dust that originates from deserts [START_REF] Welti | Influence of Particle Size on the Ice Nucleating Ability of Mineral Dusts[END_REF][START_REF] Coscollà | Particle size distributions of currently used pesticides in a rural atmosphere of France[END_REF][START_REF] Coscollà | Particle size distributions of currently used pesticides in ambient air of an agricultural Mediterranean area[END_REF]. Particle size: The mean primary particle size of hydrophobic and hydrophilic silica particles ranges from 5 nm to 50 nm [START_REF] Evonik | AEROSIL® -Fumed Silica[END_REF]. However, most of them can be arranged as agglomerates with an aggregate size roughly measured mainly around 5 µm, sometimes up to 25 μm (Fig. SI2-SI4). Arizona dust particles are distributed in all size classes and 95.5-97.5 vol% are less than 11 μm. These size ranges are consistent with field observations that show that pesticides are distributed in the fine (0.1-1 µm), ultrafine (0.03-0.1 µm), and coarse (1-10 µm) particle size fraction, and that no pesticides were detected in the size fraction >10 µm [START_REF] Xu | Size distribution and seasonal variations of particle-associated organochlorine pesticides in Jinan, China[END_REF][START_REF] Coscollà | Particle size distributions of currently used pesticides in a rural atmosphere of France[END_REF][START_REF] Coscollà | Particle size distributions of currently used pesticides in ambient air of an agricultural Mediterranean area[END_REF]. Specific surface area (SSA; BET method):
Hydrophobic and hydrophilic silica particles have a specific surface area of (260 ± 30) m 2 g -1 and (255 ± 25) m 2 g -1 , respectively [START_REF] Evonik | AEROSIL® -Fumed Silica[END_REF]. Arizona dust has a specific surface area of (20.6 ± 0.2) m 2 g -1 (measured). Surface chemistry and hygroscopicity: Surfaces of hydrophobic and hydrophilic silica particles are mainly covered by silanol and siloxane groups. Silanol groups have a hydrophilic nature (i.e., readily mix with water) whereas the siloxane groups are chemically inert and hydrophobic. [START_REF] Vlasenko | Generation of Submicron Arizona Test Dust Aerosol: Chemical and Hygroscopic Properties[END_REF] suggest that Arizona dust particles are essentially non-hygroscopic (i.e., hydrophobic) due to the low amount of soluble material.
Particles coating
Hydrophobic silica, hydrophilic silica, and Arizona dust particles were independently coated with pesticides according to a liquid/solid adsorption. In an amber Pyrex bulb of 500 cm 3 , 600 mg of particles were mixed with 6 mL of a pesticide solution (all 8 pesticides at a concentration of 20 mg L -1 in dichloromethane (for HPLC, ≥ 99.8%, Sigma-Aldrich)), i.e., the load of pesticides on silica particles was about 0.02% by weight) and 40 mL of dichloromethane. After a 5-min ultrasound treatment, dichloromethane was evaporated by a rotary evaporator (Rotavapor R-114, Büchi) at 40°C and 850 mbar. This process allows a reproducible coating of the pesticides on the particle's surface [START_REF] Socorro | Heterogeneous Reactions of Ozone with Commonly Used Pesticides Adsorbed on Silica Particles[END_REF]. Assuming a uniform particle surface coverage for the pesticide molecules and a spherical geometry for particles, the percentage of the particle surface coated with individual pesticide was between 0.3% and 0.5% of the monolayer for hydrophobic and hydrophilic silicas (See calculation in supplementary information). Then, the total coated particle surface was 2.8% and 2.9% for hydrophobic silica and hydrophilic silica, respectively, which is much less than a monolayer. Due to their weak SSA, the percentage of the Arizona dust particle surface coated with individual pesticide was between 3.4% and 5.7%, which corresponds to a larger total coated particle surface (35.9%).
Ozonolysis experiments
The experimental setup was previously described in detail in [START_REF] Socorro | Heterogeneous Reactions of Ozone with Commonly Used Pesticides Adsorbed on Silica Particles[END_REF] (Fig. SI5).
Briefly, a 500 cm 3 amber Pyrex bulb containing about 500 mg of dried particles coated with pesticides was fixed to a modified rotary evaporator and placed in a thermostated water bath (25 ± 1°C). Ozone was generated by passing a flow of purified air through an ozone generator.
Particles coated with pesticides were exposed to an air flux containing an ozone mixing ratio of 400 ppb (i.e., 9.85 • 10 12 molecules cm -3 ) continuously monitored with a photometric ozone analyzer. The relative humidity (RH) ranged from 0% to 80%. The required humidity was obtained by mixing at different rate of dry and wet gaseous fluxes of purified air. The resulting humidity was measured with a humidity probe with uncertainties of 2% and the sum of the three air fluxes in the reaction flask was 500 mL min -1 . Experiments were all conducted for a duration of 26 h.
Extraction and pesticides quantification
During ozone exposure, 30 mg aliquots of particles were regularly sampled in order to quantify the remaining adsorbed pesticides on their surface. Each 30 mg aliquot of particles was individually introduced in a 33 mL stainless steel cell with an internal standard solution (Triphenyl phosphate, 99.9%, Sigma-Aldrich) and pesticides were extracted by accelerated solvent extraction (ASE 350, Dionex) with dichloromethane. Afterwards, the extracts were concentrated under a nitrogen flow using a concentration workstation (TurboVap II, Biotage).
Analysis of the obtained solutions were realized using gas chromatography coupled to tandem mass spectrometry (GC/MS-MS), with a Trace GC Ultra (Thermo Scientific) coupled to a TSQ Quantum™ Triple Quadrupole (Thermo Scientific) using electron impact ionization (70 eV).
More details about ASE extraction, concentration, and GC/MS-MS analysis are available in [START_REF] Socorro | Heterogeneous Reactions of Ozone with Commonly Used Pesticides Adsorbed on Silica Particles[END_REF].
Determination of second-order rate constants and half-lives toward ozone
Considering that ozone was continuously drifting in the reactor and that it was used in excess, a pseudo-first order kinetic constant was assumed:
𝐿𝑛 ( [𝑃𝑒𝑠𝑡𝑖𝑐𝑖𝑑𝑒 (𝑎𝑑𝑠) ] 𝑡 [𝑃𝑒𝑠𝑡𝑖𝑐𝑖𝑑𝑒 (𝑎𝑑𝑠) ] 0 ) = -(𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼 + 𝑘 𝑑𝑒𝑠 (𝑝𝑎𝑟𝑡) 𝐼 + 𝑘 ℎ𝑦𝑑 (𝑝𝑎𝑟𝑡) 𝐼 ) × 𝑡 (1)
where 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼 (s -1 ) is the pseudo-first order rate constant of the reaction between ozone and the pesticides, 𝑘 𝑑𝑒𝑠 (𝑝𝑎𝑟𝑡) 𝐼 (s -1 ) is the first order desorption rate constant, 𝑘 ℎ𝑦𝑑 (𝑝𝑎𝑟𝑡) 𝐼 (s -1 ) is the pseudo-first order hydrolysis rate constant, t (s) is the time of ozone exposure, and
[Pesticide(ads)]t/[Pesticide(ads)]0 is the pesticide concentration normalized to the initial pesticide concentration.
The pseudo-first-order reaction rate constants for the heterogeneous ozonolysis of the particlephase pesticides were determined by analyzing their corresponding temporal profiles in time frame from 0 to 26 h, for 400 ppb ozone mixing ratio.
In order to determine the second-order rate constants 𝑘 𝑂 3(𝑝𝑎𝑟𝑡)
𝐼𝐼
, the experimental pseudo-first order reaction rate constants 𝑘 𝑂 3(𝑝𝑎𝑟𝑡)
𝐼
were plotted as a function of the ozone concentrations.
To simulate the kinetic mechanisms of the heterogeneous reactions the Langmuir-Rideal (L-R, known also as Eley-Rideal) and Langmuir-Hinshelwood (L-H) models are commonly used. In a previous work [START_REF] Socorro | Heterogeneous Reactions of Ozone with Commonly Used Pesticides Adsorbed on Silica Particles[END_REF], it was demonstrated that both L-R and L-H models show comparable reactivity for the same pesticide as the ones used in this study. Fig. 1 shows the pseudo-first order rate constant (𝑘 𝑂 3(𝑝𝑎𝑟𝑡)
𝐼
) determined in this work and reported by [START_REF] Socorro | Heterogeneous Reactions of Ozone with Commonly Used Pesticides Adsorbed on Silica Particles[END_REF] as a function of ozone concentrations for cyprodinil adsorbed on hydrophilic silica. The same good agreement between the two studies is observed for the eight pesticides under study. As a result, we assumed to calculate the second-order rate constants 𝑘 𝑂 3(𝑝𝑎𝑟𝑡)
𝐼𝐼
(cm 3 molecule -1 s -1 ) with only one ozone mixing ratio, i.e., 400 ppb, using a Langmuir-Rideal model.
The second-order rate constant was calculated as follows:
𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼 = 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 × [𝑂 3(𝑔𝑎𝑠) ] (2)
where [𝑂 3(𝑔𝑎𝑠) ] is the constant ozone mixing ratio of 400 ppb (i.e., 9.85 × 10 12 molecules cm - 3 ).
Half-lives corresponding to the heterogeneous atmospheric degradation of pesticides toward ozone were calculated for each specific experimental condition. Ozone mixing ratio used for calculation was 40 ppb [START_REF] Vingarzan | A review of surface ozone background levels and trends[END_REF], and half-lives (s) were obtained with the following equation:
t ½ O 3(part) = 𝑙𝑛2 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 × [𝑂 3(𝑔𝑎𝑠) ]
(3)
Results and discussion
Desorption and hydrolysis of pesticides adsorbed on particles
To estimate the loss of pesticide by desorption from particles and by hydrolysis, experiments were carried out for the three kinds of particles, with 500 mL min
𝐼
) at different humidity level are presented for each kind of particles in Fig. 2. They ranged from (2.0 ± 4.2) × 10 -7 s -1 to (3.4 ± 1.7) × 10 -6 s -1 , from (0.5 ± 2.2) × 10 -7 s -1 to (3.8 ± 0.3) × 10 -6 s -1 , and from (3.7 ± 0.7) × 10 -6 s -1 to (2.4 ± 0.4) × 10 -5 s -1 , for hydrophilic, hydrophobic, and Arizona dust particles, respectively. SI1) and their hydrolysis half-lives higher than 30 days (PPDB, 2017). As a result, pesticide losses observed in our experimental conditions are expected to be due to desorption rather than hydrolysis.
Fig. 3 shows that the RH does not significantly influence pesticides desorption when adsorbed on hydrophilic silica and Arizona dust while a slight influence is measured on hydrophobic particles. On the other hand, higher (𝑘 𝑑𝑒𝑠 (𝑝𝑎𝑟𝑡)
𝐼
+ 𝑘 ℎ𝑦𝑑 (𝑝𝑎𝑟𝑡)
𝐼
) are observed for Arizona dust than for hydrophilic silica particles.
These results are tricky to explain but due to their weak SSA, a larger fraction coated particle surface (35.9%) is expected for the Arizona dust which promotes the formation of pesticide clusters on particle surface. This possibly makes desorption easier and more important than for the other particles. The nature of the surface (greater proportion of metal oxides) could also play a key role. For hydrophobic silica, the RH dependence could be induced by the expected small amount of adsorbed water. Indeed, at high RH, a competition may take place between water and pesticide which would increase desorption. This hypothesis is contradicted by the results observed for hydrophilic particles, unless it is presumed that a larger amount of adsorbed water (as is the case even for low RH) makes it possible to keep the desorbed pesticides in the liquid phase.
All kinetic constants measured during ozonolysis experiments were corrected using rate constants of desorption and hydrolysis (𝑘 𝑑𝑒𝑠 (𝑝𝑎𝑟𝑡)
𝐼
+ 𝑘 ℎ𝑦𝑑 (𝑝𝑎𝑟𝑡)
𝐼
), therefore, all values presented below only refer to ozone degradation.
Heterogeneous ozonolysis
In order to evaluate the influence of humidity on the heterogeneous reaction rates of pesticides, ozonolysis experiments were conducted at relative humidity level between 0% and 80% RH for the three kinds of particles, i.e., hydrophobic silica, hydrophilic silica, and Arizona dust.
0% RH is an unrealistic value as water is always present in the atmosphere. However, most studies of the heterogeneous reactivity of pesticides toward ozone were carried out closer to 0% RH [START_REF] Pflieger | Kinetic Study of Heterogeneous Ozonolysis of Alachlor, Trifluralin and Terbuthylazine Adsorbed on Silica Particles under Atmospheric Conditions[END_REF][START_REF] Pflieger | The Heterogeneous Ozonation of Pesticides Adsorbed on Mineral Particles: Validation of the Experimental Setup with Trifluralin[END_REF][START_REF] Pflieger | Ozonation of isoproturon adsorbed on silica particles under atmospheric conditions[END_REF][START_REF] Segal-Rosenheimer | Heterogeneous Oxidation of the Insecticide Cypermethrin as Thin Film and Airborne Particles by Hydroxyl Radicals and Ozone[END_REF]. From the analysis of water adsorption on the surface of mineral oxides, [START_REF] Goodman | Spectroscopic Study of Nitric Acid and Water Adsorption on Oxide Particles: Enhanced Nitric Acid Uptake Kinetics in the Presence of Adsorbed Water[END_REF] have shown that the number of water layers is approximately one monolayer at 20% RH, two to three adsorbed water layers at 50% RH, and three to four at 85% RH. 80% RH seems to be a realistic level where the aerosol is close to the state of a deliquescent aerosol.
Under these experimental conditions, tetraconazole, fipronil, oxadiazon, and difenoconazole,
were not or slightly degraded (𝑘 𝑂 3(𝑝𝑎𝑟𝑡)
𝐼𝐼
≤ 10 -20 cm 3 molecule -1 s -1 ; t½ ≥ 800 days) whatever the nature of particles. On the other hand, cyprodinil, deltamethrin, permethrin, and pendimethalin, showed significant losses due to ozone exposure when adsorbed on all three particle types.
Second-order rate constants 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 ranged from (4.7 ± 0.4) × 10 -20 cm 3 molecule -1 s -1 (pendimethalin, hydrophobic silica, 55% RH) to (2.3 ± 0.4) × 10 -17 cm 3 molecule -1 s -1 (cyprodinil, Arizona dust, 0% RH). The order of magnitude of these constants are in accordance with previous pesticide heterogeneous ozonolysis studies conducted on silica particles [START_REF] Socorro | Heterogeneous Reactions of Ozone with Commonly Used Pesticides Adsorbed on Silica Particles[END_REF] and reference therein), quartz surfaces (Al [START_REF] Rashidi | The heterogeneous photo-oxidation of difenoconazole in the atmosphere[END_REF][START_REF] Rashidi | Heterogeneous Ozonolysis of Folpet and Dimethomorph: A Kinetic and Mechanistic Study[END_REF][START_REF] El Masri | A mechanistic and kinetic study of the heterogeneous degradation of chlorpyrifos and chlorpyrifos oxon under the influence of atmospheric oxidants: ozone and OH-radicals[END_REF][START_REF] Bouya | Kinetics of the Heterogeneous Photo Oxidation of the Pesticide Bupirimate by OH-Radicals and Ozone under Atmospheric Conditions[END_REF], ZnSe crystals (Segal-Rosenheimer and Dubowski, 2007), azelaic acid aerosols [START_REF] Gan | Products and kinetics of the heterogeneous reaction of suspended vinclozolin particles with ozone[END_REF][START_REF] Meng | Heterogenous ozonation of suspended malathion and chlorpyrifos particles[END_REF][START_REF] Yang | Heterogeneous Reactivity of Suspended Pirimiphos-Methyl Particles with Ozone[END_REF]Yang et al., , 1012)), and grape berries [START_REF] Walse | Remediation of Fungicide Residues on Fresh Produce by Use of Gaseous Ozone[END_REF].
It may also be noted that both pyrethroids (deltamethrin and permethrin) are degraded by ozone, probably due to an attack of ozone on the alkene double bond leading to the formation of a primary ozonide (Socorro et al., 2016b), while no significant degradation is observed in our experimental conditions for both triazoles (difenoconazole and tetraconazole). These results show the effect of the chemical structures on pesticide reactivity.
Influence of humidity
Fig. 3 represents the variation of the second-order rate constants in function of relative humidity for the four pesticides which showed significant losses by ozone exposure, i.e., cyprodinil, deltamethrin, permethrin, and pendimethalin, on all three particles types.
Fig. 3: Second-order rate constants for the ozonolysis of pesticides adsorbed on hydrophobic silica, hydrophilic silica, and Arizona dust at different relative humidity level (▲:
Hydrophilic silica; : Hydrophobic silica; : Arizona Dust)
To characterize the influence of RH, the ratio between the lowest and the highest 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 obtained at different RH was calculated for each pesticide adsorbed on each particle type.
Hydrophilic silica
Second-order rate constants 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 on hydrophilic silica ranged from (2.3 ± 1.4) × 10 -19 cm 3 molecule -1 s -1 (cyprodinil, 80% RH) to (4.1 ± 0.4) × 10 -18 cm 3 molecule -1 s -1 (pendimethalin, 5% RH). The four pesticides exhibit the same trend, characterized by a decrease of their reactivity constant at high relative humidity. For all pesticides, the maximum ratio was obtained and 5% RH were not significantly different for all pesticides. Considering these ratios, deltamethrin, permethrin, and cyprodinil reacted 3, 4, and 4 times faster at 5% RH than at 80% RH, respectively. For pendimethalin, 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 varied between (3.3 ± 0.8) × 10 -19 cm 3 molecule - 1 s -1 (80% RH) and (4.1 ± 0.4) × 10 -18 cm 3 molecule -1 s -1 (5% RH), which corresponds to a degradation 12 times faster when the humidity was low.
Hydrophobic silica
Second-order rate constants 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 ranged from (4.7 ± 0.4) × 10 -20 cm 3 molecule -1 s -1 (pendimethalin, 55% RH) to (2.6 ± 0.2) × 10 -18 cm 3 molecule -1 s -1 (cyprodinil, 0% RH). For the four pesticides, the highest 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 was obtained at 0% RH. As observed for hydrophilic silica particles, pesticides reacted slower when the relative humidity increased. The maximum ratios obtained between the extreme 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 were close to those obtained for hydrophilic silica.
Deltamethrin, permethrin, cyprodinil, and pendimethalin reacted 2, 2, 3, and 16 times faster at the lowest relative humidity, respectively. For deltamethrin and permethrin, 𝑘 𝑂 3(𝑝𝑎𝑟𝑡)
𝐼𝐼
were not significantly different between 0% and 80% RH, despite a significant variation with the intermediate RH levels.
Arizona dust
𝑘 𝑂 3(𝑝𝑎𝑟𝑡)
𝐼𝐼 measured for Arizona dust were characterized by poor reproducibility, within a range of RSD from 3% to 234%. The difficulty to correctly assess the kinetic constants is potentially due to the ozone degradation on particles surfaces inducing a slow drift of its concentration throughout the experiments. Indeed, during the 26h of experiment, the ozone mixing ratio decreased continuously and could vary up to 33%. This important drift can be compared to those obtained for hydrophobic and hydrophilic silicas that is lower than 4%. This behavior can be associated to the chemical composition of the particles since silica particles contains mainly SiO2 (> 99.8wt%) while Arizona dust contains silica but also significant wt% of Al2O3 and Fe2O3). Those results are in agreement with previous studies [START_REF] Michel | Reactive Uptake of Ozone on Mineral Oxides and Mineral Dusts[END_REF]Usher et al., 2003) which showed that the relative reactivity of ozone followed the trend Fe2O3 > Al2O3 > SiO2. Anyway, the important RSD observed for Arizona particles makes it difficult to conclude about the influence of humidity on deltamethrin, permethrin, and pendimethalin for which no significant variation of 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 was observed. For cyprodinil, 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 varied by two orders of magnitude between dry conditions, i.e., (2.3 ± 0.3) × 10 -17 cm 3 molecule -1 s -1 (0% RH) and humid conditions, i.e., (0.9 ± 2.2) × 10 -17 cm 3 molecule -1 s -1 (30% RH) and (2.3 ± 0.1) × 10 -19 cm 3 molecule -1 s -1 (80% RH).
Overall, whatever the nature of the mineral particles, when an RH effect was observed, the same trend was observed in this work. The more the RH increased, the slowest the heterogeneous ozonolysis was. The influence of humidity on the heterogeneous ozonolysis degradation was already investigated for instance for PAH on many surfaces [START_REF] Pitts | Factors Influencing the Reactivity of Polycyclic Aromatic Hydrocarbons Adsorbed on Filters and Ambient POM with Ozone[END_REF][START_REF] Pöschl | Interaction of Ozone and Water Vapor with Spark Discharge Soot Aerosol Particles Coated with Benzo[a]pyrene: O3 and H2O Adsorption, Benzo[a]pyrene Degradation, and Atmospheric Implications[END_REF][START_REF] Kwamena | Kinetics of Surface-Bound Benzo[END_REF]. To our knowledge, only [START_REF] Segal-Rosenheimer | Heterogeneous Ozonolysis of Cypermethrin Using Real-Time Monitoring FTIR Techniques[END_REF] have worked on the heterogeneous ozonolysis of cypermethrin, a pyrethroid insecticide, adsorbed on ZnSe crystals and, to a lesser extent, the influence of the humidification rate on ozonation was investigated for seeds containing pesticides [START_REF] Bourgin | Study of the Degradation of Pesticides on Loaded Seeds by Ozonation[END_REF].
In their study, [START_REF] Pitts | Factors Influencing the Reactivity of Polycyclic Aromatic Hydrocarbons Adsorbed on Filters and Ambient POM with Ozone[END_REF] exposed five PAH adsorbed on glass fiber (GF) and Teflon impregnated glass fiber (TIGF) filters to 200 ppb of ozone at 1, 25 and 50% RH (ambient temperature). They observed a "humidity effect" with a decrease in reactivity at high RH and concluded to a competition between water and ozone for adsorptive sites. In the same way, [START_REF] Pöschl | Interaction of Ozone and Water Vapor with Spark Discharge Soot Aerosol Particles Coated with Benzo[a]pyrene: O3 and H2O Adsorption, Benzo[a]pyrene Degradation, and Atmospheric Implications[END_REF] observed the decrease of benzo[a]pyrene (BaP) decay rates toward ozone (0-1 ppm) on soot aerosols in the presence of water in the range from 0% to 25% RH at 23°C. In addition, they indicated that an increase of RH beyond 50% may not lead to further reduced reaction rates [START_REF] Pöschl | Formation and Decomposition of Hazardous Chemical Components Contained in Atmospheric Aerosol Particles[END_REF]. They also suggested a competitive adsorption of ozone and water on the surface. For both studies, this assumption was supported by the nonlinear fits between pseudo first-order rate constants and ozone concentrations, suggesting a L-H mechanism, i.e., the gas-phase ozone is adsorbed on the particle surface prior to the reaction with the adsorbed compounds. Hence, it involves that at high ozone concentration, the relative humidity has less influence on chemical lifetime.
In contrast to the behavior on soot aerosols and GF and TIGF filters, Segal-Rosenheimer and Dubowski ( 2007) observed no significant difference on the heterogeneous ozonolysis reactivity (30 ppb -60 ppm) of the insecticide cypermethrin adsorbed on ZnSe crystals at ~7% and ~80% RH. Moreover, [START_REF] Kwamena | Kinetics of Surface-Bound Benzo[END_REF] mentioned the opposite effect, that is, at high RH (~72% at 25°C) the ozonolysis kinetics were faster than at low RH (<1%) for BaP on azelaic acid aerosols. Even if the influence of the humidity has not the same trend as previous studies, [START_REF] Kwamena | Kinetics of Surface-Bound Benzo[END_REF] emphasized the role of the surface. Thus, it appears that the influence of the relative humidity on the heterogeneous reactivity should be linked with the nature of particles where pesticides under study were adsorbed.
Influence of the particle type
Fig. 4 presents the second-order rate constants 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 of cyprodinil, deltamethrin, pendimethalin, and permethrin for the three particle types under study in function of relative humidity. [START_REF] Perraudin | Kinetic Study of the Reactions of Ozone with Polycyclic Aromatic Hydrocarbons Adsorbed on Atmospheric Model Particles[END_REF] have shown no significant kinetic difference during heterogeneous ozonolysis of several PAH adsorbed on various silica particles (same hydrophilic surface chemistry, different particle diameters (5 and 40 µm), pore sizes (70 and 200 Å), and specific surface areas (200 and 500 m 2 g -1 )). In order to evaluate the influence of the hydrophobicity/hydrophilicity of the surface of particles, both synthetic silicas were chosen with the same particle size and the same specific surface area. In the present work, only the surface chemistry can be considered to be different as silanol groups have a hydrophilic nature whereas the siloxane groups are chemically inert and hydrophobic. In addition, Arizona dust particles were chosen to be an atmospheric mineral model of particles but, as Saharan sand or Gobi dust [START_REF] Perraudin | Kinetic Study of the Reactions of Ozone with Polycyclic Aromatic Hydrocarbons Adsorbed on Atmospheric Model Particles[END_REF], their specific surface area was much lower than both synthetic silicas (one order of magnitude here).
According to their reactivity toward ozone, pesticides under study can be split in two categories whether they were more reactive on hydrophilic or on hydrophobic silica particles. Fig. 4: Second-order rate constant for the degradation of pesticides adsorbed on hydrophobic silica, hydrophilic silica, and Arizona dust by ozone Pendimethalin, deltamethrin, and permethrin showed the same pattern, that is, at low RH (0% and 30%), the reactivity was from 2 to 8 times faster on hydrophilic particles than on hydrophobic particles whereas at 80% RH, there was no significant difference. The secondorder rate constants 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 were the same on Arizona dust and hydrophobic silica particles at 0% and 30% RH, partly due to the high RSD of 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 for Arizona dust. However, at 80% RH, the reactivity was faster on Arizona dust of a factor of about 2-3 than on synthetic silicas.
Thus, the nature of the particle appears to influence the reactivity of pesticides toward ozone.
At 0% RH, in the absence of competition between ozone and water for adsorption, the difference in reactivity for pesticides adsorbed on hydrophobic and hydrophilic silica particles suggests a greater affinity of ozone for hydrophilic surface mainly composed by silanol groups.
At 30% RH, the amount of water adsorbed on particles increases mainly on hydrophilic particles because of the presence of hydroxyl groups on their surfaces, i.e., silanol groups [START_REF] Vigil | Interactions of Silica Surfaces[END_REF][START_REF] Muster | Water Adsorption Kinetics and Contact Angles of Silica Particles[END_REF][START_REF] Verdaguer | Growth and structure of water on SiO2 films on Si investigated by Kelvin probe microscopy and in situ Xray Spectroscopies[END_REF]. It should be noted that the rehydroxylation of the hydrophobic siloxane surface in presence of wet air is possible, but it is a slow process. Nevertheless, the reactivity on hydrophilic surface remained higher than on hydrophobic surface, suggesting that the amount of water remains too low to induce a competitive adsorption between ozone and water for particle surfaces. At 80% RH, when silica is fully hydrated, all siloxane groups link with water to form silanol groups [START_REF] Muster | Water Adsorption Kinetics and Contact Angles of Silica Particles[END_REF], which leads to a similar reactivity on both synthetic silicas.
Unlike the other 3 pesticides (deltamethrin, pendimethalin, and permethrin), cyprodinil was more reactive on hydrophobic silica than on hydrophilic silica of a factor of about 2-4 whatever the RH. Moreover, cyprodinil showed the highest reactivity with 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 = (2.3 ± 0.3) × 10 -17 cm 3 molecule -1 s -1 at 0% RH when adsorbed on Arizona dust. However, it should be kept in mind that the specific surface area of Arizona dust particles is about one order of magnitude lower than that of synthetic silicas. Therefore, the percentage of pesticide coating during these ozonolysis exposure was higher for Arizona dust. Complementary experiments will be needed to know the influence of the concentration of adsorbed pesticides on particles.
These differences in kinetic behavior between pendimethalin, deltamethrin, and permethrin on the one hand and cyprodinil on the other, seem to highlight that the chemical nature of the pesticide is also a factor to consider during the heterogeneous ozonolysis.
Nature of the adsorbed pesticide and comparison with other studies
The different reaction rate constants of heterogeneous ozonolysis and their associated atmospheric half-lives are summarized by chemical family of pesticides under study and compared to literature data [START_REF] Meylan | Computer estimation of the Atmospheric gas-phase reaction rate of organic compounds with hydroxyl radicals and ozone[END_REF][START_REF] Pflieger | Kinetic Study of Heterogeneous Ozonolysis of Alachlor, Trifluralin and Terbuthylazine Adsorbed on Silica Particles under Atmospheric Conditions[END_REF][START_REF] Pflieger | The Heterogeneous Ozonation of Pesticides Adsorbed on Mineral Particles: Validation of the Experimental Setup with Trifluralin[END_REF][START_REF] Yang | Heterogeneous Reactivity of Suspended Pirimiphos-Methyl Particles with Ozone[END_REF][START_REF] Yang | Heterogeneous Ozonolysis of Pirimicarb and Isopropalin: Mechanism of Ozone-Induced N-Dealkylation and Carbonylation Reactions[END_REF][START_REF] Walse | Remediation of Fungicide Residues on Fresh Produce by Use of Gaseous Ozone[END_REF][START_REF] Bourgin | Study of the Degradation of Pesticides on Loaded Seeds by Ozonation[END_REF][START_REF] Bouya | Kinetics of the Heterogeneous Photo Oxidation of the Pesticide Bupirimate by OH-Radicals and Ozone under Atmospheric Conditions[END_REF] in Table 1.
Even though no degradation was observed in our conditions for tetraconazole and difenoconazole, two triazole fungicides, several studies were carried out to remove triazole adsorbed on the surface of plants. However, the ozonation process frequently requires high ozone concentration. Heleno et al. (2013) investigated the effect of ozone fumigation on the reduction of difenoconazole residue on strawberries. Difenoconazole was removed up to 95% after 1h when exposed to 400 ppm of ozone. [START_REF] Bourgin | Study of the Degradation of Pesticides on Loaded Seeds by Ozonation[END_REF] exposed seeds coated by the triazole bitertanol at very high ozone concentration (51,000 ppm). The half-lives calculated from the rate constants reported in literature agree with the present study and confirm that triazole fungicides were not affected by ozonolysis at atmospheric concentrations.
Several ozonolysis studies dealt with the compounds of the pyrimidine family adsorbed on various surfaces without ever specifying the relative humidity. Our results are in accordance with those obtained for pyrimethanil, pirimiphos-methyl, and bupirimate adsorbed on model atmospheric aerosols [START_REF] Yang | Heterogeneous Reactivity of Suspended Pirimiphos-Methyl Particles with Ozone[END_REF][START_REF] Bouya | Kinetics of the Heterogeneous Photo Oxidation of the Pesticide Bupirimate by OH-Radicals and Ozone under Atmospheric Conditions[END_REF]. However, ozonation studies performed on surface of grape berries exhibit half-lives much higher [START_REF] Walse | Remediation of Fungicide Residues on Fresh Produce by Use of Gaseous Ozone[END_REF].
The same hydrophobic silica particles were used to study the heterogeneous ozonolysis of dinitroaniline herbicides [START_REF] Pflieger | Kinetic Study of Heterogeneous Ozonolysis of Alachlor, Trifluralin and Terbuthylazine Adsorbed on Silica Particles under Atmospheric Conditions[END_REF][START_REF] Pflieger | The Heterogeneous Ozonation of Pesticides Adsorbed on Mineral Particles: Validation of the Experimental Setup with Trifluralin[END_REF][START_REF] Yang | Heterogeneous Ozonolysis of Pirimicarb and Isopropalin: Mechanism of Ozone-Induced N-Dealkylation and Carbonylation Reactions[END_REF]. Half-lives of pendimethalin, trifluralin, and isopropalin were of the same order of magnitude, suggesting that the kinetic data obtained for one pesticide can be extended to the whole chemical family, here, the dinitroaniline family. Moreover, the same results were obtained for both pyrethroid insecticides whatever the particle types, suggesting again a generalization to a whole chemical family. It should also be noted that the kinetic data of the theoretical gas-phase ozonolysis for both pyrethroids under study [START_REF] Meylan | Computer estimation of the Atmospheric gas-phase reaction rate of organic compounds with hydroxyl radicals and ozone[END_REF] exhibits the same order of magnitude as the experimental ozonolysis kinetics rate constants in the particulate phase. a Atmospheric half-lives calculated for [𝑂 3(𝑔𝑎𝑠) ] = 40 ppb [START_REF] Vingarzan | A review of surface ozone background levels and trends[END_REF].
Atmospheric implications
The heterogeneous ozonolysis yields to deltamethrin, permethrin, cyprodinil, and pendimethalin degradation. However, whatever the atmospheric conditions of humidity and nature of particles, difenoconazole, tetraconazole, fipronil, and oxadiazon will present no degradation towards ozone. Indeed, unlike to deltamethrin and permethrin (alkene double bond) and cyprodinil and pendimethalin (secondary amine group), they have no chemical groups likely to react with ozone. Consequently, the calculated atmospheric half-lives (Table 1) for an ozone mixing ratio of 40 ppb, which is representative of the atmospheric ozone level in mid latitudes of the Northern hemisphere [START_REF] Vingarzan | A review of surface ozone background levels and trends[END_REF], are higher than several years.
Moreover, Socorro et al. (2016a) also reported that difenoconazole, tetraconazole, fipronil, and oxadiazon were not degraded by hydroxyl radicals in particulate phase which implies a significant persistence of these pesticides in the atmosphere once adsorbed on particles.
The half-life of cyprodinil can widely range from a few hours to several months, and from a few hours to several weeks for pendimethalin, permethrin, and deltamethrin. These half-lives are also significant enough to involve a transport over long distances prior to be removed and a potential transfer to aquatic and terrestrial ecosystems where they can cause hazard effects.
Hence, from a legislative point of view, according to the Stockholm convention, these pesticides can be considered as persistent organic compounds (POPs) in the atmosphere with respect to the gas-phase ozone (UNEP, 2001).
However, given the complexity of the surfaces of atmospheric particles and the atmospheric variation of relative humidity, the heterogeneous ozonolysis can be considered as a very slow process for the studied pesticides compared with the highly reactive hydroxyl radicals (Socorro et al., 2016a).
Conclusions
Kinetic data are dependent on relative humidity, on particles surface chemistry, and chemical nature of the adsorbed pesticides. For the four reactive pesticides (i.e., cyprodinil, deltamethrin, permethrin, and pendimethalin), heterogeneous ozonolysis reactions are slower at high relative humidity suggesting a competitive adsorption of ozone and water on particle surface.
Moreover, reactivity on hydrophilic surface are faster than on hydrophobic surface in the absence of water and at low RH for deltamethrin, permethrin, and pendimethalin. When RH increases (i.e., 80% RH), pesticides exhibit comparable reactivity on both synthetic silicas, suggesting a rehydroxylation of the hydrophobic siloxane surface in presence of wet air.
However, the opposite kinetic behavior observed for cyprodinil also implies the influence of the chemical nature of pesticide. Pesticides of the same chemical family (deltamethrin/permethrin and difenoconazole/tetraconazole) appear to have the same kinetic behaviour, suggesting a possible generalization of the kinetic results to the whole chemical family.
These results suggest that relative humidity must be taken into account to determine kinetic rate constants and that studies carried out at 0% RH cannot be considered as representative of the atmospheric conditions. Finally, given the chemical structures of pesticides and the complexity of atmospheric particles surface, it is currently difficult to predict degradation kinetics as may be the case for the gas-phase.
Fig. 1 :
1 Fig. 1: Observed pseudo-first order rate constants 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼
Fig. 2 :
2 Fig.2: First order kinetic constants for 8 pesticides adsorbed on hydrophobic silica,
and 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 at 5% RH. It should be noted that 𝑘 𝑂 3(𝑝𝑎𝑟𝑡) 𝐼𝐼 at 0%
Table 1 :
1 Kinetic data for the ozonolysis of the adsorbed pesticides (in bold) under study and atmospheric half-lives compared to the literature
data
Acknowledgement
This work has been carried out thanks to the support of the COPP'R project "Modelling of atmospheric contamination by plant protection products at the regional scale" funded by the PRIMEQUAL -AGRIQA « Agriculture et qualité de l'air » program. C. Mattei received a | 47,235 | [
"1289137",
"872931",
"12223"
] | [
"220811",
"243969",
"220811",
"220811"
] |
00177027 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2007 | https://hal.science/hal-00177027/file/ICNF07_llopis_full_paper.pdf | Olivier Llopis
email: [email protected]
Sébastien Gribaldo
Cédric Chambon
Bertrand Onillon
Jean-Guy G Tartarin
Laurent Escotte
Recent evolutions in low phase noise microwave sources and related
Keywords: microwave oscillator, frequency generation, phase noise, nonlinear noise PACS: 43.58.Wc
INTRODUCTION
Spectrally pure frequency generation is necessary in many applications. Precision timing, as an example, is directly related to the source phase noise. More generally, in many physics applications, the frequency is the parameter on which the highest constraints are put on. However, the strongest economical challenge in this field is today for telecommunications applications. The phase noise of the oscillator is indeed superimposed directly to the signal when this signal is demodulated or simply converted in frequency. Thus, the oscillator noise close to the carrier becomes the limit of the signal to noise ratio at the receiver output. Of course, different types of oscillators are used in physics or telecommunications applications. Physicists are mainly seeking performance meanwhile the telecommunications industry needs small size and low cost systems, which means a high level of integration. However, the theoretical description of the noise mechanisms in these systems is very similar, even if the frequency reference element (the resonator) is different. Another application field, which could be located between the two previous ones, is the development of highly sensitive radars or telemeters. This is of high interest for military purpose, but not only. In these applications, both high performance and small size are needed, which is of course difficult to achieve.
Because of these different application fields, research on high spectral purity oscillators is performed in different scientific communities : the time and frequency community first ; secondly the microwave community, which drives research for both telecommunications and military applications ; then the circuit community, which is also related to the telecommunications field, and which is focused on integrated sources. The last community is of course the one of noise. Indeed, the investigations on phase noise, and particularly the pioneering work performed in the microwave field [START_REF] Siweris | A GaAs FET oscillator noise model with a periodically driven noise source[END_REF][START_REF] Rizzoli | [END_REF][3][START_REF] Llopis | Nonlinear noise modeling of a PHEMT device through residual phase noise and low frequency noise measurements[END_REF], have revealed some theoretical problems related to the description of noise under nonlinear conditions. It was thus mandatory to think again the transistor noise modeling, which had been developed for linear applications, and to adapt this modeling to the description of a nonlinear problem. Of course, the device modeling is only one part of the problem, the other one being the development of powerful CAD tools, able to compute the phase noise from these models in an complex circuit. Such tools have been developed in the 1990s [START_REF] Rizzoli | [END_REF], and the problem of the noise transposition in a nonlinear circuit between the different harmonics is now well described by these approaches. However, the effect of the nonlinear signal on the noise source itself is still a disputed subject. This problem will be discussed in this paper.
Describing the oscillator phase noise through the behavior of the transistor noise under nonlinear conditions is valid, providing the other elements of the oscillator do not contribute too much to the oscillator noise. In this case, it is possible to improve the oscillator performance finding out the appropriate transistor load at each harmonics which minimizes the overall output phase noise. This approach can be efficient even if the complete noise generation process at the transistor level is not completely understood. However, for some applications it is mandatory to be able to simulate the oscillator phase noise with a high precision (because the circuit cannot be tuned after being realized). In this case, a model able to precisely describe the noise under nonlinear conditions is required.
The problem discussed in the above paragraph is the one of any microwave circuit using only passive elements for frequency reference: integrated LC oscillators and VCOs, dielectric resonator or coaxial resonator oscillators, sapphire oscillators… However, if the resonator itself is not passive, it may be the predominant contribution to the output phase noise. This is the case of acoustic wave resonators, which feature a 1/f fluctuation of their resonant frequency. Also, very recently, it has been proposed to use optical delay lines as the frequency reference elements of microwave oscillators. However, in this case also, the signal conversion to optics add its noise contribution, which must be evaluated.
MODELING OF OSCILLATORS WITH PASSIVE RESONATORS
As described in Leeson's paper [START_REF] Leeson | Proc. of the IEEE[END_REF], an oscillator is a feedback loop with a phase fluctuating amplifier and a resonator in the feedback. The amplifier phase noise is directly converted through the loop effect into the oscillator's frequency noise. Investigating the phase noise of the amplifier is simpler than investigating on the complete oscillator, because : 1) there is no phase loop condition, so the measurement can be performed in well defined conditions 2) the measurement can be performed on a large RF power range 3) it is easier and faster to simulate the performance of a driven circuit than an autonomous circuit 4) It is essential to investigate the noise conversion processes independently of the surrounding circuit. To this purpose, a measurement bench has been developed [START_REF] Cibiel | [END_REF] which is able to characterize this parameter, either on an already designed amplifier or on a single transistor on which both the RF and LF loads are controlled. The goal of these measurements is to study the correlation between the transistor low frequency (LF) noise and the transistor phase noise. The transistor LF noise should therefore be measured in the same operating conditions i.e. submitted to the nonlinear signal. This is performed on the same measurement bench, adding an output on the device bias Tees and a control of the device LF loads (Figure 2).
An example of the data obtained with this measurement bench is given in Figure 3. The microwave power at 3.5 GHz is smoothly increasing from the linear regime up to 4 dB compression. The change in the spectral shape between the linear behavior and the nonlinear behavior is strong, and the noise in the nonlinear behavior can hardly be predicted using the linear (or quiescent) data. However, is obviously a strong correlation between the LF noise and the phase noise, which means that a global modeling able to describe simultaneously these too spectrum sets could be extracted. The model which had been proposed for this device [START_REF] Llopis | Nonlinear noise modeling of a PHEMT device through residual phase noise and low frequency noise measurements[END_REF] involved to main LF noise sources, one of which (a g.r. noise source) taking advantage on the other one when the device was driven far in the nonlinear state. In this model, the nonlinear behavior of this g.r. noise source had been described through the association of this noise source with a nonlinear element, which was introduced in the device electrical model mainly to fit the measurement rather than because it had a physical meaning. Other authors argued that the noise source itself could be treated as a nonlinear element. This has been proposed as early as the first experiments [START_REF] Siweris | A GaAs FET oscillator noise model with a periodically driven noise source[END_REF], and the more recent papers on this subject seems to be in favor of this approach [START_REF] Bonani | [END_REF][9][10][11]. However, an equivalent circuit modeling remains a rough approximation of the transistor behavior, contrarily to physical modeling. A good model, at this level of approximation, is simply a model which is able to describe the measurement, at various input power levels or for different transistor load. So if good results can be obtained with a noise source instantaneously sensitive to the microwave signal, and there are some physical arguments that the noise could behave in this way, this approach should become the standard approach to model the noise in nonlinear circuits. Frequency (Hz) Phase Noise (dBrad/Hz) P in FIGURE 3. Example of data obtained on a FET device [START_REF] Llopis | Nonlinear noise modeling of a PHEMT device through residual phase noise and low frequency noise measurements[END_REF] ; left : equivalent input voltage noise spectral density at different microwave power levels (P in ) ; right : phase noise at 3.5 GHz, the transistor being loaded onto 50 Ω at RF, for the same range in RF power (P in )
We have thus used this approach to model the phase noise generated by a SiGe HBT. We will not describe in details this model here, because it is presented in a focused paper in these proceedings [START_REF] Gribaldo | Nonlinear noise in SiGe bipolar devices and its impact on radio-frequency amplifier phase noise[END_REF]. It is an improvement of a simpler model which was using external noise sources parameterized by the microwave power [13]. This technique, which uses power dependant external LF noise sources, is very efficient, but it requires that the noise parameters are measured on an amplifier which is close to the one which will be used in the final oscillator circuit. It allows anyway some phase noise optimization, and as proven its efficiency on the design of high spectral purity sources [START_REF] Cibiel | [END_REF]. The new model is no more based on an extrinsic parameter. On the contrary, the nonlinear behavior arises from an instantaneous dependence of the collector current noise spectral density S ic to the RF signal. S ic is supposed to be composed of a constant coefficient and of a second term proportional to i c 2 (t). In the commercial circuit design software (Agilent ADS) used to simulate the phase noise, this noise source is considered just like a classical nonlinear element, using a symbolically defined device. The parameters of this new type of nonlinearity are extracted from the measurement of the equivalent input LF noise at different RF power levels [START_REF] Gribaldo | Nonlinear noise in SiGe bipolar devices and its impact on radio-frequency amplifier phase noise[END_REF]. Other authors use to this purpose multi-bias noise measurements [10,11], but we think that the measurement of noise under nonlinear conditions is closer to the final behavior of the transistor and is more reliable for the extraction of the noise nonlinear behavior.
The above discussion is focused on the effect on phase noise of the device LF noise. However, phase noise is the fluctuation of a high frequency signal, an even if close to the carrier this fluctuation is dominated by the LF fluctuations (1/f or g.r. noise), far from the carrier it can be the result of the superposition of microwave noise. These two behaviors are described as multiplicative (or conversion) noise and additive noise [15]. The additive noise is a carrier to noise ratio, and is thus almost inversely proportional to the RF signal level. On the contrary, the multiplicative noise comes from a modulation or conversion process, and follows the signal level. The microwave noise has an advantage compare to the 1/f noise : it is generally well localized in the transistor model (thermal noise of the resistances, schottky noise in the junctions). However, it is also affected by the nonlinear behavior, and the classical approach which involves the transistor noise figure is no more valid, unless a nonlinear noise figure can be simulated or measured [15]. This last noise contribution can be simulated using modern CAD tools or, also, can be investigated experimentally in a relatively similar way which is used to find the minimum noise figure of an amplifier. A focused paper on such an approach can be found in this volume [16].
NOISE PROBLEMS IN MICROWAVE SOURCES USING ACTIVE FREQUENCY REFERENCE ELEMENTS
The evolution of microwave sources has introduced recently some new problems of noise modeling. Firstly, the telecommunications industry needs today highly integrated high quality sources, and has succeeded very recently in this field thanks to a new device : the integrated thin film bulk acoustic wave resonator. This resonator is an integrated version of the quartz resonator, and it is able to provide a Q factor of 10 2 to 10 3 in the low microwave range (1 to 5 GHz). Integrated oscillators based on this resonator feature a much better performance than classical LC based VCOs [START_REF] Aissi | A 5.4 GHz 0.35um BiCMOS FBAR resonator oscillator in above-IC technology[END_REF]. However, the performance of these sources is limited by an intrinsic 1/f fluctuation of the resonator frequency, which appears to be higher than the effect of the transistor phase noise [START_REF] Gribaldo | [END_REF]. Up to the author's knowledge, no modeling approach has been proposed for this frequency fluctuation, or at least no modeling approach which could be embedded in a circuit simulator. Investigations on this type of noise, and on integrated acoustic wave resonator oscillators, are the key of success for the development of the next generation of small volume microwave sources.
On the other of oscillators applications, where high performance is the main goal, optical techniques are today foresee to design very high spectral purity microwave sources. In this case, the microwave signal is transferred to the optical range thanks to an optical carrier. Once in the optical domain, it is possible to take benefit of the ultra low losses of the optical delay lines which can trap the signal in a few kilometers line with almost no attenuation, except the one due to the microwave to optical (MO) and optical to microwave (OM) conversions. The equivalent Q factor is very high, and high performance oscillators [START_REF] Yao | Progress in the optoelectronic oscillator -a ten year anniversary review[END_REF] or phase noise measurement benches [START_REF] Onillon | Optical links for ultra-low phase noise microwave oscillators measurement[END_REF] have been realized with this technique. One of the drawbacks is in the size required for a few kilometers fiber optics delay line. To overcome this problem, it has been proposed to replace the line by an optical resonator, but this is difficult and still under study. One of the problems of these systems is the carrier to noise degradation due to the MO and OM conversions. This carrier to noise ratio is determined by the efficiency of the optical modulation or detection, and by the laser relative intensity noise (RIN). The last parameter may also influence the close to carrier phase noise because of the laser 1/f RIN. Simulating the overall noise behavior of such a system is difficult, and up to now only analytical approaches have been proposed. The goal is now to develop models and CAD tools able to manage the noise simultaneously in RF and optical devices, in order to optimize these new hybrid systems.
CONCLUSION
Reducing the phase noise of a high frequency oscillator is one of today's more difficult modeling problems in electronics. This is because it requires an accurate description of the noise under nonlinear behavior. Experimental approaches have been described to check the validity of the various modeling approaches, and a focus is made on models involving nonlinear noise sources. Finally, the problem of noise studies of more complex sources involving resonances which are not entirely of electrical type is pointed out. All these systems still requires investigations in the field of noise modeling.
FIGURE 1. Residual phase noise (amplifier phase noise) measurement bench, able to perform phase noise measurements over the 1 to 18 GHz range (using different low FM and AM noise sources); measurement performed in a Faraday's shield with battery bias.
FIGURE 2. Details of the part of the experiment used to investigate the device LF noise under nonlinear signal conditions.
"7396",
"6047",
"838302"
] | [
"388658",
"388658",
"459",
"388658",
"388658",
"388658"
] |
01481223 | en | [
"phys"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01481223/file/Joulain_etal_10_2016.pdf | Antoine Joulain
Damien Desvigne
David Alfano
Thomas Leweke
Kt = Kocurek
S = Srinivasan
NUMERICAL INVESTIGATION OF THE VORTEX ROLL-UP FROM A HELICOPTER BLADE-TIP USING A NOVEL FIXED-WING ADAPTATION METHOD
Keywords: Helicopter, Hover flight, Blade tip, Vortex, Computational fluid dynamics.
This contribution relates to the simulation of the flow around the tip of a helicopter rotor blade in hovering flight conditions. We here propose a new methodology of framework adaptation, using a comprehensive rotor code and highfidelity numerical simulations. We construct an equivalent fixed-wing configuration from a rotating blade, in which centrifugal and Coriolis forces are neglected. The effect of this approximation on the solution is analysed. The method is validated by a detailed comparison with wind tunnel data from the literature, concerning aerodynamic properties and tip vortex roll-up. This validation also includes variations of the pitch angle and rotational speed, up to transonic tip velocities. Compared to previously published methods of framework adaptation, the new hybrid method is found to reproduce more accurately the flow around a rotating blade tip.
NOMENCLATURE
INTRODUCTION
The region around the blade tip of a helicopter main rotor is associated with complex aerodynamic phenomena [START_REF] Caradonna | The Application of CFD to Rotary Wing Flow Problems[END_REF] . Due to the pressure differential between the upper and lower blade surfaces, a vortical structure is generated in the vicinity of each tip. In hover, the vortices are convected downwards, influencing by induction the loads on all blades of the rotor.
Today's industrial maturity of numerical simulation methods opens the way for a better understanding of the local flow physics in the vicinity of rotor blade tips. The most accurate simulations of isolated rotors use Adaptative Mesh Refinement (AMR) [START_REF] Kamkar | Automated Off-Body Cartesian Mesh Adaptation for Rotorcraft Simulations[END_REF] , and high-order schemes to reduce numerical diffusion in the wake, as well as high-order turbulence closures or correction for vortical flows (e.g. [START_REF] Rumsey | Turbulence Model Predictions of Strongly Curved Flow in a U-Duct[END_REF][START_REF] Spalart | On the Sensitization of Turbulence Models to Rotation and Curvature[END_REF]) to avoid unrealistic turbulent diffusion in vortex cores.
However, these strategies are in general too costly to be applied in the frame of tip aerodynamic improvement by numerical design optimization. Low-order schemes and simplified turbulence models are widely used due to the low computational cost. But numerical and turbulent diffusion effects excessively spread out the vortices and reduce their influence on the blade, which leads to inaccurate tip airloads and wake geometry characteristics [START_REF] Komerath | Rotorcraft Wake Modeling: Past, Present and Future[END_REF] . Several approaches were proposed in the literature to correct for wake inaccuracy due to diffusion:
• The first and most commonly used strategy is to perform hovering rotor calculations with an approximate source-sink condition (based on Froude's theory) at the outer boundaries [START_REF] Srinivasan | Flowfield Analysis of Modern Helicopter Rotors in Hover by Navier-Stokes Method[END_REF] , in order to compensate for the wake-influence deficiency. This method can prevent the emergence of unrealistic flow recirculation when the sensitivity to the far-field boundary conditions is significant. However, it implies an a priori knowledge of the thrust coefficient of the rotor, and shows some sensitivity to the computational domain size. Strawn and Djomehri [11] simulated the experimental configuration of Lorber [12] using a source-sink boundary condition. The comparison revealed fair agreement of the integrated rotor performance, although the local aerodynamics in the vicinity of the tip was not captured correctly. This is typical of a low-fidelity interaction between the blade and the trailing vortices;
• Potsdam et al. [13] performed a weak coupling (transfer of information once every revolution) between a Reynolds-Averaged Navier-Stokes (RANS) code and a comprehensive rotorcraft tool. The coupling strategy consisted in trimming the rotor thrust to the experimental value with the comprehensive tool, and to look for the convergence of the numerical aerodynamic coefficients (i.e. collective pitch angle). The advantage is that the blade deformation can be adjusted at each coupling iteration according to the aerodynamic forces acting on the blade. However, this method cannot correct extra-numerical diffusion, because the local interactional deficiency between the blade and the spread vortices is compensated by a global variation of the collective pitch. Moreover, although the coupling process of Potsdam et al. [13] converged without difficulty, the comparison with measurements showed poor agreement, with an unexpected redistribution of loading from inboard to outboard;
• Various hybrid methodologies involve solving accurate, more or less simplified, Navier-Stokes equations in the vicinity of the blades, and calculating a non-dissipative wake convection in the far field [START_REF] Caradonna | The Application of CFD to Rotary Wing Flow Problems[END_REF] . The first hybridizations were performed with an inviscid potential code in the near field, coupled with a vortex lattice method in the far field [START_REF] Caradonna | Finite Difference Modeling of Rotor Flows Including Wake Effects[END_REF] . Egolf and Sparks [START_REF] Egolf | A Full Potential Flow Analysis With Realistic Wake Influence for Helicopter Rotor Airload Prediction[END_REF] specified the inflow from the vortex lattice wake as a velocity field in the outer boundary of the potential simulation. Remarkable agreement was obtained with the experimental databases of Gray et al. [16] and Caradonna and Tung [START_REF] Caradonna | Experimental and Analytical Studies of a Model Helicopter Rotor in Hover[END_REF] , except in viscous-dominated regions (blade tips and transonic flows). Later, the potential code was replaced by Euler and RANS solvers [START_REF] Khanna | Coupled Free-Wake/CFD Solutions for Rotors in Hover Using a Field Velocity Approach[END_REF] . The shock strength and position were in better agreement, although some discrepancies were still observed. The mesh refinement, limited by computational resource constraints, was probably the cause;
• Moulton et al. [START_REF] Moulton | The Development of a Hybrid CFD Method for the Prediction of Hover Performance[END_REF] , and more recently Bhagwat et al. [START_REF] Bhagwat | Hybrid CFD for Rotor Hover Performance Prediction[END_REF] , performed hybrid simulation by using a Thin-Layer Navier-Stokes (TLNS) solver in the vicinity of the blade, and the potential Vortex Embedding (VE) method of Steinhoff and Ramachandran [20] in the far field. Comparison with the measurements of Lorber [START_REF] Lorber | A Comprehensive Hover Test of the Airloads and Airflow of an Extensively Instrumented Model Helicopter Rotor[END_REF] showed good agreement in the inboard part of the blade, but revealed differences in the vicinity of the tips. The main cause was the lack of validity of the TLNS equations for separated flows;
• The recent Vorticity Transport Model (VTM) [21] uses the vorticity as a conservative variable. The coupling of VTM with a RANS method, performed by Whitehouse and Tadghighi [START_REF] Whitehouse | Investigation of Hybrid Grid-Based Computational Fluid Dynamics Methods for Rotorcraft Flow Analysis[END_REF] , showed promising results in comparison with the experimental data of Caradonna and Tung [START_REF] Caradonna | Experimental and Analytical Studies of a Model Helicopter Rotor in Hover[END_REF] , and emphasizes the need for a comprehensive hover analysis;
• The last strategy is to alter the Navier-Stokes equations in order to confine vorticity and convect vortices without diffusion. The Vorticity Confinement (VC) method of Steinhoff and Underhill [23] involves the addition of an extra term to the momentum equations. The preliminary hover calculations performed by Tsukahara et al. [START_REF] Tsukahara | Numerical Analysis around Rotor Blade by Vortex Confinement[END_REF] , revealed mitigated agreement with the measurements of Caradonna and Tung [START_REF] Caradonna | Experimental and Analytical Studies of a Model Helicopter Rotor in Hover[END_REF] .
In a context related to the numerical optimization of blade tip design, it is important to minimize the cost and complexity of the simulations. One significant simplification consists in considering an equivalent single fixed-wing configuration in a non-rotating Cartesian frame. In this approach, the incoming flow is straight, centrifugal and Coriolis forces are neglected, and the complex geometry at the rotor hub, which is not relevant for the blade tip aerodynamics, is not considered. However, the precise link between rotating and equivalent non-rotating cases, with respect to aerodynamic properties, remains largely an open issue.
A literature survey reveals several attempts to define an adaptation methodology in hover. Srinivasan and McCroskey [4] were the first to look for a numerical strategy to compare the aerodynamics of a rotating blade with an equivalent fixed wing. The base of the method was to keep a constant radial distribution of circulation and to retain the tip Mach number. The sectional circulation ! is calculated from the sectional lift ! ! , the local fluid velocity ! and the local blade chord ! by:
(1)  $\Gamma = \dfrac{1}{2}\, c_l\, U\, c$
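As a small illustration of Eq. (1), the spanwise circulation distribution of a rotating blade can be evaluated as sketched below. The rotational speed, chord and radius correspond to the Caradonna and Tung rotor discussed later in this paper; the flat lift-coefficient profile is a placeholder assumption used only for illustration.

```python
import numpy as np

# Sectional circulation from Eq. (1): Gamma = 0.5 * c_l * U * c,
# with the local onset velocity U = Omega * r for a rotating blade.
OMEGA = 130.9          # 1250 rpm in rad/s (Caradonna-Tung reference case)
RADIUS = 1.143         # blade radius (m)
CHORD = 0.191          # blade chord (m)

def circulation(r: np.ndarray, c_l: np.ndarray) -> np.ndarray:
    u_local = OMEGA * r                 # local onset velocity (m/s)
    return 0.5 * c_l * u_local * CHORD  # Eq. (1)

r = np.linspace(0.2, 1.0, 9) * RADIUS
c_l = 0.3 * np.ones_like(r)             # flat lift-coefficient profile (assumed)
print(np.round(circulation(r, c_l), 3))
```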
Three different ways were proposed to perform a simulation in a fixed-wing configuration:
• The first and most encouraging method is to consider an appropriate Mach function along the span, in order to reproduce the velocity gradient due to the rotation;
• In the second method, a uniform Mach number is kept along the span and the sectional lift coefficient is reduced by an adapted twist distribution;
• The last and most complicated way is to keep a uniform inflow and to alter the local blade chord in order to recover the sectional circulation.
In all methods, the wake influence is taken into account by a constant shift of the pitch angle, whose value is determined by a comprehensive rotor trim [START_REF] Agarwal | Euler Calculations for Flowfield of a Helicopter Rotor in Hover[END_REF] . The centrifugal and Coriolis forces are neglected in the fixed wing simulation. In comparison with the measurements of Caradonna and Tung [START_REF] Caradonna | Experimental and Analytical Studies of a Model Helicopter Rotor in Hover[END_REF] , the first adaptation method showed the best agreement in the inboard part of the blade. However, discrepancies were observed near the tip. The simplicity of the employed wake model is probably responsible for the loss of accuracy in the tip area. According to Srinivasan and McCroskey [START_REF] Srinivasan | Navier-Stokes Calculations of Hovering Rotor Flowfields[END_REF] , the centrifugal and Coriolis forces appear to have little influence on the tip aerodynamics.
Komerath et al. [3] performed Laser Doppler Velocimetry (LDV) measurements under both rotating and non-rotating conditions, in order to evaluate centrifugal effects on flow separation. The rotating blade was not isolated from its own wake, and the non-uniform flow in the rotating case was not reproduced in the fixed-wing case, which makes this comparison less relevant.
More recently, Vion et al. [26] performed experimental and numerical investigations of a Counter-Rotating Open Rotor (CROR) configuration. The second way of Srinivasan and McCroskey (twist adaptation) was chosen to construct an equivalent fixed-wing configuration, because a velocity gradient cannot be easily reproduced in a wind tunnel. The difficulty of getting an adapted twist law was emphasized and resolved by an iterative process. Numerical comparisons of the rotating and fixed configurations showed good agreement of the tip vortex characteristics, despite a different flow topology in the vicinity of the tip.
We here propose a new methodology of framework adaptation, dedicated to hovering flight. An uncoupled hybrid simulation is set up with a comprehensive rotor code and a high-fidelity Computational Fluid Dynamics (CFD) solver, in order to construct an equivalent fixed-wing configuration from a rotating blade. As for the Egolf and Sparks method [START_REF] Egolf | A Full Potential Flow Analysis With Realistic Wake Influence for Helicopter Rotor Airload Prediction[END_REF] , the influence of the hover wake is taken into account by a velocity field applied at the boundaries of the CFD domain. The adaptation process is based on Srinivasan and McCroskey's first method [4] and takes into account the induced velocities of the rotating-wing and the fixed-wing wakes. The well-known databases of Caradonna and Tung [1] and Gray et al. [16] are simulated in order to validate the numerical method in terms of global performance and local tip aerodynamics. The adaptation methodology is presented in detail in Section 2. Results are consolidated in Section 3 by variations of the pitch angle and rotational speed, including a transonic flow case. In Section 4, the new methodology is compared to previously published ones. Finally, the local tip aerodynamics of the rotating wing and the fixed wing are discussed in detail in Section 5.
NUMERICAL METHODS
Comprehensive Rotor Code
AIRBUS HELICOPTERS' comprehensive rotor code HOST [27] is used to trim isolated rotors in hover. The airfoils' aerodynamics is described by two-dimensional (2D) polars obtained from wind-tunnel measurements. The distribution of circulation along the span feeds into a vortex-lattice wake model, while the induced velocities from the vortices are taken into account by a Biot-Savart integration. Results are expressed in the rotating frame in cylindrical coordinates $(r, \psi, z)$. The $r$-, $\psi$- and $z$-axes extend from the root to the tip of the blade, in the direction of rotation and upwards, respectively, with the origin at the center of rotation (Fig. 1).
A trim law imposes fixed values for the collective pitch angle and the rotational speed, whereas the coning angle and thrust coefficient result from a Newton iterative method. All other angular parameters are set to zero.
The prescribed wake model of Kocurek and Tangler (KT) [START_REF] Kocurek | A Prescribed Wake Lifting Surface Hover Performance Analysis[END_REF] , based on the work of Landgrebe [START_REF] Landgrebe | The Wake Geometry of a Hovering Helicopter Rotor and Its Influence on Rotor Performance[END_REF] , was set up using the tip vortex and inboard vortex sheet trajectories of 26 different rotors. The study included the variation of the chord, the radius, the twist law, the number of blades, the pitch angle and the rotational speed. It should be noted that the influence of sweep, taper, anhedral, the sectional profile and the tip shape has not been investigated. The generalized equations defining the axial ! ! and radial ! ! tip vortex coordinates as function of the wake age ! are given by [START_REF] Kocurek | A Prescribed Wake Lifting Surface Hover Performance Analysis[END_REF] :
(2)  $z_T = k_1\,\psi$  for $0 \le \psi \le \psi_b$, with $\psi_b = 2\pi/b$;
     $z_T = k_1\,\psi_b + k_2\,(\psi - \psi_b)$  for $\psi \ge \psi_b$;
     $r_T = A + (1 - A)\,e^{-\Lambda\psi}$, with $A = 0.78$
where $\psi_b$ is the azimuth angle between blades and $b$ the number of blades. The parameters $k_1$, $k_2$ and $\Lambda$ are calculated from

(3)  $k_1 = B + C\,C_T^{\,E}$, with $B = -0.000729\,\theta_{tw}$, $C = -2.3 + 0.206\,\theta_{tw}$, $E = 1 - 0.25\,e^{0.04\,\theta_{tw}}$, $n = 0.5 - 0.0172\,\theta_{tw}$
(4)  $k_2 = -\left(C_T - C_{T,0}\right)^{1/2}$, with $C_{T,0} = b^{\,n}\,(B/C)^{1/E}$
(5)  $\Lambda = 4\,\sqrt{C_T}$
where $\theta_{tw}$ is the blade linear twist. According to Landgrebe [START_REF] Landgrebe | The Wake Geometry of a Hovering Helicopter Rotor and Its Influence on Rotor Performance[END_REF] , it is more convenient to express the axial vortex sheet displacement $z_s$ as a function of the normalized radial location $\bar r$:

(6)  $z_s = (1 - \bar r)\, z_{s,\bar r=0} + \bar r\, z_{s,\bar r=1}$, with
     $z_{s,\bar r=0} = 0$  for $0 \le \psi \le \psi_b$ (with $\psi_b = 2\pi/b$),   $z_{s,\bar r=0} = K_3\,(\psi - \psi_b)$  for $\psi \ge \psi_b$,
     $z_{s,\bar r=1} = K_4\,\psi$  for $0 \le \psi \le \psi_b$,   $z_{s,\bar r=1} = K_4\,\psi_b + K_5\,(\psi - \psi_b)$  for $\psi \ge \psi_b$

(7)  $K_3 = -\dfrac{0.45\,\theta_{tw} + 18}{128}\,\sqrt{C_T/2}$,   $K_4 = -2.2\,\sqrt{C_T/2}$,   $K_5 = -2.7\,\sqrt{C_T/2}$

Note that the azimuth at which the slope of $z_{s,\bar r=0}$ changes differs slightly from the value originally proposed by Landgrebe. However, the sensitivity to this parameter is expected to be small. This prescribed model has been widely used since the 1980's for its short computation time (a few minutes) and for its reliability. It is recommended for hover calculations, as long as the rotor characteristics are fully compliant with the framework hypotheses [START_REF] Egolf | A Full Potential Flow Analysis With Realistic Wake Influence for Helicopter Rotor Airload Prediction[END_REF] .
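To make the structure of the prescribed model concrete, the short sketch below evaluates the piecewise tip-vortex trajectory of Eq. (2) for given settling rates $k_1$, $k_2$ and contraction rate $\Lambda$. The numerical values of $k_1$ and $k_2$ passed in the example are placeholders for illustration only, not outputs of the fits in Eqs. (3)-(4).

```python
import numpy as np

def tip_vortex_trajectory(psi_deg, b, k1, k2, lam, a=0.78):
    """Axial (z/R) and radial (r/R) tip-vortex coordinates versus wake age, Eq. (2).

    k1, k2 : axial settling rates before/after the first blade passage (per rad)
    lam    : radial contraction rate (Eq. 5 gives lam = 4*sqrt(C_T))
    a      : asymptotic contraction ratio (0.78 in the model)
    """
    psi = np.radians(np.asarray(psi_deg, dtype=float))
    psi_b = 2.0 * np.pi / b                      # wake age of the first blade passage
    z = np.where(psi <= psi_b, k1 * psi, k1 * psi_b + k2 * (psi - psi_b))
    r = a + (1.0 - a) * np.exp(-lam * psi)
    return z, r

# Two-bladed rotor, C_T = 0.00459; k1 and k2 below are placeholder settling rates.
z, r = tip_vortex_trajectory([0, 90, 180, 270, 360], b=2,
                             k1=-0.03, k2=-0.07, lam=4.0 * np.sqrt(0.00459))
print(np.round(z, 3), np.round(r, 3))
```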
The vortex lattice method is built upon 2D aerodynamics, under the assumption that the flow field is incompressible, inviscid and irrotational. Thus, transonic flow, a three-dimensional (3D) separated boundary layer or the tip vortex roll-up cannot be resolved. Several empirical parameters are added to enhance the aerodynamic behavior of the simulation. In particular, the lift coefficient, and thus the circulation, are artificially canceled at the blade tip. Moreover, in order to simulate the tip vortex roll-up, the trailing vortex filaments generated between the tip and the radial position of maximum circulation are steadily merged with the tip vortex within several degrees of wake age (Fig. 5). The roll-up extent does not significantly influence the numerical results and is taken to be 30°.
Computational Fluid Dynamics Solver
The ONERA elsA CFD code [29] is used in this study. The solver is based on a cell-centered, finite-volume approach. Multi-block structured grids are employed to compute the fixed-wing configurations. The application of the Reynolds decomposition to the Navier-Stokes equations leads to the RANS equations, which are written in conservative form:
(8)  $\dfrac{\partial \rho}{\partial t} + \mathrm{div}(\rho\,\mathbf{u}) = 0$
     $\dfrac{\partial (\rho\,\mathbf{u})}{\partial t} + \mathrm{div}(\rho\,\mathbf{u}\otimes\mathbf{u}) = -\mathbf{grad}\,p + \mathrm{div}\,\bar{\bar\tau} + \rho\,\mathbf{f}$
     $\dfrac{\partial (\rho E)}{\partial t} + \mathrm{div}\big((\rho E + p)\,\mathbf{u}\big) = \mathrm{div}(\bar{\bar\tau}\cdot\mathbf{u}) - \mathrm{div}\,\mathbf{q} + \rho\,\mathbf{f}\cdot\mathbf{u}$

where $\rho$, $\mathbf{u}$ and $E$ are the mean density, velocity vector and total energy per unit mass, respectively. The scalar field $p$ is the pressure, $\mathbf{q}$ the heat flux vector and $\bar{\bar\tau}$ the combination of the viscous stress and Reynolds tensors. Finally, $\mathbf{f}$ represents possible source terms (forces per unit mass). In the inertial reference frame of Fig. 1, without gravity effects, the source term vanishes. Cast in the non-inertial rotating frame of Fig. 1, the RANS equations bring out two source terms: the centrifugal force $\mathbf{f}_1$ and the Coriolis force $\mathbf{f}_2$. These forces, expressed in Cartesian coordinates, depend on the rotational speed $\Omega$ [30]:

(9)  $\mathbf{f}_1 = \left(\Omega^2 X,\; \Omega^2 Y,\; 0\right)^{T}$ ;  $\mathbf{f}_2 = \left(2\,\Omega\,V,\; -2\,\Omega\,U,\; 0\right)^{T}$

where $(X, Y)$ and $(U, V)$ denote the in-plane coordinates and velocity components in the rotating frame.
Our objective here is to simulate the flow around a rotating blade in a non-rotating inertial frame, using an appropriately adapted configuration. In such a frame, centrifugal and Coriolis forces are neglected, which implies a free stream without curvature. This new frame is labeled (!,!,!) in Fig. 1. The !-, !and !-axes extend from upstream to downstream, from the root to the tip of the wing and upwards, respectively, the origin being at the fictitious center of rotation. It can be seen as a snapshot of the rotating frame (!,!,!) at the time when the !-axis is aligned with the !-axis.
The computational setup is based on a validated numerical method, presented in detail in Ref. [START_REF] Joulain | Aerodynamic Simulations of Helicopter Main-Rotor Blade Tips[END_REF].
A fully turbulent flow is considered, without boundary-layer transition. The governing equation system is closed with the two-equation !-! Baseline turbulence model of Menter [START_REF] Menter | Improved Two-Equation k-omega Turbulence Models for Aerodynamic Flows[END_REF] , which is widely used in engineering applications of external turbulent flow prediction. Although this eddyviscosity model is based on the Boussinesq assumption and is therefore less accurate inside vortex cores [START_REF] Menter | Two-Equation Eddy-Viscosity Turbulence Models for Engineering Applications[END_REF] , it is preferred over second-order closures and corrected models (e.g. [START_REF] Rumsey | Turbulence Model Predictions of Strongly Curved Flow in a U-Duct[END_REF][START_REF] Spalart | On the Sensitization of Turbulence Models to Rotation and Curvature[END_REF]) for its numerical stability and its short computation time.
The convective fluxes of the mean equations are discretized using the second-order central scheme of Jameson, Schmidt and Turkel [START_REF] Jameson | Numerical Solution of the Euler Equations by Finite Volume Methods Using Runge-Kutta Time-Stepping Schemes[END_REF] . Because this scheme is unconditionally unstable, a scalar artificial dissipation is introduced, as a blend of second-and fourth-order differences. The values of the corresponding dissipation coefficients are taken as 0.5 and 0.032, respectively. The convective fluxes of the turbulent equations are discretized using a second-order Roe scheme. All diffusive flux gradients are calculated with a five-point central scheme. A first-order backward-Euler scheme updates the steadystate solution.
The resulting flow solver is implicit and unconditionally stable, and high values of the Courant-Friedrichs-Lewy (CFL) number can be reached. The CFL number is linearly ramped up from 1 to 10 over 100 iterations, in order to avoid divergence during the transient phase.
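As a small configuration sketch, the linear CFL ramp described above can be written as follows; the iteration count and bounds are those quoted in the text, and the function is only an illustration of the ramping logic, not the solver's implementation.

```python
def cfl_number(iteration: int, cfl_start: float = 1.0,
               cfl_end: float = 10.0, ramp_iters: int = 100) -> float:
    """Linear CFL ramp from cfl_start to cfl_end over ramp_iters iterations."""
    if iteration >= ramp_iters:
        return cfl_end
    return cfl_start + (cfl_end - cfl_start) * iteration / ramp_iters

print([round(cfl_number(i), 2) for i in (0, 50, 100, 500)])
```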
The present simulations concern a rectangular fixed wing placed in a straight incoming flow, intended to represent one blade of a rotor.
The structured multiblock grids are designed with the ANSYS ICEM-CFD software. A 2D C-H topology surrounds the profile and the downstream zone. The 3D mesh is constructed by stacking 2D meshes along the span. Beyond the tip, the resulting gap is filled with a half-butterfly mesh (O-grid mesh). The domain dimensions extend 200 chords in all directions from the tip. In the wall-normal direction, the first grid spacing is chosen in order to fix the dimensionless wall distance ($y^+$, see e.g. Tennekes and Lumley [START_REF] Tennekes | A First Course in Turbulence[END_REF] ) between 0.7 and 1. Expansion ratios of the grid spacings are not greater than 1.15. An overview of the mesh in the vicinity of the tip leading edge is shown in Fig. 2. A typical mesh is constituted of roughly 130 blocks and 20 million points.
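The first wall-normal spacing needed to reach a target $y^+$ of order one can be estimated a priori with a flat-plate skin-friction correlation. The sketch below uses such a correlation as an assumption; it is not the sizing procedure stated in the paper, and the air properties are assumed standard values.

```python
import numpy as np

def first_cell_height(y_plus, u_ref, chord, nu=1.5e-5, rho=1.2):
    """Estimate the first wall-normal grid spacing for a target y+.

    Uses the flat-plate correlation Cf = 0.026 * Re_c**(-1/7) as a rough
    assumption to obtain the friction velocity; not taken from the paper.
    """
    re_c = u_ref * chord / nu
    cf = 0.026 * re_c ** (-1.0 / 7.0)
    tau_w = 0.5 * cf * rho * u_ref**2
    u_tau = np.sqrt(tau_w / rho)
    return y_plus * nu / u_tau

# Tip conditions of the Caradonna-Tung reference case (Re_tip ~ 1.94 million).
print(f"dy_wall ~ {first_cell_height(1.0, u_ref=149.6, chord=0.191):.2e} m")
```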
The wing surface is modeled as an adiabatic viscous wall (zero heat-flux). The boundary supporting the root of the wing is a symmetry plane, whereas a non-reflecting condition is applied to all other far-field boundaries (Fig. 3).
Approximately 20,000 iterations (20 hours on 48 processors) are required to obtain a converged solution with a 4-orders-of-magnitude decrease of the L 2 normbased residual of the total energy.
Hybridization procedure
On the one hand, the RANS simulations are able to accurately compute 3D separated boundary layers and tip vortex roll-up, including the influence of viscosity and compressibility. However, the numerical dissipation spreads out the vortices too quickly and degrades their influence on the blade, which leads to inaccurate rotor airloads and wake geometry characteristics.
On the other hand, wake properties are quickly estimated using the comprehensive rotor code. Tip vortices are analytically propagated over a long distance without dissipation, resulting in a good qualitative prediction of the wake influence on the blades. However, the simplified aerodynamic model implemented in the tool is too restrictive to study the tip vortex roll-up in detail.
The adaptation process presented here takes advantage of both tools and proceeds in three steps, as illustrated in Fig. 4. First, the rotating blade is represented by a fixed wing in a rectangular domain. Centrifugal and Coriolis forces are neglected, thus the fixed wing is exposed to a straight incoming flow. The main features linked to the rotation are taken into account by a specific choice of the initial and boundary conditions. Following Srinivasan and McCroskey [START_REF] Srinivasan | Navier-Stokes Calculations of Hovering Rotor Flowfields[END_REF] , the radial gradient of relative velocity of the rotating blade is represented by a sheared inflow along the span of the wing ($U(y) = \Omega y$), as shown by the symbols in Fig. 8a. In order to avoid numerical instabilities, a small minimum velocity has to be imposed in the computational area in the vicinity of the centre of rotation, and in the outer part of the domain the inflow velocity is limited to a constant maximum value. The stability of the computation and the solution are not sensitive to these parameters.
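A minimal sketch of the clipped, sheared inflow imposed at the far-field boundaries is given below. The lower and upper clipping levels are written as parameters, and the particular values used in the example are placeholders, since the exact values employed in the paper are not recoverable from the text.

```python
import numpy as np

def sheared_inflow(y, omega, u_min, u_max):
    """Spanwise inflow profile U(y) = Omega*y, clipped between u_min and u_max.

    y     : spanwise coordinate measured from the fictitious centre of rotation
    u_min : floor applied near the centre of rotation to avoid instabilities
    u_max : cap applied outboard of the tip
    """
    return np.clip(omega * np.asarray(y, dtype=float), u_min, u_max)

omega, radius, chord = 130.9, 1.143, 0.191      # Caradonna-Tung reference case
y = np.linspace(0.0, 1.4 * radius, 8)
# Placeholder clipping levels, chosen only for illustration:
u = sheared_inflow(y, omega, u_min=0.5 * omega * chord, u_max=1.2 * omega * radius)
print(np.round(u, 1))
```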
Finally, the influence of the rotor wake is taken into account in the fixed-wing computation by the injection of an appropriate induced velocity profile at the boundaries of the domain. It is obtained from the three-dimensional induced velocity at the location of the considered blade, computed using the comprehensive rotor code HOST.
HOST is used to trim the rotor with the Kocurek and Tangler [2] (KT) wake model. The wake is calculated over 15 revolutions, in order to minimize the dependence of the wake length on the trim. However, the fixed-wing generates its own wake, which also alters the velocity distribution along the span of the wing. In order to avoid accounting twice for the near-wake influence, an additional step is performed. At a certain wake age ! ! , the wake sheet generated by the considered blade is segregated into a near wake and a far wake (Fig. 5). An independent Biot-Savart integration is performed in each part to compute the distribution of the vertical induced velocity ! ! (along !-axis) on the considered blade. Only contributions from the far wake of the considered blade and the wake(s) of the other blade(s) are accounted for. The missing contribution from the near wake is provided by the CFD simulation, which is far more accurate in this region. The segregation wake age ! ! can be chosen between 30° and 90° without significant modification of the solution on the blade. In fact, in this interval, the vortex is far from the considered blade and its influence decays proportionally to the square of the distance.
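The induced-velocity step can be illustrated with a basic Biot-Savart summation over straight vortex-filament segments, in which the near-wake segments of the considered blade are simply left out of the sum (their contribution being provided by the CFD near field instead). This is a generic sketch, not the HOST implementation; the filament geometry and circulation below are arbitrary example values.

```python
import numpy as np

def segment_induced_velocity(p, a, b, gamma, core=1e-3):
    """Velocity induced at point p by a straight vortex segment from a to b."""
    r1, r2 = p - a, p - b
    cross = np.cross(r1, r2)
    denom = np.dot(cross, cross) + core**2          # crude core regularisation
    r0 = r1 - r2                                    # segment vector (b - a)
    k = (np.dot(r0, r1) / np.linalg.norm(r1)
         - np.dot(r0, r2) / np.linalg.norm(r2))
    return gamma / (4.0 * np.pi) * k / denom * cross

def far_wake_velocity(p, nodes, ages_deg, gamma, psi_cut_deg=60.0):
    """Sum Biot-Savart contributions of segments older than the segregation age."""
    v = np.zeros(3)
    for i in range(len(nodes) - 1):
        if ages_deg[i] < psi_cut_deg:               # near wake handled by the CFD
            continue
        v += segment_induced_velocity(p, nodes[i], nodes[i + 1], gamma)
    return v

# Tiny example: a straight filament discretised into four segments.
nodes = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.], [3., 0., 0.], [4., 0., 0.]])
ages = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
print(far_wake_velocity(np.array([2.0, 1.0, 0.0]), nodes, ages, gamma=1.0))
```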
Figure 6 compares the induced velocity profiles on the considered blade, integrated from the whole wake and with the considered blade near wake removed. As expected, the influence of the near wake is maximum at the tip and is quickly reduced to zero far from !!! ! 1.
The CFD tool is then used to construct the fixed-wing equivalent configuration. The sheared inflow and the induced velocity distribution, extracted on the blade from HOST, are injected as far field boundary conditions in the CFD simulation (Fig. 7). These profiles (as functions of span !) are invariant from upstream to downstream and from the top to the bottom of the computational domain, which is justified since the principle aim is to correctly represent the flow in the blade tip region. No induced velocities are injected in the axial (!) and radial (!) directions, because these components are small compared to the vertical one, and in order to avoid problems related to continuity. The velocity profiles are also used to initialise the whole interior of the computational domain at the beginning of the simulation.
In Fig. 8, the normalized velocity profiles resulting from the calculation are plotted for several locations above the blade. In comparison with the prescribed distributions, the numerical profiles are in good agreement in the vicinity of the tip (!!! ! 1). Some discrepancies appear around the center of rotation (!!! ! 0). In this area, the large cell dimension increases the numerical dissipation, thus the high velocity gradients are smoothed.
The size of the computational downstream domain does not influence the solution in the range [10!-200!], i.e. blade vortex ageing from a quarter to five revolutions. In fact, the wing trailing vortex follows a straight trajectory, thus only the near wake contributes to the induced velocities on the tip.
TEST CASE OF CARADONNA AND TUNG [1]
The well-known experimental database of Caradonna and Tung [1] is chosen to perform a validation of the methodology. The configuration consisted of two blades with a constant, untwisted and untapered NACA 0012 airfoil. The chord ($c$) and the radius ($R$) were respectively 0.191 m (7.5 in.) and 1.143 m (45 in.). The internal radius of the blade (first aerodynamic profile) was not given, so it is arbitrarily fixed to one chord. Moreover, the tip cap geometry is assumed to be flat. This rotor is essentially rigid, thus no flexible model is needed. The blades were instrumented with pressure taps between $0.5R$ and $0.96R$. This measurement resolution does not allow a comprehensive study of the tip vortex itself, but is useful to validate the hybrid numerical calculation. A good agreement of the sectional lift profiles along the blade span is an indication of an accurate interaction between the rotor and the wake. Given the scarcity of pressure measurements in regions with high pressure gradients (leading edge, shock waves) in the Caradonna and Tung [START_REF] Caradonna | Experimental and Analytical Studies of a Model Helicopter Rotor in Hover[END_REF] database, the sectional drag profiles are not calculated from these data and are not compared to the numerical results.
It has to be mentioned that, in various tables of Ref. [START_REF] Caradonna | Experimental and Analytical Studies of a Model Helicopter Rotor in Hover[END_REF], some values of the integrated sectional lift are inverted. For example, the test case characterized by a pitch angle of 8° and a rotational speed of 2500 rpm is summarized in Table 25 and plotted in Fig. 6 of Ref. [START_REF] Caradonna | Experimental and Analytical Studies of a Model Helicopter Rotor in Hover[END_REF]. The last three radial stations are clearly inverted. A trapezoidal rule integration, performed from the published pressure measurements, reveals that the figure is correct. All data from Caradonna and Tung [1] used in this study have been corrected.
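The consistency check mentioned above, i.e. integrating a published pressure distribution to recover the sectional lift, can be reproduced with a simple trapezoidal rule, as sketched below. The pressure-tap values in the example are made up for illustration and are not data from the Caradonna and Tung report.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule for tabulated data."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def normal_force_coefficient(x_over_c, cp_lower, cp_upper):
    """Sectional normal-force coefficient c_n = integral of (Cp_lower - Cp_upper) d(x/c).

    For the moderate angles considered here, c_l is close to c_n.
    """
    return trapz(np.asarray(cp_lower) - np.asarray(cp_upper), x_over_c)

# Made-up pressure-tap values for illustration only (not Caradonna-Tung data).
x = [0.0, 0.05, 0.1, 0.25, 0.5, 0.75, 1.0]
cp_up = [1.0, -1.2, -0.9, -0.5, -0.25, -0.1, 0.05]
cp_lo = [1.0, 0.3, 0.2, 0.1, 0.05, 0.0, 0.05]
print(f"c_n ~ {normal_force_coefficient(x, cp_lo, cp_up):.3f}")
```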
Reference Test Case
The reference test case is characterized by a collective pitch angle of 8°, a rotational speed of 1250 rpm and a thrust coefficient of 0.00459. The tip Reynolds and Mach numbers are 1.94 million and 0.436, respectively. The Kocurek and Tangler [2] (KT) model is first used to perform the hybrid simulation. The trajectory of the tip vortex in the axial and radial directions, as calculated by HOST, is plotted as a function of the wake age in Fig. 9 (square symbols). At $\psi = 180°$, the tip vortex is estimated to have convected vertically a distance of $0.098R$ below the rotor plane and a distance of $0.126R$ radially inward.
The lift coefficient in the sectional frame ($c_l$), resulting from the application of the hybrid procedure, is shown in Fig. 10. A normalization is applied by multiplying $c_l$ by the square of the local Mach number $M$ in order to account for the influence of the sheared inflow. The numerical result is in good agreement with the measurements. The amplitude and location of the maximum lift, in the vicinity of the tip ($r/R \approx 0.9$), are correctly reproduced. In the inboard part of the blade, the computation slightly underpredicts the lift coefficient. The pressure coefficient $C_p$ (in the blade-tip frame), plotted in Fig. 11 as a function of the nondimensional chord location $x/c$ for $r/R = 0.96$, confirms the high accuracy of the hybrid procedure.

Figure 9. Axial and radial tip vortex trajectory. HOST calculations with different wake models and experimental data from Caradonna and Tung [1].
Figure 10. Normalized sectional lift coefficient along the span. Hybrid CFD simulations with different wake models and experimental data from Caradonna and Tung [1].
The trajectory of the vortex resulting from the hybrid CFD simulation, obtained from the local maxima of the vortex-identification function of Ref. [START_REF] Jeong | On the Identification of a Vortex[END_REF] , is compared to the experimental data of Caradonna and Tung [1] in Fig. 12. Concerning the axial trajectory, good agreement is obtained, which indicates that the total induced velocity (integrated from HOST for the far wake and computed by CFD for the near wake) is correct. Since no induced velocity is injected in the radial direction in the CFD simulation, the radial location of the tip vortex is virtually constant, i.e. the radial contraction is not reproduced. However, this is not detrimental to the computation of the global aerodynamics of the blade, which are very little influenced by the "older" parts of the tip vortex ($\psi > 30°$).
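Locating the vortex centre from the extrema of a vortex-identification field in each cross-plane can be done as in the generic sketch below; the actual criterion used in the paper is the one of Ref. [36], whereas the synthetic field here is just a Gaussian blob for demonstration.

```python
import numpy as np

def vortex_centre(y, z, criterion):
    """Return the (y, z) location of the maximum of a vortex-identification field.

    y, z      : 1-D coordinate arrays of the cross-plane grid
    criterion : 2-D array of the identification function, shape (len(z), len(y))
    """
    k, j = np.unravel_index(np.argmax(criterion), criterion.shape)
    return y[j], z[k]

# Synthetic example: a Gaussian 'vortex' centred at (0.2, -0.1).
y = np.linspace(-1, 1, 101)
z = np.linspace(-1, 1, 101)
yy, zz = np.meshgrid(y, z)
field = np.exp(-((yy - 0.2) ** 2 + (zz + 0.1) ** 2) / 0.01)
print(vortex_centre(y, z, field))
```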
In order to assess the influence of the HOST wake model on the CFD solution, different vortex trajectory laws are compared. An approximate model is constructed from the experimental trajectory by modifying the coefficients of the KT model (Eq. 10), principally the asymptotic contraction ratio ($A = 0.796$) and the contraction rate, so that at $\psi = 180°$ the measured radial and axial vortex locations are matched. In comparison with the KT model, the experimental model leads to a very similar axial convection, but a decrease of the radial contraction (Fig. 9). At $\psi = 180°$, the latter is reduced from $0.126R$ to $0.092R$ (-30%). As a consequence, the tip vortex from the preceding blade is closer to the considered blade tip and a redistribution of induced velocity occurs around $r/R = 0.8$. Inboard of $r/R = 0.8$, the induced velocity is lower, while it is higher outboard. The sectional lift coefficient (Fig. 10) and the pressure coefficient (Fig. 11) reveal a substantial deterioration (by more than 6%) of the solution in the vicinity of the tip leading edge.

Figure 12. Axial and radial tip vortex trajectory. Hybrid CFD simulation with KT model and experimental data from Caradonna and Tung [1].
Figure 13. Induced velocity distributions calculated from HOST for different collective pitch angles.
Figure 14. Sectional lift coefficient distributions for different collective pitch angles. Hybrid CFD simulations and experimental data from Caradonna and Tung [1].
Concerning the inboard part of the blade, the experimental wake model is in better agreement with the data.
Caradonna and Tung [1] indicated that the discrepancy between the KT model and the measurements can be explained by measurement error. Here, a CFD fitting model is constructed from several combinations of radial and axial trajectory perturbations. At ! ! 180°, the best result is obtained with an increase of the vertical convection by and a decrease of the radial contraction by 7% (Fig. 9). As a consequence, the lift coefficient in the inboard part is slightly increased and fits the measurements (Fig. 10), without altering the tip aerodynamics (Fig. 11). To conclude, the CFD fitting model is conserved for the reference test case.
Variation of Pitch Angle
The Caradonna and Tung [1] database allows a study of the influence of the collective pitch angle on the numerical hybrid solution. Given a rotational speed of 1250 rpm, the pitch angle is reduced to 5° (thrust coefficient of 0.00213) and increased to 12° (thrust coefficient of 0.00796). For these two new cases (and those of the next section), the time-consuming construction of a CFD fitting wake model is not carried out. Instead, they are calculated with the experimental wake model, which gives very similar results.
As expected, the induced velocity increases with the pitch angle (Fig. 13). With respect to the 5° case, the maximum amplitude of the induced velocity at 8° and 12° is increased by 28% and 56%, respectively. The radial location of the maximum moves slightly, from approximately $r/R = 0.8$ at 5° to 0.75 at 8° and 12°.

Figure 17. Sectional pressure coefficient at $r/R = 0.96$ for different rotational speeds. Hybrid CFD simulations and experimental data from Caradonna and Tung [1].
The solutions of the hybrid procedure are compared to the measurements in Figs. 14 and15a. Very good agreement is obtained for all pitch angles and all radial locations. At !!! ! 0.96, the efficiency of the hybrid computation is confirmed by the comparison of the pressure profiles. Up to this radial location, the flow is essentially 2D due to the moderate pitch angles and the rotational speed.
The accuracy of the CFD tool allows a comprehensive study of the vortex roll-up in the vicinity of the tip.
Figure 15b reveals that the pressure at !!! ! 0.995 is strongly altered by the pitch angle. At 5°, one vortex suction peak, of very low amplitude is visible on the upper side at !!! ! 0.65. At 8°, this peak increases in amplitude and moves to !!! ! 0.55. Moreover, a second suction peak of small amplitude is visible at !!! ! 0.80. Finally, the 12° case shows an increase of the amplitude of both suction peaks, while their locations are closer to the leading edge.
Variation of Rotational Speed
The rotational speed was varied at 8° of collective angle.
The value of ! is increased from 1250 rpm to 1750 rpm and 2500 rpm. The corresponding tip Mach numbers are 0.436, 0.607 and 0.877, respectively. The thrust coefficient is not sensitive to the rotational speed.
The 1250 rpm case is calculated with the CFD fitting wake model (Section 3.1), whereas the experimental wake models are used for the two higher rotational speeds.
Due to the inception of a numerical instability in the region of low velocity, the minimum inflow value for ! ! is increased from !!!2 to !! for the 2500 rpm test case.
The sensitivity of the normalized induced velocity to the rotational speed is low. With respect to the 1250 rpm case, the maximum amplitude of the induced velocity at 1750 rpm and 2500 rpm is increased by 6% and 14%, respectively. Again, the sectional lift distributions along the radius computed with the hybrid procedure are in very good agreement with the measurements (Fig. 16). With respect to the 1250 rpm test case, the local maximum localised at !!! ! 0.9 is multiplied by a factor 2 and a factor 4 at 1750 rpm and 2500 rpm, respectively. The lift amplitude in the vortex footprint, visible at !!! ! 0.995 is also multiplied by a factor 2 between 1250 rpm and 1750 rpm. However, a factor 5.5 is noted between 1250 rpm and 2500 rpm, which indicates the presence of a non-linearity in the flow field.
In the two first measurement sections of the 2500 rpm case, i.e. !!! ! 0.5 and !!! ! 0.68 (as well as for all radial stations of the 1750 rpm case, including !!! ! 0.8), a closely 2D flow is observed, and the previous conclusions concerning the accuracy of the hybrid procedure apply. However, the three last measurement sections exhibit a strong discontinuity on the upper surface. This shock is visible on the pressure coefficient profile at !!! ! 0.96 in Fig. 17. Numerically, the amplitude of the shock is well reproduced, but its location is slightly shifted downstream (by 0.05!) in comparison with the measurements. This transonic flow case exhibits complex aerodynamic phenomena. The accuracy of the numerical tool allows a comprehensive analysis of the tip vortex roll-up; a detailed investigation is presented in a separate publication [START_REF] Joulain | Aerodynamic Simulations of Helicopter Main-Rotor Blade Tips[END_REF].
As an example, Fig. 18 shows the numerical solution in the region within one chord of the tip. The pressure coefficient, normalized by the square of the Mach number ($C_p M^2$), is plotted as color contours. In the vicinity of the tip, the vortical flow has a strong influence on the pressure field on the upper surface and on the lateral side.
Positive values of the vortex-identification function [36] are shown in cross-sectional planes between $x/c = 0$ and 1.4 in steps of $0.1c$. Two strong vortices are generated by the two sharp edges of the truncated geometry. The lower-edge vortex moves up along the side of the tip. It is strained and wrapped around the upper-edge vortex further downstream. The proximity of the vortices to the blade induces a suction force near the vortex paths, which manifests itself as a local increase of $-C_p M^2$. In the vicinity of the tip leading edge, a small lateral vortex is generated along the end cap. This vortex is driven inboard and quickly dissipated under the influence of the strong upper-edge vortex.
The isosurface of high velocity gradient along the !-axis highlights the shock location. On the upper surface, the normal shock is interacting with the boundary layers and the 3D tip aerodynamics. A small oblique shock wave is observed in the vicinity of the lateral leading edge.
The streamlines reveal that a boundary layer separation occurs right after the upper-surface shock. In the vicinity of the tip, the vortex roll-up reduces the shock intensity and prevents the separation of the boundary layer.
COMPARISON TO PREVIOUSLY PUBLISHED METHODOLOGIES
In this section, the hybrid adaptation methodology is compared to the various other strategies identified in the literature survey. The reference test case of Caradonna and Tung [1] is simulated (collective pitch angle of 8°, rotational speed of 1250 rpm).
The fixed-wing approach presented by Komerath et al. [START_REF] Komerath | Flowfield of a Swept Blade Tip at High Pitch Angles[END_REF] does not take into account wake considerations, neither the non-uniform rotational inflow. Thus, the fixed configuration with the K strategy is described by a uniform Mach 0.436 inflow, no induced velocity (Fig. 19), and a constant pitch angle of 8°.The first method of Srinivasan and McCroskey [4] (S1 strategy) is implemented here by choosing an appropriate Mach function along the span to reproduce the velocity gradient due to the rotation. A constant shift of the pitch angle (-3.8°) is applied for the entire fixed blade, which is equivalent to the linear induced velocity profile shown in Fig. 19.
The second way of Srinivasan and McCroskey [4] is to keep a uniform Mach number along the span and to reduce the sectional lift coefficient by an adapted twist distribution. Since the optimal twist profile is unknown, a linear evolution is first considered (S2L). Then, as proposed by Vion et al. [START_REF] Vion | Counter-Rotating Open Rotor (CROR): Flow Physics and Simulation[END_REF] , a parabolic law is evaluated (S2P).
The induced velocity profiles from the above methods are potted in Fig. 19. The S2P profile is very close to the present one between 0.8! and the tip.
The sectional lift coefficient distributions obtained with the five strategies are plotted in Fig. 20. Due to the absence of induced velocity, the K strategy clearly overestimates the sectional angle of attack and lift coefficient. Moreover, the slope of the curve is not reproduced because of the constant Mach number along the span. The S1 strategy reveals a good behaviour from the centre of rotation to 0.7!. In this region, the induced velocity profile is close to the present one. From 0.7! to the tip, the induced velocity Figure 20. Sectional lift coefficient distributions obtained with different strategies compared to experimental data from Caradonna and Tung [1] .
profile and the lift coefficient show differences. The sectional lift distributions are overestimated by the S2 strategies, although the S2P method is close to the experimental result in the vicinity of the tip. This is confirmed by a comparison of the sectional pressure coefficient at 0.96! (Fig. 21a). With respect to the measurement, the leading-edge suction peak is overestimated by 60%, 25% and 11% with K, S1 and S2P, respectively, whereas it is underestimated by 30% with S2L. Finally, the sectional pressure coefficients across the by Vion et al. [START_REF] Vion | Counter-Rotating Open Rotor (CROR): Flow Physics and Simulation[END_REF] . However, this process is time consuming and was not investigated further.
TEST CASE OF GRAY ET AL.
[16]
The last simulated test case is extracted from the Gray et al. [16] database. A single-bladed rotor is used to investigate in detail the local influence of the tip vortex roll-up on the blade pressure. The blade is made of an untwisted NACA 0012 airfoil, with a constant chord of 0.127 m (5 in.), a radius of 0.61 m (24 in.) and a truncated tip. The rotational speed and the pitch angle are fixed to 1350 rpm and 11.4°, respectively. The resulting chord-based Reynolds and Mach numbers at the tip are 0.74 million and 0.25.
The HOST trim is performed using the KT model. Based on the Caradonna and Tung [1] database, the thrust coefficient is estimated as 0.006. This value is confirmed a posteriori with the Hybrid CFD solution.
The sectional pressure coefficient at !!! ! 0.966 (Fig. 22a) shows good agreement between the hybrid CFD simulation and the measurements. At this radius, the vortex roll-up does not influence the pressure, and the flow is two-dimensional.
Figures 22b and22c show results for !!! ! 0.987 and 0.995, in the vortex footprint. The pressure at the leadingedge suction peak and on the whole lower surface is well predicted. However, discrepancies appear on the vertical suction peak. The amplitude of the local maximum is underestimated by 45% and shifted upstream by 0.07! at !!! ! 0.987. At !!! ! 0.995, the two vortical peaks identified in the measurements are numerically reproduced, but deviations of the amplitude and location are again seen.
In order to understand the origin of these deviations, contours of the normalized pressure coefficient on the upper surface near the tip are plotted in Fig. 23. Two observations can be made: (1) the pressure decrease in the path of the numerical and experimental vortices is of the same amplitude, but the former is shifted upstream by approximately 0.15! with respect to the latter, i.e. the numerical vortex generated from the lateral upper edge starts to roll up before the experimental one. (2) Despite this delay, the radial position of the experimental vortex path in the vicinity of the trailing edge is slightly more inboard (by 0.02! at !!! ! 0.9), i.e. the radial displacement of the experimental vortex is larger than the numerical one.
Several simplifications were made in the construction of the present adaptation method. In particular, the radial induced velocities of the far wake are not taken into account. According to the HOST simulation, this radial velocity is directed inboard and is maximum at 0.8!. A one-chord vortex age results in an inboard displacement of 0.027!, which is of the same order of magnitude as the radial displacement deviation.
The centrifugal and Coriolis forces are neglected in the present hybrid strategy. Under the assumption of a high blade aspect ratio, the centrifugal force at the tip is aligned with the spanwise $y$-axis of the fixed-wing frame. Its expression in this frame (Fig. 1) derives from Eq. 9 and is $\rho\,\Omega^2 y\,\mathbf{e}_y$. This force can be written as the sum of two terms:

(11)  $\rho\,\Omega^{2} y\,\mathbf{e}_y = \nabla\!\left(\dfrac{\rho\,\Omega^{2} y^{2}}{2}\right) - \dfrac{\Omega^{2} y^{2}}{2}\,\nabla\rho$

The first one derives from a potential and can be included in the pressure gradient term of Eq. 8. The second term represents the part of the centrifugal force acting on the velocity field.
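The splitting in Eq. (11) is simply the product rule applied to the centrifugal term; written out, with $y$ the spanwise coordinate of the fixed-wing frame:

$\nabla\!\left(\dfrac{\rho\,\Omega^{2} y^{2}}{2}\right) = \rho\,\Omega^{2} y\,\mathbf{e}_y + \dfrac{\Omega^{2} y^{2}}{2}\,\nabla\rho
\quad\Longrightarrow\quad
\rho\,\Omega^{2} y\,\mathbf{e}_y = \nabla\!\left(\dfrac{\rho\,\Omega^{2} y^{2}}{2}\right) - \dfrac{\Omega^{2} y^{2}}{2}\,\nabla\rho .$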
The norms of the inertial force and of the velocity-affecting centrifugal term ($\mathrm{div}(\rho\,\mathbf{u}\otimes\mathbf{u})$ and $\tfrac{1}{2}\,\Omega^2 y^2\,\nabla\rho$, respectively) are compared in the upper-surface boundary layer near the tip in Fig. 24. These are calculated a posteriori from the hybrid CFD solution. In the vicinity of the tip leading edge and in the path of the upper-edge vortex, the centrifugal force is higher than the inertial force. As a consequence, in the case of a rotating blade, the flow in this area is probably radially ejected from the tip under the influence of the centrifugal force. In particular, the small lateral vortex generated in the vicinity of the tip leading edge (observed in Fig. 18) is likely to be stronger than in the fixed-wing adapted simulation, since it results from the streamwise vorticity of the blade boundary layer related to radial outflow, delaying the generation of the upper-edge vortex. Thus the accelerated roll-up in the present simulations is likely related to the absence of centrifugal effects.
The influence of the Coriolis force can be estimated by calculating the Rossby number. This number represents the ratio between the inertial force (driving the flow field) and the Coriolis force, and can be expressed as:
!" !
Inertial force
Coriolis force
! ! 2!!
In the vicinity of the tip, assuming that the order of magnitude of the velocity is $U \approx \Omega R$, the Rossby number is proportional to the aspect ratio of the blade, $R/c$. For this particular test case, the Rossby number is equal to 2.4, which suggests that the Coriolis force may influence the flow field. However, the norm of the Coriolis force is several orders of magnitude lower than the norm of the centrifugal force shown in Fig. 24. Moreover, the aspect ratio of classical main rotors is higher than in the present configuration. The influence of the Coriolis force is therefore expected to be very low and can be neglected in the study of the tip vortex roll-up.
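With the Gray et al. geometry ($R = 0.61$ m, $c = 0.127$ m) and $U \approx \Omega R$, the quoted value follows directly:

$\mathrm{Ro} \;=\; \frac{U}{2\,\Omega c} \;\approx\; \frac{\Omega R}{2\,\Omega c} \;=\; \frac{R}{2c} \;=\; \frac{0.61}{2 \times 0.127} \;\approx\; 2.4 .$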
CONCLUSIONS
A new methodology of framework adaptation is presented for hovering flight. A fixed-wing equivalent configuration is constructed from a rotating blade, in order to simplify and speed up tip vortex simulations. An uncoupled hybrid strategy is set up using the comprehensive rotor code HOST [27] and the high-fidelity CFD solver elsA [START_REF] Cambier | The Onera elsA CFD Software: Input From Research and Feedback From Industry[END_REF] . The former is used to correctly account for the induction of the far wake by propagating the vortices over a long distance without dissipation, whereas the latter accurately simulates the aerodynamics in the tip region.
Global performance calculations (radial distributions of lift) are validated by a comparison to the experimental database of Caradonna and Tung [START_REF] Caradonna | Experimental and Analytical Studies of a Model Helicopter Rotor in Hover[END_REF] . Good agreement is found for all pitch angles and rotational speeds, including transonic flow conditions.
Comparisons with the previously published methods of Komerath et al. [START_REF] Komerath | Flowfield of a Swept Blade Tip at High Pitch Angles[END_REF] , Srinivasan and McCroskey [4] and Vion et al. [26] indicate considerable improvement in the prediction of the blade aerodynamics.
The flow around the blade tip is investigated in detail by a comparison with the database of Gray et al. [START_REF] Gray | Surface Pressure Measurements at Two Tips of a Model Helicopter Rotor in Hover[END_REF] . Good agreement is obtained with the hybrid CFD method, except for a slight difference in the vortex trajectory over the blade. This deviation is likely due to the absence of centrifugal effects and, to a lesser extent, of a radial component of induced velocity in the computation.
The low computational cost of the steady RANS method and the efficient and reliable hybrid adaptation method can now be used for further investigations, including variations of the blade geometry and tip shape.
Figure 1. Definition of the inertial, non-inertial, rotating-blade, fixed-wing and sectional frames.
Figure 2. Overview of the computational grid in the vicinity of the tip leading edge.
Figure 3. Boundary conditions of the CFD fixed-wing computations.
Figure 4. Overview of the fixed-wing adaptation process.
Figure 5. Definition of the segregation wake age, the near and far wakes of the considered blade and the wake of the other blade(s).
Figure 6. Induced velocity profiles integrated from the whole wake and with the near wake of the considered blade removed.
Figure 7. Illustration of the velocity profiles injected at the boundaries of the fixed-wing domain.
Figure 11. Sectional pressure coefficient at $r/R = 0.96$. Hybrid CFD simulations with different wake models and experimental data from Caradonna and Tung [1].
Figure 15. Sectional pressure coefficient for different collective pitch angles. Hybrid CFD simulations and experimental data from Caradonna and Tung [1]: a) $r/R = 0.96$; b) $r/R = 0.995$.
Figure 16. Sectional lift coefficient distributions for different rotational speeds. Hybrid CFD simulations and experimental data from Caradonna and Tung [1].
Figure 18. Hybrid CFD simulation for 8° pitch angle and 2500 rpm rotational speed. Normalized contours of the pressure coefficient; positive values of the vortex-identification function in cross-sectional planes between $x/c = 0$ and 1.4; isosurface of high velocity gradient in the chordwise direction; and streamlines.
Figure 19. Induced velocity distributions with the K, S1, S2L and S2P strategies compared to the present adaptation method.
Figure 22. Sectional pressure coefficient. Hybrid CFD simulations and experimental data from Gray et al. [16]: a) $r/R = 0.966$; b) $r/R = 0.987$; c) $r/R = 0.995$.
Figure 23. Contours of constant pressure coefficient (normalized by the tip Mach number) on the upper surface. Hybrid CFD simulation (right) and experimental data extracted from Gray et al. [16] (left).
Figure 24. Contours of the normalized inertial and (velocity-affecting) centrifugal forces in the upper-surface boundary layer, calculated a posteriori from the hybrid CFD solution.
ACKNOWLEDGMENTS
The authors thank Michael Le Bars (IRPHE-CNRS) for discussions on the centrifugal and Coriolis forces. | 54,056 | [
"8367"
] | [
"196526",
"266449",
"266449",
"266449",
"196526"
] |
01770294 | en | [
"phys"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01770294/file/Houdroge_etal_2017.pdf | F Y Houdroge
T Leweke
K Hourigan
M C Thompson
Two-and three-dimensional wake transitions of an impulsively started uniformly rolling circular cylinder
. The main focus here is for Reynolds numbers beyond the transition to unsteady flow at Re c,2D = 88. From impulsive start up, the wake almost immediately undergoes transition to a periodic two-dimensional wake state, which, in turn, is three-dimensionally unstable. Thus, the previous threedimensional stability analysis based on the two-dimensional steady flow provides only an element of the full story. Floquet analysis based on the periodic two-dimensional flow was undertaken and new three-dimensional instability modes were revealed. The results suggest that an impulsively started cylinder rolling along a surface at constant velocity for Re 90 will result in the rapid development of a periodic two-dimensional wake that will be maintained for a considerable time prior to the wake undergoing threedimensional transition. Of interest, the mean lift and drag coefficients obtained from full three-dimensional simulations match predictions from two-dimensional simulations to within a few percent.
Introduction
Many previous studies have focused on the flow around a circular cylinder in an unbounded flow. In the Stokes range (Re = U d/ν 1), viscous effects dominate the flow. The flow around a stationary cylinder remains attached and symmetrical about the spanwise and streamwise axes through the centre point of the cylinder. As the Reynolds number is increased, the flow loses its upstream/downstream symmetry as the fluid separates at the rear of the cylinder. This results in the formation of two closed recirculation zones, first occurring at 5 Re 7 [START_REF] Taneda | Experimental investigation of the wakes behind cylinders and plates at low Reynolds numbers[END_REF][START_REF] Dennis | Numerical solutions for steady flow past a circular cylinder at Reynolds numbers up to 100[END_REF]. The length of these recirculation regions was found to increase linearly with Re, until at Re 46, the wake becomes absolutely unstable and undergoes a transition to a periodic flow state [START_REF] Taneda | Experimental investigation of the wakes behind cylinders and plates at low Reynolds numbers[END_REF][START_REF] Roshko | On the development of turbulent wakes from vortex streets[END_REF][START_REF] Henderson | Nonlinear dynamics and pattern formation in turbulent wake transition[END_REF][START_REF] Provansal | Bénard-von Kármán instability: transient and forced regimes[END_REF]. This transition is the result of a Hopf bifurcation (i.e., a steady to unsteady transition) of the steady flow that occurs as the flow becomes globally absolutely unstable [START_REF] Provansal | Bénard-von Kármán instability: transient and forced regimes[END_REF][START_REF] Henderson | Nonlinear dynamics and pattern formation in turbulent wake transition[END_REF]. The saturated state of vortex shedding in the wake of the cylinder takes the form of a Bénard-von Kármán vortex street [START_REF] Bénard | Formation de centres de giration à l'arrière d'un obstacle en mouvement[END_REF][START_REF] Von Kármán | Über den Mechanismus des Widerstandes, den ein bewegter Körper in einer Flüssigkeit erfährt[END_REF] that is characterised by a periodic, repeating pattern of swirling vortices of opposite sign that are shed from the rolled-up shear layers.
As the Reynolds number is increased further, the now-periodic wake undergoes a further transition to three-dimensional flow. [START_REF] Williamson | The existence of two stages in the transition to three-dimensionality of a cylinder wake[END_REF] found that two clearly identifiable transitions take place sequentially that are distinguished by the development of distinct spatio-temporal three-dimensional wake states designated mode A and mode B. The first of these transitions is also accompanied by a discontinuity in the Strouhal-Reynolds number curve. The mode A instability appears beyond Re ≈ 190 (Williamson 1996b;[START_REF] Henderson | Nonlinear dynamics and pattern formation in turbulent wake transition[END_REF]), resulting in pairs of counter-rotating streamwise vortices forming along the span of the cylinder. This three-dimensional mode has a spanwise wavelength of λ ≈ 4d, where d is the diameter of the cylinder. The second transition, to Mode B, becomes fully developed at Re = 260 and has a preferred spanwise wavelength of λ ≈ 0.8d ([START_REF] Henderson | Nonlinear dynamics and pattern formation in turbulent wake transition[END_REF]Williamson 1996b). The remnants of the streamwise Mode B vortical structures can be seen at much higher Reynolds numbers, well beyond the development of fully turbulent flow (e.g. [START_REF] Wu | Three-dimensional vortex structures in a cylinder wake[END_REF]).
Imposing a rotation on a cylinder in an unbounded flow has a strong influence on the wake structure and transitions. The degree of rotation is often quantified by the nondimensional rotation rate, α = ωd/(2U ), defined as the ratio of the tangential surface speed (ωd/2, with ω the angular velocity) and the free-stream speed U . Many authors, including [START_REF] Tang | On steady flow past a rotating circular cylinder at Reynolds numbers 60 and 100[END_REF], have shown that imposing a rotation on the body renders the wake asymmetrical and that, at Re ≈ 60, depending on the rotation rate, the elimination of one or both of the recirculation regions in the wake can be observed. For larger Re, the imposed rotation may also suppress or delay the transition to unsteady flow in comparison to the case of a non-rotating body.
For the non-rotating cylinder, as the Reynolds number is increased, the wake becomes unsteady. At low rotation rates, a Bénard-von Kármán vortex street is observed [START_REF] Jaminet | Experiments on vortex shedding from rotating circular cylinders[END_REF], also known as Mode I shedding. For higher values of α, the unsteady wake narrows and is displaced in the direction of motion of the cylinder surface [START_REF] Díaz | Vortex shedding from a spinning cylinder[END_REF][START_REF] Mittal | Flow past a rotating cylinder[END_REF]. As the rotation rate increases beyond a critical value of α c ≈ 2 [START_REF] Mittal | Flow past a rotating cylinder[END_REF][START_REF] Díaz | Vortex shedding from a spinning cylinder[END_REF], the unsteady flow is completely suppressed. Instead of vortex shedding, the surrounding fluid is entrained by the rotation of the body and creates a layer around the cylinder that thickens as α increases [START_REF] Díaz | Vortex shedding from a spinning cylinder[END_REF][START_REF] Mittal | Flow past rotating cylinders: effect of eccentricity[END_REF]. Perhaps surprisingly, a second shedding regime is observed over a specific range of much higher rotation rates [START_REF] Mittal | Flow past a rotating cylinder[END_REF][START_REF] Kumar | Flow past a rotating cylinder at low and high rotation rates[END_REF], where single-sided vortex shedding occurs with a period much longer than that of Mode I shedding. This wake state is referred to as Mode II shedding.
Limited investigations have been carried out on the development of three-dimensional wakes for a rotating cylinder. At low rotation rates, α < 1, [START_REF] Akoury | The threedimensional transition in the flow around a rotating cylinder[END_REF] found that Mode A becomes unstable at higher Reynolds numbers as α is increased. At higher rotation rates, the flow becomes increasingly unstable to perturbations at Re = 200 in the range 3 ≲ α ≲ 5 [START_REF] Meena | Three dimensional instabilities in flow past a spinning and translating cylinder[END_REF]. Recent numerical and experimental investigations [START_REF] Mittal | Flow past a rotating cylinder[END_REF]Rao et al. 2013b,a;[START_REF] Radi | Experimental evidence of new three-dimensional modes in the wake of a rotating cylinder[END_REF]) have identified several new three-dimensional transitions for Re < 400. In the Mode I shedding regime, five three-dimensional modes were found to become unstable and, in the steady regime of flow (α ≳ 2), four three-dimensional modes were observed.
When the presence of a wall is considered, [START_REF] Taneda | Experimental investigation of vortex streets[END_REF] showed that a stationary wall near a cylinder stabilises the flow. [START_REF] Arnal | Vortex shedding from a bluff body adjacent to a plane sliding wall[END_REF] found that the onset of unsteady flow in the presence of a wall is shifted to Re ≈ 100, from Re ≈ 46 for an isolated cylinder in a free stream. This finding is correct as long as the gap ratio between the cylinder and the wall does not exceed a certain critical value, which depends on the Reynolds number. The steady flow around a stationary cylinder on a wall is similar to that of a backward-facing step [START_REF] Armaly | Experimental and theoretical investigation of backward-facing step flow[END_REF]; it is characterised by a single recirculation region, surrounded by fluid which separates from the body and reattaches on the wall downstream. However, the behaviour of the flow depends on many parameters: the distance between the cylinder and the wall, the motion of the wall relative to the cylinder, and the imposed rotation of the body. [START_REF] Stewart | The wake behind a cylinder rolling on a wall at varying rotation rates[END_REF] investigated the case of a rotating cylinder translating next to a moving wall at rotation rates in the range -1 < α < 1. In the steady flow regime, two recirculation zones are observed in the wake, and the drag and lift forces decrease as Re increases. Prograde rolling (α > 0) was found to destabilise the flow whereas retrograde rotation (α < 0) delayed the onset of unsteady flow. This was later confirmed/extended in a study by [START_REF] Rao | Flows past rotating cylinders next to a wall[END_REF] for which higher values of α were considered.
For unsteady flows in the wake of a cylinder placed near a wall, the strength of the vortex shedding decreases as the cylinder is placed closer to the wall [START_REF] Lei | Re-examination of the effect of a plane boundary on force and vortex shedding of a circular cylinder[END_REF]. At Re = 170, [START_REF] Taneda | Experimental investigation of vortex streets[END_REF] observed a single row of vortices for a cylinder moving near a wall. In the unsteady regime, vortex pairs with a net rotation appear in the wake as a result of the interaction between the shear layer shed from the top of the cylinder and induced secondary shedding from the wall boundary layer vorticity downstream [START_REF] Stewart | The wake behind a cylinder rolling on a wall at varying rotation rates[END_REF][START_REF] Rao | Flows past rotating cylinders next to a wall[END_REF]. Unlike the case of a cylinder placed in a free stream, but similar to the flow over a backward-facing step, the wake undergoes a transition to three-dimensionality before the onset of two-dimensional vortex shedding [START_REF] Stewart | The wake behind a cylinder rolling on a wall at varying rotation rates[END_REF][START_REF] Rao | Flows past rotating cylinders next to a wall[END_REF].
In the current study, the results of [START_REF] Stewart | The wake behind a cylinder rolling on a wall at varying rotation rates[END_REF] for a cylinder rolling at α = 1 without slipping on a wall are extended to identify and characterise the different saturated flow states and modes of the three-dimensional instability, and their influence on the underlying two-dimensional structure of the flow. A more detailed description of the problem, the corresponding equations and the numerical methods are given in the first section of this paper. Then follow the results from the linear stability analysis and Floquet analysis, highlighting the appearance of new modes of the three-dimensional instability. Thorough comparisons of the similarities and differences in the flow structures and fluid forces show how the two-dimensional simulations can be used confidently to approximate this problem during the initial stage of flow development. The essential findings of this work are then summarised in the last section, along with some discussion of future research extending from this paper.
Problem description and methodology
Figure 1 illustrates the problem setup and parameter definitions: a cylinder of diameter d is rolling along a wall at a rotation rate ω. For computational simplicity, the frame of reference is placed at the centre of the cylinder, this being equivalent to the fluid and wall moving past the fixed, rotating cylinder at a speed U .
Background to new stability analysis studies
Consider a stationary cylinder placed in a freestream. In the steady, laminar regime below the critical Reynolds number Re c,2D = 46 at which transition to periodic shedding begins, it is well known that two mirror-symmetrical recirculation regions form in the wake, whose length increases with the Reynolds number. For Reynolds numbers above the critical value, the steady wake is unstable, causing a decrease in the mean formation length of the recirculation region of the time-averaged flow (Williamson 1996a,b). Experiments and numerical simulations show that, for a given Reynolds number after the background flow is impulsively started, the evolution of the recirculation region is characterised by a linear, steady increase of its length followed by slowly growing waviness, and eventually, a fully developed Bénard-von Kármán vortex street. Thus, in this case, the growth of perturbations leading to a fully developed periodic wake occurs after the steady symmetric wake state has fully (or almost fully) formed (e.g., see [START_REF] Thompson | The Stuart-Landau model applied to wake transition revisited[END_REF]. Moreover, the two-dimensional steady wake is not unstable to three-dimensional perturbations; the first three-dimensional transition occurs on the two-dimensional periodic wake [START_REF] Williamson | The existence of two stages in the transition to three-dimensionality of a cylinder wake[END_REF].
Conversely, when the cylinder is uniformly rolling along a wall, two-dimensional numerical simulations indicate that, even well below the critical Reynolds number for sustained two-dimensional vortex shedding (Re c,2D = 88), vortex shedding still occurs almost immediately from startup. This can be seen from figure 2, which shows the time evolution of the lift coefficient for Re = 90 and 160, above the critical Reynolds number. The oscillations in the curves, observed from the initial starting time t 0 = 0, are the result of the immediate shedding of vortices into the wake. For Re = 90, it takes approximately six shedding cycles to reach the asymptotic periodic state, whilst for Re = 160, well beyond the critical Reynolds number, the transition to the fully developed 2D periodic state is essentially complete after just two shedding periods. Interestingly, even when the flow is laminar below Re c,2D , it undergoes immediate vortex shedding in the first instances of the flow development before settling down to its steady state. This effect was also made visible by [START_REF] Gal | Visualization of the space-time impulse response of the subcritical wake of a cylinder[END_REF] who studied experimentally the impulse response of the subcritical wake of a cylinder. These preliminary oscillations are visible in the plots of the lift force in figure 2 for Re = 60 and 80.
Thus, while [START_REF] Stewart | The wake behind a cylinder rolling on a wall at varying rotation rates[END_REF] documented that the onset of three-dimensional flow occurs at a Reynolds number of approximately 40 on a steady recirculating base flow, the actual flow transitions and dynamics observed in practice, for a cylinder rolling along a surface at constant velocity from an impulsive start, may be different. In particular, above the critical Reynolds number for 2D vortex shedding, a two-dimensional periodic wake seems likely to essentially fully develop prior to any observable development of three-dimensionality. This hypothesis will be tested in the following sections. It has several consequences. It suggests that two-dimensional simulations have validity well beyond the critical Reynolds number at which three-dimensionality first occurs for the steady wake, at least for a non-negligible period after impulsive startup. It also suggests that the previously documented three-dimensional stability analysis [START_REF] Stewart | The wake behind a cylinder rolling on a wall at varying rotation rates[END_REF] based on a steady two-dimensional base flow may not have a strong relevance for the overall wake dynamics for Reynolds numbers above Re c,2D . Finally, it is not clear what this means for the fully saturated wake state at different Reynolds numbers. Thus, in this paper, the stability analysis is extended to examine the three-dimensional stability of the two-dimensional periodic wake state. This is supplemented by direct three-dimensional simulations to examine the longer-term wake evolution.
Governing equations
The governing equations are the continuity and incompressible Navier-Stokes equations for the motion of the fluid. Let u(x, y, z, t) = (u, v, w) represent the velocity of the fluid in Cartesian coordinates. In the case of an incompressible flow, the continuity equation is:
∇ • u = 0, (2.1)
and the general form of the incompressible Navier-Stokes equation is:
$$\frac{\partial \boldsymbol{u}}{\partial t} + \boldsymbol{u} \cdot \nabla \boldsymbol{u} = -\frac{1}{\rho} \nabla P + \nu \nabla^{2} \boldsymbol{u}, \qquad (2.2)$$
where ρ is the density of the fluid and P is the static pressure. The drag and lift coefficients per unit length are defined in the usual way:
$$C_D = \frac{D}{\tfrac{1}{2} \rho U^{2} d} \qquad \text{and} \qquad C_L = \frac{L}{\tfrac{1}{2} \rho U^{2} d}, \qquad (2.3)$$
where D and L represent the drag and lift forces, respectively.
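For concreteness, the non-dimensional groups used throughout (Re = U d/ν, α = ωd/(2U )) and the force coefficients of equation (2.3) can be evaluated directly from dimensional quantities. The short Python helper below is only an illustrative sketch, not part of the solver described next, and the sample numbers in it are arbitrary.

```python
# Minimal helpers for the non-dimensional groups used in this study.
# The sample values below are arbitrary and purely illustrative.

def reynolds(U, d, nu):
    """Reynolds number Re = U d / nu."""
    return U * d / nu

def rotation_rate(omega, d, U):
    """Non-dimensional rotation rate alpha = omega d / (2 U)."""
    return omega * d / (2.0 * U)

def force_coefficients(D, L, rho, U, d):
    """Drag and lift coefficients per unit span, equation (2.3)."""
    q = 0.5 * rho * U**2 * d          # dynamic pressure times diameter
    return D / q, L / q

if __name__ == "__main__":
    U, d, nu, rho = 0.1, 0.01, 1e-5, 1.0      # arbitrary sample values
    omega = 2.0 * U / d                        # pure rolling, alpha = 1
    print("Re    =", reynolds(U, d, nu))       # -> 100.0
    print("alpha =", rotation_rate(omega, d, U))
    print("C_D, C_L =", force_coefficients(2e-4, 1e-4, rho, U, d))
```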
Numerical scheme
The solver is based on a code that has been used extensively for similar problems, so it will only be described briefly here. Overall, the time-dependent incompressible Navier-Stokes equations for the fluid are solved in Cartesian coordinates using a spectral-element approach, in a cross-sectional plane for two-dimensional simulations or a combined spectral-element/Fourier approach in three dimensions. The spectral-element method is a formulation of a high-order finite-element method that uses high-order Lagrangian interpolants to approximate the solutions of partial-differential equations. It has the advantage of converging much faster than a typical finite-element method, considering that the error decreases exponentially with the order of the approximating polynomial, all the while retaining some of the flexibility for modelling complex geometries that finite-element methods provide. The (nodal) approach adopted is described in detail in [START_REF] Karniadakis | Spectral/HP element methods for CFD[END_REF]. The spatially discretised equations are then integrated forward in time using a three-step time-splitting approach, where the advection, pressure and diffusion terms are treated separately and sequentially [START_REF] Chorin | Numerical solution of the Navier-Stokes equations[END_REF][START_REF] Karniadakis | High-order splitting methods for the incompressible Navier-Stokes equations[END_REF][START_REF] Thompson | Hydrodynamics of a particle impact on a wall[END_REF]. The advection step is carried out using the third-order Adams-Bashforth approach. The pressure and viscous substeps are both solved directly using LU decomposition, which factors the matrices into a lower triangular matrix L and an upper triangular one U [START_REF] Turing | Rounding-off errors in matrix processes[END_REF], invoking the second-order Adams-Moulton (Crank-Nicholson) approximation for the linear viscous step. Whilst higher-order timestepping methods could be employed, because typically several hundred time-steps are required per shedding cycle to guarantee stability of the iterative approach, there is no discernible improvement in accuracy of the overall solution. The solver is explained in more detail by [START_REF] Thompson | Hydrodynamics of a particle impact on a wall[END_REF], and has widely been tested, validated and used for studies of flows around bluff bodies such as cylinders (Thompson et al. 2001b;[START_REF] Ryan | Three-dimensional transition in the wake of bluff elongated cylinders[END_REF][START_REF] Rao | Flows past rotating cylinders next to a wall[END_REF] and spheres (Thompson et al. 2001a;[START_REF] Rao | Transition to chaos in the wake of a rolling sphere[END_REF][START_REF] Thompson | Hydrodynamics of a particle impact on a wall[END_REF]. This code has also been modified to determine the linear stability of base flows, as explained in section 2.4 below.
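To make the sequencing of the three-step splitting concrete, the sketch below applies the same structure (explicit third-order Adams-Bashforth for advection, implicit Crank-Nicolson for diffusion) to a one-dimensional periodic advection-diffusion problem. It is only a toy analogue, not the spectral-element solver used here: space is handled with a Fourier pseudo-spectral derivative purely for brevity, and the pressure sub-step has no one-dimensional counterpart, so it is omitted.

```python
# Toy 1D sketch of the operator splitting described above: explicit 3rd-order
# Adams-Bashforth for advection, implicit Crank-Nicolson for diffusion.
import numpy as np

N, L, c, nu, dt, nsteps = 128, 2 * np.pi, 1.0, 0.01, 1e-3, 5000
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi        # wavenumbers
u = np.exp(-10 * (x - np.pi) ** 2)                # initial pulse

def advection(u):
    """-c du/dx, evaluated spectrally."""
    return -c * np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

hist = [advection(u)] * 3                          # AB3 history (cold start)
for n in range(nsteps):
    hist = [advection(u)] + hist[:2]
    # sub-step 1: explicit Adams-Bashforth-3 advection
    u_star = u + dt * (23 * hist[0] - 16 * hist[1] + 5 * hist[2]) / 12.0
    # sub-step 2: implicit Crank-Nicolson diffusion (diagonal in Fourier space)
    u_hat = np.fft.fft(u_star)
    u_hat *= (1 - 0.5 * dt * nu * k ** 2) / (1 + 0.5 * dt * nu * k ** 2)
    u = np.real(np.fft.ifft(u_hat))

# the mean (k = 0 mode) should be conserved by both sub-steps
print("change in mean:", abs(u.mean() - np.exp(-10 * (x - np.pi) ** 2).mean()))
```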
In addition to the time-dependent solver, a steady solver is required to produce the steady base flows for the linear stability analysis. This is a modified version of the spectral-element code based on the penalty formulation (see [START_REF] Zienkiewicz | The Finite Element Method[END_REF]. This has been validated for a number of similar flow problems (e.g., [START_REF] Jones | A study of the geometry and parameter dependence of vortex breakdown[END_REF][START_REF] Thompson | The sensitivity of steady vortex breakdown bubbles in confined cylinder flows to rotating lid misalignment[END_REF].
Linear stability analysis
Linear stability analysis was undertaken in order to quantify flow transitions leading to vortex shedding, and to three-dimensional flow. For a two-dimensional steady or periodic base flow U, the velocity and pressure perturbation fields (u′, P′) satisfy the continuity and linearised Navier-Stokes equations:
$$\nabla \cdot \boldsymbol{u}' = 0, \qquad (2.4)$$
$$\frac{\partial \boldsymbol{u}'}{\partial t} + \boldsymbol{U} \cdot \nabla \boldsymbol{u}' + \boldsymbol{u}' \cdot \nabla \boldsymbol{U} = -\frac{1}{\rho} \nabla P' + \nu \nabla^{2} \boldsymbol{u}'. \qquad (2.5)$$
Because the coefficients are independent of z, the perturbation fields can be further decomposed, representing z-dependence of variables as the sum of complex terms of a Fourier expansion:
$$u'(x, y, z, t) = \sum_{k} \hat{u}_k(x, y, t) \sin(2\pi z/\lambda_k), \qquad (2.6)$$
$$v'(x, y, z, t) = \sum_{k} \hat{v}_k(x, y, t) \sin(2\pi z/\lambda_k), \qquad (2.7)$$
$$w'(x, y, z, t) = \sum_{k} \hat{w}_k(x, y, t) \cos(2\pi z/\lambda_k), \qquad (2.8)$$
$$P'(x, y, z, t) = \sum_{k} \hat{P}_k(x, y, t) \sin(2\pi z/\lambda_k). \qquad (2.9)$$
Using these expansions, equations (2.4) and (2.5) give for each of the Fourier modes:
$$\frac{\partial \hat{u}_k}{\partial t} = -\left( \hat{u}_k \frac{\partial U}{\partial x} + \hat{v}_k \frac{\partial U}{\partial y} + U \frac{\partial \hat{u}_k}{\partial x} + V \frac{\partial \hat{u}_k}{\partial y} \right) - \frac{1}{\rho} \frac{\partial \hat{P}_k}{\partial x} + \nu \left( \frac{\partial^{2} \hat{u}_k}{\partial x^{2}} + \frac{\partial^{2} \hat{u}_k}{\partial y^{2}} - \left(\frac{2\pi}{\lambda_k}\right)^{2} \hat{u}_k \right), \qquad (2.10)$$
$$\frac{\partial \hat{v}_k}{\partial t} = -\left( \hat{u}_k \frac{\partial V}{\partial x} + \hat{v}_k \frac{\partial V}{\partial y} + U \frac{\partial \hat{v}_k}{\partial x} + V \frac{\partial \hat{v}_k}{\partial y} \right) - \frac{1}{\rho} \frac{\partial \hat{P}_k}{\partial y} + \nu \left( \frac{\partial^{2} \hat{v}_k}{\partial x^{2}} + \frac{\partial^{2} \hat{v}_k}{\partial y^{2}} - \left(\frac{2\pi}{\lambda_k}\right)^{2} \hat{v}_k \right), \qquad (2.11)$$
$$\frac{\partial \hat{w}_k}{\partial t} = -\left( U \frac{\partial \hat{w}_k}{\partial x} + V \frac{\partial \hat{w}_k}{\partial y} \right) - \frac{2\pi}{\lambda_k} \frac{1}{\rho} \hat{P}_k + \nu \left( \frac{\partial^{2} \hat{w}_k}{\partial x^{2}} + \frac{\partial^{2} \hat{w}_k}{\partial y^{2}} - \left(\frac{2\pi}{\lambda_k}\right)^{2} \hat{w}_k \right), \qquad (2.12)$$
$$\frac{\partial \hat{u}_k}{\partial x} + \frac{\partial \hat{v}_k}{\partial y} - \frac{2\pi}{\lambda_k} \hat{w}_k = 0. \qquad (2.13)$$
These perturbation field modes (ûk, v̂k, ŵk, P̂k) can be further expressed as a sum of eigenmodes, each with its own growth rate. After choosing a wavelength and integrating these equations from initially random fields for sufficient time, the velocity perturbation fields will be dominated by the eigenmodes with the largest growth rates. Using a Krylov subspace together with Arnoldi decomposition allows a sequence of evolved fields to be decomposed into the dominant eigenmodes together with their corresponding growth rates (e.g., [START_REF] Mamun | Asymmetry and Hopf-bifurcation in spherical Couette flow[END_REF][START_REF] Barkley | Three-dimensional Floquet stability analysis of the wake of a circular cylinder[END_REF]. If Â represents any of the perturbation fields ûk, v̂k, ŵk and P̂k, then this method gives Â(x, y, t + T ) = Â(x, y, t) exp(σT ), provided the eigenmode spatial distribution is not a function of time. Here σ is the growth rate and T is the time interval over which the growth of the mode is recorded.
It is also possible to get pairs of eigenmodes that have complex conjugate growth rates, providing the possibility of solutions with a periodic component on top of the exponential time variation. These pairs can also be extracted directly from the Arnoldi decomposition together with the complex growth rates σ = σ r + iσ i . For a three-dimensional transition on a two-dimensional periodic base flow, the procedure is the same, with the sequence of perturbation fields forming the Krylov subspace taken at full baseflow period intervals T .
In that case, the approach is called Floquet analysis, and the growth of each eigenmode is often expressed as a Floquet multiplier µ = exp(σT ), i.e., the amplitude of the mode after it has evolved for one period relative to the initial state. Again, it is possible to have pairs of eigenmodes with complex conjugate Floquet multipliers that are resolved through Arnoldi decomposition. For the three-dimensional analysis, the eigenmodes depend on the selected spanwise wavelength λ k . If |µ| > 1 (or σ r > 0), then the perturbation field will be amplified over time, while |µ| < 1 (or equivalently σ r < 0) for all λ implies that any perturbation will decay and, hence, the flow is linearly stable. Transition occurs when |µ| = 1 or σ r = 0. For the three-dimensional case, this condition has to be tested for every possible spanwise wavelength. The eigenmodes obtained from the Floquet analysis reported in this paper have periods equal to that of the periodic two-dimensional base flow, twice that period (subharmonic modes), or other periods (quasi-periodic modes). The first two cases are characterised by a growth rate σ that is real. The quasi-periodic modes have a period that is not commensurate with the base-flow period.
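A minimal sketch of this Krylov/Arnoldi extraction is given below. In the actual analysis, the operator whose leading eigenvalues are sought is the linearised Navier-Stokes propagator advancing a perturbation over one base-flow period T (for each chosen spanwise wavelength); here a small fixed matrix stands in for that propagator purely so the example is self-contained and runnable.

```python
# Sketch of the Krylov/Arnoldi extraction of dominant Floquet multipliers.
# In the real analysis the "propagator" is the linearised solver advancing a
# perturbation field over one base-flow period T; here a small matrix is a
# stand-in so the example runs on its own.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

rng = np.random.default_rng(0)
n, T = 400, 15.0                                   # dofs, base-flow period
M = rng.standard_normal((n, n)) / np.sqrt(n)       # stand-in monodromy matrix

def propagate_one_period(v):
    """Action of the (stand-in) perturbation propagator over one period T."""
    return M @ v

A = LinearOperator((n, n), matvec=propagate_one_period)
mu, _ = eigs(A, k=4, which="LM")                   # leading Floquet multipliers

for m in mu:
    sigma = np.log(m) / T                          # complex growth rate
    print(f"|mu| = {abs(m):.3f},  Re(sigma) = {sigma.real:+.4f},  "
          f"{'unstable' if abs(m) > 1 else 'stable'}")
```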
Domain size and resolution studies
For this study, the Reynolds number range is restricted to Re ≤ 300, covering flow transitions to vortex shedding and to three-dimensionality.
Two domain sizes were investigated to quantify blockage effects. All positions are nondimensional and are scaled by the diameter of the cylinder. The first mesh M 1 is shown on figure 3 and consists of 1472 macro-elements. The upstream, downstream and upper boundaries are positioned at (x 1,u , x 1,d , y 1 ) = (-25, 25, 50), respectively. The second mesh M 2, also shown on figure 3, consists of 1906 elements and has the dimensions (x 2,u , x 2,d , y 2 ) = (-50, 50, 100). Both meshes have increased resolution in the vicinity and downstream of the cylinder that is located at x = 0 and y = 0. The differences in the drag and lift forces do not exceed 1% at the highest Reynolds number considered (Re = 300).
To ensure that the solution is converged with the chosen timestep ∆t = 0.0030, the latter was halved to 0.0015. This produced a variation in the body forces of less than 0.2% at Re = 150.
A last resolution study was carried out by increasing the number of internal node points within each macro-element from N = 4 (×4) to N = 5 (×5), N = 6 (×6) and N = 7 (×7), which is taken as the reference value. We found that the drag force, the lift force and the period of oscillation each differ by less than 1% for N ≥ 5 at Re = 150. At the highest Reynolds number Re = 300, the drag force and Strouhal number are well within 0.5% at N = 5 (×5), whereas the error in the lift force reached 2.5%. From these results, and considering that this study focuses mostly on a range of Reynolds numbers around the transition values Re c,3D and Re c,2D and up to 160, we can safely conclude that the mesh with 5 nodes per macro-element is converged, and it is used throughout the simulations.
The point of contact between the cylinder and the wall leads to a mesh singularity, and hence a small gap G is imposed between the cylinder and the wall. It has been shown previously that the flow structures visualised in the experiments and those observed numerically are in good agreement with G/d = 0.005 [START_REF] Stewart | The wake behind a cylinder rolling on a wall at varying rotation rates[END_REF][START_REF] Stewart | Flow dynamics and forces associated with a cylinder rolling along a wall[END_REF][START_REF] Rao | Flows past rotating cylinders next to a wall[END_REF], while reducing the gap ratio further has little effect on flow quantities of interest [START_REF] Stewart | The wake behind a cylinder rolling on a wall at varying rotation rates[END_REF]. Thus, a gap ratio G/d of 0.005 was used throughout this investigation.
For the full three-dimensional non-linear simulations, a Fourier expansion is used to represent the spanwise dependence of the flow variables (e.g., see [START_REF] Karniadakis | Three-dimensional dynamics and transition to turbulence in the wake of bluff objects[END_REF][START_REF] Thompson | Three-dimensional instabilities in the wake of a circular cylinder[END_REF][START_REF] Karniadakis | Spectral/HP element methods for CFD[END_REF]. The spanwise dimension is set to 54d for simulations for Re ≥ Re c,2D and 25.5d for Re < Re c,2D . These domains were chosen to fit approximately three wavelengths of the longest wavelength instability mode predicted by Floquet analysis (λ ∼ 8.5d and 18d for the steady and periodic base states). The choice of such large spanwise domains was to allow non-linear interactions between all relevant modes as the wake evolves to a fully saturated state. Another option would be to restrict the domain to wavelengths corresponding to each dominant mode to look at the super-/sub-critical nature of each transition from a linear to a non-linear state (but not the fully saturated state). This could have been done, but it is not clear it would contribute much to the physical picture. For instance, for the Mode A transition of a circular cylinder in freestream, beyond saturation the wake evolves to allow dislocations (e.g., Williamson 1996a,b) that cannot be represented on a single wavelength spanwise domain. Also relevant, [START_REF] Karniadakis | Three-dimensional dynamics and transition to turbulence in the wake of bluff objects[END_REF] used a spanwise width that only allowed Mode B to grow. Because of this unphysical restriction, the authors observed period-doubling as the route to a fully-turbulent flow. However, this does not seem to be the situation for the real wake, or for computations using a sufficiently wide spanwise domain [START_REF] Henderson | Nonlinear dynamics and pattern formation in turbulent wake transition[END_REF]). Thus, it was decided to use a wider domain that would not put unphysical restrictions on mode development. Typically 48 and 96 Fourier modes are used for these simulations for the steady and periodic regimes, respectively. Since the shortest wavelength mode corresponds to λ ≈ 2.5d, this corresponds to approximately 10 Fourier planes to resolve the smallest important scales that develop in the wake. Tests with 144 Fourier modes confirm that this resolution accurately captures the wake evolution for the Reynolds number range considered.
Table 1 reports the results from the different resolution studies mentioned above.
Results
Linear stability analysis and flow transitions
Experimental studies by [START_REF] Taneda | Experimental investigation of vortex streets[END_REF] showed that the presence of the wall has a stabilising effect on the flow as long as the gap ratio does not exceed a certain critical value [START_REF] Lei | Re-examination of the effect of a plane boundary on force and vortex shedding of a circular cylinder[END_REF]. [START_REF] Stewart | The wake behind a cylinder rolling on a wall at varying rotation rates[END_REF] investigated the wake behind rolling cylinders at various rotation rates α, and found that as α varies from prograde (α > 0) to retrograde (α < 0) rolling, the critical Reynolds numbers for three-dimensional (Re c,3D ) and unsteady (Re c,2D ) transitions both increase. For comparison with that previous study, these critical Reynolds numbers were again predicted for the reference case of α = 1 (pure rolling without slipping). These transitions and the resulting flow states are depicted in figure 4. As indicated above, in contrast to the situation of a cylinder placed in an unbounded flow, the transition to three-dimensional flow for a cylinder placed near a wall occurs directly from a steady two-dimensional flow, similar to the situation for a backward-facing step [START_REF] Barkley | Three-dimensional instability in flow over a backward-facing step[END_REF]. As with that flow, its perturbation mode takes the form of periodic cells evenly distributed along the span of the cylinder. Figure 5 shows the perturbation velocity field projected into the plane just touching the top of the cylinder for Re = 60 and λ/d = 11, taken when the three-dimensional instability begins. This clearly shows the rotating cells associated with this instability mode. Stability analysis on the steady base flow shows that the onset of three-dimensional flow first occurs at Re c,3D = 36.8 for a spanwise wavelength λ c of 8.6d. Figure 6 shows that the maximum observed growth rate saturates by Re ∼ 60, only increasing further beyond Re ∼ 150, well beyond the onset of shedding. This is further confirmed in figure 7, where the growth rate is reported over the entire range of Reynolds numbers and spanwise wavelengths. The white regions in this figure represent negative growth rates and therefore stable wakes. Growth rates that are not real and are instead composed of a complex conjugate pair (i.e. the period of the mode is different from that of the base flow) were only detected in the blue region between 90 ≲ Re ≲ 150 and 5 ≲ λ/d ≲ 8. The base flows for this analysis were generated using a steady version of the spectral-element code, allowing the stability to be investigated well beyond the transition to unsteady flow. The preferred wavelength can be seen to increase from λ = 8.6d at onset to reach values in excess of 20d at Re = 150, before suddenly dropping back on further increasing the Reynolds number. This is associated with the growth rate versus wavelength curves for Re ≳ 150 developing two peaks, with the lower-wavelength peak dominating for Re ≳ 180.
Transition to two-dimensional unsteady flow
The transition from steady to unsteady two-dimensional flow occurs when the recirculation bubble at the rear of the cylinder becomes unstable and starts to shed vortices (figure 4b). These vortices interact with the wall through the no-slip condition, generating secondary vorticity as they advect downstream. In turn, this secondary vorticity is pulled away from the wall to combine with the primary generating vortex to form a vortex pair. The self-induced velocity of the pair causes it to move upwards and away from the wall, but because the primary vortex is stronger, the movement is along a curved path. The essential features of this process can be seen in figure 4b.
Figure 8 shows the form of the instability mode visualised by the perturbation spanwise vorticity, indicating that the mode has large amplitude where the base vorticity field is strong as well as close to the ground plane. Linear stability analysis of the steady base flow shows that this transition, which is characterised by a Hopf bifurcation, occurs at Re c,2D = 88. This is shown in figure 9, which gives the growth rate and the preferred oscillation frequency as a function of the Reynolds number. Although not shown in the paper, it was verified that the fluctuating lift oscillation amplitude varied as √(Re − Re c,2D ) close to the transition, as expected for a Hopf bifurcation. The oscillation frequency decreases with Reynolds number from St ≈ 0.066 at onset to 0.054 at Re = 200. Interestingly, the frequency of the fully saturated 2D wake stays relatively close to the perturbation mode frequency over this entire range. This is perhaps surprising given that the saturated periodic state deviates considerably from the steady wake base state, but it is probably an indication that the frequency selection is based on the separating shear-layer properties rather than the near-wake field. Discussion on frequency selection for the related case of a cylinder in freestream can be found in [START_REF] Pier | On the frequency selection of finite-amplitude vortex shedding in the cylinder wake[END_REF]; [START_REF] Barkley | Linear analysis of the cylinder wake mean flow[END_REF]; [START_REF] Sipp | Global stability of base and mean flows: a general approach and its applications to cylinder and open cavity flows[END_REF]; [START_REF] Leontini | A numerical study of global frequency selection in the time-mean wake of a circular cylinder[END_REF].
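As a side note, the square-root amplitude scaling gives a standard way of locating the onset from saturated-amplitude data: the squared amplitude is linear in Re near the transition and extrapolates to zero at Re c,2D. The sketch below illustrates such a fit with made-up amplitude values (the numbers are not data from the present simulations).

```python
# Locating the Hopf bifurcation from saturated oscillation amplitudes:
# near onset A ~ sqrt(Re - Re_c), so A**2 is linear in Re and extrapolates
# to zero at Re_c.  The amplitude values below are illustrative only.
import numpy as np

Re = np.array([92.0, 96.0, 100.0, 104.0, 108.0])
A = np.array([0.021, 0.030, 0.037, 0.042, 0.047])    # made-up lift amplitudes

slope, intercept = np.polyfit(Re, A**2, 1)            # linear fit of A^2 vs Re
Re_c = -intercept / slope
print(f"extrapolated critical Reynolds number Re_c ~ {Re_c:.1f}")
```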
Stability of the fully developed 2D periodic wake
Given the discussion in section 2.1, it seems likely that the initial three-dimensional development of the wake at a particular Reynolds number above Re c,2D = 88 will be determined by the stability of the 2D periodic flow. That will be tested in later sections through direct simulations. In this section, the 3D linear stability of the 2D periodic base flow is characterised first. Beyond the transition to unsteady flow, a number of different modes contribute to the wake becoming three-dimensional. The occurrence and growth rates of these modes are also strongly dependent on the Reynolds number, presumably because the structure of the time-dependent two-dimensional wake is also a strong function of the Reynolds number. Figure 10 summarises the situation by showing the growth rate corresponding to the dominant mode as a contour map over a wide range of spanwise wavelengths and
Reynolds numbers beyond Re c,2D. There are a few regions of substantial growth, notably corresponding to λ/d ∼ 4 and 8-9 covering different Reynolds number ranges. The picture is a little more complicated than indicated by this map, with local peaks corresponding to different mode types: synchronous modes (i.e., with the same period as the base flow); subharmonic modes (with twice the base period); and quasi-periodic modes (with periods different from the base period). To show this in more detail, Floquet multiplier variations as a function of wavelength at three different Reynolds numbers Re = 100, 130 and 160 are given in figure 11. Just above the transition at Re = 100, the fastest growing mode, marked (1) in the figure, reaches a maximum growth at λ ≈ 3.2d. This corresponds to a real or synchronous Floquet mode, i.e., the period is the same as the base flow period. At this Reynolds number, there are several other contributing modes with positive growth rates covering the wavelength range studied: another real mode (2) with a wavelength of λ ∼ 6-7d, a subharmonic mode (3) starting at λ ≈ 8d, and a quasi-periodic mode (4) for λ ≳ 15d. These all have strongly positive growth rates, although less than the short wavelength mode (1). At Re = 130, the short wavelength mode (1) is still present, but now the most dominant mode is the subharmonic mode (3) at a wavelength of λ ≈ 9d. The longer wavelength quasi-periodic mode is still present, although it gives way to another real mode at still higher wavelengths (λ ≳ 25d). At Re = 160, there are again several changes to the picture. The short wavelength mode (1) and subharmonic mode (3) are still present, with the subharmonic mode relatively much more dominant. At higher wavelengths (λ ≳ 15d), a real mode (5) becomes more dominant than the quasi-periodic mode (4) over that wavelength range. The fact that all these modes are amplified and they cover a wide wavelength range suggests that the wake is likely to become chaotic quickly after the initial growth of the most dominant mode begins to saturate. (The evolution of the modal amplitude until saturation is shown in figure 20, as discussed later). This is investigated further using direct numerical simulations in the following section, but prior to this, the vorticity structure of the modes is examined.
Figure 12 shows the evolution of the perturbation spanwise vorticity field for mode (1) at Re = 100, where it is the fastest growing mode, and at Re = 160, where it is less dominant. Especially in the lower Reynolds number case, the structure of the perturbation field inside the newly formed and shed vortex cores clearly shows the characteristics of elliptical instability [START_REF] Bayly | Three-dimensional instability of elliptical flow[END_REF][START_REF] Pierrehumbert | Universal short-wavelength instability of two-dimensional eddies in an inviscid fluid[END_REF][START_REF] Landman | The three-dimensional instability of strained vortices in a viscous fluid[END_REF][START_REF] Waleffe | On the three-dimensional instability of strained vortices[END_REF][START_REF] Leweke | Three-dimensional instabilities in wake transition[END_REF]Thompson et al. 2001b;[START_REF] Kerswell | Elliptical instability[END_REF]. In particular, the perturbation vorticity shows two lobes of positive and negative vorticity, whose extrema align at ∼ 45° to the main axes of the elliptically shaped vortex cores (marking regions with elliptic streamlines in the reference frame moving with the advection velocity at the centres of these cores). Also importantly, the orientation of the lobes is approximately maintained as the vortices advect downstream, allowing the perturbation to grow and allowing feedback from one shedding cycle to the next. Although somewhat far from the idealised cases for which the theory was developed [START_REF] Waleffe | On the three-dimensional instability of strained vortices[END_REF], for finite-sized vortices, the preferred wavelength is dependent on the core size. Le [START_REF] Dizès | Theoretical predictions for the elliptic instability in a twovortex flow[END_REF] showed that for Gaussian vortices under strain, the spanwise wavelength is given by λ = 2.78a, with a the Gaussian length scale. For highly strained vortices, such as is the case here, the appropriate length scale (a) is given by Le [START_REF] Dizès | Viscous interaction of two co-rotating vortices before merging[END_REF] as a² = (a_M² + a_m²)/2, with a_M and a_m corresponding to the semi-major and semi-minor axis lengths, respectively. At Re = 100, figure 12 shows that the approximately invariant vorticity tube grows in size as the vortex cores advect downstream. For the first image, where the elliptical instability pattern is first recognisable, the length scales from the just-formed and downstream cores obtained by fitting Gaussian profiles to the major and minor axes are 0.95d and 1.27d, giving preferred spanwise wavelengths of 2.7 and 3.5d, respectively. Figure 11 shows that Floquet analysis indicates that the maximum growth for this mode corresponds to a wavelength of λ = 3.2d, near the centre of the range of the theoretical prediction. At Re = 160, Floquet analysis shows the preferred wavelength of mode (1) drops to 2.5d. This is in line with the prediction of more compact shed cores due to lower viscous diffusion at the higher Reynolds number. In this case, an asymmetrical counter-rotating vortex pair is formed before the newly formed vortex advects very far downstream. This composite structure also shows evidence of a perturbation pattern consistent with elliptic instability, as it advects away from the wall in an approximately circular arc.
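The wavelength estimate quoted above follows directly from the λ = 2.78a scaling; the short calculation below reproduces it from the two fitted effective core sizes (values taken from the text, in units of the diameter d), giving numbers close to the 2.7d and 3.5d quoted above.

```python
# Reproducing the elliptic-instability wavelength estimates quoted above:
# for a strained Gaussian vortex the preferred spanwise wavelength is
# lambda = 2.78 a, where a**2 = (a_M**2 + a_m**2)/2 combines the semi-major
# and semi-minor Gaussian length scales.  The two fitted effective core
# sizes quoted in the text are used directly here.
for a in (0.95, 1.27):                      # effective core sizes, in units of d
    print(f"a = {a:.2f} d  ->  lambda = 2.78 a = {2.78 * a:.2f} d")
```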
Various studies have identified elliptic instability in an isolated counter-rotating vortex pair (e.g., [START_REF] Leweke | Cooperative elliptic instability of a vortex pair[END_REF].
Figure 13 shows the perturbation streamwise vorticity structure of modes (3) and (5) at Re = 160. Mode (3), the shorter wavelength mode, is subharmonic, repeating every two base flow periods. Mode (5) only becomes dominant for Re ≳ 160. Below this Reynolds number, a quasi-periodic mode occupying this wavelength range has a higher growth rate. Interestingly, the amplitude distributions of these two modes appear similar. Inside the newly forming vortex, the distributions broadly match between the two modes, in terms of both the sign and distribution of perturbation vorticity. The vorticity distributions within the downstream vortex pairs are also similar, but of opposite sign, as is the case with the layer of vorticity near the ground between the two main vortical structures.
The physical nature of the instability in this case is more difficult to discern. The developing wake does not form a series of relatively long-lived elliptical-shaped vortices in this case, but rather a newly formed vortex generates secondary vorticity beneath it, which is subsequently drawn away from the boundary to form an unequal strength counter-rotating vortex pair. The process is shown in figure 14. This indicates that there are a number of different identifiable vortex structures and groupings that combine to lead to the observed flow instability modes.
The evolution of the circulation in the primary and secondary vortices as they advect downstream is shown in figure 15. The primary (clockwise) vortices, which form directly from the separating shear layer, grow quickly in strength prior to the shear layer pinching off and releasing the vortices to move downstream. The circulation then slowly decays. During its initial growth and soon after its release, this primary vortex combines with the secondary vorticity generated at the boundary, to form the vortex pair. At a point in time when this pair becomes unambiguously defined, i.e., between images 2 and 3 of figure 14, the ratio of circulations between the component vortices of the pair is approximately -2:1. [START_REF] So | Short-wave instabilities on a vortex pair of unequal strength circulation ratio[END_REF] analysed the stability of unequal strength Lamb-Oseen vortices, which have a Gaussian vorticity distribution, examining how the growth rate and preferred wavelength varied with circulation ratio. Such a system is subject to both the short-wavelength elliptic instability and the longer-wavelength Crow instability. The results of [START_REF] So | Short-wave instabilities on a vortex pair of unequal strength circulation ratio[END_REF] can be used to obtain an estimate for the most unstable wavelength. A recent review of the Crow instability is given in [START_REF] Leweke | Dynamics and instabilities of vortex pairs[END_REF]. Examining the spanwise perturbation vorticity field of mode 5 (not shown) shows the characteristic lobe structure of Crow instability in the vortex pair as it moves away from the cylinder and the wall. By approximating the vorticity distributions within the pair at formation in terms of a sum of Gaussian distributions to extract the length scale for each vortex, together with the overall circulation ratio, a preferred wavelength of approximately λ ≈ 20d can be predicted from the work of [START_REF] So | Short-wave instabilities on a vortex pair of unequal strength circulation ratio[END_REF]. This is close to the observed preferred wavelength of mode 5 at Re = 160 of λ ≈ 18d. However, in this case, the perturbation field does not grow as the vortex pair advects, so the Crow instability of the pair alone cannot be responsible for the maintenance of the instability mode from one cycle to the next.
During the formation and evolution of the wake vortices, it is also possible to take into account the image vortices, linked to the presence of the wall and symmetrically located with respect to it. The near wake vortex pair and its image form a symmetric four-vortex system, a configuration analysed previously in the context of aircraft trailing wakes (e.g., [START_REF] Crouch | Airplane trailing vortices and their control[END_REF][START_REF] Winckelmans | Vortex methods and their application to trailing wake vortex simulations[END_REF][START_REF] Jacquin | Unsteadiness, instability and turbulence in trailing vortices[END_REF]. The existence of short-wave (elliptic) and Crow-type long-wave instabilities was also found in these systems. Although the identification of such systems is transitory for this wake, it seems plausible that the Crow instability would play a role in the global three-dimensional instability and wavelength selection for the wake.
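For reference, the circulations plotted in figure 15 and the quoted -2:1 ratio amount to area integrals of the spanwise vorticity over each identified vortex. The sketch below shows one simple way such a measurement can be made from a gridded vorticity field using a threshold mask to delimit the vortex; the synthetic Lamb-Oseen field is test data only, and the actual post-processing used for figure 15 may differ in detail.

```python
# Minimal sketch: circulation of an individual wake vortex from a gridded
# spanwise-vorticity field, Gamma = integral of omega_z over the vortex.
# A simple threshold mask delimits the vortex; the Lamb-Oseen field below
# is synthetic test data with known circulation -1.
import numpy as np

dx = dy = 0.02
x, y = np.meshgrid(np.arange(-2, 2, dx), np.arange(-2, 2, dy))

a, Gamma_exact = 0.3, -1.0
r2 = x**2 + y**2
omega_z = Gamma_exact / (np.pi * a**2) * np.exp(-r2 / a**2)

def circulation(omega, mask, dx, dy):
    """Integrate vorticity over the masked region."""
    return np.sum(omega[mask]) * dx * dy

mask = np.abs(omega_z) > 0.01 * np.abs(omega_z).max()   # threshold mask
print("measured Gamma =", circulation(omega_z, mask, dx, dy))   # ~ -1
```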
Saturated three-dimensional state
Computed forces
In this section, the force coefficients after the flow has reached its fully saturated three-dimensional state are compared with the force coefficients predicted by 2D steady and periodic simulations. The time-mean forces obtained from 3D direct numerical simulations are in good agreement with the two-dimensional predictions. Figure 16 shows plots of the mean drag and lift coefficients, as defined in equation (2.3), versus the Reynolds number. For Reynolds numbers up to the Hopf bifurcation leading to a periodic two-dimensional state, the 2D and 3D curves are effectively indistinguishable. This is consistent with the relatively weak effect on the wake of the steady three-dimensional instability even as it saturates, as shown in figure 17. However, even beyond the transition to periodic flow, the difference between the 2D periodic and 3D predictions remains small, being less than 5% for the mean drag coefficient and 4% for the lift coefficient at the highest Reynolds number considered here of Re = 180. In this case, the saturated three-dimensional wake is distinctly different from the two-dimensional periodic wake, as is explored further below. The figure also shows the lift and drag coefficients based on the steady flow for Re > Re c,2D obtained from the steady solver. These curves deviate considerably from the other two sets as the Reynolds number increases. This is consistent with the increasingly elongated recirculation region of the steady flow deviating further from the near-wake vortex shedding of the 2D-periodic and 3D flows as the Reynolds number is increased.
Figure 18 shows the temporal evolution of the lift coefficient obtained from full three-dimensional simulations at Re = 60, 80, 90 and 160. The initial evolution follows the one observed from the two-dimensional simulations (figure 2) until the three-dimensional instability grows sufficiently to change the two-dimensional structure of the flow. This effect can be seen in the temporal development of the lift coefficient: below the Hopf bifurcation at Re c,2D = 88, the three-dimensional transition disturbs the otherwise constant lift coefficient at approximately t = 400-600 d/U (upper two plots), whilst beyond the 2D transition, the periodic oscillations in the curves die out at approximately t = 200-400 d/U (bottom two plots) as a result of the three-dimensional instability reaching a sufficient amplitude to substantially alter the otherwise two-dimensional periodic flow.
It is of interest why the oscillations in the lift signal are substantially suppressed once the wake reaches its saturated three-dimensional state. To investigate this, sectional lift coefficient signals, i.e., the lift coefficient per unit span at a particular spanwise position, were examined at different points across the span. Figure 19 shows the evolution of the lift signals at two points separated by half the span width (dashed lines) together with the mean lift signal (solid line), for Re = 160 in the saturated state. Clearly, there is a significant variation in the local lift coefficient across the span, indicating that the underlying two-dimensional vortex shedding is uncorrelated along the span. In addition to this, even the sectional lift coefficients are not strongly periodic. This is consistent with a change from strong, regular 2D shedding of vortices initially to a much more disordered 3D wake without a strong underlying 2D periodic vortical wake structure. Note that for these simulations, a low-level white-noise perturbation of amplitude 10⁻⁴U was added to each velocity component at startup to accelerate the development of the three-dimensionality.
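The loss of spanwise coherence evident in figure 19 can be quantified with a simple correlation coefficient between sectional lift signals at two spanwise stations; once the wake is three-dimensional this coefficient falls well below unity. The sketch below illustrates the measure on synthetic signals (a shared shedding component plus independent noise), not on the actual simulation data.

```python
# Quantifying the loss of spanwise correlation seen in figure 19: the
# correlation coefficient between sectional lift signals at two spanwise
# stations drops well below one once the wake is three-dimensional.  The
# two signals below are synthetic (common component plus independent noise).
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 200.0, 4000)
common = 0.2 * np.sin(2 * np.pi * 0.055 * t)          # shared shedding signal
cl_z1 = common + 0.15 * rng.standard_normal(t.size)   # station z = 0
cl_z2 = common + 0.15 * rng.standard_normal(t.size)   # station z = span/2

r = np.corrcoef(cl_z1, cl_z2)[0, 1]
print(f"spanwise correlation coefficient r = {r:.2f}")
```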
Development and saturation of three-dimensional flow
Within the steady regime (Re < Re c,2D ), the evolution to a fully developed 3D wake for Re > Re c,3D leads to the two-dimensional spanwise vorticity isosurfaces becoming wavy in the spanwise direction, with little alteration to the main underlying two-dimensional structure of the flow. As identified above, this effect can be seen in figure 17(a), and the extent of this deformation in figure 17(b). These predictions are consistent with those made by [START_REF] Stewart | The wake behind a cylinder rolling on a wall at varying rotation rates[END_REF][START_REF] Stewart | The dynamics and stability of flows around rolling bluff bodies[END_REF], who in addition conducted experiments in a water channel. Their experiments showed that the experimental streaklines and the predicted two-dimensional flow structures are in good qualitative agreement, at least while the three-dimensionality is developing. Note that the final saturated state in this case is weakly unsteady. For instance, at Re = 45 there is a low-level oscillation leading to the weak shedding of vorticity into the wake, while the global 3D structure shown in figures 5 and 17 is maintained. For Reynolds numbers beyond the transition to vortex shedding, which is the main focus here, direct numerical simulations of the flow indicate that, prior to its settling into its final state, the flow initially undergoes a rapid transition to two-dimensional vortex shedding, as indicated by the lift trace curves of figure 18. The numerical method involves representing the spanwise variation through a Fourier decomposition, hence the evolution of the spanwise modes can be easily extracted. A convenient measure of the amplitude of each mode is provided by the RMS amplitude of the spanwise velocity component of each complex Fourier mode, since that velocity component is zero prior to three-dimensional flow development. Specifically, the evolution of the modal amplitudes (A n ), computed as an RMS spatial average over all 2D node points (N xy ) of the moduli of the z-velocity complex Fourier coefficients (a n ) corresponding to mode index n, i.e.,
$$A_n(t) = \sqrt{ \frac{1}{N_{xy}} \sum_{i=1}^{N_{xy}} \left| a_n^{i}(t) \right|^{2} },$$
are shown in figure 20 for Re = 100 and Re = 160. The two figures in the left column show the development of modes corresponding to key wavelengths identified by the global stability analysis. Indeed, after an initial period over which the dominant mode for each wavelength emerges, the growth rates as measured by the slopes of the curves over many oscillation periods have values consistent with the linear stability analysis predictions. At some point in time, the modes grow sufficiently to begin to saturate non-linearly, leading the flow to reach its asymptotic state. After saturation, the figures in the right column show that the final state is influenced by many modes of different wavelengths, suggesting a rapid transition to fully chaotic flow. Figure 21 shows time-mean RMS amplitudes of each Fourier mode taken over the last 100 time units (in the fully saturated state) at (a) Re = 100 and (b) Re = 160. Here, the horizontal axis is the non-dimensional wavenumber kd. These spectra can be compared with figure 21(c) at Re = 45, where the saturated state shows a single spectral peak corresponding to kd = 0.74 (or λ/d = 8.5) together with harmonics accounting for the distortion of the saturated final state from the pure sinusoidal linear instability mode. For the two higher Reynolds number cases, the spectra are continuous as a result of the nonlinear interactions between modes, and this is indicative of a chaotic final wake state. Indeed, the modes corresponding to the dominant linear instability mode numbers do not dominate the spectra at saturation. Isosurface plots taken at key points in the flow development help elucidate the wake evolution. Figure 22(a) shows an isosurface of Q = 0.01 at t = 350d/U . The Q-criterion is a vortex identification method defined initially by [START_REF] Hunt | Eddies, streams, and convergence zones in turbulent flows[END_REF]. This isosurface is merged with isosurfaces of positive/negative streamwise vorticity to highlight the dominant spanwise mode at a time when the three-dimensionality is beginning to modify the otherwise two-dimensional wake structure. In this case, Re = 100. The wavelength of the streamwise vorticity pattern extracted from this image is consistent with the short wavelength mode (1) instability prediction (figure 11a) from stability analysis. Soon after, the wake develops non-linearly, with a typical snapshot shown in figure 22(b).
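For completeness, the modal-amplitude diagnostic defined above can be evaluated directly from the spanwise velocity sampled on the 2D node points and the spanwise planes: an FFT along the span gives the coefficients a_n at each in-plane point, and the RMS over those points gives A_n. The sketch below uses a random array as a stand-in for solver output; the FFT normalisation is a convention and may differ from that used internally by the code. Growth rates then follow from a log-linear fit of A_n(t) during the exponential stage.

```python
# Sketch of the modal-amplitude diagnostic defined above: FFT the spanwise
# velocity w along the span to obtain the coefficients a_n at each of the
# N_xy in-plane points, then take the RMS over those points.  The random
# field below merely stands in for solver output.
import numpy as np

rng = np.random.default_rng(1)
N_xy, N_z = 2000, 96                       # in-plane points, spanwise planes
w = rng.standard_normal((N_xy, N_z))       # stand-in for w(x_i, y_i, z_j, t)

a_n = np.fft.rfft(w, axis=1) / N_z         # complex Fourier coefficients a_n^i
A_n = np.sqrt(np.mean(np.abs(a_n)**2, axis=0))   # RMS over the 2D points
print("amplitude of first few spanwise modes:", A_n[:4])

# Growth rates can then be estimated from a log-linear fit over the
# exponential stage, e.g. sigma = np.polyfit(t, np.log(A_n_t), 1)[0].
```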
At a higher Reynolds number of Re = 160, the development is somewhat different. Figure 23 shows a sequence of wake states from the time that three-dimensionality is beginning to develop. The first three images show the evolution at three consecutive shedding cycles. The three-dimensionality develops quickly, with the spanwise wavelength of the perturbation corresponding to that of mode (3) of figure 11(c). The second image shows substantial distortion of the previously shed two-dimensional vortex pair, whilst the third image, one cycle later, shows that the subsequently shed vortex pair is virtually destroyed. The final image is a plot taken a few cycles later, after the final asymptotic state is reached. This is similar to the final state at Re = 100 shown in figure 22, except it shows an even more complex finer-scale structure. The previous two-dimensional periodic wake state is no longer visible at all.
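The Q-criterion used for the isosurfaces in figures 22 and 23 is the standard second invariant of the velocity-gradient tensor, Q = ½(‖Ω‖² − ‖S‖²), where S and Ω are the symmetric and antisymmetric parts of ∇u. A minimal finite-difference evaluation on a uniform grid is sketched below; the random velocity field is a stand-in for solver output, and the actual post-processing may compute the gradients spectrally.

```python
# Minimal sketch of the Q-criterion used for the isosurfaces in figures 22
# and 23: Q = 0.5 * (||Omega||^2 - ||S||^2), computed on a uniform grid with
# numpy gradients.  The random velocity field is a stand-in for solver output.
import numpy as np

rng = np.random.default_rng(3)
dx = 0.05
u, v, w = (rng.standard_normal((40, 40, 40)) for _ in range(3))

def q_criterion(u, v, w, dx):
    grads = [np.gradient(f, dx) for f in (u, v, w)]   # grads[i][j] = du_i/dx_j
    Q = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            A_ij = grads[i][j]
            A_ji = grads[j][i]
            S = 0.5 * (A_ij + A_ji)                   # strain-rate component
            W = 0.5 * (A_ij - A_ji)                   # rotation component
            Q += 0.5 * (W**2 - S**2)
    return Q

Q = q_criterion(u, v, w, dx)
print("fraction of points with Q > 0:", np.mean(Q > 0.0))
```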
Conclusions
The stability analysis of the steady two-dimensional flow past a cylinder rolling at constant speed along a rigid surface has shown that the key transitions to steady three-dimensional flow and to periodic vortex shedding occur at Re c,3D = 36.8 and Re c,2D = 88, respectively. These results are mainly confirmation of findings from a previous study by [START_REF] Stewart | The wake behind a cylinder rolling on a wall at varying rotation rates[END_REF]. However, the main emphasis of this paper is on three-dimensional wake development in a more realistic configuration, i.e. after a cylinder starts rolling impulsively at a constant velocity, and on how this evolution is related to stability theory.
Two main cases can be distinguished. The first is when the Reynolds number is lower than the critical Reynolds number leading to two-dimensional transition (Re c,2D ) and above the critical Reynolds number for three-dimensional steady transition (Re c,3D ). In this case, the asymptotic flow state is a three-dimensional flow that is not too far from the prediction based on assuming two-dimensional flow. Indeed, the drag and lift coefficients are practically unaffected by the three-dimensionality. When the Reynolds number is close to Re c,2D , the initially stationary flow develops a few cycles of shedding prior to settling towards a steady state. The second case is observed for Reynolds numbers above Re c,2D . After an impulsive start, the flow undergoes a rapid transition to two-dimensional periodic shedding. Within a few cycles, e.g., about three at Re = 90 and two at Re = 160, the wake evolves to be close to the periodic state predicted by two-dimensional simulations. It then continues in this near-periodic state for several cycles, depending on the background noise level. For the cases considered here, this period of evolution was about 15 and 10 cycles at Re = 90 and 160, respectively. Stability analysis of the two-dimensional periodic state, which has not been undertaken previously, then determines the subsequent development of the wake three-dimensionality. At Re = 100, the wake appears to initially undergo a short wavelength instability (mode 1 shown in figure 11a), consistent with an elliptic instability of the shed vortex cores. At longer times, many more spanwise modes come into play and interact non-linearly, leading to a chaotic final flow state. At a higher Reynolds number of Re = 160, the initial development of three-dimensionality is different. Here, mode 3 of figure 11(c) is the mode to break two-dimensionality. Again, the wake undergoes a rapid transition to a chaotic final state soon afterwards. With the noise levels used to initiate the three-dimensional flow development in the three-dimensional simulations, the fully saturated wake states take approximately 400 and 200 non-dimensional time units to develop, for Re = 90 and 160, respectively. These values are equivalent to the number of diameters the cylinder rolls whilst maintaining a two-dimensional state. Although experimental noise levels are likely to be higher, it is still an indication that, after an impulsive start of the cylinder, a two-dimensional periodic flow state will be maintained for a considerable rolling distance prior to the evolution to a complex three-dimensional wake. The two- and three-dimensional simulations also show that the mean lift and drag coefficients of the fully saturated three-dimensional flow are very close to predictions based on two-dimensional simulations.
It is interesting to speculate whether a similar scenario would apply to a sphere rolling at constant velocity along a wall. In that case, even in freestream, a non-axisymmetric steady transition occurs prior to the periodic transition. The presence of the wall seems likely to cause the premature generation of shedding on impulsive startup, bypassing the slow transition associated with a Hopf bifurcation of a steady flow. However, we will leave this as an open question at this stage.
The numerical model here is essentially an infinite two-dimensional cylinder forced to roll at constant speed. End effects may play a strong part in the wake evolution, just as it can with the flow past a cylinder away from a boundary. Additionally, if the cylinder is free to roll without any constraints on its movement and velocity, vortex-induced vibrations are likely to occur with an unsteady wake. The simulations and results presented in this paper aim to provide a reference study for the idealised case, and constitute an essential element of an ongoing study concerning the fluid-structure interaction of uniformly and freely rolling bodies translating along a wall.
Figure 1: A two-dimensional schematic of the problem under consideration: a cylinder of diameter d is rolling along a wall. The translational and angular velocities are represented by U and ω, respectively. In the simulations, the inertial frame of reference (x, y) is attached to the centre of the body.
Figure 2: Time evolution (scaled by d/U) of the lift coefficient from impulsive startup. (a) & (b) Evolution for Reynolds numbers below the transition to 2D shedding. (c) & (d) Evolution for Reynolds numbers beyond the transition to vortex shedding.
Figure 3: View of the cylinder mesh M1: (x1,u, x1,d, y1) = (-25, 25, 50) (left image) and M2: (x2,u, x2,d, y2) = (-50, 50, 100) (right image). The cylinder is placed in the middle of the x-axis, and near the wall at a gap ratio of G/d = 0.005 in order to avoid numerical singularities from arising. The flow is from left to right, and the resolution in the vicinity and downstream of the cylinder is increased in order to accurately capture the flow structures in the wake.
Figure 4: Visualisation of the initial flow transitions under consideration. (a) 2D steady (left, Re = 30) to 3D steady (right, Re = 40). (b) 2D steady (left, Re = 30) to 2D unsteady (right, Re = 200). The images showing 2D flow depict the spanwise vorticity field, whereas the 3D flow pattern is depicted using isosurfaces of streamwise vorticity. Red and blue represent positive and negative vorticity, respectively.
Figure 5: Top down view of the three-dimensional steady flow at Re = 60 and λ/d = 11, depicted through the projected velocity field in the plane grazing the top of the cylinder.
Figure 6: Left: (a) Maximum growth rate of the most unstable mode for transition from 2D steady to 3D steady flow. Right: (b) Variation of the wavelength of the fastest growing mode with the Reynolds number. Beyond Re ∼ 150 there are two peaks in the growth rate curve, with the shorter wavelength peak developing the higher amplitude for Re ≳ 180.
Figure 7: Contour map of the growth rate σ as a function of the Reynolds number Re and the wavelength λ/d for the 2D steady to 3D transition. For most of the domain, the dominant 3D mode is steady, except for a small (blue) region centred around Re = 120 and λ/d = 5 and extending to higher Reynolds numbers.
Figure 8: The structure of the perturbation field at Re = Re c,2D depicted using perturbation spanwise vorticity with overlaid base flow vorticity contours at ±0.1U/d.
Figure 9:
Figure 10: Contour map of the dominant growth rate σ as a function of the Reynolds number Re and the wavelength λ/d for the 2D periodic base flow.
Figure 11:
Figure 12: Evolution of the spanwise perturbation vorticity for the mode (1) of figure 11 at Re = 100 (left column) and Re = 160 (right column), overlaid with the base flow vorticity contours at ±0.1U/d. The images are 1/5 of a period apart. The spatial distribution of the instability fields shows strong signs of elliptic instability of the vortex cores, as discussed in the text.
Figure 13: Visualisations of the streamwise perturbation vorticity for subharmonic mode (3) (left) and real mode (5) (right) at Re = 160. These images depict streamwise perturbation vorticity coloured contours with the base flow vorticity contour line at ±0.1d/U overlaid to highlight the locations of the vortices.
Figure 14: Evolution of the spanwise wake vorticity at Re = 160 showing the formation of new vortices from the separating shear layer, generation of secondary vorticity at the boundary and release into the wake, and the formation of counter-rotating vortex pairs that self-propel away from the wake as the pair moves downstream. Images are separated in time by 1/5 of a period.
Figure 15: Evolution of the circulation Γ of the primary and secondary vortices over a shedding cycle as they form and advect downstream. Here, Re = 160.
Figure 16: Comparison of time-averaged body force coefficients, drag C D and lift C L, between the two- and three-dimensional simulations for 30 ≤ Re ≤ 180. Note that beyond Re c,2D the 2D-steady predictions shown by the filled circles are based on flows calculated with the steady solver.
Figure 17: The fully developed wake state at Re = 50 visualised over a spanwise distance of three characteristic wavelengths. The wake state is depicted using an isosurface of the spanwise vorticity (ω z = ±0.5).
Figure 18:
Figure 19: Evolution of the sectional lift coefficient at two spanwise locations (dashed lines) separated by half the span width at Re = 160 in the fully saturated state. The spanwise averaged lift coefficient is also shown by the solid line. The period corresponding to two-dimensional vortex shedding is approximately 20 units.
Figure 20: Evolution of the amplitude of spanwise Fourier modes at different Reynolds numbers. Top row: Re = 100; bottom row: Re = 160. The subfigures on the left show the evolution of the three dominant wavelengths as predicted by Floquet analysis, and those on the right show the evolution of the amplitudes corresponding to the first 48 modes. The simulations are started impulsively, with a low level white noise to accelerate the development of the three-dimensionality. Measured slopes of the evolution curves in the linear regime give estimated growth rates (σ) in agreement with growth rate predictions from Floquet analysis given in figure 11.
Figure 21:
Figure 22: Evolution of the wake at Re = 100 from direct simulations. (a): Isosurface of Q = 0.01 (blue) highlighting the predominantly 2D vortices, with isosurfaces of streamwise vorticity at ω x = ±0.1 (red/yellow) superimposed. At this time (t = 350d/U), the three-dimensional instability modes have grown sufficiently to begin to affect the wake. (b) & (c): Later, at t = 500d/U, after the flow has fully saturated. The wake is complex and chaotic with many three-dimensional wavelength components contributing. The isosurface corresponding to Q = 0.01 is plotted in figure (b), and the isosurfaces of streamwise vorticity at ω x = ±0.1U/d in figure (c).
Figure 23: Stages in the evolution of the wake at Re = 160 as depicted by isosurfaces of Q = 0.01. The first three images show the wake structure for three consecutive cycles just after the three-dimensionality is starting to modify the flow. The final image is a typical image after the flow has reached its asymptotic state.
Table 1: Domain, and temporal and spatial resolution study. The numbers in parentheses show the error relative to the highest resolution (or number of nodes N) used for the comparison. The mesh M1 has a blockage ratio of 1% and the mesh M2 a blockage ratio of 2%. Most of the simulations were undertaken with mesh M2, noting typical blockage-induced errors in the Strouhal and force coefficients of ∼ 2%. The comparisons indicate that the timestepping error is negligible, whilst the error induced using 5 × 5 nodes/element is typically ∼ 1% for a Reynolds number of 150.

Mesh   Re    N × N   ∆t       CD                CL                Period
M1     150   4 × 4   0.0030   3.2827 (-2.45)    1.48186 (2.81)    17.525 (-0.51)
M1     150   5 × 5   0.0030   3.3631 (-0.06)    1.45020 (0.62)    17.612 (-0.02)
M1     150   6 × 6   0.0030   3.3650            1.44133           17.615
M1     300   5 × 5   0.0020   3.4175 (-0.12)    0.8532 (2.47)     19.646 (-0.13)
M1     300   6 × 6   0.0020   3.4200 (-0.05)    0.8364 (0.46)     19.670 (-0.01)
M1     300   7 × 7   0.0020   3.4205 (-0.03)    0.8325 (-0.01)    19.671 (-0.005)
M1     300   8 × 8   0.0020   3.4216            0.8326            19.672
M1     150   5 × 5   0.0015   3.3637 (0.02)     1.45179 (0.11)    17.611 (-0.01)
M2     150   5 × 5   0.0030   3.3977 (1.03)     1.45282 (0.18)    17.400 (-1.20)
† Email address for correspondence: [email protected]
Acknowledgements
This research was supported under Australian Research Council, Discovery Projects funding scheme DP130100822 and DP150102879. We also acknowledge computing time support through National Computational Infrastructure projects D71 and N67. | 70,055 | [
"8367"
] | [
"197533",
"196526",
"197533",
"197533"
] |
01481240 | en | [
"phys"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01481240/file/Joulain_et_al_29_03_2016_Journal.pdf | Antoine Joulain
email: [email protected]
Damien Desvigne
email: [email protected]
David Alfano
email: [email protected]
Thomas Leweke
email: [email protected]
Numerical investigation of the reliability of wind-tunnel wall corrections applied to measurements of wingtip flow
The square-tip test case of McAlister and Takahashi [2] is investigated at an uncorrected chord-based Reynolds number Re = 1.6 × 10^6, Mach number M = 0.131 and angle of attack α = 12°.
I. Introduction
Helicopter aerodynamics is strongly influenced by the vortices generated from the rotor blade tips. A literature survey highlights the lack of local blade tip flow measurements, mainly due to the complexity of the aerodynamic phenomena involved and the difficulty of measuring in a small region of interest. Fixed wings have been widely investigated experimentally and numerically to gain more insight into the tip vortex formation, the influence of the shape and the accuracy of the numerical codes (see, e.g. Brocklehurst and Barakos [START_REF] Brocklehurst | A Review of Helicopter Rotor Blade Tip Shapes[END_REF] and references therein). The database of McAlister and Takahashi [START_REF] Mcalister | NACA 0015 Wing Pressure and Trailing Vortex Measurements[END_REF] includes detailed measurements of the vortex roll-up in the vicinity of the wing tip, as well as vortex characteristics during the near-field propagation. It is one of the very few published experimental campaigns considering Reynolds numbers of more than one million, as well as the sensitivity to the tip shape, the Reynolds number and the angle of attack. Moreover, measurements are rectified for meandering, and corrections are proposed to account for wind-tunnel wall interference. In spite of its wealth, this database is used surprisingly rarely as a reference test case to validate numerical methods.
Lim [START_REF] Lim | Numerical Study of the Trailing Vortex of a Wing With Wing-Tip Blowing[END_REF] used the proposed wall corrections to simulate a free-air rounded-wing tip configuration without representing the overall experimental setup. The Thin-Layer compressible Navier-Stokes equations (TLNS) were solved with a second-order scheme and an algebraic turbulence model. The grid extended 8 chords downstream of the trailing edge and 6 chords around the profile, and was refined in the spanwise direction near the tip. The solution was in agreement with the measurements, but the leading-edge upper-surface suction peak and the tip vortex pressure footprint were not captured. Due to computational speed and memory constraints, no grid dependence study was undertaken. According to the author, the simulation could be improved in a number of ways: by increasing the mesh density in areas of high velocity gradients, performing a grid convergence study, extending the domain dimensions, using Reynolds-Averaged Navier-Stokes (RANS) instead of TLNS equations and by using a more physical turbulence model. The relevance of the wall corrections was not called into question.
Kamkar and Wissink [START_REF] Kamkar | Automated Off-Body Cartesian Mesh Adaptation for Rotorcraft Simulations[END_REF] used McAlister and Takahashi's database [START_REF] Mcalister | NACA 0015 Wing Pressure and Trailing Vortex Measurements[END_REF] to perform a RANS simulation of the square-tip test case in free air, without wall corrections. An unstructured near-body grid was coupled with a structured off-body grid system. Adaptive Mesh Refinement, based on the Q-criterion [START_REF] Jeong | On the Identification of a Vortex[END_REF], was performed on the off-body mesh and showed considerable improvements concerning the vortex convection. A difference of 15% for the velocity inside the vortex remained between the best calculation and the measurements; it was attributed to the dissipation of the unstructured near-body mesh and to the communication between unstructured and structured grids. The lack of wall corrections was not discussed, but it probably represents a major contribution to the error.
The objective of the present work is to assess the accuracy of the wind-tunnel wall corrections published by McAlister and Takahashi [START_REF] Mcalister | NACA 0015 Wing Pressure and Trailing Vortex Measurements[END_REF]. To do this, a validated and efficient steady-state RANS method is used to simulate two- and three-dimensional configurations in free air. In this paper, the square-tip test case of McAlister and Takahashi is investigated at an uncorrected chord-based Reynolds number Re = 1.6 × 10^6, Mach number M = 0.131 and angle of attack α = 12°.
II. Computational Setup
The configuration considered in the present simulations consists of a constant-chord NACA 0015 wing with a square tip, bounded at the root by a symmetry plane. The wing aspect ratio is small; it is therefore assumed rigid and free of aeroelastic deformations. Two coordinate systems are defined in the same way as for the experimental setup (presented in Section III). The first one is used to study forces and moments acting on the wing: the r- and ζ-axes extend respectively from the experimental root to the tip and from the leading edge to the trailing edge, the origin being at the root leading edge of the experimental geometry. The Eiffel frame (x, y, z) is employed to follow the tip vortex: the x-axis is aligned with the (horizontal) inflow direction; the y-axis extends from the tip to the root of the wing and the z-axis is vertical; moreover, the origin is located at the center of the vortex core.
II.A. Numerical Method
The ONERA CFD code elsA [START_REF] Cambier | The Onera elsA CFD Software: Input From Research and Feedback From Industry[END_REF] is used in this study. The solver is based on a cell-centered, finite-volume approach. Assuming a steady flow field, the compressible RANS equations are solved in two-dimensional (2D) and three-dimensional (3D) configurations on a multi-block structured grid. A fully turbulent flow is considered, without boundary-layer transition.
The governing equation system is closed with the one-equation model of Spalart and Allmaras [START_REF] Spalart | A One-Equation Turbulence Model for Aerodynamic Flows[END_REF]. Based on the Boussinesq assumption, this formulation was initially designed for the simulation of wall-bounded flows. In order to reduce the production of eddy viscosity in areas where the vorticity is higher than the strain rate (e.g. inside vortex cores), the rotation and streamline curvature correction proposed by Spalart and Shur [START_REF] Spalart | A One-Equation Turbulence Model for Aerodynamic Flows[END_REF] is applied.
The convective fluxes of the mean flow equations are discretized using the second-order central scheme of Jameson, Schmidt and Turkel [START_REF] Jameson | Numerical Solution of the Euler Equations by Finite Volume Methods Using Runge-Kutta Time-Stepping Schemes[END_REF]. Since this scheme is non-dissipative and unstable, a matrix artificial dissipation is introduced. The fourth-order coefficient κ^(4) dampens the high-frequency oscillations in the bulk of the computational domain and is important for stability and convergence to a steady state. Typical values for κ^(4) are in the range 0.016 to 0.032 [START_REF] Swanson | On Some Numerical Dissipation Schemes[END_REF]. In order to reduce the amount of dissipation being introduced and to improve the accuracy, κ^(4) is taken as 0.016. The convective fluxes of the turbulence equations are discretized using a second-order Roe scheme [START_REF] Roe | Approximate Riemann Solvers, Parameter Vectors, and Difference Schemes[END_REF]. All diffusive flux gradients are calculated with a five-point central scheme. A first-order backward-Euler scheme updates the steady-state solution and the LU-SSOR (Lower-Upper Symmetric Successive Over-Relaxation) scheme [START_REF] Yoon | An LU-SSOR Scheme for the Euler and Navier-Stokes Equations[END_REF] is employed to speed up convergence.
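As an illustration of the role of the fourth-order coefficient κ^(4), the sketch below applies a scalar Jameson-type fourth-difference background dissipation to a periodic 1D field. The actual elsA implementation is matrix-valued and multidimensional, so this is only a schematic; the function name and the spectral-radius scaling are illustrative assumptions.

```python
import numpy as np

def jst_fourth_difference_dissipation(u, kappa4, lam):
    """Scalar sketch of JST background dissipation on a periodic 1D grid.

    u      : conserved variable at cell centres
    kappa4 : fourth-order dissipation coefficient (e.g. 0.016)
    lam    : local spectral radius used to scale the dissipation
    """
    # fourth difference built from shifted copies (periodic wrap-around)
    d4 = (np.roll(u, -2) - 4.0 * np.roll(u, -1) + 6.0 * u
          - 4.0 * np.roll(u, 1) + np.roll(u, 2))
    return -kappa4 * lam * d4   # added to the residual to damp odd-even oscillations

u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
u += 0.05 * (-1.0) ** np.arange(64)            # seed a high-frequency (odd-even) mode
print(jst_fourth_difference_dissipation(u, 0.016, 1.0)[:4])
```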
The resulting flow solver is implicit and stable, and high values of the Courant-Friedrichs-Lewy (CFL) number can be reached. The CFL number is linearly ramped up from 1 to 10 over 100 iterations, in order to avoid divergence during the transient phase. The L2 norm-based residual of the total energy per unit volume, ℜ, is used to measure the convergence of the simulations.
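The linear CFL ramp and the L2 residual monitoring described above can be mimicked as follows. This is a schematic only; `update_solution` and the state vector are placeholders, not elsA calls.

```python
import numpy as np

def cfl_ramp(it, cfl_start=1.0, cfl_end=10.0, n_ramp=100):
    """Linearly ramp the CFL number over the first n_ramp iterations."""
    if it >= n_ramp:
        return cfl_end
    return cfl_start + (cfl_end - cfl_start) * it / n_ramp

def l2_residual(rho_e_new, rho_e_old):
    """L2-norm residual of the total energy per unit volume."""
    return np.sqrt(np.mean((rho_e_new - rho_e_old) ** 2))

# for it in range(n_iter):
#     cfl = cfl_ramp(it)
#     rho_e_new = update_solution(rho_e_old, cfl)   # placeholder for the implicit update
#     print(it, l2_residual(rho_e_new, rho_e_old))
#     rho_e_old = rho_e_new
```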
II.B. Computational Grid
The structured grids are generated using the ANSYS ICEM-CFD software, and all lengths are normalized by the wing chord c. The initial mesh is designed for industrial use and is universally applicable, without prior knowledge of the final solution. In particular, the vortex trajectory is not used to align the main grid axis and to refine the mesh in the zone covering the vortex core. This choice allows a wide variation of angle of attack, free-stream velocity and geometry.
The mesh results from an exhaustive grid-independence study concerning each construction parameter (documented in Ref. [START_REF] Joulain | Aerodynamic Simulations of Helicopter Main-Rotor Blade Tips[END_REF]). A C-topology surrounds the 2D NACA 0015 profile and an H-topology covers the downstream zone. In the spanwise direction, an H-topology is also chosen to match the square wing tip. The computational domain extends up to a distance of 200c in all directions from the tip in order to minimize the influence of boundary conditions. Thus the numerical wing has a span of 200c. The difference between experimental and numerical wing spans is not detrimental to the study of the vortex roll-up in the vicinity of the tip, as the root circulation is transferred into the trailing sheet.
The 2D mesh around the NACA 0015 profile is constructed as follows. In the streamwise direction, the leading-edge grid spacing is equal to 10^-4 c and the trailing-edge grid spacing is 2 × 10^-4 c. The blunt trailing edge (of thickness 3.15 × 10^-3 c) is finely discretized with a grid spacing of 8.3 × 10^-6 c at the corners. The mesh distribution around the profile follows geometric progressions starting from each specified grid spacing. The common ratio of a geometric progression is called the expansion ratio. The expansion ratio on the profile is lower than 1.06. In the wall-normal direction, the first grid spacing is set to 8.3 × 10^-6 c for a dimensionless wall distance (y+, see e.g. Ref. [START_REF] Tennekes | A First Course in Turbulence[END_REF]) around unity at the first computational nodes. At 12° incidence and Reynolds number 1.5 × 10^6, the computed y+ is maximal at the suction peak with a value of 1.5, and it does not exceed 0.5 on the lower side of the profile. The wall-normal expansion ratio is set to 1.15 and is not relaxed away from the profile in order to preserve the orthogonality of the cells. Downstream of the trailing edge, the first grid spacing is equal to the trailing-edge grid spacing and an expansion ratio of 1.1 is applied. The distribution on the outer boundary is manually adapted to maximize the orthogonality of the mesh at the wall. The resulting 2D mesh contains 7 × 10^5 points.
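The one-dimensional point distributions used above (a prescribed first spacing grown by a constant expansion ratio) can be generated in a few lines; the sketch below is a generic illustration, not the ICEM-CFD meshing script, and the example values are simply those quoted in the paragraph.

```python
import numpy as np

def geometric_distribution(first_spacing, ratio, total_length):
    """Grid coordinates starting at 0 with spacings in geometric progression."""
    x, dx = [0.0], first_spacing
    while x[-1] + dx < total_length:
        x.append(x[-1] + dx)
        dx *= ratio                      # expansion ratio = common ratio of the progression
    x.append(total_length)
    return np.array(x)

# Wall-normal distribution over one chord: first spacing 8.3e-6 c, expansion ratio 1.15
wall_normal = geometric_distribution(8.3e-6, 1.15, 1.0)
print(wall_normal.size, wall_normal[:4])
```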
The 3D mesh is constructed by stacking 2D meshes along the r-axis. The boundary layer developing around the square tip is simulated with a spanwise tip spacing of 5 × 10^-5 c. The maximum value of the dimensionless wall distance y+ reaches 8 and is located in the vicinity of the leading edge. A spanwise expansion ratio of 1.01 is imposed throughout the fluid domain, as well as from the tip to the root of the wing. In the fluid domain, the gap resulting from the 2D mesh stacking is filled with a half-butterfly mesh (O-grid mesh), which inevitably induces two lines of skewed cells near the fictitious leading edge. The wall-normal grid spacing and expansion ratio are used in the gap domain. The final mesh contains 4.4 × 10^7 points and is split into 137 blocks to enable parallel processing. The values of the grid parameters are summarized in Table 1. The wing surface is modeled as an adiabatic viscous wall (zero heat-flux). The outer boundary supporting the root of the wing is a symmetry plane, whereas a far-field condition is applied to all other external boundaries.
III. Experimental Database
McAlister and Takahashi's [START_REF] Mcalister | NACA 0015 Wing Pressure and Trailing Vortex Measurements[END_REF] wing consisted of a constant and untwisted NACA 0015 profile with a square tip. The chord (c) and semispan (R) were respectively 0.52 m (1.7 ft) and 1.7 m (5.6 ft). The width (w) and the height (h) of the test section were 3 m (10 ft) and 2.1 m (7 ft). The wing was mounted on a 6.4 cm (2.5 in) thick supporting end plate extending from floor to ceiling. In the lateral direction, the tip of the wing and the supporting end plate were positioned respectively at 1 m (3.3 ft) and 0.3 m (1 ft) from the wind tunnel walls. A cylindrical tube fairing was placed between the supporting end plate and the wall. In order to perform 2D measurements, a second end plate was positioned against the tip, preventing the development of a trailing vortex. The transition of the boundary layers was not forced.
Wing surface pressure (coefficient C p ) was measured at 320 stations, with a denser distribution near the leading edge and the tip. The measurement section nearest the tip was located at a distance of 2% of the chord c, i.e. at r/R = 0.994. Lift (C l ), pressure-drag and pitching moment coefficients were calculated from a trapezoidal-rule integration of the pressure distribution. The pitching moment coefficient is defined about the quarter-chord and is taken as positive when the angle of attack is increasing. Results are published in the Eiffel frame (with respect to the undisturbed flow). A two-components laser velocimeter is used to measure the vertical (V z ) and the axial (V x ) velocity profiles across the trailing vortex along the y-axis. Measurements are made at streamwise locations from 0.1c to 6c behind the wing.
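As a schematic of the trapezoidal-rule integration of the pressure distribution mentioned above, the sketch below integrates Cp over the upper and lower surfaces to obtain a sectional normal-force coefficient. Sign conventions and the decomposition into lift, pressure drag and pitching moment depend on the frame used and are simplified here; the example Cp distributions are purely illustrative.

```python
import numpy as np

def normal_force_coefficient(x_lower, cp_lower, x_upper, cp_upper):
    """Sectional normal-force coefficient from Cp(x/c) on both surfaces.

    cn = integral of (Cp_lower - Cp_upper) d(x/c), trapezoidal rule.
    """
    cn_lower = np.trapz(cp_lower, x_lower)
    cn_upper = np.trapz(cp_upper, x_upper)
    return cn_lower - cn_upper

# Crude illustrative Cp distributions on a chordwise grid
x = np.linspace(0.0, 1.0, 50)
print(normal_force_coefficient(x, 0.3 * (1.0 - x), x, -1.2 * (1.0 - x)))
```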
III.A. Blockage Correction
During the experiment, the reference velocity is obtained from a Pitot-static probe placed upstream in the test section. The experimental assembly generates a blockage resulting in a higher velocity in the region where the airfoil is located. The uncorrected reference velocity V ∞,u (and consequently the Mach number, Reynolds number and all aerodynamic coefficients) has to be corrected according to
V ∞ = (1 + ε)V ∞,u (1)
The value of the blockage factor ε, in 2D and 3D configurations, is estimated using a simple formula based on the area reduction induced by each object in the tunnel [START_REF] Rae | Low-Speed Wind Tunnel Testing[END_REF]:
ε = b × (object frontal area) / (test section area)    (2)
with b = 0.25 for the airfoil and b = 1 otherwise. The additional blockage resulting from the development of the tunnel boundary layers and the wing wake is not taken into account. According to the published geometry of the assembly, the total blockage factor, at an angle of attack α = 12°, is ε = 0.06 in the 2D configuration and ε = 0.04 in the 3D configuration. The blockage factor of the airfoil is weakly dependent on the angle of attack and represents 13% and 20% of the total blockage in 2D and 3D, respectively. End plates are the major contributors, with 70% of the total blockage in the 2D configuration (two end plates) and 54% in the 3D configuration. For the former, this leads to a reduction of the force and moment coefficients by about 12%. Velocity profiles reveal that far from the wing tip vortex (and the wing trailing sheet), the streamwise velocity tends neither to the uncorrected nor to the corrected reference velocity, but converges to a value between 1.1 V ∞,u (at xt/c = 0.1) and 1.07 V ∞,u (at xt/c = 4). The boundary layers developing on the supporting plate and the tunnel walls, as well as the wake, may induce an additional blockage and an increase of the local velocity downstream of the profile. However, the observed deviation of the axial velocity from the corrected reference value is rather erratic and does not increase with downstream distance. A calibration error of the laser velocimeter could be responsible for such discrepancies. Two methods can be used to correct the vertical and axial velocities: the first is to increase the blockage factor (up to 0.1 at xt/c = 0.1), and the second to apply the blockage factor calculated from Equation (2) and shift the corrected measurements to recover V ∞ far from the vortex. The two methods give similar results, and the first one is retained.
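A minimal numerical transcription of Equations (1) and (2) is given below. The individual frontal areas are rough illustrative assumptions (the paper only quotes the resulting totals ε ≈ 0.06 in 2D and ε ≈ 0.04 in 3D), so the sketch only shows how such a sum is evaluated.

```python
def blockage_factor(objects, test_section_area):
    """Equation (2): eps = sum of b_i * A_frontal_i / A_test_section."""
    return sum(b * area for b, area in objects) / test_section_area

def corrected_velocity(v_inf_u, eps):
    """Equation (1): V_inf = (1 + eps) * V_inf_u."""
    return (1.0 + eps) * v_inf_u

# Hypothetical frontal areas (m^2), for illustration only
objects_3d = [(0.25, 0.30),   # wing at incidence (b = 0.25)
              (1.00, 0.13),   # supporting end plate (b = 1)
              (1.00, 0.05)]   # tube fairing (b = 1)
eps = blockage_factor(objects_3d, 3.0 * 2.1)   # 3 m x 2.1 m test section
print(eps, corrected_velocity(1.0, eps))
```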
III.B. Lift Interference Correction
The walls of the closed test section confine the airflow and distort the streamlines around the lifting body.
In the 2D configuration, Lock's [START_REF] Lock | The Interference of a Wind Tunnel on a Symmetrical Body[END_REF] method for taking this effect into account consists in replacing the lifting airfoil by a single vortex at the center of pressure. The floor and ceiling of the tunnel are mathematically represented by an infinite series of image vortices, in order to satisfy the absence of normal flow at the walls. The angle of attack α and the lift coefficient C l of the profile are modified by the velocity induced by these image vortices. Assuming a profile centered in the test section, a short chord c relative to the tunnel height h and a small angle of attack, the corresponding corrections were given by Allen [START_REF] Allen | Wall Interference in a Two-Dimensional-Flow Wind Tunnel, with Consideration of the Effect of Compressibility[END_REF] as:
α = α_u + (180/π) [πc² / (96 β h²)] (C_l,u + 4 C_m,u)   and   C_l = [1 − (1/3) (πc / (4 β h))²] C_l,u,   with    (3)
β = √(1 − M_u²)
The low Mach numbers considered here allow the flow to be assumed incompressible (compressibility factor β = 1). Due to the small value of the ratio c/h (0.25), the lift interference correction in the 2D configuration is very small: the angle of attack is increased by 1% and the lift coefficient is decreased by 1.6%.
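A direct transcription of Equation (3), as reconstructed above, is sketched below; it is given only to make the order of magnitude of the 2D corrections explicit, and the uncorrected lift and moment coefficients used in the example are illustrative values, not the measured ones.

```python
import math

def wall_correction_2d(alpha_u_deg, cl_u, cm_u, c, h, mach_u=0.0):
    """2D wall-interference correction of Eq. (3); angles in degrees."""
    beta = math.sqrt(1.0 - mach_u ** 2)
    d_alpha = (180.0 / math.pi) * (math.pi * c ** 2 / (96.0 * beta * h ** 2)) * (cl_u + 4.0 * cm_u)
    cl = (1.0 - (1.0 / 3.0) * (math.pi * c / (4.0 * beta * h)) ** 2) * cl_u
    return alpha_u_deg + d_alpha, cl

# c = 0.52 m, h = 2.1 m (c/h = 0.25 as in the experiment); Cl_u and Cm_u assumed
print(wall_correction_2d(12.0, 1.2, 0.0, 0.52, 2.1, mach_u=0.124))
```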
In the 3D configuration, the half-model is treated as a complete wing in a tunnel with twice the cross section, with the side-wall as a symmetry plane. The lifting wing is replaced by a semi-infinite vortex pair trailing from the tips. The walls of the tunnel are now represented by a doubly-infinite series of vortex images. Assuming again a small wing compared to the dimensions of the tunnel, only the angle of attack is modified according to [START_REF] Garner | Subsonic Wind Tunnel Wall Corrections[END_REF]
α = α_u + δ_E (180/π) [1 + c δ_1 / (2 β h δ_0)] (A / A_t) C_l,u    (4)
where A is the planform area of the semi-span wing (Rc), A_t the cross-sectional area of the tunnel (wh), and where δ_E, δ_0 and δ_1 depend only on the geometric parameters 2w/h and R/w. Detailed expressions and charts can be found in Ref. [START_REF] Garner | Subsonic Wind Tunnel Wall Corrections[END_REF] and Ref. [START_REF] Glauert | Wind Tunnel Interference on Wings, Bodies and Airscrews[END_REF]. For the present experiment, with 2w/h = 2.86 and R/w = 0.56, these parameters are estimated as δ_E = 0.05, δ_0 = 0.19 and δ_1 = 0.42, respectively. Assuming an incompressible flow (β = 1), the correction in Equation (4) is directly proportional to the lift coefficient and increases the incidence by 4.3% at 12°. The trajectory of the tip vortex is also influenced by the walls of the tunnel. In free air, the vortex pair generated by a lifting wing is deflected downward by mutual induction. In the tunnel, the walls, or the equivalent doubly-infinite series of images, generate induced velocities acting on the tip vortex. The wing and trailing vortex are centered in the vertical direction in the tunnel, thus the interference concerning the horizontal displacement of the tip vortex is very small. On the contrary, in the lateral direction the tip of the wing and the trailing vortex are closer to the opposite side wall than to the supporting end plate. Thus the lift interference on the vertical displacement is quite substantial. Due to this up-wash the tip vortex is actually measured to move upward, which is not consistent with the behavior in free air.
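Similarly, Equation (4) can be evaluated with the geometric parameters quoted in the text; the uncorrected lift coefficient below is an assumed illustrative value, chosen simply to show that the correction is of the order of half a degree.

```python
import math

def lift_interference_3d(alpha_u_deg, cl_u, c, h, R, w,
                         delta_e=0.05, delta_0=0.19, delta_1=0.42, beta=1.0):
    """3D angle-of-attack correction of Eq. (4); returns corrected alpha in degrees."""
    A, A_t = R * c, w * h                      # semi-span wing area and tunnel cross-section
    d_alpha = (delta_e * (180.0 / math.pi)
               * (1.0 + c * delta_1 / (2.0 * beta * h * delta_0))
               * (A / A_t) * cl_u)
    return alpha_u_deg + d_alpha

# c = 0.52 m, h = 2.1 m, R = 1.7 m, w = 3 m; Cl_u ~ 1.0 assumed
print(lift_interference_3d(12.0, 1.0, 0.52, 2.1, 1.7, 3.0))   # ~12.5 deg
```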
IV. Comparison Between Simulation and Experiment
IV.A. Two-Dimensional Flow
The 2D test case of McAlister and Takahashi [START_REF] Mcalister | NACA 0015 Wing Pressure and Trailing Vortex Measurements[END_REF] at Re = 1.5 × 10^6 (M = 0.124) is calculated for angles of attack in the range 0° ≤ α ≤ 14°, in steps of 1°. The blockage correction increases the Reynolds number to 1.6 × 10^6 (M = 0.131). The lift interference correction is taken into account in the CFD simulations.
In order to study the convergence history, 100,000 iterations are performed at a (corrected) incidence of α = 12.13°. The residual ℜ drops by six orders of magnitude. After 12,000 iterations, the variation of lift and pressure drag is less than 1% of the converged values. 20,000 iterations are required to reach an error below 0.1%. Approximately three hours are needed to achieve 20,000 iterations on six cores of Intel Xeon E5-2697 v2 processors at 2.7 GHz, interconnected with an InfiniBand FDR network and with 48 GB of RAM.
The dependence of the lift coefficient on angle of attack is plotted in Fig. 1. Its variation along the span of the wing in the experiments appears as error bars on the symbols. In the case of a purely 2D flow, no spanwise variation is expected. The observed deviation depends weakly on the angle of attack in the range 0° ≤ α ≤ 10° and can be interpreted as the experimental measurement scatter. For α > 10°, this variation increases, indicating the additional influence of a 3D stall phenomenon.
Figure 1 reveals a significant difference between the uncorrected lift measurements and the simulation results. At α = 12°, this difference reaches 16%. The blockage correction greatly improves the agreement, the deviation at α = 12° being reduced to 3%, and even to 1% when applying the lift interference correction. The agreement is poor at α = 14°. Experimentally, the 2D assumption at this incidence is less justified, because the variation along the span is high and the flow is probably stalled. According to the review of 2D experimental databases published by McCroskey [START_REF] Mccroskey | A Critical Assessment of Wind Tunnel Results for the NACA 0012 Airfoil[END_REF], the measurement of maximum lift at low Mach number suffers from a high dispersion, partly due to wind tunnel wall effects. Concerning simulation aspects, the RANS equations have previously shown low efficiency in the simulation of 2D stalled flow [START_REF] Szydlowski | Simulation of Flow Around a NACA0015 Airfoil for Static and Dynamic Stall Configurations Using RANS and DES[END_REF][START_REF] Moulton | The Role of Transition Modeling in CFD Predictions of Static and Dynamic Stall[END_REF]. One of the main causes is the inadequacy of the boundary-layer transition model to simulate leading-edge laminar-bubble separation and expansion up to stall.
IV.B. Three-Dimensional Flow
The accuracy of the 3D corrections is now investigated by a comparison between the numerical simulation and the 3D experimental database of McAlister and Takahashi [START_REF] Mcalister | NACA 0015 Wing Pressure and Trailing Vortex Measurements[END_REF]. The square-wing tip test case is used at Reynolds number Re = 1.6 × 10^6, Mach number M = 0.131 and angle of attack α = 12°. The blockage correction (III.A) increases the Reynolds number to Re = 1.7 × 10^6 and the Mach number to M = 0.140. The lift interference correction (III.B) increases the angle of attack to α = 12.5°.
In order to assess the convergence of the 3D solution, a simulation with 100,000 iterations was performed, requiring approximately 50 hours of calculation on 150 cores. During the first 40,000 iterations, the residual ℜ drops by four orders of magnitude. It then stagnates until the end of the calculation. The same trend is observed for all other conservative variables, including the turbulent ones. The main cause of this stagnation is localized at the leading edge of the tip section. In this area, the geometric truncation produces sharp edges, responsible for flow separation and a destabilization of the solution. The residual, which is an integrated value over the whole domain, is polluted by this local fluctuation. Nevertheless, the low CFL number employed prevents the propagation of this instability beyond a few tenths of a percent of the chord from the tip leading edge.
The vortex generation process alters the pressure in the vicinity of the wing tip. The low-pressure footprint generated on the upper surface is responsible for additional lift, pressure drag and pitching moment peaks. In the vortex footprint, the three sectional coefficients converge very well, but at different speeds. After 20,000 iterations, the resulting C l reaches the converged value (obtained with the 100,000-iteration calculation) to within 0.25%. The convergence of the drag and pitching moment coefficients is slower, and the results are within 0.4% and 0.6% of the converged values, respectively. Far away from the tip, in the inner part of the wing, the convergence of the aerodynamic coefficients is identical to the purely 2D configuration presented in Section IV.A.
From the blade root to the blade tip, and in particular from r/R = 0.88 to the tip (Fig. 2), the corrected experimental lift coefficient is in good agreement with the calculated one. Similar agreement is obtained for the pressure drag and the pitching moment coefficients.
At r/R = 0.370 and r/R = 0.974, the numerical pressure coefficient profiles and the corrected measurements are in good agreement, which demonstrates the validity of the wall corrections. At r/R = 0.994 (Fig. 3), the three upper-side suction peaks are found at the same position along the chord. The two pressure peaks at the leading edge (upper side and lower side) are in good agreement. However, the simulation overestimates the amplitude of the first (at ζ/c ≈ 0.3) and second (at ζ/c ≈ 0.8) vortex footprints by 9% and 14%, respectively. The numerical results are always located between the uncorrected and the corrected measurements. The deviation on the lower side of the wing is surprising; it could be due to the application of an excessive blockage correction to the measurements. Considering the rough approximation of Eq. (2), the deviation remains acceptable.
Numerically, the "main vortex" center is identified, from the trailing edge up to 6 chords behind the wing by the maximum value of the Q-function [START_REF] Jeong | On the Identification of a Vortex[END_REF]. From the trailing edge up to 0.5c downstream, the simulated and measured main vortex trajectories are in good agreement.
Beyond 1 chord downstream of the trailing edge, the horizontal displacements slightly diverge and result in a 10% deviation at 6 chords. In the vicinity of the wing, the measured vertical displacement is not influenced by the wind tunnel walls and matches the simulation result. From 1 chord downstream, the experimental values are clearly influenced by the tunnel walls and are not comparable to a free-air simulation.
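The Q-function used to locate the vortex centre is, in its incompressible form, Q = ½(‖Ω‖² − ‖S‖²), i.e. the local excess of rotation over strain. A minimal sketch for a velocity field sampled on a uniform grid is given below; the field, the grid spacings and the plane indexing are placeholders and this is not the post-processing tool used in the paper.

```python
import numpy as np

def q_criterion(u, v, w, dx, dy, dz):
    """Q = 0.5 * (||Omega||^2 - ||S||^2) from velocity components on a uniform grid."""
    grads = [np.gradient(comp, dx, dy, dz) for comp in (u, v, w)]  # grads[i][j] = d u_i / d x_j
    Q = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            S = 0.5 * (grads[i][j] + grads[j][i])      # strain-rate tensor component
            O = 0.5 * (grads[i][j] - grads[j][i])      # rotation tensor component
            Q += 0.5 * (O ** 2 - S ** 2)
    return Q

# Vortex centre in a crossflow plane = location of the maximum of Q in that plane, e.g.:
# jk = np.unravel_index(np.argmax(Q[plane_index]), Q[plane_index].shape)
```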
Velocity profiles are extracted on a horizontal line passing through the main vortex center, in order to mimic the experimental procedure. The measured profiles are corrected with a suitable blockage factor (see Section III.A), in order to recover V x /V ∞ = 1 far from the vortex and the wing trailing sheet. At 0.1 chord downstream of the trailing edge (Fig. 4), the vertical velocity amplitude is lower than unity and the application of the correction has little influence. The two maxima observed on the vertical velocity profile (at y/c ≈ -0.09 and -0.02) are well located. The simulation overestimates the first maximum by 20%, compared to the corrected measurement. A precise comparison cannot be made concerning the maximum and minimum vertical velocities; nevertheless, the experimental values and the numerical result are consistent. In particular, the small double inflection seen at y/c ≈ 0.02 is present in both cases. At y/c ≈ 0.07, the simulation exhibits a local deviation of 25% with respect to the corrected measurement. At 0.2 chord downstream of the trailing edge (Fig. 5), the corrected measurements and the calculated axial velocity profile are in good agreement. The maximum amplitudes are identical, the large lowvelocity region between y/c ≈ -0.12 and -0.04 is similar, and the small low-velocity region at y/c ≈ 0.06 is visible. Contrary to the measurements, the numerical solution exhibits a second peak at y/c ≈ 0.04.
V. Conclusions
To conclude, concerning the 2D test case at Reynolds number 1.6 × 10^6 and Mach number 0.131, good agreement is found between the numerical solution and the corrected measurements. On the one hand, the numerical method is fast and gives accurate solutions, as long as the airfoil does not approach stall. On the other hand, the blockage correction and the lift interference correction given by Equations (2) and (3) are well suited and reveal the high quality of the database. Except near stall angles, where the 2D assumption becomes invalid, the corrected mean values of pressure, lift and pitching moment coefficients match the numerical results.
Concerning the 3D square-wing tip test case at Reynolds number Re = 1.6 × 10^6, Mach number M = 0.131 and angle of attack α = 12°, the corrected experimental measurements and the numerical solution are in good agreement regarding the mean aerodynamic coefficients on the wing and the mean vortex characteristics in the near field (up to 1 chord downstream of the trailing edge). The basic wall corrections proposed by McAlister and Takahashi [START_REF] Mcalister | NACA 0015 Wing Pressure and Trailing Vortex Measurements[END_REF] are found to be necessary and reliable. They produce results consistent with free-air simulations, even if the employed blockage factor appears slightly too high for the measurements close to the tip. Between the trailing edge and 0.5 chord behind the wing, the simulated vortex trajectory agrees well with the measurements, even in the vertical direction, where the influence of the tunnel walls is known to be strong. In order to recover the reference velocity far from the vortex and the wing trailing sheet, the vertical and axial velocity measurements are corrected with an ad hoc method. A good match is found between corrected experiment and simulation for the vortex roll-up, concerning the amplitude and location of the velocity peaks, as well as the medium- and high-frequency variations of the velocity profiles.
Deviations between the numerical solution and the measurements are probably related to the use of steady-state simulations. Nevertheless, this efficient numerical method allows a detailed physical analysis of the vortex roll-up around the square wing tip in order to understand the origin and development of the multiple variations observed in the pressure and velocity profiles. The same numerical method is used in Ref. [START_REF] Joulain | Aerodynamic Simulations of Helicopter Main-Rotor Blade Tips[END_REF] to compute different test cases of McAlister and Takahashi's database [START_REF] Mcalister | NACA 0015 Wing Pressure and Trailing Vortex Measurements[END_REF], including variations of tip shape, Reynolds number and angle of attack.
Figure 1. Lift coefficient as function of angle of attack for the 2D configuration at Re = 1.6 × 10^6 and M = 0.131. Numerical solution (--), uncorrected (•) and corrected ( ) experimental data from McAlister and Takahashi [2].
Figure 2. Sectional lift coefficient along the span in the vicinity of the tip. Numerical simulations (--), uncorrected (•) and corrected ( ) experimental data from McAlister and Takahashi [2].
Figure 3. Pressure coefficient along the chord at r/R = 0.994. Numerical simulation (--), uncorrected (•) and corrected ( ) experimental data from McAlister and Takahashi [2].
Figure 4. Vertical velocity profiles across the vortex core at 0.1 chord downstream of the trailing edge. Numerical simulation (--), uncorrected (•) and corrected ( ) experimental data from McAlister and Takahashi [2].
Figure 5.
Table 1: Grid parameters

Domain dimension                     (400c)^3
Leading-edge streamwise spacing      10^-4 c
Trailing-edge streamwise spacing     2 × 10^-4 c
Profile streamwise expansion ratio   1.06
Wall-normal first spacing            8.3 × 10^-6 c
Wall-normal expansion ratio          1.05
Downstream first spacing             2 × 10^-4 c
Downstream expansion ratio           1.1
Spanwise tip spacing                 5 × 10^-5 c
Spanwise expansion ratio             1.01 | 32,601 | [
"8367"
] | [
"196526",
"266449",
"266449",
"266449",
"196526"
] |
01770372 | en | [
"shs"
] | 2024/03/05 22:32:16 | 2017 | https://shs.hal.science/halshs-01770372/file/Complex_is_beautiful.pdf | Sandrine Maljean-Dubois
email: [email protected]
Matthieu Wemaëre
email: [email protected]
"Complex is beautiful". What role for the 2015 Paris Agreement in making the Effective Links within the Climate Regime Complex?
The 1992 United Nations Framework Convention on Climate Change (UNFCCC) 3 provides the foundation of the international regime to fight against climate change. But it is widely acknowledged that it is neither efficient nor sufficient to tackle this challenge. As from 2000, a proliferation of policy initiatives outside this international climate regime has progressively questioned the centrality of the international governance on climate change as laid down by the UNFCCC.
Climate talks under the auspices of the UNFCCC for the adoption of a new accord in Paris in December 2015 at the 21st Conference of the Parties provided a unique opportunity to rethink the role and structure of the international climate regime within its own boundaries. In order to increase the level of ambition of climate action as soon as possible and in the future, it was of crucial importance to design the Paris Accord in a way that it could be complemented and enhanced through synergies with other initiatives, which may be developed in other fora of international cooperation.
This article analyses the role the Paris Agreement could play in achieving the defragmentation of the international governance on climate change, which could contribute to enhancing new cooperation dynamics, and to building a (more) consistent global climate law.
Introduction
Climate change is a highly complex policy challenge. Its causes cut across all economic sectors. Solutions require many different kinds of policies in energy, infrastructure, finance and innovation, to name just a few. At the international level, the United Nations Framework Convention on Climate Change (UNFCCC) is widely seen as the central pillar of a broader 'regime complex', encompassing a number of formal and informal international policy processes.
Negotiations on a new climate agreement should be concluded under the UNFCCC by the end of 2015 in Paris. 4 Many countries expected these negotiations to produce a durable, legal agreement which can structure climate cooperation in the long term. But, given the difficulty of reaching a consensus, the Durban Platform gave birth to an agreement a minima, in which the collective effort is the result of the aggregation of "nationally determined" contributions. The Parties themselves establish their contribution's level of ambition, at the national level, keeping in mind the collective objective of holding global warming well under 2°C. The COP will likely provide further guidance to States as to how they determine their contributions 5, but until now there has been no burden sharing of the implementation of this collective objective, as had been the case under the Kyoto Protocol, and in particular between the fifteen countries of the European Union at the time, which had allocated among themselves a common objective of reducing their emissions by 8% 6. The temperature objective laid down in the Agreement is, however, completely unrealistic in view of current emissions trajectories. This is established on an annual basis by the United Nations Environment Programme in its report entitled The Emissions Gap, released before each COP; this report analyses the gap in ambition up to 2020 7. Several studies have also analysed the aggregate effect of States' national contributions prior to COP 21, including a study commissioned under the UNFCCC for 31 October 2014 8. It concluded that, taken together, these contributions do not take us towards 2°C, let alone 1.5°C, but rather, according to estimates, towards 3 or 3.5°C. This is undoubtedly progress compared to the 4 or 5°C expected under so-called "business-as-usual" scenarios. Even though today 191 Parties representing 98% of global emissions have submitted their national contribution, we are still very far from the objective set out in the Paris Agreement and, beyond that, from the safe operating space of our planet 9.
Thus, to improve the level of ambition now and in the future, it is fundamental that the Paris Agreement can be supplemented, or even fuelled, by other initiatives, actions and policies coming from other fora of international cooperation. This raises the question: what can the new agreement do in order to better forge effective synergies between the different elements of the climate regime, and to manage potential frictions? In a fragmented legal landscape (2), the Paris Agreement provides some new leverage tools for achieving the defragmentation of international climate governance (3). 4 UNFCCC, Decision 1/CP.17 (2011), Establishment of an Ad Hoc Working Group on the Durban Platform for Enhanced Action.
5 CCNUCC, Secretariat, Parties' views regarding further guidance in relation to the mitigation section of decision 1/CP.21, FCCC/APA/2016/INF.1, 7 October 2016, add.1, 18 October 2016. 6 This burden sharing was carried out by applying a basket of criteria established by Utrecht University, based on population, growth and energy efficiency as well as opportunity or more political considerations. G. PHYLIPSEN, J. BODE, K. BLOK, H. MERKUS, B. METZ. A triptych sectoral approach to burden differentiation; GHG emissions in the European bubble, Energy Policy, n°26, pp. 929-943. 1998
From a fragmented governance landscape to regime complexes
The issue of fragmentation has gained more weight in recent times, as researchers and policy makers realise the complexity of climate change, and search for effective solutions. We can highlight three central reasons why the climate regime has long displayed this degree of fragmentation.
First, as mentioned above, climate change is a highly complex, multi-sector, multi-scale problem. Addressing it effectively requires coordinated policy responses in many domains. Ongoing work by the OECD, for example, assesses the multiple policy responses required at the national level across different sectoral policy processes: energy policy, trade and competition policy, innovation policy, infrastructure planning, and financial regulation. The same principle holds at the international level. An effective response to climate change will inevitably require a complex, multifaceted response, combining the expertise and mandates of different policy processes.
Secondly, there has also been in the past a divergence of views between countries and researchers regarding appropriate processes. This has been particularly manifest in the debate about multilateral versus mini-lateral approaches to international coordination. Today it is probably fair to say that this conflict is perhaps less fundamental, and that a majority of experts and policy makers see multilateral and mini-lateral approaches as complementary. This can be seen in the so-called Workstream 2 process under the UNFCCC and the Lima-Paris Action Agenda, which aims to catalyse a range of International Cooperative Initiatives. 10 It is therefore being increasingly recognized that the UNFCCC is a core aspect of the global climate regime, but insufficient by itself.
Thirdly, international law is, by definition, a fragmented regime. Fragmentation arises logically from the principle of 'autonomy of treaties', according to which every treaty is independent of all other treaties. The fragmentation of the international legal order is even increasing, due to the twofold movement of expansion and diversification of international law. [START_REF]BIERMANN F. Planetary boundaries and earth system governance: exploring the links[END_REF] The situation is even worse in international environmental law, without a global environmental organization supervising or even unifying the hundreds of existing autonomous institutional arrangements. 12 In international law, "normative conflict is endemic to international law", as stated in a report of the International Law Commission. 13 The UNFCCC forms, of itself, already a fairly complex legal regime. The Convention, the Kyoto Protocol, the Cancun Accords and now the Paris Agreement all represent significant albeit distinct developments of the climate regime. Nonetheless, overall the UNFCCC regime represents a relatively cohesive whole. In the past, it can be argued that the sharing of the workload between the UNFCCC and institutions like the International Maritime Organization, the International Civil Aviation Organization, the World Trade Organization, or the Montreal Protocol or the Convention on Biological Diversity has not been fully synergistic.
But it should be noted that fragmentation is not necessarily prejudicial. What matters is the effectiveness of the policy response, and in the past it seems relatively clear that the lack of better coordination has hindered the policy response. Regarding its relationships with other regimes or policy spaces, the climate regime has proven to be naturally closed and loosely interacting. Indeed, Parties have been quite indifferent, or even hermetic, to what occurred elsewhere, regarding the consequences of their action or inaction on other environmental regimes and initiatives, as well as the consequences of other regimes and initiatives on their action within the climate regime. Despite the openness of the 'constitutional' framework, the latter has in general functioned in a kind of "clinical isolation" 14 from other parts of international law. For example, measures adopted to implement the UNFCCC and its Kyoto Protocol have shown that Parties have so far had little consideration of biodiversity conservation issues, with very few exceptions in relation to forest and land use management, but always as ancillary consequences of climate mitigation or adaptation objectives. 15 International climate change governance consists of a 'regime complex' rather than just a single regime 16 of norms and institutions under the 1992 UN Climate Change Convention, the Kyoto Protocol and now the Paris Agreement. Raustiala and Victor identified a regime complex as "an array of partially overlapping and non-hierarchical institutions governing a particular issue-area". 17 Figure 1 below is adapted from the one that Keohane and Victor proposed for the international regime complex for climate change. As well as the UNFCCC, the complex includes numerous public and private institutions and initiatives which operate at the international, regional, bilateral, national and even subnational level, each with its own focus (for example, expertise with the IPCC, finance, technology, business…) and involving varying levels of commitment. The interactions between these different spheres vary both in their strength, and in whether they are intended or accidental. Orsini, Morin and Young later gave regime complexes the more precise, practical and operational meaning of "a network of three or more international regimes that relate to a common subject matter; exhibit overlapping membership; and generate substantive, normative, or operative interactions recognized as potentially problematic whether or not they are essential in identifying regime complexes and analyzing their effects". 18 After years of abundant scholarship on the fragmentation of international law, this concept of regime complex has the advantage of highlighting that which links the different regimes together as much as that which divides them. Hence it offers an interesting analytical framework for understanding accurately a complex reality by focusing on the flow of both norms and actors which exist between the different legal and institutional realms. It therefore takes us further forward than the simple, well-established and relatively unhelpful observation that regime fragmentation exists, and towards a greater understanding of the relationships and interactions between these regimes.
From this perspective, we propose to view the Paris Accord as the bedrock of the regime complex for climate change. Otherwise the Accord and the whole regime complex are likely to be ineffective. To this end, the Accord should aim to fulfil two different but complementary objectives:
-on the one hand, a catalysing role, to create a dynamic and contribute to raising the level of ambition in the other regimes that form part of the regime complex;
-on the other hand, a leading role, in order to orchestrate climate governance, to strengthen coherence and ensure complementarity, to ensure work is covered correctly, and to avoid duplication of effort, etc.
The Paris Agreement as the bedrock of the 'regime complex' on climate change
The COP Decision 1/CP.21 and the Paris Agreement it adopts at COP21 provide several indications of a greater openness to external challenges than in the past. They acknowledge the need for a global approach to such challenges, which goes beyond the forum of the meetings of the Contracting Parties to the Paris Agreement, the climate COPs. Depending on the subject, bridges have been built in two directions: either the Agreement takes into consideration other objectives or requirements, or it invites Parties or intergovernmental organisations to better integrate the climate change dimension. In its preamble, the Agreement refers to the need to ensure "the integrity of all ecosystems, including oceans, and the protection of biodiversity, recognized by some cultures as Mother Earth", while recognizing the importance of "the conservation and enhancement, as appropriate, of sinks and reservoirs of the greenhouse gases referred to in the Convention".
Regarding forests and sinks, it underlines "the importance of incentivizing, as appropriate, non-carbon benefits associated with such approaches" (art. 5). Such references could improve the coverage of biodiversity protection in the climate change framework, which has so far been rather hermetic in that respect.
The preamble of the COP Decision 1/CP.21 also makes a reference to the Sustainable Development Goals, in particular to SDG 13 on climate change, as well as to the Addis Ababa Action Agenda adopted at the United Nations Third International Conference on Financing for Development, and even to the Sendai Framework for Disaster Risk Reduction. Indeed, it is crucial to create greater consistency among these global objectives at the international level. The reference to such "meta-norms" could play a key role in strengthening the regime complex on international climate change and linking efforts under the UNFCCC with others in the international environmental and development law arena. It assists in linking the work under the Convention with relevant regimes, environmental and other, to the benefit of each of those regimes. Crucially, it will guide Parties in their implementation of the Paris Accord so that it is consistent with those other regimes, thus enhancing international legal coherence and reducing fragmentation. The synergistic role of the Aichi Targets in the field of biodiversity protection has already been pointed out. 19 Developing commonalities between regimes through common principles or strategic objectives could help to prevent conflict between multilateral environmental agreements.
A link is also established between climate change and human rights, not in the operative part of the Agreement as initially proposed by some Parties, but in the preamble of the Agreement, which emphasizes that "climate change is a common concern of humankind, Parties should, when taking action to address climate change, respect, promote and consider their respective obligations on human rights, the right to health, the rights of indigenous peoples, local communities, migrants, children, persons with disabilities and people in vulnerable situations and the right to development, as well as gender equality, empowerment of women and intergenerational equity". 20 However, this relates to the respect of the "respective obligations" of States (existing or to be adopted in another context): with such a qualifier, some wanted to make sure that the Paris Agreement would not create new obligations in this area.
On indigenous peoples, it must be noted that the Paris Agreement also mentions the need to take into account traditional knowledge, knowledge of indigenous peoples and local knowledge systems, with a view to integrating adaptation into relevant policies and actions (art. 7, §5). It is noteworthy that indigenous peoples have welcomed the recognition of the ambitious objective to pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels in Article 2 of the Paris Agreement. However, stating that climate change is a "common concern of humankind" is not new, insofar as it had already been affirmed in the preamble of the UNFCCC since 1992.
On finance, there has also been an effort towards greater consistency: the COP Decision "Invites all relevant United Nations agencies and international, regional and national financial institutions to provide information to Parties through the secretariat on how their development assistance and climate finance programmes incorporate climate-proofing and climate resilience measures" (§44). In doing so, the Agreement aspires to become the core framework and a catalyst of strengthened international cooperation on climate change, within and outside the UNFCCC. 21
By contrast, no reference is made in the Paris Agreement to international trade law or agreements. Initially proposed as an option in the Geneva text, such a reference was withdrawn and not tabled again in the final rounds of talks before COP21. The status quo should therefore remain, with a relative deference of the international climate regime towards international trade law. 22 Bearing in mind how much international trade law, particularly intellectual property rights, can affect its implementation, the Paris Agreement could at least have integrated a clause reflecting the mutual supportiveness principle. This principle requires consideration of whether there are areas of conflict, given that the Parties are required to interpret and apply the rules emanating from the two different legal regimes in a way that is mutually compatible. It is therefore a principle that enables the different regimes to be linked and coordinated while avoiding a hierarchy. According to the International Law Commission, "The assumption is that conflicts may and should be resolved between the treaty partners as they arise and with a view to mutual accommodation". 23 A reference to mutual supportiveness in the Paris Agreement would have been helpful to the world of international business by providing a more balanced approach to the relationship between climate change and business than is currently the case. In terms of its environmental and commercial objectives, the Paris Agreement should not be subservient to international commercial law. More generally, it would have been beneficial to promote the principle of mutual supportiveness in the Paris Agreement with a fairly general formulation that takes the principle out of its usual application to international trade law.
19 G. FUTHAZAR. The diffusion of the Strategic Plan for Biodiversity and its Aichi Targets within the biodiversity cluster: An illustration of current trends in the global governance of biodiversity and ecosystems. YIEL, Vol. 25, No. 1, pp. 133-166. 2015.
20 A similar formula can be found in the preamble of the Paris Decision.
The Paris Accord could have been inspired by the relatively balanced formulation contained in the Legal principles adopted in 2014 by the International Law Association. Article 10 entitled "Inter-Relationships" is expressed thus:
"1. In order to effectively address climate change and its adverse effects, States shall formulate, elaborate and implement international law relating to climate change in a mutually supportive manner with other relevant international law. 2. States in cooperation with relevant international organizations shall ensure that consideration of climate mitigation and adaptation will be integrated into their law, policies and actions at all relevant levels, as laid out in Article 3.
3. According to Article 8, States shall cooperate with each other to implement the interrelationship principle in all areas of international law, whenever necessary (…)" 24 .
Given that it represents a compromise, mutual supportiveness should have been a politically acceptable option, and it would have benefited the other areas of international environmental cooperation (in particular ozone layer protection and biological diversity, where there has already been some friction) and, more widely, the trade, investment, law of the sea and human rights domains. The clause proposed by the ILA is particularly interesting because it is balanced: it aims to take other areas of law into account in the drafting and implementation of climate law (§1), it provides for a principle of integration of climate requirements into other policy areas at all relevant levels (§2), and it provides, if necessary, for cooperation on the implementation of this principle of inter-normativity (§3). In this way, such a clause could enable secondary legislation (for example, COP decisions relating to the Paris Agreement) to guide the implementation of the Agreement in a direction which takes account of other normative areas in an evolutionary way. Such a clause could also inspire other normative areas to take better account of the relevant targets of the Paris Agreement, in a "win-win" spirit.
To take the example of biodiversity, the procedures and modalities for the implementation of "Intended Nationally Determined Contributions" 25 could be used to authorise the COP to recommend that Parties take into account the need to protect biodiversity, with reference to decisions under the Convention on Biological Diversity (CBD) relating to certain objectives, means or indicators. Combined with the principle of "no backsliding", such an initiative could produce a domino effect on biodiversity conservation (whose positive effects on climate change mitigation and adaptation are already well known). Clearly, the level of resistance that can be expected from some Parties to such an initiative should not be underestimated. This resistance originates from a fear that discussions within the CBD could be "contaminated" through the "importation" of difficulties and structural issues from the UNFCCC. It also stems from a fear of losing sovereignty by "importing" concepts and rules that emanate from the UNFCCC, bearing in mind that some important Parties, such as the US, are not parties to the CBD. It may also be worth thinking about the need for better coordinated action with the Montreal Protocol, for example on the elimination of HFCs. This illustrates particularly well the need for coordination between two international regimes, in this case the climate regime and the ozone regime. The increasing use of HFCs is a climate issue, complicated by the international policy aiming to protect the ozone layer; it lies exactly at the interface of the two regimes. There is not a single word on this in the Paris Agreement, and the issue was given only marginal consideration on the road to Paris, as an opportunity to mitigate climate change. However, the Paris Agreement created the momentum which favoured the adoption of an agreement at the Kigali Meeting of the Parties to the Montreal Protocol in October 201626. The Kigali Amendment will guarantee better consistency of international action in favour of climate and ozone.
By promoting voluntary cooperative approaches in its Article 6, the Paris Agreement also offers multiple benefits, both by enhancing the implementation of the climate and other international regimes, and by inspiring and providing good examples to other Parties, which could in turn accelerate the global effort under the Convention. Given the difficulty of making substantial progress involving all Parties, enhanced cooperative action could be a key way of strengthening not just the implementation of work under the Convention, but also work under other regimes, by promoting synergies with the Convention and thus reducing fragmentation in the realm of international law.
Conclusion
In recent years, other important environmental issues such as forests, biodiversity, ozone, marine acidification, and so on, have in most cases been overshadowed by the issue of human-induced climate change, yet they are equally important, if not fundamental, to the ongoing future of human populations and even planetary life. Indeed, planetary boundaries are closely connected, and this should be duly reflected in policies and legal tools [START_REF] Biermann | Planetary boundaries and earth system governance: exploring the links[END_REF] . As shown by the lack of cross-references in decisions taken in the context of the international climate change regime, the UNFCCC has sometimes behaved like a closed convention, hermetic to external concerns. According to some authors, "the connection with issues other than its own has been seen as an unwanted distraction to achieving its narrowly defined and interpreted object and purpose". [START_REF] Chambers | Interlinkages and the Effectiveness of Multilateral Environmental Agreements[END_REF] COP21 provided a unique opportunity to take a decisive step towards a more open approach to other issues and regimes. The Paris Agreement shows timid progress in this respect, and States now have to assume their responsibilities. Who can claim today, in good faith, that the international climate regime could solve the issue of climate change on its own? Indeed, the natural fragmentation of international law allows States to instrumentalize one policy space against another in order to protect their domestic interests, ultimately undermining overall effectiveness. [START_REF] Doelle | Re-thinking the Role of the UN Climate Regime[END_REF] Here, the "schism of reality" that Amy Dahan and Stefan Aykut have pointed out seems particularly relevant and problematic. It results from a growing gap between, on the one hand, a reality of the world based on the globalization of markets and the overexploitation of fossil fuels, in which States remain prisoners of fierce competition and cling as ever to their national sovereignty, and, on the other hand, a negotiating forum supported by international governance arrangements which gives the impression that it can be a central regulator capable of allocating emission rights, but which has less and less grip on that reality. [START_REF] Dahan Dalmedico | Gouverner le climat, 20 ans de négociations internationales[END_REF] In the coming years, scholars will definitely have to pay more attention both to the interplay between international regimes 31 and to the management of regime complexity at the national level. [START_REF] Velázquez Gomar | Regime Complexes and National Policy Coherence: Experiences in the Biodiversity Cluster[END_REF]
FIGURE 1: The regime complex for managing climate change. Adapted from R. O. Keohane and D. G. Victor, "The regime complex for climate change", Perspectives on Politics, 2011, vol. 9, pp. 7-23.
18 A. ORSINI, J.-F. MORIN, O. YOUNG. Regime complexes: A buzz, a boom, or a boost for global governance?, Global Governance: A Review of Multilateralism and International Organizations, vol. 19(1), p. 29. 2013.
. FCCC/CP/2016/2, May 2016; W. STEFFEN et al. Planetary Boundaries: Guiding human development on a changing planet, Science, Vol. 347, Issue 6223, p. 1. 13 Feb 2015.
7 See UNEP, The Emissions Gap Report 2015, Summary for Policymakers, http://uneplive.unep.org/media/docs/theme/13/EGR2015ESEnglishEmbargoed.pdf
8 Synthesis report on the aggregate effect of the intended nationally determined contributions, Note by the secretariat, FCCC/CP/2015/7, 30 October 2015, 66 p. The Paris Decision takes note thereof (§16).
9 UNEP, UNFCCC Secretariat, Aggregate effect of the intended nationally determined contributions: an update, Synthesis report by the secretariat,
21 See F. W. ABBOTT, P. GENSCHEL, S. SNIDAL and B. ZANGL. International Organizations as Orchestrators, Cambridge: Cambridge University Press, 2015.
22 See MALJEAN-DUBOIS Sandrine, WEMAËRE Matthieu. L'accord à conclure à Paris en décembre 2015 : une opportunité pour 'dé'fragmenter la gouvernance internationale du climat ? Revue juridique de l'environnement, 4, p. 657. 2015.
23 ILC, Fragmentation of International Law: Difficulties arising from the Diversification and Expansion of International Law, Report of the Study Group of the International Law Commission finalized by Martti Koskenniemi, A/CN.4/L.682, 13 April 2006, §276.
UNFCCC, Decision 1/CP.20 (2014), Lima call for climate action.
ILC, Fragmentation of International Law: Difficulties arising from the Diversification and Expansion of International Law, Report of the Study Group of the International Law Commission, A/CN.4/L.702, UNO. 28 July 2006.
CHURCHILL Robin R., ULFSTEIN Geir. Autonomous institutional arrangements in multilateral environmental agreements: a little-noticed phenomenon in international law, American Journal of International Law, October, p. 623. 2000.
ILC, Fragmentation of International Law: Difficulties arising from the Diversification and Expansion of International Law, Report of the Study Group of the International Law Commission finalized by Martti Koskenniemi, A/CN.4/L.682, §486. 13 April 2006.
WTO Appellate Body Report, United States - Standards for Reformulated and Conventional Gasoline, WT/DS2/AB/R, 29.04.1996, p. 19.
MALJEAN-DUBOIS Sandrine, WEMAËRE Matthieu. Climate Change and Biodiversity. In: Encyclopedia of Environmental Law - Biodiversity and Nature Protection Law, Edward Elgar Publishing, eds. Jona Razzaque and Elisa Morgera, 2016. pp. 295-308.
On regimes, KRASNER S. Structural causes and regime consequences: regimes as intervening variables, International Organization, vol. 36, n°2, pp. 1-21. 1982.
RAUSTIALA K., VICTOR D. The regime complex for plant genetic resources, International Organization, vol. 58, pp. 277-309. 2004.
ILA, Legal Principles Relating to Climate Change Draft Articles (2014), http://www.ilahq.org/en/committees/index.cfm/cid/1029.
25 UNFCCC, Decision 1/CP.19 (2013), Further advancing the Durban Platform.
See http://www.unep.org/africa/news/kigali-amendment-montreal-protocol-another-global-commitment-stopclimate-change.
Acknowledgements. This work has been funded by the Agence nationale pour la recherche française within the project CIRCULEX <ANR-12-GLOB-0001-03 CIRCULEX> and by the Iddri (Paris). The authors would like to warmly thank Thomas Spencer, from the Iddri, for his advice and encouragements and Pierre Mazzega, GET Géosciences Environnement Toulouse UMR5563, CNRS / IRD / Université de Toulouse, for his useful insights and comments.
5. | 31,111 | [
"5576"
] | [
"199942",
"239063",
"199942"
] |
01770431 | en | [
"info"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01770431/file/IEEE%20T%20EM%20Ivanov%20et%20al.pdf | Prof. Dr. habil Dmitry Ivanov
email: [email protected]
Alexander Pavlov
Alexandre Dolgui
email: [email protected]
Boris Sokolov
Hybrid fuzzy-probabilistic
Keywords: supply chain engineering, risk management, resilience, structure dynamics, ripple effect, graph theory, fuzzy theory
Introduction
Supply chain engineering under uncertainty is a major issue in production systems and logistics [START_REF] Dolgui | Supply chain engineering: useful methods and techniques[END_REF]. One of the most challenging problems is to increase supply chain resilience, especially in the face of the ripple effect.
The ripple effect, also known as domino effect or snowball effect [START_REF] Wierczek | The impact of supply chain integration on the "snowball effect" in the transmission of disruptions: An empirical evaluation of the model[END_REF]) describes the disruption propagation in the supply chain (SC), the impact of a disruption on SC performance and the disruption-based scope of changes in the SC design (SCD) structures [START_REF] Liberatore | Hedging against disruptions with ripple effects in location analysis[END_REF], Ivanov et al. 2014a,b, Sokolov et al. 2016). Managing the ripple effect is closely related to designing resilient SCs.
SC resilience has become one of the key research topics over the last decade [START_REF] Gunasekaran | Supply chain resilience: role of complexities and strategies[END_REF]. SC resilience is understood in the literature as the ability to maintain, execute and recover (adapt) the planned process states along with achievement of the planned (or adapted, but still acceptable) performance [START_REF] Sheffi | A supply chain view of the resilient enterprise[END_REF], Ivanov and Sokolov 2013, Kamalahmadi and Mellat-Parast 2016a,b). Recent studies underline the crucial role of resilience assessment taking disruption risks into account (e.g., Ivanov et al. 2013[START_REF] Munoz | On the quantification of operational supply chain resilience[END_REF][START_REF] Das | Risk readiness and resiliency planning for a supply chain[END_REF], Ivanov et al. 2016).
The existing studies have mostly focused on the impact of disruptions on SC resilience using deterministic approaches and the reliability theory, without considering structure reconfiguration [START_REF] Wagner | Assessing the vulnerability of supply chains using graph theory[END_REF][START_REF] Soni | Measuring supply chain resilience using a deterministic modeling approach[END_REF][START_REF] Kim | Supply network disruption and resilience: A network structural perspective[END_REF]. Recently, the necessity of including reconfiguration in SC resilience analysis was highlighted by [START_REF] Simchi-Levi | From superstorms to factory fires: Managing unpredictable supply chain disruptions[END_REF], [START_REF] Xu | Predicted supply chain resilience based on structural evolution against random supply disruptions[END_REF], [START_REF] Snyder | OR/MS Models for Supply Chain Disruptions: A Review[END_REF]. However, to the best of our knowledge, there is no published graph-theoretical research on SCD resilience analysis that considers both disruptions and recovery. In addition, the existing studies (e.g., [START_REF] Kim | Supply network disruption and resilience: A network structural perspective[END_REF]) consider monotonous SC structures (i.e., failures of structure elements lead to a reliability decrease, and recovery of structure elements leads to a reliability increase). In reality, SC structures can be non-monotonous, i.e., factors such as competitor behavior can also influence SC resilience (e.g., the failure of a competitor can lead to a reliability increase).
In this study, the following problem is considered. The object of investigation is a multi-stage SC, different elements of which can be disrupted. The problem is to identify the impact of disruptions at different SC locations on SC resilience and to identify the critical SC nodes and arcs whose disruption results in a loss of SC resilience.
The goal of this study is to develop a method for quantitative analysis of SCD resilience with disruption propagation and recovery considerations for both monotonous and non-monotonous structures. We are also interested in developing an SCD resilience index that can be used by SC managers to compare different SCDs regarding the resilience.
The remainder of this paper is organized as follows. Section 2 provides a state-of-the-art overview. In Section 3, the research methodology is presented. Section 4 describes the fuzzy-probabilistic approach and the underlying concept of the graph genome. In Section 5, the modeling procedure for the SC structure dynamics with reconfiguration is described. In Section 6, the SCD resilience index is described. Section 7 is devoted to the experimental part. In Section 8, managerial insights are presented. The paper concludes by summarizing the most important insights from the research and delineating future research avenues.
Literature review
The ability to maintain and recover (adapt) the planned execution along with achievement of the planned (or adapted, but still acceptable) performance is related by most authors to SC resilience [START_REF] Bakshi | Co-opetition and investment for supply-chain resilience[END_REF], Ivanov and Sokolov 2013[START_REF] Xu | Predicted supply chain resilience based on structural evolution against random supply disruptions[END_REF][START_REF] Ambulkar | Firm's resilience to supply chain disruptions: Scale development and empirical examination[END_REF]. It can be observed in the existing studies that two groups of problem statements for SC resilience analysis are generally considered (see Fig. 1). We limit the literature analysis to graph-theoretical approaches, taking into account the focus of this research. Stochastic Petri nets have been applied to analyse disruption propagation through the SC and to evaluate the performance impact of the disruptions [START_REF] Wu | Methodology for supply chain disruption analysis[END_REF]. [START_REF] Wang | Evaluation and analysis of logistic network resilience with application to aircraft servicing[END_REF] and [START_REF] Soni | Measuring supply chain resilience using a deterministic modeling approach[END_REF] develop resilience metrics with the use of graph theory. [START_REF] Wagner | Assessing the vulnerability of supply chains using graph theory[END_REF] propose a method of quantifying risk using the permanent of an adjacency matrix based on graph theory. [START_REF] Nair | Supply network topology and robustness against disruptions -An investigation using a multi-agent model[END_REF] analyse the correlation between disruptions and the structural features of the network. [START_REF] Hsu | Reliability evaluation and adjustment of supply chain network design with demand fluctuations[END_REF] develop a method to evaluate the reliability of SC performance under demand fluctuations. [START_REF] Schoenlein | Measurement and optimization of robust stability of multiclass queuing networks: Applications in dynamic supply chains[END_REF] define robustness as the ability of a multiclass queuing network to remain stable if the expected values of the inter-arrival and service time distributions are subject to uncertain shifts. Based on fluid network analysis, they present a measure to quantify robustness, called the stability radius. Ivanov and Sokolov (2013) describe SC stability, robustness, and resilience as integrated control problems within the framework of automatic control theory. [START_REF] Zobel | Characterizing multi-event disaster resilience[END_REF] quantify the resilience of supply chains for multiple disruptive events. [START_REF] Xu | Predicted supply chain resilience based on structural evolution against random supply disruptions[END_REF] develop a quantitative model for the analysis of predicted SC resilience based on structural evolution against random supply disruptions. The study by [START_REF] Lin | Network reliability with deteriorating product and production capacity through a multi-state delivery network[END_REF] concentrates on the reliability assessment of a multi-state SC with multiple suppliers as the probability of satisfying the market demand within the budget and production capacity limitations.
They develop an algorithm in terms of minimal paths to evaluate the network reliability, along with a numerical example regarding auto glass. Garvey et al. (2015) build upon minimal path analysis and suggest using Bayesian networks to analyse risk propagation in the SC. This is an interesting research avenue. Bayesian networks have also been applied to domino effect analysis in chemical industry infrastructures [START_REF] Khakzad | Application of dynamic Bayesian network to risk analysis of domino effects in chemical infrastructures[END_REF].
Another important research stream has been the understanding that different kinds of SCD structures with a similar number of nodes and arcs may differ significantly regarding resilience. [START_REF] Bode | Structural drivers of upstream supply chain complexity and the frequency of supply chain disruptions[END_REF] empirically analyse structural drivers of upstream SC complexity and the frequency of SC disruptions, and identify dependencies between SC structural complexity and disruption occurrence. [START_REF] Kim | Supply network disruption and resilience: A network structural perspective[END_REF] apply graph theory to analyse the impact of SC structural characteristics on resilience; this study reveals correlations between network structure and disruption impacts. [START_REF] Brandon-Jones | The impact of supply base complexity on disruptions and performance: the moderating effects of slack and visibility[END_REF] identify the impacts of supply base complexity on disruptions and performance. [START_REF] Sokolov | Structural analysis of the ripple effect in the supply chain[END_REF] quantify the ripple effect in the SC with the help of selected indicators from graph theory and develop a hybrid static-dynamic model for the performance impact assessment of disruption propagation in a distribution network. [START_REF] Han | Evaluation mechanism for structural robustness of supply chain considering disruption propagation[END_REF] assess the SC structural robustness considering disruption propagation in a connected graph; they perform a quantitative assessment of the structural robustness of random networks compared with the probability of network disruption due to random risk. [START_REF] Tang | Complex interdependent supply chain networks: Cascading failure and robustness[END_REF] develop a time-varied cascading failure model and analyse the ripple effect as failed-load propagation in the SC. They present the SC as an interdependent structure of an undirected cyber network and a directed physical network, two layers that constitute an SC, develop a robustness measure and analyse SC collapse situations. Some critical observations can be derived from this literature review. The typical assumption that an increase in SC resilience depends on the number of nodes and arcs in the SC (i.e., back-up suppliers or alternative transportation routes) frequently leads to solutions in which SC resilience can be increased only if SC efficiency decreases. In reality, SC structures are different, and SCDs with the same efficiency can perform completely differently with regard to resilience. This dependence of SC resilience on the SCD structure has not been studied extensively in the literature so far. Furthermore, performance analysis with the use of individual supplier failure probabilities dominates the research domain, while another important question, concerning disruption propagation and SCD survivability, is still at an early stage of investigation. In the scope of the research described in this article is the analysis of which SC elements will survive (i.e., remain in operation) after a disruption and under which conditions (i.e., joint failures in a group of suppliers or a failure at a critical supplier that interrupts the SC operation fully) the SC can survive or will lose its survivability. Finally, the issues of SC reconfiguration have received only limited attention in recent literature, and they are included in our study.
Research methodology
The research concept of this study is based on the following four stages:
- theoretical analysis of the hybrid fuzzy-probabilistic approach with application to SCD resilience analysis;
- modelling of the SC structure dynamics with reconfiguration;
- development of an SCD structure resilience index (SRI);
- validation of the SCD SRI against the referenced SCD structures from the literature.
In Section 4, the theoretical analysis of the hybrid fuzzy-probabilistic approach with application to SCD resilience analysis will be elaborated. In this part, we define the underlying concept of this study, the graph genome. It will be shown how to develop an SCD genome and how to compute its components on an example. Subsequently, the formulas for computing the structure failure probability for both monotonous and non-monotonous structures, taking into account both homogeneous and heterogeneous failure probabilities as well as possible failures, will be presented. Theoretical genome properties will be analyzed and illustrated on an example.
Section 5 deals with modelling the SC structure dynamics with reconfiguration. In this section, the central issue is the description of scenarios for modeling SCD failure and reconfiguration. In addition, in this section two underlying optimization problems for computing the failure probabilities in the cases with and without reconfiguration will be defined.
Section 6 integrates the materials of Sections 4 and 5 and develops the SCD structure resilience index. First, on the basis of Section 5, an example of an SCD reconfiguration path is presented. Second, the computation of the SRI with the help of the genome concept from Section 4 is presented.
Section 7 is devoted to the validation of the SCD resilience index against the referenced SCD structures from the literature. First, we present the SCD structures from the study by [START_REF] Kim | Supply network disruption and resilience: A network structural perspective[END_REF] in terms of the fuzzy-probabilistic approach. Second, we compute the resilience index values for three cases: worst case without reconfiguration, average case without reconfiguration, and average case with reconfiguration. The results are compared with each other and analyzed.
Hybrid fuzzy-probabilistic approach
The research methodology is based on a hybrid fuzzy-probabilistic approach to network reliability analysis.
For both monotonous and non-monotonous structures, we consider a genome that can be represented as a vector $\chi = (\chi_0, \chi_1, \chi_2, \dots, \chi_n)$ [START_REF] Kopytov | New methods of calculating the Genome of structure and the failure criticality of the complex objects' elements[END_REF]). The genome is composed of the integer coefficients of the structure failure polynomial (Eq. 1) [START_REF] Aggarwal | A New Method for System Reliability Evaluation[END_REF][START_REF] Ryabinin | Reliability of Engineering Systems[END_REF], Colbourn 1978):

$$T(Q_1, Q_2, \dots, Q_n) = P\Big\{\bigvee_{i=1}^{m} R_i\Big\} = \sum_{i=1}^{m} P(R_i) - \sum_{i<j} P(R_i \wedge R_j) + \sum_{i<j<k} P(R_i \wedge R_j \wedge R_k) - \dots + (-1)^{m+1} P(R_1 \wedge R_2 \wedge \dots \wedge R_m), \quad (1)$$

where $P(R_i) = \prod_{q \in V(R_i,t)} Q_q$; in the homogeneous case $Q_1 = Q_2 = \dots = Q_n = Q$ the polynomial takes the form $T(Q) = \chi_0 + \chi_1 Q + \chi_2 Q^2 + \dots + \chi_n Q^n$.
In the case of a monotonous structure, $m$ is the minimum number of failure edge-cuts, $C_m^t$ is the number of combinations of $t$ cuts out of $m$, and $V(R_i,t)$ is the corresponding set of SC elements from $\{1, 2, \dots, n\}$, where $n$ is the total number of SC elements in the SCD structure graph $G=(V,E)$, i.e., the number of vertices and edges in the graph that can be disrupted plus one element. $R_i$ is a minimum failure edge-cut (i.e., an edge-cut that does not contain any further failure edge-cuts); in the case of failure of the elements in the edge-cut, the network becomes divided into two non-linked parts, with the source node in the first part and the target node in the second part. $\wedge$ and $\vee$ are the logical operators "and" and "or", respectively, and $Q_q$ is the failure probability of a network element.
The objective of the genome method application to the SCD resilience analysis is to include the structural properties of the SCD into resilience assessment. The next specific feature of the genome method is the usage of minimum structure failure edge-cuts that allows identifying groups of critical suppliers or a critical supplier. This means that disruptions at the minimum structure failure edge-cut separate the SC into two non-connected parts and operations are interrupted. If the minimum edge-cut comprises only one element, this element is called "bridge" [START_REF] Deistel | Graph Theory[END_REF]).
In Fig. 2, the monotonous network structure of an SC is depicted.
Fig. 2. Monotonous network structure of an SC
The simplified SC in Fig. 2 comprises five nodes (nodes #1 and #5 are source and target respectively) and ten arcs. For the situation, where disruption may happen only on the arcs, the genome of this SCD includes 11 elements. If disruptions would also be considered for nodes, the genome would include 16 elements. For simplification of the explanation we assume that the nodes are reliable and the failures are subject to the arcs only. In this case, for the given SCD there are five minimum failure edge-cuts as follows (Eq. 2):
$$R_1 = \{Q_1, Q_{10}\}, \quad R_2 = \{Q_2, Q_3, Q_7, Q_9, Q_{10}\}, \quad R_3 = \{Q_3, Q_4, Q_6, Q_7, Q_9, Q_{10}\}, \quad R_4 = \{Q_5, Q_6, Q_8, Q_9, Q_{10}\}, \quad R_5 = \{Q_2, Q_4, Q_5, Q_8, Q_9, Q_{10}\}. \quad (2)$$
The failure polynomial for this SCD can be represented as follows (Eq.3):
$$T(Q_1, Q_2, \dots, Q_{10}) = Q_1 Q_{10} + Q_2 Q_3 Q_7 Q_9 Q_{10} + Q_3 Q_4 Q_6 Q_7 Q_9 Q_{10} + Q_5 Q_6 Q_8 Q_9 Q_{10} + Q_2 Q_4 Q_5 Q_8 Q_9 Q_{10} - Q_1 Q_2 Q_3 Q_7 Q_9 Q_{10} - Q_1 Q_5 Q_6 Q_8 Q_9 Q_{10} - \dots - 2\, Q_1 Q_2 Q_3 Q_4 Q_5 Q_6 Q_7 Q_8 Q_9 Q_{10}, \quad (3)$$

where the omitted terms are the remaining inclusion-exclusion terms of Eq. (1) taken over all combinations of the five minimum failure edge-cuts of Eq. (2),
or in a more compact form as Eq. ( 4):
$$T(Q) = Q^2 + 2Q^5 - 4Q^7 - Q^8 + 5Q^9 - 2Q^{10}, \quad (4)$$

for identical element failure probabilities $Q_1 = Q_2 = \dots = Q_{10} = Q$.
In the example in Fig. 2, we have $m=5$ minimum failure edge-cuts; the number of combinations of $t$ cuts out of $m$ (for $t=1,2,3,4,5$) equals 5, 10, 10, 5 and 1, respectively; and $n=10$ edges. The first five members in $T(Q_1, \dots, Q_{10})$ (Eq. 3) correspond to $t=1$ (i.e., the individual minimum edge-cuts). The next nine members, with sign "$-$", are related to $t=2$ (there should be ten members, but one member is cancelled by a term from $t=3$). The next seven members, with sign "$+$", are related to $t=3$ (there should be ten members, but one member is cancelled by a term from $t=2$ and two members are cancelled by terms from $t=4$). Finally, the last two members, with sign "$-$", relate to $t=4$ (there should be five members, but two members are cancelled by terms from $t=3$ and one member is cancelled by the term from $t=5$). The genome of this SCD is $\chi = (0, 0, 1, 0, 0, 2, 0, -4, -1, 5, -2)$. The interpretation of this genome is as follows:
-The SCD contains 10 transportation edges (vector dimensionality is 11=10+1)
-The components of the genome change their sign ("+" and "$-$") three times. This means there are three production sites in the SC (vertices #2, #3 and #4); node #1 is a supplier, and node #5 is a customer.
-The last genome component is "$-2$". This means that the graph has an edge that ensures the direct and return deliveries (i.e., the edge $Q_4$ is a "bridge" that makes it possible to deliver products from node #3 to node #4 and return the trucks from #4).
-The first non-zero genome component, "1", is the third one. This means that the SCD has one edge combination containing two edges whose failure would result in full SC destruction (i.e., the edges $\{Q_1, Q_{10}\}$).
-The second non-zero component is "2", the sixth genome component. This means that the SCD has two edge combinations, each containing five edges, whose failure would result in full SC destruction (i.e., $\{Q_2, Q_3, Q_7, Q_9, Q_{10}\}$ and $\{Q_5, Q_6, Q_8, Q_9, Q_{10}\}$).
Insight 1. The usage of minimum structure failure edge-cuts allows identifying groups of critical suppliers or a critical supplier whose failure separates the network into two disconnected parts and interrupts the SC operation. For SC elements at the minimum structure failure edge-cuts, back-up sourcing strategies need to be implemented.
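To make the genome construction concrete, the following short Python sketch (our illustration, not part of the original method description; the function name and the brute-force enumeration over all cut combinations are our choices) computes the genome coefficients by inclusion-exclusion over a given list of minimum failure edge-cuts. Applied to the five cuts of the example in Fig. 2, it reproduces the genome stated above.

```python
from itertools import combinations

def genome_from_min_cuts(min_cuts, n):
    """Coefficients (chi_0, ..., chi_n) of the homogeneous failure polynomial T(Q),
    obtained by inclusion-exclusion over the minimum failure edge-cuts (Eq. 1)."""
    chi = [0] * (n + 1)
    for t in range(1, len(min_cuts) + 1):
        for combo in combinations(min_cuts, t):
            joint = set().union(*combo)            # elements that must fail jointly
            chi[len(joint)] += (-1) ** (t + 1)     # alternating inclusion-exclusion sign
    return chi

# minimum failure edge-cuts of the SCD in Fig. 2 (edge indices 1..10)
cuts = [{1, 10}, {2, 3, 7, 9, 10}, {3, 4, 6, 7, 9, 10},
        {5, 6, 8, 9, 10}, {2, 4, 5, 8, 9, 10}]
print(genome_from_min_cuts(cuts, 10))
# -> [0, 0, 1, 0, 0, 2, 0, -4, -1, 5, -2]
```

The enumeration is exponential in the number of minimum cuts; this is acceptable for small illustrative networks but would call for the heuristic methods mentioned in the conclusions for larger SCDs.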
Furthermore, the genome encompasses the following properties of the monotonous and non-monotonous structures:
- if $\chi_0 = 0$ and $\chi_1 + \chi_2 + \dots + \chi_n = 1$, then the failure polynomial $T(Q) = \chi_0 + \chi_1 Q + \chi_2 Q^2 + \dots + \chi_n Q^n$ describes a monotonous structure, and $T(0) = 0$ and $T(1) = 1$ hold;
- if $\chi_0 = 1$ and $\chi_1 + \chi_2 + \dots + \chi_n = -1$, then the structure is non-monotonous and $T(1) = 0$ holds;
- if $\chi_0 = 1$ and $\chi_1 + \chi_2 + \dots + \chi_n = 0$, then the structure is non-monotonous and $T(0) = 1$ holds.
In addition, the following topological properties of monotonous structures are contained in the genome:
- the power of the lowest polynomial member is equal to the minimal power among the minimum structure failure edge-cuts (i.e., the index $l$ of the first non-zero genome component: $\chi_l \neq 0$ and $\chi_i = 0$ for all $i < l$);
- the coefficient of the lowest polynomial member is always positive and equals the number of minimal-power minimum failure edge-cuts;
- the power of the highest polynomial member is equal to the number of the network elements.
The genome can be used for computing the structure failure probability for both monotonous and non-monotonous structures, taking into account homogeneous ($F_{hom}$) and heterogeneous ($F_{het}$) failure probabilities as well as possible ($F_{possib}$) failures, according to Eq. (5):

$$F_{hom}(\chi) = \chi^{T}\Big(1, \tfrac{1}{2}, \tfrac{1}{3}, \dots, \tfrac{1}{n+1}\Big), \qquad F_{het}(\chi) = \chi^{T}\Big(1, \tfrac{1}{2}, \tfrac{1}{2^2}, \dots, \tfrac{1}{2^n}\Big), \qquad F_{possib}(\chi) = \sup_{\alpha \in [0,1]} \min\Big\{\chi^{T}\big(1, \alpha, \alpha^2, \dots, \alpha^n\big),\, g(\alpha)\Big\}. \quad (5)$$
In the case of homogeneous failure probabilities, the network failure probability for the polynomial $T(Q) = \chi_0 + \chi_1 Q + \chi_2 Q^2 + \dots + \chi_n Q^n$ lies in the interval $[0,1]$. The closer the failure function is to the line $T(Q) = 1$, the lower the network reliability. This property allows using Eq. (6) for computing the indicator of the network structure failure:

$$F_{hom}(\chi) = \int_0^1 T(Q)\, dQ = \int_0^1 \big(\chi_0 + \chi_1 Q + \chi_2 Q^2 + \dots + \chi_n Q^n\big)\, dQ = \chi^{T}\Big(1, \tfrac{1}{2}, \tfrac{1}{3}, \dots, \tfrac{1}{n+1}\Big). \quad (6)$$
For heterogeneous failure probabilities, Eq. ( 7) is used:
$$F_{het}(\chi) = \int_0^1 \int_0^1 \dots \int_0^1 T(Q_1, Q_2, \dots, Q_n)\, dQ_1\, dQ_2 \dots dQ_n = \chi^{T}\Big(1, \tfrac{1}{2}, \tfrac{1}{2^2}, \dots, \tfrac{1}{2^n}\Big). \quad (7)$$
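Both indicators reduce to weighted sums of the genome components, as the following sketch illustrates (the helper names are ours; the genome value is the one derived for the example of Fig. 2):

```python
def f_hom(chi):
    """Failure indicator for identical element failure probabilities, Eq. (6)."""
    return sum(c / (k + 1) for k, c in enumerate(chi))

def f_het(chi):
    """Failure indicator for heterogeneous failure probabilities, Eq. (7)."""
    return sum(c / 2 ** k for k, c in enumerate(chi))

chi = [0, 0, 1, 0, 0, 2, 0, -4, -1, 5, -2]
print(round(f_hom(chi), 4), round(f_het(chi), 4))
```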
However, as pointed out in recent literature (Lim et al. 2014[START_REF] Simchi-Levi | From superstorms to factory fires: Managing unpredictable supply chain disruptions[END_REF], the usage of failure probabilities in SCD decisions is quite restrictive. Such estimations can be problematic if not enough statistical data is available to estimate the failure probabilities fairly. For this reason, an alternative approach is the usage of a fuzzy method based on the fuzzy possibility space [START_REF] Singer | A fuzzy set approach to fault tree and reliability analysis[END_REF]).
The fuzzy possibility space can be described as $(X, \sigma(X), P)$, where $X$ is a set of elementary events, $\sigma(X)$ is a $\sigma$-algebra of subsets of the space $X$, called the events, and the function $P: \sigma(X) \to [0,1]$ is the possibility measure (i.e., $P(A)$ is the possibility of the event $A \in \sigma(X)$). The event possibility is defined by the possibility distribution function $g: X \to [0,1]$ as $P(A) = \sup_{x \in A} g(x)$. In the possibility measure space, the fuzzy element $\xi$ ($\xi: X \to [0,1]$) is defined. Its membership function can be described via the possibility distribution function: $\mu_{\xi}(x) = P(\{x\}) = g(x)$, $x \in X$.
Further assume that the SC network structure comprises $n$ fuzzy elements $\xi_i$, $i = 1, 2, \dots, n$. Using the operators for fuzzy structures ($\xi_1 \vee \xi_2 = \xi_1 + \xi_2 - \xi_1 \xi_2$, $\xi_1 \wedge \xi_2 = \xi_1 \xi_2$), the structure failure possibility can be described as $T(\xi) = \chi_0 + \chi_1 \xi + \chi_2 \xi^2 + \dots + \chi_n \xi^n$, where $\chi = (\chi_0, \chi_1, \chi_2, \dots, \chi_n)$ is the structure genome.
Similar to the probabilistic approach, we consider the computational procedure for the indicator of the network structure failure. According to reliability theory, in the function class $L(X)$ a possibility measure integral is defined (Pyt'ev 2002). The function class $L(X)$ includes the functions $f: X \to L$, where $L = ([0,1], \leq, +, \cdot)$ is the possibility value scale in the interval $[0,1]$, subject to the operators $\leq$, "+" and "$\cdot$". These functions encompass the following properties:
1. $\forall \alpha \in [0,1]$ and $f \in L(X)$: $\alpha f \in L(X)$.
2. $\forall f \in L(X)$ and involutions $v: [0,1] \to [0,1]$ of a monotonously decreasing function ($v(0) = 1$, $v(1) = 0$): $v(f(x)) \in L(X)$ (Pyt'ev 2002).
3. $\forall f_i \in L(X)$, $i = 1, 2, \dots$: $\sup_i f_i(x) \in L(X)$ and $\inf_i f_i(x) \in L(X)$.
The integral $p: L(X) \to L$ for the possibility measure $P(A) = \sup_{x \in A} g(x)$ with the distribution $g$ is defined as $p(f) = \sup_{x \in X} \min\{f(x), g(x)\}$. If the possibility measure is defined by the distribution $g \in L(X)$, then the following is true (Eq. 8):

$$s(f) = \sup_{\alpha \in [0,1]} \min\big\{\alpha, P(\{x : f(x) \geq \alpha\})\big\} = \sup_{\alpha \in [0,1]} \min\big\{\alpha, \sup\{g(x) : f(x) \geq \alpha\}\big\} = \sup_{\alpha \in [0,1]} \; \sup_{x \in X:\, f(x) \geq \alpha} \min\{\alpha, g(x)\} = \sup_{x \in X} \min\{f(x), g(x)\} = p(f), \quad (8)$$

where $s(f)$ is the classical Sugeno fuzzy integral [START_REF] Sugeno | Theory of fuzzy integrals and its applications[END_REF].
The polynomial of the structure failure possibility $T(\xi) = \chi_0 + \chi_1 \xi + \chi_2 \xi^2 + \dots + \chi_n \xi^n$ belongs to the class $L([0,1])$. This property allows using the integral for the possibility measure as an integral indicator of the structure failure possibility (Eq. 9), and it substantiates the structure failure indicators defined in Eq. (5):

$$F_{possib}(\chi) = \sup_{\alpha \in [0,1]} \min\{T(\alpha), g(\alpha)\} = \sup_{\alpha \in [0,1]} \min\Big\{\chi^{T}\big(1, \alpha, \alpha^2, \dots, \alpha^n\big),\, g(\alpha)\Big\}. \quad (9)$$
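A direct way to evaluate Eq. (9) numerically is a grid search over $\alpha$, as in the sketch below; the triangular possibility distribution g used here is an arbitrary example of ours, not one prescribed by the method.

```python
import numpy as np

def f_possib(chi, g, num=1001):
    """Possibilistic failure indicator of Eq. (9): sup-min of T(alpha) and g(alpha)."""
    alpha = np.linspace(0.0, 1.0, num)
    T = np.polyval(list(reversed(chi)), alpha)   # T(alpha) = sum_k chi_k * alpha**k
    return float(np.max(np.minimum(T, g(alpha))))

chi = [0, 0, 1, 0, 0, 2, 0, -4, -1, 5, -2]
g = lambda a: np.maximum(0.0, 1.0 - np.abs(a - 0.3) / 0.3)   # example triangular distribution
print(f_possib(chi, g))
```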
Modelling the SC structure dynamics with reconfiguration
As a consequence of a disruption, an SCD can take the form of different states $S$. A state represents the SCD (i.e., the graph $G=(V,E)$) subject to non-disrupted and disrupted elements. Since SC elements can recover after a failure, it should be noted that the initial state $S_0$ denotes the state where no SC element has failed. Subsequently, due to disruptions and recovery actions, the SCD graph can transit through different states. An example with five state levels is given in Fig. 3.
Fig. 3. Levels of the SC structure dynamics
The first degradation level corresponds to those states with the failure in one element; the second degradation level corresponds to the states with the failure in two elements, etc. The recovery actions move the SCD back from the second level to the first level, and so on. The sequence of the transitions through the structural states is called SC structure dynamics control [START_REF] Ivanov | A multi-structural framework for adaptive supply chain planning and operations with structure dynamics considerations[END_REF][START_REF] Ivanov | Structure dynamics control approach to supply chain planning and adaptation[END_REF]. The sequence of events for the structure dynamics is called the structure dynamics scenario.
Scenarios belong to the crucial issues in SCD resilience analysis. In general, we do not know exactly which elements will fail. However, the structure failure polynomial (Eqs 1 and 2) contains the graph structure. For example, in the case of a $Q_1$ failure, we have an indirect failure of $Q_2, Q_3, Q_4, Q_5, Q_6, Q_7, Q_8$, and $Q_9$. In other words, the SCD will transit to a state where $Q_1, Q_2, Q_3, Q_4, Q_5, Q_6, Q_7, Q_8$, and $Q_9$ are missing.
In addition, failures in different elements may have different impact on the SCD resilience. That is why we suggest analyzing homogenous, heterogeneous and possibilistic failures, respectively.
Finally, recovery actions may influence the resilience. That is why we suggest considering three scenarios, i.e., worst case (failure in the most critical elements and no recovery measures), average case and no reconfiguration, and average case with reconfiguration (see Sect. 6).
Since the genome represents the SCD, each structural state $S$ can be described by a genome $\chi$. Therefore, the total resilience or total failure of a path in the SC structure dynamics can be computed using Eqs (6), (7) and (9). By integrating these indicators, SCD resilience can be quantified for homogeneous, heterogeneous and possibilistic failures and recovery actions.
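One simple way to encode a structural state and recompute its genome is sketched below. A state is represented here by the set of elements that have already failed; such elements are treated as certain failures (Q = 1) and therefore drop out of every minimum cut. This encoding is our modelling assumption for illustration, not a prescription of the paper; an emptied cut means the network is already separated (T is identically 1).

```python
from itertools import combinations

def genome(min_cuts, n):
    # inclusion-exclusion over minimum failure edge-cuts, as in Eq. (1)
    chi = [0] * (n + 1)
    for t in range(1, len(min_cuts) + 1):
        for combo in combinations(min_cuts, t):
            chi[len(set().union(*combo))] += (-1) ** (t + 1)
    return chi

def state_genome(min_cuts, n, failed):
    """Genome of a degraded state: failed elements are removed from every cut."""
    reduced = [cut - failed for cut in min_cuts]
    if any(not cut for cut in reduced):
        return None                     # an entire cut has failed: the SC is separated
    return genome(reduced, n - len(failed))

cuts = [{1, 10}, {2, 3, 7, 9, 10}, {3, 4, 6, 7, 9, 10},
        {5, 6, 8, 9, 10}, {2, 4, 5, 8, 9, 10}]
print(state_genome(cuts, 10, failed={9}))        # state after the failure of edge Q9
print(state_genome(cuts, 10, failed={1, 10}))    # the cut {Q1, Q10} has failed -> None
```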
Consider the SC reconfiguration processes. Denote as $\chi \xrightarrow{Q_j} \chi'$ the transition from one structural state $S$ (described by a genome $\chi$) to another structural state $S'$ (described by a genome $\chi'$) as a consequence of a failure/recovery of a graph element $Q_j \in Q$. Denote as $X(\chi)$ the set of all structural states reachable from the state with genome $\chi$. Then, a reconfiguration scenario in the degradation or recovery process subject to an initial state $S_0$ and a final state $S_f$ can be described as the following transition chain (Eq. 10):

$$\chi_0 \xrightarrow{Q_{j_1}} \chi_1 \xrightarrow{Q_{j_2}} \chi_2 \xrightarrow{Q_{j_3}} \dots \xrightarrow{Q_{j_N}} \chi_N, \quad (10)$$

where $\chi_0 = \chi^0$, $\chi_N = \chi^f$, and $\{Q_{j_1}, Q_{j_2}, \dots, Q_{j_N}\} \subseteq Q$. The structural changes along the intermediate states of the path (i.e., the case where recovery actions are included) can be described as the following optimization problem (11):

$$\text{maximize } F = \sum_{j=1}^{N} \big(F_{failure}(\chi_j) - F_{failure}(\chi_{j-1})\big) \quad \text{subject to } \chi_0 = \chi^0,\; \chi_N = \chi^f,\; \chi_j \in X(\chi_{j-1}),\; \{Q_{j_1}, \dots, Q_{j_N}\} \subseteq Q. \quad (11)$$
To solve the problem (11), a hybrid branch-and-bound/evolutionary algorithm has been developed as follows.
Step 1. At each iteration $k$, a random sequence $\chi_0^{(k)}, \chi_1^{(k)}, \chi_2^{(k)}, \dots, \chi_N^{(k)}$ is built (where $\chi_0 = \chi^0$, $\chi_N = \chi^f$) that corresponds to a structure reconfiguration path.
Step 2. For the structure reconfiguration path, the indicator $F^{(k)} = \sum_{j=1}^{N} \big(F_{failure}(\chi_j^{(k)}) - F_{failure}(\chi_{j-1}^{(k)})\big)$ is computed; the random transition to an intermediate state is performed subject to the genome $\chi_j^{(k)}$.
Step 3. The value of $F^{(k)}$ for the structure reconfiguration path is compared with the $F$-value at iteration $k-1$.
Step 4. The path with the highest $F$-value is used in the following iterations.
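A minimal Monte-Carlo sketch of Steps 1-4 is given below. It samples random failure orders, evaluates every intermediate state with the homogeneous indicator of Eq. (6), scores a path by the trapezoidal sum of these values (one plausible reading of the path indicator; the authors' exact aggregation may differ), and keeps the path with the largest score.

```python
import random
from itertools import combinations

def genome(min_cuts, n):
    chi = [0] * (n + 1)
    for t in range(1, len(min_cuts) + 1):
        for combo in combinations(min_cuts, t):
            chi[len(set().union(*combo))] += (-1) ** (t + 1)
    return chi

def f_state(min_cuts, n, failed):
    """Homogeneous failure indicator (Eq. 6) of the state with `failed` elements."""
    reduced = [cut - failed for cut in min_cuts]
    if any(not cut for cut in reduced):
        return 1.0                                    # the SC is already separated
    chi = genome(reduced, n - len(failed))
    return sum(c / (k + 1) for k, c in enumerate(chi))

def best_random_path(min_cuts, n, to_fail, iterations=200, seed=0):
    rng, best = random.Random(seed), None
    for _ in range(iterations):
        order = rng.sample(to_fail, len(to_fail))     # Step 1: random reconfiguration path
        failed, profile = set(), [f_state(min_cuts, n, set())]
        for q in order:                               # Step 2: walk the path, score each state
            failed.add(q)
            profile.append(f_state(min_cuts, n, failed))
        score = sum((a + b) / 2 for a, b in zip(profile, profile[1:]))
        if best is None or score > best[0]:           # Steps 3-4: keep the best path so far
            best = (score, order)
    return best

cuts = [{1, 10}, {2, 3, 7, 9, 10}, {3, 4, 6, 7, 9, 10},
        {5, 6, 8, 9, 10}, {2, 4, 5, 8, 9, 10}]
print(best_random_path(cuts, 10, to_fail=[1, 2, 9, 10]))
```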
The problem of pessimistic structure dynamics path (i.e., the case where no recovery actions are included) can be described as the following optimization problem (12):
$$\text{maximize } F = \sum_{j=1}^{N} F_{failure}(\chi_j) \quad \text{subject to } \chi_0 = \chi^0,\; \chi_N = \chi^f,\; \chi_j \in X(\chi_{j-1}),\; \{Q_{j_1}, Q_{j_2}, \dots, Q_{j_N}\} \subseteq Q. \quad (12)$$

The quantity

$$S_k = \sum_{j=1}^{N} \frac{F_{failure}(\chi_{j-1}^{(k)}) + F_{failure}(\chi_j^{(k)})}{2}$$

describes the total structural resilience for a given reconfiguration path through the changes in the structure failure values during the reconfiguration on the path $\Gamma^{(k)}$. In this case, the relation (Eq. 13) describes the SCD resilience during the structural reconfiguration on the path $\Gamma^{(k)}$:

$$J_k = \frac{S_k}{S_0}. \quad (13)$$

At each experiment $k$, a random sequence $\chi_0^{(k)}, \chi_1^{(k)}, \dots, \chi_N^{(k)}$ (where $\chi_0 = \chi^0$, $\chi_N = \chi^f$) is built according to an SCD structure reconfiguration path. For the path, the value of $J_k = S_k / S_0$ is computed. Finally, the average over all experiments is computed as shown in Eq. (14):

$$J_0 = \frac{1}{M} \sum_{k=1}^{M} J_k. \quad (14)$$
The SCD structure resilience $J_{SU}$ belongs to the interval $[J_{min}, J_{max}]$ and the expected SC resilience is $J_0$. The values of $J_{SU}$ can be described as a fuzzy triple $(a, \alpha, \beta)$, where $a = J_0$, $\alpha = J_0 - J_{min}$, $\beta = J_{max} - J_0$. Considering three cases (monotonous structure, non-monotonous structure and structure with possible failures), three fuzzy triples are computed: $(a_m, \alpha_m, \beta_m)$, $(a_n, \alpha_n, \beta_n)$, $(a_p, \alpha_p, \beta_p)$. Then the integral SCD structure resilience $J_{SU}$ is computed as Eq. (15):

$$J_{SU} = (a, \alpha, \beta) = \frac{(a_m, \alpha_m, \beta_m) + (a_n, \alpha_n, \beta_n) + (a_p, \alpha_p, \beta_p)}{3}. \quad (15)$$

Finally, the center-of-gravity method is used to remove the fuzziness from the final solution (Eq. 16):

$$E(J_{SU}) = a + \frac{\beta - \alpha}{3}. \quad (16)$$
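The aggregation of the three fuzzy triples and the subsequent defuzzification are straightforward to compute, as in the sketch below; the numerical triples are purely illustrative values of ours, not results from the paper.

```python
def aggregate(triples):
    """Component-wise average of the fuzzy triples (a, alpha, beta), Eq. (15)."""
    m = len(triples)
    return tuple(sum(t[i] for t in triples) / m for i in range(3))

def centre_of_gravity(triple):
    """Defuzzified resilience index, Eq. (16): E(J_SU) = a + (beta - alpha) / 3."""
    a, alpha, beta = triple
    return a + (beta - alpha) / 3

triples = [(0.31, 0.25, 0.40),   # monotonous case (illustrative numbers)
           (0.28, 0.20, 0.45),   # non-monotonous case
           (0.35, 0.30, 0.42)]   # possibilistic case
print(round(centre_of_gravity(aggregate(triples)), 4))
```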
Experimental results
For the analysis, we consider four SC network structures from the study by [START_REF] Kim | Supply network disruption and resilience: A network structural perspective[END_REF] (Fig. 5). In terms of the probabilistic-fuzzy approach, these structures are represented as shown in Fig. 6. According to the model and algorithm in Sect. 5, the modelling of the worst case (failure in the most critical elements and no recovery measures), the average case without reconfiguration, and the average case with reconfiguration, subject to both node and arc failures, has been performed. Similar to [START_REF] Kim | Supply network disruption and resilience: A network structural perspective[END_REF], we assumed that node #1 (Target) and node #12 (Source) are not perturbed. The computed values of SCD resilience are presented in Table 1. Comparing the results of these methods, it can be observed that all methods suggest the same ranking of SCDs by resilience: B, C, D, A. At the same time, differences in the absolute resilience values can be observed when comparing the results of [START_REF] Kim | Supply network disruption and resilience: A network structural perspective[END_REF], our results without reconfiguration and our results with reconfiguration. Moreover, in the study by [START_REF] Kim | Supply network disruption and resilience: A network structural perspective[END_REF], the SCD #B has a significantly higher resilience compared to the three other structures, whereas in our analysis the resilience values of B and C are close to each other. The explanation of these effects can be seen in the inclusion of the reconfiguration and of the fuzzy-probabilistic analysis in the resilience index computation.
Insight 2. SC resilience depends both on network structure characteristics and on recovery policies.
Fair SC resilience estimation needs to include both proactive and reactive policies in order to compare different SCDs regarding the resilience. For SC elements at the minimum structure failure edge-cuts, proactive strategies need to be integrated with recovery policies.
In addition, the inclusion of the minimum edge-cut failures in the genome (instead of individual failures, as in reliability analysis) explains the effects observed in the computational results. This is the benefit and novelty of the proposed approach. It can also be observed that, except for structure (b), the difference between the values of [START_REF] Kim | Supply network disruption and resilience: A network structural perspective[END_REF] and our results in the case with reconfiguration is approximately 0.2. The reason for this is, on the one hand, the inclusion of the pessimistic scenario (i.e., worst case and no recovery actions) in our resilience index; in addition, similarities in the failures (introduced to ensure the comparability of the results) lead to such a correlation. Finally, higher resilience values can be observed in our approach. Comparing the results of our approach with and without reconfiguration, it can be noted that the proposed method for computing the structure resilience index works in both cases. Its values also clearly reflect that the inclusion of reconfiguration increases the SCD resilience. This insight requires further analysis subject to different recovery policies, but we can assume that SC structure reconfiguration acts as a resilience increase driver.
Managerial Insights
Disruption risks may result in a ripple effect and structure dynamics in the SC. It should be noted that the scope of the rippling and its performance impact depend both on robustness reserves (e.g., redundancies like inventory or capacity buffers) and on the speed and scale of recovery actions (Knemeyer et al. 2009[START_REF] Hu | Managing risk of supply disruptions: Incentives for capacity restoration[END_REF], Ivanov and Sokolov 2013[START_REF] Kim | Guilt by association: Strategic failure prevention and recovery capacity investments[END_REF][START_REF] Pettit | Ensuring supply chain resilience: Development and implementation of an assessment tool[END_REF]. In many practical settings, companies need analysis tools to estimate both SC efficiency and SC resilience. For SC resilience, the impacts of recovery actions subject to different disruptions and performance indicators need to be estimated.
The results of this study contribute to support decisions in these practical problems. The developed model can help the SC risk managers to identify whether the existing SCD is resilient for different disruption scenarios. The model also considers mitigation strategies (i.e., reconfiguration) that can be used by SC risk managers and translated into the SCD changes.
With the use of the developed approach, SC managers can compare different possible SCDs regarding their resilience using the proposed SCD resilience index. Since the calculation of the resilience index includes the recovery actions, the developed model can help to identify opportunities to reduce disruption and recovery costs by SC re-design. If the experts can forecast or at least assume possible failures and recovery actions, it becomes possible to create reconfiguration scenarios (see. Fig. 4) and compute the SCD resilience index subject to optimistic, pessimistic or possibility failure scenarios.
The original feature of this study is that it allows analyzing the groups of critical suppliers or a critical supplier whose failure interrupts the SC operation fully. The identification of such critical suppliers with the help of genome analysis and minimum edge-cuts usage may allow redesigning the SC in order to increase the SCD resilience without efficiency decrease. In addition, the identification of nodes and arcs in the minimum edge-cuts in the SC the failure of which is critical for SCD survivability may help to develop more specific risk mitigation policies such as dual sourcing or continuous risk monitoring.
With the help of the structure dynamics control, the model analyses effective ways to recover and re-allocate resources and flows after the disruption. The model can be used by SC risk specialists to adjust mitigation and recovery policies with regard to critical SCD elements. Finally, the usage of the fuzzy-probabilistic approach extends possible application of the developed method regarding the data availability for analysis.
Conclusions
In recent literature, SCD models have been extensively considered in the light of severe disruptions, and SC resilience has become one of the key investigation categories. The existing graph-theoretical studies have mostly focused on the impact of disruptions on SCD resilience using reliability theory, without considering structure reconfiguration. In this paper, we take another perspective and extend the existing models of SCD resilience analysis by incorporating structure reconfiguration. In the scope of this research is the analysis of which SC elements will survive (i.e., remain in operation) after a disruption and under which conditions (i.e., a failure in a group of suppliers that interrupts the SC operation fully) the SC can still survive or will lose its resilience.
The research methodology is based on a hybrid fuzzy-probabilistic approach to network resilience analysis. The SCD resilience analysis is performed as the analysis of optimistic, pessimis- The results have some major implications. First, it suggests a method to compare different SCDs regarding the resilience. Second, since the reconfiguration is included, the resilience analysis becomes more realistic and considers both disruptions and recovery. Third, the developed method can be used both for monotonous and non-monotonous structures. Fourth, the usage of minimum structure failure edge-cuts allows identifying groups of critical suppliers or a critical supplier whose failure separates the network into two disconnected parts and interrupts the SC operation fully.
Fig. 1. Disruption consideration without and with recovery measures: (1) resilience consideration without recovery measures; (2) resilience consideration with recovery measures.
Fig. 4. Supply chain structure reconfiguration path
Fig. 5. Reference SC network structures
Fig. 6. Reference SC network structures in terms of the probabilistic-fuzzy approach
Table 1. Computed values of SCD structural resilience J_SU

Structure   Worst case (no reconfiguration)   Average case (no reconfiguration)   Average case (with reconfiguration)   E(J_SU)
a           0.0615                            0.1548                              0.7056                                0.3073
b           0.0496                            0.2538                              0.8606                                0.3880
c           0.0289                            0.2527                              0.8306                                0.3707
d           0.0456                            0.1640                              0.7519                                0.3205
In Table 2, the results are compared with the resilience estimation from [START_REF] Kim | Supply network disruption and resilience: A network structural perspective[END_REF].
Table 2. Comparison of the values of SCD resilience

Structure   Kim et al. (2015)   Our approach (with reconfiguration)   Our approach (no reconfiguration)
a           0.11                0.31                                  0.15
b           0.30                0.39                                  0.25
c           0.16                0.37                                  0.25
d           0.13                0.32                                  0.16
In the future, the insights gained call for more detailed analysis of the effect of SCD structure reconfiguration as a driver of increased resilience. An important task is to develop methods for restoring the SCD structure from an available genome; this task is quite similar to inverse problems in linear programming. In addition, the complexity of genome calculation (e.g., for structure C, the polynomial contains over 3,000 elements) needs to be addressed through the development of efficient heuristic methods. Finally, the development of risk-mitigation policies for critical suppliers at the minimum edge-cuts is an interesting avenue for future research. | 43,510 | [
"1000922",
"8541"
] | [
"484192",
"471046",
"481384",
"489559",
"471046"
] |
01770433 | en | [
"shs"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01770433/file/Activation%20of%20sensorimotor%20representation.pdf | Thomas Rulleau
Lucette Toussaint
email: [email protected]
Massage can (re-)activate the lower-limb sensorimotor representation of older adult inpatients
Keywords: Massage, Sensorimotor representation, Older adult, Lower limb, Mental rotation
Understanding how changes in afferent signal processing may impact sensorimotor processes is essential for physical therapists whose objective is to actively improve the reorganization of motor function in patients suffering from sensorimotor system disturbances.
Because sensorimotor processes slow with advancing age, we examined whether a single massage session can reactivate the sensorimotor processes of older adult inpatients.
Participants were randomly assigned to the experimental (with massage) or control (without massage) groups. Massage was performed on both feet, with 7 min 30 s spent on each foot (Experiment 1), or on the right foot alone or the right foot and knee for 10 minutes (Experiment 2). Body and non-body mental rotation tasks were used to assess the lower-limb motor representation before (pre-test), immediately after (post-test 1) and 24 hours after the massage (post-test 2). Results
showed the positive impact of massage on the body mental rotation task. The activation of the sensorimotor processes can last up to 24 hours depending on the extent of the massaged area.
Importantly, the activation of the sensorimotor representation concerned not only the massaged leg but also the contralateral leg. No difference between groups appeared in the non-body mental rotation task which did not solicit the sensorimotor processes. These results highlighted that peripheral activation via a massage had a specific impact on the sensorimotor processes. Massage is an interesting technique which can help older adult inpatients cope with the slowdown of the signal processing related to advancing age.
Plasticity is an old concept in neuroscience that led many researchers to take an interest in the structural as well as the functional reorganization of the central nervous system according to environmental demands and experiences. Pascual-Leone and collaborators (2005) wrote in their review on the plasticity of the cortex of the human brain that "The challenge we face is to learn enough about the mechanisms of plasticity and the mapping relations between brain activity and behavior to be able to guide it, suppressing changes that may lead to undesirable behaviors while accelerating or enhancing those that result in a behavioral benefit for the subject or patient."
One topic of interest related to plasticity is understanding how changes in afferent or efferent signal processing may be observed at the behavioral level to enhance changes that can favor behavioral benefits and suppress those that can induce undesirable behaviors for the subject or the patient.
Over the past two decades, researchers in cognitive neuroscience have revealed that the human brain shows rich reorganization and change not only in younger but also in older adults.
These changes observed in various parts of the brain may affect cognitive and sensorimotor functions, aging being associated with the progressive loss of these functions. For example, sensorimotor changes with aging are highlighted by slower and less smooth movements [START_REF] Diggles-Buckles | Age-related slowing[END_REF][START_REF] Goble | Proprioceptive sensibility in the elderly: degeneration, functional consequences and plastic-adaptive processes[END_REF], increased spatial and temporal variability [START_REF] Contreras-Vidal | Elderly subjects are impaired in spatial coordination in fine motor control[END_REF], difficulties to perform multi-joint movements [START_REF] Seidler | Changes in multi-joint performance with age[END_REF], postural instability that increases the risk for fall [START_REF] Tinetti | Risk factors for falls among elderly in persons living in the community[END_REF]) and so on. The sensorimotor performance impairments with aging are likely due to changes in peripheral structures and central nervous system (see [START_REF] Seidler | Motor control and aging: links to age-related brain structural, functional, and biochemical effects[END_REF], for a review on aged-related differences and motor deficits in older adults). In the present study, we will focus on the positive effects of brain plasticity in older adults, especially considering the sensorimotor function and the effects of massage on the sensorimotor representation. This knowledge can be important for physical therapists whose objective is to actively improve the reorganization of motor function in patients suffering from sensorimotor system disturbances.
In cognitive neuroscience, studies that focused on the plasticity of the sensorimotor system have mainly used two approaches, based on either a stimulating or an impoverished environment, to assess the continuous and rapid changes in sensorimotor representation. Some researchers using brain mapping techniques have revealed a decrease in motor cortex excitability [START_REF] Avanzino | Shaping motor cortex plasticity through proprioception[END_REF][START_REF] Facchini | Time-related changes of excitability of the human motor system contingent upon immobilization of the ring and little fingers[END_REF] as well as a disruption in motor performance [START_REF] Bassolino | Functional effect of shortterm immobilization: kinematic changes and recovery on reaching-to-grasp[END_REF]Huber, Ghilardi, Massimini, Ferrarelli, Reidner, Peterson, Tononi, 2006) following 10-12 hours to 4 days of immobilization of the fingers or an arm. Recently, to examine the central and functional effects of non-use of a limb, some researchers based their reasoning on the simulation theory [START_REF] Jeannerod | Neural simulation of action: a unifying mechanism for motor cognition[END_REF], which states that physical and simulated actions share the same sensorimotor representations and rely on similar mechanisms. The authors specifically examined whether internal sensorimotor representations are affected by the input/output restriction of signal processing following a short period of upper-limb non-use (24 or 48 hours; [START_REF] Meugnot | Motor imagery practice may compensate for the slowdown of sensorimotor processes induced by short-term upper-limb immobilization[END_REF][START_REF] Meugnot | The embodied nature of motor imagery processes highlighted by short-term limb immobilization[END_REF]) by asking participants to solve mental rotation tasks using body or non-body stimuli, which rely on motor and visual imagery strategies, respectively. Based on a motor imagery strategy, the mental rotation tasks used body images (i.e., hand or foot) to assess the efficiency of a specific internal sensorimotor representation (i.e., the upper-limb or the lower-limb representation, respectively). The results showed that immobilized participants took more time than controls (i.e., non-immobilized participants) to solve the body (hand) mental rotation task (i.e., to identify whether the stimulus corresponds to either a left or a right hand), whereas no differences were detected between the two groups when solving the non-body mental rotation task (i.e., to identify whether the stimulus corresponds to the number "2" or its mirror image).
Moreover, a short period of sensorimotor restriction did not lead to a general slowdown in the sensorimotor processes. A somatotopic effect induced by 24 hours of left-hand immobilization has been reported, as revealed by longer response times for the stimuli depicting the immobilized hand compared to the non-immobilized hand [START_REF] Meugnot | The embodied nature of motor imagery processes highlighted by short-term limb immobilization[END_REF]2015). In contrast, 48 hours of left-hand non-use affected both the immobilized and the non-immobilized hand [START_REF] Toussaint | Short-term limb immobilization affects cognitive motor processes[END_REF]. Moreover, although 48 hours of left-hand immobilization impairs the effector system corresponding to the restricted limb (i.e., the upper-limb system), the impairment does not extend to another effector system (i.e., the lower-limb system) [START_REF] Meugnot | Selective impairment of sensorimotor representations following short-term upper-limb immobilisation[END_REF].
Other researchers, who have focused more on the effect of a stimulating environment on sensorimotor representations, have confirmed its central consequences. Neuroimaging studies have shown that enhancing sensory input by proprio-tactile stimulations (vibrations on muscles) modulates the excitability of the motor cortical projections to a specific limb and to the opposite limb [START_REF] Kossev | Crossed effects of muscle vibration on motor-evoked potentials[END_REF][START_REF] Rosenkranz | The effect of sensory input and attention on the sensorimotor organization of the hand area of the human motor cortex[END_REF] and can reduce the decrease in motor cortex excitability due to limb non-use [START_REF] Avanzino | Shaping motor cortex plasticity through proprioception[END_REF][START_REF] Roll | Illusory movements prevent cortical disruption caused by immobilization[END_REF]. From a behavioral point of view, the functional relevance of a stimulating environment has been demonstrated by experiments showing the effect of augmented sensory feedback on movement control in patients. For example, adding or enhancing proprioceptive information by muscle vibrations improved head and trunk movements in patients with torticollis [START_REF] Karnath | Effect of prolonged neck muscle vibration on lateral head tilt in severe spasmodic torticollis[END_REF], finger movements in pianists with musician's dystonia [START_REF] Rosenkranz | Regaining motor control in musician's dystonia by restoring sensorimotor organisation[END_REF] and gait control in Parkinson's disease patients following vibrations on lower limb muscles [START_REF] El-Tamawy | Effects of augmented proprioceptive cues on the parameters of gait of individuals with Parkinson's disease[END_REF]. Somatosensory stimulation can also be performed by physical therapists with a massage procedure. In particular, massage therapy is known to reduce pain and increase the range of motion in patients with knee arthritis pain (30 minutes/week for 4 weeks), improve active knee flexion after knee arthroplasty following one week of treatment (20 minutes/day) [START_REF] Field | Massage therapy research[END_REF], and improve physical fitness (strength, flexibility, agility, speed) in healthy soccer players (30 minutes every 3 days for 10 days) [START_REF] Hongsuwan | Effects of thai massage on physical fitness in soccer players[END_REF]. Sensory stimulation for 3 hours has been shown to enhance tactile acuity, haptic object exploration and fine motor control in older adults [START_REF] Kalisch | Improvement of sensorimotor functions in old age by passive sensory stimulation[END_REF]. Moreover, some authors have shown that a 20-minute "over-activation" of somatosensory information (by manual massage and mobilization of both the feet and ankle joints) allows older adults to compensate for the absence of vision when regulating their postural sway [START_REF] Vaillant | Effect of manipulation of the feet and ankles on postural control in elderly adults[END_REF] as well as to improve functional balance performances (assessed by means of the One Leg Balance test and the Timed Up and Go test) [START_REF] Vaillant | Effect of manipulation of the feet and ankles on postural control in elderly adults[END_REF][START_REF] Vaillant | Massage and mobilization of the feet and ankles in elderly adults: Effect on clinical balance performance[END_REF].
Even if these latter experiments clearly showed that such a therapeutic intervention plays a major role in postural control, nothing is known about the importance of massage taken in isolation.
The functional changes that probably follow massage, although still poorly documented, could reveal the impact of a massage procedure on the sensorimotor system and legitimize massage interventions by therapists in rehabilitation programs. However, the origin of these functional changes is unknown, and an important issue for cognitive scientists is to clarify whether they result from peripheral effects (i.e., the softening of muscles and/or joints) or central effects (i.e., the activation of the sensorimotor processes/representation).
Therefore, the aim of the present study was to examine the effect of a single massage session on mental rotation tasks using either body or non-body stimuli (Experiments 1 and 2), which was also performed in the study of the immobilization-induced effects on the sensorimotor representation [START_REF] Meugnot | The embodied nature of motor imagery processes highlighted by short-term limb immobilization[END_REF]2015;[START_REF] Toussaint | Short-term limb immobilization affects cognitive motor processes[END_REF]. Because aging disturbs the sensorimotor representation, as revealed by the slowdown in response times on mental rotation tasks with body parts [START_REF] Saimpont | Motor imagery and aging[END_REF][START_REF] Saimpont | Aging affects the mental rotation of left and right hands[END_REF], we chose to examine the effect of a single massage session on the mental rotation abilities of older adult participants. In Experiment 1, a physical therapist performed a massage on both feet successively for 15 minutes. In Experiment 2, the therapist performed massage for 10 minutes on either the right foot or the right foot and the right knee. We expected that the massage-induced effects on sensorimotor processes would be manifested by better performance (i.e., a decrease in response times) when solving the body mental rotation task (with foot images), such a result confirming the central effects of massage. The importance of the extent of the massage area on the activation of the sensorimotor processes was specifically investigated in Experiment 2 by comparing the effects of massage on the foot only versus the foot and the knee. In Experiment 2, we also investigated the bilateral activation of the sensorimotor system following unilateral massage (i.e., massage on one side of the body). In both experiments, no positive impact of the massage was expected in the non-body mental rotation task (with number images), which did not spontaneously elicit the use of sensorimotor processes [START_REF] Dalecki | Mental rotation of letters, body parts and complex scenes: Separate or common mechanisms ?[END_REF].
Experiment 1
Method
Participants
32 right-handed inpatients voluntarily participated in the experiment (mean age 78.5 years, SD = 7.6 years). They were hospitalized for diverse geriatric or neurogeriatric reasons (asthenia, general state alteration, falls, chronic obstructive pulmonary disease, depression, etc.)
and were still at the hospital during the experiment. All participants were able to walk 10 meters in less than 30 seconds. They had normal or corrected-to-normal vision and provided written informed consent for their participation prior to their inclusion in this study. Before testing, the participants were randomly divided into 2 groups: a control group (n=16, mean age 79±7.5 years, 9 males) and a massage group (n=16, mean age 78 ±8.5 years, 8 males). The study protocol was in accordance with the ethical standards of the local ethics committee of the hospital center where the experiment occurred. All participants were naïve to the purpose of the experiment. However, they received both written and verbal descriptions of the experimental procedure and signed consent forms indicating agreement to participate in the experiment.
Tasks and material
All participants performed 2 mental rotation tasks using either body or non-body stimuli.
For both tasks, participants were seated in front of a computer screen (̴ 60 cm) and instructed to place their left and right index fingers on 2 marked keys located on the left and the right sides of the keyboard, respectively. Participants were asked to identify the images displayed on the center of the computer screen as quickly and accurately as possible. In the body mental rotation task, the stimuli consisted of pictures of right or left feet (created with Poser 6.0 software; sized 20.7 x 12.7 cm; Figure 1A). The participants had to determine the laterality of the foot images and answer by pressing the left-marked key for a left foot image or the right-marked key for a right foot image. In the non-body mental rotation task, the stimuli consisted of the number "2" or its mirror image (20.7 x 12.7 cm; Figure 1B). Participants had to determine whether the number was presented in its canonical form or its mirror image by pressing the appropriate left or right key.
For both tasks, the foot and number stimuli were presented in different orientations in the plane of the images (i.e., 0°, 40°, 80° and 120° in clockwise and counterclockwise directions). A trial began when a fixation cross was displayed in the center of the screen for 500 ms. Then, a stimulus was presented and remained visible until the participant provided his/her response. The E-Prime 2.0 software package (Psychology Software Tools Inc., Pittsburgh, USA) was used to present images and record the participants' responses (accuracy and response times).
Procedure
The participants were divided into 2 groups: a massage group and a control group. In the massage group, the intervention method was standardized. The massage technique consisted of effleurage, kneading and friction with moderate pressure applied under the foot (see [START_REF] Field | Massage therapy research[END_REF], for a review). The massage was performed by a physical therapist with 9 years of experience. Both feet were successively massaged for 7 min 30 s each. The control group did not undergo the massage procedure, but they talked (about their family, the weather, etc.) with the therapist for 15 minutes.
The body and non-body mental rotation tasks were performed during three experimental sessions: before (pretest), immediately after (posttest 1) and 24 hours after the massage (posttest 2). For each session, each task was divided into 2 phases: a familiarization phase and an experimental phase. During the familiarization phase, participants were shown 14 randomly presented trials (illustrated in Figure 1). During the experimental phase, participants were shown 5 blocks of 14 trials (i.e., 70 trials per participant) presented in a random order within each block. In the three experimental sessions, the body mental rotation task was performed before the non-body mental rotation task because the sensorimotor processes may be attenuated when a non-body mental rotation task is performed first [START_REF] Toussaint | Short-term limb immobilization affects cognitive motor processes[END_REF].
Data analysis
Accuracy and response times were recorded and analyzed. Only data from correct responses were used to analyze response times. Separate ANOVAs were performed for the body mental rotation task and the non-body mental rotation task on accuracy (%) and response times (ms) with group (control vs. massage) as a between-subjects factor and session (pretest, posttests 1 and 2) and rotation (0°, 40°, 80° and 120°) as within-subjects factors. Preliminary analyses revealed similar results for clockwise and counterclockwise directions for both body and nonbody stimuli oriented to 40°, 80° and 120° angles, which lead us to average the data with the same rotation angles to increase reliability (for a similar procedure, see [START_REF] Wilson | The link between motor impairment level and motor imagery ability in children with developmental coordination disorder[END_REF]. Post hoc comparisons were carried out with Newman-Keuls test. Alpha was set at .05 for all analyses.
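A minimal analysis sketch (not the authors' actual script) of the key group x session comparison on response times is given below, assuming a long-format table with one mean response time per participant, session and rotation angle. The file and column names are hypothetical, and the pingouin package is used in place of the original (unspecified) software; for brevity the sketch collapses over rotation angles and therefore models only one within-subject factor.

```python
# Hedged sketch of the group x session analysis of response times.
# Assumptions: hypothetical CSV with columns participant, group, session,
# rotation, rt_ms; pandas and pingouin (>= 0.5) installed.
import pandas as pd
import pingouin as pg

rt = pd.read_csv("mental_rotation_rt.csv")          # hypothetical file

# Collapse over rotation angles to focus on the group x session interaction
per_session = (rt.groupby(["participant", "group", "session"], as_index=False)
                 ["rt_ms"].mean())

aov = pg.mixed_anova(data=per_session, dv="rt_ms", within="session",
                     subject="participant", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])
```

The full design reported in the paper also includes rotation as a second within-subject factor and Newman-Keuls post hoc tests, which this simplified sketch does not reproduce.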
Results
The body mental rotation task
ANOVA performed on the percentage of correct responses showed a main effect of rotation only (F 3,90 = 27.09, p < .0001, ŋ² p = .47). Post hoc comparisons revealed that correct responses were more frequent for the 0°, 40° and 80° foot rotations (M = 95%; SD = 8%) than for the 120° rotation (M = 82%; SD = 15%; ps < .0002), regardless of the group and the session.
ANOVA on the response times showed main effects of session (F 2,60 = 5.25, p < .008, ŋ² p = .15) and rotation (F 3,90 = 51.66, p < .0001, ŋ² p = .63), as well as a significant session x group interaction (F 2,60 = 3.59, p = .034, ŋ² p = .11). Post hoc comparisons showed that response times increased from the 40° to 120° foot rotations (40°: M = 1352 ms, SD = 149 ms; 80°: M = 1489 ms, SD = 162 ms; 120°: M = 1740 ms, SD = 190 ms; ps < .006), while no significant difference was found between the 0° and 40° rotations (0°: M = 1298 ms, SD = 131 ms; p < 0.27). As illustrated in Figure 2 and confirmed by post hoc comparisons, response times decreased from the pretest to the posttests following the massage (ps < .015) without a distinction between posttests 1 and 2 (p = 0.62). No significant differences were observed between the pretest and posttests for the control group (ps > 0.41). Moreover, response times were shorter in posttests 1 and 2 for the massage group than for the control group (ps < .05), while no significant difference appeared at pretest.
Discussion
The aim of the first experiment was to investigate the influence of massage on the mental rotation of body stimuli as an indicator of the efficiency of the sensorimotor processes. We expected that enhancing sensory input by means of a massage procedure performed by a physical therapist would activate the sensorimotor processes in older adult participants. For this purpose, we evaluated participants at pretest (i.e., before massage), posttest 1 (i.e., immediately after massage) and posttest 2 (i.e., 24 hours later). A similar pretest/posttest procedure was used for the control participants who did not undergo the massage procedure. The main results of Experiment 1 showed that the response times in the body mental rotation task significantly decreased after 15 minutes of a foot massage, while no significant differences were detected in the non-body mental rotation task between the pretest and posttests. The changes in response times reported for the body mental rotation task cannot be explained by a trade-off with response accuracy, as the percentage of correct responses did not differ between the groups regardless of the session and the rotation angles of the foot images. Importantly, the improvement in response times following the massage was maintained in the posttest performed after a 24-hour delay (in posttest 2). These findings revealed that visual imagery performance, which was evaluated with the non-body mental rotation task, did not show any effect related to the limb-massage procedure. In contrast, motor imagery performance, which was evaluated with the body mental rotation task, was improved by a single and brief massage session (15 minutes) performed by a physical therapist with 9 years of experience. The performance improvement was not manifested by an increase in the success of the task (i.e., the percentage of correct responses) but by the activation of the sensorimotor processes required to solve the task. Therefore, unlike the immobilization procedure which showed the negative effect of an impoverished environment on the sensorimotor representation with the slowing of the sensorimotor processes induced by input/output restriction [START_REF] Meugnot | The embodied nature of motor imagery processes highlighted by short-term limb immobilization[END_REF][START_REF] Toussaint | Short-term limb immobilization affects cognitive motor processes[END_REF], the present experiment highlighted the positive effect of a stimulating environment. Enhancing sensory input by massage led to rapid updates to the sensorimotor representation that may be more effective or easier to access due to an increase in proprioceptive signals. Similar observations were previously reported specifically with a vibratory stimulation procedure that activates the sensorimotor-related area [START_REF] Naito | Kinesthetic illusion of wrist movement activates motor related areas[END_REF][START_REF] Romaiguere | Motor and parietal cortical areas both underlie kinaesthesia[END_REF]. The present experiment does not cast doubt on the peripheral effects of massage (on muscle stiffness and peripheral blood flow; [START_REF] Liu | Recent advances in massage therapy -a review[END_REF] but shows, for the first time in the literature, the cognitive (or central) effects of a massage procedure that activates the sensorimotor processes. 
Further experiments will be carried out to determine whether the positive effect of massage on the processing of sensorimotor information is accompanied by a significant improvement in functional balance performances.
Importantly, the comparison between posttests 1 and 2 in the body mental rotation task revealed that the positive effect of massage on the sensorimotor processes was also found 24 hours after the intervention. These findings are interesting because they showed that a single and brief massage session performed by a physical therapist improved the functioning of the sensorimotor system and revealed that the massage is still effective one day later. However, the present experiment did not provide information on the importance of the extent of the massage area on the activation of sensorimotor processes or the duration of the massage-effect as a function of the extent of the massage area. These points were specifically investigated in the following experiment.
Experiment 2
The second experiment aimed to replicate the positive effects of massage on sensorimotor processes, which were shown in Experiment 1, and investigate the importance of the extent of the massage area on the activation of sensorimotor processes, as well as the possibility that these peripheral activations can activate not only the sensorimotor representation of the massaged limb but also of the opposite limb. It is particularly important at the theoretical level but also at the practical level to know whether contralateral activation of the sensorimotor system could be observed following the massage of a specific limb. For example, a positive massage-induced effect on the contralateral limb could aid in the rehabilitation of an immobilized limb that is still in a cast because of a fracture by regularly reactivating the sensorimotor processes of the nonused limb.
Method
Participants
34 right-handed inpatients voluntarily participated in the experiment (mean age = 77 years, SD = 8.8 years). None of them participated in Experiment 1. They were hospitalized for diverse geriatric or neurogeriatric reasons (asthenia, general state alteration, falls, chronic obstructive pulmonary disease, depression, etc.) and were still at the hospital during the experiment. All inpatients were able to walk 10 meters in less than 30 seconds. They had normal or corrected-to-normal vision and provided written informed consent prior to their participation and inclusion in the study. Before testing, participants were randomly divided into 2 groups: a foot massage group (n = 17, mean age = 75 years, SD = 9.2 years, 6 males) and a foot-knee massage group (n = 17, mean age = 79 years, SD = 8.1 years, 7 males). The study protocol was in accordance with the ethical standards of the local ethics committee of the hospital center where the experiment occurred. All participants were naïve to the purpose of the experiment. However, they received written and verbal descriptions of the experimental procedure and signed consent forms indicating agreement to participate in the experiment.
Tasks and material
Participants performed 2 mental rotation tasks using either body or non-body stimuli.
Both tasks were similar to those used in Experiment 1.
Procedure
The participants were divided into 2 groups: a foot massage group and a foot-knee massage group. In the foot massage group, the massage was performed for 10 minutes on the right foot of each participant. In the foot-knee massage group, massage was successively performed for 5 minutes on the right foot and 5 minutes on the right knee of each participant. In both groups, the massage technique was similar to that used in Experiment 1 and consisted of effleurage and kneading and friction with application of moderate pressure. Massage was performed by the same physical therapist who took part in the first experiment.
Similar to Experiment 1, the body and non-body mental rotation tasks were performed during three experimental sessions: before (pretest), immediately after (posttest 1) and 24 hours after the massage (posttest 2). For each session, participants practiced 2 phases (the familiarization and the experimental phases), and the body mental rotation task was performed before the non-body mental rotation task.
Data analysis
Accuracy and response times were recorded and analyzed. Only data from correct responses were used to analyze response times. Separate ANOVAs were performed for the body and the non-body mental rotation tasks. For the body mental rotation task, ANOVAs were performed on accuracy (%) and response times (ms) with group (foot massage vs. foot-knee massage) as a between-subjects factor and session (pretest, posttest 1, posttest 2), foot (right vs. left) and rotation (0°, 40°, 80° and 120°) as within-subjects factors. For the non-body mental rotation task, ANOVAs were performed on accuracy (%) and response times (ms) with group (foot massage vs. foot-knee massage) as a between-subjects factor and session (pretest, posttest 1, posttest 2) and rotation (0°, 40°, 80° and 120°) as within-subjects factors. Post hoc comparisons were carried out with Newman-Keuls test. Alpha was set at .05 for all analyses.
Results
The body mental rotation task
ANOVA on the percentage of correct responses showed only a main effect of rotation (F 3,96 = 35.78, p < .0001, ŋ² p = .53). Post hoc comparisons revealed that correct responses were significantly more frequent for the 0°, 40° and 80° rotations (M = 95%; SD = 8%) than for the 120° rotation (M = 89%; SD = 13%, ps < .0001), regardless of the group, the foot and the session.
ANOVA on the response times showed main effects of session (F 2,64 = 17.98, p < .0001, ŋ² p = .36) and rotation (F 3,96 = 39.18, p < .0001, ŋ² p = .55), as well as a group x session interaction (F 2,64 = 3.24, p < .04, ŋ² p = .10). As illustrated in Figure 3 and confirmed by post hoc comparisons, response times significantly decreased from pretest to posttest 1 for both groups (ps < .01) and from pretest to posttest 2 (i.e., after a 24-hour delay) in the foot-knee massage group only (p < .001). No effect of the foot (right or massaged foot vs. left or non-massaged foot) was reported. To quantify how the massage improved the response times immediately after (in posttest 1) and 24 hours later (in posttest 2), we computed the Index of Performance Improvement (IPI = [response time in pretest - response time in posttest]/response time in pretest, expressed as a percentage) for each participant. A positive value indicated a performance improvement (i.e., a decrease in response time in the posttest), whereas a negative value indicated a performance deterioration (i.e., an increase in response time). IPI was analyzed by ANOVA with group (foot massage vs. foot-knee massage) as a between-subjects factor and posttest (posttest 1 vs. posttest 2) as a within-subjects factor. T-tests were used to examine whether the IPI significantly differed from zero.
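A small sketch of the index computation is given below, using the sign convention stated above (positive values mean faster post-test responses); the numerical values are illustrative only, not data from the study.

```python
def performance_improvement_index(rt_pretest_ms: float, rt_posttest_ms: float) -> float:
    """Index of Performance Improvement (%): positive = faster (improved) post-test."""
    return 100.0 * (rt_pretest_ms - rt_posttest_ms) / rt_pretest_ms

# Illustrative values only (not data from the study):
print(performance_improvement_index(1500.0, 1350.0))   # 10.0        -> improvement
print(performance_improvement_index(1400.0, 1450.0))   # about -3.6  -> deterioration
```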
The ANOVA revealed only a significant group x posttest interaction (F 1,32 = 3.91, p < .05, ŋ² p = .09). Post hoc comparisons revealed that the performance improvement from pretest to posttest 1 (i.e., immediately after the massage) was similar in both groups, whereas the performance improvement from pretest to posttest 2 (i.e., 24 hours after the massage) was better in the foot-knee massage group than in the foot-only massage group (p < .02; Figure 4). Moreover, in the foot massage group, the IPI was significantly smaller in posttest 2 than in posttest 1 (p < .05). T-test analyses revealed that the IPI was significantly different from zero in posttest 1 for the foot massage group (t 17 = 5.30, p < .0001) and the foot-knee massage group (t 17 = 5.11, p < .0001), as well as in posttest 2 for the foot-knee massage group (t 17 = 5.70, p < .0001), whereas the IPI did not differ from zero in the foot massage group (t 17 = 1.43, p = .17).
The non-body mental rotation task
ANOVA on the percentage of correct responses showed only a main effect of rotation (F 3,96 = 19.01, p < .0001, ŋ² p = .37). Post hoc comparisons revealed that correct responses were significantly more frequent for the 0° and 40° rotations (M = 93%; SD = 15%) than the 80° (M = 89%; SD = 18%, p < .001) and 120° rotations (M = 82%; SD = 23%, p < .0001), as well as the 80° compared to 120° rotation (p < .001).
ANOVA on the response times showed only a main effect of rotation (F 3,96 = 11.04, p < .0001, ŋ² p = .27). Post hoc comparisons revealed that response times were significantly lower for the 0° and 40° rotations (M = 1128 ms; SD = 150 ms) than for the 80° (M = 1263 ms; SD = 162 ms, p < .001) and 120° rotations (M = 1492 ms; SD = 206 ms, p < .0001), as well as for the 80° compared to the 120° rotation (p < .0001), regardless of the group and the session.
Discussion
The second experiment had 2 specific aims. First, it was carried out to examine whether the extent of the massage area (i.e., massage performed on the foot only or on the foot and the knee of the right leg) differentially impacted the reactivation of the sensorimotor processes.
Second, it was carried out to examine whether contralateral activation might be observed following massage performed for 10 minutes on the right leg (i.e., activation of the sensorimotor representation of the left leg not massaged by the physical therapist). With the exception of the massage, the pre/posttest experimental procedure was similar to Experiment 1. The main results of Experiment 2 revealed that, for the body mental rotation task, response times similarly decreased from pretest to posttest 1 in both groups (the foot massage group and the foot-knee massage group), but differences between groups appeared in posttest 2, with better improvement in response times following a 10-minute massage on both the foot and the knee. Importantly, the positive effects of massage were detected for both the right foot (the massaged foot) and the left foot (the non-massaged foot) regardless of the posttest (i.e., immediately after the massage and 24 hours later). No difference between groups appeared in the non-body mental rotation task.
The results of Experiment 2 confirmed those of Experiment 1, in particular the positive effect of a single and brief (10 minutes) massage session performed by an experienced physical therapist on the activation of the sensorimotor processes. Moreover, when the massage-induced effects were evaluated immediately after the massage session, no difference appeared as a function of the extent of the massage area. The participants actually took less time to identify the laterality of foot images when the task was performed after the massage in both groups (foot massage and foot-knee massage). However, the present experiment showed that the extent of the massage area was important when the effects of massage were assessed 24 hours later. In that case, the activation of the sensorimotor processes lasted longer when the massage area was more extensive. Therefore, activating a larger body representation via a massage procedure on both the foot and the knee during a brief session is better than concentrating the massage on a specific area (the foot only) during the same period. This suggests the need to diversify the location of the massage to stimulate the sensorimotor system in a sustainable manner.
In Experiment 2, massage performed on the right leg alone activated not only the sensorimotor representation of the right leg but also that of the opposite leg. The results showed that improvement in response times following the massage was similar regardless the laterality of the foot that needed to be identified. These findings revealed that, for a given effector system (the lower-limb in the present experiment), massage on one side of the body activated the sensorimotor representation of the opposite side. Similar results were previously observed following the restriction of input/output of signal processing via an immobilization procedure [START_REF] Meugnot | Selective impairment of sensorimotor representations following short-term upper-limb immobilisation[END_REF][START_REF] Toussaint | Short-term limb immobilization affects cognitive motor processes[END_REF], which resulted in a slowdown of the sensorimotor processes for both the immobilized and the non-immobilized effectors. In the same vein, other studies have reported a positive transfer of a learned task to the contralateral side of the body [START_REF] Vangheluwe | Learning and transfer of an ipsilateral coordination task: Evidence for a dual-layer movement representation[END_REF]. In all cases, it may be that the interhemispheric exchange of sensorimotor information was made possible by the corpus callosum, which is considered a central structure in hemisphere interconnection processes [START_REF] Franz | Dissociation of spatial and temporal coupling in the bimanual movements of callosotomy patients[END_REF]. In the present experiment, we clearly showed that the central consequence of the enhancement of sensory input by unilateral massage was the induction of both ipsilateral and contralateral activation of the sensorimotor system, which supports the existence of an effector-independent sensorimotor representation [START_REF] Vangheluwe | Learning and transfer of an ipsilateral coordination task: Evidence for a dual-layer movement representation[END_REF]. For physical therapists, the bilateral activation reported in the present experiment is very interesting because it demonstrates the possibility of using massage on one side of the body alone in case of contraindications, e.g., when an older adult is wearing a cast due to a fracture (of the hand, of the foot) or when he experiences severe pain with touch, limb manipulation on one specific side of his body.
Conclusion
The present 2 experiments offer support for the hypothesis that a single and brief massage session performed by an experienced therapist has functional effects. Massage activates the sensorimotor receptors that favor the functioning of the related sensorimotor system. The positive impact of massage is not only immediate but lasts up to 24 hours later. Importantly, the activation of the sensorimotor representation may partially depend on the extent of the area and may concern not only the massaged leg but also the contralateral leg.
Overall, these findings confirmed that peripheral activation via a massage had a specific impact on the sensorimotor processes and can help patients cope with the slowdown of the signal processing related to advancing age [START_REF] Saimpont | Aging affects the mental rotation of left and right hands[END_REF]2013). These findings highlighted the impact that massage therapies could have on geriatric care, in particular during programs for the prevention of falling or rehabilitation of autonomy. However, further studies should be carried out to examine whether the possibility of modulating the sensorimotor cortex activity by manipulating sensory inputs via massage is accompanied by improvements in motor function or produces specific improvements in rehabilitation (i.e., better motor performance for inpatients or quicker improvements). It may be that unlike immobilization, which induces a slowdown of sensorimotor processes and a deterioration in motor performance [START_REF] Bassolino | Functional effect of shortterm immobilization: kinematic changes and recovery on reaching-to-grasp[END_REF]Hubert et al., 2006), the massage-induced effect (i.e., activation of sensorimotor processes) is accompanied by an improvement in motor performance for older adults.
Finally, in the context of evidence-based practice, the present experiment showed that body mental rotation tasks can be an adequate tool to objectively assess the central improvement of motor function following various rehabilitation programs in patients suffering from sensorimotor system disturbances. Examining response times when identifying body-part stimuli allows the physical therapist to objectively assess the central effects of rehabilitation, i.e., whether specific sensorimotor representations of patients have improved or not, or whether further rehabilitation sessions or techniques are required to maximize the activation of the sensorimotor processes. Although further studies are needed, it is possible that the activation of the sensorimotor processes favors motor control in older adult inpatients.
Figure 1. Illustration of the stimuli used in Experiments 1 and 2 for the body mental rotation task.
Figure 2. Mean response times (ms) for the body mental rotation task as a function of group.
Figure 3. Mean response times (ms) for the body mental rotation task as a function of group (foot massage vs. foot-knee massage).
Figure 4. Index of Performance Improvement (%) as a function of group (foot massage vs. foot-knee massage). | 41,803 | [
"786323",
"17994"
] | [
"529521",
"199401"
] |
01770255 | en | [
"chim",
"scco"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01770255/file/SRENG%20et%20al%20version%20auteur.pdf | Leam Sreng
Brice Temime-Roussel
Henri Wortham
Christiane Mourre
Chemical identification of 'maternal signature odours' in rat
Keywords: odour preference, maternal signature, learning and memory, amniotic fluid, diet, rodents
Newborn altricial mammals need, just after birth, to locate their mother's nipples for suckling. In this precocious behaviour, including in the human baby, maternal odour perceived via the olfactory process plays a major role. Maternal odour emitted by lactating females or by amniotic fluid (AF) attracts pups, but the chemical identity of this attractant has not yet been elucidated. Here, using behavioural tests and gas chromatography coupled with mass spectrometry (GC-MS), we show that AF extracts from pregnant rats, as well as nipple, ventral skin, milk and nest extracts from mothers, contain 3-6 active substances. AF extracts contained 3 active compounds: ethylbenzene, benzaldehyde and benzyl alcohol; their mixture, in proportions similar to those found in AF extracts (a ratio of 1:1:12, 700 ng in total), attracts pups and thus constitutes putative maternal attractant substances (MAS). These 3 AF substances were also identified in milk, nipple, ventral wash and nest extracts of the mother, but not in feces. Moreover, anethole flavour incorporated in the diet of pregnant and lactating rats is also detected in AF, nipple, milk and nest extracts, and the pups are attracted to anethole odour, whereas pups from mothers not fed anethole are not. Maternal attractant substances, combined with diet flavours present in the AF bath, represent olfactory signals, or 'maternal signature odours' (MSO), that are learned by the foetus and pups. These findings open the way to improved understanding of the neurobiology of early olfactory learning and of the importance of this evolutionarily conserved survival behaviour in many mammal species.
Introduction
For the survival and adaptation process of the species, newborn mammals are confronted, shortly after birth, with the need to locate, grasp and suck from the nipple of their mother. This is particularly obvious for the neonate which at birth must be able to use and respond appropriately to a whole range of new stimuli. In this early adaptation, maternal odour, via the olfactory mechanisms, plays a major role in early behavioural development, such as orientation to maternal odour and suckling [START_REF] Cheal | Social olfaction: a review of the ontogeny of olfactory influences on vertebrate behavior[END_REF][START_REF] Rosenblatt | Olfaction mediated developmental transition in the altricial newborn of selected species of mammals[END_REF][START_REF] Varendi | Does the newborn baby find the nipple by smell?[END_REF][START_REF] Porter | Unique salience of maternal breast odors for newborn infants[END_REF][START_REF] Schaal | Mammary odor cues and pheromones: mammalian infant-directed communication about maternal state, mammae, and milk[END_REF][START_REF] Schaal | Chemical signals 'selected for' newborns in mammals[END_REF]. The role of olfactory signals in the mediation of mother-newborn interactions and precocious attachment has been demonstrated in several species of rodents [START_REF] Carter | Olfactory imprinting and age variables in the guinea-pig, Cavia porcellus[END_REF][START_REF] Noirot | Selective priming of maternal responses by auditory and olfactory cues from mouse pups[END_REF][START_REF] Schapiro | Behavioral response of infant rats to maternal odor[END_REF][START_REF] Gandelman | Olfactory bulb removal eliminates maternal behavior in the mouse[END_REF][START_REF] Leon | The development of the pheromonal bond in the albino rat[END_REF][START_REF] Moltz | Stimulus control of the maternal pheromone in the lactating rat[END_REF][START_REF] Wallace | The control and function of maternal scent marking the Mongolian gerbil[END_REF][START_REF] Devor | Attraction to home-cage odor in hamster pups: specificity and changes with age[END_REF][START_REF] Porter | Maternal pheromone in the spiny mouse (Acomys cahirinus)[END_REF][START_REF] Breen | Maternal pheromone: A demonstration of its existence in the mouse (Mus musculus)[END_REF][START_REF] Hudson | Pheromonal release of suckling in rabbits does not depend on the vomeronasal organ[END_REF]. In rat, olfaction is essential in early development [START_REF] Salas | Development of olfactory bulb discrimination between maternal and food odors[END_REF][START_REF] Gregory | Development of olfactory-guided behavior in infant rats[END_REF][START_REF] Singh | Effects of nasal ZnSO4 irrigation and olfactory bulbectomy on rat pups[END_REF].
Without maternal assistance, the neonate rat, although blind and deaf, can crawl under its mother's body, position itself, choose a free nipple, and suckle. Rat pups which have been rendered insensitive to olfactory cues by olfactory denervation with intranasal ZnS04 are unable to attach to the nipples of an anaesthetized lactating female and when housed with their own unanaesthetized mother, lose weight and often die [START_REF] Singh | Effects of nasal ZnSO4 irrigation and olfactory bulbectomy on rat pups[END_REF]. Rat pups are also highly dependent on odour cues from the dam's ventrum for nipple location and attachment [START_REF] Hofer | Evidence that maternal ventral skin substances promote suckling in infant rats[END_REF][START_REF] Teicher | Suckling in newborn rats: eliminated by nipple lavage, reinstated by pup saliva[END_REF]). An odorous attractive substance, released by a lactating female rat and impregnating a soiled litter [START_REF] Nyakas | Olfaction guided approaching behaviour of infantile rats to the mother in maze box[END_REF][START_REF] Salas | Development of olfactory bulb discrimination between maternal and food odors[END_REF][START_REF] Schapiro | Behavioral response of infant rats to maternal odor[END_REF][START_REF] Gregory | Development of olfactory-guided behavior in infant rats[END_REF][START_REF] Galef | Olfactory mediation of mother-young contact in Long-Evans rats[END_REF], as well as another substance called maternal pheromone [START_REF] Moltz | Stimulus control of the maternal pheromone in the lactating rat[END_REF], serves as a powerful attractant to the young and incites early walking in three-day-old neonatal rats [START_REF] Jamon | Early walking in the neonatal rat: a kinematic study[END_REF]. Moreover, the ventral zone of lactating mothers contains olfactory cues that orient rat pups to finding and attaching to nipples, but chemical washing of the nipple area eliminated this behaviour [START_REF] Hofer | Evidence that maternal ventral skin substances promote suckling in infant rats[END_REF][START_REF] Singh | Oxytocin reinstates maternal olfactory cues for nipple orientation and attachment in rat pups[END_REF]Blass and Teicher, 1980). The relationship of maternal ventral skin substances and maternal pheromone is not yet clear. The nature of these putative maternal attractant substances is as yet unknown. Do these olfactory signals performing different behavioural functions originate from the same substances?
Previous studies in many animal species showed that neonates are capable of perceiving and recognizing the odour profile of their amniotic fluid (AF). Prenatal exposure to odour substances introduced into the AF [START_REF] Pedersen | Prenatal and postnatal determinants of the 1st suckling episode in albino rats[END_REF] or ingested by their pregnant mother may influence responsiveness to those same cues after birth [START_REF] Hepper | The discrimination of human odour by the dog[END_REF][START_REF] Schaal | Olfaction in utero: can the rodent model be generalized?[END_REF][START_REF] Mennella | Prenatal and postnatal flavor learning by human infants[END_REF][START_REF] Wells | Prenatal olfactory learning in the domestic dog[END_REF]. Moreover, some newborn mammals are attracted to the odour of AF per se [START_REF] Schaal | Olfaction in utero: can the rodent model be generalized?[END_REF][START_REF] Logan | Learned recognition of maternal signature odors mediates the first suckling episode in mice[END_REF]. What are the respective roles of putative maternal attractant substances (MAS) and of odorant intake of AF?
The purpose of the present study is to assess the source and identify volatile active compounds of the 'maternal signature odours' (MSO) by bioassay and gas chromatography-mass spectrometry (GC-MS) techniques. We used direct nest contact and Y maze tests to assess attractive natural and synthetic substances with newborn rats and 12-14 day-old post-natal rats.
Moreover, a substance, anethole, was incorporated in pregnant and mother rats' diet to assess whether diet flavours could moreover generate a specific olfactory signal as in mice 'maternal signature odours' (MSO) [START_REF] Wyatt | Pheromones and signature mixtures: defining species-wide signals and variable cues for identity in both invertebrates and vertebrates[END_REF][START_REF] Wyatt | Pheromones and animal behavior. Chemical signals and signatures[END_REF][START_REF] Logan | Learned recognition of maternal signature odors mediates the first suckling episode in mice[END_REF]. Signature odours have been proposed to underlie kin recognition between ewe and lambs [START_REF] Wyatt | Pheromones and signature mixtures: defining species-wide signals and variable cues for identity in both invertebrates and vertebrates[END_REF][START_REF] Porter | Individual olfactory signatures as major determinants of early maternal discrimination in sheep[END_REF].
Previously described individual olfactory signatures act to enable a ewe to distinguish and identify her individual lambs [START_REF] Porter | Individual olfactory signatures as major determinants of early maternal discrimination in sheep[END_REF]). Here we show the presence of 3 active compounds of the MAS in AF, and the existence of chemosensory cues related to the mother's diet. These findings suggest that the foetus and pups learn and memorize a specific combination of volatile odours or 'maternal signature odours' (MSO), facilitating suckling just after birth.
Materials and methods
Animals and experimental design
Wistar rats (male, virgin female, gravid female and female with litters) (280-300 g) (Charles River Company, France) were housed with pine wood shavings and given food (SAFE, A04, route de St Bris, 89290 Augy, France) and water ad libitum at a constant temperature (22°C), under a 12h/12h light/dark cycle (lights on at 7:00 a.m.). Each gravid female was individually housed and the date of parturition (postnatal day 0, P0) was recorded. Male and female pups used were postnatal day 5 (P5) in direct nest contact tests and postnatal day 12-14 (P12-14) in Y maze tests.
The young rats were isolated from their mothers for at least 1 hour before the tests and put together in a box maintained at 24°C. Each pup was identified by a number marked on its back.
Pups were naïve to each stimulus and were tested only once for a given odour. For each test, pups came from 2-3 different litters. The behavioural tasks were completed in a single-blind design in an experimental room at 24°C. All experiments were carried out in accordance with the European Communities Council Directive (86/609/EEC). Care was taken to minimize the numbers of animals used and to maintain them in good general health. Aix-Marseille Université and the Centre National de la Recherche Scientifique (CNRS) approved the study.
Behavioural tests
Direct nest contact test. The test apparatus consisted of a white plastic pan (28 cm length x 18 cm width x 4 cm height), divided into two parts, and separated by about 1.5 cm midline space (Fig. 2A).
One side contained the nest wood shavings of a lactating female (~35 g) and the other side contained the nest wood shavings of a virgin female or clean nest shavings. Pups (P5), placed in the midline space, reacted to one type of nest shavings by snout probing, moving the head, crawling, or burying themselves in the nest material on the left or right side. The pup behaviour was recorded with a digital video camera (Canon PowerShot A95). The choice of a nest shaving was scored when pups stopped walking or moving and stayed on the same nest side for at least 5 s. The choice and the amount of time each pup spent over each type of nest shavings were recorded within a 60-second trial.
Y maze test
Apparatus. A plastic Y-maze olfactometer was composed of a start box (20 cm length, 6 cm width, 4 cm height) and two choice arms (12 cm length, with a removable transparent cover; 135° angle between the start box and the choice arms, 90° angle between the choice arms) (Fig. 2B). The end of each choice arm was delimited by a removable Plexiglas door with a hole (4 mm) in the middle. A stimulus box comprising a plastic cylindrical tube (6 cm length, 2.6 cm diameter) was positioned near the end of each choice arm. The choice arm and stimulus box were connected by a Teflon tube (2 cm x 4 mm in diameter) for delivery of the air and volatile substances. At the other end of the stimulus box, which was removable, a Teflon tube was attached as an air-flow inlet. The air flow used was 480 ml/min. At the junction of the choice arms, a central Plexiglas wall (4 cm x 4 cm) was placed 3 mm from the moving screen door. This wall enabled the two stimulus air streams to be conducted to the beginning of the start box where the pups were positioned.
Test procedure. The 3-min test began when a pup was placed in the start box, in front of the removable Plexiglas door. The door was then opened, enabling the pup to access the odour stimuli and make a choice. When the pup's snout passed a limit line, traced in each choice arm 2 cm behind the moving door of the start box, the choice was recorded; it was noted "+" when a stimulus was chosen. When pups did not pass through the line, a no-response was noted and the pup was excluded from the experiment. Every 5 tests, the stimuli were switched to the opposite choice arms. The olfactometer was cleaned with detergent, rinsed with distilled water and then with 95% ethanol, and dried. The performance was noted by an observer blind to the conditions of the experiments (in particular the nature of the tested compounds).
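The statistical treatment of these two-choice data is not described in this excerpt; as a hedged illustration only, a standard way to score such first-choice responses against chance (50%) is a binomial test, sketched below with invented counts (requires scipy >= 1.7 for binomtest).

```python
# Hedged sketch (not from the paper): test whether pups enter the odour arm
# more often than expected by chance in a two-arm choice.  Counts are invented.
from scipy.stats import binomtest

n_choosing_stimulus = 17      # pups that entered the odour arm (invented)
n_responding = 22             # pups that made a choice (no-responses excluded)

result = binomtest(n_choosing_stimulus, n_responding, p=0.5, alternative="two-sided")
print(f"{n_choosing_stimulus}/{n_responding} chose the stimulus, p = {result.pvalue:.3f}")
```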
Olfactory stimuli
Different crude materials, extracts (nipples, milk, AF, nest shavings, feces, diet), synthetic compounds (alone or in a mixture), and polar solvent as control were used as olfactory stimuli.
Crude materials -The stimulus boxes were filled with clean litter, feces or nest shavings of gravid, lactating or virgin females (34 ml), free of feces. For crude amniotic fluid (AF), a filter paper was impregnated with fluid (50 µl) and put in the stimulus box; clean filter paper was used as control.
Extracts -To determine the composition of the maternal chemosensory cues, we used an aprotic polar solvent (dichloromethane), because most of the substances in biofluids such as AF and milk are hydrophilic. Moreover, this polar solvent is able to extract both polar and apolar molecules, such as fatty acids.
Organ and tissue extracts (nipples, milk, skin, and amniotic fluid).
Three pregnant (18 days of gestation) and three lactating females (P4) from the normal and anethole groups were anaesthetized with ketamine (5%, Virbac, France) and medetomidine (1 mg/ml, Janssen), injected subcutaneously (0.33 ml/kg). 500 µl of milk were collected from the lactating females, pooled in a cooled vial and extracted in 1 ml of solvent. The nipples of gravid and lactating rats with the surrounding skin (approximately 2 mm 2 ) and skin from the mother's sides (approximately 4 mm 2 ) were removed and soaked in 3 ml of dichloromethane for 1 hr, after which the skin and nipples were removed from the solvent. Amniotic fluid was collected from the amniotic cavities, transferred to a vial at 4°C as a pooled sample and extracted in polar solvent at a ratio of 500 µl of AF per 1 ml of solvent for 1 hr. The milk and AF extracts were then vortexed for 1 min, centrifuged (10,000 rpm) for 5 min and only the solvent fractions were kept.
All extracts were evaporated to 0.5 ml with nitrogen stream, stored in the refrigerator at -20° C until used for behavioural tests or for GC-MS analysis.
Ventral wash extracts.
Each nipple and ventral surrounding tissue of three anaesthetized lactating females were washed 3 times with cotton swabs impregnated with polar solvent. Each cotton swab was transferred to a vial containing 3 ml of solvent and soaked for 1 hr. Then the cotton swab was removed from the solvent and the extracts were evaporated to a final 0.5 ml with nitrogen stream and stored in a freezer at -20°C prior to use.
Diet and feces extracts. 200 mg of broken chow or of feces were infused in 500 µl of polar solvent for 30 min, centrifuged (10,000 rpm) for 5 min, and only the solvent fractions were kept. These extracts were stored at -20°C until used.
Extracts and trappings of volatile substances of nests.
To obtain the attractive substances from nests, five nest shavings of lactating females (P7-8), free of feces, were used. The nest materials were placed in a 1-litre Erlenmeyer flask heated to 60°C in a water bath. An air stream (250 ml/min) passed through the nest shavings and carried the volatile odours into a clean U-shaped glass tube (3 mm diameter, 16 cm long), cooled with an ice-salt mixture as refrigerant. After a 30-min trapping period, the glass U tubes were rinsed twice with 1 ml of polar solvent. The extracts were pooled, concentrated to 1 ml under a nitrogen stream, and stored at -20°C until used for behavioural tests or GC-MS analysis.
Chemicals -All chemicals, including the 17 synthetic compounds considered as candidate active constituents, were purchased from Sigma-Aldrich (St-Quentin Fallavier, France). They were >98% pure as determined by GC-MS analysis and were used without further purification. Each compound was diluted in polar solvent to obtain an adequate quantity for behavioural tests or for chemical identification by GC-MS. After pilot studies, we used 3 different doses: 500, 800 and 1,000 ng in 1 µl of dichloromethane loaded on a filter paper for the Y maze test.
Chemical analysis -Gas chromatography-mass spectrometry (GC-MS).
The biological extracts and authentic compounds were analysed by gas chromatography-mass spectrometry (GC-MS), performed on a Thermo Finnigan PolarisQ ion trap mass spectrometer with a Trace GC equipped with a split-splitless injector. An analytical capillary column Thermo Trace TR-5-MS was used (30 m x 0.25 mm i.d., film thickness: 0.25 µm) with helium at 1 ml/min as carrier gas.
Analyses were performed with a 1 µL splitless injection at 250°C, with an injection time of 1 min.
The temperature program was as follows: 1 min at 50°C, then an increase at 5°C min-1 to 235°C, held for 20 min, then an increase at 15°C min-1 to 310°C, finally held for 5 min. The transfer line and manifold temperatures were 280 and 200°C, respectively. MS acquisition ranged from m/z 50 to 650 in full scan mode. Compound identification was based on comparison of retention times and mass spectra with those of authentic standards, as well as on mass spectra comparison with the NIST (National Institute of Standards and Technology) mass spectral reference library.
Anethole exposure
To expose the foetuses and newborns to an odorant during gestation and after parturition, we used anethole, a synthetic anisole compound. The novel flavour was incorporated into the drinking water (0.5 µg anethole/ml water) and the chow (1 µg/g chow) of three gravid and lactating females (anethole group). In addition, three gravid and lactating females were given food without anethole as control group. The nest shavings of lactating females (P7-8), free of feces, were used either for chemical extraction of active substances using the trapping technique or as attraction stimulus in the Y maze test.
Statistical analysis.
Differences in behavioural responses to stimuli were tested for statistical significance using GraphPad Prism6 software. The amount of time spent in the direct nest contact test was analysed by separate paired t-tests within each group, with individuals in a litter as the statistical unit of analysis. All analyses were performed on raw performance data, not percentages. Fisher's test was used to analyse performances in the direct nest contact and Y maze tests. P<0.05 was considered to indicate a significant difference.
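As an illustrative sketch only (the study itself used GraphPad Prism6, and "Fisher's test" is assumed here to mean Fisher's exact test on choice counts), the two analyses can be reproduced in Python with SciPy; the array values below are hypothetical placeholders, not data from this work:

```python
import numpy as np
from scipy import stats

# Hypothetical times (s) spent by the same pups over each nest type
# during a 60-s direct nest contact trial (paired design).
time_lactating_nest = np.array([38.0, 41.5, 29.0, 45.0, 33.5])
time_clean_nest     = np.array([12.0, 10.5, 18.0,  8.0, 15.5])
t_stat, p_paired = stats.ttest_rel(time_lactating_nest, time_clean_nest)

# Hypothetical Y maze choice counts: rows = stimulus arm / other arm,
# columns = lactating-nest odour vs. control odour.
choices = np.array([[18, 6],
                    [7, 17]])
odds_ratio, p_fisher = stats.fisher_exact(choices)

print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f}")
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
```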
Results
To identify the chemical composition of the MSO in rat, we developed consecutive stages to evaluate the attraction level between potential biological sources of active substances, to characterize the chemical compounds of these biological extracts, then to test the attraction power of identified molecules, and finally to investigate whether compounds in the diet can attract the pups (Fig. 1).
Attraction level between potential biological sources of active compounds
Two types of two choice behavioural tests were performed: the direct nest contact test and the Y maze test (Fig. 2A,B). In the direct nest contact test, postnatal day 5 (P5) pups were placed in the midline space between 2 types of nest shavings. The amount of time spent on the lactating female nest was higher than on clean or virgin female nests (Fig. 2C) (separate paired t-tests, P<0.05).
Significant preference for lactating female nest shavings was found in comparison to clean wood nests or virgin female nests, and no attraction difference was found between clean and virgin female nests (Fisher's, P<0.05)(Fig. 2D). These data indicate that the pups are attracted to the nest shavings of lactating females which are familiar to them and spend more time there. In addition, to refine this result, we performed Y maze tests wherein only the volatile substances emanating from substrates were submitted to the pups (P12-P14). The volatile substances of lactating female nests were more attractive for pups when they were opposed to clean, virgin female or male nest shavings (Fisher's, P<0.05)(Fig. 2E). The percentages of attraction responses for virgin nests in comparison with clean nest did not differ. We also tested the potential attraction of feces. No significant attraction of feces of lactating female was detected in comparison with that of male and of virgin female (Fisher's test, NS) (Fig. 2F). Only the lactating female nests contained attractive volatile substances that can be extracted in solvents for chemical analysis.
Characterization of the chemical compounds of biological extracts from gravid and lactating mothers
Volatile compounds of nest shavings of the lactating mother were extracted with polar (dichloromethane) and apolar (pentane) solvents. Newborns responded positively to mother's nest trappings extracted in polar and apolar solvents when paired with control solvents. As polar and apolar extracts elicit similar responses (Fig. 3A, Fisher's test, P<0.05), only the polar results were presented. To define the biological sources of these volatile attractive compounds, we investigated the effect of different tissue extracts from lactating female and from the AF of gravid female on P12-P14 pups. The volatile compounds of the nipple, ventral wash and milk from lactating females attract pups in the Y maze test in comparison with solvent as control (Fisher's, P<0.05) (Fig. 3B).
Pups also showed positive responses to nipples and crude AF of gravid rats as early as day E18 of gestation, and to AF extracts, when they were paired with solvent (Fisher's, P<0.05) (Fig. 3C). In our experiments, the mother's ventral area, nipples and milk and the pregnant female's AF were thus the sources of the MSO.
Then, gas chromatography-mass spectrometry (GC-MS) was used to identify specific or common molecules from the ventral area, nipple, milk and AF extracts, together with extracts of nests and feces of lactating females, and of the rodent diet (Figs. 1 and 4). We identified 37 compounds from these different extracts (Table 1). However, some compounds could not be tested or identified because of uninterpretable mass spectra, caused by the lack of synthetic substances or by concentration or co-elution problems. Among the recognized molecules, 12 compounds were present in the AF and milk, 2 in the AF but not in milk, and 8 in the milk but not in AF. Four compounds were common to the diet and milk and only one to the diet and AF (Table 1). Seventeen synthetic substances that could be potential MSO candidates were selected among the recognized compounds (Table 2). These compounds were used for the identification of the biological substances by comparison of their mass spectra. We showed that the spectra of the biological compounds found in the tissue extracts were identical to those of the synthetic compounds used in the Y maze (Fig. 5). Other molecules not tested in behavioural bioassays might be shown in the future to have a role in maternal signature odours.
Effect of characterized compounds in behavioural tests.
Only the behavioural tests in the Y maze allowed assessment of the biological properties of the volatile active compounds. Among the seventeen molecules, six compounds, compared to solvent, attracted pups in the Y maze test and are considered as MSO constituents: diacetone alcohol, ethylbenzene, benzaldehyde, benzyl alcohol, methyl salicylate and vanillin (Table 1, Fig. 6A) (Fisher's, P<0.05). The AF and nipples of gravid females, and the nipples and milk of mothers, contained 3 of these identified active substances in variable proportions: ethylbenzene (2-10%), benzaldehyde (3-10%) and benzyl alcohol (85-95%), presented with their mass spectra (Fig. 3b). These 3 active compounds could be produced by the gravid female and the mother themselves, whereas diacetone alcohol and vanillin seemed to be acquired from the diet. These two compounds, diacetone alcohol and vanillin, were undetectable in AF by our analysis system, but may be present as well, although in trace quantities. Methyl salicylate was not detected in AF and was not used in the blend (see below). These 6 attractive compounds were found associated with the mother's nipple, ventral wash and nest trapping extracts.
The adult rat urinary volatile compound 2-heptanone [START_REF] Osada | Profiles of volatiles in male rat urine: the effect of puberty on the female attraction[END_REF] was detected in nest trappings, but had no effect on pup attraction. Together, these findings suggest that the rat MAS was composed of at least ethylbenzene, benzaldehyde and benzyl alcohol, which are present in AF, nipples and milk. Moreover, in an attempt to define the natural MAS as a whole, we prepared a synthetic MAS blend with the 3 common active compounds found in the AF, nipples and milk. The blend was prepared in proportions similar to those found in the AF extracts, i.e. ethylbenzene, benzaldehyde and benzyl alcohol in a ratio of 1:1:12 (700 ng in total), according to the chemical analysis. In behavioural tests, pups showed a significant preference for the MAS blend in comparison with the control solvent (Fisher's, P<0.05) (Fig. 6B).
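In absolute amounts, the 1:1:12 ratio over a 700 ng total corresponds to fourteen parts of 50 ng each, i.e. approximately 50 ng of ethylbenzene, 50 ng of benzaldehyde and 600 ng of benzyl alcohol per test.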
Attractiveness of an odorant incorporated in pregnant and lactating mother's diet
To assess the potential role of odorant intake in the MSO, we followed the route of an odorant, anethole, a synthetic anisole compound incorporated in the diet of pregnant and lactating mothers. Using GC-MS analysis, anethole was detected in the AF, nipple, milk and feces extracts of pregnant females and mothers that consumed anethole. In Y maze tests, pups born from mothers who consumed anethole were attracted by this aromatic compound (Fisher's, P<0.05) (Fig. 6C). In contrast, no attraction was detected for pups born from mothers who did not consume anethole (Figs. 1 and 6C). This experiment shows that previous experience of an odour is sufficient to elicit pup attraction.
Discussion
The first aim of this study was to examine the maternal attractive substances from rat lactating females that attract pups for suckling and try to provide new insights concerning the controversial origin of these attractive substances. The behavioural tests demonstrated that nest, nipples, milk, ventral area of lactating females, amniotic fluid and nipples of pregnant females are effective for attracting pups, but not feces. These data confirmed previous behavioural studies showing that AF, mother's ventral area, nipples and milk played a major role in nipple orientation, attachment and suckling process in newborn mammals [START_REF] Hofer | Evidence that maternal ventral skin substances promote suckling in infant rats[END_REF]Teicher andBlass, 1976, 1977;[START_REF] Singh | Oxytocin reinstates maternal olfactory cues for nipple orientation and attachment in rat pups[END_REF]Blass and Teicher, 1980;[START_REF] Rosenblatt | Olfaction mediated developmental transition in the altricial newborn of selected species of mammals[END_REF][START_REF] Varendi | Does the newborn baby find the nipple by smell?[END_REF][START_REF] Porter | Unique salience of maternal breast odors for newborn infants[END_REF][START_REF] Beauchamp | Early flavor learning and its impact on later feeding behavior[END_REF][START_REF] Schaal | Mammary odor cues and pheromones: mammalian infant-directed communication about maternal state, mammae, and milk[END_REF][START_REF] Schaal | Chemical signals 'selected for' newborns in mammals[END_REF]. AF is also an olfactory attraction source for newborns of several mammal species, including man [START_REF] Schaal | Responsiveness to the odour of amniotic fluid in the human neonate[END_REF][START_REF] Varendi | Attractiveness of amniotic fluid odor: evidence of prenatal olfactory learning?[END_REF][START_REF] Porter | Unique salience of maternal breast odors for newborn infants[END_REF][START_REF] Logan | Learned recognition of maternal signature odors mediates the first suckling episode in mice[END_REF].
The second aim was to identify, for the first time, the chemical composition of the volatile MSO. The MSO is a multi-component mixture, composed of 3 to 6 compounds depending on the source: 3 substances from the amniotic fluid and nipples of pregnant females, 4 from the ventral skin, 5 from the milk and 6 from the nipples, ventral wash and nest trappings of lactating females. These six substances have never previously been identified in rats as MSO components. Multi-component pheromone mixtures have already been underscored in many animal species (for review, [START_REF] Wyatt | Pheromones and animal behavior. Chemical signals and signatures[END_REF]). In AF, benzyl alcohol is the major compound, with two minor substances, ethylbenzene and benzaldehyde. The level of positive behavioural reaction of the pups to the blend of these three compounds was very close to that for natural amniotic fluid. We did not find a synergy of the blend in attractiveness performance, as found for the nematode sex pheromone of Caenorhabditis elegans [START_REF] Pungaliya | A shortcut to identifying small molecule signals that regulate behavior and development in Caenorhabditis elegans[END_REF][START_REF] Forseth | NMR-spectroscopic analysis of mixtures: from structure to function[END_REF][START_REF] Lim | Common Carp Implanted with Prostaglandin F2α[END_REF][START_REF] Wyatt | Pheromones and animal behavior. Chemical signals and signatures[END_REF]. The positive behavioural performance level of the blend was similar to that for benzyl alcohol alone (500-800 ng). However, this mixture could be suggested as a blend identity of the rat species. These three individual compounds have been found as pheromone constituents in many animal species, especially insects (Pherobase, www.pherobase.com). Moreover, benzaldehyde and benzyl alcohol were identified in relatively abundant amounts among the human volatile metabolites of blood plasma collected from mother-infant pairs [START_REF] Stafford | Profiles of volatile metabolities in body fluids[END_REF] and in secretions of African wild dogs [START_REF] Wells | Prenatal olfactory learning in the domestic dog[END_REF] (Apps et al., 2012), but the behavioural role of these compounds remains unknown. Likewise, benzaldehyde is contained in rabbit milk [START_REF] Schaal | Chemical and behavioural characterization of the rabbit mammary pheromone[END_REF] and in human milk [START_REF] Hausner | Characterization of the Volatile Composition and Variations between Infant Formulas and Mother's Milk[END_REF], without having a role in the attraction of the neonate. This compound has been identified as a pheromone component of more than one hundred insect species (Pherobase).
Ethylbenzene has been identified in bovine urine [START_REF] Kumar | Chemical characterization of bovine urine with special reference to oestrus[END_REF] and in Coffea arabica, and has been used as a component of a synthetic blend for the coffee berry borer Hypothenemus hampei [START_REF] Mendesil | Semiochemicals used in host location by the coffee berry borer, Hypothenemus hampei[END_REF]. Furthermore, 2-heptanone, dimethyl sulfone and benzaldehyde have been identified as urinary volatile compounds of mouse and rat [START_REF] Zhang | Putative chemical signals about sex, individuality, and genetic background in the preputial gland and urine of the house mouse (Mus musculus)[END_REF][START_REF] Zhang | Sex-and gonad-affecting scent compounds and 3 male pheromones in the rat[END_REF], and phenol, 4-ethylphenol and indole [START_REF] Osada | Profiles of volatiles in male rat urine: the effect of puberty on the female attraction[END_REF] were also detected in our nest trapping, ventral wash or nipple extracts. However, these molecules were not attractive for rat pups. Dimethyl disulphide (DMDS), isolated from oestrous hamster vaginal secretion, is an attractant for male hamsters (Singer et al. 1975). In the rat, DMDS was shown to be effective in eliciting nipple grasping in 3-5 day-old pups, but with a low releasing potency (about 50%) relative to that of olfactorily intact nipples [START_REF] Pedersen | Olfactory control over suckling in albino rat[END_REF][START_REF] Schaal | Chemical signals 'selected for' newborns in mammals[END_REF]. DMDS was undetectable in the rat bioactive extracts by our chemical analysis system, whereas another sulfur compound, dimethyl sulfone, was identified in all bioactive extracts (Table 1) but was not attractive for newborn rats. Diacetone alcohol was identified in the diet and detected in nipples, milk, nest trapping or ventral lavage. This hydroxyketone has been identified in the anal gland of the ant Tapinoma simrothi pheonicium as an alarm pheromone [START_REF] Hefetz | Identification of new components from anal glands of Tapinoma simrothi pheonicium[END_REF] and in the temporal gland secretions of the Alpine marmot Marmota marmota as a component of a scent-marking substance (Bel et al., 1999). Vanillin, identified in nipples, ventral wash, nest trapping or faeces, and originating from the diet, is utilised by many insect species in their communication systems, and was also detected in human milk [START_REF] Buettner | A selective and sensitive approach to characterize odour-active and volatile constituents in small-scale human milk samples[END_REF]. Methyl salicylate, present at trace level in some rat extracts, is utilised by more than 80 insect species in their chemical communication systems (Pherobase). In addition, 2MB2, a volatile pheromone extracted from doe milk which facilitates suckling in young rabbits [START_REF] Schaal | Chemical and behavioural characterization of the rabbit mammary pheromone[END_REF], was not detected in the rat MSO. On the same principle as for the first suckling episode in mice (Logan et al. 2012), our data support the existence of 'maternal signature odours' rather than a classic pheromone driving a hardwired response to specific or predetermined chemosensory cues. Our experiment with an exogenous molecule (anethole) incorporated in the food, which is sufficient to elicit the behaviour in exposed pups (but not in no-anethole pups), further reinforces this view.
The identification of an efficient volatile rat MSO, present in the pregnant female's AF and then in the mother's nipples and milk, supports the hypothesis that the underlying learning and memory processes are activated in utero, as already suggested by [START_REF] Rosenblatt | Olfaction mediated developmental transition in the altricial newborn of selected species of mammals[END_REF], [START_REF] Porter | Unique salience of maternal breast odors for newborn infants[END_REF], [START_REF] Schaal | Mammary odor cues and pheromones: mammalian infant-directed communication about maternal state, mammae, and milk[END_REF] and [START_REF] Hepper | Long-term flavor recognition in humans with prenatal garlic experience[END_REF]. The relationship of the biological rat MAS with the intake of diverse dietary odorants during gestation establishes the specific chemical signature of individual mother cues, or 'maternal signature odours' (Logan et al. 2012). This association opens new avenues for further investigating the neural mechanisms underlying prenatal chemosensory learning, memory storage in the newborn brain and the resulting behavioural specificity. Unravelling the chemical signals that drive innate attraction and searching for suckling, nipple attachment, foetal learning and memory in a mammal species is a necessary step towards understanding the neuronal pathways involved in signal reception and neuromodulation in the formation of new odorant receptors. Such a system may confer survival advantages by promoting the acquisition of information about safe foods. How does the plasticity of the olfactory system operate in modifying its properties for specific environmental adaptation? Further dissection of the circuits engaged by this MSO should facilitate our understanding of the neurobiology of this evolutionarily conserved, innate social behaviour.
Figure Captions

Figure 1 / Overview of the consecutive experimental stages used to identify the maternal signature odours. C, Anethole, the synthetic aniseed flavour, given in the diet of gravid and lactating rats, affected the chemical composition of the amniotic fluid, milk and nipples of gravid and lactating females. Only pups from 'anethole' gravid rats showed a chemosensory preference for anethole; pups from 'no-anethole' gravid rats were not attracted by anethole.

Figure 2 / Behavioural effects on rat pups of different stimuli from nests and feces. A, schema of the direct nest contact test apparatus; B, schema of the Y maze olfactometer.

Figure 3 / Behavioural effects on rat pups of different stimuli from biofluids and tissue extracts.

Figure 4 / Representative gas chromatography-mass spectrometry profiles of extracts. Compounds are listed in order of retention time, and behaviourally active compounds are named on the GC peaks. Peaks labelled "x" correspond to contaminating compounds such as complex siloxane or phthalate compounds. RT 17.70-Anethole indicates the retention time of the synthetic anethole added to the diet of gravid females. Three identified compounds are common to the extracts: ethylbenzene, benzaldehyde and benzyl alcohol. Methyl salicylate was found in lactating female milk and nipples. Vanillin and diacetone alcohol, found in the milk and nipples of lactating females, were derived from the food. Anethole was found in all spectra when the female diet contained this substance.

Figure 5 / Representative spectra of biological and synthetic compounds common to different extracts.

Figure 6 / Behavioural effects of the identified compounds on pup attraction. (A) The 17 synthetic compounds tested: vanillin and diacetone alcohol derive from the food; three compounds, ethylbenzene, benzaldehyde and benzyl alcohol, were found in the AF, milk and nipples; and methyl salicylate was present in milk and nipples but not in AF. (B) In the Y maze, pups were attracted by a blend of the 3 synthetic compounds found in the AF (ethylbenzene, benzaldehyde and benzyl alcohol, in a ratio of 1:1:12, 700 ng in total); we suggest that this combination corresponds to the natural MAS. (C) Attraction of pups to anethole that was or was not introduced in the diet of the female during the pregnancy and lactation periods. Two experimental groups of pups were used: pups from gravid females receiving anethole (A) and pups from gravid females not receiving anethole (N). Only the "A" group pups were significantly attracted by anethole. This attraction suggests that food compounds, present in AF, contribute to the MSO and are involved in the prenatal learning and memory process. Numbers indicate the numbers of pups tested (*, P<0.05; Fisher's test).

Table 1: Compound identification by GC-MS from the different extracts of gravid and lactating females and from the diet. The peak number indicates the identified compounds on the GC profiles of the different extracts, in order of retention time (RT). MW: molecular weight. "x": compound identified in the corresponding extract. Bold and italic: attractive compounds present in AF, milk, nipples, ventral wash or nest but not in the diet; these substances potentially constitute the 'maternal signature odours'. Regular and italic: attractive molecules found in the diet. *: compounds tested in behavioural tests. RT 17.70-Anethole indicates the retention time of the synthetic anethole added to the diet of gravid females.

Table 2: Statistical results for the behavioural performance of the compounds (500 and 1,000 ng) present in the potential 'maternal signature odours'. n: number of pups performing the Y maze test. Fisher's test; P<0.05 was considered significant.
Acknowledgements. We thank V. Gilbert and E. Mansour for animal care, Ali Gharbi for assistance in the construction of the Y maze apparatus, B. Barascud for figures 2a,b, L. Pézard and N. Pech for statistics discussions, J. Gaudineau for providing some chemicals, M. Paul for correction of the English and four anonymous reviewers for their helpful comments.
Funding and Disclosure
The authors declare no conflict of interest. This work was supported by Aix-Marseille Université (AMU) and the Centre National de la Recherche Scientifique (CNRS). The authors report no biomedical financial interests or potential conflicts of interest.
"16960",
"872931"
] | [
"188653",
"220811",
"220811",
"199398"
] |
01022455 | en | [
"phys",
"spi"
] | 2024/03/05 22:32:16 | 2014 | https://hal.science/hal-01022455v2/file/Main-File-Pottier-et-al-HAL.pdf | Thomas Pottier
email: [email protected]
Guénaël Germain
Madalina Calamaz
Anne Morel
D Coupard
Sub-Millimeter measurement of finite strains at cutting tool tip vicinity
Keywords:
Introduction
To achieve high performance machining, it is of high interest to collect data about the chip formation mechanisms. They can help to understand and improve the integrity of the machined surface, e.g. roughness, geometrical dimensions, microstructure, residual stresses, deformation levels, as well as the wear resistance of the tool [START_REF] Mabrouki | A contribution to a qualitative understanding of thermo-mechanical effects during chip formation in hard turning[END_REF]. To obtain quantitative data concerning transient strain, strain rate and temperature fields in very small areas (the primary Z I , secondary Z II and tertiary Z III deformation zones), numerical simulations have been developed which enable these parameters and their evolution to be evaluated. The results only have meaning if suitable friction laws [START_REF] Bahi | A new friction law for sticking and sliding contacts in machining[END_REF][START_REF] Childs | Friction modelling in metal cutting[END_REF] and material models [START_REF] Calamaz | Numerical simulation of titanium alloy dry machining with a strain softening constitutive law[END_REF][START_REF] Hor | An experimental investigation of the behaviour of steels over large temperature and strain rate ranges[END_REF][START_REF] Shi | Identification of material constitutive laws for machining -part i: An analytical model describing the stress, strain, strain rate, and temperature fields in the primary shear zone in orthogonal metal cutting[END_REF] are used. They are currently validated merely by macroscopic data, such as cutting forces, chip morphology and tool/chip contact length. Thus, details on the behavior in the cutting zone are still lacking due to the difficulty, through experimentation, for industrial cutting parameters, to obtain local data that may link with the simulations. Direct measurements of deformation field parameters, such as velocity of material flow and strain, have mostly been performed by quick-stop devices (QSDs). The fundamental idea is to retract the tool from the workpiece [START_REF] Jaspers | Material behaviour in metal cutting: Strains, strain rates and temperatures in chip formation[END_REF] or to accelerate the workpiece to separate it from the tool [START_REF] Buda | New methods in the study of plastic deformation in the cutting zone[END_REF] as rapidly as possible, in order to freeze the process. [START_REF] Komanduri | On the mechanics of chip segmentation in machining[END_REF] investigated the mechanics of chip segmentation in machining of a ferritopearlitic steel using an explosive QSD. They found that the chip segmentation process arises as a result of instabilities in the cutting process. [START_REF] Jaspers | Material behaviour in metal cutting: Strains, strain rates and temperatures in chip formation[END_REF] obtained clear photographs of chip roots on an AISI 1045 steel. Following the shear strain representation proposed by [START_REF] Ernst | Chip formation, friction and high quality machined surfaces[END_REF][START_REF] Lee | The theory of plasticity applied to a problem of machining[END_REF] they predicted strain in the primary shear zone. 
Another way to estimate strains is to mark the material with micro-grids inscribed on the side of the workpiece by mechanical abrasion [START_REF] Childs | A new visio-plasticity technique and a study of curly chip formation[END_REF], chemical abrasion or photo-etching lithographic printing methods, as used by [START_REF] Pujana | In-process high-speed photography applied to orthogonal turning[END_REF]. The velocity and strain values are then estimated from the distortion of the micro-grids. [START_REF] Chaudhri | Subsurface deformation patterns around indentation in workhardened mild steel[END_REF] proposed a variation of the grid deformation technique, which could be used in machining, based on steel specimens containing pearlite 'stringers'. The stringers served as intrinsic internal grids whose distortion can be measured to estimate the strain. However, a major drawback of QSDs is the time necessary to interrupt the cutting process. Even though improvements have been made in device design [START_REF] Jaspers | Material behaviour in metal cutting: Strains, strain rates and temperatures in chip formation[END_REF], this delay can trigger a modification of the deformation state in the cutting zone. Moreover, experiments are very time-consuming, since reconstructing the complete formation sequence requires quick stops at various deformation stages.
An alternative to QSDs is the use of high-speed cameras. Indeed, improved performance in terms of acquisition frequency, frame rate, exposure time, image resolution and signal-to-noise ratio now allows working close to industrial cutting speeds. [START_REF] Pujana | In-process high-speed photography applied to orthogonal turning[END_REF] used such a technique on a 42CrMo4 steel marked with a square grid. They managed to analyze the deformed squares at a cutting speed of up to 300 m/min and deduced strains, and consequently strain rates, in the primary shear zone under the assumption of plane strain conditions. Nevertheless, at such a cutting speed, the capture apparatus is only able to render 1.5 images per segment and thus only provides a glimpse of a more complex process. More recently, [START_REF] List | Strain, strain rate and velocity fields determination at very high cutting speed[END_REF] proposed a method replacing the grids by flow line shape analysis. Four lines were drawn parallel to the cutting direction on the side of the workpiece using a mechanical scratching method. The velocity, strain rate and strain distributions were calculated from the mathematical expression of the streamline functions. Orthogonal cutting tests were conducted at up to 1020 m/min, but a high feed of 0.84 mm had to be selected to guarantee a sufficient spatial resolution and avoid an underestimation of strain. The use of Digital Image Correlation (DIC) has also been reported for analyzing the cutting zone. [START_REF] Hijazi | A novel ultra-high speed camera for digital image processing applications[END_REF] developed a complex dedicated device composed of four non-intensified digital cameras set in dual frame mode to perform a four-image acquisition at 1 MHz.
In the present paper, the DIC technique has also been chosen. It relies on a single high-frame-rate imager and optical microscopy. Such an experimental apparatus allows strain to be monitored at several instants during the chip formation process. The following therefore first presents the experimental procedure, with details on the cutting configuration and the observation device. Then the DIC post-processing is described and its accuracy is discussed. A following section is dedicated to the numerical estimation of strain from the displacement fields. Finally, the obtained results are presented and discussed, and mechanisms of chip formation are proposed based on these experimental data.
Experimental setup
Material and cutting characteristics
The cutting configuration is chosen to be orthogonal. Cutting tests are performed on a planer machine GSP 2108 R.20, which allows the cutting velocity to be varied between 6 and 60 m/min. Uncoated carbide inserts with a tool holder manufactured for the planer machine (rake angle: 0°) are used. The tool is fixed and the cutting speed is applied to the machine table on which the workpiece is set up, along a direction perpendicular to the cutting edge (orthogonal cutting operation). The workpiece moves perpendicular to the observation axis.
The width of cut (w = 3 mm) is given by the dimensions of the Ti6Al4V workpiece (100 × 60 × 3 mm). A cutting depth of f = 0.25 mm is selected. This choice results from a balance between the need for a large shear zone (lower magnification required) and the need to limit out-of-plane effects (optical depth of field). The nominal cutting speed is set to V_c = 6 m/min. These cutting parameters are expected to generate serrated chips (see Fig. 1-c) in a material well known for segmenting even at low cutting speeds [START_REF] Komanduri | On the mechanics of chip segmentation in machining[END_REF][START_REF] Shaw | Machining titanium[END_REF].
Optics and field measurements
One of the main improvements of the present study is the use of high magnification optics allowing the capture of sub-millimeter zones of interest. For instance, in [START_REF] Gnanamanickam | Direct measurement of large-strain deformation fields by particle tracking[END_REF] the use of a 3X optical microscope allows a pixel size of 8.2 µm at 250 fps; in [START_REF] Guo | Deformation field in largestrain extrusion machining and implications for deformation processing[END_REF] the use of a high definition camera and a similar magnification allows a pixel size of 1.4 µm at 2000 fps. The work of [START_REF] Pujana | In-process high-speed photography applied to orthogonal turning[END_REF] proposes the use of a 12X video-microscope for a pixel size of 4.9 µm at 25,000 fps. Finally, [START_REF] Hijazi | A novel ultra-high speed camera for digital image processing applications[END_REF] proposes the use of 4 standard CCD sensors triggered successively in order to obtain 4 images with an inter-frame time of 0.001 ms (equivalent to 1 Mfps) at very high magnification (pixel size: 0.27 µm). However, monitoring the chip formation process requires a complete sequence of images. Therefore, the proposed capture process relies on a Photron Fastcam APX-RS camera.
The spatial resolution is a critical parameter when investigating large strain gradients, especially when localization phenomena are involved. The use of a high speed camera limits quite drastically the number of measured pixels. Subsequently, the link between frame rate and frame resolution is responsible for limiting the maximum cutting speed of an observation. The choice of the capture parameters is governed by the need to obtain several images of one segment formation. In order to keep a sufficient spatial resolution, the acquisition is set at f = 18,000 fps with an exposure time of 6.6 µs. This setting ensures a resolution of 384 × 352 pixels.
A tunable optical video-microscope is used and its magnification (M_t) is set to 35X. The field of view is then 650 × 600 µm, with a pixel size of 1.65 µm. With such settings, a cutting speed of V_c = 6 m/min provides about 45 images of one segment formation (assuming that the width of a segment can roughly be approximated by f = 0.25 mm). One should also notice that such an optical apparatus is responsible for optical aberrations of unknown nature. Under this condition, one should be careful in the use of DIC when estimating the correlation between a reference (undeformed) image and a deformed one that has undergone significant displacement within the aberration field. Accordingly, iterative correlation should be preferred (see the Digital Image Correlation section below).
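This figure follows directly from the capture settings: at V_c = 6 m/min ≈ 100 mm/s, the tool travels one segment width of roughly f = 0.25 mm in 2.5 ms, which at 18,000 fps corresponds to about 45 frames.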
Lighting the scene is a non-trivial issue and deserves to be discussed further here. The presented images are captured using a 200 W halogen light source and an optic-fiber light guide. Set at full power, this device is not sufficient to capture images with an exposure time of 6.6 µs. Actually, the beam splitter embedded within the video-microscope is responsible for a significant loss of the incoming light flux (indeed half of it). Therefore the light guide was positioned outside of the microscope tube, about 5 mm away from the observed scene. The approximate angle of incidence is 30°. Nevertheless, lighting remains one of the main constraints in high-speed imaging, especially under microscopic conditions where over-exposure and/or under-exposure may become significant issues.
Digital Image Correlation Calculation parameters
The whole captured sequence is made of 2568 frames representing the formation of 52 chip segments. However, only one segment formation has been selected for post-processing purpose. Hence, 45 images are investigated. Each image is postprocessed through Digital Image Correlation (DIC) in order to retrieve the incremental displacement field. For this purpose, each image is split in square elements that create a virtual grid upon the sample surface (denoted M 0 = {x 0 ; y 0 }). The resolution of this grid (the gauge length) is set to 10 × 10 pixels, corresponding to 17 × 17µm 2 .The correlation process consists in looking for the most probable deformed pattern in the neighborhood of each node of this grid in terms of grey level. The displacement fields of each element are then assessed by the means of a bi-linear interpolation. Finally, the displacement at image n is assessed by correlating the n th image with the previous image (n -1) th . A zero-mean formulation of the correlation parameter is used in order to prevent from error due to non-constant lighting. The correlation parameter used in the present study are summarized in Table 1. Calculations are performed using 7D software [START_REF] Vacher | Bidimensional strain measurement using digital images[END_REF]. The choice of incremental correlation relies on several considerations. Firstly, as mentioned above, the optical aberration leads to a significant error when large displacements are involved ( [START_REF] Pottier | Study on the use of motion compensation techniques to determine heat sources. Application to large deformations on cracked rubber specimens[END_REF]) and comparing image n and n -1 instead of n and 1 decrease such displacements. Secondly, in presence of very significant strains, the likeness of image no.1 and n is poor and the correlation process cannot return information over the whole correlation grid M 0 . In other words, the material fed in the right hand side of image n has no counter part in image 1 for it is still out of the frame at that time.
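For illustration, a minimal sketch of the incremental, zero-mean matching step is given below (a brute-force, integer-pixel Python version; the actual processing was performed with the 7D software, which additionally provides the sub-pixel interpolation and bi-linear element description used in the study):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else -1.0

def increment_at_node(img_prev, img_next, x, y, half=5, search=8):
    """Integer-pixel displacement increment (dUx, dUy) of the subset centred
    at grid node (x, y), found by maximizing the ZNCC between frames n-1 and n."""
    ref = img_prev[y - half:y + half, x - half:x + half]
    best_score, best_uv = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cur = img_next[y + dy - half:y + dy + half, x + dx - half:x + dx + half]
            if cur.shape != ref.shape:   # skip candidates falling outside the frame
                continue
            score = zncc(ref, cur)
            if score > best_score:
                best_score, best_uv = score, (dx, dy)
    return best_uv
```

Looping this matcher over the 10 × 10 pixel nodes of the virtual grid for every consecutive image pair yields the displacement increments used in the strain calculation described below.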
Displacement uncertainties
Estimating the measurement uncertainties of speckle-related DIC techniques is known to be a complex task. Many studies have addressed this field, and the point of the present paper is not to discuss this matter any further. However, the poor quality of the obtained images (low resolution, optical aberration, camera measurement noise, unpainted surface) gives rise to the question of the accuracy/noise of the DIC-obtained displacement fields. Among the several parameters impacting DIC accuracy, speckle quality and pattern size are often identified as the most critical ones [START_REF] Bornert | Assessment of digital image correlation measurement errors: Methodology and results[END_REF][START_REF] Pan | Study on subset size selection in digital image correlation for speckle patterns[END_REF].
The benchmark speckles generated numerically in [START_REF] Bornert | Assessment of digital image correlation measurement errors: Methodology and results[END_REF] offer a fast way to qualitatively assess a given speckle. Indeed, the estimation of the autocorrelation of the texture allows a comparison with the three proposed speckles. Fig. 2-a) shows the texture and the autocorrelation surface of a 10 × 10 pattern. Fig. 2-b) depicts a comparison of the speckle under investigation with those proposed in [START_REF] Bornert | Assessment of digital image correlation measurement errors: Methodology and results[END_REF]. Hence, it can be seen that the obtained texture exhibits a sufficient quality to ensure a proper DIC (though between standard and coarse).
Another approach to evaluating speckle quality is the use of rigid body motion [START_REF] Triconnet | Parameter choice for optimized digital image correlation[END_REF]. A non-deformed zone of the image is investigated over 5 successive images and the displacement norm is monitored. The average displacement in the zone is then subtracted. Subsequently, the residual stands for the noise generated by speckle imperfection. Like any random variable, it can be characterized by its standard deviation. Fig. 2-c) shows that for the chosen pattern size (10×10) the displacement uncertainty can be assumed to be about 0.0181 pixels ≈ 0.031 µm (i.e. a signal-to-noise ratio of 165 in the non-deformed zone).
Finally, the standard deviation of the measured displacement can also be estimated through the Mean Intensity Gradient (MIG) of the speckle pattern [START_REF] Pan | Mean intensity gradient: An effective global parameter for quality assessment of the speckle patterns used in digital image correlation[END_REF]. It is shown that, under some assumptions, the random error is given by:

\mathrm{std}(u) \approx \frac{\sqrt{2}\,\sigma}{N \times \delta_f} \qquad (1)

where \sigma is the standard deviation of the measurement noise, N the subset size and \delta_f the MIG. In the present study the MIG equals \delta_f = 9.22 and the noise (\sigma = 1.025) is evaluated from the subtraction of two consecutive images captured prior to any motion of the sample. Hence the standard deviation of the measured displacement is approximated by \mathrm{std}(u) \approx 0.0157 pixels \approx 0.027 µm, which is consistent with the rigid motion approach.
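As an illustration, Eq. (1) can be evaluated directly from the images; the sketch below is written under the assumptions stated above, and the √2 correction on σ (which assumes independent noise in the two static frames) can be dropped to reproduce a raw difference-based estimate:

```python
import numpy as np

def mean_intensity_gradient(img):
    """MIG: mean modulus of the grey-level gradient over the image (delta_f)."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt(gx ** 2 + gy ** 2))

def predicted_displacement_std(img, static_1, static_2, subset_size=10):
    """Random error of the measured displacement predicted by Eq. (1)."""
    diff = static_1.astype(float) - static_2.astype(float)
    sigma = diff.std() / np.sqrt(2.0)          # per-image noise level (assumption)
    delta_f = mean_intensity_gradient(img)
    return np.sqrt(2.0) * sigma / (subset_size * delta_f)
```

Plugging the values quoted above (σ = 1.025, δ_f = 9.22, N = 10) directly into Eq. (1) indeed returns ≈ 0.0157 pixel.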
Out-of-plane motion
Unlike the work presented in [START_REF] Gnanamanickam | Direct measurement of large-strain deformation fields by particle tracking[END_REF], the cutting geometry does not impose constraint on the observed side of the chip. It means that the chip is free to swell along the optical axis of the microscope and therefore impose accounting for the out-of-plane deformation in the sheared section. The magnitude of the out-of-plane deflection is estimated from either SEM-MEX technique (stereoscopic 3D reconstruction Fig. 3a) and chromatic confocal surface metrology (Altimet 500). The approximated magnitude of the out-of-plane deflection is ∆z ≈ ±35µm for both measurements. However, one should notice that this measurement is performed post-mortem and does not account for the elastic deformation of the chip which is then neglected.
Several studies have addressed the estimation of DIC errors due to out-of-plane motion, and theoretical models enable the assessment of the errors arising from such motion. The work of [START_REF] Sutton | The effect of out-of-plane motion on 2d and 3d digital image correlation measurements[END_REF] proposes both a single lens model and a telecentric model to account for the virtual magnification of objects moving along the optical axis. The nature of the objective used in this study (infinity-corrected plan-apochromatic) prevents the use of any of these models. Hence, the error estimation has been performed by experimental means. A planar speckled plate was moved along the optical axis and a picture was captured every 10 µm (within a range of -40 µm to +40 µm from the focus plane). DIC was then used to estimate the virtual strains due to out-of-plane motion. Fig. 3 shows that the error in terms of strains is bounded by ε_max ≈ 0.0055. Since the swelling of the chip is not homogeneous over the chip segment, some regions are more affected by this error than others. However, the small magnitude of this error compared to the estimated strains (see the Results and discussion section) leads to considering it as negligible in the presented study case. One should notice that this error source can be minimized by the use of telecentric optics, as discussed in [START_REF] Sutton | The effect of out-of-plane motion on 2d and 3d digital image correlation measurements[END_REF] and [START_REF] Pan | High-accuracy 2d digital image correlation measurements with bilateral telecentric lenses: Error analysis and experimental verification[END_REF].
Strain Calculation
Finite strain Framework
The main difficulty in estimating displacement and strain fields arises from the use of incremental correlation, which disables the tracking of a given material point. For every image, the increment of displacement is given at the undeformed/reference locations as

\Delta\mathbf{U}(x_0, y_0, k) = \left\{ \Delta U_x(x_0, y_0, k) \; ; \; \Delta U_y(x_0, y_0, k) \right\}, \qquad (2)
where (x_0, y_0) are the grid coordinates (identical for every image pair) and k is the image number. This information alone does not enable the estimation of strains (whether in the initial or final configuration), since material tracking cannot be achieved and cumulating strain increments evaluated at different locations is not valid. The evaluation of the deformed coordinates {x_k; y_k} at every step/image of the deformation process is therefore performed through triangular bi-linear interpolation.
x_k = x_{k-1} + \Delta U_x(x_{k-1}, y_{k-1}, k), \qquad y_k = y_{k-1} + \Delta U_y(x_{k-1}, y_{k-1}, k) \qquad (3)
with
\Delta U_x(x_{k-1}, y_{k-1}, k) = \sum_{i=1}^{3} \phi_i(x_{k-1}, y_{k-1}) \, \Delta U_x(x_0, y_0, k), \qquad \Delta U_y(x_{k-1}, y_{k-1}, k) = \sum_{i=1}^{3} \phi_i(x_{k-1}, y_{k-1}) \, \Delta U_y(x_0, y_0, k) \qquad (4)

where the \phi_i are the classical triangular shape functions. The knowledge of the displacement increments offers straightforward access to the velocity fields V_x and V_y: assuming a constant capture rate of the images, it follows that V_x = \Delta U_x and V_y = \Delta U_y. The components of the strain rate tensor are then:
D_{xx}(x_k, y_k, k) = \frac{\partial V_x(x_k, y_k, k)}{\partial x_k}, \qquad D_{yy}(x_k, y_k, k) = \frac{\partial V_y(x_k, y_k, k)}{\partial y_k}, \qquad D_{xy}(x_k, y_k, k) = \frac{1}{2}\left(\frac{\partial V_y(x_k, y_k, k)}{\partial x_k} + \frac{\partial V_x(x_k, y_k, k)}{\partial y_k}\right) \qquad (5)
The cumulated displacement at material point location is then used to calculate the updated Lagrangian strain tensor as:
E(x_k, y_k, k) = D(x_k, y_k, k) + \sum_{m=1}^{k-1} \sum_{i=1}^{3} \phi_i(x_k, y_k) \, D(x_m, y_m, m) \qquad (6)
where 1 < k < N_f and N_f is the number of frames in the sequence. E is then the cumulated strain with respect to the initial configuration, expressed at the current material point locations.
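A simplified sketch of Eqs. (3)–(6) is given below (illustrative only: it replaces the triangular shape functions by a regular-grid interpolator, and assumes the increment fields of each image pair are available on the fixed reference grid):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def cumulate_strain(dU_list, grid_x, grid_y, x0, y0):
    """dU_list: sequence of (dUx, dUy) increment fields, each of shape
    (len(grid_y), len(grid_x)). x0, y0: arrays of initial node coordinates.
    Returns tracked positions and cumulated strain components."""
    xk, yk = x0.astype(float).copy(), y0.astype(float).copy()
    Exx = np.zeros_like(xk); Eyy = np.zeros_like(xk); Exy = np.zeros_like(xk)
    dx, dy = grid_x[1] - grid_x[0], grid_y[1] - grid_y[0]
    interp = lambda F: RegularGridInterpolator((grid_y, grid_x), F,
                                               bounds_error=False, fill_value=0.0)
    for dUx, dUy in dU_list:
        # Strain-rate (increment) components on the grid, Eq. (5)
        dUx_dy, dUx_dx = np.gradient(dUx, dy, dx)
        dUy_dy, dUy_dx = np.gradient(dUy, dy, dx)
        Dxx, Dyy, Dxy = dUx_dx, dUy_dy, 0.5 * (dUy_dx + dUx_dy)
        pts = np.stack([yk.ravel(), xk.ravel()], axis=-1)
        # Accumulate the increments interpolated at the tracked points, Eq. (6)
        Exx += interp(Dxx)(pts).reshape(xk.shape)
        Eyy += interp(Dyy)(pts).reshape(xk.shape)
        Exy += interp(Dxy)(pts).reshape(xk.shape)
        # Update the material point positions, Eqs. (3)-(4)
        xk_new = xk + interp(dUx)(pts).reshape(xk.shape)
        yk_new = yk + interp(dUy)(pts).reshape(xk.shape)
        xk, yk = xk_new, yk_new
    return xk, yk, Exx, Eyy, Exy
```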
Modal derivation
The major issue related to the implementation of Eq. (6) comes from the noisy nature of the displacements obtained by DIC approaches, which leads to strong noise amplification when they are differentiated. Such numerical instabilities prevent the direct differentiation of displacements when the latter exhibit a poor signal-to-noise ratio (SNR), e.g. natural speckle, small spatial resolution, inconsistent lighting, strain localization. All of these problems arise with the combined use of high-speed photography and high magnification. Hence, a filtering method should be considered and implemented.
In the present study, a modal projection method is used [START_REF] Pottier | A new filtering approach dedicated to heat sources computation from thermal field measurements[END_REF]. This method consists in projecting the displacement fields onto a modal basis and building the strain fields from a weighted sum of the spatial derivatives of the modes. The displacement increment is written as:
V_x(x_k, y_k, k) = \sum_{p=1}^{N} \lambda_p(k) \, Q_p(x_k, y_k) \qquad (7)
where N is the truncation order (the number of modes used for reconstruction), here set to 350, \lambda_p the modal coordinates and Q_p the modal vectors. The latter are evaluated here from structural dynamics (vibration) theory. Indeed, prior to any calculation, the basis B = \{Q_1, Q_2, \ldots, Q_N\} is computed by solving the classical dynamics eigenvalue problem:
\left(\mathbf{M}^{-1}\mathbf{K} - \frac{1}{\omega_i}\,\mathbf{I}\right)\mathbf{Q}_i = 0, \qquad (8)
where \mathbf{M} is the mass matrix, \mathbf{K} the stiffness matrix and \omega_i the modal pulsation. The problem is solved using a finite element discretization over a mesh built with one node per pixel of the captured image.
Once the velocity fields are projected onto B, the linearity of the derivative operator gives:
D_{ij}(x_k, y_k, k) = \frac{\partial V_i}{\partial x_j} = \sum_{p=1}^{N} \lambda_p(k) \, \frac{\partial Q_p(x_k, y_k)}{\partial x_j} \qquad (9)
This approach, though uncommon in strain derivation applications, has proved its worth in various noise related problems [START_REF] Chrysochoos | An infrared image processing to analyse the calorific effects accompanying strain localisation[END_REF][START_REF] Wang | Shape features and finite element model updating from full-field strain data[END_REF][START_REF] Le Goic | Multi scale modal decomposition of primary form, waviness and roughness of surfaces[END_REF].
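A condensed sketch of the projection and derivation steps of Eqs. (7) and (9) is given below (illustrative: the modal basis and its spatial derivatives are assumed to have been pre-computed, e.g. from the finite element eigenvalue problem of Eq. (8), and are passed in as column-stacked arrays):

```python
import numpy as np

def modal_filter_and_derive(V, Q, dQdx, dQdy, n_modes=350):
    """V: measured velocity field flattened to shape (n_points,).
    Q, dQdx, dQdy: modal vectors and their x/y derivatives,
    shape (n_points, n_available_modes).
    Returns the filtered field and its smoothed spatial derivatives (Eq. 9)."""
    Qn = Q[:, :n_modes]
    # Modal coordinates lambda_p(k) of Eq. (7), obtained by least-squares projection
    lam, *_ = np.linalg.lstsq(Qn, V, rcond=None)
    V_filtered = Qn @ lam
    dVdx = dQdx[:, :n_modes] @ lam
    dVdy = dQdy[:, :n_modes] @ lam
    return V_filtered, dVdx, dVdy
```

Applied to V_x and V_y of each frame, the derivative outputs combine as in Eq. (5) into noise-filtered strain-rate fields; the truncation order (350 modes here) sets the trade-off between smoothing and spatial resolution.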
Results and discussion
Cumulated strain in a segment
Fig. 4 shows the shape of the displacement and principal strain fields during chip segment formation. Adiabatic shear bands (ASB) are clearly highlighted by the significant strain magnitudes (up to 3) reached at these locations. In addition, it can be noticed that the inner part of the segment does not deform significantly (columns c) and d), image no. 45).
Moreover, the actual cutting speed can be evaluated from the displacement increments measured in an undeformed zone of the image. For this purpose the displacement norm within the lower right side of the image (see Fig. 4-a) is averaged and gives V_c ≈ 5.63 m/min, which is slightly lower than the nominal value (V_c = 6 m/min).
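As an order-of-magnitude check with the nominal pixel size (1.65 µm) and capture rate (18,000 fps), V_c ≈ 5.63 m/min ≈ 94 mm/s corresponds to an average increment of about 5.2 µm, i.e. roughly 3.2 pixels, per frame in the undeformed zone.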
As mentioned above, the measured strains are significant. However, a close monitoring of the image sequence shows the appearance of a localized black line (Fig. 5-a). In order to determine the nature of this phenomenon, complementary SEM investigations have been carried out. As shown in Fig. 5b, a material disjunction (or crack) appears on the free side of the chip.
The fact that, at some point of the deformation process, the material splits into segments leads to a reconsideration of the strain magnitude assessment. The DIC process as presented in this paper keeps cumulating strains even though the material undergoes segmentation. Hence, the final strain magnitude within the ASB is not relevant as such from a continuum mechanics standpoint. The values (up to 3) depicted in Fig. 4-cd) are indeed greatly overestimated. It is therefore required to monitor the onset of this material disjunction.
For this purpose, the study of the strain rates within the ASB, as depicted in Fig. 6, provides a straightforward way to monitor the crack propagation. From the curves presented in Fig. 6-b) it is noticeable that a segment is formed in three successive stages.
Stage 1 -The strains are concentrated in the tooltip vicinity. The inner part of the chip slowly deforms (Fig. 4-d). During this stage an out-of-plane swelling of the segment is clearly visible; it is fully developed around image no. 30.
Stage 2 -The strain rapidly cumulates along the ASB from bottom to top. The strain rates reach significant magnitudes (up to 4×10³ s⁻¹) and lead to material failure (this latter point is specifically discussed below).
Stage 3 -During the last images, the strain rates stabilize; the segment is fully formed and slips on both the cutting face and the next segment (still to be formed). Indeed, once material failure has occurred, the segment essentially undergoes a rigid body motion. At this point, cumulating strains is no longer valid from a continuum mechanics standpoint.
Fig. 6-b also shows that the time spent in stage 2 becomes shorter when moving from point 1 to point 3. In other words, the material failure accelerates from the tooltip to the free surface. Considering the decrease of the resisting cross-section, this result is indeed expected. Fig. 7 shows the last measured state of strain before material failure (at the end of stage 2) for 19 points along the ASB. It can be seen that the deformation process that leads to failure differs slightly from one end of the ASB to the other. Different strain paths are observed and can be sorted into three categories:
Zone Z_IA: Near the tooltip, the ASB deforms mainly in compression, with significant magnitudes. This process is slow and starts very early in the sequence (Fig. 6-b, point P_1 and Fig. 7-b).
Zone Z_IB: Moving away from the tooltip, the ASB deforms in compression/shear but with lower magnitudes. The strain rate curves in Fig. 7-c show an acceleration of the phenomenon: the material is less deformed at failure in this zone but reaches it faster.
Zone Z_IC: Finally, near the free surface, the state of strain turns into pure shear. At this time of the sequence the cutting speed reaches a cycle maximum. It is noticeable that the resisting cross-section tends toward its minimum at the same time. It seems plausible that the two observations are related; however, only a rigorous measurement of the cutting force could validate this hypothesis.
The study of the strain rates enables the crack tip to be followed as it propagates along the ASB. This criterion, though not yet automated, corroborates the visual observations of the image sequence. It allows the strain accumulation to be limited accordingly and the strains at failure to be evaluated. The latter appear to be heterogeneous and lower than expected from numerical calculations [START_REF] Shih | Finite element analysis of the rake angle effects in orthogonal metal cutting[END_REF][START_REF] Calamaz | Toward a better understanding of tool wear effect through a comparison between experiments and sph numerical modelling of machining hard materials[END_REF]. These observations point to the need for a better understanding of the decohesion phenomenon and the segmentation process.
Chip segmentation process
The development of numerical models of machining operations has become significant in the last decade, and the modeling of the segmentation phenomenon is a key issue in the validation of material models (both for constitutive models and for damage evolution laws) [START_REF] Calamaz | Numerical simulation of titanium alloy dry machining with a strain softening constitutive law[END_REF]. A close monitoring of the image sequence reveals that the crack grows from the tooltip to the top free surface over 17 images. This film, shot on the side surface, clearly shows that the disjunction goes from end to end. It can be seen in Fig. 8 that the material is disconnected on the side from δ = 10 µm, while that disconnection appears at the center only at approximately 70 µm. In other words, a small part of the cross-section (in the triaxial pressure zone) is still attached to the next segment. This results in the formation of a serrated chip (and not a fully segmented one). As highlighted by [START_REF] Komanduri | New observations on the mechanisms of chip formation when machining titanium alloys[END_REF], the presence of a high hydrostatic stress zone at the chip center prevents crack formation in the vicinity of the tooltip. As a matter of fact, such a zone cannot exist near the free surface of the chip, which is consistent with the end-to-end crack observed on the side (Fig. 5-a).
The formation process of this disjunction remains largely undocumented in the literature at least from an experimental standpoint and no mention of side effects is made. However, the observations performed during this work show significant differences between the side and core disjunction processes. From the knowledge of the 3D nature of the chip formation process and other published observations, one may think of three possible scenarios to model the crack evolution:
The first scenario is depicted in Fig. 9-a). The crack initiates at the tooltip (on the side), propagates along the primary shear band, generates a step on the top surface, and keeps propagating backward (through-thickness) to the triaxial pressure zone. This hypothesis is backed by post-mortem chip micrography and QSP observations of polished chips and is widely acknowledged in the scientific community [START_REF] Gentel | Chip formation in machining Ti6Al4V at extremely high cutting speeds[END_REF][START_REF] Vyas | Mechanics of saw-tooth chip formation in metal cutting[END_REF][START_REF] Nakayama | Machining characteristics of hard materials[END_REF]. Regrettably, these observations are made at a single given time of the formation process and cannot provide evolutionary information. Moreover, the material removal due to polishing prevents the observation of side effects.
The second scenario is depicted in Fig. 9-b). The crack initiates at the tooltip (on the side) and, while propagating along the primary shear band, turns through-thickness when the hydrostatic pressure becomes low enough. It then reaches the face-side corner first, then the center of the face. This scenario is consistent with the 2D numerical observations performed by [START_REF] Hua | Prediction of chip morphology and segmentation during the machining of titanium alloys[END_REF]. The authors used the energy-based ductile fracture model proposed by [START_REF] Cockroft | Ductility and workability of metals[END_REF] and performed chip micrographic observations. However, the numerical model is built under a 2D assumption and does not provide information on the side effects.
The third scenario, depicted in Fig. 9-c), is actually a variation of the second scenario. Instead of a side initiation, the crack initiates all along the borders of the triaxial zone. One may assume that the high compressive strains at these locations make the triaxial zone behave as a rigid body that slips on the cutting face. The crack is then initiated simultaneously on the side and through-thickness. This scenario, though not as well-established as the others, may explain the existence of the material voids visible in various chip micrographic observations [START_REF] Courbon | Vers une modélisation physique de la coupe des aciers spéciaux : intégration du comportement métallurgique et des phénomènes tribologiques et thermiques aux interfaces[END_REF][START_REF] Braham-Bouchnak | Étude du comportement en sollicitation extrême et de l'usinabilité d'un nouvel alliage de titane aeronautique : le Ti555-3[END_REF].
Deciding among those three possibilities is a complex experimental (and numerical) task, and the purpose of this work is not to validate (or discard) these hypotheses. However, various observations can offer leads on that matter. For instance, the constant strain rate observed in Fig. 6 right after the completion of the side crack is not really compatible with the first and second scenarios (where the segment is still
not fully formed when the side crack runs from end to end). The first scenario does not explain the existence of material voids along the ASB. QSDs could be a good way to investigate the crack geometry, but also the existence and location of material voids and side effects, provided that the instant of the tool separation is precisely known (high-frame-rate imaging can be used for that purpose). In addition, the use of in-process observations of the top face may help differentiate the three scenarios (outbreak of the crack on the top). Another lead would be cutting force monitoring: it would enable linking the strain rates to the evolution of the resisting cross-section. Finally, the effect of temperature also remains to be investigated.
Conclusion
This paper discusses the difficulties of using built-in strain measurement in machining conditions. It proposes a submillimeter procedure for image acquisition and a suitable numerical post-processing that enables strain evaluation during the segmentation process. Results show a significant strain localization within the ASB and a heterogeneity of the deformation behavior along this band. This work provides experimental evidence of a significant side effect and highlights the 3D nature of the chip (even in orthogonal cutting conditions). It therefore challenges the classical plane strain hypothesis widely acknowledged in the literature. Indeed, chips are often wider than thick and this hypothesis holds as long as the averaged behavior is investigated. However, considering a chip formation process with crack initiation on the sides (where the plane stress hypothesis seems more suited) would clearly require the use of a 3D finite element formulation.
In addition, the presented procedure offers a way to quantify the strains within the adiabatic shear band under the assumption of material continuity. However, as seen above, this hypothesis does not hold when serrated chips are produced. Hence, one understands that detecting the presence (or absence) of material disjunction is a serious issue, since missing it can lead to a significant over-estimation of the strains. The development of a reliable criterion for crack detection (based on the orientation of the velocity gradient vector) is under development and will be presented in a future publication.
Two main drawbacks can be singled out from the presented experiments. The first one is the lack of in-process information on the magnitude and the shape of the out-of-plane deformation of the chip. It is a direct consequence of the chosen experimental procedure. Indeed, the topological study of this phenomenon is an experimental challenge of its own and constitutes a perspective of this work. For this purpose, the use of stereo-imaging, computed tomography or any other non-planar strain measurement technique seems inevitable.
Another drawback of the proposed procedure lies in the inability to measure strains in the direct vicinity of the cutting tool (i.e. in zones Z II and Z III). Indeed, experimental constraints are responsible for the lack of data at these locations. Firstly, the depth of field and the out-of-plane position of the cutting tool lead to a fuzzy delimitation of the tool edges (fuzzy parts of the image around the theoretical position of the cutting tool). Secondly, the non-alignment of the lighting with the optical axis leads to over-exposure of the disoriented facets, especially during the out-of-plane deformation of the segment (white regions of the chip). A correction of these experimental limitations should be addressed in future works.
Finally, achieving a complete comparability between machining experiments and models will require the investigation of thermal effects. Several attempts have been made [START_REF] Abukhshim | Heat generation and temperature prediction in metal cutting: A review and implications for high speed machining[END_REF][START_REF] Filice | On the FE codes capability for tool temperature calculation in machining processes[END_REF][START_REF] Arrazola | Analysis of the influence of tool type, coatings, and machinability on the thermal fields in orthogonal machining of AISI 4140 steels[END_REF], but the fulfillment of an energy balance over the formation of one segment remains unexplored and surely constitutes the most challenging perspective of this work.
Figure 1: a) Experimental apparatus [18]. b) Geometry of the cutting and coordinate system. c) Side view of the scene showing the camera capture frame and the serrated chip (SEM view overlaid).
Figure 2: a) Image detail window 100 × 100 and autocorrelation zoom 10 × 10. b) Associated centered and normalized autocorrelation function radius at half height [4]. c) Distribution of normalized displacement vector norm.
Figure 3: a) SEM image of the chip side, overlaid with the measured altitude map. b) Error estimation on strains due to out-of-plane motion.
Figure 4: Displacements and principal strain fields at five time steps. The total elapsed time is 2.2 ms, corresponding to the formation of one segment. a) Horizontal displacement (in µm). b) Vertical displacement (in µm). c) Major strain. d) Minor strain. The green rectangle represents the zone where displacements are averaged for the cutting speed estimation.
Figure 6: a) Initial and final positions of the three investigated points (P1, P2 and P3). b) Principal strain rate evolution over the 45 images of the film (at P1, P2 and P3).
Figure 7: a) The three sub-zones of the ASB and the 19 selected points. b) State of strain at failure. c) Strain rates at failure for the selected points.
Figure 8: Successive cuts of the chip through its thickness, showing an open crack at the left end but connected material within the hydrostatic stress zone.
Figure 9: a) First scenario: face-to-core propagation of the crack (with side initiation). b) Second scenario: side-to-core propagation of the crack (with side initiation). c) Third scenario: core/side-to-face propagation of the crack with simultaneous initiation.
Table 1: Main parameters of the Digital Image Correlation.
Grid size: 10 × 10 | Pattern size: 10 × 10 | Grey level interpolation: bicubic | Displacement fields interpolation: bilinear
"18962",
"841509"
] | [
"110103",
"206863",
"164351",
"206863",
"164351"
] |
https://hal.science/hal-01604861/file/Kadar%20et%20al%2C%202017%20%20Metabolisme%20version%20HAL.pdf
Ali Kadar
email: [email protected]
Georges De Sousa
Ludovic Peyre
Henri Wortham
Pierre Doumenq
Roger Rahmani
Evidence of in vitro metabolic interaction effects of a chlorfenvinphos, ethion and linuron mixture on human hepatic detoxification rates
Keywords: Health risk assessment, Pesticides mixture, Hepatic clearance, Metabolic interactions, Detoxification
General population exposure to pesticides mainly occurs via food and water consumption. However, their risk assessment for regulatory purposes does not currently consider the actual co-exposure to multiple substances. To address this concern, relevant experimental studies are needed to fill the lack of data concerning the effects of mixtures on human health. For the first time, the present work evaluated, on human microsomes and liver cells, the combined metabolic effects of chlorfenvinphos, ethion and linuron, three pesticides usually found in vegetables of the European Union. Concentrations of these substances were measured during combined incubation experiments, thanks to a new analytical methodology previously developed. The collected data allowed for the calculation and comparison of the intrinsic hepatic clearance of each pesticide under different combinations. Finally, the results showed clear inhibitory effects, depending on the association of the chemicals at stake. The major metabolic inhibitor observed was chlorfenvinphos. During co-incubation, it was able to decrease the intrinsic clearance of both linuron and ethion. The latter also showed a potential for metabolic inhibition, mainly cytochrome P450-mediated in all cases. Here we demonstrated that human detoxification from a pesticide may be severely hampered in the case of co-occurrence of other pesticides, as is the case for drug interactions, thus increasing the risk of adverse health effects. These results could contribute to improving the current challenging risk assessment of human and animal dietary exposure to environmental chemical mixtures.
Introduction
Synthetic pesticides have helped to increase crop yields of modern agriculture for more than half a century. However, due to their widespread use as insecticides, herbicides, fungicides, fumigants and rodenticides, they are now considered as a major group of contaminants. For the general population, although pesticide use for elimination of pests is a significant route of indoor exposure (Van den Berg et al., 2012), dietary intake including water consumption is considered to be the main source of exposure to most pesticides [START_REF] Cao | Relationship between serum concentrations of polychlorinated biphenyls and organochlorine pesticides and dietary habits of pregnant women in Shanghai[END_REF][START_REF] Damalas | Pesticide Exposure, Safety Issues, and Risk Assessment Indicators[END_REF][START_REF] Ding | Revisiting pesticide exposure and children's health: focus on China[END_REF]. Thus, food commodities may simultaneously contain different pesticide residues, resulting in an uninterrupted exposure of human populations to complex pesticide mixtures through their diet. Crepet et al. (2013) found that the French population is mainly exposed to 7 different pesticide mixtures composed of two to six compounds (among 79 targeted food pesticides). As the marketing authorization for a chemical substance is delivered at the European Union scale, it could be assumed that the whole European population is likely to be exposed to these same pesticide mixtures. Among these residues, a mixture including two organophosphorus compounds (chlorfenvinphos and ethion) banned since 2007 but not necessarily totally off the agricultural practice [START_REF] Storck | Towards a better pesticide policy for the European Union[END_REF] and a substituted urea (linuron), was found to be frequently present in staple foods such as carrots and potatoes (see Fig. 1).
The organophosphorus insecticide chlorfenvinphos [2-chloro-1-(2,4-dichlorophenyl)vinyl diethyl phosphate] is a neurotoxic molecule which inhibits acetylcholinesterase. This organophosphorus pesticide is transformed in mammals by hepatic oxidative O-deethylation (Hutson and Wright, 1980). Moreover, chlorfenvinphos administration leads to microsomal enzyme induction and alterations of free amino acid concentrations in rat liver [START_REF] Sedrowicz | Effect of chlorfenvinphos, cypermethrin and their mixture on the intestinal transport of leucine and methionine[END_REF]. In the same way, an in vivo study has revealed that chlorfenvinphos decreases the glutathione level and increases the concentrations of hydrogen peroxide and serum total glutathione in the liver [START_REF] Lukaszewicz-Hussain | Liver and serum glutathione concentration and liver hydrogen peroxide in rats subchronically intoxicated with chlorfenvinphos--organophosphate insecticide[END_REF]. Indeed, chlorfenvinphos liver metabolism is associated with cytochrome P450 (CYP) activities, resulting in the generation of reactive oxygenated metabolites and oxidative stress [START_REF] Swiercz | Partial protection from organophosphate-induced cholinesterase inhibition by metyrapone treatment[END_REF].
Ethion (O,O,O′,O′-tetraethyl S,S′-methylene bis(phosphorodithioate)) is also an organophosphorus insecticide, which presents the same mechanism of action as chlorfenvinphos. Ethion is converted in the liver to its active oxygenated analog, ethion mono-oxon, via desulfuration by cytochrome P-450 enzymes [START_REF] Desouky | Distribution, fate and histopathological effects of ethion insecticide on selected organs of the crayfish, Procambarus clarkii[END_REF].
Linuron, [3-(3,4-dichlorophenyl)-1-methoxy-1-methylurea] is a phenylurea herbicide widely used in agriculture. Human liver is suspected to be a target of linuron, as it induced DNA damages in rat liver [START_REF] Scassellati-Sforzolini | In vivo studies on genotoxicity of pure and commercial linuron[END_REF]. Another study demonstrated that exposure to linuron leads to hepatocellular adenomas in rat [START_REF] Santos | Toxicity of the herbicide linuron as assessed by bacterial and mitochondrial model systems[END_REF]. Interestingly, this compound was described to be activated in mammalian metabolizing cells leading to an increase of mutagenic properties [START_REF] Federico | Mutagenic properties of linuron and chlorbromuron evaluated by means of cytogenetic biomarkers in mammalian cell lines[END_REF]. Finally, linuron was shown to be an aryl hydrocarbon receptor (Ahr) ligand and its agonistic activity leads to the induction of CYP1A genes' expression [START_REF] Takeuchi | In vitro screening for aryl hydrocarbon receptor agonistic activity in 200 pesticides using a highly sensitive reporter cell line, DR-EcoScreen cells, and in vivo mouse liver cytochrome P450-1A induction by propanil, diuron and linuron[END_REF].
Risk assessment carried out across the world by authorities mainly focus on compounds belonging to the same chemical family, or possessing the same mechanisms of action [START_REF] Reffstrup | Risk assessment of mixtures of pesticides. Current approaches and future strategies[END_REF][START_REF] Ragas | Tools and perspectives for assessing chemical mixtures and multiple stressors[END_REF]. In addition, assessment is only based on the evaluation of cumulative effects of these products, supposing the absence of potential effects concerning interactions between pesticides (European Food Safety Agency, 2012). Therefore, a wide thinking process has been started for more than 5 years to address the issue of risk assessment regarding the combined actions of substances on human health [START_REF] Solecki | Paradigm shift in the risk assessment of cumulative effects of pesticide mixtures and multiple residues to humans and wildlife: German proposal for a new approach[END_REF]European Commission, 2014;[START_REF] Rider | Mixtures research at NIEHS: An evolving program[END_REF]. An increasing number of experimental studies have been published in the last few years [START_REF] Starr | Environmentally relevant mixtures in cumulative assessments: an acute study of toxicokinetics and effects on motor activity in rats exposed to a mixture of pyrethroids[END_REF][START_REF] Takakura | In vitro combined cytotoxic effects of pesticide cocktails simultaneously found in the French diet[END_REF][START_REF] Carvalho | Mixtures of chemical pollutants at European legislation safety concentrations: how safe are they?[END_REF][START_REF] Orton | Mixture effects at very low doses with combinations of anti-androgenic pesticides, antioxidants, industrial pollutant and chemicals used in personal care products[END_REF][START_REF] Cedergreen | Quantifying synergy: a systematic review of mixture toxicity studies within environmental toxicology[END_REF][START_REF] Clarke | Challenging conventional risk assessment with respect to human exposure to multiple food contaminants in food: A case study using maize[END_REF][START_REF] Spaggiari | An extensive cocktail approach for rapid risk assessment of in vitro CYP450 direct reversible inhibition by xenobiotic exposure[END_REF], helping to fill the gap in knowledge on this topic. More than a decade ago, Tang et al. (2002) already demonstrated a strong inhibition in the metabolism of carbaryl when it was simultaneously incubated with chlorpyrifos in human liver sub-cellular preparations. Similarly, [START_REF] Savary | Interactions of endosulfan and methoxychlor involving CYP3A4 and CYP2B6 in human HepaRG cells[END_REF] showed that hepatic metabolic interactions occurred during the co-incubation of the pesticides endosulfan and methoxychlor. This phenomenon increases the residence time of the active compounds and thus their latent toxicity. While Savary studied these interactions effects through the activities of the CYP isoforms involved, Tang et al. (2002) used the same experimental strategy. However, they also demonstrated the occurrence of an interaction effect on the basis of calculated intrinsic clearance rates resulting from human liver microsome experiments. 
An alternative approach to evaluate a metabolic interaction between two compounds consists in comparing the intrinsic clearances of the product when incubated alone, or as a mixture using the substrate depletion approach [START_REF] Donglu | Drug Metabolism in Drug Design and Development: Basic Concepts and Practice[END_REF].
Human liver is the most important site of biotransformation in the body, primary culture of hepatocytes and hepatocyte subcellular preparations have proven to be suitable in vitro models for the investigation of the metabolism and metabolic interactions of environmental contaminants [START_REF] Hodgson | Human Variation and Risk Assessment: Microarray and Other Studies Utilizing Human Hepatocytes and Human Liver Subcellular Preparations[END_REF]. Although liver microsomes support only a part of the whole metabolic process i.e. phase I metabolism, it continues to be the first-line model for metabolism study assays. Indeed, they are more readily available than hepatocytes and specifically adapted to CYP kinetic measurements. In order to highlight the part of phase II metabolism and cellular uptakes, primary culture of hepatocytes can be used as a complement as suggested by [START_REF] Houston | Prediction of hepatic clearance from microsomes, hepatocytes and liver slices[END_REF], who demonstrated that both microsomes and hepatocytes might be suitable for the ranking of specific substrate hepatic intrinsic clearances in rats. Here, we investigated, for the first time, the effect of a co-incubation of multi-class pesticides present as a mixture in the French diet, especially on the human liver metabolism. In order to highlight the possible human metabolic interaction effects of the pesticide mixture, the analytical method previously developed by Kadar et al. (2017) will be applied to this in vitro study.
Materials and methods
Biological samples
Human microsomes
The human hepatic microsomal preparation used was a pool of hepatic microsomes obtained from ten different donors. First, for each individual, microsomes purification was carried out as described by Van der Hoeven and Coon (1974). Then, for sample from each donor, the microsomal protein concentration was quantified according to the method of Bradford (Bio-Rad Protein Assay kit; Ref. 15), using bovine serum albumin as standard. Finally, a pooled microsomal sample was prepared in order to obtain a final concentration of 10 mg/mL of proteins in 100 mM phosphate potassium buffer (pH 7.4) containing 1.0 mM EDTA and 20% glycerol (v/v). Aliquots of 1.4 mL were then stored at -80 °C.
Primary culture of human hepatocytes
All experiments on human tissue were carried out according to the ethical standards of the responsible committee on human experimentation and the Helsinki Declaration. For each of the three livers donated (two females: F31 and F82, one male: M66), hepatocytes were isolated as previously described by [START_REF] Berry | High-yield preparation of isolated rat liver parenchymal cells: a biochemical and fine structural study[END_REF] and submitted to long-term cryopreservation [START_REF] De Sousa | Freshly isolated or cryopreserved human hepatocytes in primary culture: Influence of drug metabolism on hepatotoxicity[END_REF]. For the present study, they were thawed as established by Rijntjes et al. (1986). Viable cells were suspended in seeding medium I (Williams E Glutamax medium supplemented with penicillin 100 UI/mL, streptomycin 100 μg/mL, bovine insulin 0.1 UI/mL and fetal calf serum 10% v/v). Cell viability achieved by means of trypan blue exclusion was 80 % or greater, then the number of viable cells was determined using a Malassez cell. The hepatocytes suspended in the seeding medium I were inoculated after appropriate dilution at about 0.7 × 10 5 cells/well into 48-wells Corning® Costar® plates (Corning, NY, USA) that had previously been coated with rat tail collagen I (Sigma Aldrich, Saint-Quentin Fallavier, France). All the plates were then placed in an incubator with an atmosphere containing 5 % of CO 2 and 95 % of relative humidity for a 24 h adhesion period. After attachment, the wells were rinsed using medium II (Williams E Glutamax medium supplemented with penicillin 100 UI/mL, streptomycin 100 μg/mL, bovine insulin 0.1 UI/mL, hydrocortisone hemisuccinate 1 μM and bovine serum albumin 240 μg/mL) and maintained in contact with the medium until the exposition experiments.
In vitro metabolism experiments
2.2.1. Microsome incubations
Microsomal samples at a total protein concentration of 0.5 mg/mL were prepared in 100 mM phosphate buffer (pH 7.4) containing a cofactor-regenerating system. The latter consisted of 6 mM glucose-6-phosphate, 3 mM nicotinamide adenine dinucleotide phosphate and 0.4 unit of glucose-6-phosphate dehydrogenase. Microsomal experiments were carried out in borosilicate glass tubes (volume 6 mL) in order to minimize physical adsorption of the pesticides.
A limited volume of 1 µL of stock individual solution of chlorfenvinphos, ethion, linuron or their different combination mixtures, prepared in acetonitrile, was deposited in each tube. Then, after a pre-incubation during 5 min at +37 °C, the microsomal sample was added. Care was taken in order not to exceed a 0.25% solvent proportion in each tube. The blank sample was prepared using the same volume of pure acetonitrile. The glass tube was incubated for the desired time as described above. At the end of the experiments, the reactions were stopped by transferring tubes in an ice bath and by adding 400 µL of ice-cold acetonitrile to each tube before vortex stirring.
Firstly, for Michaelis-Menten experiments, independent compounds were incubated at +37°C with the human microsomal sample in triplicate, at three increasing concentrations for chlorfenvinphos (0.5, 1 and 2 μM), ethion (1, 2 and 4 μM) and linuron (3, 6 and 12 μM). The reactions were stopped after 4, 7, 11 and 21 min.
Secondly, on the basis of the initial results, the Michaelis apparent affinity constant (Km,app) was determined for each xenobiotic as described in the data analysis part (see 2.3), and the metabolic influence of co-incubations was studied. To achieve this goal, a single concentration of 0.2 Km,app of each pesticide was incubated alone as a reference, with concentrations of 0.5 Km,app and 5 Km,app of each of the other compounds, and with both of them at the aforementioned levels. The assays, carried out in triplicate, were quenched as described above after 3.5, 6.5 and 9.5 min.
Hepatocyte incubations
In each seeded well, the seeding medium was removed and substituted by 200 μL of the same medium supplemented with the required amount of the pesticides in order not to exceed 0.25 % DMSO solvent proportion, and pre-incubated in the preparation glass tube at +37 °C during 5 min.
The blank seeded well consisted of the same percentage of pure DMSO solvent. After 5, 9, 15 or 22 h, the hepatocytes were scraped and homogenized by a gentle manual agitation after immediate addition of 200 µL ice-cold acetonitrile.
First, to determine the Michaelis-Menten parameters, cells from three donors F31 (31-yearold female), F82 (82-year-old female) and M66 (66-year-old male) were exposed to the pesticides at four increasing concentrations of chlorfenvinphos (2, 4, 8, 12 μM), ethion (4, 8, 12, 20 μM) and linuron (6, 12, 18, 24 μM). All the reactions were stopped after 5, 9, 15 and 22 h as reported above.
Secondly, after determination of the Michaelis apparent affinity constant (Km,app) of each pesticide, the effect of co-incubation on individual metabolism was studied. To this end, a single concentration of 0.1 Km,app of each pesticide was incubated alone as a reference, with concentrations of 0.1 Km,app and 0.5 Km,app of each of the other compounds, and with both of them at the same two levels.
Each experiment was performed in triplicate with pools of hepatocytes from the three individuals mentioned above (i.e. pooled after thawing) in order to represent an average human donor pool.
Consequently, the pesticides dependent K m,app values were calculated as the average of the specific constants of each individual. The incubations were finalized after 5, 10 and 15 h as described above.
Data analysis
Michaelis-Menten affinity constants (Km,app) and rates of intrinsic clearance (CLint) were calculated from parent-compound depletion data. For Km,app determination, the concentration of pesticide remaining over the time course was determined using the analytical method presented by Kadar et al. (2017). Moles of pesticide remaining were then converted into moles of product formed and plotted versus time to allow determination of the reaction kinetics after nonlinear least-squares regression analysis using GraphPad Prism6 software (Ritme, Paris, France). In order to reduce the number of compound concentrations needed to accurately determine the Michaelis-Menten kinetic parameters, the alternative direct-linear plot approach [START_REF] Eisenthal | The direct linear plot. A new graphical procedure for estimating enzyme kinetic parameters[END_REF] was employed. According to this method, for each pesticide, only two to four concentrations were needed to calculate Km,app. Finally, intrinsic clearances were determined directly using the in vitro half-life method [START_REF] Obach | Prediction of human clearance of twenty-nine drugs from hepatic microsomal intrinsic clearance data: An examination of in vitro half-life approach and nonspecific binding to microsomes[END_REF]. A regression analysis was carried out to determine the T1/2 value, before conversion to CLint as described by formulas (1) to (3) presented below.
S = S0 × e^(−k·t)    (1)
where S (µM) is the remaining substrate (pesticide) concentration, S0 the initial substrate concentration (µM), and k the elimination rate constant (min⁻¹ or h⁻¹)
T1/2 = ln(2) / k    (2)
where T1/2 is the half-life value for, respectively, the microsome (min) and hepatocyte (h) assays
CLint = (ln(2) / T1/2) × (Vi / Q)    (3)
where Vi is the incubation volume (mL) and Q the protein amount (mg)
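For illustration, the work-flow described by formulas (1) to (3) can be sketched as follows in Python; the function name and the data layout are assumptions made for the example and do not reproduce the GraphPad Prism processing actually used in this study.

```python
import numpy as np

def intrinsic_clearance(t, conc, vol_ml, protein_mg):
    """Substrate-depletion / in vitro half-life method.

    t          : sampling times (min for microsomes, h for hepatocytes)
    conc       : remaining parent-compound concentrations (µM) at those times
    vol_ml     : incubation volume Vi (mL)
    protein_mg : protein amount Q (mg)
    Returns (k, T1/2, CLint), with CLint in mL/min/mg or mL/h/mg.
    """
    t = np.asarray(t, float)
    conc = np.asarray(conc, float)
    # Log-linear regression of ln(S/S0) versus time; the slope is -k (Eq. 1).
    slope, _ = np.polyfit(t, np.log(conc / conc[0]), 1)
    k = -slope
    t_half = np.log(2) / k                  # Eq. 2
    cl_int = k * vol_ml / protein_mg        # Eq. 3, i.e. (ln 2 / T1/2) * Vi / Q
    return k, t_half, cl_int
```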
Results and discussion
Microsomes metabolism study
The in vitro metabolism of each pesticide was first investigated using human liver microsomes. Incubation mixtures of each target analyte displayed no biotransformation when experiments were performed in the absence of a NADPH-generating system, implying a CYP-dependent metabolism. As shown in Fig. 2 and summarized in Table 1, the Km,app values obtained after data processing were 4.2 ± 0.3 µM, 8.0 ± 0.3 µM and 2.0 ± 0.1 µM for chlorfenvinphos, ethion and linuron, respectively.
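For reference, the direct-linear-plot estimation mentioned in the data analysis section can be sketched as follows, assuming that initial rates v have already been extracted for each starting concentration S; this is an illustrative helper, not the code actually used to process the present data.

```python
import itertools
import numpy as np

def direct_linear_plot(S, v):
    """Eisenthal & Cornish-Bowden direct linear plot for (Km, Vmax).

    Each observation (S_i, v_i) defines the line Vmax = v_i + (v_i / S_i) * Km
    in (Km, Vmax) space; Km and Vmax are taken as the medians of the pairwise
    intersections of these lines.
    """
    S, v = np.asarray(S, float), np.asarray(v, float)
    km_est, vmax_est = [], []
    for i, j in itertools.combinations(range(len(S)), 2):
        denom = v[i] / S[i] - v[j] / S[j]
        if denom == 0.0:                      # parallel lines: no intersection
            continue
        km = (v[j] - v[i]) / denom
        km_est.append(km)
        vmax_est.append(v[i] + v[i] * km / S[i])
    return np.median(km_est), np.median(vmax_est)
```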
As described by [START_REF] Obach | Prediction of human clearance of twenty-nine drugs from hepatic microsomal intrinsic clearance data: An examination of in vitro half-life approach and nonspecific binding to microsomes[END_REF], we assumed that an approximate value of 0.2 Km,app was well below the Km,app. Then, level values of 0.4 µM for linuron, 0.8 µM for chlorfenvinphos and 1.6 µM for ethion should be on the part of the saturation curve where initial rates correlate with CLint. The in vitro half-life data collected from individually and simultaneously incubated pesticides allowed the final calculation of the hepatic intrinsic clearances displayed in Table 2.
Firstly, the human liver microsomal intrinsic clearance of linuron (0.681 mL/min/mg) was not clearly impaired when the lowest concentrations of either chlorfenvinphos (2 µM) or ethion (4 µM) were individually added. Indeed, the corresponding intrinsic clearances were 0.678 and 0.672 mL/min/mg, which was not noticeably different from linuron's intrinsic clearance when incubated alone. However, their joint combination at the same low concentrations revealed an important decrease of the linuron intrinsic clearance (0.499 mL/min/mg). In addition, ethion co-exposure at 40 µM slightly changed the intrinsic clearance, dividing it by a factor of 1.1. On the contrary, the presence of chlorfenvinphos at 20 µM decreased linuron's biotransformation by a factor of 2.7. An interesting synergistic effect could be noticed when linuron was co-incubated with a mixture of ethion and chlorfenvinphos at their highest levels, leading to a 3.7-fold drop of linuron's intrinsic clearance.
Chlorfenvinphos intrinsic clearance was 1.069 mL/min/mg when the pesticide was incubated alone. Low amounts of either linuron or ethion did not clearly influence its intrinsic clearance. However, we observed a remarkable effect on the original intrinsic clearance when chlorfenvinphos was individually co-incubated with the highest amounts of linuron (10 µM) or ethion (40 µM). Indeed, in each condition, we noticed a decrease of approximately 1.5 and 1.4-fold, respectively. Moreover, the presence of a mixture containing these two inhibitors at their highest levels induced a comparable division (1.5 fold) of chlorfenvinphos' intrinsic clearance, suggesting that a competitive interaction was very likely to occur in this case.
Finally, in the third and last experimental condition, ethion's intrinsic clearance (0.802 mL/min/mg) was not clearly modulated by the presence of linuron, whatever the concentration level. On the other hand, when co-incubated with chlorfenvinphos at its low level, ethion's intrinsic clearance was divided by a factor of 1.3 and, at its highest level, by a factor of 2.7. The decrease reached 3.5-fold when ethion was incubated with a mixture of the two other pesticides at their highest levels.
As a consequence, these results obtained from the microsomes metabolism study showed that even if each compound of the studied mixture revealed inhibiting actions, chlorfenvinphos was the most potent inhibitor. Added separately to linuron or ethion, it greatly inhibited the metabolization of these two pesticides with intrinsic clearance values close to those observed in the mixture exposition conditions. Moreover, as weaker inhibitors, linuron and ethion failed to exert such inhibitory effect when they were in turn added to chlorfenvinphos. This suggested once again the predominant role of chlorfenvinphos in the inhibition of the hepatic intrinsic clearance.
Hepatocytes metabolism study
The second part of the study made it possible to examine the in vitro metabolism of each phytosanitary product using primary cultures of human hepatocytes. This step allowed evaluating whether the mixture interactions revealed above could be confirmed with the model considered as the "gold standard", knowing that freshly isolated hepatocytes express all the hepatic enzymes and transporters required for complete metabolism studies (Fasinu et al., 2012).
Firstly, as depicted in Fig. 3 and summarized in Table 1, the Km,app values of each pesticide were estimated for the different liver cells. We found that the three donors presented a 2- to 3-fold inter-individual variability, with F82 exhibiting the highest affinity (i.e. the lowest Km values), followed by M66 and F31. The average Km,app values were 42.2 ± 24.8 µM, 43.0 ± 14.1 µM and 20.0 ± 13.2 µM for chlorfenvinphos, ethion and linuron, respectively. Concentrations of 2.0 µM for linuron, 4.0 µM for chlorfenvinphos and also 4.0 µM for ethion were chosen, well below the Km,app, so as to be located in the range where each individual intrinsic clearance is constant, as presented above for the microsome model. The data obtained from individually and simultaneously exposed pesticides are summarized in Table 3.
The human hepatocyte intrinsic clearance of chlorfenvinphos was 5.25 mL/h/mg. When co-exposed with the highest dose of either linuron or ethion, the intrinsic clearance was reduced to 3.69 and 4.18 mL/h/mg, respectively. The presence of ethion and linuron at low concentrations produced a noticeable but limited effect, whereas a strong inhibition was observed at high concentrations, resulting in a 1.7-fold decrease of the chlorfenvinphos intrinsic clearance.
When incubated alone, ethion's intrinsic clearance was 5.07 mL/h/mg. Even if a slight inhibitory effect appeared, with a decrease of this intrinsic clearance to approximately 4.70 mL/h/mg, when linuron and chlorfenvinphos were individually co-incubated at 10 µM and 4 µM respectively, a more pronounced decrease of about 1.1-fold was observed when these two pesticides were present as a mixture at their lowest concentrations. Moreover, chlorfenvinphos had a more important influence when it was co-incubated with ethion at its highest concentration (20 µM), decreasing ethion's intrinsic clearance by a factor of 1.3. The concomitance of ethion with the highest levels of a mixture of chlorfenvinphos and linuron led to a 1.7-fold reduction in its hepatic detoxification.
Linuron's intrinsic clearance was 2.95 mL/h/mg when exposed alone at 2.0 µM to the hepatocytes. This value remained unchanged when the lowest amounts of ethion or chlorfenvinphos were also present. However, as for the previous pesticide, a slight decrease of about 1.1-fold was calculated when these substances were both added to the cellular medium at their lowest concentrations. This decrease of the clearance also reached approximately 1.3-fold at the highest chlorfenvinphos concentration (20 µM). Lastly, linuron's intrinsic clearance was divided by a factor of about 1.8 when this pesticide was incubated in the presence of a mixture of the two other compounds at their highest concentrations (20 µM).
Thus, the observations made during the hepatocytes metabolism study showed numerous similarities with the microsomal experiments presented above. Indeed, even if each compound of the studied mixture showed a capacity to hamper the degradation of the other pesticides, chlorfenvinphos was the strongest inhibitor among the three pesticides under study. Overall, the co-incubations of the studied compounds on both microsomes and hepatocytes depicted similar metabolic effects. We thus suggest that the intrinsic clearance inhibitions could be mainly due to the inhibition of phase I metabolic enzymes, with minor involvement of other cellular enzymes and membrane transporters. Indeed, herbicides from phenylurea family are known to be mainly metabolized by human CYP1A2, CYP2C19 and CYP3A4 enzyme isoforms and also, but less importantly, by CYP2B6 (Abass et al., 2007). On the other hand, human organophosphorus metabolism is mainly CYP1A2, CYP2B6, CYP2C9, CYP2C19 and CYP3A4 mediated [START_REF] Hodgson | Human metabolic interactions of environmental chemicals[END_REF]. As a consequence, this confirms that chlorfenvinphos, ethion and linuron share a common support by CYP for their detoxification and, are likely to compete for the same active sites, resulting in the decrease of their respective intrinsic clearance as experimentally observed above. Moreover, among these phase I enzymes, CYP1A2, CYP2B6 and CYP3A4 are known to be most active in the formation of active oxon phosphate metabolites (Buratti et al., 2004). During this activation by oxidative desulfuration reaction, the release of highly reactive sulfur results in irreversible CYP inhibitions [START_REF] Hodgson | Human metabolic interactions of environmental chemicals[END_REF]. This is precisely what could happen with ethion but not with chlorfenvinphos which, in contrast to most organophosphorus pesticides, presents an already desulfurated active parent form [START_REF] Carter | Analytical approaches to investigate protein-pesticide adduct[END_REF] as shown by its typical oxon chemical structure (Fig. 1). Therefore, the potent inhibitory characteristic shown by this compound may be explained through the action of other enzymes.Furthermore, the noticeable difference in intrinsic clearances between human liver models mentioned above might be due to the faster metabolization of chlorfenvinphos in cells. Indeed, as an oxon, the most potent inhibitor of the pesticide mixture should be more easily metabolized through the hydrolytic action of hepatic paraoxonase, which is far less present in microsomes than in hepatocytes (Gonzalvo et al., 1997).
Conclusions
In this work, a metabolism study of a chlorfenvinphos, ethion and linuron mixture was conducted. It aimed at exploring the possible pesticide-pesticide(s) human liver metabolic interactions through the monitoring of the in vitro loss of the parent compound. A minimized number of assays provided the Michaelis-Menten Km,app parameter for each compound after Eisenthal and Cornish-Bowden velocity plotting. Based on combined incubations of the active substances, interaction experiments showed clear inhibition effects in both human liver microsomes and hepatocytes. Even if all the compounds showed an action, the rank order of the individual inhibitory potency was chlorfenvinphos >> linuron > ethion. In liver microsomes, the major metabolic inhibitions were observed after concomitant treatments of ethion and linuron with a high level of the two other products, displaying respectively a 3.5- and 3.7-fold reduction of their intrinsic clearance. These inhibitions were also observed in primary cultures of hepatocytes but were reduced by approximately half. We hypothesized that this might be linked to a decrease in chlorfenvinphos' inhibitory strength, due to the presence of much greater paraoxonase amounts in the liver cells.
Moreover, the similar trends revealed by the two human liver models demonstrated that the metabolic interactions were mainly mediated by phase I enzymes, probably CYP1A2, CYP2C19, CYP3A4 and CYP2B6.
To conclude, we found evidence of in vitro metabolic interaction effects of a chlorfenvinphos, ethion and linuron mixture on human hepatic detoxification rates. As the consumption of vegetables such as potatoes or carrots proved to have an impact on hepatic intrinsic clearances of these pesticides, we can hypothesize that in France, at the European scale and very probably also abroad, due to a delay in the detoxification of the body, people may be exposed to an increased toxicity of these pesticides.
Both experiments using CYP isoforms and paraoxonase and a comprehensive study focusing on additivity, synergism and antagonism interactions of the studied mixture would be useful. The results of this work may already encourage safety agencies to include the issue of pesticide mixtures in dietary and environmental risk assessment processes. The methodology described in this work could be applied to future studies on harmful chemical mixtures present in human and animal diets, as well as in their environment.
Fig. 1. Molecular structures of chlorfenvinphos, ethion and linuron.
Table 1: Michaelis apparent affinity constant values from human microsomes and hepatocytes for chlorfenvinphos, ethion and linuron.
Table 3: Effects of co-incubations on the intrinsic clearance (CLint*) of each pesticide in primary human hepatocytes.

Target: linuron (2 µM)
  alone: 2.951 | + ethion 4 µM: 2.942 | + ethion 20 µM: 2.799 | + chlorfenvinphos 4 µM: 2.825 | + chlorfenvinphos 20 µM: 2.362 | + ethion 4 + chlorfenvinphos 4 µM: 2.666 | + ethion 20 + chlorfenvinphos 20 µM: 1.629

Target: chlorfenvinphos (4 µM)
  alone: 5.250 | + linuron 2 µM: 5.084 | + linuron 10 µM: 3.688 | + ethion 4 µM: 5.215 | + ethion 20 µM: 4.174 | + linuron 2 + ethion 4 µM: 4.994 | + linuron 10 + ethion 20 µM: 2.834

Target: ethion (4 µM)
  alone: 5.067 | + linuron 2 µM: 4.827 | + linuron 10 µM: 4.707 | + chlorfenvinphos 4 µM: 4.725 | + chlorfenvinphos 20 µM: 3.890 | + linuron 2 + chlorfenvinphos 4 µM: 4.576 | + linuron 10 + chlorfenvinphos 20 µM: 2.965

* expressed in mL/h/mg of human hepatic proteins, 50 mg proteins per g of liver [START_REF] Carlile | Microsomal prediction of in vivo clearance of CYP2C9 substrates in humans[END_REF].
Acknowledgements
This work was funded by the Agence Nationale de la Recherche (under reference ANR-2008-CESA-016-01), by the Office National de l'Eau et des Milieux Aquatiques (ONEMA) and by the Agence Nationale de Sécurité Sanitaire de l'alimentation, de l'environnement et du travail (ANSES) in the frame of the MEPIMEX Project. The authors also gratefully acknowledge the supply of the LC-MS/MS system at the Laboratoire de l'environnement de Nice as part of a collaboration agreement with Patricia Pacini.
Tang, J., Cao, Y., Rose, R.L., Hodgson, E., 2002. In vitro metabolism of carbaryl by human cytochrome P450 and its inhibition by chlorpyrifos. Chem-Biol. Interact. 141, 229-241.
Van den Berg, H., Zaim, M., Mnzava, A., Hii, J., Dash, A.P., Ejov, M., 2012. Global trends in the use of insecticides to control vector-borne diseases. Environ. Health Perspect. 120, 577-82.
Van der Hoeven, T.A., Coon, M.J., 1974. Preparation and properties of partially purified cytochrome P-450 and reduced nicotinamide adenine dinucleotide phosphate-cytochrome P-450 reductase from rabbit liver microsomes. J. Biol. Chem. 249,
"1019158",
"872931"
] | [
"155356",
"220811",
"441569",
"155356",
"155356",
"220811",
"441569",
"220811",
"441569",
"155356"
] |
https://shs.hal.science/halshs-01298135/file/Accountability%20by%20proxySSRN.pdf
Vanessa Richard
Accountability by proxy? The ripple effects of MDBs' international accountability mechanisms on the private sector
Between Fall 2010 and Spring 2012, allegations of serious harm caused by Corporación Dinant-a palm oil and food company in Honduras to which the International Finance Corporation (IFC) was providing a corporate loan-reached the IFC and its Compliance Advisor Ombudsman (CAO). These allegations included "forced evictions of farmers," "violence against farmers on and around Dinant plantations … because of inappropriate use of private and public security forces under Dinant's control or influence," and the fact that IFC had "failed to identify early enough and/or respond appropriately to the situation of Dinant in the context of the declining political and security situation in Honduras." 1 The CAO, who is the independent recourse mechanism for the IFC and the Multilateral Investment Guarantee Agency (MIGA), decided to trigger an investigation to verify "whether IFC exercised due diligence in its review of the social risks attached to the Project; whether IFC responded adequately to the context of intensifying social and political conflict surrounding the project post commitment; and whether IFC policies and procedures provide adequate guidance to staff on how to assess and manage social risks associated with projects in areas that are subject to conflict or conflict prone." 2 In the course of its investigation, CAO discovered that Dinant was one of the largest borrowers of a Honduran bank, Banco Financiera Comercial Hondureña (Ficohsa
actor may face consequences." 9 It thus means "that policy-makers have to ''give an account'' to accountability holders-publics or otherwise. This requires three things: (1) a set of standards to which they are held to account; (2) relevant information available to the accountability holders; and
(3) the ability of the accountability holders to sanction the policy-makers." 10 Based on the different definitions that have been proposed and brought back to "concrete practices of account giving," 11 one can put forward that accountability is operative when the following components are gathered: -Identifiable accountable entities (accounters); -Identifiable account-holders; -Standards by which the behaviour of accounters can be assessed; -A forum or a procedure that frames the assessment of this behaviour; -The assessment has an impact on the behaviour of the accounter (sanction, remedial measures, better performance…) 12 Hence, this paper does not deal with the concept of accountability per se, but addresses issues related to the functioning and impacts of accountability mechanisms.
In this quick exploration of existing literature, it is also important to note that several taxonomies of accountability mechanisms have been proposed. The classification of the types of accountability mechanisms is first and foremost based on the nature of the link between the entities that are expected to account and the account-holders. In addition, depending on the framework in which the typology is elaborated, the nature of those links is more or less varied, because in different domains accounters and account-holders are different and have different kinds of relationship. For example, Mashaw proposes a taxonomy of "accountability regimes" distributed in three domains: in the realm of "State Governance", accountability regimes can be political, administrative or legal; in the realm of "Private Markets", accountability regimes relate to products, labor or financial; in the realm of "Social Networks", accountability regimes relate to the fact one belongs inter alia to a family, a profession or a team… 13 Mashaw specifies that "Putting the characteristics of these accountability regimes into boxes or grids surely overstates the degree to which they are distinctive. In the real world, these "regimes" flow and blend into each other in just about every imaginable way. Nevertheless, broad differences in kind are clearly distinguishable." 14 In mapping accountability, Bovens, whose work considers accountability as a social relation in the public sphere, proposes a classification which depends on whether accountability is based on the nature of the forum (political accountability, legal accountability, administrative accountability, professional accountability, social accountability), the nature of the actors (corporate accountability, hierarchical accountability, collective accountability, individual accountability), the nature of the conduct (financial accountability, procedural accountability, product accountability) or else based on the nature of the 9 Mark Bovens, "Analysing and Assessing Accountability: A Conceptual Framework", 13 European Law Journal (2007), at 447. 10 Robert O. Keohane, "Decisiveness and Accountability as Part of a Principled Response to Nonstate Threats", 20 Ethics & International Affairs (2006), at 221. 11 Bovens, op. cit., at 450. 12 Put differently, "in any accountability relationship we should be able to specify the answers to six important questions: Who is liable or accountable to whom; what they are liable to be called to account for; through what processes accountability is to be assured; by what standards the putatively accountable behavior is to be judged; and, what the potential effects are of finding that those standards have been breached": Mashaw, op. cit., at 17; see also Bovens, "A relationship qualifies as a case of accountability when: 1. there is a relationship between an actor and a forum; 2. in which the actor is obliged; 3. to explain and justify; 4. his conduct; 5. the forum can pose questions; 6. pass judgment; 7. and the actor may face consequences": op. cit., at 452. 13 Mashaw,op. cit.,at 27. 14 Ibid., at 28.
obligation (vertical accountability, diagonal accountability, horizontal accountability.) 15 Analyzing accountability mechanisms in the narrower realm of world politics, Grant and Keohane count seven kinds of accountability mechanisms: hierarchical, supervisory, fiscal, legal, market, peer, public-reputational. As for Stewart, who addresses accountability as a means to remedy the problem of disregard in global regulatory governance-that is to say the fact that "the present structures and practices of global regulatory governance often generate unjustified disregard of and consequent harm to the interests and concerns of weaker groups and targeted individuals" 16 -he distinguishes between five types of accountability mechanisms. As this paper endorses Stewart's approach as regards the purpose and functions of accountability mechanisms in the context of global governance, I will briefly describe his analytical framework. 17
Stewart sees accountability mechanisms as one of the "three basic types of governance mechanisms-decision rules, accountability mechanisms, and other responsiveness-promoting measures-as potential tools for addressing disregard." 18 'Decisions rules' indicate who can make decisions for an institution and how (along which procedures decisions are made). 'Other responsiveness-promoting measures' are those which "provide global bodies with various incentives to give greater regard to disregarded interests". 19 These include transparency, participation without decisional power, reasoned decisions, peer and public reputational influences, competition, market forces… Stewart specifies that "Decision rules and accountability mechanisms tend to assign defined authorities and responsibilities to specified actors. The other responsiveness-promoting practices do not. Their operation is typically more diffuse and indeterminate." 20 Regarding accountability mechanisms, Stewards' definition is that three fundamental requirements must be met: "(1) a specified accounter, who is subject to being called to provide account, including, as appropriate, explanation and justification for his conduct; (2) a specified account holder who can require that the accounter render account for his performance; and (3) the ability and authority of the account holder to impose sanctions or mobilize other remedies for deficient performance by the accounter and perhaps also to confer rewards for a superior performance by the accounter." 21 Based on this definition, Stewart counts five types of accountability mechanisms (electoral, hierarchical, supervisory, fiscal and legal), that belong to two categories (the first four types in the first category, legal accountability mechanisms in the second.) The first category is based on the fact the relationship between the accounter and the account-holders involves "a delegation or a transfer of authority or resources … where the accounters are to act in the interest of the grantors/accountholders or designated third persons." 22 The second category is characterized by the fact it is triggered by the accounter acting contrary to the applicable law, applicable rules also providing for a legal remedy.
As Jutta Brunnée states, "International legal accountability, then, involves the legal justification of an international actor's performance vis-à-vis others, the assessment or judgment of that performance against international legal standards, and the possible imposition of consequences if the actor fails to live up to applicable legal standards." [START_REF] Brunnée | International Legal Accountability through the Lens of the Law of State Responsibility[END_REF] In a global regulatory governance context, it would be unrealistic to expect such legal standards to always be legally binding rules, that is, rules provided for by instruments that belong to the traditional sources of international law. Huge areas of global regulation are governed by so-called soft law. 24
With respect to the specific subject of this article, one can note that the standards that MDBs apply are, according to the terminology of the Draft Articles on the Responsibility of International Organizations (DARIOs), "rules of the organization." [START_REF]Draft articles on the responsibility of international organizations, with commentaries[END_REF] Their legal nature is debated and there is no consensus on whether they are part of international law or can only bind the organization's staff. 26 Besides, the DARIOs specify that unless the lex specialis of the organization provides otherwise (Article 64), a breach of the rules of the organization can amount to an internationally wrongful act only if the rules "are part of international law"; what is more, "while the rules of the organization may affect international obligations for the relations between an organization and its members, they cannot have a similar effect in relation to non-members." [START_REF]Draft articles on the responsibility of international organizations[END_REF] This rules out the idea that the people affected by the activities of international organizations can invoke this organization's international legal responsibility. The recourse to the notion of legal accountability, which is not restricted to legal responsibility/liability mechanisms (where relevant standards are legally binding rules, and consequences are whatever form of compensation the law provides for), opens avenues for a legal analysis of the protean normative phenomena that occur in global regulatory governance.
II -Main features of IAMs with respect to MDBs-supported private projects
In the context of the rise of the sustainable development concept [START_REF]On the impact of the 1992 Rio Declaration on World Bank's environmental policies and procedures[END_REF] and faced with widespread criticism of the E&S impacts of the projects they supported, MDBs started creating grievance mechanisms in the early 90s. The first ever created, the Inspection Panel of the World Bank, was established in 1993 [START_REF] David | On the reasons for creating the Inspection Panel see inter alia Dana Clark[END_REF] and examines the requests submitted by people who allege they are or will be affected by a project which is supported by the International Bank for Reconstruction and Development (IBRD) and/or the International Development Association (IDA). In November 1995, the Inspection Panel received its fifth complaint, submitted by the Grupo de Acción por el BioBío (GABB) on its "own behalf and that of 385 other concerned people-including 47 Pehuenche, 194 citizens from Concepción (located at the mouth of the Biobío), 145 Chileans from other cities, and three members of Parliament." 30 A petition was also sent to the then-World Bank President James Wolfensohn. The complaint was about the decision of the IFC, made in December 1992, to support the construction of the Pangue dam on the Biobío river basin in Chile by ENDESA, a Chilean electric utility. The Inspection Panel had no choice but to reject the complaint, as the IFC's doings are outside its remit. 31 However, the project showed such serious violations of indigenous peoples' rights and negative environmental impacts, and such an absolute disregard for the fact that the Pangue dam was planned in the framework of a series of dams whose cumulative impacts had been neglected, that Wolfensohn decided to ask Jay Hair, the then-President of the International Union for the Conservation of Nature (IUCN), to perform an independent review of the Pangue dam project.
The Hair report released in July 1997, even heavily redacted by the World Bank, 32 is damning. The Pangue dam had been inaugurated in March of that same year… One of the positive outcomes of this nasty business was nonetheless the establishment of an IFC disclosure policy, of IFC environmental and social safeguards and the creation of the Compliance Advisor Ombudsman, since the Pangue dam case had highlighted the need for an accountability mechanism for the activities of the World Bank Group aimed at promoting private sector investment in development, namely, the activities of both IFC and MIGA.
Other MDBs followed suit: the Interamerican Development Bank (IDB), who created an Independent Inspection Mechanism in 1994, replaced in 2010 by the Independent Consultation and Investigation Mechanism (MICI, for Mecanismo Independiente de Consulta e Investigación); the Asian Development Bank (ADB), who created an Inspection Function in 1995, replaced with the Accountability Mechanism (AM) since 2003; the European Bank for Reconstruction and Development (EBRD), who created the Independent Recourse Mechanism in 2003, replaced with the Project Complaint Mechanism (PCM) in 2010; the African Development Bank (AfDB), who created the Independent Review Mechanism (IRM), entrusted to a Compliance Review and Mediation Unit (CRMU), in 2004… Let us add to this list the Independent Redress Mechanism that is being created in the framework of the Green Climate Fund, and whose terms of functioning and articulation with existing MDBs' accountability mechanisms are still unclear, 33 and the combination of the Social and Environmental Compliance Unit (SECU) with the Stakeholder Response Mechanism (SRM) freshly created by the United Nations Development Programme (UNDP). 34 Except for the Inspection Panel, all these accountability mechanisms can receive complaints about these agencies' support to private project sponsors/clients. 35
Objective and remit of IAMs
The objective of IAMs is to "provide an independent and effective forum for people adversely affected by [Bank]-assisted projects to voice their concerns and seek solutions to their problems", 36 to "provide an opportunity for an independent review of complaints from one or more individual(s) or Organisation(s) concerning a Project which allegedly has caused, or is likely to cause, harm," 37 to provide "people adversely affected by a project financed by the Bank … with an independent mechanism through which they can request the Bank Group to comply with all its own policies and procedures," 38 to "[a]ddress complaints from people affected by [the Bank's] projects (or projects in which those organizations play a role) in a manner that is fair, objective, and equitable; and [e]nhance the environmental and social outcomes of [the Bank's] projects (or projects in which those organizations play a role)," 39 or else to "a. Provide a mechanism and process independent of Management in order to investigate allegations by Requesters of Harm produced by the Bank's failure to comply with its Relevant Operational Policies in Bank-Financed Operations; b. Provide information to the Board regarding such investigations; and c. Be a last-resort mechanism for addressing the concerns of Requesters, after reasonable attempts to bring such allegations to the attention of Management have been made." 40 Thus, the IAMs of MDBs do not record wrongful acts under international law attributable to the MDB at issue and they are not intended as judicial procedures. Generally speaking, their role is threefold:
-To assess, upon request of the people affected-or likely to be affected-by the Bank's activities, the compliance of the Management of the Bank with its own internal rules, that is to say with its policies and procedures related, for instance, to the disclosure of information, environmental and social assessment, or indigenous peoples' rights… If the Management is found not compliant, it does not result in the legal implication of the Bank but it is expected to adopt corrective measures;
-To offer redress for negative environmental and social impacts, based on a problem-solving approach tailored to the needs of the requesters, using techniques such as fact-finding, mediation, consultation, negotiation… Except for the IRM and the MICI, the latter being the least accessible of all IAMs, access to problem-solving (sometimes called dispute resolution or consultation phase) is not conditioned by the fact that claimants allege a breach of the Bank's standards; 41 and
-To provide the Bank with lessons learned from the cases, including recommendations related to changes in MDBs' policies and procedures that would be needed to prevent future noncompliance situations. In this respect, the CAO is the only IAM whose mandate expressly includes direct "advice to the President and IFC/MIGA on broader environmental and social issues related to policies, standards, guidelines, procedures, resources, and systems established to improve the performance of IFC/MIGA projects." 42 As for the other IAMs, this 'lessons learned' function is part of their compliance review and/or problem-solving roles.
36 ADB Accountability Mechanism Policy 2012, para. 103, <http://www.adb.org/documents/accountability-mechanismpolicy-2012?ref=site/accountability-mechanism/publications> (last visited 14 February 2015.) All ADB AM-related documents cited in this article can be accessed from <http://www.adb.org/site/accountability-mechanism/main>.
[START_REF] Pcm | Rules of Procedure[END_REF] As far as the consequences of a problem-solving exercise or a compliance review are concerned, IAMs have varied remits.
The Special Project Facilitator (problem-solving officer) of the ADB AM is responsible for monitoring the implementation of the remedial actions agreed upon during the problem-solving process 44 ; the Compliance Review Panel (AM CRP) 'lost' its power to make recommendations based on its findings in the course of the 2012 review of its policy, but still monitors the implementation of the decisions made by the Board based on the findings of the AM CRP and the remedial actions proposed by the Management. 45 The CAO does not make recommendations but monitors the implementation of agreements reached during an Ombudsman/dispute resolution exercise; 46 regarding its compliance review role, in cases of non-compliance, the CAO monitors the situation "until actions taken by IFC/MIGA assure CAO that IFC/MIGA is addressing the noncompliance. CAO will then close the compliance investigation." 47 The EBRD PCM Officer monitors the agreements reached during problem-solving initiatives; 48 the Experts who perform compliance reviews can make recommendations to: "a. address the findings of non-compliance at the level of EBRD systems or procedures in relation to a Relevant EBRD Policy, to avoid a recurrence of such or similar occurrences; and/or; b. address the findings of noncompliance in the scope or implementation of the Project taking account of prior commitments by the Bank or the Client in relation to the Project; and c. monitor and report on the implementation of any recommended changes." 49 The PCM Officer then monitors the implementation of the Management Action Plan as approved by the Board or the President "until the PCM Officer determines that monitoring is no longer needed." 50 In the framework of the AfDB IRM, the Compliance Review and Mediation Unit monitors the implementation of agreements reached thanks to the problem-solving exercise; 51 when the Bank is found non-compliant by a compliance review, the compliance review report includes both findings and recommendations on "i. any remedial changes to systems or procedures within the Bank Group to avoid a recurrence of such or similar violations; ii. any remedial changes in the scope or implementation of the Bank Group-financed project … ; and/or iii. any steps to be taken to monitor the implementation of the changes …" 52 No provision describes who decides, and which basis, that the monitoring should end.
Finally, the MICI "may … provide its recommendations, views, or observations on findings or systemic issues relating to Relevant Operational Policy noncompliance"; 53 it monitors the "implementation of any action plans or remedial or corrective actions agreed upon as a result of a Compliance Review." 54
Applicable standards
Regarding the standards IAMs are competent to assess compliance with, all MDBs' rules share a common distinction: there are standards the bank itself must live up to, and standards on what is required from borrowers in the implementation of their project. 55 Most banks list these two kinds of requirements in two different categories of instruments. The first category is usually called 'policies'; the second is called either 'performance requirements' (EBRD), 'performance standards' (IFC/MIGA), 'operational safeguards' (AfDB), or else 'safeguard requirements' (ADB). Note that in the framework of the IDB, 'operational policies' (OPs) designate the conditions of the bank's activities, including which responsibilities are the bank's and which are the borrower's. MDBs are then expected not only to comply with the standards directly aimed at their Management but also with their due diligence obligation to check whether clients meet their performance standards throughout the whole life-cycle of the project. Due to the very mission of the banks (development), MDBs' Managements also have an overarching obligation to "do no harm", which means that development bank-supported projects should at the very least not result in the concerned people being worse off... The standards that are applicable to the private sector may or may not be different from the standards applicable to sovereign borrowers depending on the MDB concerned, and the remit of IAMs may or may not be different depending on the nature of the borrower. Obviously, the CAO assesses compliance with standards completely tailored to the IFC and MIGA's relations with private sector clients. Since 2006, these standards have been enclosed in a Sustainability Framework, consisting of an IFC Policy on Environmental and Social Sustainability 56 complemented by eight Performance Standards (PSs) 57 which "define IFC clients' responsibilities for managing their environmental and social risks." 58 In addition, "[t]he World Bank Group Environmental, Health and Safety Guidelines (EHS Guidelines) are technical reference documents with general and industry-specific examples of good international industry practice. IFC uses the EHS Guidelines as a technical source of information during project appraisal. The EHS Guidelines contain the performance levels and measures that are normally acceptable to IFC, and that are generally considered to be achievable in new facilities at reasonable costs by existing technology. When host country regulations differ from the levels and measures presented in the EHS Guidelines, projects are expected to achieve whichever is more stringent." 59 PSs are complemented by Guidance Notes plus an 'Interpretation Note on Financial Intermediaries' and an 'Interpretation Note on Small and Medium Enterprises and Environmental and Social Risk Management'. To date, the IFC Sustainability Framework is considered to be the most advanced and comprehensive set of international standards aimed at the private sector.
52 Ibid., para 52 c).
53 MICI Policy 2014, para. 45.
54 Ibid., para. 49.
55 The purpose of this article is not to describe the substance of MDBs' policies and safeguards applicable to the private sector. The following paper provides a very good overview of the existing standards, using the example of public participation in MDBs' policies: Daniel D. Bradlow and Megan S. Chapman, "Public Participation and The Private Sector: The Role of Multilateral Development Banks in the Evolution of International Legal Standards", 4 Erasmus Law Review (2011), at 91-125.
The IDB applies the same environmental and social safeguards to both sovereign and nonsovereign borrowers 60 . From "September 9, 2013, the ICIM applies to all relevant operational policies approved by the IDB Board of Executive Directors in effect as of that date," 61 and the scope of the MICI's remit does not change depending on the public or private nature of the borrower.
As for the AfDB, it adopted in December 2013 an Integrated Safeguards System (ISS) that includes an Integrated Safeguards Policy Statement and five Operational Safeguards. 62 The ISS is complemented by Environmental and Social Assessment Procedures (ESAPs-"specific procedures that the Bank and its borrowers or clients should follow to ensure that Bank operations meet the requirements of the OSs at each stage of the Bank's project cycle") 63 and Integrated Environmental and Social Impact Assessment Guidance Notes ("IESIA Guidance Notes provide technical guidance for the Bank and its borrowers on specific methodological approaches or standards and management measures relevant to meeting the requirements of the OSs.") 64 The IRM can assess complaints related to sovereign and non-sovereign projects; however, the IRM Operating Rules and Procedures specify that requests related to the "private sector or other non-sovereign guaranteed projects" are eligible only in situations where complainants allege a "breach of the Bank-Group's agricultural, education, health, gender, good governance or environmental policies". 65 Regarding the ADB's AM, it may assess compliance with the Bank's standards whether the project at issue is sovereign or not. 66 Under the Bank Policy OM Section D10/BP on 'Non-Sovereign Operations', ADB must assess each proposed financing to ensure inter alia that it "(i) complies with the relevant provisions in ADB's policies on poverty reduction, safeguards (including environment, involuntary resettlement, indigenous peoples), governance, anticorruption, … (ii) complies with the applicable country partnership strategy and sector policy …" 67 The Safeguard Policy Statement 68 of the ADB lists four Safeguard Requirements (SRs) 69 which mix considerations on the Bank's and clients' responsibilities.
The PCM's remit is no different for public and private EBRD-supported projects. It assesses compliance with "relevant EBRD polic[ies]," 70 in particular the Public Information Policy (PIP) 71 and the Environmental and Social Policy (ESP). 72 The ESP includes the text of the policy itself, which is aimed at the Bank and "outlines how the Bank will address the environmental and social impacts of its projects," and the text of the ten Performance Requirements (PRs) 73 that frame what is expected from clients.
Some limits of IAMs' scope
Finally, when dealing with the scope of the control that IAMs can perform, a crucial consideration should be mentioned: with the exception of the CAO, IAMs' compliance reviews are request-based and project-based. This means that IAMs cannot self-trigger when informed of worrying situations involving their MDB's support to private clients. Nor can they launch a sectoral compliance review, or accept requests implicating what the World Bank Group calls 'Development Policy Lending', 74 an expression which replaced in 2004 the infamous 'adjustment lending,' despite the fact that development policy lending can have significant adverse environmental and social impacts.
The CAO is quite unique in that respect. The CAO Vice President can initiate a compliance review (following the CAO's terminology in force until 2012: a 'compliance audit', and since the 2013 Operational Guidelines: a 'compliance investigation') "based on project-specific or systemic concerns resulting from CAO Dispute Resolution and Compliance casework." 75 Hence, concerns raised in the framework of the Celulosas de M'Bopicua (CMB) & Orion-01/Argentina & Uruguay case led the CAO Vice President to commence a compliance audit related to internal due diligence issues, in order to ensure greater clarity in the implementation of social and environmental appraisal procedures by both IFC and MIGA. In particular, the audit focused on public disclosure of E&S documentation. Following the AD Hydro Power Limited-01/Himachal Pradesh case-that was addressed under the CAO's Ombudsman (now called 'dispute resolution') role 77 and that involved SN Power, a commercial investor and developer of hydropower projects that IFC and MIGA had supported several times-in December 2008 the CAO Vice President requested a compliance appraisal of IFC's/MIGA's due diligence and supervision of health and safety issues on all the projects where SN Power was involved. 78 Following the events of April 2010 in the Gulf of Mexico involving deepwater offshore exploration of oil and gas, the CAO Vice President initiated an investigation to assess IFC's procedures and standards when appraising investments in deepwater offshore oil and gas exploration projects. 79 In the Dinant case, the CAO Vice President initiated in April 2012 an appraisal of IFC's investment in Corporación Dinant in response to concerns raised in a letter to the World Bank president in November 2010 and subsequent discussions between CAO and local NGOs. 80 Findings of the Dinant case resulted in the CAO Vice President initiating the Ficohsa case… 81 The 2012 Financial Markets audit was a bit different because it did not stem from a specific case but from the findings of the CAO made on the occasion of the drafting of an Advisory note as a contribution to IFC's policy review and update. This Advisory note "analyzed 18 real sector and 8 financial intermediary investments, and concluded that there were significant gaps between IFC's E&S requirements and their practical application for FI clients.
74 As described by the World Bank, "Development policy loans provide quick-disbursing assistance to countries with external financing needs to support structural reforms in an economic sector or in the economy as a whole. They support the government policy and institutional changes needed to create a dynamic environment that encourages fair and sustained growth for every segment of society. Over the past two decades, development policy lending-previously called adjustment lending-has accounted, on average, for 20 to 25 percent of total Bank lending. Development policy loans were originally designed to provide support for macroeconomic policy reforms and adjustment to economic crises. Over time, they have evolved to focus on longer-term structural, financial sector and social policy reforms. Loans seek to address complex institutional issues such as strengthening education and health policies, improving a country's investment climate, and addressing weaknesses in governance, public expenditure management and public financial accountability": <http://digitalmedia.worldbank.org/projectsandops/lendingtools.htm> (last visited 14 February 2015.)
The findings of the CAO's appraisal indicated that IFC's activities in the financial sector are creating a potentially increasing risk for IFC to the extent that its FM funding may result in environmental and/or social harm … The compliance appraisal report, issued in June 2011, concluded that there were sufficient grounds to proceed to a CAO audit of IFC's FM investments." 82 Moreover, the CAO provides the Board's Committee on Development Effectiveness (CODE) with a Management Action Tracking Record (MATR), which annually records actions taken by IFC/MIGA in response to CAO's recommendations and findings. 83
III -Do IAMs trigger the accountability of private sector clients?
Before presenting elements that may throw light on this issue, I believe it is useful to specify two things. First, the developments which follow are based on interviews of practitioners/experts of IAMs performed by the IGM's project team between July and November 2014, the study of compliance reviews 'case law' and of the relevant academic literature. Interviews include 17 practitioners or former practitioners and ad hoc experts of/for the CAO (IFC/MIGA), the PCM (EBRD), the MICI (IDB), the IRM (AfDB) and the AM (ADB.) These practitioners and experts were kind enough to share their views and experience and they are protected by confidentiality commitments on the IGMs' team's part. Implicated entities (i.e. MDBs' Management and clients) have not been interviewed yet, so the field material presented here reflects the opinions of IAMs' practitioners/experts only.
Second, there are enterprises and enterprises. This paper focuses on private sector clients of MDBs, which leaves aside the MDBs-supported projects of state-owned companies, considered as public operations. For example, the PCM has investigated many more EBRD-supported projects of state-owned enterprises than purely private projects. 84 One must also note that all enterprises are not equal in terms of awareness of corporate social responsibilities, good business governance and knowledge of their impacts on the ground. In this respect, financial intermediaries can be seen as "typical laggard[s] because [they] do not have a material direct environmental footprint." 85 There also seems to be a general feeling that private businesses are first and foremost accountable to their shareholders, and accountability to other stakeholders is not exactly in the foreground of their preoccupations. 86 However, some experts/practitioners we interviewed also pointed out that when private sector clients consider E&S risks from the angle of potential reputational damage and risks, they are much more willing to know about their E&S impacts and responsibilities; 87 banks, being especially exposed to reputational risks, can have a "lightning fast reaction" 88 to these once they've realized the stakes. Besides, multinational companies could be expected to be more aware of international standards on environmental and social issues, but some interviewees denied this assumption on a divide between multinational clients and domestic clients. One of the interviewees pointed out that the attitude of clients is driven by who ultimately owns the company. Some national companies are ahead of a MDB's standards, and in the same country other corporations have no idea of what corporate social responsibility is. In the end, it depends on whether shareholders value these things. Interviewees noted that usually, clients-whether multinational enterprises or domestic businesses-don't know about the IAMs' work and have no idea of what it is all about. The MDB has to inform them of the existence of an IAM but this information is drowned out by the tons of information the client gets from the bank. In some rare instances, clients have threatened to sue IAMs' staff before they realized what IAMs were and that they were part of the "package deal" of resorting to an MDB. Finally, the 'beliefs' and behaviors of enterprises regarding the scope of their E&S commitments might be different depending on whether they significantly rely on subcontractors and supply chains or not. 89 This said, it is likely that depending on the procedure-problem-solving/dispute resolution/consultation or compliance review/investigation/audit-triggered before an IAM, the extent to which MDBs' private clients may experience some ripple effects from the work of IAMs is different: whereas the problem-solving stage does not necessarily include any consideration related to the Bank's alleged breaches, compliance reviews explicitly focus on the accountability of the Bank, not the client's.
83 The MATR is not available to the public.
84 If one counts only the cases before the PCM (since 2010) that have to date ended up with a compliance review report, two cases out of ten are related to private sector projects (case 2012/1 Paravani HPP and case 2010/1 D1 Motorway Phase I). Almost all others are related to state-owned utilities or companies.
85 Interview of Herman Mulder, Senior Executive Vice President, Group Risk Management, ABN-Amro Bank, by Maartje van Putten in van Putten, op. cit., at 370.
86 van Putten, op. cit., at 245-249.
Along the same line see Bruce Holzer, "From Accounts to Accountability: Corporate Self-presentations in Response to Public Criticism", in Magnus Boström and Christina Garsten (eds.), Organizing Transnational Accountability, Cheltenham/Northampton: Edward Elgar (2008), at 80-97.
88 According to the word of one of the interviewees.
89 Robert Faulkner, "Business and Global Climate Governance. A Neo-pluralist Perspective", in Morten Ougaard and Anna Leander (eds.), Business and Global Governance, London/New York: Routledge (2010), at 105. The scope of IAMs' assessment as regards subcontractors and supply chains will be addressed further in this paper.
Hints of accountability at the problem-solving stage
At the problem-solving stage, the idea should be at a minimum to create an environment that enables stakeholders to voice their concerns and points of view, and to provide the information stakeholders might need to position themselves, so that they can reach a mutually acceptable solution. Some of the IAMs condition access to the problem-solving procedure on the fact that requesters allege a breach of the Bank's standards, others do not.
Regarding the IDB MICI, "The objective of the Consultation Phase is to provide an opportunity to the Parties to address the issues raised by the Requesters related to Harm caused by the failure of the Bank to comply with one or more of its Relevant Operational Policies in the context of a Bank-Financed Operation." Depending on whether the focus is more on the wrongs that must be redressed or more on the breaches of the Bank, the ripple effects on private clients of a complaint before an IAM are arguably stronger in the first situation. This hypothesis is however difficult to ascertain because the practice of the MICI and the IRM as regards bank-supported private projects is too thin. To date the AfDB's IRM has performed only one problem-solving exercise 93 that was related to a project which involved loans both to a public agency and a private client, and the private client was not involved in the problem-solving. 94 In the framework of the MICI, it seems that only three consultations on IDB-supported private projects have been performed through to their end. 95 As for the IAMs whose problem-solving stage is unrelated to alleged violations of MDBs' policies, they more or less have the same philosophy. The EBRD PCM's problem-solving initiatives "ha[ve] the objective of restoring a dialogue between the Complainant and the Client to resolve the issue(s) underlying a Complaint without attributing blame or fault." 96 This doesn't mean that only the claimants and the private clients may be involved: problem-solving initiatives (PSIs) may involve whoever might be relevant to solve the issues, and depending on the kind of issues raised by the claimants, that might not be the client. For instance, in the Tbilisi Railway Bypass 2 case, EBRD Management was invited to participate in the PSI, in particular because the requesters had little idea of how EBRD standards and guidelines concretely applied to their case and Management was best positioned to explain. 97 The ADB Special Project Facilitator engages "with all relevant parties, including the complainants, the borrower, the ADB Board member representing the country concerned, Management, and staff to gain a thorough understanding of the issues to be examined during problem solving." 98 The problem-solving function is "outcome-driven. It will not focus on the identification and allocation of blame, but on finding ways to address the problems of the project-affected people." 99
90 MICI Policy 2014, para. 28.
91 IRM Operating Rules and Procedures 2010, para. 36.
92 Ibid., para. 39: "If the problem-solving exercise is successful, the Director will include in the Problem-Solving Report the solution agreed upon by the Requestors, Management and any interested person."
93 As far as I can tell. Information relating to some projects is not available on the Bank's website.
94 IRM, Sénégal: Dakar-Diamniadio Highway Project, RQ 2011/01, Problem-solving Report, 25 March 2010.
95 MICI, Paraguay -Development of the Industry of Products of the Vegetable Sponge, MICI-PR-2010-001 (successful); Panama -Pando-Monte Lirio Hydroelectric Power Project, MICI-PN-2010-002 (unsuccessful, the company withdrew from the discussions); Colombia -El Dorado International Airport, CO-MICI002-2011, (partly successful.)
96 PCM, Rules of Procedure 2014, Introduction and purpose, at 3.
Regarding the CAO, "the focus of CAO's Dispute Resolution role is on accessing directly those individuals and/or communities affected by the project and helping them, the client, and other relevant stakeholders resolve complaints, ideally by improving environmental and social outcomes on the ground." 100 "As a nonjudicial, nonadversarial, neutral forum, CAO's approach provides a process through which parties may find mutually satisfactory solutions. This role facilitates an approach that ensures equitable treatment of participants in a dispute resolution process." 101 One may note however that the CAO, before the strengthening of its compliance function in 2005-2006, 102 has sometimes included considerations related to whether IFC and the client had prima facie complied with their commitments in the assessment report, which is the first stage of the procedure and is supposed to precede a problem-solving exercise and possibly a compliance audit. 103 As mentioned earlier, to date the EBRD PCM has only had two cases related to private projects, and none of them gave rise to a problem-solving initiative. 104 The ADB AM has received two complaints related to private projects. Concerning the first, the problem-solving reports are not public, but the Compliance Review Panel (CRP) hints at the fact that the mediation process ended because the requesters did not trust the mediation team. 105 The second case was directly handled as a compliance review case, as allowed by the new 2012 Accountability Mechanism Policy, and the compliance report is not completed yet. 106 All in all, the biggest corpus of problem-solving reports is by far offered by the CAO: created in 1999, all its cases relate to private projects and in 2014, 58 requests had been at least partially settled by an ombudsman/dispute resolution exercise. 107
Though problem-solving is by definition tailored to the specifics of each case, the study of IAMs' problem-solving reports, coupled with the experience shared by the interviewees, suggests that lessons in terms of private clients feeling more accountable thanks to IAMs can be drawn. First, the most difficult thing is to have all stakeholders sitting at the same table, including the client who, as stated above, might have no idea of what the IAM is. Some clients are adamant about not discussing the issues raised in the complaint, or else so reluctant that it precludes any prospect of meaningful dialogue. 108 There are often misunderstandings about the IAMs' role; they are mistaken for auditors or judges. When the problem-solving team is able to explain and dispel the distrust, then there is a good chance to create a real dialogue. The client might learn how to build relationships with affected people. It might also gain from the opportunity to set up a fact-finding mission that can provide a common ground for discussion in case there is a disagreement on the facts. 109 Discussions can help clarify who is responsible for what, to the benefit of all stakeholders, and thus lead the client to understand why people are expecting it to take action and how. In the Agri-Vie cases, the client, New Forests Company (NFC), a UK-based forestry company operating established and growing timber plantations, invested after the Ugandan government had evicted the complainants from the lands where they were living, and did "not assume any direct responsibility for the evictions and claim[ed] that it was not involved in carrying out the evictions and was explicitly excluded by the
Hints of accountability at the compliance review stage
Given the compliance review stage's features, the issue of the impact that the practice of IAMs might have in terms of indirect accountability of clients appears more precisely delineated. IAMs are unanimous about the scope of compliance reviews: the purpose is to investigate, upon request of the people affected or likely to be affected by the project at issue, the Bank's compliance with its operational policies and procedures in respect of the design, implementation or supervision of the projects they support. 114 This means that the compliance review stage is driven by two considerations in various proportions: is the MDB compliant or not, in substance and spirit? In case of non-compliance, did the breaches cause significant harm to project-affected people? In other words, has the MDB shown due diligence at the stages of the formulation, processing, and/or implementation of the project and in doing so, has it given due regard to the do no harm principle that should guide its actions? Arguably, this phase only concerns the behavior of the bank, not the client's. But of course it is not that simple.
As the CAO Operational Guidelines state, "[t]he focus of CAO Compliance is on IFC and MIGA, not their client … In many cases, however, in assessing the performance of the project and IFC's/MIGA's implementation of measures to meet the relevant requirements, it will be necessary for CAO to review the actions of the client and verify outcomes in the field." 115 IAMs can not only look into the client's actions to assess the due diligence of the bank but beyond, into its subcontractors' and supply chains' if need be. 116 From this point of view, the CAO's remit is all the wider given that, since the 2005-2006 review of its Operational Guidelines, compliance audits/investigations can check how IFC/MIGA assured itself of the E&S performance of the projects it supports and whether the outcomes of the business activity or advice are consistent with the intent of the relevant policy provisions. 117 The criteria to decide whether to undertake a compliance investigation are also rather open: "-There is evidence of potentially significant adverse environmental and/or social outcome(s) now, or in the future. -There are indications that a policy or other appraisal criteria may not have been adhered to or properly applied by IFC/MIGA. -There is evidence that indicates that IFC's/MIGA's provisions, whether or not complied with, have failed to provide an adequate level of protection." 118 Standards directly aimed at the bank include an obligation to make reasonably sure (due diligence) that the client is complying with its obligations under the performance standards, performance requirements etc. 119 In principle, these standards the client must comply with have gained legal standing by force of the loan agreement 120 ; IAMs look into the provisions of the loan agreement to the extent that it may shed some light on the bank's due diligence and the investigation teams make it clear that they are not investigating the client. Many interviewees stated that some clients were more welcoming than expected during the investigation stage. Hence, some clients use the opportunity to learn about the IFC standards and how they could use them to improve their performance and gain or keep a 'social license' to operate. 121 In the Lukoil case, the April 2008 audit report had found IFC noncompliant on issues related to how IFC assured itself that emissions to air from the Karachaganak Project complied with IFC requirements. In January 2009, Lukoil ended its contractual obligations to IFC by prepaying its outstanding balance, which ended IFC's obligations to assure itself of project performance. However, the findings of the CAO led the company to become interested in how it could improve the monitoring of its impacts and to keep working with IFC after the end of their contract to set up an action plan. 122 In the Visayas case, 123 the first case related to a private project brought before the AM's Compliance Review Panel, the client proved to be very cooperative. 124 All in all, interviews hint that the Management and the Board are at least as reluctant as their clients, if not more, to let IAMs investigate a private project, particularly for reasons of confidential business information.
As stated above, when the MDB is found non-compliant, some IAMs issue recommendations on the remedial actions that are to be taken (the EBRD PCM, the IDB MICI, the AfDB IRM), others do not (the ADB AM since the 2012 version of the AM Policy and the CAO since the 2007 version of its Operational Guidelines). Recommendations are addressed to the MDB at issue, but the line can be very thin between findings concerning the bank / recommendations made to the bank and findings concerning the borrower / recommendations made to the borrower. Typically, findings of IAMs state that the Management should have seen that the client's environmental impact assessment was failing, or else that the client had not conducted meaningful and timely consultations with affected people as required, and should have made sure these compulsory steps were completed before submitting the project to the Board for approval. Typical recommendations request the Management to make sure the client proceeds with the consultations that can still be helpful or to conduct additional studies needed to determine the proper course of action as regards E&S issues that remain unresolved. In other words, when a MDB is found non-compliant because it failed to verify that affected people have been properly consulted by the client, or the environmental impact assessment was grossly insufficient, or pollution prevention and abatement technologies and measures are lacking, it is expected to ensure that the client complies with the performance standards referred to in the loan agreement.
Thus, it is very important to bear in mind that while clients are not subject to the compliance review, they will bear the consequences, financial and/or otherwise, of a finding of non-compliance. For instance, the PCM's compliance review report in the Paravani HPP case recommends that "in addition to effectively monitoring implementation of the Environmental and Social Action Plan EBRD should work with [the client] to prepare and disclose a comprehensive annual report which updates the [Environmental and Social Impact Assessment/ Environmental and Social Action Plan] on which consultation can take place and which can inform future HPP developments within Georgia." 125 From this point of view, the CAO's practice has greatly evolved. Designed much more to perform problem-solving than to conduct compliance reviews, this IAM's audit/compliance investigation function really began working with the appointment in late 2005 of Henrik Linders as Senior Specialist, Compliance Advisor. An early compliance case shows a mix-up of the IFC's and the client's direct legal accountability, which is as interesting as it is potentially damaging, since such confusion could have been used internally as ammunition against the CAO. Indeed, in the second compliance audit the CAO performed, 126 in the COMSUR case (2004), the CAO makes direct recommendations both to the IFC and COMSUR. Besides, the final audit report is entitled "Review of the Capacity of COMSUR to Manage Environmental and Social Responsibility Issues"… 127 Subsequent practice does not show this confusion and compliance audits are careful to stick to findings related to IFC's compliance.
In addition, in practice the client might end up being more or less directly targeted by the monitoring of remedial actions. For example, the 1st Monitoring Report in the Visayas case, while stating that "[t]he focus … is to ascertain the progress made by Management in implementing its remedial action plan for complying with the CRP's Board approved recommendations", 128 specifies that "[t]he CRP reviewed the environmental assessment report prepared by [the client] consistent with the CRP's recommendation … The CRP is of the opinion that [the client]'s long-term ash disposal plan is reasonable and adequately addresses the potential impact and mitigation aspects." 129 In the end, either the client totally disagrees with the requirements imposed by the performance standards and can repay the loan to get rid of unwanted consequences, 130 or it has no
130 In cases where clients decided to repay the loan before due date or to abandon the project, it is often difficult to discern whether the fact that a complaint was submitted to an IAM has influenced their decision. In the Agrokasa case (Peru / Agrokasa-01/Ica, op. cit.) however, it is quite clear that the triggering of the CAO-whose audit report revealed that IFC had 'forgotten' to inform the Agrokasa company of the enhanced due diligence requirements that IFC had to apply because of the adoption of the 2006 Performance Standards-has been instrumental in the decision of Agrokasa to cancel its request for a third loan in September 2009: CAO, CAO at Ten: Annual Report FY 2010 and Review FY 2000-10, Washington DC: CAO, electronic version at <http://www.cao-ombudsman.org/publications/documents/CAO_10Year_AR_web.pdf> (last visited 17 February 2015.)
Conclusions
What kind(s) of accountability of private businesses, if any, is (are) mobilized by IAMs' practice, with the affected people as account-holders? As described in the first part of this article, Stewart's definition of accountability mechanisms is based on three components: "(1) a specified accounter, who is subject to being called to provide account, including, as appropriate, explanation and justification for his conduct; (2) a specified account holder who can require that the accounter render account for his performance; and (3) the ability and authority of the account holder to impose sanctions or mobilize other remedies for deficient performance by the accounter and perhaps also to confer rewards for a superior performance by the accounter." 133 Based on this definition, Stewart counts two categories of accountability mechanisms:
-Delegation-based: electoral, hierarchical, supervisory 134 and fiscal 135 accountabilities; relationships between the accounter and the account-holders involve "a delegation or a transfer of authority or resources … where the accounters are to act in the interest of the grantors/account-holders or designated third persons." 136
-Law-based: legal accountability; the specified account-holder can require that the accounter render account for his performance by legal standards.
Brought back to the example of the accountability of the private sector in the framework of IAM cases, a determination is not easy to make. Regarding the compliance review stage, on the one hand, at the end of the day when MDBs are found noncompliant, the onus is on clients to implement project-level remedial actions as recommended by IAMs/the Management and approved by the Board/the President of the MDB. On the other hand, these remedial measures must be implemented because the bank did not react, consistently with its own standards, to its client's noncompliance with the MDB's performance standards, made binding between the bank and the client via the loan agreement. Put differently, before IAMs the accounters are the banks and the account-holders are the requesters. Clients are for their part legally accountable to the bank, not the requesters. However, if emphasis is put on who is eventually responsible for remedying noncompliance findings, responsibility is shared between the bank (project-level and systemic changes) and the client (project-level changes.) From that point of view, private borrowers are 'fiscally' accountable and indirectly legally accountable. Whether this fiscal accountability and indirect legal accountability can create systemic changes in the client's perception of the scope of its accountability towards affected people is less than certain.
131 Who defend sovereignty considerations.
132 In the Coastal Gujarat Power Limited (CGPL) case for example, the CAO notes that CGPL, a subsidiary of Tata Power, is quite cooperative but the results on the ground remain so far very unsatisfactory: CAO, India / Tata Ultra Mega-01/Mundra and Anjar, CAO CGPL Monitoring Report, 21 January 2015.
133 Stewart (2014), op. cit., at 245.
134 It is "a catchall category for relationships in which a delegation of authority or resources has occurred but in which the grantor does not have the right to control directly the grantee's conduct. Examples include the relations between clients and independent contractors or professionals, between the legislature and administrative agencies, and between states and the international organizations of which they are members. There may or may not be established standards and procedures and giving of reasons for evaluation of the accounter's conduct. Sanctions and other remedies include revocation or nonrenewal of the delegated authority or resources conferred, or other corrective measures such as organizational and policy changes": ibid., at 247.
135 Fiscal accountability "involves financial accounting and audit procedures by which the grantee of funds or other resources accounts for their use to an account holder, often the grantor, in accordance with generally accepted accounting standards and practices. Sanctions can include revocation of the grant and return of funds, denial of future grants, or imposition of more restrictive conditions on the activities of the grantee": ibidem.
136 Ibid., at 246.
Concerning the problem-solving stage, several levels of accountability involving requesters as account-holders are also triggered but they hardly reach clients. First, some interviewees expressed their view that a MDB-funded problem-solving exercise, even entirely free from compliance-related considerations, by its very existence is a way of rendering the bank accountable for the environmental and social harm arising from the projects it supports. Because it has some responsibility in the harm done, the bank has to pay for the process that will lead to a remedy. It could be considered as a form of fiscal accountability to the benefit of affected people. There is also a legal component, which is that the bank policies provide for this problem-solving mechanism. Requesters are then 'legally' entitled to a problem-solving exercise, but here the accounter is the Management, not the client, who is free to decline to participate in a problem-solving process. Some clients might feel accountable as a result of a problem-solving exercise and react accordingly; again, whether this can create systemic changes in the client's perception of the scope of its accountability towards affected people is less than certain.
30 For a detailed account of the Pangue/Ralco dams failures, see David Hunter, Cristián Opaso, Marcos Orellana, "The Biobío's Legacy: Institutional Reforms and Unfulfilled Promises at the International Finance Corporation", in Clark, Fox, Treakle (eds.), op. cit., at 127. See also Bruce Rich, Foreclosing the Future: The World Bank and the Politics of Environmental Destruction, Washington/Covelo/London: Island Press (2013), at 54-55.
31 See on the website of the Inspection Panel: "Chile: Financing of Hydroelectric Dams in the Bío-Bío River (Not Registered)", <http://ewebapps.worldbank.org/apps/ip/Pages/ViewCase.aspx?CaseId=37> (last visited 13 February 2015.)
32 Hunter, Opaso, Orellana, op. cit., at 129-131.
33 Terms of Reference of the Independent Evaluation Unit, the Independent Integrity Unit, and the Independent Redress Mechanism, GCF/B.06/06, 13 February 2014.
34 See on the UNDP website <http://www.undp.org/content/undp/en/home/operations/accountability/secu-srm/> (last visited 13 February 2015.)
35 As regards the UNDP's SECU and SRM, the UNDP Social and Environmental Standards (SES) specify that "Possible Implementing Partners [of UNDP-supported projects] include government institutions (National Implementation Modality), eligible UN agencies, inter-governmental organizations (IGOs), eligible civil society organizations (CSOs), and UNDP (Direct Implementation Modality)"; however, the SES add that "UNDP's Policy on Due Diligence and Partnerships with the Private Sector (forthcoming) stipulates due diligence requirements regarding such partnerships. Projects that may result from such partnerships would be subject to UNDP's screening procedure and may trigger SES requirements": UNDP's Social and Environmental Standards, at 7, respectively notes 7 and 10, available at <http://www.undp.org/content/dam/undp/library/corporate/Social-and-Environmental-Policies-and-Procedures/UNDPs-Social-and-Environmental-Standards-ENGLISH.pdf> (last visited 13 February 2015.)
37 EBRD Project Complaint Mechanism, Rules of Procedure 2014, Introduction and purpose, <http://www.ebrd.com/downloads/integrity/pcmrules2014.pdf> (last visited 14 February 2015.) All PCM-related documents cited in this article can be accessed from <http://www.ebrd.com/work-with-us/project-finance/projectcomplaint-mechanism.html>.
38 AfDB IRM Operating Rules and Procedures 2010, at 1, <http://www.afdb.org/fileadmin/uploads/afdb/Documents/Compliance-Review/IRM%20Operating%20Rules%20and%20Procedures%20-%2016%20June%202010.pdf> (last visited 14 February 2015.) All IRM-related documents cited in this article can be accessed from <http://www.afdb.org/en/aboutus/structure/independent-review-mechanism-irm/requests-register/>.
39 CAO Operational Guidelines 2013, para. 1.1., <http://www.cao-ombudsman.org/howwework/documents/CAOOperationalGuidelines2013_ENGLISH.pdf> (last visited 14 February 2015).
40 IDB MICI, Policy of the Independent Consultation and Investigation Mechanism, 2014, <http://www.iadb.org/document.cfm?id=39313644> (last visited 14 February 2015.) All MICI-related documents cited in this article can be accessed from <http://www.iadb.org/en/mici/independent-consultation-and-investigation-mechanismicim,1752.html>.
41 IRM Operating Rules and Procedures 2010, para. 4 a); MICI Policy 2014, para. 24: "The objective of the Consultation Phase is to provide an opportunity to the Parties to address the issues raised by the Requesters related to Harm caused by the
Bradlow and Megan S. Chapman, "Public Participation and The Private Sector: The Role of Multilateral Development Banks in the Evolution of International Legal Standards", 4 Erasmus Law Review (2011), at 91-125.
56 Latest version: International Finance Corporation's Policy on Environmental and Social Sustainability, 2012, <http://www.ifc.org/wps/wcm/connect/7540778049a792dcb87efaa8c6a8312a/SP_English_2012.pdf?MOD=AJPERES> (last visited 14 February 2015).
57 PS1: Assessment and Management of Environmental and Social Risks and Impacts; PS2: Labor and Working Conditions; PS3: Resource Efficiency and Pollution Prevention; PS4: Community Health, Safety, and Security; PS5: Land Acquisition and Involuntary Resettlement; PS6: Biodiversity Conservation and Sustainable Management of Living Natural Resources; PS7: Indigenous Peoples; PS8: Cultural Heritage.
and measures that are normally acceptable to IFC, and that are generally considered to be achievable in new facilities at reasonable costs by existing When host country regulations differ from the levels and measures presented in the EHS Guidelines, projects are expected to achieve whichever is more stringent."
59 PSs are complemented by Guidance Notes plus an 'Interpretation Note on Financial Intermediaries' and an 'Interpretation Note on Small and Medium Enterprises and Environmental and Social Risk Management'.
58 IFC, "Environmental and Social Performance Standards and Guidance Notes", <http://www.ifc.org/performancestandards> (last visited 14 February 2015).
The Safeguard Policy Statement 68 of
59 Performance Standards on Environmental and Social Sustainability, 2012, paras. 6 and 7, <http://www.ifc.org/wps/wcm/connect/115482804a0255db96fbffd1a5d13d27/PS_English_2012_Full-Document.pdf?MOD=AJPERES> (last visited 14 February 2015).
60 See for example IDB, Environment and Safeguards Compliance Policy, 2006, para. 2.1; IDB, Operational Policy on Indigenous Peoples and Strategy for Indigenous Development, 2006, Part III.
61 MICI, Scope of work, <http://www.iadb.org/en/mici/scope-of-work,8166.html> (last visited 14 February 2015).
62 The Integrated Safeguards Policy Statement "[d]escribes common objectives of the Bank's safeguards and lays out policy principles"; Operational Safeguards (OSs) "are a set of five safeguard requirements that Bank clients are expected to meet when addressing social and environmental impacts and risks. Bank staff use due diligence, review and supervision to ensure that clients comply with these requirements during project preparation and implementation": African Development Bank Group's Integrated Safeguards System Policy Statement and Operational Safeguards, 2013, at 2, <http://www.afdb.org/en/documents/document/afdbs-integrated-safeguards-system-policy-statement-and-operationalsafeguards-34993/> (last visited 14 February 2015). The five OSs are OS1: Environmental and Social Assessment; OS2: Involuntary Resettlement: Land Acquisition, Population Displacement and Compensation; OS3: Biodiversity and Ecosystem Services; OS4: Pollution Prevention and Control, Greenhouse Gases, Hazardous Materials and Resource Efficiency; OS5: Labour Conditions, Health and Safety.
66 Bank Policy OM Section D10/BP, Non-Sovereign Operations, 2013, para. 15, <http://www.adb.org/sites/default/files/institutional-document/31483/om-d10.pdf> (last visited 14 February 2015).
63 ISS 2013, 11. 64 Ibid., 12. 65 Para. 2xi). 67 Ibid., para. 10.
68 Bank Policy OM Section F1/BP, Safeguards Policy Statement, 2013, <http://www.adb.org/sites/default/files/institutionaldocument/31483/om-f1-20131001.pdf> (last visited 14 February 2015).
69 SRs1: Environment, SRs2: Involuntary resettlement, SRs3: Indigenous Peoples, and SRs4: Special requirements for different finance modalities.
70 PCM, Rules of Procedure 2014, Para. 24 b).
71 Public Information Policy, 2014, <http://www.ebrd.com/what-we-do/strategies-and-policies/public-informationpolicy.html> (last visited 14 February 2015).
72 Environmental and Social Policy, 2014, <http://www.ebrd.com/news/publications/policies/environmental-and-socialpolicy-esp.html> (last visited 14 February 2015).
73 PR1: Assessment and Management of Environmental and Social Impacts and Issues; PR2: Labour and Working Conditions; PR3: Resource Efficiency and Pollution Prevention and Control; PR4: Health and Safety; PR5: Land Acquisition, Involuntary Resettlement and Economic Displacement; PR6: Biodiversity Conservation and Sustainable Management of Living Natural Resources; PR7: Indigenous Peoples; PR8: Cultural Heritage; PR9: Financial Intermediaries; PR10: Information Disclosure and Stakeholder Engagement.
CAO Operational Guidelines 2013, para. 4.2.1.
76 CAO, Uruguay / Celulosas de M'Bopicua (CMB) & Orion-01/Argentina & Uruguay, CAO Audit of IFC's and MIGA's Due Diligence of Celulosas de M'Bopicua and Orion Paper Mills, 22 February 2006.
One must also note that all enterprises are
77 CAO, India / AD Hydro Power Limited-01/Himachal Pradesh, Request received on 1st October 2004.
78 CAO, India / SN Power-01/CAO Vice President Request, CAO Appraisal: Case of IFC and MIGA involvement with SN Power with special focus on the Allain Duhangan Hydropower Project in India, 17 December 2009.
79 CAO, Ghana / Tullow Oil, Kosmos Energy & Jubilee FPSO-01/CAO Vice President Request, triggered 19 August 2010.
80 CAO, Honduras / Dinant-01/CAO Vice President Request, triggered 17 April 2012.
81 CAO, Honduras / Ficohsa-01/ CAO Vice President Request, triggered 21 April 2013. 82 CAO Audit of a Sample of IFC Investments in Third-Party Financial Intermediaries, 10 October 2012.
Financed Operation. The Consultation Phase provides an approach that ensures unbiased, equitable treatment for all the Parties. There is no guarantee that a Consultation Phase process will resolve all the concerns to the satisfaction of the Parties." The MICI Policy does not specify who participates in the consultation stage. It only mentions that, at the stage of the assessment of the opportunity to perform a consultation, there can be "[m]eetings with the Requesters, Management, Executing Agency, Private Sector Client, civil society organizations, and/or other stakeholders." 90 The purpose of the AfDB IRM's problem-solving exercise is somewhat different. Although eligibility of requests is conditioned by the fact that they allege a breach by the Bank of its own standards, the objective of problem-solving exercises "is to restore an effective dialogue between the Requestors and any interested persons with a view to resolving the issue or issues underlying a Request, without seeking to attribute blame or fault to any such party." 91 Problem-solving exercises are supposed to include the Management in any case. 92
97 PCM, Tbilisi Railway Bypass 2 (Georgia), 2011/2, Eligibility Assessment Report, 23 September 2011.
98 ADB Accountability Mechanism Policy 2012, para. 128 iii).
99 Ibid., para. 126.
100 CAO Operational Guidelines 2013, para. 1.2.
101 Ibid., para. 3.2.
102 On this, please see further in this article.
103 Elisa Morgera, "From Corporate Social Responsibility to Accountability Mechanisms", in Pierre-Marie Dupuy and Jorge E. Viñuales (eds.), Harnessing Foreign Investment to Promote Environmental Protection, Cambridge/New York: Cambridge University Press (2013), at 343-344.
104 The requesters sought a compliance review, not a PSI, in the Paravani HPP and D1 Motorway Phase I cases (op. cit.)
105 AM CRP, Visayas Base-Load Power Project (Philippines), 2011/1, Compliance Review Panel's Final Report, 11 April 2012.
106 AM CRP, Mundra Ultra Mega Power Project, 2013/01.
107 Figure based on CAO, 2014 Annual Report, Appendix B. Complaint log, FY 2000-14, Washington DC: CAO, at 56-68, electronic version at <http://www.cao-ombudsman.org/publications/documents/CAOANNUALREPORT2014.pdf> (last visited 17 February 2015).
108 See for instance Independent Recourse Mechanism (the predecessor of the PCM), Sakhalin II (Russia), 2005/2, Eligibility Assessment Report, 6 September 2005, <http://www.ebrd.com/downloads/integrity/200501.pdf> (last visited 17 February 2015).
109 CAO, Oyu Tolgoi-01/Southern Gobi (Mongolia), request received on 12 October 2012 and CAO, Oyu Tolgoi-02/Southern Gobi (Mongolia), request received on 11 February 2013.
government." 110 IFC had become involved via its equity investment in Agri-Vie Agribusiness Fund, a private equity fund that had in its portfolio an investment in NFC… At the end of the dispute resolution process, in March 2014 for the first complaint and July 2013 for the second complaint, CAO stated that "All the parties who participated, including the representatives of the affected community, their legal advisors, Oxfam, and the NFC showed considerable commitment, patience, tolerance, determination, creativity and goodwill through the mediation process."111 The outcome was two agreements signed between NFC and, respectively, the Kiboga and Mubende affected people, in which NFC agrees to provide financial and other support to two newly created cooperatives that from now on allow the Kiboga and the Mubende people to own their own lands where they can settle and develop their economic activities.112 Sometimes, problem-solving 'only' leads to increased understanding of how to better handle environmental and social issues arising from the clients' activities. Sometimes problem-solving helps the client mitigate project costs or reputational damage that arose from not addressing E&S issues. Sometimes it results in the client integrating in its corporate structure and processes all the things learned about E&S risks and how the IFC's performance standards can help them prevent damaging situations.113
The first audit was conducted in the Peru / Compania Minera Antamina S.A.-01/Huarmey case, request received on 1st September 2000, closed in January 2005. None of the documents related to this audit is available on the CAO's website. 127 CAO, Bolivia / Comsur V-01/Bosque Chiquitano, Final Report, Review of the Capacity of COMSUR to Manage Environmental and Social Responsibility Issues, June 2004. 128 AM CRP, Visayas Base-Load Power Project (Philippines), 2011/1, First Annual Monitoring Report, 12 July 2013, para. 4.
125 PCM, Paravani HPP (Georgia), 2012/1, Compliance Review Report, 1st January 2014, at 44.
126 129 Ibid.,
choice but to pay for unexpected additional studies or equipment, as provided for by the Action Plan validated by the Board of the MDB. Several interviewees noted that, from this viewpoint, governmental borrowers 131 are generally much more defensive on the implementation of remedial actions than private companies. This nevertheless gives no indication as to whether the fate of project-affected people has concretely improved in the end. 132
CAO Operational Guidelines 2013, para. 4.1.
), and that the Board of IFC had approved an equity and subordinated debt investment in Ficohsa. Thus, IFC had a significant exposure to Dinant not only through its corporate loan to Dinant but also through its equity stake in Ficohsa. This led the CAO Vice President (CAO's head) to trigger in December 2013 the first-ever investigation performed by the complaint mechanism of a multilateral development bank (MDB) on the degree of supervision exerted by a MDB over the environmental and social (E&S) risks attached to its investment in a financial intermediary. 3 This was, besides, in line with the CAO's sectoral audit on IFC's Financial Sector Investments released in February 2013. Principal Investigator of the International Grievance Mechanisms and International Law & Governance (IGMs) project. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 312514. 1 CAO, Honduras / Dinant-01/CAO Vice President Request, CAO Appraisal for Audit, 13 August 2012, at 5 <http://www.caoombudsman.org/cases/document-links/links-188.aspx> (last visited 19 February 2015). ERC Grant Agreement no. 312514. The IGMs project is nested at the CERIC, Faculty of Law and Political Science of Aix-Marseille University, and administered by the French Centre National de la Recherche Scientifique (CNRS).
Ali Kadar
email: [email protected]
Ludovic Peyre
Georges De Sousa
Henri Wortham
Pierre Doumenq
Roger Rahmani
An accurate and robust LC-MS/MS method for the quantification of chlorfenvinphos, ethion and linuron in liver samples
Keywords: Pesticide mixtures, Liver samples, LC-MS/MS, Method validation
Introduction
For many years, pesticides have been used on a broad scale for pest control in agriculture. Despite their outstanding positive influence on farm productivity, these active ingredients are harmful to the environment. Owing to their physicochemical properties and their wide use, many pesticide residues end up in water resources and in agricultural products. Consequently, the entire food chain is exposed to such toxic molecules, which may ultimately reach human beings through bioaccumulation or directly through the consumption of contaminated water or foodstuffs [START_REF] Cao | Relationship between serum concentrations of polychlorinated biphenyls and organochlorine pesticides and dietary habits of pregnant women in Shanghai[END_REF][START_REF] Damalas | Pesticide Exposure, Safety Issues, and Risk Assessment Indicators[END_REF][START_REF] Ding | Revisiting pesticide exposure and children's health: focus on China[END_REF]. Most studies aiming to estimate the dietary exposure of the general population have highlighted that consumers are simultaneously exposed to different residues (Iñigo-Nuñez et al., 2010, Claeys et al., 2011, Chen et al., 2011, Nougadère et al., 2012, Bakırcı et al., 2014[START_REF] Betsy | Assessment of dietary intakes of nineteen pesticide residues among five socioeconomic sections of Hyderabad-a total diet study approach[END_REF], Lozowicka, 2015[START_REF] Szpyrka | Evaluation of pesticide residues in fruits and vegetables from the region of south-eastern Poland[END_REF][START_REF] Lemos | Risk assessment of exposure to pesticides through dietary intake of vegetables typical of the Mediterranean diet in the Basque Country[END_REF]). In France, Crepet et al. (2013) established that the general population was mainly exposed to 7 different pesticide mixtures consisting of 2 to 6 compounds. Among them, a mixture including chlorfenvinphos, linuron and ethion was significantly correlated with basic food items such as carrots and potatoes. After the consumption of these potentially contaminated vegetables, and once these xenobiotics have passed into the body, the blood flow delivers them to the liver for degradation and subsequent elimination. Thus, to evaluate the extent of liver contamination, a sensitive and reliable analytical method is required. Unfortunately, to the best of our knowledge, no analytical procedure simultaneously determining chlorfenvinphos, linuron and ethion in a liver matrix has been published so far.
To date, a literature survey reveals that very few articles have reported the analysis of linuron, chlorfenvinphos or ethion in biological samples. [START_REF] Nguyen | Quantification of atrazine, phenylurea, and sulfonylurea herbicide metabolites in urine by high-performance liquid chromatography-tandem mass spectrometry[END_REF] proposed a methodology based on liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) for the quantification of linuron in urine samples. [START_REF] Cazorla-Reyes | Single solid phase extraction method for the simultaneous analysis of polar and non-polar pesticides in urine samples by gas chromatography and ultra high pressure liquid chromatography coupled to tandem mass spectrometry[END_REF] also developed a method using LC-MS/MS for the determination of this polar herbicide in the same matrix. By contrast, the same authors quantified the non-polar compounds chlorfenvinphos and ethion by gas chromatography coupled to tandem mass spectrometry (GC-MS/MS). [START_REF] Pitarch | Rapid multiresidue determination of organochlorine and organophosphorus compounds in human serum by solid-phase extraction and gas chromatography coupled to tandem mass spectrometry[END_REF] and [START_REF] Raposo | Determination of eight selected organophosphorus insecticides in postmortem blood samples using solid-phase extraction and gas chromatography/mass spectrometry[END_REF] also used GC-MS/MS for the determination of ethion in blood samples. Even if gas chromatography is adequate [START_REF] Deme | LC-MS/MS determination of organophosphorus pesticide residues in coconut water[END_REF][START_REF] Sinha | A liquid chromatography mass spectrometry-based method to measure organophosphorous insecticide, herbicide and nonorganophosphorous pesticide in grape and apple samples[END_REF] for the separation of organophosphorus compounds, it is a less suitable option for phenylurea herbicides, since these are thermolabile [START_REF] Liska | Comparison of gas and liquid chromatography for analysing polar pesticides in water samples[END_REF]. As a result, an analytical protocol using LC separation followed by MS/MS detection would be suitable to estimate the pesticide contamination of the liver.
Thus, the aim of this work was to develop, optimize and fully validate a simple, sensitive and reproducible analytical method for quantitative determination of linuron, chlorfenvinphos and ethion in human liver samples (hepatocytes, microsomes…).
Experimental
Chemicals, materials and biological samples
Trichloroacetic acid, ammonium sulfate salts of research grade purity, and anhydrous dimethylsulfoxide were supplied by Sigma Aldrich (Saint-Quentin Fallavier, France). LC-MS grade methanol and acetonitrile were obtained from Carlo Erba (Val de Reuil, France). Chlorfenvinphos, chlorfenvinphos-d10 (internal standard; IS), ethion and linuron certified standards of purity higher than 99.5% were purchased from Dr. Ehrenstorfer (Augsburg, Germany). Standard stock solutions were prepared by dissolving the pure compounds in acetonitrile and further diluted as required in acetonitrile for calibration standards and sample treatment. Sample extracts were centrifuged using a Thermo IEC Micromax TM RF benchtop centrifuge acquired from Thermo Fisher Scientific (Illkirch, France). Oasis™ HLB (10 mg/1 mL), Strata X ® (10 mg/1 mL) and Sola TM (10 mg/1 mL) solid phase extraction (SPE) cartridges were provided by Waters (Guyancourt, France), Phenomenex (Le Pecq, France) and Thermo Fisher Scientific (Courtaboeuf, France), respectively. A 12-port SPE manifold (J.T. Baker ® ) connected to a KNF Neuberger LABOPORT ® filtration pump (VWR, Paris, France) was used for conditioning, sample loading, drying of the cartridges and elution of the targeted compounds.
All experiments on human tissue were carried out according to the ethical standards of the responsible committee on human experimentation and the Helsinki Declaration. Liver tissue can be mechanically decomposed into cellular (hepatocytes) or subcellular (S9, cytosolic and microsomal) fractions. Here we chose to carry out the study with thermally inactivated hepatocytes (100 °C for 3 min) previously isolated as described by [START_REF] Berry | High-yield preparation of isolated rat liver parenchymal cells: a biochemical and fine structural study[END_REF].
Sample treatment
400 µL of thermally inactivated liver cells at a total protein concentration of 0.5 mg/mL in 100 mM phosphate potassium buffer (pH 7.4) were pipetted into 1.8 mL Eppendorf ® tubes. The samples were spiked with the required amounts of chlorfenvinphos, ethion, linuron and IS before being briefly vortex-mixed. Then, 400 µL of ice-cold acetonitrile was added to the tubes. Centrifugation performed at 16,000 g for 5 min allowed the denatured proteins to precipitate. The supernatant was purified according to the following optimized SPE protocol. The samples were diluted in 6 mL borosilicate glass tubes by addition of purified water in order to obtain a ratio of acetonitrile-water (25:75, v/v) in the mixture. The samples were then loaded onto Thermo Sola TM extraction cartridges, which had been pre-cleaned using 1 mL of methanol, followed by 1 mL of acetonitrile, and finally conditioned using 1 mL of acetonitrile-water (25:75, v/v). Compounds of interest were trapped on the cartridges while interferents were successively eluted with 1 mL of acetonitrile-water (25:75, v/v) and 1 mL of purified water. After 5 min SPE manifold vacuum drying (-10 PSI) of the cartridges, compounds of interest were eluted under vacuum with 2×0.2 mL of pure acetonitrile. Finally, the eluates were diluted 1:1 in purified water prior to analysis.
LC-MS/MS analysis
Compounds were separated and quantified using a Surveyor HPLC analytical system purchased from Thermo Fisher Scientific (Courtaboeuf, France). It consisted of a quaternary low-pressure mixing pump equipped with an integrated degasser, a 20 µL injection loop, a temperature-controlled autosampler set at 10 °C and a column oven kept at 25 °C. A Hypersil end-capped Gold PFP reversed phase column (100 mm×2.1 mm, 3 µm) purchased from Thermo Fisher Scientific (Gif-sur-Yvette, France) was fitted to the Surveyor module. LC separation was achieved at a flow rate of 280 µL/min using a mobile phase composed of 10 mM ammonium formate in methanol (solvent A) and 10 mM ammonium formate in water (solvent B). The gradient program was run as follows: maintain 35% A from 0 to 7 min, linear increase to 100% A from 7 to 9 min, hold 100% A from 9 to 12 min, return to the initial conditions from 12 to 17 min and stabilization for 3 min before the next injection.
The detection and quantification were performed using a TSQ Quantum triple quadrupole mass spectrometer equipped with electrospray ionization (ESI) source. To prevent a contamination of the ESI source, from 0 to 6 min, the column effluent was systematically diverted to the waste by means of a motorized Divert/Inject valve.
The mass spectrometer was operated in multiple reaction monitoring (MRM) positive acquisition mode. The ion transfer capillary was heated to 350 °C and the ESI needle voltage was set at 4000 V. The sheath and auxiliary gas (N2) pressures were set at 40 and 30 (arbitrary units), respectively. A 4 V source collision induced dissociation (CID) offset and a 1.5 mTorr collision gas (Ar) pressure were applied at the collision cell. For each compound, sensitive quantitative determination was performed by summing the MRM transitions displayed in Table 1.
All MS parameters were optimized by direct infusion and the source parameters were subsequently adjusted by flow injection. Data analysis was accomplished using Xcalibur™ 2.1 software.
Analytical method validation
Validation of the analytical method was carried out in accordance with the general guidelines for bioanalytical methods established by the FDA (US Food and Drug Administration, 2013). Validation criteria including sensitivity (lower limit of quantification, LLOQ), linearity, selectivity, accuracy, precision, recovery and stability were investigated.
Limit of quantification
The LLOQ is defined as the lowest concentration that can be measured with acceptable precision and accuracy. For its assessment, four serial dilutions of a sample containing 0.400 µM of each analyte were made by mixing equal volumes of spiked microsomal sample with blank microsomal sample (six replicates). The peak areas of these fortified extracts should be at least five times higher than the background of blank samples (i.e. signal-to-noise ratio, S/N=5/1) to be considered as a proven LLOQ.
The precision and mean accuracy of back-calculated LLOQ replicate concentrations must be of <20% and ±20%, respectively.
Selectivity and matrix effects
The evaluation of the selectivity was conducted after the pretreatment and instrumental analysis of ten different blank human microsome samples. Selectivity was assessed to ensure the absence of any potential endogenous interference co-eluting with the analytes, including chlorfenvinphos-d10 (IS).
Chromatographic signals of pesticides were discriminated on the basis of their specific retention times and MRM responses.
In addition, to assess matrix effects, ten different blank matrices were extracted, further spiked with the standard solution at the LLOQ level and compared with aqueous standards of the same concentration level. According to the guidance, the difference in response should not exceed ±5%.
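As an illustration, the matrix effect at the LLOQ reduces to the relative difference between the mean response of post-extraction spiked blank matrices and that of neat standards. The short Python sketch below shows one possible way of scripting this check; the peak areas and the function name are hypothetical and are not taken from the original data.

```python
import numpy as np

def matrix_effect(post_spiked_areas, neat_std_areas):
    """Matrix effect (%) at the LLOQ: relative difference between the mean peak
    area of post-extraction spiked blank matrices and that of neat (aqueous)
    standards at the same concentration."""
    post = np.mean(post_spiked_areas)
    neat = np.mean(neat_std_areas)
    return 100.0 * (post - neat) / neat

# e.g. ten blank matrices spiked after extraction vs. ten aqueous standards
me = matrix_effect([1010, 995, 1003, 998, 1012, 1001, 996, 1005, 999, 1007],
                   [1000, 1004, 997, 1002, 999, 1001, 1003, 996, 1000, 1005])
print(f"matrix effect = {me:.1f}%  (acceptance: within ±5%)")
```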
Linearity
For each compound, the calibration curve range varied from the validated LLOQ (0.005 µM) to 2.0 µM. Calibration curve standard samples were prepared in replicates (n=6) in a mixture of purified water and acetonitrile (50:50, v/v), and then analyzed. Data were reprocessed and the validity of the linearity was checked through ANOVA statistical analyses (Microsoft Excel). The goodness of fit (GoF) and lack of fit (LoF) were determined and compared with the corresponding theoretical Fisher table values. The fitting of the calibration curves was obtained with a 1/x weighted least squares linear regression.
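The 1/x-weighted regression and the associated GoF/LoF F-tests can be scripted in a few lines. The following Python sketch (NumPy/SciPy) is given only for illustration; the simulated responses are hypothetical, and the lack-of-fit ANOVA is run here on unweighted residuals for simplicity.

```python
import numpy as np
from scipy import stats

def weighted_calibration(conc, resp):
    """1/x-weighted least-squares calibration fit with goodness-of-fit (GoF)
    and lack-of-fit (LoF) F-tests on replicate standards."""
    conc, resp = np.asarray(conc, float), np.asarray(resp, float)
    X = np.column_stack([conc, np.ones_like(conc)])
    W = np.diag(1.0 / conc)                               # 1/x weighting
    slope, intercept = np.linalg.solve(X.T @ W @ X, X.T @ W @ resp)
    fitted = slope * conc + intercept

    # ANOVA partition: pure error (within replicates) vs. lack of fit
    ss_tot = ((resp - resp.mean()) ** 2).sum()
    ss_res = ((resp - fitted) ** 2).sum()
    levels = np.unique(conc)
    ss_pe = sum(((resp[conc == c] - resp[conc == c].mean()) ** 2).sum()
                for c in levels)
    n, k = conc.size, levels.size
    f_gof = (ss_tot - ss_res) / (ss_res / (n - 2))        # regression F
    f_lof = (ss_res - ss_pe) / (k - 2) / (ss_pe / (n - k))
    return {"slope": slope, "intercept": intercept, "r2": 1 - ss_res / ss_tot,
            "GoF": f_gof, "GoF_crit": stats.f.ppf(0.95, 1, n - 2),
            "LoF": f_lof, "LoF_crit": stats.f.ppf(0.95, k - 2, n - k)}

# hypothetical example: six replicates of four calibration levels (µM)
conc = np.repeat([0.005, 0.05, 0.5, 2.0], 6)
rng = np.random.default_rng(0)
resp = 5.0e5 * conc + rng.normal(0.0, 2.0e2, conc.size)  # simulated peak areas
print(weighted_calibration(conc, resp))
```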
Recovery
Recovery rates of pesticides from thermally inactivated human hepatocyte samples were assessed at three concentration levels: low (0.05 µM), medium (5.0 µM) and high (50.0 µM). Three replicates were prepared for each level and extracted. After their analysis, the peak areas from these samples were compared to those from post-extraction blank inactivated hepatocyte samples fortified with the targeted compounds at the same concentration and analyzed. The ratio of the mean peak areas of pre-extraction spiked samples to those of post-extraction spiked samples was used to calculate the individual percentage recovery.
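For illustration, the recovery calculation reduces to a ratio of mean peak areas; a minimal Python sketch (with hypothetical peak areas) is shown below.

```python
import numpy as np

def recovery(pre_extraction_areas, post_extraction_areas):
    """Absolute recovery (%) = mean peak area of samples spiked before
    extraction / mean peak area of blank extracts spiked after extraction,
    together with the RSD (%) of the pre-extraction replicates."""
    pre = np.asarray(pre_extraction_areas, float)
    post = np.asarray(post_extraction_areas, float)
    rec = 100.0 * pre.mean() / post.mean()
    rsd = 100.0 * pre.std(ddof=1) / pre.mean()
    return rec, rsd

# e.g. three replicates at the low level (hypothetical values)
print(recovery([10350, 10510, 10190], [10620, 10480, 10390]))
```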
Precision and accuracy
Using freshly prepared calibration curves, imprecision (intra- and inter-day) and accuracy were back-calculated for the mixture at four concentration levels: low (0.05 µM), medium 1 (2.0 µM), medium 2 (5.0 µM) and high (50.0 µM).
For intra-day imprecision and accuracy, five replicate samples per concentration were prepared and consecutively analyzed on the same day. For inter-day imprecision, the samples' preparation and analysis were carried out in duplicate at the same spiking levels, and repeated on six different days.
Imprecision was expressed as the relative standard deviation (RSD%) and accuracy was calculated as the mean percentage deviation (A%) from the spiked value. The acceptance criteria for intra- and inter-day imprecision were ≤15% and, for accuracy, between 85 and 115% of the nominal concentrations.
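A short Python sketch of these accuracy and imprecision calculations is given below; the back-calculated concentrations are hypothetical, and the acceptance limits shown correspond to the non-LLOQ levels.

```python
import numpy as np

def accuracy_and_rsd(back_calculated, nominal, acc_limits=(85.0, 115.0), rsd_max=15.0):
    """Accuracy (A%, mean back-calculated concentration relative to the nominal
    value) and imprecision (RSD%) for one spiking level, with a pass/fail flag."""
    x = np.asarray(back_calculated, float)
    a_pct = 100.0 * x.mean() / nominal
    rsd = 100.0 * x.std(ddof=1) / x.mean()
    passed = acc_limits[0] <= a_pct <= acc_limits[1] and rsd <= rsd_max
    return a_pct, rsd, passed

# e.g. five intra-day replicates back-calculated at the 2.0 µM (medium 1) level
print(accuracy_and_rsd([1.97, 2.05, 1.92, 2.08, 1.99], 2.0))
```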
Stability of pesticides in the matrix extract
Stability tests were conducted in triplicate with processed samples that had been previously spiked at a concentration level of 5 µM. Different storage conditions were tested: 72 h in the autosampler tray at +12 °C; three cycles of freezing (-20 °C, 15 h) and thawing (room temperature); and long-term storage at -20 °C for either 1 month, 3 months or 9 months. The results were calculated using freshly prepared calibration curves. The imprecision and accuracy calculated for samples' stability should be below 15% and between 85 and 115% of their nominal levels, respectively.
Results and discussion
Method development
Mass spectrometry optimization
Direct infusion of individual compound and IS solutions at a concentration of 1 mg/L in water-methanol (50:50, v/v), 10 mM ammonium acetate in water-methanol (50:50, v/v) or 10 mM ammonium formate in water-methanol (50:50, v/v) was carried out to select the best solvent mixture phase, precursor and product ions. The individual mass spectra of each molecule obtained in positive ion mode showed the presence of both an abundant pseudo-molecular ion [M+H]+ and a reproducible stable sodium adduct [M+Na]+ signal. For all the compounds, the best [M+H]+/[M+Na]+ signal ratio was obtained with the standard solution containing 10 mM ammonium formate. Once the precursor ion was chosen, the optimum tube lens voltage was automatically optimized. Then, analytes were fragmented by applying the collision energy giving the highest abundance for each product ion. The optimized source parameters, MRM transitions and settings were then included in the mass spectrometry acquisition method.
Chromatographic conditions
After optimization of the mass spectrometry parameters, different liquid chromatography columns were tested. The first evaluations were achieved on two core-shell LC columns from Phenomenex (Le Pecq, France): a Kinetex ® C18 and a Kinetex ® PFP (100 mm × 2.1 mm, 3 µm). Both columns enabled a satisfactory separation of linuron, chlorfenvinphos, ethion and IS. Less peak tailing was nevertheless obtained with the Kinetex PFP, therefore contributing to a significant improvement in sensitivity, especially for chlorinated molecules such as chlorfenvinphos and linuron. This column was initially selected, but the drawback of using this specific stationary phase appeared later during the study. In fact, a high back pressure was observed after only a short period of use. As a consequence, a fully porous Thermo Hypersil TM Gold PFP column was used instead. A new elution gradient was therefore optimized for this column and enabled optimum resolution and compound detection to be achieved, as already described in "Experimental - LC-MS/MS analysis". Finally, the analytes' signals were appropriately separated and chromatograms displayed good peak shapes, as presented in Fig. 1. The carry-over in the chromatographic system was measured by injecting three blank solvents after the highest calibration standard.
Sample purification
A suitable optimization of the extraction step was then needed to achieve a satisfactory LLOQ and selectivity for the detection of the pesticides mixture in such small volumes of human liver extracts.
For this purpose, after the protein precipitation step described above, an additional solid phase extraction step was preferred over a liquid-liquid extraction to remove potential additional endogenous interferents such as phospholipids or inorganic salts contained in the cell seeding medium (Yaroshenko and Kartsova, 2014) and polyethylene glycol leached from plastic containers [START_REF] Weaver | Identification and reduction of ion suppression effects on pharmacokinetic parameters by polyethylene glycol 400[END_REF].
SPE sorbent types such as polymeric and silica-based reversed-phase sorbents seemed appropriate for the simultaneous extraction of organophosphorus and neutral phenylurea pesticides from biological matrices [START_REF] Cazorla-Reyes | Single solid phase extraction method for the simultaneous analysis of polar and non-polar pesticides in urine samples by gas chromatography and ultra high pressure liquid chromatography coupled to tandem mass spectrometry[END_REF]. The few studies dedicated to their purification from human body fluids indicated that both polymeric reversed-phase cartridges [START_REF] Nguyen | Quantification of atrazine, phenylurea, and sulfonylurea herbicide metabolites in urine by high-performance liquid chromatography-tandem mass spectrometry[END_REF][START_REF] Raposo | Determination of eight selected organophosphorus insecticides in postmortem blood samples using solid-phase extraction and gas chromatography/mass spectrometry[END_REF] and silica-based reversed-phase sorbent cartridges [START_REF] Pitarch | Rapid multiresidue determination of organochlorine and organophosphorus compounds in human serum by solid-phase extraction and gas chromatography coupled to tandem mass spectrometry[END_REF] could be used. In this study, the aim was to minimize elution volumes in order to simplify sample treatment by avoiding the concentration step after the elution of the analytes. As a result, the polymeric reversed-phase (PRP) sorbents were chosen for their higher loading capacities, which allowed the use of lower sorbent amounts associated with reduced elution volumes. In this work, three different commercial PRP cartridges were compared (Oasis TM HLB, Strata X ® and Sola TM ) with the goal of choosing the one giving the maximum recoveries with the minimum elution volume. To obtain comparable results, a common SPE methodology was applied for all the cartridges. First, after being successively rinsed using 1 mL of methanol and 1 mL of acetonitrile, the cartridges were conditioned with 1 mL of purified water. Then, a blank liver cell sample containing all the test compounds at 20 µM was loaded on the cartridge. Then, a washing step consisting of 1 mL of purified water, followed by a 5 min drying step and an elution of the target compounds with 2×0.2 mL of pure acetonitrile under vacuum (-5 PSI), were applied. Acetonitrile was chosen for its LC-MS compatibility and its higher elution strength in comparison with methanol. Indeed, the latter did not allow for satisfactory elution of ethion even when a larger volume was used (up to 2 mL). After dilution (50:50, v/v) in purified water, the extract was transferred to an injection vial for analysis. The performance of the SPE cartridges tested is displayed in Table 2. The best results, in terms of relative recoveries and RSD values, were achieved using Sola TM cartridges with values ranging from 97 to 100%.
As incubation of both hepatocyte and microsome samples was stopped by adding cold acetonitrile (50/50, v/v), further optimization was performed to determine the maximum acceptable amount of acetonitrile in the liver extracts to be purified by SPE. To this end, as described above, blank samples were prepared and loaded onto Sola TM cartridges. Then, an elution/retention profile of the analytes was established with the successive addition of mixtures of water and acetonitrile, as shown in Table 3.
The experiments were performed in duplicate. All the analytes were retained at acetonitrile proportions below 30%. Linuron, as the most polar compound, was the first one to be desorbed. As a consequence, we considered that the liver extracts from kinetic experiments should be diluted with the appropriate amount of water (to acetonitrile-water 25:75, v/v) prior to the solid phase extraction.
Performance of the analytical method
The chromatograms of liver cell samples were visually checked and compared with chromatograms obtained from reference standards in neat solvents. As they showed no disturbing peaks, the selectivity was approved. Besides, the matrix effect assessed at the LLOQ level did not exceed 0.6% and was considered negligible for all the pesticides. Additionally, no cross-contamination was observed when three blank solvent samples were injected consecutively after the highest calibration standard.
The results of the linearity, the intra-and inter-day precision and accuracy, as well as the stability of pesticides in the matrix extracts are summarized in Table 4.
The results from the Goodness of Fit and Lack of Fit of the Fisher significance tests (GoF-LoF) indicated that the linear regression model was validated in the defined range of concentrations for all compounds. In addition, the determination coefficient was systematically verified and always gave satisfactory values (r 2 >0.998).
Recovery data collected on three replicates at three concentration levels covering a wide range varied from 92.9% to 99.5%, with a maximum RSD of 2.3%, demonstrating the efficiency of the SPE purification process.
Moreover, intra- and inter-day imprecision and accuracy were all within the established ranges of acceptance. Finally, stability data revealed that, whichever test was used, no significant loss was noticed, indicating that all the analytes were stable under the studied working conditions.
All the evaluated performance parameters were in accordance with FDA recommendations, making this method reliable and rugged for future in vitro liver metabolism studies.
Conclusions
To our knowledge, this is the first reported analytical method for the determination of linuron, chlorfenvinphos and ethion in liver samples. This LC-MS/MS method, developed with one stable isotope-labeled internal standard, exhibited very satisfactory performance in terms of selectivity, limit of quantification, linearity, recovery, precision and accuracy, in compliance with current FDA requirements. A user-friendly sample treatment process providing excellent recoveries and high sample cleanliness was obtained after appropriate optimization of conditions. Indeed, protein precipitation, solid phase extraction sorbent type, volume and solvent elution strength were optimized. This method is optimal for conducting metabolism studies through the accurate monitoring of parent compound loss in in vitro human liver samples. Furthermore, it would probably also be suitable for the determination of the above-mentioned pesticide mixture in human liver biopsies or mammalian liver samples.
Table 1. Ions monitored under the MRM mode by LC-MS/MS (a) and their relative intensities (%).

Compound                   precursor ion (b)   product ion 1   product ion 2   collision energy
                           (m/z)               (m/z)           (m/z)           (V)
Linuron                    249                 182 (100)       161 (23)        20-25
chlorfenvinphos            359                 155 (100)       127 (45)        18-22
chlorfenvinphos-d10 (IS)   369                 165 (100)       133 (66)        16-25
Ethion                     385                 143 (100)       97 (88)         35-45

(a) The compounds were quantified with both product ion 1 and product ion 2.
(b) Ionized in the positive mode with a 4 V CID offset.
Table 2. Percentage recoveries and associated RSD (in brackets) of the target analytes testing different SPE cartridges (n=3).

                  Oasis TM HLB   Strata X ®   Thermo Sola TM
Linuron           92 (4)         87 (11)      100 (2)
chlorfenvinphos   90 (8)         83 (12)      97 (2)
Ethion            85 (9)         43 (3)       98 (5)
Table 3. Elution of the pesticides mixture from Sola TM sorbent cartridge (n=2).

Elution mixture              Elution rate (%)
acetonitrile/water
(v:v, 1 mL)                  linuron   chlorfenvinphos   Ethion
0/1                          0         0                 0
0.05/0.95                    0         0                 0
0.10/0.90                    0         0                 0
0.15/0.85                    0         0                 0
0.20/0.80                    0         0                 0
0.25/0.75                    0         0                 0
0.30/0.70                    1         0                 0
0.35/0.65                    14        3                 0
0.40/0.60                    56        29                0
1/0                          100       100               100
Table 4. Results of the analytical method validation: linearity (n=6), recovery (n=3), intra-day accuracy (n=5), inter-day accuracy (n=2, 6 days), stability (n=3). For each analyte, the first value is the parameter and the second is the RSD%.

Parameter               Linuron        chlorfenvinphos   Ethion         Limits
Linearity (GoF-LoF)
  LoF                   0.164          1.019             0.055          <4.51
  GoF                   4840           21868             3084           >>5.39
Recovery (R%, RSD%)
  Low                   99.2   1.0     97.0   2.0        92.9   1.9
  Medium                98.7   2.2     95.1   1.8        94.2   1.6     n.a.
  High                  99.2   1.3     97.6   1.7        95.0   1.2
Accuracy
 Intra-day (Ar%, RSD%)
  Low                   100.0  1.3     96.6   4.9        93.8   2.6     ±20%, ≤20%
  Medium 1              98.4   3.5     99.6   4.2        93.6   0.8     ±15%, ≤15%
  Medium 2              97.6   6.2     94.6   7.1        93.9   3.7
  High                  100.0  0.8     99.0   1.6        93.3   4.2
 Inter-day (Br%, RSD%)
  Low                   99.4   4.1     98.2   6.9        92.8   4.8     ±20%, ≤20%
  Medium 1              99.8   6.7     100    6.6        91.6   2.9     ±15%, ≤15%
  Medium 2              98.9   8.1     97.2   8.2        90.9   5.9
  High                  99.5   2.6     100.0  3.1        93.1   6.3
Stability
 Freeze-thaw (SFt%)
  -20/20 °C, 15 h       101.1  5.3     99.9   5.7        100.0  4.2     ±15%, ≤15%
 Long term (SLt%)
  1 month               100.7  6.1     99.0   5.5        97.8   3.0     ±15%, ≤15%
  3 months              102.1  5.7     101.1  5.8        104.2  4.1
  9 months              99.8   6.0     99.5   5.1        102.2  3.7
 Autosampler (SA%)
  72 h                  99.9   5.8     101.2  4.9        100.2  4.6     ±15%, ≤15%

GoF-LoF: Goodness of Fit - Lack of Fit.

Fig. 1. LC-MS/MS chromatograms obtained from fortified inactivated hepatocytes extract (LLOQ).
Bioanalytical Methods Using Chromatography-Mass Spectrometry. J. Anal. Chem. 69, 311-317.
Acknowledgements
This work was funded by the Agence Nationale de la Recherche (under reference ANR-2008-CESA-016-01), by the Office National de l'Eau et des Milieux Aquatiques (ONEMA) and by the Agence Nationale de Sécurité Sanitaire de l'alimentation, de l'environnement et du travail (ANSES) in the frame of the MEPIMEX Project. The authors also gratefully acknowledge the supply of the LC-MS/MS system at the Laboratoire de l'environnement de Nice under a collaboration agreement with Patricia Pacini.
ABSTRACT
A generic method for the determination of linuron, chlorfenvinphos and ethion in liver samples by liquid chromatography-tandem mass spectrometry (LC-MS/MS) is described. In vitro sample treatment was performed using solid phase extraction (SPE) after protein precipitation. The lowest elution solvent volumes providing the highest recoveries were obtained with Sola TM polymeric reversed-phase cartridges. Gradient elution using 10 mM ammonium formate in methanol (A) and 10 mM ammonium formate in water (B) was used for the chromatographic separation of analytes on a Hypersil TM end-capped Gold PFP column (100 mm×2.1 mm, 3 µm). All analytes were quantified without interference, in positive mode using multiple reaction monitoring (MRM) with chlorfenvinphos-d10 as internal standard. The whole procedure was successfully validated according to the Food and Drug Administration (FDA) guidelines for bioanalytical methods. The calibration curves for all compounds were linear over the concentration range of 0.005-2 µM, with coefficients of determination higher than 0.998. A lower limit of quantification of 0.005 µM was achieved for all analytes. Compound extraction recovery rates ranged from 92.9 to 99.5% with a maximum relative standard deviation (RSD) of 2.3%. Intra- and inter-day accuracies were between 90.9% and 100%, and imprecision varied from 0.8 to 8.2%. Stability tests proved all analytes were stable in liver extracts during instrumental analysis (+12 °C in the autosampler tray for 72 h), after three successive freeze-thaw cycles, and at -20 °C for up to 9 months. This accurate and robust analytical method is therefore suitable for metabolism studies of pesticide mixtures.
A V Ereskovsky
email: [email protected]
A Geronimo
T Pérez
Asexual and puzzling sexual reproduction of the Mediterranean sponge Haliclona fulva (Demospongiae): life cycle and cytological structures
Keywords: asexual reproduction, budding, Haplosclerida, oviparity, reproductive cycle
Despite the common assumption that most Haplosclerida are viviparous sponges, this study of the reproductive cycle of Haliclona fulva demonstrates that this species is actually oviparous and gonochoric. Intriguingly, not a single male was recorded in 15 months of sampling. Oogenesis is synchronous, starting in late April and terminating in September. Asexual reproduction is represented by cyclic budding, which occurs from late November to early March. During the season of asexual reproduction, the reproductive effort represents from 0.21% to 1.49% of the parental tissue, with the highest values being recorded in winter. During the season of sexual reproduction, the female reproductive effort ranges 0.05-1.15%, with the highest effort appearing in early summer. However, no significant correlation between reproductive efforts and seawater temperature fluctuations could be detected. We describe the ultrastructural morphogenesis of the buds for the first time in this species. This process is asynchronous, with buds of variable size being attached to the maternal apical surface via a short stalk. Young buds lack any particular anatomical organization, whereas bud maturity is characterized by the development of mesohyl and by the appearance of an increasing number and volume of lacunae in the central part of each bud. At this stage, buds harbor numerous small choanocyte chambers scattered throughout the inner region, and all cell types known from the mesohyl of parental sponges: microgranular cells, granular cells, archaeocytes, endopinacocytes and exopinacocytes, central cells, and sclerocytes.
Pérez-Porro et al. 2012;Di Camillo et al. 2012;[START_REF] Mercurio | Sexual reproduction in Sarcotragus spinosulus from two different shallow environments[END_REF][START_REF] Zarrouk | Sexual Reproduction of Hippospongia communis (Lamarck, 1814) (Dictyoceratida, Demospongiae): comparison of two populations living under contrasted environmental conditions[END_REF][START_REF] Reverter | Secondary Metabolome Variability and Inducible Chemical Defenses in the Mediterranean Sponge Aplysina cavernicola[END_REF]. However, there is less knowledge about the sponges from order Haplosclerida, which are relatively common and abundant in the Mediterranean Sea. Up to now, we have data on reproductive cycles for only four species of Haplosclerida [START_REF] Scalera Liaci | Raffronto tra il comportamento sussuale di alcune Ceractinomorpha[END_REF][START_REF] Maldonado | Gametogenesis, embryogenesis, and larval features of the oviparous sponge Petrosia ficiformis (Haplosclerida, Demospongiae)[END_REF].
In this work, we investigate Haliclona fulva (TOPSENT 1893), a common shallow-water Mediterranean demosponge of the order Haplosclerida, family Chalinidae. We chose H. fulva for this study because this species regularly uses budding in its life cycle, unlike other haplosclerids, which have never been described to possess this mode of asexual reproduction. Moreover, as we show here, it is the only oviparous species in the family Chalinidae. We evaluate the incidence of sexual and asexual reproduction in the life cycle of Haliclona fulva and describe budding in H. fulva thoroughly, detailing the cellular composition of buds at different stages of development and comparing them with parental tissue.
Methods
Biological model
Haliclona (Halichoclona) fulva is a common Mediterranean sponge species, individuals of which form thick crusts (5-15 mm thickness) that vary in color from orange to red and which inhabit shaded benthic communities, such as semi-dark submarine caves or coralligenous formations, at depths of 5-50 m (Fig. 1A).
Sampling site
Sponges were collected by SCUBA diving at depths of between 14 and 16 m at a site called Grotte à Corail, located at Maire Island (Marseilles Bay). This sampling site was equipped with a permanent temperature recorder (HOBO Tidbit Data Logger). From September 2007 to December 2008, between four and nine individuals were collected monthly (during June and August 2008, the sponges were sampled twice), which represents a total of 103 individuals studied (Table 1). Only one month (October 2007) could not be sampled due to bad weather conditions. In order to pinpoint the period of budding, we also examined a great number of underwater photographs taken by scientific divers at different sites in the Marseille region from 1999 to 2009.
Morphological and ultrastructural analysis
To characterize the life cycle and assess the reproductive effort, samples were preserved in Bouin's fixative. Tissue fragments were then dehydrated through an ethanol series and embedded in paraffin. Serial sections of 6 μm thickness were mounted on glass slides and stained with Masson-Goldner's trichrome hematoxylin, and then observed under a WILD M20 light microscope.
For semithin sections and for transmission electron microscopy (TEM) investigations, sponges were fixed in a solution composed of one volume of 25% glutaraldehyde, four volumes of 0.2 M cacodylate buffer, and five volumes of filtered seawater for 2 h before being post-fixed in 2% OsO 4 in seawater at room temperature for 2 h. After fixation, samples were washed in 0.2 M cacodylate buffer and distilled water successively, and finally dehydrated through a graded ethanol series. Specimens were embedded in Araldite resin. Semithin sections (1 µm in thickness) were cut on a Reichert Jung ultramicrotome equipped with a "Micro Star" 45° diamond knife before being stained with toluidine blue, and observed under a WILD M20 microscope. Digital photos were taken with a Leica DMLB microscope using the Evolution LC color photo capture system. Ultrathin sections (60-80 nm) were cut with a Leica UCT ultramicrotome equipped with a Drukkert 45° diamond knife. Ultrathin sections, contrasted with uranyl acetate, were observed under a Zeiss-1000 transmission electron microscope (TEM).
Data analysis
Calculations of sexual reproductive effort (sRE) were carried out on serial histological sections. For each specimen, digital photographs of 16 histological sections were analyzed. Four photographs per serial section were taken. To avoid the overlapping of reproductive products that would lead to over-estimation, photographs of tissue were taken at least 200 µm from each other. The four photographs provided a total surveyed area of 1 mm 2 per sponge. We determined the number of sexually active sponges over time, counted the number of reproductive elements, and calculated the area of each reproductive element within the tissue sample, using ImageJ Software (http://rsb.info.nih.gov/ij/index.html). Reproductive elements were related to the overall surface of the section, and reproductive effort could thus be expressed as a percentage of reproductive tissue (mean ± SD).
In order to calculate the asexual reproductive effort (aRE), the number of buds per individual was counted and bud area was estimated. These data allowed us to obtain the total surface of buds relative to the sponge surface, with aRE also expressed as a percentage of reproductive tissue.
The nonparametric Kruskal-Wallis test was used to determine the seasonal variability of each reproductive effort. To assess the putative influence of seawater temperature, we then applied a Spearman correlation test. Statistics and graphs were performed using RStudio (R Development Core Team, 2012).
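Although the original analyses were run in R (RStudio), the same calculations can be sketched in Python with SciPy, as illustrated below; the reproductive effort values, month labels and temperatures are hypothetical and serve only to show the structure of the tests.

```python
import numpy as np
from scipy import stats

def reproductive_effort(element_areas_mm2, surveyed_area_mm2=1.0):
    """Reproductive effort (%): summed area of reproductive elements (oocytes
    or buds) relative to the surveyed tissue area of one individual."""
    return 100.0 * np.sum(element_areas_mm2) / surveyed_area_mm2

# hypothetical monthly sRE values (one list of individuals per month)
monthly_sre = {"May": [0.21, 0.35, 0.18], "Jun": [0.62, 0.80, 0.55],
               "Jul": [0.85, 0.95, 0.75], "Aug": [1.10, 1.25, 0.95]}
h, p = stats.kruskal(*monthly_sre.values())           # seasonal variability
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")

# association between monthly mean sRE and mean seawater temperature
mean_sre = [np.mean(v) for v in monthly_sre.values()]
mean_temp = [16.0, 19.5, 21.5, 23.0]                  # °C, hypothetical
rho, p_t = stats.spearmanr(mean_sre, mean_temp)
print(f"Spearman: rho = {rho:.2f}, p = {p_t:.3f}")
```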
Results
Reproductive cycle
No occurrence of gametogenesis was found between November 2007 and April 2008, nor from November to December 2008. In 2008, we observed oogenesis starting in late April and ending in September (Table 1, Fig. 2). No hermaphroditic individuals were observed; thus, Haliclona fulva appears to be gonochoric and oviparous, given the absence of observable embryo development. Through examination of our sponge collection and in situ photographs, we were able to observe the beginning of the budding period of H. fulva in early November and the end in April (Table 1, Fig. 2), which indicates an alternation between sexual and asexual processes.
Sexual reproduction
No male reproductive elements were recorded during this study of 103 individuals collected from this subpopulation. Of the 44 individuals observed during this period, 7 individuals lacked reproductive elements, but not a single male was recorded.
Oocytes were observed throughout the mesohyl, with the exception of the most superficial layer of the sponge tissue below the exopinacoderm. Oogenesis was synchronous within a mother sponge and in the entire population. Young oocytes were of small size, 14-24 µm in diameter. At this previtellogenic stage, oocytes (35-47 µm in diameter) had an irregular shape, a consequence of their active movement in the maternal mesohyl and phagocytosis of amoebocytes (Fig. 3A,B). During vitellogenesis, oocytes increased significantly in size to reach 50-77 µm in diameter. When the yolk started to accumulate, the oocytes amassed near exhalant canals (Fig. 3C,D). At this stage, they were surrounded by a follicle. At the end of oogenesis, the eggs penetrated into the exhalant canals to be released into the seawater column through the aquiferous system.
Reproductive effort
The calculation of monthly mean sexual reproductive effort (sRE) was possible for females only and revealed significant variation over time (Kruskal-Wallis test: χ² = 24.40, K = 24.28, df = 8, p = 0.002), with a minimum of 0.05% in April 2008 and a maximum of 1.15% in late August 2008 (Fig. 4A). Furthermore, the observed interannual variation is supported by a significant difference between sRE in September 2007 (2.70%) and September 2008 (0.44%) (Mann-Whitney-Wilcoxon: U = 3, p = 0.009).
The calculation of asexual reproductive effort (aRE) revealed a minimum of 0.21% for March 2008 and a maximum of 1.49% for January 2008 (Fig. 4B). We used the Kruskal-Wallis test to compare median aRE values for January, February and March, which showed that the apparent differences were not significant (χ² = 3.84, K = 3.84, df = 2, p = 0.147). A Spearman correlation statistic was used to quantify the association between reproductive effort and temperature fluctuations. No significant correlation could be detected between temperature fluctuations and either sRE or aRE (Spearman rank correlation: r = 0.130, p = 0.843; and r = 0, p = 1, respectively).
Morphological observations of budding
Buds were mostly concentrated on the central part of the sponge surface (Fig. 1B). They were attached to the maternal apical surface by a short stalk (Fig. 5A), and were of different sizes and shapes due to their asynchronous development. Buds were oval or spherical, their mean surface was 0.5 mm 2 and their diameter ranged 0.5-3 mm (Fig. 5A,B). Their density also differed among specimens.
The beginning of budding was marked by the formation of small, irregular protuberances that emerged from the sponge surface. The buds then grew gradually as their apical region swelled (Fig. 5A,B). Bud skeletons consisted of oxea arranged uniformly at the periphery of the bud and in the mesohyl, but at this early stage of development the bud surface remained smooth. At the histological level, early buds did not possess any particular anatomical organization: there was no developed ectosome or choanosome (Fig. 5C). They consisted of a mass of compact cells, with a higher density of cells in the central part than at the periphery (Fig. 5C). Cells with inclusions (microgranular, granular cells), as well as cells with irregular shapes and long, thin cytoplasmic extensions (lophocytes, archaeocytes), were uniformly distributed throughout the bud tissue.
Just before their detachment, the external morphology and anatomy of mature buds changed. At this stage, buds clearly protruded from the sponge surface. They were oval in shape and had a pleated surface. Their central part slackened due to a significant increase in the number and volume of internal lacunae. Meanwhile, the mesohyl developed with a synthesis of collagen bundles and spicules (Fig. 5D). In contrast with the early stage, many small choanocyte chambers, composed of a few choanocytes and a central cell, were scattered through the inner region of the buds (Fig. 5D).
Cytological composition of the buds
Many choanocytes could be observed in aggregates in the central part of young buds, but at this stage of development, they did not yet form choanocyte chambers (Figs. 5C,6A). These cells were spherical or oval and measured about 3.2 µm in width and 4.3 µm in height. Their nucleus was oval (~1.5 µm in diameter), lacked a nucleolus, but featured prominent heterochromatin (Fig. 6A). In mature buds, choanocyte chambers measured 15-18.5 µm in diameter (Figs. 5D,6B).
The central cells were large amoeboid-like cells that were present inside of each choanocyte chamber (Fig. 6B,C). The central cells were irregular and branched in shape and had numerous cytoplasmic projections. Their nucleus was large (2.3-2.9 µm in diameter), spherical, and without a nucleolus. The cytoplasm of these cells was perforated by one large canal (1.8-3.5 µm in diameter) and other, smaller, canals (~0.5 µm in diameter) into which entered the flagellum of each choanocyte in the chamber (Fig. 6C).
The buds and the stalk were covered with an exopinacoderm composed of exopinacocytes, which were flat cells (~17.8 µm wide and 5.1 µm high) that harbored an oval anucleolated nucleus (5.0 × 2.6 µm) (Figs. 5C,7A).
Microgranular cells with small inclusions had an amoeboid, slightly irregular shape (~8.5 × 4.9 µm), with a nucleus of 1.8 µm in diameter (Fig. 7B). These cells had vacuoles measuring 1-1.5 µm in diameter, with small, electron-dense inclusions. Microgranular cells were more numerous at the periphery of the buds (where they represented ~30% of total cells) than in the central part (where they represented only 8%) (Fig. 5C,D).
Granular cells had an oval or amoeboid shape (~9.5 µm in diameter) and harbored an anucleolated nucleus and large, homogenous electron-dense inclusions which were spherical in shape (1.2-2.7 µm in diameter) (Figs. 5C,7C). These cells were less numerous than the previous cell type.
Archaeocytes were amoeboid or spheroid cells with no particular inclusions (~8.2 µm wide and 4.6 µm high). These cells were also abundant and harbored a large nucleus (2.5 µm in diameter) and a prominent nucleolus (Fig. 7D). Another amoeboid cell type was the lophocyte. Lophocytes were the biggest cells found in the buds (~18 µm wide and 5 µm high), with a large oval nucleus (4.5 × 2.5 µm) that may have had a nucleolus (Fig. 7E). The cytoplasm of lophocytes contained different sorts of inclusions such as granules and phagosomes, and these cells presented a developed, rough endoplasmic reticulum.
Sclerocytes were oval or amoeboid cells (~7.0 µm in diameter) with a large vacuole (Fig. 7F). Endopinacocytes were flat cells (~19.6 µm wide and 3.0 µm high) with an oval nucleus (4.2 × 2.0 µm) and no nucleolus, and the cytoplasm of these cells included numerous vacuoles (Fig. 7G). These cells formed the canals of the aquiferous system at the late stage of bud development. Collagen fibrils were quite abundant in the central part of the bud, where they were sometimes arranged in fine bundles; they were, however, rare at the periphery. Finally, many extracellular symbiotic bacteria were found in the mesohyl (Figs. 6A,B; 7B,F).
Discussion
It is considered that asexual reproduction together with sexual reproduction accompanied multicellular animals throughout their entire evolution. Asexual reproduction in one form or another has repeatedly appeared and disappeared in different groups of Metazoa (Ivanova-Kazas 1977;Adiyodi & Adiyodi 1993). In some animals, such as flatworms, annelids, and holothurians [START_REF] Christensen | Asexual propagation and reproductive strategies in aquatic Oligochaeta[END_REF][START_REF] Franke | Reproduction of the Syllidae (Annelida: Polychaeta)[END_REF][START_REF] Reuter | Flatworm asexual multiplication implicates stem cells and regeneration[END_REF][START_REF] Dolmatov | Asexual Reproduction in Holothurians[END_REF], asexual reproduction is a sporadic process. In other organisms with mixed life history strategies, such as most cnidarians, tunicates, bryozoans and some sponges, asexual reproduction is an obligatory stage of their life cycle [START_REF] Om | Asexual reproduction of animals[END_REF]Adiyodi & Adiyodi 1993;[START_REF] Brusca | Unvertebrates[END_REF]). In the latter case, sexual reproduction can be an adaptive advantage in unstable or unpredictable environments, while asexual reproduction is the competitively superior tactic for colonization of the parental environment (Williams 1975;[START_REF] Smith | Optimization Theory in Evolution[END_REF].
According to the available limited data, budding might promote the maintenance and growth of marine sponge populations [START_REF] Corriero | Sexual and asexual reproduction in two species of Tethya (Porifera: Demospongiae) from a Mediterranean coastal lagoon[END_REF][START_REF] Corriero | Reproductive strategies of Mycale contarenii (Porifera: Demospongiae)[END_REF][START_REF] Cardone | The budding process in Tethya citrina Sar. & Melone (Porifera, Demospongiae) and the incidence of post-buds in sponge population maintenance[END_REF][START_REF] Singh | Field and laboratory investigations of budding in the tetillid sponge Cinachyrella cavernosa[END_REF]. Indeed, buds have been shown to contribute significantly to the dispersal and recruitment of new sponges [START_REF] Wulf | Effects of a hurricane on survival and orientation of large erect coral reef sponges[END_REF], in particular under conditions of environmental stress. This strategy thus may improve the survival of individual sponge genotypes, and enhance the growth of a sponge population, as is the case for clonal organisms [START_REF] Jackson | Life cycles and evolution of clonal (modular) animals[END_REF].
Reproductive pattern
Viviparity (brooding) is more widespread and perhaps an ancestral reproductive mode in Porifera [START_REF] Riesgo | Inferring the ancestral sexuality and reproductive condition in sponges (Porifera)[END_REF]. However, oviparity is the general rule in the Demospongiae orders Tetractinellida, Polymastiida, Suberitida, Clionaida, Tethyida, Chondrosida, Verongida, Agelasida, Biemnida, and Axinellida [START_REF] Ereskovsky | The comparative embryology of sponges[END_REF]. Viviparous representatives are found in some these orders, however; these representatives include, for example, the viviparous genera Alectona and Thoosa (order Suberitida), genus Stylocordyla (order Suberitida), and genus Halisarca (order Chondrosida) [START_REF] Vacelet | Planktonic armoured propagules of the excavating sponge Alectona (Porifera: Demospongiae) are larvae: evidence from Alectona wallichii and A. mesatlantica sp[END_REF][START_REF] Sarà | Viviparous development in the Antarctic sponge Stylocordyla borealis Loven, 1868[END_REF][START_REF] Bautista-Guerrero | Reproductive cycle of the coralexcavating sponge Thoosa mismalolli (Clionaidae) from Mexican Pacific coral reefs[END_REF][START_REF] Ereskovsky | New data on embryonic development of Halisarca dujardini Johnston, 1842 (Demospongiae: Halisarcida)[END_REF]. It is largely accepted today that these two reproductive traits do not have any phylogenetic value and cannot be used for taxonomical revision of Porifera [START_REF] Hoppe | Reproductive patterns in three species of large coral reef sponges[END_REF][START_REF] Van Soest | Demosponge higher taxa classification re-examined[END_REF][START_REF] Ereskovsky | The comparative embryology of sponges[END_REF][START_REF] Riesgo | Inferring the ancestral sexuality and reproductive condition in sponges (Porifera)[END_REF].
The studies by Fromont (1994a, b) and [START_REF] Fromont | Reproductive biology of three sponges species of the genus Xestospongia (Porifera: Demospongiae: Petrosiida) from the Great Barrier Reef[END_REF] led to the general acceptance that all Haplosclerida families are characterized by viviparity, with the only exception being the oviparity in some Petrosiidae (Table 2). Although a good number of representatives of Chalinidae, especially in the genus Haliclona, have been shown to be viviparous, the present work surprisingly demonstrates that the Mediterranean Haliclona fulva is oviparous.
Another feature of the life cycle of H. fulva that is unusual in Haplosclerida is that this species includes budding as an obligatory phase of asexual reproduction, alternating with a sexual one. This case once more supports the observations that budding in Demospongiae is correlated with oviparity [START_REF] Fell | Porifera. In: Asexual propagation and reproductive strategies[END_REF][START_REF] Ereskovsky | The comparative embryology of sponges[END_REF]). Up to now there are only two exceptions to this rule: the viviparous demosponges Mycale (Aegogropila) contarenii (LIEBERKÜHN 1859) (Poecilosclerida) (Corrierro et al. 1998) and Radiospongilla cerebellata (BOWERBANK 1863) (Spongillida) [START_REF] Saller | Formation and construction of asexual buds of the fresh-water sponge Radiospongilla cerebellata (Porifera, Spongillidae)[END_REF].
While different modes of reproduction often occur within the same genus in various marine metazoans, for instance in Echinodermata and Bivalvia [START_REF] Strathmann | The evolution and loss of feeding larvae stages of marine invertebrates. Study of larval settlement in the sea[END_REF][START_REF] Kasyanov | Reproductive Strategy of Marine Bivalves and Echinoderms[END_REF][START_REF] Byrne | Reproduction and Larval Morphology of Broadcasting and Viviparous Species in the Cryptasterina Species Complex[END_REF], in sponges this phenomenon has previously been observed only in two genera of Petrosiidae (Haplosclerida). In the genus Xestospongia, X. bergquistia FROMONT 1991, X. testudinaria (WILSON 1925), and X. muta (SCHMIDT 1870) are oviparous, releasing sperm and eggs, while X. bocatorensis DIAZ, THACKER, RÜTZLER & PIANTONI, 2007 is viviparous, releasing brooded larvae [START_REF] Fromont | Reproductive biology of three sponges species of the genus Xestospongia (Porifera: Demospongiae: Petrosiida) from the Great Barrier Reef[END_REF][START_REF] Becerro | Spawning of the giant barrel sponge Xestospongia muta in Belize[END_REF][START_REF] Collin | Phototactic responses of larvae from the marine sponges Neopetrosia proxima and Xestospongia bocatorensis (Haplosclerida: Petrosiidae)[END_REF]). In the genus Neopetrosia, N. exigua (KIRKPATRICK 1900) is oviparous, while N. proxima (DUCHASSAING & MICHELOTTI 1864) is viviparous [START_REF] Fromont | Reproductive biology of three sponges species of the genus Xestospongia (Porifera: Demospongiae: Petrosiida) from the Great Barrier Reef[END_REF][START_REF] Collin | Phototactic responses of larvae from the marine sponges Neopetrosia proxima and Xestospongia bocatorensis (Haplosclerida: Petrosiidae)[END_REF]).
Another explanation for the phenomenon of oviparity in Haliclona fulva and different reproductive patterns in Xestospongia and Neopetrosia follows from the systematics of the order Haplosclerida. The phylogenetic relationships among haplosclerids are not clear yet, and suggestions of the polyphyletic nature of the various taxa within this order have appeared in various publications [START_REF] Hill | Reconstruction of Family-Level Phylogenetic Relationships within Demospongiae (Porifera) Using Nuclear Encoded Housekeeping Genes[END_REF][START_REF] Mccormack | Major discrepancy between phylogenetic hypotheses based on molecular and morphological criteria within the Order Haplosclerida (Phylum Porifera: Class Demospongiae)[END_REF][START_REF] Raleigh | Mitochondrial cytochrome oxidase 1 phylogeny supports alternative taxonomic scheme for marine Haplosclerida[END_REF]Redmond et al. 2007;[START_REF] Redmond | Phylogenetic relationships of the marine Haplosclerida (Phylum Porifera) employing ribosomal (28S rRNA) and mitochondrial (cox1, nad1) gene sequence data[END_REF][START_REF] Redmond | Phylogeny and Systematics of Demospongiae in Light of New Small Subunit Ribosomal DNA (18S) Sequences[END_REF]. It is important that the biggest genus Haliclona is distributed across the order Haplosclerida. [START_REF] Redmond | Phylogeny and Systematics of Demospongiae in Light of New Small Subunit Ribosomal DNA (18S) Sequences[END_REF] showed that members of this genus are positioned within Clade C with Niphatidae and Petrosiidae. The latter family, as mentioned above, includes both oviparous and viviparous species, particularly the viviparous Neopetrosia proxima [START_REF] Collin | Phototactic responses of larvae from the marine sponges Neopetrosia proxima and Xestospongia bocatorensis (Haplosclerida: Petrosiidae)[END_REF]) in Clade C. Species of Haliclona are also positioned within Clade B and has high support for a sister relationship with the viviparous Xestospongia bocatorensis [START_REF] Collin | Phototactic responses of larvae from the marine sponges Neopetrosia proxima and Xestospongia bocatorensis (Haplosclerida: Petrosiidae)[END_REF]). It should be noted that this Clade B also includes the oviparous X. muta. According to the results of [START_REF] Redmond | Phylogenetic relationships of the marine Haplosclerida (Phylum Porifera) employing ribosomal (28S rRNA) and mitochondrial (cox1, nad1) gene sequence data[END_REF], H. fulva is located in the cluster including the oviparous Petrosia ficiformis (POIRET 1789) and three other Petrosia species, as well as H. mucosa (GRIESSINGER 1971) and Cribrochalina vascuum (LAMARCK 1814). Therefore, it is possible that H. fulva might be misplaced in the genus Haliclona, and relationships with other oviparous clades might be the key to understanding reproductive behavior in H. fulva.
The general pattern of H. fulva oogenesis is similar to that reported for other oviparous Haplosclerida: no degeneration of mesohyl is observed during oogenesis and the majority of the eggs that develop to maturity retain an ovoid shape [START_REF] Fromont | Aspects of the reproductive biology of Xestospongia testudinaria (Great Barrier Reef)[END_REF][START_REF] Lepore | The ultrastructure of the mature oocyte and the nurse cells of the Ceractinomorpha Petrosia ficiformis[END_REF][START_REF] Maldonado | Gametogenesis, embryogenesis, and larval features of the oviparous sponge Petrosia ficiformis (Haplosclerida, Demospongiae)[END_REF]. During vitellogenesis, the oocytes have an amoeboid shape and, as in other Haplosclerida investigated [START_REF] Ereskovsky | The comparative embryology of sponges[END_REF], phagocytose somatic cells and thus become rich in phagosomes.
It is always a challenge to describe the reproductive cycle of oviparous sponges, but it is even more difficult when the males are hidden. Despite the large number of specimens of H. fulva investigated, no spermatocysts were observed. A very low ratio of male : female individuals has been reported for many marine sponges [START_REF] Hogg | Approaches to the systematics of the Demospongiae[END_REF][START_REF] Scalera Liaci | Osservazioni sui cicli sessuali di alcune Keratosa (Porifera) e loro interesse negli studi filogenetici[END_REF][START_REF] Ayling | Patterns of sexuality, asexual reproduction and recruitment in some subtidal marine Demospongiae[END_REF][START_REF] Corriero | Sexual and asexual reproduction in two species of Tethya (Porifera: Demospongiae) from a Mediterranean coastal lagoon[END_REF][START_REF] Corriero | Reproductive strategies of Mycale contarenii (Porifera: Demospongiae)[END_REF][START_REF] Mercurio | A 3-year investigation of sexual reproduction in Geodia cydonium (Jameson 1811) (Porifera, Demospongiae) from a semi-enclosed Mediterranean bay[END_REF][START_REF] Ereskovsky | Pluriannual study of the reproduction of two Mediterranean Oscarella species (Porifera, Homoscleromorpha): cycle, sex-ratio, reproductive effort and phenology[END_REF]. For example, after two years of fortnightly monitoring, [START_REF] Ayling | Patterns of sexuality, asexual reproduction and recruitment in some subtidal marine Demospongiae[END_REF] found males of Aaptos aaptos (SCHMIDT 1864) in only one year of the study. No males were found in Mediterranean oviparous sponges Tethya citrina SARA &MELONE 1965 andT. auranrium (PALLAS 1766) during monthly collection over an 18-month period [START_REF] Corriero | Sexual and asexual reproduction in two species of Tethya (Porifera: Demospongiae) from a Mediterranean coastal lagoon[END_REF]. In a striking parallel to our study of H. fulva, the same author was unable to detect a single male among populations of Raspailia topsenti DENDY 1924 and Polymiastia sp. over a 2-year monitoring period [START_REF] Ayling | Patterns of sexuality, asexual reproduction and recruitment in some subtidal marine Demospongiae[END_REF]. [START_REF] Giesel | Sex ratio, rate of evolution and environmental heterogeneity[END_REF] proposed that deviations in sex ratio may internally regulate the size of a population by affecting its reproductive potential. In brooding corals, for example, female biased sex ratios may be an evolutionary adaptation caused by the physical limitations of the incubation chamber inside of polyps [START_REF] Szmant | Reproductive ecology of Caribbean reef corals[END_REF]. [START_REF] Whalan | Sexual reproduction of the brooding sponge Rhopaloeides odorabile[END_REF] discussed some possible problems that may lead to biased sex ratios in sessile or slow moving marine invertebrates: proximity to mates [START_REF] Sewell | Fertilisation success in a natural spawning of the dendrochirote sea cucumber Cucumaria miniata[END_REF][START_REF] Babcock | Fertilisation biology of the abalone Haliotis laevigata: laboratory and Weld studies[END_REF], sperm limitation [START_REF] Brazeau | Reproductive success in the Caribbean octocoral Briareum absestinum[END_REF], and dilution of gametes [START_REF] Oliver | Aspects of fertilisation ecology of broadcast spawning corals: sperm dilution effects and in situ measurements of fertilisation[END_REF]. 
Other possible factors that could influence the sex ratio are temperature, salinity, and the quantity and quality of food available [START_REF] Simonini | Effects of temperature on two Mediterranean populations of Dinophilus gyrociliatus (Polychaeta: Dinophilidae). I. Effects on life history and sex ratio[END_REF].
In this study of H. fulva, we remain perplexed as to where the males are. One explanation for the absence of males could be that the species has a very short period of spermatogenesis. In demosponges the period of spermatogenesis often is shorter compared to oogenesis [START_REF] Ayling | Patterns of sexuality, asexual reproduction and recruitment in some subtidal marine Demospongiae[END_REF][START_REF] Diaz | Variations, differentiations et functions des categories cellulaires de la demosponge d'eaux saumatres, Suberites massa, Nardo, au cours[END_REF][START_REF] Ereskovsky | Reproduction cycles and strategies of cold-water sponges Halisarca dujardini (Demospongiae, Dendroceratida), Myxilla incrustans and Iophon piceus (Demospongiae, Poecilosclerida) from the White Sea[END_REF][START_REF] Riesgo | Differences in reproductive timing among sponges sharing habitat and thermal regime[END_REF][START_REF] Mercurio | A 3-year investigation of sexual reproduction in Geodia cydonium (Jameson 1811) (Porifera, Demospongiae) from a semi-enclosed Mediterranean bay[END_REF][START_REF] Whalan | Sexual reproduction of the brooding sponge Rhopaloeides odorabile[END_REF][START_REF] Ereskovsky | Pluriannual study of the reproduction of two Mediterranean Oscarella species (Porifera, Homoscleromorpha): cycle, sex-ratio, reproductive effort and phenology[END_REF][START_REF] Stephens | Reproductive cycle and larval characteristics of the sponge Haliclona indistincta (Porifera: Demospongiae)[END_REF]). This trend is more pronounced in oviparous sponges, particularly in oviparous Haplosclerida. For example, in Xestospongia bergquistia and X. testudinaria from the Great Barrier Reef, the period of oogenesis lasts more than 5 months, while spermatogenesis lasts less than 5 days [START_REF] Fromont | Reproductive biology of three sponges species of the genus Xestospongia (Porifera: Demospongiae: Petrosiida) from the Great Barrier Reef[END_REF]. In Mediterranean Petrosia ficiformis, oogenesis duration is 7-8 months, and the duration of spermatogenesis is only 2-2.3 weeks [START_REF] Scalera Liaci | Raffronto tra il comportamento sussuale di alcune Ceractinomorpha[END_REF][START_REF] Maldonado | Gametogenesis, embryogenesis, and larval features of the oviparous sponge Petrosia ficiformis (Haplosclerida, Demospongiae)[END_REF]. [START_REF] Corriero | Sexual and asexual reproduction in two species of Tethya (Porifera: Demospongiae) from a Mediterranean coastal lagoon[END_REF][START_REF] Corriero | Reproductive strategies of Mycale contarenii (Porifera: Demospongiae)[END_REF] proposed that a very short period of spermatogenesis accounted for the absence of males during monthly collections of Tethya citrina, T. auranrium, and Mycale contarenii over 18 months of monitoring. Thus, it's possible to assume that the nine individuals of Haliclona fulva that lacked any reproductive elements (of the 37 individuals observed during the period of oogenesis) could be males with a short spermatogenesis period.
Reproductive cycle and effort
Numerous external factors could be responsible for induction of sexual or asexual reproduction in individuals within a population of heterogonic invertebrate species. Among the more important are the physicochemical quality of water, hydrodynamic, food availability, population density, habitat stability, and seasonal variation in temperature (for review see: Adiyodi & Adiyodi 1993). Nevertheless, it seems that water temperature and food availability play the principal roles in this process in the case of various marine invertebrates, such as sea anemones [START_REF] Bucklin | Growth and asexual reproduction of the sea anemone Metridium: comparative laboratory studies of three species[END_REF], scyphozoans [START_REF] Lucas | Jellyfish life histories: role of polyps in forming and maintaining scyphomedusa populations[END_REF][START_REF] Purcell | Temperature effects on asexual reproduction rates of scyphozoan species from the northwest Mediterranean Sea[END_REF], echinoderms [START_REF] Lawrence | Stress and deviant reproduction in Echinoderms[END_REF], and bryozoans [START_REF]Asexual propagation in the marine bryozoan Cupuladria exfragminis[END_REF]. In sponges, the increase in the frequency of specimens with buds and in the number of buds per sponge happens concomitantly with rapid decreases in water temperature as, for example, in the Mediterranean species Tethya citrina T. aurantium, Mycale contarenii [START_REF] Connes | Etude histologique, cytologique et expérimentale de la régénération et de la reproduction asexuée chez Tethya lyncurium Lamarck (= Tethya aurantium (Pallas) (Demosponges)[END_REF][START_REF] Corriero | Reproductive strategies of Mycale contarenii (Porifera: Demospongiae)[END_REF][START_REF] Cardone | The budding process in Tethya citrina Sar. & Melone (Porifera, Demospongiae) and the incidence of post-buds in sponge population maintenance[END_REF] and in White Sea species Polymastia arctica (MEREJKOWSKY 1878) [START_REF] Plotkin | Ecological aspects of asexual reproduction of the White Sea Sponge Polymastia mammillaris (Demospongiae, Tetractinomorpha) in Kandalaksha Bay[END_REF]. In H. fulva, budding takes place from late November to early March, which is the coldest season. This is also the case for the two Mediterranean Tethya citrina and T. aurantium [START_REF] Connes | Etude histologique, cytologique et expérimentale de la régénération et de la reproduction asexuée chez Tethya lyncurium Lamarck (= Tethya aurantium (Pallas) (Demosponges)[END_REF][START_REF] Corriero | Sexual and asexual reproduction in two species of Tethya (Porifera: Demospongiae) from a Mediterranean coastal lagoon[END_REF][START_REF] Gaino | Investigation of the budding process in Tethya citrina and Tethya aurantium (Porifera, Demospongiae)[END_REF] and Mycale contarenii [START_REF] Corriero | Reproductive strategies of Mycale contarenii (Porifera: Demospongiae)[END_REF]). However we were unable to demonstrate a significant correlation between the natural variations of seawater temperature and asexual reproductive effort.
Reproductive effort is an integrative indicator of resource allocation to reproductive compartments, and it can vary greatly depending on the reproductive strategy. Haliclona fulva allocates energy to the differentiation of gametes or buds at very different times, and as in other marine sponges, there is a clear alternation of sexual and asexual phases of reproduction [START_REF] Fell | Postlarval reproduction and reproductive strategy in Haliclona loosanoffi and Halichondria sp[END_REF][START_REF] Corriero | Sexual and asexual reproduction in two species of Tethya (Porifera: Demospongiae) from a Mediterranean coastal lagoon[END_REF][START_REF] Corriero | Reproductive strategies of Mycale contarenii (Porifera: Demospongiae)[END_REF]. Both reproductive efforts are of roughly the same intensity, with oogenesis representing 0.7% on average over the period and budding representing 0.9%. H. fulva demonstrates low annual variability of both reproductive processes, whereas marine organisms can exhibit highly variable sexual or asexual reproductive efforts which are sometimes related to the natural variations of the sea water temperature (Adiyodi & Adiyodi 1993; [START_REF] Olive | Annual breeding cycles in marine invertebrates and environmental temperature: probing the proximate and ultimate causes of reproductive synchrony[END_REF][START_REF] Llodra | Fecundity and Life-history Strategies in Marine Invertebrates[END_REF]). Overall, biotic and abiotic factors that regulate the proportion of asexual and sexual reproduction in the life cycle of heterogonic species are poorly understood. Unfortunately, in the case of H. fulva we were unable to fill this knowledge gap.
Although our sampling strategy has proven its efficiency in a number of previous studies of sponge life cycles (see for instance [START_REF] Pérez | Oscarella balibaloi, a new sponge species (Homoscleromorpha: Plakinidae) from the Western Mediterranean Sea: cytological description, reproductive cycle and ecology[END_REF]Ivanisevic et al. 2011a, b;[START_REF] Ereskovsky | Pluriannual study of the reproduction of two Mediterranean Oscarella species (Porifera, Homoscleromorpha): cycle, sex-ratio, reproductive effort and phenology[END_REF][START_REF] Zarrouk | Sexual Reproduction of Hippospongia communis (Lamarck, 1814) (Dictyoceratida, Demospongiae): comparison of two populations living under contrasted environmental conditions[END_REF]Reveter et al. 2016), we are here confronted with a limitation with this strategy for H. fulva.
Bud formation, structure and cell composition
The development of buds differs among sponge species. For example, the formation of the aquiferous system can occur either during bud development [START_REF] Saller | Formation and construction of asexual buds of the fresh-water sponge Radiospongilla cerebellata (Porifera, Spongillidae)[END_REF][START_REF] Ereskovsky | Asexual reproduction of homoscleromorph sponges (Porifera; Homoscleromorpha)[END_REF][START_REF] Gaino | Choanocyte chambers in unreleased buds of Tethya seychellensis (Wright, 1881) (Porifera, Demospongiae)[END_REF]this work), or after detachment from the parental sponge [START_REF] Ayling | Patterns of sexuality, asexual reproduction and recruitment in some subtidal marine Demospongiae[END_REF][START_REF] Battershill | The influence of storms on asexual reproduction, recruitment, and survivorship of sponges[END_REF][START_REF] Gaino | Investigation of the budding process in Tethya citrina and Tethya aurantium (Porifera, Demospongiae)[END_REF].
Haliclona fulva presents three stages in its bud formation: an ectosomal protuberance caused by cell migration; growth of the apical part of the bud with formation of the stalk connected to the parental sponge; and thin-stalked buds protruding noticeably from the sponge surface, followed by detachment of the bud. These stages are very similar to what was described in Tethya aurantium [START_REF] Connes | Etude histologique, cytologique et expérimentale de la régénération et de la reproduction asexuée chez Tethya lyncurium Lamarck (= Tethya aurantium (Pallas) (Demosponges)[END_REF][START_REF] Gaino | Investigation of the budding process in Tethya citrina and Tethya aurantium (Porifera, Demospongiae)[END_REF], Radiospongilla cerebellata [START_REF] Saller | Formation and construction of asexual buds of the fresh-water sponge Radiospongilla cerebellata (Porifera, Spongillidae)[END_REF], T. seychellensis (WRIGHT 1881) [START_REF] Gaino | Choanocyte chambers in unreleased buds of Tethya seychellensis (Wright, 1881) (Porifera, Demospongiae)[END_REF], and in Cinachyrella cavernosa (LAMARCK 1815) [START_REF] Singh | Field and laboratory investigations of budding in the tetillid sponge Cinachyrella cavernosa[END_REF].
Cells in buds are amoeboid-like in shape and show cytoplasmic extensions. They tend to align either in parallel rows or along the spicule bundles. These features suggest that these cells may be able to migrate from the parental sponge to the newly formed buds. The migration of specialized and polypotent cells, and their subsequent differentiation into definitive cells, is a typical feature of the budding process (Ereskovsky 2003[START_REF] Ereskovsky | The comparative embryology of sponges[END_REF][START_REF] Gaino | Investigation of the budding process in Tethya citrina and Tethya aurantium (Porifera, Demospongiae)[END_REF]. In this work we have shown that unreleased buds of H. fulva possess choanocyte chambers. The occurrence of choanocyte chambers is very uncommon in demosponge buds. Until now, these structures have been observed in unreleased buds of only four demosponge species: in Mycale contarenii [START_REF] Devos | Le bourgeonnement externe de l'éponge Mycale contarenii[END_REF][START_REF] Corriero | Reproductive strategies of Mycale contarenii (Porifera: Demospongiae)[END_REF], in the freshwater sponge Radiospongilla cerebellata [START_REF] Saller | Formation and construction of asexual buds of the fresh-water sponge Radiospongilla cerebellata (Porifera, Spongillidae)[END_REF], in Tethya seychellensis [START_REF] Gaino | Choanocyte chambers in unreleased buds of Tethya seychellensis (Wright, 1881) (Porifera, Demospongiae)[END_REF], and in T. wilhelma SARA, SARA, NICKEL & BRÜMMER 2001 [START_REF] Hammel | Sponge budding is a spatiotemporal morphological patterning process: insights from synchrotron radiation-based xray microtomography into the asexual reproduction of Tethya wilhelma[END_REF], as well as in homoscleromorph sponges of the genus Oscarella [START_REF] Ereskovsky | Asexual reproduction of homoscleromorph sponges (Porifera; Homoscleromorpha)[END_REF].
Cell composition in demosponge buds has been poorly studied. Optic microscopy has provided only two good descriptions of bud cell composition in Mycale contarenii and Axinella damicornis (ESPER 1794) [START_REF] Devos | Le bourgeonnement externe de l'éponge Mycale contarenii[END_REF][START_REF] Boury-Esnault | Un phénomène de bourgeonnement externe chez l'éponge Axinella damicornis (Exper.)[END_REF], and electron microscopy has illustrated this condition in Tethya aurantium, T. citrina, T. seyshellensis, Radiospongilla cerebellata, and Cinachyrella australiensis (CARTER 1886) [START_REF] Connes | Structure et développement des bourgeons chez l'éponge siliceuse Tethya lyncurium Lamarck[END_REF][START_REF] Connes | Etude histologique, cytologique et expérimentale de la régénération et de la reproduction asexuée chez Tethya lyncurium Lamarck (= Tethya aurantium (Pallas) (Demosponges)[END_REF][START_REF] Saller | Formation and construction of asexual buds of the fresh-water sponge Radiospongilla cerebellata (Porifera, Spongillidae)[END_REF][START_REF] Chen | Budding cycle and bud morphology of the globeshaped sponge Cinachyra australiensis[END_REF][START_REF] Gaino | Investigation of the budding process in Tethya citrina and Tethya aurantium (Porifera, Demospongiae)[END_REF][START_REF] Gaino | Choanocyte chambers in unreleased buds of Tethya seychellensis (Wright, 1881) (Porifera, Demospongiae)[END_REF]. According to these descriptions, it seems that cells with inclusions represent the main cellular components of buds, and that archaeocytes are the second most abundant cell type observed. In A. damicornis, M. contarenii, and R. cerebellata, all cell types of the parental sponge are present in equal proportions in the buds at their later developmental stages [START_REF] Devos | Le bourgeonnement externe de l'éponge Mycale contarenii[END_REF][START_REF] Boury-Esnault | Un phénomène de bourgeonnement externe chez l'éponge Axinella damicornis (Exper.)[END_REF][START_REF] Saller | Formation and construction of asexual buds of the fresh-water sponge Radiospongilla cerebellata (Porifera, Spongillidae)[END_REF]. In this respect, the buds of H. fulva do not differ from these sponges, as they are composed of microgranular cells, granular cells, archaeocytes, endopinacocytes (and exopinacocytes), choanocytes, central cells, and sclerocytes. On average, all these cells are similar in size to the cells of the parental sponge. Further research is needed to determine whether cell inclusions represent stored material useful in sustaining morphogenetic processes [START_REF] Connes | Structure et développement des bourgeons chez l'éponge siliceuse Tethya lyncurium Lamarck[END_REF], and whether they might thus be pivotal for the acquisition of complete functionality.
Figure legends
Fig. 1. Specimen of Haliclona fulva, in situ, (A) without buds, in June 2008, and (B) with buds, in February 2008. Scale bars=30 mm. b, buds; o, oscula.
Fig. 2. Diagram of the reproductive cycle of Haliclona fulva.
Fig. 3. Oogenesis in Haliclona fulva, histological section (Masson-Goldner's trichrome hematoxylin staining). A. The mesohyl with oocytes at previtellogenic stage. B. Previtellogenic oocyte phagocyting the cells (arrowhead). C. The mesohyl with the eggs. D. Egg of H. fulva. Scale: A=50 µm; B,D=20 µm; C=100 µm. cc, choanocyte chambers; eg, egg; exc, exhalant canal; gc, granular cell; n, nucleus; oo, oocytes; s, spicules.
Fig. 4. Reproductive efforts of Haliclona fulva. A. Boxplot distribution of reproductive efforts during oogenesis and corresponding water temperatures during the study. B. Boxplot distribution of reproductive efforts during budding.
Fig. 5. Buds of Haliclona fulva. A. Buds in vivo at different stages of development. B. Buds in vivo at last stages of development. C. Semithin section of early bud. D. Semithin section of the bud at last stage. Inset: TEM micrograph of collagen bundles in the mesohyl of a bud. Scale: A=5 mm, B=1.5 mm, C=100 µm, D=150 µm, Inset=4 µm. b, buds; cb, collagen bundles; cc, choanocyte chambers; ch, choanocytes; ep, exopinacocytes; gc, granular cells; l, lacuna; mgc, microgranular cells; o, osculum; sb, symbiotic bacteria; ss, spongin of spicules.
Fig. 6. TEM images of the choanocytes in buds from individuals of Haliclona fulva. A. The aggregate of separated choanocytes in early bud. B. Choanocyte chamber with central cell in the bud at last stage. C. Central cell inside of a choanocyte chamber. Scale: A,B=5 µm; C=2 µm. c, canal in a central cell cytoplasm; ce, central cell; ch, choanocyte; f, flagella of choanocytes; mv, microvilli; n, nucleus; sb, symbiotic bacteria.
Fig. 7. TEM images of the cells in buds from individuals of Haliclona fulva. A. Exopinacocytes. B. Microgranular cells. C. Granular cell. D. Archaeocytes. E. Lophocyte. F. Sclerocytes. G. Endopinacocyte. Scale: A-D,F,G=5 µm; E=10 µm. af, axial filament of a spicule; ar, archaeocytes; ch, choanocyte; en, endopinacocyte; ex, exopinacocytes; g, granule; lo, lophocyte; mgc, microgranular cells; n, nucleus; nu, nucleolus; sb, symbiotic bacteria; sc, sclerocytes; ss, spongin of spicule; v, vacuole.
Table legend
Table 1. The number of individuals of Haliclona fulva collected during the sampling period for histological investigation.
Acknowledgments. This work has been supported by grant n° 1.38.209.2014 from Saint-Petersburg State University, and the Russian Science Foundation #17-14-01089 (for the microscopy). The authors gratefully thank Joël Courageot and Alexandre Altié of Service Commun de Microscopie Électronique et Photographie Faculté de Médecine La Timone, Aix-Marseille Université and Daria Tokina for technical support.
Anne-Sophie Vignion-Dewalle
Grégory Baert
Laura Devos
Elise Thecua
Claire Vicentini
Laurent Mortier
Serge Mordon
Red light photodynamic therapy for actinic keratosis using 37 J/cm 2 : fractionated irradiation with 12.3 mW/cm 2 after 30 minutes incubation time compared to standard continuous irradiation with 75 mW/cm 2 after three hours incubation time using a mathematical modeling
Introduction
Photodynamic therapy (PDT) is a cancer treatment modality combining light of an appropriate wavelength, a nontoxic photosensitizer, and sufficient molecular oxygen to generate reactive oxygen species and destroy (pre-) malignant cells [START_REF] Plaetzer | Photophysics and photochemistry of photodynamic therapy: fundamental aspects[END_REF]. PDT using 5aminolevulinic acid (ALA) (ALA-PDT) and PDT using 5-aminolevulinic acid methyl ester (MAL) (MAL-PDT) have been widely used for dermatological applications in recent decades [START_REF] Bissonette | Large surface photodynamic therapy with aminolevulinic acid: treatment of actinic keratoses and beyond[END_REF][START_REF] Santos | Effectiveness of photodynamic therapy with topical 5aminolevulinic acid and intense pulsed light versus intense pulsed light alone in the treatment of acne vulgaris: comparative study[END_REF][START_REF] Braathen | Guidelines on the use of photodynamic therapy for nonmelanoma skin cancer: an international consensus[END_REF][START_REF] Morton | Guidelines for topical photodynamic therapy: update[END_REF][START_REF] Lu | Efficacy of topical aminolevulinic acid photodynamic therapy for the treatment of verruca planae[END_REF][START_REF] Sotiriou | Photodynamic therapy with 5-aminolevulinic acid in actinic cheilitis: an 18-month clinical and histological follow-up[END_REF][START_REF] Cai | Photodynamic therapy in combination with CO2 laser for the treatment of Bowen's disease[END_REF]. Topical administration of ALA or MAL induces the selective accumulation of the endogenous photosensitizer protoporphyrin IX (PpIX) within the target cells and subsequent light irradiation leads to the target destruction. ALA-PDT and MAL-PDT have in particular proven to be an efficient treatment modality for actinic keratoses (AK) [START_REF] Morton | Intraindividual, right-left comparison of topical methyl aminolaevulinate-photodynamic therapy and cryotherapy in subjects with actinic keratoses: a multicentre, randomized controlled study[END_REF][START_REF] Wiegell | Update on photodynamic treatment for actinic keratosis[END_REF].
Actinic keratoses are scaly or crusty lesions that develop on sun-exposed areas, such as the face, scalp, neck, arms… in response to prolonged exposure to ultraviolet radiation. Confined to the epidermis (the basement membrane is intact), AKs are carcinomas in situ and, in approximately 10% of patients, will progress to invasive squamous cell carcinomas (SCCs) [START_REF] Glogau | The risk of progression to invasive disease[END_REF]. In order to reduce the subsequent risk of developing SCCs, most clinicians routinely treat AKs. Treatment options include lesion-directed destructive therapies, such as cryotherapy and surgical procedures, for individual lesions and field-directed therapies, such as topical medications and PDT, for areas with multiple or subclinical AKs. Compared to the other treatment options, the main advantage of PDT is the non-invasive nature and the excellent cosmetic results of this method [START_REF] Braathen | Guidelines on the use of photodynamic therapy for nonmelanoma skin cancer: an international consensus[END_REF][START_REF] Ericson | Review of photodynamic therapy in actinic keratosis and basal cell carcinoma[END_REF].
A variety of PDT protocols with different photosensitizers, photosensitizer incubation times, light sources, light fluence rates… have been used for the treatment of AKs [START_REF] Ericson | Review of photodynamic therapy in actinic keratosis and basal cell carcinoma[END_REF]. MAL-PDT using 635 nm red light with a total light dose of 37 J/cm 2 , a fluence rate of 75 mW/cm 2 and three hours of incubation time is a standard protocol, widely used in Europe for the treatment of actinic keratosis. This protocol has been reported to be an effective PDT treatment option for AK and to result in similar response rates and improved cosmetic outcomes compared with standard therapies [START_REF] Morton | Intraindividual, right-left comparison of topical methyl aminolaevulinate-photodynamic therapy and cryotherapy in subjects with actinic keratoses: a multicentre, randomized controlled study[END_REF]. However, with these light dose parameters, high pain scores have been demonstrated and concurrent use of cold air analgesia may be required to prevent discomfort [START_REF] Tyrrell | The effect of air cooling pain relief on protoporphyrin IX photobleaching and clinical efficacy during dermatological photodynamic therapy[END_REF][START_REF] Stangeland | Cold air analgesia as pain reduction during photodynamic therapy of actinic keratoses[END_REF]. Alternative red light protocols with lower fluence rates, as effective as the standard protocol while being much better tolerated by patients, have been studied for the treatment of AK [START_REF] Ericson | Photodynamic therapy of actinic keratosis at varying fluence rates: assessment of photobleaching, pain and primary clinical outcome[END_REF][START_REF] Apalla | The impact of different fluence rates on pain and clinical outcome in patients with actinic keratoses treated with photodynamic therapy[END_REF][START_REF] Enk | Low-irradiance red LED traffic lamps as light source in PDT for actinic keratoses[END_REF]. Furthermore, fractionated irradiation with alternating light and dark periods, intended to allow tissue re-oxygenation and photosensitizer re-synthesis during the dark periods, has been demonstrated to increase the efficiency of the PDT for AK treatment [START_REF] Haas | Fractionated aminolevulinic acid-photodynamic therapy provides additional evidence for the use of PDT for non-melanoma skin cancer[END_REF][START_REF] Sotiriou | Single vs. fractionated photodynamic therapy for face and scalp actinic keratoses: a randomized, intraindividual comparison trial with 12-month follow-up[END_REF].
Developed in the framework of the French National Research Agency (ANR) Project FLEXITHERALIGHT (http://www.flexitheralight.com/), the FLEXITHERALIGHT device is composed of three adjacent light emitting textiles [START_REF] Cochrane | New design of textile light diffusers for photodynamic therapy[END_REF], which sequentially emit red light (635 nm) at a low fluence rate (12.3 mW/cm 2) for one minute each, allowing a fractionated illumination (1 minute light, 2 minutes dark). The illumination duration of 2.5 hours programmed in the device enables a light dose of 37 J/cm 2 to be delivered in contact with the textiles.
Combining illumination with the FLEXITHERALIGHT device with 30 minutes of incubation time, the FLEXITHERALIGHT protocol is being investigated for the treatment of actinic keratoses by the FLEXITHERALIGHT project. A phase II clinical trial approved by the French National Agency for Medicines and Health Products Safety (ANSM) on 27 November 2013 (registration number: 2013-A1096-39) and aiming to assess the non-inferiority of the FLEXITHERALIGHT protocol compared to the above mentioned standard 635 nm red light protocol for the treatment of actinic keratoses has just ended.
Based on this research project, we propose in this paper to compare, through mathematical modeling, the efficiency of the standard 635 nm red light protocol (incubation time: three hours, irradiation type: continuous, light dose: 37 J/cm 2, fluence rate: 75 mW/cm 2, treatment duration: 493 s) with that of the FLEXITHERALIGHT protocol, which involves a lower fluence rate, light fractionation and a shorter incubation time (incubation time: 30 minutes, illumination type: fractionated with two minutes dark intervals every three minutes, light dose: 37 J/cm 2, fluence rate: 12.3 mW/cm 2, treatment duration: 9024 s). This mathematical modeling, greatly inspired by our previous work [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF] and involving an improved model for both the biological clearance of PpIX and the conversion of MAL into PpIX, enables the local damage induced by the therapy to be estimated.
II. Clinical materials
A. Presentation of the two red light protocols
Two different 635 nm red light protocols with 37 J/cm 2 were considered: the standard protocol using a three hours incubation period and a continuous irradiation with 75 mW/cm 2 fluence rate [START_REF] Apalla | The impact of different fluence rates on pain and clinical outcome in patients with actinic keratoses treated with photodynamic therapy[END_REF][START_REF] Tyrrell | Protoporphyrin IX photobleaching during the light irradiation phase of standard dermatological methyl-aminolevulinate photodynamic therapy[END_REF][START_REF] Sotiriou | Photodynamic therapy vs. imiquimod 5% cream as skin cancer preventive strategies in patients with field changes: a randomized intraindividual comparison study[END_REF] and the FLEXITHERALIGHT protocol using a 30 minutes incubation period and a fractionated irradiation (1 minute light, 2 minutes dark) with 12.3 mW/cm 2 fluence rate (Table 1).
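The 37 J/cm 2 dose quoted for both protocols can be checked directly from the fluence rates, treatment durations and duty cycle of Table 1; the short Python sketch below (not part of the original study) performs this arithmetic.

```python
def light_dose_J_per_cm2(fluence_rate_mW_cm2, duration_s, duty_cycle=1.0):
    """Delivered dose = fluence rate x illumination time (duty cycle < 1 for fractionated light)."""
    return fluence_rate_mW_cm2 * 1e-3 * duration_s * duty_cycle

standard = light_dose_J_per_cm2(75.0, 493.0)                      # continuous irradiation
flexitheralight = light_dose_J_per_cm2(12.3, 9024.0, 1.0 / 3.0)   # 1 min light / 2 min dark

print(f"Standard protocol:        {standard:.1f} J/cm^2")         # ~37 J/cm^2
print(f"FLEXITHERALIGHT protocol: {flexitheralight:.1f} J/cm^2")  # ~37 J/cm^2
```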
The FLEXITHERALIGHT protocol has been proposed in the French National Research Agency (ANR) Project FLEXITHERALIGHT (http://www.flexitheralight.com/) in which our research unit is involved. This project aims to develop a biophotonic device based on a flexible light emitting textile [START_REF] Cochrane | New design of textile light diffusers for photodynamic therapy[END_REF] and dedicated to the treatment of dermatologic diseases and carcinoma. The major advantage of the flexible light emitting textile is its optimal conformation to the area to be treated, thus leading to a more homogeneous irradiation than that delivered by the standard rigid light sources (Figure 1). Consisted of three adjacent textiles of size 21.5 cm × 5 cm sequentially emitting red light as illustrated in Figure 1, the FLEXITHERALIGHT device enables to obtain a fractionated irradiation (1 minute light, 2 minutes dark) with a fluence rate of 12.3 mW/cm 2 leading to a light dose of 37 J/cm 2 after 9024 seconds of treatment (12.3 mW/cm 2 × 9024 s × 1 minute light / (1 minute light + 2 minutes dark)). Moreover, based on the comparative study of Wiegell et al. [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF], the FLEXITHERALIGHT protocol involves a 30 minutes incubation with MAL under occlusive dressing and no MAL removal before irradiation (Table 1). This clinical trial was designed somewhat similarly to the study of Wiegell et al., which aims to compare standard MAL-PDT with daylight MAL-PDT [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF]. First, the lesions of the forehead and scalp were counted, photographed and divided into two symmetrical areas, which were then randomized to receive either PDT using the standard protocol or PDT using the FLEXITHERALIGHT protocol. After gentle surface preparation of the lesions and MAL application to the lesions, an occlusive dressing was placed for 30 minutes (respectively, 3 hours) over the area randomly assigned to receive PDT using the FLEXITHERALIGHT protocol (respectively, the standard protocol). After 30 minutes, PDT using the FLEXITHERALIGHT protocol was applied to the corresponding assigned area without dressing removal. Once this treatment was completed, the treated area was protected with an aluminum foil and the area randomized to receive PDT using the standard protocol, was treated after dressing removal and lesions cleaning.
At the end of the procedure, the patients indicated the level of pain experienced during PDT using the FLEXITHERALIGHT protocol and the one experienced during PDT using the standard protocol through a visual analogue scale (VAS) ranging 0 (no pain) to 10 (maximum pain). Pain was also assessed at 7 days after the treatment.
The treatment response was evaluated at 3 and 6 months after the treatment based on comparisons with baseline photographs.
III. Modeling method
Except for the change made regarding the biological clearance of PpIX and the conversion of MAL into PpIX in section III.C.2, the modeling method is the same as in our previously validated work [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF] and therefore only outlines are referred to in this paper without further discussion.
A. Skin sample model
Our simplified skin sample model consists of an epidermis section represented by a 150 μm thick parallelepiped and of an AK designed as an ellipsoid as already published in [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF].
As AKs are confined to the epidermis, the ellipsoid is included in the parallelepiped in such a way that it lies on, but does not cross, the lower boundary of this parallelepiped which represents the boundary between the epidermis and the dermis. To account for the thickening of the epidermis generally observed with AK, the thickness of the ellipsoid is set to 200 μm which leads, according to the curettage usually performed prior to PDT, to the skin sample model displayed on Figure 2.
The epidermis and AK tissues are both assumed to be homogeneous and z is assumed to be the beam direction, which is also the depth direction of the skin sample model (Figure 2). In this paper, the spectral fluence rate for the standard protocol (respectively, for the FLEXITHERALIGHT protocol) was modeled as a 75 mW/cm 2 (respectively, 12.3 mW/cm 2 )weighted Gaussian distribution with mean 635 nm and full width at half maximum (FWHM) of 19 nm as measured by Moseley et al. [START_REF] Moseley | Light distribution and calibration of commercial PDT LED arrays[END_REF] from the Aktilite CL16 and CL128 (Galderma SA, Lausanne, Switzerland) (Figure 3). The total light dose of 37 J/cm 2 is achieved with a treatment duration of 493 s using the standard protocol and 9024 s using the FLEXITHERALIGHT protocol (Table 1).
For the standard protocol, after 3 h of incubation, a primary planar broad beam with a spectral fluence rate S0 of 75 mW/cm 2 (blue curve in Figure 3) is assumed to continuously and perpendicularly irradiate, for 493 s, the surface of the skin sample model as illustrated in Figure 2. For the FLEXITHERALIGHT protocol, the irradiation for 9024 s is assumed to be performed, after 30 minutes of incubation, using a spectral fluence rate S0 of 12.3 mW/cm 2 (red curve in Figure 3) during the light periods and a fluence rate S0 of 0 mW/cm 2 during the dark periods.
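The spectral fluence rate S0(λ) described above can be generated as sketched below: a Gaussian centred at 635 nm with a 19 nm FWHM, scaled so that its integral over wavelength equals the nominal fluence rate (75 or 12.3 mW/cm 2), which is one natural reading of the weighted Gaussian used in this paper; the wavelength grid is an implementation choice, not taken from the original model.

```python
import numpy as np

def spectral_fluence_rate(total_mW_cm2, center_nm=635.0, fwhm_nm=19.0,
                          wavelengths_nm=None):
    """Gaussian spectrum normalized so that its wavelength integral equals total_mW_cm2."""
    if wavelengths_nm is None:
        wavelengths_nm = np.linspace(600.0, 670.0, 701)   # arbitrary grid for illustration
    sigma = fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((wavelengths_nm - center_nm) / sigma) ** 2)
    gauss /= np.trapz(gauss, wavelengths_nm)              # unit-area spectrum (1/nm)
    return wavelengths_nm, total_mW_cm2 * gauss           # spectral fluence rate (mW/cm^2/nm)

wl, s0_standard = spectral_fluence_rate(75.0)             # standard protocol
wl, s0_flexi = spectral_fluence_rate(12.3)                # FLEXITHERALIGHT protocol (light periods)
print(np.trapz(s0_standard, wl))                          # ~75 mW/cm^2
```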
Determination of the fluence rate
Similarly to Farrell et al. [START_REF] Farrell | Modeling of photosensitizer fluorescence emission and photobleaching for photodynamic therapy dosimetry[END_REF], and based on a PpIX concentration varying only with depth, z, below the irradiated surface (Figure 3), the local total fluence rate at time t, depth z and wavelength λ, denoted φ(t, z, λ), is obtained from the incident spectral fluence rate S0(λ) attenuated within the tissue according to the delta-P1 (diffusion) approximation [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF][START_REF] Farrell | Modeling of photosensitizer fluorescence emission and photobleaching for photodynamic therapy dosimetry[END_REF][START_REF] Carp | Radiative transport in the delta-P1 approximation: accuracy of fluence rate and optical penetration depth predictions in turbid semi-infinite media[END_REF]:

$$\phi(t,z,\lambda) = S_0(\lambda)\,\psi\big(z;\,\mu_a(t,z,\lambda),\,\mu_s'(\lambda)\big), \qquad \lambda_{start} \leq \lambda \leq \lambda_{end} \qquad (1)$$

Where:
S0(λ) (Figure 3) is the spectral fluence rate of the primary planar broad beam, non-zero over the source emission band [λ_start; λ_end], and ψ denotes the depth-dependent attenuation factor of the delta-P1 approximation for a semi-infinite turbid medium. The total absorption coefficient, μa, is the sum of the PpIX absorption coefficient, μa,PpIX, and the intrinsic absorption coefficient of the actinic keratosis tissue. The coefficients of ψ, which depend on both the optical properties of the actinic keratosis and the boundary conditions at the actinic keratosis surface, are computed as described in [START_REF] Farrell | Modeling of photosensitizer fluorescence emission and photobleaching for photodynamic therapy dosimetry[END_REF].
Updating of the PpIX absorption coefficient
During a PDT treatment, three processes affect the PpIX absorption coefficient: the biological clearance of PpIX, the conversion of ALA or MAL into PpIX and the PpIX photobleaching.
In our previous work [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF], the conversion of MAL into PpIX was modeled using an exponential growth function resulting in an controversial unlimited increase in time of the number of new PpIX molecules. In this paper, a more realistic model for the conversion of MAL into PpIX also taking into account the biological clearance of PpIX is defined based on clinical data from several studies while the photobleaching model is the same as in our previous work [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF].
In order to model the time evolution of the PpIX absorption coefficient when only the biological clearance of PpIX and the conversion of MAL into PpIX are considered, we use the fluorescence data reported by Wiegell et al. [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF] together with the PpIX concentrations computed as a function of depth in the epidermis by Star et al. [START_REF] Star | Quantitative model calculation of the time-dependent protoporphyrin IX concentration in normal human epidermis after delivery of ALA by passive topical application or lontophoresis[END_REF]. These data demonstrate a logistic growth in time of the PpIX amount, with a shape that depends on the depth in the epidermis. This dependence, which arises from the progressive skin penetration of MAL, is taken into account in equation 2 through the limiting value of the logistic function, as deduced from the PpIX concentration data reported in [START_REF] Star | Quantitative model calculation of the time-dependent protoporphyrin IX concentration in normal human epidermis after delivery of ALA by passive topical application or lontophoresis[END_REF]:
$$M_{PpIX}^{BC}(t,z) = \frac{L(z)}{1+\exp\big(-k\,(t-\tau)\big)} \qquad (2)$$

Where:
M_PpIX^BC(t, z) is the number of PpIX molecules present in a unit volume, V_U, at time t and depth z when only the biological clearance of PpIX and the conversion of MAL into PpIX are considered; L(z) is the depth-dependent limiting value of the logistic growth; k is the logistic growth rate; and τ is the time at which half of the limiting value is reached.

Assuming a standard exponential depth decay with constant, α, for L(z), equation 2 becomes equation 3:

$$M_{PpIX}^{BC}(t,z) = \frac{L_0\,\exp(-\alpha z)}{1+\exp\big(-k\,(t-\tau)\big)} \qquad (3)$$

The corresponding increment over the time interval [t; t+dt], denoted dM_PpIX^BC(t, z), can be expressed as follows (equation 4):

$$dM_{PpIX}^{BC}(t,z) = M_{PpIX}^{BC}(t+dt,z) - M_{PpIX}^{BC}(t,z) = L_0\,\exp(-\alpha z)\left[\frac{1}{1+\exp\big(-k\,(t+dt-\tau)\big)} - \frac{1}{1+\exp\big(-k\,(t-\tau)\big)}\right] \qquad (4)$$
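A minimal numerical sketch of this PpIX build-up model (equations 2-4) is given below, using the parameter values that are fitted later in the paper (Table 3); it simply tabulates the logistic growth and its increments.

```python
import numpy as np

# Parameter values fitted later in this paper (Table 3)
K_RATE = 2.93e-4   # logistic growth rate (1/s)
TAU = 1.01e4       # time of half-maximal PpIX amount (s)
ALPHA = 0.89       # depth decay constant (1/mm)
L0 = 1.29e4        # limiting number of PpIX molecules per unit volume V_U at z = 0

def m_ppix_bc(t_s, z_mm):
    """Equation 3: PpIX molecules per unit volume (clearance + MAL conversion only)."""
    return L0 * np.exp(-ALPHA * z_mm) / (1.0 + np.exp(-K_RATE * (t_s - TAU)))

def d_m_ppix_bc(t_s, z_mm, dt_s):
    """Equation 4: increment of PpIX molecules over [t, t + dt]."""
    return m_ppix_bc(t_s + dt_s, z_mm) - m_ppix_bc(t_s, z_mm)

# PpIX amount at the surface after the two incubation times considered in the paper
print(m_ppix_bc(30 * 60, 0.0))    # after 30 minutes (FLEXITHERALIGHT protocol)
print(m_ppix_bc(3 * 3600, 0.0))   # after 3 hours (standard protocol)
```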
Regarding the photobleaching process, based on our previous work [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF], the number of PpIX molecules eliminated by photobleaching during the time interval [t; t+dt], denoted dM_PpIX^P(t, z), can be estimated using equation 5:

$$dM_{PpIX}^{P}(t,z) \;\propto\; -\,M_{PpIX}(t,z)\;dt \int \phi(t,z,\tilde{\lambda})\;\mu_{a,PpIX}(t,z,\tilde{\lambda})\;\frac{\tilde{\lambda}}{h\,c}\;d\tilde{\lambda} \qquad (5)$$

Where:
M_PpIX(t, z) is the number of PpIX molecules present in a unit volume, V_U, at time t and depth z; the spectral integral counts the photons absorbed by PpIX per unit volume and per unit time (h and c being the Planck constant and the speed of light); and the proportionality constant combines the bimolecular rate constant for the reaction of singlet oxygen with PpIX, the singlet-oxygen yield and the Avogadro number (6.02 × 10²³ mol⁻¹), which together set the photobleaching rate.
Combining the two contributions, the number of PpIX molecules at time t+dt is updated according to equation 6:

$$M_{PpIX}(t+dt,z) = M_{PpIX}(t,z) + dM_{PpIX}^{BC}(t,z) + dM_{PpIX}^{P}(t,z) \qquad (6)$$

Based on the relationship between the PpIX absorption coefficient and the number of PpIX molecules per unit volume, μa,PpIX(t, z, λ) ∝ M_PpIX(t, z)/V_U, the same update applies to the PpIX absorption coefficient (equation 7):

$$\mu_{a,PpIX}(t+dt,z,\lambda) = \mu_{a,PpIX}(t,z,\lambda) + \frac{\mu_{a,PpIX}(t,z,\lambda)}{M_{PpIX}(t,z)}\,dM_{PpIX}^{BC}(t,z) + \frac{\mu_{a,PpIX}(t,z,\lambda)}{M_{PpIX}(t,z)}\,dM_{PpIX}^{P}(t,z) \qquad (7)$$

where the second term (biological clearance and conversion of MAL into PpIX) and the last term (photobleaching) follow from equations 4 and 5, respectively.
Initialization
We naturally assume that:

$$M_{PpIX}(0,z) = M_{PpIX}^{BC}(0,z) = \frac{L_0\,\exp(-\alpha z)}{1+\exp(k\,\tau)}$$

and subsequently

$$\mu_{a,PpIX}(0,z,\lambda) \;\propto\; \frac{M_{PpIX}(0,z)}{V_U} = \frac{L_0\,\exp(-\alpha z)}{V_U\,\big(1+\exp(k\,\tau)\big)}.$$
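Taken together, the initialization above and the updates of equations 6-7 define an explicit time-stepping loop. The sketch below illustrates its structure, reusing m_ppix_bc and d_m_ppix_bc from the previous sketch; the photobleaching kinetics of equation 5 are replaced by a single lumped, hypothetical constant, and the accumulated PpIX-weighted light exposure is tracked as a stand-in for the dose of equation 9, so this is an illustration of the algorithm rather than of the exact dosimetry.

```python
def simulate_ppix(duration_s, dt_s, z_mm, fluence_rate_of_t, bleach_const=1e-6):
    """Explicit time stepping of the PpIX amount at one depth (schematic).

    fluence_rate_of_t(t) returns the wavelength-integrated fluence rate (mW/cm^2);
    bleach_const is a lumped, hypothetical stand-in for the photobleaching
    kinetics of equation 5 (the paper itself uses a much finer dt of 1e-5 s).
    """
    m = m_ppix_bc(0.0, z_mm)                  # initialization at t = 0
    dose = 0.0
    n_steps = int(round(duration_s / dt_s))
    for i in range(n_steps):
        t = i * dt_s
        phi = fluence_rate_of_t(t)
        d_bc = d_m_ppix_bc(t, z_mm, dt_s)     # build-up term (equation 4)
        d_p = -bleach_const * phi * m * dt_s  # photobleaching term (schematic equation 5)
        dose += phi * m * dt_s                # cumulative PpIX-weighted exposure (cf. equation 9)
        m = m + d_bc + d_p                    # update (equation 6)
    return m, dose

# Fractionated illumination of the FLEXITHERALIGHT protocol: 1 minute on, 2 minutes off
def flexi_fluence(t_s):
    return 12.3 if (t_s % 180.0) < 60.0 else 0.0

m_end, dose = simulate_ppix(9024.0, 0.01, 0.0, flexi_fluence)
```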
Parameters setting
The optical properties for actinic keratosis mentioned in equation 1 are derived from the data reported in Garcia-Uribe et al. [START_REF] Garcia-Uribe | In Vivo diagnosis of melanoma and nonmelanoma skin cancer using oblique incidence diffuse reflectance spectrometry[END_REF].
The parameters related to the photobleaching process (equation 5 and last term in equation 7)
were assigned to the values used in our previous work [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF] (Table 2). These values reported in the literature have been empirically determined as stated in [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF].
Parameters | Value
dt | 1×10⁻⁵ s
β | 5.3×10⁹ l/mol/s
Φ̃_Δ | 0.56

Table 2: Specification of the model parameters from [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF].

Regarding the biological clearance of PpIX and the conversion of MAL into PpIX (second term in equation 7), four parameters remain to be specified: L(0), k and τ, introduced in equation 2, and α, introduced in equation 3.
From the fitting of equation 2 to the PpIX concentration data computed at zero depth in normal human epidermis by Star et al. [START_REF] Star | Quantitative model calculation of the time-dependent protoporphyrin IX concentration in normal human epidermis after delivery of ALA by passive topical application or lontophoresis[END_REF], γL(0) (γ is the conversion factor between M^BC_PpIX(t,z) and the corresponding PpIX concentration expressed in µg/g in [START_REF] Star | Quantitative model calculation of the time-dependent protoporphyrin IX concentration in normal human epidermis after delivery of ALA by passive topical application or lontophoresis[END_REF]), k and τ were estimated to be 3.43 AU, 2.93×10⁻⁴ /s and 1.01×10⁴ s, respectively. The PpIX concentration data at 0.2 mm depth, computed from the PpIX concentration data at zero depth and from the ratio between the PpIX concentrations at 0.2 mm and at 0 mm, both reported in [START_REF] Star | Quantitative model calculation of the time-dependent protoporphyrin IX concentration in normal human epidermis after delivery of ALA by passive topical application or lontophoresis[END_REF], were fitted to equation 2, leading to the values of 2.82 AU, 3.25×10⁻⁴ /s and 1.01×10⁴ s for γL(0.2 mm), k and τ, respectively. Regarding k and τ, these values are close to the ones obtained using the PpIX concentration data computed at zero depth and are therefore consistent with the use of single values of k and τ, as assumed in equation 2. The ratio between the limiting values fitted at 0 mm and at 0.2 mm depth then enables the depth decay constant, α, to be deduced (Table 3).
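The fit of equation 2 to concentration-versus-time data can be reproduced with a standard least-squares routine. The Python sketch below uses made-up illustrative data points in place of the values derived from the data of Star et al.; only the procedure, not the numbers, is meant to be representative.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, gamma_L0, k, tau):
    """Equation 2 at z = 0, scaled by the concentration conversion factor gamma."""
    return gamma_L0 / (1.0 + np.exp(-k * (t - tau)))

# Illustrative time/concentration pairs (hours converted to seconds, arbitrary units)
t_data = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]) * 3600.0
c_data = np.array([0.12, 0.35, 0.80, 1.40, 2.00, 2.45, 2.80, 3.00])

popt, _ = curve_fit(logistic, t_data, c_data, p0=[3.0, 3e-4, 7e3])
gamma_L0, k, tau = popt
print(gamma_L0, k, tau)   # the text reports 3.43 AU, 2.93e-4 /s and 1.01e4 s
```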
Finally, L(0) was obtained by equating the PpIX concentration given by equation 3 at zero depth after three hours of incubation to the PpIX concentration of 11.8 pmol/ml obtained by Smits et al. [START_REF] Smits | New aspects in photodynamic therapy of actinic keratoses[END_REF] from 11 patients with AK incubated with 20% ALA for 3 hours (equation 8):

$$C_{Equation\,3}(3\,\mathrm{h},0)=\frac{M^{BC}_{PpIX}(3\,\mathrm{h},0)}{N_A\,V_U}=\frac{L(0)}{N_A\,V_U\left[1+\exp\left(-k\,(3\,\mathrm{h}-\tau)\right)\right]}=C_{Smits\;et\;al.\,[32]}=11.8\ \mathrm{pmol/ml}\;\Rightarrow\;L(0)=11.8\ \mathrm{pmol/ml}\times N_A\,V_U\left[1+\exp\left(-k\,(3\,\mathrm{h}-\tau)\right)\right]\qquad(8)$$

Parameters | Value
k | 2.93×10⁻⁴ /s
τ | 1.01×10⁴ s
α | 0.89 /mm
L(0) | 1.29×10⁴ (using V_U = 10³ µm³)

Table 3: Specification of the parameters for the biological clearance of PpIX and the conversion of MAL into PpIX.
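Equation 8 can be checked numerically; the short sketch below assumes, as above, a unit volume V_U of 10³ µm³ and recovers the order of magnitude of L(0) reported in Table 3.

```python
import numpy as np

N_A = 6.022e23                 # /mol
V_U = 1e3 * 1e-12              # assumed 10^3 um^3 expressed in ml (1 um^3 = 1e-12 ml)
k, tau = 2.93e-4, 1.01e4       # /s, s (Table 3)
C_smits = 11.8e-12             # 11.8 pmol/ml from Smits et al.
t3h = 3 * 3600.0

L0 = C_smits * N_A * V_U * (1.0 + np.exp(-k * (t3h - tau)))
print(L0)                      # ~1.3e4, consistent with L(0) = 1.29e4 in Table 3
```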
D. Quantification of the PDT local damage

The integral in the photobleaching term of equation 7 (last part of the right hand side) represents the number of singlet oxygen molecules generated during the time interval [t; t+dt] in a unit volume, V_U, located at depth z in the skin sample model, when the PpIX molecules, excited by absorption of photons, return to the ground state [26]. Therefore, the sum of this integral over the time intervals [0; dt], [dt; 2 dt], [2 dt; 3 dt], …, [t_i − dt; t_i], with t_i = t_start + i dt and t_start the beginning of the irradiation, provides the total cumulative number of singlet oxygen molecules produced during the interval [0; t_i]. Following several studies on PDT [26,33,34], this cumulative parameter enables the quantification of the PDT local damage, below denoted as D, over time (equation 9):

$$D(t_{start}+i\,dt,\,z)=\sum_{j=0}^{i-1} V_U\,dt\int \tilde{\Phi}_{\Delta}\,\phi(t_{start}+j\,dt,\,z,\,\tilde{\lambda})\,\mu_{a,PpIX}(t_{start}+j\,dt,\,z,\,\tilde{\lambda})\,\frac{\tilde{\lambda}}{h\,c}\,d\tilde{\lambda}\qquad(9)$$
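Equation 9 amounts to a running sum of the singlet-oxygen generation term over the irradiation time steps. A minimal Python sketch, fed with a hypothetical record of fluence-rate and absorption-coefficient spectra, is:

```python
import numpy as np

h, c = 6.626e-34, 3.0e8
phi_delta, dt = 0.56, 1e-5
V_U = 1e3 * 1e-18                              # assumed 10^3 um^3, in m^3
wavelengths = np.linspace(400e-9, 750e-9, 176)

def damage(phi_history, mu_a_history):
    """Equation 9: cumulative singlet oxygen generated in V_U at one depth.

    phi_history and mu_a_history are lists of spectra, one entry per time step."""
    D = 0.0
    for phi, mu_a in zip(phi_history, mu_a_history):
        D += V_U * dt * phi_delta * np.trapz(phi * mu_a * wavelengths / (h * c),
                                             wavelengths)
    return D

# Example with dummy, constant spectra over 100 time steps
phi_hist = [np.ones_like(wavelengths)] * 100
mu_hist = [np.full_like(wavelengths, 0.1)] * 100
print(damage(phi_hist, mu_hist))
```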
IV. Results
A. From the clinical trial
The preliminary results obtained for the first 14 patients of the clinical trial [START_REF] Vicentini | Phase II study evaluating the non-inferiority of the device FLEXITHERALIGHT compared to conventional PDT[END_REF] are summarized in Tables 4 and 5. From these results, PDT using the FLEXITHERALIGHT protocol is not inferior in terms of complete response rate to PDT using the standard protocol (Table 4), while being more comfortable for patients (Table 5).

Table 5: Estimation of the pain for PDT using the standard protocol (second row) and for PDT using the FLEXITHERALIGHT protocol (third row) at day 0 (second column) and day 7 (third column) from the results of the first 14 patients of the clinical trial [START_REF] Vicentini | Phase II study evaluating the non-inferiority of the device FLEXITHERALIGHT compared to conventional PDT[END_REF].
B. From the mathematical modeling study

All the computations were performed using a Matlab™ program on a standard personal computer (Intel Xeon CPU E3-1240 V2 3.40 GHz, 8 Go of RAM, Windows 7 64 bits). From Figure 4, whatever the depth in AK, the number of PpIX molecules corresponding to the FLEXITHERALIGHT protocol continues to increase even after the beginning of irradiation (t_start = 30 minutes). This means that the number of PpIX molecules generated from the conversion of MAL is always higher than the number of PpIX molecules removed by either the biological clearance or the photobleaching of PpIX. Regarding the standard protocol, the irradiation leads to a mean percent drop of 32.04% in the number of PpIX molecules, which is close to the 27 percent drop computed from the data reported in Wiegell et al. [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF].
Whatever the protocol, a large number of PpIX molecules is still present at the end of the irradiation (Figure 4): for the standard protocol (respectively, for the FLEXITHERALIGHT protocol), this number is more than 7.59 (respectively, 9.59) times higher than the number of PpIX molecules present at the beginning of the incubation (i.e., at time t = 0 s).
The time evolution of the PDT local damage achieved when using the standard and the FLEXITHERALIGHT protocols is illustrated for different depths in AK in Figures 5 and 6. As expected from the involved fluence rates and irradiation types, the PDT local damage produced using the standard protocol increases much faster than that produced using the FLEXITHERALIGHT protocol (Figure 5). From a linear regression of the PDT local damage versus time performed for all depths in AK, the PDT local damage obtained using the standard protocol increases, on average, about 33.30 times faster than the one obtained using the FLEXITHERALIGHT protocol.
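The "about 33 times faster" figure is the ratio of the slopes of linear fits of the PDT local damage versus time for the two protocols; the short Python sketch below illustrates this with made-up damage curves in place of the computed ones.

```python
import numpy as np

t_std = np.linspace(0.0, 493.0, 50)          # standard protocol irradiation (s)
t_flex = np.linspace(0.0, 9024.0, 50)        # FLEXITHERALIGHT irradiation (s)
D_std = 2.0e6 * t_std                        # illustrative damage curves
D_flex = 6.0e4 * t_flex

slope_std = np.polyfit(t_std, D_std, 1)[0]
slope_flex = np.polyfit(t_flex, D_flex, 1)[0]
print(slope_std / slope_flex)                # ratio of increase rates (~33 here)
```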
Furthermore, as can be seen in Figures 5 and 6, whatever the depth position in AK, the PDT local damage achieved at the end of the treatment using the standard protocol is higher than the one achieved at the end of the treatment using the FLEXITHERALIGHT protocol.
Ranging from 1.78 at 0 µm depth to 1.80 at 150 µm depth, the ratio between the PDT local damage achieved at the end of the treatment using the standard protocol and that achieved using the FLEXITHERALIGHT protocol is a slightly increasing function of the depth position in AK.
In addition, for both the protocols, an increasing impact of the depth in AK on the PDT local damage over time is observed in Figure 6. Moreover, the shape of the time course of the PDT local damage for the standard protocol tends to demonstrate a very slight logarithmic trend (Figure 6.a) while that for the FLEXITHERALIGHT protocol suggests an exponential trend (Figure 6.b).
The local damages obtained at the end of the treatment using the standard protocol and the FLEXITHERALIGHT protocol are displayed according to depth in Figure 7. From Figure 7, the depth-related decrease rate in the PDT local damage obtained at the end of the treatment using the FLEXITHERALIGHT protocol seems to be similar to that obtained using the standard protocol. Nonetheless, from a linear regression, the depth-related decrease rate for the FLEXITHERALIGHT protocol is around one third smaller than that obtained using the standard protocol.
V. Discussion
In this paper, a comparison between the two following 635 nm red light protocols is performed for the PDT treatment of actinic keratosis using a mathematical modeling of the PDT process:
Protocol 1, with incubation time: three hours, irradiation type: continuous, light dose: 37 J/cm², fluence rate: 75 mW/cm²;
Protocol 2, with incubation time: 30 minutes, irradiation type: fractionated with two minutes dark intervals every three minutes, light dose: 37 J/cm², fluence rate: 12.3 mW/cm².
The continuous 75 mW/cm 2 red light protocol was considered due to its standardized use across Europe [START_REF] Apalla | The impact of different fluence rates on pain and clinical outcome in patients with actinic keratoses treated with photodynamic therapy[END_REF][START_REF] Tyrrell | Protoporphyrin IX photobleaching during the light irradiation phase of standard dermatological methyl-aminolevulinate photodynamic therapy[END_REF][START_REF] Sotiriou | Photodynamic therapy vs. imiquimod 5% cream as skin cancer preventive strategies in patients with field changes: a randomized intraindividual comparison study[END_REF] while the choice of the fractionated 12.3 mW/cm 2 red light protocol was motivated by the FLEXITHERALIGHT Project (http://www.flexitheralight.com/). This
French National Research Agency Project focuses on the development of a biophotonic device based on a flexible light emitting textile enabling a fractionated irradiation with a 12.3 mW/cm 2 fluence rate for the PDT treatment of actinic keratosis. Based, on one hand, on the published results of some alternative red light protocols with lower fluence rates than the standard 75 mW/cm 2 red light protocol [START_REF] Ericson | Photodynamic therapy of actinic keratosis at varying fluence rates: assessment of photobleaching, pain and primary clinical outcome[END_REF][START_REF] Apalla | The impact of different fluence rates on pain and clinical outcome in patients with actinic keratoses treated with photodynamic therapy[END_REF][START_REF] Enk | Low-irradiance red LED traffic lamps as light source in PDT for actinic keratoses[END_REF], and on the other hand, on the tissue reoxygenation and photosensitizer re-synthesis during the dark periods of the fractionated irradiation, the FLEXITHERALIGHT protocol is expected to be at least as effective as the standard protocol while being much better tolerated by patients. The preliminary results from the phase II clinical trial (ANSM authorization number: 2013-A01096-39) live up to these expectations since they demonstrate that the FLEXITHERALIGHT protocol is not inferior in terms of complete response rate to the standard protocol and is much more comfortable for patients.
A 200-μm thick partial ellipsoid included into a 150-μm thick parallelepiped was used to model a post-curettage AK in epidermis and an iterative procedure alternating determination of the fluence rate and updating of the optical properties was derived from our previous work [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF] to model the PDT process. The determination of the fluence rate involves solving the one-dimensional diffusion equation (equation 1) while the updating of the optical properties takes the biological clearance of PpIX, the conversion of MAL into PpIX and the PpIX photobleaching into account (equation 7). In this paper, the biological clearance of PpIX and the conversion of MAL into PpIX were described using a single logistic growth model (equations 2 and 3), that is, according to [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF][START_REF] Angell-Petersen | Porphyrin formation in actinic keratosis and basal cell carcinoma after topical application of methyl 5-aminolevulinate[END_REF][START_REF] Star | Quantitative model calculation of the time-dependent protoporphyrin IX concentration in normal human epidermis after delivery of ALA by passive topical application or lontophoresis[END_REF], a more realistic model compared to the two exponential models used in our previous work [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF]. Regarding the photobleaching, we used the original model that we proposed in [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF] (equation 5). This original photobleaching model involves the calculation of the number of singlet oxygen molecules generated over time assuming unlimited oxygen availability. This assumption that is made through the singlet oxygen quantum yield in equation 5, is deemed reasonable according to the thickness of the AK [START_REF] Stücker | The cutaneous uptake of atmospheric oxygen contributes significantly to the oxygen supply of human dermis and epidermis[END_REF]. In fact, as mentioned in [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF][START_REF] Stücker | The cutaneous uptake of atmospheric oxygen contributes significantly to the oxygen supply of human dermis and epidermis[END_REF], the epidermis layer is almost exclusively supplied by diffused oxygen from the atmosphere, and the unlimited source of atmospheric oxygen allows unlimited oxygen availability in the skin sample model to be reasonably assumed.
Finally, estimation of the cumulative number of singlet oxygen molecules produced during the treatment enables the quantification of the PDT local damage (equation 9).
From the above-mentioned suitable assumption of unlimited oxygen availability, the fractionated irradiation aimed at allowing tissue re-oxygenation and photosensitizer resynthesis during the dark periods, is, in the proposed model, only taken into account for photosensitizer re-synthesis purposes.
All the parameters involved in this model are set to published values obtained using PpIX and either normal human epidermis or AK [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF].
Applying to the standard protocol and to the FLEXITHERALIGHT one, the model allows evaluation and comparison of their performance in terms of the PDT local damage.
From the results, the higher fluence rate of the standard protocol compared to that of the FLEXITHERALIGHT protocol, combined with the continuous irradiation of the standard protocol as opposed to the fractionated irradiation of the FLEXITHERALIGHT protocol, logically leads to a higher increase rate for the PDT local damage of the standard protocol, deduced to be more than 30 times higher than that for the PDT local damage of the FLEXITHERALIGHT protocol (Figure 5).
Furthermore, in spite of the identical light dose of 37 J/cm² for the two protocols, using the well-known efficient standard protocol results in a PDT local damage at the end of the treatment that is, on average, 1.79 times as high as that obtained using the FLEXITHERALIGHT protocol (Figures 5 and 6).
However, the above-mentioned clinically-demonstrated non-inferiority in terms of complete response rate of the FLEXITHERALIGHT protocol versus the standard protocol, seems to highlight that the PDT local damage achieved using the FLEXITHERALIGHT protocol is sufficient to destroy any cancer cells and therefore that the parameters of the standard protocol should be revised accordingly. Thus, among the possible changes in parameters, reducing by half the treatment duration of the standard protocol (and therefore the light dose) may lead to a PDT local damage equivalent to the sufficient one obtained using the FLEXITHERALIGHT protocol (Figure 5).
Regarding the time evolution of the number of PpIX molecules in Figure 4, the beginning of the irradiation is clearly identifiable for the standard protocol, with a more than 30 percent drop which is in close agreement with the results from [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF]. On the contrary, the steady growth curves observed for the FLEXITHERALIGHT protocol make the identification of the beginning of the irradiation impossible: the 12.3 mW/cm² fluence rate and the fractionated irradiation do not allow a photobleaching of the PpIX molecules large enough to outweigh the conversion of MAL into PpIX. Consideration also needs to be given to the large number of PpIX molecules still present at the end of the treatment for both protocols (Figure 4).
This large number tends to demonstrate that the incubation time of the two protocols, or the cream concentration in MAL, could be reduced [START_REF] Christiansen | 5-ALA for photodynamic photorejuvenation-optimization of treatment regime based on normal-skin fluorescence measurements[END_REF].
Moreover, based on various studies on the impact of fluence rates on pain [START_REF] Ericson | Photodynamic therapy of actinic keratosis at varying fluence rates: assessment of photobleaching, pain and primary clinical outcome[END_REF][START_REF] Apalla | The impact of different fluence rates on pain and clinical outcome in patients with actinic keratoses treated with photodynamic therapy[END_REF], a better tolerability in terms of pain is expected using the FLEXITHERALIGHT protocol. This expectation has been verified from the above-mentioned phase II clinical trial.
Finally, these results, confirmed by a first analysis of the clinical trial data, emphasize the need to redefine standard protocols and to better determine treatment parameters for a similar efficiency but an improved tolerability and a more manageable clinical practice.
VI. Conclusions
In this paper, we have proposed to evaluate and compare two protocols: the standard protocol (wavelength: 635 nm, incubation time: three hours, illumination type: continuous, light dose: 37 J/cm², fluence rate: 75 mW/cm²) and an alternative one, the FLEXITHERALIGHT protocol (http://www.flexitheralight.com/) (wavelength: 635 nm, incubation time: 30 minutes, illumination type: fractionated with two minutes dark intervals every three minutes, light dose: 37 J/cm², fluence rate: 12.3 mW/cm²). The evaluation tends to demonstrate that an optimization of the parameters of the two protocols, and especially of the incubation times, could lead to a similarly efficient and more suitable treatment, while the comparison tends to indicate a slightly better efficiency of the standard protocol in terms of the PDT local damage.
Figure 1: The three flexible light emitting textiles of the FLEXITHERALIGHT device are sequentially activated for one minute (http://www.flexitheralight.com/).
Figure 2: Skin sample model.
Figure 3: The 75 mW/cm² and 12.3 mW/cm² spectral fluence rates used for the standard and the FLEXITHERALIGHT protocols, respectively.
The total transport coefficient, μ_t, is the sum of the total absorption coefficient, μ_a, and the actinic keratosis reduced scattering coefficient [28]. The two parameters, b and P, depend on both the optical properties of the actinic keratosis and the boundary conditions at the actinic keratosis surface.
Figure 4: Evolution of the number of PpIX molecules as a function of time when using the standard and the FLEXITHERALIGHT protocols (equation 6).
Figure 5: Evolution in time of the PDT local damage achieved when using the standard protocol (incubation time: three hours, irradiation type: continuous, light dose: 37 J/cm², fluence rate: 75 mW/cm², treatment duration: 493 s) (blue curves) and the FLEXITHERALIGHT one (incubation time: 30 minutes, irradiation type: fractionated with two minutes dark intervals every three minutes, light dose: 37 J/cm², fluence rate: 12.3 mW/cm², treatment duration: 9024 s) (red curves) at 50 (a) and 150 (b) µm in depth in AK.
Figure 7: Depth evolution of the PDT local damage for the standard protocol (incubation time: three hours, illumination type: continuous, light dose: 37 J/cm², fluence rate: 75 mW/cm², treatment duration: 493 s) (blue curves), and the FLEXITHERALIGHT protocol (incubation time: 30 minutes, illumination type: fractionated with two minutes dark intervals every three minutes, light dose: 37 J/cm², fluence rate: 12.3 mW/cm², treatment duration: 9024 s) (red curves).
Protocol name | Incubation time | Irradiation type | Fluence rate | Treatment duration
Standard protocol | 3 hours | Continuous | 75 mW/cm² | ~493 s
FLEXITHERALIGHT protocol | 30 minutes | Fractionated | 12.3 mW/cm² | ~9024 s

Table 1: Parameters for the two different 635 nm red light protocols with 37 J/cm² investigated in this paper.
B. Clinical trial for the comparison of the two protocols
A phase II clinical trial approved by the French National Agency for the Safety of Medicines
and Health Products (ANSM) (authorization number: 2013-A01096-39) and the French Ethics
Committee (CPP) (authorization number: CPP-03/051/2013) for the assessment of the non-
inferiority of the FLEXITHERALIGHT device compared to the standard photodynamic
therapy for the treatment of actinic keratoses was initiated at the end of 2013 and was recently
completed.
Table 4: Estimation of the complete response rate for PDT using the standard protocol (second row) and for PDT using the FLEXITHERALIGHT protocol (third row) at 3 (second column) and 6 months (third column) from the results of the first 14 patients of the clinical trial [START_REF] Vicentini | Phase II study evaluating the non-inferiority of the device FLEXITHERALIGHT compared to conventional PDT[END_REF].

Complete response rate | At 3 months | At 6 months
Standard protocol | 54.2% | 64%
FLEXITHERALIGHT protocol | 65.3% | 71.7%
Acknowledgement
This work was supported by the French National Institute of Health and Medical Research (INSERM) and by the French National Research Agency (ANR) Emergence 2012 (project reference number: ANR-12-EMMA-0018).

Anne-Sophie Vignion-Dewalle
email: [email protected]
Grégory Baert
Elise Thecua
Claire Vicentini
Laurent Mortier
Serge Mordon
Photodynamic therapy for actinic keratosis: Is the European consensus protocol for daylight PDT superior to conventional protocol for Aktilite CL 128 PDT?
Keywords: Photodynamic therapy, local damage comparison, mathematical modeling, red light, daylight, Aktilite CL 128.
I. Introduction
Photodynamic therapy (PDT) is a cancer treatment combining a light of an appropriate wavelength, a photosensitizer (PS), and sufficient molecular oxygen to generate reactive oxygen species and destroy (pre-) malignant cells [START_REF] Plaetzer | Photophysics and photochemistry of photodynamic therapy: fundamental aspects[END_REF]. Over the last 15 years, topical PDT using 5-aminolevulinic acid (ALA) (ALA-PDT) or methyl aminolevulinate (MAL) (MAL-PDT) has proven to be successful in the treatment of various dermatological conditions including actinic keratoses (AK) [START_REF] Bissonette | Large surface photodynamic therapy with aminolevulinic acid: treatment of actinic keratoses and beyond[END_REF][START_REF] Braathen | Guidelines on the use of photodynamic therapy for nonmelanoma skin cancer: an international consensus[END_REF][START_REF] Morton | Guidelines for topical photodynamic therapy: update[END_REF][START_REF] Morton | European Dermatology Forum Guidelines on topical photodynamic therapy[END_REF][START_REF] Wiegell | Update on photodynamic treatment for actinic keratosis[END_REF]. Topical administration of ALA or MAL leads to the selective accumulation of the photosensitizer protoporphyrin IX (PpIX) in the AK lesions and subsequent light irradiation leads to the destruction of the lesions.
Red light irradiation with the Aktilite CL 128 lamp (Galderma SA, Lausanne, Switzerland) using a total light dose of 37 J/cm 2 after three hours of incubation with MAL, is a conventional protocol that is approved and widely used in Europe for the PDT treatment of AK. This protocol, referred to in this paper as the conventional protocol for Aktilite CL 128 PDT, has been demonstrated to be an effective treatment with similar efficacy and better cosmetic results compared with standard therapies [START_REF] Morton | Intraindividual, right-left comparison of topical methyl aminolaevulinate-photodynamic therapy and cryotherapy in subjects with actinic keratoses: a multicentre, randomized controlled study[END_REF]. However, due in particular to high pain scores reported by patient during the treatment [START_REF] Tyrrell | The effect of air cooling pain relief on protoporphyrin IX photobleaching and clinical efficacy during dermatological photodynamic therapy[END_REF] and high room occupancy for dermatologists, many other protocols for PDT involving either shorter incubation times [START_REF] Braathen | Short incubation with methyl aminolevulinate for photodynamic therapy of actinic keratoses[END_REF], lower fluence rates [START_REF] Apalla | The impact of different fluence rates on pain and clinical outcome in patients with actinic keratoses treated with photodynamic therapy[END_REF], irradiation with other light sources [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF]… have been proposed for dermatological PDT.
Among these alternative PDT protocols, several protocols involving exposure with daylight instead of irradiation with the Aktilite CL 128 lamp have been investigated over the last decade [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF][START_REF] Wiegell | Photodynamic therapy of actinic keratoses with 8% and 16% methyl aminolaevulinate and home-based daylight exposure: a double-blinded randomized clinical trial[END_REF][START_REF] Wiegell | A randomized, multicentre study of directed daylight exposure times of 1½ vs. 2½ h in daylight-mediated photodynamic therapy with methyl aminolaevulinate in patients with multiple thin actinic keratoses of the face and scalp[END_REF][START_REF] Wiegell | Daylight photodynamic therapy for actinic keratosis: an international consensus: International Society for Photodynamic Therapy in Dermatology[END_REF][START_REF] Wiegell | Daylight-mediated photodynamic therapy of moderate to thick actinic keratoses of the face and scalp: a randomized multicentre study[END_REF][START_REF] Rubel | Daylight photodynamic therapy with methyl aminolevulinate cream as a convenient, similarly effective, nearly painless alternative to conventional photodynamic therapy in actinic keratosis treatment: a randomized controlled trial[END_REF]. Most of these protocols involve an incubation with MAL for a maximum of 30 minutes followed by a daylight exposure for between 1.5 and 2.5 hours. From an European consensus [START_REF] Wiegell | Daylight photodynamic therapy for actinic keratosis: an international consensus: International Society for Photodynamic Therapy in Dermatology[END_REF], using a two hours daylight exposure within 30 minutes after MAL application leads to a protocol, hereinafter referred to as the European consensus protocol for daylight PDT, as effective as and better tolerated by patients than the conventional protocol for Aktilite CL 128 PDT. The European consensus protocol for daylight PDT, more manageable in clinical practice than the conventional protocol for Aktilite CL 128 PDT [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF], has been recently approved in Europe for the treatment of thin, non-hyperkeratotic AK [START_REF] Cantisani | MAL Daylight Photodynamic Therapy for Actinic Keratosis: Clinical and Imaging Evaluation by 3D Camera[END_REF]. Based on the two comparative clinical studies between a protocol for daylight PDT and the conventional protocol for Aktilite CL 128 PDT [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF][START_REF] Rubel | Daylight photodynamic therapy with methyl aminolevulinate cream as a convenient, similarly effective, nearly painless alternative to conventional photodynamic therapy in actinic keratosis treatment: a randomized controlled trial[END_REF], we set the incubation time for the European consensus protocol for daylight PDT to 30 minutes.
In this paper, we propose to compare the efficiency of the conventional protocol for Aktilite CL 128 PDT (light source: Aktilite CL 128, incubation time: three hours, light dose: 37 J/cm 2 ) to the one of the European consensus protocol for daylight PDT (light source: daylight, incubation time: 30 minutes, treatment duration: two hours) through a mathematical modeling already published in our previous works [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF][START_REF] Vignion-Dewalle | Red light photodynamic therapy for actinic keratosis using 37 J/cm 2 : fractionated irradiation with 12.3 mW/cm 2 after 30 minutes incubation time compared to standard continuous irradiation with 75 mW/cm 2 after three hours incubation time using a mathematical modeling[END_REF]. Two weather conditions for the European consensus protocol for daylight PDT have been considered for this comparison: a clear sunny day and an overcast day. The mathematical modeling that involves a logistic model for both the biological clearance of PpIX and the conversion of MAL into PpIX and an analytic model for the PpIX photobleaching enables the local damage induced by the therapy to be estimated [START_REF] Vignion-Dewalle | Red light photodynamic therapy for actinic keratosis using 37 J/cm 2 : fractionated irradiation with 12.3 mW/cm 2 after 30 minutes incubation time compared to standard continuous irradiation with 75 mW/cm 2 after three hours incubation time using a mathematical modeling[END_REF].
A detailed description of both the conventional protocol for Aktilite CL 128 PDT and the European consensus protocol for daylight PDT is presented in Section II while Section III describes the mathematical modeling enabling the quantification of the PDT local damage.
The PDT local damages achieved by the conventional protocol for Aktilite CL 128 PDT, the European consensus protocol for daylight PDT during a sunny day and the European consensus protocol for daylight PDT during an overcast day are compared in Section IV.
Based on the comparison results, some discussions and conclusions are drawn in Section V.
II. The two topical PDT protocols considered in this paper
Incubation time here refers to the time elapsed between the application of MAL cream and the beginning of the light irradiation.
A. The conventional protocol for Aktilite CL 128 PDT
The Aktilite CL 128 lamp (Galderma SA, Lausanne, Switzerland) is the most widely used device system for topical PDT in Europe. Equipped with 128 light emitting diodes (LEDs) arranged in a 8×16 array, this lamp emits red light with a fluence rate of 70-100 mW/cm 2 (at a distance from 5 cm to 8 cm) [20], a peak wavelength of 632 nm [START_REF] Moseley | Light distribution and calibration of commercial PDT LED arrays[END_REF] and a full width at half maximum (FWHM) of approximately 19 nm [START_REF] Moseley | Light distribution and calibration of commercial PDT LED arrays[END_REF]. The fluence can be adjusted at the control panel of the lamp and the irradiation time is calculated automatically accordingly: for the 37 J/cm 2 as recommended for dermatological MAL-PDT, the irradiation time varies between 6 and 10 minutes. Moreover, with an irradiation field size of 8×18 cm, the Aktilite CL 128 lamp enables quite large fields to be treated.
Sequentially involving gentle curettage of the lesions, MAL cream application with occlusive dressing to each lesion and 5 mm of surrounding tissue for an incubation time of three hours, removal of excess MAL cream, positioning of the Aktilite CL 128 lamp head 5-8 cm over the area to be treated and irradiation with a total light dose of 37 J/cm 2 , the so called conventional protocol for Aktilite CL 128 PDT has proven to be effective for the treatment of various skin malignancies [START_REF] Calzavara-Pinton | Methylaminolaevulinate-based photodynamic therapy of Bowen's disease and squamous cell carcinoma[END_REF][START_REF] Szeimies | A clinical study comparing methyl aminolevulinate photodynamic therapy and surgery in small superficial basal cell carcinoma (8-20 mm), with a 12-month follow-up[END_REF][START_REF] Lee | Photodynamic therapy: new treatment for recalcitrant Malassezia folliculitis[END_REF][START_REF] Kim | Photodynamic therapy with methyl-aminolaevulinic acid for mycosis fungoides[END_REF][START_REF] Kim | Photodynamic therapy with ablative carbon dioxide fractional laser for treating Bowen disease[END_REF][START_REF] Fernandez-Guarino | Six Years of Experience in Photodynamic Therapy for Basal Cell Carcinoma: Results and Fluorescence Diagnosis from 191 Lesions[END_REF] including actinic keratosis [START_REF] Morton | Intraindividual, right-left comparison of topical methyl aminolaevulinate-photodynamic therapy and cryotherapy in subjects with actinic keratoses: a multicentre, randomized controlled study[END_REF][START_REF] Szeimies | Photodynamic therapy using topical methyl 5-aminolevulinate compared with cryotherapy for actinic keratosis: A prospective, randomized study[END_REF][START_REF] Pariser | Topical methyl-aminolevulinate photodynamic therapy using red lightemitting diode light for treatment of multiple actinic keratoses: A randomized, doubleblind, placebo-controlled study[END_REF][START_REF] Szeimies | Topical methyl aminolevulinate photodynamic therapy using red light-emitting diode light for multiple actinic keratoses: a randomized study[END_REF]. With similar response rates compared with standard therapies in the treatment for AK, the conventional protocol for Aktilite CL 128 PDT has also demonstrated improved cosmetic outcomes [START_REF] Morton | Intraindividual, right-left comparison of topical methyl aminolaevulinate-photodynamic therapy and cryotherapy in subjects with actinic keratoses: a multicentre, randomized controlled study[END_REF][START_REF] Tyrrell | The effect of air cooling pain relief on protoporphyrin IX photobleaching and clinical efficacy during dermatological photodynamic therapy[END_REF].
B. The European consensus protocol for daylight PDT
Many studies on daylight PDT for the treatment of AK [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF][START_REF] Wiegell | Photodynamic therapy of actinic keratoses with 8% and 16% methyl aminolaevulinate and home-based daylight exposure: a double-blinded randomized clinical trial[END_REF][START_REF] Wiegell | A randomized, multicentre study of directed daylight exposure times of 1½ vs. 2½ h in daylight-mediated photodynamic therapy with methyl aminolaevulinate in patients with multiple thin actinic keratoses of the face and scalp[END_REF][START_REF] Wiegell | Daylight photodynamic therapy for actinic keratosis: an international consensus: International Society for Photodynamic Therapy in Dermatology[END_REF][START_REF] Wiegell | Daylight-mediated photodynamic therapy of moderate to thick actinic keratoses of the face and scalp: a randomized multicentre study[END_REF][START_REF] Rubel | Daylight photodynamic therapy with methyl aminolevulinate cream as a convenient, similarly effective, nearly painless alternative to conventional photodynamic therapy in actinic keratosis treatment: a randomized controlled trial[END_REF][START_REF] Wiegell | Weather conditions and daylight-mediated photodynamic therapy: protoporphyrin IX-weighted daylight doses measured in six geographical locations[END_REF][START_REF] Lane | Daylight photodynamic therapy: the Southern California experience[END_REF][START_REF] Perez-Perez | Daylight-mediated photodynamic therapy in Spain: advantages and disadvantages[END_REF][START_REF] See | Consensus recommendations on the use of daylight photodynamic therapy with methyl aminolevulinate cream for actinic keratoses in Australia[END_REF] have been published since the early work of Wiegell et al. [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF].
As stated above, the fundamental difference between the protocols for daylight PDT and the protocols for Aktilite CL 128 PDT is the exposure to daylight in place of the red light provided by the Aktilite CL128 lamp.
Furthermore, most of the studies on daylight PDT report an incubation with MAL cream for a maximum of 30 minutes and no excess cream removal before exposure to daylight. The combination of these two factors involves, during all the procedure, an accumulation of PpIX greatly reduced compared to that of the protocols for Aktilite CL 128 PDT. In fact, the incubation time prior to the daylight exposure that is shorter than the usual three hours used for the protocols for Aktilite CL 128 PDT, results in a low initial PpIX accumulation.
Thereafter, without excess cream removal and with a fluence rate lower than that of the Aktilite CL 128 lamp, the protocols for daylight PDT allow for a balance between the formation and the photodegradation of PpIX (the PpIX molecules are photoactivated/photodegraded as quickly as they are formed), thus maintaining a low PpIX accumulation.
Another difference between the protocols for daylight PDT and the protocols for Aktilite CL 128 PDT is the application of a chemical sunscreen to the treatment area to prevent sunburn.
This sunscreen allowing wavelengths activating PpIX to pass through is usually applied before the gentle curettage of the lesions [START_REF] Wiegell | A randomized, multicentre study of directed daylight exposure times of 1½ vs. 2½ h in daylight-mediated photodynamic therapy with methyl aminolaevulinate in patients with multiple thin actinic keratoses of the face and scalp[END_REF][START_REF] Wiegell | Daylight-mediated photodynamic therapy of moderate to thick actinic keratoses of the face and scalp: a randomized multicentre study[END_REF][START_REF] See | Consensus recommendations on the use of daylight photodynamic therapy with methyl aminolevulinate cream for actinic keratoses in Australia[END_REF].
Among the studies on daylight PDT, two randomized clinical trials each compared a protocol for daylight PDT with the conventional protocol for Aktilite CL 128 PDT [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF][START_REF] Rubel | Daylight photodynamic therapy with methyl aminolevulinate cream as a convenient, similarly effective, nearly painless alternative to conventional photodynamic therapy in actinic keratosis treatment: a randomized controlled trial[END_REF]. With an exposure to daylight PDT of 2.5 hours and 2 hours for the first clinical trial [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF] and the second one [START_REF] Rubel | Daylight photodynamic therapy with methyl aminolevulinate cream as a convenient, similarly effective, nearly painless alternative to conventional photodynamic therapy in actinic keratosis treatment: a randomized controlled trial[END_REF], respectively, the two involved protocols for daylight PDT (both with 30 minutes incubation time with MAL) have been demonstrated as effective as the conventional protocol for Aktilite CL 128 PDT. Furthermore, due to the above mentioned reduction in PpIX accumulation obtained with the protocols for daylight PDT, patients enrolled in these two trials reported less pain during the protocols for daylight PDT than during the conventional protocol for Aktilite CL 128 PDT [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF][START_REF] Rubel | Daylight photodynamic therapy with methyl aminolevulinate cream as a convenient, similarly effective, nearly painless alternative to conventional photodynamic therapy in actinic keratosis treatment: a randomized controlled trial[END_REF]. This experience of a nearly pain-free treatment for patients has also been reported in other studies on daylight PDT [START_REF] Wiegell | Photodynamic therapy of actinic keratoses with 8% and 16% methyl aminolaevulinate and home-based daylight exposure: a double-blinded randomized clinical trial[END_REF][START_REF] Wiegell | A randomized, multicentre study of directed daylight exposure times of 1½ vs. 2½ h in daylight-mediated photodynamic therapy with methyl aminolaevulinate in patients with multiple thin actinic keratoses of the face and scalp[END_REF][START_REF] Wiegell | Daylight photodynamic therapy for actinic keratosis: an international consensus: International Society for Photodynamic Therapy in Dermatology[END_REF][START_REF] Wiegell | Daylight-mediated photodynamic therapy of moderate to thick actinic keratoses of the face and scalp: a randomized multicentre study[END_REF][START_REF] Lane | Daylight photodynamic therapy: the Southern California experience[END_REF][START_REF] Perez-Perez | Daylight-mediated photodynamic therapy in Spain: advantages and disadvantages[END_REF][START_REF] See | Consensus recommendations on the use of daylight photodynamic therapy with methyl aminolevulinate cream for actinic keratoses in Australia[END_REF].
Finally, with a shorter time of clinic attendance than the conventional protocol for Aktilite CL 128 PDT, the protocols for daylight PDT are more convenient for both patients and clinicians.
With a maximum of 30 minutes of MAL incubation and no excess cream removal before daylight exposure for 2 hours, the so called European consensus protocol for daylight PDT has been adopted from a European consensus in 2012 [START_REF] Wiegell | Daylight photodynamic therapy for actinic keratosis: an international consensus: International Society for Photodynamic Therapy in Dermatology[END_REF]. Based on the above mentioned randomized clinical trials [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF][START_REF] Rubel | Daylight photodynamic therapy with methyl aminolevulinate cream as a convenient, similarly effective, nearly painless alternative to conventional photodynamic therapy in actinic keratosis treatment: a randomized controlled trial[END_REF], we set the incubation time for the European consensus protocol for daylight PDT to 30 minutes.
In this study, the outdoor temperature was assumed to be sufficient for the PDT process to occur [START_REF] Wiegell | Weather conditions and daylight-mediated photodynamic therapy: protoporphyrin IX-weighted daylight doses measured in six geographical locations[END_REF][START_REF] Mordon | A commentary on the role of skin temperature on the effectiveness of ALA-PDT in dermatology[END_REF] and thus no further consideration has been given to the temperature in this paper.
III. Mathematical modeling for topical PDT protocol
A. AK sample model
To account for both the confinement of AKs to the epidermis and the usual 100 μm thickness of epidermis [START_REF] Liu | A dynamic model for ALA-PDT of skin: simulation of temporal and spatial distributions of ground-state oxygen, photosensitizer and singlet oxygen[END_REF], the simplified AK sample model consists of a 10 µm wide and 100 μm thick parallelepiped (Figure 1). The AK tissue is assumed homogeneous and its optical properties are set to the values reported in Garcia-Uribe et al. [START_REF] Garcia-Uribe | In vivo diagnosis of melanoma and nonmelanoma skin cancer using oblique incidence diffuse reflectance spectrometry[END_REF].
A primary planar beam with fluence rate S_0 is assumed to perpendicularly irradiate the surface of the AK sample model as illustrated in Figure 1.
B. Models for the different fluence rates
As depending on the weather conditions [START_REF] Wiegell | Weather conditions and daylight-mediated photodynamic therapy: protoporphyrin IX-weighted daylight doses measured in six geographical locations[END_REF][START_REF] Spelman | Treatment of face and scalp solar (actinic) keratosis with daylight-mediated photodynamic therapy is possible throughout the year in Australia: Evidence from a clinical and meteorological study[END_REF][START_REF] O'gorman | Artificial White Light vs Daylight Photodynamic Therapy for Actinic Keratoses: A Randomized Clinical Trial[END_REF], the spectral fluence rate for the daylight is not unique. In this paper, two spectral fluence rates have been used: the first one corresponds to a clear sunny day (blue curve in Figure 2) while the second stands for an overcast day (cyan curve in Figure 2). These two spectral fluence rates were recorded in the study of O'Gorman et al. [START_REF] O'gorman | Artificial White Light vs Daylight Photodynamic Therapy for Actinic Keratoses: A Randomized Clinical Trial[END_REF], who kindly provided them to us. The use of these two spectral fluence rates for the daylight aims to quantify the effect of the weather conditions on the PDT local damage.
Among the various spectral fluence rates for the Aktilite CL 128 lamp we have at our disposal, the one measured at a distance of 8 cm from the lamp by O'Gorman et al. [START_REF] O'gorman | Artificial White Light vs Daylight Photodynamic Therapy for Actinic Keratoses: A Randomized Clinical Trial[END_REF] (red curve in Figure 2) was preferred. Indeed, recorded using the same measurement system as the two spectral fluence rates for daylight, this spectral fluence rate seems to be the most appropriate for a pertinent comparison.
C. Modeling of the PDT process
The modeling method is the same as in our previously validated work [START_REF] Vignion-Dewalle | Red light photodynamic therapy for actinic keratosis using 37 J/cm 2 : fractionated irradiation with 12.3 mW/cm 2 after 30 minutes incubation time compared to standard continuous irradiation with 75 mW/cm 2 after three hours incubation time using a mathematical modeling[END_REF] Based on our previous works [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF][START_REF] Vignion-Dewalle | Red light photodynamic therapy for actinic keratosis using 37 J/cm 2 : fractionated irradiation with 12.3 mW/cm 2 after 30 minutes incubation time compared to standard continuous irradiation with 75 mW/cm 2 after three hours incubation time using a mathematical modeling[END_REF], the modeling of the PDT process consists of two steps that are iteratively repeated: determination of the local fluence rate and updating of the PpIX absorption coefficient.
Determination of the fluence rate
The local total fluence rate at time t, depth z and wavelength λ, denoted by φ(t,z,λ), is
given by equation 1 [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF][START_REF] Vignion-Dewalle | Red light photodynamic therapy for actinic keratosis using 37 J/cm 2 : fractionated irradiation with 12.3 mW/cm 2 after 30 minutes incubation time compared to standard continuous irradiation with 75 mW/cm 2 after three hours incubation time using a mathematical modeling[END_REF][START_REF] Farrell | Modeling of photosensitizer fluorescence emission and photobleaching for photodynamic therapy dosimetry[END_REF]:
the one-dimensional diffusion model used in our previous works. In equation 1, the above defined S_0(λ) is the spectral fluence rate of the primary planar broad beam; due to the accumulation of PpIX in AK lesions, the total absorption coefficient entering the model includes the time-dependent PpIX absorption coefficient, μ_a,PpIX(t,z,λ); and the two parameters b and P, which depend on both the optical properties of the actinic keratosis and the boundary conditions at the actinic keratosis surface, are computed as described in [38].
Updating of the PpIX absorption coefficient
The updating formula for the PpIX absorption coefficient in a unit volume, V_U, is expressed as follows (equation 2) [17,19]:
$$\mu_{a,PpIX}(t+dt,z,\lambda)=\mu_{a,PpIX}(t,z,\lambda)+\frac{\varepsilon_{PpIX}(\lambda)\,L\,\exp(-\alpha z)}{N_A\,V_U}\left[\frac{1}{1+\exp\left(-k\,(t+dt-\tau)\right)}-\frac{1}{1+\exp\left(-k\,(t-\tau)\right)}\right]-\beta\,\mu_{a,PpIX}(t,z,\lambda)\,dt\int \tilde{\Phi}_{\Delta}\,\phi(t,z,\tilde{\lambda})\,\mu_{a,PpIX}(t,z,\tilde{\lambda})\,\frac{\tilde{\lambda}}{N_A\,h\,c}\,d\tilde{\lambda}\qquad(2)$$
Where:
dt is the time increment,
ε_PpIX(λ) is the PpIX molar extinction coefficient for wavelength λ; N_A, h and c are the Avogadro number (6.022×10²³ /mol), the Planck constant (6.626×10⁻³⁴ J·s) and the speed of light (3×10⁸ m/s), respectively; L, k, τ and α are the parameters of the depth-dependent logistic growth related to both the biological clearance of PpIX and the conversion of 5-ALA into PpIX (2nd term in the right hand side) [19],
β and Φ̃_Δ are the bimolecular rate constant for the reaction of singlet oxygen with PpIX and the singlet oxygen quantum yield, respectively. These two parameters allow the photobleaching process to be analytically modeled (3rd term in the right hand side) [19].
Iterative procedure
All the involved parameters were assigned to the values reported in our previous work and empirically determined in the literature [17,19] (Table 1). Assuming the initial distribution for the PpIX absorption coefficient reported in the last row of Table 1, equations 1 and 2 are iteratively applied with the time increment dt: the local total fluence rate is first computed from equation 1, the PpIX absorption coefficient is then updated with equation 2, and so on until the end of the treatment.
Parameters | Value
β | 5.3×10⁹ l/mol/s
Φ̃_Δ | 0.56
μ_a,PpIX(0,z,λ) | ε_PpIX(λ) L exp(−αz) / (N_A V_U [1 + exp(kτ)])

Table 1: Values of the model parameters [17,19].
The cumulative number of singlet oxygen molecules generated in a unit volume V_U up to a given time is obtained by summing, over the successive time increments, the singlet-oxygen generation term appearing in the photobleaching part of equation 2. From several studies on PDT [38,39], this cumulative quantity enables the quantification of the PDT local damage, D, over time (equation 3):

$$D(t_{start}+i\,dt,\,z)=\sum_{j=0}^{i-1} V_U\,dt\int \tilde{\Phi}_{\Delta}\,\phi(t_{start}+j\,dt,\,z,\,\tilde{\lambda})\,\mu_{a,PpIX}(t_{start}+j\,dt,\,z,\,\tilde{\lambda})\,\frac{\tilde{\lambda}}{h\,c}\,d\tilde{\lambda}\qquad(3)$$

where t_start denotes the beginning of the irradiation.
IV. Results
A. Comparison in terms of effective fluence
Given the spectral fluence rate for the Aktilite CL 128 lamp (Figure 2), several parameters have been computed (Table 2). First, from the integration of this spectrum over wavelength, the fluence rate has been estimated to be 85.39 mW/cm 2 that is consistent with the abovementioned 70-100 mW/cm 2 range provided by the manufacturer. By considering this fluence rate, the fluence of 37 J/cm 2 of the conventional protocol for Aktilite CL 128 PDT is achieved using an irradiation time of 433.3 seconds (Table 2). Moreover, weighting the spectral fluence rate with the normalized absorption spectrum for PpIX, which is derived from data measured by the Research Center for Automatic Control of Nancy (CRAN) (Figure 3.a), enables the effective or PpIX-weighted spectral fluence rate to be obtained (Figure 3.a) and the effective fluence rate to be deducted by integration over wavelength (approximately, 1.44 mW/cm 2 ) [START_REF] Wiegell | Continuous activation of PpIX by daylight is as effective as and less painful than conventional photodynamic therapy for actinic keratoses; a randomized, controlled, single-blinded study[END_REF][START_REF] O'gorman | Artificial White Light vs Daylight Photodynamic Therapy for Actinic Keratoses: A Randomized Clinical Trial[END_REF] (Table 2). Multiplying the effective fluence rate by the above determined irradiation time of 433.3 seconds leads to an effective fluence of 0.63 J/cm 2 for the conventional protocol for Aktilite CL 128 PDT.
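The quantities of Table 2 follow from weighting the measured spectral fluence rate with the normalized PpIX absorption spectrum and integrating over wavelength. The Python sketch below uses placeholder spectra in place of the measured data of [37] and of the CRAN PpIX absorption spectrum; only the computation chain is meant to be illustrative.

```python
import numpy as np

wavelengths = np.linspace(400.0, 750.0, 351)            # nm

# Placeholder spectra: replace with the measured spectral fluence rate (mW/cm^2/nm)
# and the normalized PpIX absorption spectrum.
spectral_fluence_rate = np.exp(-((wavelengths - 632.0) / 10.0) ** 2) * 4.0
ppix_absorption_norm = 0.05 + 0.95 * np.exp(-((wavelengths - 408.0) / 15.0) ** 2)

fluence_rate = np.trapz(spectral_fluence_rate, wavelengths)                 # mW/cm^2
effective_rate = np.trapz(spectral_fluence_rate * ppix_absorption_norm,
                          wavelengths)                                      # mW/cm^2

irradiation_time = 37.0e3 / fluence_rate    # time (s) for 37 J/cm^2 = 37,000 mJ/cm^2
effective_fluence = effective_rate * irradiation_time / 1000.0              # J/cm^2
print(fluence_rate, effective_rate, irradiation_time, effective_fluence)
```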
Similar computations have been carried out for the sunny daylight spectral fluence rate and the overcast daylight spectral fluence rate (Figure 2). The integration over wavelength of these two spectral fluence rates yields fluence rates of 55.2 mW/cm 2 and 7.75 mW/cm 2 for the sunny daylight and the overcast daylight, respectively (Table 2). The effective or PpIX-weighted spectral fluence rate for the sunny daylight (respectively, the overcast daylight), obtained by weighting the sunny daylight spectral fluence rate (respectively, the overcast daylight spectral fluence rate) with the normalized absorption spectrum for PpIX, yields, when integrated over wavelength, an effective fluence rate of 5.11 mW/cm 2 (respectively, 0.80 mW/cm 2 ) for the sunny daylight (respectively, the overcast daylight) (Table 2, Figures 3.b and 3.c). From these computations, performed using a homemade software available online [START_REF] Vignion-Dewalle | A software for analyzing and comparing the light sources available for PDT in dermatology[END_REF], the effective fluence achieved using the European consensus protocol for daylight PDT during a sunny day is 58.40 and 6.40 times higher than those achieved using the conventional protocol for Aktilite CL 128 PDT and the European consensus protocol for daylight PDT during an overcast day, respectively. The effective fluence obtained using the European consensus protocol for daylight PDT during an overcast day is also higher (9.13 times higher) than that obtained using the conventional protocol for Aktilite CL 128 PDT, although to a lesser extent than with the European consensus protocol for daylight PDT during a sunny day.
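As an illustration, the effective fluence computation described above can be sketched in Python as follows. The array names, the wavelength grid and the normalization of the PpIX absorption spectrum to its maximum value are assumptions made for this example; the actual computations were performed with the software described in [START_REF] Vignion-Dewalle | A software for analyzing and comparing the light sources available for PDT in dermatology[END_REF].

```python
import numpy as np

def fluences(wavelengths_nm, spectral_fluence_rate, ppix_absorption, exposure_s):
    """Standard and effective (PpIX-weighted) fluences for a given light source.

    wavelengths_nm        : common wavelength grid (nm)
    spectral_fluence_rate : spectral fluence rate of the source (mW/cm^2/nm)
    ppix_absorption       : PpIX absorption spectrum sampled on the same grid
    exposure_s            : irradiation time (s)
    """
    # Normalized PpIX absorption spectrum (normalization convention assumed here).
    weight = ppix_absorption / ppix_absorption.max()
    # Fluence rate: integration of the spectral fluence rate over wavelength.
    fluence_rate = np.trapz(spectral_fluence_rate, wavelengths_nm)            # mW/cm^2
    # Effective fluence rate: integration of the PpIX-weighted spectral fluence rate.
    eff_fluence_rate = np.trapz(weight * spectral_fluence_rate, wavelengths_nm)
    # Fluences: rate (mW/cm^2) x time (s) -> mJ/cm^2, converted to J/cm^2.
    return fluence_rate * exposure_s / 1000.0, eff_fluence_rate * exposure_s / 1000.0
```

With the Aktilite CL 128 spectrum and an exposure of 433.3 seconds, such a computation should return approximately 37 J/cm 2 and 0.63 J/cm 2, in line with the values of Table 2.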
B. Comparison in terms of photodynamic dose
All the computations were performed using a Matlab™ program on a standard personal computer (Intel Xeon CPU E3-1240 V2, 3.40 GHz, 8 GB of RAM, Windows 7 64 bits).
The PDT local damages obtained using the conventional protocol for Aktilite CL 128 PDT, the European consensus protocol for daylight PDT during a sunny day and the European consensus protocol for daylight PDT during an overcast day are displayed as a function of depth in Figure 4 and as a function of time in Figure 5. From Figure 4, whatever the depth position in the AK sample model (Figure 1), the PDT local damage achieved at the end of the treatment using the European consensus protocol for daylight PDT during a sunny day is on average about 6.50 and 3.63 times higher than those achieved at the end of the treatment using the conventional protocol for Aktilite CL 128 PDT and using the European consensus protocol for daylight PDT during an overcast day, respectively. The ratio between the PDT local damage obtained using the European consensus protocol for daylight PDT during an overcast day and the one obtained using the conventional protocol for Aktilite CL 128 PDT is approximately 1.80 at 0 µm depth and 1.77 at 100 µm depth. From Figure 5, for the European consensus protocol for daylight PDT, the shape of the time courses of the PDT local damage tends to demonstrate an exponential trend (blue and cyan curves in Figure 5) while that for the conventional protocol for Aktilite CL 128 PDT suggests a very slight logarithmic trend (red curves). This results that, at least over the considered time intervals, the PDT local damage produced using the conventional protocol for Aktilite CL 128
PDT increases in time much faster than those produced using the European consensus protocol for daylight PDT (Figure 5).
C. Comparison in terms of number of PpIX molecules
The evolution of the number of PpIX molecules present in a unit volume, V_U, at time t and depth z, deduced from equation 4, is illustrated for the conventional protocol for Aktilite CL 128 PDT, the European consensus protocol for daylight PDT during a sunny day and the European consensus protocol for daylight PDT during an overcast day in Figure 6. From Figure 6, regarding the conventional protocol for Aktilite CL 128 PDT, the irradiation (starting at time=10800 s) leads to a mean percent drop of 35.24% in the number of PpIX molecules, while the number of PpIX molecules corresponding to the European consensus protocol for daylight PDT during an overcast day continues to increase even after the beginning of irradiation (occurring at time 1800 seconds). For the European consensus protocol for daylight PDT during a sunny day, the irradiation results, in terms of the number of PpIX molecules, in a transient slight decrease followed by a further controlled increase.
M(z, t) = μ_a,PpIX(z, t) · V_U / ε_PpIX    (4)
Furthermore, a large number of PpIX molecules is still present at the end of the irradiation for the conventional protocol for Aktilite CL 128 PDT and the European consensus protocol for daylight PDT during an overcast day (Figure 6). With regard to the European consensus protocol for daylight PDT during a sunny day, the number of PpIX molecules present at the end of the irradiation is slightly higher than that at the beginning of irradiation.
V. Discussion
In this paper, a comparison between the two most widely used MAL-PDT protocols in Europe for the treatment of actinic keratosis (AK) is performed using a mathematical modeling of the PDT process:
The conventional protocol for Aktilite CL 128 PDT (light source: Aktilite CL 128, incubation time: three hours, light dose: 37 J/cm 2 ) [START_REF] Morton | Intraindividual, right-left comparison of topical methyl aminolaevulinate-photodynamic therapy and cryotherapy in subjects with actinic keratoses: a multicentre, randomized controlled study[END_REF][START_REF] Tyrrell | The effect of air cooling pain relief on protoporphyrin IX photobleaching and clinical efficacy during dermatological photodynamic therapy[END_REF][START_REF] Pariser | Topical methyl-aminolevulinate photodynamic therapy using red lightemitting diode light for treatment of multiple actinic keratoses: A randomized, doubleblind, placebo-controlled study[END_REF],
The European consensus protocol for daylight PDT (light source: daylight, incubation time: 30 minutes, treatment duration: two hours) [START_REF] Wiegell | Daylight photodynamic therapy for actinic keratosis: an international consensus: International Society for Photodynamic Therapy in Dermatology[END_REF].
Two weather conditions for the European consensus protocol for daylight PDT have been considered for this comparison: a clear sunny day and an overcast day. The spectral fluence rates for the Aktilite CL 128, the clear sunny daylight and the overcast daylight have been provided by the authors of O'Gorman et al. [START_REF] O'gorman | Artificial White Light vs Daylight Photodynamic Therapy for Actinic Keratoses: A Randomized Clinical Trial[END_REF] (Figure 2).
The comparison is performed using an AK sample model consisting of a 100 μm thick parallelepiped (Figure 1) and a recently published modeling of the PDT process iteratively alternating determination of the fluence rate and updating of the optical properties [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF][START_REF] Vignion-Dewalle | Red light photodynamic therapy for actinic keratosis using 37 J/cm 2 : fractionated irradiation with 12.3 mW/cm 2 after 30 minutes incubation time compared to standard continuous irradiation with 75 mW/cm 2 after three hours incubation time using a mathematical modeling[END_REF].
The determination of the fluence rate involves solving the one-dimensional diffusion equation (equation 1, [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF][START_REF] Vignion-Dewalle | Red light photodynamic therapy for actinic keratosis using 37 J/cm 2 : fractionated irradiation with 12.3 mW/cm 2 after 30 minutes incubation time compared to standard continuous irradiation with 75 mW/cm 2 after three hours incubation time using a mathematical modeling[END_REF]) while the updating of the optical properties takes the biological clearance of PpIX, the conversion of MAL into PpIX and the PpIX photobleaching into account (equation 2, [START_REF] Vignion-Dewalle | Red light photodynamic therapy for actinic keratosis using 37 J/cm 2 : fractionated irradiation with 12.3 mW/cm 2 after 30 minutes incubation time compared to standard continuous irradiation with 75 mW/cm 2 after three hours incubation time using a mathematical modeling[END_REF]). All the parameters involved in equations 1 and 2 are set to published empirical values, which were obtained with PpIX and with either normal human epidermis or AK (Table 1, [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF]). The updating formula (equation 2) explicitly provides the number of singlet oxygen molecules produced at each iteration (integral in the right hand side).
Cumulating this number over the irradiation time enables the quantification of the PDT local damage (equation 3, [START_REF] Vignion-Dewalle | Comparison of three light doses in the photodynamic treatment of actinic keratosis using mathematical modeling[END_REF][START_REF] Vignion-Dewalle | Red light photodynamic therapy for actinic keratosis using 37 J/cm 2 : fractionated irradiation with 12.3 mW/cm 2 after 30 minutes incubation time compared to standard continuous irradiation with 75 mW/cm 2 after three hours incubation time using a mathematical modeling[END_REF]).
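The overall structure of this iterative modeling can be summarized by the following Python skeleton. It is only a sketch of the alternation between equations 1, 2 and 3: the three callables stand for the published diffusion-equation solver, optical-property update and singlet oxygen production, which are not reproduced here.

```python
import numpy as np

def run_pdt_model(mu_a_ppix0, depths, times, fluence_rate_fn, update_fn, oxygen_fn):
    """Skeleton of the iterative PDT model: alternate fluence-rate computation
    (equation 1) and PpIX absorption-coefficient update (equation 2), while
    accumulating singlet oxygen to obtain the PDT local damage (equation 3).

    The callables are placeholders for the published equations:
      fluence_rate_fn(mu_a_ppix, depths) -> local total fluence rate phi(z)
      update_fn(mu_a_ppix, phi, dt)      -> PpIX absorption coefficient at t + dt
      oxygen_fn(mu_a_ppix, phi, dt)      -> singlet oxygen produced in V_U during dt
    """
    dt = times[1] - times[0]
    mu_a_ppix = mu_a_ppix0.copy()
    damage = np.zeros_like(depths, dtype=float)
    history = []
    for t in times:
        phi = fluence_rate_fn(mu_a_ppix, depths)      # equation 1
        damage += oxygen_fn(mu_a_ppix, phi, dt)       # cumulated for equation 3
        mu_a_ppix = update_fn(mu_a_ppix, phi, dt)     # equation 2
        history.append(mu_a_ppix.copy())
    return damage, history
```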
Applied to the conventional protocol for Aktilite CL 128 PDT, to the European consensus protocol for daylight PDT during a sunny day and to the European consensus protocol for daylight PDT during an overcast day, the model allows evaluation and comparison of their performance in terms of the PDT local damage. A comparison has also been performed in terms of effective or PpIX-weighted fluence.
From the results, regardless the two weather conditions considered in this study, the European consensus protocol for daylight PDT is more efficient in terms of both effective fluence and PDT local damage than the conventional protocol for Aktilite CL 128 PDT. First, this finding underlines the relevance of the European consensus protocol for daylight PDT, even when performed during an overcast day. Then this finding supports the non-inferiority in efficacy of the European consensus protocol for daylight PDT compared to the conventional protocol for Aktilite CL 128 PDT that has been demonstrated by a clinical trial, including all weather conditions except rain or cold [START_REF] Rubel | Daylight photodynamic therapy with methyl aminolevulinate cream as a convenient, similarly effective, nearly painless alternative to conventional photodynamic therapy in actinic keratosis treatment: a randomized controlled trial[END_REF]. This finding also suggests that, although a minimum effective fluence of 8 J/cm 2 is recommended with daylight PDT to result in an effective treatment of AK [START_REF] Wiegell | Photodynamic therapy of actinic keratoses with 8% and 16% methyl aminolaevulinate and home-based daylight exposure: a double-blinded randomized clinical trial[END_REF][START_REF] Wiegell | Weather conditions and daylight-mediated photodynamic therapy: protoporphyrin IX-weighted daylight doses measured in six geographical locations[END_REF], the effective fluence of 5.75 J/cm 2 related to the overcast day considered in this study (Table 2), is sufficient for the European consensus protocol for daylight PDT to perform better than the efficient conventional protocol for Aktilite CL 128 PDT. Finally, based on the well-known efficiency of the conventional protocol for Aktilite CL 128 PDT, this finding could reflect a potential over-treatment of the lesions when using the European consensus protocol for daylight PDT during a sunny day but also during an overcast day.
As already mentioned in many studies on daylight PDT [START_REF] Wiegell | Photodynamic therapy of actinic keratoses with 8% and 16% methyl aminolaevulinate and home-based daylight exposure: a double-blinded randomized clinical trial[END_REF][START_REF] Wiegell | A randomized, multicentre study of directed daylight exposure times of 1½ vs. 2½ h in daylight-mediated photodynamic therapy with methyl aminolaevulinate in patients with multiple thin actinic keratoses of the face and scalp[END_REF][START_REF] Wiegell | Daylight-mediated photodynamic therapy of moderate to thick actinic keratoses of the face and scalp: a randomized multicentre study[END_REF][START_REF] Wiegell | Weather conditions and daylight-mediated photodynamic therapy: protoporphyrin IX-weighted daylight doses measured in six geographical locations[END_REF][START_REF] O'gorman | Artificial White Light vs Daylight Photodynamic Therapy for Actinic Keratoses: A Randomized Clinical Trial[END_REF] and as evidenced by the above mentioned values, the effective fluence varies depending on weather conditions. This dependence on weather conditions is also evident here in terms of the PDT local damage:
the better the weather conditions, the more efficient is the European consensus protocol for daylight PDT. Nonetheless, based on the studies of Wiegell et al. [START_REF] Wiegell | Photodynamic therapy of actinic keratoses with 8% and 16% methyl aminolaevulinate and home-based daylight exposure: a double-blinded randomized clinical trial[END_REF][START_REF] Wiegell | Weather conditions and daylight-mediated photodynamic therapy: protoporphyrin IX-weighted daylight doses measured in six geographical locations[END_REF] and O'Gorman et al [START_REF] O'gorman | Artificial White Light vs Daylight Photodynamic Therapy for Actinic Keratoses: A Randomized Clinical Trial[END_REF] that have found no association between response rate and effective fluence in patients who received an effective fluence with daylight PDT higher than 8 J/cm 2 and 3.2 J/cm 2 , respectively, a sunny day is not necessarily required for the European consensus protocol for daylight PDT to be effective (an overcast daylight may be sufficient for the European consensus protocol for daylight PDT to destroy any cancer cells). This has also been highlighted by Rubel et al. [START_REF] Rubel | Daylight photodynamic therapy with methyl aminolevulinate cream as a convenient, similarly effective, nearly painless alternative to conventional photodynamic therapy in actinic keratosis treatment: a randomized controlled trial[END_REF], which have reported that, although patients received variable effective fluences during the European consensus protocol for daylight PDT, no correlation to efficacy was found. These clinical results with similar response rates whatever the weather conditions (except rainy or cold conditions) tend to support the above suggested hypothesis of overtreatment of the lesions by the European consensus protocol for daylight PDT.
outweigh the conversion of MAL into PpIX. Regarding the European consensus protocol for daylight PDT during a sunny day, despite the low initial PpIX accumulation, the 5.11 mW/cm 2 effective fluence rate appears to be appropriate for maintaining the number of PpIX molecules approximately constant.
Consideration also needs to be given to the large number of PpIX molecules still present at the end of the conventional protocol for Aktilite CL 128 PDT (Figure 6). This large number tends to demonstrate that the incubation time or the cream concentration in MAL for this protocol could be reduced [START_REF] Christiansen | 5-ALA for photodynamic photorejuvenation--optimization of treatment regime based on normal-skin fluorescence measurements[END_REF]. This observation can be extended to the European consensus protocol for daylight PDT during an overcast day…
The overall results emphasize the need to refine the parameters of both the conventional protocol for Aktilite CL 128 PDT and the European consensus protocol for daylight PDT.
This refinement (reduction of the incubation time, reduction of the irradiation time…) would allow for a similar efficiency but an improved tolerability and a more manageable clinical practice.
Only two weather conditions for the European consensus protocol for daylight PDT have been studied in this paper. These two weather conditions have been chosen because their spectral fluence rates, discussed in the study of O'Gorman et al. [START_REF] O'gorman | Artificial White Light vs Daylight Photodynamic Therapy for Actinic Keratoses: A Randomized Clinical Trial[END_REF], have been kindly provided by the authors. Any other weather condition with an available spectral fluence rate could obviously be investigated using the same mathematical modeling… The subsequent results will either support the above suggested hypothesis of overtreatment of the lesions by the European consensus protocol for daylight PDT, or will allow estimation of the PDT local damage at which the treatment becomes effective.
In this study, the temperature was, as mentioned above, assumed to be at least 10°C for the PDT process to occur [START_REF] Wiegell | Weather conditions and daylight-mediated photodynamic therapy: protoporphyrin IX-weighted daylight doses measured in six geographical locations[END_REF][START_REF] Mordon | A commentary on the role of skin temperature on the effectiveness of ALA-PDT in dermatology[END_REF]. If this is not the case, or if it is raining or windy, a greenhouse may be used [START_REF] Lerche | Alternatives to Outdoor Daylight Illumination for Photodynamic Therapy--Use of Greenhouses and Artificial Light Sources[END_REF]. However, the spectral fluence rate of the daylight is modified by the filtering effect of the greenhouse glass, and the mathematical modeling proposed in this paper could be used to quantify this filtering effect in terms of the PDT local damage. Another solution to get rid of this dependence on the weather conditions, geographical location, seasons…, is to use a light source with a spectral fluence rate close to that of daylight [START_REF] O'gorman | Artificial White Light vs Daylight Photodynamic Therapy for Actinic Keratoses: A Randomized Clinical Trial[END_REF][START_REF] Lerche | Alternatives to Outdoor Daylight Illumination for Photodynamic Therapy--Use of Greenhouses and Artificial Light Sources[END_REF]; a comparative study of these light sources for "indoor" daylight PDT could also be performed in terms of the PDT local damage as defined in this paper.
VI. Conclusion
In this paper, we have compared the conventional protocol for Aktilite CL 128 PDT (light source: Aktilite CL 128, incubation time: three hours, light dose: 37 J/cm 2 ) with the European consensus protocol for daylight PDT (light source: daylight, incubation time: 30 minutes, treatment duration: two hours). This comparison, performed in terms of the effective fluence and the PDT local damage, tends to demonstrate that, whatever the two weather conditions considered for this comparison (i.e., a clear sunny day and an overcast day), the European consensus protocol for daylight PDT performs better than the conventional protocol for Aktilite CL 128 PDT.
Figure 1: The AK sample model.
Figure 2: The spectral fluence rate for the Aktilite CL 128 lamp, the sunny daylight and the overcast daylight, respectively. All these three spectral fluence rates were provided by the authors of [39].
The total absorption coefficient, μ_a, is the sum of the PpIX absorption coefficient, μ_a,PpIX, and the actinic keratosis absorption coefficient. The total transport coefficient, μ_t, is the sum of the total absorption coefficient, μ_a, and the actinic keratosis reduced scattering coefficient. The two parameters, b and…
Once the local total fluence rate at a given time has been calculated, applying equation 2 then yields the distribution for the PpIX absorption coefficient at the next time step. By iterating equations 1 and 2, all the PpIX absorption coefficients and local total fluence rates can be determined throughout the treatment.
Using an exposure of 2 hours as required with the European consensus protocol for daylight PDT, the sunny daylight fluence rate (respectively, the effective sunny daylight fluence rate) leads to a fluence of 397.44 J/cm 2 (respectively, an effective fluence of 36.79 J/cm 2 ) while the overcast daylight fluence rate (respectively, the effective overcast daylight fluence rate) provides a fluence of 55.8 J/cm 2 (respectively, an effective fluence of 5.76 J/cm 2 ).
Figure 3: The spectral fluence rate (green curves) and the effective spectral fluence rate (red curve in a), blue curve in b) and cyan curve in c)) for a) the Aktilite CL 128 lamp at a distance of 8 cm from the lamp, b) the sunny daylight and c) the overcast daylight are scaled according to the left axis, while the normalized absorption spectrum for PpIX (black curve) is plotted according to the right axis (Note: left axes in (b) and (c) are identically scaled whereas a different scale is applied for (a) due to its much higher amplitude). All three spectral fluence rates were provided by the authors of [START_REF] O'gorman | Artificial White Light vs Daylight Photodynamic Therapy for Actinic Keratoses: A Randomized Clinical Trial[END_REF].
Figure 4: Evolution in depth of the PDT local damage achieved at the end of the treatment using the conventional protocol for Aktilite CL 128 PDT (red curve), the European consensus protocol for daylight PDT during a sunny day (blue curve) and the European consensus protocol for daylight PDT during an overcast day (cyan curve).
Figure 5: Evolution in time of the PDT local damage achieved using the conventional protocol for Aktilite CL 128 PDT (red curves), the European consensus protocol for daylight PDT during a sunny day (blue curves) and the European consensus protocol for daylight PDT during an overcast day (cyan curves). The bright solid curves (respectively, light dashed curves) represent the PDT local damages at 0 µm (respectively, at 100 µm) in depth in AK.
Figure 6: Evolution in time of the number of PpIX molecules present at 50 μm in depth in AK using the conventional protocol for Aktilite CL 128 PDT (red curve), the European consensus protocol for daylight PDT during a sunny day (blue curve) and the European consensus protocol for daylight PDT during an overcast day (cyan curve).
and therefore only outlines are referred to in this paper without further discussion.
Let the incubation start at time t = 0 s and let the light irradiation be performed during the time interval [t_start , t_end]:
t_start = 3 hours and t_end = t_start + Δt (Δt is the irradiation time necessary to achieve the recommended 37 J/cm 2 using the spectral fluence rate for the Aktilite CL 128 lamp depicted in Figure 2) for the conventional protocol for Aktilite CL 128 PDT,
t_start = 0.5 hours and t_end = t_start + 7200 s for the European consensus protocol for daylight PDT.
Table 1: Specification of the model parameters from [START_REF] Vignion-Dewalle | Red light photodynamic therapy for actinic keratosis using 37 J/cm 2 : fractionated irradiation with 12.3 mW/cm 2 after 30 minutes incubation time compared to standard continuous irradiation with 75 mW/cm 2 after three hours incubation time using a mathematical modeling[END_REF].
D. Quantification of the PDT local damage
The integral in the last part of the right-hand side of equation 2 represents the number of singlet oxygen molecules generated during the time interval [t ; t + dt], in a unit volume V_U located at depth z in the AK sample model, when the PpIX molecules, excited by absorption of photons, return to the ground state. Summing this integral over the time intervals [t_start ; t_start + dt], [t_start + dt ; t_start + 2dt], …, [t_start + (i−1)dt ; t_start + i·dt] provides the total cumulative number of singlet oxygen molecules produced in the unit volume V_U during the time interval [t_start ; t_start + i·dt].
Table 2: Standard (row 4) and effective (row 6) fluence rates for the Aktilite CL128 lamp (second column), the sunny daylight (third column) and the overcast daylight (fourth column), computed from the spectral fluence rates provided by [START_REF] O'gorman | Artificial White Light vs Daylight Photodynamic Therapy for Actinic Keratoses: A Randomized Clinical Trial[END_REF]. The irradiation times used in this paper (row 3) and the corresponding standard and effective fluences are also indicated (rows 5 and 7).

Light source | Aktilite CL 128 | Sunny daylight | Overcast daylight
Incubation time | 3 hours | 30 minutes | 30 minutes
Irradiation time (s) | 433.3 | 7200 | 7200
Fluence rate (mW/cm 2 ) | 85.39 | 55.2 | 7.75
Fluence (J/cm 2 ) | 37 | 397.44 | 55.8
Effective fluence rate (mW/cm 2 ) | 1.44 | 5.11 | 0.80
Effective fluence (J/cm 2 ) | 0.63 | 36.79 | 5.76
Acknowledgements
The authors would like to thank M. Manley from the Department of Medical Physics and Clinical Engineering, Saint Vincent's University Hospital, Dublin, Ireland, for providing the spectral fluence rates for 1) the Aktilite CL128 lamp (Galderma SA, Lausanne, Switzerland), 2) the daylight on a clear sunny day and 3) the daylight on an overcast day.
The information provided by the ratios between the effective fluences is consistent with that provided by the ratios between the PDT local damages. In fact, if a ratio between two considered effective fluences is higher than 1, then the ratio between the two corresponding PDT local damages is also higher than 1. Nonetheless, the ratios between the effective fluences are always higher than those between the PDT local damages. This is explained, to a large extent, by the fact that the effective fluence is computed using the normalized absorption spectrum for PpIX (Figure 3) whereas the PDT local damages involves the "actual" absorption spectrum for PpIX. With an incubation time longer than that for the European consensus protocol for daylight PDT, the conventional protocol for Aktilite CL 128 PDT leads to a higher initial PpIX accumulation (Figure 6) and subsequently to a higher initial absorption spectrum for PpIX. This in turn allows to partially offset the higher effective fluence rate for the European consensus protocol for daylight PDT in the iterative calculation of the PDT local damage (equations 1-3). In contrast, as the effective fluence is computed from the normalized PpIX absorption spectrum, no such offset is present and thus the information provided by the effective fluence might not be sufficient to predict the result of the MAL-PDT procedure. The PDT local damage, as defined in this paper, may therefore be a more appropriate predictor… Regarding the evolution in time of the PDT local damage (Figure 5), with an effective fluence rate about 3.55 times lower than that of the European consensus protocol for daylight PDT during a sunny day (Table 2), the conventional protocol for Aktilite CL 128 PDT, however, leads to a higher increase rate for the PDT local damage. This results from the above mentioned longer incubation time and subsequent higher initial PpIX absorption coefficient for the conventional protocol for Aktilite CL 128 PDT compared to the European consensus protocol for daylight PDT, leading to a higher initial photons absorption efficiency (equations 1-3).
From Figure 6, the beginning of the irradiation is clearly identifiable for the conventional protocol for Aktilite CL 128 PDT with a more than 30 percent drop in the number of PpIX molecules resulting from the above mentioned high initial photons absorption efficiency. On the contrary, the steady growth curve observed for the European consensus protocol for daylight PDT during an overcast day makes the identification of the beginning of the irradiation impossible: the 0.80 mW/cm 2 effective fluence rate combined with a low initial PpIX accumulation do not allow a photobleaching of the PpIX molecules important enough to | 61,028 | [
"6424",
"903899"
] | [
"3061",
"489340",
"489340"
] |
01770985 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2008 | https://hal.science/hal-01770985/file/WOSPA2008.pdf | Abdeldjalil Aissa
El Bey
Amina Saleem
Karim Abed-Meraim
Azzedine Beghdadi
Abdeldjalil Aïssa-El-Bey
Watermark extraction using blind image separation and sparse representation
INTRODUCTION
Digital watermarking technology has evolved as an important technology in the recent years. The basic principle of most watermarking method involves the application of small, pseudo-random changes to the selected coefficients in the spatial or transform domain. Most of the watermark detection schemes use some kinds of correlating detector to verify the presence of the embedded watermark [START_REF] Hartung | Multimedia watermarking techniques[END_REF][START_REF] Stefan | Information Hiding Techniques for Steganography and Digital Watermarking[END_REF]. Digital image watermarking has its applications in copy rights protection, data tracking and monitoring [START_REF] Hartung | Multimedia watermarking techniques[END_REF]. Blind source separation (BSS) is an important area of research in signal and image processing. The BSS problem can be solved using sparse representations of the source signals. Solution for the blind separation of image sources using sparsity include the wavelet-transform domain method in [START_REF] Stefan | Information Hiding Techniques for Steganography and Digital Watermarking[END_REF] and the method in [START_REF] Bronstein | Separation of reflections via sparse ICA[END_REF] using projection onto sparse dictionaries and the iterative Blind source separation algorithm presented in [START_REF] Souidene | Blind image separation using sparse representation[END_REF]. This paper introduces a new image watermark extraction technique based on the iterative sparse blind separation algorithm (ISBS) and a new ISBS algorithm. The ISBS algorithm employs an p norm based contrast function for blind signal separation. When the images are sparse or sparsely representable, a smooth approximation of the absolute value function is a good choice for the cost function. The NISBS algorithm proposed in this paper employs the modeling of the distributions of sparse images using a family of convex smooth functions. The ISBS algorithm proposed in [START_REF] Souidene | Blind image separation using sparse representation[END_REF] and the NISBS algorithm presented in this paper are shown to be more efficient than other existing techniques in the literature and both lead to improved separation quality with lower computational cost. The performance of the proposed algorithms is compared to the performance of other ICA algorithms using the objective image quality measure inspired by the Human Visual System (HVS) proposed in [START_REF] Beghdadi | A new image distortion measure based on wavelet decomposition[END_REF]. It is shown that the ISBS and NISBS algorithms perform better in terms of PSNR-WAV. The new watermarking technique is presented with the principal assumptions of (i) image source sparsity, (ii) instantaneous mix-tures and (iii) the same number of mixtures and sources (three mixtures and three sources). In our proposed BSS based method, we do not have restrictions on the mixing process as well as the mixing coefficients. The paper is organized as follows. Section 2 presents the data model and assumptions of our system. The blind watermark extraction system using the sparsity based algorithms is described in Section 3. The simulations and the performance of the algorithms is discussed in Section 4. The conclusions are drawn in Section 5.
DATA MODEL AND ASSUMPTIONS
A generic watermark embedding system takes as inputs the original data f1, the watermark signal f2 and an optional public or secret key f3, each of size (m f , n f ). The key is used to enforce security, that is, to prevent an unauthorized party from recovering and manipulating the watermark. The proposed image watermarking system uses both a watermark f2 and a secret key f3 so as to provide two levels of security; these special images, of the same size as the original image f1, are the data to be embedded. Both the watermark f2 and the key f3 are inserted in the spatial domain of the original image f1. The watermarked image g1 is a linear mixture of the original image, key and watermark. That is,
g1(m, n) = f1(m, n) + a f2(m, n) + b f3(m, n) (1)
where a and b are the weighting coefficients [START_REF] Yu | Watermark detection and extraction using independent component analysis[END_REF]. To ensure the identifiability of the BSS model, the number of observed linear mixtures must be at least equal to the number of independent sources. For the proposed watermark extraction scheme, at least three linear mixtures of the three independent sources are therefore needed. Using the key image f3 and with the help of the original image f1, two more mixed images are generated by adding them into the watermarked image
g2(m, n) = c g1(m, n) + d f3(m, n) (2) g3(m, n) = k g1(m, n) + l f1(m, n) (3)
where {c, d, k, l} are arbitrary real numbers. The latter mixtures can be modeled by the following linear system:
g(m, n) = Af (m, n) (4)
where f(m, n) = [f1(m, n), …, fN(m, n)]^T is a N × 1 (with N = 3) image source vector consisting of the stack of corresponding pixels of the source images, A is the M × N full column rank mixing matrix (here, M = N = 3), g(m, n) = [g1(m, n), …, gM(m, n)]^T is an M × 1 vector of mixture image pixels and the superscript T denotes the transpose operator. The purpose of blind image separation is to find a separating matrix, i.e. a N × M matrix B such that f̂(m, n) = B g(m, n) is an estimate of the original images.
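For illustration, the construction of the three observed mixtures of equations (1)-(3) can be sketched as follows in Python; the numerical values chosen for the weighting coefficients are arbitrary examples, the method only requiring the resulting mixing matrix A to be full column rank.

```python
import numpy as np

def make_mixtures(f1, f2, f3, a=0.4, b=0.4, c=1.0, d=0.5, k=1.0, l=0.5):
    """Build the three observed mixtures of equations (1)-(3).
    f1: original image, f2: watermark, f3: secret key (all of size m_f x n_f).
    The coefficient values are illustrative only."""
    g1 = f1 + a * f2 + b * f3          # watermarked image, equation (1)
    g2 = c * g1 + d * f3               # equation (2)
    g3 = k * g1 + l * f1               # equation (3)
    # Equivalent mixing matrix A such that g = A f with f = [f1, f2, f3]
    A = np.array([[1.0,       a,         b],
                  [c,     c * a, c * b + d],
                  [k + l, k * a,     k * b]])
    return np.stack([g1, g2, g3]), A
```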
BLIND WATERMARK EXTRACTION
As shown in [START_REF] Bronstein | Separation of reflections via sparse ICA[END_REF][START_REF] Zibulevsky | Blind source separation by sparse decomposition in signal dictionary[END_REF], exploiting the sparsity of some representations of the original images allows us to solve the BSS problem. Indeed, the mixture destroys or 'reduces' the sparsity of the considered signals, which is restored after source separation. Conversely, it is shown in [START_REF] Bronstein | Separation of reflections via sparse ICA[END_REF][START_REF] Zibulevsky | Blind source separation by sparse decomposition in signal dictionary[END_REF] that restoring (maximizing) the sparsity leads to the desired source separation. Based on this, we propose in the sequel a two-step BSS solution consisting in a linear pre-treatment that transforms the original sources into sparse signals, followed by a BSS algorithm that minimizes the cost function of the transformed image mixtures using a natural gradient technique.
Image pre-treatment
The algorithms proposed in this article are efficient for separating sparse sources. For some signals, one can assume that the spatial or temporal representation is naturally sparse, whereas for natural scenes this assumption does not hold. We propose to make the image sparse by simply considering its Laplacian transform:
F = ∇²f = ∂²f/∂x² + ∂²f/∂y²,    (5)
or, in discrete form
F (m, n) = f (m + 1, n) + f (m -1, n) + f (m, n + 1) +f (m, n -1) -4f (m, n) .
Our motivation for choosing this transformation is twofold. First, the Laplacian transform is a sparse representation of the image since it acts as an edge detector which provides a two-level image: the edges and the homogeneous background. Second, the Laplacian is a linear transformation. This latter property is 'critical' since the separation matrix estimated to separate the image mixtures is the same as the one that separates the mixture of Laplacian images:
G = ∂²(Af)/∂x² + ∂²(Af)/∂y² = AF    (6)
where G is the Laplacian transform of the mixtures. In the literature, some other linear transformations were proposed in order to make the image sparse, including the projection into a sparse dictionary [START_REF] Zibulevsky | Blind source separation by sparse decomposition in signal dictionary[END_REF]. In Figure 1, the original cameraman image is displayed as well as its Laplacian transform and their respective histograms that clearly show the sparsity of the latter.
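A minimal sketch of this sparsifying pre-treatment is given below; the handling of the image borders by edge replication is an implementation choice not specified in the text.

```python
import numpy as np

def laplacian(image):
    """Discrete Laplacian of equations (5)-(6): a linear, sparsifying transform."""
    img = image.astype(float)
    padded = np.pad(img, 1, mode="edge")
    # F(m,n) = f(m+1,n) + f(m-1,n) + f(m,n+1) + f(m,n-1) - 4 f(m,n)
    return (padded[2:, 1:-1] + padded[:-2, 1:-1]
            + padded[1:-1, 2:] + padded[1:-1, :-2] - 4.0 * img)
```

Applying this transform to each observed mixture provides the Laplacian mixtures G of equation (6).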
In the pre-treatment phase, we also propose an optional whitening step which aims to set the mixtures to the same energy level. Furthermore, this procedure reduces the number of parameters to be estimated. More precisely, the whitening step is applied to the Laplacian image mixtures before using our separation algorithm. The whitening is achieved by applying a N × M matrix Q to the Laplacian image mixtures in such a way that Cov(QG) = I in the noiseless case, where Cov(•) stands for the covariance operator. As shown in [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF],
Q can be computed as the inverse square root of the noiseless covariance matrix of the Laplacian image mixtures (see [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF] for more details). In the following, we apply our separation algorithm on the whitened data:
G w (m, n) = QG(m, n).
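A sketch of this whitening step is given below, assuming the noiseless case and the Laplacian mixtures stored as one row per mixture.

```python
import numpy as np

def whiten(G):
    """Whitening of the Laplacian mixtures: Q is the inverse square root of
    their sample covariance matrix, so that Cov(Q G) = I (noiseless case).
    G has shape (M, m_f * n_f): one row per Laplacian mixture, pixels stacked."""
    G = G - G.mean(axis=1, keepdims=True)
    C = G @ G.T / G.shape[1]                      # sample covariance (M x M)
    eigval, eigvec = np.linalg.eigh(C)
    Q = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T
    return Q @ G, Q
```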
Sparsity-based BSS algorithm
In this section, we propose an iterative algorithm for the separation of sparse signals, namely ISBS, for Iterative Sparse Blind Separation. It is well known that the Laplacian image transform is characterized by its sparsity property in the spatial domain [START_REF] Cichocki | Adaptive Blind Signal and Image Processing[END_REF][START_REF] Zibulevsky | Sparse source separation with relative Newton method[END_REF]. This property can be measured by the ℓp norm where 0 ≤ p < 2. More specifically, one can define the following sparsity-based contrast function,
G_p(F) = Σ_{i=1}^{N} [J_p(F_i)]^{1/p}    (7)

where

J_p(F_i) = (1/(m_f n_f)) Σ_{m=1}^{m_f} Σ_{n=1}^{n_f} |F_i(m, n)|^p.    (8)
The algorithm finds a separating matrix B such that
B = arg min B {Gp(B)} (9)
where G_p(B) ≜ G_p(H) (10) and H(m, n) ≜ B G_w(m, n) represents the Laplacians of the estimated image sources. The approach we choose to solve (9) is inspired from [START_REF] Pham | Blind separation of mixture of independent sources through a quasi-maximum likelihood approach[END_REF]. It is a block technique based on the processing of m_f n_f observed image pixels and consists in searching for the minimum of the sample version of (9). Solutions are obtained iteratively in the form:
B^(k+1) = (I + ε^(k)) B^(k)    (11)

H^(k+1)(m, n) = (I + ε^(k)) H^(k)(m, n).    (12)
At iteration k, a matrix ε^(k) is determined from a local linearization of G_p(B G_w). It is an approximate Newton technique with the benefit that ε^(k) can be very simply computed (no Hessian inversion) under the additional assumption that B^(k) is close to a separating matrix. This procedure is illustrated in the following. At the (k+1)-th iteration, the proposed criterion (8) can be developed as follows:

J_p(H_i^(k+1)) = (1/(m_f n_f)) Σ_{m,n=1}^{m_f,n_f} | H_i^(k)(m, n) + Σ_{j=1}^{N} ε_ij^(k) H_j^(k)(m, n) |^p

Under the assumption that B^(k) is close to a separating matrix, we have |ε_ij^(k)| ≪ 1 and thus a first-order approximation of J_p(H_i^(k+1)) is given by:

J_p(H_i^(k+1)) ≈ (1/(m_f n_f)) Σ_{m,n=1}^{m_f,n_f} [ |H_i^(k)(m, n)|^p + p Σ_{j=1}^{N} ε_ij^(k) |H_i^(k)(m, n)|^{p−1} sign(H_i^(k)(m, n)) H_j^(k)(m, n) ]    (13)

where sign(•) represents the sign value operator. Using equation (13), equation (7) can be rewritten in a more compact form as:
G_p(B^(k+1)) = G_p(B^(k)) + Tr( ε^(k) R^(k)T D^(k) )    (14)

where Tr(•) is the matrix trace operator, the ij-th entry of matrix R^(k) is given by:

R_ij^(k) = (1/(m_f n_f)) Σ_{m,n=1}^{m_f,n_f} |H_i^(k)(m, n)|^{p−1} sign(H_i^(k)(m, n)) H_j^(k)(m, n),

and

D^(k) = diag( R_11^(k), …, R_NN^(k) )^{1/p − 1}.    (15)
Using a gradient technique, ε^(k) can be written as:

ε^(k) = −µ D^(k) R^(k),    (16)

where µ > 0 is the descent step. Replacing (16) into (14) leads to

G_p(B^(k+1)) = G_p(B^(k)) − µ ‖D^(k) R^(k)‖²,    (17)

so µ controls the decrement of the criterion. Now, to avoid the algorithm's convergence to the trivial solution B = 0, one normalizes the outputs of the separating matrix to unit power, i.e. ρ_{H_i}^(k+1) ≜ E[|H_i^(k+1)(m, n)|²] = 1, ∀ i. Using a first-order approximation, this normalization leads to:

ε_ii^(k) = (1 − ρ_{H_i}^(k)) / (2 ρ_{H_i}^(k)).    (18)
The final estimated separation matrix B = B^(K) Q is applied to the image mixtures g to obtain an estimate of the original images. K denotes here the number of iterations, which can be either chosen a priori or given by a stopping criterion of the form ‖B^(k+1) − B^(k)‖ < δ, where δ is a small threshold value.
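The ISBS iteration described by equations (11)-(18) can be summarized by the following Python sketch; the values of the descent step µ, of p and of the stopping parameters are illustrative choices, and the returned matrix has to be combined with the whitening matrix Q before being applied to the observed mixtures g.

```python
import numpy as np

def isbs(Gw, p=1.0, mu=0.1, n_iter=100, tol=1e-6):
    """Sketch of the ISBS iteration on whitened Laplacian mixtures Gw of shape
    (N, n_pixels). Returns the estimated separating matrix B."""
    N = Gw.shape[0]
    B = np.eye(N)
    H = Gw.copy()
    for _ in range(n_iter):
        # R^(k), as defined below equation (14)
        R = (np.abs(H) ** (p - 1) * np.sign(H)) @ H.T / H.shape[1]
        # D^(k), equation (15)
        D = np.diag(np.diag(R) ** (1.0 / p - 1.0))
        # Gradient step, equation (16)
        eps = -mu * D @ R
        # Diagonal terms from the unit-power normalization, equation (18)
        rho = np.mean(np.abs(H) ** 2, axis=1)
        eps[np.diag_indices(N)] = (1.0 - rho) / (2.0 * rho)
        B_new = (np.eye(N) + eps) @ B               # equation (11)
        H = (np.eye(N) + eps) @ H                   # equation (12)
        if np.linalg.norm(B_new - B) < tol:         # stopping criterion
            B = B_new
            break
        B = B_new
    return B
```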
New sparsity-based BSS algorithm
When the images are sparse or sparsely representable, a smooth approximation of the absolute value function might be a better choice for the cost function [START_REF] Bronstein | Blind deconvolution of images using optimal sparse representations[END_REF]. We, therefore, focus our attention on modeling distributions of sparse images using a family of convex smooth functions and propose the following cost function :
G_λ(F) = Σ_{i=1}^{N} J_λ(F_i)    (19)

with

J_λ(F_i) = (1/(m_f n_f)) Σ_{m,n=1}^{m_f,n_f} [ |F_i(m, n)| − λ log(1 + |F_i(m, n)|/λ) ]    (20)
where λ is a positive smoothing parameter [START_REF] Bronstein | Blind deconvolution of images using optimal sparse representations[END_REF]. The algorithm finds a separating matrix B such that
B = arg min B G λ (B) (21)
where
G_λ(B) ≜ G_λ(H).    (22)
In the same way as for the ISBS algorithm presented in the previous section, the solutions are obtained iteratively as described by equations (11) and (12). Therefore, at the (k+1)-th iteration, the proposed criterion (20) can be developed as follows:
J_λ(H_i^(k+1)) = (1/(m_f n_f)) Σ_{m,n=1}^{m_f,n_f} [ |H_i^(k)(m, n) + Σ_{j=1}^{N} ε_ij^(k) H_j^(k)(m, n)| − λ log( 1 + |H_i^(k)(m, n) + Σ_{j=1}^{N} ε_ij^(k) H_j^(k)(m, n)| / λ ) ].
Under the assumption that B^(k) is close to a separating matrix, we have |ε_ij^(k)| ≪ 1 and thus, by using a first-order approximation of J_λ(H_i^(k+1)), we can rewrite equation (19) in a more compact form as:

G_λ(B^(k+1)) = G_λ(B^(k)) + Tr( ε^(k) R^(k)T )    (23)
where the ij-th entry of matrix R^(k) is given by:

R_ij^(k) = (1/(m_f n_f)) Σ_{m,n=1}^{m_f,n_f} sign(H_i^(k)(m, n)) H_j^(k)(m, n) / ( λ + |H_i^(k)(m, n)| ).
Using a gradient technique, ε^(k) can be written as:

ε^(k) = −µ R^(k),    (24)

Replacing (24) into (23) leads to

G_λ(B^(k+1)) = G_λ(B^(k)) − µ ‖R^(k)‖².    (25)

Now, to avoid the algorithm's convergence to the trivial solution B = 0, one normalizes the outputs of the separating matrix to unit power (see equation (18)). Table 1 shows the PSNR-WAV between the original and extracted images for the example described in Fig. 2. We compare the performance of the proposed algorithms to the ICA algorithm. It is clearly shown that our algorithms (ISBS and NISBS) perform better in terms of PSNR-WAV.
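For completeness, one NISBS update, which differs from the ISBS iteration only through the smoothed contrast and the absence of the D^(k) matrix, can be sketched as follows; the value of the smoothing parameter λ is an illustrative choice.

```python
import numpy as np

def nisbs_step(H, lam=1.0, mu=0.1):
    """One NISBS update (equations 23-25) applied to the current estimates H
    of shape (N, n_pixels); lam is the smoothing parameter lambda."""
    N = H.shape[0]
    # R^(k) with the smoothed contrast (see the expression below equation (23))
    R = (np.sign(H) / (lam + np.abs(H))) @ H.T / H.shape[1]
    eps = -mu * R                                    # equation (24)
    rho = np.mean(np.abs(H) ** 2, axis=1)            # unit-power normalization,
    eps[np.diag_indices(N)] = (1.0 - rho) / (2.0 * rho)   # equation (18)
    return (np.eye(N) + eps) @ H, eps
```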
CONCLUSION
In this paper, a new watermarking technique based on BSS algorithm using sparsity property of images has been proposed. The proposed technique consists in a sparsification of the natural observed mixtures followed by a blind separation of the original images. The
Fig. 1. (a) Original image, (b) Laplacian transform, (c) Original image histogram, (d) Sparse Laplacian transform histogram.
Fig. 2. (a) watermarked image, (b) and (c) generated mixture images, (d) extracted watermark, (e) extracted image, (f) extracted key.
Table 1. Performance evaluation: Comparison of the PSNR-WAV for ICA, ISBS and NISBS algorithms.
The sparsification is simply the Laplacian transform and has a low computational cost. The separation is performed using an iterative algorithm based on minimizing the sparsity cost function of the Laplacian image.
BSS Algorithm | Parrot | Key | Watermark    (PSNR-WAV in dB)
ICA | 28.76 | 15.85 | 22.36
ISBS | 33.12 | 19.32 | 25.67
NISBS | 40.70 | 28.92 | 34.49
"18420",
"956547"
] | [
"391223",
"98040",
"88125",
"300839",
"88125"
] |
01770991 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2008 | https://hal.science/hal-01770991/file/OFDM_identPIMRC2008.pdf | François-Xavier Socheleau
email: [email protected]
Sébastien Houcke
email: [email protected]
Abdeldjalil Aissa El Bey
François-Xavier Socheleau
Abdeldjalil Aissa-El-Bey
email: [email protected]
Philippe Ciblat
email: [email protected]
OFDM system identification based on m-sequence signatures in cognitive radio context
I. INTRODUCTION
The increasing demand of wireless services faced to the limited spectrum resources constrains wireless systems to evolve towards more embedded intelligence. The Cognitive Radio (CR) concept [START_REF] Mitola | Cognitive radio: making software radios more personal[END_REF] appears as a key solution to make different systems coexist in the same frequency band. CR terminals have the ability to reconfigure themselves (i.e. to adapt the modulation parameters, carrier frequency, power, etc.) with regards to the surrounding radio environment and spectrum policy. Spectrum sensing and especially system identification is therefore a crucial step towards radio environment awareness. In this paper we focus on OFDM based systems as it becomes the physical layer for many wireless standards [START_REF]Air Interface for Fixed Broadband Wireless Access Systems[END_REF]- [5]. Identification of such systems has mainly been studied using various OFDM cyclostationary properties induced by the Cyclic Prefix (CP) [START_REF] Yucek | OFDM Signal Identification and Transmission Parameter Estimation for Cognitive Radio Applications[END_REF]- [START_REF] Jallon | An algorithm for detection of DVB-T signals based on their second order statistics[END_REF]. The performance of this approach fully depends on the cyclic prefix duration and on the length of the multipath propagation channel. Note also that such a method is totally inefficient in the presence of a zero-padded OFDM system which may be relevant in a cognitive context [START_REF] Khambekar | Utilizing OFDM Guard Interval for Spectrum Sensing[END_REF]. Moreover, considering the increasing interest in OFDM by the wireless designers, cyclostationary properties of such systems are likely to become closer and closer. For instance 3GPP/LTE [5] and Mobile WiMAX [START_REF]Air Interface for Fixed and Mobile Broadband Wireless Access Systems, Amendment 2: Physical and Medium Access Control Layers for Combined Fixed and Mobile Operations in License Bands and Corrigendum 1[END_REF] systems have already an intercarrier spacing only different from 4% which may prevent from getting an accurate system identification based on the intercarrier spacing estimation principle.
To overcome these limitations, methods that involve more particular signatures in OFDM systems are required. [START_REF] Maeda | Recognition Among OFDM-Based Systems Utilizing Cyclostationarity-Inducing Transmission[END_REF] suggested approaches using specific preambles or dedicated subcarriers with cyclostationary patterns. Preambles being usually intermittently transmitted and therefore difficult to intercept, the use of dedicated subcarriers is preferable. However, dedicating subcarriers to only embed signatures may have a cost as it adds overhead and thus reduces systems capacity. One way to address this issue is to jointly use pilot tones for synchronization and/or channel estimation (their initial usage) as well as for system identification.
In this contribution, we develop a solution relying on the use of m-sequence (MS) modulated pilot tones to embed signatures in OFDM signals. MSs show specific high order statistics relevant for system identification and avoid any additional overhead as they meet the requirements of usual training sequences (such m-sequences are indeed already used in existing standards for synchronization and/or channel estimation, see WiMAX [START_REF]Air Interface for Fixed Broadband Wireless Access Systems[END_REF] and Wifi [START_REF]IEEE Std. 802.11a, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications High-speed Physical Layer in the 5 GHz Band[END_REF]). Thus, we here suggest to take advantage of signatures created as a side-effect of existing MS structures to identify standards such as [START_REF]Air Interface for Fixed Broadband Wireless Access Systems[END_REF] and [START_REF]IEEE Std. 802.11a, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications High-speed Physical Layer in the 5 GHz Band[END_REF] and also advocate to generalize MSs use in a cognitive context.
The paper is organised as follows: Section II describes the MS useful properties. Section III introduces the OFDM identification scheme and especially the associated cost function based on MS signature characterization. In section IV the impact of synchronization impairments is analysed. Identification performance is assessed through simulations in Section V. Finally, conclusions are presented in Section VI.
II. MS PROPERTIES
A maximum length sequence, commonly called m-sequence, is a type of pseudorandom binary sequence generated using maximal linear feedback shift registers and modulo-2 addition. A necessary and sufficient condition for a sequence to be of maximal length (i.e. a sequence of length 2^p − 1 for length-p registers) is that its corresponding generator polynomial, denoted by P_MS, be primitive.
Any binary MS w_k generated by length-p registers of polynomial P_MS = Σ_{i=0}^{p} α_i X^i verifies over GF(2)

w_k = Σ_{i=1}^{p} α_i w_{k−i}  ⇔  Σ_{i=0}^{p} α_i w_{k−i} = 0.

Moreover, let w̃_k = 1 − 2 w_k be the associated BPSK sequence. Thanks to [START_REF] Macwilliams | Pseudo-Random Sequences and Arrays[END_REF], one can see that

lim_{M→+∞} (1/M) Σ_{k=0}^{M−1} Π_{i∈B} w̃_{k−i} = 1 if B = A, and 1/(1 − 2^p) otherwise,    (1)

where B is any subset of {0, …, 2^p − 2} and A = {i ∈ {0, …, p} | α_i = 1}.
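The following Python sketch generates one period of such an m-sequence and checks property (1) on its BPSK version; the all-ones initial register state and the trinomial 1 + X^9 + X^11 (the one used later for Fixed WiMAX) are choices made for the example.

```python
import numpy as np

def msequence(taps, p):
    """One period (2^p - 1 bits) of the m-sequence generated by the primitive
    trinomial 1 + X^l + X^p, with taps=(l, p) and an all-ones initial state."""
    state = [1] * p
    out = []
    for _ in range(2 ** p - 1):
        out.append(state[-1])
        feedback = 0
        for t in taps:                       # modulo-2 sum of the tapped stages
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.array(out)

l, p = 9, 11
w = msequence((l, p), p)
wt = 1 - 2 * w                               # BPSK mapping w~ = 1 - 2 w
triple = lambda lags: np.mean(np.prod([np.roll(wt, i) for i in lags], axis=0))
print(triple([0, l, p]))                     # -> 1.0 (lags matching the polynomial)
print(triple([0, 1, p]))                     # -> 1 / (1 - 2**p), close to 0
```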
III. OFDM SYSTEM IDENTIFICATION ALGORITHM
A. System model
In order to facilitate the identification of OFDM systems, we suggest to generalize the use of m-sequence modulated pilot tones. Assuming that a transmitted OFDM symbol consists of N subcarriers and N p comb-type pilot tones, the discrete-time baseband equivalent signal is given by
x(m) = √(E_s/N) [ x_d(m) + x_r(m) ],    (2)

where

x_d(m) = Σ_{k∈Z} Σ_{n=0, n∉I_p}^{N−1} a_k(n) e^{2iπ(n/N)[m−D−k(N+D)]} g[m−k(N+D)],

and

x_r(m) = Σ_{k∈Z} Σ_{n∈I_p} w̃_k(n) e^{2iπ(n/N)[m−D−k(N+D)]} g[m−k(N+D)].

E_s is the signal power, a_k(n) are the transmit data symbols assumed to be independent and identically distributed (i.i.d.), D is the CP length, m → g(m) is the pulse shaping filter and I_p denotes the set of pilot subcarrier indexes. For each n ∈ I_p, w̃_k(n) is a BPSK pilot symbol sequence associated with one m-sequence obtained from the generator polynomial P_MS(n). The system signature is thus entirely characterized by I_p and {P_MS(n)}_{n=0,…,N_p−1}.
Notice that the number of primitive polynomials of degree p over GF(2) is given by φ(2 p -1)/p, where φ(.) is Euler's Totient function [START_REF] Morgan | Primitive Normal Polynomials Over Finite Fields[END_REF]. As an example, for N p = 1 and p ≤ 11, there are 335 different possible signatures which is much larger than the number of existing OFDM systems! Consequently each existing or future system may have its own system signature based on the knowledge of
I p and {P M S (n)} n=0,••• ,Np-1 .
Now, we consider that the signal propagates through a multipath channel. Let {h(l)} l=0,••• ,L be the base-band equivalent discrete-time channel impulse response of length L. The received samples of the OFDM signal are thus given by
y(m) = e^{−i(2πε(m−τ)/N + θ)} Σ_{l=0}^{L−1} h(l) x(m−l−τ) + η(m),    (3)
The k-th received symbol on subcarrier n is therefore written as
Y k (n) = 1 √ N N -1 m=0 y[k(N + D) + D + m]e -2iπ nm N .
In the case of perfect synchronization (i.e. ε = 0, τ = 0 and θ = 0) and for
n ∈ I p , Y k (n) simplifies to Y k (n) = H k (n) wk (n) E s + N k (n)
where H k (n) and N k (n) are respectively the channel frequency response and the noise at subcarrier n of the k-th received symbol.
B. Identification cost function
Thanks to Eq. ( 1), systems described in Eq. ( 2) can be discriminated by using the following criterion
J J = n∈Ip lim M →+∞ 1 M M -1 k=0 i∈A(n) wk-i (n) 2
where A(n) is the set of indexes associated with the nonnull components of P M S (n). For the sake of simplicity, we consider that the same MS generator polynomial is used for all pilot subcarriers. Consequently A(n) and P M S (n) are independent of n. Moreover, we limit P M S to trinomials of the form 1 + X l + X p (p > l). 1J can now be written as
J = n∈Ip lim M →+∞ 1 M M -1 k=0 wk (n) wk-l (n) wk-p (n) 2 .
In practice, J cannot be computed and the sequence wk (n) is only accessible via the observations Y k (n). Thus, the cost function is based on the following estimate
Ĵ = Σ_{n∈I_p} |Z(n)|²,    (4)

where Z(n) is defined as

Z(n) = (1/(M−p)) Σ_{k=p}^{M−1} Ỹ_k(n) Ỹ*_{k−l}(n) Ỹ_{k−p}(n).    (5)

M is the number of available OFDM symbols and the superscript "*" stands for complex conjugation, which is applied to the second term to mitigate the influence of the frequency offset ε on Ĵ (for more details, see Section IV). Moreover, in order to make the criterion Ĵ independent of the received signal gain, each term Y_k(n) in Eq. (5) is normalized so that

Ỹ_k(n) = Y_k(n) / √(Var[Y(n)]),    (6)

where Var[.] denotes the variance and

Var[Y(n)] = (1/M) Σ_{k=0}^{M−1} |Y_k(n)|².    (7)
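A possible implementation of the resulting detection statistic is sketched below; the received pilot symbols Y_k(n) are assumed to have already been extracted after the receiver FFT stage.

```python
import numpy as np

def ms_cost(Y_pilots, l, p):
    """Detection statistic of equations (4)-(7).
    Y_pilots: complex array of shape (M, Np) holding the received pilot symbols
    Y_k(n) over M OFDM symbols; l and p are the exponents of the trinomial."""
    M = Y_pilots.shape[0]
    # Per-subcarrier power normalization, equations (6)-(7)
    var = np.mean(np.abs(Y_pilots) ** 2, axis=0)
    Yt = Y_pilots / np.sqrt(var)
    # Triple product Z(n), equation (5): k runs from p to M-1
    Z = np.mean(Yt[p:] * np.conj(Yt[p - l:M - l]) * Yt[:M - p], axis=0)
    # Cost J hat, equation (4)
    return np.sum(np.abs(Z) ** 2)
```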
C. Decision statistics
In this section we assume perfect synchronization (i.e. ε = 0, τ = 0 and θ = 0). Synchronization impairments are studied in Section IV.
Our identification problem described in the previous subsection boils down to a standard detection problem for which we have to select the most likely hypothesis between the following two hypotheses
H_0: y(m) writes as in Eq. (3) without MS structure, or with MS structure for tones in I_p associated with P′_MS = 1 + X^{l′} + X^{p′} and (l′, p′) ≠ (l, p);
H_1: y(m) writes as in Eq. (3) with MS structure for tones in I_p described by P_MS = 1 + X^l + X^p.    (8)
To decide the most likely hypothesis, we propose a detection test constrained by the asymptotic false alarm probability similar to what is suggested in [START_REF] Jallon | An algorithm for detection of DVB-T signals based on their second order statistics[END_REF]. The decision is made by comparing Ĵ to a positive threshold such that
Ĵ ≷ Λ  (H_1 is decided if Ĵ > Λ, H_0 otherwise),

with Λ defined as

F_{Ĵ|H_0}(Λ) = 1 − P_fa.    (9)
F Ĵ|H0 is the cumulative distribution of Ĵ when H 0 holds and P f a is the tolerated false alarm probability.
As implied in (8), H_0 embodies two different sub-hypotheses respectively named H_0^a and H_0^b. H_0^a represents the case where y(m) does not have any MS structure, that is to say that y(m) is any signal such that Y_k(n), Y*_{k−l}(n) and Y_{k−p}(n) are mutually independent and E[Y_k(n)] = 0. As for H_0^b, it corresponds to the case where the tones of y(m) belonging to the set I_p follow a MS structure with generator polynomial P′_MS = 1 + X^{l′} + X^{p′} (≠ P_MS) where l′ ≠ l and/or p′ ≠ p. Thanks to the random, independent and centered nature of the vast majority of digitally modulated signals, we can make the reasonable assumption that H_0 = H_0^a ∪ H_0^b. In order to find a relevant threshold Λ, we hereafter analyse the asymptotic statistical behavior of Ĵ under both hypotheses H_0^a and H_0^b. 1) Asymptotic probability density function of Ĵ under H_0^a: As shown in Eq. (6), Ỹ_k(n) is expressed as a ratio of two random variables. The variance estimator introduced in Eq. (7) being consistent, it converges almost surely to a constant denoted v_n so that, thanks to the asymptotic theory developed in [START_REF] Brockwell | Time Series: theory and analysis[END_REF], Ỹ_k(n) converges in distribution to Y(n)/√v_n. Moreover, Z(n) being a sum of i.i.d. random variables when H_0^a holds, we deduce that Z(n)|H_0^a is asymptotically normal with
E[Z(n)|H_0^a] = 0,   Var[Z(n)|H_0^a] = (E[|H_k(n)|²] ρ(n) + σ²/N)³ / ((M−p) v_n³),

where ρ(n) is the signal power of subcarrier n. If we consider the multipath channel as static over the observation window (impacts of channel variation are discussed in section V) then v_n = |H(n)|² ρ(n) + σ²/N and Var[Z(n)|H_0^a] simplifies to Var[Z(n)|H_0^a] = 1/(M−p). Therefore, the asymptotic probability density function of Ĵ under H_0^a is given by 2(M−p) Ĵ ∼ χ²_{2N_p}, where χ²_d denotes a chi-square distribution with d degrees of freedom.
2) Asymptotic probability density function of Ĵ under H b 0 : According to Eq. ( 1),
lim_{M→+∞} (1/M) Σ_{k=0}^{M−1} w̃_k(n) w̃_{k−l}(n) w̃_{k−p}(n) = 1/(1 − 2^{p′})
when H b 0 holds. Following the same approach described in Sec. III-C1, we then have
E Z(n)|H b 0 = |H(n)| 2 H(n)ρ(n) 3 2 (1 -2 p ′ )v 3 2 n , Var Z(n)|H b 0 = 1 (M -p) 2 v 3 n M -p-1 i,j=0
[C] i,j .
[C]_{i,j} are the elements of the covariance matrix defined as
C = E[(Υ − E{Υ})(Υ − E{Υ})^H],
where the superscript H stands for transpose conjugate and
Υ = [Y_k(n)Y*_{k−l}(n)Y_{k−p}(n), Y_{k+1}(n)Y*_{k−l+1}(n)Y_{k−p+1}(n), …, Y_{k+M−p}(n)Y*_{k+M−p−l}(n)Y_{k+M−2p}(n)].
By developing each product term of the covariance matrix and assuming that M ≤ 2^{p′} − 1, we get
[C]_{i,j} =
(|H(n)|² ρ(n) + σ²/N)³ − |H(n)|⁶ ρ(n)³ / (1 − 2^{p′})²,   if i = j,
σ² |H(n)|⁴ ρ(n)² / (N(1 − 2^{p′})) − |H(n)|⁶ ρ(n)³ 2^{p′} / (1 − 2^{p′})²,   if |i − j| = p,
−|H(n)|⁶ ρ(n)³ 2^{p′} / (1 − 2^{p′})²,   otherwise.
In a realistic scenario, the probability density function of Ĵ under H0^b cannot be easily estimated, as it depends on H(n), ρ(n), σ² and p′, which are unknown to the receiver. However, in practice, MS degrees can be chosen large enough (e.g., p = 11 in [START_REF]Air Interface for Fixed Broadband Wireless Access Systems[END_REF] and [START_REF][END_REF]) to consider the covariance matrix C as diagonal. In that case, Z(n)|H0^b is asymptotically normal and Var[Z(n)|H0^b] is well approximated by 1/(M − p). Furthermore, if we assume that M ≪ 2^{p′} − 1, then E[Z(n)|H0^b] ≪ Var[Z(n)|H0^b]. Therefore, we can consider that Ĵ follows the same cumulative distribution under both hypotheses H0^a and H0^b, that is
F_{Ĵ|H0}(x) = γ(N_p, (M − p) x) / (N_p − 1)!,
where γ(a, x) is the incomplete gamma function.
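In practice, Λ is obtained by inverting this distribution numerically. The sketch below uses the equivalent chi-square form 2(M − p)Ĵ ∼ χ²_{2N_p}; the optional exponent K anticipates the grid-search correction introduced in Section IV, and K = 1 recovers Eq. (9).

```python
from scipy.stats import chi2

def detection_threshold(M, Np, p, Pfa, K=1):
    """Threshold Lambda such that F_{J|H0}(Lambda)**K = 1 - Pfa,
    using 2*(M - p)*J ~ chi-square with 2*Np degrees of freedom under H0."""
    level = (1.0 - Pfa) ** (1.0 / K)             # per-point confidence level
    return chi2.ppf(level, df=2 * Np) / (2.0 * (M - p))
```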
IV. EFFECT OF SYNCHRONIZATION IMPAIRMENTS
Timing mis-synchronization (τ ≠ 0) and/or frequency offset (ε ≠ 0) damage the observations Y_k(n), as inter-symbol (ISI) and inter-carrier (ICI) interferences occur [START_REF] Sathananthan | Probability of error calculation of OFDM systems with frequency offset[END_REF]. In addition to interference, ε modifies the phase of Y_k(n)Y*_{k−l}(n)Y_{k−p}(n) and consequently makes E[Z(n)|H1] decrease (note that the complex conjugation in Eq. (5) mitigates the phase variation speed). Therefore, as illustrated in Figure 1, the identification algorithm performance decreases dramatically when ε ≠ 0 and/or τ ≠ 0. To overcome this issue, ε and τ can be estimated as
(ε̂, τ̂) = argmax_{(ε,τ)} Ĵ.   (10)
The direct use of the identification cost function Ĵ to estimate (ε, τ) requires changing the detection threshold Λ. In the specific case where Eq. (10) is solved using a grid of K points, and assuming that the values of Ĵ on this grid are mutually independent, Λ is given by
F_{Ĵ|H0}(Λ)^K = 1 − P_fa.
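A minimal sketch of this joint estimation is given below. Here `demodulate` stands for the receiver front-end that compensates the trial offsets (ε, τ) before extracting the pilot tones; it is an assumption of the sketch, as is the `ms_cost` function outlined in Section III.

```python
import numpy as np

def estimate_offsets(y, demodulate, l, p, eps_grid, tau_grid):
    """Grid search solving Eq. (10): maximise J-hat over trial (eps, tau)."""
    best_eps, best_tau, best_J = None, None, -np.inf
    for eps in eps_grid:
        for tau in tau_grid:
            J = ms_cost(demodulate(y, eps, tau), l, p)
            if J > best_J:
                best_eps, best_tau, best_J = eps, tau, J
    return best_eps, best_tau, best_J
```

The resulting maximum of Ĵ is then compared with the threshold computed for K grid points, as explained above.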
V. SIMULATIONS
In the following, all the results are averaged over 1000 Monte Carlo runs. The system to be recognized is the Fixed WiMAX described in [START_REF]Air Interface for Fixed Broadband Wireless Access Systems[END_REF]. WiMAX embeds MS structures of polynomial P_MS = 1 + X⁹ + X¹¹. We recall that N = 256 and N_p = 8. Unless otherwise stated, N/D = 32. The subcarriers are equally powered. The asymptotic false alarm probability P_fa is fixed to 0.01. The Signal-to-Noise Ratio (SNR) is defined as SNR(dB) = 10 log₁₀(E_s/σ²).
In Figure 2, we plot the correct detection probability versus SNR in the context of an AWGN channel. Different synchronization assumptions and various M are considered. We show that the performance of the MS criterion is significantly improved when the observation window increases. Moreover, we observe the impact of the synchronization method based on Eq. (10). We see that the loss due to mis-synchronization decreases when M increases. For the simulation, uniformly distributed random ε and τ were generated with −0.5 ≤ ε ≤ 0.5 and −0.5(N + D) ≤ τ ≤ 0.5(N + D). ε and τ were estimated by maximizing Ĵ over a grid with a step of 4·10⁻³ over ε and 0.1(N + D) over τ.
For the frequency-selective channel simulations, the tap powers follow an exponentially decaying profile (E[|h(l)|²] = Ge^{−l/β} for l = 0, …, L, and G is chosen such that Σ_{l=0}^{L} E[|h(l)|²] = 1).
Notice that β approximately corresponds to the root mean square (RMS) delay spread.
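As an illustration of this channel model, the sketch below draws one realisation of the taps. The exponential power profile is the one stated above, while the Rayleigh (circular complex Gaussian) distribution of the taps is an assumption of the sketch, since the paper only specifies the power profile.

```python
import numpy as np

def exp_pdp_channel(L, beta, rng=None):
    """One draw of h(0..L) with E[|h(l)|^2] = G*exp(-l/beta), normalised to sum to 1."""
    rng = rng or np.random.default_rng()
    power = np.exp(-np.arange(L + 1) / beta)
    power /= power.sum()                         # fixes the constant G
    return np.sqrt(power / 2) * (rng.standard_normal(L + 1)
                                 + 1j * rng.standard_normal(L + 1))
```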
In Figure 3, we display the correct detection probability versus SNR for various RMS delay spreads. We observe that the more frequency-selective the channel is, the better the performance. This is due to the fact that Var[|h(l)|²] decreases as the RMS delay spread increases. In Figure 4, we compare the correct detection probability versus SNR of the proposed MS criterion and of the standard correlation-based method, for β = 0.5D and the various CP lengths handled by the WiMAX system.
To compare both methods, we consider that the correlation-based detection is correct when
N − δ/2 ≤ argmax_{v∈[v_min, v_max]} Σ_{m=0}^{M′−v−1} y(m) y*(m + v) ≤ N + δ/2,
where M′ = M(N + D), v_min = 32 and v_max = 2048, which corresponds to searching for systems with 32 to 2048 subcarriers, and where δ is the tolerated error on the subcarrier spacing. We choose δ to be conditioned by P_fa under the white Gaussian noise hypothesis, such that δ = (v_max − v_min)P_fa. We observe that the proposed algorithm does not depend on the CP length and outperforms the correlation-based method as soon as N/D ≥ 8. Moreover, for a fair comparison, it is worth recalling that, as long as the MSs are different, our method can discriminate systems with the same intercarrier spacing, whereas the correlation algorithm cannot.
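For reference, the baseline detector can be sketched as follows. Taking the magnitude of the correlation is the usual practical choice and is an assumption with respect to the expression above.

```python
import numpy as np

def correlation_detect(y, v_min=32, v_max=2048):
    """Return the lag v in [v_min, v_max] maximising |sum_m y(m) y*(m+v)|."""
    Mp = len(y)
    corr = [abs(np.sum(y[:Mp - v] * np.conj(y[v:])))
            for v in range(v_min, v_max + 1)]
    return v_min + int(np.argmax(corr))
```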
In Figure 5, we plot the correct detection probability versus SNR when the frequency-selective channel becomes time-variant. Various values of the maximum Doppler frequency f_d have been investigated. We see that our algorithm is quite robust to Doppler spreads below 100 Hz (at 3 GHz, this corresponds to a relative velocity of 36 km/h), whilst above this frequency performance degrades significantly.
VI. CONCLUSION
In this paper, we developed a new method based on m-sequence properties to embed signatures in OFDM systems without adding any overhead to standard pilot tones. We also studied the MS identification cost function and showed that this method exhibits excellent performance and is quite robust to channel impairments. Moreover, simulation results indicate that, in addition to stronger discriminating properties, MS identification outperforms the classical correlation-based method in some relevant contexts.
Fig. 1. Effect of ε and τ on the correct detection probability (SNR = 0 dB, M = 50, P_MS = 1 + X⁹ + X¹¹, N_p = 8, P_fa = 0.01).
Fig. 2. Effect of SNR, M, ε and τ on the correct detection probability.
Fig. 3. Effect of β on the correct detection probability (M = 50).
Fig. 4. Comparison between the correlation-based method and the MS criterion (M = 50, β = 0.5D).
Fig. 5. Effect of Doppler spread on the correct detection probability (M = 50, β = 0.5D).
Limiting P M S to trinomials reduces the number of possible signatures but does not call the validity of the concept into question. For instance, there are still 19 different possible signatures for Np = 1 and p ≤ 11. | 20,100 | [
"18336",
"18335",
"18420",
"174873"
] | [
"98040",
"391223",
"98040",
"391223",
"98040",
"300839"
] |
01769810 | en | [
"shs",
"sde"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01769810/file/Mayoral%20et%20al.%202018_preprint_Geoarchaeology.pdf | Alfredo Mayoral
Jean-Luc Peiry
Jean-François Berger
Paul Ledger
Bruno Depreux
François-Xavier Simon
Pierre-Yves Milcent
Matthieu Poux
Franck Vautier
Yannick Miras
Geoarchaeology (John Wiley & Sons)
Geoarchaeology and chronostratigraphy of the Lac du Puy intraurban protohistoric wetland, Corent, France
Keywords: Proto-urbanisation, Storage pits, intra-urban wetland, chrono-stratigraphy, Iron Age
INTRODUCTION
The prehistoric impact of humans on the environment, especially vegetation and soils, has been widely documented throughout western and central Europe since the Neolithic [START_REF] Dotterweich | The history of human-induced soil erosion: Geomorphic legacies, early descriptions and research, and the development of soil conservation-A global synopsis[END_REF][START_REF] Ellis | Used planet: A global history[END_REF].
Late Holocene climatic oscillations in Western and Mediterranean Europe such as the 4.2 event, late Iron Age-Roman Period climatic optimum, or the Little Ice Age are now well known [START_REF] Magny | Holocene climate variability as reflected by mid-European lake-level fluctuations and its probable impact on prehistoric human settlements[END_REF][START_REF] Martin-Puertas | Arid and humid phases in southern Spain during the last 4000 years: The Zoñar Lake record, Córdoba[END_REF][START_REF] Wanner | Mid-to Late Holocene climate change: an overview[END_REF]. However these events are often considered as minor influences on geomorphological processes when compared to increasing human impacts (e.g. [START_REF] Lavrieux | 6700 yr sedimentary record of climatic and anthropogenic signals in Lake Aydat (French Massif Central)[END_REF]. Rather, the progressive anthropogenic forcing of natural systems is highly correlated with the development of agriculture and urbanization processes [START_REF] Lang | Land degradation in Bronze Age Germany: Archaeological, pedological, and chronometrical evidence from a hilltop settlement on the Frauenberg, Niederbayern[END_REF][START_REF] Notebaert | Characterization and quantification of Holocene colluvial and alluvial sediments in the Valdaine Region (southern France)[END_REF]. The latter has often been considered to be restricted to Mediterranean Europe, but is now known to have begun in Central and Western Europe during the late Bronze Age (LBA) and the Iron Age [START_REF] Milcent | Bourges-Avaricum, un centre proto-urbain celtique du Ve siècle av. J.-C. Les fouilles du quartier Saint-Martin-des-Champs et les découvertes des Etablissements militaires[END_REF][START_REF] Milcent | Résidences aristocratiques et expérience urbaine hallstattiennes en France (VIe-Ve siècle av.J.-C.)[END_REF][START_REF] Fernández-Götz | Paths to complexity. Centralisation and Urbanisation in Iron Age Europe[END_REF]. The environmental impacts of these proto-urbanization processes are still poorly understood at a local scale, as few geoarchaeological and palaeoenvironmental studies have specifically focused on these questions. This is mainly a result of palaeoenvironmental studies typically being pursued in areas suitable for sedimentary record preservation like lakes, floodplains or peatlands. While such studies provide remarkable results at landscape-scale they frequently lack sensitivity to local signals [START_REF] Ledger | Taphonomy or signal sensitivity in palaeoecological investigations of Norse landnam in Vatnahverfi, southern Greenland?[END_REF]. Geoarchaeology has typically been concerned with intrasite studies focusing on specific archaeological challenges, for example, stratigraphic records and postdepositional processes in human structures, analysis of lithic materials or specific deposits like dark earths, and the study of specialist structures such as ramparts or agricultural terraces (Butzer, 2008). However, in suitable contexts, such as intra-urban, or urban-connected wetlands, integrated palaeoenvironmental and geoarchaeological approaches can provide valuable information on human-environment interaction and characterize the environmental impact of the proto-urbanization processes in key periods such as the Iron Age (e.g. Ledger et al., 2015). These human influenced environments, frequently studied in rural areas (e.g. 
Bernigaud et al., 2014), are a rare and valuable place to develop integrated palaeoenvironmental and geoarchaeological approaches, but are rarely studied in connection with proto-urbanization processes in Western Europe, with some remarkable exceptions [START_REF] Mele | The Terramare and the surrounding hydraulic structures: A geophysical survey of the Santa Rosa site at Poviglio (Bronze Age, northern Italy)[END_REF]. This paper presents the first geoarchaeological results, including context of site formation and post-depositional processes, of a wider palaeoenvironmental study begun in 2014. A highly local approach, using an intra-urban wetland, has been developed to investigate the long-term palaeoenvironmental impacts of proto-urban human settlement episodes of the first millennium B.C.E.
STUDY AREA AND OBJECTIVES
Located in the French Massif Central, 20 km south of Clermont-Ferrand, the Puy de Corent is a volcanic plateau located in a key position controlling the north-south axis of the valley of the River Allier (Fig. 1A andB). The summit (621 m.a.s.l.), located in the southwest of the plateau, is a Pliocene (circa 3 Ma B.P.) monogenetic scoria cone [START_REF] Greffier | Aspects géomorphologiques et stabilité des versants au sud de Clermont-Ferrand[END_REF][START_REF] Nehlig | Les volcans du Massif central[END_REF]. Differential erosion of marls and basalts since the late Pliocene caused the gradual raising of the plateau above the marl lowlands by relief inversion. The central and northwestern sectors are characterized by gentle topography and are situated on a basaltic lava flow deposited over 200 m thick Oligocene sedimentary rocks (limestone and marl with occasional gypsum). The Corent plateau is a major regional archaeological site and several human occupations, from the Middle Neolithic to the Roman period, have been documented. These include two major settlement phases with proto-urban features in the LBA 3 (950-800 B.C.E.) and the late Hallstatt D1 (600-550 B.C.E.) [START_REF] Milcent | Une agglomération de hauteur autour de 600 a.C. en Gaule centrale : Corent (Auvergne)[END_REF], and a vast oppidum (La Tène D1-2, 125-25 B.C.E.) with monumental and planned urban characteristics, which is considered as the possible capital of the Arverni [START_REF] Poux | Corent, Voyage au coeur d'une ville gauloise[END_REF]. Corent is therefore an excellent site to study long-term human-environment interactions, and more specifically the highly-localized palaeoenvironmental impacts of proto-urbanization and urbanization processes during the 1 st millennium B.C.E.
Unfortunately, the plateau is afflicted by severe soil erosion and truncation of deposits, a process which probably began in the Mid-Neolithic, when the first palynological evidence of forest clearance and agriculture appears (Ledger et al., 2015). In some instances this has resulted in a poor preservation of archaeological sediments, especially those preceding the La Tène D period [START_REF] Poux | Corent, Rapport de fouille programmée[END_REF]. The Lac-du-Puy, an ancient pond located in the lowest part of the plateau within the extension zone of the LBA 3 and La Tène settlements, offers a well-preserved intra-urban sedimentary record (Fig. 1C). It appears as a small sub- circular natural depression (of approximately 2 Ha) in the surface of the basaltic rock, similar to others known on neighboring volcanic plateaus also situated above a sedimentary basement. These small basins are quite common in the region (e.g. Lacs de la Pénide, Lac de Pardines) and are usually interpreted as pseudosinkholes (Bureau de Recherches Géologiques et Minières, 2015). The modern pool is small (surface area of c. 300 m 2 ) and mainly fed by occasional runoff and subsurface water flow from the summit of the plateau.
Hydraulic traces and historical maps indicate evidence of recent management, probably in the 19th century.
However, the palaeobasin represents a much larger area (0.5 Ha), as evidenced by aerial imagery and maps from 1820 (Fig. 1D and F). A small-scale archaeological survey explored this depression for the first time in the early 1990s, confirming the sedimentary nature of accumulation in the center of the depression (approximately 2 m deep) and therefore its palaeoenvironmental interest [START_REF] Guichard | Puy de Corent (Puy-de-Dôme) rapport de prospection et sondages[END_REF]. The Lac-du-Puy was then neglected until 2012 when, under the AYPONA project framework (Paysages et visages d'une agglomération Arverne: approche intégrée et diachronique de l'occupation de l'oppidum de Corent, dir. Y.
Miras & F. Vautier), a sediment coring allowed a palynological analysis. The result of this study (Ledger et al., 2015) confirmed that the Lac-du-Puy offers an exceptional sedimentary record within a protohistoric intraurban context. This excellent palaeoenvironmental potential encouraged a more extensive geoarchaeological survey in the summer of 2015. The objective of this paper is to present the initial results of this survey and to use them to build a robust chronostratigraphic framework as a basis for the forthcoming multi-proxy palaeoenvironmental research of this complex pedosedimentary site. The detailed archaeological analysis of the structures excavated during the geoarchaeological survey of the basin is beyond the scope of this work and will be undertaken in further studies.
MATERIALS AND METHODS
Ten 2 m wide trenches were excavated to survey sedimentary deposits and their stratigraphy across the Lacdu-Puy (Fig 1D). Trenches 1, 2 and 4 served to define the geomorphological context, while trenches 3 and 5-10 delineated the lateral extent of archaeological structures within the depression. The archaeological work involved manual excavation, description and photography of structures. The geomorphological element comprised pedosedimentary description of stratigraphic logs (Table 1) and photography, supported by topographic survey at a centimetric precision (Leica DGPS system 500). Sediment samples were taken for multi-proxy analysis (grain size analysis, geochemistry and micromorphology), which will be exploited in further studies. AMS 14 C dating was undertaken on macrocharcoal, microcharcoal and bulk sediment samples from stratigraphic logs and archaeological structures (see Table 1). For charcoal samples, the sediment was deflocculated in a solution of sodium hexametaphosphate and sieved at 500µm and 100µm. Macro-or microcharcoal were manually concentrated using a binocular microscope. Twelve samples were submitted to Beta Analytic, Florida, with a thirteenth being sent to the Poznan Radiocarbon Laboratory. 14 C dates were calibrated using CALIB V7.04 and IntCal13 calibration curve [START_REF] Reimer | IntCal13 and Marine13 Radiocarbon Age Calibration Curves 0-50,000 Years cal BP[END_REF]. The results of six previous radiocarbon dates, from the 2012 core and used to establish a first chronology (Ledger et al., 2015), were used to assess the potential reservoir effect associated to the bulk sediment and micro-charcoal. Since the locations of most trenches were widely linked to archaeological objectives and their number conditioned by the resources devoted to excavation, information on sedimentary structures and geometry of the sedimentary infilling was incomplete. To overcome this limitation we obtained additional information by using non-destructive techniques such as geophysics. Several methods were implemented within the Lac-du-Puy depression, including electromagnetic induction (EMI) using various devices with different investigation depth (EMP400, EM31, DualEM21S), and also electrical resistivity using Abem Terrameter. Electromagnetic measurements delivered valuable results and are therefore presented, including a map of apparent electrical conductivity undertaken using the EMP400 in Horizontal Coplanar (HCP) geometry (Fig. 1E). An EMI tomography resulting from the development of an experimental protocol [START_REF] Guillemoteau | 1D sequential inversion of portable multi-configuration electromagnetic induction data[END_REF] from DualEM21S provided a magnetic permeability (or magnetic susceptibility) pseudo-section (Fig. 1G).
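For readers unfamiliar with the calibration step, the sketch below illustrates the standard probability method implemented by programs such as CALIB: the measured radiocarbon age is compared with the calibration curve at every calendar year and converted into a probability density. The calibration-curve file format and column order are assumptions here, and this is not the exact code used in this study.

```python
import numpy as np

def calibrate_c14(age_bp, age_err, cal_curve):
    """Probability density of the calendar age for a 14C date (age_bp +/- age_err).

    cal_curve: array with columns (calendar age BP, 14C age BP, curve error),
    e.g. the published IntCal13 table, assumed here to lie on a uniform
    calendar-age grid."""
    cal_age, c14_age, c14_err = cal_curve.T
    sigma2 = age_err ** 2 + c14_err ** 2
    dens = np.exp(-0.5 * (age_bp - c14_age) ** 2 / sigma2) / np.sqrt(sigma2)
    return cal_age, dens / dens.sum()
```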
RESULTS
Archaeological Finds
Older structures are mainly scattered remains of an archaeological soil without pottery fragments and are stratigraphically correlated to Early/Middle Neolithic levels of the 2012 core (Ledger et al., 2015). The structures are closely distributed without coalescence, with the exception of 3 pits (within the 37 fully excavated) where a partial overlapping of two structures was observed. On the basis of these characteristics, the structures can be interpreted as storage pits [START_REF] Sigaut | Les réserves de grains à long terme[END_REF][START_REF] Reynolds | Pit technology in the Iron Age[END_REF][START_REF] Miret I Mestre | Systèmes traditionnels de conservation des aliments dans fosses et silos. Une vision multidisciplinaire[END_REF][START_REF] Miret I Mestre | Trous, Silos et autres[END_REF]. Their systematic proximity to a Stratigraphic Unit (SU) interpreted as backfill (SU 2, see Table 2) suggests a post-depositional origin related to this SU.
No sherds were found at the bottom of the structures. Moreover, the absence of organic macro-remains or charred material in the pits, likely a result of taphonomic processes due to frequent water level fluctuations in the basin (Ledger et al., 2015), necessitated the predominant use of bulk samples for dating, which are susceptible to vertisolization processes and a possible reservoir effect, complicating the interpretation of the results.
Radiocarbon Dates
The AMS radiocarbon database of the Lac-du-Puy is summarized in Table 1: the first six dates (nº1 to 6) come from a core drilled in a natural sequence (Log 8, Fig 2g) and were previously published (Ledger et al., 2015).
Five additional radiocarbon dates were obtained from Log 4 (nº7 to 11, see Fig 2f) to complete the natural chronological framework. Three of these dates (nº9 to 11) are coherent with previous dates and archaeological evidence. Dates nº9 and 10 confirm previous observations (Ledger et al., 2015) of a small reservoir effect on micro-charcoal compared to bulk sediment. Therefore, dates from bulk sediment were preferred for the chronological interpretation of the natural stratigraphy. The last two, taken close to the top of the sequence in SU 2 (nº7 and 8), were rejected because their ages are not compatible with the well-established archaeological dating of this unit, probably because of its anthropogenic origin and consequent mixing effects (see [START_REF] Cunliffe | Danebury an Iron Age Hillfort in Hampshire Vol1 the excavations 1969-1978: the site[END_REF][START_REF] Cunliffe | Danebury an Iron Age Hillfort in Hampshire Vol 4 The excavations 1979-1988: the finds[END_REF][START_REF] Pons | Mas Castellar de Pontós (alt Empordá) Un complex arqueològic d'época ibèrica[END_REF]). Therefore, we sought to further investigate the age of these archaeological structures through an in-depth analysis of the stratigraphy and its links with the archaeological data.
Geomorphology And Stratigraphy
The
Palaeo-vertisol type
Very dark grey to black (10 YR 2,5/1) clay, including basaltic particles from fine sand to granules, rarely gravels, and prismatic structure with slickensides-like features. No archaeological material. Very irregular distribution and thickness (0-20 cm) within the depression, lacking lateral continuity. The unit has probably its upper part truncated before SU2 deposition.
Late Bronze Age 3 archaeological structures (950-800 BC)
SU4a, 4a1, 4b 70-115 approx.
Palaeo-gleysol 4a/4a1-center of the depression 4b-peripheral areas
Olive grey (5 Y 4/2) massive heavy clay, few small orange oxydation mottles at the top. Rarely quartz gravels. Homogeneous distribution within the lowest part of the depression, in lateral transition with 4b.
Bioturbation (roots) and irregular palaeotopography with small mounds (4a1) Dark grey to brown (2,5 Y 3/2 to 3/1) clay loam, with few basaltic granules to gravels, scattered pottery fragments (Late Bronze Age) and few small orange oxydation mottles. Replaces laterally 4a in peripheral parts of the depression. Massive to blocky structure. Irregular palaeotopography.
Early/Middle Neolithic scattered remains SU5a, 5b, 5c
115-130 approx.
Palaeo-regosol 5a-center of the depression 5b-peripheral areas 5c-external areas Greyish to greenish brown (10 YR 3/1 to 2,5 Y 3,5/2) sandy clay, with abundant weathered subrounded basaltic granules (white mottles or orange oxydation mottles), few gravels and small blocks, sometimes quartz gravels. Bioturbation (rootlets).
Grayish brown (10 YR 3/1,5) sandy loam, with many subrounded basaltic granules and some angular basaltic gravels. Few rounded quartz gravel.
Very abundant orange oxydation mottles. Replaces laterally 5a.
Dark brown (10 YR 3/1 to 3/2) sandy loam, with many basaltic sub-rounded granules and sparse angular gravels. Some orange oxydation mottles. Replaces laterally 5b.
SU6a
130-145 approx.
Grey clay with scoria
Dark grey (10 YR 4,5/1) clayey silt with many sub-rounded inclusions of scoria (sand to gravels, 10 R 3/3) more or less disaggregated. Appears in discontinuous areas in central and peripheral parts of the depression.
6b 145-?
Scoria breccia with interstitial clay
Dark grey (10 YR 4,5/1) clayey silt, with abundant sub-rounded and heterometric inclusions of scoria (gravels to cobbles, 10 YR 3/3) more or less disaggregated and increasing to the bottom, until forming a scoriaceous breccia with hardened clay matrix. Appears in discontinuous areas, in the lower parts of the bedrock in central parts of the depression. The fill is generally clayey, and ranges in thickness from 30-40 cm at the periphery to 2 m in the deeper sectors of the central part of the basin; the lateral variability is pronounced and sharp due to the undulating bedrock bottom. In the NW sector the natural stratigraphy is well preserved, whereas in the SE sector several successive backfill layers separated by La Tène archaeological levels appear directly in contact with the bedrock. The sedimentary sequence systematically thins from the inner to external parts of the depression (Table 2,Fig 3) and is accompanied by a very gradual lateral change of facies (SU 4 and 5), likely related to the frequency of palaeoflooding and the resulting hydromorphic gradient.
Weathered
Focusing on stratigraphic particularities, results mainly concern the relationship between SU4, SU3 and pits.
First, elevations of the top of SU 4a and 4b are higher around each pit or group of pits (forming small mounds surrounding pits, SU 4a1) than in inter-pits zones, which can only be the result of an irregularity in the palaeosurface that is probably anthropogenic in nature (Fig 2d and2e). Second, there is a complete lack of stratigraphic contact between SU3 and pits filling: SU3 never covers pits and is never cross-cut by pits (Table 2, 2). Finally, SU3, SU4 and the pits are systematically truncated suggesting that a phase of erosion or levelling works occurred before SU2 was deposited (Fig 2c,2d,2e).
DISCUSSION
The archaeological, radiocarbon and stratigraphic data allowed the construction of a robust chronostratigraphic framework for the natural stratigraphy. SU 6b and 6a are not directly dated, but on the basis of their stratigraphic position and their pollen assemblages (Ledger et al., 2015) 1).
With regards to archaeological structures, surveyed trenches and geophysical maps show that a quasi perfect correlation exists between the pits implantation and the spatial extent of the clay basin (Fig 1E). Archeological and ethnographic studies suggest that storage pits are often placed in function of the presence of loose and thick sediment, with a preference for impermeable clays which offer important storage advantages, in spite of the associated humid conditions [START_REF] Sigaut | Les réserves de grains à long terme[END_REF][START_REF] Hill | Storage of barley grain in Iron Age type underground pits[END_REF][START_REF] Reynolds | Pit technology in the Iron Age[END_REF][START_REF] Miret I Mestre | Systèmes traditionnels de conservation des aliments dans fosses et silos. Une vision multidisciplinaire[END_REF]. On the basis of the spatial pattern of the pit distributions, which was visible in surveyed trenches after topsoil stripping (see Fig 2a), we estimate there are likely to be several hundreds, maybe a thousand, across the Lac-du-Puy basin. The narrow arrangement of the pits with rare overlap between them suggests a high degree of synchronicity and their probable excavation in a relatively short length of time, perhaps decades.
The five dates from the dark lining at the base of the pits provided Early/Middle Neolithic ages, and are clearly too ancient to be consistent with stratigraphic evidence and age of the fill (see Fig 2b,2c,2d,2e and Table 1 date nº15). This is probably because the bases of pits were systematically excavated through SU 5 before reaching bedrock or SU 6a/b, and therefore the bulk or micro-charcoal samples from the dark lining at the base of pits date the age of SU 5. 14 C assays on macro-charcoals from the small excavated structure opened in SU 5 stratigraphically below a pit ( These results make it clear that the pits, in absence of pottery or charred material at the base, cannot be directly dated by radiocarbon on bulk samples. This prevents their dating more precisely than the TPQ and TAQ provided by archaeostratigraphic evidence: pits are excavated through a well-dated LBA 3 archaeological Given the site size and the archaeological importance of this group of pit structures, we attempted to further refine the relative chronology of the pits on the basis of stratigraphic evidence in order to associate them with one of the archaeological occupations of the site. The discontinuous nature of the log-based stratigraphic data and the lack of a unique exposure containing all the archaeological and stratigraphic information required us to construct a stratigraphic synthesis (Fig 2h). This synthesis led us to the chrono-stratigraphic interpretation presented below as the most likely. The well-dated SUs (4a and 3) were used to achieve this, despite the lack of direct stratigraphic relationships between SU 3 and pits, which is interpreted as a consequence of subsequent soil truncation. The very uneven palaeotopography of the top of SU 4a, with small mounds (SU 4a1) around each pit and depressions between them, suggests that these mounds are minor earthworks constructed immediately after the excavation of pits in the clayey SU4 layer (Fig 2c,2d and 2e). This microrelief (SU 4a1) could be interpreted as the truncated remains of a rim or a "neck" around the upper part of the pits (Fig 2c ,2d, 2e and 2h), probably built using the clay extracted during the excavation of the pits (SU 4a). This morphology was observed in approximately 10% of the pits -despite their truncation and postdepositional erosion -and has been documented in storage pits in anthropological studies [START_REF] Sigaut | Les réserves de grains à long terme[END_REF][START_REF] Miret I Mestre | Systèmes traditionnels de conservation des aliments dans fosses et silos. Une vision multidisciplinaire[END_REF] and on archaeological sites such as Danebury Ring (U.K.), which were also systematically truncated [START_REF] Cunliffe | Danebury an Iron Age Hillfort in Hampshire Vol1 the excavations 1969-1978: the site[END_REF][START_REF] Cunliffe | Iron Age Grain Storage Pits -Danebury Ring Hillfort[END_REF][START_REF] Cunliffe | Danebury an Iron Age Hillfort in Hampshire Vol 4 The excavations 1979-1988: the finds[END_REF]. As a consequence the excavation of the pits can, with some certainty, be assumed to have been contemporary with the surface of SU 4a or slightly posterior, but in any case prior to SU 3 which fossilized this palaeotopography after the structures were abandoned (Fig 2h).
SU3 deposition occurred across the basin, but preferentially in the lower areas of SU 4 palaeotopography between pits. Finally, after a general truncation of the top of SU3 and the majority of the pits necks, SU2 deposition occurred across the entire basin (Fig 2h). Therefore, the pits were likely excavated sometime ). On the basis of the existing archaeological chronology for Corent the only two periods of major settlement within this timeframe are a LBA settlement and a Hallstatt settlement [START_REF] Milcent | Les occupations de l'âge du Bronze du plateau de Corent (Auvergne, Puy-de-Dôme) : résultats des campagnes de fouille 2010-2013[END_REF][START_REF] Milcent | Une agglomération de hauteur autour de 600 a.C. en Gaule centrale : Corent (Auvergne)[END_REF].
However the hypothesis of a LBA 3 age is unlikely because of the stratigraphic evidence of pits excavated through the LBA levels and the complete lack of ceramic remains in the base of the pits, while they are frequent in neighbouring Late Bronze Age archaeological levels. For this reason the excavation of the pits is probably related to the Hallstatt D1-2 agglomerated settlement attested on the volcanic plateau at ~600-550 B.C.E. [START_REF] Milcent | Une agglomération de hauteur autour de 600 a.C. en Gaule centrale : Corent (Auvergne)[END_REF], albeit an association with a little occupation of the Hallstatt D3-La Tène A1
(510-425 B.C.E.) is also possible. The absence of ceramic material in the base of the pits is also more coherent with a medium-size and distant occupation (Hallstatt D settlements) than with bigger and material-rich occupations surrounding the basin, such as the LBA 3 and La Tène D2 occupations (Fig 1C). This Hallstatt chronological hypothesis is not surprising considering that this kind of storage techniques was commonplace in this period [START_REF] Gransar | Le stockage alimentaire sur les établissements ruraux de l'âge du Fer en France septentrionale : complémentarité des structures et tendances évolutives. In Les installations agricoles de l'âge du Fer en France septentrionale[END_REF] as attested by examples in Western Europe, especially in or near hillforts (e.g. [START_REF] Pons | Mas Castellar de Pontós (alt Empordá) Un complex arqueològic d'época ibèrica[END_REF][START_REF] Cunliffe | Iron Age Grain Storage Pits -Danebury Ring Hillfort[END_REF]. This unusually large grouping of storage pits in an open area, close to a relatively important Hallstatt proto-urban settlement [START_REF] Milcent | Une agglomération de hauteur autour de 600 a.C. en Gaule centrale : Corent (Auvergne)[END_REF] implies an increased storage capacity suggesting agricultural surplus. This pattern has sometimes been interpreted as a consequence of a short climatic optimum during the First Iron Age, but is more likely linked to socio-economic changes. Increasing centralization, long distance commerce and proto-urbanization is also observed in other sites in central
France [START_REF] Gransar | Le stockage alimentaire sur les établissements ruraux de l'âge du Fer en France septentrionale : complémentarité des structures et tendances évolutives. In Les installations agricoles de l'âge du Fer en France septentrionale[END_REF][START_REF] Milcent | Résidences aristocratiques et expérience urbaine hallstattiennes en France (VIe-Ve siècle av.J.-C.)[END_REF], alongside evidence of contact with Mediterranean world [START_REF] Milcent | Présence de Gaulois du Midi en Auvergne vers 600 avant J.-C. : les fibules méditerranéennes de Corent[END_REF] where similar patterns are noted for the period (Buxô & Pons, 1999;[START_REF] Pons | Mas Castellar de Pontós (alt Empordá) Un complex arqueològic d'época ibèrica[END_REF]. The abundant storage pits at Corent can therefore be interpreted as a marker of the emergence of proto-urban activity outside of the Mediterranean world, aborted towards the end of the first Iron Age.
On the basis of chronostratigraphic evidence, we propose the following reconstruction of the main evolution phases of the basin from the Early Neolithic to Late Antiquity and modern period (Figure 3), that is coherent with the palynological data (Ledger et al., 2015) : (1) In the oldest sedimentary phase, probably during the Early Neolithic, the basin was irregular with some sparse pools where sedimentary inputs concentrated in more or less intense runoff episodes, carrying scoria and clays from the volcanic cone. (2) In the transition and c. This increased erosion could be connected with the Neolithic forest clearing on the plateau suggested in palynological data (Ledger et al., 2015). Scattered archaeological remains at the top of this level are indicative of human activity and soils stability at the end of this phase, probably related to the Middle Neolithic occupation of Corent (fortified settlement).
(3) The period between the middle Neolithic and the Early Bronze Age is difficult to interpret due to the low chronological resolution of this part of the sedimentary sequence. This may be related to a declining sedimentation rate or even a sedimentary hiatus. Palynological data clearly illustrate continuity in the vegetation dynamics for this period, suggesting a slow sedimentation rate rather than a long hiatus (Ledger et al., 2015) This sedimentary hiatus could be related to a contemporary stability and soil genesis phase in the plateau well documented in annual excavations [START_REF] Milcent | Une agglomération de hauteur autour de 600 a.C. en Gaule centrale : Corent (Auvergne)[END_REF]. ( 4 This reconstruction provides an insight to the evolution of socio-environmental interactions from the Early Neolithic to the Late Antiquity in an unusual intra-urban context. These interactions are characterized by different land uses, resources exploitation and impacts in the palustrine system, and appear to be the main driver of changing conditions in the basin. Neolithic sedimentary impacts, linked to a typical pattern of forest clearance and cultivation, are followed by a long period of low-energy hydrosedimentary dynamics. This phase can be interpreted as a system recovery, suggesting a non-linear but nevertheless progressive anthropogenic forcing. Major impacts in the systems evolution occur from as early as the first Iron Age in association with proto-urbanization, represented by the excavation of around 1,000 storage pits in the basin that resulted in major changes on its hydrosedimentary dynamics. These changes caused the shift of the Lac du Puy to an anthroposystem well before the impacts of the Late Iron Age Gallic oppidum.
CONCLUSION
This work presented the initial results of a geoarchaeological survey of the Lac-du-Puy basin, an unusual intraurban wetland with a valuable sedimentary record covering 5000 years of human activity including settlements and proto-urban episodes of the first millennium B.C.E. Despite a complex, discontinuous and sometimes highly disturbed or even absent stratigraphy, the sedimentary archives allowed us to build a robust chronostratigraphic framework and to reconstruct the main evolution phases of this small depression from the Early Neolithic to the Gallo-Roman period. The unexpected find of a large number of Iron Age pits probably used for storage was a major discovery but complicated interpretation, owing to difficulties associated with direct dating and wide time-range provided by the stratigraphy. Nevertheless, the chronostratigraphical approach developed in this work allowed the identification and dating of the effects of post-depositional processes on these Iron Age pit structures and associated sediments. This detailed analysis led us to confidently propose, on the basis of the available data, that these structures are associated with the Early Iron Age Hallstatt D settlements of Corent (600-425 B.C.E.). Moreover, these abundant storage pits are also a local evidence of a short and early phase of socio-economic changes and emerging proto-urbanization processes outside the Mediterranean world, centuries before the well known oppida period at the end of the This geoarchaeological study was also the first attempt to document the human-environment interactions which were likely the main driver of changing pedo-sedimentary and hydrological conditions during the last 5 millennia in the Lac-du-Puy basin. Especially in the first millennium B.C.E., the geoarchaeological data suggest abruptly changing conditions in this wetland in response to changes of use and growing anthropogenic impact linked to storage pits excavation, and more generally proto-urbanization, culminating with its total backfill in the Gallo-roman period.
These preliminary interpretations will be followed by additional investigations and analysis such as grain-size analysis, micromorphology or geochemistry, to truly understand the complex trajectory of the humanenvironment interaction in this intra-urban wetland, a privileged mirror of the proto-urbanization processes and the associated anthropogenic forcing on soils during the first millennium B.C.E.
ACKNOWLEDGEMENTS
Figure 1. A, B and C: Study area location and approximate extent of the main phases of settlement (C). D: Map of the geoarchaeological survey trenches, with location of the studied stratigraphic logs and the archaeological finds. E: Electrical conductivity map and pit distributions (red: low values, blue: high values). F: Early 19th century map of the Lac-du-Puy (Napoleonic land register). G: EMI tomography (magnetic permeability section) along the A-B transect illustrated in D. Blue: low values, red: high values (adapted from Guillemoteau et al., 2016).
LBA 3 is represented by an archaeological soil level with abundant basalt pebbles and pottery, located in the central part of the basin (see Fig 1D and Fig 2b). La Tène D1-2 is the best represented period, with abundant structures (mainly pavements) especially in the peripheral south and west areas of the depression (Fig 1D),characterized by the presence of frequent fragments of amphorae type Dressel 1[START_REF] Guichard | Les amphores. Lavendhomme, Guichard. Rodumna, le village gaulois[END_REF]. Gallo-Roman structures (pavements and small buildings foundations) are present in the south of trenches 1 and 6.Modern drainage works are also present in all surveyed trenches. As a major archaeological discovery, 114 protohistoric pits were found covering all the central part of the depression (Fig 1Dand 2a). Excavated in the clayey sediment, these pits have diameters between 70 and 120 cm, a maximal depth around 130 cm, variated shapes (pear-shaped, bottle-shaped, bell-shaped etc.) with a more or less truncated narrower top, and a dark probably organic thin lining in their walls and bottom (Fig 2aand 2c). The sedimentary fill of the pits is a massive and homogeneous dark grey clay presumably the result of deliberate infilling when they were decommissioned. All of the structures are carefully and closely distributed (sometimes only separated by 10 any coalescence(Fig 2a)
Figure 2. a) Closely distributed pits in trench 2. b) Pits cross-cutting a LBA archaeological soil in trench 1. c) Pit 25604 and underlying Neolithic structure. d) Log 2.2 including a pit and SU 4a1. e) Stratigraphic survey of the south of trench 1. f) Log 4. g) Log 8. h) Synthetic interpretation of chrono-stratigraphic relationships between pits, natural stratigraphy and well-dated archaeological remains. See Table 2 for SU codes.
electrical conductivity map provided information regarding the extent and thickness of clayey sediment in the basin, revealing that it is limited to a sub-circular irregular 100 m radius depression (blue and yellow areas in Fig 1E), and that some scattered spots in the central area have thicker clay accumulation (dark blue areas). The magnetic pseudo-section provides a good indication of the shape of the sedimentary infill and the underlying basaltic bedrock, showing an irregular and dissymmetric bottom of the basin, with a maximum depth reaching 2 m (Fig 1G). The 29 stratigraphic logs distributed in the surveyed trenches (Fig 1D) allowed a detailed characterization of the sedimentary infilling of the depression, which was divided into 12 Stratigraphic Units (Table 2). The more complete sequences were Log 8 (equivalent to the published core extracted in 2012) and Log 4 (Fig 2f and 2g) and hence these were the subject of the highest resolution dating. An additional 15 logs from trench 1 were used to document a 195 m long stratigraphic section which crosses the depression center, including only key archaeological structures for reasons of stratigraphic clarity (Fig 3).
Fig 2c, 2d, 2e). Furthermore, SU3 is predominately found in low-areas of the very irregular top SU4 topographic surface(Fig 2d, 2e). Third, SU4 and SU3 are both clayey sediment, but SU3 can be differentiated on the basis of its sand and gravel content, vertic characteristics and extremely irregular distribution and thickness. This indicates increased detritism, seasonal shrink-swell processes and probable truncation of the top of SU3 (Table
soil (950-800 B.C.E., Fig 2b) and are all covered by structures or archaeological soils from La Tène D2b to Gallo-Roman (SU2, 50/40-30 B.C.E., Fig 2c and 2d), except in a single instance where a pit is covered by a La Tène D1B archaeological soil (100/90-80/70 B.C.E., Fig 2e).
n°3 (819-755 cal. B.C.E.) and n°10 (490-366 cal. B.C.E.
Middle Neolithic (c. 4500 cal. B.C.E.) detrital inputs leveled the bottom of the basin and were probably associated with an oscillating water table as suggested by the lateral facies change between SU 5a, b
) Probably between 600 and 425 cal. B.C.E. a large number of storage pits were excavated in the basin with consequent relief and surface disturbance. (5) After the abandonment and filling of pits, external sedimentary inputs occurred again during ephemeral and episodic flooding, partially or totally fossilizing the previous topography and causing a vertisol-type functioning. These processes can be interpreted as the hydro-sedimentary reaction to the previous anthropogenic disturbance of the basin. (6) Between 100/90 B.C.E. and 50/40 B.C.E. at least two phases of soil levelling, backfill and occupation occurred in the southern part of the basin, whereas SU3 was still in deposition in central parts of the depression. (7) Between 50/40 and 30 B.C.E. (from La Tène D2B to Gallo-Roman period) the whole basin was levelled, backfilled and probably drained, partly truncating and fossilizing the pre-existing vertisol. (8) Finally, from Antiquity to Modern Period, a slow sedimentation gradually covered the antique backfill forming a topsoil with incipient vertic characteristics, and drainage works were performed in undetermined recent period.
Figure 3. Main pedo-sedimentary phases of the Lac-du-Puy evolution between the early Neolithic and the modern period. For SU colors, see Fig 2.
Iron Age, providing a new insight into the diffusion of proto-urbanization in Western Europe during the first millennium B.C.E.
This work is under the AYPONA project funded by the Conseil Régional d'Auvergne (France), the Service Régional d'Archéologie d'Auvergne, the Maison des Sciences de l'Homme de Clermont-Ferrand, and the University Lyon 2 (Projet BQR Lyon 2, dir. M. Poux and J-F. Berger). The authors are grateful to LUERN association, which provided logistic assistance, to the full team of Corent annual excavation, and to the landowners Mme. Mioche and M. Rodriguez. We want also to thank Julien Guillemoteau and Mathias Pareilh-Peyrou for the geophysical prospections, and Pierre Boivin, Yann Deberge, Josep Miret i Mestre and Enriqueta Pons for the valuable discussions. Finally, we express sincere thanks to the anonymous reviewers for their remarks and critical reading of the manuscript. 8. REFERENCES B.R.G.M. (2015). Carte Géologique de la France à 1:50000, feuille 765 (Massiac). Bernigaud, N., Berger, J.-F., Bouby, L., Delhon, C., & Latour-Argant, C. (2014). Ancient canals in the valley of Bourgoin-La Verpillière (France, Isère): morphological and geoarchaeological studies of irrigation systems from the Iron Age to the Early Middle Ages (8th century bc-6th century ad). Water History, 6, 73-93. Butzer, K. W. (2008). Challenges for a cross-disciplinary geoarchaeology: The intersection between environmental history and geomorphology. Geomorphology, 101, 402-411. Buxô, R., & Pons, E. (1999). Els productes alimentaris d'origen vegetal a l'adat del ferro de l'Europa occidental : de la producció al consum. Girona: Museo d'Arqueologia de Catalunya.
Table 2 ,
2 SU2).
On archaeological structures, two dates were obtained on centimetre-size macro-charcoal from two levels of the small excavated structure in the SU 5a, under the pit 25604 (see Fig 2c); both provided the same Early/Middle Neolithic age (nº12 and 13). The bulk from the base of the pit 25604 (dark lining), partially excavated into the summit of the underlying Neolithic structure, provided a similar age (nº14). A bulk sediment date on the fill of this structure taken 10 cm above the base (nº15) provided a more recent date (see Fig 2c). A further four samples (nº16 to 19), from similar contexts (dark lining at the base of the structures) in other pits also provided Early/Middle Neolithic ages. These dates are surprisingly old, especially when compared with similar settlements in Western Europe with abundant storage pits such as Ensérune (France), Mas Castellar de Pontós (Spain) or Danebury Ring (U.K.).
Table 1. Radiocarbon database from Lac-du-Puy trenches and logs; dates n°1 to 6 were previously published in Ledger et al. (2015). Dates have been placed in their stratigraphic context on Fig 2f, 2g and 2h; dates in italic were rejected on a stratigraphic basis.
Nº Localization Depth(cm)/position Lab. code Material 14C yr. BP Cal BC/AD (2σ)
1 Core/Log L8 54-57 (SU3-top) Beta-379416 Microcharcoal 2240 ± 30 390-205 BC
2 Core/Log L8 54-57 (SU3-top) Beta-379417 Bulk sed. 1990 ± 30 48 BC-AD 71
3 Core/Log L8 71-72 (SU4-top) Beta-377232 Bulk sed. 2590 ± 30 819-755 BC
4 Core/Log L8 71-73 (SU4-top) Beta-379418 Pollen 2750 ± 30 944-823 BC
5 Core/Log L8 87-88 (SU4-middle) Beta-375785 Bulk sed. 3510 ± 30 1916-1749 BC
6 Core/Log L8 102-103 (SU4-bottom) Beta-379419 Bulk sed. 3330 ± 30 1688-1528 BC
7 Log L4 43-47 (SU2-bottom) Beta-434187 Microcharcoal 5550 ± 40 4457-4339 BC
8 Log L4 43-47 (SU2-bottom) Beta-430610 Bulk sed. 1430 ± 30 575-657 AD
9 Log L4 59-63 (SU3-bottom) Beta-434188 Microcharcoal 2390 ± 30 542-397 BC
10 Log L4 59-63 (SU3-bottom) Beta-430611 Bulk sed. 2340 ± 30 490-366 BC
11 Log L4 87-91 (SU4-bottom) Beta-430612 Bulk sed. 3130 ± 30 1457-1300 BC
12 Excav. Struct. Base Beta-418695 Macrocharcoal 5650 ± 30 4546-4394 BC
13 Excav. Struct. Middle Beta-418694 Macrocharcoal 5650 ± 30 4546-4394 BC
14 Pit 25604 Lining-Base Poz-74925 Microcharcoal 5520 ± 40 4453-4327 BC
15 Pit 25604 Fill Beta-425790 Bulk sed. 3430 ± 30 1876-1643 BC
16 Pit 25677 Lining-Base Beta-425426 Bulk sed. 6130 ± 30 5208-4992 BC
17 Pit 25703 Lining-Base Beta-425787 Bulk sed. 5240 ± 30 4226-3971 BC
18 Pit 25699 Lining-Base Beta-425788 Bulk sed. 4780 ± 30 3641-3519 BC
19 Pit 25610 Lining-Base Beta-425789 Bulk sed. 5640 ± 30 4541-4372 BC
Table 2 .
2 Synthetic natural and archaeological stratigraphy of the Lac-du-Puy excavation area. Depths in cm are indicative. ) distributed all over the excavation area with constant thickness around 15 cm. Scattered basaltic particles from granules to coarse gravels. Many amphorae fragments (Dressel 1). Blocky/prismatic structure, sometimes crumbly. Frequent brown-orange oxidation mottles. ) clay loam with common heterometric basaltic inclusions from granules to small blocks, macrocharcoals and scattered pottery (La Tène D). Granular to blocky, sometimes prismatic structure. Present mainly in peripheral areas of the depression (south and west).
Stratigraphic Unit (SU) Depth (cm) Pedo/ Lithofacies Description
SU1 0-30 Modern topsoil/Plow zone Dark brown (10 YR 3/2) clay loam distributed over all the excavation area with constant thickness around 30 cm. Scattered basaltic fragments from granules to coarse gravels. Frequent pottery fragments (mainly amphorae type Dressel 1). Blocky to prismatic structure, sometimes crumbly. Surficial cracks in summer dry periods.
La Tène D2B to Gallo-Roman (50/40-30 BC) archaeological structures
SU2 30-45 General anthropogenic backfill Dark brown clay loam (10 YR 3/2La Tène D2A (80/70-50/40 BC)archaeological structures
SU A1-A2 Variable Local anthropogenic backfill Mainly dark brown (10 YR 3/1 to 2/1La Tène D1B (100/90-80/70 BC) archaeological structures
SU3 45-70 approx.
50-40 cal. B.C.E. Finally, SU 2 is dated by archaeological artefacts (ceramic) between 50 and 30 B.C.E. The pedo-sedimentary characteristics and the generally low sedimentation rate of the sequence suggest that it is a succession of more or less cumulic hydromorphic palaeo-soils, including archaeological levels and phases of low sedimentation (or maybe hiatuses), rather than a classical lacustrine sedimentary sequence (Table
, they likely date from
the Early Neolithic (c. 5000 B.C.E.) . Units 5a, b and c are dated to the Early/Middle Neolithic transition by two
radiocarbon dates from the archaeological structure (Table 1 dates nº12 and 13, 4545-4394 cal. B.C.E.). Units
4a and b cover at least the period from approximately the Early Bronze Age (c. 1900-1600 cal. B.C.E.) to the end of the LBA (c. 800 cal. B.C.E.). SU 3 deposition takes place from between 490-366 cal. B.C.E. to between
Table 1 date nº12 and nº13, Fig 2c) provided the same Neolithic age as the pits. However this structure cannot be contemporaneous with the pits, as the only date obtained from the pit fill provides a much younger age (Table 1 date n°15, Fig2c). This date is also inconsistent with stratigraphic evidence (Fig2b), but could be explained by the fact that this kind of structure was usually backfilled when a new neighbouring structure was excavated. Sediments from the new pit were then likely used to fill the abandoned pit, and logically provide a bulk age sensibly similar to the obtained dates from SU 4 sediments in logs 4 and 8 (seeFig 2c, 2f, and 2g).
. In the Early Bronze Age (after c. 1900-1600 cal. B.C.E.) a new sedimentary phase began, associated with low-energy inputs and a permanently high water table
(gleysol development). This phase was interrupted c. 800 cal. B.C.E., where an archaeological soil in the depression shows human occupation associated with the Corent LBA 3 settlement. Between c. 800 cal. B.C.E. and 5 th c. B.C.E. (dates 3 and 9-10 on Table
2
) sedimentation ceased. | 47,470 | [
"2512",
"11452",
"742611",
"169724",
"20817",
"21539",
"855058",
"847935"
] | [
"810",
"105430",
"810",
"145345",
"205742",
"444290",
"301475",
"44675",
"444290",
"490718",
"810",
"121947"
] |
01771102 | en | [
"phys",
"spi"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01771102/file/2018_FShaun_JMM_preprint.pdf | Shaun Ferdous
Sreyash Sarkar
Frederic Marty
Patrick Poulichet
William Cesar
Elyes Nefzaoui
email: [email protected]
Tarik Bourouina
Sensitivity optimization of micro-machined thermo-resistive flow-rate sensors on silicon substrates
Keywords: Flow-rate, sensor, Thermal sensor, MEMS, Micro-machined, Water, Silicon, Sensitivity
We report on an optimized micro-machined thermal flow-rate sensor designed as part of an autonomous multi-parameter sensing device for water network monitoring. The sensor has been optimized under the following constraints: low power consumption and high sensitivity, while employing a large-thermal-conductivity substrate, namely silicon. The resulting device consists of a platinum resistive heater deposited on a thin silicon pillar, approximately 100 µm high and 5 µm wide, in the middle of a nearly 100 µm wide cavity. Operated under the anemometric scheme, the reported sensor shows a higher sensitivity over the velocity range up to 1 m/s than sensors based on similar high-conductivity substrates, such as bulk silicon or a silicon membrane, with a power consumption of 44 mW. The obtained performances are assessed with both CFD simulation and experimental characterization.
Introduction
An accurate measurement of fluid-flow is of paramount importance in different fields of science and technology such as-gas chromatography [START_REF] Kaanta | Novel device for calibration-free flow rate measurements in micro gas chromatographic systems[END_REF], environmental monitoring [START_REF] Bartlett | Modeling of Occupant-Generated CO2 Dynamics in Naturally Ventilated Classrooms[END_REF][START_REF] Zhu | 2-D Micromachined Thermal Wind Sensors #x[END_REF], weather forecasting [START_REF] Shen | A FCOB packaged thermal wind sensor with compensation Microsyst[END_REF], bio-sensing [START_REF] Renaudin | Integrated active mixing and biosensing using surface acoustic waves (SAW) and surface plasmon resonance ( SPR ) on a common substrate Lab[END_REF], medical [START_REF] Billat | Monolithic integration of micro-channel on disposable flow sensors for medical applications Sens[END_REF] and biomedical applications [7][START_REF] Hsiai | Micro Sensors: Linking Real-Time Oscillatory Shear Stress with Vascular Inflammatory Responses[END_REF][START_REF] Soundararajan | MEMS shear stress sensors for microcirculation Sens[END_REF], aircraft monitoring [START_REF]Anon Advanced flow measurement and active flow control of aircraft with MEMS[END_REF] and industrial process control [START_REF] Bruschi | Design Issues for Low Power Integrated Thermal Flow Sensors with Ultra-Wide Dynamic Range and Low Insertion Loss[END_REF]. Flow-rate measurement is crucial for water network management as well. Water is a precious natural resource and therefore is a global priority. Consequently, there is a significant need for an effective measurement system for water networks monitoring in order to reduce wastage and to ensure apposite water quality. Measurement of water flow-rate is necessary not only to survey water consumption but also to estimate and even localize the water loss due to leakage of distribution networks. To obtain a complete and monolithic measurement system for water networks, other parameters need to be considered along with the flow-rate such as pressure, temperature and electrical conductivity. Pressure and flow-rate sensors can provide information about the leakage of the distribution system; whereas the conductivity sensor can indicate the overall water ionic contamination level. The temperature of water is also important since the water conductivity depends on temperature as well. It is therefore tempting to co-integrate all the sensors mentioned above on the same chip so as to provide a monolithic multi-parameter sensing solution, which puts a strong constraint on using silicon as a substrate material * . Nowadays, several micro-machined flow-rate measurement systems are available to measure the flowrate in different fluid media [START_REF] Glaninger | Wide range semiconductor flow sensors Sens[END_REF][START_REF] Ju | Microfluidic flow meter and viscometer utilizing flow-induced vibration on an optic fiber cantilever 2011 16th International Solid-State Sensors[END_REF]. The flow-rate range of such systems range from nL/min to L/min [START_REF] Dijkstra | Miniaturized thermal flow sensor with planar-integrated sensor structures on semicircular surface channels Sens[END_REF][START_REF] Meng | A biocompatible Parylene thermal flow sensing array Sens[END_REF]. A majority of devices reported in the literature are based on thermal operation principles. 
Thermal flow-rate sensors can be divided into three categories based on their operation principle: anemometric (hot-wire), calorimetric and time-of-flight (TOF) [START_REF] Kuo | Micromachined Thermal Flow Sensors-A Review[END_REF]. An anemometric flow-rate sensor measures the effect of the flowing fluid on a heating resistance, for instance a hot wire. The fluid velocity can be deduced by measuring the amount of heat extracted by the fluid from the hot surface of the sensor: the temperature variation of a hot wire submitted to a fluid flow indeed depends on the fluid velocity, and measuring this temperature variation enables the calculation of the velocity. An anemometric flow-rate sensor can be operated in one of three modes: constant current, constant temperature or constant power [START_REF] Chiu | Low power consumption design of micro-machined thermal sensor for portable spirometer[END_REF][START_REF] Nguyen | Micromachined flow sensors-a review Flow[END_REF]. A calorimetric flow-rate sensor measures the asymmetry of the temperature profile caused by the fluid flow around a heating resistor; in general, at least two temperature sensors are placed upstream and downstream of the heating resistor for this purpose [START_REF] Zhu | 2-D Micromachined Thermal Wind Sensors #x[END_REF][START_REF] Kuo | Micromachined Thermal Flow Sensors-A Review[END_REF]. A TOF flow-rate sensor measures the transit time of a thermal pulse over a known distance. This kind of sensor involves two or more resistors: one is used as a heater, while the others are placed downstream and used as temperature sensors. A short thermal pulse generated by the heater travels through the fluid medium and is sensed by the downstream temperature sensors; the pulse travel time from source to sensor enables the calculation of the fluid velocity [START_REF] Van Kuijk | Multi-parameter detection in fluid flows Sens[END_REF][START_REF] Berthet | Time-of-flight thermal flowrate sensor for lab-on-chip applications Lab[END_REF].
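As a rough illustration of the TOF principle only (the pulse shape, sampling rate and noise level below are assumed for the sketch, not taken from the devices reported here), the transit time can be estimated as the lag that maximises the cross-correlation between the heater drive pulse and the downstream temperature signal, the velocity then following from the known resistor spacing:

```python
import numpy as np

# Illustrative values only: sampling rate, pulse shape and noise are assumed.
fs = 1000.0                    # sampling rate, Hz
d = 20e-6                      # heater-to-sensor spacing, m (20 um, as for the three-resistor devices)
t = np.arange(0.0, 1.0, 1.0 / fs)

pulse = np.exp(-((t - 0.10) / 0.005) ** 2)                     # heater drive pulse
true_delay = 0.040                                             # s, synthetic transit time
downstream = np.exp(-((t - 0.10 - true_delay) / 0.012) ** 2)   # delayed, smeared copy
downstream += 0.01 * np.random.randn(t.size)                   # measurement noise

# Transit time = lag maximising the cross-correlation of the two signals
lags = np.arange(-t.size + 1, t.size) / fs
xcorr = np.correlate(downstream - downstream.mean(), pulse - pulse.mean(), mode="full")
transit_time = lags[np.argmax(xcorr)]

velocity = d / transit_time
print(f"transit time ~ {transit_time * 1e3:.1f} ms  ->  velocity ~ {velocity * 1e3:.2f} mm/s")
```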
In the present work, we consider anemometric thermal flow-rate sensors operated under constant current. Several versions are designed, characterized and compared. The first is a flow-rate sensor fabricated on bulk silicon; this device exhibits a small sensitivity and a large power consumption, mainly due to a large conductive heat leakage through the substrate. We have recently shown that the power consumption and sensitivity of a thermal flow-rate sensor are strongly dependent on the substrate material used [START_REF] Shaun | Design of micro-fabricated thermal flow-rate sensor for water network monitoring[END_REF]. For better performance in terms of power consumption and sensitivity, low thermal conductivity substrates are required. Due to the excellent linearity of its temperature-dependent resistivity, platinum is the most used material to fabricate the heating/sensing element of a thermal flow-rate sensor [START_REF] Meng | A biocompatible Parylene thermal flow sensing array Sens[END_REF][START_REF] Mailly | Anemometer with hot platinum thin film Sens[END_REF][START_REF] Fürjes | Thermal characterisation of a direction dependent flow sensor Sens[END_REF], while a mixture of platinum and titanium is sometimes used as well [START_REF] Bailey | Turbulence measurements using a nanoscale thermal anemometry probe[END_REF]. Most micromachined thermal flow-rate sensors are fabricated on silica glass [START_REF] Norlin | A chemical micro analysis system for the measurement of pressure, flow rate, temperature, conductivity, UV-absorption and fluorescence[END_REF][START_REF] Wilding | Manipulation and flow of biological fluids in straight channels micromachined in silicon[END_REF][START_REF] Roh | Sensitivity enhancement of a silicon micro-machined thermal flow sensor Sens[END_REF] for maximum thermal efficiency, owing to its low thermal conductivity. Sometimes, for nanofluidic or biomedical applications [START_REF] Wu | MEMS flow sensors for nano-fluidic applications Sens[END_REF][START_REF] Yu | MEMS Thermal Sensors to Detect Changes in Heat Transfer in the Pre-Atherosclerotic Regions of Fat-Fed New Zealand White Rabbits[END_REF] for example, silicon, which is not the optimal choice from thermal considerations, is used. Hence, when the use of silicon as a substrate material is a constraint, other solutions are required to optimize heat transfer in the device, and modification of the substrate geometry is then the only option to achieve a low thermal conductance and maximize the device sensitivity. First, a bulk silicon substrate device is considered as a reference. Then, a second version of the sensor, based on a silicon membrane, is investigated, but this version gives unsatisfactory results and further optimization is required. A suspended resistor could be considered and has already been suggested by Neda et al. as early as 1995 [START_REF] Neda | A Polysilicon Flow Sensor For Gas Flowmeters[END_REF] and more recently by various works [START_REF] Berthet | Time-of-flight thermal flowrate sensor for lab-on-chip applications Lab[END_REF][START_REF] Jack | Two-Dimensional Micromachined Flow Sensor Array for Fluid Mechanics Studies[END_REF]. However, such a geometric configuration increases the device fragility, particularly if wastewater is targeted as a possible operation medium. An intermediate configuration is therefore proposed in the present paper: the heater resistor is fabricated on a thin silicon microwall at the middle of a large cavity etched in bulk silicon.
Such a configuration reduces the conduction heat loss while ensuring an acceptable robustness of the heater/sensor structure. The numerical optimization and design, as well as the experimental characterization, of this improved micro-machined thermal flow-rate sensor are reported in the following sections. The obtained performances are also compared to those of the previously designed and fabricated devices.
The manuscript is organized as follows: first, the modelling and fabrication processes of the different devices are described. Then, the experimental setup and numerical methods used are presented. Finally, the main numerical and experimental results regarding the sensors' responses to the fluid flow rate are reported and discussed. Particular attention is paid to the comparison of the new pillar-based device with the previous prototypes, with respect to their sensitivity to the fluid velocity and their power consumption.
Methods
Design and Fabrication
The multi-parameter sensor is a 1 x 1 cm MEMS chip in which the lateral size of each individual sensor is no more than a few millimetres.
The first prototype of the multi-parameter sensing module is presented in Figure 1. The sensor chip is placed on a PCB that contains the CMOS chip and other electronics for wireless data transmission, as shown in Figure 1(b). Finally, it is inserted into a hollow plastic pole (Figure 1(c)) which will be deployed in the water distribution network to measure the multiple parameters. Figure 2 illustrates in detail the different sensing parts of the fabricated chip. The chip wire-bonded to a PCB, which is designed to provide the sensors' electrical connections for lab testing, is shown in Figure 2(a). The entire PCB used for laboratory characterization is shown in Figure 3. The PCB can be divided into two parts, base and body. The entire PCB length is 7.2 cm, of which the base length is 0.89 cm. The widths of the base and the body are 3.82 cm and 1.18 cm, respectively. Two sets of header holes are created at the PCB base in order to build the electrical connection between the PCB and the electronic circuit. A 1 x 1 cm square-shaped space is created at the tip of the PCB body to place the sensor chip. An adhesive is used to glue the chip onto the PCB. The electrical connection between the chip's connection pads and the PCB's wires is made by gold wire bonding. The electrical connections between the chip and the PCB are protected hermetically by epoxy resin to make them water-resistant. Note that only the bulk silicon and silicon membrane based sensors contain three platinum resistors, while the pillar device contains only one. The arrangement of three platinum resistors allows the implementation of any one of the three conventional operation schemes of a thermal flow-rate sensor, i.e. anemometric, calorimetric and time-of-flight (TOF). In the calorimetric operation principle, the central resistor can be used as a heater and the two upstream and downstream resistors can be used as temperature sensors. In TOF mode, the left resistor is a thermal pulse generator while the other resistors are temperature sensors, depending on the velocity range: a shorter distance between the pulse generator and the sensing resistor gives a high sensitivity to low velocities, whereas a longer distance enhances the sensor's sensitivity to high velocities [START_REF] Zhu | 2-D Micromachined Thermal Wind Sensors #x[END_REF][START_REF] Ashauer | Thermal flow sensor for liquids and gases based on combinations of two principles Sens[END_REF]. The pillar-based device's heater is also made of platinum but with different geometric dimensions. Platinum is chosen for its high TCR (temperature coefficient of resistance) and excellent linearity over a wide temperature range. The TCR value of our sputter-deposited platinum is 2.218×10⁻³ °C⁻¹. The bulk silicon and silicon membrane configurations contain three platinum resistors to enable both calorimetric and TOF operating modes, which have not been considered in this report. The distance between the resistors is 20 µm. The length and width of each platinum resistor are 106 µm and 10 µm respectively, and the thickness is 340 nm. The cavity depth of the pillar-based device is 100-115 µm while the cavity width and length are 100 µm and 200 µm, respectively. The resistor of this last configuration is as long as the cavity while it is only 5 µm wide. This width is obtained by a parametric numerical optimization of the device sensitivity with respect to its geometric dimensions. The resistor width is therefore reduced by half compared to the two other configurations.
It could be reduced further, but this would increase the complexity of the fabrication process and make the resistor more fragile. For the same reason, a fully suspended resistor is not considered. More details on this optimization process are presented in section 3.1.1.
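To give an order of magnitude for the heater resistance implied by these dimensions, a minimal sketch is shown below. The platinum resistivity used is a bulk literature value (an assumption); sputtered thin films are usually a few times more resistive, which is consistent with the measured resistance being higher than this estimate. The temperature dependence uses the reported TCR.

```python
# Order-of-magnitude sketch only: bulk platinum resistivity is assumed, whereas
# sputtered thin films are typically a few times more resistive.
RHO_PT_BULK = 1.06e-7      # ohm.m, assumed bulk value
TCR = 2.218e-3             # 1/degC, reported for the sputtered platinum

def film_resistance(length_m, width_m, thickness_m, rho=RHO_PT_BULK):
    """R = rho * L / (w * t) for a straight thin-film resistor."""
    return rho * length_m / (width_m * thickness_m)

def resistance_at(temp_c, r_ref, t_ref_c=20.0, tcr=TCR):
    """Linear platinum model R(T) = R(T0) * (1 + alpha * (T - T0))."""
    return r_ref * (1.0 + tcr * (temp_c - t_ref_c))

# Pillar heater: 200 um long, 5 um wide, 340 nm thick (dimensions from the text)
r_nominal = film_resistance(200e-6, 5e-6, 340e-9)
print(f"nominal resistance ~ {r_nominal:.1f} ohm (bulk-resistivity estimate)")
print(f"after a 2.5 degC self-heating ~ {resistance_at(22.5, r_nominal):.2f} ohm")
```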
All considered sensors are based on resistive read-out. All of them, except the pressure sensor, are obtained by metal micro-patterning with a combination of titanium, platinum and gold. Further co-integration of the pressure sensor along with the other sensors on the same chip requires additional steps, which include patterning of polysilicon strain gauges and backside etching of the silicon bulk. The latter step is also used to produce thin silicon membranes, not only for the pressure sensor but also for the flow-rate sensor schematically shown in Figure 4(b). The membrane acts either as a mechanically flexible structure or as a thermally insulating layer, depending on the considered sensor. A glass substrate is used to support the silicon membrane. The schematic cross-section view of the multi-parameter sensor chip with the silicon membrane flow-rate sensor is presented in Figure 5. In the case of the pillar structure device, a 100 µm deep cavity is created by anisotropic dry etching. The platinum resistor is patterned on top of the pillar positioned at the cavity centre. The fabrication sequence is presented in detail in Figure 6(a-e), and Figure 6(f) shows an SEM picture of the pillar-based sensitive element of the sensor, which is fabricated in the new version of the multi-parameter chip.
Experimental setup
The experiment is conducted to extract the sensors' responses with respect to the fluid velocity. The setup is constructed around a rigid pipe with a diameter of 25 mm. The length of the pipe is 10 times its diameter in order to reduce edge effects and to ensure an established flow regime around the sensor. A schematic of the experimental setup is presented in Figure 7. A variable-speed pump, submerged in a 20-litre water reservoir, is used to generate water flow at different flow rates. The velocity is measured following a volumetric measurement procedure using the same setup (Figure 7): the time required to fill a given volume of water is measured at different pump supply voltages, and the flow rate is then calculated. Since the pipe diameter is known, the average velocity is obtained for each flow rate and pump power supply. A LabVIEW software interface is used for data acquisition. The sensor is operated according to the hot-wire (anemometric) scheme in constant current mode. For this purpose, a constant current source is built using an adjustable three-terminal positive-voltage regulator IC (LM317). To increase the accuracy of the measured resistance data, a 4-probe measurement scheme is used so as to eliminate the parasitic wiring resistance, which may vary due to local heating.
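A minimal sketch of this volumetric calibration is given below; the collected volume and fill time are hypothetical readings, chosen only so that the resulting mean velocity falls near the upper end of the tested range.

```python
import math

PIPE_DIAMETER = 0.025   # m, diameter of the test pipe

def mean_velocity(filled_volume_l, fill_time_s, diameter_m=PIPE_DIAMETER):
    """Mean velocity from the volumetric calibration: Q = V / t, v = Q / A."""
    q = (filled_volume_l * 1e-3) / fill_time_s      # volumetric flow rate, m^3/s
    area = math.pi * diameter_m ** 2 / 4.0          # pipe cross-section, m^2
    return q / area

# Hypothetical reading: 5 litres collected in 11.2 s at one pump supply level
print(f"mean velocity ~ {mean_velocity(5.0, 11.2):.2f} m/s")
```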
The temperature dependence of the platinum resistance is used to assess the water velocity and flow rate. The setup indeed measures a change in the resistance value due to the temperature variation induced by the fluid flow. The temperature variation is obtained using the equation R(T) = R(T0)[1 + αΔT], where R(T) is the resistance value at temperature T, R(T0) the reference value of the resistance at the initial temperature T0, ΔT = T - T0 the temperature difference and α the TCR. The temperature drop of the resistor with respect to the zero-velocity situation is then calculated and related to the fluid velocity through a calibration process. Consequently, a measurement of the resistance value, and hence of the temperature, gives access to the fluid velocity.
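A minimal sketch of this inversion from a 4-probe resistance reading to a heater cooling value is shown below; the two resistance readings are hypothetical, and the final conversion to velocity would use the calibration curve mentioned above.

```python
TCR = 2.218e-3   # 1/degC, reported TCR of the sputtered platinum

def cooling_from_resistance(r_measured, r_zero_flow, tcr=TCR):
    """Heater temperature change relative to the zero-flow condition, inverted
    from R(T) = R(T0) * (1 + alpha * (T - T0))."""
    return (r_measured - r_zero_flow) / (tcr * r_zero_flow)

# Hypothetical 4-probe readings (ohms): heater at zero flow, then under flow
r_still, r_flow = 48.65, 48.42
print(f"cooling relative to zero flow: {cooling_from_resistance(r_flow, r_still):.2f} degC")
# Interpolating a calibration curve (cooling vs. velocity, e.g. from the
# volumetric runs) then converts this temperature drop into a velocity.
```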
Numerical analysis
The different device configurations presented in Figure 4 are considered for numerical simulation. A parametric study is performed with respect to several parameters such as shape, geometric dimensions, substrate material and power consumption. Computational Fluid Dynamics (CFD) simulations are carried out with the Finite Element Method (FEM) using COMSOL Multiphysics. Material properties are imported from the COMSOL material library and the Conjugate Heat Transfer module is used to account for heat transfer in both the solid and the fluid medium. Since the Conjugate Heat Transfer physics combines conductive heat transfer in solids and fluids with laminar fluid flow, boundary conditions are defined for these two physics. For heat transfer, the left side of the fluidic domain is set to a constant temperature, and continuity of heat flow is ensured at the other boundaries (Figure 8(a)). The platinum resistor is defined as the heat source with an energy dissipation rate of 10 mW. For the laminar flow physics, the velocity inlet (v = vfluid = constant) and outlet (ΔP = 0) boundary conditions are applied to the left and right sides of the water domain respectively, while the top and bottom edges are defined as symmetry axes. All the exterior edges of the sensor body are defined as walls with a no-slip condition. The water medium includes the interior parts of the cavity. A 450 nm thick silicon dioxide insulation layer is introduced between the substrate and the heater in all configurations. For the membrane sensor, the platinum resistor is mounted on a silicon membrane whose backside cavity depth is around 430 µm; the medium inside this cavity is defined as vacuum. A parametric study is conducted to optimize the aspect ratio of the cavity and the pillar structure: the cavity height and width range from 50 to 300 µm and from 100 to 1000 µm respectively, while the pillar height and width are equal to the cavity height and the platinum resistor width, respectively. Because of the almost parabolic velocity profile inside the considered pipes, the sensor is placed on the pipe axis in order to be exposed to the maximum velocity. The sensing element is perpendicular to the main flow direction, thus aligned with the cross-sectional diameter of the pipe. A very large fluidic domain is chosen compared to the sensor size to avoid edge effects.
Results and Discussion
This section reports the numerical and experimental results obtained. The numerical parametric study enables the determination of the optimal sensor design. The optimal device is then fabricated, experimentally characterized and compared to the previous designs.
Numerical results
The sensor's sensitivity to the resistor's width.
As mentioned in section 2.1, a parametric numerical study is performed to optimize the geometry of the sensing element, and in particular the resistor width of the pillar structure flow-rate sensor. We report in this section on the device sensitivity with respect to the heater resistor width Rw. The flow-rate sensor's response for a given velocity strongly depends on the heating resistor width, which indeed affects both Joule self-heating and the cooling effect of the fluid flow. We show that a decrease in Rw increases the sensor's sensitivity. Simulations are done at 10 mW power supply over a wide velocity range from 10⁻⁶ m/s to 10 m/s. The length and thickness of the resistor are fixed at 100 µm and 340 nm, respectively, while Rw is varied from 1 µm to 10 µm. For a better assessment of the sensors' response with respect to the velocity, the normalized temperature of the device is plotted as a function of velocity. The normalized temperature is defined as the ratio between ΔT and ΔTmax, where ΔT is the temperature difference between the heater resistor and room temperature at a given velocity and ΔTmax is the same temperature difference at zero velocity. The variation of this non-dimensional temperature provides valuable information about the sensor's response to velocity.
Figure 9. Pillar device normalized temperature and sensitivity as a function of the velocity. The primary Y-axis shows the thermal variation, hence the sensitivity of the pillar device with respect to the velocity for different resistor widths under 10 mW power supply within the velocity range from 10⁻⁶ m/s to 10 m/s. The secondary Y-axis expresses the sensitivity to the heater resistor width under the same power supply and velocity range.
The normalized temperature (NT) due to the water flow is plotted on the primary Y-axis and the NT sensitivity to Rw on the secondary Y-axis. The effect of the resistor width on the device sensitivity to velocity is evaluated using the 10 µm width as a reference: the normalized temperatures obtained for the 1 µm, 2 µm and 5 µm wide resistors are compared with the reference resistor response. This sensitivity is calculated as
S_Rw = (NT_Rw - NT_Rw,ref)|_v / ΔRw,
where S_Rw is the device sensitivity to the resistor width, (NT_Rw - NT_Rw,ref)|_v is the normalized temperature difference between a given resistor and the reference resistor at a given velocity v, with v ∈ [10⁻⁶, 10] m/s, and ΔRw = Rw,ref - Rw is the width difference between the given resistor and the reference resistor.
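Both quantities can be computed directly from exported temperature curves; the sketch below uses illustrative NT values (not the actual simulation output) simply to show how NT and S_Rw are evaluated velocity by velocity.

```python
import numpy as np

def normalized_temperature(delta_t, delta_t_max):
    """NT = dT / dT_max, with dT the heater-to-ambient difference at a given
    velocity and dT_max the same difference at zero velocity."""
    return np.asarray(delta_t, dtype=float) / delta_t_max

def width_sensitivity(nt_rw, nt_ref, rw_um, rw_ref_um=10.0):
    """S_Rw = (NT_Rw - NT_Rw,ref)|_v / (Rw,ref - Rw), evaluated per velocity."""
    return (np.asarray(nt_rw) - np.asarray(nt_ref)) / (rw_ref_um - rw_um)

# Illustrative NT curves (not the actual simulation output) at matching velocities
velocities = np.array([1e-6, 1e-3, 1e-1, 1.0, 10.0])   # m/s
nt_5um  = np.array([1.00, 0.93, 0.62, 0.35, 0.18])
nt_10um = np.array([1.00, 0.90, 0.48, 0.15, 0.04])

print(width_sensitivity(nt_5um, nt_10um, rw_um=5.0))     # per-velocity S_Rw
```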
The best response is observed for the 1 µm wide resistor. On the other hand, the sensor with the widest resistor (10 µm) rapidly reaches room temperature at high velocity, which means a small turn-down ratio compared to the other resistors. In addition to the sensitivity improvement due to the Rw reduction observed in Figure 9, we observe a decrease of sensitivity in all cases at high velocities. This is due to the small temperature variation remaining at high velocities, since large velocities induce large convective cooling.
In spite of the better performance expected with resistors of smaller width, the fabrication process of the 1 µm and 2 µm resistors was found to be significantly more complex. Consequently, the pillar device is fabricated with a 5 µm wide resistor. All experimental results presented and discussed in the following paragraphs were obtained with this device.
Comparison of the three devices: on bulk silicon, silicon membrane, and silicon pillar
A numerical study is now conducted to compare the response of the pillar device, with its 5 µm wide resistor, with the previously fabricated and tested devices, namely the bulk silicon and the silicon membrane devices. Simulations are again done at 10 mW power supply over the same velocity range from 10⁻⁶ m/s to 10 m/s. Results are shown in Figure 10, where normalized temperature values are plotted as a function of velocity for the three sensors. For the sake of comparison, NT variations are preferred to absolute temperature values. The bulk silicon and silicon membrane devices exhibit almost similar NT variations with respect to velocity. On the other hand, the pillar structure device shows a higher sensitivity within the considered velocity range than the two other devices. This is mainly due to a lower heat leakage to the substrate and a much larger Joule self-heating. The black circle in Figure 10 indicates a discrepancy in the sensor's response over a certain velocity range: a monotonically decreasing temperature is indeed expected when the flow velocity increases. This discrepancy is due to a reverse flow which appears in this velocity range. The reverse flow velocity field is shown in Figure 11.
The main water flow is heated by the heating resistance during a first pass. When the reverse flow carries this hot water back to the heater resistor, the latter experiences a temperature increase. As a result, a velocity increase induces a heater temperature increase rather than the opposite; at 0.3 m/s and 0.4 m/s the pillar flow-rate sensor shows this inconsistent response. A similar reverse flow is observed for the bulk silicon and membrane based sensors. However, in these cases, the reverse-flow fluid is not hot enough to increase the heater temperature because of the lower initial Joule self-heating and the larger thermal leakage through the substrate. The reverse flow issue in the heater resistor vicinity is fixed by modifying the cavity height and the cavity edge curvature; CFD simulations enable the optimization of these parameters to suppress the reverse flow. In the numerical study, the re-circulation effect is noticeable only for the pillar device among all the silicon based sensors. The experimental measurements show a monotonic decrease of the pillar heater temperature within the velocity range from 0 m/s to 0.91 m/s (Figure 14), and the recirculation effect is not observed in practice. The 2D flow assumption in the numerical model may explain this difference between simulation and experiment, where the flow is three-dimensional.
Experimental results
The main purpose of this section is to present the experimental responses of the different sensors to the fluid flow. This is tackled in the last subsection, while the first subsection deals with static (i.e. no fluid flow) experimental results in order to discuss the main governing phenomenon for a thermal flow-rate sensor: Joule self-heating. The resistor's Joule self-heating is an important parameter for a thermal flow-rate sensor. The maximum temperature of the heater resistor under the no-flow condition defines the turn-down ratio and the sensitivity of the sensor with respect to velocity. Moreover, the relation between this initial temperature and the power supply indicates the device power efficiency: a large temperature increase at low power consumption leads to a low power consuming thermal flow-rate sensor. Therefore, to determine the considered sensors' efficiency, both Joule self-heating and power consumption are studied under the zero-velocity condition. Results are presented in the following subsection.
Joule self-heating and Power consumption.
Information on Joule self-heating at the resistor level under the no-flow condition helps to anticipate the sensor's response in terms of velocity and the minimal operating current. The sensor is submerged in non-flowing water and the supply current is varied from 10 mA to 30 mA in steps of 5 mA. Results of the Joule self-heating tests are presented in Figure 12(a). The pillar sensor shows the largest temperature increase, 2.57 °C at 30 mA current supply. In contrast, the bulk silicon and silicon membrane devices show only a small temperature increase, and their maximum temperature values do not surpass 0.8 °C at the maximum current supply. Although the three sensors are fabricated on the same substrate material, the pillar configuration flow-rate sensor exhibits a significantly larger temperature increase than the others thanks to its optimized geometry. The power consumption of the three devices is also calculated over the same current supply range and is illustrated in Figure 12(b). The pillar device consumption is relatively high, up to 43.78 mW for a 30 mA current supply. On the other hand, the bulk silicon and silicon membrane sensors consume three times less power than the pillar configuration at the same current supply. This is due to the high initial resistance value of the pillar device, 43.6 Ω: at a given current, the dissipated power is proportional to the resistance, so the larger the resistance value, the larger the power consumption. The pillar device heater resistor's width is reduced by half compared to the bulk silicon and membrane resistors, which causes a larger resistance value at room temperature. Consequently, the pillar device would be expected to consume about twice as much power as the other devices, which is not what is observed in Figure 12(b). At the same current supply, a larger resistance value leads to a larger Joule self-heating, hence to a larger equilibrium temperature. Joule self-heating is then amplified by two phenomena: i) the temperature increase due to Joule heating induces an increase of the resistance value, which results in an additional second-order Joule heating and temperature increase; ii) the pillar structure reduces the resistor's contact with the substrate, which reduces heat leakage through the substrate and increases the device equilibrium temperature. This double amplification of Joule self-heating for the pillar device induces a larger increase of the resistance value, hence a larger power consumption. The amplification can be deduced from the variation of the sensors' resistance values reported in Table 1. For a current supply between 0 and 30 mA, we observe a relative variation of the pillar device resistance of 11.5 %, versus a variation of 1 % for the other two devices. The relative resistance variation, hence the Joule self-heating, is larger by one order of magnitude for the pillar device. The tabulated resistance variation values are related to the temperature variations obtained from the Joule self-heating test presented in Figure 12(a). As mentioned above, the velocity is deduced from the cooling magnitude of an anemometric flow-rate sensor. We therefore experimentally measure the sensors' response within a velocity range from 0 m/s to 0.91 m/s. The flow direction is parallel to the resistor's width; a schematic diagram of the flow direction with respect to the resistor is shown in Figure 13.
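The resistance-temperature feedback described under point i) can be illustrated with a short fixed-point iteration; the heater-to-water thermal resistance used below is an assumed value chosen only for illustration, so the printed numbers are indicative rather than a reproduction of the measured ones.

```python
TCR = 2.218e-3     # 1/degC, reported TCR
R0 = 43.6          # ohm, pillar heater at room temperature (Table 1)
R_TH = 60.0        # K/W, heater-to-water thermal resistance (assumed for illustration)

def electro_thermal_equilibrium(current_a, r0=R0, r_th=R_TH, tcr=TCR, n_iter=50):
    """Fixed-point iteration for constant-current self-heating:
    dT = r_th * I^2 * R(dT)  with  R(dT) = r0 * (1 + tcr * dT)."""
    dT = 0.0
    for _ in range(n_iter):
        r = r0 * (1.0 + tcr * dT)      # resistance rises with temperature...
        dT = r_th * current_a**2 * r   # ...which raises dissipation and dT again
    return dT, r

for i_ma in (10, 15, 20, 25, 30):
    dT, r = electro_thermal_equilibrium(i_ma * 1e-3)
    p_mw = r * (i_ma * 1e-3) ** 2 * 1e3
    print(f"{i_ma} mA: dT ~ {dT:.2f} degC, R ~ {r:.2f} ohm, P ~ {p_mw:.1f} mW")
```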
We observe in Figure 14 a larger sensitivity for the pillar sensor, as expected from the numerical results. The relative temperature variation with respect to velocity of the pillar flow-rate sensor exhibits a monotonic decrease over almost the entire considered velocity domain, whereas the bulk silicon and membrane based sensors show almost no temperature variation for velocities larger than 0.3 m/s. The device time constants, defined as the time needed to reach 63% of the thermal equilibrium temperature for a given velocity, are 1.7 s, 1.9 s and 2.1 s for the pillar, silicon membrane and bulk silicon flow-rate sensors, respectively. The experimental results obtained under fluid flow are consistent with the preliminary Joule self-heating tests and the simulation results.
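The 63% time-constant definition used above can be applied directly to a recorded transient; the sketch below does so on a synthetic first-order cooling curve (the ambient temperature, step amplitude and underlying time constant are assumed, for illustration only).

```python
import numpy as np

def time_constant(t, temperature):
    """Time to reach 63% of the total temperature change after a velocity step,
    following the definition used for the reported 1.7-2.1 s values."""
    t = np.asarray(t, dtype=float)
    temperature = np.asarray(temperature, dtype=float)
    total_change = temperature[-1] - temperature[0]
    target = temperature[0] + 0.63 * total_change
    reached = temperature <= target if total_change < 0 else temperature >= target
    return t[np.argmax(reached)]

# Synthetic first-order cooling transient with an assumed tau of 1.7 s
t = np.linspace(0.0, 10.0, 1001)
temp = 25.0 + 2.5 * np.exp(-t / 1.7)
print(f"estimated time constant ~ {time_constant(t, temp):.2f} s")
```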
Conclusions
We demonstrated that the performance of a thermal flow-rate sensor largely depends on the thermal conductance of the substrate, which has a great impact on the device sensitivity and power consumption. When the choice of substrate material is constrained, a low thermal conductance can still be achieved with a high thermal conductivity material through geometric optimization. In addition, the resistor dimensions are also a governing parameter of the sensor's sensitivity. We demonstrated the successful co-integration of a thermal flow-rate sensor with other MEMS sensors requiring silicon as a substrate material. Unfortunately, the relatively large thermal conductivity of silicon reduces the performance of the thermal flow-rate sensor, so the main challenge was to achieve a sensitive flow-rate sensor on a silicon substrate. This was successfully accomplished through microstructuration of the silicon substrate using a pillar-like structure, thereby reducing the overall thermal conductance of the substrate. In the present work, the flow-rate sensor is operated independently from the other sensors of the chip. Due to the large number of sensing elements on a very small footprint, cross-interactions between the different sensors are possible and might affect the overall chip performance. The investigation of these cross-interactions is the next step of the present project.
Figure 1. Multi-parameter sensor for monitoring water networks: (a) the MEMS chip co-integrating 4 physical sensors (in addition to 5 chemical sensors); (b) capsule board with MEMS and CMOS chip; (c) sensor head.
Figure 2. Details of (a) the fabricated silicon multi-parameter sensor chip, including (b) flow-rate sensor, (c) temperature sensor, (d) 2 conductivity sensors and (e) 2 pressure sensors.
Figure 3. PCB for conducting experiments with the sensor chip.
Figure 4. The 3 different configurations considered for the flow-rate sensor, aiming to study the impact of thermal conductance on the device sensitivity with respect to the velocity: (a) silicon, (b) silicon membrane and (c) silicon pillar structure.
Figure 5. Cross-section view of the multi-parameter sensor chip containing the membrane structure flow-rate sensor.
Figure 6. Schematic of the fabrication sequence: (a) front-side resistor patterning of the multi-parameter sensor chip, (b) patterning of the hard mask on the backside for the back-side DRIE process in order to create the pressure sensor's membrane, (c) hard mask removal for silicon-glass anodic bonding, (d) patterning of the front side in order to create the cavity for the pillar structure flow-rate sensor and etching of the front-side deposited layer, (e) the cavity after the front-side DRIE and (f) SEM photo of the fabricated pillar structure flow-rate sensor.
Figure 7. Schematic of the experimental setup.
Figure 8. Simulation geometry: (a) the whole geometry and (b) zoomed view of the pillar device.
Figure 10. Numerical results for the three sensors' response to the velocity at 10 mW.
Figure 11. (a) Reverse flow at the resistor vicinity illustrating the anomalous behaviour within the velocity range highlighted by black circles in Figure 10; (b) normal velocity profile for other fluid velocities. Only the fluid domain at the heater resistor vicinity is shown here (in blue) while the white domain corresponds to the sensor substrate. The reverse flow is due to the finite size of the sensor chip (not shown here) and the flow separation around it.
Figure 12. (a) Joule self-heating temperature of the heater resistors at different supply currents under the no-flow condition; (b) power consumption of the three corresponding sensors.
Figure 13. Water flow direction parallel to the resistor width. Here, Is denotes the supply current, S+ and S- are the voltage reading ports and GND stands for ground.
Figure 14. Pillar, silicon membrane and bulk silicon flow-rate sensor measured response as a function of velocity at 30 mA current supply.
Table 1. Resistance variation with respect to the current supply.
Sensor | Resistance at room temperature (Ω) | Resistance variation range (Ω)
Pillar | 43.6 | 43.6 ~ 48.65
Silicon membrane | 15.07 | 15.07 ~ 15.27
Bulk silicon | 15.78 | 15.78 ~ 15.97
3.2.2 The sensor's response to the velocity.
Acknowledgement
The authors would like to thank Hugo Regina for his contributions to the present project. This work received funding from the European Union's H2020 Programme for research, technological development and demonstration under grant agreement No 644852. Part of this work received the support of the National Research Agency (ANR) in the frame of the EquipEx project SENSE-CITY of the Programme d'Investissements d'Avenir (PIA), involving IFSTTAR and ESIEE Paris as founding members of the consortium. Fabrication of the chips was done in the cleanroom facilities of ESIEE Paris, whose technical staff is deeply acknowledged. | 38,068 | ["1015121", "181371", "177767", "773364", "14135", "18824"] | ["302035", "303005", "43154", "43154", "302035", "303005", "302035", "43154", "303005", "43154", "302035", "303005", "113970", "302035", "303005", "43154", "43154", "302035", "303005"] |
01681569 | en | ["sdu", "sde"] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01681569/file/Lavorel_2017_EcolInd_postprint.pdf Sandra Lavorel
email: [email protected]
Anita Bayer
Alberte Bondeau
Sven Lautenbach
Ana Ruiz-Frau
Nynke Schulp
Ralf Seppelt
Peter Verburg
Astrid Van Teeffelen
Clémence Vannier
Almut Arneth
Wolfgang Cramer
Nuria Marba
Pathways to bridge the biophysical realism gap in ecosystem services mapping approaches
Keywords: Ecosystem Service Provider, spatial modelling, biophysical assessment, species distribution, functional traits, large-scale ecosystem model
The mapping of ecosystem service supply has become quite common in ecosystem service assessment practice for terrestrial ecosystems, but land cover remains the most common indicator for ecosystems' ability to deliver ecosystem services. For marine ecosystems, practice is even less advanced, with a clear deficit in spatially-explicit assessments of ecosystem service supply. This situation, which generates considerable uncertainty in the assessment of ecosystems' ability to support current and future human well-being, contrasts with increasing understanding of the role of terrestrial and marine biodiversity for ecosystem functioning and thereby for ecosystem services. This paper provides a synthesis of available approaches, models and tools, and data sources, that are able to better link ecosystem service mapping to current understanding of the role of ecosystem service providing organisms and land/seascape structure in ecosystem functioning. Based on a review of literature, models and associated geo-referenced metrics are classified according to the way in which land or marine use, ecological processes and especially biodiversity effects are represented. We distinguish five types of models: proxy-based, phenomenological, niche-based, trait-based and full-process. Examples from each model type are presented and data requirements considered. Our synthesis demonstrates that the current understanding of the role of biota in ecosystem services can effectively be incorporated into mapping approaches and opens avenues for further model development using hybrid approaches tailored to available resources. We end by discussing ways to resolve sources of uncertainty associated with model representation of biotic processes and with data availability.
Introduction
Ecosystem services (ES) originate from spatially structured ecosystems and land/seascapes, and their dynamics over time. Quantifying ES provisioning therefore must account for spatiotemporal patterns and processes. Although this is evident, so far this challenge has been insufficiently resolved [START_REF] Bennett | Linking biodiversity, ecosystem services and to human well-being: Three challenges for designing research for sustainability[END_REF]. Spatially-explicit quantification of ES using georeferenced metrics and GIS-based approaches has recently gained prominence through the needs from policy and decision-makers for global to local ES assessments [START_REF] Maes | Mapping ecosystem services for policy support and decision making in the European Union[END_REF][START_REF] Martinez-Harms | Making decisions for managing ecosystem services[END_REF]. Similar needs follow from emerging practices of land and marine planning [START_REF] Outeiro | Using ecosystem services mapping for marine spatial planning in southern Chile under scenario assessment[END_REF][START_REF] Von Haaren | Integrating ecosystem services and environmental planning: limitations and synergies[END_REF] or land management decision (e.g. in agriculture or forestry; [START_REF] Doré | Facing up to the paradigm of ecological intensification in agronomy: Revisiting methods, concepts and knowledge[END_REF][START_REF] Grêt-Regamey | Facing uncertainty in ecosystem services-based resource management[END_REF][START_REF] Soussana | A European science plan to sustainably increase food security under climate change[END_REF] that incorporate ecosystem services among use allocation and management criteria.
However, the reliability of ES mapping varies as a function of the methods employed. For instance, in a review of 107 studies, [START_REF] Lautenbach | Blind spots in ecosystem services research and implementation[END_REF] found that, while half of ecosystem service studies were based on relatively simple look-up table approaches attributing fixed values for given land cover types, nearly a third of the studies of ecosystem services conducted between 1966 and 2013 mapped ecosystem services. This mapping is in most cases based on land use composition, ignoring land use configuration and land use intensity [START_REF] Lautenbach | Blind spots in ecosystem services research and implementation[END_REF][START_REF] Mitchell | Reframing landscape fragmentation's effects on ecosystem services[END_REF][START_REF] Verhagen | Effects of landscape configuration on mapping ecosystem service capacity: a review of evidence and a case study in Scotland[END_REF]. More specifically, regulating services have been the most commonly mapped, followed by provisioning services [START_REF] Egoh | Indicators for mapping ecosystem services: a review[END_REF][START_REF] Lautenbach | Blind spots in ecosystem services research and implementation[END_REF][START_REF] Martinez-Harms | Methods for mapping ecosystem service supply: a review[END_REF][START_REF] Seppelt | A quantitative review of ecosystem service studies: approaches, shortcomings and the road ahead[END_REF]. For marine and coastal systems, among a total of 27 available studies from an exhaustive search of the Web of Science on ES mapping and ES modelling studies, about half (52%) focused on Regulation & Maintenance services; of those, 22% concentrated on coastal protection (wind & flood protection) and 33% on carbon sequestration and storage. A further third of the studies (33%) focused on provisioning services, particularly on food production (i.e. fish). The rest of the studies (19%) mapped cultural services.
Linked with this increased practice of mapping ES provisioning, several recent reviews have summarised available methods used to map ES. In the following we refer to 'models' as quantitative representations of ES variables depending on abiotic, biotic and social parameters. Overall, statistical models quantifying ES supply based on relationships with biophysical and social variables are prevalent, while process models based on causal relations are still rare [START_REF] Crossman | A blueprint for mapping and modelling ecosystem services[END_REF][START_REF] Egoh | Indicators for mapping ecosystem services: a review[END_REF][START_REF] Lautenbach | Blind spots in ecosystem services research and implementation[END_REF][START_REF] Martinez-Harms | Methods for mapping ecosystem service supply: a review[END_REF]. For terrestrial ecosystems, static maps of land use and land cover are the most commonly used indicator for ES in Europe [START_REF] Egoh | Indicators for mapping ecosystem services: a review[END_REF] and the second most common globally [START_REF] Martinez-Harms | Methods for mapping ecosystem service supply: a review[END_REF]. This widespread application of methods with weak links to ecosystem processes leads to severe uncertainty in the mapped ES supply from national [START_REF] Eigenbrod | The impact of proxy-based methods on mapping the distribution of ecosystem services[END_REF] to landscape (Lavorel et al., 2011) scales. More advanced approaches incorporate estimates of above- and sometimes below-ground biomass, along with vegetation type and soil parameters, for the estimation of ecosystem functions from which services are derived. Species observation data, although potentially useful for the estimation of cultural services, are used only rarely.
Contrasting with these statistical models, process models, with explicit description of causal relationships between driver variables and ecosystem functions and properties underpinning ES provision, have primarily been used to map climate regulation and erosion control as well as the provisioning of food, fuel and fibre. Mapping and modelling assessments for marine and coastal ES are still in their infancy (Liquete et al 2013). Considering the 27 available studies from the Web of Science, 31% of the models were geostatistical while less than 20% were process-based.
Beyond the limitations of specific mapping methods, the large variety of primary indicators currently used to express one single ES is a large source of uncertainty for ES maps that limits their usefulness to managers and decision makers [START_REF] Egoh | Indicators for mapping ecosystem services: a review[END_REF]. Level of process understanding, modelling methodology and data sources are three of the critical, yet poorly understood or documented sources of uncertainty for ES maps (Grêt-Regamey et al., 2014a;[START_REF] Hou | Uncertainties in landscape analysis and ecosystem service assessment[END_REF][START_REF] Kandziora | Mapping provisioning ecosystem services at the local scale using data of varying spatial and temporal resolution[END_REF]. A systematic comparison of four sets of ecosystem service maps at the continental scale for five ecosystem services (climate regulation, flood regulation, erosion protection, pollination and recreation) showed considerable disagreement among spatial patterns across Europe (Schulp et al., 2014a), attributed to differences in the mapping aim, indicator definitions, input data and mapping approaches. The four original studies encompassed a land-cover look-up approach [START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF], environmental indicator modelling [START_REF] Kienast | Assessing Landscape Functions with Broad-Scale Environmental Data: Insights Gained from a Prototype Development for Europe[END_REF] and two hybrid approaches combining environmental indicators, landscape effects and process modelling [START_REF] Maes | Mapping ecosystem services for policy support and decision making in the European Union[END_REF]Schulp et al., 2008;Stürck et al., 2014;[START_REF] Tucker | Policy Options for an EU No Net Loss Initiative[END_REF]van Berkel and Verburg, 2011). In addition to highlighting the need for mapmakers to clearly justify the indicators used, the methods and related uncertainties, this study provided additional support for the urgent need for better process understanding and data acquisition for ES mapping, modelling and validation. Progress on these issues will be essential to support the uptake of ES spatial assessments for national ecosystem service assessments, national accounts, land planning and broader policy and natural resource management decisions [START_REF] Crossman | A blueprint for mapping and modelling ecosystem services[END_REF][START_REF] Egoh | Indicators for mapping ecosystem services: a review[END_REF].
Given this context, the common use of relatively simple statistical approaches contrasts with increasing evidence on the role of biodiversity for ecosystem functioning and ecosystem services [START_REF] Cardinale | Biodiversity loss and its impact on humanity[END_REF], which has been referred to as a lack of biophysical realism [START_REF] Seppelt | A quantitative review of ecosystem service studies: approaches, shortcomings and the road ahead[END_REF]. Ecosystem service supply is related to the presence, abundance, diversity and functional characteristics of service-providing organisms (also referred to as Ecosystem Service Providers, ESP henceforth; [START_REF] Luck | Quantifying the contribution of organisms to the provision of ecosystem services[END_REF]). Positive relationships have been found between species richness and ecosystem services such as fodder and wood production, regulation of water quality through soil nutrient retention, carbon sequestration or regulation of pest and weed species [START_REF] Cardinale | Biodiversity loss and its impact on humanity[END_REF]. Similarly, the presence of key coastal and marine species has been found to enhance carbon sequestration, coastal protection, food provision or water quality through nutrient retention and particle trapping [START_REF] Fourqurean | Seagrass ecosystems as a globally significant carbon stock[END_REF][START_REF] Mcleod | A blueprint for blue carbon: toward an improved understanding of the role of vegetated coastal habitats in sequestering CO2[END_REF][START_REF] Ondiviela | The role of seagrasses in coastal protection in a changing climate[END_REF][START_REF] Van Zanten | Coastal protection by coral reefs: A framework for spatial assessment and economic valuation[END_REF]. However, such observed relationships between species richness and ES often reflect species functional traits and their diversity within communities, rather than species richness per se [START_REF] Diaz | Incorporating plant functional diversity effects in ecosystem service assessments[END_REF]. Lastly, land/sea-scape diversity and connectivity can strongly influence the ES provided by mobile organisms and vegetation [START_REF] Fahrig | Remote Sensing of Ecosystem Services: An Opportunity for Spatially Explicit Assessment[END_REF][START_REF] Mellbrand | Linking Land and Sea: Different Pathways for Marine Subsidies[END_REF][START_REF] Mitchell | Reframing landscape fragmentation's effects on ecosystem services[END_REF]. To provide more reliable estimates of service supply capacity by ecosystems, such fundamental ecological understanding needs to be better incorporated into ES models [START_REF] Bennett | Linking biodiversity, ecosystem services and to human well-being: Three challenges for designing research for sustainability[END_REF]. This paper aims to address the biophysical realism gap in ecosystem service mapping by synthesising available approaches, models and tools, and data sources (with special reference to Europe) for mapping ecosystem services, and focusing on how the role of ecosystem service providing biota can be better incorporated. Based on a review of published mapping studies, modelling methods and associated geo-referenced metrics are classified into five categories according to the way in which the contribution of ecosystem service providing organisms is represented. This review highlights the diversity of individual methods, which increasingly combine different model categories.
We end by discussing associated uncertainties and pathways towards resolving them. Our review focuses on assessment of ecosystem capacity to deliver services and does not address the social and economic aspects of ecosystem service demand.
Methods
We reviewed ES biophysical modelling approaches in the literature that incorporate descriptions of ecosystem service providers (ESP) and their contribution to ES supply. Given our objective of analysing the merits and limitations of available models rather than producing a quantitative analysis of the state of the art, for terrestrial systems we did not attempt to reiterate the several systematic reviews that have already been published [START_REF] Lautenbach | Blind spots in ecosystem services research and implementation[END_REF][START_REF] Martinez-Harms | Methods for mapping ecosystem service supply: a review[END_REF][START_REF] Seppelt | A quantitative review of ecosystem service studies: approaches, shortcomings and the road ahead[END_REF], while for marine systems a systematic analysis of publications was conducted. As we aimed to assess how biophysical realism was incorporated in different approaches, we chose to classify models and associated geo-referenced metrics according to the way in which the relationships between ESP and biophysical processes are represented, using the following terminology: spatial proxy models, phenomenological models, niche-based models, trait-based models and full process-based models (Figure 1). Briefly, spatial proxy models refer to simple models based on expert knowledge or statistical associations relating abiotic and biotic indicators to ES provision. Phenomenological models add to these by explicitly incorporating effects of land/seascape configuration through spatial processes. Niche-based models deduce ES provision from the geographic distribution of ESP, while trait-based models depict statistical relationships between ecosystem processes and indicators of ESP community functional composition. Lastly, we refer to full process-based models as those incorporating explicit representations of geochemical, physical and biotic processes underpinning ecosystem functioning.
For each category of models we describe and exemplify, based on published studies (supported by standard model descriptions presented in Appendix 1), the principles and mechanics of application of these models, with specific reference to how ESP are represented. Main biodiversity components for ESP characterisation and the strengths and weaknesses of different model types for practice are summarised in Table 1. Table 2 summarises the main data sources for each model type, with specific reference to Europe for terrestrial (Table 2a) and marine (Table 2b) systems respectively. Lastly, key data sources and strengths and weaknesses for practice are discussed.
Results
Proxy models
We define proxy models as models that relate ES indicators to land or marine cover, abiotic and possibly biotic variables by way of calibrated empirical relationships or expert knowledge. It is desirable, and in practice most common for such models to use selected proxy variables that are based on well-known causal relationships between environmental variables [START_REF] Kienast | Assessing Landscape Functions with Broad-Scale Environmental Data: Insights Gained from a Prototype Development for Europe[END_REF][START_REF] Martinez-Harms | Methods for mapping ecosystem service supply: a review[END_REF]. In proxy models habitat type (or biotope) or (more rarely) species composition are considered as the ESP. Most commonly land cover types, ranging from coarse vegetation types (e.g. evergreen vs. deciduous forest) to detailed habitat types such as those of the European Union's Habitats Directive, are associated with levels of ES supply, with the possible incorporation of additional environmental modifiers (e.g. altitude, soil type, climate…). Likewise, for marine ecosystems different habitat types depending on bathymetry or substrate may be used to model ES associated with the presence or activity of particular species.
One simple and often-used method consists of combining look-up tables allocating ES values per land cover with modifying categorical variables describing abiotic factors and ecological integrity [START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF]. For example, in the Austrian Stubai valley, [START_REF] Schirpke | Multiple ecosystem services of a changing Alpine landscape: past, present and future[END_REF] combined maps of vegetation types with measures of ES to map past, current or future fodder quantity and quality, carbon sequestration, soil stability, natural hazard regulation and aesthetic value. In traditional forest landscapes of Lapland, [START_REF] Vihervaara | Ecosystem services-A tool for sustainable management of human-environment systems. Case study Finnish Forest Lapland[END_REF] illustrated how multiple biophysical and social data sources can be combined to quantify regulation service supply by different biotopes. In marine ecosystems, bathymetry, habitat distribution, sediment type, wave and current regimes, tidal range and water temperature are the most frequently used proxies. Liquete et al. (2013) developed a proxy-based model to assess coastal protection at European level based on 14 biophysical and socioeconomic variables describing coastal protection capacity, coastal exposure and demand for protection. Statistical models developed from observations or analysis of regional data sets may also be applied. Multiple regression models, Generalized Additive Models [START_REF] Yee | Generalized additive models in plant ecology[END_REF] or more sophisticated methods for capturing uncertainty in relationships, such as Bayesian modelling [START_REF] Grêt-Regamey | Facing uncertainty in ecosystem services-based resource management[END_REF], may be used here. In general, the application of models developed at larger scale to smaller extents and greater resolution generates uncertainty as they do not capture context-dependent relationships [START_REF] Purtauf | Landscape context of organic and conventional farms: Influences on carabid beetle diversity[END_REF] and the effects of finer-grained relevant variables such as soils. Site-specific models may be developed based on field data collection (as encouraged by Martinez-Harms and Balvanera, 2012 -see Lavorel et al., 2011) and on remote sensing data (Table 2, Figure 2). On the other hand, the validity of up-scaling from site-specific models to larger spatial scales depends on whether sites represent the average conditions at the larger scale. It has further been shown by Grêt-Regamey et al. (2014b) that spatially explicit information about non-clustered, isolated ES tends to be lost at coarse resolution, mainly in less rugged terrain, which calls for finer resolution assessments in such contexts.
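As a minimal illustration of the look-up-table variant of a proxy model (the land-cover classes, scores and altitudinal modifier below are assumed values, not taken from any of the cited studies), a single map cell can be scored as follows:

```python
# Minimal look-up-table proxy model: ES supply scored per land-cover class and
# adjusted by an abiotic modifier. All numbers are illustrative, not calibrated.
ES_SCORES = {                     # e.g. carbon sequestration capacity, 0-100
    "deciduous_forest": 80,
    "evergreen_forest": 75,
    "grassland": 40,
    "arable": 25,
    "urban": 5,
}

def carbon_score(land_cover, altitude_m):
    """Proxy score for one raster cell: land-cover look-up, reduced above an
    (assumed) altitude threshold where productivity declines."""
    base = ES_SCORES.get(land_cover, 0)
    modifier = 1.0 if altitude_m < 1500 else max(0.3, 1.0 - (altitude_m - 1500) / 2000)
    return base * modifier

print(carbon_score("evergreen_forest", altitude_m=1800))   # one hypothetical cell
```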
In the marine case, proxy models have generally been used to map the distribution of coastal vegetation such as mangrove coverage, which then has been used to estimate carbon sequestration and storage. In contrast, proxy models have had a limited application for the assessment of "underwater" marine ES, and we contend that this mainly reflects the limitations of remote sensing for correctly determining underwater habitat coverage.
Strengths and weaknesses for practice
Sophisticated proxy models have been recommended for national assessment of ecosystem services [START_REF] Maes | Mapping and Assessment of Ecosystems and their Services: Indicators for ecosystem assessments under Action 5 of the EU Biodiversity Strategy to 2020[END_REF]. They help move from a pure 'benefit transfer' approach based on land cover [START_REF] Eigenbrod | The impact of proxy-based methods on mapping the distribution of ecosystem services[END_REF] (MAES Tier 1), to more precise assessments (MAES Tier 2) using classic GIS methods accessible to all [START_REF] Kienast | Assessing Landscape Functions with Broad-Scale Environmental Data: Insights Gained from a Prototype Development for Europe[END_REF]. Also, they can be easily combined with socio-economic variables in order to provide at least first-level assessments of benefits [START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF][START_REF] Grêt-Regamey | Linking GIS-based models to value ecosystem services in an Alpine region[END_REF][START_REF] Vihervaara | Ecosystem services-A tool for sustainable management of human-environment systems. Case study Finnish Forest Lapland[END_REF] and allow consistent mapping of different ecosystem services, which is essential for avoiding data artefacts when studying trade-offs. Model applications are however constrained by the availability of different data layers depending on scales and regions. For instance, while effects of soil parameters on regulation services (e.g. carbon sequestration, erosion control) are well understood by scientists and practitioners, soil maps are often not available at suitably fine resolution.
Phenomenological models
Phenomenological models are based on qualitative or semi-quantitative relationships between ESP and ES supply, based on an understanding of the biological mechanisms underpinning ES supply. In contrast to simple proxy models, at least some of the parameters and relationships are transferred from in-depth process-based studies or meta-analyses of observations. They assume, but do not represent explicitly, a mechanistic relationship between elements of the landscape, considered as ESP units, and the provisioning of ES. This often implies considering landscape configuration explicitly, contrary to simple land cover proxy models. The relationship might be represented as a functional relationship between landscape attributes and services, or might also incorporate spatial configuration. For example, the ES supply of a forest patch might depend on land cover, patch size and additional attributes such as soil quality or topography. However, quantitative biodiversity indicators are not commonly used in this type of model, which is often dominated by the influence of land cover/use, although biodiversity indicators might be used, e.g. by incorporating a statistical relationship between plant or bird species richness and the recreational value of a location. Typically, these approaches are used at the regional to the global scale, since the assumed relationships most often ignore smaller-scale details and focus on patterns emerging at coarser scales.
Phenomenological approaches have been applied for ecosystem services provided by mobile organisms and ecosystem services relating to lateral flows, for which consideration of spatial configuration is essential [START_REF] Mitchell | Reframing landscape fragmentation's effects on ecosystem services[END_REF][START_REF] Verhagen | Effects of landscape configuration on mapping ecosystem service capacity: a review of evidence and a case study in Scotland[END_REF]. As a simple example for mobile ESP, in the Swiss valley of Davos, the cultural service of habitat for the protected bird species Capercaillie was modelled by combining habitat suitability criteria relating to quality and spatial pattern with GIS-modelled vegetation distribution [START_REF] Grêt-Regamey | Linking GIS-based models to value ecosystem services in an Alpine region[END_REF]. Phenomenological approaches that incorporate landscape configuration are commonly used to model pollination through the interplay between habitats suitable for wild pollinators and demand from insect-pollinated crops (Grêt-Regamey et al., 2014a;[START_REF] Lautenbach | Analysis of historic changes in regional ecosystem service provisioning using land use data[END_REF][START_REF] Maes | A European assessment of the provision of ecosystem services[END_REF]Schulp et al., 2014b). Based on a meta-analysis by [START_REF] Ricketts | Landscape effects on crop pollination services: are there general patterns[END_REF], these models represent realized pollination as a decay function based on the distance between pollinator habitat and fields with pollination-dependent crops. The InVEST pollination model (Lonsdorf et al., 2009) includes the location of crops to be pollinated and the habitat quality for different pollinator species or guilds, as well as the availability of floral resources. More sophisticated versions also limit the number of cells that can be pollinated per pollinator source [START_REF] Lautenbach | Analysis of historic changes in regional ecosystem service provisioning using land use data[END_REF], and Grêt-Regamey et al. (2014a) used knock-off thresholds based on connectivity to modulate habitat quality.
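A minimal sketch of such a distance-decay pollination model is given below (Python with numpy and scipy assumed). The exponential decay form, the foraging distance, the cell size and the rasters are illustrative stand-ins, not the parameterisation of any of the cited models.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical binary rasters: pollinator habitat and insect-pollinated crops.
habitat = np.array([[1, 1, 0, 0, 0],
                    [1, 0, 0, 0, 0],
                    [0, 0, 0, 0, 1]])
crops   = np.array([[0, 0, 1, 1, 0],
                    [0, 0, 1, 1, 0],
                    [0, 0, 1, 0, 0]])

cell_size = 100.0      # m, assumed raster resolution
alpha = 500.0          # m, assumed typical pollinator foraging distance

# Distance from every cell to the nearest habitat cell (the Euclidean distance
# transform measures distance to the nearest zero, hence the inverted habitat mask).
dist_to_habitat = distance_transform_edt(habitat == 0) * cell_size

# Realised pollination potential on crop cells, decaying with distance to habitat.
pollination = np.where(crops == 1, np.exp(-dist_to_habitat / alpha), 0.0)
print(np.round(pollination, 2))
```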
The assessment of water quality regulation and recreation by [START_REF] Lautenbach | Analysis of historic changes in regional ecosystem service provisioning using land use data[END_REF], and the universal soil loss equation (USLE) and related approaches [START_REF] Wischmeier | Predicting rainfall erosion losses -a guide to conservation planning[END_REF] for the quantification of erosion control [START_REF] Schirpke | Multiple ecosystem services of a changing Alpine landscape: past, present and future[END_REF] represent examples of commonly used phenomenological approaches for ES depending on lateral flows.
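The USLE referenced above estimates mean annual soil loss as the product of rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and support practice (P) factors. The short sketch below (Python assumed) illustrates how a change in vegetation cover translates into the erosion-control service; all factor values are purely illustrative.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: mean annual soil loss A = R * K * LS * C * P
    (Wischmeier and Smith), here in t ha-1 yr-1 given consistent factor units."""
    return R * K * LS * C * P

# Illustrative comparison of a well-covered versus a sparsely covered slope
# (hypothetical factor values; only the C factor differs between the two cases).
print(usle_soil_loss(R=800, K=0.30, LS=1.2, C=0.05, P=1.0))   # dense grassland cover
print(usle_soil_loss(R=800, K=0.30, LS=1.2, C=0.45, P=1.0))   # sparsely covered arable land
```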
In the marine environment phenomenological models have rarely been used; however, [START_REF] Townsend | Overcoming the challenges of data scarcity in mapping marine ecosystem service potential[END_REF] developed a method whereby services were defined from a series of principles based on ecosystem functioning and linked to marine biophysical parameters to develop ES maps for an area in New Zealand (Figure 3). Other studies have used phenomenological models to link the coverage of key habitats and their level of connectivity to fisheries production [START_REF] Yee | Comparison of methods for quantifying reef ecosystem services: A case study mapping services for St[END_REF].
Strengths and weaknesses for practice
Phenomenological approaches depend on the validity of the qualitative or semi-quantitative relationships underlying the model. Typically, the required parameters are transferred from other study sites or obtained through meta-analysis. Results should therefore be interpreted as indicators of the direction or spatial variation of an effect, or of the relative importance of an effect (e.g. by comparing different land use scenarios or historic land use data), rather than as absolute values. The strength of the approach is that it incorporates land use configuration effects while requiring only limited data. It can therefore be used to obtain first estimates at scales where data availability is limited, such as the regional scale, or for the assessment of past conditions for which the data required by more sophisticated approaches will never become available.
Niche-based models
We define niche-based models of ES as models that assess ES supply based on the presence (or abundance) of ESP (often species) according to their geographic distribution (Figure 4). ES can be modelled by aggregating distribution maps for different ESP if there is more than one contributing species, thus considering for instance the number of ESP species as a proxy for ES supply. A frequent limitation of such an approach is the lack of continuous distribution maps of ESP occurrence. To overcome this, Species Distribution Modelling (SDM) [START_REF] Elith | Species Distribution Models: Ecological Explanation and Prediction Across Space and Time[END_REF][START_REF] Guisan | Predicting species distribution: offering more than simple habitat models[END_REF] can be used to produce statistical relationships that predict the probability of occurrence of a given species (or group of species) over geographic areas depending on parameters such as climate, soil or land use, and to generate continuous distribution maps of these taxa. There are also more sophisticated, mechanistic models, which (akin to full process models - see below) model species distributions based on physiological mechanisms (e.g. temperature tolerance thresholds, temperature responses), phenology (the timing of specific life cycle events such as bud burst or flowering in plants) or animal behaviour. The contribution of e.g. different species or functional groups to the ES of interest is assessed based on specific traits (e.g. trophic guilds) or expert knowledge.
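The aggregation step can be as simple as stacking SDM outputs, as in the sketch below (Python with numpy assumed); the probability maps and the presence threshold are hypothetical and only illustrate the species-richness-as-proxy logic described above.

```python
import numpy as np

# Hypothetical SDM outputs: probability-of-occurrence maps for three ESP species
# on the same grid (values are illustrative).
sdm_outputs = np.stack([
    np.array([[0.8, 0.6], [0.2, 0.1]]),   # species A
    np.array([[0.7, 0.3], [0.4, 0.2]]),   # species B
    np.array([[0.1, 0.5], [0.9, 0.6]]),   # species C
])

threshold = 0.5   # assumed presence threshold

# ESP species richness per cell as a simple, unweighted proxy for ES supply.
esp_richness = (sdm_outputs >= threshold).sum(axis=0)

# Alternatively, sum the probabilities to retain information on habitat suitability.
expected_richness = sdm_outputs.sum(axis=0)

print(esp_richness)
print(expected_richness)
```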
Niche-based models may in particular apply to cultural services provided by well-identified species (e.g. protected species, species of particular aesthetic value) or to provisioning services provided by particular species, such as wild foods (Schulp et al., 2014c). In Mediterranean regions, provisioning services such as timber, fuelwood or cork production can be related to the presence of particular species (e.g. Fagus sylvatica or Quercus ilex, or Quercus suber respectively) and to forest species richness [START_REF] Vilà | Species richness and wood production: a positive association in Mediterranean forests[END_REF][START_REF] Von Essen | Cork before cattle: Quantifying Ecosystem Services in the Portuguese Montado and Questioning Ecosystem Service Mapping[END_REF], while spiritual and aesthetic values are supported by Quercus suber and Pinus halepensis, and the regulation of fire hazards is promoted by Quercus suber but negatively affected by Pinus halepensis [START_REF] Lloret | Mediterranean vegetation response to different fire regimes in Garraf Natural Park (Catalonia, Spain): field observations and modelling predictions[END_REF]. Niche-based modelling was applied to model biocontrol of vertebrate and invertebrate pests by terrestrial vertebrates (birds, mammals, reptiles) in Europe, considering predator species richness as a proxy for biocontrol potential (Civantos et al., 2012). As SDMs enable the projection of ESP distributions under changing environmental conditions, this approach showed that under future climate change scenarios pest control would be substantially reduced, especially in southern European countries, whereas in much of central and northern Europe climate change would be likely to benefit pest-control providers. In coastal and marine environments niche-based models have primarily been used to model the distribution of mangroves, and thus their carbon capture and storage capacity [START_REF] Hutchison | Predicting Global Patterns in Mangrove Forest Biomass[END_REF][START_REF] Sunny | A global predictive model of carbon in mangrove soils[END_REF], and fisheries production, based on species distribution models (Jordan et al., 2012).
In principle, any method of aggregation is possible, although so far the species richness of ESP (i.e. the summed number of contributing species) has been considered as the proxy for ES, without applying any weighting to different species. Though in their infancy, approaches considering relationships between taxonomic, phylogenetic and functional diversity and their links to ES [START_REF] Flynn | Functional and phylogenetic diversity as predictors of biodiversity-ecosystem-function relationships[END_REF] are good candidates to expand existing ones. These approaches build on the premise that, because functional diversity or functional composition tends to be better related to ES supply than species richness or diversity [START_REF] Cardinale | Biodiversity loss and its impact on humanity[END_REF], niche-based models of species distributions could be translated into functional diversity in order to generate projections of ES. The incorporation of phylogenetic diversity, which can easily be computed based on taxonomic data given the availability of phylogenetic data (e.g. [START_REF] Thuiller | Consequences of climate change on the tree of life in Europe[END_REF]), adds a further means to approach functional diversity and thereby to quantify ES [START_REF] Cadotte | Using Phylogenetic, Functional and Trait Diversity to Understand Patterns of Plant Community Productivity[END_REF].
Strengths and weaknesses for practice
Overall, niche-based modelling of species distributions is a well-developed approach with freely accessible tools, suitable for future scenario projections (e.g. BIOMOD: [START_REF] Thuiller | BIOMOD -a platform for ensemble forecasting of species distributions[END_REF]; Maxent: Elith et al., 2011). Species distribution data are often the critical bottleneck for niche-based approaches. Limitations of SDM and strengths and weaknesses of different distribution modelling methods have been discussed extensively (e.g. [START_REF] Bellard | Impacts of climate change on the future of biodiversity[END_REF][START_REF] Elith | Species Distribution Models: Ecological Explanation and Prediction Across Space and Time[END_REF]), and improvements have been proposed [START_REF] Zurell | Benchmarking novel approaches for modelling species range dynamics[END_REF]. Apart from the intrinsic limitations of the approach, such as ignoring population dynamics, species interactions, or potential adaptive responses, the main avenue for improvement towards the application to ES modelling regards the understanding and quantitative specification of relationships between ESP and ES supply. This gap requires both greater ecological understanding [START_REF] Cardinale | Biodiversity loss and its impact on humanity[END_REF][START_REF] Nagendra | Impacts of land change on biodiversity: making the link to ecosystem services[END_REF] and research into the demand for ES in terms of the identities and relative weights of contributing species.
Trait-based models
There is increasing evidence for the relevance of traits of organisms as ES providers [START_REF] De Bello | Functional traits underlie the delivery of ecosystem services across different trophic levels[END_REF][START_REF] Lavorel | Plant functional effects on ecosystem services[END_REF][START_REF] Luck | Quantifying the contribution of organisms to the provision of ecosystem services[END_REF]. Trait-based models quantify ES supply based on statistical, quantitative relationships between an ecosystem property underpinning ES supply and trait-based metrics, as well as, where significant, additional effects of abiotic parameters such as climate or soil variables ([START_REF] Gardarin | Plant trait-digestibility relationships across management and climate gradients in French permanent grasslands[END_REF]; Lavorel et al., 2011). Lavorel et al. (2011) demonstrated that trait-based models reduce uncertainty in ES prediction over space as compared to models based on land use alone, or even land use and soil variables [START_REF] Eigenbrod | The impact of proxy-based methods on mapping the distribution of ecosystem services[END_REF][START_REF] Martinez-Harms | Methods for mapping ecosystem service supply: a review[END_REF]. Such models are constructed based on empirical measures of ecosystem functioning, which are then related to explanatory variables including land cover/use, trait-based metrics quantifying the functional diversity of ESP [START_REF] Mouchet | Functional diversity measures: an overview of their redundancy and their ability to discriminate community assembly rules[END_REF], soil variables and, for regional to continental scales or topographically complex landscapes, climate/microclimate variables. Models may combine metrics for several individual traits, e.g. plant height and leaf nitrogen concentration to model grassland productivity (Lavorel et al., 2011), or use multi-trait metrics such as a compound index of different traits, e.g. the leaf economics spectrum [START_REF] Laliberté | Cascading effects of long-term land-use changes on plant traits and ecosystem functioning[END_REF][START_REF] Lienin | Plant trait responses to the environment and effects on multiple ecosystem properties[END_REF][START_REF] Mokany | Functional identity is more important than diversity in influencing ecosystem processes in a temperate native grassland[END_REF], or multivariate trait diversity (e.g. [START_REF] Conti | Plant functional diversity and carbon storage -an empirical test in semiarid forest ecosystems[END_REF][START_REF] Mokany | Functional identity is more important than diversity in influencing ecosystem processes in a temperate native grassland[END_REF]). A review of known relationships between indicators of ecosystem biogeochemical functioning for plants, relevant to the modelling of ES such as fodder or timber production, climate regulation through carbon sequestration or the maintenance of water quality, suggested that, for the studies available so far, community mean values of single traits tended to capture most of the variance in these ecosystem properties [START_REF] Lavorel | Plant functional effects on ecosystem services[END_REF].
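The sketch below illustrates the typical workflow in Python (numpy assumed): community-weighted mean (CWM) traits are computed from species relative abundances and species-level trait values, and an ecosystem property is then regressed against the CWM trait and a soil covariate. All plot data, trait values and covariates are invented for illustration.

```python
import numpy as np

# Hypothetical plot-level data: relative abundances of three species per plot,
# species trait values (e.g. leaf N concentration, mg g-1) and a soil covariate.
abundances = np.array([[0.6, 0.3, 0.1],
                       [0.4, 0.4, 0.2],
                       [0.2, 0.5, 0.3],
                       [0.1, 0.2, 0.7]])    # rows: plots, columns: species
leaf_N     = np.array([25.0, 18.0, 12.0])   # species-level trait values
soil_water = np.array([0.35, 0.30, 0.25, 0.20])   # plot-level abiotic variable
biomass    = np.array([620.0, 540.0, 450.0, 320.0])  # measured ecosystem property (g m-2)

# Community-weighted mean trait per plot: abundance-weighted sum of trait values.
cwm_leaf_N = abundances @ leaf_N

# Simple multiple linear regression of the ecosystem property against the CWM trait
# and the soil variable (ordinary least squares).
X = np.column_stack([np.ones_like(cwm_leaf_N), cwm_leaf_N, soil_water])
coeffs, *_ = np.linalg.lstsq(X, biomass, rcond=None)
print(coeffs)    # intercept, CWM leaf N effect, soil water effect
```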
The recent extension to ES associated with other biota such as soil fauna and microorganisms [START_REF] Mulder | Connecting the Green and Brown Worlds: Allometric and Stoichiometric Predictability of Above-and Below-Ground Networks[END_REF], insects [START_REF] Ibanez | Optimizing size thresholds in a plant-pollinator interaction web: towards a mechanistic understanding of ecological networks[END_REF][START_REF] Moretti | Linking traits between plants and invertebrate herbivores to track functional effects of environmental changes[END_REF], terrestrial vertebrates [START_REF] Luck | Extending trait-based frameworks to vertebrates and improving their application to the study of ecosystem services[END_REF], aquatic invertebrates [START_REF] Engelhardt | Relating effect and response traits in submersed aquatic systems[END_REF] or marine fish [START_REF] Albouy | From projected species distribution to food-web structure under climate change[END_REF] holds high promise. Multi-trophic trait-based models quantify ecosystem services resulting from the interaction between several trophic levels, such as pollination, biotic control of pests and weeds or maintenance of soil fertility (Grigulis et al., 2013;Lavorel et al., 2013a). These models capture the effects of environmental change on ES not only via e.g. plant traits, but also, by integrating the traits that underpin biotic interactions between plants and other organisms such as pollinators [START_REF] Pakeman | Using plant functional traits as a link between land use and bee foraging abundance[END_REF], herbivores [START_REF] Ibanez | Plant functional traits reveal the relative contribution of habitat and food preferences to the diet of four subalpine grasshoppers[END_REF], or soil microorganisms [START_REF] De Vries | Abiotic drivers and plant traits explain landscape-scale patterns in soil microbial communities[END_REF][START_REF] Legay | The relative importance of above-ground and below-ground plant traits as drivers of microbial properties in grasslands[END_REF], via the effects of these interactions on ES supply (Grigulis et al., 2013;[START_REF] Moretti | Linking traits between plants and invertebrate herbivores to track functional effects of environmental changes[END_REF]). In principle, and similar to niche-based models, a wide range of modelling methods are suitable, although the selected methods must allow spatially extensive extrapolation over areas for which explanatory variables are available, and preferably across time under scenarios.
As an example, models of mountain grassland ES supply were developed based on plant traits (Lavorel et al., 2011), and further complemented by traits of soil microorganisms (Grigulis et al., 2013). In these models, which focused principally on components of carbon and nutrient cycling, ecosystem properties were linked to plant height and easily measurable leaf traits such as dry matter content and nitrogen concentration, with additional effects of soil parameters. Both traits and soil parameters were related to grassland management to produce ES maps (Figure 5). These models were also applied to project the effects of combined climate and socio-economic scenarios translated into grassland management projections and parameterised from observations and experiments [START_REF] Lamarque | Plant trait-based models identify direct and indirect effects of climate change on bundles of grassland ecosystem services[END_REF].
The initial construction of trait-based ES models requires observational or experimental data sets measuring ecosystem properties underpinning ES supply along with the community composition of ESP. ESP community composition can then be combined with original, site-level trait data, or with data extracted from trait databases, while considering uncertainties resulting from intraspecific trait variability [START_REF] Kazakou | Are trait-based species rankings consistent across data sets and spatial scales?[END_REF][START_REF] Violle | The return of the variance: intraspecific variability in community ecology[END_REF]. Scenario projections can be parameterised by combining projected values for land use and environmental parameters with new community-level trait values calculated based on species compositional turnover from e.g. state-and-transition models [START_REF] Quétier | Linking vegetation and ecosystem response to complex past and present land use changes using plant traits and a multiple stable state framework[END_REF] and on intraspecific variability measured along environmental gradients [START_REF] Albert | Intraspecific functional variability: extent, structure and sources of variation within a French alpine catchment[END_REF] or through experiments [START_REF] Jung | Intraspecific trait variability mediates the response of subalpine grassland communities to extreme drought events[END_REF].
Increasing trait data availability through communal data bases and remote sensing offers high promises for the development of trait-based models of ES (Table 2a). There are definite geographic gaps, but overall European vegetation tends to be increasingly well covered, although more extreme environments such as Mediterranean or alpine, where intraspecific variability hinders the use of data measured in more temperate regions, still require collection efforts.
Strengths and weaknesses for practice
Although trait-based models of ES supply are in their infancy, they rely on rapidly increasing conceptual and empirical evidence. Such models also provide a mechanistic basis for the understanding of biophysical bundles and trade-offs in ES supply [START_REF] Lavorel | How fundamental plant functional trait relationships scale-up to trade-offs and synergies in ecosystem services[END_REF][START_REF] Mouillot | Functional Structure of Biological Communities Predicts Ecosystem Multifunctionality[END_REF]. The existence of so-called 'response-effect overlaps' [START_REF] Lavorel | Predicting the effects of environmental changes on plant community composition and ecosystem functioning: revisiting the Holy Grail[END_REF][START_REF] Suding | Scaling environmental change from traits to communities to ecosystems: the challenge of intermediate-level complexity[END_REF] enables mechanistic ES projections under future scenarios using relatively simple models [START_REF] Lamarque | Plant trait-based models identify direct and indirect effects of climate change on bundles of grassland ecosystem services[END_REF]. As with any statistical model, however, the greatest care should be taken when attempting to apply such models beyond the parameter space for which they were derived. So far trait-based ES models have rarely been validated across sites, although inter-site analyses have identified generic trait-based models of fodder production, fodder digestibility and litter decomposability ([START_REF] Fortunel | Plant functional traits capture the effects of land use change and climate on litter decomposability of herbaceous communities in Europe and Israel[END_REF][START_REF] Gardarin | Plant trait-digestibility relationships across management and climate gradients in French permanent grasslands[END_REF]; Lavorel et al., 2013b), and the model by [START_REF] Gardarin | Plant trait-digestibility relationships across management and climate gradients in French permanent grasslands[END_REF] was applied to map fodder quality at the national scale [START_REF] Violle | Vegetation Ecology meets Ecosystem Science: permanent grasslands as a functional biogeography case study[END_REF]. Lastly, trait-based models will become increasingly attractive as trait databases become more generally available, although the lack of soil data layers in many regions will remain problematic.
Full process-based models
Full process-based models of (terrestrial) ecosystems rely on the explicit representation, using mathematical formulations, of the ecological, physical, and biogeochemical processes that determine the functioning of ecosystems. The predictive algorithms simulate a large range of variables, which can then be post-processed to quantify ES. Process-based models have been most widely applied to quantify i) climate regulation [START_REF] Bayer | Historical and future quantification of terrestrial carbon sequestration from a Greenhouse-Gas-Value perspective[END_REF]Duarte et al., 2013;[START_REF] Metzger | A spatially explicit and quantitative vulnerability assessment of ecosystem service change in Europe[END_REF][START_REF] Naidoo | Global mapping of ecosystem services and conservation priorities[END_REF][START_REF] Ooba | Biogeochemical model (BGC-ES) and its basin-level application for evaluating ecosystem services under forest management practices[END_REF][START_REF] Watanabe | Dynamic emergy accounting of water and carbon ecosystem services: A model to simulate the impacts of land-use change[END_REF], ii) water supply, water quality, flood and erosion regulation [START_REF] Gedney | Detection of a direct carbon dioxide effect in continental river runoff records[END_REF]Lautenbach et al., 2012a;[START_REF] Lautenbach | Optimization-based trade-off analysis of biodiesel crop production for managing an agricultural catchment[END_REF][START_REF] Logsdon | A quantitative approach to evaluating ecosystem services[END_REF], iii) food, fodder, and bioenergy provision [START_REF] Bateman | Bringing Ecosystem Services into Economic Decision-Making: Land Use in the United Kingdom[END_REF][START_REF] Beringer | Bioenergy production potential of global biomass plantations under environmental and agricultural constraints[END_REF]Lindeskog et al., 2013;[START_REF] Müller | Hotspots of climate change impacts in sub-Saharan Africa and implications for adaptation and development[END_REF], and iv) natural hazard regulation [START_REF] Elkin | A 2 °C warmer world is not safe for ecosystem services in the European Alps[END_REF], but also in the wider frame of habitat characterisation [START_REF] Hickler | Projecting the future distribution of European potential natural vegetation zones with a generalized, tree species-based dynamic vegetation model[END_REF][START_REF] Huntingford | Highly contrasting effects of different climate forcing agents on terrestrial ecosystem services[END_REF]. Here, we discriminate between large-scale and local to landscape scale process-based models.
Appendix 1 lists examples using Dynamic Vegetation Models (DVM) (LPJ-GUESS and LPJmL: Bondeau et al., 2007;[START_REF] Sibyll | Contribution of permafrost soils to the global carbon budget[END_REF]Sitch et al., 2003;Smith et al., 2001), Earth System Models (ESM) (JULES and ORCHIDEE: Cox, 2001;Krinner et al., 2005;[START_REF] Zaehle | Carbon and nitrogen cycle dynamics in the O-CN land surface model: 1. Model description, site-scale evaluation, and sensitivity to parameter estimates[END_REF]), hydrological models (SWAT and others: Neitsch et al., 2005;Stürck et al., 2014), forest dynamic models (e.g. [START_REF] Elkin | A 2 °C warmer world is not safe for ecosystem services in the European Alps[END_REF]; Bugmann, 1996) and models for ecological restoration (e.g. [START_REF] Chen | A gap dynamic model of mangrove forest development along gradients of soil salinity and nutrient resources[END_REF]; Duarte et al., 2013). Figure 6 represents the typical steps of ES assessments with process-based models and possible mapping outputs.
Large-scale process models
Several large-scale process models have been used for ES quantification. Dynamic Vegetation Models (DVMs) and Earth System Models (ESMs) are large-scale models that provide functional representations of plant and ecosystem processes that are universal rather than specific to one biome or region [START_REF] Prentice | Encyclopedia of Biodiversity[END_REF]. Hydrological models represent water-related processes within river catchments [START_REF] Gudmundsson | Evaluation of nine large-scale hydrological models with respect to the seasonal runoff climatology in Europe[END_REF]. Global models typically apply a spatial resolution of 0.5°x0.5°, but can be run at finer resolution, or even at ecosystem scale if the required drivers are available. In that case, model adjustment might be necessary (e.g. re-calibration, or re-formulation to better account for important processes of the region). Applications of these models that include a representation of region-specific features are often designed for local to regional scale application, like forest dynamic models and crop models. This type of model uses a set of process formulations representing key biogeochemical and physical processes as a function of prevailing atmospheric CO2 concentration, climate, soil characteristics and, where relevant, land use and management or nutrient deposition. Vegetation is represented as a mixture of plant functional types (PFTs) or species that are distinct in terms of bioclimatic limits and ecological parameters (see [START_REF] Lavorel | Plant functional types : are we getting any closer to the Holy Grail ?[END_REF][START_REF] Woodward | Plant functional types and climatic change: Introduction[END_REF]), simulated or prescribed. Age or size classes may be distinguished, but more typically the modelled properties represent averages over the entire grid cell population of a given PFT [START_REF] Prentice | Dynamic Global Vegetation Modeling: Quantifying Terrestrial Ecosystem Responses to Large-Scale Environmental Change[END_REF], possibly under a given management type. The soil profile is described using up to ten layers. Hydrological models also consider shallow and deep aquifer storages, and a river routing module simulates the discharge to the river network. "Fast" processes are modelled on a daily or sub-daily basis and include energy and gas exchange, photosynthesis, respiration and plant-soil water exchanges. Processes with seasonal dynamics such as plant phenology, growth and biomass allocation are implemented on a daily or monthly basis. Mortality, disturbance and management are represented on an annual or sub-annual basis, possibly stochastically. By using a small number of PFTs which represent a low-dimensional continuum of plant trait combinations, process-based models generally underestimate the functional diversity of biota in favour of a manageable number of classes. Yet the rise of supercomputers makes it possible to run DVMs with flexible individual traits [START_REF] Sakschewski | Leaf and stem economics spectra drive diversity of functional plant traits in a dynamic global vegetation model[END_REF], thereby accounting for functional diversity.
Local to landscape scale process models
At local to landscape scales, forest dynamic models with a philosophy and structure similar to those of DVMs have been used for the assessment of bundles of ecosystem services including timber production, natural hazard regulation (avalanches, rockfalls), carbon sequestration, conservation of forest diversity for greater drought resilience, and habitat for protected bird species [START_REF] Elkin | A 2 °C warmer world is not safe for ecosystem services in the European Alps[END_REF][START_REF] Grêt-Regamey | Linking GIS-based models to value ecosystem services in an Alpine region[END_REF][START_REF] Temperli | Adaptive management for competing forest goods and services under climate change[END_REF].
As another type of full-process model with strong potential for ES modelling, watershed models have been designed to simulate hydrological flows, and often water quality, at the landscape to regional scale. Soil infiltration, surface and subsurface flows as well as evapotranspiration and snowmelt are the main hydrological processes. For modelling water quality, the transport and turnover of nutrients and other chemicals need to be represented, as well as soil erosion processes, along with agricultural practices. Given the importance of vegetation for the water cycle, a vegetation growth model is part of all watershed models. Specifically, differences between plants (or plant functional types) are considered for water use and, for crops, for nutrient use. As an example, Stürck et al. (2014) quantified the supply of flood control by running a hydrological model for a number of representative catchment types to quantify the regulating effect of different land use types in different positions in the catchments. Results were extrapolated to a European map accounting for catchment type, location in the catchment, land use and soil conditions. The Soil and Water Assessment Tool (SWAT) (Neitsch et al., 2005) is an example of an advanced watershed model applied to agricultural landscapes to map water purification services (Lautenbach et al., 2012a), to assess fresh water provisioning, fuel provisioning, erosion regulation and flood regulation [START_REF] Logsdon | A quantitative approach to evaluating ecosystem services[END_REF] and to describe trade-offs between food and fodder provisioning, biofuel provisioning, water quality regulation and discharge regulation [START_REF] Lautenbach | Optimization-based trade-off analysis of biodiesel crop production for managing an agricultural catchment[END_REF] depending on crop rotations and crop management.
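SWAT, for example, estimates daily surface runoff with the SCS curve-number method (one of its rainfall-runoff options); a minimal sketch of that calculation is given below, with hypothetical curve numbers chosen only to contrast a well-vegetated and an intensively used land cover.

```python
def scs_runoff(precip_mm, curve_number):
    """Daily surface runoff (mm) from the SCS curve-number method,
    a rainfall-runoff option used in watershed models such as SWAT."""
    S = 25.4 * (1000.0 / curve_number - 10.0)   # retention parameter (mm)
    Ia = 0.2 * S                                 # initial abstraction (mm)
    if precip_mm <= Ia:
        return 0.0
    return (precip_mm - Ia) ** 2 / (precip_mm - Ia + S)

# Illustrative comparison of land use effects on runoff (and hence flood regulation):
print(scs_runoff(50.0, curve_number=55))   # forest on permeable soil (hypothetical CN)
print(scs_runoff(50.0, curve_number=85))   # arable land on compacted soil (hypothetical CN)
```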
For marine ecosystems, the Ecopath with Ecosim (EwE) modelling approach supports predictions of changes in fish populations to evaluate ecosystem effects of fishing, explore management policy options, analyse impacts and placement of marine protected areas, predict movement and accumulation of contaminants and tracers, and model the effects of environmental changes. At the core of EwE is Ecopath, a static, mass-balance snapshot of fisheries based on a set of linear equations combining net production, which reflects the balance of catch, predation and other sources of mortality, migration and biomass accumulation, with respiration and unassimilated food. The Ecosim and Ecospace modules then build on this basic mass-balance module to simulate, respectively, temporal dynamics using differential equations, and spatial dynamics using spatially explicit distributions of habitat types and fishing effort, along with lateral movement. Alcamo et al. (2005) applied EwE to model fish consumption and production for three important regional marine fisheries (North Benguela, Central North Pacific and Gulf of Thailand) under the four Millennium Ecosystem Assessment global scenarios, showing that for all scenarios fish catch (by weight) was maintained in North Benguela, not maintained in the Central North Pacific, whereas results were scenario-sensitive in the Gulf of Thailand. Another process-based model is available for the long-term carbon sequestration expected from seagrass restoration programmes (Duarte et al., 2013), combining models of patch growth and patch survival in seagrass planting projects with estimates of seagrass CO2 sequestration per unit area for the five seagrass species commonly used in restoration programmes. Results enable the estimation of an optimal planting density to maximise C sequestration.
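The linear equations referred to above correspond to the Ecopath master equation, which balances the production of each group against catch, predation, net migration, biomass accumulation and other mortality. The sketch below solves it for ecotrophic efficiency in a toy three-group system (Python with numpy assumed; all group names and parameter values are invented for illustration).

```python
import numpy as np

# Hypothetical 3-group system (phytoplankton, zooplankton, fish); values illustrative.
B   = np.array([20.0, 5.0, 1.0])     # biomass (t km-2)
PB  = np.array([60.0, 25.0, 2.0])    # production/biomass ratio (yr-1)
QB  = np.array([0.0, 80.0, 8.0])     # consumption/biomass ratio (yr-1); producers consume nothing
# DC[j, i]: fraction of predator j's diet made up of prey i.
DC  = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.1, 0.9, 0.0]])
Y   = np.array([0.0, 0.0, 0.6])      # fisheries catch (t km-2 yr-1)
BA  = np.zeros(3)                    # biomass accumulation
E   = np.zeros(3)                    # net migration

# Ecopath master equation: B_i*(P/B)_i*EE_i = Y_i + sum_j B_j*(Q/B)_j*DC_ji + E_i + BA_i
predation = (B * QB) @ DC            # predation mortality exerted on each prey group i
EE = (Y + predation + E + BA) / (B * PB)
print(np.round(EE, 3))               # ecotrophic efficiency; values > 1 flag an unbalanced model
```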
Strengths and weaknesses for practice
There is a substantially overlapping set of physiological and ecological principles that is used across process-based models to represent ecosystem dynamics and matter flows, providing robustness and scalability to these models for ES quantification. However, predictions of response variables, e.g. net primary productivity, vary considerably among individual large-scale models (e.g. [START_REF] Denman | Climate 2007: The Physical Science Basis[END_REF][START_REF] Friedlingstein | Climate-carbon cycle feedback analysis. Results from the C4MIP model intercomparison[END_REF][START_REF] Sitch | Trends and drivers of regional sources and sinks of carbon dioxide over the past two decades[END_REF]). This is due to the lack of a universal set of benchmarks, e.g. for terrestrial carbon cycle modelling, and the lack of consensus about several aspects of ecological processes [START_REF] Prentice | Encyclopedia of Biodiversity[END_REF]. Conversely, local and landscape scale process models can be limited by detailed parameterization and calibration needs and by the availability of case-specific validation data.
Most process models have not been designed to model ecosystem services but to model the underlying ecosystem functions from which an ecosystem service is derived directly or indirectly. The great strength of this approach is that it allows scenario analysis and if-then-else experiments if the model has been proven to capture the essential system behaviour. Because they model processes and their fundamental biological and physical interactions, they also appear particularly promising for exploring the mechanisms underpinning synergies and trade-offs between ES [START_REF] Viglizzo | Partition of some key regulating services in terrestrial ecosystems: Meta-analysis and review[END_REF].
Discussion
Mobilising scientific understanding to assess the spatial distribution of ecosystem services is a tremendous challenge to support environmental assessment, planning and sustainable futures [START_REF] Bennett | Linking biodiversity, ecosystem services and to human well-being: Three challenges for designing research for sustainability[END_REF][START_REF] Maes | An indicator framework for assessing ecosystem services in support of the EU Biodiversity Strategy to[END_REF]. Because ES lie at the interface between social and environmental systems, this endeavour requires an integrated assessment of the social and ecological factors determining the production of ecosystem services (Reyers et al., 2013;[START_REF] Villamagna | Capacity, pressure, demand, and flow: A conceptual framework for analyzing ecosystem service provision and delivery[END_REF]. In this review, we have focused solely on how the biophysical condition of ecosystems, as influenced by biotic, abiotic and management factors, determines ES supply, thus not considering ES demand, and how it influences ES flows and feedbacks to biophysical condition. Mapping ES demand is an even more challenging objective [START_REF] Wolff | Mapping ecosystem services demand: A review of current research and future perspectives[END_REF], and will ultimately need to be coupled in integrated spatial ES assessments (e.g. Schulp et al., 2014b;Schulp et al., 2014c;Stürck et al., 2014).
In the following we discuss future avenues for further improvement of the mapping of ES supply by increasing biophysical realism, but this will need to proceed along with parallel progress in accounting for ES demand. [START_REF] Plant | Ecosystem services as a practicable concept for natural resource management: some lessons from Australia[END_REF] identified the lack of an 'ES toolbox' as a barrier to the adoption of ES by natural resource managers. While this paper does not attempt to produce such a toolbox, it provides a basis for guiding model choice by scientists and practitioners. Here, we identified two important dimensions which enable the incorporation of greater biophysical realism in models supporting ES mapping and which can be developed into different model categories regardless of their baseline complexity. First, while simple land cover information is most commonly used to map ES, effects of land use configuration and land use intensity that can be captured by phenomenological models are too often ignored [START_REF] Verhagen | Effects of landscape configuration on mapping ecosystem service capacity: a review of evidence and a case study in Scotland[END_REF]. Recent practice also shows the benefits of incorporating explicit land use rather than simple land cover information, from the simplest proxy-based models all the way to advanced full process-based models. Second, approaches incorporating explicitly the role of ES-providing biota are emerging as powerful methods to reduce uncertainty in ES mapping. This includes the models described in this review that quantify individual species effects, species diversity, functional traits and ecosystem functions, for which tools and data are becoming increasingly available. Our review also helps the selection of appropriate methods according to spatial scale, given that scale effects have so far been poorly considered in ES research (Grêt-Regamey et al., 2014b). Review of practice (e.g. [START_REF] Temperli | Adaptive management for competing forest goods and services under climate change[END_REF]) highlights (1) the predominant effect of scale on model selection for practical case studies, and (2) the prospect within a single case study of combining different model types, of varying complexity and detail in the representation of biotic effects, depending on the specific ES of interest, skills and data/resources availability.
Future avenues for increasing biophysical realism in ES mapping
The last point highlights that our categories of methods are not necessarily exclusive and that there may be more of a continuum between approaches. Hybridization is a fruitful avenue for model improvement depending on context, scale, skills and data availability. This is illustrated by a number of published examples and ongoing developments that help progressing gradually from MAES Tier 2 to Tier 3, by incorporating more mechanistic approaches [START_REF] Grêt-Regamey | A tiered approach for mapping ecosystem services[END_REF], and especially a greater integration of explicit biodiversity effects into the mapping of ES supply. For instance, Grêt-Regamey et al. (2008) demonstrated how statistical, phenomenological and process-based models of varying levels of complexity can be coupled with a GIS platform in order to assess ecosystem services at the landscape scale. Stürck et al. (2014) used a hybridization between a process-based hydrological model and a spatial proxy look-up approach to map flood regulation across Europe by determining the regulating capacity of different land use-soil combinations within a catchment. [START_REF] Schirpke | Multiple ecosystem services of a changing Alpine landscape: past, present and future[END_REF] coupled the USLE phenomenological model of soil erosion with a semi-mechanistic statistical model of plant root trait effects on soil retention to quantify effects of land use change on soil stability. Large-scale full-process models are also evolving towards the integration of trait-based approaches rather than using a small number of fixed Plant Functional Types. First, global vegetation models can be reformulated to incorporate plant traits and their trade-offs as drivers of vegetation distribution [START_REF] Reu | The role of climate and plant functional trade-offs in shaping global biome and biodiversity patterns[END_REF]. Second, recent models have started considering direct trait-based formulation [START_REF] Scheiter | Impacts of climate change on the vegetation of Africa: an adaptive dynamic vegetation modelling approach (aDGVM)[END_REF][START_REF] Zaehle | Carbon and nitrogen cycle dynamics in the O-CN land surface model: 1. Model description, site-scale evaluation, and sensitivity to parameter estimates[END_REF] and/or parameterisation [START_REF] Verheijen | Impacts of trait variation through observed traitclimate relationships on performance of an Earth system model: a conceptual analysis[END_REF][START_REF] Wullschleger | Plant functional types in Earth system models: past experiences and future directions for application of dynamic vegetation models in high-latitude ecosystems[END_REF]. Lastly, for landscape to regional scales, so-called 'hybrid' DVMs pave the way to the integration of niche-based models with dispersal models [START_REF] Midgley | BioMove -an integrated platform simulating the dynamic response of species to environmental change[END_REF] and with trait-based process models [START_REF] Boulangeat | Optimizing plant functional groups for dynamic models of biodiversity: at the crossroads between functional and community ecology[END_REF], thereby opening new perspectives for the refinement of trait-based modelling of ES supply under scenarios of climate change [START_REF] Boulangeat | FATE-HD: A spatially and temporally explicit integrated model for predicting vegetation structure and diversity at regional scale[END_REF].
Together, all these recent developments illustrate how increasing fundamental understanding of the role of different facets of biodiversity in ecosystem functioning and ecosystem services [START_REF] Cardinale | Biodiversity loss and its impact on humanity[END_REF] can be incorporated into the spatially explicit modelling of ecosystem service supply.
Quantifying and mapping marine ecosystem services has lagged behind efforts for terrestrial ecosystems, but this is in the process of changing (Liquete et al., 2013;[START_REF] Maes | Mapping ecosystem services for policy support and decision making in the European Union[END_REF]). Data and methods to assess the provision of services from the marine environment are far behind those available for terrestrial environments (Barbier, 2012;[START_REF] Costanza | The ecological, economic, and social importance of the oceans[END_REF]). The gap is greatest when it comes to the mapping of ES; the main reasons behind this are the lack of high-resolution spatial information on habitat and species distributions and the incomplete understanding of ecosystem processes and functions within a highly dynamic three-dimensional environment with fluid boundaries [START_REF] Maes | Mapping ecosystem services for policy support and decision making in the European Union[END_REF]. However, efforts towards mapping marine habitats are increasing. In order to move rapidly towards biophysically realistic mapping, some large fundamental knowledge gaps regarding ecosystem processes need to be resolved. First, there is a lack of information on the scales at which ecosystem processes and functions occur and how these relate to the provisioning of services. Second, the relationships between biodiversity and ecosystem functions in marine ecosystems are still poorly known [START_REF] Bergström | Modeling and predicting the growth of the mussel, Mytilus edulis: implications for planning of aquaculture and eutrophication mitigation[END_REF][START_REF] Moore | Distribution of Mangrove Habitats of Grenada and the Grenadines[END_REF]. The literature search on marine models/mapping conducted as part of this review highlighted a considerable number of studies which stopped at the assessment or prediction of ESP distributions without taking the next step of analysing their implications for ES provision (e.g. [START_REF] Albouy | From projected species distribution to food-web structure under climate change[END_REF][START_REF] Bergström | Modeling and predicting the growth of the mussel, Mytilus edulis: implications for planning of aquaculture and eutrophication mitigation[END_REF][START_REF] Moore | Distribution of Mangrove Habitats of Grenada and the Grenadines[END_REF]). Although this is not an easy problem, experience from terrestrial ecosystems in the integration of biotic processes and biodiversity effects into ES quantification and mapping may speed up progress for marine and coastal ecosystems.
Uncertainty and validation of spatially-explicit models of ES supply
While the importance of quantifying uncertainties and of model validation is accepted knowledge in the environmental and ecological modelling community [START_REF] Bennett | Characterising performance of environmental models[END_REF][START_REF] Dormann | Components of uncertainty in species distribution analysis: a case study of the Great Grey Shrike[END_REF][START_REF] Jakeman | Ten iterative steps in development and evaluation of environmental models[END_REF][START_REF] Laniak | Integrated environmental modeling: A vision and roadmap for the future[END_REF], these have been two enduring challenges for spatially explicit ES assessment [START_REF] Crossman | A blueprint for mapping and modelling ecosystem services[END_REF][START_REF] Martinez-Harms | Methods for mapping ecosystem service supply: a review[END_REF][START_REF] Seppelt | A quantitative review of ecosystem service studies: approaches, shortcomings and the road ahead[END_REF].
Uncertainty can be conceptually separated into uncertainty about the model structure, uncertainty of model parameterization, uncertainty in the data and, last but not least, the conceptual uncertainty of the definition of an ecosystem service and its underlying processes. In reality all these different components interact: data availability - especially at larger scales - in many situations drives the choice of proxies and the model structure for assessing an ES [START_REF] Andrew | Spatial data, analysis approaches, and information needs for spatial ecosystem service assessments: a review[END_REF]. The use and parameterization of a model are limited by the availability of data; this can lead to suboptimal decisions about model structure, which in turn increase uncertainty. Choosing strongly simplified proxy models to best match data availability may lead to ignoring the most important processes determining ecosystem service supply, and especially biotic processes. For ecosystem services that predominantly rely on mobile biota - such as pest control or pollination - increased system understanding of how the different components of biodiversity affect the provisioning of these services supports the development of more robust models suitable for spatial and temporal extrapolation [START_REF] Kremen | Pollination and other ecosystem services produced by mobile organisms: a conceptual framework for the effects of land-use change[END_REF]. At the same time, using the most advanced process models in the absence of data to parameterise them at the desired scale does not necessarily lead to higher accuracy. Still, incorporating at least phenomenological understanding into ES models would likely increase the reliability and robustness of ES assessments and maps. The use of remote sensing data to estimate ecosystem service proxies [START_REF] Ayanu | Quantifying and Mapping Ecosystem Services Supplies and Demands: A Review of Remote Sensing Applications[END_REF][START_REF] De Araujo Barbosa | Remote sensing of ecosystem services: A systematic review[END_REF] and to derive information on biodiversity [START_REF] O'connor | Earth observation as a tool for tracking progress towards the Aichi Biodiversity Targets[END_REF][START_REF] Schmidtlein | Mapping plant strategy types using remote sensing[END_REF] (Table 2) is promising for overcoming some of the data limitations, although the derivation of information from remote sensing data requires the use of additional models that bring in their own sources of uncertainty [START_REF] Foody | Front Matter, Uncertainty in Remote Sensing and GIS[END_REF].
In the previous sections we have illustrated how better biodiversity process understanding can be incorporated to reduce uncertainty in ES models. The uncertainty introduced by a selected model structure can be assessed by comparing models of different structure, as recently done for species distribution models [START_REF] Buisson | Uncertainty in ensemble forecasting of species distribution[END_REF][START_REF] Morin | Comparing niche-and process-based models to reduce prediction uncertainty in species range shifts under climate change[END_REF]. Such comparisons can be used to highlight areas or situations in which models strongly agree or disagree. Schulp et al. (2014a) followed this approach by comparing ES maps from five distinct studies at the European scale, and such practice is now gaining currency for assessing the value of novel model developments. Another strategy is the use of model ensembles for forecasts and assessments. This strategy, which is common practice for full-process models, is now spreading to niche-based and trait-based models (e.g. [START_REF] Araújo | Ensemble forecasting of species distributions[END_REF][START_REF] Gritti | Estimating consensus and associated uncertainty between inherently different species distribution models[END_REF][START_REF] Thuiller | BIOMOD -a platform for ensemble forecasting of species distributions[END_REF]).
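A minimal sketch of such a comparison is given below (Python with numpy assumed): ES maps from structurally different models are rescaled to a common range and then summarised by their ensemble mean and a simple between-model disagreement measure. The maps are invented for illustration.

```python
import numpy as np

# Hypothetical ES maps for the same area from three models of different structure.
maps = np.stack([
    np.array([[0.9, 0.4], [0.3, 0.1]]),
    np.array([[0.7, 0.5], [0.5, 0.2]]),
    np.array([[0.8, 0.2], [0.6, 0.4]]),
])

# Rescale each map to 0-1 so that models with different units become comparable.
mins = maps.min(axis=(1, 2), keepdims=True)
maxs = maps.max(axis=(1, 2), keepdims=True)
scaled = (maps - mins) / (maxs - mins)

ensemble_mean = scaled.mean(axis=0)          # consensus ES estimate
disagreement  = scaled.std(axis=0)           # high values flag areas where models diverge
print(np.round(ensemble_mean, 2))
print(np.round(disagreement, 2))
```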
As the availability of species, phylogenetic and functional biodiversity data currently increases, along with remote-sensing-derived information, data availability is likely to become less limiting for the use of more advanced models (Table 2). But then uncertainty in biodiversity input data could propagate into ES models [START_REF] Dong | Land use mapping error introduces strongly-localised, scale-dependent uncertainty into land use and ecosystem services modelling[END_REF]. Researchers will then need to estimate the effects of increasing model complexity both on the feasibility of parameterisation and on sensitivity to uncertainty in the input data, and to carefully assess potential mismatches in the temporal and spatial scales of the new data [START_REF] Orth | Does model performance improve with complexity? A case study with three hydrological models[END_REF][START_REF] Perrin | Does a large number of parameters enhance model performance? Comparative assessment of common catchment model structures on 429 catchments[END_REF]. First, a specific source of uncertainty in ES assessment relates to the scale of the input data. Although the selection of the modelling scale should be driven by the requirements of the end user (the scale of the decision to be made) (Grêt-Regamey et al., 2014b), data availability seriously limits the degrees of freedom for the selection of input data. There should always be a match between the resolution of ESP biota data and that of environmental data. This in particular applies to climate, where downscaled layers need to be available or calculated for adequate species distribution or process modelling. Conversely, care should also be taken in combining high-resolution soil maps with coarser-resolution biota data (Grêt-Regamey et al., 2014b). Second, extrapolating models beyond the range of calibration data is a critical source of uncertainty. Black-box statistical models, e.g. some phenomenological, niche-based or trait-based models that have been (over-)fitted to data without ensuring a robust model structure, are especially prone to extrapolation effects. In general, for an analysis of temporal changes or for assessing management options, the projected effects should be compared to the sensitivity of the model outputs to reasonable parameter changes: if the direction of the effect changes, or if the magnitude of the response is weaker than the sensitivity of the model, one should hesitate to draw strong conclusions from the model. Examples of this approach can be found in [START_REF] Lautenbach | Analysis of historic changes in regional ecosystem service provisioning using land use data[END_REF] and Schulp et al. (2014b). It is also possible to test the sensitivity towards the uncertainty in the input data, an approach followed in Lautenbach et al. (2012b).
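One straightforward way to test sensitivity to input uncertainty is Monte Carlo propagation of assumed input errors through the ES model, as in the sketch below (Python with numpy assumed; the toy model and the uncertainty ranges are purely illustrative).

```python
import numpy as np

rng = np.random.default_rng(42)

def es_model(land_cover_score, integrity):
    """Toy ES model standing in for any of the approaches above (illustrative only)."""
    return land_cover_score * integrity

# Assumed input uncertainty: the land-cover score and the integrity modifier are
# treated as normally distributed around their nominal values (hypothetical spreads).
n = 1000
score_samples     = rng.normal(loc=3.0, scale=0.6, size=n)
integrity_samples = rng.normal(loc=0.8, scale=0.1, size=n)

outputs = es_model(score_samples, integrity_samples)
print(f"mean = {outputs.mean():.2f}, 5th-95th percentile = "
      f"{np.percentile(outputs, 5):.2f}-{np.percentile(outputs, 95):.2f}")
```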
Thus, ES map users should be aware of the different sources of uncertainty, and map creators should at least list them with respect to their application. In addition to this qualitative step, a quantitative assessment of the uncertainties of model output should be given. If observed data are used for model parameterisation and calibration, it should always be possible to quantify the attached uncertainty. If no observed data are available against which model output could be compared, sensitivity analysis is an option for quantifying uncertainties. Lastly, validation of ES models is complicated by the fact that many ES cannot be directly measured. Water purification, for example, has to rely on water quality data, not on measured purification rates (Lautenbach et al., 2012a). Therefore, model validation is often pattern-oriented, considering proxy data, and tries at least to capture the system behaviour instead of specific process variables.
Ultimately, comparing the gains from improved biophysical process understanding to the possible propagation of uncertainties in biodiversity and ecosystem process data will determine the net benefits from using models of increasing biophysical complexity.
Conclusion
In order to achieve the ambitious political agenda for biodiversity, and to support sustainable development that both preserves and benefits from natural capital and ecosystem services, considerable progress is still needed in the practice of quantifying ecosystem service supply. Today a rich array of methods is available, especially for terrestrial systems, that enables the incorporation of biodiversity effects on ecosystem functioning into quantitative, spatially explicit assessments of ecosystem service supply. We have summarised the main characteristics, strengths and weaknesses of different approaches for mapping ES supply, highlighting their complementarity depending on scale, assessment objectives and context, and available skills and data. Review of practice illustrates the predominant effect of scale on model selection, and the ability within a single case study to combine different model types, of varying complexity and detail in the representation of biodiversity effects, depending on the specific ES of interest, skills and data/resources availability. Besides, model categories are not necessarily exclusive and there may be more of a continuum between approaches. Recent model developments, with innovative hybridization across model types, illustrate how increasing fundamental understanding of the role of different facets of biodiversity in ecosystem functioning and ecosystem services can be incorporated into the spatially explicit modelling of ecosystem service supply. As the availability of biodiversity data (species, phylogenetic and functional) increases and the potential for remote sensing of taxonomic and functional diversity becomes realised, the application of more 'biodiversity realistic' models should be able to move from research to practice. Considerable challenges remain for assessments to embrace good practice in model uncertainty quantification and validation, an upstream research need still to be addressed. Lastly, while the mapping of terrestrial ecosystem service supply is now reaching greater maturity, for marine ecosystems research is still in its infancy. Urgent research needs concern a better understanding of marine biodiversity effects on ecosystem functioning, and the scales at which this influences ecosystem service supply. The availability of high-resolution data also proves to be an obstacle that needs to be cleared before sound practice can be achieved.
Figures
Appendix table (recovered fragments): input data and descriptions of the large-scale process-based models.
- LPJ-GUESS (Smith et al. 2001, Sitch et al. 2003): 11 PFTs (e.g. according to Ahlström et al.) and 11 CFTs (Lindeskog et al., 2013); daily or monthly average air surface temperature, precipitation (daily precipitation, or monthly precipitation and wet days) and incoming shortwave radiation (or sunshine hours per day), from various sources depending on scale and considered time frame, e.g. 0.5° CRU TS 3.1 (Mitchell et al.), usually in combination with CMIP5 scenario climate data (e.g. MPI-ESM-LR after Giorgetta et al., 2013); global soil map (FAO, 1991); atmospheric CO2 concentration.
- LPJmL (Bondeau et al. 2007): 9 PFTs (Sitch et al., 2003); daily or monthly average air surface temperature, daily precipitation (or monthly precipitation and wet days), cloudiness or shortwave and longwave radiation, from various sources depending on the scale, e.g. 0.5° CRU TS 3.1 (Mitchell et al.); global soil map (FAO, 1991); atmospheric CO2 concentration.
- SWAT (Neitsch et al., 2005): the model integrates all relevant processes for watershed modelling, including water flow, nutrient transport and turnover, vegetation growth, land use and water management at the sub-basin scale. It considers five different pools of nitrogen in the soils (Neitsch et al., 2005): two inorganic (ammonium and nitrate) and three organic (fresh organic nitrogen, and active and stable organic nitrogen). Nitrogen is added to the soil by fertilizer, manure or residue application, fixation by bacteria, and atmospheric deposition. Nitrogen losses occur by plant uptake, leaching, volatilization, denitrification and erosion.

Appendix table (recovered fragments): main strengths and limitations of the marine and coastal ES models.
- Coastal protection (Liquete et al. 2013): limitations: the continental scale of the model may oversimplify local coastal processes, since they cannot be taken into account at the continental scale and resolution; coastal studies generally lack the step of aggregation of data and knowledge transfer from local case studies to regional ones; due to the lack of large-scale datasets or methodologies, some important factors could not be included in the model (e.g. local sediment budget, subsidence, main direction of morphologic features with respect to wave action, coastal development and management, low vs. high coasts, health of the ecosystems, the specific non-linear response of habitats for protection, among others); further information on limitations, or on aspects that could improve the model, is discussed in the paper.
- InVEST Coastal Protection (Guerry et al. 2012): limitations: a primary limitation is that the Erosion Protection model assumes that all erosion leads to a loss of land; the model estimates coastal protection services provided by habitats in terms of the reduction in damages due to erosion from storm waves, not surge, whereas some coastal habitats can attenuate surge in addition to waves (e.g. marshes, coastal forests) and other nearshore subtidal habitats cannot (e.g. eelgrass); the model also has technical limitations, the first being the lack of readily available high-quality GIS data; the theoretical limitations of the Nearshore Waves and Erosion model are more substantial, as wave evolution is modelled with a 1D model, which assumes that the bathymetry is longshore-uniform (i.e. the profile in front of the site is similar along the entire stretch of shoreline); because this is unlikely to be true, the model ignores any complex wave transformations that occur offshore of the site of interest.
- InVEST Food provision (Guerry et al. 2012): strengths: production of maps that identify the most important marine and coastal areas across a variety of fishing fleets; limitations: it does not model behaviour and is therefore not well suited to evaluating how human uses may change in response to changes in the marine environment.
- InVEST Marine renewable energy (Guerry et al. 2012): strengths and limitations not stated in the paper.
- InVEST Recreation (Guerry et al. 2012): the model does not presuppose that any predictor variable has an effect on visitation; instead, the tool estimates the magnitude of each predictor's effect based on its spatial correspondence with current visitation in the area of interest; it requires the assumption that people's responses to the attributes that serve as predictors in the model will not change over time, in other words that, in the future, people will continue to be drawn to or repelled by the attributes as they are currently.
- Hydrodynamic model (Temmerman et al. 2012): strengths and limitations not stated in the paper.
- Hydrodynamic model (Shepherd et al. 2007): strengths: cost-benefit analysis (CBA) demonstrates that managed realignment provides positive economic advantages, and in a wider sense the case study demonstrates how CBA of managed realignment schemes should be undertaken so that the potential social cost of realignment can be better understood; limitations: the model and CBA rely on many assumptions (e.g. a sedimentation rate of 1.5 or 6 mm per year) and uncertainties (e.g. habitat values or nitrogen removal), but the assumptions are conservative, so the value of the service is, if anything, underestimated.
- Mangrove wind protection (Das and Crépin, 2013): strengths: the study found that not accounting for the role of mangroves significantly overestimates actual wind damage; wind barriers like mangroves reduce tangential wind and contribute substantially to reducing wind-caused damage to structures; limitations: while the simplicity of the model makes it very tractable for use in empirical studies in poor regions, further model development and better data would shed more light on the particular mechanisms.
- Multiscale ecological and economic models, salmon, shrimp and blue crab (Jordan et al. 2012): strengths: ecological production functions are generally observed at fine spatial scales for brief spans of time, whereas the resulting ecosystem services and their economic values may be delivered over broad geographic and temporal scales; the paper demonstrates methods of modelling and estimation that link fishery production and its associated economic indicators to the distributions and attributes of coastal habitats across scales ranging from habitat patches to large ocean basins; limitations not stated in the paper.
- CO2 capture potential of seagrass restoration (Duarte et al., 2013): strengths: the model indicates that the cumulative carbon sequestered increases rapidly over time and with planting density, and the corresponding value suggests that the costs of seagrass restoration projects may be fully recovered by the total CO2 captured in societies with a carbon tax in place; limitations: the model delivers rough but conservative estimates of the average CO2 capture capacity associated with seagrass restoration projects, because it focuses on the mean, whereas planting of seagrass patches for CO2 capture can be managed to achieve maximum capture (which could double the estimates), and because it considers clonal spread alone, whereas restored meadows would also produce seeds as they develop, accelerating colonisation beyond the limits imposed by clonal growth and therefore carbon capture.
- Role of eelgrass in ES, food-web modelling (Plummer et al. 2013): strengths: increased eelgrass coverage was most associated with increases in commercial and recreational fishing, with some small decreases in one non-market activity, bird watching; when ES categories (aggregations of individual groups of species) were considered, there was little evidence of strong trade-offs among marine resources, that is, increasing eelgrass coverage was essentially either positive or neutral for all services examined; limitations not stated in the paper.
- Mercury sequestration (Anastácio et al. 2013): strengths: a tool to analyse system behaviour and to make projections regarding mercury sequestration, which is particularly relevant in the case of human interventions (i.e. engineering) for the optimisation of this ecosystem service; limitations: the value calculated does not include sequestration by other plants sharing the habitat with B. maritimus, so the total value for mercury sequestration by wetland plants should be expected to be higher, since other plant species are present in the studied area.
- Spatial model of coastal ecosystem services (Barbier 2012): strengths: the basic model demonstrates how the spatial production of ecosystem services affects the location and extent of landscape conversion, and an extension allows for the risk of ecological collapse when the critical size of the remaining landscape that precipitates the collapse is not known; both models are simulated using the example of spatial variation in ecosystem services across a mangrove habitat that might be converted to shrimp aquaculture; limitations not stated in the paper.
Figure 1. (caption fragment only)
Figure 2. (caption fragment only)
Figure 3. Phenomenological model development for mapping marine ecosystem services based on marine ecological principles. Adapted from Townsend et al., 2014.
Appendix table (recovered fragments): input data, strengths and limitations of the terrestrial models.
- IVM-FloodRegulation (Stürck et al., 2014): inputs: precipitation regime (Haylock et al., 2008), soil water holding capacity and soil classification (FAO, 2009), DEM, and forest and agricultural management variables; strengths: combination of flood regulation demand and supply to identify priority areas where flood regulation can be enhanced using natural vegetation, applicable from single catchments to the entire EU; limitations: the method is difficult to apply at smaller scales.
- SWAT (Lautenbach et al. 2012): inputs: climate: precipitation, temperature (min and max), solar radiation, wind speed, humidity and potential evapotranspiration, at daily resolution from climate stations, depending on the method used to calculate evapotranspiration (values can also be estimated using a weather generator); soil: number and thickness of layers, hydrologic soil class, soil porosity, saturated hydraulic conductivity, field capacity, water content at wilting point, clay/silt/sand content, organic matter content, bulk density (the available resolution of the soil data, together with the land-cover resolution, defines the achievable resolution); terrain: slope.
- HILLFLOW (Leitinger et al. 2015): inputs: climate: precipitation, temperature (min and max), solar radiation, wind speed, humidity and potential evapotranspiration, at daily resolution from climate stations; soil: texture from the Harmonized World Soil Database (version 1.2, 2012), aggregated to 0.5° resolution and classified according to the USDA soil texture classification (http://edis.ifas.ufl.edu/ss169), saturated water content, field capacity, residual soil water content, saturated hydraulic conductivity, soil depth, macropores; DEM: altitude, slope, aspect; atmospheric CO2 concentration; description: process-based mechanistic model of water evaporation, lateral flow and deep seepage depending on vegetation demand, soil properties and terrain, providing soil moisture and deep-water seepage.
- Climate forcing sources cited across these models also include GPCC (Rudolf et al. 2010), 0.25° WATCH Forcing Data (Weedon et al. 2011) and ERA-Interim (ECMWF).
- IVM-tourism (Van Berkel & Verburg, 2011): strengths: applicable across the EU using easily accessible datasets; limitations: the expert-based approach makes it inherently subjective, preferences and suitable indicators might change over time, and it does not generate reliable results in all regions.
- LPJ-GUESS (Smith et al. 2001, Sitch et al. 2003): strengths: global quantification, historical and future transitions, high detail in the representation of vegetation dynamics (age cohorts and gap dynamics), closed carbon and nitrogen cycles; limitations: resolution (0.5° x 0.5°), applicable up to country/regional level (downscaling aspired to in the Scottish Exemplar).
- Biocontrol (Civantos et al. 2012): strengths: climate- and land-use-based distributions of service-providing species, easy to project under given scenarios, contributions of individual species may be combined with different weights, e.g. depending on diets (although currently even weights are used); limitations: the usual limits of species distribution modelling.
- LPJmL (Bondeau et al. 2007): strengths: closed carbon cycle, consistent representation of the biogeochemical processes between the different plant types; limitations: currently no nitrogen cycle.
- Trait-based models of grassland ES (Lavorel et al. 2011, Grigulis et al. 2013): strengths: mechanistic understanding of ES supply, functional understanding of ES trade-offs and bundles, easy to project under given scenarios; limitations: the generic nature of the models is still being checked (inter-site comparison) and they may need adaptation for different bioclimatic regions.
Appendix table (recovered fragments): marine and coastal ES models, with model type, ES addressed, scale, input data and short description.
- Multiscale ecological and economic models, salmon, shrimp and blue crab (Jordan et al. 2012): phenomenological; provisioning: nutrition, biomass (food production); local and regional (estuary); inputs: fishery landings, effort data, habitat GIS coverages, survival rates for habitat types, salinity, and economic value information for local and regional salmon fisheries.
- InVEST Coastal Protection (Guerry et al. 2012): coastal protection; nearshore marine habitats; the model quantifies the protective services provided by natural habitats of nearshore environments in terms of avoided erosion and flood mitigation; its profile generator prepares a 1D bathymetry transect of a shoreline, providing information about its backshore and the location of natural habitats, and is used to estimate the total water level and shoreline erosion in the presence and absence of habitats.
- Coastal protection (Liquete et al. 2013): spatial proxy-based; coastal protection; continental (EU); inputs: bathymetry, topography, slope, geomorphology, submarine and emerged habitats, wave regime, tidal range, relative sea level, storm surge, population density, infrastructures, artificial surfaces, main cultural sites, and the EU Corine Land Cover (CLC) dataset v.15 from the year 2000 at 100 m resolution (EEA 2011).
- Mangrove wind protection (Das et al. 2013): inputs: distance from mangrove forest and mangrove forest extent (three data sources for forest extent).
- InVEST Marine carbon storage and sequestration (Guerry et al. 2012): carbon storage and sequestration; inputs: habitat maps (MESH 2010; EUSeaMap, JNCC 2010), carbon stored, rate of carbon accumulation in sediments, and economic information such as the market/non-market value of stored/sequestered carbon.
- InVEST Food provision (Guerry et al. 2012): spatial proxy-based; food provision; inputs: price of harvested product, harvest rate, biomass, net recruitment per year, carrying capacity. The model estimates the quantity and monetary value of fish harvested by commercial fisheries; it is appropriate for single species, or groups of species with similar life histories, and estimates annual fish production; another module analyses the production and monetary value of farmed fish and shellfish and quantifies by-products of farming.
- InVEST Marine renewable energy (Guerry et al. 2012): spatial proxy-based; energy; inputs: wave condition data (height, peak period) and technology-specific capabilities (e.g. maximum capacity, performance tables). The model assesses potential wave power and wave energy based on wave conditions and technology-specific capabilities, and also models off-shore wind energy production.
- InVEST Recreation (Guerry et al. 2012): spatial proxy-based; recreation; inputs: visitation rates, recreational activities. The model predicts the spread of person-days of recreation, based on the locations of natural habitats and other features that factor into people's decisions about where to recreate.
- Fisheries, multiple ES (provisioning, cultural, supporting): spatial proxy-based; local; inputs: fishing fleet distribution, value of landings, fishing grounds distribution.
- Role of eelgrass in ES, food-web modelling (Plummer et al. 2013): local/regional; inputs: biomass, growth efficiency, consumption rates of prey, immigration and emigration rates, mortality, and biological groups (primary producers, invertebrates, fishes, birds, marine mammals, detrital pools).
- CO2 capture potential of seagrass restoration (Duarte et al. 2013): inputs: seagrass patch growth and patch survival in seagrass planting projects, estimates of seagrass CO2 sequestration per unit area for five seagrass species; no abiotic variables stated.
- Hydrodynamic model (Temmerman et al. 2012): phenomenological; mediation of flows (flood protection); hydrodynamic simulations of flood attenuation by a tidal marsh, with particular focus on the effects of spatial patterns of vegetation die-off; tidal marsh die-off, which may increase with ongoing global change (e.g. because of sea level rise), is expected to have non-linear effects on the reduction of coastal protection against flood waves.
- Hydrodynamic model (Shepherd et al. 2007): phenomenological; maintenance of physical, chemical and biological conditions (nutrient removal and carbon sequestration); hydrodynamic model to estimate nutrient removal and carbon sequestration in a UK estuary covered with tidal wetlands and mudflats, based on sediment dynamics and composition; the model also estimates the associated value of habitat created under a scenario of extensive managed realignment, and a cost-benefit analysis of the managed realignment is conducted too.
- Mangrove wind protection (Das and Crépin 2013): phenomenological; mediation of flows (wind protection); modelling of wind attenuation and of the protection offered by mangroves in the event of wind-related damage during storms, especially in areas affected by tangential wind.
- Mercury sequestration (Anastácio et al. 2013): process-based; mediation of waste, toxics and other nuisances; modelling of growth and mercury (Hg) sequestration by Bolboschoenus maritimus in the most contaminated area of a temperate shallow coastal lagoon historically subjected to heavy Hg load, under gradients of climate-driven variables; simulation of B. maritimus mercury sequestration under different environmental scenarios involving increases and decreases in temperature, salinity and cloud cover; the largest effects were related to high-salinity scenarios, but all variables presented an inverse relation with Hg sequestration.
- Ecosim/EcoPath (Alcamo et al. 2005): niche-based; provisioning: nutrition, biomass (fish production); fish production (landings) is estimated for three regional fisheries (Gulf of Thailand, Central North Pacific, Northern Benguela) under four different scenarios; the model computes dynamic changes in selected marine ecosystems as a function of fishing effort (Pauly et al. 2000); for its fish production estimates, it takes into account not only the future source of feed for aquaculture, but also future subsidies for the fishing industry, the management objectives of fishing (either to optimise employment or profits), and the impact of climate change on shifts in species distribution and abundance (Pauly et al. 2003); for all scenarios, fish catch (by weight) is maintained in the North Benguela fishery and not maintained in the Central North Pacific.
- Spatial model of coastal ecosystem services (Barbier 2012): mediation of flows (storm protection) and fish density; modelling of ecological production functions that decline across a coastal landscape.
Acknowledgements
This study was funded by project OPERAs FP7-ENV-2012-two-stage-308393. AB and WC acknowledge Labex OT-Med (ANR-11-LABX-0061) funded by the French Government Investissements d'Avenir program of the French National Research Agency (ANR) through the A*MIDEX project (ANR-11-IDEX-0001-02).
Criteria were rated according to expert opinion and synthesis of published studies.
Model type | Primary data sources for Europe | Remote sensing data
Proxy
Land cover maps
Vegetation data bases [START_REF] Chytrý | Vegetation survey: a new focus for Applied Vegetation Science[END_REF]. European Vegetation Archive (EVA)
Global Index of Vegetation-Plot Databases (http://www.givd.info) [START_REF] Dengler | The Global Index of Vegetation-Plot Databases (GIVD): a new resource for vegetation science[END_REF]. Potential lack of data layers (e.g. soils).
Mapping thematic variables like Land Use/Land Cover, Vegetation, Forest, Wetland, Water, Burnt area, etc.
Regional scale (for mapping landscape units), medium spatial resolution data like multitemporal MODIS, Spot Vegetation or MeteoSat data to follow vegetation dynamic.
Local scale (for mapping precise thematic variables): high spatial resolution Landsat8 or Sentinelle-2 (for passive RS data), RadarSat or TerraSar (for active RS data). [START_REF] Ayanu | Quantifying and Mapping Ecosystem Services Supplies and Demands: A Review of Remote Sensing Applications[END_REF][START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF][START_REF] Kuenzer | Earth observation satellite sensors for biodiversity monitoring: potentials and bottlenecks[END_REF][START_REF] Pettorelli | Satellite remote sensing for applied ecologists: opportunities and challenges[END_REF]
Phenomenological
Maps or proxy of landscape elements (van der Zanden et al., 2013) Topographic information including Digital Elevation Models: road networks, river networks (Lehner et al., 2004), coastlines (USGS HYDRO1K, 2015)
Biophysical data including soil data (European Soil Database, see Panagos et al., 2012). Remote sensing data: as for proxy models.
Niche-based
Occurrence data for all European terrestrial vertebrate species: 187 mammals [START_REF] Mitchell-Jones | The Atlas of European Mammals[END_REF], 445 breeding birds [START_REF] Hagemeijer | EBCC atlas of European breeding birds : their distribution and abundance[END_REF], and 149 amphibians and reptiles [START_REF] Gasc | Atlas of amphibians and reptiles in Europe[END_REF].
Refined data for 275 mammals, 429 birds and 102 amphibians across the Palearctic at 300 m resolution by incorporating 46 GlobCover land use/land cover classes [START_REF] Maiorano | Threats from Climate Change to Terrestrial Vertebrate Hotspots in Europe[END_REF]. Clustering at 10' by [START_REF] Zupan | Spatial mismatch of phylogenetic diversity across three vertebrate groups and protected areas in Europe[END_REF].
Extensive distribution data available for 1280 higher plants; digitized Atlas Flora Europeae. Trees: exhaustive data at 1 km² resolution ( http://www.efi.int/portal/virtual_library/information_services/mapping_service s/tree_species_maps_for_european_forests/).
More comprehensive species distribution data available on a country per country basis, and for specific regions within a same country.
Availability of phylogenies currently increasing, especially in Europe. Megaphylogenies for higher plants, mammals and birds for Europe [START_REF] Thuiller | Consequences of climate change on the tree of life in Europe[END_REF], with further complements for the Palearctic and for amphibians by Zupan et al.
Texture variables like object size and shape, compactness, homogeneity/heterogeneity, neighborhood relationships, fragmentation, connectivity, relevant for plant type specification and habitat characterization.
Global scale, Meteosat or SpotVegetation.
Regional scale, medium spatial resolution data like multitemporal MODIS; can characterize vegetation dynamics. [START_REF] Ayanu | Quantifying and Mapping Ecosystem Services Supplies and Demands: A Review of Remote Sensing Applications[END_REF][START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF][START_REF] Kuenzer | Earth observation satellite sensors for biodiversity monitoring: potentials and bottlenecks[END_REF][START_REF] Pettorelli | Satellite remote sensing for applied ecologists: opportunities and challenges[END_REF][START_REF] Pettorelli | Satellite remote sensing for applied ecologists: opportunities and challenges[END_REF].
Trait data for functional diversity: see trait-based models
Trait-based
Community composition data, locally measured or from vegetation databases as for proxy-based models; there are currently no public community composition databases for animals.
Site-level measurements following standard methods [START_REF] Cornelissen | Handbook of protocols for standardised and easy measurement of plant functional traits worldwide[END_REF].
Communal plant trait data bases, e.g. TRY (Kattge et al., 2011) Trait data bases for birds [START_REF] Pearman | Phylogenetic patterns of climatic, habitat and trophic niches in a European avian assemblage[END_REF], mammals (PanTHERIA [START_REF] Jones | PanTHERIA: a species-level database of life history, ecology, and geography of extant and recently extinct mammals[END_REF]), amphibians (http://amphibiaweb.org/), fish (FishBase (Froese and Pauly)), phytoplankton [START_REF] Litchman | Trait-based community ecology of phytoplankton[END_REF], lotic invertebrates [START_REF] Nicole | A Database of Lotic Invertebrate Traits for North America[END_REF], soil invertebrates (e.g. [START_REF] Salmon | Linking species, traits and habitat characteristics of Collembola at European scale[END_REF] for Collembola).
Plant/Canopy height using laser scanning (LiDAR).
Leaf phenology: satellite multi-temporal data or timelaps cameras (class by spatial resolution from coarse to fine) : AVHRR NDVI time series, MODIS, Sentinelle-2, RadarSat-2, Pleiades ; or aerial photos/hyperspectral data using airborne sensor.
Using Radiative Transfer Models (RTM), leaf mass per area (for Specific Leaf Area SLA estimation), leaf water content using RTM and/or SWIR wavelengths of RS data (for leaf dry matter content), chlorophyll content (for leaf nitrogen concentration estimation).
Methods operational at individual/population/community/ecosystem scales depending on the RS data source. [START_REF] Homolová | Review of optical-based remote sensing for plant trait mapping[END_REF][START_REF] Kuenzer | Earth observation satellite sensors for biodiversity monitoring: potentials and bottlenecks[END_REF]
Full process - large-scale
Climate forcing from observations (e.g. [START_REF] Mitchell | An improved method of constructing a database of monthly climate observations and associated high-resolution grids[END_REF][START_REF] Rudolf | The new "GPCC Full Data Reanalysis Version 5" providing high-quality gridded monthly precipitation data for the global land-surface is public available since[END_REF][START_REF] Weedon | Creation of the WATCH Forcing Data and Its Use to Assess Global and Regional Reference Crop Evaporation over Land during the Twentieth Century[END_REF]) or from a suite of climate models (e.g. CMIP5, [START_REF] Taylor | An Overview of CMIP5 and the Experiment Design[END_REF]). Atmospheric CO2 concentration from observations for the past and from models for the future RCPs (e.g. [START_REF] Keeling | Atmospheric CO2 records from sites in the sio air sampling network[END_REF]).
Land-use data historical reconstructions and future scenarios (e.g., [START_REF] Fader | Virtual water content of temperate cereals and maize: Present and potential future patterns[END_REF][START_REF] Von Essen | Cork before cattle: Quantifying Ecosystem Services in the Portuguese Montado and Questioning Ecosystem Service Mapping[END_REF], Hurtt et al., 2011[START_REF] Kaplan | The effects of land use and climate change on the carbon cycle of Europe over the past 500 years[END_REF], Klein Goldewijk et al., 2011, 2010, Ramankutty and Foley 1999).
Highly generalized classes of soil texture (e.g. by FAO/IIASA/ISRIC/ISSCAS/JRC, 2012).
Drainage direction map (e.g. [START_REF] Döll | Validation of a new global 30-min drainage direction map[END_REF] for models that apply a river routine scheme allowing for studying ecosystem services related to water flows
Global N deposition (e.g. [START_REF] Lamarque | Historical (1850-2000) gridded anthropogenic and biomass burning emissions of reactive gases and aerosols: Methodology and application[END_REF] 2011) and N fertilization of croplands (e.g. by [START_REF] Zaehle | Carbon benefits of anthropogenic reactive nitrogen offset by nitrous oxide emissions[END_REF]) for models accounting for C-N interactions.
1) RS data provide model inputs, 2) RS data serve to calibrate model parameters, 3) RS data serve to evaluate model output. Models that are run for scenario studies obviously must be prognostic models that are not fed by RS data.
Frequent terrestrial indicators that are derived from RS data are e.g. Land use / Land cover, topography, phenology, gross primary production, evapotranspiration, but see the reviews in [START_REF] Turner | Integrating Remote Sensing and Ecosystem Process Models for Landscape-to Regional-Scale Analysis of the Carbon Cycle[END_REF], [START_REF] Andrew | Potential contributions of remote sensing to ecosystem service assessments[END_REF]. At the global scale, the often used satellite instruments are Landsat, Meteosat, Spot-Vegetation, NOAA-AVHRR, ATSR, MISR, SeaWiFS (e.g. [START_REF] Ayanu | Quantifying and Mapping Ecosystem Services Supplies and Demands: A Review of Remote Sensing Applications[END_REF][START_REF] Kelley | A comprehensive benchmarking system for evaluating global vegetation models[END_REF][START_REF] Randerson | Systematic assessment of terrestrial biogeochemistry in coupled climate-carbon models[END_REF].
Remote Sensing derived land use data (e.g. [START_REF] Hansen | High-Resolution Global Maps of 21st-Century Forest Cover Change[END_REF] for forest extent, loss, and gain from 2000 to 2012) are typical model inputs.
Seasonal fraction of absorbed photosynthetically active radiation (fPAR) can be used as input by process-models [START_REF] Potter | Terrestrial ecosystem production: A process model based on global satellite and surface data[END_REF], but they are also useful tools for the validation of full process-models that simulate the fPAR based on the modelling of biophysical processes (Bondeau et al., 2007;Lindeskog et al., 2013).
Population density maps (Klein [START_REF] Goldewijk | Three centuries of global population growth: A spatial referenced population density database for 1700 -2000[END_REF][START_REF] Klein Goldewijk | Long-term dynamic modeling of global population and built-up area in a spatially explicit way: HYDE 3.1[END_REF]) when this is required by fire disturbance modelling. Communal plant trait databases for PFT parameterisation, e.g. TRY (Kattge et al., 2011).
Full process - local-landscape-scale
Input data (historical and scenarios) as for large-scale process-based models, with the additional possibility of local-landscape-scale data.
Marine examples of input data by model type:
- Phenomenological: fisheries landings, effort data, habitat GIS coverages, survival rates for habitat types, salinity (Jordan et al., 2012).
- Trait-based: seagrass patch growth, patch survival in seagrass planting projects, estimates of seagrass CO2 sequestration per unit area (Duarte et al., 2013).
- Full process, large-scale: submarine habitat cover [START_REF] Yee | Comparison of methods for quantifying reef ecosystem services: A case study mapping services for St[END_REF].
- Full process, local-landscape-scale: use of a biogeochemical model; dissolved organic carbon, particulate organic carbon, nutrients, carbonates, zooplankton, microzooplankton, phytoplankton [START_REF] Canu | Estimating the value of carbon sequestration ecosystem services in the Mediterranean Sea: An ecological economics approach[END_REF].
Appendix 1
Use of dynamic simulations in a food web model of central Puget Sound, Washington, USA, developed in the Ecopath with Ecosim software, to examine how the marine community may respond to changes in the coverage of native eelgrass (Zostera marina), and how these modelled responses can be assessed using an ecosystem services framework, expressing these services with economic currencies in some cases.
Model name | Main strengths | Limitations
Coastal protection (Liquete et al. 2013): main strengths: the assessment shown in the paper could help the comparison between European regions and support further national or regional scale studies of ecosystem services; in Europe it can have a direct application to the EU Biodiversity Strategy and the EU Floods Directive; on an international scale, this approach could be taken up by CBD (Convention on Biological Diversity) countries.
"757640",
"19228",
"9008",
"772005",
"18543"
] | [
"1041894",
"188653",
"234416",
"110792",
"1041894",
"76334",
"188653"
] |
01721328 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01721328/file/farzanehkaloorazi2016.pdf | Mohammadhadi Farzanehkaloorazi
Mehdi Tale
Stéphane Caro
email: [email protected]
Collision-free workspace of parallel mechanisms based on an interval analysis approach
Keywords: Parallel mechanism, Collision-free workspace, Mechanical interference, Interval analysis
This paper proposes an interval-based approach to obtain the obstacle-free workspace of parallel mechanisms containing one actuated prismatic joint per limb, which connects the base to the end-effector. The approach is demonstrated through two case studies, namely a 3-RPR planar parallel mechanism and the so-called 6-DOF Gough-Stewart platform. Three main features of the obstacle-free workspace are taken into account: the mechanical stroke of the actuators, the collision between limbs and obstacles, and limb interference. In this paper, a circular (planar case) or spherical (spatial case) obstacle is considered and its mechanical interference with the limbs and the edges of the end-effector is analyzed. It should be noted that considering a circular/spherical shape does not degrade the generality of the problem, since any kind of obstacle can be replaced by its circumscribed circle/sphere. Two illustrative examples are given to highlight the contributions of the paper.
Introduction
Parallel mechanisms (PMs) are known to be more precise and able to carry heavier loads than serial manipulators. [START_REF] Merlet | Parallel Robots[END_REF][START_REF] Gosselin | Determination of the workspace of planar parallel manipulators with joint limits[END_REF] Despite these advantages, they suffer from a limited workspace. Therefore, in practice, the presence of an obstacle inside the workspace should be taken into consideration. Computing the obstacle-free workspace of a PM yields a conservative workspace within which all motions are guaranteed to be collision-free. Furthermore, knowing the obstacle-free workspace is a definite asset for path planning and obstacle avoidance when controlling PMs.
The problems of path planning and obstacle avoidance have been frequently investigated in the literature. [START_REF] Bohigas | Planning singularity-free paths on closed-chain manipulators[END_REF] In ref. [4], Zi et al. considered the possible collisions of a cooperative cable-driven parallel robot for multiple mobile cranes and used sensor technology in order to avoid obstacles. Laliberte et al. in ref. [5] calculated the motion of the manipulator using the velocity inversion of a redundant manipulator, which optimizes the distance to obstacles; the algorithm includes joint limit constraints, collision detection and heuristics for the solution of typical difficult cases, thereby leading to a high success rate. Brooks et al. proposed an efficient algorithm to generate collision-free paths for a manipulator with five or six revolute joints. Yang et al. investigated dynamic collision-free trajectory generation in a non-stationary environment, using biologically inspired neural network approaches. Brock and Khatib [START_REF] Brock | Elastic strips: A framework for motion generation in human environments[END_REF] presented the elastic strip framework, which enables the execution of a previously planned motion in a dynamic environment for robots with many degrees of freedom (DOF). Khatib et al. [START_REF] Khatib | Robots in human environments: Basic autonomous capabilities[END_REF] presented developments of models, strategies and algorithms dealing with a number of autonomous capabilities that are essential for robot operations in human environments; these capabilities include integrated mobility and manipulation, cooperative skills between multiple robots, the ability to interact with humans, and efficient techniques for real-time modification of collision-free paths. In ref. [8], Komainda and Hiller presented a concept for the motion control of redundant manipulators in a changing environment. In ref. [9], Jiménez et al. described a general approach covering distance computation algorithms, hierarchical object representations, orientation-based pruning criteria and space partitioning schemes. In ref. [10], Wenger and Chedmail illustrated the collision-free workspace of serial manipulators. Caro et al., in ref. [11], introduced a new method, based on numerical constraint programming, to compute a certified enclosure of the generalized aspects. In ref. [12], the collision-free workspace of a planar PM was investigated. This paper aims at extending the latter study by proposing a general approach applicable to 6-DOF PMs, such as the Gough-Stewart platform.
Most of the approaches presented in the literature are case dependent and cannot be generalized or extended to other cases. In ref. [13], the workspace of 3-RRR PMs was investigated. Even for this particular PM, the approach applies only under certain assumptions; in the latter paper, the mechanism must be symmetric. It should also be noted that, in the foregoing paper, the workspace is investigated without considering mechanical interference or obstacles within the workspace.
The mathematical framework used in this paper is based on interval analysis. [START_REF] Moore | Methods and Applications of Interval Analysis[END_REF][START_REF] Jaulin | Set-membership localization with probabilistic errors[END_REF][START_REF] Kaloorazi | Determining the maximal singularity-free circle or sphere of parallel mechanisms using interval analysis[END_REF] Interval analysis is a reliable method to evaluate functions and is used frequently in the field of robotics. [START_REF] Merlet | Interval analysis and robotics[END_REF] An interval variable $[x] = [\underline{x}, \overline{x}]$ is the set of all real numbers from the lower bound $\underline{x}$ to the upper bound $\overline{x}$. More details about the application of interval analysis to the kinematic investigation of robotic mechanical systems are given in Section 2 and in ref. [14].
In the context of the application of interval analysis to the investigation of the kinematic properties of PMs, several papers have been published. Most of them dealt with solving the Forward Kinematic Problem (FKP) [START_REF] Merlet | Solving the forward kinematics of a gough-type parallel manipulator with interval analysis[END_REF] and obtaining the workspace. [START_REF] Kaloorazi | Determining the maximal singularity-free circle or sphere of parallel mechanisms using interval analysis[END_REF][START_REF] Kaloorazi | Interval-Analysis-Based Determination of the Singularity-Free Workspace of Gough-Stewart Parallel Robots[END_REF][START_REF] Farzaneh Kaloorazi | On the Maximal Singularity-free Workspace of Parallel Mechanisms via Interval Analysis[END_REF] Merlet, in ref. [21], investigated the workspace of PMs. Several papers have addressed the determination of the singularity-free workspace using a geometrical approach [START_REF] Kaloorazi | Determination of the maximal singularity-free workspace of 3-dof parallel mechanisms with a constructive geometric approach[END_REF] or using interval analysis, [START_REF] Kaloorazi | Determining the maximal singularity-free circle or sphere of parallel mechanisms using interval analysis[END_REF] the latter demonstrating the state of the art of applying interval analysis to such problems. In ref. [23], the authors used interval analysis to investigate the orientation workspace of a parallel kinematic machine. Merlet, in ref. [24], proposed an algorithm that makes it possible to determine almost all the geometries of a simplified Gough platform whose workspace should include an arbitrary set of poses. Few studies have analyzed the workspace of PMs while considering mechanical interference and the presence of obstacles, although such an analysis is a definite asset both in the design and in the control of PMs; this is the main concern of this paper.
Few studies have been conducted on obtaining the collision-free workspace of PMs, mainly because these mechanisms have more complex kinematic expressions than their serial counterparts. Interval analysis has mostly been used for solving the FKP and for obtaining the theoretical workspace, i.e., the workspace in which mechanical interference and obstacles are not taken into account. Some papers have addressed the collision-free workspace of cable-driven PMs, which is out of the scope of the present paper; in those papers, the approach is mostly based on numerical techniques in which all the configurations and poses of the end-effector (EE) are tested in order to identify configurations for which a collision may occur. [START_REF] Otis | Determination and management of cable interferences between two 6-dof foot platforms in a cable-driven locomotion interface[END_REF] For instance, interval analysis has been used to investigate the workspace of cable-driven PMs. [START_REF] Gouttefarde | Interval-analysis-based determination of the wrench-feasible workspace of parallel cable-driven robots[END_REF]
The remainder of this paper is organized as follows. First, the mathematical framework of interval analysis is broadly reviewed. Then, the concept of the algorithm developed to obtain the obstacle-free workspace and the corresponding pseudo-code are explained in detail. Finally, the results obtained for a 3-RPR and a 6-UPS PM are given and the corresponding obstacle-free workspaces are illustrated.
Interval Analysis and Mathematical Framework
Several people had the idea to bound rounding errors with intervals: e.g. Dwyer (1951), [START_REF] Dwyer | Computation with approximate numbers[END_REF] Sunuga (1958), [START_REF] Sungana | Theory of Interval Algebra and Application to Numerical Analysis[END_REF] Warmus (1956) [START_REF] Warmus | Calculus of approximations[END_REF] and Wilkinson (1980). [START_REF] Wilkinson | Turings work at the national physical laboratory and the construction of pilot ace, deuce, and ace[END_REF] However, interval mathematics has been widespread in the research community with Moore's book "Interval Analysis" in 1966. [START_REF] Moore | Interval Analysis. Series in Automatic Computation[END_REF] Moore's book transformed this simple idea into a viable tool for error analysis. Instead of merely treating rounding errors, Moore extended the use of interval analysis to bound the effect of errors from all sources, including approximation and errors in data. [START_REF] Hansen | Global Optimization Using Interval Analysis: Revised and Expanded[END_REF] In the literature, interval analysis is regarded as a powerful numerical method to solve a wide range of problems such as, among others, circumventing round-off errors, [START_REF] Merlet | Solving the forward kinematics of a gough-type parallel manipulator with interval analysis[END_REF] solving system of equations, optimization problem [START_REF] Hansen | Global Optimization Using Interval Analysis: Revised and Expanded[END_REF] and proper workspace presentation, etc.. [START_REF] Merlet | Interval analysis and robotics[END_REF][START_REF] Hansen | Global Optimization Using Interval Analysis: Revised and Expanded[END_REF][START_REF] Hao | Multi-criteria optimal design of parallel manipulators based on interval analysis[END_REF][START_REF] Chablat | An interval analysis based study for the design and the comparison of three-degrees-of-freedom parallel kinematic machines[END_REF] Furthermore, interval analysis provides an interactive visualization in the progress of calculation which is a definite asset in 2D and 3D representations of manipulator workspaces. Recently, upon revealing some remarkable features of interval analysis, such as finding the solution of a problem within some finite domains and taking into account the numerical computer round-off errors, it has stimulated the interests of many researchers in the robotic community to deal with complicated problems such as FKP and inverse kinematic problem (IKP), [START_REF] Merlet | Solving the forward kinematics of a gough-type parallel manipulator with interval analysis[END_REF][START_REF] Rao | Inverse kinematic solution of robot manipulators using interval analysis[END_REF] calibration [START_REF] Fusiello | Globally convergent autocalibration using interval analysis[END_REF][START_REF] Wabinski | Time Interval Analysis of Interferometer Signals for Measuring Amplitude and Phase of Vibrations[END_REF] and the determination of the singularity-free workspace of parallel manipulators. [START_REF] Kaloorazi | Determining the maximal singularity-free circle or sphere of parallel mechanisms using interval analysis[END_REF][START_REF] Kaloorazi | Interval-Analysis-Based Determination of the Singularity-Free Workspace of Gough-Stewart Parallel Robots[END_REF] In the evolutionary techniques, the chance of being trapped in a local optimum is highly dependent of the initial population and initial search space. 
However, in the case of interval analysis, the only parameter required to obtain the global optimum is a proper choice of the search space. For computing the maximum singularity-free workspace of PMs and other kinematic properties, [START_REF] Merlet | Interval analysis and robotics[END_REF][START_REF] Hao | Multi-criteria optimal design of parallel manipulators based on interval analysis[END_REF] interval analysis entails the following advantages: (a) contrary to other tools, which may result in a lengthy computation process and may converge to a local optimum, interval analysis is not a black box, since it requires a combination of heuristics and numerical concepts to be effective; (b) it makes it possible to find all the solutions satisfying a set of equations and inequalities within a given search space; [START_REF] Merlet | Solving the forward kinematics of a gough-type parallel manipulator with interval analysis[END_REF][START_REF] Oetomo | Certified Workspace Analysis of 3RRR Planar Parallel Flexure Mechanism[END_REF] (c) for two- and three-dimensional problems, it allows one to visualize the evolution of the solution and to monitor the procedure, thereby providing better insight into the problem; and (d) it allows uncertainties in the model of the robot to be taken into account.
Interval analysis is a branch of mathematics that works with closed intervals instead of exact numbers. An interval $[x]$ is the set of real numbers between two bounds and can be represented as follows:

$$[x] = [\underline{x}, \overline{x}] = \{x \in \mathbb{R} \mid \underline{x} \le x \le \overline{x}\}, \quad (\underline{x} \le \overline{x}), \qquad (1)$$

where $\underline{x}$ and $\overline{x}$ are the lower and upper bounds, respectively. All mathematical operations, such as addition and multiplication, can be performed on intervals. For instance, [START_REF] Moore | Methods and Applications of Interval Analysis[END_REF]

$$[x] + [y] = [\underline{x}, \overline{x}] + [\underline{y}, \overline{y}] = [\underline{x} + \underline{y},\ \overline{x} + \overline{y}], \qquad (2)$$

$$[x] \times [y] = \big[\min(\underline{x}\,\underline{y},\ \underline{x}\,\overline{y},\ \overline{x}\,\underline{y},\ \overline{x}\,\overline{y}),\ \max(\underline{x}\,\underline{y},\ \underline{x}\,\overline{y},\ \overline{x}\,\underline{y},\ \overline{x}\,\overline{y})\big]. \qquad (3)$$

Moreover, a function of real numbers such as $f(x)$ can be evaluated as an interval from a given interval $[x]$, which results in an interval $[f] = f([x])$. For example, for a monotonically increasing function like $f(x) = x^3$,

$$[f] = f([x]) = [f(\underline{x}),\ f(\overline{x})] = [\underline{x}^3,\ \overline{x}^3]. \qquad (4)$$
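The elementary operations of Eqs. (1)-(4) can be sketched in a few lines of Python; the class below assumes exact arithmetic on the bounds, whereas a rigorous implementation would additionally round the lower bound down and the upper bound up so that computer round-off errors remain enclosed.

```python
class Interval:
    """Closed interval [lo, hi] with the elementary operations of Eqs. (1)-(4)."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):                       # Eq. (2): interval addition
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):                       # Eq. (3): interval multiplication
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def cube(self):                                 # Eq. (4): x**3 is monotonically increasing
        return Interval(self.lo ** 3, self.hi ** 3)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Natural evaluations on two sample intervals.
x, y = Interval(-1.0, 2.0), Interval(3.0, 4.0)
print(x + y)        # [2.0, 6.0]
print(x * y)        # [-4.0, 8.0]
print(x.cube())     # [-1.0, 8.0]
```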
The whole concept of interval analysis is based on recursively bisecting a box (or a hyper-box in a higher-dimensional space), the so-called branch-and-prune approach, [START_REF] Van Hentenryck | Solving polynomial systems using a branch and prune approach[END_REF] while applying a well-defined interval algebra and natural evaluations, in such a way that the retained boxes converge toward the desired solution.
Obstacle Avoidance Formulation via Geometrical Concept
For the sake of clarity, the proposed method is first fully described for the 3-RPR PM. It should be noted that, without loss of generality, the obstacle is considered to be a circle (planar case) or a sphere (spatial case), since any kind of obstacle can be replaced by its circumscribed circle or sphere. In order to handle obstacle avoidance, the links of the limbs and the edges of the EE are, in general, considered to be straight line segments. Moreover, all links can be considered to be perfect cylinders (rectangles in the planar case). If a link has a more complicated shape, it should be simplified to a cylinder that contains all parts of the link. Although this may eliminate some collision-free parts of the workspace, it is up to the user to balance computational time against the precision of the result. This is a common approach for computing the constant-orientation workspace of parallel robots. In the case of 6-DOF PMs, since a six-dimensional space cannot be visualized directly, the rotational DOFs are fixed in order to obtain a graphical representation of the workspace.
Fig. 1. The 3-RPR planar PM and the circular obstacle P. (Recoverable labels: A_1, A_2, A_3, B_1, B_2, B_3, φ, l_x, l_y, ρ_1, ρ_2, ρ_3, x, y, O.)
Fig. 2. The two collision cases: (a) an interval line through the fixed point A_1 and a box [B_1]; (b) an interval line through two boxes [B_1] and [B_2]. (Recoverable labels: P_1, P_2, obstacle.)
As can be observed in Fig. 1, in the case of planar PMs the obstacle is a circle; $x_P$ and $y_P$ are the Cartesian coordinates of its center point and $r_P$ its radius. The problem of obtaining the obstacle-free workspace via interval analysis can be divided into two cases: the collision between the obstacle and (a) a line passing through one point and one box, or (b) a line passing through two boxes. These two cases are depicted in Fig. 2. The first case applies to links that are connected to the fixed frame via a revolute joint, for instance links $A_1B_1$, $A_2B_2$ and $A_3B_3$ of the 3-RPR planar PM shown in Fig. 1(a). The second case is more general and applies to the medial or distal links of a limb; there is no such link in the 3-RPR planar PM. Moreover, the edges of the EE fall into the second case, i.e., $B_1B_2$, $B_2B_3$ and $B_3B_1$ in the 3-RPR planar PM.
Interval distance to obstacle
A simple solution to obtain the obstacle-free workspace is to write the equation for the distance from a point to a line in the 2D space:

$$d = \sqrt{\left(\frac{x_P + m\,y_P - m\,c}{m^2 + 1} - x_P\right)^{2} + \left(m\,\frac{x_P + m\,y_P - m\,c}{m^2 + 1} + c - y_P\right)^{2}}, \qquad L:\ y = m\,x + c, \qquad (5)$$

where $d$ stands for the distance from point $P$ to line $L$, $x_P$ and $y_P$ are the $x$ and $y$ coordinates of the obstacle $P$, respectively, $m$ stands for the slope of the line and $c$ is a constant. In the case of an interval line defined by a point and a box, i.e., the first aforementioned case, one has

$$m = \frac{y_A - [y_B]}{x_A - [x_B]}, \qquad c = y_A - m\,x_A, \qquad (6)$$

in which $x_A$ and $y_A$ stand for the coordinates of point $A$, and $[x_B]$ and $[y_B]$ stand for the components of box $[B]$. These lines are referred to as collision lines; they are assigned to the links of the mechanism and, in some configurations, may interfere with the obstacle.
Collision of limbs with obstacle
Resorting to interval functions, [START_REF] Merlet | Parallel Robots[END_REF] one can apply interval variables to Eq. (5) and obtain an interval of distances, $[d]$. For example, in the case of the 3-RPR PM shown in Fig. 1(a), with the geometric parameters reported in Table I, the collision line of the first limb passes through $A_1$, which is a fixed point, and a box of the search space of the interval algorithm. The task of the algorithm is to determine the distance from obstacle $P$ to the collision line. If $\underline{d} > r_P$, i.e., if the distance of every line passing through the fixed point and the box under investigation is larger than the obstacle radius, then the corresponding box is fully inside the obstacle-free workspace. On the other hand, if $\overline{d} < r_P$, the box is fully outside the aforementioned workspace. Finally, if $0 \in [d] - r_P$, i.e., $r_P \in [d]$, the box goes for further bisection. The result of the above procedure is illustrated in Fig. 3(a). It is noteworthy that the green circles in the figures correspond to the lower limits of the actuators.
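As an illustration of this classification step, the self-contained sketch below evaluates Eq. (5) with the interval slope of Eq. (6) and sorts a candidate box into the inside, outside or undetermined category for a single limb. The helper names, the test data and the restriction to the obstacle test alone (ignoring the actuator strokes and the EE edges) are choices made for this example, not part of the authors' implementation.

```python
import math

# Tuple-based interval helpers: an interval is a pair (lo, hi).
def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])
def i_mul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))
def i_div(a, b):
    assert b[0] > 0 or b[1] < 0            # divisor interval must not contain zero
    p = [a[0]/b[0], a[0]/b[1], a[1]/b[0], a[1]/b[1]]
    return (min(p), max(p))
def i_sqr(a):
    lo, hi = min(abs(a[0]), abs(a[1])), max(abs(a[0]), abs(a[1]))
    if a[0] <= 0.0 <= a[1]:
        lo = 0.0
    return (lo*lo, hi*hi)
def i_sqrt(a): return (math.sqrt(max(a[0], 0.0)), math.sqrt(max(a[1], 0.0)))

def interval_distance_to_obstacle(A, box, P):
    """Interval enclosure of Eq. (5): distance from obstacle centre P to the
    collision line through the fixed point A and the box [B] (slope from Eq. (6)).
    Assumes the box lies strictly to one side of A so the slope is finite."""
    xA, yA = A
    xP, yP = P
    m = i_div(i_sub((yA, yA), box[1]), i_sub((xA, xA), box[0]))   # Eq. (6)
    c = i_sub((yA, yA), i_mul(m, (xA, xA)))
    num = i_sub(i_add((xP, xP), i_mul(m, (yP, yP))), i_mul(m, c))
    den = i_add(i_sqr(m), (1.0, 1.0))
    xf = i_div(num, den)                    # x-coordinate of the foot of the perpendicular
    dx2 = i_sqr(i_sub(xf, (xP, xP)))
    dy2 = i_sqr(i_sub(i_add(i_mul(m, xf), c), (yP, yP)))
    return i_sqrt(i_add(dx2, dy2))

def classify_box(A, box, P, r_P):
    """'in' if every line through A and the box clears the obstacle,
    'out' if every such line hits it, 'split' otherwise."""
    d = interval_distance_to_obstacle(A, box, P)
    if d[0] > r_P:
        return "in"
    if d[1] < r_P:
        return "out"
    return "split"

# Hypothetical data: fixed base point A1, a candidate box for B1, obstacle P.
A1, P, r_P = (0.0, 0.0), (4.0, 3.0), 0.5
box = ((6.0, 7.0), (1.0, 2.0))              # ([x_B], [y_B])
print(classify_box(A1, box, P, r_P))        # 'in' for these numbers
```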
Collision of edges of EE with obstacle
The procedure for obtaining the obstacle-free workspace is not complete yet. Indeed, the RPR limb should be regarded as a segment instead of a line. Therefore, if $\|A - [B]\|_2 < \|A - P\|_2$, then $[B]$ does not interfere with the obstacle and should be a member of the obstacle-free workspace, Fig. 3(b). In addition to the limbs, the interference between the edges of the EE and the obstacle should also be taken into account. In the case of 3-DOF PMs, a simple EE can be regarded as a triangle; therefore, its edges correspond to lines passing through two intervals, i.e., the second case. In this case, obtaining the distance from the obstacle to the line using Eq. (5) leads to a very time-consuming and inefficient process. Figure 4 represents the obstacle-free workspace in the case where only the collision of edge $B_1B_2$ with the obstacle is taken into consideration. The result shown in Fig. 4 is not desirable: compared to Fig. 3(b), the white area, which corresponds to unassigned boxes, is vast. Furthermore, there are too many red and green boxes in the middle of the workspace, whereas, from the previous results, the number of boxes was expected to be lower; this shows that the computational load associated with these boxes is high. The latter problem is known as blow-up, a common phenomenon in interval analysis that appears when several intervals are multiplied together and when the degree of complexity of the interval function becomes high. [START_REF] Hansen | Global Optimization Using Interval Analysis: Revised and Expanded[END_REF] Hence, a geometrical methodology is proposed in order to eliminate those parts of the workspace for which the edges of the EE collide with the obstacle. Since the workspace is represented for a constant orientation, the slope of the interval line is constant; in other words, once the dimensions (width and height) and position of $[B_1]$ are determined, the position and dimensions of $[B_2]$ are known. The result of considering a constant slope for the interval line passing through two boxes is represented in Fig. 5. In the latter figure, the edge $B_1B_2$ is regarded as a line and the workspace is obtained for a constant orientation of φ = π/4. However, the result is not complete because $B_1B_2$ should be regarded as a line segment.
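The blow-up mentioned above is an instance of the dependency problem of interval arithmetic: each extra occurrence of the same interval variable in an expression widens the enclosure. The short, self-contained example below (with illustrative helper names) shows the effect on a simple function.

```python
def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])
def i_mul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

x, one = (0.0, 1.0), (1.0, 1.0)
print(i_mul(x, i_sub(one, x)))   # (0.0, 1.0): true range of x*(1-x) on [0, 1] is [0, 0.25]
print(i_sub(x, i_mul(x, x)))     # (-1.0, 1.0): same function written as x - x*x, x occurs three times
```

The more often an interval variable is repeated, as happens when both endpoints of an EE edge are intervals in Eq. (5), the looser the enclosure becomes and the more bisections are required.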
In order to find the obstacle-free workspace, only considering B 1 B 2 , a geometrical approach, based on the fact that the workspace is obtained for a constant orientation, is combined with interval analysis. For the sake of better understanding, first a simple example is discussed here. The example can be observed in Fig. 6. Assuming that the problem consists in obtaining a shape within the workspace which should be subtracted from the full workspace. In the constant orientation case, the slope of edge B 1 B 2 of the EE is constant, in this example φ = π/4. The workspace consists of the set of locations that point B 1 can reach in the fixed frame, because the moving frame is attached to the EE in B 1 . As it can be observed from Fig. 6, the shape is a rounded rectangle. A rounded rectangle is a shape which is generated by sweeping the center of a circle along a segment line. It is a rectangle having a rounded cap instead of right angle. An example of rounded rectangle is shown in Fig. 6. The next step is to define the shape using geometrical reasoning. By considering three geometrical constraints for the center of obstacle circle, the shape can be obtained: The center is subject to . This is a modified result of Fig. 4. A better solution is represented in Fig. 7(a).
P B 1 B 2 φ = π 4 P [x C ] [y C ]
x y Fig. 6. Technique to find the shape that should be subtracted from the full workspace. This rounded rectangle is the shape within the workspace that shows the locations of B 1 where edge B 1 B 2 collides with the obstacle circle.
For the case study of this paper, one has
[x_C] = x_P - [0, B_1B_2 cos φ]   (7)
[y_C] = y_P - [0, B_1B_2 sin φ].   (8)
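To make these interval constraints concrete, the following minimal Python sketch evaluates Eqs. (7)-(8) with a tiny interval type. The Interval class, the function name and the numerical values are illustrative assumptions introduced for the example; the original computations were performed with INTLAB.

from dataclasses import dataclass
import math

@dataclass
class Interval:
    lo: float
    hi: float
    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

def collision_centre_box(x_p, y_p, edge_len, phi):
    """Interval of obstacle-centre coordinates swept by edge B1B2 (Eqs. 7-8)."""
    x_c = Interval(x_p, x_p) - Interval(0.0, edge_len * math.cos(phi))
    y_c = Interval(y_p, y_p) - Interval(0.0, edge_len * math.sin(phi))
    return x_c, y_c

# Example with the case-study obstacle at P = (20, 0), |B1B2| = 10 and phi = pi/4
print(collision_centre_box(20.0, 0.0, 10.0, math.pi / 4))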
Figure 7(a) illustrates the results of the foregoing methodology, considering only the collision of edge B 1 B 2 with the obstacle, for φ = 0. In this figure, a rounded-end rectangle inside the workspace is eliminated. Upon considering all the three edges of the EE, Fig. 7(b) represents the obstacle-free workspace of the 3-RPR planar PM for φ = π/4.
Algorithm 1, in the appendix, represents the pseudo-code of the introduced method. It is based on a branch and prune algorithm. [START_REF] Van Hentenryck | Solving polynomial systems using a branch and prune approach[END_REF] Note that f, which indicates the number of actuated joints of the mechanism, is 3 in the case of the planar PM. In the upcoming section, we will extend the algorithm to a higher-DOF mechanism, i.e., the 6-UPS PM, for which f = 6. Another variable, denoted g, refers to the dimension of the boxes: it is 2 in the case of the planar PM and 3 in the case of the spatial PM. In lines 5-7, the positions of the three distal joints are determined as intervals in the fixed frame. In line 9, for all limbs, the distance from the distal to the proximal joint, i.e., the length of the actuator, is evaluated in order to be checked in line 15. In line 10, Line(A, B) is a function which creates a collision line passing through points A and B; since the second argument is an interval [B_i], the collision line is an interval line. Distance(L, P) computes the Euclidean distance from point P to line L. Therefore, [d_i], i = 1, 2, 3, are intervals of possible distances from P to the corresponding line of the i-th actuator.
Algorithm 1 The pseudo-code of the algorithm to obtain the obstacle-free workspace of a PM with prismatic actuation. Lines followed by % are comments; ∨ and ∧ stand for logical OR and logical AND, respectively.
1: Input: Design parameters of the mechanism; x_P, y_P and r_P for the properties of the obstacle; [B] as search space; ρ_min and ρ_max as mechanical strokes; ε as the desired accuracy; f, the number of degrees of freedom of the mechanism (i.e., 3 for the 3-RPR planar PM and 6 for the 6-UPS spatial PM); g, the dimension of the boxes.
2: Output: L_in as the constant-orientation obstacle-free workspace of the mechanism; L_out as the boxes which are outside the aforementioned workspace.
3: L(1) = [B]^g   % Position of the EE in g-dimensional space
4: while IsEmpty(L) ≠ 1 do
5:   for i from 1 to f do
6:     [B_i] = L(1) + (b_i - b_1)_{O_xy}   % Position of box [B_i] in the fixed frame O_xy
7:   end for
8:   for i from 1 to f do
9:     [ρ_i] = ||A_i - [B_i]||   % Length of prismatic actuator ρ_i
10:    [d_i] = Distance(Line(A_i, [B_i]), P)   % Distance from P to the line passing through A_i and [B_i]
11:  end for
12:  for i from 1 to f do
13:    [t_i] = Distance(Line([B_i], [B_{i+1}]), P)
14:  end for
15:  if ρ_min < [ρ_{1,...,f}] < ρ_max ∧ (r_P < [d_{1,...,f}] ∨ ||A_{1,...,f} - [B_{1,...,f}]||_2 < ||A_{1,...,f} - P||_2) ∧ r_P < [t_{1,...,f}] then   % x̄ stands for the supremum of an interval
16:    L_in ←- [B_1]
17:  else if (ρ_max < [ρ_1] ∨ ... ∨ ρ_max < [ρ_f]) ∨ ([ρ_1] < ρ_min ∨ ... ∨ [ρ_f] < ρ_min) ∨ (d̄_1 < r_P ∧ ||A_1 - [B_1]||_2 > ||A_1 - P||_2) ∨ ... ∨ (d̄_f < r_P ∧ ||A_f - [B_f]||_2 > ||A_f - P||_2) then
18:    L_out ←- [B_1]
19:  else if Size([B_1]) > ε then
20:    L(end-1, end) ←- Bisect([B_1])   % Bisect [B_1] by the largest edge and add two new boxes at the end of L
21:  end if
22:  ShiftLeft(Empty(L(1)))   % Erase data of L(1) and shift one cell to the left
23: end while
Lines 12-14 define the interval lines passing through two intervals. Line 15 is the general if-clause: if all [ρ_i] are in the acceptable range and all [d_i] distances from P to the collision lines are greater than r_P, then the box under study is a member of the obstacle-free workspace L_in. There is an extra condition to ascertain that, if the distance from the box [B_i] to the fixed point A_i is lower than the distance from P to A_i, the box should be inside the obstacle-free workspace; this accounts for the fact that A_iB_i is a segment line rather than an infinite line. On the other hand, if only one of the aforementioned criteria is violated, the box is moved to the outer-box list L_out. In line 19, if the box under study is partially inside the obstacle-free workspace and at least one of its dimensions is still larger than the desired precision, it is bisected along its largest edge (line 20) and two new boxes are added at the end of list L. In line 22, ShiftLeft(Empty(L(1))), the first column of list L is erased and all other elements of L are moved one cell back to fill the gap. In other words, in each loop of the while-clause one box from list L is investigated. At that point, it is known that the aforementioned box is either located inside, outside or on the boundaries of the desired workspace;
therefore, there is no need to keep the box, and erasing it helps the computer free up memory. The algorithm continues until the prescribed precision is reached.
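For readers who prefer executable code to pseudo-code, the following Python sketch reproduces the branch-and-prune skeleton of Algorithm 1. The two predicates surely_inside and surely_outside are hypothetical placeholders for the interval tests of lines 15 and 17 (stroke limits and obstacle clearance); the snippet is an illustration under these assumptions, not the INTLAB implementation used in this work.

from collections import deque

def branch_and_prune(box0, surely_inside, surely_outside, eps):
    """box0 is a list of (lo, hi) pairs; returns inner and outer box lists."""
    L, L_in, L_out = deque([box0]), [], []
    while L:
        box = L.popleft()                       # one box per loop (line 22)
        if surely_inside(box):
            L_in.append(box)                    # line 16
        elif surely_outside(box):
            L_out.append(box)                   # line 18
        elif max(hi - lo for lo, hi in box) > eps:
            L.extend(bisect_largest_edge(box))  # lines 19-20
        # boxes smaller than eps that are neither in nor out lie on the boundary
    return L_in, L_out

def bisect_largest_edge(box):
    """Split a box along its largest dimension into two boxes."""
    k = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    lo, hi = box[k]
    mid = 0.5 * (lo + hi)
    left, right = list(box), list(box)
    left[k], right[k] = (lo, mid), (mid, hi)
    return left, right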
Results for 3-RPR PM
So far, an interval-based method to obtain the obstacle-free workspace of planar PMs has been introduced, with a 3-RPR planar PM as a case study, and results have been shown for the obstacle-free workspace obtained by considering only the collisions of one limb with its environment, Fig. 3(b), and only the collisions of the edges of the EE, Fig. 7(b). The next step is to put together the obstacle-free workspaces of all limbs. Figure 8(a) represents the obstacle-free workspace of the mechanism for a prescribed orientation, φ = π/4, by considering the collisions of all limbs with the obstacle. It should be noted that in Fig. 8(a) the obstacle lies only inside the collision space of the first limb; for the second and third limbs, the obstacle is located outside the corresponding red boxes. The latter is due to the fact that the workspace is depicted in the fixed coordinate frame and, since a constant orientation of the EE is considered, the other limbs have to be translated into the fixed coordinate frame via the transformation of the moving frame. For instance, in the case of the 3-RPR planar PM for φ = π/4, if the obstacle is moved along the vector -{l_x cos(π/4), l_y sin(π/4)}^T, which is the negative of the position of joint B_2 represented in the moving frame, it will be located in the red boxes obtained by considering the collisions of the second limb with the obstacle. The final step to obtain the obstacle-free workspace is to merge and intersect the obtained results, i.e., those considering the limb collisions, Fig. 8(a), and all edges of the EE, Fig. 7(b), with the obstacle. The final result is illustrated in Fig. 8(b) and, as could be expected, the obstacle divides the workspace into three separate parts which are not connected to each other. The obtained obstacle-free workspace can be used in obstacle avoidance problems: since it is obtained via interval analysis, any path whose points are located inside the obstacle-free workspace is guaranteed to be free of collisions with the obstacle.
The proposed interval-based method to obtain the obstacle-free workspace of planar PMs can be readily extended to more complicated and spatial PMs. Indeed, one can solve the IKP and use the obtained equations to determine the interval positions of all joints, and then define the aforementioned segment lines in order to obtain the distances to the obstacle. It takes approximately 3 min to compute the obstacle-free workspace of the 3-RPR planar PM shown in Fig. 1(a), with a precision of 10^{-6} percent of the initial search space, on a 2 GHz processor using the INTLAB 6 toolbox. In the case of higher-DOF mechanisms having more links, the collision computation of each additional limb adds to the computational time. For the sake of simplicity, the workspace of the mechanisms studied in this paper is represented for a constant orientation. This simplification does not affect the generality of the method, since the computation can be repeated for different orientations so as to represent a higher-dimensional collision-free workspace (a 3D workspace in the case of a planar mechanism, the third axis being the orientation of the EE about the z-axis).
Results for a 6-UPS PM
In this section, the aforementioned method is applied to obtain the obstacle-free workspace of a 6-UPS spatial PM. The proposed method is free from dimensional restrictions and can be applied to PMs with a higher number of DOF, at the sole expense of computational time. A 6-UPS spatial PM is shown in Fig. 9(a) and its geometric parameters are given in Table II. As can be observed from Fig. 9, a spherical obstacle P is located inside the workspace of the mechanism. The workspace of the mechanism, regardless of the obstacle, is obtained using interval analysis, Fig. 9(b). [START_REF] Merlet | Interval analysis and robotics[END_REF][START_REF] Oetomo | Certified Workspace Analysis of 3RRR Planar Parallel Flexure Mechanism[END_REF] The workspace is depicted considering the mechanical strokes of the prismatic joints and the spherical and universal joint limits.
The main advantage of using this method is that it can readily be extended to 6 DOF; in fact, this is a feature of the proposed method. Algorithm 1 can be directly used to determine the obstacle-free workspace of the 6-UPS spatial PM. By choosing the number of DOF f and the dimension of the boxes g, the algorithm computes the collision-free workspace of the corresponding PM, i.e., 3-RPR or 6-UPS. In line 6, the positions of all vertexes of the EE are obtained. The other lines of the algorithm are the same as before, keeping in mind that the command Line creates a spatial line in the current case. Such 3D interval lines are illustrated in Fig. 10. In the case of a spatial PM, the corresponding interval lines are derived from 3D expressions, and the distance from the line to the obstacle is expressed as follows:
d = ||(x_2 - x_1) × (x_1 - x_0)|| / ||x_2 - x_1||,   (9)
in which x_1 and x_2 are two points on the line (in our case, the positions of the U joint and the S joint) and x_0 stands for the center point of the obstacle.
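A minimal numerical sketch of Eq. (9) is given below; the function name and the example coordinates are assumptions made for illustration, and the interval version simply evaluates the same expression with interval vectors.

import numpy as np

def point_to_line_distance(x0, x1, x2):
    """Distance from the obstacle centre x0 to the line through x1 and x2 (Eq. 9)."""
    x0, x1, x2 = map(np.asarray, (x0, x1, x2))
    return np.linalg.norm(np.cross(x2 - x1, x1 - x0)) / np.linalg.norm(x2 - x1)

# Example with the case-study obstacle centre P = (100, 0, 500)
print(point_to_line_distance([100, 0, 500], [0, 0, 0], [0, 0, 1000]))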
Considering the limited workspace of the 6-UPS spatial PM, only two limbs of the mechanism, namely A_{12}B_{16} and A_{56}B_{16}, can clash with the obstacle. By pursuing the same procedure as introduced in the previous section for the 3-RPR planar PM, the workspace can be computed while considering only the collisions of one limb with the obstacle, Fig. 11. The workspace is shown in two views, namely the top view, Fig. 11(a), and a sectioned isometric view, Fig. 11(b). In this case, a conical part of the workspace is removed.
The final workspace of the mechanism, while considering the collisions of one limb with the obstacle, is represented in Fig. 12. Since the workspace is obtained for a constant orientation and the workspace of the 6-UPS spatial PM is very limited, the moving platform does not collide with the obstacle. The workspace is represented for a constant orientation because there is no graphical representation of a workspace in a dimension higher than three. In Fig. 12, two views of the obstacle-free workspace are represented: the top view, Fig. 12(a), and a sectioned isometric view, Fig. 12(b). If the obstacle were bigger than the one considered here, other limbs might collide with it and more conical regions would be subtracted from the workspace.
The computational time and required memory of the process are highly dependent on the desired precision. They can be improved by choosing a proper initial search box, so as to avoid unnecessary computation over the area outside the workspace. Moreover, a symmetric and simple design of the robot may lead to a lower computing time. For this case study, using a 2 GHz CPU, 4 GB of RAM and the INTLAB toolbox in Matlab, it took almost three minutes to obtain the final result.
Conclusion
In this paper, an interval-based methodology was introduced in order to obtain the obstacle-free workspace of two Parallel Manipulators (PMs), namely a 3-RPR PM and a 6-DOF Gough-Stewart platform. The proposed approach for the obstacle-free workspace and the corresponding results were presented. First, the collisions with the obstacle of proximal segment lines, which pass through one point and one interval, were investigated. Then the more general case, in which the segment line passes through two intervals, was used to determine the collisions of the obstacle with the medial and distal limbs and also with the edges of the EE. Finally, the collision-free workspace of a 6-UPS spatial parallel manipulator was traced using the proposed approach. To the best of the authors' knowledge, this paper can be regarded as the first study of the collision-free workspace of parallel manipulators using interval analysis, and it opens an avenue toward more complex parallel manipulators; in particular, the proposed approach could be extended to parallel manipulators with one prismatic joint connecting the base to the EE directly. It should be noted that, although interval analysis is very reliable, it is a computationally intensive approach, and in the case of high-degree and complex equations it may lead to very high computing times. Ongoing work deals with the collision detection of the limbs between themselves and with the EE, which is an important issue for the determination of the collision-free workspace of cable-driven parallel robots.
[x] [y] = [min(S), max(S)],  S = {x_lo y_lo, x_lo y_hi, x_hi y_lo, x_hi y_hi},
where x_lo, x_hi (resp. y_lo, y_hi) denote the lower and upper bounds of the interval [x] (resp. [y]).
Fig. 1. (a) Schematic representation of a 3-RPR planar PM and a circle-shaped obstacle located at point P, ρ_min = 5 and ρ_max = 50. (b) The constant-orientation workspace of a 3-RPR planar PM for φ = π/4 via interval analysis; collisions are ignored. A_1 = {-10, -5}, A_2 = {50, -5} and A_3 = {15, 40}; x_P = 20, y_P = 0 and r_P = 3; l_x = l_y = 10. Green boxes are inside and red boxes are outside the workspace.
Fig. 2. Two types of interval lines. In both cases, there is no collision if the obstacle is located at either point P_1 or point P_2. (a) Interval line passing through one point, A_1, and one box, [B_1]; link A_1B_1 in Fig. 1(a). (b) Interval line passing through two boxes, [B_1] and [B_2]; edge B_1B_2 of the EE in Fig. 1(a).
Fig. 3. The obstacle-free workspace of the 3-RPR planar PM shown in Fig. 1(a), for φ = 0, considering only the first limb A_1B_1. Green boxes are inside and red boxes are outside the obstacle-free workspace. (a) A_1B_1 as a line satisfying [d] > r_P. (b) A_1B_1 as a segment line satisfying [d] > r_P and ||A - [B]||_2 < ||A - P||_2.
Fig. 4. The obstacle-free workspace, considering only edge B_1B_2 of the EE, for φ = π/4. The result is undesirable due to interval blow up.
Fig. 5. The obstacle-free workspace, considering only edge B_1B_2 of the EE, for φ = π/4. This is a modified result of Fig. 4. A better solution is represented in Fig. 7(a).
Fig. 7. The obstacle-free workspace of the mechanism shown in Fig. 1(a), obtained by considering interval segment lines passing through two intervals. Green boxes are inside and red boxes are outside the obstacle-free workspace. (a) The case for which only B_1B_2 collides with the obstacle, φ = 0. (b) Regarding all edges of the EE, i.e., B_1B_2, B_2B_3 and B_3B_1, φ = π/4.
Fig. 8. The obstacle-free workspace of the 3-RPR planar PM shown in Fig. 1, for φ = π/4: (a) considering limb collisions only; (b) final result considering all collisions. Green boxes are inside and red boxes are outside the obstacle-free workspace. (a) Only limb collisions are taken into account, φ = π/4. (b) Intersection of all obstacle-free workspaces including all limbs and all edges of the EE.
Fig. 9. (a) A 6-UPS spatial PM and a sphere obstacle P. (b) The corresponding sliced constant-orientation workspace, considering the actuation strokes, obtained via interval analysis. P = [100, 0, 500], r_P = 10.
Fig. 10. Two types of interval lines in the case of a 3D workspace. (a) Interval line passing through one point, A_1, and one 3D box, [B_1]; link A_{12}B_{16} in Fig. 9(a). (b) Interval line passing through two 3D boxes, [B_1] and [B_2]; edge B_{16}B_{23} of the EE in Fig. 9(a).
Fig. 11. Obstacle-free workspace of the 6-UPS spatial PM, considering the collisions of one limb with the obstacle. (a) Top view. (b) Sectioned isometric view.
Fig. 12.
Table I. Geometric parameters of the 3-RPR PM.
i    x_Ai   y_Ai   ρ_min   ρ_max   x_Bi   y_Bi   unit
1    -10    -5     5       50      0      0      cm
2     50    -5     5       50      10     0      cm
3     15    40     5       50      0      10     cm
Table II. Geometric parameters of the 6-UPS spatial PM under study (all lengths are given in mm).
i        1        2        3        4        5        6
x_ai     92.58    132.58   40.00    -40.00   -132.58  -92.58
y_ai     99.64    30.36    -130.00  -130.00  30.36    99.64
z_ai     23.10    23.10    23.10    23.10    23.10    23.10
x_bi     30.00    78.22    48.22    -48.22   -78.22   -30.00
y_bi     73.00    -10.52   -62.48   -62.48   -10.52   73.00
z_bi     -37.10   -37.10   -37.10   -37.10   -37.10   -37.10
ρ_imin   454.5    454.5    454.5    454.5    454.5    454.5
ρ_imax   504.5    504.5    504.5    504.5    504.5    504.5
| 41,439 | [
"961614",
"961618",
"10659"
] | [
"136804",
"301046",
"473973",
"481388",
"441569"
] |
01771209 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2008 | https://hal.science/hal-01771209/file/TrSP-SOS-Identifiability_C1.pdf | Abdeldjalil Aïssa-El-Bey
email: [email protected]
Karim Abed-Meraim
email: [email protected]
Yves Grenier
email: [email protected]
Yingbo Hua
email: [email protected]
A General Framework for Second Order Blind Separation of Stationary Colored Sources
This paper focuses on the blind separation of stationary colored sources using the second order statistics of their instantaneous mixtures. We first present a brief overview of existing contributions in that field. Then, we present necessary and sufficient conditions for identifiability and partial identifiability using a finite set of correlation matrices. These conditions depend on the autocorrelation functions of the unknown sources. However, it is shown here that they can be tested directly from the observations through the decorrelator output. This issue is of prime importance to decide whether the sources have been well separated: if that is not the case, further treatment is needed. We then propose an identifiability test based on a resampling (jackknife) technique, which is validated by simulations.
Introduction
Source separation aims to recover multiple sources from multiple observations (mixtures) received by a set of linear sensors. The problem is said to be 'blind' when the observations have been linearly mixed by the transfer medium, while having no a priori knowledge of the transfer medium or the sources. Blind source separation (BSS) has applications in several areas, such as communication, speech and audio processing, biomedical engineering, geophysical data processing, etc [START_REF] Cichocki | Adaptive Blind Signal and Image Processing[END_REF]. BSS of instantaneous mixtures has attracted so far a lot of attention due to its many potential applications and its mathematical tractability that leads to several nice and simple BSS solutions [START_REF] Cichocki | Adaptive Blind Signal and Image Processing[END_REF][START_REF] Pham | Blind source separation of instantaneous mixtures of nonstationary sources[END_REF][START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF][START_REF] Cardoso | A blind beamforming for non-gaussian signals[END_REF][START_REF] Abed-Meraim | A general framework for blind source separation using second order statistics[END_REF].
Assume that m narrow band signals impinge on an array of n ≥ m sensors. The measured array output is a weighted superposition of the signals, corrupted by additive noise, i.e.
x(t) = y(t) + η(t) = A s(t) + η(t),   (1)
where s(t) = [s_1(t), ..., s_m(t)]^T is the m × 1 complex source vector, η(t) = [η_1(t), ..., η_n(t)]^T is the n × 1 complex noise vector, A is the n × m full column rank mixing matrix, and the superscript T denotes the transpose operator. The source signal vector s(t) is assumed to be a multivariate stationary complex stochastic process.
In this paper, we only consider second order BSS methods. Hence, the component processes s_i(t), 1 ≤ i ≤ m, are assumed to be temporally coherent and mutually uncorrelated, with zero mean and second order moments:

S(τ) := E(s(t + τ) s^H(t)) = diag[ρ_1(τ), ..., ρ_m(τ)],   (2)

where ρ_i(τ) := E(s_i(t + τ) s_i^*(t)), E is the expectation operator, and the superscripts * and H denote the conjugate of a complex scalar and the conjugate transpose of a vector, respectively. The additive noise η(t) is modeled as a white stationary zero-mean complex random process of covariance E(η(t) η^H(t)) = σ²Q. The latter matrix is proportional to the identity (i.e., E(η(t) η^H(t)) = σ²I) when the noise is spatially white. Under these assumptions, the observed data correlation matrices are given by:

R_x(τ) = A S(τ) A^H + δ(τ) σ²Q.
From this expression, one can observe that the noise-free correlation matrices are 'diagonalizable' under the linear transformation B = A^#, where the superscript (•)^# is the Moore-Penrose pseudo-inversion operator, i.e., B R_x(τ) B^H is diagonal for all τ ≠ 0. Hence, the source separation is achieved by decorrelating the signals at different time lags.
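As an illustration of the statistics used throughout the paper, the following Python sketch estimates the lagged correlation matrices R_x(τ) from T snapshots; the function name and the toy data are assumptions introduced for the example.

import numpy as np

def correlation_matrices(X, lags):
    """Return {tau: R_x(tau)} with R_x(tau) estimated as E[x(t+tau) x(t)^H]."""
    n, T = X.shape
    R = {}
    for tau in lags:
        R[tau] = X[:, tau:T] @ X[:, :T - tau].conj().T / (T - tau)
    return R

# Toy example: 4 sensors, 1000 complex snapshots
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 1000)) + 1j * rng.standard_normal((4, 1000))
R = correlation_matrices(X, lags=[1, 2, 3])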
First of all, note that complete blind identification of the separating (demixing) matrix B (or equivalently of the mixing matrix A) is impossible in the blind context, because the exchange of a fixed scalar factor between a source signal and the corresponding column of A leaves the observations unaffected. Also note that the numbering of the signals is immaterial. It follows that the best that can be done is to determine B up to a permutation and scalar factors [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF],
i.e., B is a separating matrix if and only if :
By(t) = P Λs(t) (3)
where P is a permutation matrix and Λ a non-singular diagonal matrix.
Under the above assumptions, we provide a general framework for the BSS including the study of the identifiability and its testing as well as the introduction of a simple but efficient decorrelation method and its performance analysis. The paper is organized as follows. Section 2 reviews the principal contributions to the BSS problem using the second order statistics. Section 3 states the necessary and sufficient second order identifiability conditions.
We then propose an identifiability test based on a resampling technique in Section 4. Section 5 proposes a blind source separation algorithm using a relative gradient technique. Section 6 is devoted to the performance analysis of the considered decorrelation method and the validation of the identifiability testing technique. Section 7 is for concluding remarks.
The work using second order statistics to achieve blind source separation was initiated by L. Féty et al. [START_REF] Féty | News methods for signal separation[END_REF]. Féty's method is based on the simultaneous diagonalization of the correlation matrices R_x(0) and R_x(1). Independently, L. Tong et al. proposed a similar technique, namely the AMUSE technique (Algorithm for Multiple Unknown Signals Extraction) [START_REF] Tong | Amuse: A new blind identification algorithm[END_REF][START_REF] Tong | Indeterminacy and identifiability of blind identification[END_REF], that achieves the BSS by the simultaneous diagonalization of two symmetric matrices R_x(0) and (R_x(τ_k) + R_x^H(τ_k))/2 with τ_k ≠ 0. This method has been extended in [START_REF] Tomé | Blind source separation using a matrix pencil[END_REF], where a generalized eigenvalue decomposition of a matrix pencil (R_x(τ_1), R_x(τ_2)) is considered.
Later on, A. Belouchrani et al. proposed the SOBI (Second-Order Blind Identification) algorithm [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF] that generalizes the previous methods to the case where more than two correlation matrices are used. In the SOBI algorithm, the separation is achieved in two steps; the first step is the whitening of the observed signal vector by linear transformation. The second step consists of applying a joint approximate diagonalization algorithm to a set of different time-lag correlation matrices of the whitened signal vector. A variant of SOBI has been presented by D. Nuzillard et al. in [START_REF] Nuzillard | Second-order blind source separation in the fourier space of data[END_REF] allowing direct signal separation from frequency domain data by exploiting the source correlation properties expressed in the time domain. This algorithm is referred to as f-SOBI, standing for "frequency domain SOBI".
Numerous approaches have been proposed in recent years both for the formulation of the diagonalization criterion and for the algorithms considered for its minimization. One of the most popular and computationally appealing approach for the joint diagonalization of a set of matrices M 1 , • • • , M K is the unitary Approximate Joint Diagonalization (J.F. Cardoso et al. [START_REF] Cardoso | Jacobi angles for simultaneous diagonalization[END_REF]), which minimizes the criterion
Σ_{k=1}^{K} off(B M_k B^H)   (4)

with respect to B, subject to the unitary constraint B^H B = I, where

off(P) = Σ_{i≠j} |P_ij|².   (5)
The unitary constraint implies the assumption of a unitary mixing matrix.
Hence, in the general case, a pre-processing "spatial hard-whitening" stage is required, in which the non-unitary factor of the overall demixing matrix is found and applied to the data.
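A minimal sketch of the joint-diagonalization criterion (4)-(5) is given below; the function names off and joint_diag_cost are illustrative, and the snippet is not meant to reproduce any particular published implementation.

import numpy as np

def off(P):
    """Sum of squared off-diagonal entries, Eq. (5)."""
    return np.sum(np.abs(P) ** 2) - np.sum(np.abs(np.diag(P)) ** 2)

def joint_diag_cost(B, matrices):
    """Criterion (4): off-diagonality of B M_k B^H summed over the matrix set."""
    return sum(off(B @ M @ B.conj().T) for M in matrices)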
In [START_REF] Abed-Meraim | A general framework for blind source separation using second order statistics[END_REF][START_REF] Ziehe | A fast algorithm for joint diagonalization with non-orthogonal transformations and its application to blind source separation[END_REF] an iterative algorithm using relative gradient technique has been considered for the minimization of (4) without unitary constraint. An alternative approach for non-unitary Approximate Joint Diagonalization has been proposed by Yeredor (the "AC-DC" algorithm [START_REF] Yeredor | Non-orthogonal joint diagonalization in the least-squares sense with application in blind source separation[END_REF]), which minimizes
Σ_{k=1}^{K} || M_k - A D_k A^H ||²   (6)

without constraining A to be unitary. In (6), D_k, k = 1, ..., K, are diagonal matrices.
While computationally efficient in small-scale problems, this algorithm has been observed to exhibit extremely slow convergence in large-scale problems. This criterion is also considered in [START_REF] Wu | A unifying criterion for blind source separation and decorrelation : Simultaneous diagonalization of correlation matrices[END_REF] where a gradient descent technique is used for its minimization and in [START_REF] Vollgraf | Quadratic optimization for simultaneous matrix diagonalization[END_REF] where quadratic optimization is used.
A computationally efficient unconstrained minimization algorithm was proposed by D.T. Pham et al. [START_REF] Pham | Joint approximate diagonalization of positive definite matrices[END_REF], whose target criterion is the Kullback-Leibler divergence between the n × n operand and the diagonal matrix with the same diagonal as the operand :
Σ_{k=1}^{K} λ_k [ log det diag(B M_k B^H) - log det(B M_k B^H) ]   (7)
where λ k , k = 1, • • • , K are positive scalar factors. This approach requires all the target matrices to be positive definite, which limits its applicability as a generic BSS tool. Another class of BSS techniques based on the second order statistics is the one using the maximum likelihood principle. This method uses the Gaussian asymptotic property of the discrete Fourier transform of the second order stationary observations [START_REF] Pham | Séparation aveugle de sources temporellement corrélées[END_REF][START_REF]Blind separation of mixture of independent sources through a quasimaximum likelihood approach[END_REF][START_REF] Dégerine | Separation of an instantaneous mixture of gaussian autoregressive sources by the exact maximum likelihood approach[END_REF].
There are multiple potential applications of blind source separation using second order statistics. Among others, one may cite the work of G. D'Urso et al. on civil works and power plants monitoring [START_REF] D'urso | Blind identification methods applied to electricite de france's civil works and power plants monitoring[END_REF].
For the aforementioned two-matrix methods, identification is possible only if the eigenvalues of the matrix pencil {R_x(0), R_x(1)} (resp. {R_x(0), (R_x(τ_k) + R_x^H(τ_k))/2}) are all distinct.
In [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF], the authors have shown that BSS using the SOBI algorithm with the correlation matrix set {R_x(τ_1), ..., R_x(τ_K)} is possible only if the vectors ρ_i = [ρ_i(τ_1), ..., ρ_i(τ_K)], i = 1, ..., m,
are pairwise linearly independent. This result has been generalized in [START_REF] Abed-Meraim | Generalized second order identifiability condition and relevant testing technique[END_REF] to establish a necessary and sufficient identifiability condition with a finite set of correlation matrices. It is recalled and further developed in section 3.
3 Second Order Identifiability
Necessary and sufficient conditions of identifiability
In [START_REF] Tong | Indeterminacy and identifiability of blind identification[END_REF], Tong et al. showed that sources are blindly separable based on the whole set of second order statistics only if they have different spectral density functions. In practice, we achieve the BSS using only a finite set of correlation matrices. Therefore, the previous identifiability result was generalized to that case in [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF][START_REF] Abed-Meraim | Generalized second order identifiability condition and relevant testing technique[END_REF], leading to the necessary and sufficient identifiability conditions given by the following theorem:
Theorem 1 Let τ_1 < τ_2 < ... < τ_K be K ≥ 1 time lags, and define ρ_i = [ρ_i(τ_1), ρ_i(τ_2), ..., ρ_i(τ_K)] and ρ̃_i = [Re(ρ_i), Im(ρ_i)], where Re(x) and Im(x) denote the real and imaginary parts of x, respectively. Taking advantage of the scale indeterminacy, we assume without loss of generality that the sources are scaled such that ||ρ_i|| = ||ρ̃_i|| = 1 for all i ¹. Then, BSS can be achieved using the output correlation matrices at time lags τ_1, τ_2, ..., τ_K if and only if the vectors ρ̃_1, ..., ρ̃_m are pairwise linearly independent, i.e.,

ρ̃_i ≠ ± ρ̃_j  for all 1 ≤ i ≠ j ≤ m.   (8)
Indeed, if ρ̃_1 = ε ρ̃_2 for some ε = ±1 (say), then the alternative mixing matrix Ã and source vector

s̃(t) = [s̃_1(t), s̃_2(t), s_3(t), ..., s_m(t)]^T,  with [ã_1, ã_2] = [a_1, a_2] T  and  [s̃_1(t); s̃_2(t)] = T^{-1} [s_1(t); s_2(t)],   (9)

with

T = [cos θ, sin θ; -sin θ, cos θ]  if ε = +1,   T = [cosh θ, sinh θ; sinh θ, cosh θ]  if ε = -1,

verifies R_x̃(τ_k) = R_x(τ_k) and S̃(τ_k) = S(τ_k) for k = 1, ..., K, where S̃(τ_k) := E(s̃(t + τ_k) s̃^H(t)), so that the mixture cannot be identified from the considered correlation matrices in that case.

¹ A source signal whose correlation vector ρ_i at the considered time lags is zero could not be detected (and a fortiori could not be estimated) from the considered set of correlation matrices. This hypothesis will be held in the sequel.
Interestingly, we can see from condition (8) that BSS can be achieved from only one correlation matrix R_x(τ_k), provided that the vectors [Re(ρ_i(τ_k)), Im(ρ_i(τ_k))] and [Re(ρ_j(τ_k)), Im(ρ_j(τ_k))] are pairwise linearly independent for all i ≠ j.
Under the condition of Theorem 1, the BSS can be achieved by decorrelation according to the following result:
Theorem 2 Let τ_1, τ_2, ..., τ_K be K time lags and z(t) = [z_1(t), ..., z_m(t)]^T be an m × 1 vector given by z(t) = Bx(t). Define r_ij(τ_k) := E(z_i(t + τ_k) z_j^*(t)). If the identifiability condition holds, then B is a separating matrix (i.e., By(t) = PΛs(t) for a given permutation matrix P and a non-singular diagonal matrix Λ) if and only if

r_ij(τ_k) = 0  and  Σ_{k=1}^{K} |r_ii(τ_k)| > 0   (10)

for all 1 ≤ i ≠ j ≤ m and k = 1, 2, ..., K.
Proof: see the proof of Theorem 3 in Appendix A.
Note that, if one of the time lags is zero, the result of Theorem 2 holds only under the noiseless assumption. In that case, we can replace the condition Σ_{k=1}^{K} |r_ii(τ_k)| > 0 by r_ii(0) > 0, for i = 1, ..., m.
On the other hand, if all the time lags are non-zero and if the noise is temporally white (but can be spatially colored with unknown spatial covariance matrix) then the above result holds without the noiseless assumption.
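The decorrelation condition (10) of Theorem 2 can be checked numerically as in the following sketch; the tolerance tol and the function name are assumptions introduced for the example, since the theorem itself is stated for exact statistics.

import numpy as np

def is_decorrelated(Z, lags, tol=1e-2):
    """Check condition (10) on the estimated lagged correlations of Z (m x T)."""
    m, T = Z.shape
    diag_energy = np.zeros(m)
    for tau in lags:
        R = Z[:, tau:T] @ Z[:, :T - tau].conj().T / (T - tau)
        if not np.all(np.abs(R - np.diag(np.diag(R))) < tol):
            return False                      # some off-diagonal correlation remains
        diag_energy += np.abs(np.diag(R))
    return bool(np.all(diag_energy > tol))    # every output keeps some signal energy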
Partial Identifiability
It is generally believed that when the identifiability conditions are not met, the BSS cannot be achieved. This is only partially true, as it is still possible to partially separate the sources in the sense that we can extract those which satisfy the identifiability conditions. More precisely, the sources can be separated into blocks, each of them containing a mixture of sources that are not separable using the considered set of statistics. For example, consider a mixture of 3 sources such that ρ̃_1 = ρ̃_2 while ρ̃_1 and ρ̃_3 are linearly independent. In that case, source s_3 can be extracted while sources s_1 and s_2 cannot. In other words, by decorrelating the observed signal at the considered time lags, one obtains 3 signals, one of them being s_3 (up to a scalar constant) and the two others linear mixtures of s_1 and s_2.
This result can be mathematically formulated as follows: assume there are d distinct groups of sources, each of them containing d_i source signals with the same (up to a sign) correlation vector ρ̃_i, i = 1, ..., d (clearly, m = d_1 + ... + d_d).
The correlation vectors ρ̃_1, ..., ρ̃_d are pairwise linearly independent. We write s(t) = [s_1^T(t), ..., s_d^T(t)]^T, where each sub-vector s_i(t) contains the d_i source signals with correlation vector ρ̃_i.
Theorem 3 Let z(t) = Bx(t) be an m × 1 random vector satisfying equation (10) for all 1 ≤ i ≠ j ≤ m and k = 1, ..., K. Then, there exists a permutation matrix P such that z̃(t) := P z(t) = [z_1^T(t), ..., z_d^T(t)]^T, where z_i(t) = U_i s_i(t), U_i being a d_i × d_i non-singular matrix. Moreover, sources belonging to the same group, i.e., having the same (up to a sign) correlation vector ρ̃_i, cannot be separated using only the correlation matrices R_x(τ_k), k = 1, ..., K.
This result shows that when some of the sources have the same (up to a sign) correlation vectors, the best that can be done is to separate them in blocks, and this can be achieved by decorrelation. However, this result would be useless if we could not check the linear dependency of the correlation vectors ρ̃_i and partition the signals into groups (as shown above) according to their correlation vectors. This leads us to the important problem of testing the identifiability condition, which is discussed next.
4 Identifiability testing
Theoretical result
The necessary and sufficient identifiability condition (8) depends on the correlation coefficients of the source signals. The latter being unknown, it is therefore impossible to a priori check whether the sources are 'separable' or not from a given set of output correlation matrices. However, it is possible to check a posteriori whether the sources have been 'separated' or not. We have the following result:
Theorem 4 Let τ_1 < τ_2 < ... < τ_K be K distinct time lags and z(t) = Bx(t). Assume that B is a matrix such that z(t) satisfies equation (10) for all 1 ≤ i ≠ j ≤ m and k = 1, ..., K. Then there exists a permutation matrix P such that

E(z(t + τ_k) z^H(t)) = P^T S(τ_k) P   for k = 1, ..., K.   (11)
In other words, the entries of z̃(t) := P z(t) have the same correlation coefficients as those of s(t) at time lags τ_1, ..., τ_K, i.e., E(z̃_i(t + τ_k) z̃_i^*(t)) = ρ_i(τ_k) for k = 1, ..., K and i = 1, ..., m. Proof: see Appendix B.
From Theorem 4, condition (8) can be checked by using the correlation coefficients r_ii(τ_k) := E(z_i(t + τ_k) z_i^*(t)) of the decorrelator output. It is important to point out that even if equation (10) holds, it does not mean that the source signals have been separated. Three situations may happen:

(1) For all pairs (i, j), ρ̃_i and ρ̃_j (computed from z(t)) are pairwise linearly independent. Then we are sure that the sources have been separated and that z(t) = s(t) up to the inherent indeterminacies of the BSS problem. In fact, testing the identifiability condition is equivalent to pairwise testing the angles between ρ̃_i and ρ̃_j for all 1 ≤ i ≠ j ≤ m: the larger the angle between ρ̃_i and ρ̃_j, the better the quality of the source separation (see the performance analysis in [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF]).

(2) For all pairs (i, j), ρ̃_i and ρ̃_j are linearly dependent. Then the sources have not been separated and z(t) is still a linear combination of s(t).

(3) Only some pairs (i, j) have ρ̃_i and ρ̃_j linearly dependent. In this case, the sources have been separated in blocks.
Now, having only one signal realization at hand, we propose to use a resampling technique to evaluate the statistics needed for the testing.
Testing using resampling techniques
Note that in practice the source correlation coefficients are calculated from noisy, finite sample data. Due to the joint effects of noise and finite sample size, it is impossible to obtain the exact source correlation coefficients to test the identifiability condition. The identifiability condition should therefore be tested using a certain threshold α, i.e., decide that ρ̃_i and ρ̃_j are linearly independent if | |ρ̃_i ρ̃_j^T| - 1 | > α. To find α, we use the fact that the estimation error of ρ̃_i ρ̃_j^T is asymptotically Gaussian, hence one can build the confidence interval of such a variable according to its variance. This algorithm can be summarized as follows:
(1) Estimate a demixing matrix B and z(t) := Bx(t) using an existing second order decorrelation algorithm (e.g., SOBI [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF]).
(2) For each component z_i(t), estimate the corresponding normalized correlation vector ρ̃_i.
(3) Calculate the scalar product R(i, j) = |ρ̃_i ρ̃_j^T| for each pair (i, j).
(4) Estimate σ_(i,j), the standard deviation of R(i, j), using a resampling technique (see Section 4.3).
(5) Choose α_(i,j) according to the desired confidence interval (e.g., for a confidence interval of 99.7%, choose α_(i,j) = 3σ_(i,j)), and compare |R(i, j) - 1| to α_(i,j) to test whether sources i and j have been separated or not (a sketch of this procedure is given below).
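The following minimal Python sketch implements steps (2)-(5), assuming the decorrelated signals Z have already been obtained by a second order algorithm and that the jackknife standard deviations sigma[i, j] are available (see Section 4.3); all function names are illustrative assumptions.

import numpy as np

def normalized_corr_vector(z, lags):
    """Normalized real/imaginary correlation vector rho-tilde of one signal."""
    rho = np.array([np.vdot(z[:len(z) - tau], z[tau:]) / (len(z) - tau)
                    for tau in lags])
    rho_tilde = np.concatenate([rho.real, rho.imag])
    return rho_tilde / np.linalg.norm(rho_tilde)

def separability_test(Z, lags, sigma, n_std=3.0):
    """Return a boolean matrix: True when sources i and j are declared separated."""
    rho = [normalized_corr_vector(z, lags) for z in Z]
    m = len(rho)
    separated = np.zeros((m, m), dtype=bool)
    for i in range(m):
        for j in range(i + 1, m):
            R_ij = abs(np.dot(rho[i], rho[j]))          # scalar product R(i, j)
            separated[i, j] = abs(R_ij - 1.0) > n_std * sigma[i, j]
    return separated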
Resampling techniques
In many signal processing applications one is interested in forming estimates of a certain number of unknown parameters of a random process, using a set of sample values. Further, one is interested in finding the sampling distribution of the estimators, so that the respective means, variances, and cumulants can be calculated, or in making some kind of probability statements with respect to the unknown true values of the parameters.
The bootstrap [START_REF] Zoubir | The bootstrap and its application in signal processing[END_REF] was introduced by Efron [START_REF] Efron | The jackknife, the bootstrap and other resampling plans[END_REF] as an approach to calculate confidence intervals for parameters in circumstances where standard methods cannot be applied. The bootstrap has subsequently been used to solve many other problems that would be too complicated for traditional statistical analysis.
In simple words, the bootstrap does with a computer what the experimenter would do in practice, i.e. if it were possible: he or she would repeat the experiment. With the bootstrap, the observations are randomly reassigned, and the estimates recomputed. These assignments and recomputations are done hundreds or thousands of times and treated as repeated experiments.
The jackknife [START_REF] Miller | The jackknife -a review[END_REF] is another resampling technique for estimating the standard deviation. As an alternative to the bootstrap, the jackknife method can be thought of as drawing T samples of size (T -1) each without replacement from the original sample of size T [START_REF] Miller | The jackknife -a review[END_REF].
Suppose we are given the sample X = {X_1, X_2, ..., X_T} and an estimate θ̂ computed from X. The jackknife method is based on deleting one observation at a time from the sample,

X_(i) = {X_1, X_2, ..., X_{i-1}, X_{i+1}, ..., X_T}

for i = 1, 2, ..., T, called the jackknife sample. This i-th jackknife sample consists of the data set with the i-th observation removed. For each i-th jackknife sample, we calculate the i-th jackknife estimate θ̂_(i) of ϑ, i = 1, 2, ..., T. The jackknife estimate of the standard deviation of θ̂ is defined by

σ̂ = sqrt( (T-1)/T  Σ_{i=1}^{T} ( θ̂_(i) - (1/T) Σ_{j=1}^{T} θ̂_(j) )² ).   (12)
The jackknife is computationally less expensive than the bootstrap if T is less than the number of replicates used by the bootstrap for standard deviation estimation, because it requires the computation of θ̂ only for the T jackknife data sets. For example, if L = 25 resamples are necessary for standard deviation estimation with the bootstrap and the sample size is T = 10, then clearly the jackknife is computationally less expensive than the bootstrap. In order to test the separability of the estimated signals, we have used the jackknife method to estimate the variance of the scalar product quantities R(i, j) for i, j = 1, 2, ..., m. This is done according to the following steps:
(1) From each signal z_i = [z_i(0), ..., z_i(T-1)]^T, generate T vectors z_i^(j) = [z_i(0), ..., z_i(j-1), z_i(j+1), ..., z_i(T-1)]^T, j = 0, 1, ..., T-1.
(2) For each vector z_i^(j), estimate the corresponding vector ρ_i^(j).
(3) Estimate R such that its (i, j)-th entry is

R(i, j) = (1/T) Σ_{k=0}^{T-1} ⟨ρ_i^(k), ρ_j^(k)⟩ / ( ||ρ_i^(k)|| ||ρ_j^(k)|| )

where ⟨•, •⟩ denotes the scalar product and ||•|| is the Euclidean norm.
(4) Estimate the standard deviation of R(i, j) by (a corresponding sketch in code is given below):

σ_(i,j) = sqrt( (T-1)/T  Σ_{k=0}^{T-1} ( ⟨ρ_i^(k), ρ_j^(k)⟩ / ( ||ρ_i^(k)|| ||ρ_j^(k)|| ) - (1/T) Σ_{l=0}^{T-1} ⟨ρ_i^(l), ρ_j^(l)⟩ / ( ||ρ_i^(l)|| ||ρ_j^(l)|| ) )² )
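A hedged sketch of this leave-one-out computation follows; normalized_corr_vector refers to the helper of the previous sketch, and the quadratic cost in T of the jackknife loop is accepted here for clarity.

import numpy as np

def jackknife_sigma(z_i, z_j, lags):
    """Jackknife standard deviation of the scalar product R(i, j), Eq. (12)."""
    T = len(z_i)
    thetas = []
    for k in range(T):
        zi = np.delete(z_i, k)                       # delete-one jackknife sample
        zj = np.delete(z_j, k)
        ri = normalized_corr_vector(zi, lags)
        rj = normalized_corr_vector(zj, lags)
        thetas.append(abs(np.dot(ri, rj)))
    thetas = np.asarray(thetas)
    return np.sqrt((T - 1) / T * np.sum((thetas - thetas.mean()) ** 2))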
Discussion
Some useful comments are provided here to get more insight onto the considered testing method and its potential applications and extensions.
• The asymptotic performance analysis of SOBI derived in [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF], shows that the separation performance of two sources s i and s j depends on the angle between their respective correlation vectors ρ i and ρ j . Hence, measuring this angle gives a hint on the interference rejection level of the two considered sources.
As a consequence, one can use the measure of this angle not only to test the separability of the two sources but also to guarantee a target (minimum) separation quality. Choosing the threshold α (i,j) accordingly is an important issue that deserves further investigation.
• The testing method can be incorporated into a two stage separation procedure where the first stage consists in a second order decorrelation method (e.g. SOBI). The second stage would be an HOS-based separation method applied only when the testing indicates a failure of separation at the first step.
• In many practical situations, one might be interested in only one or few source signals. This is the case for example in the interference mitigation problem in [START_REF] Belouchrani | Interference mitigation in spread spectrum communications using blind source separation[END_REF] or in the power plants monitoring applications [START_REF] D'urso | Blind identification methods applied to electricite de france's civil works and power plants monitoring[END_REF]. In this situation, the partial identifiability result is of high interest as it proves that the desired source signal can still be extracted even if a complete source separation cannot be achieved.
5 Separation algorithm using relative gradient
SOS-based separation criteria
Based on Theorem 2, we can define different objective functions for signal decorrelation. In [START_REF] Kawamoto | Blind separation of sources using temporal correlation of the observed signal[END_REF], the following criterion was used:

G(z) = Σ_{k=1}^{K} [ log det(diag(R_z(τ_k))) - log det(R_z(τ_k)) ]   (13)

where diag(A) is the diagonal matrix obtained by zeroing the off-diagonal entries of A. Another criterion, used in [START_REF] Abed-Meraim | A general framework for blind source separation using second order statistics[END_REF], is

G(z) = Σ_{k=1}^{K} Σ_{1≤i<j≤m} [ |r_ij(τ_k) + r_ji(τ_k)|² + |r_ij(τ_k) - r_ji(τ_k)|² ] + Σ_{i=1}^{m} | Σ_{k=1}^{K} |r_ii(τ_k)| - 1 |².   (14)
The last term in ( 14) is introduced to avoid trivial (zero-valued) solutions.
Equations (13) and (14) are non-negative functions which are zero if and only if the matrices R_z(τ_k) = E(z(t + τ_k) z^H(t)) are diagonal for k = 1, ..., K, or equivalently if (10) is met. Hence, one can achieve the BSS through signal decorrelation by minimizing one of the previous cost functions.
Iterative Decorrelation Algorithm (IDA)
The separation criteria we have presented take the form:

B is a separating matrix ⟺ G(z(t)) = 0,   (15)

where z(t) = Bx(t) and G is a given objective function. An efficient approach to solve (15) is the one proposed in [START_REF] Abed-Meraim | A general framework for blind source separation using second order statistics[END_REF][START_REF]Blind separation of mixture of independent sources through a quasimaximum likelihood approach[END_REF]. It is a block technique based on the processing of T received samples that consists of searching for the zeros of the sample version of (15). Solutions are obtained iteratively in the form:
B^(p+1) = (I + ε^(p)) B^(p)   (16)
z^(p+1)(t) = (I + ε^(p)) z^(p)(t)   (17)
At iteration p, a matrix ε^(p) is determined from a local linearization of G(Bx(t)). This is an approximate Newton technique, with the benefit that ε^(p) can be computed very simply (no Hessian inversion) under the additional assumption that B^(p) is close to a separating matrix. The procedure is detailed as follows:
Using (17), we have:

r_ij^(p+1)(τ_k) = r_ij^(p)(τ_k) + Σ_{q=1}^{m} ε_jq^{*(p)} r_iq^(p)(τ_k) + Σ_{l=1}^{m} ε_il^(p) r_lj^(p)(τ_k) + Σ_{l,q=1}^{m} ε_il^(p) ε_jq^{*(p)} r_lq^(p)(τ_k)   (18)

where

r_ij^(p)(τ_k) := E( z_i^(p)(t + τ_k) z_j^{*(p)}(t) )   (19)
             ≈ 1/(T - τ_k) Σ_{t=1}^{T-τ_k} z_i^(p)(t + τ_k) z_j^{*(p)}(t).   (20)
Under the assumption that B^(p) is close to a separating matrix, it follows that |ε_ij^(p)| ≪ 1 and |r_ij^(p)(τ_k)| ≪ 1 for i ≠ j, and thus a first order approximation of r_ij^(p+1)(τ_k) is given by:

r_ij^(p+1)(τ_k) ≈ r_ij^(p)(τ_k) + ε_ji^{*(p)} r_ii^(p)(τ_k) + ε_ij^(p) r_jj^(p)(τ_k).   (21)

Similarly, we have:

r_ji^(p+1)(τ_k) ≈ r_ji^(p)(τ_k) + ε_ij^{*(p)} r_jj^(p)(τ_k) + ε_ji^(p) r_ii^(p)(τ_k).   (22)

From (21) and (22), we have:

r_ij^(p+1)(τ_k) + r_ji^(p+1)(τ_k) ≈ 2 r_jj^(p)(τ_k) Re(ε_ij^(p)) + 2 r_ii^(p)(τ_k) Re(ε_ji^(p)) + ( r_ij^(p)(τ_k) + r_ji^(p)(τ_k) )
r_ij^(p+1)(τ_k) - r_ji^(p+1)(τ_k) ≈ 2𝚥 r_jj^(p)(τ_k) Im(ε_ij^(p)) - 2𝚥 r_ii^(p)(τ_k) Im(ε_ji^(p)) + ( r_ij^(p)(τ_k) - r_ji^(p)(τ_k) )
with 𝚥 = √-1. By replacing the previous equations into (14), we obtain the following least squares (LS) minimization problem:

min_{E_ij^(p)}  || [ r_jj^(p), r_ii^(p) ] E_ij^(p) + [ (r_ij^(p) + r_ji^(p))/2 , (r_ij^(p) - r_ji^(p))/(2𝚥) ] ||²

where

E_ij^(p) := [ Re(ε_ij^(p))  Im(ε_ij^(p)) ; Re(ε_ji^(p))  -Im(ε_ji^(p)) ]   (23)
r_ij^(p) = [ r_ij^(p)(τ_1), ..., r_ij^(p)(τ_K) ]^T.   (24)

A solution to the LS minimization problem is given by:

E_ij^(p) = - [ r_jj^(p), r_ii^(p) ]^# [ (r_ij^(p) + r_ji^(p))/2 , (r_ij^(p) - r_ji^(p))/(2𝚥) ]   (25)

where A^# denotes the pseudo-inverse of matrix A. Equations (23) and (25) provide the explicit expression of ε_ij^(p) for i ≠ j. For i = j, the minimization of (14) using the first order approximation leads to:

Σ_{k=1}^{K} |r_ii^(p)(τ_k)| ( 1 + 2 Re(ε_ii^(p)) ) - 1 = 0.   (26)
Without loss of generality, we take advantage of the phase indeterminacy to assume that ε_ii is real-valued, hence Re(ε_ii) = ε_ii. Consequently, we obtain:

ε_ii^(p) = ( 1 - Σ_{k=1}^{K} |r_ii^(p)(τ_k)| ) / ( 2 Σ_{k=1}^{K} |r_ii^(p)(τ_k)| ).   (27)
In the case of real-valued signals, the LS minimization becomes:

min  || H_ij^(p) e_ij^(p) + ψ_ij^(p) ||²

where

H_ij^(p) = [1; 1] ⊗ [ r_jj^(p), r_ii^(p) ]   (28)
e_ij^(p) = [ ε_ij^(p), ε_ji^(p) ]^T   (29)
ψ_ij^(p) = [ r_ij^(p) ; r_ji^(p) ]   (30)

and ⊗ denotes the Kronecker product. A solution to the LS minimization problem is given by:

e_ij^(p) = - H_ij^{(p)#} ψ_ij^(p).   (31)
This algorithm can Also be extended to deal with BSS of convolutive mixtures as shown in [START_REF] Aïssa-El-Bey | Iterative blind source separation by decorrelation algorithm: algorithm and performance analysis[END_REF].
Adaptive implementation
To derive an adaptive version of the above batch algorithm, we replace in the above formulae the iteration index p by the time index t and estimate the correlation coefficients r_ij^(t)(τ_k) adaptively. The adaptive algorithm can be summarized as follows. At time instant t:

• Update the correlation matrices R_z(τ_k), k = 1, ..., K, using the following exponential averaging:

z(t) = B^(t-1) x(t)
R_z^(t)(τ_k) = λ R_z^(t-1)(τ_k) + (1 - λ) z(t) z^H(t - τ_k)

where 0 < λ < 1 is a positive forgetting factor. Note that r_ij^(t)(τ_k) is the (i, j)-th entry of R_z^(t)(τ_k).
• Estimate ε^(t) using equations (25) and (27) with the updated correlation coefficients r_ij^(t)(τ_k).
• Update the separating matrix, the correlation matrices R_z(τ_k), k = 1, ..., K, and the estimated sources z(t + 1 - τ_k), k = 1, ..., K:

B^(t) = (I + ε^(t)) B^(t-1)
R_z^(t)(τ_k) = (I + ε^(t)) R_z^(t)(τ_k) (I + ε^(t))^H
z(t + 1 - τ_k) = (I + ε^(t)) z(t + 1 - τ_k).
Besides its computational simplicity, this algorithm has the advantage of uniform performance (i.e. it has, in noiseless case, the same asymptotic performance whatever the mixing matrix is) and stability [START_REF] Xiang | Adaptive blind source separation by second order statistics and natural gradient[END_REF].
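A hedged sketch of one adaptive update is given below; the bookkeeping of past outputs (z_past) and the helper estimate_eps, which stands for Eqs. (25) and (27) applied to the current correlation estimates, are simplifying assumptions of the example.

import numpy as np

def adaptive_step(x_t, z_past, B, R, estimate_eps, lam=0.98):
    """One adaptive update; z_past[tau] holds the previously computed z(t - tau)."""
    m = B.shape[0]
    z_t = B @ x_t
    for tau in R:                                # R is a dict {tau: R_z(tau)}
        R[tau] = lam * R[tau] + (1 - lam) * np.outer(z_t, z_past[tau].conj())
    eps = estimate_eps(R)
    F = np.eye(m) + eps
    B = F @ B
    for tau in R:
        R[tau] = F @ R[tau] @ F.conj().T
    return B, R, F @ z_t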
6 Performance analysis
Theoretical performance analysis
In this section, an asymptotic (i.e., for large sample sizes) performance analysis of the previous separation method is given. We consider the case of an instantaneous mixture with i.i.d.-driven complex-valued sources satisfying, in addition to the identifiability condition, Σ_{τ∈ℤ} |ρ_i(τ)| < +∞ for i = 1, ..., m. The noise is assumed Gaussian with covariance σ²I. Assuming that the permutation indeterminacy is P = I, one can write:

BA = I + δ   (32)
and hence the separation quality is measured in our simulations from the mixing matrix A and the decorrelation matrix B using the mean rejection level criterion [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF] defined as:

Iperf := Σ_{1≤p≠q≤m} [ E(|(BA)_pq|²) ρ_q(0) ] / [ E(|(BA)_pp|²) ρ_p(0) ]   (33)
       = Σ_{1≤p≠q≤m} E(|δ_pq|²) ρ_q(0) / ρ_p(0).   (34)
Our performance analysis consists in deriving the closed-form expression of the asymptotic error variances:

lim_{T→+∞} T E(|δ_pq|²).   (35)

By using an approach similar to that in [START_REF]Blind separation of mixture of independent sources through a quasimaximum likelihood approach[END_REF], based on the central-limit and continuity theorems, one obtains the following result:
Theorem 5 The vector δ_ij := [Re(δ_ij), Re(δ_ji), Im(δ_ij), -Im(δ_ji)]^T is asymptotically Gaussian distributed with asymptotic covariance matrix

Δ_ij := lim_{T→+∞} T E(δ_ij δ_ij^T)   (36)
      = (1/4) H_ij^# Ψ_ij H_ij^{#T}   (37)

where

H_ij = I_2 ⊗ [ ρ_i^T ; ρ_j^T ]   (38)
Ψ_ij = [ Ξ_11^(ij)  Ξ_12^(ij) ; Ξ_21^(ij)  Ξ_22^(ij) ]   (39)

with

Ξ_11^(ij) = Γ_11^(ij) + Γ_12^(ij) + Γ_21^(ij) + Γ_22^(ij)   (40)
Ξ_12^(ij) = Γ_11^(ij) - Γ_12^(ij) + Γ_21^(ij) + Γ_22^(ij)   (41)
Ξ_21^(ij) = -Γ_11^(ij) + Γ_12^(ij) - Γ_21^(ij) + Γ_22^(ij)   (42)
Ξ_22^(ij) = Γ_11^(ij) - Γ_12^(ij) - Γ_21^(ij) + Γ_22^(ij)   (43)

and

Γ_11^(ij)(k, k') = Σ_{τ∈ℤ} r_ii(τ_k + τ) r_jj(τ_{k'} + τ)   (44)
Γ_22^(ij)(k, k') = Σ_{τ∈ℤ} r_ii(τ_k + τ) r_jj(τ_{k'} + τ)   (45)
Γ_12^(ij)(k, k') = Σ_{τ∈ℤ} r_ii(τ_k + τ) r_jj(τ_{k'} - τ)   (46)
r_ii(τ_k) = ρ_i(τ_k) + δ(τ_k) σ² ||b_i||².   (47)

Note that Γ_21^(ij) = Γ_12^{(ij)T} and b_i represents the i-th row of B = A^#.
In the case of real-valued signals, the preceding result becomes:

Δ_ij = H_ij^# Ψ_ij H_ij^{#T}   (48)

where

H_ij = [1; 1] ⊗ [ ρ_i^T ; ρ_j^T ]   (49)
Ψ_ij = [ Γ_11^(ij)  Γ_12^(ij) ; Γ_21^(ij)  Γ_22^(ij) ].   (50)

Denoting S_i(f) the power spectral density of the i-th separated signal (such that r_ii(τ) = ∫_{-1/2}^{1/2} S_i(f) e^{2𝚥πτf} df), the previous result can be rewritten, using the normalization assumption of Theorem 1 (||ρ_i|| = 1 for all i), as:
Δ_ij = 1/(ρ_ij² - 1)² [ 1  -ρ_ij ; -ρ_ij  1 ] D [ 1  -ρ_ij ; -ρ_ij  1 ]   (51)

where

ρ_ij = ρ_i ρ_j^T   (52)

and

D = ∫_{-1/2}^{1/2} S_i(f) S_j(f) V_ij(f) V_ij^T(f) df.   (53)

Note that

V_ij(f) = [ Re(ξ_i(f)), Re(ξ_j(f)) ]^T,   (54)

with

ξ_l(f) = Σ_{k=1}^{K} ρ_l(τ_k) exp(-2𝚥πτ_k f)  for l ∈ {i, j}   (55)

and

S_l(f) = S_{s_l}(f) + σ² ||b_l||²,  for l ∈ {i, j},   (56)

where S_{s_l}(f) is the power spectral density of the l-th source. By replacing (56) in (53), we obtain:

D = ∫_{-1/2}^{1/2} S_{s_i}(f) S_{s_j}(f) V_ij V_ij^T df + σ⁴ ||b_i||² ||b_j||² ∫_{-1/2}^{1/2} V_ij V_ij^T df
  + σ² ||b_i||² ∫_{-1/2}^{1/2} S_{s_j}(f) V_ij V_ij^T df + σ² ||b_j||² ∫_{-1/2}^{1/2} S_{s_i}(f) V_ij V_ij^T df.
From the above expression, we notice that in the noiseless case the performance of the considered BSS method is independent of the mixing matrix (equivariance property). In that case, the performance limit is essentially a function of the 'non-collinearity' of the vectors ρ_i and ρ_j (as can be seen mainly from the term (ρ_ij² - 1)² that appears in the denominator of equation (51)).
Simulation-based performance analysis
We present in this section some numerical simulations to evaluate the performance of the previous separation algorithm. We consider in our simulations an array of n = 4 sensors with half-wavelength spacing, receiving two signals in the presence of stationary temporally white noise. The two source signals are generated by filtering white Gaussian processes through first-order autoregressive (AR) models with coefficients a_1 = 0.95e^{𝚥0.5} and a_2 = 0.5e^{𝚥0.7} (except for Figure 6). The sources have directions of arrival (DOA) φ_1 = 30 and φ_2 = 45 degrees, respectively. Note that the simulation results shown here do not depend on this explicit choice of the source DOAs but rather on the angle difference φ_2 - φ_1, as illustrated by Figure 7. The number of time lags is K = 5 (except for Figure 8). The signal to noise ratio is defined as SNR = 10 log_10(σ_s²/σ_n²), where σ_n² and σ_s² are the noise variance and the signal variance, respectively. The mean rejection level is estimated over 1000 Monte-Carlo runs.
In Figure 1, we compare the separation performance obtained by the decorrelation algorithm with SOBI algorithm.
Figure 2 shows the mean rejection levels against the signal to noise ratio SNR.
We compare the IDA algorithm with the SOBI algorithm, which is based on a joint diagonalization of a set of covariance matrices [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF]. The additive noise is temporally white but spatially colored. The noise covariance is assumed to be of the form E(η(t)η^H(t)) = nσ_n² QQ^H/||Q||², where Q is given by Q_ij = 0.9^{|i-j|} (i.e., η(t) = √n σ_n (Q/||Q||) η̄(t), where η̄(t) is a unit norm white Gaussian noise). In this case, IDA performs much better than SOBI at low SNR. This is an important advantage of the IDA method over existing SOS methods, which often assume the noise covariance matrix known up to a scalar constant.
In Figure 3, the empirical mean rejection level is compared with its theoretical value as a function of the sample size T; the plots show that the asymptotic closed form expressions of the rejection level are pertinent from a snapshot length of about 100 samples. In the plots, E(|δ_ij|²) and E(|δ_ji|²) are replaced by (Δ_ij(1,1) + Δ_ij(3,3))/T and (Δ_ij(2,2) + Δ_ij(4,4))/T, respectively. This means that asymptotic conditions are reached for short data block sizes. Figure 4 shows the mean rejection level against the signal to noise ratio SNR; we compare the empirical performance with the theoretical performance for a sample size of T = 1000. Figure 5 shows the mean rejection level versus the number of sensors using the theoretical formulation for T = 1000 samples. We observe that the larger the number of sensors, the lower the rejection level in the low SNR case.
For high SNRs the number of sensors has negligible effect on the separation performance (in accordance with the uniform performance property).
Figure 6 shows Iperf versus the spectral shift δθ. The spectral shift δθ represents the spectral overlap of the two sources. In this figure, the noise is assumed to be spatially white and its level is kept constant at 10dB and 30dB. We let a 1 = 0.7e 0.5 and a 2 = 0.5e (0.5+δθ) . The plot evidences a significant increase in rejection performance by increasing δθ. sample size T = 1000, two AR sources with coefficient a 1 = 0.95e 0.5 and a 2 = 0.5e 0.7 are considered. Their respective DOAs are φ 1 = 30 degrees and φ 2 = φ 1 +δφ. From the plots, we observe a significant performance degradation when δφ is close to zero. Indeed in that case, the identifiability condition w.r.t. the mixture matrix A is ill-satisfied. Also, we can observe that in the absence of noise or equivalently when the noise is negligible (e.g. for SNR=50dB), the method has uniform performance independent from the mixture matrix A and its numerical conditioning. The plots in Figure 8 illustrate the effect of the number of time lags K for different SNRs. In this simulation the sources arrive from the directions φ 1 = 10 and φ 2 = 13 degrees. This choice is to have an ill-conditioned mixture matrix and hence a difficult separation context. In that case, the increase of the number of correlation matrices is needed to improve the separation quality.
Otherwise, in a 'good' context, increasing the number of time lags would not significantly affect the performance of the considered algorithm. From this figure, we also observe that a large increase in the number of correlation matrices leads to a degradation of the separation performance. The reason is that the correlation coefficients of the considered sources decrease exponentially towards zero; consequently, the signal term in the large-time-lag correlation matrices is negligible and their estimates are too noisy.
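This decay is easy to quantify for the AR(1) sources used here: the normalized correlation coefficient at lag τ is ρ(τ) = a^τ, so its magnitude shrinks geometrically with the lag. A brief check, assuming the pole of the second source:

import numpy as np

a = 0.5 * np.exp(1j * 0.7)          # pole of the second AR(1) source
taus = np.arange(1, 11)
print(np.round(np.abs(a) ** taus, 4))
# |rho(tau)| = |a|**tau drops from 0.5 at lag 1 to about 0.001 at lag 10, so
# correlation matrices at large lags carry almost no signal and are dominated
# by estimation noise.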
Performance assessment of the testing technique
We present in this section some simulation results to illustrate the performance of the separability testing method. In the simulated environment we consider a uniform linear array with n = 2 sensors receiving the signals from m = 2 unit-power first-order autoregressive sources (with coefficients a_1 = 0.95e^{i0.5} and a_2 = 0.5e^{i0.7}) in the presence of stationary complex temporally white noise.
The considered sources are separable according to the identifiability result, i.e., their respective correlation vectors ρ_1 and ρ_2 are linearly independent. The time lags (delays) implicitly involved are τ_0, …, τ_9 (i.e., K = 10). We use the SOBI algorithm [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF] to obtain the decorrelated sources. The statistics in the curves are evaluated over 2000 Monte-Carlo runs. We first present in Figure 9 a simulation example where we compare the rate of success of the testing procedure (success means that we decide that the two sources have been separated) in detecting the sources' separability for different sample sizes versus the SNR in dB. The confidence interval is fixed to β = 99.7%. One can observe from this figure that the performance of the testing procedure degrades significantly for small sample sizes, due to the increased estimation errors and the fact that we use the asymptotic normality of the considered statistics. In Figure 10, we present a simulation example where we compare the rate of success according to the sample size for different confidence intervals. The SNR is set to 25 dB. Clearly, the lower the confidence interval, the higher the rate of success of the testing procedure. Also, as observed in Figure 9, the rate of success increases rapidly when increasing the sample size. In Figure 11, we present a simulation example where we plot the rate of success. The simulation example presented in Figure 12 assumes two source signals with parameters a_1 = 0.5e^{i0.5} and a_2 = 0.5e^{i(0.5+δθ)}, where δθ represents the spectral overlap of the two sources. The number of sensors is n = 5, the sample size is T = 1000 and the SNR = 25 dB. Figure 12 shows the rate of success versus the spectral shift δθ. As we can see, small values of δθ lead to high rates of 'non-separability' decisions by our testing procedure. Indeed, when δθ is close to zero, the two vectors ρ_1 and ρ_2 are close to linear dependency.
That means that the separation quality of the two sources is poor in that case, which explains the observed testing results. In the last figure, we assume there exist three sources. The first two sources are complex white Gaussian processes (hence ρ_1 = ρ_2) and the third one is an autoregressive signal with coefficient a_3 = 0.95e^{i0.5}. The plots in Figure 13 compare the average values of the scalar products of ρ_i and ρ_j (i, j = 1, 2, 3) with their corresponding threshold values 1 - α(i,j) versus the SNR. The sample size is fixed to T = 500 and the number of sensors is n = 3. This example illustrates the situation where two of the sources (here sources 1 and 2) cannot be separated (this is confirmed by the testing result), while the third one is extracted correctly (the plots show clearly that R(1, 3) < 1 - α(1,3) and R(2, 3) < 1 - α(2,3)).
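A minimal sketch of how such a test can be organized follows (our reading of the procedure: R(i, j) is taken as the modulus of the scalar product between the unit-norm lag-correlation vectors of the decorrelated outputs, and the thresholds α(i, j) are assumed to be provided, e.g., from the asymptotic variance expressions; this is an illustration, not the authors' implementation).

import numpy as np

def correlation_vector(z, lags):
    """Unit-norm vector of correlation coefficients of z at the given time lags."""
    T = len(z)
    r = np.array([np.mean(z[lag:] * np.conj(z[:T - lag])) for lag in lags])
    return r / np.linalg.norm(r)

def separability_test(Z, lags, alpha):
    """Decide, for every pair (i, j) of decorrelated outputs, whether they are
    separated: declared separable when |<rho_i, rho_j>| < 1 - alpha[i, j]."""
    rho = [correlation_vector(z, lags) for z in Z]
    decisions = {}
    for i in range(len(rho)):
        for j in range(i + 1, len(rho)):
            R_ij = abs(np.vdot(rho[i], rho[j]))
            decisions[(i, j)] = R_ij < 1.0 - alpha[i, j]
    return decisions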
Conclusion
The second-order blind source separation of stationary colored sources is a simple decorrelation problem. The signal decorrelation for a finite set of time lags leads to source separation under certain conditions that are fully detailed in this paper. The separability conditions depend on the unknown sources and hence cannot be verified (tested) directly. However, we present in this paper a testing procedure that uses the output signals of the decorrelator to verify a posteriori whether the sources have been correctly separated or not. The signal decorrelation can be achieved in many different ways. We have presented in this work one of them that has the advantage of simplicity and efficiency, both in block and adaptive processing. Performance analysis using computer simulations and validation of the different theoretical results are provided at the end of the paper. Testing IDA on real-world data is one of the perspectives of this work.
A Proof of Theorem 3
Before proceeding, note that Theorem 3 is a generalization of Theorem 2 and hence its proof is implicitly a proof of Theorem 2. Also, note that the result of Theorem 2 is a proof of the sufficiency of the identifiability condition of Theorem 1. Indeed, since A is full column rank, we know that a decorrelation matrix exists (e.g., B = A^#). Now, the result of Theorem 2 demonstrates that under condition (8), a decorrelation matrix at the considered time lags achieves the desired blind source separation.
For the proof of Theorem 3, let us notice that equation (10) is equivalent to (the noise term is neglected here; it equals zero if all the time lags are non-zero and the noise is temporally white):

CS(τ_k)C^H diagonal for k = 1, …, K, and C(Σ_{k=1}^K S(τ_k)^H C^H C S(τ_k))C^H > 0,

where C = BA, ^H denotes the matrix conjugate transpose, and the inequality A > 0 stands for A positive definite. This latter inequality implies in particular that C is full rank. Note that CS(τ_k)C^H is diagonal if and only if both C Re(S(τ_k))C^H and C Im(S(τ_k))C^H are diagonal. Then we shall show that there exist linear combinations of Re(S(τ_k)) and Im(S(τ_k)), i.e., S_1 = Σ_{k=1}^K α_k Re(S(τ_k)) + α'_k Im(S(τ_k)) and S_2 = Σ_{k=1}^K β_k Re(S(τ_k)) + β'_k Im(S(τ_k)), such that (i) S_2 is non-singular and (ii) the diagonal entries of S_1 S_2^{-1} take d distinct values of multiplicities d_1, …, d_d, respectively. A simple way to prove this is to consider linear combinations of the form α_k = x^k, α'_k = x^{k+K} and β_k = y^k, β'_k = y^{k+K} (x and y being real scalars). The diagonal entries of S_1 and S_2 are P_i(x) = Σ_{k=1}^K Re(ρ_i(τ_k)) x^k + Σ_{k=1}^K Im(ρ_i(τ_k)) x^{k+K} and P_i(y), i = 1, …, m, respectively. Under the assumptions of Theorem 2, we have (the equalities below are understood as equalities of polynomials):

P_i(y) ≢ 0 for all i,
P_i(x)P_j(y) ≢ P_i(y)P_j(x) if i and j belong to two distinct groups,
P_i(x)P_j(y) ≡ P_i(y)P_j(x) if i and j belong to the same group.

Therefore, P(x, y) := Π_{i=1}^d P_i(y) Π_{1≤i<j≤d} (P_i(x)P_j(y) - P_j(x)P_i(y)) ≢ 0 (here i and j denote the group index, not the source index), and thus there exist infinitely many values of (x, y) such that P(x, y) ≠ 0. Let (x_0, y_0) be such a value and let S_1 = Σ_{k=1}^K x_0^k Re(S(τ_k)) + x_0^{k+K} Im(S(τ_k)) and S_2 = Σ_{k=1}^K y_0^k Re(S(τ_k)) + y_0^{k+K} Im(S(τ_k)). Then S_1 and S_2 satisfy conditions (i) and (ii).
To complete the proof, note that

M_1 := Σ_{k=1}^K x_0^k C Re(S(τ_k)) C^H + x_0^{k+K} C Im(S(τ_k)) C^H = C S_1 C^H is diagonal, and
M_2 := Σ_{k=1}^K y_0^k C Re(S(τ_k)) C^H + y_0^{k+K} C Im(S(τ_k)) C^H = C S_2 C^H is diagonal and non-singular.

It follows that M_1 M_2^{-1} = C S_1 S_2^{-1} C^{-1} is a diagonal matrix with d distinct eigenvalues (condition (ii)). Using standard spectral theory, e.g., [START_REF] Horn | Matrix Analysis[END_REF], we conclude that C = P Λ for a given permutation matrix P and a given non-singular block-diagonal matrix Λ, i.e., Λ = diag(U_1, …, U_d), where U_i is a d_i × d_i non-singular matrix. Finally, the fact that sources belonging to the same group cannot be separated from the considered set of statistics is simply a direct consequence of the necessity of condition [START_REF] Tong | Indeterminacy and identifiability of blind identification[END_REF].

B Proof of Theorem 4

According to the above development, we have C = P diag(U_1, …, U_d). On the other hand, we have CS(τ_k)C^H diagonal for k = 1, …, K. This leads to U_i S_i(τ_k) U_i^H diagonal for all i = 1, …, d and all k = 1, …, K. Now, since we have grouped the sources in such a way that each group corresponds to sources with the same (up to a sign) correlation vector, i.e., S_i(τ_k) = ρ_i(τ_k) Σ_i, where the Σ_i are diagonal matrices with diagonal entries equal to ±1, the previous equation becomes U_i Σ_i U_i^H diagonal for i = 1, …, d. Moreover, since Σ_i is real diagonal and since we normalized the correlation vectors (both estimated and exact) to unity, U_i Σ_i U_i^H is real diagonal with diagonal entries equal to ±1. Finally, since the signature (i.e., the number of positive eigenvalues and the number of negative eigenvalues) of a quadratic form is invariant under multiplication on the left and right by a non-singular matrix and its conjugate transpose, respectively, U_i Σ_i U_i^H has the same signature (i.e., number of +1s and number of -1s) as Σ_i. In other words, there exists a d_i × d_i permutation matrix P_i such that U_i Σ_i U_i^H = P_i Σ_i P_i^T. It follows then that, for k = 1, …, K,

E(z(t + τ_k) z^H(t)) = C S(τ_k) C^H = P S(τ_k) P^T   (B.1)

for a given permutation matrix P. Equation (B.1) means that the entries z_i(t) of z(t) have the same correlation coefficients as s_{P(i)}(t) for i = 1, …, m, where P(1), …, P(m) are the images of 1, …, m by the permutation P. This verifies the conclusion of Theorem 4.

C Proof of Theorem 5

We propose here to derive the expression of the asymptotic covariance error.
Note that the asymptotic Gaussianity of the error δ comes from the central limit theorem as shown in [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF][START_REF]Blind separation of mixture of independent sources through a quasimaximum likelihood approach[END_REF]. The expression of δ can be obtained by replacing in equations ( 17) and ( 23) z (p) (t) and (p) by s(t) and δ respectively.
Hence, from the vectorized version of equation (25), the expression of the matrix δ_ij δ_ij^T can be written as:

δ_ij δ_ij^T = (1/4) H_ij^# Ψ_ij H_ij^#   (C.1)

where

H_ij = I_2 ⊗ [r_ii  r_jj]   (C.2)

and the blocks γ^(ij)_{11}, γ^(ij)_{22}, γ^(ij)_{12} and γ^(ij)_{21} of Ψ_ij = ξ^(ij) are K × K matrices given, for all 1 ≤ k, k' ≤ K, by

γ^(ij)_{11}(k, k') = [1/((T - τ_k)(T - τ_k'))] Σ_{t=1}^{T-τ_k} Σ_{t'=1}^{T-τ_k'} z_i(t + τ_k) z_j(t) z_i(t' + τ_k') z_j(t')   (C.8)
γ^(ij)_{22}(k, k') = [1/((T - τ_k)(T - τ_k'))] Σ_{t=1}^{T-τ_k} Σ_{t'=1}^{T-τ_k'} z_j(t + τ_k) z_i(t) z_j(t' + τ_k') z_i(t')   (C.9)
γ^(ij)_{12}(k, k') = [1/((T - τ_k)(T - τ_k'))] Σ_{t=1}^{T-τ_k} Σ_{t'=1}^{T-τ_k'} z_i(t + τ_k) z_j(t) z_j(t' + τ_k') z_i(t')   (C.10)
γ^(ij)_{21}(k, k') = [1/((T - τ_k)(T - τ_k'))] Σ_{t=1}^{T-τ_k} Σ_{t'=1}^{T-τ_k'} z_j(t + τ_k) z_i(t) z_i(t' + τ_k') z_j(t')   (C.11)

Therefore, using the index τ = t - t', we obtain, as T → +∞, lim T γ^(ij)_{11}(k, k') = Γ^(ij)_{11}(k, k'), and similarly for the (2,2), (1,2) and (2,1) blocks, where Γ^(ij)_{11}(k, k'), Γ^(ij)_{22}(k, k'), Γ^(ij)_{12}(k, k') and Γ^(ij)_{21}(k, k') are given by equations (44), (45), (46) and (47).
Figure captions:
Fig. 1. Comparison between IDA and SOBI for spatially white noise: mean rejection level in dB versus SNR for 2 autoregressive sources, 4 sensors and T = 1000.
Fig. 3. Mean rejection level in dB versus the sample size T for 2 autoregressive sources, 4 sensors and SNR = 40 dB.
Fig. 4. Mean rejection level in dB versus the SNR for T = 1000: (a) 4 sensors and 2 AR sources with DOAs φ_1 = 30 and φ_2 = 45 degrees; (b) 7 sensors and 5 AR sources with DOAs φ_1 = 15, φ_2 = 30, φ_3 = 45, φ_4 = 60 and φ_5 = 120 degrees, respectively.
Fig. 5. Mean rejection level in dB versus the number of sensors n for 2 autoregressive sources and T = 1000.
Fig. 7. Mean rejection level in dB versus the angle difference δφ for 2 autoregressive sources, 4 sensors and T = 1000.
Fig. 8. Mean rejection level in dB versus the number of time lags K for 2 autoregressive sources, 4 sensors and T = 1000.
Fig. 9. Rate of success versus SNR for 2 autoregressive sources, 2 sensors and β = 99.7%: comparison of the performance of our testing algorithm for different sample sizes T.
Fig. 13. Average values of |R(i, j)| and thresholds 1 - α(i,j) versus SNR for 3 sources and 3 sensors: 2 sources are complex white Gaussian processes and the third one is an autoregressive signal.
…, τ_2, …, τ_K if and only if, for all 1 ≤ i ≠ j ≤ m:

ρ_i and ρ_j are (pairwise) linearly independent.   (8)

Proof: The proof of sufficiency is given in Appendix A. Here we only prove that (8) is necessary to achieve BSS using the correlation functions R_x(τ_k) := E(x(t + τ_k) x^H(t)), k = 1, …, K. In fact, if two sources, say s_1 and s_2, have correlation coefficients such that ρ_1 = ε ρ_2 where ε = ±1, then any 'virtual' signal of the form x(t) = Ã s(t) + w(t), where Ã = [ã_1, ã_2, a_3, …, a_m] and
We implicitly assume here that ρ_i ≠ 0; otherwise the source signal could not
Because of the inherent indetermination of the BSS problem, we assume without loss of generality that the exact and estimated sources are similarly scaled, i.e., ||ρ_i|| = 1.
More precisely, one can prove that the estimation error √T δ(ρ_i ρ_j^T) is asymptotically, i.e., for large sample size T, Gaussian with zero mean and finite variance.
In that paper, only the case where τ_1 = 0 was considered.
"18420",
"956547",
"742871"
] | [
"391223",
"98040",
"300839",
"300839",
"300446"
] |
01771460 | en | [
"chim"
] | 2024/03/05 22:32:16 | 2011 | https://hal.science/hal-01771460/file/Pub_aq_2bis.pdf | John J Murphy
Prof Adrien Quintard
Patrick Mcardle
Dr Alexandre Alexakis
email: [email protected]
John C Stephens
email: [email protected]
Dr J J Murphy
Prof P Mcardle
Asymmetric Organocatalytic 1,6-Conjugate Addition of Aldehydes to Dienic Sulfones**
Keywords: asymmetric synthesis, conjugate addition, dienes, organocatalysis, sulfones
Building enantiopure complex molecules simply, in a minimum number of operations, and in an environmentally friendly approach is one of the greatest challenges for synthetic chemistry and, particularly, for the chemical industry. [START_REF] Breuer | For reviews on industrial approaches for the synthesis of chiral molecules[END_REF] Taking into account the requirements for the industrial application of a laboratory-scale reaction (functional group/H2O tolerance, simple procedures, no extreme temperatures), enamine catalysis has recently appeared as a method of choice to fulfil this ideal goal of reaction efficiency. [2,3] In this field, the asymmetric conjugate addition to activated alkenes has been extensively studied, as evidenced by the large number of publications on the subject. [START_REF] Sulzer-Mossø | For reviews on enamine 1,4-addition[END_REF] In contrast, the analogous asymmetric 1,6-addition to extended conjugated systems remains underdeveloped. [START_REF] Hayashi | For some recent examples of metal-catalyzed 1,6-conjugate additions[END_REF][START_REF] Bernardi | Jørgensen has reported a phase-transfer-catalyzed 1,6-conjugate addition of b-ketoesters. Regioelectivity issues can probably be accounted for in this reaction thanks to the unsubstituted 6-position of the acceptor[END_REF] Several groups reported the 1,4-addition to activated dienes in enamine catalysis without any trace of vinylogous 1,6-addition. [START_REF] Csµkþ | For a recent review on conjugate addition reactions to electrondeficient dienes[END_REF] This higher reactivity of the β position compared to the δ position seems to be a general trend difficult to overcome (Scheme 1). It probably arises from the poor propagation of the electronic effect through the conjugated system. This problem of charge delocalization is in contrast to the principle of vinylogy, where the reactivity is in theory extended through the π-π system. [8] In our continuing efforts toward the development of new approaches for the stereoselective construction of enantiopure, synthetically useful building blocks, we thought about expanding the scope of enamine Michael reactions to 1,6-addition.
We hypothesized that a suitably designed Michael acceptor would be able to promote exclusively the 1,6-addition. To this purpose, we have focused our attention on unsaturated sulfones. The sulfonyl group is known for its electron-withdrawing ability together with high synthetic versatility. [START_REF]For some selected reviews on sulfones[END_REF] It has been shown that a vinyl sulfone with a single activating sulfone group was not sufficiently reactive to promote intermolecular enamine attack and generate a 1,4-conjugate addition. Instead a second sulfone was required to generate the Michael-type addition. [10] As a result, a 1,3-bis(sulfonyl)butadiene (Scheme 1) should be able to promote exclusively the 1,6-addition by the insertion of an appropriately placed second electron-withdrawing group. [START_REF] Masuyama | For the synthesis and application of sulfonyl butadiene[END_REF] This butadiene should serve as an exciting application of the exceptional potential of charge delocalization in vinylogous reactions. The sulfone in the α position would not sufficiently activate the β-carbon atom toward enamine addition, but would be expected to sufficiently delocalize the charge of the δ-carbon atom thanks to the cooperative effect of the second sulfonyl group, thus promoting the single 1,6-addition (Scheme 1). Herein, we present our results on this unprecedented asymmetric 1,6-addition that leads, under operationally simple conditions, to exceptional levels of diastereo- and enantioselectivities for the formation of highly attractive chiral dienes.
We began our study by synthesizing 1,3-bis(sulfonyl)butadiene substrates 1 by a high-yielding, four-step reaction sequence. [12] To evaluate the feasibility of the asymmetric 1,6-addition, we subjected the sulfonyl diene 1 to the addition of butanal 2 a using 30 mol % of the organocatalyst (R)-diphenylprolinol silyl ether 3. Chloroform was chosen as it easily solubilized the 1,3-bis(sulfonyl)butadiene. As we expected from our proposal (Scheme 1), only the 1,6-addition product was obtained, in an excellent yield of 91 %, using only 2 equivalents of aldehyde (Table 1, entry 1). The observed high reactivity and regioselectivity is in total agreement with our preliminary hypothesis of charge delocalization. The intermediate linear product was not observed and spontaneously cyclized to form the conjugated diene 4. It must be pointed out here that the cyclized product could be isolated from the crude reaction mixture by a very simple procedure. After evaporation of the solvent, the solid was only triturated with ice-cold methanol to directly obtain the pure compound. More remarkably, an exceptional diastereo- and enantiocontrol was observed in this reaction to furnish the 1,6-adduct in an astonishingly high 99 % ee and 99:1 d.r. while performing the reaction at room temperature. Decreasing the catalyst loading to 10 mol % led to the same excellent stereoselectivities (99 % ee, 99:1 d.r.) but, as expected, a prolonged reaction time was needed to obtain 100 % conversion (120 h vs. 24 h, result not shown).
We explored the scope and limitations of this reaction by testing 1,3-bis(phenylsulfonyl)butadiene 1 a with a variety of different sterically demanding aldehydes 2 a-f (Table 1). Gratifyingly, all reactions gave the 1,6-addition product exclusively with no trace of the 1,4-adduct. Furthermore, the products were all isolated as virtually pure stereoisomers. The unbranched aldehydes 2 a-c underwent a fast 1,6-addition in excellent yields, diastereoselectivities, and enantioselectivities (Table 1, entries 1-3). Perhaps most notably, the branched aldehydes isovaleraldehyde 2 d and citronellal 2 e reacted efficiently, even though longer reaction times were required to reach completion (40 h and 144 h respectively; Table 1, entries 5 and 6). This lower reactivity is consistent with the higher steric hindrance of the substrates. Again, we were happy to see that the expected compounds were still formed with perfect stereocontrol even though they required a longer reaction time (compounds 4 d and 4 e were isolated as single stereoisomers). Furthermore, this protocol could also be applied to the unsaturated phenylacetaldehyde 2 f, which underwent a high-yielding reaction with excellent diastereoselectivity and enantioselectivity (Table 1, entry 7). This attractive synthon should lead to an enantiomerically pure C2-symmetric diene by sulfone removal.
To fully explore this remarkable transformation, we then continued to investigate the scope of the reaction by testing the 1,6-conjugate addition of valeraldehyde 2 b to a family of 1,3-bis(phenylsulfonyl)butadienes 1 a-e in the presence of 30 mol % of organocatalyst in chloroform (Table 2). A variety of different aryl substituents with a range of electronic properties could be used without affecting the overall selectivity of the reaction. All reactions gave greater than 99 % conversion by 1H NMR spectroscopy. The yields of the isolated 1,6-addition products were slightly lower in all cases (71 to 81 % yield vs. 98 % for the phenyl). This outcome probably arises from an increase in the solubility of the final compounds and results in product loss during workup. When the electron-withdrawing character of the substituents was increased from F and Cl to Br, the reactions were slightly accelerated. The lower electron density of the acceptor 5 f containing a nitro substituent resulted in an impressive increase in reactivity (4 h vs. 24 h to obtain full conversion; Table 2, entry 6 vs. entry 2). This result is in agreement with a Michael 1,6-addition mechanism and should indicate that the C-C bond formation, and not the cyclization, is the rate-determining step. This finding is consistent with the fact that no traces of the noncyclized product could be observed when monitoring the reaction by 1H NMR spectroscopy.

Table 1: Scope of aldehydes for the 1,6-addition. Columns: Entry, Cat., t [h], R, Yield [%][a], d.r. (syn/anti)[b], ee [%][c]. Entry 6: (R)-3, 144 h, (S)-citronellal, 89 %[e] (4 e), 1:99. Entry 7: (R)-3, 24 h, Ph, 96 % (4 f), 1:99, 99. [a] Yield of isolated product. [b] Determined by 1H NMR spectroscopy and HPLC analysis. [c] Determined by HPLC on a chiral stationary phase for the anti products. [d] Opposite S,S enantiomer of the product formed. [e] Isolated as a single diastereoisomer as determined by 1H NMR spectroscopy. TMS = trimethylsilyl.
Table 2: Scope of the bis(arylsulfonyl) butadienes. Columns: Entry, t [h], X, Yield [%][a], d.r. (syn/anti)[b], ee [%][c]. Entry 1: 24 h, X = OMe, 75 % yield. [a] Yield of isolated product. [b] Determined by 1H NMR spectroscopy and HPLC analysis. [c] Determined by HPLC on a chiral stationary phase for the anti products.
In addition, the absolute and relative configuration of both the R,R adduct and the S,S adduct of 4 b could be determined by X-ray crystallography (Figure 1 and the Supporting Information). [START_REF]R)-4b) and[END_REF] Despite the high synthetic potential of the disclosed reaction, it is also highly interesting in terms of mechanism. Although further experimentation is needed to have a complete understanding of the reaction mechanism, a plausible stepwise mechanism can be proposed (Scheme 2). The absolute configuration of the products is consistent with previous results obtained in 1,4-additions to other Michael acceptors catalyzed by catalyst 3. [START_REF] Mielgo | [END_REF] The acyclic synclinal transition-state model, as described by Seebach and Goliński, could be applied to the 1,6-conjugate addition of aldehydes to 1,3-bis(sulfonyl)butadienes and explains the observed high levels of stereoselectivity. [15] Steric repulsion away from the bulky groups of the pyrrolidine ring promotes the selective attack of the Re face of the E-trans enamine and the Re face of the Michael acceptor, forming 6. Previous studies have described a similar model for the 1,4-addition of aldehydes to vinyl sulfones, nitroolefins, [16] and vinyl phosphonates. [17] Even though this classical model can rationalize the observed stereoselectivity, several aspects still need to be addressed. The question of whether the catalyst is involved in the cyclization and in promoting the elimination is still not clear. This is more plausible given the fact that no linear product is observed, which implies that the cyclization/elimination steps are fast with the catalyst still involved. Despite the preliminary evidence for a 1,6-addition, a possible [4+2] cycloaddition cannot be ruled out and further investigations should shed light on these interesting problems. [START_REF] Masuyama | For the synthesis and application of sulfonyl butadiene[END_REF][18] The products obtained through the 1,6-conjugate addition/condensation reaction are highly interesting building blocks. To illustrate the synthetic utility of this method, the adduct 4 a was converted into 8 in an excellent 97 % yield through another conjugate addition of methyllithium (Scheme 3).
Perfect regioselectivity and a good level of diastereoselectivity (4:1 d.r.) were obtained for the subsequent creation of two new stereogenic centers in this final molecule containing four contiguous stereocenters. After a simple recrystallization, compound 8 was isolated as a 12:1 mixture of two diastereoisomers in 99 % ee among the 16 possible stereoisomers. The addition anti to the propyl group on the adjacent carbon atom was confirmed using NOE studies and 1H, 13C, DEPT, and HSQC spectra. This result highlights the great potential of the obtained dienes for further transformations by indicating the most electrophilic position in 4 b.
In conclusion we have developed an unprecedented enamine 1,6-addition by exploiting the properties of charge delocalization in 1,3-bis-(sulfonyl) butadienes. By appropriately designing a Michael acceptor, unique reactivities were obtained for the formation of highly valuable dienes containing two versatile vinyl sulfones. This remarkable reaction should find its applications in total synthesis thanks to its operational simplicity and to the exceptional levels of stereoselectivities of the final products (typically 99 % ee and 99:1 d.r.). We are convinced that this activation principle by charge delocalization through the addition of a second electron-withdrawing group should serve as a keystone for the development of new powerful 1,6- addition reactions. Full mechanistic studies as well as further investigations employing ketones and additional 1,6-acceptors are currently being pursued in our laboratories and will be published in due course.
Experimental Section
Typical procedure for the organocatalytic 1,6-addition reaction: Diene (0.2 mmol, 1 equiv) was added to a sample vial containing (R)-diphenylprolinol silyl ether (19.5 mg, 0.06 mmol, 0.3 equiv) dissolved in chloroform (0.5 mL), followed by addition of the aldehyde (0.4 mmol, 2 equiv). The reaction mixture was then stirred at RT until the reaction was complete (as evidenced by TLC). The reaction mixture was concentrated under reduced pressure and triturated with ice-cold methanol (2 × 3 mL) to yield the solid pure product.
Figure 1. ORTEP drawing of (R,R)-4 b with ellipsoids at 20 % probability.
We thank the Irish Research Council for Science, Engineering & Technology (IRCSET) and NUI Maynooth for funding. We also thank Dr. Padraig McLoughlin of Teagasc and the Ashtown Food Research Centre, Dublin for some NMR analysis. | 14,374 | [
"181424"
] | [
"301862",
"532876",
"373032"
] |
01681595 | en | [
"sdu",
"sde"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01681595/file/Corse_MER_2017%20HAL.pdf | Emmanuel Corse
email: [email protected]
Emese Meglécz
| Gaït Archambaud
| Morgane Ardisson
Jean-François Martin
| Christelle Tougard
| Rémi Chappaz
Vincent Dubut
email: [email protected]
A from-benchtop-to-desktop workflow for validating HTS data and for taxonomic identification in diet metabarcoding studies
Keywords: cytochrome c oxidase I, diet studies, HTS data filtering, metabarcoding, taxonomic assignment
The main objective of this work was to develop and validate a robust and reliable "from-benchtop-to-desktop" metabarcoding workflow to investigate the diet of invertebrate-eaters. We applied our workflow to faecal DNA samples of an invertebrate-eating fish species. A fragment of the cytochrome c oxidase I (COI) gene was amplified by combining two minibarcoding primer sets to maximize the taxonomic coverage. Amplicons were sequenced by an Illumina MiSeq platform. We developed a filtering approach based on a series of nonarbitrary thresholds established from control samples and from molecular replicates to address the elimination of cross-contamination, PCR/sequencing errors and mistagging artefacts. This resulted in a conservative and informative metabarcoding data set. We developed a taxonomic assignment procedure that combines different approaches and that allowed the identification of ~75% of invertebrate COI variants to the species level. Moreover, based on the diversity of the variants, we introduced a semiquantitative statistic in our diet study, the minimum number of individuals, which is based on the number of distinct variants in each sample. The metabarcoding approach described in this article may guide future diet studies that aim to produce robust data sets associated with a fine and accurate identification of prey items.
| INTRODUCTION
In ecology and conservation, reliable diet data sets are critical to the understanding of prey/habitat relationships and feeding habitats. In this perspective, the use of DNA-based approaches in trophic ecology has grown during the last years (e.g., [START_REF] Razgour | High-throughput sequencing offers insight into mechanisms of resource partitioning in cryptic bat species[END_REF][START_REF] Soininen | Highly overlapping winter diet in two sympatric lemming species revealed by DNA metabarcoding[END_REF], especially due to the advent of high-throughput sequencing (HTS), leading to the development of metabarcoding [START_REF] Taberlet | Towards next-generation biodiversity assessment using DNA metabarcoding[END_REF]. However, diet metabarcoding approaches face four main challenges: (i) amplification bias related to the degradation of DNA [START_REF] Sint | Optimizing methods for PCR-based analysis of predation[END_REF]; (ii) taxonomic coverage of primers [START_REF] Gibson | Simultaneous assessment of the macrobiome and microbiome in a bulk sample of tropical arthropods through DNA metasystematics[END_REF]; (iii) taxonomic identification and resolution of DNA barcode sequences [START_REF] Richardson | Evaluating and optimizing the performance of software commonly used for the taxonomic classification of DNA metabarcoding sequence data[END_REF]; and (iv) filtering of HTS data to eliminate artefacts (e.g., PCR/ sequencing errors, mistagging, contamination) that produce false positives and constitute low-frequency noise (LFN; sensu De [START_REF] De Barba | DNA metabarcoding multiplexing and validation of data accuracy for diet assessment: Application to omnivorous diet[END_REF]. In diet studies, metabarcoding primers should therefore target short regions (i.e., <300 bp) of multicopy DNA to tackle the degradation of DNA [START_REF] Pompanon | Who is eating what: Diet assessment using next generation sequencing[END_REF]. Moreover, binding sites of primers should be sufficiently conserved to minimize biases in taxonomic coverage [START_REF] Clarke | Environmental metabarcodes for insects: In silico PCR reveals potential for taxonomic bias[END_REF][START_REF] Deagle | DNA metabarcoding and the cytochrome c oxidase subunit I marker: Not a perfect match[END_REF]. Alternatively, taxonomic coverage can be improved by amplifying several loci [START_REF] De Barba | DNA metabarcoding multiplexing and validation of data accuracy for diet assessment: Application to omnivorous diet[END_REF] or using several sets of primers that target the same locus [START_REF] Gibson | Simultaneous assessment of the macrobiome and microbiome in a bulk sample of tropical arthropods through DNA metasystematics[END_REF]. In both cases, the choice of the PCR primers is critical for optimizing detection of all prey in faeces.
As for the taxonomic identification, a considerable trade-off between accuracy and sensitivity should be considered when selecting an assignment procedure [START_REF] Richardson | Evaluating and optimizing the performance of software commonly used for the taxonomic classification of DNA metabarcoding sequence data[END_REF]. Moreover, both the confidence and the resolution of taxonomic classifiers are highly dependent on the richness of reference sequence databases of the targeted loci [START_REF] Gibson | Simultaneous assessment of the macrobiome and microbiome in a bulk sample of tropical arthropods through DNA metasystematics[END_REF][START_REF] Porter | Rapid and accurate taxonomic classification of insect (class Insecta) cytochrome c oxidase subunit 1 (COI) DNA barcode sequences using a na€ ıve Bayesian classifier[END_REF]. Additionally, producing robust data sets is critical for conducting reliable ecological studies. However, most of the current clustering-based methods for filtering HTS need specific (and partly arbitrary) parameterization and often overestimate the real number of taxa in samples [START_REF] Brown | Divergence thresholds and divergent biodiversity estimates: Can metabarcoding reliably describe zooplankton communities?[END_REF][START_REF] Clare | The effects of parameter choice on defining molecular operational taxonomic units and resulting ecological analyses of metabarcoding data[END_REF][START_REF] Flynn | Toward accurate molecular identification of species in complex environmental samples: Testing the performance of sequence filtering and clustering methods[END_REF].
In fact, even if clustering reads into operational taxonomic units can partially address the overestimation of taxa, clustering-based methods do not account for some LFNs, such as cross-sample contamination and mistagging.
In this context, and following the recommendations by [START_REF] Murray | From benchtop to desktop: Important considerations when designing amplicon sequencing workflows[END_REF], we developed a "from-benchtop-to-desktop" metabarcoding workflow to investigate the diet of invertebrateeaters. We particularly focused on two main objectives. Our first objective was data reliability, which involves both minimizing false-negatives and false-positives. This was achieved by ensuring the efficiency of the combined use of two primer sets, by performing several PCR replicates and by developing a clustering-free filtering method based on control samples and nonarbitrary thresholds. Second, we developed an approach that should maximize the taxonomic resolution of molecular identification of prey by combining four assignment procedures and three reference databases. Our workflow was applied to the biodiversity assessment of prey ingested by Zingel asper (Linnaeus, 1758), a critically endangered benthic freshwater fish. Using this model, we demonstrated the application of our workflow on faecal samples and illustrated its interest for a better characterization of feeding habitats.
| MATERIAL AND METHODS
| Faecal sample collection
Thirty-five Z. asper specimens from which faeces could be collected were sampled on 5 September 2014 in the Durance River (France: 44°20′14″N, 5°54′46″E).
Collected faeces were stored in a 1.5-ml vial containing 96% ethanol.
After sampling, individuals were immediately released within the fishing area. Faeces were stored at À20°C until DNA extraction.
Additionally, five faecal samples from a fish species living in brackish-water habitats, Pomatoschistus microps (Krøyer, 1838), were also analysed to assess the versatility of the workflow and to control for mistagging in our HTS data set (see below). These individuals were sampled in the Vaccar es Lagoon (French Mediterranean coast: 44°20 0 14″N, 5°54 0 46″E).
| Faecal DNA extraction and controls
All faecal DNA extraction steps were conducted in a room dedicated to the handling of degraded DNA ("Plateforme ADN D egrad e" of the Institut des Sciences de l'Evolution de Montpellier, France) and following the specific safety measures described by [START_REF] Monti | Being cosmopolitan: Evolutionary history and phylogeography of a specialized raptor, the Osprey Pandion haliaetus[END_REF].
Before DNA extraction, faeces were dried using the Eppendorf Concentrator Plus (Eppendorf, Germany). One volume of dried faeces, one volume of zirconium oxide beads (0.5 mm) and ½ volume of sterile water were mixed to crush samples using a Bullet Blender (Next Advance, USA). The DNeasy® mericon Food Kit (QIAGEN, Germany) was used to extract DNA from faecal samples to minimize the level of co-extracted products and improve PCR success (Zarzoso-Lacoste, [START_REF] Zarzoso-Lacoste | Improving PCR detection of prey in molecular diet studies: Importance of group-specific primer set selection and extraction protocol performances[END_REF]. Each extraction series included (i) 23 faecal samples, (ii) a negative control for extraction (T ext) that consisted of 50 µl of DNA-free water subjected to the DNA extraction protocol and (iii) a negative control for DNA aerosols (T pai) that consisted of a 1.5-ml vial containing 50 µl of DNA-free water that remained open but otherwise untouched during the extraction protocol. DNA concentrations were quantified using a Qubit Fluorometer (Invitrogen, Darmstadt, Germany) and standardized to max. 20 ng/µl.
| Local DNA library construction
A local (noncomprehensive) DNA library (lcDNA samples) was constructed using invertebrates sampled in the Durance River and invertebrate samples from our laboratory collections. The sample composition and laboratory protocols are detailed in Table S1 and Appendix S1, respectively. We successfully sequenced 301 samples, representing 209 distinct species.
| Minibarcoding protocol and taxonomic coverage
Two DNA primer pairs were initially selected. They both amplify a short fragment from the 5′ end of the mitochondrial cytochrome c oxidase I (COI): ZBJ-ArtF1c and ZBJ-ArtR2c (Arthropods "universal"; [START_REF] Zeale | Taxon-specific PCR for DNA barcoding arthropod prey in bat faeces[END_REF]), hereafter abbreviated as ZF and ZR; and Uni-MinibarF1 and Uni-MinibarR1 (Eukaryotes "universal"; [START_REF] Meusnier | A universal DNA mini-barcode for biodiversity analysis[END_REF]), hereafter abbreviated as MF and MR.
All four pairwise combinations of the primers were tested for PCR success on lcDNA samples, and only MFZR and ZFZR were selected as a result (see Appendix S2). These two primer pairs produce overlapping amplicons (from ~210 to ~230 bp including primers), with MFZR amplicons being 18 bp longer in the 5′ region. The COI region of the predator (Z. asper) was not amplified using either ZFZR or MFZR; however, the MFZR primer pair amplified an unspecific fragment (~850 bp). Therefore, a blocking primer (BlNupRan) was developed to inhibit the amplification of the nontargeted locus (see Appendix S2). The taxonomic coverage of both primer sets was assessed by in vitro tests using the lcDNA samples and in silico using EcoPCR, EcoTaxStat and EcoTaxSpecificity [START_REF] Ficetola | An In silico approach for the evaluation of DNA barcodes[END_REF] (see Appendix S2).
| PCR-based enrichment of prey DNA and high-throughput sequencing
To track amplicons back to the samples and to avoid flashes of light during HTS, 12-14 nucleotide-long sequence tags were added onto the 5′ end of each primer (eight distinct tags for the ZF and MF primers, and 12 tags for ZR), creating 96 forward and reverse potential tag combinations (Table S2). Three PCR replicates were conducted in a volume of 25 µl for each minibarcode primer pair, resulting in six independent PCR enrichments per sample. Template DNA consisted of 2.5 µl of standardized faecal DNA extracts. For the primer pair MFZR, blocking primer BlNupRan was added in the PCR mix at 400 nM. Additionally, several control samples were subjected to the PCR enrichment step: (i) T ext and T pai (see above) are controls for cross-sample or exogenous contaminations during the DNA extractions; (ii) T PCR indicates the level of cross-contamination during the preparation of the PCR mix and plates (tagged primers but no DNA template in the PCR vial); (iii) one negative control (T tag) was included to assess the level of mistagging due to the recombination of sequences from different samples (see [START_REF] Schnell | Tag jumps illuminated-reducing sequence-to-sample misidentifications in metabarcoding studies[END_REF]). The T tag consisted of an empty vial on the PCR plate that did not contain any tagged primers or DNA. This creates an extra tag combination that is not used for any samples or controls (as in: [START_REF] Esling | Accurate multiplexing and filtering for high-throughput amplicon-sequencing[END_REF]); (iv) finally, two mock faecal samples were also amplified (T pos1 and T pos2), consisting of identical artificial mixes of the DNA obtained from six potential prey (Caenis pusilla, Baetis rhodani, Orthocladiinae sp., Chironomidae sp., Hydropsyche pellucidula, Phoxinus cf. phoxinus) and from Z. asper. The DNA concentration of T pos1 and T pos2 was 0.2 ng/µl for each potential prey, and 0.8 ng/µl for Z. asper. T pos1 and T pos2 were used to gauge sequencing or PCR artefacts and evaluate the reliability of our metabarcoding analyses (see De [START_REF] De Barba | DNA metabarcoding multiplexing and validation of data accuracy for diet assessment: Application to omnivorous diet[END_REF]. The DNA from the specimens used for our mock samples was extracted in a room free of DNA handling. Moreover, the COI reference sequences of each specimen were originally obtained by amplification using the CK4 primer set (see Appendix S1) and Sanger sequencing (except the Chironomidae sp. sample that failed to be amplified with CK4). Amplicons were checked by gel electrophoresis and were then pooled by replicate series, that is, 48 samples for each of the six replicate series (two primer pairs, three replicates each). Electrophoretic migrations were carried out using 20 µl of each pool on an agarose gel at 1.25%. The amplicons with expected sizes were isolated using a sterile scalpel, and the DNA was purified using the PureLink® Quick Gel Extraction Kit (Life Technologies, Germany). About 20 ng of purified DNA from each replicate series was used to generate six Illumina sequencing libraries using the TruSeq® Nano DNA Sample Preparation Kit (Illumina, San Diego, CA, USA). The library preparation included DNA end-repairing, A-tailing, Illumina adapter ligation, and limited amplification (12 PCR cycles). Illumina libraries were distinguished by the ligation of distinct adapters.
The libraries were controlled for size and quality using the Agilent Bioanalyzer DNA 1000 Kit (Agilent Technologies, Palo Alto, CA, USA) and for DNA concentrations with the Kapa Library Quantification Kit for Illumina® platforms (KapaBiosystems, Wilmington, MA, USA). The six libraries were pooled at equimolar concentration (4 nM) and sequenced on an Illumina MiSeq v3 platform as paired-end 250-nucleotide reads.
| Sequence processing and filtering
We developed a pipeline (see Figure 1) to filter the obtained MiSeq data (see Appendix S3 for a detailed version of the filtering pipeline).
Unless specified otherwise, bioinformatics processing of reads was performed using custom Perl scripts (Dryad; https://doi.org/10.5061/dryad.f40v5) and statistical analyses were conducted using R software (R Development Core Team 2014). Reads from the different primer pairs and replicate series were sorted according to the Illumina adapter sequences. PEAR v0.9.5 [START_REF] Zhang | PEAR: A fast and accurate Illumina Paired-End reAd mergeR[END_REF] was used to merge read pairs and discard low-quality reads (Figure 1, Step 1). Merged reads were then assigned to samples and replicates using a BLAST-based approach, and tags and primers were trimmed from the reads. Only reads that had a perfect match to tags were accepted. This constituted an additional step to eliminate low-quality reads. Strictly identical trimmed reads within each of the six replicate series were pooled into variants (i.e., dereplicated reads) and singletons (i.e., only one read in a replicate series) were discarded (Figure 1, Step 2). Variants of the three replicate series obtained from the same primer set (MFZR or ZFZR) were pooled.
The read number associated with each variant-replicate combination, however, was kept.
The following filtering steps were done separately for MFZR and ZFZR. Variants that did not comply with our BLAST conditions (E-value threshold: 1e-10; minimum query coverage: 80%) against our custom COI database (COI-filtering-DB; Dryad, https://doi.org/10.5061/dryad.f40v5) were considered as non-COI sequences and discarded (Figure 1, Step 3).
Then, LFN filters were used to discard variants with low read counts, likely originating from contamination, mistagging or sequencing/PCR errors. All variant-replicate combinations were considered independently. For each replicate of a sample, all the variants associated with a read count or a relative frequency that were under one of the LFN thresholds (i.e., not distinguishable from the noise inherent to the HTS data) were removed. Three different thresholds were considered. In fact, a variant can be rare in a given replicate (i) compared to the total number of reads in the replicate (N repl), (ii) compared to its total number of reads in the run (N var) or (iii) if it has few reads in a replicate in absolute terms (N var-repl). The first threshold (LFN pos = 0.3%) was based on the relative frequency (N var-repl /N repl) of the least frequent expected variant in all mock community replicates. This threshold helps to eliminate most low-frequency variants due to the biases cited above (Figure 1, Step 4a).
The second threshold (LFN tag = 0.25%) was determined based on (i) freshwater prey variants present in samples of the brackish-water fish (P. microps), (ii) brackish-water prey variants present in samples of the freshwater fish (Z. asper) and (iii) unexpected variants present in the mock samples. The LFN tag allowed discarding variants that appear by mistagging or cross-sample contamination despite their high frequency in the run (Figure 1, Step 4b).
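To make the filtering logic concrete, a minimal sketch is given below (assuming the read counts are organized as a variant × replicate matrix; the ratio used for LFN tag and the absolute cut-off are our interpretation of the description, and the placeholder value of the absolute threshold is not taken from the paper).

import numpy as np

def lfn_filter(counts, lfn_pos=0.003, lfn_tag=0.0025, lfn_abs=10):
    """Illustrative low-frequency-noise (LFN) filter.

    counts[v, r] is the read count of variant v in replicate r.
    A variant-replicate occurrence is kept only if it passes all thresholds:
      - its frequency within the replicate (N_var-repl / N_repl) >= lfn_pos,
      - its frequency relative to the variant's total reads in the run
        (N_var-repl / N_var) >= lfn_tag,
      - its absolute read count >= lfn_abs (placeholder value).
    """
    counts = np.asarray(counts, dtype=float)
    n_repl = counts.sum(axis=0, keepdims=True)   # total reads per replicate
    n_var = counts.sum(axis=1, keepdims=True)    # total reads per variant over the run
    keep = np.ones_like(counts, dtype=bool)
    with np.errstate(invalid="ignore", divide="ignore"):
        keep &= (counts / n_repl) >= lfn_pos
        keep &= (counts / n_var) >= lfn_tag
    keep &= counts >= lfn_abs
    return np.where(keep, counts, 0.0)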
We therefore used the Obiclean program (Obitools package; [START_REF] Boyer | Obitools: A unix-inspired software package for DNA metabarcoding[END_REF] to further filter out variants resulting from PCR and/or sequencing errors. Based on the known variants of mock samples, a 20% threshold was determined, and all variants classified as "internal" (see: [START_REF] Shehzad | Prey preference of snow leopard (Panthera uncia) in South Gobi, Mongolia[END_REF][START_REF] De Barba | DNA metabarcoding multiplexing and validation of data accuracy for diet assessment: Application to omnivorous diet[END_REF]Appendix S3) were discarded (Figure 1 Step 4b
LFN tag
Filter out mistagging from highly amplified variants, where LFN pos does not work
Step4
All LFN
Step8
Variants present in at least 2 replicates per sample
Ensure variant consistency between replicates
Step9
Concensus over replicates
Step 4c
LFN neg
Eliminate low frequency contamination from replicates with low read count (e.g., in T neg )
Step10
Chimera filter (UCHIME2)
Step11
Pseudogene filter
Step1
Merge read pairs with PEAR
Includes quality filtering
Step2
Assign reads to replicates Dereplicate Delete singletons Accept only reads with perfect match to tags and delete singletons
Step3
Filter out non-COI reads In parallel, the data set that passed filtering steps 1-3 was also analysed by UNOISE2 [START_REF] Edgar | UNOISE2: Improved error-correction for Illumina 16S and ITS amplicon sequencing[END_REF] for denoising reads. UNOISE2
F I G U R E
intends to eliminate all noise coming from sequencing and PCR errors. However, UNOISE2 does not deal with mistagging and contamination. We therefore further filtered the data by UNCROSS [START_REF] Edgar | UNCROSS: Filtering of high-frequency cross-talk in 16S amplicon reads[END_REF]. The UNOISE2/UNCROSS filtering procedure is roughly comparable to our filtering steps 4 and 6. To make the results of this second filtering approach comparable to our pipeline, we further applied our filtering steps 5, 7 and 8 to ensure repeatability within samples, as well as filtering steps 10 and 11 to eliminate chimera and pseudogene variants, respectively.

| Taxonomic assignation of variants

Second, we used an automatic procedure that assigned each variant/contig to the lowest taxonomic group (LTG) of BLAST hits against a custom-built local database (Taxassign-DB; Appendix S4). The principle of our LTG approach is similar to the lowest common ancestor developed by [START_REF] Huson | Integrative analysis of environmental sequences using MEGAN4[END_REF]. All variants/contigs were BLASTed against the Taxassign-DB (BLASTN, E-value 1e-10, minimum coverage of the query sequence: 90%, subject annotated to a family or lower level). The LTG was determined based on different similarity cut-offs (S) from 100% to 70%. At each S, the LTG was defined as the lowest taxonomic group that contained at least 90% of the selected hits and that contained at least three taxa for S < 97%. The LTG corresponding to the highest S was selected.
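To fix ideas, a minimal sketch of such an LTG rule is given below (the data structure, percent identity plus a lineage tuple per BLAST hit, and the cut-off grid are assumptions for illustration; the published pipeline may differ in details such as tie handling or the rank at which the "three taxa" condition is evaluated).

from collections import Counter

def lowest_taxonomic_group(hits, min_prop=0.9,
                           cutoffs=(100, 97, 95, 90, 85, 80, 75, 70)):
    """Illustrative LTG assignment from BLAST hits.

    hits: list of (percent_identity, lineage) pairs, lineage being a tuple of
    taxon names ordered from kingdom down to species. At each similarity
    cut-off S (highest first), the LTG is the lowest rank shared by at least
    min_prop of the retained hits; for S < 97, at least three distinct
    terminal taxa are additionally required.
    """
    for s in cutoffs:
        selected = [lineage for pid, lineage in hits if pid >= s]
        if not selected:
            continue
        if s < 97 and len({lineage[-1] for lineage in selected}) < 3:
            continue
        ltg = None
        depth = max(len(lineage) for lineage in selected)
        for rank in range(depth):
            names = Counter(lineage[rank] for lineage in selected if len(lineage) > rank)
            name, count = names.most_common(1)[0]
            if count / len(selected) >= min_prop:
                ltg = name            # keep walking down while enough hits agree
            else:
                break
        if ltg is not None:
            return s, ltg
    return None, None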
Third, we used the Identification Request of BOLD Systems Tools (www.boldsystems.org; Ratnasingham & Hebert, 2007) using the COI "All Barcode Records" database (performed in September 2015). This approach involved all the sequences of the BOLD Systems, even those not published yet.
Fourth, when a cross-comparison between the assignment analyses revealed inconsistencies or when the taxonomic assignments had low resolution, we performed complementary phylogenetic analyses of the variants and contigs. For each of these ambiguous variants/contigs, we selected ten sequences from the Taxassign-DB by retaining: (i) the best BLAST hits (BLASTN, E-value 1e-10, minimum coverage of the query sequence: 90%); (ii) a maximum of three sequences per taxon;
and (iii) only sequences annotated to at least the Family level. Neighbor-joining trees were constructed based on K2P distances with 1,000 bootstraps using MEGA 6.06 [START_REF] Tamura | MEGA6: Molecular Evolutionary Genetics Analysis version 6.0[END_REF]. The topology of the phylogenetic trees, especially the apices of branches, was used to resolve the taxonomy of variants and contigs. When the tree topologies did not resolve the taxonomic ambiguities, additional information such as biogeographical data was taken into account (see Table S3).
| Additional analyses
The variants and contigs validated in the Z. asper and P. microps samples were used to further evaluate the coverage of MFZR and ZFZR primer pairs. To standardize this evaluation, a complete-linkage clustering [START_REF] Sneath | Numerical taxonomy: The principles and practice of numerical classification[END_REF] was used to delineate molecular operational taxonomic units (MOTUs; [START_REF] Blaxter | Defining operational taxonomic units using DNA barcode data[END_REF] among validated variants and contigs. A maximum of 3% divergence between variants/contigs was allowed within a MOTU. This maximal divergence was previously used as a proxy for delineating invertebrates' species (e.g., [START_REF] Clare | The diet of Myotis lucifugus across Canada: Assessing foraging quality and diet variability[END_REF][START_REF] Vesterinen | What you need is what you eat? Prey selection by the bat Myotis daubentonii[END_REF]. For taxa that present a higher within-species sequence divergence, however, this 3% threshold will allow taking into account possible differential intraspecific coverage.
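As an illustration of this clustering step (assuming a matrix of pairwise sequence divergences is available; the paper does not specify the exact distance measure or implementation used), a short sketch:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def motu_clusters(dist_matrix, max_divergence=0.03):
    """Complete-linkage clustering of variants/contigs into MOTUs.

    dist_matrix: square matrix of pairwise divergences (proportion of
    differing sites). With complete linkage, two variants end up in the same
    MOTU only if all pairwise distances within the cluster are <= max_divergence.
    """
    condensed = squareform(np.asarray(dist_matrix), checks=False)
    tree = linkage(condensed, method="complete")
    return fcluster(tree, t=max_divergence, criterion="distance")

# toy example: the first two variants are within 3% of each other
D = np.array([[0.00, 0.02, 0.10],
              [0.02, 0.00, 0.11],
              [0.10, 0.11, 0.00]])
print(motu_clusters(D))   # two MOTUs, e.g. [1, 1, 2]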
In diet studies, the quantification of individuals per prey would add considerable ecological information. However, due to PCR biases, read counts are generally considered as inappropriate to assess the relative abundance of prey [START_REF] Elbrecht | Can DNA-based ecosystem assessments quantify species abundance? Testing primer bias and biomass-Sequence relationships with an innovative metabarcoding protocol[END_REF].
Alternatively, we used a variant-centred approach: we assumed that each distinct variant of a given prey taxon detected in a sample corresponds to at least one ingested individual, so that the number of distinct variants per taxon in a sample provides a minimum number of individuals (MNI). Two categories of prey were taken into consideration during our analysis: invertebrates (excluding Rotifera) and microorganisms (including diatoms, red and green algae, and Rotifera). In this work, we focused on invertebrates as we collected faeces from an invertebrate-eating species. In contrast, microorganisms are less relevant since some uncertainty remains about their ingestion (secondary predation and/or nontargeted ingestion when predating invertebrates).
| RESULTS
| HTS data filtering
In the mock community samples, the filtering pipeline retained the variants expected from the taxa used for the preparation of the mock samples (Figure 4a, Table S3), and one unexpected variant (MFZR_082136) assigned to an undetermined Eukaryota (Fig. S4). This variant is present in both mock community samples but absent from all other samples. Therefore, it is likely that it comes from an organism ingested by or attached to one of the prey individuals used for the construction of the mock sample.
When using the pipeline based on UNOISE2 and UNCROSS for filtering our data set, a total of 18 variants were retained for each mock community sample (17 present in both). While this pipeline allowed the validation of the six variants expected in the mock samples, it retained 12 additional and unexpected variants (including MFZR_082136).
| Taxonomic identification and resolution
The 99 variants and contigs identified for Z. asper and P. microps faecal samples were analysed using the SAP, BOLD, LTG and FTA analyses (see Table S3). Regarding the variants and contigs assigned to invertebrates in Z. asper samples (n = 42), the taxonomic resolution of each analysis was quantified using the IR index (Figure 2a).
The IR of FTA (IR FTA = 5.8 ± 0.3) was significantly higher than those calculated for SAP (IR SAP = 4.1 ± 1.1), BOLD (IR BOLD = 5.2 ± 1.2) and LTG (IR LTG = 5.7 ± 0.3). The FTA approach resulted in the assignment of most variants and contigs to the species level (Figure 2b). It should be noted that a very close performance (although significantly lower) was obtained by the fully automatic LTG approach.
During the cross-validation step, the taxonomic assignments of SAP, LTG and BOLD were compared: (i) in 6% of the cases, all three analyses suggested the same taxon; (ii) in 87% of cases, the analyses were consistent but assigned the variants/contigs to different taxonomic levels and the lowest taxonomic level was retained;
(iii) in 7% of cases, the three approaches produced conflicting assignments. Variants or contigs with unsatisfactory taxonomic levels or conflicting assignments were subjected to a phylogenetic analysis. All variants and contigs assigned to Simulium sp. were also included in the phylogenetic analyses, since at least one of them showed a conflicting assignment. Phylogenetic analyses were conducted for a total of 33 variants and contigs (sequence alignment deposited in Dryad; https://doi.org/10.5061/dryad.f40v5), which resulted in (i) the resolution of the assignment of nine variants and
(ii) the refinement of the identification of a further 14 variants (Figs S3 and S4). In one case, biogeographical data were also considered for the FTA decision (Table S3; contig_0260 identified as Antocha vitripennis since it is recorded in the sampling area, whereas Austrophorocera Janzen04 sequences originate from Costa Rica).
| Taxonomic coverage of the minibarcode primer set
In vitro tests showed that 87% and 85% of the taxa were amplified by ZFZR and MFZR, respectively. The combination of both primer pairs leads to a mean amplification success of 94% (Table 2): 100%
for Arthropoda, 80% for Mollusca, 90% for Annelida and 65% for Teleostei. The PCR success was generally similar among samples of the same species, with a few exceptions (Table S1).
In silico tests showed that the taxonomic coverage at the species level for Arthropoda was 63% and 66% for MFZR and ZFZR, respectively (Table S4). When combining the coverage of the two primer pairs, the taxonomic coverage increased to 77% for Arthropoda, varying from 62% (Hymenoptera) to 100% (Lepidoptera).
The discriminating power of the minibarcoding amplicons was also estimated in silico: 79%-100% of the analysed species could be differentiated by at least one mutation for the analysed taxa (Table S4). To further evaluate the complementarity of the two primer pairs (ZFZR and MFZR), we looked at their performance in detecting the 81 MOTUs identified in the Z. asper and P. microps faecal samples (Table S5): ~30% MOTUs were detected with both primer pairs, 23% were detected with ZFZR only, and 47% were detected with MFZR only (Figure 3a). When focusing on invertebrates, 53% of MOTUs were detected with one of the two primer sets only (Figure 3b). The ZFZR pair was more efficient in detecting Diptera (both for Chironomidae and Simuliidae), whereas MFZR detected more Baetidae (Ephemeroptera) and Crustacea.
| Sample composition
The number of variants and contigs in the Z. asper faecal samples varied from 1 to 24 (6.6 ± 5.5). At least one variant of invertebrates was present in all Z. asper samples, with a mean value of 3.4 (± 1.7) (Table S3; Figure 4a). These samples also varied in terms of prey composition: whereas Baetis was detected in 79% of the samples, the frequency of other prey varied widely. Ephemeroptera, Diptera and Trichoptera appeared to be the most abundant prey at the population scale (Figure 4b). Four Ephemeroptera families were identified, with a predominance of Baetidae (five species), which also displayed the highest MNI (22% of the total MNI in the Z. asper samples; Figure 4b). The main groups of Diptera that were preyed upon were Simuliidae and Orthocladiinae (a subfamily of Chironomidae). Noninvertebrate taxa were mainly composed of metazoan microorganisms: Bacillariophyta, Rotifera, Rhodophyta and Oomycetes. Furthermore, none of the dietary MOTU accumulation curves approached an asymptote (Figure 4c).
Regarding the P. microps samples, prey DNA was detected in samples P3 and P5 only (one and four variants/contigs, respectively), and the DNA of P. microps was detected in all five faecal samples (Table S3).
| DISCUSSION
| Producing robust HTS data sets
The choice of the primer sets is crucial in diet metabarcoding as it may lead to false negatives due to insufficient taxonomic coverage [START_REF] Pompanon | Who is eating what: Diet assessment using next generation sequencing[END_REF]. In this study, we used two primer sets to amplify a short but informative COI region (MFZR and ZFZR). In vitro tests on our lcDNA samples showed that the primer sets were complementary and enabled coverage of a large taxonomic spectrum, especially for invertebrates (100% of items), whereas in silico tests suggested lower coverage. Our Z. asper faecal samples displayed a taxonomic diversity comparable to a previous morphology-based diet study ([START_REF] Cavalli | Diet and growth of the endangered Zingel asper in the Durance River[END_REF]; summary in Fig. S5). Interestingly, the complementarity of the primer sets was higher in the Z. asper faecal samples (53% of invertebrate MOTUs detected by one of the two primer sets only) compared to the in vitro tests (11%). Increased complementarity in multiplexed samples is likely to be the result of preferential primer binding (e.g. [START_REF] Thomas | Quantitative DNA metabarcoding: Improved estimates of species proportional biomass using correction factors derived from control material[END_REF]), which justifies the use of more than one primer pair. In our case, the selection of two primer pairs led to the detection of all prey in the positive samples and to a high prey diversity within and among Z. asper samples. Furthermore, the successful detection of prey in P. microps faecal samples suggests that these primer sets are applicable to species that live in non-freshwater environments. However, in vitro tests on lcDNA samples revealed that the combined use of ZFZR and MFZR displayed insufficient taxonomic coverage in some groups (e.g. 80% of Mollusca). Nevertheless, our workflow could easily be adapted to other predators and environments, thanks to the availability of several "universal" minibarcodes located in the COI region that we have targeted (e.g., [START_REF] Brandon-Mong | DNA metabarcoding of insects and allies: An evaluation of primers and pipelines[END_REF][START_REF] Hajibabaei | Environmental barcoding: A next-generation sequencing approach for biomonitoring applications using river benthos[END_REF][START_REF] Leray | A new versatile primer set targeting a short fragment of the mitochondrial COI region for metabarcoding metazoan diversity: Application for characterizing coral reef fish gut contents[END_REF][START_REF] Shokralla | Massively parallel multiplex DNA sequencing for specimen identification using an Illumina MiSeq platform[END_REF]).
The most commonly used filtering pipelines when dealing with HTS metabarcoding data, such as mothur [START_REF] Schloss | Introducing mothur: Open-source, platform-independent, community-supported software for describing and comparing microbial communities[END_REF] and QIIME [START_REF] Caporaso | QIIME allows analysis of highthroughput community sequencing data[END_REF], rely on the clustering of reads into MOTUs. In these clustering-based approaches however, almost every step and parameter in the bioinformatics pipeline influences the outcome, and the number of taxa is often overestimated (e.g. [START_REF] Brown | Divergence thresholds and divergent biodiversity estimates: Can metabarcoding reliably describe zooplankton communities?[END_REF][START_REF] Clare | The effects of parameter choice on defining molecular operational taxonomic units and resulting ecological analyses of metabarcoding data[END_REF][START_REF] Majaneva | Bioinformatic amplicon read processing strategies strongly affect eukaryotic diversity and the taxonomic composition of communities[END_REF]). Therefore, each data set will need a specific analysis method, and parameters should be tailored to the purpose of the study [START_REF] Flynn | Toward accurate molecular identification of species in complex environmental samples: Testing the performance of sequence filtering and clustering methods[END_REF]. We therefore favoured and developed a clustering-free pipeline that relies on a series of stringent filtering steps based on the read counts of each variant in each replicate (for a similar approach, see De Barba et al., 2014). For this purpose, the thresholds used in our filtering approach were inferred from mock community samples (Step 4a) and negative controls (Step 4c). It has to be noted that this conservative approach will discard all variants whose frequencies are below the LFN thresholds. In the context of complex community samples, such as samples collected for biodiversity assessment and monitoring (e.g., Elbrecht & Leese, 2017; [START_REF] Lanz En | High-throughput metabarcoding of eukaryotic diversity for environmental monitoring of offshore oil-drilling activities[END_REF][START_REF] Leray | DNA barcoding and metabarcoding of standardized samples reveal patterns of marine benthic diversity[END_REF]), some low-abundance taxa may not be validated. In fact, their read count may fall below one or more LFN thresholds, making the corresponding variant indistinguishable from noise. In this case, the analysis of several biological replicates (i.e., distinct DNA extractions corresponding to different fractions of a given sample) should help with the detection and the validation of low-abundance taxa (Lanzén, Lekang, Jonassen, [START_REF] Lanz En | DNA extraction replicates improve diversity and compositional dissimilarity in metabarcoding of eukaryotes in marine sediments[END_REF][START_REF] Zhan | Reproducibility of pyrosequencing data for biodiversity assessment in complex communities[END_REF]). In addition, we recommend that the mock samples approximate the complexity of the communities sampled, by approximating their expected taxonomic composition and even the differential abundance of taxa.
Ignoring the mistagging bias [START_REF] Schnell | Tag jumps illuminated-reducing sequence-to-sample misidentifications in metabarcoding studies[END_REF] may lead (i) to overestimating the number of taxa in a sample and (ii) to blurring the differentiation between samples with respect to their taxonomic composition. To assess and overcome the mistagging bias, two strategies were recently suggested: (i) increasing the number of unused tag combinations using a Latin square design [START_REF] Esling | Accurate multiplexing and filtering for high-throughput amplicon-sequencing[END_REF]; and (ii) using fusion primers [START_REF] Herbold | A flexible and economical barcoding approach for highly multiplexed amplicon sequencing of diverse target genes[END_REF] to avoid the creation of intersample chimeras when pooling samples during the sequencing library preparation. In the latter case however, PCR efficiency is reduced substantially [START_REF] Schnell | Tag jumps illuminated-reducing sequence-to-sample misidentifications in metabarcoding studies[END_REF] and template-specific amplification bias may be inflated (O'Donnell, Kelly, Lowell, & Port, 2016). Following Esling et al. (2015), we chose a variant frequency-dependent approach (LFN tag; Step 4b) to control mistagging. In our case, the threshold of the LFN tag filter is based on the mock community samples and on the co-analysis of samples from different habitats (brackish-water and freshwater) in the same HTS run.
In our workflow, LFN thresholds appear as the most critical parameters for the filtering and the validation of HTS data: most variants expected to be false positives were discarded at Step 4 (see Table 1). Nevertheless, subsequent filtering steps (steps 5-11) are also important for the validation of the data. Expectedly, omitting the filtering steps 5-11 would decrease the robustness of the data and inflate the taxonomic and/or genetic diversity within samples, biasing biodiversity estimations. In our case, for example, omitting filtering steps 5-11 is expected to inflate the MNI estimator (see below). As a matter of fact, further false positives were controlled, such as variants with PCR/sequencing errors (Step 6) and chimeras (Step 10). Moreover, our workflow included three PCR replicates for each primer pair to avoid false-positive detections (see [START_REF] Ficetola | How to limit false positives in environmental DNA and metabarcoding[END_REF]). These PCR replicates were sequenced separately and used at filtering steps 5, 7
and 8 for validating reproducible variants only. Several authors, however, used PCR replicates differently in the case of complex community samples (e.g., [START_REF] Lanz En | High-throughput metabarcoding of eukaryotic diversity for environmental monitoring of offshore oil-drilling activities[END_REF][START_REF] Leray | Random sampling causes the low reproducibility of rare eukaryotic OTUs in Illumina COI metabarcoding[END_REF]: PCR replicates were pooled and then sequenced jointly to limit the impact of random sampling amplification biases and hence minimizing false negatives due to low-abundance taxa. Nevertheless, this approach does not allow minimizing the false-positive detections. In our case, the size of faecal samples allowed one single DNA extraction reaction. However, in the case of larger or more complex samples, several biological replicates (i.e., distinct fractions of the same sample) may be analysed for controlling false negatives (see [START_REF] Lanz En | DNA extraction replicates improve diversity and compositional dissimilarity in metabarcoding of eukaryotes in marine sediments[END_REF][START_REF] Zhan | Reproducibility of pyrosequencing data for biodiversity assessment in complex communities[END_REF].
The combined use of biological replicates and of PCR replicates (sequenced separately) should ensure both avoiding false-positive detections and maximizing the coverage of taxa diversity.
Alternatively, for complex community samples, more relaxed filtering thresholds (at steps 4a and 4c) and more relaxed reproducibility criteria (at steps 5, 7 and 8) might be considered if one wants to maximize the detection of low-abundance taxa. In this case however, the presence of these taxa in the final data set will remain uncertain, since they are confounded with low-frequency noise and false positives. These low-frequency taxa will therefore require validation by complementary field data, for instance by estimating the probability of false negatives (as suggested by [START_REF] Leray | Random sampling causes the low reproducibility of rare eukaryotic OTUs in Illumina COI metabarcoding[END_REF]), by quantifying representative sequences [START_REF] Gonz Alez-Tortuero | The quantification of representative sequences pipeline for amplicon sequencing: Case study on within-population ITS1 sequence variation in a microparasite infecting Daphnia[END_REF][START_REF] Shokralla | Massively parallel multiplex DNA sequencing for specimen identification using an Illumina MiSeq platform[END_REF], or by considering prey biomass [START_REF] Jo | Discovering hidden biodiversity: The use of complementary monitoring of fish diet based on DNA barcoding in freshwater ecosystems[END_REF]. In this study, thanks to rigorous filtering procedures, we produced a robust data set of variants and contigs that is reliable and relatively free from false positives and artefacts. Moreover, the variants corresponding to pseudogenes were filtered out.
Therefore, we are confident that the number of COI variants/contigs can be used to estimate MNIs. However, in some cases, namely for prey taxa prone to heteroplasmy or presenting tissue-specific mitochondrial variants, an overestimation bias should be considered when using MNIs. The MNI statistic may therefore deserve further evaluation using appropriate mock samples as controls.
In the case of Z. asper prey, the biological analyses suggested that the final variants can be used as a reliable estimation of their DNA diversity in faeces and, therefore, that the MNI statistic adequately reflects relative prey abundance. In fact, the MNI was congruent with the prey diversity previously observed by morphological gut-content analysis in the Durance River [START_REF] Cavalli | Diet and growth of the endangered Zingel asper in the Durance River[END_REF]. More specifically, the family Baetidae exhibited the highest MNIs in Z. asper faecal samples and was also the most abundant prey found in morphological gut-content analyses (Fig. S5).
| Taxonomic assignment of prey
Taxonomic assignment procedures are critical in metabarcoding studies, and several innovative approaches were recently developed (e.g., [START_REF] Porter | Rapid and accurate taxonomic classification of insect (class Insecta) cytochrome c oxidase subunit 1 (COI) DNA barcode sequences using a na€ ıve Bayesian classifier[END_REF][START_REF] Somervuo | Unbiased probabilistic taxonomic classification for DNA barcoding[END_REF][START_REF] Somervuo | Quantifying uncertainty of taxonomic placement in DNA barcoding and metabarcoding[END_REF]). However, most metabarcoding studies are based on a single taxonomic assignment approach, yet using different procedures as well as different reference databases should improve the reliability and the accuracy of MOTU identification. For this study, we combined different assignment procedures to benefit from their respective advantages. Although our strategy can be time-consuming (some of the steps cannot be fully automated), the IR statistic showed that it significantly improved the precision of the assignments. This resulted in a very low proportion of high-level taxonomic assignments (e.g., Eukaryota) and a high proportion of low-level taxonomic assignments for variants of invertebrates (species level for most). In contrast, most invertebrates identified by morphology-based gut-content analysis were identified at the family level in [START_REF] Cavalli | Diet and growth of the endangered Zingel asper in the Durance River[END_REF]. Moreover, our detailed assignment of Z. asper prey supports the use of COI as the favoured target gene for invertebrates (see also [START_REF] Brandon-Mong | DNA metabarcoding of insects and allies: An evaluation of primers and pipelines[END_REF]).
COI appears to be one of the most appropriate genes for invertebrate metabarcoding because (i) it reveals species-level variation [START_REF] Elbrecht | Testing the potential of a ribosomal 16S marker for DNA metabarcoding of insects[END_REF], and (ii) a huge amount of annotated sequences is available in public databases [START_REF] Deagle | DNA metabarcoding and the cytochrome c oxidase subunit I marker: Not a perfect match[END_REF].
The taxonomic assignment of invertebrates indicated that the Baetidae found in Z. asper faeces belonged mostly to the genus Baetis although other genera were also present in the sampling area (e.g., Alainites, Acentrella, Centroptilum and Procloeon). According to [START_REF] Tachet | Invert ebr es d'eau douce: Syst ematique[END_REF], Baetis has a higher affinity towards coarse substrates (e.g., stones, pebbles) with higher water velocity than other Baetidae, which in turn prefer epiphyte lifestyle on macrophyte or algae substrate. Additionally, the genera Hydropsyche (Hydropsychiidae) and Simulium (Simuliidae), all epibenthic and rheophilic taxa, occurred at non-negligible frequencies in Z. asper faecal samples.
Consequently, the cumulated frequency based on MNI of Baetis, Hydropsyche and Simuliidae was ~70% (see Figure 4). This suggests that Z. asper actively selected rheophilic and epibenthic macroinvertebrates, which constitutes accurate information related to habitat use of this critically endangered species.
| CONCLUSION AND PERSPECTIVES
Diet analyses are critical to gain a better understanding of prey/habitat relationships and feeding habitats (e.g., [START_REF] Corse | A PCR-based method for diet analysis in freshwater organisms using 18S rDNA barcoding on faeces[END_REF][START_REF] Sanchez-Hernandez | Age-related differences in prey-handling efficiency and feeding habitat utilization of Squalius carolitertii (Cyprinidae) according to prey trait analysis[END_REF], especially when a fine and accurate taxonomic identification of prey can be achieved (e.g., Adamczuk & Mieczan, 2015). In this study, stringent wet-laboratory conditions, carefully selected primer sets, PCR replicates and nonarbitrary filtering thresholds based on control samples led to the validation of a robust data set dedicated to the study of the diet of a critically endangered fish species. The robustness of our data set allowed us to take into account the prey genetic variability and to obtain a semiquantitative estimate of diet through the use of MNI. Furthermore, a species-level identification for most of the invertebrate prey was obtained through a complementary taxonomic assignment approach. On the whole, our approach produced a robust data set for ecological analyses and opens perspective for more precision on feeding habitats of invertebrate-eaters. This new information would in turn improve conservation strategies such as habitat restoration or the selection of optimal re-introduction sites.
Finally, our results reinforce previous findings suggesting that diet metabarcoding can be a powerful tool in trophic ecology, as it allows the determination of large-scale and highly resolved networks [START_REF] Evans | Merging DNA metabarcoding and ecological network analysis to understand and build resilient terrestrial ecosystems[END_REF]. It may also produce a level of precision that reveals unexpected food web structures [START_REF] Roslin | The use of DNA barcodes in food web construction-terrestrial and aquatic ecologists unite![END_REF]. This article proposes a from-benchtop-to-desktop workflow that provides an efficient tool for the study of invertebrate-eater diets and will, we hope, stimulate and inspire future trophic works using metabarcoding approaches.
ACKNOWLEDGEMENTS
We thank three anonymous reviewers for their constructive comments, which improved our manuscript significantly. We warmly thank
Finally, for replicates with a low number of reads, contamination could not necessarily be excluded by the LFN pos and LFN tag filters. Therefore, a last threshold (LFN neg), based on the maximum read count (N var-repl) among all variants of all negative-control replicates, was used (LFN neg: 31 and 53 for MFZR and ZFZR, respectively; Figure 1, Step 4c). The three LFN filters were run in parallel on the merged and dereplicated COI variants, and only variant-replicate combinations that passed all three LFN filters were retained (Figure 1, Step 4), except those that were present in only one replicate within a sample (Figure 1, Step 5).
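As a rough illustration of how such low-frequency-noise (LFN) filtering can be encoded, the sketch below applies three thresholds to a table of read counts. The table layout, function name and default threshold values are illustrative assumptions, not the exact implementation used in this workflow:

```python
import pandas as pd

def lfn_filter(counts, lfn_pos=0.001, lfn_tag=0.01, lfn_neg=31):
    """Flag variant-replicate combinations that pass all three LFN filters.

    counts: DataFrame with columns ['variant', 'sample', 'replicate', 'reads'].
    lfn_pos: minimum frequency of a variant within its replicate (per-replicate noise).
    lfn_tag: minimum fraction of a variant's total reads found in a replicate (mistagging).
    lfn_neg: maximum read count observed among negative-control replicates (contamination).
    """
    df = counts.copy()
    # Frequency of each variant within its replicate (LFN_pos-like filter).
    repl_total = df.groupby(['sample', 'replicate'])['reads'].transform('sum')
    df['freq_in_replicate'] = df['reads'] / repl_total
    # Fraction of the variant's overall reads that fall in this replicate (LFN_tag-like filter).
    var_total = df.groupby('variant')['reads'].transform('sum')
    df['freq_of_variant'] = df['reads'] / var_total
    # A variant-replicate combination is kept only if it passes all three thresholds.
    df['keep'] = ((df['freq_in_replicate'] >= lfn_pos) &
                  (df['freq_of_variant'] >= lfn_tag) &
                  (df['reads'] > lfn_neg))
    return df
```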
Variants likely resulting from PCR/sequencing errors were then filtered out (Figure 1, Step 6). By comparing replicates within samples, we were able to assess the repeatability of the experimental procedure in two different ways: (i) only variants present in at least two replicates of a sample were retained (Figure 1, Step 5), and (ii) the distance between replicates was taken into account (Figure 1, Step 7). To this latter end, we followed the strategy developed by De Barba et al. (2014) and used the Renkonen distance (RD) to compare distances between replicates of the same sample. The threshold was set at the 10% upper tail of the distribution of the RDs, which corresponds in our data set to RD = 0.157 for MFZR and RD = 0.088 for ZFZR (Fig. S1). All replicates separated from the other replicates of the same sample by an RD above the defined threshold were discarded. Furthermore, samples with only one remaining replicate were excluded (Figure 1, steps 7-8). After this filtering, consensus samples were created by averaging read counts over replicates (Figure 1, Step 9).
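A small sketch of the Renkonen-distance check on replicates could look as follows; the outlier rule (minimum distance to the other replicates above the threshold) is our reading of the procedure, and the default threshold is the MFZR value quoted above:

```python
import numpy as np

def renkonen_distance(p, q):
    """Renkonen distance between two replicates given as read-count vectors
    over the same set of variants (RD = 1 - sum of the minimum relative frequencies)."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return 1.0 - np.sum(np.minimum(p, q))

def flag_outlier_replicates(replicates, rd_threshold=0.157):
    """Return indices of replicates whose distance to every other replicate of
    the same sample exceeds the threshold (illustrative interpretation)."""
    n = len(replicates)
    flagged = []
    for i in range(n):
        dists = [renkonen_distance(replicates[i], replicates[j]) for j in range(n) if j != i]
        if dists and min(dists) > rd_threshold:
            flagged.append(i)
    return flagged
```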
The pool of all remaining variants and contigs was filtered for chimeras using UCHIME2 (Edgar, 2016a; Figure 1, Step 10) and for potential pseudogenes (Figure 1, Step 11). Variants obtained from the ZFZR and MFZR primer pairs that perfectly matched over their overlapping regions were then merged into contigs. At this stage, the read count values associated with variants were disregarded, and only the presence/absence of the variants/contigs was considered in a given sample. (The complete high-throughput sequencing filtering pipeline is summarized in Figure 1.)
Four different approaches were used for the taxonomic assignment of variants and contigs. More detailed descriptions of the methods are given in Appendix S4. First, the phylogenetic approach implemented in the Statistical Assignment Package (SAP;[START_REF] Munch | Statistical assignment of DNA sequences using Bayesian phylogenetics[END_REF] was used to build phylogenetic trees for each of the variants/contigs and their homologues in GenBank using ≥95%, ≥85% and ≥70% sequence identity thresholds in three consecutive runs. The posterior probability (pp) for the query sequence to belong to a clade was estimated at different taxonomic levels.
In case of remaining ambiguities, biogeographical data were then considered ([START_REF] Tachet | Invert ebr es d'eau douce: Syst ematique[END_REF]; OPIE-Benthos database: www.opie-benthos.fr). The combination of the cross-comparison between the SAP, LTG and BOLD assignment analyses and the phylogenetic and biogeographical approaches led to a final taxonomic assignment (FTA). To assess the efficiency of the different assignment methods, we focused on the invertebrate variants and contigs identified in Z. asper faecal samples. An Identification Resolution index (IR; Zarzoso-Lacoste et al., 2016) was calculated for each Z. asper sample and for each assignment analysis. A score was attributed to each variant/contig according to its taxonomic rank assignment, where the maximal value is given to the species level (i.e., species = 6, genus = 5, family = 4, order = 3, class = 2, phylum = 1, kingdom or NA = 0), and the IR is the mean score among the variants of a given sample. Identification Resolution indices of the different assignment methods were compared by pairwise nonparametric Wilcoxon rank-sum tests, controlled for multiple comparisons with the Benjamini & Hochberg (1995) procedure.
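A minimal sketch of how the IR score could be computed per sample is given below; the rank labels and dictionary representation are our own illustrative choices:

```python
RANK_SCORE = {'species': 6, 'genus': 5, 'family': 4, 'order': 3,
              'class': 2, 'phylum': 1, 'kingdom': 0, None: 0}

def identification_resolution(assigned_ranks):
    """IR index of one sample: mean rank score over its variants/contigs.
    `assigned_ranks` lists the lowest taxonomic rank reached for each
    variant/contig, e.g. ['species', 'genus', 'species']."""
    scores = [RANK_SCORE.get(r, 0) for r in assigned_ranks]
    return sum(scores) / len(scores) if scores else 0.0
```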
We assumed that the number of distinct variants/contigs represented the Minimal Number of Individuals (MNI; White, 1953) ingested by the predator, providing a semiquantitative estimation of the diet composition. The MNI was used to summarize the proportion of prey items or MOTUs in each faecal sample as well as at the population level. Furthermore, to assess the relation between sample size and MOTU diversity, we constructed two sample-based rarefaction curves (for invertebrates and for all prey) using the observed-richness function implemented in EstimateS v9.1.0 (http://viceroy.eeb.uconn.edu/estimates/).
FIGURE 2 Taxonomic assignment analysis. (a) Distribution of the identification resolution index (IR; invertebrates only) among Zingel asper samples, shown as box plots for each taxonomic assignment procedure. Significant differences between taxonomic analyses are indicated at the top (n.s., nonsignificant; *p < .05; **p < .005; ***p < .0005). (b) Taxonomic levels of the Z. asper invertebrate prey identified using the FTA procedure.
TABLE 2 Taxonomic coverage of the metabarcoding primers (in vitro tests).
FIGURE 3 Complementarity of minibarcoding primer sets based on Zingel asper and Pomatoschistus microps samples.
FIGURE 4 The diet composition of Zingel asper. (a) Invertebrate composition by sample based on absolute minimal numbers of individuals (MNIs). (b) Invertebrate composition at the population level, and proportions of microorganisms and invertebrates based on cumulative MNIs. (c) Dietary molecular operational taxonomic unit accumulation curves for Z. asper samples (±95% confidence intervals).
Caroline Costedoat, André Gilles, Georges Olivari and the technicians of the French Office National de l'Eau et des Milieux Aquatiques (ONEMA) for their assistance in the field, especially Guillaume Verdier. This work is part of the French Plan National d'Action en faveur de l'apron du Rhône (2012-2016), supervised by the Direction Régionale pour l'Environnement, l'Aménagement et le Logement de Rhône-Alpes and coordinated by the Conservatoire d'Espaces Naturels Rhône-Alpes. E.C. was supported by a postdoctoral grant from Electricité de France (EDF) and ONEMA. This study was funded by the Syndicat Mixte d'Aménagement du Val Durance (SMAVD), the Agence de l'Eau Rhône-Méditerranée-Corse (AERMC) and the Conseil Régional de Provence-Alpes-Côte d'Azur. This work was conducted in accordance with permits from the French Direction Départementale des Territoires des Hautes-Alpes (DDT 05). Data used in this work were partly produced through the molecular facilities of LabEx CeMEB (Montpellier) and CIRAD (Montpellier).
High-throughput sequencing of the six amplicon libraries (288 PCR products: 40 faecal samples, 2 T pos, 2 T ext, 2 T pai, 1 T PCR, 1 T tag; 2 primer pairs; 3 replicates/sample/primer pair) generated a total of about 8.6 million (M) paired-end reads (per-base read quality plots available in Fig. S2). The number of reads, variants, replicates and samples validated by each filtering step is reported in Table 1, for the whole data set and for T pos1 only (as a sample example). After the initial quality filtering and assignment steps (steps 1 and 2), the number of reads per replicate varied between 137 and 64,718 (median: 26,943) for faecal samples, 21,613 and 32,490 (median: 26,943) for positive controls, and 16 and 230 (median: 38) for negative controls in the MFZR data set. At the end of the HTS filtering process, all negative controls were eliminated, and 38 of the 40 initial faecal samples were represented by at least one variant or contig (samples 14Ben05 and 14Ben09 were not validated). Seven variants/contigs were identified in the mock samples (T pos1 and T pos2): six that correspond to the six organisms used for their preparation, plus one unexpected variant (see above).
After eliminating the variants present in only one replicate per sample (Step 5), one sample for MFZR (14Ben09) and two samples for ZFZR (14Ben09, 14Deo04) were discarded. The Obiclean step (Step 6) further reduced the number of variants (by 32% for MFZR and 35% for ZFZR). When applying the Renkonen filter (Step 7) and eliminating variants that occur in only one replicate per sample (Step 8), one MFZR sample (14Ben05) and two ZFZR samples (14Ben02, 14Ben05) were further discarded. UCHIME2 detected four MFZR and 17 ZFZR variants as chimeras (Step 10), and four MFZR and two ZFZR variants were potential pseudogenes (Step 11). The final 81 MFZR and 61 ZFZR variants represented <0.3% of the number of variants validated at Step 2, but they corresponded to over 70% of the reads initially assigned to samples.
For ZFZR, read counts per replicate varied between 83 and 76,279 (median: 26,603) for faecal samples, 19,144 and 42,281 (median: 36,645) for positive controls, and 15 and 167 (median: 48) for negative controls. The total number of variants excluding singletons was 27,921 and 23,619 for MFZR and ZFZR, respectively (Table 1). During the filtering stage, the three LFN filters drastically reduced the number of variants: 99.2% and 99.5% of the variants were discarded. Nevertheless, the remaining variants still represented 75% (MFZR) and 78% (ZFZR) of the reads present before the LFN filtering (Step 4). All negative-control replicates for both markers were eliminated at this step. Additionally, three P. microps samples (P1, P2 and P4) were retained for the MFZR data set only. After the contigation of the MFZR and ZFZR variants, 38 distinct contigs and 66 distinct variants (43 amplified by MFZR only and 23 amplified by ZFZR only) were retained. A total of 93 variants and contigs were found in the Z. asper faecal samples and six in the P. microps samples. The DNA of Z. asper was detected neither in the mock samples nor in the faecal samples.
In fact, the bioinformatics pipeline we developed retained all the expected variants and filtered out all but one unexpected variant in the mock controls (this latter variant was likely introduced during construction of the mock samples; see "Results"), as well as all variants in the negative controls. The final data set contained only 0.3% of the original variants, but these variants represented over 70% of the reads, reinforcing that the eliminated variants could be considered noise.
4.2 | Towards a quantitative approach: the MNI
Due to PCR biases, the number of reads is a very poor estimator of abundance in metabarcoding studies (Elbrecht & Leese, 2015), and only the presence/absence of MOTUs can be obtained with clustering-based approaches, since different alleles coming from the same taxon are confounded in the same MOTU. Alternatively, several studies highlighted the pertinence of determining the number of distinct sequences belonging to the same taxon when assessing genetic diversity (González-Tortuero et al.). Low-abundance detections can additionally be assessed by the use of site occupancy-detection models (which can account for the presence of false positives; e.g., Lahoz-Monfort, Guillera-Arroita, & Tingley, 2016).

Recently, new clustering-based methods have been developed for denoising HTS reads (e.g., UNOISE2: Edgar, 2016b; Swarm: Mahé, Rognes, Quince, De Vargas, & Dunthorn, 2015). These methods avoid clustering the reads at a fixed similarity level, and their outcome depends on very few parameters. We therefore used a modified version of our pipeline, where our denoising steps (LFN and Obiclean, steps 4 and 6) were replaced by UNOISE2 and UNCROSS. This filtering approach retained 12 unexpected variants in the mock community samples, showing that our pipeline is more accurate for restituting the composition of our mock samples.
DATA ACCESSIBILI TY
The lcDNA sequences were deposited in GenBank (GenBank ID: MF458551-MF458851). Supplementary data deposited in Dryad (https://doi.org/10.5061/dryad.f40v5) included: (i) custom COI database used during HTS filtering named COI-filtering-DB; (ii) Taxassign-DB; (iii) Perl script with the .csv files indicating the samples/tag combination correspondence; (iv) unfiltered HTS data; and (v) two alignments of COI sequences used during phylogenetic analysis. | 61,448 | [
"1173358",
"19816",
"747060",
"171893",
"184502",
"1122342",
"18840"
] | [
"188653",
"188653",
"302049",
"192475",
"12765",
"29770",
"188653",
"188653"
] |
01771586 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2007 | https://hal.science/hal-01771586/file/ISSPA2007.pdf | Abdeldjalil Aïssa-El-Bey
Karim Abed-Meraim
Yves Grenier
email: [email protected]
UNDERDETERMINED AUDIO SOURCE SEPARATION USING FAST PARAMETRIC DECOMPOSITION
In this paper, we consider the problem of underdetermined blind source separation using modal decomposition. Indeed, audio signals and, in particular, musical signals can be well approximated by a sum of damped sinusoidal (modal) components. Based on this representation, we propose a two-step approach consisting of a signal analysis (extraction of the modal components) followed by a signal synthesis (pairing of the components belonging to the same source) using vector clustering. Our contributions in this paper are a new separation method with relaxed assumptions and reduced computational cost compared to other existing algorithms. Simulation results are given to assess the performance of the proposed algorithm.
INTRODUCTION
The objective of blind source separation (BSS) is to extract the original source signals from their mixtures using only the information within the observed mixtures with no, or very limited knowledge about the source signals and the mixing matrix. BSS problem arises in many fields, such as noise reduction, radar and sonar processing, speech enhancement, separation of rotating machine noises, biomedical signal processing and even in optical tracking system [START_REF] Nandi | Blind estimation using higher-order statistics[END_REF]. This problem has been intensively studied in the literature and many effective solutions have been proposed so far [START_REF] Nandi | Blind estimation using higher-order statistics[END_REF]. In the particular case where the number of sources is larger than the number of observed mixtures (underdetermined BSS case (UBSS)), the separation can be achieved only if side information about the sources is available (sparseness, W-disjointness, finite alphabet sources, etc). In the case of non-stationary signals (including the audio signals), certain solutions using time-frequency (TF) analysis of the observations and the sources TF-orthogonality exist for the underdetermined case [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF][START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF]. In this paper, we propose an alternative approach using modal decomposition (MD) of the received signals [START_REF] Aïssa-El-Bey | Blind separation of audio sources using modal decomposition[END_REF]. More precisely we propose to decompose the signal into its various modes. The audio signals and more particularly the musical signals can be modeled by a sum of damped sinusoids [START_REF] Boyer | Audio modeling based on delayed sinusoids[END_REF] and hence are well suited for our separation approach. We propose here to exploit this last property for the separation of audio sources by means of modal decomposition. To start, we review the MD-UBSS approach presented in [START_REF] Aïssa-El-Bey | Blind separation of audio sources using modal decomposition[END_REF], then we propose an improved algorithm that reduces the computational cost and relax some of the working assumptions. In this paper, we use bold upper and lower case letters for matrices and vectors, respectively. The remaining notational conventions and major symbols are listed as follows:
$(\cdot)^*$: complex conjugation; $(\cdot)^T$: transpose; $(\cdot)^H$: transpose conjugate; $\|\cdot\|$: Frobenius norm; $\mathbf{I}$: identity matrix.
DATA MODEL
The blind source separation model assumes the existence of N independent signals s1(t), . . . , sN (t) and M observations x1(t), . . . , xM (t) which represent the mixtures. These mixtures are supposed linear and instantaneous, i.e.
$x_i(t) = \sum_{j=1}^{N} a_{ij}\, s_j(t), \qquad i = 1, \ldots, M. \qquad (1)$
This can be represented compactly by the mixing equation
$\mathbf{x}(t) = \mathbf{A}\,\mathbf{s}(t) \qquad (2)$
where $\mathbf{s}(t) \stackrel{\mathrm{def}}{=} [s_1(t), \ldots, s_N(t)]^T$ is an $N \times 1$ column vector collecting the source signals, the vector $\mathbf{x}(t)$ similarly collects the $M$ observed signals, and the $M \times N$ mixing matrix $\mathbf{A} \stackrel{\mathrm{def}}{=} [\mathbf{a}_1, \ldots, \mathbf{a}_N]$, with $\mathbf{a}_i = [a_{1i}, \ldots, a_{Mi}]^T$, contains the mixture coefficients. We assume that for any pair $(i, j)$ with $i \neq j$, the vectors $\mathbf{a}_i$ and $\mathbf{a}_j$ are linearly independent. It is known that BSS is only possible up to some scaling and permutation [START_REF] Cardoso | Blind signal separation: statistcal principles[END_REF]. We take advantage of these indeterminacies to assume, without loss of generality, that the column vectors of $\mathbf{A}$ have unit norm, i.e. $\|\mathbf{a}_i\| = 1$ for $1 \leq i \leq N$. The considered source signals are assumed to be decomposable into a sum of modal components $c_i^j(t)$, i.e.:
$s_i(t) = \sum_{j=1}^{l_i} c_i^j(t), \qquad t = 0, \ldots, T-1. \qquad (3)$
The usual source independence assumption is replaced here by a quasi-orthogonality assumption of the modal components, i.e.
$\dfrac{\left\langle c_i^j \,\middle|\, c_{i'}^{j'} \right\rangle}{\left\| c_i^j \right\| \left\| c_{i'}^{j'} \right\|} \approx 0 \quad \text{for } (i, j) \neq (i', j') \qquad (4)$
where
$\left\langle c_i^j \,\middle|\, c_{i'}^{j'} \right\rangle \stackrel{\mathrm{def}}{=} \sum_{t=0}^{T-1} c_i^j(t)\, c_{i'}^{j'}(t)^{*} \qquad (5)$
and
$\left\| c_i^j \right\|^2 = \left\langle c_i^j \,\middle|\, c_i^j \right\rangle. \qquad (6)$
MD-UBSS ALGORITHM
Based on the previous model, we propose an approach in two steps consisting of:
• An analysis step: in this step, one applies an algorithm of modal decomposition to the sensor outputs in order to extract all the harmonic components from them. • A synthesis step: this is to group together the modal components corresponding to the same source in order to reconstitute the original signal. This is done by observing that all modal components of a given source signal 'live' in the same spatial direction. Therefore, the proposed clustering method is based on the component's direction evaluated by correlation of the extracted (component) signal with the observed antenna signal.
Parametric signal analysis
The source signal and hence the observations are modeled as sum of damped sinusoids:
$x_k(t) = \Re\mathrm{e}\left\{ \sum_{l=1}^{L} \alpha_{l,k}\, z_l^t \right\} \qquad (7)$
where $\alpha_{l,k}$ represents the complex amplitude, $z_l = e^{d_l + j\omega_l}$ is the $l$-th pole, $d_l$ being the (negative) damping factor and $\omega_l$ the angular frequency, and $\Re\mathrm{e}(\cdot)$ denotes the real part of a complex entity.
For the extraction of the modal components, we propose to use the ESPRIT-like (Estimation of Signal Parameters via Rotation Invariance Technique) method, which estimates the poles of the signal by exploiting the row-shifting invariance property of the $D \times (T-D)$ data Hankel matrix $[\mathcal{H}(x_k)]_{n_1 n_2} \stackrel{\mathrm{def}}{=} x_k(n_1 + n_2)$, $D$ being a window parameter chosen in the range $T/3 \leq D \leq 2T/3$.
We use Kung's algorithm given in [START_REF] Kung | Spacetime and singular-value decomposition based approximation methods for the harmonic retrieval problem[END_REF] that can be summarized in the following steps:
1. Form the data Hankel matrix H(x k ).
2. Estimate the $2L$-dimensional signal subspace $U^{(L)} = [u_1 \ldots u_{2L}]$ of $\mathcal{H}(x_k)$ by means of the SVD ($u_1, \ldots, u_{2L}$ are the principal left singular vectors of $\mathcal{H}(x_k)$).
3. Solve (in the least-squares sense) the shift-invariance equation
$U^{(L)}_{\downarrow} \Psi = U^{(L)}_{\uparrow} \;\Leftrightarrow\; \Psi = U^{(L)\#}_{\downarrow}\, U^{(L)}_{\uparrow} \qquad (8)$
where $\Psi = \Phi \Delta \Phi^{-1}$, $\Phi$ being a non-singular $2L \times 2L$ matrix and $\Delta = \mathrm{diag}(z_1, z_1^*, \ldots, z_L, z_L^*)$. $(\cdot)^{\#}$ denotes the pseudo-inversion operation, and the arrows $\downarrow$ and $\uparrow$ denote the last and the first row-deleting operators, respectively.
4. Estimate the poles as the eigenvalues of the matrix $\Psi$.
5. Estimate the complex amplitudes by solving the least-squares fitting criterion
$\min_{\alpha} \| x_k - Z\alpha \|^2 \;\Leftrightarrow\; \alpha = Z^{\#} x_k \qquad (9)$
where $x_k = [x_k(0), \ldots, x_k(T-1)]^T$ is the observation vector, $Z$ is a Vandermonde matrix constructed from the estimated poles, and $\alpha$ is the vector of complex amplitudes.
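As an illustration, the pole-and-amplitude estimation of steps 1-5 can be sketched in a few lines of Python/NumPy. This is only a minimal, illustrative implementation (function and variable names are ours, not part of the original algorithm description):

```python
import numpy as np
from scipy.linalg import hankel, lstsq

def estimate_damped_sinusoids(x, L, D=None):
    """Sketch of Kung's ESPRIT-like method: estimate the 2L poles and complex
    amplitudes of a signal modelled as a sum of L damped sinusoids."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    D = D if D is not None else T // 2          # window parameter, T/3 <= D <= 2T/3
    # Step 1: Hankel matrix [H]_{n1,n2} = x(n1 + n2)
    H = hankel(x[:D], x[D - 1:])
    # Step 2: dominant 2L-dimensional left singular subspace
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    U = U[:, :2 * L]
    # Step 3: shift-invariance equation U_down Psi = U_up  =>  Psi = U_down^# U_up
    Psi = np.linalg.pinv(U[:-1, :]) @ U[1:, :]
    # Step 4: poles are the eigenvalues of Psi
    poles = np.linalg.eigvals(Psi)
    # Step 5: complex amplitudes by least-squares fit on the Vandermonde matrix
    Z = np.array([p ** np.arange(T) for p in poles]).T   # T x 2L
    alpha, *_ = lstsq(Z, x)
    return poles, alpha
```

In a typical usage, one would call this function on each sensor output $x_k$ and keep the estimated poles and amplitudes for the subsequent synthesis stage.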
Signal synthesis using vector clustering
For the synthesis of the source signals one observes that thanks to the quasi-orthogonality assumption, one has:
$\dfrac{\left\langle \mathbf{x} \,\middle|\, c_i^j \right\rangle}{\left\| c_i^j \right\|^2} \stackrel{\mathrm{def}}{=} \dfrac{1}{\left\| c_i^j \right\|^2} \begin{bmatrix} \left\langle x_1 \,\middle|\, c_i^j \right\rangle \\ \vdots \\ \left\langle x_M \,\middle|\, c_i^j \right\rangle \end{bmatrix} \approx \mathbf{a}_i$
where $\mathbf{a}_i$ represents the $i$-th column vector of $\mathbf{A}$. We can then associate each extracted component $\hat{c}^k_j$ with a spatial direction (a column vector of $\mathbf{A}$) estimated by $\hat{\mathbf{a}}^k_j = \langle \mathbf{x} \,|\, \hat{c}^k_j \rangle \,/\, \|\hat{c}^k_j\|^2$.
Two components of the same source signal are associated with the same column vector of $\mathbf{A}$. Therefore, we propose to gather these components by clustering the vectors $\hat{\mathbf{a}}^k_j$ into $N$ classes.¹ The initial sources can then be rebuilt, up to a constant, by adding the various components within a same class.
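The clustering step can be sketched as follows; the use of scikit-learn's k-means and the handling of complex directions are our own illustrative choices (the paper only states that k-means is used for vector clustering):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_component_directions(X, components, n_sources):
    """Associate each extracted modal component with a spatial direction and
    cluster the directions into N classes (one per source).
    X is the M x T observation matrix, `components` a list of length-T waveforms."""
    directions = []
    for c in components:
        a = X @ np.conj(c) / np.linalg.norm(c) ** 2      # <x | c> / ||c||^2
        a = a / np.linalg.norm(a)                        # normalise the direction
        directions.append(a)
    D = np.array(directions)
    # k-means on the stacked real/imaginary parts of the direction vectors
    feats = np.hstack([D.real, D.imag])
    labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(feats)
    return labels, D
```

Components sharing a label are then summed to synthesize one source estimate.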
Existing MD-UBSS algorithm [4]
This algorithm applies the previous analysis and synthesis steps to each sensor output $x_k(t)$. By doing so, one obtains $M$ estimates of each source signal, with an estimation quality varying significantly from one sensor to another. Indeed, the latter depends strongly on the mixing matrix coefficients and, in particular, on the signal-to-interference ratio (SIR) of the desired source. Consequently, a blind selection method to choose a 'good' estimate among the $M$ available is proposed in [START_REF] Aïssa-El-Bey | Blind separation of audio sources using modal decomposition[END_REF]. First, the source estimates are paired together by associating each source signal extracted from the first sensor with the $(M-1)$ signals extracted from the $(M-1)$ other sensors that are maximally correlated with it. The correlation factor of two signals $s_1$ and $s_2$ is evaluated by $\frac{|\langle s_1 | s_2 \rangle|}{\|s_1\|\,\|s_2\|}$. Once the source pairing is achieved, the source estimate of maximal energy is selected, i.e.
$\hat{s}_i(t) = \arg\max_{\hat{s}_i^j(t)} \left\{ E_i^j = \sum_{t=0}^{T-1} \left| \hat{s}_i^j(t) \right|^2, \; j = 1, \ldots, M \right\} \qquad (10)$
where $E_i^j$ represents the energy of the $i$-th source estimate extracted from the $j$-th sensor, $\hat{s}_i^j(t)$.
Proposed MD-UBSS algorithm
We propose here to improve the previous algorithm w.r.t. the computational cost and the estimation accuracy when Assumption 4 (Equation (4)) is poorly satisfied.² First, in order to avoid the repeated estimation of modal components for each sensor output, we use all the observed data to estimate (only once) the poles of the source signals. Hence, we apply the ESPRIT-like technique to the averaged data covariance matrix $\mathcal{H}(x)$ defined by:
$\mathcal{H}(x) = \sum_{i=1}^{M} \mathcal{H}(x_i)\,\mathcal{H}(x_i)^H \qquad (11)$
and we apply steps 1 to 4 of Kung's algorithm described in Section 3.1 to obtain all the poles zi, i = 1, . . . , L. This way, we reduce significantly the computational cost and avoid the problem of 'best source estimate' selection of the previous algorithm. Now, to relax Assumption 4, we can re-write the data model as:
$\Gamma\, \mathbf{z}(t) = \mathbf{x}(t) \qquad (12)$
where $\Gamma \stackrel{\mathrm{def}}{=} [\boldsymbol{\gamma}_1, \boldsymbol{\gamma}_1^*, \ldots, \boldsymbol{\gamma}_L, \boldsymbol{\gamma}_L^*]$, $\boldsymbol{\gamma}_i = \beta_i e^{j\phi_i} \mathbf{b}_i$, where $\mathbf{b}_i$ is a unit-norm vector representing the spatial direction of the $i$-th component (i.e. $\mathbf{b}_i = \mathbf{a}_k / \|\mathbf{a}_k\|$ if the $i$-th component belongs to the $k$-th source signal) and $\mathbf{z}(t) \stackrel{\mathrm{def}}{=} [z_1^t, (z_1^*)^t, \ldots, z_L^t, (z_L^*)^t]^T$.
The estimation of $\Gamma$ using the least-squares fitting criterion leads to:
$\min_{\Gamma} \| X - \Gamma Z \|^2 \;\Leftrightarrow\; \Gamma = X Z^{\#} \qquad (13)$
where $X = [\mathbf{x}(0), \ldots, \mathbf{x}(T-1)]$ and $Z = [\mathbf{z}(0), \ldots, \mathbf{z}(T-1)]$. After estimating $\Gamma$, we estimate the phase of each pole as:
$\phi_i = \dfrac{\arg\!\left( \boldsymbol{\gamma}_{2i}^H \boldsymbol{\gamma}_{2i-1} \right)}{2} \qquad (14)$
The spatial direction of each modal component is estimated by:
$\hat{\mathbf{v}}_i = \boldsymbol{\gamma}_{2i-1}\, e^{-j\phi_i} + \boldsymbol{\gamma}_{2i}\, e^{j\phi_i} = 2\beta_i \mathbf{b}_i. \qquad (15)$
Finally, we group together these components by clustering the vectors b vi into N classes. After clustering, we obtain N classes with N centroids b a1, . . . , b aN corresponding to the estimates of the column vectors of the mixing matrix A. If the pole zi belongs to the j th class, then according to (15), its amplitude can be estimated by:
$\beta_i = \dfrac{\hat{\mathbf{a}}_j^H \hat{\mathbf{v}}_i}{2}. \qquad (16)$
One can then rebuild the initial sources, up to a constant, by adding the various modal components within a same class $C_i$ as follows:
$\hat{s}_i(t) = \Re\mathrm{e}\left\{ \sum_{k \in C_i} 2\beta_k\, e^{j\phi_k} z_k^t \right\} \qquad (17)$
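Putting the pieces together, a compact sketch of the proposed separation procedure (Eqs. (11)-(17)) could look as follows. The conjugate-pole pairing strategy, the use of k-means, and all names are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np
from scipy.linalg import hankel
from sklearn.cluster import KMeans

def md_ubss_separate(X, L, N, D=None):
    """Illustrative sketch of the proposed MD-UBSS algorithm.
    X: M x T real observation matrix; L: number of damped sinusoids; N: number of sources."""
    M, T = X.shape
    D = D or T // 2
    # Eq. (11): averaged Hankel product and its dominant 2L-dimensional subspace
    Hs = [hankel(X[i, :D], X[i, D - 1:]) for i in range(M)]
    Hc = sum(Hi @ Hi.T for Hi in Hs)
    w, V = np.linalg.eigh(Hc)
    U = V[:, np.argsort(w)[::-1][:2 * L]]
    poles = np.linalg.eigvals(np.linalg.pinv(U[:-1]) @ U[1:])
    # Eqs. (12)-(13): Gamma = X Z^# on the pole basis
    Z = np.array([p ** np.arange(T) for p in poles])          # 2L x T
    Gamma = X @ np.linalg.pinv(Z)                              # M x 2L
    # pair each pole with its (approximate) complex conjugate
    unused, pairs = set(range(2 * L)), []
    while unused:
        l = unused.pop()
        m = min(unused, key=lambda k: abs(poles[k] - np.conj(poles[l])))
        unused.remove(m)
        pairs.append((l, m))
    # Eqs. (14)-(15): phase and (real) spatial direction of each component
    phis, dirs = [], []
    for l, m in pairs:
        phi = np.angle(Gamma[:, m].conj() @ Gamma[:, l]) / 2
        v = Gamma[:, l] * np.exp(-1j * phi) + Gamma[:, m] * np.exp(1j * phi)
        phis.append(phi)
        dirs.append(v.real)                                    # ~ 2 * beta_i * b_i
    dirs = np.array(dirs)
    # cluster directions into N classes; centroids estimate the columns of A
    feats = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    km = KMeans(n_clusters=N, n_init=10).fit(feats)
    # Eqs. (16)-(17): amplitudes and source synthesis within each class
    t = np.arange(T)
    S = np.zeros((N, T))
    for (l, m), phi, v, c in zip(pairs, phis, dirs, km.labels_):
        a_hat = km.cluster_centers_[c]
        beta = (a_hat @ v) / 2
        S[c] += 2 * beta * np.real(np.exp(1j * phi) * poles[l] ** t)
    return S, km.cluster_centers_
```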
NON-DISJOINT SOURCES CASE
We consider here the case where a given component c k j (t) can be shared by several sources. This is the case, for example, for certain musical signals such as those treated in [START_REF] Rosier | Unsupervised classification techniques for multipitch estimation[END_REF]. To simplify, we suppose that a component belongs to at most two sources. Thus, let us suppose that the component c k j (t) is present in the sources sj 1 (t) and sj 2 (t) with the amplitudes αj 1 and αj 2 , respectively. It follows that the spatial direction associated with this component as estimated by (15), is given by:
$\hat{\mathbf{v}}_i \approx \alpha_{j_1} \mathbf{a}_{j_1} + \alpha_{j_2} \mathbf{a}_{j_2}. \qquad (18)$
It is now a question of finding the indices $j_1$ and $j_2$ of the two sources associated with this component, as well as the amplitudes $\alpha_{j_1}$ and $\alpha_{j_2}$. To this end, we propose an approach based on subspace projection. Let us assume that $M > 2$ and that the matrix $\mathbf{A}$ is known and satisfies the condition that any triplet of its column vectors is linearly independent. Consequently, we have:
$\mathbf{P}^{\perp}_{\tilde{\mathbf{A}}}\, \hat{\mathbf{v}}_i = 0, \qquad (19)$
if and only if $\tilde{\mathbf{A}} = [\mathbf{a}_{j_1}\; \mathbf{a}_{j_2}]$, $\tilde{\mathbf{A}}$ being a matrix formed by a pair of column vectors of $\mathbf{A}$, and where $\mathbf{P}^{\perp}_{\tilde{\mathbf{A}}}$ represents the matrix of orthogonal projection onto the orthogonal complement of the range space of $\tilde{\mathbf{A}}$, i.e.
$\mathbf{P}^{\perp}_{\tilde{\mathbf{A}}} = \mathbf{I} - \tilde{\mathbf{A}} \left( \tilde{\mathbf{A}}^H \tilde{\mathbf{A}} \right)^{-1} \tilde{\mathbf{A}}^H. \qquad (20)$
In practice, by taking into account the noise, one detects the columns j1 and j2 by minimizing:
$(j_1, j_2) = \arg\min_{(l,m)} \left\{ \left\| \mathbf{P}^{\perp}_{\tilde{\mathbf{A}}}\, \hat{\mathbf{v}}_i \right\| \;:\; \tilde{\mathbf{A}} = [\mathbf{a}_l\; \mathbf{a}_m] \right\}. \qquad (21)$
Once $\tilde{\mathbf{A}}$ is found, one estimates the weightings $\alpha_{j_1}$ and $\alpha_{j_2}$ by:
$\begin{bmatrix} \alpha_{j_1} \\ \alpha_{j_2} \end{bmatrix} = \tilde{\mathbf{A}}^{\#}\, \hat{\mathbf{v}}_i. \qquad (22)$
In the simulations, the optimization problem of (21) is solved using an exhaustive search. This is computationally tractable for small array sizes but would be prohibitive if M is very large. In this paper, we treated all the components as being associated with two source signals. If a component is present in only one source, one of the two coefficients estimated in (22) should be zero or close to zero. Also, in what precedes, the mixing matrix A is assumed to be known. This means it has to be estimated before applying the subspace projection. This is performed here by clustering all the spatial direction vectors in (15), as for the previous MD-UBSS algorithm. Then, the i-th column vector of A is estimated as the centroid of C_i, assuming implicitly that most modal components belong mainly to one source signal. This is confirmed by our simulation experiment shown in Figure 2.
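A short sketch of this subspace-projection identification, assuming an estimate of A is available, is given below; the exhaustive search over column pairs mirrors Eq. (21), and the function name is our own:

```python
import numpy as np
from itertools import combinations

def assign_shared_component(v, A):
    """Find the pair of columns of A whose span best explains the direction v of a
    component shared by (at most) two sources, and the corresponding weights."""
    M, N = A.shape
    best = None
    for l, m in combinations(range(N), 2):
        A_lm = A[:, [l, m]]
        P = np.eye(M) - A_lm @ np.linalg.pinv(A_lm)     # orthogonal projector, Eq. (20)
        residual = np.linalg.norm(P @ v)                # criterion of Eq. (21)
        if best is None or residual < best[0]:
            best = (residual, (l, m), A_lm)
    _, (j1, j2), A_lm = best
    alphas = np.linalg.pinv(A_lm) @ v                   # weights, Eq. (22)
    return (j1, j2), alphas
```

A weight close to zero indicates that the component effectively belongs to a single source, as discussed above.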
SIMULATION
We present here some simulation results to illustrate the performance of our blind separation algorithms. For that, we consider a uniform linear array with M = 3 sensors receiving the signals from N = 4 audio sources. The angles of arrival of the sources are chosen randomly. The sample size is set to T = 10000 samples (the signals are sampled at a rate of 22 kHz). The observed signals are corrupted by an additive white noise of covariance σ 2 I (σ 2 being the noise power). The separation quality is measured by the normalized mean squares estimation errors (NMSE) of the sources evaluated over 200 Monte-Carlo runs and defined as:
$NMSE_i \stackrel{\mathrm{def}}{=} \dfrac{1}{N_r} \sum_{r=1}^{N_r} \min_{\alpha} \left( \dfrac{\| \alpha\, \hat{s}_{i,r} - s_i \|^2}{\| s_i \|^2} \right) \qquad (23)$
$NMSE_i = \dfrac{1}{N_r} \sum_{r=1}^{N_r} \left( 1 - \left( \dfrac{| \hat{s}_{i,r}\, s_i^H |}{\| \hat{s}_{i,r} \| \, \| s_i \|} \right)^{2} \right) \qquad (24)$
$NMSE = \dfrac{1}{N} \sum_{i=1}^{N} NMSE_i\,. \qquad (25)$
where $s_i \stackrel{\mathrm{def}}{=} [s_i(0), \ldots, s_i(T-1)]$, $\hat{s}_{i,r}$ (defined similarly) represents the $r$-th estimate of source $s_i$, and $\alpha$ is a scalar factor that compensates for the scale indeterminacy of the BSS problem. In Figure 1, we compare the separation performance obtained by the existing MD-UBSS algorithm and the new MD-UBSS algorithm. We observe a performance gain in favor of the new MD-UBSS, due mainly to the fact that it does not rely on the quasi-orthogonality assumption. This plot also highlights the problem of 'best source estimate' selection related to MD-UBSS, as we observe a performance loss between the results given by the proposed energy-based selection procedure and the optimal³ one using the exact source signals. Figure 2 shows the estimation performance of the mixing matrix A using the proposed clustering method. The observed good estimation performance reflects the fact that most modal components belong 'effectively' to one single source signal. In Figure 3, we compare the performance of the new MD-UBSS algorithm and the same algorithm with subspace projection. One can observe that using the subspace projection leads to a performance gain at moderate and high SNRs. At low SNRs, the performance is slightly degraded due to the noise effect.
Fig. 1. NMSE versus SNR for 4 audio sources and 3 sensors: comparison of the performance of MD-UBSS algorithms with and without the quasi-orthogonality assumption.
Fig. 2. Mixing matrix estimation: NMSE versus SNR for 4 speech sources and 3 sensors.
Fig. 3. NMSE versus SNR for 4 audio sources and 3 sensors: comparison of the performance of the modified MD-UBSS algorithm and the same algorithm with subspace projection.
In the simulation, we have used the k-means algorithm in[START_REF] Frank | The data analysis handbook[END_REF] for vector clustering.
This is the case when the modal components are closely spaced or for modal components with strong damped factors.
Clearly, the optimal selection procedure is introduced here just for performance comparison and not as an alternative selection method, since it relies on the exact source signals, which are unavailable in our context.

Indeed, when a given component belongs 'effectively' to only one source signal, equation (22) provides a nonzero amplitude coefficient for the second source due to the noise effect, which explains the observed degradation.
CONCLUSION
This paper introduces a new MD-UBSS algorithm for audio sources. Its main advantages over the previously proposed MD-UBSS algorithm are a reduced computational cost and a relaxed quasi-orthogonality assumption. Moreover, the algorithm is extended to the non-disjoint sources case using an approximate subspace projection technique. Simulation results illustrate the effectiveness of our algorithm compared to the one in [4].
"18420",
"956547",
"742871"
] | [
"300839",
"300839",
"300839"
] |
01771615 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2005 | https://hal.science/hal-01771615/file/ISSPA05_TF.pdf | N Linh-Trung
email: [email protected]
A Aïssa-El-Bey
K Abed-Meraim
A Belouchrani
email: [email protected]
UNDERDETERMINED BLIND SOURCE SEPARATION OF NON-DISJOINT NONSTATIONARY SOURCES IN THE TIME-FREQUENCY DOMAIN
This paper considers the blind separation of nonstationary sources in the underdetermined case, in which we have more sources than sensors. A recently proposed algorithm applied time-frequency distributions (TFDs) to this problem and gave good separation performances in the case where sources were disjoint in the time-frequency (TF) plane. However, in the non-disjoint case, the method simply relied on the interpolation at the intersection TF points implicitly performed by a TF synthesis algorithm, instead of directly treating these points. In this paper, we propose a new algorithm that combines the abovementioned method with subspace projection in order to explicitly treat non-disjoint sources. Another contribution of this paper is the estimation of the mixing matrix in the underdetermined case.
INTRODUCTION
Blind source separation (BSS) considers the recovery of unobserved original sources from several mixtures observed at the output of a set of sensors. Each mixture contains a combination of the sources that results from the mixing medium between the sources and the sensors. The term "blind" indicates that no a priori knowledge of either the sources or the medium structure is available. To compensate for this lack of information, the sources are usually assumed to be statistically independent. BSS has applications in different areas, such as communications, speech and image processing, and biomedical engineering [START_REF] Nandi | Blind Estimation Using Higher-Order Statistics[END_REF].
A challenging problem of BSS arises when there are more sources than sensors; this is now referred to as underdetermined blind source separation (UBSS). A TF-based UBSS algorithm has recently been proposed in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] to successfully separate nonstationary sources using TFDs. This algorithm gave good performance when the sources were disjoint in the TF plane. It also provided the separation of TF quasi-disjoint sources, that is, the sources were allowed to have a small degree of overlap in the TF plane.
However, the intersection TF points were not directly treated. In particular, a point at the intersection of two sources is clustered "by chance" as belonging to one of the sources. As a result, the source that has picked up this point now carries some information of the other source, while the latter loses some information of its own. The lost information can fortunately be recovered to some extent by the interpolation at the intersection point performed by TF synthesis. However, for the other source, there is an interference at this point, hence the separation performance may degrade if no treatment is provided. The more intersection points left untreated, the worse the interpolation result and the interference become, and hence the worse the final separation performance.
In this paper, we propose another algorithm that combines the above TF-UBSS algorithm with subspace projection, offering an explicit treatment of the intersection points. The main assumption used in this algorithm is that the number of sources simultaneously present at any intersection point must be smaller than the total number of sensors.
DATA MODEL AND ASSUMPTIONS
Let the $N$-dimensional vector $\mathbf{s}(t) = [s_1(t), \ldots, s_N(t)]^T$ represent $N$ nonstationary source signals. The source signals are transmitted through a medium so that an array of $M$ linear sensors picks up a set of mixed signals represented by the $M$-dimensional vector $\mathbf{x}(t) = [x_1(t), \ldots, x_M(t)]^T$. Consider the instantaneous mixing model, as given by:
$\mathbf{x}(t) = \mathbf{A}\,\mathbf{s}(t) + \mathbf{w}(t), \qquad (1)$
where A = [a 1 , . . . , a N ] is the mixing matrix and w(t) = [w 1 (t), . . . , w M (t)] T is the observation noise vector. The goal of BSS is to recover s(t) from x(t). When M < N , the problem becomes UBSS. In this case, we assume that any M column vectors of A are linearly independent.
In [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF], sources are assumed to be disjoint in the TF plane; that is, for any pair of two sources s 1 (t) and s 2 (t) whose TF signatures are denoted by Ω 1 and Ω 2 respectively, we have Ω 1 ∩ Ω 2 = ∅. The notion of TF disjoint is illustrated in Fig. 1-a.
In this paper, we relax the above assumption by allowing the sources to be generally non-disjoint, but limiting the degree of overlap through the following two conditions. First, there are at most (M - 1) sources present at any TF point. This allows us to apply the subspace projection approach for the estimation of the source TFDs, as shown in Section 4. Second, for each source signal, there exists a region in the TF plane where the source exists alone; in other words, the energy of the other sources is negligible at the TF points within the considered region. This is needed for the estimation of the mixing matrix, also shown in Section 4.
SOURCE SEPARATION USING TFD
In this section, we briefly review the TF-UBSS method proposed in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF]. This method uses the spatial TFD (STFD) that is defined in [START_REF] Belouchrani | Blind source separation based on time-frequency signal representations[END_REF] by the following:
D xx (t, f ) = +∞ l=-∞ +∞ k=-∞ φ(k, l)• x(t + k + l)x H (t + k -l)e -j4πf l , (2)
where φ(k, l) is the TFD time-lag kernel and the superscript ( H ) denotes the complex conjugate transpose operator. The matrix D xx (t, f ) varies with respect to t and f . When evaluated at a particular TF point, its (i, j)-element has the value of the cross-TFD of x i (t) and x j (t) at this point.
Applying [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] to the linear data model in (1), while assuming that the additive noise is not present, leads to the following expression:
D_xx(t, f) = A D_ss(t, f) A^H,   (3)
where D_xx(t, f) is now the mixture STFD matrix and D_ss(t, f) is the source STFD matrix. After computing the above mixture STFD matrix (here we used the Wigner-Ville distribution (WVD) or the Modified WVD (MWVD) [START_REF] Boashash | Time Frequency Signal Analysis and Processing: Method and Applications[END_REF]) and applying noise thresholding, one then proceeds to the selection of auto-source TF points; these are the points at each of which only one source is active. This selection is done following the procedure in [START_REF] Belouchrani | Blind separation of nonstationary sources[END_REF].
The structure of the mixture STFD is as follows: at an auto-source TF point, only one of its diagonal elements is non-zero. Therefore, if all sources are disjoint in the TF plane then, for all TF points belonging to the TF signature Ω_i of a source s_i(t), we have
D_xx(t, f) = D_{s_i s_i}(t, f) a_i a_i^H,   (4)
where D_{s_i s_i}(t, f) is the TFD of s_i(t). In (4), a_i represents the principal eigenvector of D_xx(t, f) and D_{s_i s_i}(t, f) is (up to a constant) the corresponding eigenvalue, or equivalently the trace value, of D_xx(t, f). Therefore, all the auto-source points that belong to one particular source must have the same spatial direction. In other words, if we cluster these points into classes corresponding to different spatial directions, then these classes represent the individual source signals.
Given the class C i representing the source s i (t), the TFD estimate of s i (t) is then computed (up to a constant factor) by:
D̂_{s_i s_i}(t, f) = trace{D_xx^{wvd}(t_a, f_a)}, ∀(t_a, f_a) ∈ C_i; 0, otherwise.   (5)
Having obtained the source TFD estimates D̂_{s_i s_i}, we then use an adequate source synthesis procedure to estimate the source waveform s_i(t). The recovery of the source waveform from its TFD is made possible thanks to the following inversion property of the WVD [START_REF] Boashash | Time Frequency Signal Analysis and Processing: Method and Applications[END_REF]:
x(t) = (1 / x*(0)) ∫_{−∞}^{+∞} ρ_x^{wvd}(t/2, f) e^{j2πft} df,   (6)
which implies that the signal can be reconstructed to within a complex exponential constant e^{jα} = x*(0)/|x(0)|, provided that |x(0)| ≠ 0. The method in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] uses the synthesis algorithm that is proposed in [START_REF] Boudreaux-Bartels | Timevarying filtering and signal estimation using Wigner distributions[END_REF]. This algorithm recovers the source waveform from its WVD estimates.
In brief, the TF-UBSS algorithm in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] can be summarized in the following four steps:
Tab. 1: TF-UBSS algorithm
S1: STFD computation and noise thresholding
S2: Auto-source TF point selection
S3: Clustering and source TFD estimation
S4: Source signal synthesis
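As an illustration of steps S2–S3, a minimal Python/numpy sketch is given below. It assumes the mixture STFD matrices have already been computed and noise-thresholded, and it uses k-means from scikit-learn for the clustering step; the function and variable names are ours, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_auto_source_points(Dxx_points, n_sources):
    """Cluster auto-source TF points by spatial direction (steps S2-S3).

    Dxx_points : dict mapping a TF point (t, f) to its M x M STFD matrix.
    Returns, for each source class, a dict of TFD estimates (trace values).
    """
    points, directions, traces = [], [], []
    for tf, D in Dxx_points.items():
        eigval, eigvec = np.linalg.eigh(D)       # Hermitian STFD matrix
        v = eigvec[:, -1]                        # principal eigenvector
        v = v * np.exp(-1j * np.angle(v[0]))     # remove the phase ambiguity
        points.append(tf)
        directions.append(np.concatenate([v.real, v.imag]))
        traces.append(np.trace(D).real)

    labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(np.array(directions))

    # source TFD estimate (5): trace of D_xx on the points of each class, 0 elsewhere
    tfd_est = {i: {} for i in range(n_sources)}
    for tf, lab, tr in zip(points, labels, traces):
        tfd_est[lab][tf] = tr
    return tfd_est
```

A subsequent synthesis step (S4), not sketched here, would convert each estimated TFD back to a waveform.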
TF-UBSS USING SUBSPACE PROJECTION
As explained in the introduction, the TF-UBSS method reviewed in Section 3 does not treat the intersection points properly in the case where the sources are non-disjoint in the TF plane. We propose here to use an appropriate subspace projection to estimate the TFDs of the individual sources from the selected auto-source points, under the assumption that there are at most (M − 1) sources simultaneously present at any given point.
In particular, consider an auto-term point (t_a, f_a) at which sources s_{i_1}, . . . , s_{i_I} contribute (I < M), and define s̃ = [s_{i_1}, . . . , s_{i_I}]^T and Ã = [a_{i_1}, . . . , a_{i_I}]. We then have D_xx(t_a, f_a) = Ã D_s̃s̃(t_a, f_a) Ã^H (7) and consequently (assuming that D_s̃s̃(t_a, f_a) is of full rank) Range(D_xx(t_a, f_a)) = Range(Ã).
Let us now assume that A is known (or already estimated), and proceed with the estimation of the source TFDs. One can obtain information about contributing sources at this point by observing that:
P^⊥_Ã a_{i_j} = 0, for j = 1, . . . , I;   P^⊥_Ã a_i ≠ 0, otherwise.
Above, P^⊥_Ã represents the orthogonal projection matrix onto the noise subspace of D_xx(t_a, f_a) and can be computed by
P^⊥_Ã = I − V V^H,   (8)
where V is the matrix of the I principal eigenvectors of D_xx(t_a, f_a). In practice, to take estimation noise into account, one detects the I column vectors of Ã as the vectors of A corresponding to the I smallest values of the set {‖P^⊥_Ã a_i‖}, where i = 1, . . . , N. Once Ã is obtained, we estimate the TFDs of the I sources at point (t_a, f_a) as the diagonal entries of:
Ã^# D_xx(t_a, f_a) Ã^{#H} ≈ D_s̃s̃(t_a, f_a).   (9)
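To make (8)–(9) concrete, the hedged numpy sketch below estimates, at a single intersection point, which columns of a known (or previously estimated) mixing matrix contribute and what their TFD values are. The number of contributing sources, `n_contrib`, plays the role of I and is assumed known here; the names are illustrative only.

```python
import numpy as np

def tfd_at_intersection(Dxx, A, n_contrib):
    """Subspace-based TFD estimation at one auto-source TF point (t_a, f_a).

    Dxx       : (M, M) mixture STFD matrix at the point.
    A         : (M, N) known or previously estimated mixing matrix.
    n_contrib : assumed number I (< M) of sources present at this point.
    """
    M = A.shape[0]
    # noise-subspace projector (8): P = I - V V^H, V = the I principal eigenvectors
    eigval, eigvec = np.linalg.eigh(Dxx)
    V = eigvec[:, -n_contrib:]
    P = np.eye(M) - V @ V.conj().T

    # contributing columns: the I smallest values of ||P a_i||
    scores = np.linalg.norm(P @ A, axis=0)
    idx = np.argsort(scores)[:n_contrib]

    # TFDs of the contributing sources: diagonal of A~# Dxx (A~#)^H, eq. (9)
    A_pinv = np.linalg.pinv(A[:, idx])
    D_tilde = A_pinv @ Dxx @ A_pinv.conj().T
    return idx, np.real(np.diag(D_tilde))
```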
In the estimation of source TFDs above, we have assumed that A had been known a priori or already been estimated. Along with other existing methods, we propose here a method to estimate A by using the assumption that for each source signal s i there exists a TF region R i where
D_xx(t, f) = D_{s_i s_i}(t, f) a_i a_i^H, ∀(t, f) ∈ R_i.
Based on that observation we estimate A as follows.
• First, detect the TF points belonging to the region R = ∪_{i=1}^{N} R_i using the test criterion
(t, f) ∈ R iff |λ_max{D_xx(t, f)} / trace{D_xx(t, f)} − 1| < ε,
where ε is a small threshold (typically ε ≤ 0.1) and λ_max{D_xx(t, f)} denotes the maximum eigenvalue of D_xx(t, f).
• Then, for each point (t, f ) in R, estimate the vector a(t, f ) as the principal eigenvector of D xx (t, f ).
• Finally, cluster the set of vectors {â(t, f)}, where (t, f) ∈ R, into N classes¹ using any vector clustering technique from the literature [START_REF] Frank | The data analysis handbook[END_REF]. In this paper, we have used the k-means algorithm. The column vectors of A are estimated as the N centroids of the N clusters; a brief sketch of this estimation step follows.
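The sketch below is one hedged way to implement this mixing-matrix estimation with numpy and scikit-learn; the rank-1 test, the threshold value and the phase convention are our assumptions, not prescriptions from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_mixing_matrix(Dxx_points, n_sources, eps=0.1):
    """Estimate A from (nearly) single-source TF points.

    Dxx_points : list of M x M STFD matrices at the selected auto-source points.
    """
    feats = []
    for D in Dxx_points:
        eigval, eigvec = np.linalg.eigh(D)
        # keep points whose STFD matrix is close to rank one
        if abs(eigval[-1] / np.trace(D).real - 1.0) < eps:
            a = eigvec[:, -1]
            a = a * np.exp(-1j * np.angle(a[0]))     # remove the phase ambiguity
            feats.append(np.concatenate([a.real, a.imag]))
    if not feats:
        raise ValueError("no single-source TF point passed the rank-1 test")

    centers = KMeans(n_clusters=n_sources, n_init=10).fit(np.array(feats)).cluster_centers_
    M = centers.shape[1] // 2
    A_hat = (centers[:, :M] + 1j * centers[:, M:]).T          # centroids -> columns of A
    return A_hat / np.linalg.norm(A_hat, axis=0, keepdims=True)
```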
Table 2 presents a summary of the subspace projection based TF-UBSS algorithm.
Tab. 2: Subspace-based TF-UBSS algorithm
S1: STFD computation and noise thresholding
S2: Auto-source TF point selection
S3: Selection of rank-1 STFD matrices and computation of their respective principal eigenvectors
S4: Clustering of the previous set of vectors and estimation of the column vectors of A as the centroid points
S5: For all auto-source TF points, perform the subspace-based TFD estimation
S6: Source signal synthesis
SIMULATION RESULTS
In the simulation, we use a uniform linear array of M = 3 sensors with half-wavelength spacing. It receives signals from N = 4 independent linear frequency-modulated sources, each with 256 samples, in the presence of additive Gaussian noise with SNR = 20 dB. The effect of cross-terms on the WVD representation of one of the mixtures (shown in Fig. 2-(a)) was reduced by using the MWVD (shown in Fig. 2-(b)). Fig. 2-(d) displays the auto-source TF points obtained with the MWVD. Fig. 2-(c) re-displays these points but with the TFD values extracted from the WVD in Fig. 2-(a); these TFD values will be used for TFD estimation. We compare the algorithm proposed in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] (now referred to as cluster-based TF-UBSS) with the algorithm proposed in this paper, now referred to as subspace-based TF-UBSS. Note that Fig. 3-(e,h,k) shows the estimated source TFDs using the cluster-based algorithm, whereas Fig. 3-(f,i,l) shows those obtained by the subspace-based algorithm.
From Fig. 3-(b,e), we can see that the intersection points between source s 1 (t) and source s 2 (t) were picked up by source s 2 (t) by the cluster-based algorithm. On the other hand, using the subspace-based algorithm, the intersection points have been redistributed to the two sources (Fig. 3-(c,f)).
In Fig. 4, we provide a statistical evaluation of the performance gain we observed in the previous experiment.
For that, we run N_r = 500 Monte Carlo trials with the same sources but a different noise realization at each trial. The quality of source extraction is measured by the following normalized MSE:
MSE = (1/N_r) Σ_{r=1}^{N_r} ‖ŝ_r − s‖² / ‖s‖²,
where ŝ_r represents the estimate of the sources at the r-th trial. Note that for comparison we remove the scalar and permutation indeterminacies. Even though the considered sources are almost disjoint in the TF domain, we observe an improvement of the separation thanks to subspace projection. We expect this improvement to be more significant if the sources have a larger intersection region in the TF plane.
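For completeness, a small Python sketch of this performance measure is given below; the removal of the scalar and permutation indeterminacies is done by a brute-force search over permutations, which is only practical for a handful of sources, and the array layout (sources as rows) is our assumption.

```python
import numpy as np
from itertools import permutations

def normalized_mse(s_true, s_est):
    """Normalized MSE between true and estimated sources,
    minimized over source permutations and per-source scaling."""
    N = s_true.shape[0]
    best = np.inf
    for perm in permutations(range(N)):           # O(N!) -- small N only
        err = 0.0
        for i, j in enumerate(perm):
            # least-squares scalar aligning the estimate with the true source
            alpha = np.vdot(s_est[j], s_true[i]) / np.vdot(s_est[j], s_est[j])
            err += (np.sum(np.abs(s_true[i] - alpha * s_est[j]) ** 2)
                    / np.sum(np.abs(s_true[i]) ** 2))
        best = min(best, err / N)
    return best
```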
CONCLUSION
This paper introduces a new approach for blind separation of non-disjoint and nonstationary sources using TFDs. The proposed method can separate more sources than sensors and provides, in the case of non-disjoint sources, a better separation quality than the method proposed in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF]. This method is based on a vector clustering procedure that estimates the mixing matrix A, and subspace projection to separate the sources at the intersection points in the TF plane.
Fig. 1. (a) TF disjoint, (b) TF non-disjoint.
Fig. 2. TFD choices and auto-source selection.
Fig. 3. TFD estimation.
Fig. 4. MSE versus SNR.
There exist techniques that perform both the clustering and the estimation of the number of classes. For simplicity, we assumed here the number of sources known. | 16,061 | [
"1031236",
"18420",
"956547",
"1031237"
] | [
"458369",
"300839",
"300839",
"27672"
] |
01771627 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2005 | https://hal.science/hal-01771627/file/ISSPA05_Param.pdf | Abdeldjalil Aissa
El Bey
email: [email protected]
Karim Abed-Meraim
Yves Grenier
email: [email protected]
A Aïssa-El-Bey
BLIND SEPARATION OF AUDIO SOURCES USING MODAL DECOMPOSITION
Blind separation of audio sources using modal decomposition
INTRODUCTION
The problem of blind source separation consists of finding independent source signals from their observed mixtures without a priori knowledge on the actual mixing matrix. The source separation problem is of interest in various applications [START_REF]Blind estimation using higherorder statistics[END_REF] such as the localization and tracking of targets using radars and sonars, separation of speakers (problem known as "cocktail party"), detection and separation in multiple access communication systems, independent components analysis of biomedical signals (EEG or ECG), multispectral astronomical images etc. This problem has been intensively studied in the literature and many effective solutions have been proposed so far [START_REF]Blind estimation using higherorder statistics[END_REF]. Nevertheless, the underdetermined case where the number of sources is greater than the number of sensors (observations) remains relatively poorly treated, and its resolution is one of the open problems of blind source separation. In the case of non-stationary signals (including the audio signals), certain solutions using time-frequency analysis of the observations exist for the underdetermined case [START_REF] Nguyen | Separating more sources than sensors using time-frequency distributions[END_REF][START_REF] Jourjine | Blind separation of disjoint orthogonal signals: demixing n sources from 2 mixtures[END_REF]. In this paper, we propose an alternative approach using modal decomposition of the received signals [START_REF] Huang | The em-pirical mode decomposition and Hilbert spectrum for nonlinear and non-stationary times series analysis[END_REF][START_REF] Flandrin | Empirical mode decomposition as a filter bank[END_REF]. More precisely we propose to decompose a supposed locally periodic signal which is not necessarily harmonic in the Fourier sense into its various modes. The audio signals and more particularly the musical signals can be modeled by a sum of damped sinusoids [START_REF] Boyer | Audio modeling based on delayed sinusoids[END_REF] and hence are well suited for our separation approach. We propose here to exploit this last property for the separation of audio sources by means of modal decomposition.
DATA MODEL
The blind source separation model assumes the existence of N independent signals s 1 (t), . . . , s N (t) and M observations x 1 (t), . . . , x M (t) that represent the mixtures. These mixtures are supposed linear and instantaneous, i.e.
x_i(t) = Σ_{j=1}^{N} a_{ij} s_j(t),   i = 1, . . . , M   (1)
This can be represented compactly by the mixing equation
x(t) = As(t) (2)
where s(t) ≝ [s_1(t), . . . , s_N(t)]^T is an N × 1 column vector collecting the source signals, vector x(t) similarly collects the M observed signals, and the M × N mixing matrix A ≝ [a_1, . . . , a_N], with a_i = [a_{1i}, . . . , a_{Mi}]^T, contains the mixture coefficients. We will suppose that for any pair (i, j) with i ≠ j, the vectors a_i and a_j are linearly independent. The source signals are supposed to be decomposable into a sum of modal components c_i^j(t), i.e.:
s_i(t) = Σ_{j=1}^{l_i} c_i^j(t),   t = 0, . . . , T − 1   (3)
The usual source independence assumption is replaced here by a quasi-orthogonality assumption of the modal components, i.e.
⟨c_i^j | c_{i′}^{j′}⟩ / (‖c_i^j‖ ‖c_{i′}^{j′}‖) ≈ 0   for (i, j) ≠ (i′, j′)   (4)
where
⟨c_i^j | c_{i′}^{j′}⟩ ≝ Σ_{t=0}^{T−1} c_i^j(t) c_{i′}^{j′}(t)*   (5)
and
‖c_i^j‖² = ⟨c_i^j | c_i^j⟩   (6)
Remark: Assumption (4) may be restrictive in certain applications. However, it can be relaxed in such a way to allow common modal components to different sources as shown in [START_REF] Aissa-El-Bey | Séparation aveugle sous-déterminée de sources audio par la méthode EMD (Empirical Mode Decomposition)[END_REF].
SEPARATION USING MODAL DECOMPOSITION
Based on the previous model, we propose an approach in two steps consisting of:
• An analysis step: in this step, one applies a modal decomposition algorithm to each sensor output in order to extract all of its harmonic components. For this modal-component extraction, we compare two decomposition algorithms: the EMD (Empirical Mode Decomposition) algorithm introduced in [2, 3], and a parametric algorithm that estimates the parameters of the modal components modeled as damped sinusoids.
• A synthesis step: in this step we group together the modal components corresponding to the same source in order to reconstitute the original signal. This is done by observing that all modal components of a given source signal 'live' in the same spatial direction. Therefore, the proposed clustering method is based on the component's direction evaluated by correlation of the extracted (component) signal with the observed antenna signal.
Signal analysis using EMD
A new nonlinear technique, referred to as Empirical Mode Decomposition (EMD), has recently been introduced by N.E. Huang et al. for representing non-stationary signals as sum of zero-mean AM-FM components [START_REF] Huang | The em-pirical mode decomposition and Hilbert spectrum for nonlinear and non-stationary times series analysis[END_REF]. The starting point of the EMD is to consider oscillations in signals at a very local level. Given a signal z(t), the EMD algorithm can be summarized as follows [START_REF] Flandrin | Empirical mode decomposition as a filter bank[END_REF]:
1. Identify all extrema of z(t).
2. Interpolate between minima (resp. maxima), ending up with some envelope e min (t) (resp. e max (t)).
3. Compute the mean m(t) = (e min (t) + e max (t))/2.
4. Extract the detail d(t) = z(t) − m(t).
5. Iterate on the residual m(t).
By applying the EMD algorithm to the i-th mixture signal x_i, which is written as
x_i(t) = Σ_{j=1}^{N} a_{ij} s_j(t) = Σ_{j=1}^{N} Σ_{k=1}^{l_j} a_{ij} c_j^k(t),
one obtains estimates ĉ_j^k(t) of the components c_j^k(t).
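To fix ideas, here is a deliberately simplified Python sketch of EMD sifting; it uses cubic-spline envelopes and a fixed number of sifting iterations, which is an assumption on our part — practical implementations use data-driven stopping criteria as discussed by Huang et al. and Flandrin et al.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def emd(z, max_imfs=8, n_sift=10):
    """Very simplified EMD: repeated sifting with cubic-spline envelopes."""
    t = np.arange(len(z))
    residual = np.asarray(z, dtype=float).copy()
    imfs = []
    for _ in range(max_imfs):
        d = residual.copy()
        for _ in range(n_sift):
            maxima = argrelextrema(d, np.greater)[0]
            minima = argrelextrema(d, np.less)[0]
            if len(maxima) < 4 or len(minima) < 4:     # not enough oscillation left
                break
            upper = CubicSpline(maxima, d[maxima])(t)  # envelope through the maxima
            lower = CubicSpline(minima, d[minima])(t)  # envelope through the minima
            d = d - 0.5 * (upper + lower)              # subtract the local mean
        imfs.append(d)
        residual = residual - d
        if len(argrelextrema(residual, np.greater)[0]) < 4:
            break
    return imfs, residual
```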
Parametric signal analysis
In this section we present an alternative solution for signal analysis. For that, we represent the source signal and hence the observations as sum of damped sinusoids:
x_k(t) = Σ_{l=1}^{L} α_{l,k} z_l^t   (7)
where α l,k represents the complex amplitude and z l = e d l +iω l is the l th pole where d l is the negative damping factor and ω l is the angular-frequency.
The D × (T − D) data Hankel matrix H(x_k) is defined entrywise by [H(x_k)]_{n_1 n_2} ≝ x_k(n_1 + n_2), D being a window parameter chosen in the range T/3 ≤ D ≤ 2T/3. We use Kung's algorithm given in [START_REF] Kung | Spacetime and singular-value decomposition based approximation methods for the harmonic retrieval problem[END_REF], which can be summarized in the following steps:
1. Form the data Hankel matrix H(x k ).
2. Estimate the 2L-dimensional signal subspace U^(L) = [u_1 . . . u_2L] of H(x_k) by means of the SVD (u_1 . . . u_2L are the principal left singular vectors of H(x_k)).
3. Solve (in the least squares sense) the shift invariance equation
U_↓^(L) Ψ = U_↑^(L) ⇔ Ψ = (U_↓^(L))^# U_↑^(L)   (8)
where Ψ = ΦΔΦ^{−1}, Φ being a non-singular 2L × 2L matrix and Δ = diag(z_1, z_1*, . . . , z_L, z_L*). (·)^# denotes the pseudo-inversion operation, and the arrows ↓ and ↑ denote, respectively, the last- and first-row-deleting operators.
4. Estimate the poles as the eigenvalues of matrix Ψ.
5. Estimate the complex amplitudes by solving the least-squares fitting criterion
min_α ‖x_k − Zα‖² ⇔ α = Z^# x_k   (9)
where
x_k = [x_k(0) . . . x_k(T − 1)]^T is the observation vector, Z is a Vandermonde matrix constructed from the estimated poles, and α is the vector of complex amplitudes.
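A compact numpy/scipy sketch of this pole-and-amplitude estimation is given below. It follows the listed steps but is not the authors' implementation; the default window parameter and the use of least squares for the shift-invariance equation are our choices. For a real-valued signal the model contains conjugate pole pairs, so the subspace dimension passed as `L` should be twice the number of real damped sinusoids.

```python
import numpy as np
from scipy.linalg import hankel

def damped_sinusoid_fit(x, L, D=None):
    """Kung / ESPRIT-like estimation of L complex poles and amplitudes."""
    x = np.asarray(x)
    T = len(x)
    D = D if D is not None else T // 2            # window parameter, T/3 <= D <= 2T/3
    H = hankel(x[:D], x[D - 1:])                  # data Hankel matrix, H[i, j] = x(i + j)

    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :L]                                 # signal subspace

    # shift invariance: solve Us_down @ Psi = Us_up in the least-squares sense
    Psi, *_ = np.linalg.lstsq(Us[:-1, :], Us[1:, :], rcond=None)
    poles = np.linalg.eigvals(Psi)

    # amplitudes from a Vandermonde least-squares fit, eq. (9)
    Z = np.vander(poles, N=T, increasing=True).T  # Z[t, l] = poles[l] ** t
    amps, *_ = np.linalg.lstsq(Z, x, rcond=None)
    return poles, amps
```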
Signal synthesis using vector clustering
For the synthesis of the source signals one observes that thanks to the quasi-orthogonality assumption, one has:
⟨x | c_i^j⟩ / ‖c_i^j‖² ≝ (1/‖c_i^j‖²) [⟨x_1 | c_i^j⟩ . . . ⟨x_M | c_i^j⟩] ≈ a_i
where a_i represents the i-th column vector of A. We can then associate each component ĉ_j^k with a spatial direction (a column vector of A) that is estimated by
â_j^k = ⟨x | ĉ_j^k⟩ / ‖ĉ_j^k‖²
Two components of the same source signal are associated with the same column vector of A. Therefore, we propose to gather these components by clustering the vectors â_j^k into N classes. The initial sources can then be rebuilt, up to a constant, by adding the various components within the same class.
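A hedged Python sketch of this synthesis step is given below; it assumes the sensor signals are the rows of an array `X`, that the extracted modal components are arrays of the same length, and it reuses k-means for the clustering.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_components(X, components, n_sources):
    """Cluster modal components by spatial direction and rebuild the sources.

    X          : (M, T) array of sensor signals.
    components : list of length-T modal components (from every sensor).
    """
    # estimated direction of each component: <x | c> / ||c||^2  ~  a column of A
    dirs = np.array([X @ np.conj(c) / np.vdot(c, c).real for c in components])
    feats = np.column_stack([dirs.real, dirs.imag])
    labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(feats)

    # each source estimate (up to a constant) = sum of the components in its class
    estimates = [sum(c for c, lab in zip(components, labels) if lab == k)
                 for k in range(n_sources)]
    return estimates, labels
```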
Source pairing and selection
Note that by applying the approach described previously (analysis plus synthesis) to all the antenna outputs x_1(t), · · · , x_M(t), we obtain M estimates of each source signal. The estimation quality of a given source signal varies significantly from one sensor to another. Indeed, it depends strongly on the matrix coefficients and, in particular, on the signal-to-interference ratio (SIR) of the desired source. Consequently, we propose a blind selection method to choose a 'good' estimate among the M we have for each source signal. For that, we first need to pair the source estimates together. This is done by associating each source signal extracted from the first sensor with the (M − 1) signals extracted from the (M − 1) other sensors that are maximally correlated with it. The correlation factor of two signals s_1 and s_2 is evaluated by ⟨s_1 | s_2⟩ / (‖s_1‖ ‖s_2‖). Once the source pairing is achieved, we propose to select the source estimate of maximal energy, i.e.
ŝ_i(t) = max_j { E_i^j = Σ_{t=0}^{T−1} |ŝ_i^j(t)|², j = 1, · · · , M }   (10)
where E j i represents the energy of the i th source extracted from the j th sensor. One can consider other methods of selection based on the dispersion around the centroid of each class, the number of components of each source estimate, etc.
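The following short Python sketch illustrates this pairing-and-selection rule under the assumption that the per-sensor source estimates are available as lists of arrays; the small constant added to the denominator is only there to avoid division by zero.

```python
import numpy as np

def pair_and_select(estimates_per_sensor):
    """Pair the per-sensor source estimates and keep the maximal-energy one (10).

    estimates_per_sensor : list of M lists, each containing N candidate sources.
    """
    selected = []
    for s0 in estimates_per_sensor[0]:
        group = [s0]
        for candidates in estimates_per_sensor[1:]:
            corr = [abs(np.vdot(s0, c)) / (np.linalg.norm(s0) * np.linalg.norm(c) + 1e-12)
                    for c in candidates]
            group.append(candidates[int(np.argmax(corr))])   # maximally correlated
        energies = [np.sum(np.abs(g) ** 2) for g in group]
        selected.append(group[int(np.argmax(energies))])     # maximal-energy estimate
    return selected
```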
Discussion
We provide here some comments to get more insight onto the proposed separation method:
• Over-determined case: In that case, one is able to separate the sources by left inversion of matrix A. The latter can be estimated from the centroids of the N clustering classes (i.e., the centroid of the i-th class represents the estimate of the i-th column of A).
• Estimation of the number of sources: This is a difficult and challenging task in the underdetermined case. Few approaches exist based on multi-dimensional tensor decomposition [START_REF] Lathauwer | ICA techniques for more sources than sensors[END_REF] or based on the clustering with joint estimation of the number of classes [START_REF] Frank | The data analysis handbook[END_REF]. However, these methods are very sensitive to noise, to the source amplitude dynamic and to the conditioning of matrix A. In this paper, we assume the number of sources known (or correctly estimated).
• Number of modal components: In the parametric approach, we have to choose the number of modal components L needed to approximate the audio signal well. Indeed, small values of L lead to a poor signal representation, while large values of L increase the computational cost. In fact, L depends on the 'signal complexity', and in general musical signals require fewer components (for a good modeling) than speech signals. In section 4 we illustrate the effect of the value of L on the separation quality.
• Hybrid separation approach: It is quite likely that the separation quality can be improved by using signal analysis in conjunction with spatial filtering. Indeed, it has been observed that the separation quality depends strongly on the mixture coefficients. Spatial filtering can be used to improve the SIR for a desired source signal and consequently its extraction quality. This will be the focus of future work.
SIMULATION
We present here some simulation results to illustrate the performance of our blind separation algorithms. For that, we consider a uniform linear array with M = 3 sensors receiving the signals from N = 4 audio sources (except for the third experiment, where N varies in the range [2, 6]).
The angles of arrival of the sources are chosen randomly. Figure 1 compares the performance obtained by our algorithm using EMD and the parametric technique with L = 30. As a reference, we also plot the NMSE obtained by pseudo-inversion of matrix A [10] (assumed exactly known). It is observed that both the EMD-based and the parametric-based separation provide better results than those obtained by pseudo-inversion of the exact mixing matrix. The plots in figure 2 illustrate the effect of the number of components L chosen to model the audio signal. Too small or too large values of L degrade the performance of the method. In other words, there exists an optimal choice of L that depends on the signal type. In figure 3, we present the separation performance loss observed when the number of sources increases from 2 to 6 in the noiseless case. For N = 2 and N = 3 (overdetermined case) we estimate the sources by left inversion of the estimate of matrix A. In the underdetermined case,
CONCLUSION
This paper introduces a new blind separation method for audio-type sources using modal decomposition. The proposed method can separate more sources than sensors and provides, in that case, a better separation quality than the one obtained by pseudo-inversion of the mixture matrix (even if it is known exactly). For the signal analysis step of the proposed method, two algorithms are used and compared, based respectively on the EMD and on an ESPRIT-like technique for the estimation of the poles of the modal components modeled as damped sinusoids.
Fig. 1. NMSE versus SNR.
Fig. 2. NMSE versus L: SNR = 10 and 30.
Fig. 3. NMSE versus N (noiseless case).
(•) represents the real part of a complex entity. For the extraction of the modal components, we propose to use the ESPRIT-like (Estimation of Signal Parameters via Rotation Invariance Technique) technique that estimates the poles of the signals by exploiting the row-shifting invariance property of the D×(T -D) data Hankel matrix | 14,201 | [
"18420",
"956547",
"742871"
] | [
"300839",
"300839",
"300839"
] |
01771658 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2005 | https://hal.science/hal-01771658/file/SPARS2005.pdf | Abdeldjalil Aissa
El Bey
Karim Abed-Meraim
Yves Grenier
A Aïssa-El-Bey
UNDERDETERMINED BLIND SEPARATION OF AUDIO SOURCES IN TIME-FREQUENCY DOMAIN
INTRODUCTION
Blind source separation (BSS) considers the recovery of unobserved original sources from several mixtures observed at the output of a set of sensors. Each mixture contains a combination of the sources that results from the mixing medium between the sources and the sensors. The term "blind" indicates that no a priori knowledge of the sources and the medium structure is available. To compensate for this lack of information, the sources are usually assumed to be statistically independent. Blind source separation has applications in different areas, such as communications, speech processing, image processing and biomedical engineering [START_REF]Blind estimation using higher-order statistics[END_REF]. A challenging problem of BSS occurs when there are more sources than sensors, and this is referred to as underdetermined blind source separation (UBSS). A time-frequency based UBSS algorithm has recently been proposed in [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequancy masking[END_REF][START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] to successfully separate speech sources using time-frequency distributions (TFDs). This algorithm provides good separation performance when the sources are disjoint in the TF plane. It also provides the separation of TF quasi-disjoint sources, that is, the sources are allowed to have a small degree of overlapping in the TF plane. However, the intersection points in the TF plane are not directly treated. More precisely, a point at the intersection of two sources is clustered "randomly" to belong to one of the sources. As a result, the source that picks up this point now contains some information from the other source, while the latter source loses some information of its own. However, for the other source, there is an interference at this point; hence the separation performance may degrade if no treatment is provided for this. An increase in the number of intersection points degrades the separation quality. In this paper, we propose another algorithm, combining TF-UBSS with subspace projection, that allows an explicit treatment of the intersection points. The main assumption used in this algorithm is that the number of sources simultaneously present at an intersection point in the TF plane cannot exceed the number of sensors.
PROBLEM FORMULATION
Data model
Let s(t) = [s_1(t), . . . , s_N(t)]^T represent the N nonstationary source signals. The source signals are transmitted through a medium so that an array of M linear sensors picks up a set of mixed signals represented by an M-dimensional vector x(t) = [x_1(t), . . . , x_M(t)]^T. We consider the instantaneous mixing medium that is modeled as follows
x(t) = As(t) + w(t), (1)
where A = [a 1 , . . . , a N ] is the mixing matrix and w(t) = [w 1 (t), . . . , w M (t)] T is the observation noise vector. We assume that any M columns of A are linearly independent. The goal of BSS is to recover s(t) from x(t). When M < N , the problem becomes UBSS. Let Ω 1 and Ω 2 be the TF supports (i.e. the points of TF plane where the local energy of the considered sources is non-zero) of two sources s 1 (t) and s 2 (t).
If Ω_1 ∩ Ω_2 ≠ ∅, the sources are said to be non-disjoint in the TF plane. The second assumption is that the sources are not necessarily disjoint, and in particular, there exist at most (M − 1) sources simultaneously present at the same TF point. However, we still assume that there exists for each source signal a region R_i in the TF plane where it exists alone, i.e. the energy of the other sources is negligible at the TF points within the considered region.
Time-frequency representation
Nonstationary signals are best described jointly in time and frequency; for this reason, a TF representation is commonly referred to as a time-frequency distribution (TFD). TFDs have been applied to a wide variety of engineering problems. Specifically, they have been successfully used for signal recovery at low signal-to-noise ratio (SNR), accurate estimation of the instantaneous frequency (IF), signal detection in communication, radar processing and for the design of time-varying filters. For more details on TFDs and related methods, see for example the recent comprehensive reference [START_REF] Boashash | Time Frequency Signal Analysis and Processing: Method and Applications[END_REF]. The method presented in this paper uses the Short-Time Fourier Transform (STFT), which is defined as:
S_x(t, f) = Σ_{m=−∞}^{+∞} h(t − m) x(m) e^{−j2πfm}   (2)
where h(t) is the Hamming window.
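In practice, the per-sensor STFTs of (2) can simply be stacked into an M-dimensional vector; the short sketch below does this with scipy's STFT routine, with illustrative (not prescribed) window parameters.

```python
import numpy as np
from scipy.signal import stft

def mixture_stft(X, nperseg=256):
    """Stack the STFTs of all sensor signals into the mixture STFT vector.

    X : (M, T) array of sensor signals; returns an (M, n_freqs, n_frames) array.
    """
    spectra = []
    for x in X:
        f, t, Zxx = stft(x, window="hamming", nperseg=nperseg)
        spectra.append(Zxx)
    return np.stack(spectra, axis=0), f, t
```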
CLUSTER-BASED TF-UBSS APPROACH FOR DISJOINT SOURCES
In this section, we briefly review the STFT method in [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequancy masking[END_REF], and propose a cluster-based linear TF-UBSS algorithm using STFT to avoid some of the drawbacks in [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequancy masking[END_REF]. First, under the transformation into the TF domain using the STFT, the model in (1) leads to:
S x (t, f ) = AS s (t, f ), (3)
where
S_{x_i}(t, f) = Σ_{m=−∞}^{+∞} h(t − m) x_i(m) e^{−j2πfm},   (4a)
S_x(t, f) = [S_{x_1}(t, f), . . . , S_{x_M}(t, f)]^T,   (4b)
and S_s(t, f) is the N × 1 source STFT vector. To avoid processing all TF points (and hence to reduce the computational cost), we first apply a noise thresholding such that, for each time slice t:
If ‖S_x(t, f_0)‖ / max_f {‖S_x(t, f)‖} > ε, then keep (t, f_0),   (5)
where ε is a small threshold (typically, ε = 0.05). Then, the set of all selected points, Ω, is expressed by Ω = ∪_{i=1}^{N} Ω_i, where Ω_i is the TF support of the source s_i(t). Under the assumption that all sources are disjoint in the TF domain, (3) is reduced to
S x (t, f ) = a i S si (t, f ), ∀(t, f ) ∈ Ω i , ∀i = 1, • • • , N. (6)
where the source STFT vector has been reduced to only the STFT of the source s_i(t). Now, in [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequancy masking[END_REF], the structure of the mixing matrix is particular in that it has only two rows (i.e. the method uses only two sensors) and the first row of the mixing matrix contains only ones. Then, (3) is expanded to
[S_{x_1}(t, f), S_{x_2}(t, f)]^T = [1, . . . , 1; a_{2,1}, . . . , a_{2,N}] [S_{s_1}(t, f), . . . , S_{s_N}(t, f)]^T,   (7)
and (6) to
[S_{x_1}(t, f), S_{x_2}(t, f)]^T = [1, a_{2,i}]^T S_{s_i}(t, f),
which results in
a_{2,i} = S_{x_2}(t, f) / S_{x_1}(t, f).   (8)
Therefore, all the points for which the ratios on the right-hand side of ( 8) have the same value form the TF support Ω i of a single source, say s i (t). Then, the STFT estimate of s i (t) is computed by:
Ŝ_{s_i}(t, f) = S_{x_1}(t, f), ∀(t, f) ∈ Ω_i; 0, otherwise.
Finally, the source estimate ŝi (t) is obtained by converting Ŝsi (t, f ) to the time domain using inverse STFT [START_REF] Griffin | Signal estimation from modified short-time Fourier transform[END_REF]. For more details, refer to this paper. It is observed that the structure of the mixing matrix, as expressed in (7) has some limiting factors. First, the extension of the UBSS method in [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequancy masking[END_REF] to more than two sensors is not obvious. Second, the division on the right-hand side of ( 8) is prone to error if the denominator is close to zero. To avoid the above mentioned problems, we propose here a modified version of the method valid for any number of sensors. This method is now referred to as the cluster-based linear TF-UBSS algorithm. The clustering method proceeds as follows: first compute the spatial direction vectors by:
v(t, f ) = S x (t, f ) S x (t, f ) , (t, f ) ∈ Ω, (9)
and force them, without loss of generality, to have the first entry real and positive. Next, we cluster these vectors into N classes {C i | i = 1, • • • , N }, using the k-mean clustering algorithm [START_REF] Frank | The data analysis handbook[END_REF]. The collection of all points, whose vectors belong to the class C i , now forms 2. Vector clustering by (9) and [START_REF] Frank | The data analysis handbook[END_REF].
3. Mixing matrix and source STFT estimation by ( 10) and (11).
4. Source TF synthesis by [START_REF] Griffin | Signal estimation from modified short-time Fourier transform[END_REF].
the TF support Ω i of the source s i (t). Then, the column vector a i of A is estimated as the centroid of this set of vectors:
â_i = (1/|C_i|) Σ_{(t,f)∈Ω_i} v(t, f),   (10)
where |C i | is the number of vectors in this class. Therefore, we can estimate the STFT of each source s i (t) (up to scalar constant) by:
Ŝ_{s_i}(t, f) = â_i^H S_x(t, f), ∀(t, f) ∈ Ω_i,   (11)
since, from (6), we have
â_i^H S_x(t, f) = â_i^H a_i S_{s_i}(t, f) ∝ S_{s_i}(t, f), ∀(t, f) ∈ Ω_i.
This algorithm is summarized in Table 1.
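A hedged end-to-end sketch of the clustering and estimation steps of this cluster-based algorithm is given below, operating on the stacked STFTs of the sensors; the thresholds and array layouts are our assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_based_tf_ubss(Sx, n_sources, eps=0.05):
    """Cluster-based TF-UBSS on an (M, F, T) stack of sensor STFTs."""
    M, F, T = Sx.shape
    norms = np.linalg.norm(Sx, axis=0)
    keep = norms > eps * norms.max(axis=0, keepdims=True)     # noise thresholding (5)
    pts = np.argwhere(keep)                                   # rows are (f, t) indices

    # spatial direction vectors (9), first entry forced real and positive
    V = Sx[:, pts[:, 0], pts[:, 1]].T
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    V = V * np.exp(-1j * np.angle(V[:, :1]))
    labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(
        np.column_stack([V.real, V.imag]))

    Ss = np.zeros((n_sources, F, T), dtype=complex)
    for i in range(n_sources):
        if not np.any(labels == i):
            continue
        a_i = V[labels == i].mean(axis=0)                     # centroid, eq. (10)
        a_i = a_i / np.linalg.norm(a_i)
        sel = pts[labels == i]
        Ss[i, sel[:, 0], sel[:, 1]] = a_i.conj() @ Sx[:, sel[:, 0], sel[:, 1]]  # eq. (11)
    return Ss
```

Each Ŝ_{s_i} would then be converted back to the time domain with an inverse STFT.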
SUBSPACE-BASED TF-UBSS APPROACH FOR NON-DISJOINT SOURCES
We propose here to use an appropriate subspace projection to estimate the TFDs of the individual sources, under the previously stated data assumptions. Under the TF non-disjoint condition, consider a source point (t, f ) ∈ Ω at which there are K contributing sources s α1 (t), . . . , s αK (t), with K < M . Then, (3) is reduced to the following
S_x(t, f) = Ã S_s̃(t, f), ∀(t, f) ∈ Ω   (12)
where à and s̃ are defined by:
s̃ = [s_{α_1}(t), . . . , s_{α_K}(t)]^T,   (13a)
Ã = [a_{α_1}, . . . , a_{α_K}].   (13b)
Let Q Ã be the orthogonal projection matrix onto the noise subspace of Ã. Then, Q Ã can be computed by:
Q_Ã = I − Ã (Ã^H Ã)^{−1} Ã^H.   (14)
We have the following observation:
Q_Ã a_i = 0, i ∈ {α_1, . . . , α_K};   Q_Ã a_i ≠ 0, otherwise.   (15)
Table 2. Subspace-based TF-UBSS algorithm
1. STFT computation.
2. Single-source point selection; mixing matrix estimation by the k-means algorithm.
3. For all source points, perform subspace-based TFD estimation of the sources by (14), (16) and (17).
4. Source TF synthesis by [START_REF] Griffin | Signal estimation from modified short-time Fourier transform[END_REF].
If A is known or a priori estimated, then this observation gives us the criterion to detect the indices α 1 , . . . , α K ; and hence, the contributing sources at the considered TF point (t, f ). In practice, to take into account noise, one detects the column vectors of à minimizing:
{α_1, . . . , α_K} = arg min_{β_1,...,β_K} ‖Q_{Ã_β} S_x(t, f)‖,   (16)
where Ã_β = [a_{β_1}, . . . , a_{β_K}].
Next, TFD values of the K sources at TF point (t, f ) are estimated by:
Ŝ_s̃(t, f) ≈ Ã^# S_x(t, f).   (17)
where Ã^# represents the pseudo-inverse of Ã. Now, to apply the above procedure, we need to estimate A first. This is performed here by clustering all the spatial direction vectors in (9), as for the previous TF-UBSS algorithm. Then, within each class C_i, we eliminate the vectors located far from the centroid (in the simulation we eliminate the vectors v(t, f) such that ‖v(t, f) − â_i‖ > 0.8 max_{v(t,f)∈Ω_i} ‖v(t, f) − â_i‖), leading to a reduced-size class C̃_i. This essentially keeps the vectors corresponding to the TF region R_i (which are ideally equal to the spatial direction a_i of the considered source signal). Finally, the i-th column vector of A is estimated as the centroid of C̃_i. Table 2 provides a summary of the subspace projection based TF-UBSS algorithm.
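The subset-selection rule (16) and the TFD estimate (17) at a given TF point can be sketched as follows; the exhaustive search over the N-choose-K subsets and the assumption that K is known are ours.

```python
import numpy as np
from itertools import combinations

def subspace_tfd_estimate(Sx_tf, A, K):
    """At one TF point, pick the K contributing columns of A and estimate S_s~.

    Sx_tf : length-M mixture STFT vector at this TF point.
    A     : (M, N) estimated mixing matrix, with K < M.
    """
    M, N = A.shape
    best_crit, best_idx = np.inf, None
    for idx in combinations(range(N), K):
        A_sub = A[:, list(idx)]
        Q = np.eye(M) - A_sub @ np.linalg.pinv(A_sub)   # projector of eq. (14)
        crit = np.linalg.norm(Q @ Sx_tf)                # criterion of eq. (16)
        if crit < best_crit:
            best_crit, best_idx = crit, idx
    A_best = A[:, list(best_idx)]
    return best_idx, np.linalg.pinv(A_best) @ Sx_tf     # eq. (17)
```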
SIMULATIONS AND RESULTS
Simulation results are illustrated in the figures below. In this simulation, we have used a uniform linear array of M = 3 sensors. It receives signals from N = 4 independent speech sources, lasting 8192 samples. In figure 2, the upper line represents the original source signals, the second line represents the M mixtures, and the bottom one represents the source estimates obtained by our algorithm. In figure 3, we compare the performance of our method with the TF-UBSS method of Table 1 (i.e. the modified method of [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequancy masking[END_REF]). The plots represent the average normalized MSE (NMSE) of the estimated sources versus the SNR in dB. For the subspace-based method we have used K = 2 for all TF points. As can be observed, a significant gain is obtained thanks to our subspace projection.
CONCLUSION
This paper introduces a new approach for blind separation of non-disjoint and nonstationary sources using TFDs. The proposed method can separate more sources than sensors and provides, in the case of non-disjoint sources, a better separation quality than the method proposed in [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequancy masking[END_REF]. This method is based on a vector clustering procedure that estimates the mixing matrix A, and subspace projection to separate the sources at the intersection points in the TF plane.
Fig. 1. (a) TF disjoint, (b) TF non-disjoint.
Fig. 2. Blind source separation example for 4 speech sources and 3 sensors in the instantaneous mixture case: the upper line represents the original source signals, the second line represents the M mixtures, and the bottom one represents the estimates of the sources obtained by our algorithm.
Fig. 3. NMSE versus SNR for 4 speech sources and 3 sensors: comparison of the performance of our algorithm with the modified TF-UBSS.
Table 1. Cluster-based TF-UBSS algorithm
1. Mixture STFT computation by (4).
2. Vector clustering by (9) and [START_REF] Frank | The data analysis handbook[END_REF].
3. Mixing matrix and source STFT estimation by (10) and (11).
4. Source TF synthesis by [START_REF] Griffin | Signal estimation from modified short-time Fourier transform[END_REF].
[Figure 2 panels (amplitude versus sample index, 0–8000): (a) s_1(t), (b) s_2(t), (c) s_3(t), (d) s_4(t), (e) x_1(t), (f) x_2(t), (g) x_3(t), (h) ŝ_1(t); the numerical plot values are not reproduced here.]
(h) s 1 (t) | 15,549 | [
"18420",
"956547",
"742871"
] | [
"300839",
"300839",
"300839"
] |
01681649 | en | [
"sdv",
"sde"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01681649/file/manus_Heteromysis_ekamako_RevFinal-PostprintHAL.pdf | Karl J Wittmann
email: [email protected]
Pierre Chevaldonné
email: [email protected]
Description of Heteromysis (Olivemysis) ekamako sp. nov. (Mysida, Mysidae, Heteromysinae) from a marine cave at Nuku Hiva Island (Marquesas, French Polynesia, Pacific Ocean)
Keywords:
Combined faunistic and genetic studies in the marine Ekamako Cave at the southern coast of Nuku Hiva, Marquesas, in the central Pacific yielded Heteromysis (Olivemysis) ekamako as a new species. This taxon differs from its congeners by a specific combination of morphological characters: flagellate, modified spines dorsally on each of the three segments of the antennular peduncle, a large smooth spine at the tip of only the second male pleopod, series of small flagellate spines along the oblique -1 -terminal margin of only the third and fourth male pleopods, and by 2-3 simple spines medially near the statocyst on the endopods of uropods. Although abundant at the entrance of Ekamako Cave, it has not been observed in nine additional submersed marine caves investigated at the Marquesas.
Introduction
It has been about three decades since the detection of Aberomysis muranoi and Palaumysis simonae at islands of Palau (Micronesia) and their description by [START_REF] Băcescu | Contribution to the knowledge of Mysidacea from western Pacific: Aberomysis muranoi n.gen., n.sp. and Palaumysis simonae n.gen., n.sp. from marine caves on Palau, Micronesia[END_REF]. Now, the further inspection of marine caves in remote, geographically isolated, oceanic archipelagos -this time in Marquesas Islands, French Polynesia in the Central Pacific -again yielded an endemic species of Mysidae with comparatively large eyes. This mysid is described here as a new species of the genus Heteromysis S.I. Smith, 1873. Most of the so-far described 83 species of this genus are known from cryptic habitats in a broader sense. Nonetheless, only five species were previously reported from marine caves: the Caribbean H. guitarti [START_REF] Băcescu | Heteromysini nouveaux des eaux cubaines: trois espèces nouvelles de Heteromysis et Heteromysoides spongicola n.g. n.sp[END_REF], H. bermudensis Sars, 1885, H. cyanogoleus Bamber, 2000, and the Indian Ocean species H. dentata Hanamura &Kase, 2001, andH. longiloba Hanamura &Kase, 2001. The species described below is the first Heteromysis from a marine cave in the Pacific.
Material and methods
Samples were collected during the French Agency of Marine Protected Areas (AAMP) biodiversity survey 'Pakaihi i te Moana' aboard the R/V Braveheart [START_REF] Poupin | Deep-water decapod crustaceans studied with a remotely operated vehicle (ROV) in the Marquesas Islands, French Polynesia (Crustacea: Decapoda)[END_REF]. Leg 3 (10-30 January 2012) was devoted to the deep sea and to the shallowwater caves. Three dives recovered mysid samples (leg. P. Chevaldonné), all from Ekamako Cave (Pérez et al. in press) at 8-10 m depth, 10-30 m from the entrance: 9 specimens upon dive #445, 7 Jan. 2012 (sample #MQ1-GR-PC1), 39 upon dive #451, 12 Jan. 2012 (sample #MQ1-GR-PC5), and 131 upon dive #468, 28 Jan. 2012 (sample #MQ1-GR-PC13). Collection took place with a suction bottle operated by SCUBA diver [START_REF] Chevaldonné | Improvements to the "Sket bottle": a simple manual device for sampling small crustaceans from marine caves and other cryptic habitats[END_REF]. Collected mysids were then stored in 95% ethanol.
Body size was measured from the tip of the rostrum to the posterior margin of the telson, excluding the spines. Further details on preparation, measurements, and examination of materials follow [START_REF] Wittmann | Two new species of Heteromysini (Mysida, Mysidae) from the Island of Madeira (N.E. Atlantic), with notes on sea anemone and hermit crab commensalisms in the genus Heteromysis S. I. Smith[END_REF] and citations therein; morphological terminology as in [START_REF] Wittmann | Orders Lophogastrida Boas, 1883, Stygiomysida Tchindonova, 1981, and Mysida Boas, 1883 (also known collectively as Mysidacea)[END_REF].
In order to obtain DNA barcodes of the new species, total genomic DNA was extracted as in [START_REF] Chevaldonné | Molecular and distribution data on the poorly known, elusive, cave mysid Harmelinella mariannae (Crustacea: Mysida)[END_REF] from two specimens from dive #445 (Heka445-1 and Heka445-2). A fragment of the nuclear gene of the small subunit ribosomal RNA (18S) has already been published by [START_REF] Chevaldonné | Molecular and distribution data on the poorly known, elusive, cave mysid Harmelinella mariannae (Crustacea: Mysida)[END_REF]. The present study also amplified a 658 bp fragment of the mitochondrial gene coding for cytochrome c oxidase subunit I (COI) using classical PCR protocols [START_REF] Chevaldonné | Molecular and distribution data on the poorly known, elusive, cave mysid Harmelinella mariannae (Crustacea: Mysida)[END_REF].
Results
Family
Etymology
The species name is used in apposition, adopted from the name of the type locality, the Ekamako Cave.
Definition
Carapace shows a well-formed subtriangular rostrum ending in a rounded apex. Large eyes with cornea occupying distal 30-40% eye surface. Medial margin of eyestalks produced in an obliquely anterior facing, acute spiniform extension. Each of the three articles of the antennular peduncle dorsally with a disto-medial flagellate spine (spinelike seta). Stout antennal scale extends beyond the (more basally inserting) trunk of the antennal flagellum but not beyond the antennular peduncle. Third thoracic endopod with 6-7 subapically flagellate, serrated, strongly modified spines on medial margin of carpus. Tip of each penis with two finger-like lobes and a rounded, flattened, posteriorly directed lobe. Second male pleopod apically ending in a large, non-flagellate spine; third male pleopod with series of 5-9 flagellate spines along distal margin; fourth male pleopod with series of 13-26 smaller flagellate spines in this position. Uropodal endopod armed with only 2-3 spines. Each lateral margin of telson armed with 5-6 spines along distal half, not counting the pair of apical spines; outer apical spine 3-4 times longer than inner; proximally rounded 'V'-shaped terminal cleft occupies 25-28% length of telson, anterior 78-83% of its margins armed with 18-21 acute laminae.
Description
Small, robust, light-red mysids with cephalothorax occupying 37-41% body length, pleon (without telson) 48-51%, and carapace 29-34%. Abdominal somites 1-5 are 0.8-0.9, 0.9-1.0, 0.8-1.0, 0.7-0.9, and 0.7-0.9 times the length of the sixth somite, respectively.
Carapace (Fig. 1A-C) normal, without apparent sexual dimorphism. Its anterolateral edges appear roughly evenly rounded. Cervical sulcus strongly developed, cardial sulcus less apparent but always evident. Posterior margin of carapace evenly rounded, weakly emarginated, leaving only the ultimate thoracic somite dorsally exposed. Arrangement of pores on carapace closely resembles that in H. dardani [START_REF] Wittmann | Two new species of Heteromysini (Mysida, Mysidae) from the Island of Madeira (N.E. Atlantic), with notes on sea anemone and hermit crab commensalisms in the genus Heteromysis S. I. Smith[END_REF]: a group of 10-12 pores with about 1 µm diameter are in median position behind the cardial sulcus. These pores surround a larger, less distinctly porelike structure in a roughly butterfly-shaped manner (Fig. 1C). In front of the cervical sulcus there is an additional group of 19-22 pores in roughly 'V'-shaped arrangement (Fig. 1B). Except for sulci and pores, the outer surface of the carapace is smooth in both sexes.
Eyes (Fig. 1A,D). Eyes well developed, stalk and cornea thick. Eyestalks and cornea dorsoventrally somewhat compressed. Cornea calotte-shaped in dorsal, oval in lateral, and calotte-shaped to sub-reniform in ventral view. Eyestalks with small group of scales on the inner basal corner.
Antennulae (Fig. 1A,D,). Peduncle three-segmented, whereby 42-44% of its length is occupied by the basal segment, 17-20% by the middle, and 35-41% by the terminal one. Basal segment terminally with a small dorsal and a larger outer setose apophysis. The dorsal apophysis anteriorly with 3-4 plumose setae, a slender smooth seta, and a modified spine (spine-like seta) with a subapical flagellum (Fig. 1E,J,K).
Median segment anteriorly obliquely truncate, its anterior margin with a flagellate spine (Fig. 1G,H) accompanied by a large plumose seta near inner distal corner. Flagellate spine from this segment (Fig. 1G,H) more seta-like and bearing its flagellum in more apical position compared to those from the basal (Fig. 1J,K) and the terminal (Fig. 1F, L, M) segment. Terminal segment also bearing a specialized setation at its inner distal corner: 1-2 large, medially directed, plumose setae on ventral face versus an obliquely laterally directed thicker, smooth seta, plus an obliquely medially-anteriorly directed, flagellate spine on dorsal face (Fig. 1F,L,M). Males with short, rounded appendix masculina in submedian, almost terminal position on the ventral surface of the terminal segment (Fig. 1L). This appendix shows a tuft of rather few, comparatively short setae extending obliquely downwards. In both sexes, the outer antennular flagellum is thicker than the inner one, by a factor of 1.4-2.2 when measured near the basis of the flagella.
Antennae (Fig. 1A,D). Antennal scale without spines, setose all around; its length 2.9-3.8 times maximum width. A small apical segment of 6-9% scale length is marked by a transverse suture; this segment broader than long, with five plumose setae.
Antennal sympod shows a forward projecting, tongue-like, terminally rounded process.
Posteriorly it bears a roughly ovoid lobe containing the end sac of the antennal gland.
Three-segmented antennal trunk slightly longer than the antennal scale, nonetheless not reaching up to the tip of the scale due to its more basal position. Basal segment amounts to 16-19% of trunk length, second to 48-50% and third to 32-35%.
Mouthparts. Labrum, mandibles, labium and maxillae as normal in this genus.
Mandibular palp normal, three-segmented. Basal segment is 11-14% palp length, median segment 60-65%, terminal segment 21-26%. Basal segment smooth all around.
Median segment is 2.4-2.9 times its maximum width. Only normal setae along the inner and outer margins of the median segment, with the reservation that the distal half of the outer margin is largely smooth, bearing only 1-2 setae. Pars molaris showing a strong grinding surface in both mandibles. Pars incisivus with 1-2 large teeth, digitus mobilis with two large plus 2-4 small teeth, and pars centralis ('spine row' in the terminology of [START_REF] Tattersall | The British Mysidacea[END_REF] with three very spiny teeth. Only smooth spines terminally at the distal segment of the maxillula, where there is a transverse row of three plumose setae in subterminal position. Closely in front of the basis of these setae there is a total of only 2-4 pores in variable arrangement. Endite of the maxillula with three large, spine-like setae at tip, and with additional 12-14 smaller setae in more proximal position, these latter setae with diverse small modifications. Thoracopods (general; Fig. 2). Basal plate of exopods 1-7 shows a distinct lateral expansion (Fig. 2A,B), so that plate length is only 1.4-2.3 times maximum width. By contrast, this relation is 2.0-3.0 in the basally less expanded exopod 8 (Fig. 2H). Outer margin of the plate ends in a rounded edge (Fig. 2A,B,H). However, in both sexes any exopod may bear a minute, rounded to acute knob at this edge, as shown in Fig. 2B, in contrast to the absence of knobs in Fig. 2A,H. Flagellum 8-segmented in first exopod, 9-segmented in exopods 2-7, whereas 8-or 9-segmented in eighth exopod (Fig. 2H), not counting the large intersegmental joint between basis and flagellum. The first thoracopods bear a comparatively large, leaf-like, smooth epipod. Length of endopods increases in order of thoracopods 1, 2, 4, 3, 8, 6, 7, and 5 (Fig. 2A,B,H). Length of ischium increases from first to last endopod. Merus longer than ischium in endopods 1-4 (Fig. 2A,B), whereas shorter than ischium in endopods 5-8 (Fig. 2H). Carpopropodus of endopods 1-8 with 2, 2, 2, 3, 3-5, 4-5, 4-5, and 4-5 segments, respectively (Fig. 2A,B,D,H). Carpus and dactylus always entire. Endopod 3 (Fig. 2A,B) with powerful carpus 0.5-0.7 times the length of the merus, or 0.8-1.0 times ischium. Thoracic endopods 1-2 each with large dactylus, endopod 3 with a relatively smaller one (Fig. 2A,B), and finally endopods 4-8 with again smaller but always distinct dactylus (Fig. 2D,E,H,J). Length of claws increasing in order of endopods 5, 6, 7, 8, 1, 3, and 4 (Fig. 2A,B,E,J). Claw of thoracic endopod 2 missing, claw of endopod 3 being the most powerful one (Fig. 2A,B), whereas claw of endopod 4 is the longest but also the thinnest one (Fig. 2E).
Endopod of first and second thoracopods modified as maxilliped. Coxa of first maxilliped with broadly and continuously rounded endite bearing one plumose seta; its basis with large prominent endite, whereas ischium and merus with merely a feebly projecting endite. The endites of basis, ischium, and merus are hairy and densely setose on inner margin. Dactylus characterized by a strong, subapically serrated claw. Basis of second maxilliped shows a distinctly projecting endite which is slightly larger in males than in females. In both sexes, the merus is slender, longer than combined praeischium and ischium, but somewhat shorter than combined carpopropodus and dactylus.
Dactylus with a dense brush of setae only. 10-14 among these setae are spine-like modified, bearing series of teeth in their median portions on either side. A true claw is missing on the dactylus of only the second maxilliped.
Endopod of third thoracopod modified as gnathopod (Fig. 2A-C). Basis with much shorter endite compared to that of the second endopod. Ischium and merus strong as is normal in gnathopods. Along the outer margin of the merus there are 4-5 setae that are unilaterally barbed along their median to subterminal portions. Third thoracic endopod moderately dimorphic: maximum length of the carpopropodus is 2.1-2.3 times its maximum width in males, compared to 3.0-3.3 in females. Maximum length of the carpopropodus, expressed in the same order (males versus females), is 0.7-0.8 versus 0.6-0.7 times that of the merus and 1.1-1.2 versus 0.9-1.1 that of the ischium. Both sexes with carpus showing 6-7 subapically flagellate spines (Fig. 2C) along terminal 35-55% of its medial margin. These spines with their inner margin serrate between subbasal portions and the flagellum; outer margin with a rugged subbasal portion (Fig. 2C). These spines arranged in three pairs plus one optional, unpaired, weaker spine in most basal position. The paired partners are located side by side at about the same position along the carpus (therefore only one of each shown in Fig. 2A). Claw smooth, strong, with apically increasing curvature (Fig. 2A,B).
Endopod of fourth to eighth thoracopods (Fig. 2D-J). These legs are moderately long and slender. Fifth endopod, when stretched, extending shortly beyond the rostrum, the eighth endopod up to the labrum. Basis of endopods 4-8 with lappet-to tongue-like apophysis in both sexes; this apophysis extends beyond the inner margin of the basis only in endopods 6-8 of females (Fig. 2G), but not in males (Fig. 2F,H). Fourth endopod with small dactylus bearing a long, almost straight, very thin and seta-like claw (Fig. 2D,E). Dactylus of endopods 5-8 equipped with stronger, short claw armed with acute, spine-like cilia (Fig. 2J) along its median portions and showing a distally increasing curvature. These cilia become continuously larger towards the tip of the claw (Fig. 2J). Endopods 5-8 each with three large, almost smooth and strongly curved paradactylary setae (Fig. 2H). Corresponding setae on endopod 4 much more weakly curved (Fig. 2D).
Marsupium. Females with large oostegites on thoracopods 7, 8. Thoracopod 6 with rudimentary oostegite represented merely by a small lobe with three plumose setae (Fig. 2G).
Penes (Fig. 2H). Penes slender, only slightly shorter than the merus of the ultimate thoracic endopod. Their shape roughly that of tubes, weakly bent forwards, even more weakly also outwards. Each penis stiff, with smooth cuticle all around, except for one small, plumose seta at about one third of its length on the exterior face; tip with lobes.
Thoracic sternal processes (Fig. 1N,O). Both sexes with a subtriangular, anteriorly directed, terminally rounded process from first thoracic sternite. No additional processes in females. Males with additional small, smooth, median bulge from first thoracic sternite and with median processes from thoracic sternites 2-8. These latter are small simple processes from sternites 2, 7, 8, but larger and terminally finger-like from sternites 3-6, each flanked by two shorter medio-lateral processes (Fig. 1N). Only females with fields of hairs on sternites 2-5 (Fig. 1O).
Pleopods (Fig. 3A-G). Pleopods reduced to small setose, bilobate, or obscurely bilobate plates in both sexes. Length without setae or spines increases from first to fifth pleopods in both sexes. However, more discontinuously in males, with a stronger increase between pleopods one and three versus a weak, inconspicuous increase in numbers three to five. All pleopods of females (Fig. 3A) and pleopods 1, 5 of males (Fig. 3B,F) lacking spines. For spines on second (Fig. 3C), third (Fig. 3D), and fourth (Fig. 3E) male pleopods, see 'Definition' above. A small flagellum (Fig. 3G) inserts shortly below the tip of each flagellate spine (= spine-like seta) on male pleopods 3, 4.
Not considering spine-like setae, all setae of females and most setae of males are plumose, at least in their apical portions. A smooth seta found only on the tip of pleopod 4 in males (Fig. 3E). Uropods (Fig. 3H). The exopods reach with 12-20% of their length beyond the endopods and 40-53% beyond the telson, nonetheless the endopods 31-44% of their length beyond the telson. Exopods 3.7-3.8 times longer than their maximum width.
Their medial margin much more strongly convex than the outer one. Endopods with 2-3 spines on medial margin, in subbasal position near statocyst. This statocyst large, containing a comparatively small statolith with a diameter of 55-60 µm. The statoliths are discoidal showing an indistinct fundus and a distinct tegmen; statolith formula is 2 + 3 + (6-10) + (4-6) = 16-20 (N = 5 statoliths from 3 dissected specimens). Mineral composition is fluorite (as also in eight Heteromysis species examined by [START_REF] Ariani | A comparative study of static bodies in mysid crustaceans: evolutionary implications of crystallographic characteristics[END_REF][START_REF] Wittmann | Heteromysis arianii sp.n., a new benthic mysid (Crustacea, Mysidacea) from coralloid habitats in the Gulf of Naples (Mediterranean Sea)[END_REF][START_REF] Wittmann | Centennial changes in the near-shore mysid fauna of the Gulf of Naples (Mediterranean Sea), with description of Heteromysis riedli sp. n. (Crustacea, Mysidacea)[END_REF][START_REF] Wittmann | Two new species of Heteromysini (Mysida, Mysidae) from the Island of Madeira (N.E. Atlantic), with notes on sea anemone and hermit crab commensalisms in the genus Heteromysis S. I. Smith[END_REF].
Telson (Fig. 3J) length 1.10-1.15 times that of the sixth pleonite and 0.65-0.76 times the uropodal exopod. Telson subtriangular, with length 1.3-1.4 times its maximum width. Lateral margins straight to slightly concave. Apical cleft distinctly deeper than wide. Margins of cleft lined by acute laminae that range from slightly shorter than to about as long as average-sized spines on the lateral margins of the telson. For further details of the telson, see 'Definition' above.
Eggs and larvae (Fig. 3K-M)
The female with 4.8 mm body length carried five eggs with 0.35-0.38 mm diameter.
Two females with 3.8 and 4.3 mm, respectively, each carried a total of four late stage 2 nauplioid larvae. These larvae bear short setae at the tip of antennula and antenna (Fig. 3L), and of the tail (Fig. 3M). Their setae much smaller than, for example, in the nauplioids of Heteromysis (Olivemysis) wirtzi [START_REF] Wittmann | Two new species of Heteromysini (Mysida, Mysidae) from the Island of Madeira (N.E. Atlantic), with notes on sea anemone and hermit crab commensalisms in the genus Heteromysis S. I. Smith[END_REF], from the N.E. Atlantic (Wittmann 2008: Fig. 4m, n).
DNA sequences
The two specimens extracted yielded the same 818 bp 18S sequence (accession # HG315710; [START_REF] Chevaldonné | Molecular and distribution data on the poorly known, elusive, cave mysid Harmelinella mariannae (Crustacea: Mysida)[END_REF]). The COI amplifications, however, revealed a different haplotype for each specimen (two synonymous substitutions). These two COI haplotypes have been deposited in the ENA-EMBL database (accession # LT555314-LT555315).
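As an illustration only (not part of the original study), the check described above, namely that the two COI haplotypes differ only by synonymous substitutions, can be reproduced along the following lines; the Biopython-based helper and the short sequences are hypothetical placeholders rather than the deposited LT555314-LT555315 data.

```python
# A minimal sketch (not part of the study) of how two aligned COI haplotypes can
# be checked for synonymous differences: count nucleotide mismatches, translate
# both with the invertebrate mitochondrial code, and compare the proteins.
# The short sequences below are placeholders, not the deposited haplotypes.
from Bio.Seq import Seq

def compare_haplotypes(seq_a: str, seq_b: str, table: int = 5):
    """Return (number of nucleotide differences, True if all are synonymous)."""
    assert len(seq_a) == len(seq_b), "haplotypes must be aligned and of equal length"
    n_diff = sum(a != b for a, b in zip(seq_a, seq_b))
    same_protein = str(Seq(seq_a).translate(table=table)) == str(Seq(seq_b).translate(table=table))
    return n_diff, same_protein

# Hypothetical aligned fragments differing at two third-codon positions.
hap1 = "ATGACTTTATACTTTATTTTTGGT"
hap2 = "ATGACATTATACTTTATCTTTGGT"
print(compare_haplotypes(hap1, hap2))  # expected: (2, True) -> two silent substitutions
```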
Discussion
The new species, Heteromysis ekamako, from a marine cave at an island in the central Pacific Ocean, shows very close morphological similarities with two species from the West Atlantic, H. mayana [START_REF] Brattegard | Mysidacea from shallow water in the Caribbean Sea[END_REF], from diverse shallow-water habitats, occasionally associated with sea anemones, along the Caribbean coast of Mexico, and with H. gomezi [START_REF] Băcescu | New spongicolous Heteromysis of the Caribbean Sea (H. gomezi n.sp. and H. mariani n.sp.)[END_REF], associated with sponges in Cuba. Besides the 'normal' features shared by members of the subgenus Olivemysis Băcescu, 1968, these three species share a single, large spine on the tip of the second male pleopod. They also share most of the other details of eyes, pleopods 2-5, uropods, and telson, given by [START_REF] Băcescu | Heteromysini nouveaux des eaux cubaines: trois espèces nouvelles de Heteromysis et Heteromysoides spongicola n.g. n.sp[END_REF][START_REF] Băcescu | New spongicolous Heteromysis of the Caribbean Sea (H. gomezi n.sp. and H. mariani n.sp.)[END_REF] and [START_REF] Brattegard | Mysidacea from shallow water in the Caribbean Sea[END_REF]. No other known species, particularly no Pacific species, shows such great similarities with the new species, especially regarding the second male pleopod. Nonetheless, the new Pacific species differs from the two Caribbean ones, among other features, by modified, flagellate spines (Fig. 1G-K) on basal and median segments of the antennular peduncle in both sexes, by a normal first male pleopod (Fig. 3B) without terminal spine, and by fewer flagellate spines on the third male pleopod (Fig. 3D).
The DNA sequences obtained provide little further information on the systematic relationships of H. ekamako with other Heteromysinae because of the scarcity of such sequences in DNA databases. This is particularly true for the COI sequences which only poorly match (ca. 80% identity) general Mysida sequences, due to the saturation of this marker. They will, however, be useful for future reference in the taxonomy of cave and/or Polynesian Heteromysinae. In contrast, the 18S sequences were compared to several other Heteromysis entries in GenBank, confirming their position in a Heteromysis clade as shown in [START_REF] Chevaldonné | Molecular and distribution data on the poorly known, elusive, cave mysid Harmelinella mariannae (Crustacea: Mysida)[END_REF].
Surprisingly little literature refers to mysids in the shallow waters of Polynesia (but see [START_REF] Murano | New and already known species of the genus Anisomysis (Mysidacea) from Hawaii and the Society Islands[END_REF]). H. ekamako is, however, rather conspicuous at the entrance of Ekamako Cave, south of Nuku Hiva, where it forms dense aggregations just above the very fine sandy bottom. It is most abundant just beyond the entrance, between 10 and 30 m inside the cave, where large ripple-marks testify to the prominent influence of the powerful oceanic swell at this shallow depth (Pérez et al. in press).
Importantly, although similar exploration was conducted in ten caves throughout the whole Marquesas archipelago (from Fatu Hiva in the south to Hatutaa in the north), H. ekamako was found only at its type locality in Nuku Hiva (Pérez et al. in press). The reason for such a restricted distribution is yet unknown but apparently frequent in cave Heteromysinae [START_REF] Wittmann | Retromysis nura new genus and species (Mysidacea, Mysidae, Heteromysini) from a superficial marine cave in Minorca (Balearic Islands, Mediterranean Sea)[END_REF][START_REF] Chevaldonné | Molecular and distribution data on the poorly known, elusive, cave mysid Harmelinella mariannae (Crustacea: Mysida)[END_REF]. One singularity of Ekamako cave, however, is that it is the largest explored so far in the Marquesas. It is at least 100 m long and several metres wide, with a large entrance, but also more or less isolated chambers (Pérez et al. in press) that may provide a unique diversity of microhabitats favourable to the permanent residence of a large population of H. ekamako.
Conflict of Interest:
The authors declare that they have no conflict of interest.
Figure captions
Fig. 1 Heteromysis (Olivemysis) ekamako sp. nov., paratypes; A-C, male 3.8 mm; D, H, K, M, O, female 4.8 mm; E-G, J, L, N, male 3.7 mm. A, anterior body region of male, dorsal; B, cervical pore group on carapace; C, cardial pore group on carapace; D, cephalic region of female, dorsal; E, male antennula, dorsal; F-K, modified, flagellate spines of the antennula in male (F, G, J) and female (H, K); L, terminal segment of male antennular peduncle, ventral; M, the same for female, dorsal; N, thoracic sternites of male; O, first and second thoracic sternites of female
Heteromysis sp., Chevaldonné et al. (2015: Fig. 5); Heteromysis, Pérez et al. (in press).
Type material. From dive #451: holotype, adult male (body size 3.7 mm); paratypes: five adult females (3.8-5.0 mm), one subadult female (3.6 mm), and four immatures of both sexes (2.6-2.8 mm); three paratypes were completely dissected: two adult males (3.7, 3.8 mm) and one adult female (4.8 mm). From dive #468: paratypes, 4 adult males (3.6-4.0 mm). The types are deposited in the Naturhistorisches Museum Wien (Vienna), crustacean collection: holotype, adult male (reg. no. NHMW 25722) and paratypes, 2 adult females from dive #451 (NHMW 25723), 2 adult males from dive #468 (NHMW 25724); and in the Muséum National d'Histoire Naturelle in Paris: 9 paratypes (1 adult male, 3 adult females, 1 subadult female, 4 immatures) from dive #451 (MNHN-IU-2014-17470), 2 paratypes (adult males) from dive #468 (MNHN-IU-2014-17471). The dissected specimens were retained by one of us (KJW) as the slides will degrade within a few decades.
Type locality. Ekamako Cave at the southern coast of Nuku Hiva Island, Marquesas, French Polynesia, Pacific Ocean, 08°56.215'S 140°05.450'W. Notes on microdistribution are given by Pérez et al. (in press); see also Discussion.
Fig. 2 Heteromysis (Olivemysis) ekamako sp. nov., paratypes; A, G, female 4.8 mm; B-
Fig. 3 Heteromysis (Olivemysis) ekamako sp. nov., paratypes; A, female 4.8 mm; B-J,
Acknowledgements
We first of all wish to thank Thierry Pérez for leading Leg 3 of the "Pakaihi I Te Moana" expedition and for the Ekamako cave dives with one of us (PC). We are also indebted to Xavier "Pipapo" Curvat for showing us the cave, to Laurent Albenga and John Starmer for their hard work underwater and to the captain and crew of R/V Braveheart. The French Agence des Aires Marines Protégées (AAMP) was the main sponsor and coordinator of this expedition. The Marquesan people are warmly thanked for their spirit and hospitality. | 28,178 | [
"19501"
] | [
"93275",
"188653"
] |
01765806 | en | [
"chim",
"sdv"
] | 2024/03/05 22:32:16 | 2016 | https://amu.hal.science/hal-01765806/file/Dary_et_al_2017.pdf | Chhavarath Dary
Sok-Siya Bun
email: [email protected].
Gaëtan Herbette
Hot Bun
Fathi Mabrouki
Sothea Kim
Florian Jabbour
Sovanmoly Hul
Béatrice Baghdikian
Evelyne Ollivier
Chemical profiling of the tuber of Stephania cambodica Gagnep. (Menispermaceae) and analytical control by UHPLC-DAD
Keywords: Accuracy profile, alkaloid, method validation, palmatine, quantification, roemerine, tetrahydropalmatine
Introduction
Stephania cambodica Gagnep. (Menispermaceae) is a woody climber found in mountainous regions of Cambodia and Vietnam. The main characteristic of this species is the absence of leaves during the blooming period. Mature individuals of S. cambodica often have multiple tubers lying on rocks that are interconnected by a woody stem (Figure S1). Known by its Cambodian vernacular name "Komar Pich", the plant tuber has been traditionally used by local people in the form of a decoction or a hydroethanolic macerate to treat various diseases and symptoms such as anxiety, malaria, fever, wounds, joint pains, fatigue and male sexual dysfunction (Center of Traditional Medicine 2013). In Vietnam, the tuber of S. cambodica is used in combination with other plants for the treatment of various diseases such as depression, asthma, hypertension… [START_REF] Do | Selected Medicinal Plants in Vietnam[END_REF]. Despite its well-established use, few phytochemical studies have been undertaken on S. cambodica [START_REF] Thanh | Two protoberberine alkaloids isolated from tuber of Stephania cambodica Gagnep. (Menispermaceae)[END_REF][START_REF] Dinh | Interaction of Vietnamese Medicinal Plant Extracts with Recombinantly Expressed Human Neurokinin-1 Receptor[END_REF].
Rotundine (l-tetrahydropalmatine), the main alkaloid isolated from the S. cambodica tuber, has been shown to be responsible for the inhibition of neurokinin-1 receptor gene expression [START_REF] Dinh | Interaction of Vietnamese Medicinal Plant Extracts with Recombinantly Expressed Human Neurokinin-1 Receptor[END_REF]. The presence of rotundine in the S. cambodica tuber could justify its traditional use as an anxiolytic remedy in Cambodia, since in Vietnam and China rotundine is used as an anxiolytic and hypnotic drug [START_REF] Wang | Treatment of Cocaine Addiction[END_REF]. This neuro-sedative activity has also been shown in rats and mice [START_REF] Semwal | Efficacy and safety of Stephania glabra: An alkaloid-rich traditional medicinal plant[END_REF].
Liquid chromatography is widely used for the quality control of alkaloids from Stephania species [START_REF] Bory | Simultaneous HPLC determination of three bioactive alkaloids in the Asian medicinal plant Stephania rotunda[END_REF][START_REF] Xie | Microwave-assisted extraction of bioactive alkaloids from Stephania sinica[END_REF][START_REF] Liu | Microwave-Assisted Extraction Followed by Solid-Phase Extraction for the Chromatographic Analysis of Alkaloids in Stephania cepharantha[END_REF]. An ultra-high performance liquid chromatography (UHPLC) method has recently been employed to characterise the alkaloids of Stephania tetrandra S.Moore [START_REF] Sim | Simultaneous determination of structurally diverse compounds in different fangchi species by UHPLC-DAD and UHPLC-ESI-MS/MS[END_REF]. However, the analytical control of the tuber of S. cambodica using these techniques has not yet been documented.
The aims of this study were, firstly, to characterise the chemical constituents of the tuber of S. cambodica, which is the main plant part used in traditional medicine, and secondly, to develop and validate a UHPLC method for the determination of the main metabolites of this plant.
Results and discussion
A new aporphine glycoside (1, 0.5 mg) named "angkorwatine" (Figure 1), and eight known alkaloids namely oblongine (2, 0.2 mg) [START_REF] Kato | Examination of Alkaloidal Constituents of Zanthoxylum usambarense by a Combination of Ion-pair Extraction and Ion-pair Chromatography Using Sodium Perchlorate[END_REF], stepharine (3, 0.6 mg) [START_REF] Thuy | Aporphine and protoaporphine alkaloids from Stephania rotunda[END_REF]), asimilobine-β-D-glucopyranoside (4, 3.8 mg) [START_REF] Likhitwitayawuid | Cytotoxic and antimalarial alkaloids from the tubers of Stephania pierrei[END_REF], isocorydine (5, 0.5 mg) [START_REF] Ferreira | Aporphine and bisaporphine alkaloids from Aristolochia lagesiana var. intermedia[END_REF], tetrahydropalmatine (THP) (6, 1.0 mg) [START_REF] Mastranzo | Asymmetric synthesis of (S)-(-)-tetrahydropalmatine and (S)-(-)-canadine via a sulfinyl-directed Pictet-Spengler cyclization[END_REF], jatrorrhizine (7, 0.1 mg) [START_REF] Shi | Chemical Constituents and Biological Activities of Stephania yunnanensis H.S. Lo[END_REF], palmatine (PAL) (8, 0.4 mg) [START_REF] Shi | Chemical Constituents and Biological Activities of Stephania yunnanensis H.S. Lo[END_REF] and roemerine (ROE) (9, 1.0 mg) [START_REF] Thuy | Aporphine and protoaporphine alkaloids from Stephania rotunda[END_REF] were simultaneously isolated from the hydroethanolic extract of S. cambodica by preparative HPLC (Figure S2). The isolation of alkaloids from this species tuber in previous reports involved conventional chromatography [START_REF] Thanh | Two protoberberine alkaloids isolated from tuber of Stephania cambodica Gagnep. (Menispermaceae)[END_REF].
The alkaloid-type structure of the nine compounds isolated was revealed by the NMR spectra. The NMR data of compounds (2-9) were all in accordance with the literature values.
Angkorwatine (1) was isolated as an amorphous powder, and the molecular formula was assigned as C23H27NO8 by high-resolution mass spectrometry (HR-ESI-MS) (m/z 446.1811 [M+H]+, calcd for C23H27NO8, 446.1809) (Figure S3) and NMR data, implying 11 degrees of unsaturation. The 13C NMR data gave a total of 23 separate resonances, including six signals assignable to a sugar. The 1H NMR spectrum exhibited one methoxy group (δ 3.79), five aromatic protons (δ 7.12; 7.36; 7.43; 7.47; 8.48), four ethylene protons (δ 3.03; 3.27; 3.49; 3.69), one signal of an oxymethine proton (δ 4.74), one signal of a methine proton (δ 4.47) and signals assigned to the sugar moiety with an anomeric proton (δ 4.98, d, J = 7.8 Hz). The 1H and 13C resonances of 1 were typical of a 7-oxygenated aporphine glycoside alkaloid. The HMBC crosspeak between H-1' (δ 4.98) and C-2 (δC 153.1) and the NOESY crosspeak between H-1' (δ 4.98) and H-3 (δ 7.12) placed the sugar moiety at C-2 of the 7-oxygenated aporphine. The sugar moiety was identified as β-D-glucopyranose from the proton signals at δ 4.98 (d, J = 7.8 Hz, H-1'), 3.56 (dd, J = 9.1, 7.8 Hz, H-2'), 3.48 (brt, J = 9.0 Hz, H-3'), 3.40 (dd, J = 9.4, 9.0 Hz), 3.50 (ddd, J = 9.4, 6.2, 2.0 Hz), 3.94 (dd, J = 12.1, 2.0 Hz), 3.71 (dd, J = 12.1, 6.2 Hz), and from the carbon resonances at δC 102.5 (C-1'), 74.9 (C-2'), 78.4 (C-3' and C-5'), 71.5 (C-4'), and 62.3 (C-6') in the 13C NMR spectrum. The HMBC crosspeak between the methoxy group H-12 (δ 3.79) and C-1 (δC 147.5) and the NOESY crosspeak between H-12 (δ 3.79) and H-11 (δ 8.48) placed the methoxy group at C-1 of the 7-oxygenated aporphine. The other proton signals were at δ 3.27 (m), 3.03 (brd, 15.8) and 3.69 (m), 3.49 (m) for the four aliphatic protons, which were assigned to H-4 and H-5. The two last signals at δ 4.77 (brs) and 4.74 (brs) were assigned to H-6a and H-7. In this particular case, 1H NMR analysis between 300 and 340 K did not allow a coupling constant to be observed between these signals: the two protons H-6a and H-7 appeared as a broad singlet (W½ ~ 8 Hz), which seemed to correspond to a coalescence of a cis H-7 and H-6a (J = 0 -> 3 Hz) [START_REF] Hocquemiller | Anaxagoreine, a new aporphine alkaloid, isolated from two species of the genus Anaxagorea[END_REF].
Compounds 1-5 and 9 were identified for the first time in the tuber of this species, while compounds 4, 6, 8 and 9 were the major alkaloids. The nine compounds isolated belong to five classes of alkaloids, namely aporphine (1, 3, 4, and 9), quinoline (5), benzyltetrahydroisoquinoline (2), protoberberine (7), and tetrahydroprotoberberine (6, 8). As asimilobine-β-D-glucopyranoside (4) is commercially unavailable, palmatine (8), roemerine (9) and tetrahydropalmatine (6) were selected as analytical markers of S. cambodica (Figure 1).
During the development process, the system suitability test (Table S2) and the selectivity of the UHPLC method were verified. Concerning the UHPLC conditions, acetonitrile and methanol are frequently preferred to ethanol as the mobile phase in the analysis of alkaloids. However, our study shows that ethanol gives a better resolution of the three analytes than methanol and acetonitrile. To our knowledge, the developed UHPLC method is the first for the simultaneous determination of palmatine, roemerine and tetrahydropalmatine in the tuber of S. cambodica (Figure S2).
This UHPLC method was then validated in terms of response functions, trueness, precision, accuracy, limits of quantification (LOQ) and detection (LOD), and linearity. The validation parameters are summarised in Table S3. The calibration curves were based on the through-origin linear regression model, which was fitted with concentration levels ranging from 1.67-33.20 µg/mL, 1.54-30.80 µg/mL and 4.24-84.70 µg/mL for PAL, ROE and THP, respectively. Each calibration point was analysed over the course of three consecutive days. Independent validation standards were also prepared by following the same process: five concentration levels ranging from 0.31-4.18 µg/mL, 2.01-30.72 µg/mL and 4.29-64.42 µg/mL for PAL, ROE and THP, respectively. Each point was analysed in triplicate over three consecutive days. The coefficients of determination of the daily regression equations were all greater than 0.999 (R 2 > 0.999) for reference solutions and test samples.
The data have been presented in Table S3. From the results obtained, the concentrations of the validation standards were back-calculated to determine the mean relative bias, the relative standard deviation (repeatability and intermediate precision) and the upper and lower β-expectation tolerance limits at the 95% level. The accuracy profile was built using trueness and intermediate precision variance. The acceptance thresholds were set at ±10% for ROE and THP and at ±20% for PAL. As indicated in Table S3, trueness expressed in terms of relative bias (%) was assessed by means of validation standards. RSD values smaller than 5% illustrated the good trueness of the method. The precision of the method was determined by computing the relative standard deviations (RSDs) for repeatability and time-differentiated intermediate precision at each concentration level of the validation standards. The precision at each concentration level of the validation standards did not exceed 10% for PAL and 5% for ROE and THP, as shown in Table S3. The comparable RSDs for repeatability and intermediate precision were mainly due to non-significant intergroup variances, validating the precision of the developed method. The accuracy of the method was also evaluated taking into account the total error (sum of the systematic and random errors) of the test results. β-expectation tolerance intervals were determined to investigate the accuracy profile of the method. If β = 0.95, this means that on average, 95% of the future results are included in the interval. As illustrated in Figure S10 and Table S3, the relative upper and lower β-expectation tolerance limits (%) did not exceed the adopted acceptance limits (±10% for ROE and THP and ±20% for PAL). Consequently, the method is able to provide accurate results over the investigated concentration ranges: 1.39-4.18 µg/mL, 2.01-30.72 µg/mL and 4.29-64.42 µg/mL for PAL, ROE and THP, respectively. As the smallest and highest quantities of the target substance in the sample could be assayed under experimental conditions with well-defined accuracy, the lower and upper LOQ were evaluated by calculating the smallest and highest concentrations beyond which the accuracy limits, or β-expectation tolerance limits, would fall outside the acceptance limits. As the accuracy profile was included within the acceptance limits (Figure S10), the first concentration level was considered as the lower LOQ for all molecules studied (1.39 µg/mL for PAL, 2.01 µg/mL for ROE, and 4.29 µg/mL for THP) and the highest concentration level as the upper LOQ for the three alkaloids (4.18 µg/mL for PAL, 30.72 µg/mL for ROE, and 64.42 µg/mL for THP). As for the limit of detection (LOD), it was estimated using the mean intercept of the calibration model and the residual variance of the regression. The LODs were evaluated at 0.19 µg/mL for PAL, 0.38 µg/mL for ROE and 2.36 µg/mL for THP. To demonstrate the linearity of the method, a linear regression line through the origin was fitted to the estimated or back-calculated concentrations of all the series of validation standards (N = 45) as a function of the introduced concentrations. This was done by applying a linear regression model based on the least squares method. The following regression equations were determined: y = 0.9721x (r 2 = 0.9975), y = 0.9939x (r 2 = 0.9993) and y = 0.9623x (r 2 = 0.9996) for PAL, ROE and THP, respectively (Table S3).
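As an illustration only (a minimal sketch, not the authors' validation software), the back-calculation underlying the accuracy profile can be outlined as follows: a through-origin calibration slope is fitted, validation results are back-calculated, and the relative bias, the repeatability RSD and a simplified β-expectation tolerance interval are derived at one concentration level. All numerical values in the example are hypothetical.

```python
# A minimal sketch (not the authors' validation software) of the back-calculation
# step behind an accuracy profile. All numbers below are hypothetical placeholders.
import numpy as np
from scipy import stats

def through_origin_slope(conc, area):
    """Least-squares slope of a calibration line forced through the origin."""
    conc, area = np.asarray(conc, float), np.asarray(area, float)
    return float(np.sum(conc * area) / np.sum(conc * conc))

def accuracy_profile_level(nominal, back_calc, beta=0.95):
    """Relative bias (%), repeatability RSD (%) and simplified beta-expectation
    tolerance limits (%) for replicate back-calculated results at one level."""
    x = np.asarray(back_calc, float)
    recovery = 100.0 * x / nominal
    bias = recovery.mean() - 100.0
    rsd = 100.0 * x.std(ddof=1) / x.mean()
    k = stats.t.ppf(0.5 + beta / 2.0, len(x) - 1)   # two-sided t quantile
    half_width = k * recovery.std(ddof=1)
    return bias, rsd, (bias - half_width, bias + half_width)

# Hypothetical calibration data (conc in ug/mL vs. peak area) and one validation level.
slope = through_origin_slope([1.67, 8.30, 16.60, 33.20], [120.0, 600.0, 1190.0, 2400.0])
areas = np.array([300.0, 295.0, 305.0, 298.0, 302.0, 299.0, 301.0, 297.0, 304.0])
back_calculated = areas / slope
print(accuracy_profile_level(4.18, back_calculated))
```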
The developed and validated method was applied to quantify the alkaloid content in different samples of S. cambodica (Table S4). No difference in alkaloid content was observed between tubers collected in the two successive years. These results suggest that the collection time may not have influenced the alkaloid content in this species. The content of PAL, ROE and THP in the stem was comparable to that of the tubers. As the stem is a renewable part, it could be a good alternative to the tuber in traditional use. Using the stem would also help reduce the over-exploitation of S. cambodica and hence preserve this species.
Experimental
Plant material
The plant collection for research was approved by the Ministry of Health in Cambodia.
Plant material was collected in two successive years from the same individual plant in Preah Vihear, Cambodia (14.223143 N, 104.980178 E) (Figure S1). It was authenticated by Dr. Sovanmoly Hul, and vouchers were deposited at the Paris Herbarium (France).
Reagents and materials
Ethanol, methanol and formic acid of HPLC Ultra-Gradient grade were purchased from Carlo Erba (Val de Reuil, France). Ultrapure water (18.2 MΩ) for HPLC analysis was obtained from a Milli-Q Reference A+ system (Millipore, CO., Bedford, MA, USA). Palmatine, roemerine and tetrahydropalmatine were purchased from Sigma-Aldrich (Saint Quentin Fallavier, France), Ambinter (Orléans, France) and Phytolab (Vestenbergsgreuth, Germany), respectively.
Phytochemical study
Preparative HPLC isolation
Dried powder of S. cambodica tuber (0.5 g) was extracted with 10 mL of ethanol 50% (v/v) for 15 min in a microwave apparatus (CEM Corporation, Matthews, NC, USA). This extraction protocol was developed in a previous study [START_REF] Desgrouas | Rapid and green extraction, assisted by microwave and ultrasound of cepharanthine from Stephania rotunda Lour[END_REF]. The dried extract (50 mg) was dissolved in 2.5 mL of a mixture of formic acid 0.1% (v/v)/methanol (70:30, v/v). The isolation of the compounds was carried out using a Gilson PLC 2020® preparative chromatograph with a DAD detector (LT350026, Gilson Inc., USA). The separation and purification of the constituents were performed on a Luna C18 column (10 µm, 150 × 4.6 mm, Phenomenex).
A solvent system consisting of formic acid 0.1% (v/v) (A) and methanol (B) was used as the mobile phase in gradient mode. The eluting program was optimised as follows: linear gradient from 10% to 80% B (0-60 min). The flow rate was 12 mL/min with monitoring at 272 nm.
Identification of the isolated compounds
Structural elucidation of the isolated compounds was based on spectroscopic experiments (1D and 2D NMR, ESIMS/HRESIMS) and on comparison of the spectral and chemical data with the literature. 1 H and 13 C NMR spectra were measured with a 600 MHz Avance III spectrometer (Bruker) ( 1 H, 600 MHz; 13 C, 150 MHz), equipped with a 5 mm BBFO + probe. Spectra were recorded with a 2-mm NMR capillary tube in 80 µL of 99.96% CD 3 OD solvent ( 1 H 3.31 ppm; 13 C 49.00 ppm) at 300 K. The 1 H (600 MHz) and 13 C NMR (150 MHz) data were reported in ppm downfield from tetramethylsilane. Coupling constants were expressed in Hz, where s stands for singlet, d for doublet, t for triplet, q for quartet, m for multiplet and br for broad. Hydrogen connectivity (C, CH, CH 2 , CH 3 ) information was obtained from edited HSQC and/or DEPTQ-135 experiments. Proton and carbon peak assignments were based on 2D NMR analyses. ESIMS/HRESIMS analyses were measured with a SYNAPT G2 HDMS mass spectrometer (Waters). Accurate mass measurements were performed in triplicate with two internal calibrations. The direct sample introduction was performed at a 5 µL/min flow rate. Optical rotations were recorded on an Anton Paar MCP200 589 nm polarimeter equipped with a sodium lamp (CH 3 OH, c in g/mL).
UHPLC analysis
The UHPLC apparatus used for the analysis of PAL, ROE and THP was an Agilent Infinity 1290 liquid chromatography system equipped with a binary pump solvent delivery system and a photodiode array detector (Agilent Technologies Inc., Germany). Chromatographic separation was achieved on a Zorbax Eclipse Plus RRHD-C18 column (50 × 2.1 mm, 1.8 µm, Agilent, Germany) operated at 30 °C. The mobile phase consisted of a gradient elution of formic acid 0.1% (v/v) (solvent A) and ethanol (solvent B). The gradient program was: 0-1 min at 5% B, 1-7 min from 5 to 42% B, with 3 min of post-time, at a flow rate of 0.35 mL/min. The injected volume was 2 µL. UV detection wavelengths were 280 nm for THP and 272 nm for PAL and ROE. The system suitability test and the selectivity of the method were carried out following FDA (1994) and ICH guidelines (2005).
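As a small illustration (not part of the published method), the gradient program can be represented as a timetable of (time, %B) breakpoints, with the nominal proportion of solvent B at any time obtained by linear interpolation; the helper below is a hypothetical sketch, not instrument software.

```python
# A minimal sketch: the UHPLC gradient as (time_min, percent_B) breakpoints,
# with linear interpolation of %B between them (hypothetical helper, not an
# instrument driver). Program: 0-1 min at 5% B, then 1-7 min from 5% to 42% B.
GRADIENT = [(0.0, 5.0), (1.0, 5.0), (7.0, 42.0)]

def percent_b(t_min: float, program=GRADIENT) -> float:
    """Nominal %B at time t_min, by linear interpolation between breakpoints."""
    if t_min <= program[0][0]:
        return program[0][1]
    for (t0, b0), (t1, b1) in zip(program, program[1:]):
        if t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    return program[-1][1]

print(percent_b(0.5))  # 5.0
print(percent_b(4.0))  # 23.5 (halfway through the 5 -> 42% ramp)
```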
Method validation
The validation strategy was based on the recommendations of the "Société Française des Sciences et Techniques Pharmaceutiques" (SFSTP) [START_REF] Bellenot | Issues of an analytical procedure development for the assay of constituents in herbal medicinal products IV. Recommendation for development and validation[END_REF]. In order to validate the analytical method, two kinds of samples were prepared independently. The concentrations of the calibration standards (THP, PAL and ROE) are presented in Table S3. The desired concentrations of the validation standards were obtained by ultrasonication of different weights of powdered tuber for 10 min in 10 mL of ethanol 50% (v/v). One mL of the filtrate was diluted in 20 mL of ethanol 50% prior to analysis (Table S3).
Conclusions
Nine alkaloids were simultaneously isolated from the hydroethanolic extract of the tuber of Stephania cambodica. A new isolated glycoalkaloid was named angkorwatine. This particular study suggests that the validated method for the quantification of palmatine, roemerine and tetrahydropalmatine is a rapid, innovative and effective analytical approach to control the quality of the tuber of S. cambodica and regulate its use in traditional medicine.
Supplementary material
The underlying research materials for this article are available online, alongside Figures S1-S10.
Disclosure statement
Authors declare no conflict of interest in this work.
Angkorwatine (1): C23H27NO8. [α]25D = -72.5° (CH3OH, c 0.0008). UV (CH3OH) λmax 210, 272 nm. The NMR spectra and data of angkorwatine are provided in Figures S4-S9 and Table S1.
Figure caption
Figure 1.
Funding
The research was supported by French government scholarships. | 19,898 | [
"19591",
"765818",
"172981",
"184842",
"171075",
"172667"
] | [
"532732",
"532897",
"519585",
"532732",
"188653",
"88070",
"532732",
"188653",
"532898",
"532899",
"519585",
"519585",
"532732",
"188653",
"532732",
"188653"
] |
01681554 | en | [
"sdv"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01681554/file/Hafsi%20et%20al._2017.pdf | Zakaria Hafsi
Safia Belhadj
email: [email protected]
Arezki Derridj
email: [email protected]
Jean-Philippe Mevy
email: [email protected]
Roger Notonnier
email: [email protected].
Alain Tonetto
email: [email protected]
Thierry Gauquelin
email: [email protected]
ÉTUDE DE LA VARIABILITÉ MORPHOLOGIQUE (AIGUILLES, GALBULES) DU COMPLEXE SPÉCIFIQUE JUNIPERUS OXYCEDRUS L. (GENÉVRIER OXYCÈDRE) AU SEIN DE SEPT POPULATIONS D'ALGÉRIE
SUMMARY.-Morphological variability (needles, galbulus) among seven populations of the Juniperus oxycedrus L. species-complex in Algeria.-The comparative study of plant polymorphism through taxonomic characters has received little attention in Algeria, especially for species of the genus Juniperus L. The present work was carried out to investigate the existence of phenotypic variability among seven natural populations of Juniperus oxycedrus L. through a descriptive and comparative analysis of the needles, galbulus and stomata. The epidermal surfaces were also examined in order to better characterize this species. Twenty-three quantitative and qualitative morphological characters were studied. The observation and evaluation of macro- and micro-morphological characters were carried out under a binocular magnifier, an optical microscope and a scanning electron microscope. The analysis of variance showed that the seven studied populations present considerable morphological diversity at both intra- and inter-population levels for the needle, galbulus and stomatal variables, while the multivariate analyses separated the studied populations into distinct groups according to their phenotypic characters, highlighting correlations between the morphological variables and the environmental characteristics. In addition, micromorphological variations were observed on the epidermal surfaces, including variation in stomatal density, stomatal size, the occurrence of epicuticular wax and changes in the epidermal structure.
RÉSUMÉ.-L'étude comparative du polymorphisme des végétaux par le biais des caractères taxinomiques d'un végétal est insuffisamment prise en compte en Algérie, surtout pour les espèces du genre Juniperus L. Le présent travail a été réalisé afin de rechercher l'existence d'une variabilité phénotypique dans sept populations naturelles de Juniperus oxycedrus L. à l'aide d'une analyse descriptive et comparative des aiguilles, des galbules et des stomates. Les surfaces épidermiques ont été également examinées dans le but de mieux caractériser cette essence. Un total de vingt-trois caractères morphologiques, quantitatifs et qualitatifs, a été étudié. L'observation et l'évaluation des caractères macro-et micro-morphologiques ont été réalisées sous loupe binoculaire, au microscope optique et au microscope électronique à balayage. L'analyse de la variance montre que les sept populations étudiées présentent une diversité morphologique considérable aux niveaux intra-et inter-populationnels pour les variables aiguilles, galbules et stomates, tandis que l'analyse multivariée a permis de séparer l'ensemble des populations étudiées en groupes bien distincts selon leurs caractères phénotypiques en mettant en évidence des corrélations entre les variables morphologiques et les caractéristiques environnementales. Par ailleurs, des variations micro-morphologiques ont été observées pour les surfaces épidermiques, incluant une variation dans la densité stomatique, la taille des stomates, l'occurrence de cire épicuticulaire, et des modifications dans la structure de l'épiderme.
La famille des Cupressaceae comprend deux sous-familles, les Cupressoideae et les Callitroideae, se divisant chacune en trois tribus, qui sont essentiellement de l'hémisphère Nord [START_REF] Haluk | Caractérisation et origine des tropolones responsables de la durabilité naturelle des Cupressacées. Application potentielle en préservation du bois[END_REF]. Elle comporte environ trente genres [START_REF] Farjon | World checklist and bibliography of conifers[END_REF] ; les plus importants sont Cupressus L., Juniperus L. et Callitris Vent. [START_REF] Schulz | -Identification key to the Cypress family (Cupressaceae)[END_REF].
Le genre Juniperus L., de la tribu des Junipereae (Koch), sous-famille des Cupressoideae, comprend environ 75 espèces (Adams, 2014a). Il représente le genre le plus diversifié de la famille des Cupressaceae [START_REF] Debazac | Manuel des conifères[END_REF]. Il a la répartition la plus large par rapport aux autres genres de conifères, mais celle-ci est limitée à l'hémisphère Nord, excepté en Afrique où certaines espèces traversent l'équateur [START_REF] Mao | Diversification and biogeography of Juniperus (Cupressaceae): variable diversification rates and multiple intercontinental dispersals[END_REF][START_REF] Farjon | An atlas of the world's conifers: An analysis of their distribution, biogeography, diversity and conservation status[END_REF]. Les espèces de ce genre se développent dans des situations écologiques extrêmes et occupent une place non négligeable dans la végétation méditerranéenne. Elles constituent généralement des éléments pionniers jouant un rôle prépondérant dans la dynamique des groupements préforestiers [START_REF] Quézel | Écologie et biogéographie des forêts du bassin méditerranéen[END_REF]. Le genre Juniperus L. est bien représenté en Algérie [START_REF] Maire | Flore de l'Afrique du Nord[END_REF][START_REF] Quézel | Nouvelle flore de l'Algérie et des régions désertiques méridionales[END_REF]. On compte cinq espèces de ce genre, dont deux très rares (J. thurifera L. et J. sabina L.), une rare (J. communis L.) et deux autres, dans un état de dégradation intense, localisées dans les régions semi-arides et arides (J. oxycedrus L. et J. phoenicea L.). Un hybride entre ces deux dernières espèces, dénommé « cherkiya » en arabe local (qui signifie associé ou hybride), a été signalé par les populations locales des hauts-plateaux du centre ; il comporte les deux formes de feuillage (aiguilles et écailles) sur le même rameau des sujets adultes. Le Genévrier oxycèdre (J. oxycedrus) (Fig. 1), sujet de la présente recherche, est appelé Genévrier cade en français ou « Taga » en arabe dialectal. C'est une espèce très commune dans le sous-bois et les zones dégradées des régions semi-arides en Algérie. Selon [START_REF] Boudy | Guide du forestier en Afrique du Nord[END_REF], [START_REF] Maire | Flore de l'Afrique du Nord[END_REF] et [START_REF] Quézel | Nouvelle flore de l'Algérie et des régions désertiques méridionales[END_REF], il s'étend sur une superficie de 112 000 ha, depuis les dunes littorales jusqu'aux limites du grand Sahara, soit sous la forme d'un arbre de 10 m de hauteur avec un tronc de 1 m de diamètre, soit plus souvent sous la forme d'un arbuste buissonnant plus petit. En outre, ce taxon a un rôle écologique considérable du fait qu'il résiste à la sécheresse (Riou-Nivert, 2001), donc à la dégradation des sols et à la pression anthropique, surtout dans les régions les plus arides. Par ailleurs, il est souvent attaqué par Arceuthobium oxycedri (DC.) M. Bieb., une plante
hémiparasite rare de la famille des Santalacées et l'un des principaux parasites obligatoires de plusieurs espèces de la famille des Cupressacées, essentiellement celles du genre Juniperus [START_REF] Ciesla | Hosts and geographic distribution of Arceuthobium oxycedri[END_REF][START_REF] Akkol | Bioactivity guided evaluation of anti-inflammatory and antinociceptive activities of A rceuthobium oxycedri (D.C.) M. Bieb[END_REF][START_REF] Gajšek | -Infection patterns and hosts of Arceuthobium oxycedri (DC.) M. Bieb. in[END_REF].
Du point de vue systématique, cette espèce ne soulève pas, a priori, autant d'interrogations que ses congénères (Genévriers commun et nain, Genévrier thurifère, Genévrier de Phénicie) [START_REF] Lebreton | Le statut systématique du genévrier oxycèdre Juniperus oxycedrus L. Cupressacées : Une contribution d'ordre biochimique et biométrique[END_REF]. [START_REF] Farjon | A monograph of Cupressaceae and Sciadopitys[END_REF][START_REF] Farjon | A handbook of the world's conifers[END_REF] et [START_REF] Farjon | An atlas of the world's conifers: An analysis of their distribution, biogeography, diversity and conservation status[END_REF] ont subdivisé l'oxycèdre en quatre sousespèces différentes : 1/ subsp. macrocarpa (Sibth. & Sm.) Ball, 2/ subsp. badia (H.Gay) Debeaux, 3/ subsp. transtagana (Franco) et enfin 4/ subsp. oxycedrus. En parallèle, d'autres études [START_REF] Adams | -Systematics of Juniperus section Juniperus based on leaf essential oils and RAPD DNA fingerprinting[END_REF](Adams, , 2014a)), considèrent le taxon macrocarpa comme espèce distincte selon des critères morphologiques et moléculaires. Selon [START_REF] Maire | Flore de l'Afrique du Nord[END_REF] et [START_REF] Quézel | Nouvelle flore de l'Algérie et des régions désertiques méridionales[END_REF] [START_REF] Bensegueni-Tounsi | de l'effet antibactérien et antifongique de : Inula viscosa, Lawsonia inermis, Asphodelus microcarpus, Aloe vera, Juniperus oxycedrus[END_REF], la chimiotaxinomie [START_REF] Dob | -Essential oil composition of Juniperus oxycedrus growing in Algeria[END_REF], l'effet antibactérien et antifongique de certains organes végétatifs [START_REF] Foudil-Cherif | Enantiomeric and non-enantiomeric monoterpenes of Juniperus communis L. and Juniperus oxycedrus L. needles and berries determined by HSSPME and enantioselective GC/MS[END_REF], les mono-terpènes et les propriétés anti-oxydantes des aiguilles et des galbules [START_REF] Fadel | -Antioxidant properties of four Algerian medicinal and aromatic plants Juniperus oxycedrus L., Juniperus phoenicea L., Marrubium vulgare L. and Cedrus atlantica (Manetti ex Endl)[END_REF]. Toutefois, les recherches concernant la micromorphologie des aiguilles des genévriers, ayant trait aux stomates et aux surfaces épidermiques sont inexistantes, à part quelques études à diffusion restreinte, tels que les travaux de [START_REF] Tahanout | Contribution à l'étude de la variabilité inter-individuelle de la morphologie des aiguilles et de la densité stomatique de Juniperus oxycedrus subsp. rufescens à Tigounatine (Tikjda, Djurdjura)[END_REF] Cet écart qui augmente avec l'éloignement de la mer [START_REF] Le Houérou | An agro-bioclimatic classification of arid and semi-arid lands in the isoclimatic Mediterranean zones[END_REF][START_REF] Mokhtari | Spatialisation des bioclimats, de l'aridité et des étages de végétation du Maroc[END_REF], a une valeur biocritique pour la végétation ligneuse surtout en Algérie [START_REF] Cote | L'espace algérien, les prémices d'un aménagement[END_REF]. Ensuite, nous avons évalué la sécheresse globale de chaque site à travers le quotient pluviothermique d'Emberger modifié par [START_REF] Stewart | -Quotient pluviothermique et dégradation biosphérique[END_REF] qui est spécifique à la région méditerranéenne [START_REF] Daget | Le bioclimat méditerranéen, caractères généraux, méthodes de classification[END_REF].
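À titre purement illustratif (esquisse ne faisant pas partie de l'étude), le quotient pluviothermique d'Emberger dans la formulation de Stewart (1969), Q3 = 3,43 P/(M - m), et l'amplitude thermique utilisée ici comme indice de continentalité peuvent être calculés comme suit ; les valeurs de station ci-dessous sont fictives et la définition retenue pour Ic relève de notre interprétation de Rivas-Martínez (2005).

```python
# Esquisse illustrative : Q3 de Stewart (1969), soit 3,43 * P / (M - m), et un
# indice de continentalite thermique pris ici comme la difference entre les
# temperatures moyennes du mois le plus chaud et du mois le plus froid
# (interpretation de Rivas-Martinez). Donnees de station fictives.

def quotient_stewart(p_mm: float, m_max_c: float, m_min_c: float) -> float:
    """Q3 : P en mm/an, M et m en degres Celsius."""
    return 3.43 * p_mm / (m_max_c - m_min_c)

def indice_continentalite(t_mois_chaud_c: float, t_mois_froid_c: float) -> float:
    """Amplitude thermique annuelle moyenne, utilisee ici comme Ic."""
    return t_mois_chaud_c - t_mois_froid_c

# Station fictive : P = 450 mm/an, M = 33.0 C, m = 1.5 C.
print(round(quotient_stewart(450.0, 33.0, 1.5), 1))   # ~49.0
print(indice_continentalite(25.0, 7.0))                # 18.0
```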
TABLEAU I
Caractéristiques écologiques des stations d'échantillonnage
MORPHOMÉTRIE
Aiguilles, galbules et stomates
Pour étudier la diversité morphologique de cette espèce, une évaluation de vingt caractères morphologiques (douze quantitatifs et huit qualitatifs), inspirés de plusieurs études [START_REF] Lebreton | Le statut systématique du genévrier oxycèdre Juniperus oxycedrus L. Cupressacées : Une contribution d'ordre biochimique et biométrique[END_REF][START_REF] Juan | Relationships between mature cone traits and seed viability in Juniperus oxycedrus L. subsp. macrocarpa (Sm.) Ball (Cupressaceae)[END_REF][START_REF] Klimko | Morphological variation of Juniperus oxycedrus subsp. macrocarpa (Cupressaceae) in three Italian localities[END_REF][START_REF] Klimko | Morphological variation of Juniperus oxycedrus subsp. Oxycedrus (Cupressaceae) in the Mediterranean region. Flora-Morphol., Distrib[END_REF][START_REF] Brus | Absence of geographical structure of morphological variation in Juniperus oxycedrus L. subsp. oxycedrus in the Balkan Peninsula[END_REF] a été réalisée pour les aiguilles, les galbules et les stomates (Tab. II). Les caractères mesurés pour les aiguilles et les galbules ont été effectués à l'aide d'un pied à coulisse à l'oeil nu et/ou en utilisant un stéréoscope Optika (grossissement 40x). Concernant les stomates, ils ont été évalués par la méthode de l'impression épidermique sous microscope optique (grossissement 400x) relié à un ordinateur à l'aide d'un logiciel d'analyse d'images (Motic Images Plus 2.0). Pour faire le comptage des stomates (la densité stomatique), cinq champs d'observation, par empreinte ou aiguille, ont été examinés aléatoirement. Dans chaque champ observé, la taille (longueur et largeur) d'un stomate a été mesurée. Au total 1750 champs et 1750 stomates ont été examinés, chaque champ ayant une surface de 400 mm 2 . À l'issue de l'analyse en composantes principales (ACP) pour les 12 variables quantitatives (Fig. 8), le plan factoriel (1x2) formé par la première et la seconde composante totalise une inertie de 75 % ; 82,40 % et 72,53 % respectivement pour les aiguilles, les galbules et les stomates. Le cercle de corrélation pour les aiguilles (Fig. 8A1), les galbules (Fig. 8B1) et les stomates (Fig. 8C1) montre que toutes les 12 variables morphologiques étudiées sont parfaitement corrélées avec l'axe 1, excepté les deux variables « Rapport longueur / largeur du galbule (Rg) et longueur du stomate (Lns)» qui ont enregistré une très faible corrélation. Alors que pour les aiguilles, les variables « longueur de l'aiguille (Ln) avec sa nervure (Lnv) » et le rapport (R) sont corrélés avec l'indice de continentalité (Ic) lié à l'altitude (Alt), tandis que la largeur de l'aiguille (Lr) est corrélée avec la température minimale (m). Les variables des galbules « Longueur (Lng) ; Largeur (Lng) et Poids (Pg) », sont corrélées avec la température minimale (m) selon un gradient latitudinal et altitudinal décroissant. Puis, les variables stomatiques, la largeur du stomate (Lrs) et le rapport (Rs) sont reliés à la continentalité (Ic) et de même pour la densité stomatique.
TABLEAU II
Caractères morphologiques étudiés pour les aiguilles, les galbules, les stomates et les surfaces épidermiques
Par conséquent, l'interaction des variables morphologiques avec les variables environnementales, notamment climatiques, a permis de regrouper l'ensemble des 210 arbres des sept populations en trois groupes distincts pour les aiguilles (Fig. 8A2), les galbules (Fig. 8B2) et les stomates (Fig. 8C2) selon leurs bioclimats et qu'on peut visualiser sur les trois cartes factorielles. Le premier groupe est composé de la population (Me) du littoral et le deuxième englobe les trois populations du subhumide (Ch, Ba et Cz), tandis que le troisième renferme les stations les plus arides (B, G et M). Aussi, chacun des groupes 2 et 3 peut être subdivisé en deux sous-groupes selon le degré de continentalité de sorte que chacune des deux populations Ch et B forme un sous-groupe, surtout pour les galbules (Fig. 8B2) et les stomates (Fig. 8C2). Concernant la classification hiérarchique ascendante (CHA), les dendrogrammes obtenus par la méthode de Ward montrent que les sept populations se répartissent en quatre groupes principaux pour les aiguilles (Fig. 9A) et les galbules (Fig. 9B) et en cinq groupes pour les stomates (Fig. 9C). L'analyse factorielle des correspondances (AFC) pour l'ensemble des 210 arbres des sept populations étudiées et pour toutes les variables mesurées totalise sur le plan factoriel (1x2) une inertie de 72,68 % (Fig. 10A) et 87,43 % (Fig. 10C) pour les aiguilles et les stomates, respectivement. Mais pour les galbules, ce sont les deux axes factoriels (1x3) qui ont permis de discriminer les groupes, avec 69 % d'inertie totale (Fig. 10C). À l'issue de cette analyse, les trois diagrammes de dispersion (Fig. 10 Concernant les deux axes factoriels, le premier axe (1) paraît traduire un gradient lié à la continentalité thermique concernant les aiguilles, les galbules et stomates. L'orientation de cet axe oppose les populations qui s'individualisent en fonction des valeurs de Ic croissant, allant de Me (14,3 °C) à M (22,4°C) et avec l'action modératrice de l'altitude pour certaines stations (le coefficient de corrélation linéaire entre l'altitude et les valeurs de l'indice Ic est r = 0,84).
Le second axe semble expliquer l'incidence du stress thermique hivernal (6,7 >m> -1,5) qui est lié aussi à l'altitude pour les aiguilles et les galbules (Fig. 8A L'interaction des facteurs climatiques avec les variables phénotypiques a été rapportée par plusieurs auteurs [START_REF] Gatti | -Il riconoscimento di alcune provenienze di Pseudotsuga menziesii (Mirb.) Franco en base alle caratteristiche anatomiche degli aghi[END_REF][START_REF] Alyafi | New characters differentiating Pistacia atlantica subspecies[END_REF][START_REF] Maley | -Phenology variation in cone and needle characters of Pinus banksiana[END_REF]) pour de nombreuses espèces. En effet, les végétaux peuvent développer des stratégies adaptatives leur permettant de se maintenir dans leur habitat naturel. Elles se manifestent par des ajustements qui s'opèrent au niveau des feuilles, suivis par des modifications touchant l'ensemble de l'organisme [START_REF] Chaves | Understanding plant responses to drought from genes to the whole plant[END_REF][START_REF] Flexas | Keeping a positive carbon balance under adverse conditions: responses of photosynthesis and respiration to water stress[END_REF]. Selon [START_REF] Aussenac | Effets de conditions microclimatiques différentes sur la morphologie et la structure anatomique des aiguilles de quelques résineux[END_REF], des conditions climatiques différentes influencent la dimension, la forme et la structure des aiguilles et même le nombre et la dimension des stomates, chez certaines espèces de résineux. D'autres études ont également mis en évidence l'influence de l'altitude, des températures minima et l'aridité dans la variabilité morphologique chez de nombreuses provenances de Pistacia atlantica (Belhadj et al., 2008 ;[START_REF] Said | .) aux conditions d'altitude, de salinité et d'aridité : Approches morpho-anatomiques, phytochimiques et écophysiologiques[END_REF], chez l'Eucalyptus [START_REF] Franks | Plasticity in maximum stomatal conductance constrained by negative correlation between stomatal size and density: An analysis using Eucalyptus globulus[END_REF] et chez certaines espèces du genre Banksia [START_REF] Drake | Smaller, faster stomata: scaling of stomatal size, rate of response, and stomatal conductance[END_REF]. Concernant la couleur des feuilles, les pigments foliaires sont les molécules responsables de la couleur des végétaux [START_REF] Bousquet | Mesure et modélisation des propriétés optiques spectrales et directionnelles des feuilles[END_REF]. Ils varient en fonction des espèces et des conditions environnementales [START_REF] Asner | Spectral and chemical analysis of tropical forests: scaling from leaf to canopy levels[END_REF], même la structure interne de la feuille est influencée par le climat lumineux (photo-morphogénèse) [START_REF] Ballaré | Early detection of neighbour plants by phytochrome perception of spectral changes in reflected sunlight[END_REF][START_REF] Baranoski | Light interaction with plants. A computer graphics perspective[END_REF][START_REF] Bousquet | Mesure et modélisation des propriétés optiques spectrales et directionnelles des feuilles[END_REF]Smith et al., 2010 ). 
D'une part, la teneur des pigments chlorophylliens lors d'un stress environnemental (stress hydrique, stress salin et stress au froid en hiver et en été) diminue selon un gradient d'aridité croissant [START_REF] Kaya | -Gibberellic acid improves water deficit tolerance in maize plants[END_REF][START_REF] Huseynova | Structural-functional state of thylakoid membranes of weat genotypes under water stress[END_REF]Degl'Innocenti et al., 2008 ;[START_REF] Laala | Les variations thermiques saisonnières et leurs impacts sur le comportement écophysiologique des semis de pin d'Alep[END_REF]. D'autre part, certaines espèces contiennent de fortes concentrations en anthocyanes dans l'épiderme d'où une très faible absorption [START_REF] Maas | Reflectance, transmittance, and absorptance of light by normal, etiolated, and albino corn leaves[END_REF] Les analyses multivariées (Figs 8 & 10), surtout l'analyse factorielle des correspondances, révèlent que la station Cz se trouve chaque fois en position centrale, indice probable de l'origine hybridogène de ses populations constitutives. [START_REF] Maire | Flore de l'Afrique du Nord[END_REF] et [START_REF] Quézel | Nouvelle flore de l'Algérie et des régions désertiques méridionales[END_REF] ont adopté ce concept évolutif entre l'essence du littoral (subsp. macrocarpa) et celle des montagnes (subsp. oxycedrus). D'après ces deux auteurs, il existe dans les montagnes des formes à gros galbules, intermédiaires (de passage) entre ces deux taxons en tenant compte d'un spécimen des monts de Tlemcen localisé par [START_REF] Maire | Flore de l'Afrique du Nord[END_REF] et qu'il lui a été impossible de le séparer de la subsp. macrocarpa. En plus, la population Cz se trouve en situation semi-continentale (Tab. I) à 50 km à vol d'oiseau de la mer et à mi-distance entre les phytodistricts littoraux de la subsp. macrocarpa représentées par Me et les phytodistricts continentaux de la subsp. rufescens de la station Ch. Aussi, [START_REF] Lebreton | Le statut systématique du genévrier oxycèdre Juniperus oxycedrus L. Cupressacées : Une contribution d'ordre biochimique et biométrique[END_REF][START_REF] Lebreton | -Étude systématique du sous-genre Oxycedrus (section Oxycedroides) du genre Juniperus (Cupressaceae)[END_REF] ont signalé l'existence d'hybrides entre ces deux sous-espèces (biochimiquement hétérozygotes) en Méditerranée occidentale (îles des Baléares) et que la dioécie de l'oxycèdre semble participer à ces phénomènes d'hybridation et/ou d'introgression des diverses populations.
Comme les baies des espèces du genre Juniperus L. sont très riches en résine et en huiles essentielles [START_REF] Adams | -The leaf essential oils and chemotaxonomy of Juniperus sect. Juniperus[END_REF], elles fournissent pour les oiseaux biodisséminateurs (grives, merles, etc.) l'énergie nécessaire à la migration printanière [START_REF] Ryall | Some factors affecting foraging and habitat of Ring Ouzels Turdus torquatus wintering in the Atlas Mountain of Morocco[END_REF][START_REF] Rumeu | The key role of a Ring Ouzel Turdus torquatus wintering population in seed dispersal of the endangered endemic Juniperus cedrus in an insular environment[END_REF]. Par conséquent, des hybridations des sous-espèces peuvent très bien se produire dans ces zones de contact. Cependant, dans les plantes fruitières charnues, les frugivores jouent un rôle important dans l'efficacité de la dispersion des graines sur des microbioclimats disponibles [START_REF] Schupp | Quantity, quality and the effectiveness of seed dispersal by animals[END_REF]. Cette dispersion augmente le flux de gènes et affecte certainement la structure génétique intra et inter-populations [START_REF] Ouborg | Population genetics, molecular markers and study of dispersal in plants[END_REF]. Les oiseaux représentent le premier vecteur de la dispersion des graines pour de nombreuses espèces végétales, ce qui contribue à leur migration sur de longues distances [START_REF] Whittaker R | Island biogeography. Ecology, evolution and conservation[END_REF]. En outre, de nombreux travaux [START_REF] Herrera | Seasonal variation in the quality of fruits and diffuse coevolution between plants and avian dispersers[END_REF][START_REF] Debussche | La diversité morphologique des fruits charnus en Languedoc méditerranéen : Relations avec les caractéristiques biologiques et la distribution des plantes et avec les disséminateurs[END_REF][START_REF] Debussche | Fleshy fruit characters and the choices of bird and mammal seed dispersers in a Mediterranean region[END_REF][START_REF] Compton | Seed dispersal in an African fig tree: birds as high quantity low quality dispersers[END_REF]Treca & Tamba, 1997) ont montré que les caractéristiques physiques des fruits charnus (taille, coloration, nombre des graines, etc.) s'expliquent par les pressions sélectives des vertébrés disséminateurs, notamment les oiseaux. D'ailleurs, pour les espèces du genre Juniperus L., [START_REF] Grant | Gene flow and the homogeneity of species populations[END_REF] a signalé l'effet de la bonne dispersion des graines par ornithochorie sur la variabilité des populations de Juniperus en contraste avec celles de Cupressus. En Algérie, le travail de Milla et al. (2013) a démontré le comportement trophique des oiseaux sur la diversité des fruits charnus du Sahel algérois incluant le Juniperus phoenicea subsp. turbinata (Guss.) Arcang., selon la couleur, le volume et le nombre de graines.
COMPARAISON DES DONNÉES RECUEILLIES POUR L'ESPÈCE DANS LA LITTÉRATURE
Les résultats obtenus pour les aiguilles et les galbules de Juniperus oxycedrus L. dans la littérature, renseignés dans le tableau V, montrent que les valeurs obtenues dans la population de Messida (Me) sont proches de celles enregistrées pour la subsp. macrocarpa tandis que les valeurs obtenues pour le reste des population sont semblables à celles de la subsp. oxycedrus. Les travaux de [START_REF] Lebreton | Le statut systématique du genévrier oxycèdre Juniperus oxycedrus L. Cupressacées : Une contribution d'ordre biochimique et biométrique[END_REF][START_REF] Lebreton | -Étude systématique du sous-genre Oxycedrus (section Oxycedroides) du genre Juniperus (Cupressaceae)[END_REF] pour les deux sous-espèces, montrent que les galbules ont une largeur et un poids plus petits par rapport à nos résultats. Inversement, les résultats obtenus par [START_REF] Juan | Relationships between mature cone traits and seed viability in Juniperus oxycedrus L. subsp. macrocarpa (Sm.) Ball (Cupressaceae)[END_REF] pour quatre populations de la subsp. macrocarpa dans le sud-ouest du littoral espagnol, montrent des valeurs plus grandes que celles de notre population (Me). L'étude de [START_REF] Klimko | Morphological variation of Juniperus oxycedrus subsp. macrocarpa (Cupressaceae) in three Italian localities[END_REF], au niveau de trois populations en Italie, révèle chez macrocarpa des aiguilles plus grandes et des galbules de même dimension que les nôtres. [START_REF] Klimko | Morphological variation of Juniperus oxycedrus subsp. Oxycedrus (Cupressaceae) in the Mediterranean region. Flora-Morphol., Distrib[END_REF] ont réalisé une autre étude sur la variabilité morphologique de la sous-espèce oxycedrus dans treize provenances de la région
Figure 1.-Photographies montrant l'aspect général de Juniperus oxycedrus L. en Algérie. (A-C) subsp. oxycedrus dans la station de Chikh Zouaoui : A-allure générale ; B-galbules et C-aiguilles. (D-F) subsp. macrocarpa dans la station de Messida : D-allure générale ; E-galbules et F-aiguilles.
Figure 2.-Localisation des stations algériennes d'échantillonnage : Chikh Zouaoui (Cz) ; Gottya (G) ; Djebel Messaad (M) ; Boutaleb (B) ; Babor (Ba) ; Chélia (Ch) et Messida (Me).
Concernant l'échantillonnage, il a été effectué durant la campagne 2013-2014. Le nombre d'arbustes choisis aléatoirement est de trente pieds par population (un total de 420 arbres). Par la suite, trente aiguilles et trente galbules en état de maturité ont été prélevés autour de la couronne de chaque arbre échantillonné (6300 aiguilles et 6300 galbules). Les échantillons sont ensuite séchés à l'air libre et conservés au laboratoire jusqu'à leur utilisation. Concernant les stomates, cinq aiguilles par arbuste ont été prélevées sur dix arbustes (parmi ceux déjà échantillonnés pour la première étude) des sept populations (un total de 70 arbres et 350 aiguilles).
Table I (legend): M, mean of the maxima of the hottest month in °C; m, mean of the minima of the coldest month in °C; A, mean annual thermal amplitude in °C; Ic, thermal continentality index of Rivas-Martinez (2005) in °C; P, rainfall in mm/year; Q3, Emberger pluviothermal quotient modified by Stewart (1969) (Source: O.N.M., Office national de la météorologie, Algiers).
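For reference, Stewart's (1969) simplification of Emberger's quotient is commonly written as Q3 = 3.43 P / (M - m), with P the annual rainfall in mm and M, m in °C; this exact convention is an assumption of the present summary, since the formula itself is not reproduced in this excerpt. Larger Q3 values therefore correspond to wetter, less thermally contrasted bioclimates.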
Examination of the leaf epidermal surfaces under the scanning electron microscope (SEM) reveals morphological adaptive responses linked to abiotic pressure. The micrographs obtained for the lower (abaxial) surface show an epidermis devoid of stomata, whereas those of the upper (adaxial) surface show a well structured epidermis comprising stomatal bands, a median band of epidermal cells, and stomata arranged in lines of six to fifteen rows (Figs 3-7). In the samples from the coastal population (Me) (Figure 3), wider stomatal bands with a high stomatal density (Figure 3A-D) and deposits of cuticular waxes (Fig. 3B, C & E) were observed. For the other populations, different features were observed on the adaxial surface of treated (cleaned) and untreated leaves. Wax deposits occur as fine particles around the stomata (Figs 4A & B; 6B, D & F; 7D) or as a thin layer on the epidermis (Figs 4C-D; 6E-F); narrow stomatal bands, notably in population M, and stomata sunken below the level of the epidermis in populations B, Ba and Cz (Figs 5A-D; 6A-C, E-F & 7C) were also recorded. An increase in the inter-stomatal spacing (population Ba) (Fig. 6D) and in the size of the epidermal cells (populations Ba and Cz) (6C & E; 6D & F) was also observed, as well as visible depressions in the stomatal bands on either side of the median band (populations B & Cz) (Fig. 6A, E & F). In some cases, the stomata are absent and replaced by epidermal cells, grouped in cavities formed by depressions of the epidermis, or scattered at the edge of the stomatal band (population Cz; Fig. 6E & F).
Figure 3. Micrographs showing the adaxial surface of untreated leaves (A and B) and treated leaves (C-E) of Juniperus oxycedrus from the Messida station (Me) (sensu subsp. macrocarpa). A and C: detail showing wide stomatal bands (B.St) and the median band (BM). B, D and E: details of the shape of the stomata (St), their high density and the epicuticular wax deposits (Ci).
Figure 4. Micrographs showing the adaxial surface of an untreated leaf (A and B) and a treated leaf (C-E) of Juniperus oxycedrus (Chélia station, Ch). Wax deposits (Ci) as fine particles (A, B and E) around the stomata (A and B) and as a layer (C and D). St: stomata; B.St: stomatal bands.
Figure 5. Micrographs showing the adaxial surface of an untreated leaf (A and B) and a treated leaf (C-E) of Juniperus oxycedrus from the Djebel Messaad station (M), showing narrow stomatal bands (B.St) and stomata (St) lying in a plane below that of the epidermal cells (A-D). Detail of stomata and wax deposits (Ci) (E). BM: median band of epidermal cells.
Figure 6. Micrographs showing the adaxial surface of a treated leaf of Juniperus oxycedrus. A and B: Boutaleb station (B); C and D: Babor station (Ba); E and F: Chikh Zouaoui station (Cz). Stomata lying in a plane below that of the epidermis (A, B, C, E and F); wax deposits (DC) on the stomata as fine particles (B, D and F) and on the epidermis as a layer (E and F); increased inter-stomatal spacing (D) and epidermal cell size (CE) (D and F); visible depressions in the epidermis (A, E and F).
Figure 7. Micrographs showing the adaxial surface of an untreated leaf (A and B) and a treated leaf (C and D) of Juniperus oxycedrus from the Gottya station (G). Wax deposits (DC) on the stomatal bands as crystals covering the whole surface of the stomata (A and B) and as fine dust-like particles (D). Stomatal bands below the level of the epidermis (C).
Figure 8. Correlation circles of the quantitative morphological and environmental variables (A1, A2 and A3) with the scatter diagrams of the 210 trees (B1, B2 and B3) for the principal component analysis of the needles, galbules and stomata of Juniperus oxycedrus.
The factorial planes showed a clear spatial separation of the individuals, especially those of populations M and Cz. They made it possible to separate the whole set of trees, yielding the same groups as those found with the hierarchical clustering (CAH). The first two groups (1 and 2) are homogeneous, with the same composition (same populations) for the needles, galbules and stomata. The first group comprises the coastal population Me and the second the population Cz, on all three factorial planes. The third group is represented by the two populations Ba and Ch for the needles and stomata, and by the populations Ba, Ch, B and G for the galbules. The fourth group comprises three stations (B, G and M) for the needles, but reduces to population M for the galbules and to B for the stomata. The discriminant variables for the stomata (stomatal length (Lns), stomatal width (Lrs) and stomatal density (Ds)) allowed a fifth group, comprising the two populations G and M, to be distinguished.
Figure 9. Dendrograms of the ascending hierarchical clustering applied to the seven populations for the needles (A), galbules (B) and stomata (C).
Figure 10. Maps of the correspondence analysis (AFC) applied to the 210 trees for the needles (A), galbules (B) and stomata (C) of Juniperus oxycedrus.
Within Juniperus oxycedrus L., two subspecies have been described on morphological grounds: subsp. macrocarpa (Sibth. & Sm.) Ball and subsp. rufescens (Link) Asch. & Graebn. Lebreton et al. (1998) report that the latter taxon, named J. oxycedrus subsp. rufescens, is considered illegitimate by Flora Europaea and the Med-Checklist, since it is a synonym of J. oxycedrus sensu stricto. In 2005, Farjon described a new taxon in Algeria, the subspecies badia, the prickly juniper of the Haouaras, located by Gay and Bailly in 1888 about 20 km east of Médéa, which Maire (1952) had considered a form of the subspecies rufescens (Link) Asch. & Graebn. Later, [START_REF] Gaussen | Les Gymnospermes actuelles et fossiles, Fasc. X : les Cupressacées[END_REF] treated as distinct species of section II, Oxycedroides, of the subgenus Oxycedrus the three taxa J. rufescens (Link), J. macrocarpa (Sibth. & Sm.) and J. oxycedrus, together with J. brevifolia (Seub.) Antoine, subordinated to the prickly juniper as a variety (J. oxycedrus L. var. brevifolia Seub.). In Algeria, numerous studies have been carried out on the prickly juniper, mainly on the composition of its essential oils (Bensegueni
Table II. Morphological characters measured.
Needle: length in mm (Ln); width in mm (Lr); length/width ratio (R); length of the midrib in mm (Lnv); needle shape (F): 1, symmetrical; 2, asymmetrical (1, oriented to the right; 2, oriented to the left; 3, straight on one side only); shape of the needle base (Fb): 1, flattened; 2, rounded; 3, rounded-straight on a single face; shape of the needle apex (Fap): 1, acute; 2, broadened; 3, narrowed; needle colour (C): 1, very light green; 2, light green; 3, green; wax: 1, upper surface (Cip); 2, lower surface (Cif) (1, absence; 2, presence).
Galbule: length in mm (Lng); width in mm (Lrg); length/width ratio (Rg); weight in g (Pg); galbule shape (Fg): 1, rounded; 2, ovoid; 3, elongated; 4, cordiform; 5, falciform; shape of the whitish scar on the galbule (Fci): 1, triangle; 2, incomplete polygons; 3, complete polygons; 4, single line; 5, two intersecting lines; galbule colour (Cg): 1, dark brown; 2, dark red; 3, dark red-brown; 4, bright red-brown.
Stoma: length in µm (Lns); width in µm (Lrs); length/width ratio (Rs); stomatal density (Ds) (stomata/mm2).
Epidermis: occurrence of wax on the two needle surfaces (Ci); position of the stomata relative to the surface of the upper needle epidermis (S); orientation of the stomata relative to the midrib (Os).

EPIDERMAL SURFACE
For the study of the epidermal surfaces, needles were examined with a scanning electron microscope (FEI/Philips XL-30 Field Emission ESEM) (six needles per population). Three needles per population were treated for five hours in 90° alcohol under ultrasound, then air-dried at ambient temperature and humidity before observation under the SEM. Three other samples per population were observed without any treatment. The standard procedures for scanning electron microscopy were then applied: the samples were coated with a thin layer of gold and mounted on stubs. Samples prepared in this way can be observed at different magnifications, and digital photographs were taken at different scales. The arrangement of the stomata (their position relative to the surface of the upper needle epidermis (S) and their orientation relative to the midrib (Os)) and the occurrence of wax on the two surfaces of the needle (Ci) were studied (Tab. II).

STATISTICAL ANALYSIS
The normality of the data was checked with the Kolmogorov-Smirnov test. Elementary statistics were computed at the intra- and inter-population scales. One-way analysis of variance was used to compare the quantitative variables and the semi-quantitative variables expressed as percentages. A principal component analysis (PCA) was applied to the quantitative variables of the needles, galbules and stomata together with the environmental variables of Table I. An ascending hierarchical clustering (CAH) was then carried out to visualize the population groups for the needles, galbules and stomata. In addition, a correspondence analysis (AFC) was applied (Bouxin, 2016) in order to place the individuals (trees) and the variables (measured morphological characters) in the same plane and so to highlight the similarity, or lack thereof, among the studied populations. The data treated by the AFC form a homogeneous statistical file built by converting the quantitative variables into qualitative ones (a division into 4 classes) by the histogram method [START_REF] Cassin | Analyse des données et des panels des données[END_REF]. All of these tests were carried out with the STATISTICA 10 and R 3.2.4 programs; a minimal sketch of such a pipeline is given after the morphometry results below.

RESULTS
MORPHOMETRY: Needles, galbules and stomata
For the needles (Tab. III & IV), the length (Ln) averages 16.61 mm for the species; the Babor population (Ba) recorded the lowest value (14.87 mm), in contrast to Djebel Messaad (M), which recorded the highest (18.33 mm). For the width (Lr), the values per population range from 2.19 mm at Messida (Me) to 1.60 mm at Boutaleb (B), with a mean of 1.83 mm for the species. The length/width ratio (R) is lowest (7.06) at Me and highest (11.07) at B, with a mean of 9.39 over all populations. For the midrib length (Lnv), the mean is 14.11 mm; it is shortest (12.17 mm) at Me and longest (15.81 mm) at M. The symmetrical needle form is the most widespread (90.4%) in the species, and the same holds for each population: Me (94.7%), Cz (95.7%), M (93.2%), G (79.4%), B (82.1%), Ba (91.7%) and Ch (95.8%). For the shape of the needle base (Fb), the predominant form for the species is the rounded one (53.4%); it also predominates in populations M (70.3%), Ch (67.8%), Me (61.0%) and Cz (52.2%), whereas the other populations, G, B and Ba, show two main forms: rounded (43.2%, 36.3% and 42.9%, respectively) and rounded-straight on one side (41.0%, 45.4% and 39.3%, respectively). The apex shape (Fap) is mostly acute in the species (74.8%), and the acute form likewise dominates in all populations. For the colour, green shows the highest percentage (66.2%), followed by light green (22.9%) and very light green (10.9%) for the species; the same holds within each population, green predominating, with the Cz needles showing the maximum value (80.8%). Wax is present on both the abaxial and adaxial surfaces of all needles, as uniform shiny deposits on the leaf surface.

For the galbules (Tab. III & IV), the mean length (Lng) for the species is 10.92 mm; the largest value (15.18 mm) is recorded at Me and the smallest (9.63 mm) at M. For the width (Lrg), the widest galbules (14.55 mm) were collected at Me and the narrowest (9.69 mm) at Ba; the mean width for the species is 10.97 mm. The length/width ratio (Rg) lies between 0.96 at B and 1.05 at Me. For the weight (Pg), the galbules weigh 0.58 g on average, the Me population showing the highest values (1.28 g) while Cz and M show the lowest (0.41 g). For the galbule shape (Fg), two forms dominate in the species, rounded (46.2%) and ovoid (41.9%); these two forms are also the most frequent in all stations, and the cordiform type occurs in a non-negligible proportion (19.4%) at Me. The variable 'shape of the whitish scar (Fci)' is very heterogeneous; the five forms are found together only in the two populations Me and M. Nevertheless, the triangular form predominates in all populations, with a mean of 83.9% for the species; it is followed by the 'incomplete polygons' form in populations Me (2.9%), Cz (6.7%), M (3.0%) and Ch (4.8%), and by the 'complete polygons' form in G (33.9%), B (19%) and Ba (7.1%). As for the colour (Cg), dark red-brown (64.7%) and bright red-brown (14.9%) show the highest proportions for the species; the same holds for all populations except Gottya (G), where dark brown (41.9%) and dark red (28.6%) dominate.

For the stomata (Tab. III), the length (Lns) averages 52.43 µm for the species, with extreme values of 54.79 µm at M and 49.52 µm at Me. For the width (Lrs), the stomata are widest at M (33.43 µm) and Me (33.04 µm) and narrowest at G (31.44 µm), with a value of 32.65 µm recorded for the species. The ratio (Rs) shows that, on average, the stomata of Me (1.51) and Ba (1.56) are less elongated than those of the other populations. The stomatal density (Ds) averages 170.01 stomata/mm2 for the species; the highest values are observed at Me (239.68 stomata/mm2) and Cz (174.23 stomata/mm2), decreasing to 150.57 and 150.04 stomata/mm2 at B and M, respectively.

Table III. Values of the quantitative characters measured for the needles, galbules and stomata (mean ± SD; min-max) in the seven populations (Me, Cz, M, G, B, Ba, Ch) and for the species (significance levels: *, p < 0.05; **, p < 0.01; ***, p < 0.001; NS, not significant).
Table IV. Frequencies (%) of the qualitative characters of the needles and galbules.
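The statistical treatment described above (Kolmogorov-Smirnov test, one-way ANOVA, PCA, ascending hierarchical clustering, correspondence analysis on discretized variables) was performed with STATISTICA 10 and R 3.2.4; the following Python sketch only illustrates the general shape of such a pipeline on a hypothetical data table. The file name "needles.csv", the column names ("Ln", "Lr", "R", "Lnv", "population") and the choice of Ward linkage are placeholders and assumptions of this sketch, not the authors' actual data or scripts, and the correspondence analysis step is not shown.

    import pandas as pd
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.stats import kstest, f_oneway
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # hypothetical table: one row per tree, quantitative characters + population label
    df = pd.read_csv("needles.csv")            # placeholder file name
    chars = ["Ln", "Lr", "R", "Lnv"]           # needle characters, coded as in Table II

    # normality (Kolmogorov-Smirnov) of each standardized character
    for c in chars:
        print(c, kstest((df[c] - df[c].mean()) / df[c].std(), "norm").pvalue)

    # one-way ANOVA between populations for the needle length Ln
    groups = [g["Ln"].values for _, g in df.groupby("population")]
    print("ANOVA p-value:", f_oneway(*groups).pvalue)

    # principal component analysis on the standardized quantitative characters
    X = StandardScaler().fit_transform(df[chars])
    scores = PCA(n_components=2).fit_transform(X)
    print(scores[:5])                          # first individuals on the factorial plane

    # ascending hierarchical clustering on the population means (Ward linkage assumed)
    means = df.groupby("population")[chars].mean()
    dendrogram(linkage(means, method="ward"), labels=means.index.tolist())
    plt.show()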
(Figs 10A & B), such that the seven populations are arranged according to the decrease in the minimum temperatures of the coldest month (m). The stations Me, Cz and M, with minimum temperatures m > 0°C, form a first group, whereas the other, colder stations (Ch, Ba, B and G; m < 0°C) form a second group. For the stomata (Fig. 8C), axis 2 likewise expresses a gradient of increasing aridity as a function of the bioclimate, from the temperate subhumid (station Me) to the semi-arid (station G).

DISCUSSION
In this section we explain the morphological variability of the Juniperus oxycedrus L. species complex by the interplay between abiotic selective pressures (environmental factors) and biotic pressures exerted by seed-dispersing vectors. For comparison purposes, we compiled a summary table of our results against the literature for Juniperus oxycedrus (Tab. V), including the two subspecies previously described by [START_REF] Maire | Flore de l'Afrique du Nord[END_REF] in Algeria, subsp. macrocarpa (recorded on the coast) and subsp. rufescens (syn. = oxycedrus) (from the high mountains). We computed means for each measured variable over the populations Cz, M, G, B, Ba and Ch (sensu subsp. rufescens) and took the population Me as the reference representing subsp. macrocarpa.

ABIOTIC PRESSURES (ENVIRONMENTAL FACTORS)
Regarding the effect of climate, there is indeed a gradient of thermal continentality that can be seen for the seven studied populations on the AFC factorial planes (Fig. 8). According to Rivas-Martinez (2005b), the thermal range (Ic = Tmax - Tmin) is a factor of prime importance for the distribution of vegetation and, consequently, for the boundaries of many bioclimates. Thus, along an increasing continental gradient, the coastal population of Messida, representing subsp. macrocarpa [START_REF] Maire | Flore de l'Afrique du Nord[END_REF][START_REF] Quézel | Nouvelle flore de l'Algérie et des régions désertiques méridionales[END_REF], stands apart through its markedly eu-oceanic bioclimate; it is followed by the station Cz. The latter occupies a position adjacent to that of Me (markedly semi-continental), and the values recorded for its morphological characters appear close to those of subsp. macrocarpa (Fig. 8). The other stations follow, ranging from the attenuated semi-continental type (Ch and B) to the accentuated type (Ba, G and M). Similarly, winter thermal stress seems to influence the grouping of the seven populations for the needles (Fig. 8A) and the galbules (Fig. 9A), given the agreement between the stations Me, Cz and M, located in regions with winters where m > 0°C, and those (Ch, Ba, B and G) located in areas with winters where m < 0°C. According to Rivas-Martinez (2005b), winter thermal stress is a limiting factor for many plants and plant communities. Xericity also plays a non-negligible role, especially for the quantitative characters (Fig. 8), since the populations cluster together, probably because they occur within the same bioclimatic belt, ranging from the subhumid (Me, Cz, Ba and Ch) to the semi-arid (B, G and M) bioclimate.
Table V. Comparison of some of the results obtained for the needles and galbules of Juniperus oxycedrus L. with the literature.

Source | Needle: length (mm) | Needle: width (mm) | Galbule: length (mm) | Galbule: width (mm) | Galbule: weight (g) | Galbule: shape | Galbule: colour
subsp. macrocarpa:
Lebreton et al. (1991) (France) | - | - | - | 12.9 | 0.65 | - | -
Lebreton et al. (1998) (Greece) | - | - | - | 12.8 | 0.86 | - | -
Juan et al. (2003) (Spain) | - | - | - | 15.23 (-16.37) | 1.36 (-1.94) | - | -
Klimko et al. (2004) (Italy) | 11.65 | 2.02 | 15.52 | 14.52 | - | - | -
Farjon (2005) | - | - | - | (12-)15-23 | - | Ovoid-globular or globular | Brown, brownish-violet or reddish-violet
Adams (2014a) | 12-20 | 2-2.5 | - | 12-18 | - | Globular | Violet-red
subsp. rufescens (= oxycedrus):
Amaral Franco (1986) (Spain) | 8-15 | 1-1.5 | - | 8-10 | - | - | -
Lebreton et al. (1991) (France) | - | - | - | 8.8 | 0.285 | - | -
Lebreton et al. (1998) (Turkey) | - | - | - | 7.9 | 0.195 | - | -
Lebreton et al. (1998) (Cyprus) | - | - | - | - | 0.165 | - | -
Adams et al. (2005) (western Mediterranean: Morocco, Portugal, Spain and France) | 14.4 (-15.0) | 1.3 (-1.27) | - | - | - | - | -
Farjon (2005) | 6-25 | 1.1-2 | - | 6-13 | - | - | Orange, reddish brown, more or less glaucous
Klimko et al. (2007) (Ukraine, Greece, Bosnia, Croatia, Spain, France and Morocco) | 13.84 | 1.55 | 9.05 | 9.01 | - | - | -
var. spilinanus (Turkey): Yaltirik et al. (2007) | 6-10 | 1-1.5 | - | - | - | Globular | Reddish brown
var. spilinanus (Turkey): Avci et al. (2008) | 12 | - | - | 8.6 | - | - | Brown
Brus et al. (2011) (Balkan Peninsula) | - | 2-3 | 8.32 | 8.98 | - | - | -
Adams (2014a) | 15-20 | - | - | 6-12 | - | Globular | Brownish red, reddish purple, glaucous
Our results:
Population Me (sensu subsp. macrocarpa) | 15.26 | 2.19 | 15.18 | 14.55 | 1.28 | Rounded, ovoid or cordiform | -
Remaining populations Cz, G, B, M, Ba and Ch (sensu subsp. rufescens) | 16.84 | 1.77 | 10.21 | 10.37 | 0.47 | Rounded or ovoid | Dark red-brown
Species (all populations) | 16.61 | 1.83 | 10.92 | 10.97 | 0.58 | - | -
In our case the foliage, particularly that of the semi-arid populations (G, B and M), where a water deficit and extremely irregular rainfall are recorded, is a bright green. This could be explained by the presence of epicuticular wax, which helps protect the chlorophyll pigments from UV radiation at high altitude. According to [START_REF] Lodé | Cours de génétique des populations[END_REF], individuals of the same species can show large morphological differences linked to sexual dimorphism or to a response to environmental pressure. Whatever their variations, the traits of an individual constitute the phenotype, which results from the interaction between its genotype and the conditions of its environment. BIOTIC PRESSURES (SEED DISPERSAL BY BIRDS, ORNITHOCHORY) | 57,204 | [
"171561",
"18890"
] | [
"431566",
"431566",
"508832",
"188653",
"198056",
"180109",
"188653"
] |
01771730 | en | [ "phys" ] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01771730/file/RSTA20170312p.pdf | Carlo Rovelli
email: [email protected]
'Space is blue and birds fly through it'
Keywords: quantum physics, relational quantum mechanics, interpretations of quantum mechanics, quantum state, relations, wave function
Quantum mechanics is not about 'quantum states': it is about values of physical variables. I give a short fresh presentation and update on the relational perspective on the theory, and a comment on its philosophical implications.
A misleading notion: quantum state
In his celebrated 1926 paper [START_REF] Schrödinger | Quantisierung als Eigenwertproblem (Erste Mitteilung)[END_REF], Erwin Schrödinger introduced the wave function ψ and computed the Hydrogen spectrum from first principles. But the theory we call 'quantum mechanics' (QM) was born 1 year earlier, in the work of Werner Heisenberg [START_REF] Heisenberg | Uber quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen[END_REF], and had already evolved into its current full set of equations in a spectacular series of articles by Born et al. [START_REF] Born | Zur Quantenmechanik[END_REF][START_REF] Born | Zur Quantenmechanik II[END_REF]. Dirac, following Heisenberg's breakthrough, got to the same structure independently, in 1925, the year before Schrödinger's work, in a work titled 'The fundamental equations of quantum mechanics' [START_REF] Dirac | The fundamental equations of quantum mechanics[END_REF]. (See [START_REF] Fedak | The 1925 Born and Jordan paper 'On quantum mechanics[END_REF][START_REF] Van Der Waerden | Sources of quantum mechanics[END_REF] for a historical account.) Even the Hydrogen spectrum had been computed by Pauli in [START_REF] Pauli | Über das Wasserstoffspektrum vom Standpunkt der neuen Quantenmechanik [On the hydrogen spectrum from the standpoint of the new quantum mechanics[END_REF], using the language of Heisenberg, Born and Jordan, based on the algebra of non-commuting variables. Schrödinger's work added a technical step and a conceptual step. The technical step was to translate the language of the theory, unfamiliar at the time, into a familiar one: differential equations. This brought ethereal quantum theory down to the level of the average theoretical physicist.
The conceptual step was to introduce the notion of 'wave function' ψ, soon to be evolved into the notion of 'quantum state' ψ, endowing it with heavy ontological weight. This conceptual step was wrong, and dramatically misleading. We are still paying the price for the confusion it has generated.
The confusion got into full shape in the influential second paper of the series [START_REF] Schrödinger | Quantisierung als Eigenwertproblem (Zweite Mitteilung)[END_REF], where Schrödinger stressed the analogy with optics: the trajectory of a particle is like the trajectory of a light ray: an approximation for the behaviour of an underlying physical wave in physical space. That is: ψ is the 'actual stuff', like the electromagnetic field is the 'actual stuff' underlying the nature of light rays.
Notice that this step is entirely 'interpretational'. It does not add anything to the predictive power of the theory, because this was already fully in place in the previous work of Heisenberg, Born and Jordan, where the 'quantum state' does not play such a heavy role. Schrödinger's conceptual step provided only a (misleading) way of reconceptualizing the theory.
The idea that the quantum state ψ represents the 'actual stuff' described by QM has pervaded later thinking about the theory. This is largely due to the toxic habit of introducing students to quantum theory beginning with Schrödinger's 'wave mechanics': thus betraying at the same time history, logic and reasonableness.
The founders of QM saw immediately the mistakes in this step. Heisenberg was vocal in pointing them out [START_REF] Kumar | Quantum: Einstein, Bohr and the great debate about the nature of reality[END_REF]. First, Schrödinger's basis for giving ontological weight to ψ was the claim that quantum theory is a theory of waves in physical space. But this is wrong: already the quantum state of two particles cannot be expressed as a collection of functions on physical space. Second, the wave formulation misses the key feature of atomic theory: energy discreteness, which must be recovered by additional ad hoc assumptions, because there is no reason for a physical wave to have energy related to frequency. Third, and most importantly, if we treat the 'wave' as the real stuff, we fall immediately into the horrendous 'measurement' problem. In its most vivid form (due to Einstein): how can a 'wave', spread over a large region of space, suddenly concentrate on a single place where the quantum particle manifests itself? All these obvious difficulties, which render the ontologicization of ψ absurd, were rapidly pointed out by Heisenberg. But Heisenberg lost the political battle against Schrödinger, for a number of reasons. First, all this was about 'interpretation' and for many physicists this was not so interesting after all, once the equations of QM began producing wonders. Second, differential equations are easier to work with, and sort of visualize, than non-commutative algebras. Third, Dirac himself, who did a lot directly with non-commutative algebras, found it easier to make the calculus concrete by giving it a linear representation on Hilbert spaces, and von Neumann followed: on the one hand, his robust mathematical formulation of the theory brilliantly focused on the proper relevant notion: the non-commutative observable algebra, on the other, the weight given to the Hilbert space could be taken by some as an indirect confirmation of the ontological weight of the quantum states. Fourth, and most importantly, Bohr, the recognized fatherly figure of the community, tried to mediate between his two brilliant bickering children, by obscurely agitating hands about a shamanic 'wave/particle duality'. To be fair, Schrödinger himself realized soon the problems with his early interpretation, and became one of the most insightful contributors to the long debate on the interpretation; but the misleading idea of taking the 'quantum state' as a faithful description of reality stuck.
If we want to get any clarity about QM what we need is to undo the conceptual confusion raised by Schrödinger's introduction of the quantum state ψ.
The abstract of the breakthrough paper by Heisenberg reads: 'The aim of this work is to set the basis for a theory of QM based exclusively on relations between quantities that are in principle observable.' Only relations between variables, not new entities. The philosophy is not to inflate ontology: it is to rarefy it.
Felix Bloch reports an enlightening conversation with Heisenberg [START_REF] Bloch | Heisenberg and the early days of quantum mechanics[END_REF]: 'We were on a walk and somehow began to talk about space. I had just read Weyl's book Space, Time and Matter, and under its influence was proud to declare that space was simply the field of linear operations. 'Nonsense,' said Heisenberg, 'space is blue and birds fly through it.' This may sound naive, but I knew him well enough by that time to fully understand the rebuke. What he meant was that it was dangerous for a physicist to describe Nature in terms of idealized abstractions too far removed from the evidence of actual observation. In fact, it was just by avoiding this danger in the previous description of atomic phenomena that he was able to arrive at his great creation of QM. In celebrating the 15th anniversary of this achievement, we are vastly indebted to the men who brought it about: not only for having provided us with a most powerful tool but also, and even more significant, for a deeper insight into our conception of reality.'
What is thus this 'deeper insight into our conception of reality' that allowed Heisenberg to find the equations of QM, and that has no major use of the quantum state ψ?
Quantum theory as a theory of physical variables
Classical mechanics describes the world in terms of physical variables. Variables take values, and these values describe the events of nature. Physical systems are characterized by sets of variables and interact. In the interaction, systems affect one another in a manner that depends on the value taken by their variables. Given knowledge of some of these values, we can, to some extent, predict more of them.
The same does QM. It describes the world in terms of physical variables. Variables take values, and these values describe the events of nature. Physical systems are characterized by sets of variables and interact, affecting one another in a manner that depends on the value taken by their variables. Given knowledge of some values, we can, to some extent, predict more of them.
The basic structure of the two theories is therefore the same. The differences between classical and QM are three, interdependent:
(a) There is fundamental discreteness in nature, because of which many physical variables can take only certain specific values and not others.
(b) Predictions can be made only probabilistically, in general.
(c) The values that a variable of a physical system takes are such only relative to another physical system. Values taken relative to distinct physical systems do not need to precisely fit together coherently, in general.
I discuss with more precision these three key aspects of quantum theory, from which all the rest follows, below. The first, discreteness, is the most important characteristic: it gives the theory its name. It is curiously disregarded in many, if not most, philosophers' discussions on quantum theory. The third is the one with heavy philosophical implications, which I shall briefly touch upon below. This account of the theory is the interpretative framework called 'Relational QM'. It was introduced in 1996 in [START_REF] Rovelli | Relational quantum mechanics[END_REF] (see also [START_REF] Laudisa | Stanford Encyclopedia of Philosophy[END_REF][START_REF] Laudisa | The EPR argument in a relational interpretation of quantum mechanics[END_REF][START_REF] Smerlak | Relational EPR[END_REF][START_REF] Rovelli | An argument against the realistic interpretation of the wave function[END_REF]). In the philosophical literature it has been extensively discussed by Bas van Fraassen [START_REF] Van Fraassen | Rovelli's world[END_REF] from a marked empiricist perspective, by Michel Bitbol [START_REF] Bitbol | Physical relations or functional relations? A non-metaphysical construal of Rovelli's relational quantum mechanics[END_REF][START_REF] Bitbol | De l'intérieur du monde[END_REF] who has given a neo-Kantian version of the interpretation, by Mauro Dorato [START_REF] Dorato | Rovelli' s relational quantum mechanics, monism and quantum becoming[END_REF] who has defended it against a number of potential objections and discussed its philosophical implication on monism, and recently by Laura Candiotto [START_REF] Candiotto | The reality of relations[END_REF] who has given it an intriguing reading in terms of (Ontic) Structural Realism. Metaphysical and epistemological implications of relational QM have also been discussed by Matthew Brown [START_REF] Brown | Relational quantum mechanics and the determinacy problem[END_REF] and Daniel Wolf (né Wood) [START_REF] Wood | Everything is relative: Has Rovelli found the way out of the woods?[END_REF].
(a) Discreteness
I find it extraordinary that so many philosophical discussions ignore the main feature of quantum theory: discreteness.
Discreteness is not an accessory consequence of quantum theory, it is its core. Quantum theory is characterized by a physical constant: the Planck constant h = 2πħ. This constant sets the scale of the discreteness of quantum theory, and thus determines how bad is the approximation provided by classical mechanics. Several 'interpretations' of quantum theory seem to forget the existence of the Planck constant and offer no account of its physical meaning.
Here is a more detailed account of discreteness: A physical system is characterized by an ensemble of variables. The space of the possible values of these variables is the phase space of the system. For a system with a single degree of freedom, the phase space is two dimensional. Classical physics assumes that the variables characterizing a system always have a precise value, determining a point in phase space. Concretely, we never determine a point in phase space with infinite precision (this would be meaningless); rather, we say that the system 'is in a finite region R of phase space', implying that determining the value of the variables will yield values in R. Classical mechanics assumes that the region R can be arbitrarily small. Now, the volume Vol(R) of a region R of phase space has dimensions Length² × Mass/Time, for each degree of freedom. This combination of dimensions, Length² × Mass/Time, is called 'action' and is the dimension of the Planck constant. Therefore what the Planck constant fixes is the size of a (tiny) region in the space of the possible values that the variables of any system can take. Now: the major physical characterization of quantum theory is that the volume of the region R where the system happens to be cannot be smaller than 2πħ:
Vol(R) ≥ 2πħ, (2.1)
per each degree of freedom. This is the most general and most important physical fact at the core of quantum theory. This implies that the number of possible values that any variable distinguishing points within the region R of phase space, and which can be determined without altering the fact that the system is in the region R itself, is at most
N ≤ Vol(R) / (2πħ), (2.2)
which is a finite number. That is, this variable can take discrete values only. If it were not so, the value of the variable could distinguish arbitrarily small regions of phase space, contradicting (2.1).
In particular: any variable separating finite regions of phase space is necessarily discrete. QM provides a precise way of coding the possible values that a physical quantity can take. Technically: variables of a system are represented by (self-adjoint) elements A of a C*-algebra A. The values that the variable a can take are the spectral values of the corresponding algebra element A ∈ A.
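A simple worked illustration of the bound (2.2), added here and not part of the original argument: for a harmonic oscillator of mass m and frequency ω, the region of phase space with energy below E is the ellipse p²/2m + mω²q²/2 ≤ E, whose Liouville volume is Vol(R) = 2πE/ω. The bound (2.2) then gives N ≤ E/(ħω), which is exactly the number of distinct values the energy can take below E: the familiar discrete spectrum with spacing ħω.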
(b) Probability
Mechanics predicts the values of some variables, given some information on the values that another set of variables has taken. In QM, the available information is coded as a (normalized positive linear) functional ρ over A. This is called a 'state'. The theory states that the statistical mean value of a variable A is ρ(A). Thus values of variables can be, in general, predicted only probabilistically.
In turn, the state ρ is computed from the values that variables have taken in previous interactions. The value of a quantity is sharp when the probability distribution is concentrated on it (ρ(A²) = (ρ(A))²). For a non-commutative quantum algebra, there are no states where all variables are sharp. Therefore, the values of the variables can never determine a point in phase space sharply. This is the core of quantum theory, which is therefore determined by the non-commutativity of the algebra. The non-commutativity between variables is Heisenberg's breakthrough, understood and formalized by Born and Jordan, who were the first to write the celebrated relation [q, p] = iħ and to recognize this non-commutativity as the key of the new theory, in 1925. The Planck constant ħ is the dimensional constant on the right-hand side of this relation.
The non-commutativity of the algebra of the variables (measured by ħ) is the mathematical expression of the physical fact that variables cannot be simultaneously sharp; hence there is a minimal (ħ-sized) volume attainable in phase space, hence predictions are probabilistic.
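As a concrete finite-dimensional illustration of these statements (an added sketch, not taken from the original text), one can take the algebra generated by the Pauli matrices and check numerically, with numpy, that mean values are given by ρ(A) = Tr(ρA), that no state makes two non-commuting variables simultaneously sharp, and that the commutator [σx, σy] = 2iσz does not vanish:

    import numpy as np

    # Pauli matrices: a minimal non-commutative algebra of 'variables'
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def mean(rho, A):
        # statistical mean value rho(A) = Tr(rho A)
        return np.trace(rho @ A).real

    def is_sharp(rho, A, tol=1e-12):
        # a variable is sharp when rho(A^2) = (rho(A))^2
        return abs(mean(rho, A @ A) - mean(rho, A) ** 2) < tol

    # a state in which sigma_z is sharp (the 'up' eigenstate)
    rho_up = np.array([[1, 0], [0, 0]], dtype=complex)

    print(mean(rho_up, sz))        # 1.0  -> sigma_z is sharp
    print(is_sharp(rho_up, sz))    # True
    print(mean(rho_up, sx))        # 0.0  -> sigma_x is maximally spread
    print(is_sharp(rho_up, sx))    # False: no state makes sx and sz both sharp

    # the non-commutativity at the root of this: [sx, sy] = 2i sz
    print(np.allclose(sx @ sy - sy @ sx, 2j * sz))   # True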
The fact that values of variables can be predicted only probabilistically raises the key interpretational question of QM: when and how is a probabilistic prediction resolved into an actual value?
(c) The relational aspect of quantum theory
When and how is a probabilistic prediction about the value of a variable a of a physical system S resolved into an actual value?
The answer is: when S interacts with another physical system S'. Value actualization happens at interactions because variables represent the ways systems affect one another. Any interaction counts, irrespectively of size, number of degrees of freedom, presence of records, consciousness, degree of classicality of S', decoherence, or else, because none of these pertain to elementary physics.
In the course of the interaction, the system S affects the system S'. If the effect of the interaction on S' depends on the variable a of S, then the probabilistic spread of a is resolved into an actual value, or, more generally, into an interval I of values in its spectrum. Now we come to the crucial point. The actualization of the value of a is such only relative to the system S'. The corresponding state ρ determined by the actualization is therefore a state relative to S', in the sense that it predicts only the probability distribution of variables of S in subsequent interactions with S'. It has no bearing on subsequent interactions with other physical systems. This is the profoundly novel relational aspect of QM. Why are we forced to this extreme conclusion? The argument, detailed in [START_REF] Rovelli | Relational quantum mechanics[END_REF], can be summarized as follows.
We must assume that variables do take value, because the description of the world we employ is in terms of values of variables. However, the predictions of QM are incompatible with all variables having simultaneously a determined value. A number of mathematical results, such as the Kochen-Specker [24] theorem, confirm that if all variables could have a value simultaneously, the predictions of QM would be violated. Therefore, something must determine when a variable has a value.
The textbook answer is 'when we measure it'. This obviously makes no sense, because the grammar of Nature certainly does not care whether you or I are 'measuring' anything. Measurement is an interaction like any other. Variables take value at any interaction.
However (this is the key point), if a system S interacts with a system S', QM predicts that, in a later interaction with a further system S'', a variable b of the S ∪ S' system is not determined by ρ'. Rather, it is determined by the joint dynamical evolution of the S ∪ S' quantum system. In physicists' parlance: quantum theory predicts interference effects between the different branches corresponding to different values of the variable a, as if no actualization had happened.
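A minimal two-qubit sketch of this interference (added here, not part of the original text): take S and S' to be two qubits, let their interaction be a CNOT-type coupling that correlates S' with a = σz of S, and compare the joint variable σx ⊗ σx of S ∪ S' in the full quantum state with what an absolute ('for everybody') actualization of a would instead predict:

    import numpy as np

    plus = np.array([1, 1]) / np.sqrt(2)     # S prepared with sigma_z spread
    zero = np.array([1, 0])                  # S' as the 'pointer'
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    psi = CNOT @ np.kron(plus, zero)         # state of S u S' after the interaction
    rho_q = np.outer(psi, psi)               # no actualization relative to a third system

    # what an absolute actualization of a would give: a classical mixture of the branches
    rho_cl = 0.5 * np.outer([1, 0, 0, 0], [1, 0, 0, 0]) + 0.5 * np.outer([0, 0, 0, 1], [0, 0, 0, 1])

    X = np.array([[0, 1], [1, 0]], dtype=float)
    XX = np.kron(X, X)                       # a joint variable b of the S u S' system

    print(np.trace(rho_q @ XX))    # 1.0 : full interference between the two branches
    print(np.trace(rho_cl @ XX))   # 0.0 : no interference if a had taken an absolute value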
We have thus to combine the presence of these interference effects (which pushes us to say that a had no value) with the fact that the variable a does take a value. The answer of relational QM is that the variable a of the system S actualized in the interaction with S' takes value with respect to S', but not with respect to S''. This is the core idea underlying the 'relational' interpretation of QM.
Relationality is no surprise in physics. In classical mechanics, the velocity of an object has no meaning by itself: it is only defined with respect to another object. The colour of a quark in strong-interaction theory has no meaning by itself: only the relative colour of two quarks has meaning. In electromagnetism, the potential at a point has no meaning, unless another point is taken as reference; that is, only relative potentials have meanings. In general relativity, the location of something is only defined with respect to the gravitational field, or with respect to other physical entities; and so on. But quantum theory takes this ubiquitous relationalism, to a new level: the actual value of all physical quantities of any system is only meaningful in relation to another system. Value actualization is a relational notion like velocity.
What is the quantum state?
The above discussion shows that the quantum state ρ does not pertain solely to the system S. It pertains also to the system S', because it depends on variables' values, which pertain only to S'. The idea that states in QM are relative states, namely states of a physical system relative to a second physical system, is Everett's lasting contribution to the understanding of quantum theory [START_REF] Everett | Relative state formulation of quantum mechanics[END_REF].
A moment of reflection shows that the quantum states used in real laboratories where scientists use QM concretely is obviously always a relative state. Even a radical believer in a universal quantum state would concede that the ψ that physicists use in their laboratories to describe a quantum system is not the hypothetical universal wave function: it is the relative state, in the sense of Everett, that describes the properties of the system, relative to the apparata it is interacting with.
What precisely is the quantum state of S relative to S'? What is ψ (or ρ)? The discussion above clarifies this delicate point: it is a theoretical device we use for bookkeeping information about the values of variables of S actualized in interactions with S', values which can in principle be used for predicting other (for instance future, or past) values that variables may take in other interactions with S'.
Charging ψ with ontological weight is therefore like charging with ontological weight a distribution function of statistical physics, or the information I have about current political events: a mistake that generates mysterious 'collapses' anytime there is an interaction. More specifically, in the semiclassical approximation ψ ∼ e^{iS/ħ}, where S is a Hamilton-Jacobi function. This shows that the physical nature of ψ is the same as the physical nature of a Hamilton-Jacobi function. Nobody in their right mind would charge S with ontological weight, in the context of classical mechanics: S is a calculational device used to predict an outcome on the basis of an input. It 'jumps' at each update of the calculation.
QM is thus not a theory of the dynamics of a mysterious ψ entity, from which mysteriously the world of our experience emerges. It is a theory of the possible values that conventional physical variables take at interactions, and the transition probabilities that determine which values are likely to be realized, given that others are [START_REF] Groenewold | Objective and subjective aspects of statistics in quantum description[END_REF].
The fact that the quantum state is a bookkeeping device that cannot be charged with ontological weight is emphasized by the following observation [START_REF] Rovelli | An argument against the realistic interpretation of the wave function[END_REF]. Say I know that at time t a particle interacts with an x-oriented Stern-Gerlach device. Then I can predict that (if nothing else happens in between) the particle has probability 1/2 to be up (or down) spinning, when interacting with a z-oriented Stern-Gerlach device at time t'. Key point: this is true irrespectively of which between t and t' comes earlier. Quantum probabilistic predictions are the same both forth and back in time. So: what is the state of the particle in the time interval between t and t'? Answer: it depends only on what I know: if I know the past (respectively, future) value, I use the state to bookkeep the future (respectively, past) value. The state is a coding of the value of the x spin that allows me to predict the z spin, not something that the particle 'has'. We can be realist about the two values of the spin, not about the ψ in between, because ψ depends on a time orientation, while the relevant physics does not.
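The time symmetry of these predictions is easy to check explicitly (an added illustration, not part of the original text): the conditional probabilities |⟨z±|x+⟩|² and |⟨x+|z±⟩|² coincide and equal 1/2, so the same bookkeeping works whether the known value lies in the past or in the future:

    import numpy as np

    # spin-1/2 'up' eigenstates along x and along z
    x_up = np.array([1, 1]) / np.sqrt(2)
    z_up = np.array([1, 0])
    z_dn = np.array([0, 1])

    def p(a, b):
        # probability of value 'b' given value 'a'; note |<b|a>|^2 = |<a|b>|^2
        return abs(np.vdot(b, a)) ** 2

    print(p(x_up, z_up), p(x_up, z_dn))   # 0.5 0.5 : x known first, z predicted
    print(p(z_up, x_up), p(z_dn, x_up))   # 0.5 0.5 : z known first, x 'retrodicted'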
A coherent ontology for QM is thus sparser than the classical one, not heavier. A good name for the actualization of the value of a variable in an interaction is 'quantum event'. The proper ontology for QM is a sparse ontology of (relational) quantum events happening at interactions between physical systems.
(a) Information
Equation (2.1) can be expressed by saying that
(P1) The amount of information that can be extracted from a finite region of phase space is finite.
'Information' means here nothing else than 'number of possible distinct alternatives'. The step from ρ to ρ' determined by an actualization modifies the predictions of the theory. In particular, the value of a, previously spread, is then predicted to be sharper. This can be expressed in information-theoretical terms by saying that
(P2) An interaction allows new information about a system to be acquired.
There is an apparent tension between the two statements (P1) and (P2). If there is a finite amount of information, how can we keep gathering novel information? The tension is only apparent, because here 'information' quantifies the data relevant for predicting the value of variables. In the course of an interaction, part of the previously relevant information becomes irrelevant. In this way, information is acquired, but the total amount of information available remains finite. It is the combination of (P1) and (P2) that largely characterizes quantum theory (for the case of qubit-systems, see [START_REF] Hoehn | Toolbox for reconstructing quantum theory from rules on information acquisition[END_REF]). These two statements were proposed as the basic 'postulates' of QM in [START_REF] Rovelli | Relational quantum mechanics[END_REF]. The apparent contradiction between the two captures the counterintuitive character of QM, in the same sense in which the apparent contradiction between Einstein's two postulates for Special Relativity captures the counterintuitive character of relativistic space-time geometry. Very similar ideas were independently introduced by Zeilinger and Brukner [START_REF] Zeilinger | A foundational principle for quantum mechanics[END_REF][START_REF] Brukner | Information and fundamental elements of the structure of quantum theory[END_REF].
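A minimal illustration of how (P1) and (P2) coexist (an added sketch, not from the original text): for a single qubit the total relevant information is one bit; acquiring the value of σx makes the previously held value of σz irrelevant, so information is renewed without ever accumulating beyond one bit:

    import numpy as np

    # projectors for 'up along z' and 'up along x' of a single qubit
    up_z = np.array([[1, 0], [0, 0]], dtype=complex)
    up_x = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)

    def prob(rho, P):
        return np.trace(rho @ P).real

    def update(rho, P):
        # state update after the value associated with projector P is actualized
        new = P @ rho @ P
        return new / np.trace(new)

    rho = up_z                       # one bit of information: sigma_z = +1
    print(prob(rho, up_z))           # 1.0 -> the z outcome is predictable (old bit)

    rho = update(rho, up_x)          # an x-interaction actualizes sigma_x = +1
    print(prob(rho, up_x))           # 1.0 -> a new bit has been acquired
    print(prob(rho, up_z))           # 0.5 -> the old bit is now irrelevant: still one bit in total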
An attempt to reconstruct the full formalism of quantum theory starting from these two information-theoretic postulates was initiated in [START_REF] Rovelli | Relational quantum mechanics[END_REF] (see also [START_REF] Grinbaum | Information-theoretic princple entails orthomodularity of a lattice[END_REF]). Recently, a remarkable reconstruction theorem along these lines has been successfully completed for the case of finite-dimensional systems in [START_REF] Hoehn | Toolbox for reconstructing quantum theory from rules on information acquisition[END_REF][START_REF] Höhn | Quantum theory from questions[END_REF], shedding considerable new light on the structure of the theory and its physical roots.
The role of information at the basis of quantum theory is a controversial topic. The term 'information' is ambiguous, with a wide spectrum of meanings ranging from epistemic states of conscious observers all the way to simply counting alternatives, à la Shannon. As pointed out, for instance, by Dorato, even in its weakest sense information cannot be taken as a primary notion from which all others can be derived, because it is always information about something. Nevertheless, information can be a powerful organizational principle in the sense of Einstein's distinction between 'principle theories' (like thermodynamics) versus 'constructive theories' (like electromagnetism) [START_REF] Spekkens | The invasion of Physics by Information Theory, talk at Perimeter Institute[END_REF]. The role of the general theory of mechanics is not to list the ingredients of the world: this is done by the individual mechanical theories, like the Standard Model of particle physics, general relativity, or the theory of the harmonic oscillator. The role of the general theory of mechanics (like classical mechanics or QM) is to provide a general framework within which specific constructive theories are realized. From this perspective, the notion of information as a number of possible alternatives may play a very useful role.
It is in this sense that the two postulates can be understood. They are limitations on the structure of the values that variables can take. The list of relevant variables, which define a physical system, and their algebraic relations, are provided by specific quantum theories.
There are several objections that come naturally to mind when one first encounters relational QM, which seem to render it inconsistent. These have been long discussed and have all been convincingly answered; see in particular the detailed arguments in van Fraassen [START_REF] Van Fraassen | Rovelli's world[END_REF] and Dorato [START_REF] Dorato | Rovelli' s relational quantum mechanics, monism and quantum becoming[END_REF] and the original paper [START_REF] Rovelli | Relational quantum mechanics[END_REF]; I will not rediscuss them here. Relational QM is a consistent interpretation of quantum theory.
However, like all other consistent interpretations, it comes at a price.
Philosophical implications
(a) Every interpretation has a cost
Every interpretation of quantum theory comes with a 'cost'.
Examples from some interpretations popular nowadays are the following. The cost of the Many World interpretations is the hugely inflated ontology of a continuous family of equally existing 'worlds', of which we basically know nothing, and an awkward difficulty of rigorously recovering the actual values of the variables in terms of which we describe the world, from the pure-ψ picture taken as fundamental. The cost of the Physical Collapse interpretations, such as the Ghirardi-Weber-Rimini theory, is to be based on physics which is so far unobserved and that many physicists view as not very plausible. The cost of the Bohmian interpretations is to postulate the existence of stuff which is unobservable in principle and which, in the view of most physicists, violates too badly what we have learned about Nature in the last century. The cost of Quantum Informational interpretations (partially inspired by relational QM [START_REF] Fuchs | Quantum foundations in the light of quantum information[END_REF]) is to be tied to a basically idealistic stance where the holder of the information is treated as an unphysical entity, a priori differently from all other physical systems, which cannot be in superpositions. The so-called Copenhagen Interpretations, which are held by the majority of real physicists concretely working with QM, postulate the existence of a generally ill-explained 'classical world', whose interactions collapse quantum states. And so on. . .. Do not take these criticisms badly: they are not meant to dismiss these interpretations; they are simply the reasons commonly expressed for which each interpretation does not sound plausible to others: the point I am making is that there is no interpretation of QM that is not burdened by a heavy cost of some sort, which appears too heavy a price to pay to those who do not share the passion for that particular interpretation. Many discussions about quantum theory are just finger-pointing to one another's cost.
The evaluation of these costs depends on wider philosophical perspectives that we explicitly or implicitly hold. Attachment to full fledged strong realism leads away from Quantum Informational interpretations and towards Bohm or Many Worlds. Sympathy for empiricism or idealism leads in opposite directions, towards Copenhagen or Quantum Information. And so on; the picture could be fine-grained.
The beauty of the problem of the interpretation of QM is precisely the fact that the spectacular and unmatched empirical success of the theory forces us to give up at least some cherished philosophical assumption. Which one is convenient to give up is the open question.
The relational interpretation does not escape this dire situation. As seen from the reactions in the philosophical literature, relational QM is compatible with diverse philosophical perspectives. But not all. How strong is the philosophical 'cost' of relational QM?
Its main cost is a challenge to a strong version of realism, which is implied by its radical relational stance.
(b) Realism
'Realism' is a term used with different meanings. Its weak meaning is the assumption that there is a world outside our mind, which exists independently from our perceptions, beliefs or thoughts.
Relational QM is compatible with realism in this weak sense. 'Out there' there are plenty of physical systems interacting among themselves and about which we can get reliable knowledge by interacting with them; there are plenty of variables taking values, and so on. There is nothing similar to 'mind' required to make sense of the theory. What is meant by a variable taking value 'with respect to a system S ' is not that S is a conscious subject of perceptions-it is just the same as when we say that the velocity of the Earth is 40 km s⁻¹ 'with respect to the sun': no implication of the sun being a sentient being 'perceiving' the Earth. In this respect, quantum theory is no more and no less compatible with realism (or other metaphysics) than classical mechanics. I myself think that we, conscious critters, are physical systems like any other. Relational QM is anti-realist about the wave function, but is realist about quantum events, systems, interactions. . .. It maintains that 'space is blue and birds fly through it', and space and birds can be constituted by molecules, particles, fields or whatever. What it denies is the utility-even the coherence-of thinking that all this is made up by some underlying ψ entity.
But there is a stronger meaning of 'realism': to assume that it is in principle possible to list all the features of the world, all the values of all variables describing it at some fundamental level, at each moment of continuous time, as is the case in classical mechanics. This is not possible in relational QM. Interpretations of QM that adhere to strong realism, like Many Worlds, or Bohm, or other hidden variables theories, circumvent the Kochen-Specker theorem, which states that in general there is no consistent assignment of definite values to all variables, by restricting the set of elementary variables describing the world (to the quantum state itself, or to Bohmian trajectories, or else). Relational QM takes the Kochen-Specker theorem seriously: variables take value only at interactions. The stronger version of the realist credo is therefore in tension with QM, and this is at the core of relational QM. It is not even realized in the relatively weaker sense of considering a juxtaposition of all possible values relative to all possible systems. The reason is that the very fact that a quantity has value with respect to some system is itself relative to that system [START_REF] Rovelli | Relational quantum mechanics[END_REF].
This weak realism of relational QM is in fact quite common in physics laboratories. Most physicists would declare themselves 'realists', but not realists about ψ. As one of the two (very good) referees of this paper put it: 'In physicists' circles, Schrödinger's ψ is mostly regarded as a mere instrument'. Relational QM is a way to make this position coherent.
There are three specific challenges to strong realism that are implicit in relational QM. The first is its sparse ontology. The question of 'what happens between quantum events' is meaningless in the theory. The happening of the world is a very fine-grained but discrete swarming of quantum events, not the permanence of entities that have well-defined properties at each moment of a continuous time. This is the way the relational interpretation circumvents results like the Pusey-Barrett-Rudolph theorem [START_REF] Pusey | On the reality of the quantum state[END_REF]. Such theorems assume that at every moment of time all properties are well-defined. (For a review, see [START_REF] Leifer | Is the quantum state real? An extended review of ψ-ontology theorems[END_REF].) They essentially say that if there is a hidden variable theory, the hidden variables must contain at least the entire information which is in the quantum state. But the assumption is explicitly denied in relational QM: properties do not exist at all times: they are properties of events and the events happen at interactions.
In the same vein, in [START_REF] Laudisa | Open problems in relational quantum mechanics[END_REF] Laudisa criticizes relational QM because it does not provide a 'deeper justification' for the 'state reduction process'. This is like criticizing classical mechanics because it does not provide a 'deeper justification' for why a system follows its equations of motion. It is a stance based on a very strong realist (in the narrow sense specified above) philosophical assumption. In the history of physics much progress has happened by realizing that some naively realist expectation was ill-founded, and therefore by dropping these kinds of questions: How are the spheres governing the orbits of the planets arranged? What is the mechanical underpinning of the electric and magnetic fields? Into where is the universe expanding? To some extent, one can say that modern science itself was born in Newton's celebrated 'hypotheses non fingo', which is precisely the recognition that questions of this sort can be misleading. When everybody else was trying to find dynamical laws accounting for atoms, Heisenberg's breakthrough was to realize that the known laws were already good enough, but the ontology was sparser and the question of the actual continuous orbit of the electron was ill-posed. I think that we should not keep asking what amounts to this same question over and over again: trying to fill in the sparse ontology of Nature with our classical intuition about continuity. On this, see the enlightening discussion given by Dorato in [START_REF] Dorato | Dynamical versus structural explanations in scientific revolutions[END_REF].
The second element of relational QM that challenges a strong version of realism is that values taken with respect to different systems can be compared [START_REF] Rovelli | Relational quantum mechanics[END_REF] (hence there is no solipsism), but the comparison amounts to a physical interaction, and its sharpness is therefore limited by h. Therefore, we cannot escape from the limitation to partial views: there is no coherent global view available. Matthew Brown has discussed this point in [START_REF] Brown | Relational quantum mechanics and the determinacy problem[END_REF].
The third, emphasized by Dorato, is the related 'anti-monistic' stance implicit in relational QM. As the state of a system is a bookkeeping device of interactions with something else, it follows immediately that there is no meaning in 'the quantum state of the full universe'. There is no something else to the universe. Everett's relative states are the only quantum states we can meaningfully talk about. Every quantum state is an Everett's quantum state. A reason for rejecting relational QM, therefore, comes if we assume that the monistic idea of the 'state of the whole' must make sense and must be coherently given in principle. 3
This relational stance of relational QM requires a philosophical perspective where relations play a central role. This is why Candiotto [START_REF] Candiotto | The reality of relations[END_REF] suggests framing relational QM in the general context of Ontic Structural Realism. This is certainly an intriguing possibility. My sympathy for a natural philosophical home for relational QM is an anti-foundationalist perspective where we give up the notion of primary substance-carrying attributes, and recognize the mutual dependence of the concepts we use to describe the world. Other perspectives are possible, as we have seen in the strictly empiricist and neo-Kantian readings by van Fraassen and Bitbol, but strong realism in the strict sense of substance and attributes that are always uniquely determined is not.
(c) How to go ahead?
There are three developments that could move us forward.
The first is novel empirical information. Some interpretations of quantum theory lead to empirically distinguishable versions of the theory. Empirical corroboration of their predictions would change the picture; repeated failure to detect discrepancy from standard QM weakens their credibility. This is the way progress happens in experimental physics. So far, QM has been unquestionably winning for nearly a century, beyond all expectations.
The second is theoretical fertility. If, for instance, quantum gravity turned out to be more easily comprehensible in one framework than in another, then this framework would gain credibility. This is the way progress happens in theoretical physics.
My focus on relational QM, indeed, is also motivated by my work in quantum gravity [38,39]. In quantum gravity, where we do not have a background space-time in which to locate things, relational QM works very neatly because the quantum relationalism combines in a surprisingly natural manner with the relationalism of general relativity. Locality is what makes this work. Here is how [40]: the quantum mechanical notion of 'physical system' is identified with the general relativistic notion of 'space-time region'. The quantum mechanical notion of 'interaction' between systems is identified with the general relativistic notion of 'adjacency' between space-time regions. Locality assures that interaction requires (and defines) adjacency. Thus quantum states are associated to three-dimensional surfaces bounding space-time regions and quantum mechanical transition amplitudes are associated to 'processes' identified with the space-time regions themselves. In other words, variables actualize at three-dimensional boundaries, with respect to (arbitrary) space-time partitions. The theory can then be used locally, without necessarily assuming anything about the global aspects of the universe.
The third manner in which progress can happen is how it does in philosophy: ideas are debated, absorbed, prove powerful or weak, and slowly are retained or discarded. I am personally confident that this can happen for quantum theory.
The key to this, in my opinion, is to fully accept this interference between the progress of fundamental physics and some major philosophical issues, like the question of realism, the nature of entities and relations, and the question of idealism. Accepting the reciprocal interference means in particular to reverse the way general philosophical stances colour our preferences for interpretation. That is, rather than letting our philosophical orientation determine our reading of QM, be ready to let the discoveries of fundamental physics influence our philosophical orientations.
It certainly wouldn't be the first time that philosophy has been heavily affected by science. I believe that we should not try to understand the world rigidly in terms of our conceptual structure. Rather we should humbly allow our conceptual structure to be moulded by empirical discoveries. This, I think, is how knowledge develops best.
Relational QM is a radical attempt to directly cash out the initial breakthrough that originated the theory: the world is described by variables that can take values and obey the equations of classical mechanics, but products of these variables have a tiny non-commutativity that generically prevents sharp value assignment, leading to discreteness, probability and to the relational character of the value assignment.
The founders of the theory expressed this relational character in the 'observer-measurement' language. This language seems to require that special systems (the observer, the classical world, macroscopic objects. . .) escape the quantum limitations. But none of this, and in particular no 'subjective state of conscious observers', is needed in the interpretation of QM. As soon as we relinquish this exception, and realize that any physical system can play the role of a Copenhagen's 'observer', we fall into relational QM. Relational QM is Copenhagen QM made democratic by bringing all systems onto the same footing.
between physical values and eigenvalues, with no reference to ψ. So, what did Schrödinger do, in his 1926 paper? With hindsight, he took a technical and a conceptual step. The technical step was to change the algebraic
(Technically: using the notation ρ(A) = Tr[ρA], a variable b taking value in the interval I of its spectrum determines the state ρ = c P^b_I, where P^b_I is the projector associated with I in the spectral resolution of B and c is the normalization constant fixed by ρ(1) = 1. If then a variable b′ takes value in I′, ρ changes to ρ′ = c′ P^{b′}_{I′} P^b_I P^{b′}_{I′}.)
the commutator: it determines the amount of non-commutativity, hence discreteness, hence impossibility of sharpness of all variables.
In Many World interpretations, a takes a value indexically relative to a world; in Bohm-like theories only an (abelian) subset of variables has value, not all of them; in Quantum Information interpretations, a takes a value only when the interaction is with the idealistic holder of the information; in Copenhagen-like interpretations, when the interaction is with the 'classical world'; in Physical Collapse theories, when some not yet directly observed random physical phenomenon happens. . .
Here is a simple example: if a spin-1/2 particle passes through a z-oriented Stern-Gerlach apparatus and takes the 'up' path, we have one bit of information about the orientation of its angular momentum L. If it then crosses an x-oriented apparatus, we gain one bit of information (about L_x) and we lose one bit of information (about L_z).
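As a hedged aside (not part of the original footnote), the state-update rule quoted in the technical note above can be written out for this Stern-Gerlach example; the spin-1/2 projectors below are the standard ones, and the computation is only a minimal sketch of the one-bit-gained, one-bit-lost bookkeeping.

\[
P^{z}_{\uparrow}=\begin{pmatrix}1&0\\0&0\end{pmatrix},\qquad
P^{x}_{\uparrow}=\frac{1}{2}\begin{pmatrix}1&1\\1&1\end{pmatrix},\qquad
\rho = P^{z}_{\uparrow}
\;\longrightarrow\;
\rho' = \frac{P^{x}_{\uparrow}\,\rho\,P^{x}_{\uparrow}}{\mathrm{Tr}\!\left[P^{x}_{\uparrow}\,\rho\,P^{x}_{\uparrow}\right]}
= \frac{1}{2}\begin{pmatrix}1&1\\1&1\end{pmatrix}.
\]

After the x-oriented apparatus, Tr[ρ′ P^x_↑] = 1 (L_x is now sharp) while Tr[ρ′ P^z_↑] = 1/2 (the previous bit about L_z is gone).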
This does not prevent conventional quantum cosmology from being studied, because physical cosmology is not the science of everything: it is the science of the largest-scale degrees of freedom.
Data accessibility. This article has no additional data. | 48,710 | [
"1681"
] | [
"179898",
"407863"
] |
01771852 | en | [
"shs"
] | 2024/03/05 22:32:16 | 2006 | https://hal.science/hal-01771852/file/Lenfant-Complementarity-HOPE2006.pdf | Jean-Sébastien Lenfant
Complementarity and Demand Theory: From the 1920s to the 1940s
The history of consumer demand is often presented as the history of the transformation of the simple Marshallian device into a powerful Hicksian representation of demand. Once upon a time, it is said, the Marshallian "law of demand" encountered the principle of ordinalism and was progressively transformed by it into a beautiful theory of demand with all the attributes of modern science. The story may be recounted in many different ways, introducing small variants and a comparative complexity. And in a sense that story would certainly capture much of what happened. But a scholar may also have legitimate reservations about it, because it takes for granted that all the protagonists agreed on the meaning of such a thing as ordinalism-and accordingly that they shared the same view as to what demand theory should be. On the contrary, it may well be more accurate to think that ordinalism was as much a product of the story as it was a principle leading the intellectual change.
Thus it may be useful to hypothesize that what happened to the theory of the rational consumer and to demand theory in the first half of the twentieth century is the consequence of a rising interest attached to the idea of choice among alternatives. Consequently, it might be more accurate to think that demand theory and ordinalism went hand in hand at
least until the late 1930s. This story may be traced back to F. Y. Edgeworth, Vilfredo Pareto, and Irving Fisher, who introduced indifference curves into the microeconomics of consumption and exchange. Introducing choice into microeconomic theory opened the door to new questions and to the need for new tools of analysis as well as to a profound transformation of the representation of the psychological foundations of choice and demand theories. It led eventually to the now-standard theory of the consumer, after John R. Hicks's and Eugen Slutsky's contributions. The new Hicksian theory of demand that was "stabilized" in the 1940s was the result of debates on many related issues: What kinds of tools were needed to study demand and choice? What methodological principles should be adopted for this? Is choice to be rationalized by some kind of psychological explanation? Is rationality worth testing? Some light may be shed on the transformations of demand analysis in the 1930s by telling the story of one of the period's most debated concepts: complementarity. What is the meaning of such sentences as "x and y are substitutes," "y and z are complementary goods," or "x and z are independent"? If x is a substitute for y, will y necessarily be a substitute for x? Is it supposed to have any empirical counterpart? What kind of meaning does it have in the first place? 1 All those questions emerged soon after the marginalist revolution and were given prominence in the 1930s. They were at the center of the reshaping of demand theory, toward the now-classic Hicks-Slutsky presentation of demand theory.
The purpose of the present article is to focus on how economists debated the need for a "good" concept of complementarity and how they eventually adopted a definition that fit the needs of both econometric modeling and a theory of market interdependencies. Indeed, the reshaping of demand theory in the thirties was stimulated by statistical studies of demand, and econometrics played a prominent role in stabilizing neoclassical demand theory in the early forties.
The first section summarizes the history of complementarity up to Slutsky and gives an overview of the theoretical and methodological context of the 1920s that fostered a renewed interest in the concept of complementarity. The second section captures the ins and outs of Hicks and Roy Allen's 1934 reconstruction of demand and complementarity. The third section deals with parallel developments on the concept of related goods that were conducted by Harold Hotelling and Henry Schultz in the early thirties and provides a narrative about stabilizing demand theory in the late thirties through Hicks's synthesis.
1. Of course, it is clear from those questions that complementarity is used here as a generic word to represent the relations between goods.
A Renewed Interest in Complementarity
Complementarity has been a subject of interest in the context of the marginalist revolution with the development of a Paretian theory of choice. It was then largely ignored under Marshallian dominance and was to be rehabilitated as an important question through the development of early econometrics in the 1920s. In this section, I briefly narrate how the concept was first developed at the turn of the twentieth century and how it became anew a subject of interest. 2
Complementarity before the 1920s
The first commonly accepted definition of completing and competing goods was proposed by Rudolf Auspitz and Richard Lieben [START_REF] Auspitz | Recherches sur la théorie du prix[END_REF] and was later adopted by Edgeworth, Pareto, and Fisher. It was so quickly adopted because it was a quite natural way to interpret the cross second-order derivative of the utility function. The ALEP definition (from Auspitz, Lieben, Edgeworth, and Pareto) did not emerge as the result of long reflection but rather as a word given to an introspectively felt relation between goods: the analytic expression ∂²u(.)/∂x∂y preceded the definition. 3 From William Stanley Jevons ([START_REF] Jevons | The Theory of Political Economy[END_REF] 1965) onward, it would certainly be difficult to find a writer interested in price and demand theory who does not devote some pages to the notions of substitutes and complements.
2. For a wider and more detailed study, see [START_REF] Lenfant | Search of Complementarity[END_REF]. Some other aspects of the story have been studied in [START_REF] Chipman | Slutsky's 1915 Article: How It Came to Be Found and Interpreted[END_REF].
3. This definition was introduced in a discussion of the shape of a so-called pleasure function (Lebensgenusskurve). While studying the sign of this function, Auspitz and Lieben had to cope with the second derivative of another function f(x_1, x_2(x_1), …, x_n(x_1)), and thus discussed the sign of a sum of terms (∂²f(•)/∂x_i∂x_1) · (∂x_i/∂x_1). On this occasion, Auspitz and Lieben introduced their famous criterion of complementarity. The cross second-order derivatives (∂²f(•)/∂x_1∂x_i), i ≥ 2, will be positive, zero, or negative "depending on whether (x_i) is a complement to the enjoyment of (x_1), is totally indifferent to (x_1), or is competing to it" (Auspitz and Lieben [1889] 1914, 319).
The first approach to the definition of complementarity is based on utility-and there every economist accepts the ALEP criterion. A second approach-peacefully cohabiting with the first one-is built on preferences and indifference curves. Both approaches aim at improving demand theory. Thus complementarity has been linked with demand analysis from the outset, especially with the study of related demand. 4 It was commonly accepted that a price rise would eventually diminish the consumption of complementary goods. The ALEP criterion led one to question this intuitive statement, and Auspitz and Lieben ([1889] 1914, 98) were the first to mention-without any formal proof, however-that the properties of utility could disturb this law of related demand:
It is nevertheless possible that some opposite manifestations may occur, because of the many cross-linked influences between goods. Thus, a rise in the consumption of coffee will always result in an increase in the quantity of sugar used to sweeten it, but, if the same individual is also reducing his consumption of tea in consequence of an increased use for coffee, it may happen that, instead of increasing, its total consumption of sugar should diminish.
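As a side illustration (the two utility functions here are my own illustrative assumptions, not examples taken from the article), the ALEP criterion defined in the footnote above can be checked on two simple two-good utilities:

\[
u(x,y)=\sqrt{xy}\;\Rightarrow\;\frac{\partial^2 u}{\partial x\,\partial y}=\frac{1}{4\sqrt{xy}}>0
\quad\text{(ALEP complements)},\qquad
u(x,y)=\log(x+y)\;\Rightarrow\;\frac{\partial^2 u}{\partial x\,\partial y}=-\frac{1}{(x+y)^2}<0
\quad\text{(ALEP substitutes)}.
\]

In the first case more of y raises the marginal utility of x, as sugar does for coffee in Auspitz and Lieben's example; in the second, the two goods stand in for one another and more of y lowers the marginal utility of x.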
A next step in the history of complementarity is Pareto's Manual of Political Economy (in the fourth chapter, "Tastes"). Here Pareto undertakes a thorough analysis of utility, preferences, and tastes and tackles the subject of substitutes and complements in consumption. This study dramatically changes the meaning of complementarity for value theory. With Pareto, complementarity becomes a concept for the theory of choice. As did Fisher, Pareto ([1909] 1971, 182, sec. 8) deals with complementarity when introducing a generalized utility function, and he adopts Auspitz and Lieben's analytic definition (Pareto [1909] 1971, appendix, sec. 46, equations 63, 64). Pareto's discussion reveals that he did not think that the concept of complementarity exhausted all the introspective and mental states of the mind for the description of choice situations, and that it should better be used as a first approximation, an imperfect analytic representation of our states of mind as consumers.
4. But it would be misleading to believe that it has always been the sole or even the main reason for developing the concept of complementarity. Alfred Marshall [START_REF] Marshall | Principles of Economics[END_REF] (1898), Auspitz and Lieben [START_REF] Auspitz | Recherches sur la théorie du prix[END_REF], Lehfeldt [START_REF] Lehfeldt | The Elasticity of Demand for Wheat. The Elasticity of Demand for Wheat[END_REF], and Fisher [START_REF] Fisher | Mathematical Investigations in the Theory of Value and Prices[END_REF] (1961) will make special uses of the concept (in explaining the properties of the price system or in justifying a measure for consumer surplus). Moreover, as regards demand theory properly, complementarity appeared both in the study of related demands and in the study of the law of demand.
It is well known that both Fisher and Pareto were reluctant to accept psychological arguments in economics (although maybe not to the same extent). This meant, in the first place, to break with the traditional reference to utility as a measuring rod for pain and pleasure and, more broadly, to break with the search for ultimate causes (Fisher [1892] 1961, 5; Pareto [1909] 1971, 160). For that reason, both of them engaged in another discussion about complementarity, on the basis of indifference curves. To that extent, their work is more representative of a preference-based approach to complementarity. 5 For all that, the preference-based approach is not presented as breaking away from the utility-based approach but rather as another way to look at demand behavior.
Fisher's typology is well known. In his Investigations, he endeavored to match the preference-based definition-a general typology built on different shapes of indifference curves-and properties of the price system. 6 Following on this line of thought, Pareto inaugurated a far more sophisticated analysis of complementarity. 7 He went beyond merely discussing the shape of indifference curves to describe the properties of the indifference map. Through this analysis, Pareto sought to explain the occurrence of increasing demand. Thus his main contribution was to shift the analysis of complementarity by giving up the study of the shape of single indifference curves and focusing instead on the shape of indifference maps. Simultaneously, Pareto shifted from the analysis of related demand to the analysis of the law of demand.
5. Compare the presentation in the full text of the Manual with the one in the appendix (secs. 44-51).
6. This approach is based on the idea that, within a two-good framework, complementarity relations can be made apparent and classified through a study of the curvature of indifference curves. Actually, Fisher does not develop this idea or any tools for differentiating substitutes and complements. The point is rather that Fisher is not interested in complementarity in itself; his idea is to create a continuous geometrical typology and to implement this representation in price theory. To that extent, Fisher makes a rather straight instrumental use of the concept, and he does not even pretend to discuss his own typology in relation with the ALEP criterion. He simply uses it to illustrate some statements about the properties of the price system: "The essential quality of substitutes or competing articles is that the marginal utilities or the prices of the quantities actually produced and consumed tend to maintain a constant ratio. We may define perfect substitutes as such that this ratio is absolutely constant. The essential attribute of completing articles is that the ratio of the quantities actually produced and consumed tends to be constant (as many shoe-strings as shoes for instance, irrespective of cost). We may define perfect completing articles as such that this ratio is absolutely constant" (Fisher [1892] 1961, 66; emphasis in original).
7. That Pareto investigates both the utility-based and the preference-based approach to complementarity accords with his ecumenical methodology. The choice of one method over the other is a question of convenience and circumstances.
To put it briefly, it is well known that Pareto came most of the way in deriving the Slutsky equation in the October 1893 issue of the Giornale degli economisti [START_REF] Chipman | The Paretian Heritage[END_REF][START_REF] Dooley | Slutsky's Equation Is Pareto's Solution[END_REF][START_REF] Weber | More on Slutsky's Equation as Pareto's Solution[END_REF]. In theorizing about demand, Pareto gave the era's most general expression on the effect of a price change on the demand for a good. As a comment to this expression (equation 75 in Pareto [1909] 1971, 423), Pareto underlined the possible implications of ALEP complementarity, but he also remarked that this criterion cannot by itself serve as a basis for the law of demand, except in the case of complementary or independent goods: "For goods having a dependence of the second type [ALEP substitutes], when the price rises, the demand may increase and then decrease". Given this lack of empirical implications of complementarity, Pareto shows that it is not easy to develop a solid basis for demand theory without having recourse to something other than complementarity in the ALEP sense (199).
In sum, Pareto's contribution to demand theory is inseparable from his reflection on complementarity. On the one hand, he lays down the empirical implications of the ALEP criterion; on the other, he tackles semantic aspects of the concept, in particular in relation with the ALEP criterion. One is even led to the result that the search for a new definition of substitutes and complements (following the preference-based approach) did not originate in any desire to give up the cardinalist criterion but was motivated by the search for a more precise and powerful theory of demand.
William E. Johnson [START_REF] Johnson | The Pure Theory of Utility Curves[END_REF] was the first to provide an ordinal definition of complementarity on the basis of the derivatives of the marginal rate of substitution (although he does not use the name) and to show that this definition cannot easily be compared with the traditional ALEP criterion. 8 Johnson's systematic recasting of the concept leads to integrating the utility-based approach within the preference-based approach. In this respect, his typology of complements and substitutes is even more dedicated toward laying the foundations for a new theory of demand.
The final step in this early story of complementarity is Slutsky's 1915 contribution. It is rather difficult to classify. Slutsky does not elaborate on the preference-based approach, and to that extent he is not developing the Fisher-Pareto-Johnson line of thought. Instead, he focuses exclusively on the relationships between the ALEP criterion and demand behavior.
Slutsky's famous equation-the first analytic decomposition between income and substitution effects-gives the variation in the demand of a good in reaction to a variation of the price of another good. In modern notation: 9
∂x_i(p, R)/∂p_j = s_{i,j}(p, R) − x_j(p, R) ∂x_i(p, R)/∂R        (1.1)
Slutsky ([1915] 1953, eq. 55) also derived the following symmetry "condition":
s_ij = ∂x_j/∂p_i + x_i ∂x_j/∂R = ∂x_i/∂p_j + x_j ∂x_i/∂R = s_ji        (1.2)
Actually, s_ij is the "residual variability" of the j-th good for a "compensated variation" of the price p_i (Slutsky [1915] 1953, 43), that is, for a variation of income that allows the consumer to buy exactly the same bundle as before. Moreover, on the basis of the above results, Slutsky wonders about the empirical counterpart of the second derivatives of the utility function. He comes to a negative conclusion, which is of direct consequence for distinguishing between complementary and competitive goods. Indeed, Slutsky provides the first explicit challenge to the ALEP criterion. 10 His conclusion is a final judgment: "This whole edifice falls if one remains loyal to the formal definition of utility, for it is impossible to deduce from the facts of behavior the character (that is, the sign) of the second derivative of utility" (54). 11
9. Where x_i is the quantity demanded of commodity i, p_j is the price of commodity j, and R is income, while s_{i,j} is the Slutsky substitution effect, representing the differential change in the consumption of i when the differential change of p_j is compensated so that consumers can still just afford their original consumption bundles.
10. Of course, Pareto showed that the ALEP criterion is not self-sufficient to have clear-cut implications on demand behavior. For all that, he did not mean explicitly that the converse was true, that is, that demand behavior could not deliver some information about complementarity in the ALEP sense, at least on the basis of global information on prices and quantities bought.
11. For an appraisal of Slutsky's interest in complementarity, see [START_REF] Lenfant | Search of Complementarity[END_REF].
As a brief summary of this sketchy early history of the concept of complementarity, one can see that complementarity was the object of more or less parallel investigations on the part of Fisher, Pareto, Johnson, and Slutsky. All of them had the ALEP criterion in mind and tried either to trace its limits or to complete it with an analysis of indifference curves. Pareto is the one who shifted the conceptualization of complementarity by emphasizing the theory of choice, and Slutsky completed the move. Despite these differences, their reflections on complementarity share a common feature: the concept is regarded as a two-good relation. It is precisely this common horizon that would progressively collapse in the next twenty years. Also, it is remarkable that the complex relations between utility, demand, and preferences opened the door to many different lines of thought in developing the theory of demand and in connecting it with complementarity. As a result, in the 1920s and early 1930s, economists did not have a satisfactory definition of complementarity.
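To make the machinery in equations (1.1)-(1.2) concrete, here is a minimal numerical sketch, under the assumption of a Cobb-Douglas consumer with demands x_1 = aR/p_1 and x_2 = (1−a)R/p_2 (an illustrative choice of mine, not a case discussed by Slutsky); it computes the substitution terms by finite differences and checks the symmetry s_12 = s_21.

import numpy as np

a = 0.3                      # Cobb-Douglas exponent (assumed for the example)
p1, p2, R = 2.0, 5.0, 100.0  # prices and income (arbitrary illustrative values)
h = 1e-6                     # finite-difference step

def demand(p1, p2, R):
    # Marshallian demands of the assumed Cobb-Douglas consumer
    return np.array([a * R / p1, (1 - a) * R / p2])

def slutsky(i, j, p1, p2, R):
    # s_ij = dx_j/dp_i + x_i * dx_j/dR, as in equation (1.2)
    x = demand(p1, p2, R)
    p = [p1, p2]
    p[i] += h
    dxj_dpi = (demand(p[0], p[1], R)[j] - x[j]) / h
    dxj_dR = (demand(p1, p2, R + h)[j] - x[j]) / h
    return dxj_dpi + x[i] * dxj_dR

s12, s21 = slutsky(0, 1, p1, p2, R), slutsky(1, 0, p1, p2, R)
print(s12, s21)  # both approx. a*(1-a)*R/(p1*p2) = 2.1, so the symmetry condition holds

For this particular utility the cross-price effects ∂x_j/∂p_i are zero, so the whole substitution term comes from the income adjustment, which is exactly the kind of decomposition Slutsky's equation isolates.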
Taking Complementarity Seriously in the 1920s and Early 1930s
Under the influence of the early econometricians, the interest in complementarity was widespread among economists of the 1920s and 1930s, and the Hicks and Allen contributions (especially [START_REF] Hicks | A Reconsideration of the Theory of Value. Pts. 1 and 2[END_REF]) to the theory of value constituted the acme of the work on complementarity.
The great difference in the history of the ALEP definition that I have just recounted lies in the fact that the need for a new definition of complementarity preceded the definition itself and that no definition seemed to apply naturally. From this moment on, something was on track for a complete reshaping of demand theory along the lines of Pareto and Fisher. One reason for this was the growing interest in the 1920s in the Lausanne school in England and the United States. In addition, Marshallian ideas on supply, demand, and surplus were subjected to increasing criticism as statistical economists and early econometricians played a growing role in the development of economics. 12 The idea that competitive goods and complementary goods should be taken into account in the statistical analysis of demand was already at stake at the very beginning of the twentieth century. In 1907 Rodolfo Benini had expressed the demand for coffee in Italy as a function of the prices of coffee and sugar, using the method of multiple correlation. In the 1920s, estimations of demand functions for agricultural products sometimes incorporated complementary and competitive goods. But it is remarkable that complementarity was never a subject of serious speculation but only a way to implement new statistical tools. None of the main studies of demand reflect on the best way to integrate substitutes and complements or even address how to know if a good should be counted as a substitute or as a complement. 13 Things would change radically in the years 1930-33. The recognition of a need for proper reflection on complementarity was progressively fostered by econometricians such as Mordecai Ezekiel, W. F. Ferger, and Elisabeth Waterman Gilboy, and by economists at the frontier between statistical analysis and pure economic theory, such as Marco Fanno and Paul N. Rosenstein-Rodan. 14 They pointed out that (1) it was necessary to identify competitive and complementary goods and to neutralize their influences on the demand for a good if one aims at measuring elasticity of demand; and (2) the way to introduce related goods within the statistical studies on demand is not self-evident. Which related good should be retained in the estimation? What method should be privileged for this purpose? What would be the theory underlying the methods? Answering those questions soon appeared as a requisite for developing statistical studies of demand.
It is quite probable that the critical assessment of Henry Moore's rising demand for pig iron in Economic Cycles ([1914] 1967) was a starting point for all this. Moore's results were criticized not only for a kind of lack of rigor or patience with statistical methodology but also for his conception of demand that was at odds with the Marshallian law of demand and uninterested in a fruitful cooperation between theoretical economics and statistical economics ([START_REF] Working | What Do Statistical "Demand Curves[END_REF]; Lehfeldt 1915). Thus R. A. Lehfeldt (1915, 410-11) pointed out that Moore had estimated something that was "much more nearly a supply curve." Those critical assessments are known to be the point of departure of the identification problem in econometrics. But they also show that early statistical economists strongly disagreed on the role and the place of statistical economics and on the purpose of statistical demand studies. Was it to estimate something other than the Marshallian demand curve? Was it to pursue the never-ending goal of estimating it? Ferger (1932, 36) would sum up this "great disagreement regarding the concepts involved" in statistical demand curves. He notes that "in none of the methods thus far considered has the fundamental condition of the classical definition of static demand curves been realized-that all other things remain unchanged" (59; emphasis in original). In the same vein, in 1933, Ezekiel could still regret that "in spite of the general recognition of the importance of competing products, no one appears as yet to have attempted to formulate clearly the exact way in which they enter into the demand situation, or to determine on a logical basis which is the best way to bring them into the statistical investigation" (173).
13. Even Schultz, who was to champion integrating choice analysis and statistical laws of demand, did not consider that complementary and competitive goods were instrumentally useful for computing statistical laws of demand. In The Statistical Laws of Demand and Supply, with Special Application to Sugar (1928b), he introduced the price of different goods together with a temporal variable and eventually eliminated those other goods from the regression, considering that the temporal variable should represent all the variations.
14. See also Ezekiel's bibliography (1933, 172). The most important articles for the present story, leaving Schultz aside, are Fanno [START_REF] Fanno | Contributo alla teoria dei beni succedanei[END_REF], 1929, 1933; [START_REF] Ferger | The Static and the Dynamic in Statistical Demand Curves[END_REF]; Gilboy [START_REF] Gilboy | Demand Curves in Theory and Practice[END_REF], 1932, 1934; Ezekiel [START_REF] Ezekiel | A Statistical Examination of Factors Related to Lamb Prices[END_REF], 1933; Wright [START_REF] Wright | Statistical Laws of Demand and Supply[END_REF], 1930; Lehfeldt [START_REF] Lehfeldt | The Elasticity of Demand for Wheat. The Elasticity of Demand for Wheat[END_REF], 1915; and [START_REF] Rosenstein-Rodan | La complementarietà prima delle tre tappe del progresso della teoria economica pura[END_REF].
In the first issue of Econometrica, Fanno underlined the technical difficulties raised by the study of the influence of a good's price on the demand for another good. To study the relationships between prices and quantities of substitutes requires accounting "for the simultaneous variations of the prices of all the goods within the group" (Fanno 1933, 165). Thus it is necessary to integrate interdependence in the statistical regressions and also to know how to select those related goods that are supposed to enter into the regression. Practically, the econometrician has to restrain the number of goods related by "direct economic relationships" (162). Ezekiel echoes Fanno's recommendations by comparing two models of interrelated demand. In the first model, p_1 is explained by x_1 and x_2, whereas in the second model p_1 is explained by x_1 and p_2 (and symmetrically for p_2):
p_1 = h_1(x_1, x_2),   p_2 = h_2(x_1, x_2)        (1.3a)
p_1 = H_1(x_1, p_2),   p_2 = H_2(x_2, p_1)        (1.3b)
Ezekiel mixed theoretical and statistical arguments in favor of the first model and against the second. The choice of one model over the other may first depend on a hypothesis about the reactivity of one market. In this case, the second model may be favored. But the second model is flawed by its circularity, the price of each good being explained partly by itself (Ezekiel 1933, 178).
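The contrast between the two specifications is easy to see on artificial data. The sketch below is purely illustrative (the data-generating process and all coefficients are my own assumptions, not Ezekiel's numbers): it fits both (1.3a) and (1.3b) by ordinary least squares, the second of which puts p_2 (itself partly determined by p_1) on the right-hand side, which is the circularity objected to above.

import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.uniform(1, 10, n)
x2 = 0.5 * x1 + rng.uniform(1, 5, n)              # related (competing) quantities
p1 = 20 - 1.5 * x1 - 0.8 * x2 + rng.normal(0, 0.5, n)
p2 = 15 - 0.6 * x1 - 1.2 * x2 + rng.normal(0, 0.5, n)

def ols(y, *regressors):
    # least-squares fit of y on a constant and the given regressors
    X = np.column_stack([np.ones(len(y)), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

print("(1.3a)", ols(p1, x1, x2), ols(p2, x1, x2))  # each price on both quantities
print("(1.3b)", ols(p1, x1, p2), ols(p2, x2, p1))  # each price on own quantity and the other price

Under (1.3a) the fitted coefficients recover the assumed structural values; under (1.3b) they mix the two price equations together, which is one way of reading Ezekiel's objection to the second model.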
Thus, at the beginning of the thirties, the early econometrists of demand (especially in agricultural economics) showed an urgent interest in a theory of related demand, but the theoretical and statistical methods for it were still in the wings or, at best, too controversial. Around 1933, the time was ripe for a fruitful dialogue between the pure theory of choice and the statistical analysis of demand in order to provide a useful definition of complementarity and substitution. Any theory that might help trace the impact of the price of a good on the demand for another would be welcome for estimating statistical demand curves. Henry Schultz would be the one to develop this connection. What remains to be understood is to what extent those early econometrists thought it useful to support their analyses with the Paretian theory of choice as it existed at the time.
From Moore to Schultz
The relationship between statistical economics and pure economics and how it evolved is best perceptible through the writings of Henry Moore and Henry Schultz on demand studies. 15 There we can find the justification for developing statistical economics on the basis of pure mathematical economics. Among the American economists, Moore and Schultz shared a good knowledge of Léon Walras and Pareto, and the former were convinced that general equilibrium was the only solid basis for doing economics. 16 But to Moore, it deserved to be completed and criticized for its static nature. Already in Economic Cycles and later in Forecasting the Yield and the Price of Cotton ([1917] 1967), Moore would clarify the way statistical economics should account for interdependencies between markets.
To put it briefly, Moore's aim in Economic Cycles is to establish a chain of causality between "the law of the cycles of rainfall" and "the law of the cycles of the crops," on the one side, and "the law of Economic Cycles," on the other ([1914] 1967, 135). The statistical relation between the demand for agricultural products and the price of agricultural products is presented as a propagation mechanism for cycles, and Moore devotes chapter 4 to the analysis of demand. 17 Moore's economics relies on supposedly natural laws whose perception is made difficult because of endless cycles and changes. Further, those natural laws are in essence dynamic and need technical abilities to be discovered. The dynamic law of demand, in Moore's mind, must take into account both the interdependencies of demands (and markets) and the unavoidable temporal character of all laws in economics. Instead of "an idol of the static state," Moore wishes to obtain statistical equations by "attacking directly the problem in its full concreteness," "other things changing according to their natural order," thus providing the "Statistical Complement to Deductive Economics" (64, 66, 67).
15. In 1916 Schultz began to study under Moore at Columbia. After the war, he was at the LSE, studying under Arthur L. Bowley and Karl Pearson. He came back to Columbia for his PhD in 1925 (Mirowski 1990, 605).
16. See Moore 1929; Schultz [START_REF] Schultz | Mathematical Economics and the Quantitative Method[END_REF], 1928a.
17. On Moore and demand, more generally, see [START_REF] Wulwick | A Reconstruction of Henry Moore's Demand Studies[END_REF].
In treatises on pure economics, particularly in those in which mathematical analysis is employed, the masters of the a priori method [i.e., mathematical economists using general equilibrium] point out what they regard as the extreme difficulty of the actual problem of the relation of price to quantity of commodity-a difficulty growing out of the interrelation of the many factors in the problem. . . . The degree in which hay, oats, and potatoes are substituted for corn is dependent not only upon the price of corn but also on their own several prices, and these latter prices are, in turn, dependent upon the supply and price of corn! This statement of the problem, complex as it appears, is unduly simplified; and it is presented not in order to ridicule the work of the masters who have elaborated the method of stating the problem in the form of simultaneous equations, but to show how hopelessly remote from reality is the very best theoretical treatment of the problem of the relation of price to the quantity of commodity, and to suggest, from the results of the preceding pages of this chapter, how imaginary, theoretical difficulties are dispelled by solving real problems. (82) Moore's theoretical and technical apparatus is thus presented as a solution to endless theoretical disputes in pure economics. All the complex relationships that are never tested become of secondary order or even null and void once we calculate a simpler relation on the whole economic cycle.
Calculating the elasticity of demand for a representative good over a sufficiently long period and using the method of multiple correlation gives "an extremely accurate formula summarizing the relation between variations in price and variations in the amount of the commodity that is demanded" (70). 18 It is commonly accepted that Moore had no great influence on other econometricians except for Schultz. But it is also known that Schultz was not a faithful follower of Moore. 19 The fascinating point is that Schultz tended to extirpate demand analysis from Moore's general project and methodology and to interpret it as an independent given that may still be improved; all subsequent works by Schultz on demand can be interpreted as implementing this program. On this occasion, Schultz gave central importance to utility theory for the statistical analysis of demand. The structural properties of demand are not yet to be sought in natural laws of cycles; they are anchored in utility theory. Little by little, Schultz abandoned references to a sui generis dynamic law of demand and constructed this law on the basis of a statistical static law of demand whose shifts from time to time would provide a dynamic law. Thus Schultz's (1928b, 23) criticism of the "neoclassical" theory of demand against the theory of the "mathematical school" was concentrated more on the impossibility of a ceteris paribus framework. By referring here and there to utility theory and to the influence of substitution on demand curves, Schultz (1931a) shows that integrating utility theory and statistical analysis of interdependencies is a possible direction of research at the beginning of the thirties, one he would always privilege against the use of budget data (Schultz 1933). Of the economists who may have inspired Schultz on the subject, Fanno seems to have been the most important. 20 Among the first to suggest a possible path between choice theory and Jean-Sébastien Lenfant / Complementarity and Demand Theory 61 18. A general presentation of the status of demand analysis within Moore's overall economic project is beyond the scope of this article. Moore would come back regularly in his other writings on the subject, with less optimism and less radically maybe, but still with great fidelity to his first ideas. Notably, he always considered multiple correlation as the best way to integrate a good number of related goods in the statistical laws of demand and in elasticity calculations (Moore 1917, 147-51). His approach is always the same and shows a good knowledge of Pareto and Walras and a constant search for improving techniques for measuring elasticity (Moore 1926;1929, chap. 3).
19. It is mainly through the eyes of Schultz that I have been led to comment on Moore's work. Quite different assessments of Moore's work can be found in [START_REF] Stigler | The Early History of Empirical Studies of Consumer Behavior[END_REF][START_REF] Christ | Early Progress in Estimating Quantitative Economic Relationships in America[END_REF][START_REF] Mirowski | Problems in the Paternity of Econometrics[END_REF] 20. On the importance of related demands for statistical demand curves, Schultz (1938, 581) recognized Fanno's 1926 memoir as his main source of inspiration.
statistical studies, Fanno (1933, 164) developed a correspondence between substitutability as it is represented with indifference curves and substitutability as it can be measured through the proportional variation of prices following any external shock on the demand for a good. He had already introduced this idea in his memoir [START_REF] Fanno | Contributo alla teoria dei beni succedanei[END_REF]) and again in 1929. Around 1930, Schultz (1931b, 83) was clearly looking in that direction, echoing Pareto's earlier statement:
The properties of the utility functions and indifference curves are very intimately related to certain characteristics of the laws of demand and supply. As an example, let us consider the demand and supply of an individual who has two or more commodities at his disposal. . . . When the consumptions of these commodities are not independent of one another, the quantity demanded may at first increase and then decrease with an increase in price, i.e., the demand curve may be positively inclined for part of its extent. . . . In [my] opinion, a study of these theoretical relationships will throw a flood of light on the problems connected with the derivation of demand curves from statistics.
Schultz developed this program in two steps between 1932 and 1935. However, complementarity was still to be defined within the framework of choice, and for that the main contribution came from Hicks and Allen.
The Hicks-Allen Article
The Hicks and Allen joint product-"A Reconsideration of the Theory of Value"-was published in 1934, in the February and May issues of Economica. It is of great importance to the extent that it opens up a new conception of the relationships between choice and the psychological foundations of choice. This is considered as true today as it was in 1934, and the Hicks-Allen article was immediately widely adopted-if not unanimously-as a solid foundation for demand theory. 21Among the subjects connected to choice and demand, the definition of complementarity is given much attention in the article. Hicks and Allen saw that all this was an inevitable necessity of the development of choice theory. 22 The most significant transformation of the concept, and certainly what allows speaking of a revolution, is that complementarity and competitiveness will no longer appear as a relation between two goods dealt with apart from all other goods in the set of choice. From now on, economists will have to deal with two concepts, according to the kind of modeling they use. The first is "substitutability," represented by the coefficient of elasticity of substitution within the two-good case. The second is "complementarity and competitiveness" between goods, represented by the coefficient of "elasticity of complementarity" within the general case. This is certainly the most important achievement together with the now-familiar Hicksian distinction between a price effect and an income effect. In this article, Hicks and Allen provided a new definition of complements and substitutes that was fully articulated with the theory of demand. To that extent, it is legitimate to take this article as a watershed in the history of complementarity and of demand.
Below I present the Hicks and Allen article in three steps. First, I focus on elasticity of substitution. Second, I concentrate on the new definitions of substitutes and complementary goods. Third, I reflect on their enterprise from a methodological point of view.
Demand Theory and the Elasticity of Substitution
The whole analysis of choice and demand is based on the hypothesis of a decreasing marginal rate of substitution, which is presented as replacing the old hypothesis of decreasing marginal utility. Consider R^y_x(x, y), the marginal rate of substitution of x to y, that is, the quantity of x that would exactly compensate the individual for the loss of one unit of y (Hicks and Allen 1934, 55, 198). This marginal rate is given by the differential equation dx + R^y_x dy = 0.
22. Allen (1934c, 110) expressed what was at stake at the surface of things, that is, in the development of choice theory as a prerequisite for demand theory: "It is the existence of mutual relationships between goods, finding expression in the various forms assumed by indifference curves, that distinguishes the general theory of choice from the simpler and more artificial theory which serves in the case of one consumers' [sic] good only" (see also Allen 1934b, 486). Later in Value and Capital, Hicks was able to put things in a wider perspective: the Paretian theory of choice based on the analysis of complementary and competitive goods, he says, "started as an extension but ended as a revolution" ([1939] 1946, 13).
In the two-good case, the preferences of the individual can be entirely expressed (independently of market conditions) through three indexes of
elasticity: the elasticity of substitution s and two coefficients of income variation r_x and r_y. 23 The measurement of substitutability is thus given by the elasticity of substitution between X and Y on a point (Hicks and Allen 1934, 59). 24 (I follow the practice of Hicks and Allen in distinguishing between goods X and Y and the quantity consumed or chosen of those goods, x and y.) The coefficients r_x and r_y express the relationships between two adjacent indifference curves along the x-axis and the y-axis, respectively, and are called (rather misleadingly) the "coefficients of income variation." Those three coefficients are independent of any price or market data. From the hypothesis of a decreasing marginal rate of substitution, r_x and r_y cannot be negative simultaneously. 25 Given prices p_x and p_y and a fixed income m, Hicks and Allen derive the decomposition of the effect of a price change of p_x on the demands for x and y. Having defined the budget coefficient k_x = x p_x/m and the income-elasticity of demand for x, e_{m,x} = (m/x)(∂x/∂m), and having shown that e_{m,x} = s r_x, the price-elasticities of demand e_{p_x,x} and e_{p_x,y} are decomposed into a sum of two effects, one involving e_{m,x} (the income effect) and one involving s (the substitution effect).
\[
e_{p_x,x} = \kappa_x\, e_{m,x} + (1-\kappa_x)\,\sigma ,
\qquad
e_{p_x,y} = \kappa_x\, e_{m,y} - \kappa_x\,\sigma .
\tag{2.1}
\]
In this two-good case, the substitution effect provides an index of substitutability. Hicks and Allen note that the substitution term in e_{p_x,y} "is always negative" (i.e., the compensated fall in the price of one good causes a substitution of this good for the other) and symmetrical in x and y, thus being "a general measure of substitutability." 26 Hence, as Allen underlined it, 27 "Two goods must always be regarded as substitutes, or as 'competitive,' when they stand by themselves; complementarity is a characteristic which does not appear until at least three goods are considered" (Hicks and Allen 1934, 202).
23. s = [d(x/y)/(x/y)] / [dR^y_x / R^y_x]; r_x = (∂R^y_x/∂y)/(R^y_x/y) and r_y = -(∂R^y_x/∂x)/(R^y_x/x). s takes all values between 0 and ∞.
24. In this article, it will be useful to disentangle the 1934 Hicks-Allen joint product. I refer to Allen's part as "Hicks and Allen" and to Hicks's part as "Hicks and Allen."
25. In the "normal" case, both coefficients are positive; in the "exceptional" case, one of them is negative.
26. Of course, the variation of demand relative to the price change of the other good can be either positive or negative, but it does not indicate complementarity or competitiveness.
27. Allen does not stand firmly on his terminology, for he refers to competitiveness either as a generic word in the two-good case or as a specific word (opposed to complementarity) in the general case.
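To see where a decomposition of the form (2.1) comes from, it may help to restate it in later, Slutsky-style notation. The following sketch is mine, not Hicks and Allen's own derivation, and it assumes the sign convention that price elasticities are quoted positively when a price rise reduces the quantity demanded:
\[
e_{p_x,x} \equiv -\frac{\partial \ln x}{\partial \ln p_x}
= \kappa_x\,\frac{\partial \ln x}{\partial \ln m}
\;-\; \frac{\partial \ln x}{\partial \ln p_x}\bigg|_{\text{comp}}
= \kappa_x\, e_{m,x} + (1-\kappa_x)\,\sigma ,
\]
since with only two goods the compensated own-price elasticity equals -(1-\kappa_x)\sigma. The same computation for y, whose compensated cross elasticity with respect to p_x equals \kappa_x\sigma, gives e_{p_x,y} = \kappa_x e_{m,y} - \kappa_x\sigma, so that the substitution term of the cross elasticity is indeed "always negative," as noted in the text.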
The Concept of Complementarity
Complementarity and competitiveness appear only in the general case, and it is thus possible to "look for observable evidence of the 'competitive' or 'complementary' nature of the relations between the three goods" (Hicks and Allen 1934, 210). The change in the demand for Y when the price of X falls ∂y(m, p x , p y , p z )/∂p x splits into an income effect and a substitution effect (i.e., a substitution in favor of X). This substitution may take place at the expense of both Y and Z (as in the two-good case), and in this case X and Y are declared competitive (against Z), and X and Z are declared competitive (against Y). But substitution may also take place, differently, at the expense of Z only while the quantity of Y rises. In that case, Y is complementary with X against Z.
As noted by Hicks, this definition is not altogether free of market data and thus not a pure property of preferences, for it supposes implicitly that the reshuffling between Y and Z following the change in the price p x takes place without any change in the relative price p y /p z : "Since there is implicit in our definition this assumption about price ratios, we have not succeeded in defining complementarity (as we ought to do) purely in terms of the individual's preference-scale; we are making a reference to the market which is better avoided" (Hicks and Allen 1934, 70).
Analytically, the differential equation of the indifference surface is dx + R^y_x(x, y, z) dy + R^z_x(x, y, z) dz = 0. The preference complex is now represented by nine indexes (in the integrable case). There are six distinct coefficients of the elasticity of substitution and one general coefficient s measuring mutual substitutability between goods. Three of the distinct coefficients represent the "ordinary" measure of elasticity of substitution (the elasticity of substitution between i and j being measured along the ij plane, k being fixed, written {}_{ij}s_{ij}); the three others, "nonstandard" coefficients, measure the elasticity of substitution between i and j along another plane (ik or jk) (e.g., {}_{ik}s_{jk} is the elasticity of substitution between j and k measured along the indifference direction [ik], j being fixed). 28 From those six coefficients and the general coefficient s, Hicks and Allen have six new measures of elasticities of substitution and complementarity in the three-good case:
28. Under the integrability assumption privileged here, {}_{ik}s_{jk} = {}_{jk}s_{ik}.
1. s/{}_{yz}s_{yz}, s/{}_{xz}s_{xz}, s/{}_{xy}s_{xy}, being the three elasticities of substitution between one good and the pair of the two other goods (e.g., the first one is the elasticity of substitution between X and the pair YZ).
They are always positive, as in the two-good case. 2. s_{xy} = s/{}_{xz}s_{yz}, s_{xz} = s/{}_{xy}s_{zy}, s_{yz} = s/{}_{yx}s_{zx}, being the three elasticities of complementarity of a pair of goods against the third one (e.g., s_{xy} is the elasticity of complementarity of the pair XY against Z). If the elasticity of complementarity is positive, it denotes a complementary relation between two goods relative to the third one.
To those six coefficients, one must add 3. the three coefficients of income variation r_x, r_y, and r_z.
From this complete set of tools describing preferences in the threegood case, Hicks and Allen obtain the general expression of the effect of the variation of price on demand (Hicks and Allen 1934, 209) with still an income term (function of the income-variation coefficient) and a substitution term (a function of the elasticities of substitution and complementarity):
\[
\begin{aligned}
e_{p_x,x} &= \kappa_x\, e_{m,x} + (1-\kappa_x)\,\sigma/{}_{yz}\sigma_{yz} ,\\
e_{p_x,y} &= \kappa_x\, e_{m,y} + \kappa_x\,\sigma_{xy} ,\\
e_{p_x,z} &= \kappa_x\, e_{m,z} + \kappa_x\,\sigma_{xz} .
\end{aligned}
\tag{2.2}
\]
The second term in each equation of (2.2) indicates the substitution effect. Complementarity must be read on the second and third equations:
The effect of substitution following a fall in the price of X is to increase or decrease the demand for Y according as the elasticity of complementarity of Y with X against Z is positive or negative. A negative elasticity of complementarity implies that Y competes with X against Z and a positive elasticity that Y complements X against Z. The signs of the elasticities of complementarity determine the competitive and complementary nature of the relations between the three goods and their magnitudes indicate the extent of the relations. ( 211)
From those new definitions of complementarity and substitution in the general case, a few methodological and analytic remarks are in order. To implement a new definition is not without unexpected consequences, and it is important to catch all these consequences.
Some Analytic and Methodological Comments on Complementarity and Substitutability in Hicks and Allen 1934
Hicks and Allen 1934 is filled with references to the methodological and analytic consequences of the new definitions of complementarity. While the authors insist that the new tools are quantitative ones, they also insist on unintended properties of the new definitions that appear as more or less awkward. Hicks and Allen aim at breaking with the traditional theory of utility, hence the quantitative spirit that permeates the article. The consequences of the new theory of utility have to be worked out completely, they say, and new concepts must be developed: "It is hoped in this way to assist in the construction of a theory of value in which all concepts that pretend to quantitative exactitude, can be rigidly and exactly defined" (Hicks and Allen 1934, 55;emphasis added). 29 Analytically, one can make some remarks about the new definitions. First, some traditional representations are dispelled. For instance, Hicks and Allen note that the cross-price effects on demand ∂x(m, p x , p y , p z )/∂p y and ∂y(m, p x , p y , p z )/∂p x are not symmetrical except when the income terms are of the same magnitude. Moreover, if income-effects terms are of opposite sign and relatively large compared with the elasticity of complementarity, then ∂x/∂p y and ∂y/∂p x will differ "not only in magnitude, but also in sign" (Hicks and Allen 1934, 214). The second remark deals with the number of complementary relationships within the economy. Hicks and Allen show that the three elasticities of substitution and complementarity between any two goods are linked by the following constraint:
\[
(1-k_x)\, s/{}_{yz}s_{yz} + k_y\, s_{xy} + k_z\, s_{xz} = 0
\tag{2.3}
\]
(and two other similar constraints relative to xy and xz).
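Spelling the constraint out makes the asymmetry discussed just below explicit. The unpacking is mine: the two companion constraints are written by symmetry, the budget coefficients k are taken to be positive, and the substitution ratios s/{}_{jk}s_{jk} are positive, as the text states:
\[
\begin{aligned}
(1-k_x)\, s/{}_{yz}s_{yz} &= -\,k_y\, s_{xy} - k_z\, s_{xz} > 0 ,\\
(1-k_y)\, s/{}_{xz}s_{xz} &= -\,k_x\, s_{xy} - k_z\, s_{yz} > 0 ,\\
(1-k_z)\, s/{}_{xy}s_{xy} &= -\,k_x\, s_{xz} - k_y\, s_{yz} > 0 .
\end{aligned}
\]
Each line forbids the two complementarity coefficients on its right-hand side from being simultaneously positive; taken together, the three lines imply that at most one of s_{xy}, s_{xz}, s_{yz} can be positive, that is, at most one pair of the three goods can be complementary.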
Consequently, s/ yz s yz being necessarily positive, elasticities of complementarity s xy and s xz cannot be simultaneously positive, and hence "it is not possible for more than one pair of the three goods to be complementary" (213). 30 Thus the new concept of complementarity, as it appears in a choice Jean-Sébastien Lenfant / Complementarity and Demand Theory 67 29. For instance, the authors seem to insist on the fact that the principal advantage of the decreasing marginal rate of substitution over the decreasing marginal utility is that "it becomes significant and useful to ask: 'increasing how rapidly?'" (Hicks and Allen 1934, 58). This is precisely what the marginal elasticity of substitution is supposed to give: a measure of the curvature of the indifference loci.
30. See also Hicks and Allen 1934, 70;Hicks and Hicks and Allen 1934, 211. framework, implies some kind of asymmetry between complementarity and competitiveness from the outset. The third remark deals with the generalization of the analysis to an n-good case. Nothing is said about the best way to account for complementarity and competitiveness in this general case (n > 3). As regards this, Hicks and Allen remain evasive as to what of generalization would be a good measure of complementarity and competitiveness. 31 The fourth remark is about independence. There remains to be known what status will be given to independent goods (once the ALEP criterion is rejected for not being independent of monotone transformations of the utility function). Hicks and Allen reject the most immediate definition of independence as an intermediate situation between complementarity and competitiveness. Curiously, they appeal to a purely introspective idea of what independence should consist of: "For, if, as would happen at our watershed, the marginal rate of substitution between Y and Z is unaffected by compensating changes of X and Z, this does not mean that the goods are in any useful sense 'independent'-there subsists a very complex relation between them" (Hicks and Allen 1934, 74).Hicks and Allen's development on independence is not very well articulated indeed. Hicks (Hicks and Allen 1934, 74-76) expands mainly on the idea that X is independent of the pair YZ if the marginal rate of substitution between Y and Z is independent of the quantity of X. This definition does not prevent per se any type of relation between X and Y against Z (either complementarity or competitiveness). Nevertheless, X and Y can be complementary only if Y is an inferior good. Allen's mathematical treatment is centered on the hypothesis of mutual independence between X, Y, and Z (Hicks and Allen 1934, 214-18). As shown from all those analytic remarks on complementarity, we can draw a first conclusion. The first effect of this reconstruction of demand theory is that the whole conception of complementarity is metamorphosed. Complementarity and substitutability appear now as derivative concepts and necessarily lose the autonomy characteristic of the ALEP definition. At the same time, complementarity and competitive-68 History of Political Economy 38:5 (2006) 31. Hicks will tackle this question in Value and Capital through the use of a "composite commodity" (see the following section). It is nevertheless interesting to wonder about the minimum number of competitiveness relations in an n-good case. In this case, there are n constraints similar to (2.3) and thus at least n pairs of competitive goods in the whole system. Compared with the total number of complementarity or competitiveness relations n(n -1)(n -2)/2, the minimum number of competitiveness relations is shrinking as the number of goods is increased. ness are at least given a theoretical content within the theory of choice and demand. 32 Consequently, competitiveness and complementarity are now coextensive with the new theory of demand, 33 and the metamorphosis of the concept would not be challenged for long. 34 Another question is to know how Hicks and Allen were led to this reconstruction of demand theory. It is beyond the scope of this article to enter precisely into a genetic history of the Hicks-Allen article. 
It is nevertheless remarkable that, according to Allen (1935, 158), the main interest of the new definition is that it "applies at once in the explanation of the inter-relations of individual demands under market conditions," a subject of interest for econometricians, as we have seen. To put it in a few words, Hicks and Allen 1934 is the product of two converging topics. First, there is Hicks's decomposition between income and price effects linking elasticity of substitution and elasticity of demand. 35 Second, there Jean-Sébastien Lenfant / Complementarity and Demand Theory 69 32. For all that, the new definitions are not completely deprived of introspective qualitiesthat complementarity between two goods depends on which third good is associated to the choice and, more generally, on the context of choice.
33. Hicks and Allen are quite explicit about that when they deal with the two-good case: "It is perfectly consistent with the theory we have so far elaborated, to suppose that all goods are more or less related in consumption; yet we have made no use of the conception of complementarity and competitive goods. We have not used it, because we had no need to use it; we had not yet come to the problem where it is relevant" (Hicks and Allen 1934, 69).
34. Later, as he would comment on the Hicks-Allen definition, Allen insisted on the fact that the new definition was radically different from the ALEP definition, that it was something else and that economists were progressively discovering how different it was: "It is becoming increasingly evident, as its implications are worked out and its range extended, that this [Hicks-Allen] definition of complementarity provides a more constructive tool than the notoriously barren one based on the sign of the cross second-order derivative of the utility function. It must not be supposed, further, that the new definition and the old coincide when utility is taken as measurable-the new definition is not only wider but radically different from the old. Even when the utility function is determinate, the sign of the cross derivative cannot be expected to give results in this field of complementarity. Surely we have here something more than an 'axiomatic experiment'" (Allen 1935, 158 n). For all that, one must keep in mind that (1) some authors have challenged the new definition on the occasion of a debate on "intrinsic complementarity" inaugurated by Oskar [START_REF] Lange | Complementarity and Interrelations of Shifts in Demand[END_REF]; and (2) Allen's statement does not preclude searching for empirical implications of ALEP complementarity (Chipman 1977, Weber 2000).
35. The Hicksian decomposition is heuristically different from Slutsky's "residual variations," and it cannot be justified on empiricist grounds. It is rather a purely instrumental device based on the unobservable concept of indifference curves and hence possessing other heuristic advantages [START_REF] Chipman | Slutsky's 1915 Article: How It Came to Be Found and Interpreted[END_REF]. The textual evidence in Hicks and Allen 1934 leads the reader back to a debate following Hicks's Theory of Wages ([1932] 1957) and Robinson's The Theory of Imperfect Competition (1933) and their independent introduction of the concept of elasticity of substitution. The debate took place mainly in the Review of Economic Studies and is referred to incidentally in Hicks and Allen 1934, 59 n. Hicks would is a desire to expand demand analysis to interrelations of demand in a general framework (n ≥ 3) and thus to provide a new definition of complementarity, as Allen had been convinced of through his earlier articles. 36 Behind this was also a growing interest for econometrics and for the questions raised by econometrists. As Hicks (1981, 3) recounted later, the 1934 article "also reckoned, quite explicitly, among the things that gave rise to it, Joan Robinson's definition of elasticity of substitution (e.s.) and a question about complementary goods that had been raised by Henry Schultz (of Chicago). It is thus not surprising that it made a noise in the world of economists." 37 So we are led back to Schultz and to the econometrics of complementarity.
Constructing the Hicks-Slutsky Theory of Demand
As was shown in section 1, around 1930, Schultz wished to tie the econometrics of demand with the theory of demand based on rational choice. Soon after, he implemented the Hicks-Allen theory of value into the econometrics of demand. There are two stages to this story. To begin with, 70 History of Political Economy 38:5 (2006) always insist on the influence of Robbins's seminar on the 1934 article: it was also "the product of discussions, in which several others took part, in the Robbins seminar at LES" (Hicks 1981, 3). Also, Hicks would give some insights of his own evolution during those years on many occasions. Hicks (1973, 3 n) sketched his own way from Theory of Wages to Hicks and Allen 1934 very succinctly: "Theory of Wages elasticity of substitution; Joan Robinson's elasticity of substitution (Imperfect Competition) 1933; proof of equivalence in two-factor case; Lerner's proof that the Robinson elasticity is a property of an isoquante; my own realization that the same geometrical property would hold for an indifference curve. Those are the steps that led to the Hicks-Allen article." See also Hicks 1981, 3-5. 36. Allen was trained as a statistician at the LSE, and he maintained an interest in choice and utility theory throughout his career, from the time of his first published article, "The Foundations of a Mathematical Theory of Exchange" (1932). He then wrote some other articles on complementarity before cooperating with Hicks (Allen 1934a(Allen , 1934b(Allen , 1934c)). Taken together, these articles provide a thorough study of ALEP complementarity in relation to indifference curves and utility functions. Allen was certainly the first economist to grasp the central importance of complementarity as a consequence of Paretian choice theory: "It is the existence of mutual relationships between goods, finding expression in the various forms assumed by indifference curves, that distinguishes the general theory of choice from the simpler and more artificial theory which serves in the case of one consumers ' good only" (1934c, 110). Hicks had been acquainted with the subject of complementarity and choice theory by reading Allen's articles on the subject. Hicks presented the final product by saying that "the present paper is the result, first, of my own reflections about Mr. Allen's work, and secondly, of our collaboration in working out the details of a theory which shall be free of the inconsistencies detected in Pareto" (Hicks and Allen 1934, 55). Thus it is clear that Allen's articles had circulated among economists at the LSE.
37. Allen (1934b) also mentioned Schultz 1933.
Edgeworth ("The Pure Theory of Monopoly," 1925) had shown that the taxation of a monopolized product might imply a lowering of its price. Hotelling shows that this result is not conditioned by the market structure (it might take place also under free competition). Hotelling's model led to symmetry conditions on cross-price elasticities of demand for two goods. His article is interesting because it is based on the same observation as Schultz's, that is, that demand studies are characterized by a gap between deductive statements on demand and purely inductive statistical relations. Thus Edgeworth's paradox stimulated "a critical examination" of the idea that "demand functions for several commodities need satisfy no conditions except the decrease of demand for each commodity when its price increases" (Hotelling 1932, 582). To sum up, Hotelling's main idea is that the study of a multigood system might produce additional structure on demand relationships and that most of the poorly motivated properties of demand in the economic literature are due precisely to the fact that "the correlation of demand for different commodities is neglected" (583). Through a multigood analysis, Hotelling provides structural constraints-the famous "Hotelling's integrability conditions" 38 -on demand systems that can be deduced from the maximizing behavior of economic agents: 39
\[
\frac{\partial x_i}{\partial p_j} = \frac{\partial x_j}{\partial p_i}
\tag{3.1}
\]
38. Those integrability conditions are different from the Slutsky integrability conditions rejected by Allen and Samuelson (see Chipman, "Slutsky's 1915 Article: How It Came to Be Found and Interpreted").
Thus the cross-partial derivatives of demand functions are equal. What is of interest to us is that many related goods are considered at once and lead to structural relations on demand functions that can be tested statistically. As he later commented: "Theoretical conditions exist with which empirical results may be compared" (Hotelling 1933, 409). Schultz's (1933, 506) own model was presented soon after in "Interrelations of Demand," following Hotelling's idea that "economic theory lays down certain conditions which the demand functions for any two related goods must fulfill. These are the conditions of consistency (integrability conditions)." 40 The main result of Schultz's article was that statistical tests led to the conclusion that the "integrability conditions" were not satisfied exactly. 41 That is, the results were sufficiently bad to warrant searching for more general conditions.
Schultz's consistency conditions were the same as Hotelling's but were deduced from a Paretian model based on maximizing utility under fixed income and with the hypothesis of a constant marginal utility of money income (thus allowing symmetric income effects and the symmetry of cross-partials of demand functions) (478, equations 19 and 20). The econometric system to be tested (supposing linear demand curves) is thus: 72 History of Political Economy 38:5 (2006) 39. In fact, Hotelling's constraints derive from two different models. The first one is a nonstandard model with entrepreneurial demand functions; the second one is based on a fixedincome hypothesis. The detail of mathematical notations is to be found in Hotelling 1935, 68. Hotelling's model differs sensibly from the Paretian model to the extent that there is neither a fixed income nor an income determined by the price of the initial endowments and that the functions describe what is now called "entrepreneurial" demand functions. But the results are the same as under the hypothesis of fixed income and constant marginal utility of money. In Hotelling's view (1932, 592), his model was depicting something more general than the Paretian model of consumer behavior, and it is adapted to a great variety of consumption circumstances. Hotelling does not provide any justification for this. On this model and its interpretation, see [START_REF] Hands | Harold Hotelling and the Neoclassical Dream[END_REF][START_REF] Hands | Harold Hotelling and the Neoclassical Dream[END_REF][START_REF] Chipman | The Paretian Heritage[END_REF][START_REF] Chipman | Slutsky's 1915 Article: How It Came to Be Found and Interpreted[END_REF] 40. That Hotelling and Schultz made simultaneous discoveries is not surprising. Indeed, Schultz and Hotelling's research went hand in hand for two years. We know from Hotelling 1939, 100, that Schultz had been a referee for the first draft of the 1932 article and that Schultz had informed Hotelling that he had found similar conditions under the constant marginal utility hypothesis [START_REF] Hands | Harold Hotelling and the Neoclassical Dream[END_REF]. In fact, Schultz's (1933, 481 n. 16) findings were stimulated by his reading of Hotelling's draft. This "collaboration" led to a sustained correspondence between Hotelling and Schultz that has been commented on by [START_REF] Hands | Harold Hotelling and the Neoclassical Dream[END_REF].
41. In his 1932 article, Hotelling already mentioned (in the note on page 594) that Schultz and his staff had not found promising results.
\[
\begin{cases}
x_i = h_i + r_{i,i}\, p_i + r_{i,j}\, p_j \\
x_j = h_j + r_{j,i}\, p_i + r_{j,j}\, p_j
\end{cases}
\tag{3.2}
\]
In this model, r i, j and r j,i will be negative in the case of complementary goods and positive in the case of competing goods. That Schultz preferred his own formulation of the model is due mainly to its heuristic properties. Interestingly, what Schultz meant in this case was that his model and the symmetry properties could be easily related to intuitive properties of the ALEP definition of complementarity. Thus, in 1933, Schultz was still attached to an introspective conception of complementarity based on utility: I prefer, however, to adopt as my point of departure the fundamental classical definitions of related and independent commodities in terms of the utility functions, for these functions seem to me to lead more directly to the characteristics of the related demand equations with which we shall be primarily concerned in this paper, and to have other heuristic properties as well. (481 n. 16) 42 Schultz makes a statistical test of those consistency conditions for four agricultural products: barley, corn, hay, and oats, with the idea to determine which pairs are completing and which ones are competing. One pair of goods will show a significant contradictory result, and symmetry conditionsthus are not satisfied. Considering that the symmetry conditions should at least be approximately satisfied, the subsisting contradiction is commented on as follows, using the ALEP heuristics: "It is as though the relation between the two commodities were such that the utility of hay to farmers increases as the quantity of oats is increased, while the utility of oats decreases as the quantity of hay is increased! But such an explanation cannot be entertained" (Schultz 1933, 501). 43 At this stage, one can conclude that the first encounter of the Paretian theory of demand and statistical studies of demand is met with mixed success. The Jean-Sébastien Lenfant / Complementarity and Demand Theory 73 42. In the same article, Schultz (1933, 481) also vindicates not following Fanno's heuristics, because it does not lead to testable relations (see also Court 1941, 139). On the same footing, Hotelling (1935), coming back to the subject, does not adhere to Schultz's procedure and heuristics and shows some reluctance to adopt the fixed-income hypothesis. This is due to an attachment both for the physical analogy he had developed earlier and for a general equilibrium model without the hypothesis of the constant marginal utility of money (Hotelling 1935, 74). He is also more generous with Schultz's results (67 n). On the justification of the fixed-income hypothesis, see Schultz 1935, 436. 43. Among the main reasons that may explain those disappointing results are that some other related goods may have been forgotten (Schultz 1933, 480n1, 510).
Hicks and Allen article, together with the rediscovery of the Slutsky article, are coming at the right time. It is to be expected that they will also affect the methodological justification of the model and the interpretation of the results.
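As a purely illustrative aside, the kind of symmetry check Schultz ran can be sketched in a few lines of modern code: fit the two linear demand equations of (3.2) by least squares and ask whether the estimated cross-price coefficients r_{i,j} and r_{j,i} come out close and of the same sign. Everything below is hypothetical (synthetic data, made-up coefficient values, and variable names of my choosing); it is not Schultz's data or procedure, and on his actual agricultural series the point was precisely that the two estimates did not agree.

```python
# Minimal sketch of a Hotelling/Schultz-type symmetry check on system (3.2).
# All numbers and names are hypothetical; synthetic series stand in for real prices.
import numpy as np

rng = np.random.default_rng(0)
T = 200                                    # number of (hypothetical) observations
p_i = rng.uniform(1.0, 2.0, T)             # price of good i
p_j = rng.uniform(1.0, 2.0, T)             # price of good j

# Demands generated with a symmetric cross effect (-0.5), plus noise:
x_i = 10.0 - 1.2 * p_i - 0.5 * p_j + rng.normal(0.0, 0.1, T)
x_j = 8.0 - 0.5 * p_i - 0.9 * p_j + rng.normal(0.0, 0.1, T)

# Least-squares fit of x_i = h_i + r_ii p_i + r_ij p_j, and symmetrically for x_j:
X = np.column_stack([np.ones(T), p_i, p_j])
(h_i, r_ii, r_ij), *_ = np.linalg.lstsq(X, x_i, rcond=None)
(h_j, r_ji, r_jj), *_ = np.linalg.lstsq(X, x_j, rcond=None)

print(f"r_ij = {r_ij:.3f}, r_ji = {r_ji:.3f}")  # symmetry condition: r_ij close to r_ji
print("completing" if (r_ij < 0 and r_ji < 0) else "competing (or ambiguous)")
```

In this toy setup the two cross coefficients agree by construction; Schultz's difficulty was that, on real data, they did not.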
The Schultz-Slutsky Model
In 1934 Schultz benefited not only from Hicks and Allen's article but also from the discovery of Slutsky's article (see Chipman, "Slutsky's 1915 Article: How It Came to Be Found and Interpreted"). Both contributions were received at the precise moment when Schultz was looking for new theoretical relationships on demand systems that might be tested. In "Interrelations of Demand, Price, and Income" (1935)-and again in his 1938 treatise-Schultz would devote most of his energy to developing the theory of related goods and its applications on the basis of Slutsky-Hicks-Allen demand analysis. To that extent, Schultz shares with Hicks and Allen the paternity of a modern theory of complementarity.
Two aspects of Schultz's work demand attention. First, he adapted Hicks and Allen's work in order to make it operational for statistical computation, and in this respect he preferred Slutsky's decomposition between price effects ("residual variability") and income effects. Second, Schultz abandoned the psychological introspective framework associated with the ALEP definition, adopting instead a new methodological credo for economics-the so-called operationalist methodology (after Bridgman 1927), to which he meant to associate the Hicks-Allen definition. To that extent, his interpretation of Pareto's recommendations appears, at first sight, more radical than Slutsky's and even Hicks and Allen's. For all that, it does not imply that Schultz is abiding strictly by an operationalist methodology in his reasoning. Neither does it mean that the reference to an operational procedure is a self-fulfilling argument to build an econometric analysis of demand. Indeed, Schultz (1935, 474-75 n. 47) eventually shows a preference for Hotelling's symmetry conditions. Thus Schultz's arguments and methodological references are worth discussing further.
It is beyond the scope of this article to appraise Schultz's implementation of the operationalist methodology into economics. 44 Nevertheless, Schultz's reflection on demand theory and complementarity is best under-stood if one makes clear how he articulated different levels of argumentation: first, the reference to operationalism and to "operational procedures" in economics, then the practical use of this operational procedure for selecting one definition of complementarity, and finally the effective choice of one model of demand. Let us take each of those levels one at a time.
Schultz's references to operationalism are not mere references to the methodology of the natural sciences. Although Schultz is certainly searching for scientific criteria that may put economics on the same footing with physics, his frequent references to Bridgman 1927 testify that he considered Bridgman's recommendations seriously for economics. This does not mean that Schultz is a strict operationalist. In his comments on Bridgman, Schultz (1938, 11) stresses the fact that economics will gain in making "a distinction between concepts which are defined in terms of operations and those which are defined in terms of properties of things." The focus on operational procedures is presented as the main criteria for selecting one concept of complementarity over another. In Schultz's own words: "The restatement and extension of the earliest concept of demand into forms which have meaning in terms of operations, which has been attempted in the foregoing pages, is the first step in the direction of the derivation of concrete statistical laws of demand" (12).
As an illustration of this method, Schultz's comments on the Hicks-Slutsky definition make it explicit that according to him, operational concepts in economics should be based above all on market data rather than on experimental data. In this respect, the ALEP definition cannot be recommended because it is not reducible to the observation of market data:
According to [the ALEP definition], if we wish to know whether two commodities are completing, independent, or competing to an individual, we must ask him (and he must be able to tell us) whether, as we increase the quantity of one of the goods, the final utility of the other increases, remains constant, or decreases. The operation calls for an introspective comparison of final degrees of utility, on his part. The size of his income and the number of commodities in the economy do not affect his ability to make the comparison in question.
According to [the SHAS definition], if we wish to know whether the individual considers two commodities as completing, independent, or competing, we must note his income, and observe whether a fall in the price of one of the goods, accompanied by a compensating variation in his income, will cause him to increase, maintain constant, or decrease his purchase of the other. . . . The point is . . . that the answers called for by [the SHAS definition] do not require a comparison of final utilities and may be obtained by observing the individual's market behavior; whereas the answers called for by [the ALEP definition] cannot be so obtained." (Schultz 1935, 462-63) As a third step in this analysis of Schultz's methodology for statistical demand studies, the selection of a statistical model of demand is based on instrumental considerations. In this respect, Schultz was led to prefer a model leading to the Hotelling symmetry conditions rather than the more general one based on Slutsky symmetry conditions. First, Schultz (1935, 457) The system of linear demand functions to be tested is
{ x = c 1 + c 1,1 p x + c 1 , 2 p y + c 1, R R y = c 2 + c 2,1 p x + c 2 , 2 p y + c 2, R R (3.4)
Then, x and y are complementary whenever (460)
\[
c_{1,2} + y\, c_{1,R} = c_{2,1} + x\, c_{2,R} < 0 .
\tag{3.5}
\]
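The link between (3.5) and the Slutsky-type condition (3.3) may be worth spelling out; the computation is mine, not Schultz's. For the linear demands (3.4) the partial derivatives are constants, so the compensated cross terms of (3.3) reduce to
\[
\frac{\partial x}{\partial p_y} + y\,\frac{\partial x}{\partial R} = c_{1,2} + y\, c_{1,R} ,
\qquad
\frac{\partial y}{\partial p_x} + x\,\frac{\partial y}{\partial R} = c_{2,1} + x\, c_{2,R} ,
\]
so that Slutsky symmetry requires the two expressions to be equal, and complementarity requires their common value to be negative, which is exactly condition (3.5).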
Schultz provides statistical tests on those relationships for beef, pork, and mutton, using multiple regression. The quantity of each good is expressed as a function of the price of the three goods and of income. Schultz also tests pairs of goods (beef-pork, beef-mutton, pork-mutton). Results appear to be even worse than in the 1933 article. The only conclusion drawn from those tests is that the price of each good is the most influential variable on its demand and that it is an obstacle to a careful analysis of complementarity (Schultz 1935, 472). Drawing the lessons from those articles, Schultz seemingly came to see that many difficulties were obviating the test of symmetry conditions. Actually, Schultz's argument in favor of the Hotelling symmetry conditions over the Slutsky conditions is based on acknowledging the failure of the Slutsky symmetry conditions to improve the quality of statistical estimations. Considering that nothing had been gained by using Slutsky symmetry conditions, and that "Hotelling conditions and the corresponding Slutsky conditions are of approximately the same order of magnitude" (479) because of the smallness of aggregate income effects, 45 Schultz eventually advocated that the use of Hotelling's conditions might be enough to indicate "the type of relation existing between the two commodities" (479n51). So Schultz tended to regard income effects as relatively marginal compared with direct price effects, so that Hotelling's conditions might be verified as well as Slutsky's conditions at the market level without being open to any objection regarding the aggregation procedure (475). At this last stage in the construction of a statistical model of demand, the reference to an operational procedure is of no use, and Schultz is just selecting the best model.
In a few words, one must distinguish between two types of arguments in Schultz's work on complementarity. On the one hand, the so-called operationalist methodology is referred to as a justification for abandoning the ALEP definition, whereas, on the other, the final choice between different statistical models is justified on more instrumental grounds. 46 In the final analysis, Schultz's attitude toward disappointing results on the tests of symmetry conditions is oscillating between either the need to improve on statistical techniques or a disenchanted benign-neglect in favor of a simpler model of related demands.
45. The conclusions are the same whatever the functional representation chosen (Schultz 1935, fig. 5, 478).
46. [START_REF] Friedman | The Fitting of Indifference Curves as a Method of Deriving Statistical Demand Curves[END_REF], as one of Schultz's assistants at Chicago, provided a new definition of complementarity that enlarged the Johnson and Allen definitions. It was inserted partly into Schultz 1938. Friedman participated actively in preparing Schultz's 1935 article and in the corresponding eighteenth chapter of Schultz 1938 (cf. Chipman and[START_REF] Chipman | Slutsky's 1915 Article: How It Came to Be Found and Interpreted[END_REF]. In The Theory and Measurement of Demand, Schultz (1938, chaps. 18, 19) would carefully compare the relative merits of each definition. Comparing the Johnson-Allen-Friedman preference-based definitions of complementarity with the Slutsky-Hicks-Allen-Schultz (SHAS) definition, he also reassessed the advantages of the "special" theory of related demands (based on the constant marginal utility of money hypothesis) over the more complicated "general" theory of related demands. The main reason for not retaining the Johnson-Allen-Friedman definitions is that the conditions for statistical applications are too remote because the relation between goods is defined over the whole indifference map and thus necessitates having an estimate of demands on the whole choice set (Schultz 1938, 612). Another weakness for statistical economics is that the relation between goods is supposed to be the same whatever the combination of goods (614; see also page 619 for a comment on Friedman's definition). On the contrary, the SHAS definition, as it is constructed on the basis of Slutsky's compensating variations, depends on market conditions (income and relative prices) and can be obtained from statistical information (subject to restrictions on aggregation). Thus choice theory provides "quantitatively definite relations" from which it is possible "to test the agreement between theory and facts" (Schultz 1938, 646).
It is probable that the income effect is also small for most articles of wide consumption on which only a small proportion of the income is spent. We may, therefore, expect the simpler Hotelling conditions to be satisfied by a large number of demand phenomena. But this supposition needs to be fortified by more extensive statistical investigations. (Schultz 1938, 646) Despite the mitigated success of Schultz's endeavor at this stage, it still had an effect on the theory of choice. Schultz's influence will be perceptible through the synthesis of the theory of demand by Hicks, once he came back to the subject after his collaboration with Allen.
Constructing the Hicks-Slutsky Theory of Demand and Complementarity and Reconsidering the Marshallian Orthodoxy
What still needs to be recounted is to what extent and how the Schultz-Slutsky and Hicks-Allen reconstructions of demand theory were eventually synthesized as the Hicks-Slutsky theory of demand in Hicks's Value and Capital and subsequently in most standard textbooks.
We have seen that Schultz preferred to construct his analysis of demand on Slutsky's equations because they were more adapted to statistical handling. He had first expressed his disappointment about the bad results obtained from testing the Hotelling and Slutsky symmetry conditions, imputing the fault not to the model but to the weakness of statistical tools (Schultz 1935, 472). It is nevertheless quite striking that Schultz came to consider that income effects were so small as to block any effort to test Slutsky symmetry conditions. This development of the econometrics of demand was not without influence on Hicks, and it is certainly the major incentive toward the synthesis of demand theory that was exposed in Value and Capital. Already in his 1937 booklet, Hicks had exposed demand theory on the basis of Slutsky's equation; later in Value and Capital, he would mix both presentations. 47 Hicks also enlarged the definition in the n-good case through introducing a composite commodity ("money") whose relative prices are constants, so that the presentation fits with the usual 78 History of Political Economy 38:5 (2006) 47. The literal and geometrical representation in the full text is in the Hicksian manner, whereas the analytic presentation in the appendix follows Slutsky's compensation (Hicks 1939, 309). Later Jacob Mosak (1942) showed that the Slutsky and Hicks decompositions are equivalent for infinitesimal price variations.
presentation in a three-good framework: two goods and "money." 48 Hence the definition of complementarity is now expressed in this way: 49 Suppose that Y (one of the other commodities) is complementary with X-according to our definition of complementarity. Then we know that if the amount of Y is held constant, a substitution in favour of X and against money (now other goods than X or Y) will raise the marginal rate of substitution of Y for money. Now the price of Y in terms of money is given and constant; so a rise in the marginal rate of substitution of Y for money must encourage a substitution of Y for money, if the marginal rate of substitution of Y for money is to be kept equal to the price of Y. Therefore, if Y is complementary with X, a substitution of Y for money tends to be accompanied by a parallel substitution of Y for money. The substitution in favour of X stimulates a similar substitution in favour of Y. On the other hand, if, on our definition, Y is a substitute for X, a substitution of X for money (Y constant) encourages a substitution in favour of money and against Y. The substitution in favour of X tends to be accompanied by a substitution against Y. It is our definition of complementarity which draws the exact line between these two situations. (Hicks 1939, 46) As Samuelson (1950, 379) later remarked, Hicks's introduction of money was definitely putting the Hicks and Allen theory at the same level of generality with the Schultz-Slutsky presentation, thus inaugurating the nowstandard Hicks-Slutsky theory of demand:
In 1939, Hicks seems to have abandoned this definition in favour of the Slutsky-Schultz definitions; for n = 3, the results of either definition are qualitatively the same. For n > 3, this is not true. If we define all but the two goods in question to be a Hicksian composite commodity, then the Slutsky-Hicks definition can be cast in Hicks-Allen terminology.
Another probable instance of Schultz's influence on Hicks, although it must be advanced cautiously, is that the income effects are so negligible that it is best to work with Hotelling's symmetry conditions. Although Hicks advocated a general theory of value, he did not intend to repudiate Marshallian simplifications at all. Regarding the Marshallian theory of demand, Hicks went on to confess in Value and Capital that "further Jean-Sébastien Lenfant / Complementarity and Demand Theory 79 48. "So long as the prices of other consumption goods are assumed to be given, they can be lumped together into one commodity 'money' or 'purchasing power in general'" (Hicks 1939, 33).
49. For a geometrical representation, see [START_REF] Hayek | The Geometrical Representation of Complementarity[END_REF][START_REF] Schultz | Stigler[END_REF] investigation has only increased my admiration for Marshall's theory; I hope the reader will find the same" (11). Regarding the constant marginal utility of money assumption, Hicks wants to show, through the general formulation of the law of demand, that Marshall had just made "an ingenious simplification" and had "quite good reasons" for doing so (27). And now that economists do have a general theory of demand, they can indulge in Marshallian simplification, in full knowledge of what they are neglecting (32-33). In some respects, Hicks's justification for preserving the law of demand is based on the idea that in most economic situations, income effects are small enough compared with substitution effects, which is tantamount to Schultz's statistical estimations. In other respects, Hicks is more cautious than Schultz and would not regard those results as a litmus test in favor of the law of demand. For instance, Hicks's statement about the law of demand is based also on considerations on general equilibrium, aggregation on goods and individuals, whereas Schultz does not consider carefully the many intrinsic limits of his model (regarding exogenous income or linear demand functions).
Theoretical considerations apart, it is quite probable that Hicks's earlier intuitions on income effects must have been reinforced by Schultz's comments. As Hicks later confessed on many occasions, the reconstruction of demand theory was clearly driven by the development of econometrics. Hicks (1956b, vi, vii) claimed, in the introduction to the French translation of Value and Capital, that "economic theory must be the servant to applied economics" and also that "the first part [of Value and Capital], which deals with demand theory, was inspired by the work of old econometricians, especially the articles from Henry Schultz." And in A Revision of Demand Theory, he again acknowledged that in Value and Capital "at crucial points the argument was put in a form which was influenced by what the econometrists had been doing," and he also asserted that the end of the story inaugurated by Pareto and followed by Slutsky, Johnson, Hicks, and others was to make "the Pareto theory more usable and . . . [to weave] the Marshallian and Paretian threads together" (Hicks 1956a, 3). 50 The econometric concern, if not the sole concern, was quite clear, as Hicks (1981, xii) puts it: It was not explained, in the Hicks-Allen paper, what prompted us to make our enquiry. It began in fact from econometrics. It was the work of Henry Schultz, on statistical demand study, which set us off. What we were doing was to reformulate demand theory so as to put it into a form which would be more usable by econometrists.
Thus it is not to be doubted that there were cross-influences between Schultz on the one side and Hicks on the other. Was this of any influence on econometrics and economic theory at the time? At least two strands of work can be seen as a good testimony for the solid implantation of the Hicks-Slutsky synthesis in price theory. First, Hicks and Allen's article influenced econometrists and their offspring in the late 1930s, especially, although not only, at the Cowles Commission (Hands and Mirowski 1998, 374). Gerhard [START_REF] Tintner | The Theoretical Derivation of Dynamic Demand Curves[END_REF] theory of dynamical demand curves is an extension of [START_REF] Hicks | A Reconsideration of the Theory of Value. Pts. 1 and 2[END_REF][START_REF] Marschak | Money, Illusion, and Demand Analysis[END_REF] refers to Hicks and Allen to estimate individual demand functions; Jacob [START_REF] Mosak | Interrelations of Production, Price, and Derived Demand[END_REF] provides an extension to the demand of productive services; and E. E. Lewis (not at Cowles), through a series of articles on intercommodity relationships (1937, 1938a, 1938b), is still another example of work done under the guidance of Hicks, Allen, and Schultz. Second, the whole development of general equilibrium, at least until the 1950s, is centered on discussing income and price effects, aggregation, and above all stability, always with Value and Capital as a reference. But this is another story. 51
Conclusion
A first conclusion deals with the idea of a stabilization of demand theory around the Hicks-Slutsky synthesis. My claim in this article is that the development of a modern theory of demand in the 1930s was not exclusively motivated by the use of an index utility function but that it was motivated also by the need to define the concept of complementarity and to give to it a definition adapted to the construction of a theory of choice. To that extent, the most important result of the Hicks-Allen definition is not the Hicksian decomposition itself; rather, it is the use of this decomposition to study both the law of demand and the law of related demands. This result has been made possible only through a complete transformation of the concept: its meaning and its analytic definition have been changed drastically. Notably, after Hicks and Allen 1934 it is no longer Jean-Sébastien Lenfant / Complementarity and Demand Theory 81 51. My narrative is stopping at a time when divergent approaches would challenge the Hicks-Slutsky presentation of demand. But challengers do not hesitate to take the Hicks-Slutsky theory as their main target [START_REF] Knight | Realism and Relevance in the Theory of Demand[END_REF]Friedman 1949;Samuelson 1938aSamuelson , 1938b) ) (although for different reasons and with divergent aims). possible to interpret complementarity as a binary relationship between two goods (and an individual) taken apart from the context of choice. On the contrary, the context of choice is a constituting part of the definition.
A second conclusion is about the progressive transformation of the concept. At both extremes-the introspective properties and the operational properties-one may feel that complementarity was quite evidently and naturally transformed by the ordinalist principles. Things are not so simple. From all those debates, it is clear that the properties of the concept are as important at the end of the story as at the beginning. 52 The road taken to develop a new concept is not straightforward or reducible to a technical challenge. It is rather the product of many constraints: on testability, on the operational aspects of economic concepts, on intuitive properties of complementarity, on homogeneity of definitions. For instance, is it necessary that one should tell if two goods are completing or competing in any circumstances or just in given economic circumstances? Shall we impose a priori that completing goods are as likely to occur as competing ones? What should be the heuristic behind the concept? Shall we privilege homogeneous definitions of all types of relations, or shall we accept, for instance, to define independence in a peculiar way or to refer to different types of complementarities or competitiveness? Shall we privilege concepts that are appropriate for statistical handling, or shall we rather search for definitions that fit some intuitive (or subjective, or introspective) representations? All those questions have been addressed here and there, and the answers have never been dictated by one single a priori principle to demarcate ordinalism and cardinalism. Quite the contrary, answers depended on many different constraints and ideas about what should be a good concept of complementarity.
In 1933 Schultz "tests" symmetry conditions on demand systems obtained by Hotelling (1932). Subsequently, facing bad statistical results, Schultz hints at the Hicks and Allen article and at the Slutsky article to test new kinds of theoretical relations: the Slutsky symmetry conditions. Between 1933 and 1938 Schultz develops a sophisticated analysis of complementarity and a justification for the simpler cross-price effect definition based on Hotelling's previous analysis. This led to Hicks's reformulating the theory of demand in Value and Capital, which resulted eventually in the classical theory of the consumer.
The Schultz-Hotelling Tests on Demand Functions
In 1932 Hotelling and Schultz were searching for structural relations between demand functions, and they found similar theoretical constraints that Schultz and his team at Chicago would endeavor to test. The Schultz-Hotelling cooperation on demand theory dates back to Hotelling's article "Edgeworth's Taxation Paradox and the Nature of Demand and Supply Functions." In this 1932 article, Hotelling proposed two conditions under which an apparently counterintuitive relationship might appear.
noted that Hotelling's symmetry conditions are true only under special circumstances, and that "although [they] . . . are approximately satisfied by several concrete, statistical demand analyses, they rest on weak foundations." In contrast, the more general Slutsky symmetry conditions lead to a new definition of complementarity. Two goods x and y are complementary (or completing) if (458)
\[
  \frac{\partial x}{\partial p_y} + y\,\frac{\partial x}{\partial R}
  \;=\;
  \frac{\partial y}{\partial p_x} + x\,\frac{\partial y}{\partial R}
  \;<\; 0. \qquad (3.3)
\]
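To make the notation explicit for readers coming to this condition cold, the following restatement is a modern textbook rendering rather than Schultz's or Hicks's own wording; the symbol R denotes income, and the substitution term s_xy is notation introduced here for convenience.

% Slutsky decomposition of the cross-price effect (modern notation, assumed here):
% the observable effect of p_y on the demand for x splits into a substitution
% term s_xy and an income term.
\[
  \frac{\partial x}{\partial p_y}
    = s_{xy} - y\,\frac{\partial x}{\partial R},
  \qquad\text{so that}\qquad
  s_{xy} = \frac{\partial x}{\partial p_y} + y\,\frac{\partial x}{\partial R}.
\]
% Slutsky symmetry asserts s_xy = s_yx; condition (3.3) then classifies x and y
% as complementary when this common substitution term is negative, and as
% competing (substitutes) when it is positive.

On this reading, complementarity becomes a property of compensated demand rather than of the cross-derivatives of a utility function, which is precisely the transformation of the concept described above.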
On this question, see [START_REF] Stigler | The Early History of Empirical Studies of Consumer Behavior[END_REF], [START_REF] Christ | Early Progress in Estimating Quantitative Economic Relationships in America[END_REF], and [START_REF] Morgan | The History of Econometric Ideas[END_REF].
In other respects, the Hicks and Allen article is outdated, especially for the debate on integrability and the role of the decreasing marginal rate of substitution for negative semidefiniteness of the Slutsky matrix (cf. Samuelson 1938a, 1938b; [START_REF] Schultz | Stigler[END_REF]).
On operationalism in general and its application in economics, see [START_REF] Caldwell | Beyond Positivism: Economic Methodology in the Twentieth Century[END_REF], chaps. 2, 9; [START_REF] Chipman | Slutsky's 1915 Article: How It Came to Be Found and Interpreted[END_REF]; [START_REF] Hands | Harold Hotelling and the Neoclassical Dream[END_REF]; [START_REF] Mongin | Le réalisme des hypothèses et la Partial Interpretation View[END_REF]; and Mongin 2000.
This is perfectly compatible with the fact that Hicks (1979, 202) always remained skeptical about econometrics.
| 96,811 | [
"745282"
] | [
"1188"
] |
01771855 | en | [
"shs"
] | 2024/03/05 22:32:16 | 2012 | https://hal.science/hal-01771855/file/Lenfant_HOPE-44-1_2012_IC%26ordinalism.pdf | Jean-Sébastien Lenfant
email: [email protected]
Clerse
Philippe Mongin
Indifference Curves and the Ordinalist Revolution
Schumpeter, as a spoilsport (and probably under Paul Samuelson's influence), looked pessimistically upon the internal coherence and methodological progress achieved by the ordinalist revolution (Schumpeter, 1067). Schumpeter actually raised a methodological issue more than a practical one. Indeed, once Vilfredo Pareto had faded from the scene, the status 1 of indifference curves within demand theory was rarely discussed for its own sake by the main protagonists of the ordinalist revolution.
2. More precisely, what is meant here is the escape from a certain kind of psychology that was widespread in the late nineteenth century and the beginning of the twentieth century, that is, psychological assumptions taken from psychophysiology and experimental psychology and whose main figures were (or had been) Helmholtz, Weber, Fechner, and Wundt.
3. "Pareto, having replaced the assumption of utility with the assumption of indifference curves, went one step further to suggest that, in principle, economists could replace the assumption of indifference curves with an experimental derivation of indifference curves" (Gross and Tarascio 1998, 171). As will be seen, this assertion is misleading and does not reflect adequately Pareto's own ideas about indifference curves. A different point of view can be found in Moscati 2007a. 4. The question of the status of indifference curves within the history of consumer demand has been tackled only indirectly or partially in some recent articles either in relation to integrability (Hands 2006) or in relation to the history of experimental economics (Moscati 2007a) and rational behavior [START_REF] Giocoli | Modeling Rational Agents[END_REF].
It was generally relegated to footnotes and asides, whereas other issues such as integrability (Samuelson 1950;[START_REF] Houthakker | Revealed Preference and the Utility Function[END_REF][START_REF] Chipman | Preferences, Utility, and Demand[END_REF], the measurement of utility [START_REF] Frisch | Sur un problème d'économie pure[END_REF]Schultz 1933;Frisch 1932;[START_REF] Lange | The Determinateness of the Utility Function[END_REF], and complementarity [START_REF] Johnson | The Pure Theory of Utility Curves[END_REF][START_REF] Slutsky | Sulla teoria del bilancio del consumatore[END_REF][START_REF] Hicks | A Reconsideration of the Theory of Value[END_REF]Samuelson 1974) were discussed at length.
The concept of the indifference curve was the touchstone of the escape from cardinalism and the psychological foundations of demand and choice. 2 Yet, once Hendrik S. [START_REF] Houthakker | Revealed Preference and the Utility Function[END_REF] and Paul Samuelson (1950) recognized that the whole theory of the consumer could be derived from the strong axiom of revealed preferences (without supposing the existence of indifference curves from the outset), indifference curves survived mainly, if not exclusively, because they made it easy to teach and learn certain ideas and principles involving choices between certain prospects (intertemporal choice, the leisure-consumption trade-off). For such a role it is endowed with convenient mathematical properties allowing for the use of duality theorems.
Thus, the indifference curve went from being held in high esteem to being of secondary importance, once the strong axiom of revealed preferences was developed. How and when did this hierarchical about-face take place? The question does not seem to have caught the interest of historians. It is often alleged that the construction of experimental indifference curves was on Pareto's agenda. 3 Even this is a simplistic analysis of Pareto, and to the best of our knowledge, there have been no attempts to recount in a broad way the story of indifference curves within the ordinalist revolution. 4 As elsewhere with the development of the theory of choice, the 1930s and 1940s resulted in the stabilization of the concept of the indifference curve. Actually, it must be stressed that the status of indifference curves within the theory of choice is usually regarded as depending upon much deeper issues such as the measurement of utility, the integrability of demand, and the definition of complementarity. 5
5. Integrability, measurement, and complementarity all concern "deeper" questions, with an immediate concern for the robustness and fruitfulness of the theory, i.e., the very existence and nature of the utility function. In the two-good case, we start with these questions more or less answered, with the notable exception of complementarity (Lenfant 2006), and then use indifference curves to represent the utility function. Thus, the methodological problems that the indifference curve involves are of a different nature than those that integrability, measurement, and complementarity involve. In the two-good case, if we assume that we can use indifference curves, then (1) we have already assumed a solution to the integrability problem; (2) we can use the curves to give one answer to the measurability question (utility is ordinal: the numbers attached to the indifference curves do not affect the choices that the consumer makes); and (3) we can learn (as Fisher and Pareto failed to learn) that we cannot really say anything much about complementarity. Of course, in the three-good case, the question of the status of the indifference surfaces is more on an equal footing with integrability, measurement, and complementarity. Yet, it was rather considered as a secondary problem, more methodological than theoretical. See also section 6.
Nevertheless, it would be unfair to regard views on indifference curves within the ordinalist revolution solely as by-products of other theoretical debates. 5 The question was occasionally treated for its own sake, and one can find here and there incidental remarks about the possibility, necessity, and usefulness of building the theory of demand upon experimental indifference curves. As regards this more specific issue, a number of arguments were raised in the 1930s and early 1940s in favor of a theoretical and nonexperimental status of indifference curves. The central piece of work that catalyzed the debate was Louis L. Thurstone's experimental derivation of an indifference map in 1931. Mainly, the decade following Thurstone's experiment sealed the status of indifference curves within demand theory: they became useful only for their pedagogical and heuristic properties.
At the beginning of this story are Henry Schultz, Harold Hotelling, and Milton Friedman, who were engaged in reconciling empirical demand studies and the Paretian theory of demand and utility. At the end of the story, one can consider the W. Allen [START_REF] Wallis | The Empirical Derivation of Indifference Functions[END_REF] article as the most influential criticism of the experimental nature of indifference curves, and Samuelson's theory of revealed preferences (e.g., Samuelson 1938a, 1938b, 1950) as the final plea for a theoretically observational concept of indifference curves. By the end of the 1940s, no economist would appeal seriously to any kind of naive experimental derivation of indifference curves as a way to derive individual or aggregate demand functions. 6 The aim of this article is to show how the concept of the indifference curve was progressively stabilized within demand theory. Section 1 deals with the foundations of choice theory in the Paretian tradition. Section 2 gives some insights about the revival of the Paretian school in the United States in the 1930s. Thurstone's experiment is presented in section 3, and its reception by economists is the subject of section 4. Section 5 concentrates on [START_REF] Wallis | The Empirical Derivation of Indifference Functions[END_REF] article, which represented one challenge to indifference curves. Section 6 deals with the other challenge to indifference curves, Samuelson's theory of revealed preferences. The conclusion gives some insights into the experimental or observational status of indifference curves in the 1940s and after.
6. Ivan Moscati (2007a) has shown that a revival in economic experiments on consumer choice would take place later in the 1950s and 1960s. But the theoretical stakes at the time were quite different, and Moscati devotes only a few lines to the most natural extension of experimental work on pure consumer theory, that is, stochastic choice and binary choice. Indeed, work in that field did not bear much on the development of experiments on consumer behavior.
7. Historians have pointed out that ordinalism and the escape from psychology are two different things. According to Shira B. [START_REF] Lewin | Economics and Psychology: Lessons for Our Own Day from the Early Twentieth Century[END_REF], Pareto and Fisher can be classified as "psychological ordinalists" (as opposed to "psychological hedonists") because preferences still have a hidden psychological meaning to them. Luigino Bruni and Francesco Guala
Indifference Curves within the Paretian School
Our aim here is to grasp some specific aspects of demand theory in relation to indifference curves, as they have been dealt with from Pareto onward and as they were understood in America in the late 1920s and early 1930s. Before entering into this story, it seems necessary to say a word about the ordinalist revolution and about the English contributions to ordinalism.
The ordinalist revolution originates in the criticism of the psychological foundations of the theory of demand, namely, the principle of decreasing marginal utility as Alfred Marshall ([START_REF] Marshall | Principles of Economics[END_REF] 1898) used it. The rejection of hedonist hypotheses led Irving Fisher ([START_REF] Fisher | Mathematical Investigations in the Theory of Value and Prices[END_REF]) and Pareto ([START_REF] Pareto | Sunto di alcuni capitoli di un nuovo trattato di economia pura del Prof. Pareto[END_REF] 1896-97, 1900, 1909) to favor an objective or "positive" approach to economic concepts. The "ordinalist revolution" (Omarzabal 1995, 116) is grounded in a methodological transformation of economics that put the facts of objective experience as a foundation of economics and provided a research program for the ensuing years ([START_REF] Green | The Reorientation of Neoclassical Consumer Theory[END_REF]; [START_REF] Lewin | Economics and Psychology: Lessons for Our Own Day from the Early Twentieth Century[END_REF]). 7 Mathematically, ordinalism is entirely based upon the idea that one can dispense with the use of a specific utility function and that no meaning shall be attached to utility measurement, except as an ordinal principle.
(2001) have convincingly applied this idea to Pareto's writings. It is true that ordinalism, taken in a very strict sense (that of using an index utility function), does not imply that psychological arguments are abandoned. We nevertheless argue that this formal opposition between psychological hedonists and psychological ordinalists might be exaggerated as long as one does not discuss precisely what kind of psychological assumptions or statements can be used in economics. Ordinalism would never have been promoted if theoreticians had not felt too much constrained by the main assumptions of hedonism. Thus, with all due care, ordinalism and the escape from psychology have been fellow travelers, and this is what is usually meant by the phrase "ordinalist revolution."
8. Moreover, Fisher (1891, 4) discovered the concept of the indifference curve apparently independently from Edgeworth, thus showing that it was just a tool that could be used either in a cardinal or in an ordinal context.
9. [START_REF] Hicks | A Reconsideration of the Theory of Value[END_REF] had done most of the job but not finished it. Along the way, the main protagonists of this story (other than those already mentioned) are [START_REF] Johnson | The Pure Theory of Utility Curves[END_REF], [START_REF] Slutsky | Sulla teoria del bilancio del consumatore[END_REF], Schultz (1938), [START_REF] Friedman | The Fitting of Indifference Curves as a Method of Deriving Statistical Demand Curves[END_REF], [START_REF] Little | A Reformulation of the Theory of Consumer's Behaviour[END_REF], and [START_REF] Georgescu-Roegen | The Pure Theory of Consumer's Behavior[END_REF]. For more on Johnson, Slutsky, et al., see Stigler 1950.
10. Parts of this story can be found in [START_REF] Chipman | The Paretian Heritage[END_REF], [START_REF] Hands | Harold Hotelling and the Neoclassical Dream[END_REF], [START_REF] Chipman | Slutsky's 1915 Article: How It Came to Be Found and Interpreted[END_REF], Hands 2006, and Lenfant 2006.
11. We will discuss this assertion at the beginning of the next section.
Clearly, the development of ordinalism must be separated from the introduction of the concept of the indifference curve. Ordinalism was first advocated in Fisher's "Mathematical Investigations" (1892) and Pareto's Sunto (1900) and Manual ([1909] 1971), while the indifference curve had appeared in F. Y. Edgeworth's Mathematical Psychics (1881). It was thus only through Fisher's and Pareto's recasting that the concept of the indifference curve became irreversibly associated with the promotion of ordinalism. 8 Along the way, the recasting of the theory of choice along ordinalist lines raised a number of issues (about integrability, measurability, and complementarity) that would be progressively settled. A reasonable closing date for the ordinalist revolution is 1950, after [START_REF] Houthakker | Revealed Preference and the Utility Function[END_REF] and Samuelson's (1950) contributions. 9 From the late 1920s, the Paretian school was progressively gaining a larger audience while the use of the concept of marginal utility and other derivative concepts was challenged. Consequently, demand theory was recast along the principles of individual preferences and ordinal utility functions. 10 Nevertheless, English authors proved very silent about the meaning of indifference curves. Most if not all of the reflections after 1920 about the nature of indifference curves took place in America, mainly under the impulse of Henry Schultz at Chicago. This is an American story. 12. From the outset, we want to clear up any misunderstanding about the link between an analytical tool (indifference curves) and the escape from psychology (the abandonment of psychophysical assumptions). The simple fact of using indifference curves is not a plea against cardinalism. Indeed, Edgeworth, a founding father of indifference curves, was an unrepentant cardinalist. In fact, most if not all of his arguments about the use of indifference curves as a cardinalist concept are only possible once one compares two indifference curves for the same individual. So, it must be clear that I am not pointing out any contradiction per se between psychology and indifference curves. I am only stressing that indifference curves would be exploited in order to promote an ordinalist representation of utility and a behaviorist foundation for the theory of choice and demand.
13. As far as we know, neither Pareto nor Fisher ever commented on current developments in psychology. In the Manual, Pareto ([1909Pareto ([ ] 1971, chap. 4, para. 33, chap. 4, para. 33) mentions Fechner, Delboeuf, and Wundt in a footnote. Those works are mentioned only as references for the Weber-Fechner law of decreasing satisfaction. We can only speculate about Pareto's acquaintance with Wundt's theory presented in his Grundzüge der physiologischen Psychologie. Wundt was then the most important experimental psychologist, and he had shown that simple laws of the Weber-Fechnerian type (psychophysical or associative laws) could not apply to most psychological phenomena, which are governed by apperception [START_REF] Lachelier | Les lois psychologiques dans l'école de Wundt[END_REF]Boring 1950, chap.16).
As is well known, Pareto's and Fisher's main idea was that knowledge of observed behavior was enough to derive the equilibrium of markets and the laws of a market economy. This idea was based upon the intuition that indifference curves were in principle obtainable from observed behavior and that indifference maps could be represented by indexing utility functions. Consequently, they expected to ignore the psychological foundations of choice and of price theory. 12 It would take too long to enter precisely into Fisher's and Pareto's ideas about psychology. However, one or two aspects must be mentioned here briefly. Pareto ([1909] 1971, 29) and Fisher (1892, 5) showed a common reluctance toward a psychological foundation of utility, even though Pareto was rather looking for a temporary separate development of economics and psychology. Psychology at the time was considered as the science of psychical phenomena and sensations, and to some extent it was associated with the developments of psychophysics, which is a branch of psychology dealing with the measurement of mental states in relation to external stimuli. 13 It is thus interesting to see how Fisher and Pareto justified the construction of indifference curves and the existence of a utility index. Both seemed to believe that experimentation was impossible in practice but possible in theory. Fisher (1892, 67-68) suggested a "metaphoric" experiment, by which an individual would be asked to determine his consumption bundle and then, fixing the quantity of all goods but two, he would be asked to modify his combination of the two goods in order to keep the same level of utility. Nevertheless, in 14. Pareto ([1900] 2008) already argued that indifference curves could be obtained through experiments or statistical studies. As long as statisticians have not established lines of indifference, "for lack of more precise notions, the science possesses only some general data suggested by crude and everyday observations of facts" (478).
15. It does not prevent one from appealing to psychological arguments in the field of applied economics [START_REF] Bruni | Vilfredo Pareto and the Epistemological Foundations of Choice Theory[END_REF].
16. Introspection in utility theory is usually-and incorrectly-associated with the use of psychological arguments. Introspection is actually a method, whereas psychology is the study of perceptions and representations. Introspection as a method of investigation has been challenged in economics as well as in psychology. Now, the fact is that Fechner did use introspection whereas Wundt rejected it. More generally, Pareto ([1909] 1971, chap. 3, para. 31) always maintained that the methodology of economics could not be modeled on the methodology of physics, and that any economic theory must be regarded as a method of search rather than as a method of demonstration.
the end Fisher never relied on any kind of experiment, and he eventually derived the shape of indifference curves from the properties of demand that he attached to the extreme cases of perfect substitutes and perfect complements (71).
Pareto's own construction and discussion of indifference curves is developed in the Manual. 14 There he used the word experiment in a broad and fluctuating sense. On many occasions, Pareto ([1909] 1971) suggested finding individual indifference curves "by experiment," and he indicated many ways to achieve that task. For instance, Pareto referred to a hypothetical experiment bearing directly on tastes (391-92), whereas elsewhere he suggested constructing indifference curves from observed choices for different incomes and prices . He knew that experiments would not permit one to obtain the differential equation of an indifference curve but only the ratio of marginal utilities at a point. He ended with the idea that economists should be content with virtual experiments: "The fairly great difficulty, the impossibility even, that may be found in carrying out these experiments in practice, is of little importance; their theoretical possibility alone is enough to prove, in the cases which we have examined, the existence of the indices of ophelimity, and to reveal certain of their characteristics" (415; my italics).
So, the final methodological position of Pareto is that the theoretical possibility of an empirical construction of indifference curves is at least enough for the foundation of the theory of choice. 15 Eventually, when he comes to a precise description of indifference curves, Pareto appeals to "every day experience" and to introspection to discuss the shape of indifference curves (572). 16 In this respect, it is of the utmost importance to keep in mind what indifference curves are supposed to be as theoretical 17. Pareto ([190917. Pareto ([ ] 1971, 105, 123) , 105, 123) frequently reminds the reader about this meaning of indifference curves. constructs. Pareto never meant that indifference curves are the visual description of a mental state of mind regarding tastes at a given instant of time. On the contrary, he insisted again and again in the Manual that the description of individual tastes is about stabilized tastes, as they can be derived from repeated acts of choice in a context of stable economic conditions:
We will study the many logical, repeated actions which men perform to procure the things which satisfy their tastes. . . . In other words, we are concerned only with certain relations between objective facts and subjective facts, principally the tastes of men. Moreover, we will simplify the problem still more by assuming that the subjective fact conforms perfectly to the objective fact. This can be done because we will consider only repeated actions to be a basis for claiming that there is a logical connection uniting such actions. A man who buys a certain food for the first time may buy more of it than is necessary to satisfy his tastes, price taken into account. But in a second purchase he will correct his error, in part at least, and thus little by little, will end up by procuring exactly what he needs. We will examine this action at the time when he has reached this state. Similarly, if at first he makes a mistake in his reasoning about what he desires, he will rectify it in repeating the reasoning and will end up by making it completely logical. (103) Pareto could not have been more explicit. He would come back occasionally to the same idea, notably about the theory of choice, always dismissing any data that were not the result of a reflexive process on the part of the subject. 17 At this stage, the status of indifference curves remained uncertain, if not contradictory. On the one hand, indifference curves were conceived of as the guarantee for a positivist foundation of demand theory, while on the other hand they were used just as an instrument for proving the existence of an index-utility function. This ambiguity was manifest in the theory of choice from the outset.
From Pareto onward, two lines of thought about utility and demand coexisted and reinforced each other. The dominant line consisted in considering the main properties of indifference curves as given by internal experience and introspection and easily summarized by utility functions, HOPE441_05Lenfant_1pp.indd 120 10/26/11 10:42:04 PM 18. The idea was to construct indifference curves from a series of budget data under different income-price situations. This project was intimately linked with budget studies and would be promoted by scholars such as Abraham Wald,Jacob Marschak,and René Roy. 19. It is beyond the scope of this article to deal with the history of the integrability issue, which has been well documented elsewhere (see [START_REF] Chipman | The Paretian Heritage[END_REF][START_REF] Hands | More Light on Integrability, Symmetry, and Utility as Potential Energy in Mirowski's Critical History[END_REF][START_REF] Hands | Harold Hotelling and the Neoclassical Dream[END_REF][START_REF] Chipman | Slutsky's 1915 Article: How It Came to Be Found and Interpreted[END_REF], Mongin 2000aand 2000b, and Hands 2006).
from which general properties of demand could be inferred. The other, less frequent, way of thinking consisted of the idea that indifference curves could be inferred from observed behavior. In accordance with this marginal line of thought, two methods were on offer. The first one consisted of deriving indifference curves from budget data (relating prices, individual incomes, and quantities bought of different goods). 18 The second one consisted of finding out indifference curves by way of controlled experiments. Indeed, Pareto mentioned both the introspective (dominant) line of thought and the two methods pertaining to the marginal line of thought, and he left it to his followers to discuss their relative merits.
Of course, all these ideas were intermingled to such an extent that most authors were open to different methods. More, it must be stressed that the status of indifference curves appeared as partially dependent upon the debate on the integrability of demand, that is, the possibility of recovering utility functions from observed demand behaviors. 19 Incidentally, the issue of the status of indifference curves came back in the early 1930s, in the United States.
The Revival of Paretian Demand Theory in the United States
The development of the Paretian theory of demand in the 1920s and 1930s took place mainly in the United States and Great Britain. As was pointed out earlier, the status of indifference curves would be discussed in the United States only.
Indeed, as far as English authors are concerned, there was a common reluctance toward discussing the foundations of indifference curves. As a testimony to this, a few words are in order about Edgeworth, Johnson, Allen, and Hicks. Edgeworth, as the inventor of the concept, had a very peculiar purpose in mind. In Mathematical Psychics, "indifference lines" appear only after the presentation of total (generalized) utility functions for each agent and after the formal definition of the "contract curve." Edgeworth introduces indifference curves, defined as the locus of points HOPE441_05Lenfant_1pp.indd 121 10/26/11 10:42:04 PM 20. The only exception to this is [START_REF] Allen | The Foundations of a Mathematical Theory of Exchange[END_REF], in which he argues for the nonintegrability of the field of observed indifference directions. For a comment on Allen's discussion of integrability, see [START_REF] Chipman | The Paretian Heritage[END_REF]Lenfant 2002, 570-73. Note, however, that Allen (1932, 198) is skeptical about the possibility of relying on experimentation in economics and of dispensing with psychology: "In the first place, economic experiments under control are almost out of the question, and further, it is difficult in economic theory to prevent the introduction of subjective or psychological elements."
21. [START_REF] Hicks | A Reconsideration of the Theory of Value[END_REF] take it for granted that indifference curves and surfaces are superior in a heuristic sense to utility functions when studying demand behavior, Giffen goods, and other concepts such as complementarity and independence (in a three-good case at least). Hicks and Allen's justification for the shape of indifference curves is motivated by the simplicity of the assumption, so that the concept of the demand function will be precisely defined (to dispose of any demand correspondence and discontinuity). As is well known, it is based mainly on the idea that the marginal rate of substitution must be decreasing, because it is not falsified in general by experience. Actually Hicks and Allen were not exactly on the same wavelength on this issue (see [START_REF] Chipman | Slutsky's 1915 Article: How It Came to Be Found and Interpreted[END_REF]. Strange as it may seem, Allen's 1934 article titled "The Nature of Indifference Curves" is nothing but a discussion of their mathematical properties (it does not contain anything about their empirical content). As for Hicks, he would separating the set of exchanges that are acceptable from the set of exchanges that an agent would refuse, in order to demonstrate that the contract curve is really a curve (with no thick part) and can be understood as the result of a series of exchanges between parties. Consequently, indifference curves are derived from the concept of utility (they are not a primary given set of data), and they are devised for a very specific purpose. [START_REF] Johnson | The Pure Theory of Utility Curves[END_REF] is a good example of the reluctance among English economists to speculate over the semantics of indifference curves, since he takes it as a purely mathematical concept. He probably took the concept from Edgeworth (although he does not adopt the terminology, preferring that of "constant utility curve") and does not even mention Pareto. Indifference curves are simply regarded as a series of convex curves exhibiting different properties about adjacent marginal rates of substitution.
Allen and Hicks (separately and in their joint article as well) manifest no interest either in the semantics or in the methodology attached to indifference curves. For instance, [START_REF] Hicks | A Reconsideration of the Theory of Value[END_REF] maintains that indifference curves are simply the expression of constant utility curves, whose various forms indicate various mutual relationships between goods. So, once again, indifference curves are presented as just a mathematical tool (a series of mathematical properties about curvature and adjacent curves) without the slightest word about their methodological foundation. 20 In their monumental article, [START_REF] Hicks | A Reconsideration of the Theory of Value[END_REF] would make only incidental comments on the methodology of the theory of choice (mainly a question of internal coherence). 21
HOPE441_05Lenfant_1pp.indd 122 10/26/11 10:42:04 PM improve the argument a little bit in Value and Capital: the decreasing marginal rate of substitution stems from the idea of choice (if people were indifferent between two baskets at the same price, they would not be able to choose; the fact that they make a choice is an argument for convexity). For a more complete discussion of this, see Wong 1978, 36-41;Lenfant 2000, 267-69;andMoscati 2007b, 128-29. 22. Marco Fanno (1926) deserves to be mentioned for his work on the demand for substitutes. Besides, Schultz acknowledged Fanno as the main motivation for his own research program. In his long essay, Fanno did not question the possibility of constructing indifference curves, but he nevertheless introduced an interesting distinction between tastes, as they can be considered in the abstract and as they are practically influenced by economic conditions (income, prices, needs). On this basis he proposed two kinds of indifference curves: indifference curves according to tastes and indifference curves according to consumption (353-54). Nevertheless, the distinction remains very obscure and does not seem to have been mentioned elsewhere.
More broadly, the idea of linking rational behavior and statistical demand studies is a characteristic of the development of demand theory in America. In Europe, there were disseminated efforts, as we see from Shultz's 1938 book that reviewed them, and almost no connection between the Paretian school, statistical studies, and other kinds of reconstruction (Marco Fanno excepted). 22 It is quite clear that until 1932, the contributions of Fanno, Ricci, Dominedo, Allen, and Johnson were not oriented by a common set of questions and analytical tools. Things were to change slightly only after the discovery of Slutsky's article. Even the collaboration between [START_REF] Hicks | A Reconsideration of the Theory of Value[END_REF] appears to have been very brief and led neither to a close convergence on certain important questions [START_REF] Chipman | Slutsky's 1915 Article: How It Came to Be Found and Interpreted[END_REF] nor to statistical studies.
Thus, it is worth concentrating exclusively on the debate among American scientists. The main protagonists of this development are Henry Schultz, Harold Hotelling, and Milton Friedman. Friedman is important for our narrative, to the extent that he took part in the discussion about demand and utility in the United States from the outset and eventually provided influential arguments against an experimental or even empirical derivation of indifference curves [START_REF] Wallis | The Empirical Derivation of Indifference Functions[END_REF]).
Friedman's training in demand theory was shaped by Schultz at Chicago and also by Hotelling at Columbia. A few words about both of them and few others are in order to grasp Friedman's interest and personal commitment to demand theory.
In the late 1920s and early 1930s, the main line of development of the new theory followed the original Paretian ideas, as well as some later contributions in the Paretian spirit, more or less neglected, such as Johnson HOPE441_05Lenfant_1pp.indd 123 10/26/11 10:42:05 PM 23. Apparently, [START_REF] Johnson | The Pure Theory of Utility Curves[END_REF][START_REF] Lenoir | Etudes sur la formation et le mouvement des prix[END_REF] were unknown to American economists. Friedman discovered Lenoir's thesis only during the preparation of his own paper [START_REF] Friedman | The Fitting of Indifference Curves as a Method of Deriving Statistical Demand Curves[END_REF]). Lenoir's book contains a discussion of the relative slopes of adjacent indifference curves and of the question of satiation, introducing a curve of satiation that would be used later independently by Friedman and Allais.
24. In his famous 1915 article, Slutsky derived the fundamental equations of value, decomposing the price effect on demand into two composite effects (the revenue effect and the substitution effect). Slutsky's analysis was explicitly a continuation of Pareto's theory of utility and demand. A noticeable difference, however, is that Slutsky never mentions indifference curves.
25. In this respect, Schultz acknowledged the influence of Marco Fanno (1933, 164), who developed a correspondence between substitutability as represented with indifference curves, and substitutability as measured through the proportional variation of prices following any external shock on the demand for a good. One can note that Schultz ignores the influence of income distribution upon aggregate demand (see Lenfant 2006).
1913 and Lenoir 1913. 23 Slutsky's 1915 article 24 was still unknown to the main workers in the field. New ideas about utility and demand were introduced progressively within the American academic world, spurred on by Henry Moore and Henry Schultz. Moore and his student Schultz were both involved in the development of theoretical foundations for statistical demand curves, and by the end of the 1920s Schultz's agenda was to improve the derivation of statistical demand curves, basing them on the Paretian theory of choice and utility.
On this occasion, Schultz gave central importance to utility theory for the statistical analysis of demand. [START_REF] Schultz | The Italian School of Mathematical Economics[END_REF] idea was that the structural properties of demand were linked to the properties of individual utility functions. In short, Schultz's program was to derive the main laws of "related demand" (demand for related goods) from individual preferences. By referring here and there to utility theory and to the influence of substitution upon demand curves, Schultz showed that the connection of utility theory with the statistical analysis of interdependencies was a promising direction for research. 25 Around 1930, Schultz (1931, 83) was clearly looking in that direction, echoing Pareto's earlier statement: "The properties of the utility functions and indifference curves are very intimately related to certain characteristics of the laws of demand and supply. . . . In [my] opinion, a study of these theoretical relationships [about demand for related goods] will throw a flood of light on the problems connected with the derivation of demand curves from statistics."
Schultz developed this program in two steps between 1932 and 1935, and it must have been at the core of Schultz's discussions with his young student Milton Friedman. Notably, during the winter of 1933, Friedman wrote a draft on demand theory under the title "The Fitting of Indiffer-26. As noted by D. Wade Hands and Philip Mirowski (1998, 352), "Edgeworth produced his counter intuitive numerical example in order to undermine the whole idea of a stable demand curve, but Hotelling recasts the problem set by Edgeworth as one of finding out the conditions under which interdependent demand curves could rule out the appearance of the 'paradox.'" ence Curves as a Method of Deriving Statistical Demand Curves." Friedman's often quoted 1933 manuscript is known by economists only indirectly through Schultz's 1938 book, The Theory and Measurement of Demand, where it was partially used in chapters 18 and 19.
Friedman's contribution to Schultz's research program was also reinforced during the year he spent with Hotelling in 1933 at Columbia (the academic year when Schultz was in Europe). Hotelling graduated as a mathematician at Princeton; afterward, he was at the Stanford mathematics department in 1927 and later appointed at Columbia in 1931 (after the early retirement of Moore), where he taught mathematical statistics and mathematical economics. Previously, he had worked with Holbrook Working at the Stanford Food Research Institute, collaborating on the estimation of crop yields, food requirements, amd demand and supply functions for agricultural products . He moved gradually to the mathematics department at Stanford before leaving for Columbia (Darnell 1990, 5-7). By the time Friedman arrived at Columbia, Hotelling had already published his famous article on "Edgeworth's taxation paradox" (Hotelling 1932). Edgeworth's paradox deals with the possibility that the imposition of a tax on one good supplied by a monopolist who also supplies a related good (a railway owner supplying first-and second-class travel) may lead to a lowering of the market price of the taxed good.
Hotelling's aim was to understand in depth under what circumstances Edgeworth's paradox might appear. The analysis proceeds within a system of interrelated markets. Thus, one needs to disentangle as far as possible the effects of the interdependence of markets (the system of demand functions) from the influence of specific preferences. The related question is to identify properties of individual preferences that would make the paradox likely or unlikely, and possibly to tell if such preferences are to be met in certain markets. This was the main motivation for his interest for demand analysis, on the basis of the Walras-Pareto framework. 26 Friedman's 1933 paper echoes the common set of questions asked by Hotelling and Schultz, notably because it shows how experimental or other kinds of empirical knowledge could or should be accounted for within demand theory. The main question raised incidentally is whether, and to what extent, economists ought to work with or without the help of indifference curves in theoretical and applied demand studies. Once he has presented the possible relationships between adjacent indifference curves, Friedman (1933, [14n5]) remarks that "from a purely abstract theoretical point of view it is impossible to set any condition whatsoever upon the indifference curves. It is only by appealing to concrete knowledge of the way individuals act, that is, to psychology, that it is possible to assert that any form of indifference curve is improbable or impossible. Mathematics, and 'pure' theory in general, can give only form, no content." Thus, Friedman is staging Pareto's methodological hesitations. On the one hand, it seems that the main simple properties could be justified by introspection (downward-sloping indifference curves, even convexity). On the other hand, more precise properties should be backed with empirical knowledge either by observation of actual behavior or by experiments (and subsequent statistical treatment of both). The respective contribution of introspection and empirical knowledge has not been established at this stage. We can even stress that the use of the word psychology in this quotation is puzzling and should be understood rather in a behaviorist sense, even though introspection is not altogether thrown away.
In brief, Hotelling, Schultz, and Friedman were the three main figures reflecting upon the relationship between the Paretian theory of demand and empirical studies of demand in the early 1930s. At the end of 1933, they felt that the introspective justification for indifference curves-the one that Pareto had recommended in the final analysis-was probably too loose and inadequate for practical needs. This idea was not yet well formed, even though they could have benefited from two attempts at experimenting on this subject. One experiment had been conducted by a well-known agricultural economist, Elizabeth Waterman Gilboy. Gilboy's (1932a) study is based on questionnaires and was regarded as a prototype for further research. The other had been conducted by a famous professor of psychology at Chicago, Louis Leon Thurstone.
In Gilboy's study, individuals were asked how many different items (travel, rent, clothes, savings, beverages, entertainment) they (and their families) would buy, with unchanged tastes and "on the basis of [their] present income and standard of living" (379), under different assumptions about income variations and price changes. Despite the call for further research, Gilboy's method would be largely ignored, probably because the use of questionnaires was severely criticized. The other way to go into experimental demand theory was to go directly into experi-27. Much of this section and the next deals with authors and material aptly discussed in Moscati 2007a. Nevertheless, we believe some original conclusions can be drawn from our analysis.
28. Among the founding members of the journal were John Dewey, Franklin Henry Giddings, Lucien Lévy-Bruhl, Bertrand Russell, and Thurstone himself. ments on preferences, and that was precisely what was done at Chicago under the impulse of Henry Schultz.
The Thurstone Experiment
In 1930, Louis Leon Thurstone, a professor of psychology at the University of Chicago, conducted the first experiment whose goal was to construct and fit indifference curves and to show "that it is possible to write a rational equation for the indifference function which is based on plausible psychological postulates" (Thurstone 1931, 165). 27 The results of his experiment were published in 1931 as "The Indifference Function" in the newly founded Journal of Social Psychology. 28 Thurstone's inquiry was supposed to catch the attention of those interested in choice and demand theory.
Thurstone's motivation came from Schultz at Chicago, and his experiment was soon presented to economists at the meeting of the econometric society, in Syracuse in June 1932, under the title "An Experimental Study of Indifference Curves." A brief summary was then published in the first issue of Econometrica [START_REF] Mayer | The Meeting of the Econometric Society in Syracuse, New York[END_REF].
Thurstone was a colleague and friend of Schultz, and it is through their discussions that he became interested in conducting experiments on indifference curves. As he noted at the beginning of the 1931 article, "The formulation of this problem is due to numerous conversations about psychophysics with my friend Professor Henry Schultz of the University of Chicago. It was at his suggestion that experimental methods were applied to this problem in economic theory" (139).
First, we will present the ins and outs of Thurstone's experiments, stressing some aspects of his method in relation to indifference curves. Then we will appraise Thurstone's project in relation to the theory of choice.
Let us briefly present Thurstone's experiment and then make a few comments. Thurstone's article is strictly limited to the presentation of an experiment on individual preferences. Thurstone cautiously avoids appealing to the Paretian theory of choice, and the whole experiment is based upon psychological and psychophysical hypotheses that were precisely HOPE441_05Lenfant_1pp.indd 127 10/26/11 10:42:05 PM 29. More precisely, "motivation [is] the amount of anticipated satisfaction per unit increase in the commodity" (Thurstone 1931, 165), and it has been assumed "that increments in satisfaction are measured in terms of the psychological unit of measurement, the discriminal error, or multiples of that unit" (166).
those that Pareto and Fisher wanted to eliminate. He nevertheless cites Fisher's "Mathematical Investigations" as his unique reference (aside from two articles by himself). First, Thurstone presents in its basic form the law of decreasing marginal utility, using his own terminology. He calls it the "law of satisfaction" and builds it upon Fechner's law, with the idea that each individual is able to measure quantitatively the "motivation," that is, the increase in utility, associated with an increase in the consumption of a given good. This is done by taking as a measuring rod the "discriminal error," that is, the most noticeable difference in satisfaction. 29 He then explains the concept of indifference curves. He arrives at the general expression of a set of indifference curves, using Fechner's law, which gives the simple Cobb-Douglas utility function (Thurstone 1931, equation 11, 147). The procedure aims at obtaining experimentally the shapes of both the satisfaction curve (indicating the increase in utility compared to an initial position) and the indifference curve, and also at fitting the indifference curves. Thus, the point is that the mathematical form of the indifference curves is given from the outset, independently from any experimental device. The only aim of the experiment is to provide enough information to derive the coefficient of motivation for each good and the level of satisfaction of the subject. The whole theoretical and experimental construction is based upon the assumption that utility is additively separable.
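To see why a Fechnerian law of satisfaction delivers indifference curves of the Cobb-Douglas type, the following sketch reconstructs the logic in modern notation; it is my reconstruction, not Thurstone's own equations, and the constants k_1, k_2 and reference quantities a_1, a_2 are purely illustrative.

% Fechner's law: satisfaction from quantity q_i grows with the logarithm of q_i,
% measured from a reference level a_i (consumption above "absolute necessity"):
%   u_i(q_i) = k_i \log(q_i / a_i).
% Additivity of satisfactions (the "summative" hypothesis) then gives
%   U(x, y) = k_1 \log(x/a_1) + k_2 \log(y/a_2),
% and an indifference curve U(x, y) = c can be rewritten as
\[
  k_1 \log\frac{x}{a_1} + k_2 \log\frac{y}{a_2} = c
  \quad\Longleftrightarrow\quad
  x^{k_1}\, y^{k_2} = C,
\]
% a family of hyperbola-like curves of the Cobb-Douglas form, fully determined
% by the "motivation" constants k_1 and k_2 and the level constant C.

On this reading, fitting the experimental indifference curves amounts to estimating the ratio of the motivation constants from the subject's recorded choices, which is why the mathematical form of the curves is fixed before any data are collected.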
The experiment itself consists in asking a subject to compare two sets of two commodities, always starting from a constant combination to which another one is compared. The subject had only to tell which one he would take, thus revealing his preferences. Thurstone indicated to the subject that no economic information (notably, no budget constraint) should influence his choice. By repeating this procedure a number of times, it was possible to fill in the consumption set with minuses and pluses and to draw the experimental indifference curve associated with the constant initial combination. By repeating it again for other initial sets, it was then possible to draw four indifference curves (for hats and shoes). Then Thurstone proceeded to the same experiment with hats and overcoats and lastly with overcoats and shoes. The next step was to compare the theoretical indifference curve for overcoats and shoes (derived from Fechner's law and the first two indifference curves) with the exper-30. An interesting point can be made about the experimental procedure chosen: "The subject whose records are here analyzed was entirely naïve as regards the psychological problem involved and had no knowledge whatever of the nature of the curves that we expected to find" (Thurstone 1931, 154). Moreover, the experiment was carried out in such a way that the subject was unable to change his mind about his choice and that no learning could occur. Undoubtedly, this contradicts what Pareto had in mind when he wrote about indifference curves. For instance, it is not surprising that there were "inversions in responses," since the subject could make only one judgment for each pair of goods and the experiment was not carried out again. So inversions were to be expected, "especially in view of the fact that the difference in satisfaction represented by neighbouring points is not very marked" (155). Fitting was done by the method of averages, taking as data all the points in the subject's answers.
31. Obviously, independence is understood here according to the Auspitz-Lieben criterion (the second-order cross-derivatives of the utility function are zero).
32. We use the phrase "declared indifference" or "declared preference" in order to stress the methodological difference with Samuelson's theory of "revealed preference." imental indifference curve. 30 Here, the most important test, according to Thurstone, was about the independence of different goods: 31 Since we have the experimental data for all three sets of comparison, we can ascertain how closely the third set of indifference curves can be predicted from the known constants, derived from the first two sets of comparisons. This constitutes the test of the fundamental psychological hypothesis that is involved, namely, that the satisfactions from several commodities are summative when all of the quantities involved are above the level which the subject regards as the level of absolute necessity. ( 163)
The outcome of all this analysis was that indifference curves for shoes and overcoats were correctly predicted from the other experimental indifference curves, except for high constant combinations of shoes and overcoats.
At this point, a few methodological remarks are in order. They all revolve around the notion of indifference. Indeed, Thurstone raises serious doubts about the possibility of one person identifying introspectively a series of combinations of goods that he would declare indifferent to each other. Such a procedure of declared indifference, 32 he argues, would be biased by a desire for numerical consistency, which is not the case with declared preference. The scientist would have to deal with too much instability. That is the reason why he resorts to the "constant method," which consists in asking a subject to declare his preferences between two combinations, one combination being always the same. Consequently, when facing two combinations, the subject was not allowed to HOPE441_05Lenfant_1pp.indd 129 10/26/11 10:42:05 PM declare that they were indifferent to him. Thurstone, by not allowing subjects to declare indifference, made the results of his experiment easier to process. If declared indifference had been a possible choice for the subject, Thurstone would have had to deal with indifference points and to adapt his procedure for curve-fitting. Before we turn to the reactions of economists to Thurstone's results, it is worth stressing some methodological aspects of Thurstone's experiment that will help us appraise its usefulness for the theory of choice. First, Thurstone did not consider that each individual may be unable to reveal, even through a mental experiment, his own rational preferences (in the sense of no crossing of indifference curves). And from the outset, he even imposed very specific features on the shape of indifference curves (hyperbolic preferences). In short, Thurstone's experiment was not a device for testing some properties of rational preferences (convexity, transitivity, completeness). These properties were implemented from the beginning in the hyperbolic shape of the curves. Second, Thurstone could not ignore that the usefulness of cardinal concepts and hedonic preferences was hotly debated. And third, Thurstone did not ask whether controlled experiments were adequate for describing market behavior.
Thus, Thurstone's experiment seems to be grounded on old psychophysical hypotheses that had been rejected by the first ordinalists, and economists were rather free to interpret the results. In the final analysis, the outcome of Thurstone's experiment was to turn indifference curves into a purely psychological concept designed to illustrate the possibility of deriving rational preferences from experimental choice and to validate the set of hypotheses about the psychological continuum, about just-noticeable differences as a unit of measurement, and last, about Fechner's law.
The Thurstone Experiment as Economists Saw It
It is interesting to look at Thurstone's project from the viewpoint of economists. The Thurstone experiment received only rather critical appraisals on the part of economists and, more interestingly, it was the occasion for reintroducing the status of indifference curves within the theory of choice at the core of the debate. Broadly, Thurstone's experiment was considered as rather inconclusive, and economists raised many methodological arguments against it in the following years. The main judgment of the time was that experiments of that type were useless and inappropriate to HOPE441_05Lenfant_1pp.indd 130 10/26/11 10:42:05 PM 33. Thurstone's work in experimental psychology was very much praised at the time, notably his technical contributions to the measurement of individual psychological states, through multiple factor analysis [START_REF] Cottrell | Important Developments in American Social Psychology during the Past[END_REF]. Schultz (1933), commenting on Frisch's methods for measuring marginal utility, complains about the gap between the theoretical apparatus of Frisch and the experimental data and concepts used by psychologists, especially under the heading of just-noticeable differences. In Schultz 1933, Thurstone is cited as a reference but no mention is made of his experimental study of indifference. Thurstone is mentioned for his work on measurement, especially on just-noticeable differences (that is "discriminal error"): "Those who are interested in developing the border lines between economics and the other sciences will do well to investigate the relations between the methods of deriving utility and demand functions used by economists and the methods of measuring the 'psychological continuum' as used by modern psychophysicists. This is not a suggestion that economics should borrow its postulates from psychology, but only that the workers in the two fields should familiarize themselves with each other's problems and procedures. While I believe that, on the whole, psychophysicists stand to gain more from such an intermarriage of ideas, I am also convinced that those statistical economists who are interested in the measurement of utility and demand cannot afford to remain in ignorance of such methods as are used to determine just-noticeable differences, and the discriminal error as a unit of measurement on the psychological continuum; for is not utility also a psychological continuum?" (Schultz 1933, 116). As regards the just-noticeable differences, Schultz mentions William Brown and Godfrey H. Thomson's Essentials of Mental Measurement (1921). As regards the measurement of the psychological continuum, he refers to Thurstone's "Psychological Analysis" (1927b) and "A Mental Unit of Measurement" (1927a).
34. "A psychological experiment designed to determine the shape of a simple indifference curve was conducted last year by the writer's colleague, Professor L. L. Thurstone, of the Department of Psychology. The results will be published in the Journal of Social Psychology for May, 1931" (Schultz 1931, 78n5).
the study of economic behavior. Economists only occasionally mentioned Thurstone's work, and when they did, they did so only briefly. Even among economists, it was Thurstone's work on psychological measurement rather than his experiments on indifference curves that attracted attention. 33 In general, economists were quite critical or else unconcerned or, at best, very cautious.
The main protagonists of our story referred rather coldly to the Thurstone experiment. The experiment was known first by Schultz; other economists became aware of it after Thurstone's presentation at the 1933 meeting of the econometric society in Syracuse. Among our three main characters (Hotelling, Schultz, and Friedman), only Friedman carefully examined the article and drew some methodological lessons from it (Wallis and Friedman 1942). So, it was really in the second half of the 1930s that Thurstone's experiments attracted attention.
Schultz, who can be regarded as the silent partner to the experiment, would not even comment upon the result, and made just two references to the study. The first was in Schultz 1931, 34 in which he laid the foundation for statistical demand studies based upon rational behavior assumptions.
35. It is difficult to determine who exactly attended Thurstone's presentation. We know from Joseph [START_REF] Mayer | The Meeting of the Econometric Society in Syracuse, New York[END_REF] report that Hotelling was present for and discussed at least the first two contributions (by Roos and by Whitman) in the same session in which Thurstone presented his paper. Frisch was the chair of the session; Mayer and Mordecai Ezekiel discussed Thurstone's results. Among other people who probably attended Thurstone's presentation were Louis H. Bean, Harold T. Davis, Dr. Roos, Harry S. Kantor, S. S. Wilks, and F. G. Crawford. See also Bercaw 1934, 402. In 1933, Thurstone presented his experiment at the meeting of the econometric society, in Syracuse, where he was criticized for wanting to measure satisfaction.
The second was in The Theory and Measurement of Demand (1938). The only reference there was to the title of the article (15n18).
As for Hotelling, he certainly attended the Syracuse presentation, even though we do not know whether he took part in the discussion immediately following. 35 Hotelling (1938) would prove to be very cautious, and he would underline that Thurstone's experiment cannot give much more than a tentative result. Also, he clearly identified competing methods, by way of the study of family budgets:
It is to be emphasized that the indifference loci, unlike measures of pleasure, are objective and capable of empirical determination. One interesting experimental attack on this problem was made by L. L. Thurstone, who by means of questionnaires succeeded in mapping out in a tentative manner the indifference loci of a group of girls for hats, shoes, and coats. Quite a different method, involving the study of actual family budgets, also appears promising. (248; my italics)
In the same vein as Hotelling's are two other references to Thurstone's experiment. The first one is in Tintner 1942. Tintner tried to develop a dynamic theory of demand, taking into account individual expectations about future prices and interest rates. He drew conclusions about the possibility of getting information about how people anticipate and achieve an a priori probability distribution on expected prices and incomes, and he referred to Thurstone's experiment as an example of the method of questionnaires. The paper concludes with some methodological considerations: "Empirical studies of family budgets (especially historical) and of demand curves may be helpful in getting an idea about the way in which people really act. . . . The interview method may also prove useful" (304).
Another reference to Thurstone is found in Staehle 1942. Although the article is devoted to the construction of empirical cost curves, there are hints about demand theory. Staehle's criticism sounds like a farewell to the idea of experimenting beyond demand and to the possibility that experimental data can be useful:
In the study of demand, actual individual behaviour is the deepest-lying level to which we can dig. Everything beyond is largely in the nature of speculation. I say largely, and not completely, remembering Thurstone's valiant attempt to measure indifference curves by means of psychological experimentation. Nevertheless, it remains true that in that field, . . . the greater part of the operation of this rationality takes place in regions where direct measurement is at least difficult. (332)
In a sense, Staehle is more skeptical about the possibility of erecting a complete theory of demand on experimental data, and all the comments by Hotelling, Staehle, and Tintner point to the weaknesses of the experimental method.
Yet, none of the comments mentioned copes with the consequences of Thurstone's experiment upon the theoretical nature of indifference curves. Among the very few references to Thurstone's experiment, the one in Georgescu-Roegen 1936 is certainly the most important, in as much as it gives clues as to why economists have rejected experimental indifference curves. Georgescu-Roegen is a centerpiece in this narrative because he deals simultaneously with the methodology of consumer theory and with the problem of integrability of consumer choice. For all that, Georgescu-Roegen's article is also sometimes muddled. The meaning of indifference curves and what can be built upon them is at the core of Georgescu-Roegen 1936 and shows once again that the ordinalist revolution did not take place without inquiring about the foundations of the new theory of the consumer. Georgescu-Roegen argues that we have to make clear what are the data that explain individual demand: "The demand and supply laws appear today to be derived concepts, and their justification is sought in terms of the ultimate considerations that find their place within the frame of economic science; i.e., the reasons that induce individuals to produce and exchange goods" (546).
After identifying sufficient assumptions for integrability, Georgescu-Roegen is led to inquire whether such assumptions can be checked by experiments. In the final analysis, he concludes that there must be a divide between what we can reasonably expect to obtain by way of experiments and what will remain necessarily of a more axiomatic nature. So, the article can be read as an inquiry into the gap between the axiomatic foundations of utility theory and the experimental design of indifference curves. Georgescu-Roegen's main conclusion is that the properties of indifference sets are given above all by mental experiments based upon introspection and abstraction. This is by necessity; this is driven by the logic of the
36. Georgescu-Roegen has in mind a kind of parallelism between mental experiments and actual experiments, provided that we recognize that not all the subtleties of our mental experiments can be captured by an actual experiment: "At the same time we may seek a safer line of approach. This might be reached, for instance, by formulating our mental experiment in such a way as to suggest, and direct step by step, the pattern of an actual experiment which may be carried out in the future, subject to technical possibilities in the matter" (Georgescu-Roegen 1936, 546). The nonintegrability case is an instance of this discrepancy, and it might be necessary to develop "an alternate theory of the nature of indifference curves" (546).
problem itself: "The method of economics remains-and it seems that it will remain despite many attempts in the opposite direction-that of the mental experiment aided by introspection. There are well-known attacks directed against this procedure for supporting scientific laws. Nevertheless, we may defend our position by arguing that, so far as we deal with the consumer's position, introspection is justified by the problem itself" (546).
It is interesting to compare Georgescu-Roegen's methodological arguments with Pareto's attitude on the same question. It seems to us that Pareto regarded introspection as a sufficient procedure, given that a theoretical experiment was possible. Georgescu-Roegen is able to draw the lessons from Thurstone's experiment and from a better knowledge of the high complexity of the hypotheses that economists may need to submit to experiments, so much so that introspection is not only sufficient but also necessary for the theory of choice. Meanwhile, it would be unscientific to reject a priori any attempt at experimenting with at least some of the fundamental assumptions of the theory of choice, provided that those experiments succeed in encapsulating the essentials of the mental experiment. 36 The main difficulty is precisely that experiments rarely do. Some assumptions, notably that the indifference direction at any point is uniquely determined, "are very unlikely to lend themselves successfully to [an experimental] treatment" (Georgescu-Roegen 1936, 584). This is where Georgescu-Roegen mentions and comments on the Thurstone experiment on the nature of indifference curves: Professor Thurstone's experiment is, however, very unlikely to help us in deciding anything about the forms of the postulates here analyzed. The investigation having been carried out by way of questions and answers, we cannot be sure whether the prices ruling on the market at the time of the experiment had or had not influenced the subjects in their answers. Some of the diagrams in Professor Thurstone's paper, namely 13 and 17, suggest on the contrary that they had.
Besides, the result of mere visualization cannot be relevant to a theory concerned with an actual choice, unless the combinations used
37. For instance, Georgescu-Roegen proposes that there be "a unique combination (Ct) that will separate the non-preferred from the preferred ones" (549). As he points out, "The essential implication of this postulate is that the mental comparison within a preferential set is as accurate as any other objective physical measurement can theoretically be. A perfect similarity with regard to the possibility of discerning differences in a monotonic series is thus assumed between a mental and a physical experiment" (549). Anyway, some hierarchy must be put into experimental properties of indifference functions, the most important being transitivity: "It seems that this point could be easily submitted to an experimental verification. We should really lose all hope in this direction only if the answer to such an investigation should be negative" (584-85).
in the experiment are those with which the subject is familiar because of his latest experience. This last condition restricts the range of the experiment to a degree which simply makes the investigation useless. It seems that we cannot avoid the necessity of letting the subject experience the satisfaction before making his choice. (585n3) Through those quotations, Georgescu-Roegen puts in the forefront the idea that the indifference curve is about stabilized preferences based on repeated acts of choice. Undoubtedly, this is the kind of argument a faithful Paretian should make against Thurstone's experiment, and it might have been very influential for the community of economists in that field. As an aside, an interesting point in this question of the essentials of mental experiments is that the experimenter must cope (1) with his own experimental difficulties (especially about the just-noticeable differences) 37 and ( 2) with what economists have in mind when they reflect upon the stability of preferences of individuals.
In the final analysis, something evolved in the 1930s, on the issue of testing consumer theory with experiments. As the integrability issue became clarified, as the assumptions needed to relate indifference curves and index utility functions became more clearly understood, as the gap between the experimental procedures and the axiomatics of consumer theory widened, economists abandoned the idea of experimenting on indifference curves, and they let psychologists tackle the issue of which experimental procedure could adequately emulate humdrum decision making in people. The opposition between two methods, one relying on experiments, the other on empirical data based on long periods of choice, emerged from all this debate. Thus, in this period, Thurstone's experiment received rather critical comments and was never taken as a promising way to derive indifference curves and demand curves. [START_REF] Wallis | The Empirical Derivation of Indifference Functions[END_REF] would deal a decisive blow to experimental indifference curves, and they would go one step beyond in questioning the conceptual foundations of the theory of choice.
38. The Wallis and Friedman article is part of a book in memory of Henry Schultz. 39. "The first serious problem we had was to decide whether it would be better to have eight 50 caliber machine guns on a fighter plane or four 20 millimeter guns" (Olkin 1991, 124).
40. See especially his 1942 paper on the temporal stability of consumption patterns.
The Wallis and Friedman Article
It is worth concentrating on the Wallis and Friedman article of 1942, given that Friedman was involved from the outset in the development of demand theory through his collaboration with Schultz and Hotelling. 38 The article is mainly about the relative merits of different methods for deriving empirical indifference curves and, subsequently, empirical demand curves. Independently from each other, and then through regular discussions, Wallis and Friedman developed a systematic methodological critique of the usefulness of experimental economics in consumer theory, and even more radically, of the internal coherence of consumer theory. For this reason, the Wallis and Friedman article deserves to be read as a seminal contribution to demand theory at Chicago.
Wallis graduated in psychology and economics from Chicago and Columbia. He met Milton Friedman at Chicago in 1934, as a student of Schultz, and then went to Columbia to graduate in statistics with Harold Hotelling (Olkin 1991, 122-23). Between August 1935 and September 1937, Friedman was an associate economist together with Wallis at the National Resources Committee, where they designed questionnaires and methods of analyzing consumption [START_REF] Hammond | Agreement on Demand: Consumer Theory in the Twentieth Century, edited by[END_REF]. Then, during the war, Wallis was a founding member and director of the Statistical Research Group (July 1942-September 1945) (with Harold Hotelling and Jack Wolfowitz), a group of statisticians who worked on "fire quality control," that is, issues regarding the efficiency of weapons systems. 39 Friedman joined the SRG as associate director. The cooperation between Friedman and Wallis certainly began at the National Resources Committee and continued after Friedman's arrival at the SRG.
For sure, the fact that Wallis earned an undergraduate degree in psychology before turning to statistics and economics made him particularly sensitive to Friedman's, Hotelling's, and Schultz's speculations on demand and especially on the possibility of inferring regular properties of individual demand patterns from rational behavior. 40 The Wallis-Friedman article is entirely devoted to the derivation of indifference curves. It is based on an acknowledgment of failure (Wallis and Friedman 1942, 175).
From the outset, Wallis and Friedman's aim is to question the possibility of giving any empirical content to indifference curves. Indifference functions, as they put it, capture, without psychological justification, the psychological and sociological determinants of choice. Apart from its theoretical achievements (about complementarity, income and price effects, welfare criteria, and index numbers), the question raised is of "giving quantitative expression to the indifference function" (176). Before entering in detail into their argument, it must be stressed that indifference curves as they define them are embedded in strong inertial and deterministic factors (psychological and sociological), far away from any impulsive or anecdotal representation of choice. In a sense, they abide by the Paretian semantics, except that Pareto would not have let sociological determinants constrain or influence economic choices, thereby keeping economics free from nonlogical behavior.
For the first time, Wallis and Friedman make a clear distinction between two kinds of data and respectively two approaches to the problem of deriving indifference functions: (1) the experimental approach is based upon a series of data points belonging to the same indifference surfaces. (2) The "statistical" approach is based on sets of points for which the only information is the slope of the indifference surfaces at each point. Most of the paper is devoted to a critical examination of the methods and feasibility of each approach and to its sequel.
The experimental approach is quickly discarded. At first sight, the experimental approach is the more direct and can be constructed from "the application of psychophysical experimental techniques to individual subjects" (177). When they come to discussing Thurstone's experiment ("the eminent psychophysicist" [177]), they insist that the hyperbolic shape of indifference curves is predetermined by the application of Fechner's law. They carefully note that the cardinalist concepts used in the experiment have no bearing upon the result. But Thurstone's experiment cannot be retained as a promising way of deriving indifference curves: "The economic significance of Thurstone's experiment is vitiated by a number of serious limitations" (179).
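To see why Fechner's law points toward hyperbolically shaped indifference curves, it may help to restate the argument in modern notation. The following is our own schematic reconstruction, not Thurstone's or Wallis and Friedman's formulation; the symbols a, b, x, y are introduced only for this illustration. If the satisfaction derived from each good obeys Fechner's law (satisfaction proportional to the logarithm of the quantity) and satisfactions are additive across goods,
\[
u(x,y) = a\log x + b\log y, \qquad a, b > 0,
\]
then an indifference curve is a level set of u:
\[
a\log x + b\log y = k \quad\Longleftrightarrow\quad x^{a} y^{b} = e^{k},
\]
which for a = b gives rectangular hyperbolas xy = constant and, more generally, hyperbola-like curves asymptotic to the axes. On such a reading, the convexity and the hyperbolic shape of the fitted curves are built into the functional form before any data are collected, which is the point pressed against the experiment.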
Even though arguments against Thurstone's method are mixed, one can identify different kinds of objections. First, the experimental procedure, through comparisons of sets of goods, implies that individuals are supposed to act as if goods were free, whereas the economist is interested in choice under budget constraints. Second, the experimental procedure cannot reproduce the actual determinants of choice. This conclusion perfectly echoes the comments of Georgescu-Roegen and Staehle: there is a gap between the way in which people make choices in everyday life and what can be grasped from an experimental decision:
For a satisfactory experiment it is essential that the subject give actual reactions to actual stimuli. This requires conditions under which the reaction being studied is the only one on the basis of which the subject could produce systematic results; that is, any rationalizing scheme, conscious or unconscious, which he might adopt should have as its only possible datum the phenomenon under investigation. Questionnaires or other devices based on conjectural responses to hypothetical stimuli do not satisfy this requirement. The responses are valueless because the subject cannot know how he would react. The reactions of people to variations in economic stimuli work themselves out through a process of successive approximation over a period of time. The initial response indicates only the first step in a trial-and-error adjustment. (Wallis and Friedman 1942, 179-80) A third argument can be raised against the experimental method. It has to do with the ceteris paribus assumption: "If a realistic experimental situation were devised, it would, consequently, be necessary to wait a considerable time after the initial application of the stimulus before recording the reaction. Even an experiment of restricted scope would have to continue for so long a period that it would be exceedingly difficult to keep 'other things the same'" (180).
During so long a period, it is highly unlikely that tastes and preferences would remain constant. The authors then provide a scheme of an ideally controlled experiment that would fit all the requirements for constructing indifference curves. It is based upon a psychological phenomenon about human perception: the influence of color on the apparent size of an object, a matter already studied by [START_REF] Wallis | The Influence of Color on Apparent Size[END_REF]. Yet, economic-type experiments are eliminated: "It is probably not possible to design a satisfactory experiment for deriving indifference curves from economic stimuli applied to human beings" (Wallis and Friedman 1942, 181). Thus, the experimental approach is irrelevant for the construction of indifference curves. Then Wallis and Friedman tackle the statistical approach, the one Friedman 1933 was calling for. It is judged more promising than the experimental approach, even though it was also falling under similar criticism:
According to the indifference function analysis of consumer behavior, the quantities of goods purchased define a point on the indifference surfaces at which the slopes are the ratios of the prices. This suggests the possibility of using data on consumer purchases for the quantitative determination of the indifference function. The obstacle of the approach is that the function is defined for a single person at a given time. It is obviously impossible to secure more than one observation at one time, whereas many observations covering a reasonably broad segment of the function are required. If it can be assumed either that a given person has the same tastes at different times or that different persons have the same tastes at a given time, it will be possible to secure a number of observations relating to a single indifference function. Such assumptions, while never literally fulfilled, seem plausible. Certainly they seem more reasonable than those which have to be introduced in the experimental approach. They have the added merit of involving economic phenomena proper. (Wallis and Friedman 1942, 183) Here again, limitations about the method seem to plague the construction of individual indifference functions. The period of observation must be of many years to obtain a few points; consequently, tastes cannot reasonably be considered as fixed, except maybe if there have been no important changes in the environment. So the economist is generally confronted with two equally awkward situations. Either he has a lot of observations in the neighborhood of an initial point, or he must assume that tastes have changed: "If, on the other hand, the individual has experienced a wide range of prices and incomes, his tastes have probably not remained constant, for past experience is surely one of the most important determinants of tastes at a given moment" (184). This is certainly an idea that was not explicitly accepted by all economists of the time, and it is an important aspect of Wallis and Friedman's argument and deserves to be developed further. All in all, it is not possible 41. Abraham [START_REF] Wald | The Approximate Determination of Indifference Surfaces by Means of Engel Curves[END_REF] showed how to approximate an index-utility function described by a polynomial of the second degree in goods from Engle curves (linearized in a small region of the commodity space), and he had raised both the integrable and nonintegrable case, wondering about the practical feasibility of this procedure. Especially, he raised doubts about the method because "not all individuals have exactly the same scale of preferences, the individuals do not choose exactly the set of goods for which the indicator has the greatest value, the goods compared in different periods are not exactly of the same quality, and so on" (148). He ended with the nonintegrability case. In this case, the assumption that indifference surfaces did not change during the period of observation was rejected (175).
to escape from the "alteration of preference" (184). So, the most promising procedure remains the use of data on the purchases of different individuals, belonging to groups for which the assumption of identity of preferences can reasonably be made. This assumption has to be discussed, because groups of people sharing similar tastes have usually proximate incomes, due to similar social status, cultural habits, and educational background, and they face similar prices. Once again, it will not be possible to extract enough information to construct indifference curves. Even the idea of combining observations on many individuals and for several periods seems hopeless. Either the variations of prices are slight, and then the statistical basis is narrow, or they are wide and indicate also important changes in the economic situation, so that tastes will have changed meanwhile. 41 What can be kept from all this is certainly that the notion of an indifference curve is meaningful only in the neighborhood of current economic conditions and current choices for each individual (already an idea from Pareto [1909Pareto [ ] 1971, para. 67), para. 67). The only hope is to observe wide changes in prices for only a few items, compatible with "the assumption of unaltered tastes and with fairly rapid adjustment to the price changes," so that it is possible to obtain "a fairly satisfactory derivation of indifference surfaces for a subset of goods" (Wallis and Friedman 1942, 185). To sum up the pessimistic results, While it is thus entirely impossible to obtain indifference surfaces from market data, it seems highly unlikely that reliable results can be obtained for more than a small range of quantities for a few goods. The necessity of using data that reflect reactions to essentially the same indifference function implies a serious limitation on the degree of income and price variation that can be observed and hence in the scatter of points on which the indifference surfaces can be based. If these points cover a wide range, it is unlikely that they relate to the same system of indifference surfaces; if they cover a narrow range, the indifference surfaces derived from them will be subject to wide margins of errors and to statistical instability. In the final analysis, Wallis and Friedman's main thesis is that however one tries to match statistical data with the concept of the indifference curve, it will be unsuccessful. The problem, as they see it, comes from the fact that the indifference curves have been constructed theoretically upon a supposed categorization of economic phenomena within three sets: tastes, opportunity, and goods. Wallis and Friedman show that the same thing (e.g., regional location, family size, and probably income) can be classified either into the first, the second, or the third, so that it is practically impossible to use any statistical data to construct indifference curves: "The ambiguity of the classificatory criteria which are implicit in indifference curve analysis is, of course, the reason it is so difficult to specify reasonable data. Satisfactory data can be obtained only if opportunity factors vary over a wide range while taste factors remain constant; but this is clearly impossible because the opportunity factors and the taste factors are inextricably interwoven-are really the same factors under different aliases" (187-88).
So the only useful work to be done is to abandon indifference curves, which are of no use "for the organization of empirical data" (189), as an intermediary object to demand analysis and to adopt a more direct approach and to infer from statistical data which factors do influence, and to what extent, current consumption (income, regional location, age, etc.): "There is much to be gained by concentrating some heavy theoretical artillery on the logical structure implicit in the practical work" (189).
It is beyond the scope of the present study to inquire into the general approach to demand theory advocated by Friedman and Wallis. It has often been remarked that their article shares some methodological principles with other members of the Chicago school [START_REF] Knight | Realism and Relevance in the Theory of Demand[END_REF][START_REF] Stigler | The Limitations of Statistical Demand Curves[END_REF] and that it contains the seeds of Friedman's (1949) famous article on the demand curve. It remains to be shown whether this episode can help to clarify the methodological principles of Friedman's demand theory (Mongin 2000a). We will not develop this point further.
Two final remarks can be made on the Wallis and Friedman article. First, it is clearly organized as a systematic attack on the foundations of the theory of choice. It begins with a regular criticism of experimental or empirical foundations of indifference curves on the basis of a Paretian definition of indifference curves. Once that work has been done, the next step in Wallis and Friedman's strategy is to question the interest of the theory itself. Second, Wallis and Friedman stipulate implicitly that any theoretical concept must have empirical content. So, they reject indifference curves on the ground that they are poorly adapted to empirical data. This is clearly at odds with the line of thought that emerged in Georgescu-Roegen's article and that had just been exemplified in Samuelson's revealed preference theory.
42. Except for a full understanding of the discrete analogue to Slutsky symmetry conditions and positive semi-definiteness of the Slutsky matrix (Chipman and Lenfant 2002, 577-78).
Samuelson's Revealed Preferences and the Status of Indifference Curves
In contrast with Wallis and Friedman's destructive criticism of the concept of the indifference curve and ultimately of the Paretian theory of choice, we have the other criticism of the concept, as it was first raised by Samuelson through the revealed preference approach. Our main point in this section is that Samuelson's criticism legitimized the concept of indifference curves as a heuristic device.
As is now well known, Samuelson (1938a, 1938c) was at first critical of the Hicks-Allen hypothesis of a diminishing marginal rate of substitution. After the introduction of the strong axiom of revealed preferences [START_REF] Houthakker | Revealed Preference and the Utility Function[END_REF], Samuelson came to recognize that the Slutsky-Hicks-Allen approach and the revealed preference approach arrived at the same set of analytical statements, 42 so that a rational consumer could be represented equivalently as a utility maximizer or as behaving according to a set of axioms (Mongin 2000b; [START_REF] Chipman | Slutsky's 1915 Article: How It Came to Be Found and Interpreted[END_REF]). Meanwhile, it is interesting to examine Samuelson's comments upon the concept of indifference curves, because Samuelson has always pointed out that it remains a concept that exists only as a conjecture external to the revealed preference approach.
In the following, we claim that Samuelson consistently stressed the topic of the status of indifference curves (and of the mere idea of indifference) all along his breakthrough from the first articles on revealed preferences (Samuelson 1938a, 1938b, 1938c) up to the final 1950 article on the question of integrability. Samuelson's approach to demand behavior was based on observed choice and upon a principle of revelation of preferences. The most important consequences, for our subject, are (1) that Samuelson dispensed with the concept of indifference curves in favor of that of preferred choice and (2) that indifference curves could be obtained, at best
"
We are now in a position to complete the programme begun a dozen years ago of arriving at the full empirical implications for demand behaviour of the most general ordinal utility analysis. My own work in this direction grew out of a remark made to me by Professor Haberler in his 1936 international trade seminar at Harvard. 'How do you know indifference curves are concave?' My quick retort was 'Well, if they're not, your whole theory of index numbers is worthless.' Later I got to thinking about implications of this answer (disregarding the fact that it is not worded quite accurately). Being then full of Professor Leontief's analysis of indifference curves, I suddenly realized that we could dispense with almost all notions of utility: starting from a few logical axioms of demand consistency, I could derive the whole of the valid utility analysis as corollaries" (Samuelson 1950, 369-70).
44. For an analytical and methodological discussion of Samuelson's revealed preferences, see Mongin 2000b and [START_REF] Chipman | The Paretian Heritage[END_REF][START_REF] Chipman | Slutsky's 1915 Article: How It Came to Be Found and Interpreted[END_REF].
45. In the rest of this article, we adopt Hicks's revised terminology of "decreasing marginal rate of substitution" and "convex" indifference field even though Samuelson kept Hicks and Allen's original terminology of "increasing marginal rate of substitution" and "concave" indifference field.
46. Samuelson's approach is clearly in accordance with the Paretian semantics of choice. He is concerned with "long run 'normal' behaviour" only (Samuelson 1950, 360). He also makes it clear that the internal experiments of the consumer are of no interest for the problem of integrability, and he states, as Pareto did, that any observed choice can be regarded "as a steady flow of consumption per unit time, optimally patterned to the consumer's tastes. And the flow of consumption at B is again a steady flow long maintained. The comparison of A and B (and of intermediate points) is a case of comparative statics" (361). So, the economist need not invade the consumer's privacy.
indirectly, if they should be obtained at all. In the end, Samuelson did not reject the concept but, rather subtly, limited its role to that of a heuristic device. It is interesting to note first that Samuelson's interest in the theory of choice originated in a criticism of the (widespread) use of indifference curves and index numbers in economic theory. 43 It is necessary also to understand that Samuelson wanted to establish a comprehensive comparison between the traditional theory and his revealed preference approach, and that at the same time he was conscious that too narrow a comparison might lead to misleading terminological convergence.
A brief overview of the development of Samuelson's ideas will help clarify the difference between the indifference curve approach and the revealed preference approach. 44 Samuelson (1938c, 61) regarded the [START_REF] Hicks | A Reconsideration of the Theory of Value[END_REF] replacement of a utility function with the concept of the decreasing marginal rate of substitution 45 as an incomplete theoretical achievement, as it still depended on psychological assumptions. A true positivist theory of choice, as he put it, should rely upon observable choices only and thus should be deprived of such "vestigial traces of utility." Thus, the aim of the revealed preference approach was to identify a set of axioms necessary and sufficient "for most of the empirical meaning of the utility analysis" (Samuelson 1938a, 353). 46 Yet, Samuelson did not identify at once what exactly in the Hicks-Allen approach was supporting the psychological assumption. In addition, he did not mean that the traditional analysis in terms of utility and indifference curves was deprived of any analytical advantage. Consistently, Samuelson (1938b) would pursue at the same time the analysis of the revealed preference approach and the analysis of the traditional utility/indifference curve approach. Those parallel endeavors aimed at stressing the relative merits of each theoretical basis for demand analysis and, incidentally, at clarifying the status of indifference curves within each. The most significant article regarding the status of indifference curves within the revealed preference approach was the 1948 article, in which Samuelson dealt frontally with the possibility of obtaining indifference curves as the result of a limit process. In that article, he commented upon Ian Little's proof that "if enough judiciously selected price-quantity situations are available for two goods, we may define a locus which is the precise equivalent of the conventional indifference curve" (Samuelson 1948, 243) and presented an alternative demonstration. Finally, the 1950 article on integrability provided a comprehensive analysis of the qualitative differences between the utility approach and the revealed preference approach. Once the weak axiom of revealed preference was replaced with the strong axiom, the analytical differences between both approaches were considerably smoothed, and Samuelson stressed the main implications for the concept of indifference curves.
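For readers less familiar with the revealed preference axioms, they may be recalled in their textbook form; the notation is ours and the statements are standard, not quotations from Samuelson or Houthakker. Writing x^A for the bundle bought at prices p^A, the weak axiom requires that if x^A is chosen while a distinct bundle x^B was affordable, then x^A must never be affordable when x^B is chosen:
\[
p^{A}\cdot x^{B} \le p^{A}\cdot x^{A}
\quad\Longrightarrow\quad
p^{B}\cdot x^{A} > p^{B}\cdot x^{B}.
\]
Houthakker's strong axiom extends the same exclusion to any finite chain of such directly revealed preferences, thereby ruling out cycles; it is this strengthening that settles the integrability question and allows the whole ordinal preference field to be recovered, at least in the limit, from market choices.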
With this theoretical development in mind, we will examine more precisely the impact of the Samuelsonian program upon the concept of indifference curves. The axioms of revealed preference were designed in relation to the theory of index numbers and were especially adapted to empirical research, as they were conceived in terms of finite variations and not in terms of differentials. The outcome of all this was that Samuelson obtained the same results as did Hicks and Allen, and the only argument in favor of the revealed preference approach was mainly methodological: "The orientation given here is more directly based upon those elements which must be taken as data by economic science, and is more meaningful in its formulation" (Samuelson 1938c, 71).
Echoing this methodological position, Samuelson consistently remained reluctant to use the term indifference curve or indifference surface. Given that his rationality axioms were entirely formulated in terms of preferences, indifference curves could be at best obtained by describing the whole space of preferences. There was nothing in the indifference curves so obtained that could not have been obtained from data on properly selected choices. Conversely, observations on choices being the only data on which to base the construction of indifference curves, those data would never allow us to reveal those parts of the indifference complex that were concave to the origin, because no choice would ever be observed on those parts. So revealed preferences captured all the market data necessary to construct an indifference curve and at the same time they captured nothing more than what was obtainable from market behavior. Under that double constraint, Samuelson carefully examined the status of indifference curves within the governance of revealed preferences.
47. A comparative analysis of Little's and Samuelson's contributions to revealed preferences is beyond the scope of this article. In my view, Samuelson's approach is deliberately oriented toward clarifying the relationships with the traditional ordinalists' approach; Little, on the contrary, is much more interested in the implications of the new approach for the theory of index numbers. [START_REF] Little | A Reformulation of the Theory of Consumer's Behaviour[END_REF] contribution to the debate is quite important (his article was written before Samuelson 1948). It contains the central idea and method for constructing step-by-step a "behaviour line." Little's approach differs from that of Samuelson in two respects. First, the departure point is a comparison between welfare criteria based upon index numbers and based upon indifference curves. Second, it is directly critical of the concept of indifference curves. Little's main idea is to dispense with indifference curves and to inquire whether a series of points can be found, using a transitive property of consistent behavior, so that the index-number criterion of welfare improvement should "tell the same story" as the indifference curve criterion. In the end, "the concept of indifference is abandoned" (Little 1949, 91). Surely, Little's discussion of the meaning of "preference" and "choice" deserves more attention than it has received so far.
Samuelson and others were precisely pointing out the obstacles to the derivation of an indifference curve from a finite number of observations. [START_REF] Little | A Reformulation of the Theory of Consumer's Behaviour[END_REF] had shown that a selection of price-quantity situations and corresponding choices might be enough to "define a locus which is the precise equivalent of the conventional indifference curves" (Samuelson 1948, 243). 47 Samuelson (1948) showed how to obtain the same result through a Cauchy-Lipschitz process of approximating the supposed indifference curve from below. The argument ran as follows. On the basis of the axiom of revealed preference, if the consumption set could be filled with data on observed choices at any point (x, y) desired, it was thus possible to integrate the differential equation linking little slope elements (dy/dx) to a function f(x, y) that was equal to the ratio of the prices p_x/p_y at any point. The solution to the equation dy/dx = f(x, y) was shown to be precisely the "conventional" indifference curves.
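The construction can be written out schematically. In our notation (the sign convention and the step size h are ours, introduced only to make the limiting argument concrete), suppose that for every bundle (x, y) the price ratio at which it would be chosen is observable, so that the slope of the indifference curve through (x, y) is known:
\[
\frac{dy}{dx} = f(x,y) = -\frac{p_x(x,y)}{p_y(x,y)} .
\]
Starting from an observed point A = (x_0, y_0), the Cauchy-Lipschitz (Euler polygon) approximation proceeds step by step,
\[
x_{k+1} = x_k + h, \qquad y_{k+1} = y_k + h\, f(x_k, y_k),
\]
each step moving along the budget line through the current point and therefore, by revealed preference, to a bundle worse than the one just left. For a convex preference field every such polygonal path lies below the exact solution curve through A, and only in the limit h \to 0, that is, with indefinitely many observations, does it converge to the locus that Samuelson agrees to call, "by courtesy," an indifference curve.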
It must be clear that nowhere did Samuelson suggest that the axioms of revealed preferences could be used to reveal indifference. Indifference could only be revealed after a certain kind of mathematical treatment implying differential calculus and necessitating a potentially infinite amount of information of the revealed-preference type. "Any Cauchy-Lipschitz path always leads to a final point worse than the initial. And strictly speaking, it is only as an infinite limit that we can hope to reveal the neutral case of 'indifference' along the true solution curve to the differential equation" (Samuelson 1948, 248; my italics). This was not yet enough to speak of an "indifference curve." What we had was just a set of points revealed to be worse than a certain point A. In terms of the strict algebra of "revealed preferences" we had as yet no definition of what was meant by "equality" or "indifference" (248). So, in order to assign the name "indifference curves" to the loci of points described by the differential equation, it was necessary to prove that "all points above the true mathematical solution [were] definitely 'revealed to be better' than A" (248). This was done through providing a process of approximation from above, similar to the Cauchy-Lipschitz solution. The unique curve lying between both approximations was the "behaviour line" of Little, and "we may care to give this contour line, by courtesy, the title of an indifference curve" (248; my italics). 48 Once again, one can consider how reluctant Samuelson was to use the word indifference in the framework of the revealed preference approach. Our interpretation for this is that Samuelson believed that the term indifference curve could divert people from the behaviorist foundations of such a curve and make them think that indifference curves could be obtained directly through nonmarket experiments in spite of being revealed. Again in the 1950 article on integrability, the behaviorist nature of the data was the real frontier between the terminology of indifference and that of revealed preference: there was nothing like revealed indifference. There he drew a figure with indifference curves, some of which were partly concave to the origin. And on the other figure, he drew little slope elements corresponding to the convex parts only: "It will be noted that any point where the indifference curves are [concave] rather than [convex] cannot be observed in a competitive market. Such points are shrouded in eternal darkness" (Samuelson 1950, 359). In that case, only a monopsonist could bend the budget curve so as to make it possible to reveal some (but not all) the points in a concave portion of his preference field. And precisely nonmarket experiments were of this type, completely opposite to the perfect competition framework which was supposed in the theory of choice and which had to serve as a basis for the theory of demand and aggregate demand: 49 "The case of a Gallup-poll questioner who finds out the man's preference contours by giving him choices of every pair of goods is simply a limiting case of monopsony. And if we experiment sufficiently, we can always find a curved family of unique contours representing his ordinal preference field-if such a consistent field exists" (361).
48. Or later: "If we wish, then, we may speak of them as being indifferent to A. The whole theory of consumer's behaviour can thus be based upon operationally meaningful foundations in terms of revealed preference" (Samuelson 1948, 251).
49. Here, Samuelson seems to be less stringent in his interpretation of indifference curves, because he implicitly accepts that an experiment should allow the scientist to capture the whole preference field. This is at odds with other statements in the same articles. "Therefore, in this generalised monopsony we can behaviouristically identify the man's ordinal preference field if it has one" (Samuelson 1950, 361). Years later, Samuelson recalled this advantage of the revealed preference approach as a general methodology, to the effect that the strong axiom of revealed preference can imply "valid relations applicable to admissible specifications of non-convex sets" (Samuelson 1998, 1381).
To conclude this discussion of Samuelson's reconstruction of indifference, we would like to make two points.
First, Samuelson stressed that indifference curves are not a direct result of controlled experiments; they can only be obtained, if ever they can be obtained, through an indirect process of describing the field of preferences of a given consumer, under different situations of choice under budget constraints. Consequently, Samuelson remained reluctant to deal with indifference curves, because they were methodologically deceptive, giving the false impression of a theoretical (positivist) progress compared with the old utility theory, whereas they could also convey psychological assumptions. So, indifference curves conveyed a "dangerous terminology" (Samuelson 1950, 365). Nevertheless, Samuelson did not go as far as rejecting de facto the use of indifference curves, provided that one was clearly warned about the methodological fragility of the concept.
Second, Samuelson, echoing Georgescu-Roegen (1936), rejected the idea that indifference curves could be determined through experimentation. But contrary to common bottom-up criticism (criticism stressing the difficulty of constructing experimental indifference curves), his criticism stemmed directly from the idea that in a competitive framework at least, the economic facts one would use were acts of choice under budget constraints, so that other kinds of experiments were ruled out from the outset. A connected question, nevertheless, was about the possibility of practically running experiments on subjects facing budget constraints. My point is that Samuelson seemed to agree essentially with the Paretian semantics of indifference curves. That is, indifference curves were supposed to represent market choices under stabilized economic behaviors, after a period in which decisions were made on a trial-and-error basis. It was only then that a consistent indifference curve could be, at best, approximated. Thus, it is reasonable to infer from this that for Samuelson, it was practically impossible to carry out experiments on indifference curves compatible with the Paretian semantics. Consequently, if the concept of the indifference curve was to have any legitimacy, it would be just as a tool for ordering our knowledge about economic behaviors. 50 As such, indifference curves were useful for their heuristic and pedagogical properties, since they could help us organize our ideas and data on individual choices. 51 A connected result was that practical studies based on indifference curves should involve checks as to whether data on market behaviors were strong enough to guarantee a convex structure of tastes.
50. One may be inclined to consider also that Samuelson shows a sentimental attachment to the concept since it was an important step in the development of ordinalism. I prefer to consider that Samuelson retained indifference curves as something useful (see Samuelson 1974).
51. To be sure, the widespread use of indifference curves as a heuristic and pedagogical tool would be naturally reinforced by the simple fact that we can only use indifference curves to analyze choices involving two or three goods. It does not allow one to deal with utility defined over more than three goods. This supplementary explanation is but a negative justification for relegating indifference curves to microeconomics textbooks, whereas the Samuelsonian argument is rather a positive justification for this.
Conclusions
The aim of this article was to shine some light on a methodological debate about the exact nature of indifference curves underlying the foundations of the theory of choice. The main lesson to be drawn from the debate is that adopting the index-utility function was not a sufficient foundation on which to erect the new theory of choice. The adoption of an ordinal conception of utility had to be accompanied by a common language about the methodological foundations of the theory. All this debate was nascent in Pareto's works, where he seemed to hesitate about the experimental status of indifference curves. It was settled step-by-step in the 1930s and 1940s. Two major attitudes emerged from the debate. On the one side, there was Samuelson's revealed preference approach, which captured the essentials of the idea that indifference curves were useful for organizing our ideas about demand behavior, even though they were not likely to be obtained practically through experiments. On the other side, Wallis and Friedman's approach rejected the theory of choice on the grounds that it was badly suited to any kind of empirical data, and they called for other foundations directly grounded on facts. On the way toward this deepening of the theory of choice, we saw that the vast majority of economists concerned commented rather scantily on Thurstone's attempt. The reluctance toward experimenting on individual preferences was strong enough to make later attempts very rare. Within consumer choice theory (choice under certainty), only two other experiments with indifference curves seem to have been conducted since Thurstone's experiment [START_REF] Rousseas | Experimental Verification of a Composite Indifference Map[END_REF][START_REF] Maccrimmon | The Experimental Determination of Indifference Curves[END_REF]. 52 Indeed, it is reasonable to think that from the 1940s onward economists were not much interested in obtaining such curves. Before coming to a more general conclusion about the orientation that would be given to the foundations of the theory of choice, we can draw some specific conclusions about our story.
First, Thurstone's experiment must be appraised carefully. From the reading of his article in the Journal of Social Psychology, it is reasonable to infer that Thurstone's experiment was first designed to answer questions that were under debate within the community of psychologists and not within that of economists. The central question behind Thurstone's experiment was the existence of what he called a "psychological continuum" and of a unit of measurement for individual satisfaction. In fact, the article does not contain any reflection about the meaning of preferences or the best way to obtain those preferences or to make sure that those preferences are stable and not affected by external factors or by the experimental process. It was left to economists, through their comments, to appraise the possibility and the interest for the theory of choice of such experiments.
Second, one of the consequences of the debate about the status of indifference curves was Wallis and Friedman's radical criticism of the theory of choice. This radical criticism was meant to get rid of the foundations of consumer theory and called for a new logical structure of consumer theory. This may help to organize our ideas about further unorthodox developments in the theory of demand and choice. It would be worthwhile to see to what extent work by Knight, Friedman, Becker, Lancaster, and others addresses Wallis and Friedman's recommendations.
53. Otherwise stated, the status of indifference curves could not be clarified as long as the analytical problem of integrability remained unsettled. This is in accordance with Mongin's (2000b, 1127) claim that the relationship between the revealed preference approach and integrability is first and foremost a technical issue. Our reading of revealed preference theory in relation to indifference curves reinforces Mongin's claim that Samuelson did not want to eliminate traditional utility theory. Nevertheless, it also shows that empirical workers in the field of demand shall not be exempted from being cautious when they resort to this concept.
Third, Samuelson's approach to the status of indifference curves has proved to be fruitful. Samuelson's criticism was based mainly on the same principles as Friedman's, but it was not aimed at getting rid of the theory. On the contrary, it was aimed at unveiling the legitimate uses of indifference curves within demand and choice theory. We have shown that it is possible to read Samuelson's work on revealed preferences by focusing on the meaning of the word indifference. Samuelson wanted to match two contrasting attitudes. On the one hand, he subscribed to a positivist methodology, by which he was led to privilege a definition of theoretical terms as observables, which, together with axioms of choice, would allow one to derive operationally meaningful theorems. On the other hand, he remained skeptical about the practical possibility of conducting experiments or obtaining data that would be reliable and precise enough to falsify or corroborate an operationally meaningful theorem. The fact is that because Samuelson was more stringent than Pareto in promoting a positivist approach to choice, his skeptical attitude was apparent on many more occasions. Quite logically, it was only through the evolution of the theoretical program that he tempered his reservations against utility theory. Thus, the bulk of the revealed preference approach regarding the nature of indifference curves would condense progressively, and the quintessence of it could be found in the 1950 article, once the main implications of the revealed preference approach were known (after Houthakker's introduction of the strong axiom of revealed preferences). 53 A more speculative conclusion, calling for further inquiry, can be drawn from this story. It deals, broadly speaking, with the internal relationships between economics, psychology, and experimentation. Our claim is that the criticisms raised against naive attempts at doing experiments on preferences considerably clarified the proper use of indifference curves as a useful tool for economic theory, and by so doing also made possible a deeper questioning of the meaning of indifference. There were now favorable conditions for a quiet interplay between economics and psychology. By, say, the end of the 1940s, it had become possible to question the internal meaning of preferences. Many instances of such interplay can be seen in the economic literature (see Moscati 2007a). Georgescu-Roegen (1950) laid down the foundations for a stochastic theory of choice (under certainty). Binary choice and transitivity became a common field for experiments and theorization for economists and psychologists [START_REF] May | Intransitivity, Utility, and the Aggregation of Preference Patterns[END_REF][START_REF] Quandt | Probabilistic Theory of Consumer Behavior[END_REF]. More generally, before the 1940s economists rarely distinguished between descriptive and normative theories, but by the 1950s, such a distinction had become almost systematic. Thus, it is quite reasonable to think that the main outcome of the debates on the status of indifference curves was to put on firmer ground the idea that economists might gain something from cooperating with psychologists, because the latter offered a better understanding of the kind of questions economists wished to address: What makes it impossible to unveil preferences? What are the consequences of making binary choices in the absence of knowledge of additional possibilities?
To what extent do people discover their own preferences through learning? Does this necessarily imply that intransitivity will be the rule?
52. See Moscati 2007a for a discussion.
"745282"
] | [
"1188"
] |
01771857 | en | [
"info"
] | 2024/03/05 22:32:16 | 2018 | https://inria.hal.science/hal-01771857/file/icwe_ntumba_et_al_2018.pdf | Patient Ntumba
Georgios Bouloukakis
Nikolaos Georgantas
Interconnecting and Monitoring Heterogeneous Things in IoT Applications
Keywords: Internet of things, Middleware Protocols, Interoperability Artifacts, Emergency Response
Internet of Things (IoT) applications incorporate heterogeneous devices that employ different middleware protocols (MQTT, CoAP, WebSocket, etc). In this paper we present an extension of our cross-integration platform which supports the interoperability of IoT devices. In particular, we introduce the VSB Web Console which enables the development and monitoring of applications with heterogeneous IoT devices. We showcase our approach using the Fire Detection scenario.
Introduction
Internet of Things (IoT) devices such as smart thermostats, activity trackers, drones, parking sensors, etc., enable developers to create new types of smart applications. Such devices (or Things) are introduced and deployed by major tech actors using their own proprietary APIs and protocols. This results in the deployment of highly heterogeneous devices in terms of both hardware and software resources. Additionally, the Things' diversity (i.e., resource-tiny, resource-constrained and resource-rich) prevents the development of applications that rely on a single standard/protocol. Hence, enabling interactions in the IoT requires dealing with the heterogeneity issue.
Existing interoperability efforts are based on i) bridging communication protocols [START_REF] Ponte | [END_REF][START_REF] Al-Fuqaha | Toward better horizontal integration among iot services[END_REF]; and ii) providing common API abstractions [START_REF] Cherrier | D-lite: Building internet of things choreographies[END_REF]. The former approach focuses only on the data and primitive conversion between a specific set of protocols. In the latter approach, developers have to build their application by relying on a single protocol or API. In the context of our work, we have developed the eVolution Service Bus (VSB) [START_REF] Bouloukakis | Integration of Heterogeneous Services and Things into Choreographies[END_REF], which enables the interconnection of Things employing different IoT protocols at the middleware layer (i.e., MQTT [4], CoAP [START_REF] Shelby | The Constrained Application Protocol (CoAP)[END_REF], WebSocket [START_REF] Fette | The WebSocket Protocol[END_REF], HTTP, etc.). VSB follows the ESB paradigm, where a common intermediate bus protocol is used to interact with multiple Things employing middleware-layer protocols or with IP-based Gateways hosting IoT devices. To bridge heterogeneous Things with the intermediate bus protocol, we automatically synthesize interoperability software artifacts, the so-called Binding Components (BCs). BCs perform the mapping between data and primitives of the bridged protocols. VSB is utilized as a core component of the H2020 CHOReVOLUTION [START_REF]CHOReVOLUTION EU project[END_REF] project and enables interactions in IoT choreographies with heterogeneous devices.
The work is supported by the research associate team ACHOR (inria.fr/en/associateteam/achor) and the EU-funded H2020 project FIESTA-IoT (fiesta-iot.eu).
Fig. 1. VSB Web Console Architecture
In this paper, we rely on VSB and we introduce the VSB Web Console. Our console provides a graphical interface which enables developers to register their services/Things employing middleware IoT protocols. Then, based on their use case scenario, interactions can be enabled between the registered (and possibly heterogeneous) Things. Finally, the resulting application can be monitored through our console. Monitoring options include: message passing and management of the deployed artifacts. To demonstrate our work, we design a use case for fire detection inside a forest. Hence, we register real sensors (temperature sensors), devices (a drone) and services (an estimation service) to our console. Then, to enable the interconnection of heterogeneous devices, we automatically synthesize and deploy interoperability artifacts. Finally, the forest fire detection interactions can be monitored using our console.
In the following section, we provide an architectural overview of our console, as well as details about its implementation. Then, we provide an overview of the forest fire detection scenario. Finally, we demonstrate its implementation in the VSB Web Console -this includes a video representation.
System Overview
As depicted in Fig. 1, our console is implemented based on MVC standard. Below we describe its main components which are used to interconnect heterogeneous Things and monitor IoT applications: User Interface (UI): is a Web interface used by an IoT developer for registering services/Things by providing information such as the: i) role (provider, consumer), ii) host address, iii) employed middleware protocol, iv) supported operations and their input/output data. This process results to the creation of the corresponding Generic Interface Description Language (GIDL) model. We provide the GIDL models of our use case scenario at https://goo.gl/Lnzziz. The UI is also used for interconnecting the registered services/Things using drag and drop actions and automated code generation. Finally, is used to manage and monitor the overall IoT application.
Controller: it processes all actions performed at the UI, which are forwarded to the Thing Management component and the corresponding module: i) registration, ii) generation, iii) monitoring and iv) orchestration.
To interconnect and monitor two heterogeneous Things, e.g., a drone that sends/receives notifications using the UDP/MQTT protocols and a media device that receives notifications using the WebSocket protocol, our console operates as follows: the generation module binds the drone's and the media device's GIDL models to automatically generate the BC that is responsible for mapping the data and primitives described in their GIDL models. Then, the orchestration module interconnects these devices by deploying the BC on the console host node. To monitor the devices and the deployed BC, our console (monitoring module) implements a listener API which receives published data from the IoT app on a specific port.
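To make the registration and binding steps concrete, the sketch below (Python) pairs two simplified interface descriptions and reports what a BC would have to bridge. The field names and the pairing check are illustrative assumptions, not the actual GIDL metamodel or the code generated by the console.

```python
# Illustrative sketch only (not the real GIDL metamodel): minimal interface
# descriptions of the kind a developer registers through the UI.
drone_gidl = {
    "name": "parrot_drone",
    "role": "provider",            # pushes notifications
    "host": "192.168.1.20",
    "protocol": "MQTT",
    "operations": [
        {"name": "video_stream", "output": {"frame": "bytes", "ts": "int"}},
        {"name": "telemetry", "output": {"lat": "float", "lon": "float"}},
    ],
}

media_gidl = {
    "name": "media_device",
    "role": "consumer",            # receives notifications
    "host": "192.168.1.42",
    "protocol": "WebSocket",
    "operations": [
        {"name": "video_stream", "input": {"frame": "bytes", "ts": "int"}},
    ],
}

def plan_binding_component(provider, consumer):
    """Return the operations a BC must bridge and the protocol pair involved."""
    shared = [
        op["name"]
        for op in provider["operations"]
        if any(c["name"] == op["name"] for c in consumer["operations"])
    ]
    if not shared:
        raise ValueError("no common operation: nothing to bridge")
    return {"bridge": (provider["protocol"], consumer["protocol"]),
            "operations": shared}

print(plan_binding_component(drone_gidl, media_gidl))
# {'bridge': ('MQTT', 'WebSocket'), 'operations': ['video_stream']}
```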
Implementation. The VSB Web Console has been implemented using Java 8 and the GIDL interface is a metamodel developed using the Eclipse Modeling Framework. Our console can be downloaded through: https://gitlab.inria. fr/pntumba/vsb-web-console/wikis. We also provide 2 videos for demonstrating the usage of our console: the 1st one: https://youtu.be/v7ucoSgbZCI, demonstrates the process of installing the console. Additionally, we show how a developer can register, interconnect and monitor (heterogeneous or not) Things.
Forest Fire Detection Scenario
Continuous monitoring of forests for early fire detection is of primary importance. In 2007, there were more than 80 human losses in Greece and 670,000 acres burned because of fires [START_REF]Greek forest fires[END_REF]. In this section, we use our console to showcase the implementation, deployment and monitoring of fire detection inside a forest. Implementing such a scenario requires the following sensors and devices (the estimator's decision step is sketched after this list):
Temperature or smoke sensors: these can be deployed inside a forest in order to monitor the forest conditions and push warning notifications. In the context of our demo, we deploy these sensors on a Raspberry Pi and we employ the CoAP protocol to push notifications.
Estimator service: it can be deployed inside a fire department for receiving forest-related notifications. We assume that the estimator service employs the HTTP protocol to receive notifications.
Drone: a PARROT AR DRONE 2.0 located at the fire department, used to inspect the forest area upon a warning notification. Such a device deploys specific protocols to: i) accept pilot commands through the UDP protocol and ii) transmit video stream, location and drone speed to an MQTT broker.
Drone handler: a device handling the drone. We have deployed this device on a BeagleBone Black [1] platform. This platform employs the HTTP protocol to receive commands from the estimator service and locate the corresponding drone in the forest.
Media streaming devices: such devices receive video streaming data from the drone. We separate them into two categories: i) a device that employs an MQTT subscription mechanism; and ii) a smartphone that employs the WebSocket protocol.
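The estimator's decision step can be pictured with a minimal HTTP endpoint. The sketch below is an assumption-laden illustration: the payload format and the 60 °C threshold are ours, and in the deployed scenario the CoAP sensor notifications would reach such an endpoint through a generated BC rather than directly.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ALERT_THRESHOLD_C = 60.0   # assumed threshold, not taken from the scenario

class EstimatorHandler(BaseHTTPRequestHandler):
    """Hypothetical estimator endpoint receiving HTTP notifications."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        notification = json.loads(self.rfile.read(length) or b"{}")
        temp = float(notification.get("temperature", 0.0))
        dispatch = temp >= ALERT_THRESHOLD_C
        if dispatch:
            # Here the estimator would forward a command to the drone handler.
            print(f"warning from {notification.get('sensor_id')}: "
                  f"{temp} degC -> dispatch drone")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(json.dumps({"dispatch_drone": dispatch}).encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EstimatorHandler).serve_forever()
```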
Creating an IoT application with the above heterogeneous devices is not a trivial task. Initially, the developer has to deploy the services/Things and select the proper protocol supporting the constrained sensors/devices, or use the protocol already employed by the device (e.g., the drone). Hence, the developer has to be aware of multiple APIs and protocols to interconnect the heterogeneous services/Things and create the application.
In this work, we enable developers to register services/Things with the VSB Web Console, which can be installed on their private or public machine (see Section 2). Then, IoT applications can be created through simple drag and drop actions. Heterogeneous interconnections are resolved by automatically generating BCs. Finally, our console provides a monitoring interface to manage BCs (start/stop actions) and to detect message passing.
In the 2nd video we show the complete implementation and monitoring of the Fire Detection scenario: https://youtu.be/SJeiqJkBhls.
Conclusion
To facilitate the development of IoT applications, we have developed the VSB Web Console. Using our console, developers are able to register services/Things, create IoT apps through drag and drop actions, interconnect heterogeneous Things in an automated manner, and finally monitor the resulting applications.
In the future, we intend to provide an API that enables developers to access the Controller directly (not only through the UI). Furthermore, we aim to enable a distributed deployment of BCs.
"748443",
"740341",
"868414"
] | [
"454659",
"454659",
"454659"
] |
01771866 | en | [
"info"
] | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01771866/file/main.pdf | Jing Chang
Damien Chablat
Fouad Bennis
email: [email protected]@tsinghua.edu.cn
Liang Ma
Using 3D Scan to Determine Human Body Segment Mass in OpenSim Model
Keywords: Musculoskeletal disorders, biomechanical analysis, virtual human model, Open-Sim, body segment mass
Biomechanical motion simulation and dynamic analysis of human joint moments will provide insights into Musculoskeletal Disorders. As one of the mainstream simulation tools, OpenSim uses proportional scaling to specify model segment masses for the simulated subject, which may introduce errors. This study aims at estimating the errors caused by the specification method used in OpenSim, as well as the influence of these errors on dynamic analysis. A 3D scan is used to construct the subject's 3D geometric model, from which segment masses are determined. The determined segment mass data are taken as the yardstick to assess the errors of the OpenSim scaled model. The influence of these errors on the dynamic calculation is then evaluated in the simulation of a motion in which the subject walks with an ordinary gait. Results show that the mass error in one segment can be as large as 5.31% of the overall body weight. The mean influence on the calculated joint moments varies from 0.68% to 12.68% across 18 joints. In conclusion, a careful specification of segment masses will increase the accuracy of the dynamic simulation. As for estimating human segment masses, the use of segment volume and density data can be an economical choice, as an alternative to population mass distribution data.
Introduction
Musculoskeletal Disorders (MSDs) make up a vast proportion of occupational diseases [START_REF]Eurogip: Déclaration des maladies professionnelles : problématique et bonnes pratiques dans cinq pays européens[END_REF]. Inappropriate physical load is viewed as a risk factor for MSDs [START_REF] Chaffin | Occupational Biomechanics[END_REF]. Biomechanical analysis of joint moments and muscle loads will provide insight into MSDs.
Over the past decades, many tools have been developed for biomechanical simulation and analysis. OpenSim [START_REF] Delp | Opensim: open-source software to create and analyze dynamic simulations of movement[END_REF] is a virtual human modeling software that has been widely used for motion simulation and body/muscle dynamic analysis [START_REF] Thelen | Using computed muscle control to generate forward dynamic simulations of human walking from experimental data[END_REF][START_REF] Kim | Estimation of lumbar spinal loading and trunk muscle forces during asymmetric lifting tasks: application of whole-body musculoskeletal modelling in opensim[END_REF]. The simulation in OpenSim should be based on a generic virtual human that consists of bodies, muscles, joint constraints, etc., as shown in Figure 1.
A simulation is generally started by scaling the generic model specifically to the subjects geometric and mass data. The subjects body geometric data is obtained using a motion capture system, which records the spatial positions of flash reflecting markers that attached to the specific locations of the subject; then the generic OpenSim model is adjusted geometrically with attempts to minimize the position deviations between virtual markers and corresponding real markers. This makes a subject-specific model out of the generic model.
The geometrical adjustment increases the accuracy of posture simulation and kinematic analysis that follow. For accurate dynamic analysis, the body segment inertial parameters, such as segment masses, should also be adjusted specifically to each subject. In OpenSim, this adjustment is carried out by scaling the mass of each segment of the generic model proportionately with respect to the whole body mass of the subject. This method of determining segment mass is based on the assumption that the mass distribution among body segments is similar among humans, which is not always the case. For example, the mean mass proportion of the thigh has been reported to be 10.27% [START_REF] Clauser | Weight, volume, and center of mass of segments of the human body[END_REF], 14.47% [START_REF] De Leva | Adjustments to zatsiorsky-seluyanov's segment inertia parameters[END_REF], 9.2% [START_REF] Okada | Body segment inertia properties of japanese elderly[END_REF], and 12.72% [START_REF] Durkin | Analysis of body segment parameter differences between four human populations and the estimation errors of four popular mathematical models[END_REF], which indicates significant individual difference. Therefore, the scaling method used by OpenSim is likely to cause errors in the following dynamic analysis. There is a necessity to estimate the errors.
This paper aims at estimating the errors caused by the scaling method used in OpenSim. Firstly, subject's segment masses are determined based on the accordingly 3D geometric model constructed with the help of 3D scan. The determined data is taken as an approximation to the true value of the subject's segment mass. Secondly, this set of data, as well as the proportionately scaled segment mass data, is used to specify a generic OpenSim model. Errors caused by proportionately scaling are calculated. Finally, influence of the errors on dynamics analysis is checked on a simulation of a walking task.
The method to approximate subject's segment masses, model specification and dynamic simulation are described in chapter 2. Results are presented in chapter 3. These results are then discussed in chapter 4, followed by a conclusion in Chapter 5.
Methodology
Approximating segment masses with 3D scan
A whole-body 3D scan was conducted on a male subject (31 years old, 77.0 kg, 1.77 m) with a low-cost 3D scanner (Sense™ 3D scanner). Before scanning, reflective markers were placed on the subject to indicate the location of each joint plate, as shown in Figure 2. The locations of the joint plates were set according to [START_REF] Drillis | Body segment parameters[END_REF], which was meant to facilitate the dismemberment of the 3D model. During scanning, care was taken to ensure that there was no contact between the limbs and the torso. The scanned 3D model was stored in an STL mesh file, as shown in Figure 3. The 3D model was then dismembered into 15 parts in the way described by Drillis (1969) [START_REF] Drillis | Body segment parameters[END_REF]. Body markers and body part lengths were used as references. An example of a dismembered body part (the pelvis) is shown in Figure 4. The volume of each body part was then calculated.
To analyze the results obtained, the water displacements of eight distal body parts (hands, lower arms, feet, lower legs) were also measured, as described by Drillis (1966) [START_REF] Drillis | Body segment parameters[END_REF].
Specification of OpenSim models to the subject
In this step, a generic OpenSim model is specified to our subject in terms of body segment mass. The model was developed by Delp S.L. et al. (1990) [START_REF] Delp | An interactive graphicsbased model of the lower extremity to study orthopaedic surgical procedures[END_REF] (http://simtk-confluence.stanford.edu:8080/display/OpenSim/Gait+2392+and+2354+Models). It consists of 12 bodies, 23 degrees of freedom, and 52 muscles. The unscaled version of the model represents a subject that is about 1.8 m tall and has a mass of 75.16 kg. The approximate body segment mass data obtained from the process in chapter 2.1, as well as the proportionately weight-scaled body segment mass data, are used to specify the generic model. The former is considered as the yardstick to estimate the error of the latter.
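When measured segment masses are available, they can be written directly into the model instead of relying on proportional scaling. The sketch below assumes the OpenSim 4.x Python bindings; the mass values follow Table 2, the body names follow the gait2392 convention, and the mapping from the paper's segment labels to model bodies, as well as the serialization call, is our reading rather than the exact procedure used in this study.

```python
import opensim as osim  # assumes the OpenSim 4.x Python bindings are installed

# Approximate (3D-scan based) segment masses in kg, from Table 2; mapping the
# paper's segment labels to gait2392 body names is an assumption on our side.
approximate_masses = {
    "pelvis": 10.92,
    "torso": 38.91,
    "femur_r": 8.62, "femur_l": 8.71,   # upper legs
    "tibia_r": 3.62, "tibia_l": 3.51,   # lower legs
}

model = osim.Model("gait2392_simbody.osim")   # path to the generic model file
for body_name, mass in approximate_masses.items():
    model.getBodySet().get(body_name).setMass(mass)
model.initSystem()
# Serialization call as provided by the Python bindings (name may differ by version).
model.printToXML("gait2392_subject_specific.osim")
```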
Dynamic simulation in OpenSim
A simulation is conducted on the two specific models. The simulation data come from previous research [START_REF] John | Contributions of muscles to mediolateral ground reaction force over a range of walking speeds[END_REF]. The subject walks two steps in 1.2 s with an ordinary gait. Spatial posture data are collected at a frequency of 60 Hz and ground reaction forces at a frequency of 600 Hz. Inverse dynamic analysis is conducted on both models. Joint moments are calculated and compared between the two models.
Results
The estimation of body segment masses
A significant difference was found between the volumes calculated from the 3D-scanned geometric model and those measured by water displacement. For the lower leg, the difference is as large as 27% (4.5 l vs. 3.3 l). To approximate the real segment masses, the assumption is made that the volume distribution of the 3D model obtained from the 3D scan among the head, torso, pelvis and upper limbs is the same as that of the real subject. Density data of the body parts [START_REF] Wei | The application of segment axial density profiles to a human body inertia model[END_REF] were used to calculate the whole-body density of the subject, which, together with the body weight, gives an estimate of the whole-body volume. The overall volume was then distributed to each segment according to the relative volume ratios of the 3D geometric model. In this way, segment volumes and masses are approximated.
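The estimation chain described above can be written out step by step: whole-body density from segment densities weighted by the 3D-scan volume fractions, whole-body volume from the measured body mass, then segment volumes and masses. The sketch below uses only two segments (values from Table 1 and the 77.0 kg body mass), so the numbers are purely illustrative; the real analysis uses all 15 parts.

```python
def estimate_segment_masses(scan_volumes, densities, body_mass):
    """Distribute the estimated whole-body volume according to the 3D-scan
    volume ratios, then convert each segment volume to mass via its density."""
    total_scan_volume = sum(scan_volumes.values())
    # Whole-body density: segment densities weighted by scan volume fractions.
    body_density = sum(
        densities[s] * scan_volumes[s] / total_scan_volume for s in scan_volumes
    )
    body_volume = body_mass / body_density          # estimated whole-body volume (l)
    masses = {}
    for s, v in scan_volumes.items():
        segment_volume = body_volume * v / total_scan_volume
        masses[s] = densities[s] * segment_volume   # kg, since density is in kg/l
    return body_density, body_volume, masses

# Two-segment illustration only (torso and pelvis values from Table 1).
density, volume, masses = estimate_segment_masses(
    scan_volumes={"torso": 29.90, "pelvis": 11.29},  # litres, 3D-scanned model
    densities={"torso": 0.92, "pelvis": 1.01},       # kg/l
    body_mass=77.0,                                  # kg, measured
)
```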
Relative data is shown in Figure 6. The whole-body volume calculated from the 3D geometric model is 7.31% larger than the estimated whole-body volume (81.81 l to 76.24 l). The whole body density is estimated to be 1.01 g/ml. Mass proportion of the thigh is about 11.30%, which is between that reported by Clauser et al., (1969) (10.27%) [START_REF] Clauser | Weight, volume, and center of mass of segments of the human body[END_REF], Okada (1996) (9.2%) [START_REF] Okada | Body segment inertia properties of japanese elderly[END_REF] and by De Leva (1996) (14.47%) [START_REF] De Leva | Adjustments to zatsiorsky-seluyanov's segment inertia parameters[END_REF], Durkin & Dowling (2003) (12.72%) [START_REF] Durkin | Analysis of body segment parameter differences between four human populations and the estimation errors of four popular mathematical models[END_REF].
Errors analysis of the OpenSim scaled model specific to the subject
Segment masses generated by proportional scaling and by 3D modeling are used to specify the OpenSim generic model, which brings about two specific models (referred to as the scaled model and the approximate model). Figure 7 shows the segment mass data of the two models. The errors of the scaled model segment masses are between 4.06% and 47.42%. The most significant error emerges from the foot data, which, however, represent only a small part of the overall body mass.
Motion simulation and dynamic analysis
Both the proportionately scaled segment mass data and approximate segment mass data are used to specify the OpenSim generic model, bringing about two specific models (noted as scaled model and approximate model).
Motion simulation is conducted on the two models. The simulated motion includes two steps of walking, lasting 1.2 s. Since the two models differ only in segment mass, no difference appears in the kinematic analysis. As an example, the angle, velocity and acceleration of right hip flexion are shown in Figure 8. Inverse dynamic analysis on the two models generates different results. Figure 9 shows the right hip flexion moments calculated from the two models. With the approximate model as the yardstick, the error of the calculated right hip flexion moment of the scaled model has a mean of 1.89 Nm, which is 10.11% of its mean absolute value. A total of 18 joint moments were calculated. The means of the error percentage vary from 0.65% to 12.68%, with an average of 5.01%. The relevant data are shown in Table 1.
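As we read it, the reported error percentage is the mean absolute difference between the two moment curves divided by the mean absolute moment; a minimal sketch with made-up sample values:

```python
import numpy as np

def moment_error_percentage(moment_scaled, moment_approx):
    """Mean absolute difference between two joint-moment time series,
    as a percentage of the mean absolute (approximate-model) moment."""
    moment_scaled = np.asarray(moment_scaled, dtype=float)
    moment_approx = np.asarray(moment_approx, dtype=float)
    mean_error = np.mean(np.abs(moment_scaled - moment_approx))
    mean_moment = np.mean(np.abs(moment_approx))
    return mean_error, 100.0 * mean_error / mean_moment

# Made-up sample values (Nm), only to show the computation.
err_nm, err_pct = moment_error_percentage([20.1, -15.3, 30.2], [18.5, -14.0, 28.0])
```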
Discussions
The use of 3D scan in the estimation of body segment masses
In previous research, the inertial parameters of human body segments are usually determined by three means: (i) applying predictive equations generated from databases [START_REF] De Leva | Adjustments to zatsiorsky-seluyanov's segment inertia parameters[END_REF], (ii) medical scanning of live subjects [START_REF] Lee | Measurement of body segment parameters using dual energy x-ray absorptiometry and three-dimensional geometry: An application in gait analysis[END_REF], and (iii) geometric modelling of segments [START_REF] Davidson | Estimating subject-specific body segment parameters using a 3-dimensional modeller program[END_REF]. The use of the first one, as stated by [START_REF] Durkin | Analysis of body segment parameter differences between four human populations and the estimation errors of four popular mathematical models[END_REF], is limited by its sample population. Furthermore, the difference in segmentation methods makes it difficult to combine various equations [START_REF] Pearsall | The study of human body segment parameters in biomechanics[END_REF]. The second method, medical scanning such as dual energy X-ray absorptiometry, is more accurate in obtaining body segment inertial parameters [START_REF] Durkin | The measurement of body segment inertial parameters using dual energy x-ray absorptiometry[END_REF], but it is more expensive and time-consuming.
In this study, body segment masses are estimated from segment density data and segment volumes. A 3D scan is used to estimate the body segment volumes. In this process, errors may arise from two sources.
First, it is assumed that density data of each segment is constant among humans. This assumption may bring errors. Traditional body composition method defines two distinct body compartments: fat and lean body (fat-free). Fat has a density of 0.90 g/ml, while lean body has a density of 1.10 g/ml [START_REF] Lukaski | Methods for the assessment of human body composition: traditional and new[END_REF]. Subjects body fat rate may influence the segment density. However, the range of density variation is smaller than that of the mass distribution. Therefore, the use of density and volume may reduce the estimation error of segment mass. For example, in the current study, the thigh, with a volume of 8.35 l, holds a mass proportion that would vary from 9.74% (all fat, density = 0.90 g/ml) to 11.90% (fat-free, density = 1.10 g/ml). This range is much narrower than that found in previous research (from 9.2% [START_REF] Okada | Body segment inertia properties of japanese elderly[END_REF] to 14.47% [START_REF] De Leva | Adjustments to zatsiorsky-seluyanov's segment inertia parameters[END_REF]).
Second, a 3D scan is used to build up the 3D geometric model and calculate segment volumes. Significant differences exist between the volumes calculated and the volumes measured by water displacement. To approximate the real volumes, the assumption is made that the 3D geometric model has the same volume distribution as the subject, which may introduce error.
In summary, as a simple and low-cost method of segment mass determination, the use of density data and a 3D geometric model is more likely to reduce the estimation error. A 3D scan is an easy way to construct a 3D geometric model, but attention should be paid to the model's volume errors. The method used in this study to approximate the real segment volumes with the 3D-scanned model needs to be examined in future research.
The error and error significance of the proportionately scaled model
Proportional scaling is an efficient way to specify a generic model. In this study, the relative errors of the segment masses of the scaled model are between 4.06% and 47.42%. The error in the torso mass is 4.09 kg, which amounts to 5.30% of the overall body weight. In the following motion simulation, these errors bring about differences in the calculated joint moments. The mean differences in the calculated joint moments range from 3.65% to 12.68%. This suggests that a careful specification of segment masses will increase the accuracy of the dynamic simulation.
Conclusions
This study aims at estimating the errors and their influences on dynamic analysis caused by the scaling method used in OpenSim. A 3D scan is used to construct subject's 3D geometric model, according to which segment masses are determined. The determined segment masses data is taken as the yardstick to assess OpenSim proportionately scaled model: errors are calculated, and influence of the errors on dynamics analysis is examined.
As a result, the segment mass error reaches up to 5.31% of the overall body weight (torso). An influence on the dynamic calculation has been found, with an average difference of 3.65% to 12.68% in the joint moments.
Conclusions could be drawn that (i) the use of segment volume and density data may be more accurate than mass distribution reference data in the estimation of body segment masses and (ii) a careful specification of segment masses will increase the accuracy of the dynamic simulation significantly. This current work is a study to determine inertial parameters of the human body segment in biomechanical simulation. It explores new, more precise and simpler ways to implement biomechanical analysis. This work is a step towards characterizing muscular capacities for the analysis of work tasks and predicting muscle fatigue.
Fig. 1. A generic OpenSim model.
Fig. 2. Body markers that indicate the location of joint plates.
Fig. 3. The 3D geometric model of the subject generated by 3D scan.
Fig. 4. The mesh of pelvis dismembered from the whole-body 3D model.
Fig. 5. The OpenSim model used for error analysis.
Fig. 6. Coordinate (q), velocity (ω), acceleration (dω/dt) of right hip flexion in the motion.
Fig. 7. Right hip flexion moments calculated from the two models.
Table 1. Volume, density and mass of the whole body and segments.
Segment      | Volume (l), 3D scanned model | Volume (l), water displacement | Volume (l), estimated | Density (kg/l) [13] | Estimated mass (kg)
Overall      | 81.81 | -    | 76.24 | 1.01 | -
Pelvis       | 11.29 | -    | 10.81 | 1.01 | 10.92
Head         | 5.65  | -    | 5.41  | 1.07 | 5.79
Torso        | 29.90 | -    | 28.64 | 0.92 | 26.35
Upper arm-l  | 1.63  | -    | 1.56  | 1.06 | 1.66
Upper arm-r  | 1.72  | -    | 1.65  | 1.06 | 1.75
Lower arm-l  | 1.38  | 1.01 | 1.01  | 1.10 | 1.11
Lower arm-r  | 1.34  | 1.12 | 1.12  | 1.10 | 1.23
Upper leg-l  | 8.74  | -    | 8.37  | 1.04 | 8.71
Upper leg-r  | 8.65  | -    | 8.29  | 1.04 | 8.62
Lower leg-l  | 4.42  | 3.25 | 3.25  | 1.08 | 3.51
Lower leg-r  | 4.53  | 3.35 | 3.35  | 1.08 | 3.62
Foot-l       | 1.06  | 1.00 | 1.00  | 1.08 | 1.08
Foot-r       | 0.89  | 1.00 | 1.00  | 1.08 | 1.08
Hand-l       | 0.49  | 0.46 | 0.46  | 1.11 | 0.51
Hand-r       | 0.58  | 0.46 | 0.46  | 1.11 | 0.51
Table 2. Errors of proportionally scaled segment masses with respect to the approximate masses.
Segment      | Approximate mass (kg) | Proportionally scaled mass (kg) | Absolute error (kg) | Absolute error as % of overall body weight | Relative error
Torso        | 38.91 | 34.82 | -4.08 | -5.30% | -10.49%
Pelvis       | 10.92 | 11.98 | 1.06  | 1.37%  | 9.66%
Upper leg-l  | 8.71  | 9.46  | 0.75  | 0.98%  | 8.67%
Upper leg-r  | 8.62  | 9.46  | 0.84  | 1.10%  | 9.80%
Lower leg-l  | 3.51  | 3.76  | 0.25  | 0.33%  | 7.26%
Lower leg-r  | 3.62  | 3.76  | 0.15  | 0.19%  | 4.06%
Talus        | 0.07  | 0.10  | 0.03  | 0.04%  | 47.42%
Calcaneus    | 0.87  | 1.28  | 0.41  | 0.53%  | 47.42%
Table 3. Errors of the joint moments calculated from the scaled model, with respect to those from the approximate model.
Joint coordinate    Mean of error (Nm)    Mean of instant moment (Nm)    Mean of error percentage
Pelvis tilt 6.73 64.12 10.50%
Pelvis list 4.06 37.78 10.75%
Pelvis rotation 1.02 17.78 5.74%
Right hip flexion 1.89 18.73 10.11 %
Right hip adduction 0.48 18.57 2.60 %
Right hip rotation 0.07 3.47 2.13 %
Right knee angle 0.65 19.21 3.40 %
Right ankle angle 0.31 38.68 0.80 %
Right subtalar angle 0.06 9.02 0.65 %
Left hip flexion 2.27 17.90 12.68 %
Left hip adduction 0.80 24.99 3.18 %
Left hip rotation 0.10 4.28 2.23 %
Left knee angle 0.83 19.37 4.28 %
Left ankle angle 0.36 30.18 1.19 %
Left subtalar angle 0.07 7.65 0.91 %
Lumbar extension 5.47 60.85 9.00 %
Lumbar bending 3.05 37.37 8.17 %
Lumbar rotation 0.33 17.98 1.85 %
Acknowledgements
This work was supported by INTERWEAVE Project (Erasmus Mundus Partnership Asia-Europe) under Grants number IW14AC0456 and IW14AC0148, and by the National Natural Science Foundation of China under Grant numbers 71471095 and by Chinese State Scholarship Fund. The authors also thank D.Zhang Yang for his support. | 20,215 | [
"1006",
"982475",
"17917"
] | [
"473973",
"481387",
"473973",
"481387",
"24050",
"473973"
] |
01622004 | en | [
"sdv"
] | 2024/03/05 22:32:16 | 2017 | https://amu.hal.science/hal-01622004/file/Zaghden%20et%20al%202017_final.pdf | Hatem Zaghden
email: [email protected]
Marc Tedetti
Sami Sayadi
Mohamed Moncef
Boubaker Elleuch
Alain Saliot
Origin and distribution of hydrocarbons and organic matter in the surficial sediments of the Sfax-Kerkennah channel (Tunisia, Southern Mediterranean Sea)
Keywords: Sediment, hydrocarbons, organic matter, Sfax, Gulf of Gabès, Mediterranean Sea
We investigated the origin and distribution of aliphatic and polycyclic aromatic hydrocarbons (AHs and PAHs) and organic matter (OM) in surficial sediments of the Sfax-Kerkennah channel in the Gulf of Gabès (Tunisia, Southern Mediterranean Sea). TOC, AH and PAH concentrations ranged 2.3-11.7%, 8-174 µg g -1 sed. dw and 175-10,769 ng g -1 sed. dw, respectively. The lowest concentrations were recorded in the channel (medium sand sediment) and the highest ones in the Sfax harbor (very fine sand sediment). AHs, PAHs and TOC were not correlated for most of the stations. TOC/N and δ 13 C values revealed a mixed origin of OM with both marine and terrestrial sources. Hydrocarbon molecular composition highlighted the dominance of petrogenic AHs and the presence of both petrogenic and pyrogenic PAHs, associated with petroleum products and combustion processes. This work underscores the complex distribution patterns and the multiple sources of OM and hydrocarbons in this highly anthropogenized coastal environment.
Introduction
The study of the composition of coastal sediments represents one of the main pathways to highlight the level and source of contamination within marine ecosystems. Indeed, sediments are receptacles for a wide variety of organic contaminants emitted from land and which reach the sedimentary layer through their adsorption onto particles in the water column.
In addition, sediments may accumulate biogenic particulate material issued from surface waters and thus may provide fruitful information about autochthonous biological activity of marine ecosystems.
Hydrocarbons, including aliphatic hydrocarbons (AHs) and polycyclic aromatic hydrocarbons (PAHs), are abundant components of the organic material in coastal sediments [START_REF] Volkman | Identification of natural anthropogenic and petroleum hydrocarbons in aquatic sediments[END_REF][START_REF] Gogou | Marine organic geochemistry of the Eastern Mediterranean: 1. Aliphatic and polyaromatic hydrocarbons in Cretan Sea surficial sediments[END_REF], PAHs being among the most ubiquitous organic contaminants in the marine environment [START_REF] Louati | Hydrocarbon contamination of coastal sediments from the Sfax area (Tunisia), Mediterranean Sea[END_REF][START_REF] Roose | Monitoring organic microcontaminants in the marine environment: principles, programmes and progress[END_REF].
AHs, which may be of biogenic or anthropogenic origin, consist of a series of resolved nalkanes (R) and of unresolved complex mixture (UCM). Their biogenic sources include terrestrial plant waxes, marine phyto-and zoo-plankton, bacteria and diagenetic transformations [START_REF] Cripps | Problems in the identification of anthropogenic hydrocarbons against natural background levels in the Antartic[END_REF][START_REF] Rieley | The biogeochemistry of Ellesmere Lake, UK-I: source correlation of leaf wax inputs to the sedimentary lipid record[END_REF][START_REF] Volkman | Identification of natural anthropogenic and petroleum hydrocarbons in aquatic sediments[END_REF][START_REF] Wang | Oil spill identification[END_REF][START_REF] Mille | Hydrocarbons in coastal sediments from the Mediterranean sea (Gulf of Fos area, France)[END_REF], whereas their anthropogenic sources comprise essentially unburned petroleum/oils [START_REF] Mazurek | Characterization of biogenic and petroleum-derived organic matter in aerosols over remote rural and urban areas[END_REF][START_REF] Bouloubassi | Investigation of anthropogenic and natural organic inputs in estuarine sediments using hydrocarbon markers (NAH, LAB, PAH)[END_REF][START_REF] Wang | Using systematic and comparative analytical data to identify the source of an unknown oil on contaminated birds[END_REF][START_REF] Readman | Petroleum and PAH contamination of the Black Sea[END_REF][START_REF] Zaghden | Hydrocarbons in surface sediments from the Sfax coastal zone, (Tunisia) Mediterrranean Sea[END_REF].
Parent PAHs and their alkylated homologues (i.e. mono-, di-, tri- or tetra-methyl PAHs) may also be of biogenic or anthropogenic origin. PAHs are synthesized during the formation of oil (petrogenic PAHs), during the incomplete combustion of fossil fuels and biomass (pyrogenic PAHs) [START_REF] Wang | Oil spill identification[END_REF][START_REF] Wurl | A review of pollutants in the sea-surface microlayer (SML): a unique habitat for marine organisms[END_REF], and biologically produced in soils from woody plants, termites or from the microbial transformation of organic matter [START_REF] Chen | Termites fumigate their nests with naphthalene[END_REF][START_REF] Wilcke | Carbon isotope signature of polycyclic aromatic hydrocarbons (PAHs): evidence for different sources in tropical and temperate environments?[END_REF]. Petrogenic PAHs consist of low molecular weight (LMW) 2-3 ring compounds with a high proportion of alkylated homologues, while pyrogenic PAHs comprise high molecular weight (HMW) 4-6 ring compounds with a low proportion of alkylated derivatives [START_REF] Neff | Polycyclic aromatic hydrocarbons in the aquatic environment sources, fates and biological effects[END_REF]. PAHs are known to be harmful to living organisms, with reprotoxic, carcinogenic and mutagenic effects [START_REF] Kennish | Poly-nuclear aromatic hydrocarbons[END_REF]. Therefore, they have long been recognized as high-priority contaminants by various international organizations.
Hence, hydrocarbons (AHs and PAHs) in sediments may originate from numerous sources, including petroleum inputs, incomplete combustion of fuels (PAHs), forest and grass fires (PAHs), biosynthesis by marine and terrestrial organisms, and early diagenetic transformation processes (UNEP/IOC/IAEA, 1992; [START_REF] Clark | Marine Pollution[END_REF][START_REF] Readman | Petroleum and PAH contamination of the Black Sea[END_REF]).
Anthropogenic hydrocarbons are introduced into marine waters mainly via the direct discharge of crude oil and petroleum products during sea-based activities (spills from tankers, platforms and pipelines, ballast water discharge, drilling…) or via industrial and urban wastes (fuel combustion, traffic exhaust emissions, varied spills) routed by rivers, surface runoffs, effluents and the atmosphere [START_REF] Wang | Oil spill identification[END_REF][START_REF] Wurl | A review of pollutants in the sea-surface microlayer (SML): a unique habitat for marine organisms[END_REF][START_REF] Dachs | Organic Pollutants in Coastal Waters, Sediments, and Biota: A Relevant Driver for Ecosystems During the Anthropocene?[END_REF]. Thus, investigating the concentrations and molecular composition of hydrocarbons in surficial sediments allows for a better understanding of the levels and sources of anthropogenic contaminations and of the origin (marine versus terrestrial) of natural organic matter in coastal waters [START_REF] Budzinski | Evaluation of sediment contamination by polycyclic aromatic hydrocarbons in the Gironde estuary[END_REF].
The Gulf of Gabès (Southeast Tunisia, Southern Mediterranean Sea) presents two major characteristics. First, it is one of the most productive coastal environments of the Mediterranean Sea due to nutrient availability and the main fishing area of the Tunisian coasts (Jabeur et al., 2001;Bel Hassen et al., 2009;D'Ortenzio and Ribera d'Alcalà, 2009;[START_REF] Mermex | Marine ecosystems responses to climatic and anthropogenic forcings in the Mediterranean[END_REF]. Second, the Gulf of Gabès is submitted to very high anthropogenic pressures, especially around Sfax city, the second largest city in Tunisia and the one with the most important fishing and harbor activities. In this highly urbanized and industrialized area, organic contaminants are issued from a multitude of sources [START_REF] Illou | Impact des rejets telluriques d'origines domestiques et industrielles sur les environnements côtiers : cas du littoral nord de la ville de Sfax (Tunisie)[END_REF][START_REF] Serbaji | Utilisation d'un S.I.G. multi-sources pour la compréhension et la gestion intégrée de l'écosystème côtier de la région de Sfax (Tunisie)[END_REF][START_REF] Louati | Etude de la contamination par hydrocarbures des sédiments de la région de Sfax (Tunisie)[END_REF]. However, the studies dealing with the assessment of the hydrocarbon levels in the surficial sediments of the Gulf of Gabès/Sfax coastal area remain few compared to those conducted in the Northwestern Mediterranean Sea. They have really begun in the 2000's with the first reports of concentrations in AHs [START_REF] Louati | Hydrocarbon contamination of coastal sediments from the Sfax area (Tunisia), Mediterranean Sea[END_REF][START_REF] Zaghden | Hydrocarbons in surface sediments from the Sfax coastal zone, (Tunisia) Mediterrranean Sea[END_REF][START_REF] Elloumi | Detection of Water and Sediments Pollution of An Arid Saltern (Sfax, Tunisia) by Coupling the Distribution of Microorganisms With Hydrocarbons[END_REF][START_REF] Aloulou | Even-numbered nalkanes/n-alkenes predominance in surface sediments of Gabes Gulf in Tunisia[END_REF][START_REF] Amorri | Organic matter compounds as source indicators and tracers for marine pollution in a western Mediterranean coastal zone[END_REF], in PAHs [START_REF] Kessabi | Possible chemical causes of skeletal deformities in natural populations of Aphanius fasciatus collected from the Tunisian coast[END_REF], and in both AHs and PAHs [START_REF] Zaghden | Sources and distribution of aliphatic and polyaromatic hydrocarbons in sediments of Sfax, Tunisia, Mediterranean Sea[END_REF][START_REF] Bajt | Aliphatic and Polycyclic Aromatic Hydrocarbons in Gulf of Trieste Sediments (Northern Adriatic): Potential Impacts of Maritime Traffic[END_REF]. Interestingly, the AH and PAH concentrations found in the surficial sediments of the Sfax coastal area are rather located in the upper range of those recorded for the whole Mediterranean Sea.
The main objectives of this study were 1) to determine the contents in several biogeochemical parameters (C/H/N/S, TOC, CaCO 3 and δ 13 C) and in hydrocarbons (AHs and PAHs) in surficial sediments of the Sfax-Kerkennah channel and to compare these contents to those recorded in other regions of the Mediterranean Sea. 2) To assess the spatial distribution of these parameters in relation to the Sfax-Kerkennah channel geomorphology/bathymetry and to examine the degree of correlation between these parameters, especially between TOC, AHs and PAHs. 3) To evaluate the origin (marine versus terrestrial versus anthropogenic) of organic matter and hydrocarbons regarding molecular/isotopic ratios and indices.
Material and methods
Study area
This study was conducted in the coastal area between the Sfax city and the Kerkennah Islands, in the Northern part of the Gulf of Gabès (Tunisia, Southern Mediterranean Sea) (Fig. 1). The Gulf of Gabès, located in the Southeast of Tunisia, extends from the city of Chebba (35°14'N, 11°09'E) to the Tunisian-Libyan border (33°10'N, 11°34'E) and shelters the Kerkennah Islands in the Northeast and Djerba Island in the Southeast. The climate in the Gulf of Gabès is arid to semiarid, i.e. dry (average annual precipitation: 210 mm) and sunny with strong easterly winds (INM, 2008), resulting in a severe aeolian erosion and the transport of Saharan dusts into the sea [START_REF] Jedoui | Etude hydrologique et sédimentologique d'une lagune en domaine méditerranéen: Le bahiret el Boughrara (Tunisie)[END_REF][START_REF] Bouaziz | Néotectonique affectant les dépôts marins tyrrhéniens du littoral sud-est tunisien: implications pour les variations du niveau marin[END_REF]. The gulf is characterized by a wide continental shelf (~ 250 km) with a very low slope and an important network of channels and wadis. Hence, its basin is very shallow, with in some areas, a water depth not exceeding 1 m over several kilometres. Also, in the Gulf of Gabès, the tide is semidiurnal and one of the highest in the Mediterranean Sea, with a maximum range of approximately 2.3 m at spring low tides [START_REF] Sammari | Sea level variability and tidal resonance in the Gulf of Gabes, Tunisia[END_REF]. The sediment of the gulf is rich in organic matter and principally composed of sand with a high density of plants [START_REF] Ben Othman | Le sud tunisien (golfe de Gabès) : Hydrologie, sédimentologie, flore et faune[END_REF]. The Gulf of Gabès is an important nursery for several species of fish and accounts for 65% of the Tunisian fish production [START_REF] Dgpa | Direction Générale de Pêche et de l'Aquaculture[END_REF]. These favourable geomorphologic and climatic conditions have led to the development of one of the most extensive marine habitats of seagrass
Posidonia oceanica.
Sfax (34°43'N, 10°46'E; Fig. 1), with a population of about 730,000 inhabitants distributed over about 20,000 ha, represents the second largest city and the second economic pole in Tunisia. Sfax is heavily industrialized with important fishing and harbor activities. Its main industrial activity domains are phosphates, chemical products, textiles, olive oil, food, soap and paint. Therefore, the sources of pollution in the Sfax coastal zone are numerous: atmospheric depositions, ship traffic, fishery activities, rivers, wadis, wild landfills, municipal sewage effluents and industrial wastewaters, especially those coming from the storage of crude oil, phosphogypsum and olive oil wastes at the coast [START_REF] Ben Mustapha | Bionomie des étages infra et circalittoral du golfe de Gabès[END_REF][START_REF] Louati | Hydrocarbon contamination of coastal sediments from the Sfax area (Tunisia), Mediterranean Sea[END_REF]. The Kerkennah Islands (34°42'N, 11°11'E; Fig. 1), which include the two principal islands Gharbi and Chergui, are situated ~ 20 km offshore Sfax. They have an area of 160 km 2 and are low-lying being no more than 13 m above the sea level. These islands are characterized by a -10 m isobath few kilometers away from the shoreline and by a lithology dominated by smooth rocks [START_REF] Katlane | Recent dynamics of submerged shoals and channels aroud the Kerkennah archipelago (Tunisia) from LANDSAT TM and MODIS[END_REF]. Since there is neither industrial activity nor urban concentration in the Kerkennah Islands, the latter are much less submitted to anthropogenic pressures than the Sfax coastal area.
The Sfax-Kerkennah channel is an underwater channel, Northeast-Southwest oriented, which cut the shelf at depths higher than 20 m. It allows for conducting the ferry boat crossings Sfax-Kerkennah (Fig. 1). Bottom sediments are largely composed of sand, muddy sand and shell-sand with a high density of plants. This structure extends from the beaches up to about 20 m depth. Muddy sands are the most widespread sediment of the Sfax shelf. This sediment is the substrate of mixed Cymodocea and Posidonia meadows. In some places (southern Kerkennah), areas with gravel or concretions of calcareous algae are distinguished.
At the Kerkennah islands, the shelf is characterized by the presence of sandy shoals.
Sampling, storage, sieving and granulometric analysis of samples
Twenty stations were sampled in January 2005 at low tide on board a vessel from the Fishing Professional Training Centre of Sfax. Stations were located along two transects: one between the North of Sfax -Sidi Mansour area (station R101) and the Kerkennah harbor (station R111), and the other one between the Kerkennah harbor and the Sfax harbor (station R201) (Fig. 1; Table 1). Stations R104-R107 and R203-R210, displaying depths from > 5 to 20 m, were comprised within the Sfax-Kerkennah channel, whereas stations R101-R103, R108-R111, R201 and R202, with depths < 5 m, were positioned outside the channel (Table 1). At each station, surficial sediments (0-1 cm) were collected by professional divers.
Surficial sediments were transferred into pre-combusted (450 °C, 6 h) glass bottles and stored on board in the dark in the cold (~ 6 °C).
Back in the laboratory, samples were immediately frozen at -20 °C. They were kept frozen for a few days and then freeze-dried. Afterwards, each sediment sample (200 g) was dry sieved sequentially for 30 min, using an electric shaker, through twenty AFNOR standard stainless steel screens of mesh sizes ranging from 4 mm to 50 µm [START_REF] Aloulou | Benthic foraminiferal assemblages as pollution proxies in the northern coast of Gabes Gulf, Tunisia[END_REF]. Before sieving, stainless steel forceps were used to remove vegetal fragments from the sediment samples. The percentages of sediment mass corresponding to each particle size were determined. Grain Size Analyzer (GSA) software was used to establish granulometric curves and to derive the parameters describing the grain size distribution, i.e. the index of mean trend, expressed by the mean size (Mz), and the index of classification, expressed by the standard deviation of the size (σ). Mz and σ are given in phi units, where ϕ = -log2(D/D0), D0 being a reference diameter and D the diameter of the particle in mm [START_REF] Folk | Brazos River bar: a study in the significance of grain size parameters[END_REF][START_REF] Blott | GRADISTAT: a grain size distribution and statistics package for the analysis of unconsolidated sediments[END_REF][START_REF] Ghannem | Spatial Distribution of Heavy Metals in the Coastal Zone of ''Sfax-Kerkennah'' Plateau, Tunisia[END_REF]. All chemical analyses (hydrocarbons, elemental composition, organic carbon, calcium carbonate, δ13C, see below) were performed on the fraction < 63 µm, which represents the fraction of silts and clays.
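For reference, the φ transform maps grain diameters onto a logarithmic scale before GSA derives Mz and σ; a short sketch, taking D0 = 1 mm as is conventional (an assumption, since the reference diameter is not stated explicitly):

```python
import math

def phi(diameter_mm, d0_mm=1.0):
    """Phi scale: phi = -log2(D / D0)."""
    return -math.log2(diameter_mm / d0_mm)

# Sieve openings (mm) spanning the 4 mm - 50 um range used here.
for d in (4.0, 1.0, 0.5, 0.063, 0.050):
    print(f"{d:>6.3f} mm -> phi = {phi(d):5.2f}")
```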
Extraction, purification and GC-FID/MS analysis of hydrocarbons
Sediment samples (around 5 g dry weight of the fraction < 63 µm) were transferred into a pre-combusted glass tube, in which were also added 30 ml of a dichloromethane (CH 2 Cl 2 )/methanol (CH 3 OH) (3:1 v/v) solvent mixture, deuterated (internal) standard mixtures (n-C 24 -d 50 for AHs and p-terphenyl-d 14 for PAHs) and activated copper to remove sulphurs. Extraction was carried out in an ultrasonic bath for 15 min. The sediment was then isolated from the lipid extract by centrifugation (3500 rpm for 10 min). The entire extraction procedure (ultrasounds and centrifugation) was repeated 3 times. The extracts (supernatants) were combined and concentrated with a rotary evaporator at T < 30 °C, dried with magnesium sulphate and filtered on a pre-combusted glass fiber filter of porosity 4 µm. The remaining solvent was changed to n-hexane [START_REF] Ke | Determination of polycyclic aromatic hydrocarbons in mangrove sediments: Comparison of two internal standard surrogate methods and quality-control procedures[END_REF][START_REF] Bouloubassi | PAH transport by sinking particles in the open Mediterranean Sea: a 1 year sediment trap study[END_REF][START_REF] Parinos | Occurrence, sources and transport pathways of natural and anthropogenic hydrocarbons in deep-sea sediments of the eastern Mediterranean Sea[END_REF].
Hexane-solubilised extracts were purified to separate AHs and PAHs (hydrocarbon fraction) from more polar compounds. The extracts were fractionated on a column (6 mm i.d.) filled with 400 mg of silica gel (extra pure Merck 60), beforehand Soxhlet extracted with CH 2 Cl 2 , activated 1 h at 150 °C and partially deactivated with 4% water by weight. The AH fraction was first eluted with 3 ml n-hexane, followed by the elution of the PAH fraction with 9 ml hexane/toluene (9:1 v/v). Finally, purified extracts were concentrated to 150-200 µl with a rotary evaporator and a gentle stream of nitrogen.
Analyses of hydrocarbons (AHs and PAHs) were performed with a Delsi DI 200 gas chromatograph coupled with a flame ionisation detector (GC-FID) (Perichrom, France) [START_REF] Zaghden | Sources and distribution of aliphatic and polyaromatic hydrocarbons in sediments of Sfax, Tunisia, Mediterranean Sea[END_REF][START_REF] Bajt | Aliphatic and Polycyclic Aromatic Hydrocarbons in Gulf of Trieste Sediments (Northern Adriatic): Potential Impacts of Maritime Traffic[END_REF]. The GC-FID was equipped with a DB-5 MS fused-silica capillary column (30 m × 0.25 mm × 0.25 µm, J&W Scientific, Agilent Technologies, USA) and used helium as a carrier gas at a flow rate of 1.5 ml min -1 . The injector (used in splitless mode) and detector temperatures were 250 and 320 °C, respectively. The initial column temperature was held for 2 min at 60 °C, next ramped at 25 °C min -1 (ramp 1) to 100 °C and then at 2 °C min -1 (ramp 2) to a final temperature of 310 °C, which was held for 10 min. With each set of samples to be analysed, AH and PAH calibration standards were run for peak identification and quantification. Compounds were identified mainly by their retention times.
To confirm the structure of several hydrocarbons, some samples were also analysed using a HP 6890 gas chromatograph coupled with a HP 5973 MSD mass spectrometer (GC-MS) (Agilent Technologies, Wilmington DE, USA), equipped with a DB-5 MS fused-silica capillary column coated with 5% phenyl methyl siloxane [START_REF] Zaghden | Sources and distribution of aliphatic and polyaromatic hydrocarbons in sediments of Sfax, Tunisia, Mediterranean Sea[END_REF][START_REF] Bajt | Aliphatic and Polycyclic Aromatic Hydrocarbons in Gulf of Trieste Sediments (Northern Adriatic): Potential Impacts of Maritime Traffic[END_REF].
Helium was used as a carrier gas. The injector temperature and column temperature program were the same than those used for the GC-FID. GC-MS analyses were run in the electron impact mode at 70 eV with a 0.6 scan s -1 time over a 50-550 atomic mass unit (amu) range resolution.
Quality assurance and quality control of hydrocarbon analyses
During the procedures described above, nitrile gloves were worn and care was taken to avoid contaminations. All the glassware was cleaned with ultrapure water (Milli-Q water from Millipore system, final resistivity: 18.2 MΩ cm -1 at 25 °C, pH ~ 5), combusted at 450 °C during 6 h and finally cleaned with CH 3 OH and CH 2 Cl 2 before use. The precision glassware, which could not be baked, was cleaned in a bath of sulfochromic acid for at least 4 h, and then rinsed with ultrapure water, CH 3 OH and CH 2 Cl 2 . All the (Teflon-lined) caps were wrapped with Teflon tape for the storage of samples. All organic solvents were of trace-analysis quality (Merck, Darmstadt, Germany) and were further distilled before use. Silica and magnesium sulphate were purified by soxhlet extraction with CH 2 Cl 2 for 24 h and then dried in the oven. Deuterated (internal) standards mixtures (04071, Fluka, and 47543-U, Supelco) were introduced before ultrasonic treatment to assess the recoveries of the analytical procedure (including extraction, evaporation and purification processes). The latter were on average > 75% for the different AHs and PAHs investigated. Calibration (external) standards (04071, Fluka, and 47543-U, Supelco) as well as procedural and solvent blanks were run with each set of samples to check for contamination and for quantification. Calibration curves were constructed for all target hydrocarbons analysed except for the alkylated PAHs, which were quantified with their parent-compound calibration curves. For the different hydrocarbons, the detection limits ranged from 0.01 to 0.5 ng g -1 . Instrumental reproducibility, evaluated from samples R110 and R207, was on average ± 10%. All the concentration values, given in ng g -1 (PAHs) or µg g -1 (AHs) sediment dry weight (sed. dw), were blank-and recovery-corrected.
Determination of individual hydrocarbons and molecular diagnostic ratios
For AHs, we determined the concentrations of resolved n-alkanes (R), including linear n-alkanes from n-C 15 to n-C 34 and two isoprenoids, pristane (Pr, C 19 ) and phytane (Phy, C 20 ), and the concentrations of UCM. We computed different ratios and indices allowing to distinguish biogenic AHs (issuing from biological activity) and petrogenic AHs (coming from uncombusted petroleum): 1) the UCM/R ratio as indicator of the presence of degraded petroleum products (when > 3-4) [START_REF] Simoneit | Organic matter of the troposphere-II.* Natural background of biogenic lipid matter in aerosols over the rural western United States[END_REF][START_REF] Mazurek | Characterization of biogenic and petroleum-derived organic matter in aerosols over remote rural and urban areas[END_REF]. 2) The Pr/Phy ratio as indicator of biogenic AHs (when >> 1), even though values < 1 do not necessarily reflect the presence of petrogenic AHs [START_REF] Cripps | Problems in the identification of anthropogenic hydrocarbons against natural background levels in the Antartic[END_REF][START_REF] Commendatore | Natural and anthropogenic hydrocarbons in sediments from the Chubut River (Patagonia, Argentina)[END_REF][START_REF] Cincinelli | Natural and anthropogenic hydrocarbons in the water column of the Ross Sea (Antarctica)[END_REF]. 3) The n-C 17 /Pr and n-C 18 /Phy ratios as indicators of degraded (when < 1) or less degraded or relatively fresh AHs (when > 1) [START_REF] Mille | Hydrocarbons in coastal sediments from the Mediterranean sea (Gulf of Fos area, France)[END_REF][START_REF] Asia | Occurrence and distribution of hydrocarbons in surface sediments from Marseille Bay (France)[END_REF]. 4) The carbon preference index, the ratio of odd to even carbon-numbered n-alkanes, in the ranges n-C 15 ˗n-C 24 and n-C 25 ˗n-C 34 (CPI 15-24 and CPI 25-34 ), which are indicators of crude oil/petrogenic AHs (when ~ 1) or biogenic AHs (when >> or << 1) [START_REF] Eglinton | Leaf epicuticular waxes[END_REF][START_REF] Rieley | The biogeochemistry of Ellesmere Lake, UK-I: source correlation of leaf wax inputs to the sedimentary lipid record[END_REF][START_REF] Wang | Oil spill identification[END_REF][START_REF] Harji | Sources of hydrocarbons in sediments of the Mandovi estuary and the Marmugoa harbour, west coast of India[END_REF]. 5)
5) The terrigenous/aquatic ratio (TAR), the ratio of the concentrations of long-chain n-alkanes (n-C 27 + n-C 29 + n-C 31 ) to those of short-chain n-alkanes (n-C 15 + n-C 17 + n-C 19 ), as an index of the relative importance of terrestrial (higher plants) and aquatic (algae, phyto- and zooplankton) materials [START_REF] Bourbonniere | Sedimentary geolipid records of historical changes in the watersheds and productivities of Lakes Ontario and Erie[END_REF][START_REF] Mille | Hydrocarbons in coastal sediments from the Mediterranean sea (Gulf of Fos area, France)[END_REF] (Table 2).
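For illustration, the indices defined in points 1)-5) can be computed as in the following sketch (not the authors' code; the n-alkane, Pr, Phy and UCM values are hypothetical):

```python
# Illustrative sketch: computing the AH diagnostic ratios defined above from
# hypothetical n-alkane concentrations (µg g-1 sed. dw). `alkanes` maps carbon
# number -> concentration; pr, phy and ucm are pristane, phytane and UCM.
def cpi(alkanes, start, end):
    """Carbon preference index: sum of odd / sum of even n-alkanes in [start, end]."""
    odd = sum(c for n, c in alkanes.items() if start <= n <= end and n % 2 == 1)
    even = sum(c for n, c in alkanes.items() if start <= n <= end and n % 2 == 0)
    return odd / even if even else float("nan")

def tar(alkanes):
    """Terrigenous/aquatic ratio: (C27 + C29 + C31) / (C15 + C17 + C19)."""
    long_chain = sum(alkanes.get(n, 0.0) for n in (27, 29, 31))
    short_chain = sum(alkanes.get(n, 0.0) for n in (15, 17, 19))
    return long_chain / short_chain if short_chain else float("nan")

# Hypothetical values: a flat distribution gives CPI ~ 1 (petrogenic-like signature)
alkanes = {n: 1.0 for n in range(15, 35)}
pr, phy, ucm = 0.5, 0.8, 30.0
total_r = sum(alkanes.values()) + pr + phy

ratios = {
    "UCM/R": ucm / total_r,
    "Pr/Phy": pr / phy,
    "n-C17/Pr": alkanes[17] / pr,
    "n-C18/Phy": alkanes[18] / phy,
    "CPI15-24": cpi(alkanes, 15, 24),
    "CPI25-34": cpi(alkanes, 25, 34),
    "TAR": tar(alkanes),
}
print(ratios)
```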
Concerning PAHs, we investigated the concentrations of 17 PAHs: 12 parent PAHs, namely naphthalene (Nap), phenanthrene (Phe), anthracene (Ant), thiophene (Thi), fluoranthene (Flt), pyrene (Pyr), benzo[a]anthracene (BaA), chrysene (Chr), benzo[k]fluoranthene (BkF), benzo[a]pyrene (BaP), perylene (Per) and benzo[g,h,i]perylene (BgP), and 5 alkylated homologues, i.e. the sums of mono-, di- and tri-methyl compounds (∑Me) of the five target compounds Nap, Phe, Ant, Pyr and Chr. Thi is a 1-ring sulfur heterocyclic compound. Nap and its alkylated homologues are 2-ring compounds. Phe, Ant and their alkylated homologues are 3-ring compounds. These 2-3 ring compounds represent low molecular weight (LMW) PAHs. Flt, Pyr, BaA, Chr and the alkylated homologues of Pyr and Chr are 4-ring compounds. BkF, BaP and Per are 5-ring compounds, whereas BgP is a 6-ring compound. These 4-6 ring compounds represent high molecular weight (HMW) PAHs. We determined several ratios allowing us to differentiate petrogenic PAHs from pyrogenic PAHs (derived from the incomplete combustion of fossil fuels), PAHs from fuel combustion from those from the combustion of grass, coal or wood, and PAHs from traffic emissions from those from non-traffic emissions: ∑LMW/∑HMW, ∑MePhe/Phe, ∑MePyr/Pyr, Flt/Flt+Pyr and BaP/BgP [START_REF] Yunker | PAHs in the Fraser River basin: a critical appraisal of PAH ratios as indicators of PAH source and composition[END_REF][START_REF] Brändli | Critical evaluation of PAH source apportionment tools using data from the Swiss soil monitoring network[END_REF][START_REF] Tobiszewski | PAH diagnostic ratios for the identification of pollution emission sources[END_REF][START_REF] Katsoyiannis | Model-based evaluation of the use of polycyclic aromatic hydrocarbons molecular diagnostic ratios as a source identification tool[END_REF] (Table 2).
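A corresponding sketch for the PAH ratios is given below (illustrative only; concentrations are hypothetical and the LMW/HMW grouping follows the ring assignments above):

```python
# Hypothetical example of the PAH source-diagnostic ratios used in this study.
# Concentrations are in ng g-1 sed. dw; "Me*" entries are the sums of mono-, di-
# and tri-methyl homologues of the corresponding parent compound.
LMW = ["Nap", "MeNap", "Phe", "MePhe", "Ant", "MeAnt", "Thi"]                     # 2-3 rings (+ Thi)
HMW = ["Flt", "Pyr", "MePyr", "BaA", "Chr", "MeChr", "BkF", "BaP", "Per", "BgP"]  # 4-6 rings

def pah_ratios(c):
    lmw = sum(c.get(k, 0.0) for k in LMW)
    hmw = sum(c.get(k, 0.0) for k in HMW)
    return {
        "LMW/HMW": lmw / hmw,
        "MePhe/Phe": c.get("MePhe", 0.0) / c["Phe"],
        "MePyr/Pyr": c.get("MePyr", 0.0) / c["Pyr"],
        "Flt/(Flt+Pyr)": c["Flt"] / (c["Flt"] + c["Pyr"]),
        "BaP/BgP": c["BaP"] / c["BgP"],
    }

sample = {"Phe": 30, "MePhe": 90, "Flt": 25, "Pyr": 60, "MePyr": 120,
          "Chr": 40, "MeChr": 50, "BkF": 15, "BaP": 20, "Per": 10, "BgP": 25}
print(pah_ratios(sample))
```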
Analysis of C/H/N/S, TOC, CaCO 3 and δ 13 C
Several parameters were measured on sub-samples of freeze-dried and homogenized sediments (fraction < 63 µm). Content in total carbon, hydrogen, nitrogen and sulphur (C/H/N/S) was determined using an SC-144 LECO Elemental Analyzer at combustion temperatures of 1050 °C (C, H, N) or 1350 °C (S), in an oxygen (C, H, S) or helium (N) atmosphere.
For total organic carbon (TOC) determination, sediment sub-samples were acidified with 1 N HCl and oven dried at 60 °C. This procedure (acidification and drying) was repeated twice in order to remove inorganic carbon. Then, sub-samples were run in the Elemental Analyzer the same way as for total carbon. Two replicates of each sample were run for these elemental analyses. Content in calcium carbonate (CaCO 3 ) was estimated as the difference between total carbon and TOC contents. Concentrations in C, H, N, S, TOC and CaCO 3 are expressed in percentages (%) for 1 g sed. dw.
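A minimal sketch of this carbonate estimate is shown below; note that the conversion of inorganic carbon to CaCO 3 through the molar mass ratio (~ 100/12) is our assumption, since the text only states that CaCO 3 was estimated from the difference between total carbon and TOC:

```python
# Hypothetical illustration: estimating CaCO3 content (% sed. dw) from total
# carbon (TC) and total organic carbon (TOC) measurements, both in %.
def caco3_percent(tc, toc, use_molar_conversion=True):
    inorganic_c = max(tc - toc, 0.0)
    # 100.09 g mol-1 (CaCO3) / 12.01 g mol-1 (C) ~ 8.33 -- assumed conversion factor
    return inorganic_c * (100.09 / 12.01) if use_molar_conversion else inorganic_c

print(round(caco3_percent(14.6, 3.1), 1))  # ~95.8% for a carbonate-rich sample
```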
δ 13 C, given in ‰, expresses the stable carbon isotope ratio 13 C/ 12 C of a sample relative to that of a standard. It was determined on OC (acidified) sub-samples using the following formula: δ 13 C = [( 13 C/ 12 C) sample / ( 13 C/ 12 C) standard − 1] × 1000, where ( 13 C/ 12 C) standard corresponds to the isotopic ratio of the international standard Pee Dee Belemnite (PDB), and ( 13 C/ 12 C) sample to the isotopic ratio of the sediment sub-samples, measured using an elemental analyzer coupled to an isotope ratio mass spectrometer (EA-IRMS).
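For illustration, the formula translates directly into the following sketch (the PDB ratio used in the code is the commonly quoted constant and is an assumption, not a value reported in this study):

```python
# Hypothetical illustration of the delta-13C calculation defined above (in per mil).
R_PDB = 0.0112372   # commonly used 13C/12C ratio of the PDB standard (assumed value)

def delta13C(r_sample, r_standard=R_PDB):
    return (r_sample / r_standard - 1.0) * 1000.0

print(round(delta13C(0.011001), 1))   # ~ -21.0 per mil, a marine-like organic carbon value
```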
Statistical analyses
Ocean Data View (ODV) software version 4.6.5 (Schlitzer, R., http://odv.awi.de, 2014) was employed for the spatial representation of TOC, δ 13 C, R and PAH concentrations as well as R/TOC and PAH/TOC ratios. The spatial interpolation/gridding of data was performed using Data-Interpolating Variational Analysis (DIVA) [START_REF] Barth | A web interface for griding arbitrarily distributed in situ data based on Data-Interpolating Variational Analysis (DIVA)[END_REF][START_REF] Troupin | Generation of analysis and consistent error fields using the Data Interpolating Variational Analysis (Diva)[END_REF]. Pearson correlation matrices were computed using XLSTAT 2011.2.05 (Microsoft Excel add-in program). The significance threshold was set at p < 0.05.
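An equivalent, illustrative computation of such a correlation matrix (the authors used XLSTAT; the variable values below are made up) could look like this:

```python
# Illustrative sketch: Pearson correlation matrix with p-values (threshold p < 0.05)
# for a few of the measured variables, using hypothetical per-station values.
from itertools import combinations
from scipy.stats import pearsonr

data = {
    "TOC":  [11.7, 2.3, 7.2, 3.1, 11.6],
    "R":    [173.9, 8.1, 72.1, 14.2, 45.7],
    "PAHs": [10769, 175, 1500, 245, 1200],
}

for a, b in combinations(data, 2):
    r, p = pearsonr(data[a], data[b])
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3f} ({flag})")
```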
Results and discussion
Granulometry and content in C/H/N/S, TOC, CaCO 3 and δ 13 C
Mean grain size (M z ) ranged from 4 to 3 ϕ (i.e. 62.5-125 µm; very fine sand) in stations R104, R111, R201, from 3 to 2 ϕ (i.e. 125-250 µm; fine sand) in stations R101-R103, R106, R108-R110, R202, R210, and from 2 to 1 ϕ (i.e. 250-500 µm; medium sand) in stations R105, R107, R203-R209 (Table 3). The sorting index (σ) ranged between 1 and 2 ϕ for all stations, which reflects poorly sorted sands (Table 3). The granulometric distribution was closely related to the bathymetry and geomorphology of the Sfax-Kerkennah channel. The fact that sediments were essentially composed of sand is in accordance with the hydrodynamic properties of this intertidal area, where the combined action of waves and tide prevents the deposition of the finest fraction and promotes a sandy facies [START_REF] Aloulou | Benthic foraminiferal assemblages as pollution proxies in the northern coast of Gabes Gulf, Tunisia[END_REF]. Apart from stations R104, R106 and R210, which were situated at the border of or within the channel, fine and very fine sands were found outside the channel, i.e. in the most coastal and shallowest stations, including harbors (Fig. 1; Tables 1, 3). In contrast, medium sand was observed in the deepest stations, in the area where the channel is widest.
In this zone, fine particles carried by the North-East-South-West-oriented current cannot reach the bottom and are transported further southward (Fig. 1; Tables 1, 3).
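For reference, the correspondence between the ϕ classes quoted above and grain diameters follows from d (mm) = 2^-ϕ; a small illustrative sketch (not part of the original analysis) is:

```python
# Illustrative sketch: converting the phi scale to grain diameters and to the
# sand classes quoted in the text (very fine, fine, medium sand).
def phi_to_um(phi):
    return (2.0 ** -phi) * 1000.0   # micrometres

def sand_class(mz_phi):
    if 3 <= mz_phi <= 4:
        return "very fine sand (62.5-125 um)"
    if 2 <= mz_phi < 3:
        return "fine sand (125-250 um)"
    if 1 <= mz_phi < 2:
        return "medium sand (250-500 um)"
    return "outside the classes discussed here"

print(phi_to_um(4), phi_to_um(3), phi_to_um(2), phi_to_um(1))  # 62.5 125.0 250.0 500.0
print(sand_class(3.5))  # very fine sand
```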
The content in C, H, N and S ranged from 7.9 (R202) to 14.6% (R201), from 0.57 (R202) to 1.91% (R201), from 0.19 (R203, R202) to 1.23% (R201) and from 0.10 (R107) to 1.33% (R201), respectively (Table 3). The highest percentages were thus recorded in the Sfax harbor (R201) and the lowest ones close to the Sfax harbor (R203, R202) or in the Sfax-Kerkennah channel (R107). The C content was comparable to that determined in surficial sediments of Moroccan coastal waters (Southern Mediterranean Sea), i.e. 1.3-15.1% [START_REF] Pavoni | Environmental pollutants and organic carbon content in sediments from an area of the Moroccan Mediterranean Coast[END_REF]. In contrast, the N and S contents were much higher than those reported for these Moroccan waters or in the Todos os Santos Bay, Brazil (0.01-0.3 and < 0.01-1.3%, respectively) [START_REF] Pavoni | Environmental pollutants and organic carbon content in sediments from an area of the Moroccan Mediterranean Coast[END_REF][START_REF] Venturini | Characterization of the Benthic Environment of a Coastal Area Adjacent to an Oil Refinery, Todos os Santos Bay (NE-Brazil)[END_REF].
The TOC content varied between ~ 2.3 (R203, R202) and ~ 11.7% in the Sfax harbor (R201) and, quite surprisingly, in the Sfax-Kerkennah channel (R107) (Table 3; Fig. 2a).
High values of ~ 7.2% were observed in the Kerkennah harbor (R111) and the Sfax-Kerkennah channel (R104) (Table 3; Fig. 2a). The TOC content determined here was higher than that found in the surficial sediments of the Northeastern (0.02-2.4%) and Northwestern (0.35-6.2%) Mediterranean Sea (Lipiatou and Saliot, 1991a;[START_REF] Benlahcen | Distribution and sources of polycyclic aromatic hydrocarbons in some Mediterranean coastal sediments[END_REF][START_REF] Kucuksezgin | Distribution and sources of polycyclic aromatic hydrocarbons in Cilician Basin shelf sediments (NE Mediterranean)[END_REF] and the Bizerte Lagoon (0.4-3.9%) [START_REF] Barhoumi | Polycyclic aromatic hydrocarbons (PAHs) in surface sediments from the Bizerte Lagoon, Tunisia: levels, sources, and toxicological significance[END_REF].
Nonetheless, it was lower than that recorded in the Abu Qir Bay (Egyptian coasts), where it reached up to 20% [START_REF] El Deeb | Distribution and Sources of Aliphatic and Polycyclic Aromatic Hydrocarbons in Surface Sediments, Fish and Bivalves of Abu Qir Bay (Egyptian Mediterranean Sea)[END_REF]. Our quite high TOC concentrations were very likely related to the enhanced marine productivity (eutrophication) of this coastal area due to diverse nutrient inputs (Bel Hassen et al., 2009; D'Ortenzio and Ribera d'Alcalà, 2009; [START_REF] Mermex | Marine ecosystems responses to climatic and anthropogenic forcings in the Mediterranean[END_REF]).
The TOC/N ratio ranged from 9.4 (R201) to 56.6 (R107) (Table 3), which was higher than that measured in the surficial sediments from the Northwestern Mediterranean Sea (~ 7) [START_REF] Charles | Ecodynamics of PAHs at a peri-urban site of the French Mediterranean Sea[END_REF], the Northern Adriatic (3-37) [START_REF] Guerra | Polycyclic Aromatic Hydrocarbons, Polychlorinated Biphenyls and Trace Metals in Sediments from a Coastal Lagoon (Northern Adriatic, Italy)[END_REF][START_REF] Acquavita | The PAH level, distribution and composition in surface sediments from a Mediterranean Lagoon: The Marano and Grado Lagoon (Northern Adriatic Sea, Italy)[END_REF] and the Todos os Santos Bay, Brazil (8.8-27.6) [START_REF] Venturini | Characterization of the Benthic Environment of a Coastal Area Adjacent to an Oil Refinery, Todos os Santos Bay (NE-Brazil)[END_REF]. The TOC/N ratio may provide information about the origin of organic matter. A high TOC/N ratio (> 20) rather indicates a terrestrial origin of organic matter, due to the low N percentage in higher vegetation [START_REF] Muller | C/N ratios in Pacific deep-sea sediments: effects of inorganic ammonium and organic nitrogen compounds sorbed by clays[END_REF][START_REF] Emerson | Processes controlling the organic carbon content of open ocean sediments[END_REF][START_REF] Meyers | Preservation of elemental and isotopic source identification of sedimentary organic matter[END_REF]. On the other hand, a low TOC/N ratio (5-7) implies a marine origin (plankton or seaweeds) [START_REF] Muller | Productivity, sedimentation rate and sedimentary carbon content in the oceans, 1-Organic carbon preservation[END_REF][START_REF] Monoley | Modelling carbon and nitrogen flows in a microbial plankton community[END_REF][START_REF] Meyers | Preservation of elemental and isotopic source identification of sedimentary organic matter[END_REF]. In our case, the TOC/N ratio ranged between 9.4 (R201) and 18 (R110) for almost all the stations, which suggested a mixed origin with both autochthonous marine and terrestrial sources. For station R111 (22.3) and above all R107 (56.6), the high ratio pointed to a dominance of terrestrial organic matter. The TOC/S ratio varied between 2.6 (R202) and 118.9 (R107) (Table 3) and was higher than that observed in Todos os Santos Bay (2.8-13.6) [START_REF] Venturini | Characterization of the Benthic Environment of a Coastal Area Adjacent to an Oil Refinery, Todos os Santos Bay (NE-Brazil)[END_REF].
Except in station R202, the TOC/S ratio was > 2.8, which pointed to the occurrence of oxic conditions in the surficial sediments of the Sfax-Kerkennah channel [START_REF] Leventhal | An interpretation of carbon and sulfur relationships in Black Sea sediments as indicators of environments of deposition[END_REF][START_REF] Berner | Biogeochemical cycles of carbon and sulfur and their effect on atmospheric oxygen over Phanerozoic time[END_REF]. It is worth noting that the high TOC/N and TOC/S ratios observed in R107 were due to an elevated TOC content relative to the low N and S concentrations (Table 3).
The content in CaCO 3 was minimal (~ 40%) in the Sfax-Kerkennah channel (R104) and the Sfax harbor (R201), and maximal (~ 95.6%) on the Sidi Mansour-Kerkennah transect (R105, R108) (Table 3). This content was on average higher than that recorded in surficial sediments of Todos os Santos Bay, Brazil (0-93.2%) [START_REF] Venturini | Characterization of the Benthic Environment of a Coastal Area Adjacent to an Oil Refinery, Todos os Santos Bay (NE-Brazil)[END_REF] and Athens coastal area, Greece (24-86.3%) [START_REF] Kapsimalis | Organic contamination of surface sediments in the metropolitan coastal zone of Athens, Greece: Sources, degree, and ecological risk[END_REF]. The presence of CaCO 3 in surficial sediments is related to marine organisms. In our case, the two main contributors of CaCO 3 would be benthic foraminifera and green seaweeds [START_REF] Aloulou | Benthic foraminiferal assemblages as pollution proxies in the northern coast of Gabes Gulf, Tunisia[END_REF]. The highest δ 13 C signature was found in R101 (-20.9‰). It decreased from R101 to R107 (-25.1‰) and increased from R107 to R111 (-22.2‰). Then, the δ 13 C signature tended to decrease to reach -25.5‰ in R203. It finally slightly increased towards R201 (-24.0‰) (Table 3; Fig. 2b).
The carbon isotopic ratio (δ 13 C) is useful for distinguishing between marine and terrestrial sources of sedimentary organic matter. Marine organic matter typically has δ 13 C values between -22 and -20‰, whilst terrestrial organic matter (C 3 land plants) has an average δ 13 C value of -27‰ (Meyer, 1994). It thus appears that our sediments displayed overall a mixed origin, with a stronger marine fingerprint in stations R101-R103, R109 and R111 (Table 3; Fig. 2b).
Concentrations in hydrocarbons, comparison with other regions of the Mediterranean and Sediment Quality Guidelines
Total R concentration (sum of n-C 15 to n-C 34 + Pr and Phy) ranged from 8.1-14.2 µg g -1 sed. dw in the Sfax-Kerkennah channel (R102, R105-R107, R208, R209) to 173.9 µg g -1 sed. dw in the Sfax harbor (R201). High values were also recorded in the Sfax-Kerkennah channel (45.7 µg g -1 sed. dw, R104) and the Kerkennah harbor (72.1 µg g -1 sed. dw, R111) (Table 3; Fig. 2c). These R concentrations (8.1-173.9 µg g -1 sed. dw) were quite high compared to those reported in the surficial/surface sediments from other regions of the Mediterranean Sea (Table 4). Indeed, for the sediments from Abu Qir Bay (Egypt), Gulf of Lions, Gulf of Fossur-mer, Berre lagoon (France), Gulf of Trieste (Italy), Catalan coast (Spain), Cretan Sea (Greece) and Coastal Aegean Sea (Tukey), concentrations did not exceed 10 µg g -1 sed. dw (Lipiataou and Saliot, 1991b;[START_REF] Tolosa | Aliphatic and polycyclic aromatic hydrocarbons and sulfur/oxygen derivatives in Northwestern Mediterranean sediments: Spatial and temporalvariability, fluxes, and budgets[END_REF][START_REF] Gogou | Marine organic geochemistry of the Eastern Mediterranean: 1. Aliphatic and polyaromatic hydrocarbons in Cretan Sea surficial sediments[END_REF][START_REF] El Deeb | Distribution and Sources of Aliphatic and Polycyclic Aromatic Hydrocarbons in Surface Sediments, Fish and Bivalves of Abu Qir Bay (Egyptian Mediterranean Sea)[END_REF][START_REF] Mille | Hydrocarbons in coastal sediments from the Mediterranean sea (Gulf of Fos area, France)[END_REF][START_REF] Gonul | Aliphatic and polycyclic aromatic hydrocarbons in the surface sediments from the Eastern Aegean: assessment and source recognition of petroleum hydrocarbons[END_REF][START_REF] Kanzari | Aliphatic hydrocarbons, polycyclic aromatic hydrocarbons, polychlorinated biphenyls, organochlorine, and organophosphorous pesticides in surface sediments from the Arc river and the Berre lagoon, France[END_REF][START_REF] Bajt | Aliphatic and Polycyclic Aromatic Hydrocarbons in Gulf of Trieste Sediments (Northern Adriatic): Potential Impacts of Maritime Traffic[END_REF][START_REF] Mandalakis | Distribution of aliphatic hydrocarbons, polycyclic aromatic hydrocarbons and organochlorinated pollutants in deep-sea sediments of the southern Cretan margin, eastern Mediterranean Sea: A baseline assessment[END_REF] (Table 4). In the Gulf of Tunis, Khniss coast (Tunisia), Tangier coastal zone (Morocco), Rhône delta (France), Patroklos and Sitia areas (Greece) and Aliağa Bay (Turkey), maximal concentrations were comprised between 10 and 57 µg g -1 sed. dw (Lipiatou and Saliot, 1991b;[START_REF] Tsapakis | PAHs and n-alkanes in Mediterranean coastal marine sediments: aquaculture as a significant point source[END_REF][START_REF] Mzoughi | Distribution and partitioning of aliphatic hydrocarbons and polycyclic aromatic hydrocarbons between water, suspended particulate matter, and sediment in harbours of the West coastal of the Gulf of Tunis (Tunisia)[END_REF][START_REF] Bouzid | Distribution and origin of aliphatic hydrocarbons in surface sediments of strategical areas of the western moroccan mediterranean sea[END_REF][START_REF] Neşer | Polycyclic aromatic and aliphatic hydrocarbons pollution at the coast of Aliağa (Turkey) ship recycling zone[END_REF][START_REF] Zrafi | Aliphatic and Aromatic Biomarkers for Petroleum Hydrocarbon Investigation in Marine Sediment[END_REF] (Table 4). In Sfax ponds (Tunisia) and Eastern harbour of Alexandria (Egypt), maximal concentrations were 128 and 143 µg g -1 sed. 
dw, respectively ([START_REF] Aboul-Kassim | Petroleum hydrocarbon fingerprinting and sediment transport assessed by molecular biomarker and multivariate statistical analyses in the Eastern Harbour of Alexandria, Egypt[END_REF][START_REF] Elloumi | Detection of Water and Sediments Pollution of An Arid Saltern (Sfax, Tunisia) by Coupling the Distribution of Microorganisms With Hydrocarbons[END_REF]). Along the Sfax coastline (Tunisia), close to our study area, [START_REF] Zaghden | Sources and distribution of aliphatic and polyaromatic hydrocarbons in sediments of Sfax, Tunisia, Mediterranean Sea[END_REF] found concentrations reaching up to 430 µg g -1 sed. dw (sediments collected in 2003). Finally, Amori et al. (2011) observed considerably higher concentrations, i.e. 3,886 µg g -1 sed. dw, along the Gabès, Kettana and Al-Zar coastline (Tunisia). Hence, this comparison underscores that n-alkane concentrations in the surficial/surface sediments of the Tunisian coasts, particularly of the Gulf of Gabès, were among the highest measured in the Mediterranean Basin (Table 4).
Total PAH concentration (sum of 17 PAHs) varied between 175-245 ng g -1 sed. dw in the Sfax-Kerkennah channel (R109, R209, R208) and 10,769 ng g -1 sed. dw in the Sfax harbor (R201), with concentrations > 1,000 ng g -1 sed. dw recorded in the Sfax-Kerkennah channel (R104) and in the vicinity of Sfax (R204, R203, R202) and Kerkennah (R110, R111) harbors (Table 5; Fig. 2d). These total PAH concentrations (175-10,769 ng g -1 sed. dw) mirror a moderate to very high pollution level according to the pollution level classification proposed by [START_REF] Baumard | Origin and bioavailability of PAHs in the Mediterranean Sea from mussel and sediment records[END_REF] (Table 5) and were situated in the mid-range of those reported in the surficial/surface sediments from other regions of the Mediterranean Sea (Table 6). Actually, for the sediments from Bizerte lagoon, Khniss coast (Tunisia), Al Hoceïma coastal area (Morocco), Bay of Banuyls-sur-mer (France), Chioggia and Ancona coastal zones (Italy), Cretan Sea (Greece), Candarli Gulf (Turkey) and Cilician Basin (Cyprus), concentrations did not exceed 1,000 ng g -1 sed. dw [START_REF] Magi | Distribution of polycyclic aromatic hydrocarbons in the sediments of the Adriatic Sea[END_REF][START_REF] Pavoni | Environmental pollutants and organic carbon content in sediments from an area of the Moroccan Mediterranean Coast[END_REF][START_REF] Trabelsi | Polycyclic aromatic hydrocarbons in superficial coastal sediments from Bizerte Lagoon, Tunisia[END_REF][START_REF] Charles | Ecodynamics of PAHs at a peri-urban site of the French Mediterranean Sea[END_REF][START_REF] Kucuksezgin | Marine organic pollutants of the Eastern Aegean: Aliphatic and polycyclic aromatic hydrocarbons in Candarli Gulf surficial sediments[END_REF][START_REF] Kucuksezgin | Distribution and sources of polycyclic aromatic hydrocarbons in Cilician Basin shelf sediments (NE Mediterranean)[END_REF][START_REF] Zrafi | Aliphatic and Aromatic Biomarkers for Petroleum Hydrocarbon Investigation in Marine Sediment[END_REF][START_REF] Barhoumi | Polycyclic aromatic hydrocarbons (PAHs) in surface sediments from the Bizerte Lagoon, Tunisia: levels, sources, and toxicological significance[END_REF][START_REF] Mandalakis | Distribution of aliphatic hydrocarbons, polycyclic aromatic hydrocarbons and organochlorinated pollutants in deep-sea sediments of the southern Cretan margin, eastern Mediterranean Sea: A baseline assessment[END_REF] (Table 6). Also, in the Sfax, Luza, Sousse, Jarzouna-Bizerte coastal areas, Gulf of Tunis (Tunisia), Abu Qir Bay (Egypt), Rhône Delta, Port Vendres harbor (France), Gulf of Trieste (Italy) and coastal Aegean Sea (Turkey), maximal concentrations were comprised between 1,000 and < 10,000 ng g -1 sed. dw [START_REF] Bouloubassi | Investigation of anthropogenic and natural organic inputs in estuarine sediments using hydrocarbon markers (NAH, LAB, PAH)[END_REF][START_REF] Baumard | Origin and bioavailability of PAHs in the Mediterranean Sea from mussel and sediment records[END_REF][START_REF] Khairy | Risk assessment of polycyclic aromatic hydrocarbons in a Mediterranean semi-enclosed basin affected by human activities (Abu Qir Bay, Egypt)[END_REF][START_REF] Zrafi-Nouira | Distribution and Sources of Polycyclic Aromatic Hydrocarbons around a Petroleum Refinery Rejection Area in Jarzouna-Bizerte (Coastal Tunisia). 
Soil and Sediment Contamination[END_REF][START_REF] Mzoughi | Distribution and partitioning of aliphatic hydrocarbons and polycyclic aromatic hydrocarbons between water, suspended particulate matter, and sediment in harbours of the West coastal of the Gulf of Tunis (Tunisia)[END_REF][START_REF] Gonul | Aliphatic and polycyclic aromatic hydrocarbons in the surface sediments from the Eastern Aegean: assessment and source recognition of petroleum hydrocarbons[END_REF][START_REF] Kessabi | Possible chemical causes of skeletal deformities in natural populations of Aphanius fasciatus collected from the Tunisian coast[END_REF][START_REF] Bajt | Aliphatic and Polycyclic Aromatic Hydrocarbons in Gulf of Trieste Sediments (Northern Adriatic): Potential Impacts of Maritime Traffic[END_REF][START_REF] Zaghden | Evaluation of hydrocarbon pollution in marine sediments of Sfax coastal areas from the Gabes Gulf of Tunisia, Mediterranean Sea[END_REF] (Table 6). Nevertheless, in Egypt coastal areas, Lazaret Bay, Gulf of Fos-sur-mer (France), Taranto Gulf, Coastal Ligurian Sea, Venice Lagoon, Naples harbor (Italy), Santander Bay, Catalonia coast (Spain), Gulf of Corinth, North Evoikos and Saronikos Gulfs, Drapetsona-Keratsini coastal zone (Greece), Izmit and Aliağa Bays (Turkey), Rovinj coastal area and Rijeka Bay (Croatia), maximal concentrations were much higher than 10,000 ng g -1 sed. dw (La [START_REF] Rocca | PAHs content and mutagenicity of marine sediments from the Venice lagoon[END_REF][START_REF] Benlahcen | Distribution and sources of polycyclic aromatic hydrocarbons in some Mediterranean coastal sediments[END_REF][START_REF] Eljarrat | Toxic potency assessment of non-and mono-ortho PCBs, PCDDs, PCDFs, and PAHs in northwest Mediterranean sediments (Catalonia Spain)[END_REF][START_REF] Viguri | Environmental assessment of polycyclic aromatic hydrocarbons (PAHs) in surface sediments of the Santander Bay. 
Northern Spain[END_REF][START_REF] Bertolotto | Polycyclic aromatic hydrocarbons in surficial coastal sediments of the Ligurian Sea[END_REF][START_REF] Bihari | PAH content, toxicity and genotoxicity of coastal marine sediments from the Rovinj area, Northern Adriatic, Croatia[END_REF][START_REF] Tolun | Polycyclic aromatic hydrocarbon contamination in coastal sediments of the Izmit Bay (Marmara Sea): case studies before and after the Izmit Earthquake[END_REF][START_REF] Mille | Hydrocarbons in coastal sediments from the Mediterranean sea (Gulf of Fos area, France)[END_REF][START_REF] Sprovieri | Heavy metals, polycyclic aromatic hydrocarbons and polychlorinated biphenyls in surface sediments of the Naples harbor (southern Italy)[END_REF][START_REF] Annicchiarico | PCBs, PAHs and metal contamination and quality index in marine sediments of the Taranto Gulf[END_REF][START_REF] Alebic-Juretic | Polycyclic aromatic hydrocarbons in marine sediments from the Rijeka Bay area, Northern Adriatic, Croatia, 1998-2006[END_REF][START_REF] Barakat | Distribution and characteristics of PAHs in sediments from the Mediterranean coastal environment of Egypt[END_REF][START_REF] Botsou | Polycyclic aromatic hydrocarbons (PAHs) in marine sediments of the Hellenic coastal zone, eastern Mediterranean: levels, sources and toxicological significance[END_REF][START_REF] Neşer | Polycyclic aromatic and aliphatic hydrocarbons pollution at the coast of Aliağa (Turkey) ship recycling zone[END_REF][START_REF] Kapsimalis | Organic contamination of surface sediments in the metropolitan coastal zone of Athens, Greece: Sources, degree, and ecological risk[END_REF] (Table 6). It should be noticed that our total PAH concentrations were similar to those recorded by [START_REF] Zaghden | Sources and distribution of aliphatic and polyaromatic hydrocarbons in sediments of Sfax, Tunisia, Mediterranean Sea[END_REF] for the same area (113-10,720 ng g -1 sed. dw) (sediments collected in 2003).
Sediment quality guidelines (SQGs) are used to assess the contamination level of marine and estuarine sediments [START_REF] Long | Incidence of adverse biological effects with ranges of chemical concentrations in marine and estuarine sediments[END_REF][START_REF] Barhoumi | Polycyclic aromatic hydrocarbons (PAHs) in surface sediments from the Bizerte Lagoon, Tunisia: levels, sources, and toxicological significance[END_REF]. [START_REF] Long | Incidence of adverse biological effects with ranges of chemical concentrations in marine and estuarine sediments[END_REF] proposed two guideline values, an effects range low (ERL) and an effects range median (ERM), to determine the sediment quality. Our PAH concentrations were thus compared to these ERL and ERM values (Table 7). Total PAH concentrations in all stations were below the ERL, except in R202 (close to Sfax harbor) and R201 (Sfax harbor), where they fell between the ERL and the ERM. At the level of individual compounds, concentrations in Nap, BaA and BaP were below the ERL in all stations, while concentrations in Phe, Ant, Flt and Chr were below the ERL or between the ERL and the ERM depending on the stations. The Pyr concentration was > ERM in R201 (Table 7). Consequently, PAH concentrations in the surficial sediments of the Sfax-Kerkennah channel area may be harmful to marine biota, mainly in the Sfax harbor.
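This screening against the ERL/ERM guidelines reduces to a simple three-class test; the sketch below is illustrative only, and the threshold values shown are placeholders rather than the actual guideline values of Long et al. (see Table 7):

```python
# Hypothetical illustration of the sediment-quality screening used above.
def sqg_class(conc, erl, erm):
    """conc, erl, erm in ng g-1 sed. dw."""
    if conc < erl:
        return "below ERL (adverse effects rarely expected)"
    if conc <= erm:
        return "between ERL and ERM (adverse effects possible)"
    return "above ERM (adverse effects probable)"

# Placeholder guideline values (NOT the published Long et al. numbers)
print(sqg_class(10769, erl=4000, erm=45000))   # e.g. total PAHs in the most polluted station
```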
Sediment geochemistry and relationships between hydrocarbons and biogeochemical parameters
Stations displaying high C, H, N, S, TOC, R and PAH contents were the Sfax (R201) and Kerkennah (R111) harbors as well as station R104 (in the channel) and, to a lesser extent, station R110 (close to Kerkennah harbor; outside the channel). Except R110 (fine sand), these stations were characterized by very fine sand (Tables 3,5; Fig. 2a,c,d), which may be linked to the relatively high PAH levels encountered. Indeed, PAHs are known to be mainly adsorbed onto very fine particles because of their higher specific surface area [START_REF] Xia | Effect of sediment particle size on polycyclic aromatic hydrocarbon biodegradation: importance of the sediment-water interface[END_REF]. TOC/N and δ 13 C values emphasized a mixed (marine and terrestrial) origin of organic matter (Table 3; Fig. 2b), while the hydrocarbon levels suggested significant anthropogenic inputs at these sites. In the Sfax and Kerkennah harbors, anthropogenic inputs were rather evident, with ship traffic and petroleum wastes. Nevertheless, specific anthropogenic sources were less obvious for station R104. The latter was located in a depression that might receive specific industrial wastewaters. In general, sediments with high organic carbon content contained high PAH concentrations [START_REF] Barhoumi | Polycyclic aromatic hydrocarbons (PAHs) in surface sediments from the Bizerte Lagoon, Tunisia: levels, sources, and toxicological significance[END_REF]. However, station R107 (in the channel) presented a high TOC concentration and, at the same time, low R and PAH levels compared to those of stations R201, R111 and R104. R107 was also typified by medium sand and by a terrestrial fingerprint of the organic matter with regard to its TOC/N and δ 13 C values. An inverse pattern was observed in stations R202-R204, with high R and PAH concentrations and a very low TOC content. TOC/N and δ 13 C showed a marine and terrestrial origin of organic matter (Tables 3,5; Fig. 2a-d). These stations, located outside (R202) or at the channel border (R203-R204), were very likely under the influence of anthropogenic (PAH) inputs from the Sfax harbor and city. All other stations of the channel (R105, R106, R205-R210) as well as stations R108 and R109 (outside the channel) were characterized by fine or medium sand and presented the lowest R and PAH concentrations, a low TOC content, a high CaCO 3 content, and TOC/N and δ 13 C values revealing both marine and terrestrial organic matter (Tables 3,5; Fig. 2a-d). These sites were thus much less impacted by anthropogenic inputs, owing to their relatively greater distance from the coast and their greater depth, as well as to the North-East-South-West-oriented current crossing the channel, which transports the finest particles further southward.
As mentioned above, a strong decoupling occurred between hydrocarbon and TOC contents depending on the stations, especially for R107. As seen from Fig. 2e, f, the R/TOC and PAH/TOC ratios were not constant: low ratios were observed inside the channel (< 8 mg g -1 and < 60 µg g -1 sed. dw, respectively), while high ratios were found in coastal stations outside the channel (> 10 mg g -1 and > 500 µg g -1 sed. dw, respectively). When all stations were taken into account (n = 20), significant positive linear correlations appeared between total R concentration and C, H, N, S and TOC contents (r = 0.49-0.93, p < 0.05, n = 20), and between total PAH concentration and H, N and S contents (r = 0.73-0.81, p < 0.05, n = 20). A significant correlation was also found between total R and PAH concentrations (r = 0.85, p < 0.05, n = 20) (Table 8). When removing station R107 from the dataset, for which the decoupling between hydrocarbons and TOC was very pronounced (this station appeared as an outlier when plotting the data), the correlations between R, PAHs and TOC clearly increased (r = 0.66-0.88, p < 0.05, n = 19) (Table 8). However, when station R201 (i.e. the station showing "extreme" hydrocarbon concentrations) was also excluded in addition to R107, the degree of correlation between parameters dropped: PAHs were no longer correlated with TOC and R (r = -0.08-0.21, p > 0.05, n = 18), whereas the correlation between the latter two substantially decreased (r = 0.68, p < 0.05, n = 18) (Table 8). Finally, when eliminating R107 and all stations with total PAH concentration > 1000 ng g -1 sed. dw (i.e. R201, R104, R110, R111, R202-R204), the correlation between R and TOC disappeared as well (r = -0.02, p > 0.05, n = 12) (Table 8). This correlation study shows that the significant correlations occurring between R, PAHs and TOC were almost entirely due to the "extreme" station R201. Therefore, the distributions and concentrations of R, PAHs and TOC were decoupled in the surficial sediments of the Sfax-Kerkennah channel. R and PAHs were very likely influenced more by specific inputs than by organic matter content. The absence of a significant linear relationship between PAH and TOC concentrations, or between R and PAH concentrations, in surficial/surface sediments was also reported for the Bizerte Lagoon (Tunisia), the Marano and Grado Lagoons (Italy) and Egyptian coastal areas ([START_REF] El Nemr | Aliphatic and polycyclic aromatic hydrocarbons in the surface sediments of the Mediterranean: assessment and source recognition of petroleum hydrocarbons[END_REF]; Acquavita et al., 2014; [START_REF] Barhoumi | Polycyclic aromatic hydrocarbons (PAHs) in surface sediments from the Bizerte Lagoon, Tunisia: levels, sources, and toxicological significance[END_REF]). Moreover, [START_REF] Simpson | Composition and distribution of polycyclic aromatic hydrocarbon contamination in surficial marine sediments from Kitimat Harbor, Canada[END_REF] proposed that PAH and TOC concentrations in sediments are significantly correlated solely in heavily contaminated sites where total PAH concentrations are > 2,000 ng g -1 , which is in accordance with our results.
Composition and sources of hydrocarbons
The UCM/R ratio was > 3 for stations R106, R111 (Kerkennah harbor), R201 (Sfax harbor), R202 and R208-R210 (Sfax-Kerkennah channel) (Table 3), which reflected the presence of degraded petroleum products at these stations [START_REF] Simoneit | Organic matter of the troposphere-II.* Natural background of biogenic lipid matter in aerosols over the rural western United States[END_REF][START_REF] Guigue | Occurrence and distribution of hydrocarbons in the surface microlayer and subsurface water from the urban coastal marine area off Marseilles, Northwestern Mediterranean Sea[END_REF][START_REF] Parinos | Occurrence, sources and transport pathways of natural and anthropogenic hydrocarbons in deep-sea sediments of the eastern Mediterranean Sea[END_REF]. CPI 15-24 was < 1 for all the stations, except for the Sfax harbor (R201) where it was close to 1 (Table 3). This shows the predominance of even carbon numbers among short-chain n-alkanes. Indeed, n-C 16 , n-C 18 and, to a lesser extent, n-C 20 were the dominant compounds over the whole range n-C 15 -n-C 34 , apart from the coastal stations R101-R103, the Kerkennah harbor area (R110, R111) and the Sfax harbor (R201), in which n-C 31 and n-C 33 were the major compounds. The predominance of the even light n-alkanes n-C 16 , n-C 18 and n-C 20 in sediments is not common, although it has been observed, for instance, in sediments from the Arabian Gulf [START_REF] Grimalt | n-Alkane distributions in surface sediments from the Arabian Gulf[END_REF], the Gulf of Fos-sur-mer [START_REF] Mille | Hydrocarbons in coastal sediments from the Mediterranean sea (Gulf of Fos area, France)[END_REF] and Taihu Lake, China [START_REF] Yu | Distribution and sources of n-alkanes in surface sediments of Taihu Lake, China[END_REF]. It has been suggested that these even light n-alkanes derive from bacteria, as well as from fungi and yeast species, and from petroleum-derived inputs [START_REF] Mille | Hydrocarbons in coastal sediments from the Mediterranean sea (Gulf of Fos area, France)[END_REF][START_REF] Harji | Sources of hydrocarbons in sediments of the Mandovi estuary and the Marmugoa harbour, west coast of India[END_REF][START_REF] Yu | Distribution and sources of n-alkanes in surface sediments of Taihu Lake, China[END_REF]. CPI 25-34 displayed values > 3 for stations R101-R110 and R208 (Table 3). This underscored the presence of AHs from mixed petroleum and biogenic (terrestrial higher plant debris) sources, the latter having a higher contribution [START_REF] Rieley | The biogeochemistry of Ellesmere Lake, UK-I: source correlation of leaf wax inputs to the sedimentary lipid record[END_REF][START_REF] Harji | Sources of hydrocarbons in sediments of the Mandovi estuary and the Marmugoa harbour, west coast of India[END_REF]. In contrast, for almost all the stations from the Kerkennah-Sfax city transect, CPI 25-34 was < 3 (Table 3), which emphasised an increase in the contribution of petroleum inputs [START_REF] Mille | Hydrocarbons in coastal sediments from the Mediterranean sea (Gulf of Fos area, France)[END_REF][START_REF] Guigue | Occurrence and distribution of hydrocarbons in the surface microlayer and subsurface water from the urban coastal marine area off Marseilles, Northwestern Mediterranean Sea[END_REF]. TAR was > 1 in most stations, ranging between 1.2 and 2.8, with high values of 6.6 and 10.4 detected in R102 and R103. TAR was ~ 1.0 in stations R106 and R205 and < 1 in R104, R107, R202 and R207 (Table 3).
Hence, in most stations, this ratio reflected a higher contribution of terrestrial higher plants compared to aquatic material (algae, phyto- and zooplankton).
Ratios involving Pr and Phy (Pr/Phy, n-C 17 /Pr and n-C 18 /Phy) have to be taken with extreme caution because these branched alkanes have multiple origins and are sensitive to diagenetic conditions and thermal maturity [START_REF] Peters | The Biomarker Guide, 2nd edn[END_REF]. They are both derived from the phytol side chain of chlorophyll a, either under reducing conditions (Phy) or oxidizing conditions (Pr). They are thus abundant in weathered crude oils. However, Pr can also originate from zooplankton, and Phy from Archaebacteria, such as methanogens [START_REF] Volkman | Identification of natural anthropogenic and petroleum hydrocarbons in aquatic sediments[END_REF]. It has been proposed that Pr/Phy < 1 could be taken as an indicator of petroleum origin and/or highly reducing (anoxic, hypersaline) depositional environments. Pr/Phy > 3 may reflect the presence of biogenic AHs, while Pr/Phy between 1 and 3 may be the sign of oxidizing depositional environments [START_REF] Volkman | Identification of natural anthropogenic and petroleum hydrocarbons in aquatic sediments[END_REF][START_REF] Ten Haven | Applications and limitations of Mango''s light hydrocarbon parameters in petroleum correlation studies[END_REF][START_REF] Peters | The Biomarker Guide, 2nd edn[END_REF]. Here, we found Pr/Phy < 1 for all the stations, except R110, R201, R208 and R209, which presented values between 1.19 and 2.85 (Table 3). These Pr/Phy values (< 3) pointed to the absence of biogenic AHs. Although no significant correlation was found between the Pr/Phy ratio or the Phy concentration and % S (data not shown), we may assume that the relatively high sulphur content in these sediments (underscoring the occurrence of anoxic conditions) contributed to the relatively low Pr/Phy values observed, in addition to the petroleum signature. These results are in agreement with those of [START_REF] Zaghden | Sources and distribution of aliphatic and polyaromatic hydrocarbons in sediments of Sfax, Tunisia, Mediterranean Sea[END_REF] in the same area. The n-C 18 /Phy ratio was > 1 for all the stations. Values between 2 and 5 were observed for a majority of stations, whereas higher values (6-9) were found in stations R107, R205, R209 and R210 (Table 3). Interestingly, the highest value was detected in R107, suggesting a very recent hydrocarbon input that could be related to the relatively high TOC concentration found at this station. In the Sfax harbor area, n-C 18 /Phy decreased (~ 1.7 in R201 and R202) (Table 3), which reflected the presence of more degraded AHs. The n-C 17 /Pr ratio, which showed values around 1 for several stations from the coastal areas and the Sfax-Kerkennah channel, did not follow the same trend as the n-C 18 /Phy ratio (Table 3). Its interpretation thus seems more complicated. This analysis of AH ratios and indices suggests the presence of both petrogenic and biogenic materials, the petrogenic fingerprint being more accentuated in the harbor areas and in the Kerkennah-Sfax city transect than in the North of Sfax-Kerkennah transect.
Concerning PAHs, the most abundant compounds were ∑MePyr (in R101, R104, R110, R111, R207, R203, R201), ∑MePhe (in R102, R106, R107, R204) or Chr and/or ∑MeChr (in R103, R105, R210, R205), which accounted on average for 20% of total PAHs (Table 5). Per (35%), BgP (34%), BaP (30%), Flt (17%) and BkF (19%) dominated in R108, R109, R209, R208 and R202, respectively. Nap, ∑MeNap, Ant and BaA were not detected, except in R103, R209, R204, R203 and/or R202 (Table 5). High proportions of alkylated Pyr and Phe in the Sfax sediments have already been pointed out by [START_REF] Zaghden | Sources and distribution of aliphatic and polyaromatic hydrocarbons in sediments of Sfax, Tunisia, Mediterranean Sea[END_REF][START_REF] Bajt | Aliphatic and Polycyclic Aromatic Hydrocarbons in Gulf of Trieste Sediments (Northern Adriatic): Potential Impacts of Maritime Traffic[END_REF]. Hence, PAH molecular profiles illustrated the predominance of 4-ring compounds apart from stations R204 (dominance of 3 rings), R108, R209 (dominance of 5 rings) and R109 (dominance of 6 rings) (Fig. 3). Fig. 4 presents the cross plot of ∑LMW/∑HMW versus Flt/Flt+Pyr ratios for the different samples, except stations R106, R209 and R210, for which Flt/Flt+Pyr could not be determined. Most of the stations (R101-R105, R107, R109-R111, R205, R201) displayed ∑LMW/∑HMW ratio < 1 and Flt/Flt+Pyr ratio < 0.4, which highlighted both petrogenic and pyrogenic sources of PAHs. In station R204, the inverse pattern (∑LMW/∑HMW > 1 and Flt/Flt+Pyr < 0.4), suggested this mixed source as well. On the contrary, stations R108, R202, R203, R207 and R208 presented ∑LMW/∑HMW ratio < 1 and Flt/Flt+Pyr ratio > 0.4, putting forward the dominance of the pyrogenic source [START_REF] Soclo | Origin of polycyclic aromatic hydrocarbons (PAHs) in coastal marine sediments: case studies in Cotonou (Benin) and Aquitaine (France) areas[END_REF][START_REF] Yunker | PAHs in the Fraser River basin: a critical appraisal of PAH ratios as indicators of PAH source and composition[END_REF][START_REF] Li | Distribution and sources of polycyclic aromatic hydrocarbons in the middle and lower reaches of the Yellow River, China[END_REF][START_REF] Zhang | Source diagnostics of polycyclic aromatic hydrocarbons in urban road runoff, dust, rain and canopy throughfall[END_REF]. R108, R202, R203 and R207 had Flt/Flt+Pyr ratio < 0.5, which might be attributed to fuel combustion. R208, with Flt/Flt+Pyr ratio of 0.74, was distinguished by a contribution of grass, coal, and/or wood combustion [START_REF] De | Soil-borne polycyclic aromatic hydrocarbons in El Paso, Texas: analysis of a potential problem in the United States/Mexico border region[END_REF]Fig. 4; Table 2). On the other side, ∑MePhe/Phe ratio was > 2 with the exception of stations R109 and R210, which implied the dominance of the petrogenic source [START_REF] Prahl | Polycyclic aromatic hydrocarbon (PAH)-phase associations in Washington coastal sediments[END_REF][START_REF] Garrigues | Pyrolytic and petrogenic inputs in recent sediments: a definitive signature through phenanthrene and chysene compound distribution[END_REF]. 
In the same way, ∑MePyr/Pyr was > 1, apart from R107, R205 and R203, underscoring the petrogenic fingerprint [START_REF] Zaghden | Sources and distribution of aliphatic and polyaromatic hydrocarbons in sediments of Sfax, Tunisia, Mediterranean Sea[END_REF][START_REF] Bajt | Aliphatic and Polycyclic Aromatic Hydrocarbons in Gulf of Trieste Sediments (Northern Adriatic): Potential Impacts of Maritime Traffic[END_REF]. Finally, BaP/BgP was > 0.6, except for R103, R109 and R201, which could reflect traffic emissions ([START_REF] Katsoyiannis | On the use of PAH molecular diagnostic ratios in sewage sludge for the understanding of the PAH sources. Is this use appropriate?[END_REF]; Table 2). Consequently, from these different indices and ratios, it appears that the Sfax-Kerkennah channel area was characterized by various petrogenic and pyrogenic sources of PAHs, with no clear trend between coastal stations (outside the channel) and stations inside the channel.
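The source assignments discussed in this section follow simple threshold rules on the diagnostic ratios; a compact, purely illustrative sketch (hypothetical input values; thresholds as quoted in the text and Table 2) is:

```python
# Hypothetical illustration of the threshold-based PAH source assignment used above.
def pah_sources(lmw_hmw, flt_fltpyr, mephe_phe, mepyr_pyr, bap_bgp):
    notes = []
    notes.append("LMW/HMW > 1: petrogenic contribution" if lmw_hmw > 1
                 else "LMW/HMW < 1: pyrogenic contribution")
    if flt_fltpyr < 0.4:
        notes.append("Flt/(Flt+Pyr) < 0.4: petrogenic")
    elif flt_fltpyr < 0.5:
        notes.append("Flt/(Flt+Pyr) 0.4-0.5: fuel combustion")
    else:
        notes.append("Flt/(Flt+Pyr) > 0.5: grass/coal/wood combustion")
    if mephe_phe > 2 or mepyr_pyr > 1:
        notes.append("alkylated/parent ratios: petrogenic fingerprint")
    notes.append("BaP/BgP > 0.6: traffic emissions" if bap_bgp > 0.6
                 else "BaP/BgP <= 0.6: non-traffic emissions")
    return notes

print(pah_sources(0.4, 0.29, 3.0, 2.0, 0.8))
```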
Conclusion
This study investigated the origin and distribution of hydrocarbons (AHs and PAHs) and organic matter in the surficial sediments of the Sfax-Kerkennah channel. Sediments, mainly composed of sand, displayed a grain size distribution related to the geomorphology, bathymetry and hydrodynamic properties of the Sfax-Kerkennah channel. Fine and very fine sands were generally found outside the channel (coastal stations and harbors), whereas medium sand was observed within the channel (deepest stations). Compared to other regions of the Mediterranean Sea, we recorded high TOC concentrations (> 11%), quite high R concentrations (up to 174 µg g -1 sed. dw) and high PAH concentrations (> 10,000 ng g -1 sed. dw). According to [START_REF] Baumard | Origin and bioavailability of PAHs in the Mediterranean Sea from mussel and sediment records[END_REF], PAH pollution was moderate to very high in the Sfax-Kerkennah channel. Moreover, with regard to sediment quality guidelines [START_REF] Long | Incidence of adverse biological effects with ranges of chemical concentrations in marine and estuarine sediments[END_REF], the pyrene concentration in the Sfax harbor sediment may be detrimental to marine ecosystems. In the Sfax and Kerkennah harbors as well as in stations R104 and R110, we found high contents in C, H, N, S, TOC, R and PAHs. Except for station R110, these high contents were associated with very fine sand. In contrast, most of the stations located within the channel were characterized by fine or medium sand and by low TOC, R and PAH concentrations.
Nevertheless, by examining in detail the degree of correlation between parameters, we showed that R, PAHs and TOC were actually decoupled for most of the stations. This suggests that hydrocarbons were very likely influenced more by specific inputs than by organic matter content in the surficial sediments of the Sfax-Kerkennah channel. TOC/N and δ 13 C values revealed a mixed origin of organic matter with both autochthonous marine and terrestrial sources. Diagnostic indices and ratios emphasized the dominance of the petrogenic origin of n-alkanes (relative to the biogenic origin) and the presence of both petrogenic and pyrogenic PAHs. AH ratios revealed the presence of both biogenic and petrogenic materials, the petrogenic fingerprint being stronger, and the AHs more degraded, in the harbor areas and in the Kerkennah-Sfax city transect. However, PAH diagnostic ratios did not reveal any clear relationship between the geographical distribution of stations and the molecular composition/origin of hydrocarbons. This work underscores the complex distribution patterns and the multiple sources (marine, terrestrial, anthropogenic) of organic matter and hydrocarbons in the Gulf of Gabès. Further investigations should also consider the molecular composition of hydrocarbons in the water column (particulate and dissolved phases) for a global view of organic pollutant dynamics in the coastal waters of the Gulf of Gabès.
Table 3. Granulometric parameters (Mz and σ in unit ϕ), percentages (for 1 g sediment dry weight) of elemental constituents (C, H, N, S), calcium carbonate (CaCO 3 ) and total organic carbon (TOC), isotopic signature of organic carbon (δ 13 C in ‰), and concentrations in total n-alkanes (R in µg g -1 sed. dw) and associated molecular diagnostic ratios (UCM/R, CPI, Pr/Phy, n-C 17 /Pr, n-C 18 /Phy, TAR) in the fraction < 63 µm of surficial sediments (0-1 cm) collected along the Sfax-Kerkennah channel (Southeast Tunisia, Southern Mediterranean Sea).
Station: R101, R102, R103, R104, R105, R106, R107, R108, R109, R110, R111, R210, R209, R208, R207, R205, R204, R203, R202, R201
Mz (ϕ): 3 to 2, 3 to 2, 3 to 2, 4 to 3, 2 to 1, 3 to 2, 2 to 1, 3 to 2, 3 to 2, 3 to 2, 4 to 3, 3 to 2, 2 to 1, 2 to 1, 2 to 1, 2 to 1, 2 to 1, 2 to 1, 3 to 2, 4 to 3
σ (ϕ): 2 to 1 for all stations
δ 13 C (‰): -20.9, -21.2, -22.4, -23.7, -24.6, -25.1, -25.1, -24.4, -22.7, -23.6, -22.2, -24.7, -25.2, -25.4, -25.0, -25.0, -25.4, -25.5, -24.8, -24.0
[Other rows of Table 3 (C, H, N, S, CaCO 3 , TOC, R and the AH diagnostic ratios) not recoverable from the extracted file; see the caption above and the values quoted in the text.]
PAHs are regulated by different organizations: 8 PAHs are included in the list of the 45 priority regulated substances by the European Union (Official Journal of the EU 24/08/2013, Directive 2013/39/EU) and 16 PAHs are included in the list of 126 priority regulated substances by the US Environmental Protection Agency (US EPA, 40 CFR Part 423, Appendix A to Part 423).
Figure captions
Figure 1. [caption not recovered from the extracted file]
Figure 2. Spatial distribution of a) the concentration in TOC (in % for 1 g sed. dw), b) the isotopic signature of organic carbon (δ 13 C in ‰), c) the concentration in total n-alkanes (R in µg g -1 sed. dw), d) the concentration in total (∑17) PAHs (in ng g -1 sed. dw), e) the R/TOC ratio (in mg g -1 ) and f) the PAH/TOC ratio (in µg g -1 ) in surficial sediments of the Sfax-Kerkennah channel.
Figure 3. Distribution pattern of PAHs in the surficial sediments of the Sfax-Kerkennah channel.
Figure 4. Cross plot of ∑LMW/∑HMW versus Flt/Flt+Pyr ratios for the different samples.
Table 2. Hydrocarbon molecular diagnostic ratios investigated in this study with typical values from the literature. Adapted from Tobiszewski and Namieśnik (2012) and [START_REF] Katsoyiannis | Model-based evaluation of the use of polycyclic aromatic hydrocarbons molecular diagnostic ratios as a source identification tool[END_REF].
Station / Area / Position / Depth of the water column (m) / Type of bottom
R101 North of Sfax -Sidi Mansour 34°48'52"N, 10°53'05"E 1 Dense meadows of Posidonia
R102 34°47'56"N, 10°54'00"E 1.1 Meadows of Posidonia
R103 34°47'19"N, 10°54'30"E 1.7 Meadows of Posidonia
R104 Channel 34°46'00"N, 10°54'57"E 7.6 Muddy sand
R105 Channel 34°45'05"N, 10°55'25"E 9.5 Dense meadows of Posidonia
R106 Channel 34°44'08"N, 10°55'50"E 8.5 Sandy with Posidonia
R107 Channel 34°43'08"N, 10°56'20"E 5.5 Dense meadows of Posidonia
R108 34°42'10"N, 10°56'48"E 2.8 Dense meadows of Posidonia
R109 34°41'10"N, 10°57'18"E 1 Dense meadows of Posidonia
R110 34°40'19"N, 10°57'43"E 0.8 Muddy sand
R111 Kerkennah harbor 34°39'26"N, 10°58'00"E 2.7 Sandy with Posidonia
R210 Channel 34°40'19"N, 10°56'05"E 7 Sandy
R209 Channel 34°40'52"N, 10°55'05"E 9 Meadows of Posidonia
R208 Channel 34°41'25"N, 10°54'00"E 15.7 Meadows of Posidonia
R207 Channel 34°41'48"N, 10°52'55"E 20 Meadows of Posidonia
R205 Channel 34°42'15"N, 10°50'28"E 7.3 Muddy sand
R204 Channel 34°42'23"N, 10°49'15"E 8.5 Muddy sand
R203 Channel 34°42'31"N, 10°48'00"E 7.9 Muddy sand
R202 34°42'39"N, 10°46'50"E 4 Muddy sand
R201 Sfax harbor 34°42'46"N, 10°46'05"E 4.5 Muddy sand
Table 4. Comparison of total n-alkane concentrations (R in µg g -1 sed. dw) in surficial/surface sediments from different regions of the Mediterranean Sea.
Table 5. Concentrations in polycyclic aromatic hydrocarbons (PAHs in ng g -1 sed. dw) and associated molecular diagnostic ratios in the fraction < 63 µm of surficial sediments (0-1 cm) collected along the Sfax-Kerkennah channel (Southeast Tunisia, Southern Mediterranean Sea).
Table 6. Comparison of total PAH concentrations (in ng g -1 sed. dw) in surficial/surface sediments from different regions of the Mediterranean Sea.
Acknowledgements.
We acknowledge the service central d'analyses du CNRS (Vernaison, France). We are grateful to Pr. J.-L. Reyss (LSCE, UMR CEA-CNRS 1572, France) for providing access to CHN and isotope ratio equipment. We warmly thank A. Lorre for her help and assistance for OC and δ 13 C analyses. One anonymous Reviewer is acknowledged for his relevant comments and corrections. This work was conducted in part in the framework of the IRD French-Tunisian International Joint Laboratory "LMI COSYS-Med".
a The pollution levels are those defined by [START_REF] Baumard | Origin and bioavailability of PAHs in the Mediterranean Sea from mussel and sediment records[END_REF]: low, 0-100 ng g -1 ; moderate, 100-1,000 ng g -1 ; high, 1,000-5,000 ng g -1 ; very high, > 5,000 ng g -1 ; bld: below detection limit.
"19857",
"927093"
] | [
"2069",
"489992",
"191652",
"489992",
"153209",
"397680"
] |
01635969 | en | [
"sdu",
"sde"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01635969/file/Guigue%20et%20al_2017_final.pdf | Catherine Guigue
email: [email protected]
Marc Tedetti
Huy Duc
Jean-Ulrich Dang
Cédric Mullot
M Garnier
Goutx
Huy Dang
Jean-Ulrich Mullot
Cédric Garnier
Madeleine Goutx
Remobilization of polycyclic aromatic hydrocarbons and organic matter in seawater during sediment resuspension experiments from a polluted coastal environment: insights from Toulon Bay (France)
Keywords: Sediment resuspension, remobilization, polycyclic aromatic hydrocarbons, fluorescent dissolved organic matter, sediment-water partition
Introduction
Polycyclic aromatic hydrocarbons (PAHs) are among the most widespread organic pollutants in marine environments. Due to their hydrophobicity and low water solubility, PAHs entering marine waters tend to sorb onto particulate organic matter (OM) [START_REF] Gustafsson | Dynamic colloid-water partitioning of pyrene through a coastal Baltic spring bloom[END_REF][START_REF] Kennish | Poly-nuclear aromatic hydrocarbons[END_REF][START_REF] May | Determination of the solubility behavior of some polycyclic aromatic hydrocarbons in water[END_REF] that is then deposited to the sediments via vertical sinking. This mechanism is recognized as a major pathway for the removal and global cycling of hydrocarbons in the ocean [START_REF] Adhikari | Vertical fluxes of polycyclic aromatic hydrocarbons in the northern Gulf of Mexico[END_REF][START_REF] Berrojalbiz | Biogeochemical and physical controls on concentrations of polycyclic aromatic hydrocarbons in water and plankton of the Mediterranean and Black Seas[END_REF][START_REF] Gustafsson | Using 234 Th disequilibria to estimate the vertical removal rates of polycyclic aromatic hydrocarbons from the surface ocean[END_REF]. However, due to natural (waves, currents, storms) and anthropogenic (dredging, trawling, ship traffic) forcing, coastal sediments are frequently resuspended, refocused and transported along the continental shelf (Durrieu de [START_REF] Durrieu De Madron | Sediment dynamics in the Gulf of Lions: The impact of extreme events[END_REF][START_REF] Schoellhamer | Anthropogenic sediment resuspension mechanisms in a shallow microtidal estuary[END_REF] and thus may turn into a potential source of organic pollutants, including PAHs, for the water column (Dong et al., 2015;[START_REF] Eggleton | A review of factors affecting the release and bioavailability of contaminants during sediment disturbance events[END_REF]. While natural forcing generally induces the resuspension of surface sediments (the first centimeters), human activities, particularly dredging or harbor improvements, may lead to the resuspension over sediment depths of several dozens of centimeters. Dissolved PAHs originating from sediment resuspension may then have harmful effects for organisms living in the water column [START_REF] Gewurtz | Comparison of polycyclic aromatic hydrocarbon and polychlorinated biphenyl dynamics in the benthic invertebrates of lake Erie, USA[END_REF][START_REF] Varanasi | Bioavailability and biotransformation of aromatic hydrocarbons in benthic organisms exposed to sediment from an urban estuary[END_REF][START_REF] Woodhead | Polycyclic aromatic hydrocarbons in surface sediments around England and Wales and their possible biological significance[END_REF].
The partitioning of PAHs between water and sediment evaluated through the determination of the sediment-water or sediment OM-water partition coefficients (K d or K oc , respectively) is largely driven by grain size and OM content of sediments and may vary during resuspension events (Cornelissen et al., 2006;[START_REF] Karickhoff | Sorption of hydrophobic pollutants on natural sediments[END_REF][START_REF] Rockne | Distributed Sequestration and Release of PAHs in Weathered Sediment: The Role of Sediment Structure and Organic Carbon Properties[END_REF][START_REF] Tremblay | Effects of temperature, salinity, and dissolved humic substances on the sorption of polycyclic aromatic hydrocarbons to estuarine particles[END_REF]. However, little is known about the association between PAHs and sedimentary OM subjected to transformations during burial and diagenesis [START_REF] Berner | Early Diagenesis: A Theoretical Approach (1 st edn)[END_REF][START_REF] Henrichs | Early diagenesis of organic matter: the dynamics (rates) of cycling of organic compounds[END_REF]; that is, the change in K d and K oc values with sediment depth or OM aging. Further, studies dealing with PAH partitioning between water and sediment should better consider the interactions between dissolved PAHs and dissolved OM (DOM) [START_REF] Akkanen | Comparative Sorption and Desorption of Benzo[a]pyrene and 3,4,3',4'-Tetrachlorobiphenyl in Natural Lake Water Containing Dissolved Organic Matter[END_REF][START_REF] Mccarthy | Interactions between polycyclic aromatic hydrocarbons and dissolved humic material: Binding and dissociation[END_REF][START_REF] Yang | Insights into the binding interactions of autochthonous dissolved organic matter released from Microcystis aeruginosa with pyrene using spectroscopy[END_REF]. Over the last decade, the dynamics of DOM in coastal waters has been evaluated through the study of its fluorescent fraction (FDOM). Recently, FDOM associated with marine or river sediments has been characterized, i.e., the FDOM from sediment pore waters [START_REF] Burdige | Fluorescent dissolved organic matter in marine sediment pore waters[END_REF][START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF][START_REF] Sierra | Fluorescence and DOC contents of pore waters from coastal and deep-sea sediments in the Gulf of Biscay[END_REF] and from sediment particles [START_REF] Brym | Optical and chemical characterization of base-extracted particulate organic matter in coastal marine environments[END_REF][START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF][START_REF] He | Differences in spectroscopic characteristics between dissolved and particulate organic matters in sediments: Insight into distribution behavior of sediment organic matter[END_REF], as well as FDOM released by coastal sediment resuspension [START_REF] Komada | Fluorescence characteristics of organic matter released from coastal sediments during resuspension[END_REF]. Hence, due to the influence of OM in PAH partitioning processes, it is generally observed that theoretical K oc , i.e., K oc predicted from their octanol-water partition coefficients (K ow ), are lower than measured K oc (Accardi-Dey and Gschwend, 2002;Feng et al., 2007;[START_REF] Fernandes | Polyaromatic Hydrocarbon (PAH) Distribution in the Seine River and its Estuary[END_REF]. 
As a corollary, concentrations in water that are determined from sediment concentrations and theoretical K oc are frequently overestimated. This also leads to an over-estimation of uptake processes and/or effects of those chemicals on exposed organisms [START_REF] Arp | Estimating the in situ Sediment-Porewater Distribtuion of PAHs and Chlorinated Aromatic Hydrocarbons in Anthropogenic Impacted Sediments[END_REF][START_REF] Hawthorne | Predicting Bioavailability of Sediment Polycyclic Aromatic Hydrocarbons to Hyalella azteca using Equilibrium Partitioning, Supercritical Fluid Extraction, and Pore Water Concentrations[END_REF]. Measuring K oc is thus essential for more accurate predictions of PAH toxicity in the water column.
In the Mediterranean Sea, Toulon harbor (Southern France, Northwestern Mediterranean Sea) hosts the main French Navy structure. It is located deep inside Toulon Bay and enclosed by a sea wall. Because of this separation, and also the irregular freshwater inputs and the low tide, water circulation in this part of the Bay is limited, leading to low water regeneration and potentially strong accumulation of chemical contaminants such as PAHs in the sediments [START_REF] Benlahcen | Distribution and Sources of Polycyclic Aromatic Hydrocarbons in some Mediterranean Coastal Sediments[END_REF][START_REF] Misson | Chemical multi-contamination drives benthic prokaryotic diversity in the anthropized Toulon Bay[END_REF][START_REF] Wafo | A chronicle of the changes undergone by a maritime territory, the Bay of Toulon (Var Coast, France), and their consequences on PCB contamination[END_REF]. To maintain a navigable water depth, harbor dredging is regularly required in accordance with the current legislation. Such dredging may induce a sediment resuspension over 100 cm sedimentary depths, thus allowing surface and deep sediments to be resuspended. Since surface and deep sediments have different OM contents (from potentially different origins along with diagenetic transformations with time/depth), their resuspension could generate various remobilization kinetics of pollutants in seawater. Several studies have reported high concentrations of trace metals, metalloids and organometallics during Toulon Bay seabed disturbances [START_REF] Cossa | A Michaelis-Menten type equation for describing methylmercury dependence on inorganic mercury in aquatic sediments[END_REF]Dang et al., 2015a[START_REF] Pougnet | Sources and historical record of tin and butyl-tin species in a Mediterranean bay (Toulon Bay, France)[END_REF][START_REF] Tessier | Study of the spatial and historical distribution of sediment inorganic contamination in the Toulon bay (France)[END_REF]. In addition, Dang et al. (2015b) demonstrated that Pb was significantly released in seawater and further transferred to organisms such as mussels during resuspension experiments on surface and deep sediments from Toulon Bay. Nevertheless, to our knowledge, no data are available describing PAH remobilization during sediment resuspension events in such a strongly dredged and multi-contaminated area.
Therefore, in the present study, we carried out supplementary experiments simulating the resuspension of surface and deep sediments in Toulon Bay, as proposed by Dang et al. (2015b).
Our objectives were (i) to assess the contamination level and origin of PAHs in a sediment core from Toulon Bay, (ii) to evaluate the effect of surface and deep sediment resuspension on the remobilization of PAH and OM, as well as on the water quality, and (iii) to compare theoretical and measured K oc to better understand the factors controlling PAH remobilization during sediment resuspension experiments (SRE). To our knowledge, this study represents the first assessment of remobilization kinetics of both dissolved PAHs and DOM during SRE involving surface and deep sediments.
Materials and Methods
Study area
Located on the French Northwestern Mediterranean coast, Toulon city is a part of a large urban area of approximately 500 ×10 3 inhabitants. Toulon Bay is divided into two unequal basins, a small basin (9.8 km 2 , semi-enclosed) submitted to various anthropogenic inputs (the French Navy, commercial traffic, raw sewage from the urban area, industries) and a large basin (42.2 km 2 ) that is less impacted and open to the sea (Fig. 1). Toulon harbor is situated in the small bay, which is the discharge area for the watershed. Low tides in the Mediterranean Sea associated with weak currents in this area of Toulon Bay have significant implications for the accumulation of contaminants in sediments [START_REF] Dufresne | Wind-forced circulation model and water exchanges through the channel in the Bay of Toulon[END_REF][START_REF] Tessier | Study of the spatial and historical distribution of sediment inorganic contamination in the Toulon bay (France)[END_REF].
Sampling and sample treatment
Sediments and seawater were collected at the Missiessy (MIS) site within the nuclear submarine harbor of the French Navy, on May 5 th 2014, with the support of the French Navy (ship, material, divers) (Fig. 1).
Sediment core
A sediment core of ca. 50 cm was collected with an interface corer (Plexiglas tube, 10 cm diameter and 1 m long) by Navy divers, keeping the bottom water column and the upper sediment column undisturbed and thus preserving the water-sediment interface, as described in [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF]Dang et al. ( , 2015a)), [START_REF] Tessier | Study of the spatial and historical distribution of sediment inorganic contamination in the Toulon bay (France)[END_REF]. The collected sediment core was maintained vertically and was carefully transferred (by boat and then van) to the laboratory before being installed on a home-made slicing table in a glove box. The core was sliced with a 2-cm resolution under inert atmosphere (N 2 ) to preserve oxidation-reduction (redox) conditions.
Each slice was then homogenized in a 150 mL high-density-polyethylene bottle and split into 3 × 50 mL polypropylene centrifuge tubes under N 2 atmosphere. Then, porewater was extracted by centrifugation (15 min, 20 °C, 400 rpm, Sigma 3-18 K), recovered under N 2 atmosphere by filtration (0.2-µm on-line syringe filters, cellulose nitrate, Sartorius) and stored in required vessels depending on further chemical analyses. Such methodology was successfully applied to study depth profile variation of main diagenesis tracers and OM characteristics [START_REF] Chen | Pre-treatments, characteristics, and biogeochemical dynamics of dissolved organic matter in sediments: A review[END_REF]. In accordance with previous studies on trace element sedimentary dynamics in the same area (Dang et al., 2015b), two slice samples were selected from this sediment core to perform SRE: the 0-2 cm sediment layer (denoted hereafter "0-2 cm sediment") and the 30-32 cm sediment layer (denoted hereafter "30-32 cm sediment") (see § 2.3). The PAH concentrations were determined in each layer of the core (freeze-drying and 2 mm sieving) to assess the contamination level and potential toxicity of PAHs at this specific site. In addition to PAHs, particle grain size, organic carbon and nitrogen (OC and ON) contents, and aliphatic hydrocarbons (AHs) were determined in the 0-2 and 30-32 cm sediment layers dedicated to SRE.
Additionally, porewater measurements of pH, redox potential (Eh), dissolved organic carbon (DOC), dissolved inorganic carbon (DIC), total nitrogen (N T ), ammonium (NH 4 + ), soluble reactive phosphorus (SRP), silicate (Si(OH) 4 ), manganese (Mn T ), iron (Fe T ), sulfate (SO 4 2-) and total sulfide (ΣHS -) were taken to verify the redox status of these two sediments according to previously described procedures [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF](Dang et al., , 2015a(Dang et al., , 2015b)).
Seawater
Seawater was collected to characterize the water column in the small basin of the bay in terms of PAH and DOM concentrations and to conduct SRE. Samples were taken in duplicates at 0.5, 2.5, 5 and 13 m (overlying water ~1 m above the sediments) depths using a 2.2-L Wildco Van-Dorn horizontal water sampler. Before use, the latter was rinsed with 3 × 1 L of ultrapure (Milli-Q) water acidified to 0.2% HCl (TraceSelect, Fluka) and then several times with sampling water. The water samples were poured into polycarbonate bottles (Nalgene). Before use, the bottles were washed with 1 M HCl followed by a wash with ultrapure water and three washes with the respective sample.
At the laboratory, the water samples were immediately filtered under a low vacuum (< 50 mm Hg) through precombusted (450 °C, 6 h) glass fiber filters (GF/F, ~ 0.7 µm, 47 mm diameter, Whatman) using all-glassware systems for dissolved PAHs, DOC and FDOM. The filtered samples for dissolved PAH analyses were stored in 2-L SCHOTT glass bottles with 50 mL dichloromethane (CH 2 Cl 2 ) at 4 °C in the dark before solvent extraction (within 24 h) while the samples for DOC and FDOM were stored frozen until analysis.
Sediment resuspension experiments
The solid/liquid (S/L) ratio was set at ~ 1 g L -1 (expressed in dry weight), a value close to in situ levels of suspended particulate matter measured during surface sediment reworking as previously published (Dang et al., 2015b). To apply this ratio for each experiment, ~ 11 g of wet sediment from the 0-2 cm slice (62% moisture) and ~ 8 g of wet sediment from the 30-32 cm slice (52% moisture) were introduced into 4 L Nalgene bottles filled with 0.7-µm filtered surface seawater (collected at 0.5 m depth). The bottles were then placed on roller agitation devices (Wheaton, 8 rpm) at room temperature (21-23 °C). Experiments lasted 14 days and were run in duplicate (i.e., four 4 L bottles in total). The water subsamples were collected at 1, 3, 7 h and 1, 3, 7, 10, 14 days and were analyzed for dissolved PAHs, DOC and FDOM. For each subsample, agitation was stopped for approx. 30 min allowing most of the sediment particles to settle. Then, 400 mL of water was collected from the top of the bottle and immediately filtered for analysis.
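As a quick arithmetic check of the nominal S/L ratio, the dry masses implied by the reported wet masses and moisture contents can be computed as in the minimal sketch below (Python is used here purely for illustration; the masses, moisture contents and bottle volume are those quoted above).

```python
# Check of the nominal solid/liquid ratio (~1 g dw L-1) from the reported
# wet masses and moisture contents, for the 4 L resuspension bottles.
def dry_mass(wet_mass_g, moisture_fraction):
    """Dry mass (g) remaining after removing the water fraction."""
    return wet_mass_g * (1.0 - moisture_fraction)

BOTTLE_VOLUME_L = 4.0
for label, wet_g, moisture in (("0-2 cm", 11.0, 0.62), ("30-32 cm", 8.0, 0.52)):
    dw = dry_mass(wet_g, moisture)
    print(f"{label}: {dw:.2f} g dw -> S/L = {dw / BOTTLE_VOLUME_L:.2f} g L-1")
# Both experiments land close to the target ratio of ~1 g dw per litre.
```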
To minimize S/L ratio variation, the same volume (400 mL) of filtered initial seawater was added to the bottle after each subsampling. Because subsamples were taken several minutes after stopping agitation, the loss of sediment particles from this successive subsampling was estimated to be negligible over the course of the SRE (Dang et al., 2015b). Also, according to [START_REF] Dong | Effects of recurrent sediment reuspension-deposition events on bioavailability of polycyclic aromatic hydrocarbons in aquatic environments[END_REF], such short stops cannot be considered as successive sediment resuspension-deposition events. It should be noted that the biological activity was not stopped in these experiments since (i) any added poison could have affected OM measurements and (ii) our aim was to mimic the PAH and OM sorption/desorption as close as possible to the natural conditions.
Hydrocarbon analysis
Hydrocarbon extraction and purification
PAHs were measured in each layer of the whole sediment core and in seawater samples. The protocol was then the same for sediment and water extracts. The solvent volume was reduced by rotary evaporation and the solvent was changed to n-hexane prior to purification. The hexane-solubilized extracts were purified to separate hydrocarbons from more polar compounds. These extracts were fractionated on a 500-mg silica column. Silica gel (extra pure, Merck) was activated at 450 °C for 6 h followed by partial deactivation with 4% water by weight. The extracts were deposited using 2-mL n-hexane and hydrocarbons were eluted with 3-mL n-hexane/CH 2 Cl 2 (3:1 v/v) [START_REF] Guigue | Occurrence and distribution of hydrocarbons in the surface microlayer and subsurface water from the urban coastal marine area off Marseilles, Northwestern Mediterranean Sea[END_REF][START_REF] Guigue | Spatial and seasonal variabilities of dissolved hydrocarbons in surface waters from the Northwestern Mediterranean Sea: Results from one year intensive sampling[END_REF][START_REF] Guigue | Hydrocarbons in a coral reef ecosystem subjected to anthropogenic pressures (La Réunion Island, Indian Ocean)[END_REF]. All solvents were of organic trace-analysis quality (Rathburn Chemicals Ltd.).
Analysis of hydrocarbons
The purified extracts were analyzed by gas chromatography-mass spectrometry (GC-MS) (TraceISQ, ThermoElectron) operating at an ionization energy of 70 eV for a m/z range of 50-400 (full scan and selected ion monitoring (SIM) modes processed simultaneously), using helium as carrier gas at a flow rate of 1.2 mL min -1 . The GC-MS was equipped with a HP-5 MS ultrainert column (30 m × 0.25 mm × 0.25 µm, J&W Scientific, Agilent Technologies). The injector (used in splitless mode) and detector temperatures were 270 and 320 °C, respectively. The initial column temperature was held for 3 min at 70 °C, then ramped at 15 °C min -1 (ramp 1) to 150 °C and then at 7 °C min -1 (ramp 2) to a final temperature of 320 °C, which was held for 60 min. AHs and PAHs were identified and quantified in scan and SIM modes simultaneously using two distinct methods [START_REF] Guigue | Occurrence and distribution of hydrocarbons in the surface microlayer and subsurface water from the urban coastal marine area off Marseilles, Northwestern Mediterranean Sea[END_REF][START_REF] Guigue | Spatial and seasonal variabilities of dissolved hydrocarbons in surface waters from the Northwestern Mediterranean Sea: Results from one year intensive sampling[END_REF].
Quality assurance and quality control
All glassware was washed with 1 M HCl and ultrapure water and combusted at 450 °C for 6 h. All the materials that could not be baked were washed with 1 M HCl and ultrapure water and dried at room temperature. Caution was taken during evaporation because taking the extracts to dryness could lead to the complete loss of the more volatile compounds. In addition, blanks were run for the whole procedure, including the use of the Nalgene polycarbonate bottles, extraction, solvent concentration and purification. All concentration values were blank- and recovery-corrected (procedure fully described in [START_REF] Guigue | Hydrocarbons in a coral reef ecosystem subjected to anthropogenic pressures (La Réunion Island, Indian Ocean)[END_REF]. Instrumental detection limits for individual compounds varied from 1 to 30 pg of injected chemical. No certified reference standard is currently available for PAHs in water, but the protocol for sediment was validated based on the agreement of individual compound quantification against the NIST 1941b reference material (organics in sediment) (agreement of 97 ± 10%).
Determination of individual hydrocarbons
Sediment grain size and contents in total organic carbon and nitrogen
Grain size and total OC and ON contents were determined for the 0-2 and 30-32 cm sediment layers. Grain size was determined with a Beckman Coulter LS 13 320 laser granulometer after OM removal [START_REF] Ghilardi | The impact of early-to mid-Holocene palaeoenvironmental changes on Neolithic settlement at Nea Nikomideia, Thessaloniki plain, Greece[END_REF]. The relative abundance of sand (2000 to 63 µm), silt (63 to 2 µm) and clay (< 2 µm) was then measured. Contents in OC and ON were determined simultaneously, after acidification, with an AutoAnalyser II Technicon using the wet oxidation procedure [START_REF] Raimbault | Simultaneous determination of particulate forms of carbon, nitrogen and phosphorus collected on filters using a semiautomatic wet-oxidation procedure[END_REF]. The contents are expressed as a percentage (%) of sed. dw.
Dissolved organic matter measurements
DOC was determined by high-temperature catalytic oxidation using a Shimadzu TOC 5000
Total Carbon Analyzer (Kyoto, Japan) according to [START_REF] Sohrin | Seasonal variation in total organic carbon in the Northeast Atlantic in 2000-2001[END_REF]. Two replicates were analyzed for each sample. The concentrations are the mean of the two replicates with a coefficient of variance (CV) < 2%.
FDOM analyses were performed with a Hitachi F-7000 spectrofluorometer (Tokyo, Japan).
The method is fully described in [START_REF] Tedetti | Fluorescence properties of dissolved organic matter in coastal Mediterranean waters influenced by a municipal sewage effluent (Bay of Marseilles, France)[END_REF][START_REF] Tedetti | Evolution of dissolved and particulate chromophoric materials during the VAHINE mesocosm experiment in the New Caledonian coral lagoon (south-west Pacific)[END_REF] and [START_REF] Ferretto | Identification and quantification of known polycyclic aromatic hydrocarbons and pesticides in complex mixtures using fluorescence excitation-emission matrices and parallel factor analysis[END_REF]. Briefly, excitation-emission matrices (EEMs) were generated for excitation wavelengths (λ Ex ) between 230 and 500 nm and for emission wavelengths (λ Em ) between 280 and 550 nm. Two replicates were run for each sample. To correct EEMs for potential inner filtering effects, the dilution method was used. Fluorescence intensities were blank-corrected and converted to quinine sulfate units (QSU), where 1 QSU corresponds to the fluorescence of 1 µg L -1 quinine sulfate in 0.05 M sulfuric acid at λ Ex /λ Em of 350/450 nm (5 nm slit widths). The fluorescence intensities in QSU provided for each sample are the mean of the two replicates with a CV < 8%. The data were processed using parallel factor analysis (PARAFAC) with the DOMFluor toolbox v1.6 running under MATLAB 7.10.0 (R2010a). The PARAFAC model was created and validated for 81 samples according to the method of [START_REF] Stedmon | Characterizing dissolved organic matter fluorescence with parallel factor analysis: a tutorial[END_REF].
Determination of sediment-water partition coefficients of PAHs
The partition coefficients of PAHs between sediment and water (K d ) were determined using the following equation:
K d = C p / C d
where C p is the concentration of individual PAHs in sediment (ng kg -1 ) and C d is the concentration in the water-dissolved phase (ng L -1 ). The OC-normalized partition coefficients of PAHs between sediment and water (K oc ) were then calculated using the following equation:
K oc = K d / f oc
where f oc is the fraction of organic carbon in resuspended sediment particles (% OC in Table 2).
The values for log K d and log K oc were computed for the 16 priority PAHs.
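For illustration, the two definitions above translate directly into the minimal Python sketch below; the PAH concentrations used are hypothetical placeholders, and only f oc (8.2% OC for the 0-2 cm sediment, Table 2) is taken from the data.

```python
import math

def log_kd(c_particle_ng_per_kg, c_dissolved_ng_per_L):
    """log10 of the sediment-water partition coefficient K_d = C_p / C_d (L kg-1)."""
    return math.log10(c_particle_ng_per_kg / c_dissolved_ng_per_L)

def log_koc(log_kd_value, f_oc):
    """OC-normalised coefficient: K_oc = K_d / f_oc, with f_oc given as a fraction."""
    return log_kd_value - math.log10(f_oc)

# Hypothetical example: a PAH at 1.0e6 ng kg-1 sed. dw and 1.0 ng L-1 dissolved,
# with f_oc = 0.082 (i.e. 8.2% OC, the 0-2 cm sediment).
lkd = log_kd(1.0e6, 1.0)
print(f"log K_d  = {lkd:.2f}")                   # 6.00
print(f"log K_oc = {log_koc(lkd, 0.082):.2f}")   # ~7.09
```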
Statistics
Student's t-test, used to compare the mean of two independent data groups, and Pearson's linear correlation matrices, as well as non-linear regression analyses were carried out using XLSTAT 2013.5.01 (Microsoft Excel add-in program). For the different analyses and tests, the significance threshold was set at p < 0.05.
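The tests listed above were run in XLSTAT; as a purely illustrative open-source equivalent (with hypothetical data, not the measured series), the same Pearson correlation and Student's t-test can be obtained with SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical stand-ins for two time series measured during one SRE (n = 8).
x = rng.normal(10.0, 2.0, 8)            # e.g. dissolved PAHs (ng L-1)
y = 0.8 * x + rng.normal(0.0, 1.0, 8)   # e.g. humic-like fluorophore (QSU)

r, p_r = stats.pearsonr(x, y)           # Pearson's linear correlation
t, p_t = stats.ttest_ind(x, y)          # Student's t-test, two independent groups
print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); t-test p = {p_t:.3f}")
# As in the text, results would be considered significant when p < 0.05.
```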
Results and discussion
3.1. Characterization of sediment and water from the MIS site
Contamination levels and origin of PAHs in the sediment core
In the first 40 cm of the sediment core, Σ34 PAH concentrations varied between 30.7 × 10 3 at the 10-12 cm depth and 123 × 10 3 ng g -1 sed. dw at the 36-38 cm depth, with a high value of 231 × 10 3 ng g -1 sed. dw recorded at the 24-26 cm depth (Fig. S1). The PAH concentrations in the surface layer of the MIS sediment core were in agreement with previous recordings from the MIS site (34.0 × 10 3 ng g -1 ; [START_REF] Misson | Chemical multi-contamination drives benthic prokaryotic diversity in the anthropized Toulon Bay[END_REF] and in the Toulon coastal area (48.1 × 10 3 ng g -1 ; [START_REF] Benlahcen | Distribution and Sources of Polycyclic Aromatic Hydrocarbons in some Mediterranean Coastal Sediments[END_REF]. The PAH concentrations we recorded in the whole core were in the upper-range of values reported for sediments from other regions of the Mediterranean Sea (see Table 6 in [START_REF] Zaghden | Origin and distribution of hydrocarbons and organic matter in the surficial sediments of the Sfax-Kerkennah channel (Tunisia, Southern Mediterranean Sea)[END_REF]. For instance, they were close to the highest values measured in the surface sediments from the Taranto Gulf, Italy (28.9-262 × 10 3 ng g -1 sed. dw; [START_REF] Annicchiarico | PCBs, PAHs and metal contamination and quality index in marine sediments of the Taranto Gulf[END_REF]. The environmental quality guidelines are detailed in the supplementary material (Text S2). The 16 priority PAHs displayed concentrations above L1 and ERL levels at almost all sediment depths and above L2 and ERM levels at many depths (Table 1), which mirrors high to extreme pollution levels according to both JORF and SQG, in agreement with previous results for trace elements from the small basin of Toulon Bay [START_REF] Tessier | Study of the spatial and historical distribution of sediment inorganic contamination in the Toulon bay (France)[END_REF].
The depth profile of Σ34 PAH concentrations at MIS was in good agreement with previous studies showing sedimentation disturbed by past (raising of a scuttled warship) and present (harbor extension) activities (Fig. S1; Dang et al., 2015b;[START_REF] Misson | Chemical multi-contamination drives benthic prokaryotic diversity in the anthropized Toulon Bay[END_REF].
Despite these episodic disturbances of sedimentation, the surface layers are considered to be more recent than the deeper ones (Dang et al., 2015b;[START_REF] Tessier | Study of the spatial and historical distribution of sediment inorganic contamination in the Toulon bay (France)[END_REF]. The surprisingly high value of 231 × 10 3 ng g -1 recorded at 24-26 cm, which had a molecular distribution similar to that of the other layers (data not shown), is probably not attributable to a particular historic period of contamination but rather to heterogeneity in the vertical distribution of sediment particles or to the presence of a tiny piece of coal.
The PAH molecular distribution in the sediment core barely varied with depth and was dominated by 4-ring compounds, as shown for the 0-2 and 30-32 cm sediments in Fig. 2. This distribution reflected the dominance of Flt and Pyr, which has been observed in other coastal areas of the Mediterranean Sea [START_REF] Ben Othman | Impact of contaminated sediment elutriate on coastal phytoplankton community (Thau lagoon, Mediterranean Sea, France)[END_REF][START_REF] Pérez-Fernández | PAHs in the Ría de Arousa (NW Spain): A consideration of PAHs sources and abundance[END_REF]. The isomeric ratios were quite close from one depth to another, and showed that PAHs in the MIS sediment more probably originated from combustion processes, especially of grass/wood/coal, rather than from burned/unburned petroleum residues (Fig. S2). The dominance of pyrogenic PAHs in this sediment is in agreement with our knowledge of current and past activities at Toulon harbor (loading dock for coal and coal-fired Navy vessels) and also reflects a highly anthropized coastal environment subjected to important watershed and atmospheric particulate inputs (Fig. S2; [START_REF] Vane | Polycyclic aromatic hydrocarbons (PAH) and polychlorinated biphenyls (PCB) in urban soil of Greater London, UK[END_REF]Yunker et al., 2002). Nevertheless, it is worth recalling that isomeric ratios have to be used with caution for the determination of PAH origin because they may evolve with time or with the distance from emission sources [START_REF] Katsoyiannis | Model-based evaluation of the use of polycyclic aromatic hydrocarbons molecular diagnostic ratios as a source identification tool[END_REF].
Comparison of the 0-2 and 30-32 cm sediments
The 0-2 and 30-32 cm sediments used for SRE were exclusively composed of fine particles < 63 µm: 69 and 82% of silt, 31 and 18% of clay, respectively. Their OC content was 8.2 and 6.3%, respectively (Table 2). The predominance of silts, which promote the accumulation of contaminants, and the OC content in these sediments are in good agreement with previous studies from Toulon Bay [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF][START_REF] Tessier | Study of the spatial and historical distribution of sediment inorganic contamination in the Toulon bay (France)[END_REF]. Additionally, the OC content was in the upper range of that reported for surface sediments of the Northern Mediterranean Sea (from 0.38 to 6.2%) [START_REF] Benlahcen | Distribution and Sources of Polycyclic Aromatic Hydrocarbons in some Mediterranean Coastal Sediments[END_REF][START_REF] Lipiatou | Fluxes and transport of anthropogenic and natural polycyclic aromatic-hydrocarbons in the western Mediterranean Sea[END_REF].
The C/N ratios in the 0-2 and 30-32 cm sediments (13.5 and 22, respectively) (Table 2) were higher than typical values reported for most coastal sediments (C/N = 6-10; [START_REF] Wang | Distribution and Partitioning of Polycyclic Aromatic Hydrocarbons (PAHs) in Different Size Fractions in Sediments from Boston Harbor, United States[END_REF] but lower than that of the alkaline-extracted OM previously recorded in this area (C/N = 36 ± 9; [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF]. The C/N ratio may provide information about the OM origin, as well as the diagenetic degradation state. A high C/N ratio (> 20) reveals a terrestrial origin of OM due to the low N percentage in the higher vegetation [START_REF] Emerson | Processes controlling the organic carbon content of open ocean sediments[END_REF][START_REF] Muller | C/N ratios in Pacific deep-sea sediments: effects of inorganic ammonium and organic nitrogen compounds sorbed by clays[END_REF]. On the other hand, a low C/N ratio (5-7) implies a marine origin (plankton or seaweeds) [START_REF] Meyers | Preservation of elemental and isotopic source identification of sedimentary organic matter[END_REF].
In addition, it has previously been demonstrated that the buried particulate OM in Toulon Bay (at another location than MIS) is N-, P-and Si-depleted, due to the sedimentary OM mineralization and transformation processes [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF]. This depletion could result in increases in the C/N ratio. In the present case, the C/N ratios suggest a mixed origin, with both terrestrial and autochthonous marine sources at both depths along with higher diagenetic processes at the 30-32 cm depth (N-depletion; Table 2) according to [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF].
The porewaters of the two studied sediment layers were slightly acidic and enriched in DOC, DIC, NH 4 + , SRP, Si(OH) 4 and Fe T compared to seawater, showing important diagenesis processes (Table S1), and differed in their redox status. The 0-2 cm sediment porewaters had positive Eh (109 mV/ENH) and high Fe T concentrations (70.2 µM) typical of oxidizing conditions, while the 30-32 cm sediment porewaters displayed negative Eh (-145 mV/ENH) and much lower Fe T concentrations (7.7 µM), reflecting reducing conditions. Additionally, they had differences in DOC (367 and 717 µM, respectively), N T (104 and 173 µM), NH 4 + (72.6 and 54.8 µM, respectively) and Mn T (2.3 and 0.7 nM, respectively) concentrations (Table S1).
The Σ34 PAH concentrations in these samples were 38.2 and 35.7 × 10 3 ng g -1 sed. dw, respectively (Fig. S1; Table 2) and all 16 priority compounds showed concentrations above L1 and ERL levels. Additionally, Ace and BaA in the 0-2 cm sediment and Phe, Pyr, BaP and DBA in the 0-2 and 30-32 cm sediments had concentrations above the L2 and ERM levels (Table 1).
For both sediment layers, as in the whole core, 2-ring compounds accounted on average for 8% of total PAHs, 3-rings for 17%, 4-rings for 41% (with a high contribution of Flt and Pyr), 5-rings for 22% and 6-rings for 11% (Fig. 2).
The R concentrations (n-C 15 to n-C 36 with Pr and Phy) were 2.4 and 10.0 × 10 3 ng g -1 sed. dw with UCM/R values of 30 and 118, respectively (Table 2). The R values were relatively low compared to the PAH content of these sediments and were situated in the middle-lower range of n-alkane concentrations reported from other Mediterranean environments (see Table 4 in [START_REF] Zaghden | Origin and distribution of hydrocarbons and organic matter in the surficial sediments of the Sfax-Kerkennah channel (Tunisia, Southern Mediterranean Sea)[END_REF]. For example, they were of the same order of magnitude as those determined in the Gulf of Tunis (1.8 and 10.0 × 10 3 ng g -1 sed. dw; [START_REF] Mzoughi | Distribution and partitioning of aliphatic hydrocarbons and polycyclic aromatic hydrocarbons between water, suspended particulate matter, and sediment in harbours of the West coastal of the Gulf of Tunis (Tunisia)[END_REF].
The R and UCM/R values were four times higher at the 30-32 cm than at the 0-2 cm depth. The R molecular profiles were characterized by bi-modal distributions. Odd-carbon-numbered compounds dominated the n-alkane patterns: n-C 17 for the LMW compound profiles (≤ n-C 24 ) and n-C 29 , n-C 31 and n-C 33 for the HMW compound profiles (> n-C 24 ) (Fig. S3). This led to CPI values slightly above 1. CPI 15-24 was 1.3 in both sediments, showing that autochthonous algal material and anthropogenic n-alkanes accounted for 13 and 87% of total n-alkanes, respectively. CPI 25-36 was 2.9 and 2.2 in the 0-2 and 30-32 cm sediments, respectively. This underscores that continental higher plant debris n-alkanes accounted for 49 and 37%, and anthropogenic n-alkanes for 51 and 63% of total n-alkanes, respectively (Table 2). In both sediments, these results (R, UCM/R, CPI values) reflected the superposition of compounds originating from biogenic and anthropogenic inputs, as well as their incomplete degradation (Blumer et al., 1971;Bouloubassi and Saliot, 1993;Douglas and Eglinton, 1966). However, compared to the 0-2 cm sediment, the 30-32 cm sediment presented a higher anthropogenic fingerprint along with a higher diagenetic state. This greater anthropization level was very likely due to the Second World War and activities during and after this period [START_REF] Tessier | Study of the spatial and historical distribution of sediment inorganic contamination in the Toulon bay (France)[END_REF]. The higher contribution of Phy at this depth might also be attributed to the diagenetic state of sediments, even though diagnostics using isoprenoid compounds must be taken with caution [START_REF] Grossi | Biotransformation pathways of phytol in recent anoxic sediments[END_REF][START_REF] Rontani | Production of pristane and phytane in the marine environment: role of prokaryotes[END_REF].
In conclusion, C/N ratios, diagenetic tracers in porewaters and AHs showed that the 0-2 cm sediment was globally less contaminated, presented more biogenic inputs, especially terrigenous ones, and was more recent/less degraded or diagenetically transformed than the 30-32 cm sediment. This finding confirmed the differences already reported in previous studies combining redox potential and particulate OM aging [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF](Dang et al., , 2015b)). The OM aging of sediment particles might play a role in PAH-OM interactions, and therefore in PAH mobility during seabed disturbances, as suspected for other organic contaminants [START_REF] Calvo | Organic Carbon and Nitrogen in Sediments and in Resuspended Sediments of Venice Lagoon : Relationships with PCB Contamination[END_REF]. In contrast, the PAH distribution alone did not reveal such differences between the two sediments.
Characterization of the water column
The concentrations of dissolved Σ34 PAHs in the water column were 2.8 ± 0.7, 2.1 ± 0.5, 3.8 ± 1.5 and 4.6 ± 0.9 ng L -1 at 0.5, 2.5, 5 and 13 m depths, respectively (Fig. S4a). Dissolved PAHs were dominated by 2-and 3-ring compounds (38 ± 9 and 49 ± 10%, respectively), whereas 4-and 5-rings represented only 10 ± 2 and 3 ± 7%, respectively, with 6-ring compounds being under the detection limits (Fig. 2). These concentrations are in the lower range of those previously observed in some Northwestern Mediterranean coastal areas, whereas the molecular distribution (dominance of 2-3 ring compounds) was in agreement with these studies [START_REF] Guigue | Occurrence and distribution of hydrocarbons in the surface microlayer and subsurface water from the urban coastal marine area off Marseilles, Northwestern Mediterranean Sea[END_REF][START_REF] Guigue | Spatial and seasonal variabilities of dissolved hydrocarbons in surface waters from the Northwestern Mediterranean Sea: Results from one year intensive sampling[END_REF][START_REF] Guitart | Evaluation of sampling devices for the determination of polycyclic aromatic hydrocarbons in surface microlayer coastal waters[END_REF].
DOC concentrations were 79.4 ± 1.3, 75.6 ± 1.0, 74.8 ± 1.0 and 76.0 ± 0.2 µM at 0.5, 2.5, 5
and 13 m depths, respectively (Fig. S4b). These concentrations were in the same range as those previously measured in this area [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF]. PARAFAC applied to EEMs revealed the presence of two main FDOM fluorophores in seawater (for both depth profiles and SRE): a humic-like and a tryptophan-like fluorophore. The fluorescence of Naph and its alkylated homologues may be superimposed on that of tryptophan [START_REF] Tedetti | Utilization of a submersible UV fluorometer for monitoring anthropogenic inputs in the Mediterranean coastal waters[END_REF][START_REF] Ferretto | Identification and quantification of known polycyclic aromatic hydrocarbons and pesticides in complex mixtures using fluorescence excitation-emission matrices and parallel factor analysis[END_REF]. However, the sum of Naph and alkylated Naph homologue concentrations measured here in water (< 5 ng L -1 ) was not high enough to generate such fluorescence signatures (detection limit of Naph by EEM method; ~ 1 µg L -1 ; [START_REF] Ferretto | Identification and quantification of known polycyclic aromatic hydrocarbons and pesticides in complex mixtures using fluorescence excitation-emission matrices and parallel factor analysis[END_REF]. The fluorescence intensities of the humic- and tryptophan-like fluorophores at 0.5, 2.5, 5 and 13 m depths were 2.7 ± 1.2, 3.5 ± 0.9, 2.9 ± 0.3 and 4.0 ± 0.3 QSU (Fig. S4c) and 4.8 ± 2.4, 6.2 ± 1.4, 7.4 ± 0.7 and 8.4 ± 1.6 QSU, respectively (Fig. S4d).
These two fluorophores have been reported in several coastal environments, including the Northwestern Mediterranean Sea [START_REF] Ferretto | Spatiotemporal variability of fluorescent dissolved organic matter in the Rhône River delta and the Fos-Marseille marine area (NW Mediterranean Sea, France)[END_REF][START_REF] Para | Fluorescence and absorption properties of chromophoric dissolved organic matter (CDOM) in coastal surface waters of the northwestern Mediterranean Sea, influence of the Rhône River[END_REF][START_REF] Tedetti | Fluorescence properties of dissolved organic matter in coastal Mediterranean waters influenced by a municipal sewage effluent (Bay of Marseilles, France)[END_REF]. The tryptophan-like fluorophore represents free amino acids or amino acids bound as peptides or proteins. It is known to serve as a fresh and labile bioavailable product for heterotrophic bacteria [START_REF] Romera-Castillo | Production of chromophoric dissolved organic matter by marine phytoplankton[END_REF][START_REF] Yamashita | In situ production of chromophoric dissolved organic matter in coastal environments[END_REF]. The humic-like fluorophore has maximal absorption in the UVC and UVA wavelengths. It is more hydrophobic, more condensed and more photodegradable compared to the tryptophan material, as indicated by its higher λ Ex and λ Em . [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF] and [START_REF] Hur | Spectroscopic characterization of dissolved organic matter isolates from sediments and the association with phenanthrene binding affinity[END_REF] detected these two fluorophores (humic and tryptophan) in sediment particles after an alkaline extraction, showing that they could also originate from sediment, in addition to the water column. According to [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF], the tryptophan-like fluorophore would be derived from fresh POC in surface sediments, while the humic-like fluorophore would be produced either by the hydrolysis of the buried HMW-POM or by the geo-polymerization of LMW-DOM. Overall, these data show that water from 0.5 m depth had not been subjected to significant recent remobilization and was well suited to the forthcoming experiments.
Remobilization of PAHs and OM during sediment resuspension experiments (SRE)
Dissolved phase enrichment
Concentrations of dissolved Σ34 PAHs, DOC and FDOM fluorophores in water over the course of the 0-2 and 30-32 cm SRE are presented in Fig. 3. Standard deviations are given for the two bottle replicates (n = 2). For the four parameters, both SRE led first to an immediate (within one hour after the sediment addition) release into the water. This fast PAH release in the dissolved phase (within the first hour) was also observed by [START_REF] Yang | Release of polycyclic aromatic hydrocarbons from Yangtze River sediment cores during periods of simulated resuspension[END_REF]. Then, from the first hour to the end (14 days) of the SRE, concentrations in water tended to increase: from 8.6 ± 1.6 to 24 ± 9.9 ng L -1 (Fig. 3a) and from 9.5 ± 3.9 to 12 ± 5.4 ng L -1 (Fig. 3b) for dissolved Σ34 PAHs, from 87 ± 0.2 to 91 ± 0.3 µM (Fig. 3c) and from 89 ± 1.4 to 98 ± 2.9 µM (Fig. 3d) for DOC, from 5.4 ± 0.1 to 11 ± 1.8 QSU (Fig. 3e) and from 4.8 ± 0.9 to 11 ± 0.2 QSU (Fig. 3f) for the humic-like fluorophore, and from 9.1 ± 0.3 to 28 ± 3.2 QSU (Fig. 3g) and from 8.0 ± 1.4 to 15 ± 0.9 QSU (Fig. 3h) for the tryptophan-like fluorophore. These results seem to be in agreement with biphasic desorption models of organic compounds from soils/sediments, i.e., a rapid desorption phase of PAHs adsorbed onto particles followed by a more slowly desorbing phase of PAHs strongly bound into particles (Karickhoff and Morris, 1985;Mitra and Dickhut, 1999;[START_REF] Van Noort | Slow and very slow desorption of organic compounds from sediment: influence of sorbate planarity[END_REF].
Enrichment factors (EFs, computed as the ratios of the concentration in water at time t over that at the initial time; that is, just before sediment addition) for dissolved PAHs, DOC and the humic-like fluorophore were quite similar between the 0-2 and 30-32 cm SRE, ranging from 2.6-8.5 and 2.5-10 (PAHs), 1.1-1.2 and 1.1-1.3 (DOC), and 2.0-4.2 and 1.8-4.4 (humic), respectively (Fig. 3a-f). For the tryptophan-like fluorophore, they were clearly higher in the 0-2 cm SRE (1.7-5.7) than in the 30-32 cm SRE (1.7-3.2) (Fig. 3g,h). EFs recorded here for PAHs were of the same order of magnitude as those reported in previous studies under natural conditions (Dong et al., 2015).
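The EF definition used here reduces to a simple ratio; the short sketch below applies it to rounded values from the text, taking the 0.5 m seawater concentration as an approximation of the concentration just before sediment addition.

```python
def enrichment_factor(conc_t, conc_t0):
    """EF = concentration in water at time t / concentration just before sediment addition."""
    return conc_t / conc_t0

# Dissolved S34 PAHs reached ~24 ng L-1 at day 14 of the 0-2 cm SRE, against
# ~2.8 ng L-1 in the 0.5 m seawater used to fill the bottles (approximate t0).
print(f"EF ~ {enrichment_factor(24.0, 2.8):.1f}")  # ~8.6, close to the reported upper value of 8.5
```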
Change in dissolved PAH distribution profile
During SRE, all the individual PAHs investigated were, to some extent, released into the dissolved phase. For both SRE, dissolved PAH distribution profiles were dominated by 3-ring compounds (33 ± 8 and 38 ± 8% of total PAHs) (Fig. 2). Moreover, in the range of 2-4-ring compounds, SRE PAH profiles were clearly intermediate between (and significantly different from) that of the water column (0.5-13 m depths; dominated by 3-ring compounds, accounting for 49 ± 10% of total PAHs) and those of sediments (dominated by 4-ring compounds, accounting for 39 and 43% of total PAHs) (Fig. 2). However, for 5-and 6-ring compounds, the relative abundances were much closer between SRE (16-19 and 6-7%, respectively) and sediments (21-24 and 11-12%, respectively). Hence, compared to the water column, the remobilization of PAHs led to a shift in the molecular structure profiles towards HMW compounds, with a decrease in the relative abundances of 2-and 3-ring compounds (by 49 and 29% on average, respectively), an increase in the relative abundances of 4-and 5-ring compounds (by 115 and 414% on average, respectively) and the appearance of 6-ring compounds (Fig. 2).
Comparison of the remobilization kinetics between surface and deep sediments
Remobilization kinetics were studied by following the evolution of dissolved PAH and DOM concentrations, as well as the correlations between these parameters within each SRE. First, to help compare the behavior of the parameters with each other and between the two experiments, we sought to model the kinetic patterns. Several models obtained by least-squares fits were tested (linear, exponential, power, logarithmic, polynomials of degrees 2-3). Third-degree polynomials (cubic functions) best fitted our data (a minimal sketch of such a fit is given below). All these cubic relationships were significant (r = 0.78-1.00, p ≤ 0.0001-0.02, n = 8), except for DOC in the 30-32 cm SRE (r = 0.72, p = 0.06, n = 7; Fig. 3d). Nevertheless, we observed some differences in the shape of these cubic functions between the 0-2 cm and the 30-32 cm SRE.
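The cubic fits themselves are ordinary least-squares polynomial fits; the sketch below (with invented numbers merely shaped like an SRE time series, not the measured data) shows how the leading coefficient discussed in the next paragraphs is obtained.

```python
import numpy as np

# Hypothetical remobilisation time series: times in days (1 h, 3 h, 7 h, 1-14 d)
# and dissolved concentrations in ng L-1. These are NOT the measured data.
t = np.array([0.04, 0.13, 0.29, 1.0, 3.0, 7.0, 10.0, 14.0])
c = np.array([8.6, 9.0, 9.8, 11.0, 13.5, 17.0, 20.0, 24.0])

coeffs = np.polyfit(t, c, deg=3)        # [a3, a2, a1, a0], highest degree first
fitted = np.polyval(coeffs, t)
r = np.corrcoef(c, fitted)[0, 1]        # correlation between data and fitted values

# The sign of a3 is the "leading coefficient" referred to in the text.
print(f"leading coefficient a3 = {coeffs[0]:+.4f}, r = {r:.2f}")
```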
In the 0-2 cm SRE, DOC and the tryptophan-like fluorophore presented cubic curves with a negative leading coefficient (-0.045 and -0.005, respectively; Fig. 3c,g). Dissolved PAHs and the humic-like fluorophore presented a different pattern due to the positive leading coefficient of their equations (0.031 and 0.002, respectively; Fig. 3a,e). In addition, the maximal values for PAHs and the humic- and tryptophan-like fluorophores were recorded at the end of the experiment (t = 14 days), at 24 ± 9.9 ng L -1 , 11 ± 1.8 and 28 ± 3.2 QSU, respectively (Fig. 3a,e,g), whereas that of DOC was found at t = 10 days (99 ± 2.9 µM) (Fig. 3c). Only the humic- and tryptophan-like fluorophore intensities were significantly linearly correlated for the 0-2 cm SRE (r = 0.98, p = 0.0003, n = 8) (Table 3).
In contrast, for the 30-32 cm SRE, leading coefficients were all positive (Fig. 3b,f,h). For dissolved PAHs and the humic-like fluorophore, maximal values (29 ± 20 ng L -1 and 12 ± 1.5 QSU, respectively) were found at t = 7 days (Fig. 3b,f). Those for DOC concentration (100 ± 8.5 µM) and the tryptophan-like fluorophore (15 ± 0.9 QSU) were observed at t = 3 days (Fig. 3d) and t = 14 days (Fig. 3h), respectively. In this SRE, the correlation between the humic- and tryptophan-like fluorophores was still observed (r = 0.92, p = 0.001, n = 8), but in addition, significant linear correlations were found between dissolved PAHs and the humic-like fluorophore (r = 0.75, p = 0.025, n = 8) and between DOC and the tryptophan-like fluorophore (r = 0.78, p = 0.04, n = 7) (Table 3). These correlations are further investigated in § 3.3.3.
In the present study, the adjustment of dissolved concentrations to functions of degree 3 is in agreement with Delle [START_REF] Site | Factors affecting Sorption of Organic Compounds in Natural Sorbent/Water Systems and Sorption Coefficients for Selected Pollutants. A Review[END_REF] and [START_REF] Karickhoff | Sorption of hydrophobic pollutants on natural sediments[END_REF], who presented the sorption processes of a chemical from one phase to another as the result of a reversible reaction (sorptiondesorption) reaching a final equilibrium condition between the concentrations of the chemical in the two phases. One can even suspect redistribution of the chemical between and within phases.
Furthermore, the leading coefficients being mainly positive suggests that the dissolved Σ34 PAH and DOM patterns for the two SRE reflect such successive processes: first, desorption from particles, related to dissolution or to colloid formation from particle collisions, which can be followed by some sorption onto/into particles or new aggregates (because of changes in polarity or oxygenation conditions, for example, especially for the 30-32 cm sediment) and, possibly, by some bacterial degradation and volatilization, occurring at different time scales depending on the parameter and the sediment. Even though cubic sorption models globally fit the two SRE, the remobilization patterns of dissolved PAHs and DOM during SRE appeared to be sediment-dependent. Accordingly, Dang et al. (2015b) also observed different remobilization patterns for Pb between the 0-2 cm and 30-32 cm sediments, which were attributed to differences in the oxidation degree, itself linked to differences in the OM diagenetic state (see § 3.1.2.) of these two sediments.
Factors influencing the PAH sediment-water partitioning
Partition coefficients (K d and K oc ) are key parameters for understanding and describing the distribution of PAHs between sediment and water. The log K d and log K oc values for PAHs found in the literature display great variability. Even though it is clear that K d and K oc are related to the physico-chemical characteristics of the compounds, the variability between studies likely arises from the strong heterogeneity in the nature/composition of sedimentary particles and OM, which influences the whole-sediment sorption capacity.
Hydrophobicity of PAHs
The values for log K oc we determined here were plotted versus the corresponding log K ow in Fig. 4. A strong positive significant correlation was observed between log K ow and the measured log K oc for the 2-4 ring PAHs for both SRE:
For 0-2 cm SRE: log K oc = 0.59 × log K ow + 4.9 (r = 0.89, p = 0.001, n = 10) (a)
For 30-32 cm SRE: log K oc = 0.64 × log K ow + 4.7 (r = 0.91, p < 0.0001, n = 10) (b)
The equations (a and b, r = 0.89-0.91) revealed that the linear free energy relationship was applicable for 2-4 ring PAHs. The log K oc values for the 5-6 ring compounds diverged from the 2-4 ring compound linearity area (Fig. 4). They were lower than expected from equation (a) or (b), and very close to each other, depicting a plateau. They were also closer to log K ow values and to log K oc predicted from [START_REF] Bouloubassi | Rôle des fleuves dans les apports de contaminants organiques aux zones côtières: cas des hydrocabures aromatiques polycycliques (HAP) dans le delta du Rhône (Mediterranée Nord-Occidentale)[END_REF] and [START_REF] Means | Sorption of polynuclear aromatic hydrocarbons by sediments and soils[END_REF]. These results confirmed that the hydrophobicity of the concerned PAHs remains an essential factor for their partitioning between water and suspended particles. This is in agreement with most of the previous studies (Accardi-Dey and Gschwend, 2003;[START_REF] Bouloubassi | Rôle des fleuves dans les apports de contaminants organiques aux zones côtières: cas des hydrocabures aromatiques polycycliques (HAP) dans le delta du Rhône (Mediterranée Nord-Occidentale)[END_REF]Dong et al., 2015). However, it is not the only factor, because (i) the log K oc measured here were generally above log K ow (Fig. 4), as observed in several studies [START_REF] Bouloubassi | Rôle des fleuves dans les apports de contaminants organiques aux zones côtières: cas des hydrocabures aromatiques polycycliques (HAP) dans le delta du Rhône (Mediterranée Nord-Occidentale)[END_REF]Feng et al., 2007;[START_REF] Rockne | Distributed Sequestration and Release of PAHs in Weathered Sediment: The Role of Sediment Structure and Organic Carbon Properties[END_REF], and above log K oc measured in previous studies dealing with the resuspension of sediment or porewater contents (Table S2), (ii) the slopes of the linear regressions for the 2-4 ring PAH were < 1 (0.59 and 0.64), which is in accordance with numerous studies, showing that sediment retention capacity was higher for lower MW than for higher MW PAHs (2->3->4-rings) [START_REF] Bouloubassi | Rôle des fleuves dans les apports de contaminants organiques aux zones côtières: cas des hydrocabures aromatiques polycycliques (HAP) dans le delta du Rhône (Mediterranée Nord-Occidentale)[END_REF][START_REF] Cao | Partitioning of PAHs in pore water from mangrove wetlands in Shantou, China[END_REF][START_REF] Fernandes | Polyaromatic Hydrocarbon (PAH) Distribution in the Seine River and its Estuary[END_REF],
(iii) the mobility of 5-6 ring PAH diverged from that of 2-4 rings, suggesting that the transport processes differed between these two groups (Feng et al., 2007), and (iv) the remobilization kinetics for the surface and deep sediment were shown to be sediment-dependent. For all of these reasons, other factors, such as sedimentary grain size and OM quality, were suspected to play an important role in PAH partitioning.
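To make the use of equations (a) and (b) concrete, the sketch below applies them to a 3-ring PAH; the log K ow value is a commonly tabulated figure assumed here only for illustration.

```python
def log_koc_predicted(log_kow, sre="0-2 cm"):
    """Empirical relations (a) and (b) fitted in this study for 2-4 ring PAHs."""
    if sre == "0-2 cm":
        return 0.59 * log_kow + 4.9    # equation (a)
    return 0.64 * log_kow + 4.7        # equation (b), 30-32 cm SRE

# Example: phenanthrene (3 rings), taking log K_ow ~ 4.6 (assumed value).
for sre in ("0-2 cm", "30-32 cm"):
    print(sre, f"-> log K_oc ~ {log_koc_predicted(4.6, sre):.2f}")
# Both give ~7.6, i.e. well above log K_ow, as noted in point (i) above.
```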
Sedimentary grain size and OM quality
In this work, sediments were characterized by fine-grained particles that promote the accumulation of hydrophobic compounds such as PAHs [START_REF] Yang | Release of polycyclic aromatic hydrocarbons from Yangtze River sediment cores during periods of simulated resuspension[END_REF]. Concerning OM sorption capacities, Cornelissen and Gustafsson (2004 and references therein) support the dual-mode sorption concept. The latter suggests that sedimentary OM retention is due to absorption into amorphous OM, including biopolymers and humic substances (HS), and adsorption onto carbonaceous geosorbents, including unburned coal, kerogen and black carbon (BC). HS and BC are regarded as two major effective sorbents for PAHs in sediment particles (Delle [START_REF] Site | Factors affecting Sorption of Organic Compounds in Natural Sorbent/Water Systems and Sorption Coefficients for Selected Pollutants. A Review[END_REF]Khan and Shnitzer, 1972;Cornelissen and Gustafsson, 2004). [START_REF] Cornelissen | Extensive Sorption of Organic Compounds to Black Carbon, Coal, Kerogen in Sediments and Soils: Mechanisms and Consequences for Distribution, Bioaccumulation, and Biodegradation[END_REF] showed that a high content of carbonaceous geosorbents might explain K oc values that are 1-2 orders of magnitude higher. HS and BC were not measured in the sediment in the present work. Nonetheless, humic-like material was evidenced in these Toulon sediments by [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF] and we can reasonably assume a high content of sedimentary soot BC, because the PAHs originated mainly from combustion residues (Fig. S2, [START_REF] Readman | A record of polycyclic aromatic hydrocarbon (PAH) pollution obtained from accreting sediments of the Tamar estuary, U.K.: evidence for non-equilibrium behaviour of PAH[END_REF][START_REF] Zhou | The partition of fluoranthene and pyrene between suspended particles and dissolved phase in the Humber Estuary: A study of the controlling factors[END_REF]; both HS and BC can be released into seawater during our SRE [START_REF] Dong | Effects of recurrent sediment reuspension-deposition events on bioavailability of polycyclic aromatic hydrocarbons in aquatic environments[END_REF][START_REF] Kaal | Molecular properties of ultrafiltered dissolved organic matter and dissolved black carbon in headwater streams as determined by pyrolysis-GC-MS[END_REF].
Therefore, the 2-4 ring and 5-6 ring PAHs could have been partitioned differently between HS and BC, which each present distinct binding capacities.
The decoupling between 2-4 and 5-6 ring compounds could also be linked to the association of these two groups of PAHs with different sedimentary particle sizes. Indeed, [START_REF] Karickhoff | Sorption of hydrophobic pollutants on natural sediments[END_REF] and Latimer et al. (1999) showed that HMW PAHs are preferentially associated with larger size particles and will therefore have different transport characteristics and fate. In the present case, coarse silts would display lower active surface area, leading to lower retention capacities than the finer ones. This may also explain the relatively higher mobility of 5-6 ring PAHs towards the dissolved phase compared to LMW PAHs.
DOM
Due to the complexity of interactions driving the PAH partitioning between water and sediment, it is very likely that such contaminants were distributed into three phases: the first phase (solid phase, sediment), the second phase (solution phase, water) and the third phase (colloidal or DOM-associated phase) (Delle [START_REF] Site | Factors affecting Sorption of Organic Compounds in Natural Sorbent/Water Systems and Sorption Coefficients for Selected Pollutants. A Review[END_REF]. Within the third phase, the formation of PAH-DOM complexes may increase the transfer of PAHs towards the dissolved phase [START_REF] Dong | Effects of recurrent sediment reuspension-deposition events on bioavailability of polycyclic aromatic hydrocarbons in aquatic environments[END_REF]. The effect of the third phase is known to be more pronounced for compounds exhibiting higher hydrophobicity (log K ow > 5) (Delle [START_REF] Site | Factors affecting Sorption of Organic Compounds in Natural Sorbent/Water Systems and Sorption Coefficients for Selected Pollutants. A Review[END_REF][START_REF] Dong | Effects of recurrent sediment reuspension-deposition events on bioavailability of polycyclic aromatic hydrocarbons in aquatic environments[END_REF].
The remobilization of dissolved PAHs, especially during the 30-32 cm SRE, seemed to be associated with the release of dissolved/colloidal diagenetic HS (correlation between PAHs and humic-like fluorophore; Table 3, Dang et al., 2015b). This is consistent with the fact that this humic-like fluorophore in the Toulon sediment would arise from the OM diagenetic transformation [START_REF] Dang | Sedimentary dynamics of coastal organic matter: An assessment of the porewater size/reactivity model by spectroscopic techniques[END_REF]. We may assume that these dissolved PAHs released from sediment concomitantly to humic-like material were present in water both as free compounds and as complexes with DOM, which in turn might substantially modify their fate in the water column [START_REF] Hur | Spectroscopic characterization of dissolved organic matter isolates from sediments and the association with phenanthrene binding affinity[END_REF][START_REF] Kim | Determination of Partition Coefficients for Selected PAHs between Water and Dissolved Organic Matter[END_REF][START_REF] Sabbah | An independent prediction of the effect of dissolved organic matter on the transport of polycyclic aromatic hydrocarbons[END_REF]. The coupling between PAHs and HS could be favored by the anoxic/diagenetic state of this deep sediment, since previous studies have shown that the binding capacity between PAHs and HS was negatively related to the oxidation degree and positively related to the OM humification degree [START_REF] Chen | Assessing desorption resistance of PAHs in dissolved humic substances by membrane-based passive samplers[END_REF][START_REF] He | Adsorption of a typical polycyclic aromatic hydrocarbon by humic substances in water and the effect of coexisting metal ions[END_REF][START_REF] Perminova | Relationship between Structure and Binding Affinity of Humic Substances for Polycyclic Aromatic Hydrocarbons: Relevance of Molecular Descriptors[END_REF].
Consequently, the relatively higher mobility of 5-6 ring compounds towards the dissolved phase observed here, already highlighted by Mitra et al. (1999), could be explained by their interactions with HS (third phase effect), which would be in accordance with observations from Delle [START_REF] Site | Factors affecting Sorption of Organic Compounds in Natural Sorbent/Water Systems and Sorption Coefficients for Selected Pollutants. A Review[END_REF]. However, a detailed correlation analysis revealed that only 2-4 ring compounds displayed a significant correlation with HS at the 30-32 cm depth (r = 0.81, p = 0.013, n = 8), while 5-6 rings did not correlate (r = 0.51, p = 0.21, n = 8) (not shown in Table 3). This does not necessarily exclude the hypothesis of the third phase distribution for 5-6 ring PAHs, but rather suggests that the latter would arise from their interaction with other OM moieties. Since PAHs in sediment most likely originated from combustion processes, the mobility of 5-6 ring PAHs could be related preferentially to the release of dissolved black carbon [START_REF] Kaal | Molecular properties of ultrafiltered dissolved organic matter and dissolved black carbon in headwater streams as determined by pyrolysis-GC-MS[END_REF].
Equilibration time
In parallel with the previous conclusions, the present high log K oc values may also be explained by a non-equilibrium partition status. Several authors have indicated that partition coefficients measured under non-equilibrium conditions can lead to errors in log K oc [START_REF] Site | Factors affecting Sorption of Organic Compounds in Natural Sorbent/Water Systems and Sorption Coefficients for Selected Pollutants. A Review[END_REF]. Moreover, if the suspected bi-phasic desorption model applies here, the fast release may have occurred within a short time (~1 h), whereas total equilibrium may take months to years to be reached (Delle Site, 2001 and references therein). Thus, considering the remobilization kinetics, it is very likely that equilibrium between the phases had not been reached and that dissolved-phase concentrations would have continued to increase. Nevertheless, our aim was to reproduce realistic conditions of dredging-induced sediment resuspension lasting from a few days to several days, and this work should therefore provide useful insight into the processes that control PAH fate and transport.
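To make the bi-phasic desorption argument concrete, the sketch below (Python; the pool fractions and rate constants are illustrative assumptions, not values fitted to these experiments) shows how a fast pool can be released within about an hour while overall equilibrium is approached only after months to years.

import numpy as np

def biphasic_release(t_h, f_fast=0.6, k_fast=2.0, k_slow=1e-4):
    """Fraction of the equilibrium dissolved concentration released after t_h hours,
    assuming two first-order desorbing pools (fast and slow).
    f_fast, k_fast (h-1) and k_slow (h-1) are illustrative assumptions."""
    return (f_fast * (1.0 - np.exp(-k_fast * t_h))
            + (1.0 - f_fast) * (1.0 - np.exp(-k_slow * t_h)))

for t in (1, 24, 24 * 7, 24 * 365):  # 1 h, 1 day, 1 week, 1 year
    print(f"{t:>5} h: {biphasic_release(t):.2f} of equilibrium")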
Impact of the MIS sediment resuspension on water quality
Although the sediment from the MIS site was very highly contaminated, and in spite of the dissolved PAH enrichment during the presented experiments, the concentrations of individual PAHs in water during both SRE remained far below the EU WFD MAC-EQS, except for one compound, BP, which exceeded the threshold value of 0.82 ng L -1 by 3.8-fold (0-2 cm SRE) to 5.1-fold (30-32 cm SRE) (Table 4). These BP concentrations show that sediment resuspension in Toulon Bay may have harmful effects on the biota (Feng et al., 2007). However, BP (a 6-ring compound) is most likely associated with colloids or DOM, which might in turn reduce its bioavailability and toxicity.
Conclusion
Sediment at the MIS site (Toulon Bay) appears to be one of the most contaminated sediments of the Mediterranean coasts in terms of PAHs and trace elements. Sediment resuspension is a geochemically significant process for PAHs and OM in coastal environments. The two simulated SRE showed remobilization of 2- to 6-ring PAHs from the sediment to the dissolved phase, increasing seawater concentrations up to 10-fold. These concentrations nevertheless remained below the toxicity thresholds, limiting potential adverse effects, except for BP. The PAH transfer from the Toulon sediment to water was lower than predicted from log K ow, confirming the sequestration role of such a fine-grained and OC-enriched sediment for hydrophobic contaminants. The remobilization patterns of dissolved PAHs and DOM differed between the two sediments, very likely because of their respective diagenetic state and redox status. Additionally, the difference in the relationships between log K ow and log K oc for 2-4 ring and 5-6 ring PAHs may be explained by their partitioning to different OM moieties, i.e., HS and BC, respectively, but also to different particle grain sizes, i.e., finer and larger, respectively. Additional experiments would be necessary to discriminate the roles of particle size and associated OM, as well as of weathering processes (degradation, volatilization), particularly for LMW PAHs, on the sediment-water PAH partitioning at this site. The combination of sedimentary OM and insufficient equilibration time may easily explain K oc values 2-3 orders of magnitude higher than in other studies. Sediment resuspension thus remains a complex issue when linking particle concentrations to the degradation of water quality and the exposure of living organisms.
These results should be taken into account in future management operations and environmental policy in polluted coastal areas such as Toulon harbor.
Table 1. Concentration range of the 16 priority PAHs (10 3 ng g -1 sed. dw) in the whole sediment core and in the 0-2 and 30-32 cm sediments used for sediment resuspension experiments (SRE). Comparison with the toxicity critical levels published in the JORF for the French legislation and with the international SQG references proposed by Long et al. (1995).

(Data rows of Table S2: log K d and log K oc values for the individual PAHs reported by this study and by Accardi-Dey and Gschwend (2002, 2003), Cornelissen et al. (2006), Dong et al. (2015), Feng et al. (2007), Jonker and Koelmans, Latimer et al. (1999), Lohmann et al. (2005), Mitra et al. (1999a, b) and Socha and Carpenter (1987); the original column layout of these rows is given with the Table S2 header below.)
Concerning PAHs, we determined the concentrations of 19 parent PAHs and alkylated homologues of the target compounds Naph, Flu, Phe/Ant, Flt/Pyr and Chr, leading to a total of 34 PAHs (see full names of PAHs in Text S1). Naph and its alkylated homologues are 2-ring compounds. Acy, Ace, Flu, DBT, Phe, Ant and alkylated homologues of Flu, Phe and Ant are 3-ring compounds. Flt, Pyr, BaA, Chr and alkylated homologues of Flt, Pyr and Chr are 4-ring compounds. BbF, BkF, BaP, Per and DBA are 5-ring compounds, while BP and IndP are 6-ring compounds. Concerning AHs, we determined the concentrations of the resolved n-alkane series (R) from n-C 15 to n-C 36 with two isoprenoids, pristane (Pr) and phytane (Phy), as well as the unresolved complex mixture (UCM) concentrations. All ratios and indices used to determine PAH and AH potential origins are given in the supplementary material (see Text S1).
One fluorophore displayed two fluorescence maxima at λ Ex1 , λ Ex2 /λ Em of 230, 340/452 nm. This humic-like fluorophore corresponds to peaks A + C in the classification of [START_REF] Coble | Characterization of marine and terrestrial DOM in seawater using excitation emission matrix spectroscopy[END_REF] and to component 2 in the classification of [START_REF] Ishii | Behavior of Reoccurring PARAFAC Components in Fluorescent Dissolved Organic Matter in Natural and Engineered Systems: A Critical Review[END_REF]. The other, with two fluorescence maxima located at λ Ex1 , λ Ex2 /λ Em of 230, 280/348 nm, corresponds to a tryptophan-like fluorophore, i.e., peaks T1 + T2 according to [START_REF] Coble | Characterization of marine and terrestrial DOM in seawater using excitation emission matrix spectroscopy[END_REF] and [START_REF] Hudson | Fluorescence analysis of dissolved organic matter in natural, waste and polluted waters-A review[END_REF] (contour plots of PARAFAC components not shown). It should be noted that this fluorophore could also indicate the presence of PAHs, and specifically of naphthalene, whose fluorescence spectral domain is close to that of tryptophan.
Figure captions

Figure 1.

Figure 2. Distribution profile of PAHs in the 0-2 and 30-32 cm sediments used for sediment resuspension experiments (SRE).

Figure 3. Concentrations of dissolved Σ34 PAHs (ng L -1 ) (a, b) and of DOC (µM) (c, d) during the two sediment resuspension experiments. The standard deviation is given for the two bottle replicates (n = 2). For DOC, the sample at day 7 of the 30-32 cm SRE, considered as an outlier, was removed.

Figure 4. Relationship between log K ow and log K oc determined during the sediment resuspension experiments performed on the 0-2 cm (white dots for 2-4 ring compounds, white squares for 5-6 ring ones) and 30-32 cm (gray dots for 2-4 ring compounds, gray squares for 5-6 ring ones) sediments for the 16 priority PAHs. The equations for 2-4 ring and 5-6 ring compounds in the 0-2 cm SRE were y = 0.59x + 4.9 (r = 0.89, n = 10, p < 0.05) and y = 0.07x + 7.3 (r = 0.12, n = 6, p > 0.05), respectively. The equations for 2-4 ring and 5-6 ring compounds in the 30-32 cm SRE were y = 0.64x + 4.7 (r = 0.91, n = 10, p < 0.05) and y = -0.17x + 8.7 (r = 0.24, n = 6, p > 0.05), respectively. Crosses represent the theoretical relationship log K oc = log K ow - 0.32 according to [START_REF] Means | Sorption of polynuclear aromatic hydrocarbons by sediments and soils[END_REF]. Black diamonds represent values from [START_REF] Bouloubassi | Rôle des fleuves dans les apports de contaminants organiques aux zones côtières: cas des hydrocabures aromatiques polycycliques (HAP) dans le delta du Rhône (Mediterranée Nord-Occidentale)[END_REF]. The standard deviation of log K oc for each compound is given for the two bottle replicates (n = 2). The log K ow are the means of different values reported in Accardi-Dey and Gschwend (2003), Dong et al. (2015), Feng et al. (2007) and Socha and Carpenter (1987).

Figure S1. Concentrations of Σ34 PAHs (10 3 ng g -1 sed. dw) with depth in the whole sediment core from the MIS site. The black circles represent the 0-2 and 30-32 cm sediment samples used for sediment resuspension experiments (SRE).

Figure S2.

Figure S3. Molecular distribution profile (%) of resolved n-alkanes with pristane (Pr) and phytane (Phy) for the 0-2 and 30-32 cm sediments used for sediment resuspension experiments (SRE).
Table 2. Physical and chemical characteristics of the 0-2 and 30-32 cm sediments used for sediment resuspension experiments (SRE).

                          0-2 cm sediment    30-32 cm sediment
Moisture (%)              62                 52
Sand (%)                  0                  0
Silt (%)                  69                 82
Clay (%)                  31                 18
OC (%)                    8.2                6.3
ON (%)                    0.61               0.29
C/N                       13.5               22.0
Σ34 PAHs (10 3 ng g -1 )  38.2               35.7
R (10 3 ng g -1 )         2.41               10.0
UCM/R                     30                 118
CPI 15-24                 1.3                1.3
CPI 25-36                 2.9                2.2

OC and ON: organic carbon and nitrogen (% for 1 g sed. dw); Σ34 PAHs: concentrations in total polycyclic aromatic hydrocarbons (10 3 ng g -1 sed. dw); R: concentration in resolved n-alkanes from n-C 15 to n-C 36 with two isoprenoids, pristane (Pr) and phytane (Phy) (10 3 ng g -1 sed. dw); UCM: unresolved complex mixture. CPI 15-24 and CPI 25-36: carbon preference indices in the ranges n-C 15 -n-C 24 and n-C 25 -n-C 36.
Table S2. Partition coefficients of the individual PAHs between sediment and water (log K d ) and between sediment organic matter and water (log K oc ) determined in this study during sediment resuspension experiments (SRE). Comparison with the log K d , log K oc and partition coefficients between octanol and water (log K ow ) from the literature. The log K ow are the means of different values reported in Accardi-Dey and Gschwend (2003); Dong et al. (2015); Feng et al. (2007); [START_REF] Socha | Factors affecting pore water hydrocarbonconcentrations in Puget Sound sediments[END_REF].

             Naph    Acy     Ace     Flu     Phe     Ant     Flt     Pyr
log K ow     3.31    4.04    4.00    4.19    4.51    4.50    5.14    5.09

For each study, the table reports the % OC of the sediment together with log K d and log K oc for each compound.
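For reference, the partition coefficients listed in Table S2 follow the usual definitions K d = C sed / C water and K oc = K d / f OC. A minimal Python sketch (the concentrations below are made-up example values, not data from this study) illustrates the calculation and the comparison with the theoretical relationship log K oc = log K ow - 0.32 used in Figure 4.

import math

def log_kd(c_sed_ng_g, c_water_ng_l):
    """log10 K_d (L kg-1): C_sed in ng g-1 dry weight, C_water in ng L-1."""
    return math.log10(c_sed_ng_g * 1000.0 / c_water_ng_l)

def log_koc(lkd, f_oc):
    """Normalise K_d to the organic carbon fraction (e.g. f_oc = 0.082 for 8.2 % OC)."""
    return lkd - math.log10(f_oc)

# Made-up example: a 4-ring PAH at 5000 ng g-1 in sediment and 2 ng L-1 in water,
# with the OC content of the 0-2 cm sediment (8.2 %).
lkd = log_kd(5000.0, 2.0)
print(f"log Kd  = {lkd:.1f}")                   # ~6.4
print(f"log Koc = {log_koc(lkd, 0.082):.1f}")   # ~7.5
print(f"log Koc expected from log Kow = 5.1 (Means et al.): {5.1 - 0.32:.1f}")  # 4.8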
Present address: School of the Environment, Trent University, 1600 West Bank Drive, Peterborough, ON, K9L 0G2, Canada
Acknowledgements. This study was funded and performed in the framework of CNRS MISTRALS MERMEX-WP3-C3A research program. We are grateful to the French Navy for logistical support and sampling assistance. We thank the Service d'Observation of the Mediterranean Institute of Oceanography and the core parameter platform, both managed by P.
Raimbault. We acknowledge the granulometry platform of the CEREGE, managed by D.
Delanghe, for OC, ON and particle grain size analyses, as well as G. Wassouf for FDOM analyses. Three anonymous reviewers are acknowledged for their relevant comments.

Ratios of BaA/(BaA+Chr), IndP/(IndP+BP) and Flt/(Flt+Pyr) were used to assess the origin of the sediment particles ([START_REF] Tobiszewski | PAH diagnostic ratios for the identification of pollution emission sources[END_REF]; Yunker et al., 2002).
Concerning AHs, the UCM hump corresponds to a mixture of many structurally complex isomers and homologues of branched and cyclic hydrocarbons that cannot be resolved by capillary GC columns (Bouloubassi and Saliot, 1993). Its relative importance, expressed as the ratio of unresolved to resolved compounds (UCM/R), is commonly used as a diagnostic criterion of petroleum inputs [START_REF] Mazurek | Characterization of biogenic and petroleum-derived organic matter in aerosols over remote, rural and urban areas[END_REF]. We also determined the carbon preference indices in the ranges n-C 15 -n-C 24 and n-C 25 -n-C 36 (CPI 15-24 and CPI 25-36 , respectively), expressed as the ratio of odd to even carbon numbered n-alkanes (Blumer et al., 1971;Bouloubassi and Saliot, 1993;Douglas and Eglinton, 1966). Finally, we also assessed the relative proportions of biogenic (marine and terrigenous) and petroleum n-alkanes using the formula: % anthropogenic n-alkanes = 2 / (CPI + 1) × 100
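A minimal Python sketch of the CPI and % anthropogenic n-alkanes calculations described above (the n-alkane concentrations are made-up example values, and the simple odd/even ratio used here is only one of several published CPI formulations):

def cpi(conc, c_min, c_max):
    """Carbon preference index over [c_min, c_max]: sum of odd-carbon n-alkane
    concentrations over the sum of even-carbon ones (simple odd/even form)."""
    odd = sum(v for n, v in conc.items() if c_min <= n <= c_max and n % 2 == 1)
    even = sum(v for n, v in conc.items() if c_min <= n <= c_max and n % 2 == 0)
    return odd / even

def pct_anthropogenic(cpi_value):
    """% anthropogenic (petroleum-derived) n-alkanes = 2 / (CPI + 1) x 100."""
    return 2.0 / (cpi_value + 1.0) * 100.0

# Made-up n-alkane concentrations (ng g-1 sed. dw) keyed by carbon number.
alkanes = {15: 40, 16: 35, 17: 60, 18: 30, 19: 45, 20: 28, 21: 50, 22: 25,
           23: 55, 24: 22, 25: 90, 26: 30, 27: 120, 28: 35, 29: 150, 30: 40,
           31: 160, 32: 38, 33: 110, 34: 30, 35: 70, 36: 25}

for lo, hi in ((15, 24), (25, 36)):
    c = cpi(alkanes, lo, hi)
    print(f"CPI {lo}-{hi} = {c:.1f}, anthropogenic n-alkanes = {pct_anthropogenic(c):.0f} %")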
Text S2. Environmental quality guidelines
For the present study, the PAH concentrations were compared to (i) the "Official Journal of the French Republic" (JORF), number 0046, February 23rd, 2013, as the current local legislation and (ii) the "sediment quality guidelines" (SQG) proposed by Long et al. (1995) as the international reference [START_REF] Barhoumi | Polycyclic aromatic hydrocarbons (PAHs) in surface sediments from the Bizerte Lagoon, Tunisia: levels, sources, and toxicological significance[END_REF][START_REF] Guo | Distribution of polycyclic aromatic hydrocarbons in water, suspended particulate matter and sediment from Daliao Riverwatershed, China[END_REF][START_REF] Zaghden | Origin and distribution of hydrocarbons and organic matter in the surficial sediments of the Sfax-Kerkennah channel (Tunisia, Southern Mediterranean Sea)[END_REF].
They both proposed two critical levels: the levels 1 and 2 for JORF (L1 and L2, respectively) and the effects range-low and effects range-median for the SQG (ERL and ERM, respectively) (see Table 1). Briefly, concentrations below the L1 and ERL values reflect no particular contamination, or conditions under which biologically adverse effects would rarely be observed. Concentrations between L1 and L2, or between ERL and ERM, indicate a potential contamination with adverse effects occasionally occurring and call for complementary investigations. Concentrations equal to or above the L2 or ERM values indicate a high contamination with adverse effects frequently occurring. For waters, the PAH concentrations were compared to the maximum allowable concentration for environmental quality standards (MAC-EQS) defined by the European Union Water Framework Directive (WFD, 2013/39/EU).

Reference: Zaghden, H., et al. Origin and distribution of hydrocarbons and organic matter in the surficial sediments of the Sfax-Kerkennah channel (Tunisia, Southern Mediterranean Sea). Marine Pollution Bulletin 117, 414-428.
"20366",
"19857",
"1310672",
"1842",
"20622"
] | [
"835",
"191652",
"22032",
"262400",
"22032",
"191652"
] |
00734361 | en | ["sdu", "sde"] | 2024/03/05 22:32:16 | 2012 | https://hal.science/hal-00734361/file/Tedetti%20et%20al%202012_final.pdf | Marc Tedetti
email: [email protected]
Rachele Longhitano
Nicole Garcia
Catherine Guigue
Nicolas Ferretto
Madeleine Goutx
Fluorescence properties of dissolved organic matter in coastal Mediterranean waters influenced by a municipal sewage effluent (Bay of Marseilles, France)
Keywords: EEM fluorescence, PARAFAC, sewage effluent, Mediterranean Sea, Cortiou, tryptophan
Introduction
Dissolved organic matter (DOM), which represents one of the largest active pools of organic carbon at the earth's surface (~ 700 Gt C), plays a key biogeochemical role in the aquatic medium. [START_REF] Hedges | Why Dissolved Organics Matter? in Biogeochemistry of Marine Dissolved Organic Matter[END_REF][START_REF] Carlson | Production and removal processes[END_REF][START_REF] Jiao | Microbial production of recalcitrant dissolved organic matter: long-term carbon storage in the global ocean[END_REF] Fluorescence spectroscopy techniques, in particular excitation-emission matrices (EEMs), have been successfully used to investigate the origin, distribution and dynamics of DOM in various marine and freshwater environments. [START_REF] Coble | Characterization of marine and terrestrial DOM in seawater using excitationemission matrix spectroscopy[END_REF] EEMs, coupled to peak picking technique or to parallel factor analysis (PARAFAC), have allowed the identification of two main types of fluorophores within the aquatic DOM pool: 1) the protein-like fluorophores, with fluorescence signatures roughly similar to those of tryptophan and tyrosine aromatic amino-acids, generally attributed to autochthonous/labile DOM and 2) the humiclike fluorophores, with fluorescence signatures corresponding to those of humic and fulvic acids, rather associated with terrestrial/degraded DOM. [START_REF] Coble | Marine Optical Biogeochemistry -The Chemistry of Ocean Color[END_REF][START_REF] Hudson | Fluorescence analysis of dissolved organic matter in natural, waste and polluted waters-A review[END_REF][START_REF] Fellman | Fluorescence spectroscopy opens new windows into dissolved organic matter dynamics in freshwater ecosystems: A review[END_REF][START_REF] Yamashita | Chemical characterization of protein-like fluorophores in DOM in relation to aromatic amino acids[END_REF] The discharge of sewage effluents (SEs) in the aquatic systems is a source of environmental concern for a long time and will continue to be a major problem in future years due to the population growth and increasing urban activities combined with the effects of climate change such as unpredictable rainfall patterns. [START_REF] Brown | The effects of tertiary treated municipal wastewater on fish communities of a small river tributary in Southern Ontario, Canada[END_REF] SEs may contain high levels of organic matter, nutrients, fecal bacteria, viruses and chemical contaminants such as heavy metals, hydrocarbons, pesticides, polychlorinated biphenyls and pharmaceutical products. [10- 12] Interestingly, EEMs and synchronous fluorescence spectroscopy have proven to be relevant tools for tracking and fingerprinting SE-derived DOM in rivers and estuaries. [START_REF] Galapate | Detection of domestic wastes in Kurose River using synchronous fluorescence spectroscopy[END_REF][START_REF] Baker | Detecting river pollution using fluorescence spectrophotometry: case studies from the Ouseburn, NE England[END_REF][START_REF] Holbrook | Impact of reclaimed water on select organic matter properties of a receiving stream-fluorescence and perylene sorption behavior[END_REF] Indeed, the rivers impacted by treated/untreated SEs generally show higher tryptophanlike/humic-like fluorescence ratios compared to "clean" rivers. 
[START_REF] Henderson | Fluorescence as a potential monitoring tool for recycled water systems: A review[END_REF] In the latter, DOM mostly originates from terrestrial plants and soils, and thus contains a high amount of humic-like fluorophores and a low content in protein-like substances. [START_REF] Coble | Characterization of marine and terrestrial DOM in seawater using excitationemission matrix spectroscopy[END_REF][START_REF] Fellman | Fluorescence spectroscopy opens new windows into dissolved organic matter dynamics in freshwater ecosystems: A review[END_REF] In contrast, DOM derived from SEs is enriched in tryptophan-like fluorophore, [START_REF] Reynolds | Rapid and direct determination of wastewater BOD values using a fluorescence technique[END_REF][START_REF] Arunachalam | Monitoring aerobic sludge digestion by online scanning fluorometry[END_REF][START_REF] Saadi | Monitoring of effluent DOM biodegradation using fluorescence, UV and DOC measurements[END_REF] ascribed to high levels of microbial activity in the effluents. [START_REF] Reynolds | The differentiation of biodegradable and non-biodegradable dissolved organic matter in wastewaters using fluorescence spectroscopy[END_REF][START_REF] Elliott | Characterisation of the fluorescence from freshwater, planktonic bacteria[END_REF][START_REF] Hudson | Can fluorescence spectrometry be used as a surrogate for the biochemical oxygen demand (BOD) test in water quality assessment? An example from South West England[END_REF] However, although numerous studies have already investigated the fluorescent DOM (FDOM) composition in municipal/industrial/agriculture SEs [START_REF] Saadi | Monitoring of effluent DOM biodegradation using fluorescence, UV and DOC measurements[END_REF][START_REF] Hudson | Can fluorescence spectrometry be used as a surrogate for the biochemical oxygen demand (BOD) test in water quality assessment? An example from South West England[END_REF][START_REF] Ahmad | Monitoring of water quality using fluorescence technique: prospect of on-line process control[END_REF][START_REF] Baker | Fluorescence properties of some farm wastes: implications for water quality monitoring[END_REF] or in SE-impacted rivers/estuaries, [START_REF] Galapate | Detection of domestic wastes in Kurose River using synchronous fluorescence spectroscopy[END_REF][START_REF] Baker | Detecting river pollution using fluorescence spectrophotometry: case studies from the Ouseburn, NE England[END_REF][START_REF] Holbrook | Impact of reclaimed water on select organic matter properties of a receiving stream-fluorescence and perylene sorption behavior[END_REF][START_REF] Baker | Measurement of protein-like fluorescence in river and waste water using a handheld spectrophotometer[END_REF][START_REF] Spencer | Discriminatory classification of natural and anthropogenic waters in two U.K. estuaries[END_REF][START_REF] Filippino | The Bioavailability of Effluent-derived Organic Nitrogen along an Estuarine Salinity Gradient[END_REF] much less work has focused on the FDOM distribution in coastal marine waters influenced by diverse SE inputs. 
[START_REF] Petrenko | Effects of a sewage plume on the biology, optical characteristics, and particle size distributions of coastal waters[END_REF][START_REF] Clark | A study of fecal coliform sources at a coastal site using colored dissolved organic matter (CDOM) as a water source tracer[END_REF][START_REF] Zhuo | Fluorescence Excitation-Emission Matrix Spectroscopy of CDOM from Yundang Lagoon and Its Indication for Organic Pollution[END_REF] None of them addressed the Mediterranean Sea. Located 8 km east of Marseilles City (northwestern Mediterranean Sea, France), Cortiou Cove is the discharge area of the municipal SE from Marseilles and fifteen surrounding municipalities. The latter (termed "Marseilles SE") is composed of a secondary-treated SE and pretreated river waters. Despite the establishment of primary (physicochemical) and secondary (biological) wastewater treatments in 1987 and 2007, respectively, Cortiou Cove is still considered one of the most polluted coastal sites of the French Mediterranean. [START_REF] Bellan | Benthic ecosystem changes associated with wastewater treatment at marseille: implications for the protection and restoration of the Mediterranean coastal shelf ecosystems[END_REF][START_REF] Arfi | Impact du grand émissaire de Marseille et de l'Huveaune détournée sur l'environnement marin de Cortiou -Etude bibliographique raisonnée 1960-2000[END_REF][START_REF] Togola | Multi-residue analysis of pharmaceutical compounds in aqueous samples[END_REF][START_REF] Syakti | Distribution of organochlorine pesticides (OCs) and polychlorinated biphenyls (PCBs) in marine sediments directly exposed to wastewater from Cortiou, Marseille[END_REF] The aim of this study is to characterize the FDOM pool in the marine waters influenced by the Marseilles SE using EEM spectrofluorometry and PARAFAC modelling. We report a time-series of FDOM data, associated with environmental parameters, collected in the Bay of Marseilles from September 2008 to June 2010.
Materials and methods
The Marseilles sewage effluent
The Marseilles SE is released in the surface waters of Cortiou Cove (Fig. 1). It is composed of 1) a secondary-treated SE (domestic sewages with or without storm waters) and 2) the pretreated Huveaune River waters. In dry weather, the secondary-treated SE contains only domestic sewage and has a daily average flow rate of 2.3 m 3 s -1 . Minimal around 6:30 am (1.3 m 3 s -1 ) and maximal at 10:30 am (3.2 m 3 s -1 ), the secondary-treated SE flow rate is directly linked to the daily activity patterns of the urban population. The SE residence time inside the treatment plant is ~ 4 h [data from the Société d'Exploitation du Réseau d'Assainissement de Marseille (SERAM)]. During rain events, the SE flow rate significantly rises due to the input of storm waters. Above 12 m 3 s -1 , the Marseilles treatment plant cannot process the SE anymore, which is directly discharged (untreated) into the sea. Since 1981, the Huveaune River, the main river of Marseilles, is routinely diverted from its natural outlet towards the treatment plant, pretreated, and transported to Cortiou Cove through the same sewer as the secondary-treated SE (Fig. 1).
In Cortiou Cove, the extent and fate of the Marseilles SE plume depends on its flow rate and on local hydrodynamic conditions. The latter are controlled by 1) the general circulation of the northwestern Mediterranean Sea, with the North Current that flows along the continental slope towards the west, 2) the wind-induced circulation and 3) the complex bathymetry of the Cortiou area and the shallow depths of its western part. Under low wind speed conditions, the dilution plume may persist up to 1500 m from the outlet with a westward-curved structure (influence of the North Current). [START_REF] Arfi | Impact du grand émissaire de Marseille et de l'Huveaune détournée sur l'environnement marin de Cortiou -Etude bibliographique raisonnée 1960-2000[END_REF] With southeast wind events, the dilution plume is pushed to the west coast, whereas under north wind conditions it extends offshore or eastward. [START_REF] Pairaud | Hydrology and circulation in a coastal area off Marseille: Validation of a nested 3D model with observations[END_REF] At the outlet, the plume presents a low salinity over a thickness of 3-4 m while this low salinity is observed over only few centimeters around 1 km from the outlet. [START_REF] Arfi | Impact du grand émissaire de Marseille et de l'Huveaune détournée sur l'environnement marin de Cortiou -Etude bibliographique raisonnée 1960-2000[END_REF] The residence time of waters from Cortiou Cove is approximately 2 days. [START_REF] Arfi | Impact d'une pollution urbaine sur la partie zooplanctonique d'un systeme neritique (Marseille -Cortiou)[END_REF]
Study sites and sampling
Seawater samples were collected eleven times from September 2008 to June 2010 in the Bay of Marseilles (northwestern Mediterranean Sea) aboard the R/V Antédon II. Five stations were sampled in the Cortiou area (South Bay) along a coast-open sea transect: Cort0 (40 m from the outlet), Cort1, Cort2, Cort3 and Cort4 (1500 m from the outlet). An additional site was sampled away from Cortiou Cove: Sofcom, the observation station of the Mediterranean Institute, located in the central Bay near Frioul Islands at 7 km from the coast (Fig. 1; Table 1). The sampling was performed in the morning, between 9:30 and 11:00 am for Cortiou stations (i.e. at or close to the maximal secondary-treated SE flow rate) and between 11:00 and 12:00 am for Sofcom, in dry weather under a variety of wind and sea conditions. Samples were taken in the subsurface water (SSW) at ~ 0.1 m depth using Nalgene ® polycarbonate bottles. The bottles were opened below the water surface to avoid the sampling of the surface microlayer. At Sofcom and Cort4, samples were also collected at 5, 20 and 55 m depth using a 5 l Niskin bottle equipped with silicon ribbons and Viton o-rings (Table 1).
The bottles were washed with 1 M hydrochloric acid (HCl) and ultrapure water (i.e. Milli-Q water, final resistivity: 18.2 MΩ cm -1 ) before use, rinsed three times with the respective sample before filling and stored in the dark in the cold.
Along with the discrete water samples, profiles of temperature (T), salinity (S) and chlorophyll a (Chla) concentration were obtained from a Seabird Electronics 19plus conductivity temperature depth (CTD) profiler equipped with a WETStar Chla fluorometer (WETLabs, Inc).
Filtration and handling of samples
Back in the laboratory, the samples were immediately filtered under a low vacuum (< 50 mm Hg). Filtration of samples was performed about 2-3 hours after their collection. The samples for FDOM and dissolved organic carbon (DOC) measurements were filtered through 0.2 µm polycarbonate filters (25 mm diameter, Nuclepore) in small pre-combusted (450 °C, 6 h) glass filtration systems. Prior to sample filtration, Nuclepore filters were first soaked in 1 M HCl and ultrapure water, and then processed with 300 ml of ultrapure water and 50 ml of sample. The 0.2 µm filtered water was transferred into pre-combusted 10 ml glass tubes (FDOM) and ampoules (DOC). The ampoules were flame-sealed after addition of 10 µl of 85% phosphoric acid. FDOM and DOC samples were kept at 4 °C in the dark during 24-48 h until analyses. The samples for particulate carbon (PC) measurements were filtered through GF/F glass fiber filters (47 mm diameter, Whatman). The GF/F filters were then dried 24 h at 50 °C and stored in a vacuum dryer until analysis. The samples for nutrients were analysed without being filtered. The samples for nitrate (NO 3 -), nitrite (NO 2 -) and phosphate (PO 4 3-) determination were collected into 50 ml polyethylene flasks and stored frozen until analysis.
Analysis of fluorescent dissolved organic matter (FDOM)
Instrument. FDOM measurements were carried out using a Hitachi F-7000 spectrofluorometer (Japan). This instrument, which provides a measuring wavelength range of 200-750 nm on both Ex and Em sides, is equipped with a 150 watt xenon short-arc lamp with a self-deozonating lamp compartment as light source, two stigmatic concave diffraction gratings with 900 lines mm -1 brazed at 300 (Ex side) and 400 nm (Em side) as single monochromators, and Hamamatsu R3788 (185-750 nm) photomultiplier tubes (PMTs) as reference and sample detectors (fluorescence measurements acquired in signal over reference ratio mode). The accuracy of the Ex and Em monochromators (± 0.4 nm) were determined using the mercury bright line at 435.8 nm from a fluorescent lamp. The correction of spectra for instrumental response was conducted from 200 to 600 nm according to the procedure recommended by Hitachi (Hitachi F-7000 Instruction Manual). First, the Ex instrumental response was recorded by placing a triangular quartz cuvette containing a concentrated solution of Rhodamine B (3 g l -1 in ethylene glycol) and a single-side frosted red (R-62) filter, used to suppress any stray light of the Ex beam below 620 nm, in the sample compartment.
An Ex scan was made from 200 to 600 nm for a λEm of 640 nm. The ratio of the signal recorded by the reference PMT to that recorded by the sample PMT provided the Ex correction curve. Then, the Em instrumental response was determined by using the xenon lamp. A quartz diffuser was placed in the sample compartment and a synchronous scan was performed from 200 to 600 nm. The ratio of the signal recorded by the sample PMT to that recorded previously by the sample PMT in presence of Rhodamine B provided the Em correction curve. The Ex and Em correction curves were applied internally by the instrument (through FL Solutions 2.1 software) to correct each fluorescence measurement acquired in signal over reference ratio mode from 200 to 600 nm.
Measurements. The samples were allowed to reach room temperature in the dark and transferred into a 1 cm pathlength far UV silica quartz cuvette (170-2600 nm; LEADER LAB ® ), thermostated at 20 °C in the cell holder by an external circulating water bath. The cuvette was cleaned with 1 M HCl and ultrapure water, and triple rinsed with the sample before use. EEMs were generated over λEx between 200 and 550 nm in 5 nm intervals, and λEm between 280 and 600 nm in 2 nm intervals, with 5 nm slit widths on both Ex and Em sides, a scan speed of 1200 nm min -1 , a time response of 0.5 s and a PMT voltage of 700 V.
Blanks (ultrapure water) and solutions of quinine sulphate dihydrate (Fluka, purum for fluorescence) in 0.05 M sulphuric acid (H 2 SO 4 ) from 0.5 to 50 µg l -1 were run with each set of samples. The physico-chemical parameters of water samples during EEM analyses, i.e. T (20 °C), S (30.0-38.4), pH (7.5-8.2) were consistent enough to not alter the fluorescence measurements. [START_REF] Hudson | Fluorescence analysis of dissolved organic matter in natural, waste and polluted waters-A review[END_REF][START_REF] Henderson | Fluorescence as a potential monitoring tool for recycled water systems: A review[END_REF][START_REF] Tedetti | Utilization of a submersible UV fluorometer for monitoring anthropogenic inputs in the Mediterranean coastal waters[END_REF] To account for inner filtering effects, absorbance measurements were performed from 200 to 600 nm in a 1 cm pathlength quartz cuvette with a Shimadzu UV-Vis 2450 spectrophotometer. Samples were analysed with reference to a filtered salt solution prepared with Milli-Q water and precombusted NaCl (Sigma) reproducing the refractive index of samples. Data processing. Different processing steps were performed on the fluorescence data: 1) all the fluorescence data (blanks, standards, samples) were normalized to the intensity of pure water Raman scatter peak at λEx/λEm: 275/303 nm, used as internal standard. This value varied by 10% (n = 100) over the study period. 2) Sample EEMs were corrected for inner filtering effects by multiplying each EEM by a correction matrix calculated for each wavelength pair from the sample absorbance, assuming a 0.5 cm pathlength of Ex and Em light in a 1 cm cuvette. [START_REF] Ohno | Fluorescence Inner-Filtering Correction for Determining the Humification Index of Dissolved Organic Matter[END_REF][START_REF] Murphy | Measurement of dissolved organic matter fluorescence in aquatic environments: an interlaboratory comparison[END_REF] 3) Sample EEMs were blank corrected by subtracting the pure water EEM. 4) Sample EEMs were converted into quinine sulphate unit (QSU), 1 QSU corresponding to the fluorescence of 1 µg l -1 quinine sulphate in 0.05 M H 2 SO 4 at λEx/λEm: 350/450 nm (5 nm slit widths). [START_REF] Coble | Characterization of marine and terrestrial DOM in seawater using excitationemission matrix spectroscopy[END_REF] The conversion in QSU was made by dividing each EEM fluorescence data by the slope of the quinine linear regression. The detection and quantification limits of the fluorescence measurement were 0.10 and 0.40 QSU, respectively.
The water Raman scatter peak was integrated from λEm 380 to 426 nm at λEx 350 nm for 70 ultrapure water samples. The average value was used to establish the conversion factor between QSU and Raman unit (RU, nm -1 ), based on the Raman-area normalized slope of the quinine linear regression. [START_REF] Murphy | Measurement of dissolved organic matter fluorescence in aquatic environments: an interlaboratory comparison[END_REF] The conversion factor was 0.014 RU QSU -1 .
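A minimal numpy sketch of the correction sequence described above (the function and variable names, array layout and interpolation details are assumptions made for illustration, not the processing code actually used in this study):

import numpy as np

def correct_eem(eem, blank_eem, abs_wl, abs_a, ex_nm, em_nm,
                raman_intensity, quinine_slope, pathlength_cm=0.5):
    """Apply, in order, the corrections listed above to one EEM
    (2-D array indexed [excitation, emission]); all spectra share common axes.
    abs_wl / abs_a: absorbance spectrum (wavelengths in nm, absorbance per cm)."""
    # 1) normalise sample and blank to the pure-water Raman peak intensity
    eem = eem / raman_intensity
    blank_eem = blank_eem / raman_intensity
    # 2) inner-filter-effect correction: multiply by 10**(0.5 cm * (A_ex + A_em))
    a_ex = np.interp(ex_nm, abs_wl, abs_a)
    a_em = np.interp(em_nm, abs_wl, abs_a)
    eem = eem * 10.0 ** (pathlength_cm * (a_ex[:, None] + a_em[None, :]))
    # 3) subtract the ultrapure-water blank EEM
    eem = eem - blank_eem
    # 4) convert to quinine sulphate units (QSU) using the quinine calibration slope
    eem_qsu = eem / quinine_slope
    # optional conversion to Raman units with the factor reported above
    return eem_qsu, eem_qsu * 0.014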
Biogeochemical and microbiological analyses
DOC was measured on 2 replicates by high-temperature catalytic oxidation using a Shimadzu TOC 5000 Total Carbon Analyzer. [START_REF] Sohrin | Seasonal variation in total organic carbon in the Northeast Atlantic in 2000-2001[END_REF] The accuracy and system blank of the instrument were determined by the analysis of reference material including deep Atlantic water and low carbon water reference standards (D. Hansell, Rosenstiel School of Marine and Atmospheric Science, Miami, USA). The nominal analytical precision of the measurement was within 2%.
Analyses of PC were undertaken with a LECO SC144 Carbon/Sulphur Analyzer. The filters were weighed in ceramic nacelles, heated at 1350 °C under oxygen stream, and the resulting CO 2 was measured by infrared detection. So, PC measured here corresponds to the sum of inorganic + organic particulate carbon. Procedural blank value, given by the analysis of pre-combusted GF/F filters, was ~ 0.75 µM (n = 6). NO 3 -, NO 2 -and PO 4 3-were analyzed using automated colorimetric method. [START_REF] Aminot | Hydrologie des écosystèmes marins. Paramètres et analyses[END_REF][START_REF] Aminot | Dosage automatique des nutriments dans les eaux marines : méthodes en flux continu[END_REF] The detection limits were 0.05 µM for NO 3 -and NO 2 -, and 0.02 µM for PO 4 3-.
Escherichia (E.) coli and enterococci were enumerated by using the most-probable-number statistical tests from the microtitration plate method in its normalized version (ISO 9308-3). This method is based upon the bacterial hydrolysis of 4-methylumbelliferyl-β-D-glucuronide, which produces a blue fluorescent product (4-methylumbelliferone) detectable under a UV lamp. [START_REF] Hernandez | Evaluation of a miniaturized procedure for enumeration of Escherichia coli in sea water, based upon hydrolysis of 4-methylumbelliferyl β-dglucuronide[END_REF]
Parallel factor analysis (PARAFAC)
PARAFAC is a multi-way statistical method based on an alternating least square algorithm and used to decompose the complex EEM signal measured into its underlying individual fluorescent profiles (components). [START_REF] Bro | PARAFAC: tutorial and applications[END_REF] In this study, a PARAFAC model was created and validated for 64 EEMs according to the method by [START_REF] Stedmon | Tracing dissolved organic matter in aquatic environments using a new approach to fluorescence spectroscopy[END_REF][START_REF] Stedmon | Characterizing dissolved organic matter fluorescence with parallel factor analysis: a tutorial[END_REF] . Three outliers were initially present in the dataset and were removed. The EEM wavelength ranges used were 230-500 and 290-550 nm for Ex and Em, respectively. EEMs were thus merged into a threedimensional data array of the form: 64 samples × 55 λEx × 131 λEm. PARAFAC was executed using the DOMFluor toolbox v1.6. [START_REF] Stedmon | Characterizing dissolved organic matter fluorescence with parallel factor analysis: a tutorial[END_REF] running under MATLAB ® 7.10.0 (R2010a).
The validation of the PARAFAC model (running with the non-negativity constraint) and the determination of the correct number of components were achieved through the examination of residuals, the split half analysis and the random initialization. The fluorescence intensities of each PARAFAC component are given in QSU.
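For illustration only, the sketch below reproduces the same decomposition idea in Python with the tensorly package (assumed available); the study itself used the DOMFluor toolbox under MATLAB, and the outlier removal, scatter handling and split-half validation steps are not shown here.

import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# Corrected EEMs arranged as a 3-way array (samples x excitation x emission),
# here a random stand-in with the dimensions reported above (64 x 55 x 131).
eems = np.random.rand(64, 55, 131)

cp = non_negative_parafac(tl.tensor(eems), rank=5, n_iter_max=500, init="random")
scores, ex_loadings, em_loadings = cp.factors
print(scores.shape, ex_loadings.shape, em_loadings.shape)  # (64, 5) (55, 5) (131, 5)
# scores: component intensities per sample; ex/em loadings: component spectra.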
Other statistical analyses
Linear regressions, comparisons of groups and box-and-whisker plots were carried out with StatView 5.0 and XLSTAT 2010.2. Mann-Whitney non-parametric tests (U-tests) were preferred to analyses of variance for the comparison of two independent data groups because of the non-normal distributions (normality assessed with the Kolmogorov-Smirnov test) and the low number of samples in some groups. For the different analyses and tests, the significance threshold was set at p < 0.05.
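A minimal sketch of these group comparisons using scipy (the intensity values are made up; the original analyses were run with StatView and XLSTAT):

from scipy import stats

# made-up total fluorescence intensities (QSU) at one station
spring_summer = [129, 142, 101, 118, 155]
autumn_winter = [67, 45, 98, 52, 71, 39]

# normality check (Kolmogorov-Smirnov on standardised data), then the
# non-parametric Mann-Whitney U-test with the 0.05 significance threshold
for name, data in (("spring/summer", spring_summer), ("autumn/winter", autumn_winter)):
    print(name, "KS p =", round(stats.kstest(stats.zscore(data), "norm").pvalue, 3))

u, p = stats.mannwhitneyu(spring_summer, autumn_winter, alternative="two-sided")
print("Mann-Whitney U =", u, " p =", round(p, 4),
      "significant" if p < 0.05 else "not significant")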
Results
Spectral characteristics and identification of PARAFAC components
Five components (C1-C5) were identified by the PARAFAC model validated on 64 EEM samples. The spectral characteristics of C1-C5 are reported in Fig. 2. These components exhibited one or two Ex maxima and one Em maximum (i.e. one or two fluorescence peaks).
In the classification [START_REF] Coble | Characterization of marine and terrestrial DOM in seawater using excitationemission matrix spectroscopy[END_REF] , C1 (λEx/λEm: < 230, 275/306 nm) and C2 (λEx/λEm: < 230, 270/346 nm) corresponded to protein-like fluorophores. C1 had Ex and Em maxima analogous to those of tyrosine amino-acid (peaks B), whereas C2 had Ex and Em maxima similar to those of tryptophan amino-acid (peaks T). C3 (λEx/λEm: 280/386 nm) was consistent with marine humic-like fluorophore, named peak M. [START_REF] Coble | Characterization of marine and terrestrial DOM in seawater using excitationemission matrix spectroscopy[END_REF] C4 (λEx/λEm: 235, 340/410 nm) and C5
(λEx/λEm: 255, 365/474 nm) corresponded to humic-like fluorophores. In the classification [START_REF] Coble | Characterization of marine and terrestrial DOM in seawater using excitationemission matrix spectroscopy[END_REF] ,
their two fluorescence peaks are referred to as peak A and peak C. According to the literature, these protein- and humic-like materials may have different origins in the aquatic environment: autochthonous, terrestrial or anthropogenic. The attribution of potential origins to the fluorophores identified in this work is provided in the discussion section.
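As a rough illustration of this peak-picking logic, the helper below maps a component's secondary excitation maximum and its emission maximum to an approximate Coble peak; the wavelength thresholds are assumptions chosen only to reproduce the assignments made in this section, not a general classification.

def coble_peak(ex_nm, em_nm):
    """Approximate Coble peak for a fluorophore given its (secondary) excitation
    maximum and its emission maximum; thresholds are rough assumptions."""
    if em_nm <= 320:
        return "B (tyrosine-like, protein-like)"
    if em_nm <= 380:
        return "T (tryptophan-like, protein-like)"
    if em_nm <= 420 and ex_nm <= 320:
        return "M (marine humic-like)"
    return "A/C (humic-like)"

for name, ex, em in [("C1", 275, 306), ("C2", 270, 346), ("C3", 280, 386),
                     ("C4", 340, 410), ("C5", 365, 474)]:
    print(name, "->", coble_peak(ex, em))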
Spatial distribution of environmental parameters
The distribution of environmental parameters along the stations for the entire study period is shown in Fig. 3. T, S and Chla, derived from the CTD profiler, were recorded at 2 m depth, whereas all the other parameters were measured in the subsurface water (SSW). T and Chla concentration, which ranged from 13.7 ± 0.9 (Cort3) to 16.8 ± 3.1 °C (Cort0) and from 0.9 ± 1.0 (Cort0) to 1.9 ± 1.4 µg l -1 (Cort1), respectively, displayed no significant differences among the stations (Fig. 3a,c). S ranged between 37.5 ± 0.3 (Cort2) and 38.0 ± 0.1 (Sofcom) and tended to increase from Cort0 towards Sofcom (Fig. 3b). It should be noted that refractometric measurements made on discrete SSW samples revealed that S around the effluent outlet was actually lower in the SSW than at 2 m depth, with values of ~ 30. DOC, PC and nutrient concentrations decreased from Cort0 towards Sofcom (Fig. 3d,e); nutrient concentrations fell to 1.5 ± 0.9 µM and from 2.5 ± 1.2 to 0.05 ± 0.02 µM, respectively (Fig. 3f,g).
According to the new European Directive (n° 2006/7/CE), in terms of fecal indicators, the water quality was excellent or good at Sofcom, excellent, good, bad or very bad at Cort4 and Cort3, and very bad at Cort2-Cort0.
Spatial distribution of PARAFAC components
The distribution of PARAFAC components in the SSW along the stations for the entire study period is displayed in Fig. 4. As observed for environmental parameters, total fluorescence intensity (sum of fluorescence intensities of the five components) significantly decreased from Cort0 (95 ± 43 QSU) to Sofcom (6 ± 4 QSU) (Fig. 4a). Shifts in the relative abundances of components also emerged from Cort0 to Sofcom. Relative abundance of protein-like materials C1 and C2 declined from 25 ± 2 (Cort2) to 10 ± 13% and from 37 ± 6 to 21 ± 18%, respectively (Fig. 4b,c). Relative abundance of humic-like materials C3 and C5 exhibited an inverse pattern, with increasing values from 8 ± 1 to 25 ± 19% and from 10 ± 2 to 23 ± 18%, respectively (Fig. 4d,f). On the other hand, the contribution of C4 within the FDOM pool, which ranged from 21 ± 1 (Cort2) to 27 ± 10% (Cort4), showed no significant variations along the stations (Fig. 4e). Hence, at Cort0-Cort4 stations, the major fluorophore was tryptophan-like C2 (30-37%), followed by humic-like C4 (21-27%) and tyrosine-like C1. Besides relative abundances, we determined total FDOM intensity/DOC concentration and tryptophan (C2) intensity/DOC concentration ratios. These ratios tended to decrease from the effluent outlet to offshore, with on average 0.608 ± 0.129 and 0.224 ± 0.039 QSU µM -1 , respectively at Cort0-Cort2, and 0.215 ± 0.093 and 0.081 ± 0.056 QSU µM -1 , respectively at Cort3-Cort4. At Sofcom, these ratios were lower, with on average 0.103 ± 0.059 and 0.023 ± 0.021 QSU µM -1 , respectively. This shows that DOM was much more fluorescent around the effluent outlet than in offshore marine waters.
Seasonal distribution of PARAFAC components
The distribution of PARAFAC components in the SSW with respect to periods spring + summer (samples collected from April to September) and autumn + winter (samples collected from October to March) is depicted in Fig. 5 for Cort0 and Sofcom stations. Our choice to gather spring with summer data and autumn with winter data for seasonal comparisons was motivated by the fact that 1) our number of samples for each season was too low for statistical comparisons, 2) the repartition of samples within these two periods was homogenous, and 3) the meteorological and hydrological conditions presented clear patterns between these two periods, with colder temperatures, more rain events, and strong winds and subsequent water column mixing in autumn/winter, and warmer and drier weather associated to water column stratification in spring/summer (data not shown). [START_REF] Pairaud | Hydrology and circulation in a coastal area off Marseille: Validation of a nested 3D model with observations[END_REF] At Cort0, total fluorescence intensity was much higher in spring/summer (129 ± 25 QSU) than in autumn/winter (67 ± 34 QSU) (Fig. 5a), whilst the FDOM composition remained relatively stable with no significant differences in relative abundances between the two periods (C1: 21 ± 3%, C2: 36 ± 5%, C3: 8 ± 1%, C4:
23 ± 3%, C5: 10 ± 2%) (Fig. 5b-f). As for Cort0, total fluorescence intensity at Sofcom was significantly higher in spring/summer (11 ± 6 QSU) than in autumn/winter (4 ± 3 QSU) (Fig. 5a). However, contrary to Cort0, the FDOM composition was highly variable throughout the two periods, with higher relative abundances for C1 and C2 in spring/summer (22 ± 20 and 32 ± 18%, respectively) than in autumn/winter (5 ± 13 and 10 ± 15%, respectively), and higher ones for C3 and C5 in autumn/winter (29 ± 15 and 33 ± 16%, respectively) than in spring/summer (13 ± 8 and 11 ± 7%, respectively), while the contribution of C4 remained highly variable (~ 23 ± 15%) (Fig. 5b-f). At Cort4, the seasonal distribution of PARAFAC components in the SSW was similar to that of Sofcom, apart from C2, whose abundance did not significantly vary between spring/summer and autumn/winter (figure not shown).
Discussion
FDOM in the coastal marine waters not impacted by the Marseilles sewage effluent
Sofcom station (central Bay of Marseilles) presented the lowest values for the FDOM intensities (Fig. 4a) and for the DOC, PC, nutrient and fecal bacteria concentrations, whereas it exhibited the highest and most stable S values (Fig. 3b,d-h). Surface salinities as well as DOC and nutrient concentrations measured at Sofcom were typical of the northwestern Mediterranean Sea. [START_REF] Brasseur | Seasonal temperature and salinity fields in the Mediterranean Sea: climatological analyses of a historical data set[END_REF][START_REF] Marty | Seasonal and interannual dynamics of nutrients and phytoplankton pigments in the western Mediterranean Sea at the DYFAMED time-series station (1991-1999[END_REF][START_REF] Goutx | Short term summer to autumn variability of dissolved lipid classes in the Ligurian sea (NW Mediterranean)[END_REF] We conclude that Sofcom was not influenced by the Marseilles SE during our sampling period. A spreading of the Marseilles SE dilution plume in the central part of the Bay was not really expected with regard to the SE flow rates and the hydrodynamic conditions. [START_REF] Arfi | Impact du grand émissaire de Marseille et de l'Huveaune détournée sur l'environnement marin de Cortiou -Etude bibliographique raisonnée 1960-2000[END_REF][START_REF] Pairaud | Hydrology and circulation in a coastal area off Marseille: Validation of a nested 3D model with observations[END_REF] Nevertheless, we might have expected an impact of the Marseilles SE at Sofcom through the transport of isolated water lenses. Indeed, the general instability within the superficial layer in the vicinity of the SE outlet may lead to the formation of individualised less salty water lenses, which may be transported over greater distances and persist longer in the Bay. [START_REF] Arfi | Impact du grand émissaire de Marseille et de l'Huveaune détournée sur l'environnement marin de Cortiou -Etude bibliographique raisonnée 1960-2000[END_REF] SE-derived less salty water lenses were thus not present at Sofcom during the period investigated.
FDOM in these coastal marine waters not impacted by the Marseilles SE showed a marked seasonal pattern in terms of intensity and composition that may reflect the production/degradation processes of autochthonous organic matter. In spring/summer, the higher total fluorescence intensity and the higher contribution of protein-like materials within the FDOM pool (Fig. 5a-c) were very likely associated to primary production. Indeed, tyrosine-(C1) and tryptophan-like (C2) fluorophores are known to be released from phytoplankton activity and are considered as fresh/labile bioavailable products. [START_REF] Parlanti | Dissolved organic matter fluorescence spectroscopy as a tool to estimate biological activity in a coastal zone submitted to anthropogenic inputs[END_REF][START_REF] Romera-Castillo | Production of chromophoric dissolved organic matter by marine phytoplankton[END_REF][START_REF] Yamashita | Fluorescence characteristics of dissolved organic matter in the deep waters of the Okhotsk Sea and the northwestern North Pacific Ocean[END_REF][START_REF] Lønborg | Assessing the microbial bioavailability and degradation rate constants of dissolved organic matter by fluorescence spectroscopy in the coastal upwelling system of the Ría de Vigo[END_REF] Tyrosine-like material, whose fluorescence can be easily quenched by nearby tryptophan because of energy transfer effects, [START_REF] Lakowicz | Principles of fluorescence spectroscopy[END_REF] generally has fluorescence intensities lower than those of tryptophan-like fluorophore in marine waters, which is consistent with our results.
The lower total fluorescence intensity and the higher relative contribution of humic-like fluorophores C3 and C5 in autumn/winter (Fig. 5a,d,f) would result from the decrease in primary production (i.e. decrease in the production of protein-like materials) due to temperature decline and water column mixing. Actually, humic-like compounds C3-C5 may be produced by marine microbial communities during organic matter degradation processes. [START_REF] Yamashita | In situ production of chromophoric dissolved organic matter in coastal environments[END_REF][START_REF] Lønborg | Production of bioavailable and refractory dissolved organic matter by coastal heterotrophic microbial populations[END_REF][START_REF] Shimotori | Fluorescence characteristics of humic-like fluorescent dissolved organic matter produced by various taxa of marine bacteria[END_REF] Interestingly, marine humic-like fluorophore (C3) can be also derived from phytoplankton, as recently demonstrated by. [START_REF] Romera-Castillo | Production of chromophoric dissolved organic matter by marine phytoplankton[END_REF] According to red versus blue shifts in the Ex and Em spectra (Fig. 2), humic-like C5, would be more aromatic than C4 and could correspond to the most biorefractory and oldest material resulting from the microbial degradation of autochthonous organic matter. This humic-like material could originate in part from deep ocean water and be transported to surface waters via upwelling. [START_REF] Yamashita | Fluorescence characteristics of dissolved organic matter in the deep waters of the Okhotsk Sea and the northwestern North Pacific Ocean[END_REF] Besides marine sources, humic-like materials C4 and C5 are well known to have a terrestrial origin with the microbial degradation of higher plants/soil organic matter. [START_REF] Coble | Characterization of marine and terrestrial DOM in seawater using excitationemission matrix spectroscopy[END_REF][START_REF] Stedmon | Tracing dissolved organic matter in aquatic environments using a new approach to fluorescence spectroscopy[END_REF][START_REF] Parlanti | Dissolved organic matter fluorescence spectroscopy as a tool to estimate biological activity in a coastal zone submitted to anthropogenic inputs[END_REF][START_REF] Singh | Chromophoric dissolved organic matter (CDOM) variability in Barataria Basin using excitation-emission matrix (EEM) fluorescence and parallel factor analysis (PARAFAC)[END_REF] Hence, a terrestrial source for these fluorophores in the central Bay of Marseilles cannot be excluded although it should be minor compared to the autochthonous one as regards the important dilution effect of non point and point terrestrial inputs from the coast. [START_REF] Para | Fluorescence and absorption properties of chromophoric dissolved organic matter (CDOM) in coastal surface waters of the northwestern Mediterranean Sea, influence of the Rhône River[END_REF] In addition, humic-like fluorophores, which efficiently absorb natural UV radiation, are known to be subjected to photodegradation processes in surface waters. Consequently, photodegradation may be a significant sink for these compounds in summer in the Bay of Marseille, as already proposed by [START_REF] Para | Fluorescence and absorption properties of chromophoric dissolved organic matter (CDOM) in coastal surface waters of the northwestern Mediterranean Sea, influence of the Rhône River[END_REF] .
This decoupling between the processes driving the distribution of protein-like fluorophores (phytoplankton production, microbial degradation) and humic-like materials (microbial production, terrestrial inputs, photodegradation) in the central Bay of Marseilles is illustrated by the correlation coefficients (r) presented in Table 2. Significant linear regressions were observed between C1 and C2 (r = 0.63) and between C3, C4 and C5 (r = 0.53-0.76), while no significant correlations were found between protein- and humic-like compounds (r = 0.02-0.40).
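A minimal sketch of such a pairwise correlation test with scipy (made-up intensity series; the actual analyses used StatView/XLSTAT as described in the methods):

from scipy import stats

# made-up intensity series (QSU) for two components over eight sampling dates
c1 = [2.1, 3.4, 1.0, 4.2, 2.8, 0.9, 3.0, 1.7]
c2 = [4.0, 5.8, 2.5, 7.1, 5.0, 2.0, 5.5, 3.1]

r, p = stats.pearsonr(c1, c2)
print(f"r = {r:.2f}, p = {p:.4f}",
      "(significant at p < 0.05)" if p < 0.05 else "(not significant)")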
FDOM in the coastal marine waters impacted by the Marseilles sewage effluent
Considering that Cort0 and Sofcom are the two "end-member marine stations", i.e. strongly impacted and not impacted by the Marseilles SE, respectively, the SE plume would not extend further than Cort 1 (100 m from the outlet) or Cort2 (450 m from the outlet) based on environmental parameters (Fig. 3d-h). Alternatively, if based on FDOM data, the extent of the Marseilles SE may be seen up to Cort4 (1500 m from the outlet), which presented an intermediary fluorescence signature between Sofcom and the other Cortiou stations (for fluorophores C2, C3 and C5) (Fig. 4a,c,d,f).
FDOM in the coastal marine waters strongly impacted by the Marseilles SE (Cort0, 40 m from the outlet) presented a marked seasonal trend in intensity, while its composition remained rather stable (Fig. 5a-f). In fact, the higher FDOM amount recorded in spring/summer could not be explained by a higher SE flow rate, and this for two main reasons. First, since sampling was conducted only under dry weather, no increases in the Marseilles SE flow rate took place on sampling days due to rain events. The latter occurred prior to the dry sampling days, particularly in the autumn/winter (wet season). However, because of the short residence time of Cortiou waters (two days), the "rain memory effect" was very likely negligible. Secondly, the flow rate of secondary-treated domestic sewages, which is directly related to the activity patterns of the urban population, did not present any seasonal features (SERAM data). Other processes may account for the observed FDOM intensity increase during the spring/summer season. Enhanced bacterial and phytoplankton productions in spring/summer may occur in response to increasing temperature and light in this organic matter-and nutrient-enriched area and contribute to the higher FDOM signal measured in that period. Concomitantly, the more intense wind conditions that prevail in the autumn/winter period may lead to a decrease in the SE-derived FDOM by enhancing the mixing and the dilution with seawater. [START_REF] Arfi | Impact du grand émissaire de Marseille et de l'Huveaune détournée sur l'environnement marin de Cortiou -Etude bibliographique raisonnée 1960-2000[END_REF] The FDOM composition in the marine waters influenced by the Marseilles SE, constant at the seasonal level, was characterized by the dominance of tryptophan-like fluorophore. In SE-impacted natural waters, tryptophan-like compound, usually well correlated to biological oxygen demand, originates from sewage microbial activity. In fact, it would be a biological product of the microbial community (a product of bacterial metabolism) and/or a bioavailable substrate consumed by the latter (energy source). [START_REF] Yamashita | Chemical characterization of protein-like fluorophores in DOM in relation to aromatic amino acids[END_REF][START_REF] Elliott | Characterisation of the fluorescence from freshwater, planktonic bacteria[END_REF][START_REF] Hudson | Can fluorescence spectrometry be used as a surrogate for the biochemical oxygen demand (BOD) test in water quality assessment? An example from South West England[END_REF][START_REF] Yamashita | In situ production of chromophoric dissolved organic matter in coastal environments[END_REF] Typically, the fluorescence intensity of tryptophan considerably decreases through the SE treatment processes, i.e. from raw to treated effluents. [START_REF] Henderson | Fluorescence as a potential monitoring tool for recycled water systems: A review[END_REF] Tyrosine-like fluorophore, although less frequently observed that tryptophan-like material in SEs, may also come from sewage microbial activity. 
[START_REF] Baker | Detecting river pollution using fluorescence spectrophotometry: case studies from the Ouseburn, NE England[END_REF][START_REF] Ahmad | Monitoring of water quality using fluorescence technique: prospect of on-line process control[END_REF][START_REF] Murphy | Organic matter fluorescence in municipal water recycling schemes: toward a unified PARAFAC model[END_REF] Humic-like fluorophores C4 and C5 have been found to be of terrestrial origin, coming from higher plants/soils organic matter [START_REF] Stedmon | Tracing dissolved organic matter in aquatic environments using a new approach to fluorescence spectroscopy[END_REF][START_REF] Parlanti | Dissolved organic matter fluorescence spectroscopy as a tool to estimate biological activity in a coastal zone submitted to anthropogenic inputs[END_REF][START_REF] Singh | Chromophoric dissolved organic matter (CDOM) variability in Barataria Basin using excitation-emission matrix (EEM) fluorescence and parallel factor analysis (PARAFAC)[END_REF] but have been also detected in diverse SEs, where they could be microbiologically produced during organic matter degradation processes. [START_REF] Saadi | Monitoring of effluent DOM biodegradation using fluorescence, UV and DOC measurements[END_REF][START_REF] Hudson | Can fluorescence spectrometry be used as a surrogate for the biochemical oxygen demand (BOD) test in water quality assessment? An example from South West England[END_REF][START_REF] Stedmon | Resolving the variability in dissolved organic matter fluorescence in a temperate estuary and its catchment using PARAFAC analysis[END_REF] Accordingly, at Cort0-Cort4, humic-like components C4 and C5 could come from both constituents of the Marseilles SE: secondary-treated domestic sewages and pretreated Huveaune waters. Nevertheless, EEM measurements conducted on Huveaune waters revealed high intensities for humic-like components C4 and C5 (data not shown),
suggesting that these fluorophores would be rather issued from Huveaune waters than from domestic sewages. Marine humic-like fluorophore C3, derived from microbial activity, has been observed in marine waters [START_REF] Coble | Characterization of marine and terrestrial DOM in seawater using excitationemission matrix spectroscopy[END_REF][START_REF] Yamashita | Fluorescence characteristics of dissolved organic matter in the deep waters of the Okhotsk Sea and the northwestern North Pacific Ocean[END_REF][START_REF] Kowalczuk | Characterization of dissolved organic matter fluorescence in the South Atlantic Bight with use of PARAFAC model: Interannual variability[END_REF] , lakes [START_REF] Zhang | Characteristics and sources of chromophoric dissolved organic matter in lakes of the Yungui Plateau, China, differing in trophic state and altitude[END_REF] , estuaries [START_REF] Fellman | Source, biogeochemical cycling, and fluorescence characteristics of dissolved organic matter in an agro-urban estuary[END_REF] , rivers [START_REF] Fellman | Fluorescence spectroscopy opens new windows into dissolved organic matter dynamics in freshwater ecosystems: A review[END_REF] and more recently in SEs. [START_REF] Murphy | Organic matter fluorescence in municipal water recycling schemes: toward a unified PARAFAC model[END_REF] When looking at Table 2, we observe that the five PARAFAC components are highly correlated (r = 0.93-0.99) and that these fluorophores are well correlated to DOC, PC, nutrients and fecal bacteria (r = 0.64-0.91), contrary to what is found in the non impacted waters (Sofcom). This suggests that all these parameters co-vary due to a common source.
However, although the correlation coefficients are all significant, we can see that those related to the tryptophan-like fluorophore generally present the highest values. This is the case with salinity, DOC, PC, phosphates and fecal bacteria (Table 2). So, since tryptophan-like material is the most abundant FDOM fluorophore in the waters impacted by the Marseilles SE and since it displays the highest correlations with environmental parameters (organic carbon, nutrients and fecal bacteria), it may be considered a good index of SE inputs.
To track SE contaminations in rivers, estuaries and recycled water systems, several studies proposed to use the tryptophan-(peak T)/humic-like (peak C) fluorophore intensity ratio. When the latter is > 1, it reflects the presence of DOM heavily impacted by sewage inputs. In our case, the tryptophan-(C2)/humic-like (C3-C5) fluorophore intensity ratios did not show any good correlation with environmental parameters (data not shown).
Therefore, the use of these ratios is not relevant for the Bay of Marseilles, where it seems much more appropriate to use only the intensity of tryptophan to track sewage-derived DOM.
Indeed, our results indicate a tryptophan intensity value (6.0 QSU) above which we may consider that marine waters are impacted by the Marseilles SE. In the waters strongly impacted (Cort0, Cort1), 100% of samples presented a tryptophan intensity > 6.0 QSU (intensity range for both stations: 12-52 QSU). At Cort2, this percentage decreased to 80% (intensity range: 4.3-27 QSU). In the waters weakly impacted (Cort4), it was 40% (intensity range: 0.0-11.8 QSU). Finally, in the waters not impacted by the Marseilles SE (Sofcom), 100% of samples had a tryptophan intensity < 6.0 QSU (intensity range: 0.0-5.3 QSU).
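As an illustration of how such an index can be applied, a minimal sketch (Python; the intensity values in the example are hypothetical, and the 6.0 QSU threshold is the one proposed above) of the flagging of SE-impacted samples:

```python
import numpy as np

TRYPTOPHAN_THRESHOLD_QSU = 6.0  # empirical threshold proposed above

def flag_sewage_impact(tryptophan_qsu):
    """Flag samples whose tryptophan-like intensity exceeds the threshold."""
    intensities = np.asarray(tryptophan_qsu, dtype=float)
    impacted = intensities > TRYPTOPHAN_THRESHOLD_QSU
    return impacted, 100.0 * impacted.mean()

# Hypothetical tryptophan-like intensities (QSU) for a set of samples at one station
flags, percent_impacted = flag_sewage_impact([12.0, 4.3, 27.0, 5.1])
print(flags, f"{percent_impacted:.0f}% of samples above threshold")
```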
Consequently, as pointed out here for the Mediterranean Sea, higher tryptophan fluorescence values relative to those derived from autochthonous biological activity may be a sign of urban sewage inputs in coastal marine waters. For instance, [START_REF] Tedetti | Characterization of dissolved organic matter in a coral reef ecosystem subjected to anthropogenic pressures (La Réunion Island, Indian Ocean) using multi-dimensional fluorescence spectroscopy[END_REF] observed that a station displayed higher tryptophan-like fluorescence intensities (75 QSU) compared to other ones (~ 15 QSU) in reef waters of La Réunion Island (Indian Ocean). The authors showed that this station was influenced by river waters collecting different urban and agricultural SEs in which the tryptophan signal was extremely high (430 QSU). Similarly, [START_REF] Petrenko | Effects of a sewage plume on the biology, optical characteristics, and particle size distributions of coastal waters[END_REF] attributed the higher tryptophan-like signal recorded with an in situ SAFire fluorometer (WETLabs, Inc.) to sewage plumes in Hawaii coastal waters.
(17-25%), humic-like C5 (9-13%) and marine humic-like C3 (8-11%) being the minor fluorophores. In contrast, Sofcom was characterized by a relatively equal contribution of fluorophores C2-C4 (21-25%) with a lower abundance for fluorophore C1 (10%). The ranges of relative abundances of the PARAFAC components increased from the effluent outlet to off shore, while the opposite pattern was observed for the total fluorescence intensity. It is worth noting that for Cort4 and Sofcom, no significant differences were found between samples collected in the SSW, at 5 m, 20 m and 55 m depth, neither in total fluorescence intensity, nor in relative abundances, when the entire study period is taken into account (Figure not shown). For Cort4 and Sofcom, no significant differences were found between samples collected in the SSW, at 5 m, 20 m and 55 m depth, when considering separately the spring/summer and autumn/winter seasons (Figure not shown). Total FDOM/DOC and tryptophan/DOC ratios did not display any seasonal variations at the Cort0 and Sofcom stations (data not shown).

Conclusions
This study underscores the fluorescence properties of DOM in coastal Mediterranean waters influenced by a municipal sewage effluent (SE). A unique PARAFAC model was validated for an EEM dataset of samples strongly impacted, weakly impacted or not at all impacted by the Marseilles SE. Thus, although of different origin (SE-derived or marine autochthonous) and governed by different processes, DOM in the two end-members of this coast-open sea transect (Cort0 and Sofcom) presented the same protein- and humic-like fluorophores. The latter were those recurrently observed in various freshwater and marine environments. Despite the highly heterogeneous character of DOM in SEs, the PARAFAC model did not reveal any atypical fluorescence signatures. It appeared that fluorescence was a much more pertinent tool than organic carbon and nutrients for detecting the SE plume in the Bay of Marseilles, by allowing its extent to be seen up to 1500 m offshore. We propose to use the tryptophan fluorophore intensity to track sewage pollution in coastal marine waters. This work has been conducted under dry weather conditions, and it would be necessary in the future to evaluate the impact of the Marseilles SE on the FDOM intensity and composition during rainfall events.
Figure captions
Figure 1.
Figure 2. Spectral characteristics of the five components (C1-C5) validated by the
Figure 3. Box-and-whisker plots of the environmental parameters measured at 2 m depth (T,
Figure 4. Box-and-whisker plots of the PARAFAC components for the subsurface water
Figure 5. Box-and-whisker plots of the PARAFAC components (total fluorescence and
Table 1. Characteristics of the study sites, located in the Bay of Marseilles (northwestern Mediterranean Sea, France) and sampled from September 2008 to June 2010.
Stations   Position   Site depth   Distance from the sewage effluent outlet   Sampling depth
Cort0 43°12.8'N, 5°24.1'E 10 m 40 m SSW
Cort1 43°12.7'N, 5°24.1'E 20 m 100 m SSW
Cort2 43°12.6'N, 5°24.1'E 30 m 450 m SSW
Cort3 43°12.4'N, 5°24.0'E 50 m 850 m SSW
Cort4 43°12.0'N, 5°24.0'E 55 m 1500 m SSW, 5 m, 20 m, 55 m
Sofcom 43°14.3'N, 5°17.3'E 55 m Remote SSW, 5 m, 20 m, 55 m
Sampling dates: 23/09/08, 14/10/08, 14/11/08, 25/11/08, 19/02/09, 03/06/09, 23/06/09, 25/01/10,
08/04/10, 11/06/10, 16/06/10.
SSW: subsurface water (0.1 m depth).
Table 2. Pearson's correlation coefficients (r) of linear regressions between the fluorescence intensities of the five PARAFAC components (C1-C5, in QSU) and the environmental parameters for the stations impacted (Cort0-Cort4) and not impacted (Sofcom) by the Marseilles sewage effluent.
C2   C3   C4   C5   T   S   Chla   DOC   PC   NO3-+NO2-   PO43-   E. coli + entero.
Cort0-Cort4
C1 0.98 0.93 0.95 0.93 -0.01 -0.34 -0.33 0.86 0.77 0.91 0.88 0.76
C2 0.94 0.96 0.94 -0.07 -0.36 -0.32 0.89 0.78 0.90 0.90 0.78
C3 0.98 0.99 -0.13 -0.25 -0.32 0.84 0.67 0.89 0.83 0.73
C4 0.99 -0.14 -0.30 -0.31 0.84 0.69 0.90 0.85 0.74
C5 -0.13 -0.25 -0.34 0.81 0.64 0.90 0.81 0.71
n 41 41 41 41 38 38 38 21 24 11 11 24
Sofcom
C1 0.63 0.02 0.33 0.02 0.37 -0.31 -0.05 -0.04 0.13 nd nd nd
C2 0.17 0.40 -0.07 0.47 -0.43 0.14 0.23 0.27 nd nd nd
C3 0.53 0.71 0.38 0.46 0.21 0.59 -0.04 nd nd nd
C4 0.76 0.26 -0.02 0.00 0.11 0.37 nd nd nd
C5 0.21 0.46 -0.02 0.33 -0.07 nd nd nd
n 23 23 23 23 22 22 22 14 18 3 3 4
n: number of observations for each regression; nd: correlation coefficient not determined because of the too low number of observations.
Correlation coefficients in bold are significant (p < 0.05).
T: temperature (° C); S: salinity; Chla: chlorophyll a concentration (µg l -1 ); DOC: dissolved organic carbon concentration (µM); PC: particulate
carbon concentration (µM); NO 3 -+ NO 2 -: nitrate + nitrite concentration (µM); PO 4 3-: phosphate concentration (µM); E. coli + entero.: Escherichia coli + enterococci concentration [(colony forming units (CFU) 100 ml -1 ].
Acknowledgements. We are grateful to the captain and crew of the R/V Antédon II, and to the Service d'Observation of the Oceanology Center of Marseilles, managed by P. Raimbault, for their excellent cooperation. We acknowledge J.-F. Ghiglione, C. Sauret and C. Dumas for their assistance during the field work. We express gratitude to C. Stedmon for his valuable help in the use of the DOMFluor Toolbox. We warmly thank N. Van Den Broeck and M. Valmassoni from Surfrider Foundation Europe -Mediterranean coordination for fecal bacteria analyses. We thank B. Charrière for the DOC analyses and D. Lefèvre for the use of the spectrophotometer. We acknowledge D. Laplace (SERAM) for his enlightenments about the functioning of the Marseilles sewage treatment plant. Particulate carbon measurements were conducted by the Service Central d'Analyse of the Centre National de la Recherche Scientifique (CNRS). We are grateful to two anonymous reviewers for their comments and suggestions. This study is part of two research projects: 1) "IBISCUS", funded by the Agence Nationale de la Recherche (ANR) -ECOTECH program (project ANR-09-ECOT-009-01) and by the Continental and Coastal Ecosphere (EC2CO) program from CNRS and the Institut des Sciences de l'Univers (INSU), and 2) "SEA EXPLORER", labeled by the Competitivity Cluster Mer PACA and supported by the Fonds unique interministériel (FUI). This work also received the financial support of the Conseil Général des Bouches-du-Rhône (CG 13).
Mediterranean Sea
Damien Sous
Jean-Luc Devenon
Jean Blanchot
Marc Pagano
Circulation patterns in a channel reef-lagoon system, Ouano lagoon, New Caledonia
This paper reports on two three-month field experiments carried out in the Ouano lagoon, New Caledonia. This channel-type lagoon, exposed to meso-tides, south pacific swells and trade winds, has been monitored thanks to a network of current profilers to understand the dynamics of the lagoon waters. Four typical circulation patterns have been identified, covering altogether more than 90% of the survey period.
These patterns are mainly driven by the wave and wind conditions. In particular, obliquely incident waves or strong winds blowing over a sufficient period are able to reverse the typical circulation pattern. The analysis of the vertical structure of the currents through passages shows the regular presence of a nearly linear vertical shear within the water column.
Introduction
Coral reefs are both invaluable and endangered living systems in the nearshore areas of tropical regions.
They provide a unique habitat for countless species as well as a natural and efficient protection against erosion processes and submersion events induced by storms or tsunamis [START_REF] Fernando | Coral poaching worsens tsunami destruction in sri lanka[END_REF]. Unfortunately, the reef colonies and all their benefits for biological and human populations are threatened by the combined effects of increasing anthropogenic pressure and climate change (sea level rise, acidification, warming, etc.).
The hydrodynamical functioning of reef-lagoon systems remains a challenging task for coastal oceanographers. It is, from a physical point of view, a striking example of interacting processes over a wide spatio-temporal range, but also a fundamental step for the characterization of the biogeochemical processes which ultimately govern the health and resilience of the ecosystem [START_REF] Carassou | Spatial and temporal distribution of zooplankton related to the environmental conditions in the coral reef lagoon of new caledonia, southwest pacific[END_REF][START_REF] Szmant | Nutrient enrichment on coral reefs : is it a major cause of coral reef decline ?[END_REF].
Reef-lagoon systems are potentially exposed to a wide set of physical forcings such as tides, waves, wind, coastal currents, rainfalls, river discharges and evaporation which affect the dynamics and the quality of lagoon waters. Density-driven currents have been observed between lagoon and ocean [START_REF] Atkinson | Circulation in enewetak atoll lagoon[END_REF], but the overwhelming trend is that tides, waves and wind are, by far, the main drivers of lagoon circulation and water renewal in most configurations [START_REF] Wolanski | Modeling the fate of pollutants in the tiahura lagoon, moorea, french polynesia[END_REF][START_REF] Kraines | Wind-wave driven circulation on the coral reef at bora bay, miyako island[END_REF][START_REF] Kraines | Rapid water exchange between the lagoon and the open ocean at majuro atoll due to wind, waves, and tide[END_REF][START_REF] Tartinville | Wave-induced flow over mururoa atoll reef[END_REF][START_REF] Andréfouët | Water renewal time for classification of atoll lagoons in the tuamotu archipelago (french polynesia)[END_REF][START_REF] Kench | Hydrodynamics and sediment flux of hoa in an indian ocean atoll[END_REF][START_REF] Angwenyi | Wave-driven circulation across the coral reef at bamburi lagoon, kenya[END_REF][START_REF] Hench | Episodic circulation and exchange in a wave-driven coral reef and lagoon system[END_REF][START_REF] Lowe | Wave-driven circulation of a coastal reef-lagoon system[END_REF][START_REF] Taebi | Nearshore circulation in a tropical fringing reef system[END_REF][START_REF] Hoeke | Drivers of circulation in a fringing coral reef embayment : a wave-flow coupled numerical modeling study of hanalei bay, hawaii[END_REF][START_REF] Chevalier | Hydrodynamics of the toliara reef lagoon (madagascar) : Example of a lagoon influenced by both waves and tide[END_REF][START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF].
The tidal cycles have a direct effect on the lagoon water : the lagoon fills during the flood and empties during the ebb, inducing the so-called tidal ellipses representing periodically rotating currents. This basic scheme can be significantly complicated by the presence of a complex lagoon bathymetry with multiple openings and
The main physical processes during the wave propagation, which are now fairly well understood, include refraction, reflection and shoaling on the outside reef slope [START_REF] Kraines | Wind-wave driven circulation on the coral reef at bora bay, miyako island[END_REF][START_REF] Symonds | Wave-driven flow over shallow reefs[END_REF]Gourlay, 1996a,b;[START_REF] Massel | On the modelling of wave breaking and set-up on coral reefs[END_REF], bathymetric breaking occuring generally before the reef top [START_REF] Hardy | Field study of wave attenuation on an offshore coral reef[END_REF][START_REF] Hearn | Hydrodynamic processes on the ningaloo coral reef, western australia[END_REF][START_REF] Kraines | Wind-wave driven circulation on the coral reef at bora bay, miyako island[END_REF], harmonic transfers toward infragravity (IG) waves [START_REF] Pomeroy | The dynamics of infragravity wave transformation over a fringing reef[END_REF][START_REF] Van Dongeren | Numerical modeling of low-frequency wave dynamics over a fringing coral reef[END_REF] but also possibly to higher frequency (superharmonics) waves [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF][START_REF] Masselink | Field investigation of wave propagation over a bar and the consequent generation of secondary waves[END_REF], dissipation by friction and interaction with co-or counter-current [START_REF] Roberts | Wave-current interactions on a shallow reef (nicaragua, central america)[END_REF]. The relative importance of each process during the wave propagation toward the shore is controlled by the offshore wave features, the bathymetry, the mean water level and slope and the reef roughness.
This rich literature shows that a great research effort has been engaged during the last two decades to understand the bulk dynamics of reef-lagoon systems exposed to a set of time-varying, and often interacting, forcings. In this context, the present study aims to present and analyse a long-term field survey of the Ouano lagoon, New Caledonia. More than six months of current measurements have been performed in strategic points of the system to better understand the lagoon interaction with the open ocean and the neighbouring lagoons. The first section of the paper is dedicated to the presentation of the studied site and the experimental setup. The second section summarizes the results in order to identify the main drivers of the lagoon dynamics and the most typical circulation patterns, with a subsection devoted to the analysis of the vertical structure of the currents. The third section discusses the observed mechanisms in a more general context, including biogeochemical issues.
Field site and experiments
Field site
The New Caledonia archipelago hosts one of the largest reef structures worldwide, partly inscribed to the UNESCO World Heritage List in 2008[START_REF] Grenz | New caledonia lagoon : a threatened paradise under anthropogenic pressure ? Lagoons : Habitat and Species[END_REF]. The study site is the Ouano lagoon (Fig. 1), located on the south-west coast. It is an approximately 30 km long, 10 km wide and 10 m deep channeltype lagoon [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF] mostly exposed to south pacific swell waves, trade winds and meso-tidal fluctuations.
The lagoon is directly opened to ocean through two reef-openings in the north-west section of the reef barrier. The southern opening is about 1 km wide and 10-20m deep (the Isié reef opening) while the northern (the Ouaraï reef opening) is the deepest, down to -60m and 1.5 km wide. The lagoon is connected to northern In the present study, the instrumentation is focused to a well-defined lagoon system extending from the Tenia passage to the N'Digoro passage where the lagoon topography is rather simple with a limited number of openings (see Fig. 1). This simple geometry allows that the physics of the problem can be more easily understood before extending to more complex systems. In this part of the Ouano lagoon, the reef barrier is 25 km long and only opened at the Isié reef opening. The total volume of the considered portion of the lagoon is then about 1.3 10 9 m 3 .
Field experiments and methods
Two field campaigns have been carried out in the Ouano lagoon : the first in 2013, from August 28 to December 4, and the second in 2015, from January 10 to April 15. A first analysis of the 2013 experiment focused on the parameterization of cross-reef fluxes in a coastal circulation numerical model [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF]. The 2013 data is here further analysed and combined with the 2015 experiment to provide a general view of the circulation patterns in the Ouano lagoon over two different seasons, including a wide range of wave, wind and tide conditions.
Four current profilers (ADCP) were deployed during the experiments to provide data on the temporal variability of current velocity and direction along vertical profiles. Three Acoustic Doppler Current Profilers (ADCP) were deployed (Fig. 1) in the lagoon passages (N'Digoro and Tenia) and reef opening (Isié). An additional profiler was deployed in a shallower area at the onshore end of the reef flat (the Platier site) to measure the cross-reef exchanges between lagoon and ocean. Mooring depths and profiler parameters are summarized in Table 1.
Wave dynamics on the outside reef slope was measured thanks to autonomous pressure sensors OSSI Wave Gauge (5Hz sampling frequency) and RBR Duo (1Hz sampling frequency) for the first and second campaigns.
Pressure sensors are fixed on the bottom, at immersion depths 14.4 and 12m for the first and second experiments, respectively. Linear theory is used to estimate free surface oscillations and related significant wave height H s from pressure measurements at the bottom over 30-min time window. Section 3.5 presents an analysis of the vertical structure of currents. A quantitative estimation of the vertical shearing over the water column is provided by the calculation of the horizontal component of the vorticity vector ∂U/∂z. The vorticity is first calculated for each bin and then depth-averaged over the water column. The vertical shear of current is generally quite linear for main velocity components at Isié, Tenia and N'Digoro sites, so that the depth-averaged vorticity can be generally considered as a relevant indicator.
Along the reef (Platier site), the vertical structure of the current is usually much more complex which can not be simply analysed in terms of depth-averaged quantities and will not be discussed here.
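For illustration, the conversion from bottom pressure to significant wave height described above can be sketched as follows (Python/NumPy; the gravity-wave frequency band, the iteration count of the dispersion solver and the synthetic example are indicative choices, not the exact processing chain of the present study):

```python
import numpy as np

def wavenumber(f, h, g=9.81):
    """Solve the linear dispersion relation (2*pi*f)**2 = g*k*tanh(k*h) by fixed-point iteration."""
    omega2 = (2 * np.pi * f) ** 2
    k = omega2 / g  # deep-water first guess
    for _ in range(100):
        k = omega2 / (g * np.tanh(k * h))
    return k

def hs_from_bottom_pressure(p_head, fs, depth, fmin=0.04, fmax=0.4):
    """Significant wave height Hs = 4*sqrt(m0) from a bottom pressure-head record.

    p_head : pressure converted to metres of water (p/(rho*g)), sensor on the bed.
    fs     : sampling frequency (Hz), e.g. 5 Hz (OSSI) or 1 Hz (RBR).
    depth  : mean water depth above the sensor (m).
    """
    p_head = np.asarray(p_head, float) - np.mean(p_head)
    n = len(p_head)
    spec = 2.0 * np.abs(np.fft.rfft(p_head)) ** 2 / (fs * n)   # one-sided pressure spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)                    # gravity-wave band only
    k = wavenumber(freqs[band], depth)
    s_eta = spec[band] * np.cosh(k * depth) ** 2                # linear-theory correction for a bed-mounted sensor
    m0 = np.trapz(s_eta, freqs[band])                           # zeroth spectral moment
    return 4.0 * np.sqrt(m0)

# 30-min synthetic example at 5 Hz in 12 m depth (hypothetical 12 s swell)
fs, depth = 5.0, 12.0
t = np.arange(0, 1800, 1.0 / fs)
eta = 0.5 * np.sin(2 * np.pi * t / 12.0)
p_head = eta / np.cosh(wavenumber(1 / 12.0, depth) * depth)     # surface signal attenuated at the bed
print(f"Hs ~ {hs_from_bottom_pressure(p_head, fs, depth):.2f} m")
```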
Field conditions
In most reef-lagoon systems, water circulation is mainly controlled by fluxes through passages, reef openings and above the immersed part of the coral barrier in response to external forcings, such as tides, waves and wind [START_REF] Bonneton | Tidal modulation of wave-setup and wave-induced currents on the aboré coral reef, new caledonia[END_REF][START_REF] Gourlay | Wave-generated flow on coral reefs-an analysis for two-dimensional horizontal reef-tops with steep faces[END_REF][START_REF] Roberts | Physical processes in fringing reef system[END_REF][START_REF] Roberts | Wave-current interactions on a shallow reef (nicaragua, central america)[END_REF]. In the considered lagoon, the main driver is the tide, as demonstrated for instance by the spectral analysis shown in Fig. 3 or the periodic oscillations of velocities observed in Figs. 4 or 5. In this paper, a particular attention is paid to the role played by the other main forcings of the lagoon system, i.e. waves and wind, through the study of five parameters : the incoming wave height H s , peak period T p and direction θ ww3 and the wind magnitude W and direction θ w . Rainfall has been sparse over the studied periods and is neglected. Statistical features of waves and wind over the cumulated data (2013 and 2015 experiments) are summarized in the wind and wave roses shown in Fig. 2. Wind measurements performed at the nearby Tontouta airport station revealed the typical wind pattern observed during the experiments. Trade winds are modulated by the thermal breeze and guided by the mountainous topography of New Caledonia. The dominant pattern is clear : strong winds always blow from the south-east and almost no winds come from the north-west sector. Daily variations are observed in Figs. 4 and 5. Winds are minimal during nights (lower than 1 m/s) and nearly offshore (north-west to north). They increase during the day while rotating clockwise and blowing from the east, then south and finally west before slowing down in the late afternoon. A slight seasonal variation is observed when comparing Figs. 4 and 5: the wind tends to be stronger during spring. The mean and maximal measured values are 2.5 and 12.4 m/s for the 2013 experiment and 2.9 and 12.2 m/s for the 2015 experiment.
The wave distribution shown in Fig. 2, right plot, shows that waves come from a quite narrow sector between 140 and 215°. The main peak period is 11.7 s, which indicates the dominance of long swell waves.
Mean and maximal wave heights are about 0.96 and 2.74 m for the 2013 experiment, and 1.22 and 3.79 m for the 2015 experiment. The strong wave events (H s > 1.5 m) are generally associated with directions of about 200°, with a noticeable exception around March 14, 2015, with a more south-eastern swell event.
Results
Overview
Let us first consider an overall directional and spectral analysis of the measured currents, which allows us to define the variables used hereinafter. Figure 3 depicts the direction probability and energy spectrum of the depth-averaged current at each site. As expected in the presence of strong bathymetric constraints, the currents are well channelized in the reef passages and opening during the filling-emptying cycles of the lagoon induced by the tide. At the Platier (reef) site, a larger directional spread is observed, but a main tendency toward north is clear. A more detailed analysis of the current dynamics is presented hereinafter, but this result allows us to define, for each site, a projection axis along the main flow direction to obtain the main and transverse components, called U and V respectively. The sign convention is that an inward, lagoon-entering current is related to a positive value of the main component. In most cases, the analysis will be performed on the main component U while the transverse component can be neglected. The only exception is the Platier site, for which the transverse component V platier can be important.
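A minimal sketch of this axis projection (Python/NumPy; the main-axis bearing used in the example is a placeholder, in practice it would be taken from the direction histograms of Fig. 3 for each site):

```python
import numpy as np

def project_on_main_axis(u_east, v_north, main_axis_deg):
    """Rotate east/north velocities onto a site's main flow axis.

    main_axis_deg : bearing (nautical convention, degrees clockwise from north)
                    of the positive, lagoon-entering direction.
    Returns (U, V): main (positive = entering the lagoon) and transverse components.
    """
    theta = np.deg2rad(main_axis_deg)
    U = u_east * np.sin(theta) + v_north * np.cos(theta)    # along the main axis
    V = -u_east * np.cos(theta) + v_north * np.sin(theta)   # transverse component
    return U, V

# Hypothetical depth-averaged east/north components (m/s) at one site
U, V = project_on_main_axis(np.array([0.1, -0.2]), np.array([0.3, 0.1]), main_axis_deg=40.0)
```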
The spectral analysis of depth-averaged currents presented in Fig. 3 (right plot) demonstrates the strong influence of tidal components in the velocity signal : by order of decreasing importance M2 (12h25) combined with S2 (12h), K1 (23h56), M4 (6h12) and M6 (4h08). As described by [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF], the amplitude of water level variation shows the prevalence of the semi-diurnal and diurnal tides M2, S2 and K1 which determine about 97% of the signal. To remove the influence of tides on the velocity measurements, currents can be either day-averaged or detided. The former will be denoted with the superscript da in the following.
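For illustration, a minimal sketch of such a spectral identification (Python/SciPy; the hourly sampling interval and the Welch segment length are indicative assumptions, and the tidal line frequencies simply restate the periods quoted above):

```python
import numpy as np
from scipy.signal import welch

def current_spectrum(u, fs=1.0 / 3600.0, nperseg=512):
    """Welch power spectral density of a depth-averaged current component (hourly series assumed)."""
    return welch(np.asarray(u, float), fs=fs, nperseg=nperseg)

# Frequencies (Hz) of the constituents discussed above, used to locate the spectral peaks
TIDAL_LINES = {"K1": 1.16e-5, "M2/S2": 2.28e-5, "M4": 4.47e-5, "M6": 6.72e-5}
```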
The latter, identified with the superscript dt, are obtained by applying band-stop filters to the current data around the three dominant harmonic frequencies : f = 2.28 × 10⁻⁵ Hz (M2/S2), f = 1.16 × 10⁻⁵ Hz (K1) and f = 4.47 × 10⁻⁵ Hz (M4). As exposed in the following, day-averaged currents are the most relevant variable to identify the circulation pattern at the lagoon scale, while detided values will mainly be used to highlight the time lag between the forcing evolution and the current response.
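A minimal sketch of both operations (Python/SciPy; the filter order, the relative bandwidth of the notches and the hourly sampling interval are indicative choices, not necessarily those used here):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1.0 / 3600.0  # assumed sampling frequency of the current series: one value per hour (Hz)
TIDAL_BANDS = [2.28e-5, 1.16e-5, 4.47e-5]  # M2/S2, K1 and M4 centre frequencies (Hz)

def detide(u, fs=FS, rel_bandwidth=0.15, order=2):
    """Remove the dominant tidal lines with zero-phase Butterworth band-stop filters (superscript 'dt')."""
    u_dt = np.asarray(u, float).copy()
    for f0 in TIDAL_BANDS:
        low, high = f0 * (1 - rel_bandwidth), f0 * (1 + rel_bandwidth)
        b, a = butter(order, [low, high], btype="bandstop", fs=fs)
        u_dt = filtfilt(b, a, u_dt)
    return u_dt

def day_average(u, samples_per_day=24):
    """Block-average the series over 24-h windows (superscript 'da')."""
    n = (len(u) // samples_per_day) * samples_per_day
    return np.asarray(u[:n], float).reshape(-1, samples_per_day).mean(axis=1)
```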
One notes that the M4 component is more significant for current than for free surface spectra (dashed line in Fig. 3). However, rather than a real M4 signature, this observation reveals the modulation of cross-reef currents at twice the tidal frequency [START_REF] Symonds | Wave-driven flow over shallow reefs[END_REF][START_REF] Kraines | Wind-wave driven circulation on the coral reef at bora bay, miyako island[END_REF] which drives associated fluctuations at other measurement sites.
An ensemble view of measured depth-averaged currents and main external parameters during the 2013 and 2015 experiments is shown in Figs. 4 and5, respectively. The influence of tide on the depth-averaged currents is strong but appears to be site-dependent, as observed on the energy spectra in Fig. 3. The tide effect is clearly dominant for Isié and N'Digoro, less strong but still significant at Tenia and much smaller on the reef (Platier site). In this latter location, the overwhelming trend for lagoon-entering current is related to the nearly permanent wave breaking over the reef, as described by [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF]. At the other sites, the tide effect is much more significant, inducing intense currents alternatively inward and outward during the emptying-filling cycles of the lagoon in response to tidal fluctuations. As expected, this tidal effect is more pronounced at the Isié reef opening which is the only direct connection to the open ocean in the considered zone. Day-averaged currents reveal that the mean general tendency is a positive (lagoon-entering) current at the Platier site and a negative (lagoon-leaving) current for the three other sites. A striking feature is that large wave events are associated with entering flux on the reef (Platier) and outward currents at each other site as shown by [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF] but this latter trend is not systematic as denoted for instance during the Feb. 8 or Mar. 14, 2015 wave events.
Further understanding of the lagoon dynamics is provided by the statistical inter-site relationships depicted in Tab. 2 and in Fig. 6. Note that, for the sake of clarity, not all relationships between sites are plotted in Fig. 6, but a focus is made on the most significant ones in terms of lagoon circulation. In order to remove the effect of the tide, the analysis of the dominant trends is performed on the day-averaged currents.
The effect of the external parameters, waves and wind, on the measured currents is discussed later on in sections 3.2 and 3.3. Further analysis will be carried out throughout the text to finally characterize the circulation patterns in section 3.4.
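The inter-site coefficients of Tab. 2 are plain Pearson correlations expressed in percent; a minimal sketch of their computation (Python/NumPy, on hypothetical day-averaged series):

```python
import numpy as np

def correlation_percent(series_by_site):
    """Pearson correlation matrix (in %) between day-averaged current components."""
    names = list(series_by_site)
    data = np.vstack([series_by_site[name] for name in names])
    return names, 100.0 * np.corrcoef(data)

# Hypothetical day-averaged main components (m/s) at three sites
rng = np.random.default_rng(0)
sites = {"U_da_isie": rng.normal(size=90),
         "U_da_ndigoro": rng.normal(size=90),
         "U_da_tenia": rng.normal(size=90)}
names, r_percent = correlation_percent(sites)
```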
The following trends on links between sites can be deduced from Tab. 2 and Fig. 6.
-Strong connection is observed between the two northern sites N'Digoro and Isié. The day-averaged currents are generally negative (outward) and nearly linearly related (black circles in Fig. 6 A).
-The correlation is weaker between the northern and southern passages (see U da tenia and U da ndigoro in Fig. 6 A). One notes that strong currents at Tenia can be either in- or outward, while weak northern flow conditions are associated with outflow at Tenia.
-The Platier main component is fairly anti-correlated to the N'Digoro site. A similar link, not shown here, is observed with Isié.
-A strong connection is observed between U da tenia and V da platier . However, slightly different slopes in the U da tenia /V da platier relationships are observed for positive and negative currents (see dash-dotted and dashed lines in Fig. 6, B). As discussed later on in section 3.4, this latter trend tends to indicate two distinct modes of functioning of the southern part of the lagoon : south-eastward currents should be associated with converging water fluxes in the Tenia passage, leading to a magnitude increase of U da tenia with respect to V da platier , while, on the opposite, periods of north-westward velocities correspond to a more defined (channelized) current entering the lagoon through Tenia and flowing along the reef barrier, allowing a better flux conservation between both sites.
-The linear correlation between day-averaged U da tenia and V da platier is very weak. However, Fig.
Correlation coefficients for the complete velocity signals presented in Tab. 2 are generally much lower than their day-averaged counterparts. This highlights in particular the role of the phase shift during the tide propagation inside the lagoon : the tidal filling-emptying cycles are not in phase at each site, as described by [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF].

Table 2. Correlation coefficients (%) between the measured current components at the different sites (complete and day-averaged signals).
             U isie   U ndigoro   U platier   U tenia   V platier   U da isie   U da ndigoro   U da platier   U da tenia   V da platier
U isie         -        73.4        -31.3       12.1      -40.6        -            -              -              -             -
U ndigoro      -         -          -38.3      -14.6      -41.4        -            -              -              -             -
U platier      -         -            -          6.3        9.7        -            -              -              -             -
U tenia        -         -            -           -        54.5        -            -              -              -             -
U da isie      -         -            -           -          -         -           81.5          -81.5          -44.2         -37.7
U da ndigoro   -         -            -           -          -         -            -            -71.3          -56.1         -49.1
U da platier   -         -            -           -          -         -            -              -             -1.3          -0.3
U da tenia     -         -            -           -          -         -            -              -              -
Wave effect
Table 2 shows that, as expected, the influence of waves and wind is greater on the day-averaged currents, as the tidal-related components have been removed from the signal. The significant wave height H s is, apart from the tide, the dominant external parameter affecting the main current component at the Isié, N'Digoro and Platier sites. The sign of the correlation coefficients indicates that an increase of the significant wave height promotes (negative) outward-directed currents at the Isié and N'Digoro sites and inward (positive) currents over the reef at Platier. This is confirmed in Fig. 7A, which shows a monotonic increase of the magnitude of U da isie and U da platier when increasing the incoming significant wave height. The overall tendency is that the increase of offshore wave energy induces stronger inward cross-reef fluxes generated by wave breaking above the barrier. This water input all along the reef barrier is compensated by outward currents in each passage and reef opening, see [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF] for a parameterization of wave-induced cross-reef fluxes.
Figure 6: Inter-connections between day-averaged detided currents at selected sites. A : U da isie (black circles), U da platier (red stars) and U da tenia (blue dots) vs U da tenia . B : V da platier (black circles) and U da platier (red stars) vs U da tenia . Dash-dotted and dashed lines are linear regression for V da platier vs U da tenia for negative and positive values of U da tenia , respectively.
The response of the N'Digoro currents (not depicted here) is very close to that of Isié. However, this general circulation trend, already depicted in [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF], is not the sole circulation system in the lagoon. In particular, at Tenia (see blue dots in Fig. 7A), the day-averaged currents show no clear correlation with wave height, and most wave conditions can be associated with either inward or outward currents.
The question arises now of the external conditions controlling the flushing or filling dynamics through the Tenia passage. While the wave period does not show any noticeable statistical effect on the depth-averaged currents, the wave direction tends to be anti-correlated with the day-averaged U da tenia and V da platier , with moderate correlation coefficients of -40.9 and -38.5 %, respectively. Fig. 7B depicts the corresponding relationships. In spite of the measurement spread, one notes the overall trend that eastern wave directions tend to promote positive currents at Tenia and along the reef at Platier, while western swells induce outflow at Tenia.
Wind effect
The correlation coefficients on both complete and day-averaged currents are presented in Fig. 8. The statistical effect of wind on the day-averaged lagoon dynamics is noticeable. It is negatively correlated with the northern sites and positively correlated anywhere else : strong south-east winds promote a north-west bulk motion within the lagoon (V platier > 0) and throughout the reef passages (U tenia > 0 and U ndigoro < 0)
and openings (U isie < 0). In addition, complementary computations have shown that the wind effect is notably weaker when using detided currents : correlation coefficients are -30.3, -23.8, 17.5, 28.2 and 25.3% for U dt isie ,
U dt ndigoro , U dt platier , U dt tenia and V da platier , respectively. This trend is confirmed in Fig. 8 : above a wind intensity of around 3 m/s, V da platier increases with wind intensity. For wind speeds greater than 4 m/s, V da platier is systematically positive. As observed in the wind rose in Fig. 2, such strong winds come from a narrow directional band between 125 and 140° (in nautical convention). Similar tendencies with the same wind intensity thresholds are observed at the Tenia, Isié and N'Digoro sites.
Circulation patterns
The careful consideration of the presented data leads us to identify four dominant circulation patterns in the Ouano lagoon : the Classic Pattern (CP), the Tenia Reversal Pattern (TRP), the Isié Reversal Pattern (IRP) and the Southward Pattern (SP).
The hydrodynamical features (Fig. 9), occurrence probabilities (Tab. 3) and onset conditions (Fig. 10) for each pattern are described below. The two dominant ones, i.e. CP and TRP, are described in more detail through two selected four-day events in Fig. 11. Furthermore, in order to understand their role in the renewal of lagoon waters, the corresponding exchanged fluxes are estimated (Tab. 4).

The classic pattern.
During the CP event of Fig. 11 (left plots), the tidal inflow at the other sites is not able to compensate the outward flow. Currents are maximal around the swell apex, at high tide at Platier and during the ebb for the other sites.
The onset conditions of CP for day-averaged currents are synthesized in Fig. 10. It is generally associated with large south-western swells and rather moderate winds. It appears that for the largest swells, CP is systematically observed, i.e. the lagoon-entering cross-reef flux is strong enough to dominate any other process. The breaking process over the reef depends on the ratio between the wave height and the water level above the reef [START_REF] Hearn | Wave-breaking hydrodynamics within coral reef systems and the effect of changing relative sea level[END_REF][START_REF] Bonneton | Tidal modulation of wave-setup and wave-induced currents on the aboré coral reef, new caledonia[END_REF][START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF]. When strong waves combine with a low tidal amplitude, the cross-reef flux induced by wave breaking can be so important that it pushes out water through each passage and reef opening (i.e. even at Isié) all along the tidal cycles, as depicted in Fig. 4 around Sept. 29 or Oct. 31, 2013. However, more data in large wave conditions (H s > 3 m) should be gathered to substantiate this observation.
Day-averaged currents and estimated fluxes are given in Tab. 4 for October 6. One notes that, while depth-averaged velocities are of the same order of magnitude at each site, the input flux over the reef barrier is much more important than the fluxes through the passages and reef openings. Furthermore, the outward flux at Tenia is 2.5 times the along-reef flux at Platier, indicating the convergence process in the southern part of the lagoon.
The Tenia reversal pattern.
The Tenia reversal pattern (TRP) is characterized by :
-An inward flow above the reef barrier (U platier > 0)
-An inward flow through the Tenia passage (U tenia > 0), generally associated with a north-west current along the reef (V platier > 0).
-An outward flow in the northern sites (U isie < 0, U ndigoro < 0)
The occurrence probability of TRP for day-averaged currents is 28.6 %. A typical TRP is depicted in Fig. 11 (right plots) between Feb. 6 and 9, 2015. The main difference with CP is the observation of an inward flow at Tenia whose magnitude oscillates with the tide but can remain in the same direction during several tidal cycles. The cross-reef flux measured at the Platier site is still positive but a bit less intense during the swell peak than in the CP presented above for a similar type of wave height. This is attributed to the combined effects of (i) a more southern wave direction, which induces a more westward (indeed WNW) flow above and along the reef, and (ii) a stronger south-east wind, which drives lagoon waters toward the north-west.
However, in the considered case, the wind peak occurs after the establishment of the pattern. This latter appears thus to be mainly initiated by the swell arrival. TRP is related to a bulk north-west motion along the barrier (see the relationship between U tenia and V platier in Fig. 6,B), with water entering through Tenia from the neighbouring south-east lagoon and above the reef-barrier pushed by wave-breaking of south/south-east swell and leaving the lagoon through the northern openings.
Figure 10 shows that TRP is promoted by south to south-east swells and strong winds which are generally from south-east (Fig. 2). The comparison of estimated fluxes in Tab. 4 shows that, for similar range of incoming wave height, the exchanged fluxes through Isié and N'Digoro are much stronger in the TRP case. This emphasizes the major role played by the circulation patterns, driven by wave direction and wind magnitude, on the lagoon flushing dynamics and water renewal [START_REF] Chevalier | Hydrodynamics of the toliara reef lagoon (madagascar) : Example of a lagoon influenced by both waves and tide[END_REF].
The Isié reversal and Southward patterns.
The Isié reversal and Southward patterns are discussed conjointly because they occur in quite similar conditions, i.e. small to moderate waves (typically H s < 1.2 m) and weak winds (typically W < 4 m/s), as depicted in Figs. 4 and 10. These patterns are not very stable, and small-wave periods are often characterized by alternating periods of IRP and SP.
Both the Isié Reversal Pattern (IRP) and the Southward Pattern (SP) are characterized by :
-An inward flow above the reef barrier (U platier > 0)
-An inward flow through the Isié reef opening (U isie > 0)
-An outward flow at Tenia (U tenia < 0)
The only difference is that the N'Digoro current is inward (U ndigoro > 0) for SP and outward (U ndigoro < 0) for IRP. The occurrence probabilities for IRP and SP are respectively 12.5 and 3.6 % in terms of day-averaged currents. The estimation of the exchanged fluxes in Tab. 4 for selected 24-h periods of IRP and SP indicates that these patterns, associated with low wave energy, are much less effective in the advection of water mass.
Vertical structure
The vertical structure of lagoon currents and its effect on the lagoon hydrodynamics, water renewal and biogeochemical processes have been little studied to date. Strong vertical variability has been highlighted in the Majuro Atoll numerical model [START_REF] Kraines | Rapid water exchange between the lagoon and the open ocean at majuro atoll due to wind, waves, and tide[END_REF], with a fast wind-driven surface layer (down to a few meters) and much weaker or even return flows deeper in the water column, possibly affected by baroclinic effects. The present data is purely hydrodynamical, i.e. the water properties such as salinity or temperature, which play a dominant role in stratification processes, are not documented by measurements. However, limiting our discussion to a purely currentological analysis, interesting observations can be made on the vertical structure of the flows through passages and reef openings.
As described in Section 2.1, the depth-averaged horizontal component of the vorticity ∂U/∂z is computed to provide a quantitative estimation of the vertical shear. Fig. 12 shows the relationship between the depth-averaged vorticity and the main component of the currents at each site. Color levels in the left and right columns indicate wind magnitude and wave height, respectively. Only one third of the dataset is depicted for the sake of clarity.
These plots should be interpreted as follows :
-Zero vorticity : no vertical shear in the water column.
-Both positive vorticity and velocity : the current over the whole water column is entering into the lagoon, the surface layer moves faster than the bottom water.
-Both negative vorticity and velocity : the current over the whole water column is leaving the lagoon, the surface layer moves faster than the bottom water.
-Positive vorticity and negative velocity : the depth-averaged current leaves the lagoon, and the bottom layer moves faster than the surface layer. In some cases, the latter can even flow in the opposite direction (i.e. positive, lagoon-entering) to the bottom layer.
-Negative vorticity and positive velocity : the depth-averaged current enters the lagoon, and the bottom layer moves faster than the surface layer. In some cases, the latter can even flow in the opposite direction (i.e. negative, lagoon-leaving) to the bottom layer.
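Following these sign conventions, a minimal sketch of the per-profile shear estimate from the ADCP bins (Python/NumPy; the bin heights and the example profile are placeholders, not measured values):

```python
import numpy as np

def depth_averaged_shear(u_bins, z_bins):
    """Depth-averaged vertical shear dU/dz from one ADCP profile.

    u_bins : main velocity component in each bin (m/s), ordered consistently with z_bins.
    z_bins : height of each bin above the bed (m).
    Positive values mean the main component increases towards the surface.
    """
    u = np.asarray(u_bins, float)
    z = np.asarray(z_bins, float)
    shear_per_bin = np.gradient(u, z)   # local dU/dz at each bin
    return shear_per_bin.mean()         # depth-averaged value used in Fig. 12

# Hypothetical 10 m profile: outward near the bed, inward near the surface
z = np.linspace(1.0, 9.0, 9)
u = np.linspace(-0.1, 0.3, 9)
print(f"depth-averaged dU/dz = {depth_averaged_shear(u, z):.3f} 1/s")
```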
One notes first, for a substantial portion of the measured data, the importance of the vertical shear. For illustration, a vertical vorticity of 0.05 s⁻¹ over a 10 m deep measurement water column leads to a vertical variation of the velocity of 0.5 m/s. The presence of flow reversal within the water column, i.e. opposite directions for surface and bottom layers, is observed in 16, 28 and 29 % of the considered data for the Isié, Tenia and N'Digoro sites, respectively. It generally occurs during the tide reversal, when depth-averaged velocities are small.
The data distribution observed in Fig. 12 shows two overall tendencies : vertical shear generally increases with current velocity and vorticity is preferentially of the same sign as velocity, i.e. surface water moves faster than bottom one. The main exception to this trend is observed at Isié where inward current can show significant vorticity either positive or negative. In addition to this base functioning, the effects of wind and wave on the vertical shear are displayed by color labels in left and right columns, respectively. Strong winds (blowing from south-east) are generally associated to strong (negative) vertical shear associated with the outward currents at Isié and N'Digoro. This describes the expected effect of wind which enhances the north-west surface transport over the whole lagoon. At Tenia, the wind does not show a direct effect on the vertical shear and strong wind events are mainly associated to strong inward weakly sheared currents. One notes also that, at each site, large value of vorticity can be observed during calm wind periods. The wave effect is quite similar to that of the wind, although less marked. Strong wave events are generally related to outward sheared currents at Isié and N'Digoro whereas at Tenia no clear dependency is observed.
Figure 12: The top plot provides a scheme to help the interpretation of the results in terms of vertical profiles, the x- and y-axes being the depth-averaged current and the vorticity, respectively. Current vorticity vs intensity for Isié, N'Digoro and Tenia. Color labels provide information on the wind magnitude (in m/s) and wave height (in m) for the left and right plots, respectively.

From the data presented in Fig. 12 and an overall analysis of the measured profiles, the vertical structure of the currents in passages and reef opening can be summarized as follows :
-During well-established currents and circulation patterns, the surface layers are generally moving faster than the bottom ones.
-In most cases, the vorticity is roughly constant in the water column (linear velocity profile).
-The wind, generally eastern, affects as expected the vertical shear by accelerating the surface layer when depth-averaged current is inward (e.g. during TRP events) or slowing down the surface layer of outward flows.
-Opposite signs of depth-averaged velocity and vorticity can be observed, i.e. bottom layers moving faster than surface ones, in particular (but not exclusively) during switchovers from TRP to CP.
Note that due to the time scales of the external forcings, in particular the tide and the thermal breeze, and to the lagoon depth (typically about 10 m) and topographical constraints, the Ekman effect on the current vertical structure is expected to be weak, in particular in comparison to the direct action of the wind stress at the surface. Additional data processing will be carried out to further explore this issue.
Discussion
A generic functioning for channel lagoons ?
The hydrodynamical field measurements presented here provide a valuable dataset for the understanding of lagoon circulation and a relevant benchmark for numerical modeling. They show the importance of the combined effects of tides, waves and wind, which must all be taken into account in numerical models to provide a proper characterization of water renewal dynamics. The main processes observed in the Ouano lagoon should be representative of similar types of reef-lagoon systems that we define as channel lagoons.
The distinctive features of such lagoons are :
-a well-defined barrier reef -a typical lagoon depth (5-50m) much greater than the depth over the reef top (typically outcropping during low spring tides), -a reef-parallel dimension (length) of the lagoon much greater than the reef-normal dimension (width), i.e. with an aspect ratio of the order of 2-10, -longitudinal bathymetric gradients smaller than their transverse counterparts, -one or several passages toward open ocean or adjacent lagoons.
Typical channel lagoons can be found for instance in the Pacific Ocean :
-New Caledonia, west and east coasts ;
-French Polynesia, east coast of Moorea, Huahiné, Raiatea-Tahaa, Tahiti Iti and Nui ;
-Japan, south-west Okinawa (Bibi beach) ; -Fiji, Viti Levu, Nairai and Kandavu ;
-Samoa, Naunonga (Vanikoro), Utupoa ;
or in the Indian Ocean in Madagascar (Tulear) or Mauritius (north of Mahebourg). Various tide, wave and wind conditions can be encountered at each site, but the renewal time of lagoon waters typically ranges from a few days to a few weeks. Such time scales are, on one hand, sufficient to allow the setting up of a wide range of bio-geochemical processes (the lagoon can produce its "own" waters) and, on the other hand, short enough to be permanently affected by the fluctuations of ocean and atmospheric forcings : swell events, storms, spring/neap tide cycles, winds, etc. The present experiments demonstrate that, in addition to the tidal cycles, waves and wind play an important role in the lagoon circulation. The wave-breaking cross-reef fluxes are generally able, independently of the tide, to renew the lagoon water in a few days to a few weeks. This wave effect, which is enhanced as the length to width ratio of the lagoon increases, can drive different current patterns depending on the swell direction and magnitude, possibly blocking or reversing the tidal fluxes through passages and reef openings. The wind stress affects the vertical flow structure and also participates in the reversal of the whole lagoon circulation when strongly blowing over a sufficient period of time (typically a few days). Depending on the lagoon geometry, the difference between patterns, which can alternatively drive water from/into the open ocean or from/into the neighbouring lagoons in variable proportions, can lead to important consequences in terms of water properties and biogeochemical processes.
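As an order-of-magnitude illustration of these renewal times for the studied portion of the Ouano lagoon (Python; only the lagoon volume and reef length are taken from Section 2.1, while the depth over the reef and the cross-reef velocity are assumed values for the sketch):

```python
# Order-of-magnitude renewal time by the wave-driven cross-reef flux.
LAGOON_VOLUME = 1.3e9      # m^3, studied portion of the Ouano lagoon (Section 2.1)
REEF_LENGTH = 25e3         # m, length of the reef barrier bounding this portion (Section 2.1)
DEPTH_OVER_REEF = 1.0      # m, assumed effective submersion of the reef flat
U_CROSS_REEF = 0.1         # m/s, assumed day-averaged lagoon-entering velocity over the reef

cross_reef_flux = U_CROSS_REEF * DEPTH_OVER_REEF * REEF_LENGTH      # m^3/s
renewal_time_days = LAGOON_VOLUME / cross_reef_flux / 86400.0
print(f"flux ~ {cross_reef_flux:.0f} m3/s, renewal time ~ {renewal_time_days:.0f} days")
```

With these illustrative numbers the renewal time falls in the "few days to few weeks" range quoted above; the actual value obviously depends on the wave forcing and on the prevailing circulation pattern.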
Seasonal variability
All of these open questions should drive further ambitious research efforts. In particular, field campaigns must now be designed to measure simultaneously both the hydrodynamical and biogeochemical properties of reef-lagoon systems in order to gain a comprehensive understanding of these coupled processes.
Conclusion
From two field campaigns carried out in the Ouano lagoon, west New Caledonia, we have identified the main drivers of a typical channel lagoon. The selected lagoon-barrier reef system is exposed to south pacific swells, meso-tides and trade winds modulated by the thermal breeze. A network of current profilers has been deployed during two successive three-month field campaigns in order to monitor the currents within reef passages and openings in a wide range of hydrodynamical and meteorological conditions. Pressure sensors are used to monitor the incoming wave features on the outside reef slope.
The first driver of the lagoon hydrodynamics is the tide, which induces periodic filling/flushing cycles of the lagoon, well identified on both free surface and current measurements. In addition, a modulation of the currents at twice the tidal frequency has been observed, in response to the modulation of the cross-reef flow [START_REF] Symonds | Wave-driven flow over shallow reefs[END_REF][START_REF] Kraines | Wind-wave driven circulation on the coral reef at bora bay, miyako island[END_REF]. The analysis of day-averaged depth-averaged currents allows us to identify both the inter-connections between the measurement sites and their dependence on the external forcings.
The effect of waves is straightforward: as soon as waves break on the reef top, an entering flow is observed above the reef barrier whatever the tide, increasing with wave energy. For the strongest swells, the cross-reef water input is such that the depth-averaged currents are permanently outward at all other sites, even during rising tide. For all other conditions, the current dynamics in reef openings and passages is mainly controlled by the wave direction and wind magnitude. The comparison between detided and day-averaged currents highlights the slow, day-scale adjustment of the lagoon circulation to the wind stress. Four typical circulations have been characterized, the first two of which control the lagoon dynamics more than 70% of the time:
- The Classic Pattern corresponds to a day-averaged outflow at each passage and opening, occurring during moderate wind and south-west wave conditions.
- The Tenia Reversal Pattern is defined by an entering day-averaged flow at the southern opening driving an overall bulk water motion toward the north-west. This pattern is forced by strong winds and/or swells coming from the south-east.
- The Isié and Southward Patterns are observed in calm conditions. They are characterized by a reversal of the day-averaged current in the northern opening of the lagoon. They are much less stable than the two previous patterns, which last for several days as long as the wave or wind forcing persists.
Following the fluctuations of the meteorological forcings, the lagoon is expected to show a seasonal functioning, with dominant inputs from open-ocean waters during the austral winter and from the neighbouring eastern lagoons during the austral summer. Moreover, the sensitivity of the lagoon circulation to the forcing direction is expected to increase during the severe and extreme wave and wind conditions encountered during tropical storms or cyclones.
The analysis of the vertical structure of the current in the lagoon passages shows the regular presence of a significant, nearly linear vertical shear in the water column. This shear often appears during strong wind events but is also observed in calm conditions. The main tendency is that surface layers are faster than bottom ones, either for in- or out-flows. Periods of reversal between patterns are generally associated with a complex vertical structure of the current, with opposite flows in the upper and lower parts of the water column.
Figure 1: Top view of the Ouano lagoon system. Black dots and stars represent pressure sensor and velocity profiler locations, respectively.
Figure 2: Wind rose (left) and wave rose (right) for the cumulated data of the 2013 and 2015 experiments.
spectral analysis shown in Fig. 3 or the periodic oscillations of velocities observed in Figs. 4 or 5. In this paper, particular attention is paid to the role played by the other main forcings of the lagoon system, i.e. waves and wind, through the study of five parameters: the incoming wave height Hs, peak period Tp and direction θww3, and the wind magnitude W and direction θw. Rainfall was sparse over the studied periods and is neglected. Statistical features of waves and wind over the cumulated data (2013 and 2015 experiments) are summarized in the wind and wave roses shown in Fig. 2. Wind measurements performed at the nearby Tontouta airport station revealed the typical wind pattern observed during the experiments. Trade winds are modulated by the thermal breeze and guided by the mountainous topography of New Caledonia. The dominant pattern is clear: strong winds always blow from the south-east and almost no winds come from the north-west sector. Daily variations are observed in Figs. 4 and 5. Winds are minimal during nights (lower than 1 m/s)
Figure 3: Depth-averaged current direction and normalized Spectral Density of Energy (the dashed line is the SDE of the water level over the reef flat).
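As an illustration of how a normalized spectral density of energy such as the one in Fig. 3 can be obtained, the sketch below applies Welch's method to a synthetic depth-averaged current record. The 10-min sampling interval, the segment length and the normalization by the total variance are assumptions made for the example only.

```python
import numpy as np
from scipy.signal import welch

dt = 600.0                                  # 10-min ensembles (s), assumed
n = int(90 * 24 * 3600 / dt)                # ~3 months of data
t = np.arange(n) * dt
rng = np.random.default_rng(0)

# Synthetic depth-averaged current (m/s) with semi-diurnal and quarter-diurnal
# lines plus background noise.
u = (0.20 * np.sin(2 * np.pi * t / (12.42 * 3600))
     + 0.05 * np.sin(2 * np.pi * t / (6.21 * 3600))
     + 0.02 * rng.standard_normal(n))

# Welch spectral density of energy, normalized so that it integrates to one
# (one possible normalization choice among others).
f, sde = welch(u, fs=1.0 / dt, nperseg=2048, detrend="linear")
sde_norm = sde / (np.sum(sde) * (f[1] - f[0]))

for label, period_h in (("tidal", 12.42), ("twice-tidal", 6.21)):
    i = np.argmin(np.abs(f - 1 / (period_h * 3600)))
    print(f"{label} peak near {f[i]:.2e} Hz, normalized SDE = {sde_norm[i]:.3g}")
```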
Figure 7: Influence of incoming wave features on the depth-averaged currents at selected sites. A: U_da Isié (black circles), U_da Platier
Figure 8:
Figure 9: Schematic view of the circulation patterns in the Ouano lagoon.
To illustrate the lagoon flushing dynamics, one representative 24-h period has been selected for each pattern among the dataset: Sept. 21, 2013 for SP, Sept. 22, 2013 for IRP, Oct. 6, 2013 for CP and Feb. 7, 2015 for TRP. The measured depth-averaged currents are time-averaged over the corresponding 24-h period and displayed in Tab. 4. Estimates of the related daily exchanged fluxes can be computed using the cross-section areas of each site provided by the numerical bathymetry used by [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics : a simple parameterization for coral lagoon circulation model, with application to the ouano lagoon, new caledonia[END_REF]. For the N'Digoro, Isié and Tenia sites, these cross-sections are about 2.09×10⁴, 1.15×10⁴ and 2.85×10⁴ m², respectively. For Platier, as the
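The conversion from day-averaged currents to exchanged fluxes is a simple product of velocity and cross-section area, as sketched below. The day-averaged velocities used here are placeholders, not the values of Tab. 4.

```python
# Cross-section areas (m^2) from the numerical bathymetry quoted in the text.
areas = {"NDigoro": 2.09e4, "Isie": 1.15e4, "Tenia": 2.85e4}

# Placeholder day-averaged currents (m/s); replace with the Tab. 4 values.
u_day = {"NDigoro": 0.10, "Isie": 0.15, "Tenia": -0.05}   # sign = flow direction

for site, area in areas.items():
    flux = u_day[site] * area              # instantaneous flux (m^3/s)
    volume = flux * 86400                  # daily exchanged volume (m^3)
    print(f"{site}: F = {flux/1e3:+.2f} x 10^3 m^3/s, "
          f"daily volume = {volume/1e6:+.1f} x 10^6 m^3")
```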
Figure 10: Classic and Tenia patterns.
Figure 11: Examples of the Classic Pattern and the Tenia Reversal Pattern. From top to bottom: depth-averaged currents, depth-averaged detided currents, wave height and peak period, wave direction, wind intensity and direction.
Fig. 13. The whole set of wave partitions is explored to compute the occurrence frequency of swell components (Hs > 1 m) coming from the eastern (θ < 200°, black circles in Fig. 13) and western (θ > 200°, red circles in Fig. 13) sectors. A clear seasonal variation is observed: western swells are clearly dominant between May and November, while eastern swells are more present during the austral summer. In addition, Météo France wind data at Tontouta airport from 2006 to 2015 are processed to depict the occurrence probability of strong eastern winds (W > 5 m/s), shown as green circles in Fig. 13. A statistical trend is observed, with more frequent strong wind events between September and March than during the rest of the year. According to the previous analysis, it is likely that the differences in lagoon circulation observed between the 2013 and 2015 (and 2011) experiments are indeed representative of a cyclic seasonal variation. Even if the available hydrodynamical data do not cover the full annual variations, one can expect a dominance of CP from May to September, when swells are predominantly from the west and winds are rather calm, and, conversely, a dominant TRP during the austral summer between December and April, when strong winds and eastern swells are more frequent. The renewal of lagoon waters should then be mainly controlled by inflow of open-ocean waters during the austral winter and by inflow from the south-eastern neighbouring lagoon through the Tenia passage the rest of the year.
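The monthly occurrence frequencies discussed above can be obtained from the hindcast partitions with a few lines of pandas, as sketched below. The thresholds follow the values quoted above (Hs > 1 m, 200° sector boundary), while the column names and the synthetic data are assumptions of the example.

```python
import numpy as np
import pandas as pd

def monthly_occurrence(df, condition):
    """Fraction of records per calendar month satisfying a boolean condition."""
    flag = condition(df).astype(float)
    return flag.groupby(df.index.month).mean()

# Synthetic WW3-like swell partitions (hs in m, dir in degrees from north).
t = pd.date_range("2006-01-01", "2015-12-31", freq="3h")
rng = np.random.default_rng(1)
ww3 = pd.DataFrame({"hs": rng.gamma(2.0, 0.8, len(t)),
                    "dir": rng.uniform(120, 260, len(t))}, index=t)

western = monthly_occurrence(ww3, lambda d: (d.hs > 1) & (d.dir > 200))
eastern = monthly_occurrence(ww3, lambda d: (d.hs > 1) & (d.dir < 200))

print("month  western  eastern")
for m in range(1, 13):
    print(f"{m:5d}  {western[m]:7.2f}  {eastern[m]:7.2f}")
```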
Table 1: Current profiler characteristics during the field experiments (asap: as soon as possible - ping per ensemble; nc: no theoretical value calculated by Sontek).

Site | Depth (m) | Sensor | Samp.-Aver. (min-min) | Vert. res. (m) | 1st bin (m) | Pings/ens | Time/ping (s) | SD (cm/s)
2013
N'Digoro | 14.84 ±0.05 | RDI 300 kHz | 10-1 | 1.5 | 3.73 | 100 | 0.55 | 1
Isié | 16.80 ±0.05 | RDI 300 kHz | 10-1 | 1.5 | 3.73 | 100 | 0.55 | 1
Platier | 3.53 ±0.05 | Sontek 3 MHz | 10-1.5 | 0.25 | 0.45 | asap | nc | nc
Tenia | 13.89 ±0.05 | RDI 300 kHz | 10-1 | 1.5 | 3.73 | 100 | 0.55 | 1
2015
N'Digoro | 15.71 ±0.05 | RDI 300 kHz | 10-1 | 1.5 | 3.73 | 100 | 0.55 | 1
Isié | 16.19 ±0.05 | RDI 300 kHz | 10-1 | 1.5 | 3.73 | 100 | 0.55 | 1
Platier | 3.31 ±0.05 | Sontek 3 MHz | 10-1.5 | 0.25 | 0.45 | asap | nc | nc
Tenia | 14.20 ±0.05 | RDI 300 kHz | 10-1 | 1.5 | 3.73 | 100 | 0.55 | 1
Table 3: Occurrence probabilities (in %) of the four selected patterns for day-averaged (da) currents.

Period | CP_da | TRP_da | IRP_da | SP_da
Sept.-Nov. 2013 | 54.4 | 29.8 | 10.5 | 3.5
Jan.-Apr. 2015 | 27.9 | 40.7 | 19.8 | 5.8
Total | 38.5 | 36.4 | 16.1 | 4.9
[Figure: occurrence probability (%) of the Classic, Tenia Reversal, Isié Reversal and Southward Patterns as a function of the incoming wave direction θww3 (°).]
Table 4: Day-averaged currents (U in cm/s) and corresponding estimated exchanged fluxes (F in 10³ m³/s) for four selected days corresponding to the four circulation patterns. Day-averaged values of wave and wind features are indicated for comparison.
[Figure 11 panels: depth-averaged and detided velocities at Platier, Tenia, N'Digoro and Isié, wave height Hs and peak period Tp, wave direction θww3, wind speed W and wind direction θW, for Oct. 5-8, 2013 and Feb. 6-9, 2015.]
Acknowledgements
This study was sponsored by the Action Sud MIO/IRD (A.S. OLZO and A.S. CROSS-REEF) and the ANR MORHOC'H (Grant No. ANR-13-ASTR-0007). The GLADYS group (www.gladys-littoral.org) supported the experimentation. We are grateful to all the contributors involved in this experiment. The authors are particularly indebted to David Varillon, Eric Folcher and Bertrand Bourgeois, whose efforts were essential to the deployment. A special thanks is extended to Jerôme Aucan for giving immediate and unconditional access to his data.
and outflow through the northern sites over more than 4 days, which totally overcomes the tidal dynamics (see Fig. 5). The clear regime shift from TRP to CP on March 14/15, while wave height and wind intensity remained strong, should probably be attributed to the slight shift in wind and wave directions. The proper validation of such a hypothesis would have required direct measurements of wave direction at the system entry, which are not available with the present dataset. However, this observation tends to highlight the increased sensitivity of the circulation system to the forcing direction during severe and extreme conditions.
Biogeochemical issues
The existence of well-defined circulation patterns in the reef-lagoon system is of primary importance to analyse and predict the biological productivity and its evolution [START_REF] Cuif | Wind-induced variability in larval retention in a coral reef system : A biophysical modelling study in the south-west lagoon of new caledonia[END_REF]. During CP conditions, the lagoon is mainly fueled by across-barrier water inputs. During the reef-barrier crossing, coral organisms and plankton-eating fishes tend, through ingestion and metabolism, to modify the nutrient composition (uptake and regeneration), to deplete the food content of the incoming waters by grazing and predation (bacterio-, phyto- and zooplankton content) [START_REF] Houlbreque | Picoplankton removal by the coral reef community of la prévoyante, mayotte island[END_REF][START_REF] Hamner | Behavior of antarctic krill (euphausia superba) : schooling, foraging, and antipredatory behavior[END_REF][START_REF] Cuet | Cnp budgets of a coral-dominated fringing reef at la réunion, france : coupling of oceanic phosphate and groundwater nitrate[END_REF], and to increase the release of non-living particles (mucus) by corals [START_REF] Cuet | Cnp budgets of a coral-dominated fringing reef at la réunion, france : coupling of oceanic phosphate and groundwater nitrate[END_REF]. Furthermore, from the exchanged volumes estimated hereinbefore, one can expect that CP would be less efficient in terms of water renewal than TRP, for which water enters the lagoon both through the Tenia passage and above the reef barrier. Longer residence times for CP may be associated with an increase of the phytoplanktonic productivity of the lagoon [START_REF] Delesalle | Residence time of water and phytoplankton biomass in coral reef lagoons[END_REF][START_REF] Andréfouët | Water renewal time for classification of atoll lagoons in the tuamotu archipelago (french polynesia)[END_REF], which can compete with cross-reef transformation processes. TRP is expected to foster the inflow from neighbouring south-eastern lagoons through the Tenia passage rather than open-ocean waters. Visual observations notably revealed the arrival of water loaded with appendicularians and jellyfishes during TRP.
Current shear generated by thermal and salinity fronts has been observed to induce strong plankton aggregation in the lagoon context [START_REF] Gomez-Gutierrez | Influence of thermo-haline fronts forced by tides on near-surface zooplankton aggregation and community structure in bahía magdalena, mexico[END_REF]. The present study demonstrates the presence of significant vertical shear induced by the wind forcing alone. The effect of such vertical shearing of the currents may significantly affect plankton migration. For instance, in the passages and reef openings, a strongly sheared current with inflow at the surface and outflow near the bed (as revealed by the present
"19040",
"20492",
"173623",
"938823",
"173214"
] | [
"191652",
"191652",
"191652",
"191652",
"191652"
] |
01562927 | en | [ "sdu" ] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01562927/file/17_locatelli_fabien.pdf | Fabien Locatelli, Damien Sous, Vincent Rey, Cristele Chevalier, Frédéric Bouchette, Julien Touboul, Jean-Luc Devenon
WAVE TRANSFORMATION OVER THE OUANO REEF BARRIER, NEW CALEDONIA
Keywords: coral reef barrier, wave transformation, infragravity waves, very low frequency
Introduction
A large part of tropical shores is protected by coral reef barriers. These living systems are both priceless and threatened. They offer a nourishing shelter for a large number of marine species. Coral reef barriers have also been shown to provide a natural and efficient protection against the erosion and submersion events induced by storms or tsunamis. Unfortunately, the reef colonies and all their benefits for biological and human populations are endangered by the combined effects of increasing anthropic pressure and climate change (sea level rise, ocean acidification and warming).
The transformation of incoming waves over the reef barrier is of primary interest to understand the mass, energy and nutrient transfers from the open ocean to the inner lagoon, which in turn affect the wealth and the renewal time of lagoon waters [START_REF] Monismith | Hydrodynamics of coral reefs[END_REF]. A range of processes involved in wave transformation over reef bottoms has been documented by in-situ and laboratory measurements ([START_REF] Bonneton | Tidal modulation of wave-setup and waveinduced currents on the Aboré coral reef, New Caledonia[END_REF][START_REF] Lowe | Spectral wave dissipation over a barrier reef[END_REF], Pomeroy et al. 2012) as well as by theoretical and numerical approaches ([START_REF] Massel | On the modelling of wave breaking and set-up on coral reefs[END_REF], Gourlay 2000, Van Dongeren et al. 2013), including reflection and refraction over the outside reef slope, breaking, dissipation and energy transfers toward lower and higher frequency bands.
An essential issue is to understand how the incoming swell energy is transferred to the shore or to the lagoon for fringing and barrier reef systems, respectively. Field and numerical studies demonstrated the importance of infragravity (IG) waves in reef environments (Pomeroy et al. 2012[START_REF] Van Dongeren | Numerical modeling of low-frequency wave dynamics over a fringing coral reef[END_REF]. In nearshore areas, these long waves are usually forced by the groupiness of incoming gravity short waves (SW), i.e. the envelope modulation of swell waves. The threshold between short and IG waves bands is generally about 0.04-0.06Hz (see [START_REF] Sénéchal | Observation of irregular wave transformations in the surf over a gently sloping sandy beach on the French Atlantic coastline[END_REF] for a discussion). Under certain circumstances, a distinction can be made within the IG band itself between IG and very low frequency (VLF) bands. The usual motivation for such distinction is that in the VLF band, free surface fluctuations are more controlled by topo-bathymetric features inducing standing and/or resonating oscillations. The IG/VLF frequency boundary varies depending on the authors and the studied sites (beaches or reefs) but is typically between 0.004 [START_REF] Ruessink | Bound and free infragravity waves in the nearshore zone under breaking and nonbreaking conditions[END_REF], Pomeroy et al. 2012[START_REF] Van Dongeren | Numerical modeling of low-frequency wave dynamics over a fringing coral reef[END_REF]) and 0.005Hz [START_REF] Sénéchal | Observation of irregular wave transformations in the surf over a gently sloping sandy beach on the French Atlantic coastline[END_REF][START_REF] Gawehn | Identification and classification of very low frequency waves on a coral reef flat[END_REF]. The lower limit for VLF (or for total IG when the IG/VLF distinction is not made) is usually taken at 0.001Hz [START_REF] Aagaard | Infragravity wave contribution to the surf zone sediment transport -the role of advection[END_REF]Greenwood 2008, Gawehn et al. 2016).
In reef environments, IG waves are generated around or just before the reef top. As the outer reef slope is steep (1:20 to 1:10), the generation mechanism is generally related to the movement of the breaking point (Pomeroy et al. 2012, [START_REF] Baldock | Dissipation of incident forced long waves in the surf zone -Implications for the concept of "bound"wave release at short wave breaking[END_REF]). At the inner end of a fringing reef, the shore can induce reflection of IG waves and the development of standing wave motions. When the forcing period of the incoming wave groups matches one of the natural seiching periods (eigenmodes) of the reef flat, resonance can appear ([START_REF] Péquignet | Forcing of resonant modes on a fringing reef during tropical storm Man-Yi[END_REF], Pomeroy et al. 2012b). [START_REF] Gawehn | Identification and classification of very low frequency waves on a coral reef flat[END_REF], who proposed a detailed classification of such processes in the case of fringing reefs, indicate that resonance over the reef flat is mostly observed for the longest fundamental mode around high tide.
Based on field measurements in the Ouano lagoon, New Caledonia, the present study aims to provide new insight into wave transformation processes over a coral barrier reef. Particular attention is paid to long waves, in order to understand how they propagate across the reef flat and are transmitted into the lagoon.
Experiments
Field site
The studied site is the Ouano lagoon, south-west New Caledonia. The field site is a coastal, weakly anthropized, narrow lagoon, nearly 30 km long, 10 km wide and 10 m deep (see Fig. 1). This reef-lagoon system is a typical "channel" lagoon with a strong aspect ratio of its horizontal dimensions: the lagoon is much longer than it is wide. Such a configuration implies a strong effect of the cross-reef fluxes induced by wave breaking on the lagoon dynamics and water renewal time. The lagoon is directly open to the ocean through two reef openings in the north-west section of the reef barrier. The southern opening is about 1 km wide and 10-20 m deep while the northern one is the deepest, down to -60 m, and 1.5 km wide. The lagoon is connected to the northern and southern lagoons by two passages, one toward the north and one toward the south. The northern passage is about 5 m deep while the southern passage is 10-15 m deep. Note that the latter is close to a further reef opening to the south. At high tide, the coral reef barrier is fully submerged, whereas at low tide it can be partly emerged depending on tide and wave conditions as well as large-scale sea level fluctuations. For further description of the Ouano reef-lagoon system, the reader should refer to [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics: a simple parameterization for coral lagoon circulation model, with application to the Ouano Lagoon, New Caledonia[END_REF].
Instrumentation and methods
The field measurements presented here are part of a large hydro-biogeochemical field campaign (OLZO and CROSS-REEF projects) conducted from April to August 2016. The instrumentation used for the present analysis is dedicated to the study of cross-reef wave transformation and propagation within the lagoon. Incoming wave conditions are provided by an S4 electro-current meter (8 m deep) on the reef foreshore, on 20-min bursts recorded every 3 h. Due to the failure of the embedded pressure sensor, free surface fluctuations at S4 are deduced from the velocity measurements using linear wave theory. Seven autonomous pressure sensors have been deployed from the reef top to the inner lagoon. The reef flat sensors CP2 to CP5 are OSSI Wave Gauge® sensors recording bottom pressure at 5 Hz. The lagoon pressure sensors are RBR Duo® at 2 Hz. The positions of the sensor array are given in Fig. 2. The energy spectra presented in Fig. 3 are computed continuously over 40-min bursts for each pressure sensor (see Sec. 3.1 for discussion). Figure 3 displays the average spectra over the April 25 to July 31 time period. The average spectrum of the wave field envelope measured by S4 is computed every 3 h over 20-min bursts using the Hilbert transform.
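The envelope spectrum mentioned above can be obtained from the short-wave signal through the Hilbert transform. The following sketch shows one possible way to do it on a single burst; the band limits, burst length and the synthetic bichromatic signal are chosen purely for illustration and do not reproduce the actual S4 processing.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, welch

fs = 5.0                                   # sampling frequency (Hz), assumed
t = np.arange(0, 20 * 60, 1 / fs)          # one 20-min burst

# Synthetic bichromatic swell: two ~15 s components whose envelope beats at
# the difference frequency 0.0040 Hz (an IG/VLF-range group frequency).
eta = 0.8 * np.sin(2 * np.pi * 0.0667 * t) + 0.8 * np.sin(2 * np.pi * 0.0707 * t)

# Keep only the short-wave (gravity) band before extracting the envelope.
sos = butter(4, [0.04, 0.3], btype="bandpass", fs=fs, output="sos")
eta_sw = sosfiltfilt(sos, eta)

envelope = np.abs(hilbert(eta_sw))         # modulus of the analytic signal
f_env, s_env = welch(envelope - envelope.mean(), fs=fs, nperseg=len(envelope))

print(f"envelope spectral peak at {f_env[np.argmax(s_env)]:.4f} Hz "
      f"(imposed group frequency: 0.0040 Hz)")
```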
Bathymetric data result from in-situ surveys performed during the campaign, combined with the ZONECO atlas. It should be noted that the acquisition of reliable bathymetric data in reef areas remains quite challenging, in particular in the zones exposed to waves. The bathymetry of the inner reef coral colony has not been properly documented and is given as indicative information in Fig. 2, from a sparse data survey and visual observations.
Results
Theoretical seiching modes
Fringing reefs have been observed to develop a specific long-wave dynamics, partly controlled by strong reflection at the shore, inducing standing or resonant VLF modes attached to the reef flat. Barrier reef systems are not that constrained, due to the presence of a fluid rather than a rigid boundary at the inner limit of the reef flat. They however show strong topo-bathymetric gradients potentially able to induce partial reflection of surface waves. It is well known that wave reflection in closed or semi-open basins leads to the development of standing waves, the so-called seiches. In order to estimate the seiching modes of our reef-lagoon system, we use the analytical work of [START_REF] Wilson | Seiches[END_REF], restated by [START_REF] Rabinovich | Seiches and harbour oscillations[END_REF].
Two types of standing oscillations can a priori be expected. First, the lagoon can be considered as a basin either closed or semi-open at the reef boundary. In the former case, the expected fundamental seiching wave has a node near the basin center and two anti-nodes at the lagoon boundaries. The oscillation period depends on the water depth and the exact bathymetric profile but, assuming a 12 m deep, 6000 m long flat basin, the fundamental frequency is f0 = 9.4×10⁻⁴ Hz. Higher nth harmonics are at n·f0. If the basin is considered semi-open, the seiching frequencies are ((2n+1)/2)·f0. The fundamental mode (n=0) shows a node at the reef barrier and an anti-node at the inner reef.
Second, the reef flat itself shows strong bathymetric gradients at its boundaries, in particular a nearly vertical step at its inner end. This induces discontinuities in the wave celerity which can possibly induce partial reflection of surface waves. The reef standing wave pattern would thus consist of a node and an anti-node at the outer and inner boundaries of the barrier, respectively. Possible seiching oscillations attached to the reef barrier would naturally be more dependent on depth, because relative depth fluctuations are much more important on the reef than in the lagoon. The fundamental mode of the reef barrier seiche is at f0 = 1.4×10⁻³ Hz.
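The quoted fundamental frequencies follow from the classical Merian formula for an idealized flat basin, as sketched below. The lagoon estimate uses the dimensions given in the text (12 m deep, 6000 m long); the reef-flat width and depth in the last lines are placeholders, since those values are not specified here.

```python
import numpy as np

g = 9.81

def seiche_frequency(length, depth, n=0, closed=True):
    """Merian formula for a flat basin of uniform depth.

    Closed basin:    f_n = (n + 1) * c / (2 L)
    Semi-open basin: f_n = (2 n + 1) * c / (4 L)
    with c = sqrt(g h) the long-wave celerity.
    """
    c = np.sqrt(g * depth)
    return (n + 1) * c / (2 * length) if closed else (2 * n + 1) * c / (4 * length)

# Lagoon treated as a flat 12 m deep, 6000 m long basin (values from the text).
f_closed = seiche_frequency(6000.0, 12.0, n=0, closed=True)
f_open = seiche_frequency(6000.0, 12.0, n=0, closed=False)
print(f"lagoon, closed basin : f0 = {f_closed:.1e} Hz")   # ~9e-4 Hz
print(f"lagoon, semi-open    : f0 = {f_open:.1e} Hz")

# Reef flat: quarter-wavelength mode; the width and depth are placeholders only.
reef_width, reef_depth = 500.0, 1.0
f_reef = seiche_frequency(reef_width, reef_depth, n=0, closed=False)
print(f"reef flat, semi-open : f0 = {f_reef:.1e} Hz")
```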
Each of these fundamental modes is in the usual VLF range. The selected 40-min time window for spectral analysis is long enough to resolve the expected VLF scales [START_REF] Gawehn | Identification and classification of very low frequency waves on a coral reef flat[END_REF]. It is emphasized that the theoretical values, which are used as a guide for the data analysis, must be considered keeping in mind the limitations of the idealized case. First, the predictions are provided for a flat basin and account neither for bathymetry variations, which can strongly affect the structure of long waves [START_REF] Michallet | Long waves and beach profile evolutions[END_REF], nor for the effect of bed friction. Second, this purely cross-reef longitudinal analysis discards any wave pattern developing along the long axis of the lagoon. Finally, one notes the possible overlap between seiche harmonics in the lagoon or over the reef, which increases the complexity of the analysis.
Wave classification
The first step of the data processing is the characterization of the free surface waves observed along the instrumented cross-shore transect. Figure 3 shows that in the short-wave (SW) or so-called "gravity" band (0.04<f<0.3 Hz), which combines swell and wind waves, the average spectrum depicts a swell-dominated environment. Peak periods between 14 and 22 s correspond to long South Pacific swells reaching the studied reef with a nearly normal incidence. Shorter waves with peak periods around 7 s generally correspond to more south-easterly waves. At higher frequencies, the free surface oscillations are mostly wind waves locally generated by trade winds modulated by thermal winds but, further over the reef flat, they can also result from the energy transfer to higher harmonics already observed here [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics: a simple parameterization for coral lagoon circulation model, with application to the Ouano Lagoon, New Caledonia[END_REF] or at other sites [START_REF] Masselink | Field investigation of wave propagation over a bar and the consequent generation of secondary waves[END_REF]. The averaged spectrum of the SW envelope over the reef foreshore (thin line in Fig. 3, left) shows that the 3-month averaged SW modulations increase with decreasing frequency (as denoted for instance by [START_REF] Van Dongeren | Numerical modeling of low-frequency wave dynamics over a fringing coral reef[END_REF]) and reach a peak around 0.0055 Hz, i.e. just above the selected IG-VLF limit. Envelope energy remains quite high in the upper VLF band and falls quickly below 0.0015 Hz (Fig. 3, right plot). Fig. 3 (right plot) shows a zoom on the low frequencies of the 3-month averaged spectra. The SW-IG limit is here taken at 0.04 Hz, as suggested by [START_REF] Van Dongeren | Numerical modeling of low-frequency wave dynamics over a fringing coral reef[END_REF] and well adapted to our long-swell dominated environment. The IG-VLF limit is less straightforward; the 0.005 Hz value proposed by Gawehn et al. for fringing reefs is depicted in Fig. 3 as a guide. For the reef flat sensors (CP2-CP6), a peak is observed in the VLF band around 0.0015 Hz. Below this value, VLF energy on the reef barrier quickly decreases, which can be directly related to the drop of SW envelope energy. However, the 0.0015 Hz value also appears in good agreement with the theoretical fundamental reef flat seiching oscillation, but further analysis is required to better identify the possible presence of standing wave motions. The energy decreases with increasing frequency and increases again in the IG band after a trough around 0.01 Hz. The overall energy decreases significantly when moving into the lagoon and this shoreward attenuation increases with frequency. The inner lagoon is mainly controlled by VLF motions. Specific spectral modulations are observed for the CP7 and CP8 sensors. They both show a common peak around 0.0035 Hz, but CP7 (mid lagoon) also shows the same 0.0015 Hz peak as the reef sensors, whereas CP8 (inner reef) is characterized by a clear energy minimum around 0.0023 Hz and is maximal in the lowest part of the VLF band, around 0.0009 Hz. The latter may fairly correspond to the anti-node of the fundamental seiching mode of the lagoon considered as a closed basin.
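The band splitting used throughout (SW, IG, VLF) amounts to integrating the surface-elevation spectrum over each frequency interval and taking 4*sqrt(m0) per band. A minimal sketch is given below; the band limits follow the values quoted in the text, while the depth-attenuation correction needed in the SW band for a bottom-pressure record is deliberately omitted for brevity.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"SW": (0.04, 0.30), "IG": (0.005, 0.04), "VLF": (0.001, 0.005)}

def band_significant_heights(eta, fs, nperseg=None):
    """Significant height 4*sqrt(m0) of each band from a surface-elevation
    (or hydrostatic head) record; eta in metres, fs in Hz."""
    f, s = welch(eta, fs=fs, nperseg=nperseg or len(eta), detrend="linear")
    df = f[1] - f[0]
    return {name: 4.0 * np.sqrt(np.sum(s[(f >= lo) & (f < hi)]) * df)
            for name, (lo, hi) in BANDS.items()}

# Synthetic 1-h reef-top record: a 15 s swell plus a small 100 s IG component.
fs = 5.0
t = np.arange(0, 3600, 1 / fs)
eta = 0.4 * np.sin(2 * np.pi * t / 15) + 0.1 * np.sin(2 * np.pi * t / 100)
print(band_significant_heights(eta, fs))
```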
Wave climate
The analysis of the surface wave climate in the reef-lagoon system is based here on the data recovered during the month of May 2016, during which various wave, tide and sea level conditions were encountered. Time series of the significant wave height for VLF, IG and SW are plotted in Fig. 4, together with the 1-h and 24-h averaged mean sea level over the reef flat (CP5 sensor, top plot).
Let us first focus on the SW transformation depicted in Fig. 4. The results confirm the well-known strong filtering role played by the reef on the incoming swell energy ([START_REF] Hardy | Field study of wave attenuation on an offshore coral reef[END_REF], Young 1996, Hearn 1999). The SW amplitude decreases when propagating over the reef barrier due to the combined effects of breaking, frictional dissipation and harmonic transfers. Each of these processes is governed by the water depth over the reef. The reef barrier thus acts as a tide-controlled filter of SW energy: the lower the depth, the stronger the attenuation. Wave breaking occurs in most cases between S4 and CP2, leading to a strong decrease of SW energy. It is only during very calm wave and high tide conditions that wave breaking occurs higher on the reef flat, as already observed by [START_REF] Chevalier | Impact of cross-reef water fluxes on lagoon dynamics: a simple parameterization for coral lagoon circulation model, with application to the Ouano Lagoon, New Caledonia[END_REF].
A series of observations on IG and VLF surface wave dynamics can be made from Fig. 4. One notes first that the overall VLF, IG and SW significant heights are well correlated: strong IG/VLF periods correspond to strong incoming waves. During the very strong swell event on May 22 (Hs=6m, Tp=16s), the IG and VLF amplitudes reach 1.2 and 0.7 m, respectively. An example of free surface elevation recorded at CP5 on May 22 is depicted in Fig. 5. One notes the combination of VLF, IG and SW fluctuations with periods of the order of 10 min, 1-2 min and 15 s, respectively. The wave pattern is here quite different from the VLF waves measured by [START_REF] Gawehn | Identification and classification of very low frequency waves on a coral reef flat[END_REF] on the Roi-Namur Island fringing reef. VLF waves are here more regular and do not show the bore waveform observed by Gawehn et al. These long oscillations rather appear to act as carrier waves for the higher frequency wave field resulting from the superposition of IG and SW. Figure 4 shows that, when the incoming wave energy is significant (typically Hs>2m), the IG wave energy generally decreases along the reef transect toward the inner lagoon. This process is modulated by the tidal elevation, with more energy propagating along the barrier as the tide level rises. This trend is less straightforward when the swell is small and/or the tidal range is large. For the higher tidal ranges (see e.g. the low tides of May 8-11), the reef top is at least partly emerged at low tide. A very strong dissipation of IG waves is observed on the offshore part of the reef flat and nearly no waves are observed after CP3. At high tide, the CP2-CP6 sensors show very close IG amplitudes, indicating weak IG wave dissipation when the water depth is high and the incoming swell rather moderate. The VLF dynamics is also strongly dependent on the incoming swell amplitude, but the effect of the tide is much weaker than for IG. The general trend is a decay along the instrumented transect. However, for the stronger swells (May 14-15 or 22) the VLF energy is nearly steady across the reef barrier (CP2-CP6 sensors). The dependence of the IG waves produced on the reef top on the incoming wave field is best predicted by the parameter Hs²Tp measured at the reef foreshore (see Fig. 6, A), similarly to the data obtained by [START_REF] Inch | Observations of nearshore infragravity wave dynamics under high energy swell and wind-wave conditions[END_REF] on a dissipative sandy beach. The VLF amplitude over the reef top (Fig. 6, B) shows a weaker dependence on incoming energy, with more scatter. The VLF/IG ratio over the reef top is around one third. The same analysis is performed for the inner reef sensor (CP8). The general trend is similar, i.e. more incoming energy produces more IG/VLF within the lagoon, but the role of the water depth over the reef top is much more important. This is particularly visible for IG waves at CP8: the higher the free surface over the reef, the stronger the IG wave motions inside the lagoon. This confirms the strong dissipation of IG waves on the reef barrier, modulated by the water depth. As the water level over the reef decreases, bottom friction over the reef increases the damping of the IG energy produced in the surf zone and further reduces the IG waves propagating within the lagoon. The same tendency is observed for VLF at CP8, although with a larger spread. One notes also that, in a general manner, long waves are strongly attenuated in the lagoon itself.
This is particularly true for VLF, for which a minimal threshold of energy flux is necessary to induce clear fluctuations whereas VLF waves over the reef top are observed to rise as soon as the incoming SW energy increases.
An additional observation can be performed on the reef top MWL (CP3) depicted in Fig. 4: the strong wave setup on the reef top, up to 40cm during swell events, which will be involved in the development of significant pressure gradients and currents across the reef barrier [START_REF] Hearn | Wave-breaking hydrodynamics within coral reef systems and the effect of changing relative sea level[END_REF][START_REF] Bonneton | Tidal modulation of wave-setup and waveinduced currents on the Aboré coral reef, New Caledonia[END_REF].
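The dependence on the offshore parameter Hs²Tp can be quantified with a simple regression. The sketch below fits a square-root law between the reef-top IG height and the foreshore Hs²Tp proxy; the functional form, coefficients and synthetic data are chosen here for illustration only and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic bursts: offshore significant height (m), peak period (s) and a
# reef-top IG height loosely proportional to sqrt(Hs^2 Tp) plus noise.
hs = rng.uniform(0.5, 6.0, 300)
tp = rng.uniform(7.0, 22.0, 300)
proxy = hs**2 * tp                                   # ~ incoming energy flux
hs_ig = 0.06 * np.sqrt(proxy) * (1 + 0.15 * rng.standard_normal(300))

# Least-squares fit through the origin of hs_ig = a * sqrt(Hs^2 Tp)
a = np.sum(hs_ig * np.sqrt(proxy)) / np.sum(proxy)
r = np.corrcoef(hs_ig, np.sqrt(proxy))[0, 1]
print(f"fit: Hs_IG ~ {a:.3f} * sqrt(Hs^2 Tp)   (r = {r:.2f})")
```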
Wave transformation
Figure 7 depicts the IG and VLF transformations along the studied transect. Let us first focus on the IG wave evolution. Between CP2 and CP3, the IG dynamics depends on both the incoming SW amplitude and the tide elevation. When the incoming swell is strong (typically greater than 2.5 m at the foreshore), the breaking zone is on the outer reef slope. Consequently, most IG waves are produced well upstream of the reef top and a significant dissipation is already observed between CP2 and CP3. This process depends on the tide, and the dissipation decreases as the water level increases. However, in all strong-swell cases, the ratio between the significant IG heights at CP3 and CP2 remains less than 1. For moderate and small incoming SW, the ratio can reach values up to 1.9. This indicates that, for such moderate wave events, production of IG waves occurs between CP2 and CP3. The tendency is again controlled by the tide: when the still water level lowers, the breaking zone and IG production shift offshore and the dissipation process can dominate. Further onshore on the reef flat (CP3 to CP4 and CP4 to CP5), the transformation, which should mainly result from the competition between production and dissipation, is now fully controlled by tidal fluctuations. At low tide, the dissipation is very strong and IG wave amplitudes are observed to decrease by more than 90% over 120 m (between CP3 and CP5). Between mid and high tide, the tidal effect on wave transformation is less intense than between low and high tide and an amplification of IG waves can be observed. This non-linear response to the tide indicates that, for the lower levels, IG waves are only produced offshore of the reef top and the IG dynamics is only controlled by dissipation, while for higher mean water levels, IG production is still active over the reef flat. Between CP5 and CP6, one notes a nearly unimodal dependence of the IG transformation on the tide elevation and a systematic damping. This trend confirms the previous observations of the progressive transition from production to dissipation regimes for IG waves across the reef flat. Within the lagoon, i.e. between CP6 and CP8, the tide effect is nearly invisible but a strong damping (about 70%) is observed. Such a damping can be attributed to bottom friction processes, both over the sand bottom and the inner reef of the Ouano lagoon.
For VLF waves, the trend is generally the same, i.e. an overall decrease from the reef top to the inner reef. The tidal effect is less marked. Between CP2 and CP3 (outer reef flat), the influence of the swell previously noticed for IG waves is still valid: for strong swells, the VLF amplitude generally decreases while for smaller swells, strong amplification can be observed. This trend can again be related to the offshore shifting of the breaking zone and VLF production during strong swell conditions. However, more importantly, a striking observation for each reef flat sensor is that VLF waves are amplified in a number of cases. This naturally raises the question of the contribution of standing or even resonant motions in the VLF band. The question can also be raised for IG waves, but the first, and consistent, analysis performed hereinbefore only assumes purely propagating waves.
Standing vs progressive wave patterns
To better understand the long-wave dynamics, and in particular to discriminate the possible presence of standing wave motions in our system, we use part of the approach proposed by Pomeroy et al. (2012b) and [START_REF] Gawehn | Identification and classification of very low frequency waves on a coral reef flat[END_REF]. The analysis presented in Fig. 8 is first based on the calculation of the magnitude-squared coherence between the signals recorded at two distinct sensors:

$C_{xy}(f) = \dfrac{|G_{xy}(f)|^{2}}{G_{xx}(f)\,G_{yy}(f)},$

where $G_{xy}(f)$ is the cross-spectral density between x and y, and $G_{xx}(f)$ and $G_{yy}(f)$ are the auto-spectral densities of x and y, respectively. When the coherence is close to one, the signals are in direct relationship. The coherence is computed and averaged over the month of May (Fig. 8, A). Then two specific events (May 15, 16:00 and May 22, 6:00) are selected, with 40-min bursts taken at the same depth over the reef top (0.7 m) in two different wave conditions: strong swell (Fig. 8, B) and very strong swell (Fig. 8, C).
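The coherence and phase-lag indicators of Fig. 8 can be computed with standard cross-spectral estimators, as sketched below. The 40-min burst length and 2 Hz sampling follow the values given in the text, whereas the segment length, the synthetic signals and the sign convention of the phase are assumptions of the example.

```python
import numpy as np
from scipy.signal import coherence, csd

fs = 2.0                                   # lagoon pressure sensors sample at 2 Hz
t = np.arange(0, 40 * 60, 1 / fs)          # one 40-min burst
rng = np.random.default_rng(3)

# Synthetic example: a 128 s long wave seen by two sensors, the second one
# delayed by a quarter period (32 s), plus uncorrelated noise at each sensor.
x = np.sin(2 * np.pi * t / 128) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * (t - 32) / 128) + 0.3 * rng.standard_normal(t.size)

nseg = 512
f, coh = coherence(x, y, fs=fs, nperseg=nseg)   # |Gxy|^2 / (Gxx Gyy)
_, gxy = csd(x, y, fs=fs, nperseg=nseg)
phase = np.angle(gxy)                           # phase of the cross-spectrum

i = np.argmin(np.abs(f - 1 / 128))
print(f"f = {f[i]:.4f} Hz, coherence = {coh[i]:.2f}, "
      f"|phase lag| = {abs(phase[i]):.2f} rad (quarter period ~ pi/2)")
```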
As emphasized by Pomeroy et al. (2012b), the coherence cannot itself reveal the presence of a standing wave because "... any wave form that propagates across a basin and maintains a (reasonable) consistent form will have a high coherence.". A second indicator is thus used to better understand the relationship between the long-wave patterns at two different measurement points: the phase lag between the two signals (Fig. 8, D and E). Special attention is paid to the phase lags for the CP2-CP3, CP3-CP4, CP2-CP6 and CP6-CP8 sensor pairs. If standing wave motions develop on the reef flat or within the lagoon following the theoretical approach (Sec. 3.1), neighboring sensors CP2-3-4 should be nearly in phase (lag = 0) while CP2-CP6 or CP6-CP8 should be out of phase (lag around π), with strong coherence in each case. Note that the reef flat sensors are not strictly at the limits of the expected seiching domains, so differences from these theoretical values are expected. The month-averaged coherence for the reef flat sensors (CP2 to CP5) in Fig. 8 shows that, as expected, the coherence generally increases as the frequency decreases. For the higher part of the IG band (0.03<f<0.05 Hz) and further for gravity waves, strong wave transformations occur after the reef top, as observed from the smaller coherences for the CP3-CP4 and CP4-CP5 sensor pairs. In the lower part of the IG band (0.005<f<0.03 Hz), the coherence remains high all across the reef flat. It still increases when entering the VLF band for the CP3-CP4 and CP4-CP5 pairs but is a bit lower on the outer reef flat slope between CP2 and CP3. The averaged coherence drastically decreases further onshore. Between CP5 and CP6, a moderate coherence is observed mainly for low VLF, while between CP6 and CP8 such a peak is hardly visible.
The analysis of the two selected events reveals the importance of the swell energy. For the 2.4 m swell depicted in Fig. 8 B and D, strong coherence is observed for CP2-3-4 in both the IG and VLF bands. The CP2-CP6 coherence strongly falls when entering the VLF band. This indicates that, even if VLF energy is transmitted toward the lagoon, the wave forms are altered. The phase lags for the same event (Fig. 8, D) show ramps for the reef flat sensors. This linear dependency on frequency indicates the presence of progressive waves in both the VLF and IG bands. In the inner lagoon (CP6-CP8), no clear cross-shore wave propagation can be identified. The pattern is drastically different for the 5.2 m swell event of May 22. The coherence for sensors CP2-3-4 is still very strong (even increased). In the VLF band, significant coherence peaks are observed at 0.002-0.003 Hz and 0.0005 Hz for the CP2-6 and CP6-8 sensor pairs. Phase lags show similar frequency-ramp patterns for the reef flat sensors, but the frequencies associated with the coherence peaks in the VLF band reveal: (i) a π/4 phase lag between CP6 and CP8 around 0.0005 Hz and (ii) a lag around 3π/4 between CP2 and CP6 in the 0.0014-0.0019 Hz range. The former fairly corresponds to the fundamental seiching mode of the lagoon considered here as a semi-open basin, while the latter is in the range of the fundamental mode expected for seiching oscillations on the reef flat domain due to partial wave reflection at the reef barrier boundary. Further analysis is now required to better discriminate such a bimodal standing-wave pattern and, in particular, to understand its response to incoming wave conditions and tidal fluctuations.
Conclusion
The present study is part of large-scale experiments performed on the Ouano barrier reef-lagoon system, New Caledonia. A long-term field campaign has been carried out to monitor the wave transformation from the reef foreshore to the inner lagoon. The measurements, based on a cross-shore network of pressure sensors, revealed the presence of strong long-wave dynamics over the reef flat, divided into the so-called infragravity (IG) and very low frequency (VLF) waves. The IG/VLF energy produced on the reef is directly related to the incoming swell energy flux. The filtering role played by the reef barrier, through wave breaking and frictional processes, strongly depends on the water depth: the lower the tide, the stronger the dissipation.
A dedicated analysis has been carried out to identify the possible presence of standing long waves in our system. It appears that, at least for the most energetic event recorded (Hs=4.2m, Tp=15s), two patterns of seiching modes can be observed. The first is associated with the fundamental oscillation of the lagoon, considered as a semi-open basin, in its transverse direction. The second is attached to the reef barrier itself, possibly induced by partial reflection of the long waves by the bathymetric step at the reef/lagoon boundary. Further research work has to be engaged to confirm these observations and to better understand the conditions for VLF/IG wave development and attenuation. In addition, particular attention will be paid to the role of the cross-reef currents, which are strong and vertically sheared, on the wave dynamics.
Figure 1. The Ouano reef-lagoon system, New Caledonia. Light grey zones indicate most of the reef colonies. Positions of the reef barrier sensors (S4, CP2-CP6) are distorted; precise values are given in Fig. 2.
Figure 2. Experimental setup along the studied cross-shore transect (the upper plot is a zoom on the reef barrier area). Inner reef bathymetry is indicative (for X>6000m) as no survey data is available. Note that the CP7 sensor is not aligned with the other sensors (see Fig. 1).
Figure 3. Average wave spectra over the whole acquisition period. Left: SW and envelope spectra deduced from velocity measurements over the outer reef slope (S4, 20-min spectra every 3 h). Center and right: full spectra for the pressure sensors (CP2-CP8, continuous 1-h measurements). The right plot is a zoom on the low frequency band.
Figure 4. Time series of VLF, IG and SW significant wave amplitude continuously computed over 1-h bursts. The incoming swell significant wave height at the reef foreshore (S4) is computed over 20-min bursts every 3 h. The top plot depicts the Mean Water Level (1-h averaged at CP5) and the 24-h averaged levels measured at CP3 and CP8.
Figure 5. May 22 swell event, CP3 reef top measurements. Left: free surface elevation from pressure measurements. Right: spectral density of energy.
Figure 6. IG and VLF significant wave heights vs SW incoming energy flux for the reef top CP3 (A, B) and inner reef CP8 (C, D) sensors for the month of May 2016. The color levels depict the water depth above the reef top. Note the difference in y-axis between the A, B and C, D plots.
Figure 7. Long-wave transformation along the cross-shore transect. Top and bottom plots are IG and VLF, respectively. Each plot is a ratio of significant wave heights between two consecutive sensors vs the water depth over the reef top. Color levels depict the SW significant height at the foreshore sensor (S4).
Figure 8. A: Averaged magnitude-squared coherence for neighboring sensors over the month of May. B and C: Magnitude-squared coherence for 40-min bursts on May 15, 16:00 and May 22, 6:00. D and E: Phase lag for 40-min bursts on May 15, 16:00 and May 22, 6:00. For both events, the mean water depth above the reef top is 0.7 m. The vertical dashed line on each plot corresponds to the IG-VLF threshold.
Acknowledgements
This study was sponsored by the EC2CO OLZO program (CNRS INSU), the OLZO and CROSS-REEF Action Sud (MIO IRD) and the ANR project MORHOC'H (ANR-13-ASTR-0007). The Noumea IRD center and the GLADYS group supported the experimentation. We are grateful to all the contributors involved in this experiment. The authors are particularly indebted to David Varillon, Eric Folcher and Bertrand Bourgeois, whose efforts were essential to the deployment, and to Pascal Douillet for providing free access to bathymetric data.
"19040",
"737004",
"20492",
"749528",
"18868",
"173623"
] | [
"191652",
"191652",
"191652",
"191652",
"527120",
"191652",
"191652"
] |
01562908 | en | [ "sdu" ] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01562908/file/18_sous_damien.pdf | Damien Sous, Lise Petitjean, Frédéric Bouchette, Vincent Rey, Samuel Meulé, François Sabatier
Groundwater fluxes within sandy beaches: swash-driven flow vs cross-barrier gradients
Keywords: swash zone, groundwater dynamics, watertable gradients, seasonal survey, field experiment
Introduction
Groundwater fluxes through sedimentary beaches or barriers are expected to play a great role in the exchanges of fresh and salt water between the open sea, lagoons and aquifers [START_REF] Elad | Tide-induced fluctuations of salinity and groundwater level in unconfined aquifers-field measurements and numerical model[END_REF][START_REF] Hamada | Simple estimate of filtration rates on a sandy beach[END_REF] as well as in the transport of dissolved nutrients and pollutants [START_REF] Sawyer | Stratigraphic controls on fluid and solute fluxes across the sediment-water interface of an estuary[END_REF]. From field measurements on a microtidal sandy beach, [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF] recently demonstrated the presence of a groundwater circulation cell under the swash zone. These observations raised once again the key question of the effect of the landward fluctuations of the watertable on the beach groundwater flows. This debate was earlier fed, in particular, by the contrasting results provided by the experiments of [START_REF] Turner | Groundwater fluxes and flow paths within coastal barriers: Observations from a large-scale laboratory experiment (BARDEX II)[END_REF] and the numerical simulations of [START_REF] Li | Wave-induced beach groundwater flow[END_REF]. Both observed infiltration at the upper swash and exfiltration at the lower swash, regardless of the artificially imposed seaward- or landward-directed hydraulic gradients across the barrier. However, [START_REF] Turner | Groundwater fluxes and flow paths within coastal barriers: Observations from a large-scale laboratory experiment (BARDEX II)[END_REF] showed a groundwater circulation under the swash zone nearly isolated from the artificial back-barrier fluctuations, while [START_REF] Li | Wave-induced beach groundwater flow[END_REF] stated that the inland watertable elevation affects the location of the divergence point at the top of the swash zone.
Using an extensive network of buried pressure sensors from the surf zone to the dune base, the present field study brings new insight into this issue. The second section of the communication is dedicated to the description of the field experiments, while the third section presents the results, including a particular focus on a submersion storm event. Concluding remarks and prospects are given in the last section.
Experiments
This work is part of a larger-scale, high-resolution hydro-morphodynamic field campaign (ROUSTY 2014) conducted from November 2014 to February 2015 on the Rousty beach (Camargue, France). The field site is an open and linear microtidal wave-dominated beach oriented west-east. The sedimentary material consists of a fine sand ~200 µm in size. The hydraulic conductivity is 0.016 cm/s [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF]. The main geomorphic features are a 50-70 m wide mild-slope beach, an uneven barrier dune up to 5 m high, and a double-barred system within 5 m of water depth. The instrumentation used here consists of a cross-shore network of buried pressure sensors depicted in Figure 1. Three synchronized high-frequency (10 Hz) pressure sensors (named G1m, G3t and G5t) are deployed under the swash zone during a 24-h period corresponding to the decay of a moderate storm event (see [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF] for details about the full swash zone experiment). In addition, five autonomous sensors are buried under the surf zone (MWL) and inland within the upper beach (WT1 to WT4) to measure groundwater at lower frequency (5 Hz for MWL and 0.007 Hz for WTs). Measured data are averaged over 30-min time windows. Each sensor was repeatedly positioned by DGPS and tacheometer during the experiment, and calibrated in a laboratory basin both before and after the experimental campaign. The sensors are protected by a sediment net to prevent sediment infiltration and sensor membrane damage [START_REF] Turner | Rapid water table fluctuations within the beach face: Implications for swash zone sediment mobility?[END_REF]. Note that the WT1 and WT2 measurements stop on December 4 (battery limitations). Offshore waves and water level (named SWL) are provided by an additional ADCP in 7 m depth.
Topo-bathymetric surveys have been performed throughout the experiment. Three beach profiles (Nov. 20, Dec. 4 and Dec. 14) are plotted in Fig. 1 to support the discussion of the results.
Figure 1. Experimental setup and three selected cross-shore beach profiles. G1m, G3t and G5t are high-frequency sensors deployed only for the storm of Dec. 12-14, while WT1 to WT4 are deployed for the full experiment (Nov. 20, 2014 to Feb. 10, 2015). Minimal, mean and maximal values of the Mean Water Level over the whole experiment are computed from the MWL sensor.
A number of inter-connected lagoons are present inland, in the Parc Naturel Régional de Camargue. The exchanges between lagoons are fully controlled by a system of gates and channels, such that there is no direct relationship between rainfall and back-barrier lagoon levels. The data analysis will thus be mainly carried out considering the inner boundary conditions measured at the WT4 sensor.
Results
Sous et al. (2016) analyzed a high-resolution data subset in the swash zone. This study was limited in time to a single winter storm and focused on the swash-groundwater interactions. Here, we extend this analysis to the whole experiment in order to highlight the respective roles played by the swash dynamics and the inland watertable gradients in the groundwater circulation through the beach.
Data overview
Time-averaged free surface dynamics
The pressure head data recovered during the whole experiment are depicted in Fig. 2. The 30-min time-averaged free surface elevation in the inner surf zone (MWL sensor) shows significant fluctuations, which mainly result from the combination of large-scale SWL oscillations, tide and wave activity (wave setup). The most important driver is clearly the SWL, which shows oscillations of up to 1 m in response to the variations of atmospheric pressure and other large-scale coastal features. The tidal range is between 10 and 30 cm, typical of the micro-tidal conditions encountered in this region of the Mediterranean basin. The wave setup measured at the MWL sensor (i.e. the difference between the head at MWL and the offshore time-averaged elevation SWL) is relatively small, with a maximal value of about 0.15 m.
Groundwater dynamics
The response of the beach aquifer is monitored thanks to the network of buried cross-shore sensors (see Fig. 2). The first observation is that the upper beach head, measured by sensors WT3-4, is nearly systematically the greatest. The watertable is thus higher than the surf zone MWL, by about 20-40 cm. This is most likely related to the presence of a high watertable level at the inland boundary, forced by the back-barrier lagoons (Vaccarès lagoon). Such an observation may correspond to the well-known superelevation of the watertable widely observed at other sites (see [START_REF] Turner | Monitoring groundwater dynamics in the littoral zone at seasonal, storm, tide and swash frequencies[END_REF][START_REF] Raubenheimer | Tidal water table fluctuations in a sandy ocean beach[END_REF][START_REF] Kang | Watertable overheight due to wave runup on a sandy beach[END_REF]). Note that the respective roles of wave forcing and large-scale gradients on the watertable elevation are still poorly documented in the literature, which is part of what is driving the present study.
Two additional typical observations can be made. First, the sand soil acts as a low-pass filter. The day-to-week scale fluctuations of the Still Water Level easily propagate into the beach, inducing watertable oscillations of the same order of magnitude. At higher frequency, the tidal fluctuations are damped when propagating into the beach aquifer (Turner et al. 1997, [START_REF] Turner | Monitoring groundwater dynamics in the littoral zone at seasonal, storm, tide and swash frequencies[END_REF][START_REF] Raubenheimer | Tidal water table fluctuations in a sandy ocean beach[END_REF]). At our microtidal site, the tide effect becomes barely measurable at the upper beach sensors (WT3 and WT4). Second, the comparison between the WT1 and WT2 sensors located at the same cross-shore position shows that the pressure field is very close to hydrostatic equilibrium below the upper beach, with a departure from hydrostaticity of about 1%. This is in contrast with the measurements performed in the swash groundwater [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF], for which the deviation from the hydrostatic balance can reach 20% and thus induces significant vertical flows below the beach face, but it confirms that, above the action zone of the swash, the pressure field is nearly hydrostatic and the related groundwater motions are horizontal.
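The departure from hydrostatic equilibrium and the associated vertical Darcy flux can be estimated directly from two sensors buried at the same cross-shore position, as sketched below. The sensor elevations and pore pressures are invented values chosen to give a ~1% departure; only the hydraulic conductivity (0.016 cm/s) is taken from the text.

```python
RHO_G = 1000.0 * 9.81        # rho*g (Pa/m), fresh water assumed
K = 0.016e-2                 # hydraulic conductivity (m/s), value quoted in the text

z_upper, z_lower = -0.50, -1.50          # sensor elevations (m), illustrative
p_upper, p_lower = 14_200.0, 24_108.0    # measured pore pressures (Pa), illustrative

dp_measured = p_lower - p_upper
dp_hydrostatic = RHO_G * (z_upper - z_lower)
departure = dp_measured / dp_hydrostatic - 1.0          # ~1% below the upper beach

# Total head h = p/(rho*g) + z ; Darcy's law gives the vertical seepage velocity.
h_upper = p_upper / RHO_G + z_upper
h_lower = p_lower / RHO_G + z_lower
w = -K * (h_upper - h_lower) / (z_upper - z_lower)      # >0 means upward seepage

print(f"departure from hydrostaticity: {departure:+.1%}")
print(f"vertical Darcy velocity: {w * 3.6e6:+.2f} mm/h")
```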
The general trend is that the watertable follows the low-frequency evolution of the SWL, which is clearly the dominant driver of beach aquifer oscillations. This simple relationship is however affected by a number of secondary mechanisms. One notes first that the watertable can rise independently of the free surface levels and/or wave activity, see e.g. November 25, December 4-5 or January 21. This is largely explained by the rain input, plotted in Fig. 2, bottom plot. It is remarkable that, in most cases, the watertable rise is much greater than the rainfall amount. This probably indicates a cumulative effect of the direct water input on the beach and the lateral loading from the back-barrier lagoons, which drain water from the inland watershed. For some events, e.g. November 28-29 and January 19, both MWL setup and rainfall combine to elevate the watertable, which makes it difficult to identify their relative contributions.
It is also observed that the relation between MWL and watertable rises depends on the initial elevation. During low-level periods, events of rapid MWL increase induce only a small response of the watertable. This is for instance observable for the events of December 9 and 27 or January 12, for which MWL rises of about 20 to 30 cm produce watertable rises of only a few centimeters. By contrast, when the overall water level is initially high, or when the MWL increase is long enough (over several days), the rises of the MWL and of the watertable are generally of the same order of magnitude, see November 28, December 13, January 15-16 or February 3. While not fully understood, this observation should be related to beach saturation issues, the beach acting as a porous reservoir.
Submersion event
Particular attention is thus paid to the period from November 27 to December 3, during which the combined effects of wave activity and a strong SWL/MWL setup led to a major submersion event. Figure 3 depicts the evolution of pressure heads within the sand (top plot), the deep-water wave height (middle plot) and the rainfall (bottom plot) over this selected period.
The analysis of the groundwater behavior during the submersion should be made with caution, as both the beach face and the upper beach experienced very strong morphological evolution. This is clearly visible when comparing the morphological surveys performed before and after the storm (bed levels in blue and red lines in Fig. 1). In addition to this overall morphological evolution, note that the upper beach is usually covered by three-dimensional mobile deposits of aeolian sand, with amplitude and extension of about 20-50 cm and a few meters, respectively. These wind-induced bedforms, visible in Fig. 1 on the Nov. 20 profile, are made of loose aeolian sand which moves under wind action but are generally erased during large submersion events.
The initial situation before the submersion event, shown in Fig. 2, corresponds to a slightly tilted watertable below the upper beach, with all WT sensors indicating close but seaward-decreasing pressure heads. The moderate rises of the watertable on November 22 and 25 are associated with a wave event and with rainfall, respectively.
A series of successive phases is now selected to depict the submersion process, see Phases S1 to S6 in Fig. 3. During the night of November 27-28 (Phase S1, see Fig. 3), a rapid increase of the pressure head is observed for each WT sensor. The rise is stronger seaward (38 cm for WT1 and 17 cm for WT4). This event occurs during a period of overall increase of wave activity (see Fig. 2 for an overview) and rising tide, but the watertable rise is remarkably rapid considering the slow evolution of the forcings. A careful observation of Phase S1 in Fig. 3 reveals that the watertable rise is not regular but rather results from successive abrupt increases followed by steady phases. Spectral analysis [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF] indicates that Rousty beach is dissipative under storm conditions. The swash zone is clearly dominated by low-frequency infragravity motion. Such long waves are able to successively load the beach aquifer, which explains the sequential watertable loading observed during Phase S1. The comprehensive characterization of the saturation process, including capillary fringe dynamics, would require a much more sophisticated instrumental apparatus [START_REF] Heiss | Swash zone moisture dynamics and unsaturated infiltration in two sandy beach aquifers[END_REF].
A nearly 24-h steady phase (Phase S2, see Fig. 3) is then observed, starting on the morning of November 28. The pressure head measured by the WT1-2 sensors is close to the pre-storm bed level at this location, which indicates that the beach is here probably fully saturated and that the watertable has risen up to the bed. The facts that the watertable remains coupled with the sand bed for several hours and that the observed decoupling events are rapidly compensated in one or a few successive steps indicate the regular input of IG-driven swash events. Landward (WT3-4), the pressure head does not appear to precisely coincide with the pre-storm bed level: the watertable remains a few centimeters below the sand. However, according to the estimation of [START_REF] Turner | Rapid water table fluctuations within the beach face: Implications for swash zone sediment mobility?[END_REF], the capillary fringe at Rousty beach should be several tens of centimeters high, so the sand bed itself is very close to saturation.
Phase S3 corresponds to a strong rise of the water level, induced by the combined effects of increasing wave activity and the incoming tide. The WT1-2 sensors show a moderate increase in pressure head followed by a larger decay (Phase S4). As the general morphological trend is a landward shift of the berm during the storm, it is likely that this rise corresponds to the passage of the berm apex. The sand remains fully saturated due to the regular arrival of submersion infragravity waves, and the pressure measurement performed here every 2.5 min simply follows the evolution of the sand bed. During the same phase, the upper-beach sensors WT3-4 show a much stronger pressure increase, of the order of 30 cm. A steady phase is then observed (Phase S4), during which both WT3 and WT4 sensors show similar heads. This probably indicates that the runnel is now filled with water.
Later on, a 31h-long phase (Phase S5, Nov. 29, 19:00 to Nov. 30, 03:00) is observed during which the time-averaged component of the pressure head is remarkably steady under the berm (WT1-2), with a watertable about 20 cm below the sand surface. Regular fluctuations, of the order of 10 cm, are superimposed on this steady mean value. These fluctuations are typical of the IG-driven motions in the upper swash observed by [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF] for another storm event (see also the next section for further details). In the upper beach, the rapid head fluctuations observed for WT2 probably indicate capillary processes, with the watertable oscillating between the sand bed and a lower equilibrium position. A finer analysis would require a higher temporal resolution. Landward, WT4 is more stable. The last phase (Phase S6) is an overall slow lowering of the watertable, affected by tidal fluctuations and episodic peaks.
Swash-driven vs watertable gradients
Fig. 1 shows that the head gradient between the upper-beach watertable (WT3-4 sensors) and the inner surf zone (MWL) is overwhelmingly negative (head increasing landward), i.e., following Darcy's assumptions, groundwater flows down the potential gradient toward the open sea. While more limited in time, the WT1-2 measurements confirm this dominant trend, induced by the back-barrier lagoon forcing at the landward boundary. The head gradients between the upper beach (WT3-4) and MWL are of the order of 1%, fluctuating under the action of wave activity, sea-level variations, the inland boundary condition and rainfall (see Fig. 2). Darcy's law states that the horizontal groundwater velocity is proportional to the cross-shore head gradient, the proportionality factor being the hydraulic conductivity K. Using this relation between selected measurement points, horizontal groundwater velocities are computed and depicted in Fig. 4 for the November 20 to December 17 period. The present analysis is focused on the horizontal component only. Vertical motions are expected to be very weak, as the pressure distribution is nearly hydrostatic everywhere in the beach except under the swash zone [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF]. The reference groundwater state at Rousty beach during the 2014-2015 winter season can be observed, for instance, in Fig. 4 between November 20 and 28 and between December 2 and 12. It is characterized by a very small seaward velocity, nearly the same for the WT4-3 and WT3-1 sensor pairs.
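For illustration, the minimal sketch below reproduces this Darcy-type estimate between two sensors. The hydraulic conductivity, head values and sensor spacing are hypothetical and are only chosen to match the order of magnitude (about 1×10⁻⁶ m/s for a 1% gradient) discussed in the text.

```python
# Horizontal Darcy (specific discharge) velocity between two buried head sensors.
# Sign convention: positive velocity is seaward, as in Fig. 4. Values are hypothetical.

K = 1.0e-4          # hydraulic conductivity of the beach sand (m/s), assumed
h_landward = 1.25   # hydraulic head at the landward sensor, e.g. WT3 (m)
h_seaward = 1.05    # hydraulic head at the seaward sensor, e.g. WT1 (m)
dx = 20.0           # cross-shore distance between the two sensors (m)

dh_dx = (h_seaward - h_landward) / dx   # head gradient along the seaward x axis
u = -K * dh_dx                          # Darcy's law: u = -K dh/dx

print(f"cross-shore head gradient: {dh_dx:.3f} ({abs(dh_dx):.0%})")
print(f"horizontal Darcy velocity: {u:.1e} m/s ({'seaward' if u > 0 else 'landward'})")
```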
The effect of rainfall alone on this reference situation is observed on November 25 and 26-27, and later on December 6. As noted above, the rainfall input raises and slightly tilts the watertable (Fig. 2), but the associated seaward groundwater velocity remains weak (Fig. 4).
The main drivers of groundwater gradients are the combined effects of the mean water levels at the boundaries and of the waves at the beach face. For a better understanding of their interactions, particular attention is now paid to the two dominant storms in terms of waves, levels and available instrumentation. The first storm is the submersion event of Nov. 28 to Dec. 1 described above, named Storm 1, while the second storm is the swash-monitored one of December 13-14, named Storm 2.
The first stage of Storm 1 (Phase S2 in Fig. 3) is a beach submersion characterized by a reversal of the head gradient and a landward groundwater flux over the whole studied cross-shore profile. The associated negative velocity remains weak, about 1.5×10⁻⁶ m/s (Fig. 4), and is maximal under the berm. As described previously, Phase S2 corresponds to a submersion event where the beach is probably fully saturated (or even partly immersed). In this case, the groundwater head gradients are related to the bed morphology: the head is maximal under the berm and groundwater should flow toward the lower zones of the upper beach. During the storm retreat (from Phase S3 in Fig. 3), the groundwater under the berm flows seaward and the velocity strongly increases, up to 7×10⁻⁶ m/s (Fig. 4), while further landward the velocity fluctuates around zero below the upper beach. Such a difference between berm and upper-beach gradients is remarkable. It is explained by the fact that, in the no-wave no-rain reference state, the watertable should show a regular negative slope from the landward high-level boundary down to the open sea (or to the exit point if decoupled). The effect of wave action, here mainly IG waves in the swash zone, is to maintain a higher watertable level under the berm, relative to the elevation it would have had without waves. This corresponds to the classical hump shape of the watertable observed by other authors (e.g. [START_REF] Lanyon | Groundwater-level variation during semidiurnal spring tidal cycles on a sandy beach[END_REF][START_REF] Sous | Swash-groundwater dynamics in a sandy beach laboratory experiment[END_REF]), applied here in the particular context of a high landward watertable boundary condition.
From December 2, the wave activity decreases. The groundwater returns to its reference state, mainly controlled by the gradient between the landward condition and the MWL at the beach face. Both follow a similar decrease, only affected by tidal fluctuations. The general fall of watertable and open-sea levels is observed up to the storm of December 12-14. The only deviation is due to the rain input on December 5, which raises the upper-beach watertable by about 20 cm but does not significantly increase the groundwater velocity below the upper beach (data under the berm are unfortunately no longer available).
Storm 2 has been detailed in terms of swash-zone processes by [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF] and is here further analyzed to understand the relation between swash-driven and cross-barrier watertable gradients (Fig. 5). Let us recall that [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF] observed, (i), a time-averaged anti-clockwise circulation cell and, (ii), at shorter time scale, a typical groundwater circulation pattern associated with incoming IG-driven swash events. In the present study, we only consider the horizontal component of the motion. The initial situation on December 12, depicted in Figs. 4 and 5, shows the reference groundwater situation at Rousty beach, i.e. a slow seaward groundwater flow associated with a tilted upper-beach watertable due to a higher landward level. The storm arrival during the evening of Dec. 12 forces a higher MWL at the beach face, progressively flattens the upper-beach watertable and inhibits the seaward flow. The high-frequency groundwater measurements start near the storm apex on Dec. 13 around 12:00. The groundwater gradients observed under the swash zone are much higher than below the upper beach, inducing seaward velocities five to ten times greater than their upper-beach counterparts. It is remarkable that, at least at the spatial resolution of the present experimental setup, these strong head gradients are strictly limited to the swash zone. The head at the top of the swash zone (G1m sensor) is just slightly greater than that of the upper-beach watertable, inducing only a small landward flux. The storm starts to decrease from Dec. 13, 12:00. The swash-zone pressure heads remain nearly steady until the early morning of Dec. 14 and then start to fall (see [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF] for a finer analysis), while the upper-beach watertable is still rising and flattening (the slope even shows a slight reversal during the evening of Dec. 13). This highlights the time lag between beach-face and upper-beach groundwater processes. The watertable gradient under the berm (between G1m and WT3) decreases as the upper-beach watertable rises, and reverses as the swash-zone level starts to fall during the night of Dec. 13-14. It is again noted that, for the whole considered event, the cross-berm groundwater flux (whether positive or negative) remains one order of magnitude smaller than the seaward flow under the swash zone. During Dec. 14 and later on, the watertable progressively falls back to its reference state, associated with a slow seaward flux within the sand soil.
Let us now compare and relate these two storms. Storm 1 was characterized by larger waves and a higher MWL. However, in terms of swash-groundwater velocities, the situation described in Phases S4 and S5 of Storm 1 is nearly the same as for Storm 2. Measurements performed under the swash zone, i.e. between WT1 and G1m and between G1m and G5t for Storms 1 and 2, respectively, show similar seaward groundwater flows of the same order of magnitude. Following the study of [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF] focused on a single event, this experiment confirms in the field the laboratory observation of [START_REF] Turner | Groundwater fluxes and flow paths within coastal barriers: Observations from a large-scale laboratory experiment (BARDEX II)[END_REF] and the numerical simulation of [START_REF] Li | Wave-induced beach groundwater flow[END_REF]: a typical seaward groundwater circulation pattern under the swash zone. In addition, the present analysis makes it possible to describe, for the first time in the field, the role played by upper-beach watertable gradients. If the inland watertable is low, a divergence point can be observed near the top of the swash zone (Storm 2 apex), but it can be smoothed or nonexistent with a higher inland watertable. What is important is that, whatever the upper-beach watertable situation, the wave action in the swash zone is able to drive a localized swash-groundwater circulation. This circulation is about one order of magnitude stronger than the typical groundwater flux under the upper beach.
Figure 5: Groundwater dynamics during a three-day period (Dec. 12 to Dec. 14) associated with a significant wave event. Swash-zone groundwater data (G1m, G3t, G5t sensors) are extracted from the analysis presented in [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF].
The observations performed during the 2014-2015 winter season in Rousty beach can be summarized as follows:
- the general trend is a slow (about 1×10⁻⁶ m/s) seaward groundwater flow driven by a high-level landward boundary condition in the back-barrier lagoons;
- rainfall events induce an overall rise of the watertable (generally greater than the rain input), but do not significantly increase the groundwater discharge (for the considered events);
- storm events, associated with increased wave energy (Ho > 1.5 m) and a setup of the MWL, are able to raise the watertable under the berm, inducing, (i), a flattening or even a reversal of the watertable slope below the upper beach and, (ii), a seaward groundwater flow under the swash zone typically one order of magnitude stronger than its upper-beach counterpart.
Discussion on beach groundwater discharge
The groundwater dynamics fluctuates under the action of waves, sea- and inland-level variations, rainfall and beach morphology. An attempt is made here to estimate the volumes exchanged through the beach.
One must keep in mind the limitations of the present experimental setup, in particular the lack of spatial and temporal resolution, the fact that vertical motions are neglected and the unknown total depth of the beach aquifer. The latter would require dedicated seismic and/or core-sampling measurements; it is here estimated to be of the order of 10 m.
The exchanges related to the swash-groundwater circulation have been calculated by [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF] for a 1.5 m deep sand layer. They conclude that the daily seaward groundwater flux is around 5 m² per longshore beach meter (i.e. 3.33 m per longshore beach meter per meter depth). An important issue is to know the extent to which this swash circulation pattern is connected to the inland beach aquifer. The measurements presented here show that this swash pattern develops for any of the studied wave events, whatever the overall groundwater situation. This tends to indicate that it is quite isolated in terms of water exchanges. In a schematic view, the water input at the top of the swash zone should be almost directly evacuated seaward by the groundwater swash cell, without experiencing strong mixing or dilution with the more inland water masses of the aquifer. However, the role of the swash-driven circulation in beach groundwater discharge should certainly be taken into account in any beach groundwater study, in particular because its effect depends on its location, which permanently evolves under the action of wave and SWL fluctuations (such as tides). In addition, it may play a significant role in biogeochemical beach processes.
The most relevant estimate of the groundwater exchanges between the open sea and the back-barrier lagoon is then probably the classical measurement of the watertable slope in the upper beach. In our case, the flux is overwhelmingly seaward, with a daily value of the order of 0.1 m per longshore beach meter per meter depth. Over a 10 km beach (typically the Gulf of Beauduc) with a 10 m thick beach aquifer, this leads to 10⁵ m³ of water flowing within the beach toward the open sea. This rough estimate gives a better picture of the importance of through-beach groundwater flows in the exchanges between the coastal aquifer and the open sea [START_REF] Rodellas | Submarine groundwater discharge as a major source of nutrients to the Mediterranean Sea[END_REF]. This order of magnitude is, for instance, half of the typical Rhone river discharge. The main driver of this beach groundwater discharge is the level difference between the back-barrier lagoons and the open sea. The flux is thus minimal during high SWL periods, as observed during the low-pressure atmospheric systems of Nov. 20-25, and maximal when anticyclonic low-SWL conditions follow a large storm event (first weeks of December). The rain and tide effects on the discharge velocities are, for the present experiment, much weaker than the role played by large-scale SWL fluctuations. It should be recalled that episodic reversals of the discharge can be observed during submersion events, as exposed in Section 2.2 for the largest storm of winter 2013-2014. The strongest exchanges are thus expected when a high-pressure (Mistral) event follows a low-pressure (south-east storm) event associated with strong rainfalls.
Note finally that all the above analysis has been performed considering a fully isotropic soil. This is certainly true for the upper 4-6 m beach layer, but deeper in the soil, stratigraphic effects should certainly play a role and remain to be explored [START_REF] Evans | Submarine groundwater discharge and solute transport under a transgressive barrier island[END_REF].
Conclusion
The present study is a follow-up to the recent paper of [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF]. The overall aim is to monitor and analyze the groundwater dynamics of the Rousty microtidal beach during the full 2013-2014 winter season. A dedicated instrumental setup has been deployed in the field, based on a cross-shore network of buried pressure sensors. The study of [START_REF] Sous | Field evidence of swash groundwater circulation in the microtidal Rousty beach, France[END_REF] was focused on the swash-groundwater circulation pattern observed during a single storm event. The aim here is to extend the analysis to the whole winter season, including a large beach submersion event, and to understand the interactions between the swash-driven circulation cell and the larger-scale watertable gradients across the beach.
The main findings of the present study are that, (i), the groundwater discharge through the beach is overwhelmingly seaward, (ii), this discharge, when integrated over the whole Gulf of Beauduc, represents a water flux comparable to (about half of) that of the Rhone river, (iii), the beach groundwater velocity is primarily driven by the difference of levels between the open sea and the landward boundary condition, (iv), the rain and tide effects are small, (v), a seaward swash-groundwater circulation develops when the wave energy is sufficient, whatever the overall watertable gradients, and (vi), this swash pattern is stronger than the upper-beach groundwater flux but appears to be nearly isolated and should not play a dominant role in the large-scale water discharge.
Further characterization of the beach groundwater discharge will require a higher spatial and temporal resolution, in particular to better understand the mixing processes at the boundary between the swash-driven circulation and the larger watertable gradients, to monitor the groundwater pressure field deeper in the soil down to the impervious boundary, and to learn more about the vertical structure of the circulation. Dedicated studies should be planned in meso- and macro-tidal contexts, to test the validity of the present observations, made in microtidal conditions, when a larger tidal energy input within the beach is considered. Based on the recent insight provided by laboratory and field experiments, numerical models should now be developed in a more comprehensive strategy integrating each of the involved processes (levels, saturation, gravity and IG waves, rainfall, salinity and thermal fluxes, etc.).
1 Université de Toulon, CNRS/INSU, IRD, MIO, UM110, 83051 Toulon Cedex 9, France. [email protected]
2 GEOSCIENCES Montpellier, Université de Montpellier II, Montpellier, France
3 CEREGE, Aix-Marseille Université, CNRS UMR 6635, Aix en Provence, France
Figure 2. Overview of the recovered data during the winter season: pressure head (top), deep-water wave height (middle) and cumulative rain (bottom). The grey box indicates the high-frequency deployment.
Figure 3. Groundwater dynamics during the Nov. 28 - Dec. 1 submersion event. Six phases numbered from S1 to S6 are selected to help the discussion. Top plot: pressure head; middle plot: deep-water wave height; bottom plot: cumulative rainfall.
Figure 4. Top plot: horizontal Darcy groundwater velocity between sensor pairs (positive velocity is seaward). Bottom plot: deep-water wave height.
Acknowledgements
This study was supported by the Direction Departementale Territoriale et Maritime 13 and the ANR Grant No. ANR-13-ASTR-0007. The GLADYS group (www.gladys-littoral.org) supported the experimentation. MeteoFrance provided the rainfall data at the Montpellier weather station.
Christelle Duprez
email: [email protected]
Véronique Christophe
email: [email protected]
Bernard Rimé
Anne Congard
Pascal Antoine
Motives for the social sharing of an emotional experience
Keywords: social sharing, alleged motives, emotion regulation
Motives for the Social Sharing of an Emotional Experience
Traditionally, psychologists have investigated emotions as intrapersonal processes. It was stressed that emotions develop in the physiology of the individual and resonate in the depths of the individual's subjective life (e.g., [START_REF] Arnold | Emotion and personality[END_REF][START_REF] Frijda | The emotions[END_REF][START_REF] Tomkins | Exploring affect: the selected writings of Silvan S Tomkins[END_REF]).
However, the empirical literature of recent decades is replete with concepts emphasizing that emotions sustain essential connections with interpersonal relationships. These concepts accent the fact that emotions are accompanied by verbal expression processes. What used to be considered a private experience is generally put into words and communicated to members of the entourage. Thus, for instance, the study of emotional disclosure addresses how people respond to emotional upheavals and why translating emotional events into language increases physical and mental health (for reviews, [START_REF] Pennebaker | Expressive writing, emotional upheavals, and health[END_REF][START_REF] Smyth | Exploring the boundary conditions of expressive writing: In search of the right recipe[END_REF]). Thus, whereas self-disclosure was defined as "an interaction between at least two individuals where one intends to deliberately divulge something to another" (Greene, Derlega and Mathews, 2006, p. 411), emotional disclosure represents a specific form of self-disclosure focused on the verbal expression and communication of a personal emotional experience. There is considerable scientific interest in the effects of written emotional disclosure on well-being (e.g., [START_REF] Pennebaker | Writing about emotional experiences as a therapeutic process[END_REF][START_REF] Smyth | Exploring the boundary conditions of expressive writing: In search of the right recipe[END_REF]). Participants write about past stressful or traumatic events in their lives for short sessions (15 to 30 minutes) held on several consecutive days (for a meta-analytic study of effects, see [START_REF] Frattaroli | Experimental disclosure and its moderators: A meta-analysis[END_REF]). Co-rumination was particularly examined in the friendships of children and adolescents. It involves "extensively discussing and revisiting problems, speculating about problems, and focusing on negative feelings" [START_REF] Rose | Co-rumination in the friendships of girls and boys[END_REF] (Rose, 2002, p. 1830). Co-rumination was related to positive friendship quality but also to elevated internalizing symptoms. Studies showed that such rehashing of one's emotions is socially reinforced and perpetuated by target persons and that co-rumination predicts the onset of depressive disorders during adolescence [START_REF] Calmes | Rumination in interpersonal relationships: Does corumination explain gender differences in emotional distress and relationship satisfaction among college students?[END_REF][START_REF] Stone | Co-rumination predicts the onset of depressive disorders during adolescence[END_REF]. Whereas co-rumination studies examine the expression and verbalization of negative emotions and feelings, capitalization studies investigate people's propensity to share with close persons the positive emotional experience they have just gone through (for a review, [START_REF] Gable | Chapter 4 -Good News! Capitalizing on Positive Events in an Interpersonal Context[END_REF]). Capitalization occurs when one member of a relationship dyad experiences a personal event that positively affects himself or herself and then relates it to the other member of the dyad.
[START_REF] Langston | Capitalizing on and coping with daily-life events: Expressive responses to positive events[END_REF] proposed the term "capitalization" after having observed that sharing the news of a positive event with others led the subject to experience more positive affect than could be attributed to the event itself. Capitalization studies later evidenced the positive effect that sharing a positive emotion can have on the interpersonal relationship itself (e.g., [START_REF] Gable | What Do You Do When Things Go Right? The Intrapersonal and Interpersonal Benefits of Sharing Positive Events[END_REF]). Besides these different concepts, studies on the social sharing of emotion were the first to evidence people's propensity to share their emotional experiences. The social sharing of emotion was defined as a communication process involving the description of an emotion, in a socially shared language, by the person who experienced it to another person [START_REF] Rimé | Beyond the emotional event: Six studies on the social sharing of emotion[END_REF]. Abundant data showed that when people go through an emotional experience, they immediately feel the need to talk about it with members of their entourage, and they actually do so in almost all cases (for reviews, [START_REF] Rimé | Emotion elicits the social sharing of emotion: Theory and empirical review[END_REF][START_REF] Rimé | Social sharing of emotions: New evidence and new questions[END_REF][START_REF] Rimé | Long lasting cognitive and social consequences of emotion: Social sharing and rumination[END_REF]). Overall, emotional episodes are subject to social sharing conversations in 80 to 95% of cases, a figure that comes close to those reported for the sharing of positive emotions in recent capitalization studies (Gable & Reis, 2010, pp. 215-216). The social sharing of a given episode occurs most often repetitively, usually several times and with different people for the same emotional episode. The more intense the emotion is, the higher the propensity to talk about it [START_REF] Luminet | Social sharing of emotion following exposure to a negatively valenced situation[END_REF]. The sharing process typically begins early after an emotional experience has occurred. Indeed, in 60% of cases the first sharing occurs on the actual day that the event occurred, as was also found for the specific case of positive emotions (Gable & Reis, 2010, p. 215). Across age groups, targets of social sharing were consistently found to be intimates (i.e., parents, brothers, sisters, friends, or spouse/partner), whereas nonintimates hardly played any role in the sharing process, as was also observed for the specific case of positive emotions (Gable & Reis, 2010, p. 216). Communicating an emotional experience seems to be a universal response to an emotion. It is observed with approximately equal magnitude in Asian and Western countries [START_REF] Singh-Manoux | Cultural variations in social sharing of emotions: An intercultural perspective[END_REF].
Episodes that involve fear, anger, or sadness are reported to others as often as those involving happiness or love [START_REF] Rimé | Long lasting cognitive and social consequences of emotion: Social sharing and rumination[END_REF]. However, emotional episodes that involve shame and guilt tend to be verbalized to a lesser degree (Finkenauer & Rimé, 1998;[START_REF] Rimé | Social sharing of emotions: New evidence and new questions[END_REF][START_REF] Singh-Manoux | Cultural variations in social sharing of emotions: An intercultural perspective[END_REF]).
An important feature is that the social sharing of an emotion reactivates the shared emotion in the sharing person. Thus, related mental images are re-experienced, body sensations are felt, and subjective feelings are aroused [START_REF] Rimé | Episode émotionnel, réminiscences cognitives et réminiscences sociales[END_REF][START_REF] Rimé | Emotion elicits the social sharing of emotion: Theory and empirical review[END_REF]. In the case of negative emotions, emotional reactivation typically leaves the sharing person in an arousal state. Interestingly, despite these negative consequences, research on social sharing has shown that people are generally eager to discuss their emotional experiences, whether negative or positive (for reviews, [START_REF] Rimé | Social sharing of emotions: New evidence and new questions[END_REF][START_REF] Rimé | Emotion elicits the social sharing of emotion: Theory and empirical review[END_REF]). Then, why are people so eager to share their emotions? The studies presented in this article aimed to clarify and assess the motives underlying the universal propensity to share both pleasant and unpleasant emotions. We will first briefly review the theoretical concepts that are pertinent to this topic, beginning with positive emotions and then turning to negative emotions.
Positive emotions result from circumstances that facilitate goal-attainment activities [START_REF] Carver | Origins and functions of positive and negative affect: A control-process view[END_REF][START_REF] Fredrickson | The role of positive emotions in positive psychology: The broadenand-build theory of positive emotions[END_REF], and they enhance a subject's well-being by increasing his/her level of positive affect. Likewise, the social sharing of a past positive emotional experience is likely to elicit pleasurable emotional feelings. In two different studies, [START_REF] Langston | Capitalizing on and coping with daily-life events: Expressive responses to positive events[END_REF] confirmed that the communication of positive events to others was indeed associated with an enhancement of positive affect far beyond the benefits resulting from the valence of the positive events themselves. [START_REF] Gable | What Do You Do When Things Go Right? The Intrapersonal and Interpersonal Benefits of Sharing Positive Events[END_REF] demonstrated that close relationships in which one's partner typically responds enthusiastically to such a capitalization were associated with higher relationship well-being (e.g., intimacy, daily marital satisfaction). Thus, sharing positive emotions can enhance both the positive affect of individuals and the social bonds between them [START_REF] Reis | Are you happy for me? How sharing positive events with others provides personal and interpersonal benefits[END_REF]. Therefore, capitalization and social integration constitute two demonstrated motives underlying the sharing of positive emotions.
With regard to the question of why people share negative emotional experiences, [START_REF] Schachter | The psychology of affiliation[END_REF] first proposed an answer in the framework of his classic "stress and affiliation" studies. He found that the participants who became anxious at the prospect of being administered electric shocks expressed a preference for waiting in the company of other persons, whereas the control participants preferred to wait alone. Schachter hypothesized that individuals encountering stress attempted to reduce their anxiety by verbally interacting with others in the same situation and thus using others as a lens through which to evaluate their own emotional state. This social comparison motive [START_REF] Festinger | A theory of social comparison processes[END_REF] is especially relevant when people lack objective standards or undergo a confusing experience, which are typical characteristics of negative emotional experiences.
Negative emotional episodes undermine a person's knowledge base because these episodes disconfirm expectations and models of the world. Thus, such episodes represent a broad form of distress that a person is highly motivated to reduce [START_REF] Epstein | The self-concept revisited: Or a theory of a theory[END_REF][START_REF] Epstein | Cognitive-experiential self-theory[END_REF][START_REF] Rimé | Emotion elicits the social sharing of emotion: Theory and empirical review[END_REF]. Although he favored a social comparison explanation for his "stress and affiliation" effect, [START_REF] Schachter | The psychology of affiliation[END_REF] also considered emotional support, or direct distress reduction through the presence of others, to be involved in the process. Since the observations of [START_REF] Bowlby | Attachment and loss[END_REF] on attachment, ample evidence has shown that both primate and human infants seek contact with others during periods of uncertainty and distress (e.g., [START_REF] Ainsworth | Patterns of attachment: A psychological study of the strange situation[END_REF][START_REF] Sroufe | The ontogenesis of smiling laughter: A perspective on the organization of development in infancy[END_REF]). According to [START_REF] Shaver | Schachter's theories of affiliation and emotion: Implications of developmental research[END_REF], this early form of affiliation is perpetuated among adults and serves two distinct but related functions: direct anxiety reduction and increased cognitive clarity. This contact seeking would, however, depend on the quality of the attachment figure's responses when proximity/help was sought during infancy and childhood, and on the expectations these responses elicited about the help that others can provide when one is distressed in adulthood, leading to interpersonal differences in attachment style and proximity seeking ([START_REF] Ainsworth | Attachment and dependency: A comparison[END_REF][START_REF] Shaver | Attachment-related psychodynamics[END_REF]; for a review, Cassidy & Shaver, 2008;[START_REF] Mikulincer | Attachment in adulthood: Structure, dynamics, and change[END_REF]). Thus, the generalized distress that negative emotions produce likely motivates adults (particularly secure and insecurely anxious individuals, but to a lesser extent insecurely avoidant individuals, whose expectations regarding support from others are negative; for a review see [START_REF] Shaver | Attachment-related psychodynamics[END_REF]) to search for emotional support and to turn to their attachment figures for this purpose.
Many arguments favor the search for cognitive clarity as the primary motive for the social sharing of negative emotional episodes. By disconfirming aspects of a person's schemas, models, theories, or assumptions, negative episodes both elicit a state of emotional distress and generate a state of cognitive dissonance within an individual. Therefore, negative emotions are likely to stimulate cognitive efforts toward dissonance reduction [START_REF] Festinger | A theory of cognitive dissonance[END_REF]. This reasoning was anticipated by both [START_REF] Cantril | The why of man's experience[END_REF] and [START_REF] Kelly | A theory of personality: A psychology of personal construct[END_REF], who viewed emotions as occurring in moments at which events "do not stick" with cognitive constructions and thus compel individuals to modify these constructions. More recently, [START_REF] Martin | Toward a motivational and structural theory of ruminative thought[END_REF] argued that when progression toward a goal is blocked or when a discrepancy occurs between the current state of affairs and the expected situation, conditions for the development of cognitive activity are fulfilled. Similarly, [START_REF] Weick | Sensemaking in organizations[END_REF] observed that when expectations are disconfirmed or when activities in progress are blocked, efforts to produce meaning emerge. In accordance with these theoretical propositions, a review of the previous empirical findings suggested that one of the most reliable predictors of the need to discuss an emotional episode is the extent of the cognitive needs that are aroused by a given episode [START_REF] Rimé | Social sharing of emotions: New evidence and new questions[END_REF]. Thus, when emotional experiences elicited a need to "put things in order with regard to what occurred", to "find meaning in what occurred", or to "understand what occurred", these experiences were more likely to be subsequently shared. This finding suggests that as the extent to which an emotional episode creates a subjective sense of unfinished cognitive business increases, individuals are likely to feel more motivated to discuss their emotional experiences with others. The results of studies of "secret emotions" support this view.
Memories of unshared emotional episodes were found to elicit feelings of unresolved cognitive business among the respondents more so than did memories of episodes that had been shared (Finkenauer & Rimé, 1998). Thus, a need to obtain cognitive clarity or to find meaning appears to constitute a third motive for the social sharing of negative emotion.
In sum, three major motives appear to lead people to share their negative emotional experiences: emotional comparison, emotional support and cognitive clarity concern.
Naturalistic investigations of the stress and affiliation effect have also supported such a conclusion. Kulik and colleagues [START_REF] Kulik | Social comparison and affiliation under threat: Going beyond the affiliate-choice paradigm[END_REF][START_REF] Kulik | Social comparison and affiliation under threat: Effects on recovery from major surgery[END_REF] examined affiliation toward roommates among hospital patients expecting to undergo major cardiac surgery. In addition to using real life-threatening health events, the authors also assessed actual interaction patterns rather than the mere expression of intentions. In one such study, [START_REF] Kulik | Social comparison and affiliation under threat: Going beyond the affiliate-choice paradigm[END_REF] concluded that cognitive clarity most accurately accounted for the effects on verbal affiliation that were observed. However, in a subsequent study that examined cognitive clarity concerns, emotional comparison, and emotional support, [START_REF] Kulik | Social comparison and affiliation under threat: Effects on recovery from major surgery[END_REF] found evidence for all three motives. These authors concluded that when stress and affiliation relationships were considered in more naturalistic situations, multiple reasons for interpersonal affiliation under threat emerged. This conclusion can likely be extended to the social sharing of emotion.
Another relevant source of information lies in the motives that people openly allege for engaging in sharing behavior. Three sets of data are available in this regard (for a review, see [START_REF] Rimé | Interpersonal emotion regulation[END_REF]). The first set was obtained from a group of psychology students who were enrolled in an advanced class on emotion. These students first recalled a recent emotional experience that they had shared and then listed all of the possible reasons that they had engaged in sharing [START_REF] Finkenauer | Motives for sharing emotion[END_REF]. In a second study, a pool of 200 answers was collected from non-psychology students who also referred to a recent emotional experience that they had shared. Their alleged motives for sharing were then grouped by judges using the smallest possible number of classifications [START_REF] Delfosse | Les motifs allégués du partage social et de la rumination mentale des émotions : Comparaison des épisodes positifs et négatifs [Alleged motives for social sharing and mental rumination of emotions: A comparison of positive and negative episodes[END_REF]. Finally, in a third study, 100 male and female participants were recruited in university libraries, and each participant was asked to list five different reasons that they had shared a recent emotional episode in their lives ([START_REF] Nils | Partage social des émotions: Raisons invoquées et médiateurs perçus [Social sharing of emotions: Alleged reasons and perceived mediators[END_REF], cited by Rimé, 2007). These three studies manifested a striking consistency in the sources of motives that they evidenced (see Table 1).
Together, these studies yielded a list of twelve motivational sub-types (see Table 2). Some of these motives are essentially self-oriented, including rehearsing an episode or venting about it, whereas other motives are more clearly other-oriented, such as entertaining, informing, or warning the target. In contrast, all of the remaining motives in the list manifest considerable demands on the social targets with regard to emotion regulation. Social sharing partners are indeed expected to provide contributions that are as diverse as providing assistance and support, comfort and consolation, legitimization, clarification and meaning, and advice and solutions. Moreover, this long list of specific social solicitations is further augmented with less specific and more personally involving demands on sharing partners, such as providing attention, bonding, and eliciting empathy. Thus, the motives that are openly alleged for socially sharing emotions reveal an overabundance of social demands aimed at emotional regulation. Although these motives also involve cognitive regulation needs, such as the pursuit of clarification and meaning, they are overwhelmingly likely to meet socioaffective regulation needs such as the search for comfort/consolation.
------------------------------------
Insert Tables 1 and 2 here
------------------------------------
The current studies were intended to examine the interrelationship between the major classes of motives for social sharing and to construct a reliable questionnaire for the assessment of the various motives evidenced. Such a questionnaire may be useful in many different regards. For example, this type of survey would facilitate an examination of variations in motives as a function of the type of emotion that is involved in a shared episode, aspects of the emotional circumstances surrounding an episode, types of target persons, and personality traits or clinical diagnoses of sharing persons. The questionnaire could also facilitate the investigation of relationships between sharing motives and the actual responses of targets, in addition to the effects on the emotional recovery and general well-being of sharing individuals. Several existing scales already aim to assess emotional disclosure:
the Emotional Self-Disclosure Scale [START_REF] Snell | Development of the Emotional Self-Disclosure Scale[END_REF], the Self-Disclosure Index [START_REF] Miller | Openers: Individuals who elicit intimate self-disclosure[END_REF], the Distress Disclosure Index [START_REF] Kahn | Measuring the tendency to conceal versus disclose psychological distress[END_REF], and Ambivalence Over Emotional Expression [START_REF] King | Conflict over emotional expression: Psychological and physical correlates[END_REF]. Yet, these scales all measure confiding/not confiding one's emotions as a stable individual difference. However, the reasons why people talk about their emotional experiences are determined by both the characteristics of the emotional experience (e.g., valence, type of emotional episode) and the characteristics of the individual (e.g., gender, age) [START_REF] Delfosse | Les motifs allégués du partage social et de la rumination mentale des émotions : Comparaison des épisodes positifs et négatifs [Alleged motives for social sharing and mental rumination of emotions: A comparison of positive and negative episodes[END_REF]. Furthermore, these existing scales mostly assess the extent of disclosure of negative emotions, thus neglecting positive ones. Having a tool to measure the alleged motives for social sharing would not only permit a better identification of expectations in terms of emotion regulation, but would also allow assessment of the impact of the characteristics of both the individual and the event on these emotion regulation needs. The three studies that previously investigated the alleged motives for social sharing have already documented this phenomenon. Yet, they simply aimed at exploring existing social sharing motives. The present study is in line with these studies, while differing in its aim of creating a questionnaire and, as a consequence, in collecting data in a more exhaustive way. The present study was planned for the purpose of collecting from participants, in participants' own colloquial verbal formulations, a large number of motives, in order to create an assessment tool made of items directly inspired by these colloquial formulations. In addition, insofar as the alleged motives for social sharing were mainly collected from students in these previous works, it seemed important to collect data from a much more varied sample of respondents.
In the first study, we collected the broadest possible range of motives that the respondents could identify for having shared an emotional experience with others. The collected motives were then organized into categories and transformed into items. In a second study, the resulting questionnaire was tested on a large sample of respondents.
Study 1
Method
Procedure. A total of 240 people were contacted individually by a female investigator in university libraries, on campus, or through social networks on the internet. These individuals were invited to participate in a university investigation of the memory of emotional events by completing a questionnaire. The contacted persons who accepted were then asked to recall a recent emotional event that had happened less than 3 months before and that they had personally experienced and shared with other persons. Half of the participants were randomly selected to recall a positive emotional episode, whereas the other half were asked to recall a negative emotional experience. Individuals who declined to participate or were unable to recall an emotional event that they had experienced were thanked and dismissed. Those who retrieved a memory as requested then answered the study questionnaire. Confidentiality and anonymity were guaranteed. After indicating their age and gender, the participants were first asked to provide a short written description of the emotional episode that they had recalled. This procedure, commonly used in studies about social sharing, helps participants to reimmerse themselves in the memory of the emotional situation and to experience a reactivation of the various emotional components before answering the study questionnaire (Rimé et al., 1991).
Participants. In total, 182 participants (97 females) whose ages ranged from 18 to 79 years (M = 30.16, SD = 12.08) completed the questionnaire, with 81 of them (43 females) in the positive emotion condition and 101 individuals (54 females) in the negative emotion condition. Nearly half of the participants were students (48.90%), 37.91% were employees, 9.89% were unemployed, and 3.30% were retired. A majority of the participants were living in couples (60.44%), 34.06% were single, 3.30% were living alone with children, and 2.20% were widows.
Measurements. The respondents rated the valence of these episodes on a 7-point scale (1 = "not positive at all" to 7 = "very positive") and the intensity of the distress that the episode had elicited (0 = "not upset at all" to 10 = "extremely upset"). Subsequently, the participants responded to items that were intended to examine their sharing of these episodes:
(1) with whom they had shared their experiences (partner, spouse, family member, relative, or stranger), (2) how long after the events they had first shared them (same day/same week/more than one week later), (3) the number of people to whom they had spoken (one or two/3 to 10/more than 10), and (4) the total number of times that they had discussed their experiences with someone (once or twice/3 to 10 times/more than 10 times). The questionnaire concluded with one question that was intended to collect a broad range of potential motives for sharing an emotion: "Please list the first 10 reasons that you can recall for discussing this episode with people around you". A prompt reading "I talked about this event because I wanted to..." was then followed by a blank space in which the participant could freely formulate up to 10 social sharing motives. This procedure allowed us to collect a wide range of alleged motives for social sharing.
Results and discussion
Emotional episodes. The reported emotional episodes were rated as moderately upsetting, both in the positive valence condition (M = 6.04, SD = 1.09) and in the negative condition (M = 5.44, SD = 1.58), which did not differ significantly (F(1, 180) = 1.89). Positive episodes were related to personal instances of achievement (34.57%, e.g., "finding a job"), leisure (14.81%, e.g., "attending a concert"), or relationships (12.34%, e.g., "falling in love"). Negative episodes primarily involved relationship problems (15%, e.g., "break-up"), health (15%, e.g., "partner being hospitalized"), or experiences of defeat (14%, e.g., "exam failure").
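For illustration, the sketch below shows the type of between-groups comparison reported above (a one-way ANOVA on distress ratings). The ratings are simulated placeholders drawn around the reported group means; they are not the actual study data and will not reproduce the exact F value.

```python
# One-way ANOVA comparing rated distress between positive and negative episodes.
# The ratings below are simulated placeholders (n = 81 and n = 101), not study data.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
positive = np.clip(rng.normal(6.04, 1.09, 81), 0, 10)   # hypothetical ratings
negative = np.clip(rng.normal(5.44, 1.58, 101), 0, 10)  # hypothetical ratings

F, p = f_oneway(positive, negative)
df_between, df_within = 1, len(positive) + len(negative) - 2
print(f"F({df_between}, {df_within}) = {F:.2f}, p = {p:.3f}")
```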
Social sharing.
The episodes were first shared on the day that they occurred in 62.64% of cases, during the following week in 26.92% of instances, and more than one week later 10.44% of the time, with no significant difference between the valence conditions (χ²(2, 182) < 1.00). The participants reported having first shared their experiences with their spouse or partner in 41.76% of cases, with another family member in 32.97% of instances, with a friend in 20.88% of instances, and with a relative in only 4.39% of cases. The episodes were shared with three to ten people by a majority of the participants (54.4%), whereas 26.9% had shared their experiences with more than ten people, and only 18.7% reported having shared with only one or two persons. Positive episodes were shared with more people than negative episodes (χ²(2, 182) = 7.58, p < .05). The frequency of social sharing was generally high: 43.96% of the respondents shared their episodes three to ten times, 32.42% shared their experiences more than 10 times, and only 23.63% shared only once or twice. Positively and negatively valenced episodes did not differ for this variable (χ²(2, 182) = 3.46). All of the above results were perfectly consistent with previous findings regarding the basic parameters of the social sharing of emotions (for reviews, [START_REF] Rimé | Social sharing of emotions: New evidence and new questions[END_REF][START_REF] Rimé | Emotion elicits the social sharing of emotion: Theory and empirical review[END_REF]).
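For readers who wish to reproduce this type of analysis, the sketch below illustrates the chi-square test of association between episode valence and the number of sharing targets. The counts are hypothetical; they are chosen only to match the group sizes reported above and are not the actual study data.

```python
# Chi-square test of association between episode valence and the number of people
# with whom the episode was shared. The contingency counts below are hypothetical.

import numpy as np
from scipy.stats import chi2_contingency

# rows: positive / negative episodes; columns: 1-2, 3-10, >10 sharing targets
counts = np.array([
    [10, 42, 29],   # positive episodes (n = 81)
    [24, 57, 20],   # negative episodes (n = 101)
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}, N = {counts.sum()}) = {chi2:.2f}, p = {p:.3f}")
```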
Motives for social sharing. The respondents provided a total of 514 motives for sharing the episodes that they reported, with 308 motives in the positive episode condition and 206 motives in the negative episode condition. As these figures show, positive emotional episodes elicited a much larger number of motives from the participants than did negatively valenced episodes. This difference most likely reflects the heightened creativity and broadened perspective that is observed when positive emotional memories are retrieved or, more generally, when participants are exposed to a positive mood induction [START_REF] Fredrickson | What good are positive emotions? Review of General Psychology[END_REF][START_REF] Fredrickson | The role of positive emotions in positive psychology: The broadenand-build theory of positive emotions[END_REF][START_REF] Isen | Some perspectives on positive affect and self-regulation[END_REF]. Of the 182 participants, 83 (n = 55 in the "negative episode" condition and n = 28 in the "positive episode" condition) reported two or more reasons that, after analysis, appeared to belong to the same class of motives.
The 514 collected motives were submitted to a content analysis [START_REF] Bardin | L'analyse de contenu[END_REF] in order to organize them into categories of motives. This analysis was conducted by two independent judges who were uninformed of the categorization scheme proposed by [START_REF] Rimé | Interpersonal emotion regulation[END_REF]. The collected answers were initially submitted to a semantic analysis that aimed to group items with similar meanings (e.g., "the need to free myself", "emotional release").
Subsequently, items with similar objectives were combined into the same category (e.g., "externalize my happiness", "venting my good mood state"). The categories obtained by the two judges were then compared to examine discordances. Seventeen items were ultimately eliminated because their content appeared peculiar or irrelevant to social sharing. In addition, 10 other items were discarded because the judges could not agree on how to categorize them. This data-driven categorization resulted in 8 classes of motives, which were very similar to the categorization proposed by [START_REF] Rimé | Interpersonal emotion regulation[END_REF] (see Table 2). The titles of the categories resulting from the content analysis were adjusted in order to better correspond to those defined by Rimé (2007) (e.g., a category first entitled "informational social support" was renamed "advice and solutions" after the judges learned of the classification established by [START_REF] Rimé | Interpersonal emotion regulation[END_REF]). However, the items within the categories of "assistance and support" and "comfort/consolation" could not be meaningfully distinguished from one another. Thus, it was determined that these categories should be merged into a single category labeled "assistance, support, and comfort". "Arousing empathy" and "gaining attention" were likewise merged into a single category labeled "arousing empathy/attention". Encompassing nearly 28% of the items, "venting" accounted for the largest proportion of answers, followed by "informing and/or warning", "advice and solutions", "assistance, support, and comfort" and "arousing empathy/attention", each of which accounted for 10% to 20% of the collected answers. Less than 10% of the items were categorized as "rehearsing", "bonding", or "clarification and meaning". Finally, two categories, "legitimization" and "entertaining", which were present in the categorization proposed by [START_REF] Rimé | Interpersonal emotion regulation[END_REF], were not represented at all in the answers of the respondents.
--------------------------------- Insert table 3 here ---------------------------------
Overall, the classes of alleged motives that result from the present study confirm, with a larger and more varied sample, those that emerged from the three previous studies. In addition, the collected motives show strong links between the reasons why people talk about their emotional experiences and the strategies they initiate in order to regulate their emotions (e.g., searching for meaning).
"Venting", the most frequently observed motive in these data, is consistent with the common belief that discussing an emotional experience will reduce or even eliminate its emotional load [START_REF] Nils | Beyond the myth of venting: Social sharing modes determine the benefits of emotional disclosure[END_REF][START_REF] Rimé | Emotion elicits the social sharing of emotion: Theory and empirical review[END_REF][START_REF] Zech | Is Talking About an Emotional Experience Helpful? Effects on Emotional Recovery and Perceived Benefits[END_REF]. Thus, the data show that the motives that were considered in this study were those that the respondents recalled.
Some motives might be only weakly accessible to awareness or even completely inaccessible to the mind, and others might simply fail to be recalled. We should also acknowledge that some motives may purposely not be reported because of social desirability concerns.
Positively and negatively valenced episodes were then compared for the occurrence of the various categories of motives, and the results of this comparison are displayed in Table 4.
Positive episodes were more frequently shared for purposes of "rehearsing", "arousing empathy/attention", or "informing and/or warning", whereas negative episodes were more frequently shared for purposes of "venting", "assistance, support and comfort/consolation", "clarification and meaning", or "advice and solutions". These findings are consistent with the literature on emotion regulation, which indicates that regulation needs differ according to whether an emotion is positive or negative [START_REF] Gross | Emotion Regulation: Conceptual Foundations[END_REF][START_REF] Koole | The psychology of emotion regulation: An integrative review[END_REF]. After a positive emotion, people predominantly want to amplify the pleasantness that is felt (up-regulation), whereas after a negative emotion they are in need of cognitive and emotional assistance to gain control over this emotion (down-regulation). No difference occurred between the two types of episodes in the frequency of the "bonding" motive.
--------------------------------- Insert table 4 here ---------------------------------
A final version of the item list was obtained by eliminating the items in each category of motives that were redundant or lacked clarity and keeping only the most representative ones: 9 items were preserved per category. The 72 resulting items were organized in random order and thus constituted the Social Sharing Motives Scale (SSMS), which was then tested in Study 2. Whereas in the first study participants referred to either a negative or a positive emotional experience, in this second study it was decided to collect the data in reference to emotion categories rather than to emotional valence. We adopted the four emotion categories that are common to classic research on emotional expression (see [START_REF] Ekman | Emotion in the human face: guide-lines for research and integration of findings[END_REF]) and emotional experience ([START_REF] Scherer | Experiencing emotion: a crosscultural study[END_REF][START_REF] Shaver | Emotion knowledge: further exploration of a prototype approach[END_REF]): joy, anger, sadness, and fear.
Study 2 Method
Participants. In Study 2, 770 participants (245 males, 525 females) were invited to take part in a study about emotions. They were asked to recall a recent emotional event that they had personally experienced, before completing a questionnaire. The participants were randomly distributed across the four conditions: a quarter of the participants were assigned to recall an event of joy, another quarter an event of sadness, and the remaining two quarters an event of fear or an event of anger, respectively. Overall, the participants had shared their emotional events in 93.4% of cases (n = 719 out of 770). The data from those 719 participants were taken into account in the analyses reported hereafter. There were 193 participants in the joy condition (M age = 19.55, SD = 2.97; 62.7% females), 166 in the sadness condition (M age = 18.8, SD = 2.78; 70.5% females), 167 in the fear condition (M age = 19.18, SD = 2.83; 73.6% females), and 193 in the anger condition (M age = 18.5, SD = 2.95; 69.9% females). The majority of participants were college (n = 443, 57.53%) or university students (n = 327, 42.47%).
Measurements
Manipulation check. The participants first rated the emotional valence of the event that they had experienced on a 7-point scale (1 = "not negative at all" to 7 = "very negative") and then rated the intensity of their subjective emotion on a 10-point scale from 1 = "not upset at all" to 10 = "very upset". The respondents then evaluated the primary emotions that they had felt in this situation by rating each of four primary feelings (anger, joy, fear, and sadness) on a 7-point scale (not at all/very strong).
Social sharing. The participants were asked if they had spoken to other person(s) about the episode. Answers were collected for five successive items: (a) yes or no; (b) if yes, how long after the emotional event did you discuss it for the first time? (the same day/the same week/more than a week later); (c) with whom did you discuss the event?
(partner/friend/family/relative); (d) how often did you discuss it? (once or twice/three to four times/five or more times); and (e) with how many people did you discuss it? (1 to 2/3 to 4/5 or more).
Social Sharing Motive Scale (SSMS).
The SSMS began with the instruction "We would like you to report the reasons that you shared this episode with other people. To this end, please rate the following propositions by indicating how much you agree or disagree with each of them (1 = not at all; 7 = very much)". The participants then rated the 72 items of social sharing motives on the SSMS resulting from Study 1.
Results and discussion
Manipulation check. As expected, the participants who recalled a negative social sharing situation (anger: M = 5.75, SD = 1.50; sadness: M = 5.70, SD = 1.71; fear: M = 5.59, SD = 1.88) reported that these events were more negative than those in the positive condition (joy: M = 1.48, SD = 1.18; F(3, 715) = 335.82, p < .001). Experienced emotions were congruent with the assigned condition, with a high level of anger in the anger condition (M = 6.19, SD = 1.16), of sadness in the sadness condition (M = 5.37, SD = 1.69), of fear in the fear condition (M = 4.98, SD = 1.94) and of joy in the joy condition (M = 6.22, SD = 1.10). Moreover, the intensity of the recalled emotional episodes was generally high, with M = 8.37 (SD = 1.37) in the joy condition, M = 7.63 (SD = 1.73) in the anger condition, M = 7.82 (SD = 1.86) in the fear condition, and M = 8.18 (SD = 1.96) in the sadness condition.
Parameters of social sharing. The participants initiated this sharing on the day that the episodes occurred in 70.74% of cases, during the same week in 25.24% of cases, and more than a week later in 4.02% of cases. The latency of sharing initiation varied across conditions (F(3, 715) = 3.25, p < .05): joyful events were shared more rapidly (M = 1.25, SD = 0.47) than sad events (M = 1.40, SD = 0.62) or angry events (M = 1.40, SD = 0.58). Overall, social sharing occurred once or twice in 26.33% of cases, three to four times in 26.62% of cases, and five or more times in 47.05% of cases. The respondents in the joy condition discussed their episodes more recurrently (M = 2.47, SD = 0.70) than did the respondents in the anger condition (M = 2.05, SD = 0.82) or fear condition (M = 2.04, SD = 0.86; F(3, 715) = 12.35, p < .001). The number of social sharing partners amounted to one or two in 2.77% of cases, three to four in 26.77% of cases, and five or more in 70.46% of cases. The participants in the joy condition shared with more persons (M = 2.88, SD = 0.33) compared with those in the other three conditions: anger (M = 2.54, SD = 0.57), fear (M = 2.64, SD = 0.56), or sadness (M = 2.64, SD = 0.55; F(3, 715) = 15.55, p < .001). The social sharing partners were an intimate in 94.04% of cases (companion: 22.75%, family member: 31.90%, friend: 39.39%) or an acquaintance in 5.96% of cases. This pattern was independent of event valence (F(3, 715) = 0.50, p > .1). These results were consistent with those of previous studies and with the existing literature about social sharing (e.g., [START_REF] Rimé | Social sharing of emotions: New evidence and new questions[END_REF][START_REF] Rimé | Emotion elicits the social sharing of emotion: Theory and empirical review[END_REF]).
Social Sharing Motive Scale. The analysis of the ratings of the respondents on the SSMS was completed in two steps. First, the correlation matrix for the 72 items was inspected, and redundant items were eliminated to avoid generating spurious factors. Item distributions were inspected to eliminate skewed items that were likely to bias the factor analysis. Finally, a preliminary factor analysis was conducted to identify items failing to load onto any factor. In a second factor analysis, the remaining items were re-analyzed and refined until a satisfactory factor structure was obtained.
Item analysis. First, the 72 x 72 correlation matrix was inspected to detect potential redundancies (r > 0.60). A semantic analysis showed an absence of redundancy in the majority of cases; however, five pairs of items were determined to be similar, and one item was therefore removed from each pair. Second, the item distributions were examined for skewness and kurtosis [START_REF] Gorsuch | Exploratory factor analysis: Its role in item analysis[END_REF][START_REF] Kendall | The advanced theory of statistics[END_REF]. For the purpose of factor analysis, Kline (1998) states that nonnormality is not problematic unless skewness exceeds 3 and kurtosis exceeds 10. In this study, items whose response distributions showed high skewness (approaching 3.0) and high kurtosis (greater than 7.0) were eliminated, a slightly more severe criterion than suggested by Kline; two items met these conditions and were thus eliminated, as highly skewed items can significantly bias the results of factor analyses [START_REF] Lyne | A psychometric re-assessment of the COPE questionnaire[END_REF]. In total, this item analysis resulted in a loss of 7 items. Regarding the subsequent factor analyses, factor loadings are generally considered to be meaningful when they exceed .40, and items that fail to load substantially on any factor may be deleted and the factor analysis recomputed on the remaining subset of items [START_REF] Floyd | Factor analysis in the development and Research refinement of clinical assessment instruments[END_REF]. Simple structure is achieved when each factor is represented by several items that each load strongly on that factor only [START_REF] Pett | Making sense of factor analysis[END_REF]. [START_REF] Tabachnick | Using multivariate statistics[END_REF] suggest that secondary loadings (or cross-loadings) should be no greater than .32, and [START_REF] Beavers | Practical considerations for using factor analysis in educational research[END_REF] require a difference between the highest loading and the other loadings greater than 0.3; because this rule is very strict and mostly applicable to cognitive constructs, we adopted 0.2 as our final criterion.
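The item screening just described can be summarized in a short sketch. The fragment below is illustrative only (the original analyses were run in SPSS, not Python) and assumes a hypothetical pandas DataFrame named `items` holding one respondent per row and one of the 72 items per column; the thresholds mirror those stated above.

```python
import pandas as pd
from scipy.stats import skew, kurtosis

def screen_items(items: pd.DataFrame, r_max=0.60, skew_max=3.0, kurt_max=7.0):
    """Flag potentially redundant item pairs and drop overly skewed/kurtotic items."""
    # 1) Potentially redundant pairs (|r| > .60); the analyst then keeps
    #    one item of each pair on semantic grounds.
    corr = items.corr()
    redundant_pairs = [
        (a, b, corr.loc[a, b])
        for i, a in enumerate(corr.columns)
        for b in corr.columns[i + 1:]
        if abs(corr.loc[a, b]) > r_max
    ]

    # 2) Distributional screening: eliminate items with extreme skewness or
    #    kurtosis, since they can bias the subsequent factor analysis.
    too_skewed = [
        col for col in items.columns
        if abs(skew(items[col].dropna())) >= skew_max
        or kurtosis(items[col].dropna()) >= kurt_max  # excess kurtosis (scipy default)
    ]
    kept = items.drop(columns=too_skewed)
    return redundant_pairs, too_skewed, kept
```

In this sketch the semantic comparison of redundant pairs remains a manual step, which is consistent with the procedure reported above.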
Factorial refinement.
During the second phase, the questionnaire was re-analyzed. In order to investigate its factorial structure, the data were submitted to an exploratory factor analysis (EFA) with SPSS 18 and then to a confirmatory factor analysis (CFA) with LISREL 8.8 software (Jöreskog & Sörbom, 2001). The exploratory factor analysis (EFA) was conducted on a random sample of 382 participants and the confirmatory analysis (CFA) on a sample of 387 participants.
EFA.
To estimate the number of dimensions to retain, the remaining 65 items were first subjected to a principal component analysis using pair-wise missing data deletion, which yielded 720 valid cases. Three methods were employed to estimate the optimal number of components: the scree test [START_REF] Cattell | Handbook of multivariate experimental psychology[END_REF], Kaiser-Guttman's criterion [START_REF] Kaiser | Tops in Factor Analysis[END_REF], and component representativeness. The Kaiser criterion generally leads to overestimation of the number of dimensions [START_REF] Tzeng | On reliability and number of principal components: Joinder with Cliff and Kaiser[END_REF], and the scree test is a rather subjective evaluation that can also slightly overestimate the number of factors [START_REF] Zwick | Comparison of five rules for determining the number of components to retain[END_REF]; the representativeness of each component after rotation gives the number of non-negligible dimensions. Eleven factors had an eigenvalue above 1 (Kaiser criterion), whereas the shape of the eigenvalue curve suggested that seven components should be retained (scree test). Moreover, examination of the eight- and seven-component solutions consistently indicated that only seven components had at least three loadings above 0.40. Thus, a seven-dimensional structure was retained. The responses to the 65 items were then subjected to a common factor analysis using the principal axis factoring method of extraction and oblimin oblique rotation to allow for correlations among factors. To obtain an understandable, parsimonious, and stable structure, we defined selection criteria to determine which items should be included in each factor. To be included, an item had to satisfy the following requirements: its highest loading needed to be above 0.40, its loadings on the other factors needed to be below 0.30, and the difference between the highest loading and the other loadings needed to be greater than 0.20. After analysis of the factor loadings and iterative elimination of items that did not fulfill the selection criteria, 39 items remained, organized into seven factors that explained 66.7% of the total variance (Table 5). The translation of these items is presented in Appendix 1.
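Both steps of this procedure can be expressed compactly. The sketch below is an illustrative reconstruction, not the analysis actually run in SPSS: the first helper counts eigenvalues above 1 for the Kaiser-Guttman criterion (the same values can be plotted for a scree test), and the second applies the three retention criteria to a hypothetical items-by-factors pattern matrix `loadings`, such as one produced by a principal-axis extraction with oblimin rotation.

```python
import numpy as np
import pandas as pd

def eigenvalues(items: pd.DataFrame) -> np.ndarray:
    """Eigenvalues of the item correlation matrix, in decreasing order
    (used for the Kaiser-Guttman criterion and for drawing a scree plot)."""
    evals = np.linalg.eigvalsh(items.corr().values)
    return np.sort(evals)[::-1]

def n_factors_kaiser(items: pd.DataFrame) -> int:
    """Number of components with an eigenvalue above 1."""
    return int((eigenvalues(items) > 1).sum())

def select_items(loadings: pd.DataFrame,
                 primary_min=0.40, secondary_max=0.30, gap_min=0.20) -> pd.DataFrame:
    """Keep items whose pattern of loadings satisfies the three criteria above."""
    keep = []
    for item, row in loadings.abs().iterrows():
        ordered = np.sort(row.values)[::-1]          # |loadings|, descending
        primary, second = ordered[0], ordered[1]
        if (primary > primary_min                    # highest loading > .40
                and second < secondary_max           # every other loading < .30
                and primary - second > gap_min):     # difference > .20
            keep.append(item)
    return loadings.loc[keep]
```

In practice, item elimination and re-extraction would be iterated until all remaining items satisfy the criteria, as described above.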
CFA.
The confirmatory factor analysis of the seven-factor model obtained in the EFA was then conducted on the second subsample (N = 387). The results generally suggest a good model fit (χ²(671) = 1789.95; p < 0.001; RMSEA = 0.0661; GFI = 0.81; NFI = 0.94).
--------------------------------- Insert table 5 here ---------------------------------
Factor 1 "clarification and meaning" (6 items) comprises the various strategies that are intended to achieve an understanding of an emotional experience and to assign meaning to it.
Factor 2 "rehearsing" (5 items) refers to a person's willingness to re-experience an emotion, memorize it, recall it, and even amplify it through rehearsal. Factor 3 "venting" (6 items) reflects a desire to reduce the emotional weight that is associated with an experience. Factor 4 "arousing empathy/attention" (7 items) covers the various strategies that are developed to describe emotions to listeners and elicit their sympathy. Factor 5 "informing and/or warning"
(5 items) represents the intention to lend one's own experience to others for their benefit.
Factor 6 "assistance/support and comfort/consolation" (5 items) reflects the willingness to obtain some form of emotional support. Finally, Factor 7 "advice and solutions" (5 items) represents the pursuit of intellectual and/or practical support.
Scores for the seven subscales were calculated by averaging the ratings of the respondents for the various items involved. Cronbach's alpha coefficients were then used to assess the internal consistency of the subscales. As shown in Table 6, the internal consistency of the seven subscales ranged from 0.75 to 0.92, and the correlations among the subscales were satisfactory.
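The subscale scores and reliabilities reported in Table 6 follow directly from the retained items. A minimal sketch is given below; it assumes a hypothetical dictionary `subscales` mapping each factor name to its item columns, and computes Cronbach's alpha from its standard variance formula rather than from any particular statistics package.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (respondents in rows)."""
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def subscale_scores(data: pd.DataFrame, subscales: dict) -> pd.DataFrame:
    """Average the item ratings within each subscale, e.g. {'venting': [col1, col2, ...]}."""
    return pd.DataFrame({name: data[cols].mean(axis=1)
                         for name, cols in subscales.items()})
```

For each subscale, `cronbach_alpha(data[cols])` would then yield the kind of internal-consistency figures reported here (0.75 to 0.92).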
--------------------------------- Insert table 6 here ---------------------------------
The influence of event valence on social sharing motives. The motives for the social sharing of emotion varied across emotional conditions (see Table 7). Overall, the motives for sharing a positive event (joy) differed significantly from those for sharing a negative event (anger, fear or sadness). Compared with negative events, positive experiences were shared more frequently for the purposes of re-experiencing the event and arousing empathy/attention, and less frequently for the purposes of venting and seeking understanding, support and advice. There were also numerous differences in alleged motives between the various types of negative episodes (involving anger, fear or sadness); thus, the discriminative power of the SSMS was confirmed.
--------------------------------- Insert table 7 here ---------------------------------
Impact of gender on the alleged motives for social sharing. As shown in Table 8, all the alleged motives for social sharing were influenced by gender. Women reported more often than men that they talked about their emotional experiences in order to make sense of them, to vent, to seek help/support, and to obtain advice/solutions. Men reported confiding more in order to relive the event, to elicit empathy/attention, and to inform/warn others.
--------------------------------- Insert table 8 here ---------------------------------
Links between the social sharing parameters and the alleged motives for social sharing.
The links between the emotional intensity of the event, the characteristics of the social sharing (delay, frequency, number of sharing partners), and the 7 categories of alleged motives for social sharing are shown in Table 9. This correlation matrix clearly demonstrates the discriminative power of the SSMS-39, as each of the four parameters of social sharing appears to be linked to a specific pattern of alleged motives. Regarding the relationship between the type of partner the episode was confided to for the first time and the reasons why social sharing was undertaken, it appears that seeking help/support and venting are not undertaken to the same extent depending on the type of sharing partner (Table 10).
- ------------------------Insert tables 9 and 10 here ------------------------
General discussion
The purpose of the current studies was to construct a reliable questionnaire to assess the major classes of motives for the social sharing of emotion. In the first study, we collected a large number of motives that the respondents could identify for having shared a personal emotional experience with others. Subsequently, the collected motives were organized into categories and converted into items to create the intended questionnaire of alleged motives for sharing an emotional experience. In the second study, this questionnaire was tested on a larger sample of respondents. We will first examine how far our findings were consistent with the results of the literature on the social sharing of emotion. We will then discuss the comparability of our findings regarding social sharing motives with those of the three existing studies that investigated this question. Next, we will comment on the motives that prevailed in our respondents' answers and on the instrument that was created in the present study to assess such motives in a systematic way in the future. Finally, we will discuss the findings resulting from our first empirical use of this instrument, in the comparison of motives according to type of emotions on the one hand and according to respondents' gender on the other hand.
Did the data reported by our respondents in relation to the social sharing of emotional episodes confirm previous observations regarding the social sharing of emotion? The results of our two studies in this respect were largely consistent with those of previously published studies. In particular, emotional episodes were predominantly found to be socially shared within a short delay. In total, these episodes had been shared on the same day that they occurred 63% and 73% of the time in Studies 1 and 2, respectively, each of which is close to the value of 60% that has typically been reported in previous studies (for a review, see [START_REF] Rimé | Emotion elicits the social sharing of emotion: Theory and empirical review[END_REF]). The major traits of the social sharing of emotion were also confirmed by our findings, which indicate that the reported episodes had been modally shared several times with several persons and that these persons were nearly always individuals with whom the respondents reported having close relationships, such as friends, companions, spouses, or family members (e.g., [START_REF] Rimé | Social sharing of emotions: New evidence and new questions[END_REF][START_REF] Rimé | Emotion elicits the social sharing of emotion: Theory and empirical review[END_REF][START_REF] Skowronski | The effect of social disclosure on the intensity of affect provoked by autobiographical memories[END_REF]). The fact that in Study 1 fewer participants (54.4%) had talked about their emotional experience to a large number of persons than was the case in Study 2 (70.46%) likely results from the difference in the intensity of the emotional episodes collected in the two studies (M = 2.10 for moderate-intensity events in Study 1 versus M = 8.00 for more intense events in Study 2); it is indeed well established that the more emotionally intense the experienced event, the more people talk about it [START_REF] Luminet | Social sharing of emotion following exposure to a negatively valenced situation[END_REF][START_REF] Rimé | Social sharing of emotions: New evidence and new questions[END_REF].
How consistent were our findings on motives for the social sharing of emotion with regard to preexisting studies? Eight of the twelve classes of motives that were proposed by [START_REF] Rimé | Interpersonal emotion regulation[END_REF] were confirmed by the open answers that were provided by our participants. The motives that did not emerge from these answers were « legitimization » (i.e., receiving approval, being legitimized, being understood) and « entertaining » (i.e., amusing another person). However, these two classes of motives partially overlapped with the motives of "clarification and meaning" and "gaining attention", respectively. In addition, the motives of "emotional social support" and "gaining attention and empathy" became broader categories, with "assistance, support and comfort/consolation" in one category and "arousing empathy/gaining attention" in the other category. The results of the present study are also similar to those of the three preceding studies on alleged motives (see Table 11). The results of this study thus confirm the data from the three previous studies [START_REF] Delfosse | Les motifs allégués du partage social et de la rumination mentale des émotions : Comparaison des épisodes positifs et négatifs [Alleged motives for social sharing and mental rumination of emotions: A comparison of positive and negative episodes[END_REF][START_REF] Finkenauer | Motives for sharing emotion[END_REF][START_REF] Nils | Partage social des émotions: Raisons invoquées et médiateurs perçus [Social sharing of emotions: Alleged reasons and perceived mediators[END_REF] (cited by Rimé, 2007), with a sample whose average age is higher (M = 30, SD = 12.08, Study 1). In addition, in our sample there were as many women as men, and as many students as salaried employees (Study 1). Insofar as it is well established that emotion regulation needs evolve with age and gender, having a heterogeneous sample in terms of age and gender constitutes an undeniable contribution to the existing data.
--------------------------------- Insert table 11 here ---------------------------------
In our data, the most popular motives were those that involved the notion of venting (i.e., expressing one's emotion and relieving an emotional load), immediately followed by motives that were oriented toward listeners: "informing and/or warning", "arousing empathy/attention", "assistance, support and comfort, consolation", and "advice and solutions". In our discussion of Study 1, we emphasized that "venting" corresponded to the common belief that emotional expression reduces the intensity of emotions or resolves them completely (for a critique, see [START_REF] Nils | Beyond the myth of venting: Social sharing modes determine the benefits of emotional disclosure[END_REF]). The next most popular motives mentioned above involve the expectation of an active contribution of the social sharing target to the emotion regulation process of the narrator. These expected contributions can take either a cognitive form, such as assistance in the search for meaning, or a socio-emotional form, such as emotional social support and manifestations of empathy [START_REF] Rimé | Interpersonal emotion regulation[END_REF]. The importance of interpersonal relationships in emotional sharing motives is also evidenced by the presence of the motive of "help/support and comfort/consolation". This indicates that the social sharing of emotion is not only primarily addressed to close persons with important demands for assistance in the emotion regulation process; this sharing also occurs with the open purpose of strengthening pre-existing social ties. As was already stated earlier, we stress that the motives considered in the reported studies were restricted to those that respondents recalled. Obviously, some motives may be inaccessible to the mind. Despite this caveat, collecting self-report data about motives for the social sharing of emotions means identifying the reasons why, according to respondents themselves, people talk about their emotional states.
Consequently, it provides relevant information on the needs that people experience after an emotion and that they perceive as central in their interpersonal verbalization of emotion.
The SSMS-39 that was developed in the present studies was both consistent and reliable. Future research should aim to verify the psychometric properties of this scale among larger and different populations. Most of the participants in Study 2 were women, and the respondents were quite young; this characteristic of the sample could limit the generalization of our results. Future studies will thus have to confirm the psychometric properties of this scale with a more representative sample, and to investigate the divergent and predictive validity of this tool. Moreover, the SSMS-39 should differ from existing measures of disclosure, among other reasons because it does not focus on negative emotions and does not consider the reasons for confiding as a fixed characteristic, but rather as determined by the characteristics of the emotional experience (e.g., valence, type of emotional episode) and of the individuals (e.g., gender, age); its divergent validity with such existing scales will therefore be investigated in future studies. The scale developed in the current study offers a new tool for research that is intended to provide insight into the social dimensions of emotional regulation.
In particular, the SSMS-39 will facilitate investigation of the relationships between the motives and expectations that are manifested by a narrator, the manner in which listeners respond, and the consequences of such interaction on the various types of benefits that emotional sharing can provide (e.g., [START_REF] Badr | Effects of relationship maintenance on psychological distress and dyadic adjustment among couples coping with lung cancer[END_REF][START_REF] Banthia | The Effects of Dyadic Strength and Coping Styles on Psychological Distress in Couples Faced with Prostate Cancer[END_REF]). Future research in this direction is likely to provide new insight into the manner in which regulation processes and the social regulation of emotions complement one another. Since emotion regulation skills evolve with age [START_REF] Carstensen | Socioemotional Selectivity Theory and the Regulation of Emotion in the Second Half of Life[END_REF][START_REF] Charles | Emotion regulation in aging[END_REF], it is likely that the alleged motives also change with age. Although this could not be shown in our study because of the age homogeneity of our sample, future studies will have to investigate the impact of this variable. A scale that evaluates the alleged motives for social sharing thus makes it possible to better understand the emotion regulation needs that individuals seek to fulfil when they talk about their emotions with those close to them, and more generally the interpersonal side of emotion regulation.
The data collected with the SSMS-39 led to two comparisons offering an opportunity to test the capacity of this new tool to reveal differences in respondents' alleged motives for social sharing. A first comparison regarded the type of emotion (joy, fear, anger, or sadness) respondents had to refer to. Focusing on these four emotions in particular was justified by the fact that they are easily identifiable, which ensured that the participants in each condition would report an event related to the emotion cited. This variable was found to markedly determine the reasons for sharing (Study 2). Thus, compared with the sharing of negative episodes, the social sharing of episodes of joy is more motivated by a desire to relive such an event by discussing it and to elicit empathy/attention. These findings are consistent with those from the capitalization studies. [START_REF] Gable | What Do You Do When Things Go Right? The Intrapersonal and Interpersonal Benefits of Sharing Positive Events[END_REF] have indeed shown that sharing positive experiences within the couple not only results in intra-individual benefits (increased well-being and positive affect through the pleasurable sensations associated with the evocation of the episode), but also in inter-individual benefits.
In particular, these studies show that socially sharing positive events strengthens the relationship even more when the listener responds in an empathic way [START_REF] Gable | What Do You Do When Things Go Right? The Intrapersonal and Interpersonal Benefits of Sharing Positive Events[END_REF]. As evidenced by [START_REF] Gable | Chapter 4 -Good News! Capitalizing on Positive Events in an Interpersonal Context[END_REF], talking about one's positive emotions to an empathic listener increases the appreciation and confidence that we have in this person (interpersonal benefits). In the light of these studies on "capitalization", it thus seems logical that individuals look for their partners' empathy after they have experienced positive emotions, since this type of reaction yields both intrapersonal and interpersonal benefits. In the same way, the findings pertaining to the negative emotions are not surprising. Negative emotional episodes are shared more frequently for the purpose of finding meaning and seeking distress-buffering attitudes and actions from others. However, differences emerged according to whether the emotional episodes involved sadness, anger, or fear. Emotional episodes of sadness or anger are more frequently shared to obtain counsel from others than are episodes of fear. The data that allowed the construction of the questionnaire were collected on respondents' emotions distinguished by their valence in the first study (positive and negative emotions) or by their category in the second one (joy, anger, fear, sadness). The question then arises whether the scope of the questionnaire constructed in this manner is limited to these particular emotions or if it extends to the whole realm of emotions. In fact, we have stressed that the different social sharing motives evidenced in our work largely overlapped with those that emerged from the review by [START_REF] Rimé | Interpersonal emotion regulation[END_REF] of the sharing motives recorded in the three previous studies on alleged motives for social sharing. In these studies, participants were instructed to refer to "an emotion that they had experienced recently and of which they had spoken". They thus referred to an emotional experience they themselves had selected, a selection procedure that allowed collecting a large variety of emotional experiences. The overlap between the reasons highlighted in these studies and those that emerge from the present work constitutes an important argument in favor of the generality of the scope of our questionnaire. Future studies will refine these preliminary data by investigating the impact of other types of emotions on the alleged motives.
The second comparison conducted with the new SSMS-39 demonstrated that male and female respondents differ significantly, and sometimes at very high levels of significance, in their level of endorsement of all seven assessed motives of social sharing. In comparison to men, women reported sharing emotions much more often for the purpose of venting their feelings, for receiving help and comfort, for obtaining advice and solutions and for understanding their emotions. By contrast, men express their emotions much more in order to reexperience the episode, to arouse empathy or attention, and to inform people around them. In sum, after an emotional episode, women experience needs that are different from those experienced by men. It has already been argued that emotion regulation strategies depend on gender (e.g., [START_REF] Haga | Emotion regulation: Antecedents and wellbeing outcomes of cognitive reappraisal and expressive suppression in cross-cultural samples[END_REF][START_REF] Zlomke | Cognitive emotion regulation strategies: Gender differences and associations to worry[END_REF]). In particular, [START_REF] Haggard | Co-rumination in the workplace: Adjustment trade-offs for men and women who engage in excessive discussions workplace problems[END_REF] argued that, compared to women, men are more inclined to see problem-talk as a puzzle to be solved. This suggests that males and females have complementary ways of discussing emotional experiences. Indeed, social sharing theory proposes that two different social sharing modes can be adopted after an emotion and that each mode has specific consequences for the regulation of the emotion [START_REF] Nils | Beyond the myth of venting: Social sharing modes determine the benefits of emotional disclosure[END_REF][START_REF] Rimé | Interpersonal emotion regulation[END_REF][START_REF] Rimé | Emotion elicits the social sharing of emotion: Theory and empirical review[END_REF]. The cognitive mode takes place when the social sharing involves cognitive work, with distancing, perspective taking, reframing and reappraisal of the episode. This mode, which favors the processing of the emotional experience, is apt to bring emotional recovery, or a significant reduction of the impact of the episode's memory. The socio-affective mode provides the narrator with social responses involving help, support, comfort, consolation, legitimization, attention, bonding, and empathy. In contrast to the previous one, this mode does not bring emotional recovery, but rather a strong yet temporary alleviation and feeling of relief. The socio-affective mode may usefully pave the way to the more demanding cognitive mode. The underlying rationale of this two-mode view is that as long as the cognitive appraisal of a past emotional episode remains unchanged, the memory of this episode necessarily triggers the same emotional state as the one experienced initially [START_REF] Rimé | Interpersonal emotion regulation[END_REF]. Experimental findings by [START_REF] Lepore | It's not that bad: Social challenges to emotional disclosure enhance adjustment to stress[END_REF] and by [START_REF] Nils | Beyond the myth of venting: Social sharing modes determine the benefits of emotional disclosure[END_REF] provided support for the two-mode view. In the framework of such concepts, the gender differences evidenced here suggest that the social sharing motives of males and females are complementary.
It should be recalled that in adulthood, in both genders, the predominant sharing partners are spouses (e.g., [START_REF] Rimé | Social sharing of emotions: New evidence and new questions[END_REF]). These findings definitely call for future work on the way males and females on the one hand, and members of couples on the other hand, mutually regulate their emotions in the social sharing process. In any case, the strong and consistent differences evidenced in the social sharing motives of males and females show that the new scale developed in the reported studies is capable of yielding new and heuristic findings.
Table 2. A list of motives alleged by respondents for sharing an emotion (from [START_REF] Rimé | Interpersonal emotion regulation[END_REF])
Being in touch: escaping loneliness Social bonding: escaping loneliness/feeling of abandonment Sociorelational motives: strengthening social links Social cohesion, bonding / strengthening social links
Empathy:
touching/moving others, feeling oneself closer to Affecting the target : moving the listener Arousing empathy
others
Receiving attention, impressing others Gaining attention: eliciting interest distinguishing oneself, Gaining attention
Informing others: warning Informing others: bringing them one's experience Informing one's close circle of one's experience or of one's condition Informing and/or warning
. Informing and/or warning
Bringing others one's experience, preventing others from making the same mistake
Table 3. Results of the content analysis: distribution of the 487 collected social sharing motives in the various categories
Social sharing motives | n (%) | Typical items
1. Rehearsing/Brooding | 41 (8.42%) | "re-experiencing the moment" ("revivre le moment"); "remembering what occurred" ("me remémorer ce qui s'est passé"); "giving significance to the event" ("donner de l'importance à l'événement")
2. Venting | 132 (27.11%) | "letting off steam" ("évacuer le trop plein d'émotions"); "letting go of a burden" ("me libérer d'un poids"); "externalizing emotions" ("extérioriser mes émotions")
3. Assistance and support / Comfort | 60 (12.32%) | "being comforted" ("être réconforté(e)"); "receiving support" ("être soutenu(e)"); "being helped" ("être aidé(e)")
4. Clarification and meaning | 19 (3.90%) | "having a better understanding of what occurred" ("mieux comprendre ce qui s'est passé"); "analyzing what occurred" ("analyser ce qui s'est passé"); "putting what occurred into perspective" ("prendre du recul par rapport à ce qui s'est passé")
5. Advice and solutions | 58 (11.91%) | "receiving advice" ("recevoir des conseils"); "receiving suggestions" ("recevoir des suggestions"); "hearing an outside perspective" ("avoir un avis extérieur")
6. Bonding | 25 (5.13%) | "strengthening my social bonds" ("resserrer mes liens avec l'autre"); "feeling the other's presence" ("sentir la présence de l'autre"); "escaping from loneliness" ("me sentir moins seul(e)")
7. Arousing empathy / Gaining attention | 67 (13.76%) | "touching him/her" ("toucher l'autre"); "arousing empathy" ("susciter l'empathie"); "sharing my experience" ("partager mon expérience")
8. Informing and/or warning | 85 (17.45%) | "informing him/her" ("informer l'autre"); "warning others" ("avertir les autres"); "informing him/her about what occurred" ("prévenir l'autre de ce qui s'est passé")
Table 4. Comparison of positive and negative events for frequency of occurrence of the various categories of motives in respondents' answers
Motives | Positive event (n = 193) | Negative event (n = 294) | χ²(1, N = 487)
Rehearsing | 35 | 6 | 11.69*
Venting | 32 | 100 | 98.79*
Assistance, Support and Comfort | 3 | 57 | 34.30*
Clarification and Meaning | 0 | 19 | 12.98*
Advice and Solutions | 4 | 54 | 29.48*
Bonding | 7 | 18 | 1.49
Arousing Empathy/Attention | 55 | 12 | 58.54*
Informing and/or Warning | 55 | 30 | 27.06*
Note. * p < .001
Table 5. Loadings of the 39 items in the seven-factor solution (partial data recovered from the source)
Items
-0,202 0,288 0,542
mettre en garde l'autre (« warn him/her ») 0,371 -0,312 0,261 0,476
être entouré(e) (« be surrounded ») 0,790
être épaulé(e) (« be supported ») 0,751
sentir que je pouvais compter sur quelqu'un (« feel I could rely on somebody ») 0,208 0,663
être aidé(e) (« be helped ») -0,316 0,631
être soutenu(e) (« receive support ») 0,204 0,614
avoir l'opinion de l'autre (« learn about his/her opinion ») -0,667
savoir ce que l'autre en pense (« learn about his/her view ») -0,659
avoir un avis extérieur (« get an outside perspective ») -0,653
recevoir des suggestions (« receive suggestions ») 0,219 -0,586 0,323
voir comment l'autre aurait réagi (« see how he/she would have reacted ») -0,431
Note. N = 382, Extraction: principal axis factoring method, Rotation: oblimin method. Loadings above 0.30 are shown in boldface. For the sake of readability, loadings below 0.20 are not shown.
Table 6. Properties of the seven subscales of the SSMS and their intercorrelations
M | SD | N of items | α | F1 | F2 | F3 | F4 | F5 | F6 | F7
Note. Bravais-Pearson's r (N = 720). Correlations in boldface p < .001.
Table 7. Motives of social sharing by emotional event condition (means and standard deviations), N = 719
Motive | Anger (n = 193) | Joy (n = 193) | Fear (n = 167) | Sadness (n = 166) | F(3, 715)
Clarification/Meaning | 3.69a (1.75) | 2.10b (1.35) | 2.98c (1.64) | 3.47a (1.71) | 35.95***
Rehearsing | 2.19c (1.13) | 4.44a (1.43) | 2.49c (1.35) | 2.04b (1.11) | 145.33**
Venting | 4.75a (1.74) | 3.78b (1.39) | 4.25b (1.66) | 4.77a (1.65) | 15.82**
Arousing Empathy/Attention | 1.72b (.97) | 2.03a (1.02) | 1.69b (1.02) | 1.78ab (1.03) | 4.50*
Informing and/or Warning | 2.88 (1.48) | 2.89 (1.35) | 2.98 (1.49) | 2.61 (1.36) | 2.16
Assistance/Support and Comfort/Consolation | 3.48a | 1.85 (1.35) | 3.25a (1.97) | 4.10b (1.89) | 53.44**
Advice/Solutions | 4.25b (1.56) | 3.10a (1.65) | 3.47abc (1.69) | 3.81b (1.66) | 17.03**
Note. * p < .01; ** p < .001
Table 8. Motives of social sharing by gender (means and standard deviations)
Motive | Female (n = 496) | Male (n = 223) | F(1, 717)
Clarification/Meaning | 3.16 (1.79) | 2.83 (1.58) | 5.54**
Rehearsing | 2.75 (1.59) | 3.01 (1.62) | 4.15*
Venting | 4.68 (1.59) | 3.70 (1.63) | 56.766***
Arousing Empathy/Attention | 1.70 (.93) | 2.06 (1.15) | 20.27***
Informing and/or Warning | 2.76 (1.36) | 3.05 (1.55) | 6.66**
Assistance/Support and Comfort/Consolation | 3.39 (1.99) | 2.55 (1.72) | 26.56***
Advice/Solutions | 3.80 (1.73) | 3.33 (1.56) | 12.33**
Note. * p < .05; ** p < .01; *** p < .0001
Table 9. Correlations between the intensity of the emotional episode, the parameters of social sharing (delay, frequency, number of persons) and the alleged motives for social sharing (n = 719)
Intensity | Latency | Frequency | Number of sharing partners
Note. * p < .05, ** p < .01
Table 10. Motives of social sharing by type of sharing partner (means and standard deviations)
Motive | Companion (n = 164) | Friend (n = 281) | Family member (n = 231) | Non-intimate (n = 43) | F(3, 715)
Clarification/meaning | 3.01 (1.75) | 3.10 (1.79) | 3.03 (1.68) | 2.91 (1.55) | .217
Rehearsing | 2.69 (1.60) | 2.77 (1.58) | 3.04 (1.67) | 2.60 (1.38) | 2.18
Venting | 4.38 (1.68)a | 4.49 (1.63)a | 4.35 (1.64)a | 3.71 (1.83)b | 2.80*
Arousing empathy/attention | 1.76 (0.97) | 1.84 (1.05) | 1.80 (0.99) | 1.88 (0.99) | .301
Informing and/or warning | 2.73 (1.38) | 2.77 (1.38) | 3.02 (1.46) | 2.74 (1.62) | 1.96
Assistance/support and Help/consolation | 3.08 (1.81)ab | 3.37 (2.07)a | 2.99 (1.88)b | 2.44 (1.77)b | 3.64*
Advice/Solutions | 3.45 (1.69) | 3.79 (1.71) | 3.66 (1.66) | 3.36 (1.67) | 1.83
Note. * p < .05
Table 11. Comparison between the results of the present study and those of the three previous studies about alleged motives for social sharing
Finkenauer & Rimé (1996) | Delfosse et al. (2004) | Nils et al. (2005, cited by Rimé, 2007) | The present study
Rehearsing: reexperiencing Reminding: remembering, rehearsing reexperiencing, rehearsing
Venting: expressing, searching for relief Catharsis: venting, finding relief, alleviating Affective motives: catharsis, search for relief venting
Obtaining comfort: support, listening, sympathy, help Social support: being listened to, receiving help/support Social motives: seeking help and support
Socioaffectives motives: Assistance/support and
being consoled, comforted Help/consolation
Social approval motives:
being legitimized,
approved, understood
Finding understanding: explanation, meaning Understanding: analyzing meaning what happened, finding Cognitive motives: finding words cognitive clarification, Clarification/meaning
Obtaining advice: feedback, guidance Knowing other person's finding solutions view: receiving advice, Sociocognitive motives: suggestions, solutions receiving advice, Advices/Solutions
Table 1. Summary of the data from the three previous studies about alleged motives for social sharing of emotions (adapted from [START_REF] Rimé | Interpersonal emotion regulation[END_REF])
Finkenauer & Rimé (1996)
| 80,382 | ["769631", "1075043", "19046", "769630"] | ["268350", "203506", "102135", "203506"] |
01772121 | en | ["sdv"] | 2024/03/05 22:32:16 | 2018 | https://hal.sorbonne-universite.fr/hal-01772121/file/JMSM-D-17-00609_R1_sans%20marque.pdf | MD, MSc Pierre Bourdillon
email: [email protected]
PharmD Tanguy Boissenot
PharmD, PhD Lauriane Goldwirt
PhD Julien Nicolas
MD, MSc Caroline Apra
MD Alexandre Carpentier
Incomplete copolymer degradation of in situ chemotherapy
Keywords: Glioblastoma, carmustine, gliadel, recurrent surgery, BCNU, in situ drug delivery
Introduction: in situ carmustine wafers containing 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) are commonly used for the treatment of recurrent glioblastoma to overcome the blood-brain barrier. In theory, this chemotherapy diffuses into the adjacent parenchyma and the excipient degrades within a maximum of 8 weeks, but no clinical data confirm this evolution, because patients rarely undergo further surgery.
Materials and Methods: a 75-year-old patient was operated on twice for recurrent glioblastoma, and a carmustine wafer was implanted during the second surgery. Eleven months later, a third surgery was performed, revealing unexpected incomplete degradation of the wafer. 1H-Nuclear Magnetic Resonance was performed to compare this wafer to pure BCNU and to an unused copolymer wafer.
Results: In the used wafer, peaks corresponding to hydrophobic units of the excipient were no longer noticeable, whereas peaks of the hydrophilic units and traces of BCNU were still present.
These surprising results could be related to the formation of a hydrophobic membrane around the wafer, thus interfering with the expected diffusion and degradation processes.
Conclusions:
The clinical benefit of carmustine wafers in addition to the standard radiochemotherapy remains limited, and the in vivo behavior of this treatment has not been completely elucidated yet. We found that the wafer may remain in place even after several months. Alternative strategies to deal with the blood-brain barrier, such as drug-loaded liposomes or ultrasound-mediated opening, must be explored to offer larger drug diffusion or allow repetitive delivery.
Introduction
Tight capillary cellular junctions of the blood-brain barrier (BBB) limit drug diffusion from the systemic circulation to the brain parenchyma and prevent the use of numerous chemotherapies for central nervous system neoplasms [START_REF] Donelli | Do anticancer agents reach the tumor target in the human brain?[END_REF]. Numerous approaches aimed at breaching the BBB, including molecular engineering 2,3 and mechanical devices [START_REF] Carpentier | Clinical trial of blood-brain barrier disruption by pulsed ultrasound[END_REF], have been developed to answer the challenge of bringing anti-cancer agents 5 into the brain. In this context, in situ therapies, consisting of direct cell [START_REF] Gill | Direct brain infusion of glial cell line-derived neurotrophic factor in Parkinson disease[END_REF] or drug 7 infusion, have also been proposed. Carmustine wafers fall into this last category: they are biodegradable copolymers containing 3.85% of 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU, also known as carmustine) and are used as an in situ chemotherapy in glioblastoma recurrences and even, more recently, as an adjuvant first-line treatment in newly diagnosed tumors [START_REF] Qi | The role of Gliadel wafers in the treatment of newly diagnosed GBM: a meta-analysis[END_REF]. Each wafer delivers 7.7 mg of carmustine over 5 days, with only 1% of the administered drug remaining in the brain tissue after 7 days [START_REF] Fleming | Pharmacokinetics of the carmustine implant[END_REF]. Studies in normal brain without surgical debulking reveal a diffusion of carmustine in a 2 to 10 mm range around the wafer [START_REF] Strasser | Distribution of 1,3-bis(2chloroethyl)-1-nitrosourea and tracers in the rabbit brain after interstitial delivery by biodegradable polymer implants[END_REF]. The inactive excipient consists of a degradable, random copolymer of 1,3-bis-(p-carboxyphenoxy)propane (CPP) and sebacic acid (SA) units in a 20:80 molar ratio connected by anhydride bonds (Figure 1a), termed poly[(1,3-bis-(p-carboxyphenoxy)propane)-co-(sebacic acid)], P(CPP-co-SA). P(CPP-co-SA) should completely degrade over a period of 6 to 8 weeks [START_REF] Fleming | Pharmacokinetics of the carmustine implant[END_REF]. SA contains an aliphatic alkyl chain whereas CPP exhibits 2 phenyl rings per CPP unit. Therefore, as degradation occurs in an aqueous environment, it is expected that bond cleavage will first occur at the SA-SA junctions rather than at SA-CPP or CPP-CPP junctions.
Although carmustine wafers have been an important subject of research and discussion, there is, to our knowledge, no report of their in vivo local evolution in a patient. Here, we report the case of a patient who underwent additional surgery months after carmustine wafer insertion, which allowed us to observe the persistence of the wafer. We provide the Proton Nuclear Magnetic Resonance (1H-NMR) analysis of this wafer in order to investigate the possible reasons for its failure to resorb.
Materials and Methods
Patient: A 75-year-old patient was admitted for glioblastoma progression, 11 months after a second resection surgery with carmustine wafer insertion. He had previously received fractionated focal irradiation at a dose of 2 Gy per fraction given once daily 5 days per week over a period of 6 weeks, for a total dose of 59.4 Gy of focal radiotherapy, and temozolomide at a dose of 75 mg per square meter per day, given 7 days per week from the first day of radiotherapy until the last day of radiotherapy. After a 4-week break, the patient received 7 cycles of adjuvant chemotherapy according to the standard 5-day schedule every 28 days. The dose was 150 mg per square meter for the first cycle and was increased to 200 mg per square meter beginning with the second cycle (no hematologic toxic effects were noticed). Because he presented with a local recurrence, his case was discussed in a multidisciplinary neuro-oncological meeting, and he underwent a third surgical resection. During the procedure, we surprisingly found remains of the partially degraded copolymer of the carmustine wafer (Figure 1b). We obtained the patient's written consent to analyze his data and publish it anonymously. The wafer's 1H-NMR spectrum was compared to those of pure BCNU and of an unused copolymer wafer.
Sample treatment:
The carmustine wafer extracted from the patient and an unused carmustine wafer, Gliadel® (MGI Pharma Ltd), were each dissolved using 1 mL of Solvable® solution (Perkin Elmer). The solution was stirred overnight until complete dissolution. After filtration, the Solvable® was evaporated and replaced by CDCl3 before 1H-NMR analysis. Carmustine, BiCNU® (Emcure Pharma UK Ltd), was directly dissolved in CDCl3 before 1H-NMR analysis.
Analytical methods: 1H-NMR spectroscopy was performed in 5 mm diameter tubes in CDCl3 at 25 °C on a Bruker Avance 300 spectrometer at 300 MHz. The chemical shift scale was calibrated based on the internal solvent signals (CDCl3, 7.26 ppm).
Results
The 1H-NMR analysis of the unused wafer exhibits all the expected peaks of the CPP (Figure 1a, b, c, and d) and SA (Figure 1e, f, and g) units belonging to the copolymer, with the right molar ratio between CPP and SA units (Figure 1c). Peaks of BCNU were also detected in the 3.3-4.25 ppm region (Figure 1c). Interestingly, the 1H-NMR spectrum of the partially degraded wafer showed that the peaks corresponding to CPP units (peaks a-d) were no longer noticeable, whereas peaks f and g belonging to SA units were still present. Note that peak e from SA is not visible, probably because of the poor solvation of the nearby carboxylic acid functions in deuterated chloroform (the solvent used for the NMR study). Traces of BCNU were also detected, demonstrating that not all the BCNU was released from the wafer.
Apart from the unexpected finding of a non-degraded wafer, the incomplete clearance of only the SA and not the CPP units was rather surprising. Given that SA is more hydrophilic than CPP, it would have been expected to be cleared before CPP. Because of their greater hydrophobicity, the CPP-CPP bonds are likely more difficult to cleave than CPP-SA and SA-SA bonds in an aqueous environment. These results may be related to the potential formation of a hydrophobic membrane around the wafer, which could interfere with the normal degradation process.
Discussion
The benefit of copolymer wafers in addition to the standard radio-chemotherapy is supported by some controlled studies 7 but remains limited (less than 4 months of progression-free survival benefit). However, the higher postoperative infection rate after copolymer wafer implantation does not affect survival [START_REF] Pallud | Long-term results of carmustine wafer implantation for newly diagnosed glioblastomas: a controlled propensity-matched analysis of a French multicenter cohort[END_REF]. Although the long-term degradation of carmustine wafers has not been studied in patients, recent imaging studies have prospectively investigated the MRI appearance of copolymer wafers during the years following surgery [START_REF] Ulmer | Temporal changes in magnetic resonance imaging characteristics of Gliadel wafers and of the adjacent brain parenchyma[END_REF]. The kinetics of changes in such images mainly help to discriminate between a "normal" postoperative evolution and an abscess, a tumor residue or a recurrence. Restricted diffusion on MRI outlining the copolymer wafer is known to be noticeable up to one year postoperatively. In light of our report, and considering that surgery for recurrent glioma long after copolymer wafer implantation is highly uncommon because of the short life expectancy of these patients, we question whether such MRI findings could be related to incomplete degradation of the P(CPP-co-SA). The local consequences of such events are unknown.
Conclusion
Despite exclusive in situ drug delivery during the 5 days between surgery and standard radiochemotherapy, carmustine shows limited efficiency. Our report suggests that this could be related to uncontrolled local tolerance and degradation of the copolymer. This finding underlines the importance of new studies concerning the long-term in vivo degradation of copolymer wafers in humans. Moreover, alternative strategies to deal with the blood-brain barrier, such as drug-loaded liposomes or opening using ultrasound 4, must be considered. They could offer larger drug diffusion and repetitive delivery compared to that observed with copolymer wafers [START_REF] Fleming | Pharmacokinetics of the carmustine implant[END_REF].
Figure 1 :
1 Figure 1: A. Structure of polifeprosan 20: a random copolymer formed by 1,3-bis-(p-carboxyphenoxy)propane (CPP, green square) and sebacic acid (SA, blue square) monomers (20:80 molar ratio) connected by anhydride bonds ; B. Circled in red: carmustine wafer one year after a first surgery ; C. 1H RMN analysis of a control carmustine wafer (bottom), BCNU (middle) and the wafer extracted from patient (top). The unused wafer exhibits all expected peaks of CPP (a, b,c, and d) and SA (e, f, and g) units of the copolymer, together with peaks of BCNU in the 3.3-4.25 ppm region (c). The partially degraded wafer shows no peaks corresponding to CPP units, whereas peaks f and g belonging to SA units are still present, with traces of BCNU. Note that peak from SA is not visible, likely because of the poor solvation of the nearby carboxylic acid functions in deuterated chloroform (solvent used for the NMR study).
Disclosure: the authors declare no conflict of interest.
01772162 | en | sdv | 2024/03/05 22:32:16 | 2018 | https://hal.sorbonne-universite.fr/hal-01772162/file/Vietbocap%202018%20-%20on-line%20publication_sans%20marque.pdf | Wilson R. Lourenço
email: [email protected]
Dinh-Sac Pham
Thi-Hang Tran
Phong Nha-Ke Bang
The genus Vietbocap Lourenço & Pham, 2010 in the Thien Duong cave, Vietnam: A possible case of subterranean speciation in scorpions (Scorpiones: Pseudochactidae)
Keywords:
Two new species of scorpion belonging to the family Pseudochactidae and to the genus Vietbocap are described based on specimens collected in the Thien Duong cave, which belongs to the Vom cave system, in the Phong Nha-Ke Bang National Park, Quang Binh Province, Vietnam. The description of Vietbocap thienduongensis Lourenço & Pham, 2012, previously known only from males, is completed on the basis of females collected at 750 m from the cave entrance. The two new species described here were collected at 3000 and 5000 m from the cave entrance, respectively, and are also troglobitic elements, rather similar to V. thienduongensis but with clear morphological differences. The observed situation suggests a possible case of speciation within the cave, the first reported in scorpions. The population found at 5000 m from the cave entrance represents an absolute distance record from a cave entrance for scorpions.
Introduction
As already outlined in several previous publications [START_REF] Lourenc ¸o | First record of the family Pseudochactidae Gromov (Chelicerata, Scorpiones) from Laos and new biogeographic evidence of a Pangaean palaeodistribution[END_REF][START_REF] Lourenc ¸o | The genus Vietbocap Lourenc ¸o & Pham, 2010 (Scorpiones: Pseudochactidae); proposition of a new family and description of a new species from Laos[END_REF][START_REF] Lourenc ¸o | Second record of the genus Troglokhammouanus Lourenc ¸o, 2007 from Laos and description of a new species (Scorpiones: Pseudochactidae)[END_REF][START_REF] Lourenc ¸o | A remarkable new cave scorpion of the family Pseudochactidae Gromov (Chelicerata, Scorpiones) from Vietnam[END_REF][START_REF] Lourenc ¸o | A second species of Vietbocap Lourenc ¸o & Pham, 2010 (Scorpiones: Pseudochactidae) from Vietnam[END_REF], the family Pseudochactidae Gromov, 1998 most certainly contains the most remarkable scorpions described in the last twenty years. The first species to be discovered was Pseudochactas ovchinnikovi [START_REF] Gromov | A new family, genus and species of scorpions (Arachnida, Scorpiones) from southern Central Asia[END_REF], found in an isolated mountainous region of southeastern Uzbekistan and southwestern Tajikistan, in Central Asia [START_REF] Gromov | A new family, genus and species of scorpions (Arachnida, Scorpiones) from southern Central Asia[END_REF]. A second genus and species, Troglokhammouanus steineri Lourenc ¸o, 2007, was described from karst caves in Laos [START_REF] Lourenc ¸o | First record of the family Pseudochactidae Gromov (Chelicerata, Scorpiones) from Laos and new biogeographic evidence of a Pangaean palaeodistribution[END_REF]. Although this species was found inside a cave, its morphological characteristics do not correspond to a totally troglobitic element. This Laotian species reopened the question about the origins and affinities of the Pseudochactidae and led to new biogeographical interpretations [START_REF] Lourenc ¸o | First record of the family Pseudochactidae Gromov (Chelicerata, Scorpiones) from Laos and new biogeographic evidence of a Pangaean palaeodistribution[END_REF].
In the following years, scorpions have been prospected in karst cave systems in Vietnam, and several specimens of a new pseudochactid scorpion were collected in the Tien Son cave, which belongs to the Phong Nha system. These were described as a new genus and species, Vietbocap canhi Lourenc ¸o & Pham, 2010, which represents a true troglobitic element [START_REF] Lourenc ¸o | A remarkable new cave scorpion of the family Pseudochactidae Gromov (Chelicerata, Scorpiones) from Vietnam[END_REF]. Subsequent surveys in the cave systems of Vietnam have been carried out and another species of pseudochactid scorpion was collected in the Thien Duong cave, which belongs to the Vom cave system. The new species, Vietbocap thienduongensis Lourenc ¸o & Pham, 2012 showed features of a totally troglobitic element [START_REF] Lourenc ¸o | A second species of Vietbocap Lourenc ¸o & Pham, 2010 (Scorpiones: Pseudochactidae) from Vietnam[END_REF]. Almost simultaneously, one more new species of Vietbocap was collected in a cave in Laos; Vietbocap lao Lourenc ¸o, 2012, once again proved to be a true troglobitic element [START_REF] Lourenc ¸o | The genus Vietbocap Lourenc ¸o & Pham, 2010 (Scorpiones: Pseudochactidae); proposition of a new family and description of a new species from Laos[END_REF] Only several years after the description of the first species belonging to the genus Troglokhammouanus Lourenc ¸o, 2007, a second species was found and described from another cave in Laos [START_REF] Lourenc ¸o | Second record of the genus Troglokhammouanus Lourenc ¸o, 2007 from Laos and description of a new species (Scorpiones: Pseudochactidae)[END_REF]. The description of Troglokhammouanus louisanneorum Lourenc ¸o, 2017 was based on a single female specimen and the elements of this genus are apparently less common than those of the genus Vietbocap.
The fact that several pseudochactid elements originating from caves within the same major karst system have been found in Laos and Vietnam suggests that this region of Southeast Asia may represent a refuge or centre of endemism for this family.
In recent years, more intense research has been concentrated on the Thien Duong cave, which is a major cave in this karst system (see next section). Prospections were carried out at much greater distances from the entrance of the cave, and further elements belonging to the genus Vietbocap were located. The study of the specimens collected at 3000 and 5000 m from the cave entrance, respectively, showed that these were new species, similar to V. thienduongensis but presenting some clear morphological differences. This situation suggests a possible case of speciation within the cave system, the first ever reported for scorpions. Moreover, the population found at 5000 m from the entrance of the cave represents a new distance record from a cave entrance for scorpions.
We will not re-discuss here the controversial phylogenetic and biogeographical aspects concerning this peculiar scorpion family, since most basic points have already been largely discussed by Lourenc ¸o [START_REF] Lourenc ¸o | First record of the family Pseudochactidae Gromov (Chelicerata, Scorpiones) from Laos and new biogeographic evidence of a Pangaean palaeodistribution[END_REF]. Using molecular tools, Sharma et al. [START_REF] Sharma | Phylogenomic resolution of scorpions reveals multilevel discordance with morphological phylogenetic signal[END_REF] strongly supported the monophyly of pseudochactids, chaerilids, and buthids. The precise relationships among these three families remain however strongly ambiguous. More detailed information on the orogeny and geodynamics of South East Asia, and on the location, ecology and climate of the national park and caves, can be found in Lourenc ¸o and Pham [START_REF] Lourenc ¸o | A remarkable new cave scorpion of the family Pseudochactidae Gromov (Chelicerata, Scorpiones) from Vietnam[END_REF].
The Thien Duong cave in the Vom cave system
The Thien Duong cave (also called Paradise cave), where the new species were found, is located in the Phong Nha-Ke Bang National Park, 60 km northwest of Dong Hoi city (Figs. 1-2). The Thien Duong cave is at an elevation of 200 m above sea level, near the west branch of the Ho Chi Minh Highway, in Son Trach Commune, Bo Trach District, Quang Binh Province, Vietnam. The cave was discovered by a local inhabitant in 2005, and the first 5 km of the cave were initially explored by scientists from the British Cave Research Association the same year. More recently, the whole extension of the cave was explored by the same Association. The cave is 31 km long and in parts reaches 100 m in height and 150 m in width. There are two cave systems in the Phong Nha-Ke Bang region: the Phong Nha cave system and the Vom cave system. These systems are, however, globally isolated, with no known geological connections between them [START_REF] Nghi | Adventurous tourism-a potential realm of world natural heritage-National Park Phong Nha-Ke Bang[END_REF].
The Phong Nha-Ke Bang karst is the oldest major karst area in Asia. It has been subject to massive tectonic changes and comprises a series of rock types that are interbedded in complex ways. Probably as many as seven major levels of karst development have occurred as a result of tectonic uplift and changing sea levels; thus, the karst landscape of PNKB is extremely complex, with high geodiversity and many geomorphic features of considerable significance [START_REF] Lourenço | A remarkable new cave scorpion of the family Pseudochactidae Gromov (Chelicerata, Scorpiones) from Vietnam[END_REF][START_REF] Nghi | Adventurous tourism-a potential realm of world natural heritage-National Park Phong Nha-Ke Bang[END_REF][START_REF]Phong Nha-Ke Bang National park[END_REF].
Methods
Scorpions were collected by scientists of the IEBR and the Phong Nha-Ke Bang National Park, while exploring the cave with the help of standard electric torches. They were found on the cave walls and sometimes under rocks, at approximately 750, 3000 and 5000 m from the main cave entrance. This last distance is a new distance record from a cave entrance for a scorpion. Measurements and illustrations were made using a Wild M5 stereo-microscope with a drawing tube and an ocular micrometer. Measurements follow those of Stahnke [START_REF] Stahnke | Scorpion nomenclature and mensuration[END_REF] and are given in mm. Trichobothrial notations are those developed by Soleglad & Fet [START_REF] Soleglad | Evolution of scorpion orthobothriotaxy: a cladistic approach[END_REF] and the morphological terminology mostly follows that of Hjelle [START_REF] Hjelle | Anatomy and morphology[END_REF] and Lourenço [START_REF] Lourenço | First record of the family Pseudochactidae Gromov (Chelicerata, Scorpiones) from Laos and new biogeographic evidence of a Pangaean palaeodistribution[END_REF][START_REF] Lourenço | Complements to the morphology of Troglokhammouanus steineri Lourenço, 2007 (Scorpiones: Pseudochactidae) based on scanning electron microscopy[END_REF].

Taxonomic treatment
Family Pseudochactidae Gromov, 1998
Subfamily Vietbocapinae Lourenço, 2012
Genus Vietbocap Lourenço & Pham, 2010
Vietbocap thienduongensis Lourenço & Pham, 2012 (Figs. 3-5)
Enlarged diagnosis for the female
Vietnam, Quang Binh Province, Phong Nha-Ke Bang National Park, Thien Duong cave (17°31'10.3''N, 106°13'22.9''E), initial section of the cave (750 m from the cave entrance), 18/V/2016 (T.-D. Do, T.-H. Tran & T.-N. Nguyen), 2 females. Material deposited in the Muséum national d'histoire naturelle, Paris.
General coloration yellow, less pale than in the male for metasomal segments and telson; cheliceral teeth, telson tip, and rows of granules on pedipalp fingers dark reddish. Anterior margin of carapace only slightly depressed, with a concavity slightly stronger than that of the male; carapace smooth, except for some isolated granules. Lateral ocelli absent. Pair of circumocular sutures complete in the posterior region to median ocular tubercle, with a broad U-shaped configuration. Median ocelli absent; median tubercle represented by a smooth but not depressed zone. Anterosubmedial carinae absent from zone delimited by circumocular sutures; furrows obsolete. Chelicerae shorter in the female; dorsal edge of fixed finger with four denticles (basal, medial, subdistal, distal); ventral edge with 3-4 very reduced denticles; movable finger with three denticles (medial, subdistal, external distal) on dorsal edge, without basal denticles; ventral edge with 4-5 reduced denticles and a weak serrula; external distal denticle smaller than internal distal denticle; ventral aspect of fingers and manus with numerous macrosetae. Type-D trichobothrial pattern [START_REF] Soleglad | Evolution of scorpion orthobothriotaxy: a cladistic approach[END_REF][START_REF] Soleglad | High-level systematics and phylogeny of the extant scorpions (Scorpiones: Orthosterni)[END_REF] with 35 trichobothria per pedipalp: 12 on femur, of which 5 dorsal, 4 internal and 3 external (d1, d4, d5 and i4 extremely reduced); 10 on patella, of which 3 dorsal, 1 internal and 6 external (est extremely reduced); ventral surface without trichobothria; 13 trichobothria on chela, of which 5 on manus, 8 on fixed finger (ib2 extremely reduced); dorsal trichobothria of femur with ''beta-like'' configuration. Sternum pentagonal, type 1 [START_REF] Soleglad | The scorpion sternum: structure and phylogeny (Scorpiones: Orthosterni)[END_REF], strongly compressed horizontally, slightly longer than wide, external aspect not flat, with a concave region, posteromedian depression round. Pectines each with 3-4 distinct marginal lamellae and 6-7 well-delineated median lamellae; fulcra absent; pectinal tooth count 7-7 and 7-8. Genital operculum completely divided longitudinally. Telotarsi each with several spinular setae, not clearly arranged in rows.
Metasomal segment V with a weakly marked pair of ventrosubmedian carinae; no ventromedian carina between ventrosubmedian carinae; metasomal carinae better marked than on male. Pedipalps shorter than those of male; fixed and movable fingers strongly curved, but less than on male; dentate margins each with median denticle row comprising seven oblique granular sub-rows; internal and external accessory granules at base of each sub-row. Respiratory spiracles small, semi-oval to round. Pro-and retrolateral pedal spurs present on legs I-IV. Tibial spurs absent from all legs. Telson long and less bulbous than on male; vesicle smooth on all faces; aculeus shorter than vesicle and weakly curved without a subaculear tubercle ventrally. Form of venom glands extremely simples with a total absence of folds (Fig. 5).
Measurements (in mm) of female Vietbocap thienduongensis
Total length 23.9. Carapace: length 3.0; anterior width 2.0; posterior width 3.2. Mesosoma length 5.5. Metasomal segments: I, length 1.2, width 1.5; II, length 1.4, width 1. Etymology. The specific name is a Latin adjective referring to the orange coloration of the new species (aurantiacus in Latin).
Description based on holotype and paratypes (measurements given after the description).
Colour. General coloration yellow to reddish-yellow, darker than in the other species of the genus; cheliceral teeth, telson tip, pedipalpal and metasomal carinae and rows of granules on pedipalp fingers dark reddish.
Morphology. Chelicerae: dorsal edge of fixed finger with four denticles (basal, medial, subdistal, distal); ventral edge with 3-4 very reduced denticles; movable finger with three denticles (medial, subdistal, external distal) on dorsal edge, without basal denticles; ventral edge with 4-5 reduced denticles and a moderate serrula; external distal denticle smaller than internal distal denticle; ventral aspect of fingers and manus with numerous macrosetae. Carapace: anterior margin not depressed with a weakly to moderately marked concavity; lateral ocelli absent; median ocular tubercle represented by a smooth and not depressed zone; median ocelli absent; interocular furrow obsolete. One pair of weakly marked circumocular sutures with a broad U-shaped configuration, also complete behind median ocular tubercle. Anteromedian and posteromedian furrows shallow; posterolateral furrow shallow, weakly curved; posteromarginal furrow narrow, shallow. Carapace almost totally smooth, except for some isolated granules anteriorly. Pedipalp segments apilose. Femur with five strongly marked carinae; intercarinal surfaces smooth. Patella with six strongly marked carinae; ventrointernal carinae with some spinoid granules; intercarinal surfaces smooth. Chela with dorso-external and ventral carinae moderately marked; tegument smooth. Fixed and movable fingers strongly curved; dentate margins, each with median denticle row comprising seven oblique granular sub-rows; each sub-row comprising several small granules and internal and external accessory granules. Trichobothria orthobothriotaxic, Type D [START_REF] Soleglad | Evolution of scorpion orthobothriotaxy: a cladistic approach[END_REF][START_REF] Soleglad | High-level systematics and phylogeny of the extant scorpions (Scorpiones: Orthosterni)[END_REF], ''beta-like'' configuration, d 2 situated on dorsal surface, d 3 and d 4 in same axis of the femur, parallel and closer to dorsoexternal carina than is d 1 , angle formed by d 1 , d 3 and d 4 opening toward internal surface; totals: femur, 12 (5 dorsal, 4 internal, 3 external); patella, 10 (3 dorsal, 1 internal, 6 external); chela, 13 (5 on manus, 8 on fixed finger). Legs I to IV: tibiae without spurs; basitarsi each with a pair of pro-and retrolateral spurs; telotarsi each with several spinular setae, not clearly arranged in rows. Sternum pentagonal, type 1 [START_REF] Soleglad | The scorpion sternum: structure and phylogeny (Scorpiones: Orthosterni)[END_REF], strongly compressed horizontally, longer than wide, external aspect not flat, with a concave region, posteromedian depression round. Pectines each with 3-4 distinct marginal lamellae and 7-8 well-delineated median lamellae in females. Fulcra absent. Pectinal tooth count: 7-7 in the female holotype, 6-7, 6-6 in female paratypes. Genital operculum completely divided longitudinally. Mesosoma: pre-tergites smooth and shiny; post-tergites II-VI smooth, without granules; VII with a few granules and a pair of dorso-submedian and dorsolateral carinae, reaching posterior edge of segment. Sternites entirely smooth, acarinate; V with a white posterior inflated triangular zone; surfaces with scattered macrosetae; distal margins with sparse row of macrosetae; respiratory spiracles small, semi-oval to round. Metasoma with a few short macrosetae. Ten carinae on segments I to III, weakly marked on II-III; eight carinae on segment IV; four on segment V. Dorso-submedian carinae moderately developed on segments I-IV, absent on segment V; spinoid granules absent. 
Other carinae moderately to weakly developed on segments I-V. Telson long and slightly bulbous; vesicle smooth on all faces; aculeus shorter than vesicle and weakly curved, without a subaculear tubercle ventrally. Form of venom glands unknown, but most certainly similar to that of V. thienduongensis.
Measurements (in mm) of female holotype of Vietbocap aurantiacus sp. n.
Total length 35.8. Carapace: length 4.5; anterior width 2.7; posterior width 4.8. Mesosoma length 8.4. Metasomal segments: I, length 1.9, width 2.2; II, length 2.2, width 1.9; III, length 2.5, width 1.8; IV, length 3.2, width 1.7; V, length 6.3, width 1.6, depth 1.4. Telson length 6.8; vesicle width 2.2, depth 1.9. Pedipalp: femur length 6.0, width 1.3; patella length 5.5, width 1.6; chela length 10.6, width 1.7, depth 1.6; movable finger length 6.3.
Ratios: sternum length/width, 2.1/1.8 = 1.67; chela length/movable finger length, 10.6/6.3 = 1.68.
Relationships
It is worth recalling that the material used in the description of Vietbocap aurantiacus sp. n. has been available for study for a number of years; however, only females of this species were available. Conversely, Vietbocap thienduongensis was described on the basis of males only. Consequently, an objective comparative analysis of both species became possible only after the more recent discovery of females of V. thienduongensis (see previous section). The general morphologies of all known species of Vietbocap are very similar. Therefore, the identification of these closely related species can only be based on some rather subtle features.
Vietbocap thienduongensis and Vietbocap aurantiacus sp. n. are the two most geographically close species found in the cave, separated by 1.0 to 1.5 km. V. aurantiacus sp. n. can, however, be distinguished by a number of features: larger size (35.8 vs 23.9 mm) and distinct morphometric values (see measurements and ratios following the description); darker, more orange-yellow coloration; anterior margin of carapace not depressed; sternum longer than wide (see ratios following the description); metasomal segments and pedipalps more strongly carinate and granulated; sternite V with a conspicuous white posterior inflated triangular zone; moderate serrula on the cheliceral movable finger.
Vietbocap quinquemilia sp. n. (Figs. [START_REF] Nghi | Adventurous tourism-a potential realm of world natural heritage-National Park Phong Nha-Ke Bang[END_REF][START_REF]Phong Nha-Ke Bang National park[END_REF][START_REF] Stahnke | Scorpion nomenclature and mensuration[END_REF][START_REF] Soleglad | Evolution of scorpion orthobothriotaxy: a cladistic approach[END_REF] Diagnosis: anterior margin of carapace slightly depressed, with a weak concavity. Lateral ocelli absent. Pair of circumocular sutures complete in the posterior region to median ocular tubercle with a broad U-shaped configuration. Median ocelli absent; median tubercle represented by a smooth and only slightly depressed zone. Anterosubmedial carinae absent from zone delimited by circumocular sutures. Type-D trichobothrial pattern [START_REF] Soleglad | Evolution of scorpion orthobothriotaxy: a cladistic approach[END_REF][START_REF] Soleglad | High-level systematics and phylogeny of the extant scorpions (Scorpiones: Orthosterni)[END_REF] with 35 trichobothria per pedipalp: 12 on femur, of which 5 dorsal, 4 internal and 3 external (d 1 , d 4 , d 5 and i 4 extremely reduced); 10 on patella, of which 3 dorsal, 1 internal and 6 external (est extremely reduced); ventral surface without trichobothria; 13 trichobothria on chela, of which 5 on manus, 8 on fixed finger (ib 2 extremely reduced); dorsal trichobothria of femur with ''beta-like'' configuration. Sternum pentagonal, type 1 [START_REF] Soleglad | The scorpion sternum: structure and phylogeny (Scorpiones: Orthosterni)[END_REF], strongly compressed horizontally, slightly longer than wide, external aspect not flat, with a concave region, posteromedian depression round. Telotarsi each with several spinular setae, not clearly arranged in rows. Metasomal segment V with a weakly marked pair of ventrosubmedian carinae; no ventromedian carina between ventrosubmedian carinae. Etymology. The specific name is a Latin noun in apposition referring to the distance from the cave entrance, 5000 m (quinquemilia in Latin) where the new species was found.
Description based on male holotype and female paratypes (measurements given after the description). Colour. General coloration very pale yellow almost whitish, paler than all the other known species in the genus; cheliceral teeth, telson tip and rows of granules on pedipalp fingers slightly reddish.
Morphology. Chelicerae: dorsal edge of fixed finger with four denticles (basal, medial, subdistal, distal); ventral edge with 3-4 very reduced denticles; movable finger with three denticles (medial, subdistal, external distal) on the dorsal edge, without basal denticles; ventral edge with 4-5 reduced denticles and a moderate serrula; external distal denticle smaller than internal distal denticle; ventral aspect of fingers and manus with numerous macrosetae. Carapace: anterior margin only slightly depressed with a weakly marked concavity; lateral ocelli absent; median ocular tubercle represented by a smooth and only slightly depressed zone; median ocelli absent; interocular furrow obsolete. One pair of weakly marked circumocular sutures with a broad U-shaped configuration, also complete behind median ocular tubercle. Anteromedian and posteromedian furrows shallow; posterolateral furrow shallow, weakly curved; posteromarginal furrow narrow, shallow. Carapace almost entirely smooth, except for some very isolated granules anteriorly. Pedipalp segments apilose. Femur with five carinae, all moderate to weak; intercarinal surfaces smooth. Patella with six discernible carinae; ventrointernal carinae with some spinoid granules; intercarinal surfaces smooth. Chela with dorso-external and ventral carinae weakly marked; tegument smooth. Fixed and movable fingers strongly curved; dentate margins, each with median denticle row comprising seven oblique granular sub-rows; each subrow comprising several small granules and internal and external accessory granules. Trichobothria orthobothriotaxic, type D [START_REF] Soleglad | Evolution of scorpion orthobothriotaxy: a cladistic approach[END_REF][START_REF] Soleglad | High-level systematics and phylogeny of the extant scorpions (Scorpiones: Orthosterni)[END_REF], ''beta-like'' configuration, d 2 situated on dorsal surface, d 3 and d 4 in same axis of the femur, parallel and closer to the dorsoexternal carina than is d 1 , angle formed by d 1 , d 3 and d 4 opening toward the internal surface; totals: femur, 12 (5 dorsal, 4 internal, 3 external); patella, 10 (3 dorsal, 1 internal, 6 external); chela, 13 (5 on manus, 8 on fixed finger). Legs I to IV: tibiae without spurs; basitarsi each with a pair of pro-and retrolateral spurs; telotarsi each with several spinular setae, not clearly arranged in rows. Sternum pentagonal, type 1 [START_REF] Soleglad | The scorpion sternum: structure and phylogeny (Scorpiones: Orthosterni)[END_REF], strongly compressed horizontally, slightly longer than wide, external aspect not flat, with a concave region, posteromedian depression round. Pectines each with 3 distinct marginal lamellae and 7-8 well-delineated median lamellae in both sexes. Fulcra absent. Pectinal tooth count: 8-8 in males, 7-7 in females. Genital operculum completely divided longitudinally; genital plugs observed in the male. Mesosoma: pre-tergites smooth and shiny; post-tergites II-VI smooth; granules totally absent; VII equally without granules and a pair of dorso-submedian and dorsolateral carinae, reaching posterior edge of segment. Sternites almost entirely smooth, acarinate; surfaces with scattered macrosetae; distal margins with sparse row of macrosetae; respiratory spiracles small, semi-oval to round. Metasoma with a few short macrosetae. Ten carinae on segments I to III; eight carinae on segment IV; four on segment V. Dorsosubmedian carinae moderately developed on segments I-IV, absent on segment V; spinoid granules absent. 
Other carinae moderately to weakly developed on segments I-V. Telson long and slightly bulbous in the male, slender in the female; vesicle smooth on all faces; aculeus shorter than vesicle and weakly curved, without a subaculear tubercle ventrally. Form of venom glands unknown, but most certainly similar to that of V. thienduongensis.
Measurements (in mm) of male holotype and female paratype of Vietbocap quinquemilia sp. n.
Relationships
Vietbocap quinquemilia sp. n. is geographically separated from Vietbocap aurantiacus sp. n. by 2 km. Their general morphology, although similar, presents a number of differences, and in fact V. quinquemilia seems more closely related to V. thienduongensis. This new species can, however, be characterized by a number of particular features: small size (only 20.2 mm for the female) and distinct morphometric values (see measurements and ratios following the description); a very pale, almost whitish coloration, making it the palest species known in the genus; median ocular tubercle on carapace only slightly depressed; cheliceral serrula moderately marked; tergites globally smooth; pedipalp carinae very weakly marked.
Discussion
Cases of more than one species of a given genus living in a common cave system have been recorded for other zoological groups. For example, Bayer and Jäger [START_REF] Bayer | Heteropoda species from limestone caves in Laos (Araneae: Sparassidae: Heteropodinae)[END_REF] reported the presence of two spiders of the genus Heteropoda, namely Heteropoda maxima and Heteropoda steineri, in the Xe Bangfai Cave system. Although these authors indicate that H. steineri would be more restricted to deeper parts of the cave, they suggest that these two very mobile species could potentially meet. A notable parallel case was also observed for the species of the cave scorpion genus Alacran Francke in Mexico [START_REF] Lopez | Shining a light into the world's deepest caves: phylogenetic systematics of the troglobiotic scorpion genus Alacran Francke, 1982 (Typhlochactidae: Alacraninae)[END_REF]. In temperate regions, several species of the same beetle genus are known to co-occur frequently in the same cave, as for Aphaenops in the Pierre Saint-Martin system in the French Pyrenees [START_REF] Cabidoche | Biocénose cavernicole de la salle de la Verna (gouffre de la Pierre Saint-Martin), méthode d'étude en milieu naturel[END_REF]. Most reported cases, however, show that these parapatric and/or almost sympatric species present markedly distinct degrees of adaptation to cave and/or subterranean life.
The general assumption that cave species evolved, in most cases, directly from surface ancestors, was recently questioned by Juan et al. [START_REF] Juan | Evolution in caves: Darwin's wrecks of ancient life in the molecular era[END_REF]. Based on phylogeographic studies, these authors confirmed that cave species were often cryptic and presenting highly restricted distributions. These studies also suggested that their divergence and potential speciation processes may occur despite the presence of gene flow from surface populations. Finally, they concluded that these same studies could provide evidence for speciation and adaptive evolution within the confines of cave systems [START_REF] Juan | Evolution in caves: Darwin's wrecks of ancient life in the molecular era[END_REF].
Vietbocap, as the other genera associated with the family Pseudochactidae, can be considered as one of the most, if not the most basic lineage among known scorpions. If not all authors agree about the precise phylogenetic position of pseudochactids, they do agree about its basal position [START_REF] Sharma | Phylogenomic resolution of scorpions reveals multilevel discordance with morphological phylogenetic signal[END_REF][START_REF] Prendini | A ''living fossil'' from Central Asia: the morphology of Pseudochactas ovchinnikovi Gromov, 1898 (Scorpiones: Pseudochactidae), with comments on its phylogenetic position[END_REF]. The Vietbocap ''populations'' found in the Thien Duong cave show a very similar degree of cave adaptation, with a complete regression of eyes and, in most cases, a very marked regression of the pigmentation. On the other hand, Vietbocap epigean relatives remain unknown. In fact, epigean elements of the family Pseudochactidae are not known from South-East Asia. The original epigean ancestor of the Vietbocap lineage was most certainly an ancient colonizer of underground habitats; however, the totality of the present known species probably did not derived from a surface ancestor. This attests to a rather old process of adaptation to subterranean environments. Although available data about the scorpion populations of the different species are still very limited and phylogenetic evidence are still lacking, it can be considered that the speciation process took place after the colonization of the subterranean habitat by the epigean ancestor of the Vietbocap lineage.
As already mentioned, the Thien Duong cave is a huge cave, with 31 km of passages that in several sectors reach 100 m in height and 150 m in width; most present-day physical parameters inside the cave, such as air temperature and humidity, can be considered rather uniform. Nevertheless, the geological history of the cave system is very old and has most certainly undergone a number of vicissitudes during geological time, probably causing collapses and the creation of local barriers, perhaps long-lasting enough to allow full processes of speciation. No data are, however, available to confirm any hypothesis in this sense.
It is difficult to estimate from the sole morphological study of these ''populations'' of Vietbocap living in the Thien Duong cave what is their precise degree of differentiation. The processes of differentiation and speciation in scorpions are probably rather slow, since the group shows long-term reproduction strategies and a low number of generations when compared to other arthropods [START_REF] Lourenc ¸o | Reproduction in scorpions, with special reference to parthenogenesis[END_REF]. Consequently, the question to be addressed is: are we faced with species, subspecies or only morphs of a large polymorphic species? For three of these populations, a specific status is here suggested in association with their possible allopatric distribution, though the number of available specimens is small to evaluate the robustness of the observed differences. In this case, the condition of superspecies sensu Mayr [START_REF] Mayr | Notes on Halcyon chloris and some of its subspecies[END_REF] could be suggested to the global Vietbocap lineage. The species within each sub-lineage would be represented by allopatric, parapatric our weakly sympatric groups, really or potentially inter-sterile in natural conditions [START_REF] Bernardi | Les cate ´gories taxonomiques de la syste ´matique e ´volutive[END_REF]. Each one of these different species can therefore be defined as a Prospecies in the sense of Birula [START_REF] Birula | Ueber Scorpio maurus Linne ´und seine Unterarten[END_REF]. The main cue, however, remains the discovery and description of an epigean element associated with Vietbocap, which however probably knew a negative selection during the evolutionary time.
The presence of syntopic and closely related troglobitic species of the same genus in the same cave is quite exceptional. Molecular approaches will be necessary to evaluate the robustness of the observed differences between the species which have been recognized.
Disclosure of interest
The authors declare that they have no competing interest.
Fig. 1. The Karstic Massif where the Thien Duong cave is located, showing typical vegetation (photo by Cao Xuan Loc).
Fig. 2. Interior view of the Thien Duong cave, 3000 m from the entrance, showing two of the co-authors (T.-H.T. & T.-H. T.) searching for scorpions.
Fig. 3. Vietbocap thienduongensis. Habitus, dorsal and ventral aspects. A-B. Male holotype. C-D. Female.
Fig. 4. Vietbocap thienduongensis. Male in natural habitat; 800 m from the cave entrance.
Fig. 5. Vietbocap thienduongensis. Female. A. Simple venom glands. B. Sternum, genital operculum and pectines. C. Metasomal segment V and telson, lateral aspect. D. Chelicera, dorsal aspect. E. Cutting edge of movable finger showing rows of granules. F. Femur, dorsal aspect, showing trichobothrial pattern.
2; III, length 1.7, width 1.1; IV, length 2.2, width 1.1; V, length 4.3, width 1.0, depth 1.0. Telson length 4.6; vesicle width 1.4, depth 1.2. Pedipalp: femur length 4.0, width 1.0; patella length 3.8, width 1.2; chela length 7.5, width 1.2, depth 1.2; movable finger length 4.4. Reports: Sternum length/width, 1.2/1.2 = 1.00. Chela length/movable finger length, 7.5/4.4 = 1.70. Vietbocap aurantiacus sp. n. (Figs. 6-7) Diagnosis: Anterior margin of carapace not depressed with a weak to moderate concavity. Lateral ocelli absent. Pair of circumocular sutures complete in the posterior region to median ocular tubercle with a broad U-shaped configuration. Median ocelli absent; median tubercle represented by a smooth but not depressed zone. Anterosubmedial carinae absent from zone delimited by circumocular sutures. Type-D trichobothrial pattern [11,14] with 35 trichobothria per pedipalp: 12 on femur, of which 5 dorsal, 4 internal and 3 external (d 1 , d 4 , d 5 and i 4 extremely reduced); 10 on patella, of which 3 dorsal, 1 internal and 6 external (est extremely reduced); ventral surface without trichobothria; 13 trichobothria on chela, of which 5 on manus, 8 on fixed finger (ib 2 extremely reduced); dorsal trichobothria of femur with ''beta-like'' configuration. Sternum pentagonal, type 1 [15], strongly compressed horizontally, longer than wide, external aspect not flat, with a concave region, posteromedian depression round Sternite V with a white posterior inflated triangular zone. Telotarsi each with several spinular setae, not clearly arranged in rows. Metasomal segment V with a moderately marked pair of ventrosubmedian carinae; no ventromedian carina between ventrosubmedian carinae. Fixed and movable fingers strongly curved; dentate margins each with median denticle row comprising seven oblique granular sub-rows; internal and external accessory gra-nules at base of each sub-row. Carinae on metasoma and pedipalps better marked than on the other species. Respiratory spiracles small, semi-oval to round. Pro-and retrolateral pedal spurs present on legs I-IV. Tibial spurs absent from all legs. Type material: Female holotype, two female paratypes. Vietnam, Quang Binh Province, Phong Nha-Ke Bang National Park, Thien Duong cave (17831 0 10.3 00 N-106813 0 22.9 00 E), mid-section of cave (3000 m from cave entrance), 23/V/2013 (D.-S. Pham). Holotype and one paratype deposited in the ''Muse ´um national d'histoire naturelle'', Paris. One paratype deposited in the Institute of Ecology and Biological Resources, Vietnam Academy of Science and Technology, Hanoi.
Fig. 6. Vietbocap aurantiacus sp. n. Female holotype. Habitus, dorsal and ventral aspects.
Fig. 7. Vietbocap aurantiacus sp. n. Female holotype. A. Cutting edge of movable finger showing rows of granules. B. Metasomal segments IV-V and telson, lateral aspect. C. Chelicera, dorsal aspect. D-F. Trichobothrial pattern. D. Chela, dorso-external aspect. E-F. Patella and femur, dorsal aspect.
Fixed and movable fingers strongly curved; dentate margins each with median denticle row comprising seven oblique granular sub-rows; internal and external accessory granules at base of each sub-row. Respiratory spiracles small, semioval to round. Pro-and retrolateral pedal spurs present on legs I-IV. Tibial spurs absent from all legs. Type material: Male holotype, two female paratypes. Vietnam, Quang Binh Province, Phong Nha -Ke Bang National Park, Thien Duong cave (17831 0 10.3 00 N-106813 0 22.9 00 E), mid-section of cave (5000 m from cave entrance), 6/IV/2015 (D.-S. Pham). Holotype and one paratype deposited in the ''Muse ´um national d'histoire naturelle'', Paris. One paratype deposited in the Institute of Ecology and Biological Resources, Vietnam Academy of Science and Technology, Hanoi.
Fig. 8. Vietbocap quinquemilia sp. n. Male holotype. Habitus, dorsal and ventral aspects.
Fig. 9. Vietbocap quinquemilia sp. n. Male in natural habitat; 5000 m from cave entrance.
Fig. 10. Vietbocap quinquemilia sp. n. A-C. Male holotype. D. Female paratype. A. Chelicera, dorsal aspect. B. Cutting edge of movable finger showing rows of granules. C-D. Metasomal segment V and telson, lateral aspect.
Fig. 11. Vietbocap quinquemilia sp. n. Male holotype. A-C. Trichobothrial pattern. A. Chela, dorso-external aspect. B-C. Patella and femur, dorsal aspect.
Acknowledgements
We wish to express our thanks to Élise-Anne Leguin (MNHN, Paris) for the preparation of the photos and plates, and to Adriano Kury (Museu Nacional, Rio de Janeiro), Louis Deharveng (MNHN, Paris), and Lucienne Wilmé (Missouri Botanical Garden) for their useful comments on the manuscript. The second author is most grateful to the research project of the Vietnam Academy of Science and Technology (VAST04.09/16-17) and to the NAFOSTED grant (106NN.06-2015.38), which financially supported his fieldwork.
01771719 | en | phys | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01771719/file/1710.00218.pdf
Black holes have more states than those giving the Bekenstein-Hawking entropy: a simple argument.
Carlo Rovelli
CPT, Aix-Marseille Université, Université de Toulon, CNRS, Case 907, F-13288 Marseille, France.
(Dated: October 3, 2017)
It is often assumed that the maximum number of independent states a black hole may contain is N_BH = e^(S_BH), where S_BH = A/4 is the Bekenstein-Hawking entropy and A the horizon area in Planck units. I present a simple and straightforward argument showing that the number of states that can be distinguished by local observers inside the hole must be greater than this number.
There are several arguments supporting the idea that the thermodynamical interaction between a black hole and its surroundings is well described by treating the black hole as a system with N_BH = e^(A/4) (orthogonal) states, where A is the horizon area in Planck units ℏ = G = c = 1. These arguments are convincing. However, it has then become fashionable to deduce from this fact that the black hole itself cannot have more than N_BH states (see for instance the discussion in [START_REF] Marolf | The Black Hole information problem: past, present, and future[END_REF] and references therein). I present here an argument indicating that this further step is wrong and that the actual number N of independent states of a black hole of area A can be larger than N_BH.
The possibility of a distinction between N and N_BH is opened by the fact that according to classical general relativity the interaction between a black hole and its surroundings is entirely determined by what happens in the vicinity of the horizon. This may be true in general, and therefore it is possible that N_BH counts only states that are distinguishable from the exterior, which may be called "surface" states. On the other hand, N also counts states that can be distinguished by local observables inside the horizon. Here I argue that to have more states than N_BH is not just a possibility: it follows from elementary considerations of causality.
To show this, consider a gravitationally collapsed object and let Σ_1 be a Cauchy surface that crosses the horizon but does not hit the singularity, see Figure 1. Let Σ_2 be a later similar Cauchy surface, and i = 1, 2. Let A_i be the area of the intersection of Σ_i with the horizon. Assume that no positive energy falls into the horizon during the interval between the two surfaces. Let quantum fields live on this geometry, back-reacting on it [START_REF] Wald | Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics[END_REF]. Finally, let Σ_i^in be the (open) portions of Σ_i inside the horizon. Care is required in specifying what is meant here by 'horizon', since there are several such notions (event horizon, trapping horizon, apparent horizon, dynamical horizon...) which in this context may give tiny (exponentially small in the mass) differences in location. For precision, by 'horizon' I mean here the event horizon, if this exists. If it doesn't (as for instance in [START_REF] Rovelli | Vidotto: Planck Stars[END_REF]), I mean the boundary of the past of a late-time spacelike region lying outside the black hole (say outside the trapping region). With this definition, the horizon is light-like.

FIG. 1. The lowest part of the conformal diagram of a gravitational collapse. The clear grey region is the object, the dotted line is the horizon, the thick upper line the singularity; the dark upper region is where quantum gravity effects may become relevant (this region plays no role in this paper). The two Cauchy surfaces used in the paper are the dashed lines.
Because of the back-reaction of the Hawking radiation, the area of the horizon shrinks and therefore
A_2 < A_1.    (1)
Now consider the evolution of the quantum fields from Σ_1 to Σ_2. We are in a region far away from the singularity and therefore (assuming the black hole is large) from high curvature. Therefore we expect conventional quantum field theory to hold here, without strange quantum gravity effects, at least up to high energy scales. Since the horizon is light-like, Σ_1^in is in the causal past of Σ_2^in. This implies that any local observable on Σ_1^in is fully determined by observables on Σ_2^in. That is, if A_i is the local algebra of observables on Σ_i^in, then A_1 is a subalgebra of A_2:
A_1 ⊂ A_2.    (2)
Therefore any state on A_2 is also a state on A_1, and if two such states can be distinguished by observables in A_1 they can certainly be distinguished by observables in A_2, as the first are included in the latter. Therefore the states that can be distinguished by A_1 (that is, on Σ_1^in) can also be distinguished by A_2 (that is, on Σ_2^in). Therefore the distinguishable states on Σ_1^in are a subset of those on Σ_2^in. How many are there? Either there is an infinite number of them, or a finite number due to some high-energy (say Planckian) cut-off. If there is an infinite number of them, then the number of states distinguishable from inside the black hole is immediately larger than N_BH, which is finite. If there is a finite number of them, then the number N_2 of distinguishable states on Σ_2^in must be equal to or larger than the number N_1 of states distinguishable on Σ_1^in, because the second set is a subset of the first. That is

N_2 ≥ N_1.    (3)
Comparing equations (1) and (3) shows immediately that it is impossible that N_i = e^(A_i/4), as the exponential is a monotonic function.
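To get a sense of the numbers involved, the following back-of-the-envelope sketch (written for this discussion, not part of the original argument) estimates how much horizon area a stellar-mass hole loses to Hawking evaporation between two Cauchy surfaces separated by one year. It uses the standard order-of-magnitude mass-loss rate dm/dt ~ -1/m^2 in Planck units and drops all O(1) and greybody factors; the chosen mass and time interval are arbitrary illustrative values.

```python
import math

# Rough conversion factors (only orders of magnitude matter here).
T_PLANCK_S = 5.39e-44    # Planck time in seconds
M_PLANCK_KG = 2.18e-8    # Planck mass in kg

def horizon_area(m):
    """Horizon area A = 16*pi*m^2 of a Schwarzschild hole, in Planck units."""
    return 16.0 * math.pi * m ** 2

def area_lost(m, dt):
    """Area lost to Hawking evaporation over a time dt << m^3 (Planck units).
    Uses dm/dt ~ -1/m^2, hence dA/dt = 32*pi*m*(dm/dt) ~ -32*pi/m."""
    return 32.0 * math.pi / m * dt

m_sun = 2.0e30 / M_PLANCK_KG          # one solar mass, in Planck masses
one_year = 3.15e7 / T_PLANCK_S        # one year, in Planck times

A1 = horizon_area(m_sun)
dA = area_lost(m_sun, one_year)       # A1 - A2 over one year
print(f"A1        ~ {A1:.2e} Planck areas")
print(f"A1 - A2   ~ {dA:.2e} Planck areas lost in one year")
# e^(A/4) overflows ordinary floats, so compare exponents directly:
print(f"ln[ e^(A1/4) / e^(A2/4) ] = (A1 - A2)/4 ~ {dA/4:.2e}")
```

With these illustrative numbers, e^(A/4) drops by a factor of roughly e^(10^14) in a single year of evaporation; if N_i = e^(A_i/4) were the full count of interior states, that many states would have to disappear, while the causal argument above says that no state distinguishable on Σ_1^in can fail to be distinguishable on Σ_2^in.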
The conclusion is that the number of states distinguishable from the interior of the black hole must be different from the number N_BH = e^(A/4) of the states contributing to the Bekenstein-Hawking entropy. Since the second is shrinking to zero with the evaporation, the first must overcome the second at some point. Therefore in the interior of a black hole there are more possible states than e^(A/4).
The physical interpretation of the conclusion is simple: the thermal behaviour of the black hole described by the Bekenstein-Hawking entropy S = A/4 is fully determined by the physics of the vicinity of the horizon.
In classical general relativity, the effect of a black hole on its surroundings is independent from the black hole interior. A vivid expression of this fact is in the numerical simulations of black hole merging and radiation emission by oscillating black holes: in writing the numerical code, it is routine to cut away a region inside the (trapping) horizon: it is irrelevant for whatever happens outside! This is true in classical general relativity, and there is no compelling reason to suppose it to fail if quantum fields are around. Therefore a natural interpretation of S BH is to count states of near-surface degrees of freedom, not interior ones. This is of course not a new idea: it has a long history [START_REF] York | Dynamical origin of black-hole radiance[END_REF][START_REF] Zurek | Statistical Mechanical Origin of the Entropy of a Rotating, Charged Black Hole[END_REF][START_REF] Wheeler | A Journey into Gravity and Spacetime[END_REF][START_REF] Hooft | The black hole interpretation of string theory[END_REF][START_REF] Susskind | The Stretched Horizon and Black Hole Complementarity[END_REF][START_REF] Frolov | Dynamical Origin of the Entropy of a Black Hole[END_REF][START_REF] Carlip | The Statistical Mechanics of the (2+ 1)-Dimensional Black Hole[END_REF][START_REF] Cvetic | Tseytlin: Solitonic strings and BPS saturated dyonic black holes[END_REF][START_REF] Larsen | Internal structure of black holes[END_REF] and see in particular [START_REF] Rovelli | Loop Quantum Gravity and Black Hole Physics[END_REF] and [START_REF] Strominger | Black hole entropy from near-horizon microstates[END_REF] in support of this idea from two different research camps, loops and strings. The argument presented here strongly support this idea, by making clear that there are interior states that do not affect the Bekenstein-Hawking entropy.
This conclusion is not in contrast with the the various arguments leading to identify Bekenstein-Hawking entropy with a counting of states. To the opposite, evidence from it comes from the membrane paradigm [START_REF]Black Holes: the Membrane Paradigm[END_REF] and from Loop Quantum Gravity [START_REF] Rovelli | Loop Quantum Gravity and Black Hole Physics[END_REF][START_REF] Rovelli | Black Hole Entropy from Loop Quantum Gravity[END_REF], which both show explicitly that the relevant states are surface states, but also from the string theory counting [START_REF] Strominger | Microscopic origin of the Bekenstein-Hawking entropy[END_REF][START_REF] Horowitz | Counting states of nearextremal black holes[END_REF], because the counting is in a context where the relevant state space is identified with the scattering state space, which could be blind to interior observables.
The consequences of this observation are far reaching for the discussions on the black-hole information paradox [START_REF] Marolf | The Black Hole information problem: past, present, and future[END_REF][START_REF] Hooft | The Good, the Bad, and the Ugly of Gravity and Information[END_REF]. The solid version of the paradox is Page's [START_REF] Page | Information in black hole radiation[END_REF], which does not require hypotheses on the future of the hole. If there are more states available in a black hole than e A/4 , then Page argument for the information loss paradox fails. Page argument is based on the fact that if the number of black hole states is determined by the area, then there are no more available state to be entangled with the Hawking radiation when the black hole shrinks. For the radiation to be thermal it must be entangled with something, and the only option is earlier Hawking quanta, and this is in tension with quantum field theory. But if there can be many states also inside a black hole with small horizon area, then late-time Hawking radiation does not need to be correlated with early time Hawking radiation, because it can simply be correlated with internal black hole states, even when the surface area of the back hole has become small.
Recall indeed that the interior of an old black hole can have large volume even if its horizon has small area. It was in fact shown in [START_REF] Christodoulou | How big is a black hole?[END_REF] that at a time v after the collapse, a black hole with mass m has interior volume
V ∼ 3√3 π m^2 v    (4)
for v m. See also [START_REF] Lorenzo | Volume inside old black holes[END_REF][START_REF] Bengtsson | Black holes: Their large interiors[END_REF][START_REF] Ong | Never Judge a Black Hole by Its Area[END_REF][START_REF] Ong | The Persistence of the Large Volumes in Black Holes[END_REF][START_REF] Shao-Jun | Maximal volume behind horizons without curvature singularity[END_REF]. This volume may store large number of states. When the evaporation ends (because the hole has become small, or earlier if the hole is disrupted by non perturbative quantum gravitational effects [START_REF] Rovelli | Vidotto: Planck Stars[END_REF][START_REF] Christodoulou | Realistic Observable in Background-Free Quantum Gravity: the Planck-Star Tunnelling-Time[END_REF]) this information can leak out, possibly slowly, if much of it is in long wavelength modes (see [START_REF] Ashtekar | Black hole evaporation: A paradigm[END_REF][START_REF] Bianchi | Entanglement entropy production in gravitational collapse: covariant regularization and solvable models Eugenio Bianchi[END_REF] and references therein). Therefore information can emerge from the hole, before total dissipation, and is not lost.
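As a rough illustration of the scales implied by Eq. (4) (valid for v >> m), the sketch below evaluates the interior volume for a stellar-mass hole at an advanced time comparable to the age of the universe and converts it to cubic metres. The conversion constants, the sample mass and the sample time are illustrative assumptions introduced here, not figures from the paper.

```python
import math

M_PLANCK_KG = 2.18e-8     # Planck mass in kg
T_PLANCK_S = 5.39e-44     # Planck time in s
L_PLANCK_M = 1.62e-35     # Planck length in m

def interior_volume_m3(mass_kg, v_seconds):
    """Eq. (4): V ~ 3*sqrt(3)*pi * m^2 * v in Planck units, for v >> m.
    Returns the interior volume in cubic metres."""
    m = mass_kg / M_PLANCK_KG        # mass in Planck units
    v = v_seconds / T_PLANCK_S       # advanced time in Planck units
    v_planck_units = 3.0 * math.sqrt(3.0) * math.pi * m ** 2 * v
    return v_planck_units * L_PLANCK_M ** 3

mass = 2.0e30            # kg: one solar mass (example value)
age = 4.35e17            # s: roughly the age of the universe (example value)

V = interior_volume_m3(mass, age)
print(f"interior volume ~ {V:.2e} m^3")
print(f"                ~ {V / 1.41e27:.2e} solar volumes")  # Sun ~ 1.4e27 m^3
```

With these example inputs the interior volume comes out around 10^33-10^34 cubic metres, millions of solar volumes, which illustrates how weakly the interior volume is tied to the instantaneous horizon area: the volume keeps growing with time even while Hawking evaporation slowly shrinks the area.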
I do realize that these observations go against widespread prejudices regarding holography, but I think we should not be blocked by prejudices. The result presented here does not invalidate holographic ideas: it sharpens them by pointing out that what is bounded by the area of the boundary of a region is not the number of possible states in the region, but only the number of states distinguishable from observations outside the region.
I thank Don Marolf, Tommaso De Lorenzo, Alejandro Perez and Eugenio Bianchi for crucial conversations that have led to this result.
01771746 | en | phys | 2024/03/05 22:32:16 | 2018 | https://hal.science/hal-01771746/file/1804.04147.pdf | Francesca Vidotto
email: [email protected]
Carlo Rovelli
email: [email protected]
White-hole dark matter and the origin of past low-entropy
Recent results on the end of black hole evaporation give new weight to the hypothesis that a component of dark matter could be formed by remnants of evaporated black holes: stable Planck-size white holes with a large interior. The expected lifetime of these objects is consistent with their production at reheating. But remnants could also be pre-big bang relics in a bounce cosmology, and this possibility has strong implications on the issue of the source of past low entropy: it could realise a perspectival interpretation of past low entropy. The ideas briefly presented in this essay are developed in forthcoming papers.
I. REMNANTS
The possibility that remnants of evaporated black holes form a component of dark matter was suggested by MacGibbon [START_REF] Macgibbon | Can Planck-mass relics of evaporating black holes close the Universe?[END_REF] and has been explored in [START_REF] Barrow | The cosmology of black hole relics[END_REF][START_REF] Carr | Black hole relics and inflation: Limits on blue perturbation spectra[END_REF][START_REF] Liddle | Primordial black holes and early cosmology[END_REF][START_REF] Alexeyev | Black-hole relics in string gravity: last stages of Hawking evaporation[END_REF][START_REF] Chen | Black hole remnants and dark matter[END_REF][START_REF] Barrau | Peculiar relics from primordial black holes in the inflationary paradigm[END_REF][START_REF] Chen | Inflation Induced Planck-Size Black Hole Remnants As Dark Matter[END_REF][START_REF] Nozari | Gravitational Uncertainty and Black Hole Remnants[END_REF]. There are no strong observational constraints on this potential contribution to dark matter [START_REF] Carr | Primordial Black Holes as Dark Matter[END_REF] and the weak point of the scenario has been, so far, the question of the physical nature of the remnants.
Here we point out that: (i) the picture has changed because of the realisation that conventional physics provides a candidate for remnants: small-mass white holes with large interiors [START_REF] Rovelli | Planck stars[END_REF][START_REF] Haggard | Black hole fireworks: quantum-gravity effects outside the horizon spark black to white hole tunneling[END_REF][START_REF] Lorenzo | Improved black hole fireworks: Asymmetric black-hole-to-white-hole tunneling scenario[END_REF], which can be stable if they are sufficiently light [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF], and that are produced at the end of the evaporation as recent results in quantum gravity indicate [START_REF] Christodoulou | Planck star tunneling time: An astrophysically relevant observable from background-free quantum gravity[END_REF][START_REF] Rovelli | Crossing Schwarzschild's Central Singularity[END_REF][START_REF] Rovelli | Interior metric and ray-tracing map in the firework black-to-white hole transition[END_REF][START_REF] Christodoulou | Characteristic Time Scales for the Geometry Transition of a Black Hole to a White Hole from Spinfoams[END_REF]; (ii) the remnant lifetime predicted in [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF] happens to be consistent with the production of primordial black holes at the end of inflation; and (iii) even more interesting, if remnants come from a pre-big-bang phase in a bouncing cosmology, they provide a solution of the problem to the early-universe low-entropy, and hence to the problem of the arrow of time [START_REF] Rovelli | Is Time's Arrow Perspectival?[END_REF].
II. WHITE HOLES
The difference between a black hole and a white hole is not very pronounced. Observed from the outside (say from the exterior of a sphere of radius r = 2m + ε > 2m, where m is the mass of the hole) for a finite amount of time, a white hole cannot be distinguished from a black hole: Zone II of the maximally extended Schwarzschild solution is equally the outside of a black hole and the outside of a white hole (see Fig. 1, Left). Analogous considerations hold for the Kerr solution. The objects in the sky we call 'black holes' are approximated by a stationary metric only for a limited time. At least in the past the metric was non-stationary, as they were produced by gravitational collapse. The continuation of the metric inside the radius r = 2m + ε contains a trapped region, but not an anti-trapped region. Instead of an anti-trapped region there is a collapsing star. Thus, from the outside, a 'black hole' (as opposed to a 'white' one) is only characterised by not having the anti-trapped region in the past (see Fig. 1, Center). Vice versa, from the exterior and for a finite time, a white hole is indistinguishable from a black hole, but in the future it ceases to be stationary and there is no trapped region (see Fig. 1, Right).
The classical prediction that the black hole is stable forever is not reliable. In the uppermost band of the central diagram of Figure 1 quantum theory dominates. In other words, the death of a black hole is a quantum phenomenon. The same is true for a white hole, reversing the time direction. That is, the birth of a white hole takes place in a region where quantum gravitational phenomena are strong. This consideration eliminates a traditional objection to the physical existence of white holes: how would they originate? The answer is that they originate from a region where quantum phenomena dominate the behaviour of the gravitational field.
Such regions are generated in particular by the end of the life of a black hole. Hence a white hole can be originated by a dying black hole. This has been shown to be possible in [START_REF] Haggard | Black hole fireworks: quantum-gravity effects outside the horizon spark black to white hole tunneling[END_REF] and was explored in [START_REF] Lorenzo | Improved black hole fireworks: Asymmetric black-hole-to-white-hole tunneling scenario[END_REF][START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF][START_REF] Christodoulou | Planck star tunneling time: An astrophysically relevant observable from background-free quantum gravity[END_REF][START_REF] Rovelli | Crossing Schwarzschild's Central Singularity[END_REF][START_REF] Rovelli | Interior metric and ray-tracing map in the firework black-to-white hole transition[END_REF][START_REF] Christodoulou | Characteristic Time Scales for the Geometry Transition of a Black Hole to a White Hole from Spinfoams[END_REF]. The life cycle of the black-white hole is given in Figure 2.
The process is asymmetric in time [START_REF] Lorenzo | Improved black hole fireworks: Asymmetric black-hole-to-white-hole tunneling scenario[END_REF][START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF] and the time scales are determined by the initial mass of the hole m o . The lifetime τ BH of the black hole is known by Hawking radiation theory
τ_BH ∼ m_o^3   (1)
in Planck units ħ = G = c = 1. This time can be as short as τ_BH ∼ m_o^2 because of quantum gravitational effects [START_REF] Rovelli | Planck stars[END_REF][START_REF] Haggard | Black hole fireworks: quantum-gravity effects outside the horizon spark black to white hole tunneling[END_REF] (see also [? ? ? ? ? ]) but we disregard this possibility here. The lifetime of the white hole phase, τ_WH, is longer [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF]:
τ_WH ∼ m_o^4.   (2)
The tunnelling process itself takes a time of the order of the current mass [START_REF] Christodoulou | Characteristic Time Scales for the Geometry Transition of a Black Hole to a White Hole from Spinfoams[END_REF][START_REF] Barceló | Black holes turn white fast, otherwise stay black: no half measures[END_REF].
τ_T ∼ m.   (3)
The classical evolution and the exterior of a (stationary, spherically symmetric) black hole are uniquely determined by its current horizon area, but its internal properties are not. The internal volume keeps increasing with time [START_REF] Christodoulou | How big is a black hole?[END_REF][START_REF] Bengtsson | Black holes: Their large interiors[END_REF][START_REF] Ong | Never Judge a Black Hole by Its Area[END_REF][START_REF] Wang | Maximal volume behind horizons without curvature singularity[END_REF][START_REF] Christodoulou | Volume inside old black holes[END_REF] and determines the post-tunnelling evolution. Therefore the state of the hole at some time is not specified by its current mass m alone, but also by its internal geometry, in turn determined by m_o. We write the quantum state of the hole in the form |m_o, m⟩_B, where the first quantum number is the initial mass, which determines the interior, and the second is the current mass, which determines the horizon area. The life cycle of a collapsed object is then
|m_o, m_o⟩_B → |m_o, m⟩_B → |m_o, m_P⟩_B → |m_o, m_P⟩_W → end.   (4)
III. STABILITY
Large classical white holes are unstable under perturbations [START_REF] Frolov | Black Hole Physics: Basic Concepts and New Developments[END_REF]. The wavelength of the perturbation needed to trigger the instability must be smaller than the size of the hole. To make a Planck-size white hole unstable, we need trans-Planckian radiation, and this is not allowed by quantum gravity [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF]. Even independently from this, if a Planck-scale white hole is unstable, its decay mode is into a Planck-scale black hole:
|m_o, m_P⟩_W  --(instability)-->  |m_o, m_P⟩_B,   (5)
which is equally unstable to tunnel back into a white hole
|m_o, m_P⟩_B  --(tunnelling)-->  |m_o, m_P⟩_W.   (6)
Since the two are indistinguishable from the exterior, this oscillation has no effect on the exterior: the Planck size remnant remains a Planck size remnant. The possibility of this oscillation has been considered in [START_REF] Barceló | The lifetime problem of evaporating black holes: mutiny or resignation[END_REF][START_REF] Barceló | Black holes turn white fast, otherwise stay black: no half measures[END_REF][START_REF] Garay | Do stars die too long?[END_REF].
IV. COSMOLOGY: REHEATING
White-hole remnants can be a constituent of dark matter. A local dark matter density of the order of 0.01 M_⊙/pc^3 corresponds to approximately one Planck-scale white hole per 10,000 km^3.
For these objects to be present now, we need their lifetime to be larger than or equal to the Hubble time T_H,
m_o^4 ≥ T_H.   (7)
On the other hand, the lifetime of the black hole phase must be shorter than the Hubble time
m_o^3 < T_H.   (8)
This gives the possible range of m_o:
10^10 g ≤ m_o < 10^15 g.   (9)
Their Schwarzschild radius is in the range
10^-18 cm ≤ R_o < 10^-13 cm.   (10)
Black holes of a given mass could have formed when their Schwarzschild radius was of the order of the horizon. Remarkably, the horizon was presumably in this range at the end of inflation, during or just after reheating. This concordance supports the plausibility of the scenario.
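As a quick numerical cross-check of the range (9)-(10), the bounds follow from (7) and (8) with standard values for the Hubble time and the Planck units. The short Python sketch below is our own illustration and is not part of the original argument; the numerical constants are the usual approximate values.

```python
# Our own order-of-magnitude check of the mass window (9) and radius range (10).
# Assumptions: Hubble time T_H ~ 4.35e17 s; standard Planck units.

t_planck = 5.39e-44   # s
m_planck = 2.18e-5    # g
l_planck = 1.62e-33   # cm

T_H = 4.35e17 / t_planck             # Hubble time in Planck units (~ 8e60)

# tau_WH ~ m^4 >= T_H  (white hole still alive)    -> lower bound on m_o
# tau_BH ~ m^3 <  T_H  (black hole phase is over)  -> upper bound on m_o
m_min = T_H ** 0.25                  # in Planck masses
m_max = T_H ** (1.0 / 3.0)

print(f"{m_min * m_planck:.1e} g  <=  m_o  <  {m_max * m_planck:.1e} g")
print(f"{2 * m_min * l_planck:.1e} cm  <=  R_o  <  {2 * m_max * l_planck:.1e} cm")
# Gives roughly 4e10 g to 4e15 g and 5e-18 cm to 7e-13 cm, matching (9)-(10).
```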
V. EREBONS AND THE ARROW OF TIME
Roger Penrose has coined the name erebons, from the Greek god of darkness Erebos, to refer to matter crossing over from one eon to the successive one [? ] in cyclic cosmologies [? ]. Remnants could also be formed in a contracting phase before the current expanding one [? ? ] in a big bounce scenario [? ? ].
A surprising aspect of the white-hole erebons scenario is that it addresses the low-entropy of the initial cosmological state.
The current arrow of time can be traced back to the homogeneity of the gravitational field at the beginning of the expansion: this homogeneity is the reserve of low entropy that drives current irreversible phenomena [? ]. If the big bounce was actually finely dotted with holes with large interiors, the metric was not homogeneous. Its low entropy is a consequence of the fact that we do not access the vast interiors of these holes. In other words, it is determined by the peculiar subset of observables to which we happen to have access.
This represents a realisation of the perspectival interpretation of entropy [START_REF] Rovelli | Is Time's Arrow Perspectival?[END_REF]. Entropy depends on the microstate of a system and a choice of macroscopic observables, or coarse graining. For any microstate there is some choice of macroscopic observables that makes entropy low. The early universe could have low entropy not because its microstate is peculiar, but because the coarse graining under which we access it is peculiar. Past low entropy, and its consequent irreversible evolution, can be real, but perspectival, like the apparent rotation of the sky.
If the cosmos at the big bounce was finely dotted with white holes with large interiors, the gravitational field was not in an improbable homogeneous configuration. It was in a generic crumpled configuration. But being outside all white holes, we find ourselves in a special place, and from the special perspective of this place we see the universe under a coarse graining which defines an entropy that was low in the past.
FIG. 1: Left: Extended Schwarzschild spacetime. The (light grey) region outside r = 2m + ε is equally the outside of a black and a white hole. Center: A collapsing star (dark grey) replaces the white hole region in the non-stationary collapse metric. Right: The time-reversed process. The difference between the last two is only in the far past or future.
FIG. 2: The full life of a black-white hole.
Acknowledgements The work of FV at UPV/EHU is supported by the grant IT956-16 of the Basque Government and by the grant FIS2017-85076-P (MINECO/AEI/FEDER, UE). | 13,792 | [
"1681"
] | [
"179898",
"179898"
] |
01772222 | en | [
"shs",
"info"
] | 2024/03/05 22:32:16 | 2015 | https://hal.science/hal-01772222/file/OEGlobal2015_Teachers_Time_Is_Valuable.pdf | Camila Morais Canellas
email: [email protected]
Colin De
La Higuera
Teacher's Time is Valuable 1
Keywords: OER adoption, teachers' time, technology, pedagogy, workload
Consistent adoption of Open Educational Resources (OER) depends on a number of factors. These may correspond to business, legal, policy, academic or technology issues. Among the vast number of elements making deployment harder, one is often underestimated: an increase in the demand for teachers' time. Some solutions to motivate and/or reward teachers who adopt OER have been proposed over the years and used successfully among the first adopters. Unfortunately, such solutions are not always possible in several contexts. In this paper, we propose that a key concern of research on OER technologies should be saving teachers' time when creating pedagogical resources.
Introduction
When deploying Open Educational Resources (OER) initiatives, it is often the case that the project stalls because of having failed to foresee that the excellent teacher, whose work the OER is supposed to be based upon, has neither the time nor the energy to brush2 her slides, to re-engineer her course, to add the metadata, to make herself available to answer the forum, or to prepare the additional material without which the courseware is not what the promoters were expecting.
A number of studies have already pointed out that the lack of teacher involvement is an important (if not the single most important) obstacle to the adoption of OER. In certain contexts, the institution uses an incentive-based model in order to encourage teachers to accept the extra workload. In under-resourced systems, or in places where newcomers to these questions fail to understand the different issues, this may not be possible. In this work, we analyse the problem and the proposed alternatives, and argue in favour of deploying technology in a way that enhances the learning (and teaching) experience, whilst avoiding burdening the teacher with a heavier workload.
Problem Statement 2.1. A List of Issues for OER
Projects regarding the adoption of OER face a number of issues related to its attributes and decision points as described in the framework proposed by [START_REF] Stacey | A Dialogue on Open Educational Resources and Social Authoring Models[END_REF]. The issues are grouped under the following categories: business, legal, policy, academic, technology.
Moreover, in many countries the law has more or less broad definitions of fair use and educational use. Such a circumstance often means that teachers are used to incorporating into their pedagogical resources materials of which they are not the authors, but which they believe they can use in a pedagogical environment. Nonetheless, when creating or adapting this material to be published as OER, this "exception" no longer applies and they face the need to brush the material 3 .
Frequently, some of these key attributes (or the combination of them) may end up being more time consuming than expected, especially regarding teacher's time.
It was also noted that the question of effort is a crucial one when the teacher's perception of OER production was surveyed: teachers do not perceive OER as a means to save time (Masterman, Wild, White, & Manton, 2011, p. 138).
About Teacher's Time
Therefore, the issues related to the adoption of OER, and the fact of having her work made public, will have consequences for the teacher's work:
• She will receive extra pressure because her material is going to be made public -perhaps due to the fact that her course is going to be filmed; • The teacher is expected to help render her material impeccable. In some cases she will be asked to sign agreements she does not really understand (copyright, image forms); • The teacher may be asked to enrich her contents, by providing additional texts, updating her lectures, adding a forum, some quizzes and exams, in order to better adapt the material to a larger public.
The above has also been noted by several authors, including in the work by [START_REF] Masterman | The impact of OER on teaching and learning in UK universities: implications for Learning Design[END_REF] already mentioned. In fact, "time" is mentioned as the most significant barrier to adopting OER by two thirds of the respondents of an OECD questionnaire (Hylén, 2005, p. 4).
Why Is This Not Always Taken into Account?
We can conclude that the process of adopting OER is not easy, automatic or effortless. Still, why is it usually seen as such?
Possibly, the common confusion between free and open plays a role in this misconception. The result of the work, the resource itself, may be open and free of charge, but the work needed to produce it is not.
As stated before, the effort is substantial.
Another probable reason lies in the view that, being seen as technology, OER are supposed to save money and time. When a clear policy for the adoption of OER is absent, it is difficult for decision makers to accept spending more money on it.
Extra Funding, Direct Rewards to the Teacher
Sometimes the institution is conscious that there is extra effort, that it needs this effort (because the institution has a visibility strategy, typically), and it is prepared to directly reward the teacher.
The following types of incentives have been identified by [START_REF] West | OER Incentive Models Summaries[END_REF]: large lump sum stipends, incremented stipends, gifts instead of money, pay for creation of OER, sabbaticals and other existing institutional incentive plans.
It should be noted that, in institutions that chose these incentives to get the ball rolling a few years ago, the tendency is now that enough motivation has been generated and these are no longer needed.
Promotion of Teaching Quality
In other cases, the prize may be deferred: the institution will attempt to promote quality of teaching as it promotes quality of research. These promotion efforts are visible: the teacher will know that making the extra effort will result in some form of prestige or a deferred prize. The establishment of a credible academic reward system that includes OER was pointed out, in an Australian study, as one of the most important policy issues leading to large-scale adoption (Hylén, 2005, p. 6).
Adding Support to the Teacher's Tasks
In this case, rather than rewarding or recognizing the work done by the teacher, the organization supports her work by creating services that assist her. Staff specialized in the legal questions regarding openness, or grants that can be used to accomplish the task, are some examples. This can not only save the teacher's time but also demonstrates the importance of the work to the institution.
Convincing the Staff that Practices Have Changed and Adaptation is Needed
Finally, the argument used by decision makers can be that no intervention is required, as progress is just the natural course of history, and that teachers should adapt to new technologies simply because it is their job to do so. Following this view, one should not regard the creation of OER as something exceptional, worth a specific prize. This point of view is defended by two very different types of decision makers: those who no longer teach, and those who are currently promoting the way teaching is (or should be) done and are possibly over-enthusiastic, sometimes failing to see that other teachers are not always so keen or have less time.
The Role of Technology
Arguments in favour of technology often claim that it makes life simpler. Yet in the case of teaching, technology has been introduced to provide the teacher with tools she did not have before, allowing her to do her job much better, provided she uses these tools. However, it has often represented an increased cost in terms of time.
Accordingly, it is possible to distinguish two types of technologies proposed to teachers: 1. Time-saving technologies: as an example provided by [START_REF] Cuban | Techno-Reformers and Classroom Teachers[END_REF], mimeograph machines and projectors were quickly and largely adopted by teachers around the world.
2. Time-consuming technologies: computers and television, also cited by [START_REF] Cuban | Techno-Reformers and Classroom Teachers[END_REF], represent technologies that may enhance the quality and possibilities for the teachers' practices, but usually demanded more time and were not as well adopted in classrooms compared with time-saving technologies.
In fact, research has shown that having technology available does not mean that teachers will adopt it. The same occurs regarding OER adoption (Schuwer, Kreijns, & Vermeulen, 2014, p. 95).
Two Alternatives
We see two alternatives to the problem of teachers' time when adopting OER: the introduction of (or the publicity given to) time-saving technologies, and/or a scenario where the teacher can do better with less work, or at least with as much work as before. Some examples: 1. Time-saving technology: a typical, heavily time-consuming task for the teacher is the production of metadata. This has been identified, for instance, as a blocking point in the Wikiwijs project [START_REF] Schuwer | Wikiwijs: An unexpected journey and the lessons learned towards OER[END_REF]. As a recurring theme for sessions in the past OCW Conferences, the importance and difficulty of the question is clear. On the other hand, natural language processing (NLP) research is able, today, to provide automatic transcriptions for videos, summarizations, and identification of named entities4 . 2. A pedagogical scenario using technology to improve the results, without adding much time to the teacher's agenda, is currently studied at the University of Nantes. In the Informatics for beginners course, the teachers have to face a number of problems:
• A large number of students, and hence of groups to be run in parallel;
• The necessity to propose innovative courses, not just one that can be taught by the different instructors.
The choice which has been made is that of a first part including lectures on a wide range of topics (bio-informatics, natural language processing, distributed algorithms, cryptology and social networks); and a second part consisting of a programming course which makes use of the general topics addressed in the previous lectures.
One of the problems is that a lot of information is passed to the students, and they need time to digest it. It has therefore been proposed to record the classes and make the videos available as OER. In order to make things more interesting, we considered pedagogical scenarios allowing the students to interact better with this material. However, in most of the envisaged scenarios, the teacher's cooperation was needed and the ideas were too demanding of her time, for which we did not have any support.
We are therefore proposing to do the following:
• The teacher proposes 8 to 10 quiz questions. It should be said that she has to do this anyway for the examination; • These quizzes are added to the video and synchronized with the moment the topic is addressed by the teacher; • The students, when viewing the video, can not only answer a quiz (and know if they were right: self-assessment), but also rate the quiz and even propose better quizzes (learn by teaching); • In order to avoid overloading the video with quizzes, the system automatically updates the list of the best quizzes proposed so far, following the evaluation made by the students.
In this way, the intervention of the teacher is kept as small as possible: she is not asked to do more than she would do anyhow, and we hope to end with a better material.
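The selection mechanism sketched above can be made concrete in a few lines of code. The following Python snippet is only an illustration of the logic: the data structures, the neutral prior for unrated quizzes and the cap of ten displayed quizzes are our own assumptions, not a description of the system actually deployed at Nantes.

```python
# Illustrative sketch of rating-based quiz selection (assumptions: neutral prior
# for unrated quizzes, at most 10 quizzes shown per video).

class Quiz:
    def __init__(self, question, author):
        self.question = question
        self.author = author            # "teacher" or a student identifier
        self.ratings = []               # student ratings, e.g. on a 1-5 scale

    def rate(self, value):
        self.ratings.append(value)

    def score(self):
        # Unrated quizzes get a neutral prior so that new proposals are not discarded.
        return sum(self.ratings) / len(self.ratings) if self.ratings else 3.0

def visible_quizzes(quizzes, max_shown=10):
    """Keep only the best-rated quizzes attached to the video."""
    return sorted(quizzes, key=lambda q: q.score(), reverse=True)[:max_shown]

# Usage example
quizzes = [Quiz("What does RSA rely on?", "teacher"),
           Quiz("Which alignment is used in bio-informatics?", "student42")]
quizzes[1].rate(5)
quizzes[1].rate(4)
for q in visible_quizzes(quizzes):
    print(q.question, round(q.score(), 2))
```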
Conclusion
Whereas technology has usually been deployed to answer necessities identified by the teacher, and often by over-enthusiastic pedagogues who will be identifying more complex and interesting learning scenarios, we feel that (at least part of) technology should be introduced in order to help the teacher do better for an (almost) identical cost regarding time.
Technology is now being identified as a key factor for success in open education, as shown by:
• The creation of the UNESCO Chair on Open Technologies for Open Educational Resources and Open Learning 5 ;
• The Opening up Slovenia initiative 6 : initiatives to help promote open education in Slovenia are heavily backed by research;
• The creation of partnerships based on research in tools based on machine learning, as suggested by the Knowledge 4 All Foundation 7 .
We believe and expect new ideas and tools in this direction in the future.
This work has received a French government support granted to the COMIN Labs excellence laboratory and managed by the National Research Agency in the "Investing for the Futures" program ANR-JO-LABX-07-0J.
Brushing the slides consists in making them free of any material inconsistent with the chosen licence.
The Usual Solutions. Overall, the effort the teacher has to make, once she has agreed to adopt OER, is quite substantial. On the other hand, what will the employer, or the team running the open education scheme, propose to leverage this effort?
3 Cf. http://www.jisc.ac.uk/publications/programmerelated/2013/Openeducationalresources.aspx, section Open Licensing and http://poerup.referata.com/wiki/France#Copyright_in_education.
There are a number of conferences on these topics. The webpage of the Association for Computational Linguistics is a good starting point: http://www.aclweb.org/
http://www.ouslovenia.net/project/unesco-chair-slovenia/
http://www.ouslovenia.net/
http://www.k4all.org/ | 13,651 | [
"19047",
"18019"
] | [
"95421",
"95421"
] |
00845520 | en | [
"phys"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-00845520/file/1306.5206.pdf | Eugenio Bianchi
Hal M Haggard
Carlo Rovelli
The boundary is mixed
We show that Oeckl's boundary formalism incorporates quantum statistical mechanics naturally, and we formulate general-covariant quantum statistical mechanics in this language. We illustrate the formalism by showing how it accounts for the Unruh effect. We observe that the distinction between pure and mixed states weakens in the general covariant context, and surmise that local gravitational processes are indivisibly statistical with no possible quantal versus probabilistic distinction.
I. INTRODUCTION
Quantum field theory and quantum statistical mechanics provide a framework within which most of current fundamental physics can be understood. In their usual formulation, however, they are not at ease in dealing with gravitational physics. The difficulty stems from general covariance and the peculiar way in which general relativistic theories deal with time evolution. A quantum statistical theory including gravity requires a generalized formulation of quantum and statistical mechanics.
A key tool in this direction which has proved effective in quantum gravity, is Oeckl's idea [START_REF] Oeckl | A 'general boundary' formulation for quantum mechanics and quantum gravity[END_REF][START_REF] Oeckl | General boundary quantum field theory: Foundations and probability interpretation[END_REF] of using a boundary formalism, reviewed below. This formalism combines the advantages of an S-matrix transition-amplitude language with the possibility of defining the theory without referring to asymptotic regions. It is a language adapted to general covariant theories, where "bulk" observables are notoriously tricky, because it can treat dependent and independent variables on the same footing. This formalism allows a general covariant definition of transition amplitudes, n-point functions and in particular the graviton propagator [START_REF] Rovelli | Graviton propagator from backgroundindependent quantum gravity[END_REF][START_REF] Bianchi | Graviton propagator in loop quantum gravity[END_REF]. These are defined on compact spacetime regions-the dependence on the boundary metric data makes general covariance explicit and circumvents the difficulties (e.g. [START_REF] Arkani-Hamed | A Measure of de Sitter entropy and eternal inflation[END_REF]) usually associated to the definition of these quantities in a general covariant theory.
In the boundary formalism, the focus is moved from "states", which describe a system at some given time, to "processes", which describe what happens to a local system during a finite time-span. For a conventional nonrelativistic system, the quantum space of the processes, B (for "boundary"), is simply the tensor product of the initial and final Hilbert state spaces. Tensor states in B represent processes with given initial and final states.
What about the vectors in B that are not of the tensor form? Remarkably, it turns out that mixed statistical quantum states are naturally represented by these non-tensor states [START_REF] Bianchi | Talk at the 2012 Marcel Grossmann meeting[END_REF]. Here we formalize this observation, showing how statistical expectation values are expressed in this language. This opens the way to a systematic treatment of general-covariant quantum statistical mechanics, a problem still wide open.
The structure of this paper is as follows: In Section II, we start from conventional non-relativistic mechanics and move "upward" towards more covariance: we construct the formal structures that define the boundary formalism, characterize physical states and operators, define the dynamics through amplitudes, and show how statistical states and equilibrium states can be treated. In Section III, we adapt the boundary formalism to a general covariant language by including the independent evolution parameter (the "time" partial observable) into the configuration space. This is the step that permits the generalization to general covariant systems. Once these structures are clear, in Section IV we take them as fundamental, and show that they retain their meaning also in the more general cases where the system is genuinely general relativistic. In Section V we apply the formalism to the Unruh effect and in Section VI we draw some tentative conclusions regarding quantum gravity.
These point towards the idea that any local gravitational process is statistical.
II. NON-RELATIVISTIC FORMALISM
A. Mechanics

Consider a Hamiltonian system with configuration space C. Call x ∈ C a generic point in C. The corresponding quantum system is defined by a Hilbert space H and a Hamiltonian operator H. We indicate by A, B, ... ∈ A the self-adjoint operators representing observables. In the Schrödinger representation, which diagonalizes configuration variables, a state ψ is represented by the functions ψ(x) = ⟨x|ψ⟩, where |x⟩ is a (possibly generalized) eigenvector of a family of observables that coordinatizes C (we use the Dirac notation also for generalized states, as Dirac did). States evolve in time by ψ_t = e^{-iHt} ψ_o. For convenience we call H_t the Hilbert space isomorphic to H, thought of as the space of states at time t.
Fix a time t and consider the non-relativistic boundary space
B_t = H_0 ⊗ H_t^*,   (1)
where the star indicates the dual space. This space can be interpreted as the space of all (kinematical) processes.
The state Ψ = ψ ⊗ φ * ∈ B t represents the process that takes the initial state ψ into the final state φ in a time t. For instance, if ψ and φ are eigenstates of operators corresponding to given eigenvalues, then Ψ represents a process where these eigenvalues have been measured at initial and final time.
In the Schrödinger representation, vectors in B_t have the form ψ(x, x′) = ⟨x, x′|ψ⟩. The state |x, x′⟩ ≡ |x⟩ ⊗ ⟨x′| represents the process that takes the system from x to x′ in a time t. The interpretation of the states in B_t which are not of the tensor form is our main concern in this paper and is discussed below.
There are two notable structures on the space B t .
(a) A linear function W t on B t , which completely codes the dynamics. This is defined by its action
W_t(ψ ⊗ φ^*) := ⟨φ| e^{-iHt} |ψ⟩   (2)
on tensor states, and extended by linearity to the entire space. This function codes the dynamics because its value on any tensor state ψ ⊗ φ * gives the probability amplitude of the corresponding process that transforms the state ψ into the state φ. Notice that the expression of W t in the Schrödinger basis reads
W_t(x, x′) = ⟨x′| e^{-iHt} |x⟩,   (3)
which is precisely the Schrödinger-equation propagator, and can be represented formally as a Feynman path integral from x to x′ in a time t, and, of course, it codes the dynamics of the theory.
(b) There is a nonlinear map σ that sends H into B t , given by
σ : ψ → ψ ⊗ (e^{-iHt} ψ)^*.   (4)
Boundary states in the image of σ represent processes that have probability amplitude equal to one, as can be easily verified using (2) and (4). The process σ(ψ) is the one induced by the initial state ψ.
In general, we shall call any vector Ψ ∈ B t that satisfies
W_t(Ψ) = 1   (5)
a "physical boundary state."
These are the basic structures of the boundary formalism in the case of a non-relativistic system.
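As a concrete, purely illustrative check of these definitions, one can truncate the Hilbert space to a finite dimension and verify numerically that states of the form (4) satisfy the physical-state condition (5). In the Python sketch below, a boundary state ψ ⊗ φ^* is stored as the matrix |ψ⟩⟨φ|, so that by linearity W_t(Ψ) = Tr[e^{-iHt} Ψ]; the random Hermitian matrix standing in for the Hamiltonian is our own assumption, not a specific physical model.

```python
import numpy as np
from scipy.linalg import expm

# Finite-dimensional illustration of the boundary amplitude W_t (our own sketch).
rng = np.random.default_rng(0)
dim, t = 6, 0.7

A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                       # random Hermitian "Hamiltonian"
U = expm(-1j * H * t)                          # e^{-iHt}

def W_t(Psi):
    """Boundary amplitude: psi (x) phi^* is stored as |psi><phi|, so W_t(Psi) = Tr[U Psi]."""
    return np.trace(U @ Psi)

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
sigma_psi = np.outer(psi, (U @ psi).conj())    # the process (4) induced by psi

print(abs(W_t(sigma_psi) - 1.0) < 1e-10)       # True: sigma(psi) is a physical boundary state
```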
B. Statistical mechanics
The last equation of the previous section is linear, hence a linear combination of solutions is also a solution. But linear combinations of tensor states are not tensor states. What do the solutions of (5) which are not of the tensor form represent? Consider a statistical state ρ. By this we mean here a trace class operator in H that can be mixed or pure. An operator in H is naturally identified with a vector in H ⊗ H * , of course. In particular, let |n be an orthogonal basis that diagonalizes ρ, then
ρ = Σ_n c_n |n⟩⟨n|.   (6)
The corresponding element in B 0 is
ρ = Σ_n c_n |n⟩ ⊗ ⟨n|   (7)
and we will from now on identify the two quantities. That is, below we often write states in H ⊗ H * , as operators in H. The numbers c n in (6) are the statistical weights. They satisfy
Σ_n c_n = 1   (8)
because of the trace condition on ρ, which expresses the fact that probabilities add up to one. Thus the state ρ can be seen as an element of B 0 . Consider the corresponding element of B t , defined by
ρ_t := Σ_n c_n |n⟩⟨n| e^{iHt}.   (9)
It is immediate to see that
W_t(ρ_t) = 1.   (10)
Therefore we have found the physical meaning of the other (normalized) solutions of (5): they represent statistical states. Notice that these are expressed as vectors in the boundary Hilbert space B_t. (See also [START_REF] Oeckl | A positive formalism for quantum theory in the general boundary formulation[END_REF].)
The expectation value of the observable A in the statistical state ρ is
⟨A⟩ = Tr[Aρ],   (11)
the correlation between two observables is
⟨AB⟩ = Tr[ABρ],   (12)
and the time dependent correlation is
⟨A(t)B(0)⟩ = Tr[e^{iHt} A e^{-iHt} B ρ],   (13)
of which the two previous expressions are special cases. These quantities can be expressed in the simple form
⟨A(t)B(0)⟩ = W_t((B ⊗ A) ρ_t)   (14)
because
W_t((B ⊗ A)ρ_t) = Tr[e^{-iHt} B ρ_t A] = Tr[e^{-iHt} B ρ e^{iHt} A],
here the placement of ρ t within the trace reflects the fact that its left factor is in the initial space and its right factor is in the final space (and A does not need a dagger because it is self-adjoint). Therefore the boundary formalism permits a direct reformulation of quantum statistical mechanics in terms of general boundary states, boundary operators and the W t amplitude. Consider states of Gibbs's form ρ = N e -βH . The corresponding state in B t is
ρ_t = N Σ_n e^{-βE_n} e^{iE_n t} |n⟩⟨n| = N e^{iH(t+iβ)}   (15)
where |n is the energy eigenbasis and N = N (β), determined by the normalization, is the inverse of the partition function. A straightforward calculation shows that for these states the correlations ( 14) satisfy the KMS condition
⟨A(t)B(0)⟩ = ⟨B(-t - iβ)A(0)⟩   (16)
which is the mark of an equilibrium state. Thus Gibbs states are the equilibrium states.
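The KMS property (16) of the Gibbs process (15) can also be verified directly on a finite-dimensional truncation. The sketch below is our own numerical illustration (the random Hermitian H and random observables A, B are assumptions, not part of the construction); it evaluates both sides of (16) and confirms that they agree.

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of the KMS condition (16) for a Gibbs state (illustrative only).
rng = np.random.default_rng(1)
dim, beta, t = 5, 0.8, 0.4

def rand_herm(n):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

H, A, B = rand_herm(dim), rand_herm(dim), rand_herm(dim)
rho = expm(-beta * H)
rho /= np.trace(rho)                                # normalized Gibbs state

def heis(O, z):
    """Heisenberg evolution O(z) = e^{izH} O e^{-izH}, allowing complex 'times' z."""
    return expm(1j * z * H) @ O @ expm(-1j * z * H)

lhs = np.trace(heis(A, t) @ B @ rho)                # <A(t) B(0)>
rhs = np.trace(heis(B, -t - 1j * beta) @ A @ rho)   # <B(-t - i beta) A(0)>
print(abs(lhs - rhs) < 1e-9)                        # True
```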
C. L1 and L2 norms: physical states and pure states
The two classes of solutions illustrated in the previous two subsections (pure states and statistical states) exhaust all solutions of the physical boundary state condition when B t decomposes as a tensor product of two Hilbert spaces:
B_t = H_0 ⊗ H_t^*.   (17)
This can be shown as follows. Consider an orthonormal basis |n in H 0 . Due to the unitarity of the time evolution, the vectors (e -iHt |n ) * form a basis of H * t . Therefore any state in B t can be written in the form
Ψ = Σ_{nn′} c_{nn′} |n⟩ ⊗ (e^{-iHt} |n′⟩)^*.   (18)
The physical states satisfy
⟨W|Ψ⟩ = Σ_{nn′} c_{nn′} ⟨n′| e^{iHt} e^{-iHt} |n⟩ = Σ_n c_{nn} = 1,   (19)
therefore they correspond precisely to the operators
ρ = Σ_{nn′} c_{nn′} |n⟩⟨n′|   (20)
in H_0, satisfying the condition
Tr ρ = 1,   (21)
which is to say: they are the statistical states. In particular, they are pure states if they are projection operators,
ρ 2 = ρ.
Observe that in general a statistical state in B t is not a normalized state in this space. Rather, its L 2 norm satisfies
|Ψ|^2 = Σ_{nn′} |c_{nn′}|^2 ≤ 1   (22)
where the equality holds only if the state is pure. This is easy to see in a basis that diagonalizes ρ, because the trace condition implies that all eigenvalues are equal to or smaller than 1 and sum to 1. Thus there is a simple characterization of physical states and pure states: the first have the "L^1" norm (19) equal to unity. The second also have the "L^2" norm |Ψ|_2 equal to unity.
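In a finite-dimensional truncation this characterization is easy to check explicitly: a physical boundary state corresponds to a density matrix ρ with Tr ρ = 1, and its Hilbert-Schmidt ("L^2") norm reaches unity only when ρ is a rank-one projector. The short sketch below is our own illustration; the random weights and random basis are assumptions.

```python
import numpy as np

# Physical ("L1" = 1) versus pure ("L2" = 1) boundary states, illustrative example.
rng = np.random.default_rng(2)
dim = 4

w = rng.random(dim); w /= w.sum()                    # statistical weights, sum to 1
V = np.linalg.qr(rng.normal(size=(dim, dim)))[0]     # random orthonormal basis
rho_mixed = V @ np.diag(w) @ V.T                     # generic statistical state

psi = rng.normal(size=dim); psi /= np.linalg.norm(psi)
rho_pure = np.outer(psi, psi)                        # rank-one projector

for name, rho in [("mixed", rho_mixed), ("pure", rho_pure)]:
    l1 = np.trace(rho).real                          # condition (21)
    l2 = np.sqrt(np.trace(rho @ rho).real)           # norm appearing in eq. (22)
    print(name, round(l1, 6), round(float(l2), 6))
# Both states have L1 = 1; only the pure one also has L2 = 1.
```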
III. RELATIVISTIC FORMALISM

A. Relativistic mechanics
Let us now take a step towards the relativistic formalism where the time variable is treated on the same footing as the configuration variables.
With this aim, consider again the same system as before and define the extended configuration space E = C × R. Call (x, t) ∈ E a generic point in E. Let Γ ex = T * E be the corresponding extended phase space and C = p t + H the Hamiltonian constraint, where p t is the momentum conjugate to t. The corresponding quantum system is characterized by the extended Hilbert space K and a Wheeler-deWitt operator C [START_REF] Rovelli | Quantum Gravity[END_REF].
Indicate by A, B, ... ∈ A the self-adjoint operators representing partial observables [START_REF] Rovelli | Partial observables[END_REF] defined in K. In the Schrödinger representation that diagonalizes extended configuration variables, states are given by functions ψ(x, t) = ⟨x, t|ψ⟩. The physical states are the solutions of the Wheeler-deWitt equation Cψ = 0, which here is just the Schrödinger equation. Physical states are the (generalized) vectors ψ(x, t) in K that are solutions of the Schrödinger equation.
The space H formed by the physical states that are solutions of the Schrödinger equation is clearly in oneto-one correspondence with the space H 0 of the states at time t = 0. Therefore there is a linear map that sends H 0 into (a suitable completion of) K, simply defined by sending the state ψ(x) into the solution ψ(x, t) of the Schrödinger equation such that ψ(x, 0) = ψ(x). Vice versa, there is a (generalized) projection P from (a dense subspace of) K to H, that sends a state ψ(x, t) to a solution of the Schrödinger equation. This can be formally obtained from the spectral decomposition of C, or, more simply, by
(Pψ)(x, t) = ∫ dx′ dt′ W_{(t-t′)}(x, x′) ψ(x′, t′).   (23)
Now, without fixing a time, the relativistic boundary state space is defined by
B = K ⊗ K^*.   (24)
Notice the absence of the t-label subscript. In the Schrödinger representation, vectors in B have the form ψ(x, t, x′, t′) = ⟨x, t, x′, t′|ψ⟩. This space can again be interpreted as the space of all (kinematical) processes, where now the boundary measurement of the clock time t is treated on the same footing as the other partial observables. Thus for instance |x, t, x′, t′⟩ ≡ |x, t⟩ ⊗ ⟨x′, t′| represents the process that takes the system from the configuration x at time t to the configuration x′ at time t′.
The two structures considered above simplify on the space B.
(a) The dynamics is completely coded by a linear function W (no t label!) on B. This is defined extending by linearity
W(φ^* ⊗ ψ) := ⟨φ| P |ψ⟩.   (25)
Its expression in the Schrödinger basis reads
W(x, t, x′, t′) = ⟨x, t| P |x′, t′⟩ = ⟨x| e^{iH(t-t′)} |x′⟩,   (26)
which is once again nothing but the Schrödinger-equation propagator, now seen as a function of initial and final extended configuration variables. The variable t is not treated as an independent evolution parameter, but rather is treated on equal footing with the other partial observables. The operator P can still be represented as a suitable Feynman path integral in the extended configuration space, from the point (x, t) to the point (x′, t′).
(b) Second, there is again a nonlinear map σ that sends K into B, now simply given by
σ : ψ → ψ ⊗ ψ^*.   (27)
States in the image of this map are "physical", namely represent processes that have probability amplitude equal to one, only if ψ satisfies the Schrödinger equation. In this case, a straightforward calculation verifies that
W(Ψ) = 1.   (28)
As before, we call "physical" any state in B solving this equation.
B. Relativistic statistical mechanics
As before, linear combinations of physical states represent statistical states. A general relativistic statistical state is a statistical superposition of solutions of the equations of motion [START_REF] Rovelli | Statistical mechanics of gravity and the thermodynamical origin of time[END_REF] (a concrete example is illustrated in [START_REF] Rovelli | The Statistical state of the universe[END_REF]). Consider again the state (6) in this language: if ψ_n is the full time-dependent solution of the Schrödinger equation corresponding to the initial state |n⟩, we can now represent the state (6) in B simply by
ρ = Σ_n c_n ψ_n ψ_n^*.   (29)
Explicitly, in the Schrödinger basis
ρ(x, t, x′, t′) = Σ_n c_n ψ_n(x, t) ψ_n^*(x′, t′).   (30)
The equilibrium statistical state at inverse temperature β is given by
ρ(x, t, x′, t′) = N Σ_n e^{iE_n(t-t′+iβ)} ψ_n(x) ψ_n^*(x′) = N e^{iH(t-t′+iβ)},   (31)
where ψ n (x) are the energy eigenfunctions.
The correlation functions between partial observables are now given simply by
⟨AB⟩ = W((A ⊗ B) ρ).   (32)
Notice the complete absence of the time label t in the formalism. Any temporal dependence is folded into the boundary data. (However, see the next section for a generalization of the KMS property and equilibrium.) This completes the construction of the boundary formalism for a relativistic system. We now have at our disposal the full language and we can "throw away the ladder," keep only the structure constructed, and extend it to far more arbitrary systems, including relativistic gravity.
IV. GENERAL BOUNDARY
We now generalize the boundary formalism to genuinely (general) relativistic systems that do not have a non-relativistic formulation.
A quantum system is defined by the triple (B, A, W ). The Hilbert space B is interpreted as the boundary state space, not necessarily of the tensor form. A is an algebra of self-adjoint operators on B. The elements A, B, ... ∈ A represent partial observables, namely quantities to which we can imagine associating measurement apparatuses, but whose outcome is not necessarily predictable (think for instance of a clock). The linear map W on B defines the dynamics.
Vectors Ψ ∈ B represent processes. If Ψ is an eigenstate of the operator A ∈ A with eigenvalue a, it represents a process where the corresponding boundary observable has value a. The quantity
W(Ψ) = ⟨W|Ψ⟩   (33)
is the amplitude of the process. Its modulus square (suitably normalized) determines the relative probability of distinct processes [START_REF] Rovelli | Quantum Gravity[END_REF]. A physical process is a vector in B that has amplitude equal to one, namely satisfies
⟨W|Ψ⟩ = 1.   (34)
The expectation value of an operator A ∈ A on a physical process Ψ is
⟨A⟩ = ⟨W| A |Ψ⟩.   (35)
If a tensor structure in B is not given, then there is no a priori distinction between pure and mixed states. The distinction between quantum incertitude and statistical incertitude acquires meaning only if we can distinguish past and future parts of the boundary [START_REF] Smolin | On the nature of quantum fluctuations and their relation to gravitation and the principle of inertia[END_REF][START_REF] Smolin | Quantum gravity and the statistical interpretation of quantum mechanics[END_REF].
So far, there is no notion of time flow in the theory. The theory predicts correlations between boundary observables. However, as pointed out in [START_REF] Connes | Von Neumann algebra automorphisms and time thermodynamics relation in general covariant quantum theories[END_REF], a generic state Ψ on the algebra of local observables of a region defines a flow α τ on the observable algebra by the Tomita theorem [START_REF] Connes | Von Neumann algebra automorphisms and time thermodynamics relation in general covariant quantum theories[END_REF], and the state Ψ satisfies the KMS condition for this flow
⟨A(τ)B(0)⟩ = ⟨B(-τ - iβ)A(0)⟩,   (36)
where A(τ ) = α τ (A). It will be interesting to compare the flow generated in this manner with the flow generated by a statistical state within the boundary Hilbert space. If a flow is given a priori, the KMS states for this flow are equilibrium states for this flow.
In a general relativistic theory including gravity, no flow is given a priori, but we can still distinguish physical equilibrium states as follows: an equilibrium state is a state that defines a mean geometry and whose Tomita flow is given by a timelike Killing vector of this geometry: see [START_REF] Rovelli | General relativistic statistical mechanics[END_REF].
V. UNRUH EFFECT
As an example application of the formalism, we describe the Unruh effect [START_REF] Unruh | Notes on black hole evaporation[END_REF] in this language. Other treatments with a focus on the general boundary formalism are [START_REF] Colosi | The Unruh Effect in General Boundary Quantum Field Theory[END_REF][START_REF] Banisch | Vacuum states on timelike hypersurfaces in quantum field theory[END_REF]. Consider a partition of Minkowski space into two regions M and M̄ separated by the two surfaces
Σ_0 : {t = 0, x ≥ 0},   Σ_η : {t = ηx, x ≥ 0}.   (37)
The region M is a wedge of angular opening η and M̄ is its complement (Figure 1). Consider a Lorentz invariant quantum field theory on Minkowski space, say satisfying the Wightman axioms [START_REF] Streater | PCT, Spin and Statistics, and All That[END_REF]: in particular, energy is positive-definite and there is a single Poincaré-invariant state, the vacuum |0⟩. How is the vacuum described in the boundary language?
FIG. 1. The wedge M in Minkowski space.
In general, a boundary state φ_b on ∂M = Σ = Σ_0 ∪ Σ_η is a vector in the Hilbert space B = H_0 ⊗ H_η^*, where H_0 and H_η are Hilbert spaces associated to the states on Σ_0 and Σ_η respectively. The conventional Hilbert space H associated to the t = 0 surface is the tensor product of two Hilbert spaces H = H_L ⊗ H_R that describe the degrees of freedom to the left or right of the origin. We can identify H_R and H_0 since they carry the same observables: the field operators on Σ_0. Because the theory is Lorentz-invariant, H carries a representation of the Lorentz group. The self-adjoint boost generator K in the t, x plane does not mix the two factors H_L and H_R. If we call k its eigenvalues and |k, α⟩_{L,R} its eigenstates in the two factors, with α labeling the distinct degenerate levels of k, then it is a well known result [START_REF] Bisognano | On the Duality Condition for Quantum Fields[END_REF]
that
⟨0|k, α⟩_L = e^{-πk} ⟨k, α|_R   (38)
which we can write in the form
|0⟩ = ∫ dk dα e^{-πk} |k, α⟩_L ⊗ |k, α⟩_R.   (39)
Tracing over H L gives the density matrix in H R
ρ_0 = Tr_L |0⟩⟨0| = e^{-2πK}   (40)
which determines the result of any vacuum measurement, and therefore any measurement [START_REF] Wightman | Quantum field theory in terms of vacuum expectation values[END_REF], performed on Σ_0. The evolution operator W_η in the angle η, associated to the wedge, sends Σ_0 to Σ_η and is
W_η = e^{-iηK}.   (41)
These two quantities give immediately the boundary expression of the vacuum on Σ:
ρ_η = ρ_0 e^{iηK} = e^{i(η+2πi)K}   (42)
This is the vacuum in the boundary formalism. It is a KMS state at temperature 1/2π with respect to the flow generated by K in η. For an observer moving with constant proper acceleration a along the hyperboloid of points with constant distance from the origin, this flow is proportional to proper time s
s = η/a. (43)
And therefore the vacuum is a KMS state, namely a thermal state, at the Unruh temperature (restoring ħ)
T = ħa/2π.   (44)
This is the manner in which the Unruh effect is naturally described in the boundary language. Notice that no reference to accelerated observers or special basis in Hilbert space is needed to identify the thermal character of the vacuum on the η-wedge.
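To attach numbers to (44): restoring all SI constants, the Unruh temperature reads T = ħa/(2π c k_B). The following short calculation is our own illustration of the orders of magnitude involved; it is not part of the derivation above.

```python
import math

# Order-of-magnitude evaluation of the Unruh temperature T = hbar * a / (2 pi c k_B).
hbar, c, kB = 1.054571817e-34, 2.99792458e8, 1.380649e-23   # SI values

def unruh_temperature(a):
    return hbar * a / (2 * math.pi * c * kB)

print(f"T(a = 9.8 m/s^2) = {unruh_temperature(9.8):.2e} K")   # ~ 4e-20 K
a_for_one_kelvin = 2 * math.pi * c * kB / hbar
print(f"a giving T = 1 K : {a_for_one_kelvin:.2e} m/s^2")      # ~ 2.5e20 m/s^2
```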
An interesting remark is that the expectation values of operators on Σ can be equally computed using the region M̄ which is complementary to the wedge M. Let us first do this for η = 0. In this case, the insertion of the empty region M cannot alter the value of the observables, and therefore it is reasonable to take the boundary state we associate to it to be the unit operator,
ρ̄ = 1l.   (45)
And therefore
ρ̄_η = e^{-iηK}.   (46)
For consistency, we have then that the evolution operator associated to M̄ must be
W̄_η = e^{i(η+2πi)K}.   (47)
Therefore the evolution operator and the boundary state simply swap their roles when going from a region to its complement.2 Notice that there exists a geometrical transformation that rotates Σ_0 into Σ_η, obtained by rotating it clockwise, rather than anticlockwise. This rotation is not implemented by a proper Lorentz transformation, because the Lorentz group rotates Σ_0 at most only up to the light cone t = -x. But it can nevertheless be realized by extending a Lorentz transformation
x′ = cosh(η) x + sinh(η) t,   t′ = sinh(η) x + cosh(η) t   (48)
to a complex parameter iη:
x′ = cosh(iη) x + sinh(iη) t = cos(η) x + i sin(η) t,   t′ = sinh(iη) x + cosh(iη) t = i sin(η) x + cos(η) t.   (49)
For a small η = ε, this transformation rotates the positive x axis infinitesimally into the complex t plane. The Lorentz group acts on the expectation values of the theory, and in particular on the expectation values of products of its local observables. Since the n-point functions of a quantum field theory where the energy is positive can be continued analytically for complex times (Theorem 3.5, pg. 114 in [START_REF] Streater | PCT, Spin and Statistics, and All That[END_REF]), this action is well defined on expectation values. In particular, we can rotate (t, x) infinitesimally into the complex t plane, and then rotate around the real t, x plane, passing below the light cone x = ±t in complex space. In other words, by adding a small complex rotation into imaginary time, we can rotate a space-like half-line into a timelike one [START_REF] Gibbons | Action integrals and partition functions in quantum gravity[END_REF][START_REF] Bianchi | Entropy of Non-Extremal Black Holes from Loop Gravity[END_REF]. A full rotation is implemented by U(2πi), giving (47). Finally, observe that the vacuum is the unique Poincaré-invariant state in the theory. This implies that if a state is Poincaré invariant then it is thermal at the Unruh temperature on the boundary of the wedge. This is clearly a reflection of correlations with physics beyond the edge of the wedge.
2 This can be intuitively understood in terms of path integrals: the evolution operator is the path integral on the interior of a spacetime region, at fixed boundary values; the boundary state can be viewed as the path integral on the exterior of the region. In the case under consideration, the vacuum is singled out by the boundary values of the field at infinity. For a detailed discussion, see [22].
Since vacuum expectation values determine all local measurable quantum-field-theory observables, this implies that the boundary state is unavoidably mixed. In essence the available field operators are insufficient to purify the state. This can be seen physically as follows: in principle, we can project the state onto a pure state on Σ 0 , breaking Poincaré invariance by singling out the origin, but to do so we need a complete measurement of field values for x > 0 and therefore an infinite number of measurements, which would move the state out of its folium [START_REF] Haag | Local Quantum Physics[END_REF]. We continue these considerations in the next section.
VI. RELATION WITH GRAVITY AND THERMALITY OF GRAVITATIONAL STATES
So far, gravity has played no direct role in our considerations. The construction above, however, is motivated by general relativity, because the boundary formalism is not needed as long as we deal with a quantum field theory on a fixed geometry, but becomes crucial in quantum gravity, where it allows us to circumvent the difficulties raised by diffeomorphism invariance in the quantum context.
In quantum gravity we can study probability amplitudes for local processes by associating boundary states to a finite portion of spacetime, and including the quantum dynamics of spacetime itself in the process. Therefore the boundary state includes the information about the geometry of the region itself.
The general structure of statistical mechanics of relativistic quantum geometry has been explored in [START_REF] Rovelli | General relativistic statistical mechanics[END_REF], where equilibrium states are characterized as those whose Tomita flow is a Killing vector of the mean geometry. Up until now it hasn't been possible to identify the statistical states in the general boundary formalism and so this strategy hasn't been available in this more covariant context. With a boundary notion of statistical states this becomes possible. It becomes possible, in particular, to check if given boundary data allow for a mean geometry that interpolates them. In quantum gravity we are interested in spacelike boundary states where initial and final data can be given, therefore a typical spacetime region will have the lens shape depicted in Figure 2. Past and future components of the boundary will meet on wedge-like two-dimensional "corner" regions. Now, say we assume that a quantum version of the equivalence principle holds, for which the local physics at the corner is locally Lorentz invariant. Then the result of the previous section indicates that the boundary state of the lens region will be mixed. Any such boundary state in quantum gravity is a mixed state. (Other arguments for the thermality of local spacetime processes are in [START_REF] Martinetti | Diamonds's temperature: Unruh effect for bounded trajectories and thermal time hypothesis[END_REF].) The dynamics at the corner is governed by the corner terms of the action [START_REF] Carlip | The Off-shell black hole[END_REF][START_REF] Bianchi | Horizon energy as the boost boundary term in general relativity and loop gravity[END_REF], which can indeed be seen as responsible for the thermalization [START_REF] Massar | How the change in horizon area drives black hole evaporation[END_REF][START_REF] Jacobson | Horizon entropy[END_REF].
Up to this point we have emphasized the mixed state character of the boundary states in order to make a clear connection with the standard quantum formalism. However, note that from the perspective of the fully covariant general boundary formalism (see section IV) there is always a single boundary Hilbert space B that can be made bipartite in many different manners. From this point of view it is more natural to call these boundary states nonseparable. Then, local gravitational states are entangled states. This was first appreciated in the context of the examples treated in [22], which was an inspiration for the present work.
Recently Bianchi and Myers have conjectured that in a theory of quantum gravity, for any sufficiently large region corresponding to a smooth background spacetime, the entanglement entropy between the degrees of freedom describing the given region with those describing its complement are given by the Bekenstein-Hawking entropy [START_REF] Bianchi | On the Architecture of Spacetime Geometry[END_REF]. The Bianchi-Myers conjecture and the considerations above result in a compelling picture supporting a quantum version of the equivalence principle.
Both the mixing of the state near a corner and the Bianchi-Myers conjecture can be seen as manifestations of the fact that by restricting the region of interest to a finite spatial region we are tracing over the correlations between this region and the exterior, and therefore we are necessarily dealing with a state which is not pure. If, as we expect, the boundary formalism is crucial for extracting physical amplitudes from quantum gravity, all this appears to imply that the notion of pure state is irrelevant in local quantum gravitational physics and therefore statistical fluctuations cannot be disentangled from quantum fluctuations in quantum gravity [START_REF] Smolin | On the nature of quantum fluctuations and their relation to gravitation and the principle of inertia[END_REF][START_REF] Smolin | Quantum gravity and the statistical interpretation of quantum mechanics[END_REF].
FIG. 2. Lens-shaped spacetime region with spacelike boundaries and corners (filled circles).
EB acknowledges support from a Banting Postdoctoral Fellowship from NSERC. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research & Innovation. HMH acknowledges support from the National Science Foundation (NSF) International Research Fellowship Program (IRFP) under Grant No. OISE-1159218. | 32,298 | [
"1681"
] | [
"255460",
"186908",
"179898",
"407863"
] |
01772243 | en | [
"spi"
] | 2024/03/05 22:32:16 | 2016 | https://hal.science/hal-01772243/file/Conf_Canadas_al_Modelling_collection_2016.pdf | E Postek
F Dubois
R Mozul
P Cañadas
MODELLING OF A COLLECTION OF NON-RIGID PARTICLES WITH SMOOTH DISCRETE ELEMENT METHOD
Introduction
Usually, models of the generation of cell colonies use the quasi-static discrete element approach to evaluate the contact forces between the cells [START_REF] Adra | Development of a three dimensional multiscale computational model of the human epidermis[END_REF]. The forces are necessary for the evaluation of mechanotransduction phenomena [START_REF] Postek | Parameter sensitivity of a monolayer tensegrity model of tissues[END_REF]. In contrast to this approach, we use compliant particles. The use of non-rigid particles changes the stress distribution in the particle assembly. Even intuitively, it is closer to reality.
Cell colony
The cell colony stands for a piece of tissue. We employ a non-rigid model of a single particle of equivalent stiffness to the cell. The calibration is done following the paper [START_REF] Mcgarry | A three-dimensional finite element model of an adherent eukaryotic cell[END_REF].
Multibody approach
We apply the program LMGC90 [START_REF] Renouf | A parallel version of the Non Smooth Contact Dynamics Algorithm applied to the simulation of granular media[END_REF], [START_REF] Radjai | Discrete-element Modelling of Granular Materials[END_REF], which makes it possible to model the contact of a large number of compliant particles discretized with finite elements.
The governing equations are written in the framework of the approach that was proposed by Moreau and Jean [START_REF] Renouf | A parallel version of the Non Smooth Contact Dynamics Algorithm applied to the simulation of granular media[END_REF]. The set of equations of motion including the initial and the boundary conditions takes the form:
M(q̇_{i+1} - q̇_i) = ∫_{t_i}^{t_{i+1}} (F(q, q̇, s) + P(s)) ds + p_{i+1}   (1)
where M is the mass matrix, q is the vector of generalized displacements, P(t) is the vector of external forces, $F(q, \dot q, t)$ is the vector of internal forces including the inertia terms, and $p_{i+1}$ is the vector of impulses resulting from contacts over the time step.
The integration with the θ scheme of the above system of equations leads to the equation:
$\tilde M^k \, \Delta \dot q^{\,k+1}_{i+1} = p^{k}_{free} + p^{k+1}_{i+1}$ (2)
The effective mass matrix $\tilde M^k$ reads:
$\tilde M^k = M + h^2 \theta^2 K^k$ (3)
where h is the time increment, θ is the integration coefficient [0.5, 1] and K is the tangent stiffness. The θ coefficient is taken as 1.0 yielding the Newton-Raphson integration rule. The effective vector of forces free of contact is of the form:
$p^k_{free} = \tilde M^k \dot q^k_{i+1} + M(\dot q_i - \dot q^k_{i+1}) + h\left[(1 - \theta)(F_i + P_i) + \theta(F^k_{i+1} + P_{i+1})\right]$ (4)
Contact impulses are computed using the NSCD method implemented in the LMGC90 software platform. First, contact detection between cells is performed. Then the dynamics equations above are expressed in terms of the contact unknowns (gap or relative velocity, and contact impulse). A Non-Linear Gauss-Seidel method then computes the contact impulses. Finally, the resulting impulses on the cell nodes due to contacts are added to the dynamics equation to compute the new velocities and positions. We use the Open-MPI parallel version of the program [START_REF] Postek | Parameter sensitivity of a monolayer tensegrity model of tissues[END_REF].
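To make the sequence of operations concrete, the following is a minimal Python sketch of one time step of this kind of scheme. It is not the LMGC90 implementation: the callables `detect_contacts` and `nlgs_solve` are placeholders standing in for the contact-detection and Non-Linear Gauss-Seidel stages, and only a single, simplified iteration with the previous velocity as predictor is shown (so the term $M(\dot q_i - \dot q^k_{i+1})$ of Eq. (4) vanishes).

```python
import numpy as np

def theta_scheme_step(M, K, q, v, h, theta, internal, external,
                      detect_contacts, nlgs_solve):
    """One time step of a theta-scheme with NSCD-style contact resolution.

    M, K      : mass and tangent stiffness matrices
    q, v      : generalized displacements and velocities at time t_i
    h, theta  : time increment and integration coefficient in [0.5, 1]
    internal(q, v), external(t) : force vectors F and P
    detect_contacts, nlgs_solve : placeholder callables for the contact stages
    """
    # Effective mass matrix, as in Eq. (3): M_eff = M + h^2 theta^2 K
    M_eff = M + (h * theta) ** 2 * K

    # Contact-free impulse, as in Eq. (4), with the old velocity as predictor
    F_i, P_i = internal(q, v), external(0.0)
    F_pred, P_next = internal(q + h * v, v), external(h)
    p_free = M_eff @ v + h * ((1.0 - theta) * (F_i + P_i)
                              + theta * (F_pred + P_next))

    # Contact stage: detect contacts, compute contact impulses with NLGS
    contacts = detect_contacts(q)
    p_contact = nlgs_solve(M_eff, p_free, contacts)

    # New velocities (solve M_eff v_new = p_free + p_contact), then positions
    v_new = np.linalg.solve(M_eff, p_free + p_contact)
    q_new = q + h * ((1.0 - theta) * v + theta * v_new)
    return q_new, v_new
```

The sketch only illustrates the order of the stages described above; the actual solver iterates the Newton and Gauss-Seidel loops until convergence and runs them in parallel.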
Concluding remarks
With the presented scheme we calculate the contact forces between the particles. The scheme, based on coupling the LMGC90 software with the cell model, has been joined to an agent model able to take into account the effect of stress evolution in the growing tissue [START_REF] Postek | Concept of an Agent-stress Model of a Tissue[END_REF].
Fig. 1. Single cell (a); group of cells (b). The group of cell avatars in Fig. 1 (b) has been generated employing the agent model in the framework of the FLAME platform (Flexible Large-scale Agent Modelling Environment) [1]. It consists of the stem cells, the TA cells and the Committed cells. The avatars are replaced with the equivalent stiffness cells, Fig. 1 (a). | 4,092 | [
"14314",
"171604",
"19991"
] | [
"31065",
"693",
"388220",
"693",
"388220",
"693",
"411468"
] |
01476561 | en | [
"math"
] | 2024/03/05 22:32:16 | 2017 | https://hal.science/hal-01476561/file/1508.00001.pdf | Carlo Rovelli
Michelangelo's Stone: an Argument against Platonism in Mathematics
If there is a 'platonic world' M of mathematical facts, what does M contain precisely? I observe that if M is too large, it is uninteresting, because the value is in the selection, not in the totality; if it is smaller and interesting, it is not independent of us. Both alternatives challenge mathematical platonism. I suggest that the universality of our mathematics may be a prejudice hiding its contingency, and illustrate contingent aspects of classical geometry, arithmetic and linear algebra.
Mathematical platonism [START_REF] Plato | The Republic[END_REF] is the view that mathematical reality exists by itself, independently from our own intellectual activities. 1 Many top level mathematicians hold this view dear, and express the sentiment that they do not "construct" new mathematics, but rather "discover" structures that already exist: real entities in a platonic mathematical world. 2 Platonism is alternative to other views on the foundations of mathematics, such as reductionism, formalism, intuitionism, or the Aristotelian idea that mathematical entities exist, but they are embodied in the material world [START_REF] Gillies | An Aristotelian Approach to Mathematical Ontology[END_REF].
Here, I present a simple argument against platonism in mathematics, which I have not found in the literature.
The argument is based on posing a question. Let us assume that a platonic world of mathematical entities and mathematical truths does indeed exist, and is independent from us. Let us call this world M, for Mathematics. The question I consider is: what is it reasonable to expect M to contain? I argue that even a superficial investigation of this question reduces the idea of the existence of M to something trivial, or contradictory.
In particular, I argue that the attempt to restrict M to "natural and universal" structures is illusory: it is the perspectival error of mistaking ourselves as universal ("English is clearly the most natural of languages".) In particular, I point out contingent aspects of the two traditional areas of classical mathematics: geometry and arithmetic, and of a key tool of modern science: linear algebra.
Michelangelo's stone and Borges' library
Say we take a Platonic stance about math: in some appropriate sense, the mathematical world M exists. The expressions "to exist", "to be real" and similar can have a variety of meanings and usages, and this is a big part of the issue, if not the main one. But for the sake of the present argument I do not need to define them-nor, for that matter, platonism-precisely. The argument remains valid, I think, under most reasonable definitions and usages.
So, what does M include? Certainly M includes all the beautiful mathematical theories that mathematicians have discovered so far. This is its primary purpose. It includes Pythagoras' theorem, the classification of the Lie groups, the properties of prime numbers and so on. It includes the real numbers, and Cantor's proof that they are "more" than the integers, in the sense defined by Cantor, and both possible extensions of arithmetic: the one where there are infinities larger than the integers and smaller than the reals, and the one where there aren't. It contains game theory, and topos theory. It contains lots of stuff.
But M cannot include only what mathematicians have discovered so far, because the point of platonism is precisely that what they will discover tomorrow already exists in the platonic world today. It would not be kind towards future mathematicians to expect that M contains just a bit more of what we have already done. Obviously whoever takes a Platonic view must assume that in the platonic world of math there is much more than what has been already discovered. How much?
Certainly M contains, say, all true (or at least all demonstrable) theorems about integer numbers. All possible true theorems about Euclidean geometry, including all those we have not yet discovered. But there should be more than that, of course, as there is far more math than that.
We can get a grasp on the content of M from the axiomatic formulation of mathematics: given various consistent sets A 1 , A 2 , ... of axioms, M will include all true theorems following from each A n , all waiting to be discovered by us. We can list many sets of interesting axioms, and imagine the platonic world M to be the ensemble of theorems these imply, all nicely ordered in families, according to the corresponding set of axioms.
But this is still insufficient, because a good mathematician tomorrow could come out with a new set of axioms and find new great mathematics, like the people who discovered non-commutative geometry, or those who defined C*-algebras did.
We are getting close to what the platonic world must contain. Let us assume that a sufficiently universal language exist, say based on logic. Then the platonic world M is the ensemble of all theorems that follow from all (non contradictory) choices of axioms. This is a good picture of what M could be. We have found the content of the platonic world of math.
But something starts to be disturbing. The resulting M is big, extremely big: it contains too much junk. The large majority of coherent sets of axioms are totally irrelevant.
Before discussing the problem with precision, a couple of similes can help to understand what is going on.
(i) During the Italian Renaissance, Michelangelo Buonarroti, one of the greatest artists of all times, said that a good sculptor does not create a statue: he simply "takes it out" from the block of stone where the statue already lay hidden. A statue is already there, in its block of stone. The artists must simply expose it, carving away the redundant stone [START_REF] Neret | [END_REF]. The artist does not "create" the statue: he "finds" it.
In a sense, this is true: a statue is just a subset of the grains of stone forming the original block. It suffices to take away the other grains, and the statue is taken out. But the hard part of the game is of course to find out which subset of grains of stone to leave there, and this, unfortunately, is not written on the stone. It is selection that matters. A block of stone already contained Michelangelo's Moses, but it also contained virtually anything else -that is, all possible forms. The art of sculpture is to be able to determine, which, among this virtual infinity of forms, will talk to the rest of us as art does.
Michelangelo's statement is evocative, maybe powerful, and perhaps speaks to his psychology and his greatness. But it does not say much about the art of the sculptor. The fact of including all possible statues does not confer to the stone the immense artistic value of all the possible beautiful statues it might contain, because the point of the art is the choice, not the collection. By itself, the stone is dull.
(ii) The same story can be told about books. Borges's famous library contained all possible books: all possible combinations of alphabet letters [START_REF] Borges | The Library of Babel[END_REF]. Assuming a book is shorter than, say, a million letters, there are then more or less $30^{10^6}$ possible books, which is not even such a big number for a mathematician. So, a writer does not really create a book: she simply "finds" it, in the platonic library of all books. A particularly nice combination of letters makes up, say, Moby-Dick. Moby-Dick already existed in the platonic space of books: Melville didn't create Moby-Dick, he just discovered Moby-Dick... Like Michelangelo's stone, Borges's library is void of any interest: it has no content, because the value is in the choice, not in the totality of the alternatives.
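Just to give a sense of the (finite) size involved, here is a side calculation, not part of the original text, using the same figures as above: an alphabet of 30 characters and books up to a million letters long.

```python
from math import log10

alphabet_size, max_length = 30, 10**6
# 30**(10**6) is the rough count used above; its number of decimal digits is:
digits = int(max_length * log10(alphabet_size)) + 1
print(digits)  # about 1.48 million digits: enormous, but perfectly finite
```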
The Platonic world of mathematics M defined above is similar to Michelangelo's block of stone, or Borges library, or Hegel's "night in which all cows are black" 3 : a mere featureless vastness, without value because the value is in the choice, not in the totality of the possibilities. Similarly, science can be said to be nothing else than the denotation of a subset of David Lewis's possible worlds [START_REF] Lewis | On the Plurality of Worlds[END_REF]: those respecting certain laws we have found. But Lewis's totality of all possible world is not science, because the value of science is in the restriction, not in the totality.
Mathematics may be called an "ensemble of tautologies", in the sense of Wittgenstein. But it is not the ensemble of all tautologies: these are too many and their ensemble is uninteresting and irrelevant. Mathematics is about recognizing the "interesting" ones. Mathematics may be the investigation of structures. But it is not the list of all possible structures: these are too many and their ensemble is uninteresting. If the world of mathematics was identified with the platonic world M defined above, we could program a computer to slowly unravel it entirely, by listing all possible axioms and systematically applying all possible transformation rules to derive all possible theorems. But we do not even think of doing so. Why? Because what we call mathematics is an infinitesimal subset of the huge world M defined above: it is the tiny subset which is of interest for us. Mathematics is about studying the "interesting" structures.
So, the problem becomes: what does "interesting" mean?
Interest is in the eye of the interested
Can we restrict the definition of M to the interesting subset? Of course we can, but interest is in the eyes of a subject. A statue is a subset of the stone which is worthwhile, for us. A particular combination of letters is a good book, for us. What is it that makes certain set of axioms defining certain mathematical objects, and certain theorems, interesting?
There are different possible answers to this question, but they all make explicit or implicit reference to features of ourselves, our mind, our contingent environment, or the physical structure our world happens to have.
This fact is pretty evident as far as art or literature are concerned. Does it hold for mathematics as well? Hasn't mathematics precisely that universality feel that is at the root of platonism?
3 Hegel utilized this Yiddish saying to ridicule Schelling's notion of Absolute, meaning that -like mathematical platonism-this included too much and was too undifferentiated to be of any relevance [START_REF] Hegel | Phenomenology of Spirit[END_REF].
Shouldn't we expect -as often claimed-any other intelligent entity of the cosmos to come out with the same "interesting" mathematics as us?
The question is crucial for mathematical platonism, because platonism is the thesis that mathematical entities and truths form a world which exists independently from us. If what we call mathematics ends up depending heavily on ourselves or the specific features of our world, platonism looses its meaning.
I present below some considerations that indicate that the claimed universality of mathematics is a parochial prejudice. These are based on the concrete examples provided by the chapters of mathematics that have most commonly been indicated as universal.
The geometry of a sphere
Euclidean geometry has been among the first pieces of mathematics to be formalized. Euclid's celebrated text, the "Elements" [START_REF] Euclid | [END_REF], where Euclidean geometry is beautifully developed, has been the ideal reference for all mathematical texts. Euclidean geometry describes space. It captures our intuition about the structure of space. It has applications to virtually all fields of science, technology and engineering. Pythagoras' theorem, which is at its core, is a familiar icon.
It is difficult to imagine something having a more "universal" flavor than euclidean geometry. What could be contingent or accidental about it? What part do we humans have in singling it out? Wouldn't any intelligent entity developing anywhere in the universe come out with this same mathematics?
I maintain the answer is negative. To see why, let me start by recalling that, as is well known, Euclidean geometry was developed by Greek mathematicians mostly living in Egypt during the Hellenistic period, building on Egyptians techniques for measuring the land. These were important because of the Nile's floods cancelling borders between private land parcels. The very name of the game, "geometry", means "measurement of the land" in Greek. Two-dimensional Euclidean geometry describes, in particular, the mathematical structure formed by the land.
But: does it? Well, the Earth is more a sphere than a plane. Its surface is better described by the geometry of a sphere, than by two-dimensional (2d) Euclidean geometry. It is an accidental fact that Egypt happens to be small compared to the size of the Earth. The radius of the Earth is around 6,000 Kilometers. The size of Egypt is of the order of 1,000 Kilometers. Thus, the scale of the Earth is more than 6 times larger than the scale of Egypt. Disregarding the sphericity of the Earth is an approximation, which is viable when dealing with the geometry of Egypt and becomes better and better as the region considered is smaller. As a practical matter, 2d Euclidean geometry is useful, but it is a decent approximation that works only because of the smallness of the size of Egypt. Intelligent beings living on a planet just a bit smaller than ours [START_REF] De Saint-Exupéry | [END_REF], would have easily detected the effects of the curvature of the planet's surface. They would not have developed 2d Euclidean geometry.
One may object that this is true for 2d, but not for 3d geometry. The geometry of the surface of a sphere can after all be obtained from Euclidean 3d geometry. But the objection has no teeth: we have learned with general relativity that spacetime is curved and Euclidean geometry is just an approximation also as far as 3d physical space is concerned. Intelligent beings living on a region of stronger spacetime curvature would have no reason to start mathematics from Euclidean geometry. 4 A more substantial objection is that 2d euclidean geometry is simpler and more "natural" than curved geometry. It is intuitively grasped by our intellect, and mathematics describes this intuition about space. Its simplicity and intuitive aspect are the reasons for its universal nature. Euclidean geometry is therefore universal in this sense. I show below that this objection is equally ill founded: the geometry of a sphere is definitely simpler and more elegant than the geometry of a plane.
Indeed, there is a branch of mathematics, called 2d "spherical" geometry, which describes directly the (intrinsic) geometry of a sphere. This is the mathematics that the Greeks would have developed had the Earth been sufficiently small to detect the effects of the Earth's surface curvature on the Egyptians fields. Perhaps quite surprisingly for many, spherical geometry is far simpler and "more elegant" than Euclidean geometry. I illustrate this statement with a few examples below, without, of course, going into a full exposition of spherical geometry (see for instance [START_REF] Todhunter | Spherical trigonometry[END_REF][START_REF] Harris | Spherical Geometry[END_REF]).
Consider the theory of triangles: the familiar part of geometry we study early at school. In Euclidean geometry, a triangle has three sides, with lengths a, b and c, and three angles α, β and γ (Figure 1). We measure angles with pure numbers, so α, β and γ are taken to be numbers with value between 0 and π. Measuring with numbers the length of the sides is a more complicated business. Since there is no standard of length in Euclidean geometry, we must either resort to talk only about ratios between lengths (as the ancients preferred), or to choose arbitrarily a segment once and for all, use it as our "unit of measure", and characterize the length of each side of the triangle by the number which gives its ratio to the unit (as the moderns prefer). All this simplifies dramatically in spherical geometry: here there is a preferred scale, the length of the equator. The length of an arc (the shortest path between two points) is naturally measured by the ratio to it. Equivalently (if we immerse the sphere in space) by the angle subtended to the arc. Therefore the length of the side of a triangle (a, b, c) is an angle as well. See Figure 2. Compare then the theories of triangles in the two geometries (Figure 1):
4 It is well known that Kant was mistaken in his deduction that the Euclidean geometry of physical space is true a priori [START_REF] Kant | The Critique of Pure Reason[END_REF]. But even Wittgenstein bordered on mistake in dangerously appearing to assume a unique possible set of laws of geometry for anything spatial: "We could present spatially an atomic fact which contradicted the laws of physics, but not one which contradicted the laws of geometry". Tractatus, Proposition 3.0321 [START_REF] Wittgenstein | Tractatus Logico-Philosphicus[END_REF].
Euclidean geometry: (i) Two triangles are equal if they have equal sides, or if one side and two angles are equal. (ii) The area of the triangle is
$A = \frac{1}{4}\sqrt{2a^2b^2 + 2a^2c^2 + 2b^2c^2 - a^4 - b^4 - c^4}$.
(iii) For right triangles:
$a^2 + b^2 = c^2$.
Spherical geometry:
(i) Triangles with same sides, or same angles, are equal.
(ii) The area of a triangle is
A = α + β + γ -π.
(iii) For right triangles: cos c = cos a cos b.
Even a cursory look at these results reveals the greater simplicity of spherical geometry. Indeed, spherical geometry has a far more "universal" flavor than Euclidean geometry.
Euclidean geometry can be obtained from it as a limiting case: it is the geometry of figures that are much smaller than the curvature radius. In this case a, b and c are all much smaller than π. Their cosine is well approximated by $\cos\theta \sim 1 - \frac{1}{2}\theta^2$ and the last formula reduces to Pythagoras' theorem in the first approximation. Far from being a structural property of the very nature of space, Pythagoras' theorem is only a first order approximation, valid in a limit case of a much simpler and cleaner mathematics: 2d spherical geometry. There are many other beautiful and natural results in spherical geometry, which I shall not report here. They extend to the 3d case: the intrinsic geometry of a 3-sphere. A 3-sphere is a far more reasonable structure than the infinite Euclidean space: it is the finite homogeneous three-dimensional metric space without boundaries. The geometry may well be the large scale geometry of our universe [START_REF] Einstein | Cosmological Considerations in the General Theory of Relativity[END_REF]. 5 Its shape is counterintuitive for many of us, schooled in Euclid. But it was not so for Dante Alighieri, who did not study Euclid at school: the topology of the universe he describes in his poem is precisely that of a 3-sphere [START_REF] Peterson | Dante and the 3-sphere[END_REF]. See Figure 3. What is "intuitive" changes with history.
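The limit is easy to check numerically. The short script below is an illustration added here (it assumes NumPy and is not part of the original text); it compares the spherical relation $\cos c = \cos a \cos b$ with the Euclidean $a^2 + b^2 = c^2$ for a right triangle whose sides shrink relative to the curvature radius.

```python
import numpy as np

# Right triangle with legs a and b (arc lengths in units of the sphere radius).
for scale in [1.0, 0.1, 0.01]:
    a, b = 0.3 * scale, 0.4 * scale
    c_spherical = np.arccos(np.cos(a) * np.cos(b))  # spherical "Pythagoras"
    c_euclidean = np.hypot(a, b)                     # flat-space Pythagoras
    print(scale, c_spherical, c_euclidean)
# As the triangle shrinks, the two values of the hypotenuse agree better and better.
```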
These considerations indicate that the reason Euclidean geometry has played such a major role in the foundation of mathematics is not because of its universality and independence from our contingent situation. It is the opposite: Euclidean geometry is of value to us just because it describes (not even very well) the accidental properties of the region we happen to inhabit. Inhabitants of a different region of the universe -a smaller planet, or a region with high space curvature- would likely fail to consider Euclidean geometry interesting mathematics. For them, Euclidean geometry could be an uninteresting and cumbersome limiting case.
5 Cosmological measurements indicate that spacetime is curved, but have so far failed to detect a large scale cosmological curvature of space. This of course does not imply that the universe is flat [START_REF] Ellis | Relativistic Cosmology[END_REF], for the same reason for which the failure to detect curvature on the fields of Egypt did not imply that the Earth was flat. It only shows that the universe is big. Current measurements indicate that the radius of the Universe should be at least ten times larger than the portion of the Universe we see [START_REF] Hinshaw | Five-Year Wilkinson Microwave Anisotropy Probe Observations: Data Processing, Sky Maps, and Basic Results[END_REF]. A ratio, by the way, quite similar to the Egyptian case.
Linear algebra
Every physicist, mathematician or engineer learns linear algebra and uses it heavily. Linear algebra, namely the algebra of vectors, matrices, linear transformations and so on, is the algebra of linear spaces, and since virtually everything is linear in a first approximation, linear algebra is ubiquitous. It is difficult to resist its simplicity, beauty and generality when studying it, usually in the early years at college. Furthermore, today, we find linear algebra at the very foundations of physics, because it is the language of quantum theory. In the landmark paper that originated quantum theory [START_REF] Heisenberg | Uber quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen[END_REF], Werner Heisenberg understood that physical quantities are better described by matrices and used the multiplication of matrices (core of linear algebra) to successfully compute the properties of quantum particles. Shortly later, Paul Dirac wrote his masterpiece book [START_REF] Dirac | Principles of Quantum Mechanics[END_REF], where quantum mechanics is entirely expressed in terms of linear algebra (linear operators, eigenvalues, eigenvectors...).
It would therefore seem natural to formulate the hypothesis that any minimally advanced civilization would discover linear algebra very early and start using it heavily.
But it is not the case. In fact, we have a remarkable counterexample: a civilization that has developed for millennia without developing linear algebra-ours.
When Heisenberg wrote his famous paper he did not know linear algebra. He had no idea of what a matrix is, and had never previously learned the algorithm for multiplying matrices. He made it up in his effort to understand a puzzling aspect of the physical world. This is pretty evident from his paper. Dirac, in his book, is basically inventing linear algebra in the highly nonrigorous manner of a physicist. After having constructed it and tested its power to describe our world, linear algebra appears natural to us. But it didn't appear so for generations of previous mathematicians.
Which tiny piece of M turns out to be interesting for us, which parts turns out to be "mathematics" is far from obvious and universal. It is largely contingent.
Arithmetic and identity
The last example I discuss is given by the natural numbers (1, 2, 3, ...), which form the basis of arithmetic, the other half of classical mathematics. Natural numbers seem very natural indeed. There is evidence that the brain is pre-wired to be able to count, and do elementary arithmetic with small numbers [START_REF] Vallortigara | Cervelli che contano[END_REF]. Why so? Because our world appears to be naturally organized in terms of things that can be counted. But is this a feature of reality at large, of any possible world, or is it just a special feature of this little corner of the universe we inhabit and perceive? I suspect the second answer to be the right one. The notion of individual "object" is notoriously slippery, and objects need to have rare and peculiar properties in order to be countable. How many clouds are there in the sky? How many mountains in the Alps? How many coves along the coast of England? How many waves in a lake? How many clods in my garden? These are all very ill-defined questions.
To make the point, imagine some form of intelligence evolved on Jupiter, or a planet similar to Jupiter. Jupiter is fluid, not solid. This does not prevent it from developing complex structures: fluids develop complex structures, as their chemical composition, state of motion, and so on, can change continuously from point to point and from time to time, and their dynamics is governed by rich nonlinear equations. Furthermore, they interact with magnetic and electric fields, which vary continuously in space and time as well. Imagine that in such a huge (Jupiter is much larger than Earth's) Jovian environment, complex structures develop to the point to be conscious and to be able to do some math. After all, it has happened on Earth, so it shouldn't be so improbable for something like this to happen on an entirely fluid planet as well. Would this math include counting, that is, arithmetic? Why should it? There is nothing to count in a completely fluid environment. (Let's also say that our Jovian planet's atmosphere is so thick that one cannot see and count the stars, and that the rotation and revolution periods are equal, as for our Moon, and there are neither days nor years.) The math needed by this fluid intelligence would presumably include some sort of geometry, real numbers, field theory, differential equations..., all this could develop using only geometry, without ever considering this funny operation which is enumerating individual things one by one.
The notion of "one thing", or "one object", the notions themselves of unit and identity, are useful for us living in an environment where there happen to be stones, gazelles, trees, and friends that can be counted. The fluid intelligence diffused over the Jupiter-like planet, could have developed mathematics without ever thinking about natural numbers. These would not be of interest for her. I may risk being more speculative here. The development of the ability to count may be connected to the fact that life evolved on Earth in a peculiar form characterized by the existence of "individuals". There is no reason an intelligence capable to do math should take this form. In fact, the reason counting appears so natural to us may be that we are a species formed by interacting individuals, each realizing a notion of identity, or unit. What is clearly made by units is a group of interacting primates, not the world. The archetypical identities are my friends in the group. 6Modern physics is intriguingly ambiguous about countable entities. On the one hand, a major discovery of the XX century has been that at the elementary level nature is entirely described by field theory. Fields vary continuously in space and time. There is little to count, in the field picture of the world. On the other hand, quantum mechanics has injected a robust dose of discreteness in fundamental physics: because of quantum theory, fields have particle-like properties and particles are quintessentially countable objects. In any introductory quantum field theory course, students meet an important operator, the number operator N , whose eigenvalues are the natural numbers and whose physical interpretation is counting particles [START_REF] Itzykson | Quantum Field Theory[END_REF]. Perhaps our fluid Jovian intelligence would finally get to develop arithmetic when figuring out quantum field theory...
But notice that what modern physics says about what is countable in the world has no bearing on the universality of mathematics: at most, it points out which parts of M are interesting because they happen to describe this world.
Conclusion
In the light of these considerations, let us look back at the development of our own mathematics. Why has mathematics developed at first, and for such a long time, along two parallel lines: geometry and arithmetic? The answer begins to clarify: because these two branches of mathematics are of value for creatures like us, who instinctively count friends, enemies and sheep, and who need to measure, approximately, a nearly flat earth in a nearly flat region of physical space. In other words, this mathematics is of interest to us because it reflects very contingent interests of ours. Out of the immense vastness of M, the dull platonic space of all possible structures, we have carved out, like Michelangelo, a couple of shapes that speak to us.
There is no reason to assume that the mathematics that has developed later escapes this contingency. To the contrary, the continuous re-foundations and the constant re-organization of the global structure of mathematics testify to its non-systematic and non-universal global structure. Geometry, arithmetic, algebra, analysis, set theory, logic, category theory, and -recently- topos theory [START_REF] Caramello | The unification of Mathematics via Topos Theory[END_REF] have all been considered for playing a foundational role in mathematics. Far from being stable and universal, our mathematics is a fluttering butterfly, which follows the fancies of inconstant creatures. Its theorems are solid, of course; but selecting what represents an interesting theorem is a highly subjective matter.
It is the carving out, the selection, out of a dull and undifferentiated M, of a subset which is useful to us, interesting for us, beautiful and simple in our eyes, it is, in other words, something strictly related to what we are, that makes up what we call mathematics.
The idea that the mathematics that we find valuable forms a Platonic world fully independent from us is like the idea of an Entity that created the heavens and the earth, and happens to very much resemble my grandfather.
-Thanks to Hal Haggard and Andrea Tchertkoff for a careful reading of the manuscript and comments.
FIG. 1. Flat and spherical triangles.
FIG. 2. Two points on a sphere determine an arc, whose size is measured by the angle it subtends, or equivalently, intrinsically, by its ratio to an equator.
FIG. 3. Dante's universe: the Aristotelian spherical universe is surrounded by another similar spherical space, inhabited by God and Angel's spheres. The two spheres together form a three-sphere.
For an overview, see for instance[START_REF] Linnebo | Platonism in the Philosophy of Mathematics[END_REF].
Contemporary mathematicians that have articulated this view in writing include Roger Penrose[START_REF] Penrose | The Road to Reality : A Complete Guide to the Laws of the Universe[END_REF] and Alain Connes[START_REF] Connes | A View of Mathematics[END_REF].
This might be why ancient humans attributed human-like mental life to animals, trees and stones: they were perhaps utilizing mental circuits developed to deal with one another -within the primate group-extending them to deal also with animals, trees and stones. | 30,762 | [
"1681"
] | [
"179898",
"407863"
] |
01476783 | en | [
"phys"
] | 2024/03/05 22:32:16 | 2019 | https://hal.science/hal-01476783/file/1603.01561.pdf | Valerio Astuti
email: [email protected]
Marios Christodoulou
email: [email protected]
Carlo Rovelli
email: [email protected]
Volume entropy
Building on a technical result by Brunnemann and Rideout on the spectrum of the Volume operator in Loop Quantum Gravity, we show that the space of the quadrivalent states -with finite-volume individual nodes- describing a region with total volume smaller than V has finite dimension, whose logarithm is bounded by V log V. This allows us to introduce the notion of "volume entropy": the von Neumann entropy associated to the measurement of volume.
I. Introduction
Thermodynamical aspects of the dynamics of spacetime have first been pointed out by Bekenstein's introduction of an entropy associated to the horizon of a black hole [START_REF] Bekenstein | Black Holes and Entropy[END_REF]. This led to the formulation of the "laws of black holes thermodynamics" by Bardeen, Carter, and Hawking [START_REF] James M Bardeen | The Four laws of black hole mechanics[END_REF] and to Hawking's discovery of black hole radiance, which has reinforced the geometry/thermodynamics analogy [START_REF] Hawking | Particle creation by black holes[END_REF]. The connection between Area and Entropy suggests that it may be useful to treat aspects of space-time statistically at scales large compared to the Planck length [START_REF] Jacobson | Thermodynamics of space-time: The Einstein equation of state[END_REF], whether or not we expect the relevant microscopic elementary degrees of freedom to be simply the quanta of the gravitational field [START_REF] Chirco | Spacetime thermodynamics without hidden degrees of freedom[END_REF], or else. Black hole entropy, in particular, can be interpreted as cross-horizon entanglement entropy (see [START_REF] Bianchi | Horizon entanglement entropy and universality of the graviton coupling[END_REF] for recent results reinforcing this interpretation, and references therein), or -most likely equivalently-as the von Neumann entropy of the statistical state representing a macrostate with given horizon Area. In the context of Loop Quantum Gravity (LQG), this was considered in [START_REF] Rovelli | Black Hole Entropy from Loop Quantum Gravity[END_REF] and later extensively analyzed; for a recent review and full references see [START_REF] Barbero | Quantum Geometry and Black Holes[END_REF][START_REF] Ashtekar | The Issue of Information Loss: Current Status[END_REF].
All such developments are based on the assignment of thermodynamic properties to spacetime surfaces. This association has motivated the holographic hypothesis: the conjecture that the degrees of freedom of a region of space are somehow encoded in its boundary.
In this paper, instead, we study statistical properties associated to spacetime regions. We show that it is possible to define a Von Neumann entropy for the quantum gravitational field, associated to the Volume of a region, and that this entropy is (under suitable conditions) finite. The existence of an entropy associated to bulk degrees of freedom of a spin network was already considered in [START_REF] Krzysztof | Eigenvalues of the volume operator in loop quantum gravity[END_REF].
To this aim, we prove a finiteness result on the number of quantum states of gravity describing a region of finite volume. More precisely, we work in the context of LQG, and we prove that the dimension of the space of diffeomorphism invariant quadrivalent states without zero-volume nodes, describing a region of total volume smaller than V, is finite. We give explicitly the upper bound of the dimension as a function of V. The proof is based on a result on the spectrum of the LQG Volume operator proven by Brunnemann and Rideout [START_REF] Brunnemann | Properties of the volume operator in loop quantum gravity. I. Results[END_REF][START_REF] Brunneman | Properties of the volume operator in loop quantum gravity. II. Detailed presentation[END_REF]. Using this, we define the Von Neumann entropy of a quantum state of the gravitational field associated to Volume measurements.
II. Counting spin networks
Consider the measurement of the volume of a 3d spacelike region Σ. The physical system measured is the gravitational field. In the classical theory, this is given by the metric q on Σ: the volume is $V = \int_\Sigma \sqrt{\det q}\; d^3x$. In the quantum context, using the LQG formalism, the geometry of Σ is described by a state in the kinematical Hilbert space $\mathcal H_{diff}$. Volume measurements of Σ are described by a volume operator $\hat V$ on this state space. We refer to [START_REF] Rovelli | Covariant Loop Quantum Gravity[END_REF][START_REF] Rovelli | Quantum Gravity[END_REF] for details on basic LQG results and notation.
We restrict $\mathcal H_{diff}$ to four-valent graphs Γ where the nodes n have non-vanishing (unoriented) volume $v_n$. The spin network states $|\Gamma, j_l, v_n\rangle \in \mathcal H_{diff}$, where $j_l$ is the link quantum number or spin, form a countable, orthonormal basis of $\mathcal H_{diff}$. (We disregard here possible additional quantum numbers, such as the orientation, that have no bearing on our result.) The intertwiner basis at each node is chosen so that the local volume operator $\hat V_n$, acting on a single node, is diagonal and labelled by the eigenvalues $v_n$ of the node volume operator $\hat V_n$ associated to the node n.
$\hat V_n \,|\Gamma, j_l, v_n\rangle = v_n \,|\Gamma, j_l, v_n\rangle$ (1)
The states $|\Gamma, j_l, v_n\rangle$ are also eigenstates of the total volume operator $\hat V = \sum_{n=1}^{N} \hat V_n$, where N is the number of nodes in Γ, with eigenvalue
$v = \sum_{n=1}^{N} v_n$, (2)
the sum of the node volume eigenvalues v n . We seek a bound on the dimension of the subspace H V spanned by the states |Γ, j l , v n such that v ≤ V . That is, we want to count the spin-networks with volume less than V . We do this by bounding the number N Γ of four valent graphs in H V , the number N {j l } of possible spin assignments, and the number of the volume quantum numbers assignments N {vn} on each such graph. Clearly
$\dim \mathcal H_V \le N_\Gamma \, N_{\{j_l\}} \, N_{\{v_n\}}$. (3)
Crucial to this bound is the analytical result on the existence of a volume gap in four-valent spin networks found in [START_REF] Brunnemann | Properties of the volume operator in loop quantum gravity. I. Results[END_REF][START_REF] Brunneman | Properties of the volume operator in loop quantum gravity. II. Detailed presentation[END_REF]. The result is the following. In a node bounded by four links with maximum spin j max all nonvanishing volume eigenvalues are larger than
$v_{gap} \ge \frac{1}{4\sqrt{2}}\, \ell_P^3\, \gamma^{3/2} \sqrt{j_{max}}$ (4)
where $\ell_P$ is the Planck length and γ the Immirzi parameter. Numerical evidence for equation (4) was first given in [START_REF] Brunnemann | Simplification of the spectral analysis of the volume operator in loop quantum gravity[END_REF] and a compatible result was estimated in [START_REF] Bianchi | Discreteness of the volume of space from Bohr-Sommerfeld quantization[END_REF]. Since the minimum non-vanishing spin is j = 1/2, this implies that
$v_{gap} \ge \frac{1}{8}\, \ell_P^3\, \gamma^{3/2} \equiv v_o$ (5)
From existence of the volume gap, it follows that there is a maximum value of N Γ , because there is a maximum number of nodes for graphs in H V , as every node carries a minimum volume v o . Therefore a region of volume equal or smaller than V contains at most
$n = \frac{V}{v_o}$ (6)
nodes. Equation ( 4) bounds also the number of allowed area quantum numbers, because too large a j max would force too large a node volume. Therefore N {j l } is also finite. Finally, since the dimension of the space of the intertwiners at each node is finite and bounded by the value of spins, it follows that also the number N {vn} of individual volume quantum numbers is bounded. Then (3) shows immediately that the dimension of H V is finite. Let us bound it explicitely. We start by the number of graphs. The number of nodes must be smaller than n, given in [START_REF] Bianchi | Horizon entanglement entropy and universality of the graviton coupling[END_REF]. The number N Γ of 4-valent graphs with n nodes is bounded by
$N_\Gamma \le n^{4n}$ (7)
because each of the four links of each of the n nodes can be attached to any of the n nodes, giving at most $(n^n)^4 = n^{4n}$ possibilities.
Equation ( 4) bounds the spins. Since we must have V ≥ v gap , we must also have
$j \le j_{max} \le \frac{32\, V^2}{\ell_P^6\, \gamma^3} = \frac{1}{2}\, n^2$ (8)
In a graph with n nodes there are at most 4n links (the worst case being all boundary links), and therefore there are at most (2j max + 1) 4n spin assignments, or, in the large j limit, (2j max ) 4n . That is
$N_{\{j_l\}} \le (2 j_{max})^{4n} \le n^{8n}$ (9)
Finally, the dimension of the intertwiner space at each node is bounded by the areas associated to that node:
$\dim \mathcal K_{j_1,j_2,j_3,j_4} = \dim\, \mathrm{Inv}_{SU(2)} (\mathcal H_{j_1} \otimes \mathcal H_{j_2} \otimes \mathcal H_{j_3} \otimes \mathcal H_{j_4}) = \min(j_1 + j_2,\, j_3 + j_4) - \max(|j_1 - j_2|,\, |j_3 - j_4|) + 1 \le 2 \max(j_{l\in n}) + 1 \le 4 \max(j_{l\in n})$
with the last step following from max(j l∈n ) ≥ 1/2. Thus on a graph with n nodes, the maximum number of combination of eigenvalues is limited by:
$N_{\{v_n\}} \le (4 j_{max})^n = 2^n\, n^{2n}$ (10)
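For illustration, the intertwiner dimension formula above is simple enough to code directly. The small function below is an added sketch, not from the original paper; it counts the allowed intermediate spins (using the standard form with absolute values, and including the existence conditions that the one-line formula glosses over) and reproduces the per-node counting used in the bound.

```python
from fractions import Fraction

def intertwiner_dim(j1, j2, j3, j4):
    """Dimension of the SU(2)-invariant subspace of four spins.

    Counts the intermediate spins compatible with both pairs (j1, j2) and
    (j3, j4): min(j1+j2, j3+j4) - max(|j1-j2|, |j3-j4|) + 1 when positive,
    and 0 otherwise.
    """
    j1, j2, j3, j4 = (Fraction(x) for x in (j1, j2, j3, j4))
    lo = max(abs(j1 - j2), abs(j3 - j4))
    hi = min(j1 + j2, j3 + j4)
    # The two recoupling channels must overlap and have matching parity
    # (both integer or both half-integer total spin).
    if hi < lo or (j1 + j2 - j3 - j4) % 1 != 0:
        return 0
    return int(hi - lo) + 1

print(intertwiner_dim("1/2", "1/2", "1/2", "1/2"))  # 2
print(intertwiner_dim(1, 1, 1, 1))                   # 3
```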
Combining equations (3), (7), (9) and (10), we have an explicit bound on the dimension of the space of states with volume less than $V = n v_o$:
$\dim \mathcal H_V \le (cn)^{14n}$ (11)
where c is a number. For large n we can write
$S_V \equiv \log \dim \mathcal H_V \le 14\, n \log n$ (12)
which is the entropy associated to Hilbert space. Explicitly
$S_V \le 14\, \frac{V}{v_o} \log \frac{V}{v_o} \sim V \log V$. (13)
In the large volume limit, when the eigenvalues become increasingly dense, this corresponds to a density of states
$\nu(V) \equiv d(\dim \mathcal H_V)/dV$, similarly bounded:
$\nu(V) < 14\, [\log(n) + C]\, (cn)^{14n}$. (14)
III. Von Neumann proper volume entropy
In the previous section, we have observed that the dimension of the space of (four-valent, finite-volume-node) quantum states with total volume less than V is finite. This result implies that there is a finite von Neumann volume entropy associated to statistical states describing volume measurements. The simplest possibility is to consider the microcanonical ensemble describing the volume measurement of a region of space. That is, we take Volume to be a macroscopic (or thermodynamic, or "coarse grained") variable, and we write the corresponding statistical microstate that maximizes entropy. If the measured volume is in the interval $I_V = [V - \delta V, V]$, with small δV, then the corresponding micro-canonical state is simply
$\rho = \frac{P_{V,\delta V}}{\dim \mathcal H_{V,\delta V}}$. (15)
where P V,δV is the projector on
$\mathcal H_{V,\delta V} \equiv \mathrm{Span}\{\, |\Gamma, j_l, v_n\rangle \,:\, v \in I_V \,\}$. (16)
namely the span of the eigenspaces of eigenvalues of the volume that are in I V . Explicitly, the projector can be written in the form
$P_{V,\delta V} \equiv \sum_{v \in I_V} |\Gamma, j_l, v_n\rangle \langle \Gamma, j_l, v_n|$ (17)
The von Neumann entropy of ( 15) is
$S = -\mathrm{Tr}[\rho \log \rho] = \log \dim \mathcal H_{V,\delta V} < S_V \sim V \log V$. (18)
It is interesting to consider also a more generic state where ρ ∼ p(V ), for an arbitrary distribution p(V ) of probabilities of measuring a given volume eigenstate with volume V . For this state, the probability distribution of finding the value V in a volume measurement is
P (V ) = ν(V )p(V ) (19)
and the entropy can be written as the sum of two terms
$S = -\int dV\, \nu(V)\, p(V) \log(p(V)) = S_1 + S_2$ (20)
where the first
$S_P = -\int dV\, P(V) \log(P(V))$ (21)
is just the entropy due to the spread in the outcomes of volume measurements, while the second
$S_{Volume} \equiv S - S_P = \int dV\, P(V) \log(\nu(V))$ (22)
can be seen as a proper volume entropy. The bound found in the previous Section on ν(V), which indicates that log ν(V) grows less than $V^2$, shows that this proper volume entropy is finite for any distribution P(V) whose variance is finite. $S_{Volume}$ can be viewed as the irreducible entropy associated to any volume measurement.
IV. Lower bound
Let us now bound the dimension of H V from below. The crucial step for this is to notice the existence of a maximum δV in the spacing between the eigenvalues of the operator Vn . For instance, if we take a node between two large spins j and two 1 2 spins, the volume eigenvalues have decreasing spacing, with maximum spacing for the lowest eigenvalues, of the order v o . Disregarding irrelevant small numerical factors, let's take v o as the maximal spacing.
Given a volume V , let, as before, n = V /v 0 and consider spin networks with total volume in the interval
I n = [(n -1)v o , nv o ].
Let $N_m$ be the number of spin networks with m nodes that have the total volume v in the interval $I_n$. For m = 1, there is at least one such spin network, because of the minimal spacing. For m = 2, the volume v must be split between the two nodes: $v = v_1 + v_2$. This can be done in at least n - 1 manners, with $v_1 \in I_p$ and $v_2 \in I_{n-p}$ and p running from 1 to n - 1. This possibility is guaranteed again by the existence of the maximal spacing. In general, for m nodes, there are
$N_{n,m} = \binom{n-1}{m-1}$ (23)
different ways of splitting the total volume among nodes. This is the number of compositions of n in m subsets. Finally, the number m of nodes can vary between 1 and the maximum n, giving a total number of possible states larger than
$N_n = \sum_{m=1}^{n} N_{n,m} = \sum_{m=1}^{n} \binom{n-1}{m-1} = 2^{n-1}$. (24)
From which it follows that
$\dim \mathcal H_V \ge 2^{n-1}$. (25)
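The counting in Eqs. (23)-(25) is elementary and can be checked directly; the following snippet is an added illustration (not part of the original text) confirming that the number of compositions of n is $2^{n-1}$.

```python
from math import comb

def count_compositions(n):
    """Number of ways to write n as an ordered sum of m >= 1 positive parts,
    summed over all m; each part stands for one node carrying at least v_o."""
    return sum(comb(n - 1, m - 1) for m in range(1, n + 1))

for n in [1, 2, 5, 10, 20]:
    assert count_compositions(n) == 2 ** (n - 1)
print("sum_m C(n-1, m-1) = 2^(n-1) verified for the sampled n")
```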
Can all these states be realised by inequivalent spin networks, with suitable choices of the graph and the spins? To show that this is the case, it is sufficient to display at least one (however peculiar) example of spin network for each sequence of v n . But given an arbitrary sequence of v n we can always construct a graph formed by a single one dimensional chain of nodes, each (except the two ends) with two links connecting to the adjacent nodes in the chain and two links in the boundary. All these spin networks exist and are non-equivalent to one another. Therefore we have shown that there are at least 2 n-1 states with volume between V -v o and V . In the large volume limit we can write
$\dim \mathcal H_V \ge 2^{n} = 2^{V/v_o}$ (26)
so that the entropy satisfies
$cV \le S \le c'\, V \log V$ (27)
with c and c' constants.
V. Discussion
Geometrical entropy associated to surfaces of given Area plays a large role in the current discussions of the quantum nature of spacetime. Here we have shown that, under suitable conditions, it is also possible to compute a Von Neumann entropy associated to measurements of the Volume of a region of space. We have not discussed possible physical roles played by this entropy. A number of comments are in order:
(i) Since in the classical low energy limit Volume and Area are related by $V \sim A^{3/2}$, the Volume entropy we have considered $S_V \sim V \log V \sim A^{3/2} \log A$ may exceed the Bekenstein bound $S < S_A \sim A$. Volume entropy is accessible only by being in the bulk, and not necessarily from the outside, therefore it does not violate the versions of the Bekenstein bound that only refer to external observables.
(ii) The result presented above depends on the restriction of $\mathcal H_{diff}$ to four-valent states. We recall that the discussion is currently open in the literature on which of the two theories, with or without this restriction, is physically more interesting, with good arguments on both sides. However, it might be possible to extend the results presented here to the case of higher-valent graphs. Indeed, there is some evidence that there is a volume gap in higher-valent cases too, see for instance [START_REF] Haggard | Pentahedral volume, chaos, and quantum gravity[END_REF]. The effect of zero-volume nodes on the Volume entropy will be discussed elsewhere.
(iii) Volume entropy appears to fail to be an extensive quantity. The significance of this conclusion deserves to be explored. This feature is usual for systems with long range interactions, and in particular for systems of particles governed by the gravitational interaction. It is suggestive that gravity could retain this feature even when there are no interacting particle, and the role of long range interactions is taken by "long range" connections between graph nodes1 . A final word on this behaviour, however, has to wait for a more precise computation of the entropy growth with volume.
(iv) It has been recently pointed out that the interior of an old black hole contains surfaces with large volume [START_REF] Christodoulou | How big is a black hole?[END_REF][START_REF] Christodoulou | The (huge) volume of an evaporating black hole[END_REF] and that the large volume inside black holes can play an important role in the information paradox [START_REF] Ashtekar | The Issue of Information Loss: Current Status[END_REF][START_REF] Perez | No firewalls in quantum gravity: the role of discreteness of quantum geometry in resolving the information loss paradox[END_REF]. The results presented here may serve to quantify the corresponding interior entropy.
(v) A notion of entropy associated to the volume of space might perhaps provide an alternative to Penrose's Weyl curvature hypothesis [START_REF] Penrose | Before the big bang: An outrageous new perspective and its implications for particle physics[END_REF]. For the second principle of thermodynamics to hold, the initial state of the universe must have had low entropy. On the other hand, from cosmic background radiation observations, the initial state of matter must have been close to having maximal entropy. Penrose addresses this discrepancy by taking into consideration the entropy associated to gravitational degrees of freedom. His hypothesis is that the degrees of freedom which have been activated to bring the increase in entropy from the initial state are the ones associated to the Weyl curvature tensor, which in his hypothesis was null in the initial state of the universe. A definition of the bulk entropy of space, which, as would be expected, grows with the volume, could perhaps perform the same role as the Weyl curvature degrees of freedom do in Penrose's hypothesis: the universe had a much smaller volume close to its initial state, so the total available entropy was low -regardless of the matter entropy content -and has increased since, just because for a space of larger volume we have a greater number of states describing its geometry.
(vi) We close with a very speculative remark. Does the fact that entropy is large for larger volumes imply the existence of an entropic force driving to larger volumes? That is, could there be a statistical bias for transitions to geometries of greater volume? Generically, the growth of the phase space volume is a driving force in the evolution of a system: in a transition process, we sum over out states, more available states for a given outcome imply greater probability of that outcome. A full discussion of this point requires the dynamics of the theory to be explicitly taken into account, and we postpone it for future work.
Of course they are not really long range, in the sense that graph connections actually define locality.
Acknowledgments
MC and VA thank Thibaut Josset and Ilya Vilenski for critical discussions. MC acknowledges support from the Educational Grants Scheme of the A.G.Leventis Foundation for the academic years 2013-2014, 2014-2015, 2015-2016, as well as from the Samy Maroun Center for Time, Space and the Quantum. VA acknowledges financial support from Sapienza University of Rome. | 19,141 | [
"1681"
] | [
"179898",
"407863",
"407863",
"179898",
"179898",
"407863"
] |
01461384 | en | [
"phys"
] | 2024/03/05 22:32:18 | 2018 | https://hal.science/hal-01461384/file/1611.02420.pdf | come L'archive ouverte pluridisciplinaire
Meaning = Information + Evolution
Carlo Rovelli
CPT, Aix-Marseille Université, Université de Toulon, CNRS, F-13288 Marseille, France.
Notions like meaning, signal, intentionality, are difficult to relate to a physical world. I study a purely physical definition of "meaningful information", from which these notions can be derived. It is inspired by a model recently illustrated by Kolchinsky and Wolpert, and improves on Dretske's classic work on the relation between knowledge and information. I discuss what makes a physical process into a "signal".
I. INTRODUCTION
There is a gap in our understanding of the world. On the one hand we have the physical universe; on the other, notions like meaning, intentionality, agency, purpose, function and similar, which we employ for the like of life, humans, the economy... These notions are absent in elementary physics, and their placement into a physicalist world view is delicate [START_REF] Price | Naturalism without Mirrors[END_REF], to the point that the existence of this gap is commonly presented as the strongest argument against naturalism.
Two historical ideas have contributed tools to bridge the gap.
The first is Darwin's theory, which offers evidence on how function and purpose can emerge from natural variability and natural selection of structures [START_REF] Darwin | On the Origin of Species[END_REF]. Darwin's theory provides a naturalistic account for the ubiquitous presence of function and purpose in biology. It falls short of bridging the gap between physics and meaning, or intentionality.
The second is the notion of 'information', which is increasingly capturing the attention of scientists and philosophers. Information has been pointed out as a key element of the link between the two sides of the gap, for instance in the classic work of Fred Dretske [START_REF] Dretske | Knowledge and the Flow of Information[END_REF].
However, the word 'information' is highly ambiguous. It is used with a variety of distinct meanings, that cover a spectrum ranging from mental and semantic ("the information stored in your USB flash drive is comprehensible") all the way down to strictly engineeristic ("the information stored in your USB flash drive is 32 Giga"). This ambiguity is a source of confusion. In Dretske's book, information is introduced on the base of Shannon's theory [START_REF] Shannon | A Mathematical Theory of Communication[END_REF], explicitly interpreted as a formal theory that "does not say what information is".
In this note, I make two observations. The first is that it is possible to extract from the work of Shannon a purely physical version of the notion of information. Shannon calls its "relative information". I keep his terminology even if the ambiguity of these terms risks to lead to continue the misunderstanding; it would probably be better to call it simply 'correlation', since this is what it ultimately is: downright crude physical correlation.
The second observation is that the combination of this notion with Darwin's mechanism provides the ground for a definition of meaning. More precisely, it provides the ground for the definition of a notion of "meaningful infor-mation", a notion that on the one hand is solely built on physics, on the other can underpin intentionality, meaning, purpose, and is a key ingredient for agency.
The claim here is not that the full content of what we call intentionality, meaning, purpose -say in human psychology, or linguistics-is nothing else than the meaningful information defined here. But it is that these notions can be built upon the notion of meaningful information step by step, adding the articulation proper to our neural, mental, linguistic, social, etcetera, complexity. In other words, I am not claiming of giving here the full chain from physics to mental, but rather the crucial first link of the chain.
The definition of meaningful information I give here is inspired by a simple model presented by David Wolpert and Artemy Kolchinsky [START_REF] Wolpert | Observers as systems that acquire information to stay out of equilibrium[END_REF], which I describe below. The model illustrates how two physical notions, combined, give rise to a notion we usually ascribe to the nonphysical side of the gap: meaningful information.
The note is organised as follows. I start by a careful formulation of the notion of correlation (Shannon's relative information). I consider this a main motivation for this note: emphasise the commonly forgotten fact that such a purely physical definition of information exists. I then briefly recall a couple of points regarding Darwinian evolution which are relevant here, and I introduce (one of the many possible) characterisation of living beings. I then describe Wolpert's model and give explicitly the definition of meaningful information which is the main purpose of this note. Finally, I describe how this notion might bridge between the two sides of gap. I close with a discussion of the notion of signal and with some general considerations.
II. RELATIVE INFORMATION
Consider physical systems A, B, ... whose states are described by physical variables x, y, ..., respectively. This is the standard conceptual setting of physics. For simplicity, say at first that the variables take only discrete values. Let N_A, N_B, ... be the number of distinct values that the variables x, y, ... can take. If there is no relation or constraint between the systems A and B, then the pair of systems (A, B) can be in N_A × N_B states, one for each choice of a value for each of the two variables x and y. In physics, however, there are routinely constraints between systems that make certain states impossible. Let N_AB be the number of allowed possibilities. Using this, we can define 'relative information' as follows.
We say that A and B 'have information about one another' if N_AB is strictly smaller than the product N_A × N_B. We call

S = log(N_A × N_B) − log N_AB,    (1)

where the logarithm is taken in base 2, the "relative information" that A and B have about one another. The unit of information is called the 'bit'. For instance, each end of a magnetic compass can be either a North (N) or South (S) magnetic pole, but they cannot be both N or both S. The number of possible states of each pole of the compass is 2 (either N or S), so N_A = N_B = 2, but the physically allowed possibilities are not N_A × N_B = 2 × 2 = 4 (NN, NS, SN, SS). Rather, they are only two (NS, SN), therefore N_AB = 2. This is dictated by the physics. Then we say that the state (N or S) of one end of the compass 'has relative information'

S = log 2 + log 2 − log 2 = 1    (2)

(that is: 1 bit) about the state of the other end. Notice that this definition captures the physical underpinning of the fact that "if we know the polarity of one pole of the compass then we also know (have information about) the polarity of the other." But the definition itself is completely physical, and makes no reference to semantics or subjectivity.
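As a concrete illustration of Eq. (1), the count-based definition can be evaluated in a few lines of code. The following minimal Python sketch simply encodes the compass example above (the state spaces and the constraint are assumptions taken from that example, nothing more) and recovers the one bit of relative information.

```python
import math
from itertools import product

# State spaces of the two poles of the compass: each can be North or South.
states_A = ["N", "S"]
states_B = ["N", "S"]

# Physical constraint: the two poles can never carry the same polarity.
def allowed(a, b):
    return a != b

N_A, N_B = len(states_A), len(states_B)
N_AB = sum(1 for a, b in product(states_A, states_B) if allowed(a, b))

# Relative information, Eq. (1), with logarithms in base 2 (units: bits).
S = math.log2(N_A * N_B) - math.log2(N_AB)
print(S)  # -> 1.0 bit, as in Eq. (2)
```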
The generalisation to continuous variables is straightforward. Let P_A and P_B be the phase spaces of A and B respectively, and let P_AB be the subspace of the Cartesian product P_A × P_B which is allowed by the constraints. Then the relative information is

S = log V(P_A × P_B) − log V(P_AB)    (3)
whenever this is defined.¹ Since the notion of relative information captures correlations, it extends very naturally to random variables. Two random variables x and y described by a joint probability distribution p_AB(x, y) are uncorrelated if
p_AB(x, y) = p̄_AB(x, y)    (4)

where p̄_AB(x, y) is called the marginalisation of p_AB(x, y) and is defined as the product of the two marginal distributions

p̄_AB(x, y) = p_A(x) p_B(y),    (5)

in turn defined by

p_A(x) = ∫ p_AB(x, y) dy,   p_B(y) = ∫ p_AB(x, y) dx.    (6)
Otherwise they are correlated. The amount of correlation is given by the difference between the entropies of the two distributions p_AB(x, y) and p̄_AB(x, y), the entropy of a probability distribution p being S = −∫ p log p over the relevant space. All integrals are taken with the Liouville measures of the corresponding phase spaces.
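For the probabilistic case, the same quantity is the difference between the entropy of the marginalised distribution p̄_AB and that of p_AB, i.e. their mutual information. A minimal numerical sketch, assuming an arbitrary illustrative joint distribution on two binary variables:

```python
import numpy as np

# Illustrative joint distribution p_AB(x, y) for two binary variables (rows: x, cols: y).
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])

# Marginals and their product (the marginalisation of Eq. (5)).
p_x = p.sum(axis=1)
p_y = p.sum(axis=0)
p_bar = np.outer(p_x, p_y)

def entropy(q):
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

# Amount of correlation: S(p_bar) - S(p), in bits.
relative_information = entropy(p_bar) - entropy(p)
print(relative_information)  # ~0.28 bits for this example
```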
Correlations can exist because of physical laws or because of specific physical situations, or arrangements or mechanisms, or the past history of physical systems.
Here are a few examples. The fact that the two poles of a magnet cannot have the same polarisation is excluded by one of the Maxwell equations. It is just a fact of the world. The fact that two particles tied by a rope cannot move apart farther than the length of the rope is a consequence of a direct mechanical constraint: the rope. The frequency of the light emitted by a hot piece of metal is correlated to the temperature of the metal at the moment of the emission. The direction of the photons emitted from an object is correlated to the position of the object. In this case emission is the mechanism that enforces the correlation. The world teems with correlated quantities. Relative information is, accordingly, naturally ubiquitous.
Precisely because it is purely physical and so ubiquitous, relative information is not sufficient to account for meaning. 'Meaning' must be grounded on something else, far more specific.
III. SURVIVAL ADVANTAGE AND PURPOSE
Life is a characteristic phenomenon we observe on the surface of the Earth. It is largely formed by individual organisms that interact with their environment and embody mechanisms that keep themselves away from thermal equilibrium using available free energy. A dead organism decays rapidly to thermal equilibrium, while an organism which is alive does not. I take this -with quite a degree of arbitrariness-as a characteristic feature of organisms that are alive.
The key of Darwin's discovery is that we can legitimately reverse the causal relation between the existence of the mechanism and its function. The fact that the mechanism exhibits a purpose -ultimately to maintain the organism alive and reproduce it-can be simply understood as an indirect consequence, not a cause, of its existence and its structure.
As Darwin points out in his book, the idea is ancient. It can be traced at least to Empedocles, who suggested that life on Earth may be the result of the random happening of structures, all of which perish except those that happen to survive, and these are the living organisms.² The idea was criticised by Aristotle, on the ground that we see organisms being born with structures already suitable for survival, and not being born at random ([6] II 8, 198b35). But shifted from the individual to the species, and coupled with the understanding of inheritance and, later, genetics, the idea has turned out to be correct. Darwin clarified the role of variability and selection in the evolution of structures, and molecular biology illustrated how this may work in concrete terms. Function emerges naturally, and the obvious purposes that living matter exhibits can be understood as a consequence of variability and selection. What functions is there because it functions: hence it has survived. We do not need something external to the workings of nature to account for the appearance of function and purpose.

² [There could be] "beings where it happens as if everything was organised in view of a purpose, while actually things have been structured appropriately only by chance; and the things that happened not to be organised adequately perished, as Empedocles says".
Variability and selection alone may account for function and purpose, but they are not sufficient to account for meaning, because meaning has semantic and intentional connotations that are not a priori necessary for variability and selection. 'Meaning' must be grounded on something else.
IV. KOLCHINSKY-WOLPERT'S MODEL AND MEANINGFUL INFORMATION
My aim is now to distinguish the correlations that are ubiquitous in nature from those that we count as relevant information. To this aim, the key point is that surviving mechanisms survive by using correlations. This is how relevance is added to correlations.
The life of an organism progresses in a continuous exchange with the external environment. The mechanisms that lead to survival and reproduction are adapted by evolution to a certain environment. But in general the environment is in constant variation, in a manner that is often poorly predictable. It is obviously advantageous to be appropriately correlated with the external environment, because survival probability is maximised by adopting different behaviours in different environmental conditions.
A bacterium that swims to the left when nutrients are on the left and swims to the right when nutrients are on the right prospers; a bacterium that swims at random has fewer chances. Therefore many bacteria we see around us are of the first kind, not of the second kind. This simple observation leads to the Kolchinsky-Wolpert model [START_REF] Wolpert | Observers as systems that acquire information to stay out of equilibrium[END_REF].
A living system A is characterised by a number of variables x_n that describe its structure. These may be numerous, but are far fewer than those describing the full microphysics of A (say, the exact position of each water molecule in a cell). Therefore the variables x_n are macroscopic in the sense of statistical mechanics and there is an entropy S(x_n) associated to them, which counts the number of the corresponding microstates. As long as an organism is alive, S(x_n) remains far lower than its thermal-equilibrium value S_max. This capacity of keeping itself outside of thermal equilibrium, utilising free energy, is a crucial aspect of systems that are alive.
Living organisms generally have a rather sharp distinction between being alive and being dead, and we can represent it as a threshold S_thr on their entropy. Call B the environment and let y_n denote a set of variables specifying its state. Incomplete specification of the state of the environment can be described in terms of probabilities, and therefore the evolution of the environment is itself predictable at best probabilistically.
Consider now a specific variable x of the system A and a specific variable y of the system B in a given macroscopic state of the world. Given a value (x, y), and taking into account the probabilistic nature of evolution, at a later time t the system A will find itself in a configuration x_n with probability p_{x,y}(x_n). If at time zero p(x, y) is the joint probability distribution of x and y, the probability that at time t the system A will have entropy higher than the threshold is

P = ∫ dx_n dx dy p(x, y) p_{x,y}(x_n) θ(S(x_n) − S_thr),    (7)

where θ is the step function. Let us now define

P̄ = ∫ dx_n dx dy p̄(x, y) p_{x,y}(x_n) θ(S(x_n) − S_thr),    (8)

where p̄(x, y) is the marginalisation of p(x, y) defined above. This is the probability of having above-threshold entropy if we erase the relative information. This is the Kolchinsky-Wolpert model.
Let us define the relative information between x and y contained in p(x, y) to be "directly meaningful" for A over the time span t iff P̄ is different from P. And call

M = P̄ − P    (9)

the "significance" of this information. The significance of the information is its relevance for survival, that is, its capacity to affect the survival probability. Furthermore, call the relative information between x and y simply "meaningful" if it is directly meaningful, or if its marginalisation decreases the probability of acquiring information that can be meaningful, possibly in a different context.
Here is an example. Let B be food for a bacterium and A the bacterium, in a situation of food shortage. Let y be the location of the food; for simplicity say it can be either at the left or at the right. Let x be the variable that describes the internal state of the bacterium, which determines the direction in which the bacterium will move. If the two variables x and y are correlated in the right manner, the bacterium reaches the food and its chances of survival are higher. Therefore the correlation between x and y is "directly meaningful" for the bacterium, according to the definition given, because marginalising p(x, y), namely erasing the relative information, increases the probability of starvation.
Next, consider the same case, but in a situation of food abundance. In this case the correlation between x and y has no direct effect on the survival probability, because there is no risk of starvation. Therefore the x-y correlation is not directly meaningful. However, it is still (indirectly) meaningful, because it empowers the bacterium with a correlation that has a chance to affect its survival probability in another situation.
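A toy numerical version of this example can make the definition of M concrete. In the sketch below all numbers are illustrative assumptions, not values taken from [4]: x is the bacterium's internal swim-direction variable, y is the side on which the food lies, and "starving" stands for crossing the entropy threshold. The significance M = P̄ − P comes out positive in the food-shortage scenario and zero when food is abundant.

```python
import numpy as np

# Joint distribution p(x, y): x = swim direction, y = food side (both 'L' or 'R').
# A well-adapted bacterium is strongly correlated with the food location.
p = np.array([[0.45, 0.05],    # x = L, with y = L / R
              [0.05, 0.45]])   # x = R, with y = L / R

# Illustrative probability of crossing the entropy threshold given (x, y):
# high if the bacterium swims away from the food during a shortage, low otherwise.
p_starve_shortage = np.array([[0.1, 0.9],
                              [0.9, 0.1]])
# With abundant food, starvation is unlikely whatever the bacterium does.
p_starve_abundance = np.full((2, 2), 0.01)

def risk(joint, starve):
    return float(np.sum(joint * starve))

def significance(joint, starve):
    # Marginalisation: erase the correlation, keep the marginals.
    p_bar = np.outer(joint.sum(axis=1), joint.sum(axis=0))
    P = risk(joint, starve)        # Eq. (7)
    P_bar = risk(p_bar, starve)    # Eq. (8)
    return P_bar - P               # Eq. (9)

print(significance(p, p_starve_shortage))   # > 0: directly meaningful
print(significance(p, p_starve_abundance))  # = 0: not directly meaningful
```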
A few observations about this definition: i. Intentionality is built into the definition. The information here is information that the system A has about the variable y of the system B. It is by definition information "about something external". It refers to a physical configuration of A (namely the value of its variable x), insofar as this variable is correlated to something external (it 'knows' something external).
ii. The definition separates correlations of two kinds: accidental correlations that are ubiquitous in nature and have no effect on living beings, no role in semantics, no use; and correlations that contribute to survival. The notion of meaningful correlation captures the fact that information can have "value" in a Darwinian sense. The value is defined here a posteriori as the increase of survival chances. It is a "value" only in the sense that it increases these chances.
iii. Obviously, not any manifestation of meaning, purpose, intentionality or value is directly meaningful, according to the definition above. Reading today's newspaper is not likely to directly enhance my or my genes' survival probability. This is the sense of the distinction between 'directly' meaningful information and meaningful information. The second includes all relative information which in turn increases the probability of acquiring meaningful information. This opens the door to recursive growth of meaningful information and an arbitrary increase of semantic complexity. It is this secondary recursive growth that grounds the use of meaningful information in the brain. Starting with meaningful information in the sense defined here, we get something that looks more and more like the full notions of meaning we use in various contexts, by adding articulations and moving up to contexts where there is a brain, language, society, norms...

iv. A notion of 'truth' of the information, or 'veracity' of the information, is implicitly determined by the definition given. To see this, consider the case of the bacterium and the food. The variable x of the bacterium can take two values, say L and R, where R is the value leading the bacterium to swim to the Right and L to the Left. Here the definition leads to the idea that R means "food is on the right" and L means "food is on the left". The variable x contains this information. If for some reason the variable x is on L but the food happens to be on the Right, then the information contained in x is "not true". This is a very indirect and in a sense deflationary notion of truth, based on the effectiveness of the consequences of holding something for true. (Approximate coarse-grained knowledge is still knowledge, to the extent it is somehow effective. To fine-grain it, we need additional knowledge, which is more powerful because it is more effective.) Notice that this notion of truth is very close to the one common today in the natural sciences, when we say that the 'truth' of a theory is the success of its predictions. In fact, it is the same.
v. The definition of 'meaningful' considered here does not directly refer to anything mental. To have something mental you need a mind, and to have a mind you need a brain, with its rich capacity for elaborating and working with information. The question addressed here is what the physical basis is of the information that brains work with. The answer suggested is that it is just physical correlation between internal and external variables, affecting survival either directly or, potentially, indirectly.
The idea put forward is that what grounds all this is directly meaningful information, namely strictly physical correlations between a living organism and the external environment that have survival and reproductive value. The semantic notions of information and meaning are ultimately tied to their Darwinian evolutionary origin. The suggestion is that the notion of meaningful information serves as a ground for the foundation of meaning. That is, it could offer the link between the purely physical world and the world of meaning, purpose, intentionality and value. It could bridge the gap.
V. SIGNALS, REDUCTION AND MODALITY
A signal is a physical event that conveys meaning. A ring of my phone, for instance, is a signal that means that somebody is calling. When I hear it, I understand its meaning and I may reach the phone and answer.
As a purely physical event, the ring happens to physically cause a cascade of physical events, such as the vibration of air molecules, complex firing of nerves in my brain, etcetera, which can in principle be described in terms of purely physical causation. What distinguishes its being a signal, from its being a simple link in a physical causation chain?
The question becomes particularly interesting in the context of biology, and especially molecular biology. Here the minute workings of life are heavily described in terms of signals and information carriers: DNA codes the information on the structure of the organism and in particular on the specific proteins that are going to be produced, RNA carries this information outside the nucleus, receptors on the cell surface signal relevant external conditions by means of suitable chemical cascades. Similarly, the optical nerve exchanges information between the eye and the brain, the immune system receives information about infections, hormones signal to organs that it is time to do this or that, and so on, ad libitum. We describe the workings of life in heavily informational terms at every level. What does this mean? In which sense are these processes distinct from purely physical processes, for which we do not usually employ an informational language?

I see only one possible answer. First, in all these processes the carrier of the information could be somewhat easily replaced with something else without substantially altering the overall process. The ring of my phone can be replaced by a beep, or a vibration. To decode its meaning is the process that recognises these alternatives as equivalent in some sense. We can easily imagine an alternative version of life where the meaning of two letters is swapped in the genetic code. Second, in each of these cases the information carrier is physically correlated with something else (a protein, a condition outside the cell, a visual image in the eye, an infection, a phone call...) in such a way that breaking the correlation could damage the organism to some degree. This is precisely the definition of meaningful information studied here.
I close with two general considerations.
The first is about reductionism. Reductionism is often overstated. Nature appears to be formed by a relatively simple ensemble of elementary ingredients obeying relatively elementary laws. The possible combinations of these elements, however, are stupefying in number and variety, and largely outside the possibility that we could compute or deduce them from nature's elementary ingredients. These combinations happen to form higher level structures that we can in part understand directly. These we call emergent. They have a level of autonomy from elementary physics in two senses: they can be studied independently from elementary physics, and they can be realised in different manners from elementary constituents, so that their elementary constituents are in a sense irrelevant to our understanding of them. Because of this, it would obviously be useless and self-defeating to try to replace all the study of nature with physics. But evidence is strong that nature is unitary and coherent, and its manifestations are, whether we understand them or not, behaviours of an underlying physical world. Thus, we study thermal phenomena in terms of entropy, chemistry in terms of chemical affinity, biology in terms of functions, psychology in terms of emotions, and so on. But we increase our understanding of nature when we understand how the basic concepts of a science are grounded in physics, or grounded in a science which is itself grounded in physics, as we have largely been able to do for chemical bonds or entropy. It is in this sense, and only in this sense, that I am suggesting that meaningful information could provide the link between different levels of our description of the world.
The second consideration concerns the conceptual structure on which the definition of meaningful information proposed here is based. The definition has a modal core. Correlation is not defined in terms of how things are, but in terms of how they could or could not be. Without this, the notion of correlation cannot be constructed. The fact that something is red and something else is red does not count as a correlation. What counts as a correlation is, say, if two things can each be of different colours, but the two must always be of the same colour. This requires modal language. If the world is what it is, where does modality come from?
The question is raised by the fact that the definition of meaning given here is modal; but this does not bear on whether the definition is genuinely physical or not. The definition is genuinely physical: it is physics itself which is heavily modal. Even without disturbing quantum theory or other aspects of modern physics, already the basic structures of classical mechanics are heavily modal. The phase space of a physical system is the list of the configurations in which the system can be. Physics is not a science about how the world is: it is a science of how the world can be.
There are a number of different ways of understanding what this modality means. Perhaps the simplest in physics is to rely on the empirical fact that nature realises multiple instances of the same something in time and space. All stones behave similarly when they fall, and the same stone behaves similarly every time it falls. This permits us to construct a space of possibilities and then use the regularities for predictions. This structure can be seen as part of the elementary grammar of nature itself. And then the modality of physics and, consequently, the modality of the definition of meaning I have given, are fully harmless for a serene and quiet physicalism.
But I nevertheless raise a small red flag here, because we do not actually know the extent to which this structure is superimposed over the elementary texture of reality by ourselves. It could well be so: the structure could be generated precisely by the structure of the very 'meaningful information' we have been concerned with here. We are undoubtedly limited parts of nature, and we are so even as understanders of this same nature.
FIG. 1. The Kolchinsky-Wolpert model and the definition of meaningful information. If the probability P of descending to thermal equilibrium increases when we cut the information link between A and B, then the relative information (correlation) between the variables x and y is "meaningful information".
¹ Here V(.) is the Liouville volume and the difference between the two volumes can be defined as the limit of a regularisation even when the two terms individually diverge. For instance, if A and B are both free particles on a circle of size L, constrained to be at a distance less than or equal to L/N (say by a rope tying them), then we can easily regularise the phase space volume by bounding the momenta, and we get S = log N, independently of the regularisation.
ACKNOWLEDGMENTS
I thank David Wolpert for private communications and especially Jenann Ismael for a critical reading of the article and very helpful suggestions. | 28,344 | [
"1681"
] | [
"407863",
"179898"
] |
01772292 | en | [
"sde"
] | 2024/03/05 22:32:18 | 2017 | https://hal.science/hal-01772292/file/mt2017-pub00056979.pdf | Julianne De
Castro Oliveira
Jean-Baptiste Feret
Jorge Flavio
Yann Ponzoni
Jean-Philippe Nouvellon
Otavio Camargo Gastellu-Etchegorry
José Luiz Campoe
Luiz Stape
Estraviz Carlos
Gueric Rodriguez
Le Maire
Julianne De Castro Oliveira
Jean-Baptiste Féret
Jorge Flávio
Guerric Rodriguez
Simulating the canopy reflectance of different eucalypt genotypes with the DART 3-D model
Abstract-Finding suitable models of canopy reflectance in forward simulation mode is a prerequisite for their use in inverse mode to characterize canopy variables of interest, such as Leaf Area Index (LAI) or chlorophyll content. In this study, the accuracy of the 3D reflectance model DART was assessed for canopies of different Eucalyptus genotypes with distinct biophysical and biochemical characteristics, to improve the knowledge of how these characteristics influence the reflectance signal measured by passive orbital sensors. The first step was to test the model's suitability to simulate reflectance images in the visible and near infrared. We parameterized the DART model using extensive measurements from Eucalyptus plantations including 16 contrasted genotypes. Forest inventories were conducted, and leaf, bark and forest floor optical properties were measured. Simulation accuracy was evaluated by comparing the mean top of canopy (TOC) bidirectional reflectance of DART with TOC reflectance extracted from a Pleiades very high resolution satellite image. Results showed a good performance of DART, with a mean reflectance absolute error lower than 2 %. Inter-genotype reflectance variability was correctly simulated, but the model did not succeed in capturing the slight spatial variation for a given genotype, except when large gaps appeared due to tree mortality. The second step consisted of a sensitivity analysis to explore which biochemical or biophysical characteristics most influenced the canopy reflectance differences between genotypes. These results open perspectives for using the DART model in inversion mode.
Index Terms-DART, 3D modeling, eucalypt, radiative transfer model, remote sensing
Author affiliations: Julianne de Castro Oliveira and Luiz Carlos Estraviz Rodriguez are with the University of São Paulo, ESALQ/USP, Brazil (e-mail: [email protected]; [email protected]); Jean-Baptiste Féret is with IRSTEA, UMR TETIS, BP5092 Montpellier, France (e-mail: [email protected]); Flávio Jorge Ponzoni is with INPE, Brazil (e-mail: [email protected]); Yann Nouvellon is with CIRAD, UMR ECO&SOLS, F-34398 Montpellier, France and with the University of São Paulo, ESALQ/USP, Brazil (e-mail: [email protected]); Jean-Philippe Gastellu-Etchegorry is with CESBIO, France (e-mail: [email protected]); Otavio Camargo Campoe is with the Federal University of Santa Catarina, UFSC, Brazil (e-mail: [email protected]); José Luiz Stape is with Suzano Pulp and Paper, Brazil (e-mail: [email protected]); and Guerric le Maire is with CIRAD, UMR ECO&SOLS, F-34398 Montpellier, France and with NIPE, UNICAMP, Campinas, Brazil (e-mail: [email protected]).

I. INTRODUCTION

Among the different methods to estimate biophysical or biochemical characteristics of forest plantations, the analysis of the images measured by
sensors on orbital platforms is appropriate for studies at large spatial scales. Images are converted into reflectance values for each spectral band, and later used to retrieve biophysical parameters of the forest through empirical relationships, or through radiative transfer model (RTM) inversion [START_REF] Le Maire | Calibration of a species-specific spectral vegetation index for leaf area index (LAI) monitoring: example with MODIS reflectance time-series on Eucalyptus plantations[END_REF] - [START_REF] Le Maire | Calibration and validation of hyperspectral indices for the estimation of broadleavedforest leaf chlorophyll content, leaf mass per area, leaf area index and leafcanopy in[END_REF].
RTM explicitly take into account stand structural characteristics (tree dimensions and positions, leaf area index, leaf angle distribution, crown cover, among others) and can simulate the quantitative value of the reflectance spectra of the canopy as observed on top of the canopy or by a sensor onboard a plane or a satellite. They are based on the knowledge of the physical laws that control the transfer and interaction of solar radiation in a vegetative canopy, in interaction with the soil [START_REF] Gastellu-Etchegorry | A modeling approach to assess the robustness of spectrometric predictive equations for canopy chemistry[END_REF]. The DART -Discrete Anisotropic Radiative Transfer -model [START_REF] Gastellu-Etchegorry | A simple anisotropic reflectance model for homogeneous multilayer canopies[END_REF], [START_REF] Gastellu-Etchegorry | Discrete Anisotropic Radiative Transfer (DART 5) for modeling airborne and satellite spectroradiometer and LIDAR acquisitions of natural and urban landscapes[END_REF] is a comprehensive threedimensional model that simulates bidirectional reflectance and enables new possibilities of data analysis to evaluate, for example, canopy structure [START_REF] Barbier | Linking canopy images to forest structural parameters: potential of a modeling framework[END_REF], radiative budget [START_REF] Gastellu-Etchegorry | DART: a 3D model for simulating satellite images and studying surface radiation budget[END_REF], [START_REF] Demarez | Modeling of the radiation regime and photosynthesis of a finite canopy using the DART model. Influence of canopy architecture of canopy architecture assumptions and border effects[END_REF], photosynthesis [START_REF] Demarez | Modeling of the radiation regime and photosynthesis of a finite canopy using the DART model. Influence of canopy architecture of canopy architecture assumptions and border effects[END_REF], chlorophyll content [START_REF] Malenovský | Retrieval of spruce leaf chlorophyll content from airborne image data using continuum removal and radiative transfer[END_REF], [START_REF] Demarez | A modeling approach for studying forest chlorophyll content[END_REF], Leaf Area Index (LAI) [START_REF] Banskota | An LUT-Based inversion of DART model to estimate forest LAI from hyperspectral data[END_REF], [START_REF] Banskota | Investigating the utility of wavelet transforms for inverting a 3-D radiative transfer model using hyperspectral data to retrieve forest LAI[END_REF], among others.
Eucalypt plantations in Brazil cover 5.6 million ha, which accounts for 71.9 % of planted forests in Brazil [START_REF]Indústria Brasileira de Árvores[END_REF]. Currently, most areas are planted with several genotypes, mainly on clonal plantations, which have been tested and selected for distinct widespread soils and climatic Brazilian conditions [START_REF] Gonçalves | Integrating genetic and silvicultural strategies to minimize abiotic and biotic constraints in Brazilian eucalypt plantations[END_REF]. These genotypes provide different phenotypes, with distinct canopy structure, leaf morphology and biochemical compounds and biomass production. Due to their high economic importance in Brazil, the understanding of how biophysical parameters of planted forests could explain the spatial-temporal growth dynamics and the estimation of such parameters through remotely-sensed images is of paramount importance [START_REF] Le Maire | Calibration of a species-specific spectral vegetation index for leaf area index (LAI) monitoring: example with MODIS reflectance time-series on Eucalyptus plantations[END_REF], [START_REF] Le Maire | Leaf area index estimation with MODIS reflectance time series and model inversion during full rotations of Eucalyptus plantations[END_REF].
Eucalyptus plantations in Brazil present particular structures: they are planted at high densities (e.g. 1700 trees/ha), they generally have a low leaf area index compared to other dense forests, and they are planted in rows of different spacing (anisotropy). One supplementary difficulty comes from the variability of eucalypt species and genotypes that are planted in Brazil. The different genotypes can have different structural and biophysical properties, even at the same age, and these parameters may change the canopy reflectance with different magnitudes. It is therefore necessary to better understand the drivers of the reflectance differences between genotypes to further assess whether their estimation through inversion procedures is possible.
Despite the successful use of the physical approach of DART to retrieve canopy characteristics through inversion procedures, e.g. in [START_REF] Banskota | An LUT-Based inversion of DART model to estimate forest LAI from hyperspectral data[END_REF], [START_REF] Yáñez-Rausell | Estimation of spruce needle-leaf chlorophyll content cased on DART and PARAS canopy reflectance models[END_REF] - [START_REF] Gastellu-Etchegorry | An interpolation procedure for generalizing a look-up table inversion method[END_REF], few detailed studies have tested the efficiency of this 3D reflectance model in forward mode on forest canopy ecosystems [START_REF] Couturier | A modelbased performance test for forest classifiers on remote-sensing imagery[END_REF], [START_REF] Schneider | Simulating imaging spectrometer data: 3D forest modeling based on LiDAR and in situ data[END_REF]. The first assumption of an inversion procedure is the suitability of the RTM to accurately simulate the reflectance for a range of canopy characteristics corresponding at least to the range of application conditions. In this study, we parameterized the DART model using an extensive in situ measurement dataset. Eucalyptus plantations of 16 different genotypes were used to test the accuracy of the simulations generated by DART when compared with experimental images acquired from a very high spatial resolution satellite, Pleiades. In a second step, we performed a sensitivity analysis using the parameter variability as measured in situ to quantify the effect of the main stand parameters (inter-genotype variability) on the canopy reflectance. We finally discuss the use of DART for inversion studies in these particular ecosystems.
II. DATASET DESCRIPTION
A. Study site
The study site is located in Itatinga Municipality, in the state of São Paulo, southeastern Brazil, 22°58'04''S and 48°43'40''W (Fig. 1), as part of the IPEF-Eucflux project. A genotype trial experiment of eucalypt was installed in November 2009 with 16 genotypes comprising several genetic origins from different eucalypt growing companies and regions in Brazil (G1, G2, G10: E. grandis; G3-G9, G11-G13, G15: E. grandis x urophylla; G14: E. saligna; G16: E. camaldulensis x grandis). Fourteen of these 16 genotypes were clones and two (G1 and G2) had seminal origin. Planting rows were mainly east-west oriented, with plant arrangement of 3 m × 2 m (1666 trees per hectare). The experiment comprised 9 blocks, each having 16 treatments (genotypes) randomly distributed within a 4 × 4 subplot grid of 192 trees each (each subplot comprised 12 lines of 16 trees). Only the 10 lines and 10 rows central part of the subplot was analyzed (100 trees, 20 m × 30 m area).
B. In-situ measurements
Forest inventories were carried out at 6, 12, 19, 26, 38, 52, 62 and 74 months of age. During these inventories, trunk diameter at breast height (DBH) and tree height were measured. Close to most of these dates, 10-12 trees were cut for each genotype to compute the biomass per compartment (leaves, branches, trunk and bark) and to generate allometric relationships between trunk DBH and tree height on the one hand, and height to the base of the live crown, crown diameter and leaf area on the other, as classically done in other studies in the same area [START_REF] Laclau | Mixed-species plantations of Acacia mangium and Eucalyptus grandis in Brazil -1. Growth dynamics and aboveground net primary production[END_REF] - [START_REF] Christina | Importance of deep water uptake in tropical eucalypt forest[END_REF]. All these allometric relationships presented good adjustments (e.g. R² ≈ 0.72, 0.70 and 0.88, respectively, for crown diameter, crown height and leaf area) and included age as an explanatory variable, allowing their application to each tree at each inventory date. LAI was calculated as the sum of the leaf areas of all trees inside the plot divided by the plot area. The leaf angle distribution (LAD) was estimated from the leaf angles measured in the field for each genotype (as described in [START_REF] Le Maire | Leaf area index estimation with MODIS reflectance time series and model inversion during full rotations of Eucalyptus plantations[END_REF]) and adjusted with an ellipsoidal leaf angle density function. In each tree, a clinometer was used to measure the inclination of 72 leaves selected according to their position within the crown to be representative of the tree-scale distribution. The eucalypt stands were analyzed for the date of May 2014 (54 months), corresponding to the date of the satellite image acquisition, using interpolation of the field measurements between the inventories at 52 and 62 months. For the leaf area, auxiliary leaf area index values retrieved from more frequent measurements on one of the genotypes made it possible to improve the interpolation by considering a common seasonal variation.
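To illustrate how the inventory data feed the model, the snippet below sketches the tree-to-stand aggregation: a power-law allometry predicts each tree's leaf area from its DBH, height and age, and the plot LAI is the sum of leaf areas divided by the plot area. The functional form and the coefficients are purely illustrative assumptions, not the fitted relationships of this study.

```python
import numpy as np

def leaf_area_allometry(dbh_cm, height_m, age_months, a=0.05, b=0.9, c=-0.2):
    """Hypothetical power-law allometry: LA ~ a * (DBH^2 * H)^b * age^c (m^2).
    Coefficients a, b, c are illustrative placeholders, not the study's values."""
    return a * (dbh_cm**2 * height_m)**b * age_months**c

# Example inventory for a few trees of one plot (DBH in cm, height in m).
dbh = np.array([14.2, 15.1, 13.8, 16.0])
height = np.array([21.5, 22.3, 20.9, 23.1])
age = 54  # months

leaf_areas = leaf_area_allometry(dbh, height, age)   # m^2 per tree
plot_area_m2 = 20.0 * 30.0                           # internal plot of 20 m x 30 m

lai = leaf_areas.sum() / plot_area_m2
print(lai)
```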
Leaf, trunk and forest floor optical properties were measured in October 2015 (71 months after planting) with an ASD FieldSpec Pro (Analytical Spectral Devices, Boulder, Colorado, USA) spectrometer in the spectral range from 400 to 2500 nm with 1 nm intervals. On these dates, three trees per genotype were selected and, for each tree, leaves were collected randomly at three crown layers (bottom, middle and top, divided by exact height proportions) and two horizontal positions in each layer (near and far from the trunk), totaling two leaves per crown layer, six leaves per tree and 18 leaves per genotype. These leaves were kept cold and in the dark for less than one hour. Adaxial leaf reflectance and transmittance were measured in the laboratory using an integrating sphere (LI-COR 1800, LI-COR, Inc., Lincoln, Nebraska, USA). Forest floor and bark reflectance were measured using a contact probe (ASD, Inc., Boulder, Colorado) at five different points for each genotype, in the same week, without rain.
The spectral measurements occurred more than one year after the satellite image acquisition. However, these component spectra probably did not change much over this interval: for leaves, there was no significant difference between months 52 and 72 for specific leaf area, water content, and SPAD values (measured with the SPAD-Minolta device) (data not shown). For trunks and forest floor, we assumed no changes, which seems a reasonable hypothesis for these components.
C. Pleiades satellite images
Very high spatial resolution multispectral scenes including four bands (blue: 430-550 nm, green: 490-610 nm, red: 600-720 nm and near infrared: 750-950 nm) from the Pleiades satellite were used to validate the DART simulations. The image (four bands) was acquired in May 2014, at 13:36 GMT, with the following angles: view azimuth φ_v = 180.03°, view zenith θ_v = 13.40°, sun azimuth φ_s = 33.43° and sun zenith θ_s = 44.48°. The image was orthorectified and projected. Polygons of each internal plot extent (20 m × 30 m) were used to extract the radiance of the plots in each band of the Pleiades image. Transformation to TOA reflectance was performed, followed by an atmospheric correction to compute the top of canopy (TOC) reflectance of the scenes using the 6S model and a default atmospheric parameterization for this location [START_REF] Vermote | Second simulation of the satellite signal in the solar spectrum, 6S: An overview[END_REF].
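For reference, the standard conversion from at-sensor radiance to TOA reflectance, applied before the 6S correction, follows ρ_TOA = π L d² / (E_sun cos θ_s). A minimal sketch is given below; the radiance value, exo-atmospheric irradiance and Earth-Sun distance are illustrative placeholders rather than the actual Pleiades calibration coefficients.

```python
import math

def toa_reflectance(radiance, esun, sun_zenith_deg, d_au=1.0):
    """Top-of-atmosphere reflectance from at-sensor radiance.
    radiance: W m-2 sr-1 um-1; esun: band exo-atmospheric solar irradiance (W m-2 um-1);
    d_au: Earth-Sun distance in astronomical units. All values below are placeholders."""
    return (math.pi * radiance * d_au**2) / (esun * math.cos(math.radians(sun_zenith_deg)))

# Illustrative numbers only (not the actual image calibration):
rho = toa_reflectance(radiance=80.0, esun=1550.0, sun_zenith_deg=44.48)
print(rho)
```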
III. ANALYSES AND DART PARAMETERIZATION
A. DART parameterization
DART was used in the ray tracing method and reflectance mode [START_REF] Gastellu-Etchegorry | A simple anisotropic reflectance model for homogeneous multilayer canopies[END_REF], [START_REF] Gastellu-Etchegorry | DART: a 3D model for simulating satellite images and studying surface radiation budget[END_REF] to simulate TOC bidirectional reflectance images. Simulations with DART were conducted on 4 wavebands corresponding to Pleiades sensor relative spectral response.
The input solar angles (θ_s and φ_s) were computed knowing the local latitude, date and hour of the satellite overpass. The image acquisition geometry (θ_v, φ_v) was obtained from the metadata of the Pleiades images. All DART simulated scenes were created using the individual positions and dimensions of the 192 trees of each subplot, but the output stand reflectance computation was restricted to an internal plot of 20 m × 30 m (100 trees) to avoid any border effect. One scene was simulated for each of the 16 genotypes and 9 blocks at 54 months (corresponding to May 2014), using cubic computing cells with a 0.50 m edge.
Input parameters related to the tree positions (x and y coordinates in the plot), dimensions (e.g. crown diameter and height, DBH and total height), LAI and LAD for each tree were all in situ measurements (described in Section II.B). For simulating tree crowns, we used a half-ellipsoid shape, which typically fits well with the shape of eucalypt crowns. Optical properties of the leaves were prescribed as a function of the crown layer of each tree (upper, middle and lower) and of the genotype, as were the bark and forest floor reflectances. In these canopies, the branches are very thin and represent a very small absorbing surface in comparison to leaves and bark, and therefore they were not simulated.
B. Comparison between simulated and satellite images
The accuracy of the simulated reflectance TOC scenes from DART was checked against the TOC reflectance obtained from Pleiades scenes, for all 4 broadbands (blue, green, red, and NIR), 9 blocks and 16 genotypes. The overall accuracy level for simulating eucalypt plantations was expressed by the mean absolute error (MAE) of each spectral band [START_REF] Willmott | Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance[END_REF]:
MAE_λ = (1/n) Σ_{i=1}^{n} |R_Pleiades,λ(i) − R_DART,λ(i)|,    (1)
where R_Pleiades,λ is the reflectance measured by the Pleiades satellite for spectral band λ, R_DART,λ is the reflectance simulated by DART for the same spectral band, and n is the number of samples (n = 144 plots, the product of 9 blocks by 16 genotypes). The systematic error (BIAS), root mean square error (RMSE) and coefficient of determination (R²) were also computed, both at the genotype scale (averaged by blocks, so n = 16 for each band) and for each genotype for inter-block variability (so n = 9 for each band and each genotype).
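The comparison statistics of Eq. (1) and the companion metrics are straightforward to compute; a minimal sketch, assuming two arrays of plot-mean reflectances for one band, is:

```python
import numpy as np

def comparison_metrics(r_pleiades, r_dart):
    """MAE (Eq. 1), BIAS, RMSE and R^2 between measured and simulated reflectances."""
    diff = r_pleiades - r_dart
    mae = np.mean(np.abs(diff))
    bias = np.mean(diff)                  # negative if DART is higher on average
    rmse = np.sqrt(np.mean(diff**2))
    r = np.corrcoef(r_pleiades, r_dart)[0, 1]
    return mae, bias, rmse, r**2

# Example with illustrative values for one spectral band (n plots):
r_obs = np.array([0.032, 0.035, 0.030, 0.033])
r_sim = np.array([0.034, 0.036, 0.031, 0.035])
print(comparison_metrics(r_obs, r_sim))
```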
C. Sensitivity analysis of DART for eucalypt plantations
A simple sensitivity analysis was performed to better understand the effect of inter-genotype differences in structural, biophysical and biochemical parameters on the simulation output. As an example, we selected one of the genotypes (G3, which represents the main genotype planted around the experimental area) grown in one of the blocks (B2, where the plots show good growth and health). For each of the parameters listed below, we replaced, one by one, the G3 value with the value of another genotype of the same block B2. The range and variation of these values therefore reflected the real inter-genotype variability as it appears in the in situ measurements, which enabled a more realistic description and analysis of the parameters' influence on the reflectance. For instance, the LAI of G3 was replaced by that of G1, the DART reflectance in the four bands was simulated, then a new simulation was performed with the LAI of G2, etc. At the end, we computed the average and variance and produced a boxplot figure for each parameter and each reflectance band. The tested parameters were LAI, LAD, leaf, bark and forest floor optical properties (reflectance), tree dimensions (tree and crown height, crown diameter and DBH), and row azimuth. Note that for the particular case of row azimuth, we changed the orientation by using the orientations of the other blocks one by one; this factor is not linked to the genotype, but including its variability gives more precise information on its importance. This procedure allows us to better understand which parameters drive the inter-genotype variability in reflectance.
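In practice, this one-at-a-time replacement can be organised as a simple loop over parameters and donor genotypes. The sketch below only illustrates the bookkeeping; `run_dart_simulation` stands for whatever interface launches a DART run and is a hypothetical placeholder, not a DART API call.

```python
import numpy as np

PARAMETERS = ["lai", "lad", "leaf_optics", "bark_optics",
              "floor_optics", "tree_dimensions", "row_azimuth"]

def run_dart_simulation(scene_parameters):
    """Hypothetical wrapper around a DART run; returns mean TOC reflectance per band."""
    raise NotImplementedError

def one_at_a_time_sensitivity(reference_scene, donor_scenes):
    """Replace each parameter of the reference scene (G3, block B2) by the value of
    every other genotype, one at a time, and collect the simulated reflectances."""
    results = {param: [] for param in PARAMETERS}
    for param in PARAMETERS:
        for donor in donor_scenes:           # the 15 other genotypes of block B2
            scene = dict(reference_scene)    # copy the G3 parameterization
            scene[param] = donor[param]      # swap a single parameter
            results[param].append(run_dart_simulation(scene))
    # Summary statistics per parameter and band (inputs for boxplots like Fig. 7).
    return {param: (np.mean(vals, axis=0), np.var(vals, axis=0))
            for param, vals in results.items()}
```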
IV. RESULTS
A. Differences between genotypes structural and biochemical properties
The main characteristics of the genotypes (DBH, height, leaf area, LAI, crown length, crown diameter, leaf angle and mortality) based on field measurements and used for the DART parameterization are shown in Fig. 2, together with their inter-block repetitions. Overall, the tree dimensions and structural properties are similar between genotypes of the same age, with high local variability. However, when looking closer, there are some differences between genotypes. The DBH and height values were very similar between genotypes, with higher variability for the seminal materials G1 and G2, and higher growth homogeneity for the clonal materials. G16 was the most homogeneous clone. G7, G12 and G16 presented the lowest leaf area values and the lowest variability between trees. LAI for all genotypes was around 3-6 m²/m², with small spatial variability, mainly for G12 and G16. G10, G11 and G13 presented the highest and G16 the lowest LAI values. In contrast with the tree height, the crown length varied more between genotypes. As with tree DBH and height, the crown diameter exhibited little variability across genotypes, with a median around 3 m, indicating that at this age (54 months) the trees inside the plots are exploring more or less the space they individually have (3 m × 2 m). Note that there was a small measured difference between within-row and between-row crown diameters that was included in the simulations. The leaf inclination angle showed high between-tree variability, mostly driven by differences in tree size, since there were strong canopy vertical gradients of leaf inclination angles [START_REF] Le Maire | Leaf area index estimation with MODIS reflectance time series and model inversion during full rotations of Eucalyptus plantations[END_REF]. G16 had the highest leaf inclination angles, with low variability between trees. Mortality exhibited large variability across genotypes, with the highest values (reaching around 20 %) for genotypes G1, G3, G6 and G13. For the other genotypes, the mortality was lower than 10 %, which are common values for eucalypt plantations [START_REF] Zhou | Mapping local density of young Eucalyptus plantations by individual tree detection in high spatial resolution satellite images[END_REF].
Leaf, trunk and forest floor optical properties are shown in Fig. 3 for each genotype. Leaf reflectance (shown in Fig. 3 for expanded mature leaves of the middle crown layer) exhibited high absorption in the blue and red regions and high NIR reflectance for all genotypes. Note that the reflectance ranking between genotypes was conserved for all wavelengths in the visible but changed further in the NIR and MID regions. There were large differences in bark reflectance between genotypes. Interestingly, bark reflectance was very high in the visible and NIR regions compared to leaf reflectance. Some spectra clearly show an absorption feature in the red region. Forest floor reflectance showed a similar pattern for all genotypes, but with a high inter-genotype variability, with low reflectance in the visible region and increasing values along the spectrum, and a mild absorption feature in the water absorption band (1400 nm).
Fig. 4 shows the leaf reflectance in the green, red and near infrared bands for each crown level (bottom, middle and top) and each genotype. There was no significant difference between crown layers, significant differences between genotypes, and no significant difference for the genotype × crown layer interaction, for each band (N-way ANOVA under Matlab 2013a, α = 0.05). Note that the statistical analysis was done using all measured reflectance data instead of the average values shown in Fig. 4.
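The same kind of test can be reproduced outside Matlab; the following sketch uses Python's statsmodels as an equivalent of the N-way ANOVA with interaction. The dataframe columns `reflectance`, `genotype` and `layer` are assumed names for the leaf-level measurements, not variables from this study's scripts.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# df holds one row per measured leaf: its band reflectance, genotype and crown layer.
# Column names are illustrative assumptions.
def two_way_anova(df: pd.DataFrame) -> pd.DataFrame:
    model = smf.ols("reflectance ~ C(genotype) * C(layer)", data=df).fit()
    return anova_lm(model, typ=2)  # main effects and genotype x layer interaction

# Example call for one spectral band:
# table = two_way_anova(df_red_band); print(table)
```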
B. Comparison of DART simulations with Pleiades satellite image
The TOC reflectances simulated by DART and acquired by the Pleiades sensor in the four multispectral bands for each genotype are shown in Fig. 5, averaged by genotype and with standard deviations. In general, the mean TOC reflectances from the DART simulations were in good agreement with the mean TOC reflectances of the Pleiades scenes for all four bands and genotypes. Discrepancies were found mainly for the blue band (430-550 nm) for all genotypes, and some discrepancies appeared in the near infrared band (750-950 nm) for some genotypes (e.g., genotypes 5, 8 and 12). A numerical comparison between the reflectance simulated by DART and that acquired in the Pleiades scenes was performed using the MAE, RMSE and R² for all blocks and genotypes in each band (Table 1). The minimum and maximum R² values computed for each genotype over the four bands are also presented.
The MAE values were low for all bands (< 0.0195), with the lowest values for the green band. Higher values were found for the blue and NIR bands, which corroborates the results of Fig. 5. The BIAS, which represents the average difference between Pleiades and DART reflectance, was negative and indicated that the TOC reflectances simulated by the DART model were, in general, slightly higher than the TOC reflectances derived from the Pleiades images. RMSE values were also low (< 0.023), mainly for the bands in the visible domain (< 0.0023); the NIR band had the highest value. The R² performance was best for the red and NIR bands and worst for the blue band. The R² for each genotype computed across the different blocks (spatial variability) in all bands showed a wide range of values: the spatial variability of some genotypes was correctly simulated, whereas for others the relationship was not significant. An example of the level of detail of the tree parameterization in DART simulated scenes compared with Pleiades scenes is shown in Fig. 6 for the near infrared band of G14, block 4 (scenes with 0.50 m and 2 m spatial resolution). The near infrared (2.0 m spatial resolution) and panchromatic (0.50 m spatial resolution) Pleiades scenes for the same G14 and block 4 are also presented. The Pleiades panchromatic band was chosen for this example due to its higher spatial resolution. This visual comparison illustrates how the DART model represents the canopy. We can see that the DART simulations are in accordance with the image in terms of shadow proportion, gaps, row orientations, textures and object dimensions. However, the model cannot reach the level of detail required for a tree-by-tree analysis in this type of canopy structure.
C. Sensitivity Analysis
The results of the sensitivity analysis of the simulated reflectance for the blue, green, red and NIR bands according to the stand parameters (LAI, LAD, leaf, bark and forest floor optical properties (reflectance), tree dimensions and row azimuth) are presented in Fig. 7. The effect of the real range of variation of each parameter taken individually (without interaction between them) on the average canopy reflectance is presented together, to compare their magnitudes. LAI, leaf reflectance, tree dimensions and row azimuth had the highest sensitivity and explain most of the difference between genotypes in the visible bands; the variability they induced was of the same order of magnitude as the variability due to row orientation. Bark and forest floor reflectance and LAD showed the weakest sensitivity in these bands despite their inter-genotype variability being relatively high. The NIR band showed similar reflectance results among the replacement tests, but with higher inter-genotype standard deviations compared to the other bands. The LAD, bark and forest floor reflectance showed a higher influence in the NIR band compared to the visible bands.
V. DISCUSSION
A. Parameterization of DART
Overall, the differences between eucalypt trees of different genotypes and locations were not very large for many of the parameters. However, the final importance of a parameter in explaining the difference in TOC reflectance between genotypes (and/or locations) is a conjunction of the inter-genotype variability (or spatial variability) of this parameter and the sensitivity of the model to that parameter. It is therefore important, before setting some of the DART parameters to constants (and therefore not explaining the genotype or spatial variability), to model the system with the maximum precision, and to simplify afterwards if possible. The model parameterization is therefore a critical step of this work.
The leaf reflectance was shown to be different between genotypes, reflecting differences in pigment contents, and internal structure of leaves. A more detailed analysis could be done to assess which leaf structural or biochemical characteristics could explain this reflectance variability, but such analysis is out of the scope of the study: here we focused our analysis on the macro-scale differences between genotypes, and leaf reflectance was therefore an input parameter of DART.
The high inter-genotype differences in bark reflectance (Fig. 3) were expected, since bark color and roughness were extremely different in the field. The absorption feature in the red is associated with the presence of chlorophyll pigments at the bark surface for some of the genotypes, as observed in many other studies (e.g. in [START_REF] Wittmann | The optical, absorptive and chlorophyll fluorescence properties of young stems of five woody species[END_REF], [START_REF] Girma | Photosynthetic bark: use of chlorophyll absorption continuum index to estimate Boswellia papyrifera bark chlorophyll content[END_REF]). There was also a high inter-genotype variability in forest floor reflectance (Fig. 3), mainly in the NIR and MID regions. This behavior is due to the different composition of the forest floor materials (e.g. green or yellowing leaves just fallen and dead dry leaves, the proportion of bark and branches, leaf sizes), their structural variability, moisture content and decomposition stage [START_REF] Asner | Variability in leaf and litter optical properties: implications for BRDF model inversions using AVHRR, MODIS, and MISR[END_REF], which directly influence the reflectance.
The ANOVA analysis of the leaf reflectance for the bottom, middle and top crown layers in the green, red and NIR bands (Fig. 4) showed that there was no statistically significant difference between the bottom, middle and top crown layers, but that there were differences between genotypes considering all crown layers. Therefore, the use of different spectra for the upper, middle and lower parts of the canopy could be unnecessary for simulating reflectance in these wavebands. However, since some genotypes showed different spectra for the upper layer, which could be locally important for the TOC simulation, we preferred to keep this detailed description in the simulations. Also, the leaves inside each crown layer have different combinations of development stages (juvenile and mature). Generally there is a gradient of these development stages inside the crown, with more juvenile leaves in the top layer and more mature leaves at the bottom. Mature leaves have more pigments and a higher mass per area than juvenile leaves [START_REF] Stone | Spectral reflectance characteristics of eucalypt foliage damaged by insects[END_REF], and a different internal structure, which directly influence the reflectance in the visible and NIR wavelengths. However, our results did not clearly show any vertical trend of reflectance between crown layers. The explanation is that the proportion of juvenile leaves in the top layer is variable between genotypes, and between trees of different heights.
B. Suitability of DART for TOC simulations
Assessing whether an RTM is suitable to simulate a given ecosystem depends on the objective of the study. In this study, we can distinguish the results as a function of the level of variability of the observed canopy, i.e., evaluating the degree of precision of DART for simulating i) a "typical" Eucalyptus plantation reflectance, ii) the inter-genotype reflectance variability and iii) the inter-block reflectance variability for the same genotype.
Our results showed that the DART model was suitable to simulate Eucalyptus plantations in general, with their very high tree density, tall trunks, bright forest floor, and the ellipsoidal form of their crowns (Fig. 5): this is especially underlined by the low MAE obtained for this ecosystem (lower than 2 %). The inter-genotype variability of reflectance comes from the variability of many structural and biochemical parameters of the ecosystem, as represented in Fig. 2 and Fig. 3 (e.g. optical properties of the different components, leaf angles, tree dimensions, etc.). This inter-genotype variability was adequately simulated, as can be seen in Fig. 5 and Table 1, with coefficients of determination > 0.41 for all spectral bands and of 0.55 in the NIR band. The largest discrepancies occurred in the blue band; such a bias could come from residual atmospheric effects not properly taken into account in the atmospheric correction of the Pleiades images, which was based on the standard atmospheric parameterization of 6S in the absence of local measurements of atmospheric water vapor, ozone and aerosol contents.
Finally, the spatial variability between blocks for a given genotype was not adequately simulated for most genotypes. The average coefficients of determination were very low in all bands when considering each genotype over all blocks. The spatial variability for a given genotype is more difficult to reproduce by simulation, mainly because of the low variability existing between these blocks. Therefore, the precision of the simulation is not sufficient to capture this spatial variability. However, some genotypes had higher mortality rates (e.g. G1, G3, G6 and G13), which created large gaps in the canopy and increased the variability to a range that could be simulated (high R2 scores). As a consequence, the use of the DART model in inversion mode for these ecosystems would gain precision if the genotype is already known, and in areas where the proportion of gaps remains low. Moreover, the row orientation could also act as a confounding factor and should be prescribed prior to inversion, i.e. a pre-analysis of row orientations needs to be carried out.
In terms of bi-directional TOC reflectance, the comparison between simulated and real satellite scenes from forest stands is a difficult task, since the reflectance image is dominated by the macroscopic properties of the illuminated and shadowed crowns as well as ground surface [START_REF] Houborg | Combining vegetation index and model inversion methods for the extraction of key vegetation biophysical parameters using Terra and Aqua MODIS reflectance data[END_REF], as illustrated in Fig. 6, at very high resolution.
Our results confirm the ability of DART to simulate remote sensing data under several eucalypt forest conditions. Comparisons between DART simulations and forest ecosystem reflectance were also made in [START_REF] Couturier | A modelbased performance test for forest classifiers on remote-sensing imagery[END_REF], [START_REF] Schneider | Simulating imaging spectrometer data: 3D forest modeling based on LiDAR and in situ data[END_REF], and the main conclusions were that DART showed very low pixel spectral dissimilarity compared with IKONOS images and an R2 of 0.48 for a pixel-wise comparison with APEX imaging spectrometer data, respectively. DART has been successfully compared with other 3D models throughout the RAdiative transfer Models Intercomparison (RAMI) exercise [START_REF] Pinty | RAdiation transfer Model Intercomparison (RAMI) exercise: results from the second phase[END_REF], [START_REF] Widlowski | The fourth phase of the radiative transfer model intercomparison (RAMI) exercise: actual canopy scenarios and conformity testing[END_REF] under several conditions. Our results extend DART model validation to a real measured dataset of individualized trees and stands of Eucalyptus plantations, which have particular characteristics (e.g. a high tree density but rather low LAI, a lot of trunk surface but few branches).
C. Source of the inter-genotype reflectance variability
After having tested the model's suitability to simulate the inter-genotype TOC reflectance variability, we sought to address which of the stand structural or biochemical parameters (LAI, LAD, leaf, bark and forest floor optical properties, tree dimensions and row orientation) most influences the reflectance differences between genotypes (Fig. 7). These parameters were chosen since they are the main input parameters of DART.
The LAI was one of the most influential parameters for explaining the difference of reflectance between genotypes. Numerous studies have shown that vegetation reflectance is strongly affected by LAI over the entire spectrum, but more so in the NIR [START_REF] Shi | Consistent estimation of multiple parameters from MODIS top ofatmosphere reflectance data using a coupled soil-canopy-atmosphereradiative transfer model[END_REF] - [START_REF] Le Maire | Calibration and validation of hyperspectral indices for the estimation of biochemical and biophysical parameters of broadleaves forest canopies[END_REF]. The leaf reflectance, which reflects in the visible the different leaf pigment contents, was another very important factor driving the canopy reflectance, mainly in the visible region. These results agree with [START_REF] Xiao | Sensitivity analysis of vegetation reflectance to biochemical and biophysical variables at leaf,canopy, and regional scales[END_REF], who performed a sensitivity analysis of vegetation reflectance and found a stronger influence of leaf pigment content in the visible and of LAI in the NIR region at the canopy scale. They also showed a weak effect of leaf angle at this scale.
The crown dimensions also explained the difference of TOC reflectance between genotypes (Fig. 7), as shown in other studies [START_REF] Rautiainen | The effect of crown shape on the reflectance of coniferous stands[END_REF]. This variable, jointly with the row azimuth, mainly drives the proportion of visible soil between rows and the proportion of shaded/illuminated crowns in the image. The presence of empty spaces (dead trees) in some of the plots further increased this heterogeneity, which also increased the contribution of this parameter to the inter-genotype and spatial variability of TOC reflectance. Some of the parameters tested here showed only a moderate effect on simulated TOC reflectance, which is the case for bark and forest floor reflectance. Therefore, average values could have been chosen for these parameters, which would simplify further inversions. In contrast, TOC reflectance showed high sensitivity to LAI, leaf reflectance, tree dimensions and row azimuth. It therefore seems important to perform genotype-specific inversion in the future, or to group genotypes by their crown dimensions. Also, knowledge of the row orientation will be critical for inversion purposes. A further step will be to simulate a comprehensive database along eucalypt growth stages for different genotypes, and to use this database to estimate variables such as the LAI or chlorophyll content through inversion procedures. Our first sensitivity analysis can further help distinguish the inversion errors coming from the model itself from those coming from the inversion methodology (algorithm, constraints, etc.).
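A minimal sketch of such a one-at-a-time sensitivity analysis is given below. The `run_dart_simulation` function is only a placeholder for an external call to the DART model (no real DART API is implied), and the parameter ranges are assumed to come from the field measurements summarized in Fig. 2 and Fig. 3.

```python
# Hedged sketch of a one-at-a-time sensitivity analysis: each stand parameter is varied over
# its observed inter-genotype range while the others stay at reference (genotype 3) values,
# and the spread of the simulated TOC reflectance is recorded per parameter.
import numpy as np

def run_dart_simulation(params: dict, band: str) -> float:
    # Placeholder standing in for an external DART run; not a real API.
    raise NotImplementedError("replace with an actual call to the DART model")

def oat_sensitivity(reference: dict, ranges: dict, band: str, n: int = 10) -> dict:
    spread = {}
    for name, (lo, hi) in ranges.items():
        refl = []
        for value in np.linspace(lo, hi, n):
            params = dict(reference)
            params[name] = value          # vary one parameter at a time
            refl.append(run_dart_simulation(params, band))
        # standard deviation of the simulated reflectance, as reported above each boxplot in Fig. 7
        spread[name] = np.std(refl)
    return spread
```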
These sensitivity analysis results confirm the relevance of using 3D models such as DART, as they are particularly suitable for making explicit the influence of tree shape, leaf pigments and plot heterogeneity on the canopy reflectance of different genotypes and row orientations.
VI. CONCLUSION
In this study we tested the DART model to simulate Eucalyptus plantation reflectance, its differences between genotypes and between plots for a given genotype. DART was reliable for eucalypt plantation simulation in general, and adequately simulated the difference of reflectance between 16 genotypes, including the most widely planted ones in the region and some particular genotypes (e.g. G16: E. camaldulensis x grandis). However, the local difference of reflectance was correctly simulated only when the range of TOC reflectance was high for a given genotype, which occurred mainly through local mortality.
The difference of TOC reflectance between genotypes in the visible bands is mainly explained by differences in LAI, leaf optical properties and row orientation. In the NIR, the same parameters influence the TOC reflectance, together with the tree dimensions. Leaf angles, bark and forest floor reflectance have a smaller effect in comparison to the other parameters, although their inter-genotype variability was large.
The successful test of DART in forward mode for simulating the TOC reflectance of these different genotypes opens possibilities for parameter estimation through model inversion procedures for eucalypt plantations.
FIGURES
Fig. 1.
Fig. 2. Main stand structural characteristics (diameter at breast height - DBH, tree height, tree leaf area, leaf area index - LAI, crown length, crown diameter, leaf inclination angle and mortality) of the 16 genotypes on May, 2014. Mortality represents the percent of dead trees in each block per genotype. Lines inside boxes are the median values, inferior and superior box limits are the first and third quartiles, respectively; and error bars outside boxes extend from minimum and maximum values within 3 standard deviations. Variability considered here is the tree-scale variability considering all blocks. Mortality and LAI variability is inter-block variability.
Fig. 4. Leaves reflectance in the green, red and near infrared regions at bottom, middle and top crown layer for the 16 genotypes (labeled as G1 to G16).
Fig. 5. DART (light gray) and Pleiades (dark gray) mean top of canopy (TOC) reflectance of four bands (B=blue, G=green, R=red, NIR=near infrared) for each genotype averaged for all blocks and subplots. Lines in each bar represent the standard deviation for blocks.
Fig. 6. Example of near infrared DART simulated scene with 0.50 m (a) and 2 m (b) of spatial resolution, panchromatic Pleiades image (c) with 0.50 m and near infrared Pleiades image with 2 m of spatial resolution for the genotype 14 in the block 4.
Fig. 7. Sensitivity analysis of the reflectance in blue, green, red and near infrared bands relative to stand parameters (respectively, LAI, LAD, leaf, bark and forest floor reflectance, trees dimensions and row azimuth). Boxplot definition is given in Fig. 2. Dashed green line represents the TOC reflectance of the genotype 3 (reference). Numbers above each boxplot are the standard deviation. Red crosses are the outlier values.
TABLE 1. MEAN ABSOLUTE ERROR (MAE), SYSTEMATIC ERROR (BIAS), ROOT MEAN SQUARE ERROR (RMSE) AND DETERMINATION COEFFICIENT (R2) FOR SIMULATED BANDS (BLUE, GREEN, RED AND NIR) IN RELATION TO PLEIADES BANDS, AVERAGED BY GENOTYPE AND BLOCK. R2 OF GENOTYPES (MIN. - MEAN - MAX.) IS THE MINIMUM, MEAN AND MAXIMUM R2 VALUE IN EACH BAND FOR THE GENOTYPES, COMPUTED ON THE INTER-BLOCK VARIABILITY.

Band    MAE     BIAS      RMSE     R2     R2 of genotypes (min. - mean - max.)
Blue    0.0180  -0.0180   0.00106  0.41   0.0003 - 0.11 - 0.79
Green   0.0063  -0.0063   0.00223  0.43   0.0003 - 0.12 - 0.88
Red     0.0170  -0.0170   0.00104  0.51   0.0003 - 0.12 - 0.75
NIR     0.0194  -0.0044   0.02200  0.55   0.0023 - 0.28 - 0.91
In: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 11, 2017, pp. 4844-4852.
ACKNOWLEDGMENT This study was funded by Forestry Science and Research Institute -IPEF (EUCFLUX project -funded by Arcelor Mittal, Cenibra, Copener, Duratex, Fibria, International Paper, Klabin, Suzano and Vallourec), the HyperTropik project (TOSCA program grant of the French Space Agency, CNES), CNPq, CAPES and the Centre de Coopération Internationale en Recherche Agronomique pour le Développement -CIRAD. Pleiades image was acquired in the frame of GEOSUD program, a project (ANR-10-EQPX-20) of the program "Investissementsd'avenir" managed by the French National Research Agency. We are grateful to Eloi Grau (IRSTEA) for support in running the program, José Guilherme Fronza for field work, and the staff at the Itatinga Experimental Station, in particular Rildo Moreira e Moreira (ESALQ, USP) and Eder Araujo da Silva (http://www.floragroapoio.com.br) for their technical support. | 45,305 | [
"736436",
"862030",
"751072"
] | [
"214510",
"182888",
"106631",
"11574",
"615",
"119004",
"214510",
"11574"
] |
01669806 | en | ["phys"] | 2024/03/05 22:32:18 | 2017 | https://hal.science/hal-01669806/file/Forward%20scattering%20effects%20on%20muon%20imaging_accepted.pdf
H Gómez
email: [email protected]
D Gibert
C Goy
K Jourde
Y Karyotakis
S Katsanevas
J Marteau
M Rosas-Carbajal
A Tonazzo
Forward scattering effects on muon imaging
Keywords:
Abstract: Muon imaging is one of the most promising non-invasive techniques for density structure scanning, specially for large objects reaching the kilometre scale. It has already interesting applications in different fields like geophysics or nuclear safety and has been proposed for some others like engineering or archaeology. One of the approaches of this technique is based on the well-known radiography principle, by reconstructing the incident direction of the detected muons after crossing the studied objects. In this case, muons detected after a previous forward scattering on the object surface represent an irreducible background noise, leading to a bias on the measurement and consequently on the reconstruction of the object mean density. Therefore, a prior characterization of this effect represents valuable information to conveniently correct the obtained results. Although the muon scattering process has been already theoretically described, a general study of this process has been carried out based on Monte Carlo simulations, resulting in a versatile tool to evaluate this effect for different object geometries and compositions. As an example, these simulations have been used to evaluate the impact of forward scattered muons on two different applications of muon imaging: archaeology and volcanology, revealing a significant impact on the latter case. The general way in which all the tools used have been developed can allow to make equivalent studies in the future for other muon imaging applications following the same procedure.
Introduction
The idea to use muons produced in the Earth's atmosphere by cosmic rays as a scanning method of anthropic or geological structures, the so-called muon imaging, was proposed soon after the discovery of these muons [START_REF] Neddermeyer | Note on the nature of cosmic-ray particles[END_REF][START_REF] Neddermeyer | Cosmic-ray particles of inter-mediate mass[END_REF][START_REF] Auger | Les rayons cosmiques[END_REF]. Muon imaging leverages the capability of cosmic muons to pass through hundreds of metres or even kilometres of ordinary matter with an attenuation mainly related to the length and density of this matter encountered by the muons along their trajectory before their detection [START_REF] Nagamine | Introductory Muon Science[END_REF]. As this attenuation is principally caused by muons absorption and scattering, muon imaging can be mainly performed using two different techniques. The first one is the so-called transmission and absorption muography [START_REF] Lesparre | Geophysical muon imaging: feasibility and limits[END_REF][START_REF] Marteau | Muons tomography applied to geosciences[END_REF]. This technique relies on the well-known radiography concept (widely used, for example, in medicine with X-rays), based on the muon energy loss and its consequently probability to cross a given amount of material. The second is known as deviation muography, which relies on the measurement of the muon track deviation to determine the object density [START_REF] Borozdin | Radiographic imaging with cosmic-ray muons[END_REF][START_REF] Procureur | Muon imaging: Principles, technologies and applications[END_REF]. For the first technique, studying all the directions for which muons go through the studied object, and knowing its external shape, it is possible to obtain a 2D mean density image. Thus, muon imaging provides a non-invasive and remote scanning technique utilisable even for large objects, where the detection set-up may be relatively far away from the -potentially dangeroustarget (e.g. domes of active volcanoes or damaged nuclear reactors).
One of the first studies performed based on muon imaging dates from 1955, being the scanning of the rock overburden over a tunnel in Australia [START_REF] George | Cosmic rays measure overburden of tunnel[END_REF]. Later, other applications from mining [START_REF] Malmqvist | Theoretical studies of in-situ rock density determination using cosmic-ray muon intensity measurements with application in mining geophysics[END_REF] to archaeology were proposed, both in the 70s. For the latter case, some measurements have been already performed, as the exploration of the Egyptian Chephren [START_REF] Alvarez | Search for hidden chambers in the pyramids using cosmic rays[END_REF] and Khufu [START_REF] Morishima | Discovery of a big void in Khufu's Pyramid by observation of cosmic-ray muons[END_REF] pyramids. Nowadays, thanks to the improvements on the detector performance, and also to their autonomy and portability, muon imaging reveals itself to be a scanning technique competitive and complementary to others non-invasive methods as seismic and electrical resistivity tomography or gravimetry. This has led to its proposal and utilisation in a wide range of fields.
In addition to the above-mentioned applications (archaeology and mining), two others stand out. The first is related to geophysics, more precisely to the monitoring of volcanoes. This has an important benefit from both a scientific and a social point of view: the continuous monitoring of volcanoes helps to understand their internal dynamics, a key feature in risk assessment. The other application, more related to particle physics, was motivated by the necessity to characterize the overburden of underground laboratories hosting various experimental detectors. It is worth mentioning other applications related to civil engineering and nuclear safety. For the first, it would be possible, for example, to scan structures looking for defects. For nuclear safety, set-ups looking for the transport of radioactive materials and wastes already work in cooperation with homeland security agencies. Moreover, the study of nuclear reactors looking for structural damage has already been carried out, as in the case of the recent Fukushima nuclear power plant accident [START_REF] Kume | Muon trackers for imaging a nuclear reactor[END_REF], and it is being considered as a remote scanning method.
As mentioned, the improvement on the detectors used for muon imaging has been one of the main reasons for the renewal of this technique. Better detectors provide a better angular resolution for the muon direction reconstruction and improve the precision of the density radiography. Nonetheless, the background muon flux rejection remains a key procedure for the structural imaging with muons. An important potential noise source, specially in the measurements based on the transmission and absorption muography, is the forward scattering of low energy muons on the object surface, reaching afterwards the detector. This effect mimics through-going particles since the reconstructed direction of the scattered particle points towards the target. The result is an increase of the total number of detected particles, as if the target's opacity was lower than its actual value, leading to a systematic underestimation of the density [START_REF] Nishiyama | Monte Carlo simulation for background study of geophysical inspection with cosmic-ray muons[END_REF][START_REF] Rosas-Carbajal | Three-dimensional density structure of La Soufrière de Guadeloupe lava dome from simultaneous muon radiographies and gravity data[END_REF]. Being produced by muons, these events can not be rejected by particle identification techniques, representing an irreducible background. For this reason, an evaluation of the magnitude of this effect is mandatory to conveniently correct the reconstructed object density.
In this work, a general evaluation of forward scattering of muons has been performed by Monte Carlo simulations. The aim was to develop a versatile tool to be able to evaluate this process for different object geometries and compositions, due to the increasing number of proposed applications based on muon tomography. The main features and results are presented in section 2. Then the impact of this process on the muon imaging capability has been evaluated defining a signal to background parameter. Two physics cases have been studied in section 3. The first one concerns an archaeological target, the Apollonia tumulus near Thessaloniki in Greece, and the second one La Soufrière volcano in Guadeloupe Islands of the Lesser Antilles. Finally, a summary of the different results and the main conclusions extracted from them are compiled in section 4.
Evaluation of the forward scattering of muons
As mentioned in the introduction, low-energy cosmic muons can change their original direction after interacting with the target or any other object in the surroundings before their detection. Since muon imaging is based on the reconstruction of the detected muon directions, these muons bias the measurement. As a consequence, the determination of the target's internal structure and the corresponding reconstructed mean density is affected.
The muon trajectory deviation is mainly driven by the interaction with matter via multiple Coulomb scattering. The resulting deflection angular distribution, theoretically described by Molière theory [START_REF] Bethe | Molière's Theory of Multiple Scattering[END_REF], roughly follows a Gaussian,
$$\frac{dN}{d\alpha} = \frac{1}{\sqrt{2\pi}\,\alpha_{MS}}\; e^{-\frac{\alpha^{2}}{2\alpha_{MS}^{2}}} \qquad (2.1)$$
which is centred at zero (i.e. no deflection) and has a standard deviation $\alpha_{MS}$:
$$\alpha_{MS} = \frac{13.6\ \mathrm{MeV}}{\beta c p}\; Q\; \sqrt{\frac{x}{X_0}}\left[1 + 0.038\,\ln\!\left(\frac{x}{X_0}\right)\right] \qquad (2.2)$$
where β is the relativistic factor, p the muon momentum in MeV/c, x the material thickness and Q the absolute electric charge of the muon. α M S also depends on the radiation length (X 0 ) which is empirically given by
$$X_0 \approx \frac{716.4\ \mathrm{g/cm^2}}{\rho}\;\frac{A}{Z(Z+1)\,\ln\!\left(287/\sqrt{Z}\right)} \qquad (2.3)$$
with Z and A the atomic and mass numbers respectively and ρ the material density. This reveals the relationship between multiple Coulomb scattering and the properties of the studied material. Different works (see for example [START_REF] Schenider | Coulomb scattering and spatial resolution in proton radiography[END_REF]) provide analytical solutions for the angular distribution of deflected muons after traversing an object with a given geometry and composition. Besides, other relevant features, such as the higher scattering probability for lower energy muons, are also demonstrated in these studies. However, the increasing number of different applications proposed for muon tomography implies a large variety of object dimensions, shapes and compositions, making it less evident to obtain an analytical estimation of the forward scattering process suitable for all these cases. In this context, Monte Carlo simulations represent a useful tool for the study of the muon scattering process, versatile enough to be adapted to the main features of each particular case. As a first step in the development of these simulations, a general evaluation of the muon forward scattering has been carried out using the Geant4 simulation tool-kit [START_REF] Agostinelli | GEANT4: A Simulation toolkit[END_REF]. It allows the simulation of the 3D muon transport through the defined geometry taking into account the energy loss and trajectory variations due to multiple Coulomb scattering as well as to ionization, bremsstrahlung, pair production and multiple inelastic scattering. Considering these possible processes, the results can be compared with the estimations given by the analytical formulas mentioned above. A scheme of the simulated set-up is shown in figure 1. For this case, generated muons are thrown to a fixed point on a standard rock surface (with a density of 2.5 g/cm³). For scattered muons, the direction changes, in zenith and/or azimuth angles, can then be evaluated.
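Purely as an illustration of Equations (2.2)-(2.3), the short script below evaluates the radiation length of standard rock and the corresponding Gaussian scattering width for a given muon momentum and traversed thickness. The material constants used here (Z = 11, A = 22, ρ = 2.5 g/cm³) are indicative; the simulations described in this work rely on the full Geant4 transport rather than on this closed-form estimate.

```python
# Illustrative evaluation of Eqs. (2.2)-(2.3) for standard rock.
import numpy as np

def radiation_length_cm(Z: float, A: float, rho: float) -> float:
    """X0 in cm, Eq. (2.3): 716.4 g/cm2 * A / (rho * Z * (Z+1) * ln(287/sqrt(Z)))."""
    return 716.4 * A / (rho * Z * (Z + 1.0) * np.log(287.0 / np.sqrt(Z)))

def scattering_width_rad(p_mev: float, beta: float, x_cm: float, X0_cm: float, Q: float = 1.0) -> float:
    """alpha_MS in radians, Eq. (2.2) (Highland formula)."""
    t = x_cm / X0_cm
    return 13.6 / (beta * p_mev) * Q * np.sqrt(t) * (1.0 + 0.038 * np.log(t))

X0 = radiation_length_cm(Z=11.0, A=22.0, rho=2.5)          # ~11 cm for standard rock
alpha = scattering_width_rad(p_mev=1000.0, beta=1.0,       # 1 GeV/c muon
                             x_cm=50.0, X0_cm=X0)          # after 50 cm of rock, ~2 degrees
```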
A first set of simulations was performed in order to evaluate the general features of the muon forward scattering. In the previously described set-up, muons up to 10 GeV, with a zenith incident angle (θ ini det) between 70° and 90° and an azimuth incident angle ϕ ini det = 0°, were generated. It is worth mentioning that, by the set-up definition, θ ini det = 0° implies muons perpendicular to the rock surface, while θ ini det = 90° corresponds to tangential ones. Figure 2 summarizes the results of this general simulation, leading to some conclusions about the muon forward scattering studied in these simulations. First, it is observed that this process is negligible if the muon energy is higher than 5 GeV, independently of the incident direction. For the lower energy muons, most of the "efficient" scattering processes (i.e. when the scattered particle exits the medium) occur if θ ini det is higher than 85° and do not exist if θ ini det is lower than 80°. That means that only low energy muons with incident directions close to the surface tangent are likely to be scattered on the object surface and to induce a signal in the detector. For these muons the angular deviation can reach up to 25°, both for the zenith and azimuth angles. By the simulation set-up definition, only the azimuth scattering angle (∆ϕ det) has been registered over the whole angular range. As presented in figure 2, the ∆ϕ det distribution for all the muon energies considered is in agreement with the Gaussian predicted by Molière theory, and the other extracted conclusions also agree with the analytical predictions [START_REF] Patrignani | Review on Particle Physics[END_REF].
Taking into account this general information and having checked the agreement between the general simulations and the analytical predictions, a more detailed simulation, optimizing the initial muon sampling was performed. The objective was to establish a probability density function (PDF) to further estimate the background due to forward scattered muons that could be detected during a muon imaging measurement and should be considered in the image analysis. With this aim 10 8 muons homogeneously distributed up to 5 GeV and with θ ini det between 85 • and 90 • (all with ϕ ini det = 0 • ) were generated and simulated in the described Geant4 framework. The generated PDF provides a probability value P(θ ini det , θ f in det , E ini µ ) depending on the initial and final zenith angle and the initial muon energy. A summary plot of the generated PDF divided in 0.5 GeV energy windows is shown in figure 3.
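One plausible way to tabulate such a PDF from the simulation output is sketched below: the scattered-and-exiting muons are histogrammed in (initial zenith angle, final zenith angle, initial energy) and normalized by the number of muons generated per cell. The binning choices and array names are illustrative and are not taken from the original analysis.

```python
# Sketch of tabulating P(theta_ini, theta_fin, E_ini) from simulated scattering events.
# theta_ini_deg, theta_fin_deg, e_ini_gev: arrays for the muons that exited the rock,
# assumed to come from the Geant4 output; n_thrown_per_cell: number of muons generated
# per (initial angle, initial energy) cell of the flat generation grid.
import numpy as np

def build_scattering_pdf(theta_ini_deg, theta_fin_deg, e_ini_gev, n_thrown_per_cell):
    edges = (np.arange(85.0, 90.5, 0.5),   # initial zenith angle bins [deg]
             np.arange(0.0, 92.0, 2.0),    # final zenith angle bins [deg]
             np.arange(0.0, 5.5, 0.5))     # initial energy bins [GeV], 0.5 GeV windows
    counts, _ = np.histogramdd(
        np.column_stack([theta_ini_deg, theta_fin_deg, e_ini_gev]), bins=edges)
    # Probability of exiting with a given final angle, per generated muon in each cell
    return counts / n_thrown_per_cell, edges
```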
At this point it is worth mentioning that, for the studies presented in this work (summarized in section 3), the considered composition of the studied objects is the standard rock used to generate the PDF, but also a definition of soil with a composition and density different from the rock (ρ = 2.2 g/cm³). Moreover, there exist several types of rocks and soils with different compositions and densities, typically between 2.0 and 2.5 g/cm³. For this reason the influence of these two parameters on the PDF generation has been evaluated: a set of dedicated simulations has been performed changing the composition and the density of the target to compare their results. The obtained PDFs, including the standard soil case, agree to better than 97 %. Thus, the PDF presented in figure 3 has been used for all the studies.
Signal to background ratio estimations
The impact of the forward scattered muons in an imaging measurement of a particular object can be evaluated based on simulations such as those presented in section 2. This impact can be expressed as a signal to background ratio (S/B) for a given direction θ z - ϕ z . These spherical coordinates are centred at the detector, where θ z = 0° is the vertical direction and ϕ z = 0° points to the main axis of the studied object. The signal S(θ z , ϕ z ) is estimated as the flux of muons that have not been scattered, so that their reconstructed direction corresponds to their initial one. The background B(θ z , ϕ z ) represents the scattered muons whose reconstructed direction points towards the target.
As mentioned, these evaluations allow the study of a particular object, with its corresponding composition. For this it is necessary to know its external shape, to assume the object mean density (since this is the observable that can be extracted from a muon tomography measurement), and to determine the muon detector position with respect to this object. This allows the estimation of the object length traversed by muons for each direction, as well as of the positions of its surfaces with respect to the detector and to the Earth's surface (required for the determination of θ z and ϕ z ).
For this work two cases have been considered, corresponding to two applications of the muon imaging: the archaeology and the volcanology. For the first one a Macedonian tumulus located near Apollonia (Greece) has been studied [START_REF] Gómez | Studies on muon tomography for archaeological internal structures scanning[END_REF]. For the second, La Soufrière volcano (Guadeloupe island in the Lesser Antilles), already explored by muon imaging, has been taken as reference [START_REF] Jourde | Experimental detection of upward going cosmic particles and consequences for correction of density radiography of volcanoes[END_REF][START_REF] Jourde | Muon dynamic radiography of density changes induced by hydrothermal activity at the La Soufrière of Guadeloupe volcano[END_REF].
Archaeology: Apollonia tumulus
As quoted in section 1, the exploration of archaeological structures is one of the applications for which muon imaging has been proposed since it is non invasive and does not induce any harmful signals (contrary, for example, to vibrations used in seismic tomography). Already suggested in the 60s [START_REF] Alvarez | Search for hidden chambers in the pyramids using cosmic rays[END_REF], there exist at present different projects based on muon imaging devoted to the study of the internal structure of archaeological constructions (see for example [START_REF] Morishima | Discovery of a big void in Khufu's Pyramid by observation of cosmic-ray muons[END_REF][START_REF]ScanPyramids project[END_REF]). The ARCHé project proposes to scan the Apollonia Macedonian tumulus [START_REF] Gómez | Muon imaging for archaeological applications: feasibility studies of the Apollonia Macedonian Tumulus[END_REF]. These tumuli are man-made burial structures where the tomb, placed on the ground, is covered by soil, creating a mound which can also contain internal corridors. The geometry and dimensions of these tumuli are variable but they can always be approximated to a truncated cone. In the case of Apollonia tumulus its height is 17 m while the radius of the base and the top are 46 m and 16 m respectively. With this geometry the angle of the slope of the lateral surface of the tumulus is 29.5 • .
In the present study a standard soil composition, with a density of 2.2 g/cm 3 , has been assumed. The detector has been placed 4 m beside the tumulus base (50 m from the tumulus base centre), so muons with zenith angles θ z > 63.4 • are those which will provide information about the structure of the tumulus. With all these features the signal and background, S(θ z , ϕ z ) and B(θ z , ϕ z ) respectively, can be estimated for a given direction θ z -ϕ z . As already described, these coordinates are centred at the detector and θ z = 0 • correspond to vertical muons, while ϕ z = 0 • points towards the centre of the tumulus base.
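The quoted geometric values can be recovered with a simple calculation, sketched below under the assumption that the limiting direction for the ϕ z = 0° azimuth is the line from the detector to the near edge of the tumulus top.

```python
# Quick geometric cross-check of the Apollonia tumulus numbers quoted above
# (height 17 m, base radius 46 m, top radius 16 m, detector 50 m from the base centre).
import numpy as np

h, r_base, r_top = 17.0, 46.0, 16.0
d_detector = 50.0                                            # distance to base centre [m]

slope = np.degrees(np.arctan2(h, r_base - r_top))            # ~29.5 deg lateral slope
theta_min = np.degrees(np.arctan2(d_detector - r_top, h))    # ~63.4 deg: muons with larger
                                                             # zenith angles cross the tumulus
```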
From the knowledge of the external shape, it is possible to determine the length of tumulus traversed by muons for a given direction, L(θ z , ϕ z ), and thus the corresponding opacity as the product of this length and the density (ϱ = L × ρ). The required minimal muon energy (E min ) to cross the target of opacity ϱ can be calculated as
$$E_{min} = \frac{a}{b}\left(e^{\,b\varrho} - 1\right) \qquad (3.1)$$
where a(E) and b(E) represent the energy loss coefficients due to ionization and radiative losses respectively. In this case, the coefficients corresponding to standard rock summarized in [START_REF] Patrignani | Review on Particle Physics[END_REF] have been used, obtaining E min values as a function of ϱ. As a cross-check, these E min values have also been estimated from the CSDA range values of standard rock [START_REF] Groom | Muon stopping-power and range tables[END_REF]. The agreement between both methods is better than 95 %. Hence, the expected signal S(θ z , ϕ z ) corresponds to the muon flux in the studied direction with energies higher than E min :
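As a hedged illustration of Equation (3.1), the snippet below evaluates E min for a given opacity using constant energy-loss coefficients of the order of those of standard rock; the actual study uses the energy-dependent tabulated values, so the numbers obtained here are indicative only.

```python
# Illustrative evaluation of Eq. (3.1) with order-of-magnitude constants for standard rock.
import numpy as np

A_ION = 2.2e-3    # GeV cm2/g, ionization term (assumed constant here)
B_RAD = 4.0e-6    # cm2/g, radiative term (assumed constant here)

def e_min_gev(opacity_g_cm2: float) -> float:
    """Minimum muon energy needed to cross an opacity rho*L, Eq. (3.1)."""
    return A_ION / B_RAD * (np.exp(B_RAD * opacity_g_cm2) - 1.0)

# e.g. ~100 m of rock at 2.2 g/cm3 -> opacity 22,000 g/cm2 -> E_min of a few tens of GeV
print(e_min_gev(22_000.0))
```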
$$S(\theta_z, \phi_z) = \int_{E_{min}}^{\infty} \phi_\mu(E, \theta_z, \phi_z)\, dE \qquad (3.2)$$
To compute the background due to muons forward scattered into the same direction, B(θ z , ϕ z ), two assumptions have been made. First, a point-like detector is considered. This implies that for each scattering point on the tumulus surface there is a unique final direction reaching the detector. Second, scattering effects in the azimuth angle are neglected. Since the general muon scattering studies (section 2) show that these effects are symmetric and mostly below 5° for the azimuth angle (see figure 2), a low influence on the overall estimation is expected, with fade-out effects among the different azimuth directions.
With these two assumptions, B(θ z , ϕ z ) corresponds to the product between the initial flux of muons which can be scattered by the corresponding probability to be scattered with a final zenith angle θ z . As already shown, only muons up to 5 GeV with an incident zenith angle higher than 85 • with respect to the surface normal need to be considered for the forward scattering studies. This delimits the energy and zenith angle ranges to estimate the initial muon flux. The scattering PDF,
P(θ ini det , θ f in det , E ini µ )
, corresponds to the one presented in figure 3. This PDF was generated using the coordinates θ det - ϕ det , centred at the scattering point and orthogonal to the surface. In order to be able to use this PDF with the θ z - ϕ z coordinates, it is necessary to define the relationship between θ det and θ z , which is given by θ det = α + θ z . α represents the elevation angle of the scattering surface (that is, with respect to the Earth's surface). The θ det , θ z and α angles are presented in figure 4. For the case ϕ z = 0°, α corresponds to the slope of the lateral surface. For the cases where ϕ z ≠ 0°, it is estimated from the tangent to the tumulus surface at the scattering point. Thus, the expected background B(θ z , ϕ z ) is calculated as:
$$B(\theta_z, \phi_z) = \int_{E=0}^{5\,\mathrm{GeV}} \int_{\theta=85^\circ-\alpha}^{90^\circ-\alpha} P(\theta,\, \alpha+\theta_z,\, E)\; \phi_\mu(E, \theta, \phi_z)\; dE\, d\theta \qquad (3.3)$$
For the different muon flux calculations required to obtain S(θ z , ϕ z ) and B(θ z , ϕ z ), the parametrization proposed in [START_REF] Shukla | Energy and angular distributions of atmospheric muons at the Earth[END_REF] has been used, corresponding to:
$$\phi_\mu(\theta, E) = I_0\,(n-1)\,E_0^{\,n-1}\,(E_0+E)^{-n}\left(1+\frac{E}{\epsilon}\right)^{-1} D(\theta)^{-(n-1)} \qquad (3.4)$$
$$D(\theta) = \sqrt{\frac{R^2}{d^2}\cos^2\theta + 2\,\frac{R}{d} + 1}\; -\; \frac{R}{d}\cos\theta \qquad (3.5)$$
where the experimental parameters, summarized in table 1 together with the other constants used in the equations, have been obtained from fits to different experimental measurements. This parametrization provides an analytical formula for the muon flux estimation valid for low energy muons and high incident zenith angles. With all these ingredients, the S/B ratio for the Apollonia tumulus has been calculated scanning the ϕ z range in 10° steps and the corresponding θ z values for each case in 1° steps. The results from these calculations are summarized in figure 5 as a function of θ z and the opacity ϱ, which is a more significant variable than ϕ z since the muon flux is basically independent of the azimuth angle. The main conclusion is that for all the studied directions the S/B ratio is higher than 73.9, which means that at most 1.3 % of the detected muons have been previously scattered on the object surface. It is observed that the directions with the lowest S/B values are those with high θ z values. For these directions lower signal values are expected since they correspond to the most horizontal ones (where the muon flux is lower) and, due to the tumulus geometry, these are cases for which a longer tumulus length is traversed. Actually, for directions with θ z lower than 85°, the S/B ratio is always higher than 254.8, reducing the contribution of the scattered muons to the total detected to less than 0.4 %. In this region, the obtained S/B values can be considered homogeneous. Differences between directions are basically associated with the uncertainties in the muon scattering PDF. As mentioned in section 2, even if the used PDF was generated with a target material other than the assumed tumulus composition, it would have a limited effect on the results. This leads to the conclusion that muons forward scattered on the object surface do not significantly influence the results of muon imaging for the case of tumuli and, by extension, of other objects with similar dimensions and composition.
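A compact numerical sketch of the flux parametrization (Equations 3.4-3.5) and of the signal integral (Equation 3.2) is given below. The parameter values are placeholders of a plausible order of magnitude standing in for the fitted values of Table 1, which are not reproduced in this text.

```python
# Sketch of the muon flux of Eqs. (3.4)-(3.5) and the signal integral of Eq. (3.2).
# I0, N, E0, EPS, R_OVER_D are placeholder values, not the fitted parameters of Table 1.
import numpy as np

I0, N, E0, EPS, R_OVER_D = 70.0, 3.0, 4.3, 854.0, 174.0

def D(theta_rad):
    """Eq. (3.5): effective path-length factor at zenith angle theta."""
    c = np.cos(theta_rad)
    return np.sqrt(R_OVER_D**2 * c**2 + 2.0 * R_OVER_D + 1.0) - R_OVER_D * c

def muon_flux(e_gev, theta_rad):
    """Eq. (3.4): differential muon flux at energy E [GeV] and zenith angle theta."""
    return (I0 * (N - 1.0) * E0**(N - 1.0) * (E0 + e_gev)**(-N)
            * (1.0 + e_gev / EPS)**(-1.0) * D(theta_rad)**(-(N - 1.0)))

def signal(theta_rad, e_min_gev, e_max_gev=1e4, n=2000):
    """Eq. (3.2): flux integrated above the minimum crossing energy (trapezoid rule)."""
    e = np.logspace(np.log10(e_min_gev), np.log10(e_max_gev), n)
    return np.trapz(muon_flux(e, theta_rad), e)
```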
Volcanology: La Soufrière
The use of muon imaging for the scanning of volcanoes is another application of this technique, which implies the study of objects with larger dimensions than for archaeology. With this purpose, some projects have already performed measurements in different locations. One of them is the DIAPHANE collaboration [START_REF]Diaphane project[END_REF], which surveys La Soufrière volcano paying special attention to possible variations of the inner liquid/vapour content that can be related to the hydrothermal system dynamics. For this work, La Soufrière volcano has been taken as reference to study the impact of the forward scattered muons on the muon imaging reconstruction in volcanology.
As for the case of tumuli, volcanoes geometry can also be approximated to a truncated cone. Based on the topographic plan of La Soufrière, their dimensions correspond to a height of 460 m and a base and top radius of 840 m and 160 m respectively. These dimensions lead to a lateral surface with a slope angle of 34.1 • . In this case a homogeneous composition of standard rock has been considered, with a density of ρ = 2.5 g/cm 3 , together with 3 different detector positions corresponding to real measurement points of the DIAPHANE project. They are labelled as h-270, h-170 and h-160 respectively because of the height where they are placed. These positions are summarized in table 2 taking as reference the centre of the volcano base. Main differences among the positions rely on the distance between the detector and the volcano, going from 5 to 25 m approximately, and on the height with respect to the volcano base, which has a direct influence on the length of the volcano traversed by muons before their detection and, consequently, on the signal S(θ z ,ϕ z ) computation.
Both tumulus and volcano have been approximated by the same geometrical shape with their corresponding dimensions. So for volcanoes, the procedure to determine the S/B ratio for different incident directions is equivalent to the one described in section 3.1 for the tumulus case. The only difference is that in this case the assumed density is ρ = 2.5 g/cm³ instead of ρ = 2.2 g/cm³ (corresponding to the standard soil), which affects the opacity estimation. Nevertheless, this density variation is expected to have a reduced impact on the results, as estimated in section 2.
As for the tumulus case, the S/B ratio has been evaluated scanning the ϕ z range in 10° steps and the corresponding θ z values for each case in 1° steps. Results have also been represented with respect to θ z and the opacity ϱ. They are summarized for the three different detector positions in figure 6. For the three cases the S/B ratio takes values significantly lower than for the tumulus, although the corresponding distributions present similar features. For example, the S/B values for directions with θ z > 85° are again lower than for the rest of the directions. Moreover, for all detector positions, directions with low opacity (corresponding to the volcano contour) present systematically higher values of S/B than the directions pointing to the bulk of the volcano.
Focusing on each detector, for the h-270 case, incident directions with high θ z have S/B ratio values in general below 1, which implies that it is possible to detect more forward scattered muons than muons emerging from the volcano, significantly influencing the object density reconstruction. On the opposite side, for the directions where the opacity is smaller than 50×10³ g/cm², the S/B ratio takes values higher than 5, so no more than 17 % of the detected muons have been previously scattered. If we consider all the other directions, with θ z < 85° and ϱ > 50×10³ g/cm², the S/B distribution is more homogeneous, having a mean value of 2.7 with a standard deviation of 1.4. That means that on average about 27 % of the detected muons are low energy forward scattered muons. In this case, the scattered muons have a significant impact on the volcano density reconstruction. Assuming the percentage of scattered muons constant for all the scanned directions, and estimating the uncertainty of this percentage from the standard deviation of the S/B mean value, the reconstructed density should be corrected by multiplying it by a factor 1.4 (+0.4 / −0.1). Results for the detector positions labelled as h-170 and h-160 are similar to each other. This suggests that the S/B ratio depends more on the detector height with respect to the volcano base (170 m for h-170 and 160 m for h-160) than on the distance between the detector and the volcano (6.72 m for h-170 and 23.99 m for h-160). Since both detectors are placed lower than in the h-270 case, the mean volcano length traversed by non-scattered muons is longer. This leads to smaller S/B ratios, mainly because of lower S(θ z , ϕ z ) values. As mentioned, the features of the distribution are equivalent: smaller S/B values for θ z > 85° and higher values for low opacities. If only muon directions with θ z < 85° and ϱ > 50×10³ g/cm² are considered, a mean S/B value of 1.1 is obtained for both cases, with a standard deviation of 0.9 and 1.0 for the h-170 and h-160 detector positions respectively. These values reveal a strong influence of the low energy forward scattered muons on the overall detection (almost half of the detected muons have been previously scattered). Keeping the assumption of a constant S/B value for all the considered directions, the density correction factors are 1.9 (+4.1 / −0.4) for the h-170 position and 1.9 (+9.1 / −0.4) for the h-160 case.
Summarizing, for the case of volcanoes, where the length of material to be traversed by muons is longer than for archaeology, the forward scattering of low energy muons and their further detection has a clear influence on the results of the muon imaging. The three studied scenarios and the defined geometry of the volcano as a truncated cone reveal that the S/B ratio mainly depends on the length of material traversed by non-scattered muons, those considered for the S(θ z , ϕ z ) computation. Moreover, for a fixed detector position, S/B can be considered homogeneous for all the incident directions corresponding to the volcano bulk volume, so a global correction factor for the reconstructed density can be applied. The main source of uncertainty in the S/B ratio estimation comes from the uncertainties associated with the PDF and, consequently, with B(θ z , ϕ z ). For this reason, as deduced for the h-270 detection position, a higher S/B mean value translates into a more accurate determination of the correction factor for the reconstructed volcano density.
Figure 6. Distribution of the ratio between the non-scattered and scattered detected muons (defined as S/B in the text) with respect to the reconstructed zenith incident angle (θ z ) and the opacity (ϱ). The distribution is shown for the 3 detectors installed at La Soufrière volcano. Numbers correspond to the bin values and have been placed next to the corresponding bin to ease their reading.
Summary and discussion
At present, muon imaging is being used and proposed for an increasing number of different applications. This implies that objects with quite different dimensions can be scanned, from some tens to several hundreds of meters as typical sizes. Furthermore, the composition and density of these objects can also vary from one to other. All the experimental approaches to do muon imaging, generally known as transmission and deviation tomography respectively, rely on the direction reconstruction of the detected muons. For this reason, specially for the transmissionbased technique, muons changing their direction because of a scattering on the object surface before their detection, represent an irreducible background noise, biasing the object mean density reconstruction. An estimation of the percentage of these forward scattered muons out of all detected, would allow the estimation of correction factors to reconstruct the proper density.
Muons trajectory deviation is mainly driven by the multiple Coulomb scattering. The resulting angular distribution due to this effect is theoretically described by the Molière theory. Besides, some analytical descriptions of the process have been already performed for particular objects and compositions. Nevertheless, the large variety of objects that are currently proposed to be studied by muon tomography, requires more versatile tools to evaluate the forward scattering muons, easily adaptable to each case. With this aim a set of Monte Carlo simulations have been performed using the Geant4 framework.
These simulations provide a general evaluation of the muon forward scattering probability depending on their energy and their incident angle, being in overall agreement with theoretical estimations. They revealed that muons with energies lower than 5 GeV and incident angles above 85 • with respect to the normal direction of the surface, are almost the only muons susceptible to be scattered and then detected. The simulations results have been used as PDF to evaluate the influence of scattered muons in different scenarios. To do that, the signal to background ratio (S/B) has been defined. S(θ z ,ϕ z ) corresponds to the flux of those muons reconstructed in a direction θ z -ϕ z without any previous scattering, while B(θ z ,ϕ z ) is the muon flux of muons reconstructed with the same direction but after a previous forward scattering on the object surface. The S/B evaluation has been presented for two particular cases, corresponding to two of the applications of muon imaging: archaeology and volcanology.
Taking the muon distribution at the Earth's surface proposed in [START_REF] Shukla | Energy and angular distributions of atmospheric muons at the Earth[END_REF], for the archaeological application the Apollonia tumulus has been considered as reference, placing the detector beside the tumulus base. The S/B estimations reveal that the percentage of scattered muons detected is never higher than 1.6 %, and is lower than 0.5 % if the incident zenith angle is smaller than 85°. This leads to the conclusion that the influence of scattered muons in these cases can be neglected.
This is not the case for volcanology applications. A model based on La Soufrière volcano, already scanned within the DIAPHANE project, has been used together with three different detector positions corresponding to real measurement points. A significant influence of the forward scattered muons on the measurement has been observed: they can represent up to 50 % of the detected muons, and even more for incident zenith angles higher than 85°. S/B values can be considered homogeneous for the directions corresponding to the bulk volume of the volcano. The main differences in S/B depend essentially on the height of the detector with respect to the volcano base. Due to the volcano geometry, defined as a truncated cone, this is directly related to the muon path length through the volcano. Other features, such as the distance between the detector and the volcano, seem to have a smaller influence. With the estimations and numbers obtained in this work, correction factors for the density reconstruction have been computed, taking values from 1.4 to 1.9 depending on the detector position.
Forward scattered muons represent events that are in principle not accounted for, so their detection has a direct impact on the mean density reconstruction. Nonetheless, the observed homogeneity of the S/B ratio over all the considered directions, both in the tumulus and in the volcano case, suggests that these muons do not significantly degrade the resolution of the resulting image.
All these estimations and conclusions are based on simulations of scattered muons on standard rock, which has been demonstrated to produce equivalent results than the standard soil case. In any case, changing accordingly the material composition and properties, this simulation framework can be used to evaluate the influence of forward scattered muons for further muon imaging measurements of other objects and structures.
Figure 1. Schema of the defined geometry to perform the general studies of forward scattering of muons.
Figure 2. Summary plots of the results of the general study of forward scattering of muons (see text for details about the study). Top left: difference on the zenith angle (∆θ det = θ fin det - θ ini det) with respect to the initial muon energy (E ini µ). Top right: correlation between the initial and final zenith angles (θ ini det vs. θ fin det) for all the muon energies considered. Bottom left: difference on the azimuth angle (∆ϕ det = ϕ fin det - ϕ ini det) with respect to the initial muon energy (E ini µ). Bottom right: ∆ϕ det distribution for all the muon energies considered. A Gaussian distribution as predicted by Molière theory (Equations 2.1-2.3) is observed.
Figure 3. Correlation between the initial and final zenith angles (θ ini det vs. θ fin det) from the general study of forward scattering of muons (see text for details about the study). The correlation plots are divided in 0.5 GeV windows between 0 and 5 GeV for the initial simulated muon energy (E ini µ). These plots correspond to 10^8 simulated muons with incident angles between 85° and 90° and energies between 0 and 5 GeV (both homogeneously distributed). They are used as PDF for further estimations of the forward scattered muon flux.
Figure 4. Schema showing the relationship between the θ det, θ z and α angles (see text for angle definitions), for the use of the muon scattering PDF, P(θ ini det, θ fin det, E ini µ), in the B(θ z, ϕ z) calculation.
Table 1.
Figure 5. Distribution of the ratio between the non-scattered and scattered detected muons (defined as S/B in the text) with respect to the reconstructed zenith incident angle (θ z) and the opacity (ϱ) for the Apollonia tumulus case.
Table 2. Summary of the detector positions with respect to the base centre of La Soufrière volcano model (see text for model details).
Acknowledgments
Authors would like to acknowledge the financial support from the UnivEarthS Labex program of Sorbonne Paris Cité (ANR-10-LABX-0023 and ANR-11-IDEX-0005-02). Data from La Soufrière volcano are part of the ANR DIAPHANE project ANR-14-ce04-0001. Part of the project has been funded by the INSU/IN2P3 TelluS and "DEFI Instrumentation aux limites" programmes. | 40,263 | [
"7286",
"750602"
] | [
"129",
"129",
"1004962",
"2",
"1004962",
"179899",
"250059",
"18404",
"250059"
] |
01772396 | en | [
"shs"
] | 2024/03/05 22:32:18 | 2007 | https://hal.science/hal-01772396/file/Antoine%2C%20P.%2C%20Poinsot%2C%20R.%2C%20%26%20Congard%2C%20A.%20%282007%29%28bes%29.pdf | Pascal Antoine
email: [email protected]
R Poinsot
A Congard
Evaluer le bien-être subjectif : la place des émotions dans les psychothérapies positives Measuring Subjective well-being: place of the emotions in positive psychotherapies
Keywords: émotion, bien-être, psychologie positive, psychothérapie positive, évaluation Measuring Subjective well emotion, well-being, positive psychology, positive psychotherapy : assessment
Evaluer le bien-être subjectif : la place des émotions dans les psychothérapies positives Résumé. La psychologie positive est l'étude scientifique des expériences positives, du bien-être et du fonctionnement optimal de l'individu. Elle vise à dépasser la centration fréquente en psychologie clinique sur la souffrance, sa résolution ou sa réduction. Son objectif est de rendre le patient plus heureux grâce à la compréhension et l'investissement de trois voies : une existence plaisante, engagée et pleine de sens. Pour une approche scientifique de chacun de ces domaines, il est nécessaire de disposer de mesures valides et pratiques adaptées à un cadre clinique. En pratique, il est souhaitable d'évaluer séparément les facettes constituant le concept de « bien-être subjectif », notamment l'humeur et les émotions, afin d'étudier au mieux l'efficacité de la psychothérapie positive. L'objectif de cette étude est de développer en France une Mesure de la Valence Emotionnelle (MVE) basée sur le modèle du bien-être proposé par Diener. Pour développer cet outil, un questionnaire de fréquence des émotions a été construit et proposé à 571 participants. La version finale de la mesure est composée de 23 items organisés en six ensembles, constituant chacun une échelle à part entière. La consistance est satisfaisante, de même que les secteurs de la validité qui ont été éprouvés. Les six facettes émotionnelles sont divisées en deux facteurs d'ordre supérieur, l'un positif, l'autre négatif. Le bien-être subjectif était, de façon surprenante, rarement mesuré en psychopathologie. Cette absence était regrettable, la présence d'émotions positives, l'absence d'émotions négatives et une évaluation de son sentiment de satisfaction et d'accomplissement, étant des composantes du bien-être importantes mêmes pour les patients les plus en souffrance. Nous proposons un instrument d'évaluation du bien-être adapté au cadre clinique.
Evaluer le bien-être subjectif : la place des émotions dans les psychothérapies positives
Psychologie et psychothérapie positive
En 2000, Seligman et Csikszentmihalyi publient un article intitulé "Positive Psychology".
Cet écrit fait le point des recherches dans le domaine de la psychologie positive et constitue le point de départ d'une dynamisation et d'un élargissement de ce courant. Par définition, la psychologie positive est présentée comme l'étude des conditions et processus qui contribuent à l'épanouissement ou au fonctionnement optimal des personnes, des groupes et des organisations [START_REF] Gable | What (and Why) is Positive Psychology?[END_REF]. Ce courant prend racine essentiellement dans le constat d'un déséquilibre, notamment en psychologie clinique, faisant que la plupart des recherches sont centrées sur la maladie mentale, la détresse et les dysfonctionnements psychologiques. Environ un tiers des personnes souffrent un jour d'un trouble psychiatrique et, dans ce domaine, le panel des psychothérapies efficaces est vaste.
Pour autant, les deux-tiers de personnes qui ne rencontreront pas la maladie psychiatrique éprouvent-ils nécessairement un total bien-être ? Ici, le but de la psychologie positive est d'intégrer ce que l'on connaît aujourd'hui de la résilience, des ressources personnelles et de l'épanouissement individuel pour construire et développer un corpus organisé de connaissances et de pratiques. Un des enjeux actuel consiste à passer d'approches descriptives ou explicatives à des approches prescriptives destinées aux patients comme au grand public. Certains étudient donc les techniques qui améliorent directement ou indirectement le bien-être, par exemple les pratiques méditatives de pleine conscience, l'écriture de journaux personnels ou plus spécifiquement les thérapies orientées sur le bienêtre [START_REF] Gable | What (and Why) is Positive Psychology?[END_REF]. Plusieurs propositions thérapeutiques existent déjà. La première historiquement est celle de [START_REF] Fordyce Mw | Development of a program to increase personal happiness[END_REF] qui présente 14 stratégies comportementales visant explicitement l'augmentation du bien-être. Il s'agit globalement d'être plus actif et d'investir de façon privilégiée les relations avec l'entourage. Plus récemment, [START_REF] Fava | Well-being therapy: Conceptual and technical issues[END_REF] propose une thérapie orientée sur le bien-être (well-being therapy) basée sur le modèle du bien-être de Ryff et Singer (1998). [START_REF] Frisch Mb | Quality of life therapy: Applying a life satisfaction approach to positive psychology and cognitive therapy[END_REF] propose une thérapie orientée sur la qualité de vie (quality of life therapy) intégrant la notion de satisfaction de la vie à des techniques de thérapie cognitive. Ces démarches ont en commun de s'adresser à des patients présentant des troubles affectifs dans un dispositif relativement classique où le bien-être est un ingrédient complémentaire mais pas central (Seligman et coll., 2006). Seligman et coll. (2004) distinguent trois processus qui conduisent au bien-être ou au bonheur : les émotions positives, l'engagement, et le sens de l'existence. Les émotions positives sont orientées vers le passé (gratitude et pardon), le présent (plaisir et pleine conscience) et le futur (espoir et optimisme). Elles facilitent la flexibilité de pensée et la résolution de problèmes [START_REF] Frederickson Bl | Positive emotions broaden scope of attention and thought-action repertoires[END_REF]Isen et coll., 1987). Elles contrebalancent les effets des émotions négatives au niveau physiologique [START_REF] Ong Ad | Cardiovascular intraindividual variability in later life: the influence of social connectedness and positive emotions[END_REF] et facilitent l'utilisation de coping ajusté [START_REF] Folkman | Positive Affect and the Other Side of Coping[END_REF], 2004). Les émotions positives permettent de mettre en oeuvre et de gérer les ressources (Tugade et Frederickson, 2004) et accélèrent la récupération face à des événements stressants (Frederikson et coll., 2003 ;Tugade et coll., 2004). L'engagement correspond à la poursuite active d'un but important pour soi et qui mobilise ses ressources psychologiques personnelles. Le sens de l'existence correspond à la poursuite d'un but abstrait dépassant largement l'individu. Ces trois voies (pleasant life, good life/engaged life et meaningfull life) sont relativement indépendantes et peuvent être diversement investies par les personnes. 
When all three are equally and intensely invested, one speaks of a "full life". Seligman takes this model of well-being as the starting point for a positive psychotherapy whose fundamental objective is much more to increase well-being than to reduce suffering [1]. A new therapeutic proposal, however, raises the question of its efficacy, its indications and its limits. Suitable measurement instruments are therefore needed, which amounts to asking how to assess, if not happiness, at least well-being. For Duckworth et al. (2005), the tools most used by clinicians in positive psychology are the Subjective Happiness Scale [START_REF] Lyubomirsky | A measure of subjective happiness: preliminary reliability and construct validation[END_REF], the Fordyce Happiness Measure [START_REF] Fordyce Mw | A review of research on the happiness measures: a sixty-second index of happiness and mental health[END_REF] and, above all, the Satisfaction with Life Scale (Diener et al., 1985; [START_REF] Pavot | Further Validation of the Satisfaction With Life Scale : Evidence for the Cross-Method Convergence of Well-Being Measures[END_REF]). The life satisfaction scale is a past-oriented cognitive evaluation composed of items such as "in most ways my life is close to my ideal". These three tools are in fact largely cognitive in nature, which is not consistent with Diener's (2006) main recommendation concerning the use of well-being indicators in health: to distinguish the facets of the subjective experience of well-being and ill-being, including mood and emotion, the perception of one's physical and mental health, and satisfaction in different domains of life [2].
Well-being and its conceptions
Three major conceptualizations exist. First, subjective well-being (SWB), whose components are both cognitive and emotional [START_REF] Diener | Subjective Well-Being[END_REF], corresponds to the set of individual evaluations, negative and positive, cognitive and emotional, that a person makes of his or her own life (Diener et al., 1998; [START_REF] Diener | guidelines for national indicators of subjective well-being and ill-being[END_REF]). On the cognitive side, life satisfaction can be broken down into as many domains as the individual has investments, and it therefore forms a hierarchical structure. Likewise, positive and negative emotions can be broken down into simpler emotions forming a hierarchical factor structure. Subjective well-being thus constitutes the upper, global level of this hierarchy, and this approach is the one that has generated the largest body of work in the field. A second, more recent approach is that of psychological well-being [START_REF] Ryff | The Structure of Psychological Well-Being Revisited[END_REF]: well-being is conceived here as a largely cognitive multidimensional set summarized by a single latent construct, but this approach does not take the emotional components of well-being into account. Finally, a third approach is that of mental health at work [START_REF] Warr | Employee Well-being[END_REF], which focuses on the occupational context rather than on general well-being. In practice, the most fruitful conception remains that of subjective well-being: factor analyses (Diener and Emmons, 1985) as well as multitrait-multimethod analyses (Lucas et al., 1996) indicate that it is relevant to consider the cognitive and emotional aspects separately. They can be organized hierarchically within the SWB construct (Sandvik et al., 1993), which is fairly stable across situations and over time, and consistent across different modes of assessment (other-ratings or self-reports).
The emotional components of subjective well-being
Research on emotions within SWB takes its place in a broader debate.
There are two major currents in the study of emotions (Mayne, 1999): the discrete-emotions current, in which emotions are measured through physiological activation or facial expressions, and the dimensional or lexical current, in which emotions are described within a factor space. The latter is the one that concerns us here. The lexical current in emotion research, unlike its counterpart in personality research, is still in its early days [START_REF] De Raad | Traits and Emotions : A Review of their Structure and Management[END_REF]. As in Big Five personality studies, one can consider that language contains the relevant terms for designating the full set of emotions, and a hierarchical model of them can be proposed. In the 1980s and 1990s the studies were partial and tentative, based on small word lists, and it was not until the work of Church (Church et al., 1998; [START_REF] Church At | The Structure of Affect in a Non-Western Culture : Evidence for Cross-Cultural Comparability[END_REF]) that a word list approaching exhaustiveness was explored. Their results attest to the relevance of positive and negative factors for describing the structure of emotions across several different cultures. Factor-analytic work had earlier taken off following Watson and Tellegen's (1985) proposal of two orthogonal dimensions, positive emotions and negative emotions. Many studies then focused on these two dimensions, in particular questioning their independence (Diener et al., 1995; [START_REF] Egloff B | The independence of positive and negative affect depends on the affect measure[END_REF]; Watson et al., 1988). For example, in earlier studies, Diener and Emmons (1985) had shown a strong negative correlation between measures of pleasant and unpleasant emotions, a correlation that decreases as the period of time covered by the measure lengthens. Their interpretation rests on the idea that it is not possible to feel two different emotions at the same instant, whereas it is possible to feel a succession of different emotions if the time span allows it. Stone et al. (1993) report data (daily mood measures over several weeks) that support this position at the scale of the day. [START_REF] Zelenski | The Distribution of Basic Emotions in Everyday Life : A State and Trait Perspective from Experience Sampling Data[END_REF], measuring the emotions felt three times a day for a month, show that positive emotions dominate in both intensity and frequency in ordinary subjects.
Van Eck et al. (1998) show that aversive daily events lead to an increase in negative affect and a decrease in positive emotions; the more unpleasant these events are judged to be, the larger these changes.
In parallel, many studies show a degree of temporal stability of emotions [START_REF] Izard Ce | Stability of Emotion Experiences and Their Relations to Traits of Personality[END_REF] (Lucas et al., 1996; [START_REF] Ormel | How Neuroticism, Long-Term Difficulties, and Life Situation Change Influence Psychological Distress : A Longitudinal Model[END_REF][START_REF] Watson | Measurement and Mismeasurement of Mood : Recurrent and Emergent Issues[END_REF][START_REF] Watson | The Long-Term Stability and Predictive Validity of Trait Measures of Affect[END_REF]), even though cognitive variables appear more stable over time [START_REF] Eid | Intraindividual Variability in Affect : Reliability, Validity, and Personality Correlates[END_REF] and across situations [START_REF] Diener | Temporal stability and cross-situational consistency of affective, behavioral, and cognitive responses[END_REF]. The main alternative to the work of Watson and Tellegen (1985) comes from the proposals of Russell [START_REF] Ja | A circumplex model of affect[END_REF] and Diener (Diener et al., 1985). These authors distinguish two independent dimensions of emotion: the level of activation (arousal) and the degree of pleasantness or unpleasantness (valence, or hedonic dimension). The arousal axis runs from a state close to sleep or relaxation to a state close to frenzy. Thus, when Diener proposes scales of positive and negative emotions, these lie on the hedonic axis (Meyer and Shack, 1989) and cannot be confused with the scales of Watson and Tellegen (1985), which do not separate the hedonic and arousal axes. Feldman Barrett and Russell (1998), for their part, demonstrate two independent axes, valence and activation, and also show that emotions are bipolar on the valence axis and bipolar on the activation axis; for example, fatigue is the opposite of tension on the activation axis.

Diener et al. (1995) consequently chose to work on a factor structure of the frequency of the emotions felt during the preceding month. To build their model, they drew on three theoretical traditions in emotion research: the cognitive, the evolutionary and the empirical. By cross-checking these theories, they propose six ranges of emotions, two termed "pleasant" (or positive), Love and Joy, and four termed "unpleasant" (or negative), Fear, Anger, Shame and Sadness; together these form a hierarchical structure in which the two higher-order factors are moderately correlated. Within this evaluative approach, the main aim of the research presented here is to make available in French an instrument measuring the frequency of positive and negative emotions. The objective is to offer a short, easy-to-use tool suitable for repeated measurement in longitudinal research or in psychological follow-up. We chose to model it on the instrument created by Diener et al. (1995), which rests on a hierarchical structure allowing both a global reading (positive versus negative emotions) and an interpretation within each range of emotions. Methodologically, it is therefore important to verify the quality of the factor structure and the consistency of the scales obtained in this French form, and to study the links with existing measures of well-being and distress.

METHOD

Adaptation
The adaptation was carried out in several steps. First, 24 English emotion terms taken from the article by Diener et al. (1995) were each translated into two or three French words with the help of a dictionary. A search for synonyms was then conducted on these translations, leading to a final corpus of 129 French words. Second, seven judges, all psychologists, read the article by Diener et al. (1995) and then indicated, for each of the 129 terms, whether it matched the spirit of the original scale; they were also asked to exclude terms too uncommon for lay respondents. For each range of emotions, the six French terms most frequently retained by the judges were kept for the next step.

Samples
Three samples were pooled for the multidimensional analyses. The first comprises 259 soldiers, all male and relatively young (mean = 28 ± 5 years), who completed the questionnaire as part of a study on stress during overseas operations. The second comprises 198 elderly people (mean = 75 ± 12 years), mostly women (N = 122; 62%), who completed it during a study on distress in institutionalized elderly persons. Finally, a third sample was recruited for the present research, in particular for the analyses of the external validity of the subjective well-being indicator; it comprises 125 caregivers (medical and paramedical staff, psychologists and social workers), aged 36 years on average (± 10 years), mostly women (N = 95; 76%). In all, of these 582 participants, 571 questionnaires were statistically usable (98%), which already points to the good reception of the tool by respondents.

Material
The experimental form of the subjective well-being questionnaire consists of six groups of six items: [bienveillance, amitié, attachement, affection, bonté, amour], [anxiété, inquiétude, appréhension, peur, angoisse, crainte], [agacement, haine, colère, irritation, mécontent, énervement], [satisfaction, bonheur, allégresse, joie, bien-être, gaieté], [remords, gêne, regret, embarras, culpabilité, honte], [cafard, tristesse, morosité, dépression, mélancolie, abattement]. The instruction was: "Indiquez la fréquence avec laquelle vous avez ressenti chacune des émotions durant le mois qui vient de passer. Mettez une croix dans la case qui correspond le mieux" (indicate how often you have felt each emotion over the past month and tick the box that fits best). Seven response categories were offered, from "jamais" (never) to "plusieurs fois par jour" (several times a day). To study the external validity of the different emotional facets, a measure of life satisfaction was used (Diener et al., 1985) together with a measure of distress. As the sample consisted of caregivers, the distress measure chosen was burnout [START_REF] Maslach | The measurement of experienced burnout[END_REF].

Analyses
The statistical analyses follow a classical progression: analysis of the distributions of item responses, to check that all response categories are actually used; analysis of the inter-item correlation matrix, to make sure the items are not excessively redundant; factor analyses, to identify the relevant dimensions and the type of structure that best summarizes the data; analysis of the internal consistency of the scales derived from the factor analyses, to check that measurement error is acceptable; and, finally, analysis of the correlations with related measures, to complete the validity study and situate the new construct relative to existing instruments. All analyses were carried out with Statistica 6 and Lisrel 8.5.
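The analyses themselves were run in Statistica 6 and Lisrel 8.5, which are proprietary. Purely as an illustration of the progression just described, the sketch below shows how the same checks could be reproduced in Python with pandas and numpy; the file name, the item codes and the five-category threshold are hypothetical placeholders and are not taken from the original study.

```python
# Illustrative sketch only (not the study's actual scripts): response distributions,
# inter-item correlations, a principal-component decomposition and internal consistency,
# using hypothetical column names.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

df = pd.read_csv("mve_items.csv")  # hypothetical file: one column per item, coded 1-7

# 1) Item response distributions: flag items for which most categories go unused.
poorly_spread = [col for col in df.columns if df[col].nunique() < 5]

# 2) Inter-item correlation matrix: look for redundant pairs and off-group correlations.
corr = df.corr()
redundant_pairs = [(a, b) for a in df.columns for b in df.columns
                   if a < b and corr.loc[a, b] > 0.85]

# 3) Principal components of the correlation matrix (unrotated here; the study applied
#    a varimax rotation before inspecting the simple structure).
eigenvalues = np.linalg.eigvalsh(corr.values)[::-1]
explained_variance = eigenvalues / eigenvalues.sum()

# 4) Internal consistency of the scales retained after the factor analyses.
scales = {"JOIE": ["satisfaction", "bonheur", "joie", "gaiete"],       # hypothetical codes
          "ANXIETE": ["inquietude", "peur", "angoisse", "crainte"]}
alphas = {name: cronbach_alpha(df[cols]) for name, cols in scales.items()}
print(poorly_spread, redundant_pairs, explained_variance[:6], alphas)
```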
RESULTS
Analysis of the distributions of item responses revealed problems for emotions of very high valence or activation (i.e. haine, honte, dépression). These three items were eliminated because four of the response categories were used by only a minority of participants; in short, these items do not differentiate between participants, since everyone tends to answer them in the same way. Analysis of the inter-item correlation matrix revealed very low correlations between items supposed to belong to the same theoretical groups, or correlations that were too high with items from other ranges of emotions (i.e. bienveillance, anxiété, allégresse); these items were therefore not retained. Principal component analyses (PCA) with varimax rotation identified seven items that did not meet the requirements of a simple structure (i.e. amitié, bonté, appréhension, énervement, bien-être, remords, abattement); these items were discarded. The final model therefore comprises 23 items, that is, six dimensions of three or four items each, which explain 68% of the total variance. The confirmatory analysis of this structural model is satisfactory (χ²/df = 2.89; p < 0.001; SRMR = 0.051; GFI = 0.91) [3]. A complementary analysis was carried out to check the relevance of a hierarchical structure (see Table I): the top of this structure consists of the latent variables of negative and positive affectivity, while the base level consists of the six ranges of emotions. INSERT TABLE I HERE. The variance of each item thus splits into a general variance and a specific variance, and two levels of interpretation follow. One can consider the emotions by classically distinguishing negative and positive affectivity, which is supported by the hierarchical analysis; the study of the correlations between the scales indicates an overall independence of positive and negative emotions (r = -0.07; ns). It is also possible to refine the assessment of subjective well-being by distinguishing six intercorrelated facets (see Table II). The links between the scales are then somewhat more complex: in particular, there is a non-negligible correlation between the joy scale and the sadness scale, whereas joy is not correlated with the other negative-emotion scales and sadness is not correlated with the affection score. INSERT TABLE II HERE. The internal consistency indices are satisfactory (see Table III): the negative and positive affectivity scales are highly consistent (0.92 and 0.81 respectively), and the six emotion scales are also satisfactory on this criterion, ranging from 0.71 to 0.87. INSERT TABLE III HERE. Two results are noteworthy. First, the life satisfaction score correlates moderately with both positive emotions (r = 0.42; p < 0.001) and negative emotions (r = -0.38; p < 0.001). The salient result lies at the facet level, since the highest correlations are with the joy and sadness scales; while some overlap between these three measures can be assumed, the other constructs (affection, anxiety, anger and shame) appear more original. The second result concerns the links between burnout and the emotional facets. The feeling of emotional exhaustion is the component most closely associated with the emotions, in particular sadness (r = 0.41; p < 0.001). By contrast, the links with depersonalization and lack of personal accomplishment are weak, and there is no notable negative correlation between positive emotions and the three burnout components. INSERT TABLE IV HERE.
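As a side note on the fit indices reported above (see note [3]), the SRMR can be written directly as the root mean square of the differences between the observed and the model-implied correlations. The sketch below only illustrates that definition with toy matrices; the values reported in the study were obtained from Lisrel 8.5, and the matrices here are invented for the example.

```python
# Illustration of the SRMR definition referred to in note [3] (toy data, not study output).
import numpy as np

def srmr(observed_cov: np.ndarray, implied_cov: np.ndarray) -> float:
    """Root mean square of the residuals between observed and implied correlations,
    taken over the lower triangle of the matrices (diagonal included)."""
    d_obs = np.sqrt(np.diag(observed_cov))
    d_imp = np.sqrt(np.diag(implied_cov))
    r_obs = observed_cov / np.outer(d_obs, d_obs)
    r_imp = implied_cov / np.outer(d_imp, d_imp)
    rows, cols = np.tril_indices_from(r_obs)
    return float(np.sqrt(np.mean((r_obs[rows, cols] - r_imp[rows, cols]) ** 2)))

# Toy example: two observed items correlated at 0.40, model-implied correlation 0.35.
S = np.array([[1.0, 0.40], [0.40, 1.0]])
Sigma = np.array([[1.0, 0.35], [0.35, 1.0]])
print("SRMR =", srmr(S, Sigma))  # compared in practice against the < 0.08 rule of thumb
```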
DISCUSSION

The main aim of this research was to provide a French-language instrument for assessing the frequency of positive and negative emotions. The construction of the French tool was based on the model of Diener et al. (1995), and the results yield a measure with qualities similar to those of the original scale: it assesses six distinct ranges of emotions organized into two higher-order dimensions, and the scales are consistent and measure phenomena that are specific with respect to other criteria commonly used in studies of subjective well-being or stress. These results need to be replicated with patients suffering from various disorders. In particular, it is important to test the efficacy of a "classical" cognitive-behavioural therapy versus a positive psychotherapy on negative and positive emotions. The present results show an independence of the two types of emotions, but one may still ask what the underlying processes are: are there common processes that determine both positive and negative emotions, or are there processes specific to negative emotions and others specific to positive emotions? Frederickson and Joiner (2002) hypothesize a positive emotional and behavioural spiral, analogous to the depressive spiral encountered in patients. This idea should be tested with emotion-based instruments in daily measurements. One may hypothesize autonomous systems, a negative one modified by classical CBT and a positive one modified by positive psychotherapy. This hypothesis would be consistent with findings in differential psychology. Personality and subjective well-being are strongly linked, personality sometimes being regarded as the dominant determinant of well-being [START_REF] Compton Wc | Measures of mental health and a five factor theory of personality[END_REF][START_REF] Costa Pt | Influence of Extraversion and Neuroticism on Subjective Well-Being : Happy and Unhappy People[END_REF][START_REF] Deneve | The Happy Personality : A Meta-Analysis of 137 Personality Traits and Subjective Well-Being[END_REF][START_REF] Diener | Subjective Well-Being : Three Decades of Progress[END_REF][START_REF] Myers | Who is happy ?[END_REF]. Positive affectivity is linked to extraversion and negative affectivity to neuroticism. [START_REF] Jp | Differential Roles of Neuroticism, Extraversion, and Event Desirability for Mood in Daily Life : An Integrative Model of Top-Down and Bottom-Up Influences[END_REF] extend these findings to desirable and undesirable daily events: for these authors, neuroticism and undesirable events predict both negative and positive mood, whereas extraversion and desirable events predict only positive mood. SWB can be given bottom-up or top-down interpretations, depending on whether one considers that a succession of favourable or unfavourable events determines subjective well-being, or that subjective well-being is a predisposition to experience events positively or negatively [START_REF] Diener | Subjective Well-Being[END_REF]. Reality is probably more complex, with reciprocal causality [START_REF] Feist Gj | Integrating Top-Down and Bottom-Up Structural Models of Subjective Well-Being : A Longitudinal Investigation[END_REF]. The links between well-being and personality are even consistent and close enough to have led Lykken and Tellegen to an extreme conclusion: "Trying to be happier is as futile as trying to be taller and is therefore counterproductive" (Lykken and Tellegen, 1996, p. 189; French translation by Rolland, 2000). This conclusion is not compatible with the results of Seligman et al. (2005), but it serves as a reminder that well-being may be a trait predetermined at birth [START_REF] Diener | Traits Can Be Powerful, but Are Not Enough : Lessons from Subjective Well-Being[END_REF] and modifiable only to a certain extent. Seligman et al. (2004) take this limit into account, in particular for the first route of the therapy (pleasant life). Concerning the three routes to well-being (pleasant life, good life/engaged life and meaningful life), it finally remains to be checked whether the techniques that improve one component of well-being also improve the other two; for example, does engagement in a meaningful life increase the frequency of positive emotions? From a psychometric point of view, further work is needed, notably on temporal reliability and sensitivity to change. The validity analysis can also be complemented with existing instruments for assessing emotions. Other scales exist, such as the PANAS (Positive and Negative Affect Schedule; Watson et al., 1988), but the 20 items of that scale are not strictly emotional in nature, some of them (determined, active, strong) involving a more complex system of latent variables. The comparison between these two tools is therefore important.

APPENDIX I

This questionnaire concerns the emotions you may have felt over the past month, whether at work, with your family or during your leisure activities. The original French instruction reads: "Indiquez la fréquence avec laquelle vous avez ressenti chacune des émotions durant le mois qui vient de passer. Mettez une croix dans la case qui vous correspond le mieux."
Scoring: the emotional valence measure (MVE, mesure de valence émotionnelle) yields 8 scores. The AFFECTION score is the sum of the items attachement, affection, amour. The BIEN-ETRE (well-being) score is the sum of the items satisfaction, bonheur, joie, gaieté. The ANXIETE (anxiety) score is the sum of the items inquiétude, peur, angoisse, crainte. The COLERE (anger) score is the sum of the items agacement, colère, irritation, mécontentement. The REMORDS (remorse) score is the sum of the items gêne, regret, embarras, culpabilité. The DEPRESSION score is the sum of the items cafard, tristesse, morosité, mélancolie. The NEGATIVE AFFECTIVITY score is the sum of the 4 negative-emotion scores, and the POSITIVE AFFECTIVITY score is the sum of the 2 positive-emotion scores.
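The scoring rules above translate directly into a short routine. The sketch below assumes, purely for illustration, that item responses are stored in a table whose column names are accent-free versions of the French item labels; neither the column names nor the function name come from the original article.

```python
# Illustrative scoring routine for the MVE (hypothetical, accent-free item codes).
import pandas as pd

SCALES = {
    "AFFECTION":  ["attachement", "affection", "amour"],
    "BIEN_ETRE":  ["satisfaction", "bonheur", "joie", "gaiete"],
    "ANXIETE":    ["inquietude", "peur", "angoisse", "crainte"],
    "COLERE":     ["agacement", "colere", "irritation", "mecontentement"],
    "REMORDS":    ["gene", "regret", "embarras", "culpabilite"],
    "DEPRESSION": ["cafard", "tristesse", "morosite", "melancolie"],
}

def score_mve(responses: pd.DataFrame) -> pd.DataFrame:
    """Compute the six facet scores and the two higher-order affectivity scores."""
    scores = pd.DataFrame({name: responses[items].sum(axis=1)
                           for name, items in SCALES.items()})
    scores["AFFECTIVITE_NEGATIVE"] = scores[["ANXIETE", "COLERE", "REMORDS", "DEPRESSION"]].sum(axis=1)
    scores["AFFECTIVITE_POSITIVE"] = scores[["AFFECTION", "BIEN_ETRE"]].sum(axis=1)
    return scores

# Example: one respondent who answered the frequency category coded 4 to every item.
example = pd.DataFrame([{item: 4 for items in SCALES.values() for item in items}])
print(score_mve(example))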
Table I. Loadings of the 23 items in a hierarchical structure (hierarchical factor analysis; N = 571; AFF POSIT = positive affectivity, AFF NEG = negative affectivity). Loadings recovered from the original layout, listed per item (general and specific loadings):
AFF POSIT column (attachement, affection, amour, satisfaction, bonheur, joie, gaieté): 0.40, 0.45, 0.55, 0.45, 0.59, 0.61, 0.59
ATTACHEMENT 0.20 / 0.66 ; AFFECTION 0.69 ; AMOUR 0.52 / 0.22 ; SATISFACTION 0.65 ; BONHEUR 0.57 ; JOIE 0.60 ; GAIETE -0.22 / 0.51
INQUIETUDE 0.64 / 0.41 ; PEUR 0.58 / 0.57 ; ANGOISSE 0.65 / 0.55 ; CRAINTE 0.66 / 0.51
AGACEMENT 0.51 / 0.64 ; COLERE 0.48 / 0.52 ; IRRITATION 0.56 / 0.66 ; MECONTENT 0.56 / 0.65
GENE 0.57 / 0.51 ; REGRET 0.64 / 0.44 ; EMBARRAS 0.63 / 0.51 ; CULPABILITE 0.60 / 0.32
CAFARD -0.25 / 0.67 / 0.46 ; TRISTESSE -0.27 / 0.64 / 0.50 ; MOROSITE -0.21 / 0.67 / 0.42 ; MELANCOLIE -0.24 / 0.69 / 0.42

Table II. Correlations between the scores on the six emotion facets (Bravais-Pearson r; N = 571; *p<0.05; **p<0.01; ***p<0.001):
             AMOUR     ANXIETE   COLERE    JOIE      GENE      TRISTESSE
AMOUR        1.00
ANXIETE      0.19***   1.00
COLERE       0.10*     0.49***   1.00
JOIE         0.42***   -0.11*    -0.06     1.00
GENE         0.10*     0.60***   0.57***   -0.12**   1.00
TRISTESSE    0.01      0.63***   0.48***   -0.40***  0.59***   1.00

Table III. Characteristics of the emotion scales (mean, standard deviation, number of items, Cronbach's alpha):
AFF NEG: 38.3, 15.7, 16 items, alpha 0.92
AFF POSIT: 34.9, 8.4, 7 items, alpha 0.81
AMOUR: 14.5, 4.7, 3 items, alpha 0.71
ANXIETE: 9.5, 5.0, 4 items, alpha 0.84
COLERE: 11.9, 5.2, 4 items, alpha 0.84
JOIE: 20.4, 5.3, 4 items, alpha 0.84
GENE: 8.0, 3.8, 4 items, alpha 0.77
TRISTESSE: 8.9, 5.2, 4 items, alpha 0.87

Table IV. Correlations between the emotion facet scores and the psychological indicators (Bravais-Pearson r; N = 125, except life satisfaction N = 107; *p<0.05; **p<0.01; ***p<0.001; the exhaustion, accomplishment and depersonalization scores refer to the burnout scale):
                           AMOUR    ANXIETE  COLERE   JOIE     GENE     TRISTESSE  AFF NEG   AFF POSIT
Life satisfaction          0.25**   -0.19    -0.18    0.47***  -0.27**  -0.56***   -0.38***  0.42***
Emotional exhaustion       -0.01    0.32**   0.34**   -0.16    0.26**   0.41***    0.42***   -0.10
Personal accomplishment    0.13     -0.11    -0.16    0.28***  -0.15    -0.19*     -0.19*    0.24**
Depersonalization          -0.12    0.26**   0.21*    -0.06    0.24**   0.26**     0.30**    -0.10
Footnotes

[1] Although the objective is to improve each component of well-being, this type of intervention is not restricted to the general public. Patients suffering from depression are concerned first and foremost, and Seligman hypothesizes a threefold etiology of depression in the form of a deficit in the three routes to well-being.
[2] The four other recommendations are: to use measures that are sensitive to change, distinguish short- and long-term evolutions and can be employed in longitudinal studies, in time-sampling designs or in daily diaries; to build tools with good psychometric qualities, particularly regarding validity; to take into account the limitations of measurement tools, which are imperfect by nature, when interpreting the data; and not to overlook other indicators.

[3] A confirmatory analysis aims for a non-significant χ² or, failing that, a χ²/df ratio below 3, a commonly accepted corrective. The SRMR (Standardized Root Mean squared Residual) quantifies the difference between the observed and the model-implied covariances; a value below 0.08 is expected. The GFI (Goodness of Fit Index) is interpreted like an R² and expresses the proportion of observed covariance explained by the tested model; a value above 0.90 is expected.
"769630",
"19046"
] | [
"1343",
"102135"
] |