halid: 01485827 | lang: en | domain: [ "info" ] | timestamp: 2024/03/04 23:41:48 | year: 2013 | url: https://inria.hal.science/hal-01485827/file/978-3-642-41329-2_32_Chapter.pdf
Michiko Matsuda (email: [email protected]), Fumihiko Kimura (email: [email protected])

Digital Eco-Factory as an IT Support Tool for Sustainable Manufacturing

Keywords: Production modelling, Software agent, Sustainable production planning, Virtual factory, Environmental simulation

A digital eco-factory has been proposed as an IT tool for sustainable discrete manufacturing. In this paper, the details of a digital eco-factory and methods for its construction are discussed. A digital eco-factory is a virtual factory and an IT support platform on which a production scenario is examined from various viewpoints. When a digital eco-factory is used, the environmental impact of the planned production scenario is examined in addition to productivity and manufacturability. A digital eco-factory is built on top of a digital factory, and the digital factory is built on the virtual production line. Multi-agent technologies can be applied to model an actual shop floor and its components. All components are configured as software agents, called "machine agents." Furthermore, manufactured products are also configured as software agents, called "product agents." In the digital eco-factory there are three panels, each providing a user interface from a different viewpoint: a plant panel, a product panel and an environmental index panel. By using a digital eco-factory, a production system designer can pre-assess the configuration of the production line and the production scenario, and a factory equipment vendor can demonstrate the performance of its equipment.

INTRODUCTION

A low-carbon society has long been striven for in order to conserve the global environment, but its realization does not progress easily. Although mechanical products such as cars, personal computers, mobile phones, household appliances and daily-use equipment are indispensable in everyday life, their recycle/reuse systems are still at the level of recycling or reusing only their materials or component parts. For society to proceed further, the recycle-based manufacturing system must be improved fundamentally from the viewpoint of sustainability. At present, manufacturing enterprises are required to optimize the service to the product user while considering the sustainability of the global environment, and it has become usual to design the whole product life cycle before production. At the design stage, various kinds of CADE (CAD for Environment) tools are used as IT support, and the ICT investment for the use of these tools is becoming a large expense. Moreover, starting with the ISO 14000 series for environmental management, issued in 1996 (e.g. [An Empirical Study of the Energy Consumption in Automotive Assembly][2][3]), the methodologies for life cycle assessment techniques are being standardized (e.g. [4]). Following this trend, at the real manufacturing scene [5] an IT support tool with low ICT investment is strongly desired for estimating the production cost and environmental impact of production plans before actual production.
The authors have proposed using a digital eco-factory as an IT support tool for green production planning [Matsuda, Digital eco-factory as an IT platform for green production][Matsuda, Configuration of the Digital Eco-Factory for Green Production][Matsuda, Usage of a digital eco-factory for green production preparation]. A digital eco-factory is a virtual factory and an integrated IT platform on which a production scenario is simulated and examined from various viewpoints. When the proposed digital eco-factory is used, the green performance of the planned production scenario is examined in addition to productivity and manufacturability, at the same time and at various granularities such as the machine level, product level and factory level. In the future, when the digital eco-factory becomes available as a Web service, for example as a cloud service or SaaS (Software as a Service), it will be possible to use IT support tools for sustainable manufacturing with low investment. As a first step in this direction, the digital eco-factory must be implemented in practice. The detailed internal structure of a digital eco-factory is therefore discussed and determined for a practical implementation in this paper. First, technical requirements and IT solutions for them are presented, and a conceptual structure of a digital eco-factory for discrete manufacturing is shown. Modelling of a production line for the construction of the virtual factory is discussed in detail. Then, the usage and control of a digital eco-factory are explained; a digital eco-factory is operated based on the execution of virtual manufacturing. Finally, an example of a trial implementation is introduced.

IT SUPPORT TOOL FOR SUSTAINABLE MANUFACTURING

2.1 Requirements for the IT support tool

2.1.1 General idea of an IT support tool. The production engineer and plant manager use the IT support tool to assess and examine the production scenario before the actual execution of the production. The product designer also uses this tool to consider the production process within the product life cycle. Moreover, manufacturing device and equipment developers use this tool to examine and demonstrate the capability and environmental efficiency of new devices and equipment. The IT support tool shows performance and environmental impact from various viewpoints by simulating the input production scenario. Figure 1 shows the general idea of the IT support tool, which is called the "digital eco-factory." Production lines in an actual factory are modelled as a virtual factory in the digital eco-factory. Virtual manufacturing is performed according to the input scenario in the virtual factory, and its performance is observed from the product viewpoint, the production line viewpoint and other viewpoints. There are functional requirements for the digital eco-factory as an IT support tool from the systematic view, the monitoring view and the user interface view [Matsuda, Digital eco-factory as an IT platform for green production][Matsuda, Configuration of the Digital Eco-Factory for Green Production].
The major functional requirements are the following:
- easy input of the production scenario, including device/equipment configuration, production schedule, process plan, manufactured product data, optimization parameters and changes of schedule/plan;
- precise simulation of the production scenario from the machine view, process view and product view;
- simulation that also includes peripheral equipment, such as air conditioners, in addition to the equipment directly used in production;
- computation of environmental items such as the amount of raw materials and various energy intensities (e.g. CO2, NOx, SOx, energy consumption) in addition to conventional items such as production cost and delivery time;
- monitoring of the status of each and every process (machine), each and every product, and the system as a whole;
- and monitoring of the relationships between environmental indicators and cost-oriented conventional indicators such as delivery time and production cost.

A digital eco-factory as an IT support tool

To fulfil the above functional requirements, the digital eco-factory must be a robust IT platform for simulating various production scenarios, pre-assessing various line configurations and comparing several production processes. Furthermore, technologies are required for properly evaluating each process by carefully modelling individual components one by one and assessing the entire factory based on them. To implement this on an actual IT platform, it is important to construct precise models of the production line, the production process and the target product, including both static properties and dynamic behaviour. In other words, the core of the digital eco-factory is a digital factory in which the actual machines, production lines and factory are mirrored. [Figure 1 labels: Actual Factory; Digital Eco-Factory (SaaS); Product View; Machine/Production Line View; Infrastructure View; Planning & Deliberation; Simulation for Productivity & Environmental Impact; Virtual Factory.] There are several previous studies on digital factories (e.g. [Freedman, An overview of fully integrated digital manufacturing technology][Bley, Integration of product design and assembly planning in the digital factory][Monostori, Agent-based systems for manufacturing]) in which production lines are modelled statically. Based on these results, the authors proposed using multi-agent technology to model factory elements both statically and dynamically [Matsuda, Flexible and autonomous production planning directed by product agents][Matsuda, Agent Oriented Construction of a Digital Factory for Validation of a Production Scenario]. Moreover, it is proposed to construct a digital eco-factory using this agent-based digital factory. The conceptual structure of the proposed digital eco-factory is shown in Figure 2, where the digital factory is the basis of the digital eco-factory. The digital factory is constructed on the virtual production line, which models an actual production line and its components. All components are configured as software agents, called "machine agents." In addition to machine agents, manufactured products are also configured as software agents, called "product agents." In the digital eco-factory there are three panels, each providing a user interface from a different viewpoint: the plant panel, the product panel and the environmental index panel.
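As an illustration of the conceptual structure described above, the following minimal Python sketch shows how a digital eco-factory could be composed of a digital factory (machine agents plus product agents) and the three panels. All class and attribute names are hypothetical; the paper does not prescribe an implementation.

```python
# Minimal sketch of the conceptual structure: a digital eco-factory wrapping a
# digital factory of software agents plus the three viewpoint panels.
# All names are illustrative; they are not taken from the paper.

class MachineAgent:
    """Represents one shop-floor component (machine, conveyor, AGV, human operator)."""
    def __init__(self, machine_id, capability_model):
        self.machine_id = machine_id
        self.capability_model = capability_model   # static description, see Figure 3

class ProductAgent:
    """Represents one manufactured product (workpiece/part) and its process plan."""
    def __init__(self, product_id, process_plan):
        self.product_id = product_id
        self.process_plan = process_plan           # ordered jobs to allocate to machines

class DigitalFactory:
    """The virtual production line: machine agents plus the active product agents."""
    def __init__(self):
        self.machine_agents = {}                   # machine_id -> MachineAgent
        self.product_agents = []                   # created during a simulation run

class DigitalEcoFactory:
    """Digital factory plus the three user-interface panels named in the text."""
    def __init__(self, plant_panel, product_panel, environmental_index_panel):
        self.digital_factory = DigitalFactory()
        self.plant_panel = plant_panel                   # configures/activates machine agents
        self.product_panel = product_panel               # creates product agents from a scenario
        self.environmental_index_panel = environmental_index_panel  # aggregates green indexes
```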
The operator of the digital eco-factory can input the production scenario, the configuration of the shop floor, the control policy for the production line, the energy-saving policy, the granularity of the environmental indexes, etc. through the user interfaces of the panels. The operator can also observe the progress and results of virtual production through these interfaces. The product panel controls the progression of virtual production by creating product agents.

MODELING FOR VIRTUAL PRODUCTION

Machine agent and machine capability model. A machine agent is a software agent which has its own machine capability model. According to the production line configuration, machine agents are set up by the plant panel. A machine agent simulates the behaviour and activity of a manufacturing machine by referring to its machine capability model. Manufacturing machines here represent all of the devices/equipment on the shop floor, including human operators. In other words, the machine capability model statically describes a machine's data, and a machine agent dynamically represents a machine's performance. Machine agents communicate with each other and autonomously structure a production line on the virtual shop floor.

The structure of a machine capability model is schematically shown in Figure 3. A machine capability model consists of the specification data of a machine, the operations which the machine can perform, knowledge on how to operate processes, the required utilities such as air and light, and knowledge on how to calculate cost-related items and environmental indexes. The operation data contain the operation order and the conditions of each operation. The operation data consist of the performed operation types such as machining, screwing and bonding, the tools and jigs used for each operation, the energy consumption of each single operation, and other operation information such as the operation method and control algorithm. The machine capability model also provides the associated production line ID and the machine's position in the line: machines with the same associated production line ID belong to the same production line, and the associated line position gives the order of the machines within it. A machine agent has its own machine capability model. The plant panel holds templates for machine capability models and fills in an adequate template when a machine agent is initially set up.
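The machine capability model described above (Figure 3) is essentially a static data structure. The following sketch renders it as a Python dataclass; the field names are illustrative assumptions, since the paper only lists the kinds of content the model holds.

```python
# Sketch of the machine capability model of Figure 3 as a plain data structure.
# Field names are illustrative; the paper only enumerates the kinds of content.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Operation:
    operation_type: str                               # e.g. "machining", "screwing", "bonding"
    tools_and_jigs: List[str] = field(default_factory=list)
    energy_per_operation_kwh: float = 0.0             # energy consumption of a single operation
    method: str = ""                                  # operation method / control algorithm

@dataclass
class MachineCapabilityModel:
    specification: Dict[str, str]                     # static specification data of the machine
    operations: List[Operation]                       # operations the machine can perform, in order
    required_utilities: List[str]                     # e.g. ["compressed air", "light"]
    line_id: str = ""                                 # associated production line ID
    line_position: int = 0                            # order of the machine within that line

    def environmental_index(self, operation: Operation, count: int) -> float:
        # Knowledge on how to calculate cost-related items and environmental indexes
        # would live here; as a placeholder, report energy consumption only.
        return operation.energy_per_operation_kwh * count
```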
Product agent. A product agent is a collective designation for agents such as a workpiece agent, which carries the workpiece data and the machining process data, and a part agent, which carries the part data and the assembly process data. According to the production schedule, a product agent with the product model and process plan data is created by the product panel. The product model and process plan data are prepared outside the digital eco-factory using a design assist system such as a CAD/CAM system; usually, the product model and process plan are included in a production scenario. The activity diagram of the product agent is shown in Figure 5. A product agent has a machine allocation rule, the process plan for completing the product and the product model. When a production request is accepted, the product agent allocates jobs to adequate machine agents in the order given by the process plan, monitors the product condition during the virtual operation by collecting productivity data and environmental data from the machine agents, and reports the production status of the product.

Production scenario. A production scenario under review is input to the digital eco-factory. Usually, a production scenario is prepared by a production engineer such as a process planner. A production scenario is validated by virtual production following the scenario; by repeatedly modifying and validating a scenario, a proper production scenario is selected from the economic and environmental points of view. The formal structure of a production scenario is shown in Figure 6. A production scenario is constructed from product data, which describe the target of the manufacturing, a process, which is the sequence of jobs for producing the product, and rules and methods for executing the virtual production. The product data include the data of the component parts and the workpiece data. A process consists of sub-processes; a minimum sub-process is a job which is executed on some resource. The rules and methods include methodologies and optimized parameters for production line control, dispatching rules for scheduling, and theories for machine allocation [Matsuda, Usage of a digital eco-factory for green production preparation][Matsuda, Agent Oriented Construction of a Digital Factory for Validation of a Production Scenario].

Fig. 6. - Structure of a production scenario [Matsuda, Usage of a digital eco-factory for green production preparation]

4 A DIGITAL ECO-FACTORY

Construction of the virtual production line. A virtual production line is constructed by machine agents and product agents. A sequence diagram of virtual manufacturing in a virtual production line is shown in Figure 7. The virtual production line configuration is given as activations of machine agents by the plant panel. According to the production scenario, the product panel creates product agents. When a product agent is created, the first step of the virtual manufacturing procedure is that the product agent requests the machine status from the machine agents. Depending on the replies, the product agent requests job execution on the allocated machine, subject to the given constraints. The machine agent replies with the estimated job starting time and the product agent confirms the job request. Furthermore, the product agent requests an AGV agent to transfer virtual items such as material, mounted parts and tools to the machine for the virtual execution of the job. The machine agent proceeds with the virtual operations according to the scheduled job list in the proper order. When the machine agent starts a virtual operation, it notifies the product agent of the start. During execution of the virtual operations, the machine agent reports its condition and status to the product agent and others, and the product agent compiles a report and sends it to the product panel.

Plant panel. The plant panel sets up the configuration of a virtual production line and shows the operating condition and status of the machines used to manufacture products on the virtual production line. Figure 8 shows the activity diagram of the plant panel. The configuration of the production line and the details of its component machines (devices and equipment) are provided from outside the digital eco-factory by the operator. When a new machine is indicated, the plant panel sets up a machine agent with a machine capability model corresponding to the input configuration, using the templates of the machine capability model. When an already set-up machine is indicated, the plant panel activates the corresponding machine agent with the associated production line information. During the execution of virtual production, the plant panel communicates with the machine agents to monitor the machine status on the virtual production line, collects operating condition data, productivity data and environmental data, and calculates the totals of the economic and environmental indexes. When the configuration is changed, the operator indicates this through the deletion/generation of machine agents via the plant panel.

Fig. 1. - General idea of an IT support tool
Fig. 2. - Conceptual structure of the digital eco-factory
Fig. 3. - Structure of a machine capability model
Fig. 4. - Activity diagram for a machine agent
Fig. 5. - Activity diagram for a product agent
Fig. 7. - Sequence diagram of a virtual production line of the digital eco-factory
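To make the job-negotiation sequence of Figure 7 concrete, the following sketch reduces the message exchange between a product agent, the machine agents and the AGV agent to plain method calls. The method names and the selection rule (allocate each job to the machine with the earliest estimated start) are assumptions for illustration, not the paper's protocol.

```python
# Sketch of the Figure 7 sequence: a product agent queries machine agents, allocates
# each job to the machine with the earliest estimated start, confirms the job and asks
# an AGV agent for the transfer. Names and the selection rule are illustrative.

class VirtualMachineAgent:
    def __init__(self, name):
        self.name = name
        self.job_list = []                       # scheduled jobs, executed in order

    def machine_status(self):
        return {"machine": self.name, "queued_jobs": len(self.job_list)}

    def estimate_start_time(self, job):
        return len(self.job_list)                # simplistic: one time unit per queued job

    def confirm_job(self, job):
        self.job_list.append(job)                # job accepted into the scheduled job list

class VirtualAGVAgent:
    def transfer(self, job, machine_name):
        print(f"transfer material/tools for {job} to {machine_name}")

class VirtualProductAgent:
    def __init__(self, process_plan, machine_agents, agv_agent):
        self.process_plan = process_plan         # ordered list of job names
        self.machine_agents = machine_agents
        self.agv_agent = agv_agent

    def run(self):
        for job in self.process_plan:
            # 1. request machine status / estimated job starting times
            chosen = min(self.machine_agents, key=lambda m: m.estimate_start_time(job))
            # 2. request and confirm job execution on the allocated machine
            chosen.confirm_job(job)
            # 3. request the AGV agent to transfer material, parts and tools
            self.agv_agent.transfer(job, chosen.name)

machines = [VirtualMachineAgent("printer"), VirtualMachineAgent("mounter")]
VirtualProductAgent(["print", "mount"], machines, VirtualAGVAgent()).run()
```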
Product panel. The product panel creates the product agents, each with a process model, by referring to the production scenario and the product model, and shows the progression and status of the manufactured products on the virtual shop floor from the productivity and environmental views. The activity diagram of the product panel is shown in Figure 9. First, the product panel analyses the production scenario, which includes manufactured product data such as the parts structure, production amount, delivery period and process plan, and generates the production schedule plan. According to this plan, the product panel creates product agents and inputs them into the digital factory to start production of each product. As virtual production proceeds, the product panel monitors the progression and status of the products on the virtual shop floor from the productivity and environmental views. The product panel displays the product status, collects environmental data and productivity data, calculates an environmental index by communicating with the product agents, and reports the results.

Environmental index panel. The environmental index panel shows green performance indexes, such as carbon dioxide emissions and energy consumption, at the machine level, the production line level and the plant/factory level. At the plant level, the green performance indexes of plant utilities such as air compressors, air conditioning, exhaust air and lighting are also included. Figure 10 shows the activity diagram of the environmental index panel. The environmental index panel calculates green performance indexes based on the operating condition reports from the machine agents, by referring to the machine capability models, and generates a green performance report for each machine, called the machine index report. Using the machine index reports, the index panel calculates the green performance indexes of the production lines and produces a line index report. Then, using the line index reports and the utility consumption, the index panel calculates the green performance indexes of the whole plant. The utility consumption is calculated in parallel using reported data such as power consumption and airflow volume from the machine agents and by referring to the machine capability models.
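The machine-to-line-to-plant aggregation performed by the environmental index panel can be sketched as a simple roll-up of report dictionaries. The report field names (energy_kwh, co2_kg) are illustrative; the paper names CO2 emissions and energy consumption as typical green performance indexes.

```python
# Sketch of the machine -> line -> plant aggregation of the environmental index panel.
# Report keys are illustrative only.
from collections import defaultdict

def machine_index_report(operation_reports):
    """Aggregate the operating-condition reports of one machine agent."""
    totals = defaultdict(float)
    for report in operation_reports:
        for key, value in report.items():
            totals[key] += value
    return dict(totals)

def line_index_report(machine_reports):
    """Sum the machine index reports of all machines on one production line."""
    totals = defaultdict(float)
    for report in machine_reports:
        for key, value in report.items():
            totals[key] += value
    return dict(totals)

def plant_index_report(line_reports, utility_consumption):
    """Whole-plant indexes: line reports plus plant utilities (compressors, HVAC, lighting)."""
    totals = defaultdict(float)
    for report in list(line_reports) + [utility_consumption]:
        for key, value in report.items():
            totals[key] += value
    return dict(totals)

# Example: two machines on one line plus plant utilities (values are made up).
m1 = machine_index_report([{"energy_kwh": 1.2, "co2_kg": 0.6}, {"energy_kwh": 0.8, "co2_kg": 0.4}])
m2 = machine_index_report([{"energy_kwh": 5.0, "co2_kg": 2.5}])
line1 = line_index_report([m1, m2])
plant = plant_index_report([line1], {"energy_kwh": 3.0, "co2_kg": 1.5})
print(plant)   # {'energy_kwh': 10.0, 'co2_kg': 5.0}
```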
USAGE OF A DIGITAL ECO-FACTORY

Green performance simulation. Using the digital eco-factory, the productivity and the green performance of a production scenario can be simulated and evaluated. The sequence flow in the digital eco-factory is shown in Figure 11; the sequence flow of the digital factory, which is the core of the digital eco-factory, is shown in Figure 7. In Figure 11, the relationships among the three panels and the digital factory are clarified. The production plan, which indicates the workpiece/part input order to the production line, and the production scenarios are input to the product panel. By changing the production plan, the creation order of the product agents can be controlled, and by changing the production scenario, the job allocation to machine agents by the product agents can be controlled. The production line configuration is input to the plant panel. By changing the line configuration data, the activation of machine agents through the plant panel can be controlled. As a result, various production plans, line configurations and production scenarios are easily comparable using the digital eco-factory. The three panels monitor and report the green performance simulation in the digital factory, each from its own view.

Trial example. The proposed concept of the digital eco-factory was applied to a PCA (Printed Circuit Assembly) line. The trial system was implemented using the commercially available multi-agent simulator "artisoc." A PCA line consists of a solder paste printing machine, three electronic part mounters, a reflow furnace and a testing machine. In the PCA line, the processes on these machines proceed in sequence, and the machines' capabilities are modelled as individual machine agents to enable precise simulation. Six types of printed boards are produced, differing in the number of mounted electronic components and the solder temperature. The production process starts when a blank PCB (Printed Circuit Board) is input to the solder paste printing machine. A PCB is modelled as a part agent, which is one kind of product agent. Figure 12 shows the modelling concept for the PCA line and parts of the concrete descriptions of some of the agents in "artisoc"; in this example there are two PCA lines. Figure 13 shows displays of an execution example of the virtual production of the PCA. The animation display of the agents' condition is seen in the upper left part of Figure 13, and the window in the upper right part is the control panel for setting the production volume of each type of PCA. The power consumption of each machine on each line, from the environmental view, is monitored in the lower part of Figure 13. The block graph at the lower left shows the power consumption of each machine in PCA line no. 1; the power consumption of the reflow soldering oven is predominantly large. The same phenomenon can be seen for the PCA line shown at the lower right.

CONCLUSIONS

For the practical implementation of the proposed digital eco-factory, its detailed design has been discussed in this paper. The key issue is how to precisely model the production activities both statically and dynamically.
In this paper, it is proposed that multi-agent technologies be applied to the modelling of the production line and of the production behaviour. All elements configuring the production line, including the manufactured products, are implemented as software agents. The agents communicate with each other and autonomously construct a virtual production line. Through virtual manufacturing in the virtual production line, environmental effects can be estimated. The small trial example shows the effectiveness of the proposed implementation method. A digital eco-factory will support sustainable discrete manufacturing by virtually executing production. For future work, more trial implementations are required, and a further detailed design should be generated based on the results of these trial implementations.

ACKNOWLEDGEMENTS

The authors thank the members of the research project "the digital eco factory" of FAOP (FA Open Systems Promotion Forum) in MSTC (Manufacturing Science and Technology Center), Japan, for fruitful discussions and their support. The authors are also grateful to Dr. Udo Graefe, retired from the National Research Council of Canada, for his helpful assistance with the writing of this paper in English.
size: 24,430 | authorids: [ "1003725", "1003717" ] | affiliations: [ "488126", "375074" ]
halid: 01485828 | lang: en | domain: [ "info" ] | timestamp: 2024/03/04 23:41:48 | year: 2013 | url: https://inria.hal.science/hal-01485828/file/978-3-642-41329-2_33_Chapter.pdf
Nafisa I. Yusupova (email: [email protected]), Gyuzel R. Shakhmametova, Elmira Kh. Dusalina (email: [email protected])

Enterprises Monitoring for Crisis Preventing Based on Knowledge Engineering

Keywords: Enterprise monitoring, crisis preventing, bankruptcy, decision support system, expert system technology, data mining

INTRODUCTION

Enterprise monitoring is a component of the management process; it is the continuous observation and analysis of an enterprise's activity with tracking of the dynamics of changes. Monitoring of enterprises is important for identifying possible signs of crisis states, preventing them and keeping enterprises safe. Enterprise crisis states include insolvency, the inability to pay debts and bankruptcy. Quick assessment of changes in the financial state of the enterprise makes it possible to realize management decisions at an early stage of the crisis and to avoid negative consequences for the enterprise. One of the main enterprise crises is bankruptcy. Research in the field of bankruptcy monitoring has been carried out for a long time and can be found in the papers of many scientists as well as in IT solutions; these problems are considered in detail in [Yusupova, Intelligent Information Technologies in the Decision Support System for Enterprises Bankruptcy Monitoring]. In this article a decision support system (DSS) for enterprise monitoring based on knowledge engineering is proposed. The second section describes the state of the art in enterprise bankruptcy forecasting, the third the problem statement. The decision support system for enterprise monitoring is considered in the fourth section. The fifth section is devoted to the DSS modules in detail, and in the sixth section a DSS implementation efficiency analysis is presented.

STATE OF THE ART

There are two main approaches to enterprise bankruptcy forecasting in modern business and financial performance practice [Belyaev, Anti-Crisis Management]. Quantitative methods are based on financial data and include the following coefficients: the Altman Z-coefficient (USA), the Taffler coefficient (Great Britain), the two-factor model (USA), the Beaver metrics system and others. A qualitative approach to enterprise bankruptcy forecasting relies on comparing the financial data of the enterprise under review with the data of bankrupt businesses (Argenti A-account, Scone method). The integrated points-based system used for the comprehensive evaluation of business solvency combines characteristics of both the quantitative and the qualitative approaches. An apparent advantage of these methods is their systematic and complex approach to forecasting signs of crisis development; their weakness lies in the fact that the models are quite complicated for decision making in multi-criteria problems, and the resulting forecasting decision is rather subjective. The analysis of methods for predicting enterprise bankruptcy and of the capabilities of well-known IT solutions in this field showed that the development of a decision support system for bankruptcy monitoring is needed [Yusupova, Intelligent Information Technologies in the Decision Support System for Enterprises Bankruptcy Monitoring].
The data required for anti-crisis management are in most cases semi-structured, and therefore the application of intelligent information technologies is necessary [Jackson, Introduction to Expert Systems][Duk, Data Mining]. Financial and economic application software available on the market today is quite varied and heterogeneous. The necessity to develop such software products is dictated by the need of enterprises to receive management data promptly and to forecast signs of crisis development. To one extent or another, tools for anti-crisis management are available in a number of ready-made IT solutions [Yusupova, Intelligent Information Technologies in the Decision Support System for Enterprises Bankruptcy Monitoring]. However, the data analysis in many software products actually consists of providing the necessary strategic materials, while software products should meet growing needs such as analysing and forecasting enterprise financial performance in the next reporting period. The distinguishing feature of the presented research is the possibility of forecasting indications of fraudulent bankruptcy at its early stages, when it is still possible to take preventive measures.

PROBLEM STATEMENT

Enterprise monitoring involves the observation of an enterprise's activity and the timely detection and assessment of adverse effects, and includes the creation and implementation of modern techniques providing automated data collection and transmission. The goal of enterprise monitoring is the early detection, warning and prevention of crisis (bankruptcy) signs based on the analysis of the enterprise's financial indicators. Enterprise monitoring makes the bankruptcy process more transparent, reduces the subjectivity of the assessment of the enterprise's economic state, and warns the decision maker early when signs of a false bankruptcy are present. The data required for enterprise monitoring are both structured (characterized by a large volume and representing diverse information that contains hidden patterns) and semi-structured, which creates an information processing problem. Therefore, it is not always possible to solve the decision-making support problem without applying intelligent information technologies. The authors aim to develop models and algorithms based on intelligent technologies for detecting a crisis state of the enterprise while it is still in its early stages, allowing timely changes to the development strategy of the enterprise; this increases the stability and economic independence of the enterprise and reduces the impact of the human (subjective) factor on important management decisions. The decision support system for crisis management is discussed in this article using the example of bankruptcy monitoring.

DECISION SUPPORT SYSTEM FOR MONITORING BANKRUPTCY

The major aspect of the bankruptcy monitoring problem is the timely analysis and identification of signs of fraudulent bankruptcy [Belyaev, Anti-Crisis Management]. The basis of the whole complex of techniques for the decision support system (DSS) is the legally approved methodical instructions on accounting and analysis of an enterprise's financial position and solvency, used to group enterprises depending on the level of bankruptcy risk, as well as the techniques for identifying signs of fictitious and deliberate bankruptcy.
These techniques are currently used by auditors and arbitration managers. To develop the decision support system for monitoring enterprise bankruptcy, the authors propose the general scheme of the DSS shown in Figure 2 and use knowledge engineering, expert system (ES) technology [Jackson, Introduction to Expert Systems] and data mining (DM) technology [Senthil Kumar, Knowledge Discovery Practices and Emerging Application of Data Mining: Trends and New Domains]. The expert system technology underlies two modules of the DSS for bankruptcy monitoring [Shakhmametova, Expert System for Decision Support in Anti-Crisis Monitoring]:
- a module for grouping companies depending on the level of bankruptcy risk (module 1);
- a module for identifying signs of illegal bankruptcy (module 2).
Primary, intermediate and resulting data are stored in the main decision support database, which is organized according to the relational model. To keep the decision support system operating, the primary data on the company are imported into the system either automatically or manually. Interaction between the DSS and the user is carried out by means of an interface subsystem. In the first phase of the DSS, the enterprise is classified according to the degree of the threat of bankruptcy by means of module 1 of the expert system (Figure 3). Depending on the results, the enterprise is either checked for signs of fraudulent bankruptcy (step I, module 1 of the expert system), or its financial performance is forecasted using data mining technology (step II, DM module). In the third phase, the signs of deliberate bankruptcy are identified on the basis of the forecasted values (step III, module 2 of the expert system). In step IV, a report is made for the decision maker.

DSS MODULES

Expert system modules. ES module 1, which groups enterprises in accordance with the degree of bankruptcy threat, classifies enterprises on the basis of their financial indicators into five groups:
- group 1 - solvent enterprises, able to pay their current obligations fully and within the prescribed period at the expense of their current economic activity or liquid assets (G1);
- group 2 - enterprises without sufficient financial resources to ensure their solvency (G2);
- group 3 - enterprises with bankruptcy signs as established by law (G3);
- group 4 - enterprises facing a direct threat of the institution of bankruptcy proceedings (G4);
- group 5 - enterprises in respect of which an arbitral tribunal has accepted for consideration an application for recognition of the enterprise as bankrupt (G5).
This grouping determines which enterprises should be analyzed for potential signs of deliberate bankruptcy and which enterprises, in respect of which bankruptcy proceedings have already been entered, should be analyzed for potential signs of fictitious bankruptcy. A production model of knowledge representation is applied for the knowledge base development (Table 1), where R1 means that false bankruptcy signs are present; R2 - false bankruptcy signs are present and fixed assets can be withdrawn from the enterprise; R3 - false bankruptcy signs are absent and the enterprise pays its compulsory payments; R4 - false bankruptcy signs are absent and the enterprise is in a difficult financial situation; R5 - false bankruptcy signs are present and a deliberate accumulation of debts for subsequent cancellation is taking place. ES module 2 for detecting false bankruptcy signs, based on the analysis of financial coefficients, determines the presence of a false bankruptcy. The financial coefficients (provision of debtor obligations by assets, net asset value, share of long-term investments in assets, share of creditor debts in liabilities, etc.) are calculated on the basis of the enterprise's financial indicators. Table 2 shows the operational rules of the expert system module for detecting the presence of false bankruptcy signs.
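The following sketch only outlines the four-step control flow of Figure 3 (classification, forecasting, detection of illegal bankruptcy signs, reporting). The three modules are passed in as functions, since their internals are given elsewhere (Tables 1 and 2, and the data mining module below); the branch condition deciding which enterprises skip the forecasting step is an assumption, as the text does not state it explicitly.

```python
# Sketch of the DSS control flow (Figure 3). The concrete module implementations
# are injected as functions; only the phase ordering is taken from the text.

def monitor_enterprise(indicators, history, classify, forecast, detect_signs, horizon=3):
    """indicators: current financial indicators; history: past indicator values."""
    group = classify(indicators)                       # step I: ES module 1 (G1..G5)
    if group == "G5":
        # Bankruptcy proceedings already entered: check for fictitious bankruptcy on
        # the current values (assumed branch; the paper does not spell this out).
        forecasted = indicators
    else:
        forecasted = forecast(history, horizon)        # step II: DM module
    signs = detect_signs(indicators, forecasted)       # step III: ES module 2 (R1..R5)
    return {"group": group, "signs": signs}            # step IV: report to decision maker
```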
Data mining module. The problem solved by the data mining module of the DSS for monitoring enterprise bankruptcy is forecasting the financial indicators of the enterprise (company); it is considered in detail in [Yusupova, Data Mining Application for Anti-Crisis Management]. This problem can be seen as a time series forecasting problem, as the data for the prediction of financial indicators are presented as sequences of measurements collected at non-random moments of time. The dynamics of many financial and economic indicators have a stable fluctuation component, and in order to obtain accurate predictive estimates it is necessary to represent correctly not only the trend but also the seasonal components. The use of data mining methods in time series forecasting makes the solution of this task possible. These methods have a number of benefits: the ability to process large volumes of data; the ability to discover hidden patterns; and the use of neural networks in forecasting, which allows results of the required accuracy to be obtained without determining a precise mathematical dependence. There are many other benefits of data mining, such as basic data pre-processing, storage and transformation of data, batch processing, importing and exporting of large volumes of data, availability of data pre-processing units, and ample opportunities for data analysis and forecasting. An algorithm for forecasting the companies' financial indicators has been developed (Figure 4). Forecasting of enterprise financial indicators in the DSS can be performed by means of a number of DM techniques, such as partial and complex data pre-processing, autocorrelation analysis, the "sliding window" method and neural networks. In solving the problem of forecasting a time series with the aid of a neural net, the values of several adjacent samples from the initial data set must be input into the analyzer. This method of data sampling is called a "sliding window" (window - because only a certain area of the data is highlighted; sliding - because this window "moves" across the whole data set). The sliding window transformation has the parameter "depth of plunging" - the number of "past" samples in the window. The software implementation of the data mining module to forecast the financial indicators of the enterprise is performed by means of an analytical platform [8]. As mentioned above, the data mining module is realized in the following main steps: primary data input; application of the "sliding window"; neural network construction and training; forecasting. Each financial indicator has its own prediction algorithm, which includes the step size of the sliding window, the neural network (NN) structure, and the form of the activation function and its value (Table 3). These parameters are defined for each enterprise individually.
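The sliding-window transformation described above can be sketched as follows. The paper trains a neural network on an analytical platform [8]; here a linear least-squares model stands in for the network so that the example stays self-contained, and the window depth and the sample series are illustrative only.

```python
# Sketch of the "sliding window" transformation plus a simple forecaster fitted on the
# windows. A neural network would replace the linear model in the actual DSS.
import numpy as np

def sliding_window(series, depth):
    """Turn a series x_1..x_N into rows [x_(t-depth)..x_(t-1)] with target x_t."""
    X, y = [], []
    for t in range(depth, len(series)):
        X.append(series[t - depth:t])
        y.append(series[t])
    return np.array(X), np.array(y)

def fit_forecaster(series, depth=4):
    X, y = sliding_window(series, depth)
    X1 = np.hstack([X, np.ones((len(X), 1))])          # add an intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def forecast(series, coef, depth=4, steps=3):
    """Iteratively forecast the next `steps` values (e.g. 3 reporting periods)."""
    window = list(series[-depth:])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef[:-1], window) + coef[-1])
        out.append(nxt)
        window = window[1:] + [nxt]
    return out

# Example with a synthetic indicator series (illustrative values only).
indicator = [1.0, 1.1, 1.3, 1.2, 1.4, 1.5, 1.7, 1.6, 1.8, 1.9]
coef = fit_forecaster(indicator, depth=4)
print(forecast(indicator, coef, depth=4, steps=3))
```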
DSS IMPLEMENTATION EFFICIENCY ANALYSIS

The DSS has been used in the state monitoring of a number of industrial and agro-industrial enterprises of the Republic of Bashkortostan, Russia (Table 4). ES module 1, which groups enterprises in accordance with the degree of bankruptcy threat, correctly determined the specific group membership of the enterprises in all cases. For the efficiency analysis of the proposed decision support system, a comparative analysis of the DSS results against the results of classical methods was conducted. According to the classical methods, enterprise 1 is unprofitable, which confirms the results of applying the decision support system; enterprise 4 is functioning effectively, which also confirms the DSS results. Enterprise 2 and enterprise 3 were identified by the classical methods as functioning effectively, but this conclusion was not confirmed by real data, namely by the results of analysing the enterprises' accounting balances. This situation demonstrates the high efficiency of the system: the proposed decision support system diagnoses an enterprise's financial condition more accurately. The analysis of the effectiveness of the data mining module is based on a comparative analysis of the financial indicators for the same period of time obtained directly from the enterprise and forecasted through data mining. A fragment of this analysis, with the deviations of the forecasted values of the financial indicators from the actual data, is presented in Table 5. The analysis showed that the deviations of the forecasted values from the real data are in the range from 1.35% to 8.74%; the average deviation is about 6.5%, which is quite a good result for forecasting. The analysis of ES module 2 for detecting false bankruptcy signs also showed the high efficiency of the system; this conclusion is confirmed by the analysis of Table 6. Thus, it may be concluded that the considered informational support methods are adequate for a complex decision-making support system designed to prevent crises in enterprise monitoring.

CONCLUSIONS

The decision support system for bankruptcy monitoring, including the data mining module, has been developed. The decision maker using the DSS may be a top manager or a supervisory authority. Users of the system can monitor the major trends in the economic processes of the enterprise. With the help of the expert system, the enterprise is classified according to the degree of the bankruptcy threat. Then, with the help of data mining means, neural networks in particular, the enterprise's financial indicators can be forecasted for a definite period of time (for example, 3 months). The aim of the neural network at this stage is to capture the regularities of the changes in the financial indicators and detect them. Then, on the basis of the forecasted indicators, the signs of illegal bankruptcy of the enterprise are identified with the help of the expert system. The condition of the enterprise is thus determined not for the present moment but for a definite future time period (for example, 3 months ahead), which gives the opportunity to take measures preventing the enterprise from fraudulent bankruptcy. The efficiency analysis reveals good results of the DSS implementation for bankruptcy monitoring. This research has been supported by grants No. 11-07-00687-a and No. 12-07-00377-a of the Russian Foundation for Basic Research and by the grant "The development of tools to support decision making in different types of management activities in industry with semi-structured data based on the technology of distributed artificial intelligence" of the Ministry of Education of the Russian Federation.

Fig. 1. - Enterprise monitoring system
Fig. 2. - The general scheme of the DSS in monitoring enterprise bankruptcy
Fig. 3. - The steps of using the DSS modules
Fig. 4. - Main stages of the data mining module
Table 1. Expert system module 1 - knowledge base fragment

Rule 1: If K1 <= 6, Then G1
Rule 2: If K1 > 6 And K2 >= 1, Then G1
Rule 3: If K1 > 6 And K2 < 1, Then G2
Rule 4: If K3 = 1, Then G3
...
Rule 11: If K10 = 1, Then G5
...

Table 2. Expert system module 2 - knowledge base fragment

Rule 1: X1: If K1(tj+Δt) < K1(tj), Then R1, Or X2
Rule 2: X2: If K2(tj+Δt) < K2(tj), Then R1, Or X3
Rule 3: X3: If K3(tj+Δt) < K3(tj), Then R1, Or X4
Rule 4: X4: If K4(tj+Δt) < K4(tj), Then R2, Or X5
Rule 5: X5: If K5(tj+Δt) >= K5(tj), Then R2, Or X6
Rule 6: X6: If K6(tj+Δt) = K6(tj), Then X7, Or X8
Rule 7: X7: If K7(tj+Δt) < K7(tj), Then R1, Or R4
Rule 8: X8: If K6(tj+Δt) > K6(tj), Then X9, Or R3
Rule 9: X9: If K7(tj+Δt) <= K7(tj), Then R5, Or R4
...

Table 3. Algorithms of data mining application to forecast the enterprise's financial indicators
Table 4. The enterprises' characteristics
Table 5. Deviations of forecasted values from actual data, in percentage

Table 6. Comparative analysis of the ES module 2 application results (fragment)

Enterprise 1 - DSS result: false bankruptcy signs are present, fixed assets can be withdrawn from the enterprise. Real situation: investigating authorities initiated verification.
Enterprise 2 - DSS result: false bankruptcy signs are absent, the enterprise is in a difficult financial situation. Real situation: the enterprise continues working, compulsory payment debts increase.
Enterprise 3 - DSS result: false bankruptcy signs are present. Real situation: the enterprise continues working, enterprise obligations constitute 95%.
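A minimal sketch of how the knowledge-base fragments in Tables 1 and 2 could be evaluated as production rules is given below. Only the rules listed in the fragments are encoded, K1..K10 denote the financial indicators/coefficients used by the modules (their definitions are not reproduced here), and the rule priority in module 1 is an assumption because the fragment does not specify a conflict-resolution strategy.

```python
# Sketch of evaluating the knowledge-base fragments of Tables 1 and 2.

def module1_group(K):
    """Table 1 fragment: assign a bankruptcy-threat group G1..G5 from indicators K1..K10."""
    # Rule priority (more severe groups checked first) is an assumption; the table
    # fragment does not specify a conflict-resolution strategy.
    if K.get("K10") == 1:
        return "G5"                      # Rule 11
    if K.get("K3") == 1:
        return "G3"                      # Rule 4
    if K["K1"] <= 6:
        return "G1"                      # Rule 1
    return "G1" if K["K2"] >= 1 else "G2"  # Rules 2 and 3 (K1 > 6)

def module2_result(K_now, K_next):
    """Table 2 fragment: chain X1..X9 comparing coefficients at t and t+dt, yielding R1..R5."""
    if K_next["K1"] < K_now["K1"]: return "R1"        # X1
    if K_next["K2"] < K_now["K2"]: return "R1"        # X2
    if K_next["K3"] < K_now["K3"]: return "R1"        # X3
    if K_next["K4"] < K_now["K4"]: return "R2"        # X4
    if K_next["K5"] >= K_now["K5"]: return "R2"       # X5
    if K_next["K6"] == K_now["K6"]:                   # X6 -> X7
        return "R1" if K_next["K7"] < K_now["K7"] else "R4"
    if K_next["K6"] > K_now["K6"]:                    # X8 -> X9
        return "R5" if K_next["K7"] <= K_now["K7"] else "R4"
    return "R3"                                       # X8, "Or" branch

print(module1_group({"K1": 7, "K2": 0.5}))            # -> G2
print(module2_result({"K1": 2, "K2": 2, "K3": 2, "K4": 2, "K5": 2, "K6": 2, "K7": 2},
                     {"K1": 2, "K2": 2, "K3": 2, "K4": 2, "K5": 1, "K6": 3, "K7": 1}))  # -> R5
```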
size: 19,380 | authorids: [ "1003726", "1003727", "1003728" ] | affiliations: [ "488127", "488127", "488127" ]
halid: 01485832 | lang: en | domain: [ "info" ] | timestamp: 2024/03/04 23:41:48 | year: 2013 | url: https://inria.hal.science/hal-01485832/file/978-3-642-41329-2_37_Chapter.pdf
Clemens Schwenke (email: [email protected]), Thomas Wagner (email: [email protected]), Klaus Kabitzsch (email: [email protected])

Event Based Identification and Prediction of Congestions in Manufacturing Plants

Keywords: Semiconductor AMHS, Model building, Event analysis, Congestion prevention

Introduction

In the modern semiconductor industry, more and more highly integrated, customized wafer products have to be produced in ever shorter periods of time. Consequently, a large number of production steps for a large variety of different products is carried out on the one hand. On the other hand, the production equipment is used flexibly, so that the transport system in a highly automated factory has to be adjusted frequently to new routes for wafer transports between stations. In general, these stations are connected by automated material handling systems (AMHS), which are complex interwoven networks of transport elements such as conveyor belts, rotary tables and other handling devices. Besides adjusting the transport system to new demands, the operating engineers of automated material handling systems face one main problem: AMHS often show congestion phenomena, which reduce the throughput in a wafer fabrication facility (fab). Most of the time, congestions result in queues of work pieces waiting for other work pieces because stations are temporarily overloaded or switched off (down times). Modern material handling systems provide features to detect undesirable situations, adjust to them automatically and resolve congestions. For this feature, intelligent material flow routing rules have to be implemented, as well as rules for altering the feed of new work pieces entering the system. But in real fabs, practitioners ask themselves the following question: which rules have to be implemented and, more importantly, how can these rules be determined systematically? In order to analyze the overall performance of transport systems, event data of material passing certain waypoints of the system has been collected in log files. But the task of analyzing transient congestions and extracting conclusions is often still a Sisyphean undertaking, for three reasons. First, this task is still carried out mostly manually by visually inspecting log files. Second, the effort of studying log files sometimes does not result in generally applicable rules. Third, some congestions seem unexplainable. In order to free the expert from this time-consuming task, this paper introduces an approach by which event-based congestion analysis and prediction can be executed automatically. The approach for semi-automatic event data inspection consists of the following steps: collecting relevant event data, building a state model of the transport process, identifying temporarily overloaded segments, backtracking to influencing segments, and analyzing congestions in order to extract rules for the prediction of congestions. As a result, rules for predicting the occurrence of congestions are derived. For validation, these rules are applied to new event data of the same AMHS, so that congestions are predicted early enough for operators to be able to take action. For an exemplary use case, a set of trace data of wafer lots in an automated production line has been used to prove the approach.
All steps of this workflow have been implemented in a software framework and tested against real fab data. The paper is structured as follows: related work is considered in Section 2; the approach for data inspection, including an exemplary validation, is described in Section 3; conclusions are drawn and an outlook is given in Section 4.

Related Work

The authors investigated several possibilities to identify and analyze congestions in a given transport system. Consequently, some relevant approaches are discussed below, and the disadvantages of those that initially seem most obvious and useful are portrayed. The first thought when transport systems are to be examined is queuing systems. But pure queuing theory could not easily be applied to the authors' real-world use case, because arrival rates, service durations and sometimes even the capacities of the system's elements change constantly. Consequently, the second thought is time series. But the application of pure classic time series analysis was not feasible, because the models could not explain the observed phenomena exactly enough, nonsense correlations were found, or the calculation costs were too high. Third, the authors considered state model building for examining event sequences. Finally, the authors combined findings from several specific fields. Therefore, the consideration of related work covers state model building, queuing theory and time series.

State model building

Automatic state model building requires event logs as input data and provides highly aggregated information about state changes in a system. The prerequisite is that recorded events can be understood as notifications of state changes of entities. Briefly, the essence of state model building is the extraction of a graph out of traces of events. In the case of material flow systems, events are recorded when loads enter workstations where they are processed, or when they enter conveyor segments on which they are transported to a succeeding workstation. In the extracted graph, nodes represent states of loads being in a certain station or transport segment, and edges represent the transfer of a load into another station or transport segment. For this kind of event data, an event always indicates that a load has entered a work station or transport segment. The use of discrete state models describing a system's (or device's) behavior as a sequence of possible steps has been studied successfully before [Kemper, Trace based analysis of process interaction models]. On the one hand, state models are useful to monitor or identify business processes [Agrawal, Mining process models from workflow logs]. Van der Aalst et al. [van der Aalst, Business process mining: An industrial application] used state model building as a method for analyzing business processes, where events are generated when certain work steps begin and end. In so-called process mining, models of business processes are to be recovered or checked. Additionally, the relevance of business steps can be evaluated and performance indicators can be calculated based on event logs. The main problems are to recover adequate models and to identify relevant process steps, since the log data of business processes, involving humans and external events, often contain non-deterministic portions. The resulting models then have to be analyzed mostly manually, sometimes including a few automatically calculated performance parameters if applicable.
On the other hand, state model building can be used for the examination of event logs of machines or transport systems, for example in the semiconductor industry or in logistics applications. Compared to extracted models of business processes, the extracted models of logistic and manufacturing applications are more deterministic but contain many more states, so that sophisticated, tailored analysis approaches are necessary to detect and explain unwanted phenomena such as extremely varying delays [Gellrich, Modeling of Transport Times in Partly Observable Factory Logistic Systems based on Event Logs] or changing reject occurrence [Shanthikumar, Queueing theory for semiconductor manufacturing systems: a survey and open problems]. In order to enrich a pure state model with more information, Vasyutynskyy suggested combining state model building with the calculation of performance indicators such as overall throughput times, holding times and inter-arrival times on states; the result is called an extended state model [Vasyutynskyy, Analysis of Internal Logistic Systems Based on Event Logs]. State models can be used as the basis for a detailed analysis of congestions if these manifest as tailbacks of loads waiting for preceding loads [Schwenke, Event-based recognition and source identification of transient tailbacks in manufacturing plants]. In that work, an approach was introduced to automatically carry out transient tailback recognition and cause identification. In order to identify the origins and causes of the observed tailbacks, historic event log data of loads passing certain waypoints were inspected. The approach is based on the analysis of holding times and capacities of transport segments. As a result, complete lists of tailbacks and affected segments are provided, and for each tailback an initial cause event is determined. But this tailback analysis approach does not relate the occurrence of tailbacks to the constantly changing arrival rates of new loads entering the system. Therefore it was necessary to investigate different approaches to enable a successful prognosis of congestions.

Queuing Theory

Queuing theory is a tool for estimating performance indicators in networks of waiting lines and service stations. A service station takes a certain amount of time, e.g., for processing one work piece. The work pieces, or loads, travel through the system and wait in line in front of the service stations, thus forming queues. The main application is the design of queuing systems [Beranek, A Method of Predicting Queuing at Library Online PCs], [Horling, Using Queuing Theory to Predict Organizational Metrics]. At design time, important questions are: what is the average queue length, how long is the average waiting time Wq, and how many service stations are needed? For answering the question of the average waiting time in the queue Wq, Formula (1) can be used [Gross, Fundamentals of queuing theory]:

Wq = W - 1/μ   (1)

The arguments of this formula are the complete waiting time W that is spent in the system of queue and service station; the time W includes the average service duration 1/μ. Alternatively, the time Wq can be calculated using the arrival rate λ and the service rate μ.
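A small numeric illustration of Formula (1): the relation Wq = W - 1/μ holds generally, while the closed form in terms of λ and μ assumes a single M/M/1 station with stable rates, which is precisely the assumption questioned below; the numbers are illustrative.

```python
# Small numeric illustration of Formula (1). Wq = W - 1/mu holds generally; the
# closed form lam / (mu * (mu - lam)) additionally assumes an M/M/1 station with
# stable rates, which is exactly the assumption criticised in the following text.
def wq_from_total_wait(W, mu):
    return W - 1.0 / mu                      # subtract the average service duration

def wq_mm1(lam, mu):
    assert lam < mu, "queue is only stable if the arrival rate is below the service rate"
    return lam / (mu * (mu - lam))

lam, mu = 8.0, 10.0                          # loads per hour (illustrative)
W = 1.0 / (mu - lam)                         # M/M/1 total time in system: 0.5 h
print(wq_from_total_wait(W, mu), wq_mm1(lam, mu))   # both 0.4 h
```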
The basic assumption in queuing theory is stable arrival and service rates. In contrast, these rates change frequently in the investigated real-world systems, e.g. depending on the product mix and order situation. As a result, Formula (1) for estimating Wq was not directly applicable. The average time loads spend in a conveyor segment is referred to as the holding time in the following.

Time series analysis

Classic time series analysis comprises many disciplines [Hamilton, Time series analysis]; for this work the most important ones are the following. The first is time series analysis in the time domain, where trends and seasons are often extracted by developing linear models until the residues cannot be minimized any further and resemble stochastic white noise. This approach is used, e.g., in economics, biology and agriculture [Mead, Statistical Methods in Agriculture and Experimental Biology]. Sometimes this approach is also used in physics or engineering, but only as a last resort if model building using known facts did not provide useful results [Palit, Computational intelligence in time series forecasting: Theory & engineering applications]. For example, time series analysis is used in the field of predictive maintenance to model trends, seasons and noise of deterioration indicators [Krause, A generic Approach for Reliability Predictions considering non-uniformly Deterioration Behaviour]. But these approaches try to smooth out outliers instead of explaining them. The second discipline is often applied for modeling the remaining residues, after trend and season have been extracted, by estimating autoregressive integrated moving average (ARIMA) models. ARIMA models are often used to model processes in economics, especially in the financial industry, trying to predict effects in the stock market [Wang, Stock market trend prediction using ARIMA-based neural networks]. This is done by assuming the stochastic nature of the unexplainable processes [Bollerslev, Modeling and pricing long memory in stock market volatility], [Nelson, The time series behavior of stock market volatility and returns]. Therefore, the main ingredients of these models are two parts, the autoregressive (AR) part and the moving average (MA) part. The AR part tries to model the time series by explaining the current value mainly by the previous value. The MA part models white noise such that, in conjunction with the AR part, the given time series can be approximated. Unexplainable peaks are generally considered outliers and are smoothed [Breen, Economic significance of predictable variations in stock index returns]. In contrast, the authors needed to explain the outliers instead of smoothing them. As a result, the above-mentioned time series analysis approaches were not applicable. One reason for this is that, in reality, peaks are not always stochastic and do not depend solely on the previous value.

Summary

When first confronted with the problem, the authors tested the following approach. First, the inter-arrival times and holding times on conveyor segments in front of stations or rotary tables were examined for aggregated periods of time. With Formula (1) of queuing theory, the waiting times W at stations were estimated for these time periods, but they did not match the actually observed holding times. In a different approach, the authors aimed to produce forecast models by estimating autoregressive moving average (ARMA) models for inter-arrival times and holding times.
The predictions of these models were used to estimate the current waiting time W by applying Formula (1) to the forecasted arrival rates. Unfortunately, the quality of these models was not good enough for reliable predictions because of one important fact: forecast models tend to smooth peaks, because most of the time they are considered outliers. In the use case of investigating transport system data, however, the peaks of the holding times are exactly the congestions that are sought after and have to be explained. Consequently, generic, naive time series analysis was not constructive. As examined by the authors, the key to understanding how congestions build up and resolve is the combination of system knowledge with time series analysis. Congestions can travel through the system like waves and superimpose, thus causing significantly varying waiting times on certain conveyor segments in front of service stations. Therefore, there is a true relation only between certain arrival rates, service rates and waiting times. As a result, the authors integrated a step into the overall approach that selects only the relevant time series before they are investigated further. The suggested analysis approach consists of a workflow of five general steps, see Figure 1. First, event data of the AMHS has to be collected. Fig. 1. Workflow of Analysis of Congestions Second, a state model has to be extracted from this event data. Third, overloaded segments of the transport system have to be identified. Fourth, the relevant source segments that feed loads into the system are identified by systematic backtracking. Finally, the ultimate purpose of analyzing congestions by correlating them with arrival rates at source segments can be carried out. The first four steps are prerequisites for the last step. All five steps are described in detail in the following. Logging of event data The first step is the collection of event data in the factory's AMHS. That is, at each relevant conveyor belt or rotary table an event is logged. The event contains the essential information in three fundamental attributes: timestamp, segment number and load number. Based on these elementary attributes, a graph of the transport system can be built in the second step. State model building In this step, a state model of the transport system is extracted from the logged event data. The authors have published applications of this method before in [START_REF] Schwenke | Event-based recognition and source identification of transient tailbacks in manufacturing plants[END_REF], [START_REF] Wagner | Modeling and wafer defect analysis in semiconductor automated material handling systems[END_REF]. For completeness, the algorithm of the method is briefly described here. The algorithm extracts all relevant entities for building an extended state-transition model from a given log file of a given transport system. Consequently, the resulting model consists of the following entities.

S = {s1; s2; ...; sn}   (2)

S is the finite, non-empty set of states, representing transport system elements, e.g., rotary tables or linear conveyor modules as well as storage elements (stockers) or production equipment (work stations).

L = {l1; l2; ...; lm}   (3)

L is the finite set of loads, representing the moved entities, e.g., wafer carriers.

T ⊆ S × S   (4)

T is the finite, non-empty set of transitions, representing the interconnections between the single elements. T is a subset of all ordered pairs of states and thus a binary relation over S. For example, (s1; s2) ∈ T with s1, s2 ∈ S represents a transition from state s1 to s2. For instance, a rotary table can be used as a crossing, unification or split of transport streams. Therefore it is connected to several other elements and can exhibit multiple transitions. The event log is an ordered sequence of events E as follows.

E = {(τ1; s1; l1); ...; (τN; sN; lN)}   (5)

One event e = (τ; s; l) ∈ E is defined as a triple consisting of a timestamp τ ∈ Z, a state s ∈ S and a load l ∈ L. Z is the set of timestamps τ, so that Z = {τ1; ...; τN}. The above-mentioned entities S, T and L can be systematically extracted from this ordered sequence of events as shown in Figure 2. Fig. 2. State Model Building The model building algorithm is a loop that processes each event separately in one individual loop cycle. This loop consists of steps for extracting elements from events as well as for finding or creating model entities, so that they can be included in the model. Conditional decisions allow a loop cycle to be left early if not all steps have to be carried out because the current entities are already part of the model.
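The extraction of S, L and T from the ordered event sequence can be pictured with a minimal Python sketch; it assumes each logged event is already available as a (timestamp, segment, load) tuple, and the data structures and names are illustrative, not the authors' implementation.

```python
def build_state_model(events):
    """Extract states S, loads L and transitions T from an ordered event log.

    events: iterable of (timestamp, segment_id, load_id) tuples, sorted by time.
    Returns (S, L, T), where T is a set of (from_segment, to_segment) pairs.
    """
    S, L, T = set(), set(), set()
    last_segment_of_load = {}          # remembers the previous position of each load

    for timestamp, segment, load in events:
        S.add(segment)                 # create the state if it is not yet in the model
        L.add(load)                    # create the load if it is not yet in the model

        previous = last_segment_of_load.get(load)
        if previous is not None and previous != segment:
            T.add((previous, segment)) # an observed movement implies a transition
        last_segment_of_load[load] = segment

    return S, L, T

# Hypothetical mini log: load "A" moves from segment 1 via 2 to 3, load "B" follows.
log = [(0, 1, "A"), (5, 2, "A"), (7, 1, "B"), (11, 3, "A"), (13, 2, "B")]
S, L, T = build_state_model(log)
print(S, L, T)   # {1, 2, 3} {'A', 'B'} {(1, 2), (2, 3)}
```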
Identification of overloaded segments After the state model is extracted, the third step of the overall workflow can be executed. Overloaded segments are the result of congestions. In the state-transition model these segments are states. The suspect states are identified by finding states that sometimes exhibit unusually long holding times ω. Longer holding times are an effect of previous loads holding up succeeding loads and therefore affect the average holding times of loads on certain states. For each state, one corresponding average holding time ω(si) can be calculated. The states whose average holding time ω(si) exceeds the average holding time over all states are suspected to be affected by at least transient congestion effects. This comparison has to be executed for many fractions of time. Fig. 3. Identification of States that temporarily exhibit congestions As a result, a set of congestion states S_effect ⊆ S is found. Backtracking to influencing segments In order to find states that influence the holding time of congestion states, a backtracking is carried out. This is necessary to compare the time series of only those states that can actually have an influence, and not of others that are unlikely to have an impact. This backtracking is carried out for each congestion state. The relevant influencing states are called feeding states. In this context, a feeding state is the closest preceding state that either exhibits more than one outgoing transition, d_out(s) ≥ 2, or that is a load source of the system, e.g. a production equipment input. Closer states that merely represent linear conveyor segments do not have to be considered, because their arrival rates do not differ from those of the feeding state. For each identified feeding state, the same backtracking to previous feeding states is carried out recursively. This recursion terminates when the maximum backtracking depth b_max is reached or when no more preceding states can be found in the state-transition model. The result of this algorithm is a tree of feeding states for each congestion state; see Figure 4 for an example. The dashed line marks the maximum backtracking depth selected by the user. In this case, the time series of the two feeding states sf1 and sf2 have to be considered in the congestion analysis step described in the next subsection.
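The identification of congestion states and the subsequent backtracking can be illustrated with a small Python sketch; it builds on the event-log example given earlier, and the function names as well as the simplified handling of linear segments are assumptions for illustration, not the authors' implementation.

```python
from statistics import mean

def congestion_states(holding_times):
    """Flag states whose average holding time exceeds the overall average.

    holding_times: dict mapping state -> list of observed holding times in seconds.
    """
    averages = {s: mean(ts) for s, ts in holding_times.items() if ts}
    overall = mean(averages.values())
    return {s for s, avg in averages.items() if avg > overall}

def feeding_states(state, T, sources, b_max, depth=0, visited=None, tree=None):
    """Recursively backtrack from a congestion state to its feeding states.

    T: set of (from_state, to_state) transitions; sources: known load sources.
    A feeding state is the closest predecessor that has an out-degree >= 2
    or is a load source; plain linear segments are simply passed through.
    """
    if tree is None:
        tree = set()
    if visited is None:
        visited = set()
    if depth >= b_max or state in visited:
        return tree
    visited.add(state)
    for prev, nxt in T:
        if nxt != state:
            continue
        out_degree = sum(1 for a, _ in T if a == prev)
        if out_degree >= 2 or prev in sources:
            tree.add(prev)                  # feeding state found: go one level deeper
            feeding_states(prev, T, sources, b_max, depth + 1, visited, tree)
        else:                               # linear conveyor segment: keep walking back
            feeding_states(prev, T, sources, b_max, depth, visited, tree)
    return tree
```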
Increased backtracking depths can result in longer forecast lead times for congestions, but they also cause higher calculation costs, since more states have to be considered. Congestion analysis Once the above-mentioned prerequisites are available, the actual congestion analysis can be started. The presented approach focuses on the diagnosis and prediction of tailback events caused by the dynamic interactions of different transport system elements or areas. Other possible causes of tailbacks, like random failures of single transport elements, are much less related to the system behavior that is observable using the event logs described in Section 3.1 and are therefore not considered. However, the prediction of such tailbacks could be tackled using semantic information about the transport system's hardware, e.g. mean-time-between-failures considerations. Here, the progression of the inter-arrival times (IAT) of the identified feeding states is considered in order to identify conditions that provoked the anomalies on the congestion states. This approach allows inferences to be drawn from temporarily differing workload situations at the different load sources about the manifestation of transient tailbacks, e.g. due to temporary load concentration or mutual obstruction. Depending on the situation, not every feeding state identified in Section 3.4 has an influence on the appearance of tailbacks on the congestion states. To select the relevant subset of feeding states, several methods of selection can be considered. One approach could be to weight the IATs of the different feeding states and ignore those that transport only a seemingly irrelevant number of lots. However, the simplified example depicted in Figure 5 suggests that this is a misleading approach. Fig. 5. Influence of a low frequency feeding state on congestion probability In Figure 5, a congestion state A (see Figure 6) is shown which receives its loads at a rate of approximately three loads per minute from a major source B. If no other sources participate, no congestions appear at this state A, as indicated by the red line. However, there exists another feeding state C which considerably influences the holding times on state A. Although state C only contributes to the traffic with around one load in nine minutes (green line) to around one load in four minutes (blue line), it significantly increases the holding times of A and thereby even causes congestions (noticeable peaks in the blue line). In the presented case, this is caused by a deadlock prevention mechanism implemented in the transport system's controllers that blocks all traffic from C down to D once a load enters the critical area shown as a red cuboid in Figure 6. Therefore, it is necessary to measure the real influence of a feeding state's IAT on the HT of a congestion state regardless of its arrival rate. To achieve this, a correlation approach is used as a first step. For this purpose, the holding time series of the congestion state is correlated with the inter-arrival time series of all identified feeding states. Depending on their distance, it takes a certain amount of time until the inter-arrival times of the feeding states affect the holding times of the congestion state. To take these delays into account, each IAT time series is cross-correlated with the HT time series, using a default maximum lag lmax as defined in Formula (6), where N is the number of observations in the time series.

lmax = 10 · log10(N)   (6)

As a second step, the maximum negative correlation is sought among the resulting correlation values rl for each lag l, that is, the maximum suggested impact of a decreasing IAT (increasing arrival rate) of the feeding states on increased HTs of the congestion state. This correlation value rl is then checked for significance by comparing it with the approximated 99% confidence interval c (α = 0.01):

c = 2.58 / √N   (7)
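A compact numpy sketch of this correlation step is given below; it assumes equally spaced, equally long IAT and HT series and uses the default maximum lag and the approximate 99% bound from Formulas (6) and (7). The function name and the data handling are illustrative only.

```python
import numpy as np

def most_negative_lagged_correlation(iat, ht, max_lag=None, z_99=2.58):
    """Cross-correlate an IAT series with a HT series over lags 0..max_lag.

    Returns (best_lag, correlation, significant). The HT series is shifted by
    the lag so that IAT values are compared with later holding times.
    """
    iat, ht = np.asarray(iat, float), np.asarray(ht, float)
    n = len(iat)
    if max_lag is None:
        max_lag = int(10 * np.log10(n))          # default maximum lag, cf. Formula (6)

    best_lag, best_r = 0, 0.0
    for lag in range(max_lag + 1):
        x, y = iat[: n - lag], ht[lag:]          # IAT now vs. HT lag steps later
        r = np.corrcoef(x, y)[0, 1]
        if r < best_r:                           # looking for the most negative value
            best_lag, best_r = lag, r

    significant = abs(best_r) > z_99 / np.sqrt(n)   # approx. 99% bound, cf. Formula (7)
    return best_lag, best_r, significant
```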
Third, once the set of significant feeding states S_sig has been found and the corresponding lags yielding the maximum negative correlation for each state have been noted, the critical inter-arrival rates must be identified. These are the ones that cause overload situations on the congestion state if they appear in combination. For this purpose, a set V is constructed for each significant feeding state as follows.

V = {(IATi; HTi+l) | i = 1, ..., N − l}   (8)

Here, l is the lag that yielded the maximum negative correlation for this feeding state, IATi is the i-th inter-arrival time of the feeding state and HTi+l the holding time of the congestion state l steps later. In summary, this set V contains a mapping from the inter-arrival times of one feeding state to the corresponding holding times of the congestion state, shifted by l to compensate for the time delay between cause and effect as mentioned above. Afterwards, V is sorted by its IAT values in descending order, i.e., from the least to the most frequent lot appearance. Next, the HT components of V are scanned along the falling IAT values. Once the holding time on the congestion state first reaches or exceeds the critical value described in Section 3.3, a previous IAT value of the ordered set is used to define a rule indicating the danger of congestion. This rule sets a Boolean warning value depending on whether this IAT value is undershot. The parameter k can be used to manipulate the lead time and sensitivity of the congestion prognosis by choosing more conservative, i.e. larger, threshold values, so that the warning signal is set earlier. This procedure is repeated for every significant feeding state, and the resulting rules are combined into one single rule, suggesting a high congestion probability once all of the conditions are met. The construction of these rules will now be demonstrated by the example shown in Figure 6. In this example, state A's relevant critical holding time was found to be 60 seconds. In Figure 7 the ordered set V is shown for the congestion state A and the feeding state B. For this state, the congestion effects began to manifest once the IAT of state B was less than or equal to 27 seconds. Using a parameter value of 3 for the lead time parameter k, the next larger value is used, which is 28 seconds. In this example the congestion indication rule can therefore be defined as follows.

IAT_B ≤ 28 s   (9)

A second rule is derived from the holding and inter-arrival times of the states A and C as shown in Figure 8. Here, congestions on state A appeared once the inter-arrival times of state C were lower than or equal to 86 seconds. Using a k of 1, the corresponding congestion indication rule is as follows.

IAT_C ≤ 86 s   (10)

As a last step, the resulting rules must be combined to reflect the mentioned interrelation of the IATs between the corresponding states. A combined rule would be noted as follows.

(IAT_B ≤ 28 s) ∧ (IAT_C ≤ 86 s)   (11)

Conclusion and Outlook The presented approach has been implemented in a comprehensive analysis framework. User input is only required to define the input parameter k and the maximum backtracking depth b_max to influence the prognosis lead time. Subsequently, the warning rules are derived fully automatically and can afterwards be evaluated against the current transport system behavior at runtime.
The derived warning rules for congestion prognosis will serve as a basis for dynamic routing approaches within the transport system controllers. If their conditions are met, the controllers will be alerted about possible future congestion situations. As a possible countermeasure, they can reroute part of the incoming traffic flow across different system parts, thus gaining a consistent lot flow while sacrificing only a small amount of transport speed for a few lots. In the use cases investigated, the derived rules predicted the observed congestions accurately enough to allow for effective prevention measures in most cases. However, the approach also exhibited a few limitations that have to be considered regarding the choice of the parameter values k and b_max. If the maximum backtracking depth is set too large, too many feeding states have to be considered, eventually causing interferences between the growing variety of load situations. That means that several groups of feeding states may cause overload situations on a single congestion state independently. Using just the cross-correlation approach, this can neither be reliably distinguished nor expressed using only AND conjunctions of the warning rules. In addition, the lead time parameter k must be chosen carefully, since values that are too small may reduce the prognosis horizon too much. On the other hand, values that are too large may provoke many false-positive congestion warnings. Therefore, future work will focus on defining metrics to aid system experts in choosing the right parameter values. In addition, the authors will investigate a wider set of influencing variables to determine their suitability for making the predictions more accurate.
Fig. 4. Tree of influencing neighbor feeding states (result of the backtracking algorithm)
Fig. 6. Excerpt of the example system showing the congestion state A and the feeding states B and C
Fig. 7. Critical IAT for feeding state B
Fig. 8. Critical IAT for feeding state C
30,118
[ "1003732", "1003733", "1003734" ]
[ "96520", "96520", "96520" ]
01485833
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01485833/file/978-3-642-41329-2_38_Chapter.pdf
Dipl.-Ing Gerald Rehage M.Sc. Dipl.-Ing.(FH Frank Bauer Prof. Dr.-Ing Jürgen Gausemeier email: [email protected] Dr. rer Nat Benjamin Jurke Dr -Ing Peter Pruschek email: [email protected] Intelligent Manufacturing Operations Planning, Scheduling and Dispatching on the Basis of Virtual Machine Tools Keywords: Operations planning, scheduling, dispatching, machine tools, simulations, industry 4.0 Today, numerical-controlled machine tools are used for flexible machining of increasingly individualized products. The selection of the most economic tool, machining strategy and clamping position is part of the manufacturing operations planning and bases on the employees' practical knowledge. The NC programmer is supported by current CAM systems with material removal simulation and collision detection. This early validation avoids damages and increases the productivity of the machines. The benefit of simulations can be increased by a better model accuracy. In common CAM systems the machine behaviour is often insufficiently reproduced; for example, the dynamic characteristics of axes, the tool change and the chronological synchronization are simplified or neglected. In view of complex operations, a slow trial run on the real machine or substantial safety margins are necessary. The described deficits can be solved by virtual machine tools. Thereby a virtual numerical control and a detailed machine kinematic is used for an accurate simulation. The result is an error-free NC program which can be directly used on the real machine. In addition, the exact processing time is determined and supplied for the operations scheduling and dispatching. Furthermore, virtual machine tools provide promising approaches for an automated optimization of machining operations and machine set up. Parameters for the optimization of the processing time are, for example, different clamping positions or tooling arrangement. Simulating several parameters requires lots of computational power. Hence, the vision of the project "InVorMa" is a cloud application, which supports the operation planning, scheduling and dispatching of machine tools. A computer cluster (cloud) provides noticeably faster results than a single computer. Therefore, the machine tools and the manufacturing equipment of the user are cloned one-to-one in the cloud. The manufacturing documents are optimized by the cloud application before they are forwarded to the shop floor. The optimization involves the NC program for each machine as well as the distribution of orders. The practical knowledge of the manufacturing planner and the results of the optimizations are pre-processed for reuse by an integrated knowledge base. INTRODUCTION Manufacturing in high-wage countries requires the efficient use of resources. Increasingly individualized products require a highly flexible production system [START_REF] Abele | Zukunft der Produktion -Herausforderungen, Forschungsfelder, Chancen[END_REF]. In the field of machining of metals, the needed flexibility is achieved by numericalcontrolled machine tools. It is the function of the manufacturing planner to ensure a rational use of the operating means. This is based on his practical knowledge and furthermore on the utilization of machine simulations to avoid damage and increase productivity from the office. The aim of the project "Intelligent Manufacturing Opera- The current procedure of operation planning was documented with the pilot users as groundwork for the requirements of a simulative assistance in this field. 
Figure 1 shows the summarised steps, tasks and results. In the operation planning, the manufacturing methods, production resources and sequences are determined according to firm-specific goals (such as punctuality, profitability, quality). In addition, the processing time and setting time is predicted on the basis of empirical values and the pre-calculation. In this phase the order of raw and purchased parts are initiated. Results are the routing sheet, the allowed times and the procurement orders. It is the work task of the operations scheduling and dispatching to determine the start time and sequence of manufacturing orders as well as the allocation of resources with regard to the allowed times, scheduled delivery dates and the disposable machines. The Result is the up-dated master scheduling. The last step is the NC programming for every manufacturing operation on numerical-controlled machines. Today, CAM-Systems are used for NC programming away from the machine. These provide an automatic calculation of the tool path for predefined shapes (e.g. plane surface, island, groove) deduced from given CAD models of blank, finished part, tool and fixture. The setting of technological machining parameters, used tools and clamping positions is still a manual task of the NC programmer. Hereby, he has a huge impact on the processing time and quality. Results are the NC program, the process sheet and the sketch of set up. APPLICATION OF VIRTUAL MACHINE TOOLS Nowadays, the CAD supported NC programming includes the simulation of machining. This kind of verification has become quite popular due to the process reliability of machining with 4 to 5 axes [START_REF] Rieg | Handbuch Konstruktion[END_REF]. Against the background of reduced batch sizes, the simulation achieves an increasing acceptation also for machines with 3 axes, since it is possible to reduce the test runs for new workpieces and special tools and moreover to reduce the risk of discard. Therefor, common CAM systems provide a material removal simulation and an integrated collision detection. The material removal simulation shows the change of the workpiece during the machining. The automated collision detection reports any unwanted contact between the tool (shank, holder), workpiece und fixture. However, the reproduction of the real machine behaviour is mostly reproduced insufficient by these systems. For example, the dynamic characteristics of the axes, the movement of PLC controlled auxiliary axes, the automatic tool and pallet change, as well as the time synchronization of all movements are only simplified implemented or even neglected [START_REF] Kief | CNC-Handbuch[END_REF]. The basis of all common CAM systems is the emulation of the calculated tool paths by an imitated control. Therefore, the machine independent source code CLDATA (cutter location data) [START_REF]DIN 66215: Programmierung numerisch gesteuerter Arbeitsmaschinen -CLDATA Allgemeiner Aufbau und Satztypen[END_REF] is used instead of the control manufacturer specific NC program that run on the real machine. The machine specific NC program is compiled after the simulation by a post processor to adapt the source code to the exact machine configuration. The wear points of integrated simulations are known to the NC programmer, they are compensated by tolerant safe distances. This causes extended processing times and with it unused machine capacities. 
The result of the integrated simulation in CAM systems is a checked syntax, tool path, zero point and collision-free run of the NC program and additionally the approximated processing time. Nevertheless, a slow and careful test run is still necessary for complex machining operations due to the low modeling accuracy. The optimized machining by utilization of simulations requires a reliable verification of the operations that are defined in the NC program. The simulations in the contemporary CAM systems can't provide this due to the mentioned deficits. Fig. 2. -The simulation models of the virtual machine tool An approach to optimize the NC program away from the machine is the realistic simulation with virtual machining tools [START_REF]DMG Powertools -Innovative Software Solutions[END_REF]. This includes the implementation of a virtual numerical control with the used NC interpolation as well as the behaviour of the PLC and the actuators. Additionally the entire machine kinematics, the shape of the workspace and peripheries are reproduced in the virtual machine (figure 2). Input data is the shape of the blank and the used fixture as well as the machine specific NC program. The virtual machine tool enables the execution of the same tests as the real machine. This includes optimizing parameters (for example different clamping positions or tooling arrangements) to reduce the processing time. The result is an absolutely reliable NC program, which can run straight on the real machine. Additionally, the reliable processing time is determined by the simulation and made available to the operations scheduling. However, the variation of parameters (for example the clamping position) or adaptations (to minimize unnecessary operations and tool changes) have to be done manually by the user in the NC program. A new simulation run is necessary after each change and the result must be analysed and compared to previous simulations by the user. This is an iterative process until a subjective optimum (concerning time, costs, quality) is found. The simulation on a single PC runs only 2 to 10 times faster than the real processing time depending on the complexity of the workpiece. Today, the simulation of complex and extensive machining takes too long for multiple optimization runs. VISION: CLOUD-APPLICATION TO SUPPORT PROCESS PLANNING The illustrated possibilities of virtual machine tools offer promising approaches for optimizing the machining and setting up of the machine. The vision of the project InVorMa is a cloud application, which supports the employees in the planning, scheduling and dispatching of manufacturing operations on tooling machines (figure 3). Instead of passing the manufacturing order and documents directly to the shop floor, the relevant data is previously optimized by the cloud application. The optimization involves the NC program of individual machines as well as the efficient scheduling and dispatching of orders to individual machines. The user obtains the service over the internet from a cloud service, this provides considerably more rapid results compared to a simulation on local hardware. Recent market studies emphasize the potential benefits of an automated routing sheet generation, the integration of expert knowledge and the planning validation through simulation [START_REF] Denkena | Quo vadis Arbeitsplanung? Marktstudie zu den Entwicklungstrends von Arbeitsplanungssoftware[END_REF]. 
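The parameter variation described in the previous section can be pictured as a simple search loop. The following Python sketch is purely illustrative: it assumes a hypothetical simulate() call that returns the simulated processing time and collision status for a given set-up, which is a placeholder and not the interface of any actual virtual machine tool product.

```python
from itertools import product

def optimise_setup(nc_program, clamping_positions, tool_arrangements, simulate):
    """Pick the machine set-up with the shortest simulated processing time.

    simulate(nc_program, clamping, tools) is assumed to run the virtual machine
    tool and return (processing_time_s, collision_free); both the callable and
    its signature are hypothetical placeholders for this sketch.
    """
    best = None
    for clamping, tools in product(clamping_positions, tool_arrangements):
        processing_time, collision_free = simulate(nc_program, clamping, tools)
        if not collision_free:
            continue                        # discard set-ups that would collide
        if best is None or processing_time < best[0]:
            best = (processing_time, clamping, tools)
    return best                             # None if no collision-free set-up exists
```

In the envisaged cloud application, the individual simulation calls of such a loop would be dispatched in parallel to several instances of the virtual machine tool, which is what makes the evaluation of many combinations practical.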
FIELDS OF ACTION In the light of the presented tasks of operations planning, scheduling and dispatching as well as the exposed potentials and disadvantages of virtual machine tools, there are four fields of action (figure 4). 1. A significant increase in simulation speed is the basis of the intended optimization. The main approach is the use of powerful hardware in a computer cluster. For some time, "Cloud computing" is a highly topical technological trend [START_REF]Fujitsu Launches Cloud Service for Analytical Simulations[END_REF]. However, this technology has not yet been used for the simulation of virtual machine tools. 2. The optimized machining result from the evaluation of possible resource combinations and parameters. Depending on the workpiece shapes to be manufactured, there are different combinations of available tool machines, tools, fixtures for the machining. For example, the machine configuration and parameters can be used to control the clamping position, the tooling arrangement in the magazine and the superposition of the feed speed. To optimize the machining process, the possible combinations have to be simulated and evaluated automatically. 3. Optimizing the machining on each machine does not necessarily lead to an efficient scheduling. This requires a cross-machine optimization witch considers the processing time, the resource management and the occupancy rate of all machines. Waiting orders have to be economically dispatched to available machines. The operation scheduling needs to adapt continuously to the current situation, such as new orders and failures of machines or workers. Nowadays, operation planning is based essentially on the experience of the responsible manufacturing planner. The combination of resources as well as the machine settings is chosen with regard to the shape and the mechanical behaviour of the workpiece. If the machining result does not reach the expectations, this will be considered in further planning tasks. Therefore, a computer-aided optimization requires an aimed processing and reuse of technical and practical knowledge. CONCEPT AS A CLOUD APPLICATION The operation planning is assisted by the verification and optimization of the NC program and machine set up by the use of simulations. In addition, the operations scheduling and dispatching is improved by a pre-selection of resources and providing of reliable processing times. Figure 5 shows the system architecture of the cloud application with its modules "Production Optimizer", "Setup Optimizer", "Simulation Scheduler" and the "Virtual Manufacturing" as basis. The optimization steps in each module are supported by a "Knowledge Base", which provides both, technical and practical knowledge from previous simulations. The interface for incoming and outgoing information is part of this "Knowledge Base". The user sends the manufacturing order and documents (blank description, NC program) as well as the desired firm-specific goals to the cloud application. This represents the new bridge between the customers' CAPP-System (Computer-aided process planning) and the shop floor control. First of all, the "Knowledge Base" determines suitable machine tools by reference to the machining operations described in the NC program and the existing resources. The result is a combination of resourcescomposed of machine, fixture and toolfor each machining step. The selection bases on the description of relations between resources and possible machining operations. 
Empirical data from previous simulations like the process times are reused to estimate the processing time on each of the suitable resource combination. This outcome is utilized by the "Production Optimizer" to accomplish a cross-machine optimization using a mathematical model and a job shop scheduling. This takes account of the processing time, delivery date, batch size, current resource disposability, machine costs material availability, set up time, maintenance plan and the shift schedule. The master schedule sets the framework for the detailed operations scheduling and dispatching. Scheduling is the assignment of starting and completion dates to operations on the route sheet. Selecting and sequencing of waiting operations to specific machines is called dispatching. Here, the real time situation in the shop floor is provided thru the "Knowledge Base" to ensure a reliable scheduling. In the next step, the NC program is optimized for the selected machine tool by the "Setup Optimizer". It varies systematically the parameters of the NC program and evaluates the simulation results from the virtual machine tool. For example, the target is to determine a timesaving workpiece clamping position, to remove collisions, to minimize the tool change times and empty runs or to maximize the cutting speed profile. The parameters that are being evaluated are chosen by a special algorithm in an adequate distance in order to reduce the number of simulation runs and to quickly identify the optimum parameter range. The result is an optimized, verified NC program and the necessary parameters to set up the machine. The results of all performed optimizations are saved in the "Knowledge Base". It links workpiece information, configurations and technological parameters with already conducted simulation results. Thus, it is possible to early identify parameters with a high potential for optimization as well as relevant parameter ranges for new NC programs. This restriction for the scope of solutions reduces the number of necessary simulation runs too. All simulation orders from the "Setup Optimizer" are managed by the "Simulation Scheduler" and distributed to the virtual machine tools and hardware capabilities. To increase the simulation speed, extensive NC programs are divided into sub-programs, that are simulated parallel and the results combined again afterwards. The prerequisite for achieving the overall aim is the customized "Virtual Manufacturing" with virtual images of all available machine tools and manufacturing equipment. This includes the virtual machine tool in that current version as well as tools, holders and fixtures as CAD models. If it is necessary, multiple instances of a virtual machine are generated in the computer cluster of the cloud application. For further improvements, potentials for the parallelization of separated computations are considered. For example, the calculations for the collision detection and the simulation of the numerical control systems can be executed on different CPU cores. CONCLUSIONS The fully automated operation planning will not be realized in a short period of time. Instead, the paradigm of Industry 4.0 pushes decision-making techniques to support the user. The presented project combines approaches of knowledge reusing, advanced planning and scheduling and reliable machine simulations in a cloud application. Virtual machine tools are used to verify and improve the machining without interrupting the production process on the shop floor. 
In addition, a more efficient distribution of manufacturing orders to the machine tools is addressed. This enables an increase in efficiency without changing the existing machine tools. The following tasks are part of the next project phase: characteristics and a taxonomy to describe manufacturing processes and resources are defined for the "Knowledge Base". Simultaneously, a concept is developed to speed up the simulation runs; this includes software and hardware technologies. Furthermore, a basic model for scheduling and dispatching is developed; this can be adjusted later to the customers' framework. FUNDING NOTE This research and development project is / was funded by the German Federal Ministry of Education and Research (BMBF) within the Leading-Edge Cluster "Intelligent Technical Systems OstWestfalenLippe" (it's OWL) and managed by the Project Management Agency Karlsruhe (PTKA). The author is responsible for the contents of this publication. The project "Intelligent Manufacturing Operations Planning, Scheduling and Dispatching on the Basis of Virtual Machine Tools" (InVorMa) is a cloud-based simulation of machine tools; it is developed by the Heinz Nixdorf Institute and the Decision Support & Operations Research Lab (DSOR) of the University of Paderborn as well as the Faculty of Engineering Sciences and Mathematics of the University of Applied Sciences Bielefeld in cooperation with the machine tool manufacturer Gildermeister Drehmaschinen GmbH. The companies Strothmann GmbH and Phoenix Contact GmbH & Co. KG support the definition of requirements as well as the following validation phase as pilot users.
Fig. 1. -Summarized as-is process of operations planning, scheduling and dispatching
Fig. 3. -Cloud application for supporting the operations planning, scheduling and dispatching
Fig. 4. -Fields of action
Fig. 5. -Architecture of the cloud application
19,435
[ "1003735" ]
[ "488132", "488132", "488132", "488133", "488133" ]
01485834
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01485834/file/978-3-642-41329-2_39_Chapter.pdf
Marius Essers email: [email protected] Martin Erler email: [email protected] Andreas Nestler email: [email protected] Alexander Brosius Dipl.-Ing Marius Eßers Dipl.-Ing Martin Erler Dr Priv.-Doz -Ing Methodological Issues in Support of Selected Tasks of the Virtual Manufacturing Planning Keywords: Virtual Machining, Virtual Manufacturing, Virtual Machine Tooling, Virtual Machine des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. INTRODUCTION In the area of production planning there are many points of application for the supporting use of simulation models due to the multitude of influencing variables to be taken into account. In principle all unique activities to be implemented for the design of a manufacturing system and the manufacturing processes to be planned can be simulated. The challenge is in developing models with suitable representations and visualisations as well as furnishing them with additional, growing physical characteristics. This affects all activities to be planned for drafting, design and optimisation of manufacturing processes in component production [01]. With the inclusion of physical characteristics from models, increasingly realistic statements for technological matters in particular can be attained. Realistic, in the sense of the planning of target stipulations, means attaining sufficient accuracy for the results relating to a relevant point of observation. Thus with the knowledge of sufficiently accurate machining forces for planned operations, the power and energy considerations can also be incorporated into the simulation, e.g. for the reduction of the energy expended through the evaluation for the design of operations based on low energy requirements [02]. For the best possible process design, conditions typical to the planning must be selected, whereby the various different display options incorporated for an increase in process planning quality can be tested and which can be linked to a comprehensive procedure [03]. Proving techniques, which will analyse the processes already designed as a follow-up, must also be integrated for verification purposes. The functionality existing at present for commercial and non-commercial machining simulation systems amounts primarily to the classical collision avoidance [04], the predominantly geometry-based visualisation of the overall system Machine-Tool-Workpiece [05] and selected optimisation on the basis of the NC code [06]. For the best possible process design and verification, the machining process must take account of the working process of the mechanical processing and its effect on the physical complete system Machine-Tool-Workpiece. In addition, further processes, e.g. the setting up of the machine, must also be taken into account in order to avoid potential fault sources. The objective is the combination of a long term process simulation and a point-intime-specific system simulation with a high degree of detailing (Figure 1). Fig. 1. Trends of process simulation The high degree of detailing enables the planner to use additional functions of the process simulation for substantiated forecasting for special problem cases for a defined assessment period. These types of functions are not available in commercially available planning systems or are only available in rudimentary forms. 
This approach also counteracts transient performance problems occurring when working with a high degree of detail, which severely restrict the assessment period, in order to economically facilitate the overall process simulation with physical characteristics. The following investigations into methodical aspects are performed as examples of the virtual design of a milling machining centre and the milling process to be planned. METHODOLOGICAL ASPECT OF SHORT PERIOD SIMULATION OF MANUFACTURING SYSTEMS 1 Here the degree of detailing can be considered as an alternative illustration for a given process section. That means that for an assessment period alternative illustrations are possible, whereby each one can depict another degree of abstraction. That applies both to the representations and the characteristics of the models. For the illustration of the most comprehensive range of characteristics in the models of the mechanical machining, a multitude of application domains must be combined [07]. So that various different methods can be used, logically a universal user interface must be created for the design and implementation of the simulation. The SimulationX development environment was selected for these requirements. SimulationX can be used for interdisciplinary drafting, the modelling and analysis of physical-technical systems on a common platform [08]. In standard form it offers domains in the fields of drive technology and electrical engineering, flexible multibody mechanics and others. Additional models can be coupled to expand the functional scope. In principle the models can communicate via a locally shared memory or via a network. The basis for the coupling is an interface definition. Alongside a propriety interface, an example functional mock-up interface (FMI) of the MODELISAR project [09] can also be used. This enables a modular approach, whereby the computing resources will also have an influence on the costs of the degree of detail. The object-oriented implementation of the manufacturing system is sub-divided into important sub-systems with bidirectional interfaces, whereby clamping systems are currently not considered (Figure 2). The virtual workpiece, virtual tool and virtual machine tool sub-systems are dealt with in more detail below. The working point for the process will be explained in the process simulation section. Cutter-workpiece engagement Cutter Workpiece Machine Tool Fig. 2. -Object-oriented illustration of a manufacturing system Virtual Workpiece The real workpiece undergoes continuous change during the machining. This affects both the external form as well as the stiffness and the mass which are changing due to the removal of material. Where large volumes are to be machined, the mass has a greater effect on the complete system and for example on the expected energy con-sumption. For smaller components with thin-walled structures the smallest amount of material removal has a substantial effect on the stiffness. There are well-proven techniques existing for the representation of workpiece geometry. Alternatively the mapping of a coupled NC simulation core (NCSK) [START_REF] Lee | Tool load balancing at simultaneous five-axis ball-end milling via the NC simulation kernel[END_REF] can be applied via a Z-map [START_REF] Inui | Fast Visualization of NC-Milling Result using graphics Acceleration Hardware[END_REF] or via a 3-dexel model. The widely-used data format STL is used as a basis. 
With this, the starting geometry is imported and the virtual finished geometry exported. Volumetric and dimensional calculations, for example, can be carried out based on this. In principle, modelling is available as an ideally stiff workpiece. Firstly a check is carried out to ascertain whether a modular replacement system can be used to illustrate the changing stiffness. However, this does not enable the complex geometry to be completely illustrated. Therefore a freely available FEA software module is incorporated by means of Co-Simulation [START_REF] Calculix | A Free Software Three-Dimensional Structural Finite Element Program[END_REF] to show the structural mechanics of the workpiece (Figure 3). With this model representation forces, which can be used for deformation, can be applied to the node points. Virtual Tool If rough calculations are to be carried out then the tool can be adopted initially as ideally stiff (Figure 4a). There are various different illustrative models available for further detailing. If one ignores the wear on the cutting edge of the tool then it undergoes no geometric change in the process. An approximated representation of the tool with two flexible multi-bodies represents the shaft and the cutting part (Figure 4b). For a more exact implementation of the tool stiffness the FEM software module is available again (Figure 4c). Virtual Machine Tool As a minimum requirement on a virtual machine tool the kinematics must be implemented in order to be able to realise detailed movement information. The existing objects of the ITI mechanics library from the CAx development platform SimulationX are utilised for this. As an example a 3-axis portal-design vertical milling machine is modelled (Figure 5). Information from the machine documentation is sufficient to illustrate further characteristics. This is critical for the illustration of masses and centres of gravity for the machine components. The stiffnesses of the guides are modelled through springdamper systems. The machine components are primarily adopted as ideally stiff, so that all simulated machine deformations are created by the guides. The illustration of the cascade controller and the electrical illustration of the translational drive components is implemented through the "controller" components. An electrical drive motor ("driveMotor") and a ball screw drive ("ballScrewDrive") from the ITI mechanics library are used in these components. With stage one the representation of a Z-map is applied, built upon the kinematics of the penetration between tool and workpiece. In addition, an analytical geometry model calculates geometric intervention points within the XY-level of the tool in the time range of a tooth feed without taking vibrations into account (Figure 8) The intervention points determined are used for the calculation of an average cutting depth (Formula 1). Formula 1. -Berechnung der averaged cutting thickness auf geometrischer Grundlage Furthermore, the average cutting depth (Formula 2 and Figure 9) can be shown via a calculated value through the penetrated heights and their number in Zdirection. A force model, which delivers the machining forces which can be arbitrary in terms of magnitude, direction and application can in turn be applied to the geometric sizes determined. These are applied to the workpiece and tool in the vicinity of the working point for the process and will result in the deflection of the sub-system involved. 
This deflection in turn has a direct influence on the common penetration and the resultant geometric machining parameters. f METHODICAL ASPECTS OF THE SIMULATION OVER A LONG PERIOD OF TIME The simulation over a long period of time should be considered here to be a complete simulation process for an existing NC program. To do so systems are employed here, which are closer to the real machine/controller combination than is possible with the classical post-processor CAD/CAM or NC programming systems. Alongside the verification -so, the assurance of freedom from errors in the sense of collision avoidance and guaranteed achievement of the required surface qualities and dimensional accuracy -the objective of a simulation process is also increasingly the reduction of unutilised safety reserves, which lie in the technical process parameters and which finally lead to a non-optimum primary processing time or secondary processing time. In order to be able to utilise these reserves and thus to be able to reduce machining times and tool costs, the consideration of further influences is required in the simulation. With BC code based verification systems this is not normally possible, as the constraint that the NC code is dealt with as a whole significantly increases the difficulty of a more detailed evaluation. Influences of the machine tool control system for example remain almost completely disregarded. Coupled with a real CNC controller a simulation on the other hand can provide significantly more statements about an NC program and the resultant machining [START_REF] Kadir | Virtual machine tools and virtual machining-A technological review[END_REF], as it evaluates the control signals generated for the individual axes directly, for example. Virtual Machine Tool Environment (VMTE) Alongside the transfer and processing of the data, the incorporation of information generated on the CNC control system side (e.g. corner rounding, reduced approach torques or feed limitations) also requires a simulation model, where this can be illustrated. One such model has been developed -the Virtual Machine Tool Environment (VMTE). It provides a basic model in the sense of a Co-Simulation, which enables the processing of CNC control system control signals to machine model transformation information. Two important requirements arise from the application conditions described:  The VMTE shall be quick and simple to create, as well as  realtime-capable and modular, in order to achieve many iterations, a broad application spectrum and a high utility value. The rapid path to VMTE The process developed for creating the simulation models allows the use of generally available information and data for a machine tool and thus very quick creation of a VMTE, so that this can be economically used in many new areas of application. Prerequisites. In order to be able to use the VMTE as a planning and development tool, it must be able to be quickly adapted to the specific tasks for which it is required. Frequent changes to the machine configuration are normal in the early phase of development and planning as part of the process development and process checking. A wide range of different machines with differing configurations is also necessary for basic training and advanced training purposes. The consideration of established machine design variations in conjunction with the final operational sequence is an important point of the operational fine planning for the utilisation of VMTE in the manufacturing phase. 
Machine tools are generally based on serial kinematics. The most important tasks in the development of a model for illustrating these serial kinematics are the development of the kinematics on the basis of the machine configuration and the linking of the graphical data with each axis. The basic mechanism for the transformation of the original CAD data and the transfer into graphical data, as well as its population with machine functions, can be largely generalised, such that a VMTE can be created in less than 30 minutes. Generalised virtual machine. To achieve this, the emphasis is not on the links used to illustrate the kinematics, but rather on the axes, which represent the real components. This approach is closer to reality and results in the position of the axes being directly adjustable with respect to one another. There is also no need for additional parameters such as in the DH convention [START_REF] Denavit | A kinematic notation for lower-pair mechanisms based on matrices[END_REF]. The mobility of the model (and thus the movement of its axes) is achieved through the decomposition of one axis into two axis modules and their displacement or rotation with respect to one another. Each axis module has a zero point and a connection point, whose position and orientation with respect to one another are described by a transformation matrix and which define the interior of the module (Figure 10). The combination of two modules (basic part and mobile part) describes the configuration of an axis. The mobile part is moved relative to the basic part (likewise through a transformation matrix), whereby the typical axis characteristics are realized. The resulting, fully parameterised set-up can also be regarded as an interface that can be supplied with information provided from outside. The CNC control system provides one such interface, for example.
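The decomposition into axis modules chained by transformation matrices can be sketched in a few lines of Python; the module class, the example axis chain and the dimensions below are hypothetical and serve only to illustrate the principle described above.

```python
import numpy as np

def translation(x, y, z):
    """Homogeneous 4x4 transformation matrix for a pure translation."""
    m = np.eye(4)
    m[:3, 3] = (x, y, z)
    return m

def rotation_z(angle_rad):
    """Homogeneous 4x4 transformation matrix for a rotation about the local z axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

class AxisModule:
    """One axis, split into a fixed basic part and a mobile part.

    'interior' is the constant transform from the module's zero point to its
    connection point; 'move' maps the current axis value to the transform of
    the mobile part relative to the basic part.
    """
    def __init__(self, interior, move):
        self.interior, self.move = interior, move

    def transform(self, value):
        return self.interior @ self.move(value)

# Hypothetical three-axis serial chain (X and Y translational, C rotational):
chain = [
    AxisModule(translation(0.0, 0.0, 0.4), lambda v: translation(v, 0, 0)),
    AxisModule(translation(0.0, 0.2, 0.0), lambda v: translation(0, v, 0)),
    AxisModule(translation(0.0, 0.0, 0.1), lambda v: rotation_z(v)),
]

def chain_pose(axis_values):
    """Chain the axis modules to obtain the pose at the end of the kinematic chain."""
    pose = np.eye(4)
    for module, value in zip(chain, axis_values):
        pose = pose @ module.transform(value)
    return pose

print(chain_pose([0.25, -0.1, np.pi / 2])[:3, 3])   # position of the chain end
```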
VMTE for simulation of a long period of time Many intrinsic controller characteristics can also be considered over large time spans during verification and analysis through the coupling of the machine model with a real or virtual CNC control system; otherwise these characteristics would have to be modelled explicitly. Due to its very short creation time, its ability to be fully parameterised and its intrinsic consideration of CNC controller influences, the kinematic base model provided offers itself as a basis for a virtual machine environment (Figure 12). The fully parameterised interface enables changes to be made to the configuration whilst the simulation is running, so that various different configurations can be used as the subject-matter of the simulation or so that the simulation can take account of configuration changes. OUTLOOK -THE COMPREHENSIVE MANUFACTURING SIMULATION The sub-processes detected in the VMTE that cannot be simulated there in sufficient detail can be considered and evaluated downstream, or in the meantime, through an increased degree of detail in an enlargement of arbitrary resolution. In doing so, not only can the time steps be reduced, thus increasing the slow-motion effect, but also the resolution of the model in question (e.g. FE meshes) can be increased. The combination of a simulation for a large time period and a detailed process simulation enables the comprehensive evaluation of the complete manufacturing process as well as the parameters and influencing variables involved in it, both in a holistic context and in detail. In order to achieve this, the two simulations must interact with one another. This is achieved through the parametrisation of the two simulations. Thus the VMTE can transfer a parameterised machine tool model to the process simulation and can detect and specify the periods of time to be considered. The highly accurate analyses of small time periods received from the process simulation can be returned and used for the consideration or correction of the VMTE in large time periods where small changes have a significant impact. The approach presented unites geometrical/kinematic simulation methods for a large time period and restricted degree of detail with highly detailed methods such as MKS and FEA analysis for small time periods. In this way the advantages of both methods can be utilised and their disadvantages reduced.
Fig. 3. -Geometric (a) and structural-mechanical (b) representation of the workpiece
Fig. 4. -Tool representation models
Fig. 5. -Kinematics of the Mikromat 4V HSC machine (a) and a simplified visualisation (b)
Fig. 6. -Control-related and electrical implementation of an axis
Fig. 9. -Geometrical cutter-workpiece engagement
On the basis of the two calculated values, the averaged cutting thickness and the averaged cutting depth, the Victor/Kienzle [START_REF] Kienzle | Spezifische Schnittkräfte bei der Metallbearbeitung[END_REF] force model for calculating a determined machining force can be applied. Because this procedure only permits 3-axis machining and only a rough determination of the machining forces, the considerably more accurate NCSK has been coupled via an FMI for Co-Simulation as a further alternative. The NCSK works with a three-dexel model for the workpiece.
Fig. 10. -Generalized axis pattern for serial kinematics
Fig. 11. -Configuration scheme for axis chains
Fig. 12. -Sample VMTE: Ops Ingersoll Funkenerosion GmbH SH 650
Acknowledgement: This work is kindly supported by the AiF ZIM Project SimCAP (KF 2693604GC1)
19,488
[ "1003736", "1003737", "1003738", "1003739" ]
[ "96520", "96520", "96520", "96520" ]
01485836
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01485836/file/978-3-642-41329-2_40_Chapter.pdf
Marcus Petersen email: [email protected] Jürgen Gausemeier email: [email protected] Dipl.-Inf Marcus Petersen Prof. Dr.-Ing Jürgen Gausemeier A Comprehensive Framework for the Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components Keywords: Manufacturing Process Planning, Functional Graded Components, Expert System, Specification Technique, Sustainable Production des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. INTRODUCTION Functional gradation denotes a continuous distribution of properties over at least one of the spatial dimensions of a component consisting of only one material. This distribution is tailored according to the intended application of the component [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF]. Application areas for the use of functional graded components can be found for example in the automotive industry. Car interior door panels for instance are usually plastic materials that are supposed to absorb the impact energy of a lateral crash to an assured extent. The resulting deformation however must in no case lead to an injury of the car's passengers. To achieve a desired deformation behaviour it is necessary to assign exactly defined material properties to specific locations of the door panel. By a functional gradation, e.g. of the hardness, the functionality of the component can be considerably extended. The formerly purely decorative interior door panel becomes a functional element of the passive vehicle safety. Functional graded components provide a resource-conserving alternative for modern composite materials [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF] and therefore offer high potential to achieve a sustainable production. Instead of using post-processing steps to create the composites and their graded properties, the gradation is produced during their moulding process. This process integration for example shortens the manufacturing process chain for the production of the component and increases the energy efficiency significantly. The production of functional graded components requires complex manufacturing process chains, such as thermo-mechanically coupled process steps [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF]. While there are several material scientific approaches on how to develop an isolated process step to achieve a certain material structure, the holistic design of connected manufacturing process chains is much more difficult. For that purpose in section two an exemplary manufacturing process chain will be used to demonstrate our approach. To realise the full potentials of functional gradation, a computer-aided framework for the planning and optimisation of this manufacturing process chains will be introduced in the following subsection, whereupon the hierarchical process chain synthetisation as part of the Expert System will be presented in section three. Section four summarises the approach and identifies the significant future research challenges. 
FUNCTIONAL GRADED COMPONENTS Exemplary Manufacturing Process Chain The manufacturing process chains for functional graded components are characterised by strong interdependencies between the components and the applied manufacturing processes as well as between the process steps themselves. According to the presented interior door panel (cf. section 1), a manufacturing process chain for self-reinforced polypropylene composites is used here as a demonstrator. This process chain uses a thermo-mechanical hot-compaction process to integrate the functional gradation into self-reinforced polypropylene composites by processing layered semi-finished textile products on a thermoplastic basis. The semi-finished textile products were previously stretched and provide a self-reinforcement based on a macromolecular orientation. This self-reinforcement leads to a sensitive behaviour regarding pressure and thermal treatments and is therefore essential for the functional gradation of the composite [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF], [START_REF] Bledzki | Functional graded self-reinforced polypropylene sheets[END_REF]. Figure 1 shows the exemplary manufacturing process chain for self-reinforced polypropylene composites, starting with a gradual preheating of the semi-finished textile products in a partially masked IR-preheating station. In the next step, the thermal gradation of the product is enhanced due to consolidation by a special compression moulding tool. The tool was specifically designed for thermo-mechanical gradation. For this reason, both tool halves can be tempered differentially and completely independently of each other. Furthermore this tool applies a mechanical gradation by a local pressure reduction of up to 30% due to the triangular geometry. A cooling phase is necessary before demoulding the self-reinforced polypropylene composite [START_REF] Paßmann | Prozessinduzierte Gradierung eigenverstärkter Polypropylen-Faserverbunde beim Heißkompaktieren und Umformen[END_REF]. Fig. 1. Exemplary manufacturing process chain for self-reinforced polypropylene composites Comprehensive Planning Framework The exemplary manufacturing process chain for self-reinforced polypropylene composites is characterised by strong interdependencies (cf. section 2.1). These interdependencies are typical for the production of components with functionally graded properties and need to be considered. Therefore a comprehensive planning framework for the planning and optimisation of manufacturing process chains is under development. This framework integrates several methods, tools and knowledge obtained from laboratory experiments and industrial cooperation projects in which the concept of functional gradation has been analysed. The planning process within the framework is continuously assisted by the modules "Component Description", "Expert System" and "Modelling and Process Chain Optimisation" [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF]. Figure 2 gives an overview of the structure of the planning framework and the information exchanges between the modules. The input information for the manufacturing process planning is provided by the computer-aided design (CAD) model of the component and the intended graded properties. 
Based on this information, several alternative process chains for the manufacturing of the component are synthesised by means of the framework. After this, the process parameters of each process chain are optimised based on empirical models. The best manufacturing process chain is described using a dedicated specification technique for production systems in the last step of the planning process [START_REF] Gausemeier | Planning of Manufacturing Processes for Graded Components[END_REF]. The Component Description module enables the desired graded properties to be integrated into the CAD model of the component. The model usually consists of geometric features (e.g. cylinder or disc), which will be extracted after loading the model. These features allow the framework to consider the geometry of the whole component and to pre-select reasonable gradients according to the geometry. This pre-selection increases the efficiency of describing the intended gradient, since the manufacturing planner can directly provide the desired graded properties by modifying the parameters of the proposed gradients. If the CAD model does not contain any geometric feature or the user does not want to use one of the pre-selected gradients, the component is divided into small volume elements. These so-called voxels enable the component model to be locally addressed and can be used as supporting points for the function-based integration of the component's graded properties [START_REF] Bauer | Feature-based component description for functional graded parts[END_REF]. Based on the enhanced CAD model of the first module, the Expert System synthesises several alternative process chains for manufacturing the component. For that purpose all the manufacturing processes available in the knowledge base are filtered according to the component description, such as material, geometry or the desired graded properties (e.g. hardness or ductility). To realise this filtering process, the content of the knowledge base is structured by an ontology. The ontology classifies the process steps with regard to their characteristics and connects the information of the knowledge base via relations between the content elements. An inference machine is applied to draw conclusions from the ontology, especially with respect to the varied interdependencies between the manufacturing processes. These conclusions provide the main information for connecting the several process steps of the knowledge base during the synthetisation of reasonable manufacturing process chains according to the enhanced CAD model of the component. The synthetisation of several alternative manufacturing process chains by a hierarchical process chain synthesis is described in section three. The exemplary manufacturing process chain for self-reinforced polypropylene composites is, for example, characterised by the fact that the initial material temperature, which is adjusted during the IR-preheating process in preparation for the compression moulding process, has a strong influence on the mouldability of the component. These and all the other interdependencies mentioned above need to be considered during the pairwise evaluation of process steps to ensure the compatibility of the synthesised process chains for the manufacturing of the component. All process chains with incompatible process steps are disregarded. 
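As a minimal, hypothetical sketch of the filtering and pairwise compatibility check described above (the process names, attribute sets and the incompatibility relation are illustrative assumptions, not the content of the actual ontology-based knowledge base):

# Hypothetical knowledge-base entries: each process step declares the materials
# and graded properties it can handle.
PROCESS_STEPS = {
    "ir_preheating":        {"materials": {"polypropylene"}, "properties": {"thermal_gradation"}},
    "compression_moulding": {"materials": {"polypropylene"}, "properties": {"hardness", "thermal_gradation"}},
    "laser_hardening":      {"materials": {"steel"},         "properties": {"hardness"}},
}

# Hypothetical pairwise incompatibilities derived from process interdependencies.
INCOMPATIBLE = {("laser_hardening", "compression_moulding")}

def filter_steps(material, required_properties):
    # Keep only the process steps that match the component description.
    return [name for name, caps in PROCESS_STEPS.items()
            if material in caps["materials"] and caps["properties"] & required_properties]

def chain_is_compatible(chain):
    # Pairwise evaluation: discard chains containing an incompatible pair of steps.
    pairs = {(a, b) for a in chain for b in chain if a != b}
    return not (pairs & INCOMPATIBLE)

candidates = filter_steps("polypropylene", {"hardness", "thermal_gradation"})
print(candidates, chain_is_compatible(candidates))  # ['ir_preheating', 'compression_moulding'] True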
Thus the result of the Expert System module is a set of several alternative process chains which are capable of producing the component [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF]. The parameters of a preferred set of manufacturing process chains are optimised by means of the Modelling and Process Chain Optimisation module [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF]. To accomplish this, predictions of empirical models based on several experiments, measurements and simulations of samples provide a comprehensive solution space (cf. [START_REF] Wagner | Efficient modeling and optimisation of the property gradation of self-reinforced polypropylene sheets within a thermo-mechanical compaction process[END_REF]). Modern empirical modelling techniques are then used as surrogates for the processes, and a hybrid hierarchical multi-objective optimisation is utilised to identify the optimal setup for each process step of a manufacturing process chain. In the context of functional gradation, design and analysis of computer experiments (DACE) models have proven to show a very good prediction quality [START_REF] Sieben | Empirical Modeling of Hard Turning of AISI 6150 Steel Using Design and Analysis of Computer Experiments[END_REF], [START_REF] Wagner | Analysis of a Thermomechanically Coupled Forming Process Using Enhanced Design and Analysis of Computer Experiments[END_REF]. Finally, the process chain that is best capable of producing the functional graded component with regard to the component description is described using a dedicated specification technique. This fundamental specification is based on a process sequence and a resource diagram [START_REF] Gausemeier | Integrative Development of Product and Production System for Mechatronic Products[END_REF]. Figure 3 shows an extract of the optimised process sequence for the manufacturing of self-reinforced functional graded polypropylene composites with an example set of process step parameters for the compression moulding. Further information about the specification technique can be found in [START_REF] Gausemeier | Integrative Development of Product and Production System for Mechatronic Products[END_REF]. Fig. 3. Hierarchical process chain synthetisation as part of the Expert System module The next section gives an overview of the underlying principles of the manufacturing process chain synthetisation within the Expert System. HIERARCHICAL PROCESS CHAIN SYNTHETISATION The Expert System within the planning framework synthesises several alternative manufacturing process chains for functional graded components in a hierarchical way. This synthetisation is assisted by two steps, the "Core Process Selection" and the "Process Chain Synthetisation". The component requirements provided by the Component Description module, such as the enhanced CAD model, the material or general requirements (e.g. the surface quality), constitute the product attributes for the requirements profile of the component. This radar chart profile [START_REF] Fallböhmer | Generieren alternativer Technologieketten in frühen Phasen der Produktentwicklung[END_REF] and also the component requirements represent the input information for the Expert System (cf. Figure 4). 
Core Process Selection The Core Process Selection (according to [START_REF] Ashby | Materials Selection in Mechanical Design -Das Original mit Übersetzungshilfen[END_REF]) marks the first synthetisation loop of the Expert System and results in the core process for the manufacturing of the component. The process step which best fulfils the requirements of the component according to the requirements profile is selected to be the core process. This manufacturing process also establishes the root process, i.e. the starting point for the hierarchical process chain synthetisation within the iteration loops of the Expert System (cf. section 3.2). At first all the manufacturing process steps available in the knowledge base are structured according to each product attribute of the requirements profile for the manufacturing of the component. For this purpose the Expert System utilises matrix tables, in which the manufacturing processes are displayed in the rows and their ability range for the current product attribute is represented in the columns. These so-called selection diagrams provide the basis for the automatic selection of the core process. Figure 5 shows an example of such a matrix table for the product attribute "tolerance". Based on these selection diagrams, all the manufacturing processes which do not match the product requirements in the defined range are removed. For the other process steps, a process profile is created in addition to the requirements profile. The process profile is presented to the user within the planning framework to explain the results of the selection process. These profiles show the fulfilment of the component requirements by the given manufacturing processes in a comprehensive way (cf. Figure 4). The manufacturing process step with the highest fulfilment of the product attributes is selected to be the core process, and the unfulfilled requirements form the main input for the Process Chain Synthetisation as a new requirements profile. Process Chain Synthetisation The Process Chain Synthetisation starts only if the Core Process Selection ends up with some unfulfilled requirements. This step of the Expert System tries to reduce the unfulfilled requirements to a minimum by creating several alternative process chains. To create the process chains, the Expert System restarts the Core Process Selection as a loop, in which the unfulfilled requirements of a completed iteration loop provide the input information for the next iteration. This loop continues until no further process step can be found to fulfil the open requirements of the requirements profile. After every iteration loop, a pairwise evaluation of the newly selected manufacturing process and the already connected process steps is performed to ensure the compatibility of the synthesised process chain. If there is only one incompatible process step in the process chain, a new alternative process chain will be started without this step, but with its own unfulfilled requirements profile. This new process chain will also be considered during the following iterations, whereby newly selected process steps will be integrated into every suitable process chain. If the Expert System has to consider two or more alternative process chains, the Process Chain Synthetisation continues until no further process step can be found to fill one of the open unfulfilled requirements profiles of the process chains. 
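A compact, hypothetical sketch of the selection loop described above; the capability sets, the scoring by number of fulfilled requirements and the termination test are illustrative simplifications, not the Expert System's actual rules:

# Each hypothetical process step declares which requirements it can fulfil.
CAPABILITIES = {
    "compression_moulding": {"geometry", "thermal_gradation"},
    "ir_preheating":        {"thermal_gradation"},
    "trimming":             {"tolerance"},
    "surface_coating":      {"surface_quality"},
}

def select_core_process(open_requirements):
    # Core Process Selection: pick the step fulfilling the most open requirements.
    best = max(CAPABILITIES, key=lambda s: len(CAPABILITIES[s] & open_requirements))
    fulfilled = CAPABILITIES[best] & open_requirements
    return (best, fulfilled) if fulfilled else (None, set())

def synthesise_chain(requirements):
    # Iteratively add steps until no step fulfils any open requirement.
    chain, open_reqs = [], set(requirements)
    while open_reqs:
        step, fulfilled = select_core_process(open_reqs)
        if step is None:          # no further process step can be found
            break
        chain.append(step)
        open_reqs -= fulfilled
    return chain, open_reqs       # synthesised chain and remaining unfulfilled requirements

print(synthesise_chain({"geometry", "thermal_gradation", "tolerance", "surface_quality"}))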
The result of the hierarchical process chain synthetisation is a set of several alternative process chains which are all able to achieve the desired component requirements (cf. Figure 6). CONCLUSIONS AND OUTLOOK Functional graded components offer an innovative and sustainable approach for customisable smart products. Thus a comprehensive framework for the computer-aided planning and model-based optimisation of components with functional graded properties has been presented and demonstrated with an application example. Future work includes the enhancement of the knowledge base with additional manufacturing process steps, materials and interdependencies as well as the adjustment of the ontology. Furthermore the inference rules of the expert system have to be expanded to realise the synthetisation of more complex manufacturing process chains and their pairwise evaluation. The Expert System of the comprehensive planning framework is able to automatically synthesise process chains for the manufacturing of a component with functionally graded properties. However the final selection of the best process chain for the specific production objective must still be conducted manually, since it is not always obvious which alternative fulfils all the requirements according to the objective in the best way. The Analytic Hierarchy Process (cf. [START_REF] Saaty | The analytic hierarchy processplanning, priority setting, resource allocation[END_REF]) may offer an effective approach to handle the highly diverse characteristics of the decision criteria while not overstraining the decision process with data acquisition and examination. Fig. 2. -Planning Framework for the computer-aided planning and optimisation of manufacturing processes for functional graded components. Fig. 4. -Part of the optimised process sequence for the manufacturing of self-reinforced polypropylene composites. Fig. 5. -Example of a selection diagram (according to [11]). Fig. 6. -Set of alternative process chains for the interior door panel given by the Expert System. ACKNOWLEDGEMENT The work in this contribution is based upon investigations of the Collaborative Transregional Research Centre (CRC) Transregio 30, which is kindly supported by the German Research Foundation (DFG).
19,395
[ "1003741", "1003742" ]
[ "74348", "74348" ]
01485841
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01485841/file/978-3-642-41329-2_8_Chapter.pdf
Lapo Chirici email: [email protected] Kesheng Wang email: [email protected] A 'lean' Fuzzy Rule to Speed-up a Taylor-made Warehouse Management Process Keywords: Logistics, Warehouse management, Putaway process, Fuzzy rules, Data Mining Minimizing the inventory storage cost and, as a consequence, optimizing the storage capacity based on Stock Keeping Unit (SKU) features is a challenging problem in operations management. In order to accomplish this objective, experienced managers usually make effective decisions based on common sense and practical reasoning models. An approach based on fuzzy logic can be considered a good alternative to the classical inventory control models. The purpose of this paper is to present a methodology which assigns incoming products to storage locations in storage departments/zones in order to reduce material handling cost and improve space utilization. An iterative process mining algorithm based on the concepts of fuzzy logic sets and association rules is proposed, which extracts interesting patterns, in terms of fuzzy rules, from the centralized process datasets stored as quantitative values. INTRODUCTION In this era of drastic and rapid changes, manufacturers with a global view put strong effort into achieving lean production, outsourcing their components, and managing the complexity of the supply chain [START_REF] Blecker | RFID in Operation and Supply Chain Management -Research and Application[END_REF]. Warehouse management plays a vital role as a central actor in any kind of industry, and the put-away process is a key activity that brings significant influence and challenges to warehouse performance. In this dynamic operating environment, reducing operation mistakes and providing accurate real-time inventory information to stakeholders become basic requirements for being an order qualifier. Here, an OLAP-based intelligent system called the Fuzzy Storage Assignment System (FSAS) is proposed to easily manipulate the decision support data and rationalize production in terms of the storage location assignment problem (SLAP). Under conditions of information uncertainty, fuzzy logic systems can provide methodologies for carrying out approximate reasoning processes. Identifying an approach that can bring out the peculiarities of the key operations of a warehouse is essential for tracking the storage priorities in terms of Stock Keeping Units (SKUs) [START_REF] Chirici | A Tailor Made RFID-fuzzy Based Model to Optimize the Warehouse Managment[END_REF]. Hence the need to develop a put-away decision tree in order to automate the analysis of possible hidden rules useful for discovering the most appropriate storage location assignment decisions. Some examples of SKU features are their dimensions, weights, loading values and popularity. All these are important in order to find out the relationship between the SKU properties and the assigned storage location. The aim of the paper is to create an algorithm able to provide the best allocation position for SKUs in a just-in-time manner and with a lean and intelligent stock rotation. This approach provides strategic decisions to optimize the functionality and minimize the costs in a fully automated warehouse. 
2 THE "PUTAWAY'S DILEMMA" Managing the SLAP Warehouse storage decisions influence the main key performance indicators of a warehouse, such as order picking time and cost, productivity, shipping (and inventory) accuracy and storage density (Frazelle, 2002). Customers always look for more comprehensive services and shorter response times. The storage location assignment problem (SLAP) is essentially the problem of assigning incoming products to storage locations in well-defined departments/zones in order to reduce material handling cost and improve space utilization (Gu et al. 2007). Handling the storage location process is an activity that requires the supervision of several relevant factors. Up to now, some warehouse management systems (WMS) have been developed to acquire "simple data" from the warehouse operators and record them for computer support in intelligent slotting (storage location selection), in such a way as to ensure a constant quality of the available information [START_REF] Chede | Fuzzy Logic Analysis Based on Inventory Considering Demand and Stock Quantity on Hand[END_REF]. Besides that, both the lack of relevant data and the low customization capability of the WMS for supporting the put-away process highlight a common problem the warehouse manager has to deal with. Thus, put-away decisions are often based on human knowledge, unavoidably affected by a high degree of inaccuracy (and consequently long order times), which can have a negative impact on customer satisfaction [START_REF] Zou | The Applications of RFID Technology in Logistics Management and Constraints[END_REF]. Previous theories on SLAP A warehouse is used to store inventories during all phases of the logistics process (James et al., 2001). The five key operations in a warehouse are receiving, put-away, storage, order picking as well as unitizing and shipping (Frazelle, 2002). Hausman in 1976 suggested that warehouse storage planning involves decisions on storage policy and specific location assignment. In general, there is a wide variety of storage policies, such as random storage, zoning, shortest/closest driveway, open location, etc. (Michael et al., 2006). As each storage strategy has its own characteristics, there are different ways to solve the storage location assignment problem (SLAP). Brynzer and Johansson (1996) treated the SLAP by improving a strategy to pre-aggregate components and information for the picking work in storehouses, leveraging the product's structure/shape in order to reduce order picking times. Pan and Wu (2009) developed an analytical model for the pick-and-pass system [START_REF] Convery | RFID Technology for Supply Chain Optimization: Inventory Management Applications and Privacy Issues[END_REF], [START_REF] Ho | Providing decision support functionality in warehouse management using the RFID-based fuzzy association rule mining approach[END_REF]. Their theory was founded on three algorithms that optimally allocate items in storage, analyzing a priori the cases of a single picking zone, a picking line with unequal-sized zones, and a picking line with equal-sized zones in a pick-and-pass system. A nonlinear integer programming model built on a branch-and-bound algorithm was developed to support class-based storage implementation decisions, considering storage space, handling costs and area reduction (Muppani and Adil, 2008). 
Introducing Fuzzy Logic Fuzzy logic has already proven its worth as a tool to deal with real-life problems that are full of ambiguity, imprecision and vagueness [START_REF] Chirici | A Tailor Made RFID-fuzzy Based Model to Optimize the Warehouse Managment[END_REF]. Fuzzy logic is a derivative of classical Boolean logic and implements soft linguistic variables on a continuous range of truth values defined between the conventional binary extremes. It can often be considered a superset of conventional set theory. Since fuzzy logic handles approximate information in a systematic way, it is ideal for controlling non-linear systems and for modeling complex systems where an inexact model exists or where ambiguity or vagueness is common. A typical fuzzy system consists of a rule base, membership functions and an inference procedure. Fuzzy logic is a superset of conventional Boolean logic that has been extended to handle the concept of partial truth: truth values between "completely true" and "completely false". In classical set theory, a subset U of a set S can be defined as a mapping from the elements of S to the elements of the set {0, 1}, U: S -> {0, 1} [START_REF] Zadeh | Fuzzy sets[END_REF]. The mapping may be represented as a set of ordered pairs, with exactly one ordered pair present for each element of S. The first element of the ordered pair is an element of the set S, and the second element is an element of the set {0, 1}. The value zero is used to represent non-membership, and the value one is used to represent complete membership. The truth or falsity of the statement 'X is in U' is determined by finding the ordered pair whose first element is X. The statement is true if the second element of the ordered pair is 1, and the statement is false if it is 0. FROM FUZZIFICATION TO SLAM Online Analytical Process In order to collect and provide quality data for business intelligence analysis, the use of a decision support system (DSS) becomes crucial to assist managers in critical problem-solving areas (Dunham, 2002). Online analytical processing (OLAP) is a decision support system (DSS) tool which allows accessing and parsing data on a flexible and timely basis. Moreover, OLAP enables analysts to explore, create and manage enterprise data in multidimensional ways (Peterson, 2000). The decision maker, therefore, is able to measure the business data at different, deeper levels and aggregate them depending on his specific needs. According to Dayal and Chaudhuri (1997), the typical operations performed by OLAP software can be divided into four aspects: (i) roll up, (ii) drill down, (iii) slice and dice and (iv) pivot. With the use of OLAP, the data can be viewed and processed in a real-time and efficient way. Artificial Intelligence (AI) is one of the techniques that support comprehensive knowledge representations and practical manipulation strategies (Robert, 1990). By the use of AI, the system is able to learn from past experience and handle uncertain and imprecise environments (Pham et al., 1996). According to Chen and Pham (2006), a fuzzy logic controller system comprises three main processes: fuzzification, rule-based reasoning and defuzzification. Petrovic et al. (2006) argued that fuzzy logic is capable of managing decision-making problems with the aim of optimizing more than one objective. This proved that fuzzy logic could be adopted to meet the multi-objective put-away operation in the warehouse industry. Lau et al. 
(2008) proposed a stochastic search technique called fuzzy logic guided genetic algorithms (FLGA) to assign items to suitable locations such that the sum of the total travelling time required by the workers to complete all orders is minimized. Given the advantages of OLAP and AI techniques in supporting decision making, an intelligent put-away system, namely the Fuzzy Storage Assignment System (FSAS), for real-world warehouse operation is proposed to enhance the performance of the WMS. Two key elements are embraced: (1) Online Analytical Processing (OLAP) in the Data Capture and Analysis Module (DCAM); and (2) a fuzzy logic system in the Storage Location Assignment Module (SLAM), with the objective of achieving the optimal put-away decision, minimizing the order cycle time, material handling cost and damage to items. Fuzzy Storage Assignment System. The Fuzzy Storage Assignment System (FSAS) is designed to capture the distributed item data (warehouse status included) from the different organizations along the supply chain. The crucial passage concerns the conversion of data into information to support the correct put-away decision for the SLAP [START_REF] Lam | Development of an OLAP Based Fuzzy Logic System for Supporting Put Away Decision[END_REF]. The tangible influence on the warehouse performance is immediately recognizable. In fact, the system also allows the warehouse worker to visualize a report on the status of SKUs in real time, both those arriving and those already stocked in the warehouse. The architecture of the FSAS is illustrated in Figure 1. Generally, FSAS consists of two modules: (1) the Data Capture and Analysis Module (DCAM) and (2) the Storage Location Assignment Module (SLAM). These are used to achieve the research objectives through a fully automated recommendation storage system. The second component, OLAP, provides calculation and a multidimensional structure of the data. Warehouse management relies on this information about the SKUs and the warehouse to make strategic decisions and formulate the fuzzy rules for the SLAP. Through holistic manipulation of quality information, the warehouse engineers are able to develop a set of specific rules or algorithms to fit their unique daily operations, warehouse configuration and operational objectives. DCAM offers the refined parameters of the SKUs and the warehouse that act as the input of the next module, SLAM, for generating the automatic recommendation for the SLAP. The last but not least component, the data mart, is developed to store the refined parameters and fuzzy rules (as a fuzzy rule repository) and to directly and specifically support the SLAP. Storage Location Assignment Module (SLAM) The storage location assignment module is used to decide the correct storage location for arriving SKUs, based on the analyzed information and the fuzzy rules set from DCAM. Its major component is the fuzzy logic system that consists of a fuzzy set, a fuzzy rule base and fuzzy inference. The fuzzy rule base is a set of rules that integrates the elements of the selected storage strategies, the experience and knowledge of experts, and regulations. It is characterized by an IF (condition) THEN (action) structure. The set of rules determines each item's storage location; the system will match the characteristics of the SKU and the current warehouse (conditions) with the fuzzy rules and then find out the action (where it should be stored). Finally the automatic put-away solution is generated. 
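A minimal sketch of how such an IF-THEN rule base could be evaluated; the parameter names, membership degrees and the min/max (Mamdani-style) operators are illustrative assumptions, not the FSAS implementation:

# Degrees of membership for one incoming SKU, assumed to come from a fuzzification step.
sku = {
    "popularity_high":      0.8,
    "turnover_high":        0.6,
    "loading_weight_small": 0.3,
}

# Hypothetical rule base: each rule is (list of condition labels, consequent label).
RULES = [
    (["popularity_high", "turnover_high"], "accessibility_good"),
    (["loading_weight_small"],             "storage_capability_low"),
]

def infer(memberships, rules):
    # AND of conditions via min, aggregation of rules with the same consequent via max.
    firing = {}
    for conditions, consequent in rules:
        strength = min(memberships.get(c, 0.0) for c in conditions)
        firing[consequent] = max(firing.get(consequent, 0.0), strength)
    return firing

print(infer(sku, RULES))  # {'accessibility_good': 0.6, 'storage_capability_low': 0.3}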
The SLAM starts from the data mart in the former DCAM, which provides the parameters in a format compatible with the fuzzy system; these parameters are then the input to the fuzzy system that is specifically developed to support the SLAP. The output of the fuzzy system is interpreted as the recommendation of the final storage location for the inbound cargo; the warehouse workers then store the inbound cargo according to the recommendation, and finally the storage information is updated in the WMS. The "golden zone" partition. There are a golden zone (the most accessible), a silver zone (of middle accessibility) and a bronze zone (the least accessible). There are therefore three subzones inside each storage zone, A, B and C, in sequence of accessibility, with zone A having the highest accessibility. THE REAL CASE Problem identification Generally, Espresso Siena & Co. handles a large amount of requests in its warehouse operation. Efficient storage location assignment may minimize the cost as well as the damage rate and thereby increase customer satisfaction. However, the current practice of the SLAP in deciding the storage department, location and suitable tier relies on the warehouse manager and is based on his knowledge. Problems may arise when the wrong storage environment is offered to the stored item (resulting in deterioration of item quality) and from the long storage location process (resulting in a longer inbound shipment processing cycle). This is caused by insufficient data availability and the lack of a systematic decision support system in the decision process. According to past experience, cargo stored in a high tier of a pallet rack, or items with higher loading weight or loading height, have a higher probability of getting damaged, because of the difficulty of controlling the pallet truck well. The more expensive the cargo, the higher the loss the warehouse suffers from the damage. To ensure that accurate and real-time data can be used, the FSAS is proposed for integrating data, extracting quality data from different data sources and assigning an appropriate storage location to the inbound items, in such a way as to minimize the risk of damage and the resulting loss during the put-away and storage process. Deployment of Online Analytical Processing in DCAM. SKU data and warehouse data are captured and transferred into the centralized data warehouse from the data source systems. Through the OLAP application it is possible to build up a multidimensional data model called a star schema. This is composed of a central fact table and a set of surrounding dimension tables, and each table has its own attributes in a variety of data types. The users are able to view the data at different levels of detail, so the warehouse engineer can generate real-time reports for decision making. In fact the OLAP function allows finding out the statistics of SKU activities for a specific period of time, representing the SKU dimension, storage environment, warehouse information etc. [START_REF] Laurent | Scalable Fuzzy Algorithms for Data Management and Analysis: Methods and Design[END_REF]. This gives the warehouse operator the possibility to master the critical decision support data. To ensure that the OLAP approach functions properly, the OLAP data cube needs to be built in advance in the OLAP server. The cube is developed as a star schema (Figure 2) consisting of dimensions, measures and calculated members. Dimensions. 
In the SKU dimension, the "SKU_ID" and "Product Type" fields are used to find the dimensions of the SKU and the other characteristics for the storage department selection. In the "Invoice" dimension, the "Invoice ID", "SKU_ID" and "Invoice_Type" fields are used to find the activity patterns of SKUs for deciding the location inside the department for the SKU. In the "Time" dimension, the "Delivery Date" and "Arrival Date" fields are used to find the expected storage time for the SKU and the number of transactions during a specific period. Measures. "Loading Item Height", "Loading Item Width", "Unit Cost", "Unit_Cube" etc. are all used to provide critical information for the warehouse manager, in order to realize fuzzy rule composition and act as fuzzy inputs for the implication process. Calculated Member. The calculated members compute the mean of "Popularity", "Turnover", "Cube_Movement", "Pick_Density" etc., needed for fuzzy rule composition and the implication process. Deployment of the fuzzy system in SLAM The fuzzy rules and membership functions of the company first have to be formulated in the fuzzy system for each parameter. The parameters (Table 1) and the fuzzy rules of the other rule sets are specifically set by the warehouse manager, in order to truly reflect the operational conditions of such product families. The formulation is worked out from the knowledge of experts together with a review of past experience of daily warehouse operation; this historical review can be carried out with the help of the OLAP reports in the former module, DCAM. Different sets of fuzzy rules, with particular parameters, make the decisions that determine the storage zone/department, storage location and tier level for the item storage. The fuzzy rules are stored in the knowledge database and defined as conditional statements in IF-THEN form [START_REF] Lam | Development of an OLAP Based Fuzzy Logic System for Supporting Put Away Decision[END_REF], [START_REF] Li | Mining Belief-Driven Unexpected Sequential Patterns and Implication Rules in Rare Association Rule Mining and Knowledge Discovery: Technologies for Infrequent and Critical Event Detection[END_REF]. Some examples of fuzzy rules are shown in Table 2. The membership function of each parameter, ranging from 0 to 1, is determined by the warehouse manager. More than one type of membership function exists, some based on a Gaussian distribution function, others on a sigmoid curve, or on quadratic and cubic polynomial curves. For this case study, since it is possible to express the manager's knowledge through trapezoidal and triangular membership functions, the graphic form of the membership functions of the example parameters is shown in Figure 3. The MATLAB Fuzzy Logic Toolbox is used to create and execute the fuzzy inference systems. With the above fuzzy rules and the required data, the final storage location for the incoming item is automatically generated by the Fuzzy Toolbox for the SLAP. In order to demonstrate the feasibility of the system, one supplier delivery input is fed into the FSAS. When the market-operating department enters the relevant data into the ERP, these are extracted by the central data warehouse and then go to the OLAP module. At the same time, the warehouse department is informed and starts to go through its slotting decision tree. 
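As a small illustration of the trapezoidal and triangular membership functions mentioned above, written in Python rather than the MATLAB Fuzzy Logic Toolbox used in the case study; the breakpoints and the interpretation of the example value are hypothetical:

def triangular(x, a, b, c):
    # Membership rises linearly from a to the peak at b, then falls to zero at c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    # Membership rises from a to b, stays at 1 between b and c, then falls to zero at d.
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Illustrative fuzzification of a loading item height of 1.2 (arbitrary units):
print(triangular(1.2, 0.5, 1.0, 1.5))        # degree of membership in "Medium" -> 0.6
print(trapezoidal(1.2, 1.0, 1.5, 2.0, 2.5))  # degree of membership in "High"   -> 0.4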
CONCLUSIONS This research introduces the design and implementation of a FSAS, which embraces fuzzy theory to improve warehouse capacity and optimize the put-away process. The implementation of the proposed methodology, for warehouse management through simulation, has been successful. By incorporating the error measurement and the complexity of the process into the fitness evaluation, the generalized fuzzy rule sets can be made less complex and more accurate. For the generation of new fuzzy rules, the membership functions are assumed to be static and known. Other fuzzy learning methods should be considered to dynamically adjust the various parameters of the membership functions and enhance the model accuracy. A future contribution of this endeavour is to validate the decision model so that it can be launched in case companies. As manufacturers and retailers increasingly emphasize the just-in-time inventory strategy, delivery orders will become more frequent with smaller lot sizes. This places considerable demands on put-away processes in warehouses, since the put-away process has to match the characteristics of the storage item and the storage location. In order to achieve this standard, the warehouse operators first need to master the characteristics of the incoming items and the storage locations and then correctly match them, minimizing material handling cost, product damage and order cycle time. An OLAP-based intelligent Fuzzy Storage Assignment System (FSAS) is well suited to integrating day-by-day operational knowledge from the human mind and supporting a key warehouse operation, the put-away process, minimizing product damage and material handling cost. FSAS enables the warehouse operators to make put-away decisions with: (i) real-time decision support data with different query dimensions and (ii) recommendations for the SLAP that mimic the warehouse manager. Further research on enhancing fuzzy rule generation is planned to improve the accuracy of the storage location assignment. As the database for the put-away process has been well developed in the DCAM, it can also provide an overview of the past performance of the warehouse. Fig. 1. -Fuzzy Storage Assignment System Algorithm. (a): Portion of a six-tier warehouse; (b): software to design a customized warehouse in 3D. Fig. 2. -The relational database structure of DCAM. Fig. 3. -The MATLAB graphic function model. Table 1. -The parameters taken into account to optimize the put-away decision process. Table 2. -Fuzzy association decision rules, for example: RULE 1: IF Loading Item Height is Low AND Loading Item Width is Short AND Loading Item Length is Short AND Loading Item Weight is Small AND Loading Item Cube is Small THEN Capability of Storage Department is Low. RULE 2: IF Popularity is High AND Turnover Rate is High AND Cube Movement is High AND Pick Density is not High AND Expected Storage Days is Short THEN Accessibility of Storage Zone is Good. RULE 3: IF Loading Item Value is High AND Loading Item Height is High AND Loading Item Weight is High THEN Tier Selection is Medium.
23,762
[ "1003746", "1003708" ]
[ "366408", "50794" ]
01485842
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01485842/file/978-3-642-41329-2_9_Chapter.pdf
Lars Wolter email: [email protected] Haygazun Hayka Rainer Stark email: [email protected] Improving the Usability of Collaboration Methods and Technologies in Engineering Keywords: Collaboration, collaborative engineering, PLM, heterogeneous IT landscape, intellectual property rights INTRODUCTION The industry sets new requirements for collaborative engineering due to technological improvements in products and product development methods, the increasing complexity of supply chains and the trend to establish virtual teams. The paper discusses these requirements and the resulting fields in need of action. Each field shows different opportunities for the industry. Technology can give better integration and usability, the processes can be more transparent and standardized, and the human factors can receive more attention to increase the motivation of the stakeholders. To utilize all the opportunities in these fields, the collaborative engineering processes, methods and tools cannot focus only on the technical goals and how to achieve them; they must also consider the human factor, comprising the other stakeholders and their reasons. Only then can task work be united with teamwork for successful collaboration. The need for Collaborative Engineering The increasing complexity of consumer products and industrial goods also increases the complexity of their development. This is caused by a rising number of parts in each individual product and especially by the combination of multiple engineering domains into a single product. Additional complexity in today's product development originates from release cycles that need to get shorter to stay competitive. To address this complexity a company needs to involve more people in the development process, each being an expert in his domain. This also includes the engineers, who are experts for specific domains, parts, functionalities or steps in the development and manufacturing process. As with common meetings, collaboration is a task that does not produce anything by itself but needs to be done well to be successful. Therefore the collaboration needs to be efficient and natural for all participants. Collaboration in product development happens on many different levels, ranging from asynchronous groupware systems and telecooperation solutions to viewing collaboration and full-featured interactive collaboration. Additionally, the collaboration can be done locally or across long distances, and it can take place with or without the use of digital tools. There is also a difference between collaboration inside companies, which happens intensively, and collaboration with partners and suppliers, which is performed less intensively [START_REF] Müller | Study on Collaborative Product Development and Digital Engineering Tools[END_REF], as shown in figure 1. This increases the complexity of collaboration management, which formerly only needed to supply meeting rooms and a telephone number to allow collaboration. To manage all the collaboration scenarios in companies today, additional people who are experts in the area of collaboration need to be involved. The focus of this paper lies on engineering scenarios during virtual engineering. It does not address collaboration during concept development, production or selling. Fig. 1. -Distribution of Engineering Working Time [START_REF] Müller | Study on Collaborative Product Development and Digital Engineering Tools[END_REF]. 
Communication, coordination and negotiation happen intensively in the industry. Problems of Collaborative Engineering In collaboration there is a large field of problems that can be analyzed and addressed. This article focuses on collaboration supported by digital tools that an engineer uses during his product development tasks. For an engineer the types of tasks have changed over time. Much more information is needed to fulfill his day-to-day tasks, forcing him to spend a lot of his time searching for and acquiring this information, not only from IT systems but also from other colleagues or partners. This also means he has to supply the information he generates during his work, either by storing it in an IT system or by directly communicating it to other people. This additional overhead is already part of the collaboration happening in a company. Looking at basic collaboration tasks without the use of digital tools already reveals a lot of problems. If you leave someone a message on a post-it, it can get lost during cleaning; your block of post-its could be empty; or you try to find someone for a talk but don't know the room where that person currently is. Those problems can be directly translated to collaboration using digital tools: for example the email getting lost in the spam folder, no free space on the network drives, or the Web-Ex session that cannot be initiated because of a too restrictive firewall. These kinds of problems can be addressed very effectively by rules and good organization, which are necessary to achieve robustness in using digital engineering technology [START_REF] Stark | The way forward of Virtual Product Creationhow to achieve robustness in using digital engineering technology?[END_REF]. But digital tools for collaboration introduce new kinds of problems that need to be addressed separately. Heterogeneity. The increasing fragmentation of companies, either through outsourcing or by integrating other companies, leads to the use of multiple IT systems dedicated to the same kind of task. This is also true for the collaboration tools, either stand-alone or integrated, because most of them can only be connected to tools of the same vendor. This is not a problem with the telephone, for example; it can be used with every other telephone from different vendors. Tool Acceptance. An engineer today has to use multiple digital tools to solve his engineering tasks; these include CAx, PDM, ERP, MES, Excel, Outlook and various other tools. The number of tools increases in a collaboration situation because an extra collaboration tool needs to be used. The same goes for engineering tools which normally only the collaboration partner uses. This is a frustrating situation for the engineer, because it may be a tool he only uses irregularly, when contacting a supplier for example, and which sometimes differs from his regular tools in usage and even in methodical concepts. The engineer naturally resists using this extra tool for collaboration, resulting in reduced efficiency or less collaboration. This also happens in local collaboration scenarios, where multiple people discuss a situation with a specific tool. This happens in design reviews for example, where a dedicated operator is necessary to operate the tool, introducing an extra layer into the interaction. Protection of intellectual property. The ideas of new products as well as the methods and processes to produce them are of constant interest to all competitors in the market. Therefore this knowledge needs to be kept secret. 
Collaboration using digital tools is normally associated with sharing your own digital data. This results in trust problems with any kind of new or specialized tool which connects multiple stakeholders for collaborative purposes. In contrast, generally used tools like email enjoy very high trust even if they are used insecurely. The problem is to make any kind of collaboration tool trusted to protect the intellectual property; otherwise it cannot be used effectively. How are these problems currently addressed The market has a multitude of solutions to achieve collaboration in engineering. The system vendors of larger software products have started to integrate collaboration features into their products. This includes features common to current social media products, like text, audio and video chat combined with the sharing of product data. One example of those solutions is 3D-Live in Catia v6 from Dassault-Systemes. Such a system works very well in a homogeneous environment, in which all participants use the same CAD program which tightly integrates with the PDM of the same vendor to make product data sharing available for collaborative engineering. Using a different CAD system breaks all the collaboration features. This problem can be solved by deploying all applications to every user, forcing the engineer to use multiple IT systems in collaborative scenarios. Another way to address the heterogeneity problem is to use stand-alone collaboration solutions. Stand-alone solutions are a less complex extra piece of software, but they also need a context switch and the conversion of the data from the engineering tool. Multiple vendors have developed stand-alone tools to allow collaboration without the need to supply the whole authoring tool set to all users. There are two kinds of stand-alone solutions for collaboration. Screen sharing solutions are the easiest to use in this category and don't need any data conversion. They work by capturing the contents of the screen or of a specific application window and transmitting that as a video stream to the other participants. These solutions are well known in web conferencing and are widely used because of their ease of use. The user just starts the screen sharing solution and decides which screen to share. After the setup, he only operates his preferred tool. Products in this category are WebEx from Cisco or WebConf from Adobe. But the concept of screen sharing lacks the possibility of equal participation in the collaboration scenario, because only one participant can present his screen to all others. Even when using multiple monitors, a monitor for each participant would be needed to present all the different views. The second type of solution is applications that use their own visualization engine to display geometric models. Most of those solutions import data from the engineering tool and share them across multiple participants. Because these tools are separate from the authoring tools, they normally only import common exchange formats. This means the collaborating engineer needs to export the product data from his application in a suitable format for the collaboration tool, and all the data need to be transmitted to all participants. This conversion and transfer of product data induces long setup times. This kind of tool can offer a lot of functionality for altering the view and leaving annotations and therefore supports the collaboration very well, but it requires additional tool knowledge from the engineer. 
Changes that can only be done in the authoring system would need an export of the changed product data and a redistribution of those changes to all participants. The most basic way of collaboration through sharing data is handled asynchronously. Data management solutions allow locking of complete files by different users, but there is no technique available that can merge two differently changed CAD files the way Microsoft Word offers for documents. Most PDM and other data storage systems ease the asynchronous collaboration with large files by duplicating them to different sites, making them rapidly available from different locations. This is still limited in speed, but is already well established across the industries. The last kind of well-established collaboration solutions consists of web-based groupware solutions. They are often based around whole communication solutions for email, task and workflow management. They allow the users to exchange tasks, documents and other information using wikis, forums or blogs. Newer systems also incorporate so-called social office functions to allow commenting, rating and sharing of information across the company intranet. Examples of these kinds of systems are Microsoft Office with Sharepoint and Outlook or open source groupware solutions like Liferay, Tiki-Wiki or other portal solutions. These solutions are independent of product development but can be customized to fit specific products and companies. The type of collaboration is limited, but due to the inclusion of real-time communication and web-based editors for documents these solutions are not only used for cooperation but also for full collaboration. Research to increase the collaboration efficiency Some research is being done to support engineering collaboration. One approach is to let different CAD systems communicate with each other, to allow collaborative design even with heterogeneous CAD systems. One of those approaches [START_REF] Li | Real-Time Collaborative Design With Heterogeneous CAD Systems Based on Neutral Modeling Commands[END_REF] uses a common set of commands. Every CAD system translates its own authoring commands to this common set, which is distributed to all participants and, at each target platform, converted to a command of that specific CAD system. Therefore it does not need to exchange any product data beforehand. This approach allows concurrent design from the beginning; however, it does not allow modification of existing product data. Other research is being done to allow better network communication for communication applications, even in firewalled scenarios across companies [START_REF] Stark | Verteilte Design Reviews in heterogenen Systemwelten[END_REF]. This is a very fundamental type of research affecting nearly all collaboration attempts which communicate through the internet. Also very fundamental is research to resolve conflicts in collaboratively authored CAD. This is addressed, for example, by research activities to integrate Boolean operations into CAD systems [START_REF] Zheng | Conflict resolution of Boolean operations by integration in real-time collaborative CAD systems[END_REF]. There is also research activity to enhance distributed design reviews. There are solutions that adapt to different kinds of devices, allowing the use of large VR cave systems together with participants only using desktop computers during the collaboration. One such approach is documented in [START_REF] Daily | Distributed Design Review in Virtual Environments[END_REF]. 
The setup they use relies on VRML models to visualize 3D data but can also show the screens of the participants using screen sharing. Many research projects focus on the protection of intellectual property. Everything related to encryption only protects the data while it is traveling between the collaboration participants. If the aim is to prevent one of the participants from misusing data from other participants, encryption is of no use. Therefore some research projects describe methods and algorithms to watermark geometric models [START_REF] Kuo | A Blind Robust Watermarking Scheme for 3D Triangular Mesh Models Using 3D Edge Vertex Detection[END_REF]. This way it can be traced back where copies originated, allowing the owner to sue. Other methods reduce the detail in certain areas by using multiresolution meshes and specifying the detail priorities of certain areas [START_REF] Zyda | User-controlled creation of multiresolution meshes[END_REF]. On the other hand, there is also research going on in 3D model reconstruction from image sequences. With flaws, this was possible from camera images as early as 1996 [START_REF] Beardsley | 3D model acquisition from extended image sequences[END_REF]. Better algorithms, the good quality of the rendered images and faster computers allow a much better reconstruction today [START_REF] Snavely | Scene Reconstruction and Visualization from Internet Photo Collections: A Survey[END_REF]. But for all reconstruction algorithms it is important that there is at least one image of every feature necessary to reconstruct the original model. If the backside or inside of an object is never shown, it cannot be reconstructed. 1.5 Areas in need of Action Technology. The technology for collaboration is in a very good state, but there are technological problems when different solutions need to be connected. The rapid increase in information and the expectation of its global availability introduce a new field of information management that does not require a central distribution point but rather intelligent information containers that can manage the information they contain and route it to the systems and participants in a collaborative scenario that need it. Besides the technical areas that are in need of action, the collaboration between engineers needs to address human factors to make collaboration something an engineer wants to do instead of something he needs to do. This can be seen in the interaction with tools, where a user is pleased to use a tablet PC to read his newspaper using simple gestures. This kind of technology, common in consumer products, needs to be adapted to solutions for the industry. Processes. The business processes need to address the specifics of collaborative processes. When working in a collaborative manner, each step has a meaning for the task the group does together, like the patterns in [START_REF] De Vreede | Collaboration Engineering: Designing Repeatable Processes for High-Value Collaborative Tasks[END_REF]. This can also lead to additional tasks for collaborative processes, because after a collaborative session the information gathered from multiple participants needs to be converged before it can be evaluated. Methods. The methods of collaboration can range from coarse ones, such as choosing between a personal meeting, a phone call or an email, to very fine-grained methods that define the format of the emails to send for specific tasks. 
Having a method for handling specific collaboration tasks is a must to ensure a correct flow of information. NEW SOLUTIONS FOR COLLABORATIVE ENGINEERING To fill some of the gaps presented above, the following solutions are described. These solutions handle different collaboration scenarios like local collaboration on multitouch tables as well as remote collaboration over the network. The technology for touch is well established in the consumer area and is also very mature in its use. To be of real benefit in industry, not only does the technology need to be used, but the methods must also be adapted to the scenarios where multitouch environments can be used to raise the efficiency of engineering processes. During the research at the Fraunhofer IPK, methods were developed to visualize product structures on multitouch tables [START_REF] Woll | Kollaborativer Design Review am Multitouchtisch[END_REF]. The requirement is good usability with touch devices, but the visualization must also be understandable and usable in a multi-person scenario where the participants have different views onto the multitouch device. This resulted in a Voronoi [START_REF] Balzer | Voronoi treemaps for the visualization of software metrics[END_REF] based structure as seen in figure 2. This special structure was analyzed to see whether typical engineering tasks like search and compare operations can be fulfilled with it [START_REF] Schulze | Intuitive Interaktion mit Strukturdaten aus einem PLM-System[END_REF]. Fig. 3. Multi-user multitouch environment for design reviews. Using current Touch Technology for local Cooperation In figure 3 this special application can be seen in a multi-user environment. It allows multiple participants in a meeting to either work in their own workplace or to cooperate with some or all of the other participants. This allows part of the cooperating group to prepare their content while others are discussing a previous item. The example is the door of a car. In this example the expert for the door opener can discuss some details with the chassis expert while the experts for the window lifter system explain the use of an extraordinarily expensive part to the management. Technology for secure and instant collaboration The solution presented here constitutes a combination of screen sharing and local visualization at each participant. In contrast to screen sharing, not the whole program window or the whole desktop is transmitted, but only the 2D image of the rendered 3D model, which is superimposed with the images of all participants. A correct superposition is necessary so that every participant can correctly perceive the visual impression of the complete product and properly interpret the correlations and distances between the components. This collaboration technique focuses on the different scenarios shown in figure 4. All participants shown in figure 4 see a 3D representation of the object being reviewed, in this case a truck. The parts in blue exist locally as 3D models; they are rendered on the local computer and the rendered image is transferred to all participants. The gray parts of the model do not exist on the local computer. They are just 2D images streamed from one of the other participants. All views share the same point of view and orientation while looking at the truck. This information is also shared among the users and consists of a simple matrix. 
The scenario can also incorporate special participants like a mobile lead engineer who only needs a web browser to join the session. He does not supply any 3D model; he just consumes the images. The opposing case is the PDM system at site B, which just renders its locally stored data and sends it to the others. This participant does not consume any information. All the other participants, the OEM at site east and the two suppliers, deliver their own data and consume data from the others. The OEM holds the 3D models of the chassis, while supplier A holds the cabin and supplier B the wheels. They only deliver their own property as images, without having to fear that, for example, supplier B could steal the 3D model data from supplier A. To achieve the correct superposition, a so-called depth image is transmitted additionally. The depth images can be used to decide, for every pixel, from which 2D image the respective pixel should be taken (figure 5). To create, transmit or show it on a screen, a 2D image of a 3D model must be rendered. This process is called image synthesis: for each pixel on the screen it must be determined which part of the 3D model it represents. The color of that point is then shown at this pixel (figure 5, bottom left). The depth image is constructed on the same principle. This is achieved by storing a distance value instead of a color value. If this distance value is interpreted as a color, the value that represents the pixel is brighter for larger distances and darker for shorter distances. Each participant creates a color and a depth rendering, and all of them are collected. These can now be used to construct a joint image. The condition is that all participants have the same point of view. This joint point of view acts as a zero point of the viewed scene and is reached by using the aforementioned matrix that is shared among the participants. This aspect corresponds to the technique of local visualization, where likewise just the view matrix needs to be exchanged. Assume there are three participants with color renderings C1-C3 and depth renderings D1-D3. Then for each pixel it is analyzed which of the three Dx is the darkest. If it is D3, the color value of C3 is used. That way a new image is assembled in which the models of all three participants are integrated (a short sketch of this per-pixel selection is given below). An interpolation allows combining images with different resolutions as long as the aspect ratio is maintained. This is an advantage when generating images on hardware with different performance. The images from different sources can still be combined. The realization of this technology was executed in a prototypic collaboration tool. Increased acceptance through collaboration using CAD systems To approach the problem of acceptance, methods were evaluated to include collaboration functionality in existing CAD systems. Because of the strong link with the rendering, an easy connection via a plugin is difficult. In the course of the work, the CAD systems Spaceclaim and NX were extended via plugins in a way that they communicate their viewing location to other participants and can react to commands. In doing so, the CAD systems showed limitations of different magnitude. The realization of the plugin proved more difficult for NX than for Spaceclaim. Automatic reactions, e.g. to the network commands, could not be realized with the NX API until the end of the project. 
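Coming back to the image merging described above, a minimal sketch of this per-pixel, depth-based selection, assuming the participants' color and depth renderings are available as NumPy arrays of equal size (all names here are illustrative and not part of the described prototype), could look like this:

    import numpy as np

    def merge_renderings(colors, depths):
        """Merge per-participant renderings into one joint image.

        colors: list of HxWx3 arrays (one color rendering per participant)
        depths: list of HxW arrays (corresponding depth renderings;
                darker / smaller values mean closer to the camera)
        """
        depth_stack = np.stack(depths)             # shape: (P, H, W)
        color_stack = np.stack(colors)             # shape: (P, H, W, 3)
        nearest = np.argmin(depth_stack, axis=0)   # per pixel: which Dx is darkest
        h_idx, w_idx = np.indices(nearest.shape)
        return color_stack[nearest, h_idx, w_idx]  # take the color of that participant

    # Example with three participants C1-C3 / D1-D3:
    # joint = merge_renderings([C1, C2, C3], [D1, D2, D3])

Interpolation to a common resolution, as mentioned above, would take place before this merging step.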
Nonetheless, the realization of the plugins proved that a collaborative coupling of two very diverse CAD systems is possible, and a consequent commitment across manufacturers towards an open adaptation of the API according to the CPO could create new possibilities. Similar to [START_REF] Li | Real-Time Collaborative Design With Heterogeneous CAD Systems Based on Neutral Modeling Commands[END_REF], it was also tested to fulfill some modeling tasks using heterogeneous CAD systems. But the target was different from that paper, because the focus is still the design review scenario where existing CAD models need to be examined and possibly changed. Therefore, the problem of identifying the same parts in the different systems needs to be managed. The research in this area is still ongoing in order to deliver a solid solution. CONCLUSIONS The increasing need for effective collaboration solutions is a challenge for today's companies, but system vendors and research facilities continue to generate better solutions. Useful cross-vendor programs like the codex of PLM openness can ensure that future collaboration solutions will not suffer from heterogeneous IT landscapes. The continuous migration of user interaction technologies from the consumer market into industry creates opportunities for user-friendly and easy-to-use solutions. The presented solutions and the ongoing research work introduce new accents for collaboration. These accents can be used by industry to bring new concepts of collaboration into the companies and to increase their collaboration abilities. Fig. 2. Touch-optimized visualization of a product structure. Fig. 4. The different collaboration scenarios condensed into a single collaboration. Fig. 5. Description of the image merging algorithm that merges the two rendered images displayed at the top into a single image in the bottom middle. It uses a depth image, explained on the bottom left. The bottom right describes the basic principle of rendering a 3D scene into a 2D image in conjunction with the rendering of a depth image. 
26,624
[ "1003747", "1003748" ]
[ "86624", "306935", "306935" ]
01485925
en
[ "spi" ]
2024/03/04 23:41:48
2009
https://hal.science/hal-01485925/file/doc00026647.pdf
Neila Bhouri Habib Haj Salem Improving Travel Time Reliability Using Ramp Metering: Field Assessment results on A6W Motorway in Paris
242
[ "1278852" ]
[ "81038", "81038" ]
01485931
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01485931/file/978-3-642-38530-8_2_Chapter.pdf
Henrich C Pöhls Stefan Peters email: [email protected] Kai Samelin Joachim Posegga Hermann De Meer email: [email protected] Malleable Signatures for Resource Constrained Platforms Malleable signatures allow the signer to control alterations to a signed document. The signer limits alterations to certain parties and to certain parts defined during signature generation. Admissible alterations do not invalidate the signature and do not involve the signer. These properties make them a versatile tool for several application domains, like e-business and health care. We implemented one secure redactable and three secure sanitizable signature schemes on secure, but computationally bounded, smart card. This allows for a secure and practically usable key management and meets legal standards of EU legislation. To gain speed we securely divided the computing tasks between the powerful host and the card; and we devise a new accumulator to yield a useable redactable scheme. The performance analysis of the four schemes shows only a small performance hit by the use of an off-the-shelf card. Introduction Digital signatures are technical measures to protect the integrity and authenticity of data. Classical digital schemes that can be used as electronic signatures must detect any change that occurred after the signature's generation. Digital signatures schemes that fulfill this are unforgeable, such as RSA-PSS. In some cases, controlled changes of signed data are required, e.g., if medical health records need to be sanitized before being made available to scientists. These allowed and signer-controlled modifications must not result in an invalid signature and must not involve the signer. This rules out re-signing changed data or changes applied to the original data by the signer. Miyazaki et al. called this constellation the "digital document sanitization problem" [START_REF] Miyazaki | Digital documents sanitizing problem[END_REF]. Cryptographic solutions to this problem are sanitizable signatures (SSS) [START_REF] Ateniese | Sanitizable Signatures[END_REF] or redactable signatures (RSS) [START_REF] Johnson | Homomorphic signature schemes[END_REF]. These have been shown to solve a wide range of situations from secure routing or anonymization of medical data [START_REF] Ateniese | Sanitizable Signatures[END_REF] to e-business settings [START_REF] Pöhls | The role of data integrity in eu digital signature legislation -achieving statutory trust for sanitizable signature schemes[END_REF][START_REF] Pöhls | Sanitizable Signatures in XML Signature -Performance, Mixing Properties, and Revisiting the Property of Transparency[END_REF][START_REF] Tan | Applying sanitizable signature to web-service-enabled business processes: Going beyond integrity protection[END_REF]. For a secure and practically usable key management, we implemented four malleable signature schemes on an off-the-shelf smart card. Hence, all the algorithms that involve a parties secret key run on the smart card of that party. Smart cards are assumed secure storage and computation devices which allow to perform these actions while the secret never leaves the card's protected computing environment. However, they are computationally bounded. Contribution To the best of our knowledge, no work on how to implement these schemes on resource constraint platforms like smart cards exists. Additional challenges are sufficient speed and low costs. 
Foremost, the smart card implementation must be reasonably fast and manage all the secrets involved on a resource constraint device. Secondly, the implementation should run on off-the-shelf smart cards; cheaper cards only offer fast modular arithmetics (e.g., needed for RSA signatures). The paper's three core contribution are the: (1) analysis and selection of suitable and secure schemes; (2) implementation of three SSSs and one RSS scheme to measure runtimes; (3) construction of a provably secure RSS based on our newly devised accumulator with a semi-trusted third party. Previously only accumulators with fully-trusted setups where usably fast. This paper shows how to relax this requirement to a semi-trusted setup. Malleable signatures on smart cards allow fulfilling the legal requirement of keeping keys in a "secure signature creation device" [START_REF] Ec | Directive 1999/93/EC from 13 December 1999 on a Community framework for electronic signatures[END_REF]. Overview and State of the Art of Malleable Signatures With a classical signature scheme, Alice generates a signature σ using her private key sk sig and the SSign algorithm. Bob, as a verifier, uses Alice's public key pk sig to verify the signature on the given message m. Hence, the authenticity and integrity of m is verified. Assume Alice's message m is composed of a uniquely reversible concatenation of blocks, i.e., m = (m [START_REF] Ahn | Computing on authenticated data[END_REF], m[2], . . . , m[ ]). When Alice uses a RSS, it allows that every third party can redact a block m[i] ∈ {0, 1} * . To redact m[i] from m means creating a m without m[i], i.e., m = (. . . , m[i -1], m[i + 1], . . . = (. . . , m[i -1], m[i] , m[i + 1], . . . ). In comparison to RSSs, sanitization requires a secret, denoted as sk san , to derive a new signature σ , such that (m , σ ) verifies under the given public keys. A secure RSS or SSS must at least be unforgeable and private. Unforgeability is comparable to classic digital signature schemes allowing only controlled modifications. Hence, a positive verification of m by Bob means that all parts of m are authentic, i.e., they have not been altered in a malicious way. Privacy inhibits a third party from learning anything about the original message, e.g., from a signed redacted medical record, one cannot retrieve any additional information besides what is present in the given redacted record. The concept behind RSSs has been introduced by Steinfeld et al. [START_REF] Steinfeld | Content extraction signatures[END_REF] and by Johnson et al. [START_REF] Johnson | Homomorphic signature schemes[END_REF]. The term SSS has been coined by Ateniese et al. [START_REF] Ateniese | Sanitizable Signatures[END_REF]. Brzuska et al. formalized the standard security properties of SSSs [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF]. RSSs were formalized for lists by Samelin et al. [START_REF] Samelin | Redactable signatures for independent removal of structure and content[END_REF]. We follow the nomenclatures of Brzuska et al. [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF]. If possible, we combine explanations of RSSs and SSSs to indicate relations. In line with existing work we assume the signed message m to be split in blocks m[i], indexed by their position. W.l.o.g., we limit the algorithmic descriptions in this paper to simple structures to increase readability. Algorithms can be adapted to work on other data-structures. 
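To make the block notation concrete, the following small sketch (plain Python lists stand in for messages; the helper names are hypothetical and signature handling is omitted) illustrates redaction and sanitization on the block level:

    def redact(message, i):
        """Return the message without block i (RSS-style redaction)."""
        return message[:i] + message[i + 1:]

    def sanitize(message, adm, i, new_block):
        """Replace admissible block i by an arbitrary new block (SSS-style)."""
        if i not in adm:
            raise ValueError("block %d is not admissible" % i)
        return message[:i] + [new_block] + message[i + 1:]

    m = ["name", "diagnosis", "billing data"]
    m_red = redact(m, 1)                     # ["name", "billing data"]
    m_san = sanitize(m, {1}, 1, "REDACTED")  # ["name", "REDACTED", "billing data"]

In the actual schemes, Redact and Sanit must additionally produce a new signature σ' for the modified message m', as detailed below.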
We keep our notation of Sanitizer general, and also cater for multiple sanitizers, denoted as Sanitizer i [START_REF] Canard | Sanitizable signatures with several signers and sanitizers[END_REF]. Currently, there are no implementations of malleable signatures considering multi-sanitizer environments. A related concept are proxy signatures [START_REF] Mambo | Proxy signatures for delegating signing operation[END_REF]. However, they only allow generating signatures, not controlled modifications. We therefore do not discuss them anymore. For implementation details on resource constrained devices, refer to [START_REF] Okamoto | Extended proxy signatures for smart cards[END_REF]. Applications of Malleable Signatures One reason to use malleable signatures is the unchanged root of trust: the verifier only needs to trust the signer's public key. Authorized modifications are specifically endorsed by the signer in the signature and subsequent signature verification establishes if none or only authorized changes have occurred. In the e-business setting, SSS allows to control the change and to establish trust for intermediary entities, as explained by Tan and Deng in [START_REF] Tan | Applying sanitizable signature to web-service-enabled business processes: Going beyond integrity protection[END_REF]. They consider three parties (manufacturer, distributor and dispatcher ) that carry out the production and the delivery to a forth party, the retailer. The distributor produces a malleable signature on the document and the manufacturer and dispatcher become sanitizers.Due to the SSS, the manufacturer can add the product's serial number and the dispatcher adds shipment costs. The additions can be done without involvement of the distributor. Later, the retailer is able to verify all the signed information as authentic needing only to trust the distributor. Legally binding digital signatures must detect "any subsequent change" [START_REF] Ec | Directive 1999/93/EC from 13 December 1999 on a Community framework for electronic signatures[END_REF], a scheme by Brzuska et al. was devised to especially offer this public accountability [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF]. Another reason to use a malleable signature scheme is their ability to sign a large data set once, and then to only partly release this information while retaining verifiability. This privacy notion allows their application in healthcare environments as explained by Ateniese et al. [START_REF] Ateniese | Sanitizable Signatures[END_REF]. For protecting trade secrets and for data protection it is of paramount important to use a private scheme. Applications that require to hide the fact that a sanitization or redaction has taken place must use schemes that offer transparency, which is stronger than privacy [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF]. However, the scheme described by Tan and Deng is not private according to the state-of-the-art cryptographic strict definition [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF]. Motivation for Smart Cards To facilitate RSSs and SSSs in practical applications, they need to achieve the same level of integrity and authenticity assurance as current standard digital signatures. This requires them to be unforgeable while being linkable to the legal entity that created the signature on the document. 
To become fully recognized by law, i.e., to be legally equivalent to hand-written signatures, the signature needs to be created by a "secure signature creation device" (SSCD) [START_REF] Ec | Directive 1999/93/EC from 13 December 1999 on a Community framework for electronic signatures[END_REF]. Smart cards serve as such an SSCD [START_REF] Meister | Protection profiles and generic security targets for smart cards as secure signature creation devices -existing solutions for the payment sector[END_REF]. They allow for using a secret key, while providing a high assurance that the secret key does not leave the confined environment of the smart card. Hence, smart cards help to close the gap and make malleable signatures applicable for deployment in real applications. State of the art secure RSSs and SSSs detect all modifications not endorsed by the signer as forgeries. Moreover, Brzuska et al. present a construction in [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] and show that their construction fulfills EU's legal requirements [START_REF] Pöhls | The role of data integrity in eu digital signature legislation -achieving statutory trust for sanitizable signature schemes[END_REF]. Sanitizable and Redactable Signature Schemes We assume the verifier trusts and possesses the Signer's public key pk sig and can reconstruct all other necessary information from the message-signature pair (m, σ) alone. Existing schemes have the following polynomial time algorithms: SSS := (KGen sig , KGen san , Sign SSS , Sanit SSS , Verify SSS , Proof SSS , Judge SSS ) RSS := (KGen sig , Sign RSS , Verify RSS , Redact RSS ) Key Generation (SSS, RSS). Generates key pairs. Only SSSs need KGen san . (pk sig , sk sig ) ← KGen sig (1 λ ), (pk i san , sk i san ) ← KGen san (1 λ ) Signing (SSS, RSS). Requires the Signer's secret key sk sig . For Sign SSS , it additionally requires all sanitizers' public keys {pk 1 san , . . . , pk n san }. adm describes the sanitizable or redactable blocks, i.e., adm contains their indices. (m, σ) ← Sign SSS (m, sk sig , {pk 1 san , . . . , pk n san }, adm), (m, σ) ← Sign RSS (m, sk sig ) Sanitization (SSS) and Redaction (RSS). The algorithms modify m according to the instruction in mod, i.e., m ← mod(m). For RSSs, mod contains the indices to be redacted, while for SSSs, mod contains index/message pairs {i, m[i] } for those blocks i to be sanitized. They output a new signature σ for m . SSSs require a sanitizer's private key, while RSSs allow for public alterations. (m , σ ) ← Sanit SSS (m, mod, σ, pk sig , sk i san ), (m , σ ) ← Redact RSS (m, mod, σ, pk sig ) Verification (SSS, RSS). The output bit d ∈ {true, false} indicates the correctness of the signature with respect to the supplied public keys. d ← Verify SSS (m, σ, pk sig , {pk 1 san , . . . , pk n san }), d ← Verify RSS (m, σ, pk sig ) Proof (SSS). Uses the signer's secret key sk sig , message/signature pairs and the sanitizers' public keys to output a string π ∈ {0, 1} * for the Judge SSS algorithm. π ← Proof SSS (sk sig , m, σ, {(m i , σ i ) | i ∈ N + }, {pk 1 san , . . . , pk n san }) Judge (SSS). Using proof π and public keys it decides d ∈ {Sig, San i } indicating who created the message/signature pair (Signer or Sanitizer i ). d ← Judge SSS (m, σ, pk sig , {pk 1 san , . . . 
, pk n san }, π) Security Properties of RSSs and SSSs We consider the following security properties as formalized in [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF][START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] : Unforgeability (SSS, RSS) assures that third parties cannot produce a signature for a "fresh" message. "Fresh" means it has been issued neither by the signer, nor by the sanitizer. This is similar to the unforgeability requirements of standard signature schemes. Immutability (SSS, RSS) immutability prevents the sanitizer from modifying non-admissible blocks. Most RSSs do treat all blocks as redactable, but if they differentiate, immutability exists equally, named "disclosure secure" [START_REF] Samelin | Redactable signatures for independent removal of structure and content[END_REF]. Privacy (SSS, RSS) inhibits a third party from reversing alterations without knowing the original message/signature pair. Accountability (SSS) allows to settle disputes over the signature's origin. Trade secret protection is initially achieved by the above privacy property. Cryptographically stronger privacy notions have also been introduced: Unlinkability (SSS, RSS) prohibits a third party from linking two messages. All current notions of unlinkability require the use of group signatures [START_REF] Brzuska | Unlinkability of sanitizable signatures[END_REF]. Schemes for statistical notions of unlinkability only achieve the less common notion of selective unforgeability [START_REF] Ahn | Computing on authenticated data[END_REF]. We do not consider unlinkability, if needed it can be achieved using a group signature instead of a normal signature [START_REF] Canard | Implementing group signature schemes with smart cards[END_REF]. Transparency (SSS, RSS) says that it should be impossible for third parties to decide which party is accountable for a given signature-message pair. However, stronger privacy has to be balanced against legal requirements. In particular, transparent schemes do not fulfill the EU's legal requirements for digital signatures [START_REF] Pöhls | The role of data integrity in eu digital signature legislation -achieving statutory trust for sanitizable signature schemes[END_REF]. To tackle this, Brzuska et al. devised a non-transparent, yet private, SSS with non-interactive public accountability [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF]. Their scheme does not impact on privacy and fulfills all legal requirements [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF][START_REF] Pöhls | The role of data integrity in eu digital signature legislation -achieving statutory trust for sanitizable signature schemes[END_REF]. Non-interactive public accountability (SSS, RSS) offers a public judge, i.e., without additional information from the signer and/or sanitizer any third party can identify who created the message/signature pair (Sig or San i ). Implementation on Smart Cards First, the selected RSSs and SSSs must be secure following the state-of-the-art definition of security, i.e, immutable, unforgeable, private and either transparent or public-accountable. Transparent schemes can be used for applications with high privacy protection, e.g., patient records. Public accountability is required for a higher legal value [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF]. 
Second, the schemes' underlying cryptographic foundation must perform well on many off-the-shelf smart cards. Hence, we chose primitives based on RSA operations, which compute efficiently due to hardware acceleration. The following schemes fulfill the selection criteria and have been implemented: Each participating party has its own smart card, protecting each entity's secret key. The algorithms that require knowledge of the private keys sk sig or sk i san are performed on card. Hence, at least Sign and Sanit involve the smart card. When needed, the host obtains the public keys out of band, e.g., via a PKI. SSS Scheme BFF + 09 [5] The scheme's core idea is to generate a digest for each admissible block using a tag-based chameleon hash [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF]. Finally, all digests are signed with a standard signature scheme. At first, let S := (SKGen, SSign, SVerify) be a regular UNF-CMA secure signature scheme. Moreover, let CH := (CHKeyGen, CHash, CHAdapt) be a tag-based chameleon hashing scheme secure under random-tagging attacks. Finally, let PRF be a pseudo random function and PRG a pseudo random generator. We modified the algorithms presented in [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF] to eliminate the vulnerability identified by Gong et al. [START_REF] Gong | Fully-secure and practical sanitizable signatures[END_REF]. See [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF] for the algorithms and the security model. Key Generation: KGen sig on input of 1 λ generates a key pair (sk, pk) ← SKGen(1 λ ), chooses a secret κ ← {0, 1} λ and returns (sk sig , pk sig ) ← ((sk, κ), pk). KGen san generates a key pair (sk ch san , pk ch san ) ← CHKeyGen(1 λ ). Signing: Sign on input of m, sk sig , pk ch san , adm generates nonce ← {0, 1} λ , computes x ← PRF(κ, nonce), followed by tag ← PRG(x), and chooses r[i] $ ← {0, 1} λ for each i ∈ adm at random. For each block m[i] ∈ m it lets h[i] ← CHash(pk ch san , tag, (m, m[i]), r[i]) if i ∈ adm, and h[i] ← m[i] otherwise. It then computes σ 0 ← SSign(sk sig , (h, pk ch san , adm)), where h = (h[0], . . . , h[l]), and returns σ = (σ 0 , tag, nonce, adm, r[0], . . . , r[k]), where k = |adm|. Sanitizing: Sanit on input of a message m, modification information mod, a signature σ = (σ 0 , tag, nonce, adm, r[0], . . . , r[k]), pk sig and sk ch san checks that mod is admissible and that σ 0 is a valid signature for (h, pk ch san , adm). On error, it returns ⊥. It sets m' ← mod(m), chooses values nonce' $ ← {0, 1} λ and tag' $ ← {0, 1} 2λ and replaces each r[j] in the signature by r'[j] ← CHAdapt(sk ch san , tag, (m, m[j]), r[j], tag', (m', m'[j])). It assembles σ' = (σ 0 , tag', nonce', adm, r'[0], . . . , r'[k]), where k = |adm|, and returns (m', σ'). Verification: Verify on input of a message m, a signature σ = (σ 0 , tag, nonce, adm, r[0], . . . , r[k]), pk sig and pk ch san lets, for each block m[i] ∈ m, h[i] ← CHash(pk ch san , tag, (m, m[i]), r[i]) if i ∈ adm and h[i] ← m[i] otherwise, and returns SVerify(pk sig , (h, pk ch san , adm), σ 0 ), where h = (h[0], . . . , h[l]). Proof: Proof on input of sk sig , m, σ, pk ch san and a set of tuples {(m i , σ i )} i∈N from all previously signer-generated signatures tries to look up a tuple (pk ch san , tag, m[j], r[j]) such that CHash(pk ch san , tag, (m, m[j]), r[j]) = CHash(pk ch san , tag i , (m i , m i [j]), r i [j]). Set tag i ← PRG(x i ), where x i ← PRF(κ, nonce i ). Return π ← (tag i , m i , m i [j], j, pk sig , pk ch san , r i [j], x i ). If at any step an error occurs, ⊥ is returned. Judge: Judge on input of m, a valid σ, pk sig , pk ch san and π obtained from Proof checks that pk sig = pk sig π and that π describes a non-trivial collision under CHash(pk ch san , •, •, •) for the tuple (tag, (j, m[j], pk sig ), r[j]) in σ. It verifies that tag π = PRG(x π ) and on success outputs San, else Sig. 3.2 SSS Scheme BFF + 09 [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF] on Smart Card. In this scheme, the algorithms Sign, Proof and CHAdapt from Sanit require secret information. The smart card's involvement is illustrated in Fig. 1. First, during KGen sig we generate κ as a 1024 Bit random number using the smart card's pseudo random generator and store it on card. 
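As an illustration of the structure of the Sign algorithm just described, the following is a minimal, hedged sketch in Python; CHash, SSign, PRF and PRG are treated as black boxes, and all names are illustrative rather than taken from the implementation:

    def sign_bff09(sk_sig, kappa, pk_ch_san, m, adm, prf, prg, chash, ssign, rand):
        """Schematic BFF+09 signing: chameleon-hash the admissible blocks, sign the digest vector."""
        nonce = rand(128)                    # fresh per-signature randomness
        tag = prg(prf(kappa, nonce))         # tag derived from the signer's secret kappa
        r = {i: rand(128) for i in adm}      # one randomizer per admissible block
        h = [chash(pk_ch_san, tag, (tuple(m), m[i]), r[i]) if i in adm else m[i]
             for i in range(len(m))]
        sigma0 = ssign(sk_sig, (tuple(h), pk_ch_san, tuple(sorted(adm))))
        # Sanit later uses CHAdapt with sk_ch_san so that the h[i] of modified
        # admissible blocks, and therefore sigma0, stay valid.
        return (sigma0, tag, nonce, sorted(adm), r)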
To obtain x, illustrated as the invocation of PRF(•, •), the host passes a nonce to the card, which together with κ forms the input for the PRF implementation on card. The card returns x to the host. On the host system, we let tag ← PRG(x). Second, CHAdapt used in Sanit requires a modular exponentiation using d as exponent. d is part of the 2048 Bit private RSA key obtained by CHKeyGen. The host computes only the intermediate result i = ((H(tag, m, m[i]) • r e ) • (H(tag', m', m'[i])) -1 ) mod N from the hash calculation described in [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF] and sends i to the smart card. The final modular exponentiation is performed by the smart card using the RSA decrypt operation, provided by the Java Card API 2 , to calculate r' = i d mod N, and it returns r'. Finally, executing the Proof algorithm on the Signer's host requires the seed x, as it serves as the proof that tag has been generated by the signer. To obtain x, the host proceeds exactly as in the Sign algorithm, calling the PRF implementation on the card with the nonce as parameter. SSS Schemes BFLS09 [6] and BPS12 [8] The core idea is to create and verify two signatures: first, the fixed blocks and the Sanitizer's pk san must bear a valid signature under the Signer's pk sig . Second, admissible blocks must carry a valid signature under either pk sig or pk san . The scheme by Brzuska et al. [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] is a modification of the scheme proposed by Brzuska et al. [START_REF] Brzuska | Sanitizable signatures: How to partially delegate control for authenticated data[END_REF], which is shown to achieve message level public accountability [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] using an additional algorithm called Detect. Both, BFLS09 and BPS12, solely build upon standard digital signatures. We implemented both; due to space restrictions and similarities, we only describe the BPS12 scheme, which achieves blockwise public accountability. Refer to [START_REF] Brzuska | Sanitizable signatures: How to partially delegate control for authenticated data[END_REF] and [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] for the security model. In this section, the uniquely reversible concatenation of all non-admissible blocks within m is denoted FIX m , that of all admissible blocks is denoted as adm m . Key Generation: On input of 1 λ , KGen sig generates a key pair (pk sig , sk sig ) ← SKGen(1 λ ). KGen san generates a key pair (pk san , sk san ) ← SKGen(1 λ ). 3.4 SSS Schemes BFLS09 [START_REF] Brzuska | Sanitizable signatures: How to partially delegate control for authenticated data[END_REF] and BPS12 [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] on Smart Card. We implemented Sign and Sanit with involvement of the smart card. Fig. 2 illustrates the interactions: the hash computations are executed on the host system, while the SSign operations are performed on the smart card. For Sign, the host computes hσ FIX = H(0, FIX m , adm, pk san ) and, for each m[i] ∈ m, hσ[i] = H(1, i, m[i], pk san , pk sig , tag, ⊥); the card returns σ FIX ← SSign(sk sig , hσ FIX ) and σ[i] ← SSign(sk sig , hσ[i]). For Sanit, the host computes h'σ FULL = H(1, i, m'[i], pk san , pk sig , tag, tag') for each m[i] ∈ mod, and the card returns σ'[i] ← SSign(sk san , h'σ FULL ). RSS Scheme PSPdM12 [24] The scheme's core idea is to hash each block and accumulate all digests with a cryptographic accumulator. 
This accumulator value is signed with a standard signature scheme. Each time a block is accumulated, a witness that it is part of the accumulated value is generated. Hence, the signed accumulator value is used to provide assurance that a block was signed given the verifier knows the block and the witness. A redaction removes the block and its witness. They further extended the RSS's algorithms with Link RSS , Merge RSS . We omit them, as they need no involvement of the smart card because they require no secrets. Refer to [START_REF] Pöhls | Transparent mergeable redactable signatures with signer commitment and applications[END_REF] for details on the security model. Building block: Accumulator. For more details than the algorithmic description, refer to [START_REF] Barić | Collision-free accumulators and fail-stop signature schemes without trees[END_REF][START_REF] Benaloh | One-way accumulators: A decentralized alternative to digital signatures[END_REF][START_REF] Lipmaa | Secure accumulators from euclidean rings without trusted setup[END_REF][START_REF] Sander | Efficient accumulators without trapdoor extended abstracts[END_REF]. We require the correctness properties to hold [START_REF] Barić | Collision-free accumulators and fail-stop signature schemes without trees[END_REF]. ACC consists of five PPT algorithms ACC := (Setup, Gen, Dig, Proof, Verf): Setup. Setup on input of the security parameter λ returns the parameters parm, i.e., parm ← Setup(1 λ ) Gen. Gen, on input of the security parameter λ and parm outputs pk i.e., pk ← Gen(1 λ , parm). Dig. Dig, on input of the set S, the public parameter pk outputs an accumulator value a and some auxiliary information aux, i.e, (a, aux) ← Dig(pk, S) Proof. Proof, on input of the public parameter pk, a value y ∈ Y pk and aux returns a witness p from a witness space P pk , and ⊥ otherwise, i.e., p ← Proof(pk, aux, y, S) Verf. On input of the public parameters parm, public key pk, an accumulator a ∈ X pk , a witness p, and a value y ∈ Y pk Verf outputs a bit d ∈ {0, 1} indicating whether p is a valid proof that y has been accumulated into a, i.e., d ← Verf(pk, a, y, p). Note, X pk denotes the output and Y pk the input domain based on pk; and parm is always correctly recoverable from pk. Our Trade-off between Trust and Performance. Pöhls et al. [START_REF] Pöhls | Transparent mergeable redactable signatures with signer commitment and applications[END_REF] require ACC to be collision-resistant without trusted setup. Foremost, they require the ACC's setup to hide certain values used for the parameter generation from untrusted parties, as knowledge allows efficient computation of collisions and thus forgeries of signatures. All known collision-resistant accumulators based on number theoretic assumptions either require a trusted third party (TTP), named the accumulator manager [START_REF] Benaloh | One-way accumulators: A decentralized alternative to digital signatures[END_REF][START_REF] Li | Universal accumulators with efficient nonmembership proofs[END_REF], or they are very inefficient. As said, the TTP used for setup of the ACC must be trusted not to generate collisions to forge signatures. However, existing schemes without TTP are not efficiently implementable, e.g., the scheme introduced by Sander requires a modulus size of 40, 000 Bit [START_REF] Sander | Efficient accumulators without trapdoor extended abstracts[END_REF]. Our trade-off still requires a TTP for the setup, but inhibits the TTP from forging signatures generated by signers. 
In brief, we assume that the TTP which signs a participant's public key also runs the ACC setup. The TTP already has as a secret the standard RSA modulus n = pq, p, q ∈ P. If we re-use n as the RSAaccumulator's modulus [START_REF] Benaloh | One-way accumulators: A decentralized alternative to digital signatures[END_REF], the TTP could add new elements without detection. However, if we add "blinding primes" during signing, neither the TTP nor the signer can find collisions, as long as the TTP and the signer do not collude. We call this semi-trusted setup. Note, as we avoid algorithms for jointly computing a modulus of unknown factorization, we do not require any protocol runs. Thus, keys can be generated off-line. The security proof is in the appendix. On this basis we build a practically usable undeniable RSS, as introduced in [START_REF] Pöhls | Transparent mergeable redactable signatures with signer commitment and applications[END_REF]. It is based on a standard signature scheme S := (SKGen, SSign, SVerify) and our accumulator with semi-trusted setup ACC := (Setup, Gen, Dig, Proof, Verf). Key Generation: The algorithm KeyGen generates (sk S , pk S ) ← SKGen(1 λ ). It lets parm ← Setup(1 λ ) and pk ACC ← Gen(1 λ , parm). The algorithm returns ((pk S , parm, pk ACC ), (sk S )). Signing: Sign on input of sk S , pk ACC and a set S, it computes (a, aux) ← Dig(pk ACC , (S)). It generates P = {(y i , p i ) | p i ← Proof(pk ACC , aux, y i , S) | y i ∈ S}, and the signature σ a ← SSign(sk S , a). The tuple (S, σ s ) is returned, where σ s = (pk S , σ a , {(y i , p i ) | y i ∈ S}). Verification: Verify on input of a signature σ = (pk S , σ a , {(y i , p i ) | y i ∈ S}), parm and a set S first verifies that σ a verifies under pk S using SVerify. For each element y i ∈ S it tries to verify that Verf(pk ACC , a, y i , p i ) = true. In case Verf returns false at least once, Verify returns false and true otherwise. Redaction: Redact on input of a set S, a subset R ⊆ S, an accumulated value a, pk S and a signature σ s generated with Sign first checks that σ s is valid using Verify. If not ⊥ is returned. Else it returns a tuple (S , σ s ), where σ s = (pk S , σ a , {(y i , p i ) | y i ∈ S }) and S = S \ R. 3.6 RSS Scheme PSPdM12 [START_REF] Pöhls | Transparent mergeable redactable signatures with signer commitment and applications[END_REF] on Smart Card. This scheme involves the smart card for the algorithms Setup and Sign, illustrated in Fig. 3. We use the smart card to obtain the blinding primes of the modulus described in Sect. 3.5, needed by Setup. To compute these primes on card, we generate standard RSA parameters (N, e, d) with N being of 2048 Bit length, but store only N on card and discard the exponents. On the host system this modulus is multiplied with that obtained from the TTP to form the modulus used by ACC. Additionally, the smart card performs SSign to generate σ a . Performance and Lessons Learned We implemented in Java Card [START_REF] Chen | Java Card Technology for Smart Cards: Architecture and Programmer's Guide[END_REF] 2.2.1 on the "SmartC@fé R Expert 4.x" from Giesecke and Devrient [START_REF] Giesecke | SmartC@fé R Expert 4[END_REF]. The host system was an Intel i3-2350 Dual Core 2.30 GHz with 4 GiB of RAM. For the measurements in Tab. 1, we used messages with 10, 25 and 50 blocks of equal length, fixed to 1 Byte. The block size has little impact as inputs are hashed. However, the number of blocks impacts performance in some schemes. 
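Before turning to the measurements, the accumulator-based construction described above can be illustrated with a hedged sketch. It assumes the textbook RSA accumulator (set elements are assumed to be suitable positive integers, e.g. hashed to primes); the blinding of the modulus and the standard signature scheme S are treated as black boxes, and none of this is the project's implementation:

    from math import prod

    def acc_digest(pk, elements):
        """a = g^(product of all elements) mod N (textbook RSA accumulator)."""
        g, N = pk
        return pow(g, prod(elements, start=1), N)

    def acc_proof(pk, y, elements):
        """Witness for y: accumulate every element except y."""
        g, N = pk
        return pow(g, prod((e for e in elements if e != y), start=1), N)

    def acc_verify(pk, a, y, p):
        """Check p^y mod N == a, as in the Verf algorithm described above."""
        _, N = pk
        return pow(p, y, N) == a

    def rss_sign(sk_S, pk_acc, S, ssign):
        a = acc_digest(pk_acc, S)
        witnesses = {y: acc_proof(pk_acc, y, S) for y in S}
        return (list(S), (ssign(sk_S, a), a, witnesses))

    def rss_redact(signed, R):
        S, (sigma_a, a, witnesses) = signed
        S2 = [y for y in S if y not in R]
        return (S2, (sigma_a, a, {y: witnesses[y] for y in S2}))

    def rss_verify(pk_S, pk_acc, signed, sverify):
        S, (sigma_a, a, witnesses) = signed
        return sverify(pk_S, a, sigma_a) and all(
            acc_verify(pk_acc, a, y, witnesses[y]) for y in S)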
Table 1. Performance of SSS prototypes; median runtime in seconds for 10, 25 and 50 blocks (4: algorithm not defined by scheme, 5: involves smart card operations). Sanit and Redact operations modify all sanitizable blocks. The BFLS12 scheme allows multiple sanitizers and was measured with 10 sanitizers. Verify and Judge always get sanitized or redacted messages. The results for the BFLS12 scheme include the verification against all possible public keys (worst-case). We measured the complete execution of the algorithms, including those steps performed on the host system. We omit the time KeyGen takes for 2048 bit long key pairs, as keys are usually generated in advance. We carefully limited the involvement of the smart card, hence we expect the performance impact to be comparable to the use of cards in regular signature schemes. For the RSS we have devised and proven a new collision-resistant accumulator. If one wants to compare, BPS12 states around 0.506s for signing 10 blocks with 4096 bit keys [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF]. We only make use of the functions exposed by the API. Hence, our implementations are portable to other smart cards, given they provide a cryptographic co-processor that supports RSA algorithms. We would have liked direct access to the cryptographic co-processor, as raised in [START_REF] Tews | Performance issues of selective disclosure and blinded issuing protocols on java card[END_REF], instead of using the exposed ALG RSA NOPAD as a workaround.
Experiment Semi-Trusted-Collision-Resistance PK ACC,A (λ):
parm $ ← Setup(1 λ );
(pk * , p * , m * , a * ) ← A ODig(•,•) (1 λ , parm),
where the oracle ODig, on input of S i , pk i , computes (a i , aux i ) ← Dig(pk i , S i ) and P i = {(s j , p i ) | p i ← Proof(pk i , aux i , s j , S i ), s j ∈ S i } and returns (a i , P i ) (answers/queries indexed by i, 1 ≤ i ≤ k).
Return 1, if: Verf(pk * , a * , m * , p * ) = 1 and ∃i, 1 ≤ i ≤ k : a i = a * and m * ∉ S i .
Fig. 4. Collision-Resistance with Semi-Trusted Setup Part I
Definition 1 (Collision-Resistance with Semi-Trusted Setup (Part I)). We say that an accumulator ACC with semi-trusted setup is collision-resistant for the public key generator, iff for every PPT adversary A, the probability that the game depicted in Fig. 4 returns 1 is negligible (as a function of λ). The basic idea is to let the adversary generate the public key pk. The other part is generated by the challenger. Afterwards, the adversary has to find a collision.
Experiment Semi-Trusted-Collision-Resistance PARM ACC,A (λ):
(parm * , s * ) ← A(1 λ );
(pk * , p * , m * , a * ) ← A ODig(•,•),GetPk() (1 λ , s * ),
where the oracle ODig, on input of pk i , S i , computes (a i , aux i ) ← Dig(pk i , S i ) and P i = {(s j , p i ) | p i ← Proof(pk i , aux i , s j , S i ), s j ∈ S i } and returns (a i , P i ) (answers/queries indexed by i, 1 ≤ i ≤ k), and the oracle GetPk returns pk j ← Gen(1 λ , parm * ) (answers/queries indexed by j, 1 ≤ j ≤ k').
Return 1, if: Verf(pk * , a * , m * , p * ) = 1 and ∃i, 1 ≤ i ≤ k : a i = a * , m * ∉ S i and ∃j, 1 ≤ j ≤ k' : pk * = pk j .
Definition 2 (Collision-Resistance with Semi-Trusted Setup (Part II)). 
We say that an accumulator ACC with semi-trusted setup is collision-resistant for the parameter generator, iff for every PPT adversary A, the probability that the game depicted in Fig. 5 returns 1 is negligible (as a function of λ). The basic idea is to let the adversary generate the public parameters parm, but not any public keys; they are required to be generated honestly. Afterwards, the adversary has to find a collision. Setup. The algorithm Setup generates two safe primes p 1 and q 1 with bit length λ. It returns n 1 = p 1 q 1 . Gen. On input of the parameters parm, containing a modulus n 1 = p 1 q 1 of unknown factorization, and a security parameter λ, the algorithm outputs a multi-prime RSA-modulus N = n 1 n 2 , where n 2 = p 2 q 2 and p 2 , q 2 ∈ P are random safe primes with bit length λ. Verf. On input of the parameters parm = n 1 , a modulus N = p 1 q 1 p 2 q 2 = n 1 n 2 of unknown factorization, a security parameter λ, an element y i , an accumulator a, and a corresponding proof p i , it checks whether p i yi ≡ a (mod N ), whether n 1 | N, and whether n 2 = N/n 1 ∉ P. If either check fails, it returns 0, and 1 otherwise. Other algorithms: The other algorithms work exactly like the standard collision-free RSA-accumulator, i.e., [START_REF] Barić | Collision-free accumulators and fail-stop signature schemes without trees[END_REF]. Theorem 1 (The Accumulator is Collision-Resistant with Semi-Trusted Setup.). If either the parameters parm or the public key pk has been generated honestly, the sketched construction is collision-resistant with semi-trusted setup. Proof. Based on the proofs given in [START_REF] Barić | Collision-free accumulators and fail-stop signature schemes without trees[END_REF], we have to show that an adversary able to find collisions is able to find the e th root of a modulus of unknown factorization. Following the definitions given above, we have three cases: I) Malicious Semi-Trusted Third Party. As parm is public knowledge, every party can compute n 2 = N/n 1 . For this proof, we assume that the strong RSA-assumption [START_REF] Barić | Collision-free accumulators and fail-stop signature schemes without trees[END_REF] holds in (Z/n 1 Z) and (Z/n 2 Z). Moreover, we require that gcd(n 1 , n 2 ) = 1 holds. As (Z/N Z) ≅ (Z/n 1 Z) × (Z/n 2 Z), we have a group isomorphism ϕ 1 . Furthermore, as the third party knows the factorization of n 1 , we have another group isomorphism ϕ 2 . It follows: (Z/N Z) ≅ (Z/p 1 Z) × (Z/q 1 Z) × (Z/n 2 Z). Assuming that A can calculate the e th root in (Z/N Z), this implies that it can calculate the e th root in (Z/n 2 Z), as calculating the e th root in (Z/pZ) with p ∈ P is trivial. It follows that A breaks the strong RSA-assumption in (Z/n 2 Z). Building a simulation and an extractor is straightforward. II) Malicious Signer. Similar to I). III) Outsider. Outsiders have less knowledge, hence a combination of I) and II). Obviously, if the factorization of n 1 and n 2 is known, one can simply compute the e-th root in (Z/N Z). However, we assumed that signer and TTP do not collude. All other parties can collude, as the factorization of n 2 remains secret with overwhelming probability. 
Fig. 1. BFF + 09: Data flow for algorithms Sign, CHAdapt and Proof. Fig. 2. BFLS09: Data flow between smart card and host for Sign and Sanit. Fig. 3. PSPdM12: Data flow between smart card and host for Sign and Setup. Fig. 5. Collision-Resistance with Semi-Trusted Setup Part II. Redacting further requires that the third party is also able to compute a new valid signature σ' for m' that verifies under Alice's public key pk sig . Contrary, in an SSS, Alice decides for each block m[i] whether sanitization by a designated third party, denoted Sanitizer, is admissible or not. Sanitization means that Sanitizer i can replace each admissible block m[i] with an arbitrary string m'[i] ∈ {0, 1} * and hereby creates a modified message m'. The RSA implementation must not apply any padding operations to its input; otherwise, i is not intact anymore. We use Java Card's ALG RSA NOPAD to achieve this. This work is funded by BMBF (FKZ:13N10966) and ANR as part of the ReSCUeIT project. The research leading to these results was supported by "Regionale Wettbewerbsfähigkeit und Beschäftigung", Bayern, 2007-2013 (EFRE) as part of the SECBIT project (http://www.secbit.de) and the European Community's Seventh Framework Programme through the EINS Network of Excellence (grant agreement no. [288021]).
40,325
[ "1001640", "1003786", "1003787", "1003788", "979976" ]
[ "98761", "98761", "98761", "98761", "98761" ]
01485933
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01485933/file/978-3-642-38530-8_4_Chapter.pdf
Daniel Schreckling Stephan Huber Focke Höhne Joachim Posegga URANOS: User-Guided Rewriting for Plugin-Enabled ANdroid ApplicatiOn Security URANOS is an Android application which uses syntactical static analysis to determine in which component of an Android application a permission is required. This work describes how the detection and analysis of widely distributed and security critical adware plugins is achieved. We show, how users can trigger bytecode rewriting to (de)activate selected or redundant permissions in Android applications without sacrificing functionality. The paper also discusses performance, security, and legal implications of the presented approach. Introduction Many Smartphone operating systems associate shared resources with permissions. API calls accessing such resources require permissions to gain the required privileges. Once an application obtains these privileges, it can generally access all the items stored in the respective resource. Additionally, such privileges are often valid until the deinstallation or an update of the application. These properties conflict with the emerging privacy needs of users. Increasing sensitivity encourages the protection of data which helps applications, vendors, or providers to generate individual user profiles. Unfortunately, current coarse grained permission systems only provide limited control or information about an application. Hence, informed consents to the use of permissions are far from being available. In Android, numerous analyses of permissions requested by an application [START_REF] Chan | Droidchecker: analyzing android applications for capability leak[END_REF][START_REF] Gibler | AndroidLeaks: automatically detecting potential privacy leaks in android applications on a large scale[END_REF][START_REF] Hornyack | These aren't the droids you're looking for: retrofitting android to protect data from imperious applications[END_REF][START_REF] Stevens | Investigating user privacy in android ad libraries[END_REF][START_REF] Zhou | Dissecting android malware: Characterization and evolution[END_REF] substantiate this problem. Permissions increase the attack surface of an application [START_REF] Chin | Analyzing inter-application communication in android[END_REF][START_REF] Bugiel | XManDroid: A New Android Evolution to Mitigate Privilege Escalation Attacks[END_REF][START_REF] Grace | Systematic Detection of Capability Leaks in Stock Android Smartphones[END_REF] and the platform executing it. Thus, granting permissions in excessive manners induces new exploit techniques. Static analysis and runtime monitoring frameworks have been developed to detect permission-based platform and application vulnerabilities. There are also Android core extensions enabling the deactivation of selected permissions. However, such frameworks either interfere with the usability of the application and render it unusable or they only provide permission analysis on separate hosts. Thus, there is a strong need for flexible security solutions which do not aim at generality and precision but couple lightweight analysis and permission modification mechanisms. We define URANOS, an application rewriting framework for Android which enables the selective deactivation of permissions for specific application contexts, e.g. plugins. 
The contributions of this paper include an on-device static analysis to detect permissions and their usage, selective on-device rewriting to guarantee user-specific permission settings, and a prototype implementing detection and rewriting in common Android applications. Our contribution is structured as follows: Section 3 provides a knowledge base for this contribution, Section 2 gives a high-level overview of URANOS. Its components are explained in Section 4. Section 5 discusses performance, limitations, and legal implications. Finally, Section 6 lists related work before Section 7 summarizes our conclusions. We strive for an efficient on-device framework (see Figure 1) for Android which allows users to selectively disable permissions assigned to an application. To preserve functionality a static analysis infers the permissions required during execution from the bytecode. For efficiency we exploit existing knowledge about the permission requirements of Android API calls, resource access, intent broadcasting etc. Detected permissions are compared with the permissions requested in the application manifest to detect excessive permissions etc. Additionally, we scan the bytecode for plugins using a pre-generated database of API methods and classes used in popular adware. They define context for each bytecode instruction. This allows us to infer the permissions exclusively required for plugins or for the application hosting the plugins. We communicate this information to the user. Depending on his needs, the user can enable or disable permissions for specific application contexts. Disabled and excessive permissions can be completely removed from the manifest. However, removing an effectively required permission will trigger a security exception during runtime. If these exceptions are unhandled the application will terminate. Therefore, URANOS additionally adapts the application bytecode and replaces the API calls in the respective call context by feasible wrappers. This combination of analysis and rewriting allows a user to generate operational applications compliant with his security needs. Unfortunately, compliant but rewritten Android applications are neither directly installed nor are they updated by Android. Therefore, URANOS also delivers an application manager service, replacing applications with their rewritten counterparts and ensuring their updates. Background This section gives a short overview of the structure of Android applications, their execution environment, and the permission system in Android. Android Applications Applications (Apps) are delivered in zipped packages (apk files). They contain multimedia content for the user interface, configuration files such as the manifest, and the bytecode which is stored in a Dalvik executable (dex file). Based on the underlying Linux, Android allots user and group IDs to each application. Four basic types of components can be used to build an App: activities, services, content providers, and broadcast receivers. Activities constitute the user interface of an application. Multiple activities can be defined but only one activity can be active at a time. Services are used to perform time-consuming or background tasks. Specific API functions trigger remote procedure calls and can be used to interact with services. Application can define content providers to share their structured data with other Apps. To retrieve this data, so called ContentResolvers must be used. They use URIs to access a provider and query it for specific data. 
Finally, broadcast receivers enable applications to exchange intents. Intents express an intent to perform an action within or on a component. Actions include the display of a picture or the opening of a specific web page. Developers usually use these components defined in the Android API and the SDK to build, compile, and pack Apps. Their apks are signed with the private developer key, distributed via the official Android market, other markets, or it is delivered directly to a Smartphone. Dalvik Virtual Machine Bytecode is stored in Dalvik executables (dex files) and is executed in a register based virtual machine (VM) called Dalvik. Each VM runs in its own application process. The system process Zygote starts at boot time and manages the spawning of new VMs. It also preloads and preinitializes the core library classes. Dex files are optimized for size and data sharing among classes. In contrast to standard Java archives, the dex does not store several class files. All classes are compiled into one single dex file. This reduces load time and supports efficient code management. Type-specific mappings are defined over all classes and map constant properties of the code to static identifiers, such as constant values, class, field, and method names. The bytecode can also contain developer code not available on the platform, e.g. third-party code, such as plugins (see Figure 2). Bytecode instructions use numbered register sets for their computations. For method calls, registers passed as arguments are simply copied into new registers only valid during method execution. Android Permissions Android permissions control application access to resources. Depending on their potential impact Android distinguishes three levels: normal, dangerous, signature and signatureORsystem. Unlike normal permission which do not directly cause financial harm to users, dangerous and system permission control access to critical resources and may enable access to private data. Granted signature or signaturesORsystem permission grant access to essential system services and data. During installation permissions are assigned as requested in the manifest. The user only approves dangerous permissions. Normal permissions are granted without notification and signature or signatureORsystem permissions verify that the application requesting the permissions has been signed with the key of the device manufacturer. Resource access can be obtained through API calls, the processing of intents, and through access to content providers and other system resources such as an SD card. Thus, permission enforcement varies with the type of resource accessed. In general, permission assignment and enforcement can be described using a label model as depicted in Figure 2. Each system resource or system service is labeled with the set of permissions it requires to be accessed. An application uses the public API to trigger resource access. This request is forwarded to the system. The system libraries, the binder, and the implementation of the libraries finally execute the resource access. We abstract from the details of the binderlibrary pair and call this entity a central permission monitor. It checks whether an application trying to access a resource with label L x has been assigned this label. If not, access is forbidden and an appropriate security exception is thrown. Android also places permission checks in the API and RPC calls [START_REF] Felt | Android permissions demystified[END_REF]. 
Thus, security exceptions may already occur although the access requests have not reached the permission monitor yet. As such checks may be circumvented by reflection, the actual enforcement happens in the system.
The URANOS Framework
This section explains our system in more detail. To ease the understanding, we complement our description with Figure 3.
URANOS Application Processing
To process manifest and bytecode of the application, URANOS must obtain access to the apks. Depending on how the developer decides to publish an apk, it is stored in different file system locations: the regular application storage, the application storage on an SD card, or storage which prevents the forwarding (forward-lock) of the application. The PackageManager API offered by Android can be used to retrieve the path and filename of the apks. Regular applications are able to obtain reading access to other apks. Thus, as a regular application, URANOS can copy apks to a local folder and process them. With root permissions, it can also process forward-locked applications. Apks are extracted to obtain access to the manifest and the dex file. We enhanced the dex-tools from the Android source tree so that they directly operate on the bytecode and extract the information required for our analysis. Thus, we avoid intermediate representations. Handles to the manifest and bytecode are then forwarded to the static analysis and rewriting components of our framework (see Figure 3).
Permission Detection
Next, we parse the manifest and retrieve the set P apk of permissions requested by the App. Afterwards, we scan the bytecode to find all invoke instructions and determine the correct signature of the methods invoked. Invoke instructions use identifiers pointing to entries of a management table which contains complete method signatures. From this table we derive the set I of methods potentially invoked during execution. As this is a syntactical process, the set I may contain methods which are never called. We then use the function π to compute P M = ⋃ m∈I π(m), i.e., π maps a method m to the set of permissions required to invoke m at runtime. Thus, P M reflects the permissions required by the application to execute all methods in I. The function π is based on the results of Felt et al. [START_REF] Felt | Android permissions demystified[END_REF], which associate actions in an Android App, e.g., method calls, with their required permissions. The use of content providers or intents may also require permissions. However, specific targets of both can be specified using ordinary strings. To keep our analysis process simple, we search the dex for strings which match the pattern of a content provider URI or of an activity class name defined in the Android API. If a pattern is matched, we add the respective permission to the set P P of provider permissions or to the set P I of intent permissions, respectively. At the end of this process, we intersect the permissions specified in the manifest with the permissions extracted from the bytecode, i.e., P val = P apk ∩ (P M ∪ P P ∪ P I ), to obtain the validated permissions likely to be required for the execution of the application. Our heuristics induce an over-approximation in this set. Section 5 explains why this does not influence the security of our approach.
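These set operations translate directly into code; the following is a minimal sketch of our own (with invented helper names, not URANOS code) that computes P_val from the sets described above:

import java.util.*;

public class PermissionValidator {
    /** Validated permissions: P_val = P_apk ∩ (P_M ∪ P_P ∪ P_I). */
    public static Set<String> validated(Set<String> pApk,       // permissions requested in the manifest
                                         Set<String> pMethods,   // P_M: permissions needed by invoked methods
                                         Set<String> pProviders, // P_P: permissions for matched provider URIs
                                         Set<String> pIntents) { // P_I: permissions for matched intent targets
        Set<String> fromBytecode = new HashSet<String>(pMethods);
        fromBytecode.addAll(pProviders);
        fromBytecode.addAll(pIntents);
        fromBytecode.retainAll(pApk);   // intersect with the manifest permissions
        return fromBytecode;            // P_val
    }

    /** P_M as the union of π(m) over all potentially invoked methods m ∈ I. */
    public static Set<String> permissionsForMethods(Set<String> invokedMethods,
                                                    Map<String, Set<String>> pi) {
        Set<String> pM = new HashSet<String>();
        for (String m : invokedMethods) {
            Set<String> perms = pi.get(m);
            if (perms != null) pM.addAll(perms);
        }
        return pM;
    }
}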
Context Detection
Based on P val we now determine the App components in which the methods requiring these permissions are called. For this purpose, we define the execution context of an instruction: it is the signature of the method, and of its class, in which the instruction is executed. This definition is generic and can be applied to various detection problems. We focus on widely distributed plugins for Android. To give users a better understanding of the possible impact of the plugins hosted by the analyzed Apps, we manually assign each plugin to one of the following four categories: passive advertising, active advertising, audio advertising, and feature extensions. Passive advertising plugins display advertisements as soon as an activity of the hosting application is active. They are usually integrated into the user interface with banners as placeholders. Active advertising plugins are similar to pop-up windows and do not require an activity of the hosting application to be active. They use stand-alone activities or services, intercept intents, or customize events to become active. Audio advertising is a rather new plugin category which intercepts control sequences and interferes with the user by playing audio commercials or similar audio content, e.g., while the call signal is playing on the phone. Feature extensions provide additional features in an application that a user or developer may utilize. Among many others, they include in-app billing or developer plugins easing the debugging process.
To detect plugins in an application, we perform the same steps that were used to create the signatures stored in our plugin database. We scan the application manifest and bytecode for the names listed above and investigate which libraries have to be loaded at runtime. From this information we build a signature and try to match it against our plugin database. This process also uses fuzzy patterns to match the strings inferred from the application. We assume that plugins follow common naming conventions. So, full class names should start with the top Internet domain name, continue with the appropriate company name, and end with the class name. If we do not find matches on full class names, we search for longest common prefixes. If they contain at least two subdomains, we continue searching for the other names to refine the plugin match. In this way, we can account for smaller or intentional changes in class or package naming and prevent a considerable decline of the detection rate.
The ability to detect classes of plugins allows us to determine execution contexts. During the bytecode scanning, we track the context C. As soon as our analysis enters the namespace of a plugin class, we change C. It is defined by the name of the plugin N Plugin or by the name N apk of the application if no plugin matches. We generate a map from each method call to its calling context. Together with the function π, this implicitly defines a map γ from permissions to calling contexts. We can now distinguish four types of permissions:
-Dispensable permissions p ∈ P apk \ P val, which are not required by the application,
-Application-only permissions p ∈ P apk, which are exclusively required for the hosting application to run, i.e., γ(p) = {N apk},
-Plugin-only permissions p ∈ P apk, which are exclusively required for the execution of a plugin, i.e., γ(p) ∩ {N apk} = ∅, and
-Hybrid permissions p ∈ P apk, which are required by both the hosting application and a plugin, i.e., γ(p) does not match the conditions for the other three permission types.
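One possible way to implement this classification is sketched below; the enum and method names are our own, and γ is assumed to be available as a map from permissions to the set of context names in which they occur:

import java.util.*;

enum PermissionKind { DISPENSABLE, APPLICATION_ONLY, PLUGIN_ONLY, HYBRID }

public class ContextClassifier {
    // Classifies a permission p ∈ P_apk according to the four types defined above.
    public static PermissionKind classify(String permission,
                                          Set<String> pVal,               // validated permissions P_val
                                          Map<String, Set<String>> gamma, // γ: permission -> calling contexts
                                          String appContext) {            // N_apk, the hosting application
        if (!pVal.contains(permission)) {
            return PermissionKind.DISPENSABLE;  // p ∈ P_apk \ P_val
        }
        Set<String> contexts = gamma.get(permission);
        boolean usedByApp = false, usedByPlugin = false;
        if (contexts != null) {
            for (String c : contexts) {
                if (c.equals(appContext)) usedByApp = true;
                else usedByPlugin = true;       // any other context name is a plugin context
            }
        }
        if (usedByApp && !usedByPlugin) return PermissionKind.APPLICATION_ONLY; // γ(p) = {N_apk}
        if (usedByPlugin && !usedByApp) return PermissionKind.PLUGIN_ONLY;      // γ(p) ∩ {N_apk} = ∅
        return PermissionKind.HYBRID;                                           // used in both kinds of contexts
    }
}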
This result is communicated to the user (see Figure 3). He gets an overview of the types of permissions and the contexts in which they are required. The user can enable or disable them in the entire application, only in the plugin, or only in the hosting application. The next section shows how to support this feature with the help of bytecode rewriting and without modifying Android.
Rewriter
In general, dispensable permissions are not required for the execution and do not need to be assigned to the application; they can be removed from the manifest. The same holds for permissions which should be disabled for the entire application. Thus, the first rewriting step is performed on the application manifest. It revokes the permissions which are either not required or not desired. However, withdrawing permissions from an application may render it unusable. Calls to methods which require permissions will throw exceptions. If they are not handled correctly, the runtime environment could finally interrupt execution. To avoid this problem, to enable the deactivation of permissions in only specific application components, and to retain an unmodified Android core, the activation or deactivation of permissions triggers a rewriting process (3). It is guided by the results of the syntactical analysis (4). The rewriter, described in this section, adapts the bytecode in such a way that the App can be executed safely even without the permissions it originally requested.
API Methods
For each method whose execution requires a permission, we provide two types of wrappers to replace the original call. Regular API method calls which require a permission can be wrapped by simple try/catch blocks as depicted by WRAPPER1 in Listing 1.1. If the permission required to execute the API call has been withdrawn, we catch the exception and return a feasible default value. In case the permission is still valid, the original method is called. In contrast, the second wrapper WRAPPER2 (Listing 1.2) completely replaces the original API call and only executes a default action. Evidently, rewriting could be reduced to only WRAPPER2. However, WRAPPER1 reduces the number of events at which an application has to be rewritten and reinstalled. Assume that a user deactivates a permission for the entire application. The permission is removed from the manifest and all methods requiring it are wrapped. Depending on the wrapper and the next change in the permission settings, a rewriting may be avoided because the old wrapper already handles the new settings, e.g., the reactivation of the permission. Wrappers are static methods and, apart from one additional instance argument for non-static methods, they inherit the number and type of arguments from the methods they wrap. This makes it easy to automatically derive them from the API. Additionally, it simplifies the rewriting process as follows. URANOS delivers a dex file which contains the bytecode of all wrappers. This file is merged with the original application dex using the dx compiler libraries. The new dex now contains the wrappers but does not make use of them yet. In the next step, we obtain the list of method calls which need to be replaced from the static analysis component (see Figure 3). The corresponding invoke instructions are relocated in the new dex and the old method identifiers are exchanged with the identifiers of the corresponding wrapper methods.
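To make the WRAPPER1 and WRAPPER2 patterns more concrete, a hedged sketch of what generated wrappers could look like for TelephonyManager.getDeviceId() is shown below; the class name and the chosen default value are our own assumptions, not taken from URANOS:

import android.telephony.TelephonyManager;

public final class Wrappers {
    // WRAPPER1-style static wrapper: the original instance becomes the first argument.
    // If READ_PHONE_STATE has been revoked, getDeviceId() throws a SecurityException,
    // which is caught here and replaced by a harmless default value.
    public static String getDeviceId(TelephonyManager tm) {
        try {
            return tm.getDeviceId();      // original API call
        } catch (SecurityException se) {
            return "000000000000000";     // feasible default value (our assumption)
        }
    }

    // WRAPPER2-style wrapper: the original call is dropped entirely.
    public static String getDeviceIdDisabled(TelephonyManager tm) {
        return "000000000000000";         // default action only
    }
}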
Here, the rewriting process is finished even if the wrapped method is non-static. At bytecode level, the replacement of a non-static method with a static one simply induces a new interpretation of the registers involved in the call. The register originally storing the object instance is now interpreted as the first method argument. Thus, we pass the instance register to the wrapper in the first argument and can leave all other registers in the bytecode untouched. We illustrate this case in Listing 1.3. It shows bytecode mnemonics for the invocation of the API method getDeviceId as obtained by a disassembler. The instruction invoke-virtual calls the method getDeviceId on an instance of class TelephonyManager. It is rewritten to a static call in Listing 1.4 which passes the instance as an argument to the static wrapper method.
Reflection
Android supports reflective method calls. They use strings to retrieve or generate instances of classes and to call methods on such instances. These operations can be constructed at runtime. Hence, the targets of reflective calls are not decidable during analysis, and calls to API methods may remain undetected. Therefore, we wrap the methods which can trigger reflective calls, i.e., invoke and newInstance. During runtime, these wrappers check the Method instance passed to invoke or the class instance on which newInstance is called. Depending on its location in the bytecode, the reflection wrapper is constructed in such a way that it passes the invocation to the appropriate wrapper methods (see above) or executes the function in the original bytecode. This does not require dynamic monitoring but can be integrated into the bytecode statically. Reflective calls show low performance and are used very infrequently. Thus, this rewriting will not induce high additional overhead.
Content Providers
Similar to reflective calls, we handle content providers. Providers must be accessed via content resolvers (see Section 3) which forward the operations to be performed on a content provider: query, insert, update, and delete. They throw security exceptions if the required read or write permissions are not assigned to an application. As these methods specify the URI of the content provider, we replace all operations by a static wrapper which passes their call to a monitor. It checks whether the operation is allowed before executing it.
Intents
In general, intents are not problematic as they are handled in the central monitor of Android, i.e., the enforcement does not happen in the application. If an application sends an intent to a component which requires permissions the application does not hold, an exception is only generated in the error log. The corresponding action is not executed, but the application does not crash. Thus, our rewriting must cover situations in which only some instructions in specific execution contexts must not send or receive intents. The control over sending can be realized by wrappers handling the available API methods such as startActivity, broadcastIntent, startService, and bindService. The wrappers implement monitors which first analyse the intent to be sent. Depending on the target, the sending is aborted. By rewriting the manifest, we can control which intents a component can receive. This excludes explicit intents which directly address an application component. Here, we assume that the direct access of a system component to an application can be considered legitimate.
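The reflection handling could, for instance, take the following form; this is only a sketch which assumes that a static wrapper (such as the getDeviceId wrapper sketched above) exists for every permission-protected method of interest:

import java.lang.reflect.Method;
import android.telephony.TelephonyManager;

public final class ReflectionWrapper {
    // Static wrapper that replaces calls to Method.invoke() in the rewritten bytecode.
    // If the reflective target is a permission-protected API method, the call is
    // redirected to the corresponding static wrapper; otherwise it is executed as-is.
    public static Object invoke(Method m, Object receiver, Object... args) throws Exception {
        if (m.getDeclaringClass() == TelephonyManager.class
                && m.getName().equals("getDeviceId")) {
            return Wrappers.getDeviceId((TelephonyManager) receiver); // reuse the wrapper sketch
        }
        return m.invoke(receiver, args);   // unrelated reflective calls stay untouched
    }
}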
Application Management
We realize permission revocation by repackaging applications. First, our App manager obtains the manifest and dex (6) from the rewriter. For recovery, we first back up the old dex file and its corresponding manifest. All other resources, such as libraries, images, and audio or video files, are not backed up as they remain untouched. They are extracted from the original apk (7) and signed with the URANOS key together with the new bytecode and manifest. The signed application is then directly integrated into a new apk. This process is slow due to the zip compression of the archive. In the end, the application manager assists the user in deinstalling the old and installing the new application (see Figure 3). In the background, we also deploy a dedicated update service. It mimics the update functionality of Android but also operates on the applications resigned by URANOS. We regularly query the application market for updates, inform the user about them, and assist the update process by deinstalling the old App, rewriting the new App, and installing it. Similarly, the App manager provides support for deinstallation and recovery.
Discussion
Performance
To assess the performance of our approach, we downloaded over 180 popular applications from the Google Play Store. The URANOS App was adjusted in such a way that it automatically starts analysing and rewriting newly installed applications. Our benchmark measured the analysis time, i.e., the preprocessing of the dex (pre) and the execution context detection (det), and the rewriting time, i.e., the merging of wrappers (wrap), the rewriting of the resulting dex (rew), and the total time required to generate the final apk (tot). The analysis and rewriting phases were repeated 11 times for each App. The first measurement was ignored, as memory management and garbage collection often greatly influence it and are hard to reproduce because they heavily depend on the phone state. For the rewriting process, we always selected three random permissions to be disabled. If there were fewer permissions, we disabled all of them. All measurements were conducted on a Motorola RAZR XT910, running Android 4.0.4 on a 3.0.8 kernel. Due to space restrictions, this contribution only discusses a selection of applications and their performance figures. An overview of the complete results, a report on the impact of our rewriting on the App functionality, and the App itself are available at http://web.sec.uni-passau.de/research/uranos/. Apart from the time measurements mentioned above, Table 1 enumerates the number of plugins the application contains (#pl), the number of permissions requested (#pm), the number of classes (#cl) in the dex, and the size of the apk. In particular, the apk size has a tremendous impact on the generation of the rewritten application due to apk compression. This provides potential for optimization, in particular if we look at the rather small time required to merge the wrapper file of 81 kB into the complex dex structure and to redirect the method calls. This complexity is also reflected in the time for pre-processing the dex to extract the information required to work on the bytecode. We can also see that the number of classes and permissions included in an application influences the analysis time. Classes increase the administrative overhead in a dex.
Thus, their number also increases the effort to search for the appropriate code locations. Here, Shazam and Instagram are two extreme examples. In turn, the number of permissions increases the number of methods which have to be considered during analysis and rewriting. In our measurements, we do not include execution overhead. The time required for the additional instructions in the bytecode is negligible and within measuring tolerance. Thus, although the generation of the final apk is slow, our measurements certify that the analysis and rewriting of Android bytecode can be implemented efficiently on a Smartphone. While other solutions run off-device and focus on precision, such as the web interface provided by Woodpecker [START_REF] Felt | Android permissions demystified[END_REF], URANOS can deliver timely feedback to the user. With this information, he can decide on further countermeasures, which are also provided by our system.
Limitations
As we have already stated above, our analysis uses approximations. In fact, P M is an overapproximation of the permissions required by method calls, e.g., there may be methods in the bytecode which are never executed. Thus, the mere existence of API calls does not justify a permission assignment to an application. On the other hand, P P ∪ P I is an underapproximation as we only consider strings as a means to communicate via intents or to access resources. There are numerous other ways to perform such operations which our heuristic does not cover. Attackers or regular programmers can achieve underapproximations by hiding intent or provider access with various implementation techniques. In this case, URANOS will alert the user that a specific permission may not be needed. The user will deactivate the respective permission and no direct damage is caused. Overapproximation can be achieved by simply placing API calls in the bytecode which are never executed. In this case, our analysis does not report the permissions mapped to those dead API calls to be dispensable. Thus, the overapproximation may give the user a wrong sense of security concerning the application. Therefore, URANOS also allows the deactivation of permissions in the hosting application and not only in the plugin. Attackers may also hide plugin code by obfuscating it, e.g., by renaming all plugin APIs. In this case, URANOS will not detect the plugin. This will prevent the user from disabling permissions for this plugin. However, it is still possible to remove permissions for the whole application. Plugin providers which have an interest in the use of their plugins will not aim for obfuscated APIs.
Legal Restrictions
If software satisfies copyright law's fundamental requirement of originality, it is protected by international and national law, such as the Universal Copyright Convention, the WIPO Copyright Treaty, Article 10 of the TRIPS agreement of the WTO, and the European Directive 2009/24/EC. In general, these directives prohibit the manipulation, reproduction, or distribution of source code if the changes are not authorized by its rights holder. No consent for modification is required if the software is developed using an open source software licensing model or if minor modifications are required for repair or maintenance. To achieve interoperability of an application, even reverse engineering may be allowed.
However, any changes must not conflict with the regular exploitation of the affected application or the legitimate interests of the rights holder. URANOS cannot satisfy any of the conditions mentioned above. First of all, all actions are performed automatically. Thus, it is not possible to query the rights owner for his permission to alter the software. One may argue that URANOS rewrites the application in order to ensure correct data management. Unfortunately, the changes described above directly conflict with the interests of the rights holder of the application. On the other hand, one may argue that a developer must inform the user how the application processes and uses his personal data, as highlighted in the "Joint Statement of Principles" of February 22nd, 2012, signed by global players like Amazon, Apple, Google, Hewlett-Packard, Microsoft and Research In Motion. However, current systems only allow an informed consent of insufficient quality. In particular when using plugins, a developer would need to explain how user data is processed. However, developers only use library APIs without knowing their internal details. To provide adequate information about the use of data, a developer would have to understand and/or reverse engineer the plugin mechanisms he uses. So, for most plugins or libraries, the phrasing of correct terms of use is impossible. Yet, this fact does not justify application rewriting. The user can still refuse the installation. If, despite deficient information, he decides to install the software, he must stick to the legal restrictions and use it as is. In short: URANOS and most security systems which are based on application rewriting conflict with international and most national copyright protection legislation. This situation is paradoxical as such systems try to protect private data from being misused by erroneous or malicious application logic. Thus, they try to enforce data protection legislation but are at the same time limited by copyright protection laws.
Related Work
This section focuses on recent work addressing permission problems in Android. We distinguish two types of approaches: analysis and monitoring mechanisms.
Permission Analysis
One of the first publications analysing the Android permission system is Kirin [START_REF] Enck | On lightweight mobile phone application certification[END_REF]. It analyzes the application manifest and compares it with simple and pre-defined rules. In contrast to URANOS, rules can only describe security-critical combinations of permissions and simply prevent an application from being installed. The off-device analysis in [START_REF] Chin | Analyzing inter-application communication in android[END_REF] is more sophisticated. It defines attack vectors which are based on design flaws and allow for the misuse of permissions. Chin et al. describe secure coding guidelines and permissions as a means to mitigate these problems. Their tool, ComDroid, can support developers in detecting such flaws, but it does not help App users in detecting and mitigating such problems. This lack of user support also holds for Stowaway [START_REF] Felt | Android permissions demystified[END_REF]. This tool is focused on permissions which are dispensable for the execution of an application. Comparable to URANOS, Stowaway runs a static analysis on the bytecode. However, this analysis is designed for a server environment.
While it provides better precision through a flow analysis, it cannot correct the detected problems, and its analysis times exceed those of URANOS by several orders of magnitude. Similar to Stowaway, AndroidLeaks [START_REF] Gibler | AndroidLeaks: automatically detecting potential privacy leaks in android applications on a large scale[END_REF] uses an off-device analysis which detects privacy leaks. Data generated by operations which are subject to permission checks is tracked through the application to data sinks using static information flow analysis. AndroidLeaks supports the human analyst; the actual end user cannot directly benefit from this system. DroidChecker [START_REF] Chan | Droidchecker: analyzing android applications for capability leak[END_REF] and Woodpecker [START_REF] Grace | Systematic Detection of Capability Leaks in Stock Android Smartphones[END_REF] use inter-procedural control-flow analyses to look for permission vulnerabilities, such as the confused deputy (CD) vulnerability. However, DroidChecker additionally uses taint tracking to detect privilege escalation vulnerabilities in single applications, while Woodpecker targets system images. Techniques applied in Woodpecker were also used to investigate the functionality of in-app advertisement [START_REF] Grace | Unsafe exposure analysis of mobile in-app advertisements[END_REF]. URANOS is based on an extended collection of the advertisement libraries used in that work. Similar, but less comprehensive, analytical work has been conducted in [START_REF] Stevens | Investigating user privacy in android ad libraries[END_REF].
Enhanced Permission Monitoring
An early approach which modifies the central security monitor in Android to introduce an enriched permission system of finer granularity is Saint [START_REF] Ongtang | Semantically Rich Application-Centric Security in Android[END_REF]. However, Saint mainly focuses on inter-application communication. CRePE [START_REF] Conti | CRePE: context-related policy enforcement for android[END_REF] goes one step further and extends Android permissions with contextual constraints such as time, location, etc. However, CRePE does not consider the execution context in which permissions are required. The same holds for Apex [START_REF] Nauman | Apex: extending android permission model and enforcement with user-defined runtime constraints[END_REF]. It manipulates the Android core implementation to modify the permissions framework and also introduces additional constraints on the usage of permissions. Approaches such as QUIRE [START_REF] Dietz | QUIRE: Lightweight Provenance for Smart Phone Operating Systems[END_REF] or the IPC inspection of Felt et al. [START_REF] Felt | Permission redelegation: attacks and defenses[END_REF] focus on the runtime prevention of CD attacks. QUIRE defines a lightweight provenance system for permissions. Enforcement in this framework boils down to the discovery of chains which misuse the communication to other apps. IPC inspection solves this problem by reinstantiating apps with the privileges of their callers. Both approaches require OS manipulation and consider an application to be monolithic. This prevents them from recognizing execution contexts for permissions. The same deficiencies hold for XManDroid [START_REF] Bugiel | XManDroid: A New Android Evolution to Mitigate Privilege Escalation Attacks[END_REF], which extends the goal of QUIRE and IPC inspection by also considering colluding applications.
Similar to IPC inspection and partially based on QUIRE is AdSplit [START_REF] Shekhar | AdSplit: separating smartphone advertising from applications[END_REF]. It targets advertisement plugins and also uses multi-instantiation. It separates the advertisement from its hosting application and executes them in independent processes. Although mentioned in their contribution, Shekhar et al. do not aim at deactivating permissions in one part of the application or at completely suppressing communication between the separated application components. Leontiadis et al. also do not promote a complete deactivation of permissions; they separate applications and advertising libraries to avoid overprivileged execution [START_REF] Leontiadis | Don't kill my ads!: balancing privacy in an ad-supported mobile application market[END_REF]. A trade-off between user privacy and advertisement revenue is proposed. A separate permission and monitoring system controls the responsible processing of user data and allows interfering with IPC if sufficient revenue has been produced. URANOS could be coupled with such a system by only allowing the deactivation of permissions if sufficient revenue has been produced. However, real-time monitoring would destroy the lightweight character of URANOS. Although developed independently, AdDroid [START_REF] Pearce | AdDroid: Privilege separation for applications and advertisers in Android[END_REF] realizes many of the ideas proposed by Leontiadis et al. AdDroid proposes specific permissions for advertisement plugins. Of course, this requires modifications to the overall Android permission system. Further, to obtain a privilege separation, AdDroid also proposes a process separation of advertisement functionalities from the hosting application. Additionally, the Android API is proposed to be modified according to the advertisement needs. It remains unclear how such a model should be enforced. The generality of URANOS could contribute to such an enforcement. Two approaches which are very similar to URANOS are I-Arm Droid [START_REF] Davis | I-arm-droid: A rewriting framework for in-app reference monitors for android applications[END_REF] and AppGuard [START_REF] Backes | App-Guard -Real-time policy enforcement for third-party applications[END_REF]. Both systems rewrite Android bytecode to enforce user-defined policies. I-Arm Droid does not run on the Android device and is designed to enforce developer-defined security policies through inlined reference monitors at runtime. The flexibility of the inlining process is limited as all method calls are replaced by monitors. Selective deactivation of permissions is not possible. The same holds for AppGuard. While it can be run directly on the device, the rewriting process replaces all critical method calls. AppGuard is comparable to URANOS in that it uses a similarly resource-saving and user-friendly deployment mechanism which does not require root access on the device.
Conclusions
The permission system and application structure in today's Smartphones do not provide a good foundation for an informed consent of users. URANOS takes a first step in this direction by providing enhanced feedback. The user is able to select which application component should run with which set of permissions. Thus, although our approach cannot provide detailed information about an application's functionality, the user benefits from a finer granularity of permission assignment. If in doubt, he is not confronted with an all-or-nothing approach but can selectively disable critical application components.
The execution contexts we define in our work are general and can describe many different types of application components. Further, we require users neither to manipulate nor to root their Smartphones. Instead, we maintain the regular install, update, and recovery procedures. Our approach is still slow when integrating the executable code into a fully functional application. However, this overhead is not directly induced by our efficient analysis or rewriting mechanisms. In fact, we highlighted the practical and security impact of a trade-off between a precise and complete flow analysis and a lightweight, fast, and resource-saving syntactical analysis which can run on the user's device without altering its overall functionality.
Fig. 1. High-level overview of URANOS
Fig. 3. System Overview
Listing 1.1. Wrapper pattern one
public static WRAPPER1 { try { API_CALL_ACTION ; } catch ( SecurityException se ) { DEFAULT_ACTION ; } }
Listing 1.2. Wrapper pattern two
public static WRAPPER2 { DEFAULT_ACTION ; }
Table 1. Selection of analyzed and rewritten applications
App | #pl | #pm | #cl | apk[MB] | pre[ms] | det[ms] | wrap[ms] | rew[ms] | tot[ms]
100 Doors | 4 | 3 | 757 | 14.4 | 1421 | 356 | 1690 | 4277 | 9073
Angry Birds | 10 | 6 | 873 | 24.4 | 1863 | 640 | 2308 | 5767 | 50408
Bugvillage | 13 | 8 | 1127 | 3.1 | 1819 | 1214 | 3425 | 6832 | 18092
Coin Dozer | 11 | 6 | 855 | 14.7 | 2028 | 788 | 2605 | 6457 | 56749
Fruit Ninja | 8 | 8 | 1472 | 19.2 | 2520 | 1197 | 3657 | 7955 | 144374
Instagram | 7 | 7 | 2914 | 12.9 | 5168 | 1906 | 8114 | 17031 | 39908
Logo Quiz | 3 | 2 | 232 | 9.7 | 553 | 96 | 701 | 1939 | 7729
Shazam | 8 | 13 | 2822 | 4.4 | 4098 | 3214 | 7837 | 15182 | 27263
Skyjumper | 3 | 4 | 292 | 0.9 | 772 | 257 | 1222 | 2991 | 4106
Acknowledgements
The research leading to these results has received funding from the European Union's FP7 project COMPOSE, under grant agreement 317862.
43,585
[ "1003790", "1003791", "1003792", "1003788" ]
[ "98761", "98761", "98761", "98761" ]
01485935
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01485935/file/978-3-642-38530-8_6_Chapter.pdf
Michael Lackner email: [email protected] Reinhard Berlach email: [email protected] Wolfgang Raschke email: [email protected] Reinhold Weiss email: [email protected] Christian Steger email: [email protected] A Defensive Virtual Machine Layer to Counteract Fault Attacks on Java Cards Keywords: Java Card, Defensive Virtual Machine, Countermeasure, Fault Attack The objective of Java Cards is to protect security-critical code and data against a hostile environment. Adversaries perform fault attacks on these cards to change the control and data flow of the Java Card Virtual Machine. These attacks confuse the Java type system, jump to forbidden code or remove run-time security checks. This work introduces a novel security layer for a defensive Java Card Virtual Machine to counteract fault attacks. The advantages of this layer from the security and design perspectives of the virtual machine are demonstrated. In a case study, we demonstrate three implementations of the abstraction layer running on a Java Card prototype. Two implementations use software checks that are optimized for either memory consumption or execution speed. The third implementation accelerates the run-time verification process by using the dedicated hardware protection units of the Java Card. Introduction A Java Card enables Java applets to run on a smart card. The primary purpose of using a Java Card is the write-once, run-everywhere approach and the ability of post-issuance installation of applets [START_REF] Sauveron | Multiapplication smart card: Towards an open smart card? Information Security Technical Report[END_REF]. These cards are used in a wide range of applications (e.g., digital wallets and transport tickets) to store security-critical code, data and cryptographic keys. Currently, these cards are still very resourceconstrained devices that include an 8-or 16-bit processor, 4kB of volatile memory and 128kB of non-volatile memory. To make a Java Card Virtual Machine run on such a constrained device, a subset of Java is used [START_REF]Oracle: Virtual Machine Specification. Java Card Platform[END_REF]. Furthermore, special Java Card security concepts, such as the Java Card firewall [START_REF]Oracle: Runtime Environment Specification. Java Card Platform[END_REF] and a verification process for every applet [START_REF] Leroy | Bytecode verification on Java smart cards[END_REF], were added. The Java Card firewall is a run-time security feature that protects an applet against illegal access from other applets. For every access to a field or method of an object, this check is performed. Unfortunately, the firewall security mechanism can be circumvented by applets that do not comply with the Java Card specification. Such applets are called malicious applets. To counteract malicious applets, a bytecode verification process is performed. This verification is performed either on-card or off-card for every applet [START_REF] Leroy | Bytecode verification on Java smart cards[END_REF]. Note that this bytecode verification is a static process and not performed during applet execution. The reasons for this static approach are the high resource needs of the verification process and the hardware constraints of the Java Card. This behavior is now abused by adversaries. They upload a valid applet onto the card and perform a fault attack (FA) during applet execution. Adversaries are now able to create a malicious applet out of a valid one [START_REF] Barbu | Attacks on Java Card 3.0 Combining Fault and Logical Attacks[END_REF]. 
A favorite time for performing a FA is during the fetching process. At this time, the virtual machine (VM) reads the next Java bytecode values from the memory. An adversary that performs an FA at this time can change the readout values. The VM then decodes the malicious bytecodes and executes them, which leads to a change in the control and data flow of the applet. A valid applet is mutated by such an FA to a malicious applet [START_REF] Barbu | Attacks on Java Card 3.0 Combining Fault and Logical Attacks[END_REF][START_REF] Mostowski | Malicious Code on Java Card Smartcards: Attacks and Countermeasures[END_REF][START_REF] Hamadouche | Subverting Byte Code Linker service to characterize Java Card API[END_REF] and gains unauthorized access to secret code and data [START_REF] Markantonakis | Smart card security[END_REF][START_REF] Bar-El | The Sorcerer's Apprentice Guide to Fault Attacks[END_REF]. To counteract an FA, a VM must perform run-time security checks to determine if the bytecode behaves correctly. In the literature, different countermeasures, such as control-flow checks [START_REF] Sere | Evaluation of Countermeasures Against Fault Attacks on Smart Cards[END_REF], double checks [START_REF] Barbu | Java Card Operand Stack:Fault Attacks, Combined Attacks and Countermeasures[END_REF], integrity checks [START_REF] Bouffard | Evaluation of the Ability to Transform SIM Applications into Hostile Applications[END_REF] and method encryption [START_REF] Razafindralambo | A Dynamic Syntax Interpretation for Java Based Smart Card to Mitigate Logical Attacks[END_REF], have been proposed. Barbu [3] proposed a dynamic attack countermeasure in which the VM executes either standard bytecodes or bytecodes with additional security checks. All these works do not concentrate on the question of how these security mechanisms can be smoothly integrated into a Java Card VM. For this integration, we propose adding an additional security layer into the VM. This layer abstracts the access to internal VM resources and performs run-time security checks to counteract FAs. The primary contributions of this paper are the following: -Introduction of a novel defensive VM (D-VM) layer to counteract FAs during run-time. Access to security-critical resources of the VM, such as the operand stack (OS), local variables (LV) and bytecode area (BA), is handled using this layer. -Usage of the D-VM layer as a dynamic countermeasure. Based on the actual security level of the card, different implementations of the D-VM layer are used. For a low-security level, the D-VM implementation uses fewer checks than for a high-security level. The security level depends on the credibility of the currently executed applet and run-time information received by hardware or software modules. -A case study of a defensive VM using three different D-VM layer implementations. The API of the D-VM layer is used by the Java Card VM to perform run-time checks on the currently executing bytecode. -The defensive VMs are executed on a smart card prototype with specific HW security features to speed up the run-time verification process. The resulting run-time and main memory consumption of all implemented D-VM layers are presented. Section 2 provides an overview of attacks on Java Cards and the current countermeasures against them. Section 3 describes the novel D-VM layer presented in this work and its integration into the Java Card design. Furthermore, the method by which the D-VM layer enables the concept of dynamic countermeasures is presented. 
Section 4 presents implementation details regarding how the three D-VM implementations are inserted into the smart card prototype. Section 5 analyzes the additional costs for the D-VM implementations based on the execution and main memory overhead. Finally, the conclusions and future work are discussed in Section 6. Related Work In this section, the basics of the Java Card VM and work related to FA on Java Cards are presented. Then, an analysis of work regarding methods of counteracting FAs and securing the VM are presented. Finally, an FA example is presented to demonstrate the danger posed by such run-time attacks for the security of Java Cards. Java Card Virtual Machine A Java Card VM is software that is executed on a microprocessor. The VM itself can be considered a virtual computer that executes Java applets stored in the data area of the physical microprocessor. To be able to execute Java applets, the VM uses internal data structures, such as the OS or the LV, to store interim results of logical and combinatorial operations. All of these internal data structures are general objects for adversaries that attack the Java Card [START_REF] Barbu | Java Card Operand Stack:Fault Attacks, Combined Attacks and Countermeasures[END_REF][START_REF] Razafindralambo | A Dynamic Syntax Interpretation for Java Based Smart Card to Mitigate Logical Attacks[END_REF][START_REF] Vertanen | Java Type Confusion and Fault Attacks[END_REF]. For every method invocation performed by the VM, a new Java frame [START_REF]Oracle: Virtual Machine Specification. Java Card Platform[END_REF] is created. This frame is pushed to the Java stack and removed from it when the method returns. In most VM implementations, this frame internally consists of three primary parts. These parts have static sizes during the execution of a method. The first frame part is the OS on which most Java operations are performed. The OS is the source and destination for most of the Java bytecodes. The second part is the LV memory region. The LV are used in the same manner as the registers on a standard CPU. The third part is the frame data, which holds all additional information needed by the VM and Java Card Runtime Environment (JCRE) [START_REF]Oracle: Runtime Environment Specification. Java Card Platform[END_REF]. This additional information includes, for example, return addresses and pointers to internal VM-related data structures. Attacks on Java Cards Loading an applet that does not conform to the specification defined in [START_REF]Oracle: Virtual Machine Specification. Java Card Platform[END_REF] onto a Java Card is a well-known problem called a logical attack (LA). After an LA, different applets on the card are no longer protected by the so-called Java sandbox model. Through this sandbox, an applet is protected from illegal write and read operations of other applets. To perform an LA, an adversary must know the secret key to install applets. This key is known for development cards, but it is highly protected for industrial cards and only known by authorized companies and authorities. In conclusion, LAs are no longer security threats for current Java Cards. Side-channel analyses are used to gather information about the currently executing method or instructions by measuring how the card changes environment parameters (e.g., power consumption and electromagnetic emission) during runtime. Integrated circuits influence the environment around them but can also be influenced by the environment. 
This influence is abused by an FA to change the normal control and data flow of the integrated circuit. Such FAs include glitch attacks on the power supply and laser attacks on the cards [START_REF] Bar-El | The Sorcerer's Apprentice Guide to Fault Attacks[END_REF][START_REF] Vertanen | Java Type Confusion and Fault Attacks[END_REF]. By performing side-channel analyses and FAs in combination, it is possible to break cryptographic algorithms to receive secret data or keys [START_REF] Markantonakis | Smart card security[END_REF]. In 2010, a new group of attacks called combined attacks (CA) was introduced. These CAs combine LAs and FAs to enable the execution of ill-formed code during run-time [START_REF] Barbu | Attacks on Java Card 3.0 Combining Fault and Logical Attacks[END_REF]. An example of a CA is the removal of the checkcast bytecode to cause type confusion during run-time. Then, an adversary is able to break the Java sandbox model and obtain access to secret data and code stored on the card [START_REF] Barbu | Attacks on Java Card 3.0 Combining Fault and Logical Attacks[END_REF][START_REF] Mostowski | Malicious Code on Java Card Smartcards: Attacks and Countermeasures[END_REF]. In this work work, we concentrate on countering FAs during the execution of an applet using our D-VM layer. Countermeasures Against Java Card Attacks Since approximately 2010, an increasing number of researchers have started concentrating on the question of what tasks must be performed to make a VM more robust against FAs and CAs. Several authors [START_REF] Sere | Checking the Paths to Identify Mutant Application on Embedded Systems[END_REF][START_REF] Bouffard | Evaluation of the Ability to Transform SIM Applications into Hostile Applications[END_REF] suggest adding an additional security component to the Java Card applet. In this component, they store checksums calculated over basic blocks of bytecodes. These checksums are calculated off-card in a static process and added to a new component of the applet. During run-time, the checksum of executed bytecodes is calculated using software and compared with the stored checksums. If these checksums are not the same, a security exception is thrown. Another FA countermeasure is the use of control-flow graph information [START_REF] Sere | Evaluation of Countermeasures Against Fault Attacks on Smart Cards[END_REF]. To enable this approach, a control-flow graph over basic blocks is calculated offcard and stored in an additional applet component. During run-time, the current control-flow graph is calculated and compared with the stored control graph. In [START_REF] Razafindralambo | A Dynamic Syntax Interpretation for Java Based Smart Card to Mitigate Logical Attacks[END_REF], the authors propose storing a countermeasure flag in a new applet component to indicate whether the method is encrypted. They perform this encryption using a secret key and the Java program counter for the bytecode of every method. Through this encryption, they are able to counteract attacks that change the control-flow of an applet to execute illegal code or data. Another countermeasure against FAs that target the data stored on the OS is presented in [START_REF] Barbu | Java Card Operand Stack:Fault Attacks, Combined Attacks and Countermeasures[END_REF]. In this work, integrity checks are performed when data are pushed or popped onto the OS. Through this approach, the OS is protected against FAs that corrupt the OS data. 
Another run-time check against FAs is proposed in [START_REF] Dubreuil | Type Classification against Fault Enabled Mutant in Java Based Smart Card[END_REF][START_REF] Lackner | Towards the Hardware Accelerated Defensive Virtual Machine -Type and Bound Protection[END_REF], in which separate OSes are created for each of the two data types, integralValue and reference. With this approach of splitting the OS, it is possible to counteract type-confusion attacks. A drawback is that in both works, the applet must be preprocessed. In [START_REF] Barbu | Dynamic Fault Injection Countermeasure[END_REF], the authors propose a dynamic countermeasure to counteract FAs. Bytecodes are implemented in different versions inside the VM: a standard version and an advanced version that performs additional security checks. The VM is able to switch during run-time from the standard to the advanced version. By using unused Java bytecodes, an applet programmer can explicitly call the advanced bytecode versions. The drawback of current FA countermeasures is that most of them add an additional security component to the applet or rely on preprocessing of the applet. This has several disadvantages, such as increased applet size or compatibility problems for VMs that do not support these new applet components. In this work, we propose a D-VM layer that performs checks on the currently executing bytecode. These checks are performed based on a run-time policy and do not require an off-card preprocessing step or an additional applet component.
EMAN4 Attack: Jump Outside the Bytecode Area
In 2011, the run-time attack EMAN4 was found [START_REF] Bouffard | Combined Software and Hardware Attacks on the Java Card Control Flow[END_REF]. In that work, a laser was used to manipulate the values read out from the EEPROM to 0x00. With this laser attack, an adversary is able to change the Java bytecode of post-issuance installed applets during their execution. The target time of the attack is when the VM fetches the operands of the goto_w bytecode from the EEPROM. Generally, the goto_w bytecode is used to perform a jump operation inside a method. The goto_w bytecode consists of the opcode byte 0xa8 and two offset bytes for the branch destination [START_REF]Oracle: Virtual Machine Specification. Java Card Platform[END_REF]. This branch offset is added to the current Java program counter to determine the next executing bytecode. An adversary who changes this offset is able to manipulate the control flow of the applet. With the help of the EMAN4 attack, it is possible to jump with the Java program counter outside the applet bytecode area (BA), as illustrated in Figure 1. This is done by changing the offset parameters of the goto_w bytecode from 0xFF20 to 0x0020 during the fetch process of the VM. The jump destination address of the EMAN4 attack is a data array outside the bytecode area. This data array was previously filled with adversary-defined data. After the laser attack, the VM executes the values of the data array. This execution of adversary-definable data leads to considerably more critical security problems, such as memory dumps [START_REF] Bouffard | The Next Smart Card Nightmare[END_REF]. In this work, we counteract the EMAN4 attack with our control-flow policy, which only allows fetching bytecodes that are inside the bytecode area.
Fig. 1. The EMAN4 run-time attack changes the jump offset 0xFF20 to 0x0020, which leads to the security threat of executing bytecode outside the defined BA of the current applet [START_REF] Bouffard | Combined Software and Hardware Attacks on the Java Card Control Flow[END_REF].
Defensive VM Layer
In this work, we propose adding a novel security layer to the Java Card. Through this layer, access to internal structures (e.g., OS, LV and BA) of the VM is handled. In reference to its defensive nature and its primary use for enabling a defensive VM, we name this layer the defensive VM (D-VM) layer. An overview of the D-VM layer and the D-VM API, which is used by the VM, is depicted in Figure 2 and is explained in detail below. Functionalities offered by the D-VM API include, for example, pushing data onto and popping data from the OS, writing to and reading from the LV, and fetching Java bytecodes. It is possible for the VM to implement all Java bytecodes by using these API functions. The pseudo-code example in Listing 1.1 shows the process of fetching a bytecode and the implementation of the sadd bytecode using our D-VM API approach (an illustrative sketch of this idea is also given at the end of this section). The sadd bytecode pops two values of integral data type from the OS and pushes the sum back onto the OS as an integral data type. A programmer specialized in VM security is able to implement and choose the appropriate countermeasures within the D-VM layer. These countermeasures are based on state-of-the-art knowledge and the hardware constraints of the smart card architecture. Programmers implementing the VM do not need to know these security techniques in detail but rather just use the D-VM API functions. If HW features are used, the D-VM layer communicates with these units and configures them through specific instructions. Through this approach, it is also very easy to alter the SW implementations by changing the D-VM layer implementation without changing specific Java bytecode implementations. It is possible to fulfill the same security policy on different smart card platforms where specific HW features are available. On a code size-constrained smart card platform, an implementation that has a small code size but requires more main memory or execution time is used. The appropriate implementations of security features within the D-VM API are used without the need to change the entire VM.
Dynamic Countermeasures: The D-VM layer is also a further step towards enabling dynamic fault attack countermeasures such as that proposed by Barbu in [START_REF] Barbu | Dynamic Fault Injection Countermeasure[END_REF]. In that work, he proposes a VM that uses different bytecode implementations depending on the actual security level of the smart card. If an attack or malicious behavior is detected, the security level is decreased. This decreased security level leads to an exchange of the implemented bytecodes with more secure versions. In these more secure bytecodes, different additional checks, such as double reads, are implemented, which leads to decreased run-time performance. Our D-VM layer further advances this dynamic countermeasure concept. Depending on the actual security level, an appropriate D-VM layer implementation is used. Therefore, the entire bytecode implementation remains unchanged, but it is possible to dynamically add and change security checks during run-time. An overview of this dynamic approach is outlined in Figure 3. The actual security level of the card is determined by HW sensors (e.g., brightness and supply voltage) and the behavior of the executing applet.
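Purely as an illustration of how such an exchangeable D-VM layer could be wired together, the following sketch uses invented names and is written in Java for readability, although the prototype described later is implemented in C and 8051 assembly:

// Illustrative sketch only: invented names, not the actual prototype code.
interface DVMLayer {
    void push(short value, byte type);   // push a typed value onto the operand stack
    short pop(byte expectedType);        // pop a value and check its stored type
    byte fetch();                        // fetch the next bytecode, checking the BA bounds
}

final class Interpreter {
    static final byte TYPE_INTEGRAL = 0, TYPE_REFERENCE = 1;
    private DVMLayer dvm;

    // Select a D-VM layer implementation according to the current security level,
    // e.g. a heavily checking implementation for low-trust applets and a fast one otherwise.
    void selectLayer(DVMLayer implementation) {
        dvm = implementation;
    }

    // sadd: pop two integral values, push their sum back as an integral value.
    void sadd() {
        short v1 = dvm.pop(TYPE_INTEGRAL);
        short v2 = dvm.pop(TYPE_INTEGRAL);
        dvm.push((short) (v1 + v2), TYPE_INTEGRAL);
    }
}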
For example, at a high security level, the D-VM layer can perform a read operation after pushing a value into the OS memory to detect an FA. At a lower security level, the D-VM layer performs additional bound, type and control-flow checks.
Security Context of an Applet: Another use case for the D-VM layer is the post-issuance installation of applets on the card. We focus on the user-centric ownership model (UCOM) [START_REF] Akram | A Paradigm Shift in Smart Card Ownership Model[END_REF] in which Java Card users are able to load their own applets onto the card. For the UCOM approach, each newly installed applet is assigned a defined security level at installation time. The security level depends on how trustworthy the applet is. For example, the security level for an applet signed with a valid key from the service provider is quite high, which results in a high execution speed. Such an applet should be contrasted with an applet that has no valid signature and is loaded onto the card by the Java Card owner. This applet will run at a low security level with many run-time checks but a slower execution speed. Furthermore, access to internal resources and applets installed on the card could be restricted by the low security level.
Security Policy
This chapter introduces the three security policies used in this work. With the help of these policies, it is possible to counteract the most dangerous threats that jeopardize security-critical data on the card. The type and bound policies are taken from [START_REF] Lackner | Towards the Hardware Accelerated Defensive Virtual Machine -Type and Bound Protection[END_REF] and are augmented with a control-flow policy. The fulfillment of the three policies for every bytecode is checked by three different D-VM layer implementations using our D-VM API.
Control-Flow Policy: The VM is only allowed to fetch bytecodes that are within the borders of the currently active method's BA. Fetching of bytecodes that are outside of this area is not allowed. The currently valid method BA changes when a new method is invoked or a return statement is executed. Because of this policy, it is no longer possible for control-flow-changing bytecodes (e.g., goto_w and if_scmp_w) to jump outside of the reserved bytecode memory area. This policy counters the EMAN4 attack [START_REF] Bouffard | Combined Software and Hardware Attacks on the Java Card Control Flow[END_REF] on the Java Card and all other attacks that rely on the execution of a data array or of code of another applet that is not inside the current BA.
Type Policy: Java bytecodes are strongly typed in the VM specification [START_REF]Oracle: Virtual Machine Specification. Java Card Platform[END_REF]. This typing means that for every Java bytecode, the type of operand that the bytecode expects and the type of the result stored in the OS or LV are clearly defined. An example is the sastore bytecode, which stores a short value in an element of a short array object. The sastore bytecode uses the top three elements from the OS as operands. The first element is the address of the array object, which is of type reference. The second element is the index operand of the array, which must be of type short. The third element is the value, which is stored within the array element and is of type short.
Type confusion between values of integral data (boolean, byte or short) and object references (byte[], short[] or class A, for example) is a serious problem for Java Cards [START_REF] Vertanen | Java Type Confusion and Fault Attacks[END_REF][START_REF] Mostowski | Malicious Code on Java Card Smartcards: Attacks and Countermeasures[END_REF][START_REF] Iguchi-Cartigny | Developing a Trojan applets in a smart card[END_REF][START_REF] Vetillard | Combined Attacks and Countermeasures[END_REF][START_REF] Bouffard | Combined Software and Hardware Attacks on the Java Card Control Flow[END_REF][START_REF] Hamadouche | Subverting Byte Code Linker service to characterize Java Card API[END_REF]. To counter these attacks, we divide all data types into the two main types, integralData and reference. Note that this policy does not prevent type confusion inside the main type reference between array and class types. Bound Policy: Most Java Card bytecodes push and pop data onto the OS or read and write data into the LV, which can be considered similar to registers. The OS is the main component for most Java bytecode operations. Similar to buffer overflow attacks in C programs [START_REF] Cowan | Buffer overflows: attacks and defenses for the vulnerability of the decade[END_REF], it is possible to overflow the reserved memory space for the OS and LV. An adversary is then able to set the return address of a method to any value. Such an attack was first found in 2011 by Bouffard [START_REF] Bouffard | Combined Software and Hardware Attacks on the Java Card Control Flow[END_REF][START_REF] Bouffard | The Next Smart Card Nightmare[END_REF]. An overflow of the OS happens by pushing or popping too many values onto the OS. An LV overflow happens when an incorrect LV index is accessed. This index parameter is decoded as an operand for several LV-related bytecodes (e.g., sstore, sload and sinc). This operand is therefore stored permanently in the nonvolatile memory. Thus, changing this operand through an FA gives an attacker access to memory regions outside the reserved LV memory region. These memory regions are created for every method invoked and are not changed during the method execution. Therefore in this work, we permit Java bytecodes to operate only within the reserved OS and LV memory regions. Java Card Prototype Implementation In this work three implementations of the D-VM layer are proposed to perform run-time security checks on the currently executing bytecode. Two implementations perform all checks in SW to ensure our security policies. One implementation uses dedicated HW protection units to accelerate the run-time verification process. The implementations of the D-VM layer were added into a Java Card VM and executed on a smart card prototype. This prototype is a cycle-accurate SystemC [START_REF]IEEE: Open SystemC Language Reference Manual IEEE Std 1666-2005[END_REF] model of an 8051 instruction set-compatible processor. All software components, such as the D-VM layer and the VM, are written in C and 8051 assembly language. D-VM Layer Implementations This section presents the implementation details for the three implemented D-VM layers used to create a defensive VM. All three implemented D-VM layers fulfill our security policy presented in Chapter 3 but differ from each other in the detailed manner in which the policies are satisfied. The key characteristic of the two SW D-VM implementations is that they use a different implementation of the type-storing approach to counteract type confusion. 
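Before turning to how the type information is stored, the LV side of the bound policy described above can be pictured in the same style: the sketch below guards an sstore-like write by checking the operand-supplied index against the size of the current frame's LV area. The frame layout and names are again illustrative assumptions, not the prototype's implementation.

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative frame layout: each invoked method gets its own LV area. */
struct frame {
    int16_t lv[8];       /* reserved LV memory region for this method */
    uint8_t lv_count;    /* number of LV slots the method declared    */
};

static struct frame current;

static void security_exception(void) { abort(); }

/* sstore <index>: store the popped short into LV[index].
 * The index comes from the bytecode stream stored in non-volatile memory
 * and may have been corrupted by a fault, so it is validated here. */
static void dvm_store_local(uint8_t index, int16_t value)
{
    if (index >= current.lv_count)       /* bound policy on the LV */
        security_exception();
    current.lv[index] = value;
}

static int16_t dvm_load_local(uint8_t index)
{
    if (index >= current.lv_count)
        security_exception();
    return current.lv[index];
}

int main(void)
{
    current.lv_count = 3;
    dvm_store_local(2, 42);              /* legal access */
    return dvm_load_local(2) == 42 ? 0 : 1;
    /* dvm_store_local(7, 0) would trigger the security exception */
}
```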
The run-time type information (integralData or reference) used to perform run-time checks can be stored either in a type bitmap (memory optimization) or in the actual word size of the microprocessor (speed optimization). The HW Accelerated D-VM uses a third approach and stores the type information in an additional bit of the main memory. Through this approach, the HW can easily store and check the type information for every OS and LV entry. An overview of how the type-storing policy is ensured by our D-VM implementations, together with the corresponding memory layouts, is shown in Figure 4.

Bit Storing D-VM: The type information for every entry of the OS and LV is represented by a one-bit entry in a type bitmap. A problem with this approach is that the run-time overhead is quite high because different shift and modulo operations must be performed to store and read the type information from the type bitmap. These operations (shift and modulo) are, for the 8051 architecture, computationally expensive operations and thus lead to longer execution times. An advantage of the bit-storing approach is the low memory overhead required to hold the type information in the type bitmap.

Word Storing D-VM: The run-time performance of the type storing and reading process is increased by storing the type information using the natural word size of the processor and data bus on which the memory for the OS and LV is located. Every element in the OS and LV is extended with a type element of a word size such that it can be processed very quickly by the architecture. By choosing this implementation, the memory consumption of the type-storing process increases compared with the previously introduced Bit Storing D-VM. Pseudo-codes for writing to the top of the stack of the OS for the bit- and word-storing approaches are shown in Listings 1.2 and 1.3.

HW Accelerated D-VM: Performing type and bound checks in SW to fulfill our security policy consumes a lot of computational power. Types must be loaded, checked and stored for almost every bytecode. The bounds of the OS and LV must be checked such that no bytecode performs an overflow. The HW Accelerated D-VM layer uses specific HW protection units of the smart card to accelerate these security checks. New protection units (bound protection and type protection) are able to check if the current memory move (MOV) operation is operating in the correct memory bounds. The type information for the OS and LV entries is stored as an additional type bit for every main memory word. The information is encoded into new assembly instructions to specify which memory region (OS, LV or BA) and with which data type (integralData or reference) the MOV operation should write or read data. An overview of the HW Accelerated D-VM is shown in Figure 5. Depending on the assembly instruction, the HW protection units perform four security operations:

-Check if the Java opcode is fetched from the currently active BA.
-Check if the destination address of the operation is within the memory area of the OS or LV. If the operation is not within these two bounded areas, a HW security exception is thrown.
-For every write operation, write the type encoded in the CPU instruction into the accessed memory word.
-For every read operation, check if the stored type is equal to the type encoded in the CPU instruction. If they are not equal, throw a HW security exception.

Malicious Java bytecodes violating our run-time policy are thus detected by the newly introduced HW protection units.
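Listing 1.2 (further below) sketches the push side of the bit-storing approach; as a companion, the following sketch shows how the matching pop could read the type bit back and reject a mismatch. The bitmap layout (one bit per OS slot) and the helper names are assumptions made for illustration only.

```c
#include <stdint.h>
#include <stdlib.h>

#define OS_SLOTS 16
#define T_INT 0          /* integralData */
#define T_REF 1          /* reference    */

static int16_t OS[OS_SLOTS];
static uint8_t type_bitmap[(OS_SLOTS + 7) / 8];  /* one type bit per OS slot */
static uint8_t size;

static void security_exception(void) { abort(); }

static void set_type_bit(uint8_t slot, uint8_t t)
{
    uint8_t mask = (uint8_t)(1u << (slot % 8));  /* the shift/modulo work     */
    if (t) type_bitmap[slot / 8] |= mask;        /* criticised above as being */
    else   type_bitmap[slot / 8] &= (uint8_t)~mask; /* costly on an 8051      */
}

static uint8_t get_type_bit(uint8_t slot)
{
    return (type_bitmap[slot / 8] >> (slot % 8)) & 1u;
}

static void dvm_push(int16_t value, uint8_t t)
{
    if (size >= OS_SLOTS) security_exception();
    set_type_bit(size, t);
    OS[size++] = value;
}

static int16_t dvm_pop(uint8_t expected)
{
    if (size == 0) security_exception();
    size--;
    if (get_type_bit(size) != expected)          /* type confusion detected */
        security_exception();
    return OS[size];
}

int main(void)
{
    dvm_push(7, T_INT);
    return dvm_pop(T_INT) == 7 ? 0 : 1;
    /* dvm_pop(T_REF) here would raise the security exception */
}
```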
Prototype Results

In this section, we present the overall computational overhead of the three implemented D-VM layers and their main memory consumption. All of them are compared with a VM implementation without the D-VM layer. The speed comparison is performed for different groups of bytecodes by self-written micro-benchmarks in which all bytecodes under test are measured. These test programs first perform an initialization phase where the needed operands for the bytecode under test are written into the OS or LV. After the execution of the bytecode under test, the effects on the OS or LV are removed. Note that our smart card platform has no data or instruction cache. Therefore, no caching effects need to be taken into account for the test programs.

Computational Overhead

Speed comparisons for specific bytecodes are shown in Figure 6. For example, the Java bytecode sload requires 148% more execution time for the Word Storing D-VM. For the Bit Storing D-VM, the execution overhead is 212%. The increased overhead is because of the expensive calculations used to store the type information in a bitmap. For the HW Accelerated D-VM, the execution speed decreases by only 4% because all type and bound checks are performed using HW. Additional run-time statistics for groups of bytecodes are listed in Table 1. As expected, the Bit Storing D-VM consumes the most overall run-time, with an increase of 208%. The Word Storing D-VM needs 142% more run-time. The HW Accelerated D-VM has only 6% more overhead.

Table 1. Speed comparison for different groups of bytecodes compared with a VM without the D-VM layer.

Bytecode Groups   | HW Accelerated D-VM | Word Storing D-VM | Bit Storing D-VM
Arithmetic/Logic  | +7%                 | +146%             | +240%
LV Access         | +5%                 | +185%             | +243%
OS Manipulation   | +5%                 | +151%             | +231%
Control Transfer  | +7%                 | +113%             | +173%
Array Access      | +5%                 | +130%             | +166%
Overall           | +6%                 | +142%             | +208%

Main Memory Consumption

The HW Accelerated D-VM requires one type bit per 8 bits of data to store the type information during run-time. This results in an overall main memory increase of 12.5%. The Word Storing D-VM requires in the worst case 33% more memory because one type byte holds the type information for two data bytes. The Bit Storing D-VM requires approximately 6.25% more memory in the case in which the entire memory is filled with OS and LV data. This is because the Bit Storing D-VM requires one type bit per 16 bits of data.

Conclusions and Future Work

This work presents a novel security layer for the virtual machine (VM) on Java Cards. Because it is intended to defend against fault attacks (FAs), it is called the defensive VM (D-VM) layer. This layer provides access to security-critical resources of the VM, such as the operand stack, local variables and the bytecode area. Inside this layer, security checks, such as type checking, bound checking and control-flow checks, are performed to protect the card against FAs. These FAs are executed during run-time to change the control and data flow of the currently executing bytecode. By storing different implementations of the D-VM layer on the card, it is possible to choose the appropriate security implementation based on the actual security level of the card. Through this approach, the number of security checks can be increased during run-time by switching among different D-VM implementations. Furthermore, it is possible to assign a trustworthy applet a low security level, which results in high execution performance, and vice versa. One D-VM layer implementation can be, for example, low security with high execution speed or high security with low execution speed. Another advantage is the concentration of the security checks inside the layer.

To demonstrate this novel security concept, we implemented three D-VM layers on a smart card prototype. All three layers fulfill the same security policy (control-flow, type and bound) for bytecodes but differ in their implementation details. Two D-VM layer implementations are fully implemented in software but differ in the manner in which the type information is stored. The Bit Storing D-VM has the highest run-time overhead, 208%, but the lowest memory increase, 6.25%. The Word Storing D-VM decreases the run-time overhead to 142% but consumes approximately 33% more memory. The HW Accelerated D-VM uses dedicated Java Card HW to accelerate the run-time verification process and has an execution overhead of only 6% and a memory increase of 12.5%.

In future work, we will focus on the question of which sensor data should be used to increase the internal security of the Java Card. Another question is how many security states are required and how much they differ in their security needs.

Listing 1.1. Fetching a bytecode and implementing the sadd bytecode using the D-VM API.

Fig. 2. The VM executes Java Card applets and uses the newly introduced D-VM layer to secure the Java Card against FAs.

Fig. 3. Based on the current security level of the VM, an appropriate D-VM layer implementation is chosen.

Fig. 4. The Bit Storing D-VM stores the type information for every OS and LV entry in a type bitmap. The Word Storing D-VM stores the type information below the value in the reserved OS and LV spaces. The HW Accelerated D-VM holds the type information as an additional type bit, which increases the memory size of a word from 8 bits to 9 bits.

Listing 1.2. Operations needed to push an element onto the OS by the Bit Storing D-VM.
dvm_push_integralData(value) {
  // push value onto OS and increase OS size
  OS[size++] = value;
  // store type information into type bitmap, INT -> integralData type
  bitmap[size/8] = INT << (size % 8);
}

Listing 1.3. Operations needed to push an element onto the OS by the Word Storing D-VM.
dvm_push_integralData(value) {
  // push value onto OS, increase OS size
  OS[size++] = value;
  // store type information into next memory word, INT -> integralData type
  OS[size++] = INT;
}

Fig. 5. Overview of the HW Accelerated D-VM implementation using new typed assembly instructions to access VM resources (OS, LV and BA). Malicious Java bytecodes violating our run-time policy are detected by the newly introduced HW protection units.

Acknowledgments. The authors would like to thank the Austrian Federal Ministry for Transport, Innovation, and Technology, which funded the CoCoon project under the FIT-IT contract FFG 830601. We would also like to thank our project partner NXP Semiconductors Austria GmbH.
38,073
[ "974301", "1003793", "1003794", "1003795", "1003796" ]
[ "65509", "65509", "65509", "65509", "65509" ]
01485938
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01485938/file/978-3-642-38530-8_9_Chapter.pdf
Pierre Dusart email: [email protected] Sinaly Traoré Lightweight Authentication Protocol for Low-Cost RFID Tags Keywords: Providing security in low-cost RFID (Radio Frequency Identification) tag systems is a challenging task because low-cost tags cannot support strong cryptography which needs costly resources. Special lightweight algorithms and protocols need to be designed to take into account the tag constraints. In this paper, we propose a function and a protocol to ensure pre-shared key authentication. Introduction In the future, optical bar codes based systems will be replaced by Radio Frequency Identification systems. These systems are composed of two parts: a RFID tag which replaces the bar code; a RFID reader which handles information send from the tag. The tag consists of a microchip which communicates with a reader through a small integrated antenna. Various external form factors can be used: the tag can look like a sheet of paper, like a plastic card or can be integrated below bar code for backward device's compatibility. RFID tags offer many advantages over optical bar codes [START_REF] Agarwal | RFID: Promises and Problems[END_REF]: the use of microchip enables a range of functionalities like computing capability or readable/writable storage. The stored data, depending on the capacity of the tag, can be static identification number up to rewritable user data. the use of RF antenna enables communication between the reader and the tag without line of sight from a distance of several decimeters [START_REF] Weis | Rfid (radio frequency identification): Principles and applications 3[END_REF]. A reader can communicate sequentially with up to hundred tags per second. To provide further functionalities than bar codes, the tag may require data storage. For example, the price of a product can be stored into the tag [3]. To know the price of a product, the customer can ask directly the tag instead of asking the database server connected with the cash register. With these new features, the adoption of RFID technology is growing: inventory without unpacking [START_REF] Östman | Rfid 5 most common applications on the shop floor[END_REF], prevention of counterfeiting [START_REF] James | Fda, companies test rfid tracking to prevent drug counterfeiting[END_REF], quality chain with environmental sensing [START_REF] Miles | RFID Technology and Applications[END_REF] are deployed applications. The tag systems can be easily adapted for universal deployment by various industries with low prices. But a new technology must also take into account problems inherited from legacy systems. For example in a shop, security problems to deal with are: an item is changed to another (it means for RFID to substitute a tag for a fake one); a price is changed without authorization by a malicious user (it means for RFID, to write a tag), . . . In addition, the privacy problem must be considered in some context i.e. an user must not reveal unintentionally information about himself. It means for RFID, the ability of a tag to reveal its identity only to authenticated partners. To cope with security and privacy problems, the first idea is to use asymmetric cryptography (e.g. RSA [START_REF] Rivest | A method for obtaining digital signatures and public-key cryptosystems[END_REF]) like in public key infrastructures. 
Unfortunately tags with strong cryptography [START_REF] Feldhofer | Strong crypto for rfid tags -a comparison of low-power hardware implementations[END_REF] and tamper resistant hardware [START_REF] Kömmerling | Design principles for tamper-resistant smartcard processors[END_REF] are too expensive for a wide deployment. Hence a constraint class of cryptography [START_REF] Poschmann | Lightweight cryptography -cryptographic engineering for a pervasive world[END_REF], named Lightweight Cryptography, appears. The aim of this paper is to propose a protocol and its related computational function. Section 2 introduces the system model and the underlying assumptions for our protocol. Then related work is presented in section 3. The protocol environment is described in section 4. Section 5 presents the protocol details and the computational functions. Section 6 provides an analysis of some security constraints and shows that the protocol satisfies the lightweight class. Section 7 illustrates how our protocol behaves against cryptographic attacks. System model and assumptions We consider a system with one RFID tag reading system and several low cost RFID tags. We assume that each tag shares a secret K with the reader, which is shared in a secure manner before the beginning of the communication (e.g. in manufacturing stage). The aim of the communication is to authenticate the tag i.e. find its identity and prove that it belongs to the system (by knowing the same secret). The tag is passively powered by the reader, thus: the communication needs to be short (speed and simplicity of an algorithm are usually qualifying factors); the communication can be interrupted at any time if the reader does not supply enough energy to the tag. For cost reasons, the standard cryptographic primitives (hash function, digital signature, encryption) are not implemented (no enough computation power is available or too much memory is required). Hence, we need a protocol using primitives with a low complexity. This property which is named "Lightweight property" [START_REF] Poschmann | Lightweight cryptography -cryptographic engineering for a pervasive world[END_REF] consists to use basic boolean operations like XOR, AND, ... The security of protocols needs also a good random number generator [START_REF] Hellekalek | Good random number generators are (not so) easy to find[END_REF]. This part can be assumed by the reader environment where the features can be higher and costly (e.g. a computer connected with a tag reading system). Related work The RFID technology needs security mechanisms to ensure the tag identity. Hence a tag spoofing, where an attacker replaces the genuine tag by its own creation, is defeated if good authentication mechanisms are used. But classical authentication solutions use cryptographic primitives like AES [START_REF]Advanced encryption standard[END_REF] or hash functions (SHA1 [START_REF] Eastlake | US Secure Hash Algorithm 1 (SHA1)[END_REF] or MD5 [START_REF] Rivest | The MD5 Message-Digest Algorithm[END_REF]) which are not adapted to low cost RFID tags. It is thus necessary to look for new suitable primitives for this specific constraint resources environment. 
In [START_REF] Vajda | Lightweight authentication protocols for low-cost rfid tags[END_REF][START_REF] Peris-Lopez | Lmap: A real lightweight mutual authentication protocol for low-cost rfid tags[END_REF][START_REF] Peris-Lopez | M 2 ap: A minimalist mutual-authentication protocol for low-cost rfid tags[END_REF][START_REF] Peris-Lopez | Emap: An efficient mutual-authentication protocol for low-cost rfid tags[END_REF], authors suggest some protocol families based on elementary arithmetic (e.g. binary bit addition or modular addition by a power of 2). However in [START_REF] Defend | Cryptanalysis of two lightweight rfid authentication schemes[END_REF], B. Defend et al. put in defect XOR and SUBSET protocols given in [START_REF] Vajda | Lightweight authentication protocols for low-cost rfid tags[END_REF] by learning key sequence. They proved that with few resources, an attacker can recover the session keys of these two protocols. The LMAP, M 2 AP and EMAP protocols proposed respectively in [START_REF] Peris-Lopez | Lmap: A real lightweight mutual authentication protocol for low-cost rfid tags[END_REF][START_REF] Peris-Lopez | M 2 ap: A minimalist mutual-authentication protocol for low-cost rfid tags[END_REF][START_REF] Peris-Lopez | Emap: An efficient mutual-authentication protocol for low-cost rfid tags[END_REF] allow a mutual authentication between the reader and the tag but are also completely broken [START_REF] Li | Security analysis of two ultra-lightweight rfid authentication protocols[END_REF] by key recovery attacks. In [START_REF] Lee | Efficient rfid authentication protocols based on pseudorandom sequence generators[END_REF], the authors proposed a family of protocols, called S-protocols, based on a family of generic random number generators that they introduced in the same paper. They presented a formal proof which guarantees the resistance of the S-protocol against the attacks of desynchronization [START_REF] Lo | De-synchronization attack on rfid authentication protocols[END_REF][START_REF] Van Deursen | Security of rfid protocols -a case study[END_REF] and impersonation [START_REF]Sixth International Conference on Availability, Reliability and Security, ARES 2011[END_REF]. With a small modification, they proposed the family of S * -protocols, which not only has the properties of S-protocols but also allows a mutual authentication between the reader and the tag. However authors do not show that their generic functions are compatible with lightweight RFID tags. In [START_REF] Yeh | Securing rfid systems conforming to epc class 1 generation 2 standard[END_REF], Yeh proposes a protocol corrected by Habibi [START_REF] Habibi | Attacks on a lightweight mutual authentication protocol under epc c-1 g-2 standard[END_REF], but attacks [START_REF] Castro | Another fallen hash-based rfid authentication protocol[END_REF] appear using O(2 17 ) off-line evaluations of the main function. Recently, some protocols are also defined in ISO/IEC WD 26167-6. Since they use AES engine [START_REF] Song | Security improvement of an rfid security protocol of iso/iec wd 29167-6[END_REF], they are out of the scope of this paper. Protocol requirements and Specifications We want to use a very simple dedicated protocol which uses a non-invertible function h. We provide a protocol in which the tag identity is sent in a secure manner and the tag is authenticated according to a challenge given by the reader. Then the reader shows that it knows a secret key by calculating an answer to the tag challenge. 
We present the authentication protocol: the reader needs to verify the identity of the tag. For the verification of the tag identity iD, the RFID reader R sends to the tag T a challenge C. Next, the tag proves its identity iD by computing a response using the common secret K shared with the reader. We avoid taking K = 0, for maximum security. Denoting by Auth this response, the authentication phase is the following:

-R → T : C = (C0, C1, . . . , C15), where the Ci are randomly chosen bytes.
-T → R : Auth = [iD ⊕ hK(C), hiD(C)]

To verify, the reader computes hK(C) using its challenge C and the key K, and can then retrieve the identity of the tag. Next, the authentication of the tag can be verified by computing hiD(C) using the result of the previous computation and the first challenge. The protocol allows card authentication by the reader. It can be adapted to allow mutual authentication with a slight modification: a challenge C' (which can be a counter) is sent with the tag response Auth. Next, the reader should respond with the computation of h(K ⊕ C')(C' ⊕ iD).

Proposal description

Our protocol uses a function h that is composed of two sub-functions, S and f, taking respectively one and two bytes as input. The function h used in the protocol must be lightweight (for low-cost devices) and satisfy some properties:
-it must behave like a one-way function (from the output, the input cannot be retrieved);
-its output must seem to be random;
-its output length must be sufficient to provide enough intrinsic security (to avoid replay and exhaustive authentication search).

We define an input size and an output size of 16 bytes for h, and the same size for the secret key K. The output size is chosen to be presented in 16-byte form so as to iterate an algorithm defined on bytes. The function f, which processes byte data blocks, and a substitution function S are described in the following subsections.

Function design

f function. Here we define the function f, which needs two input bytes to produce an output of one byte:

f : F256 × F256 → F256, (x, y) → z

with

z := [x ⊕ ((255 − y) ≫ 1)] + 16 · [((255 − x) ⊕ (y ≪ 1)) mod 16] mod 256,   (1)

where ⊕ is the bitwise exclusive or, + represents the classical integer addition, n ≫ 1 divides n by 2, n ≪ 1 multiplies n by 2 and keeps the result modulo 256 by not taking into account a possible overflow, and "16 ·" is the classical multiplication by 16. In subsection 6.2, we explain how to keep these various operations lightweight by using 8-bit registers. We have the following properties:

-f is non-symmetric, i.e., f(x, y) ≠ f(y, x) in general for (x, y) in F256 × F256;
-f has a uniform distribution of values, i.e., for all z in F256, #{(x, y) ∈ F256 × F256 : f(x, y) = z} = 256.

These properties can be easily verified. Hence we consider that the f function is one-way: one cannot retrieve the right (x, y) entry from the value z alone. The function h inherits this property.

Let i ∈ {0, . . . , 15} be a vector index and j ∈ {1, 2, 3, 4} a round index. Let M = (M0, . . . , M15) be a vector of 16 bytes. The function f does not use the same entries, depending on the vector index i and the round index j. We define:

F^j_i(M) = f(M_i, M_((i + 2^(j-1)) mod 16))

and

F^j(M) = (F^j_0(M), F^j_1(M), . . . , F^j_15(M)).

A working example of these indexes can be found in Table 2.
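As a quick illustration of equation (1) and of the round functions F^j, a direct C transcription is sketched below, together with an exhaustive check of the uniform-distribution property stated above. This is a PC-side experiment for convenience only; the card implementation with 8051 instructions is discussed in subsection 6.2.

```c
#include <stdint.h>
#include <stdio.h>

/* f as in equation (1): ">> 1" halves, "<< 1" doubles modulo 256,
 * the final addition is also taken modulo 256 (8-bit overflow discarded). */
static uint8_t f(uint8_t x, uint8_t y)
{
    uint8_t a = (uint8_t)(x ^ (uint8_t)((uint8_t)(255 - y) >> 1));
    uint8_t b = (uint8_t)(((uint8_t)(255 - x) ^ (uint8_t)(y << 1)) & 0x0F); /* mod 16 */
    return (uint8_t)(a + 16 * b);
}

/* F^j applied to a 16-byte state M: out[i] = f(M[i], M[(i + 2^(j-1)) mod 16]) */
static void F_round(int j, const uint8_t M[16], uint8_t out[16])
{
    int offset = 1 << (j - 1);
    for (int i = 0; i < 16; i++)
        out[i] = f(M[i], M[(i + offset) & 0x0F]);
}

int main(void)
{
    /* exhaustive check: every output value of f is reached by exactly 256 pairs */
    int count[256] = {0};
    for (int x = 0; x < 256; x++)
        for (int y = 0; y < 256; y++)
            count[f((uint8_t)x, (uint8_t)y)]++;
    int uniform = 1;
    for (int z = 0; z < 256; z++)
        if (count[z] != 256) uniform = 0;
    printf("uniform distribution: %s\n", uniform ? "yes" : "no");

    /* small usage example of F^1 on an arbitrary state */
    uint8_t M[16] = {0}, out[16];
    M[0] = 0xAB;
    F_round(1, M, out);
    printf("F1(M)[0] = %02X\n", out[0]);
    return 0;
}
```

On a PC this exhaustive check runs in a fraction of a second and is a convenient sanity test before porting the function to the 8-bit target.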
S function. Our S function is not a new one: we choose the AES [START_REF]Advanced encryption standard[END_REF][START_REF] Daemen | The Design of Rijndael: AES -The Advanced Encryption Standard[END_REF] SubBytes function for the quality of its properties. The SubBytes transformation is a non-linear byte substitution. For example, the eight-bit data "00000000" is transformed into B = "01100011". To avoid attacks based on simple algebraic properties, the SubBytes transformation is defined as the composition of two transformations in the finite field F_2^8, with the chosen representation F_2^8 ≈ F_2(X)/(X^8 + X^4 + X^3 + X + 1). The first transformation is the multiplicative inverse in the Galois field GF(2^8), known to have good non-linearity properties: the multiplicative inverse of each element is taken (the 8-bit element "00000000", or {00} in hexadecimal format, is mapped to itself). Next, the previous result is combined with an invertible affine transformation x → Ax ⊕ B, where A is a fixed 8 × 8 matrix over GF(2), B is the byte defined above, and ⊕ operates "exclusive or" on the individual bits in a byte. The SubBytes transformation is also chosen to avoid any fixed point (S(a) = a), any opposite fixed point (S(a) = ā) and also any self-invertible point (S(a) = S^(-1)(a)). Because it is based on many mathematical objects, the SubBytes function could seem difficult to implement, but the transformation can be reduced to an 8-bit substitution box. Hence, for any element the result can be found by looking it up in a table (see Figure 7 of [START_REF]Advanced encryption standard[END_REF]: substitution values for the byte {xy} (in hexadecimal format)).

We denote by S the following transformation: let M = (M0, . . . , M15) be a 16-byte vector; S is the function which associates M with the vector S(M) = (SubBytes(M0), . . . , SubBytes(M15)).

Description of the authentication function h : (C, K) → H. Formally, we follow the tag computation. First, we add the challenge to the key by the XOR operation, i.e., we calculate D = C ⊕ K = (C0 ⊕ K0, . . . , C15 ⊕ K15). Then we apply the substitution S to D. The first state M^0 is initialized by M^0 = S(D). Then, we calculate the following values:

M^1 = S(F^1(M^0)) ⊕ K,
M^2 = S(F^2(M^1)) ⊕ K,
M^3 = S(F^3(M^2)) ⊕ K,
M^4 = S(F^4(M^3)) ⊕ K.

Finally, the function returns H = M^4 = (M^4_0, . . . , M^4_15). We denote the result H by hK(C). Figure 1 summarizes this description, and a more classical definition is given in Algorithm 1.

Fig. 1. Authentication function.
  Input: C, K
  Output: H
  M^0 = S(C ⊕ K)
  for j = 1 to 4 do
    M^j = S(F^j(M^(j-1))) ⊕ K
  end for
  H = M^4
  return H

Algorithm 1. Tag computations.
  Input: C = (C0, . . . , C15), K = (K0, . . . , K15)
  Output: H = (H0, . . . , H15)
  {Computation of M^0 = S(C ⊕ K)}
  for i = 0 to 15 do
    M_i ← S(C_i ⊕ K_i)
  end for
  for j = 1 to 4 do
    {Computation of S(F^j(M^(j-1)))}
    for i = 0 to 15 do
      k ← M_i ⊕ ((255 − M_((i + 2^(j-1)) mod 16)) ≫ 1)
      l ← ((255 − M_i) ⊕ (M_((i + 2^(j-1)) mod 16) ≪ 1)) mod 16
      t ← (k + 16 · l) mod 256
      Temp_i ← S(t)
    end for
    {Computation of M^j = S(F^j(M^(j-1))) ⊕ K}
    for i = 0 to 15 do
      M_i ← Temp_i ⊕ K_i
    end for
  end for
  for i = 0 to 15 do
    H_i ← M_i
  end for
  return H
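For experimentation on a PC, Algorithm 1 translates almost line by line into C. The sketch below is only that, an illustrative PC-side transcription rather than the card code: the S-box is rebuilt from the inverse-plus-affine description given above instead of being typed in as a table, and f is repeated from the earlier sketch so that the block stays self-contained.

```c
#include <stdint.h>
#include <stdio.h>

/* GF(2^8) multiplication modulo x^8 + x^4 + x^3 + x + 1 (0x11B) */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
    uint8_t r = 0;
    while (b) {
        if (b & 1) r ^= a;
        a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1B : 0x00));
        b >>= 1;
    }
    return r;
}

static uint8_t rotl8(uint8_t v, int n) { return (uint8_t)((v << n) | (v >> (8 - n))); }

/* Build the AES SubBytes table: multiplicative inverse followed by the
 * affine transformation (so sbox[0x00] == 0x63, as quoted in the text). */
static uint8_t sbox[256];
static void init_sbox(void)
{
    for (int x = 0; x < 256; x++) {
        uint8_t inv = 0;
        for (int y = 1; y < 256 && x != 0; y++)
            if (gf_mul((uint8_t)x, (uint8_t)y) == 1) { inv = (uint8_t)y; break; }
        sbox[x] = (uint8_t)(inv ^ rotl8(inv, 1) ^ rotl8(inv, 2)
                                ^ rotl8(inv, 3) ^ rotl8(inv, 4) ^ 0x63);
    }
}

/* f and F^j as in equation (1), repeated here for self-containment */
static uint8_t f(uint8_t x, uint8_t y)
{
    uint8_t a = (uint8_t)(x ^ (uint8_t)((uint8_t)(255 - y) >> 1));
    uint8_t b = (uint8_t)(((uint8_t)(255 - x) ^ (uint8_t)(y << 1)) & 0x0F);
    return (uint8_t)(a + 16 * b);
}

static void F_round(int j, const uint8_t M[16], uint8_t out[16])
{
    int offset = 1 << (j - 1);
    for (int i = 0; i < 16; i++)
        out[i] = f(M[i], M[(i + offset) & 0x0F]);
}

/* h_K(C) following Algorithm 1 */
static void h_function(const uint8_t C[16], const uint8_t K[16], uint8_t H[16])
{
    uint8_t M[16], T[16];
    for (int i = 0; i < 16; i++) M[i] = sbox[C[i] ^ K[i]];      /* M^0 = S(C xor K)        */
    for (int j = 1; j <= 4; j++) {
        F_round(j, M, T);                                       /* F^j(M^(j-1))            */
        for (int i = 0; i < 16; i++) M[i] = sbox[T[i]] ^ K[i];  /* M^j = S(F^j(...)) xor K */
    }
    for (int i = 0; i < 16; i++) H[i] = M[i];
}

int main(void)
{
    uint8_t C[16] = {0}, K[16] = {0}, H[16];
    K[15] = 1;                      /* an arbitrary non-zero key for the demo */
    init_sbox();
    h_function(C, K, H);
    for (int i = 0; i < 16; i++) printf("%02X", H[i]);
    printf("\n");
    return 0;
}
```

Iterating h_function on its own output is also a convenient way to reproduce, on a PC, the kind of iterated bitstream used for the statistical tests mentioned in subsection 6.4.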
Analysis

Protocol security. The identity of the tag is not revealed directly: the tag's identity iD is masked by hK(C), the output of the h function, which appears random. But the reader can still determine the identity iD using the shared secret key K. The reader verifies that this identity has been used to compute the second part of the authentication. At this stage, the reader is sure that the tag with identity iD knows the secret key K. But, as mentioned in Section 4, a mutual authentication can be set up by adding the following steps. The reader shows that it knows K and iD by computing h(K ⊕ C')(C' ⊕ iD), where C' is the challenge given by the tag. The tag authenticates the reader by computing the same value and comparing the proposed result with the computed one. If they are equal, the mutual authentication is achieved.

Now we consider two cases:
-Fake tag: the tag receives the challenge C. It can arbitrarily choose a number iD to enter the system, but it does not know K and therefore cannot compute the first part of the authentication response.
-Fake reader: the reader chooses and sends C. Next, it receives a proper tag authentication. It cannot find iD from hiD(C) (because h is a one-way function), nor K.

Lightweight. We have to establish that the function can be programmed using usual assembler instructions. We refer to the ASM51 assembler [30]. First, we use 8-bit registers. To represent a 128-bit entry, sixteen 8-bit registers or memory bytes must be reserved. Next, we can implement the f function defined by (1) using very simple instructions, using a register named A and a carry C:
-The computation of A ≪ 1 can be translated by CLR C (Clear Carry) followed by RLC A (Rotate Left through Carry). The computation of A ≫ 1 can be translated by CLR C followed by RRC A (Rotate Right through Carry).
-The computation of 255 − A can be translated by CPL A, the complemented value.
-The bitwise XOR is classically translated by XRL.
-The modular reduction by 16 can be translated by an AND with 0x0F.
-The multiplication by 16 can be translated by four left shifts, or by an AND with 0x0F followed by SWAP, which swaps nibbles.
-The modular addition (mod 256) can be translated simply by ADD, without taking care of possible carries of an 8-bit register.

The SubBytes function can be implemented by looking up in a table, as explained in Figure 7 of [START_REF]Advanced encryption standard[END_REF]. This part of the AES algorithm can be computed with few gates compared to the whole AES (the most penalizing part being the key expansion, according to Table 3 of [START_REF] Hamalainen | Design and implementation of low-area and low-power aes encryption hardware core[END_REF]).

Now we claim that the properties of the h function presented in Section 5 are satisfied: the overflows of f are intended and contribute to the non-reversibility of the h function; the output seems random (subsection 6.4); and the avalanche criterion (subsection 6.3) shows that the output distribution of f carries over to the outputs of h.

Strict Avalanche Criterion. The strict avalanche criterion was originally presented in [START_REF] Forré | The strict avalanche criterion: Spectral properties of boolean functions and an extended definition[END_REF], as a generalization of the avalanche effect [START_REF] Webster | On the design of s-boxes[END_REF]. It was introduced for measuring the amount of nonlinearity in substitution boxes (S-boxes), like in the Advanced Encryption Standard (AES).
The avalanche effect tries to reflect the intuitive idea of high nonlinearity: a very small difference in the input produces a high change in the output, thus an avalanche of changes. Denote by HW the Hamming weight and by DH(x, y) = HW(x ⊕ y) the Hamming distance. Mathematically, the avalanche effect can be formalized by: for all x, y with DH(x, y) = 1, average(DH(F(x), F(y))) = n/2, where F is a candidate function for the avalanche effect. So the Hamming distance between the output of an n-bit random input and the output of the same input with one randomly flipped bit should be, on average, n/2. That is, a minimum input change (one single bit) is amplified and produces a maximum output change (half of the bits) on average.

First we show that if an input bit is changed, then the modification changes, on average, half of the bits of the corresponding output byte. The input byte x is changed to x ⊕ ∆x, with a difference ∆x of one bit. After the first SubBytes transformation, the difference will be S(x ⊕ k) ⊕ S(x ⊕ ∆x ⊕ k) = S(y) ⊕ S(y ⊕ ∆x), with y = x ⊕ k. We have, on average,

(1 / (256 · 8)) Σ_y Σ_{∆x, HW(∆x)=1} HW(S(y) ⊕ S(y ⊕ ∆x)) ≈ 4,

where HW is the Hamming weight. Hence an average of four bits will change if the difference is of one bit. Furthermore, averaged over all differences ∆x,

(1 / (256 · 256)) Σ_y Σ_{∆x} HW(S(y) ⊕ S(y ⊕ ∆x)) = 4.

Our function satisfies the avalanche effect as

(1 / 256^2) Σ_x Σ_y HW(x ⊕ S(f(x, y))) ≈ 4.

Next we show that if an input bit is changed, then the modification spreads over all the bytes of the output. Suppose that a bit of the k-th byte M^0_k is changed (0 ≤ k ≤ 15). Then M^1 is also changed, as the SubBytes substitution is not a constant function. At the first round, the bytes k and k − 1 (mod 16) will be modified. At the second round, the bytes k, k − 1, k − 2 and k − 3 (mod 16) will be modified. After the third round, eight bytes will be modified and, at the end of the fourth round, all 16 bytes will be modified. For example, if the first input byte is changed (M^0_0 is changed), then M^0_0 is used to compute M^1_0 and M^1_15, hence a difference appears in M^1_0 and M^1_15, and so on. We trace the difference diffusion in the following table:

Table 1. Diffusion table (difference injected in M0).
  First XOR  : M0
  Round j=1  : M0, M15
  Round j=2  : M0, M13, M14, M15
  Round j=3  : M0, M9, M10, M11, M12, M13, M14, M15
  Round j=4  : all sixteen bytes
  Last XOR   : all sixteen bytes

If another byte is changed, the same remark works by looking at the dependency table (Table 2). Hence, for any input difference, the modification will change an average of one half of the output.

Security Quality. To evaluate the security quality, we take Y = 1 and X = 0. We consider the iterated outputs of the authentication function. Hence we test the series hY(X), hY(hY(X)), ... as a random bitstream with the NIST test suite [START_REF] Williams | A statistical test suite for the validation of random number generators and pseudo random number generators for cryptographic applications[END_REF]. The bitstream satisfies all the tests (parameters of the NIST software: 10^6 input bits, 10 bitstreams).

Hardware Complexity: Implementation and Computational Cost. We choose an 8-bit-CPU tag for cost reasons. We implemented the authentication function on a MULTOS card [START_REF]Multos: Multos developer's guide[END_REF] without difficulties. This card is not a low-cost card, but we only test the implementation with basic instructions. The code size of the authentication function (with S-box table) without manual optimization is 798 bytes.
We can optimize the memory usage: the S-box table can be placed in Read-Only memory area: 256 bytes needed for AES SubBytes Table . the variables placed in the Random Access Memory Memory can be optimized. For internal state computation, one have to represent M with 16 bytes and we need two supplementary temporary bytes: at each round, a state byte value M i is used twice to compute the next state. In fact M j i is used for compute M j+1 i and M j+1 i+2 j-1 mod 16 . After computation of these two variables, the space allocation for the variable M j i can be reused. Next we compute the value M j+1 i+2 j-1 mod 16 depending on M j i+2 j-1 mod 16 and another byte. Now we can delete the memory space for M j i+2 j-1 mod 16 and compute another byte of M j+1 , step by step. Hence we use only two additional bytes to compute the next state of M . We evaluate the computational time with a PC computer (Intel CoreDuo T9600 2.8Ghz): 30 s for 10 7 authentications for a program in a C language, i.e. 3µs per authentication. Privacy Even if RFID technology is used for identify something in tracing system, in many cases this technology would merely cause infringements of private rights. We do not prevent the tracing system from recording informations but we need to protect the tag iD from external recording. Hence if an attacker records all transactions between tag and a reader, he cannot retrieve if the same tag has been read one or many times. Contrarily, a fake reader can determine if it has previously ask a tag by sending always the same challenge and recording responses, but it cannot know the real iD of the tag. Attacks The attacker's aim is to validate its tag identity. He can do this by producing a response to a challenge. If he can exploit the attack in a feasible way, then we say that the protocol is broken. Such a success of the attacker might be achieved with or without recovering the secret key shared by the reader and the tag. Hence a large key size is not enough to prove that the protocol cannot be broken with brute force attack. We might also take into account other attacks where the attacker can record, measure and study the tag responses. The necessary data could be obtained in a passive or in an active manner. In case of a passive attack, the attacker collects messages from one or more runs without interfering with the communication between the parties. In case of an active attack, the attacker impersonates the reader and/or the tag, and typically replays purposefully modified messages observed in previous runs of the protocol. Recording Attacks Replay Attack by recording: An attacker tries to extract the secret of a tag. He uses a reader and knows the commands to perform exchanges with the tag. He asks the tag many times. By listening to different requests, one can record n complete answers. A complete record is composed of a challenge C and the associated response Auth. Next if a recording challenge C is used or reused, then the attacker knows the correct response Auth. This attack works but -The attacker must have time to record all the possibilities; -To create a fake tag, the tag must have 2 128 • (2 • 2 128 ) bits (e.g. 10 60 To) of memory to store the previous records and have the good answer. If this type of tag exists, it is not a commercial one. -The challenge C, generated by the reader environment, is supposed to be random. So for a fixed C, the probability to have the good answer is very low. 
Relay Attack [START_REF] Kasper | An embedded system for practical security analysis of contactless smartcards[END_REF]: the attacker makes a link between the reader and tag; it's a kind of Man-in-the-Middle attack. He creates independent connections with reader and tag and relays messages between them. Hence a tag can be identified without being in the reader area. The problem can be treated by security environment protections. A partial solution to protect tag against this attack [START_REF] Schneier | Rfid cards and man-in-the-middle attacks[END_REF] is to limit its communication distance, but this countermeasure limits the potential of RFID tags. A better way is to activate a distance-bounding protocol [START_REF] Hancke | An rfid distance bounding protocol[END_REF]. Man-In-The-Middle attack: A man-in-the-middle attack is not possible because our proposal is based on a mutual authentication, in which two random numbers (C, C ), refreshed at each iteration of the protocol, are used. One cannot forge new responses using challenge differences because h iD (C+∆) = h iD (C)+∆ and h K (C +∆) = h K (C)+∆. In the same way, h K⊕C ⊕∆ (C ⊕iD) = h K⊕C (C ⊕ iD) ⊕ ∆. Side channels attacks Timing Attack: a timing attack [START_REF] Kocher | Timing attacks on implementations of diffie-hellman, rsa, dss, and other systems[END_REF] is a side channel attack in which the attacker attempts to compromise a cryptosystem by analyzing the time taken to execute cryptographic algorithm. The attack exploits the fact that every operation in a computer takes a dedicated time to execute. If the time cost of operation depends on key value or input values, on can retrieve these secret values by timing attack. Hence, during the implementation, we must be aware of the timing attack. For the computation of tag authentication, the time cost of the operations is the same whatever the value of the key. Next for the reader authentication, the tag must compare the reader response with its own computation. With poor security implementation but unfortunately "classical", if a difference between two bytes is found, the algorithm stops and return the information "Authentication failed". This kind of program is sensible to timing attack. The execution time is different according if the value is rapidly found or not found. To be immune from this attack, we make always a fixed number of steps; the response is send when all the response is verified. One can also add dummy cycles to equilibrate the parts of an implementation. Hence our function is resistant to Timing attack. Power consumption attack: an attacker studies the power consumption [START_REF] Kocher | Differential power analysis[END_REF] of the tag. He can do it by monitoring the delivery power from the reader to the tag. As the consumption of the chip depends on the executed instructions, the attacker can observe (SPA) the different parts of an algorithm. Here the algorithm does not need to be secret and the operations do not depend on the key values. One can also use random dummy cycles to disrupt the observation of the same part of program execution. Hence our function is SPA-resistant. Mathematical Attacks Lucky Authentication: A attacker tries to have a good authentication with a fake tag. He sends (C Nowadays, this probability is sufficient for a good security. Active Attack: Suppose that an attacker queries the tag T by sending C = 0 as challenge. 
Then, to determine the secret K, it must solve the equation

S(F^4(S(F^3(S(F^2(S(F^1(S(K))) ⊕ K)) ⊕ K)) ⊕ K)) ⊕ K = H,   (2)

where H is the response of T and the unknowns are the bytes of K. Since, at each round of the algorithm, operations are performed modulo 16 or modulo 256 and the results of these operations are processed through substitution tables, equation (2) is very difficult to analyze algebraically.

Linear [START_REF] Matsui | Linear cryptoanalysis method for des cipher[END_REF] or differential [START_REF] Biham | Differential cryptanalysis of des-like cryptosystems[END_REF] attacks: These attacks depend especially on the properties of the substitution function. First remember that, for a function g from F_2^m to F_2^m, a differential pair (α, β) is linked with the equation g(x ⊕ α) ⊕ g(x) = β. The differential attack is based on finding pairs for which the probability P(g(x ⊕ α) ⊕ g(x) = β), i.e., the cardinality #{x ∈ F_2^m : g(x ⊕ α) ⊕ g(x) = β}, is high. If such a pair exists, then the attack is feasible. Our function is well resistant to this attack. Indeed, the substitution function S is constructed by composing a power function with an affine map, which protects against differential attacks. Our h function inherits these properties: considering the output z of f(x, y) described in paragraph 5.1, it is easy to verify (as in paragraph 6.3) that for all α, β ∈ F256,

#{z ∈ F256 : S(z ⊕ α) ⊕ S(z) = β} ≤ 4.

This rules out the existence of a differential pair (α, β) such that the probability P(S(x ⊕ α) ⊕ S(x) = β) is high. To achieve a linear attack, one aims at awarding credibilities to equations of the type ⟨α, x⟩ ⊕ ⟨β, S(x)⟩ = 0, with α, β ∈ F256. We know that, for all α and β not identically equal to zero, this equation has a number of solutions close to 128, which makes the linear attack expensive.

Desynchronizing attack: In a desynchronization attack, the adversary aims to disrupt the key update, leaving the tag and reader in a desynchronized state in which future authentication would be impossible. Compared to some other protocols [START_REF] Van Deursen | Security of rfid protocols -a case study[END_REF], the key does not change in our authentication protocol. This is not a lack of security: the key may change during stocktaking or subscription renewal, by exchanging the tag for another one carrying the new key.

Conclusion

We have presented a lightweight authentication protocol for low-cost RFID tags. The internal functions are well adapted to 8-bit CPUs with little memory and without a cryptoprocessor, even if it is true that a precise evaluation of the building cost and performance of a tag supporting our protocol (i.e., very few CPU functions and less than 1 Kbyte of memory) should be carried out with a manufacturer. We use the security qualities of the AES S-boxes to build a function, specifically dedicated to authentication, which keeps them. The notions of privacy and the classic attacks are addressed. The proposed version is light in terms of implementation and cost, which makes it usable on RFID systems. Even if these systems are intended for simple applications, such as a secure counter of photocopies or stock management in a small shop, the security level reached here allows more ambitious applications to be envisaged.

Table 2. Dependency table.

Table 3. NIST statistical test results (percentage of passing sequences, significance level α = 0.01).
  1.  Frequency Test (Monobit)           99/100
  2.  Frequency Test (Block)             100/100
  3.  Runs Test                          100/100
  4.  Longest Run of Ones                99/100
  5.  Binary Matrix Rank Test            98/100
  6.  Discrete Fourier Transform Test    98/100
  7.  Non-Overlapping Template           98/100
  8.  Overlapping Template               98/100
  9.  Maurer's Universal Statistical     100/100
  10. Linear Complexity Test             100/100
  11. Serial Test                        99/100
  12. Approximate Entropy Test           100/100
  13. Cumulative Sums (Cusum) Test       98/100
  14. Random Excursions Test             90/93
  15. Random Excursion Variant Test      91/93

Acknowledgements. The authors want to thank the anonymous reviewers for their constructive comments, which were helpful to improve this paper, and Damien Sauveron for proofreading preliminary versions.
34,771
[ "6208", "1003802" ]
[ "444304", "302584" ]
01485970
en
[ "info" ]
2024/03/04 23:41:48
2012
https://inria.hal.science/hal-01485970/file/978-3-642-37635-1_12_Chapter.pdf
Carlos G López Pombo email: [email protected] Pablo F Castro email: [email protected] Nazareno M Aguirre email: [email protected] Thomas S E Maibaum Satisfiability Calculus: The Semantic Counterpart of a Proof Calculus in General Logics Since its introduction by Goguen and Burstall in 1984, the theory of institutions has been one of the most widely accepted formalizations of abstract model theory. This work was extended by a number of researchers, José Meseguer among them, who presented General Logics, an abstract framework that complements the model theoretical view of institutions by defining the categorical structures that provide a proof theory for any given logic. In this paper we intend to complete this picture by providing the notion of Satisfiability Calculus, which might be thought of as the semantical counterpart of the notion of proof calculus, that provides the formal foundations for those proof systems that use model construction techniques to prove or disprove a given formula, thus "implementing" the satisfiability relation of an institution. Introduction The theory of institutions, presented by Goguen and Burstall in [START_REF] Goguen | Introducing institutions[END_REF], provides a formal and generic definition of what a logical system is, from a model theoretical point of view. This work evolved in many directions: in [START_REF] Meseguer | General logics[END_REF], Meseguer complemented the theory of institutions by providing a categorical characterization for the notions of entailment system (also called π-institutions by other authors in [START_REF] Fiadeiro | Generalising interpretations between theories in the context of π-institutions[END_REF]) and the corresponding notion of proof calculi; in [START_REF] Goguen | Institutions: abstract model theory for specification and programming[END_REF][START_REF] Tarlecki | Moving between logical systems[END_REF] Goguen and Burstall, and Tarlecki, respectively, extensively investigated the ways in which institutions can be related; in [START_REF] Sannella | Specifications in an arbitrary institution[END_REF], Sannella and Tarlecki studied how specifications in an arbitrary logical system can be structured; in [START_REF] Tarlecki | Abstract specification theory: an overview[END_REF], Tarlecki presented an abstract theory of software specification and development; in [START_REF] Mossakowski | Comorphism-based Grothendieck logics[END_REF][START_REF] Mossakowski | Heterogeneous logical environments for distributed specifications[END_REF] and [START_REF] Diaconescu | Logical foundations of CafeOBJ[END_REF][START_REF] Diaconescu | Grothendieck institutions[END_REF], Mossakowski and Tarlecki, and Diaconescu, respectively, proposed the use of institutions as a foundation for heterogeneous environments for software specification. Institutions have also been used as a very general version of abstract model theory [START_REF] Diaconescu | Institution-independent Model Theory[END_REF], offering a suitable formal framework for addressing heterogeneity in specifications [START_REF] Mossakowski | The heterogeneous tool set[END_REF][START_REF] Tarlecki | Towards heterogeneous specifications[END_REF], including applications to UML [START_REF] Cengarle | A heterogeneous approach to UML semantics[END_REF] and other languages related to computer science and software engineering. Extensions of institutions to capture proof theoretical concepts have been extensively studied, most notably by Meseguer [START_REF] Meseguer | General logics[END_REF]. 
Essentially, Meseguer proposes the extension of entailment systems with a categorical concept expressive enough to capture the notion of proof in an abstract way. In Meseguer's words: A reasonable objection to the above definition of logic 5 is that it abstracts away the structure of proofs, since we know only that a set Γ of sentences entails another sentence ϕ, but no information is given about the internal structure of such a Γ ϕ entailment. This observation, while entirely correct, may be a virtue rather than a defect, because the entailment relation is precisely what remains invariant under many equivalent proof calculi that can be used for a logic. Before Meseguer's work, there was an imbalance in the definition of a logic in the context of institution theory, since the deductive aspects of a logic were not taken into account. Meseguer concentrates on the proof theoretical aspects of a logic, providing not only the definition of entailment system, but also complementing it with the notion of proof calculus, obtaining what he calls a logical system. As introduced by Meseguer, the notion of proof calculus provides, intuitively, an implementation of the entailment relation of a logic. Indeed, Meseguer corrected the inherent imbalance in favour of models in institutions, enhancing syntactic aspects in the definition of logical systems. However, the same lack of an operational view observed in the definition of entailment systems still appears with respect to the notion of satisfiability, i.e., the satisfaction relation of an institution. In the same way that an entailment system may be "implemented" in terms of different proof calculi, a satisfaction relation may be "implemented" in terms of different satisfiability procedures. Making these satisfiability procedures explicit in the characterization of logical systems is highly relevant, since many successful software analysis tools are based on particular characteristics of these satisfiability procedures. For instance, many automated analysis tools rely on model construction, either for proving properties, as with model-checkers, or for finding counterexamples, as with tableaux techniques or SAT-solving based tools. These techniques constitute an important stream of research in logic, in particular in relation to (semi-)automated software validation and verification. These kinds of logical systems can be traced back to the works of Beth [START_REF] Beth | The Foundations of Mathematics[END_REF]17], Herbrand [START_REF] Herbrand | Investigation in proof theory[END_REF] and Gentzen [START_REF] Gentzen | Investigation into logical deduction[END_REF]. Beth's ideas were used by Smullyan to formulate the tableaux method for first-order predicate logic [START_REF] Smullyan | First-order Logic[END_REF]. Herbrand's and Gentzen's works inspired the formulation of resolution systems presented by Robinson [START_REF] Robinson | A machine-oriented logic based on the resolution principle[END_REF]. Methods like those based on resolution and tableaux are strongly related to the semantics of a logic; one can often use them to guide the construction of models. This is not possible in pure deductive methods, such as natural deduction or Hilbert systems, as formalized by Meseguer. In this paper, our goal is to provide an abstract characterization of this class of semantics based tools for logical systems. 
This is accomplished by introducing a categorical characterization of the notion of satisfiability calculus which embraces logical tools such as tableaux, resolution, Gentzen style sequents, etc. As we mentioned above, this can be thought of as a formalization of a semantic counterpart of Meseguer's proof calculus. We also explore the concept of mappings between satisfiability calculi and the relation between proof calculi and satisfiability calculi. The paper is organized as follows. In Section 2 we present the definitions and results we will use throughout this paper. In Section 3 we present a categorical formalization of satisfiability calculus, and prove relevant results underpinning the definitions. We also present examples to illustrate the main ideas. Finally in Section 4 we draw some conclusions and describe further lines of research. Preliminaries From now on, we assume the reader has a nodding acquaintance with basic concepts from category theory [START_REF] Mclane | Categories for working mathematician[END_REF][START_REF] Fiadeiro | Categories for software engineering[END_REF]. Below we present the basic definitions and results we use throughout the rest of the paper. In the following, we follow the notation introduced in [START_REF] Meseguer | General logics[END_REF]. An Institution is an abstract formalization of the model theory of a logic by making use of the relationships existing between signatures, sentences and models. These aspects are reflected by introducing the category of signatures, and by defining functors going from this category to the categories Set and Cat, to capture sets of sentences and categories of models, respectively, for a given signature. The original definition of institutions is the following: Definition 1. ([1] ) An institution is a structure of the form Sign, Sen, Mod, {|= Σ } Σ∈|Sign| satisfying the following conditions: -Sign is a category of signatures, -Sen : Sign → Set is a functor. Let Σ ∈ |Sign|, then Sen(Σ) returns the set of Σ-sentences, -Mod : Sign op → Cat is a functor. Let Σ ∈ |Sign|, then Mod(Σ) returns the category of Σ-models, -{|= Σ } Σ∈|Sign| , where |= Σ ⊆ |Mod(Σ)| × Sen(Σ), M |= Σ Sen(σ)(φ) iff Mod(σ op )(M ) |= Σ φ . Roughly speaking, the last condition above says that the notion of truth is invariant with respect to notation change. Given Σ ∈ |Sign| and Γ ⊆ Sen(Σ), Mod(Σ, Γ ) denotes the full subcategory of Mod(Σ) determined by those models M ∈ |Mod(Σ)| such that M |= Σ γ, for all γ ∈ Γ . The relation |= Σ between sets of formulae and formulae is defined in the following way: given Σ ∈ |Sign|, Γ ⊆ Sen(Σ) and α ∈ Sen(Σ), Γ |= Σ α if and only if M |= Σ α, for all M ∈ |Mod(Σ, Γ )|. An entailment system is defined in a similar way, by identifying a family of syntactic consequence relations, instead of a family of semantic consequence relations. Each of the elements in this family is associated with a signature. These relations are required to satisfy reflexivity, monotonicity and transitivity. In addition, a notion of translation between signatures is considered. Definition 2. ([2]) An entailment system is a structure of the form Sign, Sen, { Σ } Σ∈|Sign| satisfying the following conditions: -Sign is a category of signatures, -Sen : Sign → Set is a functor. Let Σ ∈ |Sign|; then Sen(Σ) returns the set of Σ-sentences, and -{ Σ } Σ∈|Sign| , where Σ ⊆ 2 Sen(Σ) × Sen(Σ), is a family of binary relations such that for any Σ, Σ ∈ |Sign|, {φ} ∪ {φ i } i∈I ⊆ Sen(Σ), Γ, Γ ⊆ Sen(Σ), the following conditions are satisfied: 1. 
reflexivity: {φ} Σ φ, 2. monotonicity: if Γ Σ φ and Γ ⊆ Γ , then Γ Σ φ, 3. transitivity: if Γ Σ φ i for all i ∈ I and {φ i } i∈I Σ φ, then Γ Σ φ, and 4. -translation: if Γ Σ φ, then for any morphism σ : Σ → Σ in Sign, Sen(σ)(Γ ) Σ Sen(σ)(φ). Definition 3. ([2] ) Let Sign, Sen, { Σ } Σ∈|Sign| be an entailment system. Its category Th of theories is a pair O, A such that: -O = { Σ, Γ | Σ ∈ |Sign| and Γ ⊆ Sen(Σ) }, and -A = σ : Σ, Γ → Σ , Γ Σ, Γ , Σ , Γ ∈ O, σ : Σ → Σ is a morphism in Sign and for all γ ∈ Γ, Γ Σ Sen(σ)(γ) . In addition, if a morphism σ : Σ, Γ → Σ , Γ satisfies Sen(σ)(Γ ) ⊆ Γ , it is called axiom preserving. By retaining those morphisms of Th that are axiom preserving, we obtain the subcategory Th 0 . If we now consider the definition of Mod extended to signatures and sets of sentences, we get a functor Mod : Th op → Cat defined as follows: let T = Σ, Γ ∈ |Th|, then Mod(T ) = Mod(Σ, Γ ). Definition 4. ([2]) Let Sign, Sen, { Σ } Σ∈|Sign| be an entailment system and Σ, Γ ∈ |Th 0 |. We define • : 2 Sen(Σ) → 2 Sen(Σ) as follows: Γ • = γ Γ Σ γ . This function is extended to elements of Th 0 , by defining it as follows: Σ, Γ • = Σ, Γ • . Γ • is called the theory generated by Γ . Definition 5. ([2] ) Let Sign, Sen, { Σ } Σ∈|Sign| and Sign , Sen , { Σ } Σ∈|Sign | be entailment systems, Φ : Th 0 → Th 0 be a functor and α : Sen → Sen • Φ a natural transformation. Φ is said to be α-sensible if and only if the following conditions are satisfied: 1. there is a functor Φ : Sign → Sign such that sign • Φ = Φ • sign, where sign and sign are the forgetful functors from the corresponding categories of theories to the corresponding categories of signatures, that when applied to a given theory project its signature, and 2. if Σ, Γ ∈ |Th 0 | and Σ , Γ ∈ |Th 0 | such that Φ( Σ, Γ ) = Σ , Γ , then (Γ ) • = (∅ ∪ α Σ (Γ )) • , where ∅ = α Σ (∅) 6 . Φ is said to be α-simple if and only if Γ = ∅ ∪α Σ (Γ ) is satisfied in Condition 2, instead of (Γ ) • = (∅ ∪ α Σ (Γ )) • . It is straightforward to see, based on the monotonicity of • , that α-simplicity implies α-sensibility. An α-sensible functor has the property that the associated natural transformation α depends only on signatures. Now, from Definitions 1 and 2, it is possible to give a definition of logic by relating both its modeltheoretic and proof-theoretic characterizations; a coherence between the semantic and syntactic relations is required, reflecting the soundness and completeness of standard deductive relations of logical systems. Definition 6. ([2] ) A logic is a structure of the form Sign, Sen, Mod, { Σ } Σ∈|Sign| , {|= Σ } Σ∈|Sign| satisfying the following conditions: -Sign, Sen, { Σ } Σ∈|Sign| is an entailment system, -Sign, Sen, Mod, {|= Σ } Σ∈|Sign| is an institution, and the following soundness condition is satisfied: for any Σ ∈ |Sign|, φ ∈ Sen(Σ), Γ ⊆ Sen(Σ): Γ Σ φ implies Γ |= Σ φ . A logic is complete if, in addition, the following condition is also satisfied: for any Σ ∈ |Sign|, φ ∈ Sen(Σ), Γ ⊆ Sen(Σ): -Sign, Sen, { Σ } Σ∈|Sign| is an entailment system, -P : Finally, a logical system is defined as a logic plus a proof calculus for its proof theory. Γ |= Σ φ implies Γ Σ φ . Th 0 → Struct P C is a functor. Let T ∈ |Th 0 |, then P(T ) ∈ |Struct P C | is the proof-theoretical structure of T , -Pr : Struct P C → Set Definition 8. 
([2]) A logical system is a structure of the form Sign, Sen, Mod, { Σ } Σ∈|Sign| , {|= Σ } Σ∈|Sign| , P, Pr, π satisfying the following conditions: -Sign, Sen, Mod, { Σ } Σ∈|Sign| , {|= Σ } Σ∈|Sign| is a logic, and -Sign, Sen, { Σ } Σ∈|Sign| , P, Pr, π is a proof calculus. Satisfiability Calculus In Section 2, we presented the definitions of institutions and entailment systems. Additionally, we presented Meseguer's categorical formulation of proof that provides operational structure for the abstract notion of entailment. In this section, we provide a categorical definition of a satisfiability calculus, providing a corresponding operational formulation of satisfiability. A satisfiability calculus is the formal characterization of a method for constructing models of a given theory, thus providing the semantic counterpart of a proof calculus. Roughly speaking, the semantic relation of satisfaction between a model and a formula can also be implemented by means of some kind of structure that depends on the model theory of the logic. The definition of a satisfiability calculus is as follows: Definition 9. [Satisfiability Calculus] A satisfiability calculus is a structure of the form Sign, Sen, Mod, {|= Σ } Σ∈|Sign| , M, Mods, µ satisfying the following conditions: -Sign, Sen, Mod, {|= Σ } Σ∈|Sign| is an institution, -M : Th 0 → Struct SC is a functor. Let T ∈ |Th 0 |, then M(T ) ∈ |Struct SC | is the model structure of T , -Mods : Struct SC → Cat is a functor. Let T ∈ |Th 0 |, then Mods(M(T )) is the category of canonical models of T ; the composite functor Mods • M : Th 0 → Cat will be denoted by models, and µ : models op → Mod is a natural transformation such that, for each T = Σ, Γ ∈ |Th 0 |, the image of µ T : models op (T ) → Mod(T ) is the category of models Mod(T ). The map µ T is called the projection of the category of models of the theory T . The intuition behind the previous definition is that, for any theory T , the functor M assigns a model structure for T in the category Struct SC 7 . For instance, in propositional tableaux, a good choice for Struct SC is the collection of legal tableaux, where the functor M maps a theory to the collection of tableaux obtained for that theory. The functor Mods projects those particular structures that represent sets of conditions that can produce canonical models of a theory T = Σ, Γ (i.e., the structures that represent canonical models of Γ ). For example, in the case of propositional tableaux, this functor selects the open branches of tableaux, that represent satisfiable sets of formulae, and returns the collections of formulae obtained by closuring these sets. Finally, for any theory T , the functor µ T relates each of these sets of conditions to the corresponding canonical model. Again, in propositional tableaux, this functor is obtained by relating a closured set of formulae with the models that can be defined from these sets of formulae in the usual ways [START_REF] Smullyan | First-order Logic[END_REF]. Example 1. [Tableaux Method for First-Order Predicate Logic] Let us start by presenting the tableaux method for first-order logic. Let us denote by I F OL = Sign, Sen, Mod, {|= Σ } Σ∈|Sign| the institution of first-order predicate logic. Let Σ ∈ |Sign| and S ⊆ Sen(Σ); then a tableau for S is a tree such that: 1. the nodes are labeled with sets of formulae (over Σ) and the root node is labeled with S, 2. 
if u and v are two connected nodes in the tree (u being an ancestor of v), then the label of v is obtained from the label of u by applying one of the following rules: where, in the last rules, c is a new constant and t is a ground term. A sequence of nodes s 0 τ α 0 0 --→ s 1 τ α 1 1 --→ s 2 τ α 2 2 --→ . . . is a branch if: a) s 0 is the root node of the tree, and b) for all i ≤ ω, s i → s i+1 occurs in the tree, τ αi i is an instance of one of the rules presented above, and α i are the formulae of s i to which the rule was applied. A branch s 0 τ α 0 0 --→ s 1 τ α 1 1 --→ s 2 τ α 2 2 --→ . . . in a tableau is saturated if there exists i ≤ ω such that s i = s i+1 . A branch s 0 τ α 0 0 --→ s 1 τ α 1 1 --→ s 2 τ α 2 2 --→ . . . in a tableau is closed if there exists i ≤ ω and α ∈ Sen(Σ) such that {α, ¬α} ⊆ s i . Let s 0 τ α 0 0 --→ s 1 τ α 1 1 --→ s 2 τ α 2 2 --→ . . . be a branch in a tableau. Examining the rules presented above, it is straightforward to see that every s i with i < ω is a set of formulae. In each step, we have either the application of a rule decomposing one formula of the set into its constituent parts with respect to its major connective, while preserving satisfiability, or the application of the rule [f alse] denoting the fact that the corresponding set of formulae is unsatisfiable. Thus, the limit set of the branch is a set of formulae containing sub-formulae (and "instances" in the case of quantifiers) of the original set of formulae for which the tableau was built. As a result of this, every open branch expresses, by means of a set of formulae, the class of models satisfying them. In order to define the tableau method as a satisfiability calculus, we provide formal definitions for M, Mods and µ. The proofs of the lemmas and properties shown below are straightforward using the introduced definitions. The interested reader can find these proofs in [START_REF] Lopez Pombo | Satisfiability calculus: the semantic counterpart of a proof calculus in general logics[END_REF]. First, we introduce the category Str Σ,Γ of tableaux for sets of formulae over signature Σ and assuming the set of axioms Γ . In Str Σ,Γ , objects are sets of formulae over signature Σ, and morphisms represent tableaux for the set occurring in their target and having subsets of the set of formulae occurring at the end of open branches, as their source. Definition 10. Let Σ ∈ |Sign| and Γ ⊆ Sen(Σ), then we define Str Σ,Γ = O, A such that O = 2 Sen(Σ) and A = {α : {A i } i∈I → {B j } j∈J | α = {α j } j∈J }, where for all j ∈ J , α j is a branch in a tableau for Γ ∪ {B j } with leaves ∆ ⊆ {A i } i∈I . It should be noted that ∆ |= Σ Γ ∪ {B j }. The functor M must be understood as the relation between a theory in |Th 0 | and its category of structures representing legal tableaux. So, for every theory T , M associates the strict monoidal category [START_REF] Mclane | Categories for working mathematician[END_REF] Str Σ,Γ , ∪, ∅ , and for every theory morphism σ : Σ, Γ → Σ , Γ , M associates a morphism σ : Str Σ,Γ → Str Σ ,Γ which is the homomorphic extension of σ to the structure of the tableaux. Definition 12. M : Th 0 → Struct SC is defined as M( Σ, Γ ) = Str Σ,Γ , ∪, ∅ and M(σ : Σ, Γ → Σ , Γ ) = σ : Str Σ,Γ , ∪, ∅ → Str Σ ,Γ , ∪, ∅ , the homo- morphic extension of σ to the structures in Str Σ,Γ , ∪, ∅ . Lemma 3. M is a functor. In order to define M ods, we need the following auxiliary definition, which resembles the usual construction of maximal consistent sets of formulae. Definition 13. 
Let Σ ∈ |Sign|, ∆ ⊆ Sen(Σ), and consider {F i } i<ω an enumeration of Sen(Σ) such that for every formula α, its sub-formulae are enumerated before α. Then Cn(∆) is defined as follows: - Cn(∆) = i<ω Cn i (∆) -Cn 0 (∆) = ∆, Cn i+1 (∆) = Cn i (∆) ∪ {F i } , if Cn i (∆) ∪ {F i } is consistent. Cn i (∆) ∪ {¬F i } , otherwise. Given Σ, Γ ∈ |Th 0 |, the functor Mods provide the means for obtaining the category containing the closure of those structures in Str Σ,Γ that represent the closure of the branches in saturated tableaux. Definition 14. Mods : Struct SC → Cat is defined as: Mods( Str Σ,Γ , ∪, ∅ ) = { Σ, Cn( ∆) | (∃α : ∆ → ∅ ∈ ||Str Σ,Γ ||) ( ∆ → ∅ ∈ α ∧ (∀α : ∆ → ∆ ∈ ||Str Σ,Γ ||)(∆ = ∆))} and for all σ : Σ → Σ ∈ |Sign| (and σ : Str Σ,Γ , ∪, ∅ → Str Σ ,Γ , ∪, ∅ ∈ ||Struct SC ||), the following holds: Now, from Lemmas 3, 4, and 5, and considering the hypothesis that I F OL is an institution, the following corollary follows. Mods( σ)( Σ, Cn( ∆) ) = Σ , Cn(Sen(σ)(Cn( ∆))) . Corollary 1. Sign F OL , Sen F OL , Mod F OL , {|= Σ F OL } Σ∈|Sign F OL | , M, Mods, µ is a satisfiability calculus. Another important kind of system used by automatic theorem provers are the so-called resolution methods. Below, we show how any resolution system conforms to the definition of satisfiability calculus. Example 2. [Resolution Method for First-Order Predicate Logic] Let us describe resolution for first-order logic as introduced in [START_REF] Fitting | Tableau methods of proof for modal logics[END_REF]. We use the following notation: [] denotes the empty list; [A] denotes the unitary list containing the formula A; 0 , 1 , . . . are variables ranging over lists; and i + j denotes the concatenation of lists i and j . Resolution builds a list of lists representing a disjunction of conjunctions. The rules for resolution are the following: 0 + [¬¬A] + 1 [¬¬] 0 + [A] + 1 0 + [¬A] + 1 0 + [A] + 1 [¬] 0 + 1 + 0 + 1 0 + [A ∧ A ] + 1 [∧] 0 + [A, A ] + 1 0 + [¬(A ∨ A )] + 1 [¬∧] 0 + [¬A, ¬A ] + 1 0 + [A ∨ A ] + 1 [∨] 0 + [A] + 1 0 + [A ] + 1 0 + [¬(A ∧ A )] + 1 [¬∧] 0 + [¬A] + 1 0 + [¬A ] + 1 0 + [∀x : A(x)] + 1 for any closed term t [∀] 0 + [A[x/t]] + 1 0 + [∃x : A(x)] + 1 for a new constant c [∃] 0 + [A[x/c]] + 1 where A(x) denotes a formula with free variable x, and A[x/t] denotes the formula resulting from replacing variable x by term t everywhere in A. For the sake of simplicity, we assume that lists of formulae do not have repeated elements. A resolution is a sequence of lists of formulae. If a resolution contains an empty list (i.e., []), we say that the resolution is closed; otherwise it is an open resolution. For every signature Σ ∈ |Sign| and each Γ ⊂ Sen(Σ), we denote by Str Σ,Γ the category whose objects are lists of formulae, and where every morphism σ : [A 0 , . . . , A n ] → [A 0 , . . . , A m ] represents a sequence of application of resolution rules for [A 0 , . . . , A m ]. Then, Struct SC is a category whose objects are Str Σ,Γ , for each signature Σ ∈ |Sign| and set of formulae Γ ∈ Sen(Σ), and whose morphisms are of the form σ : Str Σ,Γ → Str Σ ,Γ , obtained by homomorphically extending σ : Σ, Γ → Σ , Γ in ||Th 0 ||. As for the case of Example 1, the functor M : Th 0 → Struct SC is defined as M( Σ, Γ ) = Str Σ,Γ , ∪, ∅ , and Mods : Struct SC → Set is defined as in the previous example. A typical use for the methods involved in the above described examples is the search for counterexamples of a given logical property. 
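To make this use concrete, the following small propositional sketch (our own illustration, not part of the formal development of Examples 1 and 2; the names Form, branches, models and counterexamples are ours) implements the branch expansion informally described above: conjunctive rules extend a branch, disjunctive rules split it, a branch containing a literal together with its negation is closed, and every open saturated branch is read off as a canonical valuation.

```haskell
-- A minimal propositional tableau sketch (illustration only; names are ours).
module MiniTableau where

data Form = Var String | Neg Form | And Form Form | Or Form Form
  deriving (Eq, Show)

-- Expand a set of formulae (kept as a list) into its saturated branches:
-- alpha rules extend the branch, beta rules split it, literals are collected.
branches :: [Form] -> [[Form]]
branches fs = go fs []
  where
    go [] lits = [lits]
    go (f : rest) lits = case f of
      And a b       -> go (a : b : rest) lits
      Neg (Or a b)  -> go (Neg a : Neg b : rest) lits
      Neg (Neg a)   -> go (a : rest) lits
      Or a b        -> go (a : rest) lits ++ go (b : rest) lits
      Neg (And a b) -> go (Neg a : rest) lits ++ go (Neg b : rest) lits
      lit           -> go rest (lit : lits)

-- A branch is closed if it contains a literal and its negation ([false] rule).
closed :: [Form] -> Bool
closed lits = any (\l -> Neg l `elem` lits) lits

-- Each open saturated branch is read as a canonical model: the set of
-- variables occurring positively on it (all other variables are false).
models :: [Form] -> [[String]]
models fs = [ [ v | Var v <- b ] | b <- branches fs, not (closed b) ]

-- Counterexample search: build the tableau for the axioms plus the negated
-- conjecture; any surviving model falsifies the conjecture.
counterexamples :: [Form] -> Form -> [[String]]
counterexamples axioms conjecture = models (Neg conjecture : axioms)
```

An empty result of counterexamples means that every branch closed, i.e. the conjecture follows from the axioms; a non-empty result exhibits the valuations falsifying it.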
For instance, to search for counterexamples of an intended property in the context of the tableaux method, one starts by applying rules to the negation of the property, and once a saturated tableau is obtained, if all the branches are closed, then there is no model of the axioms and the negation of the property, indicating that the latter is a theorem. On the other hand, if there exists an open branch, the limit set of that branch characterizes a class of counterexamples for the formula. Notice the contrast with Hilbert systems, where one starts from the axioms, and then applies deduction rules until the desired formula is obtained. Mapping Satisfiability Calculi In [START_REF] Goguen | Institutions: abstract model theory for specification and programming[END_REF] the original notion of morphism between Institutions was introduced. Meseguer defines the notion of plain map in [START_REF] Meseguer | General logics[END_REF], and in [START_REF] Tarlecki | Moving between logical systems[END_REF] Tarlecki extensively discussed the ways in which different institutions can be related, and how they should be interpreted. More recently, in [START_REF] Goguen | Institution morphisms[END_REF] all these notions of morphism were investigated in detail. In this work we will concentrate only on institution representations (or comorphisms in the terminology introduced by Goguen and Rosu), since this is the notion that we have employed to formalize several concepts arising from software engineering, such as data refinement and dynamic reconfiguration [START_REF] Castro | Towards managing dynamic reconfiguration of software systems in a categorical setting[END_REF][START_REF] Castro | A categorical approach to structuring and promoting Z specifications[END_REF]. The study of other important kinds of functorial relations between satisfiability calculi are left as future work. The following definition is taken from [START_REF] Tarlecki | Moving between logical systems[END_REF], and formalizes the notion of institution representation. M |= γ Sign (Σ) γ Sen Σ (α) iff γ M od Σ (M ) |= Σ α . An institution representation γ : I → I expresses how the "poorer" set of sentences (respectively, category of models) associated with I is encoded in the "richer" one associated with I . This is done by: constructing, for a given I-signature Σ, an I -signature into which Σ can be interpreted, translating, for a given I-signature Σ, the set of Σ-sentences into the corresponding I -sentences, obtaining, for a given I-signature Σ, the category of Σ-models from the corresponding category of Σ -models. The direction of the arrows shows how the whole of I is represented by some parts of I . Institution representations enjoy some interesting properties. For instance, logical consequence is preserved, and, under some conditions, logical consequence is preserved in a conservative way. The interested reader is referred to [START_REF] Tarlecki | Moving between logical systems[END_REF] for further details. In many cases, in particular those in which the class of models of a signature in the source institution is completely axiomatizable in the language of the target one, Definition 16 can easily be extended to map signatures of one institution to theories of another. 
This is done so that the class of models of the richer one can be restricted, by means of the addition of axioms (thus the need for theories in the image of the functor γ Sign ), in order to be exactly the class of models obtained by translating to it the class of models of the corresponding signature of the poorer one. In the same way, when the previously described extension is possible, we can obtain what Meseguer calls a map of institutions [2, definition 27] by reformulating the definition so that the functor between signatures of one institution and theories of the other is γ T h : Th 0 → Th 0 . This has to be γ Sen -sensible (see definition 5) with respect to the entailment systems induced by the institutions I and I . Now, if Σ, Γ ∈ |Th 0 |, then γ T h0 can be defined as follows: γ T h0 ( Σ, Γ ) = γ Sign (Σ), ∆ ∪ γ Sen Σ (Γ ) , where ∆ ⊆ Sen(ρ Sign (Σ)). Then, it is easy to prove that γ T h0 is γ Sen -simple because it is the γ Sen -extension of γ T h0 to theories, thus being γ Sen -sensible. The notion of a map of satisfiability calculi is the natural extension of a map of institutions in order to consider the more material version of the satisfiability relation. In some sense, if a map of institutions provides a means for representing one satisfiability relation in terms of another in a semantics preserving way, the map of satisfiability calculi provides a means for representing a model construction technique in terms of another. This is done by showing how model construction techniques for richer logics express techniques associated with poorer ones. Definition 17. Let S = Sign, Sen, Mod, {|= Σ } Σ∈|Sign| , M, Mods, µ and S = Sign , Sen , Mod , {|= Σ } Σ∈|Sign | , M , Mods , µ be satisfiability calculi. Then, ρ Sign , ρ Sen , ρ M od , γ : S → S is a map of satisfiability calculi if and only if: 1. ρ Sign , ρ Sen , ρ M od : I → I is a map of institutions, and 2. γ : models op • ρ T h0 → models op is a natural transformation such that the following equality holds: Th0 Mod models op A A ρ T h 0 $ $ =⇒ µ Cat = Th0 Mod ! ! ρ T h 0 & & =⇒ ρ M od Cat =⇒ γ = ⇒ µ Th 0 models op M M Th 0 models op N N Mod . . Roughly speaking, the 2-cell equality in the definition says that the translation of saturated tableaux is coherent with respect to the mapping of institutions. Example 3. [Mapping Modal Logic to First-Order Logic] A simple example of a mapping between satisfiability calculi is the mapping between the tableau method for propositional logic, and the one for first-order logic. It is straightforward since the tableau method for first-order logic is an extension of that of propositional logic. Let us introduce a more interesting example. We will map the tableau method for modal logic (as presented by Fitting [START_REF] Fitting | Tableau methods of proof for modal logics[END_REF]) to the first-order predicate logic tableau method. The mapping between the institutions is given by the standard translation from modal logic to first-order logic. Let us recast here the tableau method for the system K of modal logic. Recall that formulae of standard modal logic are built from boolean operators and the "diamond operator" ♦. Intuitively, formula ♦ϕ says that ϕ is possibly true in some alternative state of affairs. The Methods like resolution and tableaux are strongly related to the semantics of a logic. They are often employed to construct models, a characteristic that is missing in purely deductive methods, such as natural deduction or Hilbert systems, as formalized by Meseguer. 
In this paper, we provided an abstract characterization of this class of semantics-based tecniques for logical systems. This was accomplished by introducing a categorical characterization of the notion of satisfiability calculus, which covers logical tools such as tableaux, resolution, Gentzen style sequents, etc. Our new characterization of a logical system, that includes the notion of satisfiability calculus, provides both a proof calculus and a satisfiability calculus, which essentially implement the entailment and satisfaction relations, respectively. There clearly exist connections between these calculi that are worth exploring, especially when the underlying structure used in both definitions is the same (see Example 1). A close analysis of the definitions of proof calculus and satisfiability calculus takes us to observe that the constraints imposed over some elements (e.g., the natural family of functors π Σ,Γ : proofs( Σ, Γ ) → Sen( Σ, Γ ) and µ Σ,Γ : models op ( Σ, Γ ) → Mod( Σ, Γ )) may be too restrictive, and working on generalizations of these concepts is part of our further work. In particular, it is worth noticing that partial implementations of both the entailment relation and the satisfiability relation are gaining visibility in the software engineering community. Examples on the syntactic side are the implementation of less expressive calculi with respect to an entailment, as in the case of the finitary definition of the reflexive and transitive closure in the Kleene algebras with tests [START_REF] Kozen | Kleene algebra with tests[END_REF], the case of the implementation of rewriting tools like Maude [START_REF] Clavel | All About Maude -A High-Performance Logical Framework, How to Specify, Program and Verify Systems in Rewriting Logic[END_REF] as a partial implementation of equational logic, etc. Examples on the semantic side are the bounded model checkers and model finders for undecidable languages, such as Alloy [START_REF] Jackson | Alloy: a lightweight object modelling notation[END_REF] for relational logic, the growing family of SMT-solvers [START_REF] Moura | Satisfiability modulo theories: introduction and applications[END_REF] for languages including arithmetic, etc. Clearly, allowing for partial implementations of entailment/satisfiability relations would enable us to capture the behaviors of some of the above mentioned logical tools. In addition, functorial relations between partial proof calculi (resp., satisfiability calculi) may provide a measure for how good the method is as an approximation of the ideal entailment relation (resp., satisfaction relation). We plan to explore this possibility, as future work. is a family of binary relations, and for any signature morphism σ : Σ → Σ , Σ-sentence φ ∈ Sen(Σ) and Σ -model M ∈ |Mod(Σ)|, the following |=-invariance condition holds: is a functor. Let T ∈ |Th 0 |, then Pr(P(T )) is the set of proofs of T ; the composite functor Pr • P : Th 0 → Set will be denoted by proofs, and π : proofs → Sen is a natural transformation such that for each T = Σ, Γ ∈ |Th 0 | the image of π T : proofs(T ) → Sen(T ) is the set Γ • . The map π T is called the projection from proofs to theorems for the theory T . Lemma 1 . 1 Let Σ ∈ |Sign| and Γ ⊆ Sen(Σ); then Str Σ,Γ , ∪, ∅ , where ∪ : Str Σ,Γ × Str Σ,Γ → Str Σ,Γ is the typical bi-functor on sets and functions, and ∅ is the neutral element for ∪, is a strict monoidal category. Using this definition we can introduce the category of legal tableaux, denoted by Struct SC . Definition 11. 
Struct SC is defined as O, A where O = {Str Σ,Γ | Σ ∈ |Sign| ∧ Γ ⊆ Sen(Σ)}, and A = { σ : Str Σ,Γ → Str Σ ,Γ | σ : Σ, Γ → Σ , Γ ∈ ||Th 0 ||}, the homomorphic extension of the morphisms in ||Th 0 ||. Lemma 2. Struct SC is a category. Lemma 4 .Fact 1 41 Mods is a functor. Finally, the natural transformation µ relates the structures representing saturated tableaux with the model satisfying the set of formulae denoted by the source of the morphism. Definition 15. Let Σ, Γ ∈ |Th 0 |, then we define µ Σ : models op ( Σ, Γ ) → Mod F OL ( Σ, Γ ) as µ Σ ( Σ, ∆ ) = Mod( Σ, ∆ ). Let Σ ∈ |Sign F OL | and Γ ⊆ Sen F OL (Σ). Then µ Σ,Γ is a functor. Lemma 5. µ is a natural transformation. Definition 16 . 16 ([5]) Let I = Sign, Sen, Mod, {|= Σ } Σ∈|Sign| and I = Sign , Sen , Mod , {|= Σ } Σ∈|Sign | be institutions. Then, γ Sign , γ Sen , γ M od : I → I is an institution representation if and only if:γ Sign : Sign → Sign is a functor, γ Sen : Sen → γ Sign • Sen , is a natural transformation, γ M od : (γ Sign ) op • Mod → Mod,is a natural transformation, such that for any Σ ∈ |Sign|, the function γ Sen Σ : Sen(Σ) → Sen (γ Sign (Σ)) and the functor γ M od Σ : Mod (γ Sign (Σ)) → Mod(Σ) preserve the following satisfaction condition: for any α ∈ Sen(Σ) and M ∈ |Mod(γ Sign (Σ))|, Authors' note: Meseguer refers to a logic as a structure that is composed of an entailment system together with an institution, see Def. ∅ is not necessarily the empty set of axioms. This fact will be clarified later on. Notice that the target of functor M, when applied to a theory T , is not necessarily a model, but a structure which, under certain conditions, can be considered a representation of the category of models of T . X ∪ {A ∧ B} [∧] X ∪ {A ∧ B, A, B} X ∪ {A ∨ B} [∨] X ∪ {A ∨ B, A} X ∪ {A ∨ B, B} X ∪ {¬¬A} [¬1] X ∪ {¬¬A, A} X ∪ {A} [¬2] X ∪ {A, ¬¬A} X ∪ {A, ¬A} [false] Sen(Σ) X ∪ {¬(A ∧ B)} [DM1] X ∪ {¬(A ∧ B), ¬A ∨ ¬B} X ∪ {¬(A ∨ B)} [DM2] X ∪ {¬(A ∨ B), ¬A ∧ ¬B} X ∪ {(∀x)P (x)} [∀] X ∪ {(∀x)P (x), P (t)} X ∪ {(∃x)P (x)}[∃] X ∪ {(∃x)P (x), P (c)} Notice that ρ Sign ( {pi}i∈I ) = R, {pi}i∈I , where {pi}i∈I ∈ |Sign K |. Acknowledgements The authors would like to thank the anonymous referees for their helpful comments. This work was partially supported by the Argentinian Agency for Scientific and Technological Promotion (ANPCyT), through grants PICT PAE 2007 No. 2772, PICT 2010 No. 1690, PICT 2010 No. 2611 and PICT 2010 No. 1745, and by the MEALS project (EU FP7 programme, grant agreement No. 295261). The fourth author gratefully acknowledges the support of the National Science and Engineering Research Council of Canada and McMaster University. semantics for modal logic is given by means of Kripke structures. A Kripke structure is a tuple W, R, L , where W is a set of states, R ⊆ W × W is a relation between states, and L : W → 2 AP is a labeling function (AP is a set of atomic propositions). Note that a signature in modal logic is given by a set of propositional letters: {p i } i∈I . The interested reader can consult [START_REF] Blackburn | Modal logic[END_REF]. In [START_REF] Fitting | Tableau methods of proof for modal logics[END_REF] modal formulae are prefixed by labels denoting semantic states. Labeled formulae are then terms of the form : ϕ, where ϕ is a modal formula and is a sequence of natural numbers n 0 , . . . , n k . The relation R between these labels is then defined in the following way: R ≡ ∃n : , n = . 
The new rules are the following: The rules for the propositional connectives are the usual ones, obtained by labeling the formulae with a given label. Notice that labels denote states of a Kripke structure. This is related in some way to the tableau method used for first-order predicate logic. Branches, saturated branches and closed branches are defined in the same way as in Example 1, but considering the relations between sets to be also indexed by the relation used at that point. Thus, must be understood as follows: the set s i+1 is obtained from s i by applying rule τ i to formula α i ∈ s i under the accessibility relation R i . Assume Sign F OL , Sen F OL , M F OL , Mods F OL , {|= Σ F OL } Σ∈|Sign F OL | , µ F OL is the satisfiability calculus for first-order predicate logic, denoted by SC F OL , and Sign K , Sen K , M K , Mods K , {|= Σ K } Σ∈|Sign K |,µ K is the satisfiability calculus for modal logic, denoted by SC K . Consider now the standard translation from modal logic to first-order logic. Therefore, the tuple ρ Sign , ρ Sen , ρ M od is defined as follows [START_REF] Blackburn | Modal logic[END_REF]: Definition 18. ρ Sign : Sign K → Sign F OL is defined as ρ Sign ( {p i } i∈I ) = R, {p i } i∈I by mapping each propositional variable p i , for all i ∈ I, to a firstorder unary logic predicate p i , and adding a binary predicate R, and ρ Sign (σ : R , and p i to p i for all i ∈ I. Lemma 6. ρ Sign is a functor. α) where: for all M = S, R, The proof that this is a mapping between institutions relies on the correctness of the translation presented in [START_REF] Blackburn | Modal logic[END_REF]. Using this map we can define a mapping between the corresponding satisfiability calculi. The natural transformation: γ : ρ T h0 • models op → models op is defined as follows. is defined as: Finally, the following lemma prove the equivalence of the two cells shown in Definition 17. This means that building a tableau using the first-order rules for the translation of a modal theory, then obtaining the corresponding canonical model in modal logic using γ, and therefore obtaining the class of models by using µ, is exactly the same as obtaining the first-order models by µ and then the corresponding modal models by using ρ M od .
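Example 3 rests on the standard translation from modal logic to first-order logic. The following sketch is our own rendering of the sentence translation only (the datatype and function names are ours, and the scheme for generating world variables is a simplifying assumption); the signature and model translations of Definition 18 are not shown.

```haskell
-- The standard translation of modal formulae into first-order logic
-- (our rendering; constructor names and the world-variable scheme are ours).
module StandardTranslation where

data Modal
  = P String          -- propositional letter p_i
  | MNot Modal
  | MAnd Modal Modal
  | Dia Modal         -- the diamond operator
  deriving Show

data FOL
  = Pred String String  -- unary predicate p_i applied to a world variable
  | R String String     -- the accessibility relation
  | FNot FOL
  | FAnd FOL FOL
  | Exists String FOL
  deriving Show

-- ST_x(phi): translate relative to the world variable x; the counter n picks
-- a bound variable name for each diamond (sibling diamonds may reuse a name,
-- but each occurrence is freshly bound, so no capture arises).
st :: String -> Int -> Modal -> FOL
st x _ (P p)      = Pred p x
st x n (MNot f)   = FNot (st x n f)
st x n (MAnd f g) = FAnd (st x n f) (st x n g)
st x n (Dia f)    = let y = "w" ++ show n
                    in Exists y (FAnd (R x y) (st y (n + 1) f))

-- Translation at the designated world w0, e.g.
--   standardTranslation (Dia (P "p"))
--     == Exists "w1" (FAnd (R "w0" "w1") (Pred "p" "w1"))
standardTranslation :: Modal -> FOL
standardTranslation = st "w0" 1
```

The diamond case introduces an explicit accessibility step via R, which is precisely the clause that makes the first-order tableau rules unfold the Kripke semantics when applied to a translated modal theory.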
40,467
[ "1003760", "1003761", "1003762", "977624" ]
[ "131288", "92878", "488156", "92878", "488156", "92878", "64587" ]
01485971
en
[ "info" ]
2024/03/04 23:41:48
2012
https://inria.hal.science/hal-01485971/file/978-3-642-37635-1_13_Chapter.pdf
Till Mossakowski Oliver Kutz Christoph Lange Semantics of the Distributed Ontology Language: Institutes and Institutions The Distributed Ontology Language (DOL) is a recent development within the ISO standardisation initiative 17347 Ontology Integration and Interoperability (OntoIOp). In DOL, heterogeneous and distributed ontologies can be expressed, i.e. ontologies that are made up of parts written in ontology languages based on various logics. In order to make the DOL meta-language and its semantics more easily accessible to the wider ontology community, we have developed a notion of institute which are like institutions but with signature partial orders and based on standard set-theoretic semantics rather than category theory. We give an institute-based semantics for the kernel of DOL and show that this is compatible with institutional semantics. Moreover, as it turns out, beyond their greater simplicity, institutes have some further surprising advantages over institutions. Introduction OWL is a popular language for ontologies. Yet, the restriction to a decidable description logic often hinders ontology designers from expressing knowledge that cannot (or can only in quite complicated ways) be expressed in a description logic. A current practice to deal with this problem is to intersperse OWL ontologies with first-order axioms in the comments or annotate them as having temporal behaviour [START_REF] Smith | Relations in biomedical ontologies[END_REF][START_REF] Beisswanger | BioTop: An upper domain ontology for the life sciences -a description of its current structure, contents, and interfaces to OBO ontologies[END_REF], e.g. in the case of bio-ontologies where mereological relations such as parthood are of great importance, though not definable in OWL. However, these remain informal annotations to inform the human designer, rather than first-class citizens of the ontology with formal semantics, and will therefore unfortunately be ignored by tools with no impact on reasoning. Moreover, foundational ontologies such as DOLCE, BFO or SUMO use full first-order logic or even first-order modal logic. A variety of languages is used for formalising ontologies. 4 Some of these, such as RDF, OBO and UML, can be seen more or less as fragments and notational variants of OWL, while others, such as F-logic and Common Logic (CL), clearly go beyond the expressiveness of OWL. This situation has motivated the Distributed Ontology Language (DOL), a language currently under active development within the ISO standard 17347 Ontology Integration and Interoperability (OntoIOp). In DOL, heterogeneous and distributed ontologies can be expressed. At the heart of this approach is a graph of ontology languages and translations [START_REF] Mossakowski | The Onto-Logical Translation Graph[END_REF], shown in Fig. 1. What is the semantics of DOL? Previous presentations of the semantics of heterogeneous logical theories [START_REF] Tarlecki | Towards heterogeneous specifications[END_REF][START_REF] Diaconescu | Grothendieck institutions[END_REF][START_REF] Mossakowski | Heterogeneous logical environments for distributed specifications[END_REF][START_REF] Kutz | Carnap, Goguen, and the Hyperontologies: Logical Pluralism and Heterogeneous Structuring in Ontology Design[END_REF][START_REF] Mossakowski | The Onto-Logical Translation Graph[END_REF] relied heavily on the theory of institutions [START_REF] Goguen | Institutions: Abstract model theory for specification and programming[END_REF]. 
The central insight of the theory of institutions is that logical notions such as model, sentence, satisfaction and derivability should be indexed over signatures (vocabularies). In order to abstract from any specific form of signature, category theory is used: nothing more is assumed about signatures other than that (together with suitable signature morphisms) they form a category. However, the use of category theory diminishes the set of potential readers: "Mathematicians, and even logicians, have not shown much interest in the theory of institutions, perhaps because their tendency toward Platonism inclines them to believe that there is just one true logic and model theory; it also doesn't much help that institutions use category theory extensively." (J. Goguen and G. Roşu in [START_REF] Goguen | Institution morphisms[END_REF], our emphasis) Indeed, during the extensive discussions within the ISO standardisation committee in TC37/SC3 to find an agreement concerning the right semantics for the DOL language, we (a) encountered strong reservations to base the semantics entirely on the institutional approach in order not to severely limit DOL's potential adoption by users, and (b) realised that a large kernel of the DOL language can be based on a simpler, category-free semantics. The compromise that was found within OntoIOp therefore adopted a twolayered approach: (i) it bases the semantics of a large part of DOL on a simplification of the notion of institutions, namely the institute-based approach presented in this paper that relies purely on standard set-theoretic semantics, and (ii) allows an elegant addition of additional features that do require a full institution-based approach. Indeed, it turned out that the majority of work in the ontology community either disregards signature morphisms altogether, or uses only signature inclusions. The latter are particularly important for the notion of ontology module, which is essentially based on the notion of conservative extension along an inclusion signature morphisms, and related notions like inseparability and uniform interpolation (see also Def. 5 below). Another use case for signature inclusions are theory interpretations, which are used in the COLORE repository of (first-order) Common Logic ontologies. Indeed, COLORE uses the technique of extending the target of a theory interpretation by suitable definitions of the symbols in the source. The main motivation for this is probably the avoidance of derived signature morphisms; as a by-product, also renamings of symbols are avoided. There are only rare cases where signature morphisms are needed in their full generality: the renaming of ontologies, which so far has only been used for combinations of ontologies by colimits. Only here, the full institution-based approach is needed. However, only relatively few papers are explicitly concerned with colimits of ontologies. 5Another motivation for our work is the line of signature-free thinking in logic and ontology research; for example, the ISO/IEC standard 24707:2007 Common Logic [START_REF]Common Logic: Abstract syntax and semantics[END_REF] names its signature-free approach to sentence formation a chief novel feature: "Common Logic has some novel features, chief among them being a syntax which is signature-free . . . 
" [START_REF]Common Logic: Abstract syntax and semantics[END_REF] Likewise, many abstract studies of consequence and satisfaction systems [START_REF] Gentzen | Investigations into logical deduction[END_REF][START_REF] Scott | Rules and derived rules[END_REF][START_REF] Avron | Simple consequence relations[END_REF][START_REF] Carnielli | Analysis and synthesis of logics: how to cut and paste reasoning systems[END_REF] disregard signatures. Hence, we base our semantics on the newly introduced notion of institutes. These start with the signature-free approach, and then introduce signatures a posteriori, assuming that they form a partial order. While this approach covers only signature inclusions, not renamings, it is much simpler than the category-based approach of institutions. Of course, for features like colimits, full institution theory is needed. We therefore show that institutes and institutions can be integrated smoothly. Institutes: Semantics for a DOL Kernel The notion of institute follows the insight that central to a model-theoretic view on logic is the notion of satisfaction of sentences in models. We also follow the insight of institution theory that signatures are essential to control the vocabulary of symbols used in sentences and models. However, in many logic textbooks as well as in the Common Logic standard [START_REF]Common Logic: Abstract syntax and semantics[END_REF], sentences are defined independently of a specific signature, while models always interpret a given signature. The notion of institute reflects this common practice. Note that the satisfaction relation can only meaningfully be defined between models and sentences where the model interprets all the symbols occurring in the sentence; this is reflected in the fact that we define satisfaction per signature. We also require a partial order on models; this is needed for minimisation in the sense of circumscription. Moreover, we realise the goal of avoiding the use of category theory by relying on partial orders of signatures as the best possible approximation of signature categories. This also corresponds to common practice in logic, where signature extensions and (reducts against these) are considered much more often than signature morphisms. Definition 1 (Institutes). An institute I = (Sen, Sign, ≤, sig, Mod, |=, .| . ) consists of a class Sen of sentences; a partially ordered class (Sign, ≤) of signatures (which are arbitrary sets); a function sig : Sen → Sign, giving the (minimal) signature of a sentence (then for each signature Σ , let Sen(Σ ) = {ϕ ∈ Sen | sig(ϕ) ≤ Σ }); for each signature Σ , a partially ordered class Mod(Σ ) of Σ -models; for each signature Σ , a satisfaction relation |= Σ ⊆ Mod(Σ ) × Sen(Σ ); -for any Σ 2 -model M, a Σ 1 -model M| Σ 1 (called the reduct), provided that Σ 1 ≤ Σ 2 , such that the following properties hold: -given Σ 1 ≤ Σ 2 , for any Σ 2 -model M and any Σ 1 -sentence ϕ M |= ϕ iff M| Σ 1 |= ϕ (satisfaction is invariant under reduct), -for any Σ -model M, given Σ 1 ≤ Σ 2 ≤ Σ , (M| Σ 2 )| Σ 1 = M| Σ 1 (reducts are compositional), and for any Σ -models M 1 ≤ M 2 , if Σ ≤ Σ , then M 1 | Σ ≤ M 2 | Σ (reducts preserve the model ordering). We give two examples illustrating these definitions, by phrasing the description logic ALC and Common Logic CL in institute style: Example 2 (Description Logics ALC). 
An institute for ALC is defined as follows: sentences are subsumption relations C 1 C 2 between concepts, where concepts follow the grammar C ::= A | | ⊥ |C 1 C 2 |C 1 C 2 | ¬C | ∀R.C | ∃R.C Here, A stands for atomic concepts. Such sentences are also called TBox sentences. Sentences can also be ABox sentences, which are membership assertions of individuals in concepts (written a : C, where a is an individual constant) or pairs of individuals in roles (written R(a, b), where R is a role, and a, b are individual constants). Signatures consist of a set A of atomic concepts, a set R of roles and a set I of individual constants. The ordering on signatures is component-wise inclusion. For a sentence ϕ, sig(ϕ) contains all symbols occurring in ϕ. Σ -models consist of a non-empty set ∆ , the universe, and an element of ∆ for each individual constant in Σ , a unary relation over ∆ for each concept in Σ , and a binary relation over ∆ for each role in Σ . The partial order on models is defined as coincidence of the universe and the interpretation of individual constants plus subset inclusion for the interpretation of concepts and roles. Reducts just forget the respective components of models. Satisfaction is the standard satisfaction of description logics. An extension of ALC named SROIQ [START_REF] Horrocks | The Even More Irresistible SROIQ[END_REF] is the logical core of the Web Ontology Language OWL 2 DL 6 . Example 3 (Common Logic -CL). Common Logic (CL) has first been formalised as an institution in [START_REF] Kutz | Carnap, Goguen, and the Hyperontologies: Logical Pluralism and Heterogeneous Structuring in Ontology Design[END_REF]. We here formalise it as an institute. A CL-sentence is a first-order sentence, where predications and function applications are written in a higher-order like syntax as t(s). Here, t is an arbitrary term, and s is a sequence term, which can be a sequence of terms t 1 . . .t n , or a sequence marker. However, a predication t(s) is interpreted like the first-order formula holds(t, s), and a function application t(s) like the first-order term app(t, s), where holds and app are fictitious symbols (denoting the semantic objects rel and fun defined in models below). In this way, CL provides a first-order simulation of a higher-order language. Quantification variables are partitioned into those for individuals and those for sequences. A CL signature Σ (called vocabulary in CL terminology) consists of a set of names, with a subset called the set of discourse names, and a set of sequence markers. The partial order on signatures is componentwise inclusion with the requirement that the a name is a discourse name in the smaller signature if and only if is in the larger signature. sig obviously collects the names and sequence markers present in a sentence. A Σ -model consists of a set UR, the universe of reference, with a non-empty subset UD ⊆ UR, the universe of discourse, and four mappings: rel from UR to subsets of UD * = {< x 1 , . . . , x n > |x 1 , . . . , x n ∈ UD} (i.e., the set of finite sequences of elements of UD); fun from UR to total functions from UD * into UD; -int from names in Σ to UR, such that int(v) is in UD if and only if v is a discourse name; -seq from sequence markers in Σ to UD * . The partial order on models is defined as M 1 ≤ M 2 iff M 1 and M 2 agree on all components except perhaps rel, where we require rel 1 (x) ⊆ rel 2 (x) for all x ∈ UR 1 = UR 2 . 
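Both ALC and CL instantiate one and the same small interface. As an aside, the following sketch (our own rendering, not part of the standard; the record and field names are ours, classes of signatures and models are cut down to plain Haskell types, and the partial order on models is omitted) shows the shape of Definition 1 together with a toy propositional instance.

```haskell
-- The institute interface of Definition 1, cut down to plain Haskell types
-- (our rendering; record and field names are ours, the partial order on
-- models and all size issues are ignored).
module InstituteSketch where

import qualified Data.Set as Set
import Data.Set (Set)

data Institute sig sen model = Institute
  { leq       :: sig -> sig -> Bool           -- partial order on signatures
  , sigOf     :: sen -> sig                   -- minimal signature of a sentence
  , satisfies :: sig -> model -> sen -> Bool  -- satisfaction, per signature
  , reduct    :: sig -> model -> model        -- reduct to a smaller signature
  }

-- Toy instance: propositional logic. Signatures are sets of variable names,
-- sentences are formulae, and a model is the set of variables made true.
data Prop = V String | PNot Prop | PAnd Prop Prop deriving Show

vars :: Prop -> Set String
vars (V x)      = Set.singleton x
vars (PNot p)   = vars p
vars (PAnd p q) = vars p `Set.union` vars q

eval :: Set String -> Prop -> Bool
eval m (V x)      = x `Set.member` m
eval m (PNot p)   = not (eval m p)
eval m (PAnd p q) = eval m p && eval m q

propositional :: Institute (Set String) Prop (Set String)
propositional = Institute
  { leq       = Set.isSubsetOf
  , sigOf     = vars
  , satisfies = \_sigma m phi -> eval m phi
  , reduct    = \sigma m -> m `Set.intersection` sigma
  }
-- Satisfaction is invariant under reduct here: eval only inspects the
-- variables of the sentence, and these lie inside the smaller signature.
```

CL fills the same slots with the richer data of this example; its reducts are described next.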
Model reducts leave UR, UD, rel and fun untouched, while int and seq are restricted to the smaller signature. Interpretation of terms and formulae is as in first-order logic, with the difference that the terms at predicate resp. function symbol positions are interpreted with rel resp. fun in order to obtain the predicate resp. function, as discussed above. A further difference is the presence of sequence terms (namely sequence markers and juxtapositions of terms), which denote sequences in UD * , with term juxtaposition interpreted by sequence concatenation. Note that sequences are essentially a second-order feature. For details, see [START_REF]Common Logic: Abstract syntax and semantics[END_REF]. Working within an arbitrary but fixed Institute Like with institutions, many logical notions can be formulated in an arbitrary but fixed institute. However, institutes are more natural for certain notions used in the ontology community. The notions of 'theory' and 'model class' in an institute are defined as follows: Definition 4 (Theories and Model Classes). A theory T = (Σ ,Γ ) in an institute I consists of a signature Σ and a set of sentences Γ ⊆ Sen(Σ ). Theories can be partially ordered by letting (Σ 1 ,Γ 1 ) ≤ (Σ 2 ,Γ 2 ) iff Σ 1 ≤ Σ 2 and Γ 1 ⊆ Γ 2 . The class of models Mod(Σ ,Γ ) is defined as the class of those Σ -models satisfying Γ . This data is easily seen to form an institute I th of theories in I (with theories as "signatures"). The following definition is taken directly from [START_REF] Lutz | Deciding inseparability and conservative extensions in the description logic EL[END_REF], 7 showing that central notions from the ontology modules community can be seamlessly formulated in an arbitrary institute: Definition 5 (Entailment, inseparability, conservative extension). - A theory T 1 Σ -entails T 2 , written T 1 T 2 , if T 2 |= ϕ implies T 1 |= ϕ for all sentences ϕ with sig(ϕ) ≤ Σ ; -T 1 and T 2 are Σ -inseparable if T 1 Σ -entails T 2 and T 2 Σ -entails T 1 ; -T 2 is a Σ -conservative extension of T 1 if T 2 ≥ T 1 and T 1 and T 2 are Σ -inseparable; -T 2 is a conservative extension of T 1 if T 2 is a Σ -conservative extension of T 2 with Σ = sig(T 1 ). Note the use of sig here directly conforms to institute parlance. In contrast, since there is no global set of sentences in institutions, one would need to completely reformulate the definition for the institution representation and fiddle with explicit sentence translations. From time to time, we will need the notion of 'unions of signatures': Definition 6 (Signature unions). A Signature union is a supremum (least upper bound) in the signature partial order. Note that signature unions need not always exist, nor be unique. In either of these cases, the enclosing construct containing the union is undefined. Institute Morphisms and Comorphisms Institute morphisms and comorphisms relate two given institutes. A typical situation is that an institute morphism expresses the fact that a "larger" institute is built upon a "smaller" institute by projecting the "larger" institute onto the "smaller" one. Somewhat dually to institute morphisms, institute comorphisms allow to express the fact that one institute is included in another one. (Co)morphisms play an essential role for DOL: the DOL semantics is parametrised over a graph of institutes and institute morphisms and comorphisms. The formal definitions are as follows: Definition 7 (Institute morphism). Given I 1 = (Sen 1 , Sign 1 , ≤ 1 , sig 1 , Mod 1 , |= 1 , .| . 
) and I 2 = (Sen 2 , Sign 2 , ≤ 2 , sig 2 , Mod 2 , |= 2 , .| . ), an institute morphism ρ = (Φ, α, β ) : I 1 -→ I 2 consists of -a monotone map Φ : (Sign 1 , ≤ 1 ) → (Sign 2 , ≤ 2 ), a sentence translation function α : Sen 2 -→ Sen 1 , and for each I 1 -signature Σ , a monotone model translation function β Σ : Mod 1 (Σ ) → Mod 2 (Φ(Σ )), such that -M 1 |= 1 α(ϕ 2 ) if and only if β Σ (M 1 ) |= 2 ϕ 2 holds for each I 1 -signature Σ , each model M 1 ∈ Mod 1 (Σ ) and each sentence ϕ 2 ∈ Sen 2 (Σ ) (satisfaction condition) -Φ(sig 1 (α(ϕ 2 ))) ≤ sig 2 (ϕ 2 ) for any sentence ϕ 2 ∈ Sen 2 (sentence coherence); -model translation commutes with reduct, that is, given Σ 1 ≤ Σ 2 in I 1 and a Σ 2 - model M, β Σ 2 (M)| Φ(Σ 1 ) = β Σ 1 (M| Σ 1 ). The dual notion of institute comorphism is then defined as: Definition 8 (Institute comorphism). Given I 1 = (Sen 1 , Sign 1 , ≤ 1 , sig 1 , Mod 1 , |= 1 , .| . ) and I 2 = (Sen 2 , Sign 2 , ≤ 2 , sig 2 , Mod 2 , |= 2 , .| . ), an institute comorphism ρ = (Φ, α, β ) : I 1 -→ I 2 consists of -a monotone map Φ : (Sign 1 , ≤ 1 ) → (Sign 2 , ≤ 2 ), a sentence translation function α : Sen 1 -→ Sen 2 , and for each I 1 -signature Σ , a monotone model translation function β Σ : Mod 2 (Φ(Σ )) → Mod 1 (Σ ), such that -M 2 |= 2 α(ϕ 1 ) if and only if β Σ (M 2 ) |= 1 ϕ 1 holds for each I 1 -signature Σ , each model M 2 ∈ Mod 2 (Σ ) and each sentence ϕ 1 ∈ Sen 1 (Σ ) (satisfaction condition) -sig 2 (α(ϕ 1 )) ≤ Φ(sig 1 (ϕ 1 )) for any sentence ϕ 1 ∈ Sen 1 (sentence coherence); -model translation commutes with reduct, that is, given Σ 1 ≤ Σ 2 in I 1 and a Φ(Σ 2 )- model M in I 2 , β Σ 2 (M)| Σ 1 = β Σ 1 (M| Φ(Σ 1 ) ). Some important properties of institution (co-)morphisms will be needed in the technical development below: Definition 9 (Model-expansive, (weakly) exact, (weak) amalgamation). An institute comorphism is model-expansive, if each β Σ is surjective. It is easy to show that model-expansive comorphisms faithfully encode logical consequence, that is, Γ |= ϕ iff α(Γ ) |= α(ϕ). An institute comorphism ρ = (Φ, α, β ) : I 1 -→ I 2 is (weakly) exact, if for each signature extension Σ 1 ≤ Σ 2 the diagram Mod I 1 (Σ 2 ) .| Σ 1 Mod I 2 (Φ(Σ 2 )) .| Φ(Σ 1 ) β Σ 2 o o Mod I 1 (Σ 1 ) Mod I 2 (Φ(Σ 1 )) β Σ 1 o o admits (weak) amalgamation, i.e. for any M 2 ∈ Mod I (Σ 2 ) and M 1 ∈ Mod J (Φ(Σ 1 )) with M 2 | Σ 1 = β Σ 1 (M 1 ), there is a (not necessarily unique) M 2 ∈ Mod J (Φ(Σ 2 )) with β Σ 2 (M 2 ) = M 2 and M 2 | Φ(Σ 1 ) = M 1 . Given these definitions, a simple theoroidal institute comorphism ρ : I 1 -→ I 2 is an ordinary institute comorphism ρ : I 1 -→ I th 2 (for I th 2 , see Def. 4). Moreover, an institute comorphism is said to be model-isomorphic if β Σ is an isomorphism. It is a subinstitute comorphism (cf. also [START_REF] Meseguer | General logics[END_REF]), if moreover the signature translation is an embedding and sentence translation is injective. The intuition is that theories should be embedded, while models should be represented exactly (such that model-theoretic results carry over). A DOL Kernel and Its Semantics The Distributed Ontology Language (DOL) shares many features with the language HetCASL [START_REF] Mossakowski | HetCASL -Heterogeneous Specification[END_REF] which underlies the Heterogeneous Tool Set Hets [START_REF] Mossakowski | The Heterogeneous Tool Set[END_REF]. 
However, it also adds a number of new features: minimisation of models following the circumscription paradigm [START_REF] Mccarthy | Circumscription -A Form of Non-Monotonic Reasoning[END_REF][START_REF] Lifschitz | Circumscription[END_REF]; ontology module extraction, i.e. the extraction of a subtheory that contains all relevant logical information w.r.t. some subsignature [START_REF] Konev | Formal properties of modularization[END_REF]; projections of theories to a sublogic; ontology alignments, which involve partial or even relational variants of signature morphisms [START_REF] David | François Scharffe, and[END_REF]; combination of theories via colimits, which has been used to formalise certain forms of ontology alignment [START_REF] Zimmermann | Formalizing Ontology Alignment and its Operations with Category Theory[END_REF][START_REF] Kutz | Chinese Whispers and Connected Alignments[END_REF]; referencing of all items by URLs, or, more general, IRIs [START_REF] Lange | LoLa: A Modular Ontology of Logics, Languages, and Translations[END_REF]. Sannella and Tarlecki [START_REF] Sannella | Specifications in an arbitrary institution[END_REF][START_REF] Sannella | Foundations of Algebraic Specification and Formal Software Development[END_REF] show that the structuring of logical theories (specifications) can be defined independently of the underlying logical system. They define a kernel language for structured specification that can be interpreted over an arbitrary institution. Similar to [START_REF] Sannella | Specifications in an arbitrary institution[END_REF] and also integrating heterogeneous constructs from [START_REF] Tarlecki | Towards heterogeneous specifications[END_REF][START_REF] Mossakowski | Heterogeneous logical environments for distributed specifications[END_REF], we now introduce a kernel language for heterogeneous structured specifications for DOL. We will use the term "structured ontology" instead of "structured specification" to stress the intended use for DOL. Since DOL involves not only one, but possibly several ontology languages, we need to introduce the notion of a 'heterogeneous logical environment'. Definition 10 (Heterogeneous logical environment). A heterogeneous logical environment is defined to be a graph of institutes and institute morphisms and (possibly simple theoroidal) comorphisms, where we assume that some of the comorphisms (including all obvious identity comorphisms) are marked as default inclusions. The default inclusions are assumed to form a partial order on the institutes of the logic graph. If I 1 ≤ I 2 , the default inclusion is denoted by ι : I 1 -→ I 2 . For any pair of institutes I 1 and I 2 , if their supremum exists, we denote it by I 1 ∪ I 2 , and the corresponding default inclusions by ι i : I i -→ I 1 ∪ I 2 . We are now ready for the definition of heterogeneous structured ontology. Definition 11 (Heterogeneous structured ontology -DOL kernel language). Let a heterogeneous logical environment be given. We inductively define the notion of heterogeneous structured ontology (in the sequel: ontology). Simultaneously, we define functions Ins, Sig and Mod yielding the institute, the signature and the model class of such an ontology. Let O be an ontology with institute I and signature Σ and let Σ min , Σ fixed be subsignatures of Σ such that Σ min ∪ Σ fixed is defined. 
Intuitively, the interpretation of the symbols in Σ min will be minimised among those models interpreting the symbols in Σ fixed in the same way, while the interpretation of all symbols outside Σ min ∪ Σ fixed may vary arbitrarily. Then O minimize Σ min , Σ fixed is an ontology with: The full DOL language adds further language constructs that can be expressed in terms of this kernel language. Furthermore, DOL allows the omission of translations along default inclusion comorphisms, since these can be reconstructed in a unique way. Ins(O minimize Σ min , Σ fixed ) := I Sig(O minimize Σ min , Σ fixed ) := Σ Mod(O minimize Σ min , Σ fixed ) := {M ∈ Mod(O) | M| Σ min ∪Σ fixed is minimal in Fix(M)} where Fix(M) = {M ∈ Mod(O)| Σ min ∪Σ fixed | M | Σ fixed = M| Σ fixed } Logical consequence. We say that a sentence ϕ is a logical consequence of a heterogeneous structured ontology O, written O |= ϕ, if any model of O satisfies ϕ. Monotonicity. Similar to [START_REF] Sannella | Foundations of Algebraic Specification and Formal Software Development[END_REF], Ex. 5.1.4, we get: Proposition 12. All structuring operations of the DOL kernel language except minimisation are monotone in the sense that they preserve model class inclusion: Mod(O 1 ) ⊆ Mod(O 2 ) implies Mod(op(O 1 )) ⊆ Mod(op(O 2 )). (Union is monotone in both argu- ments.) Indeed, the minimisation is a deliberate exception: its motivation is to capture nonmonotonic reasoning. Proposition 13. If reducts are surjective, minimize is anti-monotone in Σ min . Proof. Let O be an ontology with institute I and signature Σ and let Σ 1 min , Σ 2 min , Σ fixed ⊆ Σ be subsignatures such that Σ 1 min ≤ Σ 2 min , and Σ i min ∪ Σ fixed is defined for i = 1, 2. Let Fix 1 and Fix 2 defined as Fix above, using Σ [START_REF] Lutz | Conservative Extensions in Expressive Description Logics[END_REF] for an example from description logic, and see [START_REF] Kutz | Conservativity in Structured Ontologies[END_REF] for more general conservativity preservation results. We first considered to integrate a module extraction operator into the kernel language of heterogeneous structured ontologies. However, there are so many different notions of ontology module and techniques of module extraction used in the literature that we would have to define a whole collection of module extraction operators, a collection that moreover would quickly become obsolete and incomplete. We refrained from this, and instead provide a relation between heterogeneous structured ontologies that is independent of the specificities of particular module extraction operators. Still, it is possible to define all the relevant notions used in the ontology modules community within an arbitrary institute, namely the notions of conservative extension, inseparability, uniform interpolant etc. The reason is that these notions typically are defined in set-theoretic parlance about signatures (see Def. 5). The full DOL language is based on the DOL kernel and also includes a construct for colimits (which is omitted here, because its semantics requires institutions) and ontology alignments (which are omitted here, because they do not have a model-theoretic semantics). The full DOL language is detailed in the current OntoIOp ISO 17347 working draft, see ontoiop.org. There, also an alternative semantics to the above direct set-theoretic semantics is given: a translational semantics. 
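The minimisation clause can be made concrete in a small propositional setting. The following sketch is only our own toy reading of it (names are ours): models are finite sets of true variables ordered by inclusion, an ontology is given extensionally by a finite list of models, and minimize keeps exactly those models whose reduct to Σ min ∪ Σ fixed is minimal among the reducts of models agreeing with it on Σ fixed.

```haskell
-- A toy, extensional reading of the minimize clause (our illustration only):
-- models are finite sets of true variables ordered by inclusion, and an
-- ontology is represented by the finite list of its models.
module MinimizeSketch where

import qualified Data.Set as Set
import Data.Set (Set)

type Sig   = Set String
type Model = Set String      -- the variables interpreted as true

reduct :: Sig -> Model -> Model
reduct sigma m = m `Set.intersection` sigma

-- Mod(O minimize Sigma_min, Sigma_fixed): keep those models of O whose reduct
-- to Sigma_min ∪ Sigma_fixed is minimal among the reducts of models of O
-- agreeing with it on Sigma_fixed (the set Fix(M) of the definition).
minimize :: Sig -> Sig -> [Model] -> [Model]
minimize sMin sFixed mods = filter minimalHere mods
  where
    s = sMin `Set.union` sFixed
    minimalHere m =
      let fix = [ reduct s m' | m' <- mods
                              , reduct sFixed m' == reduct sFixed m ]
      in  all (\r -> not (r `Set.isProperSubsetOf` reduct s m)) fix

-- e.g. minimize (Set.fromList ["p"]) Set.empty
--        (map Set.fromList [["p"], [], ["p","q"]])
--      keeps only the model in which p is false: the second (empty) model
--      survives and the two others are discarded.
```

This is just the direct, set-theoretic reading of minimisation; the translational semantics mentioned above instead routes everything through Common Logic, as discussed next.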
It assumes that all involved institutes can be translated to Common Logic, and gives the semantics of an arbitrary ontology by translation to Common Logic (and then using the above direct semantics). The two semantics are compatible, see [START_REF] Mossakowski | Three Semantics for the Core of the Distributed Ontology Language[END_REF] for details. However, the translational semantics has some important drawbacks. In particular, the semantics of ontology modules (relying on the notion of conservative extension) is not always preserved when translating to Common Logic. See [START_REF] Mossakowski | Three Semantics for the Core of the Distributed Ontology Language[END_REF] for details. An Example in DOL As an example of a heterogeneous ontology in DOL, we formalise some notions of mereology. Propositional logic is not capable of describing mereological relations, but of describing the basic categories over which the DOLCE foundational ontology [START_REF] Masolo | Ontology library[END_REF] defines mereological relations. The same knowledge can be formalised more conveniently in OWL, which additionally allows for describing (not defining!) basic parthood properties. As our OWL ontology redeclares as classes the same categories that the propositional logic ontology Taxonomy had introduced as propositional variables, using different names but satisfying the same disjointness and subsumption axioms, we observe that it interprets the former. Mereological relations are frequently used in lightweight OWL ontologies, e.g. biomedical ontologies in the EL profile (designed for efficient reasoning with a large number of entities, a frequent case in this domain), but these languages are not fully capable of defining these relations. Therefore, we finally provide a full definition of several mereological relations in first order logic, in the Common Logic language, by importing, translating and extending the OWL ontology. We use Common Logic's second-order facility of quantifying over predicates to concisely express the restriction of the variables x, y, and z to the same taxonomic category. such that for each σ : Σ -→ Σ in Sign the following satisfaction condition holds: ( ) M |= Σ σ (ϕ) iff M | σ |= Σ ϕ for each M ∈ |Mod(Σ )| and ϕ ∈ Sen(Σ ), expressing that truth is invariant under change of notation and context. 10With institutions, a few more features of DOL can be equipped with a semantics: renamings along signature morphisms [START_REF] Sannella | Specifications in an arbitrary institution[END_REF], combinations (colimits), and monomorphic extensions. Due to the central role of inclusions of signatures for institutes, we also need to recall the notion of inclusive institution. Definition 15 ([31]). An inclusive category is a category having a broad subcategory which is a partially ordered class. An inclusive institution is one with an inclusive signature category such that the sentence functor preserves inclusions. We additionally require that such institutions have inclusive model categories, have signature intersections (i.e. binary infima), which are preserved by Sen, 11 and have well-founded sentences, which means that there is no sentence that occurs in all sets of an infinite chain of strict inclusions . . . → Sen(Σ n ) → . . . → Sen(Σ 1 ) → Sen(Σ o ) that is the image (under Sen) of a corresponding chain of signature inclusions. Definition 16. 
Given institutions I and J, an institution morphism [START_REF] Goguen | Institutions: Abstract model theory for specification and programming[END_REF] written µ = (Φ, α, β ) : I -→ J consists of a functor Φ : Sign I -→ Sign J , a natural transformation α : Sen J • Φ -→ Sen I and a natural transformation β : Mod I -→ Mod J • Φ op , such that the following satisfaction condition holds for all Σ ∈ Sign I , M ∈ Mod I (Σ ) and ϕ ∈ Sen J (Φ(Σ )): M |= I Σ α Σ (ϕ ) iff β Σ (M) |= J Φ(Σ ) ϕ Definition 17. Given institutions I and J, an institution comorphism [START_REF] Goguen | Institution morphisms[END_REF] denoted as ρ = (Φ, α, β ) : I -→ J consists of a functor Φ : Sign I -→ Sign J , a natural transformation α : Sen I -→ Sen J • Φ, a natural transformation β : Mod J • Φ op -→ Mod I such that the following satisfaction condition holds for all Σ ∈ Sign I , M ∈ Mod J (Φ(Σ )) and ϕ ∈ Sen I (Σ ): M |= J Φ(Σ ) α Σ (ϕ) iff β Σ (M ) |= I Σ ϕ. Let InclIns (CoInclIns) denote the quasicategory of inclusive institutions and morphisms (comorphisms). Furthermore, let Class denote the quasicategory of classes and functions. Note that (class-indexed) colimits of sets in Class can be constructed in the same way as in Set. Finally, call an institute locally small, if each Sen(Σ ) is a set. Let Institute (CoInstitute) be the quasicategory of locally small institutes and morphisms (comorphisms). Proposition 18. There are functors F co : CoInstitute → CoInclIns and F : Institute → InclIns. Proof. Given an institute I = (Sen I , Sign I , ≤ I , sig I , Mod I , |= I , .| . ), we construct an institution F(I) = F co (I) as follows: (Sign I , ≤ I ) is a partially ordered class, hence a (thin) category. We turn it into an inclusive category by letting all morphisms be inclusions. This will be the signature category of F(I). For each signature Σ , we let Sen F(I) (Σ ) be Sen I (Σ ) (here we need local smallness of I). Then Sen F(I) easily turns into an inclusion-preserving functor. Also, Mod F(I) (Σ ) is Mod I (Σ ) turned into a thin category using the partial order on Mod I . Since reducts are compositional and preserve the model ordering, we obtain reduct functors for F(I). Satisfaction in F(I) is defined as in I. The satisfaction condition holds because satisfaction is invariant under reduct. Given an institute comorphism ρ = (Φ, α, β ) : I 1 -→ I 2 , we define an institution comorphism F co (ρ) : F(I 1 ) -→ F(I 2 ) as follows. Φ obviously is a functor from Sign F(I 1 ) to Sign F(I 2 ) . If sig(ϕ) ≤ Σ , by sentence coherence, sig(α(ϕ)) ≤ Φ(Σ ). Hence, α : Sen 1 -→ Sen 2 can be restricted to α Σ : Sen 1 (Σ ) -→ Sen 2 (Σ ) for any I 1signature Σ . Naturality of the family (α Σ ) Σ ∈Sign 1 follows from the fact that the α Σ are restrictions of a global α. Each β Σ is functorial because it is monotone. Naturality of the family (β Σ ) Σ ∈Sign 1 follows from model translation commuting with reduct. The satisfaction condition is easily inherited from the institute comorphism. The translation of institute morphisms is similar. Proposition 19. There are functors G co : CoInclIns → CoInstitute and G : InclIns → Institute, such that G co • F co ∼ = id and G • F ∼ = id. Proof. Given an inclusive institution I = (Sign I , Sen I , Mod I , |= I ), we construct an institute G(I) = G co (I) as follows: (Sign I , ≤ I ) is the partial order given by the inclusions. Sen G(I) is the colimit of the diagram of all inclusions Sen I (Σ 1 ) → Sen I (Σ 1 ) for Σ 1 ≤ Σ 2 . This colimit is taken in the quasicategory of classes and functions. 
It exists because all involved objects are sets (the construction can be given as a quotient of the disjoint union, following the usual construction of colimits as coequalisers of coproducts). Let µ Σ : Sen I (Σ )-→ Sen G(I) denote the colimit injections. For a sentence ϕ, let S(ϕ) be the set of signatures Σ such that ϕ is in the image of µ Σ . We show that S(ϕ) has a least element. For if not, choose some Σ 0 ∈ S(ϕ). Assume that we have chosen Σ n ∈ S(ϕ). Since Σ n is not the least element of S(ϕ), there must be some Σ ∈ S(ϕ) such that Σ n ≤ Σ . Then let Σ n+1 = Σ n ∩ Σ ; since Sen preserves intersections, Σ n+1 ∈ S(ϕ). Moreover, Σ n+1 < Σ n . This gives an infinite descending chain of signature inclusions in S(ϕ), contradicting I having well-founded sentences. Hence, S(ϕ) must have a least element, which we use as sig(ϕ). Mod G(I) (Σ ) is the partial order of inclusions in Mod I (Σ ), and also reduct is inherited. Since Mod G(I) is functorial, reducts are compositional. Since each Mod G(I) (σ ) is functorial, reducts preserve the model ordering. Satisfaction in G(I) is defined as in I. The satisfaction condition implies that satisfaction is invariant under reduct. Given an institution comorphism ρ = (Φ, α, β ) : I 1 -→ I 2 , we define an institute comorphism G co (ρ) : G(I 1 ) -→ G(I 2 ) as follows. Φ obviously is a monotone map from Sign G(I 1 ) to Sign G(I 2 ) . α : Sen G(I 1 ) -→ Sen G(I 2 ) is defined by exploiting the universal property of the colimit Sen G(I 1 ) : it suffices to define a cocone Sen I 1 (Σ ) → Sen G(I 2 ) indexed over signatures Σ in Sign I 1 . The cocone is given by composing α Σ with the inclusion of Sen I 1 (Φ(Σ )) into Sen G(I 2 ) . Commutativity of a cocone triangle follows from that of a cocone triangle for the colimit Sen G(I 2 ) together with naturality of α. This construction also ensures sentence coherence. Model translation is just given by the β Σ ; the translation of institution morphisms is similar. Finally, G • F ∼ = id follows because Sen can be seen to be the colimit of all Sen(Σ 1 ) → Sen(Σ 2 ). This means that we can even obtain G • F = id. However, since the choice of the colimit in the definition of G is only up to isomorphism, generally we obtain only G • F ∼ = id. The argument for G co • F co ∼ = id is similar, since isomorphism institution morphisms are also isomorphism institution comorphisms. It should be noted that F co : CoInstitute → CoInclIns is "almost" left adjoint to G co : CoInclIns → CoInstitute: By the above remarks, w.l.o.g., the unit η : Id -→ G co • F co can be chosen to be the identity. Hence, we need to show that for each institute comorphism ρ : I 1 -→ G(I 2 ), there is a unique institution comorphism ρ # : F(I 1 ) -→ I 2 with G(ρ # ) = ρ. The latter condition easily ensures uniqueness. Let ρ = (Φ, α, β ). We construct ρ # as (Φ, α # , β ). Clearly, Φ also is a functor from Sign F(I 1 into Sign I 2 (which is a supercategory of Sign G(I 2 ) . A similar remark holds for β , but only if the model categories in I 2 consist of inclusions only. α # can be constructed from α by passing to the restrictions α Σ . Altogether we get: Since also G co • F co ∼ = id, CoInstitute comes close to being a coreflective subcategory of CoInclIns. We also obtain: Proposition 21. 
For the DOL kernel language, the institute-based semantics (over some institute-based heterogeneous logical environment E) and the institution-based semantics (similar to that given in [START_REF] Sannella | Specifications in an arbitrary institution[END_REF][START_REF] Mossakowski | Heterogeneous logical environments for distributed specifications[END_REF], over F applied to E) coincide up to application of G to the Ins component of the semantics. Conclusion We have taken concepts from the area of formal methods for software specification and applied them to obtain a kernel language for the Distributed Ontology Language (DOL), including a semantics, and have thus provided the syntax and semantics of a heterogeneous structuring language for ontologies. The standard approach here would be to use institutions to formalise the notion of logical system. However, aiming at a more simple presentation of the heterogeneous semantics, we here develop the notion of institute which allows us to obtain a set-based semantics for a large part of DOL. Institutes can be seen as institutions without category theory. Goguen and Tracz [START_REF] Goguen | An implementation-oriented semantics for module composition[END_REF] have a related set-theoretic approach to institutions: they require signatures to be tuple sets. Our approach is more abstract, because signatures can be any partial order. Moreover, the results of Sect. 7 show that institutes integrate nicely with institutions. That is, we can have the cake and eat it, too: we can abstractly formalise various logics as institutes, a formalisation which, being based on standard settheoretic methods, can be easily understood by the broader ontology communities that are not necessarily acquainted with category theoretic methods. Moreover, the possibility to extend the institute-based formalisation to a full-blown institution which is compatible with the institute (technically, this means that the functor G defined in Prop. [START_REF] Lifschitz | Circumscription[END_REF], applied to the institution, should yield the institute), allows a smooth technical integration of further features into the framework which do require institutions, such as colimits. This work provides the semantic backbone for the Distributed Ontology Language DOL, which is being developed in the ISO Standard 17347 Ontology Integration and Interoperability, see ontoiop.org. An experimental repository for ontologies written in different logics and also in DOL is available at ontohub.org. Fig. 1 . 1 Fig. 1. 
An initial logic graph for the Distributed Ontology Language DOL presentations: For any institute I, signature Σ ∈ |Sign I | and finite set Γ ⊆ Sen I (Σ ) of Σ -sentences, the presentation I, Σ ,Γ is an ontology with: Ins( I, Σ ,Γ ) := I Sig( I, Σ ,Γ ) := Σ Mod( I, Σ ,Γ ) := {M ∈ Mod(Σ ) | M |= Γ } union: For any signature Σ ∈ |Sign|, given ontologies O 1 and O 2 with the same institute I and signature Σ , their union O 1 and O 2 is an ontology with: Ins(O 1 and O 2 ) := I Sig(O 1 and O 2 ) := Σ Mod(O 1 and O 2 ) := Mod(O 1 ) ∩ Mod(O 2 ) extension: For any ontology O with institute I and signature Σ and any signature extension Σ ≤ Σ in I, O with Σ is an ontology with: Ins(O with Σ ) := I Sig(O with Σ ) := Σ Mod(O with Σ ) := {M ∈ Mod(Σ ) | M | Σ ∈ Mod(O)} hiding: For any ontology O with institute I and signature Σ and any signature extension Σ ≤ Σ in I, O hide Σ is an ontology with: Ins(O hide Σ ) := I Sig(O hide Σ ) := Σ Mod(O hide Σ ) := {M | Σ | M ∈ Mod(O )} minimisation: translation along a comorphism: For any ontology O with institute I and signature Σ and any institute comorphism ρ = (Φ, α, β ) : I → I , O with ρ is a ontology with:Ins(O with ρ) := I Sig(O with ρ) := Φ(Σ ) Mod(O with ρ) := {M ∈ Mod I (Φ(Σ )) | β Σ (M ) ∈ Mod(O)} If ρ is simple theoroidal, then Sig(O with ρ) is the signature component of Φ(Σ ).hiding along a morphism: For any ontology O with institute I and signature Σ and any institute morphism µ = (Φ, α, β ) : I → I , O hide µ is a ontology with: Ins(O hide µ) := I Sig(O hide µ) := Φ(Σ ) Mod(O hide µ) := {β Σ (M ) | M ∈ Mod(O )} Derived operations. We also define the following derived operation generalising union to arbitrary pairs of ontologies: For any ontologies O 1 and O 2 with institutes I 1 and I 2 and signatures Σ 1 and Σ 2 , if the supremum I 1 ∪ I 2 exists and the union Σ = Φ(Σ 1 ) ∪ Φ(Σ 2 ) is defined, the generalised union of O 1 and O 2 , by abuse of notation also written as O 1 and O 2 , is defined as (O 1 with ι 1 with Σ ) and (O 2 with ι 2 with Σ ) Proposition 20 . 20 F co : CoInstitute → CoInclIns is left adjoint to G co : CoInclIns → CoInstitute if institutions are restricted to model categories in consisting of inclusions only. 1 min and Σ 2 min respectively. Let M ∈ Mod(O minimize Σ 2 min , Σ fixed ). Then M is an O-model such that M| Σ 2 min ∪Σ fixed is minimal in Fix 2 (M). We show that M| Σ 1 min ∪Σ fixed is minimal in Fix 1 (M): Let M be in Fix 1 (M). By surjectivity of reducts, it can be expanded to a Σ 2 min ∪ Σ fixed -model M . Now M ∈ Fix 2 (M), because all involved models agree on Σ fixed . Since M| Σ 2 min ∪Σ fixed is minimal in Fix 2 (M), M| Σ 2 min ∪Σ fixed ≤ M . Since reducts preserve the model ordering, M| Σ 1 min ∪Σ fixed ≤ M . Hence, M ∈ Mod(O minimize Σ 1 min , Σ fixed ). if for any O 1 -model M 1 , M 1 | Σ can be extended to an O 2 -model (resp. O 2 is a conservative extension of O 1 , see Def. 5). It is easy to see that the model-theoretic module relation implies the consequence-theoretic one. However, the converse is not true in general, compare 5 Relations between Ontologies Besides heterogeneous structured ontologies, DOL features the following statements about relations between heterogeneous structured ontologies: interpretations Given heterogeneous structured ontologies O 1 and O 2 with institutes I 1 and I 2 and signatures Σ 1 and Σ 2 , we write O 1 ∼ ∼ ∼ > O 2 (read: O 1 can be interpreted in O 2 ) for the conjunction of 1. I 1 ≤ I 2 with default inclusion ι = (Φ, α, β ) : I 1 -→ I 2 , 2. Φ(Σ 1 ) ≤ Σ 2 , and 3. 
β (Mod(O 2 )| Φ(Σ 1 ) ) ⊆ Mod(O 1 ). modules Given heterogeneous structured ontologies O 1 and O 2 over the same institute I with signatures Σ 1 and Σ 2 , and given another signature Σ ≤ Σ 1 (called the restriction signature), we say that O 1 is a model-theoretic (consequence-theoretic) module of O 2 w.r.t. Σ For the purposes of this paper, "ontology" can be equated with "logical theory". To make this more explicit, as of January 2013, Google Scholar returns about 1 million papers for the keyword 'ontology', around 10.000 for the keyword 'colimits', but only around 200 for the conjunctive query. See also http://www.w3.org/TR/owl2-overview/ There are two modifications: 1. We use ≤ where[START_REF] Lutz | Deciding inseparability and conservative extensions in the description logic EL[END_REF] write ⊆. 2. In[START_REF] Lutz | Deciding inseparability and conservative extensions in the description logic EL[END_REF], all these notions are defined relative to a query language. This can also be done in an institute by singling out a subinstitute (see end of Sect. 3 below), which then becomes an additional parameter of the definition. Set is the category having all small sets as objects and functions as arrows. CAT is the category of categories and functors. Strictly speaking, CAT is not a category but only a so-called quasicategory, which is a category that lives in a higher set-theoretic universe. Note, however, that non-monotonic formalisms can only indirectly be covered this way, but compare, e.g.,[START_REF] Guerra | Composition of Default Specifications[END_REF]. This is a quite reasonable assumption met by practically all institutions. Note that by contrast, preservation of unions is quite unrealistic-the union of signatures normally leads to new sentences combining symbols from both signatures. Acknowledgements: We would like to thank the OntoIOp working group within ISO/TC 37/SC 3 for providing valuable feedback, in particular Michael Grüninger, Pat Hayes, Maria Keet, Chris Menzel, and John Sowa. We also want to thank Andrzej Tarlecki, with whom we collaborate(d) on the semantics of heterogeneous specification, Thomas Schneider for help with the semantics of modules, and Christian Maeder, Eugen Kuksa and Sören Schulze for implementation work. This work has been supported by the DFGfunded Research Centre on Spatial Cognition (SFB/TR 8), project I1-[OntoSpace], and EPSRC grant "EP/J007498/1". (forall (x y z) (if (and (X x) (X y) (X z)) (and %% now list all the axioms (if (and (isPartOf x y) (isPartOf y x)) (= x y)) %% antisymmetry (if (and (isProperPartOf x y) (isProperPartOf y z)) (isProperPartOf x z)) %% transitivity; can't be expressed in OWL together with asymmetry (iff (overlaps x y) (exists (pt) (and (isPartOf pt x) (isPartOf pt y)))) (iff (isAtomicPartOf x y) (and (isPartOf x y) (Atom x))) (iff (sum z x y) (forall (w) (iff (overlaps w z) (and (overlaps w x) (overlaps w y))))) (exists (s) (sum s x y))))))) %% existence of the sum } Relating Institutes and institutions In this section, we show that institutes are a certain restriction of institutions. We first recall Goguen's and Burstall's notion of institution [START_REF] Goguen | Institutions: Abstract model theory for specification and programming[END_REF], which they have introduced as a formalisation of the intuitive notion of logical system. 
We assume some acquaintance with the basic notions of category theory and refer to [START_REF] Adámek | Abstract and Concrete Categories[END_REF] or [START_REF] Mac | Categories for the Working Mathematician[END_REF] for an introduction.
47,437
[ "769746" ]
[ "461380", "258630", "461380", "461380", "421435" ]
01485976
en
[ "info" ]
2024/03/04 23:41:48
2012
https://inria.hal.science/hal-01485976/file/978-3-642-37635-1_2_Chapter.pdf
Francisco Durán email: [email protected] Fernando Orejas email: [email protected] Steffen Zschaler email: [email protected] Behaviour Protection in Modular Rule-Based System Specifications Model-driven engineering (MDE) and, in particular, the notion of domain-specific modelling languages (DSMLs) is an increasingly popular approach to systems development. DSMLs are particularly interesting because they allow encoding domain-knowledge into a modelling language and enable full code generation and analysis based on high-level models. However, as a result of the domain-specificity of DSMLs, there is a need for many such languages. This means that their use only becomes economically viable if the development of new DSMLs can be made efficient. One way to achieve this is by reusing functionality across DSMLs. On this background, we are working on techniques for modularising DSMLs into reusable units. Specifically, we focus on DSMLs whose semantics are defined through in-place model transformations. In this paper, we present a formal framework of morphisms between graph-transformation systems (GTSs) that allow us to define a novel technique for conservative extensions of such DSMLs. In particular, we define different behaviour-aware GTS morphisms and prove that they can be used to define conservative extensions of a GTS. Introduction Model-Driven Engineering (MDE) [START_REF] Schmidt | Model-driven engineering[END_REF] has raised the level of abstraction at which systems are developed, moving development focus from the programming-language level to the development of software models. Models and specifications of systems have been around the software industry from its very beginning, but MDE articulates them so that the development of information systems can be at least partially automated. Thus models are being used not only to specify systems, but also to simulate, analyze, modify and generate code of such systems. A particularly useful concept in MDE are domainspecific modelling languages (DSMLs) [START_REF] Van Deursen | Domain-specific languages: An annotated bibliography[END_REF]. These languages offer concepts specifically targeted at a particular domain. On the one hand this makes it easier for domain experts to express their problems and requirements. On the other hand, the higher amount of knowledge embedded in each concept allows for much more complete generation of executable solution code from a DSML model [START_REF] Hemel | Code generation by model transformation: A case study in transformation modularity[END_REF] as compared to a model expressed in a general-purpose modelling language. DSMLs can only be as effective as they are specific for a particular domain. This implies that there is a need for a large number of such languages to be developed. However, development of a DSML takes additional effort in a software-development project. DSMLs are only viable if their development can be made efficient. One way of achieving this is by allowing them to be built largely from reusable components. Consequently, there has been substantial research on how to modularise language specifications. DSMLs are often defined by specifying their syntax (often separated into concrete and abstract syntax) and their semantics. While we have reasonably good knowledge of how to modularise DSML syntax, the modularisation of language semantics is an as yet unsolved issue. 
DSML semantics can be represented in a range of different ways-for example using UML behavioural models [START_REF] Engels | Dynamic meta modeling: A graphical approach to the operational semantics of behavioral diagrams in UML[END_REF][START_REF] Fischer | Story diagrams: A new graph rewrite language based on the unified modeling language[END_REF], abstract state machines [START_REF] Di Ruscio | Extending AMMA for supporting dynamic semantics specifications of DSLs[END_REF][START_REF] Chen | Semantic anchoring with model transformations[END_REF], Kermeta [START_REF] Muller | Weaving executability into object-oriented metalanguages[END_REF], or in-place model transformations [START_REF] De Lara | Automating the transformation-based analysis of visual languages[END_REF][START_REF] Rivera | Analyzing rule-based behavioral semantics of visual modeling languages with Maude[END_REF]. In the context of MDE it seems natural to describe the semantics by means of models, so that they may be integrated with the rest of the MDE environment and tools. We focus on the use of in-place model transformations. Graph transformation systems (GTSs) were proposed as a formal specification technique for the rule-based specification of the dynamic behaviour of systems [START_REF] Ehrig | Introduction to the algebraic theory of graph grammars[END_REF]. Different approaches exist for modularisation in the context of the graph-grammar formalism [START_REF] Corradini | The category of typed graph grammars and its adjunctions with categories of derivations[END_REF][START_REF]Handbook of Graph Grammars and Computing by Graph Transformations[END_REF][START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF]. All of them have followed the tradition of modules inspired by the notion of algebraic specification module [START_REF] Ehrig | Fundamentals of Algebraic Specification 2. Module Specifications and Constraints[END_REF]. A module is thus typically considered as given by an export and an import interface, and an implementation body that realises what is offered in the export interface, using the specification to be imported from other modules via the import interface. For example, Große-Rhode, Parisi-Presicce, and Simeoni introduce in [START_REF] Große-Rhode | Formal software specification with refinements and modules of typed graph transformation systems[END_REF] a notion of module for typed graph transformation systems, with interfaces and implementation bodies; they propose operations for union, composition, and refinement of modules. Other approaches to modularisation of graph transformation systems include PROGRES Packages [START_REF] Schürr | The PROGRES-approach: Language and environment[END_REF], GRACE Graph Transformation Units and Modules [START_REF] Kreowski | Graph transformation units and modules[END_REF], and DIEGO Modules [START_REF] Taentzer | DIEGO, another step towards a module concept for graph transformation systems[END_REF]. See [START_REF] Heckel | Classification and comparison of modularity concepts for graph transformation systems[END_REF] for a discussion on these proposals. For the kind of systems we deal with, the type of module we need is much simpler. For us, a module is just the specification of a system, a GTS, without import and export interfaces. Then, we build on GTS morphisms to compose these modules, and specifically we define parametrised GTSs. The instantiation of such parameterized GTS is then provided by an amalgamation construction. 
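As a rough, non-authoritative illustration of this module concept (not taken from any existing tool), a GTS can be recorded as nothing more than a type graph together with a set of named rules, and a parameterised GTS as a GTS with a distinguished parameter sub-GTS; all class and field names in the following minimal Python sketch are assumptions made purely for exposition.

from dataclasses import dataclass, field

@dataclass
class Rule:
    # an in-place rule, kept abstract here: left-hand side, interface, right-hand side
    lhs: object
    interface: object
    rhs: object

@dataclass
class GTS:
    # a module in the sense used here: just a type graph plus named rules,
    # with no separate import/export interfaces
    type_graph: set
    rules: dict = field(default_factory=dict)  # rule name -> Rule

@dataclass
class ParameterisedGTS:
    # a GTS with a distinguished parameter sub-GTS; instantiation binds the
    # parameter to a concrete GTS and combines the two specifications
    body: GTS
    parameter: GTS

Instantiation itself is deliberately left out of the sketch, since it is exactly the amalgamation construction developed formally later in the paper.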
We present formal results about graph-transformation systems and morphisms between them. Specifically, we provide definitions for behaviour-reflecting and -protecting GTS morphisms and show that they can be used to infer semantic properties of these morphisms. We give a construction for the amalgamation of GTSs, as a base for the composition of GTSs, and we prove it to protect behaviour under certain circumstances. Although we motivate and illustrate our approach using the e-Motions language [START_REF] Rivera | A graphical approach for modeling time-dependent behavior of DSLs[END_REF][START_REF] Rivera | On the behavioral semantics of real-time domain specific visual languages[END_REF], our proposal is language-independent, and all the results are presented for GTSs and adhesive HLR systems [START_REF] Lack | Adhesive categories[END_REF][START_REF] Ehrig | Adhesive high-level replacement categories and systems[END_REF]. Different forms of GTS morphisms have been used in the literature, taking one form or another depending on their concrete application. Thus, we find proposals cen-tered on refinements (see., e.g., [START_REF] Heckel | Horizontal and vertical structuring of typed graph transformation systems[END_REF][START_REF] Große-Rhode | Spatial and temporal refinement of typed graph transformation systems[END_REF][START_REF] Große-Rhode | Formal software specification with refinements and modules of typed graph transformation systems[END_REF]), views (see, e.g., [START_REF] Engels | A combined reference model-and viewbased approach to system specification[END_REF]), and substitutability (see [START_REF] Engels | Flexible interconnection of graph transformation modules[END_REF]). See [START_REF] Engels | Flexible interconnection of graph transformation modules[END_REF] for a first attempt to a systematic comparison of the different proposals and notations. None of these notions fit our needs, and none of them coincide with our behaviour-aware GTS morphisms. Moreover, as far as we know, parameterised GTSs and GTS morphisms, as we discuss them, have not been studied before. Heckel and Cherchago introduce parameterised GTSs in [START_REF] Heckel | Structural and behavioural compatibility of graphical service specifications[END_REF], but their notion has little to do with our parameterised GTSs. In their case, the parameter is a signature, intended to match service descriptions. They however use a double-pullback semantics, and have a notion of substitution morphism which is related to our behaviour preserving morphism. Our work is originally motivated by the specification of non-functional properties (NFPs), such as performance or throughput, in DSMLs. We have been looking for ways in which to encapsulate the ability to specify non-functional properties into reusable DSML modules. Troya et al. used the concept of observers in [START_REF] Troya | Simulating domain specific visual models by observation[END_REF][START_REF] Troya | Model-driven performance analysis of rulebased domain specific visual models[END_REF] to model nonfunctional properties of systems described by GTSs in a way that could be analysed by simulation. 
In [START_REF] Durán | On the reusable specification of non-functional properties in DSLs[END_REF], we have built on this work and ideas from [START_REF] Zschaler | Formal specification of non-functional properties of component-based software systems: A semantic framework and some applications thereof[END_REF] to allow the modular encapsulation of such observer definitions in a way that can be reused in different DSML specifications. In this paper, we present a full formal framework of such language extensions. Nevertheless, this framework is independent of the specific example of non-functional property specifications, but instead applies to any conservative extension of a base GTS. The way in which we think about composition of reusable DSML modules has been inspired by work in aspect-oriented modelling (AOM). In particular, our ideas for expressing parametrised metamodels are based on the proposals in [START_REF] Clarke | Generic aspect-oriented design with Theme/UML[END_REF][START_REF] Klein | Reusable aspect models[END_REF]. Most AOM approaches use syntactic notions to automate the establishment of mappings between different models to be composed, often focusing primarily on the structural parts of a model. While our mapping specifications are syntactic in nature, we focus on composition of behaviours and provide semantic guarantees. In this sense, our work is perhaps most closely related to the work on MATA [START_REF] Whittle | MATA: A unified approach for composing UML aspect models based on graph transformation[END_REF] or semantic-based weaving of scenarios [START_REF] Klein | Semantic-based weaving of scenarios[END_REF]. The rest of the paper begins with a presentation of a motivating example expressed in the e-Motions language in Section 2. Section 3 introduces a brief summary of graph transformation and adhesive HLR categories. Section 4 introduces behaviour-reflecting GTS morphisms, the construction of amalgamations in the category of GTSs and GTS morphisms, and several results on these amalgamations, including the one stating that the morphisms induced by these amalgamations protect behaviour, given appropriate conditions. The paper finishes with some conclusions and future work in Section 5. NFP specification with e-Motions In this section, we use e-Motions [START_REF] Rivera | A graphical approach for modeling time-dependent behavior of DSLs[END_REF][START_REF] Rivera | On the behavioral semantics of real-time domain specific visual languages[END_REF] to provide a motivating example, adapted from [START_REF] Troya | Simulating domain specific visual models by observation[END_REF], as well as intuitions for the formal framework developed. However, as stated in the previous section, the framework itself is independent of such a language. e-Motions is a Domain Specific Modeling Language (DSML) and graphical framework developed for Eclipse that supports the specification, simulation, and formal anal- ysis of DSMLs. Given a MOF metamodel (abstract syntax) and a GCS model (a graphical concrete syntax) for it, the behaviour of a DSML is defined by in-place graph transformation rules. Although we briefly introduce the language here, we omit all those details not relevant to this paper. We refer the interested reader to [START_REF] Rivera | Formal specification and analysis of domain specific models using Maude[END_REF][START_REF] Rivera | On the behavioral semantics of real-time domain specific visual languages[END_REF] or http://atenea.lcc.uma.es/e-Motions for additional details. 
Figure 1(a) shows the metamodel of a DSML for specifying Production Line systems for producing hammers out of hammer heads and handles, which are generated in respective machines, transported along the production line via conveyors, and temporarily stored in trays. As usual in MDE-based DSMLs, this metamodel defines all the concepts of the language and their interconnections; in short, it provides the language's abstract syntax. In addition, a concrete syntax is provided. In the case of our example, this is sufficiently well defined by providing icons for each concept (see Figure 1(b)); connections between concepts are indicated through arrows connecting the corresponding icons. Figure 2 shows a model conforming to the metamodel in Figure 1(a) using the graphical notation introduced in the GCS model in Figure 1(b). The behavioural semantics of the DSML is then given by providing transformation rules specifying how models can evolve. Figure 3 shows an example of such a rule. The rule consists of a left-hand side matching a situation before the execution of the rule and a right-hand side showing the result of applying the rule. Specifically, this rule shows how a new hammer is assembled: a hammer generator a has an incoming tray of parts and is connected to an outgoing conveyor belt. Whenever there is a handle and a head available, and there is space in the conveyor for at least one part, the hammer generator can assemble them into a hammer. The new hammer is added to the parts set of the outgoing conveyor belt in time T, with T some value in the range [a.pt -3, a.pt + 3], and where pt is an attribute representing the production time of a machine. The complete semantics of our production-line DSML is constructed from a number of such rules covering all kinds of atomic steps that can occur, e.g., generating new pieces, moving pieces from a conveyor to a tray, etc. The complete specification of a Production Line example using e-Motions can be found at http://atenea.lcc. uma.es/E-motions/PLSExample. For a Production Line system like this one, we may be interested in a number of non-functional properties. For example, we would like to assess the throughput of a production line, or how long it takes for a hammer to be produced. Figure 4(a) shows the metamodel for a DSML for specifying production time. It is defined as a parametric model (i.e., a model template), defined independently of the Production Line system. It uses the notion of response time, which can be applied to different systems with different meanings. The concepts of Server, Queue, and Request and their interconnections are parameters of the metamodel, and they are shaded in grey for illustration purposes. Figure 4(b) shows the concrete syntax for the response time observer object. Whenever that observer appears in a behavioural rule, it will be represented by that graphical symbol. Figure 4(c) shows one of the transformation rules defining the semantics of the response time observer. It states that if there is a server with an in queue and an out queue and there initially are some requests (at least one) in the in queue, and the out queue contains some requests after rule execution, the last response time should be recorded to have been equal to the time it took the rule to execute. Similar rules need to be written to capture other situations in which response time needs to be measured, for example, where a request stays at a server for some time, or where a server does not have an explicit in or out queue. 
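To make the reading of such a rule concrete, the following Python sketch mimics the effect of the Assemble rule of Figure 3 on a drastically simplified object model; the class and attribute names (Container, Assemble, capacity, parts, pt) are assumptions chosen for the illustration and do not reproduce the e-Motions representation, which is a graph transformation rule over the metamodel.

import random
from dataclasses import dataclass, field

@dataclass
class Container:
    capacity: int
    parts: list = field(default_factory=list)

@dataclass
class Assemble:
    pt: float            # production time attribute of the machine
    tray: Container      # incoming tray of parts
    conveyor: Container  # outgoing conveyor belt

def try_assemble(a: Assemble):
    # left-hand side: a head and a handle in the tray, space on the conveyor
    if ("head" in a.tray.parts and "handle" in a.tray.parts
            and len(a.conveyor.parts) < a.conveyor.capacity):
        # right-hand side: the two parts are consumed and a hammer is produced
        a.tray.parts.remove("head")
        a.tray.parts.remove("handle")
        a.conveyor.parts.append("hammer")
        return random.uniform(a.pt - 3, a.pt + 3)  # duration T in [a.pt - 3, a.pt + 3]
    return None  # no match for the left-hand side: the rule is not applicable

a = Assemble(pt=10, tray=Container(4, ["head", "handle"]), conveyor=Container(2))
duration = try_assemble(a)  # afterwards a.conveyor.parts == ["hammer"]

In e-Motions the same behaviour is expressed declaratively, and rule application is driven by graph matching over models rather than by hand-written code as above.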
Note that, as in the metamodel in Figure 4(a), part of the rule in Figure 4(c) has been shaded in grey. Intuitively, the shaded part represents a pattern describing transformation rules that need to be extended to include response-time accounting. 4 The lower part of the rule describes the extensions that are required. So, in addition to reading Figure 4(c) as a 'normal' transformation rule (as we have done above), we can also read it as a rule transformation, stating: "Find all rules that match the shaded pattern and add ResponseTime objects to their left-and right-hand sides as described." In effect, observer models become higher-order transformations [START_REF] Tisi | On the use of higher-order model transformations[END_REF]. To use our response-time language to allow specification of production time of hammers in our Production Line DSML, we need to weave the two languages together. For this, we need to provide a binding from the parameters of the response-time metamodel (Figure 4(a)) to concepts in the Production Line metamodel (Figure 1(a)). In this case, assuming that we are interested in measuring the response time of the Assemble machine, the binding might be as follows: -Server to Assemble; -Queue to LimitedContainer as the Assemble machine is to be connected to an arbitrary LimitedContainer for queuing incoming and outgoing parts; -Request to Part as Assemble only does something when there are Parts to be processed; and -Associations: • The in and out associations from Server to Queue are bound to the corresponding in and out associations from Machine to Tray and Conveyor, respectively; and • The association from Queue to Request is bound to the association from Container to Part. As we will see in Section 4, given DSMLs defined by a metamodel plus a behaviour, the weaving of DSMLs will correspond to amalgamation in the category of DSMLs and DSML morphisms. Figure 5 shows the amalgamation of an inclusion morphism between the model of an observer DSML, M Obs , and its parameter sub-model M Par , and the binding morphism from M Par to the DSML of the system at hand, M DSML , the Production Line DSML in our example. The amalgamation object M DSML is obtained by the construction of the amalgamation of the corresponding metamodel morphisms and the amalgamation of the rules describing the behaviour of the different DSMLs. In our example, the amalgamation of the metamodel corresponding morphisms is shown in Figure 6 (note that the binding is only partially depicted). The weaving process has added the ResponseTime concept to the metamodel. Notice that the weaving process also ensures that only sensible woven metamodels can be produced: for a given binding of parameters, there needs to be a match between the constraints expressed in the observer metamodel and the DSML metamodel. We will discuss this issue in more formal detail in Section 4. The binding also enables us to execute the rule transformations specified in the observer language. For example, the rule in Figure 3 matches the pattern in Figure 4(c), given this binding: In the left-hand side, there is a Server (Assemble) with an in-Queue (Tray) that holds two Requests (Handle and Head) and an out-Queue (Conveyor). In the right-hand side, there is a Server (Assemble) with an in-Queue (Tray) and an out-Queue (Conveyor) that holds one Request (Hammer). Consequently, we can apply the rule transformation from the rule in Figure 4(c). 
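The binding itself is a finite mapping from the parameter concepts and associations of the observer metamodel to concepts and associations of the Production Line metamodel. The sketch below records it as a plain Python dictionary and approximates, very roughly, the effect of the metamodel weaving described above (the Production Line concepts plus the non-parameter observer concept ResponseTime); the tuple encoding of associations and the observer-side association name are assumptions made for the illustration.

binding = {
    "Server": "Assemble",
    "Queue": "LimitedContainer",
    "Request": "Part",
    ("Server", "in", "Queue"): ("Machine", "in", "Tray"),
    ("Server", "out", "Queue"): ("Machine", "out", "Conveyor"),
    ("Queue", "requests", "Request"): ("Container", "parts", "Part"),  # observer-side name assumed
}

def weave_concepts(dsml_concepts, observer_concepts, parameter_concepts):
    # the woven metamodel keeps every DSML concept and adds exactly the
    # observer concepts that are not bound parameters
    return set(dsml_concepts) | (set(observer_concepts) - set(parameter_concepts))

woven = weave_concepts(
    {"Machine", "Assemble", "Tray", "Conveyor", "LimitedContainer", "Part", "Hammer"},
    {"Server", "Queue", "Request", "ResponseTime"},
    {"Server", "Queue", "Request"},
)
# woven == the Production Line concepts together with ResponseTime

This is only the concept-level view; the actual construction, including the consistency conditions on constraints mentioned above, is the amalgamation developed in Section 4.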
As we will explain in Section 4, the semantics of this rule transformation is provided by the rule amalgamation illustrated in Figure 7, where we can see how the obtained amalgamated rule is similar to the Assemble rule but with the observers in the RespTime rule appropriately introduced. MPar MMPar ⊕ RlsPar Binding BMM ⊕ B Rls + 3 MDSML MMDSML ⊕ RlsDSML M Obs MM Obs ⊕ Rls Obs + 3 M DSML (MMDSML ⊗ MM Obs ) ⊕ (RlsDSML ⊗ Rls Obs ) Clearly, such a separation of concerns between a specification of the base DSML and specifications of languages for non-functional properties is desirable. We have used the response-time property as an example here. Other properties can be defined easily in a similar vein as shown in [START_REF] Troya | Simulating domain specific visual models by observation[END_REF] and at http://atenea.lcc.uma.es/index. php/Main_Page/Resources/E-motions/PLSObExample. In the following sections, we discuss the formal framework required for this and how we can distinguish safe bindings from unsafe ones. The e-Motions models thus obtained are automatically transformed into Maude [4] specifications [START_REF] Rivera | On the behavioral semantics of real-time domain specific visual languages[END_REF]. See [START_REF] Rivera | Formal specification and analysis of domain specific models using Maude[END_REF] for a detailed presentation of how Maude provides an accurate way of specifying both the abstract syntax and the behavioral semantics of models and metamodels, and offers good tool support both for simulating and for reasoning about them. Graph transformation and adhesive HLR categories Graph transformation [START_REF]Handbook of Graph Grammars and Computing by Graph Transformations[END_REF] is a formal, graphical and natural way of expressing graph manipulation based on rewriting rules. In graph-based modelling (and meta-modelling), graphs are used to define the static structures, such as class and object ones, which represent visual alphabets and sentences over them. We formalise our approach using the typed graph transformation approach, specifically the Double Pushout (DPO) algebraic approach, with positive and negative (nested) application conditions [START_REF] Ehrig | Theory of constraints and application conditions: From graphs to high-level structures[END_REF][START_REF] Habel | Correctness of high-level transformation systems relative to nested conditions[END_REF]. We however carry on our formalisation for weak adhesive high-level replacement (HLR) categories [START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF]. Some of the proofs in this paper assume that the category of graphs at hand is adhesive HLR. Thus, in the rest of the paper, when we talk about graphs or typed graphs, keep in mind that we actually mean some type of graph whose corresponding category is adhesive HLR. Specifically, the category of typed attributed graphs, the one of interest to us, was proved to be adhesive HLR in [START_REF] Ehrig | Fundamental theory for typed attributed graph transformation[END_REF]. Generic notions The concepts of adhesive and (weak) adhesive HLR categories abstract the foundations of a general class of models, and come together with a collection of general semantic techniques [START_REF] Lack | Adhesive categories[END_REF][START_REF] Ehrig | Adhesive high-level replacement categories and systems[END_REF]. 
Thus, e.g., given proofs for adhesive HLR categories of general results such as the Local Church-Rosser, or the Parallelism and Concurrency Theorem, they are automatically valid for any category which is proved an adhesive HLR category. This framework has been a break-through for the DPO approach of algebraic graph transformation, for which most main results can be proved in these categorical frameworks, and then instantiated to any HLR system. Definition 1. (Van Kampen square) Pushout ( 1) is a van Kampen square if, for any commutative cube with (1) in the bottom and where the back faces are pullbacks, we have that the top face is a pushout if and only if the front faces are pullbacks. A f r r m $ $ a C n $ $ c A f r r m # # (1) B g r r b C n # # D d B g r r A f r r m $ $ D C n $ $ B g r r D Definition 2. (Adhesive HLR category) A category C with a morphism class M is called adhesive HLR category if -M is a class of monomorphisms closed under isomorphisms and closed under composition and decomposition, -C has pushouts and pullbacks along M -morphisms, i.e., if one of the given morphisms is in M , then also the opposite one is in M , and M -morphisms are closed under pushouts and pullbacks, and pushouts in C along M -morphisms are van Kampen squares. In the DPO approach to graph transformation, a rule p is given by a span (L l ← K r → R) with graphs L, K, and R, called, respectively, left-hand side, interface, and right-hand side, and some kind of monomorphisms (typically, inclusions) l and r. A graph transformation system (GTS) is a pair (P, π) where P is a set of rule names and π is a function mapping each rule name p into a rule L l ←-K r -→ R. An application of a rule p : 1) and ( 2), which are pushouts in the corresponding graph category, leading to a direct transformation G p,m =⇒ H. L l ←-K r -→ R to a graph G via a match m : L → G is constructed as two gluings ( p : L m (1) K l o o r / / (2) R G D o o / / H We only consider injective matches, that is, monomorphisms. If the matching m is understood, a DPO transformation step G p,m =⇒ H will be simply written G p =⇒ H. A transformation sequence ρ = ρ 1 . . . ρ n : G ⇒ * H via rules p 1 , . . . , p n is a sequence of transformation steps ρ i = (G i pi,mi ==⇒ H i ) such that G 1 = G, H n = H, and consecutive steps are composable, that is, G i+1 = H i for all 1 ≤ i < n. The category of transformation sequences over an adhesive category C, denoted by Trf(C), has all graphs in |C| as objects and all transformation sequences as arrows. Transformation rules may have application conditions. We consider rules of the form (L l ←-K r -→ R, ac), where (L l ←-K r -→ R) is a normal rule and ac is a (nested) application condition on L. Application conditions may be positive or negative (see Figure 8). Positive application conditions have the form ∃a, for a monomorphism a : L → C, and demand a certain structure in addition to L. Negative application conditions of the form a forbid such a structure. A match m : L → G satisfies a positive application condition ∃a if there is a monomorphism q : C → G satisfying q • a = m. A matching m satisfies a negative application condition a if there is no such monomorphism. Given an application condition ∃a or a, for a monomorphism a : L → C, another application condition ac can be established on C, giving place to nested application conditions [START_REF] Habel | Correctness of high-level transformation systems relative to nested conditions[END_REF]. 
For a basic application condition ∃(a, ac C ) on L with an application condition ac C on C, in addition to the existence of q it is required that q satisfies ac C . We write m |= ac if m satisfies ac. ac C ∼ = ac C denotes the semantical equivalence of ac C and ac C on C. C q L a o o m K l o o r / / R G D o o / / H (a) Positive application condition C / q L a o o m K l o o r / / R G D o o / / H (b) Negative application condition To improve readability, we assume projection functions ac, lhs and rhs, returning, respectively, the application condition, the left-hand side and the right-hand side of a rule. Thus, given a rule r = (L l ←-K r -→ R, ac), ac(r) = ac, lhs(r) = L, and rhs(r) = R. Given an application condition ac L on L and a monomorphism t : L → L , then there is an application condition Shift(t, ac L ) on L such that for all m : [START_REF] Parisi-Presicce | Transformations of graph grammars[END_REF] a notion of rule morphism very similar to the one below, although we consider rules with application conditions, and require the commuting squares to be pullbacks. L → G, m |= Shift(t, ac L ) ↔ m = m • t |= ac L . ac L L t / / m L m Shift(t, ac L ) G Parisi-Presicce proposed in Definition 3. (Rule morphism) Given transformation rules p i = (L i li ← K i ri → R i , ac i ), for i = 0, 1, a rule morphism f : p 0 → p 1 is a tuple f = (f L , f K , f R ) of graph mono- morphisms f L : L 0 → L 1 , f K : K 0 → K 1 , and f R : R 0 → R 1 such that the squares with the span morphisms l 0 , l 1 , r 0 , and r 1 are pullbacks, as in the diagram below, and such that ac 1 ⇒ Shift(f L , ac 0 ). p 0 : f ac 0 L 0 f L pb K 0 l0 o o r0 / / f K pb R 0 f R p 1 : ac 1 L 1 K 1 l1 o o r1 / / R 1 The requirement that the commuting squares are pullbacks is quite natural from an intuitive point of view: the intuition of morphisms is that they should preserve the "structure" of objects. If we think of rules not as a span of monomorphisms, but in terms of their intuitive semantics (i.e., L\K is what should be deleted from a given graph, R\K is what should be added to a given graph and K is what should be preserved), then asking that the two squares are pullbacks means, precisely, to preserve that structure. I.e., we preserve what should be deleted, what should be added and what must remain invariant. Of course, pushouts also preserve the created and deleted parts. But they reflect this structure as well, which we do not want in general. Fact 1 With componentwise identities and composition, rule morphisms define the category Rule . Proof Sketch. Follows trivially from the fact that ac ∼ = Shift(id L , ac), pullback composition, and that given morphisms f • f such that p 0 : f ac 0 L 0 f L pb K 0 l0 o o r0 / / pb R 0 p 1 : f ac 1 L 1 f L pb K 1 l1 o o r1 / / pb R 1 p 2 : ac 2 L 2 K 2 l2 o o r2 / / R 2 then we have Shift(f L , Shift(f L , ac 0 )) ∼ = Shift(f L • f L , ac 0 ). A key concept in the constructions in the following section is that of rule amalgamation [START_REF] Boehm | Amalgamation of graph transformations with applications to synchronization[END_REF][START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF]. The amalgamation of two rules p 1 and p 2 glues them together into a single rule p to obtain the effect of the original rules. I.e., the simultaneous application of p 1 and p 2 yields the same successor graph as the application of the amalgamated rule p. The possible overlapping of rules p 1 and p 2 is captured by a rule p 0 and rule morphisms f : p 0 → p 1 and g : p 0 → p 2 . 
2) and (3) are pushouts, l and r are induced by the universal property of (2) so that all subdiagrams commute, and ac = Shift( f L , ac 2 ) ∧ Shift( g L , ac 1 ). ac 0 L 0 f L | | g L " " (1) K 0 } } " " l0 o o r0 / / (2) R 0 ~! ! (3) ac 2 L 2 f L | | K 2 } } l2 o o r2 / / R 2 } } ac 1 L 1 g L " " K 1 ! ! l1 o o r1 / / R 1 ac L K l o o r / / R Notice that in the above diagram all squares are either pushouts or pullbacks (by the van Kampen property) which means that all their arrows are monomorphisms (by being an adhesive HLR category). We end this section by introducing the notion of rule identity. Definition 5. (Rule-identity morphism) Given graph transformation rules p i = (L i li ←-K i ri -→ R i , ac i ) , for i = 0, 1, and a rule morphism f : p 0 → p 1 , with f = (f L , f K , f R ), p 0 and p 1 are said to be identical, denoted p 0 ≡ p 1 , if f L , f K , and f R are identity morphisms and ac 0 ∼ = ac 1 . Typed graph transformation systems A (directed unlabeled) graph G = (V, E, s, t) is given by a set of nodes (or vertices) V , a set of edges E, and source and target functions s, t : E → V . Given graphs G i = (V i , E i , s i , t i ), with i = 1, 2, a graph homomorphism f : G 1 → G 2 is a pair of functions (f V : V 1 → V 2 , f E : E 1 → E 2 ) such that f V • s 1 = s 2 • f E and f V • t 1 = t 2 • f E . With componentwise identities and composition this defines the category Graph. Given a distinguished graph TG, called type graph, a TG-typed graph (G, g G ), or simply typed graph if TG is known, consists of a graph G and a typing homomorphism g G : G → T G associating with each vertex and edge of G its type in TG. However, to enhance readability, we will use simply g G to denote a typed graph (G, g G ), and when G2 g 2 G1 k : : g 1 $ $ TG f / / TG (a) Forward retyping functor. G2 g 2 / / G 2 g 2 G1 k : : the typing morphism g G can be considered implicit, we will often refer to it just as G. A TG-typed graph morphism between TG-typed graphs (G i , g i : g 1 $ $ / / G 1 k : : g 1 $ $ TG f / / TG (b) Backward retyping functor. G i → T G), with i = 1, 2, denoted f : (G 1 , g 1 ) → (G 2 , g 2 ) (or simply f : g 1 → g 2 ), is a graph morphism f : G 1 → G 2 which preserves types, i.e., g 2 • f = g 1 . Graph TG is the category of TG-typed graphs and TG-typed graph morphisms, which is the comma category Graph over TG. If the underlying graph category is adhesive (resp., adhesive HLR, weakly adhesive) then so are the associated typed categories [START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF], and therefore all definitions in Section 3.1 apply to them. A TG-typed graph transformation rule is a span p = L l ← K r → R of injective TG-typed graph morphisms and a (nested) application condition on L. Given TG-typed graph transformation rules p i = (L i li ← K i ri → R i , ac i ), with i = 1, 2, a typed rule morphism f : p 1 → p 2 is a tuple (f L , f K , f R ) of TG-typed graph monomorphisms such that the squares with the span monomorphisms l i and r i , for i = 1, 2, are pullbacks, and such that ac 2 ⇒ Shift(f L , ac 1 ). TG-typed graph transformation rules and typed rule morphisms define the category Rule TG , which is the comma category Rule over TG. 
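A minimal Python sketch of these notions, using an ad-hoc dictionary encoding of graphs and typing homomorphisms, may help fix the intuition; the encoding and the small example type graph are assumptions chosen for illustration, not the categorical definitions themselves.

# a graph is a set of nodes and a set of edges with source/target maps;
# a TG-typed graph additionally carries a typing homomorphism into TG
TG = {"nodes": {"Machine", "Container", "Part"},
      "edges": {"in": ("Container", "Machine"), "out": ("Machine", "Container")}}

G = {"nodes": {"a", "t", "c"},
     "edges": {"e1": ("t", "a"), "e2": ("a", "c")}}
gG = {"nodes": {"a": "Machine", "t": "Container", "c": "Container"},
      "edges": {"e1": "in", "e2": "out"}}

def is_typed_morphism(fN, fE, G1, g1, G2, g2):
    # f = (fN, fE) is a typed graph morphism from (G1, g1) to (G2, g2) iff it
    # preserves sources and targets and the typing commutes: g2 . f = g1
    for e, (s, t) in G1["edges"].items():
        if G2["edges"][fE[e]] != (fN[s], fN[t]):
            return False
    return (all(g2["nodes"][fN[n]] == g1["nodes"][n] for n in G1["nodes"]) and
            all(g2["edges"][fE[e]] == g1["edges"][e] for e in G1["edges"]))

# the inclusion of the subgraph {t, a, e1} into G is type-preserving
G1 = {"nodes": {"a", "t"}, "edges": {"e1": ("t", "a")}}
g1 = {"nodes": {"a": "Machine", "t": "Container"}, "edges": {"e1": "in"}}
assert is_typed_morphism({"a": "a", "t": "t"}, {"e1": "e1"}, G1, g1, G, gG)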
Following [START_REF] Corradini | The category of typed graph grammars and its adjunctions with categories of derivations[END_REF][START_REF] Große-Rhode | Formal software specification with refinements and modules of typed graph transformation systems[END_REF], we use forward and backward retyping functors to deal with graphs over different type graphs. A graph morphism f : TG → TG induces a forward retyping functor f > : Graph TG → Graph TG , with f > (g 1 ) = f • g 1 and f > (k : g 1 → g 2 ) = k by composition, as shown in the diagram in Figure 9(a). Similarly, such a morphism f induces a backward retyping functor f < : Graph TG → Graph TG , with f < (g 1 ) = g 1 and f < (k : g 1 → g 2 ) = k : g 1 → g 2 by pullbacks and mediating morphisms as shown in the diagram in Figure 9(b). Retyping functors also extends to application conditions and rules, so we will write things like f > (ac) or f < (p) for some application condition ac and production p. Notice, for example, that given a graph morphism f : TG → TG , the forward retyping of a production p = (L l ← K r → R, ac) over TG is a production f > TG (p) = (f > TG (L) f > TG (l) ←--f > TG (K) f > TG (r) --→ f > TG (R), f > TG (ac)) over TG , defining an induced morphism f p : p → f > TG (p) in Rule. Since f p is a morphism between rules in |Rule TG | and |Rule TG |, it is defined in Rule, forgetting the typing. Notice also that f > TG (ac) ∼ = Shift(f p L , ac). As said above, to improve readability, if G → TG is a TG-typed graph, we sometimes refer to it just by its typed graph G, leaving TG implicit. As a consequence, if f : TG → TG is a morphism, we may refer to the TG -typed graph f > (G), even if this may be considered an abuse of notation. The following results will be used in the proofs in the following section. Proposition 1. (From [START_REF] Große-Rhode | Formal software specification with refinements and modules of typed graph transformation systems[END_REF]) (Adjunction) Forward and backward retyping functors are left and right adjoints; i.e., for each f : TG → TG we have f > f < : TG → TG . Remark 1. Given a graph monomorphism f : TG → TG , for all k : G 1 → G 2 in Graph TG , the following diagram is a pullback: f < (G 1 ) f < (k ) / / pb f < (G 2 ) G 1 k / / G 2 This is true just by pullback decomposition. Remark 2. Given a graph monomorphism f : TG → TG , and given monomorphisms k : G 0 → G 1 and h : G 0 → G 2 in Graph TG , if the following diagram on the left is a pushout then the diagram on the right is also a pushout: G 0 k / / h po G 1 h f < (G 0 ) f < (k ) / / f < (h ) po f < (G 1 ) f < ( h ) G 2 k / / G f < (G 2 ) f < ( k ) / / f < ( G) Notice that since in an adhesive HLR category all pushouts along M -morphisms are van Kampen squares, the commutative square created by the pullbacks and induced morphisms by the backward retyping functor imply the second pushout. f < (G 1 ) / / G 1 f < (G 0 ) / / 4 4 G 0 4 4 f < ( G ) / / | | G | | f < (G 2 ) / / 5 5 G 2 5 5 T G / / T G Remark 3. Given a graph monomorphism f : TG → TG , and given monomorphisms k : G 0 → G 1 and h : G 0 → G 2 in Graph TG , if the diagram on the left is a pushout (resp., a pullback) then the diagram on the right is also a pushout (resp., a pullback): G 0 k / / h G 1 h f > (G 0 ) f > (k) / / f > (h) f > (G 1 ) f > ( h) G 2 k / / G f > (G 2 ) f > ( k) / / f > ( G) Remark 4. 
Given a graph monomorphism f : TG → TG , and a TG -typed graph transformation rule p = (L l ← K r → R, ac), if a matching m : L → C satisfies ac, that is, m |= ac, then, f < (m) |= f < (ac). A typed graph transformation system over a type graph TG, is a graph transformation system where the given graph transformation rules are defined over the category of TGtyped graphs. Since in this paper we deal with GTSs over different type graphs, we will make explicit the given type graph. This means that, from now on, a typed GTS is a triple (TG, P, π) where TG is a type graph, P is a set of rule names and π is a function mapping each rule name p into a rule (L l ← K r → R, ac) typed over TG. The set of transformation rules of each GTS specifies a behaviour in terms of the derivations obtained via such rules. A GTS morphism defines then a relation between its source and target GTSs by providing an association between their type graphs and rules. Definition 6. (GTS morphism) Given typed graph transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, a GTS morphism f : GTS 0 → GTS 1 , with f = (f TG , f P , f r ), is given by a morphism f TG : TG 0 → TG 1 , a surjective mapping f P : P 1 → P 0 between the sets of rule names, and a family of rule morphisms f r = {f p : f > T G (π 0 (f P (p))) → π 1 (p)} p∈P1 . Given a GTS morphism f : GTS 0 → GTS 1 , each rule in GTS 1 extends a rule in GTS 0 . However if there are internal computation rules in GTS 1 that do not extend any rule in GTS 0 , we can always consider that the empty rule is included in GTS 0 , and assume that those rules extend the empty rule. Please note that rule morphisms are defined on rules over the same type graph (see Definition 3). To deal with rules over different type graphs we retype one of the rules to make them be defined over the same type graph. Typed GTSs and GTS morphisms define the category GTS. The GTS amalgamation construction provides a very convenient way of composing GTSs. Definition 7. (GTS Amalgamation). Given transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, 2, and GTS morphisms f : GTS 0 → GTS 1 and g : GTS 0 → GTS 2 , the amalgamated GTS GTS = GTS 1 + GTS0 GTS 2 is the GTS ( TG, P , π) constructed as follows. We first construct the pushout of typing graph morphisms f TG : TG 0 → TG 1 and g TG : TG 0 → TG 2 , obtaining morphisms f TG : TG 2 → TG and g TG : TG 1 → TG. The pullback of set morphisms f P : P 1 → P 0 and g P : P 2 → P 0 defines morphisms f P : P → P 2 and g P : P → P 1 . Then, for each rule p in P , the rule π(p) is defined as the amalgamation of rules f > TG (π 2 ( f P (p))) and g > TG (π 1 ( g P (p))) with respect to the kernel rule f > TG (g > TG (π 0 (g P ( f P (p))))). GTS 0 g # # f { { GTS 1 g # # GTS 2 f { { GTS Among the different types of GTS morphisms, let us now focus on those that reflect behaviour. Given a GTS morphism f : GTS 0 → GTS 1 , we say that it reflects behaviour if for any derivation that may happen in GTS 1 there exists a corresponding derivation in GTS 0 . Definition 8. (Behaviour-reflecting GTS morphism) Given graph transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, a GTS morphism f : GTS 0 → GTS 1 is behaviourreflecting if for all graphs G, H in |Graph TG1 |, all rules p in P 1 , and all matches m : lhs(π 1 (p)) → G such that G p,m =⇒ H, then f < TG (G) f P (p),f < TG (m) ======⇒ f < TG (H) in GTS 0 . Morphisms between GTSs that only add to the transformation rules elements not in their source type graph are behaviour-reflecting. 
We call them extension morphisms. Definition 9. (Extension GTS morphism) Given graph transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, a GTS morphism f : GTS 0 → GTS 1 , with f = (f TG , f P , f r ), is an extension morphism if f TG is a monomorphism and for each p ∈ P 1 , π 0 (f P (p)) ≡ f < TG (π 1 (p)). That an extension morphism is indeed a behaviour-reflecting morphism is shown by the following lemma. Lemma 1. All extension GTS morphisms are behaviour-reflecting. Proof Sketch. Given graph transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, let a GTS morphism f : GTS 0 → GTS 1 be an extension morphism. Then, we have to prove that for all graphs G, H in |Graph TG1 |, all rules p in P 1 , and all matches m : lhs(π 1 (p)) → G, if G p,m =⇒ H then f < TG (G) f P (p),f < TG (m) ======⇒ f < TG (H). Assuming transformation rules π 1 (p) = (L 1 l1 ←-K 1 r1 -→ R 1 , ac 1 ) and π 0 (f P (p)) = (L 0 l0 ←-K 0 r0 -→ R 0 , ac 0 ), and given the derivation ac 1 L 1 m po K 1 l1 o o r1 / / po R 1 G D o o / / H since f is an extension morphism, and therefore f TG is a monomorphism, and l 1 and m are also monomorphisms, by Remark 2 and Definition 8, we have the diagram ac 0 ∼ = L 0 K 0 l1 o o r1 / / R 0 f < TG (ac 1 ) f < TG (L 1 ) f < TG (m) po f < TG (K 1 ) f < TG (l1) o o f < TG (r1) / / po f < TG (R 1 ) f < TG (G) f < TG (D) o o / / f < TG (H) Then, given the pushouts in the above diagram and Remark 4, we have the derivation f < TG (G) f P (p),f < TG (m) ======⇒ f < TG (H). Notice that Definition 9 provides specific checks on individual rules. In the concrete case we presented in Section 2, the inclusion morphism between the model of an observer DSML, M Obs , and its parameter sub-model M Par , may be very easily checked to be an extension, by making sure that the features "added" in the rules will be removed by the backward retyping functor. In this case the check is particularly simple because of the subgraph relation between the type graphs, but for a morphism as the binding morphism between M Par and the DSML of the system at hand, M DSML , the check would also be relatively simple. Basically, the backward retyping of each rule in M DSML , i.e., the rule resulting from removing all elements not target of the binding map, must coincide with the corresponding rule, and the application conditions must be equivalent. Since the amalgamation of GTSs is the basic construction for combining them, it is very important to know whether the reflection of behaviour remains invariant under amalgamations. Proposition 2. Given transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, 2, and the amalgamation GTS = GTS 1 + GTS0 GTS 2 of GTS morphisms f : GTS 0 → GTS 1 and g : GTS 0 → GTS 2 , if f TG is a monomorphism and g is an extension morphism, then g is also an extension morphism. GTS 0 f / / g GTS 1 g GTS 2 f / / GTS Proof Sketch. Let it be GTS = ( TG, P , π). We have to prove that for each p ∈ P , π 1 ( g P (p)) ≡ g < TG ( π(p)). By construction, rule π(p) is obtained from the amalgamation of rules g > TG (π 1 ( g P ( p))) and f > TG (π 2 ( f P ( p))). More specifically, without considering application conditions by now, the amalgamation of such rules is accomplished by constructing the pushouts of the morphisms for the left-hand sides, for the kernel graphs, and for the right-hand sides. 
By Remark 2, we know that if the diagram f > TG (g > TG (L 0 )) ∼ = g > TG (f > TG (L 0 )) f > TG (g f P (p) L ) g > TG (f g P (p) L ) / / g > TG (L 1 ) g p L f > TG (L 2 ) f p L / / L is a pushout, then if we apply the backward retyping functor g < T G to all its components (graphs) and morphisms, the resulting diagram is also a pushout. g < TG ( g > TG (f > TG (L 0 ))) g < TG ( f > TG (g f P (p) L )) g < TG ( g > TG (f g P (p) L )) / / g < TG ( g > TG (L 1 )) g < TG ( g p L ) g < TG ( f > TG (L 2 )) g < TG ( f p L ) / / g < TG ( L) Because, by Proposition 1, for every f : TG → TG and every TG-type graph G and morphism g, since f T G is assumed to be a monomorphism, f < (f > (G)) = G and f < (f > (g)) = g, we have g < TG ( g > TG (f > TG (L 0 )) = f > TG (L 0 ), g < TG ( g > TG (f g P (p) L )) = f g P (p) L , and g < TG ( g > TG (L 1 )) = L 1 . By pullback decomposition in the corresponding retyping diagram, g < TG ( f > TG (L 2 )) = f > TG (g < TG (L 2 ) ). Thus, we are left with this other pushout: f > TG (L 0 ) f > TG (g < TG (g f P (p) L )) f g P (p) L / / L 1 g < TG ( g p L ) f > TG (g < TG (L 2 )) g < TG ( f p L ) / / g < TG ( L) Since g is an extension, L 0 ∼ = g < TG (L 2 ), which, because f TG is a monomorphism, implies f > TG (L 0 ) ∼ = f > TG (g < TG (L 2 ) ). This implies that g < TG ( L) ∼ = L 1 . Similar diagrams for kernel objects and right-hand sides lead to similar identity morphisms for them. It only remains to see that ac(π 1 ( g P (p))) ∼ = ac( g < TG ( π(p))). By the rule amalgamation construction, ac = f > TG (ac 2 ) ∧ g > TG (ac 1 ). Since g is an extension morphism, ac 2 ∼ = g > TG (ac 0 ). Then, ac ∼ = f > TG (g > TG (ac 0 )) ∧ g > TG (ac 1 ). For f , as for any other rule morphism, we have ac 1 ⇒ f > TG (ac 0 ). By the Shift construction, for any match m 1 : L 1 → C 1 , m 1 |= ac 1 iff g > TG (m 1 ) |= g > TG (ac 1 ) and, similarly, for any match m 0 : L 0 → C 0 , m 0 |= ac 0 iff f > TG (m 0 ) |= f > TG (ac 0 ). Then, ac 1 ⇒ f > TG (ac 0 ) ∼ = g > TG (ac 1 ) ⇒ g > TG (f > TG (ac 0 )) ∼ = g > TG (ac 1 ) ⇒ f > TG (g > TG (ac 0 )). And therefore, since ac = f > (g > TG (ac 0 )) ∧ g > TG (ac 1 ) and g > TG (ac 1 ) ⇒ f > TG (g > TG (ac 0 )), we conclude ac ∼ = g > TG (ac 1 ). When a DSL is extended with observers and other alien elements whose goal is to measure some property, or to verify certain invariant property, we need to guarantee that such an extension does not change the semantics of the original DSL. Specifically, we need to guarantee that the behaviour of the resulting system is exactly the same, that is, that any derivation in the source system also happens in the target one (behaviour preservation), and any derivation in the target system was also possible in the source one (behaviour reflection). The following definition of behaviour-protecting GTS morphism captures the intuition of a morphism that both reflects and preserves behaviour, that is, that establishes a bidirectional correspondence between derivations in the source and target GTSs. Definition 10. 
(Behaviour-protecting GTS morphism) Given typed graph transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, a GTS morphism f : GTS 0 → GTS 1 is behaviour-protecting if for all graphs G and H in |Graph TG1 |, all rules p in P 1 , and all matches m : lhs(π 1 (p)) → G, g < TG (G) g P (p),g < TG (m) ======⇒ g < TG (H) ⇐⇒ G p,m =⇒ H We find in the literature definitions of behaviour-preserving morphisms as morphisms in which the rules in the source GTS are included in the set of rules of the target GTS. Although these morphisms trivially preserve behaviour, they are not useful for our purposes. Works like [START_REF] Heckel | Horizontal and vertical structuring of typed graph transformation systems[END_REF] or [START_REF] Große-Rhode | Formal software specification with refinements and modules of typed graph transformation systems[END_REF], mainly dealing with refinements of GTSs, only consider cases in which GTSs are extended by adding new transformation rules. In our case, in addition to adding new rules, we are enriching the rules themselves. The main result in this paper is related to the protection of behaviour, and more precisely on the behaviour-related guarantees on the induced morphisms. Theorem 1. Given typed transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, 2, and the amalgamation GTS = GTS 1 + GTS0 GTS 2 of GTS morphisms f : GTS 0 → GTS 1 and g : GTS 0 → GTS 2 , if f is a behaviour-reflecting GTS morphism, f TG is a monomorphism, and g is an extension and behaviour-protecting morphism, then g is behaviourprotecting as well. GTS 0 f / / g GTS 1 g GTS 2 f / / GTS Proof Sketch. Since g is an extension morphism and f TG is a monomorphism, by Proposition 2, g is also an extension morphism, and therefore, by Lemma 1, also behaviour-reflecting. We are then left with the proof of behaviour preservation. Given a derivation G 1 p1,m1 ==⇒ H 1 in GTS 1 , with π 1 (p 1 ) = (L 1 l1 ←-K 1 r1 -→ R 1 , ac 1 ), since f : GTS 0 → GTS 1 is a behaviour-reflecting morphism, there is a corresponding derivation in GTS 0 . Specifically, the rule f P (p 1 ) can be applied on f < TG (G 1 ) with match f < TG (m 1 ) satisfying the application condition of production π 0 (f P (p 1 )), and resulting in a graph f < TG (H 1 ). f < TG (G 1 ) f P (p1),f < TG (m1) =======⇒ f < TG (H 1 ) Moreover, since g is a behaviour-protecting morphism, this derivation implies a corresponding derivation in GTS 2 . By the amalgamation construction in Definition 7, the set of rules of GTS includes, for each p in P , the amalgamation of (the forward retyping of) the rules π 1 ( g P (p)) = (L 1 l1 ←-K 1 r1 -→ R 1 , ac 1 ) and π 2 ( f P (p)) = (L 2 l2 ←-K 2 r2 -→ R 2 , ac 2 ), with kernel rule π 0 (f P ( g P (p))) = π 0 (g P ( f P (p))) = (L 0 l0 ←-K 0 r0 -→ R 0 , ac 0 ). First, notice that for any TG graph G, G is the pushout of the graphs g < TG (G), f < TG (G) and f < TG ( g < TG (G)) (with the obvious morphisms). This can be proved using a van Kampen square, where in the bottom we have the pushout of the type graphs, the vertical faces are the pullbacks defining the backward retyping functors and on top we have that pushout. Thus, for each graph G in GTS, if a transformation rule in GTS 1 can be applied on g < TG (G), the corresponding transformation rule should be applicable on G in GTS. The following diagram focus on the lefthand sides of the involved rules. 
f > TG (g > TG (L 0 )) = g > TG (f > TG (L 0 )) g p 2 L t t f p 1 L * * f > TG (g > TG (m0))= g > TG (f > TG (m0)) f > TG (L 2 ) f p L * * f > TG (m2) f > TG (g > TG (g < TG ( f < TG (G)))) = g > TG (f > TG (f < TG ( g < TG (G)))) t t * * g > TG (L 1 ) g p L t t g > TG (m1) f > TG ( f < TG (G)) g2 + + L m g > TG ( g < TG (G)) g1 s s G As we have seen above, rules g P (p), f P (p), and f P (g P (p)) = g P (f P (p)) are applicable on their respective graphs using the matchings depicted in the above diagram. Since, by the amalgamation construction, the top square is a pushout, and g 1 • g > TG (m 1 )•f p1 L = g 2 • f > TG (m 2 )•g p2 L , then there is a unique morphism m : L → G making g 1 • g > TG (m 1 ) = m • g p L and g 2 • f > TG (m 2 ) = m • f p L . This m will be used as matching morphism in the derivation we seek. By construction, the application condition ac of the amalgamated rule p is the conjunction of the shiftings of the application conditions of g P (p) and f P (p). Then, since We can then conclude that rule p is applicable on graph G with match m satisfying its application condition ac. Indeed, given the rule π (p) = ( L l ←-K r -→ R, ac) we have the following derivation: ac L m po K l1 o o r1 / / po R G D o o / / H Let us finally check then that D and H are as expected. To improve readability, in the following diagrams we eliminate the retyping functors. For instance, for the rest of the theorem L 0 denotes f > TG (g > TG (L 0 )) = g > TG (f > TG (L 0 )), L 1 denotes g > TG (L 1 ), etc. First, let us focus on the pushout complement of l : K → L and m : L → G. Given rules g P (p), f P (p), and f P (g P (p)) = g P (f P (p)) and rule morphisms between them as above, the following diagram shows both the construction by amalgamation of the morphism l : K → L, and the construction of the pushout complements for morphisms l i and m i , for i = 0 . . . 2. L 0 v v m0 K 0 v v l0 o o L 2 m2 K 2 l2 o o L 1 v v m1 K 1 v v l1 o o L m K l o o G 0 v v D 0 v v l0 o o G 2 D 2 l2 o o G 1 v v D 1 u u l1 o o G D l o o r r X By the pushout of D 0 → D 1 and D 0 → D 2 , and given the commuting subdiagram D 0 v v G 2 D 2 o o G 1 x x D 1 u u o o G D o o there exists a unique morphism D → G making the diagram commute. This D is indeed the object of the pushout complement we were looking for. By the pushout of K 0 → K 1 and K 0 → K 2 , there is a unique morphism from K to D making the diagram commute. We claim that these morphisms K → D and D → G are the pushout complement of K → L and L → G. Suppose that the pushout of K → L and K → D were L → X and D → X for some graph X different from G. By the pushout of K 1 → D 1 and K 1 → L 1 there is a unique morphism G 1 → X making the diagram commute. By the pushout of K 2 → D 2 and K 2 → L 2 there is a unique morphism G 2 → X making the diagram commute. By the pushout of G 0 → G 1 and G 0 → G 2 , there is a unique morphism G → X. But since L → X and D → X are the pushout of K → L and K → D, there is a unique morphism X → G making the diagram commute. Therefore, we can conclude that X and G are isomorphic. Theorem 1 provides a checkable condition for verifying the conservative nature of an extension in our example, namely the monomorphism M Par → M Obs being a behaviour-protecting and extension morphism, M Par → M DSML a behaviour-reflecting morphism, and MM Par → MM DSML a monomorphism. In the concrete application domain we presented in Section 2 this result is very important. 
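To give the hypotheses of Theorem 1 an operational reading, recall that backward retyping along a monomorphism f : TG → TG′ amounts to restricting a TG′-typed graph to the elements whose types lie in the image of f and retyping them through the inverse of f; the extension check of Definition 9 then compares, rule by rule, the retyped target rule with the source rule. The sketch below illustrates this in Python (ours, with ad hoc names, and ignoring application conditions); it is not the verification machinery of e-Motions.

```python
# Here a typed graph is a dict {"nodes": {id: type}, "edges": {id: (type, src, tgt)}},
# and a rule is a dict with components "L", "K", "R".  Illustration only.

def backward_retype(typed_graph, f_nodes, f_edges):
    """f^<: keep the elements whose TG'-type lies in the image of the injective
    type-graph morphism f = (f_nodes, f_edges), retyped through its inverse."""
    inv_n = {v: k for k, v in f_nodes.items()}   # f injective, so invertible on its image
    inv_e = {v: k for k, v in f_edges.items()}
    nodes = {n: inv_n[t] for n, t in typed_graph["nodes"].items() if t in inv_n}
    edges = {e: (inv_e[t], s, d)
             for e, (t, s, d) in typed_graph["edges"].items()
             if t in inv_e and s in nodes and d in nodes}
    return {"nodes": nodes, "edges": edges}

def looks_like_extension(rule_src, rule_tgt, f_nodes, f_edges):
    """Sketch of the check behind Definition 9: backward retyping each component
    of the target rule must give back the corresponding component of the source
    rule; application conditions would additionally have to be checked equivalent."""
    return all(backward_retype(rule_tgt[c], f_nodes, f_edges) == rule_src[c]
               for c in ("L", "K", "R"))
```

In a tooled setting this is exactly the kind of finite, mechanical test alluded to in the discussion that follows; equality up to renaming of identifiers, rather than literal dictionary equality, is what one would implement in practice.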
Notice that the parameter specification is a sub-specification of the observers DSL, making it particularly simple to verify that the inclusion morphism is an extension and also that it is behaviour-protecting. The check may possibly be reduced to checking that the extended system has no terminal states not in its parameter sub-specification. Application conditions should also be checked equivalent. Forbidding the specification of application conditions in rules in the observers DSL may be a practical shortcut. The morphism binding the parameter specification to the system to be analysed can very easily be verified behaviour-reflecting. Once the morphism is checked to be a monomorphism, we just need to check that the rules after applying the backward retyping morphism exactly coincide with the rules in the source GTS. Checking the equivalence of the application conditions may require human intervention. Notice that with appropriate tools and restrictions, most of these restrictions, if not all, can be automatically verified. We may even be able to restrict editing capabilities so that only correct bindings can be specified. Once the observers DSL are defined and checked, they can be used as many times as wished. Once they are to be used, we just need to provide the morphism binding the parameter DSL and the target system. As depicted in Figures 6 for the metamodels the binding is just a set of pairs, which may be easily supported by appropriate graphical tools. The binding must be completed by similar correspondences for each of the rules. Notice that once the binding is defined for the metamodels, most of the rule bindings can be inferred automatically. Finally, given the appropriate morphisms, the specifications may be merged in accordance to the amalgamation construction in Definition 7. The resulting system is guaranteed to both reflect and preserve the original behaviour by Theorem 1. Conclusions and future work In this paper, we have presented formal notions of morphisms between graph transformation systems (GTSs) and a construction of amalgamations in the category of GTSs and GTS morphisms. We have shown that, given certain conditions on the morphisms involved, such amalgamations reflect and protect behaviour across the GTSs. This result is useful because it can be applied to define a notion of conservative extensions of GTSs, which allow adding spectative behaviour (cf. [START_REF] Katz | Aspect categories and classes of temporal properties[END_REF]) without affecting the core transformation behaviour expressed in a GTS. There are of course a number of further research steps to be taken-both in applying the formal framework to particular domains and in further development of the framework itself. In terms of application, we need to provide methods to check the preconditions of Theorem 1, and if possible automatically checkable conditions that imply these, so that behaviour protection of an extension can be checked effectively. This will enable the development of tooling to support the validation of language or transformation compositions. On the part of the formal framework, we need to study relaxations of our definitions so as to allow cases where there is a less than perfect match between the base DSML and the DSML to be woven in. 
Inspired by [START_REF] Katz | Aspect categories and classes of temporal properties[END_REF], we are also planning to study different categories of extensions, which do not necessarily need to be spectative (conservative), and whether syntactic characterisations exist for them, too. Fig. 1 . 1 Fig. 1. Production Line (a) metamodel and (b) concrete syntax (from [45]). Fig. 2 . 2 Fig. 2. Example of production line configuration. Fig. 3 . 3 Fig. 3. Assemble rule indicating how a new hammer is assembled (from [45]). Sample response time rule. Fig. 4 . 4 Fig. 4. Generic model of response time observer. Fig. 5 . 5 Fig. 5. Amalgamation in the category of DSMLs and DSML morphisms. Fig. 6 . 6 Fig. 6. Weaving of metamodels (highlighting added for illustration purposes). Fig. 7 . 7 Fig. 7. Amalgamation of the Assemble and RespTime rules. Fig. 8 . 8 Fig. 8. Positive and negative application conditions. Definition 4 . 4 (Rule amalgamation) Given transformation rules p i : (L i li← K i ri → R i , ac i ), for i = 0,1, 2, and rule morphisms f : p 0 → p 1 and g : p 0 → p 2 , the amalgamated production p 1 + p0 p 2 is the production (L l ← K r → R, ac) in the diagram below, where subdiagrams (1), ( Fig. 9 . 9 Fig. 9. Forward and backward retyping functors. m 1 1 |= ac 1 ⇐⇒ m |= Shift( g p L , ac 1 ) and m 2 |= ac 2 ⇐⇒ m |= Shift( f p L , ac 2 ), and therefore m 1 |= ac 1 ∧ m 2 |= ac 2 ⇐⇒ m |= ac. By a similar construction for the righthand sides we get the pushout K Please, notice the use of the cardinality constraint 1.. * in the rule in Figure4(c). It is out of the scope of this paper to discuss the syntactical facilities of the e-Motions system. Acknowledgments We are thankful to Andrea Corradini for his very helpful comments. We would also like to thank Javier Troya and Antonio Vallecillo for fruitful discussions and previous and on-going collaborations this work relies on. This work has been partially supported by CICYT projects TIN2011-23795 and TIN2007-66523, and by the AGAUR grant to the research group ALBCOM (ref. 00516).
60,060
[ "1003768", "872871", "950126" ]
[ "198404", "85878", "327716" ]
01485978
en
[ "info" ]
2024/03/04 23:41:48
2012
https://inria.hal.science/hal-01485978/file/978-3-642-37635-1_4_Chapter.pdf
Irina Mȃriuca Asȃvoae Frank De Boer Marcello M Bonsangue email: [email protected] Dorel Lucanu Jurriaan Rot email: [email protected] Bounded Model Checking of Recursive Programs with Pointers in K Keywords: pushdown systems, model checking, the K framework We present an adaptation of model-based verification, via model checking pushdown systems, to semantics-based verification. First we introduce the algebraic notion of pushdown system specifications (PSS) and adapt a model checking algorithm for this new notion. We instantiate pushdown system specifications in the K framework by means of Shylock, a relevant PSS example. We show why K is a suitable environment for the pushdown system specifications and we give a methodology for defining the PSS in K. Finally, we give a parametric K specification for model checking pushdown system specifications based on the adapted model checking algorithm for PSS. Introduction The study of computation from a program verification perspective is an effervescent research area with many ramifications. We take into consideration two important branches of program verification which are differentiated based on their perspective over programs, namely model-based versus semantics-based program verification. Model-based program verification relies on modeling the program as some type of transition system which is then analyzed with specific algorithms. Pushdown systems are known as a standard model for sequential programs with recursive procedures. Intuitively, pushdown systems are transition systems with a stack of unbounded size, which makes them strictly more expressive than finite The research of this author has been partially supported by Project POSDRU/88/ 1.5/S/47646 and by Contract ANCS POS-CCE, O2.1.2, ID nr 602/12516, ctr.nr 161/15.06.2010 (DAK). The research of this author has been funded by the Netherlands Organisation for state systems. More importantly, there exist fundamental decidability results for pushdown systems [START_REF] Bouajjani | Reachability Analysis of Pushdown Automata: Application to Model Checking[END_REF] which enable program verification via model checking [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF]. Semantics-based program verification relies on specification of programming language semantics and derives the program model from the semantics specification. For example, the rewriting logic semantics project [START_REF] Meseguer | The Rewriting Logics Semantics Project[END_REF] studies the unification of algebraic denotational semantics with operational semantics of programming languages. The main incentive of this semantics unification is the fact that the algebraic denotational semantics is executable via tools like the Maude system [10], or the K framework [START_REF] Roşu | An Overview of the K Semantic Framework[END_REF]. As such, a programming language (operational) semantics specification implemented with these tools becomes an interpreter for programs via execution of the semantics. The tools come with model checking options, so the semantics specification of a programming language have for-free program verification capabilities. The current work solves the following problem in the rewriting logic semantics project: though the semantics expressivity covers a quite vast and interesting spectrum of programming languages, the offered verification capabilities via model checking are restricted to finite state systems. 
Meanwhile, the fundamental results from pushdown systems provide a strong incentive for approaching the verification of this class of infinite transition systems from a semantics-based perspective. As such, we introduce the notion of pushdown system specifications (PSS), which embodies the algebraic specification of pushdown systems. Furthermore, we adapt a state-of-the-art model checking algorithm for pushdown systems [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] to work for PSS and present an algebraic specification of this algorithm implemented in the K tool [START_REF] Roşu | K-Maude: A Rewriting Based Tool for Semantics of Programming Languages[END_REF]. Our motivating example is Shylock, a programming language with recursive procedures and pointers, introduced by the authors in [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF]. Related work. K is a rewriting logic based framework for the design, analysis, and verification of programming languages, originating in the rewriting logic semantics project. K specifies transition systems and is built upon a continuationbased technique and a series of notational conventions to allow for more compact and modular executable programming language definitions. Because of the continuation-based technique, K specifications resemble PSS where the stack is the continuation. The most complex and thorough K specification developed so far is the C semantics [START_REF] Ellison | An Executable Formal Semantics of C with Applications[END_REF]. The standard approach to model checking programs, used for K specifications, involves the Maude LTL model checker [START_REF] Eker | The Maude LTL Model Checker[END_REF] which is inherited from the Maude back-end of the K tool. The Maude LTL checker, by comparison with other model checkers, presents a great versatility in defining the state properties to be verified (these being given as a rewrite theory). Moreover, the actual model checking is performed on-the-fly, so that the Maude LTL checker can verify systems with states that involve data in types of infinite cardinality under the assumption of a finite reachable state space. However, this assumption is infringed by PSS because of the stack which is allowed to grow unboundedly, hence the Maude LTL checker cannot be used for PSS verification. The Moped tool for model checking pushdown systems was successfully used for a subset of C programs [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] and was adapted for Java with full recursion, but with a fixed-size number of objects, in jMoped [START_REF] Esparza | A BDD-Based Model Checker for Recursive Programs[END_REF]. The WPDS ++ tool [START_REF] Kidd | WPDS++: A C++ Library for Weighted Pushdown Systems[END_REF] uses a weighted pushdown system model to verify x86 executable code. However, we cannot employ any of these dedicated tools for model checking pushdown systems because we work at a higher level, namely with specifications of pushdown system where we do not have the actual pushdown system. Structure of the paper. In Section 2 we introduce pushdown system specifications and an associated invariant model checking algorithm. In Section 3 we introduce the K framework by showing how Shylock's PSS is defined in K. In Section 4 we present the K specification of the invariant model checking for PSS and show how a certain type of bounded model checking can be directly achieved. 
Model Checking Specifications of Pushdown Systems In this section we discuss an approach to model checking pushdown system specifications by adapting an existing model checking algorithm for ordinary pushdown systems. Recall that a pushdown system is an input-less pushdown automaton without acceptance conditions. Basically, a pushdown system is a transition system equipped with a finite set of control locations and a stack. The stack consists of a non-a priori bounded string over some finite stack alphabet [START_REF] Bouajjani | Reachability Analysis of Pushdown Automata: Application to Model Checking[END_REF][START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF]. The difference between a pushdown system specification and an ordinary pushdown system is that the former uses production rules with open terms for the stack and control locations. This allows for a more compact representation of infinite systems and paves the way for applications of model checking to recursive programs defined by means of structural operational semantics. We assume a countably infinite set of variables Var = {v 1 , v 2 , . . .}. A signature Σ consists of a finite set of function symbols g 1 , g 2 , . . ., each with a fixed arity ar(g 1 ), ar(g 2 ), . . .. Function symbols with arity 0 are called constants. The set of terms, denoted by T Σ (Var ) and typically ranged over by s and t, is inductively defined from the set of variables Var and the signature Σ. A substitution σ replaces variables in a term with other terms. A term s can match term t if there exists a substitution σ such that σ(t) = s. A term t is said to be closed if no variables appear in t, and we use the convention that these terms are denoted as "hatted" terms, i.e., t. A pushdown system specification (PSS) is a tuple (Σ, Ξ, Var , ∆) where Σ and Ξ are two signatures, Var is a set of variables, and ∆ is a finite set of production rules (defined below). Terms in T Σ (Var ) define control locations of a pushdown system, whereas terms in T Ξ (Var ) define the stack alphabet. A production rule in ∆ is defined as a formula of the form (s, γ) ⇒ (s , Γ ) , where s and s are terms in T Σ (Var ), γ is a term in T Ξ (Var ), and Γ is a finite (possibly empty) sequence of terms in T Ξ (Var ). The pair (s, γ) is the source of the rule, and (s , Γ ) is the target. We require for each rule that all variables appearing in the target are included in those of the source. A rule with no variables in the source is called an axiom. The notions of substitution and matching are lifted to sequences of terms and to formulae as expected. Example 1. Let Var = {s, t, γ}, let Σ = {0, a, +} with ar(0) = ar(a) = 0 and ar(+) = 2, and let Ξ = {L, R} with ar(L) = ar(R) = 0. Moreover consider the following three production rules, denoted as a set by ∆: (a, γ) ⇒ (0, ε) (s + t, L) ⇒ (s, R) (s + t, R) ⇒ (t, LR) . Then (Σ, Ξ, Var , ∆) is a pushdown system specification. Given a pushdown system specification P = (Σ, Ξ, Var , ∆), a concrete configuration is a pair ŝ, Γ where ŝ is a closed term in T Σ (Var ) denoting the current control state, and Γ is a finite sequence of closed terms in T Ξ (Var ) representing the content of the current stack. A transition ŝ, γ • Γ -→ ŝ , Γ • Γ between concrete configurations is derivable from the pushdown system specification P if and only if there is a rule r = (s r , γ r ) ⇒ (s r , Γ r ) in ∆ and a substitution σ such that σ(s r ) = ŝ, σ(γ r ) = γ, σ(s r ) = ŝ and σ(Γ r ) = Γ . 
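To make the notion of derivable transition concrete, the following self-contained Python sketch (ours, not part of the K or Maude tooling) represents terms as nested tuples and variables as strings prefixed with "?", and computes the one-step successors of a concrete configuration by matching rule sources and instantiating rule targets; it is instantiated with the three production rules of Example 1.

```python
# A pushdown system specification in miniature: a production rule is a pair
# (source, target) with source = (control pattern, stack-top pattern) and
# target = (control pattern, tuple of stack patterns).  Representation is ours.

def match(pattern, term, subst):
    """One-way matching: extend subst so that subst(pattern) == term, or return None."""
    if isinstance(pattern, str) and pattern.startswith("?"):      # a variable
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term) and pattern[0] == term[0]:
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None                     # a constant

def apply(term, subst):
    if isinstance(term, str) and term.startswith("?"):
        return subst[term]
    if isinstance(term, tuple):
        return (term[0],) + tuple(apply(t, subst) for t in term[1:])
    return term

def step(rules, config):
    """All concrete configurations reachable in one derivable transition."""
    ctrl, stack = config
    if not stack:
        return
    for (src_ctrl, src_top), (tgt_ctrl, tgt_stack) in rules:
        s = match(src_ctrl, ctrl, {})
        s = match(src_top, stack[0], s) if s is not None else None
        if s is not None:
            yield (apply(tgt_ctrl, s),
                   tuple(apply(t, s) for t in tgt_stack) + stack[1:])

# Example 1: Sigma = {0, a, +}, Xi = {L, R} and the three rules of Delta.
rules = [(("a", "?g"), ("0", ())),                          # (a, γ)    => (0, ε)
         ((("+", "?s", "?t"), "L"), ("?s", ("R",))),        # (s+t, L)  => (s, R)
         ((("+", "?s", "?t"), "R"), ("?t", ("L", "R")))]    # (s+t, R)  => (t, LR)

conf = (("+", "a", ("+", "a", "a")), ("R",))                # <a+(a+a), R>
while True:                       # deterministic here: at most one rule applies
    succs = list(step(rules, conf))
    if not succs:
        break
    conf = succs[0]
# conf is now ("0", ("R",)), i.e. <0, R>, and no further transition is derivable.
```

The loop reproduces the derivation sequence spelled out in the text just below, stopping at ⟨0, R⟩.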
The above notion of pushdown system specification can be extended in the obvious way by allowing also conditional production rules and equations on terms. Continuing on Example 1, we can derive the following sequence of transitions: a + (a + a), R -→ a + a, LR -→ a, RR -→ 0, R . Note that no transition is derivable from the last configuration 0, R . A pushdown system specification P is said to be locally finite w.r.t. a concrete configuration ŝ, Γ , if the set of all closed terms appearing in the configurations reachable from ŝ, Γ by transitions derivable from the rules of P is finite. Note that this does not imply that the set of concrete configurations reachable from a configuration ŝ, Γ is finite, as the stack is not bounded. However all reachable configurations are constructed from a finite set of control locations and a finite stack alphabet. An ordinary finite pushdown system is thus a pushdown system specification which is locally finite w.r.t. a concrete initial configuration ĉ0 , and such that all rules are axioms, i.e., all terms appearing in the source and target of the rules are closed. For example, if we add (s, L) ⇒ (s+a, L) to the rules of the pushdown system specification P defined in Example 1, then it is not hard to see that there are infinitely many different location reachable from a, L , meaning that P is not locally finite w.r.t. the initial configuration a, L . However, if instead we add the rule (s, L) ⇒ (s, LL) then all reachable configurations from a, L will only use a or 0 as control locations and L as the only element of the stack alphabet. In this case P is locally finite w.r.t. the initial configuration a, L . A Model Checking Algorithm for PSS Next we describe a model checking algorithm for (locally finite) pushdown system specifications. We adapt the algorithm for checking LTL formulae against pushdown systems, as presented in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF], which, in turn, exploits the result from [START_REF] Bouajjani | Reachability Analysis of Pushdown Automata: Application to Model Checking[END_REF], where it is proved that for any finite pushdown system the set R(ĉ 0 ) of all configurations reachable from the initial configuration ĉ0 is regular. The LTL model checking algorithm in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] starts by constructing a finite automaton which recognizes this set R(ĉ 0 ). This automaton has the property that ŝ, Γ ∈ R(ĉ 0 ) if the string Γ is accepted in the automaton, starting from ŝ. According to [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF], the automaton associated to R(ĉ 0 ), denoted by A post * , can be constructed in a forward manner starting with ĉ0 , as described in Fig. 1. We use the notation x ∈ T Σ (Var ) for closed terms representing control states in P, γ, γ1 , γ2 ∈ T Ξ (Var ) for closed terms representing stack letters, ŷx,γ for the new states of the A post * automaton, f for the final states in A post * , while ŷ, ẑ, û stand for any state in A post * . The transitions in A post * are denoted by ŷ γ ẑ or ŷ ε ẑ. The notation ŷ Γ ẑ, where Γ = γ1 ..γ n , stands for ŷ γ1 .. γn ẑ. In Fig. 1 we present how the reachability algorithm in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] for generating A post * can be adjusted to invariant model checking pushdown system specifications. 
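Before detailing the adaptation, it may help to see the shape of the underlying saturation procedure in code. The Python sketch below (ours) follows the classical forward-reachability construction of [20], restricted as in Fig. 1 to rules pushing at most two stack symbols; the two ingredients added for PSS appear as the derive parameter, which stands for the on-the-fly derivation of transitions from P, and the invariant check phi applied to every newly reached control state. It is an illustration, not the algorithm of Fig. 1 verbatim.

```python
# Illustrative post* saturation with an invariant check (simplified; ours).
# derive(x, g) must yield the pairs (z, w) such that <x, g> --> <z, w> is
# derivable from the PSS, with w a tuple of at most two stack symbols.
# Transitions (x, g, y) with g = None play the role of epsilon transitions.

def apost_star(x0, g0, derive, phi, final="f"):
    if not phi(x0):
        return False, set()
    trans = {(x0, g0, final)}              # transitions still to be processed
    rel = set()                            # transitions of A_post* found so far
    while trans:
        x, g, y = trans.pop()
        if (x, g, y) in rel:
            continue
        rel.add((x, g, y))
        if g is None:                      # epsilon: propagate transitions out of y
            trans |= {(x, g2, y2) for (y1, g2, y2) in rel if y1 == y}
            continue
        for z, w in derive(x, g):
            if not phi(z):                 # the check added in lines 1, 10, 13, 16
                return False, rel
            if len(w) == 0:
                trans.add((z, None, y))
            elif len(w) == 1:
                trans.add((z, w[0], y))
            else:                          # push of two symbols: intermediate state
                mid = ("mid", z, w[0])     # plays the role of y_{z, w[0]}
                trans.add((z, w[0], mid))
                rel.add((mid, w[1], y))
                trans |= {(u, w[1], y) for (u, g2, m) in rel
                          if g2 is None and m == mid}
    return True, rel
```

Instantiating derive with the three rules of Example 1 and phi = lambda x: True, the procedure terminates and rel describes the automaton accepting exactly the configurations reachable from ⟨a + (a + a), R⟩; a phi that fails at some reachable control state makes the search stop at the first violation, which is the behaviour introduced by the added lines of Fig. 1.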
We emphasize that the transformation is minimal and consists in: (a) The modification in the lines containing the code: "for all ẑ such that x, γ → ẑ,ˆ is a rule in the pushdown system do" i.e., lines 9, 12, 15 in Fig. 1, where instead of rules in the pushdown system we use transitions derivable from the pushdown system specification as follows: "for all ẑ such that x, γ -→ ẑ,ˆ is derivable from P do" (b) The addition of lines 1, 10, 13, 16 where the state invariant φ is checked to hold in the newly discovered control state y. This approach for producing the A post * in a "breadth-first" manner is particularly suitable for specifications of pushdown systems as we can use the newly discovered configurations to produce transitions based on ∆, the production rules in P. Note that we assume, without loss of generality, that the initial stack has one symbol on it. Note that in the algorithm Apost* of [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF], the set of states of the automaton is determined statically at the beginning. This is clearly not possible starting with a PSS, because this set is not known in advance, and could be infinite if the algorithm does not terminate. Hence, the states that are generated when needed, that is, in line 9, 12 and 15, where the derivable transitions are considered. We give next some keynotes on the algorithm in Fig. 1. The "trans" variable is a set containing the transitions to be processed. Along the execution of the algorithm Apost*(φ, P), the transitions of the A post * automaton are incrementally deposited in the "rel" variable which is a set where we collect transitions in the A post * automaton. The outermost while is executed until the end, i.e., until "trans" is empty, only if all states satisfy the control state formula φ. Hence, the algorithm in Fig. 1 verifies the invariant φ. In case φ is a state invariant for the pushdown system specification, the algorithm collects in "rel" the entire automaton A post * . Otherwise, the algorithm stops at the first encountered state x which does not satisfy the invariant φ. Note that the algorithm in Fig. 1 assumes that the pushdown system specification has only rules which push on the stack at most two stack letters. This assumption is inherited from the algorithm for A post * in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] where the requirement is imposed without loss of generality. The approach in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] is to adopt a standard construction for pushdown systems which consists in transforming the rules that push on the stack more than two stack letters into multiple rules that push at most two letters. Namely, any rule r in the pushdown system, of the form x, γ → x , γ1 ..γ n with n ≥ 3, is transformed into the following rules: x, γ → x , νr,n-2 γn , x , νr,i → x , νr,i-1 γi+1 , x , νr,1 → x , γ1 γ2 where 2 ≤ i ≤ n -2 and νr,1 , .., νr,n-2 are new stack letters. This transformation produces a new pushdown system which simulates the initial one, hence the assumption in the A post * generation algorithm does not restrict the generality. However, the aforementioned assumption makes impossible the application of the algorithm Apost* to pushdown system specifications P for which the stack can be increased with any number of stack symbols. 
The reason is that [START_REF] Roşu | K-Maude: A Rewriting Based Tool for Semantics of Programming Languages[END_REF] for all ẑ such that x, γ -→ ẑ, γ1..γn is derivable from P with n ≥ 2 do 16 if ẑ |= φ then return false; 17 trans := trans ∪{ẑ γ1 ŷẑ,γ 1 }; 18 rel := rel ∪{ŷ ẑ,ν(r,i) γi+2 ŷẑ,ν(r,i+1) | 0 ≤ i ≤ n -2}; where r denotes x, γ -→ ẑ, γ1..γn and ν(r, i), 1 ≤ i ≤ n -2 are new symbols (i.e., ν is a new function symbol s.t. ar(ν) = 2) and ŷẑ,ν(r,0) = ŷẑ,γ 1 and ŷẑ,ν(r,n-1) = ŷ 19 for all P defines rule schemas and we cannot identify beforehand which rule schema applies for which concrete configuration, i.e., we cannot identify the r in ν r,i . û ε ŷẑ,ν(r,i) ∈ rel, 0 ≤ i ≤ n -2 do 20 trans := trans ∪{û γi+2 ŷẑ,ν(r,i+1) | 0 ≤ i ≤ n -2}; Our solution is to obtain a similar transformation on-the-fly, as we apply the Apost* algorithm and discover instances of rule schemas which increase the stack, i.e., we discover r. This solution induces a localized modification of the lines 15 through 20 of the Apost* algorithm, as described in Fig. 2. We denote by Apost*gen the Apost* algorithm in Fig. 1 with the lines 15 through 20 replaced by the lines in Fig. 2. The correctness of the new algorithm is a rather simple generalization of the one presented in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF]. 3 Specification of Pushdown Systems in K In this section we introduce K by means of an example of a PSS defined using K, and we justify why K is an appropriate environment for PSS. A K specification evolves around its configuration, a nested bag of labeled cells denoted as content label , which defines the state of the specified transition system. The movement in the transition system is triggered by the K rules which define transformations made to the configuration. A key component in this mechanism is introduced by a special cell, labeled k, which contains a list of computational tasks that are used to trigger computation steps. As such, the K rules that specify transitions discriminate the modifications made upon the configuration based on the current computation task, i.e., the first element in the k-cell. This instills the stack aspect to the k-cell and induces the resemblance with a PSS. Namely, in a K configuration we make the conceptual separation between the k-cell, seen as the stack, and the rest of the cells which form the control location. Consequently, we promote K as a suitable environment for PSS. In the remainder of this section we describe the K definition of Shylock by means of a PSS that is based on the operational semantics of Shylock introduced in [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF]. In Section 3.1 we present the configuration of Shylock's K implementation with emphasis on the separation between control locations and stack elements. In Section 3.2 we introduce the K rules for Shylock, while in Section 3.3 we point out a methodology of defining in K production rules for PSS. We use this definition to present K notations and to further emphasize and standardize a K style for defining PSS. Shylock's K Configuration The PSS corresponding to Shylock's semantics is given in terms of a programming language specification. First, we give a short overview of the syntax of Shylock as in [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF], then describe how this syntax is used in Shylock's K-configuration. 
A Shylock program is finite set of procedure declarations of the form p i :: B i , where B i is the body of procedure p i and denotes a statement defined by the grammar: B ::= a.f := b | a := b.f | a := new | [a = b]B | [a = b]B | B + B | B; B | p We use a and b for program variables ranging over G ∪ L, where G and L are two disjoint finite sets of global and local variables, respectively. Moreover we assume a finite set F of field names, ranged over by f . G, L, F are assumed to be defined for each program, as sets of Ids, and we assume a distinguished initial program procedure main. Hence, the body of a procedure is a sequence of statements that can be: assignments or object creation denoted by the function " := " where ar(:=) = 2 (we distinguish the object creation by the "new" constant appearing as the second argument of ":="); conditional statements denoted by "[ ] "; nondeterministic choice given by " + "; and function calls. Note that K proposes the BNF notation for defining the language syntax as well, with the only difference that the variables are replaced by their respective sorts. A K configuration is a nested bag of labeled cells where the cell content can be one of the predefined types of K, namely K , Map, Set, Bag, List. The K configuration used for the specification of Shylock is the following: K k Map var Map fld h heap Set G Set L Set F Map P pgm K kAbs The pgm-cell is designated as a program container where the cells G, L, F maintain the above described finite sets of variables and fields associated to a program, while the cell P maintains the set of procedures stored as a map, i.e., a set of map items p → B. The heap-cell contains the current heap H which is formed by the variable assignment cell var and the field assignment cell h. The var cell contains the mapping from local and global variables to their associated identities ranging over N ⊥ = N ∪ {⊥}, where ⊥ stands for "not-created". The h cell contains a set of fld cells, each cell associated to a field variable from F . The mapping associated to each field contains items of type n →m, where n, m range over the object identities space N ⊥ . Note that any fld-cell always contains the item ⊥ →⊥ and ⊥ is never mapped to another object identity. Intuitively, the contents of the heap-cell form a directed graph with nodes labeled by object identities (i.e., values from N ⊥ ) and arcs labeled by field names. Moreover, the contents of the var-cell (i.e., the variable assignment) define entry nodes in the graph. We use the notion of visible heap, denoted as R(H), for the set of nodes reachable in the heap H from the entry nodes. The k-cell maintains the current continuation of the program, i.e., a list of syntax elements that are to be executed by the program. Note that the sort K is tantamount with an associative list of items separated by the set-aside symbol " ". The kAbs-cell is introduced for handling the heap modifications required by the semantics of certain syntactic operators. In this way, we maintain in the cell k only the "pure" syntactic elements of the language, and move into kAbs any additional computational effort used by the abstract semantics for object creation, as well as for procedure call and return. In conclusion, the k-cell stands for the stack in a PSS P, while all the other cells, including kAbs, form together the control location. 
Hence the language syntax in K practically gives a sub-signature of the stack signature in P, while the rest of the cells give a sub-signature, the control location signature in P. Shylock's K Rules We present here the K rules which implement the abstract semantics of Shylock, according to [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF]. Besides introducing the K notation for rules, we also emphasize on the separation of concerns induced by viewing the K definitions as PSS. In K we distinguish between computational rules that describe state transitions, and structural rules that only prepare the current state for the next transition. Rules in K have a bi-dimensional localized notation that stands for "what is above a line is rewritten into what is bellow that line in a particular context given by the matching with the elements surrounding the lines". Note that the solid lines encode a computational rule in K which is associated with a rewrite rule, while the dashed lines denote a structural rule in K, which is compiled by the K-tool into a Maude equation. The production rules in PSS are encoded in K by computational rules which basically express changes to the configuration triggered by an atomic piece of syntax matched at the top of the stack, i.e., the k-cell. An example of such encoding is the following rule: rule a.f := b • ••• k ••• v(a) → v(b) ••• fld(f ) v var when v(a) = Bool ⊥ which reads as: if the first element in the cell k is the assignment a.f := b then this is consumed from the stack and the map associated to the field f , i.e., the content of the cell fld(f ), is modified by replacing whatever object identity was pointed by v(a) with v(b), i.e., the object identity associated to the variable b by the current variable assignment v, only when a is already created, i.e., v(a) is not ⊥. Note that this rule is conditional, the condition being introduced by the keyword "when". We emphasize the following notational elements in K that appear in the above rule: " " which stands for "anything" and the ellipses "•••". The meaning of the ellipses is basically the same as " " the difference being that the ellipses appear always near the cell walls and are interpreted according to the contents of the respective cell. For example, given that the content of the k-cell is a list of computational tasks separated by " ", the ellipses in the k-cell from the above rule signify that the assignment a.f := b is at the top of the stack of the PSS. On the other hand, because the content of a fld cell is of sort Map which is a commutative sequence of map items, the ellipses appearing by both walls of the cell fld denote that the item v(a) → may appear "anywhere" in the fld-cell. Meanwhile, the notation for the var cell signifies that v is the entire content of this cell, i.e., the map containing the variable assignment. Finally, "•" stands for the null element in any K sort, hence "•" replacing a.f := b at the top of the k-cell stands for ε from the production rules in P. All the other rules for assignment, conditions, and sequence are each implemented by means of a single computational rule which considers the associated piece of syntax at the top of the k-cell. The nondeterministic choice is implemented by means of two computational rules which replace B 1 + B 2 at the top of a k-cell by either B 1 or B 2 . Next we present the implementation of one of the most interesting rules in Shylock namely object creation. 
The common semantics for an object creation is the following: if the current computation (the first element in the cell k) is "a:=new", then whatever object was pointed by a in the var-cell is replaced with the "never used before" object "oNew " obtained from the cell kAbs . Also, the fields part of the heap, i.e., the content of h-cell, is updated by the addition of a new map item "oNew → ⊥". However, in the semantics proposed by Shylock, the value of oNew is the minimal address not used in the current visible heap which is calculated by the function min(R(H) c ) that ends in the normal form oNew(n). This represents the memory reuse mechanism which is handled in our implementation by the kAbs-cell. Hence, the object creation rules are: rule a := new ••• k H heap • min(R(H) c ) kAbs rule a := new ••• k H h h oNew(n) • update H h with n →⊥ kAbs rule a := new • ••• k ••• x → n ••• var H h H h h oNew(n) updated(H h ) • kAbs where "min(R(H) c )" finds n, the first integer not in R(H), and ends in oNew(n), then "update Bag with MapItem" adds n → ⊥ to the map in each cell fld contained in the h-cell and ends in the normal form updated(Bag). Note that all the operators used in the kAbs-cell are implemented equationally, by means of structural K-rules. In this manner, we ensure that the computational rule which consumes a := new from the top of the k-cell is accurately updating the control location with the required modification. The rules for procedure call/return are presented in Fig. 3. They follow the same pattern as the one proposed in the rules for object creation. The renaming rule p ••• k H heap L L G G F F ••• p → B ••• P • processingCall(H, L, G, F ) kAbs rule p B restore(H) ••• k H H heap ••• p → B ••• P processedCall(H ) • kAbs rule restore(H ) ••• k H heap L L G G F F • processingRet(H, H , L, G, F ) kAbs rule restore( ) • ••• k H H heap processedRet(H ) • kAbs Fig. 3. K-rules for the procedure's call and return in Shylock scheme defined for resolving name clashes induced by the memory reuse for object creation is based in Shylock on the concept of cut points as introduced in [START_REF] Rinetzky | A Semantics for Procedure Local Heaps and its Abstractions[END_REF]. Cut points are objects in the heap that are referred to from both local and global variables, and as such, are subject to modifications during a procedure call. Recording cut points in extra logical variables allows for a sound return in the calling procedure, enabling a precise abstract execution w.r.t. object identities. For more details on the semantics of Shylock we refer to [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF]. Shylock as PSS The benefit of a Shylock's K specification lies in the rules for object creation, which implement the memory reuse mechanism, and for procedure call/return, which implement the renaming scheme. Each element in the memory reuse mechanism is implemented equationally, i.e., by means of structural K rules which have equational interpretation when compiled in Maude. Hence, if we interpret Shylock as an abstract model for the standard semantics, i.e., with standard object creation, the K specification for Shylock's abstract semantics renders an equational abstraction. As such, Shylock is yet another witness to the versatility of the equational abstraction methodology [START_REF] Meseguer | Equational Abstractions[END_REF]. 
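Outside of K, the memory-reuse step can be pictured with ordinary data structures. In the sketch below (ours, plain Python, not the K specification), a heap is a variable assignment together with one partial map per field, the visible heap R(H) is obtained by reachability from the values of the variables, and a := new binds a to the least identity not occurring in R(H) while resetting its fields to ⊥; the cut-point bookkeeping used at procedure boundaries is deliberately not modelled.

```python
# Illustrative model of Shylock's heap and of memory-reusing object creation:
# the fresh identity is min(R(H)^c), the least identifier outside the visible heap.

BOT = None                                     # the "not created" identity ⊥

def visible_heap(var, flds):
    """R(H): identities reachable from the variable assignment via field links."""
    seen, todo = set(), [o for o in var.values() if o is not BOT]
    while todo:
        o = todo.pop()
        if o in seen:
            continue
        seen.add(o)
        for f in flds.values():                # each f maps identity -> identity
            succ = f.get(o, BOT)
            if succ is not BOT and succ not in seen:
                todo.append(succ)
    return seen

def new_object(a, var, flds):
    """a := new with memory reuse: pick the least identity outside R(H)."""
    used = visible_heap(var, flds)
    o = 0
    while o in used:
        o += 1
    var[a] = o
    for f in flds.values():                    # a fresh object has all fields at ⊥
        f[o] = BOT
    return o

# Repeated creation through a single global variable g:
var, flds = {"g": BOT}, {"next": {}}
print(new_object("g", var, flds))   # 0   (from g:⊥, the visible heap is empty)
print(new_object("g", var, flds))   # 1   (0 is still visible through g)
print(new_object("g", var, flds))   # 0   (the old 0 is unreachable and is reused)
```

Repeating the creation in this way cycles through the identities 0 and 1 only, which is precisely the finite identity space exploited in Example 2 below.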
Under the assumption of a bounded heap, the K specification for Shylock is a locally finite PSS and compiles in Maude into a rewriting system. Obviously, in the presence of recursive procedures, the stack grows unboundedly and, even if Shylock produces a finite pushdown system, the equivalent transition system is infinite and so is the associated rewriting system. We give next a relevant example for this idea. Example 2. The following Shylock program, denoted as pgm0, is the basic example we use for Shylock. It involves a recursive procedure p0 which creates an object g. gvars: g main :: p0 p0 :: g:=new; p0 In a standard semantics, because the recursion is infinite, so is the set of object identities used for g. However, Shylock's memory reuse guarantees to produce a finite set of object identities, namely ⊥, 0, 1. Hence, the pushdown system associated to pgm0 Shylock program is finite and has the following (ground) rules: (g:⊥, main) → (g:⊥, p0; restore(g:⊥)) (g:⊥, p0) → (g:⊥, g := new; p0; restore(g:⊥)) (g:⊥, g := new) → (g:0, ) (g:0, p0) → (g:0, g := new; p0; restore(g:0)) (g:0, g := new) → (g:1, ) (g:1, p0) → (g:1, g := new; p0; restore(g:1)) (g:1, g := new) → (g:0, ) Note that we cannot obtain the pushdown system by the exhaustive execution of Shylock[pgm0] because the exhaustive execution is infinite due to recursive procedure p0. For the same reason, Shylock[pgm0] specification does not comply with Maude's LTL model checker prerequisites. Moreover, we cannot use directly the dedicated pushdown systems model checkers as these work with the pushdown system automaton, while Shylock[pgm0] is a pushdown system specification. This example creates the premises for the discussion in the next section where we present a K-specification of a model checking procedure amenable for pushdown systems specifications. Model Checking K Definitions We recall that the PSS perspective over the K definitions enables the verification by model checking of a richer class of programs which allow (infinite) recursion. In this section we focus on describing kA post * (φ, P), the K specification of the algorithm Apost*gen. Note that kA post * (φ, P) is parametric, where the two parameters are P, the K specification of a pushdown system, and φ a control state invariant. We describe kA post * (φ, P) along justifying the behavioral equivalence with the algorithm Apost*gen. The while loop in Apost*gen, in Fig. 1, is maintained in kA post * by the application of rewriting, until the term reaches the normal form, i.e. no other rule can be applied. This is ensured by the fact that from the initial configuration: Init ≡ • traces • traces x 0 γ0 f trans • rel • memento φ formula true return collect the rules keep applying, as long as trans-cell is nonempty. We assume that the rewrite rules are applied at-random, so we need to direct/pipeline the flow of their application via matching and conditions. The notation rulei [label] in the beginning of each rule hints, via [label], towards which part of the Apost*gen algorithm that rule is handling. In the followings we discuss each rule and justify its connection with code fragments in Apost*gen. The last rule, ruleP, performs the exhaustive unfolding for a particular configuration in cell trace. We use this rule in order to have a parametric definition of the kA post * specification, where one of the parameters is P, i.e., the K specification of the pushdown system. 
Recall that the other parameter is the specification of the language defining the control state invariant properties Init ≡ • traces • traces x0 γ 0 f trans • rel • memento φ formula true return collect rule1 [if x γ ŷ ∈ rel else] : • traces • traces ••• ••• x γ y • ••• trans ••• x γ y ••• rel • memento ••• collect rule2 [if (x γ ŷ ∈ rel then...if γ = else] : • traces • traces ••• ••• x y (x Rel[y -]) ••• trans • x y Rel rel • memento ••• collect when x y ∈ Rel rule3 [if x γ ŷ ∈ rel then...if γ = then] : • x ctrl γ k trace traces • traces ••• ••• x γ y • ••• trans • x γ y Rel rel • x γ y memento ••• collect when x γ y ∈ Rel and Bool γ = ε rule4 [for all ẑ s.t. x, γ -→ ẑ, ε|γ|γ1..γn is derivable from P do if ẑ |=φ then] : • traces z ctrl ••• trace • traces • trans • rel • memento φ formula true f alse return collect when z |= φ rule5 [for all ẑ s.t. x, γ -→ ẑ, ε|γ1 is derivable from P do] : • traces ••• z ctrl Γ k trace • ••• traces ••• ••• • z Γ y ••• trans x γ y memento φ formula ••• collect when |Γ | ≤ 1 and Bool z |= φ rule6 [for all ẑ s.t. x, γ -→ ẑ, γ1..γn is derivable from P do] : • traces ••• z ctrl γ Γ k trace • ••• traces ••• • z γ new(z, γ ) (Rel[ new(z, γ ), news(x, γ, z, γ , Γ )] Γ news(x, γ, z, γ , Γ ), y) φ which are to be verified on the produced pushdown system. ruleP takes x ctrl γ Γ k a configuration in P and gives, based on the rules in P, all the configurations z i ctrl Γ i Γ k , 0 ≤ i ≤ n obtained from x ctrl γ Γ k after exactly one rewrite. • traces • traces ••• x γ y • memento ••• collect ruleP [all ẑ s.t. x, γ -→ ẑ, Γ is derivable from P] : ••• x ctrl γ Γ k trace • ••• traces ••• • z0 ctrl Γ0 Γ k trace.. zn ctrl Γn Γ k trace ••• traces The pipeline stages are the following sequence of rules' application: rule3ruleP(rule4 + rule5 + rule6) * rule7 The cell memento is filled in the beginning of the pipeline, rule3, and is emptied at the end of the pipeline, rule7. We use the matching on a nonempty memento for localizing the computation in Apost*gen at the lines 7 -20. We explain next the pipeline stages. Firstly, note that when no transition derived from P is processed by kA post * we enforce cells traces, traces to be empty (with the matching • traces • traces ). This happens in rules 1 and 2 because the respective portions in Apost*gen do not need new transitions derived from P to update "trans" and "rel". The other cases, namely when the transitions derived from P are used for updating "trans" and "rel", are triggered in rule3 by placing the desired configuration in the cell traces, while the cell traces is empty. At this point, since all the other rules match on either traces empty, or traces nonempty, only ruleP can be applied. This rule populates traces with all the next configurations obtained by executing P. After the application of ruleP, only one of the rules 4, 5, 6 can apply because these are the only rules in kA post * matching an empty traces and a nonempty traces . Among the rules 4,5,6 the differentiation is made via conditions as follows: rule4 handles all the cases when the new configuration has a control location z which does not verify the state invariant φ (i.e., lines 10, 13, 16 in Apost*gen). In this case we close the pipeline and the algorithm by emptying all the cells traces, traces, trans. Note that all the rules handling the while loop match on at least a nonempty cell traces, traces, or trans, with a pivot in a nonempty trans. rules 5 and 6 are applied disjunctively of rule4 because both have the condition z |= φ. 
Next we describe these two rules. rule5 handles the case when the semantic rule in P which matches the current < x, γ > does not increase the size of the stack. This case is associated with the lines 9 and 11, 12 and 14 in Apost*gen. rule6 handles the case when the semantic rule in P which matches the current < x, γ > increases the stack size and is associated with lines 15 and 17 -20 in Apost*gen. Both rules 5 and 6 use the memento cell which is filled upon pipeline initialization, in rule3. The most complicated rule is rule6, because it handles a for all piece of code, i.e., lines 17 -20 in Fig. 2. This part is reproduced by matching the entire content of cell rel with Rel, and using the projection operator: Rel[ γ z 1 , .., z n ] := {u | (u, γ, z 1 ) ∈ Rel}, .., {u | (u, γ, z n ) ∈ Rel} where z 1 , .., z n in the left hand-side is a list of z-symbols, while in the right hand-side we have a list of sets. Hence, the notation: (Rel[ new(z, γ ), news(x, γ, z, γ , Γ )] Γ news(x, γ, z, γ , Γ ), y) in rule6 cell trans stands for the lines 17 and 19-20 in Fig. 2. (Note that instead of notation r for rule < x, γ >-→< ẑ, γ Γ > we use the equivalent unique representation (x, γ, ẑ, γ , Γ ) and that instead of ŷẑ,ν(r,0) we use directly ŷẑ,γ , i.e., new(z, γ ), while instead of ŷẑ,ν(r,n-1) in Fig. 2 we use directly ŷ.) Also, the notation in cell rel: "new(z, γ ), news(x, γ, z, γ , Γ ) Γ news(x, γ, z, γ , Γ ), y" stands for line 18 in Fig. 2. rules 4, 5, 6 match on a nonempty traces -cell and an empty traces, and no other rule matches alike. rule7 closes the pipeline when the traces cell becomes empty by making the memento cell empty. Note that traces empties because rules 4, 5, 6 keep consuming it. Example 3. We recall that the Shylock program pgm0 from Example 2 was not amenable by semantic exhaustive execution or Maude's LTL model checker, due to the recursive procedure p0. Likewise, model checkers for pushdown systems which can handle the recursive procedure p0 cannot be used because Shylock[pgm0], the pushdown system obtained from Shylock's PSS, is not available. However, we can employ kA post * for Shylock's K-specification in order to discover the reachable state space, the A post * automata, as well as the pushdown system itself. In the Fig. 5 we describe the first steps in the execution of kA post * (true, Shylock[pgm0]) and the reachability automaton generated automatically by kA post * (true, Shylock[pgm0]). Bounded Model Checking for Shylock One of the major problems in model checking programs which manipulate dynamic structures, such as linked lists, is that it is not possible to bound a priori the state space of the possible computations. This is due to the fact that programs may manipulate the heap by dynamically allocating an unbounded number of new objects and by updating reference fields. This implies that the reachable state space is potentially infinite for Shylock programs with recursive procedures. Consequently for model checking purposes we need to impose some suitable bounds on the model of the program. A natural bound for model checking Shylock programs, without necessarily restricting their capability of allocating an unbounded number of objects, is to impose constraints on the size of the visible heap [START_REF] Bouajjani | Context-Bounded Analysis of Multithreaded Programs with Dynamic Linked Structures[END_REF]. Such a bound still allows for storage of an unbounded number of objects onto the call-stack, using local variables. 
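A bound of this kind, written le(k) in the next paragraph, boils down to a reachability computation over the objects visible from the current variables. The sketch below is one possible reading in Python; the concrete representation of heaps (variables as a map to object identities, fields as partial maps on identities) is an assumption made for the illustration and is not Shylock's actual state representation.

def visible_heap(variables, fields):
    """Objects reachable from the global and local variables through the fields.
    variables: dict  name -> object id or None;  fields: dict  name -> dict(id -> id)."""
    frontier = [o for o in variables.values() if o is not None]
    seen = set()
    while frontier:
        o = frontier.pop()
        if o in seen:
            continue
        seen.add(o)
        for f in fields.values():
            target = f.get(o)
            if target is not None and target not in seen:
                frontier.append(target)
    return seen

def le(k):
    """State predicate bounding the visible heap to at most k objects (whether the
    comparison is strict is immaterial for this sketch)."""
    return lambda variables, fields: len(visible_heap(variables, fields)) <= k

Objects referenced only from frames deeper in the call-stack do not count towards this bound, which is precisely why it leaves room for storing an unbounded number of objects across recursive calls.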
Thus termination is guaranteed with heap-bounded model checking of the form |= k φ meaning |= φ ∧ le(k), where le(k) verifies if the size of the visible heap is smaller than k. To this end, we define the set of atomic propositions (φ ∈) Rite as the smallest set defined by the following grammar: r ::= ε | x | ¬x | f | r.r | r + r | r * where x ranges over variable names (to be used as tests) and f over field names (to be used as actions). The atomic proposition in Rite are basically expressions from the Kleene algebra with tests [START_REF] Kozen | Kleene Algebra with Tests[END_REF], where the global and local variables are used as nominals while the fields constitute the set of basic actions. The K specification of Rite is based on the circularity principle [START_REF] Goguen | Circular Coinductive Rewriting[END_REF][START_REF] Bonsangue | A Decision Procedure for Bisimilarity of Generalized Regular Expressions[END_REF] to handle the possible cycles in the heap. We employ Rite with kA post * (φ, P), i.e., φ ∈ Rite, for verifying heap-shape properties for Shylock programs. For the precise definition of the interpretation of these expressions in a heap we refer to the companion paper [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF]. We conclude with an example showing a simple invariant property of a Shylock program. This is an example of a program which induces, on some computation path, an unbounded heap. When we apply the heap-bounded model checking specification, by instantiating φ with the property le (10), we collect all lists with a length smaller or equal than 10. We can also check the heap-shape property "(¬first+first.next * .last)". This property says that either the first object is not defined or the last object is reached from first via the next field. Conclusions In this paper we introduced pushdown system specifications (PSS) with an associated invariant model checking algorithm Apost*gen. We showed why the K framework is a suitable environment for pushdown systems specifications, but not for their verification via the for-free model checking capabilities available in K. We gave a K specification of invariant model checking for pushdown system specifications, kA post * , which is behaviorally equivalent with Apost*gen. To the best of our knowledge, no other model checking tool has the flexibility of having structured atomic propositions and working with the generation of the state space on-the-fly. Future work includes the study of the correctness of our translation of Shylock into the K framework as well as of the translation of the proposed model checking algorithm and its generalization to any LTL formula. From a more practical point of view, future applications of pushdown system specifications could be found in semantics-based transformation of real programming languages like C or Java or in benchmark-based comparisons with existing model-based approaches for program verification. Fig. 1 . 1 Fig. 1. The algorithm for obtaining Apost * adapted for pushdown system specifications. Fig. 2 . 2 Fig. 2. The modification required by the generalization of the algorithm Apost*. Fig. 4 . 4 Fig. 4. kApost * (φ, P) p0 news( g →⊥ var • h heap , p0, g →⊥ var • h heap , g:=new, p0 restore(...), 1) restore(...) new( g →⊥ var • h heap , p0) rel rule7 • traces • traces • memento rule3 ... Fig. 5 . 5 Fig. 5. 
The first pipeline iteration for kApost * (true, Shylock[pgm0]) and the automatically produced reachability automaton at the end of kApost * (true, Shylock[pgm0]). Note that for legibility reasons we omit certain cells appearing in the control state, like g G • L • F main →p0 p0 →g := new; p0 P pgm, which do not change along the execution. Hence, for example, the ctrl-cell is filled in rule3 with both cells heap and pgm. Example 4 . 4 The following Shylock program pgmList creates a potentially infinite linked list which starts in object first and ends with object last. gvars: first, last lvars: tmp flds: next main :: last:=new; last.next:=last; first:=last; p0 p0 :: tmp:=new; tmp.next:=first; first:=tmp; (p0 + skip) • traces • traces g →⊥ var • h heap main fin trans • rel • memento true formula true return collect rule3 g →⊥ var • h heap ctrl main k traces • trans g →⊥ var • h heap main fin rel g →⊥ var • h heap main fin memento ruleP • traces g →⊥ var • h heap ctrl p0 restore( g →⊥ var • h heap ) k traces →⊥ var • h heap , p0) trans g →⊥ var • h heap main fin new( g →⊥ var • h heap , p0) →⊥ var • h heap , p0) memento g →⊥ var • h heap main fin new( g →⊥ var • h heap , p0) →⊥ var • h heap , p0) rel →⊥ var • h heap , p0) memento g →⊥ var • h heap g:=new new( g →⊥ var • h heap , g := new) trans g →⊥ var • h heap main fin new( g →⊥ var • h heap , p0) rule6 • traces • traces g →⊥ var • h heap main fin memento g →⊥ var • h heap p0 new( g restore(...) fin rel rule7 • traces • traces • memento rule3 g →⊥ var • h heap ctrl p0 k traces • trans g →⊥ var • h heap p0 new( g restore(...) fin g →⊥ var • h heap p0 new( g ruleP • traces g →⊥ var • h heap ctrl g := new p0 restore( g →⊥ var • h heap ) k traces rule6 • traces • traces g →⊥ var • h heap p0 new( g restore(...) fin g →⊥ var • h heap p0 new( g →⊥ var • h heap , p0) new( g →⊥ var • h heap , g := new) Scientific Research (NWO), CoRE project, dossier number: 612.063.920. Acknowledgments. We would like to thank the anonymous reviewers for their helpful comments and suggestions.
47,009
[ "1003769", "1003770", "895511", "966814", "1003771" ]
[ "452729", "20495", "135222", "135222", "20495", "452729", "135222", "20495" ]
01485980
en
[ "info" ]
2024/03/04 23:41:48
2012
https://inria.hal.science/hal-01485980/file/978-3-642-37635-1_6_Chapter.pdf
Roberto Bruni email: [[email protected] Andrea Corradini email: [email protected] Fabio Gadducci email: gadducci]@di.unipi.it Alberto Lluch Lafuente email: [email protected] Andrea Vandin email: andrea.vandin]@imtlucca.it Alberto Lluch Lafuente Adaptable Transition Systems Keywords: Adaptation, autonomic systems, control data, interface automata à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Self-adaptive systems have been advocated as a convenient solution to the problem of mastering the complexity of modern software systems, networks and architectures. In particular, self-adaptivity is considered a fundamental feature of autonomic systems, that can specialise to several other self-* properties like selfconfiguration, self-optimisation, self-protection and self-healing. Despite some valuable efforts (see e.g. [START_REF] Salehie | Self-adaptive software: Landscape and research challenges[END_REF][START_REF] Lints | The essentials in defining adaptation[END_REF]), there is no general agreement on the notion of adaptivity, neither in general nor in software systems. There is as well no widely accepted foundational model for adaptivity. Using Zadeh's words [START_REF] Zadeh | On the definition of adaptivity[END_REF]: "it is very difficult -perhaps impossible-to find a way of characterizing in concrete terms the large variety of ways in which adaptive behavior can be realized". Zadeh's concerns were conceived in the field of Control Theory but are valid in Computer Science as well. Zadeh' skepticism for a concrete unifying definition of adaptivity is due to the attempt to subsume two aspects under the same definition: the external manifestations of adaptive systems (sometimes called black-box adaptation), and the internal mechanisms by which adaptation is achieved (sometimes called white-box adaptation). The limited effort placed so far in the investigation of the foundations of adaptive software systems might be due to the fact that it is not clear what are the characterising features that distinguish adaptive systems from those that are not so. For instance, very often a software system is considered "self-adaptive" if it "modifies its own behavior in response to changes in its operating environment" [START_REF] Oreizy | An architecture-based approach to self-adaptive software[END_REF], when the software system realises that "it is not accomplishing what the software is intended to do, or better functionality or performance is possible" [START_REF] Robertson | Introduction to self-adaptive software: Applications[END_REF]. But, according to this definition, almost any software system can be considered self-adaptive, since any system of a reasonable complexity can modify its behaviour (e.g. following one of the different branches of a conditional statement) as a reaction to a change in its context of execution (e.g. values of variables or parameters). Consider the automaton of Fig. 1, which models a server providing a task execution service. Each state has the format s{q} [r] where s can be either D (the server is down) or U (it is up), and q, r are possibly empty sequences of t symbols representing, respectively, the lists of tasks scheduled for execution and the ones received but not scheduled yet. Transitions are labelled with t? (receive a task), u! (start-up the server), s! (schedule a task), f! 
(notify the conclusion of a task), and d! (shut-down the server). Annotations ? and ! denote input and output actions, respectively. Summing up, the server can receive tasks, start up, schedule tasks and notify their termination, and eventually shut down. Now, is the modelled server self-adaptive? One may argue that indeed it is, since the server schedules tasks only when it is up. Another argument can be that the server is self-adaptive since it starts up only when at least one task has to be processed, and shuts down only when no more tasks have to be processed. Or one could say that the server is not adaptive, because all transitions just implement its ordinary functional behaviour. Which is the right argument? How can we handle such diverse interpretations? White-box adaptation. White-box perspectives on adaptation allow one to specify or inspect (part of) the internal structure of a system in order to offer a clear separation of concerns to distinguish changes of behaviour that are part of the application or functional logic from those which realise the adaptation logic. In general, the behaviour of a component is governed by a program and according to the traditional, basic view, a program is made of control (i.e. algorithms) and data. The conceptual notion of adaptivity we proposed in [START_REF] Bruni | A conceptual framework for adaptation[END_REF] requires to identify control data which can be changed to adapt the component's behaviour. Adaptation is, hence, the run-time modification of such control data. Therefore, a component is adaptable if it has a distinguished collection of control data that can be modified at run-time, adaptive if it is adaptable and its control data are modified at run-time, at least in some of its executions, and self-adaptive if it modifies its own control data at run-time. Several programming paradigms and reference models have been proposed for adaptive systems. A notable example is the Context Oriented Programming paradigm, where the contexts of execution and code variations are first-class citizens that can be used to structure the adaptation logic in a disciplined way [START_REF] Salvaneschi | Context-oriented programming: A programming paradigm for autonomic systems (v2)[END_REF]. Nevertheless, it is not the programming language what makes a program adaptive: any computational model or programming language can be used to implement an adaptive system, just by identifying the part of the data that governs the adaptation logic, that is the control data. Consequently, the nature of control data can vary considerably, including all possible ways of encapsulating behaviour: from simple configuration parameters to a complete representation of the program in execution that can be modified at run-time, as it is typical of computational models that support meta-programming or reflective features. The subjectivity of adaptation is captured by the fact that the collection of control data of a component can be defined in an arbitrary way, ranging from the empty set ("the system is not adaptable") to the collection of all the data of the program ("any data modification is an adaptation"). This means that white-box perspectives are as subjective as black-box ones. The fundamental difference lies in who is responsible of declaring which behaviours are part of the adaptation logic and which not: the observer (black-box) or the designer (white-box). Consider again the system in Fig. 1 and the two possible interpretations of its adaptivity features. 
As elaborated in Sect. 3, in the first case control data is defined by the state of the server, while in the second case control data is defined by the two queues. If instead the system is not considered adaptive, then the control data is empty. This way the various interpretations are made concrete in our conceptual approach. We shall use this system as our running example. It is worth to mention that the control data approach [START_REF] Bruni | A conceptual framework for adaptation[END_REF] is agnostic with respect to the form of interaction with the environment, the level of contextawareness, the use of reflection for self-awareness. It applies equally well to most of the existing approaches for designing adaptive systems and provides a satisfactory answer to the question "what is adaptation conceptually?". But "what is adaptation formally?" and "how can we reason about adaptation, formally?". Contribution. This paper provides an answer to the questions we raised above. Building on our informal discussion, on a foundational model of component based systems (namely, interface automata [START_REF] De Alfaro | Game models for open systems[END_REF][START_REF] De Alfaro | Interface automata[END_REF], introduced in Sect. 2), and on previous formalisations of adaptive systems (discussed in Sect. 5) we distill in Sect. 3 a core model of adaptive systems called adaptable interface automata (aias). The key feature of aias are control propositions evaluated on states, the formal counterpart of control data. The choice of control propositions is arbitrary but it imposes a clear separation between ordinary, functional behaviours and adaptive ones. We then discuss in Sect. 4 how control propositions can be exploited in the specification and analysis of adaptive systems, focusing on various notions proposed in the literature, like adaptability, feedback control loops, and control synthesis. The approach based on control propositions can be applied to other computational models, yielding other instances of adaptable transition systems. The choice of interface automata is due to their simple and elegant theory. Background Interface automata were introduced in [START_REF] De Alfaro | Interface automata[END_REF] as a flexible framework for componentbased design and verification. We recall here the main concepts from [START_REF] De Alfaro | Game models for open systems[END_REF]. Definition 1 (interface automaton). An interface automaton P is a tuple V, V i , A I , A O , T , where V is a set of states; V i ⊆ V is the set of initial states, which contains at most one element (if V i is empty then P is called empty); A I and A O are two disjoint sets of input and output actions (we denote by A = A I ∪ A O the set of all actions); and T ⊆ V × A × V is a deterministic set of steps (i.e. (u, a, v) ∈ T , (u, a, v ) ∈ T implies v = v ). Example 1. Figure 2 presents three interface automata modelling respectively a machine Mac (left), an execution queue Exe (centre), and a task queue Que (right). Intuitively, each automaton models one component of our running example (cf. Fig. 1). The format of the states is as in our running example. The initial states are not depicted on purpose, because we will consider several cases. Here we assume that they are U, {} and [], respectively. The actions of the automata have been described in Sect. 1. The interface of each automaton is implicitly denoted by the action annotation: ? for inputs and ! for outputs. 
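Definition 1 translates almost verbatim into a small data structure. The following Python sketch is only illustrative; in particular the automaton at the end is a hypothetical fragment in the style of Que, since the exact transitions of Fig. 2 are not reproduced in the text.

from dataclasses import dataclass

@dataclass
class InterfaceAutomaton:
    """V, V^i, A^I, A^O, T as in Definition 1."""
    states: set
    initial: set          # empty or a singleton
    inputs: set
    outputs: set
    steps: set            # triples (u, a, v)

    def __post_init__(self):
        assert len(self.initial) <= 1 and self.initial <= self.states
        assert not (self.inputs & self.outputs)
        # determinism: no two steps (u, a, v), (u, a, v') with v != v'
        assert len({(u, a) for (u, a, _) in self.steps}) == len(self.steps)

    @property
    def actions(self):
        return self.inputs | self.outputs

    def enabled(self, u, among=None):
        """B(u): the actions from `among` labelling steps that leave u."""
        among = self.actions if among is None else among
        return {a for (v, a, _) in self.steps if v == u and a in among}

# Hypothetical two-place task queue in the style of Que: it inputs tasks (t?) and
# outputs scheduling requests (s!).
Que_like = InterfaceAutomaton(
    states={"[]", "[t]", "[tt]"}, initial={"[]"},
    inputs={"t"}, outputs={"s"},
    steps={("[]", "t", "[t]"), ("[t]", "t", "[tt]"),
           ("[t]", "s", "[]"), ("[tt]", "s", "[t]")})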
Given B ⊆ A, we sometimes use P | B to denote the automaton obtained by restricting the set of steps to those whose action is in B. Similarly, the set of actions in B labelling the outgoing transitions of a state u is denoted by B(u). A computation ρ of an interface automaton P is a finite or infinite sequence of consecutive steps (or transitions) {(u i , a i , u i+1 )} i<n from T (thus n can be ω). A partial composition operator is defined for automata: in order for two automata to be composable their interface must satisfy certain conditions. Definition 2 (composability). Let P and Q be two interface automata. Then, P and Q are composable if A O P ∩ A O Q = ∅. Let shared (P, Q) = A P ∩ A Q and comm(P, Q) = (A O P ∩ A I Q ) ∪ (A I P ∩ A O Q ) be the set of shared and communication actions, respectively. Thus, two interface automata can be composed if they share input or communication actions only. Two composable interface automata can be combined in a product as follows. Definition 3 (product). Let P and Q be two composable interface automata. Then the product T is the union of P ⊗ Q is the interface automaton V, V i , A I , A O , T such that V = V P ×V Q ; V i = V i P ×V i Q ; A I = (A I P ∪A I Q )\comm(P, Q); A O = A O P ∪A O Q ; and {((v, u), a, (v , u)) | (v, a, v ) ∈ T P ∧ a ∈ shared (P, Q) ∧ u ∈ V Q } (i.e. P steps), {((v, u), a, (v, u )) | (u, a, u ) ∈ T Q ∧ a ∈ shared (P, Q) ∧ v ∈ V P } (i.e. Q steps), and {((v, u), a, (v , u )) | (v, a, v ) ∈ T P ∧ (u, a, u ) ∈ T Q ∧ a ∈ shared (P, Q)} (i. e. steps where P and Q synchronise over shared actions). In words, the product is a commutative and associative operation (up to isomorphism) that interleaves non-shared actions, while shared actions are synchronised in broadcast fashion, in such a way that shared input actions become inputs, communication actions become outputs. Example 2. Consider the interface automata Mac, Exe and Que of Fig. 2. They are all pairwise composable and, moreover, the product of any two of them is composable with the remaining one. The result of applying the product of all three automata is depicted in Fig. 3 (left). States in P ⊗ Q where a communication action is output by one automaton but cannot be accepted as input by the other are called incompatible or illegal. Definition 4 (incompatible states). Let P and Q be two composable interface automata. The set incompatible(P, Q) ⊆ V P × V Q of incompatible states of P ⊗ Q is defined as {(u, v) ∈ V P × V Q | ∃a ∈ comm(P, Q) . (a ∈ A O P (u) ∧ a ∈ A I Q (v)) ∨ (a ∈ A O Q (v) ∧ a ∈ A I P (u))}. Example 3. In our example, the product Mac ⊗ Exe ⊗ Que depicted in Fig. 3 (left) has several incompatible states, namely all those of the form "s{t}[t]" or "s{t}[tt]". Indeed, in those states, Que is willing to perform the output action s! but Exe is not able to perform the dual input action s?. The presence of incompatible states does not forbid to compose interface automata. In an open system, compatibility can be ensured by a third automata called the environment which may e.g. represent the context of execution or an adaptation manager. Technically, an environment for an automaton R is a non-empty automaton E which is composable with R, synchronises with all output actions of R (i.e. A I E = A O R ) and whose product with R does not have incompatible states. Interesting is the case when R is P ⊗Q and E is a compatible environment, i.e. when the set incompatible(P, Q)×V E is not reachable in R⊗E. 
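Continuing the sketch started after Example 1, the product of Definition 3 and the incompatible states of Definition 4 can be computed directly; this is again an illustration on the toy encoding, not a tool.

def product(P, Q):
    """P (x) Q for composable P, Q (Definition 3)."""
    shared = P.actions & Q.actions
    comm = (P.outputs & Q.inputs) | (P.inputs & Q.outputs)
    steps = set()
    for (u, a, u2) in P.steps:            # P moves alone on non-shared actions
        if a not in shared:
            steps |= {((u, v), a, (u2, v)) for v in Q.states}
    for (v, a, v2) in Q.steps:            # Q moves alone on non-shared actions
        if a not in shared:
            steps |= {((u, v), a, (u, v2)) for u in P.states}
    for (u, a, u2) in P.steps:            # synchronisation on shared actions
        for (v, b, v2) in Q.steps:
            if a == b and a in shared:
                steps.add(((u, v), a, (u2, v2)))
    return InterfaceAutomaton(
        states={(u, v) for u in P.states for v in Q.states},
        initial={(u, v) for u in P.initial for v in Q.initial},
        inputs=(P.inputs | Q.inputs) - comm,
        outputs=P.outputs | Q.outputs,
        steps=steps)

def incompatible(P, Q):
    """States where one automaton insists on a communication action that the other
    cannot currently accept as input (Definition 4)."""
    comm = (P.outputs & Q.inputs) | (P.inputs & Q.outputs)
    return {(u, v) for u in P.states for v in Q.states
            if any(a in comm and a not in Q.enabled(v, Q.inputs)
                   for a in P.enabled(u, P.outputs))
            or any(a in comm and a not in P.enabled(u, P.inputs)
                   for a in Q.enabled(v, Q.outputs))}

Example 3's incompatible states of the form s{t}[t] and s{t}[tt] are exactly of this shape: Que is willing to output s! while Exe cannot accept the dual input s?.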
Compatibility of two (composable, non-empty) automata is then expressed as the existence of a compatible environment for them. This also leads to the concept of compatible (or usable) states cmp(P ⊗ Q) in the product of two composable interface automata P and Q, i.e. those for which an environment E exists that makes the set of incompatible states incompatible(P , Q) unreachable in P ⊗ Q ⊗ E. Example 4. Consider again the interface automata Mac, Exe and Que of Fig. 2. Automata Mac and Exe are trivially compatible, and so are Mac and Que. Exe and Que are compatible as well, despite of the incompatible states {t}[t] and {t}[tt] in their product Exe ⊗ Que. Indeed an environment that does not issue a second task execution requests t! without first waiting for a termination notification (like the one in Fig. 4) can avoid reaching the incompatible states. We are finally ready to define the composition of interface automata. Definition 5 (composition). Let P and Q be two composable interface automata. The composition P | Q is an interface automaton V, V i , A I P ⊗Q , A O P ⊗Q , T such that V = cmp(P ⊗ Q); V i = V i P ⊗Q ∩ V ; and T = T P ⊗Q ∩ (V × A × V ). Adaptable Interface Automata Adaptable interface automata extend interface automata with atomic propositions (state observations) a subset of which is called control propositions and play the role of the control data of [START_REF] Bruni | A conceptual framework for adaptation[END_REF]. Definition 6 (adaptable interface automata). An adaptable interface automaton ( aia) is a tuple P, Φ, l, Φ c such that P = V, V i , A I , A O , T is an interface automaton; Φ is a set of atomic propositions, l : V → 2 Φ is a labelling function mapping states to sets of propositions; and Φ c ⊆ Φ is a distinguished subset of control propositions. Abusing the notation we sometimes call P an aia with underlying interface automaton P , whenever this introduces no ambiguity. A transition (u, a, u ) ∈ T is called an adaptation if it changes the control data, i.e. if there exists a proposition φ ∈ Φ c such that either φ ∈ l(u) and φ ∈ l(u ), or vice versa. Otherwise, it is called a basic transition. An action a ∈ A is called a control action if it labels at least one adaptation. The set of all control actions of an aia P is denoted by A C P . Example 6. Recall the example introduced in Sect. 1. We raised the question whether the interface automaton S of Fig. 1 is (self-)adaptive or not. Two arguments were given. The first argument was "the server schedules tasks only when it is up". That is, we identify two different behaviours of the server (when it is up or down, respectively), interpreting a change of behaviour as an adaptation. We can capture this interpretation by introducing a control proposition that records the state of the server. More precisely, we define the aia Switch(S) in the following manner. The underlying interface automaton is S; the only (control) proposition is up, and the labelling function maps states of the form U{. . .}[. . .] into {up} and those of the form D{. . .}[. . .] into ∅. The control actions are then u and d. The second argument was "the system starts the server up only when there is at least one task to schedule, and shuts it down only when no task has to be processed ". In this case the change of behaviour (adaptation) is triggered either by the arrival of a task in the waiting queue, or by the removal of the last task scheduled for execution. Therefore we can define the control data as the state of both queues. 
That is, one can define an aia Scheduler(S) having as underlying interface automaton the one of Fig. 1, as control propositions all those of the form queues status q r (with q ∈ { , t}, and r ∈ { , t, tt}), and a labelling function that maps states of the form s{q}[r] to the set {queues status q r }. In this case the control actions are s, f and t. Computations. The computations of an aia (i.e. those of the underlying interface automata) can be classified according to the presence of adaptation transitions. For example, a computation is basic if it contains no adaptive step, and it is adaptive otherwise. We will also use the concepts of basic computation starting at a state u and of adaptation phase, i.e. a maximal computation made of adaptive steps only. Coherent control. It is worth to remark that what distinguishes adaptive computations and adaptation phases are not the actions, because control actions may also label transitions that are not adaptations. However, very often an aia has coherent control, meaning that the choice of control propositions is coherent with the induced set of control actions, in the sense that all the transitions labelled with control actions are adaptations. Composition. The properties of composability and compatibility for aia, as well as product and composition operators, are lifted from interface automata. Definition 7 (composition). Let P and Q be two aias whose underlying interface automata P , Q are composable. The composition P | Q is the aia P | Q , Φ, l, Φ c such that the underlying interface automaton is the composition of P and Q ; Φ = Φ P Φ Q (i.e. the set of atomic propositions is the disjoint union of the atomic propositions of P and Q); Φ c = Φ c P Φ c Q ; and l is such that l((u, v)) = l P (u) ∪ l Q (v) for all (u, v) ∈ V (i.e. a proposition holds in a composed state if it holds in its original local state). Since the control propositions of the composed system are the disjoint union of those of the components, one easily derives that control coherence is preserved by composition, and that the set of control actions of the product is obtained as the union of those of the components. Exploiting Control Data We explain here how the distinguishing features of aia (i.e. control propositions and actions) can be exploited in the design and analysis of self-adaptive systems. For the sake of simplicity we will focus on aia with coherent control, as it is the case of all of our examples. Thus, all the various definitions/operators that we are going to define on aia may rely on the manipulation of control actions only. Design Well-formed interfaces. The relationship between the set of control actions A C P and the alphabets A I P and A O P is arbitrary in general, but it could satisfy some pretty obvious constraints for specific classes of systems. Definition 8 (adaptable, controllable and self-adaptive ATSs). Let P be an aia. We say that P is adaptable if A C P = ∅; controllable if A C P ∩ A I P = ∅; self-adaptive if A C P ∩ A O P = ∅. Intuitively, an aia is adaptable if it has at least one control action, which means that at least one transition is an adaptation. An adaptable aia is controllable if control actions include some input actions, or self-adaptive if control actions include some output actions (which are under control of the aia). From these notions we can derive others. For instance, we can say that an adaptable aia is fully self-adaptive if A C P ∩ A I P = ∅ (the aia has full control over adaptations). 
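Since the classification of Definition 8 only depends on how the control actions sit inside the interface, it can be computed mechanically once the control propositions and the labelling are fixed. A small Python sketch on the encoding used earlier (illustrative; the labelling function is supplied by the designer, as in Definition 6):

def control_actions(ia, label, control_props):
    """A^C_P: actions labelling at least one adaptation, i.e. a step across which
    some control proposition changes its truth value (Definition 6)."""
    return {a for (u, a, v) in ia.steps if (label(u) ^ label(v)) & control_props}

def classify(ia, label, control_props):
    """The classification of Definition 8, plus the derived 'fully self-adaptive'."""
    C = control_actions(ia, label, control_props)
    return {"adaptable":           bool(C),
            "controllable":        bool(C & ia.inputs),
            "self-adaptive":       bool(C & ia.outputs),
            "fully self-adaptive": bool(C) and not (C & ia.inputs)}

# For Switch(S) one would label states U{..}[..] with {"up"} and states D{..}[..] with
# the empty set, taking control_props = {"up"}; the expected outcome is adaptable and
# fully self-adaptive but not controllable, as Example 7 below confirms.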
Note that hybrid situations are possible as well, when control actions include both input actions (i.e. actions in A I P ) and output actions (i.e. actions in A O P ). In this case we have that P is both self-adaptive and controllable. Example 7. Consider the aia Scheduler(S) and Switch(S) described in Example 6, whose underlying automaton (S) is depicted in Fig. 1. Switch(S) is fully self-adaptive and not controllable, since its control actions do not include input actions, and therefore the environment cannot force the execution of control actions directly. On the other hand, Scheduler(S) is self-adaptive and controllable, since some of its control actions are outputs and some are inputs. Consider instead the interface automaton A in the left of Fig. 5, which is very much like the automaton Mac ⊗ Exe ⊗ Que of Fig. 3, except that all actions but f have been turned into input actions and states of the form s{t}[tt] have been removed. The automaton can also be seen as the composition of the two automata on the right of Fig. 5. And let us call Scheduler(A) and Switch(A) the aia obtained by applying the control data criteria of Scheduler(S) and Switch(S), respectively. Both Scheduler(A) and Switch(A) are adaptable and controllable, but only Scheduler(A) is self-adaptive, since it has at least one control output action (i.e. f!). Composition. As discussed in Sect. 3, the composition operation of interface automata can be extended seamlessly to aia. Composition can be used, for example, to combine an adaptable basic component B and an adaptation manager M in a way that reflects a specific adaptation logic. In this case, natural well-formedness constraints can be expressed as suitable relations among sets of actions. For example, we can define when a component M controls another component B as follows. Definition 9 (controlled composition). Let B and M be two composable aia. We say that M controls B in B | M if A C B ∩ A O M = ∅. In addition, we say that M controls completely B in B | M if A C B ⊆ A O M . This definition can be used, for instance, to allow or to forbid mutual control. For example, if a manager M is itself at least partly controllable (i.e. A C M ∩A I M = ∅), a natural requirement to avoid mutual control would be that the managed component B and M are such the A O B ∩ A C M = ∅, i.e. that B cannot control M . Example 8. Consider the adaptable server depicted on the left of Fig. 5 as the basic component whose control actions are d, u and s. Consider further the controller of Fig. 6 as the manager, which controls completely the basic component. A superficial look at the server and the controller may lead to think that their composition yields the adaptive server of Fig. 1, yet this not the case. Indeed, the underlying interface automata are not compatible due to the existence of (unavoidable) incompatible states. Control loops and action classification. The distinction between input, output and control actions is suitable to model some basic interactions and well-formedness criteria as we explained above. More sophisticated cases such as control loops are better modelled if further classes of actions are distinguished. As a paradigmatic example, let us consider the control loop of the MAPE-K reference model [START_REF]An Architectural Blueprint for Autonomic Computing[END_REF], illustrated in Fig. 7. This reference model is the most influential one for autonomic and adaptive systems. 
The name MAPE-K is due to the main activities of autonomic manager components (Monitor, Analyse, Plan, Execute) and the fact that all such activities operate and exploit the same Knowledge base. According to this model, a self-adaptive system is made of a component implementing the application logic, equipped with a control loop that monitors the execution through suitable sensors, analyses the collected data, plans an adaptation strategy, and finally executes the adaptation of the managed component through some effectors. The managed component is considered to be an adaptable component, and the system made of both the component and the manager implementing the control loop is considered as a self-adaptive component. Analysis and verification Property classes. By the very nature of adaptive systems, properties that one is interested to verify on them can be classified according to the kind of computations that are concerned with, so that the usual verification (e.g. model checking problem) P |= ψ (i.e. "does the aia P satisfy property ψ?") is instantiated in some of the computations of P depending of the class of ψ. For example, some authors (e.g. [START_REF] Zhao | Model checking of adaptive programs with modeextended linear temporal logic[END_REF][START_REF] Zhang | Modular verification of dynamically adaptive systems[END_REF][START_REF] Kulkarni | Correctness of component-based adaptation[END_REF]) distinguish the following three kinds of properties. Local properties are "properties of one [behavioral] mode", i.e. properties that must be satisfied by basic computations only. Adaptation properties are to be "satisfied on interval states when adapting from one behavioral mode to another ", i.e. properties of adaptation phases. Global properties "regard program behavior and adaptations as a whole. They should be satisfied by the adaptive program throughout its execution, regardless of the adaptations.", i.e. properties about the overall behaviour of the system. To these we add the class of adaptability properties, i.e. properties that may fail for local (i.e. basic) computations, and that need the adapting capability of the system to be satisfied. Definition 10 (adaptability property). Let P be an aia. A property ψ is an adaptability property for P if P |= ψ and P | A P \A C P |= ψ. Example 9. Consider the adaptive server of Fig. 1 and the aia Scheduler(S) and Switch(S), with initial state U{}[]. Consider further the property "whenever a task is received, the server can finish it". This is an adaptability property for Scheduler(S) but not for Switch(S). The main reason is that in order to finish a task it first has to be received (t) and scheduled (s), which is part of the adaptation logic in Scheduler(S) but not in Switch(S). In the latter, indeed, the basic computations starting from state U{}[] are able to satisfy the property. Weak and strong adaptability. aia are also amenable for the analysis of the computations of interface automata in terms of adaptability. For instance, the concepts of weak and strong adaptability from [START_REF] Merelli | A multi-level model for self-adaptive systems[END_REF] can be very easily rephrased in our setting. 
According to [START_REF] Merelli | A multi-level model for self-adaptive systems[END_REF] a system is weakly adaptable if "for all paths, it always holds that as soon as adaptation starts, there exists at least one path for which the system eventually ends the adaptation phase", while a system is strongly adaptable if "for all paths, it always holds that as soon as adaptation starts, all paths eventually end the adaptation phase". Strong and weak adaptability can also be characterised by formulae in some temporal logic [START_REF] Merelli | A multi-level model for self-adaptive systems[END_REF], ACTL [START_REF] De Nicola | Action versus state based logics for transition systems[END_REF] in our setting. Definition 11 (weak and strong adaptability). Let P be an aia. We say that P is weakly adaptable if P |= AG EF EX{A P \ A C P }true, and strongly adaptable if P |= AG AF (EX{A P }true ∧ AX{A P \ A C P }true). The formula characterising weak adaptability states that along all paths (A) it always (G) holds that there is a path (E) where eventually (F) a state will be reached where a basic step can be executed (EX{A P \ A C P }true). Similarly, the formula characterising strong adaptability states that along all paths (A) it always (G) holds that along all paths (A) eventually (F) a state will be reached where at least one step can be fired (EX{A P }true) and all fireable actions are basic steps (AX{A P \ A C P }true). Apart from its conciseness, such characterisations enables the use of model checking techniques to verify them. Example 10. The aia Switch(S) (cf. Fig. 1) is strongly adaptable, since it does not have any infinite adaptation phase. Indeed every control action (u or d) leads to a state where only basic actions (t, f or s) can be fired. On the other hand, Scheduler(S) is weakly adaptable due to the presence of loops made of adaptive transitions only (namely, t, s and f), which introduce an infinite adaptation phase. Consider now the aia Scheduler(A) and Switch(A) (cf. Fig. 5). Both are weakly adaptable due to the loops made of adaptive transitions only;: e.g. in Switch(A) there are cyclic behaviours made of the control actions u and d. Reverse engineering and control synthesis Control data can also guide reverse engineering activities. For instance, is it possible to decompose an aia S into a basic adaptable component B and a suitable controller M ? We answer in the positive, by presenting first a trivial solution and then a more sophisticated one based on control synthesis. Basic decomposition. In order to present the basic decomposition we need some definitions. Let P ⊥ B denote the operation that given an automaton P results in an automaton P ⊥ B which is like P but where actions in B ⊆ A have been complemented (inputs become outputs and vice versa). Formally, P ⊥ B = V, V i , ((A I \ B) ∪ (A O ∩ B)), ((A O \ B) ∪ (A I ∩ B)) , T . This operation can be trivially lifted to aia by preserving the set of control actions. It is easy to see that interface automata have the following property. If P is an interface automaton and O 1 , O 2 are sets of actions that partition A O P (i.e. A O P = O 1 O 2 ), then P is isomorphic to P ⊥ O 1 | P ⊥ O 2 . This property can be exploited to decompose an aia P as M | B by choosing M = P Example 11. Consider the server Scheduler(S) (cf. Fig. 1). The basic decomposition provides the manager with underlying automata depicted in Fig. 9 (left) and the basic component depicted in Fig. 9 (right). Vice versa, if the server Switch(S) (cf. Fig. 
1) is considered, then the basic decomposition provides the manager with underlying automata depicted in Fig. 9 (right) and the basic component depicted in Fig. 9 (left). Decomposition as control synthesis. In the basic decomposition both M and B are isomorphic (and hence of equal size) to the original aia S, modulo the complementation of some actions. It is however possible to apply heuristics in order to obtain smaller non-trivial managers and base components. One possibility is to reduce the set of actions that M needs to observe (its input actions). Intuitively, one can make the choice of ignoring some input actions and collapse the corresponding transitions. Of course, the resulting manager M must be checked for the absence of non-determinism (possibly introduced by the identification of states) but will be a smaller manager candidate. Once a candidate M is chosen we can resort to solutions to the control synthesis problem. We recall that the synthesis of controllers for interface automata [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF] is the problem of solving the equation P | Y Q, for a given system Q and component P , i.e. finding a component Y such that, when composed with P , results in a system which refines Q. An interface automaton R refines an interface automaton S if (i) A I R ⊆ A I S , (ii) A O R ⊆ A O S , and (iii) there is an alternating simulation relation from R to S, and two states u ∈ V i R , v ∈ V i S such that (u, v) ∈ [1] . An alternating simulation relation from an interface automaton R to an interface automaton S is a relation ⊆ V R × V S such that for all (u, v) ∈ and all a ∈ A O R (u) ∪ A I S (v) we have (i) A I S (v) ⊆ A I R (u) (ii) A O R (u) ⊆ A O S (v) (iii) there are u ∈ V R , v ∈ V S such that (u, a, u ) ∈ T R , (v, a, v ) ∈ T S and (u , v ) ∈ . The control synthesis solution of [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF] can be lifted to aia in the obvious way. The equation under study in our case will be B | M P . The usual case is when B is known and M is to be synthesised, but it may also happen that M is given and B is to be synthesised. The solution of [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF] can be applied in both cases since the composition of interface automata is commutative. Our methodology is illustrated with the latter case, i.e. we first fix a candidate M derived from P . Then, the synthesis method of [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF] is used to obtain B. Our procedure is not always successful: it may be the case that no decomposition is found. Extracting the adaptation logic. In order to extract a less trivial manager from an aia P we can proceed as follows. We define the bypassing of an action set B ⊆ A in P as P | B ,≡ , which is obtained by P | B (that is, the aia obtained from P by deleting those transitions whose action belong to B) collapsing the states via the equivalence relation induced by {u ≡ v | (u, a, v) ∈ T P ∧ a ∈ B}. The idea is then to choose a subset B of A P \ A C P (i.e. it contains no control action) that the manager M needs not to observe. The candidate manager M is then P ⊥ A O P \A C P | B ,≡ . Of course, if the result is not deterministic, this candidate must be discarded: more observations may be needed. Extracting the application logic. We are left with the problem of solving the equation B | M P for given P and M . 
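The bypassing construction used above to extract a candidate manager is essentially a quotient and can be sketched as follows (Python, on the toy encoding used earlier; following the textual description, the B-labelled steps are deleted and the states they connected are merged, and the alphabet bookkeeping is simplified). The result is returned as a plain record rather than an InterfaceAutomaton precisely because determinism must be checked before accepting the candidate.

def bypass(ia, B):
    """Bypassing of the action set B: drop the B-labelled steps and identify the states
    that such steps used to connect (union-find over the induced equivalence)."""
    parent = {u: u for u in ia.states}

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for (u, a, v) in ia.steps:           # u and v are identified whenever a B-step joins them
        if a in B:
            parent[find(u)] = find(v)

    steps = {(find(u), a, find(v)) for (u, a, v) in ia.steps if a not in B}
    deterministic = len({(u, a) for (u, a, _) in steps}) == len(steps)
    candidate = {"states": {find(u) for u in ia.states},
                 "initial": {find(u) for u in ia.initial},
                 "inputs": ia.inputs - B, "outputs": ia.outputs - B,
                 "steps": steps}
    return candidate, deterministic

If the candidate is not deterministic it is discarded, as noted above; otherwise the remaining task is precisely to solve the equation B | M ⪯ P for the base component, which is addressed next.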
It is now sufficient to use the solution of [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF] which defines B to be (M | P ⊥ ) ⊥ , where P ⊥ abbreviates P ⊥ A P . If the obtained B and M are compatible, the reverse engineering problem has been solved. Otherwise we are guaranteed that no suitable managed component B exists for the candidate manager M since the solution of [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF] is sound and complete. A different choice of control data or hidden actions should be done. Example 12. The manager Scheduler(S) ⊥ {u,d} {u,d},≡ (see Fig. 10, left) and the other manager Switch(S) ⊥ {f,s} {s},≡ (see Fig. 10, right) are obtained by removing some observations. For the former we obtain no solution, while for the latter we obtain the same base component of the basic decomposition (Fig. 9 left). Related Works Our proposal for the formalisation of self-adaptive systems takes inspiration by many former works in the literature. Due to lack of space we focus our discussion on the most relevant related works only. S[B] systems [START_REF] Merelli | A multi-level model for self-adaptive systems[END_REF] are a model for adaptive systems based on 2-layered transitions systems. The base transition system B defines the ordinary (and adaptable) behaviour of the system, while S is the adaptation manager, which imposes some regions (subsets of states) and transitions between them (adaptations). Further constraints are imposed by S via adaptation invariants. Adaptations are triggered to change region (in case of local deadlock). Weak and strong adaptability formalisations (casted in our setting in Sect. 4.2) are introduced. Mode automata [START_REF] Maraninchi | Mode-automata: About modes and states for reactive systems[END_REF] have been also advocated as a suitable model for adaptive systems. For example, the approach of [START_REF] Zhao | Model checking of adaptive programs with modeextended linear temporal logic[END_REF] represents adaptive systems with two layers: functional layer, which implements the application logic and is represented by state machines called adaptable automata, and adaptation layer, which implements the adaptation logic and is represented with a mode automata. Adaptation here is the change of mode. The approach considers three different kinds of specification properties (cf. 4.2): local, adaptation, and global. An extension of linear-time temporal logic (LTL) called mLTL is used to express them. The most relevant difference between aia and S[B] system or Mode automata is that our approach does not impose a two-layered asymmetric structure: aia can be composed at will, possibly forming towers of adaptation [START_REF] Bruni | A conceptual framework for adaptation[END_REF] in the spirit of the MAPE-K reference architecture, or mutual adaptation structures. In addition, each component of an adaptive system (be it a manager or a managed component, or both) is represented with the same mathematical object, essentially a well-studied one (i.e. interface automata) decorated with some additional information (i.e. control propositions). Adaptive Featured Transition Systems (A-FTS) have been introduced in [START_REF] Cordy | Model checking adaptive software with featured transition systems[END_REF] for the purpose of model checking adaptive software (with a focus on software product lines). 
A-FTS are a sort of transition systems where states are composed by the local state of the system, its configuration (set of active features) and the configuration of the environment. Transitions are decorated with executability conditions that regard the valid configurations. Adaptation corresponds to reconfigurations (changing the system's features). Hence, in terms of our white-box approach, system features play the role of control data. They introduce the notion of resilience as the ability of the system to satisfy properties despite of environmental changes (which essentially coincides with the notion of blackbox adaptability of [START_REF] Hölzl | Towards a system model for ensembles[END_REF]). Properties are expressed in AdaCTL, a variant of the computation-tree temporal logic CTL. Contrary to aias which are equipped with suitable composition operations, A-FTS are seen in [START_REF] Cordy | Model checking adaptive software with featured transition systems[END_REF] as monolithic systems. Concluding Remarks We presented a novel approach for the formalisation of self-adaptive systems, which is based on the notion of control propositions (and control actions). Our proposal has been presented by instantiating it to a well-known model for component-based system, interface automata. However, it is amenable to be applied to other foundational formalisms as well. In particular, we would like to verify its suitability for basic specification formalisms of concurrent and distributed systems such as process calculi. Among future works, we envision the investigation of more specific notions of refinement, taking into account the possibility of relating systems with different kind of adaptability and general mechanisms for control synthesis that are able to account also for non-deterministic systems. Furthermore, our formalisation can be the basis to conciliate white-and blackbox perspectives adaptation under the same hood, since models of the latter are usually based on variants of transition systems or automata. For instance, control synthesis techniques such as those used to modularize a self-adaptive system (white-box adaptation) or model checking techniques for game models (e.g. [START_REF] Alur | Alternating-time temporal logic[END_REF]) can be used to decide if and to which extent a system is able to adapt so to satisfy its requirements despite of the environment (black-box adaptation). Fig. 1 . 1 Fig. 1. Is it self-adaptive? Fig. 2 . 2 Fig. 2. Three interface automata: Mac (left), Exe (centre), and Que (right). Fig. 3 . 3 Fig. 3. The product Mac ⊗ Exe ⊗ Que (left) and the composition Mac | Exe | Que (right). Fig. 4 . 4 Fig. 4. An environment. Example 5 . 5 Consider the product Mac ⊗ Exe ⊗ Que depicted in Fig. 3 (left). All states of the form s{t}[t] and s{t}[tt] are incompatible and states D{}[tt] and U{}[tt] are not compatible, since no environment can prevent them to enter the incompatible states. The remaining states are all compatible. The composition Mac | Exe | Que is the interface automaton depicted in Fig. 3 (right). Fig. 5 . 5 Fig. 5. An adaptable server (left) and its components (right). 6 . 6 A controller. Fig. 7 . 7 Fig. 7. MAPE-K loop. Fig. 8 . 8 Fig. 8. MAPE-K actions. aia can be composed so to adhere to the MAPE-K reference model as schematised in Fig.8. First, the autonomic manager component M and the managed component B have their functional and output actions, respectivelyI ⊆ A I M , O ⊆ A O M , I ⊆ A I B , O ⊆ A O B suchthat no dual action is shared (i.e. 
comm(B, M ) ∩ (I ∪ I ) = ∅) but inputs may be shared (i.e. possibly I ∩ I = ∅). The autonomic manager is controllable and has hence a distinguished set of control actions C = A C B . The dual of such control actions, i.e. the output actions of M that synchronise with the input control actions B can be regarded as effectors F ⊆ A O M , i.e. output actions used to trigger adaptation. In addition, M will also have sensor input actions S ⊆ A I M to sense the status of B, notified via emit output actions E ⊆ A O M . Clearly, the introduced sets partition inputs and outputs, i.e. I S = A I M , O F = A O M , E I = A I B and O C = A O M . , the manager and the base component are identical to the original system and only differ in their interface. All output control actions are governed by the manager M and become inputs in the base component B. Outputs that are not control actions become inputs in the manager. This decomposition has some interesting properties: B is fully controllable and, if P is fully self-adaptive, then M completely controls B. Fig. 9 . 9 Fig. 9. A basic decomposition. Fig. 10 . 10 Fig. 10. Bypassed managers for Scheduler(S) (left) and Switch(S) (right). Research partially supported by the EU through the FP7-ICT Integrated Project 257414 ASCEns (Autonomic Service-Component Ensembles).
43,222
[ "1003772", "1003773", "894474", "1003774", "1003775" ]
[ "366408", "366408", "366408", "301837", "301837" ]
01485982
en
[ "info" ]
2024/03/04 23:41:48
2012
https://inria.hal.science/hal-01485982/file/978-3-642-37635-1_8_Chapter.pdf
Andrea Corradini Reiko Heckel Frank Hermann Susann Gottmann Nico Nachtigall Transformation Systems with Incremental Negative Application Conditions Keywords: graph transformation, concurrent semantics, negative application conditions, switch equivalence de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Graph Transformation Systems (GTSs) are an integrated formal specication framework for modelling and analysing structural and behavioural aspects of systems. The evolution of a system is modelled by the application of rules to the graphs representing its states and, since typically such rules have local eects, GTSs are particularly suitable for modelling concurrent and distributed systems where several rules can be applied in parallel. Thus, it is no surprise that a large body of literature is dedicated to the study of the concurrent semantics of graph transformation systems [START_REF] Corradini | Graph processes[END_REF][START_REF] Baldan | Concurrent Semantics of Algebraic Graph Transformations[END_REF][START_REF] Baldan | Processes for adhesive rewriting systems[END_REF]. The classical results include among others the denitions of parallel production and shift equivalence [START_REF] Kreowski | Is parallelism already concurrency? part 1: Derivations in graph grammars[END_REF], exploited in the Church-Rosser and Parallelism theorems [START_REF] Ehrig | Introduction to the Algebraic Theory of Graph Grammars (A Survey)[END_REF]: briey, derivations that dier only in the order in which independent steps are applied are considered to be equivalent. Several years later, taking inspiration from the theory of Petri nets, deterministic processes were introduced [START_REF] Corradini | Graph processes[END_REF], which are a special kind of GTSs, endowed with a partial order, and can be considered as canonical representatives of shift-equivalence classes of derivations. Next, the unfolding of a GTS was dened as a typically innite non-deterministic process which summarises all the possible derivations of a GTS [START_REF] Baldan | Unfolding semantics of graph transformation[END_REF]. Recently, all these concepts have been generalised to transformation systems based on (M-)adhesive categories [START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF][START_REF] Corradini | Subobject Transformation Systems[END_REF][START_REF] Baldan | Unfolding grammars in adhesive categories[END_REF]. In this paper, we consider the concurrent semantics of GTSs that use the concept of Negative Application Conditions (NACs) for rules [START_REF] Habel | Graph Grammars with Negative Application Conditions[END_REF], which is widely used in applied scenarios. A NAC allows one to describe a sort of forbidden context, whose presence around a match inhibits the application of the rule. These inhibiting eects introduce several dependencies among transformation steps that require a shift of perspective from a purely local to a more global point of view when analysing such systems. Existing contributions that generalise the concurrent semantics of GTSs to the case with NACs [START_REF] Lambers | Parallelism and Concurrency in Adhesive High-Level Replacement Systems with Negative Application Conditions[END_REF][START_REF] Ehrig | Parallelism and Concurrency Theorems for Rules with Nested Application Conditions[END_REF] are not always satisfactory. 
While the lifted Parallelism and Concurrency Theorems provide adequate constructions for composed rules specifying the eect of concurrent steps, a detailed analysis of possible interleavings of a transformation sequence leads to problematic eects caused by the NACs. As shown in [START_REF] Heckel | DPO Transformation with Open Maps[END_REF], unlike the case without NACs, the notion of sequential independence among derivation steps is not stable under switching. More precisely, it is possible to nd a derivation made of three direct transformations s = (s 1 ; s 2 ; s 3 ) where s 2 and s 3 are sequentially independent and to nd a derivation s = (s 2 ; s 3 ; s 1 ) that is shift equivalent to s (obtained with the switchings (1 ↔ 2; 2 ↔ 3)), but where s 2 and s 3 are sequentially dependent on each other. This is a serious problem from the concurrent semantics point of view, because for example the standard colimit technique [START_REF] Corradini | Graph processes[END_REF] used to generate the process associated with a derivation does not work properly, since the causalities between steps do not form a partial order in general. In order to address this problem, we introduce a restricted kind of NACs, based on incremental morphisms [START_REF] Heckel | DPO Transformation with Open Maps[END_REF]. We rst show that sequential independence is invariant under shift equivalence if all NACs are incremental. Next we analyse to which extent systems with general NACs can be transformed into systems with incremental NACs. For this purpose, we provide an algorithmic construction INC that takes as input a GTS and yields a corresponding GTS with incremental NACs only. We show that the transformation system obtained via INC simulates the original one, i.e., each original transformation sequence induces one in the derived system. Thus, this construction provides an over-approximation of the original system. We also show that this simulation is even a bisimulation, if the NACs of the original system are obtained as colimits of incremental NACs. In the next section we review main concepts for graph transformation systems. Sect. 3 discusses shift equivalence and the problem that sequential independence with NACs is not stable in general. Thereafter, Sect. 4 presents incremental NACs and shows the main result on preservation of independence. Sect. 5 presents the algorithm for transforming systems with general NACs into those with incremental ones and shows under which conditions the resulting system is equivalent. Finally, Sect. 6 provides a conclusion and sketches future developments. The proofs of the main theorems are included in the paper. Basic Denitions In this paper, we use the double-pushout approach [START_REF] Ehrig | Graph grammars: an algebraic approach[END_REF] to (typed) graph transformation, occasionally with negative application conditions [START_REF] Habel | Graph Grammars with Negative Application Conditions[END_REF]. However, we will state all denitions and results at the level of adhesive categories [START_REF] Lack | Adhesive and quasiadhesive categories[END_REF]. A category is adhesive if it is closed under pushouts along monomorphisms (hereafter monos) as well as under pullbacks, and if all pushouts along a mono enjoy the van Kampen property. That means, when such a pushout is the bottom face of a commutative cube such as in the left of Fig. 1, whose rear faces are pullbacks, the top face is a pushout if and only if the front faces are pullbacks. 
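In the concrete categories of (typed) graphs used in the examples, a pushout of injective morphisms is simply a gluing, which can help in reading the diagrams that follow. The fragment below is an illustrative Python encoding (graphs as node sets plus an edge table, morphisms as pairs of maps); it is not tied to any tool discussed in the paper.

def pushout(A, B, C, b, c):
    """Gluing D of B and C along A, for injective graph morphisms b: A -> B, c: A -> C.
    A graph is (nodes, edges) with edges a dict  edge_id -> (src, tgt);
    a morphism is a pair (node_map, edge_map)."""
    b_n = {v: k for k, v in b[0].items()}   # inverses on the images of A
    b_e = {v: k for k, v in b[1].items()}
    c_n = {v: k for k, v in c[0].items()}
    c_e = {v: k for k, v in c[1].items()}

    def node(side, n):
        inv = b_n if side == "B" else c_n
        return ("A", inv[n]) if n in inv else (side, n)   # identify along the image of A

    def edge(side, e):
        inv = b_e if side == "B" else c_e
        return ("A", inv[e]) if e in inv else (side, e)

    nodes = {node("B", n) for n in B[0]} | {node("C", n) for n in C[0]}
    edges = {}
    for side, (_, Es) in (("B", B), ("C", C)):
        for e, (s, t) in Es.items():
            edges[edge(side, e)] = (node(side, s), node(side, t))
    return nodes, edges

# Gluing two edges along a shared node:  A is a single node x,  B an edge x -> y,
# C an edge z -> x;  the pushout is the path  z -> x -> y.
A = ({"x"}, {})
B = ({"x", "y"}, {"e1": ("x", "y")})
C = ({"z", "x"}, {"e2": ("z", "x")})
D = pushout(A, B, C, ({"x": "x"}, {}), ({"x": "x"}, {}))

Pushout complements and the DPO construction itself need more care (their uniqueness along monos is recalled next), but this gluing intuition is the one at work in the squares (1) and (2) below.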
In any adhesive category we have uniqueness of pushout complements along monos, monos are preserved by pushouts and pushouts along monos are also pullbacks. As an example, the category of typed graphs for a fixed type graph TG is adhesive [START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF]. A rule p = (L ← K → R) consists of two monos l : K → L and r : K → R. Given a match m : L → G, a direct transformation G =p,m⇒ H from G to H exists if a double-pushout (DPO) diagram can be constructed as in the right of Fig. 1, where (1) and (2) are pushouts:

  L  ←l––  K  ––r→  R
  |m  (1)  |   (2)  |m*
  G  ←l*–  D  ––r*→ H

The applicability of rules can be restricted by specifying negative conditions requiring the non-existence of certain structures in the context of the match. A (negative) constraint on an object L is a morphism n : L → L̄. A morphism m : L → G satisfies n (written m |= n) iff there is no mono q : L̄ → G such that n; q = m. A negative application condition (NAC) on L is a set of constraints N. A morphism m : L → G satisfies N (written m |= N) if and only if m satisfies every constraint in N, i.e., ∀n ∈ N : m |= n. All along the paper we shall consider only monic matches and monic constraints: possible generalisations are discussed in the concluding section. A graph transformation system (GTS) G consists of a set of rules, possibly with NACs. A derivation in G is a sequence of direct transformations s = (G0 =p1,m1⇒ G1 =p2,m2⇒ · · · =pn,mn⇒ Gn) such that all pi are in G; we denote it also as s = s1; s2; . . . ; sn, where sk = (Gk−1 =pk,mk⇒ Gk) for k ∈ {1, . . . , n}. Independence and Shift Equivalence Based on the general framework of adhesive categories, this section recalls the relevant notions for sequential independence and shift equivalence and illustrates the problem that independence is not stable under switching in presence of NACs. In the DPO approach, two consecutive direct transformations s1 = G0 =p1,m1⇒ G1 and s2 = G1 =p2,m2⇒ G2 as in Fig. 2 are sequentially independent if there exist morphisms i : R1 → D2 and j : L2 → D1 such that j; r*1 = m2 and i; l*2 = m*1. In this case, using the local Church-Rosser theorem [8] it is possible to construct a derivation s′ = G0 =p2,m′2⇒ G′1 =p1,m′1⇒ G2 where the two rules are applied in the opposite order. We write s1; s2 ∼sh s′ to denote this relation. Given a derivation s = s1; s2; . . . si; si+1; . . . ; sn containing sequentially independent steps si and si+1, we denote by s′ = switch(s, i, i + 1) the equivalent derivation s′ = s1; s2; . . . s′i+1; s′i; . . . ; sn, where si; si+1 ∼sh s′i+1; s′i. Shift equivalence ≡sh over derivations of G is defined as the transitive and context closure of ∼sh, i.e., the least equivalence relation containing ∼sh and such that if s ≡sh s′ then s1; s; s2 ≡sh s1; s′; s2 for all derivations s1 and s2. Example 1 (context-dependency of independence with NACs). Fig. 3 presents three transformation sequences starting with graph G0 via rules p1, p2 and p3. Rule p3 has a NAC, which is indicated by dotted lines (one node and two edges). In the first sequence s = G0 =p1,m1⇒ G1 =p2,m2⇒ G2 =p3,m3⇒ G3 = (s1; s2; s3) shown in the top of Fig. 3, steps s1 and s2 are sequentially independent, and so are s2 and s3.
After switching the rst and the second step we derive s = switch(s, 1, 2) = (s 2 ; s 1 ; s 3 ) (middle of Fig. 3) so that both sequences are shift equivalent (s ≡ sh s ). Since s 1 and s 3 are independent, we can perform a further switch s = switch(s , 2, 3) = (s 2 ; s 3 ; s 1 ) shown in the bottom sequence in Fig. 3. However, steps s 2 and s 3 are dependent from each other in s , because the match for rule p 3 will not satisfy the corresponding NAC for a match into G 0 . Hence, independence can change depending on the derivation providing the context, even if derivations are shift equivalent. In this section we show that under certain assumptions on the NACs of the rules, the problem identied in Ex. 1 does not occur. Intuitively, for each constraint n : L → L in a NAC we will require that it is incremental, i.e., that L does not extend L in two (or more) independent ways. Therefore, if there are two dierent ways to decompose n, one has to be an extension of the other. Incremental arrows have been considered in [START_REF] Heckel | DPO Transformation with Open Maps[END_REF] for a related problem: here we present the denition for monic arrows only, because along the paper we stick to monic NACs. Denition 1 (incremental monos and NACs). A mono f : A B is called incremental, if for any pair of decompositions g 1 ; g 2 = f = h 1 ; h 2 as in the diagram below where all morphisms are monos, there is either a mediating morphism o : O O or o : O O, such that the resulting triangles commute. O g2 o A / / f / / > > g1 > > h1 B O > > h2 > > o O O A monic NAC N over L is incremental if each constraint n : L L ∈ N is incremental. Example 2 (Incremental NACs). The left diagram below shows that the negative constraint n 3 : L 3 → L3 ∈ N 3 of rule p 3 of Ex. 1 is not incremental, because L3 extends L 3 in L 3 L 3 Ô3 O 3 ' 1 1 2 1 2 L 4 L 4 Ô4 O 4 ' 1 2 Intuitively, the problem stressed in Ex. 1 is due to the fact that rules p 1 and p 2 delete from G 0 two independent parts of the forbidden context for p 3 . Therefore p 3 depends on the ring of p 1 or on the ring of p 2 , while p 1 and p 2 are independent. This form of or-causality from sets of independent events is known to be a source of ambiguities in the identication of a reasonable causal ordering among the involved events, as discussed in [START_REF] Langerak | Causal ambiguity and partial orders in event structures[END_REF]. The restriction to incremental NACs that we consider here is sucient to avoid such problematic situations (as proved in the main result of this section) essentially because if both p 1 and p 2 delete from G 0 part of an incremental NAC, then they cannot be independent, since the NAC cannot be factorized in two independent ways. Incrementality of monos enjoys some nice properties: it is preserved by decomposition of arrows, and it is both preserved and reected by pushouts along monos, as stated in the next propositions. Proposition 1 (decomposition of monos preserve incrementality). Let Proposition 2 (preservation and reection of incrementality by POs). In the diagram to the right, let B f * D g * C be the pushout of the monic arrows B g A f C. Then f is incremental if and only if f * is incremental. A / / g / / f B f * C / / g * / / D We come now to the main result of this section: if all NACs are incremental, then sequential independence of direct transformations is invariant with respect to the switch of independent steps. Theorem 1 (invariance of independence under shift equivalence). 
Assume transformation sequences s = G 0 p1,m1 =⇒ G 1 p2,m2 =⇒ G 2 p3,m3 =⇒ G 3 and s = G 0 p2,m 2 =⇒ G 1 p3,m 3 =⇒ G 2 p1,m 1 =⇒ G 3 using rules p 1 , p 2 , p 3 with incremental NACs only as in the diagram below, such that s ≡ sh s with s = switch(switch(s, 1, 2), 2, 3). G 0 p2,m 2 + 3 p1,m1 G 1 p3,m 3 + 3 p1,m 1 G 2 p1,m 1 G 1 p2,m2 + 3 G 2 p3,m3 + 3 G 3 Then, G 1 p2,m2 =⇒ G 2 and G 2 p3,m3 =⇒ G 3 are sequentially independent if and only if G 0 p2,m 2 =⇒ G 1 and G 1 p3,m 3 =⇒ G 2 are. Proof. Let N 1 , N 2 G 0 p1,m1 =⇒ G 1 p2,m2 =⇒ G 2 and G 0 p2,m 2 =⇒ G 1 p1,m 1 =⇒ G 2 according to the proof of the G0 G 1 p 1 ,m 1 G1 p 2 ,m 2 + 3 G2 ↓ L1 m 1 o o v v (3) K1 l 1 O O r 1 o o v v (6) R1 o o v v L3 m 3 c c èe k k G 1 (2) D 2 O O (5) G2 (8) R2 O O F F D 1 / / o o (1) D * 2 / / o o O O (4) D2 / / o o (7) K2 l 2 / / r 2 o o O O F F G0 D1 O O G1 L2 m 2 O O E E (a) Match m 3 : L3 → G0 O * | | ( ( O e 1 ( ( i 1 1 O e 2 } } L3 q L3 / / n / / o * 2 2 m 3 2 2 D * 2 } } ( ( (1) D1 ( ( =⇒ G 3 are independent. Then, there exists D 1 } } G0 (b) Induced morphism L3 → D 1 L 3 → D 2 commuting with m 3 such that m * 3 = L 3 → D 2 → G 1 satises N 3 . Also, G 1 p1,m 1 =⇒ G 2 and G 2 p3,m3 =⇒ G 3 are independent because equivalence of s and s requires to switch them, so there exists L 3 → D 2 commuting with m 3 such that m 3 = L 3 → D 2 → G 1 satises N 3 . There exists a morphism L 3 → D * 2 commuting the resulting triangles induced by pullback [START_REF] Corradini | Subobject Transformation Systems[END_REF]. To show that m 3 = L 3 → D * 2 → D 1 → G 0 = L 3 → D * 2 → D 1 → G 0 sat- ises N 3 , by way of contradiction, assume n : L 3 → L3 ∈ N 3 with morphism q : L3 → G 0 commuting with m 3 . We can construct the cube in Fig. 4 We show that e 2 : O ↔ L3 is an isomorphism. First of all, e 2 is a mono by pullback (FR) and mono D 1 → G 0 . Pushout (TOP) implies that the morphism pair (e 1 , e 2 ) with e 1 : O → L3 and e 2 : O → L3 is jointly epimorphic. By com- mutativity of i; e 2 = e 1 , we derive that also (i; e 2 , e 2 ) is jointly epi. By denition of jointly epi, we have that for arbitrary (f, g) it holds that i; e 2 ; f = i; e 2 ; g and e 2 ; f = e 2 ; g implies f = g. This is equivalent to e 2 ; f = e 2 ; g implies f = g. Thus, e 2 is an epimorphism. Together with e 2 being a mono (see above) we conclude that e 2 is an isomorphism, because adhesive categories are balanced [START_REF] Lack | Adhesive and quasiadhesive categories[END_REF]. This means, there exists a mediating morphism L3 → O → D 1 which contradicts the earlier assumption that L 3 → D * 2 → D 1 satises N 3 . Example 3. If in Fig. 3 we replace rule p 3 by rule p 4 of Fig. 5 that has an incremental NAC, so that s = G 0 = p1,m1 ===⇒ G 1 = p2,m2 ===⇒ G 2 = p4,m4 ===⇒ G 3 = (s 1 ; s 2 ; s 4 ), then the problem described in Ex. 1 does not hold anymore, because s 2 and s 4 are not sequentially independent, and they remain dependent in the sequence s = s 2 ; s 4 ; s 1 . Let us start with some auxiliary technical facts that hold in adhesive categories and that will be exploited to show that the compilation algorithm terminates, which requires some ingenuity because sometimes a single constraint can be compiled into several ones. Denition 2 (nitely decomposable monos). A mono A f B is called at most k-decomposable, with k ≥ 0, if for any sequence of arrows f 1 ; f 2 ; • • • ; f h = f where for all 1 ≤ i ≤ h arrow f i is a mono and it is not an iso, it holds h ≤ k. 
Mono f is called k-decomposable if it is at most k-decomposable and either k = 0 and f is an iso, or there is a mono-decomposition like the above with h = k. A mono is nitely decomposable if it is k-decomposable for some k ∈ N. A 1-decomposable mono is called atomic. From the denition it follows that all and only the isos are 0-decomposable. Furthermore, any atomic (1-decomposable) mono is incremental, but the converse is false in general. For example, in Graph the mono {•} {• → •} is incremental but not atomic. Actually, it can be shown that in Graph all incremental monos are at most 2-decomposable, but there exist adhesive categories with k-decomposable incremental monos for any k ∈ N. Furthermore, every nitely decomposable mono f : A B can be factorized as A K g B where g is incremental and maximal in a suitable sense. Proposition 3 (decomposition and incrementality). Let f : A B be nitely decomposable. Then there is a factorization A K g B of f such that g is incremental and there is no K such that f = A K K g B, where K K is not an iso and K K g B is incremental. In this case we call g maximally incremental w.r.t. f . Proposition 4 (preservation and reection of k-decomposability). Let the square to the right be a pushout and a be a mono. Then b is a k-decomposable mono if and only if d is a kdecomposable mono. A (1) / / a / / b B d C / / c / / D In the following construction of incremental NACs starting from general ones, we will need to consider objects that are obtained starting from a span of monos, like pushout objects, but that are characterised by weaker properties. We describe now how to transform a rule p with arbitrary nitely decomposable constraints into a set of rules with simpler constraints: this will be the basic step of the algorithm that will compile a set of rules with nitely decomposable NACs into a set of rules with incremental NACs only. B ' ' d ' ' . . ( ( A / / h / / 8 8 a 8 8 & & b & & A O O a O O b D / / g / / G C 7 Denition 4 (compiling a rule with NAC). Let p = L K R, N be a rule with NAC, where the NAC N = {n i : L L i | i ∈ [1, s]} is a nite set of nitely decomposable monic constraints and at least one constraint, say n j , is not incremental. Then we dene the set of rules with NACs INC (p, n j ) in the following way. L j be a decomposition of n j such that k is maximally incremental w.r.t. n j (see Prop. 3). Then INC (p, n j ) = {p , p j }, where: 1. p is obtained from p by replacing constraint n j : L L j with constraint n j : L M j . 2. p j = M j K R , N , where M j K R is obtained by apply- ing rule L K R to match n j : L M j , as in the next diagram. L n j (1) K (2) o o l o o / / r / / R M j K o o l * o o / / r * / / R Furthermore, N is a set of constraints N = N 1 ∪ • • • ∪ N s obtained as follows. (1) N j = {k : M j L j }. (2) For all i ∈ [1, s] \ {j}, N i = {n ih : M j L ih | L i L ih M j is a quasi-pushout of L i L M j }. Before exploring the relationship between p and INC (p, n j ) let us show that the denition is well given, i.e., that in Def. 4(b).2 the applicability of L K R to match n j : L M j is guaranteed. K 5 5 ( ( / / / / (1) X (2) / / / / • L / / n j / / ( ( nj 5 5 M j / / / / L j In fact, by the existence of a pushout complement of K L nj L j we can build a pushout that is the external square of the diagram on the right; next we build the pullback (2) and obtain K X as mediating morphism. 
Since (1) + ( 2) is a pushout, (2) is a pullback and all arrows are mono, from Lemma 4.6 of [START_REF] Lack | Adhesive and quasiadhesive categories[END_REF] we have that (1) is a pushout, showing that K L M j has a pushout complement. The following result shows that INC (p, n j ) can simulate p, and that if the decomposition of constraint n j has a pushout complement, then also the converse is true. {Li} / q ) ) L o o {n i } o o m (1) K (2) o o l o o / / r / / R m * G D o o l * o o / / r = id {• 1 } , n : {• 1 } {• 1 → • 2 → • 3 } be a rule (the identity rule on graph {• 1 }) with a single negative constraint n, which is not incremental. Then according to Def. 4 we obtain INC (p, n) = {p , p 1 } where p = id {• 1 } , n : {• 1 } {• 1 → • 2 } and p 1 = id {• 1 →• 2 } , n : {• 1 → • 2 } {• 1 → • 2 → • 3 } . Note G = {• 2 ← • 1 → • 2 → • 3 }, and let x be the inclusion morphism from {• 1 → • 2 } to G. Then G p1,x =⇒ G, but the induced inclusion match m : {• 1 } → G does not satisfy constraint n. Starting with a set of rules with arbitrary (but nitely decomposable) NACs, the construction of Def. 4 can be iterated in order to get a set of rules with incremental NACs only, that we shall denote INC (P ). As expected, INC (P ) simulates P , and they are equivalent if all NACs are obtained as colimits of incremental constraints. Denition 5 (compiling a set of rules). Let P be a nite set of rules with NACs, such that all constraints in all NACs are nitely decomposable. Then the set INC (P ) is obtained by the following procedure. =⇒ H for all G. 4. Suppose that each constraint of each rule in P is the colimit of incremental monos, i.e., for each constraint L L , L is the colimit object of a nite diagram {L L i } i∈I of incremental monos. Then P and INC (P ) are equivalent, i.e., we also have that G INC (P ) =⇒ H implies G P =⇒ H. Proof. Point 2 is obvious, given the guard of the while loop, provided that it terminates. Also the proofs of points 3 and 4 are pretty straightforward, as they follow by repeated applications of Prop. 6. The only non-trivial proof is that of termination. To this aim, let us use the following lexicographic ordering, denoted N k , for a xed k ∈ N, that is obviously well-founded. The elements of N k are sequences of natural numbers of length k, like σ = σ 1 σ 2 . . . σ k . The ordering is dened as σ < σ i σ h < σ h , where h ∈ [1, k] (b). In this case rule p is obtained from p by replacing the selected constraint with one that is at most ( k -1)-decomposable. Furthermore, each other constraint n i is replaced by a set of constraints, obtained as quasi-pushouts of n i and n j . If n i is incremental, so are all the new constraints obtained as quasi-pushouts, by Prop. 5(4), and thus they don't contribute to the degree. If instead n i is non-incremental, then it is h-decomposable for h ≤ k, by denition of k. Then by Prop. 5(3) all constraints obtained as proper quasi-pushouts are at most (h -1)-decomposable, and only one (obtained as a pushout) will be h-decomposable. Discussion and Conclusion In our quest for a stable notion of independence for conditional transformations, we have dened a restriction to incremental NACs that guarantees this property (Thm. 1). Incremental NACs turn out to be quite powerful, as they are sucient for several case studies of GTSs. 
In particular, the well studied model transformation from class diagrams to relational data base models [START_REF] Hermann | Ecient Analysis and Execution of Correct and Complete Model Transformations Based on Triple Graph Grammars[END_REF] uses incremental NACs only. In an industrial application for translating satellite software (pages 14-15 in [START_REF] Ottersten | Interdisciplinary Centre for Security, Reliability and Trust -Annual Report 2011[END_REF]), we used a GTS with more than 400 rules, where only 2 of them have non-incremental NACs. Moreover, the non-incremental NACs could also have been avoided by some modications of the GTS. Incremental morphisms have been considered recently in [START_REF] Heckel | DPO Transformation with Open Maps[END_REF], in a framework dierent but related to ours, where requiring that matches are open maps one can restrict the applicability of transformation rules without using NACs. We have also presented a construction that compiles as set of rules with general (nitely-decomposable) NACs into a set of rules with incremental NACs only. For NACs that are obtained as colimits of incremental ones, this compilation yields an equivalent system, i.e., for every transformation in the original GTS there exists compatible step in the compiled one and vice versa (Thm. 2), and therefore the rewrite relation on graphs is still the same. In the general case, the compiled system provides an overapproximation of the original GTS, which nevertheless can still be used to analyse the original system. In fact our intention is to dene a stable notion of independence on transformations with general NACs. Using the compilation, we can declare a two-step sequence independent if this is the case for all of its compilations, or more liberally, for at least one of them. Both relations should lead to notions of equivalence that are ner than the standard shift equivalence, but that behave well thanks to Thm. 1. Moreover, independence should be expressed directly on the original system, rather than via compilation. Such a revised relation will be the starting point for developing a more advanced theory of concurrency for conditional graph transformations, including processes and unfoldings of GTSs. The main results in this paper can be applied for arbitrary adhesive transformation systems with monic matches. However, in some cases (like for attributed graph transformation system) the restriction to injective matches is too strict (rules contain terms that may be mapped by the match to equal values). As shown in [START_REF] Hermann | Analysis of Permutation Equivalence in Madhesive Transformation Systems with Negative Application Conditions[END_REF], the concept of NAC-schema provides a sound and intuitive basis for the handling of non-injective matches for systems with NACs. We are condent that an extension of our results to general matches is possible based on the concept of NAC-schema. Another intersting topic that we intend to study is the complexity of the algorithm of Def. 5, and the size of the set of rules with incremental constraints, INC (P ), that it generates. Furthermore, we plan to extend the presented results for shift equivalence to the notion of permutation equivalence, which is coarser and still sound according to [START_REF] Hermann | Analysis of Permutation Equivalence in Madhesive Transformation Systems with Negative Application Conditions[END_REF]. Finally, we also intend to address the problem identied in Ex. 
1 at a more abstract level, by exploiting the event structures with or-causality of events that are discussed in depth in [START_REF] Langerak | Causal ambiguity and partial orders in event structures[END_REF].
Fig. 1. van Kampen condition (left) and DPO diagram (right)
Fig. 2. Sequential independence
Fig. 3. Independence of p2 and p3 is not preserved by switching with p1
two independent ways: by the loop on 1 in O3, and by the outgoing edge with one additional node 2 in O′3. Indeed, there is no mediating arrow from O3 to O′3 or vice versa relating these two decompositions. Instead the constraint n4 : L4 → L̄4 ∈ N4 of rule p4 of Fig. 5 is incremental: it can be decomposed in only one non-trivial way, as shown in the top of the right diagram, and for any other possible decomposition one can find a mediating morphism (as shown for one specific case).
Let f : A → B be an incremental arrow and f = g; h with monos g : A → C and h : C → B. Then both h and g are incremental.
and N3 be the NACs of p1, p2 and p3, respectively. Due to sequential independence of G1 =p2,m2⇒ G2 and G2 =p3,m3⇒ G3, match m3 : L3 → G2 extends to a match m*3 : L3 → G1 satisfying N3. Using that both m3 and m*3 satisfy N3, we show below that the match m′3 : L3 → G0, that exists by the classical local Church-Rosser, satisfies N3, too. This provides one half of the independence of G0 =p2,m′2⇒ G′1 and G′1 =p3,m′3⇒ G′2. By reversing the two horizontal sequences in the diagram above with the same argument we obtain the proof for the other half, i.e., that the comatch of p2 into G2 satisfies the equivalent right-sided NAC of N2, which is still incremental thanks to Prop. 2. Finally reversing the vertical steps yields the reverse implication, that independence of the upper sequence implies independence of the lower. The diagram in Fig. 4(a) shows a decomposition of the transformations
Fig. 4. Constructions for proof of Thm. 1
Matches L3 → D*2 → D1 and L3 → D*2 → D′1 satisfy N3 because they are prefixes of matches m*3 and m′3, respectively; indeed, it is easy to show that m; m′ |= n ⇒ m |= n for injective matches m, m′ and constraint n. (b) as follows. The bottom face is pushout (1), faces front left (FL), front right (FR) and top (TOP) are constructed as pullbacks. The commutativity induces a unique morphism O* → D*2 making the back faces commuting and thus, all faces in the cube commute. Back left face (BL) is a pullback by pullback decomposition of pullback (TOP+FR) via (BL+(1)) and back right face (BR) is a pullback by pullback decomposition of pullback (TOP+FL) via (BR+(1)). We obtain o* : L3 → O* as induced morphism from pullback (BL+FL) and using the assumption m′3 = n; q. Further, by the van Kampen property, the top face is a pushout. Since the constraint is incremental and L3 → O → L̄3 = L3 → O′ → L̄3, without loss of generality we have a morphism i : O → O′ commuting the triangles.
Fig. 5. Rule p4 with incremental NAC
Fig. 6. Rule p3 (left) and the set INC({p3}) = {p31, p32} (right)
Fig. 7. Quasi-pushout of monos in an adhesive category
such that the mediating morphism g : D → G is mono. 2. Let B ← A → C be a span of monos. If objects B and C are finite (i.e., they have a finite number of subobjects), then the number of non-isomorphic distinct quasi-pushouts of the span is finite.
3. In a span B ← A → C with b : A → C k-decomposable, suppose that B → D ← C is a quasi-pushout based on A′, where h : A′ → A is not an iso. Then the mono d : B → D is at most (k − 1)-decomposable.
4. Quasi-pushouts preserve incrementality: if B → D ← C is a quasi-pushout of B ← A → C and b : A → C is incremental, then also d : B → D is incremental.
(a) If K → L → L̄j has no pushout complement, then INC(p, nj) = {p′}, where p′ is obtained from p by dropping constraint nj. (b) Otherwise, let L → Mj → L̄j
    INC(P) := P
    while (there is a rule in INC(P) with a non-incremental constraint) do
        let k = max{ k | there is a k-decomposable non-incremental constraint in INC(P) }
        let n be a k-decomposable non-incremental constraint of p ∈ INC(P)
        set INC(P) := (INC(P) \ {p}) ∪ INC(p, n)
    endwhile
    return INC(P)
Theorem 2 (correctness and conditional completeness of compilation).
namely the set INC({p3}) = {p31, p32} containing rules with incremental NACs only. It is not difficult to see that p3 can be applied to a match if and only if either p31 or p32 can be applied to the same match (determined by the image of node 1), and the effect of the rules is the same (adding a new node). In fact, if either p31 or p32 can be applied, then also p3 can be applied to the same match, because at least one part of its NAC is missing (the loop if p31 was applied, otherwise the edge). Vice versa, if p3 can be applied, then either the loop on 1 is missing, and p31 is applicable, or the loop is present but there is no non-looping edge from 1, and thus p32 can be applied. As a side remark, notice that the NACs of p31 and p32
5 Transforming General NACs into Incremental NACs
In this section we show how to compile a set of rules P with arbitrary NACs into a (usually much larger) set of rules INC(P) having incremental NACs only. The construction guarantees that every derivation using rules in P can be transformed into a derivation over INC(P). Additionally, we show that P and INC(P) are actually equivalent if all constraints in P are obtained as colimits of incremental constraints. The example shown in Fig. 6 can help getting an intuition about the transformation. It shows one possible outcome (indeed, the algorithm we shall present is non-deterministic) of the application of the transformation to rule p3.
Example 4. Fig. 6 shows one possible outcome of INC({p3}), as discussed at the beginning of this section. As the NAC of p3 is a colimit of incremental arrows, {p3} and INC({p3}) are equivalent. Instead, let p
Fig. 8. DPO diagram with NAC
Proposition 6 (relationship between p and INC(p, nj)). In the hypotheses of Def. 4, if G =p⇒ H then G =INC(p,nj)⇒ H. Furthermore, if the decomposition of nj (L → Mj → L̄j) has a pushout complement, then G =INC(p,nj)⇒ H implies G =p⇒ H.
that all constraints in INC(p, n) are incremental, but the splitting of n as n′; n″ does not have a pushout complement. Indeed, we can find a graph to which p1 is applicable but p′ is not, showing that the condition we imposed on NACs to prove that p and INC(p, nj) are equivalent is necessary. In fact, let
1. The algorithm of Def. 5 terminates. 2. INC(P) contains rules with incremental NACs only. 3. INC(P) simulates P, i.e., G =P⇒ H implies G =INC(P)⇒ H.
is the highest position at which σ and σ′ differ.
Now, let k be the minimal number such that all non-incremental constraints in P are at most k-decomposable, and define the degree of a rule p, deg(p), as the σ ∈ N^k given by σ_i = |{n | n is an i-decomposable non-incremental constraint of p}|. Define deg(Q) for a finite set of rules as the componentwise sum of the degrees of all the rules in Q. We conclude by showing that at each iteration of the loop of Def. 5 the degree deg(INC(P)) decreases strictly. Let p be a rule and n be a non-incremental constraint, k-decomposable for a maximal k. The statement follows by showing that INC(p, n) has at least one k-decomposable non-incremental constraint less than p, while all other constraints are at most (k − 1)-decomposable. This is obvious if INC(p, n) is obtained according to point (a) of Def. 4. Otherwise, let INC(p, n) = {p′, pj} using the notation of point (b).
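The termination argument above rests on the lexicographic ordering of degrees, compared from the highest position downwards. The following Java fragment is only our own illustration of that ordering; the representation of a degree as an int array (index i holding the number of (i+1)-decomposable non-incremental constraints) and all names are ours, not part of any implementation accompanying the paper.

```java
import java.util.Arrays;

/** Illustration (ours) of the lexicographic ordering on degrees used in the termination proof. */
public final class Degree {

    /**
     * Compares two degree vectors of equal length k:
     * sigma < tau iff they differ and, at the highest differing position h, sigma[h] < tau[h].
     */
    static int compare(int[] sigma, int[] tau) {
        if (sigma.length != tau.length) {
            throw new IllegalArgumentException("degrees must have the same length");
        }
        for (int h = sigma.length - 1; h >= 0; h--) {  // highest position first
            if (sigma[h] != tau[h]) {
                return Integer.compare(sigma[h], tau[h]);
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        // index 0: 1-decomposable, index 1: 2-decomposable, index 2: 3-decomposable constraints
        int[] before = {0, 0, 1};  // one 3-decomposable non-incremental constraint
        int[] after  = {0, 5, 0};  // replaced by several constraints of strictly lower decomposability
        // Only the highest differing position matters, so 'after' is strictly smaller:
        System.out.println(Arrays.toString(after) + " < " + Arrays.toString(before)
                + " : " + (compare(after, before) < 0));  // prints: [0, 5, 0] < [0, 0, 1] : true
    }
}
```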
35,599
[ "1003773", "1003778", "1003779", "1003780", "1003781" ]
[ "366408", "300751", "104741", "104741", "104741" ]
01486026
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01486026/file/978-3-642-38493-6_13_Chapter.pdf
Behrooz Nobakht email: [email protected] Frank S De Boer Mohammad Mahdi Jaghoori email: [email protected] The Future of a Missed Deadline Keywords: actors, application-level scheduling, real-time, deadlines, futures, Java Introduction In real-time applications, rigid deadlines necessitate stringent scheduling strategies. Therefore, the developer must ideally be able to program the scheduling of different tasks inside the application. Real-Time Specification for Java (RTSJ) [START_REF]RTSJ v1 JSR 1[END_REF][START_REF]RTSJ v1.1 JSR 282[END_REF] is a major extension of Java, as a mainstream programming language, aiming at enabling real-time application development. Although RTSJ extensively enriches Java with a framework for the specification of real-time applications, it yet remains at the level of conventional multithreading. The drawback of multithreading is that it exposes the programmer to OS-related concepts like threads, whereas a real-time Java developer should only be concerned about high-level entities, i.e., objects and method invocations, also with respect to real-time requirements. The actor model [START_REF] Scott | A foundation for actor computation[END_REF] and actor-based programming languages, which have re-emerged in the past few years [START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF][START_REF] Armstrong | Programming Erlang: Software for a Concurrent World[END_REF][START_REF] Haller | Scala actors: Unifying thread-based and eventbased programming[END_REF][START_REF] Broch | An Asynchronous Communication Model for Distributed Concurrent Objects[END_REF][START_REF] Varela | Programming dynamically reconfigurable open systems with SALSA[END_REF], provide a different and promising paradigm for concurrency and distributed computing, in which threads are transparently encapsulated inside actors. As we will argue in this paper, this paradigm is much more suitable for real-time programming because it enables the programmer to obtain the appropriate high-level view which allows the management of complex real-time requirements. In this paper, we introduce an actor-based programming language Crisp for real-time applications. Basic real-time requirements include deadlines and time-outs. In Crisp, deadlines are associated with asynchronous messages and timeouts with futures [START_REF] Frank | A complete guide to the future[END_REF]. Crisp further supports a general actor-based mechanism for handling exceptions raised by missed deadlines. By the integration of these basic real-time control mechanisms with the application-level policies supported by Crisp for scheduling of the messages inside an actor, more complex real-time requirements of the application can be met with more flexibility and finer granularity. We formalize the design of Crisp by means of structural operational semantics [START_REF] Gordon D Plotkin | The origins of structural operational semantics[END_REF] and describe its implementation as a full-fledged programming language. This implementation uses both the Java and Scala languages together with extensions of the Akka library. We illustrate the use of the programming language with an industrial case study from SDL Fredhopper that provides enterprise-scale distributed e-commerce solutions on the cloud.
The paper continues as follows: Section 2 introduces the language constructs and provides informal semantics of the language with a case study in Section 2.1. Section 3 presents the operational semantics of Crisp. Section 4 follows to provide a detailed discussion on the implementation. The case study continues in this section with further details and code examples. Section 5 discusses related work and finally Section 6 concludes the paper and proposes future lines of research. Programming with deadlines In this section, we introduce the basic concepts underlying the notion of "deadlines" for asynchronous messages between actors. The main new constructs specify how a message can be sent with a deadline, how the message response can be processed, and what happens when a deadline is missed. We discuss the informal semantics of these concepts and illustrate them using a case study in Section 2.1. Fig. 1 introduces a minimal version of the real-time actor-based language Crisp. Below we discuss the two main new language constructs presented at lines (7) and (8).

C    ::= class N begin V? {M}* end                    (1)
Msig ::= N(T x)                                       (2)
M    ::= {Msig == {V;}? S}                            (3)
V    ::= var {{x},+ : T {= e}?},+                     (4)
S    ::= x := e                                       (5)
       | x := new T(e?)                               (6)
       | f = e ! m(e) deadline(e)                     (7)
       | x := f.get(e?)                               (8)
       | return e                                     (9)
       | S ; S                                        (10)
       | if (b) then S else S end                     (11)
       | while (b) { S }                              (12)
       | try {S} catch(T Exception x) { S }           (13)

Fig. 1: A kernel version of the real-time programming language. The bold scripted keywords denote the reserved words in the language. The over-lined v denotes a sequence of syntactic entities v. Both local and instance variables are denoted by x. We assume distinguished local variables this, myfuture, and deadline which denote the actor itself, the unique future corresponding to the process, and its deadline, respectively. A distinguished instance variable time denotes the current time. Any subscripted type T specialized denotes a specialized type of general type T; e.g. T Exception denotes all "exception" types. A variable f is in T future. N is a name (identifier) used for classes and method names. C denotes a class definition which consists of a definition of its instance variables and its methods; Msig is a method signature; M is a method definition; S denotes a statement. We abstract from the syntax the side-effect free expressions e and boolean expressions b.

How to send a message with a deadline? The construct f = e0 ! m(e) deadline(e1) describes an asynchronous message with a deadline specified by e1 (of type T time). Deadlines can be specified using a notion of time unit such as millisecond, second, minute or other units of time. The caller expects the callee (denoted by e0) to process the message within the units of time specified by e1. Here processing a message means starting the execution of the process generated by the message. A deadline is missed if and only if the callee does not start processing the message within the specified units of time. What happens when a deadline is missed? Messages received by an actor generate processes. Each actor contains one active process and all its other processes are queued. Newly generated processes are inserted in the queue according to an application-specific policy.
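To make the queueing model just described concrete, the sketch below shows one possible Java shape of a deadline-carrying message turned into a pending process, and of an actor-local queue whose insertion order is delegated to a policy. All class, field and method names here are ours and serve only as an illustration of the concepts; they are not the API of the Crisp implementation.

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Queue;

/** Hypothetical sketch: a pending process generated by an asynchronous message with a deadline. */
final class PendingProcess {
    final String method;          // the invoked method
    final long absoluteDeadline;  // absolute time (millis) by which processing must start

    PendingProcess(String method, long relativeDeadlineMillis) {
        this.method = method;
        this.absoluteDeadline = System.currentTimeMillis() + relativeDeadlineMillis;
    }
}

/** Hypothetical sketch: one active process per actor, all others queued by a pluggable policy. */
final class ActorQueue {
    // The insertion policy is application-specific; here we simply order by absolute deadline.
    private final Queue<PendingProcess> pending =
            new PriorityQueue<>(Comparator.comparingLong((PendingProcess p) -> p.absoluteDeadline));

    void enqueue(PendingProcess p) { pending.add(p); }

    /** Invoked when the active process terminates: the head of the queue becomes active. */
    PendingProcess activateNext() { return pending.poll(); }
}
```

A different policy would simply supply a different comparator (or a different queue implementation) without changing the rest of the model.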
When a queued process misses its deadline it is removed from the queue and a corresponding exception is recorded by its future (as described below). When the currently active process is terminated the process at the head of the queue is activated (and as such dequeued). The active process cannot be preempted and is forced to run to completion. In Section 4 we discuss the implementation details of this design choice. How to process the response of a message with a deadline? In the above example of an asynchronous message, the future result of processing the message is denoted by the variable f which has the type of Future. Given a future variable f , the programmer can query the availability of the result by the construct v = f.get(e) The execution of the get operation terminates successfully when the future variable f contains the result value. In case the future variable f records an exception, e.g. in case the corresponding process has missed its deadline, the get operation is aborted and the exception is propagated. Exceptions can be caught by try-catch blocks. Listing 1: Using try-catch for processing future values 1 try { 2 x = f.get(e) 3 S_1 4 } catch(Exception x) { 5 S_2 6 } For example, in Listing 1, if the get operation raises an exception control, is transferred to line (5); otherwise, the execution continues in line (3). In the catch block, the programmer has also access to the occurred exception that can be any kind of exception including an exception that is caused by a missed deadline. In general, any uncaught exception gives rise to abortion of the active process and is recorded by its future. Exceptions in our actor-based model thus are propagated by futures. The additional parameter e of the get operation is of type T time and specifies a timeout; i.e., the get operation will timeout after the specified units of time. This challenging task involves working on difficult issues, such as the performance of information retrieval algorithms, the scalability of dealing with huge amounts of data and in satisfying large amounts of user requests per unit of time, the fault tolerance of complex distributed systems, and the executive monitoring and management of large-scale information retrieval oper-ations. Fredhopper offers its services and facilities to e-Commerce companies (customers) as services (SaaS) over the cloud computing infrastructure (IaaS); which gives rise to different challenges in regards with resources management techniques and the customer cost model and service level agreements (SLA). To orchestrate different services such as FAS or data processing, Fredhopper takes advantage of a service controller (a.k.a. Controller). Controller is responsible to passively manage different service installations for each customer. For instance, in one scenario, a customer submits their data along with a processing request to their data hub server. Controller, then picks up the data and initiates a data processing job (usually an ETL job) in a data processing service. When the data processing is complete, the result is again published to customer environment and additionally becomes available through FAS services. Figure 2 illustrates an example scenario that is described above. In the current implementation of Controller, at Step 4, a data job instance is submitted to a remote data processing service. Afterwards, the future response of the data job is determined by a periodic remote check on the data service (Step 4). 
When the job is finished, Controller continues to retrieve the data job results (Step 5) and eventually publishes them to the customer environment (Step 6). In terms of system responsiveness, Step 4 may never complete. Step 4 failure can have different causes. For instance, at any moment of time, there are different customers' data jobs running on one data service node; i.e. there is a chance that a data service becomes overloaded with data jobs, preventing the periodic data job check from returning. If Step 4 fails, it leads the customer into an unbounded waiting situation. According to SLA agreements, this is not acceptable. It is strongly required that for any data job, the customer should be notified of the result: either a completed job with success/failed status, a job that is not completed, or a job with an unknown state. In other words, Controller should be able to guarantee that any data job request terminates. To illustrate the contribution of this paper, we extract a closed-world simplified version of the scenario in Figure 2 from Controller. In Section 4, we provide an implementation-level usage of our work applied to this case study. Operational Semantics We describe the semantics of the language by means of a two-tiered labeled transition system: a local transition system describes the behavior of a single actor and a global transition system describes the overall behavior of a system of interacting actors. We define an actor state as a pair (p, q), where p denotes the current active process of the actor, and q denotes a queue of pending processes. Each pending process is a pair (S, τ) consisting of the current executing statement S and the assignment τ of values to the local variables (e.g., formal parameters). The active process consists of a pair (S, σ), where σ assigns values to the local variables and additionally assigns values to the instance variables of the actor. Local transition system The local transition system defines transitions among actor configurations of the form p, q, φ, where (p, q) is an actor state and, for any object o identifying a created future, φ denotes the shared heap of the created future objects; i.e., φ(o), for any future object o existing in φ, denotes a record with a field val which represents the return value and a boolean field aborted which indicates abortion of the process identified by o. In the local transition system we make use of the following axiomatization of the occurrence of exceptions. Here (S, σ, φ) ↑ v indicates that S raises an exception v: (i) (x = f.get(), σ, φ) ↑ σ(f) where φ(σ(f)).aborted = true; (ii) if (S, σ, φ) ↑ v then (try{S}catch(T u){S′}, σ, φ) ↑ v, where v is not of type T; and (iii) if (S, σ, φ) ↑ v then (S; S′, σ, φ) ↑ v. We present here the following transitions describing internal computation steps (we denote by val(e)(σ) the value of the expression e in σ and by f[u → v] the result of assigning the value v to u in the function f). Assignment statement is used to assign a value to a variable: (x = e; S, σ), q, φ → (S, σ[x → val(e)(σ)]), q, φ. Returning a result consists of setting the field val of the future of the process: (return e; S, σ), q, φ → (S, σ), q, φ[σ(myfuture).val → val(e)(σ)]. Initialization of timeout in get operation assigns to a distinguished (local) variable timeout its initial absolute value: (x = f.get(e); S, σ), q, φ → (x = f.get(e); S, σ[timeout → val(e + time)(σ)]), q, φ. The get operation is used to assign the value of a future to a variable: (x = f.get(); S, σ), q, φ → (S, σ[x → φ(σ(f)).val]), q, φ where φ(σ(f)).val ≠ ⊥.
Timeout is operationally presented by the following transition: (x = f.get(); S, σ), q, φ → (S, σ), q, φ where σ(time) < σ(timeout). The try-catch block semantics is presented by: (S, σ), q, φ → (S , σ ), q , φ (try{S}catch(T x){S }; S , σ), q, φ → (try{S }catch(T x){S }; S , σ), q , φ Exception handling. We provide the operational semantics of exception handling in a general way in the following: (S, σ, φ) ↑ v (try{S}catch(T x){S }; S , σ), q, φ → (S ; S , σ[x → v]), q, φ where the exception v is of type T. Abnormal termination of the active process is generated by an uncaught exception: (S, σ, φ) ↑ v (S; S , σ), q, φ → (S , σ ), q , φ where q = (S , τ ) • q and σ is obtained from restoring the values of the local variables as specified by τ (formally, σ (x) = σ(x), for every instance variable x, and σ (x) = τ (x), for every local variable x), and φ (σ(myfuture)).aborted = true (φ (o) = φ(o), for every o = σ(myfuture)). Normal termination is presented by: (E, σ), q, φ → (S, σ ), q , φ where q = (S, τ ) • q and σ is obtained from restoring the values of the local variables as specified by τ (see above). We denote by E termination (identifying S; E with S). Deadline missed. Let (S , τ ) be some pending process in q such that τ (deadline) < σ(time). Then (S, σ), q, φ → p, q , φ where q results from q by removing (S , τ ) and φ (τ (myfuture)).aborted = true (φ (o) = φ(o), for every o = τ (myfuture)). A message m(τ ) specifies for the method m the initial assignment τ of its local variables (i.e., the formal parameters and the variables this, myfuture, and deadline). To model locally incoming and outgoing messages we introduce the following labeled transitions. Incoming message. Let the active process p belong to the actor τ (this) (i.e., σ(this) = τ (this) for the assignment σ in p): p, q, φ m(τ ) ---→ p, insert(q, m(v, d)), φ where insert(q, m(τ )) defines the result of inserting the process (S, τ ), where S denotes the body of method m, in q, according to some application-specific policy (described below in Section 4). Outgoing message. We model an outgoing message by: (f = e 0 ! m(ē) deadline(e 1 ); S, σ), q, φ m(τ ) ---→ (S, σ[f → o]), q, φ Global transition system A (global) system configuration S is a pair (Σ, φ) consisting of a set Σ of actor states and a global heap φ which stores the created future objects. We denote actor states by s, s , s , etc. Local computation step. The interleaving of local computation steps of the individual actors is modeled by the rule: (s, φ) → (s , φ ) ({s} ∪ Σ, φ) → ({s } ∪ Σ, φ ) Communication. Matching a message sent by one actor with its reception by the specified callee is described by the rule: (s 1 , φ) m(τ ) ---→ (s 1 , φ ) (s 2 , φ) m(τ ) ---→ (s 2 , φ) ({s 1 , s 2 } ∪ Σ, φ) → ({s 1 , s 2 } ∪ Σ, φ ) Note that only an outgoing message affects the shared heap φ of futures. where Progress of Σ = { (S, σ ), q, φ | (S, σ), q, φ ∈ Σ, σ = σ[time → σ(time) + δ]} for some positive δ. Implementation We base our implementation on Java's concurrent package: java.util.concurrent. The implementation consists of the following major components: 1. An extensible language API that owns the core abstractions, architecture, and implementation. For instance, the programmer may extend the concept of a scheduler to take full control of how, i.e., in what order, the processes of the individual actors are queued (and as such scheduled for execution). We illustrate the scheduler extensibility with an example in the case study below. 2. 
Language Compiler that translates the modeling-level programs into Java source. We use ANTLR [START_REF] Parr | Antlr[END_REF] parser generator framework to compile modelinglevel programs to actual implementation-level source code of Java. 3. The language is seamlessly integrated with Java. At the time of programming, language abstractions such as data types and third-party libraries from either Crisp or Java are equally usable by the programmer. We next discuss the underlying deployment of actors and the implementation of real-time processes with deadlines. Deploying actors onto JVM threads. In the implementation, each actor owns a main thread of execution, that is, the implementation does not allocate one thread per process because threads are costly resources and allocating to each process one thread in general leads to a poor performance: there can be an arbitrary number of actors in the application and each may receive numerous messages which thus give rise to a number of threads that goes beyond the limits of memory and resources. Additionally, when processes go into pending mode, their correspondent thread may be reused for other processes. Thus, for better performance and optimization of resource utilization, the implementation assigns a single thread for all processes inside each actor. Consequently, at any moment in time, there is only one process that is executed inside each actor. On the other hand, the actors share a thread which is used for the execution of a watchdog for the deadlines of the queued processes (described below) because allocation of such a thread to each actor in general slows down the performance. Further this sharing allows the implementation to decide, based on the underlying resources and hardware, to optimize the allocation of the watchdog thread to actors. For instance, as long as the resources on the underlying hardware are abundant, the implementation decides to share as less as possible the thread. This gives each actor a better opportunity with higher precision to detect missed deadlines. Implementation of processes with deadlines. A process itself is represented in the implementation by a data structure which encapsulates the values of its local variables and the method to be executed. Given a relative deadline d as specified by a call we compute at run-time its absolute deadline (i.e. the expected starting time of the process) by TimeUnit.toMillis(d) + System.currentTimeMillis() which is a soft real-time requirement. As in the operational semantics, in the real-time implementation always the head of the process queue is scheduled for execution. This allows the implementation of a default earliest deadline first (EDF) scheduling policy by maintaining a queue ordered by the above absolute time values for the deadlines. The important consequence of our non-preemptive mode of execution for the implementation is the resulting simplicity of thread management because preemption requires additional thread interrupts that facilitates the abortion of a process in the middle of execution. As stated above, a single thread in the implementation detects if a process has missed its deadline. This task runs periodically and to the end of all actors' life span. To check for a missed deadline it suffices to simply check for a process that the above absolute time value of its deadline is smaller than System.currentTimeMillis(). 
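A minimal sketch of the two ingredients just described, the absolute-deadline computation with an EDF-ordered queue and the periodic watchdog check, might look as follows in plain Java. Only TimeUnit.toMillis and System.currentTimeMillis are taken from the text above; the class and field names are ours and do not reflect the actual Crisp code base.

```java
import java.util.Comparator;
import java.util.Iterator;
import java.util.PriorityQueue;
import java.util.concurrent.TimeUnit;

/** Hypothetical sketch of EDF ordering and deadline-miss detection for a single actor. */
final class DeadlineQueue {

    static final class Process {
        final Runnable body;
        final long absoluteDeadline;  // soft deadline: the expected starting time, in millis

        Process(Runnable body, long relativeDeadline, TimeUnit unit) {
            this.body = body;
            this.absoluteDeadline = unit.toMillis(relativeDeadline) + System.currentTimeMillis();
        }
    }

    // Earliest deadline first: the head is the process with the smallest absolute deadline.
    private final PriorityQueue<Process> queue =
            new PriorityQueue<>(Comparator.comparingLong((Process p) -> p.absoluteDeadline));

    synchronized void enqueue(Process p) { queue.add(p); }

    /** Non-preemptive execution: the caller runs the returned process to completion. */
    synchronized Process next() { return queue.poll(); }

    /** Called periodically by the shared watchdog thread. */
    synchronized void dropMissedDeadlines() {
        long now = System.currentTimeMillis();
        for (Iterator<Process> it = queue.iterator(); it.hasNext(); ) {
            Process p = it.next();
            if (p.absoluteDeadline < now) {
                it.remove();
                // In the real system the miss would also be recorded in the process's future.
            }
        }
    }
}
```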
When a process misses its deadline, the actions as specified by the corresponding transition of the operational semantics are subsequently performed. The language API provides extension points which allow for each actor the definition of a customized watchdog process and scheduling policy (i.e., policy for enqueuing processes). The customized watchdog processes are still executed by a single thread. Fredhopper case study. As introduced in Section 2.1, we extract a closedworld simplified version from Fredhopper Controller. We apply the approach discussed in this paper to use deadlines for asynchronous messages. Listing 2 and 3 present the difference in the previous Controller and the approach in Crisp. The left code snippet shows the Controller that uses polling to retrieve data processing results. The right code snippet shows the one that uses messages with deadlines. When the approach in Crisp in the right snippet is applied to Controller, it is guaranteed that all data job requests are terminated in a finite amount of time. Therefore, there cannot be complains about never receiving a response for a specific data job request. Many of Fredhopper's customers rely on data jobs to eventually deliver an e-commerce service to their end users. Thus, to provide a guarantee to them that their job result is always published to their environment is critical to them. As shown in the code snippet, if the data job request is failed or aborted based on a deadline miss, the customer is still eventually informed about the situation and may further decide about it. However, in the previous version, the customer may never be able to react to a data job request because its results are never published. In comparison to the Controller using polling, there is a way to express timeouts for future values. However, it does not provide language constructs to specify a deadline for a message that is sent to data processing service. A deadline may be simulated using a combination of timeout and periodic polling approaches (Listing 2). Though, this approach cannot guarantee eventual termination in all cases; as discussed before that Step 4 in Figure 2 may never complete. Controller is required to meet certain customer expectations based on an SLA. Thus, Controller needs to take advantage of a language/library solution that can provide a higher level of abstraction for real-time scheduling of concurrent messages. When messages in Crisp carry a deadline specification, Controller is able to guarantee that it can provide a response to the customer. This termination guarantee is crucial to the business of the customer. Additionally, on the data processing service node, the new implementation takes advantage of the extensibility of schedulers in Crisp. As discussed above, the default scheduling policy used for each actor is EDF based on the deadlines carried by incoming messages to the actor. However, this behavior may be extended and replaced by a custom implementation from the programmer. In this case study, the priority of processes may differ if they the job request comes from specific customer; i.e. apart from deadlines, some customers have priority over others because they require a more real-time action on their job requests while others run a more relaxed business model. To model and implement this custom behavior, a custom scheduler is developed for the data processing node. 
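Listings 4 and 5 themselves are not reproduced in this text, so the following is only our own sketch of the kind of comparator such a customer-aware scheduler could provide: jobs of the prioritised customer are ordered before all others, with the default deadline-based (EDF) order as a tie-breaker. The Job type and all names are hypothetical.

```java
import java.util.Comparator;

/** Hypothetical job description: the submitting customer and the absolute deadline of the job. */
final class Job {
    final String customer;
    final long absoluteDeadline;

    Job(String customer, long absoluteDeadline) {
        this.customer = customer;
        this.absoluteDeadline = absoluteDeadline;
    }
}

/** Hypothetical customer-aware scheduling policy for the data processing node. */
final class CustomerPriorityScheduler implements Comparator<Job> {
    private final String prioritisedCustomer;

    CustomerPriorityScheduler(String prioritisedCustomer) {
        this.prioritisedCustomer = prioritisedCustomer;
    }

    @Override
    public int compare(Job a, Job b) {
        boolean aPrio = a.customer.equals(prioritisedCustomer);
        boolean bPrio = b.customer.equals(prioritisedCustomer);
        if (aPrio != bPrio) {
            return aPrio ? -1 : 1;                                    // prioritised customer's jobs first
        }
        return Long.compare(a.absoluteDeadline, b.absoluteDeadline);  // otherwise fall back to EDF
    }
}
```

Plugged into an actor's queue in place of the default EDF comparator, such a policy reproduces the behaviour described above: all jobs of customer A are scheduled before any other job, regardless of deadlines.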
In the listings referred to above, Listing 5 defines a custom scheduler that determines the priority of two processes with custom logic for a specific customer. To use the custom scheduler, the only requirement is that the class DataProcessor defines a specific class variable called scheduler, as in Listing 4. The custom scheduler is picked up by the Crisp core architecture and is used to schedule the queued processes. Thus, all processes from customer A have priority over processes from other customers no matter what their deadlines are. We use Controller's logs for the period of February and March 2013 to evaluate the Crisp approach. We define customer satisfaction as a property that represents the effectiveness of futures with deadlines. For a customer c, the satisfaction can be denoted by s = r^F_c / r_c, in which r^F_c is the number of finished data processing jobs and r_c is the total number of requested data processing jobs from customer c. We extracted statistics for completed and never-ending data processing jobs from Controller logs (s1). We replayed the logs with the Crisp approach and measured the same property (s2). We measured the same property for 180 customers that Fredhopper manages on the cloud. In this evaluation, a total number of about 25000 data processing requests were included. The results in Table 1 show a 6% improvement (which amounts to around 1600 additional data processing requests that complete). Because of data issues or wrong parameters in the data processing requests, there are requests that still fail or never end and should be handled by a human resource.
Table 1: Evaluation Results
  s1: 88.71%    s2: 94.57%
You may find more information including documentation and source code of Crisp at http://nobeh.github.com/crisp. Related Work The programming language presented in this paper is a real-time extension of the language introduced in [START_REF] Nobakht | Programming and deployment of active objects with application-level scheduling[END_REF]. This new extension features integration of asynchronous messages with deadlines and futures with timeouts; a general mechanism for handling exceptions raised by missed deadlines; high-level specification of application-level scheduling policies; and a formal operational semantics. To the best of our knowledge the resulting language is the first implemented real-time actor-based programming language which formally integrates the above features. In several works, e.g., [START_REF] Aceto | Modelling and Simulation of Asynchronous Real-Time Systems using Timed Rebeca[END_REF] and [START_REF] Nielsen | Semantics for an Actor-Based Real-Time Language[END_REF], asynchronous messages in actor-based languages are extended with deadlines. However, these languages do not feature futures with timeouts, a general mechanism for handling exceptions raised by missed deadlines, or support for the specification of application-level scheduling policies. Futures and fault handling are considered in the ABS language [START_REF] Broch Johnsen | ABS: A core language for abstract behavioral specification[END_REF]. This work describes recovery mechanisms for failed get operations on a future. However, the language does not support the specification of real-time requirements, i.e., no deadlines for asynchronous messages are considered and no timeouts on futures.
Further, when a get operation on a future fails, [START_REF] Broch Johnsen | ABS: A core language for abstract behavioral specification[END_REF] does not provide any context or information about the exception or the cause for the failure. Alternatively, [START_REF] Broch Johnsen | ABS: A core language for abstract behavioral specification[END_REF] describes a way to "compensate" for a failed get operation on future. In [START_REF] Bjørk | User-defined schedulers for real-time concurrent objects[END_REF], a real-time extension of ABS with scheduling policies to model distributed systems is introduced. In contrast to Crisp, Real-Time ABS is an executable modeling language which supports the explicit specification of the progress of time by means of duration statements for the analysis of real-time requirements. The language does not support however asynchronous messages with deadlines and futures with timeouts. Two successful examples of actor-based programming languages are Scala and Erlang. Scala [START_REF] Haller | Scala actors: Unifying thread-based and eventbased programming[END_REF][START_REF]Coordination Models and Languages, volume 4467, chapter Actors That Unify Threads and Events[END_REF] is a hybrid object-oriented and functional programming language inspired by Java. Through the event-based model, Scala also provides the notion of continuations. Scala further provides mechanisms for scheduling of tasks similar to those provided by concurrent Java: it does not provide a direct and customizable platform to manage and schedule messages received by an individual actor. Additionally, Akka [START_REF] Typesafe | [END_REF] extends Scala's actor programming model and as such provides a direct integration with both Java and Scala. Erlang [START_REF] Armstrong | Programming Erlang: Software for a Concurrent World[END_REF] is a dynamically typed functional language that was developed at Ericsson Computer Science Laboratory with telecommunication purposes [START_REF] Corrêa | Actors in a new "highly parallel" world[END_REF]. Recent developments in the deployment of Erlang support the assignment of a scheduler to each processor [START_REF] Lundin | Inside the Erlang VM, focusing on SMP[END_REF] (instead of one global scheduler for the entire application) but it does not, for example, support application-level scheduling policies. In general, none these languages provide a formally defined real-time extension which integrates the above features. There are well-known efforts in Java to bring in the functionality of asynchronous message passing onto multicore including Killim [START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF], Jetlang [START_REF] Rettig | Jetlang Library[END_REF], Ac-torFoundry [START_REF] Rajesh | Actor frameworks for the JVM platform: a comparative analysis[END_REF], and SALSA [START_REF] Varela | Programming dynamically reconfigurable open systems with SALSA[END_REF]. In [START_REF] Rajesh | Actor frameworks for the JVM platform: a comparative analysis[END_REF], the authors present a comparative analysis of actor-based frameworks for JVM platform. Most of these frameworks support futures with timeouts but do not provide asynchronous messages with deadlines, or a general mechanism for handling exceptions raised by missed deadlines. Further, pertaining to the domain of priority scheduling of asynchronous messages, these efforts in general provide a predetermined approach or a limited control over message priority scheduling. 
As another example, in [START_REF] Maia | Combining rtsj with fork/join: a priority-based model[END_REF] the use of Java Fork/Join is described to optimize multicore applications. This work is also based on a fixed-priority model. Additionally, from the embedded hardware-software research domain, Ptolemy [START_REF] Eker | Taming heterogeneity -the ptolemy approach[END_REF][START_REF] Lee | Actor-oriented design of embedded hardware and software systems[END_REF] is an actor-oriented open architecture and platform that is used to design, model and simulate embedded software. Their approach is hardware-software co-design. It provides a platform framework along with a set of tools. In general, existing high-level programming languages provide the programmer with little real-time control over scheduling. The state of the art allows specifying priorities for threads or processes that are used by the operating system, e.g., the Real-Time Specification for Java (RTSJ [START_REF] Jcp | RTSJ v1 JSR 1[END_REF][START_REF]RTSJ v1.1 JSR 282[END_REF]) and Erlang. Specifically for RTSJ, [START_REF] Zerzelidis | A framework for flexible scheduling in the RTSJ[END_REF] extensively introduces and discusses a framework for application-level scheduling in RTSJ. It presents a flexible framework to allow scheduling policies to be used in RTSJ. However, [START_REF] Zerzelidis | A framework for flexible scheduling in the RTSJ[END_REF] addresses the problem mainly in the context of the standard multithreading approach to concurrency, which in general does not provide the most suitable approach to distributed applications. In contrast, in this paper we have shown that an actor-based programming language provides a suitable formal basis for fully integrated real-time control in distributed applications.
Conclusion and future work
In this paper, we presented both a formal semantics and an implementation of a real-time actor-based programming language. We presented how asynchronous messages with deadlines can be used to control application-level scheduling with higher abstractions. We illustrated the language usage with a real-world case study from SDL Fredhopper, along with a discussion of the implementation. Currently we are investigating further optimization of the implementation of Crisp and the formal verification of real-time properties of Crisp applications using schedulability analysis [START_REF] Fersman | Schedulability analysis using two clocks[END_REF].
where φ' results from φ by extending its domain with a new future object o such that φ'(o).val = ⊥ and φ'(o).aborted = false; τ(this) = val(e_0)(σ), τ(x) = val(e)(σ) for every formal parameter x and corresponding actual parameter e, τ(deadline) = σ(time) + val(e_1)(σ), and τ(myfuture) = o. Time. The following transition uniformly updates the local clocks (represented by the instance variable time) of the actors: (Σ, φ) → (Σ', φ). Listing 4: Data Processor class
34,230
[ "1003810", "1003770", "1003811" ]
[ "121723", "488223", "20495", "121723" ]
01486032
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01486032/file/978-3-642-38493-6_2_Chapter.pdf
Andrea Cerone email: [email protected] Matthew Hennessy email: [email protected] Massimo Merro email: [email protected] Modelling MAC-layer communications in wireless systems (Extended abstract) We present a timed broadcast process calculus for wireless networks at the MAC-sublayer where time-dependent communications are exposed to collisions. We define a reduction semantics for our calculus which leads to a contextual equivalence for comparing the external behaviour of wireless networks. Further, we construct an extensional LTS (labelled transition system) which models the activities of stations that can be directly observed by the external environment. Standard bisimulations in this novel LTS provide a sound proof method for proving that two systems are contextually equivalent. In addition, the main contribution of the paper is that our proof technique is also complete for a large class of systems. Introduction Wireless networks are becoming increasingly pervasive with applications across many domains, [START_REF] Rappaport | Wireless communications -principles and practice[END_REF][START_REF] Akyildiz | Wireless sensor networks: a survey[END_REF]. They are also becoming increasingly complex, with their behaviour depending on ever more sophisticated protocols. There are different levels of abstraction at which these can be defined and implemented, from the very basic level in which the communication primitives consist of sending and receiving electromagnetic signals, to the higher level where the basic primitives allow the set up of connections and exchange of data between two nodes in a wireless system [START_REF] Tanenbaum | Computer Networks[END_REF]. Assuring the correctness of the behaviour of a wireless network has always been difficult. Several approaches have been proposed to address this issue for networks described at a high level [START_REF] Nanz | Static analysis of routing protocols for ad-hoc networks[END_REF][START_REF] Merro | An Observational Theory for Mobile Ad Hoc Networks (full paper)[END_REF][START_REF] Godskesen | A Calculus for Mobile Ad Hoc Networks[END_REF][START_REF] Ghassemi | Equational reasoning on mobile ad hoc networks[END_REF][START_REF] Singh | A process calculus for mobile ad hoc networks[END_REF][START_REF] Kouzapas | A process calculus for dynamic networks[END_REF][START_REF] Borgström | Broadcast psi-calculi with an application to wireless protocols[END_REF][START_REF] Cerone | Modelling probabilistic wireless networks (extended abstract)[END_REF]; these typically allow the formal description of protocols at the network layer of the TCP/IP reference model [START_REF] Tanenbaum | Computer Networks[END_REF]. However there are few frameworks in the literature which consider networks described at the MAC-Sublayer of the TCP/IP reference model [START_REF] Lanese | An operational semantics for a calculus for wireless systems[END_REF][START_REF] Merro | A timed calculus for wireless systems[END_REF]. This is the topic of the current paper. We propose a process calculus for describing and verifying wireless networks at the MAC-Sublayer of the TCP/IP reference model. This calculus, called the Calculus of Collision-prone Communicating Processes (CCCP), has been largely inspired by TCWS [START_REF] Merro | A timed calculus for wireless systems[END_REF]; in particular CCCP inherits its communication features but simplifies considerably the syntax, the reduction semantics, the notion of observation, and as we will see the behavioural theory. 
In CCCP a wireless system is considered to be a collection of wireless stations which transmit and receive messages. The transmission of messages is broadcast, and it is time-consuming; the transmission of a message v can require several time slots (or instants). In addition, wireless stations in our calculus are sensitive to collisions; if two different stations are transmitting a value over a channel c at the same time slot a collision occurs, and the content of the messages originally being transmitted is lost. More specifically, in CCCP a state of a wireless network (or simply network, or system) will be described by a configuration of the form Γ W where W describes the code running at individual wireless stations and Γ represents the communication state of channels. At any given point of time there will be exposed communication channels, that is channels containing messages (or values) in transmission; this information will be recorded in Γ. Such systems evolve by the broadcast of messages between stations, the passage of time, or some other internal activity, such as the occurrence of collisions and their consequences. One of the topics of the paper is to capture formally these complex evolutions, by defining a reduction semantics, whose judgments take the form Γ 1 W 1 Γ 2 W 2 . The reduction semantics satisfies some desirable properties such as time determinism, patience and maximal progress [START_REF] Nicollin | The algebra of timed processes, atp: Theory and application[END_REF][START_REF] Hennessy | A process algebra for timed systems[END_REF][START_REF] Yi | A Calculus of Real Time Systems[END_REF]. However the main aim of the paper is to develop a behavioural theory of wireless networks. To this end we need a formal notion of when two such systems are indistinguishable from the point of view of users. Having a reduction semantics it is now straightforward to adapt a standard notion of contextual equivalence: Γ 1 W 1 Γ 2 W 2 . Intuitively this means that either system, Γ 1 W 1 or Γ 2 W 2 , can be replaced by the other in a larger system without changing the observable behaviour of the overall system. Formally we use the approach of [START_REF] Honda | On reduction-based process semantics[END_REF], often called reduction barbed congruence; the only parameter in the definition is the choice of primitive observation or barb. Our choice is natural for wireless systems: the ability to transmit on an idle channel, that is a channel with no active transmissions. As explained in papers such as [START_REF] Rathke | Deconstructing behavioural theories of mobility[END_REF][START_REF] Hennessy | A distributed Pi-calculus[END_REF], contextual equivalences are determined by so-called extensional actions, that is the set of minimal observable interactions which a system can have with its external environment. For CCCP determining these actions is non-trivial. Although values can be transmitted and received on channels, the presence of collisions means that these are not necessarily observable. In fact the important point is not the transmission of a value, but its successful delivery. Also, although the basic notion of observation on systems does not involve the recording of the passage of time, this has to be taken into account extensionally in order to gain a proper extensional account of systems. 
The extensional semantics determines an LTS (labelled transition system) over configurations, which in turn gives rise to the standard notion of (weak) bisimulation equivalence between configurations. This gives a powerful co-inductive proof technique: to show that two systems are behaviourally equivalent it is sufficient to exhibit a witness bisimulation which contains them. One result of this paper is that weak bisimulation in the extensional LTS is sound with respect to the touchstone contextual equivalence: if two systems are related by some bisimulation in the extensional LTS then they are contextually equivalent. However, the main contribution is that completeness holds for a large class of networks, called well-formed. If two such networks are contextually equivalent then there is some bisimulation, based on our novel extensional actions, which contains them. In [START_REF] Merro | A timed calculus for wireless systems[END_REF], a sound but not complete bisimulation based proof method is developed for (a different form of) reduction barbed congruence. Here, by simplifying the calculus and isolating novel extensional actions we obtain both soundness and completeness. The rest of the paper is organised as follows: in Section 2 we define the syntax which we will use for modelling wireless networks. The reduction semantics is given in Section 3 from which we develop in the same section our notion of reduction barbed congruence. In Section 4 we define the extensional semantics of networks, and the (weak) bisimulation equivalence it induces. In Section 5 we state the main results of the paper, namely that bisimulation is sound with respect to barbed congruence and, for a large class of systems, it is also complete. Detailed proofs of the results can be found in the associated technical report [START_REF] Cerone | Modelling mac-layer communications in wireless systems[END_REF]. The latter also contains an initial case study showing the usefulness of our proof technique. Two particular instances of networks are compared; the first forwards two messages to the external environment using a TDMA modulation technique, the second performs the same task by routing the messages along different stations. The calculus Formally we assume a set of channels Ch, ranged over by c, d, • • • , and a set of values Val, which contains a set of data-variables, ranged over by x, y, • • • and a special value err; this value will be used to denote faulty transmissions. The set of closed values, that is those not containing occurrences of variables, are ranged over by v, w, • • • . We also assume that every closed value v ∈ Val has an associated strictly positive integer δ v , which denotes the number of time slots needed by a wireless station to transmit v. A channel environment is a mapping Γ : Ch → N × Val. In a configuration Γ W where Γ(c) = (n, v) for some channel c, a wireless station is currently transmitting the value v for the next n time slots. We will use some suggestive notation for channel environments: Γ t c : n in place of Γ(c) = (n, w) for some w, Γ v c : w in place of Γ(c) = (n, w) for some n. If Γ t c : 0 we say that channel c is idle in Γ, and we denote it with Γ c : idle. Otherwise we say that c is exposed in Γ, denoted by Γ c : exp. The channel environment Γ such that Γ c : idle for every channel c is said to be stable. The syntax for system terms W is given in Table 1, where P ranges over code for programming individual stations, which is also explained in Table 1. 
A system term W is a collection of individual threads running in parallel, with possibly some channels restricted. Each thread may be either an inactive piece of code P or an active code of the form c[x].P. This latter term represents a wireless station which is receiving a value from the channel c; when the value is eventually received the variable x will be replaced with the received value in the code P. The restriction operator νc : (n, v).W is nonstandard, for a restricted channel has a positive integer and a closed value associated with it; roughly speaking, the term νc : (n, v).W corresponds to the term W where The syntax for station code is based on standard process calculus constructs. The main constructs are time-dependent reception from a channel c?(x).P Q, explicit time delay σ.P, and broadcast along a channel c ! u .P. Here u denotes either a data-variable or closed value v ∈ Val. Of the remaining standard constructs the most notable is matching, [b]P, Q which branches to P or Q, depending on the value of the Boolean expression b. We leave the language of Boolean expressions unspecified, other than saying that it should contain equality tests for values, u 1 = u 2 . More importantly, it should also contain the expression exp(c) for checking if in the current configuration the channel c is exposed, that is it is being used for transmission. In the construct fix X.P occurrences of the recursion variable X in P are bound; similarly in the terms c?(x).P Q and c[x].P the data-variable x is bound in P. This gives rise to the standard notions of free and bound variables, α-conversion and capture-avoiding substitution; we assume that all occurrences of variables in system terms are bound and we identify systems up to α-conversion. Moreover we assume that all occurrences of recursion variables are guarded; they must occur within either a broadcast, input or time delay prefix, or within an execution branch of a matching construct. We will also omit trailing occurrences of nil, and write c?(x).P in place of c?(x).P nil. Our notion of wireless networks is captured by pairs of the form Γ W, which represent the system term W running in the channel environment Γ. Such pairs are called configurations, and are ranged over by the metavariable C. (Snd) Γ c ! v .P c!v ----→ σ δv .P (Rcv) Γ c : idle Γ c?(x).P Q c?v ----→ c[x].P (RcvIgn) ¬rcv(W, c) Γ W c?v ----→ W (Sync) Γ W 1 c!v ----→ W 1 Γ W 2 c?v ----→ W 2 Γ W 1 | W 2 c!v ----→ W 1 | W 2 (RcvPar) Γ W 1 c?v ----→ W 1 Γ W 2 c?v ----→ W 2 Γ W 1 | W 2 c?v ----→ W 1 | W 2 Reduction semantics and contextual equivalence The reduction semantics is defined incrementally. We first define the evolution of system terms with respect to a channel environment Γ via a set of SOS rules whose judg- ments take the form Γ W 1 λ ---→ W 2 . Here λ can take the form c!v denoting a broadcast of value v along channel c, c?v denoting an input of value v being broadcast along channel c, τ denoting an internal activity, or σ, denoting the passage of time. However these actions will also have an effect on the channel environment, which we first describe, using a functional upd λ (•) : Env → Env, where Env is the set of channel environments. The channel environment upd λ (Γ) describes the update of the channel environment Γ when the action λ is performed, is defined as follows: for λ = σ we let upd σ (Γ) t c : (n -1) whenever Γ t c : n, upd σ (Γ) v c : w whenever Γ v c : w. 
For λ = c!v we let upd c!v (Γ) be the channel environment such that upd c!v (Γ) t c :        δ v if Γ c : idle max(δ v , k) if Γ c : exp upd c!v (Γ) v c :        v if Γ c : idle err if Γ c : exp where Γ t c : k. Finally, we let upd c?v (Γ) = upd c!v (Γ) and upd τ (Γ) = Γ. Let us describe the intuitive meaning of this definition. When time passes, the time of exposure of each channel decreases by one time unit 3 . The predicates upd c!v (Γ) and upd c?v (Γ) model how collisions are handled in our calculus. When a station begins broadcasting a value v over a channel c this channel becomes exposed for the amount of time required to transmit v, that is δ v . If the channel is not free a collision happens. As a consequence, the value that will be received by a receiving station, when all transmissions over channel c terminate, is the error value err, and the exposure time is adjusted accordingly. For the sake of clarity, the inference rules for the evolution of system terms, Γ W 1 λ ---→ W 2 , are split in four tables, each one focusing on a particular form of activity. Table 2 contains the rules governing transmission. Rule (Snd) models a non-blocking broadcast of message v along channel c. A transmission can fire at any time, independently on the state of the network; the notation σ δ v represents the time delay operator σ iterated δ v times. So when the process c ! v .P broadcasts it has to wait δ v time units before the residual P can continue. On the other hand, reception of a message by a time-guarded listener c?(x).P Q depends on the state of the channel environment. If the channel c is free then rule (Rcv) indicates that reception can start and the listener evolves into the active receiver c[x].P. The rule (RcvIgn) says that if a system can not receive on the channel c then any transmission along it is ignored. Intuitively, the predicate rcv(W, c) means that W contains among its parallel components at least one non-guarded receiver of the form c?(x).P Q which is actively awaiting a message. Formally, the predicate rcv(W, c) is the least predicate such that rcv( c?(x).P Q, c) = true and which satisfies the equations rcv(P + Q, c) = rcv(P, c) ∨ rcv(Q, c), rcv(W 1 | W 2 , c) = rcv(W 1 , c) ∨ rcv(W 2 , c) and rcv(νd.W, c) = rcv(W, c) if d c. The remaining two rules in Table 2 (Sync) and (RcvPar) serve to synchronise parallel stations on the same transmission [START_REF] Hennessy | Bisimulations for a calculus of broadcasting systems[END_REF][START_REF] Nicollin | The algebra of timed processes, atp: Theory and application[END_REF][START_REF] Prasad | A calculus of broadcasting systems[END_REF]. Example 1 (Transmission). Let C 0 = Γ 0 W 0 , where Γ 0 c, d : idle and W 0 = c! v 0 | d?(x).nil ( c?(x).Q ) | c?(x).P where δ v 0 = 2. Using rule (Snd) we can infer Γ 0 c! v 0 c!v 0 -----→ σ 2 ; this station starts transmitting the value v 0 along channel c. Rule (RcvIgn) can be used to derive the transition Γ 0 d?(x).nil ( c?(x).Q ) c?v 0 -----→ d?(x).nil ( c?(x).Q ), in which the broadcast of value v 0 along channel c is ignored. On the other hand, Rule (RcvIgn) cannot be applied to the configuration Γ 0 c?(x).P , since this station is waiting to receive a value on channel c; however we can derive the transition Γ 0 c?(x).P c?v 0 -----→ c[x].P using Rule (Rcv). We can put the three transitions derived above together using rule (Sync), leading to the transition C 0 c!v ----→ W 1 , where W 1 = σ 2 | d?(x).nil ( c?(x).Q ) | c[x].P. 
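As a concrete rendering of the update function just defined, here is a minimal sketch in plain Python (an informal illustration, not part of the calculus) of upd_σ and upd_{c!v} over a channel environment represented as a dictionary mapping channels to pairs (exposure time, value); the names upd_sigma, upd_broadcast and ERR are assumptions standing for the formal notation above.

```python
# Informal sketch of the channel-environment updates described above.
# A channel environment maps each channel c to a pair (n, v): the remaining
# exposure time and the value currently in transmission; n == 0 means idle.

ERR = "err"  # stands for the error value err delivered after a collision

def upd_sigma(gamma):
    """Passage of one time unit: every exposure time decreases (0 - 1 is 0)."""
    return {c: (max(n - 1, 0), v) for c, (n, v) in gamma.items()}

def upd_broadcast(gamma, c, v, delta_v):
    """Broadcast of v on c for delta_v time slots, with collision handling."""
    new = dict(gamma)
    n, _ = gamma[c]
    if n == 0:
        # channel idle: the transmission of v starts and lasts delta_v slots
        new[c] = (delta_v, v)
    else:
        # channel exposed: a collision occurs, err will eventually be delivered
        new[c] = (max(delta_v, n), ERR)
    return new

# Example 1 revisited: broadcasting v0 (delta = 2) on an idle channel c.
gamma0 = {"c": (0, None), "d": (0, None)}
gamma1 = upd_broadcast(gamma0, "c", "v0", 2)   # {'c': (2, 'v0'), 'd': (0, None)}
gamma2 = upd_sigma(gamma1)                     # {'c': (1, 'v0'), 'd': (0, None)}
```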
The transitions for modelling the passage of time, Γ W σ ---→ W , are given in Table 3. In the rules (ActRcv) and (EndRcv) we see that the active receiver c[x].P continues to wait for the transmitted value to make its way through the network; when the allocated transmission time elapses the value is then delivered and the receiver evolves to { w / x }P. The rule (SumTime) is necessary to ensure that the passage of time does not resolve non-deterministic choices. Finally (Timeout) implements the idea that c?(x).P Q is a time-guarded receptor; when time passes it evolves into the alternative Q. However this only happens if the channel c is not exposed. What happens if it is exposed is explained later in Table 4. Finally, Rule (TimePar) models how σ-actions are derived for collections of threads. Example 2 (Passage of Time ). Let C 1 = Γ 1 W 1 , where Γ 1 (c) = (2, v 0 ), Γ 1 d : idle and W 1 is the system term derived in Example 1. We show how a σ-action can be derived for this configuration. First note that Γ 1 σ 2 σ ---→ σ; this transition can be derived using Rule (Sleep). Since d is idle in Γ 1 , we can apply Rule (TimeOut) to infer the transition Γ 1 d?(x).nil ( c?(x).Q ) σ ---→ c?(x).Q ; time passed before a value could be broadcast along channel d, causing a timeout in the Table 3 Intensional semantics: timed transitions Table 4 is devoted to internal transitions Γ W τ ---→ W . Let us first explain rule (RcvLate). Intuitively the process c?(x).P Q is ready to start receiving a value on an exposed channel c. This means that a transmission is already taking place. Since the process has therefore missed the start of the transmission it will receive an error value. Thus Rule (RcvLate) reflects the fact that in wireless systems a broadcast value cannot be correctly received by a station in the case of a misalignment between the sender and the receiver. (TimeNil) Γ nil σ ---→ nil (Sleep) Γ σ.P σ ---→ P (ActRcv) Γ t c : n, n > 1 Γ c[x].P σ ---→ c[x].P (EndRcv) Γ t c : 1, Γ v c : w Γ c[x].P σ ---→ { w / x }P (SumTime) Γ P σ ---→ P Γ Q σ ---→ Q Γ P + Q σ ---→ Γ P + Q (Timeout) Γ c : idle Γ c?(x).P Q σ ---→ Q (TimePar) Γ W 1 σ ---→ W 1 Γ W 2 σ ---→ W 2 Γ W 1 | W 2 σ ---→ W 1 | W 2 Table 4 Intensional semantics: internal activity (RcvLate) Γ c : exp Γ c?(x).P Q τ --→ c[x].{ err / x }P (Tau) Γ τ.P τ --→ P (Then) b Γ = true Γ [b]P, Q τ --→ σ.P (Else) b Γ = false Γ [b]P, Q τ --→ σ.Q The remaining rules are straightforward except that we use a channel environment dependent evaluation function for Boolean expressions b Γ , because of the presence of the exposure predicate exp(c) in the Boolean language. However in wireless systems it is not possible to both listen and transmit within the same time unit, as communication is half-duplex, [START_REF] Rappaport | Wireless communications -principles and practice[END_REF]. So in our intensional semantics, in the rules (Then) and (Else), the execution of both branches is delayed of one time unit; this is a slight simplification, Table 5 Intensional semantics: -structural rules (TauPar) Γ W 1 τ --→ W 1 Γ W 1 | W 2 τ --→ W 1 | W 2 (Rec) {fix X.P/X}P λ ---→ W Γ fix X.P λ ---→ W (Sum) Γ P λ ---→ W λ ∈ {τ, c!v} Γ P + Q λ ---→ W (SumRcv) Γ P c?v ----→ W rcv(P, c) Γ c : idle Γ P + Q c?v ----→ W (ResI) Γ[c → (n, v)] W c!v ----→ W Γ νc:(n, v).W τ --→ νc:upd c!v (Γ)(c).W (ResV) Γ[c → (n, v)] W λ ---→ W , c λ Γ νc:(n, v).W λ ---→ νc:(n, v).W as evaluation is delayed even if the Boolean expression does not contain an exposure predicate. Example 3. 
Let Γ 2 be a channel environment such that Γ 2 (c) = (1, v), and consider the configuration C 2 = Γ 2 W 2 , where W 2 has been defined in Example 2. Note that this configuration contains an active receiver along the exposed channel c. We can think of such a receiver as a process which missed the synchronisation with a broadcast which has been previously performed along channel c; as a consequence this process is doomed to receive an error value. This situation is modelled by Rule (RcvLate), which allows us to infer the transition Γ 2 c?(x).Q The final set of rules, in Table 5, are structural. Here we assume that Rules (Sum), (SumRcv) and (SumTime) have a symmetric counterpart. Rules (ResI) and (ResV) show how restricted channels are handled. Intuitively moves from the configuration Γ νc:(n, v).W are inherited from the configuration Γ[c → (n, v)] W; here the channel environment Γ[c → (n, v)] is the same as Γ except that c has associated with it (temporarily) the information (n, v). However if this move mentions the restricted channel c then the inherited move is rendered as an internal action τ, (ResI). Moreover the information associated with the restricted channel in the residual is updated, using the function upd c!v (•) previously defined. We are now ready to define the reduction semantics; formally, we let Γ 1 W 1 Γ 2 W 2 whenever Γ 1 W 1 λ ---→ W 2 and Γ 2 = upd λ (Γ 1 ) for some λ = τ, σ, c!v. Note that input actions cannot be used to infer reductions for computations; following the approach of [START_REF] Milner | Communicating and Mobile Systems: The π-calculus[END_REF][START_REF] Sangiorgi | The Pi-Calculus -A Theory of Mobile Processes[END_REF] reductions are defined to model only the internal of a system. In order to distinguish between timed and untimed reductions in Let C i = Γ i W i , i = 0, • • • , 2 be as defined in these examples. Note that Γ 1 = upd c!v 0 (Γ 0 ) and Γ 2 = upd σ (Γ 1 ). We have already shown that C 0 c!v 0 -----→ W 1 ; this transition, together with the equality Γ 1 = upd c!v 0 (Γ 0 ), can be used to infer the reduction Γ 1 W 1 Γ 2 W 2 we use Γ 1 W 1 σ Γ 2 W 2 if Γ 2 = upd σ (W 1 ) and Γ 1 W 1 i Γ 2 W 2 if Γ 2 = upd λ (Γ 1 ) for some λ = τ, c!v. C 0 i C 1 . A similar argument shows that C 1 σ C 2 . Also if we let C 3 denote Γ 2 W 3 we also have C 2 i C 3 since Γ 2 = upd τ (Γ 2 ). W 1 = σ | c! w 1 | c[x].P. Let Γ 1 := upd c!w 0 (Γ), that is Γ 1 (c) = (1, w 0 ) . This equality and the transition above lead to the instantaneous reduction C i C 1 = Γ 1 W 1 . For C 1 we can use the rules (RcvIgn), (Snd) and (Sync) to derive the transition We now define a contextual equivalence between configurations, following the approach of [START_REF] Honda | On reduction-based process semantics[END_REF]. This relies on two crucial concepts: a notion of reduction, already been defined, and a notion of minimal observable activity, called a barb. C 1 c!w 1 -----→ W 2 , where W 2 = σ | σ | c[x].P. While in other process algebras the basic observable activity is chosen to be an output on a given channel [START_REF] Sangiorgi | The Pi-Calculus -A Theory of Mobile Processes[END_REF][START_REF] Hennessy | A distributed Pi-calculus[END_REF], for our calculus it is more appropriate to rely on the exposure state of a channel: because of possible collisions transmitted values may never be received. Formally, we say that a configuration Γ W has a barb on channel c, written Γ W ↓ c , whenever Γ c : exp. 
A configuration Γ W has a weak barb on c, denoted by Γ W ⇓ c , if Γ W * Γ W for some Γ W such that Γ W ↓ c . As we will see, it turns out that using this notion of barb we can observe the content of a message being broadcast only at the end of its transmission. This is in line with the standard theory of wireless networks, in which it is stated that collisions can be observed only at reception time [START_REF] Tanenbaum | Computer Networks[END_REF][START_REF] Rappaport | Wireless communications -principles and practice[END_REF]. Definition 1. Let R be a relation over configurations. (1) R is said to be barb preserving if Γ 1 W 1 ⇓ c implies Γ 2 W 2 ⇓ c , whenever (Γ 1 W 1 ) R (Γ 2 W 2 ). (2) It is reduction-closed if (Γ 1 W 1 ) R (Γ 2 W 2 ) and Γ 1 W 1 Γ 1 W 1 imply there is some Γ 2 W 2 such that Γ 2 W 2 * Γ 2 W 2 and (Γ 1 W 1 ) R (Γ 2 W 2 ). Table 6 Extensional actions (Input) Γ W c?v ----→ W Γ W c?v -→ upd c?v (Γ) W (Time) Γ W σ ---→ W Γ W σ -→ upd σ (Γ) W (Shh) Γ W c!v ----→ W Γ W τ -→ upd c!v (Γ) W (TauExt) Γ W τ --→ W Γ W τ -→ Γ W (Deliver) Γ(c) = (1, v) Γ W σ ---→ W Γ W γ(c,v) -→ upd σ (Γ) W (Idle) Γ c : idle Γ W ι(c) -→ Γ W (3) It is contextual if Γ 1 W 1 R Γ 2 W 2 , implies Γ 1 (W 1 | W) R Γ 2 (W 2 | W) for all processes W. Reduction barbed congruence, written , is the largest symmetric relation over configurations which is barb preserving, reduction-closed and contextual. Example 6. We first give some examples of configurations which are not barbed congruent; here we assume that Γ is the stable environment. -Γ c! v 0 Γ c! v 1 ; let T = c?(x).[x = v 0 ]d! ok nil, , where d c and ok is an arbitrary value. It is easy to see that Γ c! v 0 | T ⇓ d , whereas Γ c! v 1 | T ⇓ d . -Γ c! v Γ σ.c! v ; let T = [exp(c)]d! ok , nil. In this case we have that Γ c! v | T ⇓ d , while Γ σ.c! v | T ⇓ d . On the other hand, consider the configurations Γ c! v 0 | c! v 1 and Γ c! err , where δ v 0 = δ v 1 and for the sake of convenience we assume that δ err = δ v 0 . In both cases a communication along channel c starts, and in both cases the value that will be eventually delivered to some receiving station is err, independently of the behaviour of the external environment. This gives us the intuition that these two configurations are barbed congruent. Later in the paper we will develop the tools that will allow us to prove this statement formally. Extensional Semantics In this section we give a co-inductive characterisation of the contextual equivalence between configurations, using a standard bisimulation equivalence over an extensional LTS, with configurations as nodes, but with a special collection of extensional actions; these are defined in Table 6. Rule (Input) simply states that input actions are observable, as is the passage of time, by Rule (Time). Rule (TauExt) propagates τ-intensional actions to the extensional semantics. Rule (Shh) states that broadcasts are always treated as internal activities in the extensional semantics. This choice reflects the intuition that the content of a message being broadcast cannot be detected immediately; in fact, it cannot be detected until the end of the transmission. Rule (Idle) introduces a new label ι(c), parameterized in the channel c, which is not inherited from the intensional semantics. Intuitively this rules states that it is possible to observe whether a channel is exposed. Finally, Rule (Deliver) states that the delivery of a value v along channel c is observable, and it corresponds to a new action whose label is γ(c, v). 
In the following we range over extensional actions by α. Example 7. Consider the configuration Γ c! v , where Γ is the stable channel environment. By an application of Rule (Shh) we have the transition Γ c! v τ -→ Γ σ δ v , with Γ c : exp. Furthermore, Γ c! v ι(c) -→ since channel c is idle in Γ. Notice that Γ σ δ v cannot perform a ι(c) action, and that the extensional semantics gives no information about the value v which has been broadcast. The extensional semantics endows configurations with the structure of an LTS. Weak extensional actions in this LTS are defined as usual, and the formulation of bisimulations is facilitated by the notation C α =⇒ C , which is again standard: for α = τ this denotes C -→ * C while for α τ it is C τ -→ * α -→ τ -→ * C . Definition 2 (Bisimulations). Let R be a symmetric binary relation over configurations. We say that R is a (weak) bisimulation if for every extensional action α, whenever Example 6. Recall that in this example we assumed that Γ is the stable channel environment; further, δ v 0 = δ v 1 = δ err = k for some k > 0. C 1 R C 2 , then C 1 α =⇒ C 1 implies C 2 α =⇒ C 2 for some C 2 satisfying C 1 R C 2 We let ≈ be the the largest bisimulation. Example 8. Let us consider again the configurations Γ W 0 = c! v 0 | c! v 1 , Γ W 1 = c! err of We show that Γ W 0 ≈ Γ W 1 by exhibiting a witness bisimulation S such that Γ W 0 S Γ W 1 . To this end, let us consider the relation S = { (∆ W 0 , ∆ W 1 ) , (∆ σ k | c! v 1 , ∆ σ k ) , (∆ c! v 0 , ∆ σ k ) , (∆ σ j | σ j , ∆ σ j ) | ∆ t c : n, ∆ (c) = (n, err) for some n > 0, j ≤ k} Note that this relation contains an infinite number of pairs of configurations, which differ by the state of channel environments.This is because input actions can affect the channel environment of configurations. It is easy to show that the relation S is a bisimulation which contains the pair (Γ 0 W 0 , Γ 1 W 1 ), therefore Γ W 0 ≈ Γ W 1 . One essential property of weak bisimulation is that it does not relate configurations which differ by the exposure state of some channel: Proposition 2. Suppose Γ 1 W 1 ≈ Γ 2 W 2 . Then for any channel c, Γ 1 c : idle iff Γ 2 c : idle. Full abstraction The aim of this section is to prove that weak bisimilarity in the extensional semantics is a proof technique which is both sound and complete for reduction barbed congruence. Theorem 1 (Soundness). C 1 ≈ C 2 implies C 1 C 2 . Proof. It suffices to prove that bisimilarity is reduction-closed, barb preserving and contextual. Reduction closure follows from the definition of bisimulation equivalence. The preservation of barbs follows directly from Proposition 2. The proof of contextuality on the other hand is quite technical, and is addressed in detail in the associated technical report [START_REF] Cerone | Modelling mac-layer communications in wireless systems[END_REF]. One subtlety lies in the definition of τ-extensional actions, which include broadcasts. While broadcasts along exposed do not affect the external environment, and hence cannot affect the external environment, this is not true for broadcasts performed along idle channels. However, we can take advantage of Proposition 2 to show that these extensional τ-actions preserve the contextuality of bisimilar configurations. To prove completeness, the converse of Theorem 1, we restrict our attention to the subclass of well-formed configurations. 
Informally Γ W is well-formed if the system term W does not contain active receivers along idle channels; a wireless station cannot be receiving a value along a channel if there is no value being transmitted along it. Definition 3 (Well-formedness). The set of well-formed configurations WNets is the least set such that for all processes P (i) Γ P ∈ Wnets, (ii) if Γ c : exp then Γ c[x].P ∈ WNets, (iii) is closed under parallel composition and (iv) if Γ[c → (n, v)] W ∈ WNets then Γ νc : (n, v).W ∈ WNets. By focusing on well-formed configurations we can prove a counterpart of Proposition 2 for our contextual equivalence: This means that, if we restrict our attention to well-formed configurations, we can never reach a configuration which is deadlocked; at the very least time can always proceed. Proposition 3. Let Γ 1 W 1 , Γ 2 W 2 be two well formed configurations such that Γ 1 W 1 Γ 2 W 2 . Theorem 2 (Completeness). On well-formed configurations, reduction barbed congruence implies bisimilarity. The proof relies on showing that for each extensional action α it is possible to exhibit a test T α which determines whether or not a configuration Γ W can perform the action α. The main idea is to equip the test with some fresh channels; the test T α is designed so that a configuration Γ W | T α can reach another one C = Γ W | T , where T is determined uniquely by the barbs of the introduced fresh channel; these are enabled in Γ T , if and only if C can weakly perform the action α. The tests T α are defined by performing a case analysis on the extensional action α: T τ = eureka! ok T σ = σ.(τ.eureka! ok + fail! no ) T γ(c,v) = νd:(0, •).((c[x].([x=v]d! ok , nil) + fail! no ) | | σ 2 .[exp(d)]eureka! ok , nil | σ.halt! ok ) T c?v = (c ! v .eureka! ok + fail! no ) | halt! ok T ι(c) = ([exp(c)]nil, eureka! ok ) + fail! no | halt! ok where eureka, fail, halt are arbitrary distinct channels and ok, no are two values such that δ ok = δ no = 1. For the sake of simplicity, for any action α we define also the tests T α as follows: T τ = T σ = eureka! ok T γ(c,v) = νd:(0, •).(σ.d! ok nil | σ.[exp(d)]eureka! ok , nil | halt! ok ) T c?v = σ δ v .eureka! ok | halt! ok T ι(c) = σ.eureka! ok | halt! ok Proposition 5 (Distinguishing contexts). Let Γ W be a well-formed configuration, and suppose that the channels eureka, halt, fail do not appear free in W, nor they are exposed in Γ. Then for any extensional action α, Γ W α =⇒ Γ W iff Γ W | T α * Γ W | T α . A pleasing property of the tests T α is that they can be identified by the (both strong and weak) barbs that they enable in a computation rooted in the configuration Γ W | T α . Proposition 6 (Uniqueness of successful testing components). Let Γ W be a configuration such that eureka, halt, fail do not appear free in W, nor they are exposed in Γ. Suppose that Γ W | T α * C for some configuration C such that -if α = τ, σ, then C ↓ eureka , C ⇓ eureka , C ⇓ fail , -otherwise, C ↓ eureka , C ↓ halt , C ⇓ eureka , C ⇓ halt , C ⇓ fail . Then C = Γ W | T α for some configuration Γ W . Note the use of the fresh channel halt when testing some of these actions. This is because of a time mismatch between a process performing the action, and the test used to detect it. For example the weak action ι(c) =⇒ does not involve the passage of time but the corresponding test uses a branching construct which needs at least one time step to execute. Requiring a weak barb on halt in effect prevents the passage of time. 
Outline proof of Theorem 2: It is sufficient to show that reduction barbed congruence, , is a bisimulation. As an example suppose Γ 1 W 1 Γ 2 W 2 and Γ 1 W 1 γ(c,v) -→ Γ 1 W 1 . We show how to find a matching move from Γ 2 W 2 . =⇒ Γ 2 W 2 . Now standard process calculi techniques enable us to infer from this that Γ 1 W 1 Γ 2 W 2 . Suppose that Γ 1 W 1 γ(c,v) -→ Γ 1 W 1 , we need to show that Γ 2 W 2 γ(c,v) =⇒ Γ 2 W 2 for some Γ 2 W 2 such that Γ 1 W 1 Γ 2 W 2 . By Proposition 5 we know that Γ 1 W 1 | T γ(c,v) * Γ 1 W 1 | T α .By the hypothesis it follows that Γ 1 W 1 | T γ(c,v) Γ 2 W 2 | T γ(c,v) , therefore Γ 2 W 2 | T γ(c,v) * C 2 for some C 2 Γ 1 W 1 | T γ(c,v) . Let C 1 = Γ 1 W 1 | T γ(c,v) . It is easy to check that C 1 ↓ eureka , C 1 ↓ halt , C 1 ⇓ Conclusions and Related work In this paper we have given a behavioural theory of wireless systems at the MAC level. We believe that our reduction semantics, given in Section 2, captures much of the subtlety of intensional MAC-level behaviour of wireless systems. We also believe that our behavioural theory is the only one for wireless networks at the MAC-Layer which is both sound and complete. The only other calculus which considers such networks is TCWS from [START_REF] Merro | A timed calculus for wireless systems[END_REF] which contains a sound theory; as we have already stated we view CCCP as a simplification of this TCWS, and by using a more refined notion of extensional action we also obtain completeness. We are aware of only two other papers modelling networks at the MAC-Sublayer level of abstraction, these are [START_REF] Lanese | An operational semantics for a calculus for wireless systems[END_REF][START_REF] Wang | A timed calculus for mobile ad hoc networks[END_REF]. They present a calculus CWS which views a network as a collection of nodes distributed over a metric space. [START_REF] Lanese | An operational semantics for a calculus for wireless systems[END_REF] contains a reduction and an intensional semantics and the main result is their consistency. In [START_REF] Wang | A timed calculus for mobile ad hoc networks[END_REF], time and node mobility is added. On the other hand there are numerous papers which consider the problem of modelling networks at a higher level. Here we briefly consider a selection; for a more thorough review see [START_REF] Cerone | Modelling mac-layer communications in wireless systems[END_REF]. Nanz and Hankin [START_REF] Nanz | Static analysis of routing protocols for ad-hoc networks[END_REF] have introduced an untimed calculus for Mobile Wireless Networks (CBS ), relying on a graph representation of node localities. The main goal of that paper is to present a framework for specification and security analysis of communication protocols for mobile wireless networks. Merro [START_REF] Merro | An Observational Theory for Mobile Ad Hoc Networks (full paper)[END_REF] has proposed an untimed process calculus for mobile ad-hoc networks with a labelled characterisation of reduction barbed congruence, while [START_REF] Godskesen | A Calculus for Mobile Ad Hoc Networks[END_REF] contains a calculus called CMAN, also with mobile ad-hoc networks in mind. Singh, Ramakrishnan and Smolka [START_REF] Singh | A process calculus for mobile ad hoc networks[END_REF] have proposed the ω-calculus, a conservative extension of the π-calculus. A key feature of the ω-calculus is the separation of a node's communication and computational behaviour from the description of its physical transmission range. 
Another extension of the π-calculus, which has been used for modelling the LUNAR ad-hoc routing protocol, may be found in [START_REF] Borgström | Broadcast psi-calculi with an application to wireless protocols[END_REF]. In [START_REF] Cerone | Modelling probabilistic wireless networks (extended abstract)[END_REF] a calculus is proposed for describing the probabilistic behaviour of wireless networks. There is an explicit representation of the underlying network, in terms of a connectivity graph. However this connectivity graph is static. In contrast Ghassemi et al. [START_REF] Ghassemi | Equational reasoning on mobile ad hoc networks[END_REF] have proposed a process algebra called RBPT where topological changes to the connectivity graph are implicitly modelled in the operational semantics rather than in the syntax. Kouzapas and Philippou [START_REF] Kouzapas | A process calculus for dynamic networks[END_REF] have developed a theory of confluence for a calculus of dynamic networks and they use their machinery to verify a leader-election algorithm for mobile ad hoc networks. station waiting to receive a value along d. Finally, since Γ 1 n c : 2, we can use Rule (ActRcv) to derive Γ 1 c[x].P σ ---→ c[x].P. At this point we can use Rule (TimePar) twice to infer a σ-action performed by C 1 . This leads to the transition C 1 σ ---→ W 2 , where W 2 = σ | c?(x).Q | c[x].P. τ ---→ c[x].{err/x}Q. As we will see, Rule (TauPar), introduced in Table 5, ensures that τ-actions are propagated to the external environment. This means that the transition derived above allows us to infer the transition C 2 τ ---→ W 3 , where W 3 = σ | c[x].{err/x}Q | c[x].P. Proposition 1 (Example 4 . 14 Maximal Progress and Time Determinism). Suppose C σ C 1 ; then C σ C 2 implies C 1 = C 2 , and C i C 3 for any C 3 . We now show how the transitions we have inferred in the Examples 1-3 can be combined to derive a computation fragment for the configuration C 0 considered in Example 1. Example 5 ( 5 Collisions). Consider the configuration C = Γ W, where Γ c : idle and W = c! w 0 | c! w 1 | c?(x).P ; here we assume δ w 0 = δ w 1 = 1. Using rules (Snd), (RcvIgn), (Rcv) and (Sync) we can infer the transition Γ W c!w 0 -----→ W 1 , where 1 iC 2 = 2 σ 122 This transition gives rise to the reduction C Γ 2 W 2 , where Γ 2 = upd c!w 1 (Γ 1 ). Note that, since Γ 1 c : exp we obtain that Γ 2 (c) = (1, err). The broadcast along a busy channel caused a collision to occur. Finally, rules (Sleep), (EndRcv) and (TimePar) can be used to infer the transition C ---→ W 3 = nil | nil | {err/x}P. Let Γ 3 := upd σ (Γ ); then the transition above induces the timed reduction C 2 σ C 3 = Γ 3 W 3 , in which an error is received instead of either of the transmitted values w 0 , w 1 . Then for any channel c, Γ 1 c : idle implies Γ 2 c : idle. Proposition 3 does not hold for ill-formed configurations. For example, let Γ 1 c : exp, Γ 1 d : idle and Γ 2 c, d : idle and consider the two configurations C 1 = Γ 1 nil | d[x].P and C 2 = Γ 2 c! v | d[x].P, neither of which are well-formed; nor do they let time pass, C i σ . As a consequence C 1 C 2 . However Proposition 2 implies that they are not bisimilar, since they differ on the exposure state of c. Another essential property of well-formed systems is patience: time can always pass in networks with no instantaneous activities. Proposition 4 (Patience). If C is well-formed and C i , then C σ C for some C . fail and C 1 ⇓ 1 eureka , C 1 ⇓ halt . 
By definition of reduction barbed congruence and Proposition 3 we obtain that C 2 ↓ eureka , C 2 ↓ halt , C 2 ⇓ eureka , C 2 ⇓ halt and C 2 ⇓ fail . Proposition 6 then ensures that C 2 = Γ 2 W 2 | T γ(c,v) for some Γ 2 , W 2 . An application of Proposition 5 leads to Γ 2 W 2 γ(c,v)
Table 1 CCCP: Syntax
W ::= P (station code)
  | c[x].P (active receiver)
  | W 1 | W 2 (parallel composition)
  | νc:(n, v).W (channel restriction)
P, Q ::= c ! u .P (broadcast)
  | c?(x).P Q (receiver with timeout)
  | σ.P (delay)
  | τ.P (internal activity)
  | P + Q (choice)
  | [b]P, Q (matching)
  | X (process variable)
  | nil (termination)
  | fix X.P (recursion)
Channel Environment: Γ : Ch → N × Val
channel c is local to W, and the transmission of value v over channel c will take place for the next n slots of time.
Table 2 Intensional semantics: transmission
For convenience we assume 0 − 1 to be 0. Supported by SFI project SFI 06 IN.1 1898. Author partially supported by the PRIN 2010-2011 national project "Security Horizons"
42,926
[ "1003818", "1003819", "1003820" ]
[ "22205", "22205", "542958" ]
01483419
en
[ "info" ]
2024/03/04 23:41:48
2016
https://theses.hal.science/tel-01483419v2/file/these_A_BOISARD_Olivier_2016.pdf
Michel Paindavoine (Directeur de thèse), Philippe Coussy (Professeur), Christophe Garcia (Rapporteur), Andres Perez-Uribe, Robert M. French, Yann LeCun
Optimization and implementation of bio-inspired feature extraction frameworks for visual object recognition
Industry has growing needs for so-called "intelligent systems", capable not only of acquiring data, but also of analysing it and making decisions accordingly. Such systems are particularly useful for video-surveillance, in which case alarms must be raised in case of an intrusion. For cost-saving and power-consumption reasons, it is better to perform that process as close to the sensor as possible. To address that issue, a promising approach is to use bio-inspired frameworks, which consist in applying computational biology models to industrial applications. The work carried out during this thesis consisted in selecting bio-inspired feature extraction frameworks and optimizing them, with the aim of implementing them on a dedicated hardware platform for computer vision applications. First, we propose a generic algorithm, which may be used in several use case scenarios, having an acceptable complexity and a low memory footprint. Then, we propose optimizations for a more global framework, based on precision degradation in computations, hence easing its implementation on embedded systems. Results suggest that while the framework we developed may not be as accurate as the state of the art, it is more generic. Furthermore, the optimizations we proposed for the more complex framework are fully compatible with other optimizations from the literature, and provide encouraging perspectives for future developments. Finally, both contributions have a scope that goes beyond the sole frameworks that we studied, and may be used in other, more widely used frameworks as well.
So here I am, after three years spent playing around with artificial neurons. That went fast, and I guess I would have needed twice as long to get everything done. That was a great experience, which allowed me to meet extraordinary people without whom those years wouldn't have been the same. First of all, I wish to thank my mentor Michel Paindavoine for letting me be his student, along with my co-mentors Olivier Brousse and Michel Doussot.
Résumé: Industry has growing needs for so-called intelligent systems, capable of analysing the signals acquired by sensors and making a decision accordingly. Such systems are particularly useful for video-surveillance or quality-control applications. For cost and energy-consumption reasons, it is desirable that the decision be made as close to the sensor as possible. To address this issue, a promising approach is to use so-called bio-inspired methods, which consist in applying computational models from biology or cognitive science to industrial problems. The work carried out during this doctorate consisted in selecting bio-inspired feature extraction methods and optimizing them with the aim of implementing them on dedicated hardware platforms for computer vision applications.
First, we propose a generic algorithm that can be used in different use cases, with an acceptable complexity and a low memory footprint. Then, we propose optimizations for a more general method, based essentially on a simplification of the data encoding, as well as a hardware implementation based on these optimizations. Moreover, both contributions can be applied to many other methods than those studied in this document.
General introduction
1.1 The need for intelligent systems
Automating tedious or dangerous tasks has been an ongoing challenge for centuries. Many tools have been designed to that end. Among them lie computing machines, which assist human beings in calculations or even perform them. Such machines are everywhere nowadays, in devices that fit into our pockets. However, despite the fact that they are very efficient for mathematical operations that are complicated for our brains, they usually perform poorly at tasks that are easy for us, such as recognizing a landmark on a picture or analysing and understanding a scene. There are many applications for systems that are able to analyze their environments and to make a decision accordingly. In fact, Alan Turing, one of the founders of modern computing, estimated that one of the ultimate goals of computing is to build machines that could be said to be intelligent [1]. Perhaps one of the best-known applications of such technology would be for autonomous vehicles, e.g. cars that would be able to drive themselves with little to no help from humans. In order to drive safely, those machines obviously need to retrieve information from different channels, e.g. audio or video. Such systems may also be useful for access control for areas that need to be secured, or for quality control on production chains, e.g. as was proposed for textile products in [2]. One could think of two ways to achieve a machine of that kind: either engineer how it should process the information, or use methods allowing it to learn and determine that processing automatically. Those techniques form a research field that has been active for decades, called Machine Learning, which is part of the broader science of Artificial Intelligence (AI).
Machine Learning
In 1957, the psychologist Frank Rosenblatt proposed the Perceptron, one of the first systems capable of learning automatically without being explicitly programmed. He proposed a mathematical model, and also built a machine implementing that learning behavior; he tested it with success on a simple letter recognition application. Its principle is very simple: the input image is captured by a retina, producing a small black and white image of the letter -black corresponds to 1, and white to 0. A weighted sum of those pixels is performed, and the sign function is applied to the result -for instance, one could state that the system must return 1 when the letter to recognize is an A, and -1 if it is a B. If the system returns the wrong value, then the weights are corrected so that the output is correct. A more formal, mathematical description of the Perceptron is provided later, in Section 2.1.1.1 on page 9. The system is also illustrated in Figure 1.2. Since the Perceptron, many trainable frameworks have been proposed, most of them following a neuro-inspired approach like the Perceptron or a statistical approach. They are described in Section 2.1. 
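To make that principle concrete, here is a minimal sketch of such an error-driven perceptron on binary retina pixels, written in Python with NumPy; it illustrates the idea just described (weighted sum, sign, and weight correction on mistakes) and is not Rosenblatt's original machine nor the exact algorithm given later in Section 2.1.1.1.

```python
import numpy as np

# Minimal sketch of a perceptron on binary retina pixels: weighted sum,
# sign function, and error-driven weight correction (illustrative only).

def predict(weights, bias, pixels):
    """Return +1 or -1 from the sign of the weighted sum of the pixels."""
    return 1 if np.dot(weights, pixels) + bias > 0 else -1

def train(samples, targets, lr=0.1, epochs=20):
    """samples: (N, d) array of 0/1 pixels; targets: array of +1/-1 labels."""
    weights = np.zeros(samples.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = predict(weights, bias, x)
            if y != t:                      # wrong prediction: correct weights
                weights += lr * (t - y) * x
                bias += lr * (t - y)
    return weights, bias
```

This sketch simply stops after a fixed number of passes over the training set, whereas the formal algorithm presented later loops until a stopping condition is met.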
Figure 1.2: Perceptron applied to pattern recognition. Figure 1.2a shows a hardware implementation, and Figure 1.2b presents the principle: each cell of the retina captures a binary pixel and returns 0 when white and 1 when black. Those pixels are connected to so-called input units and are used to compute a weighted sum. If that sum is positive, then the net returns 1; otherwise it returns -1. Training a Perceptron consists in adjusting its weights. For a more formal and rigorous presentation, see page 9.
Recently, Machine Learning -and AI in general -gained renown from the spectacular research breakthroughs and applications initiated by companies such as Facebook, Google, Microsoft, Twitter, etc. For instance, Google DeepMind recently developed AlphaGo, a software capable of beating the world champion of Go [3]. Facebook is also using AI to automatically detect, localize and identify faces in pictures [4]. However, those applications are meant to be performed on machines with high computational power, and it is out of the question to run such programs on constrained architectures, like those one expects to find on autonomous systems. Indeed, such devices fall into the field of Embedded Systems, which shall be presented now.
Embedded systems
Some devices are part of larger systems, in which they perform one task in particular, e.g. control the amount of gas that should be injected in the motor of a vehicle. Those so-called embedded systems must usually meet high constraints in terms of volume, power consumption, cost, timing and robustness. Indeed, they are often used in autonomous systems carrying batteries with limited power. In the case of mass-produced devices such as phones or cars, it is crucial that their cost is as low as possible. Furthermore, they are often used in critical systems, where they must process information and deliver the result on time without error -any malfunction of those systems may lead to disastrous consequences, especially in the case of autonomous vehicles or military equipment. All those constraints also mean that embedded systems have very limited computational power. Many research teams have proposed implementations of embedded intelligent systems, as shown in Section 2.2.2. The work proposed in this thesis falls into that research field. However, as we shall see, many of those implementations require high-end hardware, thus leading to potentially high-cost devices. The NeuroDSP project, in the frame of which this PhD thesis was carried out, aims to provide a device at a lower cost with a low power consumption.
NeuroDSP: a neuro-inspired integrated circuit
The goal of the research project of which this PhD is part is to design a chip capable of performing the computation required by the "intelligent" algorithms presented earlier. As suggested by its name, NeuroDSP primarily focuses on the execution of algorithms based on neural network theory, among which lies the earlier-mentioned Perceptron. As shown in Section 2.1, the main operators needed to support such computations are linear signal processing operators such as convolution, pooling operators and non-linear functions. Most Digital Signal Processing (DSP) operators, such as convolution, actually need similar features -hence that device shall also be able to perform DSP operations, for signal preprocessing for instance. 
As we shall see, all those operations may be, most of the time, performed in parallel, thus leading to a single-instruction-multiple-data (SIMD) architecture, in which the same operation is applied in parallel to a large amount of data. The main advantage of this paradigm is obviously to carry out those operations faster, potentially at a lower clock frequency. As the power consumption of a device is largely related to its clock frequency, SIMD may also allow a lower power consumption. NeuroDSP is composed of 32 so-called P-Neuro blocks, each basically consisting of a cluster of 32 Processing Elements (PE), thus totalling 1024 PEs. A PE may be seen as an artificial neuron performing a simple operation on some data. All PEs in a single P-Neuro perform the same operation, along the lines of the aforementioned SIMD paradigm. A NeuroDSP device may then carry out signal processing and decision making operations. Since 1024 neurons may not be enough, they may be multiplexed to emulate larger systems -of course at a cost in terms of computation time. When timing is so critical that multiplexing is not a satisfying option, it is possible to use several NeuroDSP devices in cascade. The device's architecture is illustrated in Figure 1.3.
Document overview
While NeuroDSP was designed specifically to run signal processing and decision making routines, such algorithms are most of the time too resource-consuming to be performed efficiently on that type of device. It is therefore mandatory to optimize them, which is the main goal of the research work presented here. In Chapter 2, a comprehensive tour of the works related to our research is proposed. After presenting the theoretical background of machine learning as well as algorithms inspired by biological data, the main contributions concerning their implementations are shown. A discussion shall also be proposed, from which arises the problem addressed in this document, namely: how may a preprocessing algorithm be optimized given particular face and pedestrian detection applications, and how may the data be efficiently encoded so that few hardware resources are used? The first part of that problem is addressed in Chapter 3. While focusing on a preprocessing algorithm called HMAX, the main works in the literature concerning feature selection are recalled. Our contribution to that question is then proposed. Chapter 4 presents our contribution to the second part of the raised problems, concerning data encoding. After recalling the main research addressing that issue, we show how a preprocessing algorithm may be optimized so that it may process data coded on a few bits only, with little to no performance drop. An implementation on reconfigurable hardware shall then be proposed. Finally, Chapter 5 draws final thoughts and conclusions about the work proposed here. The main problems and results are recalled, as well as the limitations. Directions for future research are also proposed.
Chapter 2 Related works and problem statement
This chapter proposes an overview of the frameworks used in the pattern recognition field. Both its theoretical backbone and the main implementation techniques shall be presented. It is shown here that one of the key problems of many PR frameworks is their computational cost. The approaches to address it mainly consist in either using machines with high parallel processing capabilities and high computational power, or on the contrary in optimizing the algorithms so they can be run with fewer resources. 
The problematics underlying the work proposed in this thesis, which follows the second paradigm, shall also be stated.

Theoretical background
In this section, the major theoretical contributions to PR are presented. The principal classification frameworks are first presented to the reader. Then a description of several descriptors, which aim to capture the useful information from the processed images and to get rid of the noise, is proposed.

Classification frameworks
The classification of an unknown datum, also called a vector or feature vector, consists in predicting the category it belongs to. Perhaps the simplest classification framework is Nearest Neighbour. It consists in storing examples of feature vectors in memory, each associated with the category it belongs to. To classify an unknown feature vector, one simply uses a distance (e.g. Euclidean or Manhattan) to determine the closest example. The classifier then returns the category associated with that selected vector. While really simple, that framework has many issues. The most obvious are its memory footprint and its computational cost: the more examples we have, the more expensive that framework is. From a theoretical point of view, that framework is also very sensitive to outliers; any peculiar feature vector, for instance in the case of a labelling error, may lead to disastrous classification performance. A way to improve this framework is to take not only the closest feature vector, but the K closest, and to make them vote for the category. The retained category is then the one having the most votes [START_REF] Fix | Discriminatory analysis, nonparametric discrimination[END_REF]. That framework is called K-Nearest Neighbour (KNN). While this technique may provide better generalization and reduce the effects due to outliers, it still requires lots of computational resources. There exist many other pattern classification frameworks. The most used of those frameworks shall now be described. Neural networks are presented first. A presentation of the Support Vector Machines framework shall follow. Finally, Ensemble Learning methods are presented. This document focuses on feedforward architectures only - non-feedforward architectures, such as Boltzmann Machines [START_REF] Hinton | Optimal perceptual inference[END_REF][START_REF] David | A learning algorithm for boltzmann machines[END_REF], Restricted Boltzmann Machines [START_REF] Rumelhart | Parallel Distributed Processing -Explorations in the Microstructure of Cognition: Foundations[END_REF][START_REF] Bengio | Classification using discriminative restricted Boltzmann machines[END_REF] and Hopfield networks [START_REF] Hopfield | Neural networks and physical systems with emergent collective computational abilities[END_REF], shall not be described here. We also focus on supervised learning frameworks, as opposed to unsupervised learning frameworks such as self-organizing maps [START_REF] Kohonen | Self-organized formation of topologically correct feature maps[END_REF]. In supervised learning, each example is manually associated with a category, while in unsupervised learning the model "decides" by itself which vector goes to which category.

Neural Networks
Artificial Neural Networks (NN) are machine learning frameworks inspired by biological neural systems, used both for classification and regression tasks. Neural networks are formed of units called neurons, interconnected to each other by synapses.
Each synapse has a synaptic weight, which represents a parameter of the model that shall be tuned during training. During prediction, each neuron performs a sum of its inputs, weighted by the synaptic weights. A non-linear function called the activation function is then applied to the result, thus giving the neuron's activation, which feeds the neurons connected to the outputs of the considered one. In this thesis, only feedforward networks shall be considered. In those systems, neurons are organized in successive layers, where each unit in a layer gets inputs from units in the previous layer and feeds its activation to units in the next layer. The layer getting the input data is called the input layer, while the layer from which the network's prediction is read is the output layer. Such a framework is represented in Figure 2.1. For a complete overview of the existing neural networks, a good review is given in [START_REF] Fausett | Fundamentals of Neural Networks: Architectures, Algorithms And Applications: United States Edition[END_REF].

Figure 2.1: A feedforward neural network. In each layer, units get their inputs from neurons in the previous layer and feed their outputs to units in the next layer.

Perceptron
The perceptron is one of the most fundamental contributions to the Neural Network field, and was introduced by Rosenblatt in 1962 in [START_REF] Rosenblatt | Principles of neurodynamics: perceptrons and the theory of brain mechanisms[END_REF]. It is represented in Figure 2.2. It has only two layers: the input layer and the output layer. A "dummy" unit is added to the input layer, the activation of which is always 1 - the weight w_0 associated with that unit is called the bias. Those layers are fully connected, meaning each output unit is connected to all input units. Thus, the total input value z of a neuron with N inputs and a bias w_0 is given by:

z = w_0 + \sum_{i=1}^{N} w_i x_i  (2.1)

or, in an equivalent, more compact matrix notation:

z = W^T x  (2.2)

with x = (1, x_1, x_2, \dots, x_N)^T and W = (w_0, w_1, w_2, \dots, w_N)^T. W is called the weight vector. In the case where there is more than one output unit, W becomes a matrix where the i-th column is the weight vector of the i-th output unit. Denoting by M the number of output units, by z_i the input value of the i-th output unit and z = (z_1, z_2, \dots, z_M), one may write:

z = W^T x  (2.3)

The output unit's activation function f is as follows:

\forall x \in \mathbb{R}, \quad f(x) = \begin{cases} +1 & x > \theta \\ 0 & x \in [-\theta, \theta] \\ -1 & x < -\theta \end{cases}  (2.4)

where \theta represents a threshold (\theta \geq 0). To train a Perceptron, it is fed with each feature vector x in the training set along with the corresponding target category t. Let's consider for now that we only have two different categories: +1 and -1. The idea is that, if the network predicts the wrong category, the difference between the target and the prediction, weighted by a learning rate and the input value, is added to the weights and bias. If the prediction is correct, then no modification is made.
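To make this update rule concrete, the short NumPy sketch below trains a single-output Perceptron with the threshold activation of equation (2.4). It is an illustration only, not the thesis or NeuroDSP code; the toy dataset, learning rate and threshold value are made up for the example.

import numpy as np

def activation(z, theta=0.0):
    # Threshold activation of eq. (2.4): +1 above theta, -1 below -theta, 0 in between.
    if z > theta:
        return 1
    if z < -theta:
        return -1
    return 0

def train_perceptron(X, t, eta=0.1, epochs=20, theta=0.0):
    # X: (n_samples, n_inputs) feature vectors; t: targets in {+1, -1}.
    n_inputs = X.shape[1]
    w = np.zeros(n_inputs)   # synaptic weights
    b = 0.0                  # bias (w_0 in the text)
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = activation(b + np.dot(w, x), theta)
            if y != target:                 # update only on mistakes
                w += eta * (target - y) * x
                b += eta * (target - y)
    return w, b

# Toy linearly separable problem (logical AND, targets in {+1, -1}).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, t)
print([activation(b + np.dot(w, x)) for x in X])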
The training algorithm is shown in more detail for a Perceptron having a single output unit in Algorithm 1. It is easily extensible to systems with several output units; the only major difference is that t is replaced by a target vector t, the components of which may be +1 or -1.

Algorithm 1: Learning rule for a perceptron with one output unit.
  n ← number of input units; η ← learning rate;
  Initialize all weights and the bias to 0;
  while the stopping condition is false do
    forall (x = (x_1, x_2, ..., x_n), t) in the training set do
      y ← f(w_0 + w_1 x_1 + w_2 x_2 + ... + w_n x_n);
      for i ← 1 to n do
        w_i ← w_i + η x_i (t - y);
      end
      w_0 ← w_0 + η (t - y);
    end
  end

If there exists a hyperplane separating the two categories, then the problem is said to be linearly separable. In that case, the perceptron convergence theorem [START_REF] Fausett | Fundamentals of Neural Networks: Architectures, Algorithms And Applications: United States Edition[END_REF][START_REF] Michael | Brains, Machines, and Mathematics[END_REF][START_REF] Hertz | Introduction to the Theory of Neural Computation[END_REF][START_REF] Minsky | Perceptrons -An Intro to Computational Geometry Exp Ed[END_REF] states that such a hyperplane shall be found in a finite number of iterations - even if one cannot know that number a priori. However, that condition is required, meaning the perceptron is not able to solve non-linearly separable problems. Therefore, it is not possible to train a perceptron to perform the XOR operation. This is often referred to as the "XOR problem" in the literature, and was one of the main reasons why neural networks did not enjoy great popularity in industrial applications in the past. A way to address this class of problems is to use several layers instead of a single one. Such multi-layer networks are known as Multi-Layer Perceptrons (MLP). The activation function of their units is typically the hyperbolic tangent:

\forall x \in \mathbb{R}, \quad f(x) = \tanh(x)  (2.5)

or the very similar bipolar sigmoid:

\forall x \in \mathbb{R}, \quad f(x) = \frac{2}{1 + e^{-x}} - 1  (2.6)

Those functions' curves are represented in Figure 2.4. Its training algorithm is somewhat more complicated, and follows the Stochastic Gradient Descent approach. Let E be the cost function measuring the error between the expected result and the network's prediction. The goal is to minimize E, the shape of which is unknown. The principle of the algorithm achieving that is called back-propagation of error [START_REF] Rumelhart | Learning Internal Representations by Error Propagation[END_REF][START_REF] Rumelhart | Learning representations by back-propagating errors[END_REF].

RBF
Radial Basis Function networks were proposed initially by Broomhead and Lowe [START_REF] Broomhead | Radial basis functions, multi-variable functional interpolation and adaptive networks[END_REF][START_REF] Broomhead | Multivariable Functional Interpolation and Adaptive Networks[END_REF] and fall in the kernel methods family. They consist in three layers: an input layer similar to the Perceptron's, a hidden layer containing kernels and an output layer. Here, a kernel i is a radial basis function f_i (hence the name of the network) that measures the proximity of the input pattern x with a learnt pattern p_i called the center, according to a radius β_i. It typically has the following form:

f_i(x) = \exp\left(-\frac{\lVert x - p_i \rVert}{\beta_i}\right)  (2.7)

The output layer is similar to a Perceptron: the hidden and output units are fully connected by synapses having synaptic weights, which are determined during the training stage. The network is illustrated in Figure 2.5. To determine the kernels' parameters, one may adopt different strategies. Centers may be directly drawn from the training set, and radii may be arbitrarily chosen - however, such an empirical solution leads to poor results. A more efficient way is to use a clustering algorithm that gathers the training vectors into clusters, the center of each cluster providing a kernel center, while the corresponding radius is evaluated with respect to the proximity of the other kernels.
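Before turning to that training procedure, the prediction step of such an RBF network (the kernel of equation (2.7) followed by a Perceptron-like output layer) can be sketched in a few lines. The centers, radii and output weights below are arbitrary values chosen for illustration, not learnt ones.

import numpy as np

def rbf_kernel(x, center, beta):
    # Radial basis function of eq. (2.7): proximity of x to a learnt center.
    return np.exp(-np.linalg.norm(x - center) / beta)

def rbf_predict(x, centers, betas, W, b):
    # Hidden layer: one kernel activation per (center, radius) pair.
    h = np.array([rbf_kernel(x, c, beta) for c, beta in zip(centers, betas)])
    # Output layer: a weighted sum of the kernel activations, as in a Perceptron.
    return W @ h + b

# Two illustrative kernels in a 2-D input space, one output unit.
centers = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
betas = [0.5, 0.5]
W = np.array([[1.0, -1.0]])   # output weights (1 output unit, 2 kernels)
b = np.array([0.0])

print(rbf_predict(np.array([0.1, 0.0]), centers, betas, W, b))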
Such a clustering algorithm is presented in Appendix A. The computational power and the memory required by this network grow linearly with the number of kernels. While the training method presented in Appendix A tends to reduce the number of kernels, it may still be quite large. There exist sparse kernel machines, which work in a similar way to RBF networks but are designed to use as few kernels as possible, like the Support Vector Machines described in Section 2.1.1.2.

Spiking Neural Network
All the models presented above treat the information at the level of the neurons' activations. Spiking neural networks intend to describe the behaviour of the neurons at a lower level. That model was first introduced by Hodgkin et al [START_REF] Hodgkin | A quantitative description of membrane current and its application to conduction and excitation in nerve[END_REF], who proposed a description of the propagation of the action potentials between biological neurons. There exist different variations of the spiking models, but the most used nowadays is probably the "integrate and fire" model, where the neurons' inputs are accumulated over time. When the total reaches a threshold, the neuron is committed, i.e. it fires a spike. Thus, the information sent by a neuron is not carried by a numerical value, but rather by the order of the spikes and the duration between two spikes. It is still an active research subject, with many applications in computer vision - Masquelier and Thorpe proposed the "spike timing dependent plasticity" (STDP) algorithm, which allows unsupervised learning of visual features [START_REF] Masquelier | Unsupervised Learning of Visual Features through Spike Timing Dependent Plasticity[END_REF].

Support Vector Machines
Support Vector Machines (SVM) are another widely used classification framework [START_REF] Bishop | Pattern recognition and machine learning[END_REF]. They select, among the training samples, the vectors that are the most relevant to define the decision boundary between categories. The selected vectors are called support vectors. After selecting them, the decision boundary's parameters are optimized so that it is as far as possible from all support vectors. Typically, a quasi-Newton optimization process could be chosen to that end; however, its description lies beyond the scope of this document. Figure 2.6 shows an example of their determination as well as the resulting decision boundary.

Ensemble learning
The rationale behind Ensemble Learning frameworks is that instead of having one classifier, it may be more efficient to use several ones [START_REF] Opitz | Popular ensemble methods: an empirical study[END_REF][START_REF] Polikar | Ensemble based systems in decision making[END_REF][START_REF] Rokach | Ensemble-based classifiers[END_REF][START_REF] Schapire | The strength of weak learnability[END_REF]. Those classifiers are called weak classifiers, and the final decision results from their predictions. There exist several paradigms, among which Boosting [START_REF] Breiman | Arcing classifier (with discussion and a rejoinder by the author)[END_REF] in particular. Boosting algorithms are known for their computational efficiency during prediction. A good example is their use in Viola and Jones's famous face detection algorithm [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF]. The speed of the algorithm comes partly from the fact that the classifier is composed of a cascade of weak classifiers, in which all regions of the image that are clearly not faces are discarded by the top-level classifier. If the data goes through it, then it is "probably a face", and is processed by the second classifier, which either discards or accepts it, and so on. This allows irrelevant data and noise to be eliminated rapidly.
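The early-rejection mechanism of such a cascade can be sketched as follows. The "weak classifiers" here are arbitrary threshold tests on toy feature vectors, purely illustrative of the principle rather than of the actual Viola-Jones features.

import numpy as np

def cascade_classify(x, stages):
    # Each stage is a (weights, threshold) pair acting as a weak classifier.
    # A window is rejected as soon as one stage scores below its threshold;
    # it is accepted ("probably a face") only if it passes every stage.
    for weights, threshold in stages:
        if np.dot(weights, x) < threshold:
            return False        # early rejection: no further stage is evaluated
    return True

# Toy 3-stage cascade over 4-dimensional feature vectors (made-up values).
stages = [
    (np.array([1.0, 0.0, 0.0, 0.0]), 0.2),   # cheap first stage discards most windows
    (np.array([0.5, 0.5, 0.0, 0.0]), 0.5),
    (np.array([0.25, 0.25, 0.25, 0.25]), 0.6),
]

windows = np.random.rand(10, 4)
kept = [w for w in windows if cascade_classify(w, stages)]
print(f"{len(kept)} window(s) reached the last stage out of {len(windows)}")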
Boosting is also known to be slightly more efficient than SVM for multiclass classification tasks with HMAX [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF], which is described in Section 2.1.2.2. Feature extraction frameworks Signal processing approach Classical approaches More than ten years ago, Lowe proposed a major contribution in computer vision with his Scale Invariant Feature Transform (SIFT) descriptor [START_REF] David | Distinctive Image Features from Scale-Invariant Keypoints[END_REF], which became quickly very popular due to its efficiency. Its primary aim was to provide, as suggested by its name, features that are invariant to the scale and to some extent to the orientation and small changes in viewpoint. It consists in matching features from the unknown image to a set of learnt features at different locations and scales, followed by a Hough transform that gathers the matched points in the image into clusters, which represent detected objects. The matching is operated by a fast nearest-neighbour algorithm, that indicates for a given feature the closest learnt feature. However, doing so at every locations and scale would be very inefficient, as most of the image probably does not contain much information. In order to find locations which are the most likely to hold information, a Difference of Gaussian (DoG) filter bank is applied to the image. Each DoG filter behaves as a band-pass filter, selecting edges at a specific spatial frequency and allowing to find features at a specific scale. Extrema are then evaluated across all those scales in the whole image, and constitute a set of keypoints at which the aforementioned matching operations are performed. As for rotation invariance, it is brought by the computation of gradients that are local to each keypoint. Before performing the actual matching, the data at a given keypoint is transformed according to those gradients so that any variability caused by the orientation is removed. Bay et al. proposed in [START_REF] Bay | Speeded-Up Robust Features (SURF)[END_REF] a descriptor aiming to reproduce the result of the state of the art algorithm, but much faster to compute. They called their contribution SURF, for Speeded-Up Robust Features. It provides properties similar to SIFT (scale and rotation invariance), with a speed-up of 2.93X on a feature extraction task, where both frameworks were tuned to extract the same number of keypoints. Like SIFT, SURF consists in a detector that takes care of finding keypoints in the image, cascaded with a descriptor that computes features at those keypoints. The keypoints are evaluated using a simple approximation of the Hessian matrix, which can be efficiently computed thanks to the integral image representation, i.e an image where each pixels contains the sum of all the original image's pixels located left and up to it [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF]. Descriptors are then computed locally using Haar wavelet, which can also be computed with the integral image [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF]. 
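The integral image trick mentioned above can be sketched in a few lines: once the cumulative sums have been computed, the sum of any rectangular region (and hence any Haar-like box filter) costs only four lookups. The code below is a plain illustration of that idea, not an excerpt from SURF or Viola-Jones implementations.

import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img over all pixels above and to the left of (y, x), inclusive.
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    # Sum of img[y0:y1+1, x0:x1+1] obtained from four corner lookups (constant time).
    total = ii[y1, x1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 2, 2), img[1:3, 1:3].sum())   # both give the same value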
Another popular framework for feature extraction is Histograms of Oriented Gradients (HOG) [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]. It may be used in many object detection applications, though it was primarily designed for the detection of human beings. It consists in computing the gradients at each pixel, and making each of those gradients vote for a particular bin of a local orientation histogram. The weight with which each gradient votes is a linear function of its norm and of the difference between its orientation and the orientation of the closest bins' centers. Those gradients are then normalized over overlapping spatial blocks, and the result forms the feature vector. The classifier used here is typically a linear SVM, presented in Section 2.1.1. Like many feature extraction frameworks, there exist some variations of the HOG feature descriptor. Dalal and Triggs present two of them in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]: R-HOG and C-HOG, respectively standing for "Rectangular HOG" and "Circular HOG". The difference with HOG lies in the shape of the overlapping spatial blocks used for the gradient normalization. R-HOG is somewhat close to the SIFT descriptor presented earlier, except that computations are performed at all locations, thus providing a dense feature vector. C-HOG is somewhat trickier to implement due to the particular shape it induces, and shall not be presented here. All three frameworks provide similar recognition performances, which were the state of the art at that time. There are many other descriptors for images, like FAST [START_REF] Rosten | Machine Learning for High-Speed Corner Detection[END_REF][START_REF] Schmidt | An Evaluation of Image Feature Detectors and Descriptors for Robot Navigation[END_REF], and we shall not describe them in detail here as it lies beyond the scope of this document. However, it is worth detailing another type of framework based on so-called wavelets, which allows frequency information to be retrieved while keeping local information - something that is not possible with the classical Fourier transform.

Figure 2.7: Invariant scattering convolution network [START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF]. Each layer applies a wavelet decomposition U_λ to its inputs, and feeds the next layer with the filtered images U_λ(x). At each layer, a low-pass filter is applied to the filtered images and the results are sub-sampled. The resulting so-called "scattering coefficients" S_λ(x) are kept to form the feature vector.

Wavelets
Wavelets have known a great success in many signal processing applications, such as signal compression or pattern recognition, including for images. They are linear operators that decompose a signal locally on a frequency basis. A wavelet decomposition consists in applying a "basis" linear filter, called the mother wavelet, to the signal. It is then dilated in order to extract features of different sizes and, in the case of images, rotated so that it responds to different orientations.
An excellent and comprehensive guide to the theory and practice of wavelets is given in [START_REF] Mallat | A Wavelet Tour of Signal Processing[END_REF]. Wavelets are used as the core operators of the Scattering Transform frameworks. Among them lie the Invariant Scattering Convolution Networks (ISCN), introduced by Bruna and Mallat [START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF]. They follow a feedforward, multistage structure, along the lines of ConvNet described in Section 2.1.2.3, though contrary to ConvNet its parameters are fixed, not learnt. They alternate wavelet decompositions with low-pass filters and subsampling -the function of which is to provide invariance in order to raise classification performances. Each stage computes a wavelet decomposition of the images produced at the previous stage, and feed the resulting filtered images to the next stage. At each stage the network also outputs a low-pass filtered and sub-sampled version of those decompositions -the final feature vector is the concatenation of those output features. Figure 2.7 sums up the data flow of this framework. It should be noted that in practice, not all wavelet are applied at each stage to all images: indeed it is shown in [START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF] that some of those wavelet cascades do not carry information, and thus their computation may be avoided, which allows to reduce the algorithmic complexity. Variations of the ISCN with invariance to rotation are also presented in [START_REF] Sifre | Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination[END_REF][START_REF] Oyallon | Deep Roto-Translation Scattering for Object Classification[END_REF], which may be used for texture [START_REF] Sifre | Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination[END_REF] or objects [START_REF] Oyallon | Deep Roto-Translation Scattering for Object Classification[END_REF] classification. A biological approach: HMAX Some frameworks are said to be biologically plausible. In such case, their main aim is not so much to provide a framework as efficient as possible in terms of recognition rates or computation speed, but rather to propose a model of a biological system. One of the most famous of such frameworks is HMAX, which also happens to provide decent recognition performances. The biological background was proposed by Riesenhuber and Poggio in [START_REF] Riesenhuber | Hierarchical models of object recognition in cortex[END_REF], on the base of the groundbreaking work of Hubel and Wiesel [START_REF] Hubel | Receptive fields, binocular interaction, and functional architecture in the cat's visual cortex[END_REF]. Its usability for actual object recognition scenarios was stated by Serre et al. 8 years later in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. It is a model of the ventral visual system in the cortex of the primates, accounting for the first 100 to 200 ms of processing of visual stimuli. As its name suggests -HMAX stands for "Hierarchical Max" -that model is built in a hierarchical manner. Four successive stages, namely S1, C1, S2 and C2 process the visual data in a feedforward way. The S1 and S2 layers are constituted of simple cells, performing linear operations or proximity evaluations, while the C1 and C2 contain complex cells that provide some degrees of invariance. Figure 2.8 sums up the structure of this processing chain. 
Let's now describe each stage in detail. The S1 stage consists in a Gabor filter bank. Gabor filters - which are here two-dimensional, as we process images - are linear filters responding to patterns of a given spatial frequency and orientation. They are a particular form of the wavelets described in Section 2.1.2.1. A Gabor filter is described as follows:

G(x, y) = \exp\left(-\frac{x_0^2 + \gamma^2 y_0^2}{2\sigma^2}\right) \times \cos\left(\frac{2\pi}{\lambda} x_0\right)  (2.8)

x_0 = x \cos\theta + y \sin\theta \quad \text{and} \quad y_0 = -x \sin\theta + y \cos\theta  (2.9)

where γ is the filter's aspect ratio, θ its orientation, σ the Gaussian effective width and λ the cosine wavelength. The S2 stage aims to compare the input features to a dictionary of learnt features. There are different ways to build up that dictionary. In [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF] it is proposed to simply crop patches of different sizes from images in C1 space, at random positions and scales. During feedforward, patches are cropped from images in C1 space at all locations and scales, and are compared to each learnt feature. The comparison operator is a radial basis function, defined as follows:

\forall i \in \{1, 2, \dots, N\}, \quad r_i(X) = \exp(-\beta \lVert X - P_i \rVert)  (2.10)

where X is the input patch from the previous layer, P_i the i-th learnt patch in the dictionary and β > 0 is a tuning parameter. Therefore, the closer the input patch is to the S2 unit's learnt patch, the stronger the S2 unit fires. Finally, a complete invariance to the locations and scales of the features in C1 space is reached in the C2 stage. Each C2 unit pools over all S2 units sharing the same learnt pattern, and simply keeps the maximum value. Those values are then serialized in order to form the feature vector. The descriptor HMAX provides is well suited to detect the presence of an object in cluttered images, though the complete invariance to location and scale brought by C2 removes information related to its location. This issue is addressed in [START_REF] Chikkerur | What and where: A Bayesian inference theory of attention[END_REF] - however, that model lies beyond the scope of this thesis and shall not be discussed here.

(Caption, after [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]: concerning the Gabor filters in S1, σ represents the spread of their Gaussian envelopes and λ the wavelength of their underlying cosine functions.)

Convolutional Neural Networks
Convolutional Neural Networks (ConvNet) are multistage architectures that alternate convolution and pooling layers, much like the S1 and C1 layers of HMAX, followed by a fully connected layer similar to an MLP. However, the parameters of the convolution kernels are not predefined, but rather learnt at the same time as the weights in the final classifier. Thus, the feature extraction and classification models are both tuned simultaneously, using an extension of the back-propagation algorithm. An example of this model is presented in Figure 2.9. That framework became very popular since the industry demonstrated its efficiency, and is today actively used by big companies such as Facebook, Google, Twitter, Amazon and Microsoft. A particular implementation of that framework, tuned to perform best at face recognition tasks, was proposed by Garcia et al [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF]. However, the large number of parameters to be optimized by the training algorithm requires a huge amount of data in order to avoid overfitting, lots of computational power and lots of time - still, pretrained models are provided by the community, making that problem avoidable.
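Before moving to implementations, it may help to note that one stage of such multistage descriptors (S1/C1 in HMAX, or a convolution/pooling pair in a ConvNet) shares the same computational skeleton: a bank of 2-D filters, an optional non-linearity, then pooling. The NumPy sketch below only illustrates that skeleton; the filter values, the ReLU non-linearity and the pooling size are arbitrary choices for the example, not those of HMAX or of any trained ConvNet.

import numpy as np

def conv2d_valid(img, kernel):
    # Naive 'valid' 2-D convolution (correlation form, as commonly used in ConvNets).
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Non-overlapping max pooling, as performed by C1 cells or a ConvNet pooling layer.
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.random.rand(16, 16)
filters = [np.array([[1, 0, -1]] * 3, dtype=float),                        # vertical-edge-like filter
           np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=float)]    # horizontal-edge-like filter
feature_maps = [max_pool(np.maximum(conv2d_valid(img, f), 0)) for f in filters]
print([fm.shape for fm in feature_maps])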
Frameworks implementations Software implementations There exists many implementation of the descriptors and classifier described in Section 2.1. Some of them are available in general purpose software packages, like the widespread Scikit-learn python package [START_REF] Pedregosa | Scikit-learn: Machine learning in Python[END_REF]. SVM also have a high performance dedicated library with LIBSVM [START_REF] Chang | LIBSVM: A library for support vector machines[END_REF]. Other frameworks, more dedicated to neural networks -and particularly deep learning -are accelerated on GPUs, like Theano [START_REF] Bastien | Theano: new features and speed improvements[END_REF][START_REF] Bergstra | Theano: a CPU and GPU math expression compiler[END_REF], Caffe [START_REF] Jia | Caffe: Convolutional architecture for fast feature embedding[END_REF], Torch [START_REF] Collobert | Torch7: A Matlablike Environment for Machine Learning[END_REF], cuDNN [START_REF] Woolley | cuDNN: Efficient Primitives for Deep Learning[END_REF] and the recently released TensorFlow [START_REF] Abadi | TensorFlow: Large-scale machine learning on heterogeneous systems[END_REF]. There also exist frameworks more oriented towards neuroscience, such as PyNN [START_REF] Davison | PyNN: A Common Interface for Neuronal Network Simulators[END_REF] and NEST [START_REF] Plesser | Nest: the neural simulation tool[END_REF]. The Parallel Neural Circuit Simulator (PCSIM) allows to handle large-scale models composed of several networks that may use different neural models, and is able to handle several millions of neurons and synapses [START_REF] Pecevski | PCSIM: a parallel simulation environment for neural circuits fully integrated with Python[END_REF]. As for spiking neural networks, the BRIAN framework [START_REF] Goodman | Brian: A Simulator for Spiking Neural Networks in Python[END_REF][START_REF] Dan | The Brian Simulator[END_REF] provides an easy to use simulation environment. Uetz and Behnke along with its implementation on GPU [START_REF] Uetz | Large-scale object recognition with CUDA-accelerated hierarchical neural networks[END_REF], using the CUDA framwork. This framework was especially designed for large-scale object recognition. The authors claim a very low testing error rate of 0.76 % on MNIST, a popular hand-written digit dataset initially provided by Burges et al [START_REF] Christopher | Mnist database[END_REF], and 2.87 % on the general purpose NORB dataset [START_REF] Lecun | Learning methods for generic object recognition with invariance to pose and lighting[END_REF]. Embedded systems Optimizations for software implementations, both on CPU and GPU, for the SIFT and SURF frameworks have also been proposed [START_REF] Kim | A fast feature extraction in object recognition using parallel processing on CPU and GPU[END_REF]. It has also been shown that wavelets are very efficient to compute, even on low hardware resources [START_REF] Courroux | Use of wavelet for image processing in smart cameras with low hardware resources[END_REF], which make them a reasonable choice for feature extraction on embedded systems. Furthermore, an embedded version of the SpiNNaker board described in Section 2.2.2 for autonomous robots, programmable using with the C language or languages designed for neural networks programing is presented in [START_REF] Galluppi | Event-based neural computing on an autonomous mobile platform[END_REF]. 
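As an illustration of how the general-purpose packages cited at the beginning of this section are typically used, the few lines below train and evaluate an RBF-kernel SVM with scikit-learn (which itself relies on LIBSVM). The synthetic two-blob dataset is generated only for the example.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two Gaussian blobs standing in for feature vectors of two categories.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2) + [2, 2], rng.randn(100, 2) - [2, 2]])
y = np.array([0] * 100 + [1] * 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0)     # RBF-kernel SVM, as commonly paired with descriptors such as HMAX
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))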
Hardware implementations As shown in Section 2.2.1, GPUs are very efficient platforms for the implementation of classification and feature extraction frameworks, particularly for neuromorphic algorithms, due to their highly parallel architecture. Field Programmable Gate Arrays (FPGA) are another family of massively parallel platforms, and as such are also good candidates for efficient implementations. They are reconfigurable hardware devices, in which the user implement algorithms at a hardware level. Therefore, they provide a much finer control than the GPU: one implements indeed the communication protocols, the data coding, how computations are performed, etc. -though they utilization is also more complicated. FPGAs are configured using hardware description languages, like VHDL or Verilog. Going further down in the abstraction levels, there also exists fully analogical neural network implementations that use a component called memristor [START_REF] Brousse | Neuro-inspired learning of low-level image processing tasks for implementation based on nano-devices[END_REF][START_REF] Chabi | Robust neural logic block (NLB) based on memristor crossbar array[END_REF][START_REF] Choi | An electrically modifiable synapse array of resistive switching memory[END_REF][START_REF] He | Design and electrical simulation of on-chip neural learning based on nanocomponents[END_REF][START_REF] Liao | Design and Modeling of a Neuro-Inspired Learning Circuit Using Nanotube-Based Memory Devices[END_REF][START_REF] Retrouvey | Electrical simulation of learning stage in OG-CNTFET based neural crossbar[END_REF][START_REF] Retrouvey | Electrical simulation of learning stage in OG-CNTFET based neural crossbar[END_REF][START_REF] Snider | From Synapses to Circuitry: Using Memristive Memory to Explore the Electronic Brain[END_REF][START_REF] Versace | The brain of a new machine[END_REF][START_REF]Molecular-junction-nanowire-crossbar-based neural network[END_REF]. The resistance of such components can be controlled by the electric charge that goes through it. That resistance value is analogous to a synaptic weight. As it is still at the fundamental research level, analogical neural network shall not be studied here. Neural networks The literature concerning hardware implementations of neural networks is substantial. A very interesting and complete survey was published in 2010 by Misra et al [START_REF] Misra | Artificial neural networks in hardware: A survey of two decades of progress[END_REF]. Feedforward neural network are particularly well suited for hardware implementations, since the layers are, by definition, computed sequentially. It implies that the data goes through each layers successively, and that while the layer i processes the image k, the image k + 1 is processed by the layer i -1. Another strategy is, on the contrary, to implement a single layer on the device, and to use layer multiplexing to sequentially load and apply each layer to the data, thus saving lots of hardware resources to the expense of a higher processing time [START_REF] Himavathi | Feedforward Neural Network Implementation in FPGA Using Layer Multiplexing for Effective Resource Utilization[END_REF]. 
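The layer-multiplexing idea mentioned just above can be sketched conceptually in software: a single compute block is reused for every layer, whose parameters are "loaded" one set at a time, trading processing time for hardware resources. The layer sizes and weights below are made up for the illustration and do not describe any specific hardware design.

import numpy as np

def compute_block(x, weights, biases):
    # The single physical block: one matrix-vector product plus a non-linearity.
    return np.tanh(weights @ x + biases)

def run_multiplexed(x, layers):
    # Layers are applied one after the other on the same block,
    # emulating the sequential loading of each layer's parameters.
    for weights, biases in layers:
        x = compute_block(x, weights, biases)
    return x

rng = np.random.RandomState(0)
layers = [(rng.randn(8, 16), rng.randn(8)),    # layer 1: 16 -> 8
          (rng.randn(4, 8), rng.randn(4)),     # layer 2: 8 -> 4
          (rng.randn(2, 4), rng.randn(2))]     # layer 3: 4 -> 2
print(run_multiplexed(rng.randn(16), layers))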
However, it has been demonstrated that neural network that are not feedforward may also be successfuly implemented on hardware [START_REF] Ly | High-Performance Reconfigurable Hardware Architecture for Restricted Boltzmann Machines[END_REF][START_REF] Coussy | Fully-Binary Neural Network Model and Optimized Hardware Architectures for Associative Memories[END_REF]. There also exist hardware implementations of general purpose bio-inspired frameworks, such as Perplexus, which proposes among other the capability for hardware devices to self-evolve, featuring dynamic routing and automatic reconfiguration [START_REF] Upegui | The perplexus bio-inspired reconfigurable circuit[END_REF], particularly suited for large-scale biological system emulation. Architecture of adaptive size have also been proposed, that allow to dynamically scale itself when needed [START_REF] Héctor | A networked fpga-based hardware implementation of a neural network application[END_REF]. While the mentioned works intend to be general purpose frameworks with no particular applications in mind, some contributions also propose implementations for very specific purposes, such as the widespread face detection and identification task [START_REF] Yang | Implementation of an rbf neural network on embedded systems: real-time face tracking and identity verification[END_REF], or more peculiar application such as gas sensing [START_REF] Benrekia | FPGA implementation of a neural network classifier for gas sensor array applications[END_REF] or classification of data acquired from magnetic probes [START_REF] Nguyen | FPGA implementation of neural network classifier for partial discharge time resolved data from magnetic probe[END_REF]. Some frameworks received special considerations from the community in those attempts. After presenting the works related to HMAX, the next paragraphs shall present the numerous -and promising -approaches for ConvNet implementations. The many contributions that concern the Spiking Neural Networks are presented afterwards. HMAX Many contributions about hardware architectures for HMAX have been proposed by Al Maashri and his colleagues [START_REF] Park | System-On-Chip for Biologically Inspired Vision Applications[END_REF][START_REF] Al Maashri | A hardware architecture for accelerating neuromorphic vision algorithms[END_REF][START_REF] Debole | FPGA-accelerator system for computing biologically inspired feature extraction models[END_REF][START_REF] Maashri | Accelerating neuromorphic vision algorithms for recognition[END_REF][START_REF] Park | Saliencydriven dynamic configuration of HMAX for energy-efficient multi-object recognition[END_REF][START_REF] Sun Park | An FPGAbased accelerator for cortical object classification[END_REF]. Considering that in HMAX, the most resource consuming stage is, by far, the S2 layer [START_REF] Al Maashri | A hardware architecture for accelerating neuromorphic vision algorithms[END_REF], a particular effort was made in [START_REF] Al Maashri | A hardware architecture for accelerating neuromorphic vision algorithms[END_REF] to propose a suitable hardware accelerator for that part. In that paper, Al Maashri et al. proposed a stream-based correlation, where input data is streamed to several pattern matching engines performing the required correlation operations in parallel. 
The whole model, including the other layers, was implemented on a single-FPGA and a multi-FPGA platforms that respectively provide 23× and 89× speedup, compared with a CPU implementation running on a system having a quad-core 3.2 GHz Xeon processor and 24 GB memory. The single-FPGA platform uses a Virtex-6 FX-130T, and the multi-FPGA one embeds four Virtex-5 SX-240T, all of which are high-end devices. Those systems did not have any drop in accuracy compared to the CPU implementation. A complete framework allowing to map neuromorphic algorithms to multi-FPGA systems is presented by Parket al. in [START_REF] Park | System-On-Chip for Biologically Inspired Vision Applications[END_REF]. The chosen hardware platform is called Vortex [START_REF] Park | A reconfigurable platform for the design and verification of domain-specific accelerators[END_REF], which was designed to implement and map hardware accelerators for streambased applications. One of the biggest challenge for such systems is the inter-device communication, which is addressed in that work with the design of specific network interfaces. It also proposes tools allowing to achieve the mapping in a standardized way, with the help of a specially-designed tool called Cerebrum. As a proof of concept, a complete image processing pipeline was implemented, that cascades a preprocessing stage, a visual saliency2 determination and an object recognition module using HMAX. That pipeline was also implemented on CPU in C/C++ and on GPU with CUDA for comparison. The gain provided by the system is a speedup of 7.2× compared to the CPU implementation and 1.1× compared to the GPU implementation. As for the power efficiency, the gain is 12.1× compared to the CPU implementation and 2.3× compared to the GPU implementation. Kestur et al proposed with their CoVER system [START_REF] Kestur | Emulating Mammalian Vision on Reconfigurable Hardware[END_REF] a multi-FPGA based implementation of visual attention and classification algorithms -the latter being operated by HMAX -that aims to process high resolution images nearly in real time. It has a pre-processing stage, followed by either an image classification or a saliency detection algorithm, or both, depending on the chosen configuration. Each process uses a hardware accelerator running on an FPGA device. The architecture was implemented on a DNV6F6-PCIe prototyping board, which embeds six high-end Virtex6-SX475T FPGAs: one of them is used for image preprocessing and routing data, another one to compute HMAX's S1 and C1 feature maps, two perform the computations of HMAX's S2 and C2 features, and the remaining two are used both as repeaters and to compute the saliency maps. To our knowledge, the most recent hardware architecture for HMAX was proposed in 2013 by Orchard et al [99]. It was successfuly implemented on a Virtex 6 ML605 board, which carries a XC6VLX240T FPGA. The implementation is almost identical to the original HMAX described in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF], and is able to process 190 images per second with less than 1% loss in recognition rate compared with standard software implementations, for both binary and multiclass objet recognition tasks. 
One of the major innovation of this contribution is the use of separable filters for the S1 layer: it was indeed shown that all filters used in HMAX, at least the original version presented in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF], may be expressed as separable filters or as a linear combinations of separable filters -this allows to considerably reduce the utilization of FPGA resources. That engine is composed of three submodules: a wrapper that takes care of communications with other modules, a weight loader that manages the convolution kernel's coefficients and the convolution engine itself, that performs the actual computation. In order to perform the convolution operations in streams, the convolution kernel stores a stripe of the image and perform convolutions as soon there are enough data, so that for a K × K convolution kernel the system needs to store K -1 lines. Thus, the system can output one pixel per clock cycle. That engine reached the to date state-of-the-art in terms of energy efficiency, wih 2.76 GOPS/mW. ConvNet To our knowledge, the most recent effort concerning the implementation of ConvNets on hardware lies in the Origami project [START_REF] Cavigelli | Origami: A Convolutional Network Accelerator[END_REF]. The contributors claim that their integrated circuit is low-power enough to be embeddable, while handling network that only workstation with GPU could handle before. To achieve this, the pixel stream is first, if necessary, cropped to a Region Of Interest (ROI) with a dedicated module. A filter bank is then run on that ROI. Each filter consists in the combination of chanels, each performing multiplication-accumulation5 (MAC) operations on the data they get. Each channel then sums the final results individually, and output the pixel values in the stream. That system achieves a high throughput of 203 GOPS when running at 700 MHz, and consumes 744 mW. Spiking Neural Networks Due to the potentially low computational resources they need, SNN also have their share of hardware implementation attempts. Perhaps the most well-known is the Spiking Neural Network architecture (SpiNNaker) Project [START_REF] Furber | The SpiNNaker Project[END_REF]. It may be described as a massively parallel machine, capable of simulating neuromorphic systems in real time -i.e it respects biologically plausible timings. It it basically a matrix of interconnected processors (up to 2500 in the largest implementation), splitted in several nodes of 18 processors. Each processor simulates 1000 neurons. The main advantage in using spikes is that the information is carried by the firing timing, as explained in Section 2.1.1.1, page 12 -thus each neuron needs to send only small packets to the other neurons. However, the huge amount of those packets and of potential destination makes it challenging to route them efficiently. In order to guarantee that each emitted packet arrives on time at the right destination, the packet itself only contains the identifier of the emitting neuron. Then, the router sends it to the appropriate processors according to that identifier, which would depend on the network's topology and more precisely to which neurons the emitting neuron is connected to. 
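The routing scheme described above, where packets carry only the identifier of the emitting neuron and are delivered according to the network topology, can be sketched as follows. The connectivity table, core mapping and neuron counts are invented for the illustration and do not describe the actual SpiNNaker router.

# Each packet carries only the identifier of the neuron that spiked.
# The router resolves destinations from a connectivity table (the topology).
connectivity = {
    0: [2, 3],      # neuron 0 projects to neurons 2 and 3
    1: [3],
    2: [4],
    3: [4],
}
neuron_to_core = {0: "core_A", 1: "core_A", 2: "core_B", 3: "core_B", 4: "core_C"}

def route(spiking_neuron_id):
    # Deliver the event to every core hosting a target of the spiking neuron.
    targets = connectivity.get(spiking_neuron_id, [])
    cores = sorted({neuron_to_core[t] for t in targets})
    return [(core, spiking_neuron_id) for core in cores]

for event in [0, 1, 3]:           # three spikes emitted over time
    print(event, "->", route(event))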
IBM and the EPFL ( École Polytechnique Fédérale de Lausanne) collaborated to start a large and (very) ambitious research program: the Blue Brain project, which aims to use an IBM Blue Gene supercomputer to simulate mammalian brains, first of little animals like rodents, and eventually the human brain [START_REF] Markram | The blue brain project[END_REF]. However it is highly criticized by the scientific community, mostly for its cost, the lack of realism in the choice of its goals and the contributions it led to [START_REF] Kupferschmidt | Virtual rat brain fails to impress its critics[END_REF]. While still ongoing, that project led to the creation of SyNAPSE, meaning System of Neuromporphic Adaptive Plastic Scalable Electronics. Since the Blue Brain project needed a supercomputer, the aim of SyNAPSE is to design a somewhate more constrained system. In the frame of that project, the TrueNorth chip [START_REF] Merolla | A million spiking-neuron integrated circuit with a scalable communication network and interface[END_REF] [START_REF] Krichmar | Large-scale spiking neural networks using neuromorphic hardware compatible models[END_REF], run in a simulation environment. The authors backed the propositions that neural networks may be useful for both engineering and modeling purposes, and supported the fact that the spiking neural networks are particularly well suited with the use of Addressable Event Representation communication scheme, which consists in transmitting only the information about particular events instead of the full information, which is particularly useful to reduce the required bandwidth and computations. However, that strategy lies beyond the scope of this document. Other frameworks implementations There exists many academic works that are yet to be mentioned, for both classifiers and descriptors. As for classifiers, Kim et al proposed a bio-inspired processor for real time object detection, achieving high throughput (201.4 GOPS) while consuming 496 mW. Other frameworks for pattern recognition systems that are not biologically inspired have been proposed. For instance, Hussain et al proposed an efficient implementation of the simple KNN algorithm [START_REF] Hussain | An adaptive implementation of a dynamically reconfigurable K-nearest neighbour classifier on FPGA[END_REF], and an implementation of the almost-equally-simple Naive Bayes6 framework is proposed in [START_REF] Hongying Meng | FPGA implementation of Naive Bayes classifier for visual object recognition[END_REF]. Anguita et al proposed a framework allowing to generate user-defined FPGA cores for SVMs [START_REF] Anguita | A FPGA Core Generator for Embedded Classification Systems[END_REF]. An implementation for Gaussian Mixture Models, which from a computational point of view are somewhat close to RBF nets and as such may require lots of memory and hardware resources, have also been presented [START_REF] Shi | An Efficient FPGA Implementation of Gaussian Mixture Models-Based Classifier Using Distributed Arithmetic[END_REF]. Concerning feature extraction, the popular SIFT descriptor have been implemented on FPGA devices with success [START_REF] Bonato | A Parallel Hardware Architecture for Scale and Rotation Invariant Feature Detection[END_REF][START_REF] Yao | An architecture of optimised SIFT feature detection for an FPGA implementation of an image matcher[END_REF], as well as SURF [START_REF] Svab | FPGA based Speeded Up Robust Features[END_REF]. 
Some companies also proposed their own neural network implementations, long before the arrival of ConvNet, HMAX and other hierarchical networks. Intel proposed an analogical neural processor called ETANN in 1989 [START_REF] Holler | An electrically trainable artificial neural network (ETANN) with 10240 'floating gate' synapses[END_REF]. While harder to implement and not as flexible as their digital counterparts, analogical devices are much faster. That processor embeds 64 PEs that act as as many neurons, and 10,240 connections. The device was parameterizable by the user using a software called BrainMaker. A digital neural architecture was presented by Philips for the first time in 1992, and was called L-Neuro [START_REF] Mauduit | Lneuro 1.0: a piece of hardware LEGO for building neural network systems[END_REF][START_REF] Duranton | L-Neuro 2.3: a VLSI for image processing by neural networks[END_REF]. It was designed with modularity as a primary concern, and thus is easily interconnected with other modules, which makes it scalable. In its later version, that system was composed of 12 DSP processors, achieving 2 GOP/s with a 1.5 GB/s bandwidth, and was successfully used for PR applications. IBM also proposed the Zero Instruction Set Computer (ZISC) [START_REF] Madani | ZISC-036 Neuroprocessor Based Image Processing[END_REF], their own neural processor. It was composed of a matrix of processing elements that act like the kernel functions of an RBF network, as detailed in Section 2.1.1.1.

Discussion
In the previous sections of this chapter, the theoretical background of pattern recognition was presented, as well as different implementations of pattern recognition frameworks on different platforms. This section is dedicated to the comparison of those frameworks. Descriptors and then classifiers shall be discussed in terms of robustness and complexity, with an emphasis on how well they may be embedded. Afterwards, the problematics underlying the research work presented here shall be stated.

Descriptors
Concerning SURF, its authors claimed in [START_REF] Bay | Speeded-Up Robust Features (SURF)[END_REF] that it was both more accurate and faster than SIFT. The accuracy brought by HMAX for computer vision was groundbreaking [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. It showed better performances than SIFT in many object recognition tasks, mainly on the Caltech-101 dataset. Those results were corroborated by the work of Moreno et al, who compared the performances of HMAX and SIFT on object detection and face localization tasks, and found out that HMAX performed indeed better than SIFT [START_REF] Moreno | A Comparative Study of Local Descriptors for Object Category Recognition: SIFT vs HMAX[END_REF]. It is also worth mentioning the very interesting work of Jarrett et al [START_REF] Jarrett | What is the Best Multi-Stage Architecture for Object Recognition?[END_REF], in which they evaluated the contribution of several properties of different computer vision frameworks applied to object recognition. That paper confirms and generalizes the aforementioned work of Moreno et al: it states that multi-stage architectures in general, which include HMAX and ConvNets, perform better than single-stage ones, such as SIFT.
ConvNet achieves outstandingly good performances on large datasets, such as MNIST [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF] or ImageNet [START_REF] Szegedy | Going Deeper with Convolutions[END_REF][START_REF] He | Delving deep into rectifiers: Surpassing human-level performance on imagenet classification[END_REF]. In comparison, HMAX's performances are lower. However, the number of parameters a ConvNet has to optimize is very large; therefore it needs a huge amount of data to be trained properly - indeed, models with lots of parameters are known to be more subject to overfitting [START_REF] Bishop | Pattern recognition and machine learning[END_REF]. If the data is sparse, it is worth considering using a framework with fewer parameters, such as HMAX; as explained in Section 2.1.2.2, its training stage simply consists in cropping images at random locations and scales. Despite the fact that this randomness is clearly suboptimal and has been the subject of optimization works in the past [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF], it presents the advantage of being very simple.

Table 2.2, ConvNet row (recoverable excerpt): "Very high on large datasets"; "Yes, requires a large dataset"; "High".

Furthermore, while it has been stated that HMAX's accuracy is related to the number of features in the S2 dictionary, the performance does not evolve much beyond 1,000 patches. Assuming only 1 patch per image is cropped during training, one would then require 1,000 training images, which is much lower than the tens of thousands usually gathered to train a ConvNet [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF][START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF]. That state of things led to the thought that, while working in many situations, ConvNet may not be the most adapted tool for all applications - particularly in the case where the training set is small. Another possibility would be to use an Invariant Scattering Convolution Network as the first layers of a ConvNet, as suggested in [START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF], instead of optimizing the weights of the convolution kernels during the training stage. Due to their performances, those three multistage architectures - ConvNet, ISCN and HMAX - seem like the most promising options for most computer vision applications. However, another important aspect that must be taken into account is that of their respective complexities: they have different requirements in terms of computational resources and memory that shall be decisive when choosing one of them, especially in the case of embedded systems. In that respect, legacy descriptors such as HOG, SIFT and, in particular, SURF are interesting alternatives. In order to set boundaries to the present work, a few descriptors must be chosen so that most of the effort can focus on them. To that end, Table 2.2 sums up the main features of the presented descriptors. As the aim is to achieve state-of-the-art accuracy, the work presented in this thesis shall mostly relate to the three aforementioned multistage architectures: ConvNet, ISCN and HMAX.

Classifiers
For a given application, after selecting the (believed) most appropriate descriptor, one must choose a classifier. Like descriptors, classifiers have different features in terms of robustness, complexity and memory footprint, both for training and prediction.
Most of the time, the classification stage itself is not the most demanding in a processing chain, and thus may not need to be accelerated. In the case where one need such acceleration, the literature on the subject is already substantial -see Section 2.1.1. For those reasons, the present document shall not address hardware acceleration for classification. However, as the choice of the classifier plays a decisive role in the robustness of the system, the useful criteria for classifier selection shall be presented. Let's first consider the training stage. As it shall be in any case performed on a workstation and not on an embedded system, constraints in terms of complexity and memory print are not so high. However, a clear difference must be made between the iterative training algorithms and the others. An iterative algorithm processes the training samples one by one, or by batch -they do not need to load all the data in once, and are therefore well suited for training with lots of samples. On the other hand, non-iterative data such as SVM or RBF need the whole dataset in memory to be trained, which is not a problem for reasonably small datasets but may become one when there are many datapoints -obviously the limit depends on the hardware configuration used to train the machine, though in any case efficient training requires strong hardware. The classifier must also be efficient during predictions -here, "efficiency" is meant as speed, as the robustness depends largely on training. Feedforward frameworks, as most of those presented here, present the advantage of being fast compared to more complex frameworks. In linear classifiers such as Perceptrons or linear SVMs, the classification often simply consists in a matrix multiplication, which is now well optimized even on non massively parallel architectures like CPUs, thanks to libraries such as LAPACK [START_REF]Lapack -linear algrebra package[END_REF] or BLAS [START_REF]Blas -basic linear algebra subprogram[END_REF]. The speed of kernel machines, e.g RBF or certain types of SVM, is often directly related to the number of used kernel functions. For instance, the more training examples, the more kernels an RBF net may have (see Appendix A). Particular care must therefore be taken during the training stage of such nets, so that the number of kernels stays to a manageable amount. Finally, ensemble learning frameworks such as Boosting algorithms are often used when speed is critical in an applications, and have been demonstrated to be very efficient in the case of face detection for instance [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF]. Those considerations put aside, according to the literature HMAX is best used with either AdaBoost or SVM classifiers respectively for one-class and multi-class classification tasks [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. Concerning ISCN, it is suggested to use a SVM for prediction [START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF]. Concerning ConvNet, it embeds its own classification stage which typically takes the form of an MLP [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF][START_REF] Lecun | Convolutional networks and applications in vision[END_REF]. Now that the advantages and drawbacks of both the classification and feature extraction frameworks have been stated, the next section proposes a comparison between different implementation techniques. 
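As a concrete closing note on the prediction cost discussed above, the whole prediction step of a linear classifier reduces to one matrix product followed by an argmax. The weights, biases and dimensions below are random placeholders used only for the illustration.

import numpy as np

rng = np.random.RandomState(0)
n_features, n_classes, n_samples = 256, 4, 10

W = rng.randn(n_classes, n_features)   # one weight vector per class
b = rng.randn(n_classes)               # one bias per class
X = rng.randn(n_samples, n_features)   # a batch of feature vectors to classify

scores = X @ W.T + b                   # the whole prediction is one matrix product
predicted_classes = scores.argmax(axis=1)
print(predicted_classes)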
Implementations comparison In order to implement those frameworks a naive approach would be to implement them on a CPU, as it is probably the most widespread computing machine. However that would be particularly inefficient, as those frameworks are highly parallel and that such devices are by nature sequential: a program consists in a list of successive instructions that are run one after the other. Their main advantage, however, is that they are fairly easy to program. For that reason, CPU implementations still remain a quasi-mandatory step when testing a framework. GPUs are also fairly widespread devices, even in mainstream machines. The advent of video games demanding more and more resources dedicated to graphics processing led to a massive production of those devices, which provoked a dramatic drop in costs. For those reasons they are a choice target platforms for many neuromorphic applications. While somewhat more complicated to program that CPUs, the coming of higher level languages such as CUDA made the configuration of GPU reasonably easy to reach. The amount of frameworks using that kind of platforms, and moreover their success show that it is a very popular piece of hardware for that purpose [START_REF] Bastien | Theano: new features and speed improvements[END_REF][START_REF] Collobert | Torch7: A Matlablike Environment for Machine Learning[END_REF][START_REF] Woolley | cuDNN: Efficient Primitives for Deep Learning[END_REF][START_REF] Abadi | TensorFlow: Large-scale machine learning on heterogeneous systems[END_REF][START_REF]CUDA Implementation of a Biologically Inspired Object Recognition System[END_REF]. However, their main disadvantage is their volume and power consumption, the latter being in the order of magnitude of 10 W. For embeddable systems the power consumption should not go beyond 1 W, which is where reconfigurable hardware devices are worth considering. FPGAs present two major drawbacks: they are not as massively produced as GPUs and CPUs, which raises their cost. Their other downside actually goes alongside with their highest quality: they are entirely reconfigurable, from the way the computations are organized to the data coding scheme and such flexibility comes to the price of a higher development time (and thus cost) than CPUs and GPUs. However their power consumption is most of the time below 1 W, and can be optimized with appropriate coding principles. They are also much smaller than GPUs, and the low power consumption leads to colder circuits, which allows to save the energy and space that would normally be required to keep the device at a reasonnable temperature. Furthermore, they are reconfigurable to a much finer grain than GPU, and thus provide even more parallelization as the latters. All these criteria make FPGAs good candidates for embedded implementations of computer vision algorithms. Problem statement The NeuroDSP project presented in Section 1.4 aims to propose an integrated circuit for embedded neuromorphic computations, with high constraints in terms of power consumption, volume and cost. The ideal solution would be to produce the device as an Application Specific Integrated Circuit (ASIC) -however its high cost makes it a realistic choice only in the case where the chip is guaranteed to be sold in high quantities, which may be a bit optimistic for a first model. For that reason, we chose to implement that integrated circuit on an FPGA. 
As one of the aim of NeuroDSP is to be cost-efficient, we aim to propose those neuromorphic algorithms on mid-range hardware. Towards that end, one must optimize them w.r.t two aspects: complexity and hardware resource consumption. The first aspect may be optimized by identifying what part of the algorithm is the most important, and what part can be discarded. A way to address the second aspect of the problem is to optimize data encoding, so that computations on them requires less logic. Those considerations lead to to the following problematics, which shall form the matter of the present document: • How may neuromorphic descriptors be chosen appropriately and how may their complexity be reduced? • How the data handled by those algorithms may be efficiently coded so as to reduce hardware resources? Conclusion In this chapter we presented the works related to the present document. The problematics that we aimed to address were also stated. The aim of the contributions presented here is to implement efficient computer vision algorithms on embedded devices, with high constraints in terms of power consumption, volume, cost and robustness. The primary use case scenario concerns image classification tasks. There exist many theoretical frameworks allowing to classify data, be it images, one dimensional signals or other. Naive algorithms such as Nearest Neighbor have the advantage of being really simple to implement; however they may achieve poor classification performances, and cost too much memory and computational power when used on large datasets. More sophisticated frameworks, such as neural networks, SVMs or ensemble learning algorithms can achieve better results. In order to help the classifier, it is also advisable to use a descriptor, the aim of which is to extract data from the sample to be processed. Among such descriptors figures HMAX, which is inspired by neurophysiological data acquired on mamals. Such frameworks are said to be neuro-inspired, or bio-inspired. Another popular framework is ISCN, which decomposes the input image with particular filters called wavelets. One of the most popular frameworks nowadays is ConvNet, which is a basically a classifier with several preprocessing layers that act as a descriptor. While impressively efficient, it needs to be trained with a huge amount of training data, which is a problem for applications where data is sparse. In such case it may seem more reasonable to use other descriptors, such as HMAX or ISCN, in combination with a classifier. The algorithms mentioned above are most of the time particularly well suited for parallel processing. While it is easier to implement them on CPU using languages such as C, the efficiency gained when running them on massively parallel architecture makes it worth the effort. There exist several frameworks using GPU acceleration, however GPUs are ill-suited for most embedded applications where power consumption is critical. FPGAs are better candidates in those cases, and contributions about implementations on such devices have been proposed. The aim of the work presented in this document is to implement those demanding algorithms on mid-range reconfigurable hardware platforms. To achieve that, it is necessary to adapt them to the architecture. Such study is called "Algorithm-Architecture Matching" (AAM). That need raises two issues: how those frameworks may be reduced, and how the data handled for computation may be efficiently optimised, so as to use as few hardware resources as possible? 
The present document proposes solutions addressing those two questions. Chapter 3 Feature selection This chapter addresses the first question stated in Chapter 2, concerning the optimizations of a descriptor for specific applications. The first contribution presented here is related to a face detection task, while the second one proposes optimizations adapted to a pedestrian detection task. In both cases, the optimization scheme and rational are presented, along with a study of the complexity of major frameworks addressing the considered task. Accuracies obtained with the proposed descriptors are compared to those obtained with the original framework and the described systems of the literature. Those changes in accuracies are then put in perspectives with the computational gain. General conclusions are presented at the end of this Chapter. Feature selection for face detection This Section focuses on a handcrafted feature extractor for a face detection application. We start from a descriptor derived from HMAX, and we propose a detailed complexity analysis; we also determine where lies the most crucial information for that specific application, and we propose optimizations allowing to reduce the algorithm complexity. After reminding the reader of the major techniques used in face detection, we present our contribution, which consists in finding and keeping the most important information extracted by a framework derived from HMAX. Performance comparison with state of the art frameworks are also presented. Detecting faces For many applications, being either mainstream or professional, face detection is a crucial issue. Its more obvious use case is to address security problems, e.g identifying a person may help in deciding whether access should be granted or denied. It may also be useful in human-machine interactions, for instance if a device should answer in some way in case a human user shows particular states, such as distress, pain or unconsciousnessand to do that, the first step is to detect and locate the person's face. In that second scenario we fall into embedded systems, which explains our interest in optimizing face detection frameworks. Among the most used face detection techniques lie Haar-like feature extraction, and as usual ConvNet. We shall now describe the use of those two paradigms in those particular problems, as well as a framework called HMIN which is the basis of our work. Cascade of Haar-like features Before the spreading of ConvNets, one of the most popular framework for face detection was the Viola-Jones algorithm [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF][START_REF] Viola | Robust real-time face detection[END_REF] -it is still very popular, as it is readily implemented in numerous widely used image processing tools, such as OpenCV [START_REF]Itseez. Open source computer vision library[END_REF]. As we shall see, the main advantage of this framework is its speed, and its decent performances. Framework description Viola's and Jones' framework is built along two main ideas [START_REF] Viola | Robust real-time face detection[END_REF]: using easy and fast to compute low-level features -the so-called Haarlike features -in combination with a Boosting classifier that selects and classifies the most relevant features. Classifiers are cascaded so that the most obviously not-face regions of the image are discarded first, allowing to spend more computational time on most promising regions. 
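The structure of such an attentional cascade can be summarized by the following minimal sketch, in C. It only shows the control flow (a boosted score per stage, compared to a rejection threshold); the types, field names and the stage scoring function are illustrative, not those of an actual trained Viola-Jones cascade.

```c
/* Structural sketch of the attentional cascade: each stage computes a boosted
 * sum of a few Haar-like feature responses and rejects the window if that sum
 * falls below its threshold, so that most windows are discarded after very few
 * features. Types and names are illustrative, not from a real trained model. */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    /* boosted sum of the stage's weak classifiers for the given window */
    float (*score)(const void *window);
    float threshold;
} Stage;

bool cascade_is_face(const Stage *stages, size_t n_stages, const void *window)
{
    for (size_t s = 0; s < n_stages; ++s)
        if (stages[s].score(window) < stages[s].threshold)
            return false;        /* early rejection: stop computing features */
    return true;                 /* accepted by every stage */
}
```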
A naive implementation of the Haar-like features may use convolution kernels, consisting of 1 and -1 coefficients, as illustrated on Figure 3.1. Such features may be computed efficiently using an image representation proposed in [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF][START_REF] Viola | Robust real-time face detection[END_REF] called Integral Image. In such representation, the pixel located at (x, y) takes as value the sum of the original image's pixels located in the rectangle defined by the (0, 0) and the (x, y) point, as shown in Figure 3.2. To compute such an image F one may use the following recurrent equation [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF]: F (x, y) = F (x -1, y) + s (x, y) , (3.1) with s (x, y) = s (x, y -1) + f (x, y) (3.2) where f (x, y) is the original image's pixel located at (x, y). Using this representation, the computation of a Haar-like feature may be performed with few addition and subtraction operations. Moreover the number of operations does not depend on the scale of the considered feature. Let's consider first the feature on the left of Figure 3.1, and let's They can be seen as convolution kernels where the grey parts correspond to +1 coefficients, and the white ones -1. Such features can be computed efficiently using integral images [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF][START_REF] Viola | Robust real-time face detection[END_REF]. Point coordinates are presented here for latter use in the equations characterizing feature computations. assume its top-left corner location is (x 1 , y 1 ) and that of its bottom-right corner's is (x 2 , y 2 ). Given the integral image II, its response r l (x 1 , y 1 , x 2 , y 2 ) is given by (x 1 , y 1 ) (x 2 , y 2 ) x 2 , yg + + + (x 1 , y 1 ) (x 2 , y 2 ) xg , y 2 (xw , y 2 ) + + + + r l (x 1 , y 1 , x 2 , y 2 ) = F (x 1 , y g , x 2 , y 2 ) -F (x 1 , y 1 , x 2 , y g ) (3.3) with F (x 1 , y 1 , x 2 , y 2 ) the integral of the values in the rectangle delimited by (x 1 , y 1 ) and (x 2 , y 2 ), expressed as F (x 1 , y 1 , x 2 , y 2 ) = II (x 2 , y 2 ) + II (x 1 , y 1 ) -II (x 1 , y 2 ) -II (x 2 , y 1 ) (3.4) where II (x, y) is the value of the integral images at location (x, y). As for the response r r (x 1 , y 1 , x 2 , y 2 ) of the feature on the right, we have: r r (x 1 , y 1 , x 2 , y 2 ) = F (x w , y 2 , x g , y 1 ) -F (x 1 , y 1 , x g , y 2 ) -F (x w , y 1 , x 2 , y 2 ) (3.5) The locations of the points are shown in Figure 3.1. Once features are computed, they are classified using a standard classifier such as a perceptron for instance. If the classifier does not reject the features as "not-face", complementary features are computed and classified, and so on until either all features are computed and classified as "face", or the image is rejected. This cascade of classifiers allows to reject most non-faces images early in the process, which is one of the main reasons for its low complexity. Now that we described the so-called Viola-Jones framework, we shall study its computational complexity. Complexity analysis Let's now evaluate the complexity involved by that algorithm when classifying images. The first step of the computation of those Haar-like features on an image is then to compute its integral image. According to Equation 3.1 and Equation 3.2, it takes only 2 additions per pixels. 
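The recurrence and the rectangle sums of Equations 3.1 to 3.4 can be sketched as follows in C; the construction indeed performs two additions per pixel, and each rectangle sum then costs four lookups, whatever the scale of the rectangle. Indexing is 0-based row-major, and the recurrence below is the transposed but equivalent form of Eqs. 3.1-3.2; function names are illustrative.

```c
/* Sketch of Eqs. 3.1-3.4: one-pass integral image, constant-time rectangle
 * sums, and a two-rectangle Haar-like feature built from them. */
#include <stdint.h>

void integral_image(const uint8_t *f, int32_t *II, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        int32_t row = 0;                                    /* running row sum */
        for (int x = 0; x < w; ++x) {
            row += f[y * w + x];                            /* 1st addition    */
            II[y * w + x] = row + (y > 0 ? II[(y - 1) * w + x] : 0); /* 2nd   */
        }
    }
}

/* Sum of the pixels with x1 < x <= x2 and y1 < y <= y2 (Eq. 3.4):
 * the top-left corner is exclusive, the bottom-right one inclusive. */
static int32_t rect_sum(const int32_t *II, int w,
                        int x1, int y1, int x2, int y2)
{
    return II[y2 * w + x2] + II[y1 * w + x1]
         - II[y2 * w + x1] - II[y1 * w + x2];
}

/* Two-rectangle feature on the left of Figure 3.1 (Eq. 3.3): difference
 * between the lower and upper halves of the rectangle, split at yg. */
int32_t haar_two_rect(const int32_t *II, int w,
                      int x1, int y1, int x2, int y2)
{
    int yg = (y1 + y2) / 2;
    return rect_sum(II, w, x1, yg, x2, y2)
         - rect_sum(II, w, x1, y1, x2, yg);
}
```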
Then, the complexity C VJ II of this II (X, Y ) = X x=1 Y y=0 f (x, y) + X Y Figure 3.2: Integral image representation. II (X, Y ) is its value of the point coordinated (X, Y ), and f (x, y) the value of the original image at location (x, y) [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF]. process for a w × h image is given by C VJ II = 2wh. (3.6) That serves as the basis of the computation of the Haar-like features, as we saw earlier. The complexity highly depends on the number of computed features, and for this study we shall stick to the implementation proposed in the original paper [START_REF] Viola | Robust real-time face detection[END_REF]. In that work, the authors have a total a 6060 features to compute -however, they also claimed that, given the cascade of classifiers they used, only N f = 8 features are computed in average. From [START_REF] Viola | Robust real-time face detection[END_REF], we now that each feature needs from 6 to 9 operations to compute -we shall consider here that, on average, they need N op = 7.5 operations. We note that, thanks to the computation based on the integral image, the number of operations does not depend on the size of the computed feature. After that, the features are classified -however we focus our analysis on the feature extraction only, so we do not take that aspect into account here. Thus, denoting C VJ F the complexity involved a this stage, we have C VJ F = N op N f . (3.7) In additions, images must be normalized before being processed. Viola et al. proposed in [START_REF] Viola | Robust real-time face detection[END_REF] to normalize the contrast of the image by using its standard deviation σ given by σ = m 2 - 1 N N i=0 x i 2 , (3.8) where m is the mean of the pixels of the image, N = wh is the number of pixels and x i is the value of the i-th pixel. Those values may be computed simply as m = II (W, H) wh (3.9) 1 N N i=0 x i 2 = II 2 (W, H) wh (3.10) where II 2 denotes is the integral image representation of the original image with all its pixels squared. The computation of that integral image needs thus one power operations per pixel, to which we must add the computations required by the integral images, which leads to a total of 3W H operations. Computing m requires a single operation, as computing 1 N N i=0 x i 2 . As the feature computation is entirely linear and since the normalization simply consists in multiplying the feature by the standard deviation, that normalization may simply be applied after the feature computation, involving a single operation per feature. Thus, the complexity C VJ N involved by image normalization is given by C VJ N = 3wh + N f (3.11) From Equations 3.6, 3.7 and 3.11, the framework's global complexity is given by C VJ = C VJ II + C VJ F + C VJ N = 5wh + N op + 1 N f , (3.12) which considering the implementation proposed in [START_REF] Viola | Robust real-time face detection[END_REF], i.e with w = h = 24 and N f = 7.5, leads to a total of 2948 operations. Although strikingly low, it must be emphasized here that that value is an average; when a face is actually detected, all 6060 features must be computed and classified, which then leads to 54,390 operations. However, for fair comparison we shall stick to the average value latter in the document. Now that we evaluated the complexity of the processing of a single w × h image, let's evaluate it in the case where we scan a scene in order to find and locate faces. 
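The scanning scheme whose cost is evaluated next can be sketched as follows: the integral image is built once for the whole W × H frame, and a w × h window classifier is then evaluated at every position, i.e. (W - w + 1)(H - h + 1) windows in total. The classifier is left as an abstract callback and the names are illustrative; only the control structure matters here.

```c
/* Skeleton of single-scale frame scanning: one shared integral image,
 * one classification per window position. */
#include <stdbool.h>
#include <stdint.h>

typedef bool (*WindowClassifier)(const int32_t *II, int W,
                                 int x, int y, int w, int h);

long scan_frame(const int32_t *II, int W, int H, int w, int h,
                WindowClassifier is_face,
                void (*report)(int x, int y))
{
    long n_windows = 0;
    for (int y = 0; y + h <= H; ++y)
        for (int x = 0; x + w <= W; ++x) {
            ++n_windows;
            if (is_face(II, W, x, y, w, h) && report)
                report(x, y);
        }
    return n_windows;   /* equals (W - w + 1) * (H - h + 1) */
}
```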
Normally, one would typically use several sizes of descriptors in order to find faces of different sizes [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF][START_REF] Viola | Robust real-time face detection[END_REF] -however, in order to simplify the study we shall stick here to a single scale. Let W and H respectively be the width and height of the frame to process, and let N w be the number of windows processed in the image. If we classify subwindows at each location of the image, we have The integral images are first computed on the whole 640×480 image; after that, features must be computed, normalized and classified for each window. From Equations 3.6, 3.7, 3.11 and 3.13 we know that we need N w = (W -w + 1) (H -h + 1) (3. C VJ = 2W H + N op N f N w + N f N w (3.14) = 2W H + N f N w N op + 1 (3.15) = 5W H + N f (W -w + 1) (H -h + 1) N op + 1 . (3.16) In the case of a 640 × 480 image, with w = 24, h = 24, N f = 8 and N op = 7.5 as before, we get C VJ = 20.7 MOP. Figure 3.3 shows the repartition of the complexity into several types of computations, considering that we derive from the above analysis that we need 4W H + N o pN f additions and W H multiplications. Memory print Let's now evaluat the memory required by that framework when processing a 640 × 480 image. Assuming the pixels of the integral image are coded on 32 bits integers, the integral image would require 1.2 MB to be stored entirely. Assuming ROIs are evaluated sequentially on the input image, 6060 features are computed at most and each feature is coded as 32-bits integers, we would require 24.24 ko to stores the features. Thus, the total memory print required by that framework would be, in that case, 1.48 MB. That framework also has the great advantage that a single integral image may be used to compute features of various scales, without the need of computing, storing and managing an image pyramid, as required by other frameworks -more information about image pyramids are available in Section 3.1.3.2. We presented the use of Haar-like features in combination with the AdaBoost classifier for face detection task, proposed by Viola and Jones [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF][START_REF] Viola | Robust real-time face detection[END_REF]. We shall now present and analyse an other major tool for this task, which is called CFF. The framework is shown in Figure 3.4. During the prediction stage, it should be noted that the network can in fact process the whole image at once, instead of running the full computation windows by windows [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF][START_REF] Long | Fully Convolutional Networks for Semantic Segmentation[END_REF]. This technique allows to save lots of computations, and is readily implemented if one considers the N1 layer as a convolution filter bank with kernel of size 6 × 7, and the N2 layer like another filter bank with 1 × 1 convolution kernels [START_REF] Long | Fully convolutional networks for semantic segmentation[END_REF]. Complexity analysis Let's now evaluate the complexity involved by the CFF algorithm. 
Denoting C CFF XX the complexity brought by the layer XX, and neglecting the classification as done in Section 3.1.1.1, we have C CFF = C CFF C1 + C CFF S1 + C CFF T 1 + C CFF C2 + C CFF S2 + C CFF T 2 , (3.17) where TX represents a non-linearity layer, where an hyperbolic tangeant is applied to each feature of the input feature map. Let's first evaluate C CFF C1 . It consists in 4 convolutions, which consists mainly in Multiplication-Accumulation (MAC), which we Input C1 S1 C2 S2 • • • • • • • • • • • • • • • N1 N2 Output Figure 3.4: Convolutional Face Finder [50] . This classifier is a particular topology of a ConNet, consisting in a first convolution layer C1 having four trained convolution kernels, a first sub-sampling layer S1, a second convolution layer C2 partially connected to the previous layer's units, a second sub-sampling layer S2, a partially-connected layer N1 and a fully-connected layer N2 with one output unit. assume corresponds to a single operation as it may be done on dedicated hardware. Thus we have C CFF C1 = 4 × 5 × 5 (W -4) (H -4) (3.18) = 100W H -400 (W + H) + 1600. (3.19) Since the S2 layer consists in the computation of means of features in contiguous nonoverlapping receptive fields, this means that each feature is involved once an only once in the computation of a mean, which also requires a MAC operation per pixel. Considering that at this point, we have 4 (W -4) × (H -4) feature maps, and so C CFF S1 = 4 (W -4) (H -4) (3.20) = 4W H -16 (W + H) + 64. (3.21) Now, the non-linearity layer must be applied: an hyperbolic tangeant function is used to each feature of the 4 W S2 × H S2 feature maps, with W S2 = W -4 2 (3.22) H S2 = H -4 2 , (3.23) and thus, considering the best case where an hyperbolic tangent may be computed in a single operation, C CFF T 1 = 4 W -4 2 H -4 2 (3.24) = W H -2 (W + H) + 16 (3.25) The C2 layers consists in 20 convolution, the complexity of which may be derived from 3.18. Then, there are 6 element-wise sums of feature maps, which after the convolutions are of dimensions W -4 2 -2 × H -4 2 -2 , (3.26) and thus we have C CFF C2 = (20 × 3 × 3 + 6 × 3 × 3) W -4 2 -2 H -4 2 -2 (3.27) = 9 × 26 W 2 -4 H 2 -4 (3.28) = 234 W H 4 - 5 2 (W + H) + 16 (3.29) = 58.5W H -585 (W + H) + 3744. (3.30) The complexity in S2 layer may be derived from Equations 3.20 and 3.26, giving C CFF S2 = 3.5W H -28 (W + H) + 224. (3.31) And finally, the complexity of the last non-linearity may be expressed as C CFF T2 = 14W S2 H S2 (3.32) with [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF] and [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF] and as recalled earlier, we know that the features may be efficiently extracted at once in the whole image, by applying all the convolutions and subsampling directly to it. Thus, we may compute that complexity directly by reusing Equation 3.36, and we get 50.7 MOP. W S2 = 1 2 W -4 2 -2 (3.33) H S2 = 1 2 H -4 2 -2 , ( 3 Memory print Let's now evaluate the memory required by the CFF framework. As in Section 3.1.1.1, we shall consider here the case where we process a 640 × 480 image, without image pyramid. The first stage produces 4 636 × 476 feature mapsassuming the values are coded using single precision floating point scheme, hence using 32 bits, that stage requires a total of 4.84 MB. As the non-linearity and subsampling stages may be performed in-place, they do not bring any further need in memory. 
The second convolution stage, however, produces 20 feature maps of size 316 × 236. Using the same encoding scheme as before, we need 59.7 MB. We should also take into account the memory needed by the weights of the convolution and subsampling layers, but it is negligible compared to the values obtained previously. Hence, the total memory print is 64.54 MB. It should be noted that this amount would be much higher in the case where we process an image pyramid, as usually done. However, we stick to an evaluation on a single scale here for consistency with the complexity study. This Section was dedicated to the description and study of the CFF framework. Let's now do the same study on another framework, to which we refer as HMIN.

HMIN

Framework description

In order to detect and locate faces in images, one may use HMAX, which was described in Section 2.1.2.2. However, using that framework to locate an object requires processing different ROIs of the image separately. In such a case, the S2 and C2 layers of HMAX provide little gain in performance, as they are mostly useful for object detection in clutter [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. Considering the huge gain in computational complexity when not using those two last layers, we propose here to use only the first two layers for our application. In the rest of the document, the framework constituted by the S1 and C1 layers of HMAX shall be referred to as HMIN. We presented the so-called HMIN framework, on which we base our further investigations. We shall now study its complexity, along the lines of what we have proposed earlier for Viola-Jones and the CFF.

Complexity analysis

The overall complexity C HMIN involved by the two stages S1 and C1 of HMIN is simply

C HMIN = C HMIN S1 + C HMIN C1, (3.37)

where C HMIN S1 and C HMIN C1 are respectively the complexities of the S1 and C1 layers. The S1 layer consists in a total of 64 convolutions on the W × H input image. Different kernel sizes are involved, but it is important that all feature maps fed to the C1 layer are of the same size. Thus, the convolution must be computed at all positions, even those where the center of the convolution kernel is on the edge of the image. Missing pixels may take any value: either simply 0, or the value of the nearest pixel for instance. Denoting k_i the size of the convolution kernel at scale i presented in the filter column of Table 2.1, we may write

C HMIN S1 = 4 Σ_{i=1}^{16} W H k_i² = 36146 W H. (3.38)

As for the C1 layer, it may be applied as follows: first, the element-wise maximum operations across pairs of feature maps are computed, which take 8WH operations; then we apply the windowed max pooling. Since there is a 50% overlap between the receptive fields of two contiguous C1 units, and neglecting the border effects, each feature of each S1 feature map is involved in 4 computations of maximums. Since those operations are computed on 32 feature maps, and adding the complexity of the first computation, we get

C HMIN C1 = 8WH + 8 × 4WH = 40WH. (3.39)

This Section was dedicated to the presentation of several algorithms suited for face detection, including HMIN, which shall serve as the basis of our work. The next Section is dedicated to our contributions in the effort of optimizing HMIN.

HMIN optimizations for face detection

In this Section we propose optimizations for HMIN, specific to face detection applications. We begin by analysing the output of the C1 layer, and we then propose our simplifications accordingly.
Experimental results are then shown. This work is based on the one presented in [START_REF] Boisard | Optimizations for a bio-inspired algorithm towards implementation on embedded platforms[END_REF], which we pushed further as described below.

C1 output

As HMIN is intended to be a general purpose descriptor, it aims to grasp features of various types. Figure 3.6 shows an example of the C1 feature maps for a face. The eyes, nose and mouth are the most prominent parts of the face, and as such one can expect HMIN to be particularly sensitive to them, since it is based on the mammalian visual system; this can indeed easily be seen in Figure 3.6. One can also see that the eyes and mouth are more salient when θ = π/2, and that the nose is more salient when θ = 0. Furthermore, one can see that the extracted features are redundant across C1 maps of neighboring scales and identical orientations. Due to that redundancy, we propose to sum the outputs of the S1 layer - which is equivalent to summing the remaining kernels of the filter bank to produce one unique 37 × 37 convolution kernel. The smaller kernels are padded with zeros so that they are all 37 × 37 and may be summed element-wise. This operation is summarized in Figure 3.7 (S1 convolution kernel sum: kernels smaller than 37 × 37 are padded with 0's so that they are all 37 × 37, then summed element-wise to produce the merged kernel; it is worth mentioning the proximity of that kernel with one of the features selected by the AdaBoost algorithm in the Viola-Jones framework [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF], shown in Figure 3.1). Figure 3.8 also shows the output of that unique kernel applied to the image of a face. Since we now only have one feature map, we must adapt the C1 layer. As all C1 units now pool over the only remaining scale, we propose to take the median value N_m among the N_s shown in Table 2.1, namely 16, as the width of the pooling window. Following the lines of the original model, the overlap between the receptive fields of two neighbouring C1 units shall be ∆_m = 8. We shall refer to this descriptor as HMIN θ=π/2 later on. Let's now evaluate the complexity involved in this model. We have a single K × K convolution kernel, with K = 37. Applying it to a W × H image thus requires an amount of MAC operations given by

C S1 = (W - K - 1) (H - K - 1). (3.41)

As for the C1 layer, it needs

C C1 = (W - K - 1) (H - K - 1) (3.42)

maximum operations. As for the memory print, since we produce a single (W - K - 1) × (H - K - 1) feature map of single precision floating point numbers, that optimized version of HMIN needs 4 (W - K - 1) (H - K - 1) bytes.

HMIN R θ=π/2

Following what has been done earlier, we propose to reduce the algorithmic complexity even further. Indeed, we process somewhat "large" 128 × 128 face images with a large 37 × 37 convolution kernel. Perhaps we do not need such a fine resolution - in fact, the CFF takes very small 32 × 36 images as inputs. Thus, we propose to divide the complexity of the convolution layer by 16 by simply resizing the convolution kernel to 9 × 9 using a bicubic interpolation, thanks to Matlab's imresize function with the default parameters. Finally, the maximum pooling layer is adapted by dividing its parameters also by 4: the receptive fields are 4 × 4, with 2 × 2 overlaps between two receptive fields. A minimal sketch of the resulting reduced pipeline is given below.
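As announced, here is a minimal sketch of the reduced pipeline, in C. The single 9 × 9 kernel (the resized sum of the S1 filters) is passed in by the caller; the convolution is computed at every position of the 32 × 32 input, with missing border pixels taken as 0 (one of the two options mentioned earlier for the S1 layer), and the 4 × 4 maximum pooling with stride 2 yields the 15 × 15 = 225-component feature vector used by the classifier.

```c
/* Sketch of the reduced HMIN (theta = pi/2) feature extraction:
 * one 9x9 same-size convolution on a 32x32 input, then 4x4 max pooling
 * with stride 2 (50% overlap), producing 15x15 = 225 features. */
#define IN_SZ   32
#define K_SZ     9
#define POOL     4
#define STRIDE   2
#define OUT_SZ  ((IN_SZ - POOL) / STRIDE + 1)   /* 15 */

void hmin_r_features(const float in[IN_SZ][IN_SZ],
                     const float kernel[K_SZ][K_SZ],
                     float out[OUT_SZ * OUT_SZ])
{
    float conv[IN_SZ][IN_SZ];

    /* same-size convolution, zero padding at the borders */
    for (int y = 0; y < IN_SZ; ++y)
        for (int x = 0; x < IN_SZ; ++x) {
            float acc = 0.0f;
            for (int j = 0; j < K_SZ; ++j)
                for (int i = 0; i < K_SZ; ++i) {
                    int yy = y + j - K_SZ / 2;
                    int xx = x + i - K_SZ / 2;
                    if (yy >= 0 && yy < IN_SZ && xx >= 0 && xx < IN_SZ)
                        acc += kernel[j][i] * in[yy][xx];
                }
            conv[y][x] = acc;
        }

    /* 4x4 max pooling, stride 2 */
    for (int oy = 0; oy < OUT_SZ; ++oy)
        for (int ox = 0; ox < OUT_SZ; ++ox) {
            float m = conv[oy * STRIDE][ox * STRIDE];
            for (int j = 0; j < POOL; ++j)
                for (int i = 0; i < POOL; ++i) {
                    float v = conv[oy * STRIDE + j][ox * STRIDE + i];
                    if (v > m) m = v;
                }
            out[oy * OUT_SZ + ox] = m;
        }
}
```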
Hence, our new descriptor, which we shall refer to as HMIN R θ=π/2 later on, expects 32×32 images as inputs, thus providing vector of the exact same dimensionality than HMIN θ=π/2 . The complexity involved by that framework is expressed as C HMIN = C HMIN S1 + C HMIN C1 , (3.43) with C HMIN S1 = 9 × 9 × W × H = 81W H (3.44) C HMIN C1 = 4W H, (3.45) which leads to C HMIN = 85W H. (3.46) As we typically expect 32 × 32 images as inputs, the classification of a single image would take 82.9 kOP. For extracting features of a 640 × 480 as done previously, that would require 26.1 MOP, and the memory print would be the same as for HM IN θ=π/2 assuming we can neglected the memory needed to store the coefficients of the 9 × 9 kernel, hence we need here 1.22 MB. Experiments Test on LFWCrop grey In this Section, we evaluate the different versions of HMIN presented in the previous Section. To perform the required tests, face images were provided by the Cropped Labelled Face in the Wild (LFW crop) dataset [START_REF] Huang | Robust face detection using Gabor filter features[END_REF], which shall be used as positive examples. Negative examples were obtained by cropping patches from the "background" classwhich shall be refered to as "Caltech101-background" -of the Caltech101 dataset [START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF] at random positions. All feature vectors v = (v 1 , v 2 , . . . , v N ) are normalized so that the lower value is set to 0, and the maximum value is set to 1 to produce a vector For each version of HMIN, we needed to train a classifier. We selected 500 images at random from LFW crop and another 500 from Caltech101-background. We chose to use an RBF classifier. The images were also transformed accordingly to the descriptor, i.e resized to 128 × 128 for both HMIN and HMIN θ=π/2 and resized to 32 × 32 images for HMIN R θ=π/2 . The kerneling parameter of the RBF network was set to µ = 2 -see Appendix A for more information about the RBF learning procedure that we used. v = (v 1 , v 2 , . . ., v n ) ∀i ∈ {1, . . . , N } v i = vi max k∈{1,...,N } vk (3.47) ∀i ∈ {1, . . . , N } vi = v i -min k∈{1,...,N } v k (3. After training, 500 positive and 500 negative images were selected at random among the images that were not used for training to build the testing set. All images were, again, transformed w.r.t the tested descriptor, the feature vectors were normalized and classification was performed. Table 3.1 shows the global accuracies for each descriptor, using a naive classification scheme with no threshold in the classification function. Figure 3.9 shows the Receiver Operating Characteristic curves obtained for all those classifiers on that dataset. In order to build those curves, we apply the classification process to all testing images, and for each classification we compare its confidence to a threshold. That confidence is the actual output of the RBF classifier, and indicates how certain the classifier is that its prediction is correct. If the confidence is higher than the threshold, then the classification is kept; otherwise it is rejected. By modifying that threshold, we make the process more or less tolerant. If the network is highly tolerant, then it shall tend to produce higher false and true positive rates; if it is not tolerant, then on the contrary it shall tend to produce lower true and false positive rates. 
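The per-vector normalization of Eqs. 3.47-3.48, applied to every feature vector before classification, amounts to the following short sketch; the guard against a constant vector is an addition of ours to avoid a division by zero.

```c
/* Sketch of Eqs. 3.47-3.48: shift the vector so that its minimum is 0,
 * then scale it so that its maximum is 1. */
void minmax_normalize(float *v, int n)
{
    float lo = v[0], hi = v[0];
    for (int i = 1; i < n; ++i) {
        if (v[i] < lo) lo = v[i];
        if (v[i] > hi) hi = v[i];
    }
    float range = hi - lo;
    if (range == 0.0f)
        return;                    /* constant vector: nothing to scale */
    for (int i = 0; i < n; ++i)
        v[i] = (v[i] - lo) / range;
}
```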
The ROC curves show how the true positive rate evolve w.r.t the false positive rate. Test on CMU The CMU Frontal Face Images [START_REF] Sung | Cmu frontal face images test set[END_REF] dataset consists in grayscale images showing scenes with one or several persons (or characters) facing the camera or sometimes looking slightly away. Sample images are presented in Figure 3.10. It is useful to study the behaviour of a face detection algorithm on whole images, rather than simple classification of whole images in "Face" and "Not Face" categories. In particular, it has been used in the literature to evaluate the precision of the CFF [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF] and Viola-Jones [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF]. We carried out our experiment as follows. We selected 500 images from the LFW crop dataset [START_REF] Gary | Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments[END_REF] and 500 images from the Caltech101-background [START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF] does not significantly alter the accuracy. The drop of performance is to be put in perspective with the saving in terms of computational complexity. RBF using the kerneling parameter µ = 2. The images were all resized to 32 × 32, their histograms were equalized and we extracted features using HMIN R θ=π/2 ; hence the feature vectors have 225 components. After training, all images of the dataset were processed as follows. A pyramid is created from each images, meaning we built a set of the same image but with different sizes. Starting with the original size, the next image's width and height are 1.2 times smaller, which is 1.2 times bigger than the next, and so on until it is not possible to have an image bigger than 32 × 32. Then, 32 × 32 patches were cropped at all positions of all images of all sizes. Patches' histograms were equalized, and we extracted their HMIN R θ=π/2 feature vectors which fed the RBF classifier. We tested the accuracy of the classifications with several tolerance values, and accuracy were compared to the provided ground truth [START_REF] Sung | Cmu frontal face images test set[END_REF]. We use a definition of a correctly detected face close to what Garcia et al. proposed in [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF]: we consider that a detection is valid if it contains both eyes and mouths of the face and the ROI's area is not bigger than 1.2 times the area of the square just wrapping the rectangle delimited by the eyes and mouths, i.e those square and rectangles share the same centroid and the width of the square is as long as the bigger dimension of the rectangle. For each face in the ground truth, we check that it was correctly detected using the aforementioned criterion -success counts as a "true positive", while failure counts as a "false negative". Then, for each region of the image that does not correspond to a correctly detected face, we check if the system classified it as a "not-face" -in which case it counts as a "true negative" -or a face -in which case it counts as a "false positive". Some faces in CMU are too small to be detected by the system, and thus are not taken into account. 
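The matching criterion used in this evaluation can be sketched as follows; the Point and Rect types and field names are illustrative, the logic simply restates the rule given above (the ROI must contain both eyes and the mouth, and its area must not exceed 1.2 times that of the square wrapping the eyes-mouth rectangle).

```c
/* Sketch of the detection-validity criterion described above. */
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y; } Point;
typedef struct { float x, y, w, h; } Rect;   /* top-left corner and size */

static bool contains(Rect r, Point p)
{
    return p.x >= r.x && p.x <= r.x + r.w && p.y >= r.y && p.y <= r.y + r.h;
}

bool is_valid_detection(Rect roi, Point eye_l, Point eye_r, Point mouth)
{
    /* bounding rectangle of the two eyes and the mouth */
    float xmin = fminf(fminf(eye_l.x, eye_r.x), mouth.x);
    float xmax = fmaxf(fmaxf(eye_l.x, eye_r.x), mouth.x);
    float ymin = fminf(fminf(eye_l.y, eye_r.y), mouth.y);
    float ymax = fmaxf(fmaxf(eye_l.y, eye_r.y), mouth.y);

    /* wrapping square: side equal to the larger dimension of that rectangle */
    float side = fmaxf(xmax - xmin, ymax - ymin);

    bool covers      = contains(roi, eye_l) && contains(roi, eye_r)
                    && contains(roi, mouth);
    bool not_too_big = roi.w * roi.h <= 1.2f * side * side;
    return covers && not_too_big;
}
```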
The chosen classifier is an RBF, and was trained with the features extracted from 500 faces from LFW crop [START_REF] Huang | Robust face detection using Gabor filter features[END_REF] dataset and 500 non-faces images cropped from images of the "background" class of the Caltech101 dataset [START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. For each image, a pyramid was produced in order to detect faces of various scales, were the dimensions of the images are successively reduced by a factor 1.2. A face was considered correctly detected if at least one ROI encompassing its eyes, nose and mouth was classified as "face", and if that ROI is not 20% bigger than the face according to the ground truth. Each non-face ROI that was classified as "Face" was counted as a false positive. [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF][START_REF] Viola | Robust real-time face detection[END_REF], and thus are approximate. All false positive rates are obtained with a 90% accuracy. The "Classification" column gives the complexity involved when computing a single patch of the size expected by the corresponding framework which is indicated in the "Input size" column. The "Frame" column indicates the complexity of the algorithm when scanning a 640 × 480 image. The complexities and memory prints shown here only take into account the feature extraction, and not the classification. It should be noted that in the case of the processing of an image pyramid, both CFF and HMIN would require a much higher amount of memory. Test on Olivier dataset In order to evaluate our system in more realistic scenarios, we created our own dataset specifically for that task. We acquired a video on a fixed camera of a person moving in front of a non-moving background, with his face looking at the camera -an example of a frame extracted from that video are presented in Figure 3.12. The training and evaluation procedure is the same as in Section 3.1.3.2: we trained an RBF classifier with features extracted with HMIN R θ=π/2 from 500 images of faces from the LFW crop dataset [START_REF] Huang | Robust face detection using Gabor filter features[END_REF], and from 500 images cropped from images of the "background" class of the Caltech101 dataset [START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. We labeled the location of the face for each image by hand, so that the region takes both eyes and the mouth of the person, and nothing more, in order to be consistent with the CMU dataset [START_REF] Sung | Cmu frontal face images test set[END_REF]. Correct detections and false positives were evaluated using the same method as in Section 3.1.3.2: a face is considered as correctly detected if at least one ROI encompassing its eyes and mouth is classified as "face", and if that ROI is not more than 20% bigger than the face according to the ground truth. Each non-face ROI classified as a face is considered to be a false positive. With that setting up, we obtained a 2.38% error rate for a detection rate of 79.72% -more detailed results are shown on Figure 3.13. Furthermore, we process the video frame by frame, without using any knowledge of the results from the previous images. ROC curves obtained with HMIN R θ=π/2 on "Olivier" dataset. 
As in Figure 3.11, the chosen classifier is an RBF, and was trained with the features extracted from 500 faces from LFW crop [START_REF] Huang | Robust face detection using Gabor filter features[END_REF] dataset and 500 non-faces images cropped from images of the "background" class of the Caltech101 dataset [START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. For each image, a pyramid was produced in order to detect faces of various scales, were the dimensions of the images are successively reduced by a factor 1.2. An image was considered correctly detected if at least one ROI encompassing its eyes, nose and mouth was classified as "face", and if that ROI is not 20% bigger than the face according to the ground truth. Each non-face ROI that was classified as "Face" was counted as a false positive. a pedestrian detection application. Feature selection for pedestrian detection In this Section, we aim to propose a descriptor for pedestrian detection applications. The proposed descriptor is based on the same rational than in Section 3.1. Comparison in terms of computational requirements and accuracy shall be established between two of the most popular pedestrian detection algorithms. Detecting pedestrians With the arrival of autonomous vehicles, pedestrian detection rises as a very important issue nowadays. It is also vital in many security applications, for instance to detect intrusions in a forbidden zone. For this last scenario, one could think that a simple infrared camera could be sufficient -however such a device cannot determine by itself whether a hot object is really a human or an animal, which may be a problem in videosurveillance applications. It is then crucial to provide a method allowing to make that decision. In this Section, we propose to use an algorithm similar to the one presented in Section 3.1.1, although this time it has been specifically optimized for the detection of pedestrian. One of the state of the art systems -which depends greatly on the considered dataset -is the work proposed by Sermanet et al. [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF], in which they tuned a ConvNet for this specific task. However, as we shall see it requires lots of computational power, and we intend to produce a system needing as few resources as possible. Thus, we compare our system to another popular descriptor called HOG, which has proven efficient for this task. We shall now describe those two frameworks, then we shall study their computational requirements. HOG Histogram of Oriented Gradients (HOG) is a very popular descriptor, particularly well suited to pedestrian detection [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]. As its name suggests, it consists in computing approximations of local gradients in small neighborhoods of the image and use them to build histograms, which indicates the major orientations across small regions of the image. Its popularity comes from its very small algorithmic complexity and ease of implementation. We focus here on the implementation given in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], assuming RGB input images as for the face detection task presented in Section 3.1.1. The first step is to compute gradients at each position of the image. 
Each gradient then contributes by voting for the global orientation of its neighborhood. Normalization is then performed across an area of several of those histograms, thus providing the HOG descriptor that shall be used in the final classifier, typically SVM with linear kernels, that shall decides whether the image is of a person. Gradients computation Using the same terminology as in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], we are interested in the so called "unsigned" gradients, i.e we are not directly interested into the argument θ of the gradient, but rather θ mod π. Keeping that in mind, in order to compute the gradient at each location, we use an approximation implying convolution filters. All gradients are computed separately for each R, G and B channels -for each location, the only the gradient with the highest norm is kept. Two feature maps H and V are produced from the input image respectively using the kernels [-1, 0, 1] and [-1, 0, 1] T . At each location, values across the two feature maps at the same location may be seen as components of the 2D gradients, which we can use to compute their arguments and norms. Respectively denoting G (x, y), φ [0,π] ((x, y)) and G (x, y) the gradient at location (x, y), its "unsigned argument" and its norm, and H (x, y) and V (x, y) the features from H and V feature maps at location (x, y), we have G (x, y) = H (x, y) 2 + V (x, y) 2 (3.49) φ [0,π] (G (x, y)) = arctan V (x, y) H (x, y) mod π (3.50) The result of that process is shown in Figure 3.14. It is important to note here that the convolutions are performed so that the output feature maps have the same width and height as the input image. This may be ensured by cropping images slightly bigger than actually needed, or by padding the image with 1 pixel at each side of its side with 0's or replicating its border. Binning Now that we have the information we need about the gradients, i.e their norms and arguments, we use them to perform the non-linearity proposed in this framework. The image is divided in so-called cells, i.e non-overlapping regions of N c × N c pixels, as illustrated in Figure 3.14. For each cell, we compute an histogram as follows. The half-circle of unsigned angles is evenly divided into B bins. The center c i of the i-th bin is given by the centroid of the bin's boundaries, as shown in Figure 3.15. Each gradient in the cell votes for the two bins with the centers closest to its argument. Calling those bins c l and c h , the weights of its votes w l and w h depend on the difference between its argument and the bin center, and on its norm: w h = |G (x, y)| φ (G (x, y)) -c l c h -c l (3.51) w l = |G (x, y)| φ (G (x, y)) -c h c h -c l (3.52) We end up having a histogram per cell. Assuming the input image is of size W × H and that N c both divide W and H, we have a total of W H/N c 2 histograms. We associate each histogram to its corresponding cell to build a so called histogram map. Local normalization The last step provides some invariance to luminosity among histograms. The histogram map is divided into overlapping blocks, each having 2 × 2 histograms. The stride between two overlapping blocks is 1 so that the whole histogram map is covered. 
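Before completing the description of the block normalization, the gradient and cell-binning stages described above can be summarized by the following sketch, written for a single-channel W × H image for brevity (the colour case keeps, at each pixel, the channel whose gradient has the largest norm). B = 9 bins and 8 × 8 cells are assumed, with W and H multiples of the cell size.

```c
/* Sketch of the HOG gradient and cell-binning stages (Eqs. 3.51-3.52):
 * [-1,0,1] gradients, unsigned orientation in [0, pi), and a bilinear vote
 * of each gradient between the two closest bin centres of its cell. */
#include <math.h>

#define HOG_PI 3.14159265358979f
#define B      9
#define CELL   8

void hog_cell_histograms(const float *img, int W, int H,
                         float *hist /* (W/CELL) * (H/CELL) * B values */)
{
    int cw = W / CELL, ch = H / CELL;
    for (int i = 0; i < cw * ch * B; ++i)
        hist[i] = 0.0f;

    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            /* [-1, 0, 1] kernels, borders replicated */
            float gx = img[y * W + (x + 1 < W ? x + 1 : x)]
                     - img[y * W + (x > 0 ? x - 1 : x)];
            float gy = img[(y + 1 < H ? y + 1 : y) * W + x]
                     - img[(y > 0 ? y - 1 : y) * W + x];
            float mag = sqrtf(gx * gx + gy * gy);
            /* unsigned orientation in [0, pi) */
            float phi = fmodf(atan2f(gy, gx) + HOG_PI, HOG_PI);

            /* bilinear vote between the two closest bin centres */
            float t    = phi / (HOG_PI / B) - 0.5f;
            int   lo   = (int)floorf(t);
            float frac = t - (float)lo;
            int   bl   = ((lo % B) + B) % B;
            int   bh   = (bl + 1) % B;
            int   cell = (y / CELL) * cw + (x / CELL);
            hist[cell * B + bl] += mag * (1.0f - frac);
            hist[cell * B + bh] += mag * frac;
        }
}
```

Coming back to the block normalization: each block groups 2 × 2 neighbouring cell histograms.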
All the bins' values of those histograms form a vector v (x h , y h ) having BN 2 b components where (x h , y h ) is the location of the top-left corner's of the block in the histogram map frame coordinate, and we compute its normalized vector v (x h , y h ) = v 1 (x h , y h ) , v 2 (x h , y h ) , . . . , v N b 2 (x h , y h ) using the so called L2-norm [36] normalization: ∀i ∈ 1, . . . , BN b 2 v i (x h , y h ) = min   v i (x h , y h ) v (x h , y h ) 2 + 2 , 0.2   (3.53) where is a small value avoiding divisions by 0. Thus we obtain a set of vectors v (x h , y h ), which are finally concatenated in order to form the feature vector fed in a SVM classifier. Complexity analysis Let's evaluate the complexity of extracting HOG features from an W × H image. As we saw, the first step of the extraction is the convolutions, that require of 6W H operations per channel, followed by the computation of their squared norms, which requires 3W H operations per channel; thus at this point we need 3(3 + 6)W H = 27W H operations. Afterward, we need to compute the maximum values across the three channels for each location, thus leading to 2W H more operations. Finally, we must compute the gradients, which we assume involves one operation for the division, one operation for the arc-tangent and one for the modulus operation; hence 3W H more operations. Thus, the total amount of operations at this stage is given by C HOG grad = 32W H (3.54) Next, we perform the binning. We assume that finding the lower and higher bins takes two operations: one for finding the lower bin, and another one to store the index of the higher bin. From Equation 3.51, we see that computing w h takes one subtraction and one division, assuming c h -c l is pre-computed, to which we add one operation for the multiplication with |G (x, y)|, thus totaling 3 operations. The same goes for the computation of w l . Finally, w h and w l are both accumulated to the corresponding bins, requiring both one more operations. This done at each location of the feature maps, thus this stage needs a total of operations of C HOG hist = 8W H. ( 3 N p = (W h -1) × (H h -1) (3.56) positions, with of each component of the vector by a scalar, and finally a comparison. Since the sum and the square root may be considered to take a single operation, which is very small compared to the total, we chose to neglect it to make the calculation more tractable. W h = W 8 , (3.57) H h = H 8 . ( 3 The Euclidean distance itself requires one subtraction followed by a MAC operation per component. Thus, extracting features from a 64 × 128 image as suggested in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF] takes 344.7 kOP. When scanning an image to locate pedestrians, we may use the same method as usual [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF][START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF]. Using Equation 3.63 on a 640 × 480 image, we get a complexity of 12.96 MOP. Repartitions of the computational efforts are presented in Figure 3.16. Memory print Let's now evaluate the memory print required by the extraction of HOG features for a 640 × 480 input image. When computing the gradients, the first step consists in performing 2 feature maps from convolutions, of the same size of the input image. 
We consider here each feature of the feature maps shall be coded as 16 bits integers, hence we need 2 × 2 × 640 × 480 = 1.23 × 10 6 bytes at this stage. Then, the modulus and arguments of the gradients are computed at each feature location. We assume here that that data shall be stored using single precision floating point scheme; hence 32 bits per value, and then we need 2.45 MB. As for the histograms, since there is no overlaps between cells, they may be evaluated in-place -hence, they do not bring more memory requirement. Finally comes the memory needed by the normalization stage; assuming we neglect the border effect, one normalized vector is computed at each cell location, which correspond to 8 × 8 areas in the original image. Hence, 4800 normalized vectors are computed, each having 36 component, which leads to 691.2 kB. Thus, the memory print of the HOG framework is 4.37 MB. We presented an analysed the HOG algorithm for pedestrian detection. In the next Section, we describe a particular architecture of a ConvNet optimized for that same task. ConvNet As for many other applications, ConvNet have proven very efficient for pedestrian detection. Sermanet et al. proposed in [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF] a ConvNet specifically designed for that purpose. Presentation We now review the architecture of that system, using the same notations as in Section 3.1.1.2. First of all, we assume images use the Y'UV representation. In this representation, the Y channel represents the luma, i.e the luminosity, while the U and V channels represent coordinates of a color in a 2D space. The Y channel is processed separately from the UV channels in the ConvNet. The Y channel first goes through the C Y 1 convolution stage which consists in 32 kernels, all 7 × 7, followed by an absolute-value rectification -i.e we apply a point-wise absolute value function on all output feature maps [START_REF] Kavukcuoglu | Learning convolutional feature hierarchies for visual recognition[END_REF] -followed by a local constrast normalization which is performed as followed [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF]: v i = m i -m i w (3.64) σ = N i=1 w v i 2 (3. 65 ) y i = v i max (c, σ) (3.66) where m i is the i-th un-normalized feature map, denotes the convolution operator, w is a Gaussian blur 9 × 9 convolution kernel with normalized weights, N is the number of are concatenated to form the feature vector to be classified, which is performed with a classical linear classifier. That architecture is sum-up in Figure 3.17. C Y 1 C U V 1 Y S U V 0 UV S Y 1 C2 S2 F u Complexity analysis Let's now evaluate the amount of operations needed for a W × H Y'UV image to be processed by that ConvNet. Denoting C X the complexity involved in layer X and along the lines of the calculus done in Section 3.1.1.2, we have C C Y 1 = 32 × 7 × 7 × (W -6) (H -6) (3.67) C S Y 1 = 32 × 9 × W -6 3 H -6 3 (3.68) C S U V 0 = 2 × 9 × W 3 H 3 (3.69) C C2 = 2040 × 9 × 9 × 2 × (W S U V 0 -8) (H S U V 0 -8) (3.70) C S2 = 68 × 2 × 2 W C2 2 H C2 2 (3.71) where W X and H X respectively denote the width and height of the X feature maps. The C U V 1 layer has full connection between its input and output feature maps. Thus, denoting N I and N O respectively the number of input and output feature maps, a total of N I N O convolutions are performed. 
Inside this layer, this produces N I N O feature maps, which are sum feature-wise so as to produce the N O output feature maps. This leads to C U V 1 = 2 × 6 × 6 × (W S U V 0 -4) (H S U V 0 -4) . (3.72) We shall now evaluate the complexity involved by the absolute value rectifications which are performed on the C Y 1 and C2 feature maps. It needs one operation per feature, thus denoting C (A X ) the complexity involved by those operations on feature map X we have C A C Y 1 = 32W C Y 1 H C Y 1 (3.73) C A C U V 1 = 6W C U V 1 H C U V 1 (3.74) C A C2 = 68W C2 H C2 . (3.75) Finally, we evaluate the complexity brought by the local contrast normalizations. From Equations 3.64, 3.65 and 3.66, we see that the first step consists in a convolution by a 9 × 9 kernel G followed by a pixel-wise subtraction between two feature maps. Assuming the input feature map is w × h and that the convolution is performed so that the output feature map is the same size as the input feature map, the required amount of operations at this step is given by C N 1 (w, h) = 2 × 9 × 9 × wh = 162wh. (3.76) The second step involves squaring up each feature of the wh output feature maps, which implies wh operations. The result is again convolved with G, implying 81wh operations, and the resulting feature are sum feature-wise across N feature maps, implying nwh sums. Finally, we produce a "normalization map" by taking the square root of all features, which involves wh operations assuming a square root takes only one operation. Hence: C N 2 (w, h, n) = (83 + n) wh (3.77) The final normalization step consists in computing, for each feature of the normalization map, the maximum value between that feature and the constant c, which leads to wh operations, and perform feature-wise divisions between the N maps computed in Equation 3.64 and those maximums, which leads to nwh operations. Thus we have C N 3 (w, h, n) = (1 + n) wh, (3.78) and the complexity brought by a local contrast normalization on n w × h feature maps is given by C N (w, h, n) = (246 + 2n) wh. (3.79) The overall complexity is given by C ConvNet = C C Y 1 + C S Y 1 + C S U V 0 + C C2 + C S2 + C A C Y 1 + C A C U V 1 + C A C2 + C N (W C Y 1 , H C Y 1 , 32) + C N (W C U V 1 , H C U V 2 , 6) + C N (W C1 , H C2 , 68) (3.80) which leads to C ConvNet = 1568W C Y 1 H C Y 1 + 288W 1 H 1 + 18W S U V 0 H S U V 0 + 330480W C2 H C2 + 272W S2 H S2 + 24W 1 H 1 + 342W C Y 1 H C Y 1 + 264W C U V 1 H C U V 1 + 450W C2 H C2 (3.81) with W C Y 1 = W -6 (3.82) H C Y 1 = H -6 (3.83) W S U V 0 = W 3 (3.84) H S U V 0 = H 3 (3.85) W 1 = W S Y 1 = W C U V 1 = W C Y 1 3 = W C U V 0 -4 (3.86) H 1 = W S Y 1 = H C U V 1 = H C Y 1 3 = H C U V 0 -4 (3.87) W C2 = W 1 -8 (3. 88 ) H C2 = H 1 -8 (3.89) W S2 = W C2 2 (3.90) H S2 = H C2 2 (3. HMAX optimizations for pedestrian detection We propose optimizations along the lines of what was explained in Section 3.1.2. When we were looking for faces, we hand-crafted the convolution kernel so that it responded best to horizontal features, in order to extract eyes and mouths for instance. However, in the case of pedestrians it intuitively seems more satisfactory to detect vertical features. Thus, we propose to keep the same kernel as represented in Figure 3.7, but flipped by 90 • . As in Section 3.1.2, we have two descriptors: HMIN θ=0 and HMIN R θ=0 . 
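The kernel reorientation mentioned just above is a trivial operation; as a sketch, a 90-degree rotation of a square kernel can be written as follows (counter-clockwise here; for this roughly symmetric merged kernel the direction of rotation is unimportant).

```c
/* Sketch of the 90-degree rotation turning the horizontal-structure kernel
 * of Figure 3.7 into a vertical-structure one. */
#define K 37   /* 9 for the reduced HMIN R version */

void rotate90(const float in[K][K], float out[K][K])
{
    for (int y = 0; y < K; ++y)
        for (int x = 0; x < K; ++x)
            out[y][x] = in[x][K - 1 - y];
}
```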
For consistency reasons with what was done for faces in Section 3.1.2 and with the HOG [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF] and ConvNet [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF] algorithms, HMIN θ=0 expects 64 × 128 input images and consists in a single 37 × 37 convolution kernel. As for HMIN R θ=0 , it expects 16 × 32 inputs and consists in a 9 × 9 convolution kernel. Experiments In order to test our optimizations, we used the INRIA pedestrian dataset, originally proposed to evaluate the performances of the HOG algorithm [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]. That dataset is divided in two subsets: a training set and a testing set. Hence, we simply trained the system described in Section 3.2.2 on the training set and evaluated it on the testing set. Results are shown in Figure 3.18, which is a ROC curve produced as done for faces in Section 3.1.3.1. All images were resized to 16 × 32 before process. Comparisons with HOG and ConvNet features are shown in Table 3.3. In this Section, we proposed and evaluated optimizations for the so-called HMIN descriptor applied to pedestrian detection. Next Section is dedicated to a discussion about the results that we obtained both here, and in the previous Section which was related to face detection. Discussion Let's now discuss the results obtained in the two previous Sections, where we described a feature extraction framework and compared its performance, both in terms of accuracy and complexity, against major algorithms. The drop of performance is more important here than it was for faces, as shown on Figure 3.9. However, the gain in complexity is as significant as in Section 3. Table 3.3: Complexity and accuracy of human detection frameworks. The false positive rate of the HOG has been drawn from the DET curve shown in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], and thus is approximate. The false positive rates presented here correspond to a 90% detection rate. As in Table 3.2, the "Classification" column gives the complexity involved when computing a single patch of the size expected by the corresponding framework which is indicated in the "Input size" column. The "Frame" column indicates the complexity of the algorithm when applied to a 640 × 480 image. Furthermore, the complexities involved by HMIN are computed from Equation 3.46, with the input size shown in the column on the right. Finally the result of the ConvNet may not be shown here as their strategy for evaluating it is different from what was done in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF] -using the evaluation protocol detailed in [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF], HOG produces approximately three times as many false positives as ConvNet. Furthermore, the miss rate of the HOG was determined on a scene-scanning task, while we evaluated our framework on a simpler classification task. Thus, comparisons of the accuracy of those frameworks are difficult, although the preliminary results presented here show a clear disadvantage in using HMIN R θ=0 . Finally, the complexities and memory prints shown here only take into account the feature extraction, and not the classification. 
It should also be noted that both are evaluated without image pyramid, and that in that case they would be much higher than evaluated here. Results of our framework are sum up in Table 3.2 for face detection applications and in Table 3.3 for pedestrian detection application. First of all, we see from the ROC curves shown in Figures 3.11 and 3.18 that the accuracy of our framework is significantly bigger for face detection tasks than for pedestrian detection task -although comparing performances on two different tasks is dangerous, those results seem to indicate that our framework would operate much better in the first case. However, the raw accuracy is significantly lower than those of the other frameworks presented here, be it for face or human detection. This is probably due to the fact that our frameworks HMIN R θ=x 2 produce features that are much simpler than those of the other frameworks -indeed, the feature vector for a 32 × 32 input image has only 225 components. Among all other frameworks, the only other that may be considered better to that respect is Viola-Jones, where on average only 8 features are computed, although in the worst case that amount rises dramatically to 6060. Nevertheless, Viola-Jones and the HOG algorithms are both slightly less complex than HMIN θ = x R . There is also a consequent literature about their implementations on hardware [START_REF] Mizuno | Architectural Study of HOG Feature Extraction Processor for Real-Time Object Detection[END_REF][START_REF] Jacobsen | FPGA implementation of HOG based pedestrian detector[END_REF][START_REF] Kadota | Hardware Architecture for HOG Feature Extraction[END_REF][START_REF] Hahnle | Fpga-based real-time pedestrian detection on high-resolution images[END_REF][START_REF] Hsiao | An FPGA based human detection system with embedded platform[END_REF][START_REF] Negi | Deep pipelined one-chip FPGA implementation of a real-time image-based human detection algorithm[END_REF][START_REF] Komorkiewicz | Floating point HOG implementation for real-time multiple object detection[END_REF][START_REF] Kelly | Histogram of oriented gradients front end processing: An FPGA based processor approach[END_REF][START_REF] Tam | Implementation of real-time pedestrian detection on FPGA[END_REF][START_REF] Lee | HOG feature extractor circuit for realtime human and vehicle detection[END_REF][START_REF] Chen | An Efficient Hardware Implementation of HOG Feature Extraction for Human Detection[END_REF][START_REF] Karakaya | Implementation of HOG algorithm for real time object recognition applications on FPGA based embedded system[END_REF][START_REF] Kadota | Hardware Architecture for HOG Feature Extraction[END_REF][START_REF] Ngo | An area efficient modular architecture for real-time detection of multiple faces in video stream[END_REF][START_REF] Cheng | An FPGA-based object detector with dynamic workload balancing[END_REF][START_REF] Gao | Novel FPGA based Haar classifier face detection algorithm acceleration[END_REF][START_REF] Das | Modified architecture for real-time face detection using FPGA[END_REF]. In particular, the main difficulties of the HOG algorithm for hardware implementations, i.e the highly non-linear computations of the arc-tangents, divisions and square roots, have been addressed in [START_REF] Kadota | Hardware Architecture for HOG Feature Extraction[END_REF]. 
As for the CFF, it was also optimized and successfully implemented on hardware devices [START_REF] Farrugia | Fast and robust face detection on a parallel optimized architecture implemented on FPGA[END_REF] and on embedded processors [START_REF] Roux | Embedded Convolutional Face Finder[END_REF][START_REF] Roux | Embedded facial image processing with Convolutional Neural Networks[END_REF]. However, one can expect HMIN R θ=x to be implemented easily on FPGA, with really low resource utilization -that aspect shall be tested in future development. Furthermore, the only framework that beats HMIN in terms of memory-print is Viola-Jones -that aspect is crucial when porting an algorithm on an embedded systems, especially in industrial use cases where constraints may be really high in that respect. Furthermore, while HMIN R θ=x may not seem as attractive as the other frameworks presented here, it has a very interesting advantage: it is generic. Indeed, both ConvNet implementations presented in this Chapter were specifically designed for a particular task: face detection or pedestrian detection. As for Viola-Jones, it may be used for tasks other than face detection as was done for instance for pedestrian detection [START_REF] Viola | Detecting pedestrians using patterns of motion and appearance[END_REF] -however, a different task might need different Haar-like features, which would be implemented differently than the simple ones presented in Section 3.1.1.1. In terms of hardware implementation, that difference would almost certainly mean code modifications, while with HMIN R θ=x one would simply need to change the weights of the convolution kernel. Concerning the HOG, it should be as generic as HMIN -however it suffers from a much greater memory print. 2 HMIN R θ=x refers to both HMIN R θ=0 and HMIN R θ=π/2 . Finally, researchers have also proposed other optimization schemes for HMIN [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF][START_REF] Chikkerur | Approximations in the HMAX Model[END_REF] -future research shall focus on comparing our work with the gained one can expect with their solutions, as well as use a common evaluation scheme for the comparison of HMIN R θ=0 with other pedestrian detection algorithms. Conclusion In this Chapter, we presented our contribution concerning the optimization of a feature extraction framework. The original framework is based on an algorithm called HMAX, which is a model of the early stages of the image processing by the mamal brain. It consists in 4 scales, called S1, C1, S2 and C2 -however, in the use case scenarions presented here the S2 and C2 layers do not provide much more precision, but are by far the most costly in terms of algorithm complexity. We thus chose to keep only the S1 and C1 layers, respectively consisting in a convolution filter bank and max-pooling operations. We explored how the algorithm behaved when diminuing its complexity, by reducing the number and sizes of the linear and max-pooling filters, by estimating where the most relevant information is located. We replaced the initial 64 filters in the S1 layer with only one, the size of which is 9 × 9. It expects 32 × 32 grayscale images as inputs. The nature of the filter depends on the use case: for faces, we found that most saliencies lie in the eyes and mouth of the face, thus we chose a filter responding to horizontal features. 
As for the use case of human detection, we assume that pedestrians are standing up, which intuitively made us use a filter responding to vertical features. In both cases, we compared the results with standard algorithms having reasonable complexities. Optimizing out the HMIN descriptor provoked a drop in accuracy of 5.73 points on the face detection task on the CMU dataset, and 21.91 points on the pedestrian detection task when keeping a false positive rate of 10%. However, that drop of performance is to be put in perspective with the gain in complexity: after optimizations, the descriptor is 429.12 less complex to evaluate. In spite of everything, that method does not provide results as good as other algorithms with comparable complexities, e.g Viola-Jones for face detection -as for pedestrian detection, we need to perform complementary tests with common metrics for the comparison of that system with the state of the art, but the results presented here tend to show that that algorithm is not well suited for this task. However, we claim that our algorithm provides a low memory print and is more generic than the other frameworks, which make it implementable on hardware with fewer resources, and should be easy to adapt for new tasks: only the weights of the convolution kernel are to be changed. This Chapter was dedicated to the proposition of optimizations for a descriptor. Next Chapter will present another type of optimizations, not based on the architecture of the algorithm, but on the encoding of the data, with implementation on a dedicated hardware. As we shall see, those optimizations are much more efficient and promising, and may easily be applied to other algorithms. Chapter 4 Hardware implementation This chapter addresses the second question stated in Chapter 2, about the optimization of the HMAX framework with the aim of implementing it on a dedicated hardware platform. We begin by exposing the optimizations that we used, coming both from our own work and from the literature. In particular, we show that the combination of all those optimizations does not bring a severe drop in accuracy. We then implement our optimized HMAX on an Artix-7 FPGA, as naively as possible, and we compare our results with those of the state of the art implementation. While our implementation achieves a significantly lower throughput, we shall see that it uses much less hardware resources. Furthermore, our optimizations are fully compatible with those of the state of the art, and future implementations may profit from both contributions. Algorithm-Architecture Matching for HMAX In the case of embedded systems, having an implemented model in a high-level language such as Matlab is not enough. Even an implementation using the C language may not meet the particular constraints that are found in critical systems, in terms of power consumption, algorithmic complexity and memory print. This is particularly true in the case of HMAX, where the S2 layer in particular may take several seconds to be computed on a CPU. Furthermore, GPU implementations are most of the time not an option, as GPUs often have a power consumption in the order of magnitude of 10 W. In the fields of embedded systems, we look for systems consuming about 10 to 100 mW. This may be achieve thanks to FPGAs, as was done in the past [91-96, 98, 99]. 
This Chapter proposes a detailed review of one of those implementations; the other ones are either based on architectures with multiple high-end FPGAs or focus on accelerating only a part of the framework, and thus they are hardly comparable with what we aim to do here. Orchard et al. proposed in [99] a complete hardware implementation of HMAX on a single Virtex-6 ML605 FPGA. To achieve this, the authors proposed optimizations of their own, which mostly concern the way the data is organized and not so much the encoding and the precision degradation; indeed, the data coming out of S1 and carried throughout the processing layers is coded on 16 bits. We shall now review the main components of their implementation, i.e. the four modules implementing the behaviours of S1, C1, S2 and C2. The layers are pipelined, so they may process streamed data. As for the classification stage, it is not directly implemented on the FPGA and should be taken care of on a host computer. The results of that implementation are presented afterwards.

Description

S1

First of all, the authors showed how all filters in S1 may be decomposed as separable filters, or as sums of separable filters. Indeed, if we consider the "vertical" Gabor filters in S1, i.e. those with θ = π/2, Equations 2.8 and 2.9 lead to [99]

G(x, y)|_{θ=π/2} = exp(−(x² + γ²y²)/(2σ²)) cos(2πx/λ)   (4.1)
              = exp(−x²/(2σ²)) cos(2πx/λ) × exp(−γ²y²/(2σ²))   (4.2)
              = H(x) V(y)   (4.3)

with

H(x) = exp(−x²/(2σ²)) cos(2πx/λ)   (4.4)
V(y) = exp(−γ²y²/(2σ²)).   (4.5)

Let us now focus on the filters having "diagonal" shapes. As shown in [99] and following the same principles as before (with ∗_c and ∗_r denoting column-wise and row-wise 1-D convolutions), we may write

I ∗ G|_{θ=π/4} = I ∗_c H ∗_r H + I ∗_c U ∗_r U   (4.9)
I ∗ G|_{θ=3π/4} = I ∗_c H ∗_r H − I ∗_c U ∗_r U   (4.10)

with

U(x, y) = exp(−x²/(2σ²)) sin(2πx/λ).   (4.11)

The benefits of using separable filters are twofold. First, the memory prints of those filters are much smaller than those of their unoptimized counterparts: storing an N × N filter naively requires N² words, while the separated versions only require 2N words for G|_{θ=0} and G|_{θ=π/2}, and 3N words for G|_{θ=π/4} and G|_{θ=3π/4}. The other benefit is related to the algorithmic complexity: performing the convolution of a W_I × H_I image by a W_K × H_K kernel has an O(W_I H_I W_K H_K) complexity, while for separable filters it goes down to O(W_I H_I (W_K + H_K)). According to [99], doing so reduces the complexity from 36,146 MAC operations to 2,816 MAC operations. In order to provide some invariance to luminosity, Orchard et al. also use a normalization scheme based on the l2 norm. Mathematically, computing that norm consists in taking the square root of the sum of the squared pixels. Gabor filters were thus normalized so that their l2 norms equal 2^16 − 1, and so that their means are null.

C1

Let us consider a C1 unit with a 2∆ × 2∆ receptive field. The max-pooling operations are performed as follows: first, maxima are computed over ∆ × ∆ neighbourhoods, producing an intermediate feature map M_t. Second, the outputs of the C1 units are obtained by pooling over 2 × 2 windows of M_t with an overlap of 1. This elegant method avoids storing values that would have been discarded anyway, as the data is processed as it is provided by S1, in a pipelined manner.

S2

In the original model, it is recommended to use 1000 pre-learnt patches in S2.
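To illustrate the separability argument, the following NumPy/SciPy sketch checks numerically that filtering with the rank-1 kernel H(x)V(y) is equivalent to a column-wise pass followed by a row-wise pass; the parameter values (σ, γ, λ, kernel size) are arbitrary placeholders, not those of [99].

```python
import numpy as np
from scipy.signal import convolve2d

# Illustrative parameters only; the actual values per scale come from the model.
size, sigma, gamma, lam = 11, 4.5, 0.3, 5.6
x = np.arange(size) - size // 2

H = np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * x / lam)   # Eq. 4.4
V = np.exp(-gamma**2 * x**2 / (2 * sigma**2))                      # Eq. 4.5
G = np.outer(V, H)               # 2-D "vertical" Gabor kernel, Eq. 4.3

I = np.random.rand(64, 64)

direct = convolve2d(I, G, mode="valid")                   # O(Wk*Hk) MACs per output pixel
separable = convolve2d(convolve2d(I, V[:, None], mode="valid"),
                       H[None, :], mode="valid")          # O(Wk+Hk) MACs per output pixel

assert np.allclose(direct, separable)
```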
However, the authors themselves used 1280 of them (320 per class), as it was the maximum their implementation could accommodate.

Results

The whole system fits in the chosen Virtex-6 ML605 FPGA, including the temporary results and the pre-determined data, which are stored in the device's BRAM. It was synthesized using the Xilinx ISE tools. It has a latency of 600k clock cycles, with a throughput of one image every 526k clock cycles. The system may operate at 100 MHz, which implies a 6 ms latency and a throughput of 190 images per second. The total resource utilization of the device is given in Table 4.1. Finally, the VHDL implementation was tested on binary classification tasks, using 5 classes of objects from Caltech101 and a background class. Accuracies for those tasks are given in Table 4.2. Results show that the accuracy on FPGAs is comparable to that of CPU implementations. In this Section, we presented the work proposed by Orchard et al. [99] and the architecture of their implementation. The next Section is dedicated to our contribution, which mainly consists of reducing the precision of the data throughout the process.

Proposed simplification

In order to save hardware resources, we propose several optimizations to the original HMAX model. Our approach mainly consists in simplifying the encoding of the data and reducing the required number of bits. In order to determine the optimal encoding and algorithmic optimizations, we test each of our propositions on the widely used Caltech101 dataset. For a fair comparison with other works, we use the same classes as in [99]: "airplanes", "faces", "car rear", "motorbikes" and "leaves". Optimizations are tested individually, starting from those intervening at the beginning of the feed-forward pass and continuing in processing order, finishing with optimizations applied to the later layers of the model. For optimizations having tunable parameters (e.g. the bit width), those tests shall be used to determine a working point, which is done for all optimizations that require it in order to have a complete and usable optimization scheme. Optimizations are performed at the following levels: the input data, the coefficients of the Gabor filters in S1, the data produced by S1, the number of filters in S2, and finally the computation of the distances in S2 during the pattern-matching operations. We shall first present our own work, namely the reduction of the precision of the input pixels. We shall then see how that optimization behaves when combined with further optimizations taken from the literature.

Input data

Our implementation of HMAX, along the lines of what is done in [START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF], processes grayscale images. The pixels of such images are typically coded as 8-bit unsigned integers, representing values ranging from 0 to 255, where 0 is "black" and 255 is "white". We propose here to use fewer than 8 bits to encode those pixels, simply by keeping the Most Significant Bits (MSB). This is equivalent to a Euclidean division by a power of two: unwiring the N Least Significant Bits (LSB) amounts to performing a Euclidean division by 2^N. The effect of such precision degradation is shown in Figure 4.2. [Figure 4.2: 8 bits / 3 bits / 2 bits / 1 bit. Color maps are modified so that 0 corresponds to black and the highest possible value corresponds to white, with gray levels linearly interpolated in between. While the images are somewhat difficult to recognize with 1-bit pixels, they are easily recognizable with as few as 2 bits.]
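A minimal sketch of that bit-depth reduction, assuming 8-bit grayscale inputs; dropping the N LSBs is a simple right shift, i.e. an integer division by 2^N.

```python
import numpy as np

def degrade_pixels(img_u8, kept_bits):
    """Keep only the `kept_bits` MSBs of an 8-bit grayscale image.
    Equivalent to an integer division by 2**(8 - kept_bits)."""
    n_dropped = 8 - kept_bits
    return (img_u8 >> n_dropped).astype(np.uint8)

img = np.random.randint(0, 256, size=(164, 164), dtype=np.uint8)
img_2bit = degrade_pixels(img, kept_bits=2)   # values in {0, 1, 2, 3}
```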
In order to find the optimal bit width presenting the best compromise between compression and performance, an experiment was conducted. It consisted of ten independent runs. In each run, the four classes are tested in independent binary classification tasks. Each task consists in splitting the dataset into halves: one half is used as the training set, and the other half is used as the testing set. All images are resized so that their height is 164 pixels, and are then degraded w.r.t. the tested bit width, i.e. all pixels are divided by 2^N, where N is the number of removed LSBs. The degraded data is then used to train first HMAX, and then the classifier, in this case GentleBoost [START_REF] Friedman | Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors)[END_REF]. The images used as negative samples are taken from the Background Google class of Caltech101. All tests were performed in Matlab. It should also be noted that we do not use RBFs in the S2 layer as described in [START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF] and in Section 2.1.2.2. The global accuracy for each class is then given by the mean of the recognition rates for that class across all runs, and the uncertainty of the measure is given by the standard deviation of those accuracies. Finally, the random seed used in the pseudo-random number generator was manually set to the same value for each run, thus ensuring that the conditions across all bit widths are exactly the same and only the encoding changes. The results of this experiment are shown in Figure 4.3. [Figure 4.3: for each bit width, ten independent tests were carried out, in which half of the data was learnt and the other half was kept for testing. The pixel precision has little to no influence on the accuracy.] It is shown that for all four classes the bit width has only a limited impact on performance: all accuracies lie above 0.9, except when the input image pixels are coded on a single bit, where the Airplanes class becomes more difficult to classify correctly. For that reason, we chose to set the input pixels' bit width to 2 bits, and all further simplifications shall be made taking that into account. The next step is to reduce the precision of the filters' coefficients, in a way that is somewhat similar to what is proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF].

S1 filters coefficients

The second simplification that we propose is somewhat similar to that presented in Section 4.2.1, except that this time we operate on the coefficients of the Gabor filters used in S1. Mathematically, those coefficients are real numbers in the range [−1, 1]; the most naive implementation is thus to use the double-precision floating-point representation used by default in Matlab, and that encoding scheme shall be used as the baseline of our experiments. Our simplification consists in using n-bit signed integers instead of floats, by transforming the coefficients so that their values lie within {−2^{n−1}, ..., 2^{n−1} − 1}, which is done by multiplying them by 2^{n−1} and rounding them to the nearest integer. Several values of n were tested, along the lines of the methodology described in Section 4.2.1: 16, and from 8 down to 1. However, using the standard signed coding scheme, a 1-bit encoding would lead to coefficients equal either to −1 or 0, which does not seem relevant in our case.
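A minimal sketch of that scaling-and-rounding quantization; clamping +1.0 to the largest representable code is our own assumption, since the text does not specify how that boundary case is handled.

```python
import numpy as np

def quantize_coeffs(gabor, n_bits):
    """Map real Gabor coefficients in [-1, 1] to n-bit signed integers,
    i.e. to {-2**(n-1), ..., 2**(n-1) - 1}, by scaling and rounding."""
    scale = 2 ** (n_bits - 1)
    q = np.round(gabor * scale)
    return np.clip(q, -scale, scale - 1).astype(np.int32)  # clamp is an assumption

# Example: an arbitrary kernel quantized to 4 bits.
g = np.cos(np.linspace(-np.pi, np.pi, 9))[None, :] * np.ones((9, 1))
g4 = quantize_coeffs(g, 4)    # values in {-8, ..., 7}
```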
For the 1-bit case, we therefore proposed to use a particular coding, where the binary value "0" actually encodes −1 and "1" still encodes +1. The rationale is that this encoding is close to the Haar-like features used in Viola-Jones [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF], as explained in Section 3.1.1.1, and the technique is also suggested in [START_REF] Courbariaux | BinaryConnect: Training Deep Neural Networks with binary weights during propagations[END_REF]. As explained in Section 4.2.1, the input pixel precision is 2 bits. Recent works [START_REF] Trinh | Efficient Data Encoding for Convolutional Neural Network Application[END_REF] also propose much more sophisticated encoding schemes. While their efficiency has been proven, they seem better suited to situations where the weights are learnt during the training process, and are thus unknown before learning. In our case, all weights of the convolutions are predetermined; we therefore have total control over the experiment and preferred to use optimizations as simple as possible. Results for that experiment are given in Figure 4.4. We see that the encoding of the Gabor filter coefficients has even less impact than the input image pixel precision, even in the case of 1-bit precision. This result is consistent with the fact that Haar-like features are used with success in other frameworks. Thus, we shall use that 1-bit encoding scheme for the Gabor filters, in combination with the 2-bit encoding of the input pixels, in further simplifications. In this Section, we validated that we could use only one bit to encode the Gabor filters' coefficients, using "0" to encode −1 and "1" to encode +1, in conjunction with input pixels coded on two bits only. In order to continue our simplification process, the next Section proposes optimizations concerning the output of S1.

S1 output encoding

It has been proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF] to use Lloyd's algorithm [START_REF] Stuart | Least squares quantization in PCM[END_REF][START_REF] Roe | Quantizing for minimum distortion (corresp.)[END_REF], which provides a way to find an encoding that is optimal w.r.t. a subset S of the data to encode. The encoding strategy consists in defining two sets: a codebook C = {c_1, c_2, ..., c_K} and a partition Q = {q_0, q_1, q_2, ..., q_{K−1}, q_K}. With those elements, mapping a code l(x) to any arbitrary value x ∈ ℝ is done as follows:

∀x ∈ ℝ: l(x) = c_1 if x ≤ q_1; c_2 if q_1 < x ≤ q_2; ...; c_{K−1} if q_{K−2} < x ≤ q_{K−1}; c_K if q_{K−1} < x.   (4.12)

One can see here that q_0 and q_K are not used to encode data; however, those values need to be computed when determining the partition, as we shall now see. Finding the partition consists in minimizing the mean square error E(C, Q) between the real values in the subset and the values after quantization [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF][START_REF] Stuart | Least squares quantization in PCM[END_REF]:

E(C, Q) = Σ_{i=1}^{K} ∫_{q_{i−1}}^{q_i} |c_i − x|² p(x) dx   (4.13)

where p is the probability distribution of x. One can show [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF] that

∀i ∈ {1, ..., K}: c_i = ( ∫_{q_{i−1}}^{q_i} x p(x) dx ) / ( ∫_{q_{i−1}}^{q_i} p(x) dx )   (4.14)
∀i ∈ {1, ..., K−1}: q_i = (c_i + c_{i+1}) / 2   (4.15)
q_0 = min S   (4.16)
q_K = max S   (4.17)

We see that Equations 4.14 and 4.15 depend on each other, and there is no closed-form solution for them.
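Given a codebook and a partition, applying the mapping of Equation 4.12 amounts to a simple threshold search. A NumPy sketch follows; the numeric values are placeholders, not the levels reported in Table 4.3.

```python
import numpy as np

def lloyd_encode(x, partition, codebook):
    """Map values to codes following Eq. 4.12.
    `partition` holds the inner boundaries q_1..q_{K-1};
    `codebook` holds the K reconstruction levels c_1..c_K."""
    idx = np.searchsorted(partition, x, side="left")   # cell index 0..K-1
    return np.asarray(codebook)[idx]

# Placeholder 4-level quantizer (K = 4), not the values learnt in the text.
q = [0.8, 2.1, 4.0]            # q_1, q_2, q_3
c = [0.3, 1.4, 3.0, 5.2]       # c_1 .. c_4
codes = lloyd_encode(np.array([0.1, 1.0, 3.9, 7.7]), q, c)
```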
The optimal values are thus determined with an iterative process. Starting from arbitrary values for Q = {q_1, q_2, ..., q_K}, e.g. by separating the range of values to encode into segments of the same size,

∀k ∈ {1, ..., K}: q_k = q_0 + k (q_K − q_0) / K,   (4.18)

we compute C = {c_1, ..., c_K} with Equation 4.14. Once this is done, we use those values to compute a new set Q with Equation 4.15, and so on until convergence. Since the dynamics of the values vary greatly from scale to scale in C1, we computed one codebook C_i and one partition Q_i per C1 scale i. However, contrary to what is proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF], we did not separate the orientations. We thus produced 8 sets S_i of data to encode (i ∈ {1, ..., 8}), using the same 500 images selected at random among all of the five classes we use to test our simplifications. As suggested in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF], we used four quantization levels for all S_i. Each partition Q_i and codebook C_i were computed using the lloyd function of Matlab's Communication System Toolbox. The results are given in Table 4.3. While this simplification uses the values computed in C1, it could obviously just as easily be performed at the end of the S1 stage, simply by using a strictly increasing encoding function f. This is easily done by associating each value of C_i with a positive integer as follows:

∀i ∈ {1, ..., 8}, ∀j ∈ {1, ..., 4}: f(c_{ij}) = j,   (4.19)

and encoding f(c_{ij}) simply as an unsigned integer on 2 bits. By doing so, performing the max-pooling operations in C1 after that encoding is equivalent to performing them before. We must now make sure that this simplification, in addition to the other two presented earlier, does not have a significant negative impact on accuracy. Thus, we perform an experiment along the lines of what is described in Section 4.2.1.

Filter reduction in S2

As has been stated many times in the literature [91-96, 98, 99], the most demanding stage of HMAX is S2. Assuming there is the same number of pre-learnt patches of each size, the algorithmic complexity depends linearly on the number of filters N_S2 and on their average number of elements K. It has been suggested in [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF] to simply reduce the number of pre-learnt patches in S2 by sorting them by relevance according to a criterion, and to keep only the N most relevant patches. The criterion used by the authors is simply the variance ν of the components of a patch p = (p_1, ..., p_M):

ν(p) = Σ_{i=1}^{M} |p_i − p̄|²,   (4.20)

where p̄ denotes the mean of the components of p. In order to ensure that all sizes are equally represented, we propose to first crop at random 250 patches of each of those sizes, in order to get the 1000 patches suggested by Serre et al. [START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF], and then to select 50 patches of each size according to the variance criterion, so that we have a total of 200 patches, as proposed in [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF]. The rationale is that we aim to implement that process on a hardware device; we therefore need to know in advance the number of patches of each size and to keep it to a pre-determined value. Let us now test that simplification on our dataset. We followed the methodology established in Section 4.2.1, and we used the simplification proposed here along with all the other simplifications that were presented until now.
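The selection step just described (random cropping followed by the variance criterion of Equation 4.20) can be sketched as follows; the patch counts and sizes follow the text, while the function names and the random stand-ins for C1 maps are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_patches(c1_maps, size, count):
    """Crop `count` random size x size x 4 patches from a list of C1 maps
    (each map is H x W x 4, one channel per orientation)."""
    patches = []
    for _ in range(count):
        m = c1_maps[rng.integers(len(c1_maps))]
        y = rng.integers(m.shape[0] - size + 1)
        x = rng.integers(m.shape[1] - size + 1)
        patches.append(m[y:y + size, x:x + size, :].copy())
    return patches

def select_by_variance(patches, keep):
    """Keep the `keep` patches with the largest variance criterion (Eq. 4.20)."""
    nu = [np.sum((p - p.mean()) ** 2) for p in patches]
    order = np.argsort(nu)[::-1]
    return [patches[i] for i in order[:keep]]

# 250 random patches per size, then 50 kept per size -> 200 patches in total.
c1_maps = [rng.random((32, 32, 4)) for _ in range(20)]   # stand-in C1 outputs
prototypes = []
for size in (4, 8, 12, 16):                              # usual HMAX patch sizes
    prototypes += select_by_variance(crop_patches(c1_maps, size, 250), 50)
```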
The results are compiled with those of Section 4.2.3 and Section 4.2.5 in Table 4.4.

Manhattan distance in S2

In S2, pattern matching is supposed to be performed with a Gaussian function, the centers of which are the pre-learnt patches in S2, so that each S2 unit responds most strongly when its input is close to its pre-learnt patch. In order to avoid computing exponentials and squared differences in hardware, the matching is performed here with a simple Manhattan distance between an input vector v_1 and a pre-learnt patch v_2:

M(v_1, v_2) = Σ_{i=1}^{N_v} |v_{1i} − v_{2i}|,

where N_v is the number of components of the vectors.

In this Section, we proposed a series of optimizations, both of our own and from the literature. In the next Section, we show how that particular encoding may be put into practice on a dedicated hardware configuration.

FPGA implementation

4.3.1 Overview

We now propose our own implementation of the HMAX model, using both our contributions and the simplifications from the literature presented in Section 4.2. We purposely did not use the architectural optimizations proposed in [99], in order to see how a "naive" implementation of the optimized HMAX model compares with that of Orchard et al. This implementation of the HMAX model with our optimizations processes fixed-size grayscale images of 164 × 164 pixels. The rationale behind those dimensions is that we actually want to process the 128 × 128 ROI located at the center of the image; however, the largest convolution kernel in S1 is 37 × 37, therefore in order to obtain 128 × 128 S1 feature maps we need input images padded with 18-pixel-wide stripes. That padding is assumed to be performed before the data is sent to the HMAX module. The data is processed serially, i.e. pixels arrive one after the other, row by row. The pixels' precision is assumed to be already reduced to two bits per pixel, as suggested in Section 4.2. The module's input pins consist of a two-pin serial bus called din in which the pixels are written, a reset pin rst allowing the module to be initialized, an enable pin en din activating the computation, and finally three clocks: a "pixel clock" pix clk for input data synchronization, a "process clock" proc clk synchronizing the data produced by the module's processes, and a "sub-process clock" subproc clk, as some processes need a high-frequency clock. Suggestions concerning the frequencies of those clocks are given in Section 4.4. The output pins consist of an 8-pin serial bus called dout for the descriptor itself, and a pin named en dout indicating when data is available. The HMAX module, illustrated in Figure 4.5, itself mainly consists of two sub-modules, s1c1 and s2c2. [Figure 4.5: the serialized data is sent to s2c2, which performs pattern matching between the input data and the pre-learnt patches with its s2 components, several in parallel, with multiplexing. The maximum responses of each S2 unit are then computed by c2. The data is then serialized by c2 to out.] As suggested by their names, the first one performs the computations required by the S1 and C1 layers, while the second one takes care of the computations for the S2 and C2 layers of the model. The rationale behind that separation is that it is suggested in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF] that in some cases one may use only the S1 and C1 layers, as we did in Chapter 3. The following two Sections describe those modules in detail.

s1c1

That module uses two components of its own, called s1 and c1, which perform the operations required by the layers of the same names in the model.
It process the input pixels with a multiplexing across orientations, meaning that all processes concerning the first orientation of the Gabor filters in S1 are performed in the same clock cycle, then all processes concerning the second orientation are performed on the same input data, and so on until all four orientations are processed. The input pins of that module are directly connected to those of the top module. Its input pins consist in a dout bus of 4 pins where the C1 output data are written, a en dout pin indicating when new data is available and a dout ori serial bus that precises which orientation the output data corresponds to. The s1 and c1 modules shall now be presented. First of all the pixels arrive in the pix to stripe, which returns columns of 37 pixels. Those columns are then stored in shift registers, which store a 37 × 37 patchonly 7 lines are represented here for readability. Then for each of the 16 scales in S1, there exists an instance of the image cropper module that keeps only the data needed by its following conv module. The convolution kernels' coefficients are gotten from the coeffs manager module, which get them from the FPGA's ROM and retrieve those corresponding to the needed orientation, for all scales. Here only 4 of the 16 convolution engines are shown. The computed data is written in dout, in parallel. Note that not all components of s1 are repesented here: pixmat, pixel manager, coeffs manager and conv crop are not displayed to enhance readability and focus and the dataflow. s1 That module consists in three sub-modules: pixel manager which gets the pixels from the input pins and reorder them so that they may be used in convolutions, the coeffs manager module which handles the coefficients used in the convolution kernels, and the convolution filter bank module conv filter bank which take care of the actual linear filtering operations. Shift registers are also used to synchronize the data produced by the different components when needed. The main modules are described below, and the dataflow in the module is sum up Figure 4.6. pixel manager As mentioned in Section 4.3.1, the data arrives in our module serially, pixel by pixel. It is impractical to perform 2D convolutions in those conditions, as we need the data corresponding to a sub-image of the original image. The convolution cannot be processed fully until all that data arrives, and the data not needed at a particular moment needs to be stored. This is taken care of by this component: it stores the temporary data and outputs it when ready, as a 37 × 37 pixel matrix as needed by the following conv filter bank, as explained below. That process is performed by two different sub-modules: pix to stripe, which reorder the pixels so that they may be processed column per column, and the pixmat that stores the data in a matrix of registers and provide them to the convolution filter bank module. pix to stripe That modules consists in a BRAM, the output pins of which are rewired to its input pins in the way shown in Figure 4.6. It gets as inputs, apart from the usual clk , en din and rst pins, the 2 bit pixels got from the top-module. Its output pins consist in a 37 × 2 = 74 pins bus providing a column of the 37 pixels, as well as a en dout output port indicating when data is ready to be processed. pixmat That module gets as inputs the outputs of the aforementioned pix to stripe module. It simply consists in a matrix of 37 × 37 pixels. 
At each pixel clock cycle, all registered data is shifted to the "right", and the registers on the left store the data gotten from pix to stripe. The pixmat module's output pins are directly wired to its outputs, and an output pin called en dout indicates when the data is ready. When that happens, the data stored in the matrix of registers may be used by the convolution engines. In order to handle new lines, that module has an inner counter incremented every time new data arrives. When that counter reaches 164, i.e when a full stripe of the image went through the module, the en dout signal is unset and the counter is reset to 0. The en dout signal is set again when the counter reaches 37 again, meaning that the matrix is filled. coeffs manager That module's purpose is to provide the required convolution kernels' coefficient, w.r.t the required Gabor filters orientation. It gets as inputs the regular rst, clk and en signals, but also a bus of two pins called k idx indicating the desired orientation The output pins consists of the customary en dout output port indicating that the data is ready, and a large bus called cout that outputs all coefficients of all scales for the requested orientation. This is also close to the box filter approximation proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF]. As explained in Section 4.2, we use a particular one bit encoding. Since our convolution kernels' sizes go from 7 × 7 to 37 × 37 by steps of 2 × 2, the total amount of input pins in the cout bus is given by In order to simplify the process, all coefficients needed at a particular time are read all at once from several BRAM, of which only two are represented here for readability. The coefficients are then concatenated in a single vector directly connected to the cout output port. en din and rst pins, which serve their usual purposes. It also gets the orientation identifier thanks to an id in input bus -that identifier is not directly used for computation, but is passed with the output data for use in latter modules. Finally, that modules needs two clocks: the pixel clock, on which the input data is synchronized and acquired through the clk pin, and the process clock (acquired through clk proc) needed for multiplexing the filters per orientations, as suggested in Section 4.3.1. Output pins consist in a dout bus in which the result of the convolutions at all scales are written, an id out bus simply indicating the orientation identifier got from the id in input bus and the usual en dout pin. In order to perform its operations, that module has one distinct instance of the conv crop component per scale (i.e, 16 instances in total). Each instance has parameters of its own depending on its scale. conv crop That module's input and output ports are similar to those of its parent module conv filter bank. It gets the pixel and process clocks respectively from its clk and clk sum input ports, and it may be reset using the rst input port. Image data arrive through din, and the convolution coefficients got from coeffs manager are acquired through the coeffs input port. Data identifier is given by id in input port, and en din indicates when input data is valid and should be processed. Output ports encompass dout, which provide the results of the convolution, and id out which gives back the signal got from id in. Finally, en dout indicates when valid output data is available. dout signals from all instances of conv crop are then gathered in conf filter bank's dout bus. 
This module gets its name from its two main purposes: select the data required for the convolution, and perform the actual convolution. The first stage is done asynchronously by a component called image cropper. As explained earlier, conv crop get the data in the form of a 37 × 37 pixel matrix -however, all that data is only useful for the 16th scale convolution kernel, which is also of size located in the middle of the 37 × 37 matrix, as shown in Figure 4.6. The selected data is then processed by the conv component, which is detailed in the next section. conv That module carries out the actual convolution filter operations. It gets as inputs two clocks: clk which gets the process clock and clk sum which is used to synchronize sums in the convolution sub-process clock. It also has the usual rst pin for initialization, a bus called din through which the pixel matrix arrives, a bus called coeffs which gets the convolution kernel's coefficients, an id in bus allowing to identify the orientation that is being computed, and an en din pin warning that the input data is valid and that operations may be performed. Its outputs are a dout bus that provides the convolution results, another one called id out that indicates which orientation that data corresponds to and a en dout bus announcing valid output data. In order to simplify the architecture and to limit the required frequency of the subprocess clock, the convolution is first performed row by row in parallel. The results of each rows are then added to get the final result. That row-wise convolution is performed by a bank of convrow module having one filter per row. The sum of the rows are performed by the sum acc module, and the result is coded as suggested in Section 4.2 thanks to the s1degrader module; both modules shall now be presented. convrow That module has almost the same inputs as conv, the only exception being that it only gets the input pixels and coefficients corresponding to the row it is expected to process. Its output pins are similar to those of conv. As explained in Section 4.2, our filters coefficients are either +1's and -1's, respectively coded as "1" and "0". Thus, each 1 bit coefficient does actually not code a value, but rather an instruction: if the coefficient is 1, the corresponding pixel value is added to the convolution's accumulated value, and it is subtracted if the coefficient is 0. That trick allows to perform the convolution without any products. In practice, a subtraction is performed by getting the opposite value of the input pixel by evaluating its two's complement and performing and addition. Sums involved at that stage are carried out by the sum acc module, which shall now be described. sum acc That module sums serially the values arriving in parallel. The data arrives through its din parallel bus, and must be synchronized with the process clock arriving through the clk pin. That module uses a unique register to store its temporary data. At each process clock cycle, the MSB of the din bus, which correspond to the first value of In each of those modules, the "multiplications" are performed in parallel in rowmult between the data coming from din and coeffs input buses -as mentioned in Section 4.2, those multiplications consist in fact in simple changes of signs, depending on the 1 bit coefficients provided by the external module coeffs manager. The results are the accumulated thanks to convrow's cumsum component. Finally, the output of all conrow modules are accumulated thanks to another cumsum component. 
The result is afterward degraded thanks to the s1degrader module, the output of which is written in dout. the sum, is written in the register. At each following sub-process clock cycle, an index is incremented, indicating which value should be added to the accumulated total. Timing requirements concerning the involved clocks are discussed later in Section 4.4.2. The result is written on the output pins synchronously with the process clock. Once the data has been accumulated row by row, and the results coming out of all rows have been accumulated again, the result may be encoded on significantly shorter words as we explained in Section 4.2.3. That encoding is taken care of by the s1degrader module, which shall be described now. s1degrader This modules takes care of the precision degradation of the convolution's output. It is synchronized on the process clock, and as such has a clk input pin, and The results written in dout simply depends on the position of the input value w.r.t the partition boundaries on the natural integer line. r 0 r 1 r 2 r 3 din en din dout en dout shift registers That module allows to delay data. This is mostly useful to address synchronization problem, and thus it needs a clock clk. A rst input port allows to initialize it, and data is acquired through the din port while an en din input port allows to indicate valid input data. Delayed data may be read from the dout output port, and a flag called en dout is set when valid output data is available and unset otherwise. The way that module works is straightforward. It simply consist in N registers r i , each one of them being connected to two neighboors except for r 1 and r N . At each clock cycle, both the data from din and en din are written in r 1 , and each other register r i gets the data from its neighboor r i-1 as shown in Figure 4.9. The last register simply writes its data in the dout and en dout output ports. c1 Once the convolutions are done and the data encoded on a shorter word, max-pooling operations must be performed. Following the lines of the theoretical model, this is done by the c1 module, which gets its inputs directly from s1 output pins. It is synchronized on the process clock, and therefore it has the mandatory clk and rst pins. It also has input buses called din, din ori and en din which are respectively connected to s1's dout, ori and en dout. Its outputs pins are made up of buses named dout, dout ori Maximums are first computed accross scales with the max 2by2 components. The data is then organized into stripes in the same fashion as done in the pix to stripe component used in s1 module. That stripe is organized by lines, and then scales, and needs to be organized by scales, and then lines to be processed by the latter modulethis reorganization is taken care of by reorg stripes. Orientations being multiplexed, we needed to separate them so each may be processed individually, which is done by the data demux module. Each orientation is then processed by one of the c1 orientation module. Finally, data comming out of c1 orientation is multiplexed by data mux before being written in output ports. and en dout, which respectively provide the result of the max-pooling operations, the associated orientation identifier and the flag indicating valid data. 
The process is carried out by the following components: c1 max 2by2 which computes the pixel-wise maximum across two S1 feature maps of consecutive scales and same orientation, c1 pix to stripe which reorganize the values in a way similar to that of the aforementioned pixel manager module, c1 reorg stripes which routes the data to the following components in an appropriate manner, c1 orientation demux which routes the data to the corresponding max-pooling engine depending on the orientation it corresponds to, and finally max filter which is the actual max-pooling engine and performs for a particular orientation, hence the name. That flow is shown in Figure 4.10. c1 max 2by2 Apart from the clk, rst and en din input pins, that module has an input bus called din that gets the data produce by all convolution engines and perform the max-pooling operations across consecutive scales. Since the immediate effect of that process is to divide the number of scales by two, that module's output bus dout has half the width of din. A signal going through the en dout output pin indicates that valid data is available via dout. c1 pix to stripe That module is very similar to the pix to stripe module used in s1 (see Section 4.3.2.1), except that it operates on data of all of the 8 scales produced by c1 max 2by2 and produces stripes of 22 pixels in heights, as the maximum window used for the max-pooling operations in C1 is 22 × 22 as stated in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. Its input and output ports are the same as those of pix to stripe, with additional din ori and dout ori allowing to keep track of the orientation corresponding to the data. c1 reorg stripes The data produced by c1 pix to stripe is ordered first by the position of the pixels in its stripe, and then per scale -i.e first pixels of all scales are next to each others, followed by the second pixels of all scales, and so on. This is impractical for the processed needed in the later module, where we need the data to be grouped by scales. That module achieves it simply by rerouting the signals asynchronously. c1 orientations demux During C1, each orientation is performed independently from the others. However, at this point they arrive multiplexed from the same bus: first pixels from the first orientation, then the pixels at the same locations from the second orientation, followed by the third and the fourth -we then go back to the first orientation, then the second one and so on. That modules gets those pixels through its din bus, and route the signal to the relevant pins of its dout bus depending on its orientation, which is given by the din ori input bus, which is wired to c1 pix to stripe's dout ori bus. Each set of pins corresponding to a particular orientation then routes the signal to the correct instance of the c1 orientation module. In order to perform that demultiplexing operation, that module also has the compulsory clk, rst and en din pins. c1 orientation The actual max-pooling operation is performed by the c1unit components contained in that module. Each c1 orientation instance has a bank of 8 c1unit instances, each having its own configuration so as to perform the max-pooling according to the parameters indicated in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. The role of the c1 orientation module is to serve as an interface between the max-pooling unit bank and the rest of the hardware model. 
As inputs, is has the usual clk, rst and en din input pins as well as a din input bus. That bus gets the data of the corresponding orientation generated in the s1 module. Data of all scales arrive in parallel, as a result of the previous modules. Data of each of the 8 scales is routed to a particular c1unit component, which shall be described soon. Output data is then written in the dout bus. An en dout output is set to "1" when data is ready, and pins of an output bus called dout en scales are set depending on the scales at which the data is available, while the other pins are unsete.g, is the output data correspond to the 1st and 4th scales of the C1 layer of the model, dout en scales shall get the value "00001001". Figure 4.11a shows the two c1unit components and the control module c1unit ctrlnamed ctrl here for readability. Data coming out of those components are multiplexed in the same output port dout. The four bits data signal is shown with the thick line, and the control signals ares shown in light line. We see that dedicated control signals are sent to each maxfilt components, but also that both get the same data. The control signals presented in Figure 4.11b show how the control allow to shift the data between the two units, in order to produce the overlap between two C1 units. We assume here that we emulate C1 units with 4 × 4 receptive fields and 2 × 2 overlap. c1unit This is the core-module of the max-pooling operations -the purpose of all other modules in c1 is mostly to organize and route data, and manage the whole process. Its inputs consist in the compulsory clk, rst and en din pins and the din bus. Data are written to the usual dout and en dout output ports. The max-pooling operations are performed by two instances of a component named maxfilt. The use of those two instances, latter refered to as maxfilt a and maxfilt b, is made mandatory by the fact that there is 50% overlapping between the receptive fields of two C1 unit in the original model. The data is always sent to both components, however setting and unsetting their respective en din pins at different times emulates the behaviour of the set of C1 units operating at the corresponding orientation and scale: at the beginning of a line, only one of the two modules is enabled, and the other one gets enabled only after an amount of pixels equal to half the size of the pooling window (e.g the stride) as arrived. That behaviour is illustrated in Figure 4.11, and is made possible thanks to the c1unit ctrl module. In the next two paragraphs, we first describe how maxfilt works, and then how it is controlled by c1unit ctrl. maxfilt This is where the maximum pooling operation actually takes place. That module operates synchronously with the process clock, and thus has the usual clk, rst and en din input ports -data is got in parallel via the din input port. The input data corresponds to a column of values generated by s1, with the organization performed by the above modules. There are also two additionnal control pins called din new and din last, allowing to indicate the module that the input data is either the first ones of the receptive field, the last ones, or intermediate data. The value determined by the filter is written in the dout port, and valid data is indicated with the en dout output port. The module operates as follows. the module is enabled only when the en din port is set to "1". It has an inner register R that shall store intermediate data. 
When din new is set, the maximum of all input data is computed and the result is stored in R. When both din new and din last are unset and en din is set, the maximum between all input values and the value stored in R is computed and stored back in R. Finally, when din last is set the maximum value between inputs and R is computed again but this time it is also written in dout and en dout is set to "1". Figure 4.11b shows how those signals should act to make that module work properly. c1unit ctrl That module's purpose is to enable and disable the two maxfilt components of c1unit when appropriate. It does so thanks to a process synchronized on the process clock, and thus has the customary clk, rst and en din input ports. It gets the data that is to be processed in its parent c1unit module through its din input bus, and re-write to the dout output bus along with flags wired to the two c1unit components of its parent module, via four output ports: en new a and en last a which are connected to maxfilt a, and en new b and en last b which are connected to maxfilt b. maxfilt a and maxfilt b are the modules mentioned in the the description of c1unit, presented earlier. c1 to s2 That module's goal is to propose an interface between the output port of c1 and the input ports of s2. It also allows to get the data directly from c1 and use it as a descriptor for the classification chain. It reads the data coming out of c1 in parallel, stores it, and serializes it in an output port when ready. That module needs three clocks: clk c1, clk s2 and clk proc. It also has the rst port, as any other modules with synchronous processes. The input data is written in the c1 din input port, and its associated orientation is written in c1 ori. Data coming from different scale in C1 are written in parallel. en c1 is a input port having of side 8 -one pin per scale in c1 -that indicates which scale from c1 din is valid. Finally, a retrieve input port indicates that the following module is ready to get new data. Output data is written serially in dout output port, and a flag called en dout indicates when data in dout is valid. As shown in Figure 4.12, that module has four major components: two BRAM-based buffers that store the data and write it in din when ready, an instance of c1 handler which gets the input data and provides it along with the address where it should be written in the buffers, and finally a controller ctrl with two processes that takes care of the controlling signals. The reason why we need two buffers is that we use a double buffering: the data is first written into buffer A, then when all the required data has been written the next data is written into buffer B while we read that of buffer A, then buffer B is read while the data in buffer A is overwritten with new data, and so on. This allows to avoid problems related to concurrent accesses of the same resources. When new data in c1 din is available, -that is when, at least one of en c1's bits is set -the writting process is launched. This process, which is synchronized on the highfrequency clk proc clock, proceeds as follows: if en c1'LSB is set, the corresponding data is read from c1 din and sent to c1 handler along with an unsigned integer identifying its scale. Then the second LSB of en c1 is read, and the same process is repeated until all 8 bits of en c1 are checked. In parallel, c1 handler returns its input data along with the address where it should be written in BRAM. 
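The double-buffering scheme described above can be summarized by a small behavioral model; the class and method names below are simplified stand-ins for the actual VHDL signals (wea/web, rea/reb) and do not mirror the real implementation.

```python
class PingPongBuffer:
    """Behavioral model of the double buffering used between c1 and s2:
    one buffer is written while the other, already complete, is read."""

    def __init__(self, size):
        self.buffers = [[0] * size, [0] * size]
        self.write_sel = 0          # which buffer is currently being written

    def write(self, addr, value):
        self.buffers[self.write_sel][addr] = value

    def swap(self):
        """Called once all C1 feature maps of an image have been written."""
        self.write_sel ^= 1

    def read(self, addr):
        return self.buffers[1 - self.write_sel][addr]

buf = PingPongBuffer(size=40000)   # capacity chosen arbitrarily for the example
```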
c1 handler That module handles the samples sent from c1 to s2 together with their corresponding scale, and simply rewrites them to its output ports with the address to which they should be written in c1 to s2's write buffer. Its input ports consist of clk, which receives the clock on which it should be synchronized, the rst port allowing the component to be reset, the din port receiving the C1 value to be handled, whose scale is written to the scale input port, the rst cnts port that resets all of the module's inner counters used to generate the address, and the en din input port indicating when valid data is available and should be processed. The module's output ports consist of dout, which is used to rewrite the input data, addr, which indicates the address at which the data should be written in BRAM, and en dout, indicating that output data is available.

Figure 4.12: c1 to s2 module. The blue and red lines show the data flow in the two configurations of the double buffering. The data goes through c1 handler, where the address to which it should be written is generated and written to waddr. The rea and reb signals control the read-enable mode of the BRAMs, while wea and web enable and disable their write modes. When the upper BRAM is in write mode, wea and reb are set and web and rea are unset. When the upper buffer is full, those signals are toggled so that we read the full buffer and write into the other one. Those signals are controlled by the ctrl component, which also generates the address from which the output data should be read from the BRAMs. Data read from both BRAMs are then multiplexed into the dout output port. Pins on the left of both BRAMs belong to one clock domain, and those on the right to another one, so that the output is synchronized with the following modules.

That module works as follows. It has 8 independent counters, one per scale. Let $c_s^n$ be the value held by the counter associated with scale $s$ at instant $n$. When en din is set, assuming the value read from scale corresponds to the scale $s$ coded as an unsigned integer, the data read from din is simply written to dout and the value written to addr is $c_s^n + o_s$, again coded as an unsigned integer, where $o_s$ is an offset value as given in Table 4.5 (the counter being incremented after each handled sample). Those offsets are determined so that each scale has its own address range, contiguous with the others, under the conditions given in Section 4.3.1:

$o_0 = 0$, (4.24)

$\forall s \in \mathbb{Z}^*, \quad o_s = \sum_{k=0}^{s-1} 4 S_k^2$, (4.25)

where $S_k$ is the side length of the C1 maps at scale $k$, also given in Table 4.5. Once all pixels have been handled by c1 handler, the module's counters must be reset by setting and then unsetting the rst cnts input signal.
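The address computation of Equations (4.24) and (4.25) is simple enough to be checked with a few lines of Python. The side lengths used below are placeholders standing in for Table 4.5, which is not reproduced here; only the 31 × 31 maps of scale 0 and the 24 × 24 maps of scale 2 are confirmed by the text.

```python
# Sketch of the address-offset computation of Equations (4.24)-(4.25):
# each C1 scale s gets a contiguous address range of 4 * S_s^2 words
# (S_s x S_s samples times 4 orientations), starting at offset o_s.

# Hypothetical side lengths of the C1 maps; the real values come from
# Table 4.5 (the text only states 31x31 for scale 0 and 24x24 for scale 2).
S = [31, 27, 24, 20, 17, 14, 11, 8]

def offsets(sides):
    o, acc = [], 0
    for side in sides:
        o.append(acc)            # o_s = sum_{k < s} 4 * S_k^2
        acc += 4 * side * side   # room for the 4 orientations of scale s
    return o

o = offsets(S)
print(o[0])   # 0
print(o[1])   # 4 * 31^2 = 3844
# The address of the n-th sample of scale s is then simply n + o[s].
```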
s2c2 That module takes as input the serialized data produced by c1 to s2 and performs the operations required in HMAX's S2 and C2 layers. It has two main components, s2 and c2, which respectively take care of the computations needed in HMAX's S2 and C2 layers. In order to save hardware resources, the pre-learnt S2 filters are multiplexed as is done in s1: every time new data arrives, pattern-matching operations are performed with some of the pre-learnt S2 patches in parallel, then the same operations are performed with other pre-learnt patches and the same input data, and so on until all pre-learnt patches have been used. We define here, for later use, a multiplexing factor denoted $M_{S2C2}$, which corresponds to the number of serial computation steps required to process all S2 patches for a given input. Its most useful output port is called rdy; it is connected to c1 to s2's retrieve input port to warn it when the module is ready to accept new data.

s2 This module handles the data coming out of c1 to s2 as well as the pre-learnt patches, matches those patterns and returns the results. Its input pins first consist of clk and clk proc, which each receive a clock signal: the first one is the clock on which the input data is synchronized and the other one synchronizes the computations. It also has a rst input port allowing it to be reset. The data should be written to the din port, and a port called en din indicates that the input data is valid. After the pattern matching operations have been performed, the data is written to the dout output port, along with an identifier written to the id out output port. Finally, en new warns the other modules that new data is available, en dout indicates precisely which parts of dout carry valid data and should be read, and rdy indicates when the process is ready to read data from c1 to s2. That module has three major components, which are described in the next Sections: s2 input manager, which handles and organizes input data; s2 coeffs manager, which handles and provides the coefficients of the pre-learnt filters; and s2 processors, which takes care of the actual pattern-matching operations. Figure 4.13 shows the dataflow in that module. We now describe in more detail its sub-modules s2 input manager, s2 coeffs manager and s2 processors. The data arriving at the module is handled by s2 input manager, which makes it manageable for s2processors. The latter also gets, in parallel, the pre-learnt filters needed for the pattern-matching operations from s2 coeffs manager, and performs the computations. Once this is over, the data is sent in parallel to the dout output port, which feeds the next processing module.

s2 input manager This module's purpose is somewhat similar to that of s1's pixman module: managing the incoming data and reorganizing it in a way that makes it easier to process. It gets input data from c1 to s2 serially and provides an N × N × 4 map of C1 samples, where N is the side length of the available map. Its input ports comprise a clk port receiving the clock, a rst port allowing the module to be reset, a din port where the data should be written and an en din port that should be set when valid data is written into din. The output map may be read from the dout output port, and its corresponding scale in C1 space is coded as an unsigned integer and written to the dout scale output port. Finally, the matsize output port gives a binary string depending on the value of the aforementioned N variable, according to Table 4.6. Individually, each bit of dout scale allows the s2bank modules, which take care of the actual pattern-matching operations and which are described in Section 4.3.3.6, to be enabled and disabled.
s2 input manager mainly consists of two components: s2 input handler, which gets C1 samples serially as input and returns vertical stripes of those samples, and an instance of the pixmat component described in Section 4.3.2.1. However, pixmat is not used here in exactly the same way as in s1. First of all, we consider here that a "sample" stored in pixmat does not correspond to a single sample of a C1 map at a given location, but to an ensemble of four C1 samples, one per orientation. Furthermore, contrary to s1, the feature maps produced by c1 do not all have the same size, as stated earlier. To address that issue, we chose to ignore pixmat's en dout port and to use a state machine that keeps track of the data in a similar way to pixmat, although it better manages the cases where the feature maps are smaller than 31 × 31: the process is similar, but the line width depends on the scale to which the input data belongs. That scale is determined by an inner counter: knowing how many samples there are per scale in C1, it is easy to deduce the scale of the input data.

s2 input handler The reorganization of the data, arriving sample by sample, into stripes that can feed pixmat is performed by that module. As a synchronous module, it has the required clk and rst input ports; the data is read from its din input port and valid data is signalled with the usual en din input port. Output stripes are written to the dout output port, along with the identifiers of their scales, which are written to the dout scale output port. Finally, the en dout output port indicates that the data in dout scale is valid. Let us keep track of the organization of the data that arrives in that module. Pixels arrive serially, as a stream. The first pixels to arrive are those of the C1 maps of the smallest scale. Inside that scale, the data is organized by rows, then columns, and then orientations, as shown in Figure 4.14a. The first thing that module does is to demultiplex the orientations, so that every word contains the pixels of all orientations at the same location and scale. Once this is done, the new stream may be processed as explained now. As presented in Figure 4.14b, that module has 8 instances of the s2 pix to stripe component, one for each size of C1 feature map, that produce the vertical stripes given the input samples and generic parameters such as the desired stripe height and width. Only one of those instances is used at a time, depending on the scale (which is computed internally from the number of acquired samples). Thus, at scale 0 the C1 feature maps are 31 × 31 and the only active instance is the 31 × 31 one; when processing samples of scale 2, which means 24 × 24 feature maps, the only active instance is the 24 × 24 one, and so on. Whatever its side length, the generated stripe is written to dout and its corresponding scale to dout scale. Finally, en dout indicates which data from dout is valid; this is somewhat redundant with dout scale, but makes it easier to interface that module with the others.

Figure 4.14: Data management in s2 input handler. Figure 4.14a shows how the arriving stream of data is organized: each colour indicates the orientation of the C1 feature map the corresponding sample comes from (2 × 2 feature maps are assumed in the figure), cX indicates that a sample is located in the X-th column of its feature map, and rX that it is located in the X-th row. Figure 4.14b shows how this stream is processed: orientations are first demultiplexed and written in parallel into the relevant s2 pix to stripe instance, one per scale of the C1 feature maps, i.e. 8; the outputs of those components are then routed to the dout output port using a multiplexer.
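The demultiplexing step described above can be sketched as follows, assuming, as in the description of c1 to s2, that the four orientations of a given location arrive consecutively in the serial stream. This is a software illustration only; the function name and the toy 2 × 2 map are ours.

```python
# Sketch of the orientation demultiplexing performed by s2_input_handler:
# C1 samples arrive serially, in batches of four consecutive values that
# correspond to the four orientations of the same location (see c1_to_s2).
# The handler regroups them into one word per location before building stripes.

def demux_orientations(stream, n_orientations=4):
    """Group the serial stream into words of `n_orientations` samples."""
    word = []
    for sample in stream:
        word.append(sample)
        if len(word) == n_orientations:
            yield tuple(word)      # one word = all orientations of one location
            word = []

# Toy 2x2 feature map, as in Figure 4.14a: samples are denoted (row, col, ori).
serial = [(r, c, o) for r in range(2) for c in range(2) for o in range(4)]
for word in demux_orientations(serial):
    print(word)   # four samples sharing the same (row, col), one per orientation
```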
s2processors The pattern matching between the input data and the pre-learnt S2 units of all sizes is performed here. Data are synchronized on the pixel clock, which is provided to this module via its clk input port. Operations, however, are synchronized on the S2 process clock of much higher frequency, given by the clk proc input port. The module also has the compulsory rst port allowing it to be initialized. The data resulting from the processing of the previous layers is passed through the din input port, along with its codebook identifier via the cb din input port. The pre-learnt patterns to be used for the pattern matching operations are passed through the coeffs input bus, and all their corresponding codebook identifiers are given to the module via an input port called cbs coeffs. Finally, the id in input port gets an identifier that allows the data to be tracked in the later c2 module, and en din enables or disables the module. The output ports consist of the dout port, which provides the results of all the pattern matching operations performed in parallel, the id dout port, which simply gives back the identifier provided earlier via the id in port, a rdy output port warning that the module is ready to accept new data, and finally an en dout output bus indicating which data made available on dout is valid; this is required because, as we shall see, pattern matching operations are not performed at all positions of the input C1 maps, depending on the various sizes of the pre-learnt patterns. Thus, data are not always available at the same time, and we need to keep track of this.

For each size of the pre-learnt S2 patches, i.e. 4 × 4 × 4, 8 × 8 × 4, 12 × 12 × 4 and 16 × 16 × 4, this module implements two components: s2bank, which performs the actual pattern matching operations, and corner cropper, which makes sure that only valid data is routed to the s2bank instance. Data arriving from din corresponds to a matrix of 16 × 16 × 4 pixels: all of it is passed to the s2bank instance that matches input data with 16 × 16 × 4 patterns. The data fed to the s2bank instances performing computations for smaller pre-learnt patterns corresponds to a chunk of the matrix cropped from the "corner" of the pixel matrix. Each s2bank instance receives via its coeffs input port the pre-learnt vectors used for the pattern matching operations, and the corresponding codebook identifiers via its cb coeffs input port. Figure 4.15 sums up the data flow in s2processors.

Figure 4.15: Data flow in s2processors. Names in italic represent the components instantiated in that module, and plain names show input and output ports. Only din, dout and en dout are represented for readability. Each square in din represents one of the 1024 pixels read from din, and each set of four squares represents the pixels from C1 maps of the same scale and location, for the four orientations. The corner cropper module makes sure only the relevant data is routed to the following s2bank components, which perform their computations in parallel. When the data produced by one or several of those instances is ready, it is written to the corresponding pins of the dout output port and the relevant pins of the en dout output port are set.

s2unit That module takes care of the computation of a single pattern matching operation in S2. Like its top module s2processors, it has clk and clk proc input ports that respectively receive the data and system clocks, and a rst input port for reset. The operands consist on the one hand of the data produced by the s1c1 module and selected by corner cropper, and on the other hand of the pre-learnt pattern with which the Manhattan distance is to be computed. They are respectively given to the module via the din and coeffs input ports. The data arrive in parallel in the form of the optimized encoding described in Section 4.2, and as explained there this encoding requires a codebook. Since there is one codebook per C1 map, the identifiers of the codebooks required for the input data and the pre-learnt pattern are respectively given by the cb din and cb coeffs input ports. The identifier mentioned in s2processors is passed by the id in input port, and the module can be enabled or disabled thanks to the en din input port. The Manhattan distance computed between the passed vectors is written to the dout output port, along with the corresponding identifier, which is written to the id out output port. Finally, an output port called en dout indicates when valid data is available. The Manhattan distance is computed here in a serial way, synchronized on the clk proc clock. This computation is performed by a component called cum diff, which shall now be described. Shift registers as described in Section 4.3.2.3 are also used to synchronize data.

cum diff As suggested by its name, this module computes the absolute difference between two unsigned integers and accumulates the result with those of the previous operations. To that end, it has the usual clk and rst input ports, for synchronization and reset respectively. It also needs two operands, which are provided by the din1 and din2 input ports. An input port called new flag resets the accumulation to 0 and starts a fresh cumulative difference, and the en din flag enables the computation. The module has a single output port called dout, which provides the result of the accumulation as it is computed. No output pin stating when the output data is valid is required, since the data is always valid: knowing when it actually corresponds to a full Manhattan distance is the job of s2unit.
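A behavioural model of cum diff is given below. The exact cycle-level semantics of new flag (whether the first pair is accumulated in the same cycle as the reset) is our assumption; the sketch only illustrates how a Manhattan distance builds up serially.

```python
# Behavioural model of cum_diff: a serial Manhattan-distance accumulator.
# Each clock cycle it adds |din1 - din2| to an inner accumulator; new_flag
# restarts the accumulation. Detecting when the full distance is ready is
# left to s2unit, as in the hardware.

class CumDiff:
    def __init__(self):
        self.acc = 0

    def clock(self, din1, din2, en_din=True, new_flag=False):
        if not en_din:
            return self.acc
        if new_flag:
            self.acc = 0
        self.acc += abs(din1 - din2)
        return self.acc            # dout is always "valid"

# Manhattan distance between an input patch and a pre-learnt S2 patch,
# fed element by element as s2unit would do.
patch  = [3, 0, 2, 1]
learnt = [1, 1, 2, 3]
unit = CumDiff()
for i, (a, b) in enumerate(zip(patch, learnt)):
    dist = unit.clock(a, b, new_flag=(i == 0))
print(dist)   # 2 + 1 + 0 + 2 = 5
```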
In this Section, we described the principles of the s2 module. The next Section does the same for the c2 module.

Timing Our model works globally as a pipeline, where each module uses its own resources. Therefore, the overall time performance of the whole chain is determined by the module that takes longest. In order to evaluate how fast our model is in terms of frames per second, we shall now study, for each stage, the timing constraints it requires. Let us begin with the S1 layer. The convolution is computed at 128 × 128 positions of the input image. As detailed in Section 4.3.2.1, the sums implied by the convolution are performed row-wise in parallel, and the per-row results are then summed sequentially. Thus, for a k × k convolution kernel, k sums of k elements are performed in parallel, each taking 1 cycle per element, hence k cycles. That leaves k partial results, which are summed using the same strategy and thus require another k cycles, for a total of 2k cycles. Since we use a 4-to-1 multiplexing strategy to compute the orientations one after the other, all scales are processed in parallel and the biggest convolution kernel is 37 × 37, the convolution takes 128 × 128 × 8 × 37 = 4.85 × 10^6 clock cycles to process a single 128 × 128 image. As for the C1 layer, it processes the data as soon as it arrives, and thus no bottleneck is involved there. The S2 layer is the most demanding in terms of computations. Computations are performed only when all the required data is available, in order to save as much time as possible, as explained in Section 4.3.3.5. Considering that we use a 25-to-1 multiplexer to process all S2 filters, the time $T_{S2}$ required by that layer may be written as

$T_{S2} = 25 \left( 16 \times 16 \times 4\, M_{16} + 12 \times 12 \times 4\, M_{12} + 8 \times 8 \times 4\, M_{8} + 4 \times 4 \times 4\, M_{4} \right)$, (4.26)

where $M_i$ is the number of valid $X_i \times X_i$ patches in the C1 feature maps at positions where no patch bigger than $X_i \times X_i$, with $X_i = 4i$, is valid; it may be expressed as

$M_i = N_i - N_{i+1}$, (4.27)

where $N_i$ is the number of valid $X_i \times X_i$ patches in the C1 feature maps. Hence, $T_{S2}$ is obtained by substituting into Equation (4.26) the values of $M_i$ computed for the C1 feature map sizes at hand.

The strategy proposed in [99] is very different from the one we propose here. The huge computational gain they achieved is largely due to the use of separable filters in S1, which allows very few resources to be used, as explained in Section 4.1.1.1. The fact that, in their implementation of S1, filters are multiplexed across scales instead of across orientations as we did here also allows computations in the S2 layer to begin as soon as data is ready, while in our case we chose to wait for all C1 features to be ready before starting the computation, using a double buffer to allow a pipelined process. In their case, the bottleneck is the S1 layer, which limits them to a maximum of 190 images per second. However, that amount is 8.37 times bigger than the frame rate we propose. This is due to the fact that, while reducing the data encoding seems to provide performances similar to those obtained with full double-precision floating point values, it does not take full advantage of the symmetries underlined by Orchard et al. in [99]. As for the S2 layer, Orchard et al. claimed that they used 640 multipliers in order to make the computation as parallel as possible; however, it is not very clear in that paper how exactly those multipliers were split across filters, and the code is not available online, hence a direct comparison with our architecture is not feasible. Still, with their implementation of S2 they claim to be able to process 193 128 × 128 images per second, while our implementation gives 22.69 images per second, although it uses much fewer resources. Finally, we did reduce the precision of the data going from S1 to S2, but the computation in S2 is still performed with data coded as 24-bit integers; this is because we did not test the model when degrading the precision at that stage. Future work shall address that issue, and we hope to reduce the precision to a single bit per word at that stage. Indeed, in that extreme scenario the computation of the Euclidean distance is equivalent to that of the Hamming distance, i.e. the number of differing symbols between two words of the same length. That kind of distance is much easier to compute than the classical Euclidean or even Manhattan distance, be it on FPGA or CPU.
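For illustration, the single-bit scenario evoked above boils down to the following computation; this is not part of the present implementation, merely a sketch of why the Hamming distance is attractive on both FPGA and CPU.

```python
# Illustration of the envisioned single-bit scenario: when patches are encoded
# with one bit per word, the Manhattan / Euclidean distance between two patches
# reduces to a Hamming distance, i.e. an XOR followed by a population count.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two bit-packed patches of equal length."""
    return bin(a ^ b).count("1")

# Two hypothetical 16-bit binarized patches.
patch_in     = 0b1011001110001111
patch_learnt = 0b1010001010001101
print(hamming(patch_in, patch_learnt))   # 4 differing bits
```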
The rationale behind that idea is that single-bit precision was successfully used in other machine learning contexts [START_REF] Coussy | Fully-Binary Neural Network Model and Optimized Hardware Architectures for Associative Memories[END_REF][START_REF] Courbariaux | BinaryConnect: Training Deep Neural Networks with binary weights during propagations[END_REF], and such an implementation would be highly profitable on highly constrained devices.

Conclusion This Chapter was dedicated to the optimization of the computations that take place in the HMAX model. The optimization strategy was to use simpler operations as well as to code the data on shorter words. After that study, a hardware implementation of the optimized model was proposed using the VHDL language, targeting an Artix 7 200T platform. Implementation results in terms of resource utilization and timing were given, as well as comparisons with a work chosen as a baseline. We showed that the precision of the data in the early stages of the model could be dramatically reduced while keeping acceptable accuracy: only the 2 most significant bits of the input image's pixels were kept, and the Gabor filters' coefficients were coded on a single bit, as was proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF] (both are illustrated by the short sketch at the end of this Chapter). We also used the coding strategy proposed in the same paper, in order to reduce the bit width of the stored coefficients and of their transfers from module to module. We also instantiated fewer patches in S2, as proposed by Yu and Slotine [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF], and we proposed to use the Manhattan distance instead of the Euclidean distance of the initial model [START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF]. Those optimizations made the overall accuracy of the model lose XXX points in precision on an image classification task based on 5 classes of the popular Caltech101 dataset, while dividing the complexity of the S2 stage by 5 and greatly reducing the required data precision, hence diminishing the memory footprint and the bandwidth needed for inter-module communication. A hardware implementation of that optimized model was then proposed. We aimed for that implementation to be as naive as possible, in order to see how those optimizations compare with the implementation strategy proposed by Orchard et al. [99]. Their implementation was made so as to fully use the resources of the target device, and thus they claimed a throughput much higher than ours. However, our implementation uses much fewer resources than theirs, and our optimizations and theirs are fully compatible. A system implementing both of them would be of high interest in the field of embedded systems for pattern recognition. Future research shall aim to combine our optimizations with the implementation strategy proposed by Orchard et al., thus reducing even further the resource utilization of that algorithm. Furthermore, we shall continue our efforts towards that objective by addressing the computations in the S2 layer: at the moment, they are implemented as Manhattan distances, and we aim to reduce the precision of the data during those pattern matching operations to a single bit. That way, Euclidean and Manhattan distances reduce to the Hamming distance, which is much less complex to compute.
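As announced above, the following Python sketch illustrates two of the optimizations recalled in this conclusion: keeping only the 2 most significant bits of the input pixels, and reducing the Gabor coefficients to their signs so that the filtering reduces to additions and subtractions. The 3 × 3 kernel is an arbitrary placeholder, not one of the actual S1 filters.

```python
# Sketch of two optimizations summarized above: input pixels reduced to their
# 2 most significant bits, and Gabor coefficients reduced to their sign, so
# that the "convolution" becomes a series of additions and subtractions.

import numpy as np

def quantize_pixels(img, bits=2):
    """Keep only the `bits` most significant bits of 8-bit pixels."""
    return (img >> (8 - bits)).astype(np.int32)

def sign_filter_response(patch, sign_kernel):
    """'Convolution' with a +/-1 kernel at one position: signed accumulation only."""
    return int(np.sum(np.where(sign_kernel > 0, patch, -patch)))

img = np.array([[200,  10, 120],
                [ 90, 255,  30],
                [ 60, 140, 250]], dtype=np.uint8)
sign_kernel = np.array([[ 1, -1,  1],
                        [-1,  1, -1],
                        [ 1, -1,  1]])          # placeholder coefficients

q = quantize_pixels(img)                        # values in {0, 1, 2, 3}
print(q)
print(sign_filter_response(q, sign_kernel))     # accumulation of +/- 2-bit values
```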
Chapter 5 Conclusion In this thesis, we addressed the issue of optimizing a bio-inspired feature extraction framework for computer vision, with the aim of implementing it on a dedicated hardware architecture. Our goal was to propose an easily embeddable framework, generic enough to fit different applications. We chose to focus our efforts on HMAX, a computational model of the early stages of image processing in the mammalian cortex. Although that model may not be quite as popular as others, such as ConvNet for instance, it is interesting in that it is more generic and requires only little training, while frameworks such as ConvNet often require the design of a particular topology and a large amount of samples for training. HMAX is composed of 4 main stages, each computing features that are progressively more invariant than the ones before to translations and small deformations: the S1 stage uses Gabor filters to extract low-level features from the input image, the C1 stage uses a max-pooling strategy to provide a first level of translation and scale invariance, the S2 stage matches pre-learnt patches against the feature maps produced by C1, and the C2 stage provides full invariance to translation and scale thanks to its bag-of-words approach, keeping only the highest responses of S2. The only training that happens here is in S2, and it may be performed using simple training algorithms with little data. First, we aimed to optimize HMIN, which is a version of HMAX with only the S1 and C1 layers, for two particular tasks: face detection and pedestrian detection. Our optimization strategy consisted in removing the filters that we assumed were not necessary: for instance, in the case of face detection, the most prominent features lie in the eyes and mouth, which respond best to horizontal Gabor filters. Hence, we proposed to keep only such features in S1. Furthermore, most of the useful information is redundant across scales; we thus further reduced the complexity of our system by summing all the remaining convolution kernels of S1 and by shrinking the resulting kernel to a manageable size of 9 × 9, which also allows smaller images to be processed. Doing so helped us greatly reduce the complexity of the framework while keeping its accuracy at an acceptable level. We validated our approach on the two aforementioned tasks, and we compared the performance of our framework with state-of-the-art approaches, namely the Convolutional Face Finder and Viola-Jones' detector for the face detection task, and another implementation of ConvNet and the Histogram of Oriented Gradients for the pedestrian detection task. For face detection applications, we concluded that, while the precision of our algorithm is significantly lower than that of state-of-the-art systems, our system still works decently in a real-life scenario where images were extracted from a video. Furthermore, it presents the advantage of being generic: in order to adapt our model to another task, one would simply need to update the weights of the filter in S1 so as to extract the relevant features, while state-of-the-art algorithms were either designed specifically for the considered task or would require a dedicated implementation for it. However, our algorithm does not seem to perform at a sufficient level for the pedestrian detection task, and more efforts need to be made to that end.
Indeed, while our simplifications allowed our system to be the most interesting in terms of complexity, they also brought a significant drop in accuracy, although more tests need to be made for that use case, as our results are not directly comparable to those of the state of the art. We then went back to the full HMAX framework with all four layers, and we studied optimizations aiming to reduce the computation precision. Our main contribution is the use of as few as two bits to encode the input pixels, hence using only 4 gray levels instead of the usual 256. We also tested that optimization in combination with other optimizations from the literature: the Gabor filters in S1 were reduced to simple additions and subtractions, the outputs of S1 were quantized using Lloyd's encoding method, which finds the optimal quantization given a dataset, we divided the number of pre-learnt patches in S2 by 5, and we replaced the complex computation of Gaussians in S2 with the much simpler Manhattan distance. We showed that all those approximations keep the accuracy acceptable compared to the original model. We then implemented our own version of HMAX on dedicated hardware, namely the Artix-7 200T FPGA from Xilinx, using the aforementioned optimizations. That implementation was purposely naive, in order to compare it with the state-of-the-art implementation. The precision reduction of the input pixels greatly reduces the memory needed to handle them, and allows the S1 feature maps to be computed on narrower data. Furthermore, the replacement of the Gabor filter coefficients by simple additions and subtractions allowed us to encode each operation on a single bit ("0" for subtraction and "1" for addition) instead of a full coefficient in, for instance, a fixed or floating point representation. The data coming out of S1 is then encoded using the codebooks and partitions determined with Lloyd's method, which allows only 2-bit words to be passed to the C1 stage. As for the S2 layer, the influence of data precision on the performance had not yet been evaluated by the time this document was written, and hence all data processed there uses full precision: input data are coded on 12 bits, and output data on 24 bits. The main limit of our implementation is that it does not use the symmetries of the Gabor filters. That technique was successfully used in the literature to propose a full HMAX implementation on a single FPGA, along with different multiplexing schemes that allow a higher throughput. Indeed, our design, which has yet to be deployed and tested on a real device, may process 4.54 164 × 164 frames per second, while the authors of the state-of-the-art solution claimed that theirs may process up to 193 128 × 128 frames per second. It must be emphasized, however, that our implementation uses much fewer hardware resources, and that our optimizations and theirs are fully compatible. Hence, future development shall mainly consist in merging the optimizations they proposed with those that we used. Let us now answer the questions stated at the beginning of this document. The first one was: How may neuromorphic descriptors be chosen appropriately and how may their complexity be reduced? As we saw, a possible solution is to empirically find the most promising features and to keep only the filters that respond best to them. Furthermore, it is possible to merge the convolution filters that are sensitive to similar features.
That approach led us to a generic architecture for visual pattern recognition, and one would theoretically need to change only its weights to adapt it to new problems. The second question that we stated was: How may the data handled by those algorithms be efficiently coded so as to reduce hardware resource utilization? We showed that full precision is not required to keep decent accuracy, and that acceptable results can be obtained using only a few bits to encode the parameters and the input data. We also showed that this technique may be successfully combined with other optimizations. Given that the most widely used framework for visual pattern recognition nowadays is ConvNet, it may seem surprising that we chose to stick to HMAX. The main reason is that its most well-known applications are meant to run on very powerful machines, while on the contrary we directed our research towards embedded systems. We also found the bio-inspiration paradigm promising, and we chose to push our study of frameworks falling into that category as far as possible, in order to use them to their full potential. While our contribution in deriving an algorithm optimized for a given task does not provide an accuracy as impressive as the state of the art, we claim that the architecture of that framework is generic enough to be easily implementable on hardware, and that only the parameters would need to change to adapt it to another task. Furthermore, our implementation of the general-purpose HMAX algorithm on FPGA is the basis of a future, more optimized and faster hardware implementation, combining the presented optimizations, which keep hardware resource utilization low, with those proposed in the literature, which take full advantage of the features of an FPGA. Combining those contributions may take several forms: one can imagine using a full HMAX model with all four layers but with a greatly reduced number of filters in S1, thus leading to an FPGA implementation using even fewer resources. Or, one can imagine directly implementing the framework proposed for face detection, i.e. without the S2 and C2 layers, with the optimizations that we proposed for the S1 and C1 layers. Doing so would produce a very tight framework, with a low memory footprint and a low complexity. However, one may argue that frameworks such as ConvNet are nevertheless more accurate than HMAX in most use-case scenarios, that frameworks such as Viola-Jones have strikingly low complexities, and that the genericity we claim to bring does not make up for it. With that in mind, we claim that the studies carried out in Chapters 3 and 4 may still apply to those frameworks. Indeed, as was done in the literature, if one trains a ConvNet having a topology similar to that of the CFF, where the feature maps of the second convolution stage each ultimately produce a scalar, one may see that the weight assigned to that scalar is close to zero, and hence the convolutions responsible for that feature map may simply be removed. Furthermore, for a given task it may be easy to identify the shape of a Gabor filter that would grasp interesting features: one can then either use Gabor filters as the first stage of a ConvNet, as was done in the past, or initialize the weights of some convolution kernels with them before training. As for our hardware implementation of HMAX, most of the optimizations we proposed may be used for ConvNet as well.
For instance, one could still choose to train a ConvNet on input images with pixels coded on fewer than 8 bits. Furthermore, after training one could also imagine replacing all positive weights with 1 and negative weights with -1, and removing weights close to 0, provided the dynamics of the weights is not too far from the [-1, 1] range. We also confirmed that those techniques, used in combination with other techniques from the literature such as Lloyd's algorithm for inter-layer communication, are usable without dramatically altering the accuracy. Hence, our example of implementation is perfectly applicable to other situations, and goes well beyond the sole scope of HMAX. To conclude, we would back the position that bio-inspiration is often a good starting point and that it may open perspectives that were not explored until then, but that we should not fear moving away from it quickly. Indeed, humanity conquered the skies with machines only loosely related to birds, and submarine depths with vessels that share almost nothing with fish. Computer vision boomed very recently thanks to frameworks that are indeed inspired by cognitive theories, but the implementations of those theories in industrial systems are far from mimicking the brain. Still, all those systems were, at some point, inspired by nature; and while it is not always the most fundamental aspect, going back to that viewpoint and rediscovering why it inspired a technology may shed new light on how to go further and deeper in their improvement.

A.3 Output layer training As for the final layer, it is trained using a simple least mean squares approach. Denoting W the weight matrix and T the matrix of target vectors, it can be shown [START_REF] Bishop | Pattern recognition and machine learning[END_REF] that we have

$W = \left( \Phi^T \Phi \right)^{-1} \Phi^T T$ (A.4)

with

$\Phi = \begin{pmatrix} \Phi_0(x_1) & \Phi_1(x_1) & \dots & \Phi_{M-1}(x_1) \\ \Phi_0(x_2) & \Phi_1(x_2) & \dots & \Phi_{M-1}(x_2) \\ \vdots & \vdots & & \vdots \\ \Phi_0(x_N) & \Phi_1(x_N) & \dots & \Phi_{M-1}(x_N) \end{pmatrix}$ (A.5)

where $\Phi_i$ is the function corresponding to the i-th kernel, and where each vector of T has components equal to -1, except for its i-th component, which is +1 if the category of the vector it corresponds to is i.
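To make Section A.3 concrete, the following NumPy sketch solves the same least-squares problem on random data; the array shapes and names are ours and purely illustrative. np.linalg.lstsq is used rather than an explicit matrix inverse, but it solves exactly Equation (A.4).

```python
# Numerical sketch of the output-layer training described in Section A.3:
# the weights are obtained in closed form with the least-squares solution
# W = (Phi^T Phi)^(-1) Phi^T T. Data below is random and purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
N, M, n_classes = 200, 16, 5          # samples, kernels, categories

Phi = rng.normal(size=(N, M))         # Phi[n, i] = Phi_i(x_n)
labels = rng.integers(0, n_classes, size=N)
T = -np.ones((N, n_classes))          # target vectors: -1 everywhere...
T[np.arange(N), labels] = 1.0         # ...except +1 at the true category

# Closed-form solution; lstsq is preferred to an explicit inverse for
# numerical stability, but it solves the same normal equations.
W, *_ = np.linalg.lstsq(Phi, T, rcond=None)

predicted = np.argmax(Phi @ W, axis=1)
print((predicted == labels).mean())   # training accuracy of the linear readout
```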
Enfin, la dernière Section sera consacrée aux discussion et conclusions de ces travaux. B.3 Sélection de caractéristiques Dans cette Section, nous allons présenter nos travaux concernant la sélection de caractéristiques en vue d'optimiser un algorithme, pour deux tâches précises: la détection de visages, et la détection de piétons. B.3.1 Détection de visages B.3.2 Détection de piétons B.3.2.3 Expérimentations Afin de tester nos algorithmes, nous avons évalué sa précision sur une tâche de détection de piétons sur la base INRIA [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]. Les résultats sont présentés en Figure B.17 et en 1. 1 1 Application examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Perceptron applied to PR . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 NeuroDSP architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 A feedforward architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Perceptron. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Multi-layer perceptron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 MLP activation functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.5 RBF neural network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.6 Support vectors determination . . . . . . . . . . . . . . . . . . . . . . . . 2.7 Invariant scattering convolution network. . . . . . . . . . . . . . . . . . . 2.8 HMAX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.9 Convolutional neural network. . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Example of Haar-like features used in Viola-Jones. . . . . . . . . . . . . . 3.2 Integral image representation. . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Complexity repartition of Viola and Jones' algorithm. . . . . . . . . . . . 3.4 CFF. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.5 Complexity repartition of the CFF algorithms . . . . . . . . . . . . . . . . 3.6 C1 feature maps for a face . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.7 S1 convolution kernel sum . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.8 Feature map obtained with the unique kernel in S1 . . . . . . . . . . . . . 3.9 ROC curves of the HMIN classifiers. . . . . . . . . . . . . . . . . . . . . . 3.10 Samples from the CMU Face Images dataset . . . . . . . . . . . . . . . . 3.11 ROC curve obtained with HMIN R θ=π/2 on CMU dataset. . . . . . . . . . . 3.12 Example of frame from the "Olivier" dataset. . . . . . . . . . . . . . . . . 3.13 ROC curves obtained with HMIN R θ=π/2 on "Olivier" dataset. . . . . . . . 3.14 HOG descriptor computation. . . . . . . . . . . . . . . . . . . . . . . . . . 3.15 Binning of the half-circle of unsigned angles . . . . . . . . . . . . . . . . . 3.16 Complexity repartition of HOG features extraction. . . . . . . . . . . . . . 3.17 ConvNet for pedestrian detection. . . . . . . . . . . . . . . . . . . . . . . 3.18 ROC curves of the HMIN classifiers on the INRIA pedestrian dataset ..4.1 Caltech101 samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Precision degradation in input image. . . . . . . . . . . . . . . . . . . . . 4.3 Recognition rates of HMAX w.r.t input image bit width. . . . . . . . . . . 4.4 Recognition rates w.r.t S1 filters precision . . . . . . . . . . . . . . . . . . 4.5 HMAX VHDL module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
xii List of Figures xiii 4.6 Dataflow in s1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.7 coeffs manager module. . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.8 7 × 7 convolution module. . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.9 shift registers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.10 c1 module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.11 c1unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.12 c1 to s2 module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.13 Dataflow in s2c2. The data arriving to the module is handled by s2 input manager, which make it manageable for the s2processors. The latter also gets the pre-learnt filter needed for the pattern-matching operations from s2 coeffs manager in parallel, and perform the computations. Once it is over, the data is sent in parallel to the dout output port, which feed the next processing module. . . . . . . . . . . . . . . . . . . . . . . . . . . 4.14 Data management in s2 handler. . . . . . . . . . . . . . . . . . . . . . . 4.15 Data flow in s2processors. . . . . . . . . . . . . . . . . . . . . . . . . . . B.1 Exemples d'applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.2 NeuroDSP architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.3 Architecture feedforward . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.4 Invariant scattering convolution network. . . . . . . . . . . . . . . . . . . . B.5 HMAX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.6 Réseaux de neurones à convolutions. . . . . . . . . . . . . . . . . . . . . . B.7 Examples de caractéristiques utilisés dans Viola-Jones. . . . . . . . . . . . B.8 Représentation en image intégrale. . . . . . . . . . . . . . . . . . . . . . . B.9 CFF. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.10 Sorties des C1 pour un visage . . . . . . . . . . . . . . . . . . . . . . . . . B.11 Somme des noyaux de convolutions dans S1. . . . . . . . . . . . . . . . . . B.12 Réponse du filtre unique dans S1 sur un visage. . . . . . . . . . . . . . . . B.13 Courbes ROC obtenues avec différentes versions de HMIN sur LFW Crop. B.14 Courbe ROC obtenue avec HMIN R θ=π/2 sur la base CMU. . . . . . . . . . B.15 HOG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.16 ConvNet pour la détection de piétons. . . . . . . . . . . . . . . . . . . . . B.17 Courbes ROC obtenues avec les descripteurs HMIN sur la base INRIA. . . B.18 Effet de la dégradation de précision sur l'image d'entrée. . . . . . . . . . . B.19 Taux de reconnaissances avec HMAX en fonction de la précision des pixels en entrée. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.20 Précisions en fonction du nombres de bits dans les filtres de Gabor de S1, avec 2 bits pour l'image d'entrée. . . . . . . . . . . . . . . . . . . . . . . . B.21 Aperçu du module VHDL HMAX. . . . . . . . . . . . . . . . . . . . . . . To Ryan and Théo. xv Chapter 1 (a) Google's self driving car 1 . (b) Production control. (c) Security. (d) Home automation. Figure 1 . 1 : 11 Figure 1.1: Application examples. Figure 1 . 2 : 12 Figure1.2: Perceptron applied to pattern recognition. Figure1.2a shows an hardware implementation, and Figure1.2b presents the principle: each cell of the retina captures a binary pixel and returns 0 when white, 1 when black. 
Those pixels are connected to so called input units, and are used to compute a weighted sum. If that sum is positive, then the net returns 1, otherwise it returns -1. Training a Perceptron consists in adjusting its weights. For a more formal and rigorous presentation, see page 9. Figure 2 . 1 : 21 Figure 2.1: A feedforward architecture. In each layer, units get their inputs from neurons in the previous layer and feed their outputs to units in the next layer. Figure 2 . 2 : 22 Figure 2.2: Perceptron. Figure 2 . 4 : 24 Figure 2.4: MLP activation functions. Figure 2 . 5 : 25 Figure 2.5: RBF neural network. Figure 2 . 6 : 26 Figure 2.6: Support vectors determination. Green dots belong to a class, and red ones to the others. Dots marked with a × sign represent the selected support vectors. The unmarked dots have no influence over the determination of the decision boundary's parameters. The black dashed line represents the determined decision boundary, and the orange lines possible decision boundaries that would not be optimal. Figure 2 . 9 : 29 Figure 2.9: Convolutional neural network [48]. Figure 3 . 1 : 31 Figure 3.1: Example of Haar-like features used in Viola-Jones for face detection.They can be seen as convolution kernels where the grey parts correspond to +1 coefficients, and the white ones -1. Such features can be computed efficiently using integral images[START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF][START_REF] Viola | Robust real-time face detection[END_REF]. Point coordinates are presented here for latter use in the equations characterizing feature computations. Figure 3 . 3 Figure 3.5 shows the repartition of the complexity of that frameworks. Figure 3 . 6 : 36 Figure 3.6: C1 feature maps for a face. One can see here that most of the features corresponding to an actual feature of a face, e.g the eyes or the mouth, is given by the filters with orientation θ = π/2. Figure 3 . 8 : 38 Figure 3.8: Feature map obtained with the unique kernel in S1 presented in Figure 3.7. One can see that the eyes mouth and even nostrils are particularly salient. Figure 3 . 10 : 310 Figure 3.10: Samples from the CMU Face Images dataset. Figure 3 . 11 : 311 Figure 3.11: ROC curves obtained with HMIN Rθ=π/2 on CMU dataset. The chosen classifier is an RBF, and was trained with the features extracted from 500 faces from LFW crop[START_REF] Huang | Robust face detection using Gabor filter features[END_REF] dataset and 500 non-faces images cropped from images of the "background" class of the Caltech101 dataset[START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. For each image, a pyramid was produced in order to detect faces of various scales, were the dimensions of the images are successively reduced by a factor 1.2. A face was considered correctly detected if at least one ROI encompassing its eyes, nose and mouth was classified as "face", and if that ROI is not 20% bigger than the face according to the ground truth. Each non-face ROI that was classified as "Face" was counted as a false positive. Figure 3 . 12 : 312 Figure 3.12: Example of frame from the "Olivier" dataset. Figure 3 . 3 Figure 3.13: ROC curves obtained with HMIN Rθ=π/2 on "Olivier" dataset. 
As in Figure3.11, the chosen classifier is an RBF, and was trained with the features extracted from 500 faces from LFW crop[START_REF] Huang | Robust face detection using Gabor filter features[END_REF] dataset and 500 non-faces images cropped from images of the "background" class of the Caltech101 dataset[START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. For each image, a pyramid was produced in order to detect faces of various scales, were the dimensions of the images are successively reduced by a factor 1.2. An image was considered correctly detected if at least one ROI encompassing its eyes, nose and mouth was classified as "face", and if that ROI is not 20% bigger than the face according to the ground truth. Each non-face ROI that was classified as "Face" was counted as a false positive. Figure 3 . 14 : 314 Figure 3.14: HOG descriptor computation. Gradients are computed for each location of the R, G and B channels, and for each location only the gradient with the highest norm is kept. The kept gradients are separated into cells, shown in green, and histograms of their orientations are computed for each cell. This produces a histogram map, which is divided in overlapping blocks a shown on the right. Normalization are performed for each block, which produces one feature vector per block. Those feature vectors are finally concatenated so as to produce the feature vector used for training and classification. Figure 3 . 15 : 315 Figure 3.15: Binning of the half-circle of unsigned angles with N b = 9. The regions in gray correspond to the same bin. Figure 3 . 17 : 317 Figure3.17: ConvNet for pedestrian detection[START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF]. Input image is assumed to be represented in Y'UV space. The Y channel feed the C Y 1 convolution layer, the resulting feature maps of which are sub-sampled in S Y 1. In parallel, the UV channels are subsampled by the S U V 0 layer, and the results feed the C U V 1 convolution layer. The C U V 1 and S Y 1 feature maps are concatenated and feed the C2 convolution layer. The C2 feature maps are then subsampled by S2. Finally, all output features from C2 and C U V 1 are serialized and used as inputs of a fully-connected layer for classification. Figure 3 . 18 : 318 Figure 3.18: ROC curves of the HMIN classifiers on the INRIA pedestrian dataset.The drop of performance is more important here than it was for faces, as shown on Figure3.9. However, the gain in complexity is as significant as in Section 3.1.2. . 5 ) 5 thus, by denoting * the convolution operator and I the input image: I * G | θ=0 = I * c V * r H (4.6) where A * r B denotes separated convolutions on rows of 2D data A by 1D kernel B, and A * c B denotes column-wise convolutions of A by B. Using the same notations: G (x, y) | θ=0 = G (y, x) θ=π/2 (4.7) and then I * G θ=π/2 = I * c H * r V (4.8) Table 4 . 2 : 42 Accuracies of Orchard's implementations on Caltech101 [99]. The "Original model" column shows the results obtained with the original HMAX code, while "CPU" shows the results obtained by Orchard et al.'s own CPU implementation, and "FPGA" show the results obtained with their FPGA implementation. (a) Car rears. (b) Airplanes. (c) Faces. (d) Leaves. (e) Motorbikes. (f) Background. Figure 4 . 1 : 41 Figure 4.1: Samples of images of the used classes from Caltech101 dataset [142]. Figure 4 . 
2 : 42 Figure 4.2: Precision degradation in input image for three types of objects: faces, cars and airplanes.Color maps are modified so that the 0 corresponds to black and the highest possible value corresponds to white, with gray level linearly interpolated in between. We can see that while the images are somewhat difficult to recognize with 1 bit pixels, they are easily recognizable with as few as 2 bits. Figure 4 . 3 : 43 Figure 4.3: Recognition rates of HMAX on four categories of Caltech101 dataset w.r.t the input image pixel bit width.For each bit width, ten independent tests were carried out, in which half of the data was learnt and the other half was kept for testing. We see that the pixel precision has little to no influence on the accuracy. Figure 4 . 4 : 44 Figure 4.4: Recognition rates on four categories of Caltech101 dataset w.r.t the coefficients of the Gabor filter coding scheme in S1 layer. Those tests were run with input pixels having 2 bits widths. The protocol is the same as developped for testing the input pixels, as done in Figure 4.3. 21 ) 21 In their paper, Yu et al. proposed to keep the 200 most relevant patches, which when compared to the 1000 patches recommended in[START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF] would allow to divide the complexity at this stage by 5. In[START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF], it is suggested to use patches of4 different sizes: 4 × 4 × 4, 8 × 8 × 4, 12 × 12 × 4 and 16 × 16 × 4. returns a value close to 1 when the patterns are closed in terms of Euclidean distance and 0 when they are far from each others. Computing an Euclidean distance implies the computation of square and square-roots function, which may use lots of hardware resources. The evaluation of the exponential function also raises similar issues, along with those already exposed in Section 4.2. Since we already removed the Gaussian function to simplify the training of S2, we propose to compare the performances obtained when replacing the Euclidean distance with the Manhattan distance: Figure 4 . 5 : 45 Figure 4.5: HMAX VHDL module. The main components are shown in colors, and the black lines represent the data flow.We see here that the data from the degraded 164 × 164 input image is first processed by S1 filters at all scales in parallel -only 8 out of the 16 filters in the bank are shown for readability. Orientations are processed serially and the outputs are multiplexed. The data is then processed by the c1 module, which produces half the feature maps produced in S1, before being serialized by c1 to s2. The serialized data is sent to s2c2, which perform pattern matching between input data and pre-learnt patches with its s2 components, several in parallel, with a multiplexing. The maximum responses of each S2 unit are then computed by c2. The data is then serialized by c2 to out. Figure 4 . 6 : 46 Figure 4.6: Dataflow in s1. This Figure shows the major components of the s1 module.First of all the pixels arrive in the pix to stripe, which returns columns of 37 pixels. Those columns are then stored in shift registers, which store a 37 × 37 patchonly 7 lines are represented here for readability. Then for each of the 16 scales in S1, there exists an instance of the image cropper module that keeps only the data needed by its following conv module. 
The convolution kernels' coefficients are gotten from the coeffs manager module, which get them from the FPGA's ROM and retrieve those corresponding to the needed orientation, for all scales. Here only 4 of the 16 convolution engines are shown. The computed data is written in dout, in parallel. Note that not all components of s1 are repesented here: pixmat, pixel manager, coeffs manager and conv crop are not displayed to enhance readability and focus and the dataflow. are stored in BRAM. The module fetches the needed ones depending on the value written in k idx, and route them to the cout module. Figure 4 Figure 4 . 7 : 447 Figure 4.7: coeffs manager module. In order to simplify the process, all coefficients needed at a particular time are read all at once from several BRAM, of which only two are represented here for readability. The coefficients are then concatenated in a single vector directly connected to the cout output port. Figure 4 . 8 : 7 × 487 Figure 4.8: 7 × 7 convolution module. That module has one convrow module per row in the convolution kernel, each taking care of a line.In each of those modules, the "multiplications" are performed in parallel in rowmult between the data coming from din and coeffs input buses -as mentioned in Section 4.2, those multiplications consist in fact in simple changes of signs, depending on the 1 bit coefficients provided by the external module coeffs manager. The results are the accumulated thanks to convrow's cumsum component. Finally, the output of all conrow modules are accumulated thanks to another cumsum component. The result is afterward degraded thanks to the s1degrader module, the output of which is written in dout. Figure 4 . 9 : 49 Figure 4.9: shift registers module with 4 registers. At each clock cycle, data is read from din and en din and written into the next register, the last of which writes its data into dout and en dout output ports. Figure 4 . 10 : 410 Figure 4.10: c1 module. For more readability, only 4 of the 8 filters are represented here. Maximums are first computed accross scales with the max 2by2 components. The data is then organized into stripes in the same fashion as done in the pix to stripe component used in s1 module. That stripe is organized by lines, and then scales, and needs to be organized by scales, and then lines to be processed by the latter modulethis reorganization is taken care of by reorg stripes. Orientations being multiplexed, we needed to separate them so each may be processed individually, which is done by the data demux module. Each orientation is then processed by one of the c1 orientation module. Finally, data comming out of c1 orientation is multiplexed by data mux before being written in output ports. Figure 4 . 4 Figure 4.11: c1unit.Figure 4.11a shows the principle components of the module architecture, and Figure 4.11b shows the control signals enabling and disabling the data.Figure4.11a shows the two c1unit components and the control module c1unit ctrlnamed ctrl here for readability. Data coming out of those components are multiplexed in the same output port dout. The four bits data signal is shown with the thick line, and the control signals ares shown in light line. We see that dedicated control signals are sent to each maxfilt components, but also that both get the same data. The control signals presented in Figure4.11b show how the control allow to shift the data between the two units, in order to produce the overlap between two C1 units. 
We assume here that we emulate C1 units with 4 × 4 receptive fields and 2 × 2 overlap. Figure 4 . 4 Figure 4.11: c1unit.Figure 4.11a shows the principle components of the module architecture, and Figure 4.11b shows the control signals enabling and disabling the data.Figure4.11a shows the two c1unit components and the control module c1unit ctrlnamed ctrl here for readability. Data coming out of those components are multiplexed in the same output port dout. The four bits data signal is shown with the thick line, and the control signals ares shown in light line. We see that dedicated control signals are sent to each maxfilt components, but also that both get the same data. The control signals presented in Figure4.11b show how the control allow to shift the data between the two units, in order to produce the overlap between two C1 units. We assume here that we emulate C1 units with 4 × 4 receptive fields and 2 × 2 overlap. Figure 4 4 Figure 4.12: c1 to s2 module. The blue and red lines show the data flow in the two configurations of the double-buffering. The data goes through c1 handler, where the address to which it should be written is generated and written in waddr. The rea and reb signals control the enable mode of the BRAMs, while the wea and web enable and disable the write modes of the BRAMs. When the upper BRAM is in write mode, wea and reb are set and web and rea are unset. When the upper buffer is full, those signals are toggled so that we read the full buffer and write in the other one. Those signals are controled thanks to the ctrl component, which also generates the address from which the output data should be read from the BRAMs. Data read from both BRAMs are then multiplexed into the dout output port. Pins on the left of both BRAMs correspond to the same clock domain, and those on the right belong to another one so that it is synchronized with following modules. Figure 4 . 13 : 413 Figure 4.13: Dataflow in s2c2.The data arriving to the module is handled by s2 input manager, which make it manageable for the s2processors. The latter also gets the pre-learnt filter needed for the pattern-matching operations from s2 coeffs manager in parallel, and perform the computations. Once it is over, the data is sent in parallel to the dout output port, which feed the next processing module. Organization of stream arriving in s2 input handler. Each color indicates the orientation of the C1 feature map the corresponding sample comes from. We assume here that those feature maps are 2 × 2. cX indicates that the samples are located in the X-th column in their feature maps, and rX indicates that the samples are located in the X-th row. s2 handler module. Orientations are first demultiplexed, and written in parallel into the relevant s2 pix to stripe, shown here in gray. There is one s2 pix to stripe per scale in C1 feature maps -i.e 8. The output of those compinents are then routed to the dout output port, using a multiplexer. Figure 4 . 14 : 414 Figure 4.14: Data management in s2 handler.Figure 4.14a shows how the arriving stream of data is organized.Figure 4.14b shows how this stream is processed. Figure 4 . 4 Figure 4.14: Data management in s2 handler.Figure 4.14a shows how the arriving stream of data is organized.Figure 4.14b shows how this stream is processed. Figure 4 . 4 Figure 4.14: Data management in s2 handler.Figure 4.14a shows how the arriving stream of data is organized.Figure 4.14b shows how this stream is processed. Figure 4 . 4 Figure 4.15 sums up the data flow in s2processors. 
We shall now describe the corner cropper and s2bank modules.

Figure 4.15: Data flow in s2processors. Names in italic represent the components instantiated in that module, and plain names show input and output ports. Only din, dout and en dout are represented for readability. Each square in din represents one of the 1024 pixels read from din, and each set of four squares represents the pixels from C1 maps of the same scale and location, for the four orientations. The corner cropper module makes sure only the relevant data is routed to the following s2bank components, which perform their computations in parallel. When the data produced by one or several of those instances is ready, it is written to the corresponding pins of the dout output port and the relevant pins of the en dout output port are set. The components presented in Section 4.3.2.3 are also used to synchronize data.

cum diff: As suggested by its name, this module computes the absolute difference between two unsigned integers and accumulates the result with those of the previous operations. To that end, it needs the usual clk and rst input ports, for synchronization and resetting purposes respectively. It also needs two operands, which are provided by the din1 and din2 input ports. An input port called new flag allows resetting the accumulation to 0 and starting a fresh cumulative difference operation, and the en din flag enables computation. That module has a single output port called dout, which provides the result of the accumulation as it is computed. It does not require an output pin stating when the output data is valid, because the data is always valid; knowing when the data actually corresponds to a full Manhattan distance is handled in s2unit.

Let's begin with the S1 layer. The convolution is computed at 128 × 128 places of the input image. As detailed in Section 4.3.2.1, the sums implied by the convolution are performed row-wise in parallel, and the per-row results are then summed sequentially. Thus, for a k × k convolution kernel, k sums of k elements are performed in parallel, each taking 1 cycle per element - hence k cycles. That leads to k partial results, which are then summed using the same strategy, requiring another k cycles, for a total of 2k cycles. Since we use a 4-to-1 multiplexing strategy to compute the outputs of the orientations one after the other, all scales are processed in parallel and the biggest convolution kernel is 37 × 37, the convolution takes 128 × 128 × 8 × 37 = 4.85 × 10^6 clock cycles to process a single 128 × 128 image.

Figure B.1: Example applications.

Figure B.4: Invariant scattering convolution network [START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF]. Each layer applies a wavelet decomposition U_λ to its input and sends the result, after low-pass filtering and subsampling, to the next layer. The scattering coefficients S_λ(x) produced this way form the feature vector to be classified.

Figure B.6: Convolutional neural networks [48].

Figure B.7: Examples of features used in Viola-Jones [30, 136].

The output obtained for a face after filtering with this convolution kernel is given in Figure B.12.
For C1, the size of the filtering window is ∆k = 8. This feature extractor will be called HMIN θ=π/2 in the rest of the document. We then propose to reduce the size of this convolution kernel, which currently has 37 × 37 elements, down to 9 × 9 using bilinear interpolation, which allows it to process images 4 times smaller. This version of the descriptor will be called HMIN R θ=π/2.

Figure B.12: Response of the single S1 filter on a face.

Figure B.14: ROC curve obtained with HMIN R θ=π/2 on the CMU database.

Table B.4: Complexity and accuracy of different face detection methods. The false positive rates of the CFF and Viola-Jones were read from the ROC curves of their respective papers [50, 136] and are therefore approximate. All false positive rates correspond to 90% detection rates. The Classification column gives the complexity of classifying an image whose size is given in the Input size column. The Scanning column gives the complexity of the algorithm when scanning a full 640 × 480 VGA image. Complexities and memory footprints were evaluated for feature extraction only, without taking classification into account. Note also that no image pyramid is used here, to simplify the computations - if one were used, Viola-Jones would require far fewer resources than the CFF and HMIN thanks to the integral image representation.

Figure B.15: HOG [36].

Figure B.16: ConvNet for pedestrian detection [145]. The C XX layers are convolution layers, and the S XX layers are subsampling layers.

Figure B.18: Effect of the precision degradation on the input image.

B.4.3 Conclusion

In this Section, we presented a series of optimizations for HMAX aimed at easing its hardware implementation. Our contribution consists in reducing the precision of the input image pixels, reducing the precision of the Gabor filter coefficients, and using a Manhattan distance for the pattern-matching operations of the S2 layer. We also use methods proposed in the literature, namely Lloyd's algorithm to compress the S1 output and a method to reduce the complexity of S2. We showed that these simplifications have little impact on the accuracy of the model. We then presented the results of the hardware implementation, which we deliberately kept as naive as possible apart from the optimizations proposed here, and compared them with the literature. Our implementation processes images significantly more slowly than what is proposed in the literature; however, it uses fewer hardware resources, and our optimizations are fully compatible with the reference implementation. Future work will therefore consist in proposing an implementation that combines the advantages of both approaches, in order to obtain the most compact implementation with the highest possible throughput.

B.5 Conclusion
In this thesis, we proposed a solution to the problem of optimizing a bio-inspired algorithm for visual pattern classification, with the aim of implementing it on a dedicated hardware architecture. Our goal was to propose an architecture that is easily embeddable and generic enough to address different problems. Our choice fell on HMAX, because of the uniqueness of its architecture and its acceptable performance even with a reduced number of training examples, unlike ConvNet. Our first contribution consisted in optimizing HMIN, a lightweight version of HMAX, for two specific tasks, face detection and pedestrian detection, based on the fact that only certain features are useful. The performance we obtained for each of the two tasks is significantly lower than what is reported in the literature - however, we believe our algorithm has the advantage of being more generic, and we expect a hardware implementation to require extremely few resources. Our second contribution is a series of optimizations for the full HMAX algorithm, mainly based on efficient data coding. We showed that HMAX does not lose accuracy significantly when the precision of the input image pixels is reduced to 2 bits and that of the Gabor filter coefficients to a single bit. Although this implementation, naive apart from the optimizations named above, does not process as many images per second as what is reported in the literature, our optimizations can perfectly be used in conjunction with those of the reference algorithm, which would produce a particularly compact and fast implementation of this algorithm - this will be addressed in future research.

Figure 1.3: NeuroDSP architecture [START_REF] Paindavoine | NeuroDSP Accelerator for Face Detection Application[END_REF]. A NeuroDSP device is composed of 32 clusters, called P-Neuro, each constituted of 32 artificial neurons called PEs, thus representing a total of 1024 neurons. The PEs may be multiplexed, so that they can perform several instructions sequentially and thus emulate bigger neural networks. When timing is critical, one may instead cascade several NeuroDSP processors and use them as if they were a single device.

3 http://goo.gl/Ax6CoF

There are 16 different scales and four different orientations, thus totaling 64 filters. During the S1 stage, each filter is applied independently on the input image and the filtered images are fed to the next layer. The C1 stage gives a first level of location invariance to the features extracted in S1. It does so with maximum pooling operators: each C1 unit pools over several neighboring S1 units with a 50% overlap and feeds the S2 layer with the maximum value. The number of S1 units a C1 unit pools over depends on the scale of the considered S1 units. Furthermore, each C1 unit pools across two consecutive scales, with no overlap. This leads to a number of images divided by two, thus only 32 images are fed to the following layer. The parameters of the S1 and C1 layers are presented in Table 2.1. The filter bank has several filters, each having a specific wavelength, effective width, size and orientation.
The wavelength, effective width and size define the filter's scale.

Table 2.1: Parameters for the HMAX S1 and C1 layers.

Table 2.2: Comparison of descriptors.
Framework   Accuracy     Training                        Complexity
ISCN        High         None                            High
HMAX        High         Yes, requires few data points   High
HOG         Reasonable   None                            Low
SIFT        Reasonable   None                            Low
SURF        Reasonable   None                            Very low

Figure 3.3: Complexity repartition of Viola and Jones' algorithm when processing a 640 × 480 image with a 24 × 24 sliding window.

From Equations 3.7 to 3.13, we see that the integral image computation requires 2WH additions, the feature extraction needs N_op N_f N_w additions, and C_N^VJ needs WH multiplications and 2WH additions. Thus, we need a total of 4WH + N_op N_f N_w operations. [START_REF] Fausett | Fundamentals of Neural Networks: Architectures, Algorithms And Applications: United States Edition[END_REF]

Figure 3.5: Complexity repartition of the CFF algorithm, separated into three types of computations: MAC (97.8%), hyperbolic tangents ("Tanh", 0.88%) and sums (1.32%). We see here that the large majority of operations are MACs, toward which most effort should then be put for fine optimizations or hardware implementation.

..., which gives C_CFF^T2 = 1.75WH + 7(W + H) + 16 (3.35). Using those results in Equation 3.17, we finally get C_CFF = 168.75WH - 1038(W + H) + 5664 (3.36). Now that we have this general formula, let's compute the complexity involved in the classification of a typical 36 × 32 patch: we get 129.5 kOP. Let's now assume that we must find and locate faces in a VGA 640 × 480 image.

Memory print: Using the same method as for Viola-Jones in Section 3.1.1.1 and the CFF in Section 3.1.1.2, let's evaluate the memory print of HMIN. Since the C1 layer may be processed in-place, the memory print of HMIN is the same as that of its S1 layer (Equation 3.39), which produces 16 640 × 480 feature maps, coded on 32-bit single precision floating point numbers. Hence, its memory print is 19.66 MB.

From Equations 3.37 to 3.39, we get C_HMIN = 36456WH (3.40). If we aim to extract features from a typical 128 × 128 image for classification, as suggested in [31], it needs 597 MOP. When scanning a 640 × 480 image as done with the CFF in Section 3.1.1.2, we get a total of 11.2 GOP. From Equations 3.38 and 3.39, we also see that the convolution operations take 99.89% of the computation - thus, they clearly represent the main target of our optimizations.

Table 3.1: Accuracies of the different versions of HMIN on the LFW crop dataset.
Descriptor     HMIN            HMIN θ=π/2      HMIN R θ=π/2
Accuracy (%)   95.78 ± 0.97    90.81 ± 1.10    90.05 ± 0.98

... dataset at random to build the training set, as in Section 3.1.3.1. Once again, we used it to train the ...

Figure 3.9: ROC curves of the HMIN classifiers on the LFW crop dataset. They show the recognition rate w.r.t. the false positive rate; ideally, that curve would be 1 for every false positive rate x > 0 and 0 at x = 0. One can see a significant drop of performance when using HMIN θ=π/2 compared to HMIN - however using HMIN R θ=π/2 ...

Table 3.2: Complexity and accuracy of face detection frameworks. The false positive rates of the CFF and VJ frameworks were drawn from the ROC curves of their respective papers.

Finally, the S2 layer produces 2040 102 × 76 feature maps, which, using 32-bit floating point precision, would require 63.26 MB.
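For the record, the back-of-the-envelope HMIN figures quoted above (11.2 GOP for scanning a 640 × 480 image and 19.66 MB for the 16 single-precision S1 feature maps) can be reproduced with a few lines of arithmetic. The helper below is only an illustration of that bookkeeping, with the constants taken from the text; the function name is ours.

def hmin_budget(width, height, ops_per_pixel=36456, n_maps=16, bytes_per_val=4):
    # total operations (Equation 3.40: C_HMIN = 36456 * W * H)
    ops = ops_per_pixel * width * height
    # memory print of the 16 float32 S1 feature maps of size W x H
    mem = n_maps * width * height * bytes_per_val
    return ops, mem

ops, mem = hmin_budget(640, 480)
print(f"{ops / 1e9:.1f} GOP, {mem / 1e6:.2f} MB")  # -> 11.2 GOP, 19.66 MB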
Memory print: Let's now evaluate the memory print of that framework when processing a 640 × 480 input image. The C1^Y layer produces 32 634 × 474 feature maps, in which we assume the features are coded using 32-bit floating point precision, which needs 38.47 MB. In order to simplify our study, we then assume that the subsampling and normalization operations are performed in-place, and hence do not bring any additional memory need. The S1^Y layer produces 2 213 × 160 feature maps, hence needing 272.64 kB.

Let's evaluate this expression as a function of the width W and height H of the input image. In order to make it more tractable, we approximate it by neglecting the floor operators. Reusing Equations 3.81 and following [START_REF] Park | System-On-Chip for Biologically Inspired Vision Applications[END_REF], we have C_ConvNet(W, H) ≈ 38.8 × 10^3 WH - 1.12 × 10^6 (W + H) + 33.2 × 10^6 (3.92). It should be noted that we again neglected the classification stage. Considering input images are 78 × 126, we have C_ConvNet ≈ 484.84 MOP. Applying Equation 3.92 to the case where we process a 640 × 480 image, we get 11 GOP. From the previous analysis, we see that lots of MACs are computed at almost all stages, including the average downsampling ones. This is largely due to the C2 layer, with its high number of convolution filters. It is then clear that optimization efforts should be directed towards the computation of MACs.

Framework     False positive rate (%)   Scanning complexity (OP)   Classification complexity (OP)   Memory print   Input size
HOG           0.02 [36]                 12.96 M                    344.7 k                          4.37 MB        64 × 128
ConvNet       See caption               484.84 M                   11 G                             63.26 MB       78 × 126
HMIN R θ=0    30%                       13.05 M                    41.45 k                          1.2 MB         32 × 16

Table 4.1: Hardware resources utilized by Orchard's implementation [99].
Resource         Used     Available   Utilization (%)
DSP              717      768         93
BRAM             373      416         89
Flip-flops       66196    301440      21
Look-up tables   60872    150720      40

... amount that could fit on their device. At each location, pattern-matching operations are multiplexed by size, i.e. first all 4 × 4 × 4 in parallel, then 8 × 8 × 4, then 12 × 12 × 4 and finally 16 × 16 × 4. Responses are computed for two different orientations in parallel; this results in a total of 320 × 2 = 640 MAC operations to be performed in parallel at each clock cycle. Thus, this requires 640 multipliers, and 640 coefficients to be read at each clock cycle. As for the precision, each feature is coded on 16 bits to fit.

4.1.1.4 C2: Due to the simplicity of C2 in the original model, there is not much room for optimizations or implementation tricks here. Orchard et al.'s implementation simply gets the 320 results from S2 in parallel and uses them to perform the maximum operations with the previous values, again in parallel.

Table 4.3: Code books and partitions by scale for features computed in C1. Values were computed with the simplifications proposed in Sections 4.2.1 and 4.2.2 for S1, using Matlab's lloyds function.
i     1     2      3      4
C1    14    27     37     50
Q1    21    32     43     -
C2    42    82     118    154
Q2    62    100    136    -
C3    37    65     94     141
Q3    51    79     117    -
C4    81    148    209    284
Q4    114   178    246    -
C5    122   208    278    380
Q5    165   243    329    -
C6    175   309    427    559
Q6    242   368    494    -
C7    296   521    707    905
Q7    408   614    806    -
C8    499   868    1182   1492
Q8    633   1025   1337   -

... 2.1, with the exception that this time we add the simplification proposed here. Results are compiled with further optimizations in Table 4.4.
Table 4.4: Accuracies of HMAX with several optimizations on five classes of the Caltech101 dataset [START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. That table compiles the results of the experiments conducted in Sections 4.2.3, 4.2.4 and 4.2.5. The column on the left shows the results obtained in Section 4.2.2. Starting from the second column, each column shows the accuracies obtained on the 5 classes in binary classification tasks, as described before, taking into account the corresponding simplification as well as those referred to by the columns to its left.

Table 4.5: Offsets used to compute addresses in the c1 to s2 module.
scale s         1     2      3      4      5      6      7       8
C1 patch side   31    24     20     17     15     13     11      10
offset o_s      0     3844   6148   7748   8904   9804   10480   10964

Thus, we instantiated a pixmat component adapted to the maximum size of C1 feature maps, i.e. 31 × 31. The problem is that pixmat's en dout signal is only set when the whole matrix is ready, which makes it impractical for C1 feature maps smaller than 31 × 31.

Table 4.6: Mapping between N and dout scale, for N = 0, 4, 8, 12, 16.

HMAX's final stage is performed in the c2 module. It is synchronized, and thus has a clk input port expecting a clock signal, as well as an rst input port allowing the component to be reset. The data used to perform the computation is obtained through the din input port, and it arrives in parallel. The id in input port indicates which of the data from din are valid, and a new in input port warns about the arrival of new data. After performing the maximum operations, the results for all pre-learnt vectors in S2 are written in parallel into the dout output port, and the last output port, called new out, indicates that new data is available through dout.

Table 4.7: Resource utilization of the HMAX implementation on XC7A200TFBG484-1 with the proposed simplifications. The proportion of used flip-flops is high enough to cause problems during implementation. However, the biggest issue comes from the fact that we use far too many block RAMs for a single such target.

4.3.4 c2: As done in the c1 to s2 module presented in Section 4.3.2.5, we use a double-buffering design pattern to manage output data.

One of the most interesting contributions about HMAX hardware implementation is the work of Orchard et al., described in Section 4.1.1 - as mentioned in Section 2.2.2.1, there exist several implementations of either parts of the model or of the whole model on boards containing many FPGAs, but we shall focus here only on that work, as it is the only one to our knowledge aiming to implement the whole model on a single FPGA. In that work, they implemented their algorithm on a Virtex 6 XC6VLX240T FPGA, while we targeted an Artix-7 XC7A200TFBG484-1 device. Table 4.8 sums up the resources of those two devices; we see that the Virtex-6 FPGA has slightly more resources than the Artix-7, however the two devices have roughly the same resources.

T_S2 = ... [16 × 16 × 4 N_16 + 12 × 12 × 4 (N_12 - N_16) + 8 × 8 × 4 (N_8 - N_12) + 4 × 4 × 4 (N_4 - N_8)] (4.28) = 2240 N_16 + 1600 N_12 + 960 N_8 + 320 N_4 (4.29).

Let's now evaluate the N_i. Considering that some of the C1 feature maps are smaller than some of the pre-learnt patches, and that in such cases no computations are performed, we may write N_i = Σ_{k=1..8} max(128/∆_k - i + 1, 0)²
(4.30), with ∆_k defined in Table 2.1. Hence we have N_16 = 435, N_12 = 821, N_8 = 1437, N_4 = 2309 (4.31), which gives T_S2 = 2240 × 435 + 1600 × 821 + 960 × 1437 + 320 × 2309 (4.32), and thus T_S2 = 110.16 × 10^6 clock cycles. Finally, C2 processes the data as soon as it arrives, in a pipelined manner, as done in C1. Hence, it doesn't bring any bottleneck. We see from the above analysis that the stage that takes most time is S2, with 4.41 × 10^6 clock cycles per image. Assuming a system clock of 100 MHz, we get 22.69 FPS.

4.5 Discussion

Table 4.8: Hardware resources comparison between the Virtex-6 FPGA used in [99] and the Artix-7 200T we chose.
Resource         XC6VLX240T   Artix 7 200T
DSP              768          740
BRAM             416          365
Flip-flops       301440       269200
Look-up tables   150720       136400

7. if d_opp > µR, where µ is a strictly positive constant, accept the merge and go back to step 3 using C\{c} instead of C; if d_opp ≤ µR, reject the merge and go back to step 3, selecting another cluster,
8. repeat steps 3 to 7 until all clusters from C have been considered, which leads to a new set of clusters C^2,
9. repeat steps 2 to 8 using C^2 instead of C^1 and c^2_1 ∈ C^2 instead of c^1_1, and continue using C^3, C^4 and so on until no further merge is possible.

... footprint constraints. It is made of 32 blocks called P-Neuro, each consisting of 32 elementary processors (PEs), for a total of 1024 PEs. Each of these PEs can be seen as a neuron of an artificial neural network, such as the Perceptron. Within a P-Neuro, all PEs execute the same operation on different data, thus forming a SIMD (Single Instruction Multiple Data) architecture, perfectly suited to the parallel computations required by artificial neural networks. This architecture is shown in Figure B.2. The work presented in this document was carried out within this project.

Figure B.2: NeuroDSP architecture [5].

In this summary, we first review the state of the art of the literature in this field - we present the main machine learning methods and their hardware implementations, and state the problems addressed in the rest of the document. A Section is then devoted to our feature selection method for visual object classification. We then present an optimized implementation of an image classification algorithm on a reconfigurable hardware platform. Finally, the last Section presents the conclusions of our work.

B.2 State of the art

This Section offers a brief review of the literature related to the work presented here. We begin with the theoretical foundations of machine learning and of feature extraction from a signal. We then review the existing hardware implementations of these methods. Finally, we propose a discussion in which we establish the problems addressed in this document.

B.2.1 Theoretical foundations

B.2.1.1 Classification methods

There are many approaches allowing a machine to learn by itself to classify patterns; we review the main ones here. An extremely simple approach consists in keeping all the vectors available a priori, which form the so-called training set. When classifying an unknown vector, a distance (for example Euclidean) to all the vectors of the training set is evaluated, and only the K nearest ones are considered. Each of these vectors then votes for its own category, and the category with the most votes is retained: the unknown vector is considered to belong to that category. This approach is called KNN [6], for K-Nearest Neighbors, and has the advantage of being extremely simple to implement. However, when the number of examples in the training set becomes large or the vectors become too long, this method becomes too complex and too memory-hungry to be efficient, particularly in an embedded context.

Another classification method we use in this work is the RBF network, which belongs to the family of so-called kernel methods. These consist in evaluating a set of radial basis functions at the point represented by the vector to be classified; the values produced by these functions form a new vector, which is then classified by a linear classifier, e.g. a Perceptron. In that case, however, the training technique used is simply a least-squares fit.

There are many other pattern classification methods, among which are in particular neural networks (cf. the Perceptron in Section B.1) and more statistical approaches such as Support Vector Machines (SVM).

Neural networks have recently become extremely popular, following their use by companies such as Facebook and Google, notably for their image recognition applications. We are only interested here in so-called feedforward architectures, in which neurons are organized in layers and each unit transmits information to neurons of the next layer - information thus propagates in a single direction. This kind of architecture is shown in Figure B.3. The connections between units are called synapses, and each of them is assigned a synaptic weight. Thus, the input value z of a neuron with N inputs and synaptic weights w_1, w_2, ..., w_N is given by z = w_0 + Σ_{i=1..N} w_i x_i (B.1), with x_i the values propagated by the units of the previous layer and w_0 a bias, needed for mathematical reasons. A non-linear function, called the activation function, is then applied to z, and the result is propagated to the neurons of the next layer. Training such a neural network to perform a task consists in finding the right synaptic weights, by means of a learning algorithm.

Figure B.3: Feedforward architecture.
In the case of multi-layer feedforward neural networks, the most widely used training algorithm, owing to its efficiency and low algorithmic complexity, is stochastic gradient descent - indeed, it can easily be carried out by means of a technique called error back-propagation, which allows the derivative of the cost function to be optimized to be evaluated quickly [START_REF] Rumelhart | Learning Internal Representations by Error Propagation[END_REF][START_REF] Rumelhart | Learning representations by back-propagating errors[END_REF].

B.2.1.2 Feature extraction methods

To ease the task of the classifier, one may use a feature extraction algorithm, the purpose of which is to transform the signal to be classified ...

Table B.1: Parameters of the S1 and C1 layers of HMAX [31].
             C1 layer                                    S1 layer
Scale band   Max filter size (N_k × N_k)   Overlap ∆_k   Filter sizes k      Gabor σ       Gabor λ
Band 1       8 × 8                         4             7 × 7, 9 × 9        2.8, 3.6      3.5, 4.6
Band 2       10 × 10                       5             11 × 11, 13 × 13    4.5, 5.4      5.6, 6.8
Band 3       12 × 12                       6             15 × 15, 17 × 17    6.3, 7.3      7.9, 9.1
Band 4       14 × 14                       7             19 × 19, 21 × 21    8.2, 9.2      10.3, 11.5
Band 5       16 × 16                       8             23 × 23, 25 × 25    10.2, 11.3    12.7, 14.1
Band 6       18 × 18                       9             27 × 27, 29 × 29    12.3, 13.4    15.4, 16.8
Band 7       20 × 20                       10            31 × 31, 33 × 33    14.6, 15.8    18.2, 19.7
Band 8       22 × 22                       11            35 × 35, 37 × 37    17.0, 18.2    21.2, 22.8

(B.3), where γ is the aspect ratio, λ the wavelength of the cosine, θ the orientation of the filter and σ the standard deviation of the Gaussian. S1 contains filters of 16 different scales and 4 different orientations, totaling 64 filters. The parameters of the filters are given in Table B.1. The C1 layer provides a first level of invariance to translation and scale thanks to a set of maximum filters, whose window size N_k and overlap ∆_k depend on the considered scale and are given in Table B.1. The third layer, S2, compares the outputs of the C1 layer with a set of pre-learnt patterns by means of radial basis functions. Finally, the last layer, C2, keeps only the maximum response for each of these pre-learnt patterns, thus forming the feature vector. This algorithm is shown in Figure B.5.

Other feature extraction or classification methods (or both), such as SIFT [START_REF] David | Distinctive Image Features from Scale-Invariant Keypoints[END_REF], SURF [START_REF] Bay | Speeded-Up Robust Features (SURF)[END_REF] or Viola-Jones [START_REF] Viola | Robust real-time face detection[END_REF], have also enjoyed some popularity. Finally, one cannot fail to mention convolutional neural networks [START_REF] Lecun | Convolutional networks and applications in vision[END_REF], which are the main contributors to the current success of neural networks. Their approach is very simple: rather than separating feature extraction from classification, these methods consider the whole algorithmic chain and train it in its entirety. The extraction of ...

Table B.2: Comparison of the main feature extractors.

There are also numerous software implementations, but we will not mention them in this summary.
HMAX itself has been implemented many times on reconfigurable hardware (FPGA) [START_REF] Park | System-On-Chip for Biologically Inspired Vision Applications[END_REF][START_REF] Al Maashri | A hardware architecture for accelerating neuromorphic vision algorithms[END_REF][START_REF] Debole | FPGA-accelerator system for computing biologically inspired feature extraction models[END_REF][START_REF] Maashri | Accelerating neuromorphic vision algorithms for recognition[END_REF][START_REF] Park | Saliencydriven dynamic configuration of HMAX for energy-efficient multi-object recognition[END_REF][START_REF] Sun Park | An FPGAbased accelerator for cortical object classification[END_REF][START_REF] Park | A reconfigurable platform for the design and verification of domain-specific accelerators[END_REF][START_REF] Kestur | Emulating Mammalian Vision on Reconfigurable Hardware[END_REF] - recently, the most promising implementation of this model is the one proposed by [99]. Work in this direction has also been carried out for convolutional neural networks [START_REF] Farabet | Neu-Flow: A runtime reconfigurable dataflow processor for vision[END_REF][START_REF] Cavigelli | Origami: A Convolutional Network Accelerator[END_REF].

B.2.3 Discussion

Our goal is to propose an embeddable and generic pattern recognition system. To that end, we choose a feature extractor that will serve as the basis for our future work; the classification problem is not addressed here. Table B.2 presents a comparison of the main descriptors. In view of this comparison, we decided to focus our study on HMAX, which moreover guarantees a certain genericity. Our goal is to adapt this algorithm to different tasks while keeping a generic architecture, and to optimize these algorithms, notably in terms of coding, to ease their porting to hardware targets, which raises the following problems that we strive to answer:

Table B.2:
Method                          Accuracy     Training required             Complexity
Scattering Transform            High         No                            High
HMAX                            High         Yes, requires little data     High
HOG                             Reasonable   No                            Low
SIFT                            Reasonable   No                            Low
SURF                            Reasonable   No                            Very low
Convolutional neural networks   Very high    Yes, requires a lot of data   High

Table B.3: Accuracy of the different versions of HMIN on the LFW crop database.

B.3.1.3 HMIN and optimizations

From the paper by Serre et al. [31], we know that to detect and locate an object in a scene, it is preferable to use only the first two layers of HMAX, i.e. S1 and C1. In order to see which features are the most relevant, and thus which features can be removed without impacting the accuracy of the system too much, we observed the responses of the different Gabor filters for faces. The results are shown in Figure B.10. We can see that the most relevant information seems to be that corresponding to the orientation θ = π/2. Moreover, we can see that the information is similar from one scale to the next. Thus, we propose to keep only the filters of orientation θ = π/2, and to sum them, so as to have only one convolution left. The appearance of this convolution kernel is given in Figure B.11.

We will begin by describing the methods against which we will compare our approach.
We chose to compare ourselves with the state of the art in the field, namely HOG and a particular implementation of a convolutional neural network, which we will call ConvNet [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF]. The only difference with face detection lies in the fact that, this time, we are interested in vertical objects, and we therefore chose to keep the filters of orientation θ = 0. We will call the resulting algorithms HMIN θ=0 and HMIN R θ=0.

Table B.4:
Method          False positive rate (%)   Scanning complexity (OP)   Classification complexity (OP)   Memory footprint   Input size
VJ              5.32 × 10^-5 [136]        20.7 M                     2.95 k                           1.48 MB            24 × 24
CFF             5 × 10^-5 [50]            50.7 M                     129.5 k                          64.54 MB           36 × 32
HMIN R θ=π/2    4.5                       26.1 M                     82.9 k                           1.2 MB             32 × 32

B.3.3 Conclusion

In this Section, we presented our contribution to the optimization of a feature extraction method. The initial algorithm is based on HMAX, but uses only its first two layers, S1 and C1. The S1 layer is made of 64 Gabor filters, with 16 scales and 4 different orientations. By studying the feature maps produced by S1 for different specific tasks, we concluded that we could keep only one orientation, and sum the convolution kernels of the 16 remaining filters so as to have only one left, oriented horizontally in the case of face detection and vertically in the case of pedestrian detection. Our results show that our system has an acceptable complexity, but its accuracy is lower. However, the architecture is extremely simple, and can easily be implemented on a hardware target. Moreover, our architecture is generic: changing the application simply consists in changing the weights of the convolution kernel, whereas the other architectures presented would require deeper changes of the hardware architecture. Finally, the memory footprint of our method is very low, which allows its implementation on highly constrained systems.

Table B.5: Complexity and accuracy of different pedestrian detection methods. The false positive rate of HOG was obtained from the DET curves presented in the original paper [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], and is therefore approximate. The false positive rates presented here correspond to 90% detection rates. The results for the ConvNet are not directly given here, because the method used in the literature to evaluate its accuracy is different from what was done for HOG [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF]. However, the contributors evaluated the accuracy of HOG according to the same criterion, and it turns out that HOG produces three times more false positives on this same database than ConvNet. Because of these differences in methodology, it is difficult to compare our results directly with those of the literature - however, the results presented here suggest a clear disadvantage in using HMIN R θ=0 for this task.

The next Section is dedicated to a proposed hardware implementation of the complete HMAX algorithm. The last Section is dedicated to the final discussions and to the general conclusions of our work.
Table B.5:
Method        False positive rate (%)   Scanning complexity (OP)   Classification complexity (OP)   Memory footprint   Input size
HOG           0.02 [36]                 12.96 M                    344.7 k                          4.37 MB            64 × 128
ConvNet       See caption               484.84 M                   11 G                             63.26 MB           78 × 126
HMIN R θ=0    30%                       13.05 M                    41.45 k                          1.2 MB             32 × 16

Table B.6: Accuracy of HMAX with different optimizations.

... corresponds to -1 and 1 to +1. The number of bits for the pixels of the input image is 2. This approach is similar to what was proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF].

B.4.1.3 Other optimizations

We applied a set of other optimizations. The output of S1 is compressed on only 2 bits thanks to Lloyd's method, as proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF]. We also reduced the number of pre-learnt vectors in S2 thanks to the method of Yu et al. [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF]. Moreover, we used a Manhattan distance instead of a Euclidean distance in the pattern-matching operations of S2. Combining these optimizations with a 2-bit precision for the input image pixels and a 1-bit precision for the Gabor filters, we obtain the results presented in Table B.6.

Table B.7: Hardware resource utilization of HMAX on an Artix7-200T.
Resource         Estimate   Available   Utilization (%)
Look-up tables   58204      133800      43.50
Flip-flops       158161     267600      59.10
Inputs/outputs   33         285         11.58
Global buffers   6          32          18.75
Block RAM        254        365         69.59

Table B.7 presents an estimate of the hardware resource utilization. Concerning timing, a theoretical study indicates that, based on a 100 MHz system clock frequency, our system can process 22.69 images per second, against 193 for the implementation presented in [99]. This is due to a very different organization of the resources, notably regarding multiplexing. However, our implementation requires fewer hardware resources, and it is important to point out that our optimizations and those proposed by Orchard et al. [99] are perfectly compatible.

Table B.6:
              Input and filter coefficients   Lloyd's method   Reduction of S2 patches   Manhattan distance
Airplanes     95.49 ± 0.81                    94.43 ± 0.88     92.07 ± 0.69              91.83 ± 0.63
Cars          99.45 ± 0.41                    99.35 ± 0.40     98.45 ± 0.54              98.16 ± 0.60
Faces         92.97 ± 1.49                    90.11 ± 1.05     82.71 ± 1.32              83.35 ± 1.40
Leaves        96.83 ± 0.79                    97.21 ± 0.89     94.61 ± 1.12              93.20 ± 1.42
Motorbikes    95.54 ± 0.79                    94.79 ± 0.62     88.83 ± 1.10              89.08 ± 1.31

Footnotes: By Michael Shick - own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=44405988. By Arvin Calspan Advanced Technology Center; Hecht-Nielsen, R. Neurocomputing (Reading, Mass.: Addison-Wesley, 1990). In the literature the definition of the activation function may be slightly different, with "≥" signs instead of ">" in Equation 2.4 and with θ > 0. A saliency is a region that is likely to contain information in an image; saliencies are typically determined with edge detection and the frequency of occurrence of a pattern in the image - the less frequent, the more unusual and thus the more salient that pattern shall be. A MAC is a multiplication between an input datum and a coefficient, the result of which is added to a value computed before by another MAC operation. Naive Bayes are a class of classification frameworks, the principle of which is to assume each component of the feature vector is independent from the others - hence the word naive.
We consider the case where the initial scale is 1 and ∆ = 1 - see [START_REF] Viola | Robust real-time face detection[END_REF] for more information.

Acknowledgements: ... and accepting to review it. Finally, I would like to thank the ANRT, i.e. the French National Research and Technology Agency, for giving me the opportunity to carry out this PhD under the CIFRE program.

c2 to out: This is the very final stage of our HMAX hardware implementation. It gets the data given by the c2 module in parallel, and serializes it in a way very similar to that of the c1 to s2 module. Its input pins consist of the usual clk and rst, for synchronization and reset purposes respectively, as well as a port called din that gets the input data and new in that indicates when new data is available. Serial output data is written to the dout output port, and the en dout output port indicates when the data from dout is valid. The parallel data from din is simply read and written serially into the dout output port, while en dout is set. When this is done, en dout is unset again.

In this Section, we described the architecture of our VHDL model for the HMAX framework, taking into account our own optimizations along with other simplifications from the literature. That implementation was purposely naive, in order to compare it with the state of the art. The next Section focuses on the implementation results of that model on a hardware target.

Implementation results

In the previous Section, we described the architecture of our VHDL model. The next step is to synthesize and implement it for a particular device. We chose to target a Xilinx Artix-7 200T FPGA. Both synthesis and implementation were performed with Xilinx Vivado tools. We first examine the utilization of hardware resources - in particular, we shall see that our model does not fit on a single device as is. We then study the timing constraints of our system, including the latency it induces.

Resource utilization

We synthesized and implemented our VHDL code using Xilinx's Vivado 2016.2, targeting a XC7A200TFBG484-1 platform. Results are shown in Table 4.7. One can see that there is still room for other processes on the FPGA, for instance a classifier. Now that we have studied the feasibility of the implementation of our model on hardware devices, let's study the throughput that it may achieve.

Appendix A: RBF networks training

A.1 Overview

Radial Basis Function neural networks (RBF) fall into the field of generative models. As suggested by the name, after fitting a model to a training set, that type of model may be used to generate new data [START_REF] Bishop | Pattern recognition and machine learning[END_REF] similar to the real one. RBFs are also considered kernel models, in which the data is processed by so-called kernel functions before the actual classification; the goal is to represent the data in a new space, in which it is expected to be more easily linearly separable - particularly when that new space is of larger dimensionality than the space of the input data. Other well-known kernel-based models are e.g. SVM. Although those models may be used for both classification and regression tasks, we shall detail here their use for classification tasks only. A short presentation of such models is proposed in Section 2.

A.2 Clustering

This stage consists in reducing the training set to a more manageable size.
The method we chose is based on the work of Musavi et al., but is a bit simpler, as we shall see. It consists in merging neighboring vectors of the same category into clusters, each represented in the network by a kernel function made of a center, i.e. a representation in the same space of one or several data points from the training set, and a radius, indicating the generalization relevance of that center: the bigger the radius, the better the center represents the dataset. As we shall see, this method allows building highly non-linear boundaries between classes.

Let X^1 = {x^1_1, x^1_2, ..., x^1_N} be the training set composed of the N vectors x^1_1, x^1_2, ..., x^1_N, and T^1 = {t^1_1, t^1_2, ..., t^1_N} be their respective labels. As for many training algorithms, it is important that the x^1_i are randomized, so that we avoid the case where all vectors of a category have neighboring indexes i. Let also d(a, b) denote the distance between the vectors a and b. Although any distance could be used, we focus here on a typical Euclidean distance, so that d(a, b) = sqrt(Σ_{j=1..M} (a_j - b_j)²), where a and b have M dimensions.

The clustering algorithm proceeds as follows [START_REF] Musavi | On the training of radial basis function classifiers[END_REF]:
1. map each element x^1_i of X^1 to a cluster c^1_i ∈ C^1, the radius r^1_i of which is set to 0,
2. select the first cluster c^1_1 from C^1,
3. select a cluster c at random from the ensemble C of the other clusters of the same class - let x be its assigned vector and r its radius,
4. merge the two clusters into a new one c^2_1, the vector x^2_1 of which is the centroid of c^1_1 and c:
5. compute the distance d_opp between c^2_1 and the closest cluster ĉ ∈ C^1 of another category,
6. compute the radius r^2_1 of the new cluster c^2_1, as the distance between the new center x^2_1 and the furthest point of the new cluster:
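As an illustration of the merge-based clustering sketched in the steps above (and in steps 7-9 listed earlier), here is a simplified Python sketch. It is our own illustration under stated assumptions, not the exact procedure of the thesis or of Musavi et al.: in particular it tries merges exhaustively rather than at random, and it recomputes centroids from all member points.

import numpy as np

def cluster_training_set(X, T, mu=1.0):
    # X: (N, M) training vectors, T: length-N labels, mu: the constant of step 7.
    # Returns a list of (center, radius, label) tuples.
    clusters = [{"points": [x], "label": t} for x, t in zip(np.asarray(X, float), T)]

    def center_of(c):
        return np.mean(c["points"], axis=0)

    merged = True
    while merged:                                   # keep passing until no merge is possible
        merged = False
        for i in range(len(clusters)):
            for j in range(len(clusters)):
                ci, cj = clusters[i], clusters[j]
                if i == j or ci["label"] != cj["label"]:
                    continue
                points = ci["points"] + cj["points"]
                center = np.mean(points, axis=0)                            # step 4: centroid
                radius = max(np.linalg.norm(center - p) for p in points)    # step 6: radius
                opp = [c for c in clusters if c["label"] != ci["label"]]
                d_opp = (min(np.linalg.norm(center - center_of(c)) for c in opp)
                         if opp else np.inf)                                # step 5
                if d_opp > mu * radius:                                     # step 7: accept merge
                    keep = [c for k, c in enumerate(clusters) if k not in (i, j)]
                    clusters = keep + [{"points": points, "label": ci["label"]}]
                    merged = True
                    break
            if merged:
                break

    return [(center_of(c),
             max(np.linalg.norm(center_of(c) - p) for p in c["points"]),
             c["label"]) for c in clusters]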
296,017
[ "778151" ]
[ "227" ]
01373668
en
[ "phys", "spi", "info" ]
2024/03/04 23:41:48
2018
https://hal.science/lirmm-01373668/file/A%20novel%20EMG%20interface.pdf
W Tigra email: [email protected] B Navarro A Cherubini X Gorron A Gelis C Fattal D Guiraud C Azevedo Coste A novel EMG interface for individuals with tetraplegia to pilot robot hand grasping Keywords: Control, electromyographic (EMG), grip function, robot hand, tetraplegia This article introduces a new human-machine interface for individuals with tetraplegia. We investigated the feasibility of piloting an assistive device by processing supra-lesional muscle responses online. The ability to voluntarily contract a set of selected muscles was assessed in five spinal cord-injured subjects through electromyographic (EMG) analysis. Two subjects were also asked to use the EMG interface to control palmar and lateral grasping of a robot hand. The use of different muscles and control modalities was also assessed. These preliminary results open the way to new interface solutions for high-level spinal cord-injured patients. I. INTRODUCTION Consequences of complete spinal cord injury (SCI) are often devastating for patients. This observation is particularly true for trauma at cervical levels (tetraplegia), since this impedes the use of the four limbs. Indeed, a complete SCI prevents any communication between the central nervous system and the sub-lesional peripheral nervous system, which receives no cervical commands. However, moving paralyzed limbs after such trauma is still possible, as for example when sufficient electric current is applied. Cells (neurons or myocytes), are then excited and generate the action potentials responsible for muscle contraction [START_REF] Keith | Implantable functional neuromuscular stimulation in the tetraplegic hand[END_REF], [START_REF] Billian | Upper extremity applications of functional neuromuscular stimulation[END_REF], [START_REF] Keith | Neuroprostheses for the upper extremity[END_REF], [START_REF] Hoshimiya | A multichannel FES system for the restoration of motor functions in high spinal cord injury patients: a respiration-controlled system for multijoint upper extremity[END_REF]. Nevertheless, the interaction of the tetraplegic person with his/her electrical stimulation device, to control the artificial contractions and achieve a given task at the desired instant, is still problematic. The reason is that both the range of possible voluntary movements, and the media available to detect intention, are limited. Various interface types have therefore been tested in recent years. For lower limbs, these interfaces include push buttons on walker handles in assisted-gait [START_REF] Guiraud | An implantable neuroprosthesis for standing and walking in paraplegia: 5year patient follow-up[END_REF], accelerometers for movement detection in assisted-sit-to-stand [START_REF] Jovic | Coordinating Upper and Lower Body During FES-Assisted Transfers in Persons With Spinal Cord Injury in Order to Reduce Arm Support[END_REF], electromyography (EMG) [START_REF] Moss | A novel command signal for motor neuroprosthetic control[END_REF] and evoked-electromyography (eEMG) [START_REF] Zhang | Evoked electromyography-based closed-loop torque control in functional electrical stimulation[END_REF] and, most recently, brain computer interfaces (BCI) [START_REF] King | The feasibility of a brain-computer interface functional electrical stimulation system for the restoration of overground walking after paraplegia[END_REF]. 
For upper limbs (restoring hand movement), researchers have proposed the use of breath control, joysticks, electromyography (EMG) [START_REF] Knutson | Simulated neuroprosthesis state activation and hand-position control using myoelectric signals from wrist muscles[END_REF], shoulder movements [START_REF] Hart | A comparison between control methods for implanted fes hand-grasp systems[END_REF], and voluntary wrist extension [START_REF] Bhadra | Implementation of an implantable joint-angle transducer[END_REF]. In this last work, a wrist osseointegrated Hall effect sensor implant provided the functional electrical stimulation (FES) of a hand neuroprosthesis. Keller et al. proposed using surface EMG from the deltoid muscle of the contralateral arm to stimulate the hand muscles [START_REF] Keller | Grasping in high lesioned tetraplegic subjects using the EMG controlled neuroprosthesis[END_REF]. In [START_REF] Thorsen | A noninvasive neuroprosthesis augments hand grasp force in individuals with cervical spinal cord injury: The functional and therapeutic effects[END_REF], the EMG signal from the ipsilateral wrist extensor muscles was used to pilot a hand neuroprosthesis. An implanted device [START_REF] Memberg | Implanted neuroprosthesis for restoring arm and hand function in people with high level tetraplegia[END_REF] took advantage of the shoulder and neck muscles to control the FES applied to the arm and hand muscles. EMG signals were also used to control an upper limb exoskeleton in [START_REF] Dicicco | Comparison of control strategies for an EMG controlled orthotic exoskeleton for the hand[END_REF]. Orthotics and FES can be effective in restoring hand movements, but the piloting modalities are often unrelated to the patient's level of injury and remaining motor functions, making the use of these devices somewhat limited. We believe that poor ergonomics and comfort issues related to the piloting modes also explain this low usage. In this paper, we therefore present a control modality closely linked to the patients remaining capacities in the context of tetraplegia. We propose here to evaluate the capacity and comfort of contracting supra-lesional muscles [START_REF] Tigra | Ergonomics of the control by a quadriplegic of hand functions[END_REF], and assess the feasibility of using EMG signals as an intuitive mode of controlling of functional assistive devices for upper limbs. In this preliminary study, we focus on the comfort and capacity for contracting four upper limb muscles (trapezius, deltoid, platysma and biceps) in individuals with tetraplegia. We then investigate the feasibility of using these contractions to control the motions of a robot hand. A robot hand was preferred to conventional grippers since it allows manipulators or humanoids to handle complex shaped parts or objects that were originally designed for humans, at the cost of more sophisticated mechanical designs and control strategies [START_REF] Cutkosky | On grasp choice, grasp models, and the design of hands for manufacturing tasks[END_REF], [START_REF] Bicchi | Hands for dexterous manipulation and robust grasping: a difficult road toward simplicity[END_REF]. 
Recently, robot hand usage has been extended to the design of prostheses for amputees, under the control of brain-computer interfaces [START_REF] Weisz | A user interface for assistive grasping[END_REF], or EMG signals [START_REF] Farry | Myoelectric teleoperation of a complex robotic hand[END_REF], [START_REF] Zollo | Biomechatronic design and control of an anthropomorphic artificial hand for prosthetic and robotic applications[END_REF], [START_REF] Cipriani | On the shared control of an EMG-controlled prosthetic hand: Analysis of user[END_REF], [START_REF] Kent | Electromyogram synergy control of a dexterous artificial hand to unscrew and screw objects[END_REF]. However, to our knowledge, surface EMG signals (in contrast to neural signals [START_REF] Hochberg | Reach and grasp by people with tetraplegia using a neurally controlled robotic arm[END_REF], [START_REF] Pfurtscheller | Thought control of functional electrical stimulation to restore hand grasp in a patient with tetraplegia[END_REF]) have never been used by tetraplegic individuals to pilot robot hands. CWRU [START_REF] Moss | A novel command signal for motor neuroprosthetic control[END_REF], for example, used EMG signals to pilot the patient's own hand through FES, whereas Dalley et al. [START_REF] Dalley | A method for the control of multigrasp myoelectric prosthetic hands[END_REF] used EMG within a finite state machine to control a robot hand, but with healthy subjects. Furthermore, in most of the cited works, a single motor was used to open or close a finger, a design constraint that impedes precise hand postures and grasps. Using a fully dexterous robot hand allowed us to further investigate the possibilities of an EMG interface to control different grasping modalities owing to the visual feedback provided by the robot hand. Furthermore, the dimensions and degrees of freedom are very close to those of the human hand, therefore providing the user an intuitive representation of the final movement that he/she can control with, for example, FESbased hand movement restoration. The goal of the study was two-fold: (i) to assess the ability of tetraplegic patients to pilot a robot hand device via muscle contractions even though the contractions are not functional. The EMG signals came from supra-lesional muscles that can be very weak and unable to produce any movement; and (ii) to compare different control modalities. In the following section, we present the protocol and experimental setup. We then present the results on the efficacy and comfort of the continuous or graded contraction of different muscles, along with details on the participants capacity to pilot the robot hand using these contractions. II. MATERIAL AND METHODS A. Subjects and selected muscles The study was conducted during scheduled clinical assessments at the Propara Neurological Rehabilitation Center in Montpellier, France. Thus, the experiments had to be of limited duration. The subjects were informed verbally and in writing about the procedure and gave their signed informed consent according to the internal rules of the medical board of the Centre Mutualiste Neurologique Propara. The experiments were performed with five tetraplegic male subjects with lesional levels between C5 and C7 (see Table I). Subject 2 had undergone muscle-tendon transfer surgery at the time of inclusion. Surface BIOTRACE Electrodes (Controle graphique S.A, France) were used for EMG recordings. 
Pairs of surface recording electrodes (1 cm apart) were positioned above the four muscles on each body side. Subjects did not receive any pre-training before these experiments. They were only instructed on the movements for contracting the various muscles. As the muscles selected to control hand grasp devices are likely to be used in a daily context by tetraplegic subjects, these muscles should be under voluntary control. The targeted tetraplegic patients had no muscle under voluntary control below the elbow. The use of facial muscles to pilot a hand grasp device has never been studied, because social acceptability would probably be problematic. In addition, muscle synergies were sought (e.g., hand closing could be linked to elbow flexion, as performed via the biceps or deltoid muscle). For these reasons, we chose to study the EMG activity of four upper arm muscles (right and left): the middle deltoid, the superior trapezius, the biceps and the platysma. Nevertheless, there were slight differences in these eight muscles based on each subject's remaining ability. (Footnote 1: The ASIA (American Spinal Injury Association) Impairment Scale (AIS) classifies the severity (i.e. completeness) of a spinal cord injury. The AIS is a multi-dimensional approach to categorize motor and sensory impairment in individuals with SCI. It identifies sensory and motor levels indicative of the most rostral spinal levels, from A (complete SCI) to E (normal sensory and motor function) [START_REF] Kirshblum | International standards for neurological classification of spinal cord injury (revised 2011)[END_REF].) EMG signals were initially recorded on the ipsilateral and contralateral sides of the dominant upper limb. Yet, patients 1 and 3 showed signs of fatigue and did not use the contralateral (left) limb. The superior trapezius, middle deltoid, biceps and platysma muscles of the ipsilateral side of the dominant (right) upper limb were thus studied for these subjects. For subjects 2 and 4, both (left and right) superior trapezii, middle deltoids, bicepses, and platysmas were considered. For patient 5, the deltoid was replaced by the middle trapezius, which has a similar motor schema, since strong electrocardiogram signals were observed on the deltoid EMG signal. To guarantee that the selected EMG would not impede available functionality, the patients' forearms were placed in an arm brace and EMG signals were recorded with quasi-isometric movements.

B. EMG processing

Surface EMG signals were recorded with an insulated National Instruments acquisition card NI USB 6218, 32 inputs, 16-bit (National Instruments Corp., Austin, TX, USA). BIOVISION EMG amplifiers (Wherheim, Germany) were used, with the gain set to 1000. The acquisition card was connected to a battery-run laptop computer. The acquisition was made at 2.5 kHz. For the first three subjects, the data processing was offline: EMG data were filtered with a high-pass filter (20 Hz, fourth-order Butterworth filter, zero phase). Then, a low-pass filter was applied to the absolute value of the EMG to obtain its envelope (2 Hz, fourth-order Butterworth filter). The data processing was online for the other two subjects, in order to control the robot hand motion. We applied the same filtering except for the first filter, which had a non-zero phase. In all cases, the filtered EMG signal is denoted by s(t). A calibration phase was performed for each muscle's EMG. Subjects were asked to first relax the muscle and then to strongly contract it.
A calibration phase was performed for each muscle's EMG. Subjects were asked to first relax the muscle and then to strongly contract it. The corresponding EMG signals were stored and post-processed to obtain the maximum envelope. The thresholds were then set as a proportion of the normalized value of the EMG signal (the value for a maximal contraction being 1). The high and low thresholds were experimentally determined as s_L = 0.3 ± 0.1 and s_H = 0.44 ± 0.14 through the calibration process, in order to avoid false detections due to noise while keeping them as low as possible, so as to require only a small effort from the patient. These thresholds, s_L and s_H (s_H > s_L > 0), were used to trigger the states of the robot hand finite state machine (FSM), as explained below. FSMs have been used in some myoelectric control studies, mostly with healthy or amputee subjects, but never with tetraplegic subjects [START_REF] Dalley | A method for the control of multigrasp myoelectric prosthetic hands[END_REF]. In our study, the goal was to determine whether the muscles in the immediate supra-lesional region could be used by tetraplegic patients to control a robot hand. We relied on myoelectric signals, even from very weak muscles that were unable to generate sufficient torque to pilot the hand. As we controlled only three hand states through event-triggered commands, an FSM was appropriate. In contrast, EMG pattern recognition is mostly used to progressively pilot several hand movements from many sensors. (Fig. 1. Finite state machine used to control the hand in Mode 5; states: open hand, palmar pinch, key-grip.) C. Robot hand control We chose to use the robot hand since it gives patients much more realistic feedback on task achievement (via the grasp of real objects) compared to a virtual equivalent (e.g., a simulator). With a real (yet robot) hand, patients can perform the task as if FES had been used on their own hand. The Shadow Dexterous Hand (Shadow Robot Company, London, UK) closely reproduces the kinematics and dexterity of the human hand. The model used here is a right hand with 19 cable-driven joints (with joint angles denoted q_i for each finger i = 1, . . . , 5): two in the wrist, four in the thumb and little finger, and three in each of the index, middle and ring fingers. Each fingertip is equipped with a BioTac tactile sensor (SynTouch, Los Angeles, CA, USA). These sensors mimic human fingertips by measuring pressure, vibrations and temperature. The hand is controlled through ROS (http://www.ros.org), with the control loop running at 200 Hz. In this work, the hand could be controlled in five alternative modes, shown in Table II. Each mode corresponds to a different FSM, and the transitions between states are triggered by muscle contractions and relaxations. Three hand states were used: open hand, palmar pinch, and key-grip (see Fig. 2). Unlike the other modes, mode 3 is not an "all-or-nothing" closing, but allows progressive closing according to the amplitude of the EMG signal. To begin grasping, the contraction has to be above the first chosen threshold, and the finger position is then proportional to the EMG envelope amplitude. One muscle is monitored in modes 1 to 3, and two muscles in modes 4 and 5 (see Table II). Hysteresis was used: we considered a muscle contracted if s(t) > s_H and relaxed if s(t) < s_L. For s(t) ∈ [s_L, s_H], the muscle (hence, hand) state is not changed. In modes 1-3, only one predetermined grasp (palmar) was used, whereas in modes 4 and 5 the user was able to change the grasp type (palmar/lateral) online via the EMG signal.
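To make the event-triggered control concrete, the sketch below shows one plausible implementation of the calibration-based normalization, the hysteresis detector and the Mode 5 state machine. The threshold values follow the text; the class and variable names are illustrative and do not come from the authors' software.

```python
from enum import Enum

class HandState(Enum):
    OPEN = 0
    PALMAR_PINCH = 1
    KEY_GRIP = 2

class HysteresisDetector:
    """Contraction detection on the normalized envelope s(t)/s_max."""
    def __init__(self, s_max, s_low=0.3, s_high=0.44):
        self.s_max = s_max              # maximum envelope from the calibration phase
        self.s_low, self.s_high = s_low, s_high
        self.contracted = False

    def update(self, s):
        s_norm = s / self.s_max         # 1.0 corresponds to a maximal contraction
        if s_norm > self.s_high:
            self.contracted = True
        elif s_norm < self.s_low:
            self.contracted = False
        # for s_low <= s_norm <= s_high the previous state is kept (hysteresis)
        return self.contracted

class Mode5FSM:
    """Mode 5: muscle 1 triggers palmar pinch, muscle 2 triggers key-grip;
    relaxing the active muscle reopens the hand."""
    def __init__(self, detector1, detector2):
        self.det1, self.det2 = detector1, detector2
        self.state = HandState.OPEN

    def step(self, s1, s2):
        c1, c2 = self.det1.update(s1), self.det2.update(s2)
        if self.state == HandState.OPEN:
            if c1:
                self.state = HandState.PALMAR_PINCH
            elif c2:
                self.state = HandState.KEY_GRIP
        elif self.state == HandState.PALMAR_PINCH and not c1:
            self.state = HandState.OPEN
        elif self.state == HandState.KEY_GRIP and not c2:
            self.state = HandState.OPEN
        return self.state
```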
Each state was characterized by the five finger target joint values, q*_i. In all modes except mode 3, these were pre-tuned offline to constant values (corresponding to the open and closed configurations). In mode 3, however, the desired finger position q*_i was obtained by interpolating between the open and closed positions (q^o_i and q^c_i): q*_i = q^o_i + e (q^c_i - q^o_i), (1) with e the contraction level, normalized between 0 (no contraction) and 1 (full contraction): e = 1 if s > s_H; e = 0 if s < s_L; and e = (s - s_L)/(s_H - s_L) otherwise. (2) We now outline how the target values q*_i were attained. For the two grasping states, finger motion should stop as soon as contact with the grasped object occurs. To detect contact on each fingertip i, we use the pressure measurement P_i on the corresponding BioTac. At time t, the contact state (defined by the binary value C_i(t)) is detected by a hysteresis comparator over P_i: C_i(t) = 1 if P_i > P_H; C_i(t) = 0 if P_i < P_L or t = 0; and C_i(t) = C_i(t - T) otherwise. (3) Here, P_H and P_L (P_H > P_L > 0) are the pre-tuned high and low thresholds at which C_i changes, and T is the sampling period. For the open hand state, we do not account for fingertip contact, and keep C_i(t) = 0. For all three states, an online trajectory generator (OTG) is used to generate the joint commands q_i, ensuring smooth motion of each finger to its target value q*_i. The commands depend on the contact state: q_i(t) = OTG(q_i(t - T), q*_i, qM_i) if C_i(t) = 0, and q_i(t) = q_i(t - T) otherwise, (4) with qM_i the vector of (known) maximum motor velocities allowed for the joints of finger i. Each finger is controlled by a separate OTG, in order to stop only the fingers in contact. As OTG, we used the Reflexxes Motion Library (http://www.reflexxes.ws).
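The per-finger logic of Eqs. (1)-(4) can be sketched as follows. The online trajectory generator is abstracted here as a simple velocity-limited step towards the target (the authors used the Reflexxes library); the threshold values, names and single-rate loop are assumptions made for the example.

```python
import numpy as np

def contraction_level(s, s_low, s_high):
    """Eq. (2): normalized contraction level e in [0, 1]."""
    if s > s_high:
        return 1.0
    if s < s_low:
        return 0.0
    return (s - s_low) / (s_high - s_low)

class FingerController:
    def __init__(self, q_open, q_closed, q_dot_max, p_low, p_high, dt):
        self.q_open = np.asarray(q_open, dtype=float)      # q_i^o
        self.q_closed = np.asarray(q_closed, dtype=float)  # q_i^c
        self.q_dot_max = np.asarray(q_dot_max, dtype=float)
        self.p_low, self.p_high = p_low, p_high
        self.dt = dt
        self.q = self.q_open.copy()     # current joint command q_i(t)
        self.contact = False            # C_i(t)

    def update_contact(self, pressure):
        """Eq. (3): hysteresis comparator on the BioTac pressure P_i."""
        if pressure > self.p_high:
            self.contact = True
        elif pressure < self.p_low:
            self.contact = False
        return self.contact

    def step(self, e, pressure):
        """Eq. (1) target interpolation and Eq. (4) contact-gated command."""
        q_target = self.q_open + e * (self.q_closed - self.q_open)   # Eq. (1)
        if self.update_contact(pressure):
            return self.q               # finger in contact: hold the last command
        # stand-in for the OTG: step towards the target, at most q_dot_max*dt per joint
        delta = np.clip(q_target - self.q,
                        -self.q_dot_max * self.dt,
                        self.q_dot_max * self.dt)
        self.q = self.q + delta
        return self.q
```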
D. Experimental protocols The experiments were performed through two successive protocols, at two different times and with two different sets of patients, to limit the duration of the session within their clinical assessment. The first time (protocol A, subjects 1, 2 and 3, Fig. 3), we checked whether the patients could contract each muscle (assumed to be supra-lesional but not far from the lesion) with a sufficient level of EMG. The second time (protocol B, subjects 4 and 5, Fig. 3), we tested their ability to control the robot hand without previous practice; visual feedback (from observing the hand) was thus added to the proprioceptive feedback. Both protocols are described below. 1) Protocol A - EMG alone: This protocol evaluated the subjects' capacity to voluntarily control the different muscles and the comfort and ease of contraction (Fig. 3). Each task was performed only once, since the objective was achieved at the first attempt, thereby confirming the easiness of command. Moreover, warm-up was not necessary, since the muscles were not used to output torque but only to generate usable EMG. For each muscle, the subjects performed two tasks: 1) maintain maximum contraction for 15 seconds; 2) successively maintain three levels of contraction (low, medium, high), each for 5 seconds. 2) Protocol B - EMG driving robot hand motion: For this second protocol, muscle contractions controlled the robot hand motion (see Fig. 4). Protocol B was thus composed of two consecutive parts: individual, and preferred muscle assessment. a) Individual muscle assessment: In the first part of protocol B, individual muscle contractions were assessed through three tasks: T1) calibrate (s_L and s_H are set); T2) maintain maximum contraction for 5 seconds; T3) maintain the contraction as long as possible (for a minimum of 15 seconds). In tasks T2 and T3, the contraction level had to exceed the empirically defined threshold s_H. After each muscle assessment, the subject was asked to assess the comfort, fatigue and ease of the contraction efforts through a questionnaire. The questionnaire was inspired by the ISO 9241-9 standard on "Ergonomics of non-keyboard input devices." Once all eight muscles were tested, the subjects were asked to select the two preferred muscles. These two muscles were then taken into account to evaluate the different robot hand control modes in the second part of the protocol. b) Preferred muscle assessment: Two muscles were selected among the eight, based on subjective patient assessments. The choice of preferential muscles was up to the patient, with the constraint that the two muscles must be on the same side. All five modes of robot hand control (shown in Table II) were tested and evaluated. For mode 5, the subject was instructed to contract muscle 1 or 2 (i.e., to select either palmar or lateral grasping), depending on the object randomly presented by the experimenter. Two objects were presented to the subject, one with a cylindrical or spherical shape requiring palmar grasping, the other with a triangular prism shape requiring lateral grasping. The subject had to trigger the correct closure of the robot hand through the contraction of the appropriate muscle to grasp the presented object. Each type of prehension was tested at least five times during the 11 randomized trials. III. RESULTS A. EMG Results We analyzed EMG data from continuous (Fig. 5 (a) and Fig. 5 (b)) and graded (Fig. 5 (c) and Fig. 5 (d)) muscle contractions. Data on each subject's ability to contract the different muscles are presented in Table III. All subjects were able to individually contract the eight muscles on demand for at least 7 seconds, except subject 1 for the biceps (no voluntary contraction was visible in the EMG signal). Interestingly, a contraction could be extracted from the EMG signals even for very weak muscles. This is illustrated in Fig. 5 (a) and Fig. 5 (b), where a voluntary sustained contraction of the subject's left biceps can be seen. He was able to maintain his contraction for more than 30 seconds. Although this subject presented a C5 lesion with non-functional biceps activity (no elbow flexion), this very weak EMG activity of the biceps could still be turned into a functional command to pilot a device. Among our five patients, there was only one case where a very weak muscle produced a functional EMG signal. This muscle had an MRC score of 1 (the MRC (Medical Research Council) scale assesses muscle power in patients with peripheral nerve lesions, from 0, no contraction, to 5, normal power). For all other muscles with EMG activity, the MRC score was ≥ 3. For protocol A (subjects 1-3), we present in Table IV the ability to grade muscle contraction. The three subjects were able to achieve the three levels of contraction (low, medium and high). The biceps of subject 1 was not tested here, as no continuous voluntary contraction was visible in the EMG signal.
In Fig. 5 (c) and Fig. 5 (d), we present an example of a trial from subject 3. He was able to perform an isometric graded contraction of his superior trapezius muscle, but had difficulties holding the contraction for more than 5 seconds. The amplitude of contraction was increased by a factor of seven, from 17.3 ± 1.9 (rest level) to 34.3 ± 14.9 (low contraction), 43.7 ± 36.1 (middle contraction) and 78.7 ± 38.5 (high contraction). In protocol B, the subjects were able to maintain the contraction of each of the tested muscles. B. Hand results The tasks (e.g., holding the object in the robot hand for 5 s) were successfully achieved with each of the tested muscles. Among the tested modes, mode 2 was the favorite mode for subject 4, and mode 1 was the favorite mode for subject 5. Regarding the preferential muscles: subject 4 chose the left biceps as muscle 1 and the left superior trapezius as muscle 2, whereas subject 5 chose the left superior and left middle trapezius, respectively, as muscles 1 and 2. A contraction of muscle 1 resulted in palmar grasping, whereas a contraction of muscle 2 resulted in lateral grasping (mode 5). We randomly presented two distinct objects to subjects 4 and 5. They performed 11 hand grasping tests with the robot hand (Fig. 6). To grasp the objects, the subjects had to make either a palmar prehension via a muscle 1 contraction, or a lateral prehension through a muscle 2 contraction. Among the 11 trials, subject 4 had 100% success, while patient 5 managed to seize eight objects out of 11. The three failures occurred with the palmar grasp because of co-contraction. Indeed, some degree of co-contraction was still present, and the first muscle to reach its threshold is the one considered to trigger the hand movement. Patient 5 tended to push the shoulder back (which activated the middle trapezius) just before raising it (which activated the superior trapezius). C. Comfort survey For protocol B (subjects 4 and 5), we present in Table V the responses of the subjects to the questionnaire on comfort and fatigue related to the contraction of the different muscles. Each subject declared some muscles to be easier and more comfortable to contract (in terms of effort, fatigue, and concentration) than others. IV. DISCUSSION The control of a neuroprosthesis by the user, that is, the patient, is a key issue, especially when the objective is to restore movement. Control should be intuitive and thus easily linked to task finality [START_REF] Hoshimiya | A multichannel FES system for the restoration of motor functions in high spinal cord injury patients: a respiration-controlled system for multijoint upper extremity[END_REF], [START_REF] Keith | Neuroprostheses for the upper extremity[END_REF], [START_REF] Bhadra | Implementation of an implantable joint-angle transducer[END_REF]. Furthermore, such interfaces are based on the observation (i.e., sensing) of voluntary actions (even mentally imagined ones, as with BCI interfaces [START_REF] King | The feasibility of a brain-computer interface functional electrical stimulation system for the restoration of overground walking after paraplegia[END_REF]). EMG is widely used to achieve this goal for amputees, but for patients with tetraplegia, the use of supra-lesional muscles to control infra-lesional muscles was an appealing option. The second generation of the Freehand system was successfully developed and is the only implanted EMG-controlled neuroprosthesis to date. As far as we know, robot hands for tetraplegics have not yet been controlled using EMG.
The feasibility of using supra-lesional muscle EMG was not straightforward. Indeed, the available muscles are few and most of them cannot be considered valid, as they are underused and their motor schema is in some cases deeply impaired, with no functional output. This leads to highly fatigable and weak muscles, but also to the loss of synergy between the paralyzed muscles that are normally involved in upper limb movements. In some cases, even if the muscle is contractable, the produced contraction is not functional (it does not induce any joint motion). Here, the goal was to understand whether the immediately supra-lesional muscles of tetraplegic patients could be used to control a robot hand. The targeted population, that is, tetraplegics with potentially weak supra-lesional muscles, should have a very simple interface for two reasons: (i) simple contraction schemes to control the hand limit cognitive fatigue, and (ii) short contractions limit physiological fatigue. These two constraints mean that the hand should be controlled with predefined postures and not in a proportional way. Thus, the output of our control framework was a limited set of hand states, while its input, except for one mode (mode 3), was a limited set of EMG levels. In this context, the FSM scheme should be preferred. In our study, we found in all five subjects a combination of muscles such that each subject was able to easily perform the tasks of protocol A, that is, to maintain a continuous or a graded contraction that could be quantified from the EMG signal. We were able to calibrate quite low thresholds, so that patients did not have to contract much or experience fatigue. Moreover, these experiments were conducted during the scheduled clinical assessment, so no training was offered, even during the session. The patients were merely asked to contract muscles and to try to hold objects with the robot hand. All were able to control it immediately. The calibration procedure is linked only to EMG signal scaling, so that, as a whole, the system is very easy to use in a clinical context, compared with approaches like BCI, for instance. Interestingly, the lesion age had no influence on performance. Two subjects participated in the second session (protocol B), in which the EMG signals were used to control a robot hand. This was achieved without any prior learning or training. We show that both the muscle used and the way the contraction controls the hand (the control mode) have a drastic effect on performance. This robot hand approach may thus be a very good paradigm for rehabilitation or training, in view of future FES-based control of the patients' own hand. These two subjects did not have the same preferred mode of control, but each clearly preferred one over the others. Mode 1 (continuous contraction to maintain robot hand closure) seems to be more intuitive, as the contraction is directly linked to the posture of the hand, but mode 2 (an impulsive contraction provokes robot hand closure/opening) induces less fatigue, as it needs only short muscle contractions to toggle between the open and closed hand. Depending on their remaining motor functions, patients feel more or less comfortable with a given mode. Also, the choice of the preferred control mode would probably be different after a training period. In our opinion, patients should select their preferred mode themselves. However, a larger study would give indications on how to classify patients' preferred modes, based on the assessment of their muscle state.
In any case, control cannot be defined through a single mode and should be adapted to each patient, and probably to each task and fatigue state. For practical reasons, we decided that the two EMGs would be located on the same side, without any knowledge beforehand as to which side to equip. The subject selected one preferred muscle and, based on this choice, the second muscle was selected on the same side. A major issue with this decision is that the two muscles sometimes co-contract, and in mode 5 (muscle 1 contraction causes palmar pinch and muscle 2 contraction causes key-grip) the robot hand grasping task selected by the system was not always the one the user intended to execute. In the future, patients will control their own hand by means of electrical stimulation instead of a distant robot hand, and the choice of which body side to equip with EMG will need to be made with respect to the task that the stimulated hand must achieve. For example, if muscle contraction is associated with arm motion, this might well disturb the grasping to be achieved. Furthermore, an analysis is needed to determine the effect of the dominant side on performance. For our patients, grasping would not be disturbed, since shoulder movements do not induce forearm movements. The questionnaire at the end of each test allowed us to evaluate the ease of using EMG as a control method. Preferential muscles were chosen so as not to disturb the functionalities available to the subjects. Yet, one can also imagine a system that deactivates electrostimulation when the patient wishes to use his/her remaining functionality for other purposes. In this case, the subject would be able to contract his/her muscles without causing hand movements. Furthermore, one can imagine using forearm/arm muscle synergies or relevant motor schemas to facilitate the learning (e.g., hand closing when the elbow bends, hand opening during elbow extension, and so on). An interesting property of the proposed interface is that even a weak muscle can produce a usable EMG signal. As an example, subject 4 was able to control the robot hand with a weak muscle and thereby produce functional movement. In other words, a muscle that is non-functional in the context of natural movements can be turned into a functional muscle in the context of assistive technology, and one can even expect that motor performances will improve with training. V. CONCLUSION We have demonstrated the feasibility of extracting, from supra-lesional muscles in individuals with tetraplegia, contraction recordings that are sufficiently rich in information to pilot a robot hand. The choice of muscles and modes of control is patient-dependent. Any available contractable muscle, and not just functional muscles, can be a candidate and should be evaluated. The control principle could also be used for FES applied to the patient's arm, to control an external device such as a robot arm or electric wheelchair, or as a template of rehabilitation movements. The robot hand might help to select (via their residual control capacity), and possibly train, patients as potential candidates for an implanted neuroprosthetic device. The next step will therefore be to extend the study to a wider group of patients, which would provide a better picture of the range of performance. We also plan to use the robot hand as part of a training protocol for future FES devices.
TABLE II DESCRIPTION OF THE FIVE HAND CONTROL MODES
Mode 1: Continuous muscle contraction provokes grasping. When the muscle is relaxed, the hand opens.
Mode 2: A first contraction of 2 s triggers grasping. The hand remains closed even when the muscle is relaxed. The next 2 s contraction triggers hand opening.
Mode 3: Grasping is related to EMG amplitude (a stronger EMG signal leads to tighter closure). When the muscle is relaxed, the hand opens.
Mode 4: Contracting (for 2 s) first muscle 1 causes palmar pinch (palmar grasping); then, the hand can be opened by contracting (for 2 s) muscle 2. Instead, contracting first (for 2 s) muscle 2 causes key-grip (lateral grasping), followed by hand opening if muscle 1 is contracted (for 2 s).
Mode 5: Contraction of muscle 1 causes a palmar pinch, whereas contraction of muscle 2 causes key-grip. In both cases, to stop the closure, subjects must stop the muscle contraction (cf. Fig. 1).

Fig. 2. Different states of the robot hand: (a) open hand, (b) palmar pinch (palmar grasping), (c) key-grip (lateral grasping).
Fig. 3. Top: Principle of EMG recording and analysis (protocol A). Bottom: Principle of robot hand control through EMG signals (protocol B).
Fig. 4. Protocol B: setup description and upper arm positioning during EMG recordings.
Fig. 5. Example of muscle contractions observed in SCI subjects: raw signal (a and c), filtered signal (b and d); muscle contractions of Subj. 4 (a, b) and sup. trapezius muscle of Subj. 3 (c, d).
Fig. 6. Example of robot hand trajectories generated from EMG recording in subject 5 for modes 1 and 3. Top: raw EMG. Bottom: filtered EMG (blue) and hand trajectory (red). 0: hand is open, 1: hand is closed.

TABLE III MUSCLE CONTRACTION ABILITIES (MAXIMUM CONTRACTION DURATION). ** FAVORITE MUSCLE, * WITH HELP OF ARM SUPPORT. For each muscle: Right (I) / Left (C).
Subject | superior trapezius | middle deltoid / middle trapezius | biceps | platysma
1 | 10s** / NA | >15s / NA | 0 / NA | >15s / NA
2 | >15s** / >15s | >15s / >15s | >15s / >15s | >15s / >15s
3 | >15s / NA | >15s / NA | >15s / NA | >15s / >15s
4 | >15s* / >15s* | >15s / >15s* | 7s / >15s** | >15s / >15s
5 | >15s / 15s | >15s / >15s** | >15s / >15s** | 14s / >15s

TABLE IV ABILITY TO GRADE THE CONTRACTION FOR THE 3 FIRST SUBJECTS, TIME FOR EACH CONTRACTION: 5 S (PROTOCOL A). For each muscle: average (mV), STD (mV), normalised value.
Subject 1, level 1: upper trapezius 75.33, 18.87, 0.32 | middle deltoid 72.93, 9.51, 0.6 | biceps NA, NA, NA | platysma 53.7, 16.88, 0.39
Subject 1, level 2: 104, 12.9, 0.44 | 84.53, 10.53, 0.69 | NA, NA, NA | 59.83, 14.3, 0.44
Subject 1, level 3: 237, 59.9, 1 | 122.13, 12.87, 1 | NA, NA, NA | 135.83, 19.4, 1
Subject 2, level 1: 50.94, 7.81, 0.22 | 273.3, 59.9, 0.52 | 110.9, 8.07, 0.39 | 73.7, 2, 0.35
Subject 2, level 2: 96.36, 3.87, 0.42 | 370, 73.2, 0.71 | 164.8, 30.8, 0.58 | 157.5, 14.5, 0.74
Subject 2, level 3: 226.97, 211.51, 1 | 522, 61.1, 1 | 285.8, 50.5, 1 | 213, 51.6, 1
Subject 3, level 1: 53.93, 19.32, 0.29 | 85.42, 5, 0.25 | 21.38, 6.39, 0.37 | 42.5, 11.19, 0.30
Subject 3, level 2: 116.32, 38.11, 0.63 | 185, 33.75, 0.54 | 41.5, 8.74, 0.72 | 100, 15.11, 0.70
Subject 3, level 3: 185, 56.05, 1 | 345, 72.25, 1 | 57.38, 10.21, 1 | 143.61, 32.58, 1

TABLE V EVALUATION OF INDIVIDUAL MUSCLE CONTRACTION FOR SUBJECTS 4 AND 5 (PROTOCOL B); 1 = VERY HIGH EFFORTS AND FATIGUE, 7 = VERY LOW EFFORTS AND FATIGUE. For each muscle and side: comfort / fatigue.
Muscle (side) | Subject 4 | Subject 5
Superior trapezius (right) | 3.8 / 2 | 7 / 4.3
Superior trapezius (left) | 3 / 2 | 2.5 / 1
Middle deltoid / middle trapezius (right) | 4.5 / 2.5 | 6.8 / 6.3
Middle deltoid / middle trapezius (left) | 3.3 / 5.3 | 4 / 3
Biceps (right) | 4.3 / 3.7 | 2.5 / 5
Biceps (left) | 5 / 5.7 | 3.5 / 6.3
Platysma (right) | 2.25 / 2.5 | 2 / 1
Platysma (left) | 3 / 2 | 3.5 / 3

ACKNOWLEDGMENTS The authors wish to thank the subjects who invested time into this research, as well as MXM-Axonic/ANRT for support with the PhD grant, CIFRE # 2013/0867.
The work was also supported in part by the ANR (French National Research Agency) SISCob project ANR-14-CE27-0016. Last, the authors also warmly thank Violaine Leynaert, occupational therapist at Propara Center, for her precious help.
38,597
[ "982188", "12978", "6566", "925900", "838724", "8582", "8632" ]
[ "450088", "303268", "395113", "98357", "395113", "395113", "31275", "234185", "455505", "450088", "450088" ]
01486186
en
[ "info" ]
2024/03/04 23:41:48
2017
https://hal.science/hal-01486186/file/Han16-STF.pdf
Jing Han Zixing Zhang email: [email protected] Nicholas Cummins Fabien Ringeval Björn Schuller Strength Modelling for Real-World Automatic Continuous Affect Recognition from Audiovisual Signals published or not. The documents may come Introduction Automatic affect recognition plays an essential role in smart conversational agent systems that aim to enable natural, intuitive, and friendly human-machine interaction. Early works in this field have focused on the recognition of prototypic expressions in terms of basic emotional states, and on the data collected in laboratory settings, where speakers either act or are induced with predefined emotional categories and content [START_REF] Gunes | Automatic temporal segment detection and affect recognition from face and body display[END_REF][START_REF] Schuller | Speaker independent speech emotion recognition by ensemble classification[END_REF][START_REF] Schuller | Hidden markov model-based speech emotion recognition[END_REF][START_REF] Zeng | A survey of affect recognition methods: Audio, visual, and spontaneous expressions[END_REF]. Recently, an increasing amount of research efforts have converged into dimensional approaches for rating naturalistic affective behaviours by continuous dimensions (e. g., arousal and valence) along the time continuum from audio, video, and music signals [START_REF] Gunes | Automatic, dimensional and continuous emotion recognition[END_REF][START_REF] Gunes | Categorical and dimensional affect analysis in continuous input: Current trends and future directions[END_REF][START_REF] Petridis | Prediction-based audiovisual fusion for classification of non-linguistic vocalisations[END_REF][START_REF] Weninger | Discriminatively trained recurrent neural networks for continuous dimensional emotion recognition from audio[END_REF][START_REF] Yang | A regression approach to music emotion recognition[END_REF][START_REF] Kumar | Affective feature design and predicting continuous affective dimensions from music[END_REF][START_REF] Soleymani | Emotional analysis of music: A comparison of methods[END_REF][START_REF] Soleymani | Analysis of EEG signals and facial expressions for continuous emotion detection[END_REF]. This trend is partially due to the benefits of being able to encode small difference in affect over time and distinguish the subtle and complex spontaneous affective states. Furthermore, the affective computing community is moving toward combining multiple modalities (e. g., audio and video) for the analysis and recognition of human emotion [START_REF] Mariooryad | Correcting time-continuous emotional labels by modeling the reaction lag of evaluators[END_REF][START_REF] Pantic | Toward an affect-sensitive multimodal human-computer interaction[END_REF][START_REF] Soleymani | Continuous emotion detection using EEG signals and facial expressions[END_REF][START_REF] Wöllmer | LSTM-Modeling of continuous emotions in an audiovisual affect recognition framework[END_REF][START_REF] Zhang | Enhanced semi-supervised learning for multimodal emotion recognition[END_REF], owing to (i) the easy access to various sensors like camera and microphone, and (ii) the complementary information that can be given from different modalities. In this regard, this paper focuses on the realistic time-and value-continuous affect (emotion) recognition from audiovisual signals in the arousal and valence dimensional space. To handle this regression task, a variety of models have been investigated. 
For instance, Support Vector Machine for Regression (SVR) is arguably the most frequently employed approach owing to its mature theoretical foundation. Further, SVR is regarded as a baseline regression approach for many continuous affective computing tasks [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF][START_REF] Schuller | AVEC 2012: the continuous audio/visual emotion challenge[END_REF][START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF]. More recently, memory-enhanced Recurrent Neural Networks (RNNs), namely Long Short-Term Memory RNNs (LSTM-RNNs) [START_REF] Hochreiter | Long short-term memory[END_REF], have started to receive greater attention in the sequential pattern recognition community [START_REF] Graves | Framewise phoneme classification with bidirectional LSTM and other neural network architectures[END_REF][START_REF] Ringeval | Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data[END_REF][START_REF] Zhang | Channel mapping using bidirectional long short-term memory for dereverberation in hand-free voice controlled devices[END_REF][START_REF] Zhang | Facing realism in spontaneous emotion recognition from speech: Feature enhancement by autoencoder with LSTM neural networks[END_REF]. A particular advantage offered by LSTM-RNNs is a powerful capability to learn longer-term contextual information through the implementation of three memory gates in the hidden neurons. Wöllmer et al. [START_REF] Wöllmer | Abandoning emotion classes-towards continuous emotion recognition with modelling of long-range dependencies[END_REF] was amongst the first to apply LSTM-RNN on acoustic features for continuous affect recognition. This technique has also been successfully employed for other modalities (e. g., video, and physiological signals) [START_REF] Chao | Long short term memory recurrent neural network based multimodal dimensional emotion recognition[END_REF][START_REF] Nicolaou | Continuous prediction of spontaneous affect from multiple cues and modalities in valencearousal space[END_REF][START_REF] Ringeval | Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data[END_REF]. Numerous studies have been performed to compare the advantages offered by a wide range of modelling techniques, including the aforementioned, for continuous affect recognition [START_REF] Nicolaou | Continuous prediction of spontaneous affect from multiple cues and modalities in valencearousal space[END_REF][START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF][START_REF] Tian | Emotion recognition in spontaneous and acted dialogues[END_REF]. However, no clear observations can be drawn as to the superiority of any of them. For instance, the work in [START_REF] Nicolaou | Continuous prediction of spontaneous affect from multiple cues and modalities in valencearousal space[END_REF] compared the performance of SVR and Bidirec-tional LSTM-RNNs (BLSTM-RNNs) on the Sensitive Artificial Listener database [START_REF] Mckeown | The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent[END_REF], and the results indicate that the latter performed better on a reduced set of 15 acoustic Low-Level-Descriptors (LLD). 
However, the opposite conclusion was drawn in [START_REF] Tian | Emotion recognition in spontaneous and acted dialogues[END_REF], where SVR was shown to be superior to LSTM-RNNs on the same database with functionals computed over a large ensemble of LLDs. Other results in the literature confirm this inconsistent performance observation between SVR and diverse neural networks like (B)LSTM-RNNs and Feed-forward Neural Networks (FNNs) [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF]. A possible rationale behind this is the fact that each prediction model has its advantages and disadvantages. For example, SVRs cannot explicitly model contextual dependencies, whereas LSTM-RNNs are highly sensitive to overfitting. The majority of previous studies have tended to explore the advantages (strength) of these models independently or in conventional early or late fusion strategies. However, recent results indicate that there may be significant benefits in fusing two, or more, models in hierarchical or ordered manner [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Manandhar | Multivariate output-associative RVM for multi-dimensional affect predictions[END_REF][START_REF] Nicolaou | Output-associative rvm regression for dimensional and continuous emotion prediction[END_REF]. Motivated by these initial promising results, we propose a Strength Modelling approach, in which the strength of one model, as represented by its predictions, is concatenated with the original feature space which is then used as the basis for regression analysis in a subsequent model. The major contributions of this study include: (1) proposing the novel machine learning framework of Strength Modelling specifically designed to take advantage of the benefits offered by various regression models namely SVR and LSTM-RNNs; (2) investigating the effectiveness of Strength Modelling for value-and time-continuous emotion regression on two spontaneous multimodal affective databases (RECOLA and SEMAINE); and (3) comprehensively analysing the robustness of Strength Modelling by integrating the proposed framework into frequently used multimodal fusion techniques namely early and late fusion. The remainder of the present article is organised as follows: Section 2 first discusses related works; Section 3 then presents Strength Modelling in details and briefly reviews both the SVR and memory-enhanced RNNs; Section 4 describes the selected spontaneous affective multimodal databases and corresponding audio and video feature sets; Section 5 offers an extensive set of experiments conducted to exemplify the effectiveness and the robustness of our proposed approach; finally, Section 6 concludes this work and discusses potential avenues for future work. Related Work In the literature for multimodal affect recognition, a number of fusion approaches have been proposed and studied [START_REF] Wu | Survey on audiovisual emotion recognition: databases, features, and data fusion strategies[END_REF], with the majority of them relevant to early (aka feature-level) or late (aka decision-level) fusion. Early fusion is implemented by concatenating all the features from multiple modalities into one combined feature vector, which will then be used as the input for a machine learning technique. 
The benefit of early fusion is that, it allows a classifier to take advantage of the complementarity that exists between, for example, the audio and video feature spaces. The empirical experiments offered in [START_REF] Chao | Long short term memory recurrent neural network based multimodal dimensional emotion recognition[END_REF][START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Ringeval | Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data[END_REF] have shown that the early fusion strategy can deliver better results than the strategies without feature fusion. Late fusion involves combining predictions obtained from individual learners (models) to come up with a final prediction. They normally consist of two steps: 1) generating different learners; and 2) combining the predictions of multiple learners. To generate different learners, there are two primary ways which are separately based on different modalities and models. Modality-based ways combines the output from learners trained on different modalities. Examples of this learner generation in the literature include [START_REF] He | Multimodal affective dimension prediction using deep bidirectional long short-term memory recurrent neural networks[END_REF][START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Nicolaou | Output-associative rvm regression for dimensional and continuous emotion prediction[END_REF][START_REF] Wei | Multimodal continuous affect recognition based on LSTM and multiple kernel learning[END_REF], where multiple SVRs or LSTM-RNNs are trained separately for different modalities (e.g. audio, video, etc). Model-based ways, on the other hand, aims to exploit information gained from multiple learners trained on a single modality. For example in [START_REF] Qiu | Ensemble deep learning for regression and time series forecasting[END_REF], predictions obtained by 20 different topology structures of Deep Belief Networks (DBNs). However, due to the similarity of characteristics of different DBNs, the predictions can not provide many variations that could be mutually complemented and improve the system performance. To combine the predictions of multiple learners, a straightforward way is to apply simple or weighted averaging (or voting) approach, such as Simple Linear Regression (SLR) [START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF][START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF]. Another common approach is to perform stacking [START_REF] Wolpert | Stacked generalization[END_REF]. In doing this, all the predictions from different learners are stacked and used as inputs of a subsequent non-linear model (e.g., SVR, LSTM-RNN) trained to make a final decision [START_REF] Qiu | Ensemble deep learning for regression and time series forecasting[END_REF][START_REF] He | Multimodal affective dimension prediction using deep bidirectional long short-term memory recurrent neural networks[END_REF][START_REF] Wei | Multimodal continuous affect recognition based on LSTM and multiple kernel learning[END_REF]. 
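As a purely illustrative sketch of these two combination steps (the specific models, scikit-learn calls and array names are assumptions for the example, not the setups used in the works cited above), the prediction matrices of the individual learners could be fused as follows:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

# P_dev, P_test: (n_frames, n_learners) matrices whose columns hold the
# predictions of the individual learners; y_dev: gold standard of the
# development partition, used to estimate the fusion parameters.

# (a) simple/weighted averaging: a bias plus one weight per learner
fusion = LinearRegression().fit(P_dev, y_dev)
y_fused = fusion.predict(P_test)

# (b) stacking: the stacked predictions feed a subsequent non-linear model
stacker = SVR(kernel='rbf', C=1.0, epsilon=0.1).fit(P_dev, y_dev)
y_stacked = stacker.predict(P_test)
```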
Different from these fusion strategies, our proposed Strength Modelling paradigm operates on a single feature space. Using an initial model, it gains a set of predictions which are then fused with the original feature set for use as a new feature space in a subsequent model. This offers the framework a vital important advantage as the single modality setting is often faced in affect recognition tasks, for example, if when either face or voice samples are missing in a particular recording. Indeed, Strength Modelling can be viewed as an intermediate fusion technology, which lies in the middle of the early and late fusion stages. Strength Modelling can therefore not only work independently of, but also be simply integrated into early and late fusion approaches. To the best of our knowledge, intermediate fusion techniques are not widely used in the machine learning community. Hermansky et al. [START_REF] Hermansky | Tandem connectionist feature extraction for conventional HMM systems[END_REF] introduced a tandem structure that combines the output of a discriminative trained neural nets with dynamic classifiers such as Hidden Markov Models (HMMs), and applied it efficiently for speech recognition. This structure was further extended into a BLSTM-HMM [START_REF] Wöllmer | Bidirectional LSTM networks for context-sensitive keyword detection in a cognitive virtual agent framework[END_REF][START_REF] Wöllmer | Robust in-car spelling recognition-a tandem BLSTM-HMM approach[END_REF]. In this approach the BLSTM networks provides a discrete phoneme prediction feature, together with continuous Mel-Frequency Cepstral Coefficients (MFCCs), for the HMMs that recognise speech. For multimodal affect recognition, a relevant approach -Parallel Interacting Multiview Learning (PIML) -was proposed in [START_REF] Kursun | Parallel interacting multiview learning: An application to prediction of protein sub-nuclear location[END_REF] for the prediction of protein sub-nuclear locations. The approach exploits different modalities that are mutually learned in a parallel and hierarchical way to make a final decision. Reported results show that this approach is more suitable than the use of early fusion (merging all features). Compared to our approach, that aims at taking advantages of different models from a same modality, the focus of PIML is rather on exploiting the benefit from different modalities. Further, similar to early fusion approaches, PIML operates under a concurrence assumption of multiple modalities. Strength Modelling is similar to the Output Associative Relevance Vector Machine (OA-RVM) regression framework originally proposed in [START_REF] Nicolaou | Output-associative rvm regression for dimensional and continuous emotion prediction[END_REF]. The OA-RVM framework attempts to incorporate the contextual relationships that exist within and between different affective dimensions and various multimodal feature spaces, by training a secondary RVM with an initial set of multi-dimensional output predictions (learnt using any prediction scheme) concatenated with the original input features spaces. Additionally, the OA-RVM framework also attempts to capture the temporal dynamics by employing a sliding window framework that incorporates both past and future initial outputs into the new feature space. 
Results presented in [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF] indicate that the OA-RVM framework is better suited to affect recognition problems than both conventional early and late fusion. Recently, the OA-RVM model was extended in [START_REF] Manandhar | Multivariate output-associative RVM for multi-dimensional affect predictions[END_REF] to be multivariate, i. e., predicting multiple continuous output variables simultaneously. Similar to Strength Modelling, OA-RVM systems take input features and output predictions into consideration to train a subsequent regression model that performs the final affective predictions. However, the strength of the OA-RVM framework is that it is underpinned by the RVM. Results in [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF] indicate that the framework is not as successful when using either an SVR or an SLR as the secondary model. Further, the OA-RVM is non-causal and requires careful tuning to find suitable window lengths in which to combine the initial outputs; this can take considerable time and effort. The proposed Strength Modelling framework, however, is designed to work with any combination of learning paradigms. Furthermore, Strength Modelling is causal; it combines input features and predictions on a frame-by-frame basis. This is a strong advantage over the OA-RVM in terms of employment in real-time scenarios (beyond the scope of this paper). Strength Modelling The proposed Strength Modelling framework for affect prediction is depicted in Fig. 1. As can be seen, the first regression model (Model 1) generates the original estimate ŷ_t based on the feature vector x_t. Then, ŷ_t is concatenated with x_t pair-wise as the input of the second model (Model 2) to learn the expected prediction y_t. To implement Strength Modelling for a suitable combination of individual models, Model 1 and Model 2 are trained sequentially; in other words, Model 2 takes the predictive ability of Model 1 into account during training. The procedure is as follows (Fig. 1 depicts this pipeline: x_t is fed to Model 1, which outputs ŷ_t; the pair [x_t, ŷ_t] is then fed to Model 2, which outputs y_t): -First, Model 1 is trained with x_t to obtain the prediction ŷ_t. -Then, Model 2 is trained with [x_t, ŷ_t] to learn the expected prediction y_t. Whilst the framework should work with any arbitrary modelling technique, for our initial investigations we have selected two techniques commonly used in the context of affect recognition, namely the SVR and BLSTM-RNNs, which are briefly reviewed in the subsequent subsection. Regression Models SVR is extended from the Support Vector Machine (SVM) to solve regression problems. It was first introduced in [START_REF] Drucker | Support vector regression machines[END_REF] and is one of the most dominant methods in the context of machine learning, particularly in emotion recognition [START_REF] Chang | Physiological emotion analysis using support vector regression[END_REF][START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF]. When applying the SVR to a regression task, the target is to optimise the generalisation bounds for regression in the high-dimensional feature space by using an ε-insensitive loss function, which measures the cost of prediction errors.
At the same time, a predefined hyperparameter C is set accordingly for different cases to balance the emphasis on the errors and the generalisation performance. Normally, the high-dimensional feature space is mapped from the initial feature space with a non-linear kernel function. However, in our study, we use a linear kernel function, as the features in our cases (cf. Section 4.2) perform quite well for affect prediction in the original feature space, similar to [START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF]. One of the most important advantages of the SVR is its convex optimisation function, which guarantees that the globally optimal solution can be obtained. Moreover, the SVR is learned by minimising an upper bound on the expected risk, as opposed to neural networks, which are trained by minimising the errors on the training data; this gives the SVR a superior ability to generalise [START_REF] Gunn | Support vector machines for classification and regression[END_REF]. For a more in-depth explanation of the SVR paradigm the reader is referred to [START_REF] Drucker | Support vector regression machines[END_REF]. The other model utilised in our study is the BLSTM-RNN, which has been successfully applied to continuous emotion prediction [START_REF] Ringeval | Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data[END_REF] as well as to other regression tasks, such as speech dereverberation [START_REF] Zhang | Channel mapping using bidirectional long short-term memory for dereverberation in hand-free voice controlled devices[END_REF] and non-linguistic vocalisation classification [START_REF] Petridis | Prediction-based audiovisual fusion for classification of non-linguistic vocalisations[END_REF]. In general, it is composed of one input layer, one or multiple hidden layers, and one output layer [START_REF] Hochreiter | Long short-term memory[END_REF]. The bidirectional hidden layers separately process the input sequences in forward and backward order and connect to the same output layer, which fuses them. Compared with traditional RNNs, it introduces recurrently connected memory blocks to replace the network neurons in the hidden layers. Each block consists of a self-connected memory cell and three gate units, namely the input, output, and forget gates. These three gates allow the network to learn when to write, read, or reset the value in the memory cell. Such a structure allows the BLSTM-RNN to learn past and future context over both short and long ranges. For a more in-depth explanation of BLSTM-RNNs the reader is referred to [START_REF] Hochreiter | Long short-term memory[END_REF]. It is worth noting that these paradigms bring distinct sets of advantages and disadvantages to the framework: • The SVR model is more likely to achieve the globally optimal solution, but it is not context-sensitive [START_REF] Nicolaou | Continuous prediction of spontaneous affect from multiple cues and modalities in valencearousal space[END_REF]; • The BLSTM-RNN model is easily trapped in a local minimum, which can hardly be avoided, and has a risk of overfitting [START_REF] Graves | Framewise phoneme classification with bidirectional LSTM and other neural network architectures[END_REF], while it is good at capturing the correlation between past and future information [START_REF] Nicolaou | Continuous prediction of spontaneous affect from multiple cues and modalities in valencearousal space[END_REF].
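For illustration only, an S-B instantiation of the two-stage procedure described above might look as follows. The scikit-learn and PyTorch calls are stand-ins for the authors' actual tools (LIBLINEAR and CURRENNT), the data arrays (X_train, y_train) and the grouping of frames into sequences are assumed to be available, and the hyperparameter values are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import LinearSVR

# Stage 1 (Model 1): frame-level SVR trained on the original features x_t
svr = LinearSVR(C=0.001, epsilon=0.1)      # C would be tuned on the development set
svr.fit(X_train, y_train)                  # X_train: (n_frames, n_features)
y_hat = svr.predict(X_train)               # initial predictions ŷ_t

# Augmented feature space [x_t, ŷ_t] used as input to Model 2
X_aug = np.column_stack([X_train, y_hat])

# Stage 2 (Model 2): a bidirectional LSTM regressor over sequences of [x_t, ŷ_t]
class BLSTMRegressor(nn.Module):
    def __init__(self, n_in, n_hidden=40, n_layers=2):
        super().__init__()
        self.rnn = nn.LSTM(n_in, n_hidden, num_layers=n_layers,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * n_hidden, 1)   # forward + backward hidden states

    def forward(self, x):                       # x: (batch, time, n_in)
        h, _ = self.rnn(x)
        return self.out(h).squeeze(-1)          # frame-wise predictions y_t

model = BLSTMRegressor(n_in=X_aug.shape[1])
# The augmented frames are regrouped into per-recording sequences and the network
# is trained with a regression objective (e.g., mean squared error).
```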
In this paper, Model 1 and Model 2 in Fig. 1 could each be either an SVR model or a BLSTM-RNN model, resulting in four possible permutations, i. e., SVR-SVR (S-S), SVR-BLSTM (S-B), BLSTM-SVR (B-S), and BLSTM-BLSTM (B-B). It is worth noting that the B-B structure can be regarded as a variation of neural networks with a deep structure. Note that the S-S structure is not considered, because SVR training is achieved by solving a large-margin separation problem; it is therefore unlikely that concatenating a set of SVR predictions with its feature space would provide any advantage for subsequent SVR-based regression analysis. Strength Modelling with Early and Late Fusion Strategies As previously discussed (Sec. 2), the Strength Modelling framework can be applied within both early and late fusion strategies. Traditional early fusion combines multiple feature spaces into one single set. When integrating Strength Modelling with early fusion, the initial predictions gained from models trained on the different feature sets are also concatenated to form a new feature vector. The new feature vector is then used as the basis for the final regression analysis via a subsequent model (Fig. 2, which depicts the audio features x_a feeding Model 1a and the video features x_v feeding Model 1v, with both feature sets and their initial predictions passed to Model 2 to produce the fused prediction y_e). Strength Modelling can also be integrated with late fusion using three different approaches, i. e., (i) modality-based, (ii) model-based, and (iii) modality- and model-based (Fig. 3, in which the audio features x_a and video features x_v are processed by Strength Modelling systems SM1a-SM3a and SM1v-SM3v, whose outputs are fused to produce y_l). Modality-based fusion combines the decisions from multiple independent modalities (i. e., audio and video in our case) obtained with the same regression model; the model-based approach fuses the decisions from multiple different models (i. e., SVR and BLSTM-RNN in our case) within the same modality; and the modality- and model-based approach is the combination of the above two approaches, regardless of which modality or model is employed. For all three techniques the fusion weights are learnt using a linear regression model: y_l = γ_0 + Σ_{i=1}^{N} γ_i · y_i, (1) where y_i denotes the original prediction of model i among the N available ones; γ_0 and γ_i are the bias and weights estimated on the development partition; and y_l is the final prediction. Selected Databases and Features For the transparency of experiments, we utilised the widely used multimodal continuously labelled affective databases RECOLA [START_REF] Ringeval | Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions[END_REF] and SEMAINE [START_REF] Mckeown | The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent[END_REF], which have been adopted as standard databases for the AudioVisual Emotion Challenges (AVEC) in 2015/2016 [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF][START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF] and in 2012 [START_REF] Schuller | AVEC 2012: the continuous audio/visual emotion challenge[END_REF], respectively. Both databases were designed to study socio-affective behaviours from multimodal data. To annotate the corpus, value- and time-continuous dimensional affect ratings in terms of arousal and valence were performed by six French-speaking raters (three males and three females) for the first five minutes of all recording sequences.
The obtained labels were then resampled at a constant frame rate of 40 ms, and averaged over all raters by considering interevaluator agreement, to provide a 'gold standard' [START_REF] Ringeval | Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions[END_REF]. SEMAINE The SEMAINE database was recorded in conversations between humans and artificially intelligent agents. In the recording scenario, a user was asked to talk with four emotionally stereotyped characters, which are even-tempered and sensible, happy and out-going, angry and confrontational, and sad and depressive, respectively. For our experiments, the 24 recordings of the Solid-Sensitive Artificial Listener (Solid-SAL) part of the database were used, in which the characters were role-played. Each recording contains approximately four character conversation sessions. This Solid-SAL part was then equally split into three partitions: a training, development, and test partition, resulting in 8 recordings and 32 sessions per partition except for the training partition that contains 31 sessions. For more information on this database, the readers are referred to [START_REF] Schuller | AVEC 2012: the continuous audio/visual emotion challenge[END_REF]. All sessions were annotated in continuous time and continuous value in terms of arousal and valence by two to eight raters, with the majority annotated by six raters. Different from RECOLA, the simple mean over the obtained labels was then taken to provide a single label as 'gold standard' for each dimension. Audiovisual Feature Sets For the acoustic features, we used the openSMILE toolkit [START_REF] Eyben | openSMILE -the Munich versatile and fast open-source audio feature extractor[END_REF] to generate 13 LLDs, i. e., 1 log energy and 12 MFCCs, with a frame window size of 25 ms at a step size of 10 ms. Rather than the official acoustic features, MFCCs were chosen as the LLDs since preliminary testing (results not given) indicated that they were more effective in association with both RECOLA [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF][START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF] and SEMAINE [START_REF] Schuller | AVEC 2012: the continuous audio/visual emotion challenge[END_REF]. The arithmetic mean and the coefficient of variance were then computed over the sequential LLDs with a window size of 8 s at a step size of 40 ms, resulting in 26 raw features for each functional window. Note that, for SEMAINE the window step size was set to 400 ms in order to reduce the computational workload in the machine learning process. Thus, the total numbers of the extracted segments of the training, development, and test partitions were 67.5 k, 67.5 k, 67.5 k for RECOLA, and were, respectively, 24.4 k, 21.8 k, and 19.4 k for SEMAINE. For the visual features, we retained the official features for both RECOLA and SEMAINE. As to RECOLA, 49 facial landmarks were tracked firstly, as illustrated in Fig. 4. The detected face regions included left and right eyebrows (five points respectively), the nose (nine points), the left and right eyes (six points respectively), the outer mouth (12 points), and the inner mouth (six points). Then, the landmarks were aligned with a mean shape from stable points (located on the eye corners and on the nose region). 
As features for each frame, 316 features were extracted, consisting of 196 features by computing the difference between the coordinates of the aligned landmarks and those from the mean shape and between the aligned landmark locations in the previous and the current frame, 71 ones by calculating the Euclidean distances (L2-norm) and the angles (in radians) between the points in three different groups, and another 49 ones by computing the Euclidean distance between the median of the stable landmarks and each aligned landmark in a video frame. For more details on the feature extraction process the reader is referred to [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF]. Again, the functionals (arithmetic mean and coefficient of variance) were computed over the sequential 316 features within a fixed length window (8 s) that shifted forward at a rate of 40 ms. As a result, 632 raw features for each functional window were included in the geometric set. Feature reduction was also conducted by applying a Principal Component Analysis (PCA) to reduce the dimensionality of the geometric features, retaining 95% of the variance in the original data. The final dimensionality of the reduced video feature set is 49. It should be noted that a facial activity detector was used in conjunction with the video feature extraction; video features were not extracted for the frames where no face was detected, resulting in the number of video segments somewhat less than that of audio segments. As to SEMAINE, 5 908 frame-level features were provided as the video baseline features. In this feature set, eight features describes the position and pose of the face and eyes, and the rest are dense local appearance descriptors. For appearance descriptors, the uniform Local Binary Patterns (LBP) were used. Specifically, the registered face region was divided into 10 × 10 blocks, and the LBP operator was then applied to each block (59 features per block) followed by concatenating features of all blocks, resulting to another 5 900 features. Further, to generate features on window-level, in this paper we used the method based on max-pooling. Specifically, the maximum of features were calculated with a window size of 8 s at a step size of 400 ms, to keep consistent with the audio features. We applied PCA for feature reduction on these window-level representations and generated 112 features, retaining 95% of the variance in the original data. To keep in line with RECOLA, we selected the first 49 principal components as the final video features. Experiments and Results This section empirically evaluates the proposed Strength Modelling by large-scale experiments. We first perform Strength Modelling for the continuous affect recognition in the unimodal settings (cf. Sec. 5.2), i. e., audio or video. We then incorporate it with the early (cf. Sec. 5.3) and late (cf. Sec. 5.4) fusion strategies so as to investigate its robustness in the bimodal settings. Experimental Set-ups and Evaluation Metrics Before the learning process, mean and variance standardisation was applied to features of all partitions. Specifically, the global means and variances were calculated from the training set, which were then applied over the development and test sets for online standardisation. 
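A minimal sketch of this partition-wise standardisation is given below; the array names are placeholders, and the only assumption beyond the text is that zero-variance features are left unscaled.

```python
import numpy as np

def standardise(train, *others):
    """Mean/variance standardisation using statistics of the training partition only."""
    mu = train.mean(axis=0)
    sigma = train.std(axis=0)
    sigma[sigma == 0.0] = 1.0            # guard against constant features
    scale = lambda X: (X - mu) / sigma
    return (scale(train),) + tuple(scale(X) for X in others)

X_train_std, X_dev_std, X_test_std = standardise(X_train, X_dev, X_test)
```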
To demonstrate the effectiveness of the strength learning, we first carried out the baseline experiments, where the SVR or BLSTM-RNNs models were individually trained on the modalities of audio, video, or the combination, respectively. Specifically, the SVR was implemented in the LIBLINEAR toolkit [START_REF] Fan | LIBLINEAR: A library for large linear classification[END_REF] with linear kernel, and trained with L2-regularised L2-loss dual solver. The tolerance value of ε was set to be 0.1, and complexity (C) of the SVR was optimised by the best performance of the development set among [.00001, .00002, .00005, .0001, . . . , .2, .5, 1] for each modality and task. For the BLSTM-RNNs, two bidirectional LSTM hidden layers were chosen, with each layer consisting of the same number of memory blocks (nodes). The number was optimised as well by the development set for each modality and task among [START_REF] Mckeown | The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent[END_REF][START_REF] Wöllmer | Bidirectional LSTM networks for context-sensitive keyword detection in a cognitive virtual agent framework[END_REF]60,80,100,120]. During network training, gradient descent was implemented with a learning rate of 10 -5 and a momentum of 0.9. Zero mean Gaussian noise with standard deviation 0.2 was added to the input activations in the training phase so as to improve generalisation. All weights were randomly initialised in the range from -0.1 to 0.1. Finally, the early stopping strategy was used as no improvement of the mean square error on the validation set has been observed during 20 epochs or the predefined maximum number of training epochs (150 in our case) has been executed. Furthermore, to accelerate the training process, we updated the network weights after running every mini batch of 8 sequences for computation in parallel. The training procedure was performed with our CURRENNT toolkit [START_REF] Weninger | Introducing CUR-RENNT: The munich open-source cuda recurrent neural network toolkit[END_REF]. Herein we adapted the following naming conventions, the models trained with baseline approaches are referred to as individual models, whereas the ones associated with the proposed approaches are denoted as strength models. For the sake of a more even performance comparison the optimised parameters of individual models (i. e., SVR or BLSTM-RNN) were used in the corresponding strength models (i. e., S-B, B-S, or B-B models). Annotation delay compensation was also performed to compensate for the temporal delay between the observable cues, as shown by the participants, and the corresponding emotion reported by the annotators [START_REF] Mariooryad | Correcting time-continuous emotional labels by modeling the reaction lag of evaluators[END_REF]. Similar to [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF], this delay was estimated in the preliminary experiments using SVR and by maximising the performance on the development partition, while shifting the gold standard annotations back in time. 
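A hedged sketch of the SVR baseline and of the label shifting used for delay compensation is given below; sklearn's LinearSVR is used as a stand-in for the LIBLINEAR toolkit, the elided middle of the C grid is filled in with the same 1-2-5 pattern as its endpoints (an assumption), and selecting C by development-set CCC is likewise an assumption.

import numpy as np
from sklearn.svm import LinearSVR

def ccc(y, y_hat):
    """Concordance correlation coefficient (see Eq. (2) below)."""
    return 2 * np.cov(y, y_hat, bias=True)[0, 1] / (
        y.var() + y_hat.var() + (y.mean() - y_hat.mean()) ** 2)

def shift_gold_standard(y, delay_frames):
    """Delay compensation: align label y[t + delay] with the features at time t
    (one plausible implementation; the tail is padded with the last value)."""
    return np.concatenate([y[delay_frames:], np.repeat(y[-1], delay_frames)])

# the text lists [.00001, .00002, .00005, .0001, ..., .2, .5, 1]
C_GRID = [1e-5, 2e-5, 5e-5, 1e-4, 2e-4, 5e-4, 1e-3, 2e-3, 5e-3,
          1e-2, 2e-2, 5e-2, 1e-1, 2e-1, 5e-1, 1.0]

def train_svr(X_train, y_train, X_dev, y_dev):
    """L2-regularised L2-loss dual SVR (liblinear-style), with C tuned on the dev set."""
    best_model, best_score = None, -np.inf
    for C in C_GRID:
        svr = LinearSVR(C=C, epsilon=0.1, loss='squared_epsilon_insensitive',
                        dual=True, max_iter=10000)
        svr.fit(X_train, y_train)
        score = ccc(y_dev, svr.predict(X_dev))
        if score > best_score:
            best_model, best_score = svr, score
    return best_model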
As in [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF] we identified this delay to be four seconds, which was duly compensated by shifting the gold standard back in time with respect to the features in all experiments presented. Note that all fusion experiments require concurrent initial predictions from the audio and visual modalities. However, as discussed in Sec. 4.2, visual predictions cannot be produced where a face has not been detected. For all fusion experiments where this occurred, we replicated the corresponding initial audio prediction to fill the missing video slot. Unless otherwise stated, we report the accuracy of our systems in terms of the Concordance Correlation Coefficient (CCC) [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF]:

$\rho_c = \frac{2\rho\sigma_x\sigma_y}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2}$,   (2)

where $\rho$ is the Pearson's Correlation Coefficient (PCC) between two time series (e. g., prediction and gold standard); $\mu_x$ and $\mu_y$ are the means of each time series; and $\sigma_x^2$ and $\sigma_y^2$ are the corresponding variances. In contrast to the PCC, the CCC takes not only the linear correlation but also the bias and variance between the two compared series into account. As a consequence, whereas PCC is insensitive to bias and scaling issues, CCC reflects those two variations. The value of CCC lies in the range [-1, 1], where +1 represents total concordance, -1 total discordance, and 0 no concordance at all. One may further note that it has also been successfully used as an objective function to train discriminative neural networks [START_REF] Weninger | Discriminatively trained recurrent neural networks for continuous dimensional emotion recognition from audio[END_REF], and that it served as the official scoring metric in the last two editions of the AVEC. Fig. 5 provides an intuitive comparison of PCC and CCC: the PCC of the two plotted series (black and blue) is 1.000, while the CCC is only 0.467, since it takes the bias in mean and variance between the two series into account. For continuous emotion recognition, one is often interested not only in the variation trend but also in the absolute value/degree of the personal emotional state. Therefore, the CCC metric is better suited to continuous emotion recognition than the PCC. In addition to CCC, results are also given in all tables in terms of the Root Mean Square Error (RMSE), a popular metric for regression tasks. To further assess the significance level of performance improvements, a statistical evaluation was carried out over the whole predictions between the proposed and the baseline approaches by means of Fisher's r-to-z transformation [START_REF] Cohen | Applied multiple regression/correlation analysis for the behavioral sciences[END_REF].

Affect Recognition with Strength Modelling

Table 1 displays the results (RMSE and CCC) obtained from the strength models and the individual models of SVR and BLSTM-RNN on the development and test partitions of the RECOLA and SEMAINE databases from the audio. As can be seen, the three Strength Modelling set-ups either matched or outperformed their corresponding individual models in most cases. This observation implies that the advantages of each model (i.
e., SVR and BLSTM-RNN) are enhanced via Strength Modelling. In particular the performance of the BLSTM model, for both arousal and valence, was significantly boosted by the inclusion of SVR predictions (S-B) on the development and test sets. We speculate this improvement could be due to the initial SVR predictions helping the subsequent RNN avoid local minima. Similarly, the B-S combination brought additional performance improvement for the SVR model (except the valence case of SEMAINE), although not as obvious as for the S-B model. Again, we speculate that the temporal information leveraged by the BLSTM-RNN is being exploited by the successive SVR model. The best results for both arousal and valence dimensions were achieved with the framework of B-B for RECOLA, which achieved relative gains of 6.5 % and 29.1 % for arousal and valence respectively on the test set when compared to the single BLSTM-RNN model (B). This indicates there are potential benefits for audio based affect recognition by the deep structure formed by combining two BLSTM-RNNs using the Strength Modelling framework. Additionally, one can observe that there is no much performance improvement by applying Strength Modelling in the case of the valence recognition of SEMAINE. This might be attribute to the poor performance of the baseline systems, which can be regarded as noise and possibly not able to provide useful information for the other models. The same set of experiments were also conducted on the video feature set (Table 2). As for valence, the highest CCC obtained on test set achieves at .477 using the S-B model for RECOLA and at .158 using the B-B model for SEMAINE. As expected, we observe that the models (individual or strength) trained using only acoustic features is more efficient for interpreting the dimension of arousal rather than valence. Whereas, the opposite observation is seen for models trained only on the visual features. This finding is in agreement with similar results in the literature [START_REF] Gunes | Automatic, dimensional and continuous emotion recognition[END_REF][START_REF] Gunes | Categorical and dimensional affect analysis in continuous input: Current trends and future directions[END_REF][START_REF] Ringeval | Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data[END_REF]. Additionally, Strength Modelling achieved comparable or superior performance to other state-of-the-art methods applied on the RECOLA database. The OA-RVM model was used in [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Manandhar | Multivariate output-associative RVM for multi-dimensional affect predictions[END_REF], and the reported performance in terms of CCC, with audio features on the development set, was .689 for arousal [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF], and .510 for valence using video features [START_REF] Manandhar | Multivariate output-associative RVM for multi-dimensional affect predictions[END_REF]. We achieved .755 with audio features for arousal, and .592 with video features for valence with the proposed Strength Modelling framework, showing the interest of our method. To further highlight advantages of Strength Modelling, Fig. 
6 illustrates the automatic predictions of arousal via audio signals (a) and valence via video signals (b) obtained with the best settings of the strength models and the individual models frame by frame for a single test subject from RECOLA. Note that, similar plots were observed for the other subjects in the test set. In general, the predictions generated by the proposed Strength Modelling approach are closer to the gold standard, which consequently contributes to better results in terms of CCC. Strength Modelling Integrated with Early Fusion Table 3 shows the performance of both the individual and strength models integrated with the early fusion strategy. In most cases, the performance of the individual models of either SVR or BLSTM-RNN was significantly improved with the fused feature vector for both arousal and valence dimensions in comparison to the performance with the corresponding individual models trained only on the unimodal feature sets (Sec. 5.2) in most cases for both RECOLA and SEMAINE datasets. For the strength model systems, the early fusion B-S model generally outperformed the equivalent SVR model, and the structure of S-B outperformed the equivalent BLSTM model. However, the gain obtained by Strength Modelling with the early fused features is not as obvious as that with individual models. This might be due to the higher dimensions of the fused feature sets which possibly reduce the weight of the predicted features. Strength Modelling Integrated with Late Fusion This section aims to explore the feasibility of integrating Strength Modelling into three different late fusion strategies: modality-based, model-based, and the combination (see Sec. 3.3) [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF][START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF], and again confirms the importance of multimodal fusion for affect recognition. However, similar observation can only been seen on the validation set for SEMAINE, which might be due to the huge mismatch between the validation and test partitions. Interestingly when incorporating Strength Modelling into late fusion we can observe significant improvements over the corresponding non-strength set-ups. This finding confirms the effectiveness and the robustness of the proposed method for multimodal continuous affect recognition. In particular, the best test results of RECOLA, .685 and .554, were obtained by the strength models integrated with the modality-and modelbased late fusion approach. This arousal result matches the performance with the AVEC 2016 affect recognition subchallenge baseline system, .682, which was obtained using a late fusion strategy involving eight feature sets [START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF]. As for SEMAINE, although obvious performance improvement can be seen on the development set, a similar observation can not be observed on the test set. This finding is possibly attributed to the mismatch between the development set and the test set, since all parameters of the training models were optimised on the development set. However, these parameters are not fit for the test set anymore. 
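The exact late-fusion rule is not spelled out in this section, so the sketch below assumes a simple linear combination of the prediction streams whose weights are estimated on the development partition; the stream groupings named in the trailing comments are only illustrative.

import numpy as np

def ccc(y, y_hat):
    return 2 * np.cov(y, y_hat, bias=True)[0, 1] / (
        y.var() + y_hat.var() + (y.mean() - y_hat.mean()) ** 2)

def late_fusion(dev_streams, test_streams, y_dev):
    """Fuse several prediction streams with a linear combination (plus bias) whose
    weights are fitted on the development partition by least squares."""
    P_dev = np.column_stack(dev_streams)
    P_test = np.column_stack(test_streams)
    A = np.column_stack([P_dev, np.ones(len(P_dev))])
    w, *_ = np.linalg.lstsq(A, y_dev, rcond=None)
    fuse = lambda P: P @ w[:-1] + w[-1]
    return fuse(P_dev), fuse(P_test)

# modality-based fusion: streams = {audio, video} predictions of one model
# model-based fusion:    streams = {SVR, BLSTM} predictions of one modality
# combination:           all four streams fused together
# dev_fused, test_fused = late_fusion([a_dev, v_dev], [a_test, v_test], y_dev)
# print('CCC on dev: %.3f' % ccc(y_dev, dev_fused))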
Further, for a comparison with the OA-RVM system, we applied the same fusion system as used in [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF], with only audio and video features. The results are shown in Table 4 and 5 for the RECOLA and SEMAINE database, respectively. It can be seen that, for both databases, the proposed methods outperform the OA-RVM technique, which further confirms the efficiency of the proposed Strength Modelling method. In general, to provide an overview of the contributions of Strength Modelling to the continuous emotion recognition, we averaged the relative performance improvement of Strength Modelling over RECOLA and SEMAINE for arousal and valence recognition. The corresponding results from four cases (i. e., audio only, video only, early fusion, and late fusion) are displayed in Fig. 7. From the figure, one can observe an obvious performance improvement gained by Strength Modelling, except for the late fusion framework. This particular case is highly attributed to the mismatch between validation and test sets of SEMAINE as aforementioned, as all parameters of the training models were optimised on the development set. Employing some state-of-the-art generation techniques like dropout for training neural networks might help to tackle this problem in the future. Conclusion and Future Work This paper proposed and investigated a novel framework, Strength Modelling, for continuous audiovisual affect recognition. Strength Modelling concatenates the strength of an initial model, as represented by its predictions, with the original features to form a new feature set which is then used as the basis for regression analysis in a subsequent model. To demonstrate the suitability of the framework, we jointly explored the benefits from two state-of-the-art regression models, i. e., Support Vector Regression (SVR) and Bidirectional Long Short-Term Memory Recurrent Neural Network (BLSTM-RNN), in three different Strength Modelling structures (SVR-BLSTM, BLSTM-SVR, BLSTM-BLSTM). Further, these three structures were evaluated in both unimodal settings, using either audio or video signals, and the bimodal settings where early fusion and late fusion strategies were integrated. Results gained on the widely used RECOLA and SEMAINE databases indicate that Strength Modelling can match or outperform the corresponding conventional individual models when performing affect recognition. An interesting observation was that, among our three different Strength Modelling set-ups no one case significantly outperformed the others. This demonstrates the flexibility of the proposed framework, in terms of being able to work in conjunction with different combination of A further advantage of Strength Modelling is that, it can be implemented as a plug-in for use in both early and late fusion stages. Results gained from an exhaustive set of fusion experiments confirmed this advantage. The best Strength Modelling test set results on the RECOLA dataset, .685 and .554, for arousal and valence respectively were obtained using Strength Modelling integrated into a modality-and model-based late fusion approach. These results are much higher than the ones obtained from other state-of-the-art systems. Moreover, on the SEMAINE dataset, competitive results can also be obtained. 
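A compact sketch of the two-stage pipeline summarised above is given below; both stages are shown as sklearn-style regressors (the BLSTM stages of the paper are trained with CURRENNT instead), and producing the first-stage training predictions in-sample is an assumption, since the text does not detail whether cross-validated predictions are used.

import numpy as np
from sklearn.svm import LinearSVR

def strength_model(first, second, X_train, y_train, X_eval_list):
    """Two-stage Strength Modelling: the first model's predictions are appended to
    the original features before training the second model on the same target."""
    first.fit(X_train, y_train)
    augment = lambda X: np.column_stack([X, first.predict(X)])
    second.fit(augment(X_train), y_train)
    return [second.predict(augment(X)) for X in X_eval_list]

# an S-S stand-in for the S-B structure (a second SVR replaces the BLSTM stage):
# dev_pred, test_pred = strength_model(
#     LinearSVR(C=.1, epsilon=.1, loss='squared_epsilon_insensitive'),
#     LinearSVR(C=.1, epsilon=.1, loss='squared_epsilon_insensitive'),
#     X_train, y_train, [X_dev, X_test])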
There is a wide range of possible future research directions associated with Strength Modelling to build on this initial set of promising results. First, only two widely used regression models were investigated in the present article for affect recognition. Much of our future effort will concentrate on assessing the suitability of other regression approaches (e. g., Partial Least Squares Regression) for use in the framework. Investigating a more general rule for which kinds of models can be combined in the framework would help to expand its application. In addition, it is interesting to extend the framework both in width and in depth. Second, motivated by the work in [START_REF] Kursun | Parallel interacting multiview learning: An application to prediction of protein sub-nuclear location[END_REF], we will also combine the original features with the predictions from different modalities (integrating the predictions based on audio features with the original video features for a final arousal or valence prediction), rather than from different models only. Furthermore, we also plan to generalise the promising advantages offered by Strength Modelling by evaluating its performance on other behavioural regression tasks.

Figure 1: Overview of the Strength Modelling framework.
Figure 2: Strength Modelling with early fusion strategy.
Figure 3: Strength Modelling (SM) with late fusion strategy. Fused predictions are from multiple independent modalities with the same model (denoted by the red, green, or blue lines), multiple independent models within the same modality (denoted by the solid or dotted lines), or the combination.
Figure 4: Illustration of the facial landmark feature extraction from the RECOLA database.
Figure 5: Comparison of PCC and CCC between two series. The black line is the gold standard from the RECOLA test partition, and the blue line is generated by shifting and scaling the gold standard.
Figure 6: Automatic prediction of arousal via audio signals (a) and valence via video signals (b) obtained with the best settings of the strength-involved models and the individual models for a subject from the test partition of the RECOLA database.
Figure 7: Averaged relative performance improvement (in terms of CCC) across RECOLA and SEMAINE for arousal and valence recognition. The performance of Strength Modelling was compared with the best individual systems in the case of audio only, video only, early fusion, and late fusion frameworks.

Table 1: Results based on audio features only: performance comparison in terms of RMSE and CCC between the strength-involved models and the individual models of SVR (S) and BLSTM-RNN (B) on the development and test partitions of the RECOLA and SEMAINE databases from the audio signals. The best achieved CCC is highlighted. The symbol * indicates the significance of the performance improvement over the related individual systems.

                    RECOLA                    SEMAINE
Audio based      AROUSAL       VALENCE      AROUSAL       VALENCE
method           RMSE  CCC     RMSE  CCC    RMSE  CCC     RMSE  CCC
a. development set
S                .126  .714    .149  .331   .218  .399    .262  .172
B                .142  .692    .117  .286   .209  .387    .261  .117
B-S              .127  .713    .144  .348*  .206  .417*   .255  .179
S-B              .122  .753*   .113  .413*  .210  .434*   .262  .172
B-B              .122  .755*   .112  .476*  .206  .417*   .255  .178*
b. test set
S                .133  .605    .165  .248   .216  .397    .263  .017
B                .155  .625    .119  .282   .202  .317    .256  .008
B-S              .133  .606    .160  .264   .205  .332    .258  .006
S-B              .133  .665*   .117  .319*  .203  .423*   .262  .017
B-B              .133  .666*   .123  .364*  .205  .332*   .258  .006

Table 2: Results based on visual features only: performance comparison in terms of RMSE and CCC between the strength-involved models and the individual models of SVR (S) and BLSTM-RNN (B) on the development and test partitions of the RECOLA and SEMAINE databases from the video signals. The best achieved CCC is highlighted. The symbol * indicates the significance of the performance improvement over the related individual systems.

                    RECOLA                    SEMAINE
Video based      AROUSAL       VALENCE      AROUSAL       VALENCE
method           RMSE  CCC     RMSE  CCC    RMSE  CCC     RMSE  CCC
a. development set
S                .197  .120    .139  .456   .249  .241    .253  .393
B                .184  .287    .110  .478   .224  .232    .247  .332
B-S              .183  .292    .110  .592*  .222  .250    .252  .354
S-B              .186  .350*   .118  .510*  .231  .291*   .242  .405
B-B              .185  .344*   .113  .501*  .222  .249*   .256  .301
b. test set
S                .186  .193    .156  .381   .279  .112    .278  .115
B                .183  .193    .122  .394   .240  .112    .275  .063
B-S              .176  .265*   .130  .464*  .235  .072    .285  .043
S-B              .186  .196    .121  .477*  .249  .125    .284  .068
B-B              .197  .184    .120  .459*  .235  .072    .255  .158*

Unless stated otherwise, a p value less than .05 indicates significance.

Table 3: Early fusion results on the RECOLA and SEMAINE databases: performance comparison in terms of RMSE and CCC between the strength-involved models and the individual models of SVR (S) and BLSTM-RNN (B) with the early fusion strategy on the development and test partitions of the RECOLA and SEMAINE databases. The best achieved CCC is highlighted. The symbol * indicates the significance of the performance improvement over the related individual systems.

                    RECOLA                    SEMAINE
Early Fusion     AROUSAL       VALENCE      AROUSAL       VALENCE
method           RMSE  CCC     RMSE  CCC    RMSE  CCC     RMSE  CCC
a. development set
S                .121  .728    .113  .544   .213  .392    .252  .436
B                .132  .700    .109  .513   .217  .354    .257  .205
B-S              .122  .727    .118  .549   .210  .374    .239  .363
S-B              .127  .712    .096  .526   .208  .423*   .253  .397
B-B              .126  .718*   .095  .542*  .210  .421*   .241  .361*
b. test set
S                .132  .610    .139  .463   .224  .304    .292  .057
B                .148  .562    .114  .476   .204  .288    .244  .127
B-S              .132  .610    .121  .520*  .204  .328*   .264  .063
S-B              .144  .616*   .112  .473   .198  .408*   .275  .144*
B-B              .143  .618*   .114  .499*  .220  .307*   .265  .060

A comparison of the performance of different fusion approaches, with or without Strength Modelling, is presented in Table 4. For the systems without Strength Modelling on RECOLA, one can observe that the best individual-model test set performances, .625 and .394, for arousal and valence respectively (Sec. 5.2), were boosted to .671 and .405 with the modality-based late fusion approach, and to .651 and .497 with the model-based late fusion approach. These results were further promoted to .664 and .549 when combining the modality- and model-based late fusion approaches. This result is in line with other results in the literature.

Table 4: Late fusion results on the RECOLA database: performance comparison in terms of RMSE and CCC between the strength-involved models and the individual models of SVR (S) and BLSTM-RNN (B) with late fusion strategies (i. e., modality-based, model-based, or the combination) on the development and test partitions of the RECOLA database. The best achieved CCC is highlighted. The symbol * indicates the significance of the performance improvement over the related individual systems.

Table 5: Late fusion results on the SEMAINE database: performance comparison in terms of RMSE and CCC between the strength-involved models and the individual models of SVR (S) and BLSTM-RNN (B) with late fusion strategies (i. e., modality-based, model-based, or the combination) on the development and test partitions of the SEMAINE database. The best achieved CCC is highlighted. The symbol * indicates the significance of the performance improvement over the related individual systems.

Acknowledgements
This work was supported by the EU's Horizon 2020 Programme through the Innovative Action No. 645094 (SEWA) and the EC's 7th Framework Programme through the ERC Starting Grant No. 338164 (iHEARu). We further thank the NVIDIA Corporation for their support of this research by a Tesla K40-type GPU donation.
59,664
[ "13134" ]
[ "488795", "488795", "488795", "488795", "1041971", "488795", "50682" ]
01486190
en
[ "info" ]
2024/03/04 23:41:48
2017
https://hal.science/hal-01486190/file/Lezoray_ICASSP2017.pdf
Olivier Lézoray 3D COLORED MESH GRAPH SIGNALS MULTI-LAYER MORPHOLOGICAL ENHANCEMENT Keywords: Graph signal, morphology, color, multilayer decomposition, detail enhancement, sharpness We address the problem of sharpness enhancement of 3D colored meshes. The problem is modeled with graph signals and their morphological processing is considered. A hierarchical framework that decomposes the graph signal into several layers is introduced. It relies on morphological filtering of graph signal residuals at several scales. To have an efficient sharpness enhancement, the obtained layers are blended together with the use of a nonlinear sigmoid detail enhancement and tone manipulation, and of a structure mask. INTRODUCTION 3D Meshes are widely used in many fields and applications such as computer graphics and games. Recently, low cost sensors have brought 3D scanning into the hands of consumers. As a consequence, a new market has emerged that proposes cheap software that, similarly to an ordinary video camera, enables to generate 3D models by simply moving around an object or a person. With such software one can now easily produce 3D colored meshes with each vertex described by its position and color. However, the quality of the mesh is not always visually good. In such a situation, the sharpness of the 3D colored mesh needs to be enhanced. In this paper we propose an approach towards this problem. Existing techniques for sharpness enhancement of images use structurepreserving smoothing filters [START_REF] Zhang | Rolling guidance filter[END_REF][START_REF] Cho | Bilateral texture filtering[END_REF][START_REF] Gastal | Domain transform for edge-aware image and video processing[END_REF][START_REF] Xu | Image smoothing via L 0 gradient minimization[END_REF] within a hierarchical framework. They decompose the image into different layers from coarse to fine details, making it easier for subsequent detail enhancement. Some filters have been extended to 3D meshes but most manipule only mesh vertices positions [START_REF] Fleishman | Bilateral mesh denoising[END_REF][START_REF] Michael Kolomenkin | Prominent field for shape processing and analysis of archaeological artifacts[END_REF]. Some recent works have considered the color information [START_REF] Afrose | Mesh color sharpening[END_REF]. In this paper we present a robust sharpness enhancement technique based on morphological signal decomposition. The approach considers manifold-based morphological operators to construct a complete lattice of vectors. With this approach, a multi-layer decomposition of the 3D colored mesh, modeled as a graph signal, is proposed that progressively decomposes an input color mesh from coarse to fine scales. The layers are manipulated by non-linear s-curves and blended by a structure mask to produce an enhanced 3D color mesh. The paper is organized as follows. In Section 2, we introduce a learned ordering of the vectors of a graph signal. From this ordering, we derive a graph signal representation and define the associated morphological graph signal operators. Section 3 describes the proposed method for multi-layer morphological enhancement of graph signals. Last sections present results and conclusion. MATHEMATICAL MORPHOLOGY FOR 3D COLORED GRAPH SIGNALS Notations A graph G = (V, E) consists in a set V = {v 1 , . . . , v m } of vertices and a set E ⊂ V × V of edges connecting vertices. 
A graph signal is a function that associates real-valued vectors to vertices of the graph f : G → T ⊂ R n where T is a non-empty set of vectors. The set T = {v 1 , • • • , v m } represents all the vectors associated to all vertices of the graph (we will also use the notation T [i] = v i = f (v i )). In this paper 3D colored graphs signals are considered, where a color is assigned to each vertex of a triangulated mesh. Manifold-based color ordering Morphological processing of graph signals requires the definition of a complete lattice (T , ≤) [START_REF] Ronse | Why mathematical morphology needs complete lattices[END_REF], an ordering of all the vectors of T . Since there exits no admitted universal ordering fo vectors, the framework of h-orderings [START_REF] Goutsias | Morphological operators for image sequences[END_REF] has been proposed as an alternative. This consists in constructing a bijective projection h : T → L where L is a complete lattice equipped with the conditional total ordering [START_REF] Goutsias | Morphological operators for image sequences[END_REF]. We refer to ≤ h as the h-ordering given by v i ≤ h v j ⇔ h(v i ) ≤ h(v j ). As argued in our previous works [START_REF] Lézoray | Complete lattice learning for multivariate mathematical morphology[END_REF], the projection h cannot be linear since a distortion of the space topology is inevitable. Therefore, it is preferable to rely on a nonlinear mapping h. The latter will be constructed by learning the manifold of vectors from a given graph signal and the complete lattice (T , ≤ h ) will be deduced from it. Complete lattice learning Given a graph signal that provides a set T of m vectors in R 3 , a dictionary D = {x ′ 1 , • • • , x ′ p } of p ≪ m vectors is built by Vector Quantization [START_REF] Gersho | Vector Quantization and Signal Compression[END_REF]. A similarity matrix K D that contains the pairwise similarities between all the dictionary vectors x ′ i is then computed. The manifold of the dictionary vectors is modeled using nonlinear manifold learning by Laplacian Eigenmaps [START_REF] Belkin | Laplacian eigenmaps for dimensionality reduction and data representation[END_REF]. This is be performed with the decomposition L = Φ D Π D Φ T D of the normalized Laplacian matrix L = I -D -1 2 D K D D -1 2 D with Φ D and Π D its eigenvectors and eigenvalues, and D D the degree diagonal matrix of K D . The obtained representation being only valid for the dictionary D, it is extrapolated to all the vectors of T by Nyström extrapolation [START_REF] Talwalkar | Large-scale SVD and manifold learning[END_REF] expressed by Φ = D -1 2 DT K T DT D -1 2 D Φ D (diag[1] - Π D ) -1 , where K DT is the similarity matrix between sets D and T , and D DT its associated diagonal degree matrix. Finally, the bijective projection h ⊂ R 3 → L ⊂ R p on the manifold is defined as h(x) = ( φ1 (x), • • • , φp (x)) T with φk the k-th eigenvector. The complete lattice (T , ≤ h ) is obtained by using the conditional ordering after this projection. Graph signal representation The complete lattice (T , ≤ h ) being learned, a new graph signal representation can be defined. Let P be a sorted permutation of the elements of T according to the manifold-based ordering ≤ h , one has P = {v ′ 1 , • • • , v ′ m } with v ′ i ≤ h v ′ i+1 , ∀i ∈ [1, (m -1)]. From this ordered set of vectors, an index graph signal can be defined. Let I : G → [1, m] denote this index graph signal. Its elements are defined as I(v i ) = {k | v ′ k = f (v i ) = v i }. 
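A condensed sketch of this lattice-learning pipeline is given below; the RBF similarity and its bandwidth, the dictionary size, and the number of retained eigenvectors are all placeholder choices, and the normalisations may differ in detail from the paper.

import numpy as np
from sklearn.cluster import KMeans

def learn_lattice(colors, p=64, sigma=25.0, n_eig=8):
    """Learn a manifold-based ordering of the vectors of a graph signal and return
    the index signal I and the sorted palette P. All constants are placeholders."""
    D = KMeans(n_clusters=p, n_init=4, random_state=0).fit(colors).cluster_centers_
    rbf = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                              / (2.0 * sigma ** 2))
    K = rbf(D, D)
    d = K.sum(1)
    L = np.eye(p) - K / np.sqrt(np.outer(d, d))          # normalised Laplacian
    w, V = np.linalg.eigh(L)                             # ascending eigenvalues
    w, V = w[1:n_eig + 1], V[:, 1:n_eig + 1]             # drop the trivial eigenvector
    K_TD = rbf(colors, D)
    dT = K_TD.sum(1)
    H = (K_TD / np.sqrt(np.outer(dT, d))) @ V @ np.diag(1.0 / (1.0 - w + 1e-12))
    order = np.lexsort(H.T[::-1])                        # conditional (lexicographic) order
    P = colors[order]                                    # sorted palette
    I = np.empty(len(colors), dtype=int)
    I[order] = np.arange(len(colors))                    # rank of each vertex's vector
    return I, P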
Therefore, at each vertex v i of the index graph signal I, one obtains the rank of the original vector f (v i ) in P, the set of sorted vectors, that we will call a palette. A new representation of the original graph signal f is obtained and denoted in the form of the pair f = (I, P). Figure 1 presents such a representation for a 3D colored graph signal. The original graph signal f can be directly recovered since f (v i ) = P[I(v i )] = T [i] = v i . f : G → R 3 I : G → [1, m] P Fig. 1. From left to right: a 3D colored graph signal f , and its representation in the form of an index graph signal I and associated sorted vectors P. Graph signal morphological processing From this new representation of graph signals, morphological operators can now be expressed for the latter. The erosion of a graph signal f at vertex v i ∈ G by a structuring element B k ⊂ G is defined as: ǫ B k (f )(v i ) = {P[∧I(v j )], v j ∈ B k (v i )}. The dilation δ B k (f )(v i ) can be defined similarly. A structuring element B k (v i ) of size k defined at a vertex v i corresponds to the k-hop set of vertices that can be reached from v i in k walks, plus vertex v i . These graph signal morphological operators operate on the index graph signal I, and the processed graph signal is reconstructed through the sorted vectors P of the learned complete lattice. From these basic operators, we can obtain other morphological filters for graph signals such a as openings γ B k (f ) = δ B k (ǫ B k (f )) and clos- ings φ B k (f ) = ǫ B k (δ B k (f )). MULTI-LAYER MORPHOLOGICAL ENHANCEMENT Graph signal multi-layer decomposition We adopt the strategy of [START_REF] Farbman | Edge-preserving decompositions for multi-scale tone and detail manipulation[END_REF] that consists in decomposing a signal into a base layer and several detail layers, each capturing a given scale of details. We propose the following multiscale morphological decomposition of a graph signal into l layers, as shown in Algorithm 1. To extract the successive Algorithm 1 Morphological decomposition of a graph signal d -1 = f , i = 0 while i < l do Compute the graph signal representation at level i -1: d i-1 = (I i-1 , P i-1 ) Morphological Filtering of d i-1 : f i = M F B l-i (d i-1 ) Compute the residual (detail layer): d i = d i-1 -f i Proceed to next layer: i = i + 1 end while layers in a coherent manner, the layer f 0 has to be the coarsest version of the graph signal, while the residuals d i have to contain details that become finer across the decomposition levels. This means that the sequence of scales should be decreasing and therefore the size of the structuring element in the used morphological filtering (MF) should also decrease. In terms of graph signal decomposition, this means that as the process evolves, the successive decompositions extract more details from the original graph signal (similarly as [START_REF] Hidane | Graph signal decomposition for multi-scale detail manipulation[END_REF]). In Algorithm 1, this is expressed by B l-i which is a sequence of structuring elements of decreasing sizes with i ∈ [0, l -1]. Since each detail layer d i is composed of a set of vectors different from the previous layer d i-1 , the graph signal representation (I i , P i ) has to be computed for the successive lay-Fig. 2. From top to bottom, left to right: an original mesh f , and its decomposition into three layers f 0 , f 1 , and d 1 . ers to decompose. Finally, the graph signal can then be represented by f = l-2 i=0 f i + d l-1 . 
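The decomposition of Algorithm 1, together with the index-based erosion and dilation it relies on, can be sketched as follows; the adjacency is assumed to be given as a list of neighbour indices per vertex, and the layer indexing is adapted so that the example of Fig. 2 (f0, f1, d1) corresponds to two filtering passes.

import numpy as np

def k_hop(adj, v, k):
    """Structuring element B_k(v): vertices reachable from v in at most k walks, plus v."""
    reached, frontier = {v}, {v}
    for _ in range(k):
        frontier = {u for x in frontier for u in adj[x]} - reached
        reached |= frontier
    return reached

def erode_idx(I, adj, k):
    return np.array([min(I[u] for u in k_hop(adj, v, k)) for v in range(len(I))])

def dilate_idx(I, adj, k):
    return np.array([max(I[u] for u in k_hop(adj, v, k)) for v in range(len(I))])

# an opening on the index signal is dilate_idx(erode_idx(I, adj, k), adj, k); the
# filtered graph signal is recovered through the palette as P[indices].

def decompose(f, adj, sizes, morph_filter, learn_lattice):
    """Algorithm 1 sketch: residual-based multi-layer decomposition. `sizes` is the
    decreasing list of structuring-element sizes (one per filtered layer), e.g.
    sizes = [3, 2] for the three-layer example f0, f1, d1. `morph_filter(I, P, adj, k)`
    returns the filtered vector-valued signal (e.g. the OCCO filter introduced below),
    and `learn_lattice` re-learns (I, P) for each residual."""
    layers, d = [], f.astype(float)
    for k in sizes:
        I, P = learn_lattice(d)
        fi = morph_filter(I, P, adj, k)
        layers.append(fi)
        d = d - fi
    layers.append(d)            # the last residual is kept as the finest detail layer
    return layers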
The f_i's thus represent different layers of f captured at different scales. The morphological filter we have considered for the decomposition is an Open Close Close Open (OCCO). The OCCO filter is a self-dual operator that has excellent signal decomposition abilities [START_REF] Peters | A new algorithm for image noise reduction using mathematical morphology[END_REF]:

$OCCO_{B_k}(f) = \frac{\gamma_{B_k}(\phi_{B_k}(f)) + \phi_{B_k}(\gamma_{B_k}(f))}{2}.$

In Figure 2, we show an example with three levels of decomposition (l = 3) to obtain a coarse base layer f_0, a medium detail layer f_1 and a fine detail layer d_1.

Graph signal enhancement

Proposed approach. Given a graph signal f = (I, P), we first construct its multi-layer decomposition in l levels. The graph signal can be enhanced by manipulating the different layers with specific coefficients and adding the modified layers together. This is achieved with the following proposed scheme:

$\hat{f}(v_k) = S_0(f_0(v_k)) + M(v_k) \cdot \sum_{i=1}^{l-1} S_i(f_i(v_k)),$   (1)

with f_{l-1} = d_{l-1}. Each layer is manipulated by a nonlinear function S_i for detail enhancement and tone manipulation. The layers are combined with the use of a structure mask M that prevents noise and artifacts from being boosted while enhancing the main structures of the original graph signal f. We now provide details on S_i and M.

Nonlinear boosting curve. In classical image detail manipulation, the layers are manipulated in a linear way with specific layer coefficients (i.e., S_i(x) = α_i x [START_REF] Choudhury | Hierarchy of nonlocal means for preferred automatic sharpness enhancement and tone mapping[END_REF]). However, this can over-enhance some image details and requires hard clipping. Therefore, alternative nonlinear detail manipulation and tone manipulation have been proposed [START_REF] Farbman | Edge-preserving decompositions for multi-scale tone and detail manipulation[END_REF][START_REF] Paris | Local laplacian filters: edge-aware image processing with a laplacian pyramid[END_REF][START_REF] Talebi | Fast multi-layer laplacian enhancement[END_REF]. Similarly, we consider a nonlinear sigmoid function of the form $S_i(x) = \frac{1}{1+\exp(-\alpha_i x)}$, appropriately shifted and scaled. The parameter α_i of the sigmoid is automatically determined and decreases while i increases, whereas its width increases from one level to the other (details not provided due to reduced space).

Structure mask. As recently proposed in [START_REF] Talebi | Fast multi-layer laplacian enhancement[END_REF] for image enhancement, it is much preferable to boost strong signal structures and to keep the other areas unmodified. For graph signals, a vertex located on an edge or a textured area has a high spectral distance with respect to its neighbors as compared to a vertex within a constant area. Therefore, we propose to construct a structure mask that accounts for the structures present in the graph signal. A normalized sum of distances within a local neighborhood is a good indicator of the graph signal structure, and is defined as

$\delta(v_i) = \frac{\sum_{v_j \in B_1(v_i)} d_{EMD}(H(v_j), H(v_i))}{|B_1(v_i)|},$

with d_{EMD} the Earth Mover's Distance between two signatures that are compact representations of local distributions [START_REF] Rubner | The earth mover's distance as a metric for image retrieval[END_REF]. To build H(v_i), a histogram of size N is constructed on the index graph signal I as H(v_i) = {(w_k, m_k)}_{k=1}^{N} within the set B_1(v_i), where m_k is the index of the k-th element and w_k its appearance frequency. One has to note that N ≤ |B_1(v_i)| since identical values can be found within the set B_1(v_i), and two signatures can have different sizes.
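A sketch of the enhancement scheme of Eq. (1) is given below; the way the sigmoid is "shifted and scaled" (here, centred and rescaled to unit slope at the origin) and the choice of the α_i values are assumptions, since the paper does not give these details, and the per-vertex structure indicator δ is taken as an input.

import numpy as np

def sigmoid_boost(x, alpha):
    """Centred sigmoid rescaled to unit slope at the origin; one plausible way of
    shifting and scaling S_i(x) = 1 / (1 + exp(-alpha * x))."""
    return (4.0 / alpha) * (1.0 / (1.0 + np.exp(-alpha * x)) - 0.5)

def structure_mask(delta):
    """M(v) = 1 + (delta(v) - min delta) / (max delta - min delta), so M(v) lies in [1, 2]."""
    return 1.0 + (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)

def enhance(layers, delta, alphas):
    """Eq. (1): tone-mapped base layer plus structure-modulated, boosted detail layers.
    `layers` is the list [f0, f1, ..., d_last] returned by the decomposition and
    `delta` the per-vertex structure indicator defined above."""
    M = structure_mask(delta)
    out = sigmoid_boost(layers[0], alphas[0])
    details = sum(sigmoid_boost(layers[i], alphas[i]) for i in range(1, len(layers)))
    return out + M[:, None] * details      # broadcast the mask over the colour channels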
To compute the EMD, ground distances are computed in the CIELAB color space. Finally, we define the structure mask of a graph signal as

$M(v_i) = 1 + \frac{\delta(v_i) - \wedge\delta}{\vee\delta - \wedge\delta}.$

One can notice that M(v_i) ∈ [1, 2] and will be close to 1 for constant areas and close to 2 for ramp edges. Figure 3 presents examples of structure masks on two 3D colored graph signals. The structure mask is computed only once, on the original graph signal (I, P) = f.

EXPERIMENTAL RESULTS AND CONCLUSION

We illustrate our approach on graph signals in the form of 3D colored meshes that represent 3D scans of several person busts 1 . Such scans have recently received much interest to generate 3D printed selfies, and their perceived sharpness is of huge importance for final consumers. We have used l = 3 levels of decomposition for computational efficiency. To objectively assess the benefit of our method, we measure the sharpness of the original signal f and of the enhanced signal $\hat{f}$ with the TenenGrad criterion [START_REF] Xu | A comparison of contrast measurements in passive autofocus systems for low contrast images[END_REF][START_REF] Choudhury | Perceptually motivated automatic sharpness enhancement using hierarchy of non-local means[END_REF], after adapting it to 3D colored meshes by using the morphological gradient (as in [START_REF] Choudhury | Perceptually motivated automatic sharpness enhancement using hierarchy of non-local means[END_REF] for images):

$TG(f) = \frac{1}{3|V|} \sum_{v_i \in V} \sum_{k=1}^{3} |\delta(f_k)(v_i) - \epsilon(f_k)(v_i)|,$

where the morphological δ and ε are performed on each channel f_k on a 1-hop. It has been shown in [START_REF] Choudhury | Perceptually motivated automatic sharpness enhancement using hierarchy of non-local means[END_REF] that a higher value means a sharper signal and that this value is correlated with perceived sharpness. As can be seen in the results (Figs. 4 and 5), our approach has enhanced the local contrast without artifact magnification or detail loss.

CONCLUSION

We have introduced an approach for 3D colored graph enhancement based on a morphological multi-layer decomposition of graph signals. The use of nonlinear detail manipulation with a structure mask yields an automatic method that produces visually appealing results of enhanced sharpness.

Fig. 3. Graph signal structure masks used to modulate the importance of detail enhancement. The original graph signals can be seen in the next figures.
Fig. 4. Morphological colored mesh detail manipulation with cropped zoomed areas.
Fig. 5. Morphological colored mesh detail manipulation.

This work received funding from the Agence Nationale de la Recherche (ANR-14-CE27-0001 GRAPHSIP), and from the European Union FEDER/FSE 2014/2020 (GRAPHSIP project). Models from Cyberware and ReconstructMe.
16,965
[ "230" ]
[ "406734" ]
01486563
en
[ "spi" ]
2024/03/04 23:41:48
2017
https://ujm.hal.science/ujm-01486563/file/Visapp_Alex_CameraReady.pdf
Panagiotis-Alexandros Bokaris email: [email protected] Damien Muselet Alain Trémeau email: [email protected] 3D reconstruction of indoor scenes using a single RGB-D image Keywords: 3D reconstruction, Cuboid fitting, Kinect, RGB-D, RANSAC, Bounding box, Point cloud, Manhattan World The three-dimensional reconstruction of a scene is essential for the interpretation of an environment. In this paper, a novel and robust method for the 3D reconstruction of an indoor scene using a single RGB-D image is proposed. First, the layout of the scene is identified and then, a new approach for isolating the objects in the scene is presented. Its fundamental idea is the segmentation of the whole image in planar surfaces and the merging of the ones that belong to the same object. Finally, a cuboid is fitted to each segmented object by a new RANSAC-based technique. The method is applied to various scenes and is able to provide a meaningful interpretation of these scenes even in cases with strong clutter and occlusion. In addition, a new ground truth dataset, on which the proposed method is further tested, was created. The results imply that the present work outperforms recent state-of-the-art approaches not only in accuracy but also in robustness and time complexity. INTRODUCTION 3D reconstruction is an important task in computer vision since it provides a complete representation of a scene and can be useful in numerous applications (light estimation for white balance, augment synthetic objects in a real scene, design interiors, etc). Nowadays, with an easy and cheap access to RGB-D images, as a result of the commercial success of the Kinect sensor, there is an increasing demand in new methods that will benefit from such data. A lot of attention has been drawn to 3D reconstruction using dense RGB-D data [START_REF] Izadi | Kinectfusion: real-time 3d reconstruction and interaction using a moving depth camera[END_REF][START_REF] Neumann | Real-time rgb-d mapping and 3-d modeling on the gpu using the random ball cover data structure[END_REF][START_REF] Dou | Exploring high-level plane primitives for indoor 3d reconstruction with a hand-held rgb-d camera[END_REF]. Such data are obtained by multiple acquisitions of the considered 3D scene under different viewpoints. The main drawback of these approaches is that they require a registration step between the different views. In order to make the 3D reconstruction of a scene feasible despite the absence of a huge amount of data, this paper focuses on reconstructing a scene using a single RGB-D image. This challenging problem has been less addressed in the literature [START_REF] Neverova | 2 1/2 d scene reconstruction of indoor scenes from single rgb-d images[END_REF]. The lack of information about the shape and position of the different objects in the scene due to the single viewpoint and occlusions makes the task significantly more difficult. Therefore, various assumptions have to be made in order to make the 3D reconstruction feasible (object nature, orientation). In this paper, starting from a single RGB-D image, a fully automatic method for the 3D reconstruction of an indoor scene without constraining the object orientations is proposed. In the first step, the layout of the room is identified by solving the parsing problem of an indoor scene. 
For this purpose, the work of [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF] is exploited and improved by better addressing the problem of the varying depth resolution of the Kinect sensor while fitting planes. Then, the objects of the scene are segmented by using a novel plane-merging approach and a cuboid is fitted to each of these objects. The reason behind the selection of such representation is that most of the objects in a common indoor scene, such as drawers, bookshelves, tables or beds have a cuboid shape. For the cuboid fitting step, a new "double RANSAC"-based [START_REF] Fischler | Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography[END_REF] approach is proposed. The output of the algorithm is a 3D reconstruction of the observed scene, as illustrated in Fig. 1. In order to assess the quality of the reconstruction, a new dataset of captured 3D scenes is created, in which the exact positions of the objects are measured by using a telemeter. In fact, by knowing the exact 3D positions of the objects, one can objectively assess the accuracy of all the 3D reconstruction algorithms. This ground truth dataset will be publicly available for future comparisons. Finally, the proposed method is tested on this new dataset as well as on the NYU Kinect dataset [START_REF] Silberman | Indoor segmentation and support inference from rgbd images[END_REF]. The obtained results indicate that the proposed algorithm outperforms the state-ofthe-art even in cases with strong occlusion and clutter. RELATED WORK The related research to the problem examined in this paper can be separated in two different categories. The first category is the extraction of the main layout of the scene while the second one is the 3D representation of the objects in the scene. Various approaches have been followed in computer vision for recovering the spatial layout of a scene. Many of them are based on the Manhattan World assumption [START_REF] Coughlan | Manhattan world: Compass direction from a single image by bayesian inference[END_REF]. Some solutions only consider color images without exploiting depth information [START_REF] Mirzaei | Optimal estimation of vanishing points in a manhattan world[END_REF][START_REF] Bazin | Globally optimal line clustering and vanishing point estimation in manhattan world[END_REF][START_REF] Hedau | Recovering the spatial layout of cluttered rooms[END_REF][START_REF] Schwing | Efficient exact inference for 3d indoor scene understanding[END_REF][START_REF] Zhang | PanoContext: A Whole-Room 3D Context Model for Panoramic Scene Understanding[END_REF] and hence provide only coarse 3D layouts. With Kinect, depth information is available, which can be significantly beneficial in such applications. [START_REF] Zhang | Estimating the 3d layout of indoor scenes and its clutter from depth sensors[END_REF] expanded the work of [START_REF] Schwing | Efficient exact inference for 3d indoor scene understanding[END_REF]) and used the depth information in order to reduce the layout error and estimate the clutter in the scene. [START_REF] Taylor | Fast scene analysis using image and range data[END_REF] developed a method that parses the scene in salient surfaces using a single RGB-D image. Moreover, [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF] presented a method for parsing the Manhattan structure of an indoor scene. 
Nonetheless, these works are based on assumptions about the content of the scene (minimum size of a wall, minimum ceiling height, etc.). Moreover, in order to address the problem of the depth accuracy in Kinect, they used the depth disparity differences, which is not the best solution as it is discussed in section 3.1. Apart from estimating the layout of an indoor scene, a considerable amount of research has been done in estimating surfaces and objects from RGB-D images. [START_REF] Richtsfeld | Towards scene understanding -object segmentation using rgbd-images[END_REF] used RANSAC and NURBS [START_REF] Piegl | On nurbs: a survey[END_REF] for detecting unknown 3D objects in a single RGB-D image, requiring learning data from the user. [START_REF] Cupec | Fast 2.5d mesh segmentation to approximately convex surfaces[END_REF][START_REF] Jiang | Finding Approximate Convex Shapes in RGBD Images[END_REF] segment convex 3D shapes but their grouping to complete objects remains an open issue. To the best of our knowledge, [START_REF] Neverova | 2 1/2 d scene reconstruction of indoor scenes from single rgb-d images[END_REF] was the first method that proposed a 3D reconstruction starting from a single RGB-D image under the Manhattan World assumption. However, it has the significant limitation that it only reconstructs 3D objects which are parallel or perpendicular to the three main orientations of the Manhattan World. [START_REF] Lin | Holistic scene understanding for 3d object detection with rgbd cameras[END_REF] presented a holistic approach that takes into account 2D segmentation, 3D geometry and contextual relations between scenes and objects in order to detect and classify objects in a single RGB-D image. Despite the promising nature of such approach it is constrained by the assumption that the objects are parallel to the floor. In addition, the cuboid fitting to the objects is performed as the minimal bounding cube of the 3D points, which is not the optimal solution when working with Kinect data, as discussed by [START_REF] Jia | 3dbased reasoning with blocks, support, and stability[END_REF]. Recently, an interesting method that introduced the "Manhattan Voxel" was developed by [START_REF] Ren | Three-dimensional object detection and layout prediction using clouds of oriented gradients[END_REF]. In their work the 3D layout of the room is estimated and detected objects are represented by 3D cuboids. Being a holistic approach that prunes candidates, there is no guarantee that a cuboid will be fitted to each object in the scene. Based on a single RGB image, [START_REF] Dwibedi | Deep cuboid detection: Beyond 2d bounding boxes[END_REF] developed a deeplearning method to extract all the cuboid-shaped objects in the scene. This novel technique differs from our perspective since the intention is not to fit a cuboid to a 3D object but to extract a present cuboid shape in an image. The two methods [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF][START_REF] Jia | 3dbased reasoning with blocks, support, and stability[END_REF] are similar with our approach since their authors try to fit cuboids using RANSAC to objects of a 3D scene acquired by a single RGB-D image. [START_REF] Jia | 3dbased reasoning with blocks, support, and stability[END_REF] followed a 3D reasoning approach and investigated different constraints that have to be applied to the cuboids, such as occlusion, stability and supporting relations. However, this method is applicable only to pre-labeled images. 
[START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] coarsely segment the RGB-D image into roughly piecewise planar patches and for each pair of such patches fit a cuboid to the two planes. As a result, a large set of cuboid candidates is created. Finally, the best subset of cuboids is selected by optimizing an objective function, subject to various constraints. Hence, they require strong constraints (such as intersections between pairs of cuboids, number of cuboids, covered area on the image plane, occlusions among cuboids, etc.) during the global optimization process. This pioneer approach provides promising results in some cases but very coarse ones in others even for dramatically simple scenes (see Figs. 9 and 10 and images shown in [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF]). In this paper, in order to improve the quality of the reconstruction, we followed a different approach and propose an accurate segmentation step using novel constraints. The objective is to isolate the objects from each other before fitting the cuboids due to the fact that the cuboid fitting step can be significantly more efficient and accurate when working with each object independently. METHOD OVERVIEW The method proposed in this paper can be separated in three different stages. The first stage is to define the layout of the scene. This implies to extract the floor, all the walls and their intersections. For this purpose, the input RGB-D image is segmented by fitting 3D planes to the point cloud. The second stage is to segment all the objects in the scene and to fit a cuboid to each one separately. Finally, in stage 3 the results of the two previous stages are combined in order to visualize the 3D model of the room. An overview of this method can be seen in Fig. 2 3 .1 Parsing the indoor scene In order to parse the indoor scene and extract the complete layout of the scene, an approach based on the research of [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF] is used. According to this work, the image is separated in planar regions by fitting planes to the point cloud using RANSAC, as can be seen in Fig 2b . Then the floor and the walls are detected by analyzing their surfaces, angles with vertical and angles between them. This method provides the layout of the room in less than 6 seconds. The final result of the layout of the scene, visualized in the 3D Manhattan World, can be seen in the bottom of Fig. 2c. While working with depth values provided by the Kinect sensor, it is well known that the depth accuracy is not the same for the whole range of depth [START_REF] Andersen | Kinect depth sensor evaluation for computer vision applications[END_REF], i.e. the depth information is more accurate for points that are close to the sensor than for points that are farther. This has to be taken into account in order to define a threshold according to which the points will be considered as inliers in a RANSAC method. Points with a distance to a plane inside the range of Kinect error should be treated as inliers of that plane. In order to address this problem, [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF] proposed to fit planes in the disparity (inverse of depth) image instead of working directly with depth. 
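A sketch of the proposed depth-adaptive inlier test is given below; the calibration pairs are placeholders standing in for the Kinect error measurements cited above and are not the values used in the paper.

import numpy as np

# (depth [mm], tolerated plane-fitting error [mm]) calibration pairs; placeholder values
calib_depth = np.array([800.0, 1500.0, 2500.0, 3500.0, 4500.0])
calib_error = np.array([3.0, 7.0, 15.0, 30.0, 50.0])
poly = np.polyfit(calib_depth, calib_error, deg=2)      # second-degree polynomial fit

def inlier_threshold(depth_mm):
    """Depth-dependent RANSAC threshold: the tolerated residual grows with distance."""
    return np.polyval(poly, depth_mm)

def plane_inliers(points, plane):
    """points: (N, 3) in mm with z taken as depth; plane: (a, b, c, d) with unit normal."""
    dist = np.abs(points @ plane[:3] + plane[3])
    return dist < inlier_threshold(points[:, 2])

Within each RANSAC iteration, a candidate plane is then scored by the number of points satisfying this per-point test rather than a single global tolerance.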
This solution improves the accuracy but we claim that the best solution would be to use a threshold for the computation of the residual errors in RANSAC that increases according to the distance from the sensor. This varying threshold is computed once by fitting a second degree polynomial function to the depth values provided by [START_REF] Andersen | Kinect depth sensor evaluation for computer vision applications[END_REF]. The difference between the varying threshold proposed by [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF] using disparity and the one proposed here can be seen in Fig. 3. As observed in the graph, our threshold follows significantly better the experimental data of [START_REF] Andersen | Kinect depth sensor evaluation for computer vision applications[END_REF] compared to the threshold of [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF]. The impact of the proposed threshold on the room layout reconstruction can be seen in the two character-istic examples in Fig. 4. As it can be easily noticed, with the new threshold the corners of the walls are better defined and complete walls are now detected. This adaptive threshold is further used in the cuboid fitting step and significant improvements are obtained for various objects, as it is discussed in section 3.3. Segmenting the objects in the scene As an output of the previous step, the input image is segmented in planar regions (Fig. 2b). Moreover, it is already known which of these planar regions correspond to the walls and to the floor in the scene (bottom of Fig. 2c). By excluding them from the image, only planar regions that belong to different objects in the image are left, as can be seen in the top of Fig. 2c. In order to segment the objects in the scene, the planar regions that belong to the same object have to be merged. For this purpose, the edges of the planar surfaces are extracted using a Canny edge detector and the common edge between neighboring surfaces is calculated. Then, we propose to merge two neighbor surfaces by analyzing i)the depth continuity across surface boundaries, ii)the angle between the surface normals and iii)the size of each surface. For the first criterion, we consider that two neighboring planar surfaces that belong to the same object have similar depth values in their common edge and different ones when they belong to different objects. The threshold in the mean depth difference is set to 60 mm in all of our experiments. The second criterion is necessary in order to prevent patches that do not belong to the same object to be merged. In fact, since this study is focused on cuboids, the planar surfaces that should be merged need to be either parallel or perpendicular to each other. The final criterion forces neighboring planar surfaces to be merged if both of their sizes are relatively small (less than 500 points). The aim is to regroup all small planar regions that constitute an object that does not have a cuboid shape (sphere, cylinder, etc.). This point is illustrated in Fig. 5, where one cylinder is extracted. The proposed algorithm checks each planar region with respect to its neighboring regions (5 pixels area) in order to decide whether they have to be merged or not. This step is crucial for preparing the data before fitting cuboids in the next step. Fitting a cuboid to each object The aim of this section is to fit an oriented cuboid to each object. 
As discussed by [START_REF] Jia | 3dbased reasoning with blocks, support, and stability[END_REF], the optimal cuboid is the one with the minimum volume and the maximum points on its surface. Since the image has been already segmented, i.e. each object is isolated from the scene, the strong global constraints used by [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF]) can be relaxed and more attention to each cuboid can be drawn. Therefore, we propose the following double-RANSAC process. Two perpendicular planar surfaces are sufficient to define a cuboid. Hence, in order to improve the robustness of the method, we propose to consider only the two biggest planar surfaces of each object. In fact, in a single viewpoint of a 3D scene only two surfaces of an object are often visible. Thus, first, for each segmented object, the planar surface with the maximum number of inliers is extracted by fitting a plane to the corresponding point cloud using RANSAC (with our adaptive threshold described in section 3.1). The orientation of this plane provides the first axis of the cuboid. We consider that the second plane is perpendicular to the first one but this information is not sufficient to define the second plane. Furthermore, in case of noise or when the object is thin (few points in the other planes) or far from the acquisition sensor, the 3D orientation of the second plane might be poorly estimated. Hence, we propose a robust solution which projects all the remaining points of the point cloud on the first plane and then fits a line using another RANSAC step to the projected points. The orientation of this line provides the orientation of the second plane. This is visualized in Fig. 6. In the experiments section, it is shown that this double RANSAC process provides very good results while fitting cuboids to small, thin or far objects. Furthermore, as a second improvement of the RANSAC algorithm, we propose to analyze its qual-ity criterion. In fact, RANSAC fits several cuboids to each object (10 cuboids in our implementation) and selects the one that optimizes a given quality criterion. Thus, the chosen quality criterion has a big impact on the results. As it was discussed before, in RGB-D data a well estimated cuboid should have a maximum of points on its surface. Given one cuboid returned by one RANSAC iteration, we denote area f 1 and area f 2 the areas of its two faces and area c1 and area c2 the areas defined by the convex hull of the inlier points projected on these two faces, respectively. In order to evaluate the quality of the fitted cuboid, Jiang and Xiao proposed the measure defined as min( area c1 area f 1 , area c2 area f 2 ) which is equal to the maximum value of 1 when the fitting is perfect. This measure assimilates the quality of a cuboid to the quality of the worst plane among the two, without taking into account the quality of the best fitting plane. Nevertheless, the quality of the best fitting plane could help in deciding between two cuboids characterized by the same ratio. Furthermore, the relative sizes of the two planes are completely ignored in this criterion. Indeed, in case of a cuboid composed by a very big plane and a very small one, this measure does not provide any information about which one is well fitted to the data, although this information is crucial to assess the quality of the cuboid fitting. Consequently, we propose to use a similar criterion which does not suffer from these drawbacks: ratio = area c1 +area c2 area f 1 +area f 2 . 
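For illustration, both quality measures can be computed from the four areas as follows; the function names are ours, and the convex-hull areas of the projected inliers are assumed to be computed beforehand.

#include <algorithm>

// Quality of a candidate cuboid, given the areas of its two visible faces
// (area_f1, area_f2) and the areas of the convex hulls of the inlier points
// projected on these faces (area_c1, area_c2).
float quality_jiang_xiao(float area_c1, float area_c2, float area_f1, float area_f2) {
    return std::min(area_c1 / area_f1, area_c2 / area_f2);   // worst face only
}

float quality_proposed(float area_c1, float area_c2, float area_f1, float area_f2) {
    return (area_c1 + area_c2) / (area_f1 + area_f2);        // area-weighted ratio
}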
Likewise, for an ideal fitting this measure is equal to 1. In order to illustrate the improvement due to the proposed adaptive threshold (of section 3.1) and the proposed ratio in the cuboid fitting step, 3 typical examples are shown in in Fig. 7. There, it can be seen that the proposed method (right column) increases significantly the performance for far and thin objects. In the final step of the method, the fitted cuboids are projected in the Manhattan World of the scene, in order to obtain the 3D model of the scene, as illustrated in Fig. 2f. Additionally, the cuboids are pro- jected on the input RGB image in order to demonstrate how well the fitting procedure performs (see Fig. 2e). NEW GROUND TRUTH DATASET For an objective evaluation, a new dataset with measured ground truth 3D positions was built. This dataset is composed by 4 different scenes and each scene is captured under 3 different viewpoints and 4 different illuminations. Thus, each scene consists of 12 images. For all these 4 scenes, the 3D positions of the vertices of the objects were measured using a telemeter. These coordinates constitute the ground truth. As the reference point was considered the intersection point of the three planes of the Manhattan World. It should be noted that the measurement of vertices positions in a 3D space with a telemeter is not perfectly accurate and the experimental measurements show that the precision of these ground truth data is approximately ±3.85mm. Some of the dataset images can be seen in the figures of the next section. EXPERIMENTS Qualitative evaluation As a first demonstration of the proposed method some reconstruction results are shown in Fig. 8. It can be seen that it performs well even in very demanding scenes with strong clutter. Moreover, it is able to handle small and thin objects with convex surfaces. Subsequently, our method is compared with the recent method proposed by [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] since their method not only performs cuboid fitting to RGB-D data but also outperforms various other approaches. A first visual comparison can be performed on both our dataset and the well-known NYUv2 Kinect Dataset [START_REF] Silberman | Indoor segmentation and support inference from rgbd images[END_REF] in Figs. 9 and 10, respectively. It should be noted that all the thresholds in this paper were tuned to the provided numbers for both ours and the NYUv2 dataset. This point highlights the generality of our method that was tested in a wide variety of scenes. [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] have further improved their code and its last release (January 2014) was used for our comparisons. A random subset of 40 images that contain information about the layout of the room was selected from the NYUv2 Kinect dataset. The results imply that our method provides significantly better reconstructions than this state-of-the-art approach. Furthermore, in various in Fig. 9, it can be observed that the global cuboid fitting method of [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] can result in cuboids that do not correspond to any object in the scene. The reason for this is the large set of candidate cuboids that they produce for each two planar surfaces in the image. The strong constraints that they apply afterwards, in order to eliminate the cuboids which do not correspond to an object, do not always guarantee an optimal solution. 
Another drawback of this approach is that the aforementioned constraints might eliminate a candidate cuboid that does belong to a salient object. In the next section, the improvement of our approach is quantified by an exhaustive test on our ground truth dataset. Quantitative evaluation In order to test how accurate is the output of the proposed method and how robust it is against different viewpoints and illuminations, the following procedure was used. The 3D positions of the reconstructed vertices are compared to their ground truth positions by measuring their Euclidean distance. The mean value (µ) and the standard deviation (σ) of these Euclidean distances as well as the mean running time of the algorithm over the 12 images of each scene are presented in Table 1. The results using the code of [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] are included in the table for comparison. It should be noted that since this method does not provide the layout of the room, their estimated cuboids are rotated to the Manhattan World obtained by our method for each image. During the experiments, it was noticed that the results of [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] were very unstable and various times their method could not provide a cuboid for each object in the scene. Moreover, since the RANSAC algorithm is non-deterministic, neither are both our approach and the one of [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF]. In order to quantify this instability, each algorithm was run 10 times on the exact same image (randomly chosen) of each scene. The mean (µ) and standard deviation (σ) of the Euclidean distance between the ground truth and the reconstructed 3D positions were measured. The results are presented in Table 2. It should be noted that the resulting 3D positions of both algorithms are estimated according to the origin of the estimated layout of the room. Thus, the poor resolution of the Kinect sensor is perturb- ing the estimation of both the layout and the 3D positions of the objects and the errors are cumulating. However, the values of the mean and standard deviation for our method are relatively low with respect to the depth resolution of Kinect sensor at that distance, which is approximately 50 mm at 4 meters [START_REF] Andersen | Kinect depth sensor evaluation for computer vision applications[END_REF]. Furthermore, the standard deviations of Table 2 are considerably low and state a maximum deviation of the result less than 4.5 mm. Finally, as can be seen in Table 1, the computational cost of our method is dramatically lower than the one of [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF]. It should be noted that in this running time our method estimates the complete 3D scene reconstruction of the scene. It requires around 9 seconds for a simple scene and less than 20 seconds for a demanding scene with strong clutter and occlusion on a Dell Inspiron 3537, i7 1.8 Ghz, 8 GB RAM. It is worth mentioning that no optimization was done in the implementation. Thus, the aforementioned running times could be considerably lower. CONCLUSIONS In this paper, a new method that provides accurate 3D reconstruction of an indoor scene using a single RGB-D image is proposed. First, the layout of the scene is extracted by exploiting and improving the method of [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF]. 
The latter is achieved by better addressing the problem of the non-linear relationship between depth resolution and distance from the sensor. For the 3D reconstruction of the scene, we propose to fit cuboids to the objects composing the scene since this shape is well adapted to most of the indoor objects. Unlike the state-of-theart method [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] that runs a global optimization process over sets of cuboids with strong constraints, we propose to automatically segment the image, as a preliminary step, in order to focus on the local cuboid fitting on each extracted object. It is shown that our method is robust to viewpoint and object orientation variations. It is able to provide meaningful interpretations even in scenes with strong clutter and occlusion. More importantly, it outperforms the state-of-the-art approach not only in accuracy but also in robustness and time complexity. Finally, a ground truth dataset for which the exact 3D positions of the objects have been measured is provided. This dataset can be used for future comparisons. Figure 1 : 1 Figure 1: (left) Color and Depth input images, (right) 3D reconstruction of the scene. Figure 2 : 2 Figure 2: An overview of the proposed method. Figure 3 : 3 Figure 3: Comparison of the varying threshold set in (Taylor and Cowley, 2012) and the one proposed in this paper. Figure 4 : 4 Figure 4: Impact of the proposed threshold in the room layout reconstruction. (left column): Input image (middle column): Threshold in (Taylor and Cowley, 2012). (right column): Threshold proposed here. Figure 5 : 5 Figure 5: An example of merging objects that are not cuboids.(left) original input image. (middle):Before merging. (right):After merging. Figure 6 : 6 Figure 6: Illustration of our cuboid fitting step. (left): The inliers of the first fitted 3D plane are marked in green. The remaining points and their projection on the plane is marked in red and blue, respectively. A 3D line is fitted to these points. (right): The fitted cuboid. Figure 7 : 7 Figure 7: Impact of the selected threshold and ratio on the cuboid fitting. (left): Fixed global threshold and ratio proposed here. (middle): Varying threshold proposed here and ratio proposed in (Jiang and Xiao, 2013) (right): Threshold and ratio proposed here. Figure 8 : 8 Figure 8: Various results of the proposed method on different real indoor scenes. Figure 10 : 10 Figure 10: Random results of (Jiang and Xiao, 2013) (top 2 rows) and the corresponding ones of our method (bottom 2 rows) for the ground truth dataset. Figure 9 : 9 Figure 9: Comparison of the results obtained by (Jiang and Xiao, 2013) (odd rows) and the method proposed in this paper (even rows) for the NYUv2 Kinect dataset. Table 2 : 2 Mean value (µ) and standard deviation (σ) of the Euclidean distances between the ground truth and the reconstructed vertices over 10 iterations of the algorithm on the same image.Our method[START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF]) µ (mm) σ (mm) µ (mm) σ (mm Table 1 : 1 Mean value (µ) and standard deviation (σ) of the Euclidean distances in mm between the ground truth and the reconstructed vertices over the 12 images of each scene and mean running time (t) in seconds of each algorithm. 
                 Our method               (Jiang and Xiao, 2013)
                 µ      σ      t*         µ       σ      t*
        Scene 1  52.4   8.8    8.8        60.9    19.6   25.3
        Scene 2  60.4   20.9   12.3       132.7   65.9   26.1
        Scene 3  69.7   20.2   14.2       115.7   48.3   27.2
        Scene 4  74.9   35.3   12.2       145.3   95.4   26.8
        * Running on a Dell Inspiron 3537, i7 1.8 GHz, 8 GB RAM
31,490
[ "1003869", "172493", "859601" ]
[ "247329", "17835", "17835" ]
01486575
en
[ "info" ]
2024/03/04 23:41:48
2017
https://hal.science/hal-01486575/file/GRAPP_2017_29.pdf
Maxime Maria email: [email protected] Sébastien Horna email: [email protected] Lilian Aveneau email: [email protected] Efficient Ray Traversal of Constrained Delaunay Tetrahedralization Keywords: Ray Tracing, Acceleration Structure, Constrained Delaunay Tetrahedralization published or not. The documents may come INTRODUCTION Ray tracing is a widely used method in computer graphics, known for its capacity to simulate complex lighting effects to render high-quality realistic images. However, it is also recognized as timeconsuming due to its high computational cost. To speed up the process, many acceleration structures have been proposed in the literature. They are often based on a partition of Euclidean space or object space, like kd-tree [START_REF] Bentley | Multidimensional Binary Search Trees Used for Associative Searching[END_REF], BSP-tree, BVH [START_REF] Rubin | A 3-dimensional representation for fast rendering of complex scenes[END_REF][START_REF] Kay | Ray Tracing Complex Scenes[END_REF] and regular grid [START_REF] Fujimoto | ARTS: Accelerated Ray-Tracing System[END_REF]. A survey comparing all these structures can be found in [START_REF] Havran | Heuristic Ray Shooting Algorithms[END_REF]. They can reach interactive rendering, e.g exploiting ray coherency [START_REF] Wald | Interactive Rendering with Coherent Ray Tracing[END_REF][START_REF] Reshetov | Multilevel Ray Tracing Algorithm[END_REF][START_REF] Mahovsky | Memory-Conserving Bounding Volume Hierarchies with Coherent Raytracing[END_REF] or GPU parallelization [START_REF] Purcell | Ray Tracing on Programmable Graphics Hardware[END_REF][START_REF] Foley | KD-tree Acceleration Structures for a GPU Raytracer[END_REF][START_REF] Günther | Realtime Ray Tracing on GPU with BVHbased Packet Traversal[END_REF][START_REF] Aveneau | Understanding the Efficiency of Ray Traversal on GPUs[END_REF][START_REF] Kalojanov | Two-Level Grids for Ray Tracing on GPUs[END_REF]. Nevertheless, actually a lot of factors impact on traversal efficiency (scene layout, rendering algorithm, etc.). A different sort of acceleration structures is the constrained convex space partition (CCSP), slightly studied up to then. A CCSP is a space partition into convex volumes respecting the scene geometry. [START_REF] Fortune | Topological Beam Tracing[END_REF] introduces this concept by proposing a topological beam tracing using an acyclic convex subdivision respecting the scene obstacles, but using a hand-made structure. Recently, [START_REF] Maria | Constrained Convex Space Partition for Ray Tracing in Architectural Environments[END_REF] present a CCSP dedicated to architectural environments, hence limiting its purpose. [START_REF] Lagae | Accelerating Ray Tracing using Constrained Tetrahedralizations[END_REF] propose to use a constrained Delaunay tetrahedralization (CDT), i.e. CCSP only made up of tetrahedra. However, our experiments show that their CDT traversal methods cannot run on GPU, due to numerical errors. Using a particular tetrahedron representation, this paper proposes an efficient CDT traversal, having the following advantages: • It is robust, since it does not cause any error due to numerical instability, either on CPU or on GPU. • It requires less arithmetic operations and so it is inherently faster than previous solutions. • It is adapted to parallel programming since it does not add extra thread divergence. This article is organized as follows: Section 2 recapitulates previous CDT works. Section 3 presents our new CDT traversal. 
Section 4 discusses our experiments. Finally, Section 5 concludes this paper. PREVIOUS WORKS ON CDT This section first describes CDT, then it presents its construction from a geometric model, before focusing on former ray traversal methods. CDT description A Delaunay tetrahedralization of a set of points X ∈ E 3 is a set of tetrahedra occupying the whole space and respecting the Delaunay criterion (Delaunay, 1934): a tetrahedron T , defined by four vertices V ⊂ X, is a Delaunay tetrahedron if it exists a circumscribed sphere S of T such as no point of X \ {V } is inside S. Figure 1 illustrates this concept in 2D. Delaunay tetrahedralization is "constrained" if it respects the scene geometry. In other words, all the geometric primitives are necessarily merged with the faces of the tetrahedra making up the partition. Three kinds of CDT exist: usual constrained Delaunay tetrahedralization [START_REF] Chew | Constrained Delaunay triangulations[END_REF], conforming Delaunay tetrahedralization [START_REF] Edelsbrunner | An upper bound for conforming delaunay triangulations[END_REF] and quality Delaunay tetrahedralization [START_REF] Shewchuk | Tetrahedral Mesh Generation by Delaunay Refinement[END_REF]. In ray tracing context, [START_REF] Lagae | Accelerating Ray Tracing using Constrained Tetrahedralizations[END_REF] proved that quality Delaunay tetrahedralization is the most efficient to traverse. CDT construction CDT cannot be built from every geometric models. A necessary but sufficient condition is that the model is a piecewise linear complex (PLC) [START_REF] Miller | Control Volume Meshes using Sphere Packing: Generation, Refinement and Coarsening[END_REF]. In 3D, any non empty intersection between two faces of a PLC must correspond to either a shared edge or vertex. In other words, there is no self-intersection (Figure 2). In computer graphics, a scene is generally represented as an unstructured set of polygons. In such a case, some self-intersections may exist. Nevertheless, it is still possible to construct PLC using a mesh repair technique such as [START_REF] Zhou | Mesh Arrangements for Solid Geometry[END_REF]. CDT can be built from a given PLC using the Si's method [START_REF] Si | On Refinement of Constrained Delaunay Tetrahedralizations[END_REF]. It results in a tetrahedral mesh, containing two kinds of faces: occlusive faces, belonging to the scene geometry; and some nonocclusive faces, introduced to build the partition. Obviously, a given ray should traverse the latter, as nonocclusive faces do not belong to the input geometry. CDT traversal Finding the closest intersection between a ray and CDT geometry is done in two main steps. First, the tetrahedron containing the ray origin is located. Second, the ray goes through the tetrahedralization by traversing one tetrahedron at a time until hitting an occlusive face. This process is illustrated in Figure 3. Let us notice that there is no need to explicitly test intersections with the scene geometry, as usual acceleration structures do. This is done implicitly by searching the exit face from inside a tetrahedron. Locating ray origin Using pinhole camera model, all primary rays start from the same origin. For an interactive application locating this origin is needed only for the first frame, hence it is a negligible problem. Indeed, camera motion generally corresponds to a translation, for instance when the camera is shifted, or when ray origins are locally perturbed for depth-of-field effect. 
Using a maximal distance in the traversal algorithm efficiently solves this kind of move. Locating the origin of non primary rays is avoided by exploiting implicit ray connectivity inside CDT: both starting point and volume correspond to the arrival of the previous ray. Exit face search Several methods have been proposed in order to find the exit face of a ray from inside a tetrahedron. [START_REF] Lagae | Accelerating Ray Tracing using Constrained Tetrahedralizations[END_REF] present four different ones. The first uses four ray/plane intersections and is similar to [START_REF] Garrity | Raytracing Irregular Volume Data[END_REF]. The second is based on half space classification. The third finds the exit face using 6 permuted inner products (called side and noted ⊙) of Plücker coordinates [START_REF] Shoemake | Plücker coordinate tutorial[END_REF]. It is similar to [START_REF] Platis | Fast Ray-Tetrahedron Intersection Using Plucker Coordinates[END_REF] technique. Their fourth and fastest method uses 3 to 6 Scalar Triple Products (STP). It is remarkable that none of these four methods exploits the knowledge of the ray entry face. For volume rendering, [START_REF] Marmitt | Fast Ray Traversal of Tetrahedral and Hexahedral Meshes for Direct Volume Rendering[END_REF] extend [START_REF] Platis | Fast Ray-Tetrahedron Intersection Using Plucker Coordinates[END_REF]. Their method (from now MS06) exploits neighborhood relations between tetrahedra to automatically discard the entry face. It finds the exit face using 2,67 side products on average. Since the number of products varies, MS06 exhibits some thread divergence in parallel environment. This drawback also appears with the fastest Lagae et al. method. All these methods are not directly usable on GPU, due to numerical instability. Indeed, the insufficient arithmetic precision with 32-bits floats causes some failures to traverse CDT, leading to infinite loops. In this paper, we propose a new traversal algorithm, based on Plücker coordinates. Like MS06, it exploits the neighborhood relations between faces. The originality lies in our specific tetrahedron representation, allowing to use exactly 2 optimized side products. NEW TRAVERSAL ALGORITHM CDT traversal algorithm is a loop, searching for the exit face from inside a tetrahedron (Figure 3). We propose a new algorithm, both fast and robust. It uses Plücker coordinates, i.e. six coordinates corresponding to the line direction u and moment v. Such a line is oriented: it passes through a first point p, and then a second one q. Then, u = qp and v = p × q. For two lines l = {u : v} and l ′ = {u ′ : v ′ }, the sign of the side product l ⊙ l ′ = u • v ′ + v • u ′ indicates the relative orientation of the two lines: negative value means clockwise orientation, zero value indicates intersection, and positive value signifies counterclockwise orientation [START_REF] Shoemake | Plücker coordinate tutorial[END_REF]. Exit face search Our algorithm assumes that the entry face is known, and that the ray stabs the current tetrahedron. For a given entry face, we use its complement in the tetrahedron, i.e. the part made of one vertex, three edges and three faces. We denote Λ 0 , Λ 1 and Λ 2 the complement edges, with counterclockwise orientation from inside the tetrahedron (Figure 4). We number complement faces with a local identifier from 0 to 2, such that: face 0 is bounded by Λ 0 and Λ 2 , face 1 is bounded by Λ 1 and Λ 0 , and face 2 is bounded by Λ 2 and Λ 1 . 
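For reference, a plain C++ sketch of the Plücker construction and side product used in the rest of this section is given below; the vector type and function names are ours, and the optimized variant described later in Section 3.4 is not shown here.

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Oriented line passing through p then q, in Plücker coordinates {u : v}.
struct Plucker { Vec3 u, v; };

Plucker make_line(Vec3 p, Vec3 q) { return { sub(q, p), cross(p, q) }; }

// side(l, l') = u . v' + v . u' ; the sign gives the relative orientation:
// negative for clockwise, zero for intersecting, positive for counterclockwise.
float side(const Plucker& l, const Plucker& lp) {
    return dot(l.u, lp.v) + dot(l.v, lp.u);
}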
Using Plücker side product, the face stabbed by ray r is: Λ 2 Λ 0 Λ 1 r r ⊙ Λ 1 r ⊙ Λ 2 2 1 < 0 ≥ 0 ≥ 0 r ⊙ Λ 0 < 0 < 0 ≥ 0 1 0 (b) (a) • face 0, if and only if r turns counterclockwise around Λ 0 and clockwise around Λ 2 (r ⊙ Λ 0 ≥ 0 and r ⊙ Λ 2 < 0); • face 1, if and only if r turns counterclockwise around Λ 1 and clockwise around Λ 0 (r ⊙ Λ 1 ≥ 0 and r ⊙ Λ 0 < 0); • face 2, if and only if r turns counterclockwise around Λ 2 and clockwise around Λ 1 (r ⊙ Λ 2 ≥ 0 and r ⊙ Λ 1 < 0). We compact these conditions into a decision tree (Figure 4(b)). Each leaf corresponds to an exit face, and each interior node represents a side product between r and a line Λ i . At the root, we check r ⊙ Λ 2 . If it is negative (clockwise), then r cannot stab face 2: in the left subtree, we only have to determine if r stabs face 0 or 1, using their shared edge Λ 0 . Otherwise, r turns counterclockwise around Λ 2 and so cannot stab face 0, and the right subtree we check if r stabs face 1 or 2 using their shared edge Λ 1 . With Figure 4(a) example, r turns clockwise around Λ 2 and then counterclockwise around Λ 0 ; so, r exits through face 0. Require: F e = {Λ 0 , Λ 1 , Λ 2 }: entry face; Λ r : ray; Ensure: F s : exit face; 1: side ← Λ r ⊙ F e .Λ 2 ; 2: id ← (side ≥ 0); {id ∈ {0, 1}} 3: side ← Λ r ⊙ F e .Λ id ; 4: id ← id + (side < 0); {id ∈ {0, 1, 2}} 5: F s ← getFace(F e ,id); 6: return F s ; Algorithm 1: Exit face search from inside a tetrahedron. Exit Entry face identifier F 0 F 1 F 2 F 3 0 F 1 F 0 F 0 F 0 1 F 2 F 3 F 1 F 2 2 F 3 F 2 F 3 F 1 Table 1: Exit face according to the entry face and a local identifier in {0, 1, 2}, following a consistent face numbering (Figure 5(a)). Since every decision tree branch has a fixed depth of 2, our new exit face search method answers using exactly two side products. Moreover, it is optimized to run efficiently without any conditional instruction (Algorithm 1). Notice that leave labels form two pairs from left to right: the first pair (0,1) is equal to the second (1,2), minus 1. Then, it uses that successful logical test returns 1 (and 0 in failure case) to decide which face to discard. So, the test r ⊙ Λ 2 ≥ 0 allows to decide if we have to consider the first or the second pair. Finally, the same method is used with either the line Λ 0 or Λ 1 . This algorithm ends with getFace function call. This function returns the tetrahedron face number according to the entry face and to the exit face label. It answers using a lookup-table, defined using simple combinatorics (Table 1), assuming a consistent labeling of tetrahedron faces (Figure 5(a)). Data structure Algorithm 1 works for any entry face of any tetrahedron. It relies on two specific representations of the tetrahedron faces: a local identifier in {0, 1, 2}, and global face F i , i ∈ [0 . . . 3]. For a given face, it uses 3 Plücker lines Λ i . Since such lines contain 6 coordinates, a face needs 18 single precision floats for the lines (18 × 32 bits), plus brdf and neighborhood data (tetrahedron and face numbers). To reduce data size and balance GPU computations and memory accesses, we dynamically calculate the Plücker lines knowing their extremities: each line starts from a face vertex and ends with the complement vertex. So, we need all the tetrahedron vertices. We arrange the faces such that their complement vertex have the same number, implicitly known. Vertices are stored into tetrahedra (for coalescent memory accesses), and vertex indices (in [0 . . . 3]) are stored into faces. 
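In code, this layout corresponds to the following C declarations, written after the listing given with Figure 5; float3 stands for a 3-float vector type such as CUDA's built-in one.

struct float3 { float x, y, z; };   // stand-in for CUDA's float3

struct Face {
    int brdf;      // -1: non-occlusive face; otherwise index of the material (BRDF)
    int tetra;     // neighboring tetrahedron
    int face;      // index of this face inside the neighboring tetrahedron
    int idV[3];    // indices (in [0..3]) of the face vertices in the owning tetrahedron
};

struct Tetrahedron {
    float3 V[4];   // vertices
    Face   F[4];   // faces
};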
This leads to the following data structure: To save memory and so bandwidth, we compact the structure Face. The neighboring face (the field face) is a number between 0 and 3; it can be encoded using two bits, and so packed with the field tetra, corresponding to the neighboring tetrahedron. Thus, tetrahedron identifiers are encoded on 30 bits, allowing a maximum of one billion tetrahedra. In a similar way, field idV needs only 2 bits per vertex. But, they are common to all the tetrahedra, and so are stored only once for all into 4 unsigned char. Hence, a face needs 8 bytes, and a full tetrahedron 80 bytes. Notice that, on GPU a vertex is represented by 4 floats to have aligned memory accesses. Then on GPU a full tetrahedron needs 96 bytes. Figure 5 proposes an example: for F 3 (made using the complement vertex V 3 and counterclockwise vertexes V 1 , V 0 and V 2 ), we can deduce that 2 gives the description of faces according to their vertices and edges, following face numbering presented in Figure 5(a). V 3 V 2 V 1 V 0 F 1 F 0 F 2 F 3 (a) V 3 F 3 Λ 2 Λ 0 Λ 1 ( Λ 0 = V 1 V 3 , Λ 1 = V 0 V 3 and Λ 2 = V 2 V 3 . Λ 0 = V 1 V 3 , Λ 1 = V 0 V 3 and Λ 2 = V 2 V 3 . Table F Vertexes Λ 0 Λ 1 Λ 2 0 {3, 1, 2} V 3 V 0 V 1 V 0 V 2 V 0 1 {2, 0, 3} V 2 V 1 V 0 V 1 V 3 V 1 2 {3, 0, 1} V 3 V 2 V 0 V 2 V 1 V 2 3 {1, 0, 2} V 1 V 3 V 0 V 3 V 2 V 3 Table 2: Complement edges of entry face F are implicitly by the face complement vertex (identified by and its vertices in counterclockwise order. Exiting the starting volume 1 assumes known the entry face. This condition is not fulfilled for the starting tetrahedron. Algorithm 1 must be adapted in that case. A simple solution lies in using a decision tree of depth 4, leading to three Plücker side products. One can settle this tree starting with any edge to discriminate between two faces, and so on with the children. Nevertheless, a simpler but equivalent solution exists. Once the root fixed, we have only three possible exit faces. This corresponds to Algorithm 1, as if the discarded face was the entry one. So, we just choose one edge to discard a face and then we call Algorithm 1 with the discarded exit face as the fake entry one. This leads to Algorithm 2. We naturally choose edge V 2 V 3 shared by faces F 0 and F 1 (Figure 5(a)). If the side product is negative, then we cannot exit through F 1 . Else, with a positive or null value, we cannot exit through F 0 . Thus, the starting tetrahedron problem is solved using three and only three side products. Require: T = {V i , F i } i∈[0...3] : Tetrahedron; Λ r : Ray; Ensure: F s : exit face; 1: side ← Λ r ⊙V 2 V 3 ; 2: f← side < 0; {f∈ {0, 1}} 3: return ExitTetra(F f , Λ r ); {Algorithm 1} Algorithm 2: Exit face search from the starting tetrahedron. Efficient side product Both Algorithm 1 and 2 use Plücker side products. A naive approach results in 23 operations per side product: to calculate Plücker coordinates, we need 3 subtractions for its direction and 6 multiplications and 3 subtractions for its moment. Then, product needs multiplications and 5 additions. The two side products in Algorithm 1 result in 46 operations. We propose a new method using less operations. It rests upon a coordinate system translation to the complement vertex V f of the entry face. In this local system, lines Λ i have a nil moment (since they contain the origin). So, side products are inner products of vectors having only 3 coordinates: each one needs 3 multiplications and 2 additions. 
Moreover, line directions are computed using 3 subtractions. Hence, such side products need only 8 operations. Nevertheless, we also need to modify Plücker coordinates of the ray r to obtain valid side products. Let us recall how a Plücker line is made. We compute its direction u using two points p and q on the line, and its moment v with p × q = p × u. In the local coordinates system, the new line coordinates must be calculated using translated points. The direction is obviously the same, only v is modified: v ′ = (p -V f ) × u = p × u -V f × u = v -V f × u. So, v ′ is calculated using 12 operations: 3 subtractions, 6 multiplications and 3 subtractions. This ray transformation is done once per tetrahedron, the local coordinates system being shared for all the lines Λ i . As a conclusion, the number of arithmetic operations involved in Algorithm 1 can be decreased from 46 to 28, saving about 40% of computations. EXPERIMENTS This section discusses some experiments made using our new traversal algorithm. Results Performance is evaluated using three objects tetrahedralized using Tetgen [START_REF] Si | TetGen, a Delaunay-Based Quality Tetrahedral Mesh Generator[END_REF]. Table 3 sums up their main characteristics and measured performance. The simplest object is constructed from a banana model, with 25k occlusive faces. The other two correspond to well-known Stanford's objects: BUNNY and ARMADILLO. Their CDT respectively count 200k and 1.1M occlusive faces. We use quality CDT, introducing new vertices into object models, explaining the high number of faces our three objects have. Performance is measured in millions of ray cast per second (Mrays/s) using ray casting, 1024 × 1024 pixels and no anti-aliasing. The used computer possesses an Intel R Core TM i7-4930K CPU @ 3.40Ghz, 32 Gb RAM and NVidia R GeForce R GTX 680. Algorithms are made parallel on CPU (OpenMP) and GPU (CUDA, with persistent threads [START_REF] Aila | Understanding the efficiency of ray traversal on GPUs -Kepler and Fermi addendum[END_REF]). On average, CPU ray casting reaches 9 Mrays/s, GPU version 280 Mrays/s. Traversal Closest ray/object intersection is found by traversing CDT one tetrahedron at a time until hitting an occlusive face. The ray traversal complexity is linear in the number of traversed tetrahedra. not strictly proportional, mainly due to memory accesses that become more important when more tetrahedra are traversed, leading to more memory cache defaults. False-colored image of point of view (B) reveals that rays going close to object boundary traverse more tetrahedra. Numerical robustness Using floating-point numbers can cause errors due to numerical instability. Tetgen uses geometric predicates (e.g. (Shewchuk, 1996) or [START_REF] Devillers | Efficient Exact Geometric Predicates for Delaunay Triangulations[END_REF]) to construct robust CDT. If this is common practice in algebraic geometry, it is not the case in rendering. Hence, it is too expensive to be used in CDT ray traversal. We experimented three methods proposed in (Lagae and Dutré, 2008) (ray/plane intersection tests, Plücker coordinates and STP), plus the method proposed in [START_REF] Marmitt | Fast Ray Traversal of Tetrahedral and Hexahedral Meshes for Direct Volume Rendering[END_REF]) (MS06) (Section 2.3.2). We noticed they all suffer from numerical errors either on CPU or GPU. Indeed, calculation are not enough precise with rather flat tetrahedra. 
Thus, without extra treatment (like moving the vertices) these algorithms may return a wrong exit face or do not find any face at all (no test is valid). view series. In contrast, we did not obtain wrong results using our method. It can be explained by the smaller number of performed arithmetic operations; less numerical errors accumulated, more accurate results. CPU results show that our method is much more efficient than former ones. This behavior is expected since our new method requires less arithmetic operations. STP is the fastest previous method, but is 83% slower than ours. Exit face search comparison On GPU, results are slightly different. For example, Plücker method is faster than STP. Indeed, even if it requires more operations, it does not add extra thread divergence. Hence, it is more adapted to GPU. Among the previous GPU methods, the most efficient is MS06, still 59% slower than ours. State-of-the-art comparison In [START_REF] Lagae | Accelerating Ray Tracing using Constrained Tetrahedralizations[END_REF], authors noticed that rendering using CDT as acceleration structure takes two to three more computation times than using kdtree. In this last section, we check if it is still the case using our new tetrahedron exit algorithm and on GPU. We compare our GPU ray-tracer with the state-of-the-art ray tracer [START_REF] Aila | Understanding the efficiency of ray traversal on GPUs -Kepler and Fermi addendum[END_REF], always using the same computer. Their acceleration structure is BVH, constructed using SAH (MacDonald and [START_REF] Booth | Heuristics for Ray Tracing Using Space Subdivision[END_REF] and split of large triangles [START_REF] Ernst | Early Split Clipping for Bounding Volume Hierarchies[END_REF]. To our knowledge, nowadays their implementation is the fastest GPU one. Table 6 sums up this comparison. Results show that CDT is still not a faster acceleration structure than classical ones (at least than BVH on GPU). First, the timings show larger amplitude using CDT than BVH. Moreover, while CDT is on average faster than BVH with BANANA and BUNNY models, it is no more true using ARMADILLO. This is directly linked to the traversal complexity of the two structures. BVH being built up following SAH, its performance is less impacted with the geometry input size, contrary to CDT where this size has a direct impact on performance. Clearly, a heuristics similar to SAH is missing for tetrahedralization. CONCLUSION This article proposes a new CDT ray traversal algorithm. It is based upon a specific tetrahedron representation, and fast Plücker side products. It uses less arithmetic operations than previous methods. Last but not least, it does not involve any conditional instructions, employing two and only two side products to exit a given tetrahedron. This algorithm exhibits several advantages compared to the previous ones. Firstly it is inherently faster, requiring less arithmetic operations. Secondly it is more adapted to parallel computing, since having a fixed number of operations it does not involve extra thread divergence. Finally, it is robust and works with 32-bits floats either on CPU or GPU. As future work, we plan to design a new construction heuristic, to obtain as fast to traverse as possible CDT. Indeed, CDT traversal speed highly depends on its construction. CDT traversal complexity is linear in the number of traversed tetrahedra: the less tra-versed tetrahedra, the more high performance. 
Before SAH introduction, the same problem existed with well-known acceleration structures like kd-tree and BVH, for which performance highly depends on the geometric model. Since CDT for ray-tracing is a recent method, we expect that similar heuristics exists. Figure 1 : 1 Figure 1: Delaunay triangulation: no vertex is inside a circumscribed circle. Examples of two non-PLC configurations: intersection between (a) two faces, (b) an edge and a face. Figure 3 : 3 Figure 3: CDT traversal overview: the main key of any CDT traversal algorithm lies in the "exit face search" part. Figure 4 : 4 Figure 4: Exit face search example: (a) ray r enters the tetrahedron through the back face; (b) r ⊙ Λ 2 < 0 and r ⊙ Λ 0 ≥ 0, so the exit face is identified by 0. s t r u c t F a c e { i n t b r d f ; / / -1: Non-O c c l u s i v e i n t t e t r a ; / / n e i g h b o r i n t f a c e ; / / n e i g h b o r i n t idV [ 3 ] ; / / f a c e v e r t i c e s } ; b) 5: Description of a tetrahedron: (a) vertices and faces numbering; (b) the complement vertex for F 3 is {V 3 }, and its edges are s t r u c t T e t r a h e d r o n { f l o a t 3 V [ 4 ] ; / / v e r t i c e s F a c e F [ 4 ] ; / / f a c e s } ; Figure 6 : 6 Figure 6: Rendering times on CPU in ms (T , red curve) and number of traversed tetrahedra in millions (Φ, gray bars) using 1,282 points of view and BUNNY; (A) T = 40.1 ms -Φ = 9.6; (B) T = 122 ms -Φ = 22.71. Table 4 : 4 Table4reports for each object the number of rays per image concerned by this problem, averaged over points of Numerical errors impact on GPU: number of rays suffering from wrong results for 1024 × 1024 pixels, and averaged over about 1, 300 points of view. BANANA BUNNY ARMADILLO Ray/plane 33.27 40.85 74.85 Plücker 3.6 22.25 412.13 STP 63.07 204.89 456.65 MS06 0.0007 0.004 0.422 Ours 0 0 0 Table 5 : 5 This section compares performance of our exit face search algorithm with the same 4 previous methods: ray/plane intersection tests, Plücker coordinates, STP and MS06 (Section 2.3.2). Statistics are summed up in Table5. Times are measured for 16,384 random rays stabbing 10,000 random tetrahedra, both on CPU (using one thread) and GPU. Exit face search comparison: time (in ms) to determine the exit face for 10,000 tetrahedra and 16,384 random rays per tetrahedron; on CPU (single thread) and on GPU. Method Time (ms) CPU GPU Ray/plane 15,623 36 Plücker 10,101 28 STP 4,876 29 MS06 5,994 21 Ours 2,663 13 Table 6 : 6 Performance comparison with[START_REF] Aila | Understanding the efficiency of ray traversal on GPUs -Kepler and Fermi addendum[END_REF], in number of frames per second. CDT BVH (Aila et al., 2012) BANANA 315-947 200-260 BUNNY 130-1040 160-260 ARMADILLO 82-160 130-260
27,403
[ "7905", "7913", "6203" ]
[ "444300", "444300", "444300" ]
01486607
en
[ "spi" ]
2024/03/04 23:41:48
2009
https://hal.science/hal-01486607/file/doc00026658.pdf
ISOLATED VS COORDINATED RAMP METERING STRATEGIES: FIELD EVALUATION RESULTS IN FRANCE INTRODUCTION Severe traffic congestion is the daily lot of drivers using the motorway network, especially in and around major cities and built-up areas. On intercity motorways, this is due to heavy traffic during holiday weekends when many people leave the cities at the same time, or to accidents or exceptional weather conditions. In the cities themselves, congestion is a recurrent problem. The control measures which are produced in a coordinated way to improve traffic performance include signal control, ramp metering and route guidance. With respect to the ramp metering techniques, one successful approach, for example, is the ALINEA strategy [START_REF] Haj-Salem | ALINEA -A Local Feedback Control Law for on-ramp metering: A real life study[END_REF][START_REF] Haj-Salem | Ramp Metering Impact on Urban Corridor Traffic : Field Results[END_REF][START_REF] Papageorgiou | ALINEA: A Local Feedback Control Law for on-ramp metering[END_REF] which maintains locally the density on the carriage way around the critical value. Nevertheless, due to the synergetic effect of all metered on-ramps (they interact on each other at different time scale) the coordinated strategy could be more efficient than a local strategy. In this paper, some field trials, conducted in the southern part of Ile de France motorway in Paris are presented. Field trials have been design and executed over a period of several months in the aim of investigating the traffic impact of ramp metering measures. More specifically, the field trials, reported in this paper, include a comprehensive data collection from the considered network (A6W motorway) over several weeks with isolated and coordinated ramp metering strategies. The main objectives of the field trials were the development, the test and the evaluation of the traffic impact of new isolated and coordinated strategies. This paper is organized as follows: section 2 is dedicated to the test site description. Section 3 concerns the brief description of the candidate strategies. The last section 4 is focused on the description of the used criterion on one hand and the other hand the field results analysis. FIELD TEST DESCRIPTION The traffic management of "Ile de France" motorway network is under both main authorities: the Paris City "Ville de Paris" authority operates the Paris inner urban network and the ring way and the DIRIF "Direction interdépartementale de la Région d'Ile de France" authority operates the motorway network around Paris city (A1 to A13). The DIRIF motorway network covers around 700 km including A1 to A13 motorways. Since 1988, DIRIF has launched a project called "SIRIUS: Service d'Information pour un Réseau Intelligible aux USagers" aiming at optimising the traffic conditions on the overall "Ile de France" motorway network in terms of real-time traffic control strategies such as ramp metering, automatic incident detection, speed control, lane assignment, traffic user's information/guidance (travel time display) etc.). The particular motorway network considered in this field evaluation study is in the southern part of the Ile de France motorway network (A6W, figure 1). The considered site is one among the most critical areas of the Ile de France motorway network. The total length covers around 20 km including several on/off ramps. Morning and evening peak congestions extend over several hours and several kilometres. 
A recurrent congestion in the morning peak period typically starts around the on ramp Chilly and it spreads subsequently over several kilometres on A6W motorway axis. The considered motorway axis is fully equipped with measurement stations. The field test covers around 20 km length and includes 33 measurements stations (loop detectors) available on the carriageway, located around 500 m from each other. Each measurement station provides traffic volume, occupancy and speed measurements. The on-ramps and off-ramps are fully equipped also. In particular at each on-ramp, tow measurement stations are installed: the first one is located at the nose of the ramp behind the signal light which used for the realised onramp volume measurements and the second at the top of the on-ramp which used for the activation of the override tactic when the control is applied. CANDIDATE STRATEGY DESCRIPTIONS The implemented strategies are the following: 1. No control 2. ALINEA 3. VC_ALINEA (Variable Cycle ALINEA) 4. Coordination (CORDIN) ALINEA strategy ALINEA is based on a feedback philosophy and the control law is the following: cycle k is found to be lower (higher) than the desired occupancy O * , the second term of the right hand side of the equation becomes positive (negative) and the ordered on-ramp volume r k is increased (decreased) as compared to its last value r k-1 . Clearly, the feedback law acts in the same way both for congested and for light traffic (no switchings are necessary). r r K O O k k k = + - - 1 ( ) VC_ALINEA Strategy The basic philosophy of Variable Cycle ALINEA (VC_ALINEA) is the computation of the split as control variable instead of the green duration. The main objective of VC_ALINEA is to apply different cycles with respect to the on-ramp traffic demand and the traffic conditions. The split is defined as: α = G/C, where G is the green duration, C is the cycle duration. The VC_ALINEA control law is derived from ALINEA and has the following form: α(k) = α(k-1) + K'[Ô-O out (k)] Basically, the derivation of VC_ALINEA control law (see EURAMP Deliverable D3.1) consists to convert the computed ALINEA on-ramp volume r(k) in green (or flashing amber) duration. This conversion is based on the measurement of the maximum on-ramp flow (q sat ). In case of ALINEA, the calculated green time is constrained by the minimum and the Maximum green. Similarly, the split variable as a control law (α) is constrained by two limits also: the maximum cycle C M duration and the minimum cycle duration C m . This means that α is varying between α min and α max where α min = G m / C m α max = G M / C M Where: G m and G M are the fixed minimum green and maximum green durations respectively. C m and C M are respectively the Minimum and Maximum cycle duration: With sat k k q G r = we have: ( ) out k sat R k k o ô q K G G 1 1 - - - + = (1) G k : Calculated Green duration. q sat : Maximum output flow on the ramp. 
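To make the implementation of these laws concrete, a minimal sketch of one ALINEA update and of the conversion of the ordered volume into a green time is given below; the function and variable names are illustrative, and the gain and bounds are site-specific parameters rather than the deployed values.

#include <algorithm>

// One ALINEA step: feedback on the measured downstream occupancy O_out,
// with desired (critical) occupancy O_hat and regulation gain K_R.
float alinea_step(float r_prev, float O_out, float O_hat, float K_R) {
    return r_prev + K_R * (O_hat - O_out);   // ordered on-ramp volume r_k
}

// Conversion of the ordered volume into a green duration through the maximum
// on-ramp flow q_sat (r_k = G_k * q_sat), bounded by the minimum and maximum
// green times as in the field implementation. Consistent units are assumed.
float green_from_volume(float r, float q_sat, float G_min, float G_max) {
    float G = r / q_sat;
    return std::max(G_min, std::min(G, G_max));
}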
Dividing equation ( 1) by C k , we obtain the following VC_ALINEA control law: ( ) out k k sat R k k o ô C q K 1 1 - - - + = α α (2) The range of control variable α is defined by: In a fluid condition: ( ) ( )           + + = +       - = ⇔       = + + = ⇔ ≥ min min min min 1 R A G C R A G R R R A G G thr α α α α α And, in a congested condition: ( )           = = ⇔           + - = = ⇔ < α α α α min min min min min G C G G G A G R G G thr Coordinated strategy (CORDIN) The main philosophy of CORDIN strategy is to use the storage capacities of the upstream onramps in case of apparition of downstream congestion of the controlled on-ramp. Under critical on-ramp queue constraint, an anticipation of the control is applied at the upstream onramps of the head of the congestion. This means that the level of the traffic improvement in case of the application of CORDIN strategy is much related to the geometry of each on-ramp and particularly to the storage capacity. CORDIN is a based rule coordinated strategy using ALINEA strategy first and anticipating the control action. It consists in the following steps: 1. Application of ALINEA to all controlled on-ramps -> control sets U al . 2. Find the location of the head of the congestion by testing if the first on-ramp (r i ) where ALINEA is active (O i > 0.9 Ô i , cr ) and the queue constraint not active. 3. For every upstream on-ramp r up = r i +1, .., Nb_Ramps: if the queue constraint of the onramp (r up ) is NOT active then correction of the ALINEA command according to U coor = α 1 U al if r up = r i +1 and U coor = α 2 U al for the other upstream ramps, where (α 1 ) and (α 2 ) are parameters to be calibrated; otherwise do nothing. 4. Application of the new coordinated control sets on the field 5. Wait the next cycle time 6. Go to step 1. EVALUATION RESULTS Available data The different strategies have been applied in weekly alternation ALINEA, VC_ALINEA, CORDIN and no control respectively over the period from the middle of September 2006, until the end of January, 2007, and to perform subsequently, comparative assessments of the traffic impact. Full 140 days of collected data were stored in the SIRIUS database. Screening the collected data was firstly necessary in order to discard days which include major detector failures. Secondly, all days with atypical traffic patterns (essentially weekends and holidays) were discarded. Thirdly, in order to preserve the results comparability, all days including significant incidents or accidents (according to the incident files provided by the Police) were also left out. This screening procedure eventually delivered 11, 10, 11 and 9 days of data using No control, ALINEA, VC_ALINEA and CORDIN strategies respectively. In order to minimize the impact of demand variations on the comparative evaluation results, the selected days were averaged for each strategy. Assessments criteria The evaluation procedure was based on a computation of several criteria for assessing and comparing the efficiency of the ramp metering installation. These criteria were calculated for each simulation run. The horizon of the simulation is fixed to the overall period (5:00 -22:00), the morning peak period (6:00-12:00) and the evening period (17:00-21:00). The following quantitative criteria were considered for the evaluation of the control strategy: 1. The total time spent on the network (TTS) expressed in vh*h 2. The total number of run kilometres (TTD) expressed in vh*km 3. 
The mean speed (MS) expressed in Km/h 4. The travel time expressed in second from one origin to the main destination 5. Other environment criteria also were computed: -Fuel consumption (litres) [START_REF] Jurvillier | Simulation de temps de parcours et modèle de consommation sur une autoroute urbaine[END_REF] - The evaluation results were reported in the Deliverable D6.3 of EURAMP Project. In summary, the results obtained can be summarized as follows: -The VC_ALINEA seems to provide better results than ALINEA in term of the TTS index (12%). However, we observe that the TTD is decrease by 5% whereas for ALINEA, the TTD is decreases by 2% compared with the No control case. -The CORDIN strategy provides change of 12%, 0% and 11% for TTS, TTD and MS respectively compared with the No control case. -Figure 4 reports the congestion mapping of A6W and visually confirm these conclusions. -With respect to the Total Travel Time (TTT), figure 5 depicts the obtained results. The CORDIN strategy gives better results than the isolated strategies. As far as the travelled distance increases, the gain in term of travel times increase also. The maximum gain of 17 % is observed for CORDIN strategy. -The emission indices are decrease for all strategies. In particular, the gains of HC and CO indices are of -6%,-9% and -7% for ALINEA, VC_ALINEA and CORDIN respectively By considering the TTS and TTD costs hypothesis in France, the results of the cost benefit analysis, with regard to the investments and the maintenance of the ramp metering system, indicated a collective benefit per year (250 of working days) of 2.4M€, 2.44M€ and 3.5 M€ for ALINEA, VC_ALINEA and CORDIN respectively. Figure 1 . 1 Figure 1. Field test site * where r k and r k-1 are on-ramp volumes at discrete time periods k and k-1 respectively, O k is the measured downstream occupancy at discrete time k, O * is a pre-set desired occupancy value (typically O * is set equal to the critical occupancy) and K is a regulation parameter. The feedback law suggests a fairly plausible control behaviour: If the measured occupancy O k at Figure 3 3 Figure 3 depicts one example of the applied correction parameters (α 1, α 2 ) after a detection of the head of the congestion (MASTER on-ramp). Figure 3 : 3 Figure 3: Example of CORDIN parameters Pollutant emission of CO & Hydrocarbon (HC) expressed in kg (European project TR 1030, INRESPONSE, D91[START_REF] Ademe | Émission de Polluants et consommation liée à la circulation routière-Paramètres déterminant et méthodes de quantification, "connaître pour agir, guide et cahiers techniques[END_REF][START_REF] Ademe | Émission de Polluants et consommation liée à la circulation routière-Paramètres déterminant et méthodes de quantification, "connaître pour agir, guide et cahiers techniques[END_REF] Figure 4 : 4 Figure 4: Congestion mapping of the 4 strategies Figure 5 . 5 Figure 5. Gain = Fn(distance) of the candidate strategies CONCLUSIONS The obtained results of this field trial are leads the DIRIF authorities to generalize the implementation of the ramp metering technique to the overall motorway network. Renewal of ACCES_1 system is decided current 2007. The new system is called ACCES_2 and it is implemented in SIRIUS current 2008. The DIRIF authorities decided at the first step, to test and evaluated the ALINEA strategy on the East part of the Ile de France motorway network including 22 on-ramps. The second step consists to the extension of the generalization of ALINEA to 150 others existing on-ramps. 
The last step will be the implementation of the CORDIN strategy.
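As a companion illustration of the rule-based coordination described in Section 3.3, one CORDIN cycle can be sketched as follows; the ramp ordering, the queue-constraint flag and the values of the correction parameters are placeholders rather than the deployed implementation.

#include <vector>

struct Ramp {
    float occupancy;        // measured downstream occupancy
    float critical_occ;     // critical occupancy Ô
    bool  queue_full;       // true if the maximum-queue constraint is active
    float u_alinea;         // command computed by ALINEA for this cycle
    float u_coordinated;    // command actually applied on the field
};

// One CORDIN cycle over ramps ordered from downstream (index 0) to upstream.
void cordin_cycle(std::vector<Ramp>& ramps, float alpha1, float alpha2) {
    // Steps 1-2: ALINEA commands are assumed already computed; locate the head
    // of the congestion, i.e. the first ramp where ALINEA is active
    // (occupancy above 0.9 * critical) and the queue constraint is not active.
    int head = -1;
    for (int i = 0; i < (int)ramps.size(); ++i) {
        if (ramps[i].occupancy > 0.9f * ramps[i].critical_occ && !ramps[i].queue_full) {
            head = i;
            break;
        }
    }
    // Step 3: anticipate the control on the ramps upstream of the congestion head.
    for (int i = 0; i < (int)ramps.size(); ++i) {
        float u = ramps[i].u_alinea;
        if (head >= 0 && i > head && !ramps[i].queue_full)
            u *= (i == head + 1) ? alpha1 : alpha2;
        ramps[i].u_coordinated = u;   // Step 4: applied on the field
    }
}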
13,620
[ "1278852" ]
[ "81038", "520615", "81038" ]
01486698
en
[ "info" ]
2024/03/04 23:41:48
2016
https://theses.hal.science/tel-01486698/file/ZUBIAGA_PENA_CARLOS_JORGE_2016.pdf
Carlos Jorge Zubiaga Peña Keywords: Appearance, shading, pre-filtered environment map, MatCap, Compositing Apparence, ombrage, cartes d'environnement pré-flitrées, MatCap, Compositing Traditional artists paint directly on a canvas and create plausible appearances of real-world scenes. In contrast, Computer Graphics artists define objects on a virtual scene (3D meshes, materials and light sources), and use complex algorithms (rendering) to reproduce their appearance. On the one hand, painting techniques permit to freely define appearance. On the other hand, rendering techniques permit to modify separately and dynamically the different elements that compose the scene. In this thesis we present a middle-ground approach to manipulate appearance. We offer 3D-like manipulation abilities while working on the 2D space. We first study the impact on shading of materials as band-pass filters of lighting. We present a small set of local statistical relationships between material/lighting and shading. These relationships are used to mimic modifications on material or lighting from an artist-created image of a sphere. Techniques known as LitSpheres/MatCaps use these kinds of images to transfer their appearance to arbitrary-shaped objects. Our technique proves the possibility to mimic 3D-like modifications of light and material from an input artwork in 2D. We present a different technique to modify the third element involved on the visual appearance of an object: its geometry. In this case we use as input rendered images alongside with 3D information of the scene output in so-called auxiliary buffers. We are able to recover geometry-independent shading for each object surface, assuming no spatial variations for each recovered surface. The recovered shading can be used to modify arbitrarily the local shape of the object interactively without the need to re-render the scene. Chapter 1 Introduction One of the main goals of image creation in Computer Graphics is to obtain a picture which conveys a specific appearance. We first introduce the general two approaches of image creation in the Section 1.1, either by directly painting the image in 2D or by rendering a 3D scene. We also present middle-ground approaches which work on 2D with images containing 3D information. It is important to note that our work will take place using this middleground approach. We define our goal in Section 1.2 as 'granting 3D-like control over image appearance in 2D space'. Our goal emerges from the limitations of existing techniques to manipulate 3D appearance in existing images in 2D. Painted images lack any kind of 3D information, while only partial geometric information can be output by rendering. In any case, the available information is not enough to fully control 3D appearance. Finally in Section 1.3 we present the contributions brought by the thesis. Context Image creation can be done using different techniques. They can be gathered into two main groups, depending if they work in the 2D image plane or in a 3D scene. On the one hand, traditional painting or the modern digital painting softwares work directly in 2D by assigning colors to a plane. On the other hand, artists create 3D scenes by defining and placing objects and light sources. Then the 3D scene is captured into an image by a rendering engine which simulates the process of taking a picture. There also exist techniques in between that use 3D information into 2D images to create or modify the colors of the image. 
Painting Traditional artists create images of observed or imagined real-world scenes by painting. These techniques are based on the deposition of colored paint onto a solid surface. Artists may use different kinds of pigments or paints, as well as different tools to apply them, from brushes to sprays or even body parts. Our perception of the depicted scene depends on intensity and color variations across the planar surface of the canvas. Generated images may be abstract or symbolic, but we are interested in the ones that can be considered as natural or realistic. Artists are capable to depict plausible appearances of the different elements that compose a scene. The complexity of reality is well captured by the design of object's shape and color. Artists achieve good impressions of a variety of materials under different lighting environment. This can be seen in Figure 1.1, where different object are shown ranging from organic nature to hand-crafted. Nowadays painting techniques have been integrated in computer system environments. Classical physical tools, like brushes or pigments, have been translated to digital ones (Figure 1.2). Moreover, digital systems provide a large set of useful techniques like the use of different layers, selections, simple shapes, etc. They also provide a set of image based operators that allow artists to manipulate color in a more complex way, like texturing, embossing or blurring. Despite the differences, both classical painting and modern digital systems share the idea of working directly in image space. Artists are able to depict appearances that look plausible, in a sense that they look real even if they would not be physically correct. Despite our perception of the painted objects as if they were or could be real, artist do not control physical processes. They just manipulate colors either by painting them or performing image based operations. They use variations of colors to represent objects made of different materials and how they would behave under a different illumination. The use of achromatic variations is called shading; it is used to convey volume or light source variations (Figure 1.3), as well as material effects. Shading may also correspond to variations of colors, so we can refer to shading in a colored or in a grey scale image. Carlos Jorge Zubiaga Peña In real life, perceived color variations of an object are the result of the interaction between lighting and object material properties and shape. Despite the difficult understanding of these interactions, artists are able to give good impressions of materials and how they would look like under certain illumination conditions. However, once a digital painting is created it cannot be modified afterwards: shape, material, or lighting cannot be manipulated. Rendering Contrary to 2D techniques, computer graphics provide an environment where artists define a scene based on physical 3D elements and their properties. Artists manipulate objects and light sources, they control object's shape (Fig. 1.4b) and material (Fig. 1.4c) and the type of light sources (Fig. 1.4a), as well as their positions. When an artist is satisfied with the scene definition, he selects a point of view to take a snapshot of the scene and gets an image as a result. The creation of 2D images from a 3D scene is called rendering. Rendering engines are software frameworks that use light transport processes to shade a scene. 
The theory of light transport defines how light is emitted from the light sources, how it interacts with the different objects of the scene and finally how it is captured in a 2d plane. In practice, light rays are traced from the point of view, per pixel in the image. When the rays reach an object surface, rays are either reflected, transmitted or absorbed, see Figure 1.5a. Rays continue their path until they reach a light source or they disappear by absorption, loss of energy or a limited number of reflections/refractions. At the same time, rays can also be traced from the light sources. Rendering engines usually mix both techniques by tracing rays from both directions, as shown in Figure 1.6. Figure 1.6: Rays may be both traced from the eye or image plane as well as from the light sources. When a ray reaches an object surface it is reflected, transmitted or absorbed. Object geometry is defined by one or more 3D meshes composed of vertices, which form facets that describe the object surface. Vertices contain information about their 3D position, as well as other properties like their surface normal and tangent. The normal and tangent together describe a reference frame of the geometry at a local scale, which is commonly used in computer graphics to define how materials interact with lighting. This reference frame is used to define the interaction at a macroscopic level. In contrast, real-world interaction of light and a material at a microscopic level may turn out to be extremely complex. When a ray reaches a surface it can be scattered in any possible direction, rather than performing a perfect reflection. The way rays are scattered depends on the surface reflectance for opaque objects or the transmittance in the case of transparent or translucent objects. Materials are usually defined by analytical models with a few parameters; the control of those parameters allows artists to achieve a wide range of object appearances. Manipulation of all the 3D properties of light, geometry and material allows artists to create images close to real-world appearances. Nevertheless, artists usually tweak images by manipulating shading in 2D until they reach the desired appearance. Those modifications are usually done for artistic reasons that require the avoidance of physically-based restrictions of the rendering engines, which make difficult to obtain a specific result. Artists usually start from the rendering engine output, from which they work to get their imagined desired image. Carlos Jorge Zubiaga Peña Compositing Shading can be separated into components depending on the effects of the material. Commonly we can treat independently shading coming from the diffuse or the specular reflections (see Figure 1.5b), as well as from the transparent/translucent or the emission effects. Therefore, rendering engines can outputs images of the different components of shading independently. In the post-processing stage, called compositing, those images are combined to obtain the final image, as shown in Figure 1.7. In parallel with shading images, rendering engines have the capacity to output auxiliary buffers containing 3D information. In general, one can output any kind of 3D information, by assigning them to the projected surface of objects in the image. Usually those buffers are used to test errors for debugging, but they can be used as well to perform shading modifications in post-process. 
They can be used to guide modifications of shading: for instance, positions or depth information are useful to add fog or create focusing effects like depth of fields. Auxiliary buffers may also be used to add shading locally. Having information about positions, normals and surface reflectance are enough to create new shading by adding local light sources. This is similar to a technique called Deferred Shading used in interactive rendering engines. It is based on a multi-pass pipeline, where the first pass produces the necessary auxiliary buffers and the other passes produce shading by adding the contribution of a discrete set of lights, as is shown in Figure 1.8. Instead of computing shading at each pass we can pre-compute it, if we only consider distant illumination. Distant illumination means that there is no spatial variation on the incoming lighting, therefore it only depends on the surface normal orientation. Thanks to this approximation we only need surface normals to shade an object (usually they are used projected in screen space). Typically, pre-computed shading values per hemisphere direction are given by filtering the values of the environment lighting using the material reflectance properties. These techniques are referred by the name pre-filtered environment maps or PEM (see Chapter 2, Section 2.3). Different material appearances are obtained by using different material filters, as seen in Figure 1.9a. Pre-computed values are stored in spherical structures that can be easily accessed, shading is obtained by fetching using normal buffers. Instead of filtering an environment map, pre-computed shading values may also be created by painting or obtained from images (photographs or artwork). A well known technique, call the LitSphere, defines how to fill colors on a sphere from a picture and then use this sphere to shade an object, similarly to pre-filtered environment map techniques. The idea of LitSphere it's been extensively used in sculpting software where it takes the name of MatCap (see Figure 1.9b), as shorthand of Material Capture. MatCaps depict plausible materials under an arbitrary environment lighting. In the thesis we decided to use MatCaps instead of LitSpheres to avoid misunderstanding with non photo-realistic shading, like cartoon shading. Despite the limitations of distant lighting (no visibility effects like shadows or inter-reflections), they create convincing shading appearances. Summary On the one hand, painting techniques permit direct manipulation of shading with no restrictions, allowing artists to achieve the specific appearance they desire. In contrast, artists cannot manipulate dynamically the elements represented (object shape and material) and how they are lit. On the other hand, global illumination rendering engines are based on a complete control of the 3D scene and a final costly render. Despite the complete set of tools provided to manipulate a scene before rendering, artists commonly modify the rendering output in post-processing using image-based techniques similar to digital painting. Postprocess modifications permit to avoid the physically based restrictions of the light transport algorithms. As a middle-ground approach between the direct and static painting techniques and the dynamically controlled but physically-based restricted render engines, we find techniques which work in 2D and make use of 3D information in images or buffers. Those techniques may be used in post-process stage called compositing. 
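To fix ideas, the following minimal sketch (in Python with NumPy; the toy environment, the lobe exponent and the resolutions are illustrative choices of ours, not values used in the thesis) pre-filters an analytic environment with a cosine-power lobe standing in for a glossy material, stores the result on a sphere of view-space normals, and then shades any normal buffer by a simple fetch, in the spirit of the pre-filtered environment map and MatCap techniques described above.

import numpy as np

# --- Sketch: pre-filter a toy environment into a MatCap-like sphere ------
def env(d):
    # Toy distant environment (vectorized): warm light towards +z, blue ambient.
    warm = np.array([1.0, 0.9, 0.7]) * np.maximum(d[..., 2:3], 0.0) ** 8
    return warm + np.array([0.05, 0.08, 0.15])

def prefilter(n, exponent=60, samples=2000, seed=0):
    # Convolve the environment with a cosine-power lobe around direction n:
    # the material acts as a low-pass filter of the incoming lighting.
    rng = np.random.default_rng(seed)
    d = rng.normal(size=(samples, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)       # uniform sphere directions
    w = np.maximum(d @ n, 0.0) ** exponent
    return (w[:, None] * env(d)).sum(axis=0) / (w.sum() + 1e-9)

# Store the filtered values on a sphere of view-space normals (a MatCap).
res = 32
matcap = np.zeros((res, res, 3))
ys, xs = np.mgrid[-1:1:res * 1j, -1:1:res * 1j]
for j in range(res):
    for i in range(res):
        x, y = xs[j, i], ys[j, i]
        if x * x + y * y <= 1.0:
            matcap[j, i] = prefilter(np.array([x, -y, np.sqrt(1 - x * x - y * y)]))

# Shading an object then reduces to fetching the sphere with its view-space
# normals, as done with deferred-shading normal buffers.
def shade(normals):
    u = np.clip(((normals[..., 0] * 0.5 + 0.5) * (res - 1)).astype(int), 0, res - 1)
    v = np.clip(((-normals[..., 1] * 0.5 + 0.5) * (res - 1)).astype(int), 0, res - 1)
    return matcap[v, u]

print(shade(np.array([[[0.0, 0.0, 1.0]]])))   # colour of a surface facing the viewer
# --- end of sketch --------------------------------------------------------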
Rendering engines can easily output image buffers with 3d properties like normal, positions or surface reflectance, which are usually called auxiliary buffers. Those buffers permit to generate or modify shading in ways different than digital painting, like the addition of local lighting or a few guided image operations (i.e. fog, re-texturing). Modifications of the original 3D properties (geometry, material or lighting) cannot be performed with a full modification on shading. A different way to employ auxiliary buffers is to use normal buffers alongside with pre-filtered environment maps or MatCaps/LitSpheres to shade objects. The geometry of the objects can be modified arbitrarily, but in contrast once pre-computed shading is defined, their depicted material and lighting cannot be modified. Real-time 2D manipulation of plausible 3D appearance Problem statement Problem statement Dynamic manipulation of appearance requires the control of three components: geometry, material and lighting. When we obtain an image independently of the way it has been created (painted or rendered) we lose the access to all components. Geometry is easily accessible, normal buffers may be generated by a rendering engine, but also may be obtained by scanners or estimated from images. Material are only accessible when we start from a 3D scene; the reflectance properties of the object can be projected to the image plane. Lighting in contrast is not accessible in any case. If we consider rendering engines, lighting is lost in the process of image creation. In the case of artwork, shading is created directly and information of lighting and materials is 'baked-in', therefore we do not have access to lighting or material separately. Lighting structure is arbitrary complex and the incoming lighting per surface point varies in both the spatial and the angular domain, in other words, it varies per position and normal. The storage of the full lighting configuration is impractical, as we would need to store a complete environment lighting per pixel. Moreover, in the ideal case that we would have access to the whole lighting, the modification of the material, geometry or lighting will require a costly re-rendering process. In that case there will not be an advantage compared to rendering engine frameworks. Our goal is to grant 3D-like control of image appearance in 2D space. We want to incorporate new tools to modify appearance in 2D using buffers containing 3D information. The objective is to be able to modify lighting, material and geometry in the image and obtain a plausible shading color. We develop our technique in 2 steps: first, we focus on the modification of light and material and then on the modification of geometry. We base our work on the hypothesis that angular variations due to material and lighting can be mimicked by applying modifications directly on shading without having to decouple material and lighting explicitly. For that purpose we use structures similar to pre-filtered environment maps, where shading is stored independently of geometry. In order to mimic material and lighting variations, we focus MatCaps. They are artistcreated images of spheres, which their shading depicts an unknown plausible material under an unknown environment lighting. We want to add dynamic control over lighting, like rotation, and also to the material, like modifications of reflectance color, increasing or decreasing of material roughness or controlling silhouette effects. 
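As a rough illustration of the kind of material edit we are after, blurring a MatCap mimics an increase of roughness, since a rougher material acts as a wider low-pass filter of the lighting; the actual method developed in Chapter 4 estimates and manipulates statistics of the depicted material rather than simply filtering the image. A minimal sketch with NumPy and SciPy, on a synthetic MatCap of our own:

import numpy as np
from scipy.ndimage import gaussian_filter

# --- Sketch: mimicking a rougher material by blurring a MatCap -----------
res = 128
ys, xs = np.mgrid[-1:1:res * 1j, -1:1:res * 1j]
inside = xs ** 2 + ys ** 2 <= 1.0

# Synthetic MatCap: a dim base plus a sharp highlight (a shiny material).
highlight = np.exp(-((xs - 0.3) ** 2 + (ys + 0.3) ** 2) / (2 * 0.03 ** 2))
matcap = (0.15 + 0.9 * highlight) * inside

# Image-space blur ~ wider BRDF lobe: the highlight spreads and dims, which
# reads as a rougher material under the same lighting (silhouette handling
# is ignored in this sketch).
rough = gaussian_filter(matcap, sigma=6) * inside

print(matcap.max(), rough.max())   # the peak drops as the lobe widens
# --- end of sketch --------------------------------------------------------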
In order to mimic geometry modifications, we focus on the compositing stage of the image creation process. Perturbations of normals (e.g. Bump mapping) is a common tool in computer graphics, but it is restricted to the rendering stage. We want to grant similar solutions of the compositing stage. In this stage several shading buffers are output by a global illumination rendering process and at the same time several auxiliary buffers are made available. Our goal in this scenario is to obtain a plausible shading for the modified normals without having to re-render the scene. The avoidance of re-rendering will permit to alter normals interactively. As described in the previous section, material reflectance, and as a consequence shading, can be considered as the addition of specular and diffuse components. Following this approach we may separate the manipulation of diffuse from specular reflections, which is similar to control differently low-frequency and high-frequency shading content. This approach can be considered in both cases, the MatCap and the compositing stage, see Figure 1.10. Meanwhile rendering engines can output both components separately, MatCaps will require a pre-process step to separate them. Carlos Jorge Zubiaga Peña Contributions The work is presented in three main chapters that capture the three contributions of the thesis. Chapter 3 present a local statistical analysis of the impact of lighting and material on shading. We introduce a statistical model to represent surface reflectance and we use it to derive statistical relationships between lighting/material and shading. At the end of the chapter we validate our study by analyzing measured materials using statistical measurements. In Chapter 4 we develop a technique which makes use of the statistical relationships to manipulate material and lighting in a simple scene: an artistic image of a sphere (MatCap). We show how to estimate a few statistical properties of the depicted material on the MatCap, by making assumptions on lighting. Then those properties are used to modify shading by mimicking modifications on lighting or material, see Figure 1.11. Contributions Chapter 5 introduces a technique to manipulate local geometry (normals) at the compositing stage; we obtain plausible diffuse and specular shading results for the modified normals. To this end, we recover a single-view pre-filtered environment map per surface and per shading component. Then we show how to use these recovered pre-filetered environment maps to obtain plausible shading when modifications on normals are performed, see Figure1.12. Figure 1.12: Starting from shading and auxiliary buffers, our goal is to obtain a plausible shading color when modifying normals at compositing stage. 10 Carlos Jorge Zubiaga Peña Chapter 2 Related Work We are interested in the manipulation of shading in existing images. For that purpose we first describe the principles of rendering, in order to understand how virtual images are created as the interaction of geometry, material and lighting (Section 2.1). Given an input image a direct solution to modify its appearance is to recover the depicted geometry, material and lighting. These set of techniques are called inverse rendering (Section 2.2). Recovered components can be modified afterwards and a new rendering can be done. Inverse rendering is limited as it requires assumptions on lighting and materials which forbids its use in general cases. 
These techniques are restricted to physically-based rendering or photographs and they are not well defined to work with artworks. Moreover, a posterior rendering would limit the interactivity of the modification process. To avoid this tedious process, we found interesting to explore techniques that work with an intermediate representation of shading. Pre-filtered environment maps store the results of the interaction between material and lighting independently to geometry (Section 2.3). These techniques have been proven useful to shade objects in interactive applications, assuming distant lighting. Unfortunately there is no technique which permits to modify lighting or material once PEM are created. Our work belongs to the domain of appearance manipulation. These techniques are based on the manipulation of shading without the restrictions of physically-based rendering (Section 2.4). However, the goal is to obtain images which appear plausible even if they are not physically correct. Therefore we also explore how the human visual system interprets shading (Section 2.5). We are interested into our capability to infer the former geometry, lighting and material form an image. Shading and reflectance We perceive objects by the light they reflect toward our eyes. The way objects reflect light depends on the material they are composed of. In the case of opaque objects it is described by their surface reflectance properties; incident light is considered either absorbed or reflected. Surface reflectance properties define how reflected light is distributed. In contrast, for transparent or translucent objects the light penetrates, scatters inside the object and eventually exists from a different point of the object surface. In computer graphics opaque object materials are defined by the Bidirectional Reflectance Distribution Functions (BRDF or f r ), introduced by Nicodemus [START_REF] Nicodemus | Directional reflectance and emissivity of an opaque surface[END_REF]. They are 4D functions of an incoming ω i and an outgoing direction ω o (e.g., light and view directions). The BRDF characterizes how much radiance is reflected in all lighting and viewing configurations, and may be considered as a black-box encapsulating light transport at a microscopic scale. Directions are classically parametrized by the spherical coordinates elevation θ and azimuth φ angles, according to the reference frame defined by the surface normal n and the tangent t as in Figure 2.1a. In order to guarantee a physically correct behavior a BRDF must follow the next three properties. It has to be positive f r (ω i , ω o ) ≥ 0. It must obey the Helmoth reciprocity: f r (ω i , ω o ) = f r (ω o , ω i ) (directions may be swapped without reflectance being changed). It must conserve energy ∀ω o , Ω f r (ω i , ω o ) cos θ i dω i ≤ 1, the reflected radiance must be equal to or less than the input radiance. Shading and reflectance n t ω o ω i φ i φ o θ o θ i (a) Classical parametrization n t h θ d θ h ω o φ d φ h ω i (b) Half-vector parametrization Different materials can be represented using different BRDFs as shown in Figure 2.2, which shows renderings of spheres made of five different materials in two different environment illuminations in orthographic view. These images have been obtained by computing the reflected radiance L o for every visible surface point x toward a pixel in the image. 
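The energy-conservation constraint listed above can be checked numerically for any analytical model. The sketch below (Python with NumPy; the normalized Phong lobe and the parameter values are standard illustrative choices, not taken from the thesis) estimates the directional albedo, i.e. the integral of f_r(w_i, w_o) cos(theta_i) over the hemisphere, by Monte Carlo sampling and verifies that it stays below 1:

import numpy as np

# --- Sketch: numerical check of energy conservation for analytic BRDFs ---
def lambert_brdf(wi, wo, albedo=0.8):
    # Matte (Lambertian) reflectance: constant, albedo / pi.
    return albedo / np.pi

def phong_brdf(wi, wo, ks=0.9, n=50):
    # Normalized Phong lobe around the mirror direction of wo (normal = +z).
    wr = np.array([-wo[0], -wo[1], wo[2]])
    return ks * (n + 2) / (2 * np.pi) * max(np.dot(wi, wr), 0.0) ** n

def directional_albedo(brdf, wo, samples=100000, seed=0):
    # Monte Carlo estimate of the hemispherical integral of f_r * cos(theta_i),
    # with uniform hemisphere sampling (pdf = 1 / 2pi). Must be <= 1.
    rng = np.random.default_rng(seed)
    ct = rng.random(samples)                   # cos(theta_i), uniform in [0, 1]
    phi = 2.0 * np.pi * rng.random(samples)
    st = np.sqrt(1.0 - ct ** 2)
    wi = np.stack([st * np.cos(phi), st * np.sin(phi), ct], axis=1)
    vals = np.array([brdf(w, wo) * w[2] for w in wi])
    return vals.mean() * 2.0 * np.pi

wo = np.array([np.sin(0.6), 0.0, np.cos(0.6)])  # view at roughly 34 degrees
print(directional_albedo(lambert_brdf, wo))     # close to 0.8, the albedo
print(directional_albedo(phong_brdf, wo))       # below 0.9: energy is conserved
# --- end of sketch --------------------------------------------------------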
Traditionally L o is computed using the reflected radiance equation, as first introduced by Kajiya [Kaj86] : L o (x, ω o ) = Ω f r (x, ω o , ω i ) L i (x, ω i ) ω i • n dω i , (2.1) with L o and L i are the reflected and incoming radiance, x a surface point of interest, ω o and ω i the outgoing and ingoing directions, n the surface normal, f r the BRDF, and Ω the upper hemisphere. Thanks to the use of specialized devices (gonoireflectometers, imaging systems, etc.) we can measure real materials as the ratio of the reflected light from a discrete set of positions on the upper hemisphere. One of the most well-known databases of measured material is the MERL database [START_REF] Matusik | A data-driven reflectance model[END_REF]. This database holds 100 measured BRDFs and displays a wide diversity of material appearances. All BRDFs are isotropic, which means light and view directions may be rotated around the local surface normal with no incurring change in reflectance. When measuring materials we are limited by a certain choice of BRDFs among real-world materials. We are also limited by the resolution of the measurements: we only obtain a discretized number of samples, and the technology involved is subject to measurement errors. Lastly, measured BRDFs are difficult to modify as we do not have parameters to control them. The solution to those limitations has been the modeling of material reflectance properties using analytical functions. Analytical models have the goal to capture the different effects that a material can produce. The ideal extreme cases are represented by mirror and matte materials. On the one hand, mirror materials reflect radiance only in the reflection direction ω r = 2 (ω • n) n -ω. On the other hand, matte or lambertian materials reflect light in a uniform way over the whole hemisphere Ω. However, Real-world material are much more complex, they exhibit a composition of different types of reflection. Reflections vary from diffuse to mirror and 12 Carlos Jorge Zubiaga Peña therefore materials exhibit different aspects in terms of roughness or glossiness. Materials define the mean direction of the light reflection, it can be aligned with the reflected vector or be shifted like off-specular reflections or even reflect in the same direction (retro-reflections). Materials can also reproduce Fresnel effects which characterize variations on reflectance depending on the viewing elevation angle, making objects look brighter at grazing angles. Variations when varying the view around the surface normals are captured by anisotropic BRDFs. In contrast, isotropic BRDFs imply that reflections are invariant to variations of azimuthal angle of both ω o and ω i . BRDFs may be grouped by empirical models: they mimic reflections using simple formulation; or physically based models: they are based on physical theories. Commonly BRDFs are composed of several terms, we are interested in the main ones: a diffuse and a specular component. The diffuse term is usually characterized with a lambertian term, nevertheless there exist more complex models like Oren-Nayar [START_REF] Oren | Generalization of lambert's reflectance model[END_REF]. Regarding specular reflections, the first attempt to characterize them has been defined by Phong [START_REF] Bui Tuong | Illumination for computer generated pictures[END_REF]. It defines the BRDF as a cosine lobe function of the reflected view vector and the lighting direction, whose spread is controlled by a single parameter. 
It reproduces radially symmetric specular reflections and does not guarantee energy conservation. An extension of the Phong model has been done in the work of Lafortune et al. [START_REF] Eric Pf Lafortune | Non-linear approximation of reflectance functions[END_REF] which guarantees reciprocity and energy conservation. Moreover it is able to produce more effects like off-specular reflections, Fresnel effect or retro-reflection. Both models are based on the reflected view vector. Alternatively there is a better representation for BRDFs based on the half vector h = (ωo+ωi) ||ωo+ωi|| , and consequently the 'difference' vector, as the ingoing direction in a frame which the halfway vector is at the north pole, see Figure 2.1b. It has been formally described by Rusinkewicz [START_REF] Szymon | A new change of variables for efficient brdf representation[END_REF]. Specular or retro reflections are better defined in this parametrization as they are aligned to the transformed coordinate angles. Blinn-Phong [START_REF] James F Blinn | Models of light reflection for computer synthesized pictures[END_REF] redefined the Phong model by using the half vector instead of the reflected vector. The use of the half vector produces asymmetric reflections in contrast to the Phong model. Those model, Phong, Lafortune and Blinn-Phong are empirical based on cosine lobes. Another empirical model, which is commonly used, is the one defined by Ward [START_REF] Gregory | Measuring and modeling anisotropic reflection[END_REF]. This model uses the half vector and is based on Gaussian Lobes. It is designed to reproduce anisotropic reflections and to fit measured reflectance, as it was introduced alongside with a measuring device. The most common physically-based models are the ones who follow the micro-facet Real-time 2D manipulation of plausible 3D appearance theory. This theory assumes that a BRDF defined for a macroscopic level is composed by a set of micro-facets. The Torrance-Sparrow model [START_REF] Kenneth | Theory for off-specular reflection from roughened surfaces[END_REF] uses this theory by defining the BRDF as: f r (ω o , ω i ) = G(ω o , ω i , h)D(h)F (ω o , h) 4|ω o n||ω i n| , (2.2) where D is the Normals distributions, G is the Geometric attenuation and F is the Fresnel factor. The normal distribution function D defines the orientation distribution of the microfacets. Normal distributions often use Gaussian-like terms as Beckmann [START_REF] Beckmann | The scattering of electromagnetic waves from rough surfaces[END_REF], or other distributions like GGX [START_REF] Walter | Microfacet models for refraction through rough surfaces[END_REF]. The geometric attenuation G accounts for shadowing or masking of the micro-facets with respect to the light or the view. It defines the portion of the micro-facets that are blocked by their neighbor micro-facets for both the light and view directions. The Fresnel factor F gives the fraction of light that is reflected by each micro-facet, and is usually approximated by the Shlick approximation [START_REF] Schlick | An inexpensive brdf model for physically-based rendering[END_REF]. To understand how well real-world materials are simulated by analytical BRDFs we can fit the parameters of the latter to approximate the former. Ngan et al. [START_REF] Ngan | Experimental analysis of brdf models[END_REF] have conducted such an empirical study, using as input measured materials coming from the MERL database [START_REF] Matusik | A data-driven reflectance model[END_REF]. 
It shows that a certain number of measured BRDFs can be well fitted, but we still can differentiate them visually when rendered (even on a simple sphere) when comparing to real-world materials. The use of the reflected radiance equation alongside with the BRDF models tell us how to create virtual images using a forward pipeline. Instead we want to manipulate existing shading. Moreover we want those modifications to behave in a plausible way. The goal is to modify shading in image space as if we were modifying the components of the reflectance radiance equation: material, lighting or geometry. For that purpose we are interested in the impact of those components in shading. Inverse rendering An ideal solution to the manipulation of appearance from shading would be to apply inverserendering. It consists in the extraction of the rendering components: geometry, material reflectance and environment lighting, from an image. Once they are obtained they can be modified and then used to re-render the scene, until the desired appearance is reached. In our case we focus on the recovery of lighting and material reflectance assuming known geometry. Inverse rendering has been a long-standing goal in Computer Vision with no easy solution. This is because material reflectance and lighting properties are of high dimensionality, which makes their recovery from shading an under-constrained problem. Different combinations of lighting and BRDFs may obtain similar shading. The reflection of a sharp light on a rough material would be similar to a blurry light source reflected by a shiny material. At specific cases it is possible to recover material and/or lighting as described in [START_REF] Ramamoorthi | A signal-processing framework for inverse rendering[END_REF]. In the same paper the authors show that interactions between lighting and material can be described as a 2D spherical convolution where the material acts a lowpass filter of the incoming radiance. This approach requires the next assumptions: Convex curved object of uniform isotropic material lit by distant lighting. These assumptions make radiance dependent only on the surface orientation, different points with the same normal sharing the same illumination and BRDF. Therefore the reflectance radiance Equation (2.1) may be rewritten using a change of domain, by obtaining the rotation which transform the surface normal to the z direction. This rotations permits to easily transforms directions in local space to global space, as shown in Figure 2.3 for the 2D and the 3D case. They rewrite 14 Carlos Jorge Zubiaga Peña Related Work Equation (2.1) as a convolution in the spherical domain: L o (R, ω ′ o ) = Ω ′ fr (ω ′ i , ω ′ o ) L i (Rω ′ i )dω ′ i = Ω fr (R -1 ω i , ω ′ o ) L i (ω i )dω i = fr * L, where R is the rotation matrix which transforms the surface normal to the z direction. Directions ω o , ω i and the domain Ω are primed for the local space and not primed on the global space. fr indicates the BRDF with the cosine term encapsulated. The equation is rewritten as a convolution, denoted by * . Ramamoorthi et al. used the convolution approximation to study the reflected radiance equation in the frequency domain. For that purpose they use Fourier basis functions in the spherical domain, which correspond to Spherical Harmonics. They are able to recover lighting and BRDF from an object with these assumptions using spherical harmonics. 
Nevertheless this approach restricts the BRDF to be radially symmetric like: lambertian, Phong or re-parametrized micro-facets BRDF to the reflected view vector. Lombardi et al. [START_REF] Lombardi | Reflectance and natural illumination from a single image[END_REF] manage to recover both reflectance and lighting, albeit with a degraded quality compared to ground truth for the latter. They assume real-world natural illumination for the input images which permits to use statistics of natural images with a prior on low entropy on the illumination. The low entropy is based on the action of the BRDF as a bandpass filter causing blurring: they show how histogram entropy increase for different BRDFs. They recovered isotropic directional statistics BRDFs [START_REF] Nishino | Directional statistics-based reflectance model for isotropic bidirectional reflectance distribution functions[END_REF] which are defined by a set of hemispherical exponential power distributions. This kind of BRDF is made to represent the measured materials of the MERL database [START_REF] Matusik | A data-driven reflectance model[END_REF]. The reconstructed lighting environments exhibit artifacts (see Figure 2.4), but these are visible only when rerendering the object with a shinier material compared to the original one. In their work Lombardi et al. [START_REF] Lombardi | Reflectance and natural illumination from a single image[END_REF] compare to the previous work of Romeiro et al [START_REF] Romeiro | Blind reflectometry[END_REF]. The latter gets as input a rendered sphere under an unknown illumination and extracts a monochromatic BRDF. They do not extract the lighting environment which restricts its use to re-use the BRDF under a different environment and forbids the manipulation of the input image. Similar to the work of Lombardi they use priors on natural lighting, in this case they study statistics of a set of environment maps projected in the Haar wavelet (1) (2) (3) (a) (b) (c) (d) 16 Carlos Jorge Zubiaga Peña basis, see Figure 2.5. Those statistics are used to find the most likely reflectance under the studied distribution of probable illumination environments. The type of recovered BRDF is defined in a previous work of the same authors [START_REF] Romeiro | Passive reflectometry[END_REF]. That work recovers BRDFs using rendered spheres under a known environment map. They restrict BRDFs to be isotropic and they add a further symmetry around the incident plane, which permits to rewrite the BRDF as a 2D function instead of the general 4D function. Other methods that perform BRDF estimation always require a set of assumptions, such as single light sources or controlled lighting. The work from Jaroszkiewicz [START_REF] Jaroszkiewicz | Fast extraction of brdfs and material maps from images[END_REF] assumes a single point light. It extracts BRDFs from a digitally painted sphere using homomorphic factorization. Ghosh et al. [GCP + 09] uses controlled lighting based on spherical harmonics. This approach reconstructs spatially varying roughness and albedo of real objects. It employs 3D moments (in Cartesian space) up to order 2 to recover basic BRDF parameters from a few views. Aittala et al. [START_REF] Aittala | Practical svbrdf capture in the frequency domain[END_REF] employs planar Fourier lighting patterns projected using a consumer-level screen display. They recover Spatially Varying-BRDFs of planar objects. 
As far as we know there is no algorithm that works in a general case and extracts a manipulable BRDF alongside with the environment lighting. Moreover, as we are interested in the manipulation of appearance in an interactive manner, re-rendering methods are not suitable. A re-rendering process uses costly global illumination algorithms once material and lighting are recovered. In contrast, we expect that manipulation of shading does not require to decouple the different terms involved in the rendering equation. Therefore, we rather apply approximate but efficient modifications directly to shading, mimicking modifications of the light sources or the material reflectance. Moreover, all these methods work on photographs; in contrast we also want to manipulate artwork images. Pre-filtered lighting Pre-filtered environment maps [KVHS00] take an environment lighting map and convolve it with a filter defined by a material reflectance. The resulting values are used to shade arbitrary geometries in an interactive process, giving a good approximation of reflections. Distant lighting is assumed, consequently reflected radiance is independent of position. In the general case a pre-filtered environment would be a 5 dimensional function, depending on the outgoing direction ω o , and on the reference frame defined by the normal n and the tangent t. Nevertheless some dependencies can be dropped. Isotropic materials are independent of the tangent space. Radially symmetric BRDFs around either the normal (e.g. lambertian) or the reflected view vector (e.g. Phong) are 2 dimensional functions. When the pre-filtered environment maps are reduced to a 2 dimensional function they can be stored in a spherical maps. A common choice is the dual paraboloid map [START_REF] Heidrich | View-independent environment maps[END_REF], which is composed of a front and a back image with the z value given by 1/2-(x 2 +y 2 ). This method is efficient in terms of sampling and the introduced distortion is small, see Figure 2.6. Unfortunately effects dependent on the view direction, like Fresnel, cannot be captured in a single spherical representation as in the last mentioned technique. Nevertheless it can be added afterwards, and several pre-filtered environment maps can be combined with different Fresnel functions. A solution defined by Cabral et al. [START_REF] Cabral | Reflection space image based rendering[END_REF] constructs a spare set of viewdependent pre-filtered environment maps. Then, for a new viewpoint they dynamically create a view-dependent pre-filtered environment map by warping and interpolating precomputed environment maps. A single view-dependent pre-filtered environment map is useful when we want to have a non expensive rendering for a fixed view direction. Sloan et al. [START_REF] Sloan | The lit sphere: A model for capturing npr shading from art[END_REF] introduce a technique which creates shaded images of spheres from paintings, which can be used as Real-time 2D manipulation of plausible 3D appearance [START_REF] Ramamoorthi | An efficient representation for irradiance environment maps[END_REF] corresponding to the lowestfrequency modes of the illumination. It is proven that the resulting values differ on average 1% of the ground truth. For that purpose they project the environment lighting in the first 2 orders of spherical harmonics, which is faster than applying a convolution with a diffuse-like filter. 
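The nine-coefficient irradiance representation mentioned above, together with the quadratic-polynomial evaluation described just below, can be sketched in a few lines. In this illustrative example (Python with NumPy) the environment is a toy analytic sky of our own; the basis functions and the polynomial constants are the published ones of Ramamoorthi and Hanrahan, and a brute-force integral serves as a sanity check.

import numpy as np

# --- Sketch: 9-coefficient irradiance environment map --------------------
def sh_basis(d):
    # Real spherical harmonics up to order 2 (9 terms), d a unit vector.
    x, y, z = d
    return np.array([0.282095,
                     0.488603 * y, 0.488603 * z, 0.488603 * x,
                     1.092548 * x * y, 1.092548 * y * z,
                     0.315392 * (3.0 * z * z - 1.0),
                     1.092548 * x * z, 0.546274 * (x * x - y * y)])

def project_sh(env, samples=100000, seed=0):
    # Monte Carlo projection of the environment onto the 9 coefficients.
    rng = np.random.default_rng(seed)
    d = rng.normal(size=(samples, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return 4.0 * np.pi * np.mean([env(di) * sh_basis(di) for di in d], axis=0)

def irradiance(L, n):
    # Quadratic polynomial in the normal (published constants).
    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
    x, y, z = n
    return (c1 * L[8] * (x * x - y * y) + c3 * L[6] * z * z + c4 * L[0]
            - c5 * L[6]
            + 2.0 * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
            + 2.0 * c2 * (L[3] * x + L[1] * y + L[2] * z))

env = lambda d: max(d[2], 0.0)                 # toy sky: light from above
L = project_sh(env)
n = np.array([0.0, 0.0, 1.0])
print(irradiance(L, n))                        # low-frequency irradiance at n

# Sanity check: brute-force integral of env(w) max(n.w, 0) over the sphere.
rng = np.random.default_rng(1)
d = rng.normal(size=(100000, 3)); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(4.0 * np.pi * np.mean([env(di) * max(np.dot(n, di), 0.0) for di in d]))
# The two values agree within a few percent (2*pi/3, about 2.09, for this sky).
# --- end of sketch --------------------------------------------------------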
The diffuse color is obtained by evaluating a quadratic polynomial in Cartesian space using the surface normal. Pre-filtered lighting maps store appearance independently of geometry for distant lighting. This permits to easily give the same material appearance to any geometry, for a fixed material and environment lighting. As a drawback, when we want to modify the material or the environment lighting they need to be reconstructed, which forbids interactive appearance manipulation. In the case of the artwork techniques, LitSpheres/MatCaps are created for a single view, which forbids rotations as shading is tied to the camera. 18 Carlos Jorge Zubiaga Peña Appearance manipulation The rendering process is designed to be physically realistic. Nevertheless, sometimes we want to create images with a plausible appearance without caring about their physically correctness. There exist some techniques which permit different manipulations of appearance using different kinds of input, ranging form 3D scenes to 2D images. Those techniques reproduce visual cues of the physically-based image creation techniques but without being restricted by them. At the same time they take advantage of the inaccuracy of human visual system to distinguish plausible from accurate. Image-based material editing of Khan et al. [START_REF] Erum Arif Khan | Image-based material editing[END_REF] takes as input a single image in HDR of a single object and is able to change its material. They estimate both, the geometry of the object, and the environment lighting. Then estimated geometry and environment lighting are used alongside with a new BRDF to re-render the object. Geometry is recovered following the heuristic of darker is deeper. Environment lighting is reconstructed from the background. First the hole left by the object is filled with other pixels from the image, to preserve the image statistics. Then the image is extruded to form a hemisphere. The possible results range from modifications of glossiness, texturing of the object, replacement of the BRDF or even simulation of transparent or translucent objects, see Figure 2.8. The interactive reflection editing system of Ritschel et al. [START_REF] Ritschel | Interactive reflection editing[END_REF] makes use of a full 3D scene to directly displace reflections on top of object surfaces, see Figure 2.9. The method takes inspiration on paintings where it is common to see refections that would not be possible in real life, but we perceive them as plausible. To define the modified reflections the user define constraints consisting on the area where he wants the reflections and another area which defines the origin of the reflections. This technique allows users to move reflections, adapt their shape or modify refractions. The Surface Flows method [VBFG12] warps images using depth and normal buffers to create 3D shape impressions like reflections or texture effects. In this work they performed a differential analysis of the reflectance radiance Equation (2.1) in image space. From that differentiation of the equation they identify two kind of variations: a first order term related to texturing (variations on material) and a second order variation related to reflection (variations on lighting). Furthermore they use those variations to define empirical equations to deform pixels of an image following the derivatives of a depth buffer in the first case and the derivatives of a normal buffer in the second case. 
As a result they introduce a set of tools: addition of decal textures or reflections and shading using gradients or images (Fig. 2.10). The EnvyLight system [START_REF] Pellacini | envylight: an interface for editing natural illumination[END_REF] permits to make modifications on separable features of the environment lighting by selecting them from a 3D scene. Users make scribbles on rendered image of the scene to differentiate the parts that belong to a lighting feature from the ones that do not. The features can be diffuse reflections, highlights or shadows. The geometry of the zones containing the feature permits to divide the environment map on the features that affect those effect from the rest. The separation of the environment lighting permits to edit them separately as well as to make other modifications like: contrast, translation, blurring or sharpening, see Figure 2.11. Appearance manipulation techniques are designed to help artists achieve a desired appearance. To this end they might need to evade from physical constraints in which computer graphics is based. Nevertheless, obtained appearance might still remain plausible for the human eye. As artists know intuitively that the human visual system is not aware of how light physically interacts with objects. 20 Carlos Jorge Zubiaga Peña Visual perception Created or manipulated images are evaluated with respect to a 'reference' image (e.g. photograph, ground truth simulation). Measurements of Visual Quality consist in computing the perceived fidelity and similarity or the perceived difference between an image and the 'reference'. Traditionally numerical techniques like MAE (mean absolute error), MSE (mean square error), or similar have been used to measure signal fidelity in images. They are used because of their simplicity and because of their clear physical meaning. However, those metrics are not good descriptors of human visual perception. In the vast majority of cases human beings are the final consumer of images and we judge them based on our perception. Visual perception is an open domain of research which presents many challenging problems. In computer graphics perception is very useful when a certain appearance is desired, without relying completely on physical control. A survey of image quality metrics from traditional numeric to visual perception approaches is provided in [START_REF] Lin | Perceptual visual quality metrics: A survey[END_REF]. Real-time 2D manipulation of plausible 3D appearance 21 Ramanarayanan et al. [START_REF] Ganesh Ramanarayanan | Visual equivalence: towards a new standard for image fidelity[END_REF] have shown how the human visual system is not able to perceive certain image differences. They develop a new metric for measuring how we judge images as visually equivalent in terms of appearance. They prove that we are not mostly able to detect variations on environment lighting. Users judge the equivalence of two objects that can vary in terms of bumpiness or shininess, see Figure 2.12. Objects are rendered under transformations (blurring or warping) of the same environment lighting. The results prove that we judge images as equivalent, despite their visual difference. This limitation of the human visual system is used in computer graphics to design techniques of appearance manipulation, like shown in the previous section. Despite the tolerance of the human visual system to visual differences we are able to differentiate image properties of objects. 
To distinguish the material of an object we use visual cues like color, texture or glossiness. The latter is often defined as the achromatic component of the surface reflectance. In a BRDF, gloss is responsible for changes in the magnitude and spread of the specular highlight as well as the change in reflectance that occurs as light moves away from the normal toward grazing angles. Hunter [START_REF] Sewall | The measurement of appearance[END_REF] introduced six visual properties of gloss: specular gloss, sheen, luster, absence-of-bloom, distinctness-of-image and surface-uniformity. He suggests that, except for surface-uniformity, all of these visual properties may be connected to reflectance (i.e., BRDF) properties. There exists standard test methods for measuring some of these properties (such as ASTM D523, D430 or D4039). The measurements of Hunter as well as the standard methods are based on optical measurements of reflections. However, perceived glossiness does not have a linear relationships with physical measurements. The work of Pellacini [START_REF] Pellacini | Toward a psychophysically-based light reflection model for image synthesis[END_REF] re-parametrized the Ward model [START_REF] Gregory | Measuring and modeling anisotropic reflection[END_REF] As we have seen the perception of gloss has been largely studied [START_REF] Chadwick | The perception of gloss: a review[END_REF]. However, we believe that explicit connections between physical and visual properties of materials (independently of any standard or observer) remain to be established. Summary Work on visual perception shows how humans are tolerant to inaccuracies in images. The human visual system may perceive as plausible images with certain deviations from physically correctness. Nevertheless we are able to distinguish material appearance under different illuminations, despite the fact that we are not able to judge physical variations linearly. Manipulation appearance techniques take advantage of these limitations to alter images by overcoming physical restrictions on rendering while keeping results plausible. We pursue a similar approach when using techniques like pre-filtered environment maps, where shading is pre-computed as the interaction of lighting and material. We aim to manipulate dynamically geometry-independent stored shading (similar to pre-filtered environment maps) and be able to mimic variations on lighting and material within it. The use of these structures seems a good intermediate alternative to perform appearance modification in comparison to the generation of images using a classical rendering pipeline. Chapter 3 Statistical Analysis The lightness and color (shading) of an object are the main characteristics of its appearance. Shading is the result of the interaction between the lighting and the surface reflectance properties of the object. In computer graphics lighting-material interaction is guided by the reflected radiance equation [START_REF] James | The rendering equation[END_REF], explained in Section 2.1: L o (x, ω o ) = Ω f r (x, ω o , ω i ) L i (x, ω i ) ω i • n dω i , Models used in computer graphics that define reflectance properties of objects are not easily connected to their final appearance in the image. To get a better understanding we perform an analysis to identify and relate properties between shading on one side, and material reflectance and lighting on the other side. 
The analysis only considers opaque materials which are are well defined by BRDFs [START_REF] Nicodemus | Directional reflectance and emissivity of an opaque surface[END_REF], leaving outside of our work transparent or translucent materials. We consider uniform materials, thus we only study variations induced by the viewing direction. When a viewing direction is selected the BRDF is evaluated as 2D function, that we call a BRDF slice. In that situation the material acts a filter of the incoming lighting. Our goal is to characterize the visible effect of BRDFs, and how their filtering behavior impacts shading. For that purpose we perform an analysis based on statistical properties of the local light-material interaction. Specifically, we use moments as quantitative measures of a BRDF slice shape. Moments up to order can be used to obtain the classical mean and variance, and the energy as the zeroth moment. We use those statistical properties: energy, mean and variance to describe a BRDF slice model. In addition we make a few hypothesis on the BRDF slice shape to keep the model simple. Then, this model is used to develop a Fourier analysis where we find relationships on the energy, mean and variance between material/lighting and shading. Finally we use our moment-based approach to analyze measured BRDFs. We show in plots how statistical properties evolve as functions of the view direction. We can extract common tendencies of different characteristics across all materials. The results verifies our previous hypothesis and show correlations among mean and variance. This work have been published in the SPIE Electronic Imaging conference with the collaboration of Laurent Belcour, Carles Bosch and Adolfo Muñoz [ZBB + 15]. Specifically, Laurent Belcour has helped with the Fourier Analysis, meanwhile Carles Bosch has made fittings on the measured BRDF analysis. BRDF slices When we fix the view direction ω o at a surface point p a 4D BRDF f r is restricted to a 2D BRDF slice. We define it as scalar functions on a 2D hemispherical domain, which we write f rω o (ω i ) : Ω → R, where the underscore view direction ω o indicates that it is fixed, and R denotes reflectance. Intuitively, such BRDF slices may be seen as a filter applied to the environment illumination. We suggest that some statistical properties of this filter may be directly observable in images, and thus may constitute building blocks for material appearance. View-centered parametrization Instead of using a classical parametrization in terms of elevation and azimuth angles for Ω, we introduce a new view-centered parametrization with poles orthogonal to the view direction ω o , see Fig. 3.1. This parametrization is inspired by the fact that most of the energy of a BRDF slice is concentrated around the scattering plane spanning the view direction ω o and the normal n, then it minimize distortions around this plane. It is also convenient to our analysis. First, it permits to define a separable BRDF slice model, which is useful to perform the Fourier analysis separately per coordinate, see Section 3.2. Second, it enables the computation of statistical properties by avoiding periodical domains, see Section 3.3. Formally, we specify it by a mapping m : [-π 2 , π 2 ] 2 → Ω, given by: m(θ, φ) = (sin θ cos φ, sin φ, cos θ cos φ), (3.1) where φ is the angle made with the scattering plane, and θ the angle made with the normal in the scattering plane. θ i , φ i ) ∈ [-π 2 , π 2 ] 2 to a direction ω i ∈ Ω. 
(c) A 2D BRDF slice f rω o is directly defined in our parametrization through this angular mapping. The projection of a BRDF slice into our parametrization is then defined by: f rω o (θ i , φ i ) := f r (m(θ o , φ o ), m(θ i , φ i )), (3.2) where θ o , φ o and θ i , φ i are the coordinates of ω o and ω i respectively in our parametrization. In the following we consider only isotropic BRDF which are invariant to azimuthal view angle. This choice is coherent with the analysis of measured BRDFs as the MERL database only contains isotropic BRDFs. Then, BRDF slices of isotropic materials are only dependent on the viewing elevation angle θ o ; we denote them as f r θo . Statistical reflectance radiance model We define a BRDF slice model in our parametrization using statistical properties. This model will be useful to perform a statistical analysis to study the impact of material on shading. Specifically it allows us to derive a Fourier analysis in 1D that yields statistical relationships between shading and material/lighting. Our BRDF slice model is based on Gaussian lobes, which is a common choice to work with BRDFs. Gaussian functions are described by their mean µ and variance σ 2 . The mean µ describes the expected or the central value of the Gaussian distribution. BRDF Gaussian lobes centered on the reflected view vector or on the normal are commonly used in Computer Graphics. The variance σ 2 describes the spread of the function. In the case of BRDF Slices, variance may be seen as a representation of material roughness. Wider lobes are representative of rough or diffuse material, meanwhile narrow lobes represent shiny or specular materials. To define our BRDF slice model we have made a few assumptions by observing the measured materials from the MERL database. We have observed that BRDF slices of measured materials exhibits close to symmetric behavior around the scattering plane. Moreover, while BRDF slice lobes stay centered on this plane, the mean direction can vary, from the normal direction to the tangent plane direction, passing through the reflected view vector direction. These observations lead us to assume a perfect symmetry around the scattering plane. This assumption alongside with our view-centered parametrization allows us to define our model using a pair of 1D Gaussians. Therefore our BRDF slice model is defined as: f r θo (θ i , φ i ) = α(θ o ) g σ θ (θo) (θ i -µ θ (θ o )) g σ φ (θo) (φ i ) , (3.3) where g σ θ and g σ φ are normalized 1D Gaussians 1 of variance σ 2 θ for the θ axis and variance σ 2 φ for the φ axis of our parametrization. The Gaussian g σ θ is centered at µ θ , meanwhile the Gaussian g σ φ is centered at 0. The energy α is similar to the directional albedo (the ratio of the radiance which is reflected). But it differs in two ways: it does not take into account the cosine term of Equation (2.1), and is defined in our parametrization. As α is a ratio, it is bounded between 0 and 1, which guarantees energy conservation. A representation of the model is shown in Figure 3.2. Energy, mean and variance can be defined using statistical quantities called moments, which characterize the shape of a distribution f : µ k [f ] = ∞ -∞ x k f (x) dx, (3.5) where k is the moment order. In our case we use moments up to order 2 to define the meaningful characteristics: energy, mean and and variance. The energy α is the 0th order moment, which describes the integral value of the function. 
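The behaviour of this slice model is easy to verify numerically: each 1D Gaussian is rescaled to integrate to 1 on the domain (the scaling term A of Eq. (3.4), given in the footnote below), so integrating the model over [-pi/2, pi/2]^2 must return the energy alpha, and the first two central moments along theta must return mu_theta and sigma_theta squared. The sketch below does exactly that (Python with NumPy and the standard-library erf; the parameter values are arbitrary choices of ours):

import numpy as np
from math import erf, sqrt, pi, exp

# --- Sketch: numerical check of the BRDF slice model of Eq. (3.3) --------
def trunc_gauss(t, t0, sigma):
    # 1D Gaussian centred at t0, renormalized over [-pi/2, pi/2]
    # with the scaling term A of Eq. (3.4).
    A = sqrt(pi * sigma ** 2 / 2) * (erf((pi / 2 - t0) / sqrt(2 * sigma ** 2))
                                     - erf((-pi / 2 - t0) / sqrt(2 * sigma ** 2)))
    return exp(-(t - t0) ** 2 / (2 * sigma ** 2)) / A

def slice_model(theta_i, phi_i, alpha=0.8, mu=0.6, s_t=0.15, s_p=0.2):
    # Separable pair of 1D Gaussians, symmetric about the scattering plane.
    return alpha * trunc_gauss(theta_i, mu, s_t) * trunc_gauss(phi_i, 0.0, s_p)

ts = np.linspace(-pi / 2, pi / 2, 400)
dt = ts[1] - ts[0]
f = np.array([[slice_model(t, p) for t in ts] for p in ts])   # rows: phi, cols: theta
energy = f.sum() * dt * dt
mean_theta = (f.sum(axis=0) * ts).sum() * dt * dt / energy
var_theta = (f.sum(axis=0) * (ts - mean_theta) ** 2).sum() * dt * dt / energy
print(energy)       # ~0.8    -> alpha   (0th moment)
print(mean_theta)   # ~0.6    -> mu      (1st moment)
print(var_theta)    # ~0.0225 -> s_t^2   (2nd central moment)
# --- end of sketch --------------------------------------------------------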
The energy is required to be 1 to guarantee that f is a distribution, as moments are only defined for distribution functions. The mean µ is the 1st moment and the variance σ² is the 2nd central moment, where µ is used to center the distribution. We emphasize that this model does not ensure reciprocity, and shall thus not be used outside of this statistical analysis.
¹ Both Gaussians correspond to normal distributions that have been rescaled to guarantee energy conservation on our parametrization domain. The scaling term is given by:
A = \int_{-\pi/2 - t_0}^{\pi/2 - t_0} e^{-\frac{t^2}{2\sigma^2}}\,dt = \sqrt{\frac{\pi\sigma^2}{2}}\left[\operatorname{erf}\!\left(\frac{\pi/2 - t_0}{\sqrt{2\sigma^2}}\right) - \operatorname{erf}\!\left(\frac{-\pi/2 - t_0}{\sqrt{2\sigma^2}}\right)\right], (3.4)
where we have restricted the domain to [−π/2, π/2] and centered it on t_0. This accommodates both Gaussians, with t_0 = µ_θ(θ_o) in one case and t_0 = 0 in the other.

Fourier analysis
We conduct a local Fourier analysis that yields direct relationships between the energy, mean and variance of the reflected radiance and those of the BRDF and lighting, around a fixed view elevation.

Local Fourier analysis
Our analysis begins with a change of variable in Equation (2.1) using our parametrization. The analysis is performed in a local tangent frame for simplicity, with the domain of integration being the input space of m:
L_o(θ_o, φ_o) = ∫_{−π/2}^{π/2} ∫_{−π/2}^{π/2} f_r^θo(θ_i, φ_i) L_i(m(θ_i, φ_i)) cos θ_i cos²φ_i dθ_i dφ_i, (3.6)
where the 3rd coordinate of ω_i = m(θ_i, φ_i) (given by cos θ_i cos φ_i according to Equation (3.1)) stands for the cosine term in tangent space. Replacing f_r^θo with our BRDF slice model (Equation (3.3)) yields:
L_o(θ_o, φ_o) = α(θ_o) ∫_{−π/2}^{π/2} g_{σ_θ(θ_o)}(θ_i − µ_θ(θ_o)) ∫_{−π/2}^{π/2} g_{σ_φ(θ_o)}(φ_i) L_i(m(θ_i, φ_i)) cos θ_i cos²φ_i dφ_i dθ_i. (3.7)
Since our BRDF slice model is separable in θ_i and φ_i, we may pursue the study in either dimension independently. Let us focus on θ_i. If we fold the integral over φ_i and the cosine terms into
L_iφ(θ_i) = ∫_{−π/2}^{π/2} g_{σ_φ(θ_o)}(φ_i) L_i(m(θ_i, φ_i)) cos θ_i cos²φ_i dφ_i,
then Equation (3.7) turns into a 1D integral of the form:
L_o(θ_o, φ_o) = α(θ_o) ∫_{−π/2}^{π/2} g_{σ_θ(θ_o)}(θ_i − µ_θ(θ_o)) L_iφ(θ_i) dθ_i. (3.8)
Our next step is to approximate this 1D integral with a convolution. To this end, we make local approximations of our BRDF slice model in a 1D angular window around θ_o. We assume the energy and variance to be locally constant: α(θ_o + t) ≈ α(θ_o) and σ²_θ(θ_o + t) ≈ σ²_θ(θ_o). For the mean, we rather use a first-order approximation: µ_θ(θ_o + t) ≈ µ_θ(θ_o) + (dµ_θ/dt)|_{θ_o} t. As a result, L_o may be approximated by a 1D convolution of the form:
L_o(θ_o + t, φ_o) ≈ α (g_{σ_θ} * L_iφ)(θ_o + t), with t ∈ [−ε, +ε], (3.9)
where we have dropped the dependencies of both α and σ_θ on θ_o since they are assumed locally constant. In Fourier space, this convolution turns into the following product:
F[L_o](ξ) ≈ α F[g_{σ_θ}](ξ) F[L_iφ](ξ), (3.10)
where ξ is the Fourier variable corresponding to t. Note that the Fourier shifts e^{iξθ_o} due to the centering on θ_o cancel out, since they appear on both sides of the equation. Equation (3.9) bears similarities with previous work [DHS+05, RMB07], with the difference that our approach provides direct connections with moments thanks to our BRDF slice model.

Relationships between moments
An important property of moments is that they are directly related to the Fourier transform of a function f by [START_REF] Michael | Principles of statistics[END_REF]:
F[f](ξ) = Σ_k ((iξ)^k / k!) µ_k[f], (3.11)
where µ_k[f] is the k-th moment of f. We thus re-write Equation (3.10) as a product of moment expansions:
F[L_o](ξ) = α ( Σ_k ((iξ)^k / k!) µ_k[g_{σ_θ}] ) ( Σ_l ((iξ)^l / l!) µ_l[L_iφ] ). (3.12)
To establish relationships between moments, we extract the moments of F[L_o] using its own moment expansion. This is done by differentiating F[L_o] at ξ = 0 [Bul65]:
µ_0[L_o] = F[L_o](0), (3.13)
µ_1[L_o] = Im( dF[L_o]/dξ (0) ), (3.14)
µ_2[L_o] = −Re( d²F[L_o]/dξ² (0) ). (3.15)
Next, we expand Equation (3.12) and its derivatives at ξ = 0 and plug them into Equations (3.13) through (3.15):
µ_0[L_o] = α µ_0[g_{σ_θ}] µ_0[L_iφ], (3.16)
µ_1[L_o] = µ_1[g_{σ_θ}] + µ_1[L_iφ], (3.17)
µ_2[L_o] = µ_2[g_{σ_θ}] + µ_2[L_iφ] + 2 µ_1[g_{σ_θ}] µ_1[L_iφ]. (3.18)
Since g_{σ_θ} is normalized, µ_0[g_{σ_θ}] = 1. However, µ_0[L_o] ≠ 1 in the general case, and we must normalize moments of order 1 and 2 before going further. We write L̂_o = L_o / µ_0[L_o], which yields µ_k[L̂_o] = µ_k[L_o] / µ_0[L_o]. It can then easily be shown that Equations (3.17) and (3.18) remain valid after normalization. Lastly, we write the variance of L̂_o in terms of moments: Var[L̂_o] = µ_2[L̂_o] − µ_1²[L̂_o]. After carrying out simplifications, we get Var[L̂_o] = Var[ĝ_{σ_θ}] + Var[L̂_iφ]. Putting it all together, we obtain the following moment relationships for a given viewing elevation θ_o:
µ_0[L_o](θ_o) = α(θ_o) µ_0[L_iφ](θ_o), (3.19)
E[L̂_o](θ_o) = µ_θ(θ_o) + E[L̂_iφ](θ_o), (3.20)
Var[L̂_o](θ_o) = σ²_θ(θ_o) + Var[L̂_iφ](θ_o), (3.21)
where we have used E[ĝ_{σ_θ}](θ_o) = µ_θ(θ_o) and Var[ĝ_{σ_θ}](θ_o) = σ²_θ(θ_o). The reasoning is similar when studying the integral along φ_i, in which case we must define a term L_iθ analogous to L_iφ. We then obtain similar moment relationships, except that in this case E[ĝ_{σ_φ}] = 0, Var[ĝ_{σ_φ}] = σ²_φ, and L_iφ is replaced by L_iθ.

Measured material analysis
We compute statistical moments of BRDF slices up to order 2 (energy, mean and variance) on a set of measured materials from the MERL database. Moments are computed as functions of the viewing angle, which we call moment profiles. We show experimentally that such moment profiles are well approximated by parametric forms: a Hermite spline for the energy, a linear function for the mean, and a constant for the variance. Parametric forms for these functions are obtained through fitting, and we additionally show that the mean and variance statistics are correlated. On the implementation side, we have made use of BRDF Explorer [START_REF] Burley | BRDF Explorer[END_REF], which we extended to incorporate the computation of moments. Carles Bosch performed the fitting using Mathematica.

Moments of scalar functions
We analyze moments of BRDF slices of measured materials without making any hypothesis. We therefore use general tensors to capture the moments of a scalar distribution f:
µ_k[f] = ∫_X x ⊗ ⋯ ⊗ x (k factors) f(x) dx, (3.22)
where X is the domain of definition of f and ⊗ denotes a tensor product. As a result, a moment of order k is a tensor of dimension k + 1: a scalar at order 0, a vector at order 1, a matrix at order 2, etc. As for our BRDF slice model, we analyze moments up to order 2 to study the energy, mean and variance of the BRDF slices. The analysis nonetheless extends easily to higher orders: the 3rd- and 4th-order moments (skewness and kurtosis) are given in Appendix A.
Now, for a scalar distribution defined over a 2D domain, we write x = (x, y) and define:
µ_{n,m}[f] := E_f[x^n y^m] = ∫_X x^n y^m f(x, y) dx dy, (3.23)
where the order of the moment is given by n + m.

Choice of domain
The classical parametrization in terms of elevation and azimuth angles is not adapted to the computation of moments. Indeed, the periodicity of the azimuthal dimension is problematic because the integrands are anti-symmetric when the power involved in the computation of moments is odd, see Equation (3.23). This incompatibility is avoided when using our parametrization. The projected result of the BRDF slices using our view-centered parametrization is shown in Figure 3.3. A different solution to deal with the periodicity of the hemispherical domain would be to compute 3D moments using Cartesian coordinates, as done by Ghosh et al. [GCP + 09]. However, this would not only make the analysis harder (relying on 3D tensors), but it would also unnecessarily introduce distortions at grazing angles, where hemispherical and Euclidean distances differ markedly. An alternative would be to rely on statistics based on lighting elevation, as done by Havran et al. [START_REF] Havran | Statistical characterization of surface reflectance[END_REF] for the purpose of material characterization. Unfortunately, this approach is not adapted to our purpose since it reduces a priori the complexity of BRDFs by using a 1D analysis. Instead, we compute moments using a planar 2D parametrization that introduces as little distortion as possible for isotropic BRDFs.

BRDF slice components
Moments are only well-defined for unimodal functions; computed statistics are not meaningful for multimodal ones. In contrast to our BRDF slice model, we have observed that many BRDFs from the MERL database display multi-modal components, with a near-constant (i.e., close to Lambertian) diffuse component. We rely on a simple heuristic method to separate that diffuse component, leaving the rest of the data as a specular component. Such a perfect diffuse component can be extracted using a simple thresholding scheme: we sample the BRDF at a viewing elevation of 45 degrees and retrieve the minimum reflectance value. We then remove this constant from the data in order to obtain the specular component, on which we ground the remainder of our study. However, even after removing a Lambertian component from the BRDF data, some materials still show multi-modal behaviors. We simply remove these BRDFs from our set manually, leaving a total of 40 unimodal BRDFs². They still span a wide range of appearances, from rough to shiny materials.

Moment profiles
We compute 2D moments of the specular component of the projected BRDF slices (see Fig. 3.3) using a discretized version of Equation (3.23). Using moments up to order 2, we obtain a scalar value for the energy, a 2D vector for the mean and a 2 × 2 matrix for the variance. We show how these statistical properties vary as functions of the viewing elevation θ_o, which we call moment profiles. In practice, we sample the θ_o dimension uniformly in angles, each sample yielding a projected BRDF slice. We then use a Monte Carlo estimator to evaluate the 2D moments for each of these slices:
µ_{n,m}[f_r](θ_o) ≈ (π²/N) Σ_{i=1}^{N} θ_i^n φ_i^m f_r^θo(θ_i, φ_i), (3.24)
where x_i = (θ_i, φ_i) is the i-th randomly generated sample in the slice and N is the number of samples.
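A short sketch of this estimator (my own NumPy code, with a placeholder analytical slice standing in for a measured BRDF) shows how energy, mean and covariance are recovered from the raw moments of Equation (3.24):

    import numpy as np

    def raw_moments_2d(eval_slice, n_samples=200_000, seed=0):
        """Monte Carlo estimate of mu_{n,m} over the [-pi/2, pi/2]^2 domain (Eq. 3.24)."""
        rng = np.random.default_rng(seed)
        theta_i = rng.uniform(-np.pi / 2, np.pi / 2, n_samples)
        phi_i = rng.uniform(-np.pi / 2, np.pi / 2, n_samples)
        f = eval_slice(theta_i, phi_i)
        area = np.pi ** 2
        return lambda n, m: area * np.mean(theta_i ** n * phi_i ** m * f)

    # Placeholder slice: a Gaussian lobe with energy 0.8, mean (-0.4, 0), std (0.2, 0.25).
    mu = raw_moments_2d(lambda t, p: 0.8 / (2 * np.pi * 0.2 * 0.25)
                        * np.exp(-0.5 * ((t + 0.4) / 0.2) ** 2 - 0.5 * (p / 0.25) ** 2))
    energy = mu(0, 0)
    mean = np.array([mu(1, 0), mu(0, 1)]) / energy
    cov = (np.array([[mu(2, 0), mu(1, 1)],
                     [mu(1, 1), mu(0, 2)]]) / energy - np.outer(mean, mean))
    print(energy, mean, cov)   # energy ~0.8, mean ~(-0.4, 0), diag(cov) ~ (0.04, 0.0625)

The normalization by the 0th-order moment and the centering used for the covariance anticipate the definitions given in the next paragraphs.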
In the following, we present moment profiles computed at increasing orders, as shown in Figs. 3.4 and 3.5. For the sake of clarity, we omit the dependence on θ_o both for BRDF slices and for 2D moments.

Energy
As seen in these plots, the energy α stays below 1, which indicates that only a portion of the light is reflected. The profiles look mostly constant, except near grazing angles where they tend to increase. We show α profiles for the red channel only, for all our selected BRDFs.

Mean
For moments of order 1 and higher, we must normalize by the 0th-order moment in order to guarantee that f_r is a distribution, see Sec. 3.1.2. We thus write f̂_r = f_r / α. The coefficients are now given by µ_{n,m}[f̂_r] = E_{f̂_r}[θ_i^n φ_i^m] for n + m = 1. The profile for µ_θ := µ_{1,0} is shown in Fig. 3.4b: our selected BRDFs exhibit profiles with different slopes, with deviations from a line occurring toward grazing angles. In contrast, the profile of µ_φ := µ_{0,1}, shown in Fig. 3.5a, remains close to zero for all values of θ_o. This is coherent with the near-symmetry of the BRDF slice around the scattering plane.
² The 40 selected materials are: yellow-phenolic • yellow-matte-plastic • white-paint • white-marble • white-acrylic • violet-acrylic • two-layer-gold • tungsten-carbide • ss440 • specular-violet-phenolic • specular-green-phenolic • specular-blue-phenolic • specular-black-phenolic • silver-paint • silicon-nitrade • red-metallic-paint • pvc • pure-rubber • pearl-paint • nickel • neoprene-rubber • hematite • green-metallic-paint2 • green-metallic-paint • gold-paint • gold-metallic-paint3 • gold-metallic-paint2 • color-changing-paint3 • color-changing-paint2 • color-changing-paint1 • chrome • chrome-steel • brass • blue-metallic-paint2 • blue-metallic-paint • black-phenolic • black-obsidian • aventurnine • aluminium • alum-bronze

Co-variance
It is defined as the centralized moment matrix Σ of order 2, which consists of moments of f̂_r centered on its mean. In our case, since µ_{0,1} ≈ 0, the coefficients of the co-variance matrix may be written using a slightly simpler formula: Σ_{n,m}[f̂_r] = E_{f̂_r}[(θ_i − µ_θ)^n φ_i^m] for n + m = 2. This matrix characterizes how the BRDF slice is spread around its mean, with larger values in either dimension implying a larger spread. The profiles for the diagonal coefficients σ²_θ := Σ_{2,0} and σ²_φ := Σ_{0,2} are shown in Fig. 3.4c: our selected BRDFs exhibit profiles of different variances, with slight deviations from the average occurring toward grazing viewing angles. The off-diagonal coefficient Σ_{1,1} remains close to zero, as shown in Fig. 3.5b, again due to the near-symmetry of the BRDF slice.

Interim discussion
The plots exhibit common behaviors across all the selected BRDFs, whose causes we can study. First, the two moments that correspond to anti-symmetric functions in the φ_i dimension (m = 1), namely µ_{0,1} and Σ_{1,1}, exhibit close-to-null profiles. This strongly suggests that they are due to the near-symmetry of most isotropic BRDFs about the scattering plane (i.e., along φ_i), as seen in Fig. 3.3, and it supports the symmetry hypothesis made in the definition of our BRDF slice model. Second, values at incident view θ_o = 0 start at 0 for the mean and share the same value for σ²_θ and σ²_φ. The reason is that slices of isotropic BRDFs are nearly radially symmetric around the normal (the origin of our parametrization) at incidence. Lastly, all materials tend to exhibit deviations with respect to a simple profile toward grazing viewing angles.
This might be due to specific modes of reflectance, such as asperity scattering [START_REF] Koenderink | The secret of velvety skin[END_REF], coming in to reshape the BRDF slice. These three common behaviors are coherent with the further results obtained from the study of skewness and kurtosis presented in Appendix A.

Fitting & correlation
In order to better understand material behavior, we fit analytical functions to the moment profiles. We look for similarities within a same moment order, as well as correlations across different orders. Naturally, we focus less on fitting accuracy than on concision: a minimal set of parameters is necessary if we wish to compare profiles across many measured materials. Regarding color channels, we have tried fitting them separately, or fitting their average for each slice directly. We only report fits based on averages, since for our selected materials the differences between fits of different color channels proved negligible. Note that for the energy, the profiles of the color channels obviously differ; however, they are merely offset with respect to each other, so it is also reasonable to fit their average since we are mostly interested in the shape of the profile functions. Figure 3.6 shows the fitting results for the energy, mean and variance profiles, as detailed below: it shows computed profiles, fitted profiles, fitting errors and representative 'best', 'intermediate' and 'worst' fits. We introduce our choice of analytical function for each moment order in turn.

Energy
As seen in Fig. 3.6, the energy profiles α(θ_o) for our selection of BRDFs exhibit a constant behavior up to grazing angles, at which point they tend to increase progressively. We model this profile with α(θ_o) ≈ ᾱ(θ_o) = ᾱ_b + ᾱ_s(θ_o), where ᾱ_b represents the constant base energy and ᾱ_s is a Hermite spline that deals with the increase in energy. The Hermite spline itself is modeled with two knots defined by their angular positions θ_0, θ_1, energy values α(θ_0), α(θ_1) and slopes m_0, m_1 (see Fig. 3.7a). We use a non-linear optimization to fit these parameters to each energy profile, using θ_0 = 45 and θ_1 = 75 degrees as initial values and constraining m_0 = 0 to reproduce the constant profile far from grazing angles. We first fit knot positions independently for each material, which yields an average θ_0 of 35.3 degrees with a standard deviation of 14.8, and an average θ_1 of 71.5 degrees with a standard deviation of 5.2. Combined with the observation that α_1 > α_0 in all our materials, this suggests that all our materials exhibit an energy boost confined to grazing viewing angles. We then fit the same knot positions for all our materials; this yields θ_0 = 38.7 degrees and θ_1 = 69.9 degrees, which confirms the grazing energy boost tendency. A sketch of this fitting procedure is given below.
Figure 3.6: We fitted the moment profiles (from top to bottom: energy, mean and average variance) of the selected materials. Each column provides the computed moment profiles (a), the fitted profiles (b) and the corresponding fitting errors (c). The error of our fits is computed using both the Mean Absolute Error (MAE, in blue) and the Root Mean Square Error (RMSE, in purple). The small inset profiles correspond to the worst, intermediate and best fits.
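The sketch below (my own SciPy code) fits the constant-plus-Hermite energy model to a synthetic profile; the two-knot parametrization with a zero start slope is my reading of the description above, and the optimizer settings are illustrative only:

    import numpy as np
    from scipy.optimize import least_squares

    def hermite_energy(theta_o, alpha_b, alpha_1, theta_0, theta_1, m1):
        """alpha_b up to theta_0, then a cubic Hermite ramp reaching alpha_b + alpha_1 at theta_1.
        The ramp starts at value 0 with slope m0 = 0, so those basis terms vanish."""
        t = np.clip((theta_o - theta_0) / (theta_1 - theta_0), 0.0, 1.0)
        h01 = -2 * t ** 3 + 3 * t ** 2           # Hermite basis for the end value
        h11 = t ** 3 - t ** 2                    # Hermite basis for the end slope
        return alpha_b + h01 * alpha_1 + h11 * m1 * (theta_1 - theta_0)

    def fit_energy_profile(theta_samples, alpha_samples):
        residuals = lambda p: hermite_energy(theta_samples, *p) - alpha_samples
        p0 = [alpha_samples.min(), 0.1, np.radians(40), np.radians(75), 1.0]
        lb = [0.0, 0.0, np.radians(5), np.radians(55), 0.0]
        ub = [1.0, 1.0, np.radians(50), np.radians(89), 10.0]
        return least_squares(residuals, p0, bounds=(lb, ub)).x

    # Synthetic profile exhibiting a grazing-angle energy boost:
    theta = np.radians(np.linspace(0.0, 85.0, 86))
    alpha = 0.3 + 0.2 * np.clip((theta - np.radians(40)) / np.radians(30), 0.0, 1.0) ** 2
    print(fit_energy_profile(theta, alpha))      # base energy, boost, knot angles, end slope

The bounds keep the second knot at a larger angle than the first, mirroring the observation that the energy boost is confined to grazing angles.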
Mean
Concerning the mean profile µ_θ(θ_o), the vast majority of cases show a linear tendency, with slopes proportional to the specularity of the material. Moreover, all profile functions go through the origin, as previously observed in Sec. 3.3.4. This suggests that a linear fit µ_θ(θ_o) ≈ μ̄ θ_o is appropriate for representing this behavior. We fit the single slope parameter μ̄ using a least-squares optimization, which always leads to a negative value due to our choice of parametrization (e.g., the mirror direction is given by μ̄ = −1). It is interesting to observe that materials exhibit mean slopes nearly spanning the entire range from −1 to 0.

Variance
We have observed in Sec. 3.3.4 that σ²_θ(0) ≈ σ²_φ(0), which is due to radial symmetry at viewing incidence. Our data also reveals that the deviations from a constant behavior observed around grazing angles tend to increase for σ²_θ and decrease for σ²_φ. We thus choose to study the average variance using a constant profile, hence σ̄² ≈ (σ²_θ(θ_o) + σ²_φ(θ_o)) / 2. The constant parameter is obtained using a least-squares fit as before, with values ranging from 0 for a mirror to π²/12 for a Lambertian³. Once again, our materials exhibit a large range of average variances.
³ A Lambertian material corresponds to f_r^L = 1/π² in our angular parametrization, irrespective of the view direction. Since we are working on the closed space [−π/2, π/2]², the formulation of moments for a constant function is not the same as in an infinite domain: moments of a Lambertian BRDF are thus finite. We can simplify the expression of the average variance since the mean of a Lambertian is zero, µ_1[f_r^L] = 0, hence Cov[f_r^L] = µ_2[f_r^L]. Furthermore, the variances along θ_i and along φ_i are equal due to the symmetry of the integration space and integrands. The average variance ν[f_r^L] is thus equal to the variance along θ_i, leading to:
\nu[f_r^L] = \frac{1}{\pi^2} \int_{[-\pi/2,\,\pi/2]^2} \theta_i^2 \, d\theta_i \, d\phi_i = \frac{1}{\pi} \left[ \frac{\theta_i^3}{3} \right]_{-\pi/2}^{\pi/2} = \frac{\pi^2}{12}.

Correlation
Looking at Fig. 3.6b, one may observe a seeming correlation between the fitted mean slope μ̄ and the average variance σ̄²: the lower the variance, the steeper the mean. To investigate this potential correlation, we plot one parameter against the other in Fig. 3.7b, which indeed exhibits a correlation. We thus perform a least-squares fit with a quadratic function, using the parameters of a mirror material (μ̄ = −1, σ̄² = 0) and of a Lambertian one (μ̄ = 0, σ̄² = π²/12) as end-point constraints; a sketch of such a constrained fit is given below. We conjecture that this correlation is due to hemispherical clamping. Because of clamping, the distribution mean is slightly offset toward incidence compared to the distribution peak, and this effect is all the more pronounced for distributions of high variance: wider distributions are clamped early on relative to narrow ones.
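One way to carry out such an end-point-constrained quadratic fit is sketched below (my own NumPy code; the data pairs are placeholders, not the actual fitted material parameters):

    import numpy as np

    def fit_slope_vs_variance(sigma2, mu_slope):
        """Quadratic fit mu(x) = -1 + c1*x + c2*x^2, forced through the mirror and Lambertian end points."""
        x_L = np.pi ** 2 / 12                    # Lambertian average variance
        # The Lambertian constraint mu(x_L) = 0 gives c1 = (1 - c2*x_L^2)/x_L, leaving only c2 free:
        y = mu_slope + 1.0 - sigma2 / x_L        # what remains to be explained by the c2 term
        basis = sigma2 ** 2 - sigma2 * x_L       # coefficient of c2 after substitution
        c2 = np.dot(basis, y) / np.dot(basis, basis)
        c1 = (1.0 - c2 * x_L ** 2) / x_L
        return c1, c2

    # Placeholder (average variance, mean slope) pairs standing in for the fitted materials:
    sigma2 = np.array([0.05, 0.15, 0.35, 0.60, 0.75])
    mu_slope = np.array([-0.93, -0.80, -0.52, -0.22, -0.08])
    print(fit_slope_vs_variance(sigma2, mu_slope))

The mirror constraint fixes the constant term to −1, and the Lambertian constraint removes one more degree of freedom, so a single coefficient is obtained in closed form by least squares.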
Discussion
The relationships defined in Sec. 3.2.2 provide insights on how material influences shading locally, which is observable in pictures. Moreover, the conclusions of the analysis of measured BRDFs in Sec. 3.3.5 describe similar behaviors across different BRDFs. As an example, let us consider a distant environment illumination reflected off a sphere: in this case, each (θ_o, φ_o) pair corresponds to a point on the sphere surface. First, the energy relationship, Equation (3.19), tells us that the material color has a multiplicative effect on shading. In addition, the energy fitting describes this multiplication as a constant effect (ᾱ_b) with an additional boost (ᾱ_s) toward the silhouette. Second, Equation (3.20) acts as a warping that constantly increases toward the silhouette. Third, a constant blur is applied locally, as defined by Equation (3.21); it has a constant average behavior in both dimensions, σ̄². Both mean and variance are linked to material roughness, and their effects are correlated (Fig. 3.7b). It is significant that the only effect that depends on the φ direction is the blurring, which is consistent with the symmetry of BRDF slices around the scattering plane.
Figure 3.8 provides an illustration of the effects described above. We start from an ideal mirror BRDF in Fig. 3.8a, which exhibits extreme sharpness and warping of the environment toward the silhouette, as expected. We then study the effect of two BRDFs, specular-black-phenolic and pearl-paint, with renderings shown in Figures 3.8b and 3.8d. In both cases, we focus on three points increasingly closer to the silhouette (i.e., at increasingly grazing viewing elevations). Our analysis reveals that a BRDF directly acts as an image filter whose properties are governed by statistical moments. In Figures 3.8c and 3.8e, we show the means and co-variance ellipses (in blue) for both BRDFs at the three picked locations. The filters corresponding to the specular-black-phenolic BRDF remain close to the evaluation position (in red), and their spread is narrow, resulting in a small blur. In contrast, the filters corresponding to the pearl-paint BRDF exhibit a stronger blur and are offset toward the center of the sphere for increasing viewing angles. As a result, the warping due to the BRDF is less pronounced in this case, a subtle effect that illustrates the impact of the correlation between mean and variance.
Figure 3.8 (right, caption excerpt): The reflected environment is this time much more blurred and exhibits less warping. This is explained in (e) by the filtering characteristics of the BRDF: the filter is wide and offset toward the center of the sphere for locations closer to the silhouette. This confirms the mean/variance correlation that we have observed in our study.
Our statistical analysis has shown that, using a simple BRDF slice model based on energy, mean and variance, we can derive relationships between lighting/material and shading. Those relationships are observable in images: coloring induced by α, warping by µ and blurring by σ². Our BRDF slice model is coherent with isotropic BRDFs, as shown for the selected list of materials from the MERL database. As output of this study we obtain similar behaviors of the BRDF slices as functions of the viewing elevation angle, as well as a correlation between mean and variance.

Chapter 4: Dynamic Appearance Manipulation of MatCaps
Object appearance is the result of complex interactions between shape, lighting and material. Instead of defining those components and performing a rendering process afterwards, we rather intend to manipulate existing appearance by directly modifying the resulting shading colors. We have shown how two of these components, lighting and material, are related to the resulting shading. In this chapter we use the studied relationships between material/lighting and shading to mimic modifications of both material and lighting from existing shading. We focus our work on artwork inputs, as we find the artist's perspective interesting: plausible appearances are created by direct painting, instead of the tedious trial-and-error rendering process.
We use artistic images of spheres, which are created without having to specify material or lighting properties. The appearance described in these images is easily transferred to an arbitrarily-shaped object with a simple lookup based on screen-space normals. This approach was first introduced as the LitSphere technique by Sloan et al. [START_REF] Sloan | The lit sphere: A model for capturing npr shading from art[END_REF]. It is also known under the name 'MatCap' for rendering individual objects; typical applications include scientific illustration (e.g., in MeshLab and volumetric rendering [START_REF] Bruckner | Style transfer functions for illustrative volume rendering[END_REF]) and 3D sculpting (e.g., in ZBrush, MudBox or Modo). In this work we use the term 'MatCap' to refer to LitSphere images that convey plausible material properties, and we leave non-photorealistic LitSphere approaches (e.g., [START_REF] Todo | Lit-sphere extension for artistic rendering[END_REF]) out of our work.
The main limitation of a MatCap is that it describes a static appearance: lighting and material are 'baked in' the image. For instance, lighting remains tied to the camera and cannot rotate independently, and material properties cannot be easily modified. A full separation into physical material and lighting representations would not only be difficult, but also unnecessary, since a MatCap is unlikely to be physically realistic. Instead, our approach is to keep the simplicity of MatCaps while permitting dynamic appearance manipulation in real time. Hence we do not fully separate material and lighting, but rather decompose an input MatCap (Figure 4.1a) into a pair of spherical image-based representations (Figure 4.1b). Thanks to this decomposition, common appearance manipulations such as rotating lighting, or changing material color and roughness, are performed through simple image operators (Figures 4.1c, 4.1d and 4.1e). Our approach makes the following contributions:
• We assume that the material acts as an image filter in a MatCap and we introduce a simple algorithm to estimate the parameters of this filter (Section 4.1);
• We next decompose a MatCap into high- and low-frequency components akin to diffuse and specular terms. Thanks to the estimated filter parameters, each component is then unwarped into a spherical representation analogous to pre-filtered environment maps (Section 4.2);
• We perform appearance manipulation in real time from our representation by means of image operations, which in effect re-filter the input MatCap (Section 4.3).
As shown in Section 4.4, our approach permits conveying a plausible, spatially-varying appearance from one or more input MatCaps, without ever having to recover physically-based material or lighting representations.

Appearance model
We hypothesize that the material depicted in a MatCap image acts as a filter of constant size in the spherical domain (see Figure 4.2b). Our goal is then to estimate the parameters of this filter from image properties alone. We first consider that such a filter has a pair of diffuse and specular terms. The corresponding diffuse and specular MatCap components may either be given as input or approximated (see Section 4.2.1). The remainder of this section applies to either component considered independently.

Definitions
We consider a MatCap component L_o to be the image of a sphere in orthographic projection. Each pixel is uniquely identified by its screen-space normal, using a pair (θ, φ) of angular coordinates.
The color at a point in L_o is assumed to be the result of filtering an unknown lighting environment L_i by a material filter F. We can therefore apply the approach described by Ramamoorthi et al. [START_REF] Ramamoorthi | A signal-processing framework for reflection[END_REF], considering rendering as a 2D spherical convolution in which the material acts as a low-pass filter of the incoming radiance; we may write L_o = F * L_i. We make the same assumptions introduced in Section 2.2: a convex curved object of uniform isotropic material lit by distant lighting. Even though MatCaps are artist-created images that are not directly related to radiance, they still convey material properties. We further restrict F to be radially symmetric on the sphere, which simplifies the estimation of these properties as it allows us to study L_o in a single dimension. A natural choice of dimension is θ (see Figure 4.3a), since it also corresponds to viewing elevation in tangent space, along which most material variations occur. We thus re-write the 2D spherical convolution as a 1D angular convolution of the form:
L_o(θ + t, φ) = (f * L_iφ)(θ + t), t ∈ [−ε, +ε], (4.1)
where f is a 1D slice of F along the θ dimension, and L_iφ corresponds to L_i integrated along the φ dimension. We used the same approach in our Fourier analysis of Section 3.2, where we showed that, starting from Equation (3.9), which is similar to Equation (4.1), we obtain simple formulas relating 1D image statistics to statistics of lighting and material. Moreover, we use the simplified relationships defined after studying the measured materials of the MERL database (Section 3.3.5). These formulas are trivially adapted to the angular parametrization based on screen-space normals (a simple change of sign in Equation (4.3)). For a point (θ, φ), we have:
K[L_o] = K[L_iφ] α(θ), (4.2)
E[L̂_o] = E[L̂_iφ] − µ θ, (4.3)
Var[L̂_o] = Var[L̂_iφ] + ν, (4.4)
where K denotes the energy of a function, hatted functions are normalized by energy (e.g., L̂_o = L_o / K[L_o]), and E and Var stand for the statistical mean and variance respectively. The filter parameters associated to each statistic are α, µ and ν. We make use of our fitting study (Section 3.3.5) to make a number of simplifying assumptions that ease their estimation. Equation (4.2) shows that the filter energy α(θ) acts as a multiplicative term. We define it as the sum of a constant α_0 and an optional Hermite function that accounts for silhouette effects (see Figure 4.2a). We assume that only α_0 varies per color channel, hence we call it the base color parameter. Equation (4.3) shows that the angular location of the filter is additive. The assumption here is that it is a linear function of viewing elevation (i.e., the material warps the lighting environment linearly in θ); hence it is controlled by a slope parameter µ ∈ [0, 1]. Lastly, Equation (4.4) shows that the size of the filter ν acts as a simple additive term in variance. We assume that this size parameter is constant (i.e., the material blurs the lighting environment irrespective of viewing elevation). One may simply use µ = 0 for the diffuse component and µ = 1 for the specular component. However, we have shown evidence of a correlation between µ and ν, which is likely due to grazing-angle effects. We use the correlation function µ(ν) = 1 − 0.3ν − 1.1ν², in effect defining the slope as a function of filter size.
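The following small sketch (my own Python code; the container type and field names are illustrative, not an API of this work) summarizes the per-component filter parameters and the slope-from-size correlation:

    from dataclasses import dataclass

    def slope_from_size(nu: float) -> float:
        """Correlation mu(nu) = 1 - 0.3*nu - 1.1*nu^2, clamped to non-negative values (my own safeguard)."""
        return max(0.0, 1.0 - 0.3 * nu - 1.1 * nu ** 2)

    @dataclass
    class FilterParams:
        base_color: tuple   # alpha_0, one value per color channel (multiplicative, Eq. 4.2)
        size: float         # nu, additive in variance (Eq. 4.4)

        @property
        def slope(self) -> float:
            return slope_from_size(self.size)   # mu, additive in mean (Eq. 4.3)

    diffuse = FilterParams(base_color=(0.6, 0.5, 0.4), size=0.8)     # wide filter, mu close to 0
    specular = FilterParams(base_color=(0.9, 0.9, 0.9), size=0.02)   # narrow filter, mu close to 1
    print(diffuse.slope, specular.slope)

Note how a size close to the Lambertian average variance yields a slope close to 0, while a very small size yields a slope close to 1, consistently with the diffuse/specular intuition above.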
Putting it all together, we define our filter F as a 2D spherical Gaussian: its energy varies according to α(θ), it is shifted by µθ and it has constant variance ν. This is illustrated in Figure 4.2b, where we draw filter slices f for three different viewing elevations. In the following, we first show how to evaluate the filter energy α (Section 4.1.2), then its size ν (Section 4.1.3), from which we obtain its slope µ.

Energy estimation
The filter energy is modeled as the sum of a constant base color and an optional silhouette-effect function. However, silhouette effects are scarce in MatCaps, as they require the artist to consistently apply the same intensity boost along the silhouette. In our experience, the few MatCaps that exhibit such an effect (see inset) clearly show an additive combination, suggesting a rim-lighting configuration rather than a multiplicative material boost. We thus only consider the base color for estimation in artist-created MatCaps. Nevertheless, we show in Section 4.3.2 how to incorporate silhouette effects in a proper multiplicative way. The base color α_0 is a multiplicative factor that affects an entire MatCap component. If we assume that the brightest light source is pure white, then the corresponding point in the image is the one with maximum luminance. All MatCaps consist of low-dynamic-range (LDR) images, since they are captured from LDR images or painted in LDR. Hence, at a point of maximum luminance, α_0 is directly read off the image, since K[L_iφ] = 1 in Equation (4.2). This corresponds to white balancing under a grey-world assumption (see Figure 4.6c). This assumption may not always be correct, but it is important to understand that we do not seek an absolute color estimation. Indeed, the user manipulations presented in Section 4.3 are only made relative to the input MatCap.

Variance estimation
The filter size corresponds to material variance, which is related to image variance (see Eq. (4.4)).

Image variance
We begin by explaining how we compute image variance, the left-hand side of Equation (4.4). To this end we must define a 1D window with compact support around a point (θ, φ), and sample the MatCap along the θ dimension, as shown in Figure 4.3b. In practice, we weight L_o by a function W_ε : [−ε, +ε] → [0, 1], yielding:
L_oε(θ + t, φ) = L_o(θ + t, φ) W_ε(t), (4.5)
where W_ε is a truncated Gaussian of standard deviation ε/3. Assuming L_o to be close to a Gaussian as well on [−ε, +ε], the variance of L_o is related to that of L_oε by [START_REF] Bromiley | Products and convolutions of gaussian distributions[END_REF]:
\mathrm{Var}[\hat L_o]_\varepsilon \simeq \frac{\mathrm{Var}[\hat L_{o\varepsilon}] \cdot \mathrm{Var}[W_\varepsilon]}{\mathrm{Var}[W_\varepsilon] - \mathrm{Var}[\hat L_{o\varepsilon}]}. (4.6)
The image variance computed at a point (θ, φ) depends on the choice of window size. We find the most relevant window size (and the corresponding variance value) using a simple differential analysis in scale space, as shown in Figure 4.4. Variance exhibits a typical signature: after an initial increase that we attribute to variations of W_ε, it settles down (possibly reaching a local minimum), then rises again as W_ε encompasses neighboring image features. We seek the window size ε* at which the window captures the variance best, which is where the signature settles. We first locate the second inflection point, which marks the end of the initial increase. Then ε* either corresponds to the location of the next minimum (Figure 4.4a), or to the location of the second inflection if no minimum is found (Figure 4.4b).
If no second inflection occurs, we simply pick the variance at the largest window size ε* = π/2 (Figure 4.4c). The computation may become degenerate, yielding negative variances (Figure 4.4d). Such cases occur in regions of very low intensity that compromise the approximation of Equation (4.6); we discard the corresponding signatures.

Material variance
The estimation of ν from Equation (4.4) requires making assumptions on the variance of the integrated lighting L_iφ. If we assume that the lighting environment contains sharp point or line light sources running across the θ direction, then at those points we have Var[L̂_iφ] ≈ 0 and thus ν ≈ Var[L̂_o]. Moreover, observe that Equation (4.1) remains valid when replacing L_o and L_iφ by their derivatives L′_o and L′_iφ in the θ dimension. Consequently, Equation (4.4) may also be used to recover ν by relying on the θ-derivative of a MatCap component. In particular, if we assume that the lighting environment contains sharp edge light sources, then at those points we have Var[L̂′_iφ] ≈ 0 and thus ν ≈ Var[L̂′_o]. In practice, we let users directly provide regions of interest (ROIs) around the sharpest features by selecting a few pixel regions in the image. We run our algorithm on each pixel inside a ROI, and pick the minimum variance over all pixels to estimate the material variance. The process is fast enough to provide interactive feedback, and it does not require accurate user inputs since variance is a centered statistic. An automatic procedure for finding ROIs would be interesting for batch-conversion purposes, but is left to future work. Our approach is similar in spirit to that of Hu and de Haan [START_REF] Hu | Low cost robust blur estimator[END_REF], but is tailored to the signatures of Figure 4.4. Note that since MatCap images are LDR, regions where intensity is clamped to 1 produce large estimated material variances. This seems to be in accordance with the way material perception is altered in LDR images [START_REF] Phillips | Effects of image dynamic range on apparent surface gloss[END_REF].

Validation
We validate our estimation algorithm using analytical primitives of known image variance, as shown in Figure 4.5. To make the figure compact, we have put three primitives of different sizes and variances in the first two MatCaps. We compare ground-truth image variances to the estimates given by Var[L̂′_o] (for ROIs A and D) or Var[L̂_o] (all other ROIs), at three image resolutions. Our method provides accurate variance values compared to the ground truth, independently of image resolution, using L_o or L′_o. The slight errors observed in D, E and F are due to primitives lying close to each other, which affects the quality of our estimation. The small under-estimation in the case of I might happen because the primitive is so large that a part of it is hidden from view. To compute the material variance, our algorithm considers the location that exhibits the minimum image variance. For instance, if we assume the first two MatCaps of Figure 4.5 to be made of homogeneous materials, then their material variances will be those of A and D respectively. This implicitly assumes that the larger variances of other ROIs are due to blurred lighting features, which is again in accordance with findings in material perception [START_REF] Roland W Fleming | Real-world illumination and the perception of surface reflectance properties[END_REF].
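A compact sketch of this variance-estimation step is given below (my own NumPy code). The windowed estimate follows Equations (4.5)-(4.6), but the detection of where the scale-space signature settles is a deliberately crude stand-in for the analysis of Figure 4.4, and the 1D profile is a synthetic placeholder:

    import numpy as np

    def weighted_variance(t, w):
        w_hat = w / np.trapz(w, t)
        mean = np.trapz(t * w_hat, t)
        return np.trapz((t - mean) ** 2 * w_hat, t)

    def image_variance(theta, L_o, theta_c, eps):
        """Windowed variance estimate of Eqs. (4.5)-(4.6) at angle theta_c, window extent eps."""
        mask = np.abs(theta - theta_c) <= eps
        t = theta[mask] - theta_c
        W = np.exp(-0.5 * (t / (eps / 3.0)) ** 2)          # truncated Gaussian window, std eps/3
        var_win = weighted_variance(t, W)
        var_prod = weighted_variance(t, L_o[mask] * W)     # variance of the windowed signal
        denom = var_win - var_prod
        return var_prod * var_win / denom if denom > 1e-9 else np.nan   # degenerate cases -> NaN

    def material_variance(theta, L_o, theta_c, n_scales=64):
        """Scan window sizes and keep the variance where the scale-space signature settles."""
        eps_all = np.linspace(0.05, np.pi / 2, n_scales)
        sig = np.array([image_variance(theta, L_o, theta_c, e) for e in eps_all])
        d2 = np.diff(sig, 2)
        start = int(np.argmax(d2 > 0)) if np.any(d2 > 0) else 0   # crude end of the initial increase
        tail = sig[start:]
        mins = np.where((np.diff(tail[:-1]) < 0) & (np.diff(tail[1:]) > 0))[0]
        return tail[mins[0] + 1] if len(mins) else sig[-1]        # local minimum, else largest window

    # A sharp synthetic highlight of known variance 0.01:
    theta = np.linspace(-np.pi / 2, np.pi / 2, 1001)
    L_o = np.exp(-0.5 * ((theta - 0.3) / 0.1) ** 2)
    print(material_variance(theta, L_o, theta_c=0.3))             # ~ 0.01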
MatCap decomposition
We now make use of the estimated filter parameters to turn a MatCap into a representation amenable to dynamic manipulation. Figure 4.6 shows a few example decompositions. Please note that all our MatCaps are artist-created, except for the comparisons in Figs. 4.8 and 4.12.

Low-/High-frequency separation
Up until now, we have assumed that a MatCap was readily separated into a pair of components akin to diffuse and specular effects. Such components may be provided directly by the artist during the capture or painting process, simply using a pair of layers. However, most MatCaps are given as a single image where both components are blended together. Separating an image into diffuse and specular components without additional knowledge is inherently ambiguous. Existing solutions (e.g., [NVY + 14]) focus specifically on specular highlights, while we need a full separation. Instead of relying on complex solutions, we provide a simple heuristic separation into low-frequency and high-frequency components, which we find sufficient for our purpose. Our solution is based on a gray-scale morphological opening directly inspired by the work of Sternberg [START_REF] Stanley R Sternberg | Grayscale morphology[END_REF]. It has the advantage of outputting positive components without requiring any parameter tuning, which we found in no other technique. We use the morphological opening to extract the low-frequency component of a MatCap. An opening is the composition of an erosion operator followed by a dilation operator. Each operator is applied once to all pixels in parallel, per color channel. For a given pixel p:
erode(p) = min_{q∈P} ( v_q (n_p · n_q) ), (4.7)
dilate(p) = max_{q∈P} ( v_q (n_p · n_q) ), (4.8)
where P = {q | (n_p · n_q) > 0} is the set of valid neighbor pixels around p, and v_q and n_q are the color value and screen-space normal at a neighbor pixel q respectively. The dot product between normals reproduces cosine weighting, which dominates in diffuse reflections. It is shown in the inset figure along with the boundary ∂P of neighbor pixels. The morphological opening process is illustrated in Figure 4.7; a sketch is also given below. The resulting low-frequency component is subtracted from the input to yield the high-frequency component. Figure 4.8 shows separation results on a rendered sphere compared to veridical diffuse and specular components. Differences are mostly due to the fact that some low-frequency details (due to smooth lighting regions) occur in the veridical specular component. As a result, the specular component looks brighter compared to our high-frequency component, while the diffuse component looks dimmer than our low-frequency component. Nevertheless, we found that this approach provides a sufficiently plausible separation when no veridical diffuse and specular components exist, as with artist-created MatCaps (see Figure 4.6b for more examples).
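The sketch below (my own, unoptimized NumPy code) illustrates the opening of Equations (4.7)-(4.8); the finite neighborhood radius is my own choice, since in practice only nearby pixels contribute to the min/max:

    import numpy as np

    def _morpho_pass(values, normals, radius, reduce_fn):
        """One erosion or dilation pass over an HxWx3 color image with HxWx3 screen-space normals."""
        H, W, _ = values.shape
        out = values.copy()
        for y in range(H):
            for x in range(W):
                n_p = normals[y, x]
                ys, ye = max(0, y - radius), min(H, y + radius + 1)
                xs, xe = max(0, x - radius), min(W, x + radius + 1)
                n_q = normals[ys:ye, xs:xe].reshape(-1, 3)
                v_q = values[ys:ye, xs:xe].reshape(-1, 3)
                w = n_q @ n_p                      # cosine weights (n_p . n_q)
                valid = w > 0.0
                if np.any(valid):
                    out[y, x] = reduce_fn(v_q[valid] * w[valid, None], axis=0)
        return out

    def low_high_separation(matcap, normals, radius=8):
        eroded = _morpho_pass(matcap, normals, radius, np.min)   # Eq. (4.7)
        low = _morpho_pass(eroded, normals, radius, np.max)      # Eq. (4.8): opening = erode then dilate
        return low, matcap - low                                 # low- and high-frequency components

Because the opening never exceeds the input values, both returned components are positive, which is the property highlighted above.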
Spherical mapping & reconstruction
Given a pair of low- and high-frequency components along with the estimated filter parameters, we next convert each component into a spherical representation. We denote a MatCap component by L_o, the process being identical in either case. We first divide L_o by its base color parameter α_0. This yields a white-balanced image L*_o, as shown in Figure 4.6c. We then use the filter slope parameter µ to unwarp L*_o to a spherical representation, and we use a dual paraboloid map [START_REF] Heidrich | View-independent environment maps[END_REF] for storage purposes. In practice, we apply the inverse mapping to fill in the dual paraboloid map, as visualized in Figure 4.9. Each texel q in the paraboloid map corresponds to a direction ω_q. We rotate it back to obtain its corresponding normal n_q = rot_{u_q, −µθ}(ω_q), where u_q = (e_2 × ω_q)/‖e_2 × ω_q‖, θ = acos(e_2 · ω_q)/(1 + µ) and e_2 = (0, 0, 1) stands for the (fixed) view vector in screen space. Since each texel q ends up with a different rotation angle, the resulting transformation is indeed an image warping. The color for q is finally looked up in L*_o using n_q. Inevitably, a disc-shaped region on the back side of the dual paraboloid map receives no color values. We call it the blind spot; its size depends on µ: the smaller the slope parameter, the wider the blind spot. Since in our approach the slope is an increasing function µ(ν) of filter size, a wide blind spot corresponds to a large filter, and hence to low-frequency content. It is thus reasonable to apply inpainting techniques without having to introduce new details in the back paraboloid map. In practice, we apply Poisson image editing [START_REF] Pérez | Poisson image editing[END_REF] with a radial guiding gradient that propagates the boundary colors of the blind spot toward its center (Fig. 4.9b). This decomposition process results in a pair of white-balanced dual paraboloid maps, one for each component, as illustrated in Figure 4.6d. They are well suited to real-time rendering, as they are analogous to pre-filtered environment maps (e.g., [START_REF] Kautz | A unified approach to prefiltered environment maps[END_REF][START_REF] Ramamoorthi | Frequency space environment map rendering[END_REF]).

Appearance manipulation
Rendering using our decomposition is the inverse process of Section 4.2.2. The color at a point p on an arbitrary object is given as a function of its screen-space normal n_p. For each component, we first map n_p to a direction ω_p on the sphere: we apply a rotation ω_p = rot_{u_p, µθ}(n_p), with u_p = (e_2 × n_p)/‖e_2 × n_p‖ and θ = acos(e_2 · n_p). A shading color is then obtained by a lookup in the dual paraboloid map based on ω_p, which is then multiplied by the base color parameter α_0. The low- and high-frequency components are finally added together.
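This per-pixel lookup can be sketched as follows (my own NumPy code; the rotation sign convention and the nearest-texel dual paraboloid fetch are illustrative choices, and FilterParams refers to the container assumed in the earlier sketch):

    import numpy as np

    E2 = np.array([0.0, 0.0, 1.0])                 # fixed view vector in screen space

    def rotate(v, axis, angle):
        """Rodrigues' rotation of v by `angle` around the unit vector `axis`."""
        return (v * np.cos(angle) + np.cross(axis, v) * np.sin(angle)
                + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

    def warp_normal(n_p, mu):
        """omega_p = rot_{u_p, mu*theta}(n_p), with u_p = e2 x n_p and theta = acos(e2 . n_p)."""
        theta = np.arccos(np.clip(np.dot(E2, n_p), -1.0, 1.0))
        axis = np.cross(E2, n_p)
        if np.linalg.norm(axis) < 1e-8:            # normal aligned with the view axis: no warping
            return n_p
        return rotate(n_p, axis / np.linalg.norm(axis), mu * theta)

    def sample_paraboloid(maps, omega):
        """Nearest-texel fetch in a dual paraboloid map given as a (front, back) pair of HxWx3 arrays."""
        front, back = maps
        tex = front if omega[2] >= 0.0 else back
        z = abs(omega[2])
        u = 0.5 * (omega[0] / (1.0 + z) + 1.0)
        v = 0.5 * (omega[1] / (1.0 + z) + 1.0)
        H, W, _ = tex.shape
        return tex[int(v * (H - 1)), int(u * (W - 1))]

    def shade(n_p, components):
        """Sum the low- and high-frequency contributions for one pixel of normal n_p."""
        color = np.zeros(3)
        for params, paraboloid in components:      # (FilterParams, dual paraboloid map) pairs
            omega = warp_normal(n_p, params.slope)
            color += np.asarray(params.base_color) * sample_paraboloid(paraboloid, omega)
        return color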
Lighting manipulation
Lighting may be edited by modifying our representation, given as a pair of dual paraboloid maps. We provide a painting tool to this end, as illustrated in Figure 4.10b. The user selects one of the components and paints on the object at a point p. The screen-space normal n_p and the slope parameter µ are used to accumulate a brush footprint in the dual paraboloid map. To account for material roughness, the footprint is blurred according to ν. We use Gaussian- and Erf-based footprints to this end, since they allow such a blurring to be performed analytically. We also provide a light source tool, which is similar to the painting tool and is shown in Figure 4.10e. It takes as input a bitmap image that is blurred based on ν. However, instead of being accumulated as in painting, it is simply moved around. A major advantage of our decomposition is that it permits rotating the whole lighting environment around. This is applied to both low- and high-frequency components in synchronization. In practice, it simply consists in applying the inverse rotation to n_p prior to warping. As shown in Figures 4.10c,f, this produces convincing results that remain coherent even with additional reflections.

Material manipulation
Manipulating apparent material roughness requires modifying ν, but also µ since it depends on ν. This is trivial for light sources that have been added or painted, as one simply has to re-render them. However, the low- and high-frequency components obtained through separation of the input MatCap require additional filtering. For a rougher material look (Figure 4.11b), we decrease the magnitude of µ and blur the dual paraboloid map to increase ν. For a shinier material look (Figure 4.11c), we increase the magnitude of µ and manually add reflections with a lower ν to the dual paraboloid map. We have tried using simple sharpening operators, but avoided that solution as it tends to raise noise in images. For the manipulation of apparent material color, we take inspiration from color variations.

Results and comparisons
Our material estimation algorithm (Section 4.1) is implemented on the CPU and runs in real time on a single core of an Intel i7-2600K 3.4GHz, allowing users to quickly select appropriate ROIs. The decomposition process (Section 4.2) is implemented in Gratin (a GPU-tailored nodal software available at http://gratin.gforge.inria.fr/), using an Nvidia GeForce GTX 555. Performance is largely dominated by the low-/high-frequency separation algorithm, which takes from 2 seconds for a 400 × 400 MatCap to 6 seconds for an 800 × 800 one. Rendering (Section 4.3) is implemented in Gratin as well and runs in real time on the GPU, with a negligible overhead compared to rendering with a simple MatCap. We provide GLSL shaders for rendering with our representation as supplemental material. A benefit of our approach is the possibility to rotate lighting independently of the view. One may try to achieve a similar behavior with a mirrored MatCap forming an entire sphere. However, this is equivalent to a spherical mapping, in which case highlights do not move, stretch or compress in a plausible way. In this work, we have focused on artist-created MatCaps for which there is hardly any ground truth to compare to. Nevertheless, we believe MatCaps should behave similarly to rendered spheres when lighting is rotated. Figure 4.12 shows a lighting rotation applied to the rendering of a sphere, for which a ground truth exists. We also compare to a rotation obtained with the method of Lombardi et al. [START_REF] Lombardi | Reflectance and natural illumination from a single image[END_REF]. For the specific case of lighting rotation, our approach appears superior; in particular, it reproduces the original appearance exactly. However, the method of Lombardi et al. has an altogether different purpose, since it explicitly separates material and lighting. For instance, they can re-render a sphere with the same lighting but a different material, or with the same material but different lighting. Up to this point, we have only exploited a single MatCap in all our renderings. However, we may use low- and high-frequency components coming from different MatCaps, as shown in Figure 4.13. Different MatCaps may of course be used on different object parts, as seen in Figure 4.14. Our approach offers several benefits here: the input MatCaps may be aligned, their colors changed per component, and they remain aligned when rotated.
Our representation also brings interesting spatial interpolation abilities, since it provides material parameters to vary. Figure 4.15 shows how bitmap textures are used to vary the high- and low-frequency components separately. Figure 4.16 successively makes use of an ambient occlusion map, a diffuse color map, then silhouette effects to convey object shape. Our approach thus permits obtaining spatial variations of appearance, which are preserved when changing input MatCaps.

Discussion
We have shown how to decompose a MatCap into a representation more amenable to dynamic appearance manipulation. In particular, our approach enables common shading operations such as lighting rotation and spatially-varying materials, while preserving the appeal of artist-created MatCaps. We are convinced that our work will quickly prove useful in software that already makes use of MatCaps (firstly 3D sculpting, but also CAD and scientific visualization), with a negligible overhead in terms of performance but greater flexibility in terms of appearance manipulation. We believe that this work, restricted to MatCaps, is easily transferred to other kinds of inputs, once a spherical representation of shading is obtained. We presume that the estimation of materials will be valid regardless of whether the input comes from an artwork, a photograph or a rendering. However, the sharp-lighting assumption might not always be met, in which case material parameters will be over- or under-estimated. This will not prevent our approach from working, since it will be equivalent to having a slightly sharper or blurrier lighting. Interestingly, recent psycho-physical studies (e.g., [START_REF] Doerschner | Estimating the glossiness transfer function induced by illumination change and testing its transitivity[END_REF]) show that different material percepts may be elicited only by changing lighting content. This suggests that our approach could be in accordance with visual perception, an exciting topic for future investigation. Our decomposition approach makes a number of assumptions that may not always be satisfied. We assume an additive blending of components, whereas artists may have painted a MatCap using other blending modes. For further explanation of the limitations of our technique, such as the restriction to radially-symmetric BRDFs or the needed improvements of the filling-in technique, we refer to Chapter 6.

Chapter 5: Local Shape Editing at the Compositing Stage
Images created by rendering engines are often modified in post-process, making use of independent, additive shading components such as diffuse or reflection shading or transparency (Section 1.1.3). Most modern off-line rendering engines output these shading components in separate image buffers without impeding rendering performance. In the same way, it is possible to output auxiliary components such as position or normal buffers. These auxiliary buffers permit additional modifications, for instance adding lights in post-process (using normals) or depth of field (using positions). Nevertheless, these modifications are limited: modifying auxiliary buffers holding 3D normals or positions has no effect on the shading buffers. Following our goal of modifying existing appearance, we want to support geometry modifications. We specifically focus on ways to obtain a plausible shading color when modifying local shape (normals).
If one wants to make these kinds of modifications, the straightforward solution is to completely re-render the scene in 3D. This is a time-consuming process that we want to avoid in order to explore modifications interactively. Such post-processing techniques are routinely used in product design applications (e.g., Colorway) or in movie production (e.g., Nuke or Flame) to quickly test alternative compositions. They are most often preferred to a full re-rendering of the 3D scene, which would require considerably larger assets, longer times and different artistic skills. The main issue when one wants to modify shape at the compositing stage is that lighting information is no longer available, as it is lost in the rendering process. Recovering the environment lighting would not be possible, since we lack much of the necessary 3D data. We instead strive for a plausible result, ensuring that the input diffuse and reflection shading buffers are recovered when reverting to the original normals.
As in Chapter 4, we work with pre-filtered environment maps. While with MatCaps we used pre-filtered environment maps to mimic modifications of material or lighting, we now use them to allow geometry modifications. The key idea of our method is to reconstruct a pair of pre-filtered environments per object/material: one for the diffuse term, the other for the reflection term. Obtaining new shading colors from arbitrarily modified normals then amounts to performing a pair of lookups in the respective pre-filtered environment maps. Modifying local shape in real time then becomes a matter of re-compositing the reconstructed shading buffers. Alternatively, we could export an environment lighting map, pre-filtered or not, per object/material during the rendering process, to be used afterwards for the desired modifications of local shape. This solution requires obtaining the environment maps using light-transport techniques in order to capture the effects of the interactions between different objects (i.e., shadows or reflections). This results in a tedious and costly approach, because each environment map would need a complete re-render of the whole scene. Moreover, retro-reflections could not be obtained, as each object needs to be replaced by a sphere to get its environment map. Our approach is a first step toward the editing of surface shape (and more generally object appearance) at the compositing stage, which we believe is an important and challenging problem. The rendering and compositing stages are clearly separate in practice, involving different artistic skills and types of assets (i.e., 3D scenes vs. render buffers). Providing compositing artists with more control over appearance will thus require specific solutions. Our work makes the following contributions toward this goal (see Figure 5.1):
• Diffuse and reflection shading environments are automatically reconstructed in a preprocess for each object/material combination occurring in the reference rendering (Section 5.1);
• The reconstructed environments are used to create new shading buffers from arbitrarily modified normals, which are recomposited in real time with the reference shading buffers (Section 5.2).
Figure 5.1: Our method permits modifying surface shape by making use of the shading and auxiliary buffers output by modern renderers. We first reconstruct shading environments for each object/material combination of the Truck scene, relying on normal and shading buffers.
When normals are then modified by the compositing artist, the color image is recomposited in real time, enabling interactive exploration. Our method reproduces interreflections between objects, as seen when comparing the reconstructed environments for the rear and front mudguards.
This work has been published at the Eurographics Symposium on Rendering, in the 'Experimental Ideas & Implementations' track, in collaboration with Gael Guennebaud and Romain Vergne [START_REF] Jorge | Local Shape Editing at the Compositing Stage[END_REF]. Specifically, Gael Guennebaud developed the reconstruction of the diffuse component, and the hole-filling and regularization were developed by Gael Guennebaud and Romain Vergne.

Reconstruction
In this work, we focus on modifying the apparent shape of opaque objects at the compositing stage. We thus only consider the diffuse and reflection buffers, the latter resulting from reflections off either specular or glossy objects. These shading buffers exhibit different frequency content: diffuse shading is low-frequency, while reflection shading might contain arbitrarily high frequencies. As a result, we use separate reconstruction techniques for each of them. Both techniques take their reference shading buffer D_0 or R_0 as input, along with an auxiliary normal buffer n. Reconstruction is performed separately for each surface patch P belonging to the same object with the same material, identified thanks to a surface ID buffer. We also take as input ambient and reflection occlusion buffers α_D and α_R, which identify diffuse and specular visibility respectively (see Appendix); the latter is used in the reconstruction of the reflection environment. We use Modo to export all necessary shading and auxiliary buffers, as well as camera data. The normals are transformed from world space to screen space prior to reconstruction; hence the diffuse and reflection environments output by our reconstruction techniques are also expressed in screen space.

Diffuse component
In this section our goal is to reconstruct a complete prefiltered diffuse environment D : S² → R³, parametrized by surface normals n. The map D should match the input diffuse buffer inside P: for each selected pixel x ∈ P, D(n(x)) should be as close as possible to D_0(x). However, as illustrated by the Gauss map in Figure 5.2, this problem is highly ill-posed without additional priors. First, it is under-constrained in regions of the unit sphere not covered by the Gauss map of the imaged surface. This is due to sharp edges, occluded regions and highly curved features producing a very sparse sampling. Second, the problem might also be over-constrained in areas of the Gauss map covered by multiple sheets of the imaged surface, as they might exhibit different diffuse shading values due to global illumination (mainly occlusions).
Figure 5.2: The pixels of the input diffuse layer of the selected object are scattered inside the Gauss map. This shading information is then approximated by low-order spherical harmonics, using either a least-squares fit (top) or our quadratic programming formulation (bottom), leading to the respective reconstructed environment maps. To evaluate the reconstruction quality, these environment maps are then applied to the original objects, and the signed residual is shown using a color code (at right). Observe how our QP reconstruction guarantees a negative residual.
Since diffuse shading exhibits very low frequencies, we address the coverage issue by representing D with low-order Spherical Harmonics (SH) basis functions, which have the double advantage of being globally supported and of exhibiting very good extrapolation capabilities. We classically use order-2 SH, requiring only 9 coefficients per color channel [START_REF] Ramamoorthi | An efficient representation for irradiance environment maps[END_REF]. The reconstructed prefiltered diffuse environment is thus expressed as:
D(n) = Σ_{l=0}^{2} Σ_{m=−l}^{l} c_{l,m} Y_{l,m}(n). (5.1)
The multiple-sheets issue is addressed by reconstructing a prefiltered environment as if no occlusion were present. This choice facilitates shading edition, as the local residual D_0(x) − D(n(x)) is then directly correlated to the amount of local occlusion. Formally, it amounts to the following constrained quadratic minimization problem:
c^\star_{l,m} = \arg\min_{c_{l,m}} \sum_{x \in P} \left( D_0(x) - D(n(x)) \right)^2, \quad \text{s.t.} \quad D(n(x)) \ge D_0(x),
which essentially says that the reconstruction should be as close as possible to the input while enforcing negative residuals. This is a standard Quadratic Programming (QP) problem that we efficiently solve using a dual iterative method [START_REF] Goldfarb | A numerically stable dual method for solving strictly convex quadratic programs[END_REF]. Figure 5.2 compares our approach to a standard least-squares (LS) reconstruction. As made clear in the two right-most columns, our QP method produces shading results more plausible than the LS method: residuals are negative by construction and essentially correspond to darkening by occlusion.

Validation
The left column of Figure 5.6 shows reconstructed diffuse shading for a pair of environment illuminations. We project the light probes onto the SH basis and use them to render a 3D teapot model, from which we reconstruct the diffuse environments. In this case there is no occlusion, only direct lighting: our reconstruction exhibits very good results, as shown by the difference images with respect to the ground-truth environments.
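A compact sketch of this reconstruction for one color channel is given below (my own NumPy/SciPy code). The SH basis constants are the standard real order-2 coefficients; the negativity constraint on residuals is imposed here with SciPy's generic SLSQP solver, whereas the thesis relies on a dedicated dual QP method:

    import numpy as np
    from scipy.optimize import minimize

    def sh_basis_order2(n):
        """The 9 real SH basis functions evaluated at unit normals n (Nx3)."""
        x, y, z = n[:, 0], n[:, 1], n[:, 2]
        return np.stack([0.282095 * np.ones_like(x),
                         0.488603 * y, 0.488603 * z, 0.488603 * x,
                         1.092548 * x * y, 1.092548 * y * z,
                         0.315392 * (3.0 * z ** 2 - 1.0),
                         1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2)], axis=1)

    def reconstruct_diffuse(normals, d0):
        """Fit 9 SH coefficients (one color channel) so that D(n) >= D0 while staying close to D0."""
        Y = sh_basis_order2(normals)                          # N x 9
        c0, *_ = np.linalg.lstsq(Y, d0, rcond=None)           # unconstrained initial guess
        res = minimize(lambda c: 0.5 * np.sum((Y @ c - d0) ** 2), c0,
                       jac=lambda c: Y.T @ (Y @ c - d0),
                       constraints={'type': 'ineq', 'fun': lambda c: Y @ c - d0,
                                    'jac': lambda c: Y},
                       method='SLSQP')
        return res.x

    def eval_diffuse(coeffs, normals):
        return sh_basis_order2(normals) @ coeffs              # D(n) for arbitrary (new) normals

For large pixel sets this generic solver is slow; it is only meant to make the constrained least-squares formulation concrete.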
To this end, we partition the input set of pixels into smooth patches according to normal and depth discontinuities. Each smooth patch thus has a continuous (i.e., hole-free) image once mapped onto S^2 through the reflected directions r. Due to the regular structure of the input pixel grid, the image of the patch is composed of adjacent spherical quads and triangles (at patch boundaries). This is depicted in Figure 5.3 for a block of 3 × 3 pixels. Depending on object shape and camera settings, each patch may self-overlap, and the different patches can also overlap each other. In other words, a given reflection direction r might coincide with several polygons. We combine shading information coming from these different image locations using two types of weights. First, we take as input the auxiliary reflection occlusion buffer α_R, which restricts the combination to unoccluded polygons; second, spherical barycentric coordinates interpolate shading inside each polygon:

$$R(\mathbf{r}) = \frac{1}{N(\mathbf{r})} \sum_{k=1}^{N(\mathbf{r})} \sum_{j \in Q_k} \lambda^k_j \, R_0(\mathbf{x}_j), \quad (5.2)$$

where N(r) is the number of unoccluded spherical polygons containing r, Q_k is the set of corner indices of the k-th polygon, and λ^k_j are barycentric coordinates enabling the interpolation of the shading colors inside the k-th polygon. For polygons embedded on a sphere, barycentric coordinates can be computed as described in Equation 8 of [START_REF] Langer | Spherical barycentric coordinates[END_REF]. We use normalized spherical barycentric coordinates, which amounts to favoring the partition-of-unity property over the linear-precision property on the sphere (i.e., Equation 2 instead of 3 in [START_REF] Langer | Spherical barycentric coordinates[END_REF]).

In order to quickly recover the set of spherical polygons containing r, we propose to first warp the set S^2 of reflected directions to a single hemisphere so that the search space can be more easily indexed. To this end, we compute a rectified normal buffer n′ such that r = reflect(z, n′), where z = (0, 0, 1)^T, as shown in Figure 5.4. This is obtained by the bijection n′(r) = (r + z) / ||r + z||. In a preprocess, we build a 2D grid upon an orthographic projection of the Gauss map of these rectified screen-space normals. For each polygon corresponding to four or three connected pixels, we add its index to the cells it intersects. The intersection is carried out conservatively by computing the 2D convex hull of the projected spherical polygon. Then, for each query reflection vector r, we compute the spherical barycentric coordinates λ^k_j of each polygon of index k in the cell containing n′(r), and pick the polygons having all λ^k_j positive. In our implementation, for the sake of simplicity and consistency with our 2D grid construction, we compute spherical barycentric coordinates with respect to the rectified normals n′, for both intersections and shading interpolation (Equation (5.2)).

Figure 5.4: A normal n is 'rectified' to n′ prior to reconstruction. The reflection of the view vector v is then given by r = reflect(z, n′).

Hole-filling and regularization

Evaluating Equation (5.2) for each direction r in a dual-paraboloid map yields a partial reconstruction, with holes in regions not covered by a single polygon and discontinuities at the transition between different smooth parts. For instance, the bright red region in the top row of Figure 5.5 corresponds to reflection directions where no shading information is available (i.e., N(r) = 0). It is necessary to fill these empty regions to guarantee that shading information is available for all possible surface orientations.
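Before moving on to the hole-filling step, the following sketch spells out the direction bookkeeping used above: the rectification n′(r) = (r + z)/||r + z|| and the dual-paraboloid addressing used to store R. It is not the authors' code; numpy is assumed and the helper names are illustrative only.

```python
import numpy as np

def rectify(r):
    """n'(r) = (r + z) / ||r + z||, mapping all reflected directions to one hemisphere."""
    n_prime = r + np.array([0.0, 0.0, 1.0])
    return n_prime / np.linalg.norm(n_prime, axis=-1, keepdims=True)

def reflect(v, n):
    """Mirror direction of v about n (both assumed unit length)."""
    return 2.0 * np.sum(v * n, axis=-1, keepdims=True) * n - v

def dual_paraboloid_coords(r):
    """Map a unit direction r to (u, v, face): face 0 stores z >= 0, face 1 stores z < 0."""
    x, y, z = r[..., 0], r[..., 1], r[..., 2]
    denom = 1.0 + np.abs(z)              # standard paraboloid projection
    u, v = x / denom, y / denom          # in [-1, 1]
    return 0.5 * (u + 1.0), 0.5 * (v + 1.0), np.where(z >= 0.0, 0, 1)
```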
In practice, we perform a harmonic interpolation directly on a densely tessellated 3D sphere, with vertices indexed by r matching exactly the pixels of the dual-paraboloid map. The tessellation is thus perfectly regular except at the junction between the front and back hemispheres. We use a standard finite-element discretization with linear basis functions over a triangular mesh to solve the Laplacian differential equation, while setting shading values recovered by Equation (5.2) as Dirichlet boundary constraints [IGG + 14]. A result is shown in the middle row of Figure 5.5: holes are properly filled in, but some shading discontinuities remain. Those are caused by spatial discontinuities in Equation (5.2) occurring when spatially disconnected polygons are used for neighboring directions in the environment. We thus apply a last post-processing step where we slightly blur the environment along those discontinuities. We identify them by computing a second dual-paraboloid map storing the 2D image position of the polygon that contributed the respective shading color. This map is simply obtained by replacing the shading values R_0(x_j) in Equation (5.2) by the 2D coordinates x_j. We then compute the gradient of these maps and use its magnitude to drive a spatially-varying Gaussian blur. The effect is to smooth the radiance on discontinuities caused by two or more remote polygons projected next to one another. An example of regularized environment is shown in the bottom row of Figure 5.5.

Visualization

Dual-paraboloid maps are only used for practical storage purposes in our approach. It should be noted that once applied to a 3D object, most of the shading information in the back part of the map gets confined to the silhouette, as shown in the right column of Figure 5.5. In the following, we thus prefer to use shaded 3D spheres seen in orthographic projection (i.e., Lit Spheres [START_REF] Sloan | The lit sphere: A model for capturing npr shading from art[END_REF]) to visualize the reconstructed shading environments (both diffuse and reflections). In practice, under perspective projection only a subset of the filled-in shading values appears, close to object contours.

Figure 5.5: A dual-paraboloid map reconstructed with our approach is shown in the top row: it only partially covers the space of reflected view vectors. The missing information, shown in red, appears confined to the silhouette when visualized with a LitSphere at right. We focus on the back part of the paraboloid map in the middle row to show the result of the hole-filling procedure. Missing regions are completed, but some discontinuities remain; they are corrected by our regularization pass as shown in the bottom row.

Validation

The right column of Figure 5.6 shows reconstruction results for a pair of known lighting environments. As before, a 3D teapot is rendered using the light probe, and then used for reconstructing reflection environment maps. The difference between our reconstruction and the ground truth is small enough to use it for shape editing.

Recompositing

The outcome of the reconstruction process is a set of SH coefficients and dual-paraboloid maps for all object/material combinations appearing in the image. Obtaining reconstructed shading buffers simply amounts to evaluating shading in the appropriate SH basis or environment map, using arbitrary screen-space normals.
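As an illustration of that last step, a minimal lookup sketch is given below. It reuses the hypothetical helpers from the previous sketches, assumes the reflection environment is stored as two front/back texel arrays, and performs a nearest-texel approximation of what the thesis evaluates on the GPU.

```python
import numpy as np

def eval_diffuse(c, normals):
    """c: (9,3) SH coefficients per channel; normals: (N,3). Returns (N,3) diffuse shading."""
    return sh_basis_order2(normals) @ c          # helper from the earlier sketch

def eval_reflection(env_front, env_back, view, normals):
    """Nearest-texel lookup of the dual-paraboloid reflection environment (HxWx3 arrays)."""
    r = reflect(view, normals)                   # helpers from the earlier sketch
    u, v, face = dual_paraboloid_coords(r)
    h, w, _ = env_front.shape
    i = np.clip((v * (h - 1)).astype(int), 0, h - 1)
    j = np.clip((u * (w - 1)).astype(int), 0, w - 1)
    return np.where(face[..., None] == 0, env_front[i, j], env_back[i, j])
```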
A benefit of this approach is that any normal manipulation may then be used in post-process; we give some practical examples in Section 5.3. However, we must also ensure that the independently reconstructed shading buffers are seamlessly recombined in the final image. In particular, when a normal is left untouched by the compositing artist, we must guarantee that we reproduce the reference diffuse and reflection shading colors exactly. This is the goal of the recompositing process: taking as input an arbitrarily modified normal buffer, it combines the reconstructed prefiltered environments with rendered input buffers to produce a final color image where the apparent shape of objects has been altered. It works in parallel on all pixels; hence we drop the dependence on x for brevity.

Combined diffuse term

Given a modified normal ñ, we define the combined diffuse term D̃ by:

$$\tilde{D} = \alpha_D \Big\lfloor \underbrace{D_0 - D(\mathbf{n})}_{\text{residual}} + D(\tilde{\mathbf{n}}) \Big\rfloor + (1 - \alpha_D)\, D_0, \quad (5.3)$$

where the ambient occlusion term α_D is used to linearly interpolate between the reference and reconstructed diffuse colors. The rationale is that highly occluded areas should be preserved to prevent the introduction of unnatural shading variations. The D_0 - D(n) term is used to re-introduce residual differences between the reference and reconstructed buffers. It corresponds to local darkening of diffuse shading that could not be captured by our global reconstruction. The ⌊•⌋ symbol denotes clamping to 0, which is necessary to avoid negative diffuse shading values. This is still preferable to a multiplicative residual term D_0 / D(n), as it would raise numerical issues when D(n) ≈ 0. Observe that if n = ñ then D̃ = D_0: the reference diffuse shading is exactly recovered when the normal is not modified.

Combined reflection term

Contrary to the diffuse case, we cannot apply the residual approach between the reference and reconstructed reflection buffers, as it would create ghosting artifacts. This is because reflections are not bound to low frequencies as in the diffuse case. Instead, given a modified normal ñ and a corresponding modified reflection vector r̃ = reflect(v, ñ), we define the combined reflection term R̃ by:

$$\tilde{R} = \nu_{\mathbf{r},\tilde{\mathbf{r}}}\, \alpha_R\, R(\tilde{\mathbf{r}}) + (1 - \nu_{\mathbf{r},\tilde{\mathbf{r}}}\, \alpha_R)\, R_0, \quad (5.4)$$

where ν_{r,r̃} = min(1, cos⁻¹(r • r̃)/ε) computes the misalignment between original and modified reflection vectors (we use ε = 0.005π), and α_R is the reflection occlusion term. The latter serves the same purpose as the ambient occlusion term in the diffuse case. Equation (5.4) performs a linear interpolation between reference and reconstructed reflection colors based on ν_{r,r̃} α_R. As a result, if n = ñ, then ν_{r,r̃} = 0 and R̃ = R_0: the reference reflection shading is exactly recovered when the normal is left unmodified.

Final composition

The final image intensity Ĩ is given by:

$$\tilde{I} = \alpha \big( k_D \tilde{D} + k_R \tilde{R} \big)^{\frac{1}{\gamma}} + (1 - \alpha)\, I, \quad (5.5)$$

where the diffuse and reflection coefficients k_D and k_R are used to edit the corresponding shading term contributions (k_D = k_R = 1 is the default), γ is used for gamma correction (we use γ = 2.2 in all our results), and α identifies the pixels pertaining to the background (e.g., showing an environment map), which are already gamma-corrected in our input color image I. Equation (5.5) is carried out on all color channels separately. Figure 5.7 shows an example of the recompositing process on a simple scene containing a single object, where input normals have been corrupted by a 2D Perlin noise restricted to a square region.
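Before turning to the example in Figure 5.7, the following sketch shows how Equations (5.3)-(5.5) combine per pixel. It is an illustrative numpy version, not the GLSL implementation used in the thesis, and the buffer names are assumptions.

```python
import numpy as np

def misalignment(r, r_tilde, eps=0.005 * np.pi):
    """nu term of Eq. (5.4): 0 when reflection vectors coincide, 1 beyond eps radians."""
    d = np.clip(np.sum(r * r_tilde, axis=-1, keepdims=True), -1.0, 1.0)
    return np.minimum(1.0, np.arccos(d) / eps)

def recomposite(I, D0, R0, D_rec_n, D_rec_nt, R_rec_rt, aD, aR, nu,
                alpha, kD=1.0, kR=1.0, gamma=2.2):
    """Per-pixel recompositing of Equations (5.3)-(5.5).
    D_rec_n / D_rec_nt: reconstructed diffuse at original / modified normals;
    R_rec_rt: reconstructed reflection at modified reflection vectors;
    aD, aR: ambient / reflection occlusion; nu: misalignment; alpha: background mask."""
    # Combined diffuse term, Eq. (5.3): the clamp avoids negative shading values.
    D_tilde = aD * np.maximum(D0 - D_rec_n + D_rec_nt, 0.0) + (1.0 - aD) * D0
    # Combined reflection term, Eq. (5.4).
    w = nu * aR
    R_tilde = w * R_rec_rt + (1.0 - w) * R0
    # Final composition, Eq. (5.5): gamma-correct and blend with the background image.
    shading = np.maximum(kD * D_tilde + kR * R_tilde, 0.0) ** (1.0 / gamma)
    return alpha * shading + (1.0 - alpha) * I
```

When the normal buffer is left untouched, D_rec_nt equals D_rec_n and nu is zero, so the function returns the reference shading exactly, as required.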
The final colors using original and modified normals are shown in the leftmost column; the remaining columns show the different gamma-corrected shading terms. The top row focuses on the diffuse term (k_R = 0), while the bottom row focuses on the reflection term (k_D = 0). The importance of recombining reconstructed and reference diffuse shading as done in Equation (5.3) becomes apparent when comparing D(ñ) and D̃. In particular, it permits seamless reproduction of D_0 outside of the square region (e.g., inside the ear). Similarly, using Equation (5.4) permits removing implausible bright reflections in reflection shading (e.g., inside the ear or below the eyebrow).

Experimental results

We have implemented the recompositing process of Section 5.2 in Gratin [START_REF] Vergne | Designing gratin, a gpu-tailored node-based system[END_REF], a programmable node-based system working on the GPU. It permits testing various normal modification algorithms in 2D by programming them directly in GLSL, while observing results in real-time as demonstrated in the supplemental video. Alternatively, normal variations can be mapped onto 3D objects and rendered as additional auxiliary buffers at a negligible fraction of the total rendering time. It then grants compositing artists the ability to test and combine different variants of local shape details in post-process. We demonstrate both the 2D and 3D normal editing techniques in a set of test scenes rendered in global illumination.

We start with three simple 3D scenes, each showing a different object with the same material in the same environment illumination. The normal buffer and global illumination rendering for each of these scenes are shown in the first two columns of Figure 5.8. Diffuse and reflection environments are reconstructed from these input images and shown in the last two columns, using Lit Spheres. The reconstructed diffuse environments are nearly identical for all three objects. However, the quality of reconstruction for the reflection environment depends on object shape. The sphere object serves as a reference and only differs from the LitSphere view due to perspective projection. With increasing shape complexity, in particular when highly curved object features are available, the reflection environment becomes less sharp. However, this is usually not an issue when we apply the reconstructed environment to the same object, as shown in Figures 5.9 and 5.10.

We evaluate the visual quality of our approach on the head and vase objects in Figure 5.9. The alternative normal buffer is obtained by applying a Voronoi-based bump map on the object in 3D. We use the reconstructed environments of Figure 5.8 and our recompositing pipeline to modify shading buffers in the middle column. The result is visually similar to a re-rendering of the scene using the additional bump map, shown in the right column. A clear benefit of our approach is that it runs in real-time independently of the rendered scene complexity. In contrast, re-rendering takes from several seconds to several minutes depending on the scene complexity.

Figure 5.10 demonstrates three interactive local shape editing tools that act on a normal n = (n_x, n_y, n_z). Normal embossing is inspired by the LUMO system [START_REF] Scott F Johnston | Lumo: illumination for cel animation[END_REF]: it replaces the n_z coordinate with β n_z, where β ∈ (0, 1], and renormalizes the result to make the surface appear to "bulge".
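A minimal sketch of this embossing operation is given below (numpy assumed, function name illustrative); the bump-mapping and bilateral-smoothing tools described next modify the same normal buffer in a similar per-pixel fashion.

```python
import numpy as np

def emboss_normals(n, beta, mask=None):
    """Scale n_z by beta in (0,1] and renormalize, making the surface appear to bulge.
    n: (H,W,3) screen-space normals; mask: optional (H,W) boolean region of application."""
    out = n.copy()
    out[..., 2] *= beta
    out /= np.linalg.norm(out, axis=-1, keepdims=True)
    if mask is not None:
        out = np.where(mask[..., None], out, n)   # leave normals outside the mask untouched
    return out
```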
Bump mapping perturbs the normal buffer with an arbitrary height map, here a fading 2D ripple pattern (the same manipulation is applied in Figure 5.7 with a 2D Perlin noise). Bilateral smoothing works on projected normals n = (n_x, n_y), using an auxiliary depth buffer to preserve object contours.

In more complex scenes, objects may appear in each other's reflections. This is what occurs in Figure 5.11, which shows a Table top scene with various objects: a cup made of porcelain with a metal spoon, a reddish coated kettle with an aluminum handle, and a vase made of a glossy material exhibiting blurry reflections. Despite the increased complexity, our method still produces a plausible result when normals are modified with a noise texture on the cup, an embossed symbol on the kettle body and a carved pattern on the vase. The results remain plausible even when the material properties are edited, as shown in the right column where we decrease the diffuse intensity and increase the specular intensity. The reconstructed diffuse and reflection environments are shown combined in Figure 5.12, before and after material editing has been performed. Observe in particular how the reflections of nearby objects have been properly reconstructed. The reflected window appears stretched in the cup environment. This is because it maps to the highly curved rim of the cup. However, when reapplied to the same object, stretching goes unnoticed.

The Truck scene of Figure 5.1 is more challenging: not only are object parts in contact, but each covers a relatively small subset of surface orientations. Nevertheless, as shown in the reconstructed shading environments, our method manages to capture a convincing appearance that reproduces inter-reflections between different parts. This permits generating a plausible result when normals are modified to apply a noise texture and an embossed emblem to the truck body, and corrugations to the front and rear mudguards.

Performance

Reconstruction timings for all edited objects in the paper are given in Table 5.1, using a single CPU core on an Intel i7 @ 3.5 GHz. The reconstruction of the diffuse environment is negligible compared to that of the reflection environment. Our partial reconstruction could be easily optimized with a parallel implementation. The performance of the hole-filling process highly depends on the size of the hole; it could be greatly improved by using an adaptive sphere tessellation strategy. Nevertheless, reconstruction is not too performance-demanding as it is done only once in pre-process.

Figure 5.9: A Voronoi-based bump texture is mapped onto the red head and vase models in 3D, yielding an alternative normal buffer. Our approach is used to modify shading at the compositing stage in real-time, with a resulting appearance similar to re-rendering the object using the bump map (ground truth).

Figure 5.10: Our technique permits applying arbitrary modifications to the normal buffer, while still yielding plausible shading results. Normal embossing is applied to the eyebrow of the head, and the whole vase, resulting in an apparent bulging of local shape. Any bump texture may be applied as a decal to modify normals: we use a 2D fading ripple pattern, affecting both diffuse and reflection shading. Local shape details may also be removed: we apply a cross bilateral filter on normals, using an auxiliary depth buffer to preserve occluding contours.

Figure 5.11: The Table top scene (top), as well as a modified version (bottom) where a noise texture, an embossed symbol and a carved pattern have been respectively applied to the cup, kettle body and vase. In the middle column, we show the result of our approach (bottom) on the corresponding color image (top), using the reconstructed diffuse and reflection environments shown in Figure 5.12 (1st row). In the right column, we have edited the intensities of the diffuse and reflection components in both the original and modified scene. Reflections of nearby objects become more clearly apparent, as is also seen in Figure 5.12 (2nd row).

Figure 5.12: Combined diffuse and reflection environments reconstructed from the cup, kettle and vase objects of Figure 5.11.
The bottom row shows edited materials where the diffuse term is divided by 4 and the reflection term multiplied by 4: the supporting table and other nearby objects appear more clearly in the reflections.

Discussion and future work

We have demonstrated a first solution for the editing of surface shape at the compositing stage, based on environment reconstruction and real-time re-compositing. Our technique is limited in terms of the kinds of materials that we can work with and is quite dependent on the geometry of the input. We are restricted to homogeneous isotropic opaque materials. A first easy extension would be to treat spatially-varying materials, whereas more complex materials would require more involved improvements. Geometry restricts the quality and the viability of our reconstruction. If object shape is too simple, it will not provide enough shading information, which requires filling in wider regions. If object shape is too detailed with respect to image resolution, it will tend to reduce the accuracy of the reconstructed shading, as seen in the right column of Figure 5.8 when object complexity is increased. We only modify normals, which mimics geometry modifications without the displacement of vertices that would be needed in an ideal case. Similarly, we are not able to reproduce other effects related to visibility, such as inter-reflections. For further explanation of these limitations and their possible solutions we refer to Chapter 6. Besides the described limitations, as demonstrated in Section 5.2, our approach gives satisfying results in practical cases of interest, granting interactive exploration of local shape variations with real-time feedback. We believe it could already be useful to greatly shorten trial-and-error decisions in product design and movie production.

Chapter 6
Conclusions

We have introduced a middle-ground approach for the control of appearance; it works in between 2D image creation and 3D scene rendering. The use of auxiliary buffers (mostly normal buffers) situates our techniques between painting in 2D and the control of a 3D scene for rendering. Our technique is developed to manipulate appearance for arbitrary shaded images. We can work with artwork (MatCaps) and rendering (compositing), and we expect it can be easily extended to photographs. In Chapter 4 I have shown how to modify shading from an input MatCap to mimic modifications of lighting and material without having to retrieve them separately. In Chapter 5 I have shown how to recover single-view pre-filtered environment maps at the compositing stage, and how these pre-filtered environment maps are used to obtain plausible shading when modifying local geometry. In the following I will discuss the main limitations of our approach, as well as possible solutions. I will conclude by presenting a series of future work directions to extend our approach.

Discussion

In this section I enumerate the basic restrictions of our techniques. Some restrictions are inherited from the use of structures similar to pre-filtered environment maps. They limit the kind of materials that we can represent and restrict lighting to be distant. Nevertheless, we propose possible solutions to work with an extended set of materials (Section 6.1.1), as well as to reproduce effects related to local lighting (Section 6.1.4).
Alongside these limitations, we present other problems related to the separation of shading and material components (Section 6.1.2) and the filling of missing shading information (Section 6.1.3).

Non-radially symmetric and anisotropic materials

We store geometry-independent shading values into a spherical representation (dual paraboloid maps) that we use to manipulate appearance. Our representation can be seen as a pre-filtered environment map. Similarly to pre-filtered environment maps, we are restricted to work with opaque objects. These kinds of structures are not adapted to transparent or translucent objects, as those depend on complex light paths inside objects. Moreover, we have restricted the manipulations of MatCaps to radially symmetric BRDFs. Input MatCaps define shading tied to the camera. In order to enable modifications of lighting from an input MatCap, specifically rotation, we have turned MatCaps into a spherical representation that behaves as a pre-filtered environment lighting of a radially symmetric BRDF. Radially symmetric BRDFs permit the creation of 2D view-independent pre-filtered environment maps, and therefore enable the rotation of lighting independently of the view. The restriction to radially symmetric BRDFs also eases the estimation of material properties from a MatCap. Symmetry is also used to compute the correlation function between mean and variance, as variance in measured materials is computed as the average along the θ_i and φ_i dimensions. The radial symmetry is finally used in the definition of filters that mimic rougher materials by blurring using radial/circular filters.

To incorporate arbitrary isotropic materials in our work, we should start with a deeper study of variances in both θ_i and φ_i directions. We expect that a better understanding of variance will help us to define non-radial filters, and to estimate material properties for the θ_i and φ_i directions independently. In contrast, if we assume that MatCaps depict arbitrary isotropic BRDFs instead of radially symmetric ones, rotations of lighting will not be straightforward. A solution would be to ask the artist to depict the same MatCap for different view directions, but it would turn the appealing simplicity of MatCaps into a tedious process. In the compositing-stage technique we reconstruct pre-filtered environment maps for a fixed view direction. In this case, arbitrary isotropic BRDFs are possible since we are tied to the camera used for rendering. Nevertheless, this leaves out anisotropic materials, since anisotropy is linked to tangent fields. Therefore we would need to make use of tangent buffers alongside normal buffers. Moreover, we would need a 3D structure, instead of a 2D spherical representation. This increase in complexity will impact the reconstruction of geometry-independent shading. First, it increases the storage size while the input shading data stays similar, which makes the reconstructions more sparse. Second, partial reconstruction and filling-in techniques are not straightforward to extend to the required 3D structure. If we want to manipulate anisotropic BRDFs for arbitrary views, it would require five-dimensional structures in the naive case, since all possible views can be parametrized with 2 additional dimensions: the shading at a surface point would then depend on the 2D view direction and the 3D reference frame (normal and tangent). This increases the complexity even further.
Working with a 5D structure would not be straightforward, as opposed to the simplicity of a spherical representation. Anisotropic materials are not considered in our statistical approach. Their inclusion would imply studying BRDF slice statistics as variations of the viewing direction, instead of simply the viewing elevation angle. Moreover, it would require using a different database of measured materials, because the MERL database only contains isotropic BRDFs. However, at the time of writing, existing databases of anisotropic BRDFs, like the UTIA BTF Database [START_REF] Filip | Template-based sampling of anisotropic brdfs[END_REF], are not sampled densely enough or do not contain enough materials.

A different approach to increase the dimensionality of our approach would be to use a similar 2D spherical structure with a set of filters that adapt it to local viewing and anisotropy configurations. In the case of non-radially symmetric materials we would apply non-radial filters, and for anisotropic materials we would need to introduce local rotations as well. Non-radial symmetry can be seen in the left of Figure 6.1, where the spread of the BRDF differs between dimensions. Anisotropic materials (Figure 6.1, three remaining images) have the effect of rotating the filter kernel in our parametrization. This approach seems promising, as our goal is to mimic modifications from a basic shading, while producing plausible results.

Shading components

We treat both shading and BRDF as the addition of diffuse and specular components. At the compositing stage we obtain a perfect separation of diffuse and specular components thanks to the capabilities of the rendering engines. In contrast, we need to separate those components for measured materials as well as in MatCaps. Because of our simple heuristic on measured BRDF decomposition, we were forced to consider a subset of the MERL database. It would thus be interesting to devise clever decomposition schemes so that each component could be studied using a separate moment analysis. We have shown that our MatCap separation into low- and high-frequency components is a good approximation of diffuse and specular components. Despite that, we attribute all low-frequency content to the diffuse component, which is not the case in real-world materials. For instance, hazy gloss (see Section 6.2.1) is a low-frequency effect that belongs to the specular component.

Moreover, material reflectance is composed of more than just diffuse and specular components. It would be interesting to treat those different components separately. We can relate them to different effects like grazing-angle effects (e.g. asperity scattering), retro-reflection or off-specular reflection. We would need to be able to separate them in our statistical analysis to understand their effects better. In the case of manipulation of shading from rendering engine outputs, we could again take advantage of their capability to render them separately. As we have already discussed, techniques related to pre-filtered environment maps are not ready to work with translucent or transparent materials. Nevertheless, we are interested in trying to recover translucency shading. Depending on the material, it can look similar to a diffuse material. Moreover, it has already been shown by Khan et al. [START_REF] Erum Arif Khan | Image-based material editing[END_REF] that the human visual system would accept inaccurate variations of translucent materials as plausible.
Filling-in of missing shading

The construction of our geometry-independent shading structure, for both the MatCap and the compositing approach, requires the filling of some parts. The missing parts in MatCaps depend on the estimated roughness of the depicted material, which describes a circle in the back paraboloid map that we called the 'blind spot'. When retrieving shading information from renderings at the compositing stage we are restricted by the input geometry. The corresponding missing parts may be bigger than the ones defined by the 'blind spot' and their shapes are arbitrary, which makes the filling-in more complex. This is illustrated in Figure 6.2, which shows the reconstructed reflection environment for the spoon in the Table top scene of Figure 5.11 shown in Chapter 5. One way to improve the filling-in would be to take into account structured shading, as shown in Figure 6.3, where the structure of the horizontal line should be taken into account in order to prevent it from fading out. It is important to note that in the case of MatCaps this is not a blocking issue, since users may correct in-painting results by hand, which is consistent with the technique as it is an artistic approach.

Visibility and inter-reflections

Our approach does not offer any solution to control or mimic local light transport. We do not take into account visibility effects like shadows or inter-reflections. Working with pre-filtered environment maps requires assuming distant illumination. As a solution, when working with MatCaps we plan to use different MatCaps to shade illuminated and shadowed parts of an object. Similarly, for the case of compositing we plan to create separate PEMs for shadowed and un-shadowed parts thanks to the information obtained in auxiliary buffers. Geometry modifications would require displacing vertices in addition to modifying normals. A plausible solution would be to apply the displacement of vertices to update occlusion buffers. Another limitation is the control of inter-reflections when recovering shading at the compositing stage. It could be interesting to recover shading for those specific zones. Available information is sparse and would not be enough to recover a good-quality PEM. A solution would be to get more involved in the rendering process by recognizing these parts and outputting more shading samples that characterize the inter-reflection zones.

Future work

As long-term goals we would like to extend our analysis of BRDFs and their impact on shading (Section 6.2.1). A second major future goal is to be able to manipulate more complex 3D scenes. The most important challenge would be to deal with the spatial shading variations due to both material and lighting, not just angular variations (Section 6.2.2). Finally we explain how our technique could be useful to other applications in graphics or even perception (Section 6.2.3).

Extended statistical analysis

We have performed our analysis by considering simple shapes (spheres). Therefore we have studied the implication of material and lighting in shading without taking geometry into account. When considering more complex shapes, our observations may still be similar by considering a surface patch on the object. However, surface curvatures will impose restrictions on the window sizes we use for establishing relationships between material/lighting and shading on this patch.
In particular, high curvatures will lead to rapid changes of the view direction in surface tangent space. In such situations, our local approximation will be valid only in small 1D windows. This suggests that the effect of a BRDF will tend to be less noticeable on bumpy surfaces, which is consistent with existing perceptual experiments [START_REF] Vangorp | The influence of shape on the perception of material reflectance[END_REF].

We have considered orthographic or orthographic-corrected projections throughout the thesis. To get complete relationships between shading and its components, we should consider the effect of perspective projection on reflected radiance. This will of course depend on the type of virtual sensor used. We may anticipate that foreshortening will tend to compress radiance patterns at grazing angles. This suggests that some grazing-angle effects will get 'squeezed' into a thin image region around the silhouette.

We have focused on moments up to order 2, but as shown in Appendix A the analysis can be extended to higher-order moments to study skewness and kurtosis. Skewness quantifies the asymmetry of a distribution, whereas kurtosis measures its peakedness. We have shown how the energy, mean and variance of a BRDF slice are perceived in the image, as coloring, warping and blurring. One question is whether similar perceptible effects could be related to skewness and kurtosis. Studying these effects would require introducing both skewness and kurtosis into the statistical model and consequently into the Fourier analysis. This would increase the complexity of the statistical analysis; hence we have performed a perceptual study to first identify whether they have a perceptible effect [START_REF] Vangorp | Specular kurtosis and the perception of hazy gloss[END_REF]. We have focused on kurtosis with the idea of identifying it as a cue to hazy gloss. We performed a series of experiments with a BRDF model made of a pair of Gaussian lobes. The difference between the lobe intensities and their spreads produces different kurtosis and different haze effects, as shown in Figure 6.4a. Using these stimuli we have studied how human subjects perceive haziness of gloss. Our conclusion is that perceived haziness does not vary according to kurtosis, as shown in Figure 6.4b and Figure 6.4c. We suggest that haziness depends on the separation of the specular component into two sub-components, which are not directly the two Gaussians used to define the BRDF. Instead, haziness effects would be characterized by a central peak plus a wide component characteristic of the halo effect of haziness. If this hypothesis is correct, then maybe other sub-decompositions can be performed for other BRDF components.

Spatially-varying shading

In this thesis we have only considered shading as variations in the angular domain. This approximation has led us to satisfactory results in simple scenes. However, for a good representation and manipulation of shading in complex scenes we should consider spatial variations as well. As we have shown, variations of shading depend on variations of material and of lighting. In the case of variations of materials, our compositing approach would be easily extended to objects with spatially-varying reflectance, provided that diffuse (resp. specular) reflectance buffers are output separately by the renderer. Our method would then be used to reconstruct multiple diffuse (resp.
specular) shading buffers, and the final shading would be obtained through multiplication by reflectance at the re-compositing stage. When considering variations of lighting, we may separate shading depending on the origin of the incoming radiance. We can distinguish and manipulate differently the shading due to local light sources or the reflection of close objects. This would help to deal with a problem that arises with extended objects: their incoming lighting may vary spatially and come from completely different sources. Ideally we should store shading in a 4D representation for the variations in the spatial and angular domains, which can be seen as a light field. This is equivalent to reconstructing a pre-filtered environment map per pixel instead of per surface. To deal with this issue, we would like to explore the reconstruction of multiple pre-filtered environment maps for extended surfaces and recombining them through warping depending on pixel locations.

We have considered out-of-the-box render buffers and have focused on algorithms working at the compositing stage. We would also like to work at the rendering stage to export more detailed shading information while retaining a negligible impact on rendering performance. For example we would like to export information about light paths, in the sense that we could have more information about where the incoming radiance came from. Another useful solution would be to have a fast pre-analysis of the render output to know which parts will not provide enough information to recover shading, and to output more information for these parts. Rendering engines are usually made to generate a set of images that will form an animation. We plan to extend our technique to animations, which will require a temporally consistent behavior.

New applications

We believe our approach could prove useful in a number of applications in Computer Graphics and Visual Perception.

Dynamic creation of MatCaps

We have shown how to use existing MatCaps and later on apply our technique to enable modification of lighting and material. Instead, we could consider the creation of MatCaps directly on a spherical representation (i.e. dual paraboloid maps) using our tools. The rotation of the MatCap would avoid problems with the blind spot by permitting it to be filled during the creation process. When using paint brushes or light sources to create shading, the material roughness could be taken into account to blur the created shading accordingly.

Material estimation on photographs

We estimate a few material properties from MatCaps by making assumptions on lighting. We believe that this technique could be extended to other kinds of inputs, like renderings or, more interestingly, photographs. This will require some knowledge of lighting moments, either explicit or hypothesized. This technique should be complemented with a geometry estimation from images. In any case, for a correct behavior it will require the study of the impact of geometry on shading.

Editing of measured materials

Moments have proved to be a good method to analyze the BRDF effect on shading. We believe that they could be used as a way to edit measured BRDFs, by finding operators in BRDF space that preserve some moments while modifying others. Ideally, users could be granted control over a measured BRDF directly through its moment profiles. A sufficient accuracy of these edits would require a better decomposition of BRDFs.
Perceptual studies

Finally, we believe that BRDF moments may also be well adapted to the study of material perception. The end goal of such an approach is to explicitly connect perceptually-relevant gloss properties to measurable BRDF properties. Experiments should be conducted to evaluate the BRDF moments humans are most sensitive to, and to check whether the statistical analysis can predict perceived material appearance.

Discussion

It would be interesting to study potential correlations between moments of different orders, as we have done for the mean and the variance. We have already observed interesting deviations from simple behaviors at grazing angles in skewness and kurtosis profiles. They may be related to known properties such as off-specular peaks, but could as well be due to hemispherical clamping once again. Moreover, we would like to extend our local Fourier analysis to include co-skewness and co-kurtosis tensors, in order to characterize their effects in the image.
Figure 6.2: The reconstructed reflection environment for the spoon object in Figure 5.11 does not contain enough shading information to allow editing, even after hole-filling and regularization.
Figure 6.4: In our perceptual experiments we use a two-lobe Gaussian model BRDF, controlled separately by its intensity and spread, to produce a hazy gloss appearance; the sum of intensities and the spread of the wider lobe are kept constant. (a) Stimuli are presented with the intensity of the narrow lobe increasing from bottom to top and the difference in spread increasing from left to right. (b) Kurtosis measured for the BRDF of our stimuli, compared with (c) how much subjects rated 'haziness' for each material.
Figure A.1: Skewness (a) and kurtosis (b) profiles computed from our selected BRDFs.
(Results figure) Original/modified normals and before/after shape editing with edited material: in a modified version of the scene, a noise texture, an embossed symbol and a carved pattern are applied to the cup, kettle body and vase; the middle column shows the result of our approach on the corresponding color image, using the reconstructed diffuse and reflection environments of Figure 5.12 (1st row); in the right column the intensities of the diffuse and reflection components are edited in both scenes, making reflections of nearby objects more clearly apparent, as also seen in Figure 5.12 (2nd row).
(…) The process highly depends on the size of the hole; it could be greatly improved by using an adaptive sphere tessellation strategy. Nevertheless, reconstruction is not too performance-demanding, as it is done only once in a pre-process.

Skewness and Kurtosis Analysis of Measured BRDFs
We have shown how to compute 2D moments of arbitrary order. So far we have used moments up to order 2 to define the statistical properties of energy, mean and variance. Here we extend our analysis of measured materials to moments of order 3 and 4, which define skewness and kurtosis. These appear to be important properties of material appearance, related respectively to the asymmetry and the peakedness of a BRDF slice. Co-skewness and co-kurtosis are defined as the standardized moment tensors of order 3 and 4 respectively. Standardized moments are computed by centering fr on its mean and scaling it by the respective variances; since in our case Σ_{1,1} ≈ 0, they can be written in terms of σ_θ = Σ_{2,0} and σ_φ = Σ_{0,2}. The coefficients of the co-skewness and co-kurtosis tensors are then given by γ_{n,m}[fr] for n+m = 3 and n+m = 4 respectively. It is common to use the modern definition of kurtosis, also called excess kurtosis, which equals 0 for a Normal distribution. In our case (i.e., with µ_{0,1} = 0 and Σ_{1,1} = 0), it can be shown that the excess kurtosis coefficients are given by γ_{4,0} − 3, γ_{3,1}, γ_{2,2} − 1, γ_{1,3} and γ_{0,4} − 3. For simplicity, we make an abuse of notation and refer to the excess kurtosis coefficients as γ_{n,m}[fr] for n+m = 4.
The co-skewness tensor characterizes asymmetries of the BRDF slices in different dimensions; the profiles of two of its coefficients, γ_{3,0} and γ_{1,2}, are shown in the skewness profiles of Figure A.1.

Conclusions
Plots of co-skewness and co-kurtosis follow the same insights described in Section 3.3.4. First, moments where m is odd are close to null, similar to µ_{0,1} and Σ_{1,1}; this reflects the symmetry about the scattering plane. The second symmetry, at incident view, is likewise confirmed by the fact that co-skewness starts at 0 while co-kurtosis starts at a common value, similarly to the mean and variance respectively. In general, all deviations from a single profile occur toward grazing angles, and this effect is stronger for moments related to θ than for those along φ. We thus conjecture that such grazing-angle deviations are due in part to the clamping of directions by the hemispherical boundaries; indeed, such clamping has more influence at grazing angles in directions parallel to θ_i (see Fig. 3.3).
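A minimal sketch (ours) of the standardized-moment computation discussed above, for discrete weighted samples rather than the continuous BRDF-slice integrals, and using the usual normalization by standard deviations; the function name and the Gaussian test sample are illustrative assumptions. For a Gaussian, the co-skewness coefficients and the excess-kurtosis corrections γ_{4,0} − 3, γ_{2,2} − 1, γ_{0,4} − 3 are all close to 0, matching the convention recalled above.

```python
import numpy as np

def standardized_moment(w, x, y, n, m):
    """Standardized 2D moment gamma_{n,m} of a weighted sample (x, y): center on
    the mean and scale by the respective standard deviations."""
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return float(np.sum(w * ((x - mx) / sx) ** n * ((y - my) / sy) ** m))

# Sanity check on an isotropic Gaussian sample: co-skewness coefficients and the
# excess-kurtosis corrections gamma_{4,0}-3, gamma_{2,2}-1, gamma_{0,4}-3 are ~ 0.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(200_000), rng.standard_normal(200_000)
w = np.ones_like(x)
print(standardized_moment(w, x, y, 3, 0))         # skewness, ~ 0
print(standardized_moment(w, x, y, 4, 0) - 3.0)   # excess kurtosis, ~ 0
print(standardized_moment(w, x, y, 2, 2) - 1.0)   # ~ 0
```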
186,161
[ "781504" ]
[ "3102" ]
01116414
en
[ "math", "qfin" ]
2024/03/04 23:41:48
2017
https://hal.science/hal-01116414v4/file/Huang_Nguyen_2017.pdf
Yu-Jui Huang (email: [email protected])
Adrien Nguyen-Huu

Time-consistent stopping under decreasing impatience

Keywords: time inconsistency, optimal stopping, hyperbolic discounting, decreasing impatience, subgame-perfect Nash equilibrium
JEL classification: C61, D81, D90, G02
2010 Mathematics Subject Classification: 60G40, 91B06

Introduction

Time inconsistency is known to exist in stopping decisions, such as casino gambling in [START_REF] Barberis | A model of casino gambling[END_REF] and [START_REF] Ebert | Until the bitter end: On prospect theory in a dynamic context[END_REF], optimal stock liquidation in [START_REF] Xu | Optimal stopping under probability distortion[END_REF], and real options valuation in [START_REF] Grenadier | Investment under uncertainty and time-inconsistent preferences[END_REF]. A general treatment, however, has not been proposed in continuous-time models. In this article, we develop a dynamic theory for time-inconsistent stopping problems in continuous time, under non-exponential discounting. In particular, we focus on log sub-additive discount functions (Assumption 3.1), which capture decreasing impatience, an acknowledged feature of empirical discounting in Behavioral Economics; see e.g. [START_REF] Thaler | Some empirical evidence on dynamic inconsistency[END_REF], [START_REF] Loewenstein | Anomalies: Intertemporal choice[END_REF], and [START_REF] Loewenstein | Anomalies in intertemporal choice: evidence and an interpretation[END_REF]. Hyperbolic and quasi-hyperbolic discount functions are special cases under our consideration.
The seminal work of Strotz [START_REF] Strotz | Myopia and inconsistency in dynamic utility maximization[END_REF] identifies three types of agents under time inconsistency: the naive, the pre-committed, and the sophisticated. Among them, only the sophisticated agent takes the possible change of future preferences seriously and works on consistent planning: she aims to find a strategy that, once enforced over time, none of her future selves would want to deviate from. How to precisely formulate such a sophisticated strategy had been a challenge in continuous time. For stochastic control, Ekeland and Lazrak [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF] resolved this issue by defining sophisticated controls as subgame-perfect Nash equilibria in a continuous-time inter-temporal game of multiple selves. This has spurred vibrant research on time inconsistency in mathematical finance; see e.g. [START_REF] Ekeland | Investment and consumption without commitment[END_REF], [START_REF] Ekeland | Time-consistent portfolio management[END_REF], [START_REF] Hu | Time-inconsistent stochastic linear-quadratic control[END_REF], [START_REF] Yong | Time-inconsistent optimal control problems and the equilibrium HJB equation[END_REF], [START_REF] Björk | Mean-variance portfolio optimization with state-dependent risk aversion[END_REF], [START_REF] Dong | Time-inconsistent portfolio investment problems[END_REF], [START_REF] Björk | A theory of Markovian time-inconsistent stochastic control in discrete time[END_REF], and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF]. There is, nonetheless, no equivalent development for stopping problems.
This paper contributes to the literature of time inconsistency in three ways. First, we provide a precise definition of sophisticated stopping policy (or, equilibrium stopping policy) in continuous time (Definition 3.2). Specifically, we introduce the operator Θ in (3.7), which describes the game-theoretic reasoning of a sophisticated agent. Sophisticated policies are formulated as fixed points of Θ, which connects to the concept of subgame-perfect Nash equilibrium invoked in [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF]. Second, we introduce a new, iterative approach for finding equilibrium strategies. For any initial stopping policy τ , we apply the operator Θ to τ repetitively until it converges to an equilibrium stopping policy. Under appropriate conditions, this fixed-point iteration indeed converges (Theorem 3.1), which is the main result of this paper. Recall that the standard approach for finding equilibrium strategies in continuous time is solving a system of non-linear equations, as proposed in [START_REF] Ekeland | Investment and consumption without commitment[END_REF] and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF]. Solving this system of equations is difficult; and even when it is solved (as in the special cases in [START_REF] Ekeland | Investment and consumption without commitment[END_REF] and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF]), we only obtain one particular equilibrium, and it is unclear how other equilibrium strategies can be found. Our iterative approach can be useful here: we find different equilibria simply by starting the fixed-point iteration with different initial strategies τ . In some cases, we are able to find all equilibria; see Proposition 4.2. Third, when an agent starts to do game-theoretic reasoning and look for equilibrium strategies, she is not satisfied with an arbitrary equilibrium. Instead, she works on improving her initial strategy to turn it into an equilibrium. This improving process is absent from [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF], [START_REF] Ekeland | Investment and consumption without commitment[END_REF], [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF], and subsequent research, although well-known in Game Theory as the hierarchy of strategic reasoning in [START_REF] Stahl | Evolution of smart-n players[END_REF] and [START_REF] Stahl | Experimental evidence on players' models of other players[END_REF]. Our iterative approach specifically represents this improving process: for any initial strategy τ , each application of Θ to τ corresponds to an additional level of strategic reasoning. As a result, the iterative approach complements the existing literature of time inconsistency in that it not only facilitates the search for equilibrium strategies, but provides "agent-specific" equilibria: it assigns one specific equilibrium to each agent according to her initial behavior. Upon completion of our paper, we noticed the recent work Pedersen and Peskir [START_REF]Optimal mean-variance selling strategies[END_REF] on mean-variance optimal stopping. They introduced "dynamic optimality" to deal with time inconsistency. 
As explained in detail in [START_REF]Optimal mean-variance selling strategies[END_REF], this new concept is different from consistent planning in Strotz [START_REF] Strotz | Myopia and inconsistency in dynamic utility maximization[END_REF], and does not rely on game-theoretic modeling. Therefore, our equilibrium stopping policies are different from their dynamically optimal stopping times. That being said, a few connections between our paper and [START_REF]Optimal mean-variance selling strategies[END_REF] do exist, as pointed out in Remarks 2.2, 3.2, and 4.4. The paper is organized as follows. In Section 2, we introduce the setup of our model, and demonstrate time inconsistency in stopping decisions through examples. In Section 3, we formulate the concept of equilibrium for stopping problems in continuous time, search for equilibrium strategies via fixed-point iterations, and establish the required convergence result. Section 4 illustrates our theory thoroughly in a real options model. Most of the proofs are delegated to appendices. Preliminaries and Motivation Consider the canonical space Ω := {ω ∈ C([0, ∞); R d ) : ω 0 = 0}. Let {W t } t≥0 be the coordinate mapping process W t (ω) = ω t , and F W = {F W s } s≥0 be the natural filtration generated by W . Let P be the Wiener measure on (Ω, F W ∞ ), where F W ∞ := s≥0 F W s . For each t ≥ 0, we introduce the filtration F t,W = {F t,W s } s≥0 with F t,W s = σ(W u∨t -W t : 0 ≤ u ≤ s), and let F t = {F t s } s≥0 be the P-augmentation of F t,W . We denote by T t the collection of all F t -stopping times τ with τ ≥ t a.s. For the case where t = 0, we simply write F 0 = {F 0 s } s≥0 as F s = {F s } s≥0 , and T 0 as T . Remark 2.1. For any 0 ≤ s ≤ t, F t s is the σ-algebra generated by only the P-negligible sets. Moreover, for any s, t ≥ 0, F t s -measurable random variables are independent of F t ; see Bouchard and Touzi [8, Remark 2.1] for a similar set-up. Consider the space X := [0, ∞) × R d , equipped with the Borel σ-algebra B(X). Let X be a continuous-time Markov process given by X s := f (s, W s ), s ≥ 0, for some measurable function f : X → R. Or, more generally, for any τ ∈ T and R d -valued F τ -measurable ξ, let X be the solution to the stochastic differential equation (2.1) dX t = b(t, X t )dt + σ(t, X t )dW t for t ≥ τ, with X τ = ξ a.s. We assume that b : X → R and σ : X → R satisfy Lipschitz and linear growth conditions in x ∈ R d , uniformly in t ∈ [0, ∞). Then, for any τ ∈ T and R d -valued F τ -measurable ξ with E[|ξ| 2 ] < ∞, (2.1 ) admits a unique strong solution. For any (t, x) ∈ X, we denote by X t,x the solution to (2.1) with X t = x, and by E t,x the expectation conditioned on X t = x. Classical Optimal Stopping Consider a payoff function g : R d → R, assumed to be nonnegative and continuous, and a discount function δ : R + → [0, 1], assumed to be continuous, decreasing, and satisfy δ(0) = 1. Moreover, we assume that (2.2) E t,x sup t≤s≤∞ δ(s -t)g(X s ) < ∞, ∀(t, x) ∈ X, where we interpret δ(∞t)g(X t,x ∞ ) := lim sup s→∞ δ(st)g(X t,x s ); this is in line with Karatzas and Shreve [START_REF] Karatzas | Methods of mathematical finance[END_REF]Appendix D]. Given (t, x) ∈ X, classical optimal stopping concerns if there is a τ ∈ T t such that the expected discounted payoff (2.3) J(t, x; τ ) := E t,x [δ(τ -t)g(X τ )] can be maximized. The associated value function (2.4) v(t, x) := sup τ ∈T t J(t, x; τ ) has been widely studied, and the existence of an optimal stopping time is affirmative. 
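A minimal numerical sketch of the expected discounted payoff (2.3): it estimates J(t, x; τ) by Monte Carlo for the simple deterministic rule τ = t + 1, assuming X = |W| (so that, given X_t = x, X_{t+Δ} has the law of |x + √Δ Z| with Z standard normal), g(x) = x, and the hyperbolic discount δ(s) = 1/(1 + βs) used in the examples below. The function name and all numerical parameters are our own illustrative choices.

```python
import numpy as np

def J_fixed_delay(x, delay=1.0, beta=1.0, n=1_000_000, seed=0):
    """Monte Carlo estimate of (2.3) for tau = t + delay, with X = |W|,
    g(x) = x and delta(s) = 1/(1 + beta*s)."""
    z = np.random.default_rng(seed).standard_normal(n)
    x_tau = np.abs(x + np.sqrt(delay) * z)      # sample of X_{t+delay} given X_t = x
    return float(np.mean(x_tau / (1.0 + beta * delay)))

print(J_fixed_delay(0.5))   # compare with the value g(x) = 0.5 of stopping at once
```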
The following is a standard result taken from [START_REF] Karatzas | Methods of mathematical finance[END_REF]Appendix D] and [START_REF] Peskir | Optimal stopping and free-boundary problems[END_REF]Chapter I.2]. Proposition 2.1. For any (t, x) ∈ X, let {Z t,x s } s≥t be a right-continuous process with (2.5) Z t,x s (ω) = ess sup τ ∈Ts E s,X t,x s (ω) [δ(τ -t)g(X τ )] a.s. ∀s ≥ t, and define τ t,x ∈ T t by τ t,x := inf s ≥ t : δ(s -t)g(X t,x s ) = Z t,x s . (2.6) Then, τ t,x is an optimal stopping time of (2.4), i.e. (2.7) J(t, x; τ t,x ) = sup τ ∈T t J(t, x; τ ). Moreover, τ t,x is the smallest, if not unique, optimal stopping time. Remark 2.2. The classical optimal stopping problem (2.4) is static in the sense that it involves only the preference of the agent at time t. Following the terminology of Definition 1 in Pedersen and Peskir [START_REF]Optimal mean-variance selling strategies[END_REF], τ t,x in (2.6) is "statically optimal". Time Inconsistency Following Strotz [START_REF] Strotz | Myopia and inconsistency in dynamic utility maximization[END_REF], a naive agent solves the classical problem (2.4) repeatedly at every moment as time passes by. That is, given initial (t, x) ∈ X, the agent solves sup τ ∈Ts J(s, X t,x s ; τ ) at every moment s ≥ t. By Proposition 2.1, the agent at time s intends to employ the stopping time τ s,X t,x s ∈ T s , for all s ≥ t. This raises the question of whether optimal stopping times obtained at different moments, τ t,x and τ t ′ ,X t,x t ′ with t ′ > t, are consistent with each other. Definition 2.1 (Time Consistency). The problem (2.4) is time-consistent if for any (t, x) ∈ X and s > t, τ t,x (ω) = τ s,X t,x s (ω) (ω) for a.e. ω ∈ { τ t,x ≥ s}. We say the problem (2.4) is timeinconsistent if the above does not hold. In the classical literature of Mathematical Finance, the discount function usually takes the form δ(s) = e -ρs for some ρ ≥ 0. This already guarantees time consistency of (2.4). To see this, first observe the identity (2.8) δ(s)δ(t) = δ(s + t) ∀s, t ≥ 0. Fix (t, x) ∈ X and pick t ′ > t such that P[ τ t,x ≥ t ′ ] > 0. For a.e. ω ∈ { τ t,x ≥ t ′ }, set y := X t,x t ′ (ω). We observe from (2.6), (2.5), and X t,x s (ω) = X t ′ ,y s (ω) that τ t,x (ω) = inf s ≥ t ′ : δ(s -t)g(X t ′ ,y s (ω)) ≥ ess sup τ ∈Ts E s,X t ′ ,y s (ω) [δ(τ -t)g(X τ )] , τ t ′ ,y (ω) = inf s ≥ t ′ : δ(s -t ′ )g(X t ′ ,y s (ω)) ≥ ess sup τ ∈Ts E s,X t ′ ,y s (ω) [δ(τ -t ′ )g(X τ )] . Then (2.8) guarantees τ t,x (ω) = τ t ′ ,y (ω), as δ(τ -t) δ(s-t) = δ(τ -t ′ ) δ(s-t ′ ) = δ(τs). For non-exponential discount functions, the identity (2.8) no longer holds, and the problem (2.4) is in general timeinconsistent. Example 2.1 (Smoking Cessation). Suppose a smoker has a fixed lifetime T > 0. Consider a deterministic cost process X s := x 0 e 1 2 s , s ∈ [0, T ], for some x 0 > 0. Thus, we have X t,x s = xe 1 2 (s-t) for s ∈ [t, T ]. The smoker can (i) quit smoking at some time s < T (with cost X s ) and die peacefully at time T (with no cost), or (ii) never quit smoking (thus incurring no cost) but die painfully at time T (with cost X T ). With hyperbolic discount function δ(s) := 1 1+s for s ≥ 0, (2.4) becomes minimizing cost inf s∈[t,T ] δ(s -t)X t,x s = inf s∈[t,T ] xe 1 2 (s-t) 1 + (s -t) . By basic Calculus, the optimal stopping time τ t,x is given by (2.9) τ t,x = t + 1 if t < T -1, T if t ≥ T -1. Time inconsistency can be easily observed, and it illustrates the procrastination behavior: the smoker never quits smoking. . 
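The optimal time (2.9) in Example 2.1 can be checked numerically: a grid search over s ∈ [t, T] of the discounted cost x e^{(s−t)/2}/(1 + (s − t)) returns a minimizer near t + 1 whenever t < T − 1, and T otherwise. The lifetime T, the value of x and the grid below are our own illustrative choices.

```python
import numpy as np

# Grid-search check of (2.9): minimize x * exp((s-t)/2) / (1 + (s-t)) over [t, T].
T, x = 10.0, 1.0
for t in (0.0, 3.0, 7.5, 9.5):
    s = np.linspace(t, T, 200_001)
    cost = x * np.exp(0.5 * (s - t)) / (1.0 + (s - t))
    s_min = s[np.argmin(cost)]
    # expected: s_min ~ t + 1 when t < T - 1, and s_min = T otherwise
    print(f"t = {t:4.1f}  ->  minimizer s = {s_min:.3f}")
```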
This can be viewed as a real options problem in which the management of a large non-profitable insurance company has the intention to liquidate or sell the company, and would like to decide when to do so; see the explanations under (4.2) for details. By the argument in Pedersen and Peskir [START_REF] Pedersen | Solving non-linear optimal stopping problems by the method of time-change[END_REF], we prove in Proposition 4.1 below that the optimal stopping time τ x , defined in (2.6) with t = 0, has the formula τ x = inf s ≥ 0 : X x s ≥ √ 1 + s . If one solves the same problem at time t > 0 with X t = x ∈ R + , the optimal stopping time is τ t,x = t + τ x = inf{s ≥ t : X t,x s ≥ 1 + (st)}. The free boundary s → 1 + (st) is unusual in its dependence on initial time t. From Figure 1, we clearly observe time inconsistency: τ t,x (ω) and τ t ′ ,X t,x t ′ (ω) do not agree in general, for any t ′ > t, as they correspond to different free boundaries. As proposed in Strotz [START_REF] Strotz | Myopia and inconsistency in dynamic utility maximization[END_REF], to deal with time inconsistency, we need a strategy that is either pre-committed or sophisticated. A pre-committed agent finds τ t,x in (2.6) at time t, and forces her future selves to follow τ t,x through a commitment mechanism (e.g. a contract). By contrast, a sophisticated agent works on "consistent planning": she anticipates the change of future preferences, and aims to find a stopping strategy that once being enforced, none of her future selves would want to deviate from it. How to precisely formulate sophisticated stopping strategies has been a challenge in continuous time, and the next section focuses on resolving this. Equilibrium Stopping Policies Objective of a Sophisticated Agent Since one may re-evaluate and change her choice of stopping times over time, her stopping strategy is not a single stopping time, but a stopping policy defined below. Definition 3.1. A Borel measurable function τ : X → {0, 1} is called a stopping policy. We denote by T (X) the set of all stopping policies. Given current time and state (t, x) ∈ X, a policy τ ∈ T (X) governs when an agent stops: the agent stops at the first time τ (s, X t,x s ) yields the value 0, i.e. at the moment Lτ (t, x) := inf s ≥ t : τ (s, X t,x s ) = 0 . (3.1) To show that Lτ (t, x) is a well-defined stopping time, we introduce the set (3.2) ker(τ ) := {(t, x) ∈ X : τ (t, x) = 0}. It is called the kernel of τ , which is the collection of (t, x) at which the policy τ suggests immediate stopping. Then, Lτ (t, x) can be expressed as (3.3) Lτ (t, x) = inf s ≥ t : (s, X t,x s ) ∈ ker(τ ) . Lemma 3.1. For any τ ∈ T (X) and (t, x) ∈ X, ker(τ ) ∈ B(X) and Lτ (t, x) ∈ T t . Proof. The Borel measurability of τ ∈ T (X) immediately implies ker(τ ) ∈ B(X). In view of (3.3), Lτ (t, x)(ω) = inf {s ≥ t : (s, ω) ∈ E}, where E := {(r, ω) ∈ [t, ∞) × Ω : (r, X t,x r (ω)) ∈ ker(τ )}. With ker(τ ) ∈ B(X) and the process X t,x being progressively measurable, E is a progressively measurable set. Since the filtration F t satisfies the usual conditions, [2, Theorem 2.1] asserts that Lτ (t, x) is an F t -stopping time. Remark 3.1 (Naive Stopping Policy). Recall the optimal stopping time τ t,x defined in (2.6) for all (t, x) ∈ X. Define τ ∈ T (X) by (3.4) τ (t, x) := 0, if τ t,x = t, 1, if τ t,x > t. Note that τ : X → {0, 1} is indeed Borel measurable because τ t,x = t if and only if (t, x) ∈ (t, x) ∈ X : g(x) = sup τ ∈Tt E t,x [δ(τ -t)g(X τ )] ∈ B(X). Following the standard terminology (see e.g. 
[START_REF] Strotz | Myopia and inconsistency in dynamic utility maximization[END_REF], [START_REF] Pollak | Consistent planning[END_REF]), we call τ the naive stopping policy as it describes the behavior of a naive agent, discussed in Subsection 2.2. Remark 3.2. Despite its name, the naive stopping policy τ may readily satisfy certain optimality criterion. For example, "dynamic optimality" recently proposed in Pedersen and Peskir [START_REF]Optimal mean-variance selling strategies[END_REF] can be formulated in our case as follows: τ ∈ T (X) is dynamically optimal if there is no other π ∈ T (X) such that P t,x J Lτ (t, x), X t,x Lτ (t,x) ; Lπ Lτ (t, x), X t,x Lτ (t,x) > g(X t,x Lτ (t,x) ) > 0 for some (t, x) ∈ X. By (3.4) and Proposition 2.1, τ is dynamically optimal as the above probability is always 0. Example 3.1 (Real Options Model, Continued). Recall the setting of Example 2.2. A naive agent follows τ ∈ T (X), and the actual moment of stopping is L τ (t, x) = inf{s ≥ t : τ (s, X t,x s ) = 0} = inf{s ≥ t : X t,x s ≥ 1}, which differs from the agent's original decision τ t,x in Example 2.2. We can now introduce equilibrium policies. Suppose that a stopping policy τ ∈ T (X) is given to a sophisticated agent. At any (t, x) ∈ X, the agent carries out the game-theoretic reasoning: "assuming that all my future selves will follow τ ∈ T (X), what is the best stopping strategy at current time t in response to that?" Note that the agent at time t has only two possible actions: stopping and continuation. If she stops at time t, she gets g(x) immediately. If L * τ (t, x) := inf s > t : τ (s, X t,x s ) = 0 = inf s > t : (s, X t,x s ) ∈ ker(τ ) , (3.5) leading to the payoff J(t, x; L * τ (t, x)) = E t,x δ(L * τ (t, x) -t)g(X L * τ (t,x) ) . By the same argument in Lemma 3.1, L * τ (t, x) is a well-defined stopping time in T t . Note the subtle difference between Lτ (t, x) and L * τ (t, x): with the latter, the agent at time t simply chooses to continue, with no regard to what τ ∈ T (X) suggests at time t. This is why we have "s > t" in (3.5), instead of "s ≥ t" in (3.1). Now, we separate the space X into three distinct regions S τ := {(t, x) ∈ X : g(x) > J(t, x; L * τ (t, x))}, C τ := {(t, x) ∈ X : g(x) < J(t, x; L * τ (t, x))}, I τ := {(t, x) ∈ X : g(x) = J(t, x; L * τ (t, x))}. (3.6) Some conclusions can be drawn: 1. If (t, x) ∈ S τ , the agent should stop immediately at time t. 2. If (t, x) ∈ C τ , the agent should continue at time t. 3. If (t, x) ∈ I τ , the agent is indifferent between stopping and continuation at current time; there is then no incentive for the agent to deviate from the originally assigned stopping strategy τ (t, x). To summarize, for any (t, x) ∈ X, the best stopping strategy at current time (in response to future selves following τ ∈ T (X)) is (3.7) Θτ (t, x) :=      0 for (t, x) ∈ S τ 1 for (t, x) ∈ C τ τ (t, x) for (t, x) ∈ I τ . . The next result shows that Θτ : X → {0, 1} is again a stopping policy. Lemma 3.2. For any τ ∈ T (X), S τ , C τ , and I τ belong to B(X), and Θτ ∈ T (X). Proof. Since L * τ (t, x) is the first hitting time to the Borel set ker(τ ), the map (t, x) → J(t, x; L * τ (t, x)) = E t,x [δ(L * τ (t, x) -t)g(X L * τ (t,x) ) ] is Borel measurable, and thus S τ , I τ , and C τ all belong to B(X). Now, by (3.7), ker(Θτ ) = S τ ∪ (I τ ∩ ker(τ )) ∈ B(X), which implies that Θτ ∈ T (X). By Lemma 3.2, Θ can be viewed as an operator acting on the space T (X). For any initial τ ∈ T (X), Θ : T (X) → T (X) generates a new policy Θτ ∈ T (X). 
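The classification (3.6) underlying Θ can be illustrated numerically for the one-threshold policies that reappear in Section 4: stop as soon as X = |W| reaches a level a, with g(x) = x and hyperbolic discounting δ(s) = 1/(1 + βs). The sketch below estimates J(x; L*τ_a(x)) on an Euler grid and then applies the three cases of (3.7). The function names, tolerance, truncation horizon and step size are our own choices, and Monte Carlo noise means the indifference region I_τ is only resolved up to the tolerance.

```python
import numpy as np

def continuation_value(x, a, beta=1.0, horizon=50.0, dt=1e-2, n_paths=20_000, seed=1):
    """Monte Carlo estimate of J(x; L*tau_a(x)) for the threshold policy tau_a
    (stop once X >= a), with X = |W|, g(x) = x, delta(s) = 1/(1 + beta*s);
    paths are simulated on an Euler grid and truncated at `horizon`."""
    rng = np.random.default_rng(seed)
    w = np.full(n_paths, float(x))
    tau = np.full(n_paths, horizon)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(1, int(horizon / dt) + 1):
        w[alive] = np.abs(w[alive] + np.sqrt(dt) * rng.standard_normal(alive.sum()))
        hit = alive & (w >= a)
        tau[hit] = k * dt
        alive &= ~hit
        if not alive.any():
            break
    x_tau = np.where(tau < horizon, a, w)
    return float(np.mean(x_tau / (1.0 + beta * tau)))

def theta_tau_a(x, a, tol=1e-3, **kw):
    """One application of Theta in (3.7) at state x for tau_a: stop (0) on S,
    continue (1) on C, keep tau_a(x) on I (detected up to `tol`)."""
    if x >= a:
        return 0                      # [a, infinity) lies in I, so keep tau_a(x) = 0
    j = continuation_value(x, a, **kw)
    if x > j + tol:
        return 0                      # x in S
    if x < j - tol:
        return 1                      # x in C
    return 1                          # treat as indifferent: keep tau_a(x) = 1

print(theta_tau_a(0.5, a=0.8), theta_tau_a(0.5, a=2.5))
```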
The switch from τ to Θτ corresponds to an additional level of strategic reasoning in Game Theory, as discussed below Corollary 3.1. Definition 3.2 (Equilibrium Stopping Policies ). We say τ ∈ T (X) is an equilibrium stopping policy if Θτ (t, x) = τ (t, x) for all (t, x) ∈ X. We denote by E(X) the collection of all equilibrium stopping policies. The term "equilibrium" is used as a connection to subgame-perfect Nash equilibria in an inter-temporal game among current self and future selves. This equilibrium idea was invoked in stochastic control under time inconsistency; see e.g. [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF], [START_REF] Ekeland | Investment and consumption without commitment[END_REF], [START_REF] Ekeland | Time-consistent portfolio management[END_REF], and [START_REF] Björk | A theory of Markovian time-inconsistent stochastic control in discrete time[END_REF]. A contrast with the stochastic control literature needs to be pointed out. Remark 3.3 (Comparison with Stochastic Control). In time-inconsistent stochastic control, local perturbation of strategies on small time intervals [t, t + ε] is the standard way to define equilibrium controls. In our case, local perturbation is carried out instantaneously at time t. This is because an instantaneously-modified stopping strategy may already change the expected discounted payoff significantly, whereas a control perturbed only at time t yields no effect. The first question concerning Definition 3.2 is the existence of an equilibrium stopping policy. Finding at least one such a policy turns out to be easy. Remark 3.4 (Trivial Equilibrium). Define τ ∈ T (X) by τ (t, x) := 0 for all (t, x) ∈ X. Then Lτ (t, x) = L * τ (t, x) = t, and thus J(t, x; L * τ (t, x)) = g(x) for all (t, x) ∈ X. This implies I τ = X. We then conclude from (3.7) that Θτ (t, x) = τ (t, x) for all (t, x) ∈ X, which shows τ ∈ E(X). We call it the trivial equilibrium stopping policy. Example 3.2 (Smoking Cessation, Continued). Recall the setting in Example 2.1. Observe from (2.9) and (3.4) that L * τ (t, x) = T for all (t, x) ∈ X. Then, δ(L * τ (t, x) -t)X t,x L * τ (t,x) = X t,x T 1 + T -t = xe 1 2 (T -t) 1 + T -t . Since e 1 2 s = 1 + s has two solutions s = 0 and s = s * ≈ 2.51286, and e 1 2 s > 1 + s iff s > s * , the above equation implies S τ = {(t, x) : t < T -s * }, C τ = {(t, x) : t ∈ (T -s * , T )}, and I τ = {(t, x) : t = T -s * or T }. We therefore get Θ τ (t, x) = 0 for t < T -s * , 1 for t ≥ T -s * . Whereas a naive smoker delays quitting smoking indefinitely (as in Example 2.1), the first level of strategic reasoning (i.e. applying Θ to τ once) recognizes this procrastination behavior and pushes the smoker to quit immediately, unless he is already too old (i.e. t ≥ Ts * ). It can be checked that Θ τ is already an equilibrium, i.e. Θ 2 τ (t, x) = Θ τ (t, x) for all (t, x) ∈ X. It is worth noting that in the classical case of exponential discounting, characterized by (2.8), the naive stopping policy τ in (3.4) is already an equilibrium. Proposition 3.1. Under (2.8), τ ∈ T (X) defined in (3.4) belongs to E(X). Proof. The proof is delegated to Appendix A.1. The Main Result In this subsection, we look for equilibrium policies through fixed-point iterations. For any τ ∈ T (X), we apply Θ to τ repetitively until we reach an equilibrium policy. In short, we define τ 0 by (3.8) τ 0 (t, x) := lim n→∞ Θ n τ (t, x) ∀(t, x) ∈ X, and take it as a candidate equilibrium policy. 
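As a quick check of Example 3.2, the threshold s* is the nonzero root of e^{s/2} = 1 + s; a few bisection steps (the bracketing interval is our own choice) reproduce the quoted value s* ≈ 2.51286.

```python
import numpy as np

# Bisection for the nonzero root of exp(s/2) = 1 + s used in Example 3.2.
f = lambda s: np.exp(0.5 * s) - (1.0 + s)
lo, hi = 1.0, 4.0            # f(1) < 0 < f(4)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
print(0.5 * (lo + hi))       # ~ 2.51286
```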
To make this argument rigorous, we need to show (i) the limit in (3.8) converges, so that τ 0 is well-defined; (ii) τ 0 is indeed an equilibrium policy, i.e. Θτ 0 = τ 0 . To this end, we impose the condition: Assumption 3.1. The function δ satisfies δ(s)δ(t) ≤ δ(s + t) for all s, t ≥ 0. Assumption 3.1 is closely related to decreasing impatience (DI) in Behavioral Economics. It is well-documented in empirical studies, e.g. [START_REF] Thaler | Some empirical evidence on dynamic inconsistency[END_REF], [START_REF] Loewenstein | Anomalies: Intertemporal choice[END_REF], [START_REF] Loewenstein | Anomalies in intertemporal choice: evidence and an interpretation[END_REF], that people admits DI: when choosing between two rewards, people are more willing to wait for the larger reward (more patient) when these two rewards are further away in time. For instance, in the two scenarios (i) getting $100 today or $110 tomorrow, and (ii) getting $100 in 100 days or $110 in 101 days, people tend to choose $100 in (i), but $110 in (ii). Following [28, Definition 1] and [START_REF] Noor | Decreasing impatience and the magnitude effect jointly contradict exponential discounting[END_REF], [START_REF]Hyperbolic discounting and the standard model: Eliciting discount functions[END_REF], DI can be formulated under current context as follows: the discount function δ induces DI if (3.9) for any s ≥ 0, t → δ(t + s) δ(t) is strictly increasing. Observe that (3.9) readily implies Assumption 3. ) ker(Θ n τ ) ⊆ ker(Θ n+1 τ ), ∀n ∈ N. (3.11 Hence, τ 0 in (3.8) is a well-defined element in T (X), with ker(τ 0 ) = n∈N ker(Θ n τ ). Proof. The proof is delegated to Appendix A.2. Condition (3.10) means that at any (t, x) ∈ X where the initial policy τ indicates immediate stopping, the new policy Θτ agrees with it; however, it is possible that at some (t, x) ∈ X where τ indicates continuation, Θτ suggests immediate stopping, based on the game-theoretic reasoning in Subsection 3.1. Note that (3.10) is not very restrictive, as it already covers all hitting times to subsets of X that are open (or more generally, half-open in [0, ∞) and open in R d ), as explained below. Remark 3.5. Let E be a subset of X that is "open" in the sense that for any (t, x) ∈ E, there exists The stopping policy τ corresponds to the stopping times T t,x := inf{s ≥ t : (s, X t,x s ) ∈ E} for all (t, x) ∈ X. In particular, if ε > 0 such that (t, x) ∈ [t, t + ε) × B ε (x) ⊆ E, where B ε (x) := {y ∈ R d : |y -x| < ε}. Define τ ∈ T (X) by τ (t, x) = 0 if and only if (t, x) ∈ E. Since ker(τ ) = E is "open", for any (t, x) ∈ ker(τ ), we have L * τ (t, x) = t, E = [0, ∞) × F where F is an open set in R d , the corresponding stopping times are T ′ t,x := inf{s ≥ t : X t,x s ∈ F }, (t, x) ∈ X. Moreover, the naive stopping policy τ also satisfies (3.10). Proposition 3.3. τ ∈ T (X) defined in (3.4) satisfies (3.10). Proof. The proof is delegated to Appendix A.3. The next theorem is the main result of our paper. It shows that the fixed-point iteration in (3.8) indeed converges to an equilibrium policy. Proof. The proof is delegated to Section A.4. The following result for the naive stopping policy τ , defined in (3.4), is a direct consequence of Proposition 3.3 and Theorem 3.1. Corollary 3.1. Let Assumption 3.1 hold. The stopping policy τ 0 ∈ T (X) defined by (3.12) τ 0 (t, x) := lim n→∞ Θ n τ (t, x) ∀(t, x) ∈ X belongs to E(X). Our iterative approach, as in (3.8), contributes to the literature of time inconsistency in two ways. 
First, the standard approach for finding equilibrium strategies in continuous time is solving a system of non-linear equations (the so-called extended HJB equation), as proposed in [START_REF] Ekeland | Investment and consumption without commitment[END_REF] and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF]. Solving this system of equations is difficult; and even when it is solved (as in the special cases in [START_REF] Ekeland | Investment and consumption without commitment[END_REF] and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF]), we just obtain one particular equilibrium, and it is unclear how other equilibrium strategies can be found. Our iterative approach provides a potential remedy here. We can find different equilibria simply by starting the iteration (3.8) with different initial policies τ ∈ T (X). In some cases, we are able to find all equilibria, and obtain a complete characterization of E(X); see Proposition 4.2 below. Second, while the continuous-time formulation of equilibrium strategies was initiated in [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF], the "origin" of an equilibrium strategy has not been addressed. This question is important as people do not start with using equilibrium strategies. People have their own initial strategies, determined by a variety of factors such as classical optimal stopping theory, personal habits, and popular rules of thumb in the market. Once an agent starts to do game-theoretic reasoning and look for equilibrium strategies, she is not satisfied with an arbitrary equilibrium. Instead, she works on improving her initial strategy to turn it into an equilibrium. This improving process is absent from [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF], [START_REF] Ekeland | Investment and consumption without commitment[END_REF], and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF], but it is in fact well-known in Game Theory as the hierarchy of strategic reasoning in [START_REF] Stahl | Evolution of smart-n players[END_REF] and [START_REF] Stahl | Experimental evidence on players' models of other players[END_REF]. Our iterative approach embodies this framework: given an initial τ ∈ T (X), Θ n τ ∈ T (X) corresponds to level-n strategic reasoning in [START_REF] Stahl | Experimental evidence on players' models of other players[END_REF], and τ 0 := lim n→∞ Θ n τ reflects full rationality of "smart ∞ " players in [START_REF] Stahl | Evolution of smart-n players[END_REF]. Hence, our formulation complements the literature of time inconsistency in that it not only defines what an equilibrium is, but explains where an equilibrium is coming from. This in turn provides "agent-specific" results: it assigns one specific equilibrium to each agent according to her initial behavior. In particular, Corollary 3.1 specifies the connection between the naive behavior and the sophisticated one. While these behaviors have been widely discussed in the literature, their relation has not been stated mathematically as precisely as in (3.12). The Time-Homogeneous Case Suppose the state process X is time-homogeneous, i.e. X s = f (W s ) for some measurable f : R d → R; or, the coefficients b and σ in (2.1) does not depend on t. 
The objective function (2.3) then reduces to J(x; τ ) := E x [δ(τ )g(X τ )] for x ∈ R d and τ ∈ T , where the superscript of E x means X 0 = x. The decision to stop or to continue then depends on the current state x only. The formulation in Subsection 3.1 reduces to: Definition 3.3. When X is time-homogeneous, a Borel measurable τ : R d → {0, 1} is called a stopping policy, and we denote by T (R d ) the set of all stopping policies. Given τ ∈ T (R d ) and x ∈ R d , we define, similarly to (3.2), (3.1), and (3.5), ker(τ ) := {x ∈ R d : τ (x) = 0}, Lτ (x) := inf{t ≥ 0 : τ (X x t ) = 0}, and L * τ (x) := inf{t > 0 : τ (X x t ) = 0}. Furthermore, we say τ ∈ T (R d ) is an equilibrium stopping policy if Θτ (x) = τ (x) for all x ∈ R d , where (3.13) Θτ (x) :=        0 if x ∈ S τ := {x : g(x) > E x [δ(L * τ (x))g(X L * τ (x) )]}, 1 if x ∈ C τ := {x : g(x) < E x [δ(L * τ (x))g(X L * τ (x) )]}, τ (x) if x ∈ I τ := {x : g(x) = E x [δ(L * τ (x))g(X L * τ (x) )]}. Remark 3.6. When X is time-homogeneous, all the results in Subsection 3.2 hold, with T (X), E(X), ker(τ ), and Θ replaced by the corresponding ones in Definition 3.3. Proofs of these statements are similar to, and in fact easier than, those in Subsection 3.2, thanks to the homogeneity in time. A Detailed Case Study: Stopping of BES(1) In this section, we recall the setup of Example 2.2, with hyperbolic discount function (4.1) δ(s) := 1 1 + βs ∀s ≥ 0, where β > 0 is a fixed parameter. The state process X is a one-dimensional Bessel process, i.e. X t = |W t |, t ≥ 0, where W is a one-dimensional Brownian motion. With X being timehomogeneous, we will follow Definition 3.3 and Remark 3.6. Also, the classical optimal stopping problem (2.4) reduces to (4.2) v(x) = sup τ ∈T E x X τ 1 + βτ for x ∈ R + . This can be viewed as a real options problem, as explained below. By [START_REF] Taksar | Optimal dynamic reinsurance policies for large insurance portfolios[END_REF] and the references therein, when the surplus (or reserve) of an insurance company is much larger than the size of each individual claim, the dynamics of the surplus process can be approximated by dR t = µdt + σdW t with µ = p -E[Z] and σ = E[Z 2 ]. Here, p > 0 is the premium rate, and Z is a random variable that represents the size of each claim. Suppose that an insurance company is non-profitable with µ = 0, i.e. it uses all the premiums collected to cover incoming claims. Also assume that the company is large enough to be considered "systemically important", so that when its surplus hits zero, the government will provide monetary support to bring it back to positivity, as in the recent financial crisis. The dynamics of R is then a Brownian motion reflected at the origin. Thus, (4.2) describes a real options problem in which the management of a large non-profitable insurance company has the intention to liquidate or sell the company, and would like to decide when to do so. An unusual feature of (4.2) is that the discounted process {δ(s)v(X x s )} s≥0 may not be a supermartingale. This makes solving (4.2) for the optimal stopping time τ x , defined in (2.6) with t = 0, nontrivial. As shown in Appendix B.1, we need an auxiliary value function, and use the method of time-change in [START_REF] Pedersen | Solving non-linear optimal stopping problems by the method of time-change[END_REF]. Proposition 4.1. For any x ∈ R + , the optimal stopping time τ x of (4.2) (defined in (2.6) with t = 0) admits the explicit formula (4.3) τ x = inf s ≥ 0 : X x s ≥ 1/β + s . 
Hence, the naive stopping policy τ ∈ T (R + ), defined in (3.4), is given by (4.4) τ (x) := 1 [0, √ 1/β) (x) ∀x ∈ R + . Proof. The proof is delegated to Appendix B.1. Characterization of equilibrium policies Lemma 4.1. For any τ ∈ T (R + ), consider τ ′ ∈ T (R + ) with ker(τ ′ ) := ker(τ ). Then L * τ (x) = Lτ (x) = Lτ ′ (x) = L * τ ′ (x) for all x ∈ R + . Hence, τ ∈ E(R + ) if and only if τ ′ ∈ E(R + ). Proof. If x ∈ R + is in the interior of ker(τ ), L * τ (x) = Lτ (x) = 0 = Lτ ′ (x) = L * τ ′ (x). Since a one-dimensional Brownian motion W is monotone in no interval, if x ∈ ker(τ ′ ) \ ker(τ ), L * τ (x) = Lτ (x) = 0 = Lτ ′ (x) = L * τ ′ (x); if x / ∈ ker(τ ′ ), then L * τ (x) = Lτ (x) = inf{s ≥ 0 : |W x | ∈ ker(τ )} = inf{s ≥ 0 : |W x | ∈ ker(τ )} = Lτ ′ (x) = L * τ ′ (x) . Finally, we deduce from (3.13) and L * τ (x) = L * τ ′ (x) for all x ∈ R + that τ ∈ E(R + ) implies τ ′ ∈ E(R + ), and vice versa. The next result shows that every equilibrium policy corresponds to the hitting time to a certain threshold. Recall that a set E ⊂ R + is called totally disconnected if the only nonempty connected subsets of E are singletons, i.e. E contains no interval. Lemma 4.2. For any τ ∈ E(R + ), define a := inf (ker(τ )) ≥ 0. Then, the Borel set E := {x ≥ a : x / ∈ ker(τ )} is totally disconnected. Hence, ker(τ ) = [a, ∞) and the stopping policy τ a , defined by τ a (x) := 1 [0,a) (x) for x ∈ R + , belongs to E(R + ). Proof. The proof is delegated to Appendix B.2 The converse question is for which a ≥ 0 the policy τ a ∈ T (R) is an equilibrium. To answer this, we need to find the sets S τa , C τa , and I τa in (3.13). By Definition 3.3, (4.5) Lτ a (x) = T x a := inf{s ≥ 0 : X x s ≥ a}, L * τ a (x) = inf{s > 0 : X x s ≥ a}. Note that Lτ a (x) = L * τ a (x) , by an argument similar to the proof of Lemma 4.1. As a result, for x ≥ a, we have J(x; L * τ a (x)) = J(x; 0) = x, which implies (i) For any a ≥ 0, x → η(x, a) is strictly increasing and strictly convex on [0,a], and satisfies 0 < η(0, a) < a and η(a, a) = a. (ii) For any x ≥ 0, η(x, a) → 0 as a → ∞. (iii) There exists a unique a * ∈ (0, 1/ √ β) such that for any a > a * , there is a unique solution x * (a) ∈ (0, a * ) of η(x, a) = x. Hence, η(x, a) > x for x < x * (a) and η(x, a) < x for x > x * (a). On the other hand, a ≤ a * implies that η(x, a) > x for all x ∈ (0, a).      > x, if x ∈ [0, x * (a)), = x, if x = x * (a), < x, if x ∈ (x * (a), a). By (4.6), (4.7), (4.8), and the definition of Θ in (3.13), For a > a * , although τ a / ∈ E(R + ) by Proposition 4.2, we may use the iteration in (3.8) to find a stopping policy in E(R + ). Here, the repetitive application of Θ to τ a has a simple structure: to reach an equilibrium, we need only one iteration. Recall "static optimality" and "dynamic optimality" in Remarks 2.2 and 3.2. By Proposition 4.1, τ x in (4.3) is statically optimal for x ∈ R + fixed, while τ in (4.4) is dynamically optimal. This is reminiscent of the situation in Theorem 3 of [START_REF]Optimal mean-variance selling strategies[END_REF]. Moreover, τ ∈ T (R + ) defined by τ (x) := 1 [0,b) (x), x ∈ R + , is dynamically optimal for all b ≥ 1/β, thanks again to Proposition 4.1. if a ≤ a * , Θτ a (x) = 1 [0,a) (x) + τ a (x)1 [a,∞) (x) ≡ τ a (x); if a > a * , Θτ a (x) = 1 [0,x * (a)) (x) + τ a (x)1 {x * (a)}∪[a,∞) (x) ≡ τ a (x E(R + ) = {τ ∈ T (R + ) : ker(τ ) = [a, ∞) for some a ∈ [0, a * ]}. Proof. 
The derivation of "τ a ∈ E(R + ) ⇐⇒ a ∈ [0, a * ]" is Further consideration on selecting equilibrium policies In view of (4.10), it is natural to ask which equilibrium in E(R + ) one should employ. According to standard Game Theory literature discussed below Corollary 3.1, a sophisticated agent should employ the specific equilibrium generated by her initial stopping policy τ , through the iteration (3.8). Now, imagine that an agent is "born" sophisticated: she does not have any previouslydetermined initial stopping policy, and intends to apply an equilibrium policy straight away. A potential way to formulate her stopping problem is the following: (4.11) sup τ ∈E(R + ) J(x; Lτ (x)) = sup a∈[0,a * ] J(x; Lτ a (x)) = sup a∈[x,a * ∨x] E x a 1 + βT x a . where the first equality follows from Proposition 4.2 and Lemma 4.1. Proposition 4.3. τ a * ∈ E(R + ) solves (4.11) for all x ∈ R + . Proof. Fix a ∈ [0, a * ). For any x ≤ a, we have T x a ≤ T x a * . Thus, J(x; Lτ a * (x)) = E x a * 1 + βT x a * = E x E x a * 1 + βT x a * F T x a ≥ E x 1 1 + βT x a E a a * 1 + βT a a * > E x a 1 + βT x a = J(x; Lτ a (x)), where the last inequality follows from Lemma 4.3 (iii). The conclusion is twofold. First, it is possible, at least under current setting, to find one single equilibrium policy that solves (4.11) for all x ∈ R + . Second, this "optimal" equilibrium policy τ a * is different from τ ′ x * ( a) , the equilibrium generated by the naive policy τ (see Remark 4.3). This indicates that the map Θ * := lim n→∞ Θ n : T (X) → E(X) is in general nonlinear: while τ ∈ T (T ) is constructed from optimal stopping times { τ x } x∈R + (or "dynamically optimal" as in Remark 4.4), Θ * ( τ ) = τ ′ x * ( a) ∈ E(X) is not optimal under (4.11). This is not that surprising once we realize τ x > L τ (x) > Lτ ′ x * ( a) (x) for some x ∈ R + . The first inequality is essentially another way to describe time inconsistency, and the second inequality follows from ker( τ ) ⊂ ker(Θ τ ) = ker(τ ′ x * ( a) ). It follows that the optimality of τ x for sup τ ∈T J(x; τ ) does not necessarily translate to the optimality of τ ′ x * ( a) for sup τ ∈E(R + ) J(x; Lτ (x)). A Proofs for Section 3 Throughout this appendix, we will constantly use the notation (A.1) τ n := Θ n τ n ∈ N, for any τ ∈ T (X). A.1 Proof of Proposition 3.1 Fix (t, x) ∈ X. We deal with the two cases τ (t, x) = 0 and τ (t, x) = 1 separately. If τ (t, x) = 0, i.e. τ t,x = t, by (2.7) g(x) = sup τ ∈Tt E t,x [δ(τ -t)g(X τ )] ≥ E t,x δ(L * τ (t, x) -t)g(X L * τ (t,x) ) , which implies (t, x) ∈ S τ ∪ I τ . We then conclude from (3.7) that Θ τ (t, x) = 0 if (t, x) ∈ S τ τ (t, x) if (t, x) ∈ I τ = τ (t, x). If τ (t, x) = 1, then L * τ (t, x) = L τ (t, x) = inf{s ≥ t : τ (s, X t,x s ) = 0} = inf{s ≥ t : τ s,X t,x s = s}. By (2.6) and (2.5), τ s,X t,x s = s means g(X t,x s (ω)) = ess sup τ ∈Ts E s,X t,x s (ω) [δ(τ -s)g(X τ )], which is equivalent to δ(s -t)g(X t,x s (ω)) = δ(s -t) ess sup τ ∈Ts E s,X t,x s (ω) [δ(τ -s)g(X τ )] = ess sup τ ∈Ts E s,X t,x s (ω) [δ(τ -t)g(X τ )] = Z t,x s (ω), where the second equality follows from (2.8). We then conclude that L * τ (t, x) = inf{s ≥ t : δ(s -t)g(X t,x s ) = Z t,x s } = τ t,x . This, together with (2.7), shows that E t,x δ(L * τ (t, x) -t)g(X L * τ (t,x) ) = E t,x δ( τ t,x -t)g(X τt,x ) ≥ g(x), which implies (t, x) ∈ I τ ∪ C τ . By (3.7), we have Θ τ (t, x) = τ (t, x) if (t, x) ∈ I τ 1 if (t, x) ∈ C τ = τ (t, x). We therefore have Θ τ x) = τ (t, x) for all (t, x) ∈ X, i.e. τ ∈ E(X). 
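Before turning to the proofs, Proposition 4.3 can be cross-checked by simulation: estimating η(x, a) = E^x[a/(1 + βT^x_a)] for several thresholds a in the equilibrium range [0, a*] at a fixed x should show the largest value at a* ≈ 0.946 (β = 1, Remark 4.1). The sketch below uses an Euler discretization of |W|; the grid, horizon and sample size are our own choices, so the estimates carry discretization and Monte Carlo error.

```python
import numpy as np

def eta_mc(x, a, beta=1.0, horizon=50.0, dt=1e-2, n_paths=40_000, seed=2):
    """Monte Carlo estimate of eta(x, a) = E^x[ a / (1 + beta * T^x_a) ], where
    T^x_a is the first time |W| started at x reaches the level a (Euler grid)."""
    rng = np.random.default_rng(seed)
    w = np.full(n_paths, float(x))
    tau = np.full(n_paths, horizon)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(1, int(horizon / dt) + 1):
        w[alive] = np.abs(w[alive] + np.sqrt(dt) * rng.standard_normal(alive.sum()))
        hit = alive & (w >= a)
        tau[hit] = k * dt
        alive &= ~hit
        if not alive.any():
            break
    return float(np.mean(a / (1.0 + beta * tau)))

# Estimated value of the threshold policies tau_a at x = 0.3 (beta = 1); within
# the equilibrium range, the largest threshold a* ~ 0.946 should give the
# largest value, consistent with Proposition 4.3.
for a in (0.4, 0.6, 0.8, 0.946):
    print(f"a = {a:5.3f}   eta(0.3, a) ~ {eta_mc(0.3, a):.4f}")
```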
A.2 Derivation of Proposition 3.2 To prove the technical result Lemma A.1 below, we need to introduce shifted random variables as formulated in Nutz [START_REF] Nutz | Random G-expectations[END_REF]. For any t ≥ 0 and ω ∈ Ω, we define the concatenation of ω and ω ∈ Ω at time t by (ω ⊗ t ω) s := ω s 1 [0,t) (s) + [ω s -(ω t -ω t )]1 [t,∞) (s), s ≥ 0. For any F ∞ -measurable random variable ξ : Ω → R, we define the shifted random variable [ξ] t,ω : Ω → R, which is F t ∞ -measurable, by [ξ] t,ω (ω) := ξ(ω ⊗ t ω), ∀ω ∈ Ω. Given τ ∈ T , we write ω ⊗ τ (ω) ω as ω ⊗ τ ω, and [ξ] τ (ω),ω (ω) as [ξ] τ,ω (ω). A detailed analysis of shifted random variables can be found in [3, Appendix A]; Proposition A.1 therein implies that give (t, x) ∈ X fixed, any θ ∈ T t and F t ∞ -measurable ξ with E t,x [|ξ|] < ∞ satisfy (A.2) E t,x [ξ | F t θ ](ω) = E t,x [[ξ] θ,ω ] for a.e. ω ∈ Ω. Lemma A.1. For any τ ∈ T (X) and (t, x) ∈ X, define t 0 := L * τ 1 (t, x) ∈ T t and s 0 := L * τ (t, x) ∈ T t , with τ 1 as in (A.1). If t 0 ≤ s 0 , then for a.e. ω ∈ {t < t 0 }, g(X t,x t 0 (ω)) ≤ E t,x δ(s 0 -t 0 )g(X s 0 ) | F t t 0 (ω). Proof. For a.e. ω ∈ {t < t 0 } ∈ F t , we deduce from t 0 (ω) = L * τ 1 (t, x)(ω) > t that for all s ∈ (t, t 0 (ω)) we have τ 1 (s, X t,x s (ω)) = 1 . By (A.1) and (3.7), this implies (s, X t,x s (ω)) / ∈ S τ for all s ∈ (t, t 0 (ω)). Thus, g(X t,x s (ω)) ≤ E s,X t,x s (ω) δ(L * τ (s, X s ) -s)g X L * τ (s,X s ) ∀s ∈ (t, t 0 (ω)) . (A.3) For any s ∈ (t, t 0 (ω)), note that [t 0 ] s,ω (ω) = t 0 (ω ⊗ s ω) = L * τ 1 (t, x)(ω ⊗ s ω) = L * τ 1 (s, X t,x s (ω))(ω), ∀ ω ∈ Ω. Since t 0 ≤ s 0 , similar calculation gives [s 0 ] s,ω (ω) = L * τ (s, X t,x s (ω))(ω). We thus conclude from (A.3) that g(X t,x s (ω)) ≤ E s,X t,x s (ω) δ([s 0 ] s,ω -s)g [X s 0 ] s,ω ≤ E s,X t,x s (ω) δ([s 0 ] s,ω -[t 0 ] s,ω )g [X s 0 ] s,ω , ∀s ∈ (t, t 0 (ω)) , (A.4) where the second line holds because δ is decreasing and also δ and g are both nonnegative. On the other hand, by (A.2), it holds a.s. that E t,x [δ(s 0 -t 0 )g(X s 0 ) | F t s ](ω) = E t,x δ([s 0 ] s,ω -[t 0 ] s,ω )g([X t,x s 0 ] s,ω ) ∀s ≥ t, s ∈ Q. Note that we used the countability of Q to obtain the above almost-sure statement. This, together with (A.4), shows that it holds a.s. that (A.5) g(X t,x s (ω)) 1 {(t,t 0 (ω))∩Q} (s) ≤ E t,x [δ(s 0 -t 0 )g(X s 0 ) | F t s ](ω) 1 {(t,t 0 (ω))∩Q} (s). Since our sample space Ω is the canonical space for Brownian motion with the right-continuous Brownian filtration F, the martingale representation theorem holds under current setting. This in particular implies that every martingale has a continuous version. Let {M s } s≥t be the continuous version of the martingale {E t,x [δ(s 0t 0 )g(X s 0 ) | F t s ]} s≥t . Then, (A.5) immediately implies that it holds a.s. that (A.6) g(X t,x s (ω)) 1 {(t,t 0 (ω))∩Q} (s) ≤ M s (ω) 1 {(t,t 0 (ω))∩Q} (s). Also, using the right-continuity of M and (A.2), one can show that for any τ ∈ T t , M τ = E t,x [δ(s 0t 0 )g(X s 0 ) | F t τ ] a.s. Now, we can take some Ω * ∈ F ∞ with P[Ω * ] = 1 such that for all ω ∈ Ω * , (A.6) holds true and M t 0 (ω) = E t,x [δ(s 0 -t 0 )g(X s 0 ) | F t t 0 ](ω). For any ω ∈ Ω * ∩{t < t 0 }, take {k n } ⊂ Q such that k n > t and k n ↑ t 0 (ω). Then, (A.6) implies g(X t,x kn (ω)) ≤ M kn (ω), ∀n ∈ N. As n → ∞, we obtain from the continuity of s → X s and z → g(z), and the left-continuity of s → M s that g(X t,x t 0 (ω)) ≤ M t 0 (ω) = E t,x [δ(s 0 -t 0 )g(X s 0 ) | F t t 0 ](ω). Now, we are ready to prove Proposition 3.2. Proof of Proposition 3.2. 
We will prove (3.11) by induction. We know that the result holds for n = 0 by (3.10). Now, assume that (3.11) holds for n = k ∈ N ∪ {0}, and we intend to show that (3.11) also holds for n = k + 1. Recall the notation in (A.1). Fix (t, x) ∈ ker(τ k+1 ), i.e. τ k+1 (t, x) = 0. If L * τ k+1 (t, x) = t, then (t, x) belongs to I τ k+1 . By (3.7), we get τ k+2 (t, x) = Θτ k+1 (t, x) = τ k+1 (t, x) = 0, and thus (t, x) ∈ ker(τ k+2 ), as desired. We therefore assume below that L * τ k+1 (t, x) > t. By (3.7), τ k+1 (t, x) = 0 implies (A.7) g(x) ≥ E t,x [δ(L * τ k (t, x) -t)g(X L * τ k (t,x) )]. Let t 0 := L * τ k+1 (t, x) and s 0 := L * τ k (t, x). Under the induction hypothesis ker(τ k ) ⊆ ker(τ k+1 ), we have t 0 ≤ s 0 , as t 0 and s 0 are hitting times to ker(τ k+1 ) and ker(τ k ), respectively; see (3.5). Using (A.7), t 0 ≤ s 0 , Assumption 3.1, and g being nonnegative, g(x) ≥ E t,x [δ(s 0 -t)g(X s 0 )] ≥ E t,x [δ(t 0 -t)δ(s 0 -t 0 )g(X s 0 )] = E t,x δ(t 0 -t)E t,x δ(s 0 -t 0 )g(X s 0 ) | F t t 0 ≥ E t,x δ(t 0 -t)g(X t 0 ) , where the second line follows from the tower property of conditional expectations, and the third line is due to Lemma A.1. This implies (t, x) / ∈ C τ k+1 , and thus (A.8) τ k+2 (t, x) = 0 for (t, x) ∈ S τ 1 τ k+1 (t, x) for (t, x) ∈ I τ 1 = 0. That is, (t, x) ∈ ker(τ k+2 ). Thus, we conclude that ker(τ k+1 ) ⊆ ker(τ k+2 ), as desired. It remains to show that τ 0 defined in (3.8) is a stopping policy. Observe that for any (t, x) ∈ X, τ 0 (t, x) = 0 if and only if Θ n τ (t, x) = 0, i.e. (t, x) ∈ ker(Θ n τ ), for n large enough. This, together with (3.11), implies that {(t, x) ∈ X : τ 0 (t, x) = 0} = n∈N ker(Θ n τ ) ∈ B(X). Hence, τ 0 : X → {0, 1} is Borel measurable, and thus an element in T (X). A.3 Proof of Proposition 3.3 Fix (t, x) ∈ ker( τ ). Since τ (t, x) = 0, i.e. τ t,x = t, (2.6), (2.5), and (2.7) imply g(x) = sup τ ∈Tt E t,x [δ(τ -t)g(X τ )] ≥ E t,x δ(L * τ (t, x) -t)g(X L * τ (t,x) ) . This shows that (t, x) ∈ S τ ∪ I τ . Thus, we have ker( τ ) ⊆ S τ ∪ I τ . It follows that ker( τ ) = (ker( τ ) ∩ S τ ) ∪ (ker( τ ) ∩ I τ ) ⊆ S τ ∪ (ker( τ ) ∩ I τ ) = ker(Θ τ ), where the last equality follows from (3.7). A.4 Derivation of Theorem 3.1 Lemma A.2. Suppose Assumption 3.1 holds and τ ∈ T (X) satisfies (3.10). Then τ 0 defined in (3.8) satisfies L * τ 0 (t, x) = lim n→∞ L * Θ n τ (t, x), ∀(t, x) ∈ X. Proof. We will use the notation in (A.1). Recall that ker(τ n ) ⊆ ker(τ n+1 ) for all n ∈ N and ker(τ 0 ) = n∈N ker(τ n ) from Proposition 3.2. By (3.5), this implies that {L * τ n (t, x)} n∈N is a nonincreasing sequence of stopping times, and L * τ 0 (t, x) ≤ t 0 := lim n→∞ L * τ n (t, x). It remains to show that L * τ 0 (t, x) ≥ t 0 . We deal with the following two cases. (i) On {ω ∈ Ω : L * τ 0 (t, x)(ω) = t}: By (3.5), there must exist a sequence {t m } m∈N in R + , depending on ω ∈ Ω, such that t m ↓ t and τ 0 (t m , X t,x tm (ω)) = 0 for all m ∈ N. For each m ∈ N, by the definition of τ 0 in (3.8), there exists n * ∈ N large enough such that τ n * (t m , X t,x tm (ω)) = 0, which implies L * τ n * (t, x)(ω) ≤ t m . Since {L * τ n (t, x)} n∈N is nonincreasing, we have t 0 (ω) ≤ L * τ n * (t, x)(ω) ≤ t m . With m → ∞, we get t 0 (ω) ≤ t = L * τ 0 (t, x)(ω). (ii) On {ω ∈ Ω : L * τ 0 (t, x)(ω) > t}: Set s 0 := L * τ 0 (t, x). If τ 0 (s 0 (ω), X t,x s 0 (ω)) = 0, then by (3.8) there exists n * ∈ N large enough such that τ n * (s 0 (ω), X t,x s 0 (ω)) = 0. Since {L * τ n (t, x)} n∈N is nonincreasing, t 0 (ω) ≤ L * τ n * (t, x)(ω) ≤ s 0 (ω), as desired. 
If τ 0 (s 0 (ω), X t,x s 0 (ω)) = 1, then by (3.5) there exist a sequence {t m } m∈N in R + , depending on ω ∈ Ω, such that t m ↓ s 0 (ω) and τ 0 (t m , X t,x tm (ω)) = 0 for all m ∈ N. Then we can argue as in case (i) to show that t 0 (ω) ≤ s 0 (ω), as desired. Now, we are ready to prove Theorem 3.1. Proof of Theorem 3.1. By Proposition 3.2, τ 0 ∈ T (X) is well-defined. For simplicity, we will use the notation in (A.1). Fix (t, x) ∈ X. If τ 0 (t, x) = 0, by (3.8) we have τ n (t, x) = 0 for n large enough. Since τ n (t, x) = Θτ n-1 (t, x), we deduce from "τ n (t, x) = 0 for n large enough" and (3.7) that (t, x) ∈ S τ n-1 ∪ I τ n-1 for n large enough. That is, g(x) ≥ E t,x δ(L * τ n-1 (t, x)t)g(X L * τ n-1 (t,x) ) for n large enough. With n → ∞, the dominated convergence theorem and Lemma A.2 yield g(x) ≥ E t,x δ(L * τ 0 (t, x)t)g(X L * τ 0 (t,x) ) , which shows that (t, x) ∈ S τ 0 ∪ I τ 0 . We then deduce from (3.7) and τ 0 (t, x) = 0 that Θτ 0 (t, x) = τ 0 (t, x). On the other hand, if τ 0 (t, x) = 1, by (3.8) we have τ n (t, x) = 1 for n large enough. Since τ n (t, x) = Θτ n-1 (t, x), we deduce from "τ n (t, x) = 1 for n large enough" and (3.7) that (t, x) ∈ C τ n-1 ∪ I τ n-1 for n large enough. That is, g(x) ≤ E t,x δ(L * τ n-1 (t, x) -t)g(X L * τ n-1 (t,x) ) for n large enough. With n → ∞, the dominated convergence theorem and Lemma A.2 yield g(x) ≤ E t,x δ(L * τ 0 (t, x)t)g(X L * τ 0 (t,x) ) , which shows that (t, x) ∈ C τ 0 ∪ I τ 0 . We then deduce from (3.7) and τ 0 (t, x) = 1 that Θτ 0 (t, x) = τ 0 (t, x). We therefore conclude that τ 0 ∈ E(X). B Proofs for Section 4 B.1 Derivation of Proposition 4.1 In the classical case of exponential discounting, (2.8) ensures that for all s ≥ 0, (B.1) δ(s)v(X x s ) = sup τ ∈T E X x s [δ(s + τ )g(X τ )] = sup τ ∈Ts E x [δ(τ )g(X τ ) | F s ] , which shows that {δ(s)v(X x s )} s≥0 is a supermartingale. Under hyperbolic discounting (4.1), since δ(r 1 )δ(r 2 ) < δ(r 1 +r 2 ) for all r 1 , r 2 ≥ 0, {δ(s)v(X x s )} s≥t may no longer be a supermatingale, as the first equality in the above equation fails. To overcome this, we introduce the auxiliary value function: for (s, x) ∈ R 2 + , V (s, x) := sup τ ∈T E x [δ(s + τ )g(X τ )] = sup τ ∈T E x X τ 1 + β(s + τ ) . (B.2) By definition, V (0, x) = v(x), and {V (s, X x s )} s≥0 is a supermartingale as V (s, X Following [START_REF] Pedersen | Solving non-linear optimal stopping problems by the method of time-change[END_REF], we propose the ansatz w(s, y) = 1 √ 1+βs h( y √ 1+βs ). Equation (B.4) then becomes a one-dimensional free boundary problem: (B.5) -βzh ′ (z) + h ′′ (z) = βh(z), h(z) > |z|, for |z| < b(s) √ 1+βs ; h(z) = |z|, for |z| ≥ b(s) √ 1+βs . Since the variable s does not appear in the above ODE, we take b(s) = α √ 1 + βs for some α ≥ 0. The general solution of the first line of (B.5) is h(z) = e β 2 z 2 c 1 + c 2 2 β √ β/2z 0 e -u 2 du , (c 1 , c 2 ) ∈ R 2 . The second line of (B.5) gives h(α) = α. We then have w(s, y) =      e βy 2 2(1+βs) √ 1+βs c 1 + c 2 2 β √ β/2y √ 1+βs 0 e -u 2 du , |y| < α √ 1 + βs; |y| 1+βs , |y| ≥ α √ 1 + βs. To find the parameters c 1 , c 2 and α, we equate the partial derivatives of (s, y) → w(s, y) obtained on both sides of the free boundary. This yields the equations 1+βs -1 -y 1+βs and observing that h(0) > 0, h( 1/β + s) = 0, and h ′ (y) < 1 1+βs -1 1+βs = 0 for all y ∈ (0, 1/β + s), we conclude h(y) > 0 for all y ∈ [0, 1/β + s), or w(s, y) > |y| 1+βs for |y| < 1/β + s. 
B.2 Proof of Lemma 4.2

First, we prove that E is totally disconnected. If ker(τ) = [a, ∞), then E = ∅ and there is nothing to prove. Assume that there exists x* > a such that x* ∉ ker(τ). Define

ℓ := sup{ b ∈ ker(τ) : b < x* }   and   u := inf{ b ∈ ker(τ) : b > x* }.

We claim that ℓ = u = x*. Assume to the contrary ℓ < u. Then τ(x) = 1 for all x ∈ (ℓ, u). Thus, given y ∈ (ℓ, u), L*τ(y) = T_y := inf{s ≥ 0 : X^y_s ∉ (ℓ, u)} > 0, and

(B.7)   J(y; L*τ(y)) = E_y[ X_{T_y} / (1 + βT_y) ] < E_y[X_{T_y}] = ℓ P[X_{T_y} = ℓ] + u P[X_{T_y} = u].

Since X_s = |W_s| for a one-dimensional Brownian motion W and 0 < ℓ < y < u, by the optional sampling theorem

P[X_{T_y} = ℓ] = P[W^y hits ℓ before hitting u] = (u - y)/(u - ℓ)   and   P[X_{T_y} = u] = P[W^y hits u before hitting ℓ] = (y - ℓ)/(u - ℓ).

This, together with (B.7), gives J(y; L*τ(y)) < y. This implies y ∈ S_τ, and thus Θτ(y) = 0 by (3.13). Then Θτ(y) ≠ τ(y), a contradiction to τ ∈ E(R_+). This already implies that E is totally disconnected, and thus ker(τ) = [a, ∞). The rest of the proof follows from Lemma 4.1.
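The exit probabilities and the strict inequality in (B.7) lend themselves to a quick simulation. The sketch below is not from the paper and uses arbitrary parameter values; it discretises a Brownian path started at y ∈ (ℓ, u) (since ℓ > 0, the reflection at zero never comes into play) until it leaves (ℓ, u), then compares the empirical exit probabilities and the empirical value of E_y[X_{T_y}/(1+βT_y)] with the quantities above. The Euler scheme slightly overshoots the barriers, so the agreement is only approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
ell, u, y, beta = 0.5, 1.5, 0.9, 1.0          # arbitrary values with 0 < ell < y < u
n_paths, dt = 20000, 1e-4

w = np.full(n_paths, y)
t = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    w[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    t[alive] += dt
    alive &= (w > ell) & (w < u)

x_exit = np.clip(w, ell, u)                   # snap the small overshoot back onto the barrier
hit_low = x_exit <= ell
print(hit_low.mean(), (u - y) / (u - ell))    # P[X_T = ell]
print((~hit_low).mean(), (y - ell) / (u - ell))   # P[X_T = u]
print(x_exit.mean(), y)                       # optional sampling: E[X_T] = y
print(np.mean(x_exit / (1 + beta * t)), y)    # (B.7): strictly smaller than y
```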
B.3 Proof of Lemma 4.3

(i) Given a ≥ 0, it is obvious from definition that η(0, a) ∈ (0, a) and η(a, a) = a. Fix x ∈ (0, a), and let f^x_a denote the density of T^x_a. We obtain

(B.8)   E_x[ 1/(1 + βT^x_a) ] = ∫_0^∞ f^x_a(t)/(1 + βt) dt = ∫_0^∞ ∫_0^∞ e^{-s(1+βt)} ds f^x_a(t) dt = ∫_0^∞ e^{-s} ∫_0^∞ e^{-βst} f^x_a(t) dt ds = ∫_0^∞ e^{-s} E_x[e^{-βsT^x_a}] ds.

Since T^x_a is the first hitting time of a one-dimensional Bessel process, we compute its Laplace transform using Theorem 3.1 of [START_REF] Kent | Some probabilistic properties of Bessel functions[END_REF] (or Formula 2.0.1 on p. 361 of [START_REF] Borodin | Handbook of Brownian motion-facts and formulae, Probability and its Applications[END_REF]):

(B.9)   E_x[ e^{-(λ^2/2) T^x_a} ] = cosh(xλ) sech(aλ),   for x ≤ a.

Here, I_ν denotes the modified Bessel function of the first kind, in terms of which the cited formula is stated; for the one-dimensional Bessel process it reduces to the expression above. Thanks to the above formula with λ = √(2βs), we obtain from (B.8) that

(B.10)   η(x, a) = a ∫_0^∞ e^{-s} cosh(x√(2βs)) sech(a√(2βs)) ds.

It is then obvious that x ↦ η(x, a) is strictly increasing. Moreover,

η_xx(x, a) = 2aβ ∫_0^∞ e^{-s} s cosh(x√(2βs)) sech(a√(2βs)) ds > 0   for x ∈ [0, a],

which shows the strict convexity.

(ii) This follows from (B.10) and the dominated convergence theorem.

(iii) We will first prove the desired result with x*(a) ∈ (0, a), and then upgrade it to x*(a) ∈ (0, a*). Fix a ≥ 0. In view of the properties in (i), we observe that the two curves y = η(x, a) and y = x intersect at some x*(a) ∈ (0, a) if and only if η_x(a, a) > 1. Define k(a) := η_x(a, a). By (B.10),

(B.11)   k(a) = a ∫_0^∞ e^{-s} √(2βs) tanh(a√(2βs)) ds.

Thus, we see that k(0) = 0 and k(a) is strictly increasing on (0, ∞), since for any a > 0,

k'(a) = ∫_0^∞ e^{-s} [ √(2βs) tanh(a√(2βs)) + 2aβs sech^2(a√(2βs)) ] ds > 0.

By numerical computation, k(1/√β) = ∫_0^∞ e^{-s} √(2s) tanh(√(2s)) ds ≈ 1.07461 > 1. It follows that there must exist a* ∈ (0, 1/√β) such that k(a*) = η_x(a*, a*) = 1. Monotonicity of k(a) then gives the desired result.

Now, for any a > a*, we intend to upgrade the previous result to x*(a) ∈ (0, a*). Fix x ≥ 0. By the definition of η and (ii), on the domain a ∈ [x, ∞), the map a ↦ η(x, a) must either first increase and then decrease to 0, or decrease directly down to 0. From (B.10), we have

η_a(x, x) = 1 - x ∫_0^∞ e^{-s} √(2βs) tanh(x√(2βs)) ds = 1 - k(x),

with k as in (B.11). Recalling k(a*) = 1, we have η_a(a*, a*) = 0. Notice that

η_aa(a*, a*) = -(2/a*) k(a*) - 2βa* + a* ∫_0^∞ 4βs e^{-s} tanh^2(a*√(2βs)) ds ≤ -2/a* + 2βa* < 0,

where the two inequalities follow from tanh(x) ≤ 1 for x ≥ 0 and from a* ∈ (0, 1/√β), respectively. Since η_a(a*, a*) = 0 and η_aa(a*, a*) < 0, we conclude that on the domain a ∈ [a*, ∞), the map a ↦ η(a*, a) decreases down to 0. Now, for any a > a*, since η(a*, a) < η(a*, a*) = a*, we must have x*(a) < a*.
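The characterisation k(a*) = 1 is easy to evaluate numerically. The following sketch (not from the paper) computes k by quadrature and locates a* by bisection; for β = 1 it should reproduce k(1/√β) ≈ 1.07461 from the proof above and the value a* ≈ 0.946475 reported in Remark 4.1 below.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

beta = 1.0

def k(a):
    # k(a) = a * int_0^inf e^{-s} sqrt(2*beta*s) tanh(a*sqrt(2*beta*s)) ds, cf. (B.11)
    integrand = lambda s: np.exp(-s) * np.sqrt(2*beta*s) * np.tanh(a*np.sqrt(2*beta*s))
    return a * quad(integrand, 0.0, np.inf)[0]

print(k(1/np.sqrt(beta)))                                   # ≈ 1.07461
a_star = brentq(lambda a: k(a) - 1.0, 1e-6, 1/np.sqrt(beta))
print(a_star)                                               # ≈ 0.946475 for beta = 1
```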
Example 2.2 (Real Options Model). Suppose d = 1 and X_s := |W_s|, s ≥ 0. Consider the payoff function g(x) := x for x ∈ R_+ and the hyperbolic discount function δ(s) := 1/(1+s) for s ≥ 0. The problem (2.4) reduces to v(x) = sup_{τ∈T} E_x[ X_τ / (1 + τ) ].

Figure 1: The free boundary s ↦ √(1 + (s - t)) with different initial times t.

… which implies (t, x) ∈ I_τ. Thus, ker(τ) ⊆ I_τ. It follows that (3.10) holds, as ker(τ) ⊆ S_τ ∪ ker(τ) = S_τ ∪ (I_τ ∩ ker(τ)) = ker(Θτ), where the last equality is due to (3.7).

… 1, as δ(t + s)/δ(t) ≥ δ(s)/δ(0) = δ(s) for all s, t ≥ 0. That is, Assumption 3.1 is automatically true under DI. Note that Assumption 3.1 is more general than DI, as it obviously includes the classical case of exponential discounting, characterized by (2.8). The main convergence result for (3.8) is the following:

Proposition 3.2. Let Assumption 3.1 hold. If τ ∈ T(X) satisfies (3.10) ker(τ) ⊆ ker(Θτ), then …

Theorem 3.1. Let Assumption 3.1 hold. If τ ∈ T(X) satisfies (3.10), then τ_0 defined in (3.8) belongs to E(X).

(4.6)   [a, ∞) ⊆ I_{τ_a}.

For x ∈ [0, a), we need the lemma below, whose proof is delegated to Appendix B.

Lemma 4.3. Recall T^x_a in (4.5). On the space {(x, a) ∈ R^2_+ : a ≥ x}, define η(x, a) := E_x[ a / (1 + βT^x_a) ]. …

The figure below illustrates x ↦ η(x, a) under the different scenarios a ≤ a* and a > a*. We now separate the case x ∈ [0, a) into two sub-cases:
1. If a ≤ a*, Lemma 4.3 (iii) shows that J(x; L*τ_a(x)) = η(x, a) > x, and thus
(4.7)   [0, a) ⊆ C_{τ_a}.
2. If a > a*, then by Lemma 4.3 (iii),
(4.8)   J(x; L*τ_a(x)) = η(x, a) …

Proposition 4.2. τ_a defined in Lemma 4.2 belongs to E(R_+) if and only if a ∈ [0, a*], where a* > 0 is characterized by

(4.9)   a* ∫_0^∞ e^{-s} √(2βs) tanh(a*√(2βs)) ds = 1.

Moreover, (4.10) …

Proof. … presented in the discussion above the proposition. By the proof of Lemma 4.3 in Appendix B.3, a* satisfies η_x(a*, a*) = 1, which leads to the characterization of a*. Now, for any τ ∈ T(R_+) with ker(τ) = [a, ∞) and a ∈ [0, a*], Lemma 4.1 implies τ ∈ E(R_+). For any τ ∈ E(R_+), set a := inf(ker(τ)). By Lemma 4.2, ker(τ) = [a, ∞) and τ_a ∈ E(R_+). The latter implies a ∈ [0, a*] and thus completes the proof.

Remark 4.1 (Estimating a*). With β = 1, numerical computation gives a* ≈ 0.946475. It follows that for a general β > 0, a* ≈ 0.946475/√β.

Remark 4.2. Fix a > a*, and recall x*(a) ∈ (0, a*) in Lemma 4.3 (iii). By (4.9), Θτ_a(x) = τ'_{x*(a)}(x) := 1_{[0, x*(a)]}(x) for all x ∈ R_+. Equivalently, ker(Θτ_a) = ker(τ'_{x*(a)}) = (x*(a), ∞). Since ker(τ'_{x*(a)}) = [x*(a), ∞) and x*(a) ∈ (0, a*), we conclude from (4.10) that τ'_{x*(a)} ∈ E(R_+).

Recall (3.12), which connects the naive and sophisticated behaviors. With the naive strategy τ̂ ∈ T(R_+) given explicitly in (4.4), Proposition 4.2 and Remark 4.1 imply τ̂ ∉ E(R_+). We may find the corresponding equilibrium as in Remark 4.3.

Remark 4.3. Set ã := 1/√β. By (4.4) and Remark 4.2, Θτ̂ = Θτ_ã = τ'_{x*(ã)} ∈ E(R_+). In view of the proof of Lemma 4.3 in Appendix B.3, we can find x*(ã) by solving η(x, 1/√β) = x, i.e.

(1/√β) ∫_0^∞ e^{-s} cosh(x√(2βs)) sech(√(2s)) ds = x,

for x. Numerical computation shows x*(ã) ≈ 0.92195/√β, and thus x*(ã) < a* by Remark 4.1. This verifies τ'_{x*(ã)} ∈ E(R_+), thanks to (4.10).

Remark 4.4. …

… author's attention. Special gratitude also goes to Traian Pirvu for introducing the authors to each other. Y.-J. Huang is partially supported by the University of Colorado (11003573).
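Remarks 4.2 and 4.3 can be illustrated numerically: one application of Θ maps the naive threshold ã = 1/√β to x*(ã), the unique solution of η(x, ã) = x on (0, ã). The sketch below is not from the paper; it reuses η from (B.10) and computes this root for β = 1, which should come out close to 0.92195 as stated in Remark 4.3 and hence below a* ≈ 0.946475, so that by (4.10) the resulting threshold policy is an equilibrium.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

beta = 1.0
a_tilde = 1.0 / np.sqrt(beta)        # naive threshold from (4.4)

def eta(x, a):
    # eta(x, a) = a * int_0^inf e^{-s} cosh(x*sqrt(2*beta*s)) sech(a*sqrt(2*beta*s)) ds, cf. (B.10)
    integrand = lambda s: (np.exp(-s) * np.cosh(x*np.sqrt(2*beta*s))
                           / np.cosh(a*np.sqrt(2*beta*s)))
    return a * quad(integrand, 0.0, np.inf)[0]

x_star = brentq(lambda x: eta(x, a_tilde) - x, 1e-6, a_tilde - 1e-9)
print(x_star)                        # ≈ 0.92195 for beta = 1, below a* ≈ 0.946475
```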
61,905
[ "963972", "2654" ]
[ "425705", "2583" ]
01486990
en
[ "shs" ]
2024/03/04 23:41:48
2013
https://hal.parisnanterre.fr/hal-01486990/file/French%20report%20Personal%20guarantees%20between%20law%20and%20consumer%20protection%20M.%20Bourassin.pdf
"Personal guarantees between commercial law and consumer protection", French Report
Manuella Bourassin, Professor, Director of the Centre of Business Civil Law and Economic Litigation (EA 3457), Co-director of the Master 2 in Notarial Law

1. Economic aspects

1.1 Are figures available in your country concerning the number of personal guarantees issued within a certain period of time? How are these figures determined?

In France, personal guarantees are not officially and systematically recorded. Indeed, no file concerning them specifically exists, nor will they be recorded in the new positive indebtedness file, the National Registry of loans to private individuals, which will be dedicated to consumer credit 1. However, statistics are drawn up by organisations that represent or supervise French banks, such as the Autorité de Contrôle Prudentiel et de Résolution, according to which, for example, 60% of property loans are secured by suretyships issued by specialised financial establishments. The lack of a general and official recording of personal guarantees is regrettable for several reasons. Firstly, for the main types of loans, it prevents us from knowing the nature of the guarantees taken out 2, as well as the capacity of guarantors 3. Secondly, to assess loan applicant solvency, a personal guarantee file would be very useful, since these guarantees constitute latent indebtedness and often lead to overindebtedness. Finally, in case of the death of the guarantor, a central personal guarantees file would enable heirs to exercise their inheritance options in better conditions 4.

2. Legislation

2.3 How has the law on personal guarantees developed in your country?

The French Civil Code of 1804 regulated only one personal security: the suretyship. Its provisions on the nature, scope, effects and extinguishment of suretyships make no distinction as to the capacity of the surety, creditor or principal debtor, nor as to the characteristics of the debts secured. From the 1980s, to protect sureties deemed to be the weakest and/or the most exposed to the dangers of suretyships, the legislation became more specialised. Alongside the common law set down in the Civil Code, rules were added to other codes, or uncodified laws were passed, specific to suretyships that secure corporate debt; suretyships for debts resulting from a home rental lease; suretyships taken out by a natural person to secure a consumer or property loan granted to a consumer; and suretyships signed between a "natural person surety" and a "professional creditor", regardless of the purpose of the secured debt. Furthermore, special rules specifying the fate of sureties in insolvency proceedings have been added to the Commercial Code (professional insolvency proceedings) and to the Consumer Code (procedures to treat the problem of overindebtedness among private individuals). The interaction between, on the one hand, common law and specific suretyship legislation and, on the other, the numerous special rules has not been sufficiently taken into account in the successive sector-based reforms. Litigation concerning suretyships has therefore increased greatly. The Cour de Cassation has not always managed to put things in order. On the contrary, some of its jurisprudences, dictated more by the willingness to protect sureties deemed to be weak than by the imperative of legal security and the guarantee function of the suretyship, have further compromised the efficiency of this security.
1 Consumer bill passed by the National Assembly on 3 July 2013 and by the Senate on 13 September 2013; see http://www.economie.gouv.fr/loi-consommation
2 1.2: the global importance of personal guarantees compared to real securities cannot therefore be determined. It is not possible either to estimate, within personal guarantees, the respective weight of personal securities recognised by the Civil Code (suretyships, independent guarantees and letters of intent) and innominate guarantees based on the common law on obligations (such as plurality of debtors, partial assignment of debt, undertakings to vouch for a third party) or on insurance law (insurance-loans).
3 1.2 and 1.3: it is therefore impossible to quantify the personal guarantees granted either by private individuals not carrying out any commercial or professional activity in relation to the loan granted, or by natural or juridical persons related to the secured enterprise, or by institutional guarantors.
4 See infra 3.2.7.
5 This reminds us of the two logics that govern consumer law today: firstly, protect the weak from the strong; secondly, regulate the market.
6 See infra 3.2.6.
7 A complete reform of personal securities was proposed by a committee chaired by Professor Michel Grimaldi, in a report submitted to the Justice Ministry on 31 March 2005 (see www.ladocumentationfrancaise.fr).
8 In the enabling statute n° 2005-842 dated 26 July 2005, the Government was not authorised by Parliament to carry out a global reform of personal securities, above all because it appeared inappropriate, from a democratic point of view, to have recourse to an order to deal with contracts that play such an important role in the daily lives of private individuals and that are likely to lead them into overindebtedness.
9 Civil Code, articles 2288 to 2320.
10 Civil Code, articles 2321 and 2322.
11 In particular, rules on the general accessoriness common to all guarantees, rules on the subsidiary nature of personal guarantees, or rules based on the contractual ethical imperative, such as the requirement of proportionality between the guarantee and the financial faculties of the guarantor, or the need for the guarantor to be informed of the first delinquency by the debtor.
12 For detailed reform proposals along these lines, consult our thesis.
Whereas detailed proposals along these lines were put forward by the doctrine 7 , the order n° 2006-346 dated 23 March 2006 that reformed securities mainly focused on real securities 8 . The suretyship was not modified in any significant way ; only the numbering of the articles in the Civil Code concerning it was changed 9 . Yes independent guarantees and letters of intent were recognised but only in two articles of the Civil Code that define them but do not detail their regime 10 . In France therefore, the reform of personal guarantee law has yet to be performed. When it finally takes place, it would be desirable on the one hand that rules be set down in the Civil Code common to all personal guarantees 11 and on the other hand that groups of special rules, some based on their accessoriness, reinforced, independent or indemnity-based, others based on the capacity of the guarantor (consumer-guarantor or acting for professional purposes) 12 . 2.1 Are there different statutory provisions governing personal guarantees given by private parties, given by commercial actors / professionals or given by consumers in your country? Are small and medium enterprises treated separately ? 2.2 Concerning codification of the statutory provisions: Are commercial and consumer guarantees covered by one act or are they dealt with in separate acts ? A consumer is "any natural person who is acting for purposes which are outside his or her trade, business, craft or profession" 13 . Suretyships granted by consumers are not specially governed by French law, but they are subject to not only all rules concerning sureties in general 14 , but also those concerning more particularly "natural person sureties". Indeed, in the Consumer Code, there are provisions protecting both natural person sureties who guarantee consumer or property loans taken out by a consumer-borrower 15 and natural person sureties contracting with a "professional creditor", regardless in this case of the nature of the debt secured and the capacity of the principal debtor 16 . The field of application of this second body of rules, that overlaps the first 17 , has given rise to serious difficulties of interpretation. Firstly, are "professional creditors" solely those whose profession is to provide credit ? The Cour de Cassation has ruled out this restrictive conception since 2009. It considers that "in the meaning given to articles L. 341-2 and L. 341-3 of the Consumer Code the professional creditor is he whose debt comes about in the course of his profession or is directly related to one of his professional activities" 18 . This broad interpretation is favourable to sureties, since they must benefit from the rules in the Consumer Code, even if their co-contractor is not an institutional creditor (e.g. a car salesman or seller of building materials who grants payment facilities to customers and receives suretyships in exchange). Then, what is meant by "natural person surety"? To limit application of the texts based on this qualification to consumer-sureties only, i.e. to sureties not acting in the course of their profession, a formal argument has been put forward : since these texts are set down in the Consumer Code and not the Civil Code, they should not benefit sureties acting in the course of their profession, in particular managers or members securing the debts of their business. 
To exclude sureties who are part of the indebted business, it was also advanced that these sureties do not need to be protected by rules of form aimed at making the consent more thorough (articles L. 341-2 and L. 341-3 of the Consumer Code), nor by information that the creditor must provide on the principal debtor during the lifetime of the suretyship (articles L. 341-1 and L. 341-6 of the Consumer Code), since these sureties are by their very capacity already informed. However, other arguments have been put forward to support undifferentiated application to all natural person sureties. In particular, the interpretation maxim "Ubi lex non distinguit nec non distinguere debemus", since the Consumer Code concerns all natural person sureties. But also the spirit of the Act of 1st August 2003, from which the litigious provisions of the Consumer Code come : this law "of economic initiative " having sought to improve the protection for entrepreneurs 19 , it seemed logical to apply it to sureties involved in the life of their business. The Cour de Cassation ruled on this delicate question of interpretation 20 by leaning in the direction most favourable to sureties. Since 2012 it has ruled that the provisions of the Consumer Code on 14 All common law in the Civil Code as well as the rules applicable to all sureties guaranteeing given debts, such as debts arising from a home rental lease or those of a business. 15 Articles L. 311-11, L. 312-7, L. 312-10 of the Consumer Code, based on the Act n° 78-22 dated 10 January 1978. See infra 3.2.2. Articles L. 313-7, L. 313-8, L. 313-9 and L. 313-10 of the Consumer Code, based on the Act n° 89-1010 dated 31 December 1989. See infra 3.1.6, 3.1.7, 3.2.3 and 3.2.4. 16 Article L. 341-1 of the Consumer Code, based on the Act n° 98-657 dated 29 July 1998. See infra 3.2.3. Articles L. 341-2 to L. 341-6 of the Consumer Code, based on the Act n° 2003-721 dated 1er August 2003. See infra 3.1.6, 3.1.7, 3.2.3 and 3.2.4. 17 The imperative of legal security should have led to the repeal of articles L. 313-7 to L. 313-10 of the Consumer Code when articles L. 341-1 and following were adopted. 18 Civ. 1 re , 25 June 2009, Bull. civ. I, n° 138 ; Civ. 1 re , 9 July 2009, Bull. civ. I, n° 173 ; Com. 10 January 2012, Bull. civ. IV, n° 2. 19 In particular it enabled a limited liability company to be incorporated without registered capital, as well as providing protection for individual entrepreneurs main homes through a declaration of immunity from distrain. 20 Therefore the practical implications are essential, since the rules of the Consumer Code concerned condition either the very existence of the suretyship (articles L. 341-2, L. 341-3 and L. 341-4 of the Consumer Code), or its joint and several nature (article L. 341-5 of the Consumer Code), or coverage of the accessories to the principal debt (articles L. 341-1 and L. 341-6 of the Consumer Code). natural person sureties are applicable "whether they are informed or not"21 . Therefore they also benefit managers and members of secured companies. Other rules protect only natural person sureties when the principal debtor is the subject of professional insolvency proceedings22 , or an overindebtedness procedure 23 . But here again it is certain that they benefit all sureties and not just consumers. 
Furthermore, the Commercial Code provisions that are favourable to natural person sureties of a business in difficulty were mainly inspired by the will to protect surety-managers to encourage them to initiate the procedure as early as possible and thereby increase the chance of saving the business. Since the suretyships granted by persons carrying out a professional or commercial activity are not subject to any specific regulations in France24 , all the rules of the Consumer Code or the Commercial Code concerning natural person sureties specifically are therefore applicable to them. Outside these special texts, the personal or professional links maintained by the surety and debtor do however have an effect. Firstly, in legislation, a text in the Civil Code concerns the cause (professional or not) of the surety's undertaking. This is article 1108-2, from the Act n° 2004-575 of 21 June 2004, which authorises hand written indications, required under pain of being declared void, to be replaced by electronic ones only if the security is taken out "by a person for the needs of their profession". Furthermore, in jurisprudence, it has been accepted since 1969 25 that the suretyship is a commercial one if the surety gains "a personal and patrimonial advantage" from it, which is the case for the managers of a debtor company, even if they do not have the capacity of trader (like managers of sociétés anonymes or limited liability companies). But it must be acknowledged that this qualification 26 has little impact, since there are no provisions specific to commercial suretyships and few rules in common law on commercial deeds concerning suretyships 27 . The Cour de Cassation however takes into consideration the quality of the surety, layman or "informed" 28 , when deciding to apply, or not to apply, the protections set down in contract common law. Several means of defence are thus refused to surety-managers : non-compliance with the requirement of proof of the suretyship 29 ; the deceit committed by the creditor concerning the financial circumstances of the debtor company30 ; the responsibility of the creditor for wrongful granting of credit to the debtor31 or for failure to warn the surety32 . If suretyship law is therefore particularly complex and not very coherent at present as regards the capacity of the surety, the law applicable to the other personal securities is incomplete. The new articles 2321 and 2322 of the Civil Code in no way deal with the capacity of the independent guarantor or the issuer of a letter of intent. We could be led to believe therefore that the regime of these securities does not vary depending on whether the guarantor is acting for professional purposes or not. In reality such is not the case. The order of 23 March 2006 introduces a ban in article L. 313-10-1 of the Consumer Code on taking out an independent guarantee for a consumer or property loan granted to a consumer-borrower. Furthermore, for home rental leases it provides that the independent guarantee can only be signed to replace the security deposit having to be paid by the tenant33 . Debts which, in practice, are most often secured by natural persons not acting for professional purposes, but for affective reasons, may not therefore be secured by an independent guarantee. As regards the letter of intent, no special text limits the capacity of the subscriber, but this capacity may have an impact if contractual responsibility is claimed. 
Indeed, if the issuer has taken out obligations to make his best efforts to do or not to do, the creditor will have to show proof that the issuer did not make his best efforts to avoid the default of the secured debtor. It is likely that this proof will be all the easier to provide the closer the professional links are between the secured enterprise and the issuer. The liability of a parent company could thus be easier to establish than that of the manager of the secured company or of a sister company34 . It would be appropriate to conclude this presentation of French law by looking at the capacity of the guarantor and specifying that company law (common law and rules specific to certain types of companies 35 ) has provisions on personal guarantees. It sets down the powers company representatives must have to obligate a company as guarantor 36 . In addition, in joint stock companies and limited liability companies, managers37 are banned from having these companies secure or endorse their own undertakings to third parties38 . Aspects of substantive law 3.1 General 3.1.1 Describe the distinction between dependent guarantees, e.g. a suretyship with a strong accessory relation to the secured debt, and independent guarantees, e.g. an indemnity or unconditional guarantee without such a relationship, in your country. 3.1.2 With regard to the accessoriness of guarantees: Are guarantees, or certain types of guarantees, only valid if they cover valid obligations? Are they valid only to the extent of the secured obligation? Since the order of 23 March 2006, the French Civil Code has recognised three personal securities : the suretyship, the independent guarantee and the letter of intent (article 2287-1). In each, the obligation to secure taken out by the surety, the independent guarantor or the letter issuer does not have an independent existence ; it must be attached to a principal obligation. In addition, the obligation to secure is at the service of this principal obligation, since performance thereof has the function of extinguishing it. For these two reasons, we can consider that all personal securities are accessories of the principal obligation. In other words, they all have a general accessory character. In one, the suretyship, this accessoriness is even more present since it is in play throughout the entire life of the security by having its regime (its existence, validity, scope, effects) depend on that of the secured debt. Three rules of the Civil Code dating from 1804 express the reinforced accessoriness of the suretyship. Firstly, "a suretyship may exist only on a valid obligation" (article 2289). Consequently, the nullity or inexistence of the principal obligation normally results in the disappearance of the suretyship39 . Then, "the suretyship may not exceed what is owed by the debtor, nor be contracted under more onerous conditions" (article 2290, paragraph 1) 40 . Otherwise, it "is not void: it is only reducible to the extent of the principal obligation" (article 2290, paragraph 3). Finally, the surety may set up against the creditor all the defences, i.e. all means of defence, "which belong to the principal debtor, and which are inherent to the debt. But he may not set up the defences which are purely personal to the debtor" (article 2313). 
The following are defences "inherent to the debt" 41 payment by the debtor 42 , offsetting between reciprocal debts between the creditor and the debtor (article 1294, paragraph 1), the nullity or resolution of the principal contract, conventional debt remissions (article 1287, paragraph 1), novation of the secured obligation (article 1281, paragraph 2) or the confusion between the persons of the creditor and the debtor (article 1301, paragraph 1). As for defences "purely personal to the debtor" 43 , article 2289, paragraph 2, gives us an example: when the debtor is a minor. The jurisprudence adds the other causes of incapacity, but also, for example, the waiving by the creditor to sue the debtor 44 or the defects affecting the consent of the debtor if the debtor did not himself request the cancellation of the principal contract in court 45 . It is mainly to avoid these three expressions of the reinforced accessoriness of the suretyship, greatly protective of sureties, that creditors had recourse from the 1970s, to other personal guarantees. The independent guarantee is distinguished from the suretyship by its independence. Before being introduced into the Civil Code by the order of 23 March 2006, the Cour de Cassation had specified this distinction by taking as criterion the object of the obligation to guarantee: "the undertaking that has as its object the principal debtor's own debt is not independent" 46 . To not be requalified as a suretyship, the independent guarantee had to therefore have precisely set amounts and durations, without it being necessary to consult the principal contract, and its performance should not be subordinate to the default of the principal debtor. This independence is now set down in article 2321 of the Civil Code: "an independent guarantee is an undertaking by which the guarantor binds himself, in consideration of a debt subscribed by a third party, to pay a sum either on first demand or subject to terms agreed upon. A guarantor may set up no defence depending on the guaranteed obligation 47 . Unless otherwise agreed, that security does not follow the guaranteed obligation". Since the order of 23 March 2006, the letter of intention is defined by article 2322 of the Civil Code as "an undertaking to do or not to do whose purpose is the support provided to a debtor in the performance of his obligation in respect of his creditor". The obligation of the issuer of the letter has therefore a purpose totally different from that of the principal obligation. If the agreed obligation to do or not to do is not satisfied, the issuer must repair the prejudice suffered by the creditor, in the conditions of common law contractual responsibility. For this reason, the letter of intent is frequently described as an indemnity based guarantee. Its accessoriness or independent nature is however discussed in the doctrine. Determining defences that can be opposed by the issuer is delicate, since it depends on the dual nature of the letter : as a security presenting a general accessory character it enables the issuer to oppose some defences arising out of the guaranteed contract 48 ; as an indemnity mechanism, it offers the issuer means of defence based on contractual responsibility law which may be related to defences inherent to the principal debt 49 . 3.1.3 As there is generally no direct monetary consideration for the guarantor, are personal guarantees seen as binding unilateral legal acts in your country? 
What are the consequences for interpretation in contrast to a contract imposing mutual obligations? Personal securities are, by nature, unilateral contracts 50 , since the guarantor subscribes to a undertaking 51 with respect to the creditor, whereas the latter does not generally contract any obligation. Even when the parties agree on obligations at the charge of the creditor 52 or that are imposed on him by the law or the courts, the unilateral character remains, for these obligations do not constitute the cause (in the sense of counterparty) of that to guarantee 53 . 47 By virtue of the principle of independence, the jurisprudence refuses that the guarantor oppose to the beneficiary the invalidity of the principal contract, its resolution, non-performance by the beneficiary, its performance by the debtor, its modification, its extinguishment in particular by offsetting or by transaction. The principle of independence does however have some exceptions : the illicit or immoral nature of the basic contract ; the patently abusive or fraudulent calling in of the guarantee by the beneficiary (article 2321, paragraph 2) ; the means of defence that law on companies in difficulty confers to all guarantors or, where necessary, to natural person guarantors (see supra 2.1 and 2.2). 48 The non-existence, invalidity or resolution of the principal contract, pronounced at the request of the debtor, or else full payment made by the latter. 49 If the secured debt is extinguished by payment, offsetting, remission of debt, confusion, novation, limitation, etc., the issuer should benefit from this extinguishment by the effect of the conditions of civil liability. Indeed, this extinguishment may either make the creditor's prejudice disappear (if the extinguishment results from a payment), or may prevent this prejudice from being attributed to non-performance of the obligations of the issuer of the letter (absence of a causal link between the responsibility generating event and the prejudice). See infra 3.1.9. 50 According to article 1103 of the Civil Code, the contract "is unilateral where one or more persons are bound towards one or several others, without there being any obligation on the part of the latter". 51 Undertaking to pay in the suretyship and independent guarantee, to do or not to do in the letter of intent. 52 For example, information obligations or the obligation to obtain the agreement of the guarantor to grant the debtor a term extension. When the guarantor is a credit establishment, a mutual guarantee company or an insurance company, the guarantee is consented in exchange for payment. This payment is usually borne by the principal debtor. In the exceptional cases in which it is placed at the expense of the creditor, the guarantee contract certainly becomes synallagmatic, since the cause of the obligation of the professional guarantor then lies in the obligation of the cocontractor, the creditor, to make this payment. The main consequence of the unilateral character is of a probative nature. The perfect proof of synallagmatic contracts is subordinate to their being drawn up in as many original copies as there are parties (article 1325). Since personal securities are not subject to this so-called "double original" formality, they are most often drawn up in one original copy that is retained by the creditor. 
The perfect proof of contracts in which "one party alone undertakes towards another to pay him a sum of money" depends on compliance with article 1326 of the Civil Code : the title must feature "the signature of the person who subscribes that undertaking as well as the mention, written by himself, of the sum or of the quantity in full and in figures". This probative formality, that only concerns civil contracts 54 drawn up under a private arrangement 55 , is applicable to the suretyship 56 and the independent guarantee 57 . However it does not come into play with respect to the letter of intent, since this letter never gives rise to a unilateral undertaking to pay a sum of money to the creditor 58 . 3.1.6 Do form requirements (writing, notarial deed) apply for the issuing of a personal guarantee in your country? Personal securities are in principle consensual contracts, i.e. they are validly formed by the sole exchange of consent between the guarantor and the creditor. While they may be signed before a notary 59 or countersigned by a lawyer 60 , there is no legal requirement to do so 61 . Given that this is in writing, it was traditionally only required as proof (Civil Code, article 1326 62 and article 1341 63 ) and, with respect to suretyships, to easily satisfy the requirement of the surety's specific consent (Civil Code, article 2292 64 ). But since the end of the 1980s, to ensure the surety binds themselves in perfect awareness of the nature, breadth and scope of their undertaking, several laws have imposed handwritten indications mainly concerning the amount, duration, even the joint and several nature of the surety's obligation and this, under pain of the security as a whole being declared void. Three types of suretyships have thus ceased to be consensual contracts to become solemn ones : the suretyship granted under private contract by a natural person to secure a consumer or property loan subscribed by a consumer- 53 For over forty years now, the Cour de Cassation has ruled that "the cause of the surety's obligation is the consideration for the credit granted by the creditor to the principal debtor" (Com. 8 November 1972, Bull. civ. IV, n o 278). 54 The proof of commercial deeds subscribed by traders is free (Commercial Code, article L. 110-3). 55 Since the Act n° 2011-331 of 28 March 2011, notarial deeds and private contracts countersigned by a lawyer are dispensed of the need to include the indications set down by law, both for reasons of proof and for the validity of legal deeds. 56 Its field of application is however greatly restricted since the Act of 1 st August 2003 imposed hand written indications under pain of being declared void in private suretyship contracts between a natural person surety and a professional creditor (Consumer Code, articles L. 341-2 and L. 341-3). See infra 3.1.6 and 3.1.7. 57 Com. 10 January 1995, Bull. civ. IV, n o 13 ; Com. 13 March 2001, n° 98-17133. 58 Com. 25 October 2011, n° 10-25607. 59 The creditor gains numerous advantages from this : since the indications imposed by law in private contracts do not have to be provided in deeds signed before a notary (nor in deeds countersigned by a lawyer), the risks of inefficiency of the security for want of proof or validity are considerably reduced ; the amount and duration of the suretyship do not have to be restricted ; the notarial deed constitutes a very efficient writ of execution if the guarantor does not honour his undertaking. 
For the guarantor, recourse to a notary (or lawyer) has the essential advantage of receiving customised information and advice on the characteristics of the security. 60 Deed countersigned by a lawyer was created by the Act n° 2011-331 of 28 March 2011. 61 The sole conventional securities drawn up by a notary, under pain of being declared void, are the mortgage (Civil Code, article 2416) and the antichresis (Civil Code, article 2388). 62 Cf. supra 3.1.3. 63 Civil contracts for sums in excess of 1,500 euros must in principle be proven in writing. 64 This provision means the willingness of the surety to commit themselves must not be presumed and implies strictly interpreting this will. However it does not impose any particular form or the use of specific terms to express the surety's consent. borrower 65 ; the suretyship securing the obligations arising out of a home rental lease 66 ; the suretyship by private contract between a natural person surety and a professional creditor 67 . With respect to independent guarantees and letters of intent, consensualism does not know such exceptions. Are personal guarantees seen as contracts for the performance of a continuing obligation? What are the necessary conditions for the guarantee to be terminated? 3.1.7 Describe the possible extent of the guarantee obligation: Could it be unlimited, even as a universal guarantee, or does it have to be limited to a maximum amount? Is it possible to limit the guarantee obligation to a part of the secured debt or to certain included assets, and how is the partial guarantee affected by partial payment of the debt? Is it possible to limit the guarantee period? Does the guarantee cover accessories and/or costs of legal remedies? Are guarantees valid for future debts and/or conditional obligations? As for their scope, the three personal securities recognised by French law must be envisaged separately, for their distinctive characters -reinforced accessoriness, independent, indemnity-basedand the imperative rules specific to the suretyship are decisive here. Concerning in the first place the suretyship, a limit has featured in the Civil Code since 1804 : due to its reinforced accessoriness, the amount, duration and terms of the suretyship may not exceed the amount, duration and terms of the principal obligation, under pain of being reduced to the measure of this obligation (article 2290). For example, if the surety secures a conditional obligation, his/her 65 Consumer Code, article L. 313-7 : "The natural person who undertakes by virtue of a private contract to stand surety for one of the transactions coming under chapters I or II of this part of the code must, under penalty of its undertaking being rendered invalid, precede its signature with the following handwritten statement, and only this statement : "in standing surety for X……., up to the sum of ……….. covering payment of the principal, interest and, where appropriate, penalties or interest on arrears and for the duration of ………… I undertake to repay the lender the sums owing on my income and property if X…… fails to satisfy the obligation himself". Consumer Code, article L. 
313-8 : "Where the creditor asks for a joint and several guarantee for one of the transactions to which chapters I or II of this part of the code relate, the natural person who is standing surety must, under penalty of its undertaking being rendered invalid, precede its signature with the following handwritten statement : "In renouncing the benefit of execution defined in article 2021 of the French civil code and obliging me, jointly and severally, with X………., I undertake to repay the creditor without being able to ask that the latter first institute proceedings against X…". 66 Act of 6 July 1989, article 22-1, from the Act of 21 July 1994: "The person who is to stand surety precedes his/her signature by reproducing in hand writing the amount of the rent and the rent revision conditions such as they appear in the rental contract, by the hand written indication explicitly and unequivocally expressing their awareness of the nature and scope of the obligation they are contracting and the hand written reproduction of the previous paragraph. The lessor provides the surety with a copy of the rental contract. These formalities are required under pain of the suretyship being declared void". 67 Consumer Code, article L. 341-2 : "Any natural person who undertakes to act as surety for a professional creditor through a private agreement shall, if his undertaking is not to be declared null and void, affix the following words above his signature in his own handwriting, and these words only : "By standing surety for X ..., for a maximum sum of ... in respect of payment of the principal, interest and, should this prove necessary, any arrears interest or penalties, for a term of..., I hereby undertake to pay the sum due to the lender from my own income and property should X... fail to pay it himself". Consumer Code, article L. 341-3 : "When the professional creditor requests a joint and several guarantee, the natural person standing surety shall, if his undertaking is not to be declared null and void, affix the following words above his signature in his own handwriting : "By waiving the benefit of discussion defined in Article 2021 of the Civil Code and committing myself jointly and severally with X…, I hereby undertake to pay the creditor without any right to demand that he prosecute X... beforehand". Cf. supra 2.1 and 2.2 ; cf. infra 3.2.2. obligation must be subject to the same condition and not be pure and simple68 . The parties may not therefore contractually modify the reinforced accessoriness of the suretyship to make the surety more severely obligated than the principal debtor. However it is important to stress that the legislator and the courts, on the contrary, set aside this character from time to time, to serve interests deemed to be superior to those of the sureties. Thus, when the debtor is subject to insolvency proceedings, the surety may be bound to pay more and earlier than the debtor69 , for this type of procedure is a manifestation of the risk the security is designed to provide protection against and makes protection of the creditor even more imperious. On condition the reinforced accessoriness of the suretyship is respected, the parties were traditionally free to sign either an indefinite suretyship, i.e. not having any limits other than those of the principal debt, or a defined suretyship, i.e. having specific limits both in terms of the amount and the duration of the surety's undertaking. 
The suretyship that is indefinite with respect to the amount may take two forms depending on whether secured debts are already present at the time of signing or are future debts. If the suretyship covers one or more debts already existing and determined, such as one or more loans of a certain amount, it is limited to this amount. If it covers one or more future undetermined debts that will arise between the debtor and the creditor after being signed, its amount is unknown ab initio; this so-called "omnibus" suretyship automatically adapts to the changing indebtness circumstances of the principal debtor without it being necessary to complete or reiterate the suretyship. Regardless of whether it takes one or the other of these two forms, the indefinite suretyship "extends to the accessories of the debt" 70 (Civil Code, article 2293), without this needing to be specified 71 . However the parties may stipulate a clause to the contrary. The indefinite suretyship in terms of duration does not have a specified expiry term ; it takes on the duration of the principal debt. If the secured obligation has a set duration (e.g. a lease or a three year loan), when the term is up this puts an end to the surety's obligation to cover it 72 . If the principal obligation has an undetermined duration, the suretyship is also deprived of an expiry term. By virtue of the principle of prohibiting perpetual undertakings, it may then be unilaterally terminated at any time by the surety73 . The accessory rule in no way requires that the surety's undertaking strictly covers the scope of the principal debt. This is stated by article 2290, paragraph 2, of the Civil Code : "it may be contracted for a part of the debt only, and under less onerous conditions". Given this, the suretyship can be defined, in terms of its amount, for example by just a fraction of the principal debt or just the capital or by the setting of a ceiling. When the amount of the suretyship is thus limited, the accessories of the principal debt are only covered if the surety specifically undertakes to cover them. According to the Cour de Cassation, "when the suretyship only secures a part of the debt, it only expires when this debt is fully paid, with partial payments made by the principal debtor being firstly deducted from the unsecured part of the debt, failing agreement to the contrary" 74 . The suretyship can also be limited in terms of duration. Regardless of the duration of the principal obligation, the suretyship may be given a separate expiry term, which may be explicit75 , or just implicit 76 . In suretyships for existing debts, the stipulation or subsequent discovery of an expiry term is without effect, since the debts existing when the suretyship is signed must be covered by the surety, regardless of the duration of his undertaking. On the contrary, in suretyships for future debts, the term specific to the surety's obligation plays a decisive role : when this terms is up it puts an end to the cover period, in such a way that the surety only guarantees the debts that arose prior to the term. While forming a defined rather than indefinite suretyship was traditionally determined by the exclusive will of the parties, the scope of this freedom has been considerably reduced over the last twenty five years by several special texts. In certain cases, suretyships restricted in terms of the amount are encouraged. 
Indeed, the suretyship consented, either by a natural person to secure an individual entrepreneur's debts (Act n° 94-126 of 11 February 1994, article 47-II, paragraph 1), or by a natural person for the benefit of a professional creditor, regardless of the nature of the principal debt, but by notarial deed77 (Consumer Code, article L. 341-5 based on the Act n° 2003-721 of 1st August 2003), cannot be at once unlimited in terms of its amount and joint and several78 . Since the joint and several suretyship is very protective of the creditors' interests79 , these creditors are therefore incited to limit the suretyship to "an expressly and contractually determined global amount". In other hypotheses, much more detrimental to contractual freedom, the suretyship defined in terms of amount and duration is imposed for validity purposes. This is the case each time the suretyship must feature, under pain of being declared void, the following hand written indication: "By standing surety for X ..., for a maximum sum of ... in respect of payment of the principal, interest and, should this prove necessary, any arrears interest or penalties, for a term of..., I hereby undertake to pay the sum due to the lender from my own income and property should X... fail to pay it himself". Since the Act of 1st August 2003 80 , it is all private contract suretyships signed by a natural person surety in favour of a professional creditor that must comply with this indication (Consumer Code, article L. 341-2 81 ). This means, on the contrary, that the choice between defined suretyship and indefinite suretyship now only exists in three hypotheses: if the suretyship is signed by means of a notarial deed or private contract countersigned by a lawyer 82 ; or if the suretyship is signed by private contract by a juridical person ; or else if the suretyship is signed by private contract between a natural person surety and a non-professional creditor. As regards independent guarantees, the freedom of the parties as to the scope of the guarantor's undertaking is much greater since it is restricted neither by the reinforced accessoriness specific to the suretyship nor by the mandatory provisions set out above that only concern suretyships. However this contractual liberty has a limit : so as to not be requalified as a suretyship, the independent guarantee must have a different object from that of the principal debt 83 . So it must not be necessary to refer to the basic contract to determine its scope 84 . The amount is thus set on signing the guarantee and is not limited by that of the principal debt. However, the accessories of this debt are not covered due to the independence of the guarantee, unless they are included in the global amount stipulated. The independent guarantor's obligation is usually accompanied by an expiry term 85 , certain or uncertain, close to that of the principal obligation, so the guarantee is efficient for the entire performance of the basic contract. The expiry of this term is particularly serious for the creditor, since it totally extinguishes the guarantor's undertaking, even if the principal contract is still being performed and new debts can arise 86 . To avoid this extinguishment, the creditor can request an extension. If the guarantor refuses, he risks having the creditor demand immediate payment of the guarantee. 
Finally, as regards the scope of the letter of intent, there can be no question of its amount since the issuer does not commit himself to pay a sum of money 87 , but to provide one or more services so the debtor may be in a position to honour their undertakings to the creditor. The parties must therefore determine what are the obligations to do or not to do subscribed by the issuer, their intensity (best efforts or result obligations) and their duration. If the issuer does not comply with his obligations and there results a prejudice for the beneficiary, the latter can call in the contractual responsibility. The 80 Prior to this, this indication was reserved for private contract suretyships signed by a natural person surety to secure a consumer or property loan taken out by a consumer-borrower (Consumer Code, article L. 313-7, based on the Act of 31 December 1989). 81 This text is applicable to suretyships signed since 1st February 2004. 82 These two types of instruments are dispensed of any handwritten indications required by law (Civil Code, article 1317-1 and the Act of 31 December 1971, article 66-3-3, based on the Act n° 2011-331 of 28 March 2011). 83 See supra 3.1.1. 84 Com. 18 May 1999, Bull. civ. IV, n o 102. The Cour de Cassation however recognises the efficiency of so-called "sliding" or "reducible" guarantees, whose amounts vary over time as the principal debt is executed like work progress whose price is guaranteed (Com. 2 October 2012, n° 11-23401). 85 Nothing prevents the independent guarantee from having an undetermined duration, but this is rare since it runs the risk for the creditor of losing the totality of the guarantee at any time by the guarantor's unilateral termination. This risk however can be mitigated by stipulating a notice clause. 86 Com. 26 January 1993, Bull. civ. IV, n o 28 ; Com. 12 July 2005, Bull. civ. IV, n° 161 ; Com. 5 June 2012, n° 10-24875. 87 If the letter contains the signatory's undertaking to take the place of the debtor, there is a strong risk of the letter being requalified as a suretyship (Com. 21 December 1987, Bull. civ. IV, n o 281). amount of the repairable prejudice does not necessarily coincide with the amount of the secured debt 88 . It may be less, in particular if the letter features a clause limiting the amount of the repairable prejudice 89 ; it may also be greater 90 , since article 2290 of the Civil Code, that prevents the surety from being bound more severely than the principal debtor, does not apply to the letter of intent 91 . Which recourse can the guarantor take against the debtor after the guarantor has fulfilled his obligation and paid the creditor? Regardless of the advantage the surety may obtain from the guarantee operation 92 , the surety remains bound for the principal debtor. Because the surety is only a subsidiary debtor, the Civil Code acknowledges that the surety can take recourse against the principal debtor : an exceptional recourse before payment, aimed at protecting the surety against the risk of non-reimbursement, in the hypotheses restrictively envisaged by articles 2309 and 2316 of the Civil Code 93 ; two types of recourse after payment, that the surety has every interest in invoking cumulatively, since each has specific advantages. The surety has, on the one hand, a personal recourse because he/she is acting on behalf of and in the interest of the debtor by virtue either of a specific mandate or the business management quasi-contract. 
This recourse, governed by article 2305 of the Civil Code, has two advantages. The first concerns its assessment basis : it enables the surety to claim from the debtor full payment of all expenses incurred directly or indirectly by performance of the suretyship. The surety may thus demand repayment of expenses separate from the payment made to the creditor, for example interest on arrears on the overall sum paid to the creditor 94 , expenses incurred in recovering his/her debt, or compensatory damages if his/her payment causes a particular prejudice. The second advantage of the personal recourse lies in the independence of its regime compared to that of the creditor's action, in particular with respect to extinctive prescription. On the other hand, the surety has a right of subrogation, triggered by the payment he/she makes for the principal debtor (Civil Code, articles 1251, 3° and 2306 95 ). It is not as broad as the personal recourse, since it only authorises repayment of the sums paid to the creditor and interest on these sums at the legal rate 96 . However it is safer, since the creditor's debt, with all its accessories, in particular the various actions against the debtor or third parties and the other guarantees covering the same debt, is transmitted to the surety. If the creditor loses, through his own fault, these actions or guarantees, the surety can use the means of defence provided for by article 2314 of the Civil Code 97 (often called "benefit of subrogation"), which is one of the most efficient in suretyship common law. 88 It is however frequent, when the issuer has subscribed to a result obligation, that the jurisprudence aligns the amount of the repairable prejudice on that of the unpaid debt. 89 Com. 17 May 2011, Bull. civ. IV, n° 78. 90 For example, if non-performance of the issuer's obligations and the consecutive default of the principal debtor compromise the creditor's financial situation. 91 Com. 6 May 2003, n° 00-22045. 92 A payment for professional sureties ; for sureties involved in the affairs of the debtor company, a financial advantage resulting from the benefits the latter is likely to obtain through the granting or maintaining of the secured loan ; a patrimonial and/or moral interest for sureties affectively close to the debtor. 93 These are hypotheses in which the risk of non-repayment is aggravated : either the debtor is insolvent and proceedings are taken against the surety, or the debtor is about to become insolvent, so it is urgent to bring him into the proceedings ; or the surety sees his obligation extended beyond what was initially provided for in the contract or beyond a reasonable deadline. The anticipated recourse by the surety can then take three forms : either bringing the debtor into the proceedings ; or, when the debtor is already in insolvency proceedings, the declaration of the debt the surety has against him (since this debt "comes about on the date of the surety's undertaking" and not on that of the payment : Com. 3 February 2009, Bull. civ. IV, n° 11) ; or a request to be indemnified for the risk of having to pay (Com. 29 October 1991, Bull. civ. IV, n° 316 ; Civ. 1 re , 25 May 2005, Bull. civ. I, n° 225). 94 Civ. 1 re , 22 May 2002, Bull. civ. I, n° 138. 95 Personal subrogation in favour of he who pays "for others" is provided for by article 1251, 3° of the Civil Code. In suretyship law, this general rule is restated in article 2306 of the Civil Code : "a surety who has paid the debt is subrogated to all the rights which the creditor had against the debtor". 96 For example, Civ. 1 re , 29 October 2002, Bull. civ. I, n° 257.
Can independent guarantors and subscribers of letters of intent also seek relief at law after payment98 against the secured debtor? Even though the link between their undertaking and the debt concerned by this undertaking is much weaker than in the presence of a suretyship, the performance of their undertaking extinguishes this debtor's debt in proportion 99 . It is logical therefore that they should be able to take action against him. Their recourse may be based on three different elements. Firstly, the contract frequently binding them to the debtor. Then, business management, if the guarantor bound himself unbeknownst to or without having received instructions from the debtor. Finally, personal legal subrogation, even if independent guarantors and subscribers of letters of intent are not bound for the debtor to the very debt of the debtor, since jurisprudence accepts that article 1251, 3° of the Civil Code can benefit "he who pays a debt that is personal to him if, by this payment, he discharges with respect to their common creditor he who must bear the definitive charge of the debt"100 . If we accept recourse through subrogation, it appears coherent to acknowledge that independent guarantors and subscribers of letters of intent have the right to invoke article 2314 of the Civil Code, which sanctions the loss through the fault of the creditor of the rights that should have been transferred by subrogation. Several appeal courts have ruled to the contrary on the grounds that this text only envisages the discharge of sureties101 . We see here how regrettable it is that the regime of the independent guarantee and letter of intent has not been further developed. What is the effect if the debtor deals or colludes with the creditor to the detriment of the guarantor? If the creditor and/or the principal debtor act fraudulently against the rights of the guarantor, the guarantor can invoke the special texts that punish fraud or, failing this, the general principle "fraus omnia corrumpit" to be partially or even totally released. Here are three examples. Firstly, jurisprudence accords the joint and several surety, who has not taken part in the proceedings between the creditor and the debtor, the right to file a third opposition, by claiming their fraud102 . Then, with regard to the independent guarantee, article 2321, paragraph 2 of the Civil Code states that "a guarantor is not bound in case of patent abuse or fraud of the beneficiary or of collusion of the latter with the principal". Finally, when the principal debtor is subject to professional insolvency proceedings, article L. 650-1 of the Commercial Code103 provides that "creditors may not be held liable for harm in relation to credits granted", except in three cases, including fraud. Any guarantor can avail of this possibility to claim not only damages but also the invalidity or reduction of his undertaking104 . If the creditor and principal debtor come to an agreement causing prejudice to the guarantor, without however this being fraudulent, the question arises as to whether this agreement can be opposed to the guarantor or opposed by the guarantor. The solution usually depends on the accessory, independent or indemnity-based character of the guarantee and the meaning given to the reinforced accessory character of the suretyship. But some legal provisions and jurisprudence rulings take liberties with these solutions of principle. 
If the guarantee is independent of the principal obligation, non-fraudulent agreements between the creditor and the debtor cannot be opposed to the guarantor105 or by the guarantor106 . If the guarantee is of an indemnity nature, the solution is less evident, for it is necessary to combine the independence of the object of the guarantor's obligation and the rules of contractual responsibility. Thus, the issuer of a letter of intent can certainly not refuse to perform his obligations to do or to not do on the grounds that the creditor has granted the secured debtor a debt remission or remission of proceedings against him. But the creditor's application for indemnification could be rejected in the presence of such remissions because they could remove the causal link between the prejudice claimed by the creditor and the failings by the issuer to meet his obligations. In terms of suretyship, the scope of some agreements between the creditor and debtor is set down by law. Article 1287, paragraph 1, of the Civil Code thus accepts that "a remission or agreed discharge granted to a principal debtor releases the sureties". On the other hand, according to article 2316 of the Civil Code, "a mere extension granted by the creditor to the principal debtor does not discharge the surety, who may in that case sue the debtor to compel him to pay". In the absence of a legal provision, the opposability of agreements between the creditor and the debtor depends on their qualification as defences "inherent to the debt" or defences "which belong to the principal debtor". Indeed article 2313 of the Civil Code states that the former are opposable, unlike the latter, but without defining them. The dualist analysis of the obligation, inspired by work on the German doctrine, could enable this distinction to be clarified : the defences inherent to the debt are apparently those that affect the obligation itself, whereas defences purely personal to the debtor apparently refer to the sole right of the creditor to proceed. Some decrees would seem to confirm this analysis. The Cour de Cassation for example ruled on 22 May 2007 that "the waiving by the creditor of the right to proceed for payment against the principal debtor does not signify extinguishment of the principal obligation nor the recourse by the surety against this debtor, in such a way that the clause (to waive any legal proceedings whatsoever) is not an obstacle to the creditor proceeding against the joint and several surety"107 . But in many other rulings, the solution seems to be less based on the reinforced accessory character thus interpreted than on a teleological reasoning, based on the interests the judges wish to favour (those of the surety over those of the creditor or vice versa). The jurisprudence concerning conventional changes to the term of the principal obligation reveals such opportunistic and incoherent solutions. Indeed, the principle of the accessory is set aside in an event of default108 , whereas it comes into play if the term is extended109 , in both cases failing the surety's will to the contrary. It is consequently very difficult to summarise the position of French law with regard to agreements between creditors and debtors to the detriment of sureties. This difficulty is all the more important given that all the solutions set out up to here, based on the accessory or independent nature of the guarantee, can be ruled out by the legislator or the courts to give priority to interests other than those of the guarantor or the creditor 110 . 
This is especially true when the principal debtor is the subject of insolvency proceedings. The agreements signed between the debtor and his creditors are treated differently in amicable proceedings taken on behalf of a company in difficulty and in proceedings concerning an overindebted private individual. In the composition procedure, all guarantors 111 can avail of the provisions of the agreement observed or certified (Commercial Code, article L. 611-10-2, paragraph 1 112 ), in order to favour the rescue of the debtor company 113 . In the conventional procedure to deal with overindebtedness, creditors benefiting from a suretyship have a greater chance of being paid fully and regularly, since the remissions and deadlines they grant to the debtor are inopposable by the sureties 114 . The guarantee function of the suretyship then takes priority over its reinforced accessory nature, at the risk moreover of compromising the recovery in the overindebted party's situation 115 . Describe the relationship between the different parties in the case of a plurality of, even personal and real, security rights. Creditors are free to cumulate several personal and/or real securities to secure the same debt and to choose among them the means to obtain payment 116 , within the limits however of fraud or abuse 117 and at the risk, furthermore, that the loss of one of them by the creditor's fault be invoked by a surety as grounds for it being discharged on the basis of article 2314 of the Civil Code 118 . When a creditor is secured by several sureties and one of them is released without having paid him, the question arises as to the impact of this release on the obligation of the co-fidejussors to settle. The parties can subordinate the very existence of this obligation to the maintaining of the other suretyships. In that case, the release of one of the sureties means the others are also released 119 . In the absence of such a condition, the solution depends on the type of release in question. If the extinguishment of one of the surety's obligation to settle does not result from a payment by the surety in question, but is nonetheless accompanied by the creditor being paid, in principle the creditor 110 See supra 3.1.7. 111 Both those who consented a personal security and those having assigned or given property in guarantee (real security for others) ; both natural person guarantors and juridical person guarantors. 112 Inserted by the Act n° 2005-845 of 26 July 2005. The same solution was chosen beforehand by the Cour de Cassation in the framework of the former amicable settlement procedure (Com. 5 May 2004, Bull. civ. IV, n° 84). 113 The opposability of the measures in the composition plan by all guarantors does in no way arise from the reinforced accessory nature of the suretyship, since it also benefits independent guarantors. It was inspired by the desire to encourage company managers, who are very often guarantors for their company's debts, to take early action in treating the difficulties of their companies by initiating proceedings even before payments are suspended. 114 The solution does not feature in the Consumer Code, which unfortunately does not specify the impact on guarantors of the various steps taken to deal with overindebtedness. It is of jurisprudence origin (Civ. 1 re , 13 November 1996, Bull. civ. I, n° 401). 
115 The Cour de Cassation has indeed ruled that the debtor cannot oppose the remissions and deadlines previously granted to the debtor to the solvens surety taking recourse for reimbursement (Civ. 1 re , 15 July 1999, Bull. civ. I, n o 248 ; Civ. 1 re , 28 March 2000, Bull. civ. I, n o 107). 116 Com. 2 June 2004, Bull. civ. IV, n o 106 (freedom to call a surety for payment rather than an independent guarantor). 117 Execution Procedure Code, articles L. 111-7 and L. 121-2, on useless or abusive execution measures. Commercial Code, article L. 650-1, which in the framework of professional insolvency proceedings, rules out the non-liability in principle of the creditors for the loans granted when "the guarantees obtained for the loans or credits are disproportionate". This disproportion can be revealed by an excessive accumulation of guarantees (Com. 27 March 2012, Bull. civ. IV, n° 68). 118 See supra 3.1.8. 119 Civ. 1 re , 18 May 1978, Bull. civ. I, n o 195. loses his right to payment against the co-fidejussors. Such is the solution that the Cour de Cassation adopted in 2010 in the hypothesis of an offsetting of the reciprocal debts of the creditor and one of his sureties120 . If, on the contrary, one of the surety's obligations to settle is extinguished without any payment to the creditor, the co-fidejussors are not released121 , but the amount of their contribution is reduced accordingly. Thus, a debt remission granted to one of the sureties releases the others for the personal share of the beneficiary surety122 . The same should be true in case of merger or novation. When a creditor is secured by several sureties and none can avail of the above-mentioned causes of release, another question arises, that of the amount that can be claimed from each. Article 2302 of the Civil Code provides that, "where several persons have become surety of the same debtor for a same debt, each one is liable for the whole debt". But this obligation for the whole debt is set aside if the surety can avail of the benefit of division, i.e. the right to ask the creditor to divide his action and reduce it to the share of each of the co-fidejussors (Civil Code, article 2303). This benefit is only available to non joint and several sureties called "single" sureties123 . Since joint and several co-fidejussors are much more numerous in practice 124 , they can therefore be called to pay the totality of the debt. Recourse for reimbursement125 may be taken by the solvens surety against joint and several sureties for the same debt, which is very useful in case of insolvency of the principal debtor. Whether the recourse be personal (based on article 2310 of the Civil Code126 ) or by subrogation (based on article 1251, 3° of the Civil Code127 ), it enables the solvens joint and several surety to ask co-fidejussors only for what he has paid over and above his share and this by dividing his recourse to only request from each surety their share in the debt. The calculation of this contribution share may turn out to be complex, for all suretyships do not necessarily have the same scope128 . To avoid these difficulties, sureties can organise their recourse in advance in a special agreement. In the presence of a principal debt secured at once by a surety and by a real security consented by a third party, for a long time the Cour de Cassation assimilated this security to a personal suretyship to enable the solvens surety to take recourse against the third party who constituted this real security 129 . 
Since the High jurisdiction has ruled that the real security constituted by a third party does not imply any personal undertaking and that this is solely a real security 130 , it is not certain that this recourse can still be taken. The solvens surety can still however be subrogated in the real accessory rights of the creditor concerning the goods of the third party who constituted the real security. In terms of suretyships, a last recourse hypothesis deserves to be presented. When recourse for reimbursement by the surety is covered by a sub-surety 131 , the surety can take recourse against this sub-surety after paying the creditor 132 . This is a personal recourse 133 subject to the regime of that of the surety against the principal debtor. The sub-surety, who secures the surety's debt against the principal debtor, and not that of the initial creditor, cannot avail themselves of the defences inherent to the debt of the principal debtor with regard to this creditor to combat the recourse of the surety. However, it can claim the liability of the surety for having wrongly omitted to invoke these defences 134 , subject however to a clause that limits or waives liability and that first rank professional sureties do not fail to stipulate. As regards independent guarantees, it is even more frequent that the guarantor be counterguaranteed 135 . So there is a dual independence of the counter-guarantee -on the one hand vis-à-vis the basic contract, and on the other vis-à-vis the initial guarantee 136 -which prevents the counter-guarantor from using the defences arising out of these two contracts 137 . However, in 2010, the Cour de Cassation tempered this principle of independence by ruling that "the independence of the counter-guarantee with respect to the first rank guarantee does not prevent the principal debtor, bound by the first demand independent guarantee, from taking liability proceedings against any one of the guarantors who, through their fault, forced him to pay" 138 . Consumer protection Describe the concept and provide the definition of a consumer as related to personal guarantees in your country. Does it include directors or members of companies ? As things stand, French law on personal guarantees does not feature any rule specific to consumerguarantors, i.e. natural persons acting for purposes which are outside their trade, business, craft or profession 139 . Remember that only the suretyship is in reality concerned, since the order of 23 March 2006 banned independent guarantee coverage of debts most often secured by consumers (consumer or property loans : Consumer Code, article L. 313-10-1 ; the debts arising out of a home rental lease : Act 6 July 1989, article 22-1) and because the creditors themselves do not take the risk of having natural persons sign letters of intent when these persons have no control or management powers over the debtor company. The latest laws that partially reformed the suretyship focused on "natural person sureties", especially when they contract with a "professional creditor". The Cour de Cassation refuses to limit the application of these special texts to just sureties not acting for professional purposes. Sureties involved in the secured company, in particular its managers or members, benefit from the protective provisions set down in the Consumer Code and in the Commercial Code 140 . 
When ruling on the common law grounds of contracts, in particular in the light of the contractual good faith requirement, the Cour de Cassation does however make a distinction between "informed sureties" and "uninformed sureties" 141 . This jurisprudence is hardly satisfactory. Indeed, the High jurisdiction has never defined these two categories of sureties, nor even accepted presumptions with respect to them. It is content to control the criteria adopted by the courts of first instance and courts of appeal and lets them independently assess these criteria (principally, the skills of the surety, their professional experience, their relations with the secured debtor) in each individual case. This assessment in concreto leads to unpredictable qualifications. Thus, the managers or members of the debtor company are not necessarily considered as "informed sureties". They are only so considered if the creditor proves their effective involvement in managing the secured company and their knowledge of the financial situation of this company 142 or at least their knowledge of its field of activity resulting from former or concomitant professional experience 143 . The qualification of "informed surety" can on the contrary be rejected if the manager was, when signing the suretyship, a novice, inexperienced and/or fictitious 144 . He may then benefit from the protections reserved for "uninformed sureties", in particular the cancellation of the suretyship on the grounds of fraudulent non-disclosure by the creditor as to the financial situation of the secured company or on the grounds that the creditor was contractually liable for failing to warn about the dangers of the undertaking. As for the spouses and other persons close to the principal debtor, they only benefit from these means of defence if their capacity of "uninformed sureties" is established. No presumptions are made with respect to this. It therefore may happen that a spouse, relation or friend of the principal debtor be qualified as an "informed surety", if for example their profession enabled them to understand the scope of the undertakings taken 145 or, as regards the spouse of the manager of the debtor company, due to the sole community regime enabling the spouse to benefit from "financial interests in the company" 146 . The capacity of guarantor is therefore currently taken into account in French personal security law in a very imperfect manner. Since the capacity of "consumer" is not defined by positive law, the following questions on consumerguarantors will be answered using concepts that come closest to this concept, in other words, in legislation, that of "natural person surety" and, in jurisprudence, that of "uninformed surety". Is there any pre-contractual duty to inform a consumer-guarantor in your country? What are the legal consequences in case of violation ? Pre-contractual information for sureties on the characteristics and dangers of their undertaking takes several different forms and is sanctioned differently depending on whether it is imposed a priori by special texts or discovered a posteriori by the courts on the grounds of contract common law. Under the influence of consumer law, several preventive measures aimed at informing the consent of what are deemed to be the weakest sureties, have been imposed since the 1980s. In some hypotheses, the formal information precedes the surety's undertaking to ensure his decision is as informed as possible. This takes two forms. 
On the one hand, the provision of documents to the suretyship candidate. He who is planning to secure a consumer loan or a property loan must be given a copy of the prior offer given to the consumer-borrower by the financial establishment 147 . On the other hand, the granting of a cooling off period to the suretyship candidate. This preventive measure is provided solely for the property loan suretyship 148 . If the natural person surety commits himself before the end of a period of ten days following reception of the loan offer, the Cour de Cassation considers that the suretyship is void 149 . This solution can be criticised, for, given the silence of the law as regards the sanction for non-compliance with this period, forfeiture of the right to interest, the usual sanction in the rules on loan offers, could be preferable. The formal information requirement takes other forms and has a very wide scope at the time when the surety commits himself. When a suretyship is signed to secure the debts arising out of a home rental lease, the lessor must provide the surety with a copy of the rental contract, and this under pain of the suretyship being declared void 150 , "without it being necessary to establish the existence of a grievance" 151 . In this rental debt suretyship, but also in private contract suretyships granted by a natural person to secure a consumer or property loan taken out by a consumer-borrower 152 and, more broadly, in the private contract suretyship between a "natural person surety" and a "professional creditor" 153 , the pre-contractual information provided to sureties takes the form of a handwritten indication ad validitatem on the principal characteristics of the undertaking 154 . While the actual wording of the indication is not imposed by the Act of 6 July 1989, it is, however, imposed by the Consumer Code, which accepts "only" the wording it stipulates. Such a requirement can only encourage quibbling. Fortunately, however, most of the small differences between the letter of the law and the content of the indications, invoked in bad faith by sureties, were not upheld when put to the Cour de Cassation 155 . The High jurisdiction was also measured with respect to the sanction for non-compliance with the formal requirement. Indeed, while the nullity of the suretyship is not subordinate to proof of prejudice suffered by the surety 156 , this is only a relative nullity, which the surety may, after the fact, waive, for the formalism has as its goal "to protect the interests of the surety" 157 . Furthermore, when the irregularity only concerns the indication of the joint and several nature of the undertaking 158 , nullity is ruled out, the creditor simply being unable to avail himself of this joint and several liability 159 . The role of jurisprudence in pre-contractual information for sureties is not limited to the interpretation of the special texts introducing preventive measures. It also arises every time the courts, on the grounds of contract common law, discover, after the fact, an obligation to inform or warn. The sanction for the silence of the creditor as to the financial difficulties of the principal debtor when signing the suretyship, based on deception 160 , is also classically justified by the existence of a pre-contractual obligation to inform, itself based on the requirement of good faith 161 .
It is also the requirement of contractual loyalty, which the Cour de Cassation has been imposing on credit establishments since 2007, that underlies the duty to warn "uninformed sureties" as to the risks of the planned operation and/or the disproportionate nature of the undertaking to be taken out 162 . This duty is, together with the previously discussed handwritten indications, one of the means of defence most frequently used by sureties to escape their undertaking 163 . This shows the extent to which pre-contractual information for sureties as to the nature and scope of their undertaking has become essential. 3.2.3 Does your legal system impose continuous duties to inform the consumer-guarantor during the guarantee period? What are the legal consequences in case of violation of such a duty? Over the last thirty years, the legislator has intervened on several occasions to impose on creditors two types of information during the guarantee : firstly, annual information ; secondly, information when the debtor defaults. The annual information is prescribed, in an imperative manner 164 , by four texts, each having a specific field of application : article L. 313-22 of the Monetary and financial Code 165 concerns the suretyship by a natural or juridical person, in favour of a credit establishment, of financial facilities granted to a company ; article 47-II, paragraph 2, of the Act n° 94-126 of 11 February 1994 concerns the undetermined duration suretyship granted by a natural person to secure the professional debts of an individual entrepreneur ; article 2293, paragraph 2, of the Civil Code 166 concerns the indefinite suretyship 167 given by a natural person, regardless of the legal status of the principal debtor ; finally, article L. 341-6 of the Consumer Code, resulting from the Act n° 2003-721 of 1st August 2003, focuses on the suretyship signed between a "natural person" and a "professional creditor", regardless of the nature and amount of the principal debt 168 . This multitude of texts and application criteria is hardly compatible with the demands of legal security and the search for efficiency in suretyships. These reproaches are all the more justified since the annual information obligation regime varies from one text to another and has serious shortcomings which are sources of disputes. Firstly, the object of the information is not exactly the same in all four texts. Three of them 169 order creditors to inform their sureties, between 1st January and 31 March of each year, not only as to the amount of the principal debt 170 on 31 December of the previous year, but also as to the term of the suretyship if it has a set duration or as to the faculty of unilateral termination if it is of an undetermined duration. Article 2293, paragraph 2, of the Civil Code does not impose this information as regards the duration of the suretyship and, as regards the principal debt, it is at once more flexible and vaguer when it provides that the surety "shall be informed by the creditor of the evolution of the amount of the debt secured and of those accessories at least once a year at the date agreed between the parties or, failing which, at the anniversary date of the contract". Then, the differences are even more marked as regards the scope of the partial forfeiture sanctioning the lack of information : article L.
313-22 of the Monetary and financial Code and article 47-II, paragraph 2, of the Act of 11 February 1994, concern the "interest due since the previous information up to the date of communication of the new information" ; article L. 341-6 of the Consumer Code provides for forfeiture, for the same period of "penalties or interest on arrears" ; article 2293, paragraph 2, of the Civil Code is more rigorous, by providing for the loss of "all accessories of the debts, costs and penalties", without limiting the forfeiture in time. Finally, since important conditions for application of the information duty are not specified anywhere in the texts, there are numerous disputes about beneficiary sureties 171 , the way the information is provided 172 , the performance duration of the information duty 173 or the sanction for breach thereof 174 . The other information imposed during the lifetime of the suretyship, i.e. when the principal debtor defaults, is not regulated in a satisfactory manner either. The information on "the first payment difficulty" by the debtor is dealt with by three provisions : article L. 313-9 of the Consumer Code 175 on the suretyship consented by a natural person to secure a consumer or property loan subscribed by a consumer-borrower ; article 47-II, paragraph 3, of the Act n° 94-126 of 11 February 1994 concerning the natural person surety guaranteeing the professional debt of an individual entrepreneur or a company ; article L. 341-1 of the Consumer Code 176 taking in more generally all suretyships subscribed by a "natural person" in favour of a "professional creditor". In these texts, the "first payment difficulty" is defined in two different ways. The articles 47-II, paragraph 3, of the Act of 1994 and L. 341-1 of the Consumer Code qualify it as a payment difficulty that is "not settled by the end of the month in which said payment is due", whereas, according to article L. 313-9 of the same code, this is the incident "characterised as being liable for registration in the file 177 ". However the three texts edict an identical sanction : partial forfeiture of "penalties or interest on arrears payable between the date of this first incident and that on which notification was given". The Cour de Cassation has recently specified that the forfeiture can also concern sums due by virtue of a penalty clause 178 . Given the numerous imperfections in French law on information due to sureties during the lifetime of the contract, it is desirable that the reform of personal securities, whenever it takes place, simplifies, 171 The courts are constant in ruling, under cover of the "ubi lex…" rule, and probably also to avoid a dispute about the real knowledge of sureties integrated into the secured company, that annual information obligations also benefit surety-managers (e. g. Com. 25 May 1993, Bull. civ. IV, n o 203). 172 No text specifies how creditors should inform sureties. The jurisprudence accepts that the information be given by simple letter, but on condition the creditor can prove, not only that it was sent, but also the content thereof (Civ. 1 re , 17 November 1998, Bull. civ. I, n o 321). However the creditor does not have to establish that the information was received (Com. 2 July 2013, n° 12-18413). 173 According to the Cour de Cassation, the information obligation must be complied with until extinguishment of the principal debt, which authorises the surety to avail of it even after the judgement condemning the surety has become an enforceable judgement (Cass., ch. 
mixte 17 November 2006, Bull. ch. mixte, n° 9). This solution can be criticised, since the information remains due at a time when the surety must be aware of the outstanding amount of the principal debt. 174 Can the contractual responsibility of the creditor be claimed while the four texts studied provide for a special sanction ? Since the start of the 2000s, the Cour de Cassation has ruled that, failing deceit or gross negligence by the creditor, the lack of annual information is sanctioned only by the forfeiture of the accessories of the principal debt (Com. 25 April 2001, Bull. civ. IV, n° 75 ; Civ. 1 re , 4 February 2003, Bull. civ. I, n° 35). and clarifies positive law and that it reserves this information to those guarantors who do not already have it, i.e. mainly consumer-guarantors. Do any limitations in terms of amount or duration apply to guarantees provided by consumers? All private contract suretyships signed since 1 February 2004 by a "natural person surety" in favour of a "professional creditor" must contain, under pain of being declared void, a handwritten indication setting out the exact amount and duration of the surety's undertaking (Consumer Code, article L. 341-2, based on the Act n° 2003-721 of 1st August 2003) 179 . The aim of this mandatory indication of the scope of the suretyship is to prevent excessive undertakings. Moreover, French law punishes, on different grounds, suretyships that are patently disproportionate ab initio to the surety's assets. Article L. 341-4 of the Consumer Code 180 , applicable to all suretyships subscribed since 5 August 2003 181 by a "natural person" 182 in favour of a "professional creditor" 183 , enables the surety to be entirely discharged 184 if he/she can prove that his/her "undertaking was, at the time of signing, manifestly disproportionate to his/her property and income" 185 . 179 See supra 3.1.7. 180 Based on the Act n° 2003-721 of 1st August 2003. 181 The Cour de Cassation refuses to apply this text retrospectively (Cass., ch. mixte, 22 September 2006, Bull. ch. mixte, n° 7). With respect to suretyships signed before 5 August 2003, the disproportion can be punished on other grounds. Suretyships consented by a natural person surety to secure a consumer or property loan assigned to a consumer-borrower come under article L. 313-10 of the Consumer Code (based on the Act n° 89-1010 of 31 December 1989), the wording of which is identical to that of article L. 341-4. The praetorian requirement of proportionality governs the others. Indeed, in 1997 the Cour de Cassation accepted the liability of a credit establishment that had a surety-manager subscribe to a patently disproportionate undertaking, and this on the grounds of contractual good faith (Com. 17 June 1997, Macron, Bull. civ. IV, n° 188). In 2002, the High Jurisdiction toughened its jurisprudence with respect to sureties integrated in the debtor company by subordinating their indemnification to proof, not only of a mathematical disproportion between their undertaking and their property, but also of deceit committed by the creditor as to the financial situation of the surety (Com. 8 October 2002, Nahoum, Bull. civ. IV, n° 136). 182 Since 2010, the Cour de Cassation grants the benefit of article L. 341-4 of the Consumer Code to surety-managers in ruling that "the informed nature of the surety is indifferent for the application of this text" (for example, Com. 19 October 2010, n° 09-69203 ; Civ. 1 re , 12 July 2012, n° 11-20192).
This solution can only be approved, since the discharge of sureties who subscribed to a patently disproportionate undertaking is part of the fight against overindebtedness of private individuals ; in the name of social justice and human dignity, it is legitimate to preserve all natural persons from this risk regardless of their functions or skills. It is furthermore coherent to have surety-managers benefit from overindebtedness prevention measures to the extent that, since the Act n° 2008-776 of 4 August 2008, they are eligible for overindebtedness treatment procedures (Consumer Code, article L. 330-1). 183 When article L. 341-4 of the Consumer Code is not applicable (i.e. for a suretyship signed before 5 August 2003 and/or by a juridical person surety), the Cour de Cassation also rules that non-professional creditors do not commit a fault by having an excessive suretyship signed (Com. 13 November 2007, Bull. civ. IV, n o 236). 184 The surety's excessive undertaking is not only reducible, for the Cour de Cassation considers that the sanction set down by article L. 341-4 of the Consumer Code ("a professional creditor cannot avail of…") "is not assessed by the measure of the disproportion" (Com. 22 June 2010, Bull. civ. IV, n° 112). 185 The disproportion must be proven by the surety who claims it (Com. 22 January 2013, n° 11-25377). It is independently assessed by the courts of first instance and appeal (Civ. 1 re , 4 May 2012, Bull. civ. I, n° 97), based however on criteria controlled by the Cour de Cassation. For example it imposes on these courts to assess the "property and income declared by the surety", and not the surety's effective assets (Com. 14 December 2010, Bull. civ. IV, n° 198). It checks that the disproportion has indeed been assessed on the basis of all the elements of the surety's situation, both assets (property and income) and liabilities (in particular, other surety undertakings : Com. 22 May 2013, n° 11-24812). In the event of a plurality of joint and several sureties, it also checks that the proportionality has indeed been assessed individually, for each of them (Com. 22 May 2013, n° 11-24812). In the presence of a suretyship claimed to be excessive, it is very frequent that the surety invokes, not just this special text, but also a failing by the creditor to perform his general warning duty 186 . Indeed this warning must be personalised in relation to the "financial capacities and risks of indebtness" of the "uninformed surety" 187 and, in its absence, the contractual responsibility of the credit establishment can only be invoked if a disproportion is proven 188 . If, despite all these preventive or corrective rules, the natural person surety ends up in an inextricable financial situation because of his obligation to secure, he can ask to benefit from an overindebtedness treatment procedure 189 , even if his undertaking was excessive at the time of signing 190 . If recovery is not possible, the personal restoration procedure enables the totality of the suretyship debt to be written off 191 . Under what conditions is the consumer-guarantor entitled to withdraw or to revoke the contract ("cooling off period") ? A cooling off period of ten days is only granted to natural person sureties planning to secure a property loan granted to a consumer (Consumer Code, article L. 312-10) 192 . However no period is provided for after the signing of the suretyship during which the surety may change his mind and retract. 
To the extent that private contract suretyships signed by a "natural person surety" in favour of a "professional creditor" must, under pain of being declared void, be of a fixed duration 193 , unilateral termination by the surety has a very limited field of application. It can only concern the two types of suretyships whose duration may still be undetermined, i.e. those signed by notarial deed or by private contract countersigned by a lawyer, and those signed by private contract between a natural person surety and a non-professional creditor. Most texts on the annual information for sureties impose on creditors the obligation to remind them either of the term of the suretyship if it has a fixed duration, or of their faculty to unilaterally terminate if it does not have a fixed duration, under pain of forfeiture of the interest (or penalties) due from the previous provision of information until the date on which the new information is communicated 194 . Describe the restrictions placed on standard contract terms concerning guarantees by consumers. The efficiency of a guarantee depends to a great extent on the freedom given to creditors to organise the protection of their interests 195 . Thus, the suretyship was for a long time considered an efficient security, for contractual freedom had pride of place in it. Traditionally, in the Civil Code, the freedom of the parties was only restricted by the reinforced accessory nature of the suretyship 196 . So creditors could freely choose their sureties, the form of the contract, the scope and terms of the guarantee 197 , and deprive sureties of means of defence, in particular through clauses waiving their entitlement to the benefits of seizure and sale, of division, of subrogation or to information on the financial situation of the principal debtor. A large number of these stipulations had become boilerplate clauses in the standard contracts drawn up by credit establishments. Since the start of the 1980s, numerous rules of public policy, favourable to sureties, have come about with regard to suretyships. Essentially, and in particular when the surety is a "natural person" contracting with a "professional creditor", the suretyship regime is no longer merely suppletive but has become mandatory. The content of the contract no longer depends on the imagination of the parties and the power of the creditor to impose clauses in favour of his payment, but on the restrictions imposed by the numerous one-off reforms of the suretyship and the no less numerous rulings by the Cour de Cassation handed down against creditors. Various clauses, the very terms of which are sometimes dictated by law, now condition the validity of the suretyship 198 . Other clauses may not be stipulated, under pain of being deemed not written 199 . This legal and jurisprudential interventionism, greatly influenced by consumer law and, upstream, by the doctrines of contractual solidarism, certainly hinders the efficiency of the suretyship and leads creditors to seek guarantees that provide greater freedom and security. Since the independent guarantee and the letter of intent are not greatly regulated 200 , they do meet some needs of creditors. But those dispensing credit will not find in them efficient substitutes for the suretyship unless the guarantor acts for professional purposes 201 . Are guarantees issued by family members of the debtor or persons with a close relationship to the debtor governed by special regulations ?
Neither suretyship common law nor the special texts concerning it feature rules specific to undertakings subscribed by a member of the family or persons with a close relationship to the principal debtor 202 . However there are provisions on securities in patrimonial family law 203 . In matrimonial regime law, two texts protect guarantor-spouses. Firstly, article 1415 of the Civil Code, based on the Act n° 85-1372 of 23 December 1985, provides that the suretyship subscribed by a spouse with common property 204 , alone, enables the creditor to seize this spouse's separate property and income but not that common to both 205 . To not limit the freedom of the spouses, especially when they run a business, article 1415 does not subordinate the validity of the suretyship to the dual consent of the spouses. But to protect the family patrimony from the dangers of the security 206 , it only authorises seizure of common goods on condition the suretyship was "contracted with the express consent of the other spouse who, in that case, does not obligate his separate property". Furthermore, article 1387-1 of the Civil Code, based on the Act n° 2005-882 of 2 August 2005, features a special extinguishment cause for debts and securities granted by spouses as part of company management. Indeed, at the time of the divorce of these spouses, the court of first instance can decide to discharge the surety-spouse who is not the manager. The reason for this rule is clear : it is to avoid that the spouse, who generously stood surety for the professional activity of his/her partner, be crushed under the weight of the suretyship debt after the divorce. The scope of the discharge of the suretyspouse is however uncertain, for the text does not specify whether this discharge is opposable to the creditor or whether it only affects intra conjugal relations. Fortunately some court of first instance and court of appeal jurisdictions favoured this second interpretation, which preserves the creditor's right to proceed and limits the discharge to matrimonial regime liquidation operations 207 . Inheritance law, for its part, protects the heirs of the surety. In case of decease the surety, article 2294 of the Civil Code states that "the undertakings of the sureties pass to their heirs". Since 1982, the Cour de Cassation has tempered this principle of transmission when the suretyship secures future debts : the decease of the surety puts an implicit extinctive term to his cover obligation 208 , in such a way that only the debts that arose before the decease are carried over to the heirs 209 . Even when thus limited, the transmission of the security may constitute a very heavy charge for the heirs, for, if they accept the succession purely and simply, the suretyship debt will be recovered from their personal patrimony in case of insufficient assets in the estate 210 . The suretyship also presents a specific danger : it is often unknown to heirs when they accept the succession, not only because the contract is usually drawn up in one original copy 211 , kept by the creditor, but also because there is no central file of personal guarantees 212 . The reform of inheritance law by the Act n° 2006-728 of 23 June 2006 brought a corrective solution to this lack of knowledge : the discharge, partial or total, of the heir accepting purely and simply the succession (Code Civil, article 786, paragraph 2). 
The scope of this judicial discharge leads one to wonder : is the heir discharged of the debt itself, or only of the obligation to use his own assets to pay it if the estate assets are insufficient ? The doctrine comes down mostly in favour of this second interpretation, which modifies the basis of the right to proceed and not the quantum of the debt. All these texts recently adopted in family patrimonial law reveal to what extent personal securities can give rise to serious conflicts of interest. They classically bring head to head the interests of the guarantor, those of the creditor and those of the principal debtor, but they also affect third parties to the initial guarantee operation. Securities law and the numerous laws it borders on must provide solutions to these conflicts. It is clear that over the last thirty years, French law has tended to obscure the function of guarantees, which is the payment of creditors, and has favoured the protection of guarantors and their relations, even when the guarantee is consented for professional purposes, since the notion of consumer-guarantor is unknown to French law and the principal legal protection criterion lies in the capacity of the "natural person" guarantor. In other words, and to repeat the title of the report entrusted to us, French law on personal guarantees does not achieve a correct balance between the requirements of business life and those of consumer protection. It is now leaning so clearly towards the latter that it is compromising the efficiency of all personal guarantees. Let us hope that a future reform will put security and freedom back into the core of personal guarantee law.
175 Based on the Act n° 89-1010 of 31 December 1989.
176 Based on the Act n° 98-657 of 29 July 1998.
177 This is the national file containing information relating to instances of deliberate non-payment of loans granted to natural persons for non-professional purposes (Consumer Code, article L. 333-4).
178 Civ. 1 re , 19 June 2013, n° 12-18478.
Com. 10 January 2012, Bull. civ. IV, n° 2 ; Civ. 1 re , 8 March 2012, Bull. civ. I, n° 53.
Commercial Code, article L. 622-26, paragraph 2 (opposability of failure to declare debts), L. 626-11 (opposability of the provisions of the rescue plan), L. 622-28, paragraph 1 (opposability of the interest ceasing to be incurred in the rescue procedure), L. 622-28, paragraph 2 and L. 631-14 (suspension of proceedings during the observation period of the rescue or reorganization procedure). Moreover, these texts do not only benefit sureties, but all "natural persons who have consented a personal security or having assigned or given property in guarantee".
Consumer Code, articles L. 331-7-1, 2°, L. 332-5, L. 332-9 (debts which have been settled on the debtor's behalf by a surety or co-obligor shall not be eligible for writing off, in whole or in part).
Subject to the special texts concerning institutional guarantors (credit establishments, mutual guarantee companies, insurance companies).
The Cour de Cassation refuses this qualification and thereby the extinguishment of the surety's cover obligation in case of a change concerning the debtor company or the creditor company, but not jeopardizing their very existence, and when relations between the debtor and the surety change, particularly in case of a surety-manager ceasing to exercise his functions or in case of a divorce between the secured spouse and the surety-spouse. On the contrary, implicit expiry terms are constituted by the decease of the surety(Com. 29 June 1982, Bull. civ. IV, n o 258) and the disappearance without liquidation (in particular by merger or breakup) of the surety, debtor or creditor company. Article L. 341-5 of the Consumer Code does not mention notarial suretyships specifically, but it is solely by restricting in this way the scope of application of this provision that it is possible to avoid contradiction with article L. 341-2 of the same code, that prohibits indefinite private contract suretyships. The Cour de Cassation confirmed this interpretation(Com. 6 July 2010, Bull. civ. IV, n° 118). In the contrary situation, the stipulation of the joint and several nature of the suretyship or waiving the benefit of seizure and sale must be deemed to have not been written. It deprives the surety of the benefit of seizure and sale and the benefit of division (Civil Code, articles 2298 to 2304). See infra 3.1.10. "A surety is discharged where the subrogation to the rights, mortgages and prior charges of the creditor, may no longer take place in favour of the surety, by the act of that creditor. Any clause to the contrary is deemed not written". Before paying or indemnifying the creditor, independent guarantors and subscribers of letters of intent can doubtless not seek relief at law against the secured debtor, for anticipated recourse is a legal favour only granted to sureties. This extinguishment results from their general accessory nature (see supra 3.1.1). Civ. 1 re , 23 February 1988, Bull. civ. I, n o 50 ; Com. 9 May 1990, Bull. civ. IV, n o 146. Rennes, 6 November 1991, JurisData n o 048834 ; Paris, 28 April 1994, JurisData n o 021848. Civ. 1 re , 10 December 1991, Bull. civ. I, n o 348. Inserted by the Act n° 2005-845 of 26 July 2005. However such an application has little chance of success, since the Cour de Cassation, since 2012, has defined fraud in a very strict manner, by assimilation to penal fraud, by demanding the creditor be aware he is causing harm(Com. 2 October 2012, n° 11-23213 ; Com. 16 October 2012, Bull. civ. IV, n° 186). For example, granting a loan to a company in an irremediably difficult situation, even if this is to obtain a personal security, is not sufficient to characterise a fraud(Com. 27 March 2012, n° 11-13536). For example, extending the term of the basic contract cannot be opposed to an independent guarantor to have his undertaking maintained beyond its own term. This result can only be achieved if the guarantor consents to having his own obligations extended (see supra 3.1.7). For example, a debt remission or remission of proceedings against the debtor have no impact on an independent guarantor. Com. 22 May 2007, Bull. civ. IV, n o 136. 
The jurisprudence gives priority to the requirement of the express nature of the suretyship (Civil Code, article 2292) and the principle of the relative effect of agreements (Civil Code, article 1165), in ruling that the event of default on this guarantee, based on a clause of automatic forfeiture for failure to perform, is inopposable to the surety (e. g. Civ. 1 re , 20 December 1976, Bull. civ. I, n o 415 ; Civ. 1 re , 18 February 2003, n° 00-12771). Old jurisprudence from a court of appeal gives the surety an option : the surety may request to benefit from the extension or to discharge at the term initially agreed(Lyon, 6 January 1903, DP 1910. Somm. 1). Com. 3 November 2010, n° 09-16173.The solution of this decision, based on the extinguishment of the principal debt consecutive to the offsetting between the debt of one of the sureties and that of the creditor, could however be abandoned, for the Cour de Cassation ruled in 2012 that this offsetting did not extinguish the secured debt(Com. 13 March 2012, Bull. civ. IV, n° 51). In case of debt remission granted to one of the sureties, this absence of release of co-fidejussors is specifically provided for in article 1287, paragraph 3, of the Civil Code. In case of novation with respect to one of the sureties, it is retained by the Cour de Cassation(Com. 7 December 1999, Bull. civ. IV, n o 219). Civ. 1 re , 18 May 1978, Bull. civ. I, n o 195 ; Civ. 1re, 4 January 2005, n° 02-11307. If a single surety pays the creditor after opposing to him the benefit of division, the surety has no recourse against the other sureties since this means of defence enables him to only have to pay his share in the debt. The clause of joint and several liability between sureties is a formal clause in civil suretyships. If the suretyships are of a commercial nature, joint and several liability is presumed. Jurisprudence rules out the anticipated recourse in articles 2309 and 2316 of the Civil Code, which can only be taken against the principal debtor (e. g.Com. 11 December 2001, Bull. civ. IV, n o 196). This does not prevent the solvens caution from having the co-fidejussors made party to the proceedings, each for their share and portion (Civ. 1 re , 15 June 2004, Bull. civ. I, n o 169). "Where several persons have been sureties for the same debtor in regard to the same debt, a surety who has satisfied the debt has a remedy against the other sureties, for the share and portion of each of them". "Subrogation takes place by operation of law : 3° For the benefit of the person who, being bound with others or for others to the payment of a debt, was interested in discharging it". When the sureties have taken out unequal undertakings, "the fraction of the debt having to be borne by each of the sureties must be determined in proportion to their initial undertaking" (Civ. 1 re , 2 February 1982, Bull. civ. I, n o 55). To calculate this recourse, the jurisprudence accepted that the undertaking of the "real surety" was equal to the value of the goods allocated to the guarantee (Civ. 1 re , 25 October 1977, Bull. civ. I, n o 388). Civil Code, article 1116. For example, Civ. 1 re , 10 May 1989, Bull. civ. I, n o 187. On the jurisprudence that rejects nullity of the suretyship for deception and/or refuses claims of damages made by the surety, when the surety works within the debtor company, see supra 2.1 and 2.2. See supra 3.1.6 and 3.1.7 the indications with respect to the amount, duration and joint and several nature of the surety's obligation. 
For example, since the Act n° 84-148 of 1st March 1984, article 2314 of the Civil Code (former article 2037) prohibits clauses that prevent the surety, regardless of who this surety is, from being discharged in case of loss, through the fault of the creditor, of a right that should have been transmitted to the surety by subrogation. In some suretyships, the stipulations of joint and several liabilityand waiving the benefit of seizure and sale are deemed non written if the amount of the surety's undertaking is not limited (see supra 3.1.7). The Cour de Cassation, for its part, paralyses clauses indicating that the surety is perfectly aware of the debtor's situation or that the surety dispenses the creditor from having to provide him with information on the debtor, if the creditor stipulated such clauses in the knowledge of the economic difficulties of the debtor (Civ. 1 re , 13 May 2003, Bull. civ. I, n o 114 ;Com. 25 February 2004, n° 01-14114). In suretyships for future debts, it also deprives clauses of their effect that place debts that come about after the death of the surety, at the charge of the surety's heirs(Com. 13 January 1987, Bull. civ. IV, n o 9). On the other protections of these heirs, see infra 3.2.7.200 See supra 2.3, 3.1.1 and 3.1.2. 201 See supra 2.1 and 2.2.202 However jurisprudence protects sureties effectively close to the principal debtor by qualifying them most often as "uninformed sureties". See supra 2.1, 2.2 and 3.2.1.203 On company law that applies to managers or members standing surety, as well as their relations, see supra 2.1 and 2.2 204 Article 1415 appears in the chapter of the Civil Code governing the matrimonial regime of property acquired during marriage. The Cour de Cassation also applies it to sureties married under the universal community regime (Civ. 1 re , 3 May 2000, Bull. civ. I, n o 125).205 The creditor often encounters difficulties of proof for if he wishes to seize the account with the income of the surety-spouse, he must overturn the presumption of community posed by article 1402 of the Civil Code.
114,979
[ "750998" ]
[ "461303" ]
01487000
en
[ "shs" ]
2024/03/04 23:41:48
2015
https://hal.parisnanterre.fr/hal-01487000/file/French%20report%20on%20personal%20guarantees%20M.%20Bourassin.pdf
Manuella Bourassin, Professor
The French Civil Code of 1804 only regulated one personal security : the suretyship. Its provisions on the nature, scope, effects and extinguishment of suretyships make no distinction as to the capacity of the surety, creditor or principal debtor, nor as to the characteristics of the debts secured. From the 1980s, to protect sureties deemed to be weakest and/or the most exposed to the dangers of suretyships, the legislation became more specialised. Alongside the common law set down in the Civil Code, rules were added to other codes or uncoded laws were passed specific to suretyships that secure corporate debt ; suretyships for debts resulting from a home rental lease ; suretyships taken out by a natural person to secure a consumer or property loan granted to a consumer ; suretyships signed between a "natural person surety" and a "professional creditor", regardless of the purpose of the secured debt. Furthermore, special rules specifying the fate of sureties in insolvency proceedings have been added to the Commercial Code (professional insolvency proceedings) and to the Consumer Code (procedures to treat the problem of overindebtedness among private individuals). The interaction between, on the one hand, common law and specific suretyship legislation and, on the other, the numerous special rules, has not been sufficiently taken into account in the successive sector-based reforms. Therefore litigation concerning suretyships has increased greatly. The Cour de Cassation has not always managed to put things in order. On the contrary, some of its jurisprudences, more dictated by the willingness to protect sureties deemed to be weak than by the imperative of legal security and the guarantee function of the suretyship, have further compromised the efficiency of this security. Over the last thirty years, French law on suretyships has become complex, inaccessible, incomprehensible, incoherent and unpredictable. It is increasingly turned towards the protection of sureties either to avoid having them take out or make reckless and ruinous undertakings, or to encourage the creation and durability of companies whose debts are secured by these sureties 1. This suretyship crisis has led creditors to have recourse to new personal guarantees, in particular the independent guarantee, letter of intent or mechanisms in the law on obligations enabling an additional debtor to be obtained (in particular, plurality of debtors with or without a stake in the debt, partial assignment of debt, undertaking to vouch for a third party). But the efficiency sought was not always achieved because the lack of regulation for these replacement guarantees made qualifying them and determining their regime an uncertain task. Numerous innominate guarantees were thus requalified as suretyships ; others had the mandatory rules of this catch-all security applied to them. Hence it became clear at the start of the 21st century that all personal guarantees and not just suretyships needed to be reformed in depth. Whereas detailed proposals along these lines were put forward by the doctrine 2, the order n° 2006-346 dated 23 March 2006 that reformed securities mainly focused on real securities 3. The suretyship was not modified in any significant way ; only the numbering of the articles in the Civil Code concerning it was changed 4.
Admittedly, independent guarantees and letters of intent were recognised, but only in two articles of the Civil Code that define them but do not detail their regime 5. In France therefore, the reform of personal guarantee law has yet to be performed. When it finally takes place, it would be desirable on the one hand that rules be set down in the Civil Code common to all personal guarantees 6 and on the other hand that groups of special rules be established, some based on their accessoriness, reinforced, independent or indemnity-based, others based on the capacity of the guarantor (consumer-guarantor or acting for professional purposes) 7.
I. The specialization of French law on personal guarantees
As things stand, French law on personal guarantees does not feature any rule specific to consumer-guarantors, i.e. natural persons acting for purposes which are outside their trade, business, craft or profession 8. But suretyships granted by consumers are subject to all rules concerning sureties in general 9 and those concerning more particularly "natural person sureties". Indeed, in the Consumer Code, there are provisions protecting both natural person sureties who guarantee consumer or property loans taken out by a consumer-borrower 10 and natural person sureties contracting with a "professional creditor", regardless in this case of the nature of the debt secured and the capacity of the principal debtor 11. The field of application of this second body of rules, which overlaps the first 12, has given rise to serious difficulties of interpretation. Firstly, are "professional creditors" solely those whose profession is to provide credit ? The Cour de Cassation has ruled out this restrictive conception since 2009. It considers that "in the meaning given to articles L. 341-2 and L. 341-3 of the Consumer Code the professional creditor is he whose debt comes about in the course of his profession or is directly related to one of his professional activities" 13. This broad interpretation is favourable to sureties, since they must benefit from the rules in the Consumer Code, even if their co-contractor is not an institutional creditor 14. Secondly, what is meant by "natural person surety"? To limit application of the texts based on this qualification to consumer-sureties only, i.e. to sureties not acting in the course of their profession, a formal argument has been put forward : since these texts are set down in the Consumer Code and not the Civil Code, they should not benefit sureties acting in the course of their profession, in particular managers or members securing the debts of their business. To exclude sureties who are part of the indebted business, it was also advanced that these sureties do not need to be protected by rules of form aimed at making the consent more thorough (articles L. 341-2 and L. 341-3 of the Consumer Code), nor by information that the creditor must provide on the principal debtor during the lifetime of the suretyship (articles L. 341-1 and L. 341-6 of the Consumer Code), since these sureties are by their very capacity already informed. However, other arguments have been put forward to support undifferentiated application to all natural person sureties. In particular, the interpretation maxim "Ubi lex non distinguit nec nos distinguere debemus", since the Consumer Code concerns all natural person sureties.
But also the spirit of the Act of 1st August 2003, from which the litigious provisions of the Consumer Code come : this law "of economic initiative " having sought to improve the protection for entrepreneurs 15 , it seemed logical to apply it to sureties involved in the life of their business. The Cour de Cassation ruled on this delicate question of interpretation 16 by leaning in the direction most favourable to sureties. Since 2012 it has ruled that the provisions of the Consumer Code on natural person sureties are applicable "whether they are informed or not" 17 . Therefore they also benefit managers and members of secured companies. Other rules protect only natural person sureties when the principal debtor is the subject of professional insolvency proceedings 18 , or an overindebtedness procedure 19 . But here again it is certain that they benefit all sureties and not just consumers. Furthermore, the Commercial Code provisions that are favourable to natural person sureties of a business in difficulty were mainly inspired by the will to protect surety-managers to encourage them to initiate the procedure as early as possible and thereby increase the chance of saving the business. Since the suretyships granted by persons carrying out a professional or commercial activity are not subject to any specific regulations in France 20 , all the rules of the Consumer Code or the Commercial Code concerning natural person sureties specifically are therefore applicable to them. Outside these special texts, the personal or professional links maintained by the surety and debtor do however have an effect. Firstly, in legislation, a text in the Civil Code concerns the cause (professional or not) of the surety's undertaking. This is article 1108-2, from the Act n° 2004-575 of 21 June 2004, which authorises hand written indications, required under pain of being declared void, to be replaced by electronic ones only if the security is taken out "by a person for the needs of their profession". 13 Civ. 1 re , June 25, 2009, Bull. civ. I, n° 138 ; Civ. 1 re , July 9, 2009, Bull. civ. I, n° 173 ; Com. January 10, 2012, Bull. civ. IV, n° 2. 14 E.g. a car salesman or seller of building materials who grants payment facilities to customers and receives suretyships in exchange. 15 In particular it enabled a limited liability company to be incorporated without registered capital, as well as providing protection for individual entrepreneurs main homes through a declaration of immunity from distrain. 16 Therefore the practical implications are essential, since the rules of the Consumer Code concerned condition either the very existence of the suretyship (articles L. 341-2, L. 341-3 and L. 341-4 of the Consumer Code), or its joint and several nature (article L. 341-5 of the Consumer Code), or coverage of the accessories to the principal debt (articles L. 341-1 and L. 341-6 of the Consumer Code). 17 Com. January 10, 2012, Bull. civ. IV, n° 2 ; Civ. 1 re , March 8, 2012, Bull. civ. I, n° 53. 18 Commercial Code, article L. 622-26, paragraph 2 (opposability of failure to declare debts), L. 626-11 (opposability of the provisions of the rescue plan), L. 622-28, paragraph 1 (opposability of the interest ceasing to be incurred in the rescue procedure), L. 622-28, paragraph 2 and L. 631-14 (suspension of proceedings during the observation period of the rescue or reorganization procedure). 
Moreover, these texts do not only benefit sureties, but all "natural persons who have consented a personal security or having assigned or given property in guarantee". 19 Consumer Code, articles L. 331-7-1, 2°, L. 332-5, L. 332-9 (debts which have been settled on the debtor's behalf by a surety or co-obligor shall not be eligible for writing off, in whole or in part). 20 Subject to the special texts concerning institutional guarantors (credit establishments, mutual guarantee companies, insurance companies). Furthermore, in jurisprudence, it has been accepted since 1969 21 that the suretyship is a commercial one if the surety gains "a personal and patrimonial advantage" from it, which is the case for the managers of a debtor company, even if they do not have the capacity of trader (like managers of sociétés anonymes or limited liability companies). But it must be acknowledged that this qualification has little impact, since there are no provisions specific to commercial suretyships and few rules in common law on commercial deeds concerning suretyships 22 . The Cour de Cassation however takes into consideration the quality of the surety, layman or "informed", when deciding to apply, or not to apply, the protections set down in contract common law. Several means of defence are thus refused to surety-managers : non-compliance with the requirement of proof of the suretyship 23 ; the deceit committed by the creditor concerning the financial circumstances of the debtor company 24 ; the responsibility of the creditor for wrongful granting of credit to the debtor 25 or for failure to warn the surety 26 . If suretyship law is therefore particularly complex and not very coherent at present as regards the capacity of the surety, the law applicable to the other personal securities is incomplete. The new articles 2321 and 2322 of the Civil Code in no way deal with the capacity of the independent guarantor or the issuer of a letter of intent. We could be led to believe therefore that the regime of these securities does not vary depending on whether the guarantor is acting for professional purposes or not. In reality such is not the case. The order of 23 March 2006 introduces a ban in article L. 313-10-1 of the Consumer Code on taking out an independent guarantee for a consumer or property loan granted to a consumer-borrower. Furthermore, for home rental leases it provides that the independent guarantee can only be signed to replace the security deposit having to be paid by the tenant 27 . Debts which, in practice, are most often secured by natural persons not acting for professional purposes, but for affective reasons, may not therefore be secured by an independent guarantee. As regards the letter of intent, no special text limits the capacity of the subscriber, but this capacity may have an impact if contractual responsibility is claimed. Indeed, if the issuer has taken out obligations to make his best efforts to do or not to do, the creditor will have to show proof that the issuer did not make his best efforts to avoid the default of the secured debtor. It is likely that this proof will be all the easier to provide the closer the professional links are between the secured enterprise and the issuer. The liability of a parent company could thus be easier to establish than that of the manager of the secured company or of a sister company 28 . 
It would be appropriate to conclude this presentation of French law by looking at the capacity of the guarantor and specifying that company law (common law and rules specific to certain types of 21 Com. July 7, 1969, Bull. n° 269. 22 Nonetheless we should note the competence of trade courts ; the presumption of the joint and several nature ; the liberty of proof, but only when the suretyship is commercial and the surety is a trader (Commercial Code, article L. 110-3). Traditionally, the main difference between civil and commercial suretyships was related to their limitations period (30 years for the former, 10 years for the latter). The Act n° 2008-561 of June 17, 2008 that reformed the limitation abolished these specific limitations. The common law limitation is now 5 years, both for civil and commercial cases. 23 The perfect proof of contracts in which "one party alone undertakes towards another to pay him a sum of money" depends on compliance with article 1326 of the Civil Code : the title must feature "the signature of the person who subscribes that undertaking as well as the mention, written by himself, of the sum or of the quantity in full and in figures". In pursuance of this article, the Commercial Chamber of the Cour de Cassation considers, since the start of the 1990s, that in the presence of an equivocal or incomplete wording or in the absence of any wording, the sole capacity of manager constitutes sufficient proof (e.g. Com. June 19, 1990, Bull. civ. IV, n o 180). 24 E. g. Com. December 17, 1996, n° 94-20808 ;Com. April 19, 2005, n° 03-12879. 25 Since 1994, the Cour de Cassation rejects the responsibility of banks for wrongful granting of credit given the perfect knowledge of the circumstances of the debtor company by the surety-manager (e. g. Com. February 15, 1994, Bull. civ. IV, n o 60 ; Civ. 3 e , June 22, 2005, n° 03-19694). 26 Failure to have been warned, when signing the contract, on the risks of the planned operation and/or on the disproportion of the undertaking to be made, can only be claimed by "uninformed" debtors and sureties (Cass., ch. mixte, June 29, 2007, Bull. ch. mixte, n o 7). Thus, company managers or members, to the extent that they are involved in managing the guaranteed company and that they are aware of the financial situation of this company, cannot avail of such provisions (e. g. Com. March 27, 2012, Bull. civ. IV, n° 68). However inexperienced and/or fictitious managers can be indemnified on these grounds (e. g. Com. April 11, 2012, Bull. civ. IV, n° 76 ; Com. February 5, 2013, n° 11-26262). 27 Article 22-1-1 of the Act n° 89-462 of July 6, 1989, based on order of March 23, 2006. 28 In practice, credit establishments only make guarantors acting for professional purposes take out letters of intent, and not natural laypersons who are not involved in the debtor company's activities. companies 29 ) has provisions on personal guarantees. It sets down the powers company representatives must have to obligate a company as guarantor 30 . In addition, in joint stock companies and limited liability companies, managers 31 are banned from having these companies secure or endorse their own undertakings to third parties 32 . II. 
The imperfections of French law on personal guarantees Over the last thirty years, French law has tended to cloak the function of guarantees, which is the payment of creditors, and has favoured the protection of guarantors and their relations, even when the guarantee is consented for professional purposes, since the notion of consumer-guarantor is unknown to French law and that the principal legal protection criterion lies in the capacity of the "natural person" guarantor. In other words, French law on personal guarantees does not achieve a correct balance between the requirements of business life and that of consumer protection. It is now leaning clearly towards the latter to such an extent that it is compromising the efficiency of all personal guarantees. The efficiency of a guarantee depends to a great extent on the freedom given to creditors to organise the protection of their interests 33 . Traditionally, in the Civil Code, the freedom of the parties was only restricted by the reinforced accessory nature of the suretyship. So creditors could freely choose their sureties, the form of the contract, the scope and terms of the guarantee and deprive sureties of means of defence. Since the start of the 1980s, numerous rules of public policy have come about with regard to suretyships and these are in favour of sureties. Two significant examples can be given. Form requirements. Personal securities are in principle consensual contracts, i.e. they are validly formed by the sole exchange of consent between the guarantor and the creditor. While they may be signed before a notary 34 or countersigned by a lawyer 35 , there is no legal requirement to do so 36 . But since the end of the 1980s, to ensure the surety binds themselves in perfect awareness of the nature, breadth and scope of their undertaking, several laws have imposed handwritten indications mainly concerning the amount, duration, even the joint and several nature of the surety's obligation and this, under pain of the security as a whole being declared void. Three types of suretyships have thus ceased to be consensual contracts to become solemn ones : the suretyship granted under private contract by a natural person to secure a consumer or property loan subscribed by a consumer-29 It is the nature of the company (civil company, partnerships, limited liability company, joint stock company) that matters and not the size. Thus, small and medium sized businesses who stand guarantor are not subject to any specific rule. 30 Requirement that the undertaking be compliant with the company's object (principle of speciality) and be in the company's interest. In sociétés anonymes, requirement to obtain authorisation for "suretyships, endorsements and guarantees" from the board of directors or the supervisory board (Commercial Code, articles L. 225-35 and L. 225-68), under pain, depending on the jurisprudence, of the guarantee being inopposable to the company (since Com. January 29, 1980, Bull. civ. IV, n o 47). 31 As well as their family (spouses, ancestors or descendants) and more generally, "any interposed person". In joint stock companies (and not the SARL), the ban also concerns company members. 32 Commercial Code, articles L. 225-43, L. 225-91, L. 227-12, L. 226-10 and L. 223-21. 33 On the various factors favouring the "objective efficiency", i.e. the realization of the expectation common to all creditors (payment through realization, or even the very constitution, of the guarantee), as well as the '"subjective efficiency", i.e. 
the realization of the expectations each party has depending on the specific features of the principal contract and the guarantee secured, see our thesis : M. Bourassin, L'efficacité des garanties personnelles (The efficiency of personal guarantees), LGDJ, Paris, 2006. 34 The creditor gains numerous advantages from this : since the indications imposed by law in private contracts do not have to be provided in deeds signed before a notary (nor in deeds countersigned by a lawyer), the risks of inefficiency of the security for want of proof or validity are considerably reduced ; the amount and duration of the suretyship do not have to be restricted ; the notarial deed constitutes a very efficient writ of execution if the guarantor does not honour his undertaking. For the guarantor, recourse to a notary (or lawyer) has the essential advantage of receiving customised information and advice on the characteristics of the security. 35 Deed countersigned by a lawyer was created by the Act n° 2011-331 of March 28, 2011. 36 The sole conventional securities drawn up by a notary, under pain of being declared void, are the mortgage (Civil Code, article 2416) and the antichresis (Civil Code, article 2388). borrower 37 ; the suretyship securing the obligations arising out of a home rental lease 38 ; the suretyship by private contract between a natural person surety and a professional creditor 39 . Limitation of the extent of the suretyship. While forming a defined rather than indefinite suretyship was traditionally determined by the exclusive will of the parties, the scope of this freedom has been considerably reduced over the last twenty five years by several special texts. In certain cases, suretyships restricted in terms of the amount are encouraged. Indeed, the suretyship consented, either by a natural person to secure an individual entrepreneur's debts (Act n° 94-126 of 11 February 1994, article 47-II, paragraph 1), or by a natural person for the benefit of a professional creditor, regardless of the nature of the principal debt, but by notarial deed 40 (Consumer Code, article L. 341-5 based on the Act n° 2003-721 of 1st August 2003), cannot be at once unlimited in terms of its amount and joint and several 41 . Since the joint and several suretyship is very protective of the creditors' interests 42 , these creditors are therefore incited to limit the suretyship to "an expressly and contractually determined global amount". In other hypotheses, much more detrimental to contractual freedom, the suretyship defined in terms of amount and duration is imposed for validity purposes. This is the case each time the suretyship must feature, under pain of being declared void, the hand written indication imposed by the article L. 341-2 of the Consumer Code 43 . Since the Act of 1st August 2003 44 , it is all private contract suretyships signed by a natural person surety in favour of a professional creditor that must comply with this indication. This means, on the contrary, that the choice between defined suretyship and indefinite suretyship now only exists in three hypotheses : if the suretyship is signed by means of a notarial deed 37 Consumer Code, articles L. 
313-7 : "The natural person who undertakes by virtue of a private contract to stand surety for one of the transactions coming under chapters I or II of this part of the code must, under penalty of its undertaking being rendered invalid, precede its signature with the following handwritten statement, and only this statement : "in standing surety for X……., up to the sum of ……….. covering payment of the principal, interest and, where appropriate, penalties or interest on arrears and for the duration of ………… I undertake to repay the lender the sums owing on my income and property if X…… fails to satisfy the obligation himself". Consumer Code, article L. 313-8 : "Where the creditor asks for a joint and several guarantee for one of the transactions to which chapters I or II of this part of the code relate, the natural person who is standing surety must, under penalty of its undertaking being rendered invalid, precede its signature with the following handwritten statement : "In renouncing the benefit of execution defined in article 2021 of the French civil code and obliging me, jointly and severally, with X………., I undertake to repay the creditor without being able to ask that the latter first institute proceedings against X…". 38 Act of July 6, 1989, article 22-1, from the Act of July 21, 1994: "The person who is to stand surety precedes his/her signature by reproducing in hand writing the amount of the rent and the rent revision conditions such as they appear in the rental contract, by the hand written indication explicitly and unequivocally expressing their awareness of the nature and scope of the obligation they are contracting and the hand written reproduction of the previous paragraph. The lessor provides the surety with a copy of the rental contract. These formalities are required under pain of the suretyship being declared void". 39 Consumer Code, article L. 341-2 : "Any natural person who undertakes to act as surety for a professional creditor through a private agreement shall, if his undertaking is not to be declared null and void, affix the following words above his signature in his own handwriting, and these words only : "By standing surety for X ..., for a maximum sum of ... in respect of payment of the principal, interest and, should this prove necessary, any arrears interest or penalties, for a term of..., I hereby undertake to pay the sum due to the lender from my own income and property should X... fail to pay it himself". Consumer Code, article L. 341-3 : "When the professional creditor requests a joint and several guarantee, the natural person standing surety shall, if his undertaking is not to be declared null and void, affix the following words above his signature in his own handwriting : "By waiving the benefit of discussion defined in Article 2021 of the Civil Code and committing myself jointly and severally with X…, I hereby undertake to pay the creditor without any right to demand that he prosecute X... beforehand". 40 Article L. 341-5 of the Consumer Code does not mention notarial suretyships specifically, but it is solely by restricting in this way the scope of application of this provision that it is possible to avoid contradiction with article L. 341-2 of the same code, that prohibits indefinite private contract suretyships. The Cour de Cassation confirmed this interpretation (Com. July 6, 2010, Bull. civ. IV, n° 118). 
41 In the contrary situation, the stipulation of the joint and several nature of the suretyship or waiving the benefit of seizure and sale must be deemed to have not been written. 42 It deprives the surety of the benefit of seizure and sale and the benefit of division (Civil Code, articles 2298 to 2304). 43 This text is applicable to suretyships signed since February 1st 2004. or private contract countersigned by a lawyer 45 ; or if the suretyship is signed by private contract by a juridical person ; or else if the suretyship is signed by private contract between a natural person surety and a non-professional creditor. This legal interventionism, greatly influenced by consumer law and, upstream, by the doctrines of contractual solidarism, certainly hinders the efficiency of the suretyship and leads creditors to seek guarantees that provide greater freedom and security. Let us hope that a future reform will put security and freedom back into the core of French personal guarantee law. 45 These two types of instruments are dispensed of any handwritten indications required by law (Civil Code, article 1317-1 and the Act of December 31, 1971, article 66-3-3, based on the Act n° 2011-331 of March 28, 2011). This reminds us of the two logics that govern consumer law today : firstly, protect the weak from the strong and secondly, regulate the market. A complete reform of personal securities was proposed by a committee chaired by Professor Michel Grimaldi, in a report submitted to the Justice Ministry on March 31, 2005. 3 In the enabling statute n° 2005-842 dated July 26, 2005, the Government was not authorised by Parliament to carry out a global reform of personal securities, above all since it appeared inappropriate, from a democratic point of view, to have recourse to an order to deal with contracts that play such an important role in the daily lives of private individuals and that is likely to lead them into overindebtedness.4 Civil Code, articles 2288 to 2320. 5 Civil Code, articles 2321 and 2322.6 In particular, rules on the general accessoriness, common to all guarantees, rules on the subsidiary nature of personal guarantees or rules based on the contractual ethical imperative, such as the requirement for proportionality between the guarantee and the financial faculties of the guarantor or the need for the guarantor to be informed of the first delinquency by the debtor.7 For detailed reform proposals along these lines, consult our article in French in this book and our thesis : M. Bourassin, L'efficacité des garanties personnelles (The efficiency of personal guarantees), LGDJ, 2006, n o 709 à 996. 8 A consumer is "any natural person who is acting for purposes which are outside his or her trade, business, craft or profession" (Consumer Code). 9 All common law in the Civil Code as well as the rules applicable to all sureties guaranteeing given debts, such as debts arising from a home rental lease or those of a business. Prior to this, this indication was reserved for private contract suretyships signed by a natural person surety to secure a consumer or property loan taken out by a consumer-borrower (Consumer Code, article L. 313-7, based on the Act of December 31, 1989).
30,241
[ "750998" ]
[ "461303" ]
01487002
en
[ "shs" ]
2024/03/04 23:41:48
2015
https://hal.parisnanterre.fr/hal-01487002/file/Le%20droit%20fran%C3%A7ais%20des%20s%C3%BBret%C3%A9s%20personnelles%20%20-%20French%20report%20M.%20Bourassin.pdf
2 Ce développement s'explique par l'essor du crédit aux particuliers et aux entreprises, et par les atouts qu'ont pu lui reconnaître les créanciers, notamment par comparaison aux sûretés réelles classiques : sa simplicité, sa souplesse, son faible coût de constitution, son efficacité en cas de mise en oeuvre, même dans le cadre d'une procédure d'insolvabilité ouverte au bénéfice du débiteur principal. 3 Proches du débiteur principal, personnes physiques ou morales intégrées dans l'entreprise débitrice, garants institutionnels. 4 Les sûretés personnelles sont des engagements juridiques pour autrui (le débiteur principal), consentis bien souvent sans réelle liberté (en raison des relations professionnelles ou personnelles unissant le garant au débiteur) et sans contrepartie. Elles risquent pourtant d'obérer gravement le patrimoine du garant, puisqu'il est "tenu de remplir son engagement sur tous ses biens mobiliers et immobiliers, présents et à venir" (C. civ., art. 2284), quand bien même les recours en remboursement contre le débiteur principal seraient voués à l'échec. 5 Le Code civil renferme, depuis 1804, diverses règles susceptibles de limiter, voire d'exclure le paiement des cautions. Les unes reposent sur le caractère accessoire du cautionnement et sa subsidiarité, les autres sur les règles applicables à tous les nouvelles protections des cautions ont vu le jour, pour l'essentiel en dehors du Code civil 6 . Il importe à cet égard de préciser que l'ordonnance n° 2006-346 du 23 mars 2006 relative aux sûretés n'a nullement réformé en profondeur le droit du cautionnement. Seule la numérotation des articles du Code civil le concernant a été modifiée 7 . Depuis une trentaine d'années, le droit commun du cautionnement cohabite ainsi avec de multiples règles spéciales 8 qui, à tous les stades de la vie de la sûreté, visent à en réduire, voire à en supprimer les risques pour les garants les plus exposés. Ces règles, qu'elles soient légales ou jurisprudentielles, protègent la volonté et le patrimoine de certaines cautions, en spécifiant la source des dettes couvertes (crédit de consommation, bail d'habitation), la qualité du débiteur principal (consommateur, société, entrepreneur individuel, particulier surendetté, entreprise en difficulté), les caractéristiques du cautionnement (sa nature, sa forme, son étendue, ses modalités), la qualité du créancier (personne physique ou morale, professionnel ou non) et/ou celle de la caution (personne physique ou morale, avertie ou profane, engagée pour les besoins de sa profession ou non). Ce mouvement de spécialisation concerne également les deux autres sûretés personnelles que le Code civil reconnaît depuis l'ordonnance du 23 mars 2006, à savoir la garantie autonome et la lettre d'intention 9 . En effet, des règles spéciales interdisent la couverture de certaines dettes par une garantie autonome. Par ailleurs, il existe en droit des sociétés, en droit des entreprises en difficulté ou encore en droit patrimonial de la famille, de nombreux textes relatifs aux garanties, aux sûretés ou aux sûretés personnelles, qui concernent certains garants seulement. Cette protection sélective caractérise la spécialisation du droit des sûretés personnelles 10 . En s'attachant aux intérêts que le législateur et les juges cherchent précisément à protéger, il est possible de ranger les multiples règles légales et jurisprudentielles qui se sont développées en marge du droit commun dans trois catégories. 
Certaines, d'abord, visent à sécuriser et à dynamiser la vie des affaires en général et celle des entreprises en particulier, afin de soutenir la croissance économique (A). D'autres, ensuite, ont pour but de protéger les consommateurs contre des engagements irréfléchis et ruineux, risquant de les conduire au surendettement et à l'exclusion sociale (B). Enfin, les deux finalités précédentes sous-tendent les règles bénéficiant aux garants personnes physiques (C). A/ Sécuriser et dynamiser la vie des affaires 3. Les règles spéciales intéressant la vie des affaires, c'est-à-dire celles relatives aux sûretés personnelles données par ou pour des entreprises, sont ambivalentes. Les unes expriment une sollicitude à l'égard des sociétés garantes (1) ou des garants d'entreprises (2). Les autres, au contraire, font preuve de rigueur à l'encontre des garants intégrés dans les entreprises débitrices (3). Ces deux dynamiques antagonistes révèlent la complexité du soutien aux entreprises : les sociétés et leurs membres, les entrepreneurs individuels et leurs proches doivent être protégés des dangers des sûretés, dont l'ampleur est souvent accrue en présence de dettes professionnelles. Mais les créanciers doivent Le droit français des sûretés personnelles Manuella Bourassin, Agrégée des Facultés de droit, Professeur à l'Université Paris Ouest Nanterre La Défense, Directrice du Centre de droit civil des affaires et du contentieux économique (EA 3457) Résumé Depuis les années 1980, le droit français des sûretés personnelles a profondément évolué. En marge du droit commun inscrit dans le Code civil, se sont développées des règles propres aux sûretés personnelles données par ou pour des entreprises, des règles protectrices des garants s'apparentant à des consommateurs et encore des règles spécifiques aux cautions personnes physiques contractant avec des créanciers professionnels. Cette spécialisation du droit des sûretés personnelles a généré une réelle insécurité juridique et économique, car les nouvelles règles légales et jurisprudentielles manquent d'accessibilité, d'intelligibilité et de stabilité et parce qu'elles sont davantage tournées vers la protection des garants que vers celle créanciers. Une réforme en profondeur de la matière s'impose pour restaurer l'efficacité des sûretés personnelles et conforter par là même le crédit aux entreprises et aux particuliers. Cette reconstruction devrait reposer sur l'édiction de règles communes à l'ensemble des sûretés personnelles (régime primaire) et sur une révision des critères et du contenu des règles spéciales (à côté des règles applicables à tous les garants personnes physiques, des règles particulières devraient dépendre de la cause, professionnelle ou non, de l'engagement du garant). 1. Les sûretés personnelles traversent une crise 1 . Alors que les textes et la jurisprudence devraient favoriser leur efficacité pour qu'elles confortent le crédit aux entreprises et aux particuliers, le droit positif les fragilise. Depuis une trentaine d'années, effectivement, l'insécurité juridique règne en la matière, sous toutes ses formes (inaccessibilité, illisibilité et instabilité des règles en vigueur), non seulement parce que des réformes ponctuelles ont morcelé le droit du cautionnement et l'ont rendu plus complexe, moins souple et cohérent, mais aussi en raison d'une jurisprudence pléthorique et fluctuante. 
La sécurité économique recherchée par les créanciers est quant à elle compromise par les multiples et diverses protections accordées par le législateur et par les juges à certains garants. La spécialisation des règles légales et jurisprudentielles (I) est largement responsable des imperfections que présente aujourd'hui le droit français des sûretés personnelles (II). Pour le rendre plus sûr et attractif, une reconstruction mérite d'être proposée (III). I. La spécialisation du droit des sûretés personnelles 2. A la fin du XXe siècle, le droit commun du cautionnement, c'est-à-dire les règles inscrites dans le Code civil depuis 1804, est apparu insuffisant pour répondre, tant au développement du cautionnement 2 et à la diversification des cautions 3 , qu'à la préoccupation de limiter les dangers de cette sûreté 4 . A partir des années 1980, en vue de remédier aux insuffisances du droit commun 5 , de contrats (les exigences probatoires, la sanction des vices du consentement et encore l'obligation de bonne foi). Le droit commun du cautionnement a pu toutefois sembler insuffisant pour sauvegarder les intérêts des garants et ce, pour plusieurs raisons : le consentement des cautions n'y est protégé qu'a posteriori, c'est-à-dire lors de l'appel en paiement, et non dès la conclusion du contrat ; la solvabilité des cautions y est largement ignorée, alors que les risques patrimoniaux de l'engagement sont bien souvent considérables ; le Code civil appréhende les cautions et les créanciers de manière abstraite, dans leur qualité générale de parties, sans tenir compte des caractéristiques des dettes garanties, alors que les dangers du cautionnement n'ont certainement pas la même intensité pour toutes les cautions, ni dans toutes les opérations de garantie. 6 Trois règles nouvelles seulement y ont été ajoutées depuis 1804, toutes protectrices des cautions : le caractère d'ordre public de l'exception de défaut de subrogation (art. 2314, al. 2, issu de la loi n° 84-148 du 1er mars 1984), le bénéfice d'un "reste à vivre" et une information annuelle sur l'évolution du montant de la créance garantie (art. 2301 et 2293, issus de la loi n° 98-657 du 29 juillet 1998). 7 C. civ., nouv. art. 2288 à 2320. 8 Sur ce mouvement de spécialisation, v. not. Ch. Albiges, "L'influence du droit de la consommation sur l'engagement de la caution", Liber amicorum J. Calais-Auloy, Dalloz, Paris, 2004, p. 1 ; L. Aynès, "La réforme du cautionnement par la loi Dutreil", Dr. et patr. 11/2003, p. 28 ; Ph. Delebecque, "Le cautionnement et le Code civil : existe-t-il encore un droit du cautionnement ?", RJ com. 2004, p. 226 ; J. Devèze, "Petites grandeurs et grandes misères de la sollicitude à l'égard du dirigeant caution personne physique", Mélanges Ph. Merle, Dalloz, Paris, 2013, p. 165 ; D. Houtcieff, "Le droit des sûretés hors le Code civil", LPA 22 juin 2005, p. 7 ; D. Legeais, "Le Code de la consommation siège d'un nouveau droit commun du cautionnement", JCP éd. E 2003, 1433 ; Ph. Simler, "Prévention et dispositif de protection de la caution", LPA 10 avr. 2003, p. 20 ; Ph. Simler, "Les principes fondamentaux du cautionnement : entre accessoire et autonomie", BICC 15 oct. 2013. 9 Les articles 2321 et 2322 du Code civil les définissent, sans les réglementer précisément. 10 Seuls les principaux textes et arrêts qui illustrent cette évolution seront ici exposés. Pour de plus amples références, v. M. Bourassin, V. Brémond, M.-N. Jobard-Bachellier, Droit des sûretés, Sirey, Paris, 5e éd., 2015. 
aussi être rassurés pour que les entreprises reçoivent les crédits nécessaires à leur création, leur développement et leur maintien. Les protections propres aux sociétés garantes 4. Il est fréquent qu'une société garantisse les dettes d'une autre société appartenant au même groupe ou les dettes d'une personne physique ou morale avec laquelle elle entretient des relations d'affaires. Cette garantie est dangereuse pour la société elle-même, pour ses associés et pour ses créanciers, puisqu'elle déplace le patrimoine social au service d'autrui, le plus souvent sans aucune contrepartie, au risque qu'en cas de défaut de remboursement par le débiteur principal, la pérennité de la société et les emplois qu'elle génère se trouvent menacés. Pour limiter ces risques, le droit des sociétés -droit commun et dispositions propres à certaines formes sociales -encadre les pouvoirs dont doivent disposer les représentants de la société pour l'engager en qualité de garant. D'abord, en vertu du principe de spécialité, la garantie doit être conforme à l'objet social. Ensuite, elle doit respecter l'intérêt social 11 . Enfin, dans les sociétés par actions, les "cautionnements, avals et garanties" doivent être autorisés par le conseil d'administration ou de surveillance 12 , à peine d'inopposabilité à la société 13 . 5. Pour éviter un autre risque, celui que les organes de direction ou les associés ne vampirisent le patrimoine social à leur seul profit, interdiction leur est faite, à peine de nullité du contrat, "de faire cautionner ou avaliser (par la société par actions ou à risque limité) leurs engagements envers les tiers" 14 . Les protections accordées aux garants d'entreprises 6. Il existe de nombreuses règles spéciales dont le principal critère d'application réside dans la qualité d'entreprise, sous forme sociale ou individuelle, du débiteur principal. Il est vrai que les sûretés personnelles garantissant les dettes d'une entreprise présentent des dangers accrus par rapport à celles couvrant des dettes non professionnelles : leur compréhension est rendue plus ardue par la diversité et le caractère futur, donc indéterminé, des dettes qu'elles peuvent embrasser ; les risques patrimoniaux sont plus importants dès lors que les créanciers requièrent habituellement une couverture, en montant et en durée, plus large ; en cas d'ouverture d'une procédure d'insolvabilité au bénéfice de l'entreprise, les risques de paiement par le garant et d'absence de remboursement par celle-ci sont très importants. De nombreuses règles spéciales s'attachent à limiter, voire à supprimer ces différents risques en protégeant les garants d'entreprises, qu'ils soient ou non intégrés dans celles-ci. Les entreprises, in bonis (a) ou en difficulté (b), en sont les bénéficiaires par ricochet. Toutes les protections ici envisagées sont en effet susceptibles d'encourager la constitution de sûretés et, par là même, l'octroi des crédits indispensables à la création et à la pérennité des entreprises. Celles qu'énonce le droit des entreprises en difficulté sont en outre de nature à inciter les dirigeants-garants à demander le plus tôt possible l'ouverture d'une procédure et à favoriser de la sorte le redressement de leur entreprise. a. Entreprises in bonis 7. En dehors du droit des procédures collectives professionnelles, les sources et les modes de protection des garants d'entreprises sont extrêmement diversifiés. Il est néanmoins possible de distinguer quatre types de mesures. 8. 
En premier lieu, détourner les parties des sûretés les plus dangereuses. La loi n° 94-126 du 11 février 1994 relative à l'initiative et à l'entreprise individuelle comporte deux dispositions en ce sens. D'une part, elle cherche à dissuader les entrepreneurs individuels de faire garantir leurs dettes professionnelles par des proches en imposant aux établissements de crédit de les informer par écrit de 11 En présence de sociétés à risque illimité, la jurisprudence annule les cautionnements qui contredisent l'intérêt social, même s'ils entrent dans leur objet statutaire ou ont été approuvés par tous les associés ou couverts par une communauté d'intérêts entre la société caution et le débiteur (v. not. Com. 23 sept. 2014, Bull. civ. IV, n° 142). 12 C. com., art. L. 225-35, al. 4, et L. 225-68, al. 2. 13 Cette sanction est retenue par la Cour de cassation depuis 1980 (Com. 29 janv. 1980, Bull. civ. IV, n o 47). 14 C. com.,. L'interdiction vaut également pour les proches des dirigeants (conjoints, ascendants ou descendants) et, plus généralement, pour "toute personne interposée". Dans la SARL (et non les sociétés par actions), l'interdiction vise en outre les associés. la possibilité de proposer une garantie sur les biens nécessaires à l'exploitation de l'entreprise ou par un garant institutionnel, plutôt qu'une "sûreté personnelle consentie par une personne physique" 15 . D'autre part, la loi de 1994 interdit aux personnes physiques cautionnant les dettes professionnelles d'un entrepreneur individuel de s'engager à la fois solidairement et indéfiniment. Sont effectivement réputées non écrites les stipulations de solidarité et de renonciation au bénéfice de discussion si leur cautionnement n'est pas limité en montant 16 . 9. En deuxième lieu, délivrer aux cautions d'entreprises des informations au cours de la période de garantie. Chaque année, les créanciers doivent leur préciser le montant de la dette principale au 31 décembre de l'année précédente, ainsi que le terme du cautionnement ou la faculté de le résilier s'il est à durée indéterminée. D'abord imposée dans les cautionnements des concours financiers accordés aux entreprises par des établissements de crédit 17 , y compris ceux fournis par les dirigeants-cautions 18 , cette information annuelle a ensuite été accordée aux personnes physiques cautionnant les dettes professionnelles d'un entrepreneur individuel, pour une durée indéterminée 19 . Si l'information n'est pas délivrée, la caution n'est plus tenue des "intérêts échus depuis la précédente information jusqu'à la date de communication de la nouvelle information". Il existe, par ailleurs, une information sur "le premier incident de paiement (du débiteur) non régularisé dans le mois d'exigibilité du paiement", sous peine de déchéance des "pénalités ou intérêts de retard échus entre la date de ce premier incident et celle à laquelle (la caution) en a été informée" 20 . Cette protection profite aux cautions personnes physiques garantissant les dettes professionnelles d'un entrepreneur individuel ou d'une société. 10. En troisième lieu, transférer la sûreté au conjoint divorcé entrepreneur. 
Pour éviter que l'époux, qui s'est porté garant de l'activité professionnelle de son conjoint entrepreneur individuel ou membre de la société dont les dettes sont garanties, ne se trouve, après le divorce, écrasé par le poids de la sûreté, la loi n° 2005-882 du 2 août 2005 relative aux petites et moyennes entreprises a prévu le transfert, sur décision du tribunal de grande instance, des "dettes ou sûretés consenties par les époux, solidairement ou séparément, dans le cadre de la gestion d'une entreprise", au conjoint divorcé entrepreneur 21 . 11. En quatrième et dernier lieu, appliquer le droit du surendettement aux cautions des entreprises. La situation de surendettement étant définie par l'impossibilité manifeste de faire face à l'ensemble des dettes non professionnelles exigibles et à échoir 22 , la Cour de cassation a initialement refusé le bénéfice des procédures de surendettement aux cautions retirant un intérêt patrimonial personnel de la dette professionnelle cautionnée, au premier rang desquelles se trouvent les dirigeants des sociétés garanties 23 . Mais, depuis la loi n° 2008-776 du 4 août 2008 de modernisation de l'économie, toutes les cautions surendettées, même celles garantissant des entreprises et dont l'engagement présente une nature professionnelle 24 , peuvent profiter des mesures protectrices du droit du surendettement, en particulier l'effacement total des dettes lors de la clôture de la procédure de rétablissement personnel pour insuffisance d'actif 25 . b. Entreprises en difficulté 15 C. mon. fin., art. L. 313-21. Le défaut d'information interdit au créancier de se prévaloir de la sûreté constituée "dans ses relations avec l'entrepreneur individuel", et non de demander paiement au garant. 16 Loi du 11 février 1994, art. 47, II, al. 1er. 17 C. mon. fin., art. L. 313-22, issu de la loi n° 84-148 du 1 er mars 1984 relative à la prévention et au règlement amiable des difficultés des entreprises. 18 Com. 25 mai 1993, Bull. civ. IV, n o 203. 19 Loi du 11 février 1994, art. 47, II, al. 2. 20 Loi du 11 février 1994, art. 47, II, al. 3, modifié par la loi n° 98-657 du 29 juillet 1998. 21 C. civ., art. 1387-1. La portée de la décharge de l'époux-caution est incertaine, car ce texte ne précise pas si elle est opposable au créancier ou si elle affecte uniquement les rapports intra conjugaux. Les juridictions du fond ont jusqu'à présent privilégié cette seconde interprétation, qui préserve le droit de poursuite du créancier et confine la décharge dans les opérations de liquidation du régime matrimonial. 22 C. consom., art L. 330-1. 23 Civ. 1 re , 31 mars 1992, Bull. civ. I, n o 107 ; Civ. 1 re , 7 nov. 2000, Bull. civ. I, n o 285. 24 A condition toutefois de ne pas être éligibles aux procédures collectives professionnelles (C. consom., art. L. 333-3). 25 C. consom., art. L. 332-5 et 332-9. 12. Lorsque l'entreprise garantie fait l'objet d'une procédure d'insolvabilité, des protections de quatre types sont accordées aux garants, qu'ils aient "consenti une sûreté personnelle" ou "affecté ou cédé un bien en garantie" 26 . Il s'agit d'abord de réduire le montant de la garantie, dans la procédure de conciliation, en permettant à tous les garants de se prévaloir des dispositions de l'accord constaté ou homologué 27 et, dans la procédure de sauvegarde, en autorisant les garants personnes physiques à opposer au créancier l'arrêt du cours des intérêts, ainsi que les remises inscrites dans le plan 28 . 
Il s'agit ensuite de retarder la mise en oeuvre de la sûreté, non seulement en faisant profiter tous les garants (dans la procédure de conciliation) ou les garants personnes physiques (dans la procédure de sauvegarde) des délais de paiement octroyés à l'entreprise 29 , mais également en suspendant les poursuites contre les garants personnes physiques pendant la période d'observation de la procédure de sauvegarde ou de redressement 30 . Il s'agit encore d'interdire toute poursuite contre les garants personnes physiques, pendant l'exécution du plan de sauvegarde, si la créance garantie n'a pas été déclarée 31 . Enfin, il s'agit, dans la procédure de rétablissement professionnel, de déroger au principe d'effacement des dettes du débiteur personne physique à l'égard des dettes de remboursement des cautions, personnes physiques ou morales 32 . 13. Même si tous les garants personnes physiques, voire tous les garants sans distinction, sont visés par ces dispositions, le législateur s'est surtout soucié des dirigeants et de leurs proches, afin d'inciter les premiers à anticiper le traitement des difficultés de l'entreprise, en demandant l'ouverture d'une procédure le plus tôt possible, c'est-à-dire avant la cessation des paiements. C'est pourquoi un sort nettement plus favorable leur est réservé dans les procédures de conciliation et de sauvegarde que dans les procédures de redressement ou de liquidation judiciaire 33 . Mais alors, la protection des garants n'est pas une fin en soi. C'est plutôt un moyen de soutenir les entreprises, de conforter les emplois, et de favoriser in fine la croissance économique 34 . Les protections refusées aux garants intégrés dans l'entreprise débitrice 14. Les garants intégrés dans l'entreprise débitrice sont les personnes physiques ou morales qui disposent d'un pouvoir de direction et/ou de contrôle à son égard. Pour l'essentiel, ce sont ses dirigeants ou associés et les sociétés-mères. Diverses protections leur sont refusées, que l'entreprise garantie soit in bonis (a) ou qu'elle fasse l'objet d'une procédure d'insolvabilité (b). a. Entreprises in bonis 15. En s'attachant à la cause professionnelle de l'engagement, la jurisprudence fait montre de rigueur à l'encontre des garants intégrés dans l'entreprise débitrice. Ainsi, parce qu'ils ont un "intérêt personnel et patrimonial" dans le crédit garanti, la Cour de cassation décide-t-elle que la sûreté présente un caractère commercial 35 . Cette commercialité rend le cautionnement solidaire et prive le garant, même s'il n'est pas commerçant 36 , des bénéfices de discussion et de division. 26 L'ensemble des sûretés personnelles, ainsi que les sûretés réelles pour autrui, font l'objet de ce traitement uniforme depuis l'ordonnance n° 2008-1345 du 18 décembre 2008 portant réforme du droit des entreprises en difficulté. 27 C. com., art. L. 611-10-2, al. 1er. 28 C. com., art. L. 622-28, al. 1er., et L. 626-11. 29 C. com., art. L. 611-10-2, al. 1er, et L. 626-11. 30 C. com., art. L. 622-28, al. 2 et L. 631-14, qui ajoutent que "le tribunal peut ensuite leur accorder des délais ou un différé de paiement dans la limite de deux ans". 31 C. com., art. L. 622-26, al. 2. 32 C. com., art. L. 645-11, issu de l'ordonnance n° 2014-326 du 12 mars 2014. 33 Sur la constitutionnalité de cette différence de traitement, v. Com., QPC, 8 oct. 2012, n° 12-40060. 34 Le droit des sûretés est ainsi mis au service des finalités qui innervent le droit des entreprises en difficulté. 
L'article 2287 du Code civil, issu de l'ordonnance du 23 mars 2006, consacre cette primauté des droits de l'insolvabilité en précisant que "les dispositions du présent livre (livre IV : "Des sûretés") ne font pas obstacle à l'application des règles prévues en cas d'ouverture d'une procédure de sauvegarde, de redressement judiciaire ou de liquidation judiciaire ou encore en cas d'ouverture d'une procédure de traitement des situations de surendettement des particuliers". 35 Com. 7 juill. 1969, Bull. n° 269. 36 Tel est le cas des dirigeants de sociétés anonymes ou à responsabilité limitée.
La Haute juridiction rejette par ailleurs la libération des garants intégrés lorsqu'ils cessent leurs fonctions au sein de l'entreprise garantie 37, car la cause de l'obligation de garantir réside dans "la considération du crédit accordé par le créancier au débiteur principal" 38, et non dans les relations que le garant entretient avec ce dernier, et que son existence s'apprécie exclusivement lors de la conclusion du contrat.
16. D'autres protections sont refusées aux garants intégrés parce qu'ils sont censés connaître et comprendre la nature, le montant et la durée des garanties souscrites. Tel est le cas de certaines formalités ayant pour finalité d'attirer l'attention des contractants sur la nature et la portée de leurs obligations. En application de l'article 1326 du Code civil 39, la Chambre commerciale de la Cour de cassation considère ainsi qu'en présence d'une mention équivoque ou incomplète ou en l'absence de toute mention en chiffres et en lettres du montant de l'engagement, la seule qualité de dirigeant du garant constitue un complément de preuve suffisant 40. Dans le même sens, l'article 1108-2 du Code civil 41 admet le remplacement des mentions manuscrites exigées à peine de nullité par des mentions électroniques, réputées moins éclairantes, si la sûreté personnelle est passée "par une personne pour les besoins de sa profession".
17. En outre, parce qu'elles connaissent leur propre solvabilité et qu'elles comprennent en principe les risques financiers liés à la mise en oeuvre des sûretés, les cautions intégrées se voient refuser par la Cour de cassation deux types de protections. D'une part, l'exigence de proportionnalité du cautionnement aux biens et revenus de la caution, fondée sur la bonne foi contractuelle 42. Alors qu'il avait été initialement consacré au bénéfice d'un dirigeant-caution 43, ce moyen de défense a par la suite été paralysé en présence de garants intégrés dans l'entreprise 44. D'autre part, ceux-ci profitent rarement du devoir de mise en garde sur les risques de l'opération projetée et sur la disproportion de l'engagement à souscrire, que la Cour de cassation impose aux établissements de crédit depuis 2007. En effet, ce devoir, lui aussi fondé sur la loyauté contractuelle, ne peut être invoqué que par les cautions "non averties" 45. Les connaissances des garants intégrés sur leurs capacités financières et sur les risques d'endettement liés à la sûreté évincent le plus souvent cette qualification et la protection qu'elle conditionne 46.
18. Enfin, depuis une vingtaine d'années, d'autres moyens de défense fondés sur le droit commun des contrats sont rendus inefficaces en raison des connaissances des garants intégrés sur la situation financière de l'entreprise débitrice. Tel est le cas de la réticence dolosive commise par le créancier au sujet de la situation financière de l'entreprise 47, ainsi que de la responsabilité des banques pour octroi abusif de crédit 48.
b.
Entreprises en difficulté 19. Diverses dispositions protectrices des entreprises soumises aux procédures du Livre VI du Code de commerce ne profitent pas aux garants, qui se trouvent dès lors traités plus strictement que les débiteurs garantis. Il en va ainsi de la suspension des poursuites individuelles contre l'entreprise 49 . Dans les procédures de redressement et de liquidation judiciaire, les garants ne peuvent pas opposer non plus le défaut de déclaration des créances pour paralyser les poursuites du créancier 50 . Dans la procédure de redressement encore, aucun garant ne peut bénéficier des remises et délais prévus dans le plan 51 , ni de l'arrêt du cours des intérêts 52 . Enfin, la clôture de la procédure de liquidation judiciaire pour insuffisance d'actif n'empêche nullement les créanciers de poursuivre en paiement les garants 53 . 20. Cette rigueur à l'encontre de tous les garants d'entreprises reçoit plusieurs explications. D'abord, même si les règles concernées n'opèrent aucune distinction entre les garants (personnes physiques ou morales, intégrées ou non dans l'entreprise en difficulté), il est permis d'y voir, à l'encontre de ceux qui se trouvent aux commandes de l'entreprise en difficulté, une sanction pour avoir laissé la situation de celle-ci se dégrader jusqu'à la cessation des paiements. Ensuite, comme la rigueur se manifeste pour l'essentiel dans le cadre des procédures de redressement et de liquidation judiciaire, elle révèle que le législateur n'entend pas protéger les intérêts des garants lorsque le sauvetage de l'entreprise est compromis, voire impossible. Le contraste existant avec les procédures de conciliation et de sauvegarde est censé inciter les dirigeants-garants à se tourner vers les procédures préventives. Il est donc manifeste qu'en droit des entreprises en difficulté, les protections sont accordées ou refusées aux garants, non pas au regard des caractéristiques de leur engagement, et donc de leur propre besoin de protection, mais en fonction des chances de préserver l'activité économique de l'entreprise. Enfin, la rigueur à l'encontre des garants a pour corollaire une meilleure protection des créanciers. Mais il existe là aussi une instrumentalisation de cette protection au service de l'entreprise, puisque l'efficacité des sûretés personnelles dans les procédures de redressement et de liquidation judiciaire s'explique par la volonté d'asseoir la confiance des créanciers et de stimuler par là même l'octroi de crédit aux entreprises 54 . 21. Dynamiser et sécuriser la vie des affaires est donc bien une finalité partagée par de nombreuses et diverses règles spéciales. La spécialisation du droit des sûretés personnelles n'est pas uniquement sous-tendue par cette logique économique. Des impératifs sociaux ont conduit à l'adoption d'autres règles spécifiques, tournées vers la protection des consommateurs. B/ Protéger les consommateurs 22. En matière de sûretés personnelles, aucune règle ne vise les garants ou cautions consommateurs. Cette qualité est toutefois implicite chaque fois que la loi ou les juges réservent un traitement particulier aux personnes physiques s'engageant à des fins non professionnelles 55 . Les critères de leur protection méritent d'être détaillés (1), avant que n'en soient exposées les principales modalités (2). Critères de protection 23. Les protections qui bénéficient aux garants personnes physiques s'engageant dans un cadre non professionnel reposent sur des critères distincts en législation (a) et en jurisprudence (b). a. 
En législation : la nature de la dette principale 24. Les premiers textes ayant protégé les personnes physiques qui souscrivent une sûreté personnelle en dehors de leur activité commerciale, industrielle, artisanale ou libérale n'ont pas détaillé de la sorte la qualité du garant. Ils les ont implicitement visées en spécifiant la nature de la dette principale. En effet, ont été spécialement réglementés les deux types de dettes non professionnelles le plus souvent garanties par des proches du débiteur personne physique, à savoir, d'une part, les crédits mobiliers ou immobiliers de consommation et, d'autre part, les dettes naissant d'un bail d'habitation. b. En jurisprudence : les caractéristiques de l'engagement de garantie 51 C. com.,al. 6. 53 C. com.,II. 54 Le principe d'irresponsabilité des dispensateurs de crédit lorsque l'entreprise fait l'objet d'une procédure d'insolvabilité (C. com., art. L. 650-1) relève de la même logique. 55 L'article préliminaire du Code de la consommation, issu de la loi n° 2014-344 du 17 mars 2014 relative à la consommation, définit le consommateur comme "toute personne physique qui agit à des fins qui n'entrent pas dans le cadre de son activité commerciale, industrielle, artisanale ou libérale". 25. Depuis une vingtaine d'années, la Cour de cassation réserve le bénéfice de certaines règles du droit commun des contrats aux cautions qui n'ont pas d'intérêt pécuniaire dans l'opération garantie, qui ne sont pas rompues aux affaires, qui ne disposent d'aucun pouvoir juridique à l'égard du débiteur principal et qui ne maîtrisent nullement la situation financière de ce dernier. Sur le fondement de l'absence d'"interêt personnel et patrimonial" de la caution dans l'obtention du crédit garanti, les protections liées au caractère civil du cautionnement sont ainsi applicables 56 . C'est par ailleurs au profit des cautions "non averties", que la Haute juridiction découvre des obligations de loyauté particulières à la charge des créanciers, comme l'obligation de ne pas faire souscrire un cautionnement manifestement disproportionné aux biens et revenus de ces cautions et le devoir de les mettre en garde sur les risques patrimoniaux de l'opération. 26. La Cour de cassation n'a jamais défini la notion de caution "non avertie". Elle en contrôle en revanche les critères, liés principalement aux compétences et aux expériences professionnelles de la caution, lui permettant ou non de comprendre la nature et la portée des obligations principales et de son propre engagement, ainsi qu'aux relations -personnelles ou professionnelles -qu'elle entretient avec le débiteur garanti, lui permettant ou non de connaître et d'influencer l'endettement de celui-ci. Fréquemment, les proches du débiteur principal ou d'un membre de la société garantie sont qualifiés de cautions "non averties" et profitent dès lors des protections que la jurisprudence subordonne à cette qualité. Modes de protection 27. Les règles spéciales qui ont été consacrées depuis la fin du XXe siècle au bénéfice des garants personnes physiques s'engageant pour des raisons et à des fins non professionnelles expriment nettement l'emprise du droit de la consommation sur le droit des sûretés personnelles, en ce qu'elles déploient les techniques consuméristes classiques de protection du consentement et du patrimoine de la partie réputée faible, c'est-à-dire des interdictions (a), des informations (b) et des limitations (c). a. 
Interdictions caution personne physique solvable, ni rémunérer une caution professionnelle, risquaient de ne pouvoir se loger. Un autre type d'interdiction concerne la mise en oeuvre des cautionnements manifestement disproportionnés ab initio aux biens et revenus de la caution personne physique garantissant un crédit mobilier ou immobilier de consommation 63 . Effectivement, sous réserve d'un retour à meilleure fortune de la caution, l'établissement de crédit "ne peut se prévaloir" de la sûreté. Cette déchéance totale constitue une mesure de prévention du surendettement des cautions engagées pour des raisons et à des fins non professionnelles. b. Informations 29. Sous l'influence du droit de la consommation, qui organise précisément l'information des consommateurs au stade de la formation des contrats, plusieurs dispositions visant à éclairer le consentement des cautions sur la nature et la portée de leur engagement, dès la souscription de celuici, ont institué un formalisme informatif conditionnant la validité même des cautionnements visés, à savoir ceux garantissant des crédits de consommation ou des dettes provenant d'un bail d'habitation. 30. Certaines informations doivent être délivrées avant même la signature du contrat de cautionnement, pour que la décision de s'engager soit la plus libre et éclairée possible. Ainsi, celui qui envisage de cautionner un crédit à la consommation ou un crédit immobilier doit-il se voir remettre, comme l'emprunteur-consommateur lui-même, un exemplaire de l'offre de crédit 64 . Une autre mesure préventive est prévue dans le cautionnement par une personne physique d'un crédit immobilier. Il s'agit d'un délai de réflexion de dix jours suivant la réception de l'offre de crédit 65 . 31. Pour éclairer la caution sur les principales caractéristiques du contrat garanti et de son propre engagement, le formalisme informatif ad validitatem revêt deux autres modalités lors de la conclusion du cautionnement : en matière de bail d'habitation, la remise d'un exemplaire du contrat de location 66 ; en ce domaine et également lorsque la caution personne physique garantit un crédit accordé à un consommateur, des mentions manuscrites portant principalement sur le montant, la durée et, le cas échéant, le caractère solidaire de l'engagement 67 . Ces mentions n'ont pas à être respectées si le cautionnement est notarié ou contresigné par un avocat 68 , compte tenu des obligations d'information et de conseil pesant sur ces professionnels du droit. Elles conditionnent en revanche la validité des cautionnements conclus par actes sous seing privé, dans lesquels elles ne sauraient être apposées sous forme électronique par les cautions ne s'engageant pas pour les besoins de leur profession 69 . 32. Les personnes physiques garantissant un crédit à la consommation ou immobilier doivent par ailleurs être informées par l'établissement de crédit de la défaillance de l'emprunteur-consommateur 70 . Le non-respect de cette obligation est sanctionné par une déchéance partielle des droits du créancier 71 . Si le débiteur fait l'objet d'une procédure de surendettement, la caution doit en être informée par la commission de surendettement 72 . Cela peut lui permettre d'invoquer les protections spécifiques que renferme le droit du surendettement au profit de l'ensemble des cautions 73 71 Déchéance des "pénalités ou intérêts de retard échus entre la date de ce premier incident et celle à laquelle elle en a été informée" 72 C. consom., art. L. 
331-3, qui ne prévoit aucune sanction en cas de défaut d'information. 73 Comme l'extinction du cautionnement par voie accessoire en cas de défaut de déclaration de la créance garantie dans la procédure de rétablissement personnel (C. consom., art. L. 332-7). 74 V. infra n° 51. 33. En vue de réduire les risques patrimoniaux inhérents au contrat de cautionnement, des limites à l'étendue de l'obligation de garantir, ainsi qu'au droit de poursuite du créancier, se sont développées au bénéfice des cautions personnes physiques s'engageant pour des raisons et à des fins personnelles. 34. La première limitation concerne l'étendue de leur engagement et joue a priori. Elle consiste à imposer, à peine de nullité du cautionnement, une mention précisant le montant et la durée de la garantie. Les personnes physiques qui s'engagent sous seing privé à cautionner des crédits de consommation doivent ainsi écrire la mention imposée par l'article L. 313-7 du Code de la consommation 75 . 35. D'autres limitations jouent a posteriori. Elles procèdent des sanctions prononcées à l'encontre du créancier sur le fondement de textes spéciaux 76 ou du droit commun de la responsabilité civile. Ainsi, lorsqu'un créancier professionnel se montre déloyal vis-à-vis d'une caution profane, en lui faisant souscrire un engagement manifestement disproportionné et/ou en ne la mettant pas en garde sur les risques de l'opération, cette caution "non avertie" peut-elle obtenir des dommages et intérêts, qui ont vocation à se compenser avec sa propre dette et à diminuer celle-ci à due concurrence. Sans être totalement remise en cause, l'obligation de garantir se trouve alors ramenée à un montant raisonnable. 36. Un dernier type de limitation porte sur la durée pendant laquelle des poursuites peuvent être exercées par le créancier à l'encontre des cautions garantissant un emprunteur-consommateur. Depuis 1989, les textes relatifs au crédit à la consommation étant applicables à son cautionnement 77 , les actions du prêteur doivent être exercées dans les deux ans du premier incident de paiement non régularisé 78 , tant à l'encontre de l'emprunteur, que de sa caution, à peine de forclusion. 37. Bien que la qualité de garant-consommateur ne soit pas expressément consacrée en droit français, il existe donc, depuis la fin des années 1980, de nombreuses règles légales et jurisprudentielles qui, sur le fondement de la nature de la dette principale ou des caractéristiques de l'engagement de garantie, et sous l'influence du droit de la consommation, protègent les garants personnes physiques s'engageant pour des raisons et à des fins n'entrant pas dans le cadre de leur activité professionnelle. Alors qu'initialement ces règles étaient clairement distinctes de celles relatives aux sûretés personnelles constituées pour des entreprises, des rapprochements ont par la suite été opérés entre le monde des affaires et celui des consommateurs. Tel est l'objet des règles spéciales protégeant les garants personnes physiques. C/ Protéger les personnes physiques 38. 
Les règles spéciales encadrant la vie des affaires et celles protégeant les consommateurs sont habituellement distinctes d'un point de vue formel et opposées d'un point de vue substantiel : inscrites dans des textes ou des codes séparés, les premières sont inspirées par des objectifs micro ou macro économiques et promeuvent bien souvent la liberté, la rapidité, la sécurité ou encore la confiance mutuelle, tandis que les secondes, sous-tendues par des impératifs sociaux, veillent à densifier la volonté de la partie faible, à rééquilibrer des relations réputées inégales et à pourchasser le surendettement. En matière de sûretés personnelles, ce classique clivage a d'abord été respecté. Nous avons vu que, jusqu'au milieu des années 1990, des règles spéciales différentes ont été adoptées, soit pour dynamiser et sécuriser l'activité des entreprises, soit pour protéger les garants n'agissant pas pour les besoins de leur profession. La frontière entre le monde des affaires et celui des consommateurs a ensuite été largement dépassée. Les règles édictées depuis une vingtaine d'années ont en effet privilégié deux nouveaux critères d'application, à savoir deux qualités cumulatives, celles de caution personne physique et de créancier professionnel (1), ou bien la seule qualité de garant personne physique (2). Ces deux critères ont pour point commun d'englober les garants intégrés dans l'entreprise débitrice et les garants agissant pour des raisons et à des fins non professionnelles. 75 Les protections des cautions personnes physiques engagées envers un créancier professionnel 39. Sur le fondement de la double prise compte de la qualité de la caution -personne physique -et de celle du créancier -professionnel -, un corps de règles spéciales a été créé, au sein du Code de la consommation, par la loi du 29 juillet 1998 relative à la lutte contre les exclusions et par celle du 1er août 2003 pour l'initiative économique 79 . Ces règles présentent une réelle singularité : tout en étant profondément liées au droit de la consommation (a), elles opèrent une alliance avec le monde des affaires (b). a. Parenté avec le droit de la consommation 40. Les protections instaurées au bénéfice des cautions personnes physiques engagées envers des créanciers professionnels entretiennent des liens très étroits avec le droit de la consommation. Outre l'inscription dans le Code du même nom, la parenté repose sur trois éléments. 41. D'abord, les modes de protection. Les lois de 1998 et de 2003 ont étendu le champ de la plupart des protections empruntées au droit de la consommation, qui étaient précédemment accordées aux cautions personnes physiques engagées envers un établissement financier pour garantir un crédit mobilier ou immobilier de consommation. Désormais, ce sont plus généralement les cautions personnes physiques garantissant des créanciers professionnels qui profitent du formalisme informatif ad validitatem (les mentions manuscrites portant sur les principales caractéristiques de leur engagement 80 ) et de la limitation qui en résulte du montant et de la durée de l'obligation de garantir. Ont pareillement été étendues l'information sur la défaillance du débiteur 81 et la décharge totale en cas de disproportion manifeste de l'engagement 82 . 42. 
La parenté avec le droit de la consommation se reconnaît ensuite aux critères de protection retenus, qui évoquent la prise en compte de la qualité des deux parties -un consommateur et un professionnelet le déséquilibre réputé exister entre elles, sur lesquels ce droit s'est historiquement construit. En effet, l'application des articles L. 341-1 à L. 341-6 du Code de la consommation ne dépend plus de la nature de la dette principale 83 , mais seulement de la qualité des parties : une caution personne physique, qui fait figure de partie faible, et un créancier professionnel, censé être en position de force. 43. C'est enfin la définition de ce créancier professionnel qui rapproche nettement les règles spéciales du cautionnement du droit de la consommation. Effectivement, les articles L. 341-1 à L. 341-6 concernent, non pas les seuls prestataires de services bancaires 84 , mais plus généralement tout créancier "dont la créance est née dans l'exercice de sa profession ou se trouve en rapport direct avec l'une de ses activités professionnelles" 85 . Or, depuis une vingtaine d'années, le critère du "rapport direct" avec l'activité professionnelle est précisément celui qui préside à l'interprétation des textes du droit de la consommation relatifs, notamment, à la lutte contre les clauses abusives ou à la vente par démarchage. 44. Compte tenu de ces divers liens avec le droit de la consommation, il est certain que les règles spéciales édictées en 1998 et 2003 ont vocation à protéger la volonté et le patrimoine de toutes les cautions qui s'apparentent à des consommateurs, c'est-à-dire les personnes physiques qui agissent à des 79 C. consom., art. L. 341-1, issu de la loi n° 98-657, et L. 341-2 à L. 341-6, issus de la loi n° 2003-721. 80 C. consom., art. L. 341-2 et L. 341-3, dont la rédaction est identique à celle des articles L. 313-7 et L. 313-8. 81 C. consom., art. L. 341-1, qui étend l'information imposée par l'article L. 313-9. 82 C. consom., art. L. 341-4, qui reprend les conditions et la sanction de l'article L. 313-10. 83 Les dettes garanties par les cautionnements soumis aux articles L. 341-2 et L. 341-3 du Code de la consommation peuvent naître, non seulement d'un crédit accordé sous la forme d'un prêt ou d'une autorisation de découvert en compte courant ou même de délais de paiement (Com. 10 janv. 2012, Bull. civ. IV, n° 2), mais également d'un contrat de bail commercial (Com. 13 mars 2012, inédit, n° 10-27814) ou encore d'un contrat de fournitures (Paris, 11 avr. 2012, JurisData n° 2012-014098). 84 Au contraire, les articles L. 313-7 à L. 313-10 régissant les cautionnements des crédits de consommation ne sont applicables qu'en présence d'un établissement de crédit, une société de financement, un établissement de monnaie électronique, un établissement de paiement ou encore un organisme mentionné au 5° de l'article L. 511-6 du Code monétaire et financier. 85 Par exemple, un garagiste ou un vendeur de matériaux de construction, qui accorderait des délais de paiement à ses clients moyennant la conclusion d'un cautionnement par une personne physique. V. not. Civ. 1 re , 25 juin 2009, Bull. civ. I, n° 138 ; Civ. 1 re , 9 juill. 2009, Bull. civ. I, n° 173 ; Com. 10 janv. 2012, Bull. civ. IV, n° 2 ; Civ. 1re, 10 sept. 2014, inédit, n o 13-19426. fins n'entrant pas dans le cadre de leur activité professionnelle, dès lors qu'elles contractent avec un créancier professionnel. 
Sont beaucoup moins évidentes au premier abord, mais néanmoins réelles, les relations existant entre ces mêmes règles et le monde des affaires.
b. Alliance avec le monde des affaires
45. Les règles protectrices des cautions personnes physiques engagées envers un créancier professionnel associent le monde des affaires et celui des consommateurs, non seulement parce que les principaux acteurs de l'un et de l'autre n'y sont plus différenciés, mais aussi parce que les objectifs économiques qui gouvernent habituellement la vie des affaires et les impératifs sociaux qui président à la protection des consommateurs y sont étroitement mêlés.
46. S'agissant du rapprochement entre les acteurs, il s'est opéré de deux manières symétriques. D'une part, la loi du 1er août 2003 a étendu aux cautionnements conclus entre une caution personne physique et un créancier professionnel deux règles qui avaient vu le jour dans les cautionnements de la vie des affaires, à savoir la nullité des stipulations de solidarité et de renonciation au bénéfice de discussion dès lors que le cautionnement n'est pas limité à un montant global 86, ainsi que l'obligation d'information annuelle sur l'encours de la dette principale et le terme du cautionnement (C. consom., art. L. 341-6, qui se trouve dans le prolongement des articles 48 de la loi du 1er mars 1984). D'autre part, le décloisonnement est le fruit d'une interprétation large de la notion de caution personne physique. Depuis 2010, la Cour de cassation accorde aux dirigeants-cautions le bénéfice de l'article L. 341-4 du Code de la consommation, c'est-à-dire le droit d'être intégralement déchargés si le cautionnement était manifestement disproportionné ab initio à leurs biens et revenus, en décidant que "le caractère averti de la caution est indifférent pour l'application de ce texte" 88. A partir de 2012, les articles L. 341-2 et L. 341-3 du Code de la consommation relatifs au formalisme informatif ont également été déclarés applicables à "toute personne physique, qu'elle soit ou non avertie" 89.
47. Les objectifs poursuivis par les auteurs des lois du 29 juillet 1998 et du 1er août 2003 ont été à la fois sociaux et économiques, comme en attestent l'intitulé de la première, "loi relative à la lutte contre les exclusions", et celui de la seconde, "loi pour l'initiative économique". Il s'est essentiellement agi de prévenir le surendettement de toutes les cautions personnes physiques 90 et d'étendre les protections jusque là réservées aux cautions n'agissant pas pour les besoins de leur profession à celles exerçant un pouvoir de direction ou de contrôle au sein de l'entreprise garantie et ce, en vue d'encourager l'esprit d'entreprendre et la souscription de garanties, nécessaires au financement des entreprises à tous les stades de leur existence.
Les protections fondées sur la seule qualité de garant personne physique
48. L'endettement génère des risques spécifiques pour les personnes physiques : risque d'exclusion sociale et d'atteinte à la dignité, s'il se transforme en surendettement ; risque de propagation aux membres de la famille tenus de répondre des dettes du débiteur. En matière de sûretés personnelles, ces dangers pèsent sur les garants personnes physiques avec une acuité particulière étant donné que l'endettement a lieu pour autrui.
Il n'est dès lors pas surprenant que plusieurs protections aient été accordées à toutes les cautions, voire à tous les garants, personnes physiques, quelles que soient la nature de la dette principale et la qualité du créancier, soit pour préserver leur famille (a), soit pour lutter contre le surendettement (b). La protection des garants personnes physiques, sur le fondement de cette seule qualité, est alors une fin en soi, et non un moyen au service d'autres intérêts (v. supra n° 12 et 13 les règles qui protègent tous les garants personnes physiques dans les procédures collectives professionnelles, en vue de favoriser le maintien de l'activité des entreprises garanties).
a. Protections de la famille du garant
49. Le droit des régimes matrimoniaux et le droit des successions protègent la famille du garant en imposant une limitation de l'assiette du droit de poursuite du créancier. Si une sûreté personnelle est souscrite par un époux commun en biens 92, seul, le créancier ne peut en principe saisir que les biens propres et les revenus de cet époux. Les biens communs ne font partie du gage du créancier que si la garantie a été contractée "avec le consentement exprès de l'autre conjoint, qui, dans ce cas, n'engage pas ses biens propres" 93. En cas de décès du garant, ses engagements sont transmis à ses héritiers 94, qui, s'ils acceptent la succession purement et simplement, sont en principe tenus d'exécuter les obligations du défunt sur leur patrimoine personnel, même s'ils ignorent l'existence de la sûreté au moment d'exercer leur option successorale 95. La réforme du droit des successions du 23 juin 2006 a tempéré la rigueur de ces solutions en prévoyant que l'héritier acceptant purement et simplement la succession "peut demander à être déchargé en tout ou partie de son obligation à une dette successorale qu'il avait des motifs légitimes d'ignorer au moment de l'acceptation" 96. Dans la mesure où cette décharge judiciaire est subordonnée à la preuve que "l'acquittement de cette dette aurait pour effet d'obérer gravement son patrimoine personnel", elle ne libère sans doute pas l'héritier de la dette elle-même, mais uniquement de l'obligation de l'acquitter sur son propre patrimoine en cas d'insuffisance de l'actif successoral. La protection de l'héritier repose donc bien, elle aussi, sur une réduction de l'assiette du droit de poursuite du créancier.
b. Lutte contre le surendettement du garant
50. Pour prévenir le surendettement, la loi du 29 juillet 1998 s'est attachée à réduire l'engagement des cautions personnes physiques, en inscrivant dans le Code civil deux règles indifférentes au type de dettes couvertes, à la cause de la garantie, professionnelle ou non, et encore à la qualité du créancier. La première impose une information annuelle sur "l'évolution du montant de la créance garantie et de ces accessoires" au bénéfice des personnes physiques ayant souscrit un "cautionnement indéfini" 97. Dès lors que celui-ci ne comporte pas de limite propre et que sa durée est indéterminée si celle de la dette principale l'est elle-même, l'information peut favoriser sa résiliation 98 et, par conséquent, dans le cautionnement de dettes futures, la non-couverture de celles naissant postérieurement.
La seconde limitation prévue par la loi de 1998 porte sur l'assiette des poursuites : "en toute hypothèse, le montant des dettes résultant du cautionnement ne peut avoir pour effet de priver la personne physique qui s'est portée caution d'un minimum de ressources" 99, correspondant au montant du revenu de solidarité activité. Ce "reste à vivre" profite à toutes les cautions personnes physiques, que leur engagement soit simple ou solidaire, qu'il ait été consenti pour des raisons personnelles ou professionnelles 100, car il procède de l'impératif de lutte contre l'exclusion des particuliers. L'objectif de prévention du surendettement des cautions personnes physiques a inspiré d'autres protections dans le droit du surendettement lui-même. En effet, depuis 2003, les dettes payées en lieu et place d'un débiteur surendetté par une caution ou un coobligé, personne physique, ne sauraient être effacées partiellement dans le cadre de la procédure se déroulant devant la commission de surendettement, ni totalement effacées en cas de clôture de la procédure de rétablissement personnel pour insuffisance d'actif 101. L'existence même des recours en remboursement contre le débiteur surendetté se trouve ainsi préservée par la loi. La Cour de cassation conforte en outre leur efficacité, en décidant que le débiteur ne peut opposer à la caution les remises et délais dont il a profités 102.
92 Régime légal de communauté réduite aux acquêts ou régime conventionnel de communauté universelle (Civ. 1re, 3 mai 2000, Bull. civ. I, n° 125). 93 C. civ., art. 1415, issu de la loi n° 85-1372 du 23 décembre 1985 relative à l'égalité des époux dans les régimes matrimoniaux. La Cour de cassation décide que ce texte "est applicable à la garantie à première demande qui, comme le cautionnement, est une sûreté personnelle, (…) et est donc de nature à appauvrir le patrimoine de la communauté" (Civ. 1re, 20 juin 2006, Bull. civ. I, n° 313). 94 En présence d'un cautionnement, la Cour de cassation limite cette transmission, rappelée par l'article 2294 du Code civil. En effet, lorsque des dettes futures sont garanties, le décès de la caution constitue un terme extinctif implicite de son obligation de couverture, de sorte que seules les dettes nées avant le décès sont transmises aux héritiers (Com. 29 juin 1982, Bull. civ. 98 Contrairement aux autres textes régissant l'information annuelle des cautions, l'article 2293 du Code civil n'impose malheureusement pas au créancier de rappeler cette faculté de résiliation lorsque le cautionnement est à durée indéterminée. En revanche, si l'information n'est pas délivrée, il conduit à une réduction plus importante de l'obligation de garantir, puisque la caution se trouve déchargée "de tous les accessoires de la dette, frais et pénalités" et non seulement de ceux échus au cours de la période de non-information. 99 C. civ., art. 2301, qui renvoie à l'article L. 331-2 du Code de la consommation dans lequel se trouvent détaillées les sommes devant être obligatoirement laissées aux particuliers surendettés. 100 Com. 31 janv. 2012, Bull. civ. IV, n° 13.
51. En cas d'échec des diverses mesures visant à prévenir le surendettement 103, les garants 104 personnes physiques se trouvant dans cette situation ont accès aux mesures de traitement régies par le Code de la consommation 105, qui conduiront à retarder le paiement du créancier, à le réduire, voire à l'empêcher purement et simplement, autrement dit à limiter, voire à ruiner, l'efficacité de la sûreté.
52.
L'inefficacité des sûretés personnelles ne résulte pas uniquement de ces règles ayant pour finalité de lutter contre le surendettement des garants. En réalité, presque toutes les règles spéciales adoptées depuis les années 1980 en matière de sûretés personnelles, qu'elles aient pour objet de sécuriser et dynamiser la vie des affaires, de protéger les consommateurs ou plus largement les garants personnes physiques, portent des atteintes plus ou moins profondes aux droits des créanciers. C'est probablement la principale critique que l'on puisse adresser à la spécialisation du droit des sûretés personnelles. Mais c'est loin d'être la seule.
II. Les imperfections du droit des sûretés personnelles
53. L'évolution que le droit français des sûretés personnelles a connue depuis une trentaine d'années repose, nous l'avons vu, sur des objectifs parfaitement légitimes, si ce n'est impérieux : soutenir les entreprises, protéger les parties faibles, préserver les familles, lutter contre l'exclusion financière et sociale des particuliers. Les bons sentiments ne suffisent cependant pas à faire de bonnes règles. Celles que les réformes ponctuelles des sûretés personnelles et la jurisprudence ont forgées en marge du droit commun en sont l'illustration. Les règles spéciales en cette matière présentent effectivement de graves imperfections, tant formelles (A), que substantielles (B).
A/ Imperfections formelles
54. D'un point de vue formel, le droit des sûretés personnelles est source d'insécurité juridique en raison de l'inaccessibilité des règles spéciales. Alors que celles-ci renferment le droit ordinaire, si ce n'est le nouveau droit commun, vu qu'elles portent sur les sûretés les plus fréquemment constituées dans et en dehors de la vie des affaires, il est malaisé d'y accéder matériellement. Elles sont dispersées dans plusieurs codes et textes non codifiés, ainsi qu'une jurisprudence pléthorique. En outre, leur emplacement ne reflète pas toujours leur champ d'application. Il en va ainsi des articles L. 341-1 à L. 341-6 du Code de la consommation, qui s'appliquent non seulement aux cautions n'agissant pas dans un cadre professionnel, mais également à celles intégrées dans l'entreprise garantie 106. Les règles spéciales sont également inintelligibles et ce, pour différentes raisons.
55. D'abord, elles reposent sur une multitude de critères de différenciation. Les développements précédents ont souligné qu'ils concernent :
- le garant : sa qualité de personne physique ou morale ; ses connaissances en matière de crédit et sur la situation du débiteur ; les besoins, professionnels ou non, auxquels répond son engagement ;
- le créancier : personne physique ou morale ; institutionnel, professionnel ou non professionnel ;
- le débiteur principal : consommateur ; société ou entrepreneur individuel, in bonis ou en difficulté ; particulier surendetté ;
- la nature de la dette principale : concours à une entreprise ; crédit de consommation ; bail d'habitation ;
- la forme de la sûreté : acte sous seing privé ; acte notarié ; acte sous seing privé contresigné par un avocat ;
- l'étendue de la garantie : définie ou indéfinie ; déterminée ou non en montant et en durée ;
- les modalités de la garantie : simple ou solidaire.
56. L'inintelligibilité procède ensuite de l'obscurité de certains de ces critères d'application. Il en va ainsi des qualités de cautions "averties" ou "non averties".
La Cour de cassation ne les ayant jamais définies et n'ayant admis aucune présomption à égard, la qualification est incertaine, alors qu'en dépendent plusieurs moyens de défense fondés sur la bonne foi contractuelle, dont la responsabilité en cas de disproportion du cautionnement ou de défaut de mise en garde. Ainsi, les dirigeants ou associés de la société débitrice ne sont-ils pas nécessairement considérés comme des "cautions averties". Ils le sont seulement si le créancier prouve leur implication effective dans la gestion de la société cautionnée et leur connaissance de la situation financière de celle-ci 107 , ou au moins de son domaine d'activité, grâce à des expériences professionnelles passées ou concomitantes 108 . La qualification de "caution avertie" peut être écartée, a contrario, si le dirigeant était, lors de la conclusion du cautionnement, inexpérimenté et/ou de paille 109 . Vis-à-vis des proches du débiteur principal, la qualification de "caution non avertie" n'est guère plus prévisible. Un conjoint, un parent ou un ami du débiteur peut être considéré comme "averti", si la preuve est rapportée par le créancier, soit de la compréhension des engagements 110 , soit de l'intérêt financier qu'en retire la caution, fût-ce seulement par le biais du régime matrimonial de communauté 111 . 57. L'inintelligibilité du droit des sûretés personnelles est par ailleurs imputable à l'absence de coordination entre les réformes successives instaurant des obligations identiques ou voisines. Les obligations d'information des cautions en sont l'exemple caricatural, puisque les critères d'application, les contours de l'information et les sanctions ne sont pas les mêmes dans les quatre textes régissant l'information annuelle 112 , non plus que dans les trois articles imposant l'information sur la défaillance du débiteur 113 . 58. L'insécurité juridique résulte encore du manque d'articulation entre les règles spéciales et le droit commun, notamment entre les sanctions spéciales, comme la déchéance des accessoires en cas de défaut d'information, et la responsabilité civile de droit commun 114 . Pose également difficulté la coexistence de l'exigence légale de proportionnalité du cautionnement aux biens et revenus de la caution et du devoir de mise en garde sur les risques de l'opération et la disproportion de l'engagement, créé par la jurisprudence sur le fondement de l'article 1134, alinéa 3, du Code civil. 59. Enfin, l'inintelligibilité du droit des sûretés personnelles est entravée par les incohérences entre certaines de ses dispositions. Par exemple, l'article L. 341-5 du Code de la consommation répute non écrites les stipulations de solidarité ou de renonciation au bénéfice de discussion "si l'engagement de la caution n'est pas limité à un montant global", et l'article L. 341-6 prévoit le rappel, chaque année, de la faculté de révocation "si l'engagement est à durée indéterminée", alors que l'article L. 341-2 impose de limiter le montant comme la durée du cautionnement sous seing privé souscrit par les mêmes parties, c'est-à-dire une caution personne physique et un créancier professionnel 115 . 60. Tous ces défauts formels entravent la connaissance, la compréhension et la prévisibilité du droit en vigueur et compromettent la réalisation des attentes des parties, singulièrement la sécurité recherchée par les créanciers garantis. B/ Imperfections substantielles 61. Sur le fond, le droit des sûretés personnelles présente d'autres d'imperfections qui entravent également l'efficacité de ces garanties. 
Les premières imperfections substantielles résident dans la méconnaissance de la fonction des sûretés et dans l'altération de leurs principaux caractères (1). Les secondes tiennent à l'inadéquation entre les objectifs poursuivis et les techniques déployées pour les atteindre (2). Ces différentes imperfections menacent directement la sécurité des créanciers. Elles produisent également des effets pervers à l'encontre de ceux-là mêmes qu'elles cherchent à protéger. Altération de la fonction et des caractères des sûretés 62. Depuis les années 1980, la protection des créanciers ne semble plus être la priorité, ni du législateur, ni des juges, les objectifs poursuivis étant essentiellement tournés vers les garants personnes physiques et, le cas échéant, vers les entreprises garanties. Dès lors que se trouve ainsi occultée la fonction des sûretés personnelles, qui consiste à augmenter les chances de paiement du créancier, il n'est pas étonnant que leur efficacité soit menacée 116 . 63. Techniquement, les règles spéciales entravent la protection des créanciers en remettant en cause les caractères de la sûreté qui leur étaient traditionnellement favorables. Quatre altérations de ce type peuvent être citées, la première relative aux sûretés non accessoires, les trois autres au cautionnement. Le caractère indépendant ou indemnitaire de la sûreté est méconnu par les règles communes aux sûretés pour autrui énoncées par le droit des entreprises en difficulté, précisément par celles rendant opposables par tous les garants les remises ou délais accordés au débiteur (dans la procédure de conciliation) ou par les seuls garants personnes physiques (dans la procédure de sauvegarde) 117 . Concernant le cautionnement, c'est d'abord son caractère consensuel 118 qui se trouve profondément entamé par les textes imposant, à peine de nullité, des mentions manuscrites 119 . La souplesse du cautionnement au stade de sa constitution s'en trouve diminuée. La sécurité que sont censés procurer, tant la sûreté, que le formalisme, est également compromise par le contentieux très abondant que suscitent ces mentions 120 . C'est ensuite le caractère unilatéral du cautionnement, donc sa simplicité pour les créanciers, qui reçoit de sérieux tempéraments par le biais des obligations diverses qu'ils supportent à tous les stades de la vie du contrat : remise de documents, vérification du patrimoine du garant, mise en garde avant la signature du contrat, informations pendant la période de couverture et lors de la défaillance du débiteur 121 . C'est enfin le caractère supplétif du régime du cautionnement qui est fortement battu en brèche. Dans une large mesure, les créanciers n'ont plus la liberté de modeler le contenu du contrat au plus proche de leurs besoins et intérêts, non seulement parce qu'une limitation du montant et de la durée de la garantie leur est souvent imposée, à peine de nullité du contrat 122 , mais aussi parce que des clauses qui pourraient favoriser leur paiement sont interdites. Il en va ainsi des stipulations de solidarité ou de renonciation au bénéfice de discussion, lorsque le montant de l'engagement n'est pas limité 123 . La jurisprudence paralyse aussi la clause, au sein d'un cautionnement de dettes futures, qui mettrait à la charge des héritiers de la caution les dettes nées après son décès 124 . Inadéquation entre les finalités recherchées et les règles adoptées 64. 
Bien qu'elles contredisent la fonction même des sûretés et ceux de leurs caractères qui sécurisent les intérêts des créanciers, les protections des garants ne sont pas ipso facto illégitimes. Des intérêts supérieurs à ceux des créanciers méritent d'être défendus. A cet égard, les principaux objectifs qui sous-tendent les protections des garants, qu'ils soient d'ordre économique ou social (soutenir les entreprises et maintenir les emplois ; protéger les contractants en situation de faiblesse ; lutter contre l'exclusion des particuliers ; préserver les familles du risque de propagation de l'endettement), sont suffisamment sérieux et légitimes, voire impérieux, pour autoriser des atteintes aux droits des créanciers. Si les protections des garants sont donc justifiées, dans leur principe même, elles prêtent en revanche le flanc à la critique chaque fois que leurs modalités ne sont pas en adéquation avec leurs finalités. Il en va ainsi lorsque leur périmètre est mal défini 125 ou que les sanctions sont mal calibrées, car les protections des garants sont alors insuffisantes pour atteindre les objectifs poursuivis ou excessives par rapport à ce que requièrent ceux-ci.
65. Deux exemples d'inadéquation entre les finalités recherchées et les sanctions prévues par les règles spéciales peuvent être fournis. Le premier concerne la nullité du cautionnement en cas de non-respect du formalisme informatif. Dès lors que la protection du consentement est au coeur des solennités instituées, il est logique que la nullité en question soit relative et que les cautions puissent y renoncer a posteriori par une confirmation non équivoque 126. Il est en revanche critiquable d'admettre la nullité de l'acte "sans qu'il soit nécessaire d'établir l'existence d'un grief" 127 ou, a fortiori, lorsque la preuve est rapportée de la parfaite connaissance par la caution de l'étendue de son engagement 128. La sanction excède alors le but poursuivi, elle donne une prime à la mauvaise foi du garant et encourage inutilement le contentieux. L'interdiction faite au créancier professionnel de se prévaloir d'un cautionnement manifestement disproportionné ab initio aux biens et revenus de la caution, constitue un autre exemple de sanction excessive. En effet, comme cette déchéance "ne s'apprécie pas à la mesure de la disproportion" 129, l'engagement disproportionné est rendu totalement inefficace 130, alors que, pour satisfaire l'objectif de prévention du surendettement de la caution, une réduction eût été suffisante 131.
66. Les diverses imperfections formelles et substantielles que présentent les règles spéciales du droit des sûretés personnelles affectent directement les droits des créanciers et, par ricochet, ceux des autres protagonistes de l'opération de garantie. Il est bien connu, en effet, que la perte de confiance des créanciers dans les sûretés produit deux types d'effets pervers. D'une part, à l'encontre des garants, car les créanciers cherchent à compenser le déficit d'efficacité de la sûreté en imposant des garanties supplémentaires 132 et/ou moins encadrées 133, préservant davantage leur propre sécurité. D'autre part, sur le crédit, et donc sur le système économique dans son ensemble, puisque la perte d'efficacité des sûretés peut se traduire par un ralentissement et une augmentation du coût des crédits aux particuliers et aux entreprises. Il apparaît en définitive que l'inefficacité des sûretés personnelles, que génèrent les règles spéciales en la matière, est de nature à compromettre la protection des consommateurs (débiteurs principaux et garants), celle plus généralement des personnes physiques, ainsi que le soutien aux entreprises, autrement dit la réalisation des principaux objectifs qui sous-tendent ces règles spéciales. Pour restaurer à la fois l'efficacité des sûretés personnelles et celle du droit des sûretés personnelles lui-même, une réforme en profondeur s'impose.
122 Le montant et la durée du cautionnement peuvent demeurer indéterminés dans trois hypothèses seulement : s'il est conclu par acte notarié ou contresigné par avocat ; s'il est souscrit sous seing privé par une caution personne morale ; s'il est conclu sous seing privé entre une caution personne physique et un créancier non professionnel. 123 Loi du 11 février 1994, art. 47, II, al. 1er ; C. consom., art. L. 341-5. 124 Com. 13 janv. 1987, Bull. civ. IV, n° 9. 125 V. infra n° 78 à 84. 126 Com. 5 févr. 2013, Bull. civ. IV, n° 20, au motif que le formalisme (en l'espèce, la mention manuscrite de l'article L. 341-2 du Code de la consommation) a "pour finalité la protection des intérêts de la caution". 127 Civ. 3e, 8 mars 2006, Bull. civ. III, n° 59 ; Civ. 3e, 14 sept. 2010, inédit, n° 09-14001. 128 Civ. 1re, 16 mai 2012, inédit, n° 11-17411 ; Civ. 1re, 9 juill. 2015, n° 14-24287, à paraître au Bulletin. 129 Com. 22 juin 2010, Bull. civ. IV, n° 112, relatif à l'article L. 341-4 du Code de la consommation. 130 La Cour de cassation a récemment décidé que la décharge intégrale de la caution ayant souscrit un engagement manifestement disproportionné joue erga omnes, c'est-à-dire "à l'égard tant du créancier que des cofidéjusseurs", de sorte que cette caution n'a pas à rembourser le cofidéjusseur ayant désintéressé le créancier (Ch. mixte 27 févr. 2015, n° 13-13709, à paraître au Bulletin). 131 Sur le fondement du droit commun de la responsabilité, la sanction de la disproportion est ainsi plus mesurée.
III. La reconstruction du droit des garanties personnelles
67. La reconstruction globale du droit des sûretés n'a pas été réalisée par l'ordonnance du 23 mars 2006. Si les sûretés réelles conventionnelles de droit commun ont été réformées en profondeur, les sûretés personnelles ne l'ont pas été. A leur égard, aucune refonte n'a été opérée : le cautionnement n'a nullement été modifié sur le fond, seule la numérotation des articles du Code civil le concernant a été modifiée ; la garantie autonome et la lettre d'intention ont certes été reconnues, mais seulement dans deux articles du Code civil, qui en donnent la définition, sans détailler leur régime juridique 134. Compte tenu des imperfections formelles et substantielles que présente le droit des sûretés personnelles 135, il est cependant regrettable qu'une réforme n'ait pas eu lieu depuis 2006.
68. La doctrine et les praticiens appellent de concert une reconstruction 136 et se rejoignent sur les finalités qui devraient l'inspirer. Il est essentiel, d'abord, de renforcer l'accessibilité, l'intelligibilité et la prévisibilité du droit des sûretés personnelles pour rendre effectifs les droits de tous les protagonistes de l'opération de garantie, et pour favoriser le rayonnement du droit français dans l'ordre international. Ensuite, il est indispensable de restaurer l'efficacité des sûretés personnelles 137 en augmentant les chances de paiement des créanciers, qui ont été compromises par les multiples causes de décharge partielle ou totale des garants consacrées par les lois récentes et la jurisprudence.
Remettre la sécurité des créanciers au coeur du droit des sûretés personnelles favoriserait, par contrecoup, l'accès au logement des particuliers et surtout l'octroi de crédit à ceux-ci ainsi qu'aux entreprises, l'un et l'autre limités par la crise économique. La troisième finalité de la réforme du droit des sûretés personnelles a trait à la sauvegarde des intérêts légitimes des garants. L'impératif de justice contractuelle commande en effet de les mettre à l'abri d'un endettement excessif, source d'exclusion économique et sociale. Le principe de bonne foi contractuelle exige quant à lui de sanctionner les déloyautés des créanciers préjudiciables aux garants. La protection des garants qui en résulte est un moyen de stimuler le soutien qu'ils apportent aux particuliers et aux entreprises, autrement dit un instrument au service d'intérêts socio-économiques généraux. 69. Pour satisfaire ces trois objectifs, la réforme du droit des sûretés personnelles devrait modifier le contenu de bon nombre de règles en vigueur et en créer de nouvelles 138 . Dans le cadre limité de cet article, nous ne saurions détailler toutes les améliorations techniques qui mériteraient d'être apportées aux droits et obligations existants, ni les choix politiques qui devraient être opérés, particulièrement au sujet de l'articulation entre le droit des sûretés personnelles et ceux de l'insolvabilité -droit des entreprises en difficulté et droit du surendettement. Nous allons en revanche formuler des propositions intéressant le périmètre des règles gouvernant les sûretés personnelles. Pour renforcer la sécurité juridique en la matière et pour augmenter les chances de paiement des créanciers, tout en sauvegardant les intérêts légitimes des débiteurs et garants, le champ des règles en vigueur devrait être réformé de deux façons complémentaires. Il conviendrait, d'une part, d'étendre le champ des règles applicables à toutes les sûretés personnelles (A) et, d'autre part, de réviser le champ des règles spéciales du cautionnement (B). Autrement dit, un régime primaire, fondé sur les caractéristiques communes des sûretés personnelles, devrait être complété par des corps de règles spéciales, fondées sur leurs caractéristiques distinctives. Cette structure rationnelle et stratifiée, que l'ordonnance du 23 mars 2006 a consacrée en matière de sûretés réelles 139 , nous semble conditionner le succès de la réforme du droit des sûretés personnelles. A/ Extension du champ des règles communes aux sûretés personnelles 70. La reconstruction du droit des sûretés personnelles devrait reposer sur un régime primaire, c'est-àdire sur des règles communes à l'ensemble de ces garanties. Cette proposition mérite d'être justifiée (1), puis illustrée (2). 1. Justifications de l'édiction d'un régime primaire 71. Le Titre I du Livre IV du Code civil consacré aux sûretés personnelles ne comporte actuellement aucune règle générale applicable à la fois au cautionnement, à la garantie autonome et à la lettre d'intention. Des règles communes à plusieurs sûretés personnelles, voire à l'ensemble des garanties pour autrui, existent cependant déjà. Certaines ont une origine jurisprudentielle. Elles procèdent de l'application par analogie à d'autres sûretés personnelles que le cautionnement de dispositions qui ne visent que celui-ci, comme l'article 1415 du Code civil 140 . D'autres règles communes ont une origine légale. 
Le droit des sociétés 141 , le droit des entreprises en difficulté 142 , le droit des incapacités 143 ou encore le droit du bail d'habitation 144 encadrent effectivement les sûretés ou les garanties consenties 138 Sur ces mesures, il n'existe pas encore de consensus. Entre les reconstructions d'ensemble déjà proposées en doctrine, les principales divergences concernent :  les mécanismes à réformer : les seules sûretés personnelles ou, plus largement, les garanties personnelles ;  la structure de la réforme : uniquement des règles propres à chaque sûreté ou, en outre, des règles communes ;  les arbitrages à réaliser entre les intérêts des différents acteurs de l'opération de garantie, qui conduisent à définir différemment le champ des règles, spécialement au regard de la qualité des parties, à sanctionner plus ou moins rigoureusement le non-respect des obligations imposées aux créanciers et encore à réserver un sort différent aux sûretés dans le cadre des procédures d'insolvabilité. 139 Le droit des sûretés réelles, tel qu'il résulte de cette ordonnance, est articulé entre des "dispositions générales" et des "règles particulières", notamment en matière de gage de meubles corporels, d'hypothèques et de privilèges immobiliers. 140 Sur son extension, a pari, à la garantie autonome, v. Civ. 1 re , 20 juin 2006, Bull. civ. I, n o 313. 141 C. com., art. L. 225-35 et L. 225-68 imposant l'autorisation des "cautions, avals et garanties" par le conseil d'administration ou de surveillance de la société anonyme constituante. V. supra n° 4. 142 C. com., art. L. 611-10-2, L. 622-26, L. 622-28, L. 626-11, L. 631-14, L. 631-20 et L. 643-11. Depuis les ordonnances du 18 décembre 2008 et du 12 mars 2014, ces textes visent les coobligés et les personnes "ayant consenti une sûreté personnelle ou ayant affecté ou cédé un bien en garantie". V. supra n° 12 et 19. 143 C. civ., art. 509 relatif aux actes interdits aux tuteurs des mineurs ou majeurs sous tutelle, qui vise "la constitution d'une sûreté pour garantir la dette d'un tiers". 144 Loi du 6 juillet 1989, art. 22-1. V. supra n° 28. pour autrui. Ce droit commun en filigrane n'est guère accessible ; il manque de cohérence, de prévisibilité et n'est pas suffisamment développé. 72. C'est au sein du Titre I du Livre IV du Code civil que devraient être énoncées des règles générales, applicables à l'ensemble des sûretés personnelles, qu'elles soient accessoires ou indépendantes, quelles que soient également les caractéristiques de la dette principale ou la situation spécifique des parties. En s'inspirant du droit des régimes matrimoniaux, il s'agirait d'instaurer un régime primaire des sûretés personnelles venant s'ajouter aux règles propres à chacune d'elles. Il permettrait de satisfaire les trois objectifs qui devraient guider la reconstruction de la matière. D'abord, le renforcement de la sécurité juridique, dans toutes ses composantes. La cohérence et donc l'intelligibilité de la loi seraient améliorées si les règles du régime primaire étaient édictées dans le respect du principe de logique formelle selon lequel à une identité de nature doit correspondre une identité de régime. L'accessibilité matérielle serait favorisée par l'inscription du régime primaire dans le Code civil, en tête du Titre dédié aux sûretés personnelles. La prévisibilité et la stabilité du droit des sûretés personnelles seraient quant à elles renforcées, car le régime primaire orienterait l'interprétation des règles spéciales et la mise en oeuvre des mécanismes innomés. 
Ensuite, le régime primaire des sûretés personnelles respecterait l'objectif de protection des créanciers, d'une part, parce qu'il est parfaitement compatible avec la diversité actuelle des mécanismes de garantie personnelle et la liberté de choisir celle la mieux à même de procurer la sécurité recherchée145 , d'autre part, parce qu'un régime primaire pourrait diminuer le risque que les attentes des créanciers ne soient déjouées par une requalification de la garantie ou une application a pari des règles propres à une autre sûreté. Enfin, l'instauration d'un régime primaire répondrait à l'objectif de sauvegarde des intérêts des garants. Elle pourrait en effet limiter le déficit de protection auquel conduisent les stratégies de contournement du cautionnement. Illustration du régime primaire des sûretés personnelles 73. Le régime primaire devrait commencer par définir les sûretés personnelles, sur la base des caractéristiques qu'elles partagent toutes. Trois paraissent essentielles. En premier lieu, le caractère accessoire commun à toutes les garanties, et non celui qui se trouve renforcé dans certaines sûretés, particulièrement le cautionnement. Ce caractère accessoire général se reconnaît à l'adjonction de la garantie à un rapport d'obligation principal et à l'extinction de celui-ci par la réalisation de la garantie. La deuxième caractéristique des sûretés personnelles réside dans l'obligation de garantir, plus précisément dans les deux obligations distinctes, mais complémentaires, qui la composent, à savoir l'obligation de couverture naissant dès la conclusion de la sûreté et ayant pour objet d'"assurer l'aléa du non-paiement", et l'obligation de règlement, conditionnée par la défaillance du débiteur principal146 . Les sûretés personnelles se caractérisent, en troisième lieu, par un paiement pour le compte d'autrui, qui ne doit pas peser définitivement sur le garant. 74. Afin d'éclairer la définition de la sûreté personnelle fondée sur ces trois caractéristiques, une liste de mécanismes mériterait de figurer dans le régime primaire. Il serait opportun d'étendre celle de l'actuel article 2287-1 du Code civil, en présentant le cautionnement, la garantie autonome et la lettre d'intention comme des exemples ou en citant expressément d'autres garanties personnelles147 . Justifications de la révision du champ des règles spéciales 79. L'efficacité que les créanciers attendent du cautionnement et la protection des cautions que recherche le législateur sont compromises chaque fois que le champ des règles spéciales n'est pas en adéquation avec les finalités poursuivies. Cette incohérence est flagrante au sein des articles L. 341-1 à L. 341-6 du Code de la consommation, qui protègent de différentes manières le patrimoine et le consentement des cautions personnes physiques engagées envers un créancier professionnel. En effet, lorsqu'il s'agit de protéger les personnes physiques et leur famille des risques patrimoniaux les plus graves liés à la garantie, deux critères d'application semblent surabondants, à savoir celui de la nature de la sûreté et celui de la qualité du créancier. Dit autrement, les protections inspirées par l'impératif de justice distributive ou celui, à valeur constitutionnelle, de protection de la dignité humaine ne devraient pas être réservées aux cautions et encore moins à celles qui s'engagent envers un créancier professionnel, car s'attacher ainsi à la nature de la garantie et aux activités du créancier prive injustement de protection certains garants. 
Le périmètre des règles légales ayant pour objet de protéger la volonté des garants, que ce soit au stade de la formation du contrat 155 ou au cours de la vie de la sûreté 156 , paraît lui aussi inadapté. Le double critère retenu -caution personne physique et créancier professionnel -conduit à traiter toutes les cautions personnes physiques comme des parties faibles et tous les créanciers dont les créances sont en rapport direct avec leur activité professionnelle comme des parties fortes, alors qu'il n'existe pas nécessairement une asymétrie d'informations. En effet, les connaissances ou l'ignorance du garant relativement à la nature et à la portée de son engagement, ainsi qu'à la situation financière du débiteur principal, ne dépendent pas essentiellement de sa qualité de personne physique, mais bien plutôt de la cause non professionnelle de son engagement. Ainsi, les règles à finalité informative ne devraient-elles protéger que les cautions personnes physiques ayant un lien affectif avec le débiteur principal et les personnes morales dont l'activité est étrangère à l'engagement de garantie. Les cautions qui s'engagent pour des raisons et à des fins professionnelles, telles les cautions personnes physiques dirigeants ou associés de l'entreprise débitrice 157 , ne devraient pas, au contraire, en bénéficier, car elles disposent en principe de compétences, de connaissances et de pouvoirs juridiques vis-à-vis du débiteur, qui rendent superfétatoires les informations sur leur propre engagement et/ou sur la dette principale. En ignorant la cause de l'engagement de la caution, les règles spéciales du cautionnement protègent donc excessivement certaines cautions et portent atteinte inutilement à l'efficacité du cautionnement. Illustration des règles propres aux garants personnes physiques 80. Deux types de règles pourraient dépendre de la seule qualité de personne physique du garant. Il s'agit, d'une part, de celles qui ont trait aux spécificités attachées à la personnalité physique. Nous songeons aux règles relatives à la capacité du garant 158 , aux droits de la personnalité 159 et encore à la transmission de la sûreté en conséquence du décès du garant 160 . 155 Par la remise de documents, un délai d'acceptation et encore des mentions manuscrites. 156 Par l'information annuelle sur l'encours de la dette principale et sur la durée de la garantie, ainsi que par l'information sur la défaillance du débiteur. 157 Ces cautions intégrées dans les affaires de l'entreprise débitrice ne devraient pas être assimilées à des consommateurs. Telle est la position de la Cour de justice de l'Union européenne, qui a jugé qu'un avaliste, gérant et associé majoritaire de la société garantie, ne saurait être qualifié de consommateur au sens de l'article 15, § 1 er , du Règlement n° 44/2001 sur les contrats conclus par les consommateurs : "seuls les contrats conclus en dehors et indépendamment de toute activité ou finalité d'ordre professionnel, dans l'unique but de satisfaire aux propres besoins de consommation privée d'un individu, relèvent du régime particulier prévu en matière de protection du consommateur, (…) une telle protection ne se justifie pas en cas de contrat ayant comme but une activité professionnelle" (CJUE 14 mars 2013, aff. C-419/11, pt 34). 158 Règles protectrices des mineurs et majeurs sous tutelle, à l'image de l'article 509, 1° du Code civil. 
159 Protections du droit au respect de la vie privée des garants personnes physiques, notamment par l'interdiction de la collecte et du traitement des données personnelles à d'autres fins que l'appréciation de leur situation financière et de leurs facultés de remboursement. 160 De lege ferenda, le principe de transmission à cause de mort de l'obligation de garantir devrait être rappelé au sein du corps de règles propres aux garants personnes physiques. Le nouveau texte devrait préciser si les successeurs recueillent uniquement l'obligation de régler les dettes déjà nées au moment du décès du garant (v. supra n° 49) ou également l'obligation de couvrir les dettes postérieures. Ce sont, d'autre part, les règles ayant pour finalité de protéger le garant lui-même et sa famille contre un endettement excessif, qui devraient profiter à tous les garants personnes physiques, quelles que soient la nature de la sûreté et de la dette principale, la cause de l'engagement de garantir et la qualité du créancier. Plusieurs règles bénéficiant actuellement aux seules cautions mériteraient ainsi d'être étendues à tous les garants personnes physiques. Tel est le cas de l'article 1415 du Code civil 161 , de la règle dite du "reste à vivre" 162 , de toutes les mesures de protection énoncées par le droit du surendettement 163 et de l'exigence de proportionnalité entre le montant de la garantie et le patrimoine du garant 164 , si la proposition d'inscrire cette règle dans le régime primaire des sûretés personnelles n'était pas retenue 165 . En outre, afin de prévenir le surendettement des particuliers, que peut engendrer un cumul de garanties ruineux, il est souhaitable qu'un fichier d'endettement de type positif voie enfin le jour 166 et qu'il tienne compte des sûretés personnelles souscrites par les personnes physiques 167 . Toutes les règles propres aux garants personnes physiques, dont nous venons de donner des exemples, devraient être indifférentes à la cause de l'engagement de garantir. Le champ d'autres règles spéciales devrait à l'inverse être circonscrit sur le fondement de la cause de cet engagement. Illustration des règles propres aux cautions ne s'engageant pas à des fins professionnelles 81. De lege lata, un seul texte, au sein du droit commun des contrats et non des règles spéciales du cautionnement, s'attache à la cause de l'engagement du garant. Il s'agit de l'article 1108-2 du Code civil 168 , qui écarte la forme électronique à l'égard des mentions requises à peine de nullité, si l'acte sous seing privé relatif à la sûreté personnelle n'est pas passé pour les besoins de la profession du garant. 82. De lege ferenda, même si la notion de cause devait ne plus figurer dans le droit commun des contrats 169 , les raisons et les buts des engagements devraient continuer d'être pris en compte, tant pour définir la qualité de certains contractants 170 , que pour délimiter le champ d'application de certains mécanismes 171 . C'est la raison pour laquelle il nous semble que toutes les règles visant la protection du consentement lors de la conclusion de la sûreté, ainsi que toutes celles ayant pour objectif d'informer 161 V. supra n° 49. D'autres règles protectrices de la famille du garant couvrent déjà l'ensemble des sûretés personnelles. Il s'agit de la règle de subsidiarité de l'article L. 313-21 du Code monétaire et financier (v. supra n° 8), de la décharge de l'exconjoint d'un entrepreneur (C. civ., art. 1387-1 ; v. supra n° 10) et de la décharge des héritiers prévue par l'article 786 du Code civil (v. 
supra n° 49). 162 C. civ., art. 2301, al. 2. V. supra n° 50. 163 V. supra n° 50 et 51. 164 Il s'agirait de modifier le champ de la règle figurant dans l'article L. 341-4 du Code de la consommation et de condamner la jurisprudence qui, en dehors de ce texte, refuse de sanctionner les créanciers non professionnels ayant fait souscrire un engagement excessif (Com. 13 nov. 2007, Bull. civ. IV, n o 236). 165 Sur cette proposition, v. supra n° 76. 166 La création d'un registre national des crédits aux particuliers a été censurée par le Conseil constitutionnel, au motif que ce fichier portait une atteinte au droit au respect de la vie privée qui ne pouvait être regardée comme proportionnée au but poursuivi, en l'occurrence la lutte contre le surendettement (Cons. const., 13 mars 2014, n° 2014-690 DC). 167 La publicité des sûretés personnelles souscrites par des personnes physiques présenterait des avantages, aussi bien pour les garants (elle limiterait le risque d'endettement excessif en évitant des cumuls de garanties ruineux), que pour les créanciers (la consultation du fichier d'endettement augmenterait leurs chances de paiement, car les garanties seraient certainement plus adaptées aux capacités patrimoniales du garant, ce qui faciliterait l'exécution de l'obligation de règlement et limiterait les risques d'extinction totale ou partielle de la sûreté pour cause de disproportion). De sérieux inconvénients lui sont cependant opposés : la rigidité et l'augmentation des coûts de constitution de la sûreté personnelle ; l'inefficacité procédant de la sanction du défaut de publicité ; le caractère illusoire des bénéfices attendus de la publicité des sûretés personnelles, insusceptible de refléter l'endettement réel des garants. 168 Issu de la loi n° 2004-575 du 21 juin 2004 pour la confiance dans l'économie numérique. 169 A l'heure où nous écrivons ces lignes, la suppression de la cause, en tant que condition de validité des contrats, n'est pas encore certaine, puisque la réforme du droit des obligations est attendue pour le mois de février 2016 (en vertu de la loi d'habilitation n° 2015-177 du 16 février 2015 relative à la modernisation et à la simplification du droit et des procédures dans les domaines de la justice et des affaires intérieures). La disparition de la cause est toutefois fort probable au vu du projet d'ordonnance en date du 25 février 2015. 170 V. en ce sens l'article inscrit en tête du Code de la consommation : "Au sens du présent code, est considérée comme un consommateur toute personne physique qui agit à des fins qui n'entrent pas dans le cadre de son activité commerciale, industrielle, artisanale ou libérale". 171 En ce sens, v. C. civ., art. 2422, al. 1er, issu de la loi n° 2014-1545 du 20 décembre 2014 sur la simplification de la vie des entreprises : "L'hypothèque constituée à des fins professionnelles par une personne physique ou morale peut être ultérieurement affectée à la garantie de créances professionnelles autres que celles mentionnées dans l'acte constitutif pourvu que celui-ci le prévoie expressément". la caution sur son engagement et sur la dette principale au cours de la vie de la sûreté, devraient être réservées aux cautions qui ne s'engagent pas à des fins professionnelles. Ainsi, dans l'optique de supprimer le risque de méconnaissance des spécificités des sûretés personnelles indépendantes, en particulier l'inopposabilité des exceptions, la réforme pourrait-elle interdire leur souscription à des fins non professionnelles 172 . 
En vue de limiter le risque d'ignorance de l'étendue du cautionnement et de l'ampleur des dettes couvertes, les règles en vigueur à finalité informative devraient voir leur champ limité aux cautionnements conclus à des fins non professionnelles. Nous envisageons ici le formalisme informatif lors de la conclusion du contrat, par le biais des mentions manuscrites portant sur le montant, la durée et, le cas échéant, le caractère solidaire du cautionnement 173 . Nous songeons également à l'information annuelle sur l'encours de la dette principale et la durée du cautionnement 174 et à l'information sur la défaillance du débiteur principal 175 . Chacune de ces règles devrait être énoncée par un texte unique se substituant aux multiples dispositions qui se superposent aujourd'hui. La sécurité juridique s'en trouverait renforcée. 82. L'accessibilité du droit du cautionnement serait également améliorée si les nouvelles règles propres aux cautions ne s'engageant pas à des fins professionnelles étaient inscrites dans le Code civil. Bien que ces cautions s'apparentent à des consommateurs, les règles particulières les concernant ne devraient pas figurer dans le Code de la consommation, mais bien dans le Code civil, et ce, pour deux raisons essentielles. D'une part, le champ des règles particulières que nous proposons de fonder sur la cause de l'engagement de garantie ne correspond pas exactement à celui du Code de la consommation. Celui-ci limite en effet la qualité de consommateur aux personnes physiques, alors que des personnes morales pourraient être qualifiées de cautions n'agissant pas à des fins professionnelles (telles des sociétés civiles de moyens, des associations ou encore des communes). De plus, le Code de la consommation s'intéresse le plus souvent au binôme consommateur/professionnel, alors que la qualité du créancier nous paraît indifférente lorsqu'il s'agit de protéger ces cautions. D'autre part, le Code civil semble le creuset idéal des règles propres aux cautions s'engageant à des fins non professionnelles 176 , non seulement parce que l'engagement de ces cautions constitue le prolongement du cautionnement "service d'ami", qui fait figure de principe depuis le Code Napoléon, mais surtout parce que le Code civil doit redevenir le siège des règles de droit commun pour que l'accessibilité matérielle et l'intelligibilité du droit du cautionnement soient restaurées. Dans le chapitre du Code civil consacré au cautionnement, il serait donc opportun de regrouper les règles particulières aux cautions ne s'engageant pas à des fins professionnelles dans une nouvelle section. 83. Celle-ci s'achèverait par un article déclarant les règles énoncées en son sein inapplicables, en principe, aux cautions s'engageant à des fins professionnelles. Mais, si les cautions personnes physiques dirigeants ou associés ou les cautions personnes morales appartenant au même groupe que le débiteur principal parvenaient à faire la preuve de circonstances exceptionnelles les ayant empêchées de connaître la situation financière du débiteur et/ou les spécificités de leur engagement 177 , elles pourraient rechercher la responsabilité du créancier ne les ayant pas informées, sur le fondement de la bonne foi contractuelle. 84. 
Ces dernières propositions, comme toutes celles présentées plus haut intéressant les règles spéciales du cautionnement ou le régime primaire des sûretés personnelles, montrent que le renforcement de la sécurité juridique, la restauration de l'efficacité de ces sûretés, dans le respect des 172 Cette prohibition remplacerait celles concernant aujourd'hui la garantie autonome en matière de crédit à la consommation ou immobilier et de bail d'habitation (C. consom., art. L. 313-10-1 ; Loi du 6 juillet 1989, art. 22-1-1). V. supra n° 28. 173 C. consom., art. L. 313-7, L. 313-8, L. 341-2 et L. 341-3. V. supra n° 31 et 41 . 174 C. mon. fin., art. L. 313-22 ;Loi du 11 février 1994, art. 47-II, al. 2 ;C. civ., art. 2293 ;C. consom., art. L. 341-6. V. supra n° 9, 46 et 50. 175 C. consom., art. L. 313-9 ;Loi du 11 février 1994, art. 47-II, al. 3 ;C. consom., art. L. 341-1. V. supra n° 9, 32 et 41. 176 En revanche, les règles spéciales principalement fondées sur la nature de la dette principale devraient rester en dehors du Code civil. Par exemple, la remise aux cautions personnes physiques des offres de crédit à la consommation ou immobilier, ainsi que le délai de réflexion précédant la conclusion de ce dernier, devraient demeurer dans le Code de la consommation. 177 La jurisprudence rendue en matière de preuve, de réticence dolosive ou d'octroi abusif de crédit fournit des exemples de circonstances particulières dans lesquelles les dirigeants cautions sont exceptionnellement autorisés à se prévaloir de ces moyens de défense : nouveau dirigeant encore inexpérimenté, caution âgée et malade dont les fonctions directoriales sont purement théoriques, dirigeant de complaisance (v. not. Com. 6 déc. 1994, Bull. civ. IV, n o 364). intérêts légitimes des garants, nécessitent une réforme en profondeur du droit français des sûretés personnelles. ou des seules cautions personnes physiques 74 . c. Limitations 63 C. consom., art. L. 313-10, issu de la loi n° 89-1010 du 31 décembre 1989. 64 C. consom., art. L. 311-11, al. 1er, et L. 312-7. 65 C. consom., art. L. 312-10. 66 Loi du 6 juillet 1989, art. 22-1. 67 Les termes mêmes de la mention ne sont pas imposés par l'article 22-1 de loi du 6 juillet 1989. Ils le sont, au contraire, par le Code de la consommation (art. L. 313-7 et L. 313-8), qui admet "uniquement" les mentions qu'il édicte. 68 C. civ., art. 1317-1 et Loi du 31 décembre 1971, art. 66-3-3, issus de la loi n° 2011-331 du 28 mars 2011 de modernisation des professions judiciaires et juridiques. 69 C. civ., art. 1108-1 et 1108-2. 70 C. consom., art. L. 313-9, qui vise "le premier incident de paiement caractérisé susceptible d'inscription au fichier institué à l'article L. 333-4". Com. 17 juill. 1978, Bull. civ. IV, n° 200 ; Com. 6 déc. 1988, Bull. civ. IV, n° 334. Com. 8 nov. 1972, Bull. civ. IV, n° 278 (en matière de cautionnement) ; Com. 19 avr. 2005, Bull. civ. IV, n° 91 et Com. 3 juin 2014, inédit, n° 13-17643 (en matière de garantie autonome). Selon ce texte, celui qui s'engage unilatéralement à payer une somme d'argent doit en indiquer le montant, en chiffres et en lettres, pour que la preuve de cet engagement soit parfaite. L'ordonnance du 23 mars 2006 a interdit la couverture par une garantie autonome des crédits mobiliers et immobiliers de consommation57 , ainsi que des loyers d'un bail d'habitation58 . 
Bien que la prohibition soit formulée en termes généraux, elle vise à protéger spécialement les personnes physiques s'engageant dans un cadre non professionnel contre les dangers inhérents à l'indépendance de la garantie autonome et ceux liés à l'absence de réglementation détaillée de cette sûreté. En matière de bail d'habitation, d'autres interdictions concernent le cautionnement. En effet, d'une part, le bailleur, quelle que soit sa qualité, ne saurait le cumuler avec une assurance couvrant les obligations locatives, ni avec toute autre forme de garantie souscrite par le bailleur (dépôt de garantie mis à part), "sauf en cas de logement loué à un étudiant ou un apprenti"[START_REF]de mobilisation pour le logement et la lutte contre l'exclusion, puis par la loi n° 2009-1437 du 24[END_REF] . La violation de cette règle de non-cumul est sanctionnée par la nullité du cautionnement 60 , l'assurance demeurant au contraire valable. D'autre part, si le bailleur est une personne morale[START_REF]Publique comme privée, à la seule exception d'une "société civile constituée exclusivement entre parents et alliés jusqu'au quatrième degré inclus[END_REF] , le cautionnement ne peut être conclu qu'avec des "organismes dont la liste est fixée par décret en Conseil d'État" 62 , sauf si le locataire est "un étudiant ne bénéficiant pas d'une bourse de l'enseignement supérieur". Il convient de souligner que ces restrictions ont moins été inspirées par la volonté de protéger les cautions, proches des locataires, que par l'impératif de lutte contre l'exclusion des personnes qui, ne pouvant proposer une56 En particulier, le formalisme probatoire de l'article 1326 du Code civil (la qualité de caution non intéressée dans l'opération principale ne saurait alors suffire à compléter la mention défaillante), et les bénéfices de discussion et de division, sauf clause expresse de renonciation ou de solidarité. C. consom., art. L. 341-5, qui reprend les termes de l'article 47-II, al. 1er, de la loi du 11 février 1994 relative à l'initiative et à l'entreprise individuelle. C. consom., art. L. 331-7-1, 2°, L. 332-5 et L. 332-9. Civ. 1 re , 15 juill. 1999, Bull. civ. I, n o 248 ; Civ. 1 re , 28 mars 2000, Bull. civ. I, n o 107. Non seulement celles qui viennent d'être décrites, mais aussi, le cas échéant, celles qui profitent plus spécialement aux cautions personnes physiques engagées envers un créancier professionnel (v. supra n° 41). Compte tenu des impératifs sociaux qui gouvernent les procédures de surendettement, tous les garants personnes physiques devraient y avoir accès, bien que l'article L. 330-1 du Code de la consommation envisage le seul cautionnement et que l'hypothèse d'un garant surendetté autre qu'une caution soit certainement rare en pratique (en raison des prohibitions dont fait l'objet la garantie autonome -v. supra n° 28 -et de la rareté des lettres d'intention émises par des personnes physiques). Il s'agit, pour l'essentiel, de l'interdiction des procédures d'exécution, de l'interdiction du paiement des dettes antérieures, de l'aménagement du montant et de la durée des dettes et de l'effacement total des dettes en cas de rétablissement personnel avec ou sans liquidation judiciaire. V. supra n° 46. Comme la pluralité de cautionnements garantissant la même dette ou un cumul de sûretés personnelles et réelles. 
Cautionnements fournis par des organismes habilités à cette fin (cautionnements mutuels, bancaires) ; sûretés personnelles dont le régime est plus souple que celui du cautionnement (garantie autonome et lettre d'intention) ; garanties personnelles fondées sur des mécanismes du droit des obligations (telles la solidarité sans intéressement à la dette, la délégation imparfaite et la promesse de porte fort) ; assurances. Pourtant, dans le premier projet de loi d'habilitation en date du 14 avril 2005 (projet de loi n° 2249 pour la confiance et la modernisation de l'économie), étaient inscrites la "refonte" du cautionnement, la modification des dispositions du droit des obligations relatives à des mécanismes pouvant servir de garanties personnelles et encore l'introduction dans le Code civil de règles sur la garantie autonome et la lettre d'intention. Les parlementaires ont finalement écarté une réforme d'une telle ampleur, car ils ont considéré inopportun, d'un point de vue démocratique, de recourir à la technique de l'ordonnance à l'égard de contrats jouant un rôle important dans la vie quotidienne des particuliers et susceptibles de provoquer leur surendettement (avis n° 2333 déposé à l'Assemblée nationale le 12 mai 2005 au nom de la commission des lois). V. supra n° 53 à 66. Au niveau national, plusieurs propositions de réforme ont été développées depuis 2005. V. not. le rapport du groupe de travail relatif à la réforme du droit des sûretés en date du 31 mars 2005 (http://www.justice.gouv.fr/publications-10047/rapports-thematiques-10049/reforme-du-droit-des-suretes-11940.html) ; M. Bourassin, L'efficacité des garanties personnelles, LGDJ, Paris, 2006 ; J.-D. Pellier, Essai d'une théorie des sûretés personnelles à la lumière de la notion d'obligation, LGDJ, Paris, 2012 ; F. Buy, "Recodifier le droit du cautionnement (à propos du Rapport sur la réforme du droit des sûretés)", RLDC juillet-août 2005, n°18, p. 27 ; M. Grimaldi, "Orientations générales de la réforme", Dr. et patr. 2005, n° 140, p. 50 ; D. Legeais, "Une symphonie inachevée", RDBF mai-juin 2005, p. 67 ; Ph. Simler, "Codifier ou recodifier le droit des sûretés personnelles ?", Livre duBicentenaire, Litec, Paris, 2004, p. 382 ; Ph. Simler, "Les sûretés personnelles", Dr. et patr. 2005, n° 140, p. 55. Il existe également des réflexions doctrinales en ce sens au niveau européen, dans le cadre du Projet de cadre commun de référence(Sellier, Munich, 2009). Selon l'un de ses auteurs (U. Drobnig, "Traits fondamentaux d'un régime européen des sûretés personnelles", Mélanges Ph. Simler,Dalloz-Litec, Paris, 2006, p. 315), l'objectif a été de présenter une sorte de dénominateur commun, à l'image des Restatements of the Law élaborés aux États-Unis. 137 Sur cette notion et son étude de lege lata et de lege ferenda, v. notre thèse : L'efficacité des garanties personnelles,LGDJ, Paris, 2006. Dans la reconstruction suggérée, aucune sûreté personnelle n'est rendue obligatoire ou n'est interdite de manière générale. Les créanciers resteraient libres de choisir la garantie qui leur semble la plus appropriée pour protéger leurs intérêts. Ils pourraient notamment toujours opter en faveur d'une sûreté indépendante, à condition de la faire souscrire par un garant professionnel ou intégré dans l'entreprise débitrice. 
Ils pourraient bénéficier d'un cautionnement non limité en montant et en durée, soit en s'adressant à des cautions qui s'engagent pour des raisons professionnelles, soit en le faisant souscrire par une caution agissant à des fins non professionnelles, mais en recourant alors à un notaire pour établir l'acte ou à un avocat pour le contresigner. Sur cette structure duale de l'obligation de garantir, en matière de cautionnement de dettes futures, v. Ch. Mouly, Les causes d'extinction du cautionnement, Litec,Paris, 1979. La distinction entre les sûretés personnelles et les garanties personnelles, reposant sur le caractère exclusif ou non de la fonction de garantie, est discutable dans l'optique d'une réforme, car elle contredit les principaux objectifs qui devraient animer celle-ci. D'une part, la sécurité juridique et la satisfaction des attentes des créanciers, puisque la qualification et le régime des garanties demeureraient incertains et sources de contentieux si seules les sûretés personnelles étaient visées, alors même qu'il importe peu aux créanciers d'être couverts par un mécanisme ayant une autre fonction que d'améliorer leurs chances de paiement. D'autre part, la sauvegarde des intérêts des garants, car les garanties personnelles peuvent se révéler 75. S'agissant des règles applicables à toutes les sûretés personnelles, ainsi définies et illustrées, elles devraient être dictées par ce qu'elles ont en commun et être indifférentes, à l'inverse, à ce qui est contingent dans chacune d'elles (à savoir, les caractéristiques de la dette principale, la nature accessoire ou indépendante de la garantie, la qualité des protagonistes et encore la cause de l'engagement du garant). Sur le fondement du caractère accessoire général des garanties, deux règles pourraient être consacrées. D'une part, le principe de transmission des accessoires avec la créance principale, énoncé par l'article 1692 du Code civil, pourrait être précisé à l'égard des sûretés personnelles, au sein du régime primaire. D'autre part, pourrait être mise à la charge des créanciers une obligation de restituer l'enrichissement procuré par la mise en oeuvre de la sûreté, c'est-à-dire les sommes excédant le montant des créances que la sûreté a pour fonction d'éteindre. Sur le fondement de l'obligation de couverture naissant dès la conclusion du contrat, pourraient être imposés, ad probationem 148 , l'établissement de celui-ci en deux exemplaires et la remise de l'un d'eux au garant 149 . En conséquence du paiement pour le compte d'autrui, des recours devraient être reconnus à tous les garants. Il s'agirait d'étendre ceux bénéficiant aujourd'hui aux cautions, c'est-à-dire un recours avant paiement et des recours en remboursement, personnel et subrogatoire. 76. D'autres dispositions du régime primaire devraient reposer sur le principe de bonne foi contractuelle. Sur ce fondement, deux règles du droit du cautionnement pourraient être étendues. D'abord, le bénéfice dit de subrogation de l'actuel article 2314 du Code civil 150 , puisque l'égoïsme du créancier qui fait perdre au garant des chances d'être remboursé par le débiteur constitue une déloyauté 151 , qui devrait être sanctionnée dans toutes les sûretés personnelles ouvrant au garant un recours subrogatoire. 
Ensuite, comme le principe de bonne foi commande à tous les contractants de faire preuve de tempérance 152 , l'exigence de proportionnalité entre le montant du cautionnement et les facultés financières de la caution personne physique contractant avec un créancier professionnel, inscrite dans l'article L. 341-4 du Code de la consommation, pourrait être généralisée par rapport aux garanties et aux parties 153 . Elle couvrirait alors l'ensemble des sûretés personnelles et s'appliquerait quelles que soient la qualité et les activités du créancier 154 et du garant. 77. Dans le régime primaire proposé, toutes les règles communes aux sûretés personnelles devraient être indifférentes aux spécificités relatives aux parties. En dehors du régime primaire, des règles particulières devraient toujours prendre en compte ces spécificités. Mais, à l'occasion de la réforme du droit des sûretés personnelles, le champ des règles spéciales devrait lui aussi être rationalisé. B/ Révision du champ des règles spéciales du cautionnement 78. Une fois justifiée cette révision (1), seront illustrées les règles particulières qui, de lege ferenda, pourraient être réservées aux garants personnes physiques (2) ou aux cautions ne s'engageant pas à des fins professionnelles (3). plus dangereuses que les sûretés personnelles (une comparaison entre la délégation imparfaite ou la promesse de porte fort et le cautionnement permet de s'en convaincre). 148 Le contrat de sûreté personnelle établi en un seul exemplaire conservé par le créancier serait privé de force probante, sauf commencement d'exécution ou défaut de contestation de son existence par le garant. Ces tempéraments sont déjà admis par la jurisprudence statuant en application de l'article 1325 du Code civil. 149 Cela éviterait que le garant n'oublie son engagement et ne s'abstienne dès lors de prendre des précautions pour l'honorer. Cela limiterait également le risque que les héritiers du garant n'ignorent l'obligation de leur auteur et ne soient déchargés sur le fondement de l'article 786 du Code civil. 150 "La caution est déchargée, lorsque la subrogation aux droits, hypothèques et privilèges du créancier, ne peut plus, par le fait de ce créancier, s'opérer en faveur de la caution. Toute clause contraire est réputée non écrite". 151 Com. 14 janv. 2014, inédit, n° 12-21389. 152 V. la jurisprudence relative aux cautionnements disproportionnés ne relevant pas des articles L. 313-10 ou L. 341-4 du Code de la consommation, qui sanctionne la faute commise par les créanciers "dans des circonstances exclusives de toute bonne foi" et notamment l'arrêt fondateur : Com. 17 juin 1997, Macron, Bull. civ. IV, n° 188. V. supra n° 17. 153 Une autre exigence de proportionnalité, celle imposée par l'article L. 650-1 du Code de commerce entre le montant de la garantie et le montant des concours consentis au débiteur principal, a déjà un champ d'application aussi général. 154 Aujourd'hui, seuls les créanciers professionnels sont visés par les articles L. 313-10 et L. 341-4 du Code de la consommation et, lorsque ces textes ne sont pas applicables, la Cour de cassation considère que les créanciers non professionnels ne commettent pas de faute en faisant souscrire à une caution un engagement prétendument excessif (Com. 13 nov. 2007, Bull. civ. IV, n o 236).
116,250
[ "750998" ]
[ "461303" ]
00148718
en
[ "phys" ]
2024/03/04 23:41:48
2006
https://hal.science/hal-00148718/file/COCIS_Oberdisse_sept2006_revised.pdf
Julian Oberdisse email: [email protected]

Adsorption and grafting on colloidal interfaces studied by scattering techniques - REVISED MANUSCRIPT

Keywords: Dynamic Light Scattering, Small Angle Neutron Scattering, Small Angle X-ray Scattering, Adsorption Isotherm, Polymer, Layer Profile, Surfactant Layer, PEO

Figures: 4

Introduction

Adsorption and grafting of polymers and surfactants from solution onto colloidal structures have a wide range of applications, from steric stabilisation to the design of nanostructured functional interfaces, many of which are used in industry (e.g., detergence). There are several techniques for the characterization of decorated interfaces. Scattering is without doubt among the most powerful methods, as it allows for a precise determination of the amount and structure of the adsorbed molecules without perturbing the sample. This review focuses on structure determination of adsorbed layers on colloidal interfaces by scattering techniques, namely Dynamic Light Scattering (DLS) and Small Angle Neutron and X-ray Scattering (SANS and SAXS, respectively). The important field of neutron and X-ray reflectivity is excluded, because it is covered by a review on adsorption of biomolecules on flat interfaces [START_REF] Lu | Current Opinion in Colloid and Interface Science[END_REF]. The colloidal domain in aqueous solutions includes particles and nanoparticles, (micro-)emulsions, and self-assembled structures like surfactant membranes, all typically in the one to one hundred nanometre range. Onto these objects, different molecules may adsorb and build layers, possibly with internal structure. We start with a review of studies concerning a model polymer, poly(ethylene oxide) (PEO), whose adsorption profile normal to the surface, Φ(z), has attracted much attention. We then extend the review to other biopolymers, polyelectrolytes, and polymer complexes, as well as to surfactant and self-assembled layers. Adsorption isotherm measurements are the natural starting point of all studies, and whenever they are feasible, they yield independent information to be compared with the scattering results. Apart from the detailed shape of the isotherm, they give the height and position of the adsorption plateau, and thus also how much material remains unadsorbed. The last point may be important for the data analysis, as these molecules also contribute to the scattering.

Analysis of scattering from decorated interfaces

Adsorbed (or grafted) layers on colloidal surfaces can be characterized quite directly by small-angle scattering. The equation describing small-angle scattering from isolated objects (for simplicity called 'particles') with adsorbed layers reads:

I(q) = \frac{N}{V}\,\bigl|A_p(q) + A_l(q)\bigr|^2 + I_{inc} = \frac{N}{V}\,\Bigl|\int_{V_p}\Delta\rho_p(r)\,e^{i q\cdot r}\,d^3r + \int_{V_l}\Delta\rho_l(r)\,e^{i q\cdot r}\,d^3r\Bigr|^2 + I_{inc}    (1)

where N/V is the number density of particles, and ∆ρ_p and ∆ρ_l are the contrasts of the particle and the layer in the solvent, respectively [*2,*3,*4]. The first integral, over the volume of the particles, gives the scattering amplitude A_p of the particles, whose intensity can be measured independently. The second integral, over the volume of the layer, gives the layer contribution A_l. The last term, I_inc, denotes the incoherent scattering background (particularly high in neutron scattering with proton-rich samples), which must be subtracted because it can dominate the layer contribution.
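To make eq. (1) concrete, the following minimal sketch evaluates it for the simplest geometry, a homogeneous sphere carrying a homogeneous adsorbed shell, in the dilute limit where the structure factor is unity. The function names, parameter values and contrasts are illustrative assumptions and are not taken from the studies discussed in this review; real layers are generally neither homogeneous nor monodisperse.

```python
import numpy as np

def sphere_amplitude(q, R):
    """Amplitude integral of a homogeneous unit-contrast sphere of radius R:
    int_V exp(i q.r) d3r = 3 V [sin(qR) - qR cos(qR)] / (qR)^3."""
    V = 4.0 / 3.0 * np.pi * R**3
    x = q * R
    return 3.0 * V * (np.sin(x) - x * np.cos(x)) / x**3

def core_shell_intensity(q, R_p, t, drho_p, drho_l, n_density, I_inc=0.0):
    """Eq. (1) for a sphere of radius R_p with a homogeneous shell of
    thickness t, in the dilute limit (structure factor set to one)."""
    A_p = drho_p * sphere_amplitude(q, R_p)                                   # particle term
    A_l = drho_l * (sphere_amplitude(q, R_p + t) - sphere_amplitude(q, R_p))  # shell term
    return n_density * np.abs(A_p + A_l)**2 + I_inc

# Illustrative parameters: 10 nm core, 3 nm layer, arbitrary contrast units.
q = np.logspace(-2, 0, 200)                                     # q in nm^-1
common = dict(R_p=10.0, t=3.0, n_density=1e-6, I_inc=0.0)
I_off      = core_shell_intensity(q, drho_p=3.0, drho_l=1.0, **common)  # off-contrast
I_layer    = core_shell_intensity(q, drho_p=0.0, drho_l=1.0, **common)  # layer contrast
I_particle = core_shell_intensity(q, drho_p=3.0, drho_l=0.0, **common)  # particle contrast
```

Setting drho_p or drho_l to zero in this sketch reproduces the "particle contrast" and "layer contrast" conditions discussed below.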
In eq. (1), finally, the structure factor describing particle-particle interactions is set to one; it needs to be reintroduced for studies of concentrated colloidal suspensions [*5,*6,7,8]. Small-angle scattering with neutrons or x-rays corresponds to different contrast conditions, which makes scattering powerful and versatile, applicable to all kinds of particle-layer combinations. The great strength of SANS is that isotopic substitution gives easy access to a wide range of contrast conditions. Eq. (1) illustrates the three possible cases. If ∆ρ_p = 0 ("on-contrast" or "layer contrast"), only the layer scattering is probed. Secondly, if ∆ρ_l = 0 ("particle contrast"), only the bare particle is seen, which is potentially useful to check that 'particles' (including droplets) are not modified by the adsorption process. Only in the last situation, where ∆ρ_p ≠ 0 and ∆ρ_l ≠ 0 ("off-contrast"), do both terms in eq. (1) contribute. This is important for polymer layers.

Before going into modelling, one may wish to know the quantity of adsorbed matter. For small enough particles, the limiting value I(q→0) in small-angle scattering gives direct access to this information. For homogeneous particles of volume V_p in particle contrast, we obtain I(q→0) = ∆ρ_p² Φ_p V_p, and equivalently for layer contrast I(q→0) = ∆ρ_l² Φ_l V_l, where we have introduced the volume fraction of the particles Φ_p = (N/V) V_p (and Φ_l = (N/V) V_l for the layer). Note that it is not important if the layer contains solvent: V_l is the "dry" volume of adsorbed material, if we set ∆ρ_l to its "dry" contrast. By measuring different contrast conditions and dividing the limiting zero-angle intensities, the adsorbed quantities can be determined regardless of structure factor influence and instrument calibration, as such contributions cancel in intensity ratios [*9,*10]; a short numerical illustration of this ratio analysis is given below.

The spatial extent of an adsorbed layer can be determined via the hydrodynamic radius of the particles measured by DLS with and without the adsorbed layer, the difference being the hydrodynamic layer thickness. However, DLS does not give any information on the amount of adsorbed matter. Alternatively, with SANS or SAXS, one can determine the particle radius, with and without adsorbed layer, as well as the adsorbed amount. If the contrasts of the particle and the adsorbed material are similar, the increase in particle radius can be directly translated into the layer thickness. If the contrasts are too different, the weighting (eq. (1)) of the two contributions needs to be taken into account, e.g. with core-shell models. The simplest ones are a special case of eq. (1), with constant contrast functions ∆ρ(r). For spherically symmetric particles and adsorbed layers, the model has only four parameters (radius and contrast of particle and layer), besides the particle concentration. The particle parameters can be determined independently, whereas the other two affect I(q) differently: an increase in layer thickness, e.g., shifts the scattering function to smaller q, whereas an increase in adsorbed amount (at fixed thickness) increases the intensity. Note that the average contrast of the layer and its thickness are convenient starting points for modelling (identification of monolayers or incomplete layers), while more elaborate core-shell models use decaying shell concentrations [*5,*11,*12]. The determination of the profile Φ(z) of adsorbed polymer chains, with the z-axis normal to the surface, needs a more involved data analysis [*2,*4]. There are two routes to Φ(z).
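As a concrete illustration of the zero-angle ratio analysis mentioned above, the following minimal sketch converts the two limiting intensities, measured in layer contrast and in particle contrast, into a dry adsorbed volume per particle and an adsorbed mass per unit area. The spherical geometry, the function name and all numerical inputs are invented placeholders, not data from the cited studies.

```python
import numpy as np

def adsorbed_amount(I0_layer, I0_particle, drho_l, drho_p, R_p, rho_dry):
    """Dry layer volume per particle and adsorbed mass per unit area from the
    two zero-angle intensities.

    With I(q->0) = drho^2 * Phi * V and Phi = (N/V) * V for both particle and
    layer, calibration and structure factor cancel in the ratio, and
        (V_l / V_p)^2 = (I0_layer / I0_particle) * (drho_p / drho_l)^2.
    """
    V_p = 4.0 / 3.0 * np.pi * R_p**3                  # particle volume
    V_l = V_p * np.sqrt((I0_layer / I0_particle) * (drho_p / drho_l)**2)
    area = 4.0 * np.pi * R_p**2                       # surface of one particle
    gamma = rho_dry * V_l / area                      # mass per unit surface
    return V_l, gamma

# Invented placeholder inputs: R_p in nm, contrasts in arbitrary (common) units,
# rho_dry in g/cm^3.  Note that 1 g/cm^3 * 1 nm corresponds to 1 mg/m^2.
V_l, gamma = adsorbed_amount(I0_layer=0.2, I0_particle=40.0,
                             drho_l=0.6, drho_p=4.0, R_p=10.0, rho_dry=1.1)
print(f"dry adsorbed volume per particle: {V_l:.0f} nm^3, "
      f"adsorbed amount: {gamma:.2f} mg/m^2")
```

We now return to the two routes to the profile Φ(z).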
The first one is based on a measurement in "layer contrast" (∆ρ p = 0). According to eq.( 1), with ∆ρ l ∝ Φ(z), this intensity is related to the square of the Fourier transform of Φ(z). One can then either test different profiles, or try to invert the relationship, which causes the usual problems related to data inversion (limited q-range, phase loss and limiting conditions …) [*2,*3,*4]. This route also gives a (usually small) second term in the layer-scattering, called the fluctuation term [13], which stems from deviations from the average profile. The second route is based on additional off-contrast measurements. Carrying out the square of the sum in eq.( 1) gives 3 terms, A p 2 + A l 2 + 2 A p A l . Subtracting the bare particle and pure layer term yields the cross-term with the layer contribution A l , this time without the square, which is easier to treat because the phase factor is not lost. Review of grafting and adsorption studies by small-angle scattering and DLS Structure of PEO-layers Many studies focussing on fundamental aspects deal with the model polymer PEO, as homopolymer or part of a block copolymer [14 -30]. 3.2 Structure of adsorbed and grafted layers, from polyelectrolytes to surfactants. Adsorption of polyelectrolytes, biomacromolecules, and polymer complexes. Adsorbed layers of many different macromolecules have been characterized by scattering [START_REF] Marshall | Small-angle neutron scattering of gelatin/sodium dodecyl sulfate complexes at the polystyrene/water interface[END_REF][START_REF] Dreiss | Formation of a supramolecular gel between α α α α-cyclodextrin and free and adsorbed PEO on the surface of colloidal silica: Effect of temperature, solvent, and particle size[END_REF][START_REF] Cárdenas | SANS study of the interactions among DNA, a cationic surfactant, and polystyrene latex particles[END_REF][START_REF] Lauten R A, Kjøniksen | Adsorption and desorption of unmodified and hydrophobically modified ethyl(hydroxyethyl)cellulose on polystyrene latex particles in the presence of ionic surfactants using dynamic light scattering[END_REF][START_REF] Rosenfeldt | The adsorption of Bovine Serum Albumin (BSA) and Bovine Pancreastic Ribonuclease A (RNase A) on strong and weak polyelectrolytes grafted onto latex particles is measured by SAXS. The scattered intensity is modelled by a geometrical model of beads on the surface of the latex[END_REF][START_REF] Borget | Interactions of hairy latex particles with cationic copolymers[END_REF][START_REF] Estrela-Lopis | SANS studies of polyelectrolyte multilayers on colloidal templates[END_REF][START_REF] Rusu | Adsorption of novel thermosensitive graft-copolymers: Core-shell particles prepared by polyelectrolyte multiplayer selfassembly[END_REF][START_REF] Okubo | Alternate multi-layered adsorption of macro-cations and -anions on the colloidal spheres. Influence of the deionisation of the complexation mixtures with coexistence of the ion-exchange resins[END_REF]. In these studies, the focus shifts from the more 'conceptual' interest in PEO-layers to specific substrate-molecule interactions. The profile of gelatin layers adsorbed on contrast matched PS-particles was shown to be well-described by an exponential by Marshall el al [START_REF] Marshall | Small-angle neutron scattering of gelatin/sodium dodecyl sulfate complexes at the polystyrene/water interface[END_REF]. Addition of equally contrast matched ionic surfactant (SDS) induces layer swelling, and finally gelatin desorption. 
Dreiss et al have shown that the α-cyclodextrin threads on adsorbed PEO chains (pseudopolyrotaxanes), modifying their configuration [START_REF] Dreiss | Formation of a supramolecular gel between α α α α-cyclodextrin and free and adsorbed PEO on the surface of colloidal silica: Effect of temperature, solvent, and particle size[END_REF]. Cárdenas et al have characterized DNA-coated contrast-matched PS-particles by SANS, and present evidence for layer compaction upon addition of cationic surfactant [START_REF] Cárdenas | SANS study of the interactions among DNA, a cationic surfactant, and polystyrene latex particles[END_REF]. Addition of ionic surfactants has been shown to lead to the desorption of ethyl(hydroxyethyl)cellulose from PS-latex, which can be followed by DLS [START_REF] Lauten R A, Kjøniksen | Adsorption and desorption of unmodified and hydrophobically modified ethyl(hydroxyethyl)cellulose on polystyrene latex particles in the presence of ionic surfactants using dynamic light scattering[END_REF]. Scattering studies have been crucial for polyelectrolyte layers. The adsorption of small proteins (BSA) onto spherical polyelectrolyte brushes was measured by SAXS by Rosenfeldt et al [**35]. Using DLS, the thickness of adsorbed cationic copolymer on latex particles was studied by Borget et al [START_REF] Borget | Interactions of hairy latex particles with cationic copolymers[END_REF]. Finally, polyelectrolyte multilayers have been characterized on contrast-matched PS using core-shell models, and by DLS on silica [**37, [START_REF] Rusu | Adsorption of novel thermosensitive graft-copolymers: Core-shell particles prepared by polyelectrolyte multiplayer selfassembly[END_REF][START_REF] Okubo | Alternate multi-layered adsorption of macro-cations and -anions on the colloidal spheres. Influence of the deionisation of the complexation mixtures with coexistence of the ion-exchange resins[END_REF]. In all of these studies, unperturbed structural characterizations in the solvent were made possible by scattering. There is a great amount of literature by synthesis groups in grafting of polymer chains onto or from colloidal surfaces. These groups often use DLS to characterize layer extensions [START_REF] Inoubli | Graft from' polymerization on colloidal silica particles: elaboration of alkoxyamine grafted surface in situ trapping of carbon radicals[END_REF][START_REF] Qi | Preparation of acrylate polymer/silica nanocomposite particles with high silica encapsulation efficiency via miniemulsion polymerisation[END_REF], with Adsorption of surfactant layers and supramolecular aggregates. The adsorption of ionic and non-ionic surfactants to hydrocarbon emulsion droplets is of evident industrial importance. In this case, the scattered intensity can be described by a coreshell model [*10], which was also used by Bumajdad et al to study the partitioning of C12Ej (j=3 to 8) in DDAB layers in water-in-oil emulsion droplets [START_REF] Bumajdad | Compositions of mixed surfactant layers in microemulsions determined by small-angle neutron scattering[END_REF]. On colloids, the thickness of an adsorbed layer of C 12 E 5 on laponite has been measured by Grillo et al by SANS using a core-shell model, and evidence for incomplete layer formation was found [START_REF] Grillo | structural determination of a nonionic surfactant layer adsorbed on clay particles[END_REF]. 
On silica particles, a contrast-variation study of adsorbed non-ionic surfactant has been performed, and the scattering data modelled with micelle-decorated silica [*9,55], a structure already seen by Cummins et al [START_REF] Cummins | Temperature-dependence of the adsorption of hexaethylene glycol monododecyl ether on silica sols[END_REF]. Pores offer the possibility to study adsorption at interfaces with curvatures comparable but opposite in sign to colloids. Porous solids are not colloidal, but adsorption inside pores can be analysed using small angle scattering. Vangeyte et al [*57] study adsorption of poly(ethylene oxide)-b-poly(ε-caprolactone) copolymers at the silica-water interface. They succeed in explaining their SANS-data with an elaborate model for adsorbed micelles similar to bulk micelles, cf. Fig. 3, and the result in q 2 I representation is shown in Fig. 4. In the more complex system with added SDS the peak disappears and a core-shell model becomes more appropriate, indicating de-aggregation [START_REF] Vangeyte | Concomitant adsorption of poly(ethylene oxide)-b-poly(ε ε ε ε-caprolactone) copolymers and sodium dodecyl sulfate at the silica-water interface[END_REF]. Conclusion Recent advances in the study of adsorption on colloidal interfaces have been reviewed. On the one hand, DLS is routinely used to characterize layer thickness, with a noticeable sensitivity to long tails due to their influence on hydrodynamics. On the other hand, SANS and SAXS give information on mass, and mass distribution, with a higher sensitivity to the denser regions. Small-angle scattering being a 'mature' discipline, it appears that major progress has been made by using it to resolve fundamental questions, namely concerning the layer profile of model polymers. In parallel, a very vivid community of researchers makes intensive use of DLS and static scattering to characterize and follow the growth of layers of increasing complexity. contrast variation and concentration dependence measurements, J Chem Phys 2006, 125: 044715 A contrast-variation study of the scattering of silica spheres with a hydrophobic layer in an organic solvent is presented. The intensity is described by a core-shell model combined with a structure factor for adhesive particles, which fits all contrast situations simultaneously. The structure of the adsorbed layers of PEO (10k to 634 k) on polystyrene is studied by onand off-contrast SANS. Different theoretical profiles are reviewed and used to describe the layer scattering. This includes the weak fluctuation term, which is proportional to q -4/3 and decays more slowly than the average layer contribution q -2 . It is therefore more important (but nonetheless very small) at high q. [16] Flood C, Cosgrove T, The volume profiles of three copolymers (F68, F127, tetronic/poloxamine 908) adsorbed onto emulsion droplets are determined by SANS, using two methods, inversion and fitting, to Φ(z). Considerable similarity in the adsorbed layer structure is found for hydrocarbon and fluorocarbon emulsions. Especially SANS has lead to a very detailed description of the structure of PEO-layers, deepening our understanding of polymer brushes, and their interaction, e.g. in colloidal stabilization. Hone et al have measured the properties of an adsorbed layer of PEO on poly(styrene)-latex (PS) [*14]. 
They have performed on-and off-contrast SANS experiments in order to determine the (exponential) profile Φ(z) and the weak fluctuation term, the determination of which requires a proper treatment of smearing and polydispersity. They have revisited the calculation by Auvray and de Gennes [13], and propose a one parameter description of the fluctuation term. Marshall et al have extended the adsorbed layer study of PEO on PS-latex to different molecular weights, and compare exponential, scaling-theory based and Scheutjens-Fleer self-consistent mean field theory, including the fluctuation term [**15]. Recently, the effect of electrolytes on PEO layers on silica was also investigated by DLS [16]. Concerning copolymers, Seelenmeyer and Ballauff investigate the adsorption of non-ionic C 18 E 112 onto PS latex particles by SAXS [*17]. They used exponential and parabolic density profiles for PEO to fit the data. The adsorption of a similar non-ionic surfactant (C 12 E 24 ) onto hydrophobized silica in water was studied by SANS, employing a two layer model describing the hydrophobic and hydrophilic layers [18]. On hydrocarbon and fluorocarbon emulsion droplets, layers of two triblock copolymers (pluronics F68 and F127) and a star-like molecule (Poloxamine 908) have been adsorbed by King et al [*19]. They found surprisingly similar exponentially decaying profiles in all cases, cf. Fig. 1, which also serves as illustration for two ways to determine Φ(z) in "layer contrast", inversion and fitting, as discussed in section 2. The SANS-study of Washington et al deals with small diblock copolymers adsorbed on perfluorocarbon emulsion droplets [20]. A clear temperature-dependence of Φ(z) was found, but the best-fitting profile type depends on the (low) molecular weight. Diblock copolymer layers adsorbed onto water droplets have been characterized by DLS by Omarjee et al [*21]. Frielinghaus et al determine the partial structure factors of diblock copolymers [*22,23] used for boosting of microemulsions [24]. Adsorption on carbon black has been studied by comparison of DLS and contrast-variation SANS [*25,26,27]. The adsorbed layer of both F127 and a rake-type siloxane-PPO-PEO copolymer was found to be a monolayer at low coverage, and adsorbed micelles at high coverage. On magnetic particles, Moeser et al have followed the water decrease in a PPO-PEO shell by SANS and theory as the PPO content increases [*11]. Concerning the adsorption of PEO and tri-block copolymers on non-spherical particles, Nelson and Cosgrove have performed SANS and DLS studies with anisotropic clay particles [*28,29,30]. Unusually thin layers are found for PEO, and a stronger adsorption of the pluronics. Studies of adsorbed PEO-layers at higher colloid concentrations have been published by several groups [*5,7,8]. Zackrisson et al have studied PEO-layers grafted to PS-particles (used for studies of glassy dynamics) by SANS with contrast variation, using a stretched-chain polymer profile [*5]. In Fig. 2, they compare their form factor measurements to a model prediction, at different solvent compositions, and nice fits are obtained. Along a very different approach, Qi et al match the PEO layer but follow its influence via the interparticle structure factor [7,8]. convincing plots of the growing hydrodynamic thickness during polymerisation [**42], or as a function of external stimuli [*12, 43-47]. In static scattering, El Harrak et al use SANS [*48, 49, 50], and Yang et al DLS and static light scattering with a core-shell model [*12]. 
Concerning the structure of grafted layers, the Pedersen model must be mentioned [**51]. Shah et al use polarized and depolarised light scattering to investigate PMMA layers grafted onto Montmorillonite clay [52]. In a concentration study, Kohlbrecher et al fit contrastvariation SANS intensities of coated silica spheres in toluene with a core-shell model and an adhesive polydisperse structure factor model [*6]. [ 7 ] 7 Qiu D, Cosgrove T, Howe A: Small-angle neutron scattering study of concentrated colloidal dispersions: The electrostatic/steric composite interactions between colloidal particles, Langmuir 2006, 22:6060-6067 [8] Qiu D, Dreiss CA, Cosgrove T, Howe A: Small-angle neutron scattering study of concentrated colloidal dispersions: The interparticle interactions between sterically stabilized particles, Langmuir 2005,21:9964-9969 [*9] Despert G, Oberdisse J: Formation of micelle-decorated colloidal silica by adsorption of nonionic surfactant, Langmuir 2003, 19, 7604-7610 The adsorption of a non-ionic surfactant (TX-100) on colloidal silica is studied by SANS, using solvent contrast variation. The adsorbed layer is described by a model of impenetrable micelles attached to the silica bead. [*10] Staples E, Penfold J, Tucker I: Adsorption of mixed surfactants at the oil/water interface, J Phys Chem B 2000, 104: 606-614 The adsorption of mixtures of SDS and C 12 E 6 onto hexadecane droplets in water is studied by SANS. A core-shell model is used to describe the form factor of the emulsion droplets, and coexisting micelles are modelled as interacting core-shell particles. A model-independent analysis using I(q→0) is used to extract information on layer composition. The results are shown to disagree with straightforward regular solution theory. [*11] Moeser GD, Green WH, Laibinis PE, Linse P, Hatton TA: Structure of polymerstabilized magnetic fluids: small-angle neutrons scattering and mean-field lattice modelling, Langmuir 2004, 20:5223-5234 The layer of PAA with grafted PPO and PEO blocks bound to magnetic nanoparticles is studied by SANS. Core-shell modelling including the magnetic scattering of the core is used to determine the layer density and thickness, for different PPO/PEO ratios. [*12] Yang C, Kizhakkedathu JN, Brooks DE, Jin F, WU C: Laser-light scattering study of internal motions of polymer chains grafted on spherical latex particles, J Phys Chem B 2004, 108:18479-18484 Temperature-dependent Poly(NIPAM) chains grown from relatively big poly(styrene) latex ('grafting from') are studied by static and dynamic light scattering. Near the thetatemperature, the hairy-latex is described by a core-shell model with a r -1 polymer density in the brush. The time correlation function reveals interesting dynamics at small scales, presumably due to internal motions. [13] Auvray G, de Gennes PG, Neutron scattering by adsorbed polymer layers, Europhys Lett 1986, 2 :647-650 [*14] Hone J H E, Cosgrove T, Saphiannikova M, Obey T M, Marshall J C, Crowley T L: Structure of physically adsorbed polymer layers measured by small-angle neutron scattering using contrast variation methods, Langmuir 2002, 18: 855-864 Combined on-and off-contrast SANS experiments in order to determine the (exponential) profile Φ(z) and the weak fluctuation term of PEO-layers on polystyrene latex. The fluctuation term is obtained by subtraction of layer intensities obtained via the two routes discussed in the text. 
[**15] Marshall J C, Cosgrove T, Leermakers F, Obey T M, Dreiss C A: Detailed modelling of the volume fraction profile of adsorbed layers using small-angle neutron scattering, Langmuir 2004, 20: 4480-4488
Howell I, Revell P: Effect of electrolyte on adsorbed polymer layers: poly(ethylene oxide)-silica system, Langmuir 2006, 22: 6923-6930
[*17] Seelenmeyer S, Ballauff M: Analysis of surfactants adsorbed onto the surface of latex particles by small-angle x-ray scattering, Langmuir 2000, 16: 4094-4099
The layer structure of hydrophilic PEO attached onto latex by hydrophobic stickers is studied. PS-latex is virtually matched by the solvent, and the intensity curves show nice side maxima, which shift to smaller q and rise in intensity as the layer scattering increases. Moments of the density profile are used to characterize the layer, and both an exponential and a parabolic density profile fit the data.
[18] Dale P J, Vincent B, Cosgrove T, Kijlstra J: Small-angle neutron scattering studies of an adsorbed non-ionic surfactant (C12E24) on hydrophobised silica particles in water, Langmuir 2005, 21: 12244-12249
[*19] King S, Washington C, Heenan R: Polyoxyalkylene block copolymers adsorbed in hydrocarbon and fluorocarbon oil-in-water emulsions, Phys Chem Chem Phys 2005, 7: 143-149
[20]
Figure Captions:
Figure 1 is Fig. 1 of ref. [*19].
Figure 2: Form factors measured for deuterated latex with grafted PEO-layers in 0.4M Na2CO3 at three different contrasts corresponding to 100:0, 91:9, and 85:15 (w/w) D2O/H2O. Lines are "simultaneous" fits (cf. [*5]) in which only the solvent scattering length density varies. Shown in the inset are accompanying scattering length density profiles. Reprinted with permission from ref. [*5], copyright 2005, American Chemical Society. Figure 2 is Fig. 7 of ref. [*5].
Figure 3a is Fig. 9 of ref. [*57]. Figure 3b is the upper graph of Fig. 10 in ref. [*57].
Figure 4: Fit of the micellar form factor for the core-rigid rods model to the SANS intensity for the PEO114-b-PCL19 copolymer at surface saturation in porous silica, see ref. [*57] for details. The representation of q²I(q) vs q enhances the layer scattering. Reprinted with permission from ref. [*57], copyright 2005, American Chemical Society.
Acknowledgements: Critical rereading and fruitful discussions with François Boué and Grégoire Porte are gratefully acknowledged.
27,032
[ "995273" ]
[ "737" ]
01487239
en
[ "spi", "shs" ]
2024/03/04 23:41:48
2015
https://minesparis-psl.hal.science/hal-01487239/file/IMHRC%202015.pdf
Benoit Montreuil Eric Ballot William Tremblay Modular Design of Physical Internet Transport, Handling and Packaging Containers Keywords: Physical Internet, Container, Encapsulation, Material Handling, Interconnected Logistics, Packaging, Transportation, Modularity This paper proposes a three-tier characterization of Physical Internet containers into transport, handling and packaging containers. It first provides an overview of goods encapsulation in the Physical Internet and of the generic characteristics of Physical Internet containers. Then it proceeds with an analysis of the current goods encapsulation practices. This leads to the introduction of the three tiers, with explicit description and analysis of containers of each tier. The paper provides a synthesis of the proposed transformation of goods encapsulation and highlights key research and innovation opportunities and challenges for both industry and academia. Introduction The Physical Internet has been introduced as a means to address the grand challenge of enabling an order-of-magnitude improvement in the efficiency and sustainability of logistics systems in their wide sense, encompassing the way physical objects are moved, stored, realized, supplied and used all around the world [START_REF] Montreuil | Towards a Physical Internet: Meeting the Global Logistics Sustainability Grand Challenge[END_REF][START_REF] Ballot | The Physical Internet : The Network of Logistics Networks[END_REF]. The Physical Internet (PI, π) has been formally defined as an open global logistics network, founded on physical, digital, and operational interconnectivity, through encapsulation, interfaces, and protocols (Montreuil et al. 2013a). Recent studies have assessed PI's huge potential over a wide industry and territory spectrum. Estimations permit to expect economic gains at least on the order of 30%, environmental gains on the order of 30 to 60 % in greenhouse gas emission, and social gains expressed notably through a reduction of trucker turnover rate on the order of 75% for road based transportation, coupled to lower prices and faster supply chains (Meller et al. 2012[START_REF] Sarraj | Interconnected logistics networks and protocols : simulation-based efficiency assessment[END_REF]). It has recently been highlighted in the US Material Handling and Logistics Roadmap as a key contribution towards shaping the future of logistics and material handling (Gue et al. 2013). This paper focuses on one of the key pillars of the Physical Internet: goods encapsulation in smart, world-standard, modular and designed-for-logistics containers (in short, π-containers). Previous research has introduced generic dimensional and functional specifications for the π-containers and made clear the need for them to come in various structural grades (Montreuil, 2009-2013[START_REF] Montreuil | Towards a Physical Internet: Meeting the Global Logistics Sustainability Grand Challenge[END_REF][START_REF] Montreuil | Towards a Physical Internet: the impact on logistics facilities and material handling systems design and innovation[END_REF]. The purpose of this paper is to address the need for further specifying the modular design of π-containers. Specifically, it proposes to generically characterize π-containers according to three modular tiers: transport containers, handling containers and packaging containers. The paper is structured as follows. It starts in section 2 with a brief review of the Physical Internet and its focus on containerized goods encapsulation. 
Then it proceeds with a review of the essence of current goods encapsulation, containers and unit loads in section 3. The paper introduces the proposed three-tier structural characterization of π-containers in section 4. Finally, conclusive remarks are offered in section 5. The Physical Internet and goods encapsulation The Digital Internet deals only with standard data packets. For example, an email to be sent must first have its content chunked into small data components that are each encapsulated into a set of data packets according to a universal format and protocol. These data packets are then routed across the digital networks to end up at their final destination where they are reconsolidated into a readable complete email. The Physical Internet intends to do it similarly with goods having to flow through it. Indeed the Physical Internet strictly deals with goods encapsulated in standard modular π-containers that are to be the material-equivalent to data packets. This extends the classical single-organization centric unit load standardization concepts [START_REF] Tompkins | Facilities planning[END_REF], the shipping container (ISO 1161(ISO -1984) ) and the wider encompassing modular transportation concepts introduced nearly twenty-five years ago [START_REF] Montreuil | Modular Transportation[END_REF] and investigated in projects such as Cargo2000 [START_REF] Hülsmann | Automatische Umschlag-anlagen für den kombinierten Ladungsverkehr[END_REF], extending and generalizing them to encompass all goods encapsulation in the Physical Internet. The uniquely identified π-containers intend to offer a private space in an openly interconnected logistics web, protecting and making anonymous, as needed, the encapsulated goods. Indeed, πcontainers from a multitude of shippers are to be transported by numerous certified transportation and logistics service providers across multiple modes. They are also to be handled and stored in numerous certified open logistics facilities, notably for consolidated transshipment and distributed deployment across territories. They are to be used from factories and fields all the way to retail stores and homes. Their exploitation getting momentum and eventually universal acceptance requires on one side for them to be well designed, engineered and realized, and on the other side for industry to ever better design, engineer and realize their products for easing their standardized modular encapsulation. Figure 1. Generic characteristics of Physical Internet containers From a dimensional perspective, π-containers are to come in modular cubic dimensions from that of current large cargo containers down to pallet sizes, cases and tinier boxes. Illustrative sets of dimensions include {12; 6; 4,8; 3,6; 2,4; 1,2} meters on the larger spectrum and {0,8; 0,6; 0,4; 0,3; 0,2; 0,1} or {0,64; 0,48; 0,36; 0,24; 0,12} meters on the smaller spectrum. The specific final set of dimensions have been left to be determined based on further research and experiments in industry, so that this set becomes a unique world standard acknowledged by the key stakeholders and embraced by industry. 
From a functional perspective, the fundamental intent is for π-containers to be designed and engineered so as to ease interconnected logistics operations, standardizing key functionalities while opening vast avenues for innovation [START_REF] Montreuil | Towards a Physical Internet: Meeting the Global Logistics Sustainability Grand Challenge[END_REF][START_REF] Montreuil | Towards a Physical Internet: the impact on logistics facilities and material handling systems design and innovation[END_REF]. Their most fundamental capability is to be able to protect their encapsulated objects, so they need to be robust and reliable in that regard. They must be easy to snap to equipment and structures, to interlock with each other, using standardized interfacing devices. They should be easy to load and unload fast and efficiently as needed. Their design must also facilitate their sealing and unsealing for security purposes, contamination avoidance purposes as well as, when needed, damp and leak proof capability purposes; their conditioning (e.g. temperature-controlled) as required; and their cleaning between usages as pertinent. As illustrated in Figure 2, they must allow composition into composite π-containers and decomposition back into sets of smaller π-containers. A composite container exists as a single entity in the Physical Internet and is handled, stored and transported as such until it is decomposed. Composition capabilities are subject to structural constraints. Figure 2 illustrates how such composition/decomposition can be achieved by exploiting the modularity of π-containers and standardized interlocking property. Even though not technically necessary, π-containers should be easy to panel with publicity and information supports for business marketing and transaction easing purposes as well as for user efficiency and safety purposes. Designed for interconnected logistics, π-containers are to be efficiently processed in automated as well as manual environments, without requiring pallets. From an intelligence perspective, they are to take advantage of being smart, localized and connected, and should be getting better at it as technology evolves. As a fundamental basis, they must be uniquely identifiable. They should exploit Internet-of-Things standards and technologies whenever accessible (e.g. Atzori et al. 2010). Using their identification and communications capabilities, π-containers are to be capable of signaling their position for traceability purposes and problematic conditions relative to their content or state (breakage, locking integrity, etc.), notably for security and safety purposes. The π-containers should also have state memory capabilities, notably for traceability and integrity insurance purposes. As technological innovations make it economically feasible, they should have autonomous reasoning capabilities. Thus, they are to be notably capable of interacting with devices, carriers, other π-containers, and virtual agents for routing purposes [START_REF] Montreuil | An Open Logistics Interconnection Model for the Physical Internet[END_REF]. From an eco-friendliness perspective, π-containers are to be as light and thin as possible to minimize their weight and volume burden on space usage and on energy consumption when handled and transported. They are to be efficiently reusable and/or recyclable; to have minimal offservice footprint, and to come in distinct structural grades well adapted to their range of purposes. 
The current state of goods encapsulation and unit load design In order to better comprehend the subsequently introduced characterization of π-containers, it is important to revise the current state of goods "encapsulation". In order to achieve this in a compact manner, this section exploits a multi-tier characterization of goods encapsulation that is depicted in Figure 3. At the first encapsulation tier, goods are packaged in boxes, bottles and bags as illustrated in Figure 3 for consumer goods. The packaging may be done in a single layer or several layers. When exploited, the package is usually the basic selling unit of goods to consumers and businesses. Packaging is subject to product design, mostly related to its dimensions, weight and fragility. Indeed the package must protect its contained product. This involves many compromises between the size of the packaging, regulations, its materials, as well as with the inclusion of protective filling materials and fixations. It often ends up with the actual product using a fraction of the package space. Packaging is also subject to marketing needs. This is hugely important in the retail industry as the package is often what the consumer sees and touches when deciding whether to purchase the product or not in retail stores. Hence packages get all kinds of prints, colors and images. Packages have become differentiating agents affecting sales. This is less the case in industrial and e-commerce settings. In industrial B2B contexts, the purchasing decision is mostly subject to pricing, functional, technical and delivery time specifications and assessments. With e-commerce, the purchasing decision is done facing a smartphone, tablet or computer, mostly based on images, videos, descriptions, expert rankings, word-of-mouth, peer-to-peer comments, promised delivery time as well as total price including taxes and delivery fees. The consumer sees the packaging only upon receiving the product at home or a e-drive, when he has already committed to buy it. Goods Logistics considerations such as ergonomic manual and/or automated handling have usually very limited impact on package design for specific goods. This is a world currently dominated by packaging design and engineering, product design and engineering, and marketing. As asserted by Meller et al. (2012), this leads to situations whereas a consumer packaged goods manufacturer making and selling 1000 distinct products may well end up with 800 distinct package sizes. Encapsulation tier 2: basic handling unit loads At the second encapsulation tier, packages encapsulating goods are grouped into basic handling units such as cardboard cases, totes and containers. Figures 4 and5 provide typical examples. In some settings, goods are directly unitized, bypassing packaging encapsulation. Cases are often single-use while totes and containers are mostly reusable and returnable. The former are usually much cheaper than the latter. From a logistics perspective, the cubic format of cardboard cases makes them easier to handle than odd-shaped loads. Their low price and recyclability often leads users to adopt a throw-after-usage operation, avoiding the need for reverse logistics of cases. They are most often designed to fit the unitizing needs of a specific product or family of products, leading businesses to use often hundreds of distinct cases with specific dimensions. 
Cases often lack good handles to ease their handling, so they either have to be clamped or held from the bottom for manual or automated handling purposes. For example, their conveyance forces the use of roller or belt conveyors to support them. For storage purpose, their structural weakness and their lack of snapping devices force to lay them on a smooth strong surface (such as racks and pallets). In the parcel logistics industry, in order to help streamlining their logistics network and offering competitive pricing, the service providers prefer using their specific formats. Shippers who want to use their own formats are usually charged stiffer prices. Also, in order to avoid excessive pricing, shippers have to certify that their cases meet shock-resisting specifications, which often force shippers to double box their goods, the outer case protecting the inner case containing the goods: this increases significantly the material and operational costs of load unitizing in cases. Generally, returnable handling totes and containers are differently designed for logistics than cases, often for the specific context they are used in. Often times, they have handles, are easy to open and close multiple times, and are structurally stronger, allowing higher stacking capability. As shown in Figure 6, many are foldable or stackable when empty to limit the reverse logistics induced by the need for redeploying them. As they offer limited security and are designed for specific purposes and users, totes and plastic containers are mostly used in limited ecosystems, such as within a facility, a company, a client-supplier dyad, a collaborative supply chain or a specific industry, such as for fresh produce in a specific territory. Pallets have been characterized as one of the most important innovations ever in the material handling, logistics and supply chain domains, having a huge impact on productivity by easing the movement of multiple goods, cases, etc., as a single entity, using functionally standardized fork equipment (e.g. [START_REF] Vanderbilt | The Single Most Important Object in the Global Economy: The Pallet[END_REF]. Figure 8 illustrates several pallet-handling contexts. There are companies that specialize in providing pools of pallets shared by their clients, insuring their quality, making pallets available when and where their clients need them, involving relocating pallets and tactically positioning them based on client usage expectations. Encapsulation tier 4: Shipping containers At the fourth encapsulation tier lies the shipping container that contains some combination of goods themselves, in their unitary packaging or in basic handling unit loads such as cases, themselves either stacked directly on its floor, and/or loaded on pallets in pallet-wide swap boxes or containers. Illustrated in Figure 9, shipping containers are rugged, capable of heavy-duty work in tough environmental conditions such as rain, ice, snowstorms, sandstorms and rough waters in high sea. They come roughly in 2,4 by 2,4 meter section, with lengths of 6 or 12 meters (20 or 40 feet). There are numerous variants around these gross dimensions, notably outside of the maritime usage. In complement to their quite standard dimensions, they have standardized handling devices to ease their manipulation. 
As illustrated in Figure 11, this has led to the development and exploitation of highly specialized handling technologies for loading them in ships and unloading them from ships, to move them around in port terminals and to perform stacking operations. As emphasized in Figure 3, carriers may encapsulate goods directly, such as in the examples from the lumber and car industries provided in Figure 12. Yet in most cases, they transport goods already encapsulated at a previous tier. Figure 13 illustrates semi-trailers encapsulating pallets of cases, with a much better filling ratio in the left side example than in the right side example. Indeed the right side represents a typical case where the pallets and cases are such that pallets cannot be stacked on top of each other in the semi-trailer, leading to filling ratios on the order of 60% in weight and volume. Shipping containers are ever more used in multimodal contexts, as illustrated in Figure 14, where they are encapsulated on a semi-trailer, on railcars, and on a specialized container ship.
Figure 14. Shipping containers carried on semi-trailer, train and ship. Sources: www.tamiya.com, www.kvtransport.gr and www.greenship.com
Proposed three-tier modular design of Physical Internet containers
The Physical Internet concept proposes to replace by standard and modular π-containers all the various packages, cases, totes and pallets currently exploited in the encapsulation tiers one to four of Figure 3. Yet clearly, these must come in various structural grades so as to cover smartly the vast scope of intended usage. In a nutshell, it is proposed as depicted in Figure 15 that three types of π-containers be designed, engineered and exploited: transport containers, handling containers and packaging containers. The transport containers are an evolution of the current shipping containers exploited in encapsulation tier 4. The handling containers replace the basic handling unit loads and pallets exploited in encapsulation tiers 2 and 3. The packaging containers transform the current packages of encapsulation tier 1. These are respectively short-named T-containers, H-containers and P-containers in this paper.
Transport containers
Transport containers are functionally at the same level as current shipping containers, yet with the upgraded generic specifications of π-containers. T-containers are thus to be world-standard, modular, smart, eco-friendly and designed for easing interconnected logistics. T-containers are to be structurally capable of sustaining tough external conditions such as heavy rain, snowstorms and tough seas. They are to be stackable at least as many levels as current shipping containers. From a dimensional modularity perspective, their external height and width are to be 1,2m or 2,4m while their external length is to be 12m, 6m, 4,8m, 3,6m, 2,4m or 1,2m. These dimensions are indicative only and subject to further investigations leading to worldwide satisfactory approval. The thickness of T-containers is also to be standard so as to offer a standard set of internal dimensions available for embedded H-containers. In order to generalize the identification and external dimensions of T-containers, it is proposed to define them according to their basic dimension, specified above as 1,2m. This basic dimension corresponds to a single T unit. As detailed in Table 1, a T-container whose length, width and height are 1,2m, as shown in Figure 15, is to be identified formally as a T.1.1.1 container.
Similarly, a 6m long, 1,2m wide, 1,2m high T-container can be identified as a T.5.1.1 container. As can be seen in Table 1, the majority of T-container volumes are unique, with two being the maximum number of distinct T-container dimensions having the same volume. This has led to a way to shorten T-container identification, indeed the short name in the second column of Table 1. According to this naming, the formally named T.1.1.1 container of Figure 15 is short named a T-1 container, due to its unitary volume, and the short name for a T.5.1.1 container is T-5. The short name for the T.5.2.2 container of Figure 17 is T-20S, to distinguish it from the T-20L short name for the T.10.2.1 container that is the only other T-container with a volume of 20 T units. The suffixes L and S respectively refer to long and short.
Table 1. Identification and external dimensions of T-containers
Formally, the modular dimensions of a T-container can be expressed as:
d_f^Te = f · b^T, ∀ f ∈ F^T (1)
d_f^Ti = d_f^Te − 2 t^T, ∀ f ∈ F^T (2)
where d_f^Te and d_f^Ti are the external and internal dimensions of a T-container side of factor f, b^T is the base dimension of a T-container (here 1,2m), t^T is the standard thickness of T-container sides, and F^T is the set of modular dimension factors f for T-container sides, here exemplified as {1; 2; 3; 4; 5; 10}.
Figure 16 depicts a conceptual rendering of a T-1 container. It shows its sides to be identical. Each side is represented as having a frame composed of an internal X-frame coupled to an external edge-frame. Each side is shown to have five standard handling interfaces represented as black circles. Each side is shown to have an internal surface protecting visually and materially the content. Here the surface is drawn in dark red. As all subsequent renderings, the conceptual rendering of Figure 16 is not to be interpreted as a specific specification but rather simply as a way to illustrate the concept in a vivid way. Much further investigation and engineering work are required prior to freezing such specifications.
Several material handling technologies are being adapted to the Physical Internet in general and to dealing with T-containers in particular. Figure 22 shows a semi-trailer containing several T-containers that is first backed up to a docking station so as to unload one T-container and load another T-container. The T-container to be unloaded is side-shifted onto a π-conveyor and moved inside the logistic facility. Then the T-container to be loaded is side-shifted from a π-conveyor on the other side onto the semi-trailer, where it is interlocked with the adjacent T-container on the semi-trailer for secure travel. Figure 23 shows how a π-adapted stacker similar to those in current port terminals can be used to load a T-container on a semi-trailer.
Transport containers have been described above in much detail so as to make vivid the distinctions and similarities with current containers. In the next sections, the handling and packaging containers are described in a more compact fashion, emphasizing only the key attributes, as the essence of the proposed changes is similar to transport containers.
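To make the T-container identification scheme above concrete, the following minimal Python sketch (not part of the original proposal) enumerates candidate T-containers, applies equations (1)–(2) and derives the short names, including the L/S disambiguation. The base dimension of 1,2 m and the factor sets come from the text; the wall thickness value and the length ≥ width ≥ height enumeration rule are assumptions made here purely for the sake of the example.

```python
from collections import defaultdict

# Assumed inputs: b_T = 1.2 m comes from the text; the wall thickness value
# and the length >= width >= height enumeration rule are assumptions.
B_T = 1.2                              # base dimension b^T in metres (one T unit)
T_T = 0.05                             # placeholder wall thickness t^T in metres
LENGTH_FACTORS = (1, 2, 3, 4, 5, 10)   # external lengths 1.2 m .. 12 m
SECTION_FACTORS = (1, 2)               # external width/height of 1.2 m or 2.4 m

def dimensions(l, w, h):
    """External and internal dimensions of a T.l.w.h container, eq. (1)-(2)."""
    external = tuple(f * B_T for f in (l, w, h))
    internal = tuple(d - 2 * T_T for d in external)
    return external, internal

external, internal = dimensions(1, 1, 1)   # a T-1 container

# Candidate T-containers (one factor triple per shape, length-dominant).
containers = [(l, w, h) for l in LENGTH_FACTORS
              for w in SECTION_FACTORS for h in SECTION_FACTORS
              if l >= w >= h]

# Short names: volume in T units, with L/S suffixes when two shapes share it.
by_volume = defaultdict(list)
for c in containers:
    by_volume[c[0] * c[1] * c[2]].append(c)

short_name = {}
for volume, group in by_volume.items():
    if len(group) == 1:
        short_name[group[0]] = f"T-{volume}"
    else:                       # at most two shapes share a volume (cf. text)
        longest = max(group, key=lambda c: c[0])
        for c in group:
            short_name[c] = f"T-{volume}{'L' if c == longest else 'S'}"

# Consistent with the examples in the text: T.5.2.2 -> T-20S, T.10.2.1 -> T-20L.
assert short_name[(5, 2, 2)] == "T-20S" and short_name[(10, 2, 1)] == "T-20L"
```

Under the assumed enumeration rule this yields sixteen shapes and twelve distinct volumes, four of which are shared by exactly two shapes, in line with the remark above that two is the maximum number of distinct T-container dimensions having the same volume; the actual content of Table 1 may of course differ in detail.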
Handling containers
Handling containers are functionally at the same level as current basic handling unit loads such as cases, boxes and totes, yet with the upgraded generic specifications of π-containers. Handling containers are conceptually similar to transport containers, as they are both Physical Internet containers. They could be designed to look like the T-containers shown in Figures 17 and 18. In order to contrast them with T-containers, in this paper H-containers are displayed as in Figure 2. Note that H-containers are also nicknamed π-boxes. A key difference between transport and handling containers lies in the fact that H-containers are smaller, designed to modularly fit within T-containers and dry-bed trailers and railcars. Figure 26 illustrates a T-container filled with a large number of H-containers, depicting the exploitation of their modularity to maximize space utilization.
Figure 26. H-containers encapsulated in a T-container (sliced to show its content)
A second key difference is that they only have to be able to withstand rough handling conditions, mostly within facilities, carriers and T-containers. So they are structurally lighter, being less rugged than T-containers. They have to be stackable, at least to the interior height of a T-container (see Figure 24), higher in storage facilities, yet less high than T-containers in ports. H-containers, given their inherent interlocking capabilities and their robust structure, are designed to support and protect their content without requiring pallets for their consolidated transport, handling and storage. The Modulushca project, under the leadership of Technical University of Graz, has designed and produced a first-generation prototype of H-containers. The prototype is depicted in Figure 27 and is thoroughly described in [START_REF] Modulushca | Modulushca Work Package 3 Final Report[END_REF]. It has the capability of interlocking with others located above and below it through an elaborate locking mechanism, yet does not allow sideways interlocking. Even though it currently does not have all the desired characteristics for H-containers, it is indeed a first step along an innovation journey toward ever better π-boxes. The Modulushca project is currently working on a second-generation prototype of H-containers.
From a dimensional modularity perspective, their dimensions are roughly to be on the order of series such as 1,2m, 0,6m, 0,48m, 0,36m, 0,24m and 0,12m or 1,2m, 0,8m, 0,6m, 0,4m, 0,3m, 0,2m and 0,1m. These dimensions are indicative only and subject to further investigations leading to worldwide satisfactory approval. Here a 1,2m dimension is meant to signify that it fits within a T-1 container as described in Table 1, taking into consideration the thickness of T-containers. So, based on the series above, one can generically use 1-2-3-4-5-10 and 1-2-3-4-6-8-12 series to describe H-containers. So, assuming a basis at 0,1m using the second series, an H.2.4.6 container refers to an approximately 0,2m * 0,4m * 0,6m box. Formally, the modular dimensions of an H-container can be expressed as follows, assuming that the largest-size H-container has to fit perfectly in the smallest-size T-container:
b^H = (d_1^Ti − 2 s^H) / f^H (3)
d_f^He = f · b^H, ∀ f ∈ F^H (4)
d_f^Hi = d_f^He − 2 t^H, ∀ f ∈ F^H (5)
Where
d_f^He: External dimension of an H-container side of factor f
d_f^Hi: Internal dimension of an H-container side of factor f
b^H: Base dimension of an H-container
d_1^Ti: Internal dimension of a T-container side of factor 1
s^H: Standard minimal maneuvering slack between T-container interior side and encapsulated H-containers
t^H: Standard thickness of H-container sides
f^H: Maximum modular dimensional factor for an H-container
F^H: Set of modular dimension factors f for H-containers, here exemplified as {1; 2; 3; 4; 5; 10} or {1; 2; 3; 4; 6; 8; 12}.
As contrasted with the huge number of customized sizes of current cases, boxes and totes, the modular dimension factor sets strongly limit the number of potential H-container sizes. For example, Table 2 demonstrates that exploiting the set {1; 2; 3; 4; 6; 8; 12} leads to a set of 84 potential H-container sizes, each composed of six modular sides from a set of 28 potential modular-size sides. It is not the goal of this paper to advocate using all these modular sizes in industry, nor to trim the number of modular sizes down to a much smaller H-container set. This is to be the subject of further research and of negotiations among industry stakeholders. Basically, a larger set enables a better fit of goods in H-containers yet induces more complexity in manufacturing, deploying, flowing and maintaining π-boxes. The exploitation of modular sides attenuates this complexity hurdle. Meller et al.
(2012) and[START_REF] Meller | A decomposition-based approach for the selection of standardized modular containers[END_REF] provided optimization-based empirical insights relative to the compromises involved in setting the portfolio of allowed handling container sizes. snap-on handles. Furthermore, if the wheels are motorized and smart, then when snapped to the Hcontainer, the set becomes an autonomous vehicle. Specific company-standardized handling containers are already in used in industry, with significant impact. Figure 31 provides an example in the appliance industry. It depicts appliances encapsulated in modular handling containers. It allows moving several of them concurrently with a lift truck by simply clamping them from the sides. It also allows to store them in the distribution center without relying on storage shelves, indeed by simply stacking them. The Physical Internet aims to generalize and extend such practices through world standard H-containers designed for interconnected logistics. Packaging containers Packaging containers, short named P-containers or π-packs, are functionally at the same level as current goods packages embedding unit items for sales, as shown in Figure 3, the kind seen displayed in retail stores worldwide. P-containers are Physical Internet containers as T-containers and H-containers, with the same generic characteristics. Yet there are three key characteristics that distinguish them. 1. The need for privacy is generally minimal as, to the contrary, goods owners want to expose the product, publicity and instructions to potential buyers. 2. The need for robust protection of their embedded goods is lowest as the H-containers and Tcontainers take on the bulk of this responsibility; so they are to be lightest and thinnest amongst Physical Internet containers. 3. The need for handling and sorting speed, accuracy and efficiency is maximal as they encapsulate individual product units. Figure 32 illustrates the concept of π-packs as applied to cereals, toothpaste and facial tissues. The π-packs are here composed of display sides, reinforced standard tiny edges and corners acting as interfaces with handling devices, and they have modular dimensions. Figure 33 purposefully exhibits a toothpaste dispenser being loaded into a P-container, looking at first glance just like current toothpaste boxes on the market. Yet the P-container characteristics described above simplify very significantly the efforts and technologies necessary to move, pick, sort, and group them at high speed For example, it enables improved A-frame technologies, cheaper and more efficient, or innovative alternative technologies. As illustrated in Figure 34, the dimensional modularity of π-packs enables their space-efficient encapsulation in H-container for being flowed through the multiple distribution channels, all the way to retail stores, e-drives or households. Figure 34. Multiple modular P-containers efficiently encapsulated in a H-container From a dimensional perspective, P-containers are in the same realm as H-containers, yet are not generally expected to go as large as the largest 1,2m*1,2m*1,2m H-containers. So, given that their bases are in the same order, P-containers are to have dimensional factors of series such as 1-2-3-4-5 or 1-2-3-4-6-8 in line with yet shorter than H-containers. 
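Before the formal P-container expressions below, a minimal Python sketch (illustrative only) shows how few modular sizes such factor sets actually generate: it derives the H-container base dimension from equations (3)–(5) and enumerates the catalogue behind Table 2. The interior dimension of the T-1 container, the slack s^H and the thickness t^H are placeholder values assumed here, since the paper deliberately leaves the exact figures open; the factor set {1; 2; 3; 4; 6; 8; 12} is the one discussed in the text. The P-container catalogue announced above follows the same pattern once equations (6)–(7) fix its base dimension.

```python
from itertools import combinations_with_replacement

# Placeholder numerical assumptions (the paper fixes none of these exactly).
D1_TI = 1.10    # interior dimension of a T-1 container side, in metres
S_H = 0.01      # manoeuvring slack s^H, in metres
T_H = 0.005     # H-container wall thickness t^H, in metres
F_H = (1, 2, 3, 4, 6, 8, 12)    # modular factor set used for Table 2
F_MAX = max(F_H)                # f^H in equation (3)

b_H = (D1_TI - 2 * S_H) / F_MAX                     # equation (3)
external = {f: f * b_H for f in F_H}                # equation (4)
internal = {f: external[f] - 2 * T_H for f in F_H}  # equation (5)

# Catalogue of H-container sizes: unordered triples of factors (permuting
# width, depth and height does not create a new size), as in Table 2.
sizes = list(combinations_with_replacement(F_H, 3))

# Each size exposes three pairs of identical sides; collecting them over the
# whole catalogue gives the set of distinct modular side dimensions.
side_sizes = {tuple(sorted(pair))
              for (a, b, c) in sizes
              for pair in ((a, b), (a, c), (b, c))}

assert len(sizes) == 84 and len(side_sizes) == 28   # the counts cited for Table 2
```

Under these placeholder values the base dimension b^H comes out just below 0,1m, so the external sides range from roughly 0,09m to 1,08m; only the counts of 84 sizes and 28 sides, not these particular values, are meant to match the paper.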
Formally, the modular dimensions of a P-container can be expressed as follows, assuming that the smallest-size P-container has to fit perfectly in the smallest-size H-container: 𝑏 𝑃 = 𝑑 1 𝐻𝑖 -2𝑠 𝑃 (6) 𝑑 𝑓 𝑃𝑒 = 𝑓𝑏 𝑃 ∀𝑓 ∈ 𝐹 𝑃 (7) Where 𝑑 𝑓 𝑃𝑒 : External dimension of a P-container side of factor f 𝑏 𝑃 : Base dimension of a P-container 𝑠 𝑃 : Standard minimal maneuvering slack between H-container interior side and encapsulated P-containers 𝑓 𝑃 : Maximum modular dimensional factor for a H-container 𝐹 𝐻 : Set of modular dimension factors f for H-containers, here exemplified as {1; 2; 3; 4; 5} or {1; 2; 3; 4; 6; 8}. Note that the need for standardizing the thickness and internal dimensions of P-containers is debatable, explaining why it is omitted in the above formalization. For the Physical Internet itself, standardization is functionally not necessary. It is necessary for T-containers as H-containers must modularly fit within them, and for H-containers as P-containers must similarly fit modularly within them. Only goods are to be encapsulated in P-containers. Variability in thickness may allow to adjust it to provide adequate protection to the encapsulated goods. On the other hand, standardizing the thickness of P-containers provides a strong advantage in guiding and aligning product designers worldwide with a fixed set of usable space dimensions within the P-containers they are to be encapsulated. Conclusion The three-tier characterization of transport, handling and packaging containers proposed for the Physical Internet enables generalizing and standardizing unit load design worldwide, away from single-organization centric unit load design as engraved in textbooks such as [START_REF] Tompkins | Facilities planning[END_REF]. It offers a simple and intuitive framework that professionals from all realms and disciplines can readily grasp. It simplifies unit load creation and consolidation. It is bearer of innovations that are to make transshipment, crossdocking, sorting, order picking, etc., much more efficient. This is true within a type of container as well as across types, notably enabling significant improvement in space-time utilization of transportation, handling and storage means. The proposed three-tier characterization also catalyzes a shift from the current paradigm of dimensioning the packaging to fit individual products, which leads to countless package dimensions, towards a new paradigm where product dimensioning and packaging dimensioning and functionality are adapted to modular logistics standards. There are strong challenges towards the appropriation by industry of the modular transport, handling and packaging containers. These challenges cross technical, competitive, legacy and behavioral issues. For example, there must be consensus on the base dimensions and factor series for each type of container. There must also be consensus on standardized thickness of T-containers and H-containers. The container thickness, weight and cost must be controlled in order to minimize the wasted space and loading capacity, and to make the containers profitably usable in industry. The same goes with the handling connectors (allowing snapping and interlocking), relative to their cost, size, ease of use, position on the containers of each type. Beyond the containers themselves, there must be engagement by the material handling industry to create the technologies and solutions capitalizing on the modular three-tier containers. Similarly, the vehicle and carrier (semi-trailer, railcar, etc.) 
industry must also get engaged. New types of logistics facilities are to be designed, prototyped, implemented and operationalized that enable seamless, fast, cheap, safe, reliable, distributed, multimodal transport and deployment of the three interconnected types of π-containers across the Physical Internet. Indeed, the proposed characterization opens a wealth of research and innovation opportunities and challenges to both academia and industry. Figure 2 . 2 Figure 2. Conceptual design illustrating the modularity and the composition functionality of π-containers (Source: original design by Benoit Montreuil and Marie-Anne Côté, 2012) Figure 3 . 3 Figure 3. Current encapsulation practice characterization Figure 3 . 3 Figure 3. Illustrative consumer goods packaging Source: www.bestnewproductawards.biz (2012) Figure 4 . 4 Figure 4. Cardboard cases used as handling unit loads Source: www.ukpackaging.com Figure 6 . 6 Figure 6. Illustrating the collapsible and stackable capabilities of some returnable plastic containers Source: www.pac-king.net and www.ssi-schaefer.us Figure 7 . 7 Figure 7. Two extreme examples of cases grouped as a unit load on a pallet Source: www.123rf.com and www.rajapack.co.uk Figure 8 . 8 Figure 8. Pallets handled by forklift, walkie rider and AS/RS system Source: www.us.mt.com, www.chetwilley.com and www.directindustry.fr Figure 9 . 9 Figure 9. A shipping containerMaritime containers have strong structural capabilities enabling their stacking, often up to three full and five empty high in port terminals, and even higher in large ships. Figure10depicts the wide exploitation of their stacking capabilities in a temporary storage zone of a port. Figure 10 . 10 Figure 10. Stacked shipping containers Figure 11 . 11 Figure 11. Shipping-Container adapted handling equipment in port operations Figure 12 . 12 Figure 12. Semi-trailers carrying logs and cars directly without further encapsulation Sources: www.commercialmotor.com/big-lorry-blog/logging-trucks-in-new-zealand and en.wikipedia.org/wiki/Semi-trailer_truck Figure 15 . 15 Figure 15. Proposed Physical Internet encapsulation characterization Figure16depicts a conceptual rendering of a T-1 container. It shows its sides to be identical. Each side is represented as having a frame composed of an internal X-frame coupled to an external edgeframe. Each side is shown to have five standard handling interfaces represented as black circles. Each side is shown to have an internal surface protecting visually and materially the content. Here the surface is drawn in dark red. As all subsequent renderings, the conceptual rendering of Figure16is not to be interpreted as a specific specification but rather simply as a way to illustrate the concept in a vivid way. Much further investigation and engineering work are required prior freezing such specifications. Figure 16 . 16 Figure 16. Illustrating a 1,2m long, wide and high transport container: T.1.1.1 or T-1 container Figure 17 . 17 Figure 17. Illustrating a 6m-long 1,2m-wide, 1,2m-high transport container: T.5.1.1 or T-5 container Figure 19 . 19 Figure 19. Modular spectrum of T-container sizes from T.1.1.1 (T-1) to T.10.2.2 (T-40) Figure 21 21 Figure 21. T-containers carried on π-adapted flatbed trucks and semi-trailers Figure 22 . 22 Figure 22. Conveyor based unloading and loading T-containers from semi-trailer Figure 24 . 24 Figure 24. Modular T-containers loaded on adapted flatbed π-railcars Figure 27 . 27 Figure 27. 
H-container prototyped in 2014 in the Modulushca project Source: www.modulshca.eu Figure 28 . 28 Figure 28. Composite H-container moved (1) snapped to a forkless lift truck and (2) using snapped wheels manually or autonomously if they are motorized and smart Source: Montreuil et al. (2010) Figure 30 30 Figure 30. H-containers stacked and snapped to a modular storage grid Source: Montreuil et al. (2010) Figure 32 . 32 Figure 32. Illustrative consumer-focused P-containers Table 2 . 2 Set of 84 H-container sizes using the {1; 2; 3; 4; 6; 8; 12} modular factor set and a set of 28 modular side sizes Figures 28 to 30, sourced from[START_REF] Montreuil | Towards a Physical Internet: the impact on logistics facilities and material handling systems design and innovation[END_REF], highlight the potential for innovative handling technologies exploiting the characteristics of H-containers. Figure28shows that π-boxes do not require pallets to be moved, even a composite π-box, as the handling vehicle can have devices enabling to snap, lift and carry the H-container. It also shows that wheels can be easily snapped underneath a π-box so that a human handler or a mobile robot can readily carry it, potentially using Identification of H-container sides : side dimensions and number of each one H-Container X Y Z 1 1 1 1 2 1 1 2 3 1 1 3 4 1 1 4 5 1 1 6 6 1 1 8 7 1 1 12 8 1 2 2 9 1 2 3 10 1 2 4 11 1 2 6 12 1 2 8 13 1 2 12 14 1 3 3 15 1 3 4 16 1 3 6 17 1 3 8 18 1 3 12 19 1 4 4 20 1 4 6 21 1 4 8 22 1 4 12 23 1 6 6 24 1 6 8 25 1 6 12 26 1 8 8 27 1 8 12 28 1 12 12 29 2 2 2 30 2 2 3 31 2 2 4 32 2 2 6 33 2 2 8 34 2 2 12 35 2 3 3 36 2 3 4 37 2 3 6 38 2 3 8 39 2 3 12 40 2 4 4 41 2 4 6 42 2 4 8 43 2 4 12 44 2 6 6 45 2 6 8 46 Acknowledgements The authors thank for their support the Québec Strategic Grant Program through the LIBCHIP project and the European FP7 Program through the Modulushca project.
39,705
[ "766191", "10955" ]
[ "94189", "39111", "97391" ]
01487298
en
[ "shs" ]
2024/03/04 23:41:48
2015
https://shs.hal.science/halshs-01487298/file/Chapron%20The%20%C2%AB%C2%A0supplement%20to%20all%20archives%C2%A0%C2%BB.pdf
Eds B Delmas D Margairaz D Ogilvie Mutations de l'État, avatars des archives national' libraries, at a time when these were being institutionalised as central repositories and became the 'natural' place for the conservation of collections whose high political or intellectual value required that they be preserved 2 . Hence, from the final decades of the seventeenth century onwards, the Bibliothèque Royale de Paris incorporated several dozen private libraries belonging to scholars and senior government officials, which were rich in ancient manuscripts, transcribed texts and extracts from archival repositories: in such a way, as lawyer Armand-Gaston Camus, archivist of the Assemblée Nationale in revolutionary France later said, it came to form « the supplement to all archives and charter repositories » 3 . « Archival turn » is the term now commonly used to describe the new interest that historians are showing for past modalities of selecting, classifying and transmitting the documents that they use in archives 4 . Resulting from a partnership between historians and archivists, the history of the 'making of archives' will help achieve a better understanding of how history was written -and still is 5 . Historians' increasing reflexivity with regard to their own documentary practices, however, has had comparatively less impact on the history of libraries. The first reason for such a neglect is undoubtedly due to a certain conception of the historian's profession, which is essentially conceived as a work on archives and in archives. In France, today historians are still heirs to the 'big divide' which resulted from the debates, during the second half of the nineteenth century, between the former Royal, later Imperial, Library and the National Archives. At a time when the historian's profession was being redefined around the use of authentic, unpublished sources, archivists educated in the recently created École des Chartes (1821) availed themselves of the founding law on archives of Messidor Year II to claim that archives were the true repository of all « true sources of national history » 6 . Notwithstanding the archivists' claim, the Imperial Library did not give up all its treasures and maintained its status as repository of the historical 2 F. Barbier, "Représentation, contrôle, identité: les pouvoirs politiques et les bibliothèques centrales en Europe, XV e -XIX e siècles", Francia, 26, (1999): 1-22. 3 A. G. Camus, "Mémoire sur les dépôts de chartes, titres, registres, documents et autres papiers… et sur leur état au 1 er nivôse de l'an VI", cited in F. Ravaisson, Rapport adressé à S. Exc. le ministre d'État au English abstract In the early modern period, libraries were probably the most important place of work for historians. They were used as a kind of archive, where historians could find all sorts of records, be they original documents or copies. Based on the case of the Royal Library in eighteenth-century Paris, this study aims to investigate the chain of documentary acts which gave it a para-archivistical function -which it retains to this day. 
First of all, I will discuss the constitution of scholars' and bureaucrats' private collections and their incorporation in the Royal Library from the final decades of the seventeenth century onwards; then the various operations of classifying, cataloguing and filing that blurred the initial rationales of the 'archive avatars' developed by previous owners; finally the uses of this peculiar material, be they documentary (by scholars or royal officials) or pragmatic (by families wishing to clarify their genealogy or private individuals involved in court cases). In the early modern period, libraries were probably the most important place of work for historians. They were places where scholars went to look not just for the printed works and handwritten narrative sources they needed, but also for all sorts of other records produced during the everyday activities of secular and ecclesiastic institutions 1 . Mediaeval charters, ambassadors' reports, ministerial correspondence and judicial records can still all be found in Ancien Régime libraries, be they original documents or copies. The fact that they are kept there has nothing to do with the ordinary life of archives belonging to a given administration; it is the result of two successive operations. First, these items were generally part of large individual or family collections, compiled by royal officers or scholars for use in their everyday activities. In turn, a certain number of these collections were later donated to central 1 Historiography has put into proper perspective the opposition between scholarly history which turned to the source, and eloquent history which was considered to be less in compliance with the rules of scholarship. C. Grell, L'histoire entre érudition et philosophie. Étude sur la connaissance historique à l'âge des Lumières (Paris: PUF, 1993). I would like to thank Maria-Pia Donato, Filippo de Vivo and Anne Saada for their time and precious comments. records it had acquired in the Ancien Régime and during the Revolution 7 . Yet, its function changed as emphasis shifted toward providing access to printed books for a wider public. As a result, the mass of documents it contained that might easily be described as archival items fell into a sort of forgotten area in the mental landscape of contemporary historians. The way in which we nowadays write the history of libraries provides a second reason. In this field, primacy is given to ancient manuscripts and printed books; this means that library and information experts tend to forget the diversity of written and non-written material contained in libraries -exotic objects, collections of antiques, scientific instruments -and the various uses they had 8 . Whilst monographs devoted to early modern national libraries mention private collections that have been bequeathed or purchased, they offer little insight into the political or intellectual rationale behind these acquisitions. Yet, as I shall claim in this article, the way these collections were sorted, classified, inventoryed and even -at some point -returned into public archives, helps throw light on the slow and mutally dependent emergence of archival and library institutions in early modern States 9 . The concentration of the history of reading on private contexts, and the lack of sources on reading in public libraries, add to our poor understanding of how libraries were used as archives. In this article I wish to redress this problem, taking France as a case study. I. 
« ALL OF THE STATE'S SECRETS » 1782 saw the publication of an Essai historique sur la bibliothèque du Roi et sur chacun des dépôts qui la composent, avec la description des bâtiments, et des objets les plus curieux à voir dans ces différents dépôts 10 . Its author, Nicolas Thomas Le Prince, who was employed in the Bibliothèque royale as caretaker of the legal deposit, devised his book as a visitor's guide for the curious and for those travellers who came to admire the library. He walks the reader through the rooms, describes the paintings, and discusses the various 'sections' of the library's administrative organisation (printed books, manuscripts, prints and engravings, deeds and genealogies, medals and antiquities); last but not least, he 7 Article 12 of the law dated Messidor Year II specifies the documents to be deposited within the National Library: « charters and manuscripts belonging to history, to the sciences and to the arts, or which may be used for instruction ». The 1860s debates on the respective perimeters of the two institutions led to just a few ad hoc exchanges. 8 Even if some of these components have been properly examined. T. Sarmant, Le Cabinet des médailles de la Bibliothèque nationale (Paris: École des chartes, 1994). 9 See the procedures adopted in the Grand Duchy of Tuscany: E. Chapron, Ad utilità pubblica. Politique des bibliothèques et pratiques du livre à Florence au XVIII e siècle (Geneva: Droz, 2009), 224-261. 10 N. T. Le Prince, Essai historique sur la Bibliothèque du Roi, et sur chacun des dépôts qui la composent, avec la description des bâtimens, et des objets les plus curieux à voir dans ces différens dépôts [Historical essay on the King's library and on each of the repositories of which it is comprised, with the description of the buildings and of the most curious objects to be seen in these various repositories] (Paris, Bibliothèque du roi, 1782). provides details on the private collections which have been added in time to the huge manuscripts holdings. To describe these collections, Le Prince consistently used the word 'fonds', which is still used in modern archival jargon to indicate the entire body of records originating from an office or a family. This is accurate because the collections originated precisely from families and, as we shall see, it reflects the closeness of archives and librairies at the time. For eighteen of these collections (followed by a dozen smaller collections, more rapidly presented), Le Prince provides all available information on the identity of the former owner, the history of the collection, the conditions under which it became part of the Bibliothèque royale and the material description of the volumes (binding, ex libris). He pays little attention to the literary part of these collections, although most were rich in literary, scientific and theological treasures. For the most part, he focuses on listing the resources that each offered prospective historians, on the nature and number of original documents and charters, and on the quality of copies. In other words, Le Prince's presentation invited the reader to consider the Bibliothèque royale primarily in its role as a 'public repository', in the sense of a place designed to preserve authentic archives, deeds and legal instruments which may be needed as evidence for legal purposes 11 . The very use of the term 'repository' (dépôt) to designate the library's departments, whilst not unusual, is sufficiently systematic in his book to be meaningful. 
Like the word 'fonds', which he used for the single collections, the term derives from the field of archives, usually called 'public repository' at this time 12 . Alongside the minutely detailed enumeration of all authentic items it preserved, Le Prince repeatedly underlines the Bibliothèque Royale's role as a repository for 'reserve copies'. The copies made in Languedoc at Colbert's orders and brought together in the Doat collection acquired in 1732, for instance, made it possible « to find an infinite number of deeds which might have been mislaid, lost or burned », especially as « these copies made and collated by virtue of letters patent can, if so required, replace the very acts from which the copies were made » 13 . In his notes on the collection of Mégret de Sérilly, the Franche-Comté intendant who sold it to the king in 1748, Le Prince points out that, because of a fire at the Palais de Justice in 1737, « the original documents used to make these copies [the registers of the Cour des Aides up until 1717] were partly burned or seriously damaged, to the extent that the copies now replace the originals, and by virtue of this disastrous accident have now become priceless » 14 . At the same time, 11 F. Hildesheimer, "Échec aux archives: la difficile affirmation d'une administration", Bibliothèque de l'École des chartes, 156, (1998), 91-106. 12 "Dépôt public", Encyclopédie, ou Dictionnaire raisonné des sciences, des arts et des métiers, 35 vols. (Paris: Briasson, 1751-1780), 4: 865. 13 Le Prince, Essai historique, 267. 14 Le Prince, Essai historique, 214. the library description stresses the existence of numerous old notarial instruments, fallen into abeyance and henceforth devoid of legal value, but now of documentary interest and for this reason made available to scholars. Le Prince goes as far as dressing a somewhat imperfect yet innovative work tool for future historians: a long « list of the charters, cartularies etc. of French churches and other documents from the various collections in the manuscript department » 15 . Finally, he further signals valuable materials to historians, as in the Duchesne collection that allegedly contained « an infinite number of records which have not yet been used and which might usefully serve those working on the history of France and on that of the kingdom's churches » 16 . Hence, due to this mix of 'living' and 'dead' archives, of authentic instruments and artefacts, the Bibliothèque royale held a singular place in the monarchy's documentary landscape. It differed from the 'political' repositories which came into being in the second half of the seventeenth century, initially in the care of Louis XIV's senior officials and later, at the turn of the century, in a more established form in large ministerial departments (Maison du Roi, Foreign Affairs, War, Navy, General Control of Finances) the main scope of which was to gather documentation that might assist political action 17 . Around the same period, the Bibliothèque royale increased considerably by incorporating numerous private collections. The latter originated in the intense activity of production, copy and collection of political documentation carried out in scholarly and parliamentary milieus between the seventeenth and eighteenth centuries. 
Senior State officials collected documents for their own benefit, that is, not just papers relating to their own work (as was the custom through to at least the end of the seventeenth century), but also any kind of documentation likely to inform their activities in the service of the monarchy. The poor state or even total abandonment in which the archives of certain institutions were left explains the considerable facility with which old records and registers could find their way into private collections 18 . The major part of the collections was nevertheless made up of copies or extracts 19 . Powerful and erudite aristocrats and officials such as Louis-François Morel de Thoisy or Gaspard de Fontanieu hired small groups of clerks who were tasked with copying -« de belle main » and on quality paper -all the documents they deemed to be of 15 I. Vérité, "Les entreprises françaises de recensement des cartulaires", Les Cartulaires, eds. O. Guyotjeannin, L. Morelle and M. Parisse (Paris: École des chartes, 1993), 179-213. 16 Le Prince, Essai historique, 333. 17 Hildesheimer, "Échec aux archives". 18 For example, M. Nortier, "Le sort des archives dispersées de la Chambre des comptes de Paris", Bibliothèque de l'École des chartes, 123, (1965), 460-537. 19 This is also the case with scholarly collections, such as that of Étienne Baluze. In the bundles relating to the history of the city of Tulle (now Bibliothèque nationale de France [BnF], Baluze 249-253), Patricia Gillet calculated that 42% of the documents were copies made by or for Baluze, 21% were originals, 10% were authentic old copies, the rest being printed documents, leaflets, work papers and letters addressed to Baluze (P. Gillet, Étienne Baluze et l'histoire du Limousin. Desseins et pratiques d'un érudit du XVII e siècle (Genève: Droz, 2008), 141). potential use 20 . They also copied original documents of their collections, when they were ancient and badly legible, so as to produce properly ordered and clean copies, bound in volumes, whilst the original documents were kept in bundles. These collections were not repositories of curiosities; they were strategic resources for personal political survival and for the defence of the State's interests, at a time when the king's Trésor des Chartes had definitively fossilised in a collection of ancient charters and acts, and the monarchy had no central premises in which to store its modern administrative papers 21 . In the 1660s, when Hippolyte de Béthune gifted to the king the collection created by his father Philippe, a diplomat in the service of Henri III and Henri IV, Louis XIV's letters of acceptance underlined the fact that the collection contained « in addition to the two thousand original documents, all of the State's secrets and political secrets for the last four hundred years » 22 . Similarly, the collection compiled during the first half of the seventeenth century by Antoine and Henri-Auguste de Loménie, Secretaries of State of the Maison du Roi (the King's Household), constituted a veritable administrative record of the reigns of Henri IV and Louis XIII. To facilitate their work at the head of this tentacular office, father and son collected a vast quantity of documents relating to the provinces of the kingdom, to the functioning of royal institutions and to the sovereign's domestic services since the Middle Ages, in addition to the documents drawn up during the course of their duties 23 . 
Despite the establishment of administrative repositories at the turn of the eighteenth century, this type of collection continued to be assembled until the French Revolution. Thus, Guillaume-François Joly de Fleury, attorney-general at the Paris parliament (1717-1756), installed the Parquet archives and a large collection of handwritten and printed material in his mansion 24 . Scholarly collections had close links with these 'political' collections. A certain number of scholars, often with a legal background, moved in royal and administrative circles, participated in the creation of 'professional' collections, and gathered significant amounts of documents for themselves. Caroline R. Sherman has shown how, since the Renaissance, erudition became a family business for the Godefroys, the Dupuys or the Sainte-Marthes. The creation of a library made it possible to transmit 'scholarly capital' 20 Morel de Thoisy, counsellor to the king, treasurer and wage-payer at the Cour des Monnaies, gave his library to the king in 1725. On the clerks he employed, BnF, Clairambault 1056, fol. 128-156. The Mémoire sur la bibliothèque de M. de Fontanieu (sold in 1765 by its owner, maître des requêtes and intendant of Dauphiné) mentions « the work of four clerks he constantly employed over a period of fourteen or fifteen years » (published in H. Omont, Inventaire sommaire des portefeuilles de Fontanieu (Paris: Bouillon, 1898), 8-11). 21 O. Guyotjeannin and Y. Potin, "La fabrique de la perpétuité. Le trésor des chartes et les archives du royaume (XIII e -XIX e siècle)", Revue de synthèse, 125, (2004), 15-44. from one generation to the next: sons were trained by copying documents, filing bundles, making tables and inventories, and compiling extracts [START_REF] Sherman | The Ancestral Library as an Immortal Educator[END_REF] . Their collections were by no means disconnected from political stakes, as these scholars were involved simultaneously in compiling collections, caring for extant repositories, and defending royal interests. During the first half of the seventeenth century, brothers Pierre and Jacques Dupuy were employed to inventory the Trésor des Chartes and to reorganise Loménie de Brienne's collection, from which they were given the original documents as a reward for their work [START_REF] Solente | Les manuscrits des Dupuy à la Bibliothèque nationale[END_REF] . II. PRIVATE COLLECTIONS IN THE KING'S LIBRARY. The rationale behind the integration of a certain number of collections into the Bibliothèque royale from the 1660s onwards was obviously connected to these collections' political nature. Following the chronology established by Le Prince, the first wave of acquisitions coincided with the period in which Jean-Baptiste Colbert extended his control over the Bibliothèque royale. As Jacob Soll has pointed out, Colbert based his political action on the creation of an information system extending from the collection of data 'in the field', through their compilation and organisation, and onto their exploitation [START_REF] Soll | The information master: Jean-Baptiste Colbert's secret state intelligence system[END_REF] . His own library was an efficient work tool, a vast 'database' enhanced by documents collected in the provinces or copied by his librarians. The Bibliothèque royale was part of this information system. As early as 1656, while in Cardinal Mazarin's service, Colbert placed his protégés and friends there, including his brother Nicolas whom he placed in charge as head librarian. 
In 1666, the newly designated Controller-General of Finances ordered the books to be moved into two houses neighbouring his own in rue Vivienne. It was during the period between these two dates that the first collections of manuscript documents came into the library [START_REF] Dupuy Bequest | [END_REF] : the Béthune collection, gifted in 1662 by Hippolyte de Béthune, who considered that « it belonged to the king alone », and the Loménie de Brienne collection, which Jean-Baptiste Colbert recovered for the Bibliothèque royale after Mazarin's death [START_REF]The collection was transferred to Richelieu by Henri-Auguste de Loménie, and then passed on to Mazarin's library[END_REF] . In the library Jean-Baptiste Colbert employed scholars, such as the historiographer Varillas, to collate his own copy of the Brienne collection, based on a comparison with the originals in the king's Library, to explore the resources of the royal developed in this direction. After Gaignières' collection in 1715, Louvois' (1718), de La Mare's (1718), Baluze's (1719), and then Mesmes' (1731), Colbert's (1732), Lancelot's (1732) and Cangé's (1733) were either presented to or purchased by the Library. This acceleration ran parallel to the increasing authority of the royal establishment, which had come to coordinate the activities of the Collège royal, the royal academies, the Journal des Savants and the royal printing house and thus became somewhat of a « point of convergence for research » 31 . The idea that the purpose of the Bibliothèque royale was to conserve documentary corpuses relating to State interests would appear to have been widely shared by the intellectual and political elites. The initiatives of attorney-general Guillaume-François Joly de Fleury are indicative of this mind-set. Vigilant as he was over the repositories under his responsibility (Trésor des Chartes, archives of the Paris Parliament), he also took care to bring rich collections of historical material to the Bibliothèque royale 32 . In 1720, he made a considerable and personal financial effort to buy the collection of the Dupuy brothers, put up for sale by Charron de Ménars' daughters, because the Royal Treasury did not have enough funds to make the purchase. Yet, although « these manuscripts contain an abundance of important documents that the King's attorney-general cannot do without in the defence of the domain and rights of His Majesty's Crown », Joly de Fleury always saw them « less as his heritage, more as property which can only belong to the king, and considering himself lucky to have been able to conserve them, he always believed that His Majesty's library was the only place where they could be kept » 33 . In 1743, he informed Chancellor d'Aguesseau of the importance of Charles Du Cange's collection, which included copies of the Chambre des Comptes memorials that had been destroyed in the Palais de Justice fire in 1737 34 . The function of conserving the monarchy's ancient archives can be seen on an almost daily basis in the purchases made by the royal librarians. Abbott Bignon and the library's custodians made use of their networks to attract donations and to find interesting manuscripts on the market. Between 1729 and 1731, among other historical items, they bought a collection of remonstrances of the Paris Parliament addressed to the king between 1539 and 1630 (almost certainly a copy, bought for 30 livres tournois), a collection of Philippe de Béthune's negotiations in Rome in the 1600s (bought from a bookseller for 20 livres tournois), and forty volumes of accounts of inspections to forests in the 1680s (bought for 1,000 livres tournois from a private individual) 35 . This representation of the Royal Library as repository of state-sensitive papers was to be found in scholarly circles, although it did not elicit unanimity in a community which fostered the Republic of Letters' ideal of the free communication of knowledge. 
Upon the death of Antoine Lancelot (1675-1740), former secretary to the dukes and peers of France, secretary-counsellor to the king and inspector of the Collège royal, Abbott Terrasson wrote to Jamet, secretary to the Lorraine intendant, regretting that Lancelot had bequeathed his collections « to the abyss (to the king's library) », and added that « It would have been better shared among his curious friends[, but] you know as well as I his quirky habit of donating to the king's library, which he felt to be the unique repository in which all curiosities should lie » 36 . The Bibliothèque royale thus acted as a vast collector of papers relating to the monarchy, authentic documents or artefacts, which should not be allowed to fall into foreign hands 37 . This acquisition policy raised three questions. The first concerned remuneration for the documents produced in the State's service. This was a lively subject of debate in negotiations relating to Colbert's library at the end of the 1720s. Abbot de Targny and academician Falconet, experts appointed by the king, refused to give an estimate for « modern manuscripts », i.e. « State manuscripts ». According to Bignon, experts would argue on the basis of « natural law, by virtue of which I believe that ministers' papers belong to the king and not to their heirs » 38 . His argument is consistent with the measures taken as from the 1670s to seize the papers of deceased senior government officials. But this position was by no means self-evident when faced with an heir who wished to make the most out of his capital 39 . Indeed, in Colbert's case as in others, the transfer of papers was eventually not the result of an actual purchase, but of a gift in return for which the king either offered a reward of a substantial sum of money or granted an office. When he sold his 35 BnF, Arch. adm. AR 65, 101, 137. 36 G. Peignot, Choix de testaments anciens et modernes (Paris: Renouard, 1829), 418. 37 When there were rumours that the de Thou library was to be put up for sale, the library's caretaker, Abbot de Targny, told Abbott Bignon how « important it was to take steps to ensure that [the manuscripts] are not lost to the king's Library ». A few days later he let it be understood that he did not believe « they should be seen to be too eager to acquire [them]. It is enough that we be persuaded that the King will not suffer should they pass into foreign hands » (BnF, Arch. Adm. AR 61, 89, 21 September 1719, and 92). 38 BnF, ms. lat. 9365, 188. Bignon to Targny, 10 October 1731. 39 Before the experts were appointed, he demanded 150,000 pounds for the government papers and 300,000 for the ancient manuscripts. BnF, ms. lat. 9365, 312. library in 1765, Gaspard de Fontanieu made a point of stating that the papers relating to his intendancies of Dauphiné and to the Italian army should not be considered as having been sold, but as gifted, « seeing his personal productions as the fruit of the honours bestowed upon him by His Majesty through the different employments with which he had been entrusted; and for this reason being persuaded that the said productions belonged to His Majesty » 40 . Secondly, the confluence of old administrative documentation in the Bibliothèque royale raises the question of its relations with the existing archives of ministries. 
The competition was not as evident as it might seem, in so far as the function of those archives was not so much to preserve the memory of past policies, as to make available the documentation required for present and future political action. As for the old Trésor des Chartes, it was reduced to purchasing sets of documents linked directly to the king, and such opportunities were very infrequent 41 . Only the creation of the foreign affairs repository in the Louvre (1710) would appear to have had an effect on the management of documentary acquisitions. From this moment on, a portion of the acquisitions were destined or brought back to this repository, for instance from the Gaignières bequest (1715) or from the Trésor des Chartes de Lorraine (1739) 42 . Errors in destination were evidence of the uncertainty surrounding the exact boundaries of both institutions: in 1729, papers relating to the history of Burgundy collected by Maurist Dom Aubrée were taken to the Louvre repository, but in 1743 they were found to be « still on the floor, having not been touched since; these manuscripts are in no way of a nature to be kept in this repository where they will never be used, and their proper place is in the king's Library » 43 . Finally, distribution was not retroactive: in 1728, Foreign Affairs officials were sent to the Bibliothèque royale « to make copies of what was missing at the repository, in order to have a full set of documents from previous ministries » 44 . Interestingly, rather than requesting the originals preserved in the library, they made copies: their aim was to have complete sets of records in the ministry of Foreign Affairs, rather than originals, which they were happy to leave in the library. Tensions sometimes arose in relation to the 'youngest' collections, filled with documents that were still sensitive. In 1786, Delaune, secretary to the Peers, asked to borrow Antoine Lancelot's portfolios which contained items relating to the Peerage. He wanted to compare them with the boxes at the Peerage 40 Fontanieu's library bill of sale published by Omont, Inventaire sommaire, 4. 41 Guyotjeannin, Potin, "La fabrique de la perpétuité". No study has yet been done on these eighteenth century entries. 42 repository and copy the items which were missing 45 . When consulted in relation to this request, Abbot Desaulnais (custodian of the printed books department in the Bibliothèque royale) very coldly responded that the portfolios assembled loose sheets for which there was no accurate inventory. This would not however be a reason to refuse the loan, were it not for the fact that among the documents were « essential items which are missing from the Peerage repository and which M. de Laulne [Delaune] claims were removed from this repository by the late M. Lancelot ». The secretary had already examined these files and had made notes, but « I did not want him to make copies, in order to allow you the pleasure, in different circumstances and where so required, of obliging the peers, whilst conserving at the repository whatever may be unique » 46 . This reluctance to allow people to make copies of documents kept in the Bibliothèque royale draws attention to a third aspect of this documentary economy: the idea that copying reduced the value of the original, because the copy was itself not without value 47 . The price of the copy was calculated according to the cost of producing it (in paper and personnel), to the nature of the items copied, and, above all, to the use to which it could be put. 
During negotiations relating to Colbert's library, one memorandum pointed out: how important it is for the King and for the State to prevent the said manuscripts from being removed, even if they are only copies. The copy made from the records of the Trésor des Chartes is extremely important, both because it may be unique, and because it contains secrets that must not be made [known] to foreigners, who are eager to acquire these sorts of items so that they might one day use them against us 48 . III. THE « TASTE FOR ARCHIVES » IN LIBRARIES 49 The representation of the Bibliothèque royale as a public repository corresponds to the perception and documentary practices of its contemporaries. This is especially true of the Cabinet des Titres et Généalogies, which was a real resource centre for families, and of the manuscript section of the Library, which was frequented by all sorts of people. The registers of books loans and requests for documentation submitted to the secretary of State of the Maison du Roi (who formally supervised the Library) throw 45 BnF, Arch. adm., AR 56, 296. 46 BnF, Arch. adm., AR 56, 296. 47 This aspect also appears in Le Prince's Essai historique, in relation to the Brienne manuscripts « that we rightly regard as being very precious[, but which] would be of another value entirely, if copies did not exist elsewhere, which are themselves merely ordered copies of Dupuy's manuscripts ». 48 BnF, ms. lat. 9365, 313, "Mémoire sur la bibliothèque de M. Colbert", 15 December 1727. 49 A. Farge, Le goût de l'archive (Paris: Seuil, 1989). considerable light on these practices which reveal three types of relationship with the library: instrumental, evidentiary and scholarly 50 . First and foremost, the Bibliothèque royale constituted an immense documentation centre for agents of the monarchy. Requests concerned the preparation of reference works, such as the Traité de la police published between 1705 and 1738 by Commissaire de La Mare -in 1721 he asked for the Châtelet registers « which were in the cabinet of the late Abbot Louvois and are now in the King's library » -, the publication of the Ordonnances des rois de France prepared by lawyer Secousse on the orders of chancellor d'Aguesseau, and the diplomatic memoirs of Le Dran, senior official at the foreign affairs archive in 1730, who consulted ambassadors' reports 51 . The library's resources were also mobilised in relation to more pressing affairs: in 1768, it was Ripert de Monclar, attorney-general at the parliament of Provence, commissioned to establish the king's rights over the city of Avignon and the Comtat Venaissin, who asked for relevant documents to be searched 52 . In a certain number of cases, recourse to clean and well-organised copies of major collections undoubtedly sufficed, or at least meant that the work was more rapidly completed. The 'public repository' function likely to produce evidentiary documents is more evident in requests from families wishing to clarify their genealogy, or from individuals wanting to produce decisive evidence in a court case. The Bibliothèque royale was called upon in handwriting verification procedures and forgery pleas, for which parties had no hesitation in asking to borrow ancient documents 53 . In 1726, Mr. de Varneuille, cavalry officer in Rouen, asked to borrow the deeds signed in 1427 and 1440 by Jean de Dunois, the 'Bastard of Orleans', to help one of his friends « to support a plea of forgery which he made against the deeds used as evidence against him in his trial » 54 . 
In 1736, the count of Belle-Isle, in proceedings against Camusat, auditor of accounts, demanded that Philippe-Auguste's original cartulary be produced 55 . This conception of the library was so self-evident that some people asked for 'legalised' copies. To such requests, Amelot, secretary of State of the Maison du Roi at the end of the eighteenth 50 BnF, Arch. adm., AR 56, AR 123 (lending register, 1737-1759) and 124 (1775-1789). 51 BnF, Arch. adm., AR 56, 4, 25, 27, 29. N. de La Mare, Traité de la police, où l'on trouvera l'histoire de son établissement, les fonctions et les prérogatives de ses magistrats, toutes les loix et tous les règlemens qui la concernent, 4 vols. (Paris: Cot, Brunet, Hérissant, 1705-38). Ordonnances des rois de France de la 3e race, recueillies par ordre chronologique, 22 vols. (Paris: Imprimerie royale, 1723-1849). The numerous memoirs of Le Dran remained unpublished and are now kept in the Foreign Affairs Archive (Paris). 52 BnF, Arch. adm., AR 56, 56, 57. 53 On these procedures, A. Béroujon, "Comment la science vient aux experts. L'expertise d'écriture au XVII e siècle à Lyon", Genèses, 70, (2008), 4-25. 54 BnF, Arch. adm., AR 56, 299. 55 BnF, Arch. adm., AR 56, 35. century, replied that « the custodians of this library having sworn no legally binding oath, they may not issue authenticated copies », and that the librarian could only certify extracts 56 . Obviously, the collections were widely used by scholars. Latin, Greek and French manuscripts from the royal collections were the most frequently borrowed documents, but State papers from Louvois, Colbert, Béthune and de Brienne, along with the charters collected by Baluze and Lancelot, were also used for historical or legal works. The registers of book-loans give us an idea of scholars' work methods. Borrowing from the imagery used by Mark Hayer, who compares ways of reading to the way animals nourish themselves, in the registers we can easily identify historians as hunters, grazers, and gatherers 57 . Abbot de Mury, doctor from the Sorbonne and previously tutor to the Cardinal of Soubise, was a grazer; as from May 1757, he borrowed the entire set of registers of the Paris parliament (probably in the copy of the Sérilly collection incorporated into the library in 1756), at an average rate of one volume per month. Abbot de Beauchesne was a hunter, borrowing on the 9th February 1753 the inventory of the Brienne manuscripts and returning four days later to borrow four sets of documents from this collection 58 . The count of Caylus might be a gatherer, as he borrowed successively one volume taken from a Baluze carton (March 1740), a treatise about the mummies from the Colbert manuscripts (January 1750) and two greek inscriptions from the Fourmont collection (December 1751) for his Recueil d'antiquités égyptiennes, étrusques, grecques et romaines published between 1752 and 1767 59 . To what extent did the fact of putting documents into the library lead to new ways of thinking about, and using, these collections? We must first consider the fact that it was already relatively easy to access these documents within close circles of scholars and bureaucrats. 
During the period he owned the Dupuy collection, between 1720 and 1754, attorney-general Joly de Fleury never refused to lend volumes to his Parisian colleagues such as the lawyer Le Roy who was preparing a work on French public law (never published), or Durey de Meinières, first president of parliament, who borrowed a large part of the collection at a rate of three to four manuscripts every fortnight, to complete his own collection of parliamentary registers or ambassadors' reports 60 . The repository at the king's Library did not really 56 BnF, Arch. adm., AR 56, 93, 29 April 1781, in response to a request from the count of Apremont, asking for a legalised copy of the Treaty of Münster. 57 Quoted in Les défis de la publication sur le Web: hyperlectures, cybertextes et méta-éditions, eds. J. M. Salaün and C. Vandendorpe (Paris: Presses de l'ENSSIB, 2002). 58 BnF, Arch. adm., AR 123, 45, 48 and following. None of these scholars seems to have ever published any historical work. 59 BnF, Arch. adm., AR 123, 7, 42, 44. A.-C.-P. de Caylus, Recueil d'antiquités égyptiennes, étrusques, grecques et romaines, 7 vols. (Paris: Desaint et Saillant, 1752-1767). 60 The correspondence with Durey de Meinières is a good observatory for copying practices which are a constitutive element of collections. BnF, Joly de Fleury, 2491, letter dated 29 August 1746: « I spent part of the afternoon verifying the three volumes that you were kind enough to lend me this morning. I have most of them in my mss from Mr. Talon and the rest in my parliamentary facilitate consultation and copying because permissions were still left to the librarians' discretion, particularly as the possibility of borrowing manuscripts was called into question on several occasions during the course of the century. The rapidity with which scholars got hold of the manuscripts acquired by the library may be as much a sign of newly announced availability as of any continuity of use. Hardly three months went by between the purchase of the Chronique de Guillaume de Tyr, a late thirteenth century manuscript from the Noailles collection, and its loan to Dom Maur Dantine on 14 January 1741 61 . Had the monk had the opportunity to look at it before, in Maréchal de Noailles' library, or was its acquisition by the Library a boon, at the time when he was taking part in the great Benedictine undertaking of Recueil des historiens des Gaules et de la France, the first volume of which appeared in 1738 ? The opposition between public library and less public repositories should thus not be exaggerated. The Mazarine gallery where the manuscripts were kept was not open to the public, even though amateurs were allowed in to admire the splendour of the decor and contents. Le Prince's Essai historique also stated that the custodians « do not indiscriminately release every kind of manuscript » 62 . In some cases, putting documents into the library was meant to protect them from prying eyes, as was the case with four handwritten memoirs on the Regency, deposited in the library in 1749 « to thereby ensure that none of them become public » 63 . The uses to which documents were to be put were also carefully controlled. 
In 1726, minister Maurepas was against the idea of lending three volumes from the Brienne collection in order to provide evidence of the king's sovereignty over Trois Évêchés in a trial: « I feel that their intended use is a delicate one, it being a question of the king's sovereignty over a province, evidence of which is said to be contained in the manuscripts ». He feared exposing them to the risk of contradiction by the opposing party 64 . IV. LIBRARY PLACEMENT Putting documents into a library implies not only that their consultation was to be ruled by that institution, but also that they were integrated into the library's intellectual organisation. Le Prince's presentation suggests that collections maintained an independent existence in the Library's manuscripts registers »; 11 May 1748: « I believe I have the last five which are letters and negotiations from Mr de Harlay de Beaumont in England in 1602, 1603, 1604 and 1605. It is so that I can be sure that I beg you… to be kind enough to lend them to me ». 61 department: « these collections or repositories are divided by fonds and bear the names of those who left them or sold them to the king » 65 . The existence of numerous small rooms around the Mazarine gallery made it possible to keep entire collections in their own separate spaces, such as Colbert's State papers which were placed in two rooms, or the 725 portfolios transferred from Lorraine, which were kept in two others 66 . The collections were not actually merged into the Bibliothèque royale; they were progressively absorbed during various operations of inventorying, classifying, cataloguing and filing, which blurred the initial rationales of the work tools and quasi-archives -or 'archive avatars' -developed by their previous owners 67 . Two trends coexisted throughout the eighteenth century: the first consisted in integrating new acquisitions into a continuous series of 'royal' numbers; the second in preserving the identity of the collections acquired. The permanent hesitation between, on the one hand, a complete cancellation of the previous order into an new and integrated classification, and on the other, what was to become the principle of the archival integrity (which holds that the records coming from the same source should be kept together) is a core problem of archival management, which had its echoes also in the Bibliothèque royale. The major undertakings of cataloguing and verification tendentially favoured unification of the various collections. The Béthune manuscripts, received in 1664 and still noted as separate in 1666, were renumbered in the catalogue compiled in 1682 by Nicolas Clément, who broke up the unity of the original collection 68 . After this date, only a small portion of the acquired manuscripts were added to the catalogue. In 1719, at the time of the verification of holdings following the appointment of Abbot Bignon, the manuscript repository appeared to be made up of an 'old fonds', organised by language (Greek, Latin, etc.), and of a 'new fonds' which juxtaposed part of the private collections which had been acquired over the previous forty years, from Mézeray (1683) to Baluze (1719) 69 . Incorporations into Clément's framework continued, but in the mid-1730s the rapid growth in the number of collections, combined with the confusion caused by repeated insertions of new items within the catalogue, led to a reform of the principle guiding incorporation. 
From this date onwards, « collections composed of a fairly considerable number of volumes were kept intact and formed separate fonds », whereas those acquired singly or in 65 small groups were put into the New Acquisitions series 70 . Among these new acquisitions was the Sautereau collection 71 , added in 1743 and made up of the thirty-five volumes of the inventory of acts of the Grenoble Chambre des Comptes. This general principle did not mean that the collections had been incorporated in one fell swoop by the royal institution. Of course, the most prestigious collections maintained their integrity. The 363 volumes of the Brienne collection, magnificently bound in red Morocco leather, were never absorbed into the royal series. Colbert's State papers, acquired in 1732, were for the most part organised into homogeneous collections which preserved their individuality: the Doat collection (258 volumes), made up of copies of deeds which president Jean de Doat had ordered done in Languedoc; the collection of copies brought together by Denis Godefroy in Flanders (182 volumes) ; and a collection of more than five hundred volumes of Colbert's work files (the Cinq cents de Colbert) 72 . To arrange Étienne Baluze's « literature papers », the librarians requisitioned, from his heir, the seven armoires in which Baluze had kept his papers and correspondence. After an unsuccessful attempt by Jean Boivin to reclassify this material in a totally new order, Abbot de Targny went back to a system close to the original 73 . New collections were more often than not reorganised according to the overall rationale of the Bibliothèque royale and of its departments. I will give just one example, that of the library of Gaspard de Fontanieu, former intendant of Dauphiné, sold to the king in 1765. In addition to printed documents and manuscripts, it contains a large collection of fugitive pieces which included both manuscripts and printed documents (366 volumes), and a series of portfolios of original deeds, copies from the Bibliothèque royale and diverse repositories, work notes and printed items relating to historical documents (881 volumes). As one contemporary memoir explains, these three collections (manuscripts, collected works and portfolios) « have between them a link which is intimate enough for them not to be separated » 74 . In particular, the to-ing and fro-ing between the two constitutes a « sort of mechanism [which] offers a prodigious facility for research ». However, integration into the library led to a dual alteration to the collection's intellectual economy. First of all, as had already been the case with Morel de Thoisy's and Lancelot's libraries, sets of fugitive items were sent to the printed books department, whilst the portfolio 70 L. Delisle, Inventaire des manuscrits latins (Paris: Durand et Pedone-Lauriel, 1863), 3. The oriental, Greek and Latin manuscripts were allocated new numbers as independent series, retroactively including private fonds (BnF, NAF 5412, 5413-5414). 71 BnF, NAF 5427, 1. 72 In addition there were volumes and bundles which were combined and bound in the middle of the nineteenth century, thus becoming what is known as the Mélanges Colbert collection. The six thousand manuscripts were divided among the existing linguistic series. 73 Without managing to locate all items after the mess caused by the move, whence the existence of an « Uncertain armoires » category. 
For more on these operations, see Lucien Auvray, "La Collection Baluze à la Bibliothèque nationale", Bibliothèque de l'École des chartes, 81, (1920), 93-174. On the armoires, BnF, Arch. adm., AR 61, 92. 74 Cited in Omont, Inventaire sommaire, 8-11. series remained in the manuscripts section 75 . Secondly, the portfolios were partly split, and the printed documents were removed and transferred to the printed books, something that twenty years later Le Prince was to describe as « unfortunate » 76 . The ultimate librarians' device to blur the continuity between the old collections and the new was the renumbering of manuscripts. Yet, whilst this operation gave the librarians absolute discretionary power, Le Prince hints on several occasions at the existence of concordance tables which made it possible to navigate from old numbers to new, even if there is no evidence to show that these tables were available to scholars 77 . Also very striking were the vestiges of the old 'book order' in the new library configuration. It could be seen in the way volumes were designated, for example in the registers of loans. Even when they were allocated a new number in the royal series, manuscripts frequently appeared under their original numbers. The collection of copies of the Louvois dispatches appeared under n° 24 in the Louvois fonds entered in the Bibliothèque royale in 1718. It became ms. 9350 (A.B) in the royal series but was listed as « vol. 24 n° 9350 A.B. Louvois manuscripts » when borrowed by Mr. Coquet in 1737 78 . The old book order thus retained a practical significance for both scholars and librarians. It had even more meaning in that the instruments used for orientation within the royal collections often dated back to before the acquisition, since they had been redacted by scholars for their private use: to identify manuscripts from the Brienne collection which would be useful for their research, in 1753 Abbot de Beauchesne and Abbot Quesnel borrowed the two volumes of the inventory kept in the Louvois fonds, whilst Cardinal de Soubise used the catalogue from the Lancelot fonds 79 . In some ways, these collections continued to constitute a library sub-system within the royal establishment. Paris' Royal Library is probably a borderline case. The power of the institution and the absence of any central royal archives combined to make it a para-archival entity recognised as such by its contemporaries. Further research is needed to assess the peculiarity of the French case in comparison to other similar insitutions. Nonetheless, this case study serves as a reminder that in Paris and elsewhere, in the early modern era the 'taste for archives' was significantly developed in libraries -as it still is today. The 75 The collections were then distributed by theme within the printed document fonds: see the Catalogue des livres imprimés de la bibliothèque du roi, 6 vols. (Paris: Imprimerie royale, 1739-53). The Lancelot portfolios were transferred back to the manuscript department in the 19 th century (BnF, NAF 9632-9826). 76 In the nineteenth century, Champollion-Figeac once again separated the original items from the copies, bound in six volumes following on from the collection. 77 Emmanuelle Chapron, « The « Supplement to All Archives » : the Bibliothèque Royale of Paris in the Eighteenth-Century », Storia della storiografia, 68/2, 2015, p. 53-68. 
The « supplement to all archives »: the Bibliothèque royale of Paris in the eighteenth century Emmanuelle Chapron Aix Marseille univ, CNRS, Telemme, Aix-en-Provence, France 22 L. Delisle, Le Cabinet des manuscrits de la Bibliothèque impériale (Paris: Imprimerie impériale, 1868), 268. 23 C. Figliuzzi, "Antoine et Henri-Auguste de Loménie, secrétaires d'État de la Maison du Roi sous Henri IV et Louis XIII: carrière politique et ascension sociale" (École des chartes, thesis, 2012). 24 See D. Feutry's exemplary study, Un magistrat entre service du roi et stratégies familiales. Guillaume-François Joly de Fleury (1675-1756) (Paris: École des chartes, 2011), 15-35. The Joly de Fleury collection became part of the Library in 1836. 30 On Varillas' work, S. Uomini, Cultures historiques dans la France du XVII e siècle (Paris: L'Harmattan, 1998), 368-375. 31 F. Bléchet, "L'abbé Bignon, bibliothécaire du roi et les milieux savants en France au début du XVIII e siècle", Buch und Sammler. Private und öffentliche Bibliotheken im 18. Jahrhundert (Heidelberg: C. Winter, 1979), 53-66. 32 D. Feutry, "Mémoire du roi, mémoire du droit. Le procureur général Guillaume-François Joly de Fleury et le transport des registres du Parlement de Paris, 1729-1733", Histoire et archives, 20, (2006), 19-40. 33 BnF, Archives administratives, Ancien Régime [now Arch. adm., AR] 59, 270. The collection was finally bought by the Library in 1754. 34 P.-M. Bondois, "Le procureur général Joly de Fleury et les papiers de Du Cange (1743)", Bibliothèque de l'École des chartes, 89, (1928), 81-88. The memorials are registers containing transcriptions of the letters patent relating to the administration of the finances and of the Domain. On the death of Du Cange (1688) the collection was dispersed; it was later reconstituted by his grandnephew Dufresne d'Aubigny. It became part of the Bibliothèque royale in 1756. 61 BnF, Arch. adm., AR 123. 62 Le Prince, Essai historique, 151. 63 BnF, Arch. adm. AR 65, 304. They contain a regency project written by first president Mr. de Harlay, a memorandum by chancellor Voisin on the Regency and a chronicle of the Regency. 64 BnF, Arch. adm. AR 65, 67, Maurepas to Bignon, 17 August 1726. 65 Le Prince, Essai historique, 156. 66 Respectively BnF, NAF 5427, 118 and Peignot, Collection, 415. Map in J. F. Blondel, Architecture française, 4 vols. (Paris: Jombert, 1752-1756), 3: 67-80. 67 The formula, coined in B. Delmas, D. Margairaz and D. Ogilvie, eds. De l'Ancien Régime à l'Empire, is particularly apt for a period when archival theory and vocabulary were still fluid. 68 Pierre de Carcavy, "Mémoire de la quantité des livres tant manuscrits qu'imprimez, qui estoyent dans la Bibliothèque du Roy avant que Monseigneur en ayt pris le soing [1666]", published in Jean Porcher, "La bibliothèque du roi rue Vivienne", Fédération des sociétés historiques et archéologiques de Paris et de l'Ile-de-France. Mémoires, 1, (1949), 237-246. BnF, NAF 5402, Catalogus librorum manuscriptorum… [1682]. 69 BnF, Arch. adm., AR 65, 7bis. Only the Brienne collection remains intact within the old fonds. 
Le Prince, Essai historique, 155-156. 78 BnF, Arch. adm., AR 123. There are numerous examples. See also « ms de Gagnière n° 131 et nouveau n° du Roy 1245 » borrowed by Dom Duval in 1741. 79 BnF, Arch. adm., AR 123, 45-46. The first is the Le Tellier-Louvois 101 and 102 manuscript, now ms. fr. 4259-4260. historicisation of scholarly practices, will help to renew questions relating to the history of libraries, just as they have done for the history of archives.
55,116
[ "13055" ]
[ "56663", "199918", "198056" ]
01487333
en
[ "info", "scco" ]
2024/03/04 23:41:48
2006
https://hal.science/hal-01487333/file/TIME2006.pdf
Jean-Franc ¸ois Condotta Gérard Ligozat email: [email protected] Mahmoud Saade email: [email protected] A Generic Toolkit for n-ary Qualitative Temporal and Spatial Calculi Keywords: Temporal and spatial reasoning is a central task for numerous applications in many areas of Artificial Intelligence. For this task, numerous formalisms using the qualitative approach have been proposed. Clearly, these formalisms share a common algebraic structure. In this paper we propose and study a general definition of such formalisms by considering calculi based on basic relations of an arbitrary arity. We also describe the QAT (the Qualitative Algebra Toolkit), a JAVA constraint programming library allowing to handle constraint networks based on those qualitative calculi. Introduction Numerous qualitative constraint calculi have been developed in the past in order to represent and reason about temporal and spatial configurations. Representing and reasoning about spatial and temporal information is an important task in many applications, such as computer vision, geographic information systems, natural language understanding, robot navigation, temporal and spatial planning, diagnosis and genetics. Qualitative spatial and temporal reasoning aims to describe non-numerical relationships between spatial or temporal entities. Typically a qualitative calculus [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF][START_REF] Randell | A spatial logic based on regions and connection[END_REF][START_REF] Ligozat | Reasoning about cardinal directions[END_REF][START_REF] Pujari | INDU: An interval and duration network[END_REF][START_REF] Isli | A new approach to cyclic ordering of 2D orientations using ternary relation algebras[END_REF] uses some particular kind of spatial or temporal objects (e.g. subsets in a topological space, points on the rational line, intervals on the rational line) to represent the spatial or temporal entities of the system, and focuses on a limited range of relations between these objects (such as topological relations between regions or precedence between time points). Each of these relations refers to a particular temporal or spatial configuration. For instance, in the field of qualitative reasoning about temporal data, consider the well known formalism called Allen's calculus [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF]. It uses intervals of the rational line for representing temporal entities. Thirteen basic relations between these intervals are used to represent the qualitative situation between temporal entities. An interval can be before the other one, can follow the other one, can end the other one, and so on. The thirteen basic relations are JEPD (jointly exhaustive and pairwise disjoint), which means that each pair of intervals satisfies exactly one basic relation. Constraint networks called qualitative constraint networks (QCNs) are usually used to represent the temporal or spatial information about the configuration of a specific set of entities. Each constraint of a QCN represents a set of acceptable qualitative configurations between some temporal or spatial entities and is defined by a set of basic relations. The consistency problem for QCNs consists in deciding whether a given network has instantiations satisfying the constraints. 
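As a concrete illustration of the JEPD property mentioned above, the short Java fragment below determines which of Allen's thirteen basic relations holds between two given intervals; exactly one branch applies to any pair of proper intervals. It is a self-contained illustration whose class and method names are our own, independent of the QAT library presented later.

```java
// Illustrative only (names are ours): classifies the unique Allen basic
// relation holding between two proper intervals [s1, e1] and [s2, e2].
public final class AllenRelations {

    public static String basicRelation(long s1, long e1, long s2, long e2) {
        if (s1 >= e1 || s2 >= e2) throw new IllegalArgumentException("improper interval");
        if (e1 < s2)  return "before";
        if (e2 < s1)  return "after";
        if (e1 == s2) return "meets";
        if (e2 == s1) return "met-by";
        if (s1 == s2 && e1 == e2) return "equals";
        if (s1 == s2) return e1 < e2 ? "starts"   : "started-by";
        if (e1 == e2) return s2 < s1 ? "finishes" : "finished-by";
        if (s1 < s2 && e2 < e1) return "contains";
        if (s2 < s1 && e1 < e2) return "during";
        if (s1 < s2 && e1 < e2) return "overlaps";
        return "overlapped-by";   // remaining case: s2 < s1 and e2 < e1
    }

    public static void main(String[] args) {
        // Exactly one basic relation holds for each pair (JEPD):
        System.out.println(basicRelation(0, 3, 1, 5));  // prints "overlaps"
        System.out.println(basicRelation(0, 2, 2, 5));  // prints "meets"
    }
}
```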
In order to solve it, methods based on local constraint propagation algorithms have been defined, in particular methods based on various versions of the path consistency algorithm [START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF][START_REF] Mackworth | The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problem[END_REF]. In the literature most qualitative calculi are based on basic binary relations. These basic relations are always JEPD. Moreover, the operators of intersection, of composition and of inverse used for reasoning with these relations are always defined in a similar way. Hence we can assert that these qualitative calculi share the same structure. Recently, non binary qualitative calculi have been proposed. The difference between binary calculi and non binary calculi resides in the fact that new operators are necessary for the non binary case, namely the operator of permutation and the operator of rotation. In this paper we propose and study a very general definition of a qualitative calculus. This definition subsumes all qualitative calculi used in the literature. Moreover, to our knowledge, implementations and software tools have only been developed for individual calculi. The QAT (Qualitative Algebra Toolkit) has been conceived as a remedy to this situation. Specifically, the QAT is a JAVA constraint programming library developed at CRIL-CNRS at the University of Artois. It aims to provide open and generic tools for defining and manipulating qualitative algebras and qualitative networks based on these algebras. This paper is organized as follows. In Section 2, we propose a formal definition of a qualitative calculus. This definition is very general and it covers formalisms based on basic relations of an arbitrary arity. Section 3 is devoted to qualitative constraint networks. After introducing the QAT library in Section 4, we conclude in Section 5. A general definition of Qualitative Calculi Relations and fundamental operations A qualitative calculus of arity n (with n > 1) is based on a finite set B = {B 1 , . . . , B k } of k relations of arity n defined on a domain D. These relations are called basic relations. Generally, k is a small integer and the set D is an infinite set, such as the set N of the natural numbers, the set Q of the rational numbers, the set of real numbers, or, in the case of Allen's calculus, the set of all intervals on one of these sets. We will denote by U the set of n-tuples on D, that is, elements of D n . Moreover, given an element x belonging to U and an integer i ∈ {1, . . . , n}, x i will denote the element of D corresponding to the i th component of x. The basic relations of B are complete and jointly exclusive, in other words, the set B must be a partition of U = D n , hence we have: Property 1 B i ∩ B j = ∅, ∀ i, j ∈ {1, . . . , k} such that i = j and U = i∈{1,...,k} B i . Given a set B of basic relations, we define the set A as the set of all unions of the basic relations. Formally, the set A is defined by A = { B : B ⊆ B}. In the binary case, the various qualitative calculi considered in the literature consider a particular basic relation corresponding to the identity relation on D. We generalise this by assuming that a qualitative calculus of arity n satisfies the following property: Property 2 ∀ i, j ∈ {1, . . . , n} such that i = j, ∆ ij ∈ A with ∆ ij = {x ∈ U : x i = x j }. 
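Since k is small and every relation of A is a union of basic relations, a relation can be represented in practice as a bit vector indexed by B, so that union and intersection reduce to bitwise operations. The following Java fragment is a minimal sketch of this encoding under the assumption that the basic relations are numbered 0 to k-1; it is illustrative only and does not reproduce the data structures of the QAT library described later.

```java
import java.util.BitSet;

/** Minimal sketch: a relation of A encoded as the set of indices of the
 *  basic relations it contains (well defined because of Property 1). */
public final class QualitativeRelation {
    private final BitSet basics;   // bit i set  <=>  basic relation B_i is included
    private final int k;           // number of basic relations of the calculus

    public QualitativeRelation(int k) { this.k = k; this.basics = new BitSet(k); }

    public void add(int basicIndex) { basics.set(basicIndex); }

    /** Union of two relations of A (also a relation of A). */
    public QualitativeRelation union(QualitativeRelation other) {
        QualitativeRelation r = new QualitativeRelation(k);
        r.basics.or(this.basics);
        r.basics.or(other.basics);
        return r;
    }

    /** Intersection of two relations of A. */
    public QualitativeRelation intersect(QualitativeRelation other) {
        QualitativeRelation r = new QualitativeRelation(k);
        r.basics.or(this.basics);
        r.basics.and(other.basics);
        return r;
    }

    /** The universal relation U = B_0 ∪ ... ∪ B_{k-1}. */
    public static QualitativeRelation universal(int k) {
        QualitativeRelation r = new QualitativeRelation(k);
        r.basics.set(0, k);
        return r;
    }

    public boolean isEmpty() { return basics.isEmpty(); }
}
```

Under such an encoding, the unary operations introduced next (permutation and rotation) and the qualitative composition become simple look-ups in precomputed tables indexed by basic relations.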
Note that the relations ∆ ij are called diagonal elements in the context of cylindric algebras [START_REF] Hirsch | Relation Algebras by Games[END_REF]. Given a non empty set E ⊆ {1, . . . , n} × {1, . . . , n} such that for all (i, j) ∈ E we have i = j, ∆ E will denote the relation {∆ ij : (i, j) ∈ E}. We note that from Property 1 and Property 2 we can deduce that ∆ E ∈ A. Hence, the relation of identity on U, denoted by Id n , which corresponds to ∆ {(i,i+1):1≤i≤n-1} , belongs to A. In the sequel we will see how to use the elements of A to define particular constraint networks called qualitative constraint networks. Several fundamental operations on A are necessary for reasoning with these constraint networks, in particular, the operation of permutation, the operation of rotation and the operation of qualitative composition also simply (and wrongly) called composition or weak composition [START_REF] Balbiani | On the consistency problem for the INDU calculus[END_REF][START_REF] Ligozat | What is a qualitative calculus? a general framework[END_REF]. In the context of qualitative calculi, the operations of permutation and rotation have been introduced by Isli and Cohn [START_REF] Isli | A new approach to cyclic ordering of 2D orientations using ternary relation algebras[END_REF] for a formalism using ternary relations on cyclic orderings. These operations are unary operations which associate to each element of A a relation belonging to U. They can be formally defined in the following way: Definition 1. Let R ∈ A. The permutation and the rotation of R, denoted by R and R respectively, are defined as follows: -R = {(x 1 , . . . , x n-2 , x n , x n-1 ) : (x 1 , . . . , x n ) ∈ R} (Permutation), -R = {(x 2 , . . . , x n , x 1 ) : (x 1 , . . . , x n ) ∈ R} (Rotation). In the binary case, these operations coincide and correspond to the operation of converse. To our knowledge, all binary qualitative calculi satisfy the property that the converse relation of any basic relation is a basic relation. A similar property is required in the general case: Property 3 For each relation B i ∈ B we have B i ∈ B and B i ∈ B. These operations satisfy the following properties: For binary relations, the operation of composition is a binary operation which associates to two relations R 1 and R 2 the relation •(R 1 , R 2 ) = {(x 1 , x 2 ) : ∃u ∈ D with (x 1 , u) ∈ R 1 and (u, x 2 ) ∈ R 2 }. For several qualitative calculi of arity n = 2 the composition of two relations R 1 , R 2 ∈ A is not necessarily a relation of A (consider for example the interval algebra on the intervals defined on the integers). A weaker notion of composition is used. This operation, denoted in the sequel by ⋄, and called qualitative composition, is by definition the smallest relation (w.r.t. inclusion) of A containing all the elements of the bona fide composition : ⋄(R 1 , R 2 ) = {R ∈ A : •(R 1 , R 2 ) ⊆ R}. For an arbitrary arity n, composition and qualitative composition can be defined in the following way: Definition 2. Let R 1 , . . . , R n ∈ A. -•( R 1 , . . . , R n ) = {( x 1 , . . . , x n ) : ∃u ∈ D, ( x 1 , . . . , x n-1 , u) ∈ R 1 , (x 1 , . . . , x n-2 , u, x n ) ∈ R 2 , . . . , (u, x 2 , . . . , x n ) ∈ R n }, -⋄(R 1 , . . . , R n ) = {R ∈ A : •(R 1 , . . . , R n ) ⊆ R}. Note that we use the usual definition of the polyadic composition for the operation •. Both operations are characterized by their restrictions to the basic relations of B. Indeed, we have the following properties: Proposition 2. Let R 1 , . . . , R n ∈ A. 
-•(R 1 , . . . , R n ) = ∪{•(A 1 , . . . , A n ) : A 1 ∈ B, . . . , A n ∈ B and A 1 ⊆ R 1 , . . . , A n ⊆ R n }; -⋄(R 1 , . . . , R n ) = ∪{⋄(A 1 , . . . , A n ) : A 1 ∈ B, . . . , A n ∈ B and A 1 ⊆ R 1 , . . . , A n ⊆ R n }. Another way to define the qualitative composition is given by the following proposition: Proposition 3. Let R 1 , . . . , R n ∈ A. ⋄(R 1 , . . . , R n ) = {A ∈ B : ∃x 1 , . . . , x n , u ∈ D, ∃A 1 , . . . , A n ∈ B with (x 1 , . . . , x n ) ∈ A, (x 1 , . . . , x n-1 , u) ∈ A 1 , (x 1 , . . . , x n-2 , u, x n ) ∈ A 2 , . . . , (u, x 2 , . . . , x n ) ∈ A n , A 1 ⊆ R 1 , . . . , A n ⊆ R n }. Hence, tables giving the qualitative composition, the rotation and the permutation of basic relations can be used for computing efficiently these operations for arbitrary relations of A. Finally, we have the following properties, which generalize the usual relationship of composition with respect to converse in the binary case: Proposition 4. Let R 1 , . . . , R n ∈ A and OP ∈ {•, ⋄}. -OP(∅, R 2 , . . . , R n ) = ∅ ; -OP(R 1 , . . . , R n ) = OP(R n , R 1 , R 2 , . . . , R n-1 ) ; -OP(R 1 , . . . , R n ) = OP(R 2 , R 1 , , R 3 . . . , , R n ). An example of a qualitative calculus of arity 3: the Cyclic Point Algebra This subsection is devoted to a qualitative calculus of arity 3 known as the Cyclic Point Algebra [START_REF] Isli | A new approach to cyclic ordering of 2D orientations using ternary relation algebras[END_REF][START_REF] Balbiani | Reasoning about cyclic space: Axiomatic and computational aspects[END_REF]. The entities considered by this calculus are the points on an oriented circle C. We will call these points cyclic points. Each cyclic point can be characterised by a rational number belonging to the interval [0, 360[. This number corresponds to the angle between the horizontal line going through the centre of C. Hence, for this calculus, D is the set of the rational numbers {q ∈ Q : 0 ≤ q < 360}. In the sequel we assimilate a cyclic point to the rational number representing it. Given two cyclic points x, y ∈ D, [[x, y]] will denote the set of values of D corresponding to the cyclic points met between x and y when travelling on the circle counter-clockwise. The basic relations of the Cyclic Point Algebra is the set of the 6 relations {B abc , B acb , B aab , B baa , B aba , B aaa } defined in the following way: B abc = {(x, y, z) ∈ D 3 : x = y, x = z, y = z and y ∈ [[x, z]]}, B acb = {(x, y, z) ∈ D 3 : x = y, x = z, y = z and z ∈ [[x, y]]}, B aab = {(x, x, y) ∈ D 3 : x = y}, B baa = {(y, x, x) ∈ D 3 : x = y}, B aba = {(x, y, x) ∈ D 3 : x = y}, B aaa = {(x, x, x) ∈ D 3 }. These 6 relations are shown in Figure 1. Based on theses basic relations, we get a set A containing 64 relations. Note that for these basic relations the operation of composition and the operation of qualitative composition are the same operations. Table 1 gives the qualitative composition of a subset of the basic relations. Using Proposition 2, we can compute other qualitative compositions which are not given in this table. For example, ⋄(B aab , B acb , B abc ) = ⋄(B aab , B abc , B acb ) = {B aab }. Actually, the table provides a way of computing any composition of basic relations, since all qualitative compositions which cannot be deduced from it in that way yield the empty relation. This is the case for example of the qualitative composition of B aaa with B abc , which is the empty relation. 
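Proposition 2 yields a direct procedure for computing the qualitative composition of arbitrary relations from the composition table of the basic relations: it suffices to take the union of the table entries over all tuples of basic relations contained in the arguments. The Java sketch below does this for a ternary calculus such as the Cyclic Point Algebra, with relations encoded as integer bitmasks over the basic relations; the table layout and class name are illustrative assumptions, not the QAT API. Fed with the table of the Cyclic Point Algebra, it would return {B aab } for ⋄(B aab , B abc , B acb ), as in the example above.

```java
/** Illustrative sketch: weak composition for a ternary calculus with k basic
 *  relations (k <= 31 here), each relation of A being a k-bit mask. */
public final class WeakComposition {
    private final int k;
    // table[a1][a2][a3] = bitmask of ⋄(B_a1, B_a2, B_a3) for basic relations.
    private final int[][][] table;

    public WeakComposition(int k, int[][][] table) {
        this.k = k;
        this.table = table;
    }

    /** ⋄(r1, r2, r3) computed from the basic-relation table (Proposition 2). */
    public int compose(int r1, int r2, int r3) {
        int result = 0;
        for (int a1 = 0; a1 < k; a1++) {
            if ((r1 & (1 << a1)) == 0) continue;      // B_a1 not in r1
            for (int a2 = 0; a2 < k; a2++) {
                if ((r2 & (1 << a2)) == 0) continue;  // B_a2 not in r2
                for (int a3 = 0; a3 < k; a3++) {
                    if ((r3 & (1 << a3)) == 0) continue;
                    result |= table[a1][a2][a3];      // union of table entries
                }
            }
        }
        return result;
    }
}
```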
Qualitative Constraint Networks Basic notions Typically, qualitative constraint networks (QCNs in short) are used to express information on a spatial or temporal situation. Each constraint of a constraint network represents a set of acceptable qualitative configurations between some temporal or spatial entities and is defined by a set of basic relations. Formally, a QCN is defined in the following way: Definition 3. A QCN is a pair N = (V, C) where: -V is a finite set of l variables {v ′ 0 , . . . , v ′ l-1 } (where l is a positive integer); -C is a map which to each tuple (v 0 , . . . , v n-1 ) of V n associates a subset C(v 0 , . . . , v n-1 ) of the set of basic relations: C(v 0 , . . . , v n-1 ) ⊆ B. C(v 0 , . . . , v n-1 ) are the set of those basic relations allowed between the variables v 0 ,. . . ,v n-1 . Hence, C(v 0 , . . . , v n-1 ) represents the relation of A corresponding to the union of the basic relations belonging to it. We use the following definitions in the sequel: -A scenario on a set of variables V ′ is an atomic QCN whose variables are the set V ′ . A consistent scenario of N is a scenario that admits a solution of N as a solution. Definition 4. Let N = (V, C) be a QCN with V = {v ′ 0 , . . . , v ′ l-1 }. -A partial instantiation of N on V ′ ⊆ V is a map α of V ′ on D. Such a partial instantiation is consistent if and only if (α(v 0 ), . . . , α(v n-1 )) ∈ C(v 0 , . . . , v n-1 ), for all v 0 , . . . , v n-1 ∈ V ′ . -A solution of N is a consistent partial instantiation on V . N -A QCN N ′ = (V ′ , C ′ ) is equivalent to N if and only if V = V ′ and both networks N and N ′ have the same solutions. -A sub-QCN of a QCN N = (V, C) is a QCN N ′ = (V, C ′ ) where: C ′ (v 0 , . . . , v n-1 ) ⊆ C(v 0 , . . . , v n-1 ) for all v 0 , . . . , v n-1 ∈ V . Moreover we introduce the definition of normalized QCNs which intuitively correspond to QCNs containing compatible constraints w.r.t. the fundamental operations of rotation and permutation. Definition 5. Let N be a QCN. Then N is normalized iff: -C(v 2 , . . . , v n , v 1 ) = C(v 1 , . . . , v n ) , -C(v 1 , . . . , v n-2 , v n , v n-1 ) = C(v 1 , . . . , v n ) , -C(v 1 , . . . , v i , . . . , v j , . . . , v n ) ⊆ ∆ ij , ∀ i, j ∈ {1, . . . , n} such that i = j and v i = v j . Given any QCN, it is easy to transform it into an equivalent QCN which is normalized. Hence we will assume that all QCNs considered in the sequel are normalized. Given a QCN N , the problems usually considered are the following: determining whether N is consistent, finding a solution, or all solutions, of N , and computing the smallest QCN equivalent to N . These problems are generally NP-complete problems. In order to solve them, various methods based on local constraint propagation algorithms have been defined, in particular the method which is based on the algorithms of path consistency [START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF][START_REF] Mackworth | The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problem[END_REF] which we will refer to as the ⋄-closure method. The ⋄-closure method This subsection is devoted to the topic of ⋄-closed QCNs. These QCNs are defined in the following way: Definition 6. Let N = (V, C) be a QCN. Then N is ⋄-closed iff C(v 1 , . . . , v n ) ⊆ ⋄(C(v 1 , . . . , v n-1 , v n+1 ), C(v 1 , . . . , v n-2 , v n+1 , v n ), . . . , C(v 1 , v n+1 , v 3 , . . . , v n ), C(v n+1 , v 2 , . . . , v n )), ∀v 1 , . . . 
, v n , v n+1 ∈ V . For qualitative calculi of arity two this property is sometimes called the path-consistency property or the 3-consistency property, wrongly so, since qualitative composition is in general weaker than composition (see [START_REF] Renz | Weak composition for qualitative spatial and temporal reasoning[END_REF] for a discussion of this subject). In the binary case, the usual local constraint propagation algorithms PC1 and PC2 [START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF][START_REF] Mackworth | The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problem[END_REF] have been adapted to the qualitative case for computing a sub-QCN which is ⋄-closed and equivalent to a given QCN. As an extension of PC1 to the n-ary case we define the algorithm PC1 n (see Algorithm 1; an executable sketch of this procedure for the ternary case is given further below). In brief, this algorithm iterates an operation (lines 7-8) which suppresses impossible basic relations from the constraints using weak composition and intersection. This operation is repeated until a fixpoint is reached. It can easily be checked that the QCN output by PC1 n is ⋄-closed and equivalent to the initial QCN used as input. In the binary case, a ⋄-closed QCN is not always 3-consistent but it is (0, 3)-consistent, which means, respectively, that we cannot always extend a partial solution on two variables to three variables, but that we know that all sub-QCNs on three variables are consistent. This last property can be extended to the n-ary case: Proposition 6. Let N = (V, C) be a QCN. If N is ⋄-closed then it is (0, n)-consistent. Note that in the same manner, we can extend PC2 to the n-ary case and prove similar results. Associating a binary qualitative calculus to a qualitative calculus of arity n Consider a qualitative calculus of arity n. There is actually a standard procedure for associating a binary calculus with it. Moreover, if a QCN is defined on the n-ary calculus, it can be represented by a QCN in the associated binary calculus. We now proceed to sketch this procedure. Consider a qualitative calculus with a set of basic relations B = {B 1 , . . . , B k } of arity n defined on D. We associate to it a qualitative formalism with a set of basic relations B ′ = {B ′ 1 , . . . , B ′ k ′ } of arity 2 defined on a domain D ′ in the following way: -D ′ is the set D n = U. Hence, each relation of B ′ is a subset of U ′ = D ′ × D ′ = D n × D n = U × U. -For each relation B i ∈ B, with 1 ≤ i ≤ k, a basic relation B ′ i is introduced in B ′ . B ′ i is defined by the relation {((x 1 , . . . , x n ), (x 1 , . . . , x n )) : (x 1 , . . . , x n ) ∈ B i }. Note that the set of relations B ′ P = {B ′ 1 , . . . , B ′ k } forms a partition of the relation of identity of D ′ , which we will denote by ∆ ′ 12 . -For all i, j ∈ {1, . . . , n} we define the relation E ij by E ij = {((x 1 , . . . , x n ), (x ′ 1 , . . . , x ′ n )) ∈ U ′ : x i = x ′ j } \ ∆ ′ 12 . Let E 0 = {E ij : i, j ∈ {1, . . . , n}}, and let E m with m > 0 be inductively defined by E m = {R 1 ∩ R 2 , R 1 \ (R 1 ∩ R 2 ), R 2 \ (R 1 ∩ R 2 ) : R 1 , R 2 ∈ E m-1 }. Let m ′ be the smallest integer such that E m ′ = E m ′ +1 , and let B ′ E = {R ∈ E m ′ : R ≠ ∅ and there is no R ′ ≠ ∅ in E m ′ with R ′ ⊂ R}. The relations of B ′ E are added to the set B ′ . -Let F be the binary relation on D ′ defined by F = U ′ \ (E ij ∪ B ′ P ). We add F to B ′ . Hence the final set of basic relations is the set B ′ = B ′ P ∪ B ′ E ∪ {F}. The reader can check that B ′ satisfies properties 1, 2 and 3 and hence defines a qualitative calculus of arity 2.
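Before moving on to the network-level conversion, here is the executable sketch of PC1 n announced above, specialised to the ternary case (n = 3). It is our own illustrative transcription of Algorithm 1, not the QAT implementation: constraints are sets of basic-relation names indexed by triples of variables, missing triples stand for the universal relation, and the qualitative composition of basic relations is supplied by the caller (for instance the enumeration-based function sketched earlier, or a precomputed table).

    from itertools import product

    def pc1_3(variables, constraints, weak_composition, all_basic):
        """Diamond-closure of a ternary QCN, a transcription of Algorithm 1 for n = 3.

        `constraints` maps triples of variables to sets of basic-relation names;
        missing triples stand for the universal relation `all_basic`.
        `weak_composition(b1, b2, b3)` returns the qualitative composition of three
        basic relations as a set of names.  Returns the refined constraints, or
        None as soon as a constraint becomes empty (the network is inconsistent).
        """
        C = dict(constraints)

        def get(triple):
            return C.get(triple, set(all_basic))

        def compose(R1, R2, R3):
            out = set()
            for b1, b2, b3 in product(R1, R2, R3):
                out |= weak_composition(b1, b2, b3)
            return out

        changed = True
        while changed:                       # lines 1 and 9: iterate to a fixpoint
            changed = False
            for v4 in variables:             # the extra variable v_{n+1}
                for v1, v2, v3 in product(variables, repeat=3):
                    current = get((v1, v2, v3))
                    refined = current & compose(get((v1, v2, v4)),   # C(v1, v2, v_{n+1})
                                                get((v1, v4, v3)),   # C(v1, v_{n+1}, v3)
                                                get((v4, v2, v3)))   # C(v_{n+1}, v2, v3)
                    if not refined:
                        return None
                    if refined != current:
                        C[(v1, v2, v3)] = refined
                        changed = True
        return C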
Now, consider a QCN N = (V, C) defined on B. Let us define an equivalent QCN N ′ = (V ′ , C ′ ) on B ′ : -To define V ′ , for each n-tuples of n variables (v 1 , . . . , v n ) of V we introduce a variable v ′ {v 1 ,...,vn} in B ′ . -Given a variable v ′ = v ′ {v 1 ,...,vn} belonging to V ′ we define C ′ (v ′ , v ′ ) by the relation {B ′ i : B i ∈ C(v 1 , . . . , v n )}. -Given two distinct variables v ′ i = v ′ {v i 1 ,...,v i n } and v ′ j = v ′ {v j 1 ,...,v j n } belonging to V ′ , C ′ (v ′ i , v ′ j ) is the relation E defined in the following way: let γ the set of pairs of integer defined by {(k, l) ∈ N × N : v i k = v j l }. E is the set of basic relations of B ′ (more precisely of B ′ E ) defined as the relation (k,l)∈γ E kl . The reader can check that N is a consistent QCN iff N ′ is a consistent QCN. This construction is inspired by the technique called dual encoding [START_REF] Bacchus | On the conversion between non-binary and binary constraint satisfaction problems[END_REF] used in the domain of discrete CSPs to convert n-ary constraints into binary constraints. 4 The Qualitative Algebra Toolkit (QAT) {B aab , B abc } v j v i v k v ′ ijk v ′ lim v ′ ijk {B ′ aab , B ′ abc } E 12 Clearly, all existing qualitative calculi share the same structure, but, to our knowledge, implementations and software tools have only been developed for individual calculi. The QAT (Qualitative Algebra Toolkit) has been conceived as a remedy to this situation. Specifically, the QAT is a JAVA constraint programming library developed at CRIL-CNRS at the University of Artois. It aims to provide open and generic tools for defining and manipulating qualitative algebras and qualitative networks based on these algebras. The core of QAT contains three main packages. In the sequel of this section we are going to present each of those packages. The Algebra package is devoted to the algebraic aspects of the qualitative calculi. While programs proposed in the literature for using qualitative formalisms are ad hoc implementations for a specific algebra and for specific solving methods, the QAT allows the user to define arbitrary qualitative algebras (including non-binary algebras) using a simple XML file. This XML file, which respects a specific DTD, contains the definitions of the different elements forming the algebraic structure of the qualitative calculus: the set of basic relations, the diagonal elements, the table of rotation, the table of permutation and the table of qualitative composition. 
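To make the ingredients of such a specification concrete, here is what they look like for the simplest binary example, the point algebra. This is a plain-Python illustration of the same information (basic relations, the diagonal/identity element, converse, which in the binary case plays the role of rotation and permutation, and the composition table); it is deliberately not the QAT XML format, whose DTD is not reproduced here.

    # Ingredients of a qualitative algebra specification, illustrated on the
    # Point Algebra with basic relations {<, =, >}.
    POINT_ALGEBRA = {
        "basic": {"<", "=", ">"},
        "identity": {"="},                          # the diagonal element
        "converse": {"<": ">", "=": "=", ">": "<"},
        "composition": {
            ("<", "<"): {"<"},           ("<", "="): {"<"},  ("<", ">"): {"<", "=", ">"},
            ("=", "<"): {"<"},           ("=", "="): {"="},  ("=", ">"): {">"},
            (">", "<"): {"<", "=", ">"}, (">", "="): {">"},  (">", ">"): {">"},
        },
    }

    def compose(R, S, algebra=POINT_ALGEBRA):
        """Qualitative composition of two relations (sets of basic relations),
        obtained by looking up the table for every pair of basic relations."""
        table = algebra["composition"]
        return {c for a in R for b in S for c in table[(a, b)]}

    # e.g. compose({"<", "="}, {"<"}) == {"<"}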
We defined this XML file for many qualitative calculi of the literature: the interval algebra [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF], the point algebra [START_REF] Vilain | Constraint Propagation Algorithms for Temporal Reasoning[END_REF], the cyclic point algebra [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF], the cyclic interval algebra [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF], the rectangle algebra [START_REF] Balbiani | A new tractable subclass of the rectangle algebra[END_REF], the INDU algebra [START_REF] Pujari | INDU: An interval and duration network[END_REF], the multidimensional algebra [START_REF] Balbiani | Spatial reasoning about points in a multidimensional setting[END_REF], the RCC-5 algebra [START_REF] Randell | A spatial logic based on regions and connection[END_REF], the RCC-8 algebra [START_REF] Randell | A spatial logic based on regions and connection[END_REF], the cardinal direction algebra [START_REF] Ligozat | Reasoning about cardinal directions[END_REF]). Tools allowing to define a qualitative algebra as the Cartesian Product of other qualitative algebras are also available. The QCN package contains tools for defining and manipulating qualitative constraint networks on any qualitative algebra. As for the algebraic structure, a specific DTD allows the use of XML files for specifying QCNs. The XML file lists the variables and relations defining the qualitative constraints. Functionalities are provided for accessing and modifying the variables of a QCN, its constraints and the basic relations they contain. Part of the QCN package is devoted to the generation of random instances of QCNs. A large amount of the research about qualitative calculi consists in the elaboration of new algorithms to solve QCNs. The efficiency of these algorithms must be validated by experimentations on instances of QCNs. Unfortunately, in the general case there does not exist instances provided by real world problems. Hence, the generation of random instances is a necessary task [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF]. The QCN package of the QAT provides generic models allowing to generate random instances of QCNs for any qualitative calculus. The Solver package contains numerous methods to solve the main problems of interest when dealing with qualitative constraint networks, namely the consistency problem, the problem of finding one or all solutions, and the minimal network problem. All these methods are generic and can be applied to QCNs based on arbitrary qualitative calculi. They make use of the algebraic aspect of the calculus without considering the semantics of the basic relations. In other words, they make abstraction of the definitions of the basic relations and only uniquely manipulate the symbols corresponding to these relations. Nevertheless, by using the object-oriented concept, it is very easy to particularize a solving method to a specific qualitative algebra or a particular kind of relations. We implemented most of the usual solving methods, such as the standard generate and test methods, search methods based on backtrack and forward checking, and constraint local propagation methods. The user can configure these different methods by choosing among a range of heuristics. 
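As a sketch of how such search methods can be organised (our own minimal version, not the QAT code), the function below enumerates scenarios by repeatedly splitting a non-atomic constraint into one of its basic relations, using a closure procedure such as the PC1 n sketch above as a pruning step. Note that an algebraically closed atomic network is not guaranteed to be consistent for every calculus, so for some calculi this search only provides a refutation-complete filter rather than a decision procedure. The `select` parameter is where the constraint-ordering heuristics discussed next can be plugged in.

    def search_scenario(constraints, closure, select=None):
        """Backtracking search for an algebraically closed atomic refinement.

        `constraints` maps variable tuples to sets of basic-relation names.
        `closure(constraints)` must return a refined, closed copy of the network
        or None on detected inconsistency (e.g. the pc1_3 sketch above with its
        remaining arguments already bound).  `select(candidates, network)` picks
        the next non-atomic constraint to split; by default the one with the
        fewest remaining basic relations.  Returns an atomic closed network, or
        None if every branch is refuted.
        """
        refined = closure(constraints)
        if refined is None:
            return None
        candidates = [t for t, rel in refined.items() if len(rel) > 1]
        if not candidates:
            return refined                    # every recorded constraint is atomic
        pick = select or (lambda cands, net: min(cands, key=lambda t: len(net[t])))
        t = pick(candidates, refined)
        for basic in sorted(refined[t]):      # a value-ordering heuristic could go here
            attempt = dict(refined)
            attempt[t] = {basic}
            result = search_scenario(attempt, closure, select)
            if result is not None:
                return result
        return None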
These heuristics are related to the choice of the variables or constraints to be scanned, and to the choice of the basic relations examined within a constraint during a search. The order in which the constraints are selected and the order in which the basic relations of the selected constraint are examined can greatly affect the performance of a backtracking algorithm [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF]. The idea behind constraint ordering heuristics is to instantiate the most restrictive constraints first. The idea behind ordering the basic relations is to order the basic relations of a constraint so that the value most likely to lead to a solution is the first one selected. The QAT allows the user to implement new heuristics based on existing ones. As for local constraint propagation methods, whereas in discrete CSPs arc consistency is widely used [START_REF] Apt | Principles of Constraint Programming[END_REF], path consistency is the most efficient and most frequently used kind of local consistency in the domain of qualitative constraints. More exactly, the methods used are based on local constraint propagation using qualitative composition, in the manner of the PC1 n algorithm described in the previous section. In addition to PC1 n , we have extended and implemented algorithms based on PC2 [START_REF] Bessière | A Simple Way to Improve Path Consistency Processing in Interval Algebra Networks[END_REF].
Conclusions We propose and study a general formal definition of qualitative calculi based on basic relations of arbitrary arity. This unifying definition allows us to capture the algebraic structure of all qualitative calculi in the literature. The main elements of the algebraic structure are the diagonal elements and the operations of permutation, rotation and qualitative composition. We give a transformation allowing one to build a qualitative calculus based on binary basic relations from a qualitative calculus based on arbitrary basic relations; the expressive powers of both calculi are similar. Moreover, we generalize the constraint propagation method PC1 to the general case, i.e. to relations of any arity. In a second part we describe the QAT 1 (Qualitative Algebra Toolkit), a JAVA constraint programming library for handling constraint networks defined on arbitrary n-ary qualitative calculi. This toolkit provides algorithms for solving the consistency problem and related problems, as well as most of the heuristics used in the domain. QAT is implemented using object-oriented technology. Hence, it is an open platform, and its functionalities are easily extendable: new heuristics and new methods can be defined and tested. Among the tools it provides are classes for generating and using benchmarks of qualitative networks, so new heuristics and new solving algorithms can be conveniently evaluated.
Proposition 1. Let R ∈ A. The rotation and the permutation of R are obtained from the basic relations it contains: the rotation of R is the union of the rotations of the basic relations B ∈ B with B ⊆ R, and the permutation of R is the union of the permutations of the basic relations B ∈ B with B ⊆ R.
Proposition 5. The time complexity of Algorithm PC1 n is O(|V | (n+1) ), where |V | is the number of variables of the QCN and n is the arity of the calculus. Moreover, applying the algorithm PC1 n to a normalized QCN N yields a QCN which is normalized, ⋄-closed, and equivalent to N .
Algorithm 1 PC1 n : compute the ⋄-closure of a QCN N = (V, C)
1: Do
2: N ′ := N
3: For each v n+1 ∈ V Do
4: For each v 1 ∈ V Do
5: . . .
6: For each v n ∈ V Do
7: C(v 1 , . . . , v n ) := C(v 1 , . . . , v n ) ∩
8: ⋄(C(v 1 , . . . , v n-1 , v n+1 ), C(v 1 , . . . , v n-2 , v n+1 , v n ), . . . , C(v n+1 , v 2 , . . . , v n ))
9: Until (N == N ′ )
10: return N
Fig. 1. The 6 basic relations of the Cyclic Point Algebra.
Fig. 2. Converting a ternary constraint C ijk of the cyclic point algebra into a binary constraint (left). Expressing a structural constraint between v ′ ijk and v ′ lim for distinct integers i, j, k, l, m (right).
Table 1. The qualitative composition of the Cyclic Point Algebra (a subset of the entries; see the text):
⋄(Baaa, Baaa, Baaa) = {Baaa}
⋄(Baaa, Baab, Baab) = {Baab}
⋄(Baab, Baba, Bbaa) = {Baaa}
⋄(Baab, Babc, Bacb) = {Baab}
⋄(Baab, Bbaa, Baba) = {Baab}
⋄(Baab, Bacb, Babc) = {Baab}
⋄(Baba, Baab, Babc) = {Babc}
⋄(Babc, Babc, Bacb) = {Babc}
Table 2. The permutation and the rotation operations of the Cyclic Point Algebra:
B: Baaa, Baab, Baba, Bbaa, Babc, Bacb
permutation of B: Baaa, Baba, Baab, Bbaa, Bacb, Babc
rotation of B: Baaa, Baba, Bbaa, Baab, Babc, Bacb
1 The documentation and the source of the QAT library can be found at http://www.cril.univ-artois.fr/˜saade/QAT.
31,866
[ "1142762", "998171" ]
[ "56711", "247329", "56711" ]
01487411
en
[ "info", "scco" ]
2024/03/04 23:41:48
2006
https://hal.science/hal-01487411/file/wecai06.pdf
Jean-François Condotta Gérard Ligozat email: [email protected] Mahmoud Saade email: [email protected] Empirical study of algorithms for qualitative temporal or spatial constraint networks Representing and reasoning about spatial and temporal information is an important task in many applications of Artificial Intelligence. In the past two decades numerous formalisms using qualitative constraint networks have been proposed for representing information about time and space. Most of the methods used to reason with these constraint networks are based on the weak composition closure method. The goal of this paper is to study some implementations of these methods, including three well known and very used implementations, and two new ones. Introduction Representing and reasoning about spatial and temporal information is an important task in many applications, such as geographic information systems (GIS), natural language understanding, robot navigation, temporal and spatial planning. Qualitative spatial and temporal reasoning aims to describe non-numerical relationships between spatial or temporal entities. Typically a qualitative calculus [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF][START_REF] Randell | A spatial logic based on regions and connection[END_REF][START_REF] Ligozat | Reasoning about cardinal directions[END_REF][START_REF] Arun | Indu: An interval and duration network[END_REF][START_REF] Isli | A new approach to cyclic ordering of 2D orientations using ternary relation algebras[END_REF] uses some particular kind of spatial or temporal objects (subsets in a topological space, points on the rational line, intervals on the rational line,...) to represent the spatial or temporal entities of the system, and focuses on a limited range of relations between these objects (such as topological relations between regions or precedence between time points). Each of these relations refers to a particular temporal or spatial configuration. For instance, consider the well-known temporal qualitative formalism called Allen's calculus [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF]. It uses intervals of the rational line for representing temporal entities. Thirteen basic relations between these intervals are used to represent the qualitative situation between temporal entities (see Figure 1). For example, the basic relation overlaps can be used to represent the situation where a first temporal activity starts before a second activity and terminates while the latter is still active. Now the temporal or spatial information about the configuration of a specific set of entities can be represented using a particular kind of constraint networks called qualitative constraint networks (QCNs). Each constraint of a QCN represents a set of acceptable qualitative configurations between some temporal or spatial entities and is defined by a set of basic relations. Given a QCN N , the main problems to be considered are the following ones: decide whether there exists a solution of N (the consistency problem), find one or several solutions of N ; find one or several consistent scenarios of N ; determine the minimal QCN of N . 
In order to solve these problems, methods based on local constraint propagation algorithms have been defined, in particular algorithms based on the •-closure method (called also the path consistency method) [START_REF] Allen | Maintaining Knowledge about Temporal Intervals[END_REF][START_REF] Peter Van Beek | Approximation algorithms for temporal reasoning[END_REF][START_REF] Ladkin | Effective solution of qualitative interval constraint problems[END_REF][START_REF] Ladkin | A symbolic approach to interval constraint problems[END_REF][START_REF] Bessire | A Simple Way to Improve Path Consistency Processing in Interval Algebra Networks[END_REF][START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF][START_REF] Renz | Efficient methods for qualitative spatial reasoning[END_REF][START_REF] Nebel | Solving hard qualitative temporal reasoning problems: Evaluating the efficienty of using the ord-horn class[END_REF] which is the qualitative version of the path consistency method [START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF][START_REF] Mackworth | The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problem[END_REF] used in the domain of classical CSPs. Roughly speaking the •-closure method is a constraint propagation method which consists in iteratively performing an operation called the triangulation operation which removes for each constraint defined between two variables the basic relations not allowed w.r.t. a third variable. In following the line of reasoning of van Beek and Manchak [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF] and Bessière [START_REF] Bessire | A Simple Way to Improve Path Consistency Processing in Interval Algebra Networks[END_REF], in this paper we compare different possible versions of the •-closure method. The algorithms studied are adapted from the algorithms PC1 [START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF] or PC2 [START_REF] Mackworth | Consistency in networks of relations[END_REF]. Concerning the algorithms issued of PC2 we use different heuristics, in particular heuristics defined in [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF] and we use structures saving pairs of constraints or structures saving triples of constraints. Moreover we introduce two algorithms mixing the algorithm PC1 and the algorithm PC2. This paper is organized as follows. In Section 2, we give some general definitions concerning the qualitative calculi. Section 3 is devoted to the different •-closure algorithms studied in this paper. After discussing the realized experimentations in Section 4 we conclude in Section 5. 2 Background on Qualitative Calculi Relations In this paper, we focus on binary qualitative calculi and use very general definitions. A qualitative calculus considers a finite set B of k binary relations defined on a domain D. These relations are called basic relations. The elements of D are the possible values to represent the temporal or spatial entities. The basic relations of B correspond to all possible configurations between two temporal or spatial entities. The relations of B are jointly exhaustive and pairwise disjoint, which means that any pair of elements of D belongs to exactly one basic relation in B. 
Moreover, for each basic relation B ∈ B there exists a basic relation of B, denoted by B ∼ , corresponding to the converse of B. The set A is defined as the set of relations corresponding to all unions of the basic relations: A = { ∪ B : B ⊆ B}. It is customary to represent an element B1 ∪ . . . ∪ Bm (with 0 ≤ m ≤ k and Bi ∈ B for each i such that 1 ≤ i ≤ m) of A by the set {B1, . . . , Bm} belonging to 2 B . Hence we make no distinction between A and 2 B in the sequel. There exists an element of A which corresponds to the identity relation on D; we denote this element by Id. Note that this element can be composed of several basic relations. Now we give some well known examples of calculi to illustrate this definition. The Allen's calculus. As a first example, consider the well known temporal qualitative formalism called Allen's calculus [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF]. It uses intervals of the rational line for representing temporal entities. Hence D is the set {(x -, x + ) ∈ Q × Q : x -< x + }. The set of basic relations consists of thirteen binary relations corresponding to all possible configurations of two intervals. These basic relations are depicted in Figure 1. Here we have B = {eq, b, bi, m, mi, o, oi, s, si, d, di, f, f i}. Each basic relation can be formally defined in terms of the endpoints of the intervals involved; for instance, m = {((x -, x + ), (y -, y + )) ∈ D × D : x + = y -}. The Meiri's calculus. Meiri [START_REF] Meiri | Combining qualitative and quantitative constraints in temporal reasoning[END_REF] considers temporal qualitative constraints on both intervals and points. These constraints can correspond to the relations of a qualitative formalism defined in the following way. D is the set of pairs of rational numbers {(x, y) : x ≤ y}. The pairs (x, y) with x < y correspond to intervals and the pairs (x, y) with x = y correspond to points. Hence, we define two particular basic relations on D : eq i = {((x, y), (x, y)) : x < y} and eq p = {((x, y), (x, y)) : x = y}, composing Id. These basic relations allow us to constrain an object to be an interval or a point. In addition to these basic relations, the basic relations of the Allen's calculus and those of the point algebra are added to B. To close the definition of B we must include the ten basic relations corresponding to the possible configurations between a point and an interval; see Figure 2 for an illustration of these basic relations. Fundamental operations As a set of subsets, A is equipped with the usual set-theoretic operations including intersection (∩) and union (∪). As a set of binary relations, it is also equipped with the operation of converse (∼) and an operation of composition (•), sometimes called weak composition or qualitative composition. The converse of a relation R in A is the relation of A corresponding to the transpose of R; it is the union of the converses of the basic relations contained in R. The composition A • B of two basic relations A and B is the relation R = {C : ∃x, y, z ∈ D, x A y, y B z and x C z}. The composition R • S of R, S ∈ A is the relation T = ∪ A∈R,B∈S (A • B). Computing the results of these various operations for relations of 2 B can be done efficiently by using tables giving the results of these operations for the basic relations of B. For instance, consider the relations R = {eq, b, o, si} and S = {d, f, s} of Allen's calculus: we have R ∼ = {eq, bi, oi, s}, and the relation R • S is {d, f, s, b, o, m, eq, si, oi}. Consider now the relations R = {b * , s * } and S = {b} of the Meiri's calculus: we have R • S = {b * } whereas S • R = {}. Qualitative Constraint Networks A qualitative constraint network (QCN) is a pair composed of a set of variables and a set of constraints. The set of variables represents spatial or temporal entities of the system. A constraint consists of a set of acceptable basic relations (the possible configurations) between some variables. Formally, a QCN is defined in the following way: Definition 1 A QCN is a pair N = (V, C) where: • V = {v1, . . . , vn} is a finite set of n variables, where n is a positive integer; • C is a map which to each pair (vi, vj) of V × V associates a subset C(vi, vj ) of the set of basic relations: C(vi, vj ) ∈ 2 B . In the sequel C(vi, vj ) will also be denoted by Cij . C is such that Cii ⊆ Id and Cij = C ∼ ji for all vi, vj ∈ V .
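With the endpoint definitions above, the atomic relation holding between two concrete intervals can be computed directly. The sketch below is our own code, using the standard endpoint characterisations of the thirteen basic relations; such a semantic classifier is all that is needed to check whether an instantiation of the variables of a QCN by concrete intervals satisfies its constraints.

    from fractions import Fraction

    def allen_relation(i, j):
        """Atomic Allen relation between intervals i = (x-, x+) and j = (y-, y+)."""
        (xm, xp), (ym, yp) = i, j
        assert xm < xp and ym < yp
        if (xm, xp) == (ym, yp):
            return "eq"
        if xp < ym:
            return "b"
        if yp < xm:
            return "bi"
        if xp == ym:
            return "m"
        if yp == xm:
            return "mi"
        if xm == ym:
            return "s" if xp < yp else "si"
        if xp == yp:
            return "f" if xm > ym else "fi"
        if ym < xm and xp < yp:
            return "d"
        if xm < ym and yp < xp:
            return "di"
        return "o" if xm < ym else "oi"

    if __name__ == "__main__":
        # the meets relation of the text: x+ = y-
        print(allen_relation((Fraction(0), Fraction(1)), (Fraction(1), Fraction(2))))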
With regard to a QCN N = (V, C) we have the following definitions: A solution of N is a map σ from V to D such that (σ(vi), σ(vj)) satisfies Cij for all vi, vj ∈ V . N is consistent iff it admits a solution. A QCN N ′ = (V ′ , C ′ ) is a sub-QCN of N if and only if V = V ′ and C ′ ij ⊆ Cij for all vi, vj ∈ V . A QCN N ′ = (V ′ , C ′ ) is equivalent to N if and only if V = V ′ and both networks N and N ′ have the same solutions. The minimal QCN of N is the smallest (for ⊆) sub-QCN of N equivalent to N . An atomic QCN is a QCN such that each Cij contains a basic relation. A consistent scenario of N is a consistent atomic sub-QCN of N . Given a QCN N , the main problems to be considered are the following problems: decide whether there exists a solution of N ; find one or several solutions of N ; find one or several consistent scenarios of N ; determine the minimal QCN of N . Most of the algorithms used for solving these problems are based on a method which we call the •-closure method. The next section is devoted to this method. 3 The •-closure method and associated algorithms Generalities on the •-closure method In this section we introduce the path •-closure property and give the different implementations of this method studied in the sequel. Roughly speaking the •-closure method is a constraint propagation method which consists in iteratively performing the following operation (the triangulation operation): Cij := Cij ∩ (C ik • C kj ) , for all variables vi, vj , v k of V , until a fixed point is reached. Just no satisfiable basic relations are removed from these constraints with this method. In the case where the QCN obtained in this way contains the empty relation as a constraint, we can assert that the initial QCN is not consistent. However, if it does not, we cannot in the general case infer the consistency of the network. Hence the QCN obtained in this way is a sub-QCN of N which is equivalent to it. Moreover, the obtained QCN is •-closed, more precisely it satisfies the following property: Cij ⊆ C ik • C kj for all variables vi, vj , v k of V . Note that this property implies the (0, 3)consistency of the resulting QCN (each restriction on 3 variables is consistent). For several calculi, in particular for the Allen's calculus defined on the rational intervals, the (0, 3)consistency implies the 3 consistency or path consistency [START_REF] Mackworth | Consistency in networks of relations[END_REF]. It is why sometimes there exists a confusion between the •closure property and the path consistency property. Studied Algorithms There are two well known algorithms in the literature for enforcing the path-consistency of discrete CSPs [START_REF] Mackworth | Consistency in networks of relations[END_REF][START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF], namely the PC1 and the PC2 algorithms. These algorithms have been adapted on several occasions to the binary qualitative case in order to enforce •-closure [START_REF] Allen | Maintaining Knowledge about Temporal Intervals[END_REF][START_REF] Vilain | Constraint Propagation Algorithms for Temporal Reasoning[END_REF][START_REF] Ladkin | Effective solution of qualitative interval constraint problems[END_REF][START_REF] Van Beek | Reasoning About Qualitative Temporal Information[END_REF][START_REF] Bessire | A Simple Way to Improve Path Consistency Processing in Interval Algebra Networks[END_REF]. 
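A compact way to see what these adaptations amount to is the following sketch, written by us in the spirit of the PC2-style functions presented below rather than as their exact transcription: constraints whose relation has just changed are put on a work list and their triangles are re-examined, with composition and converse supplied by the tables of the calculus.

    from collections import deque

    def algebraic_closure(n, C, compose, converse):
        """Queue-based algebraic closure of a binary QCN over variables 0..n-1.

        C maps every ordered pair (i, j) with i != j to a set of basic-relation
        names, with C[(j, i)] kept equal to the converse of C[(i, j)].
        `compose(R, S)` is the (weak) composition of two relations and
        `converse(R)` their converse, e.g. computed from the tables of the
        calculus.  Returns False iff a constraint becomes empty.
        """
        queue = deque((i, j) for i in range(n) for j in range(i + 1, n))
        while queue:
            i, j = queue.popleft()          # a constraint whose change is propagated
            for k in range(n):
                if k in (i, j):
                    continue
                # triangulate through the two triangles containing (i, j) and k
                for a, b, c in ((i, j, k), (k, i, j)):
                    refined = C[(a, c)] & compose(C[(a, b)], C[(b, c)])
                    if not refined:
                        return False
                    if refined != C[(a, c)]:
                        C[(a, c)] = refined
                        C[(c, a)] = converse(refined)
                        queue.append((a, c))
        return True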
A possible adaptation of PC1 to the qualitative case is the function WCC1 defined in Algorithm 1. WCC1 checks all triples of variables of the network in a main loop. It starts again this main loop until no changes occur. For each triple of variables the operation of triangulation is made by the function revise. Note that in this function the call of updateConstraints(Cij, R) allows to set the constraint Cij with the new relation R and to set the constraint Cij with R ∼ . For particular situations, the treatment corresponding to lines 7-9 can be avoided. For example, for the QCNs defined from relations of the Allen's calculus this treatment is an useless work in the following cases : for i ← 1 to n do 4: C ik = B, C kj = B, i = k, k = j or i = j. for j ← i to n do 5: for k ← 1 to n do 6: if not skippingCondition(C ik , C kj ) then 7: if revise(i, k, j) then 8: if Cij == ∅ then return false 9: else change ← true 10: until not change 11: return true Function revise(i, k, j). 1: R ← Cij ∩ (C ik • C kj ) 2: if Cij ⊆ R then return false 3: updateConstraints(Cij , R) 4: return true The functions WCC2 P and WCC2 T defined in respectively Algorithm 2 and Algorithm 3 are inspired by PC2. WCC2 P handles a list containing pairs of variables corresponding to the modified constraints which must be propagated whereas WCC2 P handles a list containing triples of variables corresponding to the operations of triangulation to realize. The using of triples instead of pairs allows to circumscribe more precisely the useful triangulation operations. In the previous algorithms proposed in the literature, the exact nature of the list manipulated is not very clear, this list could be a set, a queue or still a stack. In WCC2 P and WCC2 T the nature of the list is connected with the nature of the object heuristic which is commissioned to handle it. The main task of heuristic consists in the insertion of a pair or a triple of variables in the list. It must compute a location in the list and places it. If the pair or the triple is already in the list it can insert it or do nothing. The method next always consists in removing and returning the first element of the list. In the sequel we will describe the used heuristics with more details. The predicate skippingCondition, like in WCC1, depends on the qualitative calculus used. For the Allen's calculus and for most of the calculi skippingCondition(Cij ) can be defined by the following instruction: return (Cij == B). The time complexity of WCC2 P and WCC2 T is O(|B| * n 3 ) whereas the spatial complexity of WCC2 P is O(n 2 ) and this one of WCC2 T is O(n 3 ). Algorithm 2 Function WCC2 P(N , heuristic), with N = (V, C). 1: Q ← ∅ 2: initP (N , Q, heuristic) 3: while Q = ∅ do 4: (i, j) ← heuristic.next(Q) 5: for k ← 1 to n do 6: if revise(i, j, k) then 7: if C ik == ∅ then return false 8: else addRelatedP athsP (i, k, Q, heuristic) for j ← i to n do 3: if not skippingCondition(Cij ) then addRelatedP athsP (i, j, Q, heuristic) Function addRelatedPathsP(i, j, Q, heuristic). 1: heuristic.append(Q, (i, j)) Algorithm 3 Function WCC2 T(N ), with N = (V, C). 1: Q ← ∅ 2: initT (N , Q, heuristic) 3: while Q = ∅ do 4: (i, k, j) ← heuristic.next(Q) 5: if revise(i, k, j) then 6: if Cij == ∅ then return false 7: else addRelatedP athsT (i, j, Q, heuristic) 8: end while 9: return true Function initT(N, Q, heuristic). 1: for i ← 1 to n do 2: for j ← i to n do 3: if not skippingCondition(Cij ) then addRelatedP athsT (i, j, Q, heuristic) Function relatedPathsT(i, j, Q, heuristic). 
1: for k ← 1 to n do 2: if not skippingCondition(C jk ) then heuristic.append(Q, (i, j, k)) 4: if not skippingCondition(C ki ) then 5: heuristic.append(Q, (k, i, j)) 6: done Despite these different complexities, WCC2 P and WCC2 T can perform worse than WCC1. This is mainly due to the fact that WCC2 P and especially WCC2 T must make an expensive initialization of the list Q (line 2). This step can take more time than the subsequent processing of the elements of the list, in particular for no consistent QCNs. This is why we introduce the functions WCCMixed P and WCCMixed T (see Algorithm 4 and Algorithm 5) to remedy this drawback. Roughly, these functions realize a first step corresponding to a first loop of WCC1 and then continues in the manner of WCC2 P and WCC2 T. Algorithm 4 Function WCCMixed P(N ), with N = (V, C). 1: Q ← ∅ 2: initMixedPair(N , Q, heuristic) 3: while Q = ∅ do 4: (i, j) ← heuristic.next(Q) 5: for k ← 1 to n do 6: if revise(i, j, k) then 7: if C ik == ∅ then return false 8: else addRelatedP athsP air((i, k), Q, heuristic) if revise(i, k, j)then Generated instances To evaluate the performances of the proposed algorithms we randomly generate instances of qualitative constraint networks. A randomly generated QCN will be characterized by five parameters: • an integer n which corresponds to the number of variables of the network; • a qualitative calculus algebra which is the used qualitative calculus; • a real nonT rivialDensity which corresponds to the probality of a constraint to be a non trivial constraint (to be different of B); • a real cardinalityDensity which is the probality of a basic relation to belong to a non trivial given constraint; • a flag type which indicates if the generated network must be forced to be consistent by adding a consistent scenario. (i, k, j) ← heuristic.next(Q) 5: if revise(i, k, j) then 6: if Cij == ∅ then return false 7: else addRelatedP athsT (i, j, Q, heuristic) 8: end while 9: return true Function initMixedT(N , Q, heuristic). if revise(i, k, j)then if (change) addRelatedP athsT (i, j, Q, heuristic) 11: done The different algorithms have been implemented with the help of the JAVA library QAT 1 . We have conducted an extensive experimentation on a PC Pentium IV 2,4GHz 512mo under Linux. The experiences reported in this paper concern QCNs of the Allen's calculus generated with a nonT rivialDensity equals to 0.5 . Performances are measured in terms of the number of revise operations (numberOfRevises), in terms of cpu time (time), in terms of the number of maximum elements in the list (max). Heuristics Almost of the algorithms proposed in the later section use a list which contains the elements (pairs or triples) to be propagated. To improve the efficiency of the algorithms we have to reduce the number of these elements. When a constraint between (i,j) changes we must add all the elements which can be affected by this modification. The order that these elements are processed is very important and can reduce dramatically the number of triangulation operations. The set of the experimented heuristics contains the different heuristics proposed in [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF]. The main task of a heuristic consists in the insertion of a pair or a triple of variables in the list after computing its location. If the pair or the triple is already in the list it can insert it or do nothing depending on its policy. 
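A minimal version of such a heuristic object might look as follows (our own sketch; the implementations compared in this paper rely on doubly-linked lists and lookup tables, as described below, and handle the staleness of the keys more carefully). It exposes the two operations used by the pseudocode, append and next, and a flag for the policy applied when the element is already queued.

    class CardinalityHeuristic:
        """Work list ordered by the cardinality of the constraint of each element.

        A simplified stand-in for the `heuristic` object of the pseudocode:
        `append(element)` inserts a pair or a triple at a position computed from
        the current constraints C (its key is the cardinality at insertion time),
        optionally moving it when it is already queued, and `next()` removes and
        returns the first element.  Ties are broken in favour of the most
        recently added element (LIFO), the handling retained in the experiments.
        """

        def __init__(self, C, move_if_present=True):
            self.C = C                     # shared reference to the constraints
            self.items = []                # (cardinality, -age, element), kept sorted
            self.present = set()
            self.move_if_present = move_if_present
            self._age = 0

        def append(self, element):
            if element in self.present:
                if not self.move_if_present:
                    return                 # the "no moving" policy
                self.items = [it for it in self.items if it[2] != element]
            self._age += 1
            # for a triple, a subclass could combine the cardinalities of the
            # other constraints of the triangle as well
            self.items.append((len(self.C[element]), -self._age, element))
            self.items.sort()
            self.present.add(element)

        def next(self):
            _, _, element = self.items.pop(0)
            self.present.discard(element)
            return element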
All heuritics experimented remove and return the first element of the list. In general, given an heuritics, more it reduces the number of triangulation operations more its time cost and spatial cost are important. Experimental results Stack or Queue. The list used to stock the pairs/the triples can be handles as a stack or a queue, i.e. after the changing 1 This library can be found at http://www.cril.univartois.fr/∼saade/. of a constraint the pair or the triples corresponding can be added at the head of the list or at its queue (recall that the first element of the list is always treat firstly). After the initialisation of the list, the addition of a pair/a triple is due to the restriction of a constraint. Intuitively, more this constraint is added belatedly, more its cardinality is small and more it will be restrictive for an operation of triangulation. It is why, the using of the list as a stack (a FIFO structure) must perform the using of the list as a queue (a LILO structure). This is confirmed by our experiences, for example consider Figure 3 in which we use WCC2 P and WCC2 T on forced consistent networks with the heuristic Basic. Note that for WCCMixed P and WCCMixed T the difference is not so important. Actually Basic is not really a heuristic, indeed, it just adds an element in the list if it is not present and removes the first element of the list. In the sequel, among the elements which can be returned from the list, the heuristics always choose the more recently added (LIFO handling). Add or not add a pair/a triple. The main task of heuristic is to add a pairs or a triples when a modification raises. As the pair/the triple is already in the list, depending of the used policy, heuristic can or cannot add the pair/the triple. Adding the element could have a prohibitive cost since one must remove the element of the list before add it at the new location. This cost is connected to the heuristic used and the structure used to implement the list. In our case, roughly speaking, we use doubly-linked lists or tables of doubly-linked lists for the more sophisticated heuristics. Moreover, we use tables with 2/3 entries to check the presence of a pair/a triple in the list. Actually, the experiences show that removing and adding the pair/the triple in the case where it is present avoid sufficient revise operations to be more competitive than the case where nothing is done. See for example Figure 4 which shows the behaviour of the heuristic cardinality with these two possible policies (cardinalityM oving for the systematic addition and cardinalityN oM oving for the addition in the case where the pair/the triple is not present). For the methods WCC2 P and WCC2 T cardinalityM oving performs cardinalityN oM oving, in particular before the phase transition (cardinalityDensity between 0.3 and 0.55). Conerning the no forced consistent instances, from the fact that the numbers of revises are very near, we have cardinalityM oving which is lightly better in term of time. For the mixed methods, cardinalityM oving and cardinalityN oM oving are very closed in term of time and number of revises. In the sequel we always use the policy which consists in systematically moving the present pair or triple. The better heuristics. We compared all the heuristics on the different algorithms. Concerning the algorithms manipulating the pairs we compare the heuristics Basic and Cardinality previously presented. 
Moreover we used the W eight heuristic, this heuristic processes the pair (i, j) following the weight of the constraint Cij in ascending order. The weight of a constraint is the sum of the weights of the basic relations composing it. Intuitively, to obtain the weight of a basic relation B we sum the number of basic relations present in the table of composition at the line and the column corresponding to the entry B then we scaled the obtained numbers to give the value 1 at the basic relations with the smallest numbers, then 2, etc. This method is lightly different from this proposed by van Beek and Manchak [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF] but it is easy to implement it for all qualitative calculi. For the basic relations of the Allen's calculus we obtain the weight 1 for eq, 2 for m, mi, s, si, f, f i, 3 for d, di, b, bi and 4 for o, oi. In addition to these heuristics, we define heuristics corresponding to combinations of Cardinality and W eight: the SumCardinalityW eight heuristic which arranges the pairs (i, j) following the sum of the cardinality and the weight of the constraint Cij , the CardinalityW eight heuristic which arranges the pairs (i, j) following the cardinality of Cij and then, following the weight of Cij, and W eightCardinality which arranges the pairs (i, j) following the weight of Cij and then, following the cardinality of Cij . These heuristics are also define for for WCC2 T and WCCMixed T which use triples instead of pairs. By examining Figure 5, we constate that the number of revises are very closed for all these heuritics (expected for the Basic heuristic). In term of cpu time, the heuristics Cardinality, SumCardinalityW eight and W eight are very closed and are the more performing heuristics. Due to the using of triples we can define finer heuristics. For example, from the heuristic cardinality we have three different heuristics: the cardinalityI heuristic which considers the cardinality of Cij for the triples (i, j, k) and (k, i, j) (similarly to the previous cardinality heuristic), the cardinalityII heuristic which takes into account the sum of the cardinality of Cij and the cardinality of C jk for the triple (i, j, k), and the sum of the cardinality of Cij and the cardinality of C ki for the triple (k, i, j), the cardinalityIII heuristic which takes into account the sum of the cardinality of Cij , the cardinality of C jk and the cardinality of C ik for the triple (i, j, k), the sum of the cardinality of Cij , the cardi-nality of C ki and the cardinality of C kj for the triple (k, i, j). In a same line of reasoning we split the heuristics weight and SumCardinalityW eight in six heuristics. By considering the different versions of the Cardinality heuristic (it is the same thing for the weight and SumCardinalityW eight heuristics) we can see that the cardinalityII heuritic makes the smallest number of revises. Outside the phase transition it performs the other triple cardianality heuritics in terms of time. In the phase transition the cardinalityIII heuristic performs the cardinalityI heuristic and the cardinalityIII heuristic. In terms of cpu time, the handling with pairs performs the handling with triples. WCC1/WCC2 P/WCCMixed P/WCC2 T/WCCMixed T Now we compare all the algorithms we the more competitive heuristics. We can constate that in general WCC1 is the algorithm the less competitive algorithm, see Figure 7. 
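As a side note on the Weight-based heuristics used in this comparison before turning to the algorithm-level results: the weight of a basic relation described above can be computed mechanically from the composition table. The sketch below is our own illustration, not the authors' code; the grouping of tied raw scores during scaling is an assumption, so the exact 1 to 4 values quoted for Allen's calculus are reproduced only up to that choice.

    def basic_relation_weights(basic, comp_table):
        """Weight of each basic relation derived from a composition table.

        The raw score of B is the total number of basic relations occurring in
        the row and the column of B in the table; raw scores are then ranked so
        that the smallest score gets weight 1, the next distinct score 2, etc.
        """
        raw = {}
        for b in basic:
            row = sum(len(comp_table[(b, c)]) for c in basic)
            col = sum(len(comp_table[(c, b)]) for c in basic)
            raw[b] = row + col
        ranking = {score: rank for rank, score in enumerate(sorted(set(raw.values())), 1)}
        return {b: ranking[raw[b]] for b in basic}

    def constraint_weight(relation, weights):
        """Weight of a constraint: the sum of the weights of its basic relations."""
        return sum(weights[b] for b in relation)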
The most favorable case for WCC1 is the case where the instances are inconsistent QCNs. Generally, and in particular for the forced consistent instances, the algorithms based on triples perform fewer revise operations than the algorithms based on pairs. Despite this, the pair-based algorithms are faster, because handling triples costs more time than handling pairs. Moreover, the number of elements which must be stored is much larger for triples than for pairs (see the last plots of Figure 7). For the forced consistent instances we can see that the mixed versions of the algorithms perform slightly worse than the non-mixed versions, although the difference is not very important. Concerning the non-forced consistent instances we have the same result for cardinality densities between 0.5 and 0.6. For densities strictly greater than 0.6 there is an inversion and the mixed versions become more competitive. By examining the maximum number of elements in the list, we can see that the mixed versions dramatically reduce this number for the triples.
Conclusions In this paper we study empirically several algorithms enforcing the •-closure on qualitative constraint networks. The algorithms studied are adapted from the algorithms PC1 and PC2. Concerning the algorithms derived from PC2, we use different heuristics, in particular heuristics defined in [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF], and we use structures saving pairs of constraints or structures saving triples of constraints. We showed that using triples dramatically reduces the number of revises compared with a handling based on pairs. Despite this, the versions using pairs are more competitive in terms of time. We introduced two algorithms mixing the algorithm PC1 and the algorithm PC2. These algorithms seem to be a good compromise between a PC1 version which consumes a lot of time and a PC2 version which consumes a lot of space. Currently, we are continuing our experimentations on QCNs with a larger number of variables and on other qualitative calculi (in particular on INDU, which is based on 25 basic relations, and the cyclic point algebra, which is a ternary calculus).
Figure 1. The basic relations of the Allen's calculus.
Figure 2. The basic relations of the Meiri's calculus concerning a point X and an interval Y.
Figure 3. Average time for WCC2 P and WCC2 T using the heuristic Basic on consistent QCNs (200 instances per data point, with n = 50).
Figure 4. Average number of revises and average time for WCC2 P and WCC2 T using cardinalityNoMoving and cardinalityMoving on consistent (top) and non-forced consistent (bottom) QCNs (200 instances per data point, with n = 50).
Figure 5. Average number of revises and average time for the heuristics used with WCC2 P on consistent (top) and non-forced consistent (bottom) QCNs (200 instances per data point, with n = 50).
Figure 6. Average number of revises and average time for the different cardinality heuristics used with WCC2 T and WCCMixed T on forced consistent QCNs (200 instances per data point, with n = 50).
Figure 7. Average number of revises, average time and average maximum number of elements in the list for all algorithms with a competitive heuristic on consistent (left) and non-forced consistent (right) QCNs (200 instances per data point, with n = 50).
In WCC1 (Algorithm 1 below), the triangulation operation can be skipped when C ik = B, C kj = B, i = k, k = j or i = j: this is respectively due to the facts that B • R = R • B = B for every non-empty relation R ∈ A, that Id is composed of a single basic relation (eq), and that Id ⊆ R • R ∼ for every non-empty relation R. Note that these properties do not always hold for other calculi (see the Meiri's calculus, for example). This is why the conditional statement at line 6 avoids fruitless work through a predicate skippingCondition suited to the qualitative calculus used. For example, in the case of the Allen's calculus, skippingCondition could be defined by the following instruction: return (C ik == B or C kj == B or i == k or k == j or i == j). For this calculus the skipping condition can be further elaborated, see [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF]. The time complexity of WCC1 is O(|B| * n 5 ) whereas its spatial complexity is O(|B| * n 3 ).
Algorithm 1 Function WCC1(N ), with N = (V, C).
1: repeat
2: change ← false
3: for i ← 1 to n do
4: for j ← i to n do
5: for k ← 1 to n do
6: if not skippingCondition(C ik , C kj ) then
7: if revise(i, k, j) then
8: if C ij == ∅ then return false
9: else change ← true
10: until not change
11: return true
Function revise(i, k, j).
1: R ← C ij ∩ (C ik • C kj )
2: if C ij ⊆ R then return false
3: updateConstraints(C ij , R)
4: return true
Algorithm 4 Function WCCMixed P(N ), continued (lines 9-14):
9: if revise(k, i, j) then
10: if C kj == ∅ then return false
11: else addRelatedPathsP(k, j, Q, heuristic)
12: done
13: end while
14: return true
Function initMixedP(N , Q, heuristic).
1: change ← false
2: for i ← 1 to n do
3: for j ← i to n do
4: for k ← 1 to n do
5: if not skippingCondition(C ik , C kj ) then
6: if revise(i, k, j) then
7: if C ij == ∅ then return false
8: else change ← true
9: done
10: if (change) addRelatedPathsP(i, j, Q, heuristic)
11: done
Algorithm 5 Function WCCMixed T(N ), with N = (V, C).
1: Q ← ∅
2: initMixedT(N , Q, heuristic)
3: while Q ≠ ∅ do
4: (i, k, j) ← heuristic.next(Q)
5: if revise(i, k, j) then
6: if C ij == ∅ then return false
7: else addRelatedPathsT(i, j, Q, heuristic)
8: end while
9: return true
Function initMixedT(N , Q, heuristic).
1: change ← false
2: for i ← 1 to n do
3: for j ← i to n do
4: for k ← 1 to n do
5: if not skippingCondition(C ik , C kj ) then
6: if revise(i, k, j) then
7: if C ij == ∅ then return false
8: else change ← true
9: done
10: if (change) addRelatedPathsT(i, j, Q, heuristic)
11: done
34,068
[ "1142762", "998171" ]
[ "56711", "247329", "56711" ]
01487493
en
[ "info", "scco" ]
2024/03/04 23:41:48
2005
https://hal.science/hal-01487493/file/renz-ligozat-cp05.pdf
Jochen Renz email: [email protected] Gérard Ligozat email: [email protected] Weak Composition for Qualitative Spatial and Temporal Reasoning It has now been clear for some time that for many qualitative spatial or temporal calculi, for instance the well-known RCC8 calculus, the operation of composition of relations which is used is actually only weak composition, which is defined as the strongest relation in the calculus that contains the real composition. An immediate consequence for qualitative calculi where weak composition is not equivalent to composition is that the well-known concept of pathconsistency is not applicable anymore. In these cases we can only use algebraic closure which corresponds to applying the path-consistency algorithm with weak composition instead of composition. In this paper we analyse the effects of having weak compositions. Starting with atomic CSPs, we show under which conditions algebraic closure can be used to decide consistency in a qualitative calculus, how weak consistency affects different important techniques for analysing qualitative calculi and under which conditions these techniques can be applied. For our analysis we introduce a new concept for qualitative relations, the "closure under constraints". It turns out that the most important property of a qualitative calculus is not whether weak composition is equivalent to composition, but whether the relations are closed under constraints. All our results are general and can be applied to all existing and future qualitative spatial and temporal calculi. We close our paper with a road map of how qualitative calculi should be analysed. As a side effect it turns out that some results in the literature have to be reconsidered. Introduction The domain of qualitative temporal reasoning underwent a major change when Allen [START_REF] Allen | Maintaining knowledge about temporal intervals[END_REF] proposed a new calculus which up to a degree resulted in embedding it in the general paradigm of constraint satisfaction problems (CSPs). CSPs have their well-established sets of questions and methods, and qualitative temporal reasoning, and more recently qualitative spatial reasoning, has profited significantly from developing tools and methods analogous to those of classical constraint satisfaction. In particular, a central question for classical constraint networks is the consistency problem: is the set of constraints specified by the constraint network consistent, that is, can the variables be instantiated with values from the domains in such a way that all constraints are satisfied? Part of the apparatus for solving the problem consists of filtering algorithms which are able to restrict the domains of the variables without changing the problem, while remaining reasonably efficient from a computational point of view. Various algorithms such as arc consistency, path consistency, and various notions of k-consistency have been extensively studied in that direction. Reasoning about temporal or spatial qualitative constraint networks on the same line as CSPs has proved a fruitful approach. Both domains indeed share a general paradigm. However, there is a fundamental difference between the two situations: -Relations in classical CSPs are finite relations, so they can be explicitly manipulated as sets of tuples of elements of a finite domain. In other terms, relations are given and processed in an extensional way. 
-By contrast, relations in (most) qualitative temporal and spatial reasoning formalisms are provided in intentional terms -or, to use a more down-to-earth expression, they are infinite relations, which means that there is no feasible way of dealing with them extensionally. But is that such an important point? We think it is, although this was not apparent for Allen's calculus. The differences began to appear when it became obvious that new formalisms, such as for instance the RCC8 calculus [START_REF] Randell | A spatial logic based on regions and connection[END_REF], could behave in a significantly different way than Allen's calculus. The differences have to do with changes in the notion of composition, with the modified meaning of the the classical concept of pathconsistency and its relationship to consistency, and with the inapplicability of familiar techniques for analysing qualitative calculi. Composition Constraint propagation mainly uses the operation of composition of two binary relations. In the finite case, there is only a finite number of binary relations. In Allen's case, although the domains are infinite, the compositions of the thirteen atomic relations are themselves unions of atomic relations. But this is not the case in general, where insisting on genuine composition could lead to considering an infinite number of relations, whereas the basic idea of qualitative reasoning is to deal with a finite number of relations. The way around the difficulty consists in using weak composition, which only approximates true composition. Path consistency and other qualitative techniques When only weak composition is used then some algorithms and techniques which require true composition can only use weak composition instead. This might lead to the inapplicability of their outcomes. Path-consistency, for example, relies on the fact that a constraint between two variables must be at least as restrictive as every path in the constraint network between the same two variables. The influence of the paths depends on composition of relations on the path. If we use algebraic closure instead of path-consistency, which is essentially path-consistency with weak composition, then we might not detect restrictions imposed by composition and therefore the filtering effect of algebraic closure is weaker than that of path-consistency. As a consequence it might not be possible to use algebraic closure as a decision procedure for certain calculi. Likewise, commonly used reduction techniques lose their strength when using only weak composition and might not lead to valid reductions. The main goal of this paper is to thoroughly analyse how the use of weak composition instead of composition affects the applicability of the common filtering algorithms and reduction techniques and to determine under which conditions their outcomes match that of their composition-based counterparts. Related Work The concepts of weak composition and algebraic closure are not new. 
Although there has not always been a unified terminology to describe these concepts, many authors have pointed out that composition tables do not necessarily correspond to the formal definition of composition [START_REF] Bennett | Some observations and puzzles about composing spatial and temporal relations[END_REF][START_REF] Bennett | When does a composition table provide a complete and tractable proof procedure for a relational constraint language[END_REF][START_REF] Grigni | Topological inference[END_REF][START_REF] Ligozat | When tables tell it all: Qualitative spatial and temporal reasoning based on linear orderings[END_REF]. Consequently, many researchers have been interested in finding criteria for (refutation) completeness of compositional reasoning, and Bennett et al. ( [START_REF] Bennett | Some observations and puzzles about composing spatial and temporal relations[END_REF][START_REF] Bennett | When does a composition table provide a complete and tractable proof procedure for a relational constraint language[END_REF]) posed this as a challenge and conjectured a possible solution. Later work focused on dealing with this problem for RCC8 [START_REF] Düntsch | A relation -algebraic approach to the region connection calculus[END_REF][START_REF] Li | Region connection calculus: Its models and composition table[END_REF]. In particular Li and Ying ( [START_REF] Li | Region connection calculus: Its models and composition table[END_REF]) showed that no RCC8 model can be interpreted extensionally, i.e., for RCC8 composition is always only a weak composition, which gives a negative answer to Bennett et al.'s conjecture. Our paper is the first to give a general account on the effects of having weak composition and a general and clear criterion for the relationship between algebraic closure and consistency. Therefore, the results of this paper are important for establishing the foundations of qualitative spatial and temporal reasoning and are a useful tool for investigating and developing qualitative calculi. The structure of the paper is as follows: Section 2 introduces the main notions and terminology about constraint networks, various notions of consistency and discusses weak composition and algebraic closure. Section 3 provides a characterisation of those calculi for which algebraic closure decides consistency for atomic networks. Section 4 examines the conditions under which general techniques of reduction can be applied to a qualitative calculus. Finally, Section 5 draws general conclusions in terms of how qualitative calculi should be analysed, and shows that some existing results have to be revisited in consequence. Background Constraint networks Knowledge between different entities can be represented by using constraints. A binary relation R over a domain D is a set of pairs of elements of D, i.e., R ⊆ D ×D. A binary constraint xRy between two variables x and y restricts the possible instantiations of x and y to the pairs contained in the relation R. A constraint satisfaction problem (CSP) consists of a finite set of variables V, a domain D with possible instantiations for each variable v i ∈ V and a finite set C of constraints between the variables of V. A solution of a CSP is an instantiation of each variable v i ∈ V with a value d i ∈ D such that all constraints of C are satisfied, i.e., for each constraint v i Rv j ∈ C we have (d i , d j ) ∈ R. If a CSP has a solution, it is called consistent or satisfiable. 
Several algebraic operations are defined on relations that carry over to constraints, the most important ones being union (∪), intersection (∩), and complement of a relation, defined as the usual set-theoretic operators, as well as converse (·^-1), defined as R^-1 = {(a, b) | (b, a) ∈ R}, and composition (•) of two relations R and S, which is the relation R • S = {(a, b) | ∃c : (a, c) ∈ R and (c, b) ∈ S}.
Path-consistency
Because of the high complexity of deciding consistency, different forms of local consistency and algorithms for achieving local consistency were introduced. Local consistency is used to prune the search space by eliminating local inconsistencies. In some cases local consistency is even enough for deciding consistency. Montanari [START_REF] Montanari | Networks of constraints: Fundamental properties and applications to picture processing[END_REF] developed a form of local consistency which Mackworth [START_REF] Mackworth | Consistency in networks of relations[END_REF] later called path-consistency. Montanari's notion of path-consistency considers all paths between two variables. Mackworth showed that it is equivalent to consider only paths of length two, so path-consistency can be defined as follows: a CSP is path-consistent if for every instantiation of two variables v_i, v_j ∈ V that satisfies v_i R_ij v_j ∈ C there exists an instantiation of every third variable v_k ∈ V such that v_i R_ik v_k ∈ C and v_k R_kj v_j ∈ C are also satisfied. Formally, for every triple of variables v_i, v_j, v_k ∈ V: ∀d_i, d_j : [(d_i, d_j) ∈ R_ij → ∃d_k : ((d_i, d_k) ∈ R_ik ∧ (d_k, d_j) ∈ R_kj)]. Montanari also developed an algorithm that makes a CSP path-consistent, which was later simplified and called the path-consistency algorithm, or enforcing path-consistency. A path-consistency algorithm eliminates locally inconsistent tuples from the relations between the variables by successively applying the following operation to all triples of variables v_i, v_j, v_k ∈ V until a fixpoint is reached: R_ij := R_ij ∩ (R_ik • R_kj). If the empty relation occurs, then the CSP is inconsistent. Otherwise the resulting CSP is path-consistent.
Varieties of k-consistency
Freuder [START_REF] Freuder | Synthesizing constraint expressions[END_REF] generalised path-consistency and the weaker notion of arc-consistency to k-consistency: a CSP is k-consistent if for every subset V_k ⊂ V of k variables the following holds: for every instantiation of k-1 variables of V_k that satisfies all constraints of C that involve only these k-1 variables, there is an instantiation of the remaining variable of V_k such that all constraints involving only variables of V_k are satisfied. So if a CSP is k-consistent, we know that each consistent instantiation of k-1 variables can be extended to any k-th variable. A CSP is strongly k-consistent if it is i-consistent for every i ≤ k. If a CSP with n variables is strongly n-consistent (also called globally consistent), then a solution can be constructed incrementally without backtracking. 3-consistency is equivalent to path-consistency, and 2-consistency is equivalent to arc-consistency.
Qualitative Spatial and Temporal Relations
The main difference between spatial or temporal CSPs and normal CSPs is that the domains of the spatial and temporal variables are usually infinite. For instance, there are infinitely many time points or temporal intervals on the time line and infinitely many regions in a two- or three-dimensional space. Hence it is not feasible to represent relations as sets of tuples, nor is it feasible to apply algorithms that enumerate values of the domains.
Instead, relations can be used as symbols and reasoning has to be done by manipulating symbols. This implies that the calculus, which deals with extensional relations in the finite case, becomes intensional in the sense that it manipulates symbols which stand for infinite relations. The usual way of dealing with relations in qualitative spatial and temporal reasoning is to have a finite (usually small) set A of jointly exhaustive and pairwise disjoint (JEPD) relations, i.e., each possible tuple (a, b) ∈ D × D is contained in exactly one relation R ∈ A. The relations of a JEPD set A are called atomic relations. The full set of available relations is then the powerset R = 2^A, which enables us to represent indefinite knowledge, e.g., the constraint x{R_i, R_j, R_k}y specifies that the relation between x and y is one of R_i, R_j or R_k, where R_i, R_j, R_k are atomic relations.
Composition and weak composition
Using these relations we can now represent qualitative spatial or temporal knowledge using CSPs and use constraint-based methods for deciding whether such a CSP is consistent, i.e., whether it has a solution. Since we are not dealing with explicit tuples anymore, we have to compute the algebraic operators for the relations. These operators are the only connection of the relation symbols to the tuples contained in the relations, and they have to be computed depending on the tuples contained in the relations. Union, complement, converse, and intersection of relations are again the usual set-theoretic operators, while composition is not as straightforward. Composition has to be computed only for pairs of atomic relations, since composition of non-atomic relations is the union of the composition of the involved atomic relations. Nevertheless, according to the definition of composition, we would have to look at an infinite number of tuples in order to compute composition of atomic relations, which is clearly not feasible. Fortunately, many domains such as points or intervals on a time line are ordered or otherwise well-structured domains, and composition can be computed using the formal definitions of the relations. However, for domains such as arbitrary spatial regions that are not well structured and where there is no common representation for the entities we consider, computing the true composition is not feasible and composition has to be approximated by using weak composition [START_REF] Düntsch | A relation -algebraic approach to the region connection calculus[END_REF]. Weak composition (⋄) of two relations S and T is defined as the strongest relation R ∈ 2^A which contains S • T, or formally, S ⋄ T = {R_i ∈ A | R_i ∩ (S • T) ≠ ∅}. The advantage of weak composition is that we stay within the given set of relations R = 2^A while applying the algebraic operators, as R is by definition closed under weak composition, union, intersection, and converse. In cases where composition cannot be formally computed (e.g. RCC8 [START_REF] Randell | A spatial logic based on regions and connection[END_REF]), it is often very difficult to determine whether weak composition is equivalent to composition or not. Usually only non-equality can be shown by giving a counterexample, while it is very difficult to prove equality. However, weak composition has also been used in cases where composition could have been computed because the domain is well-structured and consists of pairs of ordered points, but where the authors did not seem to be aware that R is not closed under composition (e.g.
INDU, PDN, or PIDN [START_REF] Pujari | INDU: An Interval and Duration Network[END_REF][START_REF] Navarrete | On point-duration networks for temporal reasoning[END_REF][START_REF] Pujari | A new framework for reasoning about points, intervals and durations[END_REF]).
Example 1 (Region Connection Calculus RCC8 [START_REF] Randell | A spatial logic based on regions and connection[END_REF]). RCC8 is a topological constraint language based on eight atomic relations between extended regions of a topological space. Regions are regular subsets of a topological space; they can have holes and can consist of multiple disconnected pieces. The eight atomic relations DC (disconnected), EC (externally connected), PO (partial overlap), EQ (equal), TPP (tangential proper part), NTPP (non-tangential proper part) and their converse relations TPPi, NTPPi were originally defined in first-order logic. It was shown by Düntsch [START_REF] Düntsch | A relation -algebraic approach to the region connection calculus[END_REF] that the composition of RCC8 is actually only a weak composition. Consider the consistent RCC8 constraints B{TPP}A, B{EC}C, C{TPP}A. If A is instantiated as a region with two disconnected pieces and B completely fills one piece, then C cannot be instantiated. So TPP is not a subset of EC • TPP [START_REF] Li | Region connection calculus: Its models and composition table[END_REF] and consequently RCC8 is not closed under composition.
Algebraic closure
When weak composition differs from composition, we cannot apply the path-consistency algorithm as it requires composition and not just weak composition. We can, however, replace the composition operator in the path-consistency algorithm with the weak composition operator. The resulting algorithm is called the algebraic closure algorithm [START_REF] Ligozat | Qualitative calculi: a general framework[END_REF], which makes a network algebraically closed or a-closed. If weak composition is equal to composition, then the two algorithms are also equivalent. But whenever we have only weak composition, an a-closed network is not necessarily path-consistent, as there are relations S and T such that S • T ⊂ S ⋄ T. So there are tuples (u, v) ∈ S ⋄ T for which there is no w with (u, w) ∈ S and (w, v) ∈ T, i.e., for which (u, v) ∉ S • T. This contradicts the path-consistency requirements given above. Path-consistency has always been an important property when analysing qualitative calculi, in particular as a method for identifying tractability. When this method is not available, it is not clear what the consequences of this will be. Will it still be possible to find calculi for which a-closure decides consistency even if weak composition differs from composition? What effect does it have on techniques used for analysing qualitative calculi which require composition and not just weak composition? And, very importantly, does it mean that some results in the literature have to be revised, or is it enough to reformulate them? These and related questions will be answered in the remainder of the paper. As an immediate consequence, unless we have proven otherwise, we should for all qualitative spatial and temporal calculi always assume that we are dealing with weak composition and that it is not equivalent to composition.
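As an illustration of the two notions just introduced, here is a self-contained Python sketch of our own (the two-relation "calculus" and its weak composition table are placeholders, not a real calculus such as RCC8): weak composition of compound relations is obtained as the union of table entries for their atomic parts, and the a-closure algorithm is simply the path-consistency loop with weak composition plugged in for composition.

```python
from itertools import product

# Placeholder "calculus": two atomic relations and a made-up weak composition
# table; a real calculus (e.g. RCC8 with its eight atomic relations) would
# supply its own atoms and table. Relations in 2^A are frozensets of atom names.
ATOMS = ("r1", "r2")
WEAK_TABLE = {
    ("r1", "r1"): frozenset({"r1"}),
    ("r1", "r2"): frozenset({"r1", "r2"}),
    ("r2", "r1"): frozenset({"r1", "r2"}),
    ("r2", "r2"): frozenset({"r2"}),
}

def weak_compose(S, T):
    """Weak composition of compound relations: the union of the tabulated weak
    compositions of their atomic parts. The result is again an element of 2^A."""
    out = set()
    for a, b in product(S, T):
        out |= WEAK_TABLE[(a, b)]
    return frozenset(out)

def a_closure(rels, n):
    """Algebraic closure: the path-consistency loop with weak composition in
    place of composition. rels maps every ordered pair (i, j), i != j, of the
    n variables to a relation (both directions stored independently here).
    Returns False as soon as some relation becomes empty."""
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            refined = rels[(i, j)] & weak_compose(rels[(i, k)], rels[(k, j)])
            if refined != rels[(i, j)]:
                rels[(i, j)] = refined
                changed = True
                if not refined:
                    return False
    return True

# Three variables, initially unconstrained, then two constraints are imposed.
full = frozenset(ATOMS)
net = {(i, j): full for i, j in product(range(3), repeat=2) if i != j}
net[(0, 1)] = frozenset({"r1"})
net[(1, 2)] = frozenset({"r1"})
print(a_closure(net, 3), net[(0, 2)])   # True frozenset({'r1'})
```

With only a weak composition table this loop is all that can be enforced and, as discussed above, it may leave undetected inconsistencies that true path-consistency would find.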
Weak composition and algebraic closure
For analysing the effects of weak composition, we will mainly focus on its effects on the most commonly studied reasoning problem, the consistency problem, i.e., whether a given set Θ of spatial or temporal constraints has a solution.

|                                | a-closure sufficient | a-closure not sufficient |
| weak composition = composition | Interval Algebra [START_REF] Allen | Maintaining knowledge about temporal intervals[END_REF], rectangle algebra [START_REF] Guesgen | Spatial reasoning based on Allen's temporal logic[END_REF], block algebra [START_REF] Balbiani | A tractable subclass of the block algebra: constraint propagation and preconvex relations[END_REF] | STAR calculus [START_REF] Renz | Qualitative direction calculi with arbitrary granularity[END_REF], containment algebra [START_REF] Ladkin | On binary constraint problems[END_REF], cyclic algebra [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF] |
| weak composition ≠ composition | RCC8 [START_REF] Randell | A spatial logic based on regions and connection[END_REF], discrete IA | INDU [START_REF] Pujari | INDU: An Interval and Duration Network[END_REF], PDN [START_REF] Navarrete | On point-duration networks for temporal reasoning[END_REF], PIDN [START_REF] Pujari | A new framework for reasoning about points, intervals and durations[END_REF] |

Table 1. Does a-closure decide atomic CSPs depending on whether weak composition differs from composition?

Recall that consistency means that there is at least one instantiation for each variable of Θ with a value from its domain which satisfies all constraints. This is different from global consistency, which requires strong k-consistency for all k. Global consistency cannot be obtained when we have only weak composition, as we have no method for even determining 3-consistency. For the mere purpose of deciding consistency it actually seems overly strong to require any form of k-consistency, as we are not interested in whether any consistent instantiation of k variables can be extended to k + 1 variables, but only in whether there exists at least one consistent instantiation. Therefore it might not be too weak for deciding consistency to have only algebraic closure instead of path-consistency. In the following we restrict ourselves to atomic CSPs, i.e., CSPs where all constraints are restricted to be atomic relations. If a-closure does not even decide atomic CSPs, it will not decide more general CSPs. We will later see how the results for atomic CSPs can be extended to less restricted CSPs. Let us first analyse, for some existing calculi, how the two properties (whether a-closure decides atomic CSPs, and whether weak composition differs from composition) relate. We listed the results in Table 1 and they are significant:
Proposition 1. Let R be a finite set of qualitative relations. Whether a-closure decides consistency for atomic CSPs over R is independent of whether weak composition differs from composition for relations in R.
This observation shows us that whether or not a-closure decides atomic CSPs does not depend on whether weak composition is equivalent to composition. Instead we will have to find another criterion for when a-closure decides atomic CSPs. In order to find such a criterion we will look at some examples where a-closure does not decide atomic CSPs and see if we can derive some commonalities.
Example 2 (STAR calculus [START_REF] Renz | Qualitative direction calculi with arbitrary granularity[END_REF]).
Directions between two-dimensional points are distinguished by specifying an arbitrary number of angles which separate direction sectors. The atomic relations are the sectors as well as the lines that separate the sectors (see Figure 1 left). The domain is ordered, so it is possible to compute composition. The relations are closed under composition. If more than two angles are given, then by using constraint configurations involving four or more variables, it is possible to refine the atomic relations that correspond to sectors to different particular angles (see Figure 1 right). By combining configurations that refine the same atomic relation to different angles, inconsistencies can be constructed that cannot be detected by a-closure. In this example we can see that even true composition can be too weak. Although we know the composition and all relations are closed under composition, it is possible to refine atomic relations using networks with more than three nodes.
Example 3 (INDU calculus [START_REF] Pujari | INDU: An Interval and Duration Network[END_REF]). Allen's 13 interval relations are combined with relative duration of intervals given in the form of a point algebra, i.e., INDU relations are of the form R = I^δ where I is an interval relation (precedes p, meets m, during d, starts s, overlaps o, finishes f, equal =, and the converse relations fi, oi, si, di, mi, pi) and δ a duration relation (<, >, =). This leads to only 25 atomic relations as some combinations are impossible, e.g., a{s}b enforces that the duration of a must be less than that of b. Only weak composition is used, as for example the triple a{s^<}b, a{m^<}c, c{f^<}b enforces that duration(a) < 0.5 * duration(b) and duration(c) > 0.5 * duration(b). So an instantiation where duration(a) = 0.5 * duration(b) cannot be extended to a consistent instantiation of c. In the same way it is possible to generate any metric duration constraint of the form duration(x) R α * duration(y) where R ∈ {<, >, =} and α is a rational number. Consequently, it is possible to construct inconsistent atomic CSPs which are a-closed.
In both examples it is possible to refine atomic relations to subatomic relations that have no tuples in common, i.e., which do not overlap. This can be used to construct inconsistent examples which are still a-closed. Note that in the case of the interval algebra over integers it is possible to refine atomic relations to subatomic relations, e.g., a{p}b, b{p}c leads to a{p + 2}c, where p + 2 indicates that a must precede c by at least 2 more integers than is required by the precedes relation. But since these new subatomic relations always overlap, it is not possible to construct inconsistencies which are a-closed. Let us formally define these terms.
Definition 1 (refinement to a subatomic relation). Let Θ be a consistent atomic CSP over a set A and xRy ∈ Θ a constraint. Let R' be the union of all tuples (u, v) ∈ R that can be instantiated to x and y as part of a solution of Θ. If R' ⊂ R, then Θ refines R to the subatomic relation R'.
Definition 2 (closure under constraints). Let A be a set of atomic relations. A is closed under constraints if no relation R ∈ A can be refined to non-overlapping subatomic relations, i.e., if for each R ∈ A all subatomic relations R' ⊂ R to which R can be refined have a nonempty intersection.
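The duration arithmetic behind Example 3 can be spelled out explicitly; the following short derivation is our own reconstruction of the argument, writing dur(·) for interval duration.

```latex
% a {s^<} b : a and b start together, so a is an initial piece of b;
% c {f^<} b : c and b end together, so c is a final piece of b;
% a {m^<} c : a ends exactly where c starts, and dur(a) < dur(c).
% Hence a and c tile b without overlap:
\[
  \mathrm{dur}(a) + \mathrm{dur}(c) = \mathrm{dur}(b), \qquad \mathrm{dur}(a) < \mathrm{dur}(c)
\]
\[
  \Rightarrow\; 2\,\mathrm{dur}(a) < \mathrm{dur}(a) + \mathrm{dur}(c) = \mathrm{dur}(b)
  \;\Rightarrow\; \mathrm{dur}(a) < \tfrac{1}{2}\,\mathrm{dur}(b)
  \quad\text{and}\quad \mathrm{dur}(c) > \tfrac{1}{2}\,\mathrm{dur}(b).
\]
```

This is exactly the kind of metric refinement that, as the example notes, cannot be detected by a-closure.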
In the following theorem we show that the observation made in these examples holds in general: we can prove in which cases a-closure decides atomic CSPs, and this is independent of whether weak composition differs from composition and only depends on whether the atomic relations are closed under constraints. Therefore, the new concept of closure under constraints turns out to be a very important property of qualitative reasoning.
Theorem 1. Let A be a finite set of atomic relations. Then a-closure decides consistency of CSPs over A if and only if A is closed under constraints.
Proof Sketch. ⇒: Given a set of atomic relations A = {R_1, ..., R_n}, we have to prove that if A is not closed under constraints, then a-closure does not decide consistency over A. A is not closed under constraints means that there is an atomic relation R_k ∈ A which can be refined to non-overlapping subatomic relations using atomic sets of constraints over A. We prove this by constructing an a-closed but inconsistent set of constraints over A for those cases where A is not closed under constraints. We assume without loss of generality that if A is not closed under constraints, there are at least two non-overlapping subatomic relations R_k^1, R_k^2 of R_k which can be obtained using the atomic sets of constraints Θ_1, Θ_2 (both are a-closed and consistent and contain the constraint xR_k y). We combine all tuples of R_k not contained in R_k^1 or R_k^2 into R_k^m and have that R_k^1 ∪ R_k^2 ∪ R_k^m = R_k and that R_k^1, R_k^2, R_k^m are pairwise disjoint. We can now form a new set of atomic relations A' where R_k is replaced with R_k^1, R_k^2, R_k^m (analogously for R_k^-1). All the other relations are the same as in A. The weak composition table of A' differs from that of A for the entries that contain R_k or R_k^-1. Since R_k^1 and R_k^2 can be obtained by atomic sets of constraints over A, the entries in the weak composition table of A' cannot be the same for R_k^1 and for R_k^2. Therefore, there must be a relation R_l ∈ A for which the entries of R_l ⋄ R_k^1 and of R_l ⋄ R_k^2 differ. We assume that R_l ⋄ R_k = S and that R_l ⋄ R_k^1 = S \ S_1 and R_l ⋄ R_k^2 = S \ S_2, with S, S_1, S_2 ∈ 2^A and S_1 ≠ S_2. We choose a non-empty one, say S_1, and can now obtain an inconsistent triple xR_k^1 y, zR_l x, zS_1 y for which the corresponding triple xR_k y, zR_l x, zS_1 y is consistent. Note that we use A' only for identifying R_l and S_1. If we now consider the set of constraints Θ = Θ_1 ∪ {zR_l x, zS_1 y} (where z is a fresh variable not contained in Θ_1), then Θ is clearly inconsistent, since Θ_1 refines xR_k y to xR_k^1 y and since R_l ⋄ R_k^1 = S \ S_1. However, applying the a-closure algorithm to Θ (resulting in Θ') using the weak composition table of A does not result in an inconsistency, since a-closure does not see the implicit refinement of xR_k y to xR_k^1 y.
⇐: Proof by induction over the size n of Θ. Induction hypothesis: P(n) = {For sets Θ of atomic constraints of size n, if it is not possible to refine atomic relations to non-overlapping subatomic relations, then a-closure decides consistency for Θ.} This is clear for n ≤ 3. Now take an a-closed atomic CSP Θ of size n+1 over A and assume that P(n) is true. For every variable x ∈ Θ let Θ_x be the atomic CSP that results from Θ by removing all constraints that involve x. Because of P(n), Θ_x is consistent for all x ∈ Θ. Let R_x be the subatomic relation to which R is refined in Θ_x and let R' be the intersection of R_x for all x ∈ Θ.
If R' is non-empty for every R ∈ A, i.e., if it is not possible to refine R to non-overlapping subatomic relations, then we can choose a consistent instantiation of Θ_x which contains for every relation R only tuples of R'. Since no relation R of Θ_x can be refined beyond R' by adding constraints of Θ that involve x, it is clear that we can then also find a consistent instantiation for x, and thereby obtain a consistent instantiation of Θ.
This theorem is not constructive in the sense that it does not help us to prove that a-closure decides consistency for a particular calculus. But such a general constructive theorem would not be possible, as it depends on the semantics of the relations and on the domains whether a-closure decides consistency. This has to be formally proven in a different way for each new calculus and for each new domain. What our theorem gives us, however, is a simple explanation of why the applicability of a-closure is independent of whether weak composition differs from composition: it makes no difference whatsoever whether non-overlapping subatomic relations are obtained via triples of constraints or via larger constellations (as in Example 2). In both cases a-closure cannot detect all inconsistencies. Our theorem also gives us both a simple method for determining when a-closure does not decide consistency and a very good heuristic for approximating when it does. Consider the following heuristic: does the considered domain enable more distinctions than those made by the atomic relations, and if so, can these distinctions be enforced by a set of constraints over existing relations? This works for the three examples we already mentioned. It also works for any other calculus that we looked at. Take for instance the containment algebra, which is basically the interval algebra without distinguishing directions [START_REF] Ladkin | On binary constraint problems[END_REF]. So having directions would be a natural distinction, and it is easy to show that we can distinguish relative directions by giving constraints: if a is disjoint from b and c touches b but is disjoint from a, then c must be on the same side of a as b. This can be used to construct a-closed inconsistent configurations. For RCC8, the domain offers plenty of other distinctions, but none of them can be enforced by giving a set of RCC8 constraints. This gives a good indication that a-closure decides consistency (which has been proven in [START_REF] Renz | On the complexity of qualitative spatial reasoning: A maximal tractable fragment of the Region Connection Calculus[END_REF]). If we restrict the domain of RCC8, e.g., to two-dimensional discs of the same size, then we can find distinctions which can be enforced by giving constraints. When defining a new qualitative calculus by defining a set of atomic relations, it is desirable that algebraic closure decides consistency of atomic CSPs. Therefore, we recommend testing the above heuristic when defining a new qualitative calculus and making sure that the new atomic relations are closed under constraints. In Section 5 we discuss the consequences of having a set of relations which is not closed under constraints.
Effects on qualitative reduction techniques
In the analysis of qualitative calculi one usually tries to transfer properties such as tractability or the applicability of the a-closure algorithm for deciding consistency to larger sets of relations, and ideally to find the maximal sets that have these properties.
Such general techniques involve composition of relations in one way or another and it is not clear whether they can still be applied if only weak composition is known and if they have been properly applied in the literature. It might be that replacing composition with weak composition and path-consistency with a-closure is sufficient, but it might also be that existing results turn out to be wrong or not applicable. In this section we look at two important general techniques for extending properties to larger sets of relations. The first technique is very widely used and is based on the fact that a set of relations S ⊆ 2 A and the closure S of S under composition, intersection, and converse have the same complexity. This results from a proof that the consistency problem for S (written as CSPSAT( S)) can be polynomially reduced to CSPSAT(S) by inductively replacing each constraint xRy over a relation R ∈ S \S by either xSy ∧xT y or by xSz •zT y for S, T ∈ S [START_REF] Renz | On the complexity of qualitative spatial reasoning: A maximal tractable fragment of the Region Connection Calculus[END_REF]. If we have only weak composition, then we have two problems. First, we can only look at the closure of S under intersection, converse, and weak composition (we will denote this weak closure by S w ). And, second, we can replace a constraint xRy over a relation R ∈ S w \S only by xSy ∧xT y or by xSz zT y for S, T ∈ S. For xSz zT y we know that it might not be a consistent replacement for xRy. In Figure 2 we give an example for a consistent set of INDU constraints which becomes inconsistent if we replace a non-atomic constraint by an intersection of two weak compositions of other INDU relations. So it is clear that this widely used technique does not apply in all cases where we have only weak composition. In the following theorem we show when it can still be applied. Theorem 2. Let R be a finite set of qualitative relations and S ⊆ R a set of relations. Then CSPSAT( S w ) can be polynomially reduced to CSPSAT(S) if a-closure decides consistency for atomic CSPs over R. Proof Sketch. Consider an a-closed set Θ of constraints over S w . When inductively replacing constraints over S w with constraints over S, i.e., when replacing xRy where R ∈ S w with xSz and zT y where S T = R and S, T ∈ S and z is a fresh variable, then potential solutions are lost. However, all these triples of relations (R, S, T ) are minimal, i.e., every atomic relation of R can be part of a solution of the triple. No solutions are lost when replacing constraints with the intersection of two other constraints or by a converse constraint. Let Θ be the set obtained from Θ after inductively replacing all constraints over S w with constraints over S. Since potential solutions are lost in the transformation, the only problematic case is where Θ is consistent but Θ is inconsistent. If Θ is consistent, then there must be a refinement of Θ to a consistent atomic CSP Θ a . For each constraint xRy of Θ which is replaced, all the resulting triples are minimal and are not related to any other variable in Θ. Note that due to the inductive replacement, some constraints will be replaced by stacks of minimal triples. Therefore, each R can be replaced with any of its atomic relations without making the resulting stacks inconsistent. Intersecting Θ with Θ a followed by computing a-closure will always result in an a-closed set. Since the stacks contain only minimal triples, it is clear that they can be subsequently refined to atomic relations. 
The relations between the fresh variables and the variables of Θ can also be refined to atomic relations as they were unrelated before applying a-closure. The resulting atomic CSP will always be a-closed, so Θ must be consistent if a-closure decides atomic CSPs. This covers all the cases in the middle column of Table 1 such as RCC8, but does not cover those cases in the bottom right cell. This result is very important for all existing and future calculi where only weak composition is used. We know now that for all calculi where a-closure decides atomic CSPs, complexity results can be transferred between a set of relations and its closure, independent of whether we are using weak composition or composition. This also resolves all doubts (Düntsch, personal communication) about applying this technique to RCC8. On the other hand, we cannot use this popular method of transferring complexity results in cases where we have only weak composition and a-closure does not decide atomic CSPs. For all existing calculi that fall into this category, we should reconsider the complexity analysis. In the following section we will have a look at the complexity results of INDU and PIDN and it turns out that some of the complexity results in the literature are wrong. The second general technique which is very useful for analysing computational properties and identifying large tractable subsets is the refinement method [START_REF] Renz | Maximal tractable fragments of the region connection calculus: A complete analysis[END_REF]. It gives a simple algorithm for showing if a set S ⊆ 2 A can be refined to a set T ⊆ 2 A in the sense that for every path-consistent set of constraints Θ over S and every relation S ∈ S we can always refine S to a subrelation T ⊆ S with T ∈ T . If path-consistency decides consistency for T then it must also decide consistency for S. Theorem 3. Let R be a finite set of qualitative relations for which a-closure decides atomic CSPs. The refinement method also works for weak composition by using the a-closure algorithm instead of the path-consistency algorithm. Proof Sketch. Any a-closed triple of variables is minimal. So if a relation S can be refined to T in any a-closed triple that contains S, then the refinement can be made in any a-closed network without making the resulting network not a-closed. If a-closure decides the resulting network, then it also decides the original network. Note that the refinement method only makes sense if a-closure decides atomic CSPs as the whole purpose of the refinement method is to transfer applicability of a-closure for deciding consistency from one subset of R to another. A road map for analysing qualitative calculi Using the results of our paper we can now analyse new and revisit existing qualitative spatial and temporal calculi. When defining a new set of atomic relations and the domain is not ordered, we have to assume that we have only weak composition unless we can prove the contrary. The most important step is to prove whether a-closure decides atomic CSPs for our new calculus. It is possible to use the heuristic given in the previous section, but if a-closure decides atomic CSPs, then this has to be proven using the semantics of the relations. 
If it turns out that a-closure decides atomic CSPs then we can proceed by applying the techniques we discussed in the previous section, i.e., we can identify larger tractable subsets by using the refinement method and by computing the closure of known tractable subsets under intersection, converse and (weak) composition. But what if it does not? When a-closure does not decide atomic CSPs This is the case for many calculi in the literature (see e.g. Table 1) and will probably be the case for many future calculi. As shown in Theorem 1 this means that it is possible to enforce non-overlapping subatomic relations. If we only get finitely many non-overlapping subatomic relations, as, e.g., for the containment algebra, then it is best to study the calculus obtained by the finitely many new atomic relations and treat the original calculus as a subcalculus of the new calculus. If we do get infinitely many nonoverlapping subatomic relations, however, then we suggest to proceed in one of two different ways. Let us first reflect what it means to have infinitely many non-overlapping subatomic relations: An important property of a qualitative calculus is to have only finitely many distinctions. So if we have to make infinitely many distinctions, then we do not have a qualitative calculus anymore! Therefore we cannot expect that qualitative methods and techniques that are only based on (weak) compositions help us in any way. This is also the reason why we analysed the techniques in the previous section only for cases where a-closure decides atomic CSPs, i.e., where we do have qualitative calculi. 3One way of dealing with these calculi is to acknowledge that we do not have a qualitative calculus anymore and to use algorithms that deal with quantitative calculi instead. It might be that consistency can still be decided in polynomial time using these algorithms. Another way is to find the source that makes the calculus quantitative and to eliminate this source in such a way that it has no effect anymore, e.g., by combining atomic relations to form coarser atomic relations. Both of these ways were considered for the STAR calculus [START_REF] Renz | Qualitative direction calculi with arbitrary granularity[END_REF]. A third way, which is sometimes chosen, but which we discourage everyone from taking, is to look at 4-consistency. Problems with using 4-consistency We sometimes see results in the literature of the form "4-consistency decides consistency for a set of relations S ⊆ 2 A and therefore S is tractable." What we have not seen so far is a proper 4-consistency algorithm. For infinite domains where we only manipulate relation symbols, a 4-consistency algorithm must be based on composition of real ternary relations. The question then is how can we show that the composition of the ternary relations is not just a weak composition. Just like computing composition for binary relations, we might have to check an infinite number of domain values. Consequently, there could be no 4-consistent configurations at all or it could be NP hard to show whether a configuration is 4-consistent. This makes these results rather useless from a practical point of view and certainly does not allow the conclusion that these sets are tractable. 
We illustrate this using an example from the literature where 4-consistency was wrongly used for proving that certain subsets of INDU or PIDN [START_REF] Pujari | INDU: An Interval and Duration Network[END_REF][START_REF] Pujari | A new framework for reasoning about points, intervals and durations[END_REF] are tractable. Consider the following two results:
1. 4-consistency decides consistency for S ⊆ 2^A.
2. Deciding consistency is NP-hard for T ⊆ S.
The first result was proven for some subsets of INDU and PIDN [START_REF] Pujari | INDU: An Interval and Duration Network[END_REF][START_REF] Pujari | A new framework for reasoning about points, intervals and durations[END_REF]. We obtained the second result by a straightforward reduction of the NP-hard consistency problem of PDN [START_REF] Navarrete | On point-duration networks for temporal reasoning[END_REF] to INDU and PIDN. It is clear from this example that 4-consistency results cannot be used for proving tractability. Validity and applicability of similar results in the literature should be reconsidered as well.
Conclusions
We started with the well-known observation that in many cases in qualitative spatial and temporal reasoning only weak composition can be determined. This requires us to use a-closure instead of path-consistency. We thoroughly analysed the consequences of this fact and showed that the main difficulty is not whether weak composition differs from composition, but whether it is possible to generate non-overlapping subatomic relations, a property which we prove to be equivalent to a-closure deciding atomic CSPs. Since this occurs also in cases where weak composition is equal to composition, our analysis affects not only cases where only weak composition is known (which are most cases where the domains are not ordered) but qualitative spatial and temporal calculi in general. We also showed under which conditions some important techniques for analysing qualitative calculi can be applied, and finally gave a roadmap for how qualitative calculi should be developed and analysed. As a side effect of our analysis we found that some results in the literature have to be reconsidered.
Fig. 1. A STAR calculus with 3 angles resulting in 13 atomic relations (left). The right picture shows an atomic CSP whose constraints enforce that D must be 45 degrees to the left of B, i.e., the constraint B{11}D is refined by the other constraints to the line orthogonal to relation 2. Therefore, the atomic relation 11 can be refined to a subatomic relation using the given constraints.
Fig. 2. (1) A consistent INDU network which becomes inconsistent when replacing b{s^<, d^<}a with (2). From (1) we get b > 0.5 * a and from (2) we get b < 0.5 * a.
Footnote: It is unlikely to find a version of Theorem 2 for cases where a-closure does not decide atomic CSPs. As a heuristic, the following property could be considered: xRy can only be replaced with xSz, zTy if for all weak compositions R_i ⋄ R_j that contain R the intersection of all real compositions R_i • R_j is nonempty.
National ICT Australia is funded through the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council.
47,724
[ "1003925", "997069" ]
[ "74661", "247329" ]
01487498
en
[ "info", "scco" ]
2024/03/04 23:41:48
2004
https://hal.science/hal-01487498/file/KR04CondottaJF.pdf
Jean-François Condotta Gérard Ligozat email: [email protected]
Axiomatizing the Cyclic Interval Calculus
Keywords: qualitative temporal reasoning, cyclic interval calculus, cyclic orderings, completeness, ℵ0-categorical theories.
In this formalism, the basic entities are intervals on a circle, and using considerations similar to Allen's calculus, sixteen basic relations are obtained, which form a jointly exhaustive and pairwise disjoint (JEPD) set of relations. The purpose of this paper is to give an axiomatic description of the calculus, based on the properties of the meets relation, from which all other fifteen relations can be deduced. We show how the corresponding theory is related to cyclic orderings, and use the results to prove that any countable model of this theory is isomorphic to the cyclic interval structure based on the rational numbers. Our approach is similar to Ladkin's axiomatization of Allen's calculus, although the cyclic structures introduce specific difficulties.
Introduction
In the domain of qualitative temporal reasoning, a great deal of attention has been devoted to the study of temporal formalisms based on a dense and unbounded linear model of time. Most prominently, this is the case of Allen's calculus, where the basic entities are intervals of the real time line, and the 13 basic relations (Allen's relations) correspond to the possible configurations of the endpoints of two intervals [START_REF]An Empirical Study of the Energy Consumption in Automotive Assembly[END_REF][START_REF] Allen | An interval-based representation of temporal knowledge[END_REF]. Other calculi such as the cardinal direction calculus (Ligozat 1998a; 1998b), the n-point calculus (Balbiani & Condotta 2002), the rectangle calculus [START_REF] Balbiani | A new tractable subclass of the rectangle algebra[END_REF] and the n-block calculus [START_REF] Balbiani | Tractability results in the block algebra[END_REF] are also based on products of the real line equipped with its usual ordering relation, hence on products of dense and unbounded linear orderings. However, many situations call for considering orderings which are cyclic rather than linear. In particular, the set of directions around a given point of reference has such a cyclic structure. This fact has motivated several formalisms in this direction: Isli and Cohn [START_REF] Isli | A new approach to cyclic ordering of 2D orientations using ternary relation algebras[END_REF] and Balbiani et al. [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF] consider a calculus about points on a circle, based on qualitative ternary relations between the points. Schlieder's work on the concepts of orientation and panorama [START_REF] Schlieder | Representing visible locations for qualitative navigation[END_REF][START_REF] Schlieder | Reasoning about ordering[END_REF] is also concerned with cyclic situations. Our work is more closely related to Balbiani and Osmani's proposal [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF], which we will refer to as the cyclic interval calculus. This calculus is similar in spirit to Allen's calculus: in the same way as the latter, which views intervals on the line as ordered pairs of points (the starting and ending point of the interval), the cyclic interval calculus considers intervals on a circle as pairs of distinct points: two points on a circle define the interval obtained when starting at the first, going (say counterclockwise) around the circle until the second point is reached.
The consideration of all possible configurations between the endpoints of two intervals defined in that way leads to sixteen basic relations, each one of which is characterized by a particular qualitative configuration. For instance, the relation meets corresponds to the case where the last point of the first interval coincides with the first point of the other, and the two intervals have no other point in common. Another interesting relation, which has no analog in the linear case, is the mmi relation1 , where the last point of each interval is the first point of the other (as is the case with two serpents, head to tail, each one of them devouring the other). This paper is concerned with giving suitable axioms for the meets relation in the cyclic case. This single relation can be used to define all other 15 relations of the formalism (there is a similar fact about the meets relation in Allen's calculus). We give a detailed description of the way in which the axiomatization of cyclic orderings -using a ternary relation described in [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF])relates to the axiomatization of cyclic intervals based on the binary relation meets. Our approach is very similar to the approach followed by Ladkin in his PhD thesis [START_REF] Ladkin | The Logic of Time Representation[END_REF] where he shows how the axiomatization of linear dense and unbounded linear orderings relates to the axiomatization proposed by Allen and Hayes for the interval calculus, in terms of the relation meets. The core of the paper, apart from the choice of an appropriate set of axioms, rests on two constructions: • Starting from a cyclic ordering, that is a set of points equipped with a ternary order structure satisfying suitable axioms , the first construction defines a set of cyclic intervals equipped with a binary meets relation; and conversely. • Starting from a set of cyclic intervals equipped with a meets relation, the second construction yields a set of points (the intuition is that two intervals which meet define a point, their meeting point) together with a ternary relation which has precisely the properties necessary to define a cyclic ordering. The next step involves studying how the two constructions interact. In the linear case, a result of Ladkin's can be expressed in the language of category theory by saying that the two constructions define an equivalence of categories. Using Cantor's theorem, this implies that the corresponding theories are ℵ 0 categorical. In the cyclic case, we prove an analogous result: here again, the two constructions define an equivalence of categories. On the other hand, as shown in [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF], all countable cyclic orderings are isomorphic. As a consequence, the same fact is true of the cyclic interval structures which satisfy the axioms we give for the relation meets. This is the main result of the paper. We further examine the connections of these results to the domain of constraint-based reasoning in the context of the cyclic interval calculus, and we conclude by pointing to possible extensions of this work. Building cyclic interval structures from cyclic orderings This section is devoted to a construction of the cyclic interval structures we will consider in this paper, starting from cyclic orderings. In the next section, we will propose a set of axioms for these structures. 
Intuitively, each model can be visualized in terms of a set of oriented arcs (intervals) on a circle (an interval is identified by a starting point and an ending point on the circle), together with a binary meets relation on the set of intervals. Specifically, two cyclic intervals (m, n) and (m', n') are such that (m, n) meets (m', n') if n = m' and n' is not between m and n, see Figure 1 (as a consequence, n = m' is the only point that the two intervals have in common). In order to build interval structures, we start from cyclic orderings [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF]. Intuitively, the cyclic ordering on a circle is similar to the usual ordering on the real line. In formal terms, a cyclic ordering is a pair (P, ≺) where P is a nonempty set of points, and ≺ is a ternary relation on P such that the following conditions are met, for all x, y, z, t ∈ P:
P1. ¬ ≺ (x, y, y);
P2. ≺ (x, y, z) ∧ ≺ (x, z, t) → ≺ (x, y, t);
P3. x ≠ y ∧ x ≠ z → y = z ∨ ≺ (x, y, z) ∨ ≺ (x, z, y);
P4. ≺ (x, y, z) ↔ ≺ (y, z, x) ↔ ≺ (z, x, y);
P5. x ≠ y → (∃z ≺ (x, z, y)) ∧ (∃z ≺ (x, y, z));
P6. ∃x, y x ≠ y.
Definition 1 (The cyclic interval structure associated to a cyclic ordering) Let (P, ≺) be a cyclic ordering. The cyclic interval structure CycInt((P, ≺)) associated to (P, ≺) is the pair (I, meets) where:
• I = {(x, y) ∈ P × P : ∃z ∈ P with ≺ (x, y, z)}. The elements of I are called (cyclic) intervals.
• meets is the binary relation defined by meets = {((x, y), (x', y')) : y = x' and ≺ (x, y, y')}.
As an example, consider the set C of all rational numbers contained in the interval [0, 2π[ with the natural counterclockwise cyclic ordering ≺ on the circle; CycInt((C, ≺)) is a cyclic interval structure (I, meets). Each element u = (x, y) of I can be viewed as the oriented arc containing all points between the points represented by x and y (we will refer to these two points as the endpoints of the cyclic interval u and denote by u- and u+, respectively, the points associated to x and y). For instance, the cyclic intervals (0, π/2), (π/2, 0) and (3π/2, π/2) are shown in Figure 2. Notice that no cyclic interval contains only one point (there are no punctual intervals), and that no interval covers the whole circle. Intuitively, two cyclic intervals are in the relation meets if and only if the ending point of the first coincides with the starting point of the other, and the intervals have no other point in common. For instance, ((3π/2, π/2), (π/2, π)) ∈ meets, while ((3π/2, π/2), (π/2, 5π/3)) ∉ meets. Let (I, meets) be a cyclic interval structure. We now show how the other fifteen basic relations of the cyclic interval calculus defined by [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF] can be defined using the meets relation. The 16 relations are denoted by the set of symbols {m, mi, ppi, mmi, d, di, f, fi, o, oi, s, si, ooi, moi, mio, eq} (where m is the meets relation). Figure 3 shows examples of these relations. More formally, the relations other than meets are defined as follows:
• u ppi v ≡def ∃w, x u m w m v m x m u,
• u mmi v ≡def ∃w, x, y, z w m x m y m z m w ∧ z m u m y ∧ x m v m w,
• u d v ≡def ∃w, x, y w m x m u m y m w ∧ v mmi w,
• u f v ≡def ∃w, x w m x m u m w ∧ v mmi w,
• u o v ≡def ∃w, x, y, z u m v m x m u ∧ v m x m y m v ∧ y m z m w,
• u s v ≡def ∃w, x, y w m x m v m w ∧ x m u m y m w,
• u ooi v ≡def ∃w, x w f u ∧ w s v ∧ x s u ∧ x f v,
• u moi v ≡def ∃w, x, y w m x m y m w ∧ y ppi u ∧ x ppi v,
• u mio v ≡def ∃w, x, y w m x m y m w ∧ x ppi u ∧ y ppi v,
• u eq v ≡def ∃w, x w m u m x ∧ w m v m x.
The relations mi, di, fi, oi, si are the converse relations of m, d, f, o, s, respectively.
Axioms for cyclic interval structures: The CycInt theory
In this section, we give a set of axioms characterising the relation meets of cyclic intervals. Several axioms are motivated by intuitive properties of models of cyclic intervals. Other axioms are axioms of the relation meets of the intervals of the line [START_REF] Ladkin | The Logic of Time Representation[END_REF][START_REF] Allen | A commonsense theory of time[END_REF] adapted to the cyclic case. In the sequel u, v, w, ... will denote variables representing cyclic intervals. The symbol | corresponds to the relation meets. The expression v_1|v_2|...|v_n with v_1, v_2, ..., v_n n variables (n > 2) is an abbreviation for the conjunction ⋀_{i=1}^{n-1} v_i|v_{i+1}. Note that the expression v_1|v_2|...|v_n|v_1 is equivalent to v_2|...|v_n|v_1|v_2. Another abbreviation used in the sequel is X(u, v, w, x). It is defined by the expression u|v ∧ w|x ∧ (u|x ∨ w|v). Intuitively, the satisfaction of X(u, v, w, x) expresses the fact that the cyclic interval u meets (is in relation meets with) the cyclic interval v, the cyclic interval w meets (is in relation meets with) the cyclic interval x, and the two meeting points are the same point. Figure 4 shows the three possible cases in which X(u, v, w, x) is satisfied by cyclic intervals on an oriented circle: (a) u|v, w|x, u|x, w|v are satisfied; (b) u|v, w|x, w|v are satisfied and u|x is not satisfied; (c) u|v, w|x, u|x are satisfied and w|v is not satisfied. Now we can give the CycInt axioms defined to axiomatize the relation meets of cyclic interval models. After each axiom we give an intuitive idea of what it expresses.
Figure 4: Satisfaction of X(u, v, w, x).
Definition 2 (The CycInt axioms)
A1. ∀u, v, w, x, y, z X(u, v, w, x) ∧ X(y, z, w, x) → X(u, v, y, z)
Given three pairs of meeting cyclic intervals, if the meeting point defined by the first pair is the same as the one defined by the second pair and the meeting point defined by the second pair is the same as the one defined by the third pair, then the first pair and the third pair of meeting cyclic intervals define the same meeting point.
A2. ∀u, v, w, x, y, z X(u, v, w, x) ∧ X(y, u, x, z) → ¬u|x ∧ ¬x|u
Two cyclic intervals with the same endpoints do not satisfy the relation meets.
A3.
∀u, v, w, x, y, z u|v ∧ w|x ∧ y|z ∧ ¬u|x ∧ ¬w|v∧ ¬u|z ∧ ¬y|v ∧ ¬w|z ∧ ¬y|x → ∃r, s, t r|s|t|r ∧ X(u, v, r, s) ∧ (X(w, x, s, t) ∧ X(y, z, t, r)) ∨ (X(w, x, t, r) ∧ X(y, z, s, t)) Three distinct meeting points can be defined by three cyclic intervals satisfying the relation meets so that these three meeting cyclic intervals cover the circle in its entirety. A4. ∀u, v, w, x, u|v ∧ w|x ∧ ¬u|x ∧ ¬w|v → (∃y, z, t, y|z|t|y ∧ X(y, z, w, x) ∧ X(t, y, u, v))∧ (∃y, z, t, y|z|t|y ∧ X(y, z, u, v) ∧ X(t, y, w, x)) Two meeting points are the endpoints of two cyclic intervals. Each one can be defined by two other cyclic intervals. A5. ∀u, v (∃w, x u|w|x|v|u) → (∃y u|y|v|u) Two meeting cyclic intervals define another cyclic interval corresponding to the union of these cyclic intervals. A6. ∃u u = u and ∀u∃v, w u|v|w|u There exists a cyclic interval and for every cyclic intervals there exist two other cyclic intervals such that they satisfy the relation meets in a cyclic manner (they satisfy the relation meets so that they cover the circle in its entirety). A7. ∀u, v (∃w, x w|u|x ∧ w|v|x) ↔ u = v There does not exist two distinct cyclic intervals with the same endpoints. A8. ∀u, v, w u|v|w → ¬u|w Two cyclic intervals separated by a third one cannot satisfy the relation meets. From these axioms we can deduce several theorems which will be used in the sequel. Proposition 1 Every structure (I, |) satisfying the CycInt axioms satisfies the following formulas: B1. ∀u, v u|v → ¬v|u B2. ∀u, v, w, x, y, z X(u, v, w, x) ∧ X(y, u, x, z) → w|v ∧ y|z B3. ∀u, v (∃w u|w|v|u) → (∃x, y u|x|y|v|u) Proof • (B1) Let u, v be two cyclic intervals satisfying u|v. Suppose that v|u is satisfied. It follows that X(u, v, u, v) and X(v, u, v, u) are satisfied. From Axiom A2 follows that u|v and v|u cannot be satisfied. There is a contradiction. • (B2) Let u,v,w,x,y,z be cyclic intervals satisfying X(u, v, w, x) and X(y, u, x, z). From Axiom A2 we can deduce that u|x and x|u are not satisfied. As X(u, v, w, x) and X(y, u, x, z) are satisfied, we can assert that y|z and w|v are satisfied. • (B3) Let u, v, w be cyclic intervals satisfying u|w|v|u. We have u|w, w|v and v|u which are satisfied. Moreover, since v|u is satisfied, from B1 we can deduce that u|v and w|w cannot be satisfied. From Axiom A4 follows that there exists cyclic intervals x, y, z satisfying x|y|z|x, X(x, y, u, w) and X(z, x, w, v). From Axiom A2 we can assert that x|w and w|x are not satisfied. From it and the satisfaction of X(x, y, u, w) ∧ X(z, x, w, v), we can assert that u|y and z|v are satisfied. We can conclude that u, v, y, z satisfy u|y|z|v|u. From cyclic interval structures back to cyclic orderings In this section, we show how to define a cyclic ordering ≺ onto a set of points from a set of cyclic intervals and a relation meets onto these cyclic intervals satisfying the CycInt axioms. The line of reasoning used is similar to the one used by Ladkin [START_REF] Ladkin | The Logic of Time Representation[END_REF] in the linear case. Indeed, intuitively, a set of pairs of meeting cyclic intervals satisfying the relation meets at a same place will represent a cyclic point. Hence, a cyclic point will correspond to a meeting place. Three cyclic points l, m, n defined in this way will be in relation ≺ if, and only if, there exist three cyclic intervals satisfying the relation meets in a cyclic manner (so that they cover the circle in its entirety) so that their meeting points are successively l, m and n. 
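Before the formal definition of the induced cyclic ordering on points, the constructions introduced so far can be made concrete. The following Python sketch is our own illustration (not from the paper): it implements the cyclic ordering ≺ on angles in [0, 2π), the meets relation of Definition 1, and the abbreviation X(u, v, w, x), and it reproduces the two meets instances given after Definition 1 as well as case (a) of Figure 4. Rational multiples of π are represented as floats purely for illustration.

```python
from math import pi

TWO_PI = 2 * pi

def ccw(a, b):
    """Counterclockwise distance from a to b on the circle [0, 2*pi)."""
    return (b - a) % TWO_PI

def prec(x, y, z):
    """Cyclic ordering: leaving x counterclockwise, y is reached strictly
    before z (and both differ from x)."""
    return 0 < ccw(x, y) < ccw(x, z)

def meets(u, v):
    """u = (x, y) meets v = (x2, y2): v starts where u ends (y = x2) and the
    endpoint y2 of v is not between x and y, i.e. prec(x, y, y2) holds
    (Definition 1)."""
    (x, y), (x2, y2) = u, v
    return y == x2 and prec(x, y, y2)

def X(u, v, w, x):
    """X(u, v, w, x) := u|v and w|x and (u|x or w|v): u meets v, w meets x,
    and the two meeting points coincide."""
    return meets(u, v) and meets(w, x) and (meets(u, x) or meets(w, v))

# The two instances discussed after Definition 1:
print(meets((3 * pi / 2, pi / 2), (pi / 2, pi)))          # True
print(meets((3 * pi / 2, pi / 2), (pi / 2, 5 * pi / 3)))  # False

# Case (a) of Figure 4: all of u|v, w|x, u|x and w|v hold at the same point 0.
u, v = (pi, 0.0), (0.0, pi / 2)
w, x = (3 * pi / 2, 0.0), (0.0, pi / 4)
print(meets(u, v), meets(w, x), meets(u, x), meets(w, v))  # True True True True
print(X(u, v, w, x))                                       # True
```

Of course this is only the standard model on the circle; the paper's goal is to characterise all such structures axiomatically.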
Now, let us give more formally the definition of this cyclic ordering. Let Proof We give the proof for Axioms P 1 and P 2 only. The proof for the other axioms is in the annex. • ∀uv, wx ∈ P, ¬ ≺ (uv, wx, wx) (P 1) Let uv, wx ∈ P. Suppose that ≺ (uv, wx, wx) is satisfied. From the definition of ≺, there exist y, z, t ∈ I satisfying y|z|t|y and such that (y, z) (u, v), (z, t) (w, x), (t, y) (w, x). owns the properties of transitivity and symmetry, in consequence, we can assert that (z, t) (t, y). From it and from the definition of , we have z|y or t|t which are satisfied. As | is an irreflexive relation, we can assert that z|y is satisfied. Moreover, y|z is also satisfied. There is a contradiction since the relation | is an asymmetric relation. • ∀uv, wx, yz, st ∈ P, ≺ (uv, wx, yz) ∧ ≺ (uv, yz, st) → ≺ (uv, wx, st) (P 2) Let uv, wx, yz, st ∈ P which satisfy ≺ (uv, wx, yz) and ≺ (uv, yz, st). From the definition of ≺ we can deduce that there exist m, n, o ∈ I satisfying m|n|o|m, mn = uv, no = wx, om = yz. On the other hand, we can assert that there exist p, q, r ∈ I satisfying p|q|r|p, pq = uv, qr = yz and rp = st. From the property of transitivity of the relation and the equalities mn = uv, pq = uv, om = yz, qr = yz, we obtain the equalities mn = pq and om = qr. Hence, from the definition of , we can assert that X(m, n, p, q) and X(o, m, q, r) are satisfied. From Theorem B2, it follows that p|n and o|r are also satisfied. From all this, we can deduce that n|o|r|p|n is satisfied. From Axiom A5, we can assert that there exists l satisfying n|l|p|n. By rotation, we deduce that p|n|l|p is satisfied. n|l and n|o are satisfied, in consequence, we have nl = no. From this equality, the transitivity of the relation and the equality no = wx, we can assert that nl = wx. As l|p and r|p are satisfied, we have the equality lp = rp. From this equality, the transitivity of the relation and the equality rp = st, we can deduce that lp = st. Consequently, p|n|l|p, pn = uv, nz = wx and zp = st are satisfied. Hence, from the definition of ≺, we can conclude that ≺ (uv, wx, st) is satisfied. Cyclic orderings yield models of CycInt In this section, we prove that every structure of cyclic intervals defined from a cyclic ordering is a model of CycInt. Theorem 2 Let (P, ≺) be a cyclic ordering. (I, |) = CycInt((P, ≺)) is a model of the CycInt axioms. Proof In the sequel, given an element u = (m, n) ∈ I, u - (resp. u + ) will correspond to m (resp. to n). Let us prove that the axioms of CycInt are satisfied by (I, |). • (A1) Let u, v, w, x, y, z ∈ I satisfying X(u, v, w, x) and X(y, z, w, x). From the definition of X we can assert that u|v and y|z are satisfied. Hence the equalities u + = v -, w + = x -and y + = z -. Moreover, from the definition of X, it follows that u|x or w|v and y|x or w|z are satisfied. Let us consider all the possible situations exhaustively: -u|x and y|x are satisfied. It follows that u + = x -and y + = x -are satisfied. Hence, we have u + = v -= w + = x -= y + = z -. -u|x and w|z are satisfied. It follows that u + = x -and w + = z -are satisfied. Consequently, u + = v -= w + = x -= y + = z -is satisfied. -w|v and y|x are satisfied. It follows that w + = v -and y + = x -are satisfied. Therefore, u + = v -= w + = x -= y + = z -is satisfied. -w|v and w|z are satisfied. It follows that w + = v -and w + = z -are satisfied. Hence, u + = v -= w + = x -= y + = z -is satisfied. Let us denote by l the identical points u + , v -, w + , x -, y + , z -. Suppose that X(u, v, y, z) is falsified. 
By using the fact that u|v and y|z are satisfied, we deduce that u|z and y|v are not satisfied. Since u + = z -and y + = v -, ≺ (u -, l, z + ) and ≺ (y -, l, v + ) are not satisfied. From P5, we get the satisfaction of ≺ (u -, z + , l) and the one of ≺ (y -, v + , l). As u|v and y|z are satisfied, ≺ (u -, l, v + ) and ≺ (y -, l, z + ) are also satisfied. Hence, by using P4, we can assert that ≺ (l, y -, v + ) and ≺ (l, v + , u -) are satisfied. From P2, it follows that ≺ (l, y -, u -) is also satisfied. From the satisfaction of ≺ (u -, z + , l) and the one of P4, it follows that ≺ (l, u -, z + ) is satisfied. By using P2, it results that ≺ (l, y -, z + ) is satisfied. Recall that ≺ (y -, l, z + ) is satisfied. From P4 and P2, it results that ≺ (y -, z + , z + ) is satisfied. From P1, a contradiction follows. Consequently, we can conclude that X(u, v, y, z) is satisfied. • (A2) Let u, v, w, x, y, z ∈ I satisfy X(u, v, w, x) and X(y, u, x, z). The following equalities are satisfied: u+ = x-and x + = u -. By using P4 and P1, we can assert that ≺ (u -, u + , x + ) and ≺ (x -, x + , u + ) cannot be satisfied. Hence, u|x and x|u are not satisfied. • (A3) Let us prove the satisfaction of Axiom A3. Let u, v, w, x, y, z ∈ I satisfying u|v, w|x, y|z, ¬u|x, ¬w|v, ¬u|z, ¬y|v, ¬w|z, ¬y|x. From the satisfaction of u|v (resp. w|x and y|z), it follows that u + = v -(resp. w + = x -and y + = z -). Let l (resp. m and n) the point defined by l = u + = v -(resp. m = w + = x - and n = y + = z -). Suppose that l = m. the equal- ity u + = v -= w + = x -is satisfied. Since w|x is true, we can deduce that ≺ (w -, x -, x + ) is also satisfied. Consequently, w -and x + are distinct points. Let us consider the three points u -, w -, x + . From P3, we can assert that only four cases are possible: u -= w -is satis- fied, u -= x + is satisfied, ≺ (w -, x + , u -) is satisfied, or ≺ (w -, u -, x + ) is satisfied. By using, P2, P3 and P4, we obtain for every case a contradiction: -u -= w -is satisfied. As w|x is satisified, ≺ (u -, x -, x + ) is also satisfied. Recall that u + = x -. It follows that u|x is satisfied. There is a contradiction. u -= x + is satisfied. As u|v and w|x are satisfied, we can assert that ≺ (u + , v -, v + ) and ≺ (w -, x -, x + ) are satisfied. Hence, ≺ (x + , v -, v + ) and ≺ (w -, v -, x + ) are also satisfied. By using P4, we can deduce that ≺ (v -, v + , x + ) and ≺ (v -, x + , w -) are satisfied. From P2 it follows that ≺ (v -, v + , w -) is also satisfied. From P4 follows the satisfaction of ≺ (w -, v -, v + ). Moreover, we have the equality w + = v -. Consequently, w|v is satisfied. There is a contradiction. -≺ (w -, x + , u -) is satisfied. From P4, we obtain the satisfaction of ≺ (x + , u -, w -). As w|x is satisfied, we deduce that ≺ (w -, x -, x + ) is satisfied. Hence, ≺ (x + , w -, x -) is also satisfied (P4). From P2, we can assert that ≺ (x + , u -, x -) is satisfied. From P4, ≺ (u -, x -, x + ) is satisfied. As x -= u + is satisfied, we can assert that u|x is satisfied. There is a contradiction. -≺ (w -, u -, x + ) is satisfied. Hence, u -and x + are distinct points. Moreover, we know that u + and x + are distinct points from the fact that x -and u + are equal. From P3, ≺ (u -, x + , x -) or ≺ (u -, x -, x + ) is satisfied. Suppose that ≺ (u -, x -, x + ) is satisfied. Since we have the equality u + = x -, u|x is satisfied. There is a contradiction. It results that ≺ (u -, x + , x -) must be satisfied. 
From the satisfaction of ≺ (w -, u -, x + ) and P4, we deduce that ≺ (u -, x + , w -) is satisfied. From the satisfaction of w|x and from P4, we can assert that ≺ (x -, x + , w -) is satisfied. ≺ (u -, x + , x -) is satisfied, hence, from P4 we can deduce that ≺ (u -, x + , x -) is satisfied. From P4, we obtain the sat- isfaction of ≺ (x -, u -, x + ). From P2, it results that ≺ (x -, u -, w -) is satisfied. Hence, ≺ (u + , u -, w -) is satisfied. From the satisfaction of u|v and from P4 it follows that ≺ (u + , v + , u -) is satisfied. From P2 we can assert that ≺ (u + , v + , w -) is satisfied. In consequence, ≺ (w + , v + , w -) is satisfied. Hence, from P4, ≺ (w -, w + , v + ) is satisfied. It results that w|v is satisfied. There is a contradiction. Consequently, we can assert that l = m. In a similar way, we can prove that l = n and m = n. Now, we know that l, m, n are distinct points. From P3, we can just examine two cases: -≺ (l, m, n) is satisfied. Let r = (n, l), s = (l, m) and t = (m, n). We have r|s|t|r which is satisfied. Suppose that u|s is falsified. It follows that ≺ (u -, l, m) is also falsified. As l is different from u -and m, we have u -= m or ≺ (u -, m, l) which is satisfied. * Suppose that u -= m is satisfied. Since u|v is satis- fied, it follows that ≺ (u -, u + , v + ) is satisfied. Con- sequently, ≺ (m, l, v + ) is true. From P4, it follows that ≺ (l, v + , m) is satisfied. From all this, the satisfaction of ≺ (l, m, n) and P2, we can assert that ≺ (l, v + , n) is satisfied. From P4, we deduce that ≺ (n, l, v + ) is satisfied. As l = v -, r|v is satisfied. * Suppose that ≺ (u -, m, l) is satisfied. From P4, it follows that ≺ (l, u -, m) is satisfied. From all this, the satisfaction of ≺ (l, m, n) and P2, we can assert that ≺ (l, u -, n) is satisfied. As u|v is satisfied, we can deduce that ≺ (u -, u + , v + ) is satisfied. Consequently, ≺ (u -, l, v + ) is also satisfied. From P4, it results that ≺ (l, v + , u -) is satisfied. From all this and the satisfaction of ≺ (l, u -, n), we can deduce that ≺ (l, v + , n) is satisfied. By using P4, we obtain the satisfaction of ≺ (n, l, v + ). As l = v -, we deduce that r|v is satisfied. It results that u|s or r|v is satisfied. Hence, X(u, v, r, s) is satisfied. With a similar line of reasoning, we can prove that X(w, x, s, t) and X(y, z, t, r) are satisfied. -≺ (l, n, m) is satisfied. Let r = (m, l), s = (l, n) and t = (n, m). We have r|s|t|r which is satisfied. In a similar way, we can prove that X(u, v, r, s), X(y, z, s, t) and X(w, x, t, r) are satisfied. • For Axioms A4-A5-A6-A7-A8, the proofs can be found in the annex. Categoricity of CycInt In this section, we establish the fact that the countable models satisfying the CycInt axioms are isomorphic. In order to prove this property, let us show that for every cyclic interval there exist two unique "endpoints". Proposition 3 Let M = (I, |) a model of CycInt. Let (P, ≺) be the structure CycPoint(M). For every u ∈ I there exist L u , U u ∈ P such that : 1. ∃v ∈ I such that (v, u) ∈ L u , 2. ∃w ∈ I such that (u, w) ∈ U u , 3. L u (resp. U u ) is the unique element of P satisfying (1.) (resp. (2.)), 4. L u = U u . Proof From Axiom A6, we can assert that there exist v, w ∈ I such that u|w|v|u is satisfied. Consequently, u|w and v|u are satisfied. By defining L u by L u = vu and U u by U u = uw, the properties (1) and (2) are satisfied. Now, let us prove that the property (3) is satisfied. Suppose that there exists L u such that there exists x ∈ I with (x, u) ∈ L u . 
We have (v, u) ≡ (x, u). It follows that L u = L u . Now, suppose that there exists U u such that there exists y ∈ I with (u, y) ∈ U u . We have (u, w) ≡ (u, y). It follows that U u = U u . Hence, we can assert that property (3) is true. Now, suppose that L u = U u . It follows that (v, u) ≡ (u, w). As a result, v|w or u|u is satisfied. We know that | is an irreflexive relation. Moreover, from Axiom A8 we can assert that v|w cannot be satisfied. It results that there is a contradiction. Hence, L u and U u are distinct elements. From an initial model of CycInt, we have seen that we can define a cyclic ordering. Moreover, from this cyclic ordering we can generate a cyclic interval model. We are going to show that this generated cyclic interval model is isomorphic to the initial cyclic interval model. satisfying v|u and u|w. Let us show that f is a one-to-one mapping. Let (uv, wx) ∈ I . We have u|v and w|x which are satisfied and u|x and w|v which are falsified (in the contrary case we would have uv = wx). From A4, it follows that there exist y, z, t satisfying y|z|t|y, X(y, z, w, x) and X(t, y, u, v). Note that L y = ty = uv and U y = yz = wx. Consequently, there exists y ∈ I such that f (y) = (uv, wx). Now, suppose that there exist u, v ∈ I such that f (u) = f (v). Suppose that f (u) = (wu, ux) and f (v) = (yv, vz). We have wu = yv and ux = vz. It follows that (w, u) (y, v) and (u, x) (v, z). From all this, we have w|u, y|v, u|x and v|z which are satisfied. Four possible situations must be considered: • w|v and u|z are satisfied. It follows that w|v|z and w|u|z are satisfied. • w|v and v|x are satisfied. It follows that w|v|x and w|u|x are satisfied. • y|u and u|z are satisfied. It follows that y|v|z and y|u|z are satisfied. • y|u and v|x are satisfied. It follows that y|v|z and y|u|z are satisfied. For each case, by using A7, we can deduce the equality u = v. Consequently, f is a one-to-one mapping. Now, let us show that u|v if, and only if, f (u)| f (v). We will denote f (u) by (wu, ux) and f (v) by (yv, vz). Suppose that u|v is satisfied. It follows that (u, x) (y, v), hence, ux = yv. For this reason, (wu, ux, vz) and ux = yv are satisfied. Hence, there exist r, s, t ∈ I such that r|s|t|r, rs = wu, st = ux and tr = vz are satisfied. f (u)| f (v) is satisfied. Now, suppose that f (u)| f (v) is satisfied. It follows that ≺ From the equalities rs = wu and st = ux, we can assert that u|x, s|t, r|s and w|u are satisfied. Moreover, one of the following cases is satisfied: • r|u and u|t are satisfied. It follows that r|u|t and r|s|t are satisfied. • r|u and s|x are satisfied. It follows that r|s|x and r|u|x are satisfied. • w|s and u|t are satisfied. It follows that w|u|t and w|s|t are satisfied. • w|s and s|x are satisfied. It follows that w|s|x and w|u|x are satisfied. For each case, from A7, we can deduce the equality u = s. From the equalities st = yv and tr = vz, we can deduce that s|t, y|v, t|r and v|z are satisfied. Moreover, one of the following cases is satisfied: • s|v and t|z are satisfied. It follows that s|t|z and s|v|z are satisfied. • s|v and v|r are satisfied. It follows that s|v|r and s|t|r are satisfied. • y|t and t|z are satisfied. It follows that y|t|z and y|v|z are satisfied. • y|t and v|r are satisfied. It follows that y|t|r and y|v|r are satisfied. For each case, from Axiom A7, we can deduce that v = t. Hence, we have the equalities u = s and v = t. We can conclude that u|v is satisfied. 
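The CycPoint construction can also be replayed on a small concrete model. The sketch below is a toy illustration (the six-point circle and all names are ours; a finite circle is of course not a model of the axioms, it is only used to make the classes easy to enumerate): it enumerates the cyclic intervals of the CycInt construction over a finite cyclic ordering, collects the pairs satisfying meets, and groups two meeting pairs (u, v) and (w, x) together exactly when u|x or w|v holds — the criterion the proof of Theorem 1 appeals to — checking that the resulting classes correspond one-to-one with the meeting places, i.e. with the points of the circle.

```python
from fractions import Fraction as F
from itertools import permutations

PTS = [F(k, 6) for k in range(6)]          # a six-point finite "circle"

def cyc(x, y, z):
    return (x < y < z) or (y < z < x) or (z < x < y)

def meets(u, v):
    (l, m), (n, o) = u, v
    return m == n and cyc(l, m, o)

# Cyclic intervals of the construction over this finite cyclic ordering:
# pairs (l, m) such that some third point n satisfies cyc(l, m, n).
intervals = [(a, b) for a in PTS for b in PTS
             if a != b and any(cyc(a, b, n) for n in PTS)]

# Pairs of intervals satisfying the meets relation.
meeting_pairs = [(p, q) for p in intervals for q in intervals if meets(p, q)]

def same_class(pair1, pair2):
    """(u, v) and (w, x) represent the same cyclic point when u|x or w|v."""
    (u, v), (w, x) = pair1, pair2
    return meets(u, x) or meets(w, v)

# The classes are exactly the meeting places: two meeting pairs are grouped
# together if and only if they share the same meeting point.
for p, q in permutations(meeting_pairs, 2):
    assert same_class(p, q) == (p[0][1] == q[0][1])

points = {p[0][1] for p in meeting_pairs}
assert points == set(PTS)
print(f"{len(meeting_pairs)} meeting pairs yield {len(points)} cyclic points")
```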
Now, let us show that two cyclic interval models generated by two countable cyclic orderings are isomorphic.

Proposition 5 Let (P, ≺) and (P′, ≺′) be two cyclic orderings with P and P′ two countable sets of points. CycInt((P, ≺)) and CycInt((P′, ≺′)) are isomorphic.

Proof Let (I, |) and (I′, |′) be defined by CycInt((P, ≺)) and CycInt((P′, ≺′)). We know that (P, ≺) and (P′, ≺′) are isomorphic [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF]. Let g be an isomorphism from (P, ≺) to (P′, ≺′). Let h be the mapping from I onto I′ defined by h((l, m)) = (g(l), g(m)). First, let us show that (g(l), g(m)) ∈ I′. As (l, m) ∈ I, there exists n ∈ P satisfying ≺ (l, m, n). It follows that ≺′ (g(l), g(m), g(n)) is satisfied. It results that (g(l), g(m)) ∈ I′. Now, let us show that for every (l, m) ∈ I′, there exists (n, o) ∈ I such that h((n, o)) = (l, m). We can define n and o by n = g⁻¹(l) and o = g⁻¹(m). Indeed, h(g⁻¹(l), g⁻¹(m)) = (g(g⁻¹(l)), g(g⁻¹(m))) = (l, m). Now, let (l, m), (n, o) ∈ I such that h((l, m)) = h((n, o)). It follows that g(l) = g(n) and g(m) = g(o). Therefore, we have l = n and m = o. Hence, we obtain the equality (l, m) = (n, o). Finally, let us show that for all (l, m), (n, o) ∈ I, (l, m)|(n, o) is satisfied iff h((l, m)) |′ h((n, o)) is satisfied. (l, m)|(n, o) is satisfied iff ≺ (l, m, o) and m = n are satisfied. Hence, (l, m)|(n, o) is satisfied iff ≺′ (g(l), g(m), g(o)) and g(m) = g(n) are satisfied. For these reasons, we can assert that (l, m)|(n, o) is satisfied iff h((l, m)) |′ h((n, o)) is satisfied. We can conclude that h is an isomorphism.

In the sequel, (Q, ≺) will correspond to the cyclic ordering on the set of rational numbers Q, defined by ≺ (x, y, z) iff x < y < z or y < z < x or z < x < y, with x, y, z ∈ Q and < the usual linear order on Q. It is time to establish the main result of this section.

Theorem 3 The theory axiomatized by CycInt is ℵ0-categorical. Moreover, its countable models are isomorphic to CycInt((Q, ≺)).

Proof Let M be a model of CycInt. M is isomorphic to CycInt(CycPoint(M)). CycInt(CycPoint(M)) is isomorphic to CycInt((Q, ≺)). By composing the isomorphisms, we obtain that M is isomorphic to CycInt((Q, ≺)).

As a direct consequence of this theorem we have that the set of the theorems of CycInt is syntactically complete and decidable.

Application to constraint networks

Balbiani and Osmani [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF] use constraint networks to represent the qualitative information about cyclic intervals. A network is defined as a pair (V, C), where V is a set of variables representing cyclic intervals and C is a map which, to each pair of variables (V i , V j ), associates a subset C ij of the set of all sixteen basic relations. The main problem in this context is the consistency problem, which consists in determining whether the network has a so-called solution: a solution is a map m from the set of variables V i to the set of cyclic intervals in C such that all constraints are satisfied. The constraint C ij is satisfied if and only if, denoting by m i and m j the images of V i and V j respectively, the cyclic interval m i is in one of the relations in the set C ij with respect to m j (the set C ij is consequently given a disjunctive interpretation in terms of constraints).

A first interesting point is the fact that the axiomatization we have obtained allows us to check the consistency of a constraint network on cyclic intervals by using a theorem prover. Indeed, the procedure goes as follows: first, translate the network (V, C) into an equivalent logical formula Φ; then, test the validity of the formula (or its validity in a specific model) by using the CycInt axiomatization.
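The translation step is purely syntactic. Below is a small Python sketch of it (the function name and the textual rendering of the formula are ours); applied to the three-variable network of Figure 7 discussed next, it prints exactly the formula Φ given there.

```python
from itertools import combinations

def network_to_formula(variables, constraints):
    """Translate a constraint network (V, C) into a formula: the existential
    closure of the conjunction, over constrained pairs, of the disjunction
    of the basic relations allowed on that pair."""
    conjuncts = []
    for vi, vj in combinations(variables, 2):
        relations = constraints.get((vi, vj))
        if relations:          # pairs left unconstrained contribute nothing
            conjuncts.append("(" + " ∨ ".join(f"{vi} {r} {vj}" for r in relations) + ")")
    return "∃" + ", ".join(variables) + " " + " ∧ ".join(conjuncts)

V = ["v1", "v2", "v3"]
C = {("v1", "v2"): ["ppi", "mi"],
     ("v1", "v3"): ["m", "mi"],
     ("v2", "v3"): ["o"]}
print(network_to_formula(V, C))
# ∃v1, v2, v3 (v1 ppi v2 ∨ v1 mi v2) ∧ (v1 m v3 ∨ v1 mi v3) ∧ (v2 o v3)
```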
As an example, consider the constraint network in Figure 7. The corresponding formula is Φ = (∃v 1 , v 2 , v 3 ) ((v 1 ppi v 2 ∨ v 1 mi v 2 ) ∧ (v 1 m v 3 ∨ v 1 mi v 3 ) ∧ (v 2 o v 3 )). In order to show that this network is consistent, we would have to prove that this formula is valid with respect to CycInt, or satisfiable for a model such as C. In order to show inconsistency, we have to consider the negation of Φ.

Usually a local constraint propagation method, called the path-consistency method, is used to solve this kind of constraint network. The method 4 consists in removing from each constraint C ij all relations which are not compatible with the constraints in C ik and C kj , for all 3-tuples i, j, k. This is accomplished by using the composition table of the cyclic interval calculus which, for each pair (a, b) of basic relations, gives the composition of a with b, that is the set of all basic relations c such that there exists a configuration of three cyclic intervals u, v, w with u a v, v b w and u c w. For instance, the composition of m with d consists in the relation ppi. The composition table of the cyclic interval calculus can be automatically computed by using our axiomatization. Indeed, in order to decide whether c belongs to the composition of a with b, it suffices to prove that the formula (∃u, v, w) (u a v ∧ v b w ∧ u c w) is valid. In order to prove that, conversely, c does not belong to this composition, one has to consider the negated formula ¬(∃u, v, w) (u a v ∧ v b w ∧ u c w).

Figure 1: Two cyclic intervals (m, n) and (m′, n′) satisfying the meets relation.
Figure 2: Three cyclic intervals.
Figure 3: The 16 basic relations of the cyclic interval calculus.
Figure 5: Satisfaction of ≺ (uv, wx, yz).
Figure 6: Every countable model of CycInt (I, |) is isomorphic to CycInt((Q, ≺)).
Figure 7: A constraint network on cyclic intervals.
Definition 4
The structure (P, ≺) obtained from (I, |) will be denoted by CycPoint((I, |)) in the sequel.
Theorem 1 The structure (P, ≺) is a cyclic ordering.
Proposition 4 Let M = (I, |) be a model of the CycInt axioms. M is isomorphic to (I′, |′) = CycInt(CycPoint(M)).
Proof Let f be the mapping from I onto I′ defined by f (u) = (L u , U u ), i.e. f (u) = (vu, uw) for any v, w ∈ I
The notation is mnemonic for meets and meets inverse.
Actually, we use what are called "standard cyclic orderings" in [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF]. We use the shorter term "cyclic ordering" in this paper.
Here we use the notation v1 m v2 m . . . m vn, where v1, v2, . . . , vn are n variables (n > 2), as a shorthand for the conjunction of the vi m vi+1 for i = 1, . . . , n-1.
In the case of cyclic interval networks, the path-consistency method is not complete even for atomic networks: path-consistency does not insure consistency.

Conclusions and further work

We have shown in this paper how the theory of cyclic orderings, on the one hand, and the theory of cyclic intervals, on the other hand, can be related.
We proposed a set of axioms for cyclic intervals and showed that each countable model is isomorphic to the model based on cyclic intervals on the rational circle. Determining whether the first order theory of the meets relation between cyclic orderings admits the elimination of quantifiers is to our knowledge an open problem we are currently examining. Another question is whether the axioms of the CycInt theory are independent. Still another interesting direction of research is the study of finite models of cyclic intervals. To this end, we will have to consider discrete cyclic orderings (which consequently do not satisfy axiom P5). This could lead to efficient methods for solving the consistency problem for cyclic interval networks: since these involve only a finite number of variables, they should prove accessible to the use of finite models.

Annex

Proof (End of proof of Theorem 1)

• ∀uv, wx, yz ∈ P, uv ≠ wx ∧ wx ≠ yz ∧ uv ≠ yz → ≺ (uv, wx, yz) ∨ ≺ (uv, yz, wx) (P 3)
Let uv, wx, yz ∈ P satisfying uv ≠ wx, wx ≠ yz and uv ≠ yz. From the definitions of P and we can assert that u|v, w|x, y|z, ¬u|x, ¬w|v, ¬u|z, ¬y|v, ¬w|z, ¬y|x are satisfied. From Axiom A3 we can deduce that there exist r, s, t satisfying r|s|t|r and such that X(u, v, r, s), X(w, x, s, t), X(y, z, t, r) or X(u, v, r, s), X(w, x, t, r), X(y, z, s, t) are satisfied. From all this, we can conclude that ≺ (uv, wx, yz) ∨ ≺ (uv, yz, wx) is satisfied.

• ∀uv, wx, yz ∈ P, ≺ (uv, wx, yz) ↔ ≺ (wx, yz, uv) ↔ ≺ (yz, uv, wx) (P 4)
Let uv, wx, yz ∈ P satisfying ≺ (uv, wx, yz). From the definition of ≺, we have u|v, w|x and y|z which are satisfied and there exist r, s, t satisfying r|s|t|r, rs = uv, st = wx and tr = yz. By rotation, we can assert that s|t|r|s is also satisfied. From this, we can deduce that ≺ (wx, yz, uv) is satisfied. In a similar way, we can prove that ≺ (wx, yz, uv) → ≺ (yz, uv, wx) and ≺ (yz, uv, wx) → ≺ (uv, wx, yz) are satisfied.

• ∀uv, wx ∈ P, uv ≠ wx → ((∃yz ∈ P, ≺ (uv, wx, yz)) ∧ (∃rs ∈ P, ≺ (uv, rs, wx))) (P 5)
Let uv, wx ∈ P such that uv ≠ wx. From the definition of P and the one of the relation we can assert that u|v, w|x, ¬u|x and ¬w|v are satisfied. From Axiom A4 we deduce that there exist y, z, t such that y|z|t|y ∧ X(y, z, w, x) ∧ X(t, y, u, v) is satisfied and that there exist q, r, s such that q|r|s|q ∧ X(q, r, u, v) ∧ X(s, q, w, x) is satisfied. Consequently, there exist y, z, t such that ≺ (yz, zt, ty), yz = wx, ty = uv are satisfied and there exist q, r, s such that ≺ (qr, rs, sq), qr = uv, sq = wx are satisfied. Hence, there exists zt ∈ P such that ≺ (wx, zt, uv) is satisfied, and there exists rs ∈ P such that ≺ (uv, rs, wx) is satisfied. From C3 we can conclude that there exists zt ∈ P satisfying ≺ (uv, wx, zt), and that there exists rs ∈ P satisfying ≺ (uv, rs, wx).

• ∃uv, wx ∈ P, uv ≠ wx. (P 6)
From Axiom A6 we can assert that there exist u, v, w satisfying u|v|w|u. Hence, there exist uv, vw, wu ∈ P such that ≺ (uv, vw, wu) is satisfied. From P1 we deduce that uv and vw are distinct classes.

Proof (End of proof of Theorem 2)

• (A4) Let u, v, w, x ∈ I satisfying u|v, w|x, ¬u|x, and ¬w|v. ≺ (u -, u + , v + ), ≺ (w -, w + , x + ) with u + = v - and w + = x - are satisfied. Let l and m be defined by l = u + = v - and m = w + = x -. Suppose that l = m. As ≺ (u -, u + , v + ) and ≺ (w -, w + , x + ) are satisfied, we have ≺ (u -, l, v + ) and ≺ (w -, l, x + ) which are also satisfied.
Hence, we have u -= l and x + = l. From P3, we can just consider three cases: P2 and P4, we can deduce a contradiction for every case. We can assert that l = m. From P5, we can deduce there exist n, o ∈ P satisfying ≺ (l, m, n) and ≺ (l, o, n). Let us define three cyclic intervals y, z, t by y = (l, m), z = (m, n) and t = (n, l). From the satisfaction of ≺ (l, m, n) and P4, we can deduce that y|z|t|y is satisfied. Let us suppose that y|x is not satisfied. As y + = x -, it follows that ≺ (y -, y + , x + ) is not satisfied. We have y -= y + and y + = x + . From P3, it follows that y -= x + or ≺ (y -, x + , y + ) is satisfied. Let us examine these two possible cases. y -= x + is satisfied. It follows that x + = l = u + = v -. From the satisfaction of w|x, we have ≺ (w -, w + , x + ) which is satisfied, with w + = x -. Since ≺ (l, m, n) is satisfied, ≺ (x + , w + , n) is also satisfied. From P4, we can deduce that ≺ (w + , x + , w -) and ≺ (w + , n, x + ) are satisfied. From P2 follows that ≺ (w + , n, w -) is satisfied. Hence, from P4, we obtain the satisfaction of ≺ (w -, w + , n). As w + = m, w|z is satisfied. -≺ (y -, x + , y + ) is satisfied. Hence, ≺ (l, x + , w + ) is satisfied. As ≺ (l, m, n) is satisfied, ≺ (l, w + , n) is also satisfied. From P4, it follows that ≺ (w + , n, l) and ≺ (w + , l, x + ) are satisfied. From P2, we can deduce that ≺ (w From P4, we have ≺ (w + , x + , w -) which is satisfied. From P2, we deduce that ≺ (w + , n, w -) is satisfied. From P4, it follows that ≺ (w -, w + , n) is satisfied. We have w + = m. It results that w|z is satisfied. Hence, X(y, z, w, x) is satisfied. In a similar way, we can prove that X(t, y, u, v) is satisfied. By defining y, z, t by y = (m, l), z = (l, o) and t = (o, m), we can also prove that X(y, z, u, v) and X(t, y, w, x) are satisfied. • (A5) Let u, v, w, x ∈ I satisfying u|w|x|v|u. We have the following equalities: Let us define l 1 (resp. l 2 , l 3 and l 4 ) by Consider the pair y = (l 1 , l 3 ). As w|x is satisfied, we can deduce the satisfaction of ≺ (l 1 , l 2 , l 3 ). Hence, we can assert that l 1 = l 3 . From P5, it follows that there exists l satisfying ≺ (l 1 , l 3 , l). It results that y = (l 1 , l 3 ) belongs to I. Suppose that u|y is not satisfied. Since u + = l 1 , ≺ (u -, l 1 , l 3 ) is not satisfied. u -and l 1 are distinct points and, l 1 and l 3 are also distinct points. From the satisfaction of v|u, we can deduce that ≺ (l 3 , u -, u + ) is satisfied. It follows that l 3 = u -. Consequently, Axiom P3 and the non satisfaction of u|y allow us to assert that ≺ (u -, l 3 , l 1 ) is satisfied. As v|u is satisfied, ≺ (l 3 , u -, l 1 ) is also satisfied. From P4 and from P2, it follows that ≺ (l 3 , u -, u -) is satisfied. From Axiom P1, it results a contradiction. In consequence, u|y is satisfied. With a similar line of reasoning, by supposing that y|v is not satisfied, we obtain a contradiction. Hence, u|y|v|u is satisfied. • (A6) From P6, we can deduce that there exist l, m ∈ P such that l = m. From P5, it follows that there exists n satisfying ≺ (l, m, n). Let u = (l, m), we have u ∈ I and u = u. Now, let us prove the second part of the axiom. Let u = (l, m) ∈ I. By definition of I, there exists n ∈ P such that ≺ (l, m, n). Let v = (m, n) and w = (n, l). From P4, ≺ (m, n, l) and ≺ (n, l, m) are satisfied. From all this, we deduce that u|v, v|w and w|u are satisfied. • (A7) Let u, v, w, x ∈ I satisfying w|u|x and w|v|x. The following equalities are satisfied: w + = u -, u + = x -, w + = v -, v + = x -. 
It follows that (u -, u + ) = (v -, v + ). Consequently, we can assert that u = v. Let u, v ∈ I such that u = v. We know that u -= u + . From P5, it follows that there exists l ∈ P satisfying ≺ (u -, u + , l). Let w = (l, u -) and x = (u + , l). From P4, we deduce that ≺ (l, u -, u + ) is satisfied. From all this, we can assert that w, x ∈ I and that w|u and u|x are satisfied. Since (u -, u + ) = (v -, v + ), we can assert that w|v|x is satisfied. • (A8) Let u, v, w ∈ I satisfying u|v|w. It follows that u + = v -and v + = w -. Moreover, as ≺ (u -, v -, v + ) is satisfied, we have v -= v + . In consequence, u + = w -. Hence, we can assert that u|w is not satisfied.
45,855
[ "1142762", "997069" ]
[ "56711", "247329" ]
01487502
en
[ "info", "scco" ]
2024/03/04 23:41:48
2004
https://hal.science/hal-01487502/file/ligozat-renz-pricai04.pdf
Gérard Ligozat Jochen Renz What is a Qualitative Calculus? A General Framework What is a qualitative calculus? Many qualitative spatial and temporal calculi arise from a set of JEPD (jointly exhaustive and pairwise disjoint) relations: a stock example is Allen's calculus, which is based on thirteen basic relations between intervals on the time line. This paper examines the construction of such a formalism from a general point of view, in order to make apparent the formal algebraic properties of all formalisms of that type. We show that the natural algebraic object governing this kind of calculus is a non-associative algebra (in the sense of Maddux), and that the notion of weak representation is the right notion for describing most basic properties. We discuss the ubiquity of weak representations in various guises, and argue that the fundamental notion of consistency itself can best be understood in terms of consistency of one weak representation with respect to another. Introduction What is a qualitative temporal or spatial calculus? And: why should we care? An obvious, if not quite satisfactory way of answering the first question would consist in listing some examples of fairly well-known examples: on the temporal side, Allen's interval calculus [START_REF] Allen | Maintaining knowledge about temporal intervals[END_REF] is the most famous candidate; others are the point calculus [START_REF] Vilain | Constraint propagation algorithms for temporal reasoning[END_REF], the pointand-interval calculus [START_REF] Dechter | Temporal Constraint Networks[END_REF], generalized interval calculi [START_REF] Ligozat | On generalized interval calculi[END_REF], or the INDU calculus [START_REF] Pujari | A new framework for reasoning about points, intervals and durations[END_REF]; on the spatial side, there are Allen-like calculi, such as the directed interval calculus [START_REF] Renz | A spatial Odyssey of the interval algebra: 1. Directed intervals[END_REF], the cardinal direction calculus [START_REF] Ligozat | Reasoning about cardinal directions[END_REF], which is a particular case of the n-point calculi [START_REF] Balbiani | Spatial reasoning about points in a multidimensional setting[END_REF], the rectangle calculus [START_REF] Balbiani | A model for reasoning about bidimensional temporal relations[END_REF], and more generally the n-block calculi [START_REF] Balbiani | A tractable subclass of the block algebra: constraint propagation and preconvex relations[END_REF], as well as calculi stemming from the RCC-like axiomatics, such as the RCC-5 and RCC-8 calculi [START_REF] Randell | A spatial logic based on regions and connection[END_REF], and various kinds of calculi, such as the cyclic interval calculus [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF], the star calculi [START_REF] Mitra | Qualitative Reasoning with Arbitrary Angular Directions[END_REF], or the preference calculi [START_REF] Duentsch | Tangent circle algebras[END_REF]. Why should we care? A first reason is that, as becomes soon apparent after considering some of the examples, many calculi share common properties, and are used in analogous ways: Take for instance Allen's calculus. It makes use of a set of basic relations, and reasoning uses disjunctions of the basic relations (representing incomplete knowledge), also called (disjunctive) relations. 
A relation has a converse relation, and relations can be composed, giving rise to an algebraic structure called Allen's algebra (which is a relation algebra, in Tarski's sense [START_REF] Tarski | On the calculus of relations[END_REF]). In applications, the knowledge is represented by temporal networks, which are oriented graphs whose nodes stand for intervals, and labels on the arcs which are relations. In this context, a basic problem is determining whether a given network is consistent (the problem is known to be NPcomplete, [START_REF] Vilain | Constraint propagation algorithms for temporal reasoning[END_REF]). Finally, when a network is consistent, finding a qualitative instantiation of it amounts to refining the network to an atomic sub-network which is still consistent: and this can be checked at the algebraic level. Thus, it makes sense to ask the question: to what extent do those properties extend to the other calculi we mentioned above? As first discussed in [START_REF] Ligozat | Spatial and Temporal Reasoning: Beyond Allen's Calculus[END_REF], it soon appears that some properties of Allen's calculus do not extend in general. Some disturbing facts: -As remarked by [START_REF] Egenhofer | Relation Algebras over Containers and Surfaces: An Ontological Study of a Room Space[END_REF][START_REF] Ligozat | Spatial and Temporal Reasoning: Beyond Allen's Calculus[END_REF], the algebras of some calculi are not relation algebras in the sense of Tarski, but more general algebras called non-associative algebras by Maddux (relation algebras being the particular case of associative non-associative algebras). In fact, the INDU algebra is only a semi-associative algebra. -The natural or intended models of the calculus may not be models in the strong sense or, in algebraic terms, representations of the algebra. This is no new realization: Allen's composition, for instance, expresses necessary and sufficient conditions only if the intervals are in a dense and unbounded linear ordering. But what is less known, apart from the fact that it may be interesting to reason in weaker structures, e.g., about intervals in a discrete linear ordering, is the fact that all such models correspond to weak representations of the algebra, in the sense of [START_REF] Ligozat | Weak Representations of Interval Algebras[END_REF]. -For some calculi, such as the containment algebra [START_REF] Ladkin | On Binary Constraint Problems[END_REF] or the cyclic interval calculus [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF], it has been observed that some finite atomic constraint networks which are algebraically closed3 are not consistent. Again, this phenomenon is best expressed, if not explained, in terms of weak relations. -For Allen's calculus, any consistent atomic network is in fact k-consistent, for all k < n, if it has n nodes. Again, the analogous result is false for many calculi, and considering the various weak representations helps to explain why it may be so. If we can answer this last question, we have some hope of developing general methods which could be used for whole classes of calculi, instead of specific ones which have to be reinvented for each particular calculus. 
Although we do not consider this particular aspect in this paper, an example of a general concept which is valid for a whole class of calculi is the notion of pre-convexity [START_REF] Ligozat | Tractable relations in temporal reasoning: pre-convex relations[END_REF] which has been shown as providing a successful way of searching for tractable classes, at least for formalisms based on linear orderings such as Allen's calculus. The purpose of this paper is to give a precise technical answer to the first question: what is a qualitative calculus? The answer involves a modest amount of -actually, two -algebraic notions, which both extend standard definitions in universal algebra: the notion of a non-associative algebra (which generalizes that of a relation algebra), and the notion of a weak representation, (which generalizes that of a representation). This paper provides a context for discussing these various points. In section 2, the general construction of JEPD relations is presented in terms of partition schemes. The main operation in that context is weak composition, whose basic properties are discussed. Section 3 describes some typical examples of the construction. It is shown in Section 4 that all partition schemes give rise to non-associative algebras, and in Section 5 that the original partition schemes are in fact weak representations of the corresponding algebra. A proposal for a very general definition of a qualitative calculus is presented in Section 6 as well as a description of the various guises into which weak representations appear: both as particular kind of network and as natural universes of interpretation. Section 7 is concerned with the basic notion of consistency, which appears as a particular case of a more general notion of consistency of one weak representation with respect to another. Developing a new calculus Although there seems to be almost no end to defining qualitative spatial or temporal calculi, most constructions are ultimately based on the use of a set of JEPD (jointly exhaustive and pairwise disjoint4 ) relations. This will be our starting point for defining a generic qualitative calculus, in a very general setting. Partition schemes We start with a non-empty universe U , and consider a partition of U × U into a family of non-empty binary relations (R i ) i∈I : U × U = i∈I R i (1) The relations R i are called basic relations. Usually, calculi defined in this way use a partition into a finite number of relations. In order to keep things simple, we assume I to be a finite set. In concrete situations, U is a set of temporal, spatial, or spatio-temporal entities (time points, intervals, regions, etc.). Among all possible binary relations, the partition selects a finite subset of "qualitative" relations which will be a basis for talking about particular situations. For instance, in Allen's calculus, U is the set of all intervals in the rational line, and any configuration is described in terms of the 13 basic relations. We make some rather weak assumptions about this setup. First, we assume that the diagonal (the identity relation) is one of the R i s, say R 0 : R 0 = ∆ = {(u, v) ∈ U × U | u = v} (2) Finally, we choose the partition in such a way that it is globally invariant under conversion. Recall that, for any binary relation R, R ⌣ is defined by: R ⌣ = {(u, v) ∈ U × U | (v, u) ∈ R} (3) We assume that the following holds: (∀i ∈ I)(∃j ∈ I) R ⌣ i = R j (4) Definition 1. 
A partition scheme is a pair (U, (R i ) i∈I ) , where U is a non-empty set and (R i ) i∈I a partition of U × U satisfying conditions ( 2) and ( 4). Describing configurations Once we have decided on a partition scheme, we have a way of describing configurations in the universe U . Intuitively, a configuration is a (usually finite) subset V ⊆ U of objects of U . By definition, given such a subset, each pair (u, v) ∈ V ×V belongs to exactly one R i for a well-defined i. Later, we will think of V as a set of nodes of a graph, and of the map ν : V × V → I as a labeling of the set of arcs of the graph. Clearly, ν(u, u) is the identity relation R 0 , and ν(v, u) is the transpose of ν(u, v). The resulting graphs are called constraint networks in the literature. More generally, we can express constraints using Boolean expressions using the R i s. In particular, constraint networks using disjunctive labels are interpreted as conjunctions of disjunctive constraints represented by unions of basic relations on the labels. Weak composition Up to now, we did not consider how constraints can be propagated. This is what we do now by defining the weak composition of two relations. Recall first the definition of the composition R • S of two binary relations R and S: (R • S) = {(u, v) ∈ U × U | (∃w ∈ U ) (u, w) ∈ R & (w, v) ∈ S} (5) Weak composition, denoted by R i ⋄R j , of two relations R i and R j is defined as follows: (R i ⋄ R j ) = k∈J R k where k ∈ J if and only if (R i • R j ) ∩ R k ̸ = ∅ (6) Intuitively, weak composition is the best approximation we can get to the actual composition if we have to restrict ourselves to the language provided by the partition scheme. Notice that weak composition is only defined with respect to the partition, and not in an absolute sense, as is the case for the "real" composition. At this level of generality, some unpleasant facts might be true. For instance, although all relations R i are non-empty by assumption, we have no guarantee that R i ⋄R j , or R i • R j for that matter, are non-empty. A first remark is that weak composition is in a natural sense an upper approximation to composition: Lemma 1. For any i, j ∈ I: R i ⋄ R j ⊇ R i • R j Proof. Any (u, v) ∈ R i • R j is in some (unique) R k for a well-defined k. Since this R k has an element in common with R i • R j , R k must belong to R i ⋄ R j . ✷ Lemma 2. For any i, j, k ∈ I: (R i ⋄R j ) R k = ∅ if and only if (R i •R j ) R k = ∅ Proof. Because of Lemma 1, one direction is obvious. Conversely, if (R i ⋄ R j ) R k is not empty, then, since (R i ⋄ R j ) is a union of R l s, R k is contained in it. Now, by definition of weak composition, this means that R k intersects R i • Rj. ✷ The interaction of weak composition with conversion is an easy consequence of the corresponding result for composition: Lemma 3. For all i, j ∈ I: (R i ⋄ R j ) ⌣ = R ⌣ j ⋄ R ⌣ i 2. Weak composition and seriality In many cases, the relations in the partition are serial relations. Recall that a relation R is serial if the following condition holds: (∀u ∈ U )(∃v ∈ U ) such that (u, v) ∈ R (7) Lemma 4. If the relations R and S are serial, then R • S is serial, (hence it is nonempty). Proof. If R and S are serial, then, for an arbitrary u, choose first w such that (u, w) ∈ R, then v such that (w, v) ∈ S. Then (u, v) ∈ (R • S). ✷ As a consequence, since all basic relations are non-empty, the weak composition of two basic relations is itself non-empty. Lemma 5. If the basic relations are serial, then ∀i ∈ I: j∈I (R i ⋄ R j ) = U × U Proof. 
We have to show that, for any given i, and any pair (u, v), there is a j such that (u, v) is in R i ⋄ R j . We know that (u, v) ∈ R k , for some well-defined k. Because R i and R k are serial, for all t there are x and y such that (t, x) ∈ R i and (t, y) ∈ R k . Therefore (x, y) ∈ R ⌣ i • R k , so R ⌣ i • R k is non-empty. Moreover, there is one well- defined j such that (x, y) ∈ R j . Hence (t, y) is both in R k and in R i • R j . Therefore, R k ⊆ (R i ⋄ R j ), hence (u, v) ∈ (R i ⋄ R j ). ✷ Examples of partition schemes Example 1 (The linear ordering with two elements). Let U = {a, b} a set with two elements. Let R 0 = {(a, a), (b, b)}, R 1 = {(a, b)}, R 2 = {(b, a)}. The two-element set U , in other words, is linearly ordered by R 1 (or by R 2 ). Then R 1 • R 1 = R 2 • R 2 = ∅, R 1 • R 2 = {(a, a)}, and R 2 • R 1 = {(b, b)}. Hence R 1 ⋄ R 1 = ∅, R 2 ⋄ R 2 = ∅, R 1 ⋄ R 2 = R 0 , and R 2 ⋄ R 1 = R 0 . R 2 ). Then R 1 •R 1 = {(a, c)}, R 2 •R 2 = {(c, a)}, R 1 •R 2 = R 2 •R 1 = {(a, a), (b, b), (a, b), (b, a)}. Consequently, R 1 ⋄R 1 = R 1 , R 2 ⋄R 2 = R 2 , R 1 ⋄R 2 = R 2 ⋄R 1 = U ×U . Example 3 (The point algebra). The standard example is the point algebra, where U is the set Q of rational numbers, and R 1 is the usual ordering on Q, denoted by <. R 2 is the converse of R 1 . Because this ordering is dense and unbounded both on the left and on the right, we have R 1 • R 1 = R 1 , R 2 • R 2 = R 2 , R 2 • R 1 = R 1 • R 2 = U × U . Example 4 (Allen's algebra). Here U is the set of "intervals" in Q, i.e., of ordered pairs (q 1 , q 2 ) ∈ Q×Q such that q 1 < q 2 . Basic relations are defined in the usual way [START_REF] Allen | Maintaining knowledge about temporal intervals[END_REF]. Since Q is dense and unbounded, weak composition coincides with composition [START_REF] Ligozat | Weak Representations of Interval Algebras[END_REF]. Example 5 (Allen's calculus on integers). U is the set of intervals in Z, that is, of pairs (n 1 , n 2 ) ∈ Z × Z such that n 1 < n 2 . Weak composition differs from composition in this case: e.g., we still have p ⋄ p = p, but the pair ([0, 1], [2, 3]) is in p, but not in p • p. 4 The algebras of qualitative calculi Algebras derived from partition schemes Now we take an abstract algebraic point of view. For each i ∈ I, we introduce a symbol r i (which refers to R i ) and consider the set B = {r i | i ∈ I}. Let A be the Boolean algebra of all subsets of B. The top element of this algebra is denoted by 1, and the bottom element (the empty set) by 0. Union, intersection and complementation are denoted by +, • and -, respectively. Let 1 ′ denote {r 0 }. We still denote by r ⌣ i the operation of conversion. On this Boolean algebra, the weak composition function defines an operation which is usually denoted by ;. When tabulated, the corresponding table is called the weak composition table of the calculus. The operation of composition on basic symbols is extended to all subsets as follows: For a, b ∈ A, (a ; b) = i,j (r i ; r j ), where r i ∈ a and r j ∈ b. ( ) 8 Since the algebraic setup reflects facts about actual binary relations, the algebra we get in this way would be a relation algebra in Tarski's sense, if we considered tion. In the general case, however, what we are considering is only weak composition, an approximation to actual composition. 
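Before turning to the algebraic consequences, note that definition (6), and the weak composition table it induces, can be computed mechanically for any finite partition scheme. The following Python sketch (all names are ours) does this for the two-element linear ordering of Example 1, recovers the compositions stated there, and extends the operation to unions of basic relations as in (8); on this table it also exhibits the failure of associativity that is discussed below.

```python
from itertools import product

# Example 1: the linear ordering with two elements a, b.
U = ("a", "b")
R = {"R0": {("a", "a"), ("b", "b")},      # the identity relation
     "R1": {("a", "b")},
     "R2": {("b", "a")}}

def compose(S, T):
    """Ordinary relational composition S ∘ T."""
    return {(u, v) for (u, w1) in S for (w2, v) in T if w1 == w2}

def weak_compose(i, j):
    """R_i ⋄ R_j: union of the basic relations meeting R_i ∘ R_j (definition (6))."""
    c = compose(R[i], R[j])
    return {k for k in R if R[k] & c}

table = {(i, j): weak_compose(i, j) for i, j in product(R, repeat=2)}
assert table["R1", "R1"] == set() and table["R2", "R2"] == set()
assert table["R1", "R2"] == {"R0"} and table["R2", "R1"] == {"R0"}

def dot(x, y):
    """Extend ; from basic symbols to arbitrary unions, as in (8)."""
    return {k for i in x for j in y for k in table[i, j]}

# (r1 ; r2) ; r2 = r2, whereas r1 ; (r2 ; r2) = 0: the operation is not associative.
assert dot(dot({"R1"}, {"R2"}), {"R2"}) == {"R2"}
assert dot({"R1"}, dot({"R2"}, {"R2"})) == set()
print("weak composition table of Example 1 computed; ; is not associative")
```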
What happens is that we get a weaker kind of algebra, namely, a non-associative algebra [START_REF] Maddux | Some varieties containing relation algebras[END_REF][START_REF] Hirsch | Relation Algebras by Games[END_REF]: Definition 2. A non-associative algebra A is a tuple A = (A, +, -, 0, 1, ; , ⌣, 1 ′ ) s.t.: 1. (A, +, -, 0, 1) is a Boolean algebra. 2. 1 ′ is a constant, ⌣ a unary and ; a binary operation s. t., for any a, b, c ∈ A: (a) (a ⌣ ) ⌣ = a (b) 1 ′ ; a = a ; 1 ′ = a (c) a ; (b + c) = a ; b + a ; c (d) (a + b) ⌣ = a ⌣ + b ⌣ (e) (a -b) ⌣ = a ⌣ -b ⌣ (f) (a ; b) ⌣ = b ⌣ ; a ⌣ (g) (a ; b) • c ⌣ = 0 if and only if (b ; c).a ⌣ = 0 A non-associative algebra is a relation algebra if it is associative. Maddux [START_REF] Maddux | Some varieties containing relation algebras[END_REF] also introduced intermediate classes of non-associative algebras between relation algebras (RA) and general non-associative algebras (NA), namely weakly associative (WA) and semi-associative (SA) algebras. These classes form a hierarchy: NA ⊇ WA ⊇ SA ⊇ RA (9) In particular, semi-associative algebras are those non-associative algebras which satisfy the following condition: For all a, (a ; 1) ; 1 = a ; 1. 1. The algebraic structure associated to a partition scheme is a non-associative algebra. If the basic relations are serial, it is a semi-associative algebra. Proof. We have to check points (2(a-g)) of Def.2 (checking the validity on basic relations is enough). The first six points are easily checked. The last axiom, the triangle axiom, holds because of lemma 2. If all basic relations are serial, the condition for semi-associativity holds, because, by lemma 5, (a ; 1) = 1 for all basic relations a. ✷ What about associativity? The non associative algebras we get are not in general associative. E.g., the algebra of Example 1 is not associative: ((r 1 ; r 2 ) ; r 2 ) = (1 ′ ; r 2 ) = r 2 , whereas (r 1 ; (r 2 ; r 2 )) = (r 1 ; 0) = 0. Although it satisfies the axiom of weak associativity [START_REF] Maddux | Some varieties containing relation algebras[END_REF], it is not semiassociative, since for instance (r 1 ; 1) ; 1 = 1 whereas r 1 ; (1 ; 1) = r 1 + 1 ′ . If weak composition coincides with composition, then the family (R i ) i∈I is a proper relation algebra, hence in particular it is associative. However, this sufficient condition is not necessary, as Example 2 shows: although the structure on the linear ordering on three elements has a weak composition which is not composition, it defines the point algebra, which is a relation algebra, hence associative. An example of an algebra which is semi-associative but not associative is the INDU calculus [START_REF] Balbiani | On the Consistency Problem for the INDU Calculus[END_REF]. The semi-associativity of INDU is a consequence of the fact that all basic relations are serial. Weak representations In the previous section, we showed how a qualitative calculus can be defined, starting from a partition scheme. The algebraic structure we get in this way is a non-associative algebra, i.e., an algebra that satisfies all axioms of a relation algebra, except possibly associativity. Conversely, what is the nature of a partition scheme with respect to the algebra? The answer is that it is a weak representation of that algebra. The notion of a weak representation we use here 5 was first introduced in [START_REF] Ligozat | Weak Representations of Interval Algebras[END_REF] for relational algebras. It extends in a natural way to non-associative algebras. 
Definition 3. Let A be a non-associative algebra. A weak representation of A is a pair (U, ϕ) where U is a non-empty set, and ϕ is a map of A into P(U × U ), such that:
1. ϕ is a homomorphism of Boolean algebras.
2. ϕ(1 ′ ) = ∆ = {(x, y) ∈ U × U | x = y}.
3. ϕ(a ⌣ ) is the transpose of ϕ(a).
4. ϕ(a ; b) ⊇ ϕ(a) • ϕ(b).

Example 6. Take a set U = {u 1 , u 2 , u 3 } with three elements. Let ϕ be defined by: ϕ(o) = {(u 1 , u 2 )}, ϕ(o ⌣ ) = {(u 2 , u 1 )}, ϕ(m) = {(u 1 , u 3 )}, ϕ(m ⌣ ) = {(u 3 , u 1 )}, ϕ(d) = {(u 3 , u 2 )}, ϕ(d ⌣ ) = {(u 2 , u 3 )}, ϕ(eq) = {(u 1 , u 1 ), (u 2 , u 2 ), (u 3 , u 3 )}, and ϕ(a) = ∅ for any other basic relation a in Allen's algebra. Then (U, ϕ) is a weak representation of Allen's algebra which can be visualized as shown in Fig. 1(a).

Example 7 (The point algebra). A weak representation of this algebra is a pair (U, ≺), where U is a set and ≺ is a linear ordering on U . It is a representation iff ≺ is dense and unbounded. Fig. 1(b) shows a weak representation with three points v 1 , v 2 , v 3 .

Partition schemes and weak representations

Now we come back to the original situation where we have a universe U and a partition of U × U constituting a partition scheme. Consider the pair (U, ϕ), where ϕ : A → P(U × U ) is defined on the basic symbols by: ϕ(r i ) = R i (11) and is extended to the Boolean algebra in the natural way: For a ∈ A let ϕ(a) = ⋃ ri∈a ϕ(r i ) (12)

Proposition 2. Given a partition scheme on U , define ϕ as above. Then the pair (U, ϕ) is a weak representation of A.

Proof. The only point needing a proof is concerned with axiom 4. For basic symbols, ϕ(r i ; r j ) = R i ⋄ R j , by definition, while ϕ(r i ) • ϕ(r j ) = R i • R j . By lemma 1, the former relation contains the latter. The result extends to unions of relations. ✷

From this proposition we can assert the (obvious) corollary:

Corollary 1. The weak representation associated to a partition scheme is a representation if and only if weak composition coincides with composition.

6 What is a qualitative calculus?

We now have a general answer to our initial question: what is a qualitative calculus?

Definition 4. A qualitative calculus is a triple (A, U, ϕ) where: 1. A is a non-associative algebra. 2. (U, ϕ) is a weak representation of A.

The ubiquity of weak representations

Summing up, we started with a partition scheme and derived an algebra from it. This algebra, in all cases, is a non-associative algebra. It may or may not be a relation algebra. If the partition scheme is serial, it is a semi-associative algebra. In all cases, anyway, the original partition scheme defines a weak representation of the algebra. In the following sections, we show that weak representations appear both as constraints (a-closed, normalized atomic networks) and as universes of interpretation. Consequently, many notions of consistency are related to morphisms between weak representations.

Weak representations as constraint networks

Recall that a (finite) constraint network on A is a pair N = (N, ν), where N is a (finite) set of nodes (or variables) and ν a map ν : N × N → A. For each pair (i, j) of nodes, ν(i, j) is the constraint on the arc (i, j). A network is atomic if ν is in fact a map into the set of basic relations (or atoms) of A. It is normalized if ∀i, j ∈ N ν(i, j) = 1 ′ if i = j, and ∀i, j ∈ N ν(j, i) = ν(i, j) ⌣ . A network N ′ = (N, ν ′ ) is a refinement of N if ∀i, j ∈ N we have ν ′ (i, j) ⊆ ν(i, j). Finally, a network is algebraically closed, or a-closed, if ∀i, j, k ∈ N ν(i, j) ⊆ ν(i, k) ; ν(k, j).
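Whether a given finite network is normalized, atomic, or a-closed can be read off directly from a (weak) composition table. The sketch below is a minimal illustration (all names are ours, and the point calculus with its usual composition table is only used as a convenient toy): it checks the three properties for a three-node atomic network over the relations <, =, >.

```python
def converse_label(label, conv):
    """Converse of a (possibly disjunctive) label, atom by atom."""
    return {conv[a] for a in label}

def compose_labels(x, y, table):
    """Extend the composition table from atoms to disjunctive labels."""
    return {c for a in x for b in y for c in table[a, b]}

def is_normalized(nu, nodes, conv, identity):
    return all(nu[i, i] == {identity} for i in nodes) and \
           all(nu[j, i] == converse_label(nu[i, j], conv)
               for i in nodes for j in nodes)

def is_atomic(nu, nodes):
    return all(len(nu[i, j]) == 1 for i in nodes for j in nodes)

def is_a_closed(nu, nodes, table):
    """nu(i, j) ⊆ nu(i, k) ; nu(k, j) for all i, j, k."""
    return all(nu[i, j] <= compose_labels(nu[i, k], nu[k, j], table)
               for i in nodes for j in nodes for k in nodes)

# Toy data: the point calculus with atoms <, =, > and its usual composition table.
atoms = ["<", "=", ">"]
conv = {"<": ">", ">": "<", "=": "="}
table = {("<", "<"): {"<"}, (">", ">"): {">"},
         ("<", ">"): set(atoms), (">", "<"): set(atoms)}
for a in atoms:
    table["=", a] = {a}
    table[a, "="] = {a}

# A three-node atomic network: node 0 < node 1 < node 2, and 0 < 2.
nodes = [0, 1, 2]
nu = {(i, i): {"="} for i in nodes}
for i, j in [(0, 1), (1, 2), (0, 2)]:
    nu[i, j] = {"<"}
    nu[j, i] = {">"}

assert is_normalized(nu, nodes, conv, "=")
assert is_atomic(nu, nodes)
assert is_a_closed(nu, nodes, table)
print("the network is normalized, atomic and a-closed")
```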
Let (N, ν) be a network, and consider for each atom a ∈ A the set ρ(a) = {(i, j) ∈ N × N | ν(i, j) = a}. This defines a map from the set of atoms of A to the set of subsets of N × N , which is interpreted as providing the set of arcs in the network which are labeled by a given atom. If the network is atomic, any arc is labeled by exactly one atom, i.e., the set of non-empty ρ(a) is a partition of N × N labeled by atoms of A. If it is normalized, this partition satisfies the conditions (2) and (4) characterizing a partition scheme. If the network is a-closed, then (N, ρ), where ρ is extended to A in the natural way, i.e., as ρ(b) = ⋃ a∈b ρ(a), is a weak representation of A.

Conversely, for any weak representation (U, ϕ), we can interpret U as a set of nodes, and ϕ(r i ) as the set of arcs labeled by r i . Hence each arc is labeled by a basic relation, in such a way that (v, u) is labeled by r ⌣ i if (u, v) is labeled by r i , and that for all u, v, w the composition of the label on (u, w) with that on (w, v) contains the label on (u, v). Hence a weak representation is an a-closed, normalized atomic network.

Considering a weak representation in terms of a constraint network amounts to seeing it as an intensional entity: it expresses constraints on some instantiation of the variables of the network. Now, weak representations are at the same time extensional entities: as already apparent in the discussion of partition schemes, they also appear as universes of interpretation.

Weak representations as interpretations

Many standard interpretations of qualitative calculi are particular kinds of weak representations of the algebra, namely, representations. Allen's calculus, e.g., is usually interpreted in terms of the representation provided by "intervals", in the sense of strictly increasing pairs in the rational or real line. It has been pointed out less often in the literature that in many cases weak representations, rather than representations, are what the calculi are actually about. As already discussed in [START_REF] Ligozat | Weak Representations of Interval Algebras[END_REF], a finite weak representation of Allen's algebra can be visualized in terms of finite sets of intervals on a finite linear ordering.

Fig. 2. A general notion of consistency (the maps ρ : A → P(N × N ), ϕ : A → P(U × U ) and (h × h)* relating P(N × N ) to P(U × U )).

More generally, restricting the calculus to some sub-universe amounts to considering weak representations of Allen's algebra: for instance, considering intervals on the integers (Example 5) yields a weak representation. It also makes sense to consider the problem of determining whether constraint networks are consistent with respect to this restrictive interpretation. Encountering the notion of seriality is not surprising. Recall that a constraint network is k-consistent if any instantiation of k - 1 variables extends to k variables. In particular, a network is 2-consistent if any instantiation of one variable extends to two variables. Hence a partition scheme is serial if and only if the (possibly infinite) "network" U (or weak representation) is 2-consistent. Many natural calculi have consistent networks which are not 2-consistent, e.g., Allen's calculus on integers. Although the 2-element network with constraint d is consistent, it is not 2-consistent: if an interval x has length one, there is no interval y such that ydx.

What is consistency?
The preceding discussion shows that a weak representation can be considered alternatively as a particular kind of constraint network (an atomic, normalized and a-closed one), or as a universe of interpretation. Now, a fundamental question about a network is whether it is consistent with respect to a given domain of interpretation. Intuitively, a network N = (N, ν) is consistent (with respect to a calculus (A, U, ϕ)) if it has an atomic refinement N ′ = (N, ν ′ ) which is itself consistent, that is, the variables N of N can be interpreted in terms of elements of U in such a way that the relations prescribed by ν ′ hold in U . More specifically, if (N, ν ′ ) is a-closed, normalized, and atomic, consider the associated weak representation (N, ρ). Then the consistency of the network with respect to the weak representation (U, ϕ) means that there exists an instantiation h : N → U such that, for each atom a ∈ A, (i, j) ∈ ρ(a) implies (h(i), h(j)) ∈ ϕ(a). Hence consistency of such a network appears as a particular case of compatibility between two weak representations. This means that in fact consistency is a property involving two weak representations: Definition 5. Let N = (N, ρ) and U = (U, ϕ) be two weak representations of A. Then N is consistent with respect to U if there exists a map h : N → U such that the diagram in Fig. 2 commutes, that is, for each a ∈ A, (i, j) ∈ ρ(a) implies (h(i), h(j)) ∈ ϕ(a). This generalization of the notion of consistency emphasizes the fact that it is a notion between two weak representations, where one is interpreted in intentional terms, while the other is used in an extensional way, as a universe of interpretation. Example 8 (The point algebra). A weak representation in that case is a linearly ordered set. Consider two such weak representations (N, ≺ N ) and (U, ≺ U ). Then (N, ≺ N ) is consistent with respect to (U, ≺ U ) iff there is a strictly increasing map h : N → U . Inconsistent weak representations In that light, what is the meaning of the existence of inconsistent weak representations? Examples of finite atomic a-closed networks which are not consistent exist e.g. for the cyclic interval calculus or the INDU calculus [START_REF] Ligozat | Spatial and Temporal Reasoning: Beyond Allen's Calculus[END_REF]. In such cases, the universe of interpretation of the calculus (such as intervals on a rational circle, or intervals with duration) has too much additional and constraints on its relations for the network to take them into account. Characterizing the cases where this can happen seems to be an open problem in general. Conclusions This paper proposes to introduce a shift of perspective in the way qualitative calculi are considered. Since Allen's calculus has been considered as a paradigmatic instance of a qualitative calculus for more than two decades, it has been assumed that the algebraic structures governing them are relation algebras, and that the domains of interpretation of the calculi should in general be extensional or, in algebraic terms, representations of these algebras. These assumptions, however, have been challenged by a series of facts: some calculi, as first shown in [START_REF] Egenhofer | Relation Algebras over Containers and Surfaces: An Ontological Study of a Room Space[END_REF], then by [START_REF] Ligozat | Spatial and Temporal Reasoning: Beyond Allen's Calculus[END_REF], involve non-associative algebras. Also, for many calculi, the domains of interpretation may vary, and do not necessarily constitute representations. 
We argued in this paper that a qualitative calculus should be defined abstractly as a triple consisting of a non-associative algebra and a weak representation of that algebra. This abstract definition makes apparent the fact that particular kinds of networks on the one side, and representations of the algebras on the other side, are ultimately of a common nature, namely, both are particular kinds of weak representations. This last fact has of course been known before: for instance, the work described in [START_REF] Hirsch | Relation Algebras by Games[END_REF] is about trying to construct representations of a given relation algebra by incrementally enriching a-closed networks using games à la Ehrenfeucht-Fraissé. However, we think that putting qualitative calculi in this setting provides a clear way of considering new calculi, as well as an agenda for questions to be asked first: what are the properties of the algebra involved? What are weak representations? Are the intended interpretations representations of the algebra? When are weak representations consistent with respect to which weak representations? A further benefit of the framework is that it makes clearly apparent what consistency really means: consistency of a network (a network is a purely algebraic notion) with respect to the calculus is a particular case of consistency between two weak representations: it can be defined as the possibility of refining the network into a weak representation which is consistent wrt. the one which is part of the calculus considered. Obviously, defining a general framework is only an initial step for studying the new problems which arise for calculi which are less well-behaved than Allen's calculus. A first direction of investigation we are currently exploring consists in trying to get a better understanding of the relationship between consistency and the expressiveness of constraint networks. So we cannot hope to have general methods and have to look closer at what the calculi have to offer. Defining a family of calculi by giving examples amounts to a partial extensional definition. But what would an intensional definition be? Example 2 ( 2 The linear ordering with three elements). Let U = {a, b, c} a set with three elements. Let R 0 = {(a, a), (b, b), (c, c)}, R 1 = {(a, b), (b, c), (a, c)}, R 2 = {(b, a), (c, b), (c, a)}. Here, the three-element set U is linearly ordered by R 1 (or by Fig. 1 . 1 Fig. 1. A weak representation of Allen's algebra (a) and of the point algebra (b) We use the term algebraically closed, or a-closed, to refer to the notion which is often (in some cases incorrectly) referred to as path-consistency: for any 3-tuple (i, j, k) of nodes, composing the labels on (i, k) and (k, j) yields a result which contains the label on (i, j). Contrary to one of the authors' initial assumption, the JEPD acronym does not seem to be related in any way to the JEPD hypothesis in biblical exegesis, where J, E, P, D stand for the Jehovist, Elohist, Priestly and Deuteronomist sources, respectively! This notion is not to be confused with weak representability as used by Jónsson, see[START_REF] Jónsson | Representation of modular lattices and relation algebras[END_REF][START_REF] Hirsch | Relation Algebras by Games[END_REF]. ⋆⋆ National ICT Australia is funded through the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council.
34,597
[ "997068", "1003925" ]
[ "247329", "488648" ]
00635903
en
[ "shs" ]
2024/03/04 23:41:48
2011
https://shs.hal.science/halshs-00635903/file/2011GervaisMerchantOrFrenchAtlantic.pdf
A Merchant or a French Atlantic? Eighteenth-century account books as narratives of a transnational merchant political economy Pierre GERVAIS, University Paris 8 / UMR 8533 IDHE On June 1, 1755, an anonymous clerk in Bordeaux merchant Abraham Gradis's shop took up a large, leather-bound volume, containing 267 sheets of good paper, each page thinly ruled in red. The first page was headed by the legally compulsory stamp of approval of an official, in this case provided by Pierre Agard, third consul of the Chambre de commerce of Bordeaux. Agard had signed the volume on May 13, just a week after having taken up his position, and had thus made it fit for use as a merchant record. 1 Right under this certification, our clerk wrote in large cursive across the page 'Laus Deo Bordeaux ce Prem r Juin 1755,' and proceeded to copy the first entry of what was a new account book: a list of all 'Bills receivable,' that is, debts owed his master. This inaugural act, however, went unnoticed, and one would find no mention of it either in Gradis's letters or in the later accounts of historians. Opening an account book for anybody connected with the market place was a humdrum affair at the end of the eighteenth century, so much so that to this day we tend to take for granted the meaning of such a gesture. What would be more natural than wanting to record one's transactions, customers, and the ubiquitous mutual credit which was a necessary part of commercial life? Merchant practice has given rise to a very vast body of historiography; from the solid baseline provided by the classic studies by Paul Butel, Charles Carrière or André Lespagnol as well as Bernard Bailyn, David Hancock or Cathy Matson on the British side, it has developed into one of the major topics of historical conversation in the past twenty years with the rapid development of the Atlantic paradigm, which gave 'le doux commerce' center stage as the key force behind European expansion, and possibly the organizing principle of what has come to be called the 'Atlantic world'. 2 But account books have remained on the margins of this conversation, perhaps because the very activity they embodied was rather pedestrian in the 1700s. Accounting history has traced the rise of double-entry accounting, a sophisticated method of financial control which was the primary -though not exclusivesource for accounting as we know it today and as far back as the Renaissance. By the end of the seventeenth century, double-entry was well-known among elite traders, but most accounting was still done in a simpler single-entry system. In this respect, the eighteenth century is better known as the era in which accounting for costs started to develop along with the new, large productive ventures of the early agricultural and industrial revolutions, while, in sharp contrast with ironworks and noble estates, merchant accounting remained primarily concerned with financial transactions. Moreover, outside of a select group of large companies and international traders, which had to innovate because of the complexities of their multinational operations, most eighteenth-century merchants held to practices already well-established in the two preceding centuries. As a result, merchant accounting is largely seen as a rather uneventful branch of commercial life until the rapid spread of new, more elaborate management techniques associated with the Industrial revolution in the nineteenth century. 
3 There is more to Abraham Gradis's account book, however, than this straightforward and whiggish tale of order and progress, for the figures it contains express the underlying mechanics of European mercantile expansion, and thus raise the question of its nature. That European expansion was trade-based is hardly a new idea, but why exactly trade expandedwhy merchants filled more books with more accounts-is not as simple a question as it sounds. The standard economic approach posits the quasi natural expression of an urge to expand among economic agents, particularly merchants, once the proper institutional environment had been created; secure property rights through limitation of royal power, for instance, or more open institutions through which people would be empowered to escape extended family or tribal constraints, generated new incentives for those willing to shed old routines. The discussion thus revolves around the presence of these incentives, while the acquisitive impulse itself is considered a given. Indeed, this view of economic expansion as a natural consequence of new incentives dovetails smoothly with descriptions of the First Industrial Revolution as a set of innovations in the productive sector also enabled through a propitious social-cultural environment. The 'Industrial Enlightenment' generated a new level of incentives and opportunities, freeing yet more energies and imagination on the road to industrial capitalism, but the basic mechanism was the same: homo oeconomicus -usually male in such accounts-saw a new field of opportunities once barriers had been removed. For a trade which promoted market unification and competitive intensification, account books first and foremost represented a recording tool which helped rationalize the decision process, and there is no reason to assume that their analysis would pose any particular challenge. This construction also fits well with a regional and even nationalized view of Atlantic trade, since a generalized economic pattern of growth would take varied local forms, depending on local political circumstances. [START_REF] Acemoglu | The Rise of Europe: Atlantic Trade, Institutional Change, and Economic Growth[END_REF] But in the past fifteen years or so, scholars specializing in the French Old Regime have started offering a different model, in which they stress the time-and space-specific dimensions of economic activity. In these accounts, European economic expansion followed rules and paths which were peculiar to the Early Modern era. Price-setting mechanisms, for instance, are described as primarily operating in periodic fashion and within regulatory and social limits, with profit itself a result of both experience and anticipation. In this universe, prices could not play the informational and distributive role assigned by modern economists. While progress was possible, it was not a straightforward result of competition and reward, in what were effectively 'priceless markets'. Expansion ocurred not so much by changing prices or productivity than by playing on segmented, spatially segregated niche markets, through product innovation more often than not. Profit distribution was dependent on a complex intertwining and hierarchization of activities, embodied in far-reaching sub-contracting networks and cartels. Last, but not least, the transition to the First Industrial Revolution represented a break with past practices, rather than being a gradual evolution of techniques. 
In this transition, we see trade as a separate sphere with its own rules and own separate developmental process. "Progress" does not necessarily occur, or at least its presence is open to question, because there is no assumption that the system will work to maximize the efficiency of resource allocation. Similarly, the question of whether these processes were heterogeneous, depending on regional variables, or on state, national or local institutions, or whether they were generalized and uniform, is left wide open. [START_REF] Grenier | L'économie d'Ancien Régime[END_REF] This debate has direct bearing on the analysis of French activity in the Caribbean trade, because depending on which side one picks, it leads to two very different ways of analyzing geographic space and its precise role in the period of 'first globalization.' [START_REF]A term used[END_REF] How was the fact of a French colonial Empire articulated in a more general movement in which trade expanded throughout the colonial sphere? If we accept the idea that the expansion of trade was a direct reflection of a timeless acquisitive impulse on the part of merchants, then the identification of the Caribbean as a separate sphere was primarily a political phenomenon, before being an economic one or even a social one. The creation of French Caribbean markets was consequently a function of royal policies, within a broader frame of market development. Conversely, the segmented and monopolistic character of Early Modern markets leads to the possibility that the French Caribbean islands were a separate market, or even a set of separate markets, deliberately constructed within a broader course of imperial economic development. Royal policy was an acknowledgement of a state of affairs on the ground as much as a contribution to it. The present paper questions the very notion of a 'French' Atlantic. The concept of a French Atlantic makes sense in a world structured by the colonial policies of European States. It makes less sense if that world turns out to have been a mosaic of places held together by the bonds of exchange relationships as much as by the bonds of Empire. This article focuses on the sources of the commercial activity which made the French Caribbean, in particular, Saint-Domingue, Martinique, and Guadaloupe, the pride of the eighteenth-century French Empire. I concentrate on the counting houses of Bordeaux, Nantes and Saint-Malo. In these places, the slave trade was organized and financed, planters were bankrolled, and sugar and other colonial products were brought back to be redispatched throughout Europe. Complex business webs were built to deal with what was arguably the most wide-ranging endeavour devised by economic agents anywhere, but these webs were not designed within a regionalized framework, at least not in the national or imperial sense. Account books left no space for place or borders. Instead, they structured relationships around very different notions of interpersonal credit and risk. This does not mean that the French Caribbean -or more broadly the French Empire -was irrelevant to the way merchants operated. To understand how place and politics interacted with trade, however, we have to understand according to which principles trade was organized in the first place. Account books provide us with a perfect tool to grasp this organization. Accounting was primarily a way of listing debts and loans, and as such was the most direct expression of the key relation of power in the Early Modern era, that of credit. 
[START_REF] Fontaine | L'économie morale: pauvreté, crédit et confiance dans l'Europe pré-industrielle[END_REF] It gave this relation its grammar, and underpinned all its manifestations. Each account was a narrative summarizing the interaction between two very specific partners within this very specific universe of credit. The fact that accounting was peripherally concerned with profit calculations or strategic decisions points to the hierarchy of priorities which an actor on Early Modern markets had to adopt in order to be successful; providing credit came first, bottom-line profit was a distant second, and cost issues were even farther behind. Within these constraints, merchant activity necessarily transcended both imperial and regional boundaries; it was articulated around the personal, not the political or regional, a fact which is clearly apparent in account books. Of course, borders did offer avenues for comparative advantage and provided ways to thwart competition from "foreign" merchant networks, just as much as regional, ethnic, or religious kinship ties could be used to reinforceme these networks. The importance of these possibilities was abundantly underscored by the exclusif and by other non-tariff barriers, as well as by the role of kinship in international trade. But regardless of the context in which it was deployed, credit still had to stand for a whole complex of interpersonal links, well below -or beyond -the national or regional level, and only partly quantifiable. As will be demonstrated in the case of the House of Gradis, there is good reason to believe that these links, more than any other ingredient of merchant life, were the defining force behind merchant strategies, and unified the merchant world in ways beyond the reach of all other centrifugal forces. All merchants operated in the same way, assuredly in a very segmented universe, but with a full consciousness of the underlying unity of commercial life. Regions and empires did exist, and did play a role, but their roles were strictly constrained by the rules of merchant exchanges. * Recording transactions in books, in a written manner, was a legal necessity in 'law merchant,' i. e. in the body of judicial decisions which provided precedent and guidance to jurisdictions having to adjudicate conflicting claims among traders. A written record of a transaction, appearing seamlessly as one item among a chronologically arranged series, provided solid proof that a transaction had taken place. This usage, already nearly universally enforced by customs in Western Europe, became a legal obligation in France with the Ordonnance de 1673. Knowledge of accounting came first in the list of skills one had to have in order to bear the title of Master merchant (Title 1, Article IV); a compulsory balancing of accounts had to take place at least every year between parties to a contract (Title 1, Articles VII and VIII); and a whole, though rather short, chapter was devoted to the issue of books (Title 3, 'Des livres et registres des négocians, marchands et banquier'). As with merchant law, however, recording daily transactions in order was enough; what was required was a 'journal' or 'daybook', containing 'all their trade, their bills of exchange, the debts they owe and that are owed to them, and the money they used for their house expenses,' and written 'in continuity, ordered by date, with no white space left' [between two transactions]. 
[START_REF] Sallé | a lawyer in the Paris parliament[END_REF] Any calculation above and beyond this simple act of recording was unnecessary. Admittedly, well-kept accounts were useful whenever they had to be balanced, in the case of a death, a bankruptcy or the dissolution of a partnership. But 'useful' did not mean 'necessary,' and it is hard to believe that generations of merchants would have filled endless volumes with tiny figures simply to spare some work to their creditors or executors. In truth, with a well-kept journal, the work of ventilating operations between accounts to calculate the balance on each of them could be postponed until it became necessary. And it was not necessary as often as one would think: the Ordonnance de 1673 quoted above prescribed compulsory settlements of accounts every six months or year, depending on the kind of goods traded, which implies that left to their own devices, traders would not necessarily have bothered to settle accounts every year -and indeed the book with which we started, Gradis's June 1755 journal, shows no trace of balancing the accounts all the way through to 1759. [START_REF]Ordonnance du commerce de 1673[END_REF] Even the most elaborate form of accounting, namely, double-entry accounting, was seldom used for calculating profits and helping managerial decision. In the absence of standardized production and enforceable norms of quality, each transaction was largely an act of faith on the part of the buyer. After all, the buyer was almost never enough of an expert on a given product to be able to detect hidden faults and blemishes in quality. A trader was thus at the mercy of his suppliers for the quality of his goods. The problem was identical for sales, since large merchants had to sell at least part of their goods through commission merchants living in far-away markets. These agents alone could gauge the state of a local market and maximize the returns of a sale; their principal could simply hope that his trust was not misplaced. Moreover, the slow flow of imperfect information meant that markets for any product could fluctuate wildly, suddenly and unexpectedly, so that even a venture with the best suppliers and the most committed selling agents could come to grief. In the last analysis, forecasts were at best guesswork, past experience was not a useful tool for short-term predictions, and the valuation of each good was the result of an ad hoc negotiation entailing both an informed decision on the value of each particular good -this piece of cloth, that barrel of port-and a bet on the market prices at the future moment of the eventual resale. [START_REF] Yamey | The "particular gain or loss upon each article we deal in": an aspect of mercantile accounting, 1300-1800[END_REF] If a simple record of transactions was enough to fulfill legal obligations, and more complex records were not necessarily very useful for profit calculations or to help the decision-making process given the shifts in markets and the uncertainties in supplies, why then were such records kept at all? To answer this, one has to understand what was recorded -and here we turn back to our Bordeaux merchant, Abraham Gradis, and his account book. [START_REF]'the account book of the David Gradis & Fils partnership[END_REF] Parts of this book, and elements of the preceding one, for 1751-1754, were analyzed for the purpose of this paper, in order to get a quantitative grasp on what kind of operation was recorded. 
[START_REF][END_REF] Gradis was an international trader, active in the French colonies. Much of his activity consisted in sending supplies to the French Caribbean and Canada, and importing colonial goods in return. As a Jewish trader, he could not at first own plantations directly, but maintained an extensive network of correspondents in the colonies. In Gradis's books, 217 persons or families owned an account active between October 1754 and September 1755. Out of these 217 people or kin groups, 54 can be geographically located through a specific reference in the books. Ten of them lived in Martinique, Guadeloupe and Saint-Domingue, including the Révérends Pères Jacobinss and individuals from well-known planter families. There were another seven correspondents in in Quebec, including François Bigot, intendant of Nouvelle-France -an official connection which would land Gradis in the middle of the Affaire du Canada after 1760, when Bigot would be accused of corruption. Of course, with 75 percent of the accounts not identified, one can easily assume that Gradis's network included significantly more correspondents in the French colonies than the few I could identify. [START_REF]The list of accounts, was derived from both Gradis's journals[END_REF] A list of business relations does not make an account book, however; what is really significant is what Gradis did with it. Personal accounts were the most numerous by far. To the 217 clearly personal accounts (individuals, families, institutions like the Jacobins or semiofficial accounts for "The King," "Intendant Bigot" or "Baron de Rochechouart") must be added seven opened for unspecified partnerships ("Merchandize for the Company") or for commission agents ("Wine on account with X"), and an extra 15 covering ship ventures ("Outfitting of ship X ", "Merchandizes in ship Y"), which were at least in part also partnership accounts. Overall, a least 225 of the 266 accounts which appeared between October 1754 and September 1755 can be classified with absolute certainty as personal accounts. Each of these personal accounts created a relationship between Gradis and the individuals concerned very much like that of a bank with its customers, except that the credit which was extended was apparently mostly free of charge and interest. For instance, the following posting: essentially meant that in the account 'Eaux de vie,' i. e. Gradis himself, had sent 2,176 Livres tournois worth of spirits for the benefit of Mr. La Roque, and no payment had been made by the latter. Indeed, no payment was made for the rest of the summer, and probably no payment would be made until the spirits were sold. The net result was that Gradis had loaned this amount of money to La Roque for several months, with no apparent charge or interest. Conversely, when Gradis's clerk wrote the following: Caisse Dt à Dupin £ 2712.3 pour du Sel qu'il à Livré en 1754 p le navire L'Angelique et pour le Cochon envoyé a la Rochelle p le n.re L'entreprenant dont nous debitons La Caisse, en ayant été Creditée 15 he was recording that a Mr. Dupin had generously loaned almost 3,000 Livres tournois worth of salt to Gradis for anywhere between six months and a year and a half; this sum had been received in the 'Caisse,' that is, paid over in cash to Gradis, for salt which had been given by Dupin -but Gradis had not paid his debt to the latter, and, again, no interest or charge was listed. Last but not least, two credits could cancel each other: Mr Darche Dt. 
à Mlle de Beuvron £ 2446.5 pour une année de la rente qu'il doit a lad.e Dlle 16 meant that the corresponding sum was transferred to Mademoiselle de Beuvron on Gradis's books, to be offset by sums she owed him, while he would take charge of recovering what Darche owed in the course of his business transactions with him. This book credit was the dominant form of payment in Gradis's accounts, as it may well have been the case for all traders everywhere. Payment could be made with metal currency, or with commercial paper, promissory notes and bills of hands ranging from the time-honored international letter of exchange to the more modern note of hand, a simple I.O.U from one individual to another. If currency was used, then the 'Caisse,' or cash box, would be listed as receiving or disbursing the corresponding sum; though sometimes, for obscure reasons, commercial paper found its way into the 'Caisse,' which undermined its very purpose. 17 If commercial paper was used, it would be listed as 'Lettres et billets à recevoir' or 'Bills receivable,' that is, paper debts from others to be cashed at some point, or 'Lettres et billets à payer,' or 'Bills payable,' I.O.Us manifesting that some money had been borrowed and would have to be reimbursed at some point. Complex rules governed interest rates and the period of validity of such paper debts, but the important point here is that these debts were always listed separately, in the relevant accounts. This makes possible a quantitative analysis of the use of currency, commercial paper and book credit over the period studied. We focused on 89 personal accounts from June-August 1755, having discarded accounts with ambiguous titles, or belonging to Crown officials, to Gradis family members, or to partnerships to which Gradis probably belonged, such as 'Les Intéressés au Navire L'Angélique.' [START_REF]Thus we excluded 'Le chevalier de Beaufremont' and 'Mr de Beaufremont', who could be one and the same, or father and son[END_REF] This gave us a set of individual or partnerships to whom 'normal' credit, not influenced by personal proximity or official status, would be extended. The results, as shown in Figure 1, are very clear: even for a major international trader like Gradis, in a European commercial port flush with metal currency, book credit was the main tool of business. Even if we count as 'Cash' transactions all exchanges of commercial paper for cash, the share of credit instruments compared to hard currency in transactions around the Gradis firm reached a hefty 72 percent. 19 The volumes involved in these transactions were impressive. Because the 'Bills receivable' and 'Bills payable,' that is, the formalized credit and debt accounts, were balanced at the beginning of June 1755, we know that at that date Gradis held 671,117 Livres tournois in IOUs from various people, and owed to dozens of creditors an equally impressive 430,072 Livres tournois, also in formalized paper IOUs. Considering the proportion of book debt to Book credit = all other transactions, eliminating double postings, all profit and loss postings (equivalent to the total or partial closing of an account), and subtracting any payment from or to the same account within two weeks (equivalent to a payment on the spot, recorded with some delay; quick payments of that type represented barely over 40,000 Livres tournois, less than 10 percent of all payments). The figures above only refer to the business Gradis was doing with the individuals and groups who held accounts with his firm. 
A complete analysis, including purchases and sales listed directly in Gradis's own accounts, which were all commercial paper-related operations, as well as credit extended to Crown officials, gives a different set of figures: Cash transactions represented over a third of all transactions in value. Over half of these cash movements went to the purchase and sale of commercial paper; straight cash purchases or sales represented 16 percent of all transactions in value. Even including all movements between cash and commercial paper accounts, book compensations still made up over 40 percent of all the volume of trade in the Gradis firm, proof that personal credit flows were allimportant. Indeed, commercial paper itself was nothing else than formalized credit, and if we exclude the complex category of transactions mixing commercial paper and cash, we end up with over 80 percent of all transactions being made on credit. Last but not least, cash, commercial paper and book credit were not equivalent means of payment. Leaving aside transactions involving both cash and commercial paper, it is possible to observe how each type of payment was used in terms of the value of the transactions involved: Source: 181 AQ T*, loc. cit. The transactions included are the same as in Figure 2, q. v. These graphs show again that book accounts were used as much as cash, and in much the same way. Obviously these book credits were not as liquid (that is, easily convertible without loss of value) as cash, since they could circulate only within the circle of individuals and groups having themselves accounts with Gradis. But on the other hand the volume they represented was actually much higher than the volume of cash used by the firm, which is another way of saying that transactions took place mostly within this circle of known partners. Moreover, book accounts could also, in special cases, comprise much larger amounts, otherwise usually dealt with through commercial paper. Thus our opening caveat: accounts and account books were not straightforward tools of analysis. Opening an account was tantamount to creating a special bond of partnership, which explains why whole aspects of a traders' activity were lumped together in nondescript general accounts, while apparently small transactions could give rise to specific accounting efforts. Because the core distinction was between direct partners and all others, issues of costs and profit were only dealt with peripherally, if at all; at the very least, they were clearly subordinate to this higher, primary boundary between the inner circle and the rest of the world. The same held true for national loyalty, regional proximity and all other parameters of interpersonal relationships. While they could play a role in specific cases, there is no indication that account holders could be neatly dumped into one of these categories. What made a business acquaintance into an account holder was the logic of the credit network, a network at the centre of Gradis' strategy. * Almost all the non-personal accounts were also credit-centered, in much the same way as personal accounts. Gradis, our Bordeaux merchant, practiced a sophisticated system of double-entry accounting, which allowed him to develop two types of accounts on top of personal accounts. 
Parts of his inventory, a handful of goods which he traded in most frequently, were granted specific accounts: sugar, wine, indigo, spirits, and flour, thus appeared as separate accounts, debited with incoming merchandise, and credited when the goods were sold. To these, should be added accounts such as Cash and Bills payable and receivable, which also contained assets. Then, there was what accountants would call today 'nominal' accounts such as Profit and Loss and expense accounts. These accounts were supposed to summarize in the final balancing of all accounts (which almost never took place) the status of the capital expended, which had been neither invested in inventory, nor loaned out as a book debt. In practice however, most real accounts, and even some apparently nominal accounts, were much closer to personal accounts than one would expect, since they, too, were meant to encapsulate a certain credit relationship with a certain group of partners. Let us start with an example: an account called 'Indigos p. Compte de Divers' included all indigo traded on commission. This made sense only if the discriminating principle was a combination of both the type of principal/agent relationship established by Gradis as a commission merchant with the people who commissioned him, and of the specific product concerned. Gradis's accounts could not readily provide an analysis of the benefits made on indigo in general, since there was a separate 'Indigo' account. Indeed, another account of the same type was 'Sucres et caffés p Cpte de Divers,' which mixed two vastly different products, proof if need be that the issue was not the products themselves. [START_REF] Actually | one of the transactions recorded in this account included a set of 'Dens d'éleffans,' which means that account titles were not[END_REF] There was no way either to calculate the commissions Gradis received from each principal he was commission merchant for, nor on each merchandise, since all commissions were dumped into one account. What such accounts implied, rather, was that the principals who commissioned him constituted a coherent group, a network of sorts which deserved separate analysis. Thus the indigo sold on commission came from 'Benech L'aîné' and 'Benech de L'Epinay,' while the 'Sugar and Coffee,' was sent by David Lopes and Torrail & La Chapelle, two firms from Martinique; each account was based on a specific contractual relationship with an identifiable group, not even necessarily specialized in one product, but clearly identifiable within the merchant network erected by Gradis. The same analysis holds true for ship-related accounts, except that the relationship was non-permanent and linked explicitly to a certain venture. A ship account constituted the perfect illustration of such a venture-based account, since it distinguished a separate group of people, from the captain to the co-investors who helped bankroll the outfitting and the lading, and only for the duration of the venture. Anything pertaining to a ship was thus gathered into one account, or distributed among several accounts if particular subgroups of investors were concerned within the larger framework of the general venture. 
This explains why in an extreme case three separate accounts existed side by side in the same three months of June-August 1755 for the ship Le David, one for the outfitting ('Armement du navire Le David' or 'Navire Le David'), one for its freight ('Cargaison dans le N.re L David'), and one for the goods on board which directly belonged to Gradis and nobody else ('Cargaison Pour n/C dans Le Navire Le David'). Again, the issue was not only the contractual link (investments held in partnership or not), nor the type of activity (shipping), much less the goods concerned (not even listed in this case), but a mix of all elements, which made of each venture a separate, particular item. Even when Gradis himself sold his goods through commission merchants, this act did not systematically lead to a separate account: whether the principal/agent relationship deserved to be individualized depended on a series of parameters, most of which probably elude us. Thus there were four specific accounts for wine sold on commission. But silverware sold through Almain de Quebec was credited to 'Marchandises générales,' with no separate record kept. The Benechs were dealt with through a common account, as were the two Martinique firms who used Gradis as commission merchant. The relationship was the same in all these cases, but no general rule was applied beyond Gradis's own view of the importance and separate character of the relationship giving rise to a given account. In some cases this relationship was so obvious to our Bordeaux merchant that the name he picked for an account was remarkably poor in information. The case occurred both for personal accounts ('La société compte courant,' 'La société compte de dettes à recevoir à la Martinique' -without Gradis feeling bound to explain which 'société' was concerned exactly) and to venture-based accounts (what exactly was 'Cargaison n° 7' in 'Marchandises pour la Cargaison n° 7'? Were certain unspecified ship accounts, such as 'Le Navire Le Président Le Berton,' outfitting accounts, lading accounts, or ownership accounts?). The systematic dividing of accounts according to the specific venture, and within it according to complex combinations of contracts and ownership, proves conclusively that by region. This holds true as well for merchandise accounts. There were eleven such accounts, with one of them, 'Marchandises générales,' including (over three months) silverware, unspecified 'divers de Hollande,' 'quincaille,' paper, cinnabar, salt, beef, 'Coity' [coati?], feathers, walnut oil, lentils, and even 'goods from Cork.' But even with more specialized accounts, such as 'Farine' or 'Eaux de vie,' there was no effort to trace a certain batch of goods from the origin through to the sale, which means that buying and selling prices of specific goods could not be compared. Moreover, the costs entailed in trading certain goods were not necessarily recorded in relation to them, as in the following example: Here packing and freight costs were credited to Cash, and debited to the personal account of the customer, rather than being listed in the 'Vins' accounts, so that the actual cost of delivering this wine could not be included in the calculation of the profit derived from selling this particular good, nor was it listed separately elsewhere. 
The lone cost account identifiable as such, called 'Primes d'assurance,' gathered all insurance premiums paid by Gradis for his shipping; but apparently he decided that separating this particular cost was not worth the trouble, and closed this account into the general Bills payable account on July 21st, 1755, only to reopen it the following day, listing a new insurance premium due for an indigo shipment. 24 Consequently, most insurance premiums found themselves jumbled together with the rest of Gradis's formal debts, while a few others stayed in the corresponding account. Another cost account, 'Fret à recevoir de divers,' listed freight paid by Gradis as a commissioner for others during the year 1754, but it had been closed by the summer of 1755, reappearing briefly because a mistake had been made in settling it. 25 Another account, 'Bien de Tallance,' was basically manorial; it individualized merchant relationships, with some of them set apart because of the specific personal relationship through which they appeared, as with the Benechs for indigo. As shown by the following table, the account book was largely dominated by the personal credit relationships Gradis had built with the people he dealt with. Very little space was left for other issues. Accounting was, first and foremost, credit accounting, and mostly personal credit accounting. Each account was a narrative of a certain relationship, a tool for quantitative or strategic analysis maybe, but on a strictly ad hoc basis: what counted in most cases was the people, or the group of people, who underpinned the activity thus accounted for. The identification of each element worth a separate account (assets specific to Gradis alone, or people being partners with Gradis, or people simply dealing with Gradis, or in a few cases all people entering into a certain kind of credit relationship) was neither a mere matter of legal contract, nor a straightforward result of regional or product specialization, but a complex combination of all these elements, and possibly more. No two accounts were the same, either; each had its own past, its own potential, and possibly its own constraints, so that generalization was largely impossible. What was reflected here was the highly segmented and uneven nature of early modern markets, and the fact that group control of one or other corner of this market, however small, was the best road to success. Each trading effort was thus very much an ad hoc affair, with a specific good or set of goods, in a specific region, along specific routes, all these specificities being summarized and expressed by the set of business associates which would take charge of the trade from its beginning to its end. What Each sum had its own history, and its own assessment of credit: loaning to the King had its risks and rewards, which were not the same as partnering with a fellow Bordeaux merchant, or humoring a friendly colonial official (who actually partnered with Gradis in supplying his own territory). In at least two cases out of four, Bigot and Veuve La Roche, the rewards were indirect; friendly officials could provide huge comparative advantages, being accommodating with a partner's widow gained one points within one's community, and there would be monetary windfalls eventually. Still, there was in all this a common grammar, a set of rules above and beyond the direct accounting rules, which would enable Gradis, and all other merchants, to compare and contrast their multiple ventures. 
Each of Gradis's decisions could be assessed -not measured, but judged qualitatively -in terms of enhanced credit, and each credit enhancement could be translated, again in unquantifiable but very concrete ways, in terms of control. Clienteles bred networks, which made access easier, and could turn into a decisive comparative advantage over less connected competitors, as in the case of Bigot. There is a last dimension to Gradis's activity which must be underlined. Counting the sum of all his operations for the 12 months between October 1754 and September 1755 amounts to nearly seven million Livres tournois. The total number of accounts active during the same period was well above 200. An obvious advantage of such a thick and diverse network was risk diversification; Gradis was too big to fail, not because of the size of his operations, but because of their variety. A few accounts could turn out to be lost investments, but there were many others from which these losses could be compensated. With hundreds of potential credit sources, a credit crunch was highly unlikely. One bad batch of goods could lead our Bordeaux trader to lose face on one specific market in one town, but he could point to dozens of other markets elsewhere on which he had been a trustful supplier, and his reputation would merely suffer a passing dent. Power such as Gradis's has implications for the analysis of the wider early modern economy. Certainly nobody would suggest that markets under the Old Regime were open and transparent. Network-based comparative advantages were turned into bases for monopolization of a market segment, a monopoly sometimes sanctioned by law, as in the case of the various India Companies. Collectively, then, the merchants who held the keys to the various segmented parts of the economy in Europe, the Americas and parts of Asia and Africa were truly a transnational ruling class, with an unassailable position as long as their solidarity held firm, as long as they successfully fended off any drift towards freedom of entry into these multiple niche markets where they made their fortunes. In this way we get back to a regional motif, but under a very different angle; regions existed insofar as they were controlled by a defined subgroup of this international ruling class. It may well be that access to the French Caribbean were dominated by a coherent group of French merchants, but this is unclear. At a higher level, the recent general trend towards describing the Atlantic as several more or less nationalized Atlantics may be read as an implicit recognition that nation-based groups of merchants had built exclusive trading spaces which they by and large controlled. But how these groups interacted with institutional realities and other constraints to create more or less exclusive trading spaces, and how rules of interpersonal, account-based behavior were modified under local conditions, are questions to be explored. On that score, Gradis' example provides only limited support to the idea of a French Atlantic. He operated mostly within the French colonial empire, but was also invested in the Spanish empire, a fact which seems to underscore the relevance of Empire-based analyses. On the other hand, his Caribbean ventures were only one facet of a broad and diverse network, which encompassed France and several other European countries. His was a specifically French operation, both as a royal supplier and as a Bordeaux trader focusing on Quebec and the Caribbean. 
Notwithstanding these specializations, his accounts stressed personal credit, not national or regional networks. This makes sense since the ubiquity of credit meant that the key to merchant success was a sizeable and trustworthy network of partners, which in the case of Gradis extended well beyond the limits of any one region of the French sphere, and indeed well beyond that sphere. A French trader could favor connections to French planters, French officials and the French Crown; but no trader in his right mind would ever forget that a successful operation depended on cooperation with other traders regardless of nationality, location, religion or ethnic origin. "Frenchifying" or "Atlanticizing" one's operation was always a possibility -but only within limits, and never so far as to structure the way accounts were kept. In the end, the King in Versailles was treated the same way as Jonathan Morgan from Cork, or as the la Pagerie from Martinique, as pieces in a wider puzzle, the shape of which included regional considerations, but was never limited by them. formal debt, over 2 to 1, generated in the next three months, book debt in toto may have amounted to well over two million Livres tournois... The larger accounts may have borne interest; one Jacob Mendes had his account balanced, and the clerk recorded the following: Jacob Mendes cte: Vx: a luy même ct N.au £ 67167.15.5 pour Solde Compte Regle ce Jour en double, dont les Interets Sont Compris Jusques au 1er Courant 20But four other personal accounts were balanced between June and August 1755, with no mention of interest. There is no such mention either in the numerous instances where errors were discovered, and accounts rectified, sometimes months after the error was made.[START_REF]Farines Dt; a Marieu & Comp & La Roche £ 200 pour Erreur Sur leur facture du 25 may, ou ils ont debité pour 300[END_REF] Fig. 1 : 1 Fig. 1: Value of transactions by means of payment used for personal accounts in the Gradis firm, June-Aug. 1755 (in percent of the total value of transactions for each type, in livres tournois. Crown officials and ambiguous accounts excepted) Fig. 2 : 2 Fig. 2: Value of transactions by means of payment in the Gradis firm, June-Aug. 1755 (in percent of the total value of each type of transaction, in livres tournois) Fig. 2 : 2 Fig. 2: Proportion of transactions by means of payment and value of transaction in the Gradis firm, June-Aug. 1755 (in percent of number of transactions for each category, n = 128 cash, 41 commercial paper, 107 compensation between accounts) counted most, and what was most counted, was with whom who did what; what was being done was only part of the equation. Mr La Roque à Versailles Dt, a Divers £ 2176.16.6 pour 60 demy Barriq. Eaux de vie envoyées p Son Compte à Quebec par le N.re Le st Fremery de st Valery Suivant Le Livre de factures a f° 136 Savoir à Eaux de Vie pr 19 p. Cont. en 6 Bq 970 V. 
£ 1864.11.6 à Caisse pour tous fraix deduit le montant des pieces vuides 232.5 à Primes d'assurance p £ 2000 à 4 p Ct 80 14 Le Comte de Raymond Dt; à Divers £ 395 Pour Le Vin Suivant a luy envoyé par la voye de Horutener & Comp de Rouen pour faire passer debout a Valogne, à Son adresse Savoir à Vins de talance pr 2 Bques en double futaille £ 175 à Vins achetés pour 1/3 à 70W le thoneau 35 à Caisse pr 50 Bouteilles vin muscat rouge à 30s £75 p 50 Bouteilles dt blanc a 30s 75 pr Rabatage des 2 Bques et double futaille 18 pr droits de Sortie arimage & fraix 17 185 23 Table 1 : The account structure of Gradis's journal, October 1754 -September 1755 1.A) Individual credit relationships 2) Shipping ventures 3) Other real assets 1.A.a) Personal accounts 2.a) In partnership with 1 Martinique, the key to success was a good network of planters who would supply him with quality colonial goods, official backing for his trading activities both in Martinique and in the colonial administration in France, and the physical means to bring his goods across the ocean. Trading with Amsterdam meant dealing with a very different group of people. In Amsterdam, Gradis needed Dutch commissioners who would sell his wine at the best possible price, access to Aquitaine winegrowers whose wine would be of good quality, and the monetary means to extend generous credit terms on both sides. Obviously, with each interlocutor the strategies, incentives, and even vocabulary used would be different.Hence the crucial role of the accounts. Each reflected a privileged relationship, a building block to be used in organizing a profitable access to a certain market. Each was to be treated within the specific context of the relationship for which it had been created. Mere figures were only part of a larger equation, other parts of which were simply not quantifiable. Each debt, however, needs to br treated on its own terms. Perrens had been loaned 80,000 Livres by Gradis to buy large amounts of flour, lard, salt and brine, which were then delivered to the king, and the entire sum was at once transferred to the king's account.Perrens was merely Gradis's agent in building up Canada supplies, and the large sums loaned were actually loaned to the king. The account from Marchand Fils was wholly different, since he was a partner with Gradis in the outfitting and lading of the ship Le Sagittaire, and the 23,000 Livres he had received were two IOUs from Gradis acknowledging that Marchand Fils, who was the main outfitter, had paid that much in excess and in Gradis's stead, with the latter eventually refunding his partner's loan. In Bigot's case, money belonging to the intendant du Canada was to be deposited in Gradis's account at Chabert & Banquet, his Parisian bankers, but Bigot had already used it up by drawing on the Parisians, and Gradis was simply acknowledging that Bigot was not a creditor anymore, contrary to what had been assumed at first in his accounts. Of course, in dealing with an intendant, no sensible merchant would have dreamt of pointing out that by drawing on funds which had not yet been deposited, Bigot was in effect borrowing from Gradis, and for free. As for poor Madame La Roche, widow of a business partner of Gradis, she was presumably trying to settle her deceased husband's affairs. 
She claimed 842 Livres from Gradis, who did not quite agree with her statement of affairs, but who decided to give the sum to her nonetheless, crediting a doubtful debts account ('Parties en suspend') in case the matter were eventually settledwhich would probably never be the case.27 Actually, the figures could take different meaning in context, a point which is made very clear if we try to compare the story line of different accounts. By the summer of 1755, one Lyon merchant, Perrens, owed almost 80,000 Livres tournois to Gradis. Another merchant, Marchand fils, was debitor for 25,000 Livres in the same summer; Bigot, the intendant of Canada, was found to owe 43,000 Livres; and Veuve La Roche, a widow from Girac, owed 842 Livres. Archives Nationales Paris (CARAN), Fonds Gradis, 181 AQ 7* Journal, June 1, 1755 to October 26, 1759. According to the Ordonnance of 1673, the registers had to be certified, and Gradis obtained Agard's certification onMay 13, 1755. The list of the consuls is available See John R. Edwards, A History of Financial Accounting, (London, 1989),. For the idea that large-scale, multinational operation brought about a new momentum for innovation as early 181 AQ 7*, ff. 8 verso,[START_REF] Actually | one of the transactions recorded in this account included a set of 'Dens d'éleffans,' which means that account titles were not[END_REF], for freight costs one Leris should have paid for two bales of cotton and a barrel of sugar 'qu'il a reçu l'année 1754,' thus at least 8 months earlier. The entry clearly implies that the non-payment comes from an oversight, and there is no other mention of the corresponding account for the whole year 1755. See supra n. 16. Gradis's own wine-producing property, with a corresponding account called 'Vins de Tallance' probably identifying the returns of this product. This listing proves that Gradis was indeed cost conscious as a producer, and that his choice not to record his trading costs consistently was not due to ignorance. Costs were worth recording as a producer, because they represented a stable quantity, with direct and easily measurable consequences on profits; costs of specific ventures or relationships, however, varied widely, both quantitatively and in their relationship to profits. In the end, there were few elements Gradis thought worth recording separately in real or nominal accounts, besides the few goods he traded more particularly, already mentioned, and his wine-producing venture in Talance. A general Profit and Loss account received indiscriminately all profits and all losses from all personal and venture-based accounts, in such a way as to make strategic calculations almost impossible. Cash was listed separately, though as we have seen some commercial paper found its way, for reasons unclear, into the 'Cash' box. 26 Commercial paper was recorded in the classic Bills payable and Bills receivable accounts, but some of it was included separately in a 'Lettres à négocier' account, of which we know next to nothing; it may have concerned dubious paper which Gradis had identified as such, and was trying to unload. The same can be said of 'Parties en suspend,' which was probably made up of clearly desperate debts. Two accounts, 'Contrats de cession' and 'Contrats d'obligation,' recorded formal purchases and sales materialized by notarized agreements; again, the shape of the relationship created by a given means of payment turned out to be more important than the kind of activity or goods concerned. 
Only one account could have been said to identify a specific activity and provide a basis to assess its returns, the 'Grosses avantures' account, which listed bottomry loans Gradis had consented to, except that we find another account named 'Grosses avantures données a Cadis par la Voye de Joseph Masson & C.e.' In other words, bottomry loans were treated somewhat like commission accounts Cargaison dans le N.re Le David 1.A.b) Assets in partnership or sent on commission Marchandises * What does Gradis's accounting tell us about his Caribbean operations, and generally about the world he operated in? First, it was a world dominated by interpersonal relationships, but not in the classic Gemeinschaft sense. Making a profit was still the ultimate goal: merchant relationships cannot be reduced to a form of moral economy. The best descriptive tool would be that of the cartel: a group of people bound together by a common economic goal of domination and profit, but among whom solidarity is both the key to success and a fragile construction at best. In some ways, each one of Gradis's accounts was an attempt at cartelization, at building a privileged, protected market access which would bring in profit. In this universe, there was no point in trying to compare two ventures, since each had its own defining characters, from the group of people involved to the institutional environment to use and the physical means of access to control. When Gradis was trading online thanks to AD Gironde, at Inventaire de la série C. Archives Civiles: Tome 3, articles C 4250 à C439.
52,780
[ "3926" ]
[ "176", "110860" ]
01408043
en
[ "info" ]
2024/03/04 23:41:48
2016
https://hal.science/hal-01408043/file/CC-pn16.pdf
Thomas Chatain email: [email protected] Josep Carmona email: [email protected] Anti-Alignments in Conformance Checking -The Dark Side of Process Models Conformance checking techniques asses the suitability of a process model in representing an underlying process, observed through a collection of real executions. These techniques suffer from the wellknown state space explosion problem, hence handling process models exhibiting large or even infinite state spaces remains a challenge. One important metric in conformance checking is to asses the precision of the model with respect to the observed executions, i.e., characterize the ability of the model to produce behavior unrelated to the one observed. By avoiding the computation of the full state space of a model, current techniques only provide estimations of the precision metric, which in some situations tend to be very optimistic, thus hiding real problems a process model may have. In this paper we present the notion of antialignment as a concept to help unveiling traces in the model that may deviate significantly from the observed behavior. Using anti-alignments, current estimations can be improved, e.g., in precision checking. We show how to express the problem of finding anti-alignments as the satisfiability of a Boolean formula, and provide a tool which can deal with large models efficiently. Introduction The use of process models has increased in the last decade due to the advent of the process mining field. Process mining techniques aim at discovering, analyzing and enhancing formal representations of the real processes executed in any digital environment [START_REF] Van Der Aalst | Process Mining -Discovery, Conformance and Enhancement of Business Processes[END_REF]. These processes can only be observed by the footprints of their executions, stored in form of event logs. An event log is a collection of traces and is the input of process mining techniques. The derivation of an accurate formalization of an underlying process opens the door to the continuous improvement and analysis of the processes within an information system. Among the important challenges in process mining, conformance checking is a crucial one: to assess the quality of a model (automatically discovered or manually designed) in describing the observed behavior, i.e., the event log. Conformance checking techniques aim at characterizing four quality dimensions: fitness, precision, generalization and simplicity [START_REF] Rozinat | Conformance checking of processes based on monitoring real behavior[END_REF]. For the first three dimensions, the alignment between the process model and the event log is of paramount importance, since it allows relating modeled and observed behavior [START_REF] Adriansyah | Aligning observed and modeled behavior[END_REF]. Given a process model and a trace in the event log, an alignment provides the run in the model which mostly resembles the observed trace. When alignments are computed, the quality dimensions can be defined on top [START_REF] Adriansyah | Aligning observed and modeled behavior[END_REF][START_REF] Munoz-Gama | Conformance Checking and Diagnosis in Process Mining[END_REF]. In a way, alignments are optimistic: although observed behavior may deviate significantly from modeled behavior, it is always assumed that the least deviations are the best explanation (from the model's perspective) for the observed behavior. In this paper we present a somewhat symmetric notion to alignments, denoted as anti-alignments. 
Given a process model and a log, an anti-alignment is a run of the model that mostly deviates from any of the traces observed in the log. The motivation for anti-alignments is precisely to compensate the optimistic view provided by alignments, so that the model is queried to return highly deviating behavior that has not been seen in the log. In contexts where the process model should adhere to a certain behavior and not leave much exotic possibilities (e.g., banking, healthcare), the absence of highly deviating anti-alignments may be a desired property to have in the process model. We cast the problem of computing anti-alignments as the satisfiability of a Boolean formula, and provide high-level techniques which can for instance compute the most deviating anti-alignment for a certain run length, or the shortest anti-alignment for a given number of deviations. Using anti-alignments one cannot only catch deviating behavior, but also use it to improve some of the current quality metrics considered in conformance checking. For instance, a highly-deviating anti-alignment may be a sign of a loss in precision, which can be missed by current metrics as they bound considerably the exploration of model state space for the sake of efficiency [START_REF] Adriansyah | Measuring precision of modeled behavior[END_REF]. Anti-alignments are related to the completeness of the log; a log is complete if it contains all the behavior of the underlying process [START_REF] Van Der Aalst | Process Mining -Discovery, Conformance and Enhancement of Business Processes[END_REF]. For incomplete logs, the alternatives for computing anti-alignments grows, making it difficult to tell the difference between behavior not observed but meant to be part of the process, and behavior not observed which is not meant to be part of the process. Since there exists already some metrics to evaluate the completeness of an event log (e.g., [START_REF] Yang | Estimating completeness of event logs[END_REF]), we assume event logs have a high level of completeness before they are used for computing anti-alignments. To summarize, the contributions of the paper are now enumerated. -We propose the notion of anti-alignment as an effective way to explore process deviations with respect to observed behavior. -We present an encoding of the problem of computing anti-alignments into SAT, and have implemented it in the tool DarkSider. -We show how anti-alignments can be used to provide an estimation of precision that uses a different perspective from the current ones. The remainder of the paper is organized as follows: in the next section, a simple example is used to emphasize the importance of computing anti-alignments. Then in Section 3 the basic theory needed for the understanding of the paper is introduced. Section 4 provides the formal definition of anti-alignments, whilst Section 5 formalizes the encoding into SAT of the problem of computing anti-alignments and Section 6 presents some adaptions of the notion of antialignments. In Section 7, we define a new metric, based on anti-alignments, for estimating precision of process models. Experiments are reported in Section 8, and related work in Section 9. Section 10 concludes the paper and gives some hints for future research directions. A Motivating Example Let us use the example shown in Figure 1 for illustrating the notion of antialignment. The example was originally presented in [START_REF] Vanden Broucke | Event-based real-time decomposed conformance analysis[END_REF]. 
The modeled process describes a realistic transaction process within a banking context. The process contains all sorts of monetary checks, authority notifications, and logging mechanisms. The process is structured as follows (Figure 1 (top) shows a high-level overview of the complete process): it is initiated when a new transaction is requested, opening a new instance in the system and registering all the components involved. The second step is to run a check on the person (or entity) at the origin of the monetary transaction. Then, the actual payment is processed differently, depending on the payment modality chosen by the sender (cash, cheque or electronic payment). Later, the receiver is checked and the money is transferred. Finally, the process ends by registering the information, notifying it to the required actors and authorities, and emitting the corresponding receipt. The detailed model, formalized as a Petri net, is described in the bottom part of the figure. Assume that a log is given which contains different transactions covering all the possibilities with respect to the model in Figure 1. For this pair of model and log, no highly deviating anti-alignment will be obtained, since the model is a precise representation of the observed behavior. Now assume that we modify the model a bit, adding a loop around the alternative stages for the payment. Intuitively, this (malicious) modification in the process model may allow paying several times although only one transfer will be done. The modified high-level overview is shown in Figure 2. Current metrics for precision (e.g., [START_REF] Adriansyah | Measuring precision of modeled behavior[END_REF]) will not consider this modification as a severe one: the precision of the model with respect to the log will be very similar before and after the modification. Clearly, this modification in the process model comes with a new highly deviating anti-alignment denoting a run of the model that contains more than one iteration of the payment. This may be considered as a certification of the existence of a problematic behavior allowed by the model.
3 Preliminaries
Definition 1 ((Labeled) Petri net). A (labeled) Petri net [START_REF] Murata | Petri nets: Properties, analysis and applications[END_REF] is a tuple N = ⟨P, T, F, m_0, Σ, λ⟩, where P is the set of places, T is the set of transitions (with P ∩ T = ∅), F : (P × T) ∪ (T × P) → {0, 1} is the flow relation, m_0 is the initial marking, Σ is an alphabet of actions and λ : T → Σ labels every transition by an action.
A marking is an assignment of a non-negative integer to each place. If k is assigned to place p by marking m (denoted m(p) = k), we say that p is marked with k tokens. Given a node x ∈ P ∪ T, we define its pre-set •x := {y ∈ P ∪ T | (y, x) ∈ F} and its post-set x• := {y ∈ P ∪ T | (x, y) ∈ F}. A transition t is enabled in a marking m when all places in •t are marked. When a transition t is enabled, it can fire by removing a token from each place in •t and putting a token in each place in t•. A marking m′ is reachable from m if there is a sequence of firings t_1 t_2 . . . t_n that transforms m into m′, denoted by m[t_1 t_2 . . . t_n⟩ m′. A sequence of actions a_1 a_2 . . . a_n is a feasible sequence (or run, or model trace) if there exists a sequence of transitions t_1 t_2 . . . t_n firable from m_0 and such that for i = 1 . . . n, a_i = λ(t_i). Let L(N) be the set of feasible sequences of Petri net N. A deadlock is a reachable marking for which no transition is enabled. The set of markings reachable from m_0 is denoted by [m_0⟩, and forms a graph called the reachability graph. A Petri net is k-bounded if no marking in [m_0⟩ assigns more than k tokens to any place. A Petri net is safe if it is 1-bounded. In this paper we assume safe Petri nets.
An event log is a collection of traces, where a trace may appear more than once. Formally:
Definition 2 (Event Log). An event log L (over an alphabet of actions Σ) is a multiset of traces σ ∈ Σ*.
Quality Dimensions. Process mining techniques aim at extracting from a log L a process model N (e.g., a Petri net) with the goal to elicit the process underlying a system S. By relating the behaviors of L, L(N) and S, particular concepts can be defined [START_REF] Buijs | Quality dimensions in process discovery: The importance of fitness, precision, generalization and simplicity[END_REF]. A log is incomplete if S\L ≠ ∅. A model N fits log L if L ⊆ L(N). A model is precise in describing a log L if L(N)\L is small. A model N represents a generalization of log L with respect to system S if some behavior in S\L exists in L(N). Finally, a model N is simple when it has the minimal complexity in representing L(N), i.e., the well-known Occam's razor principle.
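As a minimal illustration of the firing rule in Definition 1, the following Python sketch replays a candidate firing sequence on a safe net. The representation of a net by pre-set and post-set dictionaries, and all the names used, are our own choices for the example; transition labels are left out, a run being obtained by mapping λ over the fired transitions.

def enabled(marking, pre, t):
    # a transition is enabled when every place of its pre-set carries a token
    return all(marking.get(p, 0) >= 1 for p in pre[t])

def fire(marking, pre, post, t):
    # firing removes one token from each input place and adds one to each output place
    m = dict(marking)
    for p in pre[t]:
        m[p] -= 1
    for p in post[t]:
        m[p] = m.get(p, 0) + 1
    return m

def replay(m0, pre, post, sequence):
    # returns the reached marking, or None as soon as some transition is not enabled
    m = dict(m0)
    for t in sequence:
        if not enabled(m, pre, t):
            return None
        m = fire(m, pre, post, t)
    return m

# a two-transition toy net: p0 -> t1 -> p1 -> t2 -> p2
pre  = {"t1": ["p0"], "t2": ["p1"]}
post = {"t1": ["p1"], "t2": ["p2"]}
print(replay({"p0": 1}, pre, post, ["t1", "t2"]))   # {'p0': 0, 'p1': 0, 'p2': 1}
print(replay({"p0": 1}, pre, post, ["t2"]))         # None: t2 is not enabled initially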
4 Anti-Alignments
The idea of anti-alignments is to seek in the language of a model N what are the runs which differ a lot from all the observed traces. For this we first need a definition of distance between two traces (typically a model trace, i.e. a run of the model, and an observed log trace). Relevant definitions about alignments can be found in [START_REF] Adriansyah | Aligning observed and modeled behavior[END_REF]. Let us start here with a simple definition; we will discuss other definitions in Section 6.
Definition 3 (Hamming distance dist). For two traces γ = γ_1 . . . γ_n and σ = σ_1 . . . σ_n, of same length n, define dist(γ, σ) := |{ i ∈ {1 . . . n} | γ_i ≠ σ_i }|.
Definition 4. In order to deal with traces of different length, we define for every trace σ = σ_1 . . . σ_p and n ∈ N the trace σ_|1...n as: σ_|1...n := σ_1 . . . σ_n, i.e. the trace σ truncated to length n, if |σ| ≥ n; σ_|1...n := σ_1 . . . σ_p · w^(n−p), i.e. the trace σ padded to length n with the special symbol w ∉ Σ (w for 'wait'), if |σ| ≤ n. Notice that the two definitions coincide when p = n and give σ_|1...n := σ.
In the sequel, we write dist(γ, σ) for dist(γ, σ_|1...|γ|). Notice that, in this definition, only σ is truncated or padded. In particular this means that γ is compared to the prefixes of the observed traces. The idea is that a run γ which is close to a prefix of an observed trace is good, while a run γ which is much longer than an observed trace σ cannot be considered close to σ even if its prefix γ_|1...|σ| is close to σ.
Definition 5 ((n, m)-anti-alignment). Given n, m ∈ N, a (n, m)-anti-alignment of a net N w.r.t. a log L is a run γ ∈ L(N) with |γ| = n such that dist(γ, σ) ≥ m for every σ ∈ L.
Example 1. For instance, for the Petri net shown in Figure 3, and the log L = { ⟨a, b, c, f, g, h, k⟩, ⟨a, c, b, f, g, h, k⟩, ⟨a, c, b, f, h, g, k⟩, ⟨a, b, c, f, h, g, k⟩, ⟨a, e, f, i, k⟩, ⟨a, d, f, g, h, k⟩, ⟨a, e, f, h, g, k⟩ }, the run ⟨a, b, c, f, i, k⟩ is a (6, 2)-anti-alignment. Notice that for m ≥ 3 there are no anti-alignments for this example.
Lemma 1. If the model has no deadlock, then for every n ∈ N, for every m ∈ N, if there exists a (n, m)-anti-alignment γ, then there exists a (n + 1, m)-anti-alignment. Moreover, for n ≥ max_{σ∈L} |σ|, there exists a (n + 1, m + 1)-anti-alignment.
Proof. It suffices to fire one transition t enabled in the marking reached after γ; γ · t is a (n + 1, m)-anti-alignment since for every σ ∈ L, dist(γ · t, σ) ≥ dist(γ, σ). When n ≥ max_{σ∈L} |σ|, we have more: dist(γ · t, σ) ≥ 1 + dist(γ, σ) (because t is compared to the padding symbol w), which makes γ · t a (n + 1, m + 1)-anti-alignment.
Corollary 1. If the model has no deadlock (and assuming that the log L is a finite multiset of finite traces), then for every m ∈ N, there is a least n for which a (n, m)-anti-alignment exists. This n is less than or equal to m + max_{σ∈L} |σ|.
Lemma 2. The problem of finding a (n, m)-anti-alignment is NP-complete. (Since n and m are typically smaller than the length of the traces in the log, we assume that they are represented in unary.)
Proof. The problem is clearly in NP: checking that a run γ is a (n, m)-anti-alignment for a net N and a log L takes polynomial time. For NP-hardness, we propose a reduction from the problem of reachability of a marking M in a 1-safe acyclic1 Petri net N, known to be NP-complete [START_REF] Stewart | Reachability in some classes of acyclic Petri nets[END_REF][START_REF] Cheng | Complexity results for 1-safe nets[END_REF]. The reduction is as follows: equip the 1-safe acyclic Petri net N with complementary places2: a place p̄ for each p ∈ P, with p̄ initially marked iff p is not, p̄ ∈ •t iff p ∈ t• \ •t, and p̄ ∈ t• iff p ∈ •t \ t•. Now M is reachable in the original net iff M ∪ {p̄ | p ∈ P \ M} is reachable in the complemented net (and with the same firing sequence). Notice that, since N is acyclic, each transition can fire only once; hence, the length of the firing sequences of N is bounded by the number of transitions |T|. Add now a new transition t_f with •t_f = t_f• = M ∪ {p̄ | p ∈ P \ M}. Transition t_f is firable if and only if M is reachable in the original net, and in this case, t_f may fire forever. As a consequence, the new net (call it N_f) has a firing sequence of length |T| + 1 iff M is reachable in N. It remains to observe that a firing sequence of length |T| + 1 is nothing but a (|T| + 1, 0)-anti-alignment for N_f and the empty log. Then M is reachable in N iff such an anti-alignment exists.
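To make Definitions 3-5 concrete, here is a small Python sketch; traces are plain lists of activity labels, "w" stands for the padding symbol (assumed not to be an activity label), and membership of the candidate run in L(N) is taken for granted. It reproduces the claim of Example 1.

W = "w"  # padding symbol, assumed not to occur as an activity label

def restrict(sigma, n):
    # sigma|1..n : truncate to length n, or pad with w up to length n
    return sigma[:n] + [W] * (n - len(sigma))

def dist(gamma, sigma):
    # dist(gamma, sigma) compares gamma position-wise to sigma|1..|gamma|
    s = restrict(sigma, len(gamma))
    return sum(1 for g, x in zip(gamma, s) if g != x)

def is_anti_alignment(gamma, log, m):
    # gamma is a (|gamma|, m)-anti-alignment iff it deviates by >= m from every log trace
    return all(dist(gamma, sigma) >= m for sigma in log)

log = [list("abcfghk"), list("acbfghk"), list("acbfhgk"),
       list("abcfhgk"), list("aefik"), list("adfghk"), list("aefhgk")]
gamma = list("abcfik")
print(min(dist(gamma, s) for s in log))   # 2, so gamma is a (6, 2)-anti-alignment
print(is_anti_alignment(gamma, log, 2))   # True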
5 Computation of Anti-Alignments
In order to compute a (n, m)-anti-alignment of a net N w.r.t. a log L, our tool DarkSider constructs a SAT formula Φ^n_m(N, L) and calls a SAT solver (currently minisat [START_REF] Eén | An extensible sat-solver[END_REF]) to solve it. Every solution to the formula is interpreted as a run of N of length n which has at least m misalignments with every trace in L. The formula Φ^n_m(N, L) characterizes a (n, m)-anti-alignment γ: γ = λ(t_1) . . . λ(t_n) ∈ L(N), and for every σ ∈ L, dist(γ, σ) ≥ m.
5.1 Coding Φ^n_m(N, L) Using Boolean Variables
The formula Φ^n_m(N, L) is coded using the following Boolean variables: τ_{i,t} for i = 1 . . . n, t ∈ T (remind that w is the special symbol used to pad the logs, see Definition 4) means that transition t_i = t; m_{i,p} for i = 0 . . . n, p ∈ P means that place p is marked in marking M_i (remind that we consider only safe nets, therefore the m_{i,p} are Boolean variables); δ_{i,σ,k} for i = 1 . . . n, σ ∈ L, k = 1, . . . , m means that the k-th mismatch with the observed trace σ is at position i. The total number of variables is n × (|T| + |P| + |L| × m).
Let us decompose the formula Φ^n_m(N, L).
– The fact that γ = λ(t_1) . . . λ(t_n) ∈ L(N) is coded by the conjunction of the following formulas:
• Initial marking: ( ⋀_{p ∈ M_0} m_{0,p} ) ∧ ( ⋀_{p ∈ P \ M_0} ¬m_{0,p} )
• One and only one t_i for each i: ⋀_{i=1}^{n} ⋁_{t ∈ T} ( τ_{i,t} ∧ ⋀_{t' ∈ T, t' ≠ t} ¬τ_{i,t'} )
• The transitions are enabled when they fire: ⋀_{i=1}^{n} ⋀_{t ∈ T} ( τ_{i,t} ⟹ ⋀_{p ∈ •t} m_{i−1,p} )
• Token game (for safe Petri nets):
⋀_{i=1}^{n} ⋀_{t ∈ T} ⋀_{p ∈ t•} ( τ_{i,t} ⟹ m_{i,p} )
⋀_{i=1}^{n} ⋀_{t ∈ T} ⋀_{p ∈ •t \ t•} ( τ_{i,t} ⟹ ¬m_{i,p} )
⋀_{i=1}^{n} ⋀_{t ∈ T} ⋀_{p ∈ P, p ∉ •t, p ∉ t•} ( τ_{i,t} ⟹ (m_{i,p} ⟺ m_{i−1,p}) )
– Now, the constraint that γ deviates from the observed traces (for every σ ∈ L, dist(γ, σ) ≥ m) is coded as ⋀_{σ ∈ L} ⋀_{k=1}^{m} ⋁_{i=1}^{n} δ_{i,σ,k}, with the δ_{i,σ,k} correctly assigned w.r.t. λ(t_i) and σ_i: ⋀_{σ ∈ L} ⋀_{i=1}^{n} ( ( ⋁_{k=1}^{m} δ_{i,σ,k} ) ⟺ ¬ ⋁_{t ∈ T, λ(t) = σ_i} τ_{i,t} ), and such that for k ≠ k', the k-th and k'-th mismatch correspond to different i's (i.e. a given mismatch cannot serve twice): ⋀_{σ ∈ L} ⋀_{i=1}^{n} ⋀_{k=1}^{m−1} ⋀_{k'=k+1}^{m} ¬( δ_{i,σ,k} ∧ δ_{i,σ,k'} )
5.2 Size of the Formula
In the end, the first part of the formula (γ = λ(t_1) . . . λ(t_n) ∈ L(N)) is coded by a Boolean formula of size O(n × |T| × |N|), with |N| := |T| + |P|. The second part of the formula (for every σ ∈ L, dist(γ, σ) ≥ m) is coded by a Boolean formula of size O(n × m^2 × |L| × |T|). The total size for the coding of the formula Φ^n_m(N, L) is O( n × |T| × ( |N| + m^2 × |L| ) ).
5.3 Solving the Formula in Practice
In practice, our tool DarkSider builds the coding of the formula Φ^n_m(N, L) using the Boolean variables τ_{i,t}, m_{i,p} and δ_{i,σ,k}. Then we need to transform the formula into conjunctive normal form (CNF) in order to pass it to the SAT solver minisat. We use Tseytin's transformation [START_REF] Tseytin | On the complexity of derivation in propositional calculus[END_REF] to get a formula in CNF whose size is linear in the size of the original formula. The idea of this transformation is to replace recursively the disjunctions φ_1 ∨ · · · ∨ φ_n (where the φ_i are not atoms) by the equivalent formula ∃x_1, . . . , x_n ( (x_1 ∨ · · · ∨ x_n) ∧ (x_1 ⟹ φ_1) ∧ . . . ∧ (x_n ⟹ φ_n) ), where x_1, . . . , x_n are fresh variables. In the end, the result of the call to minisat tells us if there exists a run γ = λ(t_1) . . . λ(t_n) ∈ L(N) which has at least m misalignments with every observed trace σ ∈ L. If a solution is found, we extract the run γ using the values assigned by minisat to the Boolean variables τ_{i,t}.
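To fix ideas on what such a coding can look like at the clause level, here is a fragmentary Python sketch, independent of DarkSider: the variable numbering, the function name and the restriction to two clause families are our own simplifications, the "one and only one t_i" constraint is written in its usual clausal exactly-one form (equivalent to the formula above), and the initial-marking, enabledness, token-game and label-matching constraints are left out. It emits the clauses in DIMACS form, the input format of solvers such as minisat.

def build_fragment(transitions, log_size, n, m):
    var = {}
    def v(name):
        # assign a distinct positive integer to every Boolean variable
        if name not in var:
            var[name] = len(var) + 1
        return var[name]
    clauses = []
    # one and only one t_i for each instant i
    for i in range(1, n + 1):
        clauses.append([v(("tau", i, t)) for t in transitions])          # at least one
        for a in range(len(transitions)):
            for b in range(a + 1, len(transitions)):
                clauses.append([-v(("tau", i, transitions[a])),
                                -v(("tau", i, transitions[b]))])         # at most one
    # for every observed trace and every k, the k-th mismatch occurs at some position
    for s in range(log_size):
        for k in range(1, m + 1):
            clauses.append([v(("delta", i, s, k)) for i in range(1, n + 1)])
        # a given position cannot host two distinct mismatches k != k'
        for i in range(1, n + 1):
            for k in range(1, m + 1):
                for k2 in range(k + 1, m + 1):
                    clauses.append([-v(("delta", i, s, k)), -v(("delta", i, s, k2))])
    header = "p cnf %d %d" % (len(var), len(clauses))
    return "\n".join([header] + [" ".join(str(x) for x in c) + " 0" for c in clauses])

print(build_fragment(["a", "b", "c"], log_size=2, n=4, m=2))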
Finding the Largest m for n It follows directly from Definition 5 that, for a model N and a log L, every (n, m + 1)-anti-alignment is also a (n, m)-anti-alignment. Notice also that, by Definition 5, there cannot exist any (n, n + 1)-antialignment and that, assuming that the model N has a run γ of length n, this run is a (n, 0)-anti-alignment (otherwise there is no (n, m)-anti-alignment for any m). (Under the latter assumption), we are interested in finding, for a fixed n, the largest m for which there exists a (n, m)-anti-alignment, i.e. the run of length n of the model which deviates most from all the observed traces. Our tool Dark-Sider computes it by dichotomy of the search interval for m: [0, n]. Finding the Least n for m If the model N has no deadlock, then by Corollary 1, for every m ∈ N, there is a least n for which a (n, m)-anti-alignment exists. Then it is relevant to find, for a fixed m, the least n for which there exists a (n, m)-anti-alignment, i.e. (the length of) the shortest run of N which has at least m mismatches with any observed trace. Corollary 1 tells us that the least n belongs to the interval [m, m + max σ∈L |σ|]. Then it can be found simply by dichotomy over this interval. However, in practice, when max σ∈L |σ| is much larger than m, the dichotomy would require to check the satisfiability of Φ n m (N, L) for large values of n, which is costly. Therefore our tool DarkSider proceeds as follows: it checks the satisfiability of the formulas Φ m m (N, L), then Φ 2m m (N, L), then Φ 4m m (N, L). . . until it finds a p such that Φ 2 p m m (N, L) is satisfiable. Then it starts the dichotomy over the interval [m, 2 p m]. 6 Relaxations of Anti-Alignments Limiting the Use of Loops A delicate issue with anti-alignments is to deal with loops in the model N : inserting loops in a model is a relevant way of coding the fact that similar traces were observed with a various number of iterations of a pattern. Typically, if the log contains traces ac, abc, abbc, . . . , abbbbbbbc, it is fair to propose a model whose language is ab * c. However a model with loops necessarily generate (n, m)-anti-alignments even for large m: it suffices to take the loops sufficiently many more times than what was observed in the log. Intuitively, these anti-alignments are cheated and one does not want to blame the model for generating them, i.e., the model correctly generalizes the behavior observed in the event log. Instead, it is interesting to focus the priority on the anti-alignments which do not use the loops too often. Our technique can easily be adapted so that it limits the use of loops when finding anti-alignments. The simplest idea is to add a new input place (call it bound t ) to every transition t; the number of tokens present in bound t in the initial marking determines how many times t is allowed to fire. The drawback of this trick is that the model does not remain 1-safe, and our tool currently deals only with 1-safe nets. An alternative is to duplicate the transition t with t , t . . . (all labeled λ(t)) and to allow only one firing per copy (using input places bound t , bound t . . . like before, but now we need only one token per place). Finally, another way to limit the use of loops is to introduce appropriate constraints directly in the formula Φ n m (N, L). 
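For the last option above — constraining the formula directly — one naive way to state that a looping transition t fires at most k times among the n instants is to forbid every (k+1)-subset of its firing variables; the following sketch (Python, reusing integer variable ids in the style of the previous fragment, and acceptable only for small k and n since it enumerates subsets) is one possible rendering of that idea.

from itertools import combinations

def at_most_k_firings(tau_vars, k):
    # tau_vars: the integer ids of tau_{1,t}, ..., tau_{n,t} for one transition t
    # forbid every (k+1)-subset of them from being simultaneously true
    return [[-v for v in subset] for subset in combinations(tau_vars, k + 1)]

# e.g. allow the looping transition to fire at most twice over 6 instants
print(at_most_k_firings([3, 9, 15, 21, 27, 33], 2))

For larger bounds, a sequential-counter or similar cardinality encoding keeps the number of clauses polynomial instead of combinatorial.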
Improving the Notion of Distance. A limitation of our technique as presented above, concerning the application to process mining, is that it relies on a notion of distance between γ and σ which is too rigid: indeed, every symbol γ_i is compared only to the exact corresponding symbol σ_i. This puts for instance the word ababababab at distance 10 from bababababa. In process mining, other distances are usually preferred (see for instance [START_REF] Adriansyah | Aligning observed and modeled behavior[END_REF]), typically Levenshtein's distance (or edit distance), which counts how many deletions and insertions of symbols are needed to obtain σ starting from γ. We propose here an intermediate definition where every γ_i is compared to all the σ_j for j sufficiently close to i.
Definition 6 (dist_d). Let d ∈ N. For two traces γ = γ_1 . . . γ_n and σ = σ_1 . . . σ_n, of same length n, we define dist_d(γ, σ) := |{ i ∈ {1 . . . n} | ∀j, i − d ≤ j ≤ i + d : γ_i ≠ σ_j }|.
Notice that dist_0 corresponds to the Hamming distance. This definition is sufficiently permissive for many applications, and we can easily adapt our technique to it, simply by adapting the constraints relating the δ_{i,σ,k} with the λ(t_i) in the definition of Φ^n_m(N, L).
Anti-Alignments Between Two Nets. Our notion of anti-alignments can be generalized as follows:
Definition 7. Given n, m ∈ N and two labeled Petri nets N and N′ sharing the same alphabet of labels Σ, we call (n, m)-anti-alignment of N w.r.t. N′ a run of N of length n which is at least at distance m from every run of N′.
Our problem of anti-alignment for a model N and a log L corresponds precisely to the problem of anti-alignment of N w.r.t. the net N_L representing all the traces in L as disjoint sequences, all starting at a common initial place and ending by a loop labeled w, like in Figure 4. We show below that the problem of finding anti-alignments between two nets can be reduced to solving a 2QBF formula, i.e. a Boolean formula with an alternation of quantifiers, of the form ∃ . . . ∀ . . . φ. Solving 2QBF formulas is intrinsically more complex than solving SAT formulas (Σ^P_2-complete [START_REF] Kleine Büning | Theory of quantified boolean formulas[END_REF] instead of NP-complete) and 2QBF solvers are usually far from being as efficient as SAT solvers. Anyway, the notion of anti-alignments between two nets allows us to modify the net N_L in order to code a better notion of distance, for instance by inserting optional wait loops at desired places in the logs. Possibly also, one can replace N_L by another net which represents a large set of runs very concisely. As a matter of fact, we first did a few experiments with the 2QBF encoding, but for efficiency reasons we moved to the SAT encoding. Anyway, we plan to retry the 2QBF encoding in the near future, with a more efficient 2QBF solver and some optimizations, in order to benefit from the flexibility offered by the generalization of the anti-alignment problem.
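A minimal sketch of dist_d from Definition 6, again over plain Python lists of labels; the convention that window indices falling outside the trace are simply ignored is an implementation choice on our side, and both traces are assumed to have already been brought to the same length.

def dist_d(gamma, sigma, d):
    n = len(gamma)
    count = 0
    for i in range(n):
        window = range(max(0, i - d), min(n, i + d + 1))
        # position i is a mismatch only if gamma[i] differs from every nearby sigma[j]
        if all(gamma[i] != sigma[j] for j in window):
            count += 1
    return count

print(dist_d(list("ababababab"), list("bababababa"), 0))  # 10: the Hamming distance
print(dist_d(list("ababababab"), list("bababababa"), 1))  # 0: every symbol has a close match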
2QBF Coding. Finding a (n, m)-anti-alignment of a net N w.r.t. a net N′ corresponds to finding a run γ ∈ L(N) such that |γ| = n and for every σ ∈ L(N′), dist(γ, σ) ≥ m. This is encoded by the following 2QBF formula:
∃ (τ_{i,t})_{i=1...n, t∈T}, (m_{i,p})_{i=0...n, p∈P} ∀ (τ′_{i,t′})_{i=1...n, t′∈T′}, (m′_{i,p′})_{i=0...n, p′∈P′}, (δ_{i,k})_{i=1...n, k=1...m} :
( λ(t_1) . . . λ(t_n) ∈ L(N) ∧ λ′(t′_1) . . . λ′(t′_n) ∈ L(N′) ∧ ∆ ) ⟹ ⋀_{k=1}^{m} ⋁_{i=1}^{n} δ_{i,k}
where:
– the variables τ_{i,t} and m_{i,p} encode the execution of N like for the coding into SAT (see Section 5.1);
– τ′_{i,t′} and m′_{i,p′} represent the execution of N′;
– δ_{i,k} means that the k-th mismatch between the two executions is at position i;
– the constraints that λ(t_1) . . . λ(t_n) ∈ L(N) and λ′(t′_1) . . . λ′(t′_n) ∈ L(N′) are coded like in Section 5;
– ∆ is a formula which says that the variables δ_{i,k} are correctly assigned w.r.t. the values of the τ_{i,t} and τ′_{i,t′}. ∆ is the conjunction of:
• there is a mismatch at the i-th position iff λ(t_i) ≠ λ′(t′_i): ⋀_{i=1}^{n} ( ( ⋁_{k=1}^{m} δ_{i,k} ) ⟺ ⋁_{t∈T, t′∈T′, λ(t)≠λ′(t′)} ( τ_{i,t} ∧ τ′_{i,t′} ) )
• a mismatch cannot serve twice: ⋀_{i=1}^{n} ⋀_{k=1}^{m−1} ⋀_{k'=k+1}^{m} ¬( δ_{i,k} ∧ δ_{i,k'} )
7 Using Anti-Alignments to Estimate Precision
In this section we will provide two ways of using anti-alignments to estimate precision of process models. First, a simple metric will be presented that is based only on the information provided by anti-alignments. Second, a well-known metric for precision is introduced and it is shown how the two metrics can be combined to provide a better estimation for precision.
A New Metric for Estimating Precision. There are different ways of incorporating the information provided by anti-alignments that can help into providing a metric for precision. One possibility is to focus on the number of misalignments for a given maximal length n, i.e., find the anti-alignment with bounded length that maximizes the number of mismatches, using the search techniques introduced in the previous section. Formally, let n be the maximal length for a trace in the log, and let max_n(N, L) be the maximal number of mismatches for any anti-alignment of length n for model N and log L. In practice, the length n will be set to the maximal length for a trace in the log, i.e., only searching anti-alignments that are similar in length with respect to the traces observed in the log. We can now define a simple estimation metric for precision: a_n(N, L) = 1 − max_n(N, L) / n. Clearly, max_n(N, L) ∈ [0 . . . n], which implies a_n ∈ [0 . . . 1]. For instance, let the model be the one in Figure 5 (top-left), and the log L = [σ_1, σ_2, σ_3, σ_4, σ_5] also shown in the figure. Since the maximal length n for L is 6, max_6(N, L) = 3, corresponding to the run ⟨a, c, b, i, b, i⟩. Hence, a_n = 1 − 3/6 = 0.5.
Lemma 3 (Monotonicity of the Metric a_n). Observing a new trace which happens to be already a run of the model can only increase the precision measure. Formally: for every N, L and for every σ ∈ L(N), a_n(N, L ∪ {σ}) ≥ a_n(N, L).
Proof. Clearly, every (n, m)-anti-alignment for (N, L ∪ {σ}) is also a (n, m)-anti-alignment for (N, L). Consequently max_n(N, L ∪ {σ}) ≤ max_n(N, L) and a_n(N, L ∪ {σ}) ≥ a_n(N, L).
The Metric a_p. In [START_REF] Munoz-Gama | Conformance Checking and Diagnosis in Process Mining[END_REF][START_REF] Adriansyah | Measuring precision of modeled behavior[END_REF] the metric align precision (a_p) was presented to estimate the precision a process model N (a Petri net) has in characterizing observed behavior, described by an event log L. Informally, the computation of a_p is as follows: for each trace σ from the event log, a run γ of the model which has a minimal number of deviations with respect to σ is computed (denoted by γ ∈ Γ(N, σ))3, by using the techniques from [START_REF] Adriansyah | Aligning observed and modeled behavior[END_REF]. Let Γ(N, L) := ⋃_{σ∈L} Γ(N, σ) be the set of model traces optimally aligned with traces in the log.
An automaton A_Γ(N,L) can be constructed from this set, denoting the model's representation of the behavior observed in L. Figure 5 describes an example of this procedure. Notice that each state in the automaton has a number denoting the weight, directly related to the frequency of the corresponding prefix, e.g., in the automaton of Figure 5, ω(ab) = 2 and ω(acb) = 1. For each state s in A_Γ(N,L), let a_v(s) be the set of available actions, i.e., possible direct successor activities according to the model, and e_x(s) be the set of executed actions, i.e., activities really executed in the log. Note that, by construction, e_x(s) ⊆ a_v(s), i.e., the set of executed actions of a given state is always a subset of all available actions according to the model. By comparing these two sets in each state, the metric a_p can be computed: a_p(A_Γ(N,L)) = ( Σ_{s∈Q} ω(s) · |e_x(s)| ) / ( Σ_{s∈Q} ω(s) · |a_v(s)| ), where Q is the set of states in A_Γ(N,L). This metric evaluates to 0.780 for the automaton of Figure 5.
Drawbacks of the Metric a_p. A main drawback of metric a_p lies in the fact that it is "short-sighted", i.e., only one step ahead of log behavior is considered in order to estimate the precision of a model. Graphically, this is illustrated in the automaton of Figure 5 by the red states being successors of white states. A second drawback is the lack of monotonicity, a feature that metric a_n has: observing a new trace which happens to be described by the model may unveil a model trace which has a large number of escaping arcs, thus lowering the precision value computed by a_p. For instance, imagine that in the example of Figure 5, the model has another long branch starting as a successor of place p_0 and allowing a large piece of behaviour. Imagine that this happens to represent a possible behaviour of the real system; simply, it has not been observed yet. This branch starting at p_0 generates a new escaping arc from the initial state of A_Γ(N,L), but the metric a_p does not blame a lot for this: only one more escaping point. Now suppose a trace σ corresponding to the new behaviour is observed (proving somehow that the model was right!): after this observation, the construction A_Γ(N,L∪{σ}) changes dramatically because it integrates the new observed trace. In consequence, if the corresponding branch in the model enables other transitions, then the model is going to be blamed for many new escaping points while, before observing σ, only one escaping point was counted.
Combining the two Metrics. In spite of the aforementioned problems, metric a_p has proven to be a reasonable metric for precision in practice. Therefore the combination of the two metrics can lead to a better estimation of precision: whilst a_p focuses globally on counting the number of escaping points from the log behavior, a_n focuses on searching globally for the maximal deviation one of those escaping points can lead to: a_n^p(N, L) = α · a_p(A_Γ(N,L)) − β · a_n(N, L), with α, β ∈ R_{≥0}, α + β = 1. Let us revisit the example introduced at the beginning of this section, which is a transformation of the model in Figure 5 but that contains an arbitrary number of operations before the Post-chemo. If β = 0.2, then a_n^p will evaluate to 0.508, a mid value that makes explicit the precision problem represented by the anti-alignment computed.
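For small instances, a_n as defined above can be computed by brute force when a set of length-n runs of the model is available explicitly; in the following Python sketch the run list is passed in by hand, which sidesteps the SAT-based search and therefore only gives an upper bound on the true a_n when the list is incomplete. The distance is the padded Hamming distance of Section 4.

def dist(gamma, sigma, w="w"):
    s = sigma[:len(gamma)] + [w] * (len(gamma) - len(sigma))
    return sum(1 for g, x in zip(gamma, s) if g != x)

def max_n_mismatches(model_runs, log):
    # maximal, over the supplied runs, of the distance to the closest log trace
    return max(min(dist(g, s) for s in log) for g in model_runs)

def a_n(model_runs, log, n):
    # a_n(N, L) = 1 - max_n(N, L) / n, approximated here on the runs that are supplied
    return 1.0 - max_n_mismatches(model_runs, log) / n

log = [list("abcfghk"), list("acbfghk"), list("acbfhgk"),
       list("abcfhgk"), list("aefik"), list("adfghk"), list("aefhgk")]
print(a_n([list("abcfik")], log, 6))   # 0.666..., an upper bound since only one run is supplied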
We have implemented a prototype tool called DarkSider which implements the techniques described in this paper 4 Given a Petri net N and a log L, the tool is guided towards the computation of anti-alignments in different settings: -Finding an anti-alignment of length n with at least m mismatches (Φ n m (N, L)). -Finding the shortest anti-alignment necessary for having at least m mismatches (Φ m (N, L)). -Finding the anti-alignment of length n with maximal mismatches (Φ n (N, L)). Results are provided in Table 1. We have selected two considerably large models, initially proposed in [START_REF] Vanden Broucke | Event-based real-time decomposed conformance analysis[END_REF][START_REF] Munoz-Gama | Single-entry single-exit decomposed conformance checking[END_REF]. The table shows the size of the models (number of places and transitions), the number of traces in the log and the size of the alphabet of the log. Then the column labeled as n establishes the length imposed for the derived anti-alignment. In this columns values always start with the maximal length of a trace in the corresponding log e.g., for the first log of the prAm6 benchmark the length of any trace is less or equal to 41. Then the column m determines the minimal number of mismatches the computed anti-alignment should have. Finally, the results on computing the three formulas described above on these parameters are provided. For Φ n m (N, L), it is reported whereas the formula holds. For Φ m (N, L), it is provided the length of the shortest anti-alignment found for the given number of mismatches (m). Finally, for Φ n (N, L) we provide the number of mismatches computed for the given length (n). For each benchmark, two different logs were used: one containing most of the behavior in the model, and the same log but where cases describing some important branch in the process model are removed. The results clearly show that using anti-alignments highly deviating behavior can be captured, e.g., for the benchmark prAm6 a very deviating anti-alignment (39 mismatches out of 41) is computed when the log does not contains that behavior in the model, whereas less deviating anti-alignments can be found for the full log (19 mismatches out of 41)5 . Related Work The seminal work in [START_REF] Rozinat | Conformance checking of processes based on monitoring real behavior[END_REF] was the first one in relating observed behavior (in form of a set of traces), and a process model. In order to asses how far can the model deviate from the log, the follows and precedes relations for both model and log are computed, storing for each relation whereas it always holds or only sometimes. In case of the former, it means that there is more variability. Then, log and model follows/precedes matrices are compared, and in those matrix cells where the model has a sometimes relation whilst the log has an always relation indicate that the model allows for more behavior, i.e., a lack of precision. This technique has important drawbacks: first, it is not general since in the presence of loops in the model the characterization of the relations is not accurate [START_REF] Rozinat | Conformance checking of processes based on monitoring real behavior[END_REF]. Second, the method requires a full state-space exploration of the model in order to compute the relations, a stringent limitation for models with large or even infinite state spaces. 
In order to overcome the limitations of the aforementioned technique, a different approach was proposed in [START_REF] Munoz-Gama | Conformance Checking and Diagnosis in Process Mining[END_REF]. The idea is to find escaping arcs, denoting those situations where the model starts to deviate from the log behavior, i.e., events allowed by the model not observed in the corresponding trace in the log. The exploration of escaping arcs is restricted by the log behavior, and hence the complexity of the method is always bounded. By counting how many escaping arcs a pair (model, log) has, one can estimate the precision of a model. Although being a sound estimation for the precision metric, it may hide the problems we are considering in this paper, i.e., models containing escaping arcs that lead to a large behavior. Less related is the work in [START_REF] Vanden Broucke | Determining process model precision and generalization with weighted artificial negative events[END_REF], where the introduction of weighted artificial negative events from a log is proposed. Given a log L, an artificial negative event is a trace σ′ = σ · a where σ ∈ L, but σ′ ∉ L. Algorithms are proposed to weight the confidence of an artificial negative event, and they can be used to estimate the precision and generalization of a process model [START_REF] Vanden Broucke | Determining process model precision and generalization with weighted artificial negative events[END_REF]. Like in [START_REF] Munoz-Gama | Conformance Checking and Diagnosis in Process Mining[END_REF], by only considering one step ahead of log/model's behavior, this technique may not catch serious precision/generalization problems.
Conclusions and Future Work
In this paper the new concept of anti-alignments is introduced as a way to catch deviations a process model may have with respect to observed behavior. We show how the problem of computing anti-alignments can be cast as the satisfiability of a Boolean formula, and have implemented a tool which automates this encoding. Experimental results performed on large models show the usefulness of the approach, being able to compute deviations when they exist. This work starts a research direction based on anti-alignments. We consider that further steps are needed to address properly some important extensions. First, it would be interesting to put anti-alignments more in the context of process mining; for that it may be required that models have also defined a clear final state, and anti-alignments should be defined accordingly in this context. Also, the distance metric may be adapted to incorporate the log frequencies, and allow it to be less strict with respect to trace deviations concerning individual positions, loops, etc. Alternatives for the computation of anti-alignments will also be investigated. Finally, the use of anti-alignments for estimating the generalization of process models will be explored.
Fig. 1. Running example (adapted from [8]). Overall structure (top), process model (bottom). Activity labels: open and register transaction; check sender; process cash payment; process cheque payment; process electronic payment; check receiver; transfer money; notify and close transaction.
Fig. 2. Model containing a highly deviating anti-alignment for the log considered.
Fig. 3. The process model (taken from [10]) has the anti-alignment ⟨a, b, c, f, i, k⟩ for the log L = { ⟨a, b, c, f, g, h, k⟩, ⟨a, c, b, f, g, h, k⟩, ⟨a, c, b, f, h, g, k⟩, ⟨a, b, c, f, h, g, k⟩, ⟨a, e, f, i, k⟩, ⟨a, d, f, g, h, k⟩, ⟨a, e, f, h, g, k⟩ }.
Fig. 4. The net N_L for L = { ⟨a, b, c, f⟩, ⟨a, c, b, f, g⟩, ⟨a, c, b, f, h⟩ }.
Fig. 5. Example taken from [5]. Initial process model N (top-left), optimal alignments for the event log L = [σ_1, σ_2, σ_3, σ_4, σ_5] (top-right), automaton A_Γ(N,L) (bottom).
Table 1. Experiments for different models and logs.
benchmark |P| |T| |L| |A_L| n m Φ^n_m(N,L) Φ_m(N,L) Φ^n(N,L)
prAm6 347 363 761 272 41 1 ! 3 39
41 5 ! 7
21 1 ! 3 19
21 5 ! 7
1200 363 41 1 ! 4 19
41 5 ! 8
21 1 ! 4 15
21 5 ! 8
BankTransfer 121 114 989 101 51 1 ! 8 32
51 10 ! 17
21 1 ! 8 14
21 10 ! 17
2000 113 51 1 ! 15 16
51 10 ! 37
21 1 ! 15 5
21 10 % 37
1 A Petri net is acyclic if the transitive closure F+ of its flow relation is irreflexive.
2 In general the net does not remain acyclic with the complementary places.
3 Note that more than one run of the model may correspond to an optimal alignment with log trace σ, i.e., |Γ(N, σ)| ≥ 1. For instance, in Figure 5 five optimal alignments exist for trace ⟨a⟩. For the ease of explanation, we assume that |Γ(N, σ)| = 1.
The tool is available at http://www.lsv.ens-cachan.fr/ ~chatain/darksider. Since in the current implementation we do not incorporate techniques for dealing with the improved distance as explained in Section 5, we still get a considerably deviating anti-alignment for the original log. Acknowledgments. We thank Boudewijn van Dongen for interesting discussions related to this work. This work has been partially supported by funds from the Spanish Ministry for Economy and Competitiveness (MINECO), the European Union (FEDER funds) under grant COMMAS (ref. TIN2013-46181-C2-1-R).
42,337
[ "745648", "963422" ]
[ "157663", "2571", "85878" ]
01487676
en
[ "shs" ]
2024/03/04 23:41:48
2008
https://univ-sorbonne-nouvelle.hal.science/hal-01487676/file/2008GervaisNeitherImperial.pdf
Neither imperial, nor Atlantic: A merchant perspective on international trade in the eighteenth century Pierre GERVAIS 1 In the most literal sense, the "Atlantic world" is a misnomer: in the XVIIIth century, the period for which the term is most commonly employed, the Atlantic Ocean was a forbidding expanse of salt water, mostly empty save for a few islands, and could hardly constitute a world. Even today, supertankers and cruise ships notwithstanding, not much is taking place on the Atlantic proper. What counts, of course, is the land, including the aforementioned islands. But the geographical fact that these lands border the Atlantic or are surrounded by it does not tell us much about what an Atlantic world resembles, either. As a number of authors have pointed out more or less forcefully, the so-called Atlantic community was never strictly Atlantic, and contained many very different communities. What justifies the term for its advocates is that it eventually came to encompass a thick web of relationships, linking a number of people on each side of the Atlantic Ocean, so many in fact that, in some respect at least, it produced what could be called a shared Atlantic world. This world was not a numerical accumulation of empires, defined by national boundaries, or national loyalties ; on the contrary, if it had one defining characteristic, it was precisely its web-like structure, created by the free circulation of goods, people and ideas, across national boundaries, such as they were. Whether this process of circulation was oppressive, as with the slave trade, or liberating, as with Enlightenment ideals, is beside the point. The most determinant factor in the success of Atlantic exchanges was the international movement throughout interconnected parts, and the deeper historical evolution associated with it. 2 Was this movement in any sense truly "Atlantic," however? The present paper aims at presenting a brief and narrow view of it, but from a crucial point of view, that of the merchant. Commerce, everybody will agree, was at the heart of the Atlantic process. It looms large in every account of the XVIIIth Century, and even larger when one realizes that in many ways commerce was the reason why the European "Atlantic" empires were builtthe imperial viewpoint being the other major competitor in the race to offer an analytical framework for Eighteenth-century development in Europe and the Americas, at least. 3 Colonial goods and the colonial trade prompted the great confrontation between England and France; and some economists even credit them with a key role in fueling economic growth in the mother countries, regardless of their relatively marginal volume in the overall trade of these countries. 4 Merchants themselves were supposedly the quintessential Atlanticists, both at the personal and the professional level. If the concept makes sense at all, then, it should make sense particularly for the activities of these traders, whose breadth of horizon, manic activity, and constant personal intercourse underpinned almost everything significant which took place on the Atlantic Ocean outside of strictly military ventures, and largely provided the stakes and the motives for the latter. Even the one major "Atlantic" phenomenon which could be said to escape the merchant sphere, the multifaceted cross-cultural intercourse generated by constant flows of migrants over and around the ocean, was still technically channelled through merchant-made networks, and merchant-conceived crossing procedures. 
* * * To some extent, the minutiae of merchant practice have only recently become a topic of historical enquiry. Earlier works were often mainly concerned with aggregate data, the general movement of ships and goods, and changes in economic trends in the Labroussean structuralist tradition, or with collective political, cultural and social portraits of merchants groups in which account books were only peripherally used. The merchant mind was best read through correspondence and political lobbying, cultural attitudes and social differentiation. None of these areas of research, however, are illuminating for our purposes. To quote Ian Steele, nobody ever fought, prayed and died in the name of an Atlantic community, so that its existence is usually proved through reference to practice -to the circulation of ideas, people, and goods. 5 Hence the interest of analyzing the forms this circulation took, and here we can rely on a very strong body of recent prosopographies. In this respect, a series of works have reshaped our views on merchant activities, particularly in the last twenty years. We now know that merchants were combining multiple activities, integrating all the areas of the Atlantic world, and thereby holding together the many strands which made or unmade the central "adventure:" a shipping expedition. We know that they were impressively flexible, managing a multiplicity of endeavours at once through complex institutional forms, and that they suceeded in carrying on shipping activities in the face of imperial prohibition, and even in the face of Napoleon's Continental blockade. We also know that the same networks which underpinned their trade gave rise to complex "conversations" through which scales of qualities were set, goods defined within these scales, prices debated, and production and transportation processes refined and improved. [START_REF] Hancock | Commerce and Conversation in the Eighteenth Century Atlantic; the Invention of Madeira Wine[END_REF] The merchant world was thus a networked world, which, on the face of it, would fit perfectly into the model of a transnational community. However, both the motives and the implications of this networked approach to trade may not have received all the historical attention they deserve. For networks played a series of roles, some of which were characteristic of the era, and also had concrete consequences on the way merchants would view their world. First of all, the impact of information was particularly decisive to any society in which goods were far from standardized, and where official standards imposed by state institutions were constantly undermined through widespread imitation and fraud. In a remarkable article, Pierre Jeannin points out that merchants faced vast difficulties in gauging the quality of the wide range of goods they were supposed to sell. [START_REF] Jeannin | Distinction des compétences et niveaux de qualification : les savoirs négociants dans l'Europe moderne[END_REF] Who could say for sure that a given piece of textile had really been made according to the quality standards of the manufacturing area it purported to come from, that a barrel of flour contained the grade of flour it was sold for, that a jewel from India was what it seemed to be? While any merchant could acquire a competency in any given field, no buyer could hope to master the bewildering range of qualities and nomenclatures characteristic of the eighteenth-century. [START_REF]On the issue of quality scales, cf. 
"Networks in the Trade of Alcohol," a special section introduced by Paul Duguid[END_REF] Hence the vital role of networks. No merchant could be an expert on everything; but a good merchant would be able to rely on a network of peer experts, who would do the job for him. Indeed, this went beyond product quality, which was merely the visible part of the commercial iceberg. Each level of quality entailed a different marketing strategy, a different clientele, and ultimately different markets at each end of the process. Even (relatively) specialized traders dealt in a whole series of products, with no written and institutionalized nomenclature to help them. But the typical experience was that of unspecialized traders, such as grocer Thomas Allen of New London, Connecticut whose 1758 account book listed beef, corn, shingles, clapboard, and other local products along with coffee, sugar, raisin, rum, cotton, "stript" (striped cloth), "Oznabrigue" (Osnabrück cloth), and other colonial and European products. 9 There were thousands of retailers such as Allen, who left hundreds of such account books, each of which testified to a specific set of suppliers, or more accurately to a specific set of goods gathered through one correspondent from many suppliers. When Joshua Green, at age twentyone, started a business as a grocer in Boston, Massachusetts, he used his father's supplier, one Thomas Lane of London, and bought every year an assortment of Far Eastern spices and shipping products ; his first order brought cinnamon, cloves, nutmeg, mace, pepper, tea, starch, "Florence oil," raisins, and Cheshire cheese. Green's first introductory letter to Lane, dated 1752, started with these words : "The Satisfaction you have given in the Business you have done for Mess(rs) Green + Walker (with whom I serv'd my apprenticeship) has induc'd me to apply to you ;" past experience had taught Green that Lane could be trusted to be his expert buyer in London. [START_REF]Green Family of Boston, Mass[END_REF] Large-scale merchants did not operate differently ; according to Silvia Marzagalli, the ship Isaac Roget of New York sent in 1805 for Guadeloupe was filled with goods from five different suppliers, including various silk, linen and other textile products, as well as manufactured goods and wine. [START_REF] Marzagalli | Establishing Transatlantic Networks in Time of War : Bordeaux and the united States, 1793-1815[END_REF] All these shippers were general merchants, but specialization did not bring about significantly different approaches; ordering a shipment of British textile goods in Boston in 1813 (in the hopes that the War of 1812 would be over soon), merchant Nathan Appleton wrote to his brother in London : I should like however to have some good merchandize for me should they be reasonably low. Say to am(t) of £ 5000 -if you have not already purchased any for you M. Stone is an excellent judge of goods + I should like to have you get him to purchase them if you do not wish to do it yourself -I have about £ 2500 I suppose in Lodges + [Prother?] hands -but [ill.] they will be glad to accept drafts to a greater am(t) -whilst the goods are in their hands. It is [also?] necessary that I should give a particular order as I wish the goods to be of the most staple kinds say Cambrics Calicoes shirtings ginghams +c. to am(t) of £ 3000 or 4000 -+ 1 or £2000 in staple woolens as in my former letter [pr?] 
I + T Haigh for goods in their line -I leave it however to your judgement from the state of the market + the prospect of peace or a continuance of the war to purchase or not at all. 12 Thus Appleton relied on one M. Stone, and on his brother as a controlling element, when it came to order goods abroad, even though he was specialized in the type of merchandize he was buying. He had some ideas of his own, but was fully ready to defer to those who would actually buy, since they alone would be in a position to judge if the cloth they had in hand was suitable to the Boston market, and whether the price / quality ratio was adequate. In other words, even a specialized trader had to rely on others, not only to get the best possible quality for the price, but also, and possibly first and foremost, in order to pick the right type of goods and the proper variety. In any case, the time lag between orders and sales was usually such as to prevent instructions from being too specific. Commissioners everywhere had to exercise their judgements, and merchant correspondence is replete with complaints that an agent had bought too late, or too early, and at the wrong price. Choosing the right correspondent was thus essential, and even more so if one includes the second major dimension of merchant activity, that of credit. At any one time, little cash changed hands; most of the settlements took place through compensations. Green, for instance, almost never sent any cash to Lane, but "remitted" his debts by sending "bills," i. e. formal I.O.U.s, drawn on London houses. There is no indication on how these bills came into his hands, but in almost every one of his letters in 1752-1754, he apologizes for not sending Lane enough of them to balance his account. The fact that his was a paper debt, based on theoretically open credit, may mean that no interest was paid. Whatever the case in practice, the point is that Green needed Lane's forbearance. Thus a network was also a source of credit, which in turn was assuredly bound up with the personal relationships between creditor and debtor. Of course personal reputation was a decisive element, and it included non-economic ties -Lane had been the supplier of Green's father, after all. Kinship, religion, or any other potential link could become a motive for a creditor to be more tolerant of delays, or to offer better terms of payment, such as lower discounts on exotic commercial paper, for instance. The reverse was true as well, since Lane depended on payments from his customers, Green among them, to pay his suppliers on time. Much has been written on the delicate timing required by long-distance trade, but timing was always flexible, and dependent in part of the relationship between the actors of the exchange. The same could be said of interest rates and exchange rates, never rigidly fixed, and dependent in part on the relationship existing between the two parties. Networks, in a way, were credit, since they underpinned the ability to draw both capital and information on others. The result was in truth a joint venture between individuals who had to trust each other, a venture in which profit was distributed along complex channels of differential participation, again with close attention paid to interpersonal relationships. 13 In their concrete, day-to-day operations, networks were therefore carefully chosen and nurtured. 
A merchant's point of view tended to encompass first and foremost a discrete set of correspondents, usually picked among groups with which there were certain affinities. Religious or ethnic networks, or the universal tendency to pick close kin as partners, were simply rational business decisions, aimed at minimizing the risks of network 13 Laurence Fontaine, "Antonio and Shylock: Credit, and Trust in France, c. 1680-c. 1780," Economic History Review 54 (1, February 2001): 39-57; or William T. Baxter, "Observations on Money, Barter and Bookkeeping," Accounting Historians Journal 31 (1, June 2004): 129-139; as well as Cathy Matson's discussion of risk, with trust as an underlying thread, in "Introduction: The Ambiguities of Risk in the Early Republic," "Special Forum: Reputation and Uncertainty in Early America," Business History Review 78 (4, Winter 2004): 595-606). On merchant subcontracting, cf. Pierre Gervais, Les Origines de la révolution industrielle aux Etats-Unis, Paris: 2004. failure, without suppressing them completely of course. 14 What counted was on whom one could call for credit and information, and the links one relied on delineated a geography which was never universal, nor even Atlantic, but made up of the major nodes in which one's correspondents acted. Over twelve years, from 1763 to 1775, the Bordeaux firm of Schröder and Schyler, one of the few merchant firms for which we have solid information, dealt with only 17 foreign firms on a regular basis. And while it had over 250 other foreign clients from time to time, 47.8% of all consignments made from Bordeau went to these seventeen firms. 15 It is thus impossible to overstate the importance for a trader of these bilateral relationships (in this case, the term network is slightly misleading; these were chains of correspondents, or at most small groups linearly linked, rather than actual networks). And it is easy to show that they often gave rise to gate-keeping processes. In his first letter to Lane, already quoted, Green felt necessary to explain that "As Mes(s) G + W dont trade in those articles I purposed to write for [I] shall have the Advantage of supplying some of the best of their Customers on a short Credit or for the Cash." Green wanted to establish his credit with Lane, of course, but he was also careful to point out in passing that he was not going to compete with his father's firm; within a given set of trading links, competition was strictly limited. Insiders had preferential treatment, while outsiders could scarcely hope for such special treatment. This is assuredly one of the more misleading aspects of the current studies on the economic processes commonly associated with the Atlantic area in the XVIIIth century. No merchant operated with utter freedom nor could he easily change his commercial affiliations. Every account demonstrates that any new endeavour, any extensions of earlier channels, or much more rarely any attempt at redirecting these channels, entailed the careful building of new and strong bonds with key players in the desired market. As a rule, no redirection of trade traffic was complete, no business ruptures could be permanent ; all changes were incremental, because it had to be accomplished through the existing channels, and only thanks to them. Even bankruptcies could not shake these constraints, since the practice of settlement with creditors is universally attested in the archives. 
Conversely, finding new trading partners was difficult, time-consuming, and possible only to the extent that sound intermediate contacts could be found. In his same first introductory letter to Lane, Green junior was careful to point out that he had been his father's apprentice, and sent a note worth £ 50, the biggest sum he would ever send during his recorded first years of dealing with Lane. Green senior's standing was thus not automatically transferred to his son's new firm, and had to be reasserted. Establishing credit was no easy matter. On the other hand, no trader could operate without the help of other traders, and indeed in many areas of the world, especially in the Far East, but at one time or another in many European countries as well, having local correspondents was not only necessary, but compulsory. 16 14 David Hancock, "The Trouble with Networks : Managing the Scots' Early-Modern Madeira Trade," Business History Review 79 (3, Autumn 2005): 467-491. 15 Pierre Jeannin, "La cientèle étrangère de la maison Schröder et Schyler de la guerre de Sept Ans à la guerre d'indépendance américaine," in Marchands d'Europe. Pratiques et savoir à l'époque moderne, Jacques Bottin et Marie-Louise Pelus-Kaplan ed, Paris: 2002, 125-178. 16 Cf. for instance the Calcutta intermediaries and their relationship with foreign merchants as described in the contemporary letters of Patrick T. Jackson, Far Eastern trader in the 1800s ; cf. Kenneth W. Porter, The The net result of all these pressures is that the proper unit of analysis for the merchant world was the universe of discrete chains of trading links that structured mercantile commerce. This process had nothing to do with either the Atlantic Ocean or the relationship between "Old" and "New" worlds, since it can be observed in any setting where European-style merchant capitalism was a significant reality. Family solidarities, gate-keeping practices, credit-based dealings were merchant, not Atlantic, characteristics, and they created order in merchant life most everywhere. Commercial connections, far from being set up everywhere and at will, followed lines of least resistance created in constructing this merchant order. Mercxhant linkages were structured by existing routes and contacts, and were influenced by differential risk. This is where the imperial factor also intervened in international trade, especially during times of war. War was in itself a rejoinder to the very idea of an Atlantic community, which it "vetoed," so to speak, regular intervals. Losses in times of war are an ubiquitous story in the XVIIIth century, and no merchant, however experienced, could trust that he would be protected from international conflict of all kinds. Even such a vaunted meticulous planner as British merchant John Leigh saw his first foray in slave-trading end in near-disaster at the hands of a French privateer off the Coast of Guyana. [START_REF] Steele | Markets, Transaction Cycles, and Profits : Merchand Decision Making in the British Slave Trade[END_REF] Of course, proponents of the "Atlantic" framework insist that the barriers created by war were regularly finessed and crossed in various ways, which is quite true. It has been shown again and again that war did not completely cut off communications between enemies, and that trade was not easily enclosed within imperial models. But merchants did not freely redirect their energies anywhere they wanted in the great Atlantic web either, a point which is much less made. 
Even more than in peacetime, networks in wartime turned out to be highly incapable of adapting or changing to meet circumstances. They engendered dependencies on strict trading pathways, the importance of which can hardly be exaggerated, and which seems to resurface in many historical example. Thus the illegal Caribbean trade around 1780 underlines the persistent links of the New York merchant community with the Dutch West Indies over a century after Stuyvesant's surrender. A quarter of a century later, the Herculean efforts of Bordeaux merchants to maintain their colonial commerce after 1803 in the face of seemingly universal opposition reveals their inability to develop new trade channels on the continent in spite of their exceptionally famous wine-growing hinterland. Even the growth of neutral U. S. shipping during the same Napoleonic wars was insufficient to prompt Bordeaux retailers to call into question their traditional London-based financial networks even though their confiscated goods occasionally ended up in the warehouses of enemy continental firms. The much vaunted ability of traders to pursue trade in times of war thus may be read also, to a certain extent, as an inability to redirect this same trade along more secure lines, simply because the cost of this redirection was too high, hence the persistent attempt to derive profit from existing networks in spite of adverse conditions. At the very least, the assumption that such contraband trade was preferred because it was more profitable, and developed regardless of the political context, should be challenged, since in practice it turned out to be so often dependent on prior links. Total freedom of choice should have resulted in many more creative endeavours, launched well outside the beaten paths. 18 The complex dialectic between prior relationships and new business opportunities in times of war is well illustrated by the case of John Amory, a Boston merchant, in partnership with his brother Thomas. The two men were from a well-established merchant family, but their father, a Loyalist, had fled to England in 1775. In May 1779, John Jr. arrived in London, but apparently not for political reasons. For the next four years, he would travel ceaselessly between London, Brussels and Amsterdam, organizing a flow of shipments for the benefit of the firm John & Thomas Amory. 19 Most of the shipments for which shippers are specified were made from Amsterdam through a certain John Hodshon, who was, as it turns out, a correspondent of John Amory's father. Indeed, Hodshon was given the same wide latitude as Appleton's agents in London 30 years later, Amory having written him at one point to send "brother Jonathan" "1 Chest of good bohea tea [...] or Same Value in Spice as you may judge -if in spice 1/2 the value in Nutmegs 1/4 in Cinnamon 1/4 in Cloves and Mace." Goods came from both London and Brussels, and the use of a neutral port to ship to the United States was logical, as well as the various precautions which were taken to disguise the true status of the cargo : in the same letter in which Hodshon was left free to choose whether he would buy tea or spices, Amory wrote of "inclosing my letter to Brother Payne to be given Cpt Hayden, desiring the Cpt if taken to destroy it." 20 Actually, Amory's venture was probably not a journey to an entirely new territory. 
His correspondent firm in London was Dowling & Brett, and his first recorded transaction after his arrival in Brussels on July 1st, 1780 was to present a bill on them to the Brussels firm of Danoost & Co., for a grand total of £ 30. This sum in itself was relatively small ; according to the preceding entry Amory had reached Brussels with £ 400 in cash. The most important result of the transaction, however, was to establish Amory's credit by having Danoost & Co. draw on Dowling and Brett, a London firm which may well have been already known in Brussels anyway. In other words, Amory was most probably travelling along a chain of correspondents such as the ones we have described above. The war would slightly modify the order of the links in the chain, with Flemish and Dutch merchants inserted as a buffer between London and Boston, but the points of departure and arrival were the same, and even these new intermediaries were part of the original networks. Even more interestingly, the new status of France, allied to the United States, was not enough to prompt new financial networks. On February 8, 1781, Amory credited his Bills of Exchange account with two bills on London houses, for a total of £ 588, "The above bills being the net proceeds of four bills sent me by J. A. for 13998 livres tournois on paris, + w(ch) were rec(d) By MSs Vaden Yver Freres + C(o) on whom I gave my draft in favour of MSs Danoot + C(o) + who paid [ill.] 13944.9 livres." The two British bills were duly deposited in Amory's account at Dowling & Brett's, as the next entry shows. In other words, French commercial paper probably received in the United States by Jonathan Amory was changed into London paper through French and Flemish correspondents. There was apparently no attempt to reduce the discounts and losses entailed by this long chain of intermediaries, through importing directly from France. There are only two explanations for such a continued reliance on London-based houses in the middle of the War of Independence.
19 The complex web of family relationships and business partnerships between the various Amorys of Boston is described, if not entirely elucidated, in the Massachusetts Historical Society Guide to the Amory Family Papers, and available online at http://www.masshist.org/findingaids/doc.cfm?fa=fa0292. According to MHS records, the "John Amory" whose travels in Europe between 1778 and 1783 are used here must be John Amory Jr. (1759-1823), since John Amory Sr. was already in Europe in 1775. However, the accounts and letterbook from this Brussels trip, which come from the J. and J. Amory Collection (hereafter Amory Collection), Mss: 766, Baker Library, Harvard Business School, vol. 2 ("Journal, John Amory accounts in Europe, 1 Feb. 1778 -27 Feb. 1783"), and vol. 46 ("Copies of letters sent 1781-1789"), quote several times a "brother Jonathan," which should be either an uncle or a cousin, John Jr. having no brother Jonathan. Since William Payne, a cousin, is also called "brother Payne," we have assumed that the word "brother" here had a religious (Quaker?) connotation, and should not be taken as meaning a sibling, but this may well be a mistaken interpretation on our part. 20 For Hodshon's letters to J. & J. Amory, Amory Sr.'s firm, cf. Amory Collection, vol. 52, Folder 2 "Letters received from Miscellaneous, 1780-1785." Amory Jr's letter is in "Copies of letters sent 1781-1789," loc. cit., entry for May 5, 1781.
Either John Amory, as the son of a Loyalist, gave precedence to his political leanings over his Atlantic impulses, and stuck with his original London friends for political reasons. Or, much more plausibly, he considered that the war was no sufficient reason to reorient his trade links, because the costs of such reorientation would be too high in comparison with the expected profits. When one considers how risky it was to use new, unknown suppliers who could easily take advantage of a newcomer with no previous connections, and also how difficult it was to gain the acceptance of fellow merchants for whom one was an unknown quantity of dubious credit, it becomes obvious that entering new business territory unbidden was very costly indeed. By far the most practical solution was to find some respected guarantor who would ensure his fellow traders that their new acquaintance was in good standing. The better known the guarantor, the more trusted one would be, and credit would flow accordingly; bills would be endorsed, orders filled with quality goods, since doing otherwise would be offending the fellow trader who had pledged his word. The upshot of this basic Greifian mechanism was a strong built-in tendency for merchant networks to reproduce themselves regardless of changes in political conditions, and to spread only slowly and cautiously. This could be taken as a proof of the resilient character of these networks, and of the irrelevance of imperial orders to their exercise, in a word of their truly "Atlantic" character. But such a reading glosses over the fact that Amory's links to London were in and of themselves the result of empirebuilding, not a free association generated in the course of free merchant exchange. Moreover, his lack of interest in any direct contact with France, which anticipated the subsequent failure of the Franco-American trade alliance after 1783, points to the same reality: networks themselves, far from being conceived in a vacuum, were in large part the results of empire-building processes in the first place. This is not to say that no merchant community ever took advantage of changed circumstances, of course. The Dutch in the XVIth century, the British in the XVIIth and XVIIIth centuries did seize opportunities from time to time. But even these takeovers may have had an element of concurrent business contacts in them. According to recent research, the Dutch at least gained entry into the Mediterranean at the end of the XVIth century in part through their (politically determined) alliance with Antwerp networks, already well established in Italy also for political reasons. 22 On the whole, though, Amory's cautious approach may have been more representative of standard merchant procedures than the brazen attempts of the Dutch in the Baltic, or of the British in Spanish America. In this case, we should picture an "Atlantic" world as not only partly non-Atlantic, but also markedly less "new" and innovative than assumed in current historiography. Certainly Nathan Appleton, the already quoted Boston merchant and soon-to-be textile magnate, took a similar position during the War of 1812. On November 14, 1813, he wrote to his brother Samuel in London that « if the war should continue I should think a great many articles [ill.] 
of English produce or manufacture, might be shipped here to great advantage in neutral ships via Lisbon or Gottenburg -by our treaty with Sweden + Spain -English property on board their vessels are secured against our privateers -as we have in them recognized the principle that free ships make free goods. » Again, traditional London links were not easily forsaken. 23 Like Amory, incidentally, Appleton had no qualms about trading with the enemy. One could see this as an expression of the often cited Anglophilia of Boston and New England in general, which, in a traditional political narrative, would eventually lead to the ill-fated Hartford Convention and the demise of the Federalist Party. I believe, however, that Appleton's flippancy in a time of war cannot simply be explained in terms of a rejection of Federal policy. There is no reference to politics in the statement above, which is couched in strictly commercial terms. It is an observation of fact, not an affirmation of dissidence. If contraband had been seen as a political activity, not an economic one, it should show somewhere in Appleton's statement. Our Boston magnate did end up having dealings with Great Britain, as shown by this excerpt from a letter dated September 2, 1813 ; « Capt Prince has given us his bill for the balance of this a/c say £ 110.14 which I send to Mess(r) Lodges + [B]ooth by this conveyance for your acc(t) as the 3(rd) of £ 1650. -1 + 2(d) forwarded via Halifax one half on your acc(t) other half on my own -viz: Leon Jacoby and Francis Jacoby on Sam [Balkiny?] + Sons £ 1100. Jos. + [Jon(a)?] Hemphill on Tho(s) Dory + Isaiah Robert 550 -» 24 Three notes of hand, totalling the hefty sum of £ 1760 s 14, were sent, apparently by three different ships, from the United States through Halifax, that is through enemy (British Canadian) territory, on to London, and into enemy hands. Of course, correspondence and remittances were generally accepted in time of war, and in fact even private citizens could, under certain circumstances, travel through enemy territory. Only the movement of goods was restricted, and in ways which were open to debate. 25 Even on this latter point governmental policy itself was often haphazard and vacillating, as exemplified by the recently analyzed case of the British smugglers invited by Napoleon in Gravelines, or the secret instructions sent by London to open the British West Indies to the Spanish American trade. 26 All in all, Appleton, like Amory, seems to have faced little moral pressure when choosing wartime strategies, and actually Amory makes one cryptic reference to a letter to John Jay, which seems to imply at least that he was in contact with the rebels besides or beyond his commercial ventures. 27 That both men chose to stick with the known approaches is all the more striking. Precisely because, as many historians have argued, enforcement of imperial policies was so haphazard, merchant relationships should have mutated much more freely and frequently than reflected by the historical record.
22 Pierre Jeannin, "Entreprises hanséates et commerce méditerranéen à la fin du XVIe siècle," in Marchand du Nord. Espaces et trafics à l'époque moderne, Philippe Braunstein and Jochen Hoock ed., Paris: 1996, 311-322. 23 Appleton Papers, Box 2, Folder 25, "1813," Nathan Appleton to Samuel Appleton, November 14, 1813. 24 Ibid., Nathan Appleton to Samuel Appleton, September 2, 1813.
Appleton did end up entering the French market, but after the end of the war only, in 1815, and in a way which in itself confirms how much merchants relied on preset chains of known correspondents. On March 11, 1815, He wrote his brother that : In revolving in my mind what course to take to avoid the necessity of laborious personal attention to business for which I am becoming too [ill.] and the other extreme of having no regular established business -I have finally concluded a partnership concern with the two M(r) Ward -B C + W. [...] M(r) W(m) Ward goes to England in the Milo with the intention of proceeding immediately to Paris for the purpose of purchasing French goods -+ being well acquainted with this market I think he will be able to select such as will pay a profit -I have agreed to put a £5000 sty to be the same on 60 day bills drawn [ill.] -and I wish you to see this arrangement completed by placing the amount to credit of the new firm Benj C. Ward + C(o) with yourself if you have established yourself as you propose in your last letter to me as a commission merchant -if not with Lodges + [Booth?] or some house in London" 28 One needed an entry into the French market, and that entry would be the young Ward. Appleton himself had no intention to go to France, but sought to obtain a surrogate more competent than himself. It is worth pointing out, moreover, that the transfer of funds from London to Paris was left to Ward's initiative. The choice of the merchant house that would serve as Ward's correspondent in Paris was up to Ward, quite logically, as this was the most crucial choice the young associate would have to make in order to crack open the French market -and he was the expert, after all. * * The most striking element in both Amory and Appleton's stories, and in countless other merchants' tales, is that they took place in a mercantile world which does not fit well into such categories as "Atlantic or "imperial". Because the concerns of these two men were structured by a flow of goods which never came close to imitating the free, unfettered market Adam Smith's utopian work made famous, they never thought on an Atlantic scale. Their view was economically both narrower and wider, encompassing a patchwork of fellow traders from whom they derived the goods they would send hither and thither, or the accesses to the customers who would buy these goods. But these networks were highly dependent on professional strategies, and narrowly constrained by the necessities entailed by the maintenance of these strategies. Thus Bordeaux traders would view their world as a set of correspondents, some in the Americas (the Caribbeans, some ports on the North American seaboard, South America sometimes), many in Europe, from their own region of Bordelais to London, Amsterdam and the Baltic sea, and maybe others in Asia and Africa, Calcutta, the Gold coast, or the Ile Royale. A Saint-Malo trader would have its own world as well, but it would be significantly different, with more focus on Newfoundland, on Normandy, on the Spanish empire. Boston would be a different story again, with London and the Caribbean looming large, but also the inner valleys of the North American continent, whence furs came, and the households of the Eastern seaboard, with their farmers and retailers. 
Even London at the height of its power, after the end of the Seven Years War, would have its own provincial outlook, and its own particular networks, or rather chains of relationships, centered on the British Caribbean Islands, the Yorkshire, the Bordeaux wine region, the slave-producing areas of Africa, the Indian dominion. And these are merely statistical orientations, dominant specializations which a few mavericks would always belie, since each trader had his own mix. From a merchant's eye view, the world was both wider and smaller than the Atlantic Ocean, but it never really corresponded to the Atlantic Ocean. The issue here is not merely a question of geographic precision. It has often been pointed out that no trade was ever specifically Atlantic. First of all, most commercial activity took place within the land masses of Europe. In volume, and possibly in economic import as well, short-distance carting of grains may have been more crucial than gold, silver, or even the slave trade, in determining the economic health of an area. 29 Only a minority of European trade routes were prolonged across the Atlantic, and all of them were part of longer sets which reached well beyond the ocean. In Isaac Roget's already quoted cargo to Guadeloupe, part of the textile came from Central Europe, and there was silk which may well have been Chinese, or at least from Lyons; potential return cargoes could include the usual colonial goods, sugar, coffee or tobacco, but also more complex routes involving intra-Caribbean trade, a shipment of slaves to the Southern United States, the loading in North American ports of wheat, timber or flaxseed to bring back to Europe, or of fur as part of a venture toward the Far East. Even in the biggest Atlantic seaports, coastal shipping and liaisons with the hinterland, as well as longrange contacts to the Far East, were as much part of the business equation as the Atlantic crossings. But what should be underlined here is not only that merchant activity was spatially complex ; a much more important point is that it was a single process, regardless of where it took place. 30 For what united British and French and American and other merchants was their common socio-economic practice, not some potential attachment to a peculiarly trans-Atlantic enterprise which, as such, was very far from their mind. Admittedly, the individuals through whom these networks came into being never formed some general, transactional, transnational community. Market segmentation brought division and competition, and these were forces at least as powerful as political-ideological convergences or polite sociability. Geographical choices were shaped by possible business relationships, which themselves were heavily determined by kin, religion, and national loyalties. In particular, the core activities of most trading groups would develop within imperial boundaries and alliances, if only because it was easiest and most cost-effective ; inter-imperial exchange would take place of course, and necessarily so, but making them one's focus was unwise, as Bordeaux traders eventually found out the hard way. No merchant could be unmindful of such constraints, and trade flows were directed accordingly, even though inter-imperial borders were crossed all the time, including in times of war. Imperial strictures were thus only one parameter in a much wider set, and it would be equally misleading to grant them the status of monocausal explanation as it would be to ignore them entirely. 
But the variegated nature of the resulting trade relations should not hide their underlying identity. Each particular merchant relationship, be it local, regional, worldwide, or transatlantic, was the expression of the basic merchant act of forging a link in a commercial chain which would eventually make possible the opening of a conduit between two separate, segmented markets and the transportation of one or more goods from one to the other. In other words, the sets of relationships each merchant created were geographically diverse, but identical in nature and function wherever they came into being. What, then, should be made of the "Atlantic" label? By focussing descriptively on a geographical area, rather than on any specified historical social development, the historiographical move toward "Atlantic" studies has unwittingly shifted the attention away from the causes of this development. Somehow the "Atlantic world" happened, along with empire- and/or community-building, but for no particular reason except maybe as the serendipitous subproduct of a host of impersonal economic and social forces. And precisely because it happened in the most neutral space one could imagine, far from any specific shore, it tended to lose its European, elite, merchant and imperial administrator overtones. This is a misleading presentation, at best. From a merchant's point of view at least, and maybe from a variety of other vantage points too, the XVIIIth-century world was unified by the powerful tool of trade, backed by state power. These forces in turn defined a worldwide sphere of European expansion and market intensification of varying intensity, but with socio-economic consequences common to all the geographical places in which they were manifested. The increasingly dominant economic role of merchants, the expansion of a market economy, and the political tensions these phenomena generated, were what the "Atlantic" (and the Pacific, and Central Europe, and the Western Hemisphere, and large swaths of Africa and Asia) was all about. What was at work was a general social process, much more than a technical tendency to cross boundaries and oceans.
30 The point that Atlantic history is a mere part of a wider history, and should not be separated from it, is repeatedly made in the various papers by Alison Games, Philippe J. Stearn, Peter Coclanis, gathered in the forum section "Beyond the Atlantic: English Globetrotters and Transoceanic Connections," William and Mary Quarterly 63 (4, October 2006). But focalizing on the whole world does not tell us why this world became unified, any more than Jorge Cañizares-Esguerra's proposal to focus on the Americas as an area of "entangled" histories tells us why these histories became entangled in the first place. ("Entangled Histories. Borderland Historiographies in New Clothes?," the concluding paper in the already quoted forum in American Historical Review 112 (3, June 2007): 787-799.) On this specific point, Bernard Bailyn's insistence on entirely rejecting Braudel's structural approach (Op. cit. 61) in favor of a purely narrative approach is intellectually coherent in its uncompromising empiricism; whether Atlantic or worldwide, unification happened because it happened. On the difficult issue of causality vs. description in Anglo-American historiography, cf. Pierre Gervais, "L'histoire sociale, ou heurs et malheurs de l'empirisme prudent," Chantiers d'histoire américaine, Jean Heffer and François Weil dir., Paris : 1994, 237-271.
Moreover, these evolutions, on the Atlantic Ocean and elsewhere, were brought about through the deliberate efforts of a very specific, and quite narrow human subgroup, with definite economic, social and political goals. When we shift the focus toward these efforts and their nature, Atlantic history becomes again what Fernand Braudel argued it was all along, part of the wider history of the development of a specific social organization, European merchant capitalism, a model with a definite expansionist streak, which in turn elicited a wide range of complex reactions, from unyielding resistance to enthusiastic adoption, from the individuals and groups which had to face its encroachments or carried them out, until the eventual collapse of this model in the XIXth century with the advent of industrial capitalism. This was hardly an "Atlantic" story, since it can be traced just as well in the plains of Eastern Germany, in the Rocky Mountains years before the first French coureur des bois ever appeared, in African kingdoms which did not even have access to the sea, or in remote villages of India for which Europe was still barely a distant rumor. This was not world history, either ; this socalled first globalization was widely uneven, and left a good deal of the world population untouched, including in many regions of Europe. Neither was it purely European, though, accusations of Eurocentrism notwithstanding, the power relationships it entailed were clearly centered in Europe. There were centers and peripheries, mother countries and colonies, imperial capitals and client States or plantation economies. The space of European expansion was not homogenous, a fact which "Atlantic" history has never denied, but is hard put to explain with consistency beyond some general statements on unspecified profit motives or inherited prejudices. A history of market expansion, because such an expansion is of necessity a direct attack on other forms of social organizations, would naturally include the stories of its promoters, its opponents, their multi-faceted battles, and their winners and losers. And last but not least, this history would not be one reserved for sea captains, pioneer migrants and cosmopolitan-minded traders; tavern-keepers, transporters with their oxen-carts, village retailers, and ordinary farmers apparently mired in their routine were also a part of it. 31 Giving up on the sea as a peculiarly significant focus is the only way to restore these latter groups to their proper status as key players in eighteenth-century economic growth, the only way to free ourselves at last from the gemeinschaft / gesellschaft dichotomy. Rather than opposing the modern, roving denizens of Atlantic History to both their hapless victims in Africa and the New World and to the traditional, not to say backward people who stayed put in the Old World, we can see all of these groups as fighting -not always bloodily -over the shape and form European market-driven expansion would take. Demonstrating that this same market expansion weakened rural society and pushed impoverished inhabitants to leave the European countryside, wrought havoc with traditional inter-tribal relationships in Africa, and brought about a massive reorientation of production toward exports in parts of Asia and in the Americas, would be the best way to ensure that all migrants, and non-migrants as well, would truly become part of the same story. 
Moreover, placing merchants and market forces at the center of our narrative enables us to starkly differentiate the eighteenth-century Atlantic world from our own. For we live in an age of producers, not of merchants ; the ideals and practices of merchant communities, on the Atlantic or elsewhere, were developed for a very different world. This world, structured as it was by long chains of interpersonal relationships, has long since been lost, a fact which we should keep in mind a little more when assessing the relevance these ideals and practices may still have for us. 32-33).For recent analyses of trade and its importance in the Atlantic, cf. "Trade in the Atlantic World," a special section introduced by John J. McCusker, Business History Review 79 (4, Winter 2005), and "The Atlantic Economy in an Era of Revolutions," a special section coordinated by Cathy Matson, William and Mary Quarterly 62 (3, July 2005). For narratives stressing imperial structures, while dealing in various ways with an "Atlantic" framework, cf. John H. Elliott, Empires of the Atlantic World: Britain and Spain in America, 1492-1830, New Haven : 2006; the forum on "Entangled Histories" in American Historical Review 112 (3, June 2007); and William A. pettigrew, "Free to Enslave : Politics and the Escalation of Britain's Transatlantic Slave Trade, 1688-1714," William and Mary Quarterly 64 (1, January 2007): 3-38. 4 See the idea of a planter / merchant connection in both Paul Cheney's analysis of the underpinnings of French failure, then success in the Caribbean, "A False Dawn for Enlightenment Cosmopolitanism? Franco-American Trade during the American War of Independence," William and Mary Quarterly 63 (3, July 2006): 463-488, especially 465; and William Pettigrew's article on the slave trade quoted above. On the potential role of colonial profit as an engine for growth, cf. Guillaume Daudin, Commerce et prospérité: La France au XVIIIe siècle, Paris: 2005. 5 Quote by Ian K. 18 Thomas M. Truxes, "Transnational Trade in the Wartime North Atlantic : the Voyage of the Snow Recovery," Business History Review 79 (4, Winter 2005) : 751-779; Silvia Marzagalli, "Establishing Transatlantic Trade Networks in Time of War: Bordeaux and the United States, 1793-1815," Business History Review 79 (4, Winter 2005) : 811-844; François Crouzet, "Itinéraires atlantiques d'un capitaine marchand américain pendant les guerres « napoléoniennes, »" in Guerre et économie dans l'espace atlantique, op. cit. 27-41. 21 21 On the dismal trade record between the two erstwhile allies in spite of the so-called "Treaty of Amity and Commerce" of 1778, cf. Paul Cheney, "A False Dawn for Enlightenment Cosmopolitanism? Franco-American Trade during the American War of Independence," William and Mary Quarterly 63 (3, July 2006): 463-488, and Allan Potofsky, "The Political Economy of the French-American Debt Debate: The Ideological Uses of Atlantic Commerce, 1787 to 1800," William and Mary Quarterly 63 (3, July 2006): 489-516. * 25 There is very little secondary material on civilian movements in time of war during the 1700s. Numerous examples of safe passages can be found in various accounts of the time : see e. g. G. R. 
de Beer, "The Relations between Fellows of the Royal Society and French Men of Science When France and Britain were at War," Notes and Records of the Royal Society of London 9 (2, May 1952): 244-299; also Garland Cannon, "Sir William Jones and Anglo-American relations during the American Revolution," Modern Philology 76 (1, August 1978): 29-45. On the other hand, civilians seem to have been routinely captured and jailed, cf; e. g. Betsy Knight, "Prisoner Exchange and Parole in the American revolution," William and Mary Quarterly 48 (2, April 1991): 201-222.26 Gavin Daly, "Napoleon and the 'City of Smugglers, 1810-1814,'" Historical Journal 50 (2; June 2007): 333-352; John J. McCusker, "Introduction," special section on "Trade in the Atlantic World," Business History Review 79 (4, Winter 2005) : 697-713.27 Amory Collection, vol. 46 ("Copies of letters sent 1781-1789"), Letter dated January 1, 1781. 28 Appleton Papers, Box 3 "General Correspondence, etc. 1815-1825", Folder 1, "1815, Jan-June," Nathan Appleton to Eben Appleton,March 11, 1815. Steele, "Bernard Bailyn's American Atlantic," History and Theory 46 (1, February 2007): 48. Aggregate studies, a specialty of the French historical school, are best exemplified by Paul Butel, La croissance commerciale bordelaise dans la seconde motié du XVIIIe siècle, Lille: 1973; less well-known, Charles Carrière's Négociants marseillais au XVIIIe siècle : contribution à l'étude des économies maritimes, Marseille: 1973 is actually more detailed, and bridges the gap with more recent studies. Famous collective regional studies of merchant groups include Bernard Bailyn, The New England Merchants in the Seventeenth Century, Cambridge: 1955; Thomas Doerflinger, A Vigorous Spirit of Enterprise: Merchants and Economic Development in Revolutionary Philadelphia, Chapel Hill: 1986; and Cathy Matson, Merchants and Empire. Trading in Colonial New York, Baltimore: 1998. Cf. Michel Morineau, Incroyables gazettes et fabuleux métaux. Les retours des trésors américains d'après les gazettes hollandaises, Paris: 1984. Even for export industries, the impact of Atlantic markets could be highly variable, cf. Claude Cailly, "Guerre et conjuncture textile dans le Perche," in Silvia Marzagalli and Bruno Marnot dir., Guerre et économie dans l'espace Atlantique du XVIe au XXe siècle, Bordeaux : 2006, 116-138. Alison Games cogently raises this issue, along with many others, in her already quoted "Atlantic History" paper. The research for this paper was funded in part by a DRI CNRS grant, as well as by UMR 8168. I want to thank Allan Potofsky for his
54,075
[ "3926" ]
[ "176", "110860" ]
01339246
en
[ "info" ]
2024/03/04 23:41:48
2013
https://hal.science/hal-01339246/file/Liris-6268.pdf
Sabina Surdu email: [email protected] Yann Gripay email: [email protected] Vasile-Marian Scuturici email: [email protected] Jean-Marc Petit email: [email protected] P-Bench: Benchmarking in Data-Centric Pervasive Application Development Keywords: pervasive environments, data-centric pervasive applications, heterogeneous data, continuous queries, benchmarking Developing complex data-centric applications, which manage intricate interactions between distributed and heterogeneous entities from pervasive environments, is a tedious task. In this paper we pursue the difficult objective of assessing the "easiness" of data-centric development in pervasive environments, which turns out to be much more challenging than simply measuring execution times in performance analyses and requires highly qualified programmers. We introduce P-Bench, a benchmark that comparatively evaluates the easiness of development using three types of systems: (1) the unmodified Microsoft StreamInsight Data Stream Management System, LINQ and C#, (2) the StreamInsight++ ad hoc framework, an enriched version of StreamInsight which meets pervasive application requirements, and (3) our SoCQ system, designed for managing data, streams and services in a unified manner. We define five tasks that we implement in the analyzed systems, based on core needs for pervasive application development. To evaluate the tasks' implementations, we introduce a set of metrics and provide the experimental results. Our study allows differentiating between the proposed types of systems based on their strengths and weaknesses when building pervasive applications. Introduction Nowadays we are witnessing the commencement of a new information era. The Internet as we know it today is rapidly advancing towards a worldwide Internet of Things [START_REF]The Internet of Things[END_REF], a planetary web that interconnects not only data and people, but also inanimate devices. Due to technological advances, we can activate the world of things surrounding us by enabling distributed devices to talk to one another, to signal their presence to users and to provide them with various data and functionalities. In [START_REF] Weiser | The Computer for the 21st Century[END_REF], Mark Weiser envisioned a world where computers vanish into the background, fitting smoothly into the environment and gracefully providing information and services to users, rather than forcing them to adapt to the intricate ambiance of the computing realm. Computing environments that arise in this context are generally referred to as pervasive environments, and applications developed for these environments are called pervasive applications. To achieve easy-to-use pervasive applications in a productive way, we must first make those applications easy to develop. Developing complex data-centric applications, which manage intricate interactions between distributed and heterogeneous entities from pervasive environments, is a tedious task, which often requires technical areas of expertise spanning multiple fields. Current implementations, which use DBMSs, Data Stream Management Systems (DSMSs) or just ad hoc programming (e.g., using Java, C#, .NET, JMX, UPnP, etc), cannot easily manage pervasive environments.
Recently emerged systems, like Aorta [START_REF] Xue | Action-Oriented Query Processing for Pervasive Computing[END_REF], Active XML [START_REF] Abiteboul | A Framework for Distributed XML Data Management[END_REF] or SoCQ [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF], aim at easing the development of data-centric applications for pervasive environments. We call such systems Pervasive Environment Management Systems (PEMSs). In this paper we pursue the difficult objective of assessing the "easiness" of data-centric development in pervasive environments, which turns out to be much more challenging than simply measuring execution times in performance analyses and requires highly qualified programmers. The main challenge lies in how to measure the easiness of pervasive application development and what metrics to choose for this purpose. We introduce Pervasive-Bench (P-Bench), a benchmark that comparatively evaluates the easiness of development using three types of systems: (1) the unmodified Microsoft StreamInsight DSMS [START_REF] Kazemitabar | Geospatial Stream Query Processing using Microsoft SQL Server StreamInsight[END_REF], LINQ and C#, (2) the StreamInsight++ ad hoc framework, an enriched version of StreamInsight, which meets pervasive application requirements, and (3) our SoCQ PEMS [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF], designed for data-centric pervasive application development. We define five tasks that we implement in the analyzed systems, based on core needs for pervasive application development. At this stage, we focus our study on applications built by a single developer. To evaluate the tasks' implementations and define the notion of easiness, we introduce a set of metrics. P-Bench allows differentiating between the proposed types of systems based on their strengths and weaknesses when building pervasive applications. P-Bench is driven by our experience in building pervasive applications with the SoCQ system [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF]. It also substantially expands our efforts to develop the ColisTrack testbed for SoCQ, which materialized in [START_REF] Gripay | ColisTrack: Testbed for a Pervasive Environment Management System (demo)[END_REF]. Nevertheless, the benchmark can evaluate systems other than SoCQ, being in no way limited by this PEMS. We present a motivating scenario, in which we monitor medical containers transporting fragile biological content between hospitals, laboratories and other places of interest. A pervasive application developed for this scenario handles slower-changing data, similar to those found in classical databases, and distributed entities, represented as data services, that provide access to potentially unending dynamic data streams and to functionalities. Under reasonable assumptions drawn from these types of scenarios, where we monitor data services that provide streams and functionalities, P-Bench has been devised to be a comprehensive benchmark. To the best of our knowledge, this is the first study in the database community that addresses the problem of evaluating easiness in data-centric pervasive application development.
Related benchmarks, like TPC variants [START_REF]Transaction Processing Performance Council[END_REF] or Linear Road [START_REF] Arasu | Linear Road: A Stream Data Management Benchmark[END_REF], focus on performance and scalability. While also examining these aspects, P-Bench primarily focuses on evaluating how easy it is to code an application, including deployment and evolution as well. This is clearly a daunting process, much more challenging than classical performance evaluation. We currently focus on pervasive applications that don't handle big data, in the order of petabytes or exabytes, e.g., home monitoring applications in intelligent buildings or container tracking applications. We believe the scope of such applications is broad enough to allow us to focus on them, independently of scalability issues. We strive to fulfill Jim Gray's criteria [START_REF] Gray | Benchmark Handbook: For Database and Transaction Processing Systems[END_REF] that must be met by a domain-specific benchmark, i.e., relevance, portability, and simplicity. Another innovative feature of P-Bench is the inclusion of services, as dynamic, distributed and queryable data sources, which dynamically produce data, accessed through stream subscriptions and method invocations. In P-Bench, services become first-class citizens. We are not aware of similar works in this field. Another contribution of this paper is the integration of a commercial DSMS with service discovery and querying capabilities in a framework that can manage a pervasive environment. This paper is organized as follows. Section 2 provides an insight into the requirements of data-centric pervasive application development, highlighting exigencies met by DSMSs, ad hoc programming and PEMSs. In Section 3 we describe the motivating scenario and we define the tasks and metrics from the benchmark. Section 4 presents the systems we assess in the benchmark, focusing on specific functionalities. In Section 5 we provide the results of our experimental study. Section 6 discusses the experimental results, highlighting the benefits and limitations of our implementations. Section 7 concludes this paper and presents future research directions. Overview of Data-Centric Pervasive Applications Pervasive applications handle data and dynamic data services4 distributed over networks of various sizes. Services provide various resources, like streams and functionalities, and possibly static data as well. The main difficulties are to seamlessly integrate heterogeneous, distributed entities in a unified model and to homogeneously express the continuous interactions between them via declarative queries. Such requirements are met, to different extents, by pervasive applications, depending on the implementation. DSMSs. DSMSs usually provide a homogeneous way to view and query relational data and streams, e.g., STREAM [START_REF] Arasu | STREAM: The Stanford Stream Data Manager[END_REF]. Some of them provide the ability to handle large-scale data, like large-scale RSS feeds in the case of RoSeS [START_REF] Creus Tomàs | RoSeS: A Continuous Query Processor for Large-Scale RSS Filtering and Aggregation[END_REF], or the ability to write SQL-like continuous queries. Nevertheless, developing pervasive applications using only DSMSs introduces significant limitations, highlighted by P-Bench. Ad hoc programming using DSMSs. 
Ad hoc solutions, which combine imperative languages, declarative query languages and network protocols, aim at handling complex interactions between distributed services. Although they lead to the desired result, they are not long-term solutions, as P-Bench will show. PEMSs. These systems aim at reconciling heterogeneous resources, like slower-changing data, streams and functionalities exposed by services in a unified representation in the query engine. PEMSs can be realized with many systems or approaches, such as Aorta [START_REF] Xue | Action-Oriented Query Processing for Pervasive Computing[END_REF], Active XML [START_REF] Abiteboul | A Framework for Distributed XML Data Management[END_REF], SoCQ [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF] or HYPATIA [START_REF] Cuevas-Vicenttín | Evaluating Hybrid Queries through Service Coordination in HYPATIA (demo)[END_REF], to mention a few. P-Bench The P-Bench benchmark aims at providing an evaluation of different approaches to building data-centric pervasive applications. The common objective of benchmarks is to provide some way to evaluate which system yields better performance indicators when implementing an application, so that a "better" system can be chosen for the implementation [START_REF] Pugh | Technical Perspective: A Methodology for Evaluating Computer System Performance[END_REF]. Although we also consider performance in P-Bench, our focus is set on evaluating the easiness of data-centric pervasive application development with different types of systems: a DSMS, ad hoc programming and a PEMS. To highlight the advantages of declarative programming, we ask that the evaluated systems implement tasks based on declarative queries. Some implementations will also require imperative code, others will not. We argue that one dimension of investigation when assessing the easiness of building pervasive applications is imperative versus declarative programming. A pervasive application seen as a declarative query over a set of services from the environment provides a logical view over those services, abstracting physical access issues. It also enables optimization techniques and doesn't require code compilation. When imperative code is included, restarting the system to change a query, i.e., recompiling the code, is considered as an impediment for the application developer. Scenario and Framework In our scenario, fragile biological matter is transported in sensor-enhanced medical containers, by different transporters. During the containers' transportation, temperature, acceleration, GPS location and time must be observed. Corresponding sensors are embedded in the container: a temperature sensor to verify temperature variations, an accelerometer to detect high acceleration or deceleration, a timer to control the deadline beyond which the transportation is unnecessary and a GPS to know the container position at any time. A supervisor determines thresholds for the different quality criteria a container must meet, e.g., some organic cells cannot be exposed to more than 37 C. When a threshold is exceeded, the container sends a text message (e.g., via SMS) to its supervisor. In our scenario, only part of these data are static and can be stored in classical databases: the medical containers' descriptions, the different thresholds. 
All the other data are dynamically produced by distributed services and accessed through method invocations (e.g., get current location) or stream subscriptions (e.g., temperature notifications). Moreover, services can provide additional functionalities that change the environment, like sending some messages (e.g., by SMS) when an alert is triggered; they can also provide access to data stored in relations, if necessary. Therefore, our scenario is representative for pervasive environments, where services provide static data, dynamic data streams and methods that can be invoked [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF]. It is also compatible with existing scenarios for DSMSs (like the one from Linear Road), but services are promoted as first-class citizens. A data service that models a device in the environment has an URL and accepts a set of operations via HTTP. P-Bench contains car, medical container and alert services, which expose streams of car locations, of medical container temperature notifications, the ability to send alert messages when exceptional situations occur, etc. We developed a framework to implement this scenario (Figure 1). Since we use a REST/HTTP-based protocol to communicate with services, they can be integrated independently of the operating system and programming language. Moreover, assessed systems can be equipped with modules for dynamic service discovery. The World Simulator Engine is a C# application that runs on a Windows 2008 Server machine and simulates (i.e., generates) services in the environment. The simulator accepts different options, like the number of cars, the places they visit, the generation of medical containers, etc. Services' data rate is also parameterizable (e.g., how often a car emits its location). The engine uses the Google Maps Directions API Web Service to compute real routes of cars. The Control & Visualization Interface allows visualizing services in the World, and writing and sending declarative queries to a query engine. The Visualization Interface runs on an Apache Web server; the server side is developed in PHP. On the client side, the web user interface is based on the Google Maps API to visualize the simulated world on a map, and uses Ajax XML HTTP Request to load the simulated state from the server side. Several remote clients can connect simultaneously to the same simulated environment, by using their Web browser. This user interface is not mandatory for our benchmark, but it does provide a nice way of visualizing services and the data they supply. Declarative queries can be written using an interface implemented as an ASP.NET Web application that runs on the Internet Information Services server. We thoroughly describe this scenario and framework in our previous paper on ColisTrack [START_REF] Gripay | ColisTrack: Testbed for a Pervasive Environment Management System (demo)[END_REF]. In our experiments we eliminate the overhead introduced by our web interface. In the StreamInsight and StreamInsight++ implementations we use an in-process server and send queries from the C# application that interacts with the server. In the case of SoCQ, we write and send queries from SoCQ's interface. Benchmark Tasks We define five benchmark tasks to evaluate the implementation of our scenario with the assessed systems. 
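Before detailing the tasks, the following C# sketch makes the service model described above concrete by showing the kind of REST/HTTP interaction a client could have with one of the simulated car services. The endpoint layout, the operation name and the polling period are our own illustrative assumptions, since the exact UbiWare protocol is not spelled out here; in the benchmark itself this role is played by the Service Manager rather than by hand-written client code.

// Illustrative sketch only: endpoint layout and operation names are assumptions,
// not the documented UbiWare/REST protocol of the benchmark framework.
using System;
using System.Net;
using System.Threading;

class CarServiceClient
{
    private readonly string serviceUrl; // e.g. "http://simulator-host:8080/services/car42" (hypothetical)

    public CarServiceClient(string serviceUrl)
    {
        this.serviceUrl = serviceUrl;
    }

    // Method-style access, as needed for Task 3 (invoke an operation, get one value back).
    public string GetCurrentLocation()
    {
        using (var web = new WebClient())
        {
            return web.DownloadString(serviceUrl + "/getCurrentLocation"); // hypothetical operation name
        }
    }

    // Naive polling loop standing in for a stream subscription, as needed for Tasks 1 and 2.
    public void PollLocations(Action<string> onLocation, int periodMs)
    {
        while (true)
        {
            onLocation(GetCurrentLocation());
            Thread.Sleep(periodMs); // e.g. 6000 ms for a 10 events/minute source
        }
    }
}

A real subscription is push-based rather than polled, but the sketch is enough to show why every assessed system needs some middleware layer between HTTP-level service access and its query engine.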
The main challenge in pervasive applications is to homogeneously express interactions between resources provided by dynamically discovered services, e.g., data streams, methods and static data. Therefore, we wrap tasks' definitions around functionalities dictated by these necessities. Each task is built around a main functionality that has to be implemented by a system in order to fulfill the task's objective. The parameters of the tasks are services specific to our scenario. These parameters can easily be changed, so that a task can be reformulated on any pervasive environment-based scenario, whilst maintaining its specified objective. The difficulty of the tasks is incremental. We start with a task that queries a single data stream from a given service, and we end with a task that combines heterogeneous resources from dynamically discovered services of different types. Since P-Bench is concerned with assessing development in pervasive environments, our tasks are defined in the scope of pervasive applications. Other types of applications like data analysis applications are not in the focus of our current study. Task 0: Startup. The objective of this task is to prepare the assessed systems for the implementation of the scenario. It includes the system-specific description of the scenario, i.e., data schema, additional application code, etc. Task 1: Localized, single stream supervision. The objective of this task is to monitor a data stream provided by a service that had been localized in advance, i.e., dynamic service discovery is not required. Task 1 tracks a single moving car and uses a car service URL. The user is provided, at any given time instant, with the last reported location of the monitored car. Task 2: Multiple streams supervision. The objective of this task is to monitor multiple data streams provided by dynamically discovered services. Task 2 tracks all the moving cars. The user is provided, at any given time instant, with the last reported location of each car. Task 3: Method invocation. The objective of this task is to invoke a method provided by a dynamically discovered service. Task 3 provides the user with the current location of a medical container, given its identifier. Task 4: Composite data supervision. The objective of this task is to combine static data, and method invocations and data streams provided by dynamically discovered services, in a monitoring activity. Task 4 monitors the temperatures of medical containers and sends alert messages when the supervised medical containers exceed established temperature thresholds. Benchmark Metrics Similarly to the approach from [START_REF] Fenton | Software Metrics: A Rigorous and Practical Approach[END_REF], we identify a set of pervasive application quality assurance goals: easy development, easy deployment and easy evolution. Since easiness alone cannot be a sole criterion for choosing a system, we also introduce the performance goal to assess the efficiency of a system under realistic workloads. Based on these objectives we define a set of metrics that we think fits best for evaluating the process of building pervasive applications. We define the life cycle of a task as the set of four stages that must be covered for its accomplishment. 
Each stage is assessed through related metrics and corresponds to one of the quality assurance goals: development -metrics from this stage assess the easiness of task development; -deployment -metrics from this stage evaluate the easiness of task deployment; -performance -in this stage we assess system performance, under realistic workloads; -evolution -metrics from this stage estimate the impact of the task evolution, i.e., how easy it is to change the current implementation of the task, so that it adapts to new requirements. The task's objective remains unmodified. By defining the life cycle of a task in this manner, we adhere to the goal of agility [START_REF] Rys | Scalable SQL[END_REF] in P-Bench. Agility spans three life cycle stages: development, deployment and evolution. Since we are not concerned with big data, we don't focus on scale agility. We now define a set of metrics for each of the four stages: Development. We separate task development on two levels: imperative code (written in an imperative programming language, e.g., C#) and declarative code (written in a declarative query language, e.g., Transact-SQL). We measure the easiness and speed in the development of a task through the following metrics: -LinesOfImperativeCode outputs the number of lines of imperative code required to implement the task (e.g., code written in Java, C#). The tool used to assess this metric is SLOCCount [START_REF] Wheeler | Counting Source Lines of Code (SLOC)[END_REF]. We evaluate the middleware used to communicate with services in the environment, but we exclude predefined class libraries from our assessment (e.g., classes from the .NET Base Class Library); -NoOfDeclarativeElements provides the number of declarative elements in the implementation of the task. We normalize a query written in a declarative language in the following manner. We consider a set of language-specific declarative keywords describing query clauses, for each of the evaluated systems. The number of declarative elements in a query is given by the number of keywords it contains (e.g., a SELECT FROM WHERE query in Transact-SQL contains three declarative elements); -NoOfQueries outputs the number of declarative queries required for the implementation of the task; -NoOfLanguages gives the number of imperative and declarative languages that are used in the implementation of the task; -DevelopmentTime roughly estimates the number of hours spent to implement the task, including developer training time and task testing, but excluding the time required to implement the query engine or the middleware used by the systems to interact with services. Deployment. The deployment stage includes metrics: -NoOfServers gives the number of servers required for the task (e.g., the StreamInsight Server); -NoOfSystemDependencies outputs the number of system-specific dependencies that must be installed for the task; -IsOSIndependent indicates whether the task can be deployed on any operating system (e.g., Windows, Linux, etc). Performance. Once we implemented and deployed the task, we can measure the performance of this implementation. We need now to rigorously define accuracy and latency requirements. The accuracy requirement states that queries must output correct results. Our work for an accuracy checking framework in a pervasive environment setting is ongoing. 
Using this framework we will compute the correct results for queries in a given task, we will calculate the results obtained when implementing the task with an assessed system, and finally, we will characterize the accuracy of the latter results using Precision and Recall metrics. We will consider both the results of queries and the effects that query executions have on the environment. We place an average latency requirement of 5 seconds on continuous queries, i.e., on average, up to 5 seconds can pass between the moment an item (i.e., a tuple or an event) is fed into a query and the moment the query outputs a result based on this item. We set a query execution time of 60 seconds. When assessing performance for systems that implement dynamic service discovery, a query starts only after all the required services have been discovered, but during query execution both StreamInsight++ and SoCQ continue to process messages from services that appear and disappear on and from the network. To evaluate performance, we consider the average latency and accuracy requirements described above and define a set of metrics for continuous queries. In the current implementation, the metrics are evaluated by taking into account the average latency requirement, but our accuracy checking framework will allow us to evaluate them with respect to the accuracy constraints as well. The performance stage metrics are: -MaxNoDataSources gives the maximum number of data sources (i.e., services) that can feed one continuous query, whilst meeting accuracy and latency requirements. We assign a constant data rate of 10 events/minute for each data source; -MaxDataRate outputs the maximum data rate for the data sources that feed a continuous query, under specified accuracy and latency requirements. All the sources are supposed to have the same constant data rate. This metric is expressed as number of events per second. We are not interested in extremely high data rates for incoming data, so we will evaluate the task up to a data rate of 10.000 events/second. Unless specified otherwise in the task, this metric is evaluated for 10 data sources; -NoOfEvents is the number of processed events during query execution when assessing the MaxDataRate metric. This metric describes the limitations of our implementations and hardware settings, more than system performance; -AvgLatency outputs the average latency for a continuous query, given a constant data rate of 10 events/second for the data sources that feed the query. AvgLatency is expressed in milliseconds and is computed across all the data sources (10 by default) that feed a continuous query, under specified accuracy requirements. Evolution. The evolution stage encompasses metrics that quantify the impact that new requirements or changes have on the whole task. The evolution of a task does not suffer radical changes (i.e., we don't modify a task that subscribes to a stream, to invoke a method in its updated version). A task's parameters, e.g., the services, may change, but the specified objective for a task is maintained. This stage contains the following metrics: -ChangedImperativeCode outputs the number of lines of imperative code that need to be changed (added, modified or removed), when the task evolves, in order to accomplish newly specified requirements. 
Lines of imperative code are counted like in the case of the LinesOfImperativeCode metric; -ChangedDeclarativeElements provides the number of declarative elements that need to be changed in any way (added, modified or removed), in order to update the task. Counting declarative elements is performed like in the case of the NoOfDeclarativeElements metric. Metrics in this stage provide a description of the reusability dimension when developing pervasive applications. We are assessing the energy and effort devoted to the process of task evolution. Assessed Systems The DSMS we use in P-Bench is StreamInsight. To accomplish the tasks in an ad hoc manner, we enrich StreamInsight with dynamic service discovery features, obtaining a new framework: StreamInsight++. As a PEMS, we use SoCQ. To communicate with services in the environment, we use UbiWare, the middleware we developed in [START_REF] Scuturici | UbiWare: Web-Based Dynamic Data & Service Management Platform for AmI[END_REF] to facilitate application development for ambient intelligence. StreamInsight was chosen based on the high familiarity with the Microsoft .NET-based technologies. We chose SoCQ because of the expertise our team has with this system and the ColisTrack testbed. We don't aim at conducting a comprehensive study of DSMSs or PEMSs, but P-Bench can as well be implemented in other DSMSs like [START_REF] Streambase | [END_REF], [START_REF] Creus Tomàs | RoSeS: A Continuous Query Processor for Large-Scale RSS Filtering and Aggregation[END_REF], [START_REF] Arasu | STREAM: The Stanford Stream Data Manager[END_REF], or PEMSs like [START_REF] Abiteboul | A Framework for Distributed XML Data Management[END_REF] or [START_REF] Cuevas-Vicenttín | Evaluating Hybrid Queries through Service Coordination in HYPATIA (demo)[END_REF]. Microsoft StreamInsight Microsoft StreamInsight [START_REF] Kazemitabar | Geospatial Stream Query Processing using Microsoft SQL Server StreamInsight[END_REF] is a platform for the development and deployment of Complex Event Processing (CEP) applications. It enables data stream processing using the .NET Framework. For pervasive application development, additional work has to be done in crucial areas, like service discovery and querying. To execute queries on the StreamInsight Server, one requires a C# application to communicate with the server. We enrich this application with a Service Manager module, which handles the interaction with the services in the environment and which is based on UbiWare. As described in the technical documentation [19], StreamInsight processes event streams coming in from multiple sources, by executing continuous queries on them. Continuous queries are written in Language-Integrated Query (LINQ) [START_REF] Meijer | The World According to LINQ[END_REF]. StreamInsight's run-time component is the StreamInsight server, with its core engine and the adapter framework. Input adapters read data from event sources and deliver them to continuous queries on the server, in a push manner. Queries output results which flow, using pull mechanisms, through output adapters, in order to reach data consumers. Figure 2 shows the architecture of an application implemented with StreamInsight (similar to [19]). Events flow from network sources in the pervasive environment through input adapters into the StreamInsight engine. Here they are processed by continuous queries, called standing queries. For simplicity, we depict data streaming in from one car service and feeding one continuous query on the server. 
The results are streamed through an output adapter to a consumer application. Static reference data (e.g., in-memory stored collections or SQL Server data) can be included in the LINQ standing queries specification. StreamInsight++ StreamInsight contains a closed source temporal query engine that cannot be changed. Instead, we enrich the Service Manager with dynamic service discovery capabilities, using ad hoc programming, thus obtaining the StreamInsight++. The enriched Service Manager allows the user of StreamInsight++ to write queries against dynamically discovered services. It can be thought of as the middleware between the system and the services in the environment, or the service wrapper that allows both service discovery and querying. The service access mechanism uses the REST/HTTP-based protocol mentioned in Section 3. The Service Manager delivers data from discovered services to input adapters. SoCQ We designed and implemented the Service-oriented Continuous Query (SoCQ) engine [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF], a PEMS that enables the development of complex applications for pervasive environments using declarative service-oriented continuous queries. These SQL-like queries combine conventional and non-conventional data, namely slower-changing data, dynamic streams and functionalities, provided by services. Within our data-oriented approach, we built a complete data model, namely the SoCQ data model, which enables services to be modeled in a unified manner. It also provides a declarative query language to homogeneously handle data, streams and functionalities: Serena SQL. In a similar way to databases, we defined the notion of relational pervasive environment, composed of several eXtended Dynamic Relations, or XD-Relations. The schema of an XD-Relation is composed of real and/or virtual attributes [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF]. Virtual attributes represent parameters of various methods, streams, etc, and may receive values through query operators. The schema of an XD-Relation is further associated with binding patterns, representing method invocations or stream subscriptions. SoCQ includes service discovery capabilities in the query engine. The service discovery operator builds XD-Relations that represent sets of available services providing required data. For example, an XD-Relation car could be the result of such an operator, and be continuously updated when new car services become available and when previously discovered services become unavailable. Benchmark Experiments In this section we present the comparative evaluation of the chosen systems. For each task, we will describe its life cycle on StreamInsight, StreamInsight++ and SoCQ. We start with the development stage, continue with deployment and performance and end with task evolution. We rigorously assess each task through the set of metrics we previously defined. At the end of each subsection dedicated to a task we provide a table with metrics results and a short discussion. The experiments were conducted on a Windows Server 2008 machine, with a 2.67GHz Intel Xeon X5650 CPU (4 processors) and 16 GB RAM. Assessing Performance We present our system-specific evaluation approach for the performance stage: StreamInsight and StreamInsight++. In this case we use an in-process server. We connect to one or more service streams and deliver incoming data to an input adapter. 
We assess the time right before the input adapter enqueues an event on the server and the time right after the output adapter dequeues the event from the server. The time interval delimited by the enqueue and dequeue moments represents the event's latency. Average latency is computed incrementally based on individual event latencies. We also enqueue CTI events on the server, i.e., special events specific to StreamInsight, used to advance application time, but we compute average latency by taking into account only events received from environment services. By evaluating latency in this manner, we assess the performance of the StreamInsight engine together with the adapter framework and middleware that we implemented, and not the pure performance of the StreamInsight engine. SoCQ. The average latency is computed by comparing events from streams of data services, to events from the query output stream. An event from a service is uniquely identified by the service URL and the service-generated event timestamp. A unique corresponding event is then expected from the query output stream. A latency measurement tool has been developed to support the latency computation, based on UbiWare: it launches the task query in the query engine, connects to the query result output stream, connects to a number of services, and then matches expected events from services and query output events from the query engine. The difference between the arrival time of corresponding events at the measurement tool provides a latency for each expected event. Task 0: Startup The objective of this task is to prepare the evaluated systems for the implementation of Tasks 1 to 4. The latter can be implemented independently from one another, but they all require the prior accomplishment of the Startup task. We describe the schema of our scenario in system-specific terms. We also present any additional modules that need to be implemented. Task 0 uses the UbiWare middleware [START_REF] Scuturici | UbiWare: Web-Based Dynamic Data & Service Management Platform for AmI[END_REF] previously mentioned, to interact with services in the environment. UbiWare uses a REST/HTTP-based protocol for this purpose. The developer that implemented the Startup task in StreamInsight and StreamInsight++ has a confident level of C#, .NET and LINQ, but has never developed applications for StreamInsight before. We don't embark on an incremental development task, evolving from StreamInsight to StreamInsight++. We consider them to be independent, separate systems, hence any common features are measured in the corresponding metrics, for each system. The same developer also accomplished the Startup task in SoCQ, without having any prior knowledge about the system and the SQL-like language it provides. Development. StreamInsight and StreamInsight++. We implement C# solutions that handle the interaction with the StreamInsight server. They contain entities specific to StreamInsight (input and output configuration classes and adapters, etc) and entities that model data provided by services in P-Bench (car location, temperature notification classes, etc). To interact with environment services, these implementations also use and enrich the Service Manager specific to StreamInsight or StreamInsight++. StreamInsight. Task 1 is the only task that can be fully implemented with StreamInsight, as it doesn't require service discovery (the URL of the car service that represents the car to be monitored is provided). 
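To illustrate the latency bookkeeping described earlier in this subsection, the following minimal C# sketch records enqueue and dequeue times and maintains the running average incrementally; the class and member names are illustrative and are not part of the StreamInsight API.

using System;
using System.Collections.Generic;

// Minimal latency bookkeeping: a timestamp is taken right before an event is enqueued
// and right after it is dequeued, and the average is updated incrementally.
sealed class LatencyTracker
{
    readonly Dictionary<string, DateTime> enqueueTimes = new();
    double averageMs;
    long count;

    public void OnEnqueue(string eventId) => enqueueTimes[eventId] = DateTime.UtcNow;

    public void OnDequeue(string eventId)
    {
        // Events never registered here (e.g. CTI events) are simply ignored.
        if (!enqueueTimes.Remove(eventId, out var enqueuedAt)) return;
        double latencyMs = (DateTime.UtcNow - enqueuedAt).TotalMilliseconds;
        count++;
        averageMs += (latencyMs - averageMs) / count;   // incremental mean
    }

    public double AverageMs => averageMs;
}

The Task 0 implementations for each system are described next.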
Therefore, we implement a C# solution, which handles the interaction with the StreamInsight server, to prepare the system for Task 1. The solution contains the following entities: a car location class, that models location data provided by a car service (latitude, longitude, timestamp and car id); -a car data source module, that is part of the Service Manager, and delivers incoming car locations (from the given service URL) to an input adapter; -input and output configuration classes, to specify particulars of data sources and consumers; -input and output adapter factory classes, responsible for creating input and output adapters; -a typed input adapter, which receives a specific car location event from the car data source in a push manner and enqueues this event, using push mechanisms, into the StreamInsight server; -an output adapter, which dequeues results from the query on the StreamInsight server; -an additional benchmark tools class, that manages application state, computes latency, etc. StreamInsight++. StreamInsight++ can implement all the tasks. The C# solution we built to enrich StreamInsight and to communicate with the StreamInsight server is much more complex than the one used with raw StreamInsight, but there are some common features. The solution contains the following entities: apart from the car location class, we developed C# classes that model medical containers and their temperature notifications, i.e., medical container and temperature notification; -we enriched the car data source module to encompass dynamic discovery, so as to deliver car locations from dynamically discovered cars; -we added medical containers data source modules specific to Tasks 3 and 4, respectively, i.e., medical container data source and temperature notification data source; -additional input adapter factory classes were developed, for the newly added input adapters (for medical containers and medical containers temperature notifications); -classes that contain the input configuration, output configuration, output adapter factory and output adapter were maintained (we chose to implement an untyped output adapter); -extra input adapters were developed, to handle the diversity of input events from the pervasive environment, i.e., medical containers dynamic discovery messages and medical containers temperature notifications; -the benchmark tools class was extended to encompass methods specific to Tasks 3 and 4. SoCQ. All the tasks can be implemented with SoCQ. SoCQ already contains the middleware required for the services in P-Bench, but to provide a fair comparison with the other systems, we will assess the code in SoCQ's middleware as well. We provide a SoCQ schema of our scenario, written in Serena SQL. Listing 1 depicts the set of XD-Relations, which abstract the distributed entities in the pervasive environment. This is the only price the application developer needs to pay to easily develop data-centric pervasive applications with SoCQ: gain an understanding of SoCQ and Serena SQL and model the pervasive environment as a set of XD-Relations, yielding a relational pervasive environment. Car, MedicalContainer and SupervisorMobile are finite XD-Relations, extended with virtual attributes and binding patterns in order to provide access to stream subscriptions and method invocations. Supervise is a simple dynamic relation, with no binding patterns, yet all four relations are specified in a consistent, unified model, in the Serena SQL. 
On top of the relational pervasive environment, the developer can subsequently write applications as continuous queries, which reference data services from the distributed environment and produce data. Deployment. StreamInsight and StreamInsight++. To attain this task with StreamInsight and StreamInsight++, one requires .NET, a C# compiler, SQL Server Compact Edition, a Windows operating system and the StreamInsight server. This minimum setting is necessary for Tasks 1 to 4, with some additional task-dependent prerequisites. SoCQ. The deployment machine must have the SoCQ Server and a Java Virtual Machine. Any operating system can support this task. Evolution. The Startup task prepares the system to handle a pervasive environment, based on entities from the scenario we proposed. If we change the StreamInsight and StreamInsight++. We must redevelop the C# solutions if the service wrappers change. If the service access mechanisms don't change, the middleware can remain unmodified, i.e., ChangedImperativeCode won't consider the ∼3700 lines of code that compose the middleware implemented for StreamInsight and StreamInsight++. Other classes might be kept if some data provided by services from the initial environment are preserved. SoCQ. In SoCQ, we need to build a different schema, in Serena SQL, for the new pervasive environment. If the service wrappers change, then the imperative code for the middleware must be reimplemented. If the middleware remains unmodified, no line of imperative code is impacted in the evolution stage, i.e., ChangedImperativeCode will be 0. Task discussion. The time and effort devoted to self-training and implementing Task 0 are considerably higher in the StreamInsight-based implementations than in SoCQ (see Table 1). The former can only be deployed on Windows machines. SoCQ needs a smaller number of system dependencies and can be deployed on any operating system. If we switch to a different scenario, Task 0 needs to be reimplemented, which translates to a significant amount of changed lines of imperative code in all the systems, if the service access mechanisms change. If the middleware doesn't change, the exact amount of changed code depends on the preservation of some services from the initial environment; in StreamInsight and StreamInsight++ we need to modify imperative code, whereas SoCQ requires changing only declarative elements. Table 1 shows figures for the worst-case situation, where all services and their access mechanisms are changed. Task 1: Localized, Single Stream Supervision Task 1 tracks one moving car. Its input is a car service URL and a stream of locations from the monitored car. The output of the task is a stream that contains the LocationTimestamp, Latitude, Longitude and CarId of the car, i.e., the user is provided with the car's stream of reported locations. The task's objective is to monitor a data stream provided by a car service that had been localized in advance, i.e., dynamic service discovery is not required. Development. StreamInsight and StreamInsight++. We require one LINQ query in order to track a given car (Listing 2a). We need additional C# code to create a query template that represents the business logic executed on the server, instantiate adapters, bind a data source and a data consumer to the query, register the query on the server and start and stop the continuous query. Dynamic discovery is not required for this task. SoCQ. In SoCQ, the developer writes a car tracking query in Serena SQL (Listing 2b). 
It subscribes to a stream of location data from the car XD-Relation, based on a car service URL. No imperative code is needed. Performance. StreamInsight, StreamInsight++ and SoCQ. For this task we assess metrics MaxDataRate, NoOfEvents and AvgLatency, since we track one car. Evolution. StreamInsight, StreamInsight++ and SoCQ. The user may want to track a different car, which means changing the car service URL. In our StreamInsight-based approaches, this requires changing and recompiling the imperative code, to provide the new URL. The LINQ query remains unchanged. In SoCQ, we supply a different car service URL in the declarative query code. Task discussion. The effort required to develop and update the task is more intense in the StreamInsight-based implementations, which can be deployed only on Windows machines and require 2 languages, LINQ and C#, and more dependencies (see Table 2). The SoCQ implementation uses 1 language (Serena SQL), needs 1 dependency and no imperative code, and can be deployed on any operating system, but it yields a higher average latency. All the systems achieved a MaxDataRate of 10.000 events/second under specified latency requirements. Task 2: Multiple Streams Supervision Task 2 tracks all the moving cars. The input of this task is represented by notification messages sent by services in the environment when they appear or disappear and by streams of interest emitted by services monitored in the task, i.e., car location streams from monitored cars. The output of this task is a stream that provides the LocationTimestamp, Latitude, Longitude and CarId of the monitored cars. The user is hence provided with the reported locations of each car. Task 2's objective is to monitor multiple data streams provided by dynamically discovered car services. Development. StreamInsight++. This implementation is similar to the one from Task 1, but the car data source receives events from all the streams the application subscribed to. It delivers them in a push manner to the input adapter. Hence, the LINQ query for this task is identical to the one described in Listing 2a. SoCQ. The SoCQ implementation is similar to the one described for Task 1. The only requirement is to write the car tracking query. The query for this task is identical with the one depicted in Listing 2b, except it doesn't encompass a filter condition, since we are tracking all the cars. Deployment. StreamInsight++ and SoCQ. The prerequisites for deployment are identical with those mentioned in Task 0. Performance. StreamInsight++ and SoCQ. We evaluate all the metrics from the performance stage. We compute MaxDataRate, NoOfEvents and AvgLatency across events coming in from all data sources, for a constant number of 10 data sources. Evolution. StreamInsight++ and SoCQ. A new requirement for this task can be to track a subgroup of moving cars. In StreamInsight++ we need to change the imperative code, to check the URL of the data source discovered by the system. In SoCQ, we need to add a filter predicate in the continuous query. Task discussion. SoCQ provides a convenient approach to development, deployment and evolution, without imperative code, obtaining better results for metrics NoOfLanguages, NoOfSystemDependencies and IsOSIndependent (Table 3). StreamInsight++ achieves superior performance when assessing MaxNo-DataSources and MaxDataRate. 
We believe this implementation could do better, but in our hardware setting we noticed a limit of 18.000 events that are received by the StreamInsight engine each second; hence this is not a limitation imposed by StreamInsight. For our scenario, the performance values obtained by SoCQ are very good as well. We have multiple threads in our StreamInsight++ application to subscribe to multiple streams, so the thread corresponding to the StreamInsight++ output adapter is competing with existing in-process threads. Therefore, the average latency we observe from the adapters is higher than the StreamInsight's engine pure latency and than the average latency measured for SoCQ. This task cannot be implemented in StreamInsight, due to lack of dynamic service discovery. SoCQ. We write a simple Serena one-shot query that uses the Medical-Container XD-Relation, defined in the SoCQ schema (Listing 3). We manually submit this query using SoCQ's interface. Deployment. StreamInsight++. Apart from the prerequisites described in Task 0, to implement Task 3 we also need an installed instance of SQL Server. SELECT latitude, longitude, locDate FROM MedicalContainer WHERE mcID="12345" USING getLocation; Listing 3: Locating medical container query in SoCQ SoCQ. Task 0 prerequisites hold for this task implemented in SoCQ. Performance. StreamInsight++ and SoCQ. We don't assess performance metrics for this task, as it encompasses a one-shot query. Assessing service discovery performance is out of the scope of this evaluation. Evolution. StreamInsight++ and SoCQ. The user may want to locate a different medical container. In StreamInsight++ we need to supply a different container identifier in the imperative application. In SoCQ we supply a different medical container identifier in the Serena query. Task discussion. For this task as well development time and effort are minimal in the SoCQ implementation, which doesn't need imperative code (see Table 4). In StreamInsight++ we also need an additional instance of SQL Server. If SoCQ requires only Serena SQL, StreamInsight++ requires C#, LINQ and Transact-SQL (to interact with SQL Server). 6 Metrics NoOfSystemDependencies and IsOSIndependent yield better values for SoCQ. This task cannot be implemented in StreamInsight, because it requires dynamic service discovery. StreamInsight++. This implementation integrates the StreamInsight Server, as well as SQL Server, LINQ and C#. We need SQL Server to hold supervision related data (which supervisors monitor which medical containers) and dynamically discovered alert services. For the incoming medical containers temperature notifications we receive, if the temperature of a medical container is greater than its temperature threshold, we search the corresponding supervisor and the alert service he or she uses in the SQL Server database. We issue a call, from imperative code, to the sendSMS method from the alert service. The implementation comprises an entire application. The LINQ continuous query selects temperature notifications from medical containers that exceed temperature thresholds and calls the sendSMS method of the alert service of the corresponding supervisor. One insert and one delete Transact-SQL queries are used to update the SQL Server table holding dynamically discovered alert services. A cache is used to speed up the retrieval of temperature thresholds and container supervisors. SoCQ. 
The development of this task in SoCQ contains one Serena query (Listing 4) that combines static data (temperature thresholds), method invocations (sendSMS method from SupervisorMobile) and data streams (temper-atureNotification streams from supervised medical containers). Listing 4: Temperature supervision query in SoCQ Deployment. StreamInsight++. This task requires the prerequisites from Task 0, as well as an instance of SQL Server. SoCQ. Only the prerequisites from Task 0 are required. Performance. StreamInsight++ and SoCQ. We evaluate all the metrics from the performance stage. Evolution. StreamInsight++ and SoCQ. The user may ask to send notifications for a subgroup of the supervised medical containers. In both approaches, filters need to be added, to the imperative application, for StreamInsight++ or the Serena SQL query, for SoCQ. Task discussion. StreamInsight++ outperforms SoCQ on the AvgLatency and MaxNoDataSources performance metrics (Table 5), which is not surprising, since the former is an ad hoc framework based on a commercial product, whereas SoCQ is a research prototype. As the service data rate increases, SoCQ outperforms our StreamInsight++ implementation when assessing MaxDataRate, due to the high number of alert service calls per second the query has to perform, for which SoCQ has a built-in asynchronous call mechanism. Development, deployment and evolution are easier with SoCQ, which requires no imperative code, decreased development time and a smaller number of servers and dependencies. Unlike SoCQ, StreamInsight++ does not offer an operating system independent solution. This task cannot be implemented with StreamInsight because it needs dynamic service discovery capabilities. We described the SoCQ queries from Listings 1, 2b, 3 and 4, with some modifications, in the ColisTrack paper as well [START_REF] Gripay | ColisTrack: Testbed for a Pervasive Environment Management System (demo)[END_REF]. Discussion The StreamInsight approach revealed the shortcomings encountered when developing pervasive applications with a DSMS. Such systems don't consider services as first-class citizens, nor provide dynamic service discovery. External functions can be developed to emulate this integration in DSMSs, requiring ad hoc programming and sometimes intricate interactions with the query optimizer. With StreamInsight we were able to fully implement only Task 0 and Task 1. StreamInsight++ was our proposed ad hoc solution for pervasive application development. The integration of different programming paradigms (imperative, declarative and network protocols) was tedious. Developing pervasive applications turned out to be a difficult and time-consuming process, which required either expert developers with more than one core area of expertise or using teams of developers. Either way, the development costs increase. Ad hoc programming led to StreamInsight++, which could be considered as a PEMS, since it handles data and services providing streams and functionalities in a pervasive environment. However, apart from the cost issues, this system carries another problem: it is specific to the pervasive environment it was designed for. A replacement of this environment automatically triggers severe changes in the implementation of the system. Moreover, although there are DSMSs which offer ways of homogeneously interacting with classical data relations and streams, in StreamInsight++ we needed a separate repository to hold static data, i.e., an instance of SQL Server. 
The SoCQ PEMS solved the complex interactions between various data sources, by providing an integrated management of distributed services and a declarative definition of continuous interactions. In SoCQ we wrote declarative queries against dynamically discovered, distributed data services, the system being able to handle pervasive environments, without modifications in its implementation, as long as the services access mechanisms don't change. The price to pay was represented by the training time dedicated to the SoCQ system and the Serena SQL-like language (almost negligible for SQL developers), the description of a scenario-specific schema in Serena and the service wrappers development. Once Task 0 was accomplished, application development became straightforward. Writing SoCQ SQL-like queries was easy for someone who knew how to write SQL queries in a classical context. By comparison, the time required to study the StreamInsight platform, even if the developer had a confident level of C# and LINQ, was considerably higher. SoCQ led to concise code for Tasks 1 -4, outperforming StreamInsight and StreamInsight++ in this respect. The StreamInsight-based systems generally yielded better scalability and performance than SoCQ when evaluating average latency, the maximum number of data sources, or the maximum data rate. One case when SoCQ did better than the StreamInsight++ ad hoc framework, was in Task 4, when the engine had to call external services' methods at a high data rate. When assessing performance for StreamInsight and StreamInsight++, we considered the StreamInsight engine together with the adapter framework and middleware we implemented, and not the pure performance of the StreamInsight engine. SoCQ required only one SQL-like language to write complex continuous queries over data, streams and functionalities provided by services. In the StreamInsight and StreamInsight++ implementations, an application was developed in imperative code, to execute continuous queries on the server. The only host lan-guage allowed in the release we used (StreamInsight V1.2) is C#. SoCQ did not burden the developer with such requirements. One SoCQ server and a Java Virtual Machine were required in the SoCQ implementation and the solution could be deployed on any operating system. The StreamInsight++ solution also required more system-specific dependencies and it could only be deployed on Windows machines. Task evolution was straightforward with SoCQ. Entities of type XD-Relation could be created to represent new service types in the pervasive environment and changes to continuous or one-shot queries had a minimal impact on the declarative code. With StreamInsight and StreamInsight++, task evolution became cumbersome, impacting imperative code. For the StreamInsight-based implementations, task evolution had an associated redeployment cost, since the code had to be recompiled. SoCQ allows the developer to write code that appears to be more concise and somewhat elegant than the code written using the two other systems. Developers can fully implement Tasks 1 -4 using only declarative queries. The StreamInsight and StreamInsight++ systems require imperative code as well for the same tasks, which need to be coded using an editor like Visual Studio. The imperative paradigm also adds an extra compilation step. Conclusion and Future Directions In this paper we have tackled the difficult problem of evaluating the easiness of data-centric pervasive application development. 
We introduced P-Bench, a benchmark that assesses easiness in the development, deployment and evolution process, and also examines performance aspects. To the best of our knowledge, this is the first study of its kind. We assessed the following approaches to building data-centric pervasive applications: (1) the StreamInsight platform, as a DSMS, (2) ad hoc programming, using StreamInsight++, an enriched version of StreamInsight, and (3) SoCQ, a PEMS. We defined a set of five benchmark tasks, oriented towards commonly encountered requirements in data-centric pervasive applications. The scenario we chose can easily be changed, and the tasks' objectives are defined in a generic, scenario-independent manner. We evaluated how hard it is to code a pervasive application using a set of thoroughly defined metrics. As expected, our experiments showed that pervasive applications are easier to develop, deploy and update with a PEMS. On the other hand, the DSMS- and ad hoc-based approaches exhibited superior performance for most of the tasks and metrics. However, for pervasive applications like the ones in our scenario, the PEMS implementation of the benchmark tasks achieved very good performance indicators as well. This is noteworthy, as the SoCQ PEMS is a research prototype developed in a lab, whereas StreamInsight is a giant company's product. Future research directions include finalizing our accuracy checking framework, considering error management and resilience, data coherency, and including additional metrics like application design effort, software modularity and collaborative development.

Fig. 1. Scenario framework architecture
Fig. 2. StreamInsight application architecture

Listing 2: Car supervision queries
(a) Car supervision query in LINQ:
    FROM Car IN CarSupervision
    SELECT Car.CarID, Car.Latitude, Car.Longitude, Car.LocationTimestamp;
(b) Car supervision query in SoCQ's Serena SQL:
    CREATE VIEW STREAM carSupervision
        (carID STRING, locDate DATE, locLatitude STRING, locLongitude STRING) AS
    SELECT c.carID, c.locDate, c.latitude, c.longitude
    STREAMING UPON insertion
    FROM Car c
    WHERE c.carService = "http://127.0.0.1:21000/Car"
    USING c.locationNotification [1];

Deployment. StreamInsight, StreamInsight++ and SoCQ. For this task, the same prerequisites as for Task 0 are required, for all the implementations.

Table 1. Task 0 metrics
  Stage        Metric                      SI      SI++    SoCQ
  Development  LinesOfImperativeCode       4323    5186    26500
               NoOfDeclarativeElements     0       0       13
               NoOfQueries                 0       0       4
               NoOfLanguages               1       1       2
               DevelopmentTime             120     160     16
  Deployment   NoOfServers                 1       1       1
               NoOfSystemDependencies      3       3       1
               IsOSIndependent             No      No      Yes
  Evolution    ChangedImperativeCode       ∼4323   ∼5186   ∼11000
               ChangedDeclarativeElements  0       0       ∼13

Table 2. Task 1 metrics
  Stage        Metric                      SI      SI++    SoCQ
  Development  LinesOfImperativeCode       33      33      0
               NoOfDeclarativeElements     2       2       6
               NoOfQueries                 1       1       1
               NoOfLanguages               2       2       1
               DevelopmentTime             4       4       1
  Deployment   NoOfServers                 1       1       1
               NoOfSystemDependencies      3       3       1
               IsOSIndependent             No      No      Yes
  Performance  MaxDataRate                 10000   10000   10000
               NoOfEvents                  350652  350652  360261
               AvgLatency                  0.5     0.5     1.34
  Evolution    ChangedImperativeCode       1       1       0
               ChangedDeclarativeElements  0       0       1

Table 3. Task 2 metrics
  Stage        Metric                      SI    SI++    SoCQ
  Development  LinesOfImperativeCode       NA    31      0
               NoOfDeclarativeElements     NA    2       5
               NoOfQueries                 NA    1       1
               NoOfLanguages               NA    2       1
               DevelopmentTime             NA    4       1
  Deployment   NoOfServers                 NA    1       1
               NoOfSystemDependencies      NA    3       1
               IsOSIndependent             NA    No      Yes
  Performance  MaxNoDataSources            NA    5000    2500
               MaxDataRate                 NA    1700    750
               NoOfEvents                  NA    976404  443391
               AvgLatency                  NA    13.53   0.79
  Evolution    ChangedImperativeCode       NA    1       0
               ChangedDeclarativeElements  NA    0       1

5.5 Task 3: Method Invocation
Task 3 provides the location of a medical container. The input of this task is represented by a medical container identifier and notification messages sent by services in the environment when they appear or disappear. Its output is the current location of the container, i.e., the LocationTimestamp, Latitude and Longitude. The objective of this task is to invoke a method provided by a dynamically discovered medical container service.
Development. StreamInsight++. We create a SQL Server database and dynamically update a table in the database with available medical container services. An input adapter delivers medical container services discovered by the Service Manager to a simple LINQ continuous query, whose results are used to update the medical container services table in SQL Server. Based on the input container identifier (an mcID field), the application looks up the medical container URL in the SQL Server table. From imperative code, it calls the getLocation method exposed by the medical container service, which outputs the current location of the container.

Table 4. Task 3 metrics
  Stage        Metric                      SI    SI++  SoCQ
  Development  LinesOfImperativeCode       NA    102   0
               NoOfDeclarativeElements     NA    11    4
               NoOfQueries                 NA    4     1
               NoOfLanguages               NA    3     1
               DevelopmentTime             NA    8     1
  Deployment   NoOfServers                 NA    2     1
               NoOfSystemDependencies      NA    3     1
               IsOSIndependent             NA    No    Yes
  Evolution    ChangedImperativeCode       NA    1     0
               ChangedDeclarativeElements  NA    0     1

5.6 Task 4: Composite Data Supervision
Task 4 monitors the temperatures of medical containers and sends alert messages when the supervised medical containers exceed established temperature thresholds. The input of this task is represented by notification messages sent

Table 5. Task 4 metrics
  Stage        Metric                      SI    SI++   SoCQ
  Development  LinesOfImperativeCode       NA    175    0
               NoOfDeclarativeElements     NA    13     7
               NoOfQueries                 NA    4      1
               NoOfLanguages               NA    3      1
               DevelopmentTime             NA    10     3
  Deployment   NoOfServers                 NA    2      1
               NoOfSystemDependencies      NA    3      1
               IsOSIndependent             NA    No     Yes
  Performance  MaxNoDataSources            NA    3000   2500
               MaxDataRate                 NA    275    400
               NoOfEvents                  NA    13170  23812
               AvgLatency                  NA    6.25   34.37
  Evolution    ChangedImperativeCode       NA    1      0
               ChangedDeclarativeElements  NA    0      1

Footnotes:
- We will refer to a data service as a service or data source in the rest of the paper.
- The SoCQ engine source code contains about 26500 lines of Java code. It encompasses the UbiWare generic implementation (client-side and server-side, about 11000 lines of code), the core of the SoCQ engine (data management and query processing, about 13200 lines), and some interfaces to control and access the SoCQ engine (2 Swing GUIs and a DataService Interface, about 2300 lines).
- For StreamInsight and StreamInsight++, LinesOfImperativeCode assesses only the task application and Service Manager code (we don't have access to StreamInsight's engine implementation).
- We will replace Transact-SQL with LINQ to SQL.
65,058
[ "4133", "3040", "4224" ]
[ "217744", "401125", "401125", "401125", "401125" ]
01351708
en
[ "info" ]
2024/03/04 23:41:48
2013
https://hal.science/hal-01351708/file/Liris-6533.pdf
Student Member, IEEE Huibin Li email: [email protected]. Member, IEEE Wei Zeng email: [email protected] Jean Marie Morvan email: [email protected]. Member, IEEE Liming Chen email: [email protected]. Xianfeng David Gu X David Gu email: [email protected] Surface Meshing with Curvature Convergence Keywords: Meshing, Delaunay refinement, conformal parameterization, normal cycle, curvature measures, convergence Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generations. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. INTRODUCTION Surface meshing and remeshing play fundamental roles in many engineering fields, including computer graphics, geometric modeling, visualization and medical imaging. Typically, surface meshing finds a set of sample points on the surface with a curved triangulation, then approximates each face by an Euclidean triangle in R 3 , thereby approximating the underlying smooth surface by a polyhedral triangular surface, which is called a triangle mesh. Many geometric processing tasks are equivalent to solving geometric partial differential equations (PDEs) on surfaces. The following are some direct examples: for shape analysis, the heat kernel signature (HKS) [START_REF] Sun | A Concise and Provably Informative Multi-Scale Signature Based on Heat Diffusion[END_REF] is mostly utilized, which entails solving a heat equation and computing the eigenvalues and eigenfunctions of the Laplace-Beltrami operator on the surfaces; for shape registration, the surface harmonic map [START_REF] Wang | High Resolution Tracking of Non-Rigid Motion of Densely Sampled 3D Data Using Harmonic Maps[END_REF] is widely used, which essentially means solving elliptic PDEs on the surfaces; for surface parameterization, the discrete Ricci flow [START_REF] Jin | Discrete Surface Ricci Flow[END_REF] is often computed, which amounts to solving a nonlinear parabolic equation on the surfaces. Most geometric PDEs are discretized on triangle meshes, and solved using numerical methods, such as Finite Element Methods (FEM). 
The numerical stability, the convergence rates, and the approximation bounds of the discrete solutions are largely determined by the quality of the underlying triangle mesh, which is measured mainly by the size and the shape of the triangles of the mesh. Therefore, the generation of high quality meshes is of fundamental importance. Most existing meshing and remeshing approaches are based on Delaunay refinement algorithms. They can be classified into three main categories: 1) The sampling is computed in R^3, and triangulated using volumetric Delaunay triangulation algorithms, such as [START_REF] Amenta | Surface Reconstruction by Voronoi Filtering[END_REF] [5] [6] [START_REF] Cheng | Sampling and Meshing a Surface with Guaranteed Topology and Geometry[END_REF] [8] [START_REF] Dey | Delaunay Meshing of Isosurfaces[END_REF]. 2) The sampling and triangulation are directly computed on the curved surface, such as [10] [11]. 3) The sampling is computed in a conformal parameter domain, and triangulated using planar Delaunay triangulation algorithms, such as [START_REF] Alliez | Isotropic Surface Remeshing[END_REF] [13] [START_REF] Marchandise | High-Quality Surface Remeshing using Harmonic MapsPart II: Surfaces with High Genus and of Large Aspect Ratio[END_REF] [15] [START_REF] Alliez | Recent Advances in Remeshing of Surfaces[END_REF]. The convergence theories of curvature measures for the approaches in the first two categories have been thoroughly established in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF] [START_REF] Cohen-Steiner | Second Fundamental Measure of Geometric Sets and Local Approximation of Curvatures[END_REF] [19] [START_REF] Morvan | Approximation of the Normal Vector Field and the Area of a Smooth Surface[END_REF]. However, so far, there is no theory showing the convergence of curvature measures for the approaches in the third category. Existing Theoretical Results Based on the classic results of Federer [START_REF] Federer | Geometric Measure Theory[END_REF] and Fu [START_REF] Fu | Monge-Ampre Functions 1[END_REF], among others, the authors in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF] [START_REF] Cohen-Steiner | Second Fundamental Measure of Geometric Sets and Local Approximation of Curvatures[END_REF] [19] defined a general and unified framework of curvature measures for both smooth and discrete submanifolds of R^N based on the normal cycle theory. Furthermore, they proved convergence and approximation theorems of curvature measures for general geometric subsets of R^N. In particular, suppose M is a smooth surface embedded in R^3 and M_ε is an ε-sample of M, namely, for each point p ∈ M, the ball B(p, ε·lfs(p)) contains at least one sample point of M_ε, where lfs(p) denotes the local feature size of M at the point p. Let T be the triangle mesh induced by the volumetric Delaunay triangulation of M_ε restricted to M. If ε is small enough, each point of the mesh has a unique closest point on the smooth surface. This leads to the introduction of the closest point projection π : T → M. This map has the following properties: 1) Normal deviation: ∀p ∈ T, |n(p) − n∘π(p)| = O(ε), by Amenta et al. [START_REF] Amenta | Surface Reconstruction by Voronoi Filtering[END_REF], and Boissonnat et al. [START_REF] Boissonnat | Provably Good Sampling and Meshing of Surfaces[END_REF]. 2) Hausdorff distance: |p − π(p)| = O(ε^2), by Boissonnat et al.
[START_REF] Boissonnat | Provably Good Sampling and Meshing of Surfaces[END_REF]. 3) Homeomorphism: π is a global homeomorphism, by Amenta et al. [START_REF] Amenta | Surface Reconstruction by Voronoi Filtering[END_REF] and Boissonnat et al. [START_REF] Boissonnat | Provably Good Sampling and Meshing of Surfaces[END_REF]. 4) Curvature measures: Let B be a Borel subset of R 3 , then the differences between the curvature measures on M and those on T are Kε, where K depends on the triangulation T [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF] [START_REF] Morvan | Generalized Curvatures[END_REF]. In the first category, the authors show that, unfortunately, the convergence of curvature measures can not be guaranteed. Depending on the triangulation, when ε goes to 0, K may go to infinity, (see [START_REF] Cohen-Steiner | Second Fundamental Measure of Geometric Sets and Local Approximation of Curvatures[END_REF] for a counterexample). To ensure the convergence of the curvature measures, in [START_REF] Cohen-Steiner | Second Fundamental Measure of Geometric Sets and Local Approximation of Curvatures[END_REF] [START_REF] Morvan | Generalized Curvatures[END_REF], the authors suggest adding a stronger assumption to the sampling condition, namely, κ-light ε-sample, which is an ε-sample with the additional constraint that each ball B(p, εlfs(p)) contains at most κ sample points. In the second category, the curvature convergence for meshes obtained by Chew's second algorithm [START_REF] Chew | Guaranteed-Quality Mesh Generation for Curved Surfaces[END_REF] has been proved in [START_REF] Morvan | Approximation of the Normal Vector Field and the Area of a Smooth Surface[END_REF]. The normal and area convergence for meshes based on the geodesic Delaunay refinement algorithm has been proved in [START_REF] Dai | Geometric Accuracy Analysis for Discrete Surface Approximation[END_REF]. However, the computation of the geodesic Delaunay triangulation is prohibitively expensive in practice [START_REF] Xin | Isotropic Mesh Simplification by Evolving the Geodesic Delaunay Triangulation[END_REF]. Our Theoretical Results This paper will deal with triangulations of the third category, showing stronger estimates. Using conformal parameterization, we obtain meshes satisfying the first two properties as before, 1) Normal deviation: O(ε), Lemma 4.8 and Lemma 4.9. 2) Hausdorff distance: O(ε 2 ), Lemma 4.8 and Lemma 4.9. Moreover, we improve the other two properties as follows: 3) Homeomorphism: In addition to the closest point projection π, we also define a novel mapping, the natural projection η, induced by the conformal parameterization. Both projections are global homeomorphisms, see section 4.4. In addition, the coding and computational complexities are much lower than those in the second category. Similarities Following the work in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF], our proof is mainly based on the normal cycle theory. Both methods estimate both the Hausdorff distance and the normal deviation at the corresponding points. Then both methods construct a homeomorphism from the triangle mesh to the surface, which induces a homotopy from the normal cycle of the mesh to the normal cycle of the surface. Then, the volume swept by the homotopy and the area of its boundary are estimated. This gives a bound on the difference between the curvature measures. 
Differences However our work can be clearly differentiated from theirs, in terms of both theoretical and algorithmic aspects: • In theory, as pointed out previously, without the stronger sampling condition, the volumetric Delaunay refinement algorithms cannot guarantee the convergence of curvature measures. In contrast, our results can ensure the convergence without extra assumptions. • In theory, the volumetric Delaunay refinement methods require the embedding of the surface. Our method is intrinsic, which only requires the Riemannian metric. In many real-life applications, e.g. the general relativity simulation in theoretical physics, the surface metric is given without any embedding space. In such cases, the volumetric Delaunay refinement methods are invalid, but our method can still apply. • In theory, to prove the main theorem, the closest point mapping was constructed in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF]. In contrast, we supply two proofs: one is based on the closest point mapping, whereas the other uses a completely different mapping based on conformal parameterization. Conceptually, besides its novelty, the latter is also simpler. • In practice, the planar Delaunay refinement methods are much easier to implement, the data structure for planar triangulation is much simpler than that of the tetrahedral mesh, and the planar algorithms are much more efficient. Remark The current meshing algorithm aims to achieve a good triangulation, and requires a conformal parameterization, which in turn requires a triangulation. Consequently, this looks like a chicken-and-egg problem. In fact, conformal parameterization can be carried out using an initial triangulation of low quality, and this algorithm will produce a new triangulation with much better quality. Many geometric processing tasks cannot be computed on the initial mesh. For example, the error bound for a discrete solution to the Poisson equation is O(ε 2 ) on good quality meshes. If the mesh has too many obtuse angles, then the discrete results will not converge to the smooth solution. In reality, surfaces are acquired by 3D scanning devices, such as the laser scanner or the structured light scanner. Usually, the raw point clouds are very dense, thus the initial triangulation can be induced by the pixel or voxel grid structures. In the geometric modeling field, the input surfaces may be spline surfaces, and the initial triangulation can be chosen as the regular grids on the parameter domain. Then, the conformal parameterizations can be computed using the dense samples with the initial triangulation. Finally, we can perform the remeshing using the current conformal parametric Delaunay refinement algorithm to improve the mesh quality or compress the geometric data. PREVIOUS WORKS Meshing/Remeshing Delaunay Refinement The Delaunay refinement algorithms were originally designed for meshing planar domains, and were later generalized for meshing surfaces and volumes. Chew's first algorithm [START_REF] Chew | Guaranteed-Quality Triangular Meshes[END_REF] splits any triangle whose circumradius is greater than the prescribed shortest edge length parameter ε and hence generates triangulation of uniform density and with no angle smaller than 30 • . But the number of triangles produced is not optimal. 
Chew's second algorithm [START_REF] Chew | Guaranteed-Quality Mesh Generation for Curved Surfaces[END_REF] splits any triangle whose circumradius-to-shortest-edge ratio is greater than one, and hence in practice produces grade mesh. Similar split criterion was used in Ruppert's algorithm [START_REF] Ruppert | A Delaunay Refinement Algorithm for Quality 2-Dimensional Mesh Generation[END_REF], which has the theoretical guarantee of the minimal angle of no less than 20.7 • . Shewchuk's algorithm [START_REF] Shewchuk | Delaunay Refinement Algorithms for Triangular Mesh Generation[END_REF] can create meshes with most angles of 30 • or greater. Dey et al. developed a series of algorithms for surface meshing and remeshing based on volumetric Delaunay refinement [START_REF] Cheng | Sampling and Meshing a Surface with Guaranteed Topology and Geometry[END_REF] [8] [START_REF] Dey | Delaunay Meshing of Isosurfaces[END_REF], which belong to the approaches in the first category. We refer readers to [START_REF] Cheng | Delaunay Mesh Generation[END_REF] for full details. Centroidal Voronoi Tessellation The concept of centroidal Voronoi tessellations (CVT) was first proposed by Du et al. [START_REF] Du | Centroidal Voronoi Tessellations: Applications and Algorithms[END_REF], and then was generalized to constrained centroidal Voronoi tessellations (CCVT) [START_REF] Du | Constrained Centroidal Voronoi Tessellations for Surfaces[END_REF]. Recently, CVT has been widely used for surface meshing/remeshing to produce high quality triangulations. It can be carried out in the ambient space, e.g. Yan et al. [START_REF] Yan | Isotropic Remeshing with Fast and Exact Computation of Restricted Voronoi Diagram[END_REF], or the conformal parameter domain, e.g. Alliez et al. [12] [31], or even high embedding space, e.g. Lévy et al. [START_REF] Lévy | Variational Anisotropic Surface Meshing with Voronoi Parallel Linear Enumeration[END_REF]. A complete survey of the recent advancements on CVT based remeshing can be found in [START_REF] Alliez | Recent Advances in Remeshing of Surfaces[END_REF]. Although visually pleasing and uniform, all the existing CVT based remeshing methods for the generation of high quality triangulation have no theoretical bound of the minimal angle [START_REF] Alliez | Recent Advances in Remeshing of Surfaces[END_REF]. Therefore, the convergence of curvature measures cannot be guaranteed. Conformal Surface Parameterization Over the last two decades, surface parameterization has gradually become a very popular tool for various mesh processing processes [START_REF] Sheffer | Mesh parameterization Methods and their Applications[END_REF] [START_REF] Floater | Surface Parameterization: a Tutorial and Survey[END_REF]. In this work, we consider only conformal parameterizations. 
There are many approaches used for this purpose, including the harmonic energy minimization [START_REF] Desbrun | Intrinsic Parameterizations of Surface Meshes[END_REF] [36] [START_REF] Wang | Surface Parameterization using Riemann Surface Structure[END_REF], the Cauchy-Riemann equation approximation [START_REF] Lévy | Least Squares Conformal Maps for Automatic Texture Atlas Generation[END_REF], Laplacian operator linearization [START_REF] Haker | Conformal Surface Parameterization for Texture Mapping[END_REF], circle packing [START_REF] Hurdal | Coordinate Systems for Conformal Cerebellar Flat Maps[END_REF], angle-based flattening [START_REF] Sheffer | Parameterization of Faceted Surfaces for Meshing using Angle-Based Flattening[END_REF], holomorphic differentials [START_REF] Gu | Global Conformal Surface Parameterization[END_REF], Ricci curvature flow [START_REF] Jin | Discrete Surface Ricci Flow[END_REF] [43], Yamabe flow [START_REF] Lui | Detection of Shape Deformities Using Yamabe Flow and Beltrami Coefficients[END_REF], conformal equivalence class [START_REF] Springborn | Conformal Equivalence of Triangle Meshes[END_REF], most isometric parameterizations (MIPS) [START_REF] Hormann | Hierarchical Parametrization of Triangulated Surfaces[END_REF], etc. STATEMENT OF THE MAIN THEOREM Curvature Measures First, let M be a C^2-smooth surface embedded in R^3; for any Borel set B ⊂ R^3, its Gaussian and mean curvature measures are obtained by integrating the Gaussian curvature and the mean curvature of M over B ∩ M. Now, let V be a polyhedron of R^3 and its polyhedral boundary M be a triangular mesh surface. We use v_i to denote a vertex, [v_i, v_j] an edge, and [v_i, v_j, v_k] a face of M. We define the discrete Gaussian curvature of M at each vertex as the angle deficit, G(v_i) = 2π − ∑_{jk} θ_i^{jk}, where θ_i^{jk} is the corner angle on the face [v_i, v_j, v_k] at the vertex v_i. Similarly, the discrete mean curvature at each edge is defined as H(e_{ij}) = |v_i − v_j| β(e_{ij}), where β(e_{ij}) is the angle between the normals to the faces incident to e_{ij}. The sign of β(e_{ij}) is chosen to be positive if e_{ij} is convex and negative if it is concave. Definition 3.2: The discrete Gaussian curvature measure of M, φ_M^G, is the function that associates with each Borel set B ⊂ R^3 the value φ_M^G(B) = ∑_{v ∈ B∩M} G(v). (1) The discrete mean curvature measure φ_M^H is φ_M^H(B) = ∑_{e ∈ B∩M} H(e). (2) The curvature measures on both smooth surfaces and polyhedral surfaces can be unified by the normal cycle theory, which will be explained in section 4.3. Main Results It is well known that any Riemannian metric defined on a smooth (compact, with or without boundary) surface M can be conformally deformed into a metric of constant curvature c ∈ {-1, 0, 1}, depending on the topology of M, the so-called uniformization metric (cf. Fig. 1). Now if M is endowed with a Riemannian metric of constant curvature, the Delaunay refinement algorithms can be used to generate a triangulation of M with good quality. The most common Delaunay refinement algorithms include Chew's [START_REF] Chew | Guaranteed-Quality Triangular Meshes[END_REF], [START_REF] Chew | Guaranteed-Quality Mesh Generation for Curved Surfaces[END_REF] and Ruppert's [START_REF] Ruppert | A Delaunay Refinement Algorithm for Quality 2-Dimensional Mesh Generation[END_REF]. Let ε be a user defined upper bound on the circumradius of the final triangulation. An initial set of samples is given on the surface M, such that the distance between any pair of samples is greater than ε.
If M has boundaries, then the boundaries are sampled and approximated by piecewise geodesics, such that each geodesic segment is greater than ε. The Delaunay refinement method on the uniformization space starts with an initial Delaunay triangulation of the initial samples, then updates the samples by inserting circumcenters of the bad triangles, and meanwhile updates the triangulation by maintaining the Delaunay property. A bad triangle can be either bad-sized or bad-shaped. A triangle is bad-sized if its circumradius is greater than ε. A triangle is bad-shaped if its circumradius-to-shortest-edge ratio is greater than one. In this work, we will show the following meshing algorithm using the packing argument. Theorem 3.3 (Delaunay Refinement): Let M be a compact Riemannian surface with constant curvature. Suppose that the boundary of M is empty or is a union of geodesic circles. For any given small enough ε > 0, the Delaunay refinement algorithm terminates. Moreover, in the resultant triangulation, all triangles are well-sized and well-shaped, that is: 1) The circumradius of each triangle is not greater than ε. 2) The shortest edge length is greater than ε. Suppose M is also embedded in E^3 with the induced Euclidean metric. Then M can also be conformally mapped to a surface with the uniformization metric, such that all boundaries (if there are any) are mapped to geodesic circles. By running the Delaunay refinement on the uniformization space, we can get a triangulation of M, which induces a polyhedral surface T, whose vertices are on the surface, and all faces of which are Euclidean triangles. Furthermore, all triangles are well-sized and well-shaped under the original induced Euclidean metric. Based on the induced triangulation T, we will show the following main theorem. Theorem 3.4 (Main Theorem): Let M be a compact Riemannian surface embedded in E^3 with the induced Euclidean metric, and T the triangulation generated by Delaunay refinement on the conformal uniformization domain, with a small enough circumradius bound ε. If B is the relative interior of a union of triangles of T, then:
|φ_T^G(B) − φ_M^G(π(B))| ≤ Kε    (3)
|φ_T^H(B) − φ_M^H(π(B))| ≤ Kε    (4)
|φ_T^G(B) − φ_M^G(η(B))| ≤ Kε    (5)
|φ_T^H(B) − φ_M^H(η(B))| ≤ Kε    (6)
where, for fixed M,
K = O( ∑_{t∈T, t⊂B} r(t)^2 ) + O( ∑_{t∈T, t⊂B, t∩∂B ≠ ∅} r(t) ),
r(t) being the circumradius of the triangle t. Moreover, K can be further replaced by
K = O(area(B)) + O(length(∂B)).
Furthermore, if M is an abstract compact Riemannian surface (only with a Riemannian metric, but not an embedding), inequalities (3) and (5) still hold. Here π denotes the closest point projection on M, and η denotes the natural projection on M, which is induced by the conformal parameterization; see Definitions 4.6 and 4.7. THEORETICAL PROOFS Surface Uniformization Let (M_1, g_1) and (M_2, g_2) be smooth surfaces with Riemannian metrics. Let φ : M_1 → M_2 be a diffeomorphism; φ is conformal if and only if φ*g_2 = e^{2λ} g_1, where φ*g_2 is the pullback metric on M_1, and λ : M_1 → R is a scalar function defined on M_1. Conformal mappings preserve angles and distort area elements. The conformal factor function e^{2λ} indicates the area distortion. According to the classical surface uniformization theorem, every metric surface (M, g) can deform to one of three canonical shapes, a sphere, a Euclidean plane or a hyperbolic plane.
Namely, there exists a unique conformal factor function λ : M → R, such that the uniformization Riemannian metric e 2λ g induces constant Gaussian curvature, the constant being one of {+1, 0, -1} according to the topology of the surface. If surfaces have boundaries, then the boundaries are mapped to circles on the uniformization space. Figures 1 and2 show the uniformizations for closed surfaces and surfaces with boundaries, respectively. The left-hand columns show the genus zero surfaces, which can conformally deform to the unit sphere with +1 curvatures. The middle columns demonstrate genus one surfaces, whose universal covering space is conformally mapped to the Euclidean plane, and the boundaries become circles. The columns on the right illustrate high genus surfaces, whose universal covering space is flattened to the hyperbolic plane, and whose boundaries are mapped to circles. Surface uniformization can be carried out using the discrete Ricci flow algorithms [START_REF] Jin | Discrete Surface Ricci Flow[END_REF]. Then we can compute the triangulation of the surface by performing the planar Delaunay refinement algorithms on the canonical uniformization domain. Delaunay Refinement The Delaunay refinement algorithm for mesh generation operates by maintaining a Delaunay triangulation, which is refined by inserting circumcenters of triangles, until the mesh meets constraints on element quality and size. Geodesic Delaunay Triangulation By the uniformization theorem, all oriented metric surfaces can be conformally deformed to one of three canonical shapes, the unit sphere S 2 , the flat torus E 2 /Γ and the hyperbolic surface H 2 /Γ, where E 2 is the Euclidean plane, H 2 the hyperbolic plane, and Γ is the Deck transformation group, a subgroup of isometries of E 2 or H 2 , respectively. The unit sphere S 2 can be conformally mapped to the complex plane by stereographic projection, with the Riemannian metric C ∪ {∞}, g = 4dzd z (1 + zz) 2 . Similarly, the hyperbolic plane H 2 is represented by Poincaré's disk model with a Riemannian metric {|z| < 1|z ∈ C}, g = 4dzd z (1 -zz) 2 . The concepts of Euclidean triangles and Euclidean circles can be generalized to geodesic triangles and geodesic circles on S 2 and H 2 . Therefore, Delaunay triangulation can be directly defined on these canonical constant curvature surfaces. A triangulation is Delaunay if it satisfies the empty circle property, namely the geodesic circumcircle of each geodesic triangle does not include any other point. Spherical circles on S 2 are mapped to Euclidean circles or straight lines on the plane by stereographic projection. Similarly, hyperbolic circles are mapped to the Euclidean circles on the Poincaré disk. Therefore, geodesic Delaunay triangulations on S 2 or H 2 are mapped to the Euclidean Delaunay triangulations on the plane. As a result, geodesic Delaunay triangulations can be carried out using the conventional Euclidean Delaunay triangulation. Delaunay Refinement on Constant Curvature Surfaces The Delaunay refinement algorithm on constant curvature surfaces with empty boundary is introduced as follows. Take a flat torus E 2 /Γ as an example. The user chooses a parameter ε, which is the upper bound of the circumradius. 1) An initial set of samples is generated on the surface, such that the shortest distance between any pair of samples is greater than ε. An initial Delaunay triangulation is constructed. 
2) Select bad size triangles, whose circumradii are greater than ε, insert their circumcenters, and maintain the Delaunay triangulation. 3) Select bad shape triangles, whose ratio between circum radius and shortest edge length is greater than one, insert their circum centers, maintain the Delaunay triangulation. 4) Repeat 2 and 3, until the algorithm terminates. The proof of theorem 3.3 is based on the conventional packing argument [START_REF] Chew | Guaranteed-Quality Triangular Meshes[END_REF]. Proof: In the initial setting, all the edge lengths are greater than ε. In step 2, after inserting the circumcenter of a bad size triangle, all the newly generated edges are connected to the center, their lengths are no less than the circumradius, which is greater than ε. In step 3, the circumradius of the bad shape triangle is greater than the shortest edge of the bad triangle, which is greater than ε. All the newly generated edges connecting to the center are longer than the radius ε. Therefore, during the refinement process, the shortest edge is always greater then ε. Suppose p and q are the closest pair of vertices, then the line segment connecting them must be an edge of the final Delaunay triangulation, which is longer than ε. Therefore, the distance between any pair of vertices is greater than ε. Centered at the each vertex of the triangulation, a disk with radius ε/2 can be drawn. All these disks are disjoint. Because the total surface area is finite, the number of vertices is finite. Therefore, the whole algorithm will terminate. When the algorithm terminates, all triangles are well-sized and well-shaped. Namely, the circumradius of each triangle is smaller than ε, and the shortest edge length is greater than ε. For the flat torus case, the minimal angle is greater than 30 • . By the uniformization theorem, if a surface has a boundary, it can be conformally mapped to the constant curvature surfaces with circular holes. Then the boundaries can be approximated by the planar straight line graphs (PSLG), such that the angles between two adjacent segments are greater than 60 • . Using a proof similar to the one given by Chew in [START_REF] Chew | Guaranteed-Quality Triangular Meshes[END_REF] and [START_REF] Chew | Guaranteed-Quality Mesh Generation for Curved Surfaces[END_REF], we can show the theorem still holds. Delaunay Refinement on General Surfaces For general surfaces, we need to add grading to the Delaunay triangulation. The grading function is the conformal factor e 2λ , which controls the size of the triangles. Step 2 in the above algorithm needs to be modified as follows: select a bad size triangle with the circumcenter p and circumradius greater than εe -λ (p) . The same proof can be applied to show the termination of the algorithm. In the resultant triangulation, the grading is controlled by the conformal factor, the circumradius is less than εe -λ , the shortest edge is greater than εe -λ , so the triangles are still well-shaped. On the original surface, the edge length is greater than ε and the circumradius is less than ε. The minimal angle is bounded. According to [START_REF] Funke | Smooth-Surface Reconstruction in Near Linear Time[END_REF], such a kind of sampling is locally uniform, thus is also a κ-light ε-sample. 
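As a sketch of the refinement loop of this section, the driver below repeatedly inserts circumcenters of bad-sized or bad-shaped triangles. The type Delaunay and its triangles()/insert_point() interface are placeholders for whatever incremental 2D Delaunay structure is used (the construction itself relies on Chew's or Ruppert's algorithms); with lambda identically zero this reduces to the uniform bound of Theorem 3.3, otherwise it applies the graded bound eps*e^(-lambda(p)).

#include <cmath>

struct Tri { double circumradius, shortest_edge, cx, cy; };   // (cx, cy): circumcenter

template <class Delaunay, class ConformalFactor>
void refine(Delaunay& dt, double eps, ConformalFactor lambda /* (x, y) -> lambda */) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (const Tri& t : dt.triangles()) {
            double bound = eps * std::exp(-lambda(t.cx, t.cy));
            bool bad_size  = t.circumradius > bound;             // step 2
            bool bad_shape = t.circumradius > t.shortest_edge;   // step 3: ratio > 1
            if (bad_size || bad_shape) {
                dt.insert_point(t.cx, t.cy);   // insert circumcenter, keep Delaunay
                changed = true;
                break;                         // triangulation changed: rescan
            }
        }
    }
}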
Suppose the triangulation is T , t ∈ T is a triangle, with circumradius r(t), B ⊂ T is a union of triangles of T , then (7) Normal Cycle Theory In order to be complete, we briefly introduce the normal cycle theory, which closely follows the work in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF]. For a more in-depth treatment, we refer readers to [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF]. Intuitively, the normal cycle of a surface is its offset surface embedded in a higher dimensional Euclidean space. If the surface is not convex or smooth, its offset surface in R 3 may have self-intersections. By embedding it in a higher dimensional space, it can be fully unwrapped. Offset Surface Suppose V is a volumetric domain in R 3 , whose boundary M = ∂V is a compact C 2 -smooth surface. Let ρ be the distance between M and the medial axis of the complement of V . The ( ) ( 2 / ) Fig. 3: Offset surface and tube formula. ε-offset of V minus V is V ε = {p|p ∈ V d(p,V ) < ε} ⊂ R 3 . The tube formula can be written as Vol(V ε ) = area(M)ε + φ H V (M) ε 2 2 + φ G V (M) ε 3 3 for ε < ρ. The localized version of the tube formula is as follows. Let B ⊂ M be a Borel set, the ε-offset of B is V ε (B), then we have Vol(V ε (B)) = area(B)ε + φ H V (B) ε 2 2 + φ G V (B) ε 3 3 . The volume of the ε-offset V ε (B) is always a polynomial in ε, and its coefficients are multiples of the curvature measures of B. Even if the boundary of V is not smooth but if ρ > 0, the volume of V ε (B) is always a polynomial in ε for ε < ρ. Therefore the coefficients of this polynomial generalize the curvature measures from smooth surfaces to polyhedral surfaces. This approach does not generalize to non-convex polyhedral surfaces, where ρ may be equal to 0. So the normal cycle theory has been developed. Intuitively, normal cycles provide a way of unfolding offsets in a higher dimensional space. endowed with the orientation induced by that of M, where a current is the generalization of an oriented surface patch, with integral coefficients. When no confusion is possible, we use the same notation N(M) to denote both the current and its associated set. Normal Cycles The normal cycle of V is the same as that of M, namely, N(V ) = N(M). The diffeomorphic mapping from M to its normal cycle N(M) is denoted as i : M → N(M) p → (p, n(p)) Suppose V is a convex body, whose boundary M is a The crucial property of the normal cycle is its additivity as shown in Fig. 4. Suppose V 1 and V 2 are two convex bodies in R 3 , such that V 1 ∪V 2 is convex, then N(V 1 ∩V 2 ) + N(V 1 ∪V 2 ) = N(V 1 ) + N(V 2 ). By the additivity property, we can define the normal cycle of a polyhedron. Given a triangulation of the polyhedron V into tetrahedra t i . i = 1, 2, • • • , n, the normal cycle of V is defined as N(V ) = n ∑ k=1 (-1) k+1 ∑ 1≤i 1 <•••<i k ≤n N(∩ k j=1 t i j ) by inclusion-exclusion. It is proved that the normal cycle N(V ) is independent of triangulations. Similar to the smooth surface case, one can define a setvalued mapping from M and its normal cycle N(M) i : M → N(M) p → (p, n(p)) n ∈ NC V (p). Invariant Differential 2-Forms Normal cycles are embedded in the space R 3 × R 3 , denoted as E p × E n , where E p is called point space, and E n is called normal space. Let g be a rigid motion of R 3 , g(p) = Rp + d, where R is a rotation matrix, d is a translation vector. g can be extended to E p × E n as ĝ(p, n) = (R(p) + d, R(n)). 
We say that a differential 2-form ω is invariant under rigid motions, if ĝ * ω = ω. The following invariant 2-forms play fundamental roles in the normal cycle theory, Definition 4.5: Let the coordinates of E p × E n be (x 1 , x 2 , x 3 , y 1 , y 2 , y 3 ), then ω A = y 1 dx 2 ∧ dx 3 + y 2 dx 3 ∧ dx 1 + y 3 dx 1 ∧ dx 2 ω G = y 1 dy 2 ∧ dy 3 + y 2 dy 3 ∧ dy 1 + y 3 dy 1 ∧ dy 2 ω H = y 1 (dx 2 ∧ dy 3 + dy 2 ∧ dx 3 )+ y 2 (dx 3 ∧ dy 1 + dy 3 ∧ dx 1 )+ y 3 (dx 1 ∧ dy 2 + dy 1 ∧ dx 2 ). Curvature measures of a surface can be recovered by integrating specific differential forms on its normal cycle. The following formula unifies the curvature measures on both smooth surfaces and polyhedral surfaces. For a Borel set B ⊂ R 3 , the curvature measures are given by N(M) ω G |i(B∩M) = φ G M (B) N(M) ω H |i(B∩M) = φ H M (B) N(M) ω A |i(B∩M) = area(B) where ω G |i(B∩M) denotes the restriction of ω to i(B ∩ M). Estimation In this section, we explicitly estimate the Hausdorff distance, normal deviation, and the differences in curvature measures from the discrete triangular mesh to the smooth surface. Configuration Let (M, g) be a C 2 metric surface. D is the unit disk on the uvplane. A conformal parameterization is given by ϕ : D → M, such that g(u, v) = e 2λ (u,v) (du 2 + dv 2 ). Suppose p ∈ D is a point on the parameter domain, then ϕ(p) is a point on the surface. The derivative map dϕ| p : T p D → T ϕ(p) M is a linear map dϕ| p = e λ (p) cos θsin θ sin θ cos θ . η = ϕ • τ -1 : T → M is called the natural projection. Another map from the mesh to the surface is the closest point projection. Definition 4.7 (Closest point projection): Suppose T has no intersection with the medical axis of M. Let q ∈ T , and π(q) be its closest point on the surface M, π(q) = argmin r∈M |r -q|, we call the mapping from q to its closest point π(q) as the closest point projection. We will show that the closest point projection is also a homeomorphism. Hausdorff Distance and Normal Deviation In the following discussion, we assume the triangulation is generated by the Delaunay Refinement in Theorem 3.3. Our goal is to estimate the Hausdorff distance and the normal deviation, in terms of both the natural projection and the closest point projection. Lemma 4.8 (Natural projection): Suppose q ∈ T , then |q -η(q)| = O(ε 2 ), (8) |n(q) -n(η(q))| = O(ε). ( 9 ) Proof: As shown in Fig. 5, suppose p ∈ D, τ(p ) = q. p is inside a triangle t = [p 0 , p 1 , p 2 ], p = 2 ∑ k=0 α k p k , 0 ≤ α k ≤ 1, where α k 's are barycentric coordinates. All the edge lengths are Θ(ε), and angles are bounded. The area is Θ(ε 2 ). Equation 8: By the linearity of τ and dϕ, τ(p k ) = ϕ(p k ) and |ϕ(p k ) -dϕ(p k )| = O(ε 2 ), we obtain |τ(p) -dϕ(p)| = | ∑ k α k (τ(p k ) -dϕ(p k ))| ≤ ∑ k α k |ϕ(p k ) -dϕ(p k )| = O(ε 2 ). Therefore |τ(p) -ϕ(p)| ≤ |τ(p) -dϕ(p)| + |dϕ(p) -ϕ(p)| = O(ε 2 ), where q = τ(p) and η(q) = ϕ • τ -1 (q) = ϕ(p), this gives Eqn. 8. Equation 9: Construct local coordinates on the tangent plane T ϕ(p 0 ) M, such that ϕ(p 0 ) is at the origin, dϕ(p 1 ) is a- long the x-axis. Then τ(p 1 ) is (Θ(ε), 0, O(ε 2 )), τ(p 2 ) is (Θ(ε) cos β , Θ(ε) sin β , O(ε 2 )) , where β is the angle at p 0 . By direct computation, the normal to the face τ(t) is (O(ε), O(ε), Θ(1)). Therefore |n • τ(p) -n • ϕ(p 0 )| = O(ε). Furthermore, |n • ϕ(p) -n • ϕ(p 0 )| = |W (ϕ(p) -ϕ(p 0 ))| ≤ W |ϕ(p) -ϕ(p 0 )| = O(ε), where W is the Weigarten map. M is compact, therefore W is bounded, |ϕ(p) -ϕ(p 0 )| is O(ε). 
|n • τ(p) -n • ϕ(p)| ≤ |n • ϕ(p) -n • ϕ(p 0 )| + |n • τ(p) -n • ϕ(p 0 )| = O(ε). This gives Eqn. 9. Lemma 4.9 (Closest point projection): Suppose q ∈ T , then |q -π(q)| = O(ε 2 ), ( 10 ) |n(q) -n(π(q))| = O(ε). ( 11 ) Proof: Equation 10: From Eqn. 8 and the definition of closest point, we obtain |q -π(q)| ≤ |q -η(q)| = O(ε 2 ). Equation 11: From Eqn. 8 and Eqn. 10, we get |η(q) -π(q)| ≤ |η(q) -q| + |q -π(q)| = O(ε 2 ), therefore |n • η(q) -n • π(q)| ≤ W |η(q) -π(q)| = O(ε 2 ). Then from Eqn. 9 and the above equation, |n(q) -n(π(q))| ≤ |n(q) -n • η(q)| + |n • η(q) -n • π(q)| = O(ε) + O(ε 2 ). Remark The proofs for the Hausdorff distances in Eqn. 8 and Eqn. 10 do not require the triangulation to be well-shaped, but only well-sized. The proofs for the normal deviation ( 0 ) ( 2 ) ( 1 ) 2 Fig. 6: Small triangles inscribed to attitudinal circles of a cylinder do not guarantee the normal convergence. estimation in Eqn. 9 and Eqn. 11 require the triangulation to be both well-sized and well-shaped. In the proofs we use the facts that the triangulation on parameter domain has bounded angles, and the mapping ϕ is conformal. Figure 6 shows a counterexample: a triangle is inscribed in a latitudinal circle of a cylinder, no matter how small it is, its normal is always orthogonal to the surface normals. Global Homeomorphism Both the natural projection and the closest point projection are homeomorphisms. While it is trivial for natural projection, in the following we give detailed proof to show that the closest point projection is a piecewise diffeomorphism, and we estimate its Jacobian. Lemma 4.10: The closest point projection π : T → M is a homeomorphism. Proof: First we show that π restricted to the one-ring neighborhood of each vertex of T is a local homeomorphism. Suppose p ∈ T is a vertex, therefore p ∈ M as well. U(p) is the union of all faces adjacent to p. We demonstrate that π : U(p) → M is bijective. Assume q ∈ U(p), then |p -q| = O(ε), |π(q) -p| ≤ |π(q) -q| + |q -p| = O(ε 2 ) + O(ε). Therefore |n(π(q)) -n(p)| = O(ε). ( 12 ) Assume there is another point r ∈ U(p), such that π(q) = π(r). Let the unit vector of the line segment connecting them be d = r -q |r -q| , then because r, q ∈ U(p), d is almost orthogonal to n(p), d, n(p) = O(ε). ( 13 ) On the other hand, d is along the normal direction at π(q), n(π(q)) = ±d, assume d is along n(π(q)), from Eqn. 12, we obtain |d -n(p)| = O(ε). ( 14 ) Eqn. 13 and Eqn. 14 contradict each other. Therefore π |U(p) is bijective. Then we show that π restricted on each face is a diffeomorphism. Let r(u, v), n(u, v) be position and normals of M respectively, where (u, v) are local parameters along the principal directions. t ∈ T is a planar face. The inverse closest point projection map is π -1 : r(u, v) → q(u, v), where q(u, v) is the intersection between the ray through r(u, v) along n(u, v) and the face t, q(u, v) = r(u, v) + s(u, v)n(u, v), direct computation shows q u × q v , n = (1 + 2Hs + Ks 2 ) r u × r v , n , (15) where s = O(ε 2 ). When ε is small enough, the above equation is close to 1, which means π |U(P)| is a piecewise diffeomor- phism. Secondly, we show that π is a global homeomorphism. We have shown that π is a covering map. At each vertex of T , the closest point equals itself, therefore the degree of π is 1. So π is a global homeomorphism. Note that, the estimation of the Jacobian of the closest point projection in Eqn. 15 can be applied to show the following. Suppose B ⊂ R 3 is a Borel set, then |area(B ∩ T ) -area(π(B) ∩ M)| = Kε 2 . 
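The closest point projection pi used throughout this section can also be evaluated numerically. Below is a minimal Gauss-Newton sketch on the stationarity conditions <q - r(u,v), r_u> = 0 and <q - r(u,v), r_v> = 0, dropping second-derivative terms as usual in Gauss-Newton; the Surface functor (position and first derivatives, e.g. from a spline evaluator) and the starting parameters (for instance those of the natural projection) are assumed to be supplied.

#include <array>
#include <cmath>

using V3 = std::array<double, 3>;
static V3 sub3(const V3& a, const V3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static double dot3(const V3& a, const V3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

struct SurfaceEval { V3 r, ru, rv; };   // position and first derivatives at (u, v)

template <class Surface>   // Surface: (u, v) -> SurfaceEval
std::array<double, 2> closest_point(const Surface& S, const V3& q,
                                    double u, double v, int max_iters = 20) {
    for (int it = 0; it < max_iters; ++it) {
        SurfaceEval s = S(u, v);
        V3 d = sub3(q, s.r);
        double f1 = dot3(d, s.ru), f2 = dot3(d, s.rv);                 // residuals
        double E = dot3(s.ru, s.ru), F = dot3(s.ru, s.rv), G = dot3(s.rv, s.rv);
        double det = E*G - F*F;                                        // first fundamental form
        double du = ( G*f1 - F*f2) / det;                              // Gauss-Newton step
        double dv = (-F*f1 + E*f2) / det;
        u += du; v += dv;
        if (std::abs(du) + std::abs(dv) < 1e-12) break;
    }
    return {u, v};
}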
Proof of the Main Theorem The proof of the main Theorem 3.4. associated with the closest point projection π is a simple corollary of the following main theorem in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF]. Theorem 4.11: Suppose T is a bounded aspect ratio triangulation projecting homeomorphically on M, if B is a relative interior of a union of triangles of T , then |φ G T (B) -φ G M (π(B))| ≤ Kε (16) |φ H T (B) -φ H M (π(B))| ≤ Kε ( 17 ) where for fixed M K = O( ∑ {t∈T,t⊂ B} r(t) 2 ) + O( ∑ {t∈T,t⊂ B,t∩∂ B = / 0} r(t)), r(t) is the circumradius of triangle t. Proof (Closest point projection): By Lemma 4.10, the closest point projection is a homeomorphism. By Theorem 3.3, the triangulation T has a bounded aspect ratio, therefore the conditions of Theorem 4.11 are satisfied, and consequently, Eqns. 16 and 17 hold. According to Eqn. 7 in Lemma 4.1, therefore the main theorem holds. The proof of the main Theorem 3.4. associated with the natural projection η is more direct and more adapted to our framework. Proof (Natural projection): The natural projection η : T → M can be lifted to a mapping between the two normal cycles f : N(T ) → N(M), such that the following diagram commutes: N(M) f ← ----N(T ) i     p 1 M η ← ----T , where p 1 is the projection from E p × E n to E p , and i(q) = (q, n(q)) for all q ∈ M. Namely, given a point q ∈ T , and n(q) in its normal cone, (q, n(q)) ∈ N(T ), f : (q, n(q)) → (η(q), n • η(q)) ∈ N(M). By Lemma 4.8, |(q, n(q)) -f (q, n(q))| = O(ε). ( 18 ) It is obvious that f is continuous. Let B ⊂ E p , we denote the current N(T ) ∩ (B × E n ) by D, and the current N(M) ∩ (η(B) × E n ) by E, as shown in Fig. 7. Consider the affine homotopy h between f and the identity, D = N (T ) ∩ (B × En) E = N (M ) ∩ (B × En) C O(ε) (q, n) f (q, n) Fig. 7: Homotopy between the normal cycles N(T ) and N(M). h(x, •) = (1 -x)id(•) + x f (•), x ∈ [0, 1]. We define the volume swept by the homotopy as C = h # ([0, 1] × D), whose boundary is ∂C = E -D -h # ([0, 1] × ∂ D). Intuitively, C is a prism, the ceiling is E, the floor is D, and the walls are h # ([0, 1] × ∂ D). φ G M (η(B)) -φ G T (B) = E-D ω G = ∂C ω G + h # ([0,1]×∂ D) ω G . By Stokes' Theorem, ∂C ω G = C dω G . Both ω G and its exterior derivative dω G are bounded, therefore, we need to estimate the volume of block C and the area of the wall h # ([0, 1] × ∂ D). We use M(•) to denote the flat norm (volume, area, length). The volume of the prism C is bounded by the height and the section area. The height is bounded by sup| f -id|. The section area is bounded by the product of the bottom area M(D) and the square of the norm Dh(x, •) 2 = xD f + (1 -x)id 2 ≤ (x sup D f + (1 -x)) 2 . In later discussion, we will see that sup D f ≥ 1, therefore Dh(x, •) ≤ sup D f . We obtain M(C) ≤ M(D)sup| f -id|sup D f 2 , M(h # ([0, 1] × ∂ D)) ≤ M(∂ D)sup| f -id|sup D f . Now we estimate each term one by one. 1) Eqn. [START_REF] Cohen-Steiner | Second Fundamental Measure of Geometric Sets and Local Approximation of Curvatures[END_REF] shows sup| f -id| = O(ε). 2) Since the triangulation has a bounded ratio of circumradius to edge length, we obtain M(D) = O(∑ t∈T,t⊂ B r(t) 2 ) M(∂ D) = O(∑ t∈T,t⊂ B,t∩∂ B = / 0 r(t) ). Let K be the summation of the two terms above. According to Lemma 4.1, K is bounded by the area of B and the length of ∂ B. 
3) For the estimation of D f , we observe that on each triangle t ∈ D, the mapping τ converges to dϕ, so D f on each triangle converges to (r u , 0)du + (r v , 0)dv → (r u , n u )du + (r v , n v )dv, (r u , n u ), (r u , n u ) (r u , n u ), (r v , n v ) (r v , n v ), (r u , n u ) (r v , n v ), (r v , n v ) = e 2λ id +III, (19) where the third fundamental form is The proof for the mean curvature measure is exactly the same. Remark 1. In our proofs, perfect conformality is unnecessary. All the proofs are based on one requirement: the max circumcircle of the triangles of the tessellations converge to zero. This only requires the parameterization to be K-quasiconformal, where K is a positive constant, less than ∞. III = n u , n u n u , n v n v , n u n v , n v . 2. It is well known that the Gauss curvature is defined on any (abstract) Riemannian surface. By the Nash theorem [START_REF] Nash | C1 Isometric Imbeddings[END_REF] [49], any (abstract) Riemannian surface can be isometrically embedded in a high-dimensional Euclidean space. Using the theory of normal cycle for large codimension submanifolds of Euclidean space, the inequalities (3) and (5) in Theorem 3.4 can be extended to any abstract Riemannian surface, the approximation depending on the chosen embedding. COMPUTATIONAL ALGORITHM We verified our theoretical results by meshing spline surfaces and comparing the Gaussian and mean curvature measures. Each spline patch M is represented as a parametric smooth surface defined on a planar rectangle γ : R → R 3 , where R is the planar rectangle parameter domain, the position vector γ is C 2 continuous, therefore the classical curvatures are well As shown in Fig. 8, in our experiments, each planar domain or surface S (S ∈ {D, R, M}), is approximated by two triangle meshes, T k S , k = 0, 1, where the T 0 S is induced by the regular grid on the rectangle; T 1 S is induced by the Delaunay triangulation on the unit disk. Both the conformal parameterization ϕ and the parameter domain mapping f are approximated by piecewise linear (PL) mappings, φ and f , respectively, which are computed on the meshes. Algorithm Pipeline Conformal Parametrization In the first stage, the conformal parameterization is computed as follows: f -1 : T 0 R T 0 M T 0 D E γ E φ-1 T 0 R is a triangulation induced by the regular grid structures on the rectangle R. Each vertex on T 0 R is mapped to the spline surface M by γ, each face is mapped to a Euclidean triangle, this gives the mesh T 0 M . If the grid tessellation is dense, the quality of the mesh T 0 M is good enough for performing the Ricci flow and we get the PL mapping φ-1 , which maps T 0 M to a triangulation of the disk T 0 D . The composition of φ and γ -1 gives the PL mapping f = γ -1 • φ : T 0 D → T 0 R . Resampling and Remeshing The process in the second stage is described in the following diagram: φ : T 1 D T 1 R T 1 M E f E γ First, we apply Ruppert's Delaunay refinement method to generate the triangulation T 1 D good quality on the unit disk. The triangulation on the disk T 1 D is mapped to a triangulation T 1 R on the rectangle by the PL mapping f : T 0 D → T 0 R . The connectivity of T 1 R is the same as that of T 1 D . The vertices of T 1 R are the images of the vertices of T 1 D under the PL mapping f , which are calculated as follows. Suppose q is a Delaunay vertex of T 1 D on the disk, covered by a triangle [p 0 , p 1 , p 2 ] ∈ T 0 D . Assume the barycentric coordinates of q are (α 0 , α 1 , α 2 ), q = ∑ k α k p k , then f (q) = ∑ k α k f (p k ). 
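The resampling step just described amounts to a barycentric evaluation of the piecewise-linear map. The sketch below computes the barycentric coordinates of a refined vertex q inside its covering triangle [p0, p1, p2] of T^0_D and forms f(q) as the same combination of f(p0), f(p1), f(p2); point location in T^0_D is assumed to be provided elsewhere, and the names are illustrative.

#include <array>

using P2 = std::array<double, 2>;

std::array<double, 3> barycentric(const P2& q, const P2& p0, const P2& p1, const P2& p2) {
    double det = (p1[0]-p0[0])*(p2[1]-p0[1]) - (p2[0]-p0[0])*(p1[1]-p0[1]);
    double a1  = ((q[0]-p0[0])*(p2[1]-p0[1]) - (p2[0]-p0[0])*(q[1]-p0[1])) / det;
    double a2  = ((p1[0]-p0[0])*(q[1]-p0[1]) - (q[0]-p0[0])*(p1[1]-p0[1])) / det;
    return {1.0 - a1 - a2, a1, a2};   // weights of p0, p1, p2
}

// fp0, fp1, fp2 are the images f(p0), f(p1), f(p2) on the rectangle R.
P2 map_through_f(const P2& q, const P2& p0, const P2& p1, const P2& p2,
                 const P2& fp0, const P2& fp1, const P2& fp2) {
    std::array<double, 3> a = barycentric(q, p0, p1, p2);
    return { a[0]*fp0[0] + a[1]*fp1[0] + a[2]*fp2[0],
             a[0]*fp0[1] + a[1]*fp1[1] + a[2]*fp2[1] };
}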
The triangulation T 1 R induces a triangle mesh T 1 M , whose connectivity is that of T 1 R , vertices of T 1 M are the images of those of T 1 R under the spline mapping γ. The discrete PL conformal mapping is given by φ = γ • f : T 1 D → T 1 M . The triangle mesh generated by the Delaunay refinement based on conformal parameterization is T 1 M . Fig. 9 shows the meshing results using the proposed method for a car model. In this experiment, the conformal parameter domain D is also a rectangle. Frame (a) shows a B-spline surface patch M; Frame (b) shows the initial triangle mesh T 0 M ; Frame (c) shows the triangulations on the conformal parameter domain, T 0 D on the top and T 1 D at the bottom; Frames (d), (e) and (f) illustrate the triangle meshes generated by the Delaunay refinement on a conformal parameter domain with a different number of samples, 1K, 2K, and 4K, respectively. EXPERIMENTAL RESULTS The meshing algorithms are developed using generic C++ on a Windows platform, all the experiments are conducted on a PC with Intel Core 2 CPU, 2.66GHz, 3,49G RAM. Triangulation Quality The patch on the Utah teapot (see Fig. 8) is meshed with different sampling densities, the meshes are denoted as {T n } 11 n=1 as in Tab. 2. The statistics of the meshing quality are reported in Fig. 10. Frame (a) shows the maximal circumradius of all the triangles of each mesh. Frame (b) is the average circumradius of all the triangles of each mesh. Because the sampling is uniform, we expect the circumradius ε n vs. the number of vertices s n to satisfy the relation ε n ∼ 1 √ s n . The curve in Frame (b) perfectly meets our expectations. Frames (c) and (d) show the minimal angles on all meshes. According to the theory of Rupert's Delaunay refinement, the minimal angle should be no less than 20.7 • . Frame (c) shows the minimal angles; in our experiments they are no less than 20.9 • . Frame (d) illustrates the means of the minimal angles, which exceed 46.5 • . Curvature Measure Comparisons For each triangle mesh T k produced by our method, for each vertex q ∈ T k , we define a small ball in R 3 , B(q, r) centered at q with radius r. We then calculate the curvature measures φ G T k (B(q, r)) and φ H T k (B(q, r)) using the formulae Eqn. 1 and Eqn. 2, respectively. We also compute the curvature measures on the smooth surface M, φ G M (B(q, r)) and φ H M (B(q, r)) using the following method, φ G M (B(q, r)) := γ(u,v)∈B(q,r) G(u, v)g(u, v)dudv, where γ(u, v) is the point on the spline surface, G(u, v) is the Gaussian curvature at γ(u, v), and g(u, v) is the determinant of the metric tensor. Because the spline surface is C 2 continuous, all the differential geometric quantities can be directly computed using the traditional formulas. Note that, because M and T k are very close, we use B(q, r) ∩ T k to replace π(B(q, r)) ∩ M in practice. In all our experiments, we set r to be 0.05area(M) 1 2 and 0.08area(M) 1 2 for Gaussian and mean curvature measures, respectively. We define the average errors between curvature measures as e G n = 1 |V n | ∑ v∈V n |φ G M (B(v, r)) -φ G T n (B(v, r))|, and e H n = 1 |V n | ∑ v∈V n |φ H M (B(v, r)) -φ H T n (B(v, r))|, where V n is the vertex set of T n . Figure 11 shows the errors between curvature measures with respect to sampling densities, or equivalently, the number of samples and the average circumradius. 
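The error statistics e^G_n and e^H_n reduce to an average of absolute differences once the two per-vertex measure arrays are available; a minimal sketch is given below, where the discrete values are assumed to come from Eqns. (1)-(2) over B(v, r) and the smooth values from quadrature on the spline as described above.

#include <cmath>
#include <cstddef>
#include <vector>

double average_measure_error(const std::vector<double>& phi_mesh,
                             const std::vector<double>& phi_smooth) {
    double e = 0.0;
    for (std::size_t v = 0; v < phi_mesh.size(); ++v)
        e += std::abs(phi_mesh[v] - phi_smooth[v]);
    return e / static_cast<double>(phi_mesh.size());
}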
Frames (a) and (b) show that the curvature measure errors are approximately proportional to the inverse of the square root of the number of sample points; Frames (c) and (d) show the curvature measure errors are approximately linear with respect to the circumradius. This again matches our main Theorem 3.4. Figure 12 visualizes the curvature distributions on the smooth patch M (left column), and the triangle mesh T 11 (right column). The histograms show the distributions of the relative curvature errors at the vertices of the mesh. From the two left-hand columns, we can see that the curvatures of M look very similar to their counterparts on T 11 . Moreover, from the right-hand column, we can find that the overwhelming majority of vertices have relative curvature errors very close to zeros. In particular, for Gaussian curvature measure, more than 97% of vertices are fall into the relative error range of (-0.05, 0.05). For mean curvature measure, more than 95% of vertices are included in the relative error range of (-0.05,0.05). This demonstrates the accuracy of the proposed method. CONCLUSION This work analyzes the surface meshing algorithm based on the conformal parameterization and the Delaunay refinement method. By using the normal cycle theory and the conformal geometry theory, we rigorously prove the convergence of curvature measures, and estimate the Hausdorff distance and the normal deviation. According to [START_REF] Hildebrandt | On the Convergence of Metric and Geometric Properties of Polyhedral Surfaces[END_REF], these theoretical results also imply the convergence of the Riemannian metric and the Laplace-Beltrami operator. The method can be generalized to prove the curvature convergence of other meshing algorithms, such as the centroidal voronoi tessellation method, and so on. The normal cycle theory is general to arbitrary dimension. We will generalize the theoretical results of this work to include higher dimensional discretizations, such as volumetric shapes. We will explore these directions in the future. 3 . 4 ) 34 Curvature measures: we show the Delaunay refinement method on the conformal parameter domain generates κ-light ε-sample, which guarantees the convergence of curvature measures. Moreover, we show that the bounds of the curvature measures are Kε, where K is O(area(B))+O(length(∂ B)), and are independent of the triangulations, see Theorem 3.4 and section 4.4.4. Definition 3 . 1 : 31 The Gaussian curvature measure of M, φ G M , is the function associated with each Borel set B ⊂ R 3 , φ G M (B) = B∩M G(p)d p where G(p) is the Gaussian curvature of M at point p. Similarly, the mean curvature measure φ H M is given by φ H M (B) = B∩M H(p)d p where H(p) denotes the mean curvature of M at point p. Fig. 1 : 1 Fig. 1: Uniformization for closed surfaces. Fig. 2 : 2 Fig. 2: Uniformization for surfaces with boundaries. Lemma 4 . 1 : 41 The following estimation holds ∑ t⊂ B r(t) 2 + ∑ t⊂ B,t∩∂ B = / 0 r(t) = O(area(B)) + O(length(∂ B)). Definition 4 . 2 : 42 The normal cycle N(M) of a C 2 -smooth surface M is the current associated with the set N(M) := {(p, n(p))|p ∈ M} 2 Fig. 4 :Definition 4 . 3 :Definition 4 . 4 : 244344 Fig. 4: Additivity of the normal cycle. polyhedral surface. We use normal cones to replace normal vectors. Definition 4.3: The normal cone NC V (p) of a point p ∈ V is the set of unit vectors v such that ∀q ∈ V, qp, v ≤ 0. 
Definition 4.4: The normal cycle of M is the current associated with the set {(p, n(p))|p ∈ M, n ∈ NC V (p)} endowed with the orientation induced by the one of M. As in figure 4, normal cycles are graphically represented by their image under the map sending (p, n(p)) to p + n(p).The crucial property of the normal cycle is its additivity as shown in Fig.4. Suppose V 1 and V 2 are two convex bodies in R 3 , such that V 1 ∪V 2 is convex, then Fig. 5 : 5 Fig. 5: Configuration. where r(u, v) and n(u, v) are the position and normal vectors of the smooth surface M, (u, v) the conformal parameters, namely, |r u | = e λ , |r v | = e λ and r u ⊥ r v . Assume (du, dv) = (cos θ , sin θ ) for any angle θ , we obtain that the norm of the tangent vector on the left hand side is e λ . The norm of the vector on the right hand side is bounded by the eigenvalues of the following matrix From III -2HII + GI = 0, where the first fundamental form I = e 2λ id, the second fundamental form II = e 2λ W , W is the Weigarten matrix, we get III = 2HII -GI = e 2λ (2HW -Gid). Plugging into Eqn. 19, we get D f 2 bounded by the eigenvalues of (1 -G)id + 2HW, therefore on each face D f 2 ≤ max{1 + k 2 1 , 1 + k 2 2 }. So D f 2 is globally bounded. Putting all the estimates together, we obtain |φ G M (η(B)) -φ G T (B)| ≤ Kε. According to Lemma 4.1, K is bounded by the area of B and the length of ∂ B. defined. Let ϕ : D → M be the conformal mapping from the unit disk D to the spline surface M. As shown in the lefthand diagram in Diagram (20), the mapping f is from D to R, which makes the diagram commute, therefore f = γ -1 • ϕ. 1 MFig. 8 : 18 Fig. 8: Pipeline for meshing a Bézier patch of Utah teapot. 1 DFig. 9 : 19 Fig. 9: Remeshing of the Car spline surface model. Fig. 10 : 10 Fig. 10: The maximal and average circumradii {ε n } (a-b), and the minimal and average of minimal angles of {T n } (c-d). : T11 to T1 Ave. Err. mean cur. Fig. 11 : 11 Fig. 11: Curvature errors e G n and e H n of {T n } converge to zeros as the number of sample points goes to infinity (a-b), and as the average of the circumradii {ε n } goes to zero (c-d). Fig. 12 : 12 Fig. 12: Illustration of the curvature values on the Utah teapot spline surface patch M, (a, d), and on its approximate mesh T 11 (b, e). Their relative curvature error distribution histograms are shown in (c) and (f). TABLE 2 : 2 The numbers of vertices and triangles of the sequence of meshes {T n } with different resolutions. ACKNOWLEDGMENTS This work was supported under the grants ANR 2010 INTB 0301 01, NSF DMS-1221339, NSF Nets-1016829, NSF CCF-1081424 and NSF CCF-0830550. Huibin Li received a BSc degree in mathematics from Shaanxi Normal University, Xi'an, China, in 2006, and a Master's degree in applied mathematics from Xi'an Jiaotong University, Xi'an, China, in 2009. He is currently a PhD candidate in mathematics and computer science at Ecole Central de Lyon, France. His research interests include discrete curvature estimation, 3D face analysis and recognition. Wei
57,820
[ "7562" ]
[ "403930", "303540", "193738", "403930", "361557" ]
01487863
en
[ "phys" ]
2024/03/04 23:41:48
2017
https://theses.hal.science/tel-01487863/file/72536_LI_2017_archivage.pdf
M. Hao, M. Wim Desmet (Professeur), M. Alain Le Bot, M. Antonio Huerta (Professeur), Professeur Émérite, M. Hervé Riou

On wave based computational approaches for heterogeneous media

Keywords: structures, materials

Doctoral thesis of the Université Paris-Saclay, prepared at the École Normale Supérieure de Cachan (École normale supérieure Paris-Saclay)

Introduction

Nowadays, numerical simulation has become indispensable for analysing and optimising problems in every part of the engineering process. By avoiding real prototypes, virtual testing drastically reduces the cost and at the same time greatly speeds up the design process. In the automotive industry, for instance, enterprises must abide by the standards against pollution and therefore aim to produce lighter vehicles with improved passenger comfort. However, decreasing the weight of a vehicle often makes it more susceptible to vibrations, which are mainly generated by acoustic effects, so designers have to take all these factors into account in the conception of the automotive structure. Another example is the aerospace industry: given a limited budget, designers endeavour to minimise the total mass of the launcher while abating the increased vibrations. A last example is the construction of harbors agitated by ocean waves: to accommodate as many vessels as possible and to alleviate the water agitation, designers look for an optimised geometry of the harbor.

Characterised by the frequency response function, a vibration problem in mechanics can be classified into three ranges, as shown in Figure 1. The low-frequency range is characterized by the local response. The resonance peaks are distinct from one another, and the behavior of the vibration can be represented by the combination of several normal modes. The Finite Element Method (FEM) [START_REF] Zienkiewicz | The finite element method[END_REF] is most commonly used to analyse low-frequency vibration problems. Making use of polynomial shape functions to approximate the vibration field, the FEM gives an efficient and robust performance. Considerable commercial software based on this method is well developed and widely used in industry, and many researchers continue to develop it with respect to intensive and parallel computation techniques.

In the high-frequency range, the dimension of the object is much larger than the wavelength. There exist many small overlapping resonance peaks, and the system is extremely sensitive to uncertainties. In this context, the Statistical Energy Analysis (SEA) [Lyon et Maidanik, 1962] was developed to solve vibration problems in this range. The SEA method neglects the local response; instead, it studies the global energy by taking averages and variances of the dynamic field over large sub-systems. These features make the SEA perform well in the high-frequency range but, on the other hand, limit its use to this range only: the SEA becomes incapable when facing low-frequency and mid-frequency problems.

Figure 1: A typical frequency response function divided in low-, mid-and high-frequency zones [Ohayon et Soize, 1998].

In the mid-frequency range, the problem is characterised by an intense modal densification. Thus it contains both the characteristics of the low-frequency and of the high-frequency problem.
It presents many high and partially overlapping resonance peaks. In this reason, the local response could not be neglected as in high-frequency range. In addition, the system is very sensible to uncertainties. Due to these features, the methods for low-frequency or high-frequency such as the FEM and the SEA could not be applied to mid-frequency problem. For high-frequency method, the neglecting of local response will lead to its undoing. For the low-frequency method, the need of prohibitively increased refinement of mesh will be its undoing due to the pollution effect [START_REF] Deraemaeker | Dispersion and pollution of the FEM solution for the Helmholtz equation in one, two and three dimensions[END_REF]. Facing to mid-frequency problem, one category of approaches could be classified into the extensions of the standard FEM, such as the Stabilized Finite Element Methods including the Galerkin Least-Squares FEM [Harari et Hughes, 1992] the Galerkin Gradient Least-Squares FEM (G∇LS-FEM) [Harari, 1997], the Variational Multiscale FEM [Hughes, 1995], The Residual Free Bubbles method (RFB) [START_REF] Franca | Residual-free bubbles for the Helmholtz equation[END_REF], the Adaptive Finite Element method [Stewart et Hughes, 1997b]. There also exists the category of energy based methods, such as the Hybrid Finite Element and Statistical Energy Analysis (Hybrid FEM-SEA) [De Rosa et Franco, 2008, De Rosa et Franco, 2010], the Statistical modal Energy distribution Analysis [START_REF] Franca | Residual-free bubbles for the Helmholtz equation[END_REF], the Wave Intensity Analysis [Langley, 1992], the Energy Flow Analysis [START_REF] Belov | Propagation of vibrational energy in absorbing structures[END_REF][START_REF] Buvailo | [END_REF], the Ray Tracing Method [START_REF] Krokstad | Calculating the acoustical room response by the use of a ray tracing technique[END_REF], Chae et Ih, 2001], the Wave Enveloppe Method [Chadwick et Bettess, 1997]. Other approaches have been developed in order to solve mid-frequency problem, namely the Trefftz approaches [Trefftz, 1926]. They are based on the use of exact ap-proximations of the governing equation. Such methods are, for example, the partition of unity method (PUM) [Strouboulis et Hidajat, 2006], the ultra weak variational method (UWVF) [Cessenat et Despres, 1998a, Huttunen et al., 2008], the least square method [Monk et Wang, 1999, Gabard et al., 2011], the plane wave discontinuous Galerkin methods [START_REF] Gittelson | Plane wave discontinuous Galerkin methods: analysis of the h-version[END_REF], the method of fundamental solutions [Fairweather et Karageorghis, 1998, Barnett et Betcke, 2008] the discontinuous enrichment method (DEM) [START_REF] Farhat | The discontinuous enrichment method[END_REF], Farhat et al., 2009], the element free Galerkin method [Bouillard et Suleaub, 1998], the wave boundary element method [START_REF] Perrey-Debain | Wave boundary elements: a theoretical overview presenting applications in scattering of short waves[END_REF], Bériot et al., 2010] and the wave based method [START_REF] Desmet | An indirect Trefftz method for the steady-state dynamic analysis of coupled vibro-acoustic systems[END_REF], Van Genechten et al., 2012]. The Variational Theory of Complex Rays (VTCR), first introduced in [Ladevèze, 1996], belongs to this category of numerical strategies which use waves in order to get some approximations for vibration problems. 
It has been developed for 3-D plate assemblies in [Rouch et Ladevèze, 2003], for plates with heterogeneities in [START_REF] Ladevèze | A multiscale computational method for medium-frequency vibrations of assemblies of heterogeneous plates[END_REF], for shells in [START_REF] Riou | Extension of the Variational Theory of Complex Rays to shells for medium-frequency vibrations[END_REF], and for transient dynamics in [START_REF] Chevreuil | Transient analysis including the low-and the medium-frequency ranges of engineering structures[END_REF]. Its extensions to acoustics problems can be seen in [START_REF] Riou | The multiscale VTCR approach applied to acoustics problems[END_REF], Ladevèze et al., 2012, Kovalevsky et al., 2013]. In [START_REF] Barbarulo | Proper generalized decomposition applied to linear acoustic: a new tool for broad band calculation[END_REF] the broad band calculation problem in linear acoustic has been studied. In opposition to FEM, the VTCR has good performances for medium frequency applications, but is less efficient for very low frequency problems. Recently, a new approach called the Weak Trefftz Discontinuous Galerkin (WTDG) method is first introduced in [Ladevèze et Riou, 2014]. It differs from the pure Trefftz methods, because the necessity to use exact solution of the governing equations can be weaken. This method could achieve the hybrid use of the FEM (based on polynoms) and the VTCR (based on waves) approximations at the same time in different adjacent subdomains of a problem. Therefore for a global system which contains both low-frequency range vibration dominated sub-structures and mid-frequency vibration dominated substructures, the WTDG outperforms the standard FEM and the standard VTCR. Numerous methods for solving the mid-frequency range problem are presented above and among them those issued from Trefftz method seem more efficient. However most of them are limited to constant wave number Helmholtz problem. In other word, the system is considered as piecewise homogeneous medium. The reason lies on the fact that it is easy to find free space solutions of the Helmholtz equation with a constant wave number. It is not necessarily the case when the wave number varies in space. Indeed the spatially constant wave number is encountered in some applications of the Helmholtz equation, such as the wave propagation in geophysics or electromagnetics and underwater acoustics in large domains. Therefore these mid-frequency range methods will make the numerical result deviate from the real engineering problem. To alleviate this phenomenon, the UWVF proposes special solutions in the case of a layered material in [START_REF] Luostari | Improvements for the ultra weak variational formulation[END_REF]. Its studies of the smoothly variable wave number problem in one dimension by making use of exponentials of polynomials to approximate the solution can be seen in [START_REF] Després | Generalized plane wave numerical methods for magnetic plasma[END_REF]. The DEM method also suggests special solutions in case of layered material in [START_REF] Tezaur | A discontinuous enrichment method for capturing evanescent waves in multiscale fluid and fluid/solid problems[END_REF] and its extension to the smoothly variable wave number problem can be seen in [START_REF] Tezaur | The discontinuous enrichment method for medium-frequency Helmholtz problems with a spatially variable wavenumber[END_REF]. 
For smoothly variable wave number, the DEM introduces special forms of wave functions to enrich the result. The objective of the dissertation is to deal with heterogeneous Helmholtz problem. First, one considers the media with the square of wave number varying linearly. It is resolved by extending the VTCR. Then a general way to handle heterogeneous media by the WTDG method is proposed. In this case, there is no a priori restriction for the wave number. The WTDG solves the problem by approximately satisfying the governing equation in each subdomain. In extended VTCR, one solves the governing equation by the technique of separation of variables and obtains the general solution in term of Airy functions. However the direct use of Airy functions as shape functions suffer from numerical problem. The Airy wave function is a combination of Airy functions. They are built in the way that they tends towards the plane wave functions asymptotically when the wave number varies slowly. Through academic studies, the convergence properties of this method are illustrated. In engineering the heterogeneous Helmholtz problem often exists in harbor agitation problem [START_REF] Modesto | Proper generalized decomposition for parameterized Helmholtz problems in heterogeneous and unbounded domains: application to harbor agitation[END_REF]. Therefore a harbor agitation problem solved by the extended VTCR further gives a scope of its performance in engineering application [Li et al., 2016a]. In the WTDG method, one locally develops general approximated solution of the governing equation, the gradient of the wave number being the small parameter. In this ways, zero order and first order approximations are defined. These functions only satisfy the local governing equation in the average sense. In this dissertation, they are denoted by the Zero Order WTDG and the First Order WTDG. The academic studies are presented to show the convergence properties of the WTDG. The harbor agitation problem is again solved by the WTDG method and a comparison with the extended VTCR is made [START_REF] Li | On weak Trefftz discontinuous Galerkin approach for medium-frequency heterogeneous Helmholtz problem[END_REF]. Lastly the WTDG is extended to mix the polynomial and the wave approximations in the same subdomains, at the same time. In this dissertation it is named FEM/WAVE WTDG method. Trough numerical studies, it will be shown that such a mix approach presents better performances than a pure FEM approach (which uses only a polynomial description) or a pure VTCR approach (which uses only a wave description). In other words, this Hybrid FEM/WAVE WTDG method could well solve the vibration problem of both low-frequency and mid-frequency range [START_REF] Li | Hybrid Finite Element method and Variational Theory of Complex Rays for Helmholtz problems[END_REF]. This dissertation is divided into five chapters. Chapter 1 is the description of the reference problem and the relevant literature analysis. Chapter 2 recalls the VTCR in the constant wave number acoustic Helmholtz problem and its cardinal results in previous work of VTCR. Chapter 3 addresses the Extended VTCR in slowly varying wave number heterogeneous Helmholtz problem. Chapter 4 illustrates the the Zero Order and the First Order WTDG in heterogeneous Helmholtz problem. Chapter 5 presents the FEM/WAVE WTDG method to constant wave number low-frequency and mid-frequency Helmholtz problem. The last Chapter draws the final remarks and conclusions. 
Chapter 1 Bibliographie

The purpose of this chapter is to briefly introduce the principal computational methods developed for structural vibrations and acoustics. Numerous methods exist to date: some are commonly adopted by industry, while others are still at the research stage. Depending on the frequency range of the problem, these methods can be broadly classified into three categories, namely the polynomial methods, the energetic methods and the wave-based methods, developed respectively for low-frequency, high-frequency and mid-frequency problems. This chapter cannot cover every detail of each method, but the essential ideas and features are illustrated in the context of Helmholtz-related problems.

The finite element method (FEM) is a predictive technique applied to a rewriting of the reference problem into an equivalent weak formulation. The domain is then discretized into a finite number of elements. In each element, the vibrational field (the acoustic pressure of the fluid or the displacement of the structure) is approximated by polynomial functions. These functions are not exact solutions of the governing equation, so the FEM requires a fine discretization to obtain an accurate solution. The weak formulation can generally be written as a(u,v) = l(v), where a(•,•) is a bilinear form and l(•) is a linear form. This formulation can be obtained from the virtual work principle or from the minimisation of the energy of the system. Note that the working space of u is U = {u | u ∈ H^1, u = u_d on ∂Ω_{u_d}}, with v ∈ H^1_0, where ∂Ω_{u_d} denotes the part of the boundary ∂Ω on which a Dirichlet boundary condition is imposed. In other words, the functions of the working space must satisfy the displacement imposed on the boundary. The formulation is then solved in a finite dimensional basis of the working space. The domain Ω is discretized into numerous small elements Ω_E such that Ω̃ = ∪_{E=1}^{n_E} Ω_E, Ω̃ ≃ Ω and Ω_E ∩ Ω_{E'} = ∅, ∀E ≠ E'. This discretization allows one to approximate the Helmholtz problem by a piecewise polynomial basis whose support is locally defined by Ω_E:

u(x) ≃ u_h(x) = ∑_{e=1}^{N_E} u^E_e φ^E_e(x), x ∈ Ω_E    (1.1)

When the vibration becomes highly oscillatory, a large number of piecewise polynomial shape functions must be used. It has been proved in [Ihlenburg et Babuška, 1995, Bouillard et Ihlenburg, 1999] that the error is bounded by

ε ≤ C_1 (kh/p)^p + C_2 kL (kh/p)^{2p}    (1.2)

where C_1 and C_2 are constants, k is the wave number of the problem, h is the maximum element size and p is the degree of the polynomial shape functions. This error contains two terms. The first term represents the interpolation error caused by the fact that an oscillating field is approximated by polynomial functions. It is the predominant term for low-frequency problems and can be kept small by keeping the product kh constant [Thompson et Pinsky, 1994]. The second term represents the pollution error due to numerical dispersion [START_REF] Deraemaeker | Dispersion and pollution of the FEM solution for the Helmholtz equation in one, two and three dimensions[END_REF] and becomes preponderant when the wave number increases. Unlike the first term, the second term can only be kept small if the element size h is reduced drastically, which leads to a prohibitively expensive computational cost, as the short sketch below illustrates.
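The following small C++ sketch evaluates the bound (1.2), with the interpolation term C_1 (kh/p)^p and the pollution term C_2 kL (kh/p)^{2p}; the constants C_1 and C_2 are problem dependent and are set to 1 here purely for illustration.

#include <cmath>
#include <cstdio>

double fem_error_bound(double k, double h, double L, int p,
                       double C1 = 1.0, double C2 = 1.0) {
    double t = k * h / p;
    return C1 * std::pow(t, p) + C2 * k * L * std::pow(t, 2 * p);
}

int main() {
    double k = 50.0, L = 1.0;
    int p = 1;
    // Keeping kh constant controls the first term only; the pollution term
    // still grows with k, so h must decrease faster than 1/k.
    for (double h = 0.1; h > 1e-4; h *= 0.5)
        std::printf("h = %8.5f   bound = %g\n", h, fem_error_bound(k, h, L, p));
    return 0;
}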
This drawback of FEM inhibits it to solve mid-frequency problem. The extension of FEM The adaptive FEM To counteract the interpolation error and the pollution effect, reducing the size h and augment the order p of the polynomial could both be the solutions. Respectively they are called h-refinement and p-refinement. For a given problem, a refinement of mesh will create a large number of degrees of freedom. It it wiser to use a refinement of mesh only on the severely oscillating or shape gradient region and other case the coarse mesh instead. Therefore a posteriori error indicator is proposed. The idea is to give a first rough analysis and to evaluate the local error by the error indicator created. Then it is to add a refinement on specific region depending on the local error. This kind of technique could be seen in [Ladevèze et Pelle, 1983, Ladevèze et Pelle, 1989] for structures, in [Bouillard et Ihlenburg, 1999, Stewart et Hughes, 1996[START_REF] Irimie | [END_REF] for acoustics and in [START_REF] Bouillard | A waveoriented meshless formulation for acoustical and vibro-acoustical applications[END_REF] for the coupling of vibro-acoustics. Depending on different way to achieve the refinement, the corresponding techniques could be classified into p-refinement, h-refinement and hp-refinement. p-refinement introduces high order polynomial shape functions on the local region without changing the mesh [Komatitsch et Vilotte, 1998, Zienkiewicz et Taylor, 2005]. Conversely, h-refinement only refines the mesh without changing the shape functions [Stewart et Hughes, 1997a, Tie et al., 2003]. Of course hp-refinement is the combination of the two former methods [START_REF] Demkowicz | Toward a universal hp adaptive finite element strategy, part 1. constrained approximation and data structure[END_REF], Oden et al., 1989, Rachowicz et al., 1989]. Although the adaptive FEM outperforms the standard FEM and considerably reduces the unnecessary cost of computer resource, it still suffers from the pollution effect and expensive computational cost in mid-frequency problem. The stabilized FEM As one knows that when wave number increases, it will create the numerical dispersion problem due to the bilinear form. Because in this case the quadratic form associated to the bilinear form will risk losing its positivity [START_REF] Deraemaeker | Dispersion and pollution of the FEM solution for the Helmholtz equation in one, two and three dimensions[END_REF]. To alleviate this problem, some methods are proposed to modify the bilinear form in order to stabilize it. The Galerkin Least-Squares FEM (GLS-FEM) proposes to modify the bilinear form by adding a term to minimize the equilibrium residue [Harari et Hughes, 1992]. It is fully illustrated in [Harari et Hughes, 1992], the pollution effect is completely counteracted in 1D acoustic problem. However in the coming work [Thompson et Pinsky, 1994] it shows that facing to higher dimension problems, this method is not as successful as in 1D problem. It could only eliminate the dispersion error along some specific directions. The Galerkin Gradient Least-Squares FEM (G∇LS-FEM) is similar to the GLS-FEM method. The only difference is that the G∇LS-FEM adds a term to minimize the gradient of the equilibrium residue [Harari, 1997]. It shows that its performance depends on the problems. It deteriorates the solution quality in acoustic problem. In the mean time, however, it well performs in the elastic vibration problems. 
Conversely to the GLS-FEM, the G∇LS-FEM offsets the dispersion error in all directions on the 2D problem. The Quasi Stabilized FEM (QS-FEM) paves a way to modify the matrix rather than the bilinear form. The objective is to suppress the dispersion pollution in every direction. It is proved that this method could eliminate totally the dispersion error on 1D problem. For the 2D problem, it is valid under the condition that regular mesh is used [START_REF] Babuška | A generalized finite element method for solving the Helmholtz equation in two dimensions with minimal pollution[END_REF]. The Multiscale FEM The Variational Multiscale (VMS) is first introduced in [Hughes, 1995]. Based on the hypothesis that the solution could be decomposed into u = u p + u e where u p ∈ U p is the solution associated with the coarse scale and u e ∈ U e is the solution associated with the fine scale. The coarse solution u p could be calculated with the standard FEM method. Compared to the characteristic length of coarse scale, the mesh size h of the FEM is small. But on the other hand, h is rather big, compared to the fine scale. Therefore u e needs to be calculated analytically. The solution is split into two scale solutions. This nature could generate two variational problems. In this case, this method is to find u p + u e ∈ U p ⊕ U e such that a(u p ,v p ) + a(u e ,v p ) = b(v p ) ∀v p ∈ U p a(u p ,v e ) + a(u e ,v e ) = b(v e ) ∀v e ∈ U e (1. 3) The functions of fine scale u e has the zero trace on the boundary of each element. Let us denote the integrating by part as a(u e ,v p ) = (u e ,L * v p ) ∀v p ∈ U p a(u p + u e ,v e ) = (L(u p + u e ),v e ) ∀v e ∈ U e (1.4) where L * is the adjoint operator of L. In addition, the linear form b(v) only contains the terms of sources b(v) = Ω f vdV (1.5) where f represents the source. By denoting ( f ,v) Ω = Ω f vdV , (1.3) could be rewritten in the form of a(u p ,v p ) + (u e ,L * v p ) = b(v p ) ∀v p ∈ U p (Lu e ,v e ) Ω = -(Lu p -f ,v e ) Ω ∀v e ∈ U e (1.6) It could be seen that the second equation describes the fine scale and the solution u e strongly depends on the residue of equilibrium Lu pf . Therefore the second equation of (1.6) is solvable and u e could be expressed as u e = M(Lu p -f ) (1.7) where M is a linear operator. Replacing (1.7) into the first equation of (1.6), one could obtain the variational formulation only comprises u p in the form of a(u p ,v p ) + (M(Lu p -f ),L * v p ) Ω = b(v p ), ∀v p ∈ U p (1.8) Since u e has the zero trace on the boundary of each element, the expression (1.8) could be decomposed into each element without coupling terms. In [START_REF] Baiocchi | Virtual bubbles and Galerkin-least-squares type methods (Ga. LS)[END_REF], Franca et Farhat, 1995], the problem is solved in each element u e (x) = - Ω E g(x E ,x)(Lu p -f )(x E )dΩ E (1.9) where g(x E ,x) is the Green function's kernel of the dual problem of fine scale L * g(x E ,x) = δ(x) on Ω E g(x E ,x) = 0 on ∂Ω E (1.10) Approximating g(x E ,x) by the polynomial functions [Oberai et Pinsky, 1998]. This technique gives an exact solution on 1D problem. However on 2D the error depends on the orientation of waves. The Residual-Free Bubbles method (RFB) introduced in [START_REF] Franca | Residual-free bubbles for the Helmholtz equation[END_REF] is very similar to the VMS method. They base on the same hypothesis, which nearly leads to the same variation formulation as (1.8). 
The RFB modifies the linear operator M and has the variational formulation as follow: a(u p ,v p ) + (M RFB (Lu p -f ),L * v p ) Ω = b(v p ), ∀v p ∈ U p (1.11) The approximation space of the fine scale u h e is U p,RFB = ∪ n E E=1 U p,RFB,E . The spaces U p,RFB,E are generated by m + 1 bubble functions defined in each element U p,RFB,E = Vect b 1 , b 2 , • • • , b m , b f (1.12) The where ϕ e denotes the shape functions associated with the coarse scale. The function b f is the solution of Lb f = f on Ω E b f = 0 on ∂Ω E (1.14) Resolution of these equations in each element could be very expansive, especially on 2D and on 3D. In [Cipolla, 1999], infinity of bubble functions are added into the standard FEM space and the performance of this method is improved. Domain Decomposition Methods The Domain Decomposition Methods (DDM) resolves a giant problem by dividing it into several sub-problems. Even though the stabilized FEM could eliminate the numerical dispersion effect, it still resolve the problem in entirety. Facing to mid-frequency problem it still requires a well refined mesh. This phenomenon will give rise to expensive computational cost. The DDM provides a sub-problem affordable by a single computer. Moreover, the DDM is endowed with great efficiency when paralleling calculation is used. The Component Mode Synthesis (CMS) is a technique of sub-structuring dynamic. It is first introduced in [Hurty, 1965]. The entire structure is divided into several substructures, which are connected by the interfaces. Then the modal analysis is applied on each sub-structure. After obtaining the preliminary proper mode of each sub-structure, the global solution could be projected on this orthogonal base. Furthermore, by condensing the inside modes on the interfaces, the CMS highly reduces the numerical cost. Then considerable methods are developed from the CMS. These methods use different ways to handle the interfaces. Such as fixed interfaces [Hurty, 1965, Craig Jr, 1968], free interfaces [MacNeal, 1971], or the mix of fixed and free interfaces [Craig Jr et Chang, 1977]. The Automated Multi-Level Substructuring (AMLS) divides the substructures into several levels in the sense of numerical model of FEM. In this case the substructure is no longer a physical structure and the lowest level are elements of FEM. Then, by assembling the substructures of lower level, one could obtain a substructure of higher level. In work [Kropp et Heiserer, 2003], this method is proposed to study the vibro-acoustic problem inside the vehicle. The Guyan's decomposition introduced in [START_REF] Sandberg | Domain decomposition in acoustic and structure-acoustic analysis[END_REF] uses the condensed Degrees of Freedoms (DoFs). In fact some of the DoFs could be classified into slave nodes and master nodes. The idea of this method is to solve a system only described by the master nodes, which contains the information of its slave notes. The Finite Element Tearing and Interconnecting (FETI) is a domain decomposition method based on the FEM and it is first introduced in [Farhat et Roux, 1991]. The formulation of displacement problem is decomposed into substructures, which are arranged into a functional minimization under constraints. These constraints are the continuity conditions of the displacement along the interfaces between substructures and could be taken into account by using the Lagrange multipliers. 
In [START_REF] Farhat | Two-level domain decomposition methods with Lagrange multipliers for the fast iterative solution of acoustic scattering problems[END_REF], Magoules et al., 2000] it is applied to acoustic problems. In [Mandel, 2002] it is applied to vibro-acoustic problems. The boundary element method The boundary element method (BEM) based on a integral formulation on the boundary of focusing domain. This method comprises two integral equations. The first one is an integral equation. Its unknowns are only on the boundary. The second integral equation describes the connection between the field inside the domain and the quantity on the boundary. Therefore for the BEM, the first step is to figure out the solution on the boundary field through the first integral equation. Then knowing the distribution of the solution on the boundary, one could use another integral equation to approximate solutions at any point inside the domain [Banerjee et Butterfield, 1981, Ciskowski et Brebbia, 1991]. Considering an acoustic problem where u(x) satisfy the Helmholtz equation ∆u(x) + k 2 u(x) = 0 (1.15) The two integral equations could be written as follow: u(x) 2 = G(x 0 , x) - ∂Ω G(y, x) ∂u ∂n (y) -u(y) ∂G(y, x) ∂n(y) dS(y) x ∈ ∂Ω (1.16) u(x) = G(x 0 , x) - ∂Ω G(y, x) ∂u ∂n (y) -u(y) ∂G(y, x) ∂n(y) dS(y) x ∈ Ω (1.17) where in (1.16) x, y are the points on the boundary ∂Ω. In (1.17) x is the point in the domain Ω and y is the point on the boundary ∂Ω. And x 0 represents the point of acoustic source. G(x 0 , x) is the Green function to be determined. As presented before, u(x) on ∂Ω could be determined by replacing the prescribed boundary conditions into (1.16). Based on this thought, BEM divides the boundary ∂Ω into N non overlapping small pieces, which are named boundary elements and denoted by ∂Ω 1 , ∂Ω 2 , • • • , ∂Ω N . By interpolation on these elements, one could resolve (1.16) and obtain the approximated u(x) on ∂Ω. It should be noticed that these integral equations could be obtained by direct boundary integral equation formulation or by indirect boundary integral equation formulation. The difference is that the direct one is derived from Green's theorem and the indirect one is derived from the potential of the fluid. Compared with FEM method, the BEM has the following advantages: (1) Instead of discretizing the volume and doing the integration on volume, the BEM only undertakes the similar work on the boundary. This drastically reduces the computational cost. (2) Facing to the unbounded problem, the integral equations (1.16) and (1.17) are still valid in the BEM method. The solution u(x) satisfies the Sommerfeld radiation conditions. The drawback of the BEM is to solve a linear system where the matrix needed to be inversed is fully populated. Conversely the matrix of FEM to inverse is quite sparse. This means for the FEM, it is easier to store and solve the matrix. Despite of its efficiency, facing to midfrequency problem the BEM still possesses the drawback of polynomial interpolation. The energetic methods The Statistical Energy Analysis The Statistical Energy Analysis (SEA) is a method to study high-frequency problems [Lyon et Maidanik, 1962]. This method divides the global system into substructures. Then it describes the average vibrational response by studying the energy flow in each substructure. For each substructure i, the power balance is hold P i in = P i diss + ∑ j P i j coup (1.18) where P i in and P i diss represents the power injected and dissipated in the substructure i. 
P i j coup denotes the power transmitted from the substructure i to its adjacent substructure j. If the model is hysteretic damping, the dissipated work is related with the total energy of the substructure i in the form of P i diss = ωη i E i (1.19) where η i is the hysteretic damping and E i is the total energy. Then the coupling between the substructures could be expressed as P i j coup = ωη i j n i E i n i - E j n j (1.20) where n i and n j are the modal densities of the substructure i and j respectively. η i j is the coupling loss factor. This equation illustrates the fact that the energy flow between the substructures i and j is proportional to the modal energy difference. The SEA lies on some strong assumptions that are generally true only at high frequency: • the energy is transmitted only to adjacent subdomains. • the energy field is diffuse in every sub-system. It should be mentioned that at very high frequency the energy field is not diffuse. [Mace, 2003] provides an excellent SEA review. The Hybrid FEM-SEA The Hybrid FEM-SEA method splits the system into two systems, namely the master and the slave systems [Shorter et Langley, 2005]. The standard FEM is used to treat the master system, which represents a deterministic response. On the other hand, the slave system is solved by the SEA method because it will show a randomized response. This hybrid use of the FEM and the SEA possesses both of their advantages. In fact, the uncertainty fields are directly described by the SEA without any information on stochastic parameters. The counterpart which does not require any Montecarlo simulation seems quite appropriate for the application of the FEM method . Wave Intensity Analysis The prediction of the SEA is valid under the diffuse field hypothesis. The calculation of the coupling loss factors are based on this hypothesis. The Wave Intensity Analysis (WIA) [Langley, 1992] proposes the hypothesis that the vibrational field diffuses and could be mainly represented by some preliminary directions, which are in the form of u(x) = 2π 0 A(θ)e ik(θ)•x dθ (1.21) where k(θ) represents the wave vector which propagates in the direction θ. Supposing the waves are totally uncorrelated 2π 0 2π 0 A(θ 1 )A * (θ 2 )e ik(θ 1 -θ 2 )•x dθ 1 dθ 2 = g(θ 1 )δ(θ 1 -θ 2 ) (1.22) where g(θ 1 ) is the measure of the energy in the direction θ 1 and δ represents the Dirac function. The energy could be expressed by the relation E(x) = 2π 0 e(x,θ)dθ (1.23) The energy e(x,θ) is then homogenised in space and developed by the Fourier series e(x,θ) = +∞ ∑ p=0 e p N p (θ) (1.24) The power balance therefore provides the amplitude e p . This method gives a better result than the SEA method on plate assemblies [START_REF] Langley | Statistical energy analysis of periodically stiffened damped plate structures[END_REF]. However, the local response is not addressed and the coupling coefficients are hard to determine. The Energy Flow Analysis The Energy Flow Analysis was first introduced in [Belov et Rybak, 1975, Belov et al., 1977]. This method studies the local response by a continue description of the energy value which characterizes the vibrational phenomenon of the mechanical system. The effective energy density, which is denoted by e, is the unknown. The energy flow is related to this energy by I = - c 2 g ηω ∇e (1.25) where c g is the group velocity. 
Then the work balance divI = P in j -P diss could lead to ωηec 2 g ηω ∆e = -P in j (1.26) Because the quantity e varies slowly with the space variable, the simplicity of this equation makes it easily be treated with an existant FEM code. This method well performs in 1D problem in [START_REF] Lase | Energy flow analysis of bars and beams: theoretical formulations[END_REF], Ichchou et al., 1997], however it is difficult to be applied in 2D coupling problem [Langley, 1995]. In addition, using the equation (1.26) creates numerous difficulties [Carcaterra et Adamo, 1999]. For example, the 2D field radiated by the source decays as 1/ √ r. Yet in the analytic theory it decays as 1/r. In the stationary case, this model only correctly represents the evaluation of energy while the waves are uncorrelated [Bouthier et Bernhard, 1995]. Ray Tracing Method The Ray Tracing Method (RTM) is derived from the linear optic theory and it was first introduced in [START_REF] Krokstad | Calculating the acoustical room response by the use of a ray tracing technique[END_REF] to predict acoustic performances in rooms. The vibrational response is calculated following a set of propagative waves until fully damped. Transmissions and reflections are computed using the classical Snell formula. If frequency and damping are enough elevated, the RTM is cheap and accurate. Otherwise, computational costs could be unduly expensive. Moreover, complex geometries are difficult to study due to their high scattering behaviour. This technique is applied to acoustic [START_REF] Allen | [END_REF], Yang et al., 1998, Chappell et al., 2011] and to plates assemblies in [Chae et Ih, 2001, Chappell et al., 2014]. The wave-based methods Ultra Weak Variational Formulation The Ultra Weak Variational Formulation (UWVF) discretizes the domain into elements. It introduces a variable on each interface and this variable satisfies a weak formulation on the boundary of all the elements. The vibrational field is approximated by a combination of the plane wave functions. Then the Galerkin method leads this approach to solve a matrix system and the solution is the boundary variables. The continuity between the elements verified by a dual variable. Once the interface variables are calculated, one could build the solution inside each element. However the matrix is generally ill-conditioned. In [Cessenat et Despres, 1998b] a uniform distribution of wave directions is proposed to maximize the matrix determinant. Of course, the idea of pre-conditioner is also introduced to alleviate this problem. A comparison of the UWVF and the PUM on a 2D Helmholtz problem with irregular meshes is done in [START_REF] Huttunen | Comparison of two wave element methods for the Helmholtz problem[END_REF]. It presents that both of the methods could lead to a precise result with coarse mesh. Moreover, the UWVF outperforms the PUM at mid-frequency and PUM outperforms UWVF at low-frequency. As to the conditioning numbers, PUM is always better that the UWVF at mid-frequency. It is proved in [START_REF] Gittelson | Plane wave discontinuous Galerkin methods: analysis of the h-version[END_REF] that the UVWF is a special case of the Discontinuous Galerkin methods using plane waves. In [START_REF] Luostari | Improvements for the ultra weak variational formulation[END_REF], it is proposed to use special solutions in the case of a layered material. 
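The ill-conditioning mentioned above is intrinsic to plane-wave bases and is shared by the PUM, the WBM and the VTCR discussed later. A small illustration (not taken from the dissertation; the wave number and the unit square are assumed purely for the example) computes the Gram matrix of J uniformly distributed propagative plane waves over a square and shows how its condition number blows up as J grows:

```python
import numpy as np

# Condition number of the Gram matrix G[l, j] = int_square conj(phi_l) phi_j dx
# for J plane waves phi_j = exp(i k d_j . x) on [0, 1]^2 (illustrative values).
k = 20.0

def plane_wave_gram(J):
    th = 2*np.pi*np.arange(J)/J                      # uniform fan of directions
    d = np.stack([np.cos(th), np.sin(th)], axis=1)   # (J, 2) unit direction vectors
    def I(a):                                        # int_0^1 exp(i a t) dt, safe at a = 0
        a_safe = np.where(np.abs(a) < 1e-12, 1.0, a)
        return np.where(np.abs(a) < 1e-12, 1.0 + 0j, (np.exp(1j*a_safe) - 1)/(1j*a_safe))
    ax = k*(d[None, :, 0] - d[:, None, 0])           # entry [l, j] uses d_j - d_l
    ay = k*(d[None, :, 1] - d[:, None, 1])
    return I(ax)*I(ay)

for J in (8, 16, 24, 32, 40):
    print(J, "waves, cond =", f"{np.linalg.cond(plane_wave_gram(J)):.2e}")
```

With k = 20 the unit square spans only a few wavelengths, so beyond a few tens of directions the waves become numerically linearly dependent; this is precisely why the UWVF resorts to direction distributions that maximize the determinant and to preconditioning.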
Wave Based Method The Wave Based Method (WBM) makes use of evanescent wave functions and plane wave functions to approximate the solution [START_REF] Desmet | An indirect Trefftz method for the steady-state dynamic analysis of coupled vibro-acoustic systems[END_REF]. p E = +∞ ∑ m=0 a jm cos mπx L jx e ±i k 2 -( mπ L jx ) 2 y + +∞ ∑ n=0 a jn cos nπy L jy e ±i k 2 -( nπ L jy ) 2 x (1.27) where L ix and L iy represents the dimensions of the smallest encompassing rectangle of subdomain Ω j . In order to implement this approach, series in (1.27) must be truncated. The criteria to choose the number of shape functions is n ix L ix ≈ n iy L iy ≈ T k π (1.28) where T is a truncation parameter to be chosen. It is proposed in [Desmet, 1998] to take T = 2, which makes sure that the wave length λ min of the shape function is smaller than the half of the characteristic wave length of problem. The boundary conditions and the continuity conditions between subdomains is satisfied by a residues weighted variational technique. Moreover, since the test functions in the formulation are taken from the dual space of the working space, this method could not be categorized into the Galerkin method. The final unknown vector to be solved by the matrix system is the complex amplitude of waves. The study of the normal impedance on the interface is addressed in [START_REF] Pluymers | Trefftz-based methods for time-harmonic acoustics[END_REF] to improve the stability of this method. Introducing the damping in the model could achieve this objective. For the WBM method, p-convergence performs a much more efficient way than the h-convergence. Similar to other Trefftz methods, the matrix of the WBM suffers from the ill-condition. In [START_REF] Desmet | An indirect Trefftz method for the steady-state dynamic analysis of coupled vibro-acoustic systems[END_REF], Van Hal et al., 2005] the WBM is applied to 2D and 3D acoustics. Its application to plate assemblies in [START_REF] Vanmaele | An efficient wave based prediction technique for plate bending vibrations[END_REF], to the unbounded problem in [Van Genechten et al., 2010]. Wave Boundary Element Method The Wave Boundary Element Method (WBEM) is an extension of the standard BEM presented in Section 1.1.3. It is proposed in [START_REF] Perrey-Debain | Plane wave interpolation in direct collocation boundary element method for radiation and wave scattering: numerical aspects and applications[END_REF][START_REF] Perrey-Debain | Wave boundary elements: a theoretical overview presenting applications in scattering of short waves[END_REF] that the WBEM enriches the the base of the standard BEM by multiplying the propagative plane waves with the polynomial functions on the boundary. The number of the wave directions is free to choose. Generally a uniform distribution of wave directions is used. In [START_REF] Perrey-Debain | Wave boundary elements: a theoretical overview presenting applications in scattering of short waves[END_REF] it also proposes the idea that if the propagations of waves of problem are known a priori, one could use a non-uniform distribution of wave directions. Again this method could not escape from the ill-conditioning of the matrix due to the plane wave functions. Of course, compared to the standard BEM, the gain of this method largely reduces the cost. The mesh used in WBEM is much coarser than the standard BEM. 
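Returning to the WBM expansion (1.27)–(1.28), the size and the nature of the wave-function set for one rectangular subdomain can be sketched as follows (a minimal illustration; the wave number, the rectangle dimensions and T = 2 are assumed values):

```python
import numpy as np

# Wave-function set of (1.27) for a rectangle Lx x Ly, truncated with rule (1.28).
k, Lx, Ly, T = 25.0, 1.2, 0.8, 2.0   # assumed values

nx = int(np.ceil(T*k*Lx/np.pi))      # n_x / Lx ~ T k / pi
ny = int(np.ceil(T*k*Ly/np.pi))

def x_set(n, L):
    m = np.arange(n + 1)
    kt = m*np.pi/L                    # tangential wavenumber of cos(m pi x / L)
    kn = np.sqrt(k**2 - kt**2 + 0j)   # purely imaginary when kt > k -> evanescent in y
    return kt, kn

kt, kn = x_set(nx, Lx)
n_evan = int(np.sum(kt > k))
print(f"x-set: {nx + 1} cosine orders ({n_evan} evanescent), "
      f"total functions with the +/- signs: {2*(nx + 1)}")
```

The y-set is built in the same way by swapping the roles of x and y; the boundary and interface conditions are then enforced in the weighted-residual sense described above.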
Discontinuous Enrichment Method The Discontinuous Enrichment Method (DEM) was first introduced in [START_REF] Farhat | The discontinuous enrichment method[END_REF]. This method is similar to the multi-scale FEM. However the enrichment functions of the DEM are not zero-trace on the boundaries. In the DEM, the exact solutions of governing equations are taken as enrich functions for the fine scale solution u e . These functions neither satisfy the continuity condition between elements nor satisfy the boundary conditions. Therefore the Lagrange multipliers are introduced to meet these conditions. In order to have a good stability, the number of the Lagrange multipliers on each boundary is directly related to the number of plane waves used in each element. This inf-sup condition is presented in [Brezzi et Fortin, 1991]. Therefore the elements built by this method is specially noted such as R -4 -1: R denotes rectangle element, 4 the wave numbers in the element and 1 means the number of the Lagrange multiplier on the boundary of element. This method is applied to 2D problem in [START_REF] Farhat | The discontinuous enrichment method for multiscale analysis[END_REF][START_REF] Farhat | A discontinuous Galerkin method with plane waves and Lagrange multipliers for the solution of short wave exterior Helmholtz problems on unstructured meshes[END_REF] and to 3D problem in [Tezaur et Farhat, 2006]. It is also proved in [Farhat et al., 2004a] that the coarse solution calculated by the FEM does not contribute to the accuracy of the solution in Helmholtz problem. In this case the polynomial functions could be cut out and correspondingly the method is named the Discontinuous Galerkin method (DGM). As the WBEM, the DEM requires a much coarser mesh. Application of this method to acoustics is presented in [Gabard, 2007], to plate assemblies in [START_REF] Massimi | A discontinuous enrichment method for the efficient solution of plate vibration problems in the mediumfrequency regime[END_REF], Zhang et al., 2006], to high Péclet advection-diffusion problems in [START_REF] Kalashnikova | A discontinuous enrichment method for the finite element solution of high Péclet advection-diffusion problems[END_REF]. Recently, facing to the varying wave number Helmholtz problem, the DEM uses Airy functions as shape functions. In [START_REF] Tezaur | The discontinuous enrichment method for medium-frequency Helmholtz problems with a spatially variable wavenumber[END_REF] these new enrich functions are used to resolve a 2D under water scattering problem. Conclusion This chapter mainly presented the principal computational methods in vibrations and in acoustic, which could be classified into low-, mid-and high-frequency problems. Considerable approaches have been specifically developed depending on the frequency of the problem. In the low frequency range, the principal methods are the FEM and the BEM. Both of these methods require the refinement of mesh. Their difference is that for the BEM only the boundary is required to be discretized and for the FEM however, the mesh covers the whole volume. These two methods are reliable and robust in low-frequency problem. Facing to the mid-frequency problem, the FEM suffers from the numerical dispersion effect. To alleviate this effect, the mesh of the FEM needs to be greatly refined. Consequently, the FEM becomes extremely expensive. Even though the BEM has a much smaller numerical model to manipulate, its numerical integrations are expensive. 
In addition, since the BEM interpolates the polynomial functions on the boundary, consequently a refined mesh is also necessary. Both the FEM and the BEM are no longer fit to solve mid-frequency problem. Being contrary to the low-frequency problems, the high-frequency problems could not be analysed by the local response of modes. Instead, the energetic approaches are more practical and efficient. However these methods neglect the local response. In addition, sometimes the parameters in the methods needs to be determined by experience or by very intensive calculation. Lastly, it mainly resorts to the waves based method to solve the mid-frequency problems. These methods commonly adopt the exact solutions of the governing equation as shape functions or enrichment functions. The fundamental difference is the way they deal with the boundary conditions and continuity conditions between the subdomains. The VTCR is categorized into these waves based method. Especially, the VTCR possesses an original variational formulation which naturally incorporates all conditions on the boundary and on the interface between subdomains. Moreover there is a priori independence of the approximations among each subdomains. This feature enables one freely to choose the approximations which locally satisfy the governing equation in each subdomain. In the Helmholtz problem of constant wave number, the plane wave functions are taken as shape functions. However, most of the existent mid-frequency methods are confined to solve the Helmholtz problem of piecewise constant wave number. In the extended VTCR, Airy wave functions are used as shape functions. The extended VTCR could well solve the Helmholtz problem when the square of wave number varies linearly. Then the WTDG method is applied to solve the heterogeneous Helmholtz problem in more generous cases. In this dissertation, two WTDG approaches are proposed, namely the Zero Order and the First Order WTDG . Moreover, the survey mentioned above shows that there lacks a efficient method to solve the problem with bandwidth ranging from the low-frequency to the mid-frequency. Even there it is one such as DEM, supplementary multipliers are necessarily needed, which complicates the numerical model. The FEM/WAVE WTDG method could achieve this goal by making a hybrid use of polynomial approximations and plane wave approximations. Chapter 2 The Variational Theory of Complex Rays in Helmholtz problem of constant wave number The objective of this chapter is to illustrate the basic features of the standard Variational Theory of Complex Rays. The problem background lies in acoustics. A rewriting of the reference problem into variational formulation is introduced. The equivalence of formulation, the existence and the uniqueness of the solution are demonstrated. This specific variational formulation naturally comprises all the boundary conditions and the continuity conditions on the interface between subdomains. Since the shape functions are required to satisfy the governing equation, the variational formulation has no need to incorporate the governing equation. These shape functions contain two scales. The slow scale is chosen to be discretized and calculated numerically. It corresponds to the amplitude of vibration. Meanwhile the fast scale represents the oscillatory effect and is treated analytically. Furthermore, three kinds of classical VTCR approximations are discussed. They are correspondingly the sector approximation, the ray approximation and the Fourier approximation. 
The numerical implementation of the VTCR is introduced, including ray distribution and iterative solvers. Then an error estimator and convergence properties of the VTCR is presented. At last, an adaptive version of the VTCR is introduced. Ω u d pressure prescribed over ∂ 1 Ω Ω E subdomain of Ω Γ EE ′ interface between subdomains Ω E and Ω E ′ {u} EE ′ (u E + u E ′ ) |Γ EE ′ [u] EE ′ (u E -u E ′ ) |Γ EE ′ q u (1 -iη)gradu ζ (1 -iη) -1/2 2.1 Reference problem and notations To illustrate the methods in this dissertation, a 2-D Helmholtz problem is taken as reference problem (see Figure 2.1). Acoustics or underwater wave propagation problem could be all abstracted into this model. Let Ω be the computational domain and ∂Ω = ∂ 1 Ω ∪ ∂ 2 Ω be the boundary. Without losing generality, Dirichlet and Neumann conditions are prescribed on ∂ 1 Ω, ∂ 2 Ω in this dissertation. Treatment of other different boundary conditions can be seen in [Ladevèze et Riou, 2014]. The following problem is considered: Ω Ω E Γ EE ′ r d Ω u d ∂ 1 Ω ∂ 2 Ω g d Ω E ′ find u ∈ H 1 (Ω) such that (1 -iη)∆u + k 2 u + r d = 0 over Ω u = u d over ∂ 1 Ω (1 -iη)∂ n u = g d over ∂ 2 Ω (2.1) where ∂ n u = gradu • n and n is the outward normal. u is the physical variable studied such as the pressure in acoustics. η is the damping coefficient, which is positive or equals to zero. The real number k is the wave number and i is the imaginary unit. u d and g d are the prescribed Dirichlet and Neumann data. Rewrite of the reference problem The reference problem (2.1) can be reformulated by the weak formulation. Both the reformulation and demonstration of equivalence are introduced in [Ladevèze et Riou, 2014]. Variational formulation As Figure 2.1 shows, let Ω be partitioned into N non overlapping subdomains Ω = ∪ N E=1 Ω E . Denoting ∂Ω E the boundary of Ω E , we define Γ EE = ∂Ω E ∩ ∂Ω and Γ EE ′ = ∂Ω E ∩ Ω E ′ . The VTCR approach consists in searching solution u in functional space U such that U = {u | u |Ω E ∈ U E } U E = {u E | u E ∈ V E ⊂ H 1 (Ω E )|(1 -iη)∆u E + k 2 u E + r d = 0} (2.2) The variational formulation of (2.1) can be written as: find u ∈ U such that Re   ik   ∑ E,E ′ ∈E Γ EE ′ 1 2 {q u • n} EE ′ { ṽ} EE ′ - 1 2 [ qv • n] EE ′ [u] EE ′ dS -∑ E∈E Γ EE ∩∂ 1 Ω qv • n (u -u d ) dS + ∑ E∈E Γ EE ∩∂ 2 Ω (q u • n -g d ) ṽdS = 0 ∀v ∈ U 0 (2.3) where ˜ represents the conjugation of . The U E,0 and U 0 denote the vector space associated with U E and U when r d = 0. Properties of the variational formulation First, let us note that Formulation (2.3) can be written: find u ∈ U such that b(u,v) = l(v) ∀v ∈ U 0 (2.4) Let us introduce u 2 U = ∑ E∈E Ω E gradu.grad ũdΩ (2.5) Property 1. u U is a norm over U 0 . Proof. The only condition which is not straightforward is u U = 0 for u ∈ U 0 ⇒ u = 0 over Ω. Assuming that u ∈ U 0 such that u U = 0, it follows that q u = 0 over Ω. Hence, from divq u + k 2 u = 0 over Ω E with E ∈ E where E = {1,2, • • • , N}, we have u = 0 over Ω E and, consequently, over Ω. Property 2. For u ∈ U 0 , b(u,u) kη u 2 U , which means that if η is positive the formulation is coercive. Proof. For u ∈ U 0 , we have b(u,u) = Re ik ∑ E∈E ∂Ω E q u .n ũdS (2.6) Consequently, b(u,u) = Re ik ∑ E∈E Ω E -k 2 u ũ + (1 -iη)gradu.grad ũ dΩ (2.7) Finally, b(u,u) = kη ∑ E∈E Ω E gradu.grad ũdΩ (2.8) Then, b(u,u) kη u 2 U . Property 1 implies that if η is positive the solution of (2.3) is unique. Since the exact solution of Problem (2.1) verifies (2.3), Formulation (2.3) is equivalent to the reference problem (2.1). 
Besides, it can be observed that for a perturbation ∆l ∈ U ′ 0 of the excitation the perturbation ∆w of the solution verifies ∆w U 1 kη |∆l| U ′ 0 (2.9) 28The Variational Theory of Complex Rays in Helmholtz problem of constant wave number Approximation and discretization of the problem To solve the variational problem (2.3), it is necessary to build the approximations u h E and the test functions v h E for each subdomain Ω E . Such u h E and v h E belongs to the subdomain U h E ⊂ U E . The projection of solutions into the finite dimensional subdomain U h E makes the implementation of the VTCR method be feasible. Re   ik   ∑ E,E ′ ∈E Γ EE ′ 1 2 {q u h • n} EE ′ ṽh EE ′ - 1 2 [ q v h • n] EE ′ u h EE ′ dS -∑ E∈E Γ EE ∩∂ 1 Ω q v h • n u h -u d dS + ∑ E∈E Γ EE ∩∂ 2 Ω (q u h • n -g d ) ṽh dS = 0 ∀v h ∈ U h 0 (2.10) The solution could be locally expressed as the superposition of finite number of local modes namely complex rays. These rays are represented by the complex function: u E (x) = u (E) n (x, k)e ik•x (2.11) where u (E) n is a polynomial of degree n of the spatial variable x. The complex ray with the polynomial of order n is called ray of order n. k is a wave vector. The functions belonging to U h 0 satisfy the Helmholtz equation ( 2 The evanescent rays only exist on the boundary and do not appear in the pure acoustic problem. However it is necessary to introduce these rays in some problems. For example in the vibro-acoustic where the nature of waves in the structure and that in the fluid are quite different, there exist the evanescent rays. The wave vector of these rays is in the form of k = ζk[±cosh(θ), -isinh(θ)] T with θ ∈ [0, 2π[. In this dissertation, these evanescent rays will not be used in the problem. For the ray of order 0, the polynomial u (E) n becomes a constant, and at the same time the solution of the Helmholtz problem could be written in the form u E (x) = C E A E (k)e ik•x dC E (2.12) where A E is the distribution of the amplitudes of the complex rays and C E is the curve described by the wave vector when it propagates to all the directions of the plane. In the linear acoustic C E is a circle. The expression (2.12) describes two scales. One is the slow scale, which is the distribution of amplitudes A E (k). It slowly varies with the wave vector k. The other one is the fast scale, which corresponds to e ik•x . It depicts the vibrational effect. This scale fast varies with wave vector k and the spatial variable x. Sectors approximation: To achieve the approximation in finite dimension, in the VTCR, the fast scale is taken into account analytically and the slow scale is discretized into finite dimension. That is to say the unknown distribution of amplitudes A E needs to be discretized. Without a priori knowing of the propagation direction of the solution, the VTCR proposes an integral representation of waves in all directions. In this way A E is considered as piecewise constant and the approximation could be expressed as u E (x) = C E A E (k)e ik•x dC E = J ∑ j=1 A jE C jE e ik•x dC jE (2.13) where C jE is the angular discretization of the circle C E and A jE is the piecewise constant approximation of A E (k) on the angular section C jE . 
The shape functions of (2.13) are called sectors of vibration and they could be rewritten on function of the variable θ ϕ jE (x) = θ j+ 1 2 θ j-1 2 e ik(θ)•x dθ (2.14) Therefore, the working space of shape functions could be generated as U h E = Vect ϕ jE (x), j = 1, 2, • • • , J (2.15) Rays approximation: Denoting ∆θ as the angular support, it should be noticed that when ∆θ → 0 the sectors become rays. In this case, the expression of approximation becomes: u E (x) = J ∑ j=1 A jE e ik•x (2.16) ϕ jE (x) = e ik(θ j )•x (2.17) where A jE becomes the amplitude associated with the complex ray which propagates in direction θ j . Fourier approximation: Both the sectors and the rays are engaged to discretize the slow scale of (2.12), whose fast scale is treated analytically. In previous work of ( [START_REF] Kovalevsky | The Fourier version of the Variational Theory of Complex Rays for medium-frequency acoustics[END_REF]) it proposes an new idea to discretize the slow scale. The corresponding method is to take advantage of the Fourier series to achieve this discretization. On the 2D dimension, this approximation could be written into u E (x) = 2π 0 A E (k)e ik•x dθ = J ∑ j=-J A jE 2π 0 e i jθ e ik•x dθ (2.18) The shape functions of this discretization is in the form of ϕ jE (x) = 2π 0 e i jθ e ik(θ)•x dθ (2.19) It is proved that the Fourier approximation outperforms the sectors approximation and the rays approximation. Compared to the other two approximations, this approximation alleviates ill conditioning of matrix. In this dissertation, for the simplicity of implementation, among the three types of VTCR approximations presented above, discrete complex rays are chosen to be used. By this way, the approximation could be expressed as u E (x) = A T E • ϕ E (x) (2.20) where ϕ E = [ϕ 1E , ϕ 2E , • • • , ϕ JE ] is the vector of the shape functions ϕ jE of (2.17) and A T E is the vector of the associated amplitudes A jE . By this way, the formulation (2.3) could be written into a matrix problem KA = F (2.21) K corresponds to the discretization of the bilinear form of weak formulation. Inside K there are N 2 partitioning of blocks K EE ′ , whose dimension are J ×J. When Γ EE ′ = / 0, the blocks corresponding to K EE ′ are non zero fully populated. Otherwise K EE ′ are zero blocks. The vector A = [A 1 , • • • , A E , • • •A N ] corresponds to the total amplitudes , which is the degree of freedom in the VTCR. F is the linear form of weak formulation and corresponds to the loading. Ray distribution and matrix recycling For rays approximation, one has to discretize the propagative wave direction in [0, 2π[. In works [Ladevèze et Riou, 2005, Riou et al., 2004, Riou et al., 2008, Kovalevsky et al., 2014], a symmetric ray distribution was adopted. The idea is to evenly distribute the wave directions over the unit circle. There are two advantages of the symmetric ray distribution. First, it is easy to calculate the wave direction. Second, the distribution always keeps symmetric. However, this distribution requires a complete matrix recomputation as the number of rays changes. In the VTCR, matrix construction is a relevant (predominant in some cases) operation in terms of computational costs. Therefore the symmetric ray distribution is not ideal to save computational costs. In work [Cattabiani, 2016], a quasisymmetric ray distribution method is proposed. In this algorithm previous rays are fixed as new ones are added. The first ray can be placed in any direction. 
After that, new rays are inserted in the gaps between the previous rays in the most symmetric possible way. This distribution allows matrices to be recycled. Its drawback is that, for a given number of rays, the distribution may be asymmetric. Compared with the symmetric distribution, the asymmetric one converges slightly less efficiently, but this only occurs when an insufficient number of rays is used; as the number of rays increases the difference decreases, and in practice it is already negligible once convergence is reached. To save computational cost, the asymmetric (gap-filling) ray distribution is used in this dissertation.

Iterative solver

The VTCR suffers from ill-conditioning. Typically, the VTCR converges abruptly at the point where ill-conditioning appears; however, the error does not deteriorate. To give a numerical example, consider, as shown in Figure 2.3, a square domain Ω = [0 m, 1 m] × [0 m, 1 m] with damping coefficient η = 0.01 (i.e. 1 − iη = 1 − 0.01i). The wave number is k = 40 m⁻¹ over the domain. The boundary conditions are u_d = ∑_{i=1}^{4} A_i e^{ikζ(cos θ_i x + sin θ_i y)} with A_1 = 1, A_2 = 1.5, A_3 = 2, A_4 = 4 and θ_1 = 6°, θ_2 = 33°, θ_3 = 102°, θ_4 = 219°. To illustrate the ill-conditioning of the VTCR, a single subdomain is used and the number of rays is gradually increased until the result converges. The relative error and the condition number as functions of the number of rays are shown in Figure 2.4. Since the exact solution is known over the domain, the true error is defined as

ε_ex = ||u^h − u_ref||_{L²(Ω)} / ||u_ref||_{L²(Ω)}    (2.22)

The results show that when the VTCR converges, the condition number increases drastically, meaning that the matrix becomes quasi-singular. A suitable solver is therefore required for an accurate resolution.

[Figure 2.3: square domain Ω with u_d = u_ex prescribed on its four edges]

The four solvers considered are:
• backslash. The standard direct MATLAB solver, considered for reference.
• pinv. It returns the Moore-Penrose pseudoinverse of the matrix. It is well suited to ill-conditioned systems since it truncates the smallest singular values, which yields a relatively well-conditioned pseudoinverse [Courrieu, 2008].
• gmres. It uses Arnoldi's method to compute an orthonormal basis of the Krylov subspace, and restarts if stagnation occurs [Saad et Schultz, 1986].
• lsqr. It is based on the Lanczos bidiagonalization [Paige et al.].

The numerical example of Figure 2.3 is reused to compare the four solvers. Since the true error can be computed, their performance is shown in Figure 2.5: pinv gives the best accuracy, lsqr and gmres behave similarly but are less accurate, and backslash breaks down as soon as the condition number degrades. Therefore pinv is chosen as the solver in this dissertation.

Convergence of the VTCR

Convergence criteria

In [Kovalevsky et al.], it is proposed that the geometrical heuristic convergence criterion for the VTCR with plane waves in 2D follows the relation

N_e = τ k R_e / (2π)    (2.23)

where N_e is the number of wave directions, τ a parameter to be chosen, k the wave number and R_e the characteristic radius of the domain. In the VTCR, one generally chooses τ = 2.
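To make the discretized formulation (2.10) concrete, the following sketch assembles and solves the single-subdomain benchmark of Figure 2.3 with propagative rays of order 0. Since there is a single subdomain with Dirichlet data only, the interface and Neumann terms vanish, and testing the formulation with v and iv reduces it to ∫_∂Ω conj(q_v · n)(u^h − u_d) dS = 0 for every test ray, which is what is assembled below. NumPy stands in for the MATLAB implementation used in the dissertation, and the ray count is simply chosen generously; this is an illustrative sketch, not the code of the thesis.

```python
import numpy as np

k, eta = 40.0, 0.01                          # benchmark of Figure 2.3
zeta = (1 - 1j*eta)**(-0.5)

A_ex = np.array([1.0, 1.5, 2.0, 4.0])        # exact solution: 4 plane waves
th_ex = np.deg2rad([6.0, 33.0, 102.0, 219.0])

def rays(x, y, thetas):
    """Order-0 propagative rays exp(i k zeta (x cos t + y sin t)), one per column."""
    c, s = np.cos(thetas), np.sin(thetas)
    return np.exp(1j*k*zeta*(np.outer(x, c) + np.outer(y, s)))

u_ex = lambda x, y: rays(x, y, th_ex) @ A_ex

# Gauss-Legendre points on the four edges of the unit square (outward normals n)
ng = 200
t, w = np.polynomial.legendre.leggauss(ng)
t, w = 0.5*(t + 1.0), 0.5*w
edges = [(t, 0*t, (0.0, -1.0)), (1 + 0*t, t, (1.0, 0.0)),
         (t, 1 + 0*t, (0.0, 1.0)), (0*t, t, (-1.0, 0.0))]

J = 60                                        # number of rays in the fan
th = 2*np.pi*np.arange(J)/J
K = np.zeros((J, J), complex)
F = np.zeros(J, complex)
for xb, yb, n in edges:
    phi = rays(xb, yb, th)                                   # (ng, J)
    dn = np.cos(th)*n[0] + np.sin(th)*n[1]                   # d_j . n
    qn = (1 - 1j*eta)*1j*k*zeta*dn*phi                       # q_phi . n on the edge
    K += np.conj(qn).T @ (w[:, None]*phi)                    # K[l, j]
    F += np.conj(qn).T @ (w*u_ex(xb, yb))                    # F[l]

A = np.linalg.pinv(K) @ F                     # pseudo-inverse, as advocated above

# true relative error (2.22) evaluated on a grid
xg, yg = [g.ravel() for g in np.meshgrid(np.linspace(0, 1, 80), np.linspace(0, 1, 80))]
uh, ue = rays(xg, yg, th) @ A, u_ex(xg, yg)
print("relative L2 error:", np.linalg.norm(uh - ue)/np.linalg.norm(ue))
```

Increasing J should reproduce the behaviour of Figure 2.4: the error drops rapidly while the condition number of K (np.linalg.cond(K)) grows at the same time, which is why the pseudo-inverse is preferred to a direct solve.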
Error indicator

In general, the exact solution is unknown, so an error estimator must be defined. This is not easy because some subdomains Ω_E may not touch the boundary ∂Ω. The only way to evaluate the accuracy of the approximate solution in such a subdomain is to check the continuity of displacement and velocity with all the neighbouring subdomains, and this check is delicate because the solutions in the surrounding subcavities are themselves only approximations. In [Ladevèze et al.], a local error estimator is defined as

ε^h_E = [ E_{d,Ω_E}(u^h_E − u^{pv}_E) / mes(Ω_E) ] / [ ∑_E E_{d,Ω_E}(u^{pv}_E) / mes(Ω) ]    (2.24)

where E_{d,Ω_E}(u) is the dissipated energy, mes(Ω) and mes(Ω_E) denote the measures of Ω and Ω_E respectively, and u^{pv}_E is the solution of the problem in Ω_E when the pressure and the normal pressure gradient are prescribed on the boundaries of Ω_E so that they match the pressure and normal pressure gradient of all the subdomains Ω_E′ adjacent to Ω_E. When part of the boundary of Ω_E coincides with the boundary of the domain, the prescribed boundary data are used there instead. Note that this error measures the relative difference between u^h_E and u^{pv}_E in terms of dissipated energy, which is a relevant quantity in the medium-frequency range. In the same way, a global error indicator can be defined as

ε = max_E {ε^h_E}    (2.25)

In [Ladevèze et al.] a comparison among the true local error, the relative H¹ error and the local error estimator (2.24) was made. It shows that the estimator (2.24) comes very close to the classical H¹ error and is a relevant measure for assessing the quality of the computed solution.

h- and p-convergence of VTCR

This subsection gives a quick overview of the convergence properties of the standard VTCR. There are two ways of driving the VTCR to convergence. The first, the h-method, fixes the number of rays and decreases the size of the subdomains. The second, the p-method, fixes the size of the subdomains and increases the number of rays. A simple numerical example illustrates the performance of the VTCR. A square domain Ω = [0 m, 1 m] × [0 m, 1 m] with damping coefficient η = 0.01 (i.e. 1 − iη = 1 − 0.01i) is considered, as shown in Figure 2.6. The wave number is k = 40 m⁻¹ over the domain and the boundary condition u_d = 1 is imposed along all the edges. The error is measured with the error indicator defined in Section 2.4.2.

[Figure 2.6: square domain Ω with u_d = 1 prescribed on its four edges — definition of the numerical example of Section 2.4.3]

The conclusion drawn from the results is that the p-convergence is far more efficient than the h-convergence: to reach the same precision, the p-method uses far fewer degrees of freedom. This numerical test is consistent with the results proved in [Melenk, 1995], namely that the p-convergence is exponential while the h-convergence is much slower. By taking advantage of this feature, the VTCR yields an accurate solution with a relatively small numerical model.
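Since p-refinement is the efficient route, it is worth adding rays in an order that lets the already-assembled matrix blocks be reused, as discussed in Section 2.2.4. One simple way to realize such a gap-filling ordering — a sketch, not necessarily the exact algorithm of [Cattabiani, 2016] — is a bit-reversal (van der Corput) sequence mapped onto [0, 2π):

```python
import numpy as np

def gap_filling_angles(n):
    """First n ray directions of a gap-filling ordering of [0, 2*pi).

    Previously generated directions never move when new ones are appended,
    so matrix blocks already assembled for them can be recycled.
    """
    angles = []
    for i in range(n):
        x, f, j = 0.0, 0.5, i
        while j:                     # radical inverse of i in base 2
            x += f*(j & 1)
            j >>= 1
            f *= 0.5
        angles.append(2*np.pi*x)
    return np.array(angles)

print(np.degrees(gap_filling_angles(8)))   # [0, 180, 90, 270, 45, 225, 135, 315]
```

For n a power of two the prefix is perfectly symmetric; for intermediate n the fan is only quasi-symmetric, which is the mild price paid for matrix recycling noted in Section 2.2.4.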
Adaptive VTCR An adaptive version of the VTCR is presented in [START_REF] Ladevèze | The Variational Theory of Complex Rays. MID-FREQUENCY-CAE Methodologies for Mid-Frequency Analysis in Vibration and Acoustics[END_REF]. For the VTCR, it needs a proper angular discretization in each subdomain. If the amplitudes of waves are sparsely distributed, a coarse angular discretization is enough for the VTCR. Otherwise when the amplitudes of waves are densely distributed, a refined angular discretization is required. Beginning with a coarse angular discretization, the adaptive version VTCR will adopt a refined angular discretization when it is needed. Thus, the process is completely analogous to that used in the adaptive FEM [Stewart et Hughes, 1997b] and consists of three steps: • In the first step, a global analysis of the problem is carried out using a uniform, low-density angular wave distribution based angular grid ν M . • The objective of the second step is to calculate the proper angular discretization. The quality of the approximation from the first step is quantified using an error indicator I Ω E which indicates whether a new angular discretization is locally nec- essary. If it is, a refined angular grid ν m locally replaces the coarse angular grid ν M . • The third step is a new full calculation using angular grid ν m . If the last calculation is not sufficiently accurate, the procedure can be repeated until the desired level of accuracy is attained. In the second step, the error estimator defined in (2.24) serves as the error indicator I Ω E . It can be useful to set two limit levels m 0 and m 1 : if I Ω E < m 0 , the quality of the solution is considered to be sufficient and no angular rediscretization of subdomain Ω E is necessary. If m 0 < I Ω E m 1 , the error is moderate, but too high and a new refined angular discretization is necessary. If m 1 < I Ω E , the solution is seriously flawed and the boundary conditions of Ω E must be recalculated more accurately, which requires a new first step. In practice, one often chooses m 0 = 10% and m 1 = 40%. As explained before, a large I Ω E indicates a poor solution in Ω E due to too coarse an angular discretization of the wave amplitudes. A new and better angular discretization is required. Then, the number of rays used for the coarse and refined discretizations are defined as: N M e = τ M kR e /(2π) N m e = τ m kR e /(2π) τ M = τ m + ∆τ (2.26) where τ M , τ m and ∆τ are positive real numbers. ∆τ is a parameter for angular discretization refinement. In practice, one chooses τ M = 0.2 and ∆τ = 0.2.The angles of rays added for the refinement are determined by the quasi-symmetric ray distribution presented in Section 2.2.4. Conclusion This chapter has presented the standard VTCR applied in the Helmholtz problem of constant wave number. The VTCR uses the general solutions of the governing equation as shape functions. The solution of problem is approximated by a combination of these shape functions. Generally, the general solutions are plane wave functions, evanescent wave functions. The approximations in different subdomains are a priori independent. Since the governing equation is satisfied, only the boundary conditions and the continuity conditions on the interfaces should be taken into account. The VTCR naturally introduces these conditions in a variational formulation. The unknowns are the amplitudes associated with waves in all subdomains. 
To achieve the numerical implementation in finite dimension, an angular discretization should be done. An asymmetric ray distribution is used for recycling the matrix. In the VTCR, the condition number increases when result begins to converge. Even though this phenomenon will not deteriorate the error, a proper iterative solver should be chosen for solution. By comparison, an iterative solver namely pinv is chosen for the VTCR. Since the VTCR uses the wave functions to approximate solutions, it requires only a very small number of degrees of freedom to obtain a precise result. Therefore the VTCR outperforms the FEM when h-convergence is used. Furthermore it shows that the p-convergence is much more efficient than the h-convergence. Finally, a geometrical heuristic criterion of convergence, an error estimator of VTCR and an adaptive version VTCR are presented. Chapter 3 The Extended VTCR for Helmholtz problem of slowly varying wave number This chapter is dedicated to extend the VTCR in Helmholtz problem of a slowly varying wave number. Based on the governing equation, the exact solutions, named Airy wave functions, are developed thoroughly. Construction of the finite dimensional approximation comes into discretizing the unknown distribution of the amplitudes of Airy wave functions. Then, in the first numerical example, its convergence properties will be studied. It will show that the convergence properties of this extended VTCR quite resemble the standard VTCR. It could well solve the mid-frequency problem with a small amount of degrees of freedom. Of course as a heritage of standard VTCR, the performances of p-convergence are also remarkable in the extended VTCR. The second numerical study concerns a complicated semi-unbounded harbor agitation problem, on which the extended VTCR is applied to get the solution. The result further proves the advantages and efficiency of the extended VTCR method. In this chapter the wave number k in (2.1) is no longer a constant. Instead it is supposed to be in the form that k 2 = αx + βy + γ, where α, β, γ are constant parameters. For simplicity, in this section we denote that k 2 † = k 2 /(1 -iη) = α † x + β † y + γ † , where α † = α/(1 -iη), β † = β/(1 -iη), γ † = γ/(1 -iη) respectively. Presented in Chapter 2, for the VTCR method, the exact solutions need to be known a priori to serve as shape functions. Therefore exact solutions of heterogeneous Helmholtz equation in (2.1) are required to be found. In order to solve the equation, the technique of separation of variable is considered here. On 2D, by introducing u(x) = F(x)G(y) into (2.1), it can be obtained that: F ′′ F + α † x + γ † = - G ′′ G + β † y ≡ δ (3.1) where δ is a free constant parameter. The analytic solutions of (3.1) are: F(x) =            C 1 Ai   -α † x -γ † + δ α 2/3 †   +C 2 Bi   -α † x -γ † + δ α 2/3 †   if |α † | = 0 C 1 cos γ † -δx +C 2 sin γ † -δx if |α † | = 0 (3.2) G(y) =          D 1 Ai   -β † y -δ β 2/3 †   + D 2 Bi   -β † y -δ β 2/3 †   if |β † | = 0 D 1 cos √ δy + D 2 sin √ δy if |β † | = 0 (3.3) where Ai and Bi are Airy functions [Zaitsev et Polyanin, 2002]. C 1 , C 2 , D 1 , D 2 are constant coefficients. When a variable named z → +∞, function Ai(z) tends towards 0 and function Bi(z) tends towards infinity (see Figure 3.1). 
Moreover when -z → -∞, the asymptotic expression of function Ai and Bi are:            Bi(-z) ∼ cos( 2 3 z 3 2 + π 4 ) √ πz 1 4 |arg(z)| < 2π/3 Ai(-z) ∼ sin( 2 3 z 3 2 + π 4 ) √ πz 1 4 |arg(z)| < 2π/3 (3.4) Since when z → +∞, Bi(z) goes to infinity and it has no physical meaning. To avoid of using Airy functions in this interval, the idea is to create functions in combination of Airy where k 2 m represents the minimum value of k 2 on Ω and (x m , y m ) is the coordinate which enables k 2 to take its minimum value k 2 m over the domain. Denoting P = [P 1 ,P 2 ] = [cos(θ), sin(θ)], where θ represents an angle parameter ranging from 0 to 2π, k 2 can be expressed in form that: k 2 = k 2 m + α(x -x m ) + β(y -y m ) = k 2 m P 2 1 + k 2 m P 2 2 + α(x -x m ) + β(y -y m ) (3.6) As the similar procedure to get (3.2) and (3.3), functions F and G can be composed by: F( x) = Bi(-x) + i * Ai(-x) (3.7) G( ỹ) = Bi(-ỹ) + i * Ai(-ỹ) (3.8) where x and ỹ are defined as follows: x = k 2 m * P 2 1 + α(x -x m ) α 2/3 (1 -iη) 1/3 = k 2 1 α 2/3 (1 -iη) 1/3 (3.9) ỹ = k 2 m * P 2 2 + β(y -y m ) β 2/3 (1 -iη) 1/3 = k 2 2 β 2/3 (1 -iη) 1/3 (3.10) By such a way, -x andỹ always locate in [-∞,0] on the domain Ω. The new wave function ψ(x,P) is built as: ψ(x,P) = F( x) * G( ỹ) (3.11) Asymptotically, when α tends to 0 F( x) → cos(ζk 1 • x) + i * sin(ζk 1 • x) (3.12) Asymptotically, when β tends to 0 G( ỹ) → cos(ζk 2 • y) + i * sin(ζk 2 • y) (3.13) It can be observed that ψ(x,P) function is the general solution of Helmholtz equation in (2.1). Especially when α = 0 and β = 0, ψ(x,P) function becomes plane wave function. The angle parameter θ in P describes the propagation direction of plane wave. Analogous to plane wave case, when α = 0 and β = 0, ψ function still represents a wave propagates on the 2D plane. P decides its propagation direction. In order to be distinct from plane wave, this wave is named Airy wave. An example of Airy wave and plane wave can be seen in Figure 3.2. Variational Formulation To solve this heterogeneous Helmholtz problem, again, the VTCR approach consists in searching solution u in functional space U such that U = {u | u |Ω E ∈ U E } U E = {u E | u E ∈ V E ⊂ H 1 (Ω E )|(1 -iη)∆u E + k 2 u E + r d = 0} (3.14) The variational formulation of (2.1) can be written as: find u ∈ U such that Re   ik   ∑ E,E ′ ∈E Γ EE ′ 1 2 {q u • n} EE ′ { ṽ} EE ′ - 1 2 [ qv • n] EE ′ [u] EE ′ dS -∑ E∈E Γ EE ∩∂ 1 Ω qv • n (u -u d ) dS + ∑ E∈E Γ EE ∩∂ 2 Ω (q u • n -g d ) ṽdS = 0 ∀v ∈ U 0 (3.15) The U E,0 and U 0 are the vector space associated with U E and U when r d = 0. It could be noticed that the variational formulation (3.15) is exactly the same as (2.3). Therefore, to prove the equivalence of this weak formulation with the reference problem, one could refer to the demonstration in Section 2.2.2. The only difference between (3.15) and (2.3) is the definition of their working space. In (2.3), the working space is composed by the plane wave functions. In (3.15), instead, the working space is composed by the Airy wave functions. Approximations and discretization of the problem The ψ(x,P) function defined in (3.11) only represents the fast oscillatory scale of the wave propagating in heterogeneous field. Meanwhile the amplitude associated with the Airy wave function corresponds to the slow scale. Similarly, here only the slow scale is discretized and the fast scale is obtained analytically. 
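The fast scale can be evaluated directly from (3.7)–(3.11). The sketch below builds ψ(x, P) with SciPy's Airy functions (which accept complex arguments) and checks by finite differences that it satisfies the governing equation; the numerical values of α, β, γ, η and the unit square are assumed purely for illustration:

```python
import numpy as np
from scipy.special import airy      # returns (Ai, Ai', Bi, Bi'); complex arguments allowed

# Airy wave of (3.7)-(3.11) for k^2 = alpha*x + beta*y + gamma (alpha, beta != 0 assumed)
alpha, beta, gamma, eta = 150.0, 150.0, 1000.0, 0.01
xm, ym = 0.0, 0.0                   # k^2 is minimal at this corner of the unit square
km2 = alpha*xm + beta*ym + gamma

def psi(x, y, theta):
    P1, P2 = np.cos(theta), np.sin(theta)
    xt = (km2*P1**2 + alpha*(x - xm))/(alpha**(2/3)*(1 - 1j*eta)**(1/3))   # eq. (3.9)
    yt = (km2*P2**2 + beta*(y - ym))/(beta**(2/3)*(1 - 1j*eta)**(1/3))     # eq. (3.10)
    Aix, _, Bix, _ = airy(-xt)
    Aiy, _, Biy, _ = airy(-yt)
    return (Bix + 1j*Aix)*(Biy + 1j*Aiy)        # F(x~) G(y~), eqs. (3.7), (3.8), (3.11)

# finite-difference check of (1 - i eta) Laplacian(psi) + k^2 psi = 0 at one point
x0, y0, th, h = 0.4, 0.7, np.deg2rad(33.0), 1e-4
lap = (psi(x0 + h, y0, th) + psi(x0 - h, y0, th) + psi(x0, y0 + h, th)
       + psi(x0, y0 - h, th) - 4*psi(x0, y0, th))/h**2
k2 = alpha*x0 + beta*y0 + gamma
print(abs((1 - 1j*eta)*lap + k2*psi(x0, y0, th))/abs(k2*psi(x0, y0, th)))
```

When α and β tend to zero the same function degenerates to the plane wave of Chapter 2, consistently with (3.12)–(3.13).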
The amplitude, which is a function that depends on the propagation direction θ, could be discretized by the similar way as the disretization of plane wave functions in Chapter 2. The general solution of heterogeneous Helmholtz equation could be locally written as u E (x) = C E A E (k,P)ψ(x,P)dC E (3.16) where A E is the distribution of the amplitudes of the complex rays and C E curve is described by the wave vector when it propagates to all the directions of the plane. In the linear acoustic C E is a circle. The expression (3.16) describes two scales. In order to further discretize the general solution to achieve the finite dimensional implementation, instead of the circular integration, the general solution could be approximately composed by complex rays of several directions. With the rays approximation, (3.16) could be rewritten into u E (x) = J ∑ j=1 A jE ψ(x,P j ) (3.17) ϕ jE (x) = ψ(x,P j ) (3.18) where A jE becomes the amplitude of the Airy wave which propagates in direction θ associated with P j . Here, it is no need to repeat the procedure to generate the matrix system. It is exactly the same with the procedure presented in Chapter 2. One could refer to it for all the details and the properties. Numerical implementation Numerical integration Since Airy wave function behaves in a quick oscillatory way, the general Gauss quadrature is no longer fit for the numerical integration. Due to the complexity of the Airy wave function, analytic solution of integration is difficult to be explicitly expressed. One must resort to other powerful numerical integration techniques. The integration methods considered are: • trapz. It performs numerical integration via the trapezoidal method. This method approximates the integration over an interval by breaking the area down into trapezoids with more easily computable areas. For an integration with N+1 evenly spaced points, the approximation is: b a f (x)dx ≈ b -a 2N N ∑ n=1 ( f (x n ) + f (x n+1 )) = b -a 2N [ f (x 1 ) + 2 f (x 2 ) + • • • + 2 f (x N ) + f (x N+1 )] (3.19) where the spacing between each point is equal to the scalar value ba N . If the spacing between the points is not constant, then the formula generalizes to b a f (x)dx ≈ 1 2 N ∑ n=1 (x n+1 -x n ) [ f (x n ) + f (x n+1 )] (3.20) where (x n+1x n ) is the spacing between each consecutive pair of points. • quad. It adopts the adaptive Simpson quadrature rule for the numerical integration. One derivation replaces the integrand f (x) by the quadratic polynomial P(x) which takes the same values as f (x) at the end points a and b and the midpoint m = (a + b) 2 . One can use Lagrange polynomial interpolation to find an expression for this polynomial. P(x) = f (a) (x -m)(x -b) (a -m)(a -b) + f (m) (x -a)(x -b) (m -a)(m -b) + f (b) (x -a)(x -m) (b -a)(b -m) (3.21) An easy integration by substitution shows that b a P(x) = b -a 6 f (a) + 4 f ( a + b 2 ) + f (b) (3.22) Consequently, the numerical integration could be expressed as: b a f (x)dx ≈ b -a 6 f (a) + 4 f ( a + b 2 ) + f (b) (3.23) The quad function may be most efficient for low accuracies with nonsmooth integrands. • quadl. It adopts the Gauss-Lobatto rules. It is similar to Gaussian quadrature with mainly differences. First, the integration points include the end points of the integration interval. Second, it is accurate for polynomials up to degree 2n -3, where n is the number of integration points. 
Lobatto quadrature of function f (x) on interval [-1, 1]: 1 -1 f (x)dx ≈ 2 n(n -1) [ f (1) + f (-1)] + n-1 ∑ i=2 w i f (x i ) (3.24) where the abscissas x i is the (i -1)st zero of P ′ n-1 (x) and the weights w i could be expressed as: w i = 2 n(n -1)[P n-1 (x i ) 2 ] , x i = ±1 (3.25) The quadl function might be more efficient than quad at higher accuracies with smooth integrands. • quadgk. Gauss-Kronrod quadrature is a variant of Gaussian quadrature, in which the evaluation points are chosen so that an accurate approximation can be computed by reusing the information produced by the computation of a less accurate approximation. Such integrals can be approximated, for example, by n-point Gaussian quadrature: b a f (x)dx ≈ n ∑ i=1 w i f (x i ) (3.26) where w i and x i are the weights and points at which to evaluate the function f (x). If the interval [a, b] is subdivided, the Gauss evaluation points of the new subintervals never coincide with the previous evaluation points (except at the midpoint for odd numbers of evaluation points), and thus the integrand must be evaluated at every point. Gauss-Kronrod formulas are extensions of the Gauss quadrature formulas generated by adding n + 1 points to an n-point rule in such a way that the resulting rule is of order 2n + 1. These extra points are the zeros of Stieltjes polynomials. This allows for computing higher-order estimates while reusing the function values of a lower-order estimate. The quadgk function might be most efficient for high accuracy and oscillatory integrands. It supports infinite intervals and can handle moderate singularities at the endpoints. It also supports contour integration along piecewise linear paths. A quick numerical example is done to test the performance of these four different numerical integrations on a simple square domain of [0 m,1 m]×[0 m, 1 m] and the origin being the left upper vertex( see Figure 3.3). The integrations are defined as ∂L u A • u i with i = 1, 2, • • • , 32 , where u A = ψ(x,P) is an Airy wave with θ = 0 • and its amplitude is 1. u i = ψ(x,P i ) are Airy waves with the angle shown in Table 3.1 and their amplitudes are all chosen to be 1. In this way, one constructs respectively thirty two integrations along the boundary on the bottom, which is denoted by ∂L. The other parameters are η = 0.01, α = 0 m -3 , β = -800 m -3 , γ = 1500 m -2 . This example is typical because to do the numerical implementation by the VTCR (3.14) one will always encounter the integrations resembling the integrations in our test. The symbol integration with MATLAB is used to yield the reference result as Table 3.2 shows. The symbol integration in MATLAB is the most accurate method but with extreme low efficiency. This is the reason why one choose the numerical integrations instead of symbol integrations in MATLAB. The differences of results between the reference results with the four numerical integration methods are made in Table 3.3, Table 3.4, Table 3.5, Table 3.6 correspondingly. It could be seen from Table 3.2 that the reference results are of order 10 0 . The differences between the reference results with the results calculated by trapz, guad, guadl and guadgk are of order 10 0 , 10 0 , 10 -10 and 10 -14 respectively. By comparison, one could draw the conclusion that quadgk could yield the most accurate results. The VTCR suffers from ill-conditioning when it converges. In this situation, it is possible that even a disturbance of small value in the system may generate totally different solution. 
Therefore the accuracy is the crucial point for us to choose the numerical integration method. One could draw the conclusion from the results that the quadgk is most accurate and suitable since in the VTCR there are many quick oscillatory integrands. Table 3.1: The angle θ of Airy wave functions for the numerical test ×10 0 1.0344 + 0.0000i 0.0122 -0.0045i 0.0007 + 0.0000i -0.0013 -0.0078i 0.7028 -0.3261i 0.0067 -0.0039i 0.0001 -0.0077i 0.0852 -0.7668i -0.0177 -0.0862i -0.0008 + 0.0001i 0.0006 + 0.0006i -0.0540 + 0.0663i -0.0396 + 0.0080i -0.0023 -0.0017i 0.0022 + 0.0015i 0.0361 -0.0082i -0.0228 -0.0347i -0.0047 -0.0044i 0.0053 + 0.0012i 0.0311 + 0.0155i 0.0189 -0.0224i 0.0046 -0.0065i 0.0023 -0.0055i 0.0100 -0.0195i 0.0080 + 0.0150i 0.0050 + 0.0048i -0.0047 -0.0004i -0.0106 -0.0044i 0.0000 + 0.0046i -0.0029 + 0.0001i -0.0012 + 0.0014i 0.0020 + 0.0020i Table 3.2: Reference integral values ×10 -14 0.0000 + 0.0000i -0.0029 -0.0048i 0.0052 -0.0147i 0.0032 -0.0019i 0.1776 -0.2776i -0.0108 + 0.0082i -0.0048 + 0.0094i -0.1360 -0.3109i -0.1676 -0.0444i -0.0011 -0.0103i 0.0072 + 0.0074i 0.0604 + 0.1485i 0.0847 + 0.0808i 0.0133 -0.0024i -0.0050 + 0.0019i -0.0749 -0.0753i -0.0385 -0.0097i 0.0109 -0.0096i 0.0092 + 0.0096i 0.0343 -0.0049i -0.0073 -0.0021i -0.0143 -0.0031i -0.0169 -0.0013i -0.0168 + 0.0014i -0.0014 + 0.0024i 0.0044 + 0.0054i -0.0032 -0.0027i 0.0014 + 0.0006i 0.0061 -0.0047i -0.0032 + 0.0031i -0.0054 + 0.0031i 0.0021 -0.0046i Table 3.3: Difference between the quadgk integral values and the reference integral values Iterative solver Similar to the VTCR method, the extended VTCR also suffers from ill-conditioning. The research of the iterative solvers for the VTCR have been thoroughly studied in Section 2.3. Convergence of the Extended VTCR Convergence criteria The geometrical heuristic criterion of convergence for the VTCR in the Helmholtz problem of constant wave number is shown in (2.23). Since k is not constant here, its maximum value k max on the domain is used in the heuristic criterion (2.23), which leads to N e = τk max R e /(2π) (3.27) where N e is the number of rays, τ a parameter to be chosen and R e is the characteristic radius of domain. τ = 2 is chosen in this dissertation. 
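As a small illustration, the criterion (3.27) can be evaluated as in the MATLAB sketch below; the values of k_max and R_e are assumed for a generic subdomain and do not correspond to a particular example of this chapter.

% Minimal sketch of the heuristic criterion (3.27): number of rays from the
% maximum wave number on the subdomain and its characteristic radius.
tau  = 2;                              % value chosen in this dissertation
kmax = 55;                             % assumed maximum wave number on the subdomain [1/m]
Re   = 0.5;                            % assumed characteristic radius of the subdomain [m]
Ne   = ceil(tau * kmax * Re / (2*pi)); % number of complex rays kept in the subdomain (rounded up)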
×10 -10 0.0001 + 0.0000i 0.0005 + 0.0004i 0.0038 + 0.0000i 0.0003 -0.0003i -0.0715 -0.1134i -0.2077 + 0.1253i -0.0006 + 0.2415i -0.1334 + 0.0024i 0.0016 -0.0030i 0.7817 + 0.0680i -0.4297 -0.6301i -0.0033 + 0.0006i 0.2674 + 0.5459i 0.0976 + 0.0593i -0.0907 -0.0523i -0.2568 -0.4949i 0.0419 -0.0236i -0.0186 -0.0222i 0.0228 + 0.0083i -0.0204 + 0.0347i -0.0047 + 0.0069i 0.0891 -0.1254i 0.0438 -0.1065i -0.0022 + 0.0058i -0.0482 -0.0246i 0.0238 + 0.0190i -0.0204 + 0.0003i 0.0462 + 0.0006i -0.0221 + 0.0470i 0.0097 -0.0089i -0.2798 + 0.5013i -0.0101 + 0.0274i Table 3.4: Difference between the quadl integral values and the reference integral values ×10 0 0.0000 + 0.0000i -0.0006 + 0.0002i -0.0002 -0.0000i 0.0001 + 0.0004i -0.0000 + 0.0000i -0.0014 + 0.0008i -0.0000 + 0.0016i 0.0000 + 0.0000i 0.0000 + 0.0000i 0.0002 -0.0000i -0.0001 -0.0001i 0.0000 -0.0000i 0.0001 -0.0000i 0.0004 + 0.0003i -0.0004 -0.0003i -0.0001 + 0.0000i 0.0001 + 0.0001i 0.0007 + 0.0007i -0.0008 -0.0002i -0.0001 -0.0001i -0.0002 + 0.0002i -0.0006 + 0.0008i -0.0003 + 0.0007i -0.0001 + 0.0002i -0.0002 -0.0003i -0.0005 -0.0005i 0.0005 + 0.0000i 0.0002 + 0.0001i -0.0000 -0.0002i 0.0002 -0.0000i 0.0001 -0.0001i -0.0001 -0.0001i Table 3.5: Difference between the trapz integral values and the reference integral values Error indicator The extended VTCR possesses the same feature as VTCR for error estimation. In each subdomain, its shape functions satisfy the governing equation. Meanwhile the boundary conditions are not satisfied automatically. Consequently, for the extended VTCR, the way to evaluate the accuracy of the approximated solution in subdomain is still to verify the continuity in terms of displacement and velocity with all the other subdomains in the vicinity of Ω E . Therefore, the definition of error indicator for the extended VTCR is the same as (2.24). Numerical examples Academic study of the extended VTCR on medium frequency heterogeneous Helmholtz problem A simple geometry of square [0 m; 1 m]×[0 m; 1 m] is considered for domain Ω. In this domain, η = 0.01, α = 150 m -3 , β = 150 m -3 , γ = 1000 m -2 . Boundary conditions ×10 0 0.0000 + 0.0000i -0.0000 + 0.0000i -0.0000 + 0.0000i 0.0000 + 0.0000i 0.0000 -0.0000i -0.0000 + 0.0000i -0.0000 + 0.0000i 0.0000 -0.0000i -0.0000 -0.0000i 0.0000 -0.0000i -0.0000 -0.0000i -0.0000 + 0.0000i 0.0000 -0.0000i -0.0000 -0.0000i 0.0000 + 0.0000i -0.0000 + 0.0000i 0.0000 + 0.0000i 0.0000 + 0.0000i -0.0000 -0.0000i -0.0000 -0.0000i -0.0000 + 0.0000i -0.0000 + 0.0000i -0.0000 -0.0000i -0.0000 + 0.0000i -0.0000 -0.0000i -0.1461 -0.1707i 0.1487 + 0.0262i 0.0000 + 0.0000i -0.0000 + 0.0000i 0.1906 -0.1578i 0.0126 -0.1533i -0.0000 + 0.0000i = 10 • , θ 2 = 55 • , θ 3 = 70 • correspond to propagation angle in P 1 , P 2 , P 3 respectively. The definition of the problem and the discretization strategy can be seen on Figure 3.4. This choice of geometry and boundary conditions allow one to calculate the real relative error of the extended VTCR method with exact solution. Therefore, the real relative error is defined as following: ε ex = u -u ex L 2 (Ω) u ex L 2 (Ω) (3.28) The result could be seen in Figure 3.5. The convergence curves of this extended VTCR method in heterogeneous problem behaves in the similar way as the convergence curves of the VTCR in the Helmholtz problem of constant wave number. Merely a small amount of degrees of freedom is sufficient to attain the convergence of numerical result, which is under a small relative error. 
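In practice, the real relative error (3.28) can be approximated by sampling both fields on a fine grid, as in the minimal sketch below; u_num and u_ex are assumed function handles returning the extended VTCR field and the exact field, and the grid density is an arbitrary choice.

% Minimal sketch: discrete evaluation of the relative error (3.28) on the
% square domain [0 m, 1 m] x [0 m, 1 m]. u_num and u_ex are assumed handles.
[X, Y]  = meshgrid(linspace(0, 1, 201));       % sampling grid on the square domain
U       = u_num(X, Y);                         % extended VTCR solution (assumed handle)
Uex     = u_ex(X, Y);                          % exact solution (assumed handle)
eps_ex  = norm(U(:) - Uex(:)) / norm(Uex(:));  % discrete counterpart of the L2 relative error (3.28)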
It can be seen that to obtain the result with same precision, refinement of subdomains results in the need of more degrees of freedom. This phenomena could be explained by the convergence properties of the VTCR. As presented in Chapter 2, for the standard VTCR both the p-convergence and the h-convergence will lead to convergent results but p-convergence performs in a far more efficient way. This special feature is inherited from the standard VTCR to this extended VTCR. Correspondingly, in Figure 3.5, the extended VTCR with only one computational domain converges the fastest. The one with nine subdomains is the slowest and the one with four subdomains locates in the middle. Study of the extended VTCR on semi-unbounded harbor agitation problem This example corresponds to a study of water agitation of a harbor. The movement of waves is dominated by Helmholtz equation. Incoming wave from far away field gives rise to reflected wave inside the harbor. The water wave length is much smaller than the geometry size of harbor. It is a medium frequency Helmholtz problem since there exists many periods of wave in the harbor. Ω 1m 1m Ω Ω 1 Ω 2 Ω 3 Ω 4 Ω 1 Ω 2 Ω 3 Ω 4 Ω 5 Ω 6 Ω 7 Ω 8 Ω 9 The work in [START_REF] Modesto | Proper generalized decomposition for parameterized Helmholtz problems in heterogeneous and unbounded domains: application to harbor agitation[END_REF] solves the agitation of a real harbor with multi input data in an heterogeneous media and with an unbounded domain. There are mainly three difficulties in this problem. The first one is the pollution errors. The problem requires a large amount of degrees of freedom of FEM since there are large numbers of waves over the computational domain. The second difficulty is to solve the influence of small geometric features to the solution. The proper generalized decomposition (PGD) model reduction approach was used to obtain a separable representation of the solution at any point and for any incoming wave direction and frequency. By this approach, the calculation cost is drastically reduced. The third difficulty is to solve the unbounded problem. Facing to this task, the perfectly matched layers (PMLs) [Berenger, 1994, Modesto et al., 2015] was proposed to satisfy the Sommerfeld radiation condition. A special artificial layer is created around the studied domain to absorb the non-physical waves. The work [START_REF] Giorgiani | High-order continuous and discontinuous Galerkin methods for wave problems[END_REF] compares three Galerkin methods-continuous Galerkin, Compact Discontinuous Galerkin, and hybridizable discontinuous Galerkin in terms of performance and computational efficiency in 2-D scattering problems for low and high-order polynomial approximations. It shows the superior performance of high-order elements. It also presents the similar capabilities for continuous Galerkin and hybridizable discontinuous Galerkin, when high-order elements are adopted, both of them outperforming compact discontinuous Galerkin. Model of problem: Definition of the harbor is shown in Figure 3.6. The agitation of harbor depends on incoming wave. In later part of this section, one can see different numerical results calculated with different parameters including the angle of incoming wave and the frequency of incoming wave. Without losing generality, all boundaries of the harbor are supposed to be totally reflecting boundaries, which is denoted by Γ R : (1iη)∂ n u = 0 over Γ R (3.29) u + 0 represents incoming wave from far away onto the harbor. 
It can be expressed as u + 0 = A + 0 e ik + 0 ζ(cosθ + 0 x+sinθ + 0 y) , where A + 0 is the amplitude of wave and θ + 0 is the angle of wave propagation direction. The origin of coordinate is O, located in the middle point of the harbor entrance. As Figure 3.7 shows, the sea bottom of the region outside the harbor varies slowly and the depth of water is considered as constant there. The depth of water inside the harbor decreases when it is closer to the land. Consequently, the length of wave varies inside the harbor. An assumption is proposed in this example that the depth of water h complies with the following relation: h = 1 a + by (3.30) where a, b are constant parameters. This relation could describe the variation of the water depth with respect to y. The relation between wave frequency ω and water depth h follows the non linear dispersion relation: ω 2 = kgtanh(kh) (3.31) where g = 9.81 m/s 2 is the gravitational acceleration and k is the wave number. In the case h ≪ λ, when the depth of water is far more less than the length of wave, there is the following shallow water approximation: tanh(kh) ≈ kh (3.32) This approximation is valid in the underwater field near seashore. The numerical result of this section will further validate of this approximation. Thus it can be obtained that: k 2 = g -1 ω 2 (a + by) (3.33) Incoming waves cause two kinds of reflection, which include the wave reflected by the boundary inside the harbor and the wave reflected by the boundary locating outside the harbor. Part of these reflected waves propagate from the harbor to far away field. This phenomenon leads to a semi-unbounded problem. In physics these waves need to satisfy Sommerfeld radiation condition. In our 2D model it is represented by: lim r→+∞ √ r ∂u(r) r -iku(r) = 0 (3.34) where r is the radial direction in polar coordinate. Unbounded problem: Many methods have been proposed to solve unbounded problem such as perfectly matched layer (PMLs) [Berenger, 1994, Modesto et al., 2015], Nonreflecting artificial boundary conditions (NRBC) [Givoli, 2004], Bayliss, Gunzburger and Turkel Local non-reflecting boundary conditions (BGT-like ABC) [Bayliss et Turkel, 1980, Antoine et al., 1999] and Dirichlet to Neumann non-local operators [Givoli, 1999]. PMLs creates an artificial boundary and a layer outside the region of interest in order to absorb the outgoing waves. NRBC, ABC and Dirichlet to Neumann non-local operators introduce a far away artificial boundary which leads to minimize spurious reflections. VTCR method can combine these artificial boundary techniques to solve the semi-unbounded harbor problem without difficulty. But here analytic solution is taken into account to solve the problem. This choice allows us to take great advantage of VTCR method. Since analytic solution verifies Helmholtz equation and Sommerfeld radiation condition, it can be used as shape functions in VTCR. Compared with artificial boundary techniques, this approach leads to a simpler strategy of calculation. The idea of seeking for analytic solution on the unbounded domain outside the harbor can be illustrated by two steps. As Figure 3.8 shows, in the first step a relatively simple problem is considered. Without the region inside the harbor, incoming wave u + 0 agitates on a straight boundary which is infinitely long. The boundary condition here is same as (3.29). The reflected wave is denoted by u a . 
It is evident that for such a problem, when u + 0 = A + 0 e ik + 0 ζ(cosθ + 0 x+sinθ + 0 y) , it can be obtained that u a = A + 0 e ik + 0 ζ(cosθ a x+sinθ a y) , where θ a = 2π -θ + 0 . For the second step as Figure 3.9 shows, it is exactly the original harbor agitation problem in this Section. If u a of the first step is taken as exact solution here, it will create the residual value because the governing equation inside the harbor and boundary conditions are not satisfied. It is logical to add a complementary solution outside the harbor to offset the residual value. In this point of view, the origin O is chosen to develop the expansion of this complementary solution, which is denoted by u b . Here u b is required to satisfy governing equation outside the harbor, where the wave number is constant. Furthermore u b is required to satisfy the boundary condition on Γ O and Sommerfeld radiation condition. In previous work of VTCR [START_REF] Kovalevsky | The Fourier version of the Variational Theory of Complex Rays for medium-frequency acoustics[END_REF], it is shown that for 2D acoustic domain exterior to a circular boundary surface, the analytic solution of reflected wave U s of scattering problem in polar coordinate is in form of [Herrera, 1984]: U s = ∞ ∑ n=0 (A n sin(nθ) + B n cos(nθ)) H (1) n (ζkr) (3.35) where H It can be verified that (3.36) satisfies boundary conditions on Γ O . Therefore u b is found. Except on the origin point, the analytic solution on the domain outside the harbor equals to the sum of u + 0 , u a and u b . Computational strategy: As mentioned before, our computational strategies are shown in Figure 3.11 and Figure 3.12. The domain outside the harbor is divided into two computational subdomains Ω 1 and Ω 2 . The subdomain Ω 2 is a semicircular domain, whose center locates at the origin point. The subdomain Ω 1 ranges from the boundary of Ω 2 to infinity. On this domain the analytic solution presented before is used. Computational domain Ω 2 is created to separate origin point from Ω 1 . Since k is considered as constant value of the region outside the harbor, plane wave function is used as shape function on subdomain Ω 2 . Inside the harbor two different strategies of discretization are chosen in Figure 3.11 and Figure 3.12. The first strategy is that the domain inside the harbor is divided into one computational subdomain (See Figure 3.11). The second strategy is that the domain inside the harbor is divided into four computational subdomains (See Figure 3.12). By When the subdivision of computational domain is done, one needs to choose shape functions used on each subdomain. As mentioned before, u on domain Ω 1 contains u + 0 , u a and u b . This relation can be represented by u| Ω 1 = u + 0 + u a + u b . The unknown value u b can be expanded in the series written as (3.36). To achieve a discrete version of the VTCR, finite-dimensional space is required. Thus (3.36) needs to be truncated into finite series. The working space of u b denoted by U b Ω 1 is defined as: U b Ω 1 = u b ∈ L 2 (Ω 1 ) : u b (x,y) = N 1 ∑ n=0 A 1n cosnθH (1) n (ζkr), A 1n ∈ C, n = 0, • • • ,N 1 (3.37) where A 1n is the unknown degree of freedom. N 1 is the number of degree of freedom on Ω 1 . Working space of Ω 2 is defined as follows: U Ω 2 = u ∈ L 2 (Ω 2 ) : u(x,y) = N 2 ∑ n=0 A 2n e ikζ(cosθ n x+sinθ n y) , A 2n ∈ C, n = 1, • • • ,N 2 (3.38) where A 2n is the unknown amplitude of plane wave. N 2 is the number of degree of freedom on Ω 2 . 
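As an illustration of the working space (3.37), the minimal sketch below evaluates a truncated outgoing expansion u_b at polar points. The truncation order, the amplitudes A_n, the constant exterior wave number and the complex factor ζ are placeholder assumptions, not the values of the harbor example.

% Minimal sketch: evaluation of a truncated expansion of the form (3.37),
%   u_b(r, theta) = sum_{n=0}^{N1} A_n cos(n*theta) H^(1)_n(zeta*k*r).
N1   = 20;                      % truncation order of the expansion
A    = ones(N1 + 1, 1);         % placeholder amplitudes (the actual VTCR unknowns)
k    = sqrt(1.2e-3);            % assumed constant wave number outside the harbor [1/m]
zeta = 1;                       % complex factor of the text, taken equal to 1 here

ub = @(r, th) arrayfun(@(rr, tt) ...
        sum(A .* cos((0:N1).' * tt) .* besselh((0:N1).', 1, zeta*k*rr)), r, th);

[R, T] = meshgrid(linspace(1000, 2000, 60), linspace(pi, 2*pi, 60));  % truncated polar patch
Ub = ub(R, T);                  % field radiated outside the harbor by the expansion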
On the computational domain of inside harbor, the working space is constituted by the ψ(x,P) functions and it is in the form of U Ω m = u ∈ L 2 (Ω m ) : u(x,y) = N m ∑ n=0 A mn ψ(x,P n ), A mn ∈ C, n = 1, • • • ,N m (3.39) where A mn is the unknown amplitude of the Airy waves on subdomain Ω m with m 3. N m is the number of degrees of freedom on Ω m . Numerical result: Here ω = 0.5 rad/s, a = 4.8 • 10 -2 m -1 , b = 4.8 • 10 -5 m -2 , η = 0.03 are the chosen as parameters. Therefore the depth of water ranges from -20.83 m to -8.33 m, which corresponds to slow variation of water depth near the seashore. The relation between k 2 and y follows (3.33). Taking into account the parameters, it can be derived that: andλ ∈ [104.72 m, 181.38 m]. The shallow water approximation (3.32) is approved to be valid since λ ≫ h. k 2 = 1.2 • 10 -3 -1.2 • 10 -6 y (3.40) Inside the harbor k 2 ∈ [1.2 • 10 -3 m -2 , 3.0 • 10 -3 m -2 ] Let the amplitude of incoming wave corresponds to A + 0 = 2 m and the angle of incoming wave corresponds to θ + 0 = 45 • . Following the computational strategies mentioned above, numerical results are shown in Figure 3.13. In this example the exact solution is unknown, therefore one adopts the error indicator (2.24). For the first strategy, one chooses N 1 = 20, N 2 = 100, N 3 = 160. The result error is 6.21 • 10 -3 . For the second strategy, one chooses N 1 = 20, N 2 = 100, N 3 = 100, N 4 = 160, N 5 = 160, N 6 = 160. The result error is 1.52 • 10 -2 . The results could be seen in Figure 3.13 and Figure 3.14. Figure 3.13 presents the global results over all subdomains. Since Ω 1 is the semi-unbounded domain, here the numerical result only shows a truncated part with r ∈ [1000 m, 2000 m] in polar coordinate. Figure 3.14 shows the results inside the harbor calculated by the first strategy and the second strategy. One can see from the results that the two different computational strategies of the extended VTCR lead to the same result. It should be noticed that the performance of the first strategy is slightly better than the second strategy and it uses less degrees of freedom. Again, this phenomenon can be explained by the fact that the p-convergence always outperforms the h-convergence in the VTCR. It should also be noticed that only 280 degrees of freedom in all are sufficient to solve this medium frequency heterogeneous Helmholtz problem. The coarse domain disretization and small amounts of degrees of freedom used by the VTCR typify the advantage of this method. It also can be seen from Figure 3.13 that the numerical solution has a good continuity between adjacent subdomains. With the same parameters and with the first computational strategy mentioned before, two other results are calculated by changing the angle of incoming wave to θ + 0 = 35 • and θ + 0 = 65 • (see Figure 3.15). Again, results show that the continuity of displacement and velocity between subdomains are well verified. Conclusion This chapter proposes an extended VTCR method, which is able to solve heterogeneous Helmholtz problem. In this extended VTCR, new shape functions are created. In the context of Trefftz Discontinuous Galerkin method, these new shape functions satisfy governing equation a priori. Therefore the extended VTCR is only required to meet the continuity conditions between subdomains and the boundary conditions. All these conditions are included in the variational formulation, which is equivalent to the reference problem. 
From the academic studies one learns the convergence properties of the extended VTCR. This approach converges in the same way as the VTCR method presented in Chapter 2. Then a harbor agitation problem is studied. Compared with previous examples, the harbor has a more complex geometry. By applying the extended VTCR, the problem is solved by a simple domain discretization and a small amount of rays. To satisfy Sommerfeld radiation condition, the analytic solutions of unbounded subdomain are developed. Then these analytic solutions are further used as the shape functions by the VTCR on the unbounded subdomain. Inside the harbor, where the square of wave number varies linearly due to the variation of depth of water, the Airy wave functions are used as shape functions. In the calculation, one adopts two different strategies. The first strategy only has one subdomain inside the harbor, while the second strategy has four subdomains inside the harbor. From the results it could be seen that with a good angular discretization, the two strategies lead the calculation converges to the same result. It successfully illustrates that the VTCR has a significant potential to solve true engineering problem in an efficient and flexible way. Chapter 4 The Zero Order and the First Order WTDG for heterogeneous Helmholtz problem This chapter presents a wave based Weak Trefftz Discontinuous Galerkin method for heterogeneous Helmholtz problem. One locally develops general approximated solution of the governing equation, the gradient of the wave number being the small parameter. In this ways, zero order and first order approximations are defined. These functions only satisfy the local governing equation in the average sense. In this way, the Zero Order WTDG adopts the plane wave functions as shape functions. The First Order WTDG adopts the Airy wave functions as shape functions. Academic studies will show the features of the Zero Order WTDG and the First Order WTDG for heterogeneous Helmholtz problem. Lastly, the harbor agitation example is restudied by the Zero Order WTDG method. Its results are compared with the results calculated by the extended VTCR method in Chapter 3. The WTDG was first introduced in [Ladevèze, 2011, Ladevèze et Riou, 2014]. In this method, the domain is divided into several subdomains. Shape functions are independent from one subdomain to another. The solution continuity between two adjacent subdomains is verified weakly through the variational formulation of reference problem. The reference problem considered is an heterogeneous Helmholtz problem over a domain Ω. Let Ω be partitioned into N non overlapping subdomains Ω = ∪ N E=1 Ω E . Denoting ∂Ω E as the boundary of Ω E , we define Γ EE = ∂Ω E ∩ ∂Ω and Γ EE ′ = ∂Ω E ∩ Ω E ′ . The proposed approach here is searching the solution u in the functional space U such that U = {u | u |Ω E ∈ U E } U E = {u E | u E ∈ V E ⊂ H 1 (Ω E )|(1 -iη)∆u E + k2 E u E + r d = 0} (4.1) where kE is an approximation of k in subdomain Ω E . Although it could be close to k, kE is still an approximation and the shape functions defined in (4.1) will not satisfy a priori the governing equation in (2.1). This is the reason why this method is named as weak Trefftz method instead of Trefftz method. In Section 4.1.3 the concrete form of kE will be further discussed. When r d = 0 the vector spaces associated with U and U E are defined as U 0 and U E,0 . 
The variational formulation can be written as: find u ∈ U such that Re   ik(x)   ∑ E,E ′ ∈E Γ EE ′ 1 2 {q u • n} EE ′ { ṽ} EE ′ - 1 2 [ qv • n] EE ′ [u] EE ′ dS -∑ E∈E Γ EE ∩∂ 1 Ω qv • n (u -u d ) dS + ∑ E∈E Γ EE ∩∂ 2 Ω (q u • n -g d ) ṽdS -∑ E∈E Ω E divq u + k 2 u + r d ṽdΩ = 0 ∀v ∈ U 0 (4.2) where ˜ represents the conjugation of . It should be mentioned that the term which contains the governing equation in the formulation ∑ E∈E Ω E divq u + k 2 u + r d ṽdΩ could also be replaced by ∑ E∈E Ω E 1 2 divq u + k 2 u + r d ṽ + 1 2 divq v + k 2 v ũdΩ and the demonstrations in the Section 4.1.2 will keep unchanged. Equivalence of the reference problem Let us note that the WTDG formulation (4.2) can be written as: find u ∈ U such that b(u,v) = l(v) ∀v ∈ U 0 (4.3) where b meets the property that b(u,u) is real. Property 1. By defining u 2 U = ∑ E∈E Ω E grad ũ • gradudΩ, u U is a norm over U 0 . Proof. When u U = 0, we can find that gradu = 0. Then u could be a non-zero constant or zero. From the definition of U 0 , it follows that: (1 -iη)∆u + k2 E u = 0 over Ω E with k2 E > 0. It can be deduced that u = 0 over Ω. Therefore u U is a norm over U 0 . Property 2. When η is positive, the WTDG formulation is coercive. Proof. If it is the weak Trefftz formulation case, then we have: b(u,u) = Re ik ∑ E∈E ∂Ω E (q u • n) ũdS -∑ E∈E Ω E divq u ũdΩ ∀u ∈ U 0 (4.4) Consequently, b(u,u) = ∑ E∈E kη Ω E grad ũ • gradudΩ (4.5) Let us denote cl Ω a bounded closed set, which contains Ω and ∂Ω. Because k is a continuous function, k has an minimum value on cl Ω. Denoting k in f = inf{k(x)| x ∈ cl Ω}, it is evident that when η is positive, for u ∈ U 0 , b(u,u) k in f η u 2 U . Property 3. The WTDG formulation (4.2) is equivalent to reference problem (2.1). And it has a unique solution. Proof. If u is a solution of (2.1), it is also a solution of (4.2). Therefore the existence of solution is proved. From Property (1) and Property (2), it can be directly deduced that the solution u is unique. The shape functions of the Zero Order WTDG and the First Order WTDG Defined in (4.1), the shape functions used in each subdomain need to satisfy the Helmholtz equation where kE is an approximation of k on Ω E . Defining x e ∈ Ω E , one has the Taylor's series expansion of k 2 at the point x e : T . ξ = 0 or 1. k 2 = k 2 (x e ) + ξ∇(k 2 )| x=x e • (x -x e ) + o( x -x e ( 1+ξ Taking the Zero Order approximation of (4.6) and replacing it in (2.1), it can be obtained that: (1iη)∆u + k 2 (x e )u = 0 (4.7) In this case k2 E = k 2 (x e ) and it is known that the shape functions which satisfy (4.7) are the plane wave functions. Taking the First Order approximation of (4.6) and replacing it in to (2.1), it can be obtained that: (1 -iη)∆u + k 2 (x e ) + ∇k 2 | x=x e • (x -x e ) u = Approximations and discretization of the problem To implement the WTDG method, it is required to take a finite dimensional subspace U h 0 of U 0 . In Section 4.1.3, two kinds of shape functions are generated by taking the approximation of wave number k on subdomain. For both the plane wave functions and the Airy wave functions, they represent waves propagating in the 2D plane. Thus by using an angular discretization, one can build the functional space U h 0 . For the plane wave functions, U h 0 is defined as: U h 0 = u ∈ L 2 (Ω) : u(x) |Ω E = M E ∑ m E =1 A m E e ik•x , A m E ∈ C, E = 1, • • • ,N (4.10) For the Airy wave functions, U 0,s is defined as: 4.11) where M E is the number of waves and A m E is the amplitude of the wave. 
U^h_0 = \left\{ u \in L^2(\Omega) : u(x)|_{\Omega_E} = \sum_{m_E=1}^{M_E} A_{m_E}\, \psi(x, P_{m_E}),\ A_{m_E} \in \mathbb{C},\ E = 1, \dots, N \right\} \quad (4.11)

Numerical implementation

4.3.1 Integration of the WTDG

To implement the WTDG method, numerical integrations must be performed over the domain and along the boundaries. Since the Zero Order WTDG and the First Order WTDG both use rapidly oscillatory shape functions, standard integration methods such as Gauss quadrature are not suitable for this kind of problem. Due to the complexity of the Airy wave function, one needs to resort to the numerical integration techniques presented in Chapter 3.3. Benefiting from the properties of plane wave functions, the numerical integration of the Zero Order WTDG can be achieved entirely by semi-analytic integration. There are two main reasons for this. First, since the plane wave functions are exponential functions, the product of two shape functions is still an exponential function: instead of a direct multiplication, one adds the exponents of the two exponentials to obtain the exponent of the result and multiplies the two coefficients to obtain the final coefficient. Second, the integral of an exponential function can be calculated analytically once its exponent and coefficient are given. Moreover, the subdomains of the WTDG used here are rectangles, so all of their boundaries are straight lines. The weak formulation of the WTDG contains the governing equation, the continuity of displacement and velocity on the interfaces and the boundary conditions. Consequently, there are three kinds of integration: integration over the domain, integration along the interfaces between subdomains and integration along the boundary. The first kind of integration can be done analytically without difficulty. In fact, any integration over the domain for the Zero Order WTDG can be decomposed into the following basic integration problem.
Supposing (k 1 cosθ 1 + k 2 cosθ 2 ) = 0 and (k 1 sinθ 1 + k 2 sinθ 2 ) = 0, the analytic integration could be calculated in the following way: y 2 y 1 x 2 x 1 C 1 e ik 1 cosθ 1 x+ik 1 sinθ 1 y •C 2 e ik 2 cosθ 2 x+ik 2 sinθ 2 y = y 2 y 1 x 2 x 1 C 1 •C 2 e i(k 1 cosθ 1 +k 2 cosθ 2 )x+i(k 1 sinθ 1 +k 2 sinθ 2 )y = -C 1 •C 2 (k 1 cosθ 1 + k 2 cosθ 2 ) • (k 1 sinθ 1 + k 2 sinθ 2 ) e i(k 1 cosθ 1 +k 2 cosθ 2 )x+i(k 1 sinθ 1 +k 2 sinθ 2 )y x 2 x 1 y 2 y 1 (4.12) If (k 1 cosθ 1 + k 2 cosθ 2 ) = 0 and (k 1 sinθ 1 + k 2 sinθ 2 ) = 0, the integration becomes: y 2 y 1 x 2 x 1 C 1 e ik 1 cosθ 1 x+ik 1 sinθ 1 y •C 2 e ik 2 cosθ 2 x+ik 2 sinθ 2 y = y 2 y 1 x 2 x 1 C 1 •C 2 e i(k 1 cosθ 1 +k 2 cosθ 2 )x+i(k 1 sinθ 1 +k 2 sinθ 2 )y = C 1 •C 2 (x 2 -x 1 ) 2i(k 1 sinθ 1 + k 2 sinθ 2 ) e i(k 1 sinθ 1 +k 2 sinθ 2 )y y 2 y 1 (4.13) If (k 1 sinθ 1 + k 2 sinθ 2 ) = 0 and (k 1 cosθ 1 + k 2 cosθ 2 ) = 0 , the integration becomes: y 2 y 1 x 2 x 1 C 1 e ik 1 cosθ 1 x+ik 1 sinθ 1 y •C 2 e ik 2 cosθ 2 x+ik 2 sinθ 2 y = y 2 y 1 x 2 x 1 C 1 •C 2 e i(k 1 cosθ 1 +k 2 cosθ 2 )x+i(k 1 sinθ 1 +k 2 sinθ 2 )y = C 1 •C 2 (y 2 -y 1 ) 2i(k 1 cosθ 1 + k 2 cosθ 2 ) e i(k 1 cosθ 1 +k 2 cosθ 2 )x x 2 x 1 (4.14) If (k 1 sinθ 1 + k 2 sinθ 2 ) = 0 and (k 1 cosθ 1 + k 2 cosθ 2 ) = 0, the integration becomes: y 2 y 1 x 2 x 1 C 1 e ik 1 cosθ 1 x+ik 1 sinθ 1 y •C 2 e ik 2 cosθ 2 x+ik 2 sinθ 2 y = y 2 y 1 x 2 x 1 C 1 •C 2 e i(k 1 cosθ 1 +k 2 cosθ 2 )x+i(k 1 sinθ 1 +k 2 sinθ 2 )y = C 1 •C 2 (x 2 -x 1 )(y 2 -y 1 ) 4 (4.15) For the second kind of integration, it could also be calculated analytically. The integration is along the interface. The analytic method is similar to the integration over domain. Here it is preferable not to repeat the process. One could refer to (4.12), (4.13), (4.14) and (4.15). For the third kind of integration, when the boundary condition can be decomposed by the Fourier expansions in form of exponential functions, the calculation could be done analytically. In this situation the integration is similar as the case along the interface. However, when the boundary of the domain is irregular, the integration needs to be implemented numerically. The numerical integration methods are proposed in Chapter 3.3.1. Iterative solver of the WTDG Since in the WTDG the shape functions are the wave functions in form of ray approximations, the matrix will suffer from ill-conditioning when the number of shape functions become too large. A similar feature is observed on the VTCR in Chapter 2. Thereby, the pinv iterative solver is chosen again for both the Zero Order WTDG and the First Order WTDG. More details could be seen in Section 2.3. Convergence of the Zero Order and the First Order WTDG 4.4.1 Convergence criteria The common point of the VTCR and the WTDG is that they all take the wave functions as shape functions. As mentioned before, the shape functions of the VTCR satisfy a priori the governing equation. Therefore the residue will only appear on the boundary of each subdomain. Upon the convergence criteria of (2.23) and (3.27), a sufficient large number of rays will make the results of standard VTCR and the extended VTCR converge with a desired precision. Unlike the VTCR method, the WTDG will not only incur residues on the boundary but also inside the domain, because the governing equation is not satisfied by the shape functions. In this case, only a sufficient large number of rays could make the result converge but at the mean time there may exist a big residue inside the domain. 
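Returning to the analytic domain integrations (4.12) to (4.15), they can be gathered in one small routine, as in the minimal sketch below (written as a small MATLAB function file); the function names and the tolerance used to detect a vanishing combined wave number are placeholders.

function I = planewave_rect_integral(C1, k1, t1, C2, k2, t2, x1, x2, y1, y2)
% Minimal sketch in the spirit of (4.12)-(4.15): analytic integration of
% C1*exp(i*k1*(cos(t1)*x + sin(t1)*y)) * C2*exp(i*k2*(cos(t2)*x + sin(t2)*y))
% over the rectangle [x1, x2] x [y1, y2].
    kx = k1*cos(t1) + k2*cos(t2);                   % combined wave number in x
    ky = k1*sin(t1) + k2*sin(t2);                   % combined wave number in y
    I  = C1 * C2 * oneD(kx, x1, x2) * oneD(ky, y1, y2);
end

function s = oneD(kappa, a, b)
% Analytic integral of exp(i*kappa*t) over [a, b], including the kappa -> 0 limit.
    if abs(kappa) < 1e-12                           % placeholder tolerance
        s = b - a;
    else
        s = (exp(1i*kappa*b) - exp(1i*kappa*a)) / (1i*kappa);
    end
end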
For the WTDG, a sufficient number of subdomains and rays are both the essential conditions to obtain an accurate solution. The technique to choose a sufficient number of subdomains will be illustrated in Section 4.4.2. Here, the criteria for the number of rays is proposed. For the Zero Order WTDG the criteria is defined as: N e = τk e,0 R e /(2π) (4.16) where N e is the number of rays, τ a parameter to be chosen and R e is the characteristic radius of the domain. k e,0 is a constant average value of the wave number on the domain. τ = 2 is chosen in this dissertation. For the First Order WTDG the criteria is defined as: N e = τk e,max R e /(2π) (4.17) where N e is the number of rays, τ a parameter to be chosen and R e is the characteristic radius of domain. k e,max is the maximum value of the linearisation approximation of the wave number on the domain. τ = 2 is chosen in this dissertation. Error indicator and convergence strategy Unlike the VTCR, in each subdomain the shape functions of the WTDG neither satisfy the governing equation nor satisfy the boundary conditions. In this case, the definition of (2.24) is not a valid error estimator because the error inside the subdomain is not taken into account. It leaves an open question to define the error estimator for the WTDG. In this dissertation, since the numerical examples are academic, it is practicable to take a precalculated WTDG solution as a reference solution. As (4.18) shows, the error estimator for the WTDG method is only based on the results of the WTDG. The reference solution is calculated with an overestimated number of subdomains and an overestimated number of rays. ε W T DG = u h -u re f L 2 (Ω) u re f L 2 (Ω) (4.18) where u re f is the overestimated solution of the WTDG. Criteria (4.16) and ( 4.17) are used to overestimate the solution. τ = 4 is chosen in this dissertation for the overestimation. Convergence strategy: Since the WTDG requires a sufficient number of subdomains to decrease the residues inside the domain, a convergence strategy of the WTDG is proposed as following: • 1. Start the calculation with several subdomains and quasi-sufficient rays; Calculate the error and assign its value to ε 0 . If m stop ε 0 , go to step 4. If m stop < ε 0 , go to step 2. • 2. Increase the number of rays; Calculate the error and assign its value to ε 1 ; If m stop ε 1 , go to step 4. If m stop < ε 1 < ε 0 , assign the value of ε 1 to ε 0 and repeat step 2. If ε stop < ε 1 and ε 0 = ε 1 , go to step 3. • 3. Increase subdomains and set a quasi-sufficient number of rays; Calculate the error and assign its value to ε 0 . Then go to step 2. • 4. Obtain the result with the desired precision and finish the calculation. where m stop is a desired precision. The quasi-sufficient rays means that the angular discretization meets the criteria of (4.16) if plane wave functions are used and meets (4.17) if Airy wave functions are used. τ = 2 is used to determine the number of quasi-sufficient rays. • correspond to propagation angles in P 1 , P 2 and P 3 respectively. This choice of geometry and boundary conditions allows one to calculate the relative error of the Zero Order WTDG method with the exact solution. The real relative error is defined as: ε ex = u -u ex L 2 (Ω) u ex L 2 (Ω) (4.19) The definition of the problem and the discretization strategy can be seen on Figure 4.1. It is evident that in this case the Airy wave function is the exact solution of the governing equation. 
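One possible reading of the convergence strategy of Section 4.4.2 in algorithmic form is sketched below; wtdg_solve, quasi_sufficient_rays and err_wtdg are hypothetical helpers standing for the assembly and solution step, the criteria (4.16)-(4.17) with τ = 2, and the error indicator (4.18).

% Minimal sketch of the convergence strategy of Section 4.4.2. The helpers
% wtdg_solve, quasi_sufficient_rays and err_wtdg are hypothetical placeholders.
m_stop = 1e-3;                           % desired precision
nSub   = 4;                              % step 1: start with a few subdomains
nRays  = quasi_sufficient_rays(nSub);    % quasi-sufficient rays, criterion (4.16) or (4.17)

e0 = err_wtdg(wtdg_solve(nSub, nRays));  % error of the starting computation
while e0 > m_stop
    nRays = nRays + 4;                               % step 2: enrich the angular discretization
    e1 = err_wtdg(wtdg_solve(nSub, nRays));
    if e1 >= e0                                      % the error stagnates
        nSub  = 4 * nSub;                            % step 3: refine the subdomains
        nRays = quasi_sufficient_rays(nSub);
        e1 = err_wtdg(wtdg_solve(nSub, nRays));
    end
    e0 = e1;
end                                                  % step 4: e0 <= m_stop, computation finished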
This example has already been studied by the extend VTCR method in Chapter 3. Here it serves as a quick example to show the capacity of the Zero Order WTDG in dealing with the medium frequency Helmholtz problem of slowly varying wave number. In Figure 4.1, it shows five strategies to discretize the subdomains of the Zero Order WTDG. For each strategy, the number of subdomains is fixed and the number of wave is gradually increased to draw the convergence curve. It corresponds to the p-convergence study. Moreover, for each strategy, the number of wave keeps the same in each subdomain. Figure 4.2 shows the convergence curves. First, it implies that for each strategy, the convergence curve will remain nearly unchangeable after certain degrees of freedom. This could be explained by the fact that a sufficient number of rays is used in each subdomain. But due to the number of subdomains is fixed, the residue inside the subdomain can not be further decreased. Second, it could be observed that the performance of the convergence curve depends on the number of subdomains. The reason is that the WTDG takes an approximation value of wave number in each subdomain. When more subdomains are used, the residue caused by approximation of the governing equation will decrease correspondingly. This phenomenon is consistent with the convergence study presented in Section 4.4.1. It reflects the significant feature of the WTDG that it can smoothly approximate the exact solution of an heterogeneous Helmholtz problem through the refinement of subdomains. It can be seen that, by this method, one could obtain a result with a desired precision. Ω 1m 1m Ω Ω1 Ω2 Ω3 Ω4 Ω1 Ω2 Ω3 Ω4 Ω5 Ω6 Ω7 Ω8 Ω9 Ω1 Ω2 Ω3 Ω4 Ω5 Ω6 Ω7 Ω8 Ω9 Ω10 Ω11 Ω12 Ω13 Ω14 Ω15 Ω16 Ω1 Ω2 Ω3 Ω4 Ω5 Ω6 Ω7 Ω8 Ω9 Ω10 Ω11 Ω12 Ω13 Ω14 Ω15 Ω16 Ω17 Ω18 Ω19 Ω20 Ω21 Ω22 Ω23 Ω24 Ω25 Academic study of the First Order WTDG in the heterogeneous Helmholtz problem of sharply varying wave number In this numerical example, the geometry definition of reference problem keeps the same as the one in Section 4.5.1. A square computational domain with η = 0.01. But here k = 5x + 5y + 40. Therefore k varies 25% on Ω. The boundary condition on ∂Ω are Dirichlet type such that u d = 1. Since in this problem the general solution of governing equation is unknown, one could not use VTCR method to solve it. However, by smoothly approximating of the governing equation, the WTDG method could treat the problem. Both the Zero Order WTDG and the First Order WTDG are employed to show the performance of the WTDG approach. In this example, the error estimator defined by (4.18) is adopted here to capture the error. For the Zero Order WTDG, the overestimated calculation uses 225 subdomains and 40 plane waves in each subdomain to obtain u re f . For the First Order WTDG, the overestimated calculation uses 25 subdomains and 80 Airy waves in each subdomain to obtain u re f . The visual illustration of some results calculated by the First Order WTDG are presented in Figure 4.6. Besides, in Figure 4.6, there is also a result calculated by the FEM with 625 elements of quadric mesh of order 3. This result could be used to make a comparison with the results calculated by the Zero Order WTDG and First Order WTDG. It could be seen that both the Zero Order WTDG and the First Order WTDG are all capable to well solve this problem. There are three points to be mentioned here. First, the Zero Order WTDG needs a fine discretization of subdomains. 
This is because in this problem the wave number k varies greatly on Ω. Compared to the medium frequency Helmholtz problem of slowly varying wave number in Section 4.5.1, the problem becomes a fast varying wave number one. In this condition, it is necessary to divide more subdomains for the Zero Order WTDG. Otherwise, there will be a large residue inside the domain. Second, the First Order WTDG requires much less subdomains to obtain an accurate result. The reason is that the First Order WTDG makes a higher order approximation of the governing equation. The Zero Order WTDG takes the average value of the wave number on the subdomain while the First Order takes into account not only the average value of the wave number, but also its linear variation. Consequently, compared with the Zero Order WTDG, the First Order WTDG uses much less subdomains. Third, when more subdomains are used in WTDG, less waves are needed in each subdomain. This phenomenon can be explained by the convergence criteria (4.16) and (4.17), which determine the number of the plane waves and of the Airy waves for convergence. Again, it should also be noticed that in the WTDG a sufficient number of rays is only a necessary condition and is not the sufficient condition for its accuracy. Its residue is also influenced by the method to approximate the wave number. Therefore sufficient subdomains are essential in the WTDG to lead to the accurate result. Otherwise, it could be seen from the example on 4.5.1 that when there are insufficient subdomains, increasing the amount of waves will not further improve the accuracy of the WTDG. Study of the Zero Order WTDG on the semi-unbounded harbor agitation problem The harbor agitation problem is studied by the extended VTCR in Chapter 3. In this section, one uses the WTDG to solve this problem. For the region outside the harbor, the discretization of subdomain and the choice of their working space remain unchanged. However, inside the harbor, the zero order WTDG is adopted. As mentioned above, the working space of u b denoted by U b Ω 1 is defined as: U b Ω 1 = u b ∈ L 2 (Ω 1 ) : u b (x,y) = N 1 ∑ n=0 A 1n cosnθH (1) n (ζkr), A 1n ∈ C, n = 0, • • • ,N 1 (4.20 ) where A 1n is the unknown degree of freedom. N 1 is the number of degrees of freedom on Ω 1 . Working space of Ω 2 is defined as follows: U Ω 2 = u ∈ L 2 (Ω 2 ) : u(x,y) = N 2 ∑ n=0 A 2n e ikζ(cosθ n x+sinθ n y) , A 2n ∈ C, n = 1, • • • ,N 2 (4.21) where A 2n is the unknown amplitude of plane wave. N 2 is the number of degrees of freedom on Ω 2 . For the working space U Ω j of subdomain Ω j inside the harbor, where j 3, it is expressed as follows: U Ω j = u ∈ L 2 (Ω j ) : u(x,y) = N j ∑ n=0 A jn e ik j ζ(cosθ n x+sinθ n y) , A jn ∈ C, n = 1, • • • ,N j (4.22) where k j = k(x j ) and x j is the coordinate of center point on Ω j . A jn is the unknown amplitude of plane wave. N j is the number of degrees of freedom on Ω j . To implement the numerical calculation, parameters of the model remain the same as adopted in Chapter 3. The amplitude of incoming wave corresponds to A + 0 = 2 m and the angle of incoming wave θ + 0 = 45 • . Other parameters are chosen as: ω = 0.5 rad/s, a = 4.8 • 10 -2 m -1 , b = 4.8 • 10 -5 m -2 , η = 0.03. By replacing the parameters into (3.33), it can be derived that: The error indicator ε W T DG defined in (4.18) is used here to pilot the calculation to converge. 
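A minimal sketch of one local basis of the form (4.22) is given below: the wave number is frozen at the centre of the subdomain using the depth relation recalled in (4.23) just below; the centre coordinate, the number of waves and the factor ζ are assumed values.

% Minimal sketch of a local Zero Order WTDG basis (4.22): plane waves with
% the wave number frozen at the centre of the subdomain.
k2_of_y = @(y) 1.2e-3 - 1.2e-6*y;   % squared wave number inside the harbor, cf. (4.23)
yc   = -1500;                       % assumed y-coordinate of the centre of subdomain Omega_j [m]
kj   = sqrt(k2_of_y(yc));           % frozen local wave number k_j = k(x_j)
Nj   = 120;                         % number of plane waves in the subdomain
th   = 2*pi*(0:Nj-1)/Nj;            % regular angular discretization
zeta = 1;                           % complex factor of the text, taken equal to 1 here

% n-th local shape function of subdomain Omega_j evaluated at (x, y)
phi_j = @(n, x, y) exp(1i*kj*zeta*(cos(th(n))*x + sin(th(n))*y));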
k^2 = 1.2 \cdot 10^{-3} - 1.2 \cdot 10^{-6}\, y \quad (4.23)

Following the computational strategies mentioned above, the overestimated solution u_ref takes 20 subdomains and 200 waves per subdomain inside the harbor. The calculation is carried out with three different discretization strategies. The first strategy takes five subdomains and 120 waves per subdomain; the resulting error is 5.32 · 10⁻³ and 720 degrees of freedom are used in total. The second strategy takes ten subdomains and 120 waves per subdomain; the resulting error is 1.27 · 10⁻³ and 1320 degrees of freedom are used in total. The third strategy takes fifteen subdomains and 120 waves per subdomain; the resulting error is 3.06 · 10⁻⁴ and 1920 degrees of freedom are used in total.

The results are shown in Figure 4.8 and Figure 4.9. Figure 4.8 shows the global results, which contain the regions inside and outside the harbor. Since Ω₁ is the semi-unbounded domain, the numerical result only shows a truncated part with r ∈ [1000 m, 2000 m] in polar coordinates. Figure 4.9 shows the detailed results inside the harbor. In addition, the result calculated by the VTCR in Chapter 3 is also shown to give a visual comparison with the WTDG method. It can be inferred from the results that this medium frequency heterogeneous Helmholtz problem is well solved by the WTDG. One can also see that increasing the number of subdomains inside the harbor reduces the residue, although the improvement can no longer be judged visually since the error is already small. Therefore the different strategies in Figure 4.8 and Figure 4.9 lead to the same result. This behaviour reflects the stability of the WTDG and is consistent with its performance in the academic study of Section 4.5.1. Another point worth mentioning is that, although it uses a considerable number of subdomains, the WTDG guarantees good continuity between adjacent subdomains. The fact that the WTDG can smoothly approximate a Helmholtz problem with a varying wave number is confirmed again by this harbor agitation problem.

Mid/high frequency model: Lastly, a quick calculation is carried out with a modified wave number: the wave number of the model (4.23) is increased by a factor of four. The results can be seen in Figure 4.10 and Figure 4.11. Again, since Ω₁ is the semi-unbounded domain, the numerical result only shows a truncated part with r ∈ [1000 m, 2000 m] in polar coordinates. In this case, there are nearly 90 periods of waves inside the computational domain. This calculation uses 3940 degrees of freedom. Such a calculation poses a great numerical challenge to the FEM, while the WTDG solves it without difficulty.

Conclusion

Facing the heterogeneous Helmholtz problem, this chapter proposed two wave-based WTDG approaches. In the WTDG, the shape functions are not required to satisfy the governing equation a priori. In this chapter, wave functions are proposed as shape functions. Approximating the wave number of the governing equation by its zero order Taylor series, one obtains an approximated equation whose exact solutions are plane wave functions. Approximating it by its first order Taylor series, one obtains an approximated equation whose exact solutions are Airy wave functions. These wave functions satisfy the true governing equation only approximately; in other words, the shape functions no longer satisfy the governing equation exactly, and it is enforced only weakly, through the variational formulation.
Therefore residue will be created inside each subdomain. The finer the subdomains discretization is, the smaller the residue inside the subdomain will be. The reason is that when the region of subdomain reduces, the approximated wave number will be closer to the real wave number of the problem. In short, the WTDG could smoothly approximate the solution of reference problem. Academic studies have been done in this chapter to show the convergence properties of the WTDG method. Both the Zero Order WTDG and the First Order WTDG lead to the convergent and accurate numerical result. In addition, the WTDG is also used to study the habor agitation problem, which has an engineering application background and has been studied by the extended VTCR in Chapter 3. The result shows that the wave based WTDG performs well in this problem. Chapter 5 FEM/WAVE WTDG approach for frequency bandwidth including LF and MF This chapter is focusing on the hybrid use of the FEM approximation and the wave approximation for the constant wave number Helmholtz problem, which ranges from low-frequency to mid-frequency. Benefiting from the FEM approximation , the FEM/WAVE WTDG method well solves the low frequency problem. Moreover, benefiting from the wave approximation, the FEM/WAVE WTDG method could solve the mid-frequency problem in an efficient way as VTCR does. The feasibility of this hybrid method is ensured by the weak Trefftz discontinuous Galerkin method. The WTDG introduces a variational formulation of the reference problem and its shape functions could be found under fewer restrictions compared to the VTCR method. Shape functions are not required to satisfy the governing equation a priori. The equivalence of the formulation is proved and discretization strategies are proposed in this chapter. Of course, numerical studies illustrate the performance of the FEM/WAVE WTDG approach. Rewriting of the reference problem The WTDG was first introduced in [Ladevèze, 2011]. In [Ladevèze et Riou, 2014], a coupling between the FEM approximation and the wave approximation has been developed by the WTDG in the way that FEM approximation and wave approximation are used separately in each subdomain. In this Chapter, the WTDG is extended to mix them in the same subdomains, at the same time. Variational Formulation In this chapter the reference problem is defined by (2.1), where the wave number is a constant. Particularly the wave number locates either in the low-frequency range or in the mid-frequency range. In order to get an equivalent variational formulation of (2.1), the domain is divided into subdomains Ω E with E ∈ E. Γ EE ′ denotes the interface between two subdomains E and E ′ . Γ EE denotes the interface between subdomain Ω E and boundary ∂Ω. The approach proposed consists in using the working space U ⊂ H 1 (Ω): U = {u | u |Ω E ∈ U E } U E = {u E | u E ∈ V E ⊂ H 1 (Ω E )} (5.1) The vector spaces associated with U and U E where r d = 0 are denoted by U 0 and U E,0 . Then the WTDG formulation can be written as: find u ∈ U such that Re   ik   ∑ E,E ′ ∈E Γ EE ′ 1 2 {q u • n} EE ′ { ṽ} EE ′ - 1 2 [ qv • n] EE ′ [u] EE ′ dS -∑ E∈E Γ EE ∩∂ 1 Ω qv • n (u -u d ) dS -∑ E∈E Γ EE ∩∂ 1 Ω α • i • ṽ(u -u d ) dS + ∑ E∈E Γ EE ∩∂ 2 Ω (q u • n -g d ) ṽdS -∑ E∈E Ω E divq u + k 2 u + r d ṽdΩ = 0 ∀v ∈ U 0 (5. 2) where α is a parameter strictly positive to enforce the boundary Dirichlet condition. As one can see, there is no a priori constraint on the choice of the spaces U and U 0 . 
Consequently, one can select polynomial approximation, like in the FEM, or wave approximation, like in the VTCR, or even both. Equivalence of the reference problem Let us note that (5.2) can be written as: find u ∈ U such that b(u,v) = l(v) ∀v ∈ U 0 (5.3) where b has the property that b(u,u) is real. Property 1. For u ∈ U 0 , we have b(u,u) = ∑ E∈E kη Ω E grad ũ • gradudΩ + ∑ E∈E Γ EE ∩∂ 1 Ω kαu ũdS 0 (5.4) Proof. b(u,u) = Re ik ∑ E∈E ∂Ω E (q u • n) ũdS -∑ E∈E Ω E divq u ũdΩ + ∑ E∈E Γ EE ∩∂ 1 Ω αiu ũdS (5.5) Consequently, b(u,u) = ∑ E∈E kη Ω E grad ũ • gradudΩ + ∑ E∈E Γ EE ∩∂ 1 Ω kαu ũdS (5.6) From Property 1 it can be deduced that if b(u,u) = 0, then u is equal to zero over ∂Ω E ∩ ∂ 1 Ω. It is a piecewise constant within subdomains Ω E , E ∈ E. To keep the uniqueness of the solution, condition (P) is introduced to be satisfied by the shape functions which belong to U 0 . Refering to work [Ladevèze et Riou, 2014], one could obtain the Condition (P), which is crucial for the demonstration. Its definition is as follows: Condition (P) Let a E ∈ U E be a piecewise constant function within subdomains E ∈ E. a E satisfies condition (P) if    ∀v ∈ U 0 , ∀E ∈ E, Re   ik   ∑ E,E ′ ∈E ∂Ω E (q v • n) ãE ′ dS     = 0    ⇒ a E = ±a (5.7) where E ′ is a subdomain sharing a common boundary with E. And let us take the convention a E ′ = -a E over ∂Ω E ∩ ∂Ω. Property 2. If U 0 satisfies condition (P) and if η is positive, the WTDG formulation (5.2) has a unique solution. Proof. In finite dimension, existence of solution will be confirmed if uniqueness can be proved. Let us suppose (5.2) has two solutions u 1 and u 2 . v = u 1 -u 2 ∈ U 0 and b(v,v) = ∑ E∈E kη Ω E grad ṽ • gradvdΩ + ∑ E∈E Γ EE ∩∂ 1 Ω kαv ṽdS = 0 (5.8) It can be observed that v E = a E with E ∈ E, where a E is piecewise constant within the subdomains and a E = 0 in the subdomains sharing a common boundary with ∂ 1 Ω. Backsubstituting this result into (5.2), one also finds b(v,v * ) = 0 ∀v * ∈ U 0 , which leads to ∀v * ∈ U 0 , Re   ik   ∑ E,E ′ ∈E ∂Ω E (q v • n) ãE ′ dS     = 0 (5.9) (5.9) corresponds to the condition (P), where E ′ represents a subdomain sharing a common boundary with E, with the convention a E ′ = -a E over ∂Ω E ∩ ∂Ω. Consequently, a E = ±a ∀E ∈ E. Moreover, given that a E = 0 over ∂ 1 Ω, we have a = 0. Refering to work [Ladevèze et Riou, 2014], one could obtain the Property 3, which is crucial for the demonstration. Its definition and demonstration are as follows: Property 3. If U E is the combination of solution spaces of FEM and VTCR, then the condition (P) is satisfied, and u 2 U 0 = b(u,u) + γ 2 (u) (5.10) is a norm over U 0 . We define (5.11) where C E is constant vector over Ω E and X E is the position vector relative to the center of inertia of element E. U 0 denotes the associated space defined over Ω of U E,0 . And for u ∈ U 0 the definition of quantity γ is defined as (5.12) where C v corresponds to the vector C E of v according to (5.11). U E,0 = u|u ∈ V E , u = C E • X E γ(u) = sup v∈U 0 b(u,v)/||C v || L 2 (Ω) Proof. = β E + a E . z E is continuous because z E|Γ EE ′ = a E ′ + a E = z E ′ |Γ EE ′ . It follows that z is constant over Ω. Since z is zero over ∂Ω, z = 0 over Ω and β E = -a E . Consequently, a E can be only the values of +a or -a, a being a constant over Ω. To demonstrate that ||u|| 2 U 0 is a norm over U 0 , let us consider that u 2 U 0 = b(u,u) + γ 2 (u) = 0. It follows that b(u,u) = 0 and γ(u) = 0. 
From (5.6) it can be obtained that u |Ω E = a E is constant over Ω E and that u = 0 over ∂ 1 Ω. Then γ(u) is equal to γ(u) = sup v∈U 0 1 ||C v || L 2 (Ω) Re   ik   ∑ E,E ′ ∈E ∂Ω E (q v • n) ãE ′ dS     = 0 (5.13) Since condition (P) is satisfied, it can be derived that u E = ±a, a being a constant over Ω. Finally from u = 0 on ∂ 1 Ω, one gets u = 0 over Ω. Approximations and discretization of the problem Defined by (5.1), the working space U could be split into two subspaces U w and U p , which represent the subspace generated by the plane wave functions and the subspace generated by the polynomial functions. U = U w ⊕ U p (5.14) For numerical implementation, U w and U p are then truncated into the finite dimensional subspaces, which could be noted by U h w and U h p respectively. Plane wave approximation : The approximation solution in subspace U h w could be expressed such as u w (x) = N w ∑ n=0 A n e ik•x (5.15) where A n is the unknown amplitude of plane wave. N w is the number of plane waves. Polynomial approximation : The approximation solution in subspace U h p could be expressed such as u p (x) = N p ∑ n=0 U n φ n (x) (5.16) where U n is the unknown degrees of freedom of polynomial interpolation. φ n (x) is the standard interpolation functions of the polynomial approximation. The mesh of polynomial approximation could be built in the same way as the standard FEM method. Without losing generality, in this dissertation the meshes are regular square types. However it should be noticed that being different from the standard FEM method, the approximation solution u p (x) is not required to a priori satisfy the Dirichlet condition imposed on the boundary. Instead, it is evident that the sum of u w (x) and u p (x) should satisfy this condition, which is weakly comprised in the variational formulation (5.2). Numerical implementation Since the shape functions contains both the polynomial and the wave approximations, terms in matrix to integrate composed by polynomial-polynomial terms, wave-wave terms and polynomial-wave terms. Polynomial-polynomial terms are the productions of two polynomials. Gauss quadrature is capable to treat this type of integrations. For the wave-wave terms, the productions of two plane wave functions, their integration could be achieved analytically. Details have been illustrated in Section 4.3.1. As for the terms of the productions of polynomial and plane wave functions, one could still calculate the integrations analytically by the technique of integration by part. The following illustration is typical since each integration of polynomial-wave term could be decomposed into following integration unit: (5.17) where ikcosθ = 0 and iksinθ = 0. This integration is the most complicated form that could appear in integration of polynomial-wave terms. It takes account of high order approximation of polynomial and the integration over the domain. Other cases of polynomial-wave terms could be simplified and derived from it. ) for the same model of problem, in the solution. It can be studied from this example how the FEM/WAVE WTDG method works in low-frequency problem and in mid-frequency problem. The definition of the problem and the discretization strategy can be seen on Figure 5.1. In the FEM/WAVE WTDG formulation (5.2), α = 0.0001. For the FEM approximation in the WTDG, a regular squared mesh of degree 1 is used. One uses 10 elements per wave length. 
For the wave approximation in the WTDG, one uses only one subdomain and a regular angular distribution of the waves from 0 to 2π. The choice for angular distribution is determined by the geometrical heuristic criterion (2.23). Since the exact solution is given, the convergence of the FEM/WAVE WTDG strategy is assessed by computing the real relative error defined as following: y 2 y 1 x 2 x 1 x m y n • e ikcosθx+iksinθy dxdy = x 2 x 1 x m e ikcosθx dx • y 2 y 1 y n e iksinθy dy = x m ikcosθ e ikcosθx x 2 x 1 - x 2 x 1 mx m-1 ikcosθ e ikcosθx dx • y n iksinθ e iksinθy y 2 y 1 - y 2 y 1 ny n-1 iksinθ e iksinθy dy = . . . = m+1 ∑ p=1 m!(-1) p+1 x m-p+1 (m -p + 1)!(ikcosθ) p + m!(-1) m+2 (ikcosθ) m+1 e ikcosθx x 2 x 1 × n+1 ∑ q=1 n!(-1) q+1 y n-q+1 (n -q + 1)!(iksinθ) q + n!(-1) n+2 (iksinθ) n+1 e iksinθy ε ex = u -u ex L 2 (Ω) u ex L 2 (Ω) (5.18) A comparison of the pure FEM approach (which uses only a polynomial description), the pure VTCR approach (which uses only a wave description) and the FEM/WAVE WTDG approach (which uses at the same time the polynomial and the wave descriptions) is made. For each wave number k, the pure FEM uses the same discretization strategy as the FEM approximation in the WTDG. The pure VTCR uses the same discretization strategy as the wave approximation in the WTDG. The convergence curve is represented on Figure 5.2. As one can see, the FEM/WAVE WTDG presents a better behaviour than the pure FEM or the pure VTCR. The pure FEM suffers from a lack of accuracy when the frequency becomes to be too high. The pure VTCR is not so efficient in the low frequency domain. This shows the benefits of using the WTDG method for finding the solution for low and mid frequency problems with the same descriptions, at the same time. The convergence of the FEM/WAVE WTDG method relies on both the FEM approximation and the wave approximation. A study is made to see how the wave approximation affects the performance of the FEM/WAVE WTDG method. With the model of the same computational domain and the same boundary condition defined in this section, we take k = 25 m -1 . Seven different wave approximations have been used to draw the convergence curves of the FEM/WAVE WTDG method as Figure 5.3 shows. For each wave approximation strategy, only one subdomain and a fixed number of rays are used. Meanwhile, for the FEM approximation in the FEM/WAVE WTDG, the mesh is gradually refined until the result converges. As one can see, for a fixed number of waves, the results converge along with the increase of degrees of freedom of the FEM approximation. It can be seen that using the same degrees of freedom of the FEM approximation, a refinement of the angular discretization of the wave approximation in the FEM/WAVE WTDG leads to more precise result. An interesting phenomenon is that the FEM/WAVE WTDG with 32 waves always has a precise solution. The reason is that depending on the criterion (2.23), 32 waves are sufficient to make the result converge. Non-homogeneous Helmholtz problem with two scales in the solution The problem considered is an Helmholtz problem defined on Ω = [-0.5 m; 0. y) , k e = 10 m -1 and η = 0.0001. The boundary condition is y) . This boundary condition enables one to know the exact solution of problem with u ex = u d . Therefore, the real relative error could be measured by (5.18). This example is again interesting, because it corresponds to a non-homogeneous Helmholtz problem with two scales in the solution (slow varying scale with k e and fast varying scale with k). 
The exact solution u_ex can be seen in Figure 5.4. The FEM/WAVE WTDG is used to solve this problem. In the variational formulation, one has α = 0.0001. The objective is to use the wave approximation to capture the fast varying scale of the solution and the FEM approximation to capture the slowly varying scale. Correspondingly, a regular square mesh of degree 2 is used for the FEM approximation, with the criterion of 10 elements per wavelength. The wave approximation uses 2 waves, which propagate in the two directions 30° and 210°. It should be noticed that the exact solution of the fast varying scale is taken directly as a shape function in the wave approximation. Consequently, there is no need to add more shape functions to simulate the fast varying scale. With such a choice, the solutions given by the wave approximation, denoted u_VTCR, and by the polynomial approximation, denoted u_FEM, are depicted in Figure 5.4. The comparison between the exact solution and the FEM/WAVE WTDG solution is shown in Figure 5.5. The real relative error is 4.48×10⁻⁵. As one can see, the FEM/WAVE WTDG gives a good approximation. This example shows the advantage of the FEM/WAVE WTDG. Since the fast varying scale lies in the mid-frequency range, the FEM alone would require a considerable number of degrees of freedom to solve this problem. With the FEM/WAVE WTDG, however, the different scales of the solution are handled by different approximations: only a small number of degrees of freedom of the FEM approximation are needed to obtain the slowly varying scale, while only two additional degrees of freedom of the wave approximation are enough to recover the fast varying scale.

The FEM/WAVE WTDG method applied with different types of approximations

The problem considered has the computational domain defined in Figure 5.6. This L-shaped domain is filled with a fluid with k = 30 m⁻¹ and η = 0.0001. The boundary condition is u_d = e^{ik\zeta(\cos\theta \cdot x + \sin\theta \cdot y)} + e^{ik\zeta(\sin\theta \cdot x + \cos\theta \cdot y)} with θ = 60°. This choice of boundary condition enables one to know the exact solution of the problem, u_ex = u_d, so that the performance of the approach can be evaluated by the real relative error. In this example, three kinds of approximations are used: a pure FEM approximation, a pure VTCR approximation, or a mix of the polynomial approximation and the wave approximation (see Figure 5.6). The variational formulation of the WTDG allows this possibility. In order to obtain a good approximation, one needs to select the discretization criterion of each approximation. For the FEM, the choice is to use 20 elements of degree 1 per wavelength. For the VTCR, the choice is τ = 14. The FEM/WAVE WTDG uses τ = 17 for the wave approximation and 6 elements of degree 1 in the subdomain for the FEM approximation.
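As a side illustration of how the wave part of such discretizations can be set up, the sketch below builds a regular fan of propagation directions over [0, 2π) and evaluates the corresponding plane-wave shape functions of (5.15) at a set of points. The number of directions would in practice come from a criterion such as (2.23) or the τ values quoted above; here it is simply an input, and the function names are assumptions of this Python illustration (the thesis' own code is in MATLAB).

```python
# Sketch (assumed names): regular angular distribution of wave directions and the
# associated plane-wave shape functions of (5.15), evaluated at given points.
import numpy as np

def wave_directions(n_waves):
    """Regularly spaced propagation angles covering [0, 2*pi)."""
    return np.arange(n_waves) * 2.0 * np.pi / n_waves

def plane_wave_basis(x, y, k, thetas):
    """Matrix whose n-th column is e^{i k (cos(theta_n) x + sin(theta_n) y)} at the points (x, y)."""
    x = np.asarray(x, dtype=float).ravel()[:, None]
    y = np.asarray(y, dtype=float).ravel()[:, None]
    return np.exp(1j * k * (x * np.cos(thetas)[None, :] + y * np.sin(thetas)[None, :]))

if __name__ == "__main__":
    thetas = wave_directions(32)        # e.g. the 32 waves found sufficient in Section 5.4.1
    pts_x, pts_y = np.random.rand(5), np.random.rand(5)
    B = plane_wave_basis(pts_x, pts_y, k=30.0, thetas=thetas)
    print(B.shape)                      # (5, 32): points x wave directions
```

The amplitudes A_n attached to these columns are the unknowns solved for by the discretized variational formulation.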
It should be noticed that these criteria are highly overestimated for the FEM, for the VTCR and for the FEM/WAVE WTDG, in order to obtain a convergent result. The reason for this overestimation is that the convergence criterion for this mixed use of approximations is unknown. Even though the criterion for each individual approximation is known, there is no previous study of this mixed situation: when the approximations are coupled, they interact with each other. In [Ladevèze et Riou, 2014], a coupling between the FEM approximation and the wave approximation was developed with the WTDG, in which the FEM approximation and the wave approximation are used separately in each subdomain. Those results show that, compared with their individual application, this coupled use requires more degrees of freedom for both the FEM approximation and the wave approximation. Consequently, the criteria for each individual approximation can only serve as a reference for our choice; the true criterion for this mixed use is still an open question. Here, the objective of the example is only to give an idea of the practicability of mixing the FEM/WAVE approximation with the FEM and the VTCR. Again, in the variational formulation, one has α = 0.0001. The exact solution and the FEM/WAVE WTDG solution are depicted in Figure 5.7. As one can see, the solutions are very close. This is because the variational formulation of the WTDG allows the coupled use of the FEM, the VTCR and the FEM/WAVE approximation. According to the definition of the error in (5.18), the error is here 2.187×10⁻². It can be deduced from this example that all combinations of methods, such as the pure FEM, the pure VTCR and the hybrid of FEM and VTCR, can be integrated together in one complex-geometry problem. In each subdomain, the method can be chosen depending on the specific requirements of the engineering problem.

Conclusion

This chapter proposes a hybrid use of the FEM approximation and the wave approximation thanks to the Weak Trefftz Discontinuous Galerkin method, illustrated on the Helmholtz problem. The FEM/WAVE WTDG method allows one to use a combination of the FEM approximation and the wave approximation. It is based on a variational formulation which is equivalent to the reference problem: all the conditions, such as the governing equation, the transmission continuity and the boundary conditions, are included in the formulation, and no a priori constraint is needed in the definition of the shape functions. As a consequence, any shape function can be used without difficulty. This gives the FEM/WAVE WTDG method a great flexibility, as one can select polynomial or wave shape functions (or a combination of them) very easily in the working space, with no restriction. It is successfully illustrated on examples of different complexity, ranging from low-frequency to mid-frequency, homogeneous or not, sometimes with two scales in the solution.

Conclusion

Along with the development of computer science, numerical techniques have become a fundamental tool for solving engineering problems. Vibration problems governed by the Helmholtz equation are widespread in the aerospace and automotive industries. The finite element method is the most commonly used method in industry. However, the nature of its approximation limits its application to low-frequency problems: beyond the low-frequency range, numerical dispersion and the pollution effect arise, and consequently large numbers of degrees of freedom are required to solve the problem. On the other hand, the existing methods for high-frequency problems, such as the Statistical Energy Analysis, only study the global energy of the system and neglect the local response. The mid-frequency vibration problem contains features of both the low-frequency and the high-frequency ranges.
The local response is still required, and the system is more sensitive to uncertainties than at low frequency. It is therefore essential to develop a specific numerical technique for mid-frequency problems. The Variational Theory of Complex Rays is designed to treat piecewise homogeneous mid-frequency vibro-acoustic problems. This method possesses two main hallmarks:
• It rewrites the reference problem in a new formulation. This formulation allows one to use the approximations independently in each subdomain. Continuity conditions between subdomains and boundary conditions are incorporated directly into the formulation.
• It uses shape functions that satisfy the governing equation in each subdomain. These shape functions take the form of linear combinations of propagative waves and involve two scales of approximation: the fast variation scale corresponds to the wave functions, and the amplitude of the waves is the slow variation scale. The VTCR handles the fast variation scale analytically; only the slow variation scale is discretized.
The VTCR was first introduced in [Ladevèze, 1996]. It has been developed for 3-D plate assemblies in [Rouch et Ladevèze, 2003], for plates with heterogeneities in [START_REF] Ladevèze | A multiscale computational method for medium-frequency vibrations of assemblies of heterogeneous plates[END_REF], for shells in [START_REF] Riou | Extension of the Variational Theory of Complex Rays to shells for medium-frequency vibrations[END_REF], and for transient dynamics in [START_REF] Chevreuil | Transient analysis including the low-and the medium-frequency ranges of engineering structures[END_REF]. Its extensions to acoustics problems can be seen in [START_REF] Riou | The multiscale VTCR approach applied to acoustics problems[END_REF], Ladevèze et al., 2012, Kovalevsky et al., 2013]. In [START_REF] Barbarulo | Proper generalized decomposition applied to linear acoustic: a new tool for broad band calculation[END_REF], the broad-band calculation problem in linear acoustics has been studied. Nevertheless, all these developments are limited to Helmholtz problems with a piecewise constant wave number. The originality of this dissertation is to solve the heterogeneous Helmholtz problem. Two numerical approaches are developed. The first approach is presented in Chapter 3. It is an extension of the VTCR in which new shape functions, namely Airy wave functions, are developed. These Airy wave functions satisfy the Helmholtz equation when the square of the wave number varies linearly. Academic studies illustrate the convergence properties of this method: convergence is quickly achieved with a small number of degrees of freedom, and p-convergence is more efficient than h-convergence. The extended VTCR is then applied to solve an unbounded harbor agitation problem. This example is studied by adopting different domain discretization strategies and by modifying the direction of the incoming wave. The result is evaluated by an error estimator, and it proves the practicability of the extended VTCR for engineering problems. The second approach is presented in Chapter 4. It is the Weak Trefftz Discontinuous Galerkin method. One locally develops general approximate solutions of the governing equation, the gradient of the wave number being the small parameter. In this way, zero order and first order approximations are defined. These functions only satisfy the local governing equation in the average sense.
Consequently, a residue exists in each subdomain, and a refined domain discretization strategy is necessary to decrease it. Academic studies present the convergence properties of the WTDG. The harbor agitation problem is again solved by the WTDG and a comparison with the extended VTCR is made. Finally, a modified harbor problem with the wave number raised to the mid-/high-frequency range is solved. In Chapter 5, the WTDG is extended to mix the polynomial and the wave approximations in the same subdomains, at the same time. Numerical studies illustrate that such a mixed approach presents better performance than a pure FEM approach or a pure VTCR approach on a problem with a bandwidth including low-frequency and mid-frequency. In parallel with the theoretical developments, a software was created: HeterHelm (HETERogeneous HELMholtz). This software was programmed in the MATLAB environment during the thesis, and all the numerical results in this dissertation are obtained with it. Following this thesis, there are two main prospects for further development. The first prospect is to extend the extended VTCR and the WTDG to vibration problems in heterogeneous media. Since the excitation problem is different from the acoustic one, carrying the extended VTCR over to this case is not easy. On the other hand, since the WTDG puts no restriction on the governing equation, its extension could be achieved without difficulty. The second prospect is the extension of the WTDG to transient nonlinear problems. In [Cattabiani, 2016], the VTCR was proved to be able to solve transient problems in piecewise homogeneous media. The extension to nonlinear phenomena such as viscoplasticity and damage requires working with heterogeneous media; the work with the WTDG can then be seen as a first step toward this goal.

French resume

This thesis is concerned with the development of numerical methods to solve mid-frequency Helmholtz problems in heterogeneous media. Helmholtz problems play a major role in industry. This is the case, for example, in the automotive industry, where market constraints and compliance with anti-pollution standards have led manufacturers to produce ever lighter vehicles, which are therefore much more prone to vibrations. The acoustic comfort of passengers in an aircraft, or of the inhabitants of a building, is another example; it requires mastering the vibro-acoustic behaviour of the structure, which must be taken into account from the design stage. A last example is the naval industry, where the vibratory behaviour is integrated very early in the design of large ships. Today, with the development of computational tools, such problems can be treated by numerical methods, and this is the approach proposed in this work. In this thesis, one mainly considers the vibration problem arising from the heterogeneous Helmholtz equation. This equation can be used, for example, to model the agitation of waves in a harbor whose depth varies as the shore gets closer.
In the work of [START_REF] Modesto | Proper generalized decomposition for parameterized Helmholtz problems in heterogeneous and unbounded domains: application to harbor agitation[END_REF], this problem is treated by the finite element method with the Perfectly Matched Layer technique, and the Proper Generalized Decomposition, a model reduction technique, is used to study the influence of the different parameters on the result. Here, we propose to treat it with a Trefftz method. It is customary to define the frequency ranges according to the relative size of the components of a system with respect to a wavelength (see Figure 8). When the size of a component is smaller than the wavelength of its response, one speaks of low frequency (LF), which is essentially characterized by a modal behaviour of the system, with resonance peaks that are clearly distinct from one another. Problems in this frequency range are not sensitive to uncertainty. The most widely used computational methods for LF are based on the finite element method (FEM). When the size of a component is much larger than the wavelength, its response generally involves a large number of local modes. One then speaks of the high-frequency (HF) domain. In this domain, the local aspect of the response of the system disappears: the vibration field contains so many oscillations that the local response of the system loses its meaning. The approaches dedicated to this domain therefore rely on statistical considerations applied to global energy quantities [Ohayon et Soize, 1998], such as Statistical Energy Analysis (SEA) [Lyon et Maidanik, 1962], FEM-SEA [De Rosa et Franco, 2008, De Rosa et Franco, 2010], Wave Intensity Analysis [Langley, 1992], Energy Flow Analysis [START_REF] Belov | Propagation of vibrational energy in absorbing structures[END_REF][START_REF] Buvailo | [END_REF], and the ray tracing method [START_REF] Krokstad | Calculating the acoustical room response by the use of a ray tracing technique[END_REF], Chae et Ih, 2001]. The intermediate frequency range is the mid-frequency (MF) domain. This domain is characterized by a significant modal densification and by a hypersensitivity of the vibration field with respect to the boundary conditions. These characteristics make it impossible to extend the LF and HF methods to this frequency range. This is one of the reasons for the emergence of wave-based approaches, built on the work of Trefftz [Trefftz, 1926], which use the general solutions of the equilibrium equations as shape functions. Among these methods, the one retained for this work is the Variational Theory of Complex Rays (VTCR). It was first introduced in [Ladevèze, 1996], and since then the research activity on this approach has covered many aspects. First of all, the VTCR has shown its efficiency in the treatment of vibrations of complex assemblies of plane structures [Rouch et Ladevèze, 2003] and of shell-type structures [START_REF] Riou | Extension of the Variational Theory of Complex Rays to shells for medium-frequency vibrations[END_REF].
Work was then carried out on the use of the method within a frequency-domain approach for solving transient dynamics problems including the MF range [START_REF] Chevreuil | Transient analysis including the low-and the medium-frequency ranges of engineering structures[END_REF]. The VTCR was subsequently extended to the treatment of acoustic vibrations [START_REF] Riou | The multiscale VTCR approach applied to acoustics problems[END_REF], Ladevèze et al., 2012, Kovalevsky et al., 2013]. Combined with the PGD, it has been applied to problems over frequency bands [START_REF] Barbarulo | Proper generalized decomposition applied to linear acoustic: a new tool for broad band calculation[END_REF]. More recently, work has also been done on the shock response [Cattabiani, 2016]. Nevertheless, the VTCR and most other wave-based methods are limited to piecewise homogeneous media. For heterogeneous Helmholtz problems, the Ultra Weak Variational Formulation (UWVF) exploits exponentials of polynomials to approximate the solution, and the Discontinuous Enrichment Method (DEM) uses Airy functions to solve the problem. The work of this thesis is mainly related to the extension of the VTCR and of the weak Trefftz discontinuous Galerkin method (WTDG) (see [Ladevèze et Riou, 2014]) to solve the heterogeneous Helmholtz problem. The WTDG does not use exact solutions of the equilibrium equation as shape functions. Consequently, the equilibrium equation is not verified a priori and is introduced into the variational formulation, to be approximated. This approach is able to incorporate polynomial shape functions in its formulation, and thus to couple finite elements with the VTCR in the different subdomains of a system.
• p-convergence makes it possible to reach very high levels of accuracy with few degrees of freedom (dofs).
• The fewer subdomains are used, the faster the result converges.
Another, more complicated, example illustrating the capability of this extension of the VTCR is the harbor agitation problem. The waves come from far away and act on the harbor. In the harbor model, the water depth varies linearly along the y axis inside the harbor. Given the water velocity v and the pulsation w, the expression of the wave number k is known explicitly. The boundary conditions on the edges are of total reflection type. As the problem is unbounded, the solution must satisfy the Sommerfeld condition. The domain of the problem is globally divided into three subdomains. The shape functions are modified Hankel functions in Ω_1, plane waves in Ω_2 and Airy wave functions in Ω_3. With only 20 dofs in Ω_1, 100 dofs in Ω_2 and 160 dofs in Ω_3, a result is obtained with a relative error of 6.21·10⁻³ (see Figure 11). If the inside of the harbor is divided into 4 subdomains, with 160 dofs in each of these subdomains, the result is obtained with a relative error of 1.52·10⁻² (see Figure 12). Chapter 4 is devoted to the development of the wave-based WTDG for heterogeneous Helmholtz problems. The equilibrium equation is not verified a priori. The admissible space of the WTDG is composed of the solutions u that satisfy the approximated equilibrium equation:

U = \{u \;|\; u|_{\Omega_E} \in U_E\}, \qquad U_E = \{u_E \;|\; u_E \in V_E \subset H^1(\Omega_E),\; (1 - i\eta)\Delta u_E + \tilde{k}_E^2 u_E + r_d = 0\} \qquad (21)

where \tilde{k}_E is an approximation of k(x) obtained by Taylor expansion. Using the zero order of this equation, the general solution of the equilibrium equation can be approximated by plane wave functions; with the first order approximation, one can use Airy wave functions.
These two approximations are defined as the Zero Order WTDG and the First Order WTDG. Chapter 5 deals with the coupling between a wave-type approximation and a FEM-type approximation within the variational formulation of the WTDG. It is through such a coupling that problems over a frequency band containing both LF and MF are well solved. The ability of this method to treat multi-scale problems having MF subsystems coupled to LF subsystems is also illustrated. This thesis develops computational strategies to solve mid-frequency Helmholtz problems in heterogeneous media. It relies on the use of the VTCR, and enriches the function space it uses with Airy functions when the square of the wavelength of the medium varies linearly. It also generalizes the prediction of the solution by the WTDG to media whose wavelength varies in any other way. To this end, zero order and first order approximations are defined, which locally satisfy the equilibrium equations in a certain average sense over the computational subdomains. Several theoretical demonstrations of the performance of the extension of the VTCR and of the WTDG are carried out, and several numerical examples illustrate the results. The complexity retained for these examples shows that the proposed approaches make it possible to predict the vibratory behaviour of complex problems, such as the oscillatory regime of the waves in a sea harbor. They also show that it is entirely feasible to mix the computational strategies developed here with those classically used, such as the finite element method, in order to build computational strategies usable for low and mid frequencies at the same time.
Titre : Sur des stratégies de calcul ondulatoires pour les milieux hétérogènes
Mots-clés : hétérogène, moyennes fréquences, TVRC, WTDG
Résumé : Ce travail de thèse s'intéresse au développement de stratégies de calcul pour résoudre les problèmes de Helmholtz, en moyennes fréquences, dans les milieux hétérogènes. Il s'appuie sur l'utilisation de la Théorie Variationnelle des Rayons Complexes (TVRC), et enrichit l'espace des fonctions qu'elle utilise par des fonctions d'Airy, quand le carré de la longueur d'onde du milieu varie linéairement. Il s'intéresse aussi à une généralisation de la prédiction de la solution pour des milieux dont la longueur d'onde varie d'une quelconque autre manière. Pour cela, des approximations à l'ordre zéro et à l'ordre un sont définies, et vérifient localement les équations d'équilibre selon une certaine moyenne sur les sous-domaines de calcul. Plusieurs démonstrations théoriques des performances de la méthode sont menées, et plusieurs exemples numériques illustrent les résultats. La complexité retenue pour ces exemples montre que l'approche retenue permet de prédire le comportement vibratoire de problèmes complexes, tel que le régime oscillatoire des vagues dans un port maritime. Ils montrent également qu'il est tout à fait envisageable de mixer les stratégies de calcul développées avec celles classiquement utilisées, telle que la méthode des éléments finis, pour construire des stratégies de calcul utilisables pour les basses et les moyennes fréquences, en même temps.
Title : On wave based computational approaches for heterogeneous media
Keywords : heterogeneous, mid-frequency, VTCR, WTDG
Abstract : This thesis develops numerical approaches to solve mid-frequency heterogeneous Helmholtz problems. When the square of the wave number varies linearly in the medium, one considers an extended Variational Theory of Complex Rays (VTCR) with shape functions, namely Airy wave functions, which satisfy the governing equation. Then a general way to handle heterogeneous media by the Weak Trefftz Discontinuous Galerkin (WTDG) method is proposed, with no a priori restriction on the wave number. One locally develops general approximated solutions of the governing equation, the gradient of the wave number being the small parameter. In this way, zero order and first order approximations are defined, namely the Zero Order WTDG and the First Order WTDG. Their shape functions only satisfy the local governing equation in an average sense. Theoretical demonstrations and academic examples of both approaches are addressed. Then the extended VTCR and the WTDG are both applied to solve a harbor agitation problem. Finally, a FEM/WAVE WTDG is further developed to achieve a mixed use of the Finite Element Method (FEM) approximation and the wave approximation in the same subdomains, at the same time, for a frequency bandwidth including LF and MF.

Université Paris-Saclay
Espace Technologique / Immeuble Discovery
Route de l'Orme aux Merisiers RD 128 / 91190 Saint-Aubin, France
Philippe Van De Perre (email: [email protected]), Chipepo Kankasa, Nicolas Nagot, Nicolas Meda, James K Tumwine, Anna Coutsoudis, Thorkild Tylleskär, Hoosen M Coovadia

Pre-exposure prophylaxis for infants exposed to HIV through breast feeding

The AIDS 2016 conference, held in July in Durban, South Africa, lauded pre-exposure prophylaxis (PrEP) as the way forward for substantially reducing the rate of new HIV infections worldwide. PrEP is defined as the continuous or intermittent use of an antiretroviral drug or drug combination to prevent HIV infection in people exposed to the virus. The underlying pathophysiological rationale is that impregnating uninfected cells and tissues with an antiviral drug could prevent infection by both cell-free and cell-associated HIV (cell-to-cell transfer). PrEP's tolerance and efficacy have been demonstrated in well designed clinical trials in men who have sex with men (MSM). 1 2 In the Ipergay trial, 86% of HIV infections were averted in highly exposed men. [START_REF] Molina | Study Group. On-demand preexposure prophylaxis in men at high risk for HIV-1 infection[END_REF] PrEP has also been evaluated in other highly exposed groups such as transgender women, injecting drug users, serodiscordant heterosexual couples, and commercial sex workers. [START_REF] Who | WHO technical update on pre exposure prophylaxis (PreP)[END_REF]

HIV exposed children: lost in translation

Uninfected pregnant or breastfeeding women in high incidence areas have also been suggested as a potential target population for PrEP, but infants exposed to HIV through breast feeding have not been mentioned. [START_REF] Price | Cost-effectiveness of pre-exposure HIV prophylaxis during pregnancy and breastfeeding in Sub-Saharan Africa[END_REF] Numerous public declarations and petitions have produced a strong advocacy for extension of the PrEP principle to all high risk populations exposed to HIV, considering access to PrEP as part of human rights. Recently, the World Health Organization recommended offering PrEP to any population in which the expected incidence of HIV infection is above 3 per 100 person-years. 3 5 So why are breastfed infants born to HIV infected women, a population that often has an overall HIV acquisition rate above 3/100 person-years, not receiving this clearly beneficial preventive health measure?

Current strategy not good enough

Since June 2013, the WHO has recommended universal lifelong antiretroviral therapy (ART)-known as "option B+"-for all pregnant and breastfeeding women infected with HIV-1, with the objective of eliminating mother-to-child transmission (defined by WHO as an overall rate of transmission lower than 5%). [START_REF] Who | Consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection[END_REF] The B+ strategy also recommends that their babies receive nevirapine for six weeks to mitigate the risks of transmission during delivery. But infants who are breast fed continue to be exposed to a substantial risk of infection beyond the six week prophylaxis period. The B+ strategy has been rolled out in most programmes to prevent mother-to-child transmission worldwide without any additional protection for breastfed infants. Although these programmes have been shown to increase the number of pregnant and breastfeeding women who receive ART, their success in prevention of infection in infants is less clear.
According to UNAIDS estimates, improvement in services to prevent mother-to-child HIV transmission since 2010 has reduced the annual number of new infections among children globally by 56%. [START_REF]2016 prevention gap report[END_REF] However, the few available programmatic data on long term residual HIV transmission rates suggest that this is mainly accounted for by reduced in utero and intrapartum HIV transmission rather than in postnatal transmission through breast feeding. Also, there is considerable variation across countries and continents, with many countries, mainly in Africa and Asia, seeing no change in HIV incidence among children. An update of the UNAIDS 2015 report suggests that in 2015 the average mother-to-child transmission rate was 8.9% among 21 Global Plan African countries, and only five of these countries-Namibia, Uganda, Swaziland, Botswana and South Africa-have reached the target transmission rate of below 5%. 7

Reasons for continued transmission

Most of the residual transmission is attributable to exposure through breast feeding. A recent study assessed community viral load in Kenyan, Malawian, and South African households, including more than 11 000 women of child bearing age, of whom 3296 were pregnant or breast feeding. A total of 608 pregnant or breastfeeding women had HIV infection, with the proportion with plasma RNA above 1000 copies/ml varying from 27% in Malawi to 73% in Kenya. [START_REF] Maman | Most breastfeeding women with high viral load are still undiagnosed in sub Saharan Africa[END_REF] Some of the women who had detectable viral load were unaware of their infection because they had not been tested or had become infected after antenatal screening; others had not started ART or were not taking it as recommended. In 2015, about 150 000 children were infected with HIV worldwide. The vertical transmission rate from mother-to-infant at six weeks was 5% but rose to 8.9% by the conclusion of breast feeding. [START_REF] Mofenson | State of the art in prevention of mother-to-child transmission. 2nd workshop on prevention trials in infants born to HIV-positive mothers[END_REF] In Africa, the reasons for this high residual burden of child infections are multiple. The main reason is operational, with challenges in all phases of the care cascade (test, treat, and retain in care), including consistent testing of HIV exposed infants, starting infants on treatment, and retaining infants in care. Primary obstacles to linkage and retention include the distance and resources required to travel to a health facility, cultural or stigma related challenges, logistic hurdles that exist in antenatal care centres, and resources and efficacy of linkage to definitive HIV care. Observational studies in different African settings report less than optimal adherence, with only 50-70% viral suppression in women one year after starting ART. In Malawi, where the B+ strategy was rolled out in 2011, one fifth of women identified never started ART during the early phases of the programme. [START_REF] Tenthani | Ministry of Health in Malawi and IeDEA Southern Africa. Retention in care under universal antiretroviral therapy for HIV-infected pregnant and breastfeeding women ('Option B+') in Malawi[END_REF] In the early phases of the Swaziland programme, postnatal retention in care for HIV infected women was only 37% overall and 50% for those who started ART during pregnancy.
[START_REF] Abrams | Impact of Option B+ on ART uptake and retention in Swaziland: a stepped-wedge trial[END_REF] A study in Malawi found that women who started ART to prevent transmission to their child were five times more likely to default than women who started treatment for their own health. [START_REF] Tenthani | Ministry of Health in Malawi and IeDEA Southern Africa. Retention in care under universal antiretroviral therapy for HIV-infected pregnant and breastfeeding women ('Option B+') in Malawi[END_REF] Maternal discontinuation of ART while breast feeding considerably increases risk of HIV transmission to the infant because of viral rebound, as observed after interrupting maternal zidovudine prophylaxis in the DITRAME study. [START_REF] Manigart | Diminution de la Transmission Mere-Enfant Study Group. Effect of perinatal zidovudine prophylaxis on the evolution of cell-free HIV-1 RNA in breast milk and on postnatal transmission[END_REF] Furthermore, cell-to-cell transfer of HIV is not inhibited in mothers taking ART in many cases. [START_REF] Van De Perre | HIV-1 reservoirs in breast milk and translational challenges to elimination of breast-feeding transmission of HIV-1[END_REF] The residual postnatal transmission rate from a mother with an ART suppressed viral load has been estimated at 0.2% per month of breastfeeding. This corresponds to an expected residual rate of 2.4% at 12 months. [START_REF] Rollins | Estimates of peripartum and postnatal mother-to-child transmission probabilities of HIV for use in Spectrum and other population-based models[END_REF] Since the latest WHO/Unicef guidelines for HIV and infant feeding recommend 24 months of breast feeding rather than 12, 14 the duration of infant HIV exposure will be much extended, increasing the risk of additional HIV infections. Infant prophylaxis has been used before Administration of a daily antiviral drug to an uninfected but exposed breastfed infant meets the definition of PrEP: the prophylaxis is administered before exposure (ideally from birth) to an uninfected infant whose exposure to HIV is intermittent (during breast feeding) and persistent. Ironically, infants born to HIV infected women were the first to participate in ARV prophylaxis trials. They have probably contributed the highest number of participants in such studies worldwide. Indeed, prophylaxis with oral zidovudine was integrated in the first prophylactic protocol (ACTG 076) reported in 1994. [START_REF] Connor | Reduction of maternal-infant transmission of human immunodeficiency virus type 1 with zidovudine treatment. Pediatric AIDS Clinical Trials Group Protocol 076 Study Group[END_REF] Thereafter, numerous trials have included infant PrEP to prevent mother-to-child transmission, in combination or even as a sole preventive regimen. [START_REF] Kilewo | Prevention of mother-to-child transmission of HIV-1 through breast-feeding by treating infants prophylactically with lamivudine in Dar es Salaam, Tanzania: the Mitra Study[END_REF][START_REF] Kumwenda | Extended antiretroviral prophylaxis to reduce breast-milk HIV-1 transmission[END_REF][START_REF] Coovadia | HPTN 046 protocol team. Efficacy and safety of an extended nevirapine regimen in infant children of breastfeeding mothers with HIV-1 infection for prevention of postnatal HIV-1 transmission (HPTN 046): a randomised, double-blind, placebo-controlled trial[END_REF][START_REF] Nagot | ANRS 12174 Trial Group. 
Extended pre-exposure prophylaxis with lopinavir-ritonavir versus lamivudine to prevent HIV-1 transmission through breastfeeding up to 50 weeks in infants in Africa (ANRS 12174): a randomised controlled trial[END_REF] The most recent of these, the ANRS 12174 trial, showed that infant prophylaxis with either lamivudine (3TC) or boosted lopinavir (LPV/r) daily throughout breastfeeding for up to 12 months among infants of HIV infected women who did not qualify for ART for their own health was well tolerated and reduced the risk of postnatal transmission at 1 year of age to 0.5% (per protocol) or 1.4% (intention to treat). [START_REF] Nagot | ANRS 12174 Trial Group. Extended pre-exposure prophylaxis with lopinavir-ritonavir versus lamivudine to prevent HIV-1 transmission through breastfeeding up to 50 weeks in infants in Africa (ANRS 12174): a randomised controlled trial[END_REF] Adherence to infant PrEP in the trial was particularly high (over 90%). [START_REF] Nagot | ANRS 12174 Trial Group. Extended pre-exposure prophylaxis with lopinavir-ritonavir versus lamivudine to prevent HIV-1 transmission through breastfeeding up to 50 weeks in infants in Africa (ANRS 12174): a randomised controlled trial[END_REF] Pharmacological data suggest that plasma drug levels lower than the therapeutic threshold are sufficient to protect infants. [START_REF] Foissac | ANRS 12174 Trial Group. Are prophylactic and therapeutic target concentrations different? The case of lopinavir/ritonavir or lamivudine administered to infants for the prevention of mother-to-child HIV-1 transmission during breastfeeding[END_REF] In addition, pharmacokinetic studies in infants breastfed by mothers taking ART show that their antiretroviral drug plasma levels are largely below 5% of the therapeutic level. [START_REF] Shapiro | Therapeutic levels of lopinavir in late pregnancy and abacavir passage into breast milk in the Mma Bana Study, Botswana[END_REF] This suggests that infant PrEP could be combined with maternal ART without a risk of overdosing or cumulative adverse effects. In the near future, injectable long acting antiretroviral drugs such as rilpivirine or cabotegravir may become available. This would enable PrEP to be started from birth with only a few additional administrations to cover the duration of breastfeeding. The estimated cost of daily administration lamivudine paediatric suspension in a breastfed infant is less than $15 (£12; €14) a year. Cost effectiveness studies of infant PrEP have not been done, but the low cost of the infant PrEP regimen suggests that the expected benefit would justify the expense of adding it to maternal ART. Indeed, even if only one HIV infection was averted out of 100 exposed infants, the cost per averted infection would be minimal ($1500). When should infant PrEP be recommended? Infant PrEP should certainly be advised when the mother's HIV infection is untreated or if she has a detectable viral load despite ART. Such situations can occur when the mother does not want or is unable to take ART or is at high risk of poor drug adherence. The determinants of maternal adherence to ART probably differ from those for adherence to infant PrEP. Unpublished data collected during the ANRS 12174 trial suggest that most pregnant or lactating mothers prefer to administer a prophylactic antiretroviral drug to their exposed infant than to adhere to their own ART. However, this targeted approach may be seen as complex and hampered by programmatic problems in some settings. 
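Two of the figures quoted above follow from one-line arithmetic: the roughly 2.4% residual transmission risk at 12 months (from 0.2% per month) and the $1500 per averted infection (a $15 yearly drug cost over a cohort of 100 exposed infants with one infection averted). The short sketch below is purely illustrative bookkeeping in Python; all input numbers are the ones given in the text, and the compounded variant is our addition, shown only to illustrate that it barely differs from the additive estimate at such small monthly risks.

```python
# Illustrative arithmetic behind two figures quoted in the text (bookkeeping only, not a model).

# 1) Residual postnatal transmission with maternal ART: 0.2% per month of breast feeding.
monthly_risk = 0.002
for months in (12, 24):
    linear = monthly_risk * months                    # additive estimate used in the text
    compounded = 1 - (1 - monthly_risk) ** months     # probability of at least one transmission
    print(f"{months} months of breast feeding: {linear:.1%} (linear), {compounded:.1%} (compounded)")
# 12 months gives 2.4%, matching the estimate cited from Rollins et al.;
# 24 months of breast feeding roughly doubles the cumulative exposure.

# 2) Cost per averted infection for infant PrEP with lamivudine suspension (~$15 per infant per year).
annual_cost, cohort, averted = 15.0, 100, 1
print(f"Cost per averted infection: ${annual_cost * cohort / averted:,.0f}")
# -> $1,500, the figure given above.
```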
A simpler alternative would be to protect all HIV exposed infants with PrEP during the breastfeeding period, on the basis that the PrEP drugs are safe and that optimal maternal adherence to ART in the perinatal period cannot be assumed. Of course, treatment of the mother should remain a priority. Conclusion Mother-to-child HIV transmission among breastfed infants is not unlike HIV transmission associated with discordant couples, with the mother and child having frequent contact that exposes the infant to HIV, even if the mother is provided with a suppressive ART regimen. Given the evidence that infant PrEP is effective, there is a moral imperative to correct the policy inequity that exists between HIV exposed adults and children. Scaling up existing interventions and extending access to PrEP to those most in need are the most cost effective ways to stem new HIV infections. [START_REF] Smith | Maximising HIV prevention by balancing the opportunities of today with the promises of tomorrow: a modelling study[END_REF] Expanding global prevention guidelines to include infant PrEP for infants exposed to HIV by breast feeding could be a major breakthrough as a public health approach to eliminate mother-to-child transmission. Contributors and sources: This article is based on recent publications and conference presentations on PMTCT. All authors conceptualised this article during meetings on mother and child health. PV wrote the first draft of the manuscript and coordinated the revised versions. All authors reviewed and approved the final version and are responsible for the final content of the manuscript. Provenance and peer review: Not commissioned; externally peer reviewed. Competing interests: We have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.
Key messages
WHO recommends pre-exposure prophylaxis for any group with an expected incidence of HIV infection above 3/100 person-years.
Current strategies to prevent mother-to-child transmission of HIV cover only the first six weeks.
Many infections of breastfed infants occur after this period.
Adding infant PrEP to maternal ART is cheap and does not expose infants to unsafe doses.
Routine infant PrEP has the potential to be a breakthrough in elimination of mother-to-child transmission.
16,546
[ "932073", "889428", "889426", "932080" ]
[ "488753", "139833", "488755", "488756", "488757", "488759", "488760", "12196" ]
01488172
en
[ "math" ]
2024/03/04 23:41:48
2018
https://inria.hal.science/hal-01488172/file/CiDS17_HAL.pdf
Patrick Ciarlet email: [email protected] Charles F Dunkl Stefan A Sauter A Family of Crouzeix-Raviart Finite Elements in 3D Keywords: finite element, non-conforming, Crouzeix-Raviart, orthogonal polynomials on triangles, symmetric orthogonal polynomials. AMS-Classification: 33C45, 33C50, 65N12, 65N30, secondary 33C80. In this paper we will develop a family of non-conforming "Crouzeix-Raviart" type finite elements in three dimensions. They consist of local polynomials of maximal degree p ∈ N on simplicial finite element meshes while certain jump conditions are imposed across adjacent simplices. We will prove optimal a priori estimates for these finite elements. The characterization of this space via jump conditions is implicit and the derivation of a local basis requires some deeper theoretical tools from orthogonal polynomials on triangles and their representation. We will derive these tools for this purpose. These results allow us to give explicit representations of the local basis functions. Finally we will analyze the linear independence of these sets of functions and discuss the question whether they span the whole non-conforming space. Introduction For the numerical solution of partial differential equations, Galerkin finite element methods are among the most popular discretization methods. In the last decades, non-conforming Galerkin discretizations have become very attractive where the test and trial spaces are not subspaces of the natural energy spaces and/or the variational formulation is modified on the discrete level. These methods have nice properties, e.g. in different parts of the domain different discretizations can be easily used and glued together or, for certain classes of problems (Stokes problems, highly indefinite Helmholtz and Maxwell problems, problems with "locking", etc.), the non-conforming discretization enjoys a better stability behavior compared to the conforming one. One of the first non-conforming finite element spaces was the Crouzeix-Raviart element ( [START_REF] Crouzeix | Conforming and nonconforming finite element methods for solving the stationary Stokes equations[END_REF], see [START_REF] Brenner | Forty years of the Crouzeix-Raviart element[END_REF] for a survey). It is piecewise affine with respect to a triangulation of the domain while interelement continuity is required only at the barycenters of the edges/facets (2D/3D). In [START_REF] Ciarlet | Intrinsic finite element methods for the computation of fluxes for Poisson's equation[END_REF], a family of high order non-conforming (intrinsic) finite elements has been introduced which corresponds to a family of high-order Crouzeix-Raviart elements in two dimensions. For Poisson's equation, this family includes the non-conforming Crouzeix-Raviart element [START_REF] Crouzeix | Conforming and nonconforming finite element methods for solving the stationary Stokes equations[END_REF], the Fortin-Soulie element [START_REF] Fortin | A nonconforming quadratic finite element on triangles[END_REF], the Crouzeix-Falk element [START_REF] Crouzeix | Nonconforming finite elements for Stokes problems[END_REF], and the Gauss-Legendre elements [START_REF] Baran | Gauss-Legendre elements: a stable, higher order non-conforming finite element family[END_REF], [START_REF] Stoyan | Crouzeix-Velte decompositions for higher-order finite elements[END_REF] as well as the standard conforming hp-finite elements.
In our paper we will characterize a family of high-order Crouzeix-Raviart type finite elements in three dimensions, first implicitly by imposing certain jump conditions at the interelement facets. Then we derive a local basis for these finite elements. These new finite element spaces are non-conforming but the (broken version of the) continuous bilinear form can still be used. Thus, our results also give insights on how far one can go in the non-conforming direction while keeping the original forms. The explicit construction of a basis for these new finite element spaces require some deeper theoretical tools in the field of orthogonal polynomials on triangles and their representations which we develop here for this purpose. As a simple model problem for the introduction of our method, we consider Poisson's equation but emphasize that this method is applicable also for much more general (systems of) elliptic equations. There is a vast literature on various conforming and non-conforming, primal, dual, mixed formulations of elliptic differential equations and conforming as well as non-conforming discretization. Our main focus is the characterization and construction of non-conforming Crouzeix-Raviart type finite elements from theoretical principles. For this reason, we do not provide an extensive list of references on the analysis of specific families of finite elements spaces but refer to the classical monographs [START_REF] Ciarlet | The Finite Element Method for Elliptic Problems[END_REF], [START_REF] Schwab | p-and hp-finite element methods[END_REF], and [START_REF] Boffi | Mixed finite element methods and applications[END_REF] and the references therein. The paper is organized as follows. In Section 2 we introduce our model problem, Poisson's equation, the relevant function spaces and standard conditions on its well-posedness. In Section 3 we briefly recall classical, conforming hp-finite element spaces and their Lagrange basis. The new non-conforming finite element spaces are introduced in Section 4. We introduce an appropriate compatibility condition at the interfaces between elements of the mesh so that the non-conforming perturbation of the original bilinear form is consistent with the local error estimates. We will see that this compatibility condition can be inferred from the proof of the second Strang lemma applied to our setting. The weak compatibility condition allows to characterize the non-conforming family of high-order Crouzeix-Raviart type elements in an implicit way. In this section, we will also present explicit representations of non-conforming basis functions of general degree p while their derivation and analysis is the topic of the following sections. Section 5 is devoted to the explicit construction of a basis for these new non-conforming finite elements. It requires deeper theoretical tools from orthogonal polynomials on triangles and their representation which we will derive for this purpose in this section. It is by no means obvious whether the constructed set of functions is linearly independent and span the non-conforming space which was defined implicitly in Section 4. These questions will be treated in Section 6. Finally, in Section 7 we summarize the main results and give some comparison with the two-dimensional case which was developed in [START_REF] Ciarlet | Intrinsic finite element methods for the computation of fluxes for Poisson's equation[END_REF]. 
Model Problem As a model problem we consider the Poisson equation in a bounded Lipschitz domain Ω ⊂ R d with boundary Γ := ∂Ω. First, we introduce some spaces and sets of functions for the coefficient functions and solution spaces. The Euclidean scalar product in R d is denoted for a, b ∈ R d by a • b. For s ≥ 0, 1 ≤ p ≤ ∞, let W s,p (Ω) denote the classical (real-valued) Sobolev spaces with norm • W s,p (Ω) . The space W s,p 0 (Ω) is the closure with respect to the • W s,p (Ω) of all C ∞ (Ω) functions with compact support. As usual we write L p (Ω) short for W 0,p (Ω). The scalar product and norm in L 2 (Ω) are denoted by (u, v) := Ω uv and • := (•, •) 1/2 . For p = 2, we use H s (Ω), H s 0 (Ω) as shorthands for W s,2 (Ω), W s,2 0 (Ω). The dual space of H s 0 (Ω) is denoted by H -s (Ω). We recall that, for positive integers s, the seminorm |•| H s (Ω) in H s (Ω) which contains only the derivatives of order s is a norm in H s 0 (Ω). We consider the Poisson problem in weak form: Given f ∈ L 2 (Ω) find u ∈ H 1 0 (Ω) a (u, v) := (A∇u, ∇v) = (f, v) ∀v ∈ H 1 0 (Ω) . (1) Throughout the paper we assume that the diffusion matrix A ∈ L ∞ Ω, R d×d sym is symmetric and satisfies 0 < a min := ess inf x∈Ω inf v∈R d \{0} (A (x) v) • v v • v ≤ ess sup x∈Ω sup v∈R d \{0} (A (x) v) • v v • v =: a max < ∞ (2) and that there exists a partition P := (Ω j ) J j=1 of Ω into J (possibly curved) polygons (polyhedra for d = 3) such that, for some appropriate r ∈ N, it holds A P W r,∞ (Ω) := max 1≤j≤J A| Ω j W r,∞ (Ωj ) < ∞. (3) Assumption (2) implies the well-posedness of problem (1) via the Lax-Milgram lemma. Conforming hp-Finite Element Galerkin Discretization In this paper we restrict our studies to bounded, polygonal (d = 2) or polyhedral (d = 3) Lipschitz domains Ω ⊂ R d and regular finite element meshes G (in the sense of [START_REF] Ciarlet | The Finite Element Method for Elliptic Problems[END_REF]) consisting of (closed) simplices K, where hanging nodes are not allowed. The local and global mesh width is denoted by h K := diam K and h := max K∈G h K . The boundary of a simplex K can be split into (d -1)-dimensional simplices (facets for d = 3 and triangle edges for d = 2) which are denoted by T . The set of all facets in G is called F; the set of facets lying on ∂Ω is denoted by F ∂Ω and defines a triangulation of the surface ∂Ω. The set of facets in Ω is denoted by F Ω . As a convention we assume that simplices and facets are closed sets. The interior of a simplex K is denoted by • K and we write • T to denote the (relative) interior of a facet T . The set of all simplex vertices in the mesh G is denoted by V, those lying on ∂Ω by V ∂Ω , and those lying in Ω by V Ω . Similar the set of simplex edges in G is denoted by E, those lying on ∂Ω by E ∂Ω , and those lying in Ω by E Ω . We recall the definition of conforming hp-finite element spaces (see, e.g., [START_REF] Schwab | p-and hp-finite element methods[END_REF]). For p ∈ N 0 := {0, 1, . . .}, let P d p denote the space of d-variate polynomials of total degree ≤ p. For a connected subset ω ⊂ Ω, we write P p d (ω) for polynomials of degree ≤ p defined on ω. For a connected m-dimensional manifold ω ⊂ R d , for which there exists a subset ω ∈ R m along an affine bijection χ ω : ω → ω, we set P m p (ω) := v • χ -1 ω : v ∈ P m p (ω) . If the dimension m is clear from the context, we write P p (ω) short for P m p (ω). The conforming hp-finite element space is given by S p G,c := u ∈ C 0 Ω | ∀K ∈ G u| K ∈ P p (K) ∩ H 1 0 (Ω) . 
(4) A Lagrange basis for S p G,c can be defined as follows. Let N p := i p : i ∈ N d 0 with i 1 + . . . + i d ≤ p (5) denote the equispaced unisolvent set of nodal points on the d-dimensional unit simplex K := x ∈ R d ≥0 | x 1 + . . . + x d ≤ 1 . (6) For a simplex K ∈ G, let χ K : K → K denote an affine mapping. The set of nodal points is given by N p := χ K N | N ∈ N p , K ∈ G , N p Ω := N p ∩ Ω, N p ∂Ω := N p ∩ ∂Ω. (7) The Lagrange basis for S p G,c can be indexed by the nodal points N ∈ N p Ω and is characterized by B G p,N ∈ S p G,c and ∀N ′ ∈ N p Ω B G p,N (N ′ ) = δ N,N ′ , (8) where δ N,N ′ is the Kronecker delta. Definition 1 For all K ∈ G, T ∈ F Ω , E ∈ E Ω , V ∈ V Ω , the conforming spaces S p K,c , S p T,c , S p E,c , S p V,c are given as the spans of the following basis functions S p K,c := span B G p,N | N ∈ • K ∩ N p Ω , S p T,c := span B G p,N | N ∈ • T ∩ N p Ω , S p E,c := span B G p,N | N ∈ • E ∩ N p Ω , S p V,c := span B G p,V . The following proposition shows that these spaces give rise to a direct sum decomposition and that these spaces are locally defined. To be more specific we first have to introduce some notation. In this section, we will characterize a class of non-conforming finite element spaces implicitly by a weak compatibility condition across the facets. For each facet T ∈ F, we fix a unit vector n T which is orthogonal to T . The orientation for the inner facets is arbitrary but fixed while the orientation for the boundary facets is such that n T points toward the exterior of Ω. Our non-conforming finite element spaces will be a subspace of C 0 G (Ω) := u ∈ L ∞ (Ω) | ∀K ∈ G u| • K ∈ C 0 • K and we consider the skeleton T ∈F T as a set of measure zero. For K ∈ G, we define the restriction operator γ K : C 0 G (Ω) → C 0 (K) by (γ K w) (x) = w (x) ∀x ∈ • K and on the boundary ∂K by continuous extension. For the inner facets T ∈ F, let K 1 T , K 2 T be the two simplices which share T as a common facet with the convention that n T points into K 2 . We set ω T := K 1 T ∪ K 2 T . The jump [•] T : C 0 G (Ω) → C 0 (T ) across T is defined by [w] T = (γ K 2 w)| T -(γ K 1 w)| T . (11) For vector-valued functions, the jump is defined component-wise. The definition of the non-conforming finite elements involves orthogonal polynomials on triangles which we introduce first. Let T denote the (closed) unit simplex in R d-1 , with vertices 0, (1, 0, . . . , 0) ⊺ , (0, 1, 0, . . . , 0) ⊺ , (0, . . . , 0, 1) ⊺ . For n ∈ N 0 , the set of orthogonal polynomials on T is given by P ⊥ n,n-1 T :=    P 0 T n = 0, u ∈ P n T | T uv = 0 ∀v ∈ P n-1 T n ≥ 1. (12) We lift this space to a facet T ∈ F by employing an affine transform χ T : T → T P ⊥ n,n-1 (T ) := v • χ -1 T : v ∈ P ⊥ n,n-1 (T ) . The orthogonal polynomials on triangles allows us to formulate the weak compatibility condition which is employed for the definition of non-conforming finite element spaces: [u] T ∈ P ⊥ p,p-1 (T ) , ∀T ∈ F Ω and u| T ∈ P ⊥ p,p-1 (T ) , ∀T ∈ F ∂Ω . (13) We have collected all ingredients for the (implicit) characterization of the non-conforming Crouzeix-Raviart finite element space. Definition 3 The non-conforming finite element space S p G with weak compatibility conditions across facets is given by S p G := {u ∈ L ∞ (Ω) | ∀K ∈ G γ K u ∈ P p (K) and u satisfies (13)} . 
(14) The non-conforming Galerkin discretization of (1) for a given finite element space S which satisfies S p G,nc ⊂ S ⊂ S p G reads: Given f ∈ L 2 (Ω) find u S ∈ S a G (u S , v) := (A∇ G u S , ∇ G v) = (f, v) ∀v ∈ S (15) where ∇ G u (x) := ∇u (x) ∀x ∈ Ω\ T ∈F ∂T . Non-Conforming Finite Elements of Crouzeix-Raviart Type in 3D The definition of the non-conforming space S p G in ( 14) is implicit via the weak compatibility condition. In this section, we will present explicit representations of non-conforming basis functions of Crouzeix-Raviart type for general polynomial order p. These functions together with the conforming basis functions span a space S p G,nc which satisfies the inclusions S p G,c S p G,nc ⊆ S p G (cf. Theorem 10). The derivation of the formula and their algebraic properties will be the topic of the following sections. We will introduce two types of non-conforming basis functions: those whose support is one tetrahedron and those whose support consists of two adjacent tetrahedrons, that is tetrahedrons which have a common facet. For details and their derivation we refer to Section 5 while here we focus on the representation formulae. Non-Conforming Basis Functions Supported on One Tetrahedron The construction starts by defining symmetric orthogonal polynomials b sym p,k , 0 ≤ k ≤ d triv (p) -1 on the reference triangle T with vertices (0, 0) ⊺ , (1, 0) ⊺ , (0, 1) ⊺ , where d triv (p) := p 2 - p -1 3 . ( 16 ) We define the coefficients M (p) i,j = (-1) p 4 F 3 -j, j + 1, -i, i + 1 -p, p + 2, 1 ; 1 2i + 1 p + 1 0 ≤ i, j ≤ p, where p F q denotes the generalized hypergeometric function (cf. [9,Chap. 16]). The 4 F 3 -sum is understood to terminate at i to avoid the 0/0 ambiguities in the formal 4 F 3 -series. These coefficients allow to define the polynomials r p,2k (x 1 , x 2 ) := 2 0≤j≤p/2 M (n) 2j,2k b p,2j + b p,2k 0 ≤ k ≤ p/2, where b p,k , 0 ≤ k ≤ p, are the basis for the orthogonal polynomials of degree p on T as defined afterwards in (35). Then, a basis for the symmetric orthogonal polynomials is given by b sym p,k := r p,p-2k if p is even, r p,p-1-2k if p is odd, k = 0, 1, . . . , d triv (p) -1. (17) The non-conforming Crouzeix-Raviart basis function B K,nc p,k ∈ P p K on the unit tetrahedron K is characterized by its values at the nodal points in N p (cf. ( 5)). For a facet T ⊂ ∂ K, let χ T : T → T denote an affine pullback to the reference triangle. Then B K,nc p,k ∈ P p K is uniquely defined by Remark 4 In Sec. 5.3, we will prove that the polynomials b sym p,k are totally symmetric, i.e., invariant under affine bijections χ : K → K. Thus, any of these functions can be lifted to the facets of a tetrahedron via affine pullbacks and the resulting function on the surface is continuous. As a consequence, the value B K,nc p,k (N) in definition (18) is independent of the choice of T also for nodal points N which belong to different facets. B K,nc p,k (N) := b sym p,k • χ -1 T (N) ∀N ∈ N p s.t. N ∈ T for some facet T ⊂ ∂ K, 0 ∀N ∈ N p \∂ K k = 0, 1, . . . , d triv (p)-1. ( 18 ) b sym 2,0 , B K,nc It will turn out that the value 0 at the inner nodes could be replaced by other values without changing the arising non-conforming space. Other choices could be preferable in the context of inverse inequalities and the condition number of the stiffness matrix. However, we recommend to choose these values such that the symmetries of B K,nc p,k are preserved. 
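The coefficients M^(p)_{i,j} above can be evaluated directly from the terminating 4F3 sum. The sketch below is a numerical illustration in exact rational arithmetic (Python, our own helper names, not part of the paper). The matrix convention M b_{p,k} = Σ_j M^(p)_{j,k} b_{p,j} is the one made precise in Proposition 15 later in the paper, and the consistency checks M² = I and (MR)³ = I simply restate that M and R represent reflections generating S3; they should hold exactly if the coefficients are entered as printed.

```python
from fractions import Fraction

def poch(a, m):
    """Pochhammer symbol (a)_m = a (a+1) ... (a+m-1), exact."""
    r = Fraction(1)
    for t in range(m):
        r *= (a + t)
    return r

def M_entry(p, i, j):
    """M^(p)_{i,j} = (-1)^p * 4F3(-j, j+1, -i, i+1; -p, p+2, 1; 1) * (2i+1)/(p+1),
    with the 4F3 series terminating at m = i, as stated in the text."""
    s = Fraction(0)
    for m in range(i + 1):
        num = poch(-j, m) * poch(j + 1, m) * poch(-i, m) * poch(i + 1, m)
        den = poch(-p, m) * poch(p + 2, m) * poch(1, m) * poch(1, m)  # (1)_m * m!, since (1)_m = m!
        s += num / den
    return Fraction((-1) ** p) * s * Fraction(2 * i + 1, p + 1)

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

for p in range(1, 7):
    M = [[M_entry(p, i, j) for j in range(p + 1)] for i in range(p + 1)]
    R = [[Fraction((-1) ** i) if i == j else Fraction(0) for j in range(p + 1)] for i in range(p + 1)]
    I = [[Fraction(int(i == j)) for j in range(p + 1)] for i in range(p + 1)]
    MR = matmul(M, R)
    assert matmul(M, M) == I                    # M represents a reflection: M^2 = identity
    assert matmul(MR, matmul(MR, MR)) == I      # (MR)^3 = identity, the S3 relation
    print(f"p = {p}: M^2 = I and (MR)^3 = I hold")
```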
Definition 5 The non-conforming tetrahedron-supported basis functions on the reference element are given by B K,nc p,k = N∈ N p ∩∂ K B K,nc p,k (N) B G p,N k = 0, 1, . . . , d triv (p) -1 (19) with values B K,nc p,k (N) as in (18). For a simplex K ∈ G the corresponding non-conforming basis functions B K,nc p,k are given by lifting B K,nc p,k via an affine pullback χ K from K to K ∈ G: B K,nc p,k • K ′ := B K,nc p,k • χ -1 K K = K ′ , 0 K = K ′ . and span the space S p K,nc := span B K,nc p,k : k = 0, 1, . . . , d triv (p) -1 . ( 20 ) Example 6 The lowest order of p such that d triv (p) ≥ 1 is p = 2. In this case, we get d triv (p) = 1. In Figure 1 the function b sym p,k and corresponding basis functions B K,nc p,k are depicted for (p, k) ∈ {(2, 0) , (3, 0) , (6, 0) , (6, 1)}. Non-Conforming Basis Functions Supported on Two Adjacent Tetrahedrons The starting point is to define orthogonal polynomials b refl p,k on the reference triangle T which are mirror symmetric 1 with respect to the angular bisector in T through 0 and linear independent from the fully symmetric functions b sym p,k . We set b refl p,k := 1 3 (2b p,2k (x 1 , x 2 ) -b p,2k (x 2 , 1 -x 1 -x 2 ) -b p,2k (1 -x 1 -x 2 , x 1 )) 0 ≤ k ≤ d refl (p) -1, (21) where d refl (p) := p + 2 3 . (22) Let K 1 , K 2 denote two tetrahedrons which share a common facet, say T . The vertex of K i which is opposite to T is denoted by V i . The procedure of lifting the nodal values to the facets of ω T := K 1 ∪ K 2 is analogous as for the basis functions B K,nc n,k . However, it is necessary to choose the pullback χ i, T : T → T of a facet T ⊂ ∂K i \ • T such that the origin is mapped to V i . B T,nc p,k (N) := b refl p,k • χ -1 i, T (N) ∀N ∈ N p s.t. N ∈ T for some facet T ⊂ ∂K\ • T i , 0 ∀N ∈ N p ∩ • ω T k = 0, 1, . . . , d refl (p)-1. (23) Again, the value 0 at the inner nodes of ω T could be replaced by other values without changing the arising non-conforming space. Definition 7 The non-conforming facet-oriented basis functions are given by B T,nc p,k = N∈N p ∩∂ωT B T,nc p,k (N) B G p,N ωT ∀T ∈ F Ω , k = 0, 1, . . . , d refl (p) -1 (24) with values B T,nc p,k (N) as in (23) and span the space S p T,nc := span B T,nc p,k : k = 0, 1, . . . , d refl (p) -1 . ( 25 ) The non-conforming finite element space of Crouzeix-Raviart type is given by S p G,nc := E∈EΩ S p E,c ⊕ T ∈FΩ S p T,c ⊕ K∈G S p K,c ⊕ K∈G S p K,nc ⊕ T ∈FΩ span B T,nc p,0 . (26) Remark 8 In Sec. 5.3.3, we will show that the polynomials b refl p,k are mirror symmetric with respect to the angular bisector in T through 0. Thus, any of these functions can be lifted to the outer facets of two adjacent tetrahedrons via (oriented) affine pullbacks as employed in (23) and the resulting function on the surface is continuous. As a consequence, the value B T,nc p,k (N) in definition ( 23) is independent of the choice of T also for nodal points N which belong to different facets. In Theorem 33, we will prove that (26), in fact, is a direct sum and a basis is given by the functions B G p,N ∀N ∈ N Ω \V, B K,nc p,k ∀K ∈ G, 0 ≤ k ≤ d triv (p) -1, B T,nc p,0 ∀T ∈ F Ω . Also we will prove that S p G,c S p G,nc ⊆ S p G . This condition implies that the convergence estimates as in Theorem 10 are valid for this space. We restricted the reflection-type non-conforming basis functions to the lowest order k = 0 in order to keep the functions linearly independent. 
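Since the mirror symmetry of b^refl_{p,k} claimed in Remark 8 is exactly what makes the facet-oriented functions continuous, a quick symbolic check may be useful. The sketch below (sympy; the helper names are ours, and formula (35) for b_{n,k}, quoted here ahead of its derivation in Section 5.2, is taken from further down in the paper) verifies invariance of (21) under the swap x1 ↔ x2, i.e. reflection across the angular bisector through the origin, for small p.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

def b(n, k, u, v):
    """Orthogonal polynomial b_{n,k} on the reference triangle, formula (35)."""
    expr = (u + v) ** k \
        * sp.jacobi(n - k, 0, 2 * k + 1, 2 * (u + v) - 1) \
        * sp.jacobi(k, 0, 0, (u - v) / (u + v))
    return sp.cancel(sp.together(expr))

def b_refl(p, k):
    """Reflection-type combination (21)."""
    return sp.Rational(1, 3) * (2 * b(p, 2 * k, x1, x2)
                                - b(p, 2 * k, x2, 1 - x1 - x2)
                                - b(p, 2 * k, 1 - x1 - x2, x1))

for p in range(2, 6):
    for k in range((p - 1) // 3 + 1):        # k = 0, ..., d_refl(p) - 1 = floor((p-1)/3)
        f = b_refl(p, k)
        swapped = f.subs({x1: x2, x2: x1}, simultaneous=True)
        assert sp.simplify(f - swapped) == 0
        print(f"b_refl_{p},{k} is symmetric under x1 <-> x2")
```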
Error Analysis In this subsection we present the error analysis for the Galerkin discretization [START_REF] Stoyan | Crouzeix-Velte decompositions for higher-order finite elements[END_REF] with the non-conforming finite element space S p G and subspaces thereof. The analysis is based on the second Strang lemma and has been presented for an intrinsic version of S p G in [START_REF] Ciarlet | Intrinsic finite element methods for the computation of fluxes for Poisson's equation[END_REF]. For any inner facet T ∈ F and any v ∈ S p G , condition (13) implies T [v] T = 0 : hence, the jump [v] T is always zero-mean valued. Let h T denote the diameter of T . The combination of a Poincaré inequality with a trace inequality then yields where [u] T L 2 (T ) ≤ Ch T |[u] T | H 1 (T ) ≤ Ch 1/2 T |u| H 1 pw (ω T ) , (27) |u| H p pw (ωT ) := K⊂ω T |u| 2 H p (K) 1/2 . In a similar fashion we obtain for all boundary facets T ∈ F ∂Ω and all u ∈ S p G the estimate u L 2 (T ) ≤ Ch 1/2 T |u| H 1 pw (ωT ) . (28) We say that the exact solution u ∈ H 1 0 (Ω) is piecewise smooth over the partition P = (Ω j ) J j=1 , if there exists some positive integer s such that u |Ωj ∈ H 1+s (Ω j ) for j = 1, 2, . . . , J. We write u ∈P H 1+s (Ω) and refer for further properties and generalizations to non-integer values of s, e.g., to [START_REF] Sauter | Boundary Element Methods[END_REF]Sec. 4.1.9]. For the approximation results, the finite element meshes G are assumed to be compatible with the partition P in the following sense: for all K ∈ G, there exists a single index j such that • K∩Ω j = ∅. The proof that |•| H 1 pw (Ω) is a norm on S p G is similar as in [4, Sect. 10.3]: For w ∈ H 1 0 (Ω) this follows from |w| H 1 pw (Ω) = ∇w and a Friedrichs inequality; for w ∈ S p G the condition ∇ G w = 0 implies that w| K is constant on all simplices K ∈ G. The combination with T w = 0 for all T ∈ F ∂Ω leads to w| K = 0 for the outmost simplex layer via a Poincaré inequality, i.e., w| K = 0 for all K ∈ G having at least one facet on ∂Ω. This argument can be iterated step by step over simplex layers towards the interior of Ω to finally obtain w = 0. Theorem 10 Let Ω ⊂ R d be a bounded, polygonal (d = 2) or polyhedral (d = 3) Lipschitz domain and let G be a regular simplicial finite element mesh for Ω. Let the diffusion matrix A ∈ L ∞ Ω, R d×d sym satisfy assumption (2) and let f ∈ L 2 (Ω). As an additional assumption on the regularity, we require that the exact solution of (1) satisfies u ∈ P H 1+s (Ω) for some positive integer s and A P W r,∞ (Ω) < ∞ holds with r := min {p, s}. Let the continuous problem (1) be discretized by the non-conforming Galerkin method (15) with a finite dimensional space S which satisfies S p G,c ⊂ S ⊂ S p G on a compatible mesh G. Then, (15) has a unique solution which satisfies |uu S | H 1 pw (Ω) ≤ Ch r u P H 1+r (Ω) . The constant C only depends on a min , a max , A P W r,∞ (Ω) , p, r, and the shape regularity of the mesh. Proof. The second Strang lemma (cf. [START_REF] Ciarlet | The Finite Element Method for Elliptic Problems[END_REF]Theo. 4.2.2]) applied to the non-conforming Galerkin discretization [START_REF] Stoyan | Crouzeix-Velte decompositions for higher-order finite elements[END_REF] implies the existence of a unique solution which satisfies the error estimate |u -u S | H 1 pw (Ω) ≤ 1 + a max a min inf v∈S |u -v| H 1 pw (Ω) + 1 a min sup v∈S |L u (v)| |v| H 1 pw (Ω) , where L u (v) := a G (u, v) -(f, v) . 
The approximation properties of S are inherited from the approximation properties of S p G,c in the first infimum because of the inclusion S p G,c ⊂ S. For the second term we obtain L u (v) = (A∇u, ∇ G v) -(f, v) . ( 29 ) Note that f ∈ L 2 (Ω) implies that div (A∇u) ∈ L 2 (Ω) and, in turn, that the normal jump [A∇u • n T ] T equals zero and the restriction (A∇u • n T )| T is well defined for all T ∈ F. We may apply simplexwise integration by parts to (29) to obtain L u (v) = - T ∈FΩ T (A∇u • n T ) [v] T + T ∈F∂Ω T (A∇u • n T ) v. Let K T be one simplex in ω T . For 1 ≤ i ≤ d, let q i ∈ P p-1 d (K T ) denote the best approximation of w i := d j=1 A i,j ∂ j u K T with respect to the H 1 (K T ) norm. Then, q i | T n T,i ∈ P p-1 d-1 (T ) for 1 ≤ i ≤ d, and the inclusion S ⊂ S p G implies |L u (v)| ≤ - T ∈FΩ T d i=1 (w i -q i ) • n T,i [v] T (30) + T ∈F ∂Ω T d i=1 (w i -q i ) • n T,i v ≤ T ∈F Ω [v] T L 2 (T ) d i=1 w i -q i L 2 (T ) + T ∈F ∂Ω v L 2 (T ) d i=1 w i -q i L 2 (T ) . Standard trace estimates and approximation properties lead to w i -q i L 2 (T ) ≤ C h -1/2 T w i -q i L 2 (KT ) + h 1/2 T |w i -q i | H 1 (KT ) (31) ≤ Ch r-1/2 T |w i | H r (KT ) ≤ Ch r-1/2 T u H 1+r (KT ) , where C depends only on p, r, A W r (KT ) , and the shape regularity of the mesh.The combination of (30), ( 31) and ( 27),(28) along with the shape regularity of the mesh leads to the consistency estimate |L u (v)| ≤ C T ∈F Ω h r T u H 1+r (KT ) |v| H 1 pw (ωT ) + T ∈F ∂Ω h r T u H 1+r (KT ) |v| H 1 pw (ωT ) ≤ Ch r u P H 1+r (Ω) |v| H 1 pw (Ω) , which completes the proof. Remark 11 If one chooses in (13) a degree p ′ < p for the orthogonality relations in [START_REF] Sauter | Boundary Element Methods[END_REF], then the order of convergence behaves like h r ′ e H 1+r ′ (Ω) , with r ′ := min {p ′ , s}, because the best approximations q i now belong to P p ′ -1 d-1 (T ). P (α,β) n (x) q (x) (1 -x) α (1 + x) β dx = 0 for all polynomials q of degree less than n, and (cf. [9, Table 18.6.1]) P (α,β) n (1) = (α + 1) n n! , P (α,β) n (-1) = (-1) n (β + 1) n n! . ( 32 ) Here the shifted factorial is defined by (a) n := a (a + 1) . . . (a + n -1) for n > 0 and (a) 0 := 1. The Jacobi polynomial has an explicit expression in terms of a terminating Gauss hypergeometric series (see (cf. [9, 18.5.7])) 2 F 1 -n, b c ; z := n k=0 (-n) k (b) k (c) k k! z k (33) as follows P (α,β) n (x) = (α + 1) n n! 2 F 1 -n, n + α + β + 1 α + 1 ; 1 -x 2 . ( 34 ) Orthogonal Polynomials on Triangles Recall that T is the (closed) unit triangle in R 2 with vertices A 0 = (0, 0) ⊺ , A 1 = (1, 0) ⊺ , and A 3 = (0, 1) ⊺ . An orthogonal basis for the space P ⊥ n,n-1 T was introduced in [START_REF] Proriol | Sur une famille de polynomes à deux variables orthogonaux dans un triangle[END_REF] and is given by the functions b n,k , 0 ≤ k ≤ n, b n,k (x) := (x 1 + x 2 ) k P (0,2k+1) n-k (2 (x 1 + x 2 ) -1) P (0,0) k x 1 -x 2 x 1 + x 2 , ( 35 ) where P (0,0) k are the Legendre polynomials (see [9, 18.7.9]) 2 . From (36) (footnote) it follows that these polynomials satisfy the following symmetry relation b n,k (x 1 , x 2 ) = (-1) k b n,k (x 2 , x 1 ) ∀n ≥ 0, ∀ (x 1 , x 2 ) . ( 37 ) By combining (33) -( 35), an elementary calculation leads to 3 b n,0 (0, 0) = (-1) n (n + 1). Let E I := A 0 A 1 , E II := A 0 A 2 , and E III := A 1 A 2 (38) denote the edges of T . 
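As a quick cross-check of (35) against the orthogonality required in (12), before turning to the edge restrictions below, the sketch here verifies symbolically that b_{n,k} is L²-orthogonal (weight 1) to all polynomials of degree at most n-1 on the unit triangle. This is an illustration only (sympy, our own helper names), not part of the paper.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

def b(n, k):
    """b_{n,k} from (35) on the unit triangle, via Jacobi/Legendre polynomials."""
    expr = (x1 + x2) ** k \
        * sp.jacobi(n - k, 0, 2 * k + 1, 2 * (x1 + x2) - 1) \
        * sp.jacobi(k, 0, 0, (x1 - x2) / (x1 + x2))
    return sp.cancel(sp.together(expr))

def integral_T(f):
    """Exact integral over the reference triangle {x1, x2 >= 0, x1 + x2 <= 1}."""
    return sp.integrate(sp.integrate(f, (x2, 0, 1 - x1)), (x1, 0, 1))

n = 3
for k in range(n + 1):
    bnk = b(n, k)
    # orthogonality against all polynomials of degree <= n-1, tested on monomials x1^a x2^c
    assert all(sp.simplify(integral_T(bnk * x1**a * x2**c)) == 0
               for a in range(n) for c in range(n - a))
print(f"b_{n},k for k = 0..{n} are orthogonal to P_{n-1} on the unit triangle")
```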
For Z ∈ {I, II, III}, we introduce the linear restriction operator for the edge E Z by γ Z : C 0 T → C 0 ([0, 1]) by γ I u := u (•, 0) , γ II u := u (0, •) , γ III u = u (1 -•, •) (39) which allows to define b I n,k := γ I b n,k , b II n,k := γ II b n,k , b III n,k := γ III b n,k , for k = 0, 1, . . . , n. 2 The Legendre polynomials with normalization P (0,0) k (1) = 1 for all k = 0, 1, . . . can be defined [9, Table 18.9.1] via the three-term recursion P (0,0) 0 (x) = 1; P (0,0) 1 (x) = x; and (k + 1) P (0,0) k+1 (x) = (2k + 1) xP (0,0) k (x) -kP (0,0) k-1 (x) for k = 1, 2, . . . , (36) from which the well-known relation P (0,0) k (x) = (-1) k P (0,0) k (x) for all k ∈ N 0 follows. 3 Further special values are b n,0 (0, 0) = P (0,1) n Proof. First note that x j (x -1) n-j : 0 ≤ j ≤ n is a basis for P n ([0, 1]); this follows from expanding the right-hand side of x m = x m (x -(x -1)) n-m . Specialize the formula [9, 18.5.8] (-1) = (-1) n (2) n n! = (-1) n (n + 1) , b n,k (0, 0) = 0, 1 ≤ k ≤ n, b n,k (1, 0) = P (0,2k+1) n-k (1) P (0,0) k (1) = 1, 0 ≤ k ≤ n, b n,k (0, 1) = P (0,2k+1) n-k (1) P (0,0) k (-1) = (-1) k , 0 ≤ k ≤ n. P (α,β) m (s) = (α + 1) m m! 1 + s 2 m 2 F 1 -m, -m -β α + 1 ; s -1 s + 1 to m = n -k, α = 0, β = 2k + 1, s = 2x -1 to obtain b I n,k (x) = x n 2 F 1 k -n, -n -k -1 1 ; x -1 x ( 40 ) (33) = n-k i=0 (k -n) i (-n -k -1) i i!i! x n-i (x -1) i . ( 41 ) The highest index i of x n-i (x -1) i in b I n,k (x) is n -k with coefficient (2k + 2) n-k (n -k)! = 0. Thus the matrix expressing b I n,0 , . . . , b I n,n ! in terms of " (x -1) n , x (x -1) n-1 , . . . , x n # is triangular and nonsingular; hence b I n,k : 0 ≤ k ≤ n is a basis of P n ([0, 1]). The symmetry relation b II n,k = (-1) k b I n,k for 0 ≤ k ≤ n (cf. ( 37 )) shows that b II n,k : 0 ≤ k ≤ n is also a basis of P n ([0, 1]). Finally substituting x 1 = 1 -x, x 2 = x in b n,k results in b III n,k (x) = P (0,2k+1) n-k (1) P (0,0) k (1 -2x) , (42) and P (0,2k+1) n-k (1) = 1 (from (32)). Clearly P (0,0) k (1 -2x) : 0 ≤ k ≤ n is a basis for P n ([0, 1]). Lemma 13 Let v ∈ P n ([0, 1]). Then, there exist unique orthogonal polynomials u Z ∈ P ⊥ n,n-1 T , Z ∈ {I, II, III} with v = γ Z u Z . Thus, the linear extension operator E Z : P n ([0, 1]) → P ⊥ n,n-1 T is well defined by E Z v := u Z . Proof. From Lemma 12 we conclude that γ Z is surjective. Since the polynomial spaces are finite dimensional the assertion follows from dim P n ([0, 1]) = n + 1 = dim P ⊥ n,n-1 T . The orthogonal polynomials can be lifted to a general triangle T . Definition 14 Let T denote a triangle and χ T an affine pullback to the reference triangle T . Then, the space of orthogonal polynomials of degree n on T is P ⊥ n,n-1 (T ) := v • χ -1 T : v ∈ P ⊥ n,n-1 T . From the transformation rule for integrals one concludes that for any u = v • χ -1 T ∈ P ⊥ n,n-1 (T ) and all q ∈ P n-1 (T ) it holds T uq = T v • χ -1 T q = 2 |T | T v (q • χ T ) = 0 (43) since q • χ T ∈ P n-1 T . Here |T | denotes the area of the triangle T . Totally Symmetric Orthogonal Polynomials In this section, we will decompose the space of orthogonal polynomials P ⊥ n,n-1 T into three irreducible modules (see §5.3.1) and thus, obtain a direct sum decomposition P ⊥ n,n-1 T = P ⊥,sym n,n-1 T ⊕ P ⊥,refl n,n-1 T ⊕ P ⊥,sign n,n-1 T . We will derive an explicit representation for a basis of the space of totally symmetric polynomials P ⊥,sym n,n-1 T in §5.3.2 and of the space of reflection symmetric polynomials P ⊥,refl n,n-1 T in §5.3.3. 
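The proof of Lemma 12 shows that the edge restrictions b^Z_{n,k}, Z ∈ {I, II, III}, 0 ≤ k ≤ n, form bases of P_n([0,1]). This can be checked in the same spirit by testing that the coefficient matrices of the restricted polynomials in the monomial basis are nonsingular; the sketch below (sympy, illustrative only, helper names ours) does this for n = 4, using the restriction operators γ_I, γ_II, γ_III from (39).

```python
import sympy as sp

x1, x2, x = sp.symbols("x1 x2 x")

def b(n, k):
    expr = (x1 + x2) ** k \
        * sp.jacobi(n - k, 0, 2 * k + 1, 2 * (x1 + x2) - 1) \
        * sp.jacobi(k, 0, 0, (x1 - x2) / (x1 + x2))
    return sp.cancel(sp.together(expr))

# edge restrictions gamma_I, gamma_II, gamma_III from (39)
restrictions = {
    "I":   lambda f: f.subs({x1: x, x2: 0}),
    "II":  lambda f: f.subs({x1: 0, x2: x}),
    "III": lambda f: f.subs({x1: 1 - x, x2: x}),
}

n = 4
for name, gamma in restrictions.items():
    rows = []
    for k in range(n + 1):
        r = sp.expand(gamma(b(n, k)))
        rows.append([r.coeff(x, j) for j in range(n + 1)])
    assert sp.Matrix(rows).det() != 0
    print(f"edge {name}: restrictions of b_{n},0 ... b_{n},{n} form a basis of P_{n}([0,1])")
```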
We start by introducing, for functions on triangles, the notation of total symmetry. For an arbitrary triangle T with vertices A 0 , A 1 , A 2 , we introduce the set of permutations Π = {(i, j, k) : i, j, k ∈ {0, 1, 2} pairwise disjoint}. For π = (i, j, k) ∈ Π, define the affine mapping χ π : T → T by χ π (x) = A i + x 1 (A j -A i ) + x 2 (A k -A i ) . ( 44 ) We say a function u, defined on T , has total symmetry if u = u • χ π ∀π ∈ Π. The space of totally symmetric orthogonal polynomials is P ⊥,sym n,n-1 T := u ∈ P ⊥ n,n-1 T : u has total symmetry . ( 45 ) The construction of a basis of P ⊥,sym n,n-1 T requires some algebraic tools which we develop in the following. The decomposition of P ⊥ n,n-1 T or P n ([0, 1]) into irreducible S 3 modules We use the operator γ I (cf. (39)) to set up an action of the symmetric group S 3 on P n ([0, 1]) by transferring its action on P ⊥ n,n-1 T on the basis {b n,k }. It suffices to work with two generating reflections. On the triangle χ {0,2,1} (x 1 , x 2 ) = (x 2 , x 1 ) and thus b n,k • χ {0,2,1} = (-1) k b n,k (this follows from (37)). The action of χ {0,2,1} is mapped to n k=0 α k b I n,k → n k=0 (-1) k α k b I n,k , and denoted by R. For the other generator we use χ {1,0,2} (x 1 , x 2 ) = (1 -x 1 -x 2 , x 2 ). Under γ I this corresponds to the map n k=0 α k b I n,k (x) → n k=0 α k b I n,k (1 -x) which is denoted by M. We will return later to transformation formulae expressing b n,k • χ {1,0,2} (x 1 , x 2 ) = (1 -x 1 ) k P (0,2k+1) n-k (1 -2x 1 ) P (0,0) k 1 -x 1 -2x 2 1 -x 1 in the {b n,k }-basis. Observe that (MR) 3 = I because χ {1,0,2} • χ {0,2,1} (x 1 , x 2 ) = (1 -x 1 -x 2 , x 1 ) and this mapping is of period 3. It follows that each of {M, R} and χ {1,0,2} , χ {0,2,1} generates (an isomorphic copy of) S 3 . It is a basic fact that the relations M 2 = I, R 2 = I and (MR) 3 = I define S 3 . The representation theory of S 3 informs us that there are three nonisomorphic irreducible representations: τ triv : χ {0,2,1} → 1, χ {1,0,2} → 1; τ sign : χ {0,2,1} → -1, χ {1,0,2} → -1; τ refl : χ {0,2,1} → σ 1 := $ -1 0 0 1 % , χ {1,0,2} → σ 2 := $ 1 2 1 3 4 -1 2 % . (The subscript "refl" designates the reflection representation). Then the eigenvectors of σ 1 , σ 2 with -1 as eigenvalue are (-1, 0) ⊺ and (2, -3) ⊺ respectively; these two vectors are a basis for R 2 . Similarly the eigenvectors of σ 1 and σ 2 with eigenvalue +1, namely (0, 1) ⊺ , (2, 1) ⊺ , form a basis. Form a direct sum P ⊥ n,n-1 T :=   j≥0 E (triv) j   ⊕   j≥0 E (sign) j   ⊕   j≥0 E (refl) j   , where the E If n = 2m + 1 is odd then the eigenvector multiplicities are m + 1 for both eigenvalues +1, -1. By similar arguments we obtain the equations (triv) j , E (sign) j , E ( d refl (n) + d sign (n) = m + 1, d refl (n) + d triv (n) = m + 1. It remains to find one last relation for both, even and odd cases. To finish the determination of the multiplicities d triv (n) , d sign (n) , d refl (n) it suffices to find d triv (n). This is the dimension of the space of polynomials in P ⊥ n,n-1 T which are invariant under both χ {0,2,1} and χ {1,0,2} . Since these two group elements generate S 3 this is equivalent to being invariant under each element of S 3 .This property is called totally symmetric. Under the action of γ I this corresponds to the space of polynomials in P n ([0, 1]) which are invariant under both R and M. 
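The defining relations M² = R² = (MR)³ = I can be checked directly on the 2×2 matrices σ1, σ2 assigned to the two generating reflections in τ_refl above. The sketch below (numpy, purely illustrative) confirms the relations numerically.

```python
import numpy as np

# matrices assigned to the generating reflections in the representation tau_refl
sigma1 = np.array([[-1.0, 0.0],
                   [ 0.0, 1.0]])
sigma2 = np.array([[0.5,  1.0],
                   [0.75, -0.5]])

I2 = np.eye(2)
assert np.allclose(sigma1 @ sigma1, I2)                              # a reflection squares to the identity
assert np.allclose(sigma2 @ sigma2, I2)
assert np.allclose(np.linalg.matrix_power(sigma1 @ sigma2, 3), I2)   # the order-3 relation of S3
print("sigma1, sigma2 satisfy the defining relations of S3")
```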
We appeal to the classical theory of symmetric polynomials: suppose S 3 acts on polynomials in (y 1 , y 2 , y 3 ) by permutation of coordinates then the space of symmetric (invariant under the group) polynomials is exactly the space of polynomials in {e 1 , e 2 , e 3 } the elementary symmetric polynomials, namely e 1 = y 1 + y 2 + y 3 , e 2 = y 1 y 2 + y 1 y 3 + y 2 y 3 , e 3 = y 1 y 2 y 3 . To apply this we set up an affine map from T to the triangle in R 3 with vertices (2, -1, -1), (-1, 2, -1), (-1, -1, 2). The formula for the map is y (x) = (2 -3x 1 -3x 2 , 3x 1 -1, 3x 2 -1) . The map takes (0, 0) , (1, 0) , (0, 1) to the three vertices respectively. The result is This number is the coefficient of t n in the power series expansion of e 1 (y (x)) = 0, e 2 (y (x)) = -9 x 2 1 + x 1 x 2 + x 2 2 -x 1 -x 2 -3, e 3 (y (x)) = (3x 1 -1) (3x 2 -1) (2 -3x 1 -3x 2 ) . 1 (1 -t 2 ) (1 -t 3 ) = 1 + t 2 + t 3 + t 4 + t 5 + t 7 1 + 2t 6 + 3t 12 + . . . . From d triv (n) = card ({0, 2, 4, . . .} ∩ {n, n -3, n -6, . . .}) we deduce the formula (cf. ( 16)) d triv (n) = n 2 - n -1 3 . As a consequence: if n = 2m then d sign (n) = d triv (n) -1 and d refl (n) = m + 1 -d triv (n); if n = 2m + 1 then d sign (n) = d triv (n) and d refl (n) = m + 1 -d triv (n). From this the following can be derived: d sign (n) = * n-1 2 + - * n-1 3 + and d refl (n) = * n+2 3 + . Here is a table of values in terms of n mod 6: n d triv (n) d sign (n) d refl (n) 6m m + 1 m 2m 6m + 1 m m 2m + 1 6m + 2 m + 1 m 2m + 1 6m + 3 m + 1 m + 1 2m + 1 6m + 4 m + 1 m 2m + 2 6m + 5 m + 1 m + 1 2m + 2 . Construction of totally symmetric polynomials Let M and R denote the linear maps Mp (x 1 , x 2 ) := p (1x 1x 2 , x 2 ) and Rp (x 1 , x 2 ) := p (x 2 , x 1 ) respectively. Both are automorphisms of P ⊥ n,n-1 T . Note M p = p • χ {1,0,2} and Rp = p • χ {0,2,1} (cf. Section 5.3.1). Proposition 15 Suppose 0 ≤ k ≤ n then Rb n,k = (-1) k b n,k ; (46) M b n,k = (-1) n n j=0 4 F 3 -j, j + 1, -k, k + 1 -n, n + 2, 1 ; 1 2j + 1 n + 1 b n,j . (47) Proof. The 4 F 3 -sum is understood to terminate at k to avoid the 0/0 ambiguities in the formal 4 F 3 -series. The first formula was shown in Section 5.3.1. The second formula is a specialization of transformations in [10, Theorem 1.7(iii)]: this paper used the shifted Jacobi polynomial R (α,β) m (s) = m! (α+1) m P (α,β) m (1 -2s). Setting α = β = γ = 0 in the formulas in [10, Theorem 1.7(iii)] results in b n,k = (-1) k θ n,k k! (n -k)! and Mb n,k = φ n,k k! (n -k)! , where θ n,k , φ n,k are the polynomials introduced in [10, p.690]. More precisely, the arguments v 1 , v 2 , v 3 in θ n,k and φ n,k are specialized to v 1 = x 1 , v 2 = x 2 and v 3 = 1 -x 1 -x 2 . Proposition 16 The range of I + RM + M R is exactly the subspace p ∈ P ⊥ n,n-1 T : RM p = p . Proof. By direct computation (MR) 3 = I (cf. Section 5.3.1). This implies (RM) 2 = M R. If p satisfies RM p = p then Mp = Rp and p = M Rp. Now suppose RM p = p then (I + RM + MR) 1 3 p = p; hence p is in the range of I + RM + M R. Conversely suppose p = (I + RM + MR) p ′ for some polynomial p ′ , then, RM (I + RM + M R) p ′ = RM + (RM) 2 + I p ′ = p. Let M (n) i,j , R (n) i,j denote the matrix entries of M, R with respect to the basis {b n,k : 0 ≤ k ≤ n}, respectively (that is M b n,k = n j=0 b n,j M (n) j,k ) . Let S (n) i,j denote the matrix entries of M R + RM + I. Then R (n) i,j = (-1) i δ i,j ; M (n) i,j = (-1) n 4 F 3 -i, i + 1, -j, j + 1 -n, n + 2, 1 ; 1 2i + 1 n + 1 ; S (n) i,j = (-1) j + (-1) i M (n) i,j + δ i,j . 
Thus S (n) i,j = 2M (n) i,j + δ i,j if both i, j are even, S i,j = -2M (n) i,j + δ i,j if both i, j are odd , and Proof. We use the homogeneous form of the b n,m as in [START_REF] Dunkl | Orthogonal polynomials with symmetry of order three[END_REF], that is, set S (n) i,j = 0 if i -j ≡ 1 mod 2. b ′ n,2m (v) = (v 1 + v 2 + v 3 ) n b n,2m v 1 v 1 + v 2 + v 3 , v 2 v 1 + v 2 + v 3 = (v 1 + v 2 + v 3 ) n-2m P (0,4m+1) n-2m v 1 + v 2 -v 3 v 1 + v 2 + v 3 (v 1 + v 2 ) 2m P (0,0) 2m v 1 -v 2 v 1 + v 2 . Formally b ′ n,j (v) = (-1) j (j! (nj)!) -1 θ n,j (v) with θ n,j as in [10, p.690]. The expansion of such a polynomial is a sum of monomials v n 1 1 v n 2 2 v n 3 3 with 3 i=1 n i = n. Symmetrizing the monomial results in the sum of v m 1 1 v m 2 2 v m 3 3 where (m 1 , m 2 , m 3 ) ranges over all permutations of (n 1 , n 2 , n 3 ). The argument is based on the occurrence of certain indices in b n,m . For a more straightforward approach to the coefficients we use the following expansions (with ℓ = n -2k, β = 2k + 1): (v 1 + v 2 + v 3 ) ℓ P (0,β) ℓ v 1 + v 2 -v 3 v 1 + v 2 + v 3 = (-1) ℓ (v 1 + v 2 + v 3 ) ℓ P (β,0) ℓ -v 1 -v 2 + v 3 v 1 + v 2 + v 3 (48) = (-1) ℓ (β + 1) ℓ ℓ! ℓ i=0 (-ℓ) i (ℓ + β + 1) i i! (β + 1) i (v 1 + v 2 ) i (v 1 + v 2 + v 3 ) ℓ-i ; and (v 1 + v 2 ) 2k P (0,0) 2k v 1 -v 2 v 1 + v 2 = 1 (2k)! 2k j=0 (-2k) j (-2k) j (-2k) 2k-j j! v j 2 v 2k-j 1 . First let n = 2m. The highest power of v 3 that can occur in b ′ 2m,2m-2k is 2k, with corresponding coefficient (4m-4k+1) 2k (2k)! 2m-2k j=0 c j v j 2 v 2m-j 1 for certain coefficients {c j }. Recall that d triv (n) is the number of solutions (i, j) of the equation 3j + 2i = 2m (with i, j = 0, 1, 2, . . .). The solutions can be listed as (m, 0) , (m -3, 2) , (m -6, 4) . . . (m -3ℓ, 2ℓ) where ℓ = d triv (n) -1. By hypothesis (m -3k, 2k) occurs in the list and thus m -3k ≥ 0 and mk ≥ 2k. There is only one possible permutation of v m-k 1 v m-k 2 v 2k 3 that occurs in b ′ 2m,2m-2k and its coefficient is (2k-2m) 3 m-k (2m-2k)! = 0. Hence there is a triangular pattern for the occurrence of v m 1 v m 2 , v m-1 1 v m-1 2 v 2 3 , v m-2 1 v m-2 2 v 4 3 , . . .in the symmetrizations of b ′ 2m,2m , b ′ 2m,2m-2 . . . with nonzero numbers on the diagonal and this proves the basis property when n = 2m. Now let n = 2m + 1. The highest power of v 3 that can occur in b ′ 2m+1,2m-2k is 2k + 1, with coefficient (4m-4k+1) 2k+1 (2k+1)! 2m-2k j=0 c j v j 2 v 2m-j 1 for certain coefficients {c j }. The solutions of 3j + 2i = 2m + 1 can be listed as (m -1, 1) , (m -4, 3) , (m -7, 5) . . . (m -1 -3ℓ, 2ℓ + 1) where ℓ = d triv (n) -1. By hypothesis (m -1 -3k, 2k + 1) occurs in this list, thus mk ≥ 2k + 1. There is only one possible permutation of v m-k 1 v m-k 2 v 2k+1 3 that occurs in b ′ 2m+1,2m-2k and its coefficient is (2k-2m) 3 m-k (2m-2k)! = 0. As above, there is a triangular pattern for the occurrence of v m 1 v m 2 v 3 , v m-1 1 v m-1 2 v 3 3 , v m-2 1 v m-2 2 v 5 3 , . . . in the symmetrizations of b ′ 2m+1,2m , b ′ 2m+1,2m-2 , . . . with nonzero numbers on the diagonal and this proves the basis property when n = 2m + 1. The totally symmetric orthogonal polynomials can be lifted to a general triangle T . Definition 19 Let T denote a triangle. 
The space of totally symmetric, orthogonal polynomials of degree n is P ⊥,sym n,n-1 (T ) := u ∈ P ⊥ n,n-1 (T ) : u has total symmetry (49) = span b T,sym n,m : 0 ≤ m ≤ d triv (n) -1 , (50) where the lifted symmetric basis functions are given by b T,sym n,m := b sym n,m • χ -1 T for b sym n,m as in Theorem 18 and an affine pullback χ T : T → T . A Basis for the τ refl component of P ⊥ n,n-1 (T ) As explained in Section 5.3.1 the space P ⊥ n,n-1 T can be decomposed into the τ triv -, the τ sign -and the τ refl -component. A basis for the τ triv component are the fully symmetric basis functions (cf. Section 5.3.2). Next, we will construct a basis for all of P ⊥ n,n-1 T by extending the totally symmetric one. It is straightforward to adjoin the d sign (n) basis, using the same technique as for the fully symmetric ones: the monomials which appear in p with Rp = -p = M p must be permutations of v n1 1 v n2 2 v n3 3 with n 1 > n 2 > n 3 . As in Theorem 18 for n = 2m argue on monomials v m-k 1 v m-1-k 2 v 2k+1 3 and the polynomials b ′ 2m,2m-2k-1 with 0 ≤ k ≤ d sign (n) -1 = d triv (n) -2, and for n = 2m + 1 use the monomials v m+1-k 1 v m-k 2 v 2k 3 and b 2m+1,2m-2k with 0 ≤ k ≤ d triv (n) -1 = d sign (n) -1. As we will see when constructing a basis for the non-conforming finite element space, the τ sign component of P ⊥ n,n-1 T is not relevant, in contrast to the τ refl component. In this section, we will construct a basis for the τ refl polynomials in P ⊥ n,n-1 T . Each such polynomial is an eigenvector of RM + MR with eigenvalue -1. We will show that the polynomials b refl n,k = 1 3 (2I -RM -MR) b n,2k , 0 ≤ k ≤ n -1 3 , (51) are linearly independent (and the same as introduced in ( 21)) and, subsequently, that the set RMb refl n,k , M Rb refl n,k : 0 ≤ k ≤ n -1 3 (52) is a basis for the τ refl subspace of P ⊥ n,n-1 T . (The upper limit of k is as in (52) d refl (n) -1 (cf. ( 22 )).) Note that RMb refl n,k = 1 3 (2RM -MR -I) b n,2k , M Rb refl n,k = 1 3 (2M R -I -RM ) b n,2k , (53) because (RM) 2 = MR. Thus the calculation of these polynomials follows directly from the formulae for [M ij ] and [R ij ]. The method of proof relies on complex coordinates for the triangle. Lemma 20 For k = 0, 1, 2, . . . P (0,0) 2k (s) = (-1) k k + 1 2 k k! k j=0 (-k) 2 j j! 1 2 -2k j 1 -s 2 k-j , (v 1 + v 2 ) 2k P (0,0) 2k v 1 -v 2 v 1 + v 2 = (-1) k k + 1 2 k k! k j=0 (-k) 2 j j! 1 2 -2k j 4 k-j (v 1 v 2 ) k-j (v 1 + v 2 ) 2j . Proof. Start with the formula (specialized from a formula for Gegenbauer polynomials [9, 18.5.10]) P (0,0) 2k (s) = (2s) 2k 1 2 2k (2k)! 2 F 1 -k, 1 2 -k 1 2 -2k ; 1 s 2 . Apply the transformation (cf. [9, 15.8.1]) 2 F 1 -k, b c ; t = (1 -t) k 2 F 1 -k, c -b c ; t t -1 with t = 1/s 2 ; then t t -1 = 1 1 -s 2 and s 2k 1 -1 s 2 k = (-1) k 1 -s 2 k . Also 2 2k ( 1 2 ) 2k (2k)! = ( 1 2 ) 2k k!( 1 2 ) k = (k+ 1 2 ) k k! . This proves the first formula. Set 2 to obtain the second one. Introduce complex homogeneous coordinates: s = v 1 -v 2 v 1 + v 2 then 1 -s 2 = 4v 1 v 2 (v 1 + v 2 ) z = ωv 1 + ω 2 v 2 + v 3 z = ω 2 v 1 + ωv 2 + v 3 t = v 1 + v 2 + v 3 . Recall ω = e 2πi/3 = -1 2 + i 2 √ 3 and ω 2 = ω. The inverse relations are v 1 = 1 3 (-(ω + 1) z + ωz + t) v 2 = 1 3 (ωz -(ω + 1) z + t) v 3 = 1 3 (z + z + t) . Suppose f (z, z, t) is a polynomial in z and z then Rf (z, z, t) = f (z, z, t) and M f (z, z, t) = f ωz, ω 2 z, t . Thus RM f (z, z, t) = f ω 2 z, ωz, t and M Rf (z, z, t) = f ωz, ω 2 z, t . 
The idea is to write b n,2k in terms of z, z, t and apply the projection Π := 1 3 (2I -M R -RM ). To determine linear independence it suffices to consider the terms of highest degree in z, z thus we set t = v 1 + v 2 + v 3 = 0 in the formula for b n,2k (previously denoted b ′ n,2k using the homogeneous coordinates, see proof of Theorem 18). From formula (48) and Lemma 20 b ′ n,2k (v 1 , v 2 , 0) = (n -2k + 2) n-2k (v 1 + v 2 ) n-2k (-1) k k + 1 2 k k! × k j=0 (-k) 2 j j! 1 2 -2k j 4 k-j (v 1 v 2 ) k-j (v 1 + v 2 ) 2j . The coefficient of (v 1 v 2 ) k (v 1 + v 2 ) n-2k in b ′ n,2k (v 1 , v 2 , 0 ) is nonzero, and this is the term with highest power of v 1 v 2 . Thus b ′ n,2k (v 1 , v 2 , 0) : 0 ≤ k ≤ n-2 3 is a basis for span (v 1 v 2 ) k (v 1 + v 2 ) n-2k : 0 ≤ k ≤ n-2 3 . The next step is to show that the projection Π has trivial kernel. In the complex coordinates v 1 + v 2 = -1 3 (z + z -t) = -1 3 (z + z) and v 1 v 2 = 1 9 z 2 -zz + z 2 (discarding terms of lower order in z, z, that is, set t = 0). Proposition 21 If Π ⌊(n-1)/3⌋ k=0 c k (z + z) n-2k z 2 -zz + z 2 k = 0 then c k = 0 for all k. Proof. For any polynomial f (z, z) we have Πf (z, z) = 1 3 2f (z, z)f ω 2 z, ωzf ωz, ω 2 z . In particular Π (z + z) n-2k z 2 -zz + z 2 k = Π (z + z) n-3k z 3 + z 3 k = 1 3 2 (z + z) n-3k -ω 2 z + ωz n-3k -ωz + ω 2 z n-3k z 3 + z 3 k . By hypothesis n -3k ≥ 1. Evaluate the expression at z = e πi /6 + ε where ε is real and near 0. Note e πi /6 = 1 2 √ 3 + i . Then z + z = √ 3 + 2ε, ω 2 z + ωz = -ε, ωz + ω 2 z = - √ 3 -ε, z 3 + z 3 = 3ε + 3 √ 3ε 2 + 2ε 3 , and 1 3 2 (z + z) n-3k -ω 2 z + ωz n-3k -ωz + ω 2 z n-3k z 3 + z 3 k = 1 3 2 -(-1) n-3k × 3 (n-3k)/2 -(-ε) n-3k + Cε + O ε 2 ε k 3 + 3 √ 3ε + 2ε 2 k , where C = 3 (n--3k-1)/2 (n -3k) 4 -2 (-1) n-3k (binomial theorem). The dominant term in the right-hand side is 2 -(-1) n-3k 3 (n-k)/2-1 ε k . Now suppose Π ⌊(n-1)/3⌋ k=0 c k (z + z) n-2k z 2 -zz + z 2 k = 0. Evaluate the polynomial at z = e πi /6 + ε. Let ε → 0 implying c 0 = 0. Indeed write the expression as ⌊(n-1)/3⌋ k=0 c k 2 -(-1) n-3k 3 (n-k)/2-1 ε k (1 + O (ε)) = 0. Since 2 -(-1) n-3k ≥ 1 this shows c k = 0 for all k. We have shown: Proposition 22 Suppose Π ⌊(n-1)/3⌋ k=0 c k b n,2k = 0 then c k = 0 for all k; the cardinality of the set (52) is d refl (n). Π (z + z) n-3k z 3 + z 3 k = n-3k j=0 n-2j≡1,2 mod 3 n -3k j z n-3k-j z j z 3 + z 3 k . Then RM w k (z, z) = n-3k j=0,n-2j≡1,2 mod 3 n -3k j ω 2j-n z n-3k-j z j z 3 + z 3 k , MRw k (z, z) = n-3k j=0,n-2j≡1,2 mod 3 n -3k j ω n-2j z n-3k-j z j z 3 + z 3 k . Firstly we show that {RM w k , M Rw k } is linearly independent for 0 ≤ k ≤ n-1 3 . For each value of n mod 3 we select the highest degree terms from RM w k and MRw k : (i) n = 3m + 1, ω 2 z 3m+1 + ωz 3m+1 and ωz 3m+1 + ω 2 z 3m+1 , (ii) n = 3m+2, ωz 3m+2 +ω 2 z 3m+2 and ω 2 z 3m+2 +ωz 3m+2 , (iii) n = 3m, (n -3k) ω 2 z 3m z + ωzz 3m and (n -3k) ωz 3m z + ω 2 zz 3m (by hypothesis n-3k ≥ 1). In each case the two terms are linearly independent (the determinant of the coefficients is ± ωω 2 = ∓i √ 3). Secondly the same argument as in the previous theorem shows that ⌊(n-1)/3⌋ k=0 {c k RMw k + d k M Rw k } = 0 implies c k RM w k + d k M Rw k = 0 for all k. By the first part it follows that c k = 0 = d k . This completes the proof. Remark 24 The basis b n,k for P ⊥ n,n-1 T in (35) is mirror symmetric with respect to the angular bisector in T through the origin for even k and is mirror skew-symmetric for odd k. This fact makes the point 0 in T special compared to the other vertices. 
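The complex homogeneous coordinates used in the preceding arguments can be sanity-checked numerically. The sketch below (numpy, illustrative only) verifies that the stated inverse relations recover (v1, v2, v3), which rests on 1 + ω + ω² = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
omega = np.exp(2j * np.pi / 3)

v1, v2, v3 = rng.random(3)                  # arbitrary real homogeneous coordinates

# forward map
z = omega * v1 + omega**2 * v2 + v3
zbar = omega**2 * v1 + omega * v2 + v3
t = v1 + v2 + v3

# stated inverse relations
w1 = (-(omega + 1) * z + omega * zbar + t) / 3
w2 = (omega * z - (omega + 1) * zbar + t) / 3
w3 = (z + zbar + t) / 3

assert np.allclose([w1, w2, w3], [v1, v2, v3])
print("inverse relations for (z, zbar, t) recover (v1, v2, v3)")
```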
As a consequence the functions defined in Theorem 23.a reflects the special role of 0. Part b shows that it is possible to define a basis with functions which are either symmetric with respect to the angle bisector in T through (1, 0) ⊺ or through (0, 1) ⊺ by "rotating" the functions Πb n,2k to these vertices: RM (Πb n,2k ) (x 1 , x 2 ) = (Πb n,2k ) (x 2 , 1 -x 1 -x 2 ) and M R (Πb n,2k ) (x 1 , x 2 ) = (Πb n,2k ) (1 -x 1 -x 2 , x 1 ) . Since the dimension of E (refl) is 2d refl (n) = 2 * n+2 3 + is not (always) a multiple of 3, it is, in general, not possible to define a basis where all three vertices of the triangle are treated in a symmetric way. Definition 25 Let P ⊥,refl n,n-1 T := span RM Πb n,2k , M RΠb n,2k : 0 ≤ k ≤ n -1 3 . ( 54 ) This space is lifted to a general triangle T by fixing a vertex P of T and setting P ⊥,refl n,n-1 (T ) := u • χ -1 P,T : u ∈ P ⊥,refl n,n-1 T , (55) where the lifting χ P,T is an affine pullback χ P,T : T → T which maps 0 to P. The basis b refl n,k to describe the restrictions of facet-oriented, non-conforming finite element functions to the facets is related to a reduced space and defined as in (51) with lifted versions b P,T n,k := b refl n,k • χ -1 P,T , 0 ≤ k ≤ n -1 3 . ( 56 ) Remark 26 The construction of the spaces P ⊥,sym p,p-1 (T ) and P ⊥,refl p,p-1 (T ) (cf. Definitions 19 and 25) implies the direct sum decomposition span b p,2k • χ -1 P,T : 0 ≤ k ≤ ⌊p/2⌋ = P ⊥,sym p,p-1 (T ) ⊕ P ⊥,refl p,p-1 (T ) . (57) It is easy to verify that the basis functions b P,T p,k are mirror symmetric with respect to the angle bisector in T through P. However, the space P ⊥,refl n,n-1 (T ) is independent of the choice of the vertex P. In Appendix A we will define further sets of basis functions for the τ refl component of P ⊥ n,n-1 T -different choices might be preferable for different kinds of applications. Simplex-Supported and Facet-Oriented Non-Conforming Basis Functions In this section, we will define non-conforming Crouzeix-Raviart type functions which are supported either on one single tetrahedron or on two tetrahedrons which share a common facet. As a prerequisite, we study in §5.4.1 piecewise orthogonal polynomials on triangle stars, i.e., on a collection of triangles which share a common vertex and cover a neighborhood of this vertex (see Notation 27). We will derive conditions such that these functions are continuous across common edges and determine the dimension of the resulting space. This allows us to determine the non-conforming Courzeix-Raviart basis functions which are either supported on a single tetrahedron (see §5.4.2) or on two adjacent tetrahedrons (see §5.4.3) by "closing" triangle stars either by a single triangle or another triangle star. Orthogonal Polynomials on Triangle Stars The construction of the functions B K,nc p,k and B T,nc p,k as in ( 20) and ( 24) requires some results of continuous, piecewise orthogonal polynomials on triangle stars which we provide in this section. Notation 27 A subset C ⊂ Ω is a triangle star if C is the union of some, say m C ≥ 3, triangles T ∈ F C ⊂ F, i.e., C = T ∈F C T and there exists some vertex V C ∈ V such that V C is a vertex of T ∀T ∈ F C , ∃ a continuous, piecewise affine mapping χ : D mC → C such that χ (0) = V C . (58) Here, D k denotes the regular closed k-gon (in R 2 ). For a triangle star C, we define P ⊥ p,p-1 (C) := u ∈ C 0 (C) | ∀T ∈ F C : u| T ∈ P ⊥ p,p-1 (T ) . In the next step, we will explicitly characterize the space P ⊥ p,p-1 (C) by a set of basis functions. Set A := V C (cf. 
(58)) and pick an outer vertex in F C , denote it by A 1 , and number the remaining vertices A 2 , . . . , A mC in F C counterclockwise. We use the cyclic numbering convention A mC +1 := A 1 and also for similar quantities. For 1 ≤ ℓ ≤ m C , let e ℓ := [A, A ℓ ] be the straight line (convex hull) between and including A, A ℓ . Let T ℓ ∈ F C be the triangle with vertices A, A ℓ , A ℓ+1 . Then we choose the affine pullbacks to the reference element T by χ ℓ (x 1 , x 2 ) := A + x 1 (A ℓ -A) + x 2 (A ℓ+1 -A) if ℓ is odd, A + x 1 (A ℓ+1 -A) + x 2 (A ℓ -A) if ℓ is even. In this way, the common edges e ℓ are parametrized by χ ℓ-1 (t, 0) = χ ℓ (t, 0) if 3 ≤ ℓ ≤ m C is odd and by χ ℓ-1 (0, t) = χ ℓ (0, t) if 2 ≤ ℓ ≤ m C is even. The final edge e 1 is parametrized by χ 1 (t, 0) = χ m C (t, 0) if m C is even and by χ 1 (t, 0) = χ mC (0, t) (with interchanged arguments!) otherwise. We introduce the set R p,C := {0, . . . , p} if m C is even, 2ℓ : 0 ≤ ℓ ≤ * p 2 + if m C is odd and define the functions (cf. ( 49), ( 55), (57)) b C p,k T ℓ := b p,k • χ -1 ℓ , ∀k ∈ R p,C . (59) Lemma 28 For a triangle star C, a basis for P ⊥ p,p-1 (C) is given by b From Lemma 12 we conclude that the continuity across such edges is equivalent to C p,k , k ∈ R p,C . Further dim P ⊥ p,p-1 (C) = p + 1 if m C is even, * p 2 + + 1 if m C is odd. ( 60 α (ℓ-1) p,k = α (ℓ) p,k ∀0 ≤ k ≤ p. ( 61 ) Continuity across e ℓ for even 2 ≤ ℓ ≤ m C . Note that χ 2 (0, t) = χ 3 (0, t). Taking into account (49), ( 55), (57) we see that the continuity across e ℓ is equivalent to p k=0 α (2) p,k b II p,k = p k=0 α (3) p,k b II p,k . From Lemma 12 we conclude that the continuity across e ℓ for even 2 ≤ ℓ ≤ m C is again equivalent to α (ℓ-1) p,k = α (ℓ) p,k ∀0 ≤ k ≤ p. (62) Continuity across e 1 For even m C the previous argument also applies for the edge e 1 and the functions b C p,k , 0 ≤ k ≤ p, are continuous across e 1 . For odd m C , note that χ 1 (t, 0) = χ mC (0, t). Taking into account (49), (55), (57) we see that the continuity across e 1 is equivalent to Using the symmetry relation (37) we conclude that this is equivalent to p k=0 α (1) p,k b I p,k = p k=0 α (mC ) p,k (-1) k b I p,k . From Lemma 12 we conclude that this, in turn, is equivalent to α (1) p,k = α (m C ) p,k k is even, α (1) p,k = -α (m C ) p,k k is odd. ( 63 ) From the above reasoning, the continuity of b C p,k across e 1 follows if α In this section, we will prove that S p K,nc (cf. (20)) satisfies S p K,nc ⊕ S p K,c = S p K := u ∈ S p G : supp u ⊂ K , where S p G is defined in (4) and, moreover, that the functions B K,nc p,k , k = 0, 1, . . . , d triv (p) -1, as in ( 18), (20) form a basis of S p K,nc . in S p K1,nc ⊕ S p K2,nc . In view of the direct sum in (67) we may thus assume that the functions in Sp T,nc are continuous in ω T . To finally arrive at a direct decomposition of the space in the right-hand side of (67) we have to split the spaces P ⊥ p,p-1 (C i ) into a direct sum of the spaces of totally symmetric orthogonal polynomials and the spaces introduced in Definition 25 and glue them together in a continuous way. We introduce the functions for the definition of S p T,nc . The resulting non-conforming facet-oriented space S p T,nc was introduced in Definition 7 and Sp T,nc can be chosen to be S p T,nc . Proposition 30 For any u ∈ S p T,nc , the following implication holds u| T ∈ S p T,nc T ∩ P ⊥ p,p-1 (T ) =⇒ u = 0. Proof. Assume there exists u ∈ S p T,nc with u| T ∈ S p T,nc T ∩P ⊥ p,p-1 (T ). Let K be a simplex adjacent to T . 
Then u K = u| K satisfies u K | T ′ ∈ P ⊥ p,p-1 (T ′ ) for all T ′ ⊂ ∂K and, thus, u K ∈ S p K,nc . Since S p K,nc T ′ ∩ S p T,nc T ′ = {0} for T ′ ∈ ∂K\ refl p,k (x 1 , 1 -x 1 ) is invariant under x 1 → 1 -x 1 . For four non-coplanar points A 0 , A 1 , A 2 , A 3 let K denote the tetrahedron with these vertices. For any k such that 0 ≤ k ≤ p-1 3 define a piecewise polynomial on the faces of K as follows: choose a local (x 1 , x 2 )-coordinate system for A 0 A 1 A 2 so that the respective coordinates are (0, 0) , (1, 0) , (0, 1), and define Q k is continuous at the edges A 0 A 1 , A 0 A 2 , and A 0 A 3 . The values at the boundary of the triangle star equal b refl p,k (x 1 , 1x 1 ); note the symmetry and thus the orientation of the coordinates on the edges A 1 A 2 , A 2 A 3 , A 3 A 1 is immaterial. The value of Q (0) k on the triangle A 1 A 2 A 3 is taken to be a degree p polynomial, totally symmetric, with values agreeing with b refl p,k (x 1 , 1x 1 ) on each edge. Similarly Q (1) k , Q (2) k , Q (3) k are defined by taking A 1 , A 2 , A 3 as the center of the construction, respectively. Theorem 31 a) The functions Q (i) k , 0 ≤ k ≤ d refl (p) -1, i = 0, 1, 2, 3 are linearly independent. b) Property (71) holds. A Basis for Non-Conforming Crouzeix-Raviart Finite Elements We have defined conforming and non-conforming sets of functions which are spanned by functions with local support. In this section, we will investigate the linear independence of these functions. We introduce the following spaces S p sym,nc := is not direct. The sum Sp G,c ⊕ S p sym,nc ⊕ S p,0 refl,nc (74) is direct. Proof. Part 1. We prove that the sum S p sym,nc ⊕ S p refl,nc is direct. From Proposition 30 we know that the sum S p T,nc T , 0 ≤ k ≤ d refl (p) -1, are linearly independent and belong to P p-1 (T ). We define the functionals ⊕ P ⊥ p,p-1 (T ) is direct. Let Π T : L 2 (T ) → P p-1 ( J T p,k (w) := T wq T p,k 0 ≤ k ≤ d refl (p) -1. Next we consider a general linear combination and show that the condition K⊂G dtriv(p)-1 i=0 α K i B K,nc p,i + K⊂G T ′ ⊂∂K d refl (p)-1 j=0 β T ′ j B T ′ ,nc p,j ! = 0 (76) implies that all coefficients are zero. We apply the functionals J T p,k to (76) and use the orthogonality between P ⊥ p,p-1 (T ) and q T p,k to obtain K⊂G T ′ ⊂∂K d refl (p)-1 j=0 β T ′ j J T p,k B T ′ ,nc p,j ! = 0. ( 77 ) For T ′ = T it holds J T p,k B T ′ ,nc p,i = 0 since B T ′ ,nc p,i K T is an orthogonal polynomial. Thus, equation (77) is equivalent to drefl(p)-1 j=0 β T j J T p,k B T,nc p,j ! = 0. (78) The matrix J T p,k B T,nc p,j d refl (p)-1 k,j=0 is regular because J T p,k B T,nc p,j = T B T,nc p,j q T p,k = T B T,nc p,j Π T B T,nc p,k T = T B T,nc p,j B T,nc p,k and B T,nc p,k T k are linearly independent. Hence we conclude from (78) that all coefficients β T j are zero and the condition (76) reduces to K⊂G d triv (p)-1 i=0 α K i B K,nc p,i ! = 0. The left-hand side is a piecewise continuous function so that the condition is equivalent to dtriv(p)-1 i=0 α K i B K,nc p,i ! where χ i : T → T i are affine pullbacks to the reference triangle such that χ i (0) = A 0 . This implies that the functions u i at A 0 have the same value (say w 0 ) and, from the condition u refl (A 0 ) = 3w 0 = 0, we conclude that u i (A 0 ) = 0. The values of u i at the vertex A i of K (which is opposite to T i ) also coincide and we denote this value by v 0 . Since u refl | T = 0 it holds u refl (A i ) = 2w 0 + v 0 = 0. From w 0 = 0 we conclude that also v 0 = 0. 
Let χ i,T0 : T → T 0 denote an affine pullback with the property χ i,T0 (0) = A i . Hence, u i := u i | T 0 • χ -1 i,T0 ∈ span b refl p,0 (80) with values zero at the vertices of T . Note that b p,0 (0, 0) = (-1) p (p + 1) and b p,0 (1, 0) = b p,0 (0, 1) = 1. The vertex properties (81) along the definition of b refl p,k (cf. ( 51)) imply that b refl p,0 (1, 0) = b refl p,0 (0, 1) = 1 3 (1 -(-1) p (p + 1)) = c p , (82) b refl p,0 (0, 0) = -2b refl p,0 (1, 0) . Since c p = 0 for p ≥ 1 we conclude that u i = 0 holds. Relation (80) implies u i | T0 = 0 and thus u i = 0. From u refl | T = 3 i=1 u i | T we deduce that u refl | K = 0. The Cases b.1-.3 allow to proceed with the same induction argument as for Case a and u refl = 0 follows by induction. Part 3. An inspection of Part 2 shows that, for the proof of Case a, it was never used that the vertexoriented basis functions have been removed from S p G,c and Case a holds verbatim for S p G,c . This implies that the first sum in (73) is direct. Part 4. The fact that the sum S p G,c + S p refl,nc is not direct is postponed to Proposition 34. Proposition 34 For any vertex V ∈V Ω it holds B G p,V ∈ S p sym,nc ⊕ S p,0 refl,nc ⊕ Sp G,c . Proof. We will show the stronger statement B G p,V ∈ S p,0 refl,nc ⊕ Sp G,c . It suffices to construct a continuous function u V ∈ S p refl,nc which coincides with B G p,V at all vertices V ′ ∈ V and vanishes at ∂Ω; then, B G p,V -u V ∈ Sp G,c and the assertion follows. Recall the known values of b refl p,0 at the vertices of the reference triangle and the definition of c p as in (82). Let K ∈ G be a tetrahedron with V as a vertex. The facets of K are denoted by T i , 0 ≤ i ≤ 3, and the vertex which is opposite to T i is denoted by A i . As a convention we assume that A 0 = V. For every T i , 1 ≤ i ≤ 3, we define the function u Ti ∈ S p Ti,nc by setting (cf. (56)) u Ti | T0 = b refl p,0 • χ -1 Ai,T0 , where χ Ai,T0 : T → T 0 is an affine pullback which satisfies χ Ai,T0 (0) = A i . (It is easy to see that the definition of u T i is independent of the side of T i , where the tetrahedron K is located.) From ( 51) and (53) we conclude that 3 i=1 u Ti T0 = 0 holds. We proceed in the same way for all tetrahedrons K ∈ G V (cf. ( 9)). This implies that ũV := T ∈FΩ V∈T u T (83) vanishes at Ω\ • ω V (cf. ( 9)). By construction the function ũV is continuous. At V, the function u T i has the value (cf. (82)) u Ti (V) = c p so that ũV (V) = Cc p , where C is the number of terms in the sum (83). Since c p > 0 for all p ≥ 1, the function u V := 1 Ccp ũV is well defined and has the desired properties. Remark 35 We have seen that the extension of the basis functions of S p G,c by the basis functions of S p refl,nc leads to linearly depending functions. On the other hand, if the basis functions of the subspace S p,0 refl,nc are added and the vertex-oriented basis functions in S p G,c are simply removed, one arrives at a set a linear independent functions which span a larger space than S p G,c . Note that S p,0 refl,nc = S p refl,nc for p = 1, 2, 3. One could add more basis functions from S p refl,nc but then has to remove further basis functions from Sp G,c or formulate side constraints in order to obtain a set of linearly independent functions. We finish this section by an example which shows that there exist meshes with fairly special topology, where the inclusion S p G,c + S p sym,nc + S p refl,nc ⊂ S p G (84) is strict. 
We emphasize that the left-hand side in (84), for p ≥ 4, defines a larger space than the space in (75) since it contains all non-conforming functions of reflection type. Example 36 Let us consider the octahedron Ω with vertices A ± := (0, 0, ±1) ⊺ and A 1 := (1, 0, 0) ⊺ , A 2 := (0, 1, 0) ⊺ , A 3 := (-1, 0, 0) ⊺ , A 4 := (0, -1, 0) ⊺ . Ω is subdivided into a mesh G := {K i : 1 ≤ i ≤ 8} consisting of eight congruent tetrahedrons sharing the origin 0 as a common vertex. The six vertices at ∂Ω have the special topological property that each one belongs to exactly four surface facets. Note that the space defined by the left-hand side of (84) does not contain functions whose restriction to a surface facet, say T , belongs to the τ sign component of P ⊥ n,n-1 (T ). Hence, the inclusion in ( 84) is strict if we identify a function in S p G whose restriction to some surface facet is an orthogonal polynomial of "sign type". Let q = 0 be a polynomial which belongs to the τ sign component of P ⊥ n,n-1 (T ) on the reference element. Denote the (eight) facet on ∂Ω with the vertices A ± , A i , A i+1 by T ± i for 1 i ≤ 4 (with cyclic numbering convention) and choose affine pullbacks χ ±,i : T → T ± i as χ ±,i (x) := A ± + x 1 (A i -A ± ) + x 2 (A i+1 -A ± ). Then, it is easy to verify (use Lemma 28 with even m C ) that the function q : ∂Ω → R, defined by q| T ± i := q • χ -1 ±,i is continuous on ∂Ω. Hence the "finite element extension" to the interior of Ω via Q := N∈N p ∩∂Ω q (N) B G p,N defines a function in S p G which is not in the space defined by the left-hand side of (84). We state in passing that the space S p G does not contain any function whose restriction to a boundary facet, say T , belongs to the τ sign component of P ⊥ p,p-1 (T ) if there exists at least one surface vertex which belongs to an odd number of surface facets. In this sense, the topological situation considered in this example is fairly special. Conclusion In this article we developed explicit representation of a local basis for non-conforming finite elements of the Crouzeix-Raviart type. As a model problem we have considered Poisson-type equations in three-dimensional domains; however, this approach is by no means limited to this model problem. Using theoretical conditions in the spirit of the second Strang lemma, we have derived conforming and non-conforming finite element spaces of arbitrary order. For these spaces, we also derived sets of local basis functions. To the best of our knowledge, such explicit representation for general polynomial order p are not available in the existing literature. The derivation requires some deeper tools from orthogonal polynomials of triangles, in particular, the splitting of these polynomials into three irreducible irreducible S 3 modules. Based on these orthogonal polynomials, simplex-and facet-oriented non-conforming basis functions are defined. There are two types of non-conforming basis functions: those whose supports consist of one tetrahedron and those whose supports consist of two adjacent tetrahedrons. The first type can be simply added to the conforming hp basis functions. It is important to note that the span of the functions of the second type contains also conforming functions and one has to remove some conforming functions in order to obtain a linearly independent set of functions. 
We have proposed a non-conforming space which consists of a) all basis functions of the first type and b) a reduced set of basis functions of the second type and c) of the conforming basis functions without the vertex-oriented ones. This leads to a set of linearly independent functions and is in analogy to the well known lowest order Crouzeix-Raviart element. It is interesting to compare these results with high-order Crouzeix-Raviart finite elements for the twodimensional case which have been presented in [START_REF] Ciarlet | Intrinsic finite element methods for the computation of fluxes for Poisson's equation[END_REF]. Facets T of tetrahedrons in 3D correspond to edges E of triangles in 2D. As a consequence the dimension of the space of orthogonal polynomials P ⊥ p,p-1 (E) equals one. For even degree p, one has only non-conforming basis functions of "symmetric" type (which are supported on a single triangle) and for odd degree p, one has only non-conforming basis functions of "reflection" type (which are supported on two adjacent triangles). It turns out that adding the non conforming symmetric basis function to the conforming hp finite element space leads to a set of linearly independent functions which is the analogue of the first sum in (73). If the non-conforming basis functions of reflection type are added, the Figure 1 : 1 Figure 1: Symmetric orthogonal polynomials on the reference triangle and corresponding tetrahedronsupported non-conforming basis functions. Example 9 9 The lowest order of p such that d refl (p) ≥ 1 is p = 1. In this case, we get d refl (p) = 1. In Figure2the function b refl p,k and corresponding basis functions B T,nc p,k are depicted for (p, k) ∈ {(1, 0) , (2, 0) , (4, 0) , (4, 1)}. Figure 2 : 2 Figure 2: Orthogonal polynomials of reflection type and corresponding non-conforming basis functions which are supported on two adjacent tetrahedrons. The common facet is horizontal and the two tetrahedrons are on top of each other. Lemma 12 12 For any Z ∈ {I, II, III}, each of the systems b Z n,k n k=0 , form a basis of P n ([0, 1]). refl) j are S 3 - 3 irreducible and realizations of the representations τ triv , τ sign , τ refl respectively. Let d triv (n) , d sign (n) , d refl (n) denote the respective multiplicities, so that d triv (n) + d sign (n) + 2d refl (n) = n + 1. The case n even or odd are handled separately. If n = 2m is even then the number of eigenvectors of R having -1 as eigenvalue equals m (the cardinality of {1, 3, 5, . . . , 2m -1}). The same property holds for M since the eigenvectors of M in the basis x 2m (x -1)2m-j are explicitly given by x 2m-2ℓ (x -1) 2ℓx 2ℓ (x -1) 2m-2ℓ : 0 ≤ ℓ ≤ m . Each E (refl) j contains one (-1)-eigenvector of χ {1,0,2}and one of χ {0,2,1} and each E (sign) j consists of one (-1)-eigenvector of χ {0,2,1} . This gives the equationd refl (n) + d sign (n) = m. Each E (refl) jcontains one (+1)-eigenvector of χ {1,0,2} and one of χ {0,2,1} and each E (triv) j consists of one (+1)-eigenvector of χ {0,2,1} . There are m + 1 eigenvectors with eigenvalue 1 of each of χ {1,0,2} and χ {0,2,1} thus d refl (n) + d triv (n) = m + 1. Thus any totally symmetric polynomial on T is a linear combination of e a 2 e b 3 with uniquely determined coefficients. The number of linearly independent totally symmetric polynomials in 0 T equals the number of solutions of 0 ≤ 2a + 3b ≤ n with a, b = 0, 1, 2, . . .. As a consequence d triv (n) = card {(a, b) : 2a + 3b = n}. 
Corollary 17 2 0≤j≤n/ 2 M 2 0≤j≤ 17222 For 0 ≤ k ≤ n 2 each polynomial r n,2k := b n,2j + b n,2k is totally symmetric and for 0 ≤ k ≤ n-1 2 each polynomial r n,2k+1 =b n,2j+1 + b n,2k+1 satisfies Mp = -p = Rp (the sign representation). Proof. The pattern of zeroes in " M (n) i,j # shows that r n,2k = (M R + RM + I) b n,2k ∈ span {b n,2j } and thus satisfies Rr n,2k = r n,2k ; combined with RM r n,2k = r n,2k this shows r n,2k is totally symmetric. A similar argument applies to (M R + RM + I) b n,2k+1 . Theorem 18 The functions b sym n,k , 0 ≤ k ≤ d triv (n) -1, as in (17) form a basis for the totally symmetric polynomials in P ⊥ n,n-1 T . 1 3 1 3 11 polynomials Πb n,2k : 0 ≤ k ≤ n-are linearly independent. b. The set RM Πb n,2k , M RΠb n,2k : 0 ≤ k ≤ n-is linearly independent and defines a basis for the τ refl component of P ⊥ n,n-1 T . Proof. In general Πz a z b = z a z b if a-b ≡ 1, 2 mod 3 and Πz a z b = 0 if a-b ≡ 0 mod 3. Expand the polynomials w k (z, z) := Π (z + z) n-3k z 3 + z 3 k by the binomial theorem to obtain ) Proof. We show that b C p,k k∈R p,C is a basis of P ⊥ p,p-1 (C) and the dimension formula. Continuity across e ℓ for odd 3 ≤ ℓ ≤ m C . The definition of the lifted orthogonal polynomials (see (49), (55), (57)) implies that the continuity across e ℓ for odd 3 ≤ ℓ ≤ m C is equivalent to = 0 for odd k and all 1 ≤ ℓ ≤ m C . The proof of the dimension formula (60) is trivial. 5.4.2 A Basis for the Symmetric Non-Conforming Space S p K,nc , 0 , i = 1 , 2 , 012 ≤ k ≤ d triv (p) -1, with b ∂Ki,sym p,k as in (65) and define b Ci,refl p,k , 0 ≤ k ≤ d refl (p) -1, piecewise by b Ci,refl p,k T ′ := b Ai,T ′ p,k for T ′ ⊂ C i with b Ai,T ′ p,k as in (56). The mirror symmetry of b Ai,T ′ p,k with respect to the angular bisector in T ′ through A i implies the continuity of b Ci,refl p,k . Hence, P ⊥ p,p-1 (C i ) = span b Ci,sym p,k Ci : 0 ≤ k ≤ d triv (p) -1 ⊕ span b Ci,refl p,k : 0 ≤ k ≤ d refl (p) -1 . (70) Since the traces of b Ci,sym p,k and b Ci,refl p,k at ∂T are continuous and are, from both sides, the same linear combinations of edge-wise Legendre polynomials of even degree, the gluing b ∂ωT ,defines continuous functions on ∂ω T . Since the space S p T,nc must satisfy a direct sum decomposition (cf. (67)), it suffices to consider the functions b ∂ω T ,refl p,k •T 1 3 1 we conclude that u K = 0. Note that Definition 7 and Proposition 30 neither imply a priori that the functions B T,nc p,k K , ∀T ⊂ ∂K, k = 0, . . . , d refl (p) -1 are linearly independent nor that ∀T ⊂ ∂K it holds T ′ ⊂C B T ′ ,nc p,m T = P ⊥,refl p,p-1 (T ) for the triangle star C = ∂K\ • T (71) holds. These properties will be proved next. Recall the projection Π = 1 3 (2I -M R -RM) from Proposition 21. We showed (Theorem 23.a) that b refl p,k : 0 ≤ k ≤ p-is linearly independent, where b refl p,k := Πb p,2k . Additionally Rb refl p,k = b refl p,k which implies b refl p,k (0, x 1 ) = b refl p,k (x 1 , 0), and the restriction x 1 -→ b k on the facet equal to b refl p,k . Similarly define Q (0) k on A 0 A 2 A 3 and A 0 A 3 A 1 (with analogously chosen local (x 1 , x 2 )-coordinate systems), by the property b refl p,k (0, x 1 ) = b refl p,k (x 1 , 0). Q T ) denote the L 2 (T ) orthogonal projection. Since P p-1 (T ) is the orthogonal complement of P ⊥ p,p-1 (T ) in P p (T ) and since P ⊥ p,p-1 (T ) ∩ S p T,nc T = {0}, the restricted mapping Π T : S p T,nc T → P p-1 (T ) is injective and the functions q T p,k := Π T B T,nc p,k T The superscript "refl" is a shorthand for "reflection" and explained in Section 5.3.1. 
where P E 2k is the Legendre polynomial of even degree 2k scaled to the edge E with endpoint values +1 and symmetry with respect to the midpoint of E. Hence, we are looking for orthogonal polynomials P to the total simplex K by polynomial extension (cf. ( 18), ( 19)) These functions are the same as those introduced in Definition 5. The above reasoning leads to the following Proposition. Proposition 29 For a simplex K, the space of non-conforming, simplex-supported Crouzeix-Raviart finite elements can be chosen as in (20) and the functions B K,nc p,k , 0 ≤ k ≤ d triv (p) -1 are linearly independent. A Basis for S p T,nc Let T ∈ F Ω be an inner facet and ) with the convention that the unit normal n T points into K 2 . In this section, we will prove that a space Sp T,nc which satisfies can be chosen as Sp T,nc := S p T,nc (cf. (25)) and, moreover, that the functions B T,nc p,k , k = 0, 1, . . . , d refl (p) -1, as in (24) form a basis of S p T,nc . denote the triangle star (cf. Notation 27) formed by the three remaining triangles of ∂K i . We conclude from Lemma 28 that a basis for Since any function in S p T is continuous on C i , we conclude from Lemma 28 (with with b ∂T p,2k as in (64). To identify a space Sp T,nc which satisfies (67) we consider the jump condition in (68) restricted to the boundary ∂T . The symmetry of the functions b ∂T p,2k implies that [u] T ∈ P ⊥,sym p,p-1 (T ), i.e., there is a function q 1 ∈ S p K1,nc (see (20)) such that [u] T = q 1 | T and ũ, defined by ũ| K1 = u 1 + q 1 and ũ| K2 = u 2 , is continuous across T . On the other hand, all functions u ∈ S p T whose restrictions u| ωT are discontinuous can be found The proof involves a series of steps. The argument will depend on the values of the functions on the three rays A 0 A 1 , A 0 A 2 , A 0 A 3 , each one of them is given coordinates t so that t = 0 at A 0 and t = 1 at the other end-point. For a fixed k let q Lemma 32 Suppose 0 ≤ k ≤ p-1 3 and 0 ≤ t ≤ 1 then q (t) + q (t) + , q (t) = 0. Proof. The actions of RM and MR on polynomials k to the values on the ray is constructed taking the origin at A 1 and because of the reverse orientation of the ray we see that the value of k is given by q. The value of k on the ray A 0 A 2 is , q (by the symmetry of , q the orientation of the ray does not matter). The other functions are handled similarly, and the contributions to the three rays are given in this table: We use q k , , q k , q k to denote the polynomials corresponding to b refl p,k . Suppose that the linear combination Evaluate the sum on the three rays to obtain the equations: We used Lemma 32 to eliminate q k from the equations. In Theorem 23.b we showed the linear independence of , and in Lemma 12 that the restriction map f → f (x 1 , 0) is an isomorphism from the orthogonal polynomials P ⊥ p,p-1 to P p ([0, 1]). Thus the projection of the set is also linearly independent, that is, , 3 is a linearly independent set of polynomials on 0 ≤ t ≤ 1. This implies all the coefficients in the above equations vanish: the q k terms show c k,0 = c k,1 = c k,2 = c k,3 and then the , To prove (71) it suffices to transfer the statement to the reference element T . The pullbacks of the restrictions Properties of Non-Conforming Crouzeix-Raviart Finite Elements The and u refl ∈ S p refl,nc . We prove by contradiction that u sym ∈ C 0 (Ω). Assume that u sym / ∈ C 0 (Ω). Then, there exists a facet T ⊂ F Ω such that [u sym ] T = 0. Then, [u refl ] T = -[u sym ] T is a necessary condition for the continuity of u. 
However, [u sym ] T ∈ P ⊥,sym p,p-1 (T ) while [u refl ] T ∈ P ⊥,refl p,p-1 (T ) and there is a contradiction because P ⊥,sym p,p-1 (T ) ∩ P ⊥,refl p,p-1 (T ) = {0}. Hence, u sym ∈ C 0 (Ω) and, in turn, u refl ∈ C 0 (Ω). Since u = 0, at least, one of the functions u sym and u refl must be different from the zero function. Case a. We show u sym = 0 by contradiction: Assume u sym = 0. Then, u sym | T = 0 for all facets T ∈ F. (Proof by contradiction: If u sym | T = 0 for some T ∈ F, we pick some K ∈ F which has T as a facet. Since we have u sym | T ′ = 0 for all facets T ′ of K and u sym | K = 0. Since u sym is continuous in Ω, the restriction u sym | K ′ is zero for any K ′ ∈ G which shares a facet with K. This argument can be applied inductively to show that u sym = 0 in Ω. This is a contradiction.) We pick a boundary facet T ∈ F ∂Ω . The condition u ∈ Sp G,c implies u = 0 on ∂Ω and, in particular, u| T = u sym | T + u refl | T = 0. We use again the argument P ⊥,sym p,p-1 (T ) ∩ P ⊥,refl p,p-1 (T ) = {0} which implies u sym = 0 and this is a contradiction to the assumption u sym = 0. Case b. From Case a we know that u sym = 0, i.e., u refl = u, and it remains to show u refl = 0. The condition u refl ∈ Sp G,c implies u refl | ∂Ω = 0 and u refl (V) = 0 for all vertices V ∈ V. The proof of Case b is similar than the proof of Case a and we start by showing for a tetrahedron, say K, with a facet on the boundary that u refl | K = 0 and employ an induction over adjacent tetrahedrons to prove that u refl = 0 on every tetrahedron in G. We consider a boundary facet T 0 ∈ F ∂Ω with adjacent tetrahedron K ⊂ G. We denote the three other facets of K by T i , 1 ≤ i ≤ 3, and for 0 ≤ i ≤ 3, the vertex of K which is opposite to T i by A i . Case b.1. First we consider the case that there is one and only one other facet, say, T 1 which lies in ∂Ω. The case that there are exactly two other facets which are lying in ∂Ω can be treated in a similar way. Case b.3. Next, we consider the case that Ti,nc . On T we choose a local (x 1 , x 2 )-coordinate system such that A 1 = 0, A 2 = (1, 0) ⊺ , A 3 = (0, 1) ⊺ . From (51) and (53) we conclude that ) and, in turn, that the restrictions u E i of u i to the edge E i = T i ∩ T 0 , 1 ≤ i ≤ 3, are the "same", more precisely, the affine pullbacks of u E i to the interval [0, 1] are the same. From Lemma 13, we obtain that set of vertex-oriented conforming basis functions have to be removed from the conforming space. This is in analogy to the properties (74) and ( 75). Future research is devoted on numerical experiments and the application of these functions to system of equations as, e.g., Stokes equation and the Lamé system. Acknowledgement This work was supported in part by ENSTA, Paris, through a visit of S.A. Sauter during his sabbatical. This support is gratefully acknowledged. A Alternative Sets of "Reflection-type" Basis Functions In this Appendix we define further sets of basis functions for the τ refl component of P ⊥ n,n-1 T -different choices might be preferable for different kinds of applications. All these sets have in common that two vertices of T are special -any basis function is symmetric/skew symmetric with respect to the angular bisector of one of these two vertices. Remark 37 The functions b n,2k can be characterized as the range of I + R. We project these functions onto τ refl , that is, the space E (refl) := {p : RMp + MRp = -p}. Let The range of both is E (refl) . We will show that {T 1 b n,2k , T 2 b n,2k , 0 ≤ k ≤ (n -2) /3} is a basis for E (refl) . 
Previously we showed {RMq k , M Rq k } is a basis, where holds, so the basis is made up out of linear combinations of {T 1 b n,2k , T 2 b n,2k , 0 ≤ k ≤ (n -1) /3}. These can be written as elements of the range of T 1 (I + R) and T 2 (I + R). Different linear combinations will behave differently under the reflections R, M, RM R (that is (x, y) → (y, x), (1xy, y), (x, 1xy) respectively). After some computations we find Any two of these types can be used in producing bases from the b n,2k . Also each pair (first two, second two, third two) are orthogonal to each other. Note R fixes (0, 0) and reflects in the line x = y, M fixes (0, 1), reflects in 2x + y = 1, and RMR fixes (1, 0), reflects in x + 2y = 1. If we allow for a complex valued basis, the three vertices of T can be treated more equally as can be seen from the following remark. Remark 38 The basis functions can be complexified: set ω = e 2π i /3 ; any polynomial in E (refl) can be expressed as p = p 1 + p 2 such that MRp = ωp 1 + ω 2 p 2 (consequently RM p = ω 2 p 1 + ωp 2 ), then This is a basis which behaves similarly at each vertex.
81,089
[ "4372" ]
[ "3316", "127972", "217898" ]
00148826
en
[ "spi" ]
2024/03/04 23:41:48
2007
https://hal.science/hal-00148826/file/IAVSD_06_global_chassis.pdf
Péter Gáspár email: [email protected] Z Szabó J Bokor C Poussot-Vassal O Sename ⋆⋆ L ⋆⋆ ⋆⋆ Dugard Global chassis control using braking and suspension systems Motivation In the current design practice several individual active control mechanisms are applied in road vehicles to solve different control tasks, see e.g. [START_REF] Alleyne | Improved vehicle performance using combined suspension and braking forces[END_REF][START_REF] Hedrick | Brake system modelling, control and integrated brake/throttle switching[END_REF][START_REF] Odenthal | Nonlinear steering and braking control for vehicle rollover avoidance[END_REF][START_REF] Trächtler | Integrated vehicle dynamics control using active brake, steering and suspension systems[END_REF]. As an example, the suspension system is the main tool to achieve comfort and road holding for a vehicle whilst the braking system is the main tool applied in emergency situations. Since there is a certain set of dynamical parameters influenced by both systems, due to the different control goals, the demands for a common set of dynamical parameters might be in conflict if the controllers of these systems are designed independently. This fact might cause a suboptimal actuation, especially in emergencies such as an imminent rollover. For example, the suspension system is usually designed to merely improve passenger comfort and road holding although its action could be used to improve safety [START_REF] Gáspár | The design of an integrated control system in heavy vehicles based on an LPV method[END_REF]. The aim of the global chassis design is to use the influence of the systems in an optimal way, see [START_REF] Gáspár | Active suspension design using the mixed µ synthesis[END_REF][START_REF] Zin | An LP V /H∞ active suspension control for global chassis technology: Design and performance analysis[END_REF]. The goal is to design a controller that uses active suspensions all the time to improve passenger comfort and road holding and it activates the braking system only when the vehicle comes close to rolling over. In extreme situations, such as imminent rollover, the safety requirement overwrites the passenger comfort demand by executing a functional reconfiguration of the control goals by generating a stabilizing moment to balance an overturning moment. This reconfiguration can be achieved by a sufficient balance between the performance requirements imposed on the suspension system. In the presentation an integration of the control of braking and suspension systems is proposed. LPV modeling for control design The model for control design is constructed in a Linear Parameter Varying (LPV) structure that allows us to take into consideration the nonlinear effects in the state space description, thus the model structure is nonlinear in the parameter functions, but linear in the states. In the control design the performance specifications for rollover and suspension problems, and the model uncertainties are taken into consideration. In normal operation suspension control is designed based on a full-car model describing the vertical dynamics and concentrating on passenger comfort and road holding. The state vector includes the the vertical displacement, the pitch angle and the roll angle of the sprung mass, the front and rear displacements of the unsprung masses on both sides and their derivatives. The measured signals are the relative displacements at the front and rear on both sides. 
Since the spring coefficient is a nonlinear function of the relative displacement and the damping coefficient also depends nonlinearly on the relative velocities these parameters are used as the scheduling variables of our LPV model. The performance outputs are the heave acceleration, pitch and roll angle accelerations to achieve passenger comfort and the suspension deflections and tire deflections for road holding. The design for emergency is based on a full-car model describing the yaw and roll dynamics and contains as actuators both the braking and the suspension systems. The state components are the side slip angle of the sprung mass, the yaw rate, the roll angle, the roll rate and the roll angle of the unsprung mass at the front and rear axles. The measured signals are the lateral acceleration, the yaw rate and the roll rate. The forward velocity has a great impact on the evaluation of the dynamics, thus this parameter is chosen as a scheduling variable in our LPV model. The performance demands for control design are the minimization of the lateral acceleration and the lateral load transfers at the front and the rear. In order to monitor emergencies the so-called normalized lateral load transfers R, which are the ratio of lateral load transfers and the mass of the vehicle at the front and rear axles, are introduced. An adaptive observer-based method is proposed to estimate these signals [START_REF] Gáspár | Continuous-time parameter identification using adaptive observers[END_REF]. Integrated control design based on the LPV method The control design is performed in an H ∞ setting where performance requirements are reflected by suitable choices of weighting functions. In an emergency one of the critical performance outputs is the lateral acceleration. A weighting function W a (R), which depends on the parameter R is selected for the lateral acceleration. It is selected to be small when the vehicle is not in an emergency, indicating that the control should not focus on minimizing acceleration. However, W a (R) is selected to be large when R is approaching a critical value, indicating that the control should focus on preventing the rollover. As a result of the weighting strategy, the LPV model of the augmented plant contains additional scheduling variables such as the parameter R. The weighting function W z (R) for the heave displacement and heave acceleration must be selected in a trade-off with the selection of W a (R). The H ∞ controller synthesis extended to LPV systems using a parameter dependent Lyapunov function is based on the algorithm of Wu et al. [START_REF] Wu | Induced L 2 -norm control for LPV systems with bounded parameter variation rates[END_REF]. The control design of the rollover problem results in the stabilizing roll moments at the front and the rear generated by active suspensions and the difference between the braking forces between the left and right-hand sides of the vehicle. A sharing logic is required to distribute the brake forces for wheels to minimize the wear of the tires. The control design of the suspension problem is to generate suspension forces which are modified by the demand of the stabilizing moment during an imminent rollover. The full version of the paper contains all the details concerning the analysis of this design. An illustrative simulation example The operation of the integrated control is illustrated through a double lane changing maneuver based on a model of a real vehicle. 
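Referring back to the weighting strategy described above, the gain of W_a(R) is small in normal cruising and grows as R approaches a critical value. A minimal Python sketch of such a scheduled gain is given below; the breakpoints R_warn and R_crit and the gain levels are illustrative placeholders, since the abstract does not give numerical values.

```python
import numpy as np

def lateral_acc_weight_gain(R, R_warn=0.6, R_crit=0.9, g_low=0.05, g_high=1.0):
    """Gain of W_a(R): small in normal cruising, large near the rollover limit.
    All numerical values here are placeholders, not taken from the paper."""
    R = abs(R)
    if R <= R_warn:
        return g_low
    if R >= R_crit:
        return g_high
    # linear blend between the warning and critical levels of the
    # normalized lateral load transfer
    return g_low + (g_high - g_low) * (R - R_warn) / (R_crit - R_warn)

for R in np.linspace(0.0, 1.0, 11):
    print(f"R = {R:4.1f} -> gain of W_a = {lateral_acc_weight_gain(R):5.2f}")
```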
The time responses of the steering angle, the normalized load transfers at the front and the rear together with their maximum, the lateral acceleration, the roll moments at the front and the rear, and the difference between the braking forces are presented in the figure. When a rollover is imminent, the value of R increases and reaches a lower critical limit (R^1_crit), and suspension forces are generated to create stabilizing moments at the front and the rear of the vehicle. When this dangerous situation persists and R reaches the second critical limit (R^2_crit), the active brake system generates unilateral brake forces in order to reduce the risk of rollover. The detailed analysis of the example is included in the full paper.
7,740
[ "756640", "834135", "1618", "5833" ]
[ "15818", "15818", "15818", "388748", "388748", "388748" ]
00148830
en
[ "spi" ]
2024/03/04 23:41:48
2007
https://hal.science/hal-00148830/file/AAC_06_global_chassis.pdf
Péter Gáspár email: [email protected] Z Szabó J Bokor C Poussot-Vassal O Sename L Dugard TOWARDS GLOBAL CHASSIS CONTROL BY INTEGRATING THE BRAKE AND SUSPENSION SYSTEMS Keywords: LPV modeling and control, performance specifications, uncertainty, safety operation, passenger comfort, automotive A control structure that integrates active suspensions and an active brake is proposed to improve the safety of vehicles. The design is based on an H ∞ control synthesis extended to LPV systems and uses a parameter dependent Lyapunov function. In an emergency, such as an imminent rollover, the safety requirement overwrites the passenger comfort demand by tuning the performance weighting functions associated with the suspension systems. If the emergency persists active braking is applied to reduce the effects of the lateral load transfers and thus the rollover risk. The solution is facilitated by using the actual values of the so-called normalized lateral load transfer as a scheduling variable of the integrated control design. The applicability of the method is demonstrated through a complex simulation example containing vehicle maneuvers. INTRODUCTION These days road vehicles contain several individual active control mechanisms that solve a large number of required control tasks. These control systems contain a lot of hardware components, such as sensors, actuators, communication links, power electronics, switches and micro-processors. In traditional control systems the vehicle functions to be controlled are designed and implemented separately. This means that control hardware is grouped into disjoint subsets with sensor information and control demands handled in parallel processes. However, these approaches can lead to unnecessary hardware redundancy. Al-though in the design of the individual control components only a subset of the full vehicle dynamics is considered these components influence the entire vehicle. Thus in the operation of these autonomous control systems interactions and conflicts may occur that might overwrite the intentions of the designers concerning the individual performance requirements. The aim of the integrated control methodologies is to combine and supervise all controllable subsystems affecting the vehicle dynamic responses in order to ensure the management of resources. The flexibility of the control systems must be improved by using plug-and-play extensibility, see e.g. [START_REF] Gordon | Integrated control methodologies for road vehicles[END_REF]. The central purpose of vehicle control is not only to improve functionality, but also simplify the electric architecture of the vehicle. Complex and overloaded networks are the bottle-neck of functional improvements and high complexity can also cause difficulties in reliability and quality. The solution might be the integration of the high level control logic of subsystems. It enables designers to reduce the number of networks and create a clear-structured vehicle control strategy. Several schemes concerned with the possible active intervention into vehicle dynamics to solve different control tasks have been proposed. These approaches employ active antiroll bars, active steering, active suspensions or active braking, see e.g. 
[START_REF] Alleyne | Nonlinear adaptive control of active suspensions[END_REF][START_REF] Fialho | Design of nonlinear controllers for active vehicle suspensions using parameter-varying control synthesis[END_REF][START_REF] Hedrick | Brake system modelling, control and integrated brake/throttle switching[END_REF][START_REF] Kim | Investigation of robust roll motion control considering varying speed and actuator dynamics[END_REF][START_REF] Nagai | Integrated robust control of active rear wheel steering and direct yaw moment control[END_REF][START_REF] Odenthal | Nonlinear steering and braking control for vehicle rollover avoidance[END_REF][START_REF] Sampson | Active roll control of single unit heavy road vehicles[END_REF][START_REF] Shibahata | Progress and future direction of chassis control technology[END_REF][START_REF] Trächtler | Integrated vehicle dynamics control using active brake, steering and suspension systems[END_REF]. In this paper a control structure that integrates active suspensions and an active brake is proposed to improve the safety of vehicles. The active suspension system is primarily designed to improve passenger comfort, i.e. to reduce the effects of harmful vibrations on the vehicle and passengers. However, the active suspension system is able to generate a stabilizing moment to balance an overturning moment during vehicle maneuvers in order to reduce the rollover risk, (Gáspár and Bokor, 2005). Although the role of the brake is to decelerate the vehicle, if the emergency persists, the effects of the lateral tire forces can be reduced directly by applying unilateral braking and thus reducing the rollover risk (Gáspár et al., 2005;[START_REF] Palkovics | Roll-over prevention system for commercial vehicles[END_REF]. This paper is an extension of the principle of the global chassis control, which has been proposed in [START_REF] Zin | An LPV/H ∞ active suspension control for global chassis technology: Design and performance analysis[END_REF]. The controller uses the actual values of the socalled normalized lateral load transfer R as a scheduling variable of the integrated control design. When a rollover is imminent the values of R increase and reach a lower critical limit, and then suspension forces must be generated to create a moment at the front and the rear to enhance the stability of the vehicle. When this dangerous situation persists and R reaches the upper critical limit the active brake system must generate unilateral brake forces in order to reduce the risk of the rollover. The goal of the control system is to use the active suspension system all the time to improve passenger comfort and road holding and activate the braking system only when the vehicle comes close to rolling over. In an emergency the safety requirement overwrites the passenger comfort demand by tuning the performance weighting functions associated with the suspension systems. Then a functional reconfiguration of the suspension system is carried out in order to generate stabilizing moments to balance an overturning moment during vehicle maneuvers. In this paper the control-oriented model design has been carried out in a Linear Parameter Varying (LPV) framework that allows us to take into consideration the nonlinear effects in the state space description. Thus the model structure is nonlinear in the parameter functions, but it remains linear in the states. 
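As an illustration of the LPV structure just described, with system matrices that depend nonlinearly on a scheduling parameter while the dynamics stay linear in the state, the following sketch evaluates a polytopic model A(ρ) by interpolating between two vertex matrices over the forward-velocity range. The vertex matrices and the velocity range are arbitrary placeholders, not the vehicle matrices derived below.

```python
import numpy as np

A_vertex = {  # A(rho) at the two extreme velocities (placeholder numbers)
    "v_min": np.array([[0.0, 1.0], [-2.0, -1.5]]),
    "v_max": np.array([[0.0, 1.0], [-6.0, -0.5]]),
}
v_min, v_max = 40.0 / 3.6, 120.0 / 3.6   # assumed scheduling range in m/s

def A_of_rho(v):
    """Affine interpolation of the vertex matrices over the velocity range."""
    lam = np.clip((v - v_min) / (v_max - v_min), 0.0, 1.0)
    return (1.0 - lam) * A_vertex["v_min"] + lam * A_vertex["v_max"]

x = np.array([0.1, 0.0])                  # state vector (the model is linear in x)
for v in (v_min, 0.5 * (v_min + v_max), v_max):
    xdot = A_of_rho(v) @ x                # parameter-dependent, state-linear dynamics
    print(f"v = {v:5.1f} m/s -> A(rho) @ x = {xdot}")
```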
In the control design the performance specifications for rollover and suspension problems, and also the model uncertainties are taken into consideration. The design is based on an H ∞ control synthesis extended to LPV systems that use parameter dependent Lyapunov functions, [START_REF] Balas | Theory and application of linear parameter varying control techniques[END_REF][START_REF] Wu | Induced l 2 -norm control for LPV systems with bounded parameter variation rates[END_REF]. The structure of the paper is as follows. After a short introduction in Section 2 the control oriented modeling for rollover prevention and suspension problems is presented. In Section 3 the weighting strategy applied for the parameterdependent LPV control is presented. In Section 4 the operation of the integrated control system is demonstrated through a simulation example. Finally, Section 5 contains some concluding remarks. AN LPV MODELING FOR THE CONTROL DESIGN The combined yaw-roll dynamics of the vehicle is modeled by a three-body system, where m s is the sprung mass, m f and m r are the unsprung masses at the front and at the rear including the wheels and axles and m is the total vehicle mass. , respectively. The front and rear displacements at both sides of the sprung and the unsprung masses are denoted by x 1f l , x 1f r , x 1rl , x 1rr and x 2f l , x 2f r , x 2rl , x 2rr , respectively. In the model, the disturbances w f l , w f r , w rl , w rr are caused by road irregularities. k tf k tr k tr k tf m f r m rr m rl m f l f f r f rr f rl f f l T ' x, φ z, ψ y, θ j E E ' t f E ' t r b a l r b a l f {m s , I x , I y , I z } T c h CG w rr w rl w f l w f r The yaw and roll dynamics of the vehicle is shown in Figure 2. The roll moment of the inertia of the sprung mass and of the yaw-roll product is denoted by I xx and I xz while I yy is the the pitch moment of inertia and I zz is the yaw moment of inertia. The total axle loads are F zl and F zr . The lateral tire forces in the direction of the wheel-ground contact are denoted by F yf and F yr . h is the height of CG of the sprung mass and h uf , h ur are the heights of CG of the unsprung masses, ℓ w is the half of the vehicle width and r is the height of the roll axis from the ground. β denotes the side slip angle of the sprung mass, ψ is the heading angle, φ is the roll angle, ψ denotes the yaw rate and θ the pitch angle. The roll angle of the unsprung mass at the front and at the rear axle are denoted by φ t,f and φ t,r , respectively. δ f is the front wheel steering angle, a y denotes the lateral acceleration and z s is the heave displacement while v stands for the forward velocity. First the modeling for suspension purposes is formalized. The vehicle dynamical model, i.e. 
the heave, pitch and roll dynamics of the sprung mass and the front and rear dynamics of the unsprung masses at both sides of the front and rear, is as follows: ms zs = k f (∆ f l + ∆ f r ) + kr(∆ rl + ∆rr) + b f ( ∆fl + ∆fr ) + br( ∆rl + ∆rr) -f f l -f f r -f rl -frr Iyy θ = k f l f (∆ f l + ∆ f r ) + krlr(∆ rl + ∆rr) + b f l f ( ∆fl + ∆fr ) -brlr( ∆rl + ∆rr) -(f f l + f f r )l f + (f rl + frr)lr Ixx φ = k f ℓw(∆ f l -∆ f r ) + krℓw(∆ rl -∆rr) + b f ℓw( ∆fl -∆fr ) + brℓw( ∆rl -∆rr) -(f f l -f f r )ℓw -(f rl -frr)ℓw m f ẍ2fl = -k f ∆ f l + k tf ∆ wf l + b f ∆fl -f f l m f ẍ2fr = -k f ∆ f r + k tf ∆ wf r + b f ∆fr -f f r mr ẍ2rl = -kr∆ rl + ktr∆ wrl + br ∆rl -f rl mr ẍ2rr = -kr∆rr + ktr∆wrr + br ∆rr -frr with the following notations: with ∆ f l = -x 1f l + x 2f l , ∆ f r = -x 1f r + x 2f r , ∆ rl = -x 1rl + x 2rl , ∆rr = -x 1rr + x 2rr , ∆ wf l = x 2f l -w f l , ∆ wf r = x 2f r -w f r , ∆ wrl = x 2rl -w rl and ∆wrr = x 2rr -wrr. The state space representation of the suspension system is the following: ẋs = A s x s + B 1s d s + B 2s u s , (1) with the state vector x s = x 1 ẋ1 T , where x 1 = z s φ θ x 2f l x 2f r x 2rl x 2rr T . The input signals is u s = f f l f f r f rl f rr T and d s = w f l w f r w rl w rr T is the disturbance. Second, the modeling for the rollover problem is formalized. This structure includes two control mechanisms which generate control inputs: the roll moments between the sprung and unsprung masses, generated by the active suspensions u af , u ar , and the difference in brake forces between the left and right-hand sides of the vehicle ∆F b . The differential equations of the yaw-roll dynamics are formalized: mv( β + ψ) -msh φ = F yf + Fyr -Ixz φ + Izz ψ = F yf l f -Fyrlr + lw∆F b (Ixx+msh 2 ) φ -Ixz ψ = msghφ + msvh( β + ψ) -k f (φ -φ tf ) -b f ( φ -φtf ) -kr(φ -φtr) -br( φ -φtr) + ℓwu af + ℓwuar -rF yf = m f v(r -h uf )( β + ψ) + m uf gh uf φ tf -k tf φ tf + k f (φ -φ tf ) + b f ( φ -φtf ) + ℓwu af -rFyr = mrv(r -hur)( β + ψ) -murghurφtr -ktrφtr + kr(φ -φtr) + br( φ -φtr) + ℓwuar. The lateral tire forces F yf and F yr are approximated linearly to the tire slide slip angles α f and α r , respectively: F yf = µC f α f and F yr = µC r α r , where µ is the side force coefficient and C f and C r are tire side slip constants. At stable driving conditions, the tire side slip angles α f and α r can be approximated as α f = -β + δ f - l f • ψ v and α r = -β + lr• ψ v . The differential equations depend on the forward velocity v of the vehicle nonlinearly. Choosing the forward velocity as a scheduling parameter ρ r = v, an LPV model is constructed. Note, that the side force coefficient is another parameter which varies nonlinearly during operational time. In [START_REF] Gáspár | Side force coefficient estimation for the design of active brake control[END_REF] a method has been proposed for the estimation of this parameter. Hence, it can be considered as a scheduling variable of the LPV model, too. In this paper, for the sake of simplicity, the variation of the side force coefficient is ignored. The equations can be expressed in the state space representation form as: ẋr = A r (ρ r )x r +B 1rv (ρ r )d r + B 2rv (ρ r )u r , (2) where x r = β ψ φ φ φ tf φ tr T is the state vec- tor, u r = ∆F b is the control input while d r = δ f is considered as a disturbance. 
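The linear tire-force approximation used in the yaw-roll model can be written as a few lines of Python; the side-slip formulas are taken directly from the text, while the numerical values of the side force coefficient, the tire constants and the geometric parameters are illustrative placeholders.

```python
import numpy as np

def lateral_tire_forces(beta, psi_dot, delta_f, v,
                        mu=1.0, C_f=9.0e4, C_r=9.0e4, l_f=1.2, l_r=1.5):
    """F_yf = mu*C_f*alpha_f and F_yr = mu*C_r*alpha_r with the linearized
    tire side slip angles alpha_f, alpha_r given in the text.
    Parameter values are placeholders, not the vehicle data of the paper."""
    alpha_f = -beta + delta_f - l_f * psi_dot / v
    alpha_r = -beta + l_r * psi_dot / v
    return mu * C_f * alpha_f, mu * C_r * alpha_r

# example: gentle left turn at 90 km/h
F_yf, F_yr = lateral_tire_forces(beta=0.01, psi_dot=0.15,
                                 delta_f=np.deg2rad(2.0), v=90.0 / 3.6)
print(f"F_yf = {F_yf:8.1f} N,  F_yr = {F_yr:8.1f} N")
```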
In this approach of the rollover problem the active suspensions generate two stabilizing moments at the front and the rear, which can be considered as the effects of the suspension forces u af = (f f lf f r )ℓ w and u ar = (f rl -f rr )ℓ w . The control input provided by the brake system generates a yaw moment, which affects the lateral tire forces directly. The difference between the brake forces ∆F b provided by the compensator is applied to the vehicle: ∆F b = (F brl + d 2 F bf l ) -(F brr + d 1 F bf r ), where d 1 and d 2 are distances, which depend on the steering angle. In the implementation of the controller means that the control action be distributed at the front and the rear wheels at either of the two sides. The reason for distributing the control force between the front and rear wheels is to minimize the wear of the tires. In this case a sharing logic is required which calculates the brake forces for the wheels. INTEGRATED CONTROL DESIGN BASED ON THE LPV METHOD Predicting emergencies by monitoring R Roll stability is achieved by limiting the lateral load transfers for both axles, ∆F zl and ∆F zr , below the level for wheel lift-off. The lateral load transfers are given by ∆F zi = ktiφti lw , where i denotes the front and rear axles. The tire contact force is guaranteed if mg 2 ± ∆F z > 0 for both sides of the vehicle. This requirement leads to the definition of the normalized load transfer, which is the ratio of the lateral load transfers at the front and rear axles: r i = ∆Fzi mig , where m i is the mass of the vehicle in the front and the rear. The scheduling parameter in the LPV model is the maximum value of the normalized load transfer R = max(|r i |). The limit of the cornering condition is reached when the load on the inside wheels has dropped to zero and all the load has been transferred onto the outside wheels. Thus, if the normalized load transfer R takes on the value ±1 then the inner wheels in the bend lift off. This event does not necessary result in the rolling over of the vehicle. However, the aim of the control design is to prevent the rollover in all cases and thus the lift-off of the wheels must also be prevented. Thus, the normalized load transfer is also critical when the vehicle is stable but the tendency of the dynamics is unfavorable in terms of a rollover. An observer design method has been proposed for the estimation of the normalized load transfers, see (Gáspár et al., 2005). In this paper the detection of an imminent rollover is based on the monitoring of the normalized lateral load transfers for both axles. In the control design the actual value of the normalized load transfer is used. In order to make an estimation of the lateral load transfers the roll angles of the unsprung masses φ t,i must be estimated. For this purpose a Luenberger type observer η = (A(ρ) + K(ρ)C)η + B(ρ)u -K(ρ)y (3) is used. The observer is based on the measured signals, a y , ψ and φ, where a y is the lateral acceleration. In order to obtain a quadratically stable observer the LMI (A(ρ)+K(ρ)C) T P +P (A(ρ)+K(ρ)C) < 0 must hold for suitable K(ρ) and P = P T > 0 for all the corner points of the parameter space, see [START_REF] Apkarian | A convex characterization of gain-scheduled H ∞ controllers[END_REF][START_REF] Wu | Induced l 2 -norm control for LPV systems with bounded parameter variation rates[END_REF]. 
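Before turning to the observer LMIs, note that the emergency-detection formulas given at the beginning of this section reduce to a small monitoring routine: ΔF_zi = k_ti φ_ti / l_w, r_i = ΔF_zi/(m_i g) and R = max(|r_f|, |r_r|). A minimal sketch follows; the stiffness, mass and half-track values are placeholders, not the vehicle data of the paper.

```python
G = 9.81  # gravitational acceleration in m/s^2

def normalized_load_transfers(phi_tf, phi_tr,
                              k_tf=4.0e5, k_tr=4.0e5, l_w=0.9,
                              m_f=900.0, m_r=700.0):
    """Return (r_front, r_rear, R) from the estimated unsprung roll angles.
    Parameter values are illustrative placeholders."""
    dFz_f = k_tf * phi_tf / l_w           # lateral load transfer, front axle
    dFz_r = k_tr * phi_tr / l_w           # lateral load transfer, rear axle
    r_f = dFz_f / (m_f * G)               # normalized load transfer, front
    r_r = dFz_r / (m_r * G)               # normalized load transfer, rear
    return r_f, r_r, max(abs(r_f), abs(r_r))

# example: unsprung roll angles estimated by the observer (in rad)
r_f, r_r, R = normalized_load_transfers(phi_tf=0.012, phi_tr=0.010)
print(f"r_f = {r_f:.2f}, r_r = {r_r:.2f}, R = {R:.2f}  (|r_i| -> 1 means wheel lift-off)")
```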
By introducing the auxiliary variable G(ρ) = P K(ρ), the following set of LMIs on the corner points of the parameter space must be solved: A(ρ) T P + P A(ρ) + C T G(ρ) T + G(ρ)C < 0. Weighting strategy for the control design Based on the model of the suspension system a control is designed considering the suspension deflections at the suspension components as measured output signals and u s as the control inputs. The performance outputs for control design are the passenger comfort (i.e. heave displacement and acceleration z a and z d ), the suspension deflections z si = z sf l z sf r z srl z srr and the tire deflection z ti = z tf l z tf r z trl z trr . In an earlier paper of this project the design of a global chassis system is proposed, see [START_REF] Zin | An LPV/H ∞ active suspension control for global chassis technology: Design and performance analysis[END_REF]. Here the suspension forces on the left and right hand sides at the front and rear are designed in the following form: u a = u -b 0 ( żs -żus ) , (4) where b 0 is a damping coefficient and u is the active force. When the value b 0 is selected small the suspension system focuses on passenger comfort, while the system focuses on road holding when value b 0 is selected large. In this paper this experience is exploited when a parameter dependent weighting strategy is applied in the design of the suspension system. Figure 3 shows the structure of the active suspension system incorporated into the integrated control. The inputs of the controller are the measured relative displacements and their numerical differentiations. The controller uses the normalized lateral load transfer R and the so-called normalized moment χ = φ az Mact Mmax as scheduling variables. Here φ az =        1 if |R| < R s 1 - |R| -R s R c -R s if R s ≤ |R| ≤ R c 0 if |R| > R c , where R s is a warning level, while R c is a critical value of the admissible normalized lateral load transfer. The value of the damping b 0 is scheduled by the normalized lateral load transfer R. Its value must be selected in such a way that it improves passenger comfort in normal cruising, however, it enhances road holding in an emergency. With this selection the active suspension system focuses on passenger comfort and road holding due to the value of the normalized load transfer. The LPV controller C is designed to meet the same criteria but its scheduling variable also reflects the presence of the moment demand. This is achieved by using a look-up table that encodes the function φ az . - the sprung mass acceleration, the sprung mass displacement, the displacement of the unsprung mass, and the relative displacement between the sprung and unsprung masses. This parameter represents the balance between road holding and passenger comfort. The active suspension of the closed-loop model presents better performances than the passive model. When a small value of the tuning parameter is selected a better ride comfort without the deterioration of road holding or the suspension deflection is achieved. On the other hand, when the value of the tuning parameter increases, passenger comfort deteriorates, while road holding improves. This emphasizes the tradeoff between comfort and road holding and the significance of using b 0 as a varying coefficient. 
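A compact sketch of this scheduling logic is given below: the gain φ_az(R) as defined above, and the force law u_a = u - b_0(ż_s - ż_us) of equation (4) with b_0 scheduled by R. The linear interpolation of b_0 between a "comfort" value and a "road-holding" value, as well as the numerical values of R_s, R_c and b_0, are assumptions for illustration; the paper only states that b_0 should be small in normal cruising and large in an emergency.

```python
def phi_az(R, R_s=0.6, R_c=0.9):
    """Scheduling gain phi_az(R) as defined in the text.
    R_s (warning level) and R_c (critical level) are placeholder values."""
    R = abs(R)
    if R < R_s:
        return 1.0
    if R > R_c:
        return 0.0
    return 1.0 - (R - R_s) / (R_c - R_s)

def scheduled_b0(R, b0_comfort=800.0, b0_roadholding=4000.0, R_s=0.6, R_c=0.9):
    """Damping coefficient scheduled by R (assumed linear blend)."""
    return b0_comfort + (1.0 - phi_az(R, R_s, R_c)) * (b0_roadholding - b0_comfort)

def suspension_force(u_active, dz_s, dz_us, R):
    """Equation (4): u_a = u - b_0 * (dz_s - dz_us), with b_0 = b_0(R)."""
    return u_active - scheduled_b0(R) * (dz_s - dz_us)

for R in (0.2, 0.7, 0.95):
    print(f"R = {R:.2f}: phi_az = {phi_az(R):.2f}, "
          f"u_a = {suspension_force(500.0, 0.3, 0.1, R):7.1f} N")
```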
The weighting functions applied in the active suspension design are the following:                                  W zs (χ) = 3 s/(2πf 1 ) + 1 χ W θ (χ) = 2 s/(2πf 2 ) + 1 χ W φ (χ) = 2 s/(2πf 3 ) + 1 (1 -χ) W u = 10 -2 W zr = 7.10 -2 W dx = 10 5 W dy = 5.10 4 W n = 10 -3 where W zs is shaped in order to reduce bounce amplification of the suspended mass (z s ) between [0, 8]Hz (f 1 = 8Hz), W θ attenuate amplification in low frequency and the frequency peak at 9Hz (f 2 = 2Hz) and W φ reduces the rolling moment especially in low frequency (f 3 = 2Hz). Then W zr , W dx , W dy and W n model ground, roll, pitch disturbances (z r , M dx and M dy ) and measurement noise (n) respectively, and W u is used to limit the control signal. Note, that although the suspension model is a linear time invariant (LTI), the model of the augmented plant is LPV because of the weighting strategy. Thus, the control design is performed in an LPV setting. The control of braking forces are designed in terms of the rollover problem. The measured outputs are the lateral acceleration of the sprung mass, the yaw rate and the roll rate of the sprung mass while u r are the control inputs. The performance outputs for the control design are the lateral acceleration a y , the lateral load transfers at the front and the rear ∆F zf and ∆F zr . The lateral acceleration is given by a y = v β + v Ψ -h Φ. The weighting function for the lateral acceleration is selected in such a way that in the low frequency domain the lateral accelerations of the body must be penalized by a factor of φ ay . W p,ay = φ ay s 2000 + 1 s 12 + 1 , where φ ay =        0 if |R| < R s |R| -R s R c -R s if R s ≤ |R| ≤ R c 1 if |R| > R c , R c defines the critical status when the vehicle is in an emergency and the braking system must be activated. The gain φ ay in the weighting functions is selected as a function of parameter |R| in the following way. In the lower range of |R| the gain φ ay must be small, and in the upper range of |R| the gains must be large. Consequently, the weighting functions must be selected in such a way that they minimize the lateral load transfers in emergencies. In normal cruising the brake is not activated since the weight is small. The weighting function for the lateral loads and the braking forces are the following: W p,F z = diag( 1 7 , 1 5 ) W p,∆F b = 10 -3 φ ay The control design is performed based on an augmented LPV model of the yaw-roll dynamics where two parameters are selected as scheduling variables: the forward velocity and the maximum value of the normalized lateral load transfer either at the rear side or at the front ρ r = v R T . In the design of rollover problem the difference in the braking forces is designed. Based on this fictitious control input the actual control forces at the front and rear on both sides generated in the braking system are calculated. Certainly, different optimization procedures, which distribute the fictitious force between the braking forces can be implemented. However, this problem is not within the scope of the paper. selected R = [0, R s , R c , 1]. A SIMULATION EXAMPLE In the simulation example, a double lane change maneuver is performed. In this maneuver passenger comfort and road holding are guaranteed by the suspension actuators and the rollover is prevented by modifying the operation of the suspension actuators and using an active brake. 
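Before walking through the maneuver, it may help to see the two scheduling gains in executable form. The sketch below implements φ az and φ ay exactly as defined above; the threshold values R s and R c used in the example call are illustrative placeholders, not values identified in the paper.

def phi_az(R, R_s, R_c):
    # suspension-side gain: 1 in normal cruising, fades to 0 as |R| approaches R_c
    R = abs(R)
    if R < R_s:
        return 1.0
    if R <= R_c:
        return 1.0 - (R - R_s) / (R_c - R_s)
    return 0.0

def phi_ay(R, R_s, R_c):
    # brake-side gain: complementary to phi_az, activates near the critical load transfer
    R = abs(R)
    if R < R_s:
        return 0.0
    if R <= R_c:
        return (R - R_s) / (R_c - R_s)
    return 1.0

R_s, R_c = 0.75, 0.9   # assumed warning and critical levels, for illustration only
print(phi_az(0.8, R_s, R_c), phi_ay(0.8, R_s, R_c))   # 0.666..., 0.333...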
When a rollover is imminent the values R increase and reach a lower critical limit (R s ) and suspension forces are generated to create a moment at the front and the rear. When this dangerous situation persists and R reaches the second critical limit (R c ) the active brake system generates unilateral brake forces. The velocity of the vehicle is 90 km/h. The maneuver starts at the 1 st second and at the 2.5 th and the 7 th seconds 6-cm-high bumps on the front wheels disturbs the motion of the vehicle. The steering angle is generated with a ramp signal with 3.5 degrees maximum value and 4 rad/s filtering, which represents the finite bandwidth of the driver. The time responses of the steering angle, the road disturbance, the yaw rate, the roll rate, the lateral acceleration, the heave acceleration on the front-left side, the normalized load transfer at the rear and their maximum, the vehicle velocity, the roll moments at the front and the rear and the braking forces at the front and the rear are presented in Figure 5... Figure 7. The effect of a 6-cm-high bump disturbs heave acceleration at the 2.5 th second. The effect of this disturbance should be reduced by the suspension system, since it improves the passenger comfort and road holding. During the maneuver the lateral acceleration and the roll angles of the unsprung masses increase, thus the normalized load transfer also increases and reaches the critical value R s . Control forces (0.5 kN and 0.5 kN at the front and at the rear, respectively) should also be generated by the suspension forces so that the controller can prevent the rollover of the vehicle. Thus, during the maneuver the suspension system focuses on both the passenger comfort and the roll stability of the vehicle. The control moments are not sufficient to prevent rollovers, since the normalized lateral load transfers have achieved the critical value R c . Thus the brake is also activated and unilateral braking forces (approximately 0.9 kN and 1 kN on the left and the right hand sides in the rear) are generated. As a result the velocity of the vehicle decreases and the normalized lateral load transfers stay below the critical value 1. After the double lane maneuver another 6-cm-high bump disturbs the motion. In this case a large suspension force generated by the suspension actuators is needed to reduce both the magnitude and the duration of the oscillation. In the future it is possible to exploit the balance between the brake and suspension systems to enhance braking. During braking the real path might be significantly different from the desired path due to the brake moment which affects the yaw motion. Thus, the braking maneuver usually requires the drivers intervention. Applying the integrated control, the suspension system is able to focus on the emergency, consequently safety is improved. CONCLUSION In this paper an integrated control structure that uses active suspensions and an active brake is proposed to improve the safety of vehicles. In normal operation the suspension system focuses on passenger comfort and road holding, however, in an emergency the safety requirement overwrites the passenger comfort demand. When the emergency persists, the brake is also activated to reduce the rollover risk. The solution is based on a weighting strategy in which the normalized lateral load transfer is selected as a scheduling variable. The design is based on an H ∞ control synthesis extended to LPV systems that uses a parameter dependent Lyapunov function. 
This control mechanism guarantees the balance between rollover prevention and passenger comfort. The applicability of the method is demonstrated through a complex simulation example containing vehicle maneuvers.
Fig. 1. Vertical dynamics of the full-car model.
The suspension system, which is shown in Figure 1, contains springs, dampers and actuators between the body and the axle on both sides at the front and rear. The suspension stiffnesses, the tire stiffnesses and the suspension dampers at the front and rear are denoted by k f , k r , k tf , k tr , b f , b r , respectively. The front and rear displacements at both sides of the sprung and the unsprung masses are denoted by x 1f l , x 1f r , x 1rl , x 1rr and x 2f l , x 2f r , x 2rl , x 2rr , respectively. In the model,
Fig. 2. Yaw and roll dynamics of the full-car model.
Fig. 3. Logical structure of the suspension controller.
Figure 4 illustrates the effects of the tuning parameters b 0 and χ through the frequency responses of the closed loop system to the disturbances, i.e. the sprung mass acceleration, the sprung mass displacement, the displacement of the unsprung mass, and the relative displacement between the sprung and unsprung masses. This parameter represents the balance between road holding and passenger comfort. The active suspension of the closed-loop model presents better performances than the passive model. When a small value of the tuning parameter is selected a better ride comfort is achieved without the deterioration of road holding or the suspension deflection. On the other hand, when the value of the tuning parameter increases, passenger comfort deteriorates, while road holding improves. This emphasizes the tradeoff between comfort and road holding and the significance of using b 0 as a varying coefficient.
Fig. 4. Frequency responses of the suspension system.
Fig. 5. Time responses in the double lane change maneuver.
Fig. 6. Output signals in the double lane change maneuver.
Fig. 7. Control signals in the double lane change maneuver.
The solution of an LPV problem is governed by the set of infinite dimensional LMIs being satisfied for all ρ ∈ F P , thus it is a convex problem. In practice, this problem is set up by gridding the parameter space and solving the set of LMIs that hold on the subset of F P . If this problem does not have a solution, neither does the original infinite dimensional problem. Even if a solution is found, it does not guarantee that the solution satisfies the original constraints for all ρ. However, this is expected since the matrix functions are continuous with respect to ρ. The number of grid points depends on the nonlinearity and the operation range of the system. For the interconnection structure, H ∞ controllers are synthesized for 7 values of velocity in the range v = [20 km/h, 140 km/h]. The normalized lateral load transfer parameter space is selected as R = [0, R s , R c , 1].
Acknowledgement: This work was supported by the Hungarian National Office for Research and Technology through the project "Advanced Vehicles and Vehicle Control Knowledge Center" (OMFB-01418/2004) and the Hungarian National Science Foundation (OTKA) under the grant T-048482 which are gratefully acknowledged. Dr Gáspár and Dr Szabó were supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.
29,905
[ "756640", "834135", "1618", "5833" ]
[ "15818", "15818", "15818", "388748", "388748", "388748" ]
00148831
en
[ "spi" ]
2024/03/04 23:41:48
2007
https://hal.science/hal-00148831/file/SSSC07_2.pdf
C Poussot-Vassal O Sename L Dugard P Gáspár Z Szabó J Bokor A LPV BASED SEMI-ACTIVE SUSPENSION CONTROL STRATEGY Keywords: Semi-active suspension, Linear Parameter Varying (LPV), H ∞ Control, Linear Matrix Inequality (LMI) In this paper we consider the design and analysis of a semi-active suspension controller. In the recent years different kinds of semi-active control strategies, like two-state Skyhook, LQ-clipped or model-predictive, have already been developed in the literature. In this paper we introduce a new semi-active suspension control strategy that achieves a priori limitations of a semi-active suspension actuator (dissipative constraint and force bounds) through the Linear Parameter Varying (LPV) theory. This new approach exhibits some interesting advantages compared to already existing methods (implementation, performance flexibility, robustness etc.). Both industrial criterion evaluation and simulations on nonlinear quarter vehicle model are performed to show the efficiency of the method and to validate the theoretical approach. 1. INTRODUCTION Suspension system's aim is to isolate passenger from road irregularities keeping a good road holding behavior. Industrial and scientist research is very active in the automotive field and suspension control and design is an important aspect for comfort and security achievements. In the last decade, many different active suspension system control approaches were developed: Linear Quadratic (e.g. [START_REF] Hrovat | Survey of advanced suspension developments and related optimal control application[END_REF], Skyhook (e.g. [START_REF] Poussot-Vassal | Optimal skyhook control for semi-active suspensions[END_REF], that suits well to improve comfort. Robust Linear Time Invariant (LTI) H ∞ (e.g. [START_REF] Rossi | H ∞ control of automotive semi-active suspensions[END_REF]) can achieve better results improving both comfort and road holding but which is limited to fixed performances (due to fixed weights), Mixed LTI H ∞ /H 2 (see [START_REF] Gáspár | Iterative model-based mixed H 2 /H ∞ control design[END_REF][START_REF] Lu | Multiobjective optimal suspension control to achieve integrated ride and handling performance[END_REF][START_REF] Takahashi | A multiobjective approach for H 2 and H ∞ active suspension control[END_REF] can improve H ∞ control reducing signals energy. Recently, Linear Parameter Varying (LPV) (e.g. [START_REF] Fialho | Road adaptive active suspension design using linear parameter varying gain scheduling[END_REF]Balas 2002, Gáspár et al. 2004), that can either adapt the performances according to measured signals (road, deflection, etc.) or improve robustness, taking care of the nonlinearities (see [START_REF] Zin | An LPV/H ∞ active suspension control for global chassis technology: Design and performance analysis[END_REF]. Most of these controllers are designed and validated assuming that the actuator of the suspension is active. Unfortunately such active actuators are not yet used on a wide range of vehicles because of their inherent cost (e.g. energy, weight, volume, price, etc.) and low performance (e.g. time response); hence, in the industry, semi-active actuators (e.g. controlled dampers) are often preferred. The twostate skyhook control is an on/off strategy that switches between high and low damping coefficient in order to achieve body comfort specifications. Clipped approaches leads to unpredictable behaviors and reduce the achievable performances. 
In Giorgetti et al.'s (2006) article, authors compare different semi-active strategies based on optimal control and introduce an hybrid model predictive optimal controller. The resulting control law is implemented by an hybrid controller that switches between a large number (function of the prediction horizon) of controllers and requires a full state measurement. In Canale et al.'s (2006) paper, another model-predictive semi-active suspension is proposed and results in good performances compared to the Skyhook and LQ-clipped approaches but requires an on-line "fast" optimization procedure. As it involves optimal control, full state measurement and a good knowledge of the model parameters are necessary. The contribution of this paper is to introduce a new methodology to design a semi-active suspension controller through the LPV technique. The main interest of such an approach is that it a priori fulfills the dissipative actuator constraint and allows the designer to build a controller in the robust framework (H ∞ , H 2 , Mixed etc...). As long as the new method does not involves any on-line optimization process and only requires a single sensor, it could be an interesting algorithm from the applications point of view. The paper is organized as follows: in Section 2 we both introduce linear and nonlinear quarter car models used for synthesis and validation. In Section 3, the involved semi-active suspension actuator system (based on real experimental data) is described. In Section 4 the proposed semi-active LPV/H ∞ control design and its scheduling strategy are presented. In Section 5, both industrial based performance criterion and simulations on a nonlinear quarter vehicle model show the efficiency of the proposed method. Conclusions and perspectives are discussed in Section 6. QUARTER CAR MODEL The simplified quarter vehicle model involved here includes the sprung mass (m s ) and the unsprung mass (m us ) and only catches vertical motions (z s , z us ). As the damping coefficient of the tire is negligible, it is simply modeled by a spring linked to the road (z r ) where a contact point is assumed. The passive suspension, located between m s and m us , is modeled by a damper and a spring as on Figure 1 (left). The nonlinear "Renault Mégane Coupé" based passive model, that will be later used as our reference model (for performance evaluation and comparison with the controlled one), is given by: F c F k k t m s m us z s z us z r F k k t m s m us u > z s z us z r Fig. 1. Passive (left) Controlled (right) quarter car model.    m s zs = -F k (z def ) -F c ( żdef ) m us zus = F k (z def ) + F c ( żdef ) -k t (z us -z r ) z def ∈ z def z def (1) where F k (z def ) and F c ( żdef ) are the nonlinear forces provided by the spring and damper respectively (see dashed curves on Figure 2). In the controlled suspension framework, one considers the model given on Figure 1 (right) and described by,    m s zs = -F k (z def ) + u m us zus = F k (z def ) -u -k t (z us -z r ) z def ∈ z def z def (2) where u is the control input of the system, provided by the considered actuator. Note that in this formulation, the passive damper that appears in equation ( 1) is replaced by an actuator or a controlled damper. SEMI-ACTIVE SUSPENSION ACTUATOR In the previous section, the u control input was introduced to control the quarter car model. Since we focus here on semi-active suspension control, in the sequel, emphasis is put on static performances and structural limitations for the considered semiactive actuator. 
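Since the rest of the design hinges on how the control input u enters model (2), a minimal time-domain sketch of that model is given below before turning to the actuator. The explicit Euler integration, the parameter values and the 3 cm road step are illustrative placeholders and do not reproduce the "Renault Mégane Coupé" data or the authors' simulator.

import numpy as np

ms, mus, k, kt, c = 315.0, 37.5, 29500.0, 210000.0, 1500.0   # placeholder values

def simulate(u_law, road, dt=1e-3, T=3.0):
    # explicit-Euler sketch of the controlled quarter-car model (2)
    zs = zus = vzs = vzus = 0.0
    history = []
    for i in range(int(T / dt)):
        zdef, dzdef = zs - zus, vzs - vzus
        u = u_law(zdef, dzdef)                        # force delivered by the actuator
        Fk = k * zdef                                 # linearized spring force
        azs = (-Fk + u) / ms
        azus = (Fk - u - kt * (zus - road(i * dt))) / mus
        vzs, vzus = vzs + dt * azs, vzus + dt * azus
        zs, zus = zs + dt * vzs, zus + dt * vzus
        history.append(zs)
    return np.array(history)

road = lambda t: 0.03 if t > 0.5 else 0.0             # 3 cm road step input
passive = simulate(lambda zdef, dzdef: -c * dzdef, road)   # u = -c*zdef_dot recovers a passive damper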
Active vs. Semi-active suspension systems Active suspension systems can either store, dissipate or generate energy to the masses (m s and m us ). When semi-active suspension actuators are considered, only energy dissipation is allowed. This difference is usually represented using the Force-Deflection speed space representation given on Figure 3. Hence, a semi-active controller can only deliver forces within the two semi-active quadrants. Note that when a full active actuator is considered, all the four quadrants can be used. 6 - F [N ] żdef [m/s] The Magneto-rheological damper The actuators considered here are called semiactive Continuously Controlled Dampers (CCD). For this kind of controlled damper, it is assumed that all the forces within the allowed semi-active space quadrants can be achieved (Figure 3). In our application, we consider a magneto-rheological (M-R) damper (more and more studied and used in the industry because of its great performances) (see [START_REF] Du | Semiactive H ∞ control with magneto-rheological dampers[END_REF]. Through the change of current input, M-R damper viscosity can be adjusted (i.e. the damping coefficient). The main advantages of such an actuator are that the weight and the volume are similar to classic passive dampers and the range of damping coefficients is nearly infinite within the bounded area. In the meantime, the time response is very fast (about 10ms), compared to an active hydrological actuator. For this purpose, we consider a Delphi M-R damper available in the Tecnologico de Monterrey (see [START_REF] Nino | Applying black box model in the identification of mr damper[END_REF]. To evaluate the upper and lower capacities of this actuator, a sinusoidal disturbance of frequency 0.25Hz is generated at the extremity of the suspension (equivalent to a deflection disturbance) for different magnitudes of current in order to measure the achievable forces of this damper. Figure 4 shows results for two different current values. Note that, due to hysteresis behavior of such actuators (see [START_REF] Du | Semiactive H ∞ control with magneto-rheological dampers[END_REF], some points are in the actives quadrants. of the damper and will be denoted as D. Then, for a given deflection speed ( żdef ), if the controller computes a force F * out of the achievable damper range, the force provided to the system will be F ⊥ the projection of F * on the possible force area (see Figure 5). Semi-active suspension static model 6 - F [N ] żdef [m/s] F * 2 F * 1 F ⊥ 2 F ⊥ 1 ? ? F * 3 = F ⊥ 3 Fig. 5 . Projection principle of the semi-active controlled damper model (F * 1 and F * 2 are out of the allowed area and F * 3 is inside). LPV BASED ROBUST SEMI-ACTIVE SUSPENSION CONTROL DESIGN For controller synthesis purpose we consider the model described in (1) where F k (z def ) and F c ( żdef ) are linear functions (see solid curves on Figure 2). The control law, applied on model (2), is then given by: u = -c.( żdef ) + u H∞ where c is the nominal linearized damping coefficient of the M-R damper and u H∞ the added force provided by the controller. To account for actuator limitations shown in Section 3, we propose a new method based on the LPV polytopic theory using the H ∞ synthesis approach. Frequency based industrial performance criterion In the sequel, we introduce four performance objectives derived from industrial specifications (see [START_REF] Sammier | Skyhook and H ∞ control of active vehicle suspensions: some practical aspects[END_REF]. 
Comfort at high frequencies: vibration isolation between [4 -30]Hz is evaluated by zs /z r . Comfort at low frequencies: vibration isolation between [0 -5]Hz is evaluated by z s /z r . Road holding: wheel are evaluated by z us /z r between [0 -20]Hz. Suspension constraint: suspension deflection is evaluated between [0 -20]Hz by z def /z r . In each case, one wish to perform better wrt. a passive suspension does. Therefore, to evaluate the control approach exposed thereafter wrt. the passive one, we introduce the power spectral density (PSD) measure of each of these signals on the frequency and amplitude space of interest by the use of the following formula: I {f1,a1}→{f2,a2} (x) = f2 f1 a2 a1 x 2 (f, a)da • df (3) where f 1 and f 2 (resp. a 1 , a 2 ) are the lower and higher frequency (resp. amplitude) bounds respectively and x is the signal of interest. The frequency response (x(f, a)) of the nonlinear system is evaluated assuming a sinusoidal input z r of varying magnitude (1 -8cm) for 10 periods (with varying frequency). Then a discrete Fourier Transform is performed to evaluate the system gain. Semi-active proposed approach To ensure the semi-activeness of the controller output, the static damper model D given in Section 3 is used in the LPV controller; the computed control force u provided by the controller is compared with the possible reachable one v (Figure 6). The controller is scheduled according to this difference (as the anti-windup does with the integral action) as: |u -v| = 0 ⇒ semi-active control (u H∞ = 0) |u -v| > ε ⇒ nominal control (u H∞ = 0) where ε is chosen sufficiently small (≃ 10 -4 ) to ensure the semi-active control. |u -v| = 0 means that the required force is outside the allowed range, then the "passive control" is chosen (u H∞ = 0 ⇔ u = -c( żdef )). To incorporate this strategy in the framework of a LPV design, we introduce a parameter ρ with the following choice: |u -v| = 0 ⇒ ρ low |u -v| > ε ⇒ ρ high With this strategy, we can find a controller S(ρ) that can either satisfy some performance objectives or be passive (when no control law can be applied because of actuator limitations). The generalized block scheme incorporating the weighting functions is given on Figure 6, where ρ is the scheduling parameter that will be used to satisfy the dissipative damper constraints. W zr W u (ρ) - - - - - + ? W n S(ρ) z s y Σ u z r z 1 z 3 w 1 w 2 n W zs D -- u 7 + żdef v - z us W zus z 2 - ρ Fig. 6. General block diagram. LPV design & Scheduling strategy As described on Figure 6, the ρ parameter appears in the W u (ρ) weight function. Through the LPV design, W u (ρ) is varying between an upper and a lower bound. Let remember that in the H ∞ framework this weight indicates how large the gain on the control signal can be. Choosing a high W u (ρ) = ρ forces the control signal to be low, and conversely. Hence, when ρ is large, the control signal is so penalized that it is practically zero, and the closed-loop behavior is the same as the passive quarter vehicle model one. Conversely, when ρ is small, the control signal is no more penalized, hence the controller acts as an active controller and can achieve performances. 
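The scheduling rule above can be summarized in a few lines. In the sketch below the static damper map D is replaced by a crude stand-in that assumes the achievable force at a given deflection speed lies between a minimum and a maximum damping coefficient; those coefficients and the other numbers are placeholders, the real map being the measured Delphi characteristic of Section 3. The weight bounds 0.1 and 10 follow the interval used for W u (ρ) in the next subsection.

def clip_to_damper_range(F_star, dzdef, c_min, c_max):
    # project a requested force onto the dissipative set (stand-in for the map D)
    lo, hi = sorted((-c_min * dzdef, -c_max * dzdef))
    return min(max(F_star, lo), hi)

def schedule_rho(u, v, rho_low=0.1, rho_high=10.0, tol=1e-4):
    # keep the low weight (full H-infinity action) while u is achievable,
    # switch to the high weight, which drives u_Hinf towards zero, otherwise
    return rho_low if abs(u - v) <= tol else rho_high

c_nom = 1500.0                        # placeholder nominal linearized damping
dzdef, u_hinf = 0.4, -900.0           # example deflection speed and H-infinity force
u = -c_nom * dzdef + u_hinf           # control law u = -c*zdef_dot + u_Hinf
v = clip_to_damper_range(u, dzdef, c_min=300.0, c_max=4000.0)
rho = schedule_rho(u, v)              # rho_low here, since u lies inside the achievable range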
Let consider the generalize plant description,   ẋ z ∞ y   =   A(ρ) B ∞ (ρ) B C ∞ (ρ) D ∞w (ρ) D ∞u C 0 0     x w ∞ u   (4) where, x = x quarter x weights T represents the states of the linearized quarter vehicle model (obtained thanks to equation 1) and the weight states, z ∞ = W zs z s W zus z us W u u T the per- formance signals, w ∞ = W -1 zr z r W -1 n n T the weighted input signals, y = z def the measurement and ρ ∈ ρ ρ the varying parameter. The weighting functions are given by: W zs = 2 2πf1 (s+2πf1) , W zus = 2πf2 (s+2πf2) , W zr = 7.10 -2 , W n = 10 -4 and W u (ρ) ∈ ρ = 0.1 10 . W zs (resp. W zus ) is shaped according to performance specifications, W zr and W n model ground disturbances (z r ) and measurement noise (n) respectively, and W u (ρ) is used to limit the control signal and achieve the semi-active constraint. f 1 = 3Hz and f 2 = 5Hz. To find the LPV/H ∞ controller, we solve at each vertex of the polytope formed by co{ρ, ρ}, the bounded real lemma (using a common parameter independent Lyapunov function):   A(ρ) T K + KA(ρ) KB(ρ) C(ρ) T B(ρ) T K -γ 2 ∞ I D(ρ) T C(ρ) D(ρ) -I   < 0 (5) Because of the ρ parameter, ( 5) is an infinite set Bilinear Matrix Inequality (BMI), hence a nonconvex problem has to be solved. Via a change of basis expressed in [START_REF] Scherer | Multiobjective output-feedback control via LMI optimization[END_REF], extended to polytopic systems, we can find a nonconservative LMI that expresses the same problem in a tractable way for Semi-Definite Programs (SDP). As the parameter dependency enters in a linear way in the system definition, the polytopic approach is used (see e.g. [START_REF] Zin | An LPV/H ∞ active suspension control for global chassis technology: Design and performance analysis[END_REF]. It leads to two controllers S(ρ) and S(ρ), hence two closed-loop (CL(ρ) and CL(ρ)). Then, the applied control law is a convex combination of these two controllers. Hence, controller S(ρ) and closed-loop CL(ρ) can be expressed as the following convex hull: co{S(ρ), S(ρ)} ⇔ co{S 1 , S 0 } and co{CL(ρ), CL(ρ)} ⇔ co{CL 1 , CL 0 }. Note that a major interest in using the LPV design is that it ensures the internal stability of the closed-loop system for all ρ ∈ [ρ, ρ]. Note that the passive reference model is a "Renault Mégane Coupé", which is known to be a good road holding car. Nevertheless, the proposed semi-active control shows to improve the comfort without deteriorating the road holding. SIMULATION & VALIDATION Time simulation results To validate the approach and check weather the semi-active constraint is fulfilled, a step road disturbance (z r = 3cm) is generated on both passive and controlled system. This leads to the Force-Deflection speed space and chassis displacement given on Figure 9 and 10. With this representation it is clear that the proposed LPV controller provides a force that fulfills the dissipative inherent constraint of the controlled damper keeping a good chassis behavior. It also appears that this strategy does not only satisfy the semi-active constraint, but also the actuator limitations. CONCLUSION AND FUTURE WORKS In this article, we introduce a new strategy to ensure the dissipative constraint for a semi-active suspension keeping the advantages of the H ∞ control design. Interests of such approach compared to existing ones are: Hence the new semi-active strategy exhibits significant improvements on the achieved performances. Moreover, implementation of such a controller results in a cheap solution. 
In future works we aim to implement such an algorithm on a suspension.
Fig. 2. Nonlinear (dashed) and Linear (solid) Spring (left) and Damper (right) forces.
Fig. 3. Active vs. Semi-active quadrant.
Fig. 4. Delphi Force-Deflection speed diagrams for different current (0A cross, and 3A dots).
Fig. 9. LPV/H ∞ semi-active controller (dot), nominal damping & saturation force (solid).
Fig. 10. Chassis displacement for passive (dashed) and LPV/H ∞ semi-active (solid) suspension.
- Flexible design: possibility to apply H ∞ , H 2 , Pole placement, Mixed etc. criteria
- Measurement: only the suspension deflection sensor is required
- Computation: synthesis leads to two LTI controllers & a simple scheduling strategy (no on-line optimization process involved)
- Robustness: internal stability & robustness
Fig. 7. Freq. resp. of z s /z r for the passive (left) and controlled (right) nonlinear quarter vehicle.
Table 1. Passive vs. Controlled PSD.
5.1 Performance evaluation & Frequency behavior
On Figures 7 and 8 we plot the frequency responses z s /z r and z us /z r of the passive and controlled quarter car. Both frequency responses and PSD show improvements of the proposed approach. Then, applying the PSD criterion (3) on both the passive and controlled nonlinear quarter car models leads to the results summarized in Table 1, where the improvement is evaluated as (Passive PSD - Controlled PSD)/Passive PSD.
Signal       Passive PSD   Controlled PSD   Gain [%]
z̈ s /z r     280           206              25.4
z s /z r     2.4           2.1              12.1
z us /z r    1.3           1.2              7
z def /z r   1.5           1.4              8.02
18,989
[ "834135", "1618", "5833", "756640" ]
[ "388748", "388748", "388748", "15818", "15818", "15818" ]
01275830
en
[ "sde" ]
2024/03/04 23:41:48
2015
https://hal.science/hal-01275830v2/file/doc00023958.pdf
TU1102: Autonomic Road Transport Support Systems INRETS/GRETIA Le Descartes 2 ,rue de la Butte Verte 93166 Noisy Le Grand Cedex-10 France Comparative results of environmental aspects obtained by Micro and Mesoscopic approach for urban road networks Tiziana Campisi , Neila Bhouri , Giovanni Tesoriere The environmental impacts related to signalized intersection are tested comparing micro and mesoscopic approach and ,also , analyzing possible scenarios that lead to decrease of vehicular emissions referring to Le Bourget area on French Region of Seine-Saint Denis. Specific emission factors for the acceleration and the queue phases have been estimated at mesoscopic level starting from the microscopic approach by ENVIVER tool. In particular pollutant concentration were compared ,changing four different traffic light cycle with two phases (especially increasing or decreasing red time). It also shows the capacity of the model to test different off-line and on-line traffic strategies in order to work both on emission and congestion problems. For some decades the air quality has received, both locally and globally, more and more attention from the scientific community and local authorities. It is well known that one of the major causes of pollution are vehicle emissions, and that they are closely related with driving style. According to studies that define vehicular emissions, it has been found that they contain a wide variety of pollutants, principally carbon monoxide and dioxide (CO and CO2), oxides of nitrogen (NOX), particulate matter (PM10) and hydrocarbons (HC) or volatile organic compounds (VOC), which have a major long-term impact on air quality. The explained methodologies is applied to: Input data •24 hours of dynamic traffic simulation (from 00:00 am to 11:00 pm); • The application involved also the analysis of simulate the impact of adopted strategies to reduce emissions such as the optimization of signalized intersection (phases, cycle length and offset), the optimization of a system of one-way street and the changes of the vehicle fleet composition. The use of a mathematical model allows the creation of measurement scales of the environmental risk (and specific maps) considering urban road network in according with European and local Legislation benchmarking. This work describes the correlation between the traffic phenomena of road networks and the percentages of the major pollutants made by vehicle emissions. Results are based on real time data from a road network in France where mesoscopic approach offers insights on the use of ITS systems, such as the intelligent traffic lights or Urban Traffic Control (UTC) systems, which are useful to road management policy in terms of pollution decrease. The use of a mathematical model allows the creation of measurement scales of the environmental risk considering signalized and unsignalized intersections on the road network. The emission estimation for a link k approaching a signalized intersection in the time slice T can be evaluated as follows: Emission parameters ea calibrated specific emission function to be adopted in LA eb calibrated specific emission function to be adopted in LB; ec calibrated specific emission function to be adopted in LC. University of Enna KORE Cittadella Universitaria 94100,Enna (EN)-Italy volume [veh/h] on link k at time T;Qnv total hourly volume [veh/h] on link k that cross the intersection without any deceleration (i.e. 
vehicles not penalized by the traffic control): it is computed as the vehicles per cycle not subject to stop and go phases (qnv) multiplied by the number of cycles C during the considered time slice T (T/C); Qns total hourly volume [veh/h] subject to stop and go phases on link k: it is computed as the vehicles per cycle subject to stop and go phases (qns) multiplied by the number of cycles during the considered time slice (T/C);
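As a small executable illustration of the volume bookkeeping above, the sketch below derives Q nv and Q ns from per-cycle counts q nv and q ns and the number of cycles T/C; the numbers are placeholders, and the subsequent evaluation with the calibrated zone emission functions e a, e b, e c is not reproduced in this excerpt.

def hourly_volumes(q_nv, q_ns, T=3600.0, C=90.0):
    # per-cycle counts scaled by the number of cycles T/C in the time slice
    n_cycles = T / C
    return q_nv * n_cycles, q_ns * n_cycles

# 1-hour slice with an assumed 90 s cycle: 40 cycles
Q_nv, Q_ns = hourly_volumes(q_nv=20, q_ns=12, T=3600.0, C=90.0)   # -> 800, 480 veh/h
# Q_nv and Q_ns then feed the calibrated emission functions e_a, e_b, e_c
# of the LA, LB, LC sub-links; that aggregation step is not shown here.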
3,906
[ "1278852" ]
[ "486329", "222120", "486329" ]
01488396
en
[ "math" ]
2024/03/04 23:41:48
2019
https://hal.science/hal-01488396/file/poromembraneshellupdateversiondef.pdf
Andro Mikelić email: [email protected] Josip Tambača email: [email protected] Derivation of a poroelastic elliptic membrane shell model Keywords: Membrane poroelastic shell, Biot's quasi-static equations, elliptic-parabolic systems, asymptotic methods AMS subject classification: 35B25, 74F10, 74K25, 74Q15, MSC 76S published or not. The documents may come Introduction The present work is devoted to the derivation of a model of the poroelastic elliptic membrane shell. We follow the standpoint of Ciarlet et al, who derived the Kirchhoff-Love models for thin elastic bodies in the zero thickness limit (see [START_REF] Ciarlet | A justification of the two dimensional linear plate model[END_REF][START_REF] Ciarlet | Asymptotic Analysis of Linearly Elastic Shells. I. Justification of Membrane Shells Equations[END_REF][START_REF] Ciarlet | Asymptotic analysis of linearly elastic shells. II. Justification of flexural shell equations[END_REF]). While this approach to the effective behavior of three-dimensional linearized elastic bodies is well-established, much less attention has been paid to the poroelastic thin bodies. As the recent example of the special issue of the journal Transport in Porous Media [START_REF]Special Issue: Thin Porous Media[END_REF][START_REF] Iliev | Numerical Solution of Plate Poroelasticity Problems[END_REF] shows, this is likely to change. Also there is a related experimental work, see [START_REF] Grosjean | Experimental and numerical study of the interaction between fluid flow and filtering media on the macroscopic scale[END_REF]. The poroelastic bodies are characterized by the simultaneous presence of the deformation and the filtration (flow). They are described by the quasi-static Biot's system of PDE's. It couples the Navier equations of linearized elasticity, containing the pressure gradient, with the mass conservation equation involving the fluid content change and divergence of the filtration velocity. The filtration velocity is the relative velocity for the upscaled fluid-structure problem and obeys Darcy's law. The fluid content change is proportional to the pressure and the elastic body compression. In the quasistatic Biot's system the mechanical part is elliptic in the displacement and the flow equation has a parabolic operator for the pressure. For more modeling details we refer to [START_REF] Coussy | Mechanics and Physics of Porous Solids[END_REF], [START_REF] Mei | Homogenization Methods for Multiscale Mechanics[END_REF] and [START_REF] Tolstoy | Acoustics, elasticity, and thermodynamics of porous media. Twenty-one papers[END_REF] and for the mathematical theory to [START_REF] Sanchez-Palencia | Non-Homogeneous Media and Vibration Theory[END_REF], [START_REF] Nguetseng | Asymptotic analysis for a stiff variational problem arising in mechanics[END_REF] and [START_REF] Mikelić | On the interface law between a deformable porous medium containing a viscous fluid and an elastic body[END_REF]. The simplest relevant two dimensional poroelastic thin body is a poroelastic plate. A physically relevant choice of the time scale and the related coefficient size was set up by Marciniak-Czochra and Mikelić in [START_REF] Marciniak-Czochra | A Rigorous Derivation of the Equations for the Clamped Biot-Kirchhoff-Love Poroelastic plate[END_REF]. They rigorously derived the effective equations for the Kirchhoff-Love-Biot poroelastic plate in the zero thickness limit of the 3D quasi-static Biot equations. 
The limiting zero thickness procedure is seriously affected by the presence of coupling structure-flow. As in the purely elastic case, the specificity of the poroelastic plate model from [START_REF] Marciniak-Czochra | A Rigorous Derivation of the Equations for the Clamped Biot-Kirchhoff-Love Poroelastic plate[END_REF] is that the limit model contains simultaneously both flexural and membrane equations. This remarkable property does not transfer to the shells. Following both Ciarlet et al zero thickness limit approach to flexural linearized shells and handling of the Biot quasi-static equations in a thin domain by Marciniak-Czochra and Mikelić, Mikelić and Tambača have undertaken the derivation of the equation for a linear flexural poroelastic shell in [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF]. In this article we undertake derivation of a model for linear elliptic membrane poroelastic shell through the same type of the limit procedure. The coupling elastic structure -flow is scaled as in [START_REF] Marciniak-Czochra | A Rigorous Derivation of the Equations for the Clamped Biot-Kirchhoff-Love Poroelastic plate[END_REF]. It corresponds to the physical parameters leading to the quasi-static diphasic Biot's equations for the displacement and the pressure. As in [START_REF] Taber | A Theory for Transverse Deflection of Poroelastic Plates[END_REF] and [START_REF] Taber | Poroelastic Plate and Shell Theories[END_REF], it means that the characteristic time scale is of Taber and Terzaghi and in the dimensionless form there will be the ratio between the width and length squared, multiplying Laplacean of the pressure. The flexural and the membrane poroelastic shells correspond to different regimes of the filtration, different sizes of the applied contact forces, different geometries of the shell and different boundary conditions. In our case they are applied at the top and the bottom boundaries. For the membrane shell case, we impose a given inflow/outflow velocity of order of the characteristic filtration velocity through a shell of width ℓ. The applied contact forces at the same top/bottom boundaries should be of order of the pressure drop between these boundaries. We recall that in the case of the flexural poroelastic shell, studied in [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF], the contact forces at the top/bottom boundaries are an order of magnitude smaller and even smaller than the related inflow/outflow velocities. The motivation for studying the flexural poroelastic shells comes from the industrial filters modeling. For instance, the results from [START_REF] Marciniak-Czochra | A Rigorous Derivation of the Equations for the Clamped Biot-Kirchhoff-Love Poroelastic plate[END_REF] and [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF] can be applied to the modeling of the air filters for the cars, while some oil car filters can be modeled as membrane poroelastic shells. The motivation for studying the membrane poroelastic shells also comes from the biomechanics. An important example is the study of the mechanical behavior of fluid-saturated large living bone tissues. We recall that the bulk modulus of the bone is much larger than the bulk moduli of the soft tissues and the bone deformation is small. A full physiological understanding of the bone modeling would provide insight to important clinical problems which concern bones. 
For detailed review we refer to [START_REF] Nowinski | A model of the human skull as a poroelastic spherical shell subjected to a quasistatic load[END_REF] and [START_REF] Cowin | Bone poroelasticity[END_REF]. Many other living structures are fluid-saturated membrane shells, see e.g. [START_REF] Di Carlo | Living shell-like structures[END_REF]. Another modeling question, raised in [START_REF] Cowin | Bone poroelasticity[END_REF], is of the modeling of the elastic wave propagation in a bone. As the Biot theory was originally developed to describe the wave propagation, the subject attracted attention. The reader can consult [START_REF]Poroelastic structures[END_REF], [START_REF] Etchessahar | Bending vibrations of a rectangular poroelastic plate[END_REF] and [START_REF] Theodorakopoulos | Flexural vibrations of poroelastie plates[END_REF] and references therein. With our scaling, our spatial operators do not coincide with models from these references, but rather with Taber's works [START_REF] Taber | A Theory for Transverse Deflection of Poroelastic Plates[END_REF] and [START_REF] Taber | Poroelastic Plate and Shell Theories[END_REF]. Furthermore, the dynamic models of the diphasic Biot equations for a viscous fluid exhibit memory effects, as proposed by Biot through the introduction of the viscodynamic operator (see [START_REF] Tolstoy | Acoustics, elasticity, and thermodynamics of porous media. Twenty-one papers[END_REF]). The homogenization derivation of the dynamic diphasic Biot's equations gives the memory terms (see [START_REF] Clopeau | Homogenizing the Acoustic Properties of the Seabed, II[END_REF]) for a viscous fluid. If the pores are filled by an ideal fluid, there are no memory effects (see [START_REF] Ferrín | Homogenizing the Acoustic Properties of a Porous Matrix Containing an Incompressible Inviscid Fluid[END_REF]). The analysis of the relationship between the dynamic and the quasi-static diphasic Biot equation was undertaken in [START_REF] Mikelić | Theory of the dynamic Biot-Allard equations and their link to the quasi-static Biot system[END_REF] and there are scalings when the memory effects are not important. But in general it is not possible just to add the acceleration to the quasi-static Biot system. Hence modeling of the elastic waves propagation in poroelastic plates/shell requires some future research. Since in [START_REF] Mikelić | On the interface law between a deformable porous medium containing a viscous fluid and an elastic body[END_REF], the quasi-static Biot equations are obtained by homogenization of a pore scale fluidstructure problem, one can raise question why we do not study simultaneously homogenization of the fluid-structure problem and the zero thickness limit. In the applications we have in mind (industrial filters, living tissues. . . ) the thickness is much bigger than the RVE size and such approach does not make much sense. For some other problems like the study of the overall behavior of curved layers of living cells, having a thickness of one cell, the simultaneous homogenization and singular perturbation would give new models. The flexural shell model is formulated on a subspace of infinitesimally inextensional displacements involving boundary conditions, usually denoted by V F . However this function space for some geometries and boundary conditions turns to be trivial. In this case a model for extensional displacements is necessary. In this paper we focus on the shells with elliptic surfaces which are clamped at the whole boundary. 
The model in this case is called elliptic membrane shell model. In the case of the classical elasticity the membrane effects are measured by the change of the metric of the shell. This is different with the case of the flexural shell model where the potential energy is measured by the change in the curvature tensor. This difference results in a simpler model for the membrane case and lower order derivatives involved in the formulation. Remaining cases in which V F = {0} as well are covered by the generalized membrane shell model (an example is a tube clamped at ends). However, the formulation is given in abstract spaces, see [START_REF] Ciarlet | Mathematical elasticity[END_REF]. Derivation of the present model is more difficult than the derivation of the classical elliptic membrane shell model starting from the three-dimensional linearized [START_REF] Ciarlet | Asymptotic Analysis of Linearly Elastic Shells. I. Justification of Membrane Shells Equations[END_REF]. Namely, we are dealing with an additional equation for the additional unknown (pressure). We use the results derived in the classical static case, see [START_REF] Ciarlet | Mathematical elasticity[END_REF], as much as possible, but Biot's equations are quasi-static and, therefore, time dependent. Presence of the additional independent variable (time) requires special attention and careful analysis. Note also that in some parts this derivation is more demanding than the derivation of the poroelastic flexural shell model from [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF] since here we have weaker a priori estimates in strains and we have to obtain the same convergences of the tangential displacements as in the flexural case. Finally, we recall that the flexural shell models are characterized by the presence of the 4th order differential operators and for the membrane shell models the differential operators are of the 2nd order. Geometry of Shells and Setting of the Problem We are starting by recalling the basic facts of geometry of shells. The text follows [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF], but it is shorter since some terms are not needed in the membrane model. Namely, lower order derivatives are sufficient to express the membrane effects. Throughout this paper we use boldfaced letters for vectors or matrices. The only exceptions are points in the Euclidean spaces (e.g., x, y, ). R n×m denotes the space of all n by m matrices and the subscript sym denotes its subspace of symmetric matrices. By L 2 we denote the Lebesgue space of the square integrable functions, while H 1 stands for the Sobolev space. Let the surface S is given as S = X(ω L ), where ω ⊂ R 2 be an open bounded and simply connected set with Lipschitz-continuous boundary ∂ω L and X : ω L → R 3 is a smooth injective immersion (that is X ∈ C 3 and 3 × 2 matrix ∇X is of rank two). Thus the vectors a α (y) = ∂ α X(y), α = 1, 2, are linearly independent for all y ∈ ω L and form the covariant basis of the tangent plane to the 2-surface S. Let Ω ℓ L = ω L × (-ℓ/2, ℓ/2). In this paper we study the deformation and the flow in a three-dimensional poroelastic shell Ωℓ L = r(Ω ℓ L ), L, ℓ > 0, where the injective mapping r is given by r = r(y, x 3 ) = X(y) + x 3 a 3 (y), a 3 (y) = a 1 (y) × a 2 (y) |a 1 (y) × a 2 (y)| , ( 2.1) for x 3 ∈ (-ℓ/2, ℓ/2) and (y 1 , y 2 ) ∈ ω L , diam (ω L ) = L. 
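As a quick concrete illustration of the chart (2.1) (our added example; the paper itself returns to the spherical case only in the Appendix), consider a portion of a sphere of radius $R_0$:
\[
X(y_1,y_2) = R_0(\cos y_1\cos y_2,\ \sin y_1\cos y_2,\ \sin y_2),\qquad (y_1,y_2)\in\omega_L\subset(0,2\pi)\times(-\tfrac{\pi}{2},\tfrac{\pi}{2}).
\]
Then
\[
a_1=R_0(-\sin y_1\cos y_2,\ \cos y_1\cos y_2,\ 0),\qquad
a_2=R_0(-\cos y_1\sin y_2,\ -\sin y_1\sin y_2,\ \cos y_2),
\]
\[
a_3=\frac{a_1\times a_2}{|a_1\times a_2|}=\frac{X}{R_0},\qquad
r(y,x_3)=\Big(1+\frac{x_3}{R_0}\Big)X(y),
\]
so the shell is the spherical annulus of radii $R_0\pm\ell/2$. In this example $A_c=\mathrm{diag}(R_0^2\cos^2 y_2,\ R_0^2)$ and $B_c=-A_c/R_0$, hence the Gaussian curvature equals $1/R_0^2>0$ and the middle surface is elliptic, which is the geometric situation assumed for the membrane model.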
The contravariant basis of the plane spanned by a 1 (y), a 2 (y) is given by the vectors a α (y) defined by a α (y) • a β (y) = δ α β . We extend these bases to the basis of the whole space R 3 by the vector a 3 (y) given in (2.1) (a 3 (y) = a 3 (y)). Now we collect the local contravariant and covariant bases into the matrix functions Q = [ a 1 a 2 a 3 ] , Q -1 =   a T 1 a T 2 a T 3   . (2.2) The first fundamental form of the surface S, or the metric tensor, in covariant A c = (a αβ ) or contravariant A c = (a αβ ) components are given respectively by a αβ = a α • a β , a αβ = a α • a β , α, β = 1, 2. Note here that because of continuity of A c and compactness of ω L , there are constants M c ≥ m c > 0 such that m c x • x ≤ A c (y)x • x ≤ M c x • x, x ∈ R 3 , y ∈ ω L . (2.3) These estimates, with different constants, hold for A c as well, as it is the inverse of A c . The second fundamental form of the surface S, also known as the curvature tensor, in covariant B c = (b αβ ) or mixed components B = (b β α ) are given respectively by b αβ = a 3 • ∂ β a α = -∂ β a 3 • a α , b β α = 2 ∑ κ=1 a βκ b κα , α, β = 1, 2. The Christoffel symbols Γ κ are defined by Γ κ αβ = a κ • ∂ β a α = -∂ β a κ • a α , α, β, κ = 1, 2. We will sometime use Γ 3 αβ for b αβ . The area element along S is √ ady, where a := det A c . By (2.3) it is uniformly positive, i.e., there is m a > 0 such that 0 < m a ≤ a(y), y ∈ ω L . (2.4) In order to describe our results we also need the following differential operators: γ αβ (v) = 1 2 (∂ α v β + ∂ β v α ) - 2 ∑ κ=1 Γ κ αβ v κ -b αβ v 3 , α, β = 1, 2, (2.5 ) n αβ | β = ∂ β n αβ + 2 ∑ κ=1 Γ α βκ n βκ + 2 ∑ κ=1 Γ β βκ n ακ , α, β = 1, 2, defined for smooth vector fields v and tensor fields n. The upper face (respectively lower face) of the shell Ωℓ L is Σℓ L = r(ω L × {x 3 = ℓ/2}) = r(Σ ℓ L ) (respectively Σ-ℓ L = r(ω L × {x 3 = -ℓ/2}) = r(Σ -ℓ L )). Γℓ L is the lateral boundary, Γℓ L = r(∂ω L × (-ℓ/2, ℓ/2)) = r(Γ ℓ L ). The small parameter ε in the problem is the ratio between the shell thickness and the characteristic horizontal length ε = ℓ/L ≪ 1. We note that Biot's diphasic equations describe behavior of the system at so called Terzaghi's time scale T = ηL 2 c /(kµ), where L c is the characteristic domain size, η is dynamic viscosity, k is permeability and µ is the shear modulus. For the list of all parameters see Table 1. Similarly as in [START_REF] Marciniak-Czochra | A Rigorous Derivation of the Equations for the Clamped Biot-Kirchhoff-Love Poroelastic plate[END_REF], we chose as the characteristic length L c = ℓ, which leads to the Taber-Terzaghi transversal time T tab = ηℓ 2 /(kµ). Another possibility was to choose the longitudinal time scaling with T long = ηL 2 /(kµ). It would lead to different scaling in (2.8) and the dimensionless permeability coefficient in (3.3) would not be ε 2 but 1. In the context of thermoelasticity, one has the same equations and Blanchard and Francfort rigorously derived in [START_REF] Blanchard | Asymptotic thermoelastic behavior of flat plates[END_REF] the corresponding thermoelastic plate equations. We note that considering the longitudinal time scale yields the effective model where the pressure (i.e. the temperature in thermoelasticity) is decoupled from the flexion. Then the quasi-static Biot equations for the poroelastic body Ωℓ L take the following dimensional form: σ = 2µe(ũ) + (λ div ũ -αp)I in Ωℓ L , ( 2.6 ) -div σ = -µ △ ũ -(λ + µ) ▽ div ũ + α ▽ p = 0 in Ωℓ L , (2.7) ∂ ∂t (β G p + α div ũ) - k η △ p = 0 in Ωℓ L . 
(2.8) Note that e(u) = sym ▽u and σ is the stress tensor. All other quantities are defined in Table 1. We impose a given contact force σν = P±ℓ L and a given normal flux - k η ∂ p ∂x 3 = ṼL at x 3 = ±ℓ/2. At the lateral boundary Γℓ we impose a zero displacement and a zero normal flux. Here ν is the outer unit normal at the boundary. At initial time t = 0 we prescribe the initial pressure pℓ L,in . Our goal is to extend the elliptic membrane shell justification by Ciarlet, Lods et al and by Dauge et al to the poroelastic case. Thus in the sequel we assume that the middle surface is elliptic (Gaussian curvature (product of principal curvatures) is positive at all points) and that the shell is clamped at its entire boundary. We announce briefly the differential equations of the membrane poroelastic shell in dimensional form. Effective dimensional equations: The model is given in terms of u eff : ω L → R 3 which is the vector of components of the displacement of the middle surface of the shell in the contravariant basis and p eff : Ω ℓ L → R which is the pressure in the 3D shell. Let us denote the stress tensor due to the variation in pore pressure across the shell thickness by n = ℓ Cc (A c γ(u eff ))A c - 2µα λ + 2µ ∫ ℓ/2 -ℓ/2 p eff dy 3 A c , ( 2.9) where γ(•) is given by (2.5) and Cc is the elasticity tensor, usually appearing in the classical shell theories, given by Cc E = 2µ λ λ + 2µ tr (E)I + 2µE, E ∈ R 2×2 sym . Then the model in the differential formulation reads as follows: - 2 ∑ β=1 n αβ | β = (P +ℓ L ) α + (P -ℓ L ) α in ω L , α = 1, 2, - 2 ∑ α,β=1 b αβ n αβ = (P +ℓ L ) 3 + (P -ℓ L ) 3 in ω L , u eff α = 0, α = 1, 2, on ∂ω L , for every t ∈ (0, T ), (2.10) ( β G + α 2 λ + 2µ ) ∂p eff ∂t + α 2µ λ + 2µ A c : γ( ∂u eff ∂t ) - k η ∂ 2 p eff ∂(y 3 ) 2 = 0 in (0, T ) × ω L × (-ℓ/2, ℓ/2), k η ∂p eff ∂y 3 = -V L , on (0, T ) × ω L × ({-ℓ/2} ∪ {ℓ/2}), p eff = p ℓ L,in given at t = 0. (2.11) Here (P ±ℓ L ) i , i = 1, 2, 3 are components of the contact force P±ℓ L • r at Σ ±ℓ L in the covariant basis, V L = ṼL • X, p ℓ L,in = pℓ L,in • r. Thus, the poroelastic elliptic membrane shell model in the differential formulation is given for unknowns {n, u eff , p eff } and by equations (2.9), (2.10) and (2.11). The components of n are the contact forces. The first two equations in (2.10) can be found in the differential equation of the elliptic membrane shell model (see [START_REF] Ciarlet | Mathematical elasticity[END_REF]). The first equation in (2.11) is the evolution equation for the effective pressure with associated boundary and initial conditions in the remaining part of (2.11). In the case of the classical theory of the purely elastic shell, we recall that, in addition to already quoted articles and books by Ciarlet and al, there is a huge literature, with both mathematical and engineering approaches (see e.g. [START_REF] Blaauwendraad | Structural Shell Analysis: Understanding and Application[END_REF], [START_REF] Dauge | Plates and shells: Asymptotic expansions and hierarchical models[END_REF], [START_REF] Hoefakker | Theory Review for Cylindrical Shells and Parametric Study of Chimneys and Tanks[END_REF], [START_REF] Naghdi | The Theory of Shells and Plates[END_REF] and references therein). In Section 3 we present the dimensionless form of the problem, then recall existence and uniqueness result of the smooth solution for the starting problem, rewrite the problem in curvilinear coordinates and rescale the problem on the domain Ω = ω × (-1/2, 1/2). 
At the end of this section the main convergence results are formulated. In Section 4 we study the a priori estimates for the family of solutions. Then in Section 5 the convergence (including strong) of the solutions to the rescaled problem, is studied as ε → 0. In Appendix we give the limit model written for a part of the spherical surface. Finally, the radially symmetric effective equations the problem on the whole sphere are derived and the result is compared with the one in [START_REF] Taber | Poroelastic Plate and Shell Theories[END_REF]. Problem setting in curvilinear coordinates and the main results Dimensionless equations We introduce the dimensionless unknowns and variable by setting β = β G µ; P = µU L ; U ũε = ũ; T = ηℓ 2 kµ ; λ = λ µ ; P pε = p; ỹL = y; x3 L = x 3 ; rL = r; XL = X; tT = t; σε µU L = σ. After dropping wiggles in the coordinates and in the time, the system (2.6)-(2.8) becomes -div σε = -△ ũε -λ ▽ div ũε + α ▽ pε = 0 in (0, T ) × Ωε , ( 3.1) σε = 2e(ũ ε ) + ( λ div ũε -αp ε )I in (0, T ) × Ωε , (3.2) ∂ ∂t (β pε + α div ũε ) -ε 2 △p ε = 0 in (0, T ) × Ωε , ( 3.3) where ũε = (ũ ε 1 , ũε 2 , ũε 3 ) denotes the dimensionless displacement field and pε the dimensionless pressure. We study a shell Ωε with thickness ε = ℓ/L and section ω = ω L /L. It is described by Ωε = 1 L r({(x 1 , x 2 , x 3 )/L ∈ ω × (-ε/2, ε/2)}) = r(Ω ℓ L ) = Ωℓ L /L, Σε + (respectively Σε -) is the upper face (respectively the lower face) of the shell Ωε . Γε is the lateral boundary, Γε = Γℓ L /L. We suppose that a given dimensionless traction force is applied on Σε + ∪ Σε -and impose the shell is clamped on Γε : σε ν = (2e(ũ ε ) -αp ε I + λ(div ũε )I)ν = ε P± on Σε ± , ( 3.4 ) ũε = 0, on Γε . (3.5) For the pressure pε , at the lateral boundary Γε the zero inflow/outflow flux is imposed: -▽ pε • ν = 0. (3.6) and at Σε ± , we set -ε 2 ▽ pε • ν = ±ε Ṽ . (3.7) Finally, we need an initial condition for pε at t = 0, pε (x 1 , x 2 , x 3 , 0) = pin (x 1 , x 2 ) in Ωε . (3.8) The difference here, with respect to flexural shell case ( [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF]), is that the contact loads in (3.4) and filtration velocity in (3.7) are differently scaled. Remark 1. We recall that in the flexural shell case contact loads were assumed to behave like ε 3 P± and the normal boundary filtration velocity by -ε 2 ▽ pε • ν = ±ε 2 Ṽ . Let V( Ωε ) = {ṽ ∈ H 1 ( Ωε ; R 3 ) : ṽ| Γε = 0}. Then the weak formulation corresponding to (3.1)- (3.8) is given by Find ũε ∈ H 1 (0, T, V( Ωε )), pε ∈ H 1 (0, T ; H 1 ( Ωε )) such that it holds ∫ Ωε 2 e(ũ ε ) : e(ṽ) dx + λ ∫ Ωε div ũε div ṽ dx -α ∫ Ωε pε div ṽ dx = ∫ Σε + ε P+ • ṽ ds + ∫ Σε - ε P-• ṽ ds, for every ṽ ∈ V( Ωε ) and t ∈ (0, T ), (3.9) β ∫ Ωε ∂ t pε q dx + ∫ Ωε α div ∂ t ũε q dx + ε 2 ∫ Ωε ∇p ε • ∇q dx = ε ∫ Σε - Ṽ q ds -ε ∫ Σε + Ṽ q ds, for every q ∈ H 1 ( Ωε ) and t ∈ (0, T ), (3.10) pε | {t=0} = pin , in Ωε . (3.11) Note that for two 3 × 3 matrices A and B the Frobenius scalar product is denoted by A : B = tr (AB T ). Existence and uniqueness for the ε-problem In this subsection the existence and uniqueness of a solution {ũ ε , pε } ∈ H 1 (0, T ; V( Ωε ))×H 1 (0, T ; H 1 ( Ωε )) to problem (3.9)-(3.11) is recalled. We follow [START_REF] Marciniak-Czochra | A Rigorous Derivation of the Equations for the Clamped Biot-Kirchhoff-Love Poroelastic plate[END_REF] and get Proposition 2. Let us suppose pin ∈ H 2 0 ( Ωε ), P ± ∈ H 2 (0, T ; L 2 (ω; R 3 )) and Ṽ ∈ H 1 (0, T ; L 2 (ω)), Ṽ | {t=0} = 0. 
(3.12) Then problem (3.9)- (3.11) has a unique solution {ũ ε , pε } ∈ H 1 (0, T ; V( Ωε ))) × H 1 (0, T ; H 1 ( Ωε )). Problem in Curvilinear Coordinates and the Scaled Problem In this section we introduce the formulation of the problem in curvilinear coordinates. The formulation is the same as in Subsection 3.3 in [22, pages 371-374] without the rescaled problem in (3.18). For completeness and for the comfort of the reader we repeat it here. Our goal is to find the limits of the solutions of problem (3.9)-(3.11) when ε tends to zero. It is known from similar considerations made for classical shells that asymptotic behavior of the longitudinal and transverse displacements of the elastic body is different. The same effect is expected in the present setting. Therefore we need to consider asymptotic behavior of the local components of the displacement ũε . It can be done in many ways, but in order to preserve some important properties of bilinear forms, such as positive definiteness and symmetry, we rewrite the three-dimensional equations in curvilinear coordinates defined by r. Then we formulate equivalent problems posed on the domain independent of ε. We essentially follow the analogous section from [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF]. Nevertheless, it should be noted that the pressure scaling is different. Let r(x/L) = r(x)/L, x ∈ Ω ℓ L . The covariant basis of the shell Ωε , which is the three-dimensional manifold parameterized by r, is defined by g ε i = ∂ i r : Ω ε → R 3 , i = 1, 2, 3. Vectors {g ε 1 , g ε 2 , g ε 3 } are given by g ε 1 = a 1 (y) + x 3 ∂ y 1 a 3 (y), g ε 2 = a 2 (y) + x 3 ∂ y 2 a 3 (y), g ε 3 = a 3 (y). Vectors { g 1,ε , g 2,ε , g 3,ε } satisfying g j,ε • g ε i = δ ij on Ω ε , i, j = 1, 2, 3, where δ ij is the Kronecker symbol, form the contravariant basis on Ωε . The contravariant metric tensor G c,ε = (g ij,ε ), the covariant metric tensor G ε c = (g ε ij ) and the Christoffel symbols Γ i,ε jk of the shell Ωε are defined by g ij,ε = g i,ε • g j,ε , g ε ij = g ε i • g ε j , Γ i,ε jk = g i,ε • ∂ j g ε k on Ω ε , i, j, k = 1, 2, 3. We set Γ i,ε = (Γ i,ε jk ) j,k=1,...,3 and γε (v) = 1 2 (∇v + ∇v T ) - 3 ∑ i=1 v i Γ i,ε . (3.13) Let g ε = det G ε c . Until now we were using the canonical basis {e 1 , e 2 , e 3 }, for R 3 . Now the displacement is rewritten in the contravariant basis, ũε • r(y 1 , y 2 , x 3 ) = 3 ∑ i=1 ũε i • r(y 1 , y 2 , x 3 )e i = 3 ∑ i=1 u ε i (y 1 , y 2 , x 3 )g i,ε (y 1 , y 2 , x 3 ), ṽ • r = 3 ∑ i=1 v i g i,ε , while for scalar fields we just change the coordinates pε • r = p ε , q • r = q, Ṽ • r = V, pin • r = p in , on Ω ε . The contact forces are rewritten in the covariant basis of the shell P± • r = 3 ∑ i=1 (P ± ) i g ε i on Σ ε ± . New vector functions are defined by u ε = u ε i e i , v = v i e i , P ± = (P ± ) i e i . Note that u ε i are not components of the physical displacement. They are just intermediate functions which will be used to reconstruct ũε . The corresponding function space to V( Ωε ) is the space V(Ω ε ) = {v ∈ H 1 (Ω ε ) 3 : v| Γ ε = 0}. Let Q ε = (∇r) -T = (g ε 1 g ε 2 g ε 3 ) -T = ( g 1,ε g 2,ε g 3,ε ) and let CE = λ(tr E)I + 2E, for all E ∈ R 3×3 sym . (3.14) Then the problem (3.9)-(3.11) can be written as ∫ Ω ε C ( Q ε γε (u ε )(Q ε ) T ) : ( Q ε γε (v)(Q ε ) T ) √ g ε dy -α ∫ Ω ε p ε tr ( Q ε γε (v)(Q ε ) T )√ g ε dy = ε ∫ Σ ε + P + • v √ g ε ds + ε ∫ Σ ε - P -• v √ g ε ds, v ∈ V(Ω ε ), a.e. 
t ∈ [0, T ], ∫ Ω ε β ∂p ε ∂t q √ g ε dy + ∫ Ω ε α ∂ ∂t tr ( Q ε γε (u ε )(Q ε ) T ) q √ g ε dy + ε 2 ∫ Ω ε Q ε ∇p ε • Q ε ∇q √ g ε dy = ε ∫ Σ ε - V q √ g ε ds -ε ∫ Σ ε + V q √ g ε ds, q ∈ H 1 (Ω ε ), a.e. t ∈ [0, T ], p ε = p in , for t = 0. (3.15) This is the problem in curvilinear coordinates. Problems for all ũε , pε and u ε , p ε are posed on ε-dependent domains. In the sequel we follow the idea from Ciarlet, Destuynder [START_REF] Ciarlet | A justification of the two dimensional linear plate model[END_REF] and rewrite (3.15) on the canonical domain independent of ε. As a consequence, the coefficients of the resulting weak formulation will depend on ε explicitly. Let Ω = ω × (-1/2, 1/2) and let R ε : Ω → Ω ε be defined by R ε (z) = (z 1 , z 2 , εz 3 ), z ∈ Ω, ε ∈ (0, ε 0 ). By Σ ± = ω × {±1/2} we denote the upper and lower face of Ω. Let Γ = ∂ω × (-1/2, 1/2). To the functions u ε , p ε , g ε , g ε i , g ε,i , Q ε , Γ i,ε jk , i, j, k = 1, 2, 3 defined on Ω ε we associate the functions u(ε), p(ε), g(ε), g i (ε), g i (ε), Q(ε), Γ i ij (ε), i, j, k = 1 , 2, 3 defined on Ω by composition with R ε . Let us also define V(Ω) = {v = (v 1 , v 2 , v 3 ) ∈ H 1 (Ω; R 3 ) : v| Γ = 0}. Then the problem (3.15) can be written as ε ∫ Ω C ( Q(ε)γ ε (u(ε))Q(ε) T ) : ( Q(ε)γ ε (v)Q(ε) T ) √ g(ε)dz -εα ∫ Ω p(ε)tr ( Q(ε)γ ε (v)Q(ε) T ) √ g(ε)dz = ε ∫ Σ ± P ± • v √ g(ε)ds, v ∈ V(Ω), a.e. t ∈ [0, T ], ε ∫ Ω β ∂p(ε) ∂t q √ g(ε)dz + ε ∫ Ω α ∂ ∂t tr ( Q(ε)γ ε (u(ε))Q(ε) T ) q √ g(ε)dz + ε 3 ∫ Ω Q(ε)∇ ε p(ε) • Q(ε)∇ ε q √ g(ε)dz = ∓ε ∫ Σ ± V q √ g(ε)ds, q ∈ H 1 (Ω), a.e. t ∈ [0, T ], p(ε) = p in , for t = 0. (3.16) Here γ ε (v) = 1 ε γ z (v) + γ y (v) - 3 ∑ i=1 v i Γ i (ε), (3.17 ) γ z (v) =   0 0 1 2 ∂ 3 v 1 0 0 1 2 ∂ 3 v 2 1 2 ∂ 3 v 1 1 2 ∂ 3 v 2 ∂ 3 v 3   , γ y (v) =   ∂ 1 v 1 1 2 (∂ 2 v 1 + ∂ 1 v 2 ) 1 2 ∂ 1 v 3 1 2 (∂ 2 v 1 + ∂ 1 v 2 ) ∂ 2 v 2 1 2 ∂ 2 v 3 1 2 ∂ 1 v 3 1 2 ∂ 2 v 3 0   , ∇ ε q = 1 ε ∇ z q + ∇ y q, ∇ z q = [ 0 0 ∂ 3 q ] , ∇ y q = [ ∂ 1 q ∂ 2 q 0 ] and we have also used the notation ∓ ∫ Σ ± V q √ g(ε)ds = ∫ Σ - V q √ g(ε)ds - ∫ Σ + V q √ g(ε)ds, ∫ Σ ± P ± • v √ g(ε)ds = ∫ Σ + P + • v √ g(ε)ds + ∫ Σ - P -• v √ g(ε)ds. Remark 3. Existence and uniqueness of a smooth solution to problem (3.16) follows from Proposition 2 and the smoothness of the curvilinear coordinates transformation. Also notice that in the present (elliptic membrane) case there is no rescaling of the pressure. It will appear to be of order one which is in contrast to the flexural shell case where it was of order ε. Convergence results In the remainder of the paper we make the following assumptions Assumption 4. For simplicity, we assume that p in = 0, that V ∈ H 1 (0, T ; L 2 (ω)), V | {t=0} = 0 and that P ± ∈ H 2 (0, T ; L 2 (ω; R 3 )), with P ± | {t=0} = 0. To describe the limit problem we introduce the function space V M (ω) = H 1 0 (ω) × H 1 0 (ω) × L 2 (ω). Contrary to V F (ω), which is the function space for the flexural shell model, it is always non-trivial. The boundary value problem in Ω = ω × (-1/2, 1/2) for the effective displacement and the effective pressure is given by: find {u, p 0 } ∈ C([0, T ]; V M (ω) × L 2 (Ω)), ∂ z 3 p 0 ∈ L 2 ((0, T ) × Ω) satisfying the system ∫ ω C(A c γ(u)) : γ(v)A c √ adz 1 dz 2 - 2α λ + 2 ∫ ω ∫ 1/2 -1/2 p 0 dz 3 A c : γ(v) √ adz 1 dz 2 = ∫ ω (P + + P -) • v √ adz 1 dz 2 ., v ∈ V M (ω), (3.18) ∫ Ω ( β + α 2 λ + 2 ) ∂p 0 ∂t q √ adz + ∫ Ω α ∂ ∂t ( 2 λ + 2 A c : γ(u) ) q √ adz + ∫ Ω ∂p 0 ∂z 3 ∂q ∂z 3 √ adz = ∓ ∫ Σ ± V q √ ads, q ∈ H 1 (Ω). 
(3.19) p 0 = 0 at t = 0, (3.20) where γ(•) is given by (2.5) and CE = 2 λ λ + 2 tr (E)I + 2E, E ∈ R 2×2 sym . ( 3.21) Remark 5. We observe that, contrary to the effective flexural shell system from [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF], problem (3.18)-(3.20) is of the second order. Fundamental for the analysis of this model is the inequality of Korn's type on an elliptic surface, see [5, Theorem 2.7-3] or [START_REF] Ciarlet | Asymptotic Analysis of Linearly Elastic Shells. I. Justification of Membrane Shells Equations[END_REF]Theorem 4.2]. Lemma 6. Let ω be a domain in R 2 and let X ∈ C 2,1 (ω; R 3 ) be an injective mapping such that the two vectors a α = ∂ α X are linearly independent at all points of ω and such that the surface X(ω) is elliptic. Then there is C M > 0 such that ∥v 1 ∥ 2 H 1 (ω) + ∥v 2 ∥ 2 H 1 (ω) + ∥v 3 ∥ 2 L 2 (ω) ≤ C M ∥γ(v)∥ 2 L 2 (ω;R 3×3 ) , v ∈ V M (ω). Proposition 7. Under Assumption 4, problem (3.18)-(3.20) has a unique solution {u, p 0 } in the space C([0, T ]; V M (ω) × L 2 (Ω)), ∂ z 3 p 0 ∈ L 2 ((0, T ) × Ω) Furthermore, ∂ t p 0 ∈ L 2 ((0, T ) × Ω) and ∂ t u ∈ L 2 (0, T ; V M (ω)). Proof. We follow the proof of Proposition 4 from [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF] and first prove that {u, p 0 } ∈ C([0, T ]; V M (ω)× L 2 (Ω) ) and ∂ z 3 p 0 ∈ L 2 ((0, T ) × Ω) imply a higher regularity in time. Ideas are analogous but details of the calculations are different. Next we take q = q(z 1 , z 2 ), q ∈ C ∞ (ω) as a test function in (3.19). The time continuity and (3.19) yield ( β + α 2 λ + 2 ) ∫ 1/2 -1/2 p 0 dz 3 + 2α λ + 2 A c : γ(u) = 0. (3.22) After inserting (3.22) into (3.18), it takes the form ∫ ω C(A c γ(u)) : γ(v)A c √ adz 1 dz 2 + 4α 2 ( λ + 2) ( β( λ + 2) + α 2 ) ∫ ω A c : γ(u)A c : γ(v) √ adz 1 dz 2 = ∫ ω (P + + P -) • v √ adz 1 dz 2 , v ∈ V M (ω). (3.23) The H 2 -regularity in time of P ± allows taking time derivatives of equation (3.23) up to order 2. It yields ∂ t u ∈ L 2 (0, T ; V M (ω)) and ∂ tt u ∈ L 2 (0, T ; V M (ω)). Hence ∂ t u ∈ H 1 (0, T ; V M (ω) ). Note that contrary to the flexural case from the proof of Proposition 4 from [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF], the equation for u is now of the 2nd order. For such u classical regularity theory for the second order linear parabolic equations applied at (3.19) implies ∂ t p 0 ∈ L 2 ((0, T ) × Ω). The d dt { ∫ ω C(A c γ(u)) : γ(u)A c √ adz 1 dz 2 + ∫ Ω ( β + α 2 λ + 2 ) (p 0 ) 2 √ adz -2 ∫ ω (P + + P -) • u √ adz 1 dz 2 } + ∫ Ω ( ∂p 0 ∂z 3 ) 2 √ adzdt = - ∫ ω ∂ t (P + + P -) • u √ adz 1 dz 2 ∓ ∫ Σ ± V p 0 √ a dz 1 dz 2 . ( 3 (u) in L ∞ (0, T ; V M (ω)), for p 0 in L ∞ (0, T ; L 2 (Ω)) and for ∂ z 3 p 0 in L 2 (0, T ; L 2 (Ω)). Using Lemma 6 and the classical weak compactness reasoning, we conclude the existence of at least one solution. Remark 8. Note that the equation (3.23) can be used to decouple the problem. Thus we first can solve the membrane problem with slightly changed coefficients and with time as a parameter in the equation. In the second step we plug this solution into (3.19). This approach can also lead to alternative existence proof. Namely, standard existence theory for membrane shell model applied on (3.23) yields that there is a unique u ∈ H 2 (0, T ; V M (ω)) solving (3.23). Then the standard parabolic theory for (3.19) implies the existence of p 0 in C([0, T ]; L 2 (Ω)) and such ∂ z 3 p 0 in L 2 (0, T ; L 2 (Ω)). Further regularity is standard. 
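Remark 8 suggests a practical two-step solution procedure for the limit model: first solve the modified membrane system (3.23) for u, with time entering only as a parameter, and then solve the parabolic equation (3.19) for p 0 , which at each fixed midsurface point is a one-dimensional diffusion problem in the transverse variable z 3 with flux (Neumann) boundary data. The sketch below illustrates only this second step, using an implicit Euler / centered finite-difference discretization. It is not code from the paper: the coefficient values, the forcing dt_membrane_strain (standing in for the whole term (2α/(λ̄+2)) ∂ t (A c : γ(u))) and the flux profile flux_V are placeholder assumptions, and the sign ∂p 0 /∂z 3 = -V on both faces is our reading of the boundary term in (3.19).

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's code): transverse pressure
# diffusion (3.19) at one fixed midsurface point, with the membrane coupling
# term treated as a known source of time.

alpha, beta_bar, lam_bar = 1.0, 1.0, 1.0     # effective stress coeff., scaled 1/Biot modulus, scaled Lame
c0 = beta_bar + alpha**2 / (lam_bar + 2.0)   # coefficient of d_t p^0 in (3.19)

N, T, M = 80, 1.0, 400                       # z3-cells, final time, time steps
h, dt = 1.0 / N, T / M
z3 = np.linspace(-0.5, 0.5, N + 1)

def dt_membrane_strain(t):                   # stand-in for (2 alpha/(lam+2)) d/dt (A^c : gamma(u))
    return 0.1 * np.cos(t)

def flux_V(t):                               # prescribed normal flux V(t), with V(0) = 0 (Assumption 4)
    return 0.05 * np.sin(t)

# Discrete 1D Laplacian in z3 with Neumann (flux) boundary rows (ghost-node trick).
L = np.zeros((N + 1, N + 1))
for j in range(1, N):
    L[j, j - 1:j + 2] = [1.0, -2.0, 1.0]
L[0, 0], L[0, 1] = -2.0, 2.0
L[N, N], L[N, N - 1] = -2.0, 2.0
L /= h**2

A = (c0 / dt) * np.eye(N + 1) - L            # implicit Euler matrix

p = np.zeros(N + 1)                          # p^0 = 0 at t = 0, cf. (3.20)
for n in range(M):
    t_new = (n + 1) * dt
    g = -flux_V(t_new)                       # d p0 / d z3 = -V on both faces
    rhs = (c0 / dt) * p - dt_membrane_strain(t_new) * np.ones(N + 1)
    rhs[0] += -2.0 * g / h                   # flux contributions from the ghost nodes
    rhs[N] += 2.0 * g / h
    p = np.linalg.solve(A, rhs)

print("mean pressure:", p.mean(), " fluctuation amplitude:", p.max() - p.min())
```

In a full computation the forcing would of course be obtained by first solving (3.23) for u; the mean pressure can then be cross-checked against the algebraic relation (3.22).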
Standards computations give The main result of the paper is the following theorem. p 0 - ∫ 1/2 -1/2 p 0 dz 3 = -V (t)y 3 + 4β π 2 ∞ ∑ m=0 ∫ t 0 e - (2m+1) 2 π 2 β (t-s) ∂ t V ds (-1) m (2m + 1 Theorem 9. Let us suppose Assumption 4. Let {u(ε), p(ε)} ∈ H 1 (0, T ; V(Ω)) × H 1 (0, T ; H 1 (Ω)) be the unique solution of (3.16) and let {u, p 0 } be the unique solution for (3.18)- (3.20). Then we obtain u(ε) → u strongly in C([0, T ]; H 1 (Ω) × H 1 (Ω) × L 2 (Ω)), γ ε (u(ε)) → γ 0 strongly in C([0, T ]; L 2 (Ω; R 3×3 )), p(ε) → p 0 strongly in C([0, T ]; L 2 (Ω)), ∂p(ε) ∂z 3 → ∂p 0 ∂z 3 strongly in L 2 (0, T ; L 2 (Ω)), where γ 0 =    γ(u) 0 0 0 0 α λ+2 p 0 -λ λ+2 A c : γ(u)    . (3.26) Remark 10. We observe differences between convergence theorem (Theorem 6) from [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF] and this result. The most notable difference is in the structure of γ 0 . As a consequence of the convergence of the term γ ε (u(ε)), we obtain the convergence of the scaled stress tensor. Corollary 11. For the stress tensor σ(ε) = C(Q(ε)γ ε (u(ε))Q(ε) T ) -αp(ε)I one has 1 ε σ(ε) → σ = C(Qγ 0 Q T ) -αp 0 I strongly in C([0, T ]; L 2 (Ω; R 3×3 )). (3.27) The limit stress in the local contravariant basis Q = (a 1 a 2 a 3 ) is given by Q T σQ =    ( - 2α λ + 2 p 0 I + 2 λ λ + 2 (A c : γ(u))I + 2A c γ(u) ) A c 0 0 0    . A priori estimates Fundamental for a priori estimates for elliptic membrane shells (clamped at all lateral surface and elliptic) is the following three-dimensional inequality of Korn's type for a family of linearly elastic elliptic membrane shells. Theorem 12 ([5, Theorem 4.3-1], [7, Theorem 4.1]). Assume that X ∈ C 3 (ω; R 3 ) parameterizes an elliptic surface. Then there exist constants ε 0 > 0, C > 0 such that for all ε ∈ (0, ε 0 ) one has ∥v 1 ∥ 2 H 1 (Ω) + ∥v 2 ∥ 2 H 1 (Ω) + ∥v 3 ∥ 2 L 2 (Ω) ≤ C∥γ ε (v)∥ 2 L 2 (Ω;R 3×3 ) , v ∈ V(Ω). Remark 13. Note that the above estimate applies to the functions on the three-dimensional domain Ω and will be the basis for the a priori estimates for the solution of (3.16), while in Lemma 6 the estimate was for functions on ω ad is the basis for the existence and uniqueness of the solution of the limit model (3.18)- (3.20). Next we state the asymptotic properties of the coefficients in the equation (3.16). Direct calculation shows that there are constants m g , M g , independent of ε ∈ (0, ε 0 ), such that for all z ∈ Ω, m g ≤ √ g(ε) ≤ M g . (4.1) The functions g i (ε), g i (ε), g ij (ε), g(ε), Γ i jk (ε), Q(ε) are in C(Ω) by assumptions. Moreover, there is a constant C > 0 such that for all ε ∈ (0, ε 0 ), ∥g i (ε) -a i ∥ ∞ + ∥g i (ε) -a i ∥ ∞ ≤ Cε, ∥ ∂ ∂z 3 √ g(ε)∥ ∞ + ∥ √ g(ε) - √ a∥ ∞ ≤ Cε, (4.2) ∥Q(ε) -Q∥ ∞ ≤ Cε, ∥Γ i jk (ε) -Γ i jk (0)∥ ∞ ≤ Cε, where ∥ • ∥ ∞ is the norm in C(Ω). For proof see [START_REF] Ciarlet | Asymptotic Analysis of Linearly Elastic Shells. I. Justification of Membrane Shells Equations[END_REF][START_REF] Ciarlet | Asymptotic analysis of linearly elastic shells. II. Justification of flexural shell equations[END_REF]. In addition, in [5, Theorem 3.3-1] and [8, Lemma 3.1] the asymptotic of the Christoffel symbols is given by Γ κ (ε) =   Γ κ 11 Γ κ 12 -b κ 1 Γ κ 21 Γ κ 22 -b κ 2 -b κ 1 -b κ 2 0   + O(ε), ( 4.3) where κ = 1, 2 and Γ 3 (ε) =   b 11 b 12 0 b 21 b 22 0 0 0 0   +O(ε). (4.4) In the following two lemmas we derive the a priori estimates in a classical way. The estimates are similar, but different from the flexural case. 
Namely, the scaling of γ ε (u(ε)) is different. Lemma 14. There is C > 0 and ε 0 > 0 such that for all ε ∈ (0, ε 0 ) one has ∥γ ε (u(ε))∥ L ∞ (0,T ;L 2 (Ω;R 3×3 )) , ∥p(ε)∥ L ∞ (0,T ;L 2 (Ω;R)) , ∥ε∇ ε p(ε)∥ L 2 (0,T ;L 2 (Ω;R 3 )) ≤ C. Proof. We set v = ∂u(ε) ∂t and q = p(ε) in (3.16) and sum up the equations. After noticing that the pressure term from the first equation cancels with the compression term from the second equation we obtain 1 2 ε d dt ∫ Ω C ( Q(ε)γ ε (u(ε))Q(ε) T ) : ( Q(ε)γ ε (u(ε))Q(ε) T ) √ g(ε)dz + 1 2 βε d dt ∫ Ω p(ε) 2 √ g(ε)dz + ε 3 ∫ Ω Q(ε)∇ ε p(ε) • Q(ε)∇ ε p(ε) √ g(ε)dz = ε ∫ Σ ± P ± • ∂u(ε) ∂t √ g(ε)ds ∓ ε ∫ Σ ± V p(ε) √ g(ε)ds. Dividing the equation by ε and using the product rule for derivatives with respect to time on the right hand side yield 1 2 d dt (∫ Ω C ( Q(ε)γ ε (u(ε))Q(ε) T ) : ( Q(ε)γ ε (u(ε))Q(ε) T ) √ g(ε)dz + β ∫ Ω p(ε) 2 √ g(ε)dz ) + ε 2 ∫ Ω Q(ε)∇ ε p(ε) • Q(ε)∇ ε p(ε) √ g(ε)dz = d dt ∫ Σ ± P ± • u(ε) √ g(ε)ds - ∫ Σ ± ∂P ± ∂t • u(ε) √ g(ε)ds ∓ ∫ Σ ± V p(ε) √ g(ε)ds. Now we use the Newton-Leibnitz formula for the terms on the right hand side and the notation P = (P + + P -)z 3 + P + -P - 2 , V = 2V z 3 to obtain 1 2 d dt (∫ Ω C ( Q(ε)γ ε (u(ε))Q(ε) T ) : ( Q(ε)γ ε (u(ε))Q(ε) T ) √ g(ε)dz + β ∫ Ω p(ε) 2 √ g(ε)dz ) + ε 2 ∫ Ω Q(ε)∇ ε p(ε) • Q(ε)∇ ε p(ε) √ g(ε)dz = d dt ∫ Ω ∂ ∂z 3 (P • u(ε) √ g(ε))dz - ∫ Ω ∂ ∂z 3 ( ∂P ∂t • u(ε) √ g(ε) ) dz - ∫ Ω ∂ ∂z 3 ( Vp(ε) √ g(ε) ) dz. Next we integrate this inequality over time 1 2 ∫ Ω C ( Q(ε)γ ε (u(ε))Q(ε) T ) : ( Q(ε)γ ε (u(ε))Q(ε) T ) √ g(ε)dz + 1 2 β ∫ Ω p(ε) 2 √ g(ε)dz + ε 2 ∫ t 0 ∫ Ω Q(ε)∇ ε p(ε) • Q(ε)∇ ε p(ε) √ g(ε)dzdτ = 1 2 (∫ Ω C ( Q(ε)γ ε (u(ε)| t=0 )Q(ε) T ) : ( Q(ε)γ ε (u(ε)| t=0 )Q(ε) T ) √ g(ε)dz + β ∫ Ω p(ε) 2 | t=0 √ g(ε)dz ) + ∫ Ω ∂ ∂z 3 (P • u(ε) √ g(ε))dz - ∫ Ω ∂ ∂z 3 (P| t=0 • u(ε)| t=0 √ g(ε))dz - ∫ t 0 ∫ Ω ∂ ∂z 3 ( ∂P ∂t • u(ε) √ g(ε) ) dzdτ + ∫ t 0 ∫ Ω ∂ ∂z 3 ( Vp(ε) √ g(ε) ) dzdτ. (4.5) Since we have sufficient time regularity for u(ε), we consider (3.16) for t = 0. Then u(ε)| t=0 satisfies: for all v ∈ V(Ω) ∫ Ω C ( Q(ε)γ ε (u(ε)| t=0 )Q(ε) T ) : ( Q(ε)γ ε (v)Q(ε) T ) √ g(ε)dz -α ∫ Ω p(ε)| t=0 tr ( Q(ε)γ ε (v)Q(ε) T ) √ g(ε)dz = ∫ Σ ± P ± | t=0 • v √ g(ε)ds. Since the initial condition is p(ε)| t=0 = 0 this equation is a classical 3D equation of shell-like body in curvilinear coordinates rescaled on the canonical domain. Next, P ± | t=0 = 0 and the classical theory (see Ciarlet [START_REF] Ciarlet | Mathematical elasticity[END_REF]) yields u(ε)| t=0 = 0. Using Korn's inequality, positivity of C and uniform positivity of Q(ε) T Q(ε) and g(ε) in (4.5) yields the estimate 1 2 ∫ Ω C ( Q(ε)γ ε (u(ε))Q(ε) T ) : ( Q(ε)γ ε (u(ε))Q(ε) T ) √ g(ε)dz + 1 2 β ∫ Ω p(ε) 2 √ g(ε)dz + ε 2 ∫ t 0 ∫ Ω Q(ε)∇ ε p(ε) • Q(ε)∇ ε p(ε) √ g(ε)dzdτ ≤ C. Since C is positive definite and since g(ε) is uniformly positive definite (see [START_REF] Ciarlet | Mathematical elasticity[END_REF]) we obtain the following uniform bounds ∥Q(ε)γ ε (u(ε))Q(ε) T ∥ L ∞ (0,T ;L 2 (Ω;R 3×3 )) , ∥p(ε)∥ L ∞ (0,T ;L 2 (Ω)) , ∥εQ(ε)∇ ε p(ε)∥ L 2 (0,T ;L 2 (Ω;R 3 )) . Since Q(ε) T Q(ε) is uniformly positive definite these estimates imply uniform bounds for ∥γ ε (u(ε))Q(ε) T ∥ L ∞ (0,T ;L 2 (Ω;R 3×3 )) , ∥p(ε)∥ L ∞ (0,T ;L 2 (Ω)) , ∥ε∇ ε p(ε)∥ L 2 (0,T ;L 2 (Ω;R 3 )) . Applying the uniform bounds for Q(ε) T Q(ε) once again implies the statement of the lemma. We now first take the time derivative of the first equation in (3.16) and then insert v = ∂u(ε) ∂t as a test functions. 
Then we take q = ∂p(ε) ∂t as test functions in the second equation in (3.16) and sum the equations. We obtain ∫ Ω C ( Q(ε)γ ε ( ∂u(ε) ∂t )Q(ε) T ) : ( Q(ε)γ ε ( ∂u(ε) ∂t )Q(ε) T ) √ g(ε)dz + β ∫ Ω ∂p(ε) ∂t ∂p(ε) ∂t √ g(ε)dz + 1 2 ε 2 d dt ∫ Ω Q(ε)∇ ε p(ε) • Q(ε)∇ ε p(ε) √ g(ε)dz = ∫ Σ ± ∂P ± ∂t • ∂u(ε) ∂t √ g(ε)ds ∓ ∫ Σ ± V ∂p(ε) ∂t √ g(ε)ds. (4.6) Similarly as in Lemma 14 from this equality we obtain Lemma 15. There is C > 0 and ε 0 > 0 such that for all ε ∈ (0, ε 0 ) one has ∥γ ε ( ∂u(ε) ∂t )∥ L 2 (0,T ;L 2 (Ω;R 3×3 )) , ∥ ∂p(ε) ∂t ∥ L 2 (0,T ;L 2 (Ω;R)) , ∥ε∇ ε p(ε)∥ L ∞ (0,T ;L 2 (Ω;R 3 )) ≤ C. As a consequence of the scaled Korn's inequality from Theorem 12 we obtain Corollary 16. Let us suppose Assumption 4 and let {u(ε), p(ε)} be the solution for problem (3.16). Then there is C > 0 and ε 0 > 0 such that for all ε ∈ (0, ε 0 ) one has ∥γ ε (u(ε))∥ H 1 (0,T ;L 2 (Ω;R 9 )) , ∥u 1 (ε)∥ H 1 (0,T ;H 1 (Ω)) , ∥u 2 (ε)∥ H 1 (0,T ;H 1 (Ω)) , ∥u 3 (ε)∥ H 1 (0,T ;L 2 (Ω)) , ∥p(ε)∥ H 1 (0,T ;L 2 (Ω;R)) , ∥ ∂p(ε) ∂z 3 ∥ L ∞ (0,T ;L 2 (Ω;R)) ≤ C. In addition, there are u 1 , u 2 ∈ H 1 (0, T ; H 1 (Ω; R 3 )), u 3 ∈ H 1 (0, T ; L 2 (Ω; R 3 )), p 0 ∈ H 1 (0, T ; L 2 (Ω; R)) and γ 0 ∈ L ∞ (0, T ; L 2 (Ω; R 3×3 )) such that on a subsequence one has u j (ε) ⇀ u j weakly in H 1 (0, T ; H 1 (Ω)), j = 1, 2, u 3 (ε) ⇀ u 3 weakly in H 1 (0, T ; L 2 (Ω)), p(ε) ⇀ p 0 weakly in H 1 (0, T ; L 2 (Ω; R)), ∂p(ε) ∂z 3 ⇀ ∂p 0 ∂z 3 weakly in L 2 (0, T ; L 2 (Ω; R)) and weak * in L ∞ (0, T ; L 2 (Ω; R)), γ ε (u(ε)) ⇀ γ 0 weakly in H 1 (0, T ; L 2 (Ω; R 3×3 )). (4.7) Proof. Straightforward. Since γ ε (u(ε)) depends on u(ε) one expects that the limits u= (u 1 , u 2 , u 3 ) and γ 0 are related. The following theorem gives the precise relationship. The following theorem is fundamental for obtaining the limit model in classical elliptic membrane shell derivation as well as in the present derivation. Its proof is a collection of particular statements in the proof of Theorem 4.4-1 in [START_REF] Ciarlet | Mathematical elasticity[END_REF]. Therefore we just sketch it here. Theorem 17. For any v ∈ V(Ω) let γ ε (v) be given by (3.17) and let the tensor γ(v) be given by (2.5). Let the family (w(ε)) ε>0 ⊂ V(Ω) satisfies w j (ε) ⇀ w j weakly in H 1 (Ω), j = 1, 2, w 3 (ε) ⇀ w 3 weakly in L 2 (Ω), γ ε (w(ε)) ⇀ γ0 weakly in L 2 (Ω; R 3×3 ) (4.8) as ε → 0. Then w= (w 1 , w 2 , w 3 ) is independent of transverse variable z 3 , belongs to V M (ω) = H 1 0 (ω) × H 1 0 (ω) × L 2 (ω) , and satisfies γ0 αβ = γ αβ (w), α, β ∈ {1, 2}. Proof. From γ ε (w(ε)) ⇀ γ0 we obtain that εγ ε (w(ε)) = γ z (w(ε)) + εγ y (w(ε)) -ε 3 ∑ i=1 w i (ε)Γ i (ε) → 0 strongly in L 2 (Ω; R 3 ). From the convergences in (4.8) and asymptotics of Γ i (ε) given in (4.3) and (4.4) we have that ε 3 ∑ i=1 w i (ε)Γ i (ε) → 0 strongly in L 2 (Ω; R 3×3 ). Also (4.8) implies that εγ y (w(ε)) → 0 strongly in H -1 (Ω; R 3×3 ). Finally γ z (w(ε)) → 0 strongly in L 2 (ω; H -1 (-1/2, 1/2; R 3×3 )) and weakly in L 2 (Ω; R 3×3 ). From the definition of γ z in (3.17) we obtain that ∂ 3 w i (ε) → 0 strongly in H -1 (Ω). Therefore w is independent of the transverse variable z 3 . Then it is straightforward to conclude that w ∈ V M (ω). Now, the convergences γ ε αβ (w(ε)) ⇀ γ0 αβ , α, β ∈ {1, 2} from (4.8), using the definition of γ ε from (3.17), imply γ0 αβ = lim ε→0 ( 1 2 (∂ α w β (ε) + ∂ β w α (ε)) - 3 ∑ i=1 w i (ε)Γ i αβ (ε) ) . 
Using the asymptotics of Γ i (ε) from (4.3) and (4.4) together with the remaining convergences in (4.8) yield γ0 αβ = 1 2 (∂ α w β + ∂ β w α ) - 3 ∑ i=1 w i Γ i αβ (0) = γ αβ (w). Remark 18. In order to apply Theorem 17 we need pointwise convergences for every t ∈ [0, T ]. The estimates from Corollary 16 (i.e., Lemma 14 and Lemma 15) imply that we are in the same position as in Remark 14 from [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF] for u 1 (ε), u 2 (ε), p(ε) and γ ε (u(ε)), i.e. u α (ε)(t) ⇀ u α (t) weakly in H 1 (Ω) for every t ∈ [0, T ], α ∈ {1, 2}, p(ε)(t) ⇀ p 0 (t) weakly in L 2 (Ω), γ ε (u(ε))(t) ⇀ γ 0 (t) weakly in L 2 (Ω; R 3×3 ), for every t ∈ [0, T ]. In the case of u 3 (ε) we argue similarly. Corollary 16 implies that u 3 (ε) is uniformly bounded in C 0,1/2 ([0, T ], L 2 (Ω)). Therefore by the Aubin-Lions lemma (see [START_REF] Simon | Compact sets in the space L p (0, T ; B)[END_REF]), there is a subsequence such that the {u 3 (ε)} converges to u 3 also in C([0, T ]; H -1 (Ω)). Let φ ∈ L 2 (Ω). Then for every δ > 0, there exists φ δ ∈ C ∞ 0 (Ω) such that ∥φ -φ δ ∥ L 2 (Ω) ≤ δ. Then sup 0≤t≤T | ∫ Ω (u 3 (ε)(t) -u 3 (t)) φ dx| ≤ sup 0≤t≤T | ∫ Ω (u 3 (ε)(t) -u 3 (t)) (φ -φ δ ) dx| + sup 0≤t≤T | ∫ Ω φ δ (u 3 (ε)(t) -u 3 (t)) dx| ≤ δ∥u 3 (ε) -u 3 ∥ C([0,T ];L 2 (Ω;R 3 )) + ∥φ δ ∥ H 1 (Ω) ∥u 3 (ε) -u 3 ∥ C([0,T ];H -1 (Ω;R 3 )) ≤ Cδ, (4.9) for ε ≤ ε 0 (δ). Therefore lim ε→0 sup 0≤t≤T | ∫ Ω (u 3 (ε)(t) -u 3 (t)) φ dx| ≤ Cδ, which yields u 3 (ε)(t) ⇀ u 3 (t) weakly in L 2 (Ω; R 3 ) for every t ∈ [0, T ]. (4.10) Thus we may apply Theorem 17, with w(ε) = u(ε)(t), for each t ∈ [0, T ] and conclude that the limit points of {u(ε)(t)} belong to V M (ω). Moreover we conclude that γ 0 αβ = γ αβ (u), α, β ∈ {1, 2}. Derivation of the limit model In this section we derive the limit model in two steps by taking the limit in (3.16) for two choices of test functions. Then in the Step 3 we prove the strong convergence of the strain and pressure. Finally in the Step 4 we prove the strong convergence of displacements. Step 1 (Identification of γ 0 i3 ).We take the limit as ε → 0 in the first equation in (3.16) divided by ε and obtain ∫ Ω C ( Q(0)γ 0 Q(0) T ) : ( Q(0)γ z (v)Q(0) T ) √ g(0)dz -α ∫ Ω p 0 tr ( Q(0)γ z (v)Q(0) T ) √ g(0)dz = 0, v ∈ V(Ω), a.e. t ∈ [0, T ]. Using Q(0) = Q and g(0) = a, and the definition of γ z and the function space V(Ω) yield ( Q T ( C ( Qγ 0 Q T ) -αp 0 I ) Q ) i3 = 0, i = 1, 2, 3. This implies (( λ tr ( Qγ 0 Q T ) -αp 0 ) Q T Q + 2Q T Qγ 0 Q T Q ) i3 = 0, i = 1, 2, 3. Since Q T Q = [ A c 0 0 1 ] (5.1) we obtain expressions of the third column of γ 0 in terms of the rest of elements (Q T Qγ 0 ) 13 = (Q T Qγ 0 ) 23 = λ tr ( Q T Qγ 0 ) -αp 0 + 2γ 0 33 = 0. (5.2) The first two equations imply that A c [ γ 0 13 γ 0 23 ] = 0 and since A c is positive definite we obtain that γ 0 13 = γ 0 31 = γ 0 23 = γ 0 32 = 0. From the third equation in (5.2) we get λA c : [ γ 0 11 γ 0 12 γ 0 12 γ 0 22 ] -αp 0 + ( λ + 2)γ 0 33 = 0. Thus we have obtained the following result. Up to here we followed pages 382-383 from [START_REF] Mikelić | Derivation of a poroelastic flexural shell model[END_REF]. Lemma 19. γ 0 13 = γ 0 31 = γ 0 23 = γ 0 32 = 0, γ 0 33 = α λ + 2 p 0 - λ λ + 2 A c : γ(u). From this lemma and Theorem 17 we have that γ 0 is of the following form γ 0 =    γ(u) 0 0 0 0 α λ+2 p 0 -λ λ+2 A c : γ(u)    . (5.3) Step 2 (Taking the second limit). 
Now we take the limit in (3.16), after division of both equations by ε, for test functions independent of the transversal variable z 3 , i.e., v ∈ H 1 0 (ω; R 3 ), such that γ z (v) = 0. Thus γ ε (v) = γ y (v) - ∑ 3 i=1 v i Γ i (0). The equations are ∫ Ω C ( Q(ε)γ ε (u(ε))Q(ε) T ) : ( Q(ε)γ ε (v)Q(ε) T ) √ g(ε)dz -α ∫ Ω p(ε) tr ( Q(ε)γ ε (v)Q(ε) T ) √ g(ε)dz = ∫ Σ ± P ± • v √ g(ε)ds, v ∈ H 1 0 (ω; R 3 ), a.e. t ∈ [0, T ], ∫ Ω β ∂p(ε) ∂t q √ g(ε)dz + ∫ Ω α ∂ ∂t tr ( Q(ε)γ ε (u(ε))Q(ε) T ) q √ g(ε)dz + ε 2 ∫ Ω Q(ε)∇ ε p(ε) • Q(ε)∇ ε q √ g(ε)dz = ∓ ∫ Σ ± V q √ g(ε)ds, q ∈ H 1 (Ω). In the limit when ε → 0 we obtain ∫ Ω C ( Qγ 0 Q T ) : ( Q(γ y (v) - 3 ∑ i=1 v i Γ i (0))Q T ) √ adz -α ∫ Ω p 0 tr ( Q(γ y (v) - 3 ∑ i=1 v i Γ i (0))Q T ) √ adz = ∫ Σ ± P ± • v √ ads, v ∈ H 1 0 (ω; R 3 ), a.e. t ∈ [0, T ], ∫ Ω β ∂p 0 ∂t q √ adz + ∫ Ω α ∂ ∂t tr ( Qγ 0 Q T ) q √ adz + ∫ Ω ∂p 0 ∂z 3 Qe 3 • ∂q ∂z 3 Qe 3 √ adz = ∓ ∫ Σ ± V q √ ads, q ∈ H 1 (Ω). (5.4) Note that Qe 3 • Qe 3 = 1. Since γ y (v) -v i Γ i (0) =   γ(v) 1 2 ∂ 1 v 3 + ∑ 2 σ=1 v σ b σ 1 1 2 ∂ 2 v 3 + ∑ 2 σ=1 v σ b σ 2 1 2 ∂ 1 v 3 + ∑ 2 σ=1 v σ b σ 1 1 2 ∂ 2 v 3 + ∑ 2 σ=1 v σ b σ 2 0   using (5.1) we get tr (Q(γ y (v) - 3 ∑ i=1 v i Γ i (0))Q T ) = tr (Q T Q(γ y (v) - 3 ∑ i=1 v i Γ i (0))) = A c : γ(v). ( 5.5) Next, using Lemma 19 yields tr (Qγ 0 Q T ) = tr (Q T Qγ 0 ) = A c : γ(u) + γ 0 33 = A c : γ(u) + α λ + 2 p 0 - λ λ + 2 A c : γ(u) = 2 λ + 2 A c : γ(u) + α λ + 2 p 0 . (5.6) Further, using (5.1) and Lemma 19 we compute Q T Qγ 0 Q T Q = [ A c γ(u)A c 0 0 γ 0 33 ] = [ A c γ(u)A c 0 0 α λ+2 p 0 -λ λ+2 A c : γ(u) ] . (5.7) Now the main elastic term in the first equation in (5.4) is computed: ∫ Ω C ( Qγ 0 Q T ) : ( Q(γ y (v) - 3 ∑ i=1 v i Γ i (0))Q T ) √ adz = ∫ Ω λ tr ( Qγ 0 Q T ) tr ( Q(γ y (v) - 3 ∑ i=1 v i Γ i (0))Q T ) + 2Qγ 0 Q T : Q(γ y (v) - 3 ∑ i=1 v i Γ i (0))Q T √ adz = ∫ Ω λ ( 2 λ + 2 A c : γ(u) + α λ + 2 p 0 ) A c : γ(v) √ adz + ∫ Ω 2Q T Qγ 0 Q T Q : (γ y (v) - 3 ∑ i=1 v i Γ i (0)) √ adz = ∫ Ω 2 λ λ + 2 (A c : γ(u))(A c : γ(v)) + λα λ + 2 p 0 A c : γ(v) √ adz + ∫ Ω 2A c γ(u)A c : γ(v) √ adz. Let us define the tensor (it usually appears in plate and shell theories!) CE = 2 λ λ + 2 tr (E)I + 2E, E ∈ R 2×2 sym . The first equation in (5.4) now becomes: for all v ∈ H 1 0 (ω; R 3 ) one has ∫ ω C(A c γ(u)) : γ(v)A c √ adz 1 dz 2 + ∫ Ω λα λ + 2 p 0 A c : γ(v) √ adz -α ∫ Ω p 0 A c : γ(v) √ adz = ∫ ω (P + + P -) • v √ adz 1 dz 2 . By density of H 1 0 (ω; R 3 ) in V M (ω) the above equation implies: ∫ ω C(A c γ(u)) : γ(v)A c √ adz 1 dz 2 - 2α λ + 2 ∫ ω ∫ 1/2 -1/2 p 0 dz 3 A c : γ(v) √ adz 1 dz 2 = ∫ ω (P + + P -) • v √ adz 1 dz 2 ., v ∈ V M (ω). (5.8) Equation (5.8) is the classical equation of the membrane shell model with the addition of the pressure p 0 term. Using (5.6), the second equation in (5.4) can be now written by ∫ Ω ( β + α 2 λ + 2 ) ∂p 0 ∂t q √ adz + ∫ Ω α ∂ ∂t ( 2 λ + 2 A c : γ(u) ) q √ adz + ∫ Ω ∂p 0 ∂z 3 ∂q ∂z 3 √ adz = ∓ ∫ Σ ± V q √ ads, q ∈ H 1 (Ω). (5.9) The elliptic membrane poroelastic shell model is given by (5.8), (5.9). Step 3 (The strong convergences of strain and pressure). As mentioned in Remark 8 the limit problem can be decoupled and the displacement can be calculated independently of the pressure, by solving the elliptic membrane shell model with modified coefficients. 
However the same decoupling cannot be done for the three-dimensional problem and thus the result of the strong convergence for the classical elliptic membrane shell derivation, see [START_REF] Ciarlet | Mathematical elasticity[END_REF], cannot be applied directly to the poroelastic case. Hence we adapt the ideas from [START_REF] Ciarlet | Mathematical elasticity[END_REF]. We start with Λ(ε)(t) = 1 2 ∫ Ω C ( Q(ε) ( γ ε (u(ε))(t) -γ 0 (t) ) Q(ε) T ) : ( Q(ε) ( γ ε (u(ε))(t) -γ 0 (t) ) Q(ε) T ) √ g(ε) + 1 2 β ∫ Ω (p(ε)(t) -p 0 (t)) 2 √ g(ε)dz + ε 2 ∫ t 0 ∫ Ω (∇ ε p(ε) -∇ ε p 0 )Q(ε) T • (∇ ε p(ε) -∇ ε p 0 )Q(ε) T √ g(ε)dz. and we will show that Λ(ε)(t) → Λ(t) as ε tends to zero for all t ∈ [0, T ]. Since Λ(ε) ≥ 0 the Λ ≥ 0 as well. After some calculation we will show that actually Λ = 0. This will give the strong convergences in (4.7). Since we have only weak convergences in (4.7) we first remove quadratic terms in Λ(ε) using ∫ Ω C ( Q(ε)γ ε (u(ε))Q(ε) T ) : ( Q(ε)γ ε (u(ε))Q(ε) T ) √ g(ε)dz + 1 2 β ∫ Ω p(ε) 2 √ g(ε)dz + ε 2 ∫ t 0 ∫ Ω Q(ε)∇ ε p(ε) • Q(ε)∇ ε p(ε) √ g(ε)dzdτ = ∫ t 0 ∫ Σ ± P ± • ∂u(ε) ∂t √ g(ε)dsdτ ∓ ∫ t 0 ∫ Σ ± V p(ε) √ g(ε)dsdτ. Inserting this into the definition of Λ(ε) we obtain Λ(ε)(t) = ∫ t 0 ∫ Σ ± P ± • ∂u(ε) ∂t √ g(ε)dsdτ ∓ ∫ t 0 ∫ Σ ± V p(ε) √ g(ε)dsdτ - ∫ Ω C ( Q(ε)γ ε (u(ε))(t)Q(ε) T ) : ( Q(ε)γ 0 (t)Q(ε) T ) √ g(ε)dz -β ∫ Ω p(ε)p 0 √ g(ε)dz -2ε 2 ∫ t 0 ∫ Ω Q(ε)∇ ε p(ε)(t) • Q(ε)∇ ε p 0 (t) √ g(ε)dzdτ + 1 2 ∫ Ω C ( Q(ε)γ 0 Q(ε) T ) : ( Q(ε)γ 0 Q(ε) T ) √ g(ε)dz + 1 2 β ∫ Ω (p 0 ) 2 √ g(ε)dz + ε 2 ∫ t 0 ∫ Ω Q(ε)∇ ε p 0 • Q(ε)∇ ε p 0 √ g(ε)dzdτ. Now we take the limit as ε tends to zero and obtain that Λ(ε)(t) → Λ(t) ≥ 0, where Λ(t) = ∫ t 0 ∫ ω (P + + P -) • ∂u ∂t √ adsdτ ∓ ∫ t 0 ∫ Σ ± V p 0 √ adsdτ - 1 2 ∫ Ω C ( Qγ 0 (t)Q T ) : ( Qγ 0 (t)Q T ) √ adz - 1 2 β ∫ Ω p 0 (t) 2 √ adz - ∫ t 0 ∫ Ω ( ∂p 0 ∂z 3 ) 2 √ adzdτ. (5.10) We now insert ∂u ∂t as a test function in (5.8), p 0 in (5.9) and sum up the equations. The antisymmetric terms cancel out as before. Then we integrate the equation over time and use the initial conditions to obtain 1 2 ∫ ω C(A c γ(u(t))) : (γ(u(t))A c ) √ adz 1 dz 2 + 1 2 ∫ Ω ( β + α 2 λ + 2 ) (p 0 (t)) 2 √ adz + ∫ t 0 ∫ Ω ( ∂p 0 ∂z 3 ) 2 √ adzdτ = ∫ t 0 ∫ ω (P + + P -) • ∂u ∂t √ adsdτ ∓ ∫ t 0 ∫ Σ ± V p 0 √ adsdτ. Inserting the above equation into (5.10) yields Λ(t) = 1 2 ∫ ω C(A c γ(u(t))) : (γ(u(t))A c ) √ adz 1 dz 2 + 1 2 ∫ Ω α 2 λ + 2 (p 0 (t)) 2 √ adz - 1 2 ∫ Ω C ( Qγ 0 (t)Q T ) : ( Qγ 0 (t)Q T ) √ adz. (5.11) Next we compute the elastic energy using (5.3), (5.6), (5.7): ∫ Ω C ( Qγ 0 Q T ) : ( Qγ 0 Q T ) √ adz = ∫ Ω λ(tr ( Qγ 0 Q T ) ) 2 + 2Q T Qγ 0 Q T Q : γ 0 √ adz = ∫ Ω λ ( 2 λ + 2 A c : γ(u) + α λ + 2 p 0 ) 2 + 2 [ A c γ(u)A c 0 0 α λ+2 p 0 -λ λ+2 A c : γ(u) ] : γ 0 √ adz = ∫ Ω λ ( 2 λ + 2 A c : γ(u) + α λ + 2 p 0 ) 2 + 2   A c γ(u)A c : γ(u) + ( α λ + 2 p 0 - λ λ + 2 A c : γ(u) ) 2   √ adz = ∫ Ω ( 4 λ ( λ + 2) 2 (A c : γ(u)) 2 + 4 λα ( λ + 2) 2 A c : γ(u)p 0 + λα 2 ( λ + 2) 2 (p 0 ) 2 + 2A c γ(u)A c : γ(u) + 2α 2 ( λ + 2) 2 (p 0 ) 2 - 4 λα ( λ + 2) 2 A c : γ(u)p 0 + 2 λ2 ( λ + 2) 2 (A c : γ(u)) 2 ) √ adz = ∫ Ω 2 λ λ + 2 (A c : γ(u)) 2 + α 2 λ + 2 (p 0 ) 2 + 2A c γ(u)A c : γ(u) √ adz = ∫ Ω C(A c γ(u)) : γ(u)A c + α 2 λ + 2 (p 0 ) 2 √ adz. Inserting this into (5.11) we obtain that Λ(t) = 0. We now have Λ(ε)(t) → 0 for every t ∈ [0, T ]. 
Since Λ(ε) : [0, T ] → R is an equicontinuous family, we conclude strong convergences of the strain tensor and the pressure γ ε (u(ε)) → γ 0 strongly in C([0, T ]; L 2 (Ω; R 3×3 )), p(ε) → p 0 strongly in C([0, T ]; L 2 (Ω)), ∂p(ε) ∂z 3 → ∂p 0 ∂z 3 strongly in L 2 (0, T ; L 2 (Ω)). (5.12) Step 4 (The strong convergences of the displacements). The setting is more complicated than in the proof of Theorem 4.4-1 from [START_REF] Ciarlet | Mathematical elasticity[END_REF] because the problem is time dependent. The first convergence in (5.12) implies γ αβ (u(ε)) → γ αβ (u) strongly in C([0, T ]; L 2 (Ω)), α, β ∈ {1, 2}. (5.13) Let us now denote by • the operator of averaging over z 3 , i.e., v(•) = ∫ 1/2 -1/2 v(•, z 3 )dz 3 . Then from (5.13) we obtain γ αβ (u(ε)) → γ αβ (u) strongly in C([0, T ]; L 2 (ω)), α, β ∈ {1, 2}. Note that u = u since u is independent of z 3 . The inequality of Korn's type on an elliptic surface, see Lemma 6 ([5, Theorem 2.7-3]), for u(ε) -u implies ∥u 1 (ε) -u 1 ∥ 2 H 1 (ω) + ∥u 2 (ε) -u 2 ∥ 2 H 1 (ω) + ∥u 3 (ε) -u 3 ∥ 2 L 2 (ω) ≤ C M ∥γ(u(ε)) -γ(u)∥ 2 L 2 (Ω;R 3×3 ) . Application of (5.13) yields u α (ε) → u α strongly in C([0, T ]; H 1 (ω)), α ∈ {1, 2}, u 3 (ε) → u 3 strongly in C([0, T ]; L 2 (ω)). (5.14) Next we prove that u 3 (ε) → u 3 strongly in C([0, T ]; L 2 (Ω)). (5.15) From the Poincare type estimate |u 3 (ε)(•, z 3 ) -u 3 (ε)(•)| ≤ C √ ∫ 1/2 -1/2 (∂ 3 u 3 (ε)(•, y 3 )) 2 dy 3 we obtain ∥u 3 (ε) -u 3 (ε)∥ C([0,T ];L 2 (Ω)) ≤ C∥∂ 3 u 3 (ε)∥ C([0,T ];L 2 (Ω)) . From γ ε 33 (u(ε)) = 1 ε ∂ 3 u 3 (ε) → γ 0 33 strongly in C([0, T ]; L 2 (Ω)) we conclude ∂ 3 u 3 (ε) → 0 strongly in C([0, T ]; L 2 (Ω) ). Together with the last convergence in (5.14) this implies (5.15). In the remaining part of the proof we prove u α (ε) → u α strongly in C([0, T ]; H 1 (Ω)), α ∈ {1, 2}. (5.16) It follows from the Korn inequality and the Lions lemma (see [START_REF] Duvaut | Grundlehren der mathematischen Wissenschaften[END_REF]). Let us denote u ′ (ε) = (u 1 (ε), u 2 (ε), 0), u ′ = (u 1 , u 2 , 0). Then, using the Korn inequality, the convergence e(u ′ (ε)) → e(u ′ ) strongly in C([0, T ]; L 2 (Ω; R 3×3 )) (5.17) is equivalent to (5.16) (e(u) denotes the symmetrized gradient). Convergence in (5.17) will be obtained component by component. Since for α, β ∈ {1, 2} e αβ (u ′ (ε)) = γ ε αβ (u(ε)) + 3 ∑ i=1 u i (ε)Γ i αβ (ε), using the first convergence in (5.12) for γ ε αβ (u(ε)), Remark 18 for u 1 (ε) and u 2 (ε), (5.15) for u 3 (ε) and (4.3) and (4.4) we obtain e αβ (u ′ (ε)) → γ αβ (u) + 3 ∑ i=1 u i Γ i αβ (0) = e αβ (u ′ ) strongly in C([0, T ]; L 2 (Ω)), α, β = 1, 2. (5.18) Since e 33 (u ′ (ε)) = e 33 (u ′ ) = 0 the convergence of this component is trivial. For the convergence of e α3 (u ′ (ε)) = 1 2 ∂ 3 u α (ε) we apply the Lions lemma. Thus we first prove that ∂ 3 u α (ε), ∂ 13 u α (ε), ∂ 23 u α (ε), ∂ 33 u α (ε) → 0 strongly in C([0, T ]; H -1 (Ω)), α = 1, 2. (5.19) We start with the expression 1 ε ∂ 3 u α (ε) = 2γ ε α3 (u(ε)) -∂ α u 3 (ε) + 2 3 ∑ i=1 u i (ε)Γ i α3 (ε). (5.20) The first and the third term on the right hand side converge strongly in C([0, T ]; L 2 (Ω)), while the middle one converges only in C([0, T ]; H -1 (Ω)) by (5.15). Thus we obtain that ∂ 3 u α (ε) → 0 in C([0, T ]; H -1 (Ω)). Differentiating (5.20) with respect to z 3 yields 1 ε ∂ 33 u α (ε) = 2∂ 3 γ ε α3 (u(ε)) -∂ α ∂ 3 u 3 (ε) + 2 3 ∑ i=1 ∂ 3 (u i (ε)Γ i α3 (ε)). Since ∂ 3 u 3 (ε) → 0 in C([0, T ]; L 2 (Ω)) the convergence of ∂ 33 u α (ε) in C([0, T ]; H -1 (Ω)) is obtained. 
Differentiating (5.18) with respect to z 3 for α = β yields ∂ α3 u α (ε) = ∂ 3 e αα (u ′ (ε)) → ∂ 3 e αα (u ′ ) = 0 strongly in C([0, T ]; H -1 (Ω)), α = 1, 2. Since ∂ 13 u 2 (ε) = ε∂ 1 γ ε 23 (u(ε)) -ε∂ 2 γ ε 13 (u(ε)) + ∂ 3 e 12 (u ′ (ε)) + ε ( ∂ 1 2 ∑ τ =1 u τ (ε)Γ τ 23 (ε) -∂ 2 2 ∑ τ =1 u τ (ε)Γ τ 13 (ε) ) and since all terms on the right hand side strongly converge in C([0, T ]; H -1 (Ω)) we obtain the strong convergence of ∂ 13 u 2 (ε) in the same space. It then implies that the term ∂ 23 u 1 (ε) = 2∂ 3 γ ε 12 (u(ε)) -∂ 13 u 2 (ε) + 2∂ 3 3 ∑ i=1 u i (ε)Γ i 12 (ε) converges strongly in C([0, T ]; H -1 (Ω)). Thus we have proved (5.19). A consequence of Lions lemma is that the spaces v ∈ L 2 (Ω) ↔ (v, ∂ 1 v, ∂ 2 v, ∂ 3 v) ∈ H -1 (Ω) 4 are isomorphic. Therefore the spaces v ∈ C([0, T ]; L 2 (Ω)) ↔ (v, ∂ 1 v, ∂ 2 v, ∂ 3 v) ∈ C([0, T ]; H -1 (Ω)) 4 are also isomorphic. Therefore (5.19) implies that e α3 (u ′ (ε)) = 1 2 ∂ 3 u α (ε) → 0 strongly in C([0, T ]; L 2 (Ω)). Therefore we have proved (5.17 A Spherical surface Let ω = [0, ] . Now the displacement vector ṽ in the canonical coordinates is rewritten in the local basis ṽ = Qv = v 1 a 1 + v 2 a 2 + v 3 a 3 . Note that contravariant basis is different than the usual basis associated with the spherical coordinates. One has v 1 = R sin θv φ , v 2 = Rv θ , v 3 = -v r . Similarly, P± = Q -T P ± = (P ± ) 1 a 1 + (P ± ) 2 a 2 + (P ± ) 3 a 3 . Thus (P ± ) 1 = 1 R sin θ (P ± ) φ , (P ± ) 2 = 1 R (P ± ) θ , (P ± ) 3 = -(P ± ) r and P ± • v = (P ± ) 1 v 1 + (P ± ) 2 v 2 + (P ± ) 3 v 3 = (P ± ) φ v φ + (P ± ) θ v θ + (P ± ) r v r . Inserting the geometry coefficients into the strain γ we obtain a priori estimates used in Theorem 12. In the case of the whole sphere and radial solutions, a simple calculation gives γ(v) = [ ∂ 1 v 1 - ∑ 2 κ=1 Γ κ 11 v κ -b 11 v 3 1 2 (∂ 1 v 2 + ∂ 2 v 1 ) - ∑ 2 κ=1 Γ κ 12 v κ -b 12 v 3 1 2 (∂ 2 v 1 + ∂ 1 v 2 ) - ∑ 2 κ=1 Γ κ 21 v κ -b 21 v 3 ∂ 2 v 2 - ∑ 2 ∥γ ε (v)∥ 2 L 2 (Ω;R 3×3 ) = ∫ 2π 0 ∫ π 0 ∫ 1/2 -1/2 ( (R sin 2 θ -εz 3 ) 4 v 2 3 + (R -εz 3 ) 2 v 2 3 + 1 ε 2 ∂ 3 v 3 (z 3 ) 2 ) dz 3 dθdφ ≥ ∫ 2π 0 ∫ π 0 ∫ 1/2 -1/2 ( R 2 /4v 2 3 + 1 ε 2 ∂ 3 v 3 (z 3 ) 2 ) dz 3 dθdφ ≥ C 2 ∥v 3 ∥ 2 H 1 (Ω) , for R ≥ ε and 1 ≫ ε. Then the subsequent analysis follows as above and the limit model in this case can be obtained by specialization of the above equations for spherical geometry. Thus for loading depending only on time for the solution of the shell problem we obtain the following relation for u r and p 0 : This equation has the same structure as the equation in [START_REF] Taber | Poroelastic Plate and Shell Theories[END_REF] for spherical poroelastic membrane, however the coefficients are obviously not the same since in [START_REF] Taber | Poroelastic Plate and Shell Theories[END_REF] there are no inverse Biot's coefficient β and the effective stress coefficient α. In the constitutive law (2.6) Taber takes α = 1 and in (2.8) β seems to be 1 as well. Hence, our consideration establish rigorously the results from [START_REF] Taber | Poroelastic Plate and Shell Theories[END_REF]. 3 λ + 2 λ + 2 4u r -2R 2α λ + 2 ∫ 1/ . 24 ) 24 Equality(3.24) implies uniqueness of solutions to problem (3.18)-(3.20). Concerning existence, equality(3.24) allows to obtain the uniform bounds for γ 2 . 2 After inserting (3.22) into (3.18) for displacement we get a standard elastic membrane shell equations with modified coefficients. Then we use (3.22) to compute the mean pressure and finally (3.25) to reconstruct the pressure fluctuation. ( 3 . 
3 16) for v = ∂u(ε) ∂t and q = p(ε) divided by ε. Integration of (3.16) over time, using p(ε)| t=0 = 0 and u(ε)| t=0 = 0, implies 1 2 ) and then(5.16) follows by the Korn inequality. Thus we have proved the strong convergence of displacementsu(ε) → u strongly in C([0, T ]; H 1 (Ω) × H 1 (Ω) × L 2 (Ω)). d 1 ] 2 ]a 1σ b σ1 = a 11 b 11 a 2σ b σ2 = a 22 b 22 ForΓ 1 = 1211221 × [0, d 2 ], where d 1 ∈ (0, 2π], d 2 ∈ (0, π], with one of the strict inequalities d 1 < 2π or d 2 < π holding, and let (φ, θ) denotes the generic point in ω. Let R > 0. We define a spherical shell by the parametrizationX : ω → R 3 , X(φ, θ) = (R sin θ cos φ, R sin θ sin φ, R cos θ) T .Then the extended covariant basis of the shell S = X(ω) is given bya 1 (φ, θ) = ∂ φ X(φ, θ) = R(-sin θ sin φ, sin θ cos φ, 0) T , a 2 (φ, θ) = ∂ θ X(φ, θ) = R(cos θ cos φ, cos θ sin φ, -sin θ) T , a 3 (φ, θ) = a 1 (φ, θ) × a 2 (φ, θ) |a 1 (φ, θ) × a 2 (φ, θ)| = (-sin θ cos φ, -sin θ sin φ, -cos θ) T .The contravariant basis is biorthogonal and is given bya 1 (φ, θ) = 1 R (-sin φ/ sin θ, cos φ/ sin θ, 0) T , a 2 (φ, θ) = 1 R (cos θ cos φ, cos θ sin φ, -sin θ) T , a 3 (φ, θ) = (-sin θ cos φ, -sin θ sin φ, -cos θ) T .The covariant A c = (a αβ ) and contravariant A c = (a αβ ) metric tensors are respectively given by and the area element is now√ adS = √ det A c dS = R 2 sin θdS.The covariant and mixed components of the curvature tensor are now given byb 11 = a 3 • ∂ φ a 1 = R sin 2 θ, b 12 = a 3 • ∂ θ a 1 = 0, b 21 = a 3 • ∂ φ a 2 = 0, b 22 = a 3 • ∂ θ a 2 = R, Christoffel symbols Γ σ αβ = a σ • ∂ β a α one has 1 2 1 2 11 κ=1 Γ κ 22 v κ -b 22 v 3 ] = R [ sin θ∂ φ v φ + sin θ cos θv θ + sin 2 θv r (∂ φ v θ + sin θ∂ θ v φ ) -cos θv φ (sin θ∂ θ v φ + ∂ φ v θ ) -cos θv φ ∂ θ v θ + v r ] . (3 λ + 6) ∂ t ((P + ) r + (P -) r ) -∂ rr p 0 = 0, ∂ r p 0 | r=±1/2 = V, p 0 = 0 at t = 0. Table 1 : 1 Parameter and unknowns description SYMBOL QUANTITY µ shear modulus (Lamé's second parameter) λ Lamé's first parameter β G inverse of Biot's modulus α effective stress coefficient k permeability η viscosity L and ℓ midsurface length and shell width, respectively ε = ℓ/L small parameter T = ηL 2 c /(kµ) characteristic Terzaghi's time U characteristic displacement P = U µ/L characteristic fluid pressure u = (u 1 , u 2 , u 3 ) solid phase displacement p pressure existence and the uniqueness are based on the energy estimate. If we choose v = ∂u ∂t test function in (3.18) and p 0 as a test function in (3.19) and sum up the equations we obtain the as a equality 1 2 2 -1/2 p 0 dr = ((P + ) r + (P -) r )R 2 . ∂ t u r -R 2 ∂ rr p 0 = 0, ∂ r p 0 | r=±1/2 = V, p 0 = 0 at t = 0.Here we calculate u r from (A.21) with ∂ t ∫ 1/2 -1/2 p 0 dr replaced using (A.23) and obtain the boundary value problem for p 0 (A.21) d dt V (q(-1/2) -q(1/2))R 2 ∫ 1/2 -1/2 ( β + α 2 λ + 2 ) p 0 qR 2 dr + 2R in D ′ (0, T ), 2α λ + 2 ∫ 1/2 -1/2 q ∈ H 1 (-1/2, 1/2), ∂ t u r qdr + R 2 ∫ 1/2 -1/2 ∂p 0 ∂r ∂q ∂r dr = (A.22) p 0 = 0 at t = 0. From (A.22) for constant test functions we obtain ( β + ∂ ∂ t α 2 λ + 2 ) ∂ t ∫ 1/2 -1/2 p 0 R 2 dr + 2R 2α λ + 2 ∫ 1/2 -1/2 p 0 = -Rα λ+2 β(3 λ + 2) + α 2 3 λ+6 ∂ (A.23) After partial integration in (A.22), we obtain ( β + α 2 λ + 2 ) ∂ t p 0 R 2 + 2R 2α λ + 2 ( β + α 2 λ + 2 ) ∂ t p 0 + R α(β( λ + 2) + α 2 ) β(3 λ + 2)( λ + 2) + α 2 t u r = 0. Inserting u r from(A.21), after some calculations, we obtain t ((P + ) r + (P -) r ). B Acknowledgements The research of A.M. 
was supported in part by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). (sin θ∂ φ u φ + sin θ cos θu θ + sin 2 θu r ) + (∂ θ u θ + u r ) ) . The insertion of the above expression for C into (3.23) gives the equations of the spherical membrane shell. They read: find ), satisfying the system Then ∫ 1/2 -1/2 p 0 dr is calculated from (3.22) and the fluctuation of the pressure across the thickness can be calculated from (3.25). Remark 20. In the case of the whole sphere we are not in the elliptic membrane case since the boundary conditions are different. However if we assume that the body is loaded radially (i.e. P ± , V are functions of time only) we can search for the radial solution of the three-dimensional problem (3.16) (i.e. u φ = u θ = 0 and u r is independent of φ and θ). The asymptotic analysis is based on the
67,408
[ "2599" ]
[ "521754", "444777" ]
01488409
en
[ "info" ]
2024/03/04 23:41:48
2017
https://inria.hal.science/hal-01488409/file/RR-9042.pdf
Aiman Fang Aurélien Cavelan Yves Robert Andrew A Chien Resilience for Stencil Computations with Latent Errors Keywords: resilience, fail-stop errors, multi-level checkpointing, optimal pattern Projections and measurements of error rates in near-exascale and exascale systems suggest a dramatic growth, due to extreme scale (10 9 cores), concurrency, software complexity, and deep submicron transistor scaling. Such a growth makes resilience a critical concern, and may increase the incidence of errors that "escape", silently corrupting application state. Such errors can often be revealed by application software tests but with long latencies, and thus are known as latent errors. We explore how to efficiently recover from latent errors, with an approach called application-based focused recovery (ABFR). Specifically we present a case study of stencil computations, a widely useful computational structure, showing how ABFR focuses recovery effort where needed, using intelligent testing and pruning to reduce recovery effort, and enables recovery effort to be overlapped with application computation. We analyze and characterize the ABFR approach on stencils, creating a performance model parameterized by error rate and detection interval (latency). We compare projections from the model to experimental results with the Chombo stencil application, validating the model and showing that ABFR on stencil can achieve a significant reductions in error recovery cost (up to 400x) and recovery latency (up to 4x). Such reductions enable efficient execution at scale with high latent error rates. Résilience pour des calculs de type "stencil" avec des erreurs latentes Résumé : Les projections et mesures pour les systèmes exascale (10 9 coeurs) suggèrent une augmentation très importante du taux d'erreur. Une telle augmentation fait de la résilience un sujet critique, et risque d'aggraver l'impact des erreurs qui "s'échappent", corrompant silencieusement la mémoire. Ces erreurs sont souvent détectées par des tests logiciels au niveau de l'application, mais avec une latence de détection importante, et sont donc connues sous le nom d'erreurs latentes. Nous explorons une approche appelée application-based-focus-recovery, ou ABFR, afin de relancer l'exécution efficacement, suit à une erreur. En particulier, nous présentons une étude de cas pour les applications de type stencil, montrant comment ABFR concentre les calculs de récupération où ils sont nécessaire, utilisant des tests et des élagages intelligents pour réduire les calculs de récupération, et permettre le recouvrement avec les calculs de l'application. Nous analysons et caractérisons l'approche ABFR pour les applications de type stencil, créant un modèle de performance paramétré par le taux d'erreur et l'interval de détection (la latence). Nous comparons les projections du modèle aux résultats expérimentaux avec l'application stencil Chombo, validant le modèle et montrant que ABFR permet d'obtenir une réduction significative du coût de récupération (jusqu'à 400x) et de la latence (jusqu'à 4x). De telles réductions de coût permettent de passer à l'échelle avec des taux d'erreurs latentes élevés. Mots-clés : résilience, erreurs latentes, stencil, ABFR. Introduction Large-scale computing is essential for addressing scientific and engineering challenges in many areas. To meet these needs, supercomputers have grown rapidly in scale and complexity. 
They typically consist of millions of components [START_REF] Bergman | Exascale computing study: Technology challenges in achieving exascale systems[END_REF], with growing complexity of software services [START_REF] Amarasinghe | Exascale software study: Software challenges in extreme scale systems[END_REF]. In such systems, errors come from both software and hardware [START_REF] Martino | Lessons learned from the analysis of system failures at petascale: The case of blue waters[END_REF][START_REF] Martino | Measuring and understanding extreme-scale application resilience: A field study of 5,000,000 hpc application runs[END_REF]; both hardwarecorrectable errors and latent (or so-called silent) errors [START_REF] Cappello | Toward exascale resilience: 2014 update. Supercomput[END_REF][START_REF] Lu | When is multi-version checkpointing needed?[END_REF] are projected to increase significantly, producing mean time between failure (MTBF) as low as a few minutes [START_REF] Cappello | Fault tolerance in petascale/exascale systems: Current knowledge, challenges and research opportunities[END_REF][START_REF] Snir | Addressing failures in exascale computing[END_REF]. Latent errors are detected as data corruption, but some time after their occurrence. We focus on latent errors, that escape simple system level detection such as error-correction in memory, and can only be exposed by sophisticated application, algorithm, and domain-semantics checks [START_REF] Chen | Online-abft: An online algorithm based fault tolerance scheme for soft error detection in iterative methods[END_REF][START_REF] Huang | Algorithm-based fault tolerance for matrix operations[END_REF]. These errors are of particular concern, since their data corruption, if undetected and uncorrected, threatens the validity of computational (and scientific) results. Such latent errors can be exposed by sophisticated software level checks, but such checking is often computationally expensive, so it must be infrequent. We use the term "detection latency" to denote the time from error occurrence to detection, which may be 10 3 (thousands) to 10 9 (billions) of cycles. This delay allows corrupting a range of computation data. Thus, we detect the resulting data corruption, rather than the original error. Checkpoint-Restart (CR) is a widely-used fault tolerance technique, where resilience is achieved by writing periodic checkpoints, and using rollback and recovery in case of failure. Rising error rates require frequent checkpoints for efficient execution, and fortunately new, low-cost techniques have emerged [START_REF] Cappello | Toward exascale resilience: 2014 update. Supercomput[END_REF][START_REF] Daly | A higher order estimate of the optimum checkpoint interval for restart dumps[END_REF]. Paradoxically, more frequent checkpoint increase the challenge with latent errors, as each checkpoint must be checked for errors as well. As a result not all checkpoints can be verified, and latent errors escape into checkpoints. Thus, improved checkpointing does not obviously help with latent errors. 
Keeping multiple checkpoints or using multi-level checkpointing systems have been proposed [START_REF] Aupy | On the combination of silent error detection and checkpointing[END_REF][START_REF] Bautista-Gomez | Fti: High performance fault tolerance interface for hybrid systems[END_REF][START_REF] Gelenbe | A model of roll-back recovery with multiple checkpoints[END_REF][START_REF] Lu | When is multi-version checkpointing needed?[END_REF][START_REF] Moody | Design, modeling, and evaluation of a scalable multilevel checkpointing system[END_REF]; for latent errors, these systems search backward through the checkpoints, restarting, reexecuting, and retesting for error. Such iterated recovery is expensive, making development of alternatives desirable. Algorithm-based fault tolerance (ABFT) exploits algorithm features and data structures to detect and correct errors, and can be used on latent errors. ABFT has been primarily developed for linear-algebra kernels [START_REF] Chen | Online-abft: An online algorithm based fault tolerance scheme for soft error detection in iterative methods[END_REF][START_REF] Du | Algorithm-based fault tolerance for dense matrix factorizations[END_REF][START_REF] Huang | Algorithm-based fault tolerance for matrix operations[END_REF][START_REF] Shantharam | Fault tolerant preconditioned conjugate gradient for sparse linear system solution[END_REF], including efficient schemes to correct single and double errors. However, each applies only to specific algorithms and data structures. Inspired by ABFT, we exploit application semantics to bound error impact and further localize recovery. Our central idea is to utilize algorithm dataflow and intermediate application states to identify potential root causes of a latent error. Diagnosing this data can enable recovery effort to be confined, reducing cost. We exploit Global View Resilience (GVR) to create inexpensive versions of application states, and utilize them for diagnosis and recovery. In prior work [START_REF] Chien | Versioned distributed arrays for resilience in scientific applications: Global view resilience[END_REF][START_REF] Chien | Exploring versioned distributed arrays for resilience in scientific applications: global view resilience[END_REF], GVR demonstrated that versioning cost is as low as 1% of total cost for frequent versioning under high error rates. A range of flexible rollback and forward recovery is feasible, exploiting convenient access to versioned state. We propose and explore a new approach, application-based focused recovery (ABFR), that exploits data corruption detection and application data flow, to focus recovery effort on an accurate estimate of potentially corrupted data. In many applications, errors take time to propagate through data, so ABFR utilizes application structure to intelligently confine error recovery effort, and allow overlapped recovery. In contrast, global recovery does neither. We apply this approach to a model application, stencil-based computations, a widely used paradigm for scientific computing, such as computation simulations, solving partial differential equations and image processing. We create an analytical performance model to explore the potential benefits of ABFR for stencil methods, varying dimensions such as error rate, error latencies and error detection intervals. The model enables us to characterize the advantages of ABFR across a wide range of system and application parameters. 
To validate the model, we perform a set of ABFR experiments, using the Chombo heat equation kernel (2-D stencil). The empirical results show that ABFR can improve recovery from latent errors significantly. For example, recovery cost (consumed CPU time) can be reduced by over 400-fold, and recovery latency (execution runtime) can be reduced by up to four-fold. Specific contributions of the paper include: • A new approach to latent error recovery, algorithm-based focused recovery (ABFR), that exploits application data flow to focus recovery effort, thereby reducing the cost of latent error recovery; • An analytical performance model for ABFR on stencil computations, and its use to highlight areas where significant performance advantages can be achieved; • Experiments with the Chombo stencil computations, applying ABFR, both validating the model and demonstrating its practical application and effectiveness, reducing recovery cost by up to 400x, and recovery latency by up to 4x. The remainder of the paper is organized as follows: Section 2 introduces the GVR library and stencil computations. In Section 3, we describe the ABFR recovery method, applied to stencil computations. Section 4 presents an analytical performance model for recovery, parameterized by error rate and detection interval (error latency). In Section 5, we present experiments with Chombo that validate the model, and provide quantitative benefits. Section 6 discusses classes of promising candidate applications of ABFR and limitations. Related work is presented in Section 7. Finally, we summarize our work in Section 8, suggesting directions for future research. Background Global View Resilience (GVR) We use the GVR library to preserve application data and enable flexible recovery. GVR provides a global view of array data, enabling an application to easily create, version and restore (partial or entire) arrays. In addition, GVR's convenient naming enables applications to flexibly compute across versions of single or multiple arrays. GVR users can control where (data structure) and when (timing and rate) array versioning is done, and tune the parameters according to the needs of the application. The ability to create multi-version array and partially materialize them, enables flexible recovery across versions. GVR has been used to demonstrate flexible multi-version rollback, forward error correction, and other creative recovery schemes [START_REF] Dun | Data decomposition in monte carlo neutron transport simulations using global view arrays[END_REF][START_REF] Fang | Applying gvr to molecular dynamics: Enabling resilience for scientific computations[END_REF]. Demonstrations include higherror rates, and results show modest runtime cost (< 1%) and programming effort in full-scale molecular dynamics, Monte Carlo, adaptive mesh, and indirect linear solver applications [START_REF] Chien | Versioned distributed arrays for resilience in scientific applications: Global view resilience[END_REF][START_REF] Chien | Exploring versioned distributed arrays for resilience in scientific applications: global view resilience[END_REF]. GVR exploits both DRAM and high bandwidth and capacity burst buffers or other forms of non-volatile memory to enable low-cost, frequent versioning and retention of large numbers of versions. As needed, local disks and parallel file system can also be exploited for additional capacity. 
For example, the NERSC Cori [START_REF]Nersc cori[END_REF] supercomputer provides 1.8 PB of SSDs in the burst buffer, with 1.7 TB/s aggregate bandwidth (6 GB/s per node). The JUQUEEN supercomputer at Jülich Supercomputing Center [1] is equipped with 2 TB of flash memory, providing 2 GB/s bandwidth per node. Multi-versioning performance studies on JUQUEEN [1] showed that GVR is able to create versions at full bandwidth, demonstrating that low-cost versioning is a reality [START_REF] Dun | Multi-versioning performance opportunities in bgas system for resilience[END_REF]. In this paper, GVR's low-cost versioning enables flexible recovery for ABFR.
Stencils
Stencils are a class of iterative kernels that update array elements in a fixed pattern, called a stencil. Stencil-based kernels are the core of a significant set of scientific applications [START_REF] Datta | Optimization and performance modeling of stencil computations on modern microprocessors[END_REF][START_REF] Epperson | An introduction to numerical methods and analysis[END_REF], and are widely used in physical simulations, computational fluid dynamics, PDE solvers, cosmology, combustion, and image processing. Stencils involve computations across a set of 5-100 neighbors, with the following typical iterative structure:
for k timesteps do
  - compute each element in the array using neighbors in a fixed pattern
  - exchange the new values with neighbors
end
During execution, each process computes local elements and communicates with direct neighbors. The regular structure of stencils and their communication pattern suggest that errors take time to propagate through the data. Given error latency and location, we can use the communication pattern to identify potentially corrupted data and bound the recovery scope. We consider 5-point 2D stencil computations in subsequent sections, but the modeling and concepts can be extended in a straightforward fashion to higher dimensions and more complex stencils; see the extended version of this work for details (Appendix Section 8).
3 Algorithm-Based Focused Recovery (ABFR) Approach
Many applications have regular, local data dependences or well-known communication patterns. Algorithm-based focused recovery (ABFR) exploits this knowledge to: (i) identify potentially corrupted data and focus recovery effort on a small subset (see Figure 1); and (ii) allow recovery to be overlapped, reducing recovery overhead and enabling tolerance of high error rates. In contrast, checkpoint-restart blindly rolls back the entire computation to the last verified checkpoint and recomputes everything. ABFR is a type of ABFT method [START_REF] Huang | Algorithm-based fault tolerance for matrix operations[END_REF] that can be applied more generally. ABFR shares the ideas of overlapped, local recovery with [START_REF] Gamell | Local recovery and failure masking for stencil-based applications at extreme scales[END_REF], but extends them in scope and with sophisticated diagnosis. Specifically, ABFR enables only the processes whose data is affected by errors to participate in the recovery process, while other processes continue computation (overlapping recovery, subject to application data dependencies). By bounding error scope, ABFR saves CPU throughput, reducing recovery cost. Furthermore, overlapping recovery and computation can reduce runtime overhead significantly, enabling tolerance of high error rates. In this paper, we describe an ABFR approach for stencil computations subject to latent errors.
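To make the loop structure above concrete, the following sketch shows a minimal single-process stencil driver with the versioning and detection hooks that ABFR relies on. It is an illustration only, not the Chombo or GVR code: versions are plain array copies standing in for GVR's versioned arrays, detect is a placeholder for an application-specific latent-error check (such as the range or conservation tests discussed below), and D and V are the detection and versioning intervals used in the model of Section 4.

```python
import numpy as np

def step(u):
    """One Jacobi-style 5-point update of the interior cells."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    return v

def detect(u):
    """Placeholder latent-error check, e.g. values stay in a plausible physical range."""
    return bool(np.all(np.isfinite(u)) and np.all(np.abs(u) < 1e3))

def run(u, timesteps, D, V):
    versions = {0: u.copy()}                  # intermediate versions, kept every V steps
    for k in range(1, timesteps + 1):
        u = step(u)
        if k % V == 0:
            versions[k] = u.copy()            # stand-in for creating a GVR array version
        if k % D == 0:
            if not detect(u):
                # a real ABFR implementation would now diagnose the potential root
                # causes against the stored versions and recover only those data
                raise RuntimeError("latent error exposed at step %d" % k)
            versions = {k: u.copy()}          # state verified: older versions can be dropped
    return u

if __name__ == "__main__":
    grid = np.zeros((64, 64))
    grid[32, 32] = 1.0
    run(grid, timesteps=40, D=10, V=2)
```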
We assume that a latent error detector (or "error check") is available. Such detectors are application-specific and computationally expensive. In order to keep the model general, we make the following assumptions:
• The error detector has 100% coverage (1), finding some manifestation whenever there is an error, but not precisely identifying all manifestations.
• The error check detects error manifestations in the data, namely, corrupted values and their locations.
• Because latent ("silent") errors are complex to identify, the detector is computationally expensive (2).
As with other ABFT approaches, we utilize application semantics to design error detectors. Example detectors include: (i) temperature variation across timesteps within a threshold; (ii) one point within a range compared to its direct neighbors; (iii) average or total heat conservation, including fluxes; and (iv) comparison with direct neighbors to reach a consensus. The interval between two consecutive error detections bounds the error latency.
Given the error location and timing, application logic and dataflow (see Figure 2) are used to invert worst-case error propagation, identifying all data points in the past that could have contributed to this error manifestation. These data points are called potential root causes (PRCs). To bound the error impact more precisely, PRCs can be tested (diagnosis), eliminating many of the initial PRCs (see Figure 3); for stencils, this can be accomplished by recomputing intermediate states from versions (courtesy of GVR) and comparing them to previously saved results. If the values match, the PRC can be pruned. Finally, recovery is applied to the reduced set of PRCs and their downstream error propagation paths. In Section 4, we develop a model quantifying the PRCs for a given error latency. It takes thousands of timesteps to corrupt even 1% of the data, but traditional CR assumes all application data is corrupted.
(1) Errors that cannot be detected are beyond the ability of any error recovery system to consider.
(2) Assuming expensive checks means that any improvements in checking can be incorporated: cost is not a disqualifier.
Explaining our example in detail (Figure 3), there are five ranks in the stencil computation. Each box in the figure represents the data of a rank. Each rank exchanges data with its two neighbors at each timestep, using the incoming data at the next step. At a certain timestep, an error is detected. Inverse propagation identifies all potential root causes (PRCs) of the error (purple boxes). Diagnosis of the PRCs eliminates most of them, leaving only one viable candidate (the red box). Recovering the red box and its downstream neighbors produces a correct application state.
4 Analytical Performance Model
Suppose the stencil works on M elements, each updated every timestep. Every D timesteps, an error detector is invoked to examine the state of the M elements. Therefore the error latency bound is D timesteps. Then, a version of the state is stored. For ABFR, additional versions of the data are created every V timesteps between two error detections. In order to simplify the model, we make the following assumptions:
• Errors occur randomly in space and time.
• Only a single error occurs between two error detections.
• Only a single manifestation of the error is detected.
Note that these assumptions are commonly used to model CR. If an error is detected, we first identify the potential root causes based on the stencil pattern. Let step(j) be the number of additional elements that become corrupted after one timestep.
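To make the inverse-propagation step concrete, here is a minimal sketch for the 3-point 1D example of Figure 3; the function name and the (rank, timestep) bookkeeping are illustrative choices, not code from the paper's implementation.

```python
def potential_root_causes(err_rank, detect_step, last_check):
    """Inverse error propagation for a 3-point 1D stencil (cf. Figure 3).

    Returns every (rank, timestep) pair that could have produced an error
    manifestation observed at err_rank at detect_step, given that the state
    was known to be clean at last_check.  The candidate set widens by one
    rank per timestep going backwards: the stencil's propagation cone,
    inverted in time.
    """
    prcs = []
    for step in range(detect_step, last_check - 1, -1):
        radius = detect_step - step
        prcs.extend((r, step) for r in range(err_rank - radius,
                                             err_rank + radius + 1))
    return prcs

# Manifestation at rank 2, detected 4 steps after the last clean check:
prcs = potential_root_causes(err_rank=2, detect_step=10, last_check=6)
print(len(prcs))   # 25 candidates; diagnosis against saved versions then
                   # prunes these before any recomputation is launched
```

The per-timestep widening of this candidate set is exactly what the function step(j) captures.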
This typically depends on the dimension of the grid, and the number of neighbors involved in the computation for one timestep. We define root(i) as the number of potential root causes i timesteps ago and AllRoot as the total number of potential root causes over the past D timesteps as follows: root(i) = 1 + i j=1 step(j), AllRoot = D-1 i=0 root(i) . 1D 2D 3D step(i) 2 4i 4i 2 + 2 root(i) 2i + 1 2i 2 + 2i + 1 1 + 4 3 i 3 + 2i 2 + 8 3 i AllRoot D 2 2 3 D 3 + 1 3 D 1 3 D 4 + 2 3 D 2 Table 2: Expressions for step, root, and AllRoot functions for 1, 2 and 3 dimensional grids, assuming an element interacts only with its direct neighbors. RR n°9042 Table 2 shows the expressions for step, root and AllRoot for 1D, 2D, and 3D stencils. We assume that diagnosis is done by recomputing elements from the last checked version, which was D timesteps ago. We can compare the result against intermediate versions. If the recomputed data differs from the version, then the error occurred between the last two versions. Note that with a version at every step, we can narrow the root cause of an error to a single point. Suppose the error occurred j timesteps ago, then the time required for diagnosis is the time to recompute, reload and check (t + r + c) each element against the version from iteration D -1 to j as illustrated in Figure 3c: diag(i) = r • root(D) + (t + r + c) D-1 j=i root(j) . Once potential root causes are pruned, recovery is done by recomputing the reduced set of potential root causes and affected data from the last correct version, as illustrated in Figure 3d: recomp(i) = -t + (t + s) i j=0 root(j) . As discussed in Section 3, ABFR allows overlapping recovery. In that case, the recovery cost (work needed) is the critical metric. If recovery cannot be overlapped, then recovery latency (parallel time) is appropriate. We model both of these for 2D stencils. We refer the reader to the Appendix Section 8 for the analysis of 1D and 3D stencils. Recovery Cost We first quantify the total cost (amount of work due to computation, detection, versioning and recovery, counted in CPU time) of the ABFR approach, as a function of error rate λ (errors per second per byte) and detection interval D, denoted by E ABF R and compare it with the classical CR (Checkpoint/Restart) approach, denoted by E CR . Program execution is divided into equal-size segments of D timesteps. The time needed to complete one segment with p processes is DtM p , and the total CPU time on computation is DtM . Similarly, we spend a total of dM time on detection and BsM time on versioning, where B is the number of versions taken between two detections For CR, we use B = 1, CR creates a version every D timesteps. Then, we assume that errors occur following an exponential distribution, and the probability of having an error during the execution of one segment is denoted by 1 -e -λM DtM p , where λM is the application error rate. Therefore, we can write E CR and E ABF R as functions RR n°9042 of D and λM as follows: E CR = DtM + dM + sM + 1 -e -λM DtM p Rec CR , (1) E ABF R = DtM + dM + BsM + 1 -e -λM DtM p Rec ABF R . (2) The main difference between both approaches lies in recovery cost. Recovery of CR includes reloading data and full recomputation, while ABFR includes diagnosis cost, different data reloading, and reduced recomputation cost. For CR, we have directly: Rec CR = rM + DtM . (3) For ABFR, let B = D V denote the number of versions taken between two detections. 
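The closed forms of Table 2 can be spot-checked directly from the recursive definitions; the short script below does so for the 2D case (the dictionary-based step() is only a convenience, not notation from the paper).

```python
def step(i, dim):
    """Additional elements corrupted after one timestep (1D, 2D, 3D grids)."""
    return {1: 2, 2: 4 * i, 3: 4 * i * i + 2}[dim]

def root(i, dim):
    """Number of potential root causes i timesteps ago."""
    return 1 + sum(step(j, dim) for j in range(1, i + 1))

def all_root(D, dim):
    """Total number of potential root causes over the past D timesteps."""
    return sum(root(i, dim) for i in range(D))

# Spot-check the closed forms of Table 2 for the 2D case:
D = 10
assert root(D, 2) == 2 * D * D + 2 * D + 1
assert all_root(D, 2) == (2 * D**3 + D) // 3
print(root(D, 2), all_root(D, 2))   # 221 candidates at the oldest step, 670 in total
```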
We number versions backwards, from j = 0 (timestep 0) up to j = B -1 (timestep (B -1)V ). The last checked version (timestep D) has been versioned too (j = B). We introduce the notation A(j), which is the total number of potential root causes between two versioned timesteps jV and (j + 1)V , excluding (j + 1)V but including jV : A(j) = (j+1)V -1 k=jV root(k) . Therefore, A(j) AllRoot denote the probability that the error occurred between version j and j + 1, and we can write: Rec ABF R = B-1 j=0 A(j) AllRoot (diag(j) + recomp(j)) . The diagnosis is done by recomputing all potential root causes from timesteps D -1 up to version j, that is timestep jV . In addition, we need to pay (r + c)root(kV ) for every version k that passed the diagnosis test, that is from version B -1 to j included. Therefore, we can write: diag(j) = r • root(D) + t D-1 k=jV root(k) + (r + c) B-1 k=j root(kV ) . Because we may have gaps in-between versions, we do not know the exact location of the root cause of the error. Therefore, we recompute starting from version j + 1 instead of j. We must recompute all potential affected elements from timestep (j + 1)V -1 to 0. At timestep (j + 1)V -1, there are root((j + 1)V -1) potential root causes elements to recompute. At every timestep, the number of elements to recompute increases by step(j), so that there are a total of root(2(j + 1)V ) elements to recompute at timestep 0. Therefore, we can write: recomp(j) = t 2(j+1)V k=(j+1)V -1 root(k) + s 2(j+1) k=j+1 root(kV ) . Finally, we obtain the recovery cost as a function of the detection interval D: Rec ABF R = 8 15 t(α 5 -5α 3 + 9α + 5)D 3 + O(D 2 ) , We plot the recovery cost of CR and ABFR as a function of detection interval (error latency) in Figure 4 (note that CR creates 1 version during D timesteps, while ABFR creates B versions. The plot uses B = 1 α = 4). We observe that CR grows linearly with detection interval. While ABFR increases slowly for less than 9,000 and outperforms CR for error latencies up to 17,000 timesteps. This range of 1,000 to 17,000 time steps corresponds to 3 seconds to about 1 minute. After that, most data are corrupted, hence ABFR cannot further improve the performance by bounding error impact. where α = 1 B . (4) RR n°9042 Let H = E DtM denote the expected overhead with respect to the computation cost without errors. We have H CR = 1 + d + s Dt + λM (rM + DtM ), H ABF R = 1 + b D + λM aD 3 , where a = 8 15 t(α 5 -5α 3 + 9α + 5) and b = αd + s αt . ( ) 5 Optimal Detection Interval Minimizing the overhead, we derive the following optimal detection interval for Checkpoint-Restart and ABFR: D * CR = (d + s)p λM 2 t 2 , and D * ABF R = 4 bp 3aλM . (6) Empirical studies of petascale systems have shown MTBF's of three hours at deployment [START_REF] Martino | Lessons learned from the analysis of system failures at petascale: The case of blue waters[END_REF], and allowing for the greater scale of exascale systems [START_REF] Cappello | Toward exascale resilience: 2014 update. Supercomput[END_REF][START_REF] Snir | Addressing failures in exascale computing[END_REF], future HPC system MTBFs have been projected as low as 20 minutes [START_REF] Ferreira | Evaluating the viability of process replication reliability for exascale systems[END_REF]. To explore possibilities for a broad range of future systems (including cloud), we consider system error rates (errors/second) ranging from 0 (infinite MTBF) to 0.01 (1 minute MTBF). 
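A few lines of code suffice to evaluate these closed forms with the parameter values of the figure captions and recover the headline numbers quoted above; the optimal-interval expressions used at the end are the ones given in Equation (6) below. This is a sketch of the analytical model only, not of the Chombo experiments, and it relies on the reconstructed forms of Equations (3) to (6).

```python
import numpy as np

# Parameter values from the Figure 4 and Figure 5 captions.
M, p = 32768**2, 4096
t, d, r, s, alpha = 1e-8, 100 * 1e-8, 1e-9, 1e-8, 0.25   # alpha = V/D = 1/B

def rec_cr(D):
    # Eq. (3): reload the last checked version and recompute everything.
    return r * M + D * t * M

def rec_abfr(D):
    # Eq. (4), leading D^3 term of the focused-recovery cost.
    return (8.0 / 15.0) * t * (alpha**5 - 5 * alpha**3 + 9 * alpha + 5) * D**3

D = np.arange(1000, 20001, 50)
crossover = D[np.argmax(rec_abfr(D) > rec_cr(D))]
print(crossover)    # ~16,700 timesteps: below this latency ABFR recovers cheaper

# Optimal detection intervals of Eq. (6) as the error rate grows.
a = (8.0 / 15.0) * t * (alpha**5 - 5 * alpha**3 + 9 * alpha + 5)
b = (alpha * d + s) / (alpha * t)
for system_rate in (1e-4, 1e-3, 1e-2):      # application error rate λM, errors/s
    lam = system_rate / M                   # per-element rate λ
    d_cr = np.sqrt((d + s) * p / (lam * M**2 * t**2))
    d_abfr = (b * p / (3.0 * a * lam * M)) ** 0.25
    print(system_rate, int(d_cr), int(d_abfr))   # CR's optimum drops much faster
```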
We assume the application runs on the entire system, setting λM to the system error rate. We plot the optimal detection interval as a function of error rate λM in Figure 5. We observe that as error rate increases, the optimal detection interval of CR drops faster than ABFR for varied error detector cost, indicating CR demands more frequent error detection in high error rate environments. So, here the goal is to be lazy in error detection checking, because deep application-semantics are assumed to be expensive. Higher numbers for optimal detection interval are good. Plugging D * back into H, we derive that H * CR = 1 + 2M (d + s) p √ λ + rM 2 λ, (7) and H * ABF R = 1 + 4 3 4 3ab 3 λM p . (8) We plot the overhead as a function of error rate, when using the optimal detection interval, in Figure 6. With growing error rates, CR incurs high overhead. In contrast, ABFR significantly reduces overhead and performs stable even for high error rates. Recovery Latency We model recovery latency (parallel execution runtime). Large-scale simulations overly decompose a grid into boxes, enabling parallelism and load balance. As in Figure 8, each process is assigned a set of boxes; each of which is associated a halo of ghost cells. The square grid of RR n°9042 √ M × √ M elements is partitioned into square boxes of size √ m × √ m. We have M m boxes mapped on to p processes. Recovery latency, RecLat, is determined by the process with the most work. For CR, we assume perfect load balance; each process has n boxes, so npm = M . Thus RecLat CR reloads n boxes and recomputes them for D timesteps: RecLat CR = n(rm + Dtm) . For ABFR, recovery latency is determined by the process with the most corrupted boxes. For simplicity, we recompute entire box even it is partially corrupted in ABFR. In an ideal case, the actual corrupted boxes are owned by processes uniformly, making the number of corrupted box of each process, equal to n ideal = root(D) mp = 2D 2 mp + O(D). For the interleaved mapping (see Figure 8), there are M/m boxes in one row, so the distance between two boxes assigned to the same rank is diag(j) = rm + t D-1 m + (r + c) k=j m, recomp(j) = t (j+1)V k=0 m + s j+1 k=0 m . RR n°9042 To compute the recovery latency Rec box per box, we proceed as before: Rec box = B-1 j=0 A(j) AllRoot (diag(j) + recomp(j)) = tmαD + o(D). Multiplying Rec box by the corresponding number of boxes in the ideal and interleaved scenarios, we obtain RecLat ideal = 2tα p D 3 + O(D 2 ), (10) RecLat inter = 2tα √ M p D 2 + O(D) . (11) Comparing Equations ( 9) and ( 10), we conclude that as long as the latency is not long enough to infect all assigned boxes of one process, ABFR would produce better performance. We plot RecLat CR and RecLat inter as a function of detection interval in Figure 7. Similar as in Figure 4, CR increases linearly with detection interval. And ABFR outperforms CR for the detection interval from 0 to 17,000 timesteps. But the gap between their recovery latencies is smaller compared with that in recovery cost. The gap between recovery latencies mainly depends on the difference in the number of boxes that the slowest process needs to work on. Therefore ABFR is at most n = 4 times better in the plot configuration. Optimal Detection Interval. We derive the expected runtime of CR and ABFR to successfully compute D timesteps. 
T_CR = Dnmt + dnm + snm + (1 - e^{-λM·Dnmt}) RecLat_CR ,
T_ABFR = Dnmt + dnm + Bsnm + (1 - e^{-λM·Dnmt}) RecLat_ABFR .
The overhead H = T/(Dnmt) of CR and ABFR is given by
H_CR = 1 + (d + s)/(Dt) + λM·n(rm + Dtm) ,
H_ideal = 1 + (αd + s)/(αDt) + λM·(2tα/p)·D^3 ,
H_inter = 1 + (αd + s)/(αDt) + λM·(2tα√M/p)·D^2 .
Minimizing the overhead, we derive the optimal detection interval for CR, ideal ABFR and interleaved ABFR, respectively, as follows:
D*_CR = sqrt((d + s)p / (λM^2 t^2)) , D*_ideal = ((αd + s)p / (6α^2 t^2 λM))^{1/4} , D*_inter = ((αd + s)p / (4α^2 λM^{3/2} t^2))^{1/3} .
The optimal interval D*_CR of CR is the same as in Equation (6). The optimal interval for ideal ABFR is D*_ideal = Θ(λ^{-1/4}), the same order of magnitude as D*_ABFR, the optimal value of Equation (6) for the recovery cost. D*_inter is different due to imbalanced recovery.
Workload
We use the Chombo 2D heat equation code as the testbed to validate the model. Chombo [START_REF] Colella | Chombo software package for AMR applications design document[END_REF] is a library that implements the block-structured adaptive mesh refinement technique. The 2D heat equation code, implemented with the Chombo library, solves a parabolic partial differential equation that describes the distribution of heat in a given region over time. It is a 5-point 2D stencil program and deploys an interleaved domain decomposition method. An example of such a decomposition for a 64x64 domain and 8 ranks is shown in Figure 8.
Model Validation: Experiments
We enhanced Chombo with two recovery schemes: CR (baseline) and ABFR. The CR scheme saves a version in memory after each error check. When an error is detected, CR rolls back to the last checked version and recomputes. Note that this is an improved version of classical CR, because it avoids iterative rollback and recomputation until the error is corrected. ABFR creates 3 additional versions between two error checks, i.e. 4 versioning intervals in 1 detection interval. In recovery, ABFR diagnoses potential root causes using application knowledge and intermediate versions, then recomputes only the corrupted data.
Experiment Design
We explore the performance of CR and ABFR for varied error detection intervals and error latencies. The configuration of the experiments is listed in Table 3. We run 4,096 ranks and solve the heat equation for a domain of 10^9 elements. With this problem size, we vary the detection interval from 1,000 timesteps to 13,000 timesteps, producing potentially corrupted data fractions that range from 0.2% to 32%. ABFR always creates 4 versions, so the interval between versions increases with the detection interval. For each detection interval, we sample error latencies uniformly, injecting an error in each versioning interval. We measure the performance for each error latency and average the results to obtain the performance for that detection interval. All experiments were conducted on Edison, the Cray XC30 at NERSC (5,576 nodes, dual 12-core Intel IvyBridge 2.4 GHz, 64 GB memory). We use 4,096 ranks, typically spread over 342 nodes. The results are averages of three trials.
Metrics
We use three metrics for comparison: recovery cost, recovery latency and data read (IO). Recovery cost is the total amount of work (CPU time) required to recover. Recovery latency is the runtime critical path for application recovery. Data read is the amount of data restored during recovery, representing I/O cost.
Recovery Cost
Figure 9 plots the recovery cost for varied detection intervals (1,000 to 13,000 timesteps).
Recovery cost for CR grows linearly with detection interval (error latency). The recovery cost of ABFR is initially 400x lower (62 vs. 25,700 CPU seconds at 1000 timesteps), and it grows slowly. The gap between them increases steadily but the ratio decreases. Even at 13000 timesteps, ABFR has 2x lower recovery cost. In contrast to CR, ABFR effectively focuses recovery effort, using diagnosis to reduce cost. Results Figure 9 also plots the performance model (dotted and dash lines), showing a close match (for broader comparison see Figure 4). As expected, ABFR cost starts lower and grows polynomially with the detection interval. Recovery Latency Figure 10 compares the recovery latency with a range of detection intervals. For shorter intervals, ABFR has much lower recovery latency, reducing recovery latency by up to 4x (detection interval of 1000 timesteps). The recovery latency is determined by the slowest process. Each process in CR recomputes all 4 boxes assigned to it at every timestep. While in ABFR, for 1,000 time steps, only 41 boxes are identified potentially corrupted, so processes involved in recovery work at most on one box, producing 4x better performance. As detection interval increases, the error may propagate to larger area, making it more likely that each process has more boxes to handle. At detection interval (error latency) of 13,000 timesteps, ABFR has same performance as CR. The dotted and dash lines in Figure 10 are performance model results using parameter values of our experiments (see also Figure 7). Our experiment results have similar curves as the model. The recovery latency of CR grows almost linearly with detection intervals. While ABFR produces low recovery latencies for short detection intervals and then chases up with CR with expanding detection intervals. The measured ABFR performance are slightly worse than the model because we only keep the highest order terms in the model for simplification but omit some other costs. Data Read (IO) An important cost for recovery is the reading of stored version data from the IO system. Figure 11 presents the data read versus detection intervals. In general, the data read increases with detection interval as on average the actual error latency is greater, causing ABFR to read parts of more versions. In contrast, CR always reloads the entire grid. Because ABFR intelligently bounds the error impact and loads the required data to recover all potential errors, it reduces data read by as much as 1000-fold. Discussion Generality of ABFR As a type of ABFT, ABFR requires sufficient application knowledge to design inverse error propagation, diagnose and focus recovery. However, this knowledge can be coarse-grained. Our studies show that ABFR is helpful for several classes of applications. Applications that have regular data dependencies, such as stencils and adaptive mesh refinement (AMR) can easily adopt ABFR to bound error effect and confine recovery. Some applications have dependency tables or graphs that can be exploited by ABFR. Such examples include broad graph processing algorithms and task-parallel applications. Some applications have properties that limit the spread of errors. For instance, N-Body tree codes have numerical cutoff that confine erroneous regions to some subtrees. Monte Carlo applications do not propagate errors across sampled batches. We plan to extend ABFR to these applications in future work. Multiple Errors For simplicity we only model single errors. 
This assumption is common and underlies much of CR practice. There are several potential avenues for extension. First, multiple errors within a detection interval could trigger multiple ABFR responses. Alternatively, diagnosis could be extended to deal with multiple errors at once. These are promising directions for future work. Related Work Soft errors and data corruption for extreme-scale systems have been the target of numerous studies. A considerable number of researchers have already looked at error vulnerability. Some focus on error detection but rely on other methods to recover. Others work on designing recovery techniques. We classify related work into three categories: system-level resilience, ABFT (Algorithm-based Fault Tolerance) techniques and resilience for stencils. System-level Resilience With the growing error rates, it has been recognized that single checkpoint cannot handle latent errors, as the rising frequency shrinks the optimal checkpoint interval [START_REF] Daly | A higher order estimate of the optimum checkpoint interval for restart dumps[END_REF], increasing the incidence of escaped errors. To address this reality at extreme scale, researchers have proposed multi-level checkpointing systems and multiple checkpoint-restart (MCR) approaches [START_REF] Bautista-Gomez | Fti: High performance fault tolerance interface for hybrid systems[END_REF][START_REF] Gelenbe | A model of roll-back recovery with multiple checkpoints[END_REF][START_REF] Moody | Design, modeling, and evaluation of a scalable multilevel checkpointing system[END_REF]. Such systems exploit fast storage (DRAM, NVRAM) to reduce I/O cost and keep multiple checkpoints around. Inexpensive but less-resilient checkpoints are kept in fast, volatile storage, and expensive but most-resilient checkpoints in parallel file system. When a latent error is detected, applications must search these checkpoints, attempting to find one that doesn't contain latent errors. The typical algorithm is to start from the more recent checkpoint, reexecute, then see if the latent error recurs. If it does, repeat with the next older checkpoint. This blind search and global recovery incurs high overhead especially in case of errors with long latency, making MCR unsuitable for high error rates. In contrast, our ABFR approach exploits application-knowledge to narrow down the corrupted state, and only recompute that. Algorithm-Based Fault-Tolerance Huang and Abraham [START_REF] Huang | Algorithm-based fault tolerance for matrix operations[END_REF] proposed a checksum-based ABFT for linear algebra kernels to detect, locate and correct single error in matrix operations. Other researchers extended Huang and Abraham's work for other specialized linear system algorithms, such as PCG for sparse linear system [START_REF] Shantharam | Fault tolerant preconditioned conjugate gradient for sparse linear system solution[END_REF], dense matrix factorization [START_REF] Du | Algorithm-based fault tolerance for dense matrix factorizations[END_REF], Krylov subspace iterative methods [START_REF] Chen | Online-abft: An online algorithm based fault tolerance scheme for soft error detection in iterative methods[END_REF]. Below we address ABFT methods for stencils. Our work is similar to ABFT approaches, exploiting application knowledge for error detection, but adding the use of application knowledge to diagnose what state is potentially corrupt, and using that knowledge to limit recomputation, and thereby achieve efficient recovery from latent errors. 
Resilience for Stencil Computations Researchers have explored error detection in stencil computations, for example exploiting the smoothness of the evolution of a particular dataset in the iterative methods to detect errors. Berrocal et al. [START_REF]Lightweight silent data corruption detection based on runtime data analysis for hpc applications[END_REF] showed that an interval of normal values for the evolution of the datasets can be predicted, therefore any errors that make the corrupted data point outside the interval can be detected. Benson et al. [START_REF] Benson | Silent error detection in numerical time-stepping schemes[END_REF] proposed a error detection method by using an cheap auxiliary algorithm/method to repeat the computation at the same time with original algorithm, and compare the difference with the results produced by the original algorithm. These work relied on Checkpoint-Restart to correct errors. Our ABFR approach can benefit from these efforts on application error checks. Other studies have also explored resilience approaches for stencils. Gamell et al. [START_REF] Gamell | Local recovery and failure masking for stencil-based applications at extreme scales[END_REF] studied the feasibility of local recovery for stencil-based parallel applications. When a failure occurs, only the failed process is substituted with a spare one and rollbacks to the last saved state for the failed process and resumes computation. The rest of the domain continues communication. This technique assumes immediate error detection. Sharma et al. [START_REF] Sharma | Detecting soft errors in stencil based computations[END_REF] proposed an error detection method for stencil-based applications using the predicted values by a regression model. Dubey et al. [START_REF] Dubey | Granularity and the cost of error recovery in resilient amr scientific applications[END_REF] explored local recovery schemes for applications using structured adaptive mesh refinement (AMR). Their work studied customizing resilience strategy exploiting the inherent structure and granularities within applications. Recovery granularities can be controlled at cell, box, and level (a union of boxes) for AMR depending on failure modes. This work also assumes immediate error detection. We share the context of stencils and attempts to confine error recovery scope, but our work is clearly different with its focus on latent errors. Summary and Future Work We propose an application-based focused recovery (ABFR) for stencil computations to efficiently recover from latent errors. This approach exploits stencil semantics and inexpensive versioned states to bound error impact and confine recovery scope. This focused recovery approach can yield significant performance benefits. We analyze and characterize the ABFR approach on stencils, creating a performance model parameterized by error rate and detection interval (error latency). Experiments with the Chombo heat equation application show promising results, reducing both recovery cost (up to 400x) and recovery latency (up to 4x), and validating the model. Future directions include experiments that extend ABFR ideas to other applications, and the analytical study of optimal versioning intervals and detection intervals. A.1 Standard checkpoint and recovery In this section, we set V = D, so that we only version after a successful error check. When an error is detected, we simply reload the last correct version, and we recompute all D iterations from there. 
Let E denote the expected time needed to execute D iterations successfully. We first pay DtM , the cost for executing D iterations, where t is the time needed to compute a single element. Then, we pay dM , the cost for running the detector on the M elements. With probability 1 -e -λM DtM p , there was an error and we pay rM , the time needed to recover from the last correct version and DtM , the time needed to recompute all elements. With probability e -λM DtM p , there was no error and we are done. Finally, in both cases we must store the correct version with cost sM . Therefore, we can write: E = DtM + dM + 1 -e -λM DtM p (rM + DtM ) + sM . Then, let H = E DtM denote the expected overhead with respect to the execution time without errors (DtM ). Using Taylor series to approximate e -λM DtM p to 1 -λM DtM p , and keeping only first order terms, we can write: H = 1 + d + s Dt + λM (rM + DtM p ) + O(λ 2 ) . Finally, differentiating H and solving for D, we derive that: 2 ). Note that this result holds for any grid dimension. D * = (d + s)p λM 2 t 2 , hence we have D * = O(λ - 1 A.2 Version every step In this section, we set V = 1, so that a version is taken after every iteration. Let E denote the expected time needed to execute D iterations successfully. We first pay DtM , the cost for executing D iterations, where t is the time needed to compute a single element. Then, we pay DsM , the cost for versioning at every step, where s denote the time needed to store a single element. Finally, we pay dM , the cost for running the detector on the M elements. With probability 1 -e -λM DtM p , there was an error and we pay Rec, the expected time needed to trace the source of the error from the single manifestation and to recompute all corrupted elements from there. E = DtM + DsM + dM + 1 -e -λM DtM p Rec . RR n°9042 We assume that the error was detected at iteration 0 and we number iterations backwards, from i = 0 to i = D -1. i = D corresponds to the last check. With probability P line err (i) = root(i) AllRoot , the error occurred i iterations ago. Let diag(i) denote the time needed to find the root cause of the error, from iteration D -1 to i, and let recomp(i) denote the time needed to recompute all corrupted elements, from iteration i -1 to 0. Accounting for all possible scenarios, we can write: Rec = D-1 i=0 P line err (i) (diag(i) + recomp(i)) . Diagnosis is done by recomputing all potential root causes from iteration D -1 to i, and by comparing the result with the corresponding versions. First, we pay r.root(D) in order to reload the last correct version. Then, for each iteration j, we must pay r.root(j) to reload the corresponding version, t.root(j) to recompute all the potential root causes at iteration j, and finally c.root(j), the cost to compare the data against the version. Altogether, we can write: diag(i) = r.root(D) + (t + r + c) D-1 j=i root(j) . When the diagnosis is done, we have to account for the recomputation cost. The number of elements to recompute grows linearly from i downto 0. Indeed, there is only one element to recompute at iteration i, or root(0), and there are root(i) elements to recompute at iteration 0. Also, the root cause of the error itself at iteration i has been corrected during the diagnosis. Therefore, we need to pay (t + s)root(j), the cost to recompute and store each corrupted elements with j from 0 to i, to which we remove t.root(0) and we can write: recomp(i) = -t + (t + s) i j=0 root(j) . 
Then, Rec = D-1 i=0 root(i) AllRoot   -t + (r + t + c) D-1 j=i root(i) + (t + s) i-1 j=0 root(i)   . Note that AllRoot = D-1 j=0 root(i), so that we can extract t and rewrite Rec as follows: Rec = t(AllRoot -1) + D-1 i=0 root i AllRoot   (r + c) D-1 j=i root(i) + s i-1 j=0 root(i)   . RR n°9042 Finally, let H = E DtM denote the expected overhead with respect to the execution time without errors (DtM ). Using Taylor series to approximate 1 -e -λM DtM p to λM DtM p , keeping only first order terms, we can write: H = 1 + s t + d Dt + λM p Rec + O(λ 2 ) . Instantiating H with the correct step(i) function, differentiating and solving for D, we can derive the optimal detection interval D * . 1D case. We set step(i) = 2, and we get: Rec = t(D 2 -1) + D-1 i=0 2i + 1 D 2 (r + c)(D 2 -i 2 ) + s(i 2 -1) . Keeping only terms in D 2 , we get: Rec = D 2 c + r + s + 2t 2 + O(D) . So that: H = 1 + s t + d Dt + λM p D 2 c + r + s + 2t 2 + O(λ 2 D) . Differentiating and solving for D we get: 3 ). D * = 3 dp λM t(c + r + s + 2t) , hence we have D * = O(λ - 1 2D case. Similarly, we set step(i) = 4i, and we get: Rec = D 3 c + r + s + 2t 3 + O(D 2 ) . Therefore, we can write: H = 1 + s t + d Dt + λM p D 3 c + r + s + 2t 3 + O(λ 2 D 2 ) . Then, differentiating and solving for D we get: D * = 4 dp λM t(c + r + s + 2t) , and we have 4 ). D * = O(λ - 1 RR n°9042 3D case. Let step(i) = 4i 2 + 2, we derive that: Rec = D 4 c + r + s + 2t 6 + O(D 3 ) . Therefore, we can get: H = 1 + s t + d Dt + λM p D 4 c + r + s + 2t 6 + O(λ 2 D 3 ) , and finally differentiating and solving for D we get: 5 ). D * = 5 dp λM t(c + r + s + 2t) , with D * = O(λ - 1 A.3 Version at a given interval In this section, we consider the general case, and V can be anywhere between 1 and D. Let B = D V denote the number of versions taken between two detections. As before, we denote by E the expected time needed to successfully execute D iterations. We pay DtM , the cost for executing tM elements for D iterations, BsM , the cost of storing B versions, and dM , the cost of running the detector. With probability 1 -e -λM DtM p there was an error, and we need to recover from the last correct version. Therefore, we can write: E = DtM + BsM + dM + 1 -e -λM DtM p Rec . Similarly as for iterations, we number versions backwards, from j = 0 (iteration 0) up to j = B -1 (iteration (B -1)V ). The last checked version (iteration D) has been versioned too (j = B). We introduce the notation A(j), which is the total number of potential root causes between two versioned iterations jV and (j + 1)V , excluding (j + 1)V but including jV : A(j) = (j+1)V -1 k=jV root(k) . Then, let P area err (j) = A(j) AllRoot denote the probability that the error occurred between version j and j + 1. We can write: Rec = B-1 j=0 A(j) AllRoot (diag(j) + recomp(j)) . RR n°9042 The diagnosis is done by recomputing all potential root causes from iterations D -1 up to version j, that is iteration jV . In addition, we need to pay (r + c)root(kV ) for every version k that passed the check, that is from version B -1 to j included. Therefore, we can write: Then, recompute all corrupted elements. Because we have gaps inbetween versions, we do not know the exact location of the root cause of the error. Therefore, we recompute starting from version j + 1 instead of j. We must recompute all potential affected elements from iteration (j +1)V -1 to 0. At iteration (j + 1)V -1, there are root((j + 1)V -1) potential root causes elements to recompute. 
At every iteration, the number of elements to recompute increases by step(j), so that there are a total of root(2(j + 1)V ) elements to recompute at iteration 0. Therefore, we can write: recomp(j) = -t.root((j + 1)V ) + t Then, let V = αD, where α is a fraction of D, so that we can write: H = 1 + s + αd αt 1 D + λM p Rec + O(λ 2 ) . Setting b = s+αd αt , we can write: H = 1 + b D + λM p Rec + O(λ 2 ) . 1D case. In order to derive the optimal detection interval D * and the optimal version interval V * , we first set V = αD, where 0 < α ≤ 1, so that we have V = O(D). For the 1D case, we set step(i) = 2, and keeping leading terms with respect to D, we get: Rec = tD RR n°9042 Then let a = t 2 3 3 -α 3 + 4α , so that we can rewrite H as follows: H = 1 + b D + λM p aD 2 + O(λ 2 D) . Differentiating H with respect to D, and then solving for D, we can derive: D * = 3 1 2 bp aλM , hence we have 3 ). Plugging D * back into H, we derive that: D * = O(λ - 1 H * = 1 + 3 2 3 aM λ2b 2 p Finally, in order to derive V * , we must find α * . Differentiating H with respect to α, we need to solve: Then let a = 8 15 t(α 5 -5α 3 + 9α + 5), so that we can rewrite H as follows: H = 1 + b D + λM p aD 3 + O(D 2 ) . Differentiating H with respect to D and optimizing for D we can derive: D * = 4 1 3 bp aλM . Plugging D * back into H, we derive that: H * = 1 + 4 3 4 3ab 3 λM p . As for the 1D case, the optimal detection interval V * has to be computed numerically. RR n°9042 3D case. Similarly, let step(i) = 4i 2 + 2, we derive that: Then let a = 8 315 t(140α 5 -23α 7 -252α 3 +250α+105), so that we can rewrite H as follows: H = 1 + b D + λM p aD 4 + O(D 3 ) . Differentiating H with respect to D and optimizing for D we can derive: D * = 5 1 4 bp aλM . Plugging D * back into H, we derive that: H * = 1 + 5 4 5 4ab 4 λM p . As for the 1D and 2D case, the optimal detection interval V * has to be computed numerically. RR n°9042 Figure 2 : 2 Figure 2: Stencil patterns: an error propagates to direct neighbors (blue) in a timestep. Figure 3 : 3 Figure 3: ABFR in a 3-point 1D Stencil. Figure 4 : 4 )Figure 5 : 4 ) 4454 Figure 4: Recovery Cost vs. Detection Interval (M = 32768 2 , t = 10 -8 , d = 100t, r = 10 -9 , s = 10 -8 , α = 1 4 ) Figure 5: Optimal Detection Interval vs. Error Rate (M = 32768 2 , p = 4096, t = 10 -8 , r = 10 -9 , s = 10 -8 , α = 1 4 ) Recovery Cost Comparison The dominant cost in recovery is recomputation. It is O(DM ) for CR in Equation 3 and O(D 3 ) for ABFR in Equation 4. Suppose the number of elements in one dimension of stencil is U , we have M = U , M = U 2 and M = U 3 for 1D, 2D, and 3D stencil respectively. Since CR always recompute all the data, the corresponding recomputation cost is O(DU ), O(DU 2 ) and O(DU 3 ). In constrast, ABFR only need to recompute a small fraction of the M elements. The corresponding recomputation cost is O(D 2 ), O(D 3 ) and O(D 4 ) respectively (see Appendix Section 8. Note that the detection interval D (or error latency) is much smaller than the number of elements in one dimension U .We plot the recovery cost of CR and ABFR as a function of detection interval (error latency) in Figure4(note that CR creates 1 version during D timesteps, while ABFR creates B versions. The plot uses B = 1 α = 4). We observe that CR grows linearly with detection interval. While ABFR increases slowly for less than 9,000 and outperforms CR for error latencies up to 17,000 timesteps. This range of 1,000 to 17,000 time steps corresponds to 3 seconds to about 1 minute. 
After that, most data are corrupted, hence ABFR cannot further improve the performance by bounding error impact.
Figure 6: Overhead vs. Error Rate Using Optimal Detection Interval (M = 32768^2, p = 4096, t = 10^-8, r = 10^-9, s = 10^-8, α = 1/4).
Figure 7: Recovery Latency vs. Detection Interval (M = 32768^2, m = 65536, p = 4096, n = 4, t = 10^-8, d = 100t, r = 10^-9, s = 10^-8, α = 1/4).
For the interleaved mapping, the length 2D is the range of error spread, so the slowest process would have n_inter = 2√M·D/(pm) boxes.
Figure 8: Interleaved domain decomposition.
Figure 9: Recovery Cost vs. Detection Interval (model plotted for the experiment configuration and measured t = 1.5 × 10^-8 second).
Figure 10: Recovery Latency vs. Detection Interval (model plotted for the experiment configuration and measured t = 1.5 × 10^-8 second).
Figure 11: Data Read (MB) vs. Detection Interval.
For the 1D case, Rec = (2/3) t (3 - α^3 + 4α) D^2 + O(D), and the optimality condition in α, 4α^2 d - 3α^4 d - α^3 s - 4αs - 6s = 0, has to be solved numerically; for the 2D case, Rec = (8/15) t (α^5 - 5α^3 + 9α + 5) D^3 + O(D^2); for the 3D case, Rec = (8/315) t (140α^5 - 23α^7 - 252α^3 + 250α + 105) D^4 + O(D^3).
Table 3: Experiment Configurations.
A Extended analysis
In this section, we derive the optimal values for D and V that minimize the total expected execution time of the application for different scenarios. In the first scenario, we do not take advantage of the known error propagation pattern. We focus on the standard checkpoint and recovery approach and we derive the optimal D following the approach of Young/Daly. Then, in order to take full advantage of the known error propagation pattern, we focus on the simple scenario where V = 1, which allows us to cut down the recomputation time in case of error. Ultimately, we move to the general scenario with arbitrary values for V. We must find a tradeoff between the amount of time spent versioning versus recomputing upon error, and we derive optimal values for both D and V.
57,867
[ "739318" ]
[ "129172", "179718", "6818", "35418", "179718", "6818", "135613", "35418", "129172", "49346" ]
01480272
en
[ "spi" ]
2024/03/04 23:41:48
2017
https://hal.science/hal-01480272/file/sympo_vortex.pdf
B Franzelli A Cuoci A Stagni M Ihme T Faravelli S Candel Numerical investigation of soot-flame-vortex interaction Keywords: Soot, Curvature, Vortex, Strain rate, Heavy PAHs HAL is interactions with LII images. Calculations adequately retrieve experimental data when the vortex is initiated at the fuel side. Differences are observed when the vortex is formed on the air side. The simulations, however, are useful for examining strain rate and curvature effects on soot volume fractions of small spherical particles and large aggregates, and to study physical processes underlying the soot production. It is shown that the variability of the soot volume fraction is highly correlated to the response of soot precursors to flame curvature. Soot variability in the mixture fraction space depends on the behavior of large aggregates that, being characterized by high Schmidt numbers, are more sensitive to the convective motion imposed by the vortex compared to the gaseous phase. The observed behavior has to be reproduced by the models developed for numerical simulations in order to obtain an accurate prediction of soot production in turbulent flames. Introduction Reducing soot emissions is of considerable importance for many practical applications due to their negative effects on the environment and on public health. In this context, substantial experimental and numerical efforts have been made to understand, characterize, and model the complex processes leading to soot formation and oxidation [1,2,3,4]. Soot production 1 in turbulent flames, which are most relevant in practice, is a complex process which depends on chemistry, flow history and local turbulence properties. The analysis of soot in fully turbulent flames is then extremely challenging. The experimental investigation requires the use of combined diagnostics to obtain quantitative measurements of soot distribution, its relation with respect to flow and flame quantities [5,6,7,8]. On the numerical level, Direct Numerical Simulations (DNS) of turbulent flames have been used to investigate the dependence of soot production on the flow history and local turbulence properties [9,10,11,12,13,14,15,[START_REF] Arias | Combustion Theory and Modelling[END_REF]. Alternatively, simpler configurations may be used to study flame-flowsoot interactions for a reduced computational cost. This is exemplified in the cases of a laminar pulsed flame [START_REF] Shaddix | [END_REF] or a diffusion flame wrapped-up by a line vortex [5]. In these two cases, measurements are more easily carried out than in fully turbulent flames because the phenomenon can be periodically reproduced. From a numerical standpoint, these configurations are attractive because they can be simulated using detailed models so that complex pro-1 Soot production indicates the net process comprising both formation and consumption contributions. cesses underlying soot production can be accurately described. Obviously, these fundamental studies do not replace the need for fully-turbulent simulations, but provide crucial information on the instantaneous and local effect of a vortex eddy on soot production that could be used to clarify the turbulent flame behavior and to guide the modeling efforts. The flame-vortex interaction is specifically useful and has been extensively explored to investigate the effects of unsteady strain rate and curvature induced by vortices on the flame front [18]. 
Among various generic configurations, the case of a planar diffusion flame wrapped-up by a line vortex is considered in the present investigation and simulated with a detailed model for gas dynamics and soot particle production that cannot be afforded in DNS of turbulent flames. Using an elegant experimental design, Cetegen and Basu [5] were able to obtain soot volume fraction distributions from LII imaging. Previous numerical studies of this configuration [START_REF] Gupta | Spring Technical Meeting of the Central States Section of the Combustion Institute[END_REF][START_REF] Mishra | [END_REF] have used simplified gaseous mechanisms and soot models based on semi-empirical twoequation representations, with a limited degree of generality. In the present work, we propose a detailed numerical simulation of soot-flame-vortex interaction based on a detailed description of both gas and soot, together with a direct comparison between experiments and simulations on the effect of vortices on soot production. Calculations are carried out to examine effects related to the injection side and to the vortex strength. The paper begins with a review of the detailed kinetic mechanism (Section 2). The numerical setup is briefly presented in Section 3. The interaction of a vortex with soot is analyzed in Section 4. The analysis focuses on the influence of curvature on the flame quantities governing soot production and on the effect of the flow field induced by the vortex on the soot layer. Detailed kinetic mechanism The kinetic scheme combines a detailed gas-phase mechanism (DGM) and a detailed soot mechanism (DSM) for the description of the formation and oxidation of soot. The DGM consists of ∼170 species and ∼6000 reactions, describing the high-temperature pyrolysis and oxidation for a wide range of hydrocarbon fuels [21]. The mechanism has been tested over a wide range of conditions [22,23]. The DSM was developed using the discrete sectional method [24]. Only are assumed to be spherical in shape with a mass density of 1500 kg m -3 [25]. BIN13 to BIN20 are treated as monodisperse aggregates, with fractal dimension of 1.8 [26]. The DSM features a total number of 100 lumped pseudo-species organized in 20 BINs, each of which has two or three subclasses (different H/C ratios), split into radical or molecular surfaces. Six heterogeneous reaction classes are accounted for with appropriate kinetic parameters: hydrogen-abstraction-carbon-addition (HACA) mechanism; inception; oxidation; surface growth; dehydrogenation; coalescence and aggregation. The total number of reactions for the DSM scheme is ∼ 10500. To enable computational simulations, the mechanism was reduced with the Species-Targeted Sensitivity Analysis (STSA) technique [START_REF] Stagni | Combust. Flame In press[END_REF]. The final mechanism, provided in the supplementary material, includes 156 species (97 gaseous species and 59 BINs) and ∼5600 reactions. Species transport properties are calculated from the standard molecular theory of gases. Soot particles and aggregates are treated as gaseous species, so that their binary mass diffusion coefficients are calculated on the basis of a proper extrapolation from the binary mass diffusion coefficients of larger PAHs. Typical values of the Schmidt number for the soot particles range from 4 to 50 (at 1000 K). Numerical setup The computational domain (Fig. 
1) reproduces an experimental configuration [5], consisting of a 2D channel with length equal to (3L + L w ) and width equal to L, where L = 40 mm and L w = 12 mm. The two incoming streams, a mixture of acetylene and nitrogen (0.25/0.75 on a molar basis) on one side and air on the other side, are separated by a splitter plate of thickness d w = 1.2 mm. Both streams are injected at atmospheric pressure and temperature of 300 K. The velocity of the incoming streams was fixed at u c = 0.15 m s -1 with a flat profile at the channel inlet. Vortices are created by modifying the velocity-time profile at injection: u inj (τ ) u c = 1 + βV 0 exp -ln τ 0.1 2 with β = 4.5 (1) where τ = u c t/L is the dimensionless time. The value of V 0 = 1.5, 2.0, 2.5 governs the circulation level and, consequently, the strength of the vortex induced by the velocity pulse. Details on the derivation and validation of the velocity boundary conditions are provided as supplementary materials. Numerical simulations were carried out using the laminarSMOKE framework [START_REF] Cuoci | [END_REF]29], a CFD code specifically designed to solve multidimensional laminar reacting flows with detailed kinetic mechanisms. A passive scalar Z (Z=0 in the air stream and Z=1 in fuel stream) was also transported, assuming a diffusivity equal to that of N 2 . This passive scalar Z measures the degree of mixing between air and fuel streams. The thermophoretic and Soret effects are accounted for in the calculation. A uniform spatial discretization of 100 µm is used on the first 3/4 of the configuration where the soot-vortex-flame evolution is evaluated, resulting in more than 20 grid points to describe the flame reaction zone. The resolution is halved in the last part of the grid, necessary to evacuate the vortex. Second-order centered spatial discretization schemes were applied, while the time step, after a convergence study, was fixed to 2 × 10 -5 s. Grid convergence was investigated by halving the number of cells, and noting that differences in soot volume fraction and particle number density remained below 5%. Flame-soot-vortex interaction Experimental and numerical results representative for the vortex interaction with the soot field are shown in Fig. 2 for both air (top) and fuel (bottom) vortex injections. They correspond to the case V 0 = 2.0 at time τ = 0.34. For these conditions, the numerical soot volume fraction f v field (second column) can be compared with experimental LII data [5] (first column). 2 It can be observed that soot particles form a layer that is rolled-up by the vortical field. A more pronounced roll-up of the soot layer is observed when the vortex is injected on the fuel side, which shows a fairly good agreement with the experimental results. In contrast, in the case of air vortex injection, the soot layer is less well wrapped by the vortex and there are noticeable differences between calculations and experimental data. Possible reasons for this difference will be discussed in Section 4.2, but it is important to highlight the fact that this is the first time that a comparison between experiments and numerical simulations based on a detailed soot description is attempted for a configuration describing the instantaneous and local effect of vortices on soot production. Results show that a simulation using stateof-the-art soot modeling has qualitative agreement with experimental data, but that additional modeling efforts are still needed to improve the accuracy of detailed soot descriptions. 
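For reference, the inlet forcing of Eq. (1) is straightforward to evaluate. In the sketch below the exponent is read as exp(-[ln(τ/0.1)]^2), which is one plausible grouping of the garbled source expression and should be checked against Ref. [5] and the supplementary material before reuse; the parameter values (β = 4.5, u_c = 0.15 m/s, V0 = 2.0) are those quoted in the text.

```python
import numpy as np

beta, V0, u_c = 4.5, 2.0, 0.15          # pulse parameters from Eq. (1) and Section 3

def u_inj(tau):
    """Inlet velocity pulse of Eq. (1); the grouping exp(-[ln(tau/0.1)]^2) is an
    assumed reading of the extracted formula, not a verified transcription."""
    return u_c * (1.0 + beta * V0 * np.exp(-(np.log(tau / 0.1)) ** 2))

tau = np.linspace(0.01, 0.5, 6)          # dimensionless time tau = u_c * t / L
print(np.round(u_inj(tau), 3))           # peaks near tau ~ 0.1, relaxes toward u_c
```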
Nevertheless, it is valuable to use these stateof-the-art data to examine the effects of vortex interactions on soot yield due to the reasonable agreement between experiments and numerical data. In agreement with previous numerical and experimental results [5,[START_REF] Gupta | Spring Technical Meeting of the Central States Section of the Combustion Institute[END_REF][START_REF] Mishra | [END_REF], it can be seen that the total soot volume fraction is slightly higher in the case where the vortex is injected on the air side, even if both flames are subjected to similar vortex strength. The spatial distribution of f v is not homogeneous along the soot layer and differs in the two cases considered in Fig. 2, showing that soot production is strongly dependent on the local interaction with the vortex. This is also valid for the soot number density field (Fig. 2, third column). Indeed, the soot particle distribution varies in space along the soot layer and differs in the two cases as can be deduced by examining the concentration of small spherical particles and aggregates (Fig. 2, last column). Temporal evolution of soot-flame-vortex interaction Simulations for smaller and higher vortex strengths (V 0 = 1.5 and V 0 = 2.5, respectively) were also carried out for both injection sides. The total soot mass Q s is obtained by integrating the soot mass over the whole computational domain. The evolution of the normalised total soot mass Q * s = Q s /Q steady s ( where Q steady s is the total soot mass of the steady flame) indicates that similar behaviors are observed for the six cases (Fig. 3). Classically, the stretch k imposed by the vortex to the flame front can be decomposed into strain rate a and curvature contributions: k = (δ ij -n i n j )∂ x j u i + S d ∂ x i n i = a + S d ∇ • n (2) where δ ij is the Kronecker delta, u is the velocity, S d is the displacement speed and n = ∇Z/|∇Z| is the normal to the flame front. Initially, the injection velocity drastically increases so that a strong strain rate is imposed to the flame by the vortex passage and the soot yield rapidly decreases. The soot volume fraction suppression is more effective for the strongest vortex in the first phase of the soot-vortex-flame interaction. Then, the injection velocity decreases to its initial state and the curvature becomes the main contributor to stretch since the flame is wrinkled by the rotating motion induced by the vortex. The soot yield increases and eventually exceeds its steady-state value. In this phase, the stronger the vortex, the higher the total soot production rate so that more soot is finally produced by the vortex than in the steady case. For higher values of τ (τ > 0.5), results are affected by the vortex interaction with walls and are less meaningful. Globally, the stronger the vortex is, the more important its effect on soot production will be for both the initial consumption and the second formation phases. Moreover, for a given vortex strength, the soot yield is slightly higher in the case where the vortex is formed on the air side. These findings are in general agreement with the experimental results. Curvature effect on soot production As already indicated, a high strain rate is imposed to the flame during the initial soot-vortex interaction phase so that the soot volume fraction is drastically diminished. Now, it is known that the soot volume fraction of diffusion flames decreases with increasing strain rates [30,31]. 
Unsteady effects of strain rate are commonly investigated by examining unsteady counterflow diffusion flames [32,33,[START_REF] Rodrigues | Submitted to[END_REF]. In contrast, the effect of flame curvature on soot production is less well documented and can be easily characterized by looking at the interaction of the formed vortex with the soot layer, i.e. for τ > 0.2. Soot volume fraction iso-contours colored by flame curvature are plotted in Fig. 4 for V 0 = 2.0 at τ = 0.34. An isoline of Z soot , defined as the Z value where the maximum of f v is located in the steady flame, is added to this graph and arrows indicate the flow direction. Isoline of Z OH , i.e. the Z-value where the maximum of OH radical is located in the steady flame, is also included to better locate lean (Z < Z OH ) and rich (Z > Z OH ) mixtures. High formation and oxidation regions are also identified. The vortex passage strongly modifies the spatial distribution of soot volume fraction and number density, depending on the side of the vortex injection. The variations can be understood by looking at the effect of the flame curvature on soot processes and of the flow field on the soot layer. Considering the case with air-injection (Fig. 4, top), regions A and A * are characterized by a negligible flame front curvature. Due to the flow field, soot produced at this point is convected outside this region, so that the soot population is mainly characterized by nuclei and small spherical particles which have no time to growth or agglomerate. As a consequence, the num- ber density and the spherical particle concentrations are highest in these zones. Point B presents a smaller value of the number density and of f v . Due to the velocity field imposed by the vortex, the flame front at this point is characterized by a high concavity towards the fuel side. Soot production is almost zero and the soot volume fraction is mainly convected in this region. Aggregation is the only process occurring for the solid phase, so that the number density decreases with the collision of smaller particles into bigger aggregates. On the contrary, region C is characterized by a strong convexity of the flame front towards the fuel side. Due to differential diffusion effects on PAHs, the soot precursors concentration is higher in this region so that an high soot mass formation is observed. In addition, due to the flow field induced by the vortex, this zone presents a strong preferential concentration of soot which enhance the number of collisions, i.e. the agglomeration process. The preferential concentration, together with increasing soot formation rate, explains the high concentrations of both aggregates and spherical particles. The convective motion of the vortex pushes the soot layer, whose diffusivity is negligible compared to the gaseous species, to penetrate the OH front at region D, where it is strongly oxidized explaining the drastically decreasing of f v . Only large aggregates, which survives after passing through the OHfront, reaches region E. These large particles hardly diffuse so that they can be convected by the flow field motion inside the vortex. The soot volume fraction field is markedly different when the vortex is injected on the fuel side (Fig. 4, bottom) but the soot behavior is a consequence of the same features observed for the air-side case. Once again, nucleation is From the presented cases, some conclusions can be drawn. 
The vortex passage only slightly increases the flame surface and the region where soot formation occurs, whereas soot occupies a wider region. The interaction of the soot layer with the vortex depends on the vortex-injection side and leads to different soot topologies and inhomogeneous soot layer characteristics, which can be explained by looking at the flame curvature. Two different sources of variability can be recognized. First, there is a differential diffusion effect between the gaseous and solid phases. A concave/convex curvature is the result of a convective motion towards lean/rich regions acting on both the gaseous and the solid phase. The solid phase, and in particular large aggregates characterized by high Schmidt numbers, is mainly governed by convection compared to the more diffusive gaseous species. As a consequence, the soot volume fraction will be found at leaner/richer mixtures under the effect of the convective motion of the vortex, generating a variability of f v in the Z-space. The second effect is caused by differential diffusion effects on PAHs. The flame front convexity enhances the heat transport from the flame towards the preheat region, increasing the PAH concentration and, consequently, soot production. The spatial variability of the soot layer is governed by the flow field induced by the vortex and the way the flame front is deformed by it. This qualitative analysis can be further supported by examining scatterplots of f v as a function of the passive scalar Z presented in Fig. 5. Here, the contributions of flat, convex and concave profiles are identified, as well as the results for the steady solution. It should be noted that both the maximum and the total f v are smaller than the steady values for both cases, since at τ = 0.34 the destructive effect of the strain rate has only just become negligible compared to the curvature effect (Fig. 3). Compared to the steady case, the soot volume fraction presents a sizable variability, occupying a wide zone in the f v -Z manifold. Variations are not only observed in f v values, but also in their position in Z-space. Moreover, the two scatterplots notably differ, highlighting the strong effect of the vortex-injection side on soot. The reasons for this variability may be understood by examining scatterplots of the soot mass production rate plotted in Fig. 6 (top). The soot mass formation rate is mainly localized at the same Z as observed at steady-state conditions, so that no greater variability in the unsteady case is observed in the Z-space for this quantity. Compared to the flat profile, concavity seems to have a negative effect on the formation of soot, whereas convexity increases the soot production in both injection cases. The unsteady values exceed the steady-state ones, showing that curvature, and in particular convexity, drives the total soot mass production of the flame-vortex-soot configuration above its steady value. A strong destruction rate characterizes the air-injection case in the convex region, corresponding to the region close to zone D in Fig. 4, where the soot layer is pushed against the OH front. The positive formation rate observed for the air-injection case is higher than that obtained in the fuel-injection case, leading to a higher total f v in Fig. 3. As deduced experimentally [5,[START_REF] Shaddix | [END_REF], the cause lies in the effect of flame curvature on the production of PAHs, which are necessary for nucleation and surface growth. Looking at results for heavy PAHs (Fig.
6, center), one observes that convexity towards the fuel strongly enhances their production due to the concentration of heat, transported from the flame towards the preheat region. A higher PAH concentration is observed in the air-injection case, since the convexity radius is considerably smaller at point C than at point G. This explanation, proposed in Ref. [5] to account for the higher levels of f v close to convex zones, is confirmed by the present calculations. Heavy PAHs seem to be less sensitive to concavity. The maximum values of light and heavy PAH mass fractions normalized by the maximum steady value are reported in Table 1 for both injection sides and the three vortex strenghts. From this analysis, it appears that there is a direct correlation between the weight of PAH and its sensitivity to curvature. The sole response of naphthalene (which is included in the light PAH class) and of the soot production rate are also added to the table. It can be concluded that whenever the heavy PAH concentration is not negligible, as in these calculations, their response to flame curvature plays a non-negligible role in the soot production process. In these cases, the reduced models should account for heavy PAHs to reproduce the flame curvature effect on soot production rate. The marked sensitivity of PAHs to flame curvature is expected to have a direct consequence on the volume fraction of small spherical particles, whose explain the variability in the Z-space observed for the soot volume fraction (Fig. 5). Results for the large aggregates (not shown) indicate a strong analogy with the scatterplots of f v , since they are the main contributors to soot volume fraction. Even if flat zones locate the maximum value at Z soot in the steady case, the concave (convex) zone is characterized by a strong variability for Z > Z soot (Z < Z soot ) for the fuel (air)-injection case. This can be explained by looking at the soot-flame-vortex interaction in Fig. 4. Due to differential diffusion between the solid particles and the gaseous species, soot aggregates are mainly driven by convection and affected to a lesser extent by diffusion, so that in the concave (convex) regions soot particles are pushed towards richer (leaner) zones due to vortex motion. This is even more pronounced when soot penetrates the vortex in richer zones in the case of fuel-injection and leaner zone for air-injection (points E and I in Fig. 4, respectively). The relevance of the high Schmidt number on the soot variability has already been observed in DNS of turbulent flames in the scalar dissipation rate space [14]. Here, it is possible to identify the effect of convexity and concavity on this variability by studying a simpler soot-flamevortex interaction. This effect is not observed on small spherical particles since their diffusivity is higher than that of aggregates and also because in the penetration zones the soot volume fraction is mainly determined by aggregates. Conclusions To improve the understanding of soot formation and oxidation in turbulent flames, it is useful to examine soot-vortex-flame interactions. This is accomplished in this article by detailed numerical modeling of the interaction between a line vortex and an acetylene-air laminar diffusion flame configuration, investigated experimentally in Ref. [5]. Findings from soot-flame-vortex interaction could be relevant to soot modeling in turbulent flames, which are know to be characterized by large curvature effects and strong and intermittent strain rates. 
For example, the behavior of reduced models could be usefully verified in this configuration in terms of curvature and strain rate effects on heavy PAHs, soot particles and aggregates. the most relevant details are provided in what follows. PAHs (Polycyclic aromatic hydrocarbons considered soot precursors) are organized in two classes: light PAHs, including species up to pyrene, and heavy PAHs, including species with more than 4 aromatic rings. Heavy PAHs and particles are discretized into 20 classes of pseudo-species (called BINs) with their masses doubled from one class to the next. PAHs of more than 20 carbon atoms constitute the first four BINs. The first soot particles (BIN5) are modeled as clusters containing 320 carbon atoms. Particles between BIN5 and BIN12 Figure 1 : 1 Figure 1: Schematic view of the computational domain: a planar diffusion acethylene/air flame is wrapped by a line vortex. Figure 2 : 2 Figure 2: Soot topology for V 0 = 2.0 at τ = 0.34 for air-injection (top) and fuel-injection (bottom). First column: LII soot experimental results [5]. Second column: numerical soot volume fraction. Third column: numerical soot number density. Last column: Iso-contours of spherical soot particles (copper colormap) and of soot aggregates (blue colormap). Figure 3 : 3 Figure 3: Time evolution of the normalized total soot mass. Air-side and fuel-side injections have been considered for the three experimental voltages (V 0 = 1.5, 2.0, 2.5,[5]). Figure 4 : 4 Figure 4: Soot-flame-vortex interaction for air (bottom) and fuel (top) vortex injection for case V 0 = 2.0 at τ = 0.34. Soot volume fraction iso-contours are colored by curvature (red-concave, blue-convex). The soot mass formation and oxidation regions are presented in gray and black, respectively. The iso-contour of Z soot is shown in black, where arrows indicate the flow direction, and of Z OH is shown in dashed green. the main soot formation process at points F and F * , characterized by a neg-ligible curvature. In the convex region G, the soot volume fraction increases rapidly due to a high PAH concentration. Point H is characterized by a high concavity of the flame and the soot mass formation is negligible here as for the air-injection case. The flow field generates a strong preferential concentration of soot in this point, characterized by a decrease of the soot number density due to aggregation. Due to the flow field induced by the vortex and the low diffusivity of the solid phase, point I is characterized by the presence of soot aggregates that are convected towards the vortex center. Figure 5 : 5 Figure 5: Scatterplot of soot volume fraction for air (left) and fuel (right) injection for case V 0 = 2.0 at τ = 0.34. Contributions for flat (black), concave (red) and convex (blue) zones are represented. Results for the steady flame are presented by grey lines. Vertical continuous and dashed lines indicate Z OH and Z soot , respectively. Figure 6 : 6 Figure 6: Scatterplot of soot mass production rate (top), heavy PAHs mass fraction (center) and spherical particles volume fraction (bottom) for air (left) and fuel (right) injection for case V 0 = 2.0 at τ = 0.34. Captions are the same as in Fig. 5. Results of soot volume fraction are compared to the soot LII data and the general experimental trends are reproduced. The proposed numerical strategy allows to examine effects of strain rate and flame curvature on soot production. 
The initial negative effect of strain rate, leading to a strong reduction of soot, is quickly balanced by the globally positive effects of curvature, so that the total soot volume fraction exceeds the steady value. Concentrations of heavy PAHs are strongly affected by flame curvature (convexity towards fuel increases their concentration), causing a strong variability in the soot production rate. The effect of curvature on PAHs increases with their sizes. Moreover, the flow field induced by the vortes is responsible for the variability of f v in Z-space since large aggregates, characterized by large Schmidt numbers, penetrate deeper into the vortex and are pushed towards leaner or richer regions depending on the vortex motion. Following the notation given in[5], the experimental results correspond to a reduced time τ D = t -τ R = 62 ms, where τ R ≈ 28 ms is the piston rise time. Acknowledgements This work was performed using HPC resources from GENCI-CINES (Grant 2016-020164). Dr. Franzelli acknowledges the support of the French Association of Mechanics (Association Franaise de Mcanique). Air-side Fuel-side V 0 1.5 2.0 2.5 1. 5 level strongly depends on surface growth processes (Fig. 6, bottom). The instantaneous soot particle volume fraction depends on the flame curvature and is higher than the steady value in both cases. However, it should be noticed that the large variability of heavy PAHs is not completely reflected by spherical soot particles. The reason is threefold. First, the production of spherical particles is governed also by the presence of small PAHs, which are less sensitive to curvature. Second, the higher positive soot formation due to high heavy PAHs concentration observed for the air-side injection is balanced by the strong negative destruction rate due to the flow field motion pushing the soot layer against the OH front. Third, soot formation is characterized by long chemical time scales so that the PAHs behaviour is not yet reflected on soot spherical concentrations. As a consequence, the spherical partical concentration is similar for both injection sides. For the same reason, more time is still necessary to observe an increase of the soot volume fraction. The behavior of heavy PAHs and small soot particles is not sufficient to
30,195
[ "7345", "1038181" ]
[ "416", "125443", "125443", "73500", "125443", "416" ]
01272954
en
[ "spi" ]
2024/03/04 23:41:48
2015
https://hal.science/hal-01272954/file/CTM_sprayflamelet.pdf
Benedetta Franzelli email: [email protected] Aymeric Vié email: [email protected] Matthias Ihme email: [email protected] On the generalisation of the mixture fraction to a monotonic mixing-describing variable for the flamelet formulation of spray flames Keywords: Laminar counterflow spray flames, Flamelet formulation, Mixture fraction, Effective composition, Bifurcation, Hysteresis Spray flames are complex combustion configurations that require the consideration of competing processes between evaporation, mixing and chemical reactions. The classical mixturefraction formulation, commonly employed for the representation of gaseous diffusion flames, cannot be used for spray flames due to its non-monotonicity. This is a consequence of the presence of an evaporation source term in the corresponding conservation equation. By addressing this issue, a new mixing-describing variable, called effective composition variable η, is defined to enable the general analysis of spray-flame structures in composition space. This quantity combines the gaseous mixture fraction Zg and the liquid-to-gas mass ratio Z l , and is defined as: dη = (dZg) 2 + (dZ l ) 2 . This new expression reduces to the classical mixture-fraction definition for gaseous systems, thereby ensuring consistency. The versatility of this new expression is demonstrated in application to the analysis of counterflow spray flames. Following this analysis, the effective composition variable η is employed for the derivation of a spray-flamelet formulation. The consistent representation in both effective-composition space and physical space is guaranteed by construction and the feasibility of solving the resulting spray-flamelet equation in this newly defined composition space is demonstrated numerically. A model for the scalar dissipation rate is proposed to close the derived spray-flamelet equations.The laminar one-dimensional counterflow spray flamelet equations are numerically solved in the η-space and compared to the physical-space solutions. It is shown that the hysteresis and bifurcation characterizing the flame structure response to variations of droplet diameter and strain rate are correctly reproduced by the proposed composition-space formulation. Introduction Motivated by the utilization of liquid fuels for transportation and propulsion systems, considerable progress has been made on the analysis of spray flames [START_REF] Faeth | Evaporation and combustion of sprays[END_REF][START_REF] Borghi | Background on droplets and sprays[END_REF][START_REF] Sirignano | Fluid dynamics and transport of droplets and sprays[END_REF][START_REF] Jenny | Modeling of turbulent dilute spray combustion[END_REF][START_REF] Sirignano | Advances in droplet array combustion theory and modeling[END_REF][START_REF] Sanchez | The role of separation of scales in the description of spray combustion[END_REF]. While gaseous diffusion flames are characterized by the competition between scalar mixing and chemistry, spray flames require the continuous supply of gaseous fuel via evaporation and transport to the reaction zone to sustain combustion. 
Because of this complexity, the investigation of spray flames in canonical combustion configurations, such as mixing layers, coflow and counterflow flames, represents a viable approach to obtain physical insight into the behavior of spray flames [START_REF] Continillo | Counterflow spray combustion modeling[END_REF][START_REF] Hollmann | Diffusion flames based on a laminar spray flame library[END_REF][START_REF] Russo | The extinction behavior of small interacting droplets in cross-flow[END_REF][START_REF] Russo | Physical characterization of laminar spray flames in the pressure range 0.1-0.9 MPa[END_REF][START_REF] Lerman | Spray diffusion flames -an asymptotic theory[END_REF]. Counterflow spray flames have been the subject of intensive research, and considerable numerical and experimental studies have been performed by considering laminar conditions [START_REF] Continillo | Counterflow spray combustion modeling[END_REF][START_REF] Li | Experimental and theoretical studies of counterflow spray diffusion flames[END_REF][START_REF] Darabiha | Laminar counterflow spray diffusion flames: A comparison between experimental results and complex chemistry calculations[END_REF][START_REF] Massot | Spray counterflow diffusion flames of heptane: Experiments and computations with detailed kinetics and transport[END_REF][START_REF] Gutheil | Counterflow spray combustion modeling with detailed transport and detailed chemistry[END_REF][START_REF] Gutheil | Multiple solutions for structures of laminar counterflow spray flames[END_REF][START_REF] Watanabe | Characteristics of flamelets in spray flames formed in a laminar counterflow[END_REF]. Theoretical investigations provided understanding about underlying physical processes, flame stabilization and extinction processes of spray flames [START_REF] Lerman | Spray diffusion flames -an asymptotic theory[END_REF][START_REF] Greenberg | Coupled evaporation and transport effects in counterflow spray diffusion flames[END_REF][START_REF] Dvorjetski | Steady-state and extinction analyses of counterflow spray diffusion flames with arbitrary finite evaporation rate[END_REF][START_REF] Dvorjetski | Analysis of steady state polydisperse counterflow spray diffusion flames in the large Stokes number limit[END_REF]. Experiments in counterflow flames have been performed to examine extinction behavior of mono-and polydisperse spray flames through strain variation and vortex interaction [START_REF] Santoro | An experimental study of vortex-flame interaction in counterflow spray diffusion flames[END_REF][START_REF] Santoro | Extinction and reignition in counterflow spray diffusion flames interacting with laminar vortices[END_REF]. More recently, bistable flame structures of laminar flames were considered for examining the bifurcation in three-dimensional turbulent counterflow spray flames [START_REF] Vié | Analysis of segregation and bifurcation in turbulent spray flames: a 3d counterflow configuration[END_REF]. As such, these studies demonstrated that the structure of spray flames is of fundamental relevance for a wide range of operating regimes. In the context of laminar gaseous diffusion flames, the flame structure is typically examined in composition space by introducing the gaseous mixture fraction Z g as an independent variable [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF]. 
For a given strain rate, the flame structure is then fully parameterized in terms of the gaseous mixture composition, providing a unique mapping between physical and composition space. This mixture-fraction formulation is also used in turbulent combustion models, enabling the representation of the turbulence-chemistry interaction through presumed probability density function models [START_REF] Borghi | The links between turbulent combustion and spray combustion and their modelling[END_REF][START_REF] Demoulin | Assumed pdf modeling of turbulent spray combustion[END_REF]. Another significant advantage of a mixture-fraction representation arises from the computationally efficient solution of the resulting flamelet equations in composition state. Therefore, extending the mixture-fraction concept to spray flames is desirable and enables the utilization of analysis tools that have been developed for gaseous flames. Unfortunately, this extension is non-trivial, since the classical gaseous mixturefraction definition looses its monotonicity due to evaporation [START_REF] Olguin | Influence of evaporation on spray flamelet structures[END_REF][START_REF] Vié | On the description of spray flame structure in the mixture fraction space[END_REF]. With the exception of pre-vaporized flames and other simplifying assumptions, the structure of spray flames cannot be studied in the classical mixture-fraction space. Previous works have dealt with the extension of the mixture fraction definition to spray flames. Sirignano [START_REF] Sirignano | A general superscalar for the combustion of liquid fuels[END_REF] and Bilger [START_REF] Bilger | A mixture fraction framework for the theory and modeling of droplets and sprays[END_REF] have investigated the definition of mixing-describing variables for two-phase combustion. Their works apply to the characterization of the mixture evolution from the droplet (or ligament) surface to the far field. Such an approach is only applicable if the diffusive layer around each droplet is small compared to the droplet interspacing. In cases where the droplet interspacing is too small compared to flame and diffusive scales, a mesoscopic point of view should be adopted and a continuum representation is required with regard to the mixture-fraction field [START_REF] Sirignano | Fluid dynamics and transport of droplets and sprays[END_REF]. Although Bilger's approach is able to recover the mesoscopic limit, the detailed representation of these scales is computationally expensive. In this scenario, extending the mixture fraction concept to spray flames is not straightforward. This issue was mentioned in [START_REF] Smith | Simulation and modeling of the behavior of conditional scalar moments in turbulent spray combustion[END_REF], and a total mixture fraction was introduced to account for both gas and liquid contributions. Luo et al. [START_REF] Luo | New spray flamelet equations considering evaporation effects in the mixture fraction space[END_REF] extended the classical mixture fraction flamelet transformation to spray flames, but only for pre-vaporized conditions that serve the definition of the boundary conditions for the gaseous flamelet equations. Olguin and Gutheil [START_REF] Olguin | Influence of evaporation on spray flamelet structures[END_REF][START_REF] Olguin | Theoretical and numerical study of evaporation effects in spray flamelet model[END_REF], Greenberg et al. 
[START_REF] Lerman | Spray diffusion flames -an asymptotic theory[END_REF][START_REF] Greenberg | Coupled evaporation and transport effects in counterflow spray diffusion flames[END_REF][START_REF] Dvorjetski | Steady-state and extinction analyses of counterflow spray diffusion flames with arbitrary finite evaporation rate[END_REF][START_REF] Dvorjetski | Analysis of steady state polydisperse counterflow spray diffusion flames in the large Stokes number limit[END_REF] and Maionchi and Fachini [START_REF] Maionchi | Simple spray-flamelet model: Influence of ambient temperature and fuel concentration, vaporisation source and fuel injection position[END_REF] directly solved the spray flame equations in physical space and subsequently represented the flame structure in the Z g -space; for example by separating the purely gaseous region of the flame from the evaporation zone [START_REF] Olguin | Influence of evaporation on spray flamelet structures[END_REF][START_REF] Olguin | Theoretical and numerical study of evaporation effects in spray flamelet model[END_REF]. However, due to the non-monotonicity, the classical gaseous definition cannot be used to solve the spray flamelet equations in composition space. By addressing these issues, this work proposes a new composition-space variable that enables the description of spray flames. The key idea for this formulation consists in identifying a monotonic representation of a mixing-describing coordinate for spray flames. This new coordinate, referred to as effective composition variable η, is both useful for analyzing the flame structure and for effectively solving the corresponding spray flamelet equations. In addition, the effective composition variable η is defined in such a way that it extends the classical flamelet formulation for gaseous diffusion systems [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF][START_REF] Williams | Combustion Theory[END_REF][START_REF] Liñán | The asymptotic structure of counterflow diffusion flames for large activation energies[END_REF][START_REF] Poinsot | Theoretical and Numerical Combustion[END_REF], thereby ensuring consistency. Compared to a non-monotonic definition, the use of the proposed effective composition variable for spray flames exhibits the following advantages: • it allows a mathematical well-posed definition of the transformation from physical to composition space, thereby providing a theoretical foundation for onedimensional laminar spray flamelet formulations. • it enables the representation of the system in composition space, eliminating the explicit dependence on the spatial coordinate, thereby providing a computationally more efficient solution. • it allows the analysis of spray flames in analogy to the work on gaseous flames based on a mixture-fraction formulation [START_REF] Liñán | The asymptotic structure of counterflow diffusion flames for large activation energies[END_REF][START_REF] Seshadri | Simulation of a turbulent spray flame using coupled pdf gas phase and spray flamelet modeling[END_REF]. • it provides direct insight of the flame structure without any additional postprocessing that is otherwise required, for example, when using the classical gaseous Z g -space. The remainder of this paper is organized as follows. The spray-flamelet equations in physical space and composition space are presented in Sec. 2. The effective composition formulation and its mathematical properties are discussed in Sec. 3. 
The versatility of this effective composition space formulation is demonstrated by considering two applications. The first application (Sec. 4) is concerned with the analysis of the spray-flame structure in composition space. The second application concerns the use of η to directly solve the spray-flame system in composition space. For this, the spray-flamelet equations in η-space are formulated in Sec. 5, and a closure model for the scalar dissipation rate is proposed in App. C. Comparisons of simulation results with solutions obtained in physical space are performed and different levels of model approximations are assessed. It is shown that the proposed formulation is able to reproduce the bifurcation and hysteresis, characterizing the flame-structure response to strain-rate and droplet-diameter variations. The paper finishes by offering conclusions and perspectives. Governing equations In the present work, we consider a mono-disperse spray flame in a counterflow configuration, and the governing equations are formulated in an Eulerian framework. In this configuration, fresh air is injected against a stream consisting of a fuel spray and pure air. Consistent with the classical analysis of gaseous flames, the following assumptions are invoked [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF][START_REF] Williams | Combustion Theory[END_REF][START_REF] Poinsot | Theoretical and Numerical Combustion[END_REF]: (1) Steady-state solution and low-Mach number limit. (2) Unconfined flame and constant thermodynamic pressure. (3) Single-component fuel. (4) Unity Lewis number. Equal but not necessarily constant diffusivities are assumed for all chemical species and temperature: D k = D th = λ/(ρc p ) ≡ D. Ficks' law without velocity correction is used for diffusion velocities [START_REF] Poinsot | Theoretical and Numerical Combustion[END_REF]. (5) Calorically perfect gas: c p,k = c p = constant. In this context, it is noted that the composition variable and formulation proposed in this paper are not restricted to these assumptions and can equally be extended in analogy to the theory for gaseous flames, and guidance on the extension to non-unity Lewis numbers is provided in App. E. Under these assumptions, the transport equations for gaseous and liquid phases in physical space are introduced next. From this, we derive the general spray flamelet formulation, which serves as foundation for the following analysis. Spray flame equations in physical space Gas-phase equations The gaseous phase is described by the transport equations for momentum, species mass fractions, temperature and mixture fraction Z g : ρu i ∂u j ∂x i = ∂ ∂x i µ ∂u j ∂x i - ∂p ∂x j + (u j -u l,j ) ṁ -f j , (1a) ρu i ∂Y k ∂x i = ∂ ∂x i ρD ∂Y k ∂x i + ωk + (δ kF -Y k ) ṁ , for k = 1, . . . , N s (1b) ρu i ∂T ∂x i = ∂ ∂x i ρD ∂T ∂x i + ωT + ṁ T l -T - q c p , ( 1c ) ρu i ∂Z g ∂x i = ∂ ∂x i ρD ∂Z g ∂x i + (1 -Z g ) ṁ , ( 1d ) where ρ is the density, p is the pressure, and u j is the j th component of the velocity vector. The production rate of species k is denoted by ωk , ωT = -Ns k=1 ωk W k h k /c p is the heat released by combustion, W k is the molecular weight of species k, h k is the sensible and chemical enthalpy of species k, c p is the heat capacity of the gaseous mixture, q is the ratio between heat transfer and mass transfer rates from the gas to each droplet, δ ij is the Kronecker delta, and N s is the total number of species. 
The total mass vaporization rate is ṁ, T is the temperature, µ is the dynamic viscosity of the gas mixture, and f j is the jth component of the drag force, which is here modeled by the Stokes law [START_REF] Maxey | Equation of motion for a small rigid sphere in a non uniform flow[END_REF]. Subscript l is used to identify quantities of the liquid phase and the subscript F refers to the fuel. The gaseous non-normalized mixture fraction is here formulated with respect to the carboncontaining species [START_REF] Bilger | On reduced mechanisms for methane-air combustion in nonpremixed flames[END_REF]: Z g = W F n CF W C Ns k=1 n C,k Y k W C W k , (2) where Y k is the mass fraction of species k, n C,k is the number of carbon atoms in species k and W C is the carbon molecular weight. Liquid-phase equations As we are considering spray combustion, the liquid phase is composed of a set of droplets. The following assumptions are made: • Monodisperse/Monokinetic/Mono-temperature spray: all the droplets in the same vicinity have the same diameter, velocity and temperature. • Dilute spray: the spray volume fraction is negligible compared to that of the gas phase. Consequently, the gas phase volume fraction is assumed to be one in the gas phase equations. • The only external force acting on the particle trajectory is the drag force. • One-way coupling and no droplet/droplet interaction or secondary break-up are considered. Consequently, the balance equations for the total liquid mass, the individual droplet mass, the liquid momentum, and the enthalpy of the liquid phase dh l = c l dT l read as [START_REF] Continillo | Counterflow spray combustion modeling[END_REF]: ∂(ρ l α l u l,i ) ∂x i = -ṁ , (3a) n l u l,i ∂m d ∂x i = -ṁ , (3b) ∂(ρ l α l u l,i u l,j ) ∂x i = -f j -ṁu l,j , (3c) ∂(ρ l α l u l,i h l ) ∂x i = -ṁ(h l -q + L v ) , (3d) where α l = n l πd 3 /6 is the liquid volume fraction, m d = ρ l πd 3 /6 is the individual droplet mass, ρ l is the liquid density, d is the droplet diameter, n l is the liquid droplet number density, c l is the liquid heat capacity, and L v is the latent heat of evaporation. By introducing the liquid-to-gas mass ratio: Z l = α l ρ l (1 -α l )ρ ≈ α l ρ l ρ , (4) Eqs. (3) can be written in non-conservative form: ρu i ∂Z l ∂x i = ∂[ρ(u i -u l,i )Z l ] ∂x i -ṁ (1 + Z l ) , (5a) ρu i ∂m d ∂x i = - ρ n l ṁ + ∂[ρ(u i -u l,i )m d ] ∂x i , (5b) ρu i ∂(u l,j Z l ) ∂x i = ∂[ρu l,j (u i -u l,i )Z l ] ∂x i -f j + ṁu l,j (1 + Z l ) , (5c) ρu i ∂(Z l h l ) ∂x i = ∂[ρh l (u i -u l,i )Z l ] ∂x i -ṁ(1 + Z l )h l + ṁ(L v -q) . (5d) In the following, Eqs. (1) and ( 5) are used to derive the spray flamelet equations. General spray flamelet formulation The general spray flamelet equations can be derived in analogy to the analysis for counterflow gaseous flames [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF]. The physical coordinate along the flamenormal direction can be expressed in terms of a generic variable ζ, which is assumed to monotonically increase from the oxidizer side to the spray injection side. By introducing the transformation from physical space to composition space, ( x 1 , x 2 , x 3 ) → (ζ(x i ), ζ 2 , ζ 3 ), all spatial derivatives can be written as: ∂ ∂x 1 = ∂ζ ∂x 1 ∂ ∂ζ , (6a) ∂ ∂x i = ∂ ∂ζ i + ∂ζ ∂x i ∂ ∂ζ for i = 2, 3. (6b) B. Franzelli, A. Vié, M. Ihme It is important to note that the strict monotonicity of the quantity ζ is essential to guarantee the mathematical well-posedness of the transformation in Eq. 
( 6), which is not defined for ∂ xi ζ=0. Peters assumed [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF] that derivatives along the ζ-direction are much larger compared to those along the tangential directions (ζ 2 and ζ 3 ). By neglecting these high-order contributions, the following operators are obtained: ρu i ∂φ ∂x i = Ξ ζ ∂φ ∂ζ , (7a) ∂ ∂x i ρD ∂φ ∂x i = ∂φ ∂ζ ρD 2 ∂ ∂ζ χ ζ 2D + χ ζ 2D ∂ρD ∂ζ + ρχ ζ 2 ∂ 2 φ ∂ζ 2 , (7b) ∂ ∂x i µ ∂φ ∂x i = ∂φ ∂ζ µ 2 ∂ ∂ζ χ ζ 2D + χ ζ 2D ∂µ ∂ζ + µ D χ ζ 2 ∂ 2 φ ∂ζ 2 , (7c) where Ξ ζ = ρu i ∂ζ ∂x i (8) is the material derivative of ζ, and χ ζ is the scalar dissipation of the variable ζ: χ ζ = 2D ∂ζ ∂x i ∂ζ ∂x i . ( 9 ) With this, the equations for the gas phase, Eqs. (1), can be rewritten as: du j dζ Ξ ζ - µ 2 d dζ χ ζ 2D - χ ζ 2D dµ dζ = µ D χ ζ 2 d 2 u j dζ 2 + (u j -u l,j ) ṁ -f j + J j dp dζ , (10a) dY k dζ Ξ ζ - ρD 2 d dζ χ ζ 2D - χ ζ 2D d(ρD) dζ = ρχ ζ 2 d 2 Y k dζ 2 + (δ kF -Y k ) ṁ + ωk , (10b) dT dζ Ξ ζ - ρD 2 d dζ χ ζ 2D - χ ζ 2D d(ρD) dζ = ρχ ζ 2 d 2 T dζ 2 + ṁ T l -T - q c p + ωT , (10c) dZ g dζ Ξ ζ - ρD 2 d dζ χ ζ 2D - χ ζ 2D d(ρD) dζ = ρχ ζ 2 d 2 Z g dζ 2 + (1 -Z g ) ṁ , (10d) where J j = -∂ζ ∂xj . The equations for the liquid phase are: Ξ ζ dZ l dζ = -ṁ (1 + Z l ) + Ψ [Z l ] , (11a) Ξ ζ dm d dζ = - ṁ ρ n l + Ψ [m d ] , (11b) Ξ ζ d (u l,j Z l ) dζ = -ṁu l,j (1 + Z l ) -f j + Ψ [u l,j Z l ] , (11c) Ξ ζ d (Z l h l ) dζ = -ṁ(1 + Z l )h l + ṁ(L -q) + Ψ [h l Z l ] , (11d) where Ψ[φ] is defined as the contribution to the slip velocity due to the drag force: Ψ [φ] = ∂ζ ∂x i ∂ ∂ζ [ρφ(u i -u l,i )] . (12) In accordance with Peters' theory for gaseous flames, the flamelet transformation assumes that the flame structure is locally one-dimensional. The formulation of an appropriate mixing-describing variable ζ is discussed in the following section. Composition-space definition for counterflow spray flames The spray flamelet equations, Eqs. ( 10) and [START_REF] Lerman | Spray diffusion flames -an asymptotic theory[END_REF], are derived by invoking two assumptions, namely the presence of a one-dimensional flame structure and the strict monotonicity of ζ with respect to the spatial coordinate. The last constraint is required to guarantee the existence of the derivative and that the solution remains single-valued. Identifying an appropriate definition of ζ that meets this last criterion is the central focus of this paper. Before introducing this variable, we will review previously suggested formulations from the literature. Gaseous mixture fraction The first candidate is the classical gaseous nonnormalized mixture fraction, ζ = Z g , (13) which is defined in Eq. ( 2) and the corresponding conservation equation is given by Eq. (1d). This definition was used previously to parameterize the spray-flamelet equations [START_REF] Olguin | Influence of evaporation on spray flamelet structures[END_REF]. As discussed in [START_REF] Olguin | Influence of evaporation on spray flamelet structures[END_REF][START_REF] Vié | On the description of spray flame structure in the mixture fraction space[END_REF][START_REF] Urzay | Flamelet structures in spray ignition[END_REF], the presence of a source term results in a non-conserved quantity for counterflow spray flames1 . Furthermore, due to competing effects between evaporation and mixing, Z g becomes non-monotonic, resulting in the multi-valued representation of the flame structure in Z g -space. 
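To make the preceding discussion concrete, the following sketch evaluates the carbon-based gaseous mixture fraction of Eq. (2), the scalar dissipation of a candidate variable ζ (Eq. (9)) along a one-dimensional flame-normal coordinate, and a simple monotonicity test that exposes the multi-valuedness of Z g. The dictionaries of species properties and the 1D profile arrays are assumed inputs of this illustration, not quantities defined in the paper.

```python
import numpy as np

W_C = 12.011  # molar mass of atomic carbon [kg/kmol]

def gaseous_mixture_fraction(Y, species_W, species_nC, fuel="C10H20"):
    """Carbon-based, non-normalized gaseous mixture fraction (cf. Eq. 2).

    Y          : dict species name -> 1D array of mass fractions along x
    species_W  : dict species name -> molar mass [kg/kmol]
    species_nC : dict species name -> number of carbon atoms
    """
    # mass fraction of carbon atoms carried by all species
    Y_carbon = sum(species_nC[k] * Y[k] * W_C / species_W[k] for k in Y)
    return species_W[fuel] / (species_nC[fuel] * W_C) * Y_carbon

def is_monotonic(zeta, tol=0.0):
    """True if the candidate composition variable never reverses slope."""
    d = np.diff(np.asarray(zeta, dtype=float))
    return bool(np.all(d >= -tol) or np.all(d <= tol))

def scalar_dissipation_1d(zeta, D, x):
    """chi_zeta = 2 D (d zeta / dx)^2 along the flame-normal coordinate (Eq. 9)."""
    zeta, D, x = map(np.asarray, (zeta, D, x))
    return 2.0 * D * np.gradient(zeta, x) ** 2
```

Applying is_monotonic to the Z g(x) profile of a counterflow spray flame returns False whenever evaporation and mixing compete, which is precisely the behavior that motivates the effective composition variable introduced next.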
While this prevents the direct solution of the spray flamelet equations in composition space, Z g has been used for the parameterization of spray flames using two different approaches: • Separating the spray zone and the purely gaseous zone to identify two distinct regions where Z g is monotonic as done in [START_REF] Hollmann | Diffusion flames based on a laminar spray flame library[END_REF]. However, it will be shown in the subsequent section that this approach does not always guarantee monotonicity in these phase-separated regions when diffusion and evaporation are not spatially separated. • Separating the flame structure at the tangent point d x Z g = 0 [START_REF] Olguin | Influence of evaporation on spray flamelet structures[END_REF]. Although this method provides a valid representation of the flame structure, the location of this inflection point is not know a priori and can therefore not be used in a straightforward manner as a separation indicator. Total mixture fraction An alternative to using Z g as describing composition variable is to also consider the contribution from the two-phase region in the definition of ζ. A possible definition of such a quantity was first proposed in [START_REF] Smith | Simulation and modeling of the behavior of conditional scalar moments in turbulent spray combustion[END_REF], and further B. Franzelli, A. Vié, M. Ihme investigated in [START_REF] Vié | On the description of spray flame structure in the mixture fraction space[END_REF][START_REF] Urzay | Flamelet structures in spray ignition[END_REF] as: ζ = Z t = Z g + Z l . (14) This definition can be considered as an extension of Eq. ( 2). In this context it is noted that the consistency of this formulation is guaranteed by the fact that Z g ≡ Y F for pure fuel. The conservation equation in physical space is given by [START_REF] Vié | On the description of spray flame structure in the mixture fraction space[END_REF][START_REF] Urzay | Flamelet structures in spray ignition[END_REF]: ρu i ∂Z t ∂x i = ∂ ∂x i ρD ∂Z g ∂x i + ρ(u i -u l,i )Z l -Z t ṁ . ( 15 ) The evaporation source term in this equation is negative, leading to a decreasing Z t along the material derivative. Consequently, this term will not affect the monotonicity. However, due to the differential diffusion between liquid and gaseous phase and the presence of the slip velocity, the monotonicity of Z t is not guaranteed. This issue was discussed in [START_REF] Vié | On the description of spray flame structure in the mixture fraction space[END_REF][START_REF] Urzay | Flamelet structures in spray ignition[END_REF] and demonstrated in [START_REF] Vié | On the description of spray flame structure in the mixture fraction space[END_REF]. Conserved mixture fraction Another definition of a mixture fraction can be obtained by eliminating the evaporation source term, which is achieved through the following definition: ζ = Z c = Z g + Z l 1 + Z l , (16) with corresponding conservation equation in physical space is then: ρu i ∂Z c ∂x i = 1 1 + Z l ∂ ∂x i ρD ∂Z g ∂x i + 1 -Z c 1 + Z l ∂ ∂x i [ρ(u i -u l,i )Z l ] . (17) This definition also suffers from contributions by slip velocity and differential diffusion between gaseous and liquid phase. Effective composition variable A composition variable that is strictly monotonic for counterflow spray flames can be obtained by restricting the two-dimensional space (Z g , Z l ) to the 1D manifold to which the solution belongs to. 
By doing so, one can define a composition space variable η as the metric of the 1D manifold, i.e. its tangent in the (Z g , Z l )-space: (dζ) 2 = (dη) 2 = (dZ g ) 2 + (dZ l ) 2 , (18) from which follows: dη = (dZ g ) 2 + (dZ l ) 2 , ( 19 ) where the sign is determined subject to the local flow structure. By combining Eqs. ( 19) and (5a) with Eq. (1d), the transport equation for η can be written as: ρu i ∂η ∂x i = sgn(u η ) ρu i ∂Z g ∂x i 2 + ρu i ∂Z l ∂x i 2 = sgn(u η ) ∂ ∂x i ρD ∂Z g ∂x i + (1 -Z g ) ṁ 2 + ∂[ρ(u i -u l,i )Z l ] ∂x i -ṁ(1 + Z l ) 2 , (20) where u η = u i ∂ xi η/ (∂ xj η) 2 is the gas velocity projected along the gradient of η. Note that this definition of η reduced to the classical gaseous mixture fraction expression in the absence of a liquid phase, guaranteeing the consistency with the single-phase flamelet formulation [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF]. This follows by imposing the condition: sgn(u η ) = sgn u i ∂ xi Z g / (∂ xj Z g ) 2 if Z l = 0 . (21) In this context, it is noted that the definition ( 19) of η is based on mathematical arguments and requirements of monotonicity, positivity and degeneracy. As such, it is not directly based on physical interpretation following classical mixture-fraction argumentation. However, the mathematical definition of Eq. ( 19) enables comparisons with measurements, and a link between η, Z g and Z l is provided under the assumption of separating mixing and evaporation zones (App. B). The particular advantage of definition [START_REF] Dvorjetski | Steady-state and extinction analyses of counterflow spray diffusion flames with arbitrary finite evaporation rate[END_REF] is that it enables a direct solution of the flamelet equations in composition space. Further, with regards to application to tabulation methods, it overcomes the ambiguity that is associated with the construction of different chemistry libraries to represent gaseous and two-phase zones. It has to be noted that the evaporation process contributes twice to the evolution of η, as it acts both on Z g and Z l . This double contribution is necessary for cases where the evaporation process does not happen in the mixing layer. This situation occurs for instance if a premixed two-phase flame propagates towards the fuel injection, if the liquid fuel vaporized prior to injection, or if preferential concentration occurs before the mixing layer. In this context it is also noted that η contains a source term and is therefore not a conserved scalar. Moreover, η, as defined in Eq. ( 19), is non-normalized. However, this does not represent an issue for numerical simulations since the resulting flamelet equations are numerically well behaved. In fact, this property is strictly not necessary to correctly identify the flame-normal direction, which only requires to monotonically increase from the oxidizer side to the spray injection side (or vice versa). The maximum value of η, found for the limiting case with separated mixing and evaporation zones, as provided in App. B, could be used to normalize this quantity, when necessary. Analysis of spray flame structure This work considers a counterflow configuration, which consists of two opposed injection slots that are separated by a distance L = 0.02 m along the x 1 -direction, see Fig. 1. On the fuel side, a mono-disperse kerosene (C 10 H 20 ) spray is injected with air. On the oxidizer side, pure air is injected. 
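Before turning to the counterflow results, a minimal numerical realization of Eq. (19) is sketched below: η is accumulated as the arc length of the trajectory traced by the solution in the (Z g, Z l) plane, measured from the oxidizer boundary. Treating η as this arc length is one simple way to satisfy the monotonicity and consistency requirements; the paper fixes the sign through the local flow structure (Eq. (21)). The profile arrays and their ordering are assumptions of this illustration.

```python
import numpy as np

def effective_composition(Zg, Zl, oxidizer_first=True):
    """Effective composition variable eta from 1D profiles of Zg and Zl (Eq. 19).

    d_eta = sqrt(dZg^2 + dZl^2) is accumulated as the arc length of the path
    traced in the (Zg, Zl) plane, so eta increases monotonically from the
    oxidizer side towards the spray-injection side. Where Zl = 0 and Zg grows
    away from the oxidizer, eta coincides with Zg, which keeps the definition
    consistent with the purely gaseous mixture fraction.
    """
    Zg = np.asarray(Zg, dtype=float)
    Zl = np.asarray(Zl, dtype=float)
    if not oxidizer_first:               # reorder so index 0 is the oxidizer boundary
        Zg, Zl = Zg[::-1], Zl[::-1]
    d_eta = np.sqrt(np.diff(Zg) ** 2 + np.diff(Zl) ** 2)
    eta = np.concatenate(([0.0], np.cumsum(d_eta)))
    return eta if oxidizer_first else eta[::-1]
```

Plotting temperature or species mass fractions against the returned η then gives a single-valued flame structure, whereas the same quantities are multi-valued when plotted against Z g.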
Similar to the works by Dvorjetski and Greenberg [START_REF] Dvorjetski | Steady-state and extinction analyses of counterflow spray diffusion flames with arbitrary finite evaporation rate[END_REF] and Lerman and Greenberg [START_REF] Lerman | Spray diffusion flames -an asymptotic theory[END_REF], the gaseous flow field B. Franzelli, A. Vié, M. Ihme is assumed to be described by a constant strain rate:1 u 1 = -ax 1 and u 2 = ax 2 . Compared to gaseous flames, the boundary conditions are not imposed at infinity in order to take into account the effect of evaporation on the mixing and reaction2 . The following gaseous boundary conditions are imposed at both sides: T 0 = 600 K, Y 0 O2 = 0.233, Y 0 N2 = 0.767. For the liquid phase at the spray side, the liquid-to-gas mass ratio is Z 0 l = 0.2. In the present study, we examine effects of the droplet diameter of the injected spray, d 0 , and the strain rate, a, on the flame structure. To focus on the coupling between mass transfer, mixing and reaction processes, approximations on the evaporation model, the liquid velocity and the temperature have been invoked for the numerical solutions of the spray flame equations. These assumptions and the resulting system of equations are presented in App. A. The reaction chemistry developed in [START_REF] Franzelli | A two-step chemical scheme for large eddy simulation of kerosene-air flames[END_REF] for kerosene/air flames is used in the following. Choice of composition-space variable The solution of the counterflow spray flame at atmospheric pressure for d 0 = 40 µm and a = 100 s -1 in physical space is shown in Fig. 2. The gaseous fuel from the droplet evaporation is consumed in the reaction zone, which is characterized by the high temperature region and product concentration. As a result of the fuel-rich injection condition, all oxygen that is injected at the fuel side is consumed. The excess fuel is eventually consumed in the diffusion region, where it reacts with the oxygen that is provided from the oxidizer stream. In the following, the evaporation zone (Z l > 0) identifies the spray side of the flame, and the gas side of the flame coincides with the region where Z l = 0. The different definitions for mixture fraction are evaluated and compared in Fig. 2(b). This comparison shows that gaseous (Z g ), total (Z t ) and conserved (Z c ) mixture fractions are not monotonic, which is a result of the slip velocity, the evaporation and differential diffusion effects 1 between liquid and gaseous phases. It is noted that such non-monotonic character is not due to the constant strain rate assumption, and the same effect has been observed for variable strain rate spray flames in [START_REF] Vié | On the description of spray flame structure in the mixture fraction space[END_REF]. As shown in Fig. 3(a), the spray flame structure can not be easily studied in the classical mixture-fraction space. The potential of representing the spray flame structure in Z g -space is assessed by separating the solution into two parts following two distinct approches: by distinguishing between gas and spray regions [START_REF] Hollmann | Diffusion flames based on a laminar spray flame library[END_REF] or by using the maximum value of Z g as a separation threshold [START_REF] Olguin | Influence of evaporation on spray flamelet structures[END_REF]. However, as shown in Fig. 
3(b), representing the flame structure in the Z g -space by separating the solution into gas and spray regions is not adequate since the solution is not necessarily unique due to the non-monotonicity of Z g in the spray region. The second strategy circumvents this issue (cfr. Fig. 3(c)), but unfortunately, the a priori evaluation of the maximum value of Z g is not possible, so that this separation strategy cannot be used in a straightforward manner. The newly proposed composition variable η addresses both issues, and the flame structure as a function of η is shown in Fig. 3(d). Compared to the mixture-fraction parameterization with respect to Z g and Z t , the solution is guaranteed to have a unique value for any given η. Moreover, compared to the two-zone separation, this parameterization eliminates the need for a separation criterion. The flame structure on the spray side can be correctly represented when working in physical space or in η-space. Flame structure in effective composition space The counterflow spray-flame equations (Eq. (A3)) are solved in physical space and the effective composition variable η is used to analyze the flame structure for different values of d 0 and a. The solutions for Z g and Z l are compared with results from an asymptotic analysis. The derivation of the analytical solution is provided in App. B, and is obtained under the assumption that evaporation and diffusion occur in two distinct regions. The analytic solutions for Z g and Z l present piecewise linear behaviors with respect to η when the evaporation is completed without interaction with the diffusion process. The gaseous mixture fraction reaches its maximum value Z * g = Z 0 l /(1+Z 0 l ) = 0.166 at Z l = 0. The spray side is then located at η > Z * g and is mainly governed by evaporation. In contrast, the gas side (η ≤ Z * g ) is characterized by diffusion. By construction, η coincides with Z g on the gas side, thereby retaining consistency with the mixture-fraction formulation for purely gaseous flames. Results for different initial droplet diameters and strain rates are illustrated in Figs. 4 and5, showing the solution in physical space (left) and in effective composition space (middle). The location separating the evaporation and mixing B. Franzelli, A. Vié, M. Ihme regions, is indicated by the vertical blue line. To assess the significance of the diffusion process at the spray side, a budget analysis of the Z g -transport equation (1d) is performed. In this budget analysis, the contribution of each term appearing in Eq. (1d), i.e. advection, diffusion and evaporation, is evaluated. Compared to the work of [START_REF] Olguin | Theoretical and numerical study of evaporation effects in spray flamelet model[END_REF], the contribution of the evaporation to the budget of Z g is not split, since both terms in Eq. (1d) relate to the sole evaporation process. These results are presented in the right column of Figs. 4 and5. The comparison of the results with the asymptotic solutions also allows to quantify the diffusion contribution at the spray side without looking at the budget analysis. Discrepancies with the asymptotic solutions will occur when the diffusion and evaporation zones overlap. Indeed, diffusion contributions in the spray region are apparent in Figs. 4 and5 as deviation from the linear behavior of Z g with respect to η on the spray side. The region where diffusion affects the results is then presented in gray in all figures based on the Z g -profiles. 
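A possible implementation of the budget analysis described above is sketched here: each term of Eq. (1d) is evaluated from the one-dimensional profiles, and the maximum of Z g is compared with the asymptotic limit Z*g = Z0 l /(1 + Z0 l) that is reached only when evaporation completes before the diffusion layer. The profile arrays (x, rho, u, D, Zg, mdot) are assumed inputs extracted from a physical-space solution.

```python
import numpy as np

def Zg_budget(x, rho, u, D, Zg, mdot):
    """Termwise budget of the Zg transport equation (Eq. 1d) along x.

    Returns (advection, diffusion, evaporation); for a converged solution,
    advection ~ diffusion + evaporation at every point.
    """
    x, rho, u, D, Zg, mdot = map(np.asarray, (x, rho, u, D, Zg, mdot))
    dZg = np.gradient(Zg, x)
    advection = rho * u * dZg
    diffusion = np.gradient(rho * D * dZg, x)
    evaporation = (1.0 - Zg) * mdot
    return advection, diffusion, evaporation

def overlap_indicator(Zg, Zl0):
    """Compare max(Zg) with the asymptotic limit Zg* = Zl0 / (1 + Zl0);
    a deficit signals overlapping evaporation and mixing regions."""
    return float(np.max(np.asarray(Zg))), Zl0 / (1.0 + Zl0)
```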
The blue vertical line separates the spray side from the gas side based on the Z l -profiles. Effects of droplet diameter on spray-flame structure Results for a constant strain rate of a = 100 s -1 and three different initial droplet diameters of d 0 = {20, 40, 80} µm are presented in Fig. 4. For d 0 = 20 µm (Fig. 4(a)), the liquid fuel fully evaporates before reaching the flame reaction zone, and the high temperature region is confined to the gas region of the flame. By considering the budget analysis, it can be seen that the diffusion contribution on the spray side is negligible for small droplet diameters. This is further confirmed by comparisons with the asymptotic solution for the gaseous mixture fraction (shown by symbols), which is in very good agreement with the simulation results. By increasing the initial droplet diameter to d 0 = 40 µm, shown in Fig. 4(b), it can be seen that a small amount of liquid fuel reaches the preheat zone of the flame. The evaporation is not separated anymore from the diffusion region: as shown in the right panel of Fig. 4(b), the diffusive part of the budget can no longer be neglected close to the maximum value of Z g . This may also be recognized by comparing the numerical results with the asymptotic profiles. Here, the maximum values for η and Z g are small compared to the analytic solution, provided that the underlying modeling hypothesis of distinct evaporation and mixing zones is invalid. For the case with d 0 = 80 µm (Fig. 4(c)) liquid fuel is penetrating into the reaction zone, and a high temperature region and a second heat-release region on the spray side can be observed. This complex flame structure is clearly visible in the η-space. Moreover, as evidenced by the overlap between the gray region and B. Franzelli, A. Vié, M. Ihme the liquid volume fraction Z l , as well as by the budget analysis, both diffusive and evaporative contributions are mixed. These interaction processes are not represented by the asymptotic solution, which relies on the spatial separation between both processes. Considering the η-space, the effect of the droplet diameter on the flame structure is clearly identified. For all three cases considered, the first temperature peak is located on the gas side at stoichiometric condition. However, with increasing initial droplet diameter, a second temperature peak is formed on the spray side, which identifies the transition from a single-reaction to a double-reaction flame structure for large droplets, as observed in [START_REF] Gutheil | Multiple solutions for structures of laminar counterflow spray flames[END_REF][START_REF] Vié | Analysis of segregation and bifurcation in turbulent spray flames: a 3d counterflow configuration[END_REF]. Moreover, by comparing profiles of Z g and Z l with the analytic solution, the diffusive contribution on the spray side can be clearly recognized. By increasing the droplet diameter, diffusion effects become increasingly important in the spray region, and the diffusive processes overlap with evaporation. These effects are not reproduced by the analytic solution that is derived in App. B. Effects of strain rate on spray-flame structure Results for different strain rates a = {200, 400, 600} s -1 and fixed initial droplet diameter of d 0 = 40 µm are presented in Fig. 5. Compared to the results in physical space for a strain rate of a = 100 s -1 (Fig. 4(b)), the flame structure in Fig. 5(a) is confined to a narrow region for a = 200 s -1 . 
However, the representation of the flame structure with respect to the effective composition variable η provides a clear description of the different regions that are associated with heat release and diffusion. The comparison with the analytic profiles provides an assessment of competing effects between diffusion, advection, and evaporation. The flame structure for a strain rate of a = 400 s -1 is shown in Fig. 5(b). For this condition, a double-flame structure is observed in which the primary heat-release zone is formed on the spray side and the unburned vaporized fuel is consumed in a secondary reaction zone on the gaseous side of the flame. This result is similar to that presented in [START_REF] Vié | On the description of spray flame structure in the mixture fraction space[END_REF], but has the opposite behavior compared to the findings of [START_REF] Olguin | Theoretical and numerical study of evaporation effects in spray flamelet model[END_REF], for which a double-flame structure is observed for low strain rates. Since, however, this study used methanol or ethanol, for which the latent heat is twice larger than that of kerosene used here and in [START_REF] Vié | On the description of spray flame structure in the mixture fraction space[END_REF], there is no contradiction between the three studies. The different reaction zones are conveniently identified in composition space, and the budget analysis provides a clear description of the contributions arising from a balance between diffusion and advection in the absence of evaporation effects. By further increasing the strain rate to a value of a = 600 s -1 a high-temperature region is observed on the spray side (Fig. 5(c)). However, compared to the case with a = 400 s -1 the two heat-release zones are closer without exhibiting a significant reduction in temperature. At this condition, the flame on the gas side is highly strained, leading to a reduction of the maximum temperature (from 2400 to 2000 K) and both temperature peaks are located on the spray side. In comparison, the maximum temperature on the spray side is less affected by variations in strain rate. Derivation of spray flamelet equations in effective composition space One of the main motivations for introducing the monotonic composition-space variable η is to enable the direct solution of Eqs. [START_REF] Russo | Physical characterization of laminar spray flames in the pressure range 0.1-0.9 MPa[END_REF] and [START_REF] Lerman | Spray diffusion flames -an asymptotic theory[END_REF] in composition space. Rewriting Eq. ( 8) by introducing the effective composition space variable η and the transformation operators [START_REF] Continillo | Counterflow spray combustion modeling[END_REF], the term Ξ η (corresponding to the advection term in Eq. ( 20)) can be written as: Ξ η = sgn(u η ) ρu i ∂Z g ∂x i 2 + ρu i ∂Z l ∂x i 2 , ( 22 ) = sgn(u η ) dZ g dη ρD 2 d dη χ η 2D + χ η 2D d(ρD) dη + ρχ η 2 d 2 Z g dη 2 + (1 -Z g ) ṁ 2 + χ η 2D d[ρZ l (u i -u l,i )] dη -ṁ (1 + Z l ) 2 1/2 . ( 23 ) By assuming a constant pressure along the η-direction, we obtain the complete B. Franzelli, A. Vié, M. 
Ihme spray-flamelet equations: Ξ * η du j dη = µ D χ η 2 d 2 u j dη 2 + (u j -u l,j ) ṁ -f j , (24a) Ξ † η dY k dη = ρχ η 2 d 2 Y k dη 2 + (δ kF -Y k ) ṁ + ωk , (24b) Ξ † η dT dη = ρχ η 2 d 2 T dη 2 + ṁ T l -T - q c p + ωT , (24c) Ξ † η dZ g dη = ρχ η 2 d 2 Z g dη 2 + (1 -Z g ) ṁ , (24d) Ξ η dZ l dη = -ṁ (1 + Z l ) + Ψ [Z l ] , (24e) Ξ η dm d dη = - ṁ ρ n l + Ψ [m d ] , (24f) Ξ η d(u l,j Z l ) dη = -f j -ṁu l,j (1 + Z l ) + Ψ [u l,j Z l ] , (24g) Ξ η d(Z l h l ) dη = -ṁh l (1 + Z l ) + ṁ(L v -q) + Ψ [h l Z l ] , (24h) where the following quantities are introduced: Ξ * η =Ξ η - µ 2 d dη χ η 2D + χ η 2D dµ dη , (25a) Ξ † η =Ξ η - ρD 2 d dη χ η 2D + χ η 2D d(ρD) dη , (25b) Ψ [φ] = ∂η ∂x i ∂ ∂η [ρφ(u i -u l,i )] , (25c) χ η =2D ∂η ∂x i 2 . ( 25d ) To confirm consistency, it can be seen that the spray-flamelet formulation [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF] reduces to the classical gaseous mixture-fraction formulation in the absence of a liquid phase. Moreover, its consistency is guaranteed by construction, since no assumption has been applied to rewrite the general equation system, Eqs. ( 10) and [START_REF] Lerman | Spray diffusion flames -an asymptotic theory[END_REF], into the formulation [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF], except for d η p = 0. To solve Eqs. [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF] in the effective composition space, closure models are required for the terms ∂ xi η and (∂ xi η) 2 that appear in the expressions for the slip velocity and the scalar dissipation rate (Eqs. (25c) and (25d)). Before discussing in Sec. 5.2 the validity of the closure models developed in Apps. C and D, we will first verify the feasibility of directly solving Eqs. [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF] in composition space through direct comparisons with spray-flame solutions from physical space. For this, the sprayflamelet equations [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF] are solved using expressions for χ η and Ψ that are directly extracted from the physical-space spray-flame solutions. In the following, the assumptions described in App. A will be used to simplify the numerical simulations. However, it is noted that the spray flamelet equations ( 24) are general and do not rely on such assumptions. Feasibility of η-space simulations A spray-flamelet formulation has been proposed in Z g -composition space by [START_REF] Olguin | Influence of evaporation on spray flamelet structures[END_REF]. However, due to the non-monotonicity of Z g , the system could not be directly solved in composition space. Instead, the system was solved in physical space and contributions of each term from the solution of the counterflow spray flame was post-processed in the Z g -space. In contrast, the introduction of η enables the direct solution of the spray-flamelet equations in composition space. To demonstrate the consistency of the sprayflamelet formulation, one-dimensional counterflow spray flames are solved in ηspace by invoking the assumptions that were introduced in App. A. A direct comparison of the solutions obtained in physical space using 400 mesh points with adaptive refinement (solid lines) and in composition space with 100 mesh points with equidistant grid spacing (symbols) are shown in Fig. 6. 
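To make the numerical ingredients of the η-space solution concrete (equidistant grid, central differences for diffusion, first-order upwind differences for advection, explicit pseudo-time stepping, cf. App. A.2), a minimal Python sketch for a single scalar equation of the generic form Ξ dφ/dη = (ρχ_η/2) d²φ/dη² + S(φ, η) is given below. The function and argument names are illustrative, and this is not the solver used for the results reported here; coupling the full system (24) is of course substantially more involved.

import numpy as np

def solve_scalar_flamelet(eta, xi, rho_chi_half, source, phi_left, phi_right,
                          dtau=1.0e-7, n_iter=200000, tol=1.0e-10):
    """Pseudo-time relaxation of xi dphi/deta = rho*chi/2 d2phi/deta2 + source
    on an equidistant eta grid: central differences for diffusion, first-order
    upwind differences for advection, explicit Euler pseudo-time stepping."""
    phi = np.linspace(phi_left, phi_right, eta.size)    # initial guess
    d_eta = eta[1] - eta[0]
    for _ in range(n_iter):
        d2phi = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / d_eta**2
        dphi_up = np.where(xi[1:-1] > 0.0,
                           (phi[1:-1] - phi[:-2]) / d_eta,   # upwind, xi > 0
                           (phi[2:] - phi[1:-1]) / d_eta)    # upwind, xi < 0
        rhs = rho_chi_half[1:-1] * d2phi - xi[1:-1] * dphi_up + source(phi, eta)[1:-1]
        phi_new = phi.copy()
        phi_new[1:-1] += dtau * rhs
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new
        phi = phi_new
    return phi

# Sanity check: pure mixing of a passive scalar between Dirichlet boundaries.
eta = np.linspace(0.0, 1.0, 101)
phi = solve_scalar_flamelet(eta, xi=np.zeros_like(eta),
                            rho_chi_half=np.full_like(eta, 1.0e-3),
                            source=lambda p, e: np.zeros_like(p),
                            phi_left=0.0, phi_right=1.0)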
The operating conditions correspond to the case discussed in Sec. 4 (d 0 = 40 µm and a = 100 s -1 ). The excellent agreement between both solutions confirms the validity of the newly proposed spray-flamelet formulation for providing a viable method for the flame-structure representation and as method for solving the spray-flamelet equations in composition space. obtained from the solution in physical space (symbols) and in η-composition space (solid lines). To facilitate a direct comparison, χ η is extracted from the x-space solution. Closure models for χ η and Ψ In this section, the performance of the closure model for the scalar dissipation rate χ η on the simulation results is assessed. Here, we consider the formulation of χ η developed in App. C. This closure model is based on the linearization of the evaporation model, controlled by the constant vaporization time τ v , and the spatial separation of evaporation and diffusion. A model for the slip-velocity term Ψ, under consideration of the small Stokes-number limit based on the drag Stokes number St d = aτ d , is also provided in App. D. Since the numerical simulation considers the limiting case of zero slip velocity, the only unclosed term is the scalar dissipation rate χ η . This term is essential not only to characterize the gas side of the flame structure, but also to account for effects of advection, mixing, and evaporation on the liquid spray side. To ensure consistency with the assumptions that were introduced in developing the closure for χ η in App. C, we utilize the linearized evaporation model, which introduces a constant evaporation time τ v (see Eq. (B2) in App. B). The solution of the spray-flame equations in physical space is then compared against the solution obtained by solving the spray-flamelet equations in η-space, for which χ η is either directly extracted from the solution in physical space or from the analytical expression given by Eq. (C5). Two cases are considered here: The comparison of the flame structures for τ v = 0.005 s is presented in Fig. 7. The maximum value of η is slightly overestimated when using the analytical expression for χ η , resulting in a small shift of the flame-structure profile in effective composition space. This can be attributed to the fact that evaporation and diffusion overlap in a small region. However all solutions give comparable results, confirming the validity of the model for small Stokes numbers. τ v = 0. The flame structure for τ v = 0.02 s is analyzed in Fig. 8. The flame structure is substantially different from the other case, showing the presence of a double-flame and an overlap of the evaporation and diffusion regions. The results in effective composition space are in good agreement with the physical-space solution, but some differences can be seen in the region where evaporation and diffusion overlap. Radicals and intermediate species are expected to be more sensitive to strain rate and, consequently, to be more sensitive to the closure model for χ η . This can be observed by comparing the CO mass fraction in Figs. 7(b) and 8(b). For τ v = 0.005 s, the assumptions underlying the χ η closure model are verified, leading to a good agreement between the physical results and the two composition-space solutions. In contrast, for τ v = 0.02 s diffusion and evaporation overlap, violating the assumptions that we invoked in the development of the closure for χ η . 
Indeed, some discrepancies for the CO-profile are noted for the calculation with the analytical closure model, whereas the calculation using χ η extracted from the x-space solution is still in good agreement with the physical space solution. Nevertheless, the overall agreement remains satisfactory for all simulations. The same analysis was performed using the d2 -evaporation model of App. A (Eq. (A1)) and variable density. Results show the same trend discussed for constant τ v , and this will be further examined in the following section. Although further improvements for the closure model of χ η are desirable to extend its applicability to larger values of τ v , results obtained from the η-space solution are in satisfactory agreement with the x-space solutions. Effect of droplet diameter and strain rate: bifurcation and hysteresis Effects of droplet diameter and strain rate on the flame structure are examined by solving the spray flamelet equations (Eqs. (A6)) in η-space using the analytical closure for χ η and the d 2 -evaporation model that we introduced in App. A1 . Starting from the solution for d 0 = 10 µm and a = 100 s -1 , the droplet diameter at injection is successively increased until d 0 = 80 µm at an increment of 10 µm. Results for d 0 = {20, 40, 80} µm are presented in Fig. 9. It can be seen that for small droplet diameters a single-reaction structure is observed whereas for larger droplet diameters (d 0 > 50 µm) the flame is characterized by a double-reaction structure. Starting from the solution for d 0 = 80 µm, the droplet diameter at injection is incrementally decreased until d 0 = 10 µm. The double-reaction structure is retained until d 0 = 40 µm with a transition from double-to single-reaction structure occurring at d 0 = 30 µm. Hence, for a droplet diameter between d 0 = 40 µm and d 0 = 60 µm, depending on the initial condition two different flame structures are found. This is shown for the case of d 0 = 40 µm in Fig. 9(b), obtained when increasing the droplet diameter, and in Fig. 9(d), corresponding to the transition from double-to single-reaction structure. The occurrence of this bifurcation was suggested by Continillo and Sirignano [START_REF] Continillo | Counterflow spray combustion modeling[END_REF] and confirmed by Gutheil [START_REF] Gutheil | Multiple solutions for structures of laminar counterflow spray flames[END_REF], and is B. Franzelli, A. Vié, M. Ihme attributed to the increased nonlinearity that is introduced through the evaporation term. Capturing this phenomenon is a confirmation of the suitability of our flamelet formulation for the description of the physics of spray flames. The behavior of the flame to a variation in the droplet diameter strongly depends on the evaporation model and the reaction chemistry. Vié et al. [START_REF] Vié | Analysis of segregation and bifurcation in turbulent spray flames: a 3d counterflow configuration[END_REF] identified a hysteresis for droplet diameter variations, which was characterized by a doublebranch structure. Following this analysis, the mean flamelet temperature is used as a robust metric to distinguish between single-and double-reaction structures: T = 1 max(η) max(η) 0 T (η) dη . ( 26 ) In the following, the mean flame temperature is normalized by the corresponding value for d 0 = 10 µm and a = 100 s -1 . Results for variations in droplet diameter are shown in Fig. 10 to represent the hysteresis loop. Results from the physical space are also included in Fig. 10 for comparison. 
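The post-processing behind Fig. 10 can be summarized in a short Python sketch: the mean flamelet temperature of Eq. (26) is evaluated with the trapezoidal rule, and the hysteresis loop is traversed by sweeping the injection diameter up and then down while re-using the previous converged state as initial condition. The solver handle solve_flamelet is a placeholder for an η-space solution of Eqs. (24)/(A6) and is not provided here.

import numpy as np

def mean_flamelet_temperature(eta, T):
    """Discrete analogue of Eq. (26): eta-averaged flamelet temperature."""
    return np.trapz(T, eta) / eta.max()

def diameter_hysteresis(solve_flamelet, diameters_up):
    """Sweep the injection diameter up and then down, re-using the previous
    converged state as initial guess; return mean temperatures normalized by
    the value at the smallest diameter of the increasing branch (Fig. 10).
    solve_flamelet(d0, init) is assumed to return (eta, T, state)."""
    results, state = {}, None
    for branch, diameters in (("up", diameters_up), ("down", diameters_up[::-1])):
        for d0 in diameters:
            eta, T, state = solve_flamelet(d0, init=state)
            results[(branch, d0)] = mean_flamelet_temperature(eta, T)
    ref = results[("up", diameters_up[0])]
    return {key: value / ref for key, value in results.items()}

# Diameters in micrometers, 10 to 80 in steps of 10, as in Sec. 5.3:
d_up = np.arange(10, 90, 10)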
The hysteresis behavior is captured by both formulations, and slightly higher values for the double-reaction structure are obtained from the solution in physical space. The effect of the strain rate is further investigated. Starting from the solution behavior of the x-space solution from Sec. 4, with a transition from a single-to a double-reaction structure at a = 350 s -1 . However, when starting from a doublereaction solution for a > 350 s -1 and decreasing the strain rate, the flame retains its double-reaction structure. Moreover, it has been verified that when starting from the double-reaction solution for d 0 = 40 µm and a = 100 s -1 , the doublereaction structure is retrieved both by increasing and by decreasing the droplet diameter. Consequently, a stable branch is identified for which the flame structure is of double-reaction type, whereas the solution stays on the lower single-reaction structure branch of Fig. 10(b) as long as the strain rate remains below 350 s -1 . This type of bifurcation was also observed in [START_REF] Vié | Analysis of segregation and bifurcation in turbulent spray flames: a 3d counterflow configuration[END_REF], where two branches were identified without the occurrence of an hysteresis 1 . It may also be noted that the temperature is overestimated for the highest values of the strain rate when solving the system in η-space. This is due to the fact that the assumptions underlying the closure for χ η are not valid for high strain rate values, as discussed in Sec. 4.2. However, the proposed closure for χ η is a first attempt to model the scalar dissipation rate of spray flame. With its shortcomings, the proposed η-space formulation is able to reproduce effects of droplet diameter and strain rate on the spray flame structure. This capability of the spray-flamelet formulation was further demonstrated by showing that it captures the hysteresis process. Conclusions An effective composition variable η was proposed to study the structure of spray flames in composition space in analogy with the classical theory for purely gaseous diffusion flames. Unlike previous attempts [START_REF] Olguin | Influence of evaporation on spray flamelet structures[END_REF][START_REF] Luo | New spray flamelet equations considering evaporation effects in the mixture fraction space[END_REF] that have been used to describe a mixture-fraction variable, the newly proposed effective composition variable is monotonic, thereby enabling the solution of spray flame in composition space. Furthermore, since this new definition is also based on the liquid-to-gas mass ratio, it can capture the evolution of the disperse phase even if no evaporation occurs, which is not the case for purely gaseous-based definitions. This new composition space was used to analyze counterflow spray flames that were simulated in physical space, showing its ability to represent the spray-flame structure. Subsequently, a flamelet formulation was derived and solved, showing the practical feasibility of directly evaluating the resulting spray flamelet equations in η-space. From these flamelet equations arises the necessity of closures for the scalar dissipation rate and the slip velocity. A simplified model was proposed and the potential of the closure for χ η was verified against solutions in physical space. The complete flamelet formulation is used to investigate effects of strain rate and droplet diameter on the flame behavior, reproducing the bifurcation and hysteresis of the flame structure. 
The proposed spray flamelet formulation represents a theoretical tool for the asymptotic analysis of spray flames [START_REF] Lerman | Spray diffusion flames -an asymptotic theory[END_REF] in composition space. Formulation in an Eulerian form can be extended to polydisperse flow fields, by using for instance a multifluid formulation [START_REF] Laurent | Multi-fluid modeling of laminar poly-dispersed spray flames: origin, assumptions and comparison of the sectional and sampling methods[END_REF] for the droplet phase. This enables the consideration of the liquid mixture fraction as the sum of all liquid size volume fractions, where the polydispersity only acts on the overall vaporization rate. Another interesting extension could be to take into account large Stokes-number effect such as droplet velocity reversal [START_REF] Sanchez | The role of separation of scales in the description of spray combustion[END_REF], which can be done by introducing additional droplet classes and adding each droplet class contribution to the mixture fraction definition. This work is also a first step towards the development of spray-flamelet based turbulent models, which will require development of subgrid scales model for the composition space variable η as well as evaporation source terms. The equations for the liquid-to-gas mass ratio and the gaseous mixture fraction, Eqs. (1d) and (5a), are then given in non-dimensional form as: ξ dZ l dξ = 1 α 2Z 0 l + Z l (1 + Z l ) , (B3a) d 2 Z g dξ 2 + 2ξ dZ g dξ = 2 α 2Z 0 l + Z l (Z g -1) , (B3b) where ξ = x/δ D , and δ D = 2D 0 /a is the diffusion layer thickness. Profiles of mixture-fraction distributions are schematically illustrated in Fig. B1. An analytic solution for Z l can be obtained by solving Eq. (B3a); however, we were not able to find a closed-form solution for Eq. ( B3b). An analytical solution can be obtained for the asymptotic limit in which effects of evaporation and species diffusion are spatially separated. For this case, the gaseous mixture fraction increases on the spray side until reaching its maximum; once the evaporation is completed (Z l = 0), the diffusion becomes relevant and the spatial evolution of Z g is described by the purely gaseous mixture-fraction equation (see Figs. B1(a) and B1(b)). It is important to recognize that this zonal separation is different from a pre-vaporized spray flame, in which liquid fuel is evaporated before diffusion and combustion occur, and combustion is confined to the gaseous side. The present formulation is not restricted to this special case and allows for the spatial superposition of evaporation and combustion. With this, the flame can be separated into two regions: (1) Spray side for ξ > ξ v : The liquid volume fraction starts evaporating close to the injection ξ = L/(2δ D ) and completely disappears at ξ = ξ v . The main contribution in this region is assumed to arise from the evaporation, so that contributions from diffusion in the Z g -equation can be neglected: ξ dZ l dξ = 1 α 2Z 0 l + Z l (1 + Z l ) , (B4a) ξ dZ g dξ = 1 α 2Z 0 l + Z l (Z g -1) . (B4b) The analytic solutions for Z l and Z g can be written as: Z l (ξ) = -2Z 0 l 1 -(ξ/ξ v ) β 2Z 0 l -(ξ/ξ v ) β , (B5a) Z g (ξ) = - 1 -2Z 0 l 1 + Z 0 l 1 1 -2Z 0 l (ξ/ξ v ) -β + 1 , (B5b) where β = (2Z 0 l -1)/α, and the value for ξ v is obtained by imposing the boundary condition Z l (L/(2δ D )) = Z 0 l in Eq. (B5a): ξ v = L 2δ D 3 2(1 + Z 0 l ) 1/β . 
( B6 ) The extension of this region depends on the evaporation time τ v through the parameter β: increasing the evaporation time leads to a broadening of the evaporation zone, and the limiting case of this model is represented in Figs. B1(b) and B1(c). The maximum value of the gaseous mixture fraction Z * g , found at ξ = ξ v , is calculated using Eqs. (B5) and (B6): Z * g = Z 0 l (1 + Z 0 l ) ≡ 1 K 1 . ( B7 ) From Eqs. (B5) a relation between Z g and Z l can be derived for the evaporation region: Z l = Z 0 l -(1 + Z 0 l )Z g , dZ l dξ = - K 1 (K 1 -1) dZ g dξ . ( B8 ) (2) Gas side for ξ ≤ ξ v : In this region, the liquid volume fraction is zero and the expression for Z g reduces to the classical equation for gaseous flames. Z l = 0 , (B9a) Z g = Z * g 2 (1 + erf(ξ)) , (B9b) where the last equation is obtained by setting the right-hand-side of Eq. (B3b) to zero and using the boundary conditions Z g (-∞) = 0 and Z g (ξ v ) = Z +∞ g . It is noted that Z g asymptotically reaches the value Z * g at ξ = ξ v , since our model implies ξ v ≥ 2δ D . From Eq. (B9b), it is found that the stagnation point required to evaluate the function sgn(x) = sgn(ξ) corresponds to Z g = Z * g /2. where K 3 = (2Z 0 l ) -2/β 8D 0 α 2 L 2 2(1+Z 0 l ) 3 2/β . To derive the gaseous scalar dissipation rate χ Zg , the diffusion and the evaporation contributions are considered separately: χ Zg = χ Z evap g + χ Z mix g . The scalar dissipation of the gaseous mixture fraction on the gas side in the absence of liquid volume fraction, is given in analogy with a purely gaseous flame (Eq. B9b): χ mix Zg = a(Z * g ) 2 π exp -2 erf -1 2Z g Z * g -1 2 . ( C3 ) The scalar dissipation of the gaseous mixture fraction on the spray side is obtained from Eq. (B5b): χ evap Zg = K 3 (2Z 0 l + Z l ) 2+2/β (1 + Z l ) -2/β (1 -Z g ) 2 . (C4) The individual contributions are combined to describe the dissipation rate of the effective composition variable in the gaseous and liquid regions of the flame: χ η = χ mix Zg if η ≤ Z * g , χ evap Zg + χ Zl if η > Z * g . (C5) The analytical closure is compared to the χ η -profile from the solution in physical space for the case with τ v = 0.005 s from Sec. 5.2. Results from this comparison are shown in Fig. C1. For this case, diffusion and evaporation occur in two distinct regions and the analytical closure model is able to reproduce χ η in both regions. Extending the closure model to more general cases for which evaporation and diffusion are not separated is feasible for instance by directly evaluating χ η from the simulation or by combining the contributions from χ mix Zg , χ evap Zg and χ Zl in the region where diffusion and evaporation occur simultaneously. This zone may be identified by evaluating the non-linear behavior of Z g in the η-space as discussed in Sec. 4. space. Nevertheless, extending this formulation to a non-unity Lewis-number systems is possible. For this, we recall the equation of the gaseous mixture fraction, but in the case of non-unity Lewis Number: ρu i ∂Z g ∂x i = ∂ ∂x i ρD ∂Z g ∂x i + (1 -Z g ) ṁ , + W F n C,F W C Ns k=1 n C,k W C W k ∂ ∂x i ρ(D k -D) ∂Y k ∂x i , ( E1 ) where D is a mean diffusion coefficient and D k is the diffusion coefficient of species k. As shown for instance in [START_REF] Vié | On the description of spray flame structure in the mixture fraction space[END_REF], such a definition of the mixture fraction is not monotonic even for gaseous flames, and thus can not be used as a proper composition space variable. 
However, if we look at the purely gaseous case, and use our composition space variable η: dη dt = sgn dη dt dZ g dt 2 , ( E2 ) any variation of Z g will lead to a monotonic variation of η on either fuel or oxidizer sides of the flow. Consequently, our η-space formulation can handle non-unity Lewis number assumption. Another possible solution is to use the strategy proposed by Pitsch and Peters [START_REF] Pitsch | A consistent flamelet formulation for non-premixed combustion considering differential diffusion effects[END_REF], who introduces a mixture fraction that is not linked to the species in the flow, and is by definition a passive scalar. This way, even if this formulation cannot be linked to physical quantities, it can be used as a composition space variable. Figure 1 . 1 Figure 1.: Schematic of the laminar counterflow spray flame. (a) Temperature and species mass fraction. (b) Gaseous, total and conserved mixture fractions. Figure 2 . 2 Figure 2.: Flame structure in physical space for d 0 = 40 µm and a = 100 s -1 : a) temperature and species mass fraction, and b) gaseous, total and conserved mixture fractions, liquid-to-gas mass ratio and effective composition variable. (a) Representation in Zg-space. (b) Representation in Zg-space separating the spray and gas zone. (c) Representation in Zg-space separating the solution at maximum value of Zg. (d) Representation in η-space. Figure 3 . 3 Figure 3.: Flame structure for d 0 = 40 µm and a = 100 s -1 . (a) Initial droplet diameter: d 0 = 20 µm. (b) Initial droplet diameter: d 0 = 40 µm. (c) Initial droplet diameter: d 0 = 80 µm. Figure 4 . 4 Figure 4.: Flame structure obtained from the solution in physical space for a = 100 s -1 as a function of different initial droplet diameters d 0 : solution in x-space (left), η-space (middle), and budget analysis (right) of Z g -conservation equation (Eq. (1d)); the gray area corresponds to the diffusion zone; the blue vertical line separates the spray side from the gas side. For comparison, asymptotic solutions for Z g and Z l are shown by symbols. (a) Strain rate: a = 200 s -1 . (b) Strain rate: a = 400 s -1 . (c) Strain rate: a = 600 s -1 . Figure 5 . 5 Figure 5.: Flame structure obtained from the solution in physical space for d 0 = 40 µm as a function of different strain rates: solution in x-space (left), η-space (middle), and budget analysis (right) of Z g -conservation equation (Eq. (1d)); the gray area corresponds to the diffusion zone; the blue vertical line separates the spray side from the gas side. For comparison, asymptotic solutions for Z g and Z l are shown by symbols. (a) Temperature, fuel and liquid-to-gas mass fractions. (b) Mass fractions of CO 2 and CO. Figure 6 . 6 Figure 6.: Comparison of spray-flame structure for d 0 = 40 µm and a = 100 s -1obtained from the solution in physical space (symbols) and in η-composition space (solid lines). To facilitate a direct comparison, χ η is extracted from the x-space solution. 005 s and τ v = 0.02 s. Comparing the flame structure with results obtained in physical space for the evaporation model of Sec. 4, these cases are representative for conditions of d 0 = 20 µm and d 0 = 60 µm, respectively. (a) Temperature, fuel and liquid-to-gas mass fractions. (b) Mass fractions of CO 2 and CO. Figure 7 . 7 Figure 7.: Comparison of spray-flame solution in η-space using the linearized evaporation model for τ v = 0.005 s. 
The solution obtained in physical space is projected onto the η-space (solid lines), spray-flamelet solution in η-space with χ η extracted from x-space solution (stars) and from analytic closure model (open circles). The strain rate is a = 100 s -1 . Figure 8 . 8 Figure 8.: Comparison of spray-flame solution in η-space using the linearized evaporation model for τ v = 0.02 s. The solution obtained in physical space is projected onto the η-space (solid lines), spray-flamelet solution in η-space with χ η extracted from x-space solution (stars) and from analytic closure model (open circles). The strain rate is a = 100 s -1 . (a) Increasing droplet diameter: d 0 = 20 µm. (b) Increasing droplet diameter: d 0 = 40 µm. (c) Increasing droplet diameter: d 0 = 80 µm. (d) Decreasing droplet diameter: d 0 = 40 µm. Figure 9 . 9 Figure 9.: Counterflow flame structure in η-space for a) d 0 = 20 µm, b) d 0 = 40 µm for increasing droplet diameter, and c) d 0 = 80 µm and d) d 0 = 40 µm for decreasing droplet diameter. Spray-flamelet equations are solved in η-space using the analytic closure for χ η developed in App. C (symbols). The gray area corresponds to the diffusion zone; the blue vertical line separates the spray side from the gas side. ( a ) a Variations in droplet diameter. (b) Variations in strain rate. Figure 10 . 10 Figure 10.: Counterflow solution in η-space for a) variations in droplet diameter at a fixed strain rate of a = 100 s -1 and b) variations in strain rate for a fixed droplet diameter of d 0 = 40 µm. Solution from η-space formulation is shown by open squares and corresponding reference solution in physical space is shown by blue closed circles. Arrows indicate the direction of the parametric variation.for d 0 = 40 µm and a = 100 s -1 at the lower branch in Fig.10(a). The strain rate is initially increased in increments of ∆a = 50 s -1 until a = 600 s -1 . Results for a = {200, 400, 600} s -1 are illustrated in Fig.11. These results reproduce the Figure 11 . 11 Figure 11.: Counterflow flame structure in η-space for d 0 = 40 µm and increasing strain rates of a) a = 200 s -1 , b) 400 s -1 and c) 600 s -1 . Solution obtained in η-space using the closure for χ η developed in App. C. The gray area corresponds to the diffusion zone; the blue vertical line separates the spray side from the gas side. (a) Physical space; small vaporization time τv. (b) Physical space; large vaporization time τv. (c) Effective composition space; large vaporization time τv. Figure Figure B1.: Schematic representation of gaseous mixture fraction and liquid-togas mass ratio profiles in physical space for (a) small vaporization times τ v and (b) large values of τ v (corresponding to the limit of the present model), and (c) representation in effective composition space. The gray zone identifies the diffusion layer. The blue vertical line separates the gas side from the spray side. Figure Figure C1.: Effective composition dissipation rate as a function of η. Result from the physical solution for τ v = 0.005 s in Sec. 5.2 (line) is compared to the analytical model (symbols). Since the definition of mixture fraction is reserved for a conserved quantity, Zg from Eq. (1d) does not strictly represent a mixture fraction. However, for consistency reasons with previous works, we follow this convention. Despite the fact that this assumption is not exact for variable-density flows, it reduces the computational complexity of the counterflow while retaining the main physics. 
This approximation is often used as a simplified model for two-phase flame analysis. For L → ∞, the pre-evaporated case is retrieved.[START_REF] Faeth | Evaporation and combustion of sprays[END_REF] The liquid phase does not have a diffusion term, and is therefore characterized by an infinite Lewis number. To take into account the variability of the evaporation time, the vaporization Stokes number is approximated by Stv = aτ v,ref (d/d ref ) 2 where τ v,ref = 0.04 s and d ref = 40 µm. It is noted that the flame transition from single-to double-reaction and vice versa is sensitive to the The assumption of constant liquid temperature is not valid for real applications[START_REF] Sirignano | Fluid dynamics and transport of droplets and sprays[END_REF], the transient heating time being of primary importance. However, since the main concern about the definition of a composition space is the effect of the vaporization rate, this assumption has no consequence on the suitability of our methodology when liquid temperature variations are taken into account. It is worth mentioning that this assumption could be relaxed to take into account density effects on the flow structure, by using the Howarth-Dorodnitzyn approximation under the classical boundary layer approximation[START_REF] Williams | Combustion Theory[END_REF]. Acknowledgments The authors gratefully acknowledge financial support through NASA with Award Nos. NNX14CM43P and NNM13AA11G and from SAFRAN. Helpful discussions with Prof. Sirignano on the spray-flamelet formulation are appreciated. numerical procedure that is used to vary the strain rate and droplet diameter. Disclosure statement Conflict of interest: The authors declare that they have no conflict of interests. Research involving Human Participants and/or Animals: Not applicable for this paper. Informed consent: All the authors approve this submission. Appendix A. One-dimensional counterflow spray flames equations A.1 Modeling approach The counterflow spray flame equations are solved on the axis of symmetry x 2 = 0, from the fuel to the oxidizer side. To focus on the coupling between mass transfer, mixing, and reaction, the following simplifying assumptions are invoked for the numerical solution of the governing equations: • A constant strain rate is assumed [START_REF] Dvorjetski | Steady-state and extinction analyses of counterflow spray diffusion flames with arbitrary finite evaporation rate[END_REF][START_REF] Dvorjetski | Analysis of steady state polydisperse counterflow spray diffusion flames in the large Stokes number limit[END_REF]: u 1 = -ax 1 and u 2 = ax 2 . • For evaporation, a simplified d 2 -model is considered by fixing the droplet temperature 1 T l = T b , where T b is the boiling temperature of the fuel species. Consequently, the evaporation model writes [START_REF] Sanchez | The role of separation of scales in the description of spray combustion[END_REF]: where H(•) is the Heaviside function. The liquid fuel properties for kerosene are T b = 478 K and L v = 289.9 kJ/kg. • The liquid velocity is assumed to be the same as that of the gas velocity. This assumption is valid for small Stokes-number droplets based on the gaseous flow strain rate St d = aτ d (where τ d = ρ l d 2 /(18µ) is the particle relaxation time [START_REF] Maxey | Equation of motion for a small rigid sphere in a non uniform flow[END_REF]). 
It has to be noted that such a system cannot capture droplets with a Stokes number greater than 1/4 that could potentially cross the stagnation and exhibit velocity reversal. Capturing such a behaviour should be handled by using more velocity moments [START_REF] Kah | Eulerian quadrature-based moment models for dilute polydisperse evaporating sprays[END_REF] or by introducing additional droplet classes [START_REF] Sanchez | The role of separation of scales in the description of spray combustion[END_REF]. • Constant thermo-diffusive properties 2 with ρD = 2 × 10 -5 kg/(m s) and c p = 1300 J/(kg K). With these assumptions, the system of equations that is solved in physical space takes the following form: -ax -ax -ax where the density is calculated from the species mass fractions, the temperature, and the constant thermodynamic pressure using the ideal gas law. In this configuration, the equation for η is: To construct a monotonic composition space, we thus impose: The corresponding spray-flamelet system in composition space reads as: with and 1 η is equal to zero on the gas side. For the limit of small Stokes numbers, all droplets evaporate before crossing the stagnation plane, which corresponds to the region of negative velocity. The assumption could be violated for larger droplets if their Stokes number St d = aτ p is higher than 1/4 [START_REF] Sanchez | The role of separation of scales in the description of spray combustion[END_REF], requiring a closure model that accounts for the slip velocity between the gas and liquid phase. However, it is also noted that even droplets with a high Stokes number could evaporate before reaching the stagnation plane. This is likely to occur for hydrocarbon fuels, for which the latent heat of vaporization is small compared to those fuels that are commonly used to study droplet crossings [START_REF] Sanchez | The role of separation of scales in the description of spray combustion[END_REF][START_REF] Olguin | Influence of evaporation on spray flamelet structures[END_REF]. Moreover, a closure model accounting for effects of the slip velocity on the flame structure is proposed in App. D, under the assumption of small St d . For high values of St d , the transport equation for the liquid velocity (Eq. (24f)) may also be added to the system. As result of the zero-slip velocity assumption, that is u i = u l,i , χ η is the only unclosed term in the spray-flamelet equations (A6). This term is directly evaluated from the x-space solution in Sec. 5.1. Subsequently, this approximation is relaxed in Secs. 5.2 and 5.3 and a model for the scalar dissipation rate is developed in App. C. A.2 Numerical method To solve Eqs. (A3) and (A6) in their respective physical and effective composition spaces, four numerical ingredients are used: • An adaptive mesh refinement method is used based on the gradients of η in physical space. • Diffusive operators, i.e. second order derivatives, are discretized using a central finite difference scheme. Considering a non-uniform mesh spacing of elements ∆x i , the second order derivative of a quantity Φ at the location i is: • Convective operators, i.e. first order derivatives, are discretized using an upwind finite difference scheme: • Steady-state is reached through a pseudo-time advancement with explicit Euler scheme. Considering τ as the increment of the pseudo-time variable and n as the time iteration: Appendix B. 
Analytical solution for Z g , Z l and η The analytical profiles for the gaseous and liquid-to-gas mass ratio in η-space are here derived for the 1D laminar counterflow flame, described in App. A (Eqs. (A3)). To obtain a closed-form solution, the following assumptions are introduced: • Consistent with the modeling of the scalar dissipation rate of gaseous flames [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF], a constant density ρ = ρ 0 is considered so that D = D 0 . • Starting from a d 2 -evaporation law, for which the evaporation rate is proportional to the droplet diameter (that is d ∝ Z a linearized evaporation model at Z 0 l is derived: where α = 3St v , St v = aτ v is the evaporation Stokes number, and τ v is the constant evaporation time. The analytic formulation for η is then obtained by combining Eqs. (B5) and (B9): where Through the spatial separation of the evaporation and diffusion regions of the flame, it can be seen from Eq. (B10) that η is only a function of Z 0 l and St v . The maximum value of the effective mixture fraction is evaluated as: which is only a function of the liquid mass fraction at injection. Invoking the linear dependence of liquid and gaseous mixture fractions on the effective composition variable (see Fig. B1(c)), Z g and Z l can be written as functions of η: and As discussed in Sec. 4.2, the validity of the analytical solution relies on the assumption that mixing and evaporation occur in two distinct regions. Appendix C. Closure model for the scalar dissipation A closure model for the scalar dissipation rate χ η can be derived using the analytic expressions for Z g and Z l that were derived in the previous section. For this, we decompose Eq. (25d) into liquid and gaseous contributions: and corresponding expressions directly follow from the definition of the effective composition variable. The scalar dissipation of the liquid-to-gas mass ratio is evaluated from the analytic solution of Z l (Eq. B5a): Appendix D. Analytic model for slip velocity The assumptions of App. A are here retained to derive a model for the slip velocity contribution Ψ d [φ] for a counterflow spray flame. For this, we follow the work of Ferry and Balachandar [START_REF] Ferry | A fast Eulerian method for disperse two-phase flow[END_REF] and evaluate the velocity of the liquid phase from the gaseous velocity: where D Dt is the material derivative of the gas phase. Under the assumption of constant strain rate and potential flow solution [START_REF] Lerman | Spray diffusion flames -an asymptotic theory[END_REF][START_REF] Dvorjetski | Steady-state and extinction analyses of counterflow spray diffusion flames with arbitrary finite evaporation rate[END_REF], the liquid velocity can be written as: By considering the limit of small Stokes number, higher-order terms are truncated, and the following expression for the slip-velocity contribution is obtained: This closure model can be used to take into account the effect of a slip velocity between liquid and gaseous phases on the flame structure. The flame structure defined by the 1D spray-flamelet formulation given in Eqs. [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF] depends on the liquid and gaseous velocities through the quantities Ξ η and Ψ d . However, when using Eq. 
(D2), the dependence on the velocity of both phases is eliminated and only the dependence on the droplet Stokes number is retained: In the limit of small Stokes number St d → 0, the liquid and gaseous velocities are identical, so that the spray-flamelet formulation simplifies to the system given by Eqs. (A6). Appendix E. Non-unity Lewis number flows In the present work, we invoked the unity-Lewis-number assumption, which is a classical assumption in the development of flamelet methods. However, it is well known that hydrocarbon liquid fuels, such as dodecane or kerosene, have a Lewis number above 2. Here, we kept the unity-Lewis-number assumption for the sake of simplicity and clarity, since the focus of the work is on the formulation of an effective composition
83,759
[ "7345", "4628" ]
[ "416", "254946", "254946", "501870" ]
01272955
en
[ "spi" ]
2024/03/04 23:41:48
2016
https://hal.science/hal-01272955/file/CF_Franzelli2015.pdf
Benedetta Franzelli email: [email protected] Aymeric Vié email: [email protected] Matthias Ihme email: [email protected] Characterizing spray flame-vortex interaction: a spray spectral diagram for extinction Keywords: Spray flame-vortex interaction, Spectral diagram, Extinction come Introduction The fundamental understanding of turbulence-flame interaction is of relevance for practical applications, since turbulence may drastically modify the combustion process by affecting the flame structure, thus possibly impacting pollutant emissions, thermo-acoustic instabilities, local quenching and reignition [START_REF] Renard | Dynamics of flame/vortex interactions[END_REF]. With regard to application to Large-Eddy Simulation (LES) and Reynolds-Average Navier-Stokes (RANS) modeling, it is therefore required to accurately model the interaction between the flow field and the flame on the computationally unresolved scales. In the context of gaseous flames, several studies have been performed to investigate the complex interaction of a vortex with premixed [START_REF] Williams | Combustion Theory[END_REF][START_REF] Borghi | Turbulent combustion modelling[END_REF][START_REF] Poinsot | Quenching processes and premixed turbulent combustion diagrams[END_REF] and non-premixed flames [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF][START_REF] Peters | Laminar flamelet concepts in turbulent combustion[END_REF][START_REF] Bilger | The structure of turbulent non-premixed flames[END_REF] to mimic the turbulence effect on combustion. In particular, a flamelet regime was identified, in which the turbulent flame front is seen as a collection of one-dimensional flames that are stretched and deformed by vortices [START_REF] Peters | Laminar diffusion flamelet models in non-premixed turbulent combustion[END_REF]. Under this assumption, the understanding of flame-vortex interaction is essential for numerous practical combustion applications [START_REF] Renard | Dynamics of flame/vortex interactions[END_REF]. The interaction of a pair of vortices with a laminar flame represents a canonical configuration for the theoretical understanding of combustion mechanisms in turbulent flows [START_REF] Poinsot | Theoretical and Numerical Combustion[END_REF] and the development and validation of turbulent combustion models [START_REF] Colin | A thickened flame model for large eddy simulations of turbulent premixed combustion[END_REF]. Indeed, the effect of a pair of vortices on a laminar flame can be studied to examine several combustion regimes that are representative for turbulent flows [START_REF] Poinsot | Theoretical and Numerical Combustion[END_REF]. For purely gaseous flames, such an idealized configuration has led to several studies, either in premixed and non-premixed regimes, see [START_REF] Renard | Dynamics of flame/vortex interactions[END_REF] for an exhaustive overview. In addition, findings from these studies have led to the construction of combustion spectral diagrams [START_REF] Poinsot | Quenching processes and premixed turbulent combustion diagrams[END_REF][START_REF] Cuenot | Effect of curvature and unsteadiness in diffusion flames. 
Implications for turbulent diffusion flames[END_REF][START_REF] Vera | A combustion diagram to characterize the regimes of interaction of non-premixed flames and strong vortices[END_REF][START_REF] Liñán | Ignition, liftoff, and extinction of gaseous diffusion flames[END_REF] that are of particular importance for the derivation of new combustion models for turbulent flow applications. In the context of spray flames, less efforts have been made towards the understanding of combustion regimes. In [START_REF] Luo | Direct numerical simulations and analysis of three-dimensional n-heptane spray flames in a model swirl combustor[END_REF], the investigation of a 3D swirled spray flame through Direct Numerical Simulation (DNS) has shown the complexity of spray flames, in which premixed, partially premixed and diffusion reaction zones may coexist. In [START_REF] Vie | Analysis of segregation and bifurcation in turbulent spray flames: A 3d counterflow configuration[END_REF], the authors studied the interaction of a counterflow spray flame with turbulence, confirming the existence of a flamelet regime for spray flames. As such, the study of a spray flame interacting with a pair of vortices may provide a fundamental understanding of the competition between evaporation, mixing and combustion for a range of practically relevant operating regimes. Although flamevortex interaction is recognized as a canonical configuration for examining the coupling between combustion and turbulence in gaseous configurations, the investigation of spray flames in these configurations has been limited to phenomenological observations [START_REF] Santoro | An experimental study of vortex-flame interaction in counterflow spray diffusion flames[END_REF][START_REF] Santoro | Extinction and reignition in counterflow spray diffusion flames interacting with laminar vortices[END_REF][START_REF] Lemaire | PIV/PLIF investigation of two-phase vortex-flame interactions[END_REF][START_REF] Lemaire | Unsteady effects on flame extinction limits during gaseous and two-phase flame/vortex interactions[END_REF] and asymptotic analysis [START_REF] Shiah | On the interaction of a dense spray diffusion flame and a potential vortex[END_REF]. The objective of this work is to extend the knowledge of spray flame-vortex interaction by combining theoretical and numerical analyse. In particular, the interaction of a pair of vortices with a spray flame in the limit of zero slip velocity is considered in order to identify the effect of evaporation on combustion regimes for turbulent spray flames. Particular attention is attributed to the investigation of local extinction. A new combustion diagram that is generalized to spray flames is here analytically derived by following the work of Vera et al. [START_REF] Vera | A combustion diagram to characterize the regimes of interaction of non-premixed flames and strong vortices[END_REF] for purely gaseous flames. This regime diagram is subsequently verified through detailed numerical simulations. The remainder of this paper is organized as follows. We first present in Section 2 the theoretical derivation of the new spectral diagram for spray flame-vortex interaction, following the rationale of [START_REF] Vera | A combustion diagram to characterize the regimes of interaction of non-premixed flames and strong vortices[END_REF]. The modeling approach that is used for the computational verification and the computational approach are presented in Section 3. 
Numerical results are presented in Section 4, first examining the steady state structure of the counterflow flame. Examples of possible scenarios of spray flame-vortex interaction are analyzed to highlight different responses of a spray flame to the vortex passage compared to the corresponding gaseous flame. To verify the theoretically developed spectral diagram, the role of the evaporation time is finally characterized. The paper finishes with conclusions. Spectral diagram for spray flame-vortex interaction Background: Gaseous flame-vortex interaction The flame-vortex interaction is a canonical configuration for examining basic phenomena that control the coupling between combustion and turbulence. By considering this configuration, Renard et al. [START_REF] Renard | Dynamics of flame/vortex interactions[END_REF] developed a fundamental understanding of different combustion modes that are summarized in a so-called "spectral diagram". With relevance to the present work, we briefly summarize the classical results for gaseous flames. The configuration consists of a strained non-premixed flame, in which a nitrogen-diluted fuel mixture is injected against an oxidizer stream (see Fig. 1(a)). The flame has a characteristic flame-front speed S L ∼ D th /δ L , where D th is the thermal diffusivity, δ L is flame-front thickness and the chemical time scale is τ c ∼ D th /S 2 L . Due to the unperturbed flow, the flame is subjected to a global strain rate A 0 . The steady gaseous flame is governed by the competition of mixing, advecting and chemical processes. Two non-dimensional numbers are then sufficient to characterize the flame: the Peclet number Pe = A 0 L 2 /D th (with L a characteristic length), describing the ratio between mixing and advection contribution, and the Damköhler number: Da = τ strain τ c = 1 τ c A 0 (1) accounting for the competition between characteristic advection and chemical time scales. In the flame-vortex configuration, a vortex ring of radius r 0 , strength Γ and characteristic speed u T ∼ Γ/r 0 is injected at the oxidizer side to interact with the flame front. As a result of the vortex interaction, the flame will experience a strain A Γ = Γ/(2r 2 0 ), and the flame can locally extinguish when A Γ exceeds the critical extinction strain rate A e . In the following, the non dimension vortex strenght Γ = A Γ /A 0 is introduced to describe the flame-vortex interaction. The robustness of the flame R = A e /A 0 ∼ 1/(A 0 τ c ) ∼ Da -1 is also introduced. The Peclet number is given by Pe = Pe 0 = A 0 r 2 0 /D th . With this, different regimes can be identified by considering these three non-dimensional parameters (R, Pe 0 and Γ) [START_REF] Vera | A combustion diagram to characterize the regimes of interaction of non-premixed flames and strong vortices[END_REF][START_REF] Liñán | Ignition, liftoff, and extinction of gaseous diffusion flames[END_REF]: • Vortex dissipation: for small vortex strength, the vortex dissipates before reaching the flame front without affecting the flame. Consequently, no flame-vortex interaction occurs for: Γ ν ≤ 1 ⇒ Re Γ ∼ ΓPe 0 ≤ 1, (2) where Re Γ = Γ/ν is the Reynolds number based on the vortex strength. • Thickened reaction zone: under the condition that the vortex is small compared to the flame thickness, the vortex penetrates the preheat flame region and enhances the mixing of the reactants. This results in a thickening of the preheat zone, but the inner flame structure is not affected by the vortex for: r 0 δ L ≤ 1 ⇒ r 0 S L D th ∼ (Pe 0 R) 1/2 ≤ 1. 
(3) • Local flame quenching: under the condition that the flame strength R is smaller than the nondimensionalized vortex strength Γ, the flame is locally extinguished by the vortex pair: A e A Γ ≤ 1 ⇒ Da e Γ = R Γ ≤ 1, (4) where Da e Γ = A e /A Γ is the Damköhler number at extinction. • Flame re-ignition via edge flame: for the case of local extinction, the flame may re-ignite if the front propagation velocity 1 U F is of the same order as the flow velocity A 0 r 0 [START_REF] Vera | A combustion diagram to characterize the regimes of interaction of non-premixed flames and strong vortices[END_REF]: U F ≥ A 0 r 0 ⇒ (Pe 0 R -1 ) 1/2 < f ∞ (5) with f ∞ ≈ 3 [START_REF] Fernández-Tarrazo | Liftoff and blowoff of a diffusion flame between parallel streams of fuel and air[END_REF][START_REF] Michaelis | Fem-simulation of laminar flame propagation II: twin and triple flames in counterflow[END_REF]. Turbulent vortices and unsteady chemistry effects could also be taken into account [START_REF] Vera | A combustion diagram to characterize the regimes of interaction of non-premixed flames and strong vortices[END_REF], introducing additional regions in the spectral diagram. Spray flame characterization and assumptions Compared to a purely gaseous flame, the spray has a compounding effect on the flame due to the introduction of additional characteristic time scales, namely the evaporation time τ v and the droplet drag time τ p . In this asymptotic analysis, the following assumptions are considered: • Both τ v and τ p are assumed to be constant. A relation for this time scale ratio can be written as [START_REF] Réveillon | Analysis of weakly turbulent diluted-spray flames and spray combustion regimes[END_REF]: τ p τ v = 4 9 ln(1 + B M ) Sc = St p St v , (6) where B M is the Spalding number, Sc is the Schmidt number of the gas phase, St p = τ p A 0 is the drag Stokes number, and St v = τ v A 0 is the evaporation Stokes number. These two additional characteristic time scales affect the flow-field quantities and the flame characteristics, compared to the corresponding gaseous flame. • One-way coupling is considered to examine the effect of the evaporation time. This assumption is reasonable when considering that the droplets follow the mean flow: St p = τ p A 0 1 ⇒ St p /St v (A 0 τ v ) -1 . • Gas and spray phases are assumed at momentum equilibrium, i.e there is no slip velocity between both phases [START_REF] Marble | Dynamics of dusty gases[END_REF]. This assumption allows to isolate the vaporization part of the spray physics, since the contribution of the drag force is zero. This is a reasonable assumption in the limit St p → 0. • The main contribution of the evaporation time is the change of the characteristic quenching time τ q of the flame, as suggested by Ballal and Lefebvre [START_REF] Ballal | Flame propagation in heterogeneous mixtures of fuel droplets, fuel vapor and air[END_REF]: τ q = τ v + τ c , (7) This assumption implicitly assumes that the two phenomena occur sequentially and do not spatially overlap. By introducing these assumptions, the problem is simplified to isolate the role of the evaporation time on the spray flame-vortex interaction. As such, the asymptotic analysis proposed in the following is strictly valid in the limit of zero slip velocity, but it is expected to provide a reasonable estimation of flame-vortex interaction for St p < 0.25, when no droplet stagnation plane crossing occurs. 
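As a concrete illustration of these time scales, the short Python sketch below evaluates τ_p and τ_v, the corresponding Stokes numbers, and the ratio of Eq. (6), and flags whether the St_p < 0.25 range quoted above is met. All input values are nominal and are not taken from the paper.

import numpy as np

def droplet_time_scales(d, rho_l, mu_g, B_M, Sc, A0):
    """Drag and evaporation time scales and the associated Stokes numbers.
    Inputs: droplet diameter d [m], liquid density rho_l [kg/m3], gas
    viscosity mu_g [Pa s], Spalding number B_M, gas Schmidt number Sc and
    global strain rate A0 [1/s]."""
    tau_p = rho_l * d**2 / (18.0 * mu_g)                    # particle relaxation time
    tau_v = tau_p * 9.0 * Sc / (4.0 * np.log(1.0 + B_M))    # from Eq. (6)
    St_p, St_v = A0 * tau_p, A0 * tau_v
    return {"tau_p": tau_p, "tau_v": tau_v, "St_p": St_p, "St_v": St_v,
            "one_way_ok": St_p < 0.25}

# Nominal kerosene-like droplet (illustrative values only):
props = droplet_time_scales(d=20.0e-6, rho_l=800.0, mu_g=1.8e-5,
                            B_M=0.2, Sc=0.7, A0=100.0)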
Under such assumptions, the relation between the flame speed and the flame time is extended to spray flames such as: S L ≈ D th /τ q , and other flame properties can be derived from the gaseous flame values: S L S L ∼ τ q τ c = 1 + τ v τ c , (8a) Da e Γ Da e, Γ ∼ A e A e ∼ τ q τ c , (8b) where the superscript denotes the spray flame quantities. The characterization of a steady non-premixed spray flame requires the introduction of an additional non-dimensional number accounting for the presence of the evaporation process, competing with the other phenomena. The evaporation Stokes number St v can be considered. Alternatively, a more appropriate non-dimensional number, namely the Lefebvre number, is introduced here to account for the competition between evaporation and chemical times: Lf = τ v τ c = St v Da. ( 9 ) For small values of Lf, the evaporation time is small compared to the chemical time so that quenching is mainly governed by chemistry and the flame properties are only slightly modified by evaporation. Indeed, for Lf 1, a pre-evaporating flame is retrieved, whereas a purely gaseous flame is characterized by Lf = 0. In contrast, for larger values of Lf the evaporation process is expected to largely modify the flame properties and, consequently, the flame-vortex interaction. For Lf 1, the vaporization process is too long compared to τ c , so that the gaseous fuel provided by evaporation may not be sufficient to sustain combustion. From Eq. 8, the Lefebvre number allows to describe the effect of the evaporation time on the flame characteristics, compared to the corresponding purely gaseous flame: S L ∼ (1 + Lf) -1/2 S L , (10a) Da e, Γ ∼ (1 + Lf) -1 Da e Γ . (10b) The flame-front thickness, which depends only on the chemical time τ c , is unchanged for spray flames and equal to δ L . These relations are supported by experimental and numerical data for stoichiometric premixed spray flames [START_REF] Ballal | Flame propagation in heterogeneous mixtures of fuel droplets, fuel vapor and air[END_REF][START_REF] Senoner | Large Eddy Simulation of the Two Phase Flow in an Aeronautical Burner using the Euler-Lagrange Approach[END_REF] and the simulation presented in Section 4. Equation (8b) shows that the modified Damköhler number at the extinction strain rate reduces for large values of the Lf number, implying that a spray flame extinguishes for smaller strain rates compared to a gaseous flame as found in [START_REF] Dvorjetski | Steady-state and extinction analyses of counterflow spray diffusion flames with arbitrary finite evaporation rate[END_REF]. In [START_REF] Hermanns | On the dynamics of flame edges in diffusion-flame/vortex interactions[END_REF], the front propagation velocity U F for purely gaseous flames is observed to be a function of the Damköhler number, which presents an asymptotic value. This behavior is here also assumed for spray flames: U F S L 1 f ∞ = f (Da/Da e ) ∼ 1 ⇒ U F S L 1 f ∞ = f (Da /Da e, ) ∼ 1. ( 11 ) where Da e is the local Damköhler number at extinction as defined in [START_REF] Hermanns | On the dynamics of flame edges in diffusion-flame/vortex interactions[END_REF]. Spray spectral diagram The spectral diagram is here extended to consider the interaction of a pair of vortices with a spray flame. In the spray flame-vortex configuration, the fuel spray is injected together with gaseous nitrogen against a stream of oxidizer. 
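For reference, the order-of-magnitude scalings of Eqs. (8)-(10) can be collected in a few lines of Python; prefactors of order unity are omitted, as in the text, so the numbers returned are estimates rather than predictions.

def spray_modified_properties(S_L_gas, A_e_gas, tau_v, tau_c):
    """Lefebvre number and the spray-modified flame speed and extinction
    strain rate relative to a gaseous flame with the same chemical time
    (order-of-magnitude scalings of Eqs. (9)-(10))."""
    Lf = tau_v / tau_c                            # Eq. (9)
    S_L_spray = S_L_gas * (1.0 + Lf)**-0.5        # Eq. (10a)
    A_e_spray = A_e_gas / (1.0 + Lf)              # Eqs. (8b) and (10b)
    return Lf, S_L_spray, A_e_spray

# Example: tau_v = 4 ms and tau_c = 1 ms give Lf = 4, S_L' ~ 0.45 S_L, A_e' = 0.2 A_e.
print(spray_modified_properties(S_L_gas=0.4, A_e_gas=2000.0, tau_v=4.0e-3, tau_c=1.0e-3))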
In analogy with the classical analysis for gaseous flames the following assumptions are invoked: • the vortex is injected only from the oxidizer side. This choice has been considered for most studies on flame-vortex interactions [START_REF] Cuenot | Effect of curvature and unsteadiness in diffusion flames. Implications for turbulent diffusion flames[END_REF][START_REF] Mantel | Fundamental mechanisms in premixed turbulent flame propagation via vortex-flame interactions -part II: numerical simulation[END_REF]. The effect of the vortex injection at the fuel side has been experimentally observed by Santoro et al. [START_REF] Santoro | An experimental study of vortex-flame interaction in counterflow spray diffusion flames[END_REF] and is here numerically examined in Appendix A. • Equal diffusivities for all species is considered to avoid differential diffusion effects on the flame-vortex interaction [START_REF] Katta | Interaction of a vortex with a flat flame formed between opposing jets of hydrogen and air[END_REF]. This is a broadly-used assumption and its impact has been discussed in [START_REF] Katta | Interaction of a vortex with a flat flame formed between opposing jets of hydrogen and air[END_REF]. The configuration under investigation is schematically presented in Fig. 1(b), and essential features that characterize flame/vortex interactions are shown in Fig. 2 [START_REF] Renard | Investigations of heat release, extinction and time evolution of the flame surface, for a nonpremixed flame interacting with a vortex[END_REF]. Once the vortex pair reaches the flame front (colored in red in Fig. 2(a), it interacts with the flame structure, which may be locally modified by the induced stretch. For sufficiently high vortex strength, the flame is engulfed by vortices, creating a dome in the flame. In this case, the maximum induced strain rate is located at the top hat region, whereas curvature effects dominate in the hat brim (see Fig. 2(a)). In the case of a spray flame, the droplets are subjected to centrifugal forces of the vortex. The modification of the flame reaction zone as well as the preferential droplet concentration due to the vortex passage affect the evaporation process and flame position (shown in blue in Fig. 2(b)). With this, the classical asymptotic analysis, discussed in [START_REF] Thévenin | Extinction processes during a non-premixed flame vortex interaction[END_REF] for gaseous flame/vortex interaction, is extended to spray flames using expressions [START_REF] Cuenot | Effect of curvature and unsteadiness in diffusion flames. Implications for turbulent diffusion flames[END_REF] and [START_REF] Vera | A combustion diagram to characterize the regimes of interaction of non-premixed flames and strong vortices[END_REF] to relate gaseous and spray flame properties. Compared to the regime diagram for gaseous flames, an additional third dimension, accounting for the ratio between evaporation and chemical time scales expressed by the Lf number, has to be considered. By considering only the effect of the evaporation time on the criteria given in Section 2.1, it is possible to define the limits of the spectral diagram for spray flame-vortex interactions, presented in Fig. 3. The flame-vortex interaction is here represented as a function of the characteristic speed u T ∼ Γ/r 0 and size l T ∼ r 0 of the vortex, which are normalized by the characteristic flame-front velocity S L and thickness δ L , respectively. 
By considering the four non-dimensional parameters (R, Pe 0 , Γ, Lf), the following regimes can be distinguished: e Figure 3: Spectral diagram for spray flame-vortex interaction. Black lines limit the different combustion regimes that were found analytically for gaseous flames [START_REF] Vera | A combustion diagram to characterize the regimes of interaction of non-premixed flames and strong vortices[END_REF][START_REF] Liñán | Ignition, liftoff, and extinction of gaseous diffusion flames[END_REF]. Red lines correspond to spray flames with increasingly higher evaporation time. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article.) • The vortex dissipation region (Re Γ ≤ 1) is constrained by a diagonal line with a logarithmic slope of n = -1 in the spectral diagram: u T S L l T δ L ∼ ΓPe 0 ≤ 1. ( 12 ) This area is identical with that for purely gaseous combustion since the spray is assumed to not affect the flow-field and vortex properties. • The thickened reaction zone (l T /δ L ∼ 1) is identified by a vertical line in the spectral diagram: l T δ L ∼ (RPe 0 ) 1/2 ≤ 1. ( 13 ) Since the size of the inner reaction zone is not affected by the evaporation time, this region is identical to that of gaseous flames. • The local flame quenching region (Da e, Γ ≤ 1) is represented by a diagonal in the spectral diagram: u T S L δ L l T ∼ 1 Da e Γ ∼ ΓR -1 ≤ (1 + Lf) -1 . ( 14 ) This relation shows that the extinction area increases for larger values of Lf implying that for spray flames local extinction occurs for smaller values of the strain rate induced by the vortex compared to the corresponding gaseous flame. • The reignition condition (U F /S L ≤ f ∞ ) is valid for the region identified by a horizontal line in the u T /S L -l T /δ L diagram: u T S L ∼ Γ(Pe 0 R -1 ) 1/2 ≤ Γf ∞ Da e, Γ Da e Γ 1/2 ≤ Γf ∞ (1 + Lf) -1/2 . ( 15 ) The reignition area decreases with increasing Lf-number, showing that for large droplet diameters, i.e. large τ v values, the flame is more likely to globally extinguish than for the corresponding purely gaseous flame, characterized by the same τ c . The value for f ∞ may be assumed to be of order of unity, in analogy to gaseous flames. • The extinction of spray flames may also occur due to fuel depletion. This represents a new extinction mode that is not present in gaseous flames. At this condition, the competition between evaporation, mixing and advection is changed as vortices reach the flame. Under the assumption of negligible particle drag, i.e. St p 0.25, the droplets reach the stagnation point but they cannot pass through it [START_REF] Sanchez | The role of separation of scales in the description of spray combustion[END_REF]. As schematically presented in Fig. 2(b), close to the hat top the presence of the vortices inverses the velocity of the droplets which are pushed away from the flame front. As a consequence, fewer droplets are available for evaporation, resulting in a local fuel depletion. This in turn weakens the flame strength and enhances the extinction propensity. Flame extinction due to fuel depletion is expected to occur when the evaporating droplets, located next to the flame front, do not evaporate completely before exiting the flame reaction zone. This situation occurs when the thickness of the evaporation layer δ v ∼ τ v u T is smaller than the reaction layer: δ v < δ L . 
Under this condition, extinction by fuel depletion occurs: δ v < δ L ⇒ u T S L ∼ Γ(Pe 0 R -1 ) 1/2 ≤ Lf -1 , (16) Criterion ( 16) compares the characteristic time of evaporation and mixing and is identified by a horizontal line in the spectral diagram, whose ordinate is given by the inverse of Lf. Therefore, this extinction region increases for flames characterized by the same chemical time τ c with increasing droplet diameter. This criterion is consistent with purely gaseous flames, for which Lf -1 → ∞. The effect of the competition between evaporation and combustion, represented by the Lefebvre number, on the spray-flame-vortex interaction is identified in Fig. 3. Here, the black lines limit the different interaction modes that were analytically found for gaseous flames whereas the red lines correspond to spray flames with increasingly higher Lf, which changes the way the vortex interacts with the flame by changing the characteristic flame properties. In the following, this spectral diagram is verified through numerical simulations. It is noted that a complete characterization of the reignition phenomenon will require an extensive study on edge spray flames, in analogy with the work of Hermanns et al. [START_REF] Hermanns | On the dynamics of flame edges in diffusion-flame/vortex interactions[END_REF] for gaseous flames. This, however, is beyond the scope of this study. Therefore, we will focus on the extinction phenomena without investigating the reignition stage through our simulations. Detailed simulations of spray-flame-vortex interaction In this section, detailed simulations are performed to computationally confirm the spectral diagram that was developed in the previous section. Gas-phase and dispersed-phase equations The gas phase is described by the conservation equations for mass, momentum, species, and energy: ∂ρ ∂t + ∂ρu j ∂x j = Ṡm , (17a) ∂ρu i ∂t + ∂ρu i u j ∂x j = - ∂p ∂x i + ∂σ ij ∂x j + Ṡui , (17b) ∂ρY k ∂t + ∂ρY k u j ∂x j = ∂ ∂x j ρD k W k W ∂X k ∂x j + ωk + Ṡm δ kF , (17c) ∂ρT ∂t + ∂ρT u j ∂x j = ∂ ∂x j ρD th ∂T ∂x j + ρD th c 2 p ∂T ∂x j ∂c p ∂x j + ωT + ṠT , ( 17d ) where ρ is the gas density, u i is the i th component of the gas velocity vector, Ṡm , Ṡui , and ṠT are the source terms due to droplet evaporation, drag force, and heat transfer, respectively. The mass fraction of species k is denoted by Y k . The pressure is denoted by p, and σ ij = µ ∂u j ∂x i + ∂u i ∂x j - 2 3 ∂u k ∂x k δ ij is the viscous stress tensor. The molecular weight of species k is denoted by W k and W is the mixture-averaged molecular weight. The diffusivity of species k is denoted by D k , ωk is the net production rate of species k, and δ kF is the Kronecker function that is unity for fuel (denoted by the index F ) and zero for all other species. The temperature is denoted by T , c p is the heat capacity, and h k is the total sensible and chemical enthalpy of species k. The heat release ωT is given by ωT = -c -1 p Ns k=1 h k ωk , where N s is the number of species considered. For the dispersed phase, a Lagrangian point-particle approach is used [START_REF] Miller | Direct numerical simulation of a confined three-dimensional gas mixing layer with one evaporating hydrocarbon-droplet-laden stream[END_REF]. 
The equations describing each droplet are written as: dx d,i dt = u d,i , (18a) du d,i dt = f 1 τ p [u i (x d ) -u d,i ] , (18b) dT d dt = Nu 3Pr c p c l f 2 τ p [T (x d ) -T d ] + ṁd l v m d c l , (18c) dm d dt = - Sh 3Sc m d τ p ln(1 + B M ) , (18d) where x d is the position of the droplet, u d its velocity, T d its temperature, and m d its mass. The Nusselt number is described by Nu, Pr is the Prandtl number and Sh is the Sherwood number. The relaxation time of the droplet is τ p = ρ l d 2 /18µ, ρ l is its density, d is its diameter, c l is its heat capacity and l v is the latent heat of vaporization. The drag coefficient is f 1 , accounting for high Reynolds number effects, and f 2 is a correction factor to consider effects of heat exchange on the evaporation [START_REF] Miller | Direct numerical simulation of a confined three-dimensional gas mixing layer with one evaporating hydrocarbon-droplet-laden stream[END_REF]. The coupling terms to the gas phase are obtained by integrating the contributions from all droplets contained in the control volume ∆V [START_REF] Crowe | The particle-source-in cell (PSI-CELL) model for gas-droplet flows[END_REF]: Ṡm = - dm d dt , ( 19 ) Ṡui = - dm d u d,i dt , ( 20 ) ṠT = - 1 c p c l m d dT d dt + (c p T d + l v ) dm d dt , (21) where {•} ≡ 1 ∆V d∈∆V •. To be consistent with the assumptions used to develop the analytical spectral diagram, gas and spray phases are assumed at momentum equilibrium, i.e there is no slip velocity between both phases, so that u d,i = u i [START_REF] Marble | Dynamics of dusty gases[END_REF]. The heat transfer from the liquid to the gas is also assumed to be equal to zero, i.e. ṠT = 0, which is a reasonable assumption when the ratio α l ρ l c l ρcp is small, where α l is the liquid volume fraction. Reaction chemistry In the present study, a 24-species mechanism for n-dodecane is used [START_REF] Vie | Analysis of segregation and bifurcation in turbulent spray flames: A 3d counterflow configuration[END_REF], which is based on the JetSurF 1.0 mechanism [START_REF] Sirjean | Simplified chemical kinetic models for high-temperature oxidation of C1 to C12 n-alkanes[END_REF], originally consisting of 123 species and 977 reactions. This reduced mechanism has been validated in auto-ignition and perfectly stirred reactors [START_REF] Vie | Analysis of segregation and bifurcation in turbulent spray flames: A 3d counterflow configuration[END_REF] and guarantees a correct description of the flame structure and its response to strain rate variations. Detailed thermodynamic and transport properties are considered. The species diffusivities are calculated assuming unity Lewis number. Numerics The governing equations are solved in the low-Mach number limit using the structured 3DA code [START_REF] Desjardins | High order conservative finite difference scheme for variable density low Mach number turbulent flows[END_REF][START_REF] Shashank | High Fidelity Simulation of Reactive Liquid Fuel Jets[END_REF]. The scalar advection operators are discretized using a QUICK scheme, and a second-order central differencing scheme is used for the momentum and pressure equations. The discrete Poisson system is solved using the HYPRE library [START_REF] Baker | Scaling hypre's Multigrid Solvers to 100,000 Cores[END_REF]. A staggered representation is used: the velocity is defined at the cell face, while the scalars and density are located at the cell center. A second-order Crank-Nicholson scheme is used for time integration. 
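The droplet system (18a)-(18d) can be written compactly as an update rule. The sketch below uses a plain explicit-Euler step with f1 = f2 = 1 and rough constant liquid/gas properties; it is only meant to make the source terms concrete and is not the time integration or property evaluation used in the simulations above.

```python
import numpy as np

def advance_droplet(xd, ud, Td, md, gas, dt, rho_l=750.0, c_l=2.2e3, l_v=3.0e5,
                    mu=4.0e-5, c_p=1.2e3, Pr=0.7, Sc=0.7, Nu=2.0, Sh=2.0):
    """One explicit-Euler step of the droplet equations (18a)-(18d).

    gas : dict with gas velocity 'u', temperature 'T' and Spalding number 'B_M'
          interpolated at the droplet position (assumed to be provided).
    All property values are illustrative placeholders, not the paper's.
    """
    d = (6.0 * md / (np.pi * rho_l)) ** (1.0 / 3.0)    # droplet diameter
    tau_p = rho_l * d ** 2 / (18.0 * mu)               # relaxation time
    f1 = f2 = 1.0                                      # low-Re corrections neglected

    dot_m = -Sh / (3.0 * Sc) * md / tau_p * np.log1p(gas["B_M"])          # (18d)
    dot_u = f1 / tau_p * (gas["u"] - ud)                                  # (18b)
    dot_T = (Nu / (3.0 * Pr) * (c_p / c_l) * f2 / tau_p * (gas["T"] - Td)
             + dot_m * l_v / (md * c_l))                                  # (18c)
    xd_new = xd + ud * dt                                                 # (18a)
    return xd_new, ud + dot_u * dt, Td + dot_T * dt, max(md + dot_m * dt, 0.0)
```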
The chemical source terms are evaluated using the DVODE library [START_REF] Brown | A Variable-Coefficient ODE Solver[END_REF], based on adaptive time stepping to advance the system of ODEs.

Configuration

We consider a two-dimensional counterflow configuration², consisting of two opposed slots. The direction x_1 = x is the injection direction and x_2 = y is the outflow direction. The separation distance between the two injectors is L_x = 0.1 m, and L_y = 0.075 m is the vertical domain length. The mesh consists of 1000×740 cells, resulting in approximately 30 grid points across the reaction zone of the diffusion flame (whose thickness is δ_L ≈ 3 mm). At the fuel side, gaseous nitrogen is injected together with a fuel spray of n-dodecane at ambient conditions (T_d^F = T^F = 300 K). Here and in the following, the superscripts "F" and "O" refer to the fuel side and the oxidizer side, respectively. The injection velocity of the liquid phase is identical to that of the gas phase, u^F = u^O = u_d^F = 2.5 m/s, corresponding to a theoretical global flame strain rate of A_0 = 50 s⁻¹. The liquid mass flow rate is 9 g/s, corresponding to a purely gaseous composition of Y_N2^F = 0.68 and Y_C12H26^F = 0.32. The use of nitrogen at the fuel injection guarantees a diffusion-like combustion mode. The initial droplet distribution at injection is randomly drawn over the entire slot, resulting in a statistically homogeneous distribution. A parcel method is used so that each numerical droplet statistically represents N_p physical droplets [START_REF] Crowe | The particle-source-in cell (PSI-CELL) model for gas-droplet flows[END_REF]. On the oxidizer side, pure air is injected at T^O = 800 K. These operating conditions ensure a robust flame (due to the relatively high temperature of the oxidizer mixture), with the liquid phase mainly evaporating in the preheat zone of the flame (due to the low temperature at the fuel side), preventing pre-vaporization. Three different injection droplet diameters (d_0 = 25, 50, 75 µm) are considered, corresponding to three different evaporation times τ_v ∼ K d_0² (τ_v ≈ 0.75, 3.0, 6.75 ms, respectively), where K is the regression rate.³ Both the R and Pe_0 numbers are then constant in the steady calculations, so that the effect of the Lf number on the flame characteristics can be examined (Lf = 0.075, 0.3, 0.675, respectively, where τ_c ≈ 1.0 ms from a laminar premixed stoichiometric calculation [START_REF] Vera | A combustion diagram to characterize the regimes of interaction of non-premixed flames and strong vortices[END_REF]). To keep the number of injected numerical droplets comparable for all configurations, we use N_p = 40, 10, 5 for d_0 = 25, 50, 75 µm, respectively. From Eq. (6), the drag Stokes numbers are evaluated as St_p ≈ 0.0375, 0.15, 0.34, respectively, so that the assumption of zero drag Stokes number is reasonable for all calculations. In particular, a simple calculation of the droplet trajectory in a flow with constant strain rate leads to maximum velocity differences of 3, 10, 21%, respectively, with negligible velocity reversal for the largest Stokes number.

Results and discussion

Structure of steady gaseous and spray flames

A direct comparison of the flame structure of the steady purely gaseous flame (Lf = 0) and the spray flame for Lf = 0.3 is presented in Fig. 4, showing results along the centerline for y = 0 mm. Gaseous fuel, whose mass fraction is Y_F, is injected on the right side of the configuration.
It decreases due to diffusion and starts burning close to the stagnation plane in the high temperature region. The flame structure is presented in terms of the gaseous mixture fraction Z g , which is here defined with respect to the carbon mass fraction in the gaseous mixture: Z g = W F n C,F W C Ns k=1 n C,k Y k W C W k , (22) where n C,k is the number of carbon atoms in species k. Since for the gaseous flame gaseous fuel is injected with nitrogen at the right side, Z g increases monotonically from zero to Z F g = 0.32 (Fig. 4(a)). The high temperature region is located around the stagnation plane and the inner reaction zone is identified by the OH mass fraction on the oxidizer side of the configuration, where a stoichiometric mixture is found (Z st g = 0.063) and the heat is mainly released (not shown). The structure of the spray flame, presented in Fig. 4(b), is more complex. Due to the selected operating conditions, the droplets that are injected on the right side start evaporating only after they have reached the high temperature region and droplets cannot pass through the stagnation plane since no slip velocity between phases is allowed. The evaporation zone, identified by the mass evaporation source term Ṡm in Fig. 4(b), is then confined to a small region that overlaps with the mixing layer at the right side of the stagnation plane. The evaporated fuel mass fraction Y F then passes through the stagnation plane due to mixing and the reaction zone is identified at the oxidizer side, similarly to the pure gaseous case. The maximum value of the fuel mass fraction is smaller than the maximum value of Z g , indicating that Y F starts reacting before the end of the evaporation zone. However, the reaction zone where the heat is mainly released is located on the oxidizer side, so that the evaporation layer and the reaction zone do not overlap spatially, confirming Eq. [START_REF] Bilger | The structure of turbulent non-premixed flames[END_REF]. Indeed, the scaling relations [START_REF] Cuenot | Effect of curvature and unsteadiness in diffusion flames. Implications for turbulent diffusion flames[END_REF] are confirmed here: the inner reaction zone (identified by the presence of OH) does not differ between gas and spray flames. The peak of the OH mass fraction is smaller for the spray flame as well as the integral of the fuel consumption rate (not shown), suggesting that a smaller flame speed characterizes the spray flame. The flame structures for the three values of Lf considered in this work are compared in the Z g -space to the gaseous flame structure in Fig. 5. As seen in Fig. 4(b), for a spray flame the profile of Z g is not monotonic in physical space, as a result of the competition between evaporation and mixing. Alternatively, the effective composition space variable proposed in [START_REF] Franzelli | Generalized composition space formulation for spray flames: monotonic mixing describing variable and associated flamelet model[END_REF] could be used to represent the flame structure along a monotonic direction. Since mixing and evaporation processes overlap spatially, the maximum value of Z g is smaller for the spray flames than for the gaseous flame, even if the same amount of fuel has been injected. Its value decreases when Lf increases, i.e., when the evaporation time increases. 
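Equation (22) is straightforward to evaluate from a set of gaseous mass fractions. The sketch below assumes a reduced, hypothetical species list (only the carbon counts and molar masses matter); it is not the 24-species mechanism used here.

```python
# Carbon-based gaseous mixture fraction, Eq. (22):
#   Z_g = W_F / (n_C,F * W_C) * sum_k n_C,k * (W_C / W_k) * Y_k
W_C = 12.011
species = {            # name: (number of carbon atoms, molar mass [g/mol])
    "NC12H26": (12, 170.33),
    "CO2":     (1,  44.01),
    "CO":      (1,  28.01),
    "CH4":     (1,  16.04),
    "N2":      (0,  28.01),
    "O2":      (0,  32.00),
    "H2O":     (0,  18.02),
}

def gaseous_mixture_fraction(Y, fuel="NC12H26"):
    """Y: dict of gaseous species mass fractions at one point."""
    n_C_F, W_F = species[fuel]
    carbon_sum = sum(n_C * W_C / W_k * Y.get(name, 0.0)
                     for name, (n_C, W_k) in species.items())
    return W_F / (n_C_F * W_C) * carbon_sum

# A pure fuel/N2 stream with Y_F = 0.32 gives Z_g = 0.32 by construction.
print(gaseous_mixture_fraction({"NC12H26": 0.32, "N2": 0.68}))
```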
The maximum value of the gaseous mixture fraction for the steady solution Z steady g,max is used in the following to normalize the results (Z steady g,max = 0.319, 0.168, 0.132, 0.096 for gaseous and spray flames with Lf = 0.075, 0.3, 0.675, respectively). The effect of the spray on the combustion process is apparent from profiles of temperature and OH mass fraction. By considering the velocity profile, it has been verified that the local strain rate is similar for all flames (not shown). It is noticed that the maximum values of OH and temperature decrease as the Lefebvre number increases and, consequently, is smaller than the corresponding maximum values for the gaseous flames. This is due to the competition of the evaporation time τ v with the characteristic chemical and mixing times, which implies a reduction of the maximum value of Z g and a leaner combustion mode, compared to the stoichiometric diffusion-like mode observed for the gaseous flame. Examples of spray flame-vortex interaction A pair of symmetric counter-rotating vortices is superimposed to the initial steady-state velocity field. A schematic of the initialization procedure is presented in Fig. 1(b). The "Hat" vortex is used here [START_REF] Mantel | Fundamental mechanisms in premixed turbulent flame propagation via vortex-flame interactions -part II: numerical simulation[END_REF]. The equation for the velocity field in the vortex reference frame is: u θ = rΓ r 2 v exp - r 2 2r 2 v , u r = 0, (23) where r = x/ cos(θ), θ = atan(ỹ/x), x = xx v , ỹ = yy v , and (x v , y v ) is the initial position of the center of the vortex, Γ is the vortex strength and r v is the inner vortex radius. The two vortices of equal radii and opposite strengths are initially separated by a distance s (see Fig. 1), and they are at equal distance s/2 from the symmetry axis y = 0. As demonstrated in [START_REF] Mantel | Fundamental mechanisms in premixed turbulent flame propagation via vortex-flame interactions -part II: numerical simulation[END_REF], an appreciable tangential velocity u θ induced by the Hat vortex still exists until 3r v away from the vortex center. Following the recommendations of [START_REF] Mantel | Fundamental mechanisms in premixed turbulent flame propagation via vortex-flame interactions -part II: numerical simulation[END_REF], the separation distance between the two vortices and the flame is set to s = l x = 3r v , to avoid initial interactions between the viscous cores of the vortices and the reaction zone. Using this constraint on the vortex separation distance, the characteristic length scale of the perturbation is therefore l T = 9r v . The vortices that are injected from the left side interact then with the reaction zone before crossing the stagnation plane. To map the spectral diagram, parametric simulations along isolines of characteristic strain rate A Γ are performed using the following procedure: first the non-dimensional vortex strength Γ is fixed ( Γ = 1, 2, 4, 6, 10), then the radius of the vortex core is chosen ( rv δ L = The presence of the vortex has two effects on a spray flame. First, in analogy to the classical behavior of purely gaseous flames, the vortices wrinkle and strain the flame due to the stretch imposed on the flame front. 
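Before examining the flame response, the initialization of the vortex pair in Eq. (23) can be sketched in a few lines: two counter-rotating "Hat" vortices are superimposed on a given velocity field. The grid extent, vortex strength and the use of s as the inter-vortex separation below are illustrative assumptions, not the exact setup of the simulations.

```python
import numpy as np

def hat_vortex(X, Y, x_v, y_v, Gamma, r_v):
    """Velocity induced by one 'Hat' vortex, Eq. (23), in Cartesian components."""
    dx, dy = X - x_v, Y - y_v
    r = np.hypot(dx, dy)
    u_theta = r * Gamma / r_v**2 * np.exp(-r**2 / (2.0 * r_v**2))
    # u_r = 0, so (u, v) = u_theta * (-sin(theta), cos(theta))
    with np.errstate(invalid="ignore", divide="ignore"):
        u = np.where(r > 0, -u_theta * dy / r, 0.0)
        v = np.where(r > 0,  u_theta * dx / r, 0.0)
    return u, v

def add_vortex_pair(U, V, X, Y, x_v, s, Gamma, r_v):
    """Superimpose two vortices of opposite strength, at y = +/- s/2."""
    for y_v, sign in ((+0.5 * s, +1.0), (-0.5 * s, -1.0)):
        u, v = hat_vortex(X, Y, x_v, y_v, sign * Gamma, r_v)
        U, V = U + u, V + v
    return U, V

# Illustrative grid and parameters (not the paper's exact values).
x = np.linspace(0.0, 0.1, 200); y = np.linspace(-0.0375, 0.0375, 150)
X, Y = np.meshgrid(x, y, indexing="ij")
U0, V0 = np.zeros_like(X), np.zeros_like(X)   # the steady field would go here
r_v = (5.0 / 9.0) * 3e-3
U, V = add_vortex_pair(U0, V0, X, Y, x_v=0.02, s=3.0 * r_v, Gamma=1e-2, r_v=r_v)
```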
To illustrate this, we decompose the stretch κ into a local strain rate a and a curvature contribution: κ = (δ ij -ni nj ) ∂u i ∂x j strain + S d ∂ ni ∂x i curvature = a + S d ∇ • n (24) where n = -∇Z g /|∇Z g | is the vector normal to the flame surface and S d is the displacement speed. Second, the vortices also interact with the droplet dynamics by changing their velocities. The induced droplet preferential concentration due to the centrifugal vortex force may cause a local variation of the gaseous mixture composition. Depending on the strength and dimension of the vortices, the spray flame front may quench by strain rate effects or by the local depletion of fuel. Indeed, for a given Lf number, i.e. initial droplet diameter, the flame-vortex interaction is evaluated for seventeen combinations of Γ -Pe 0 , keeping R constant. The most representative cases for the spray flame-vortex interaction cases are summarized in Table 1. Results from three different spray-flame-vortex interaction cases for Lf = 0.3 are presented in Figs. 6 (case A), 9 (case B) and 11 (case C). Here, the time is normalized by the time that is required by the vortex core to reach the flame front. The inner reaction zone, identified by the OH isocontours, is colored by the strain rate in the bottom part of the images. The vortex location is represented by vorticity isocontours. The mixture fraction, normalized by its maximum value for the steady case Z * g = Z g /(Z steady g,max ), is presented in the bottom part and the spray droplets, shown in the top part, are colored by their axial velocity. The curvature is not shown since its magnitude is smaller than the strain rate and its maximum is generally located at the hat brim of the flame. For conditions at which flame extinction occurs at the symmetry axis 4 , it can be concluded that the contribution of curvature to flame stretch is less relevant compared to the strain rate. The vortex dissipation case is not relevant for the understanding of the spray flame-vortex interaction and is not further discussed. Figure 6 presents the spray-flame-vortex interaction corresponding to Γ = 1 and r v /δ L = 5/9 (case A in Table 1). Here, the flame wrinkling is enhanced by the contribution of curvature to the stretch at the hat brim whereas the maximum of the strain rate is detected at the symmetry point of the dome, i.e the hat top. The stretching of the flame due to the vortex interaction is not sufficiently strong to penetrate the flame front. To better visualize the processes, the temporal evolution of the flame and flow quantities at the location corresponding to the hat top are represented in Fig. 7. For comparison, results for the corresponding purely gaseous flames are also shown. The local strain rate on the inner reaction zone of the flame, evaluated at the position of the maximum value of OH, increases with time due to the interaction with the approaching vortices (see Fig. 7(a)). Considering the local heat release for the reference flame at steady conditions, ωsteady T , we introduce the normalized overall heat release at the symmetry axis as: Ω * T = ∞ -∞ ωT dx ∞ -∞ ωsteady T dx , ( 25 ) which is a global quantity used to analyze the effect of strain on the flame structure. As observed for gaseous flames [START_REF] Renard | Investigations of heat release, extinction and time evolution of the flame surface, for a nonpremixed flame interacting with a vortex[END_REF], the mean heat release increases with the vortex strength. However, Fig. 
7(a) shows that the vortex strength is not enough in this case to quench the flame. In analogy with gaseous flames, spray flames interacting with a vortex pair sustain higher values of local strain rate compared to the critical strain rate for steady flames due to unsteadiness and curvature effects [START_REF] Poinsot | Quenching processes and premixed turbulent combustion diagrams[END_REF]. The vortices interact also with the spray, by reversing its axial velocity in the evaporation zone near the hat top. Evaporating droplets are pushed away from the flame front, thereby decreasing the maximum value of Z * g available for combustion. For the present case, the contribution from this effect is not significant enough to affect the flame behavior. This process is highlighted in Fig. 7(b), which presents the temporal evolution of the maximum gaseous mixture fraction Z * g,max and of the maximum velocity of the evaporating droplets u l,max , located at the centerline y = 0 in the region where Ṡm = 0. For Γ = 4 and r v /δ L = 5/9 (case B in Table 1), presented in Fig. 9, the vortex pair is sufficiently strong to penetrate the flame. The quenching point corresponds to the condition of maximum strain rate, which is located at the symmetry point of the dome. At the moment of quenching, the gaseous mixture fraction distribution is homogeneous. This extinction mode is further analyzed in Fig. 10. The mean heat release increases by a factor of three because of the local strain rate, before the flame quenches at a ≈ 2800 s -1 . Here, the quenching time is identified by the point at which Ω * T starts decreasing (represented by the vertical line in Fig. 10). At the same time, the maximum value of Z * g decreases due to the spray flame-vortex interaction. 14 However, when the flame extinguishes, Z * g remains higher than the stoichiometric value. Enough gaseous fuel is then available for combustion, confirming the role of the strain rate contribution for this mode of extinction. For the same vortex characteristics, the purely gaseous flame does not quench since the spray flame is more sensitive to strain than the gaseous flame as predicted by the spectral diagram. For gaseous flames, the response to strain is commonly represented in the Ω * T -a-space. In order to simultaneously account for the effect on the mixture fraction for spray flames, results are represented in the Ω * T -Z * g,max -space, colored by the strain rate. These results are illustrated in Fig. 8. At the initial time, the mixture fraction value coincides with the steady solution. As soon as the strain rate increases, the normalized heat release increase too, and simultaneously, the maximum value of Z * g decreases due to the vortex interaction with the spray. For higher values of the strain rate, the total heat release drastically decreases since the flame quenches. The local value of Z * g,max is still higher than the stoichiometric value Z * ,stoich g,max = 0.478, indicating that in this case the quenching is due to strain rate since enough fuel is available to sustain the diffusion flame. 1 ( Γ = 2 and r v /δ L = 10/9). For this case, the strain rate at the hat top is not strong enough to quench the flame and the depletion of gaseous fuel is the primary reason for extinction. This extinction mechanism is nor present in the classical gaseous flame-vortex configuration. 
Looking at the axial velocity of the droplets, it can be seen that the vortices induce a strong positive velocity to the droplets located close to the hat top and a strong negative velocity to the droplets near the hat brim. Indeed, the droplets are forced to leave the hat top region to reach the hat brim of the flame, leading to a strong preferential droplet concentration along the flame. This translates into local inhomogeneities of the gaseous mixture composition, which is apparent when looking at the gaseous mixture fraction Z * g in Fig. 11. The mixture fraction accumulates at the hat brim, exceeding the maximum value of Z * g observed for the steady case, whereas its concentration decreases at the hat top with time. As such, the flame is not sustained anymore by fuel supply and, consequently, extinguishes. ⌦ ⇤ T [ ] ⌦ ⇤ T [ ] a[1/s] The same conclusions can be drawn by considering the temporal evolution of strain rate and heat release in Fig. 12. After an increase in Ω * T due to the strain rate, a plateau is reached for both strain rate and Ω * T . At this time, the maximum value of Z * g decreases due to the spray-vortex interaction. For Z * g smaller than the stoichiometric value, Ω * T starts to decrease due to fuel depletion (identified by the vertical line). Fuel depletion is then observed in a region where the strain rate decreases, confirming that the extinction is not Local strain rate due to strain rate but due to a decrease of Z * g . This behavior is clearly identified by looking at the solution in the Ω * T -Z * g,max space which has been added to Fig. 8. Increasing the strain rate and decreasing Z * g has an opposite effect on the flame. Starting at the right side (at Z * g = 1), Ω * T initially increases since the positive contribution of the strain rate on Ω * T dominates over the adverse effect due to the decrease of Z * g . For the fuel depletion case, a plateau is reached when the spray-vortex interaction compensates the flame-vortex interaction. After this, the flame extinguishes when Z * g,max becomes smaller than the stoichiometric value and is not enough to sustain combustion after the maximum of the local strain rate has passed. This extinction mode due to fuel depletion is not observed for purely gaseous flames since it is due to the interaction of the vortices with the spray, causing an inhomogeneous distribution of evaporated gaseous fuel. Ω T * ⌦ ⇤ T [ ] ⌦ ⇤ T [ ] a[1/s] By looking at the solutions in the Ω * T -Z * g,max space, it is then possible to distinguish flame quenching from the fuel depletion case. The composition trajectory for the flame quenching case by strain rate exhibits a rapid increase followed by a sudden drop in Ω * T due to strain rate and quenching, respectively. On the contrary, the fuel depletion case C shows the existence of an extended plateau and Ω * T drops with decreasing strain rate. The evolution of Ω * T in composition space is used in the following to distinguish between the two extinction modes. However, it is noted that since the vortex passage enhances both fuel depletion and flame quenching, the two extinction modes may not always be easily discriminated. Simulation results for the 17 operating conditions considered are summarized in the spectral diagram in Fig. 13. Good agreement between the numerical results and the spectral diagram, derived from the time scale analysis, is observed for both flame configurations. 
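In post-processing, the two extinction modes can be told apart from the hat-top time series along the lines just described. The sketch below encodes that reasoning; the 50% drop threshold and the use of the stoichiometric value of Z*_g,max as the discriminator are heuristic choices, not quantities prescribed by the analysis.

```python
import numpy as np

def classify_extinction(omega_T, Zg_max, Z_stoich, drop_frac=0.5):
    """Label a flame-vortex interaction from hat-top time series.

    omega_T : normalized overall heat release Omega_T*(t)
    Zg_max  : maximum normalized gaseous mixture fraction Z*_g,max(t)
    Z_stoich: stoichiometric value of Z*_g,max
    Extinction is flagged when Omega_T* falls below `drop_frac` of its running
    maximum (an arbitrary threshold, used only to locate the drop).
    """
    below = np.nonzero(omega_T < drop_frac * np.maximum.accumulate(omega_T))[0]
    if below.size == 0:
        return "no extinction"
    k = int(below[0])                    # first index after the drop in Omega_T*
    if Zg_max[k] >= Z_stoich:
        # Enough gaseous fuel remains: the drop is attributed to the induced strain.
        return "quenching by strain rate"
    # Z*_g,max below stoichiometry at the drop: local lack of fuel is the cause.
    return "extinction by fuel depletion"
```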
The injection of fuel as spray reduces the flame robustness, since its quenching time is larger compared to that of the purely gaseous phase due to the evaporation time. Such behavior is taken into account by the spectral diagram and was confirmed from the numerical results. Furthermore, a new extinction mode is identified for spray flames. Due to the underlying physical processes that are responsible for this extinction process, this mode is restricted to spray flames and not observed in gaseous flames. The competition between processes associated with the evaporation leads to a local depletion of fuel, which is illustrated in the spectral diagram (Fig. 13(b)). Effect of Lefebvre number In the previous section, the role of the evaporation process has been discussed by comparing the response of the flame to the vortex passage for a purely gaseous flame and a spray flame. To fully quantify the effect of the evaporation process, the cases of three Lf numbers are compared in this section, for constant values of Pe 0 and Γ couple, i.e. only the initial droplet diameter is modified (once again, R is kept constant in all calculations). In Figs. 14 and15, the evolution of Ω * T is presented as a function of Z * g,max for the cases summarized in Table 1. In Fig. 14(a) (case B), the vortex strength is high enough to quench the flame, characterized by a strong increase in Ω * T followed by a rapid drop, irrespectively of the Lf number. As discussed in Section 2, the robustness of a spray flame depends on the competition between time scales associated with the chemistry and the evaporation. Indeed, a vortex may quench a spray flame that is characterized by a slow evaporation process and does not extinguish a flame with smaller droplets. This is clearly observed for case D (Fig. 14(b)), for which only the spray flame with the largest Lf number extinguishes. The spectral diagram suggests that the flame robustness reduces accordingly with 1 + Lf. On the contrary, the extinction by fuel depletion depends on the competition between advection, mixing and evaporation processes. For larger droplets, the evaporation process is slow and not enough fuel is supplied by the evaporation before the droplets are pushed away from the flame front due to the vortex passage. This is verified in Fig. 15. Depending on the vortex characteristics, fuel depletion is observed for large droplets but not for the smallest (cases C and E). The strong effect of the vortices on the spray distribution is enhanced by the zero slip assumption used in the numerical simulation. For small droplets, this assumption is expected to be verified and, in general, the velocity induced by the vortex on the gaseous phase is an order of magnitude larger than the droplet velocity, rapidly forcing the droplets to move with the gaseous field. Results for Lf = 0.075 and Lf = 0.675 are compared to the spectral diagram prediction in Fig. 16. By comparing the results to the one obtained for Lf = 0.3 in Fig. 13(b), the dependence of the extinction limit to the Lf number is verified, the extinction region being extended as the droplet diameter at injection increases. Moreover, concerning the fuel depletion region, the same trend is found: increasing the Lf number at injection enlarges this region, which is confirmed by our theoretical analysis. 
By extrapolating the results to larger Lf numbers, it is expected that the spray flame with large droplet diameter at injection will be extinguished by any vortex that reaches the reaction zone, since the flame will be too close to extinction. Conclusions The spectral diagram for gaseous flame-vortex interaction was extended to spray flames. An analytic derivation was presented in the limit of momentum equilibrium, considering the influence of the evaporation time on the flame quenching time. A third dimension has been added to the spectral diagram, identified by the ratio between evaporation and chemical times, which is represented by the Lefebvre number, Lf. To confirm the spectral diagram, numerical simulations were performed by considering a planar counterflow configuration using a detailed chemistry description, and a Lagrangian method to describe the droplet evolution. Two different extinction modes were identified. The flame quenching, caused by an induced high strain rate at the flame front, is commonly observed for purely gaseous flames. The fuel depletion, a new extinction mode that is particular to spray flames, was caused by a local lack of gaseous fuel due to the preferential droplet concentration induced by the vortex interaction with the spray. The newly developed spectral diagram for spray flames accounts for both effects and correctly describes the numerical results. The effect of the evaporation time on the flame-vortex interaction and, specifically, on these two extinction modes was quantified by examining spray flames with different values of the Lf number. The proposed spectral diagram establishes the framework for the analysis of spray flame-vortex interaction to obtain a fundamental understanding of spray turbulent combustion and to develop turbulent spray combustion models, which require the correct representation of both extinction modes. To account for slip velocity effects on sprayflame-vortex interactions, the spectral diagram should be generalized to large drag Stokes numbers. Also, further extensions are required to consider vortex injection at the fuel side. In analogy to the theory for gaseous flames, the spectral diagram was developed and numerically verified by considering the vortex injection at the oxidizer side of the configuration. To examine the effect of injecting the vortex pair at the fuel side we performed additional calculations. Typical results for the spray flames when vortices are injected on the fuel side are represented in Fig. 17. The flame is more likely to conform with the fuel depletion extinction since the vortices interact with the spray distribution from the beginning. Due to the presence of the vortices, a strong inverse droplet velocity is observed at the symmetry axis (cfr. Fig. 17(b)), leading to strong inhomogeneities in the Z * g -field. Indeed, for case E fuel depletion is observed when injecting the vortices at the fuel side whereas the flame does not extinguish for an oxidizer side injection of the vortices. Moreover, the spray flame is found to be more sensitive to the strain rate when vortices are injected on the fuel side. This was also observed for gaseous flames (not shown). The spectral diagram does not account for the effect of injection side neither for gaseous nor for spray flames but it still provides a reasonable estimation of the flame behavior based on the order of magnitude of the competing processes in a flame-vortex interaction. Nomenclature B. 
A-priori evaluation of the reference evaporation time scale

The reference evaporation time scale is evaluated following the work of [START_REF] Réveillon | Analysis of weakly turbulent diluted-spray flames and spray combustion regimes[END_REF], in which the authors suggested the following definition:

τ_v = Sc ρ_l d_0² / (4 Sh µ ln(B_M^0 + 1)),   (26)

where Sc is the Schmidt number, Sh is the Sherwood number, and B_M^0 = B_M(T = T_burnt, Y_F = 0, p = 1 bar) is the Spalding number:

B_M^0 = (Y_{F,vs} − Y_F) / (1 − Y_{F,vs}).   (27)

The burnt gas temperature is denoted by T_burnt and Y_{F,vs} is the mass fraction of gaseous fuel at the droplet surface:

Y_{F,vs} = p_s / [p_s + (1 − p_s) W/W_F],   (28)

and p_s is the saturated pressure:

p_s = exp[(l_v W_F / R)(1/T_b − 1/T_d)],   (29)

with T_b the boiling temperature. To determine the Spalding number, we evaluate all parameters using 0D calculations with a fixed gas temperature. Within the temperature range T ∈ [1500, 2400] K, the Spalding number is almost linear in T, and B_M^0 ∈ [3.6, 6.7]. Therefore, depending on the chosen reference burnt gas temperature, the factor [ln(1 + B_M^0)]⁻¹ varies between 0.5 and 0.66. Consequently, the evaporation time estimate depends only weakly on the reference burnt gas temperature, compared to the diameter variations considered in this study, which amount to a factor of two between successive diameters and therefore a factor of four in terms of the evaporation time. Choosing the reference temperature T = 2000 K, with Sc = 0.7, Sh = 2, µ = 4 × 10⁻⁵ kg m⁻¹ s⁻¹ and ρ_l = 750 kg m⁻³, the relationship between the vaporization time and the droplet diameter at injection is estimated as τ_v = K d_0², where K = 1.083 × 10⁶ s m⁻².

C. Effect of boundary conditions: pre-vaporized case

In our simulations, we have chosen a fixed mass loading at injection, as well as fixed temperature and velocities for the gas and liquid phases. At this point, it is of interest to evaluate the impact of these choices. First, the effect of the mass loading at injection and of the velocities is characterized by the Damköhler number Da_{e,Γ}: it takes into account the injection velocity and mass loading through the strain rate at extinction of an unperturbed flame, A_e. Second, the gas temperature also affects the chemical time scale and is thus taken into account in the Damköhler number as well. Third, the liquid temperature and droplet diameters affect the evaporation time scale. If no pre-vaporization occurs, our diagram is expected to account for the effect of any choice of boundary conditions, as such a choice simply affects the vaporization time scale, which only enters in the flame. If pre-vaporization occurs, an additional time scale has to be taken into account:

τ_prev = Sc ρ_l d_0² / (4 Sh µ ln(B_M^pre + 1)),   (30)

where B_M^pre is evaluated at the injection temperature. A vaporization Stokes number for pre-evaporation, St_prev = A_0 τ_prev, can then be defined. If St_prev < 1, the droplets are considered fully pre-vaporized and the flame is a gaseous flame. If St_prev > 1, the droplets reach the flame. In this case, the vaporization time scale in the flame has to be evaluated using the diameter at the flame location, d_f:

d_f² = (1 − 1/St_prev) d_0².   (31)
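The estimates of Appendices B and C can be reproduced in a few lines. In the sketch below, the droplet surface temperature (450 K), latent heat, boiling temperature and mixture molecular weight are assumed n-dodecane/air values; with them, [ln(1+B_M)]⁻¹ falls within the 0.5-0.66 range quoted above and the regression constant K = τ_v/d_0² comes out of the order of 10⁶ s m⁻², close to the value given above.

```python
import numpy as np

R_GAS = 8.314  # J mol^-1 K^-1

def spalding_number(T_d, Y_F=0.0, W_F=170.3e-3, W_mix=29.0e-3,
                    l_v=2.56e5, T_b=489.0):
    """Spalding number from Eqs. (27)-(29); l_v, T_b, W_mix are rough assumed values."""
    p_s = np.exp(l_v * W_F / R_GAS * (1.0 / T_b - 1.0 / T_d))  # Eq. (29)
    Y_vs = p_s / (p_s + (1.0 - p_s) * W_mix / W_F)             # Eq. (28)
    return (Y_vs - Y_F) / (1.0 - Y_vs)                         # Eq. (27)

def evaporation_time(d0, B_M, Sc=0.7, Sh=2.0, mu=4.0e-5, rho_l=750.0):
    """Eq. (26): tau_v = Sc rho_l d0^2 / (4 Sh mu ln(1 + B_M))."""
    return Sc * rho_l * d0**2 / (4.0 * Sh * mu * np.log1p(B_M))

# Assumed droplet surface temperature of 450 K (a wet-bulb-like guess).
B_M = spalding_number(450.0)
K = evaporation_time(1.0, B_M)          # tau_v for d0 = 1 m equals K [s/m^2]
for d0 in (25e-6, 50e-6, 75e-6):
    print(f"d0 = {d0*1e6:4.0f} um   tau_v = {K * d0**2 * 1e3:5.2f} ms")

# Appendix C: pre-vaporization check at the 300 K injection temperature.
A0 = 50.0
tau_prev = evaporation_time(25e-6, spalding_number(300.0))
St_prev = A0 * tau_prev
d0 = 25e-6
d_f = d0 * np.sqrt(1.0 - 1.0 / St_prev) if St_prev > 1.0 else 0.0   # Eq. (31)
print(f"St_prev = {St_prev:.1f}   d_f = {d_f*1e6:.1f} um")
```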
Figure 1 [(a) Purely gaseous flame; (b) spray flame]: Schematic of the flame-vortex interaction for (a) a purely gaseous flame and (b) a spray flame: two counter-rotating vortices are superimposed to the initial steady-state solution of the counterflow flame.

Figure 2 [(a) Schematic of flame-vortex interaction; (b) flow field representation]: Qualitative features of a flame-vortex interaction [30]. The grey, red and blue regions in (b) identify the diffusion layer, the inner reaction zone and the evaporation zone, respectively. Streamlines are added to illustrate the droplet trajectories.

Figure 4 [(a) Gaseous flame; (b) spray flame for Lf = 0.3]: Flame structure at the centerline (y = 0 mm) for (a) the gaseous and (b) the spray flame. Reaction and mixing zones are identified by the red and black vertical dashed lines, respectively. The evaporation zone is identified by blue vertical dashed lines for the spray flame. x_st is the axial position of the stagnation plane. For clarity, the OH mass fraction is multiplied by a factor 50.

Figure 5: Structure of the steady gaseous (solid line) and spray flames (with increasing Lefebvre number Lf = 0.075, 0.3, 0.675) at the centerline (y = 0 mm) in Z_g-space. The vertical line indicates the location of the stoichiometric mixture fraction.

Figure 6: Temporal evolution of the spray-flame-vortex interaction for the case without extinction (case A in Table 1, corresponding to r_v/δ_L = 5/9, Γ̃ = 1 and Lf = 0.3). Instantaneous fields at normalized time τ = 1.2, 1.4, 1.6, 1.8, 2.0 (from left to right). Top: isocontours colored by OH mass fraction (black to yellow), vorticity contours (black), droplet positions colored by droplet axial velocity (blue to red dots). Bottom: OH isocontours colored by strain rate (red to yellow), vorticity contours (black) and gaseous mixture fraction Z*_g (blue).

Figure 7 [(a) Strain rate induced on the inner flame zone (top) and normalized mean heat release (bottom) versus normalized time; (b) maximum value of Z*_g (top) and of the evaporating-droplet velocity u_l (bottom) versus normalized time]: Temporal evolution for the spray (symbols) and gaseous (lines) flames for the case without extinction at the hat top (case A, r_v/δ_L = 5/9, Γ̃ = 1 and Lf = 0.3). The horizontal dashed line in the upper right panel indicates the stoichiometric condition.

Figure 8: Evolution of Ω*_T as a function of the maximum value of Z*_g for cases B (flame quenching) and C (fuel depletion). Results are colored by the local strain rate.

Figure 9: Temporal evolution of the spray-flame-vortex interaction for the case with extinction due to strain rate (case B in Table 1, corresponding to r_v/δ_L = 5/9, Γ̃ = 4 and Lf = 0.3). Instantaneous fields at normalized time τ = 0.0, 2.4, 2.8, 3.25, 3.5 (from left to right). Legend as in Fig. 6.

Figure 10 [panels as in Fig. 7]: Temporal evolution for the spray (symbols) and gaseous (lines) flames for r_v/δ_L = 5/9, Γ̃ = 4 and Lf = 0.3 (case B) at the hat top. The horizontal line in the upper right panel indicates the stoichiometric mixture fraction; the vertical line denotes the time of flame quenching.

Figure 11: Temporal evolution of the spray-flame-vortex interaction for the case of extinction due to fuel depletion (case C in Table 1, corresponding to r_v/δ_L = 10/9, Γ̃ = 2 and Lf = 0.3). Instantaneous fields at normalized time τ = 1.0, 2.25, 3.5, 4.75, 5.25, 5.35 (from left to right). Legend as in Fig. 6.

Figure 12 [panels as in Fig. 7]: Temporal evolution for the spray (symbols) and gaseous (lines) flames for the case of extinction due to fuel depletion at the hat top (case C, r_v/δ_L = 10/9, Γ̃ = 2 and Lf = 0.3). The horizontal line in the upper right panel indicates stoichiometry; the vertical line denotes the time of fuel depletion.

Figure 13 [(b) Spray flame for Lf = 0.3]: Spectral diagram for flame-vortex interaction: the symbols represent numerical simulations. Squares represent the vortex dissipation regime, triangles the "no extinction" condition, circles the extinction due to strain rate, and stars the extinction due to fuel depletion. The new fuel depletion zone is highlighted in gray.

Figure 14 [(a) Results for case B; (b) results for case D]: Flame quenching: evolution of Ω*_T as a function of the maximum of Z*_g for Lf = 0.075 (blue), Lf = 0.3 (black) and Lf = 0.675 (red), colored by the local strain rate.

Figure 15 [(a) Results for case C; (b) results for case E]: Fuel depletion: evolution of Ω*_T as a function of Z*_{g,max} for Lf = 0.075 (blue), Lf = 0.3 (black) and Lf = 0.675 (red), colored by the local strain rate.

Figure 16 [(a) Spray flame for Lf = 0.075; (b) spray flame for Lf = 0.675]: Spectral diagram for spray-flame-vortex interaction: the symbols represent numerical simulations. Squares represent the vortex dissipation regime, triangles the "no extinction" mode, circles the extinction due to strain rate, and stars the extinction due to fuel depletion. The new extinction regime due to fuel depletion is shown in gray.

Figure 17 [(a) No extinction for case A, instantaneous fields at normalized time τ = 1.0, 2.0, 3.0, 3.5, 4.0, 4.5; (b) fuel depletion for case E, instantaneous fields at normalized time τ = 1.0, 2.0, 2.5, 2.9, 3.0, 3.1, 3.2]: Temporal evolution of the spray-flame-vortex interaction when vortices are injected on the fuel side. Legend as in Fig. 6.

Table 1: Summary of operating conditions and regimes for the gaseous and spray flame-vortex interaction configurations. (In the parametric sweep, the vortex core radius takes the values r_v/δ_L = 1/18, 2/9, 5/9, 10/9.)

Nomenclature (excerpt): W — molecular weight of the mixture [kg mol⁻¹]; W_k — molecular weight of species k [kg mol⁻¹]; Y_k — mass fraction of species k [–]; Z_g — gaseous mixture fraction [–].

A. Effect of the vortex injection side on flame-vortex interaction

Footnotes: (1) Note that the velocity U_F is not equal to the laminar flame speed S_L, as is well recognized for triple flames [31]. (2) The third dimension is homogeneous. Droplets are still considered as spheres, and closure laws for drag/evaporation are still 3D. [6] (3) The methodology used to estimate τ_v ∼ K d_0² from the initial droplet diameter d_0 and the regression rate K is described in Appendix B. (4) When differential diffusion effects are accounted for, extinction of gaseous flames is not always located at the hat top [21].
72,333
[ "7345", "4628" ]
[ "416", "73500", "73500", "73500" ]
01488882
en
[ "info" ]
2024/03/04 23:41:48
2017
https://inria.hal.science/hal-01488882/file/icassp2017.pdf
Benjamin Girault Shrikanth S Narayanan Antonio Ortega TOWARDS A DEFINITION OF LOCAL STATIONARITY FOR GRAPH SIGNALS Keywords: In this paper, we extend the recent definition of graph stationarity into a definition of local stationarity. Doing so, we present a metric to assess local stationarity using projections on localized atoms on the graph. Energy of these projections defines the local power spectrum of the signal. We use this local power spectrum to characterize local stationarity and identify sources of non-stationarity through differences of local power spectrum. Finally, we take advantage of the knowledge of the spectrum of the atoms to give a new power spectrum estimator. INTRODUCTION The recently introduced extension of the definition of stationarity to the framework of graph signal processing allows us to study stochastic graph signals with respect to their spectral properties [START_REF] Girault | Stationary Graph Signals using an Isometric Graph Translation[END_REF]. Applications such as Wiener filtering [START_REF] Girault | Signal Processing on Graphs -Contributions to an Emerging Field[END_REF] or more generally power spectrum filtering through convolutive filters are made possible by those spectral properties. White and colored noises are also straightforward to generalize using the power spectrum of the graph signal, i.e. the expected squared modulus of its Fourier transform E | x(l)| 2 . The definition gave rise to several methods to estimate this power spectrum, from the very simple direct estimator [START_REF] Girault | Stationary Graph Signals using an Isometric Graph Translation[END_REF] to an extension of the Bartlett-Welch method to the graph framework [START_REF] Perraudin | Stationary signal processing on graphs[END_REF]. However, despite its good interpretation in the graph Fourier domain, the interpretation of stationarity in the vertex domain remains elusive. Indeed, a graph signal is stationary if and only if its statistics are invariant through the graph translation operator [START_REF] Girault | Stationary Graph Signals using an Isometric Graph Translation[END_REF][START_REF] Girault | Translation on Graphs: An Isometric Shift Operator[END_REF], and this operator is well understood in the Fourier domain, but not so in the vertex domain, especially because of its complex nature. In [START_REF] Girault | Localization Bounds for the Graph Translation[END_REF], we addressed this question through energy bounds of the graph translation impulse response, and showed that it can be interpreted as a diffusion-like operator. In this paper, we go a step further on the concept of stationarity by refining it through the definition of local stationarity. Such a definition finds its roots at the core of stationarity in the temporal domain where a temporal signal is stationary if it is statistically the same when viewed from any point of time. We extend this to the graph framework by requiring that a locally stationary graph signal be the same when viewed from any vertex. This gives rise to a clear interpretation of stationarity in the vertex domain, and more importantly, this allows to pinpoint the sources of non-stationarity. In the process, we also give a novel power spectrum estimator obtained from the local power spectrum. The paper is organized as follows. sec. 2 recalls the classical framework of graph signal processing. sec. 3 gives the recent defi- nition of graph stationarity, while sec. 
4 gives the new definition of local stationarity and discuss how to formalize it. Finally, experiments are carried out in sec. 5. GRAPH SIGNAL PROCESSING We first state the concepts from graph signal processing that are used in this paper. Let G = (V, E) be a graph with V = {1, . . . , N } the set of vertices and E ∈ V × V the set of edges between vertices. We focus here on undirected graphs: if (ij) ∈ E, then (ji) ∈ E. Let A be the weighted adjacency matrix with aij the weight of the edge ij, or 0 if no such edge exists. Let D = diag(d1, . . . , dN ) be the degree matrix with di = j aij the degree of vertex i. Let L = D -A be the Laplacian matrix. A graph signal x = (x1, . . . , xN ) T maps vertices to scalar values (real or complex). The graph Fourier transform (GFT) of x is then defined as the projection onto the eigenvectors χ l of L, with Lχ l = λ l χ l , and such that x(l) = x, χ l = i x i χ * l (i) [START_REF] Shuman | The Emerging Field of Signal Processing on Graphs: Extending High-Dimensional Data Analysis to Networks and Other Irregular Domains[END_REF]. Noting F the matrix of the GFT, we have F * = [χ0 . . . χN-1] and x = F x and x = F * x since F is unitary [START_REF] Girault | Signal Processing on Graphs -Contributions to an Emerging Field[END_REF]. Note that the uniqueness of the matrix F is not guaranteed when there is a non unique eigenvalue λ l = λ l+1 . λ l = χ * l Lχ l represents the variation of the Fourier mode χ l on the graph and ranges from λ0 = 0 to λN-1 ≤ ρG ≤ 2dmax, with ρG a carefully chosen upper bound on λN-1 [START_REF] Girault | Translation on Graphs: An Isometric Shift Operator[END_REF]. In this paper, boldface is used for stochastic variables. For example, x = (x1, . . . , xN ) T is a stochastic graph signal. STATIONARY GRAPH SIGNAL: GLOBAL DEFINITION The original definition of stationarity as stated in [START_REF] Girault | Stationary Graph Signals using an Isometric Graph Translation[END_REF][START_REF] Girault | Signal Processing on Graphs -Contributions to an Emerging Field[END_REF] involves extending the time shift operator to the graph framework. This in turns allows to define graph signal stationarity as a statistical invariance through the graph shift. We use the graph translation as a generalization of the time shift to graph signals [START_REF] Girault | Translation on Graphs: An Isometric Shift Operator[END_REF]. This operator amounts to a phase shift of the Fourier modes: TGχ l = exp(-ıπ λ l /ρG)χ l . Definition 1 (Graph Translation [START_REF] Girault | Translation on Graphs: An Isometric Shift Operator[END_REF]). TG := exp -ıπ L ρG . We obtain then definitions of strict and wide sense stationarity: Definition 2 (Strict Sense Stationary [START_REF] Girault | Stationary Graph Signals using an Isometric Graph Translation[END_REF]). A stochastic signal x on the graph G is Strict-Sense Stationary (SSS) if and only if: x d = TGx, (1) where d = stands for equality of probability distributions. Definition 3 (Wide-Sense Stationary [START_REF] Girault | Stationary Graph Signals using an Isometric Graph Translation[END_REF]). A stochastic signal x on the graph G is Wide-Sense Stationary (WSS) if and only if: µx := E[x] = E[TGx] (2) Rx := E[xx * ] = E[(TGx)(TGx) * ]. (3) In Def. 3, µx is the mean of the graph signal x and Rx is the autocorrelation matrix. Def. 3 states the invariance of these two quantities under the application of the graph translation to the stochastic graph signal x. 
This definition has an interesting interpretation in the Fourier domain: Property 1 (Spectral Characterization [START_REF] Girault | Stationary Graph Signals using an Isometric Graph Translation[END_REF]). A stochastic signal x on the graph G is WSS if and only if µ̂_x(l) = E[x̂(l)] = 0 if λ_l > 0 (4) and Γ_x(k, l) = E[x̂(k) x̂^*(l)] = 0 if λ_k ≠ λ_l. (5) In other words, Def. 3 is equivalent to the first moment µ_x of x being a DC signal, i.e. a graph signal whose Fourier transform is only non-zero on the first Fourier mode, and the spectral autocorrelation matrix Γ_x = F R_x F^* being block diagonal, with blocks corresponding to equal eigenvalues. We denote γ_x(l) = Γ_x(l, l) the power spectrum of x. Furthermore, assuming that all eigenvalues are distinct¹, the matrix Γ_x is diagonal and we can use the results of [3]², together with the inverse GFT γ̌_x of γ_x, to characterize WSS graph signals: Property 2 (Localization Characterization [START_REF] Perraudin | Stationary signal processing on graphs[END_REF]). Assuming uniqueness of the eigenvalues of L, a stochastic signal x on the graph G is WSS if and only if µ_x = E[x̂(0)]χ_0 and: ∀i, j ∈ V, R_x(i, j) = (T_j γ̌_x)_i. (6) In Property 2, the operator T_j is the localization operator verifying T_j x = x ∗ δ_j, i.e. (T_j x)(i) = Σ_l x̂(l) δ̂_j(l) χ_l(i), with ∗ the generalized convolution operator [START_REF] Shuman | Vertex-frequency analysis on graphs[END_REF]. γ̌_x is the autocorrelation function of the stochastic graph signal x. The advantage of this formulation is that it can lead to a definition of local stationarity. Indeed, it is well known that when γ_x(l) can be written as a polynomial of λ_l, then T_j γ̌_x is localized around j, with the localization being tighter if the polynomial is of lower order [START_REF] Girault | Localization Bounds for the Graph Translation[END_REF]. In other words, if said polynomial is of low order, correlations extend only to the vicinity of a vertex. In that case, studying those vertices is enough to draw conclusions on the stationarity, and the power spectrum, of a graph signal. We formalize this into the notion of local stationarity in the rest of this paper. LOCAL STATIONARITY We give in this section a first informal definition of local stationarity, and develop a framework to illustrate this concept. Additionally, we give a sensible interpretation of local stationarity in the vertex domain. Definition and Premises We define local stationarity as the property that a signal "looks the same" in any neighborhood. More precisely, given a neighborhood span corresponding to how far from a vertex we observe a signal, then for any two vertices, the signal on these two neighborhoods shall be statistically the same. For example, this can be the k-hop neighborhood. Our goal is to show how to formalize local stationarity and apply this formalism to synthetic and real data. Doing so, we devise a new method to estimate the (global) power spectrum from this formalism. This informal definition of local stationarity finds its premises in the temporal framework. Indeed, we know that a temporal WSS signal can be split into many smaller signals of equal length, with each of them being statistically the same. In [START_REF] David | Spectrum estimation and harmonic analysis[END_REF], the author leverages this property by carefully selecting windows of compact temporal support and optimal frequency concentration, such that the projection of the signal on those windows can be used to obtain a good estimator of the power spectral density.
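As a point of reference for this temporal premise, the short numpy sketch below (an illustration added here, not material from the paper) splits a synthetic WSS time series into equal-length windows and averages their periodograms, Bartlett-style; the AR(1) model, the window length and the normalization are arbitrary choices made only for the example.

import numpy as np

rng = np.random.default_rng(1)

# A temporal WSS signal: AR(1) process with unit-variance innovations (illustrative)
T, a = 1 << 14, 0.9
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.standard_normal()

# Split the signal into equal-length windows: each piece is statistically the same
win = 512
pieces = x[: (T // win) * win].reshape(-1, win)

# Averaging the windowed periodograms recovers the AR(1) spectrum (Bartlett's method)
periodograms = np.abs(np.fft.rfft(pieces, axis=1)) ** 2 / win
bartlett = periodograms.mean(axis=0)
freqs = np.fft.rfftfreq(win)
theory = 1.0 / np.abs(1.0 - a * np.exp(-2j * np.pi * freqs)) ** 2
print(np.corrcoef(bartlett, theory)[0, 1])   # close to 1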
However, one crucial difference between graph and temporal frameworks is that graph Fourier modes can be highly localized [START_REF] Bastien Pasdeloup | Toward An Uncertainty Principle For Weighted Graphs[END_REF][START_REF] Agaskar | A Spectral Graph Uncertainty Principle[END_REF], whereas classical Fourier modes are delocalized. This has an adverse impact on the definition of local stationarity. The challenge is that given a Fourier mode localized on vertex i, and given two windows about vertex one around vertex i and the other around another vertex j, which is not close to vertex i, then this Fourier mode may have very different values in the two windows, with most of the energy in the window centered in i, and no energy around j. We see with this example, that contrary to the global definition of stationarity, there are restrictions to the power spectrum of a locally stationary graph signal. For instance, the example above shows that the signal whose power spectrum is a delta centered on that Fourier mode is not locally stationary. In [START_REF] Perraudin | Stationary signal processing on graphs[END_REF], the authors made an assumption of smoothness of the power spectrum that can be interpreted with this remark. However, the windows they use to perform power spectrum estimation are not localized such that this approach cannot be used to define local stationarity. Our Framework We now study one path towards a formal definition of local stationarity for graph signals. When introducing local stationarity, we have used the so-called k-hop neighborhood. This gives a first definition of windows on the graph. However, this simple definition has two drawbacks: the window edges are sharp, and the weights on the graph are not taken into account. Several approaches can be devised to address this point, from refining the k-hop neighborhood (e.g. using shortest-path or diffusion distances), or using the GSP toolbox. In this paper, we choose the later and use as windows a set of localized atoms {gi,m}i,m with i the vertex on which the atom is centered, and m corresponding to the span of the window on the graph. The associated decomposition of a signal x onto this set of atoms is given by wx(i, m) = x, gi,m . We then have: Definition 4 (Local Power Spectrum). The local power spectrum of a stochatic graph signal x about vertex i at scale m is given by: Sx(i, m) := E |wx(i, m)| 2 Note that this quantity is not strictly a power spectrum since the atoms should not be localized in the frequency domain to account for localized Fourier modes, as stated before. Nevertheless, we will see that they can be used to perform spectrum estimation. We are left with the definition of the atoms gi,m. As discussed before, these atoms should be more or less localized on i depending on m. We arbitrarily choose to increase localization with m. One way to achieve localization is using the localization operator of [START_REF] Shuman | Vertex-frequency analysis on graphs[END_REF] applied to a signal gm with smooth GFT. Indeed, if gm(l) can be written as a polynomial of λ l of low order (or can be well approximated with a low order polynomial), then the atom Tigm is localized about i, depending on the order of the polynomial [START_REF] Girault | Localization Bounds for the Graph Translation[END_REF]. We are left with choosing the set of signals {gm}m. Formal definition of such a set will be the subject of a future paper, and we now explore a few possibilities from the literature, and one simple alternative. 
A review of the literature on tight frames for graph signals can be found in [START_REF] Shuman | Spectrum-Adapted Tight Graph Wavelet and Vertex-Frequency Frames[END_REF], where the authors explore several definitions of g_m. However, most of those definitions do not yield good localization properties (i.e. high order polynomials are required in the definition of ĝ_m from {λ_l}). This is the same for the definition used in [START_REF] Perraudin | Stationary signal processing on graphs[END_REF], where the authors define ĝ_m(λ_l) = ĝ(λ_l - mτ), where ĝ(λ_l) can be approximated with a low order polynomial, but its translations ĝ_m cannot. One definition of g_m in [START_REF] Shuman | Spectrum-Adapted Tight Graph Wavelet and Vertex-Frequency Frames[END_REF] does have the required localization properties: the Spectral Graph Wavelet Transform (SGWT) of [START_REF] Hammond | Wavelets on graphs via spectral graph theory[END_REF]. In this paper, we focus on this transform, and on a new wavelet-like decomposition having the theoretical advantage of not being biased in the Fourier domain. This new basis, which will be called "Expo", is built from the signals g_m: ĝ_m(λ_l) = exp(-κλ_l) if m = 0, and ĝ_m(λ_l) = exp(-κλ_l/2^m) - exp(-κλ_l/2^(m-1)) otherwise, with κ a free parameter. The advantage of this decomposition is the property that Σ_m ĝ_m = 1, such that there is no bias towards any graph frequency. The value of κ should then be chosen such that, when considering the signals g_m with m ≤ M, their summed spectrum spans the graph frequencies well enough. Examples of frequency responses are shown in Fig. 2 for a particular graph, with M = 15. We can show that the atoms constructed in this manner are approximately local using [START_REF] Girault | Localization Bounds for the Graph Translation[END_REF], and that the locality is tighter for larger values of m. Theoretical Properties We use the local power spectrum S_x(i, m) in two different ways. First, studying its variation with i gives information on whether the graph signal is locally stationary. Note that due to the atoms being of different energy (see Fig. 2), the local spectrum needs to be rescaled to remove the bias of this energy (first row of Fig. 3): S_x(i, m)/‖g_{i,m}‖_2^2. To illustrate the difficulties of finding a good set of atoms, we study the local power spectrum of a WSS graph signal: S_x(i, m) = Σ_l γ_x(l) |ĝ_m(λ_l)|^2 |χ_l(i)|^2. We see here that having S_x(i, m) independent of i would require |χ_l(i)| independent of i, which is not the case, such that global stationarity does not imply local stationarity. This is an illustration of the earlier point stating that the power spectrum of a locally stationary graph signal cannot be arbitrary, due to the localization of some Fourier modes. The second use of the local power spectrum is to perform an estimation of the global spectrum when the signal is WSS. Indeed, let x be a WSS graph signal. We can show that: S_x(m) := Σ_i S_x(i, m) = Σ_l γ_x(l) |ĝ_m(λ_l)|^2. This relation can be written in matrix form as H γ_x = S_x, with S_x the column vector of the S_x(m) and H the rectangular matrix with H_{m,l} = |ĝ_m(λ_l)|^2. Using a least-squares estimator yields an estimation of γ_x from the knowledge of S_x. We also use a similar estimator without the coarsest scales, i.e. only the atoms with m ≥ M_min, the rationale being that if we can infer the global power spectrum from local quantities, then we can easily perform distributed stochastic graph signal processing.
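To make the construction concrete, the following numpy sketch (an illustration under our own conventions, not the authors' code) evaluates the Expo kernels on the Laplacian eigenvalues, builds the localized atoms g_{i,m} = T_i g_m from the eigenvectors, and averages |⟨x, g_{i,m}⟩|² over realizations to obtain S_x(i, m); the values of κ and M, the omission of any √N factor in the localization operator, and the array layout are illustrative assumptions.

import numpy as np

def expo_kernels(lam, M, kappa):
    """Spectra g_hat_m(lambda_l) of the Expo decomposition, for m = 0..M."""
    G = np.empty((M + 1, lam.size))
    G[0] = np.exp(-kappa * lam)
    for m in range(1, M + 1):
        G[m] = np.exp(-kappa * lam / 2 ** m) - np.exp(-kappa * lam / 2 ** (m - 1))
    return G

def local_power_spectrum(X, chi, lam, M=15, kappa=1.0):
    """S_x(i, m) = E|<x, g_{i,m}>|^2 with atoms g_{i,m} = T_i g_m.

    X   : (N, R) realizations of the graph signal, one per column.
    chi : (N, N) matrix of Laplacian eigenvectors (columns are chi_l).
    lam : (N,) Laplacian eigenvalues.
    Returns the raw local power spectrum (N, M+1) and its energy-normalized
    version S_x(i, m) / ||g_{i,m}||_2^2.
    """
    G = expo_kernels(lam, M, kappa)                  # (M+1, N) kernel values
    # Atom g_{i,m}(j) = sum_l g_hat_m(lam_l) chi_l*(i) chi_l(j)
    atoms = np.einsum('ml,il,jl->mij', G, chi.conj(), chi)
    W = np.einsum('mij,jr->mir', atoms.conj(), X)    # projections <x, g_{i,m}>
    S = np.mean(np.abs(W) ** 2, axis=2).T            # (N, M+1)
    energy = np.sum(np.abs(atoms) ** 2, axis=2).T    # ||g_{i,m}||_2^2
    return S, S / energy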
Mathematically, the least-square estimator is performed with the last elements of Sx and the last rows of H above. EXPERIMENTS In this section, we experiment with the framework described above on a geographical graph. We show three uses of the local power spectrum: non-stationarity detection, power spectrum estimation, and power spectrum approximation using the finer windows (atoms associated to larger values of m). Graph The graph we use is the Molène graph [START_REF] Girault | Stationary Graph Signals using an Isometric Graph Translation[END_REF][START_REF] Girault | Signal Processing on Graphs -Contributions to an Emerging Field[END_REF][START_REF] Girault | Signaux Stationnaires sur graphe : étude d'un cas réel[END_REF]. Its vertices are 28 weather stations in the Brittany region, France. The dataset has been published by the French national weather agency 3 , and comes with hourly readings of the weather stations over the month of January 2014 (744 readings). We build the edges of the graph using a Gaussian kernel of the geographical distance between vertices as edge weights (aij = exp(-d 2 ij /(2σ 2 )), with σ 2 = 5.10 9 ), and remove the edges corresponding to distances greater than 96km (aij < 10 -4 ). The graph is shown on Fig. 1. We build then the set of atoms from the SGWT and the Expo schemes described in the previous section. Fig. 2 shows the spectrum of the signals gm for both decompositions, along with the spectrum of their sum and their truncated sum (where coarsest atoms are removed). We remark that the spectrum of the sum is flat for the Expo decomposition, but its truncated sum influences high frequencies more than the SGWT, meaning that SGWT should give more stable results in the high frequency spectrum when approximating the power spectrum from a truncated decomposition. Notice also that the DC-component is only found for m = 0, and as soon as Mmin > 0, this power of the DC-component cannot be recovered by the estimator. Fig. 2 also shows the energy of the atoms. The energy of the SGWT atoms is larger for finer scales (m large) whereas the Expo decomposition has very low energy for those scales, suggesting again less bias when approximating the power spectrum from a truncated decomposition for SGWT. Synthetic Data Before studying real (weather) data, we begin with synthetic data for which we can choose a ground truth for the power spectrum. To that end, we choose a simple GMRF model with power spectrum γx(l) = (a + λ l ) -1 [START_REF] Pavez | Generalized Laplacian precision matrix estimation for graph signal processing[END_REF]. This power spectrum is smooth for high frequencies (almost flat), hence interesting for the study of local stationarity. We generate 744 realizations of this model, and estimate the local power spectrum by averaging |wx(i, m)| 2 . As shown on Fig. 3, the local power spectrum without normalization is not very informative due to the atoms being of different energy (first row), while after normalization, we clearly see similarities between local power spectrum of vertices with both the SGWT and the Expo decompositions. Comparison of various approaches to spectrum estimation is shown on Fig. 4. The Simple estimator refers to the estimator computing the Fourier transform of each realization and performing the mean power spectrum, Perraudin et al. refers to the approach of [START_REF] Perraudin | Stationary signal processing on graphs[END_REF]. All four of them perform very well on the GMRF model. 
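As a complement, here is a hedged numpy sketch of the (possibly truncated) least-squares recovery of the global power spectrum from the summed local power spectrum, as described at the end of the previous subsection; it follows the array layout of the earlier sketch and is not the authors' implementation.

import numpy as np

def global_psd_from_local(S, G, M_min=0):
    """Least-squares estimate of gamma_x from the local power spectrum.

    S     : (N, M+1) array of local power spectra S_x(i, m) (unnormalized).
    G     : (M+1, N) array of kernel values g_hat_m(lambda_l).
    M_min : coarsest scale kept; rows m < M_min of the linear system are
            discarded, mimicking the truncated estimator described in the text.
    """
    S_sum = S.sum(axis=0)            # S_x(m) = sum_i S_x(i, m)
    H = np.abs(G) ** 2               # H[m, l] = |g_hat_m(lambda_l)|^2
    gamma, *_ = np.linalg.lstsq(H[M_min:], S_sum[M_min:], rcond=None)
    return gamma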
As expected, the simple estimator has more variance than the other. Finally, the last row of Fig. 3 shows how the estimator behaves when the decomposition is truncated. As expected, the SGWT yields a more stable estimator when only the lowest scales are missing. Real Data The dataset includes also weather readings, which the temperature is of particular interest to us. We preprocess the data according to [START_REF] Girault | Signal Processing on Graphs -Contributions to an Emerging Field[END_REF] to remove the temporal dependency of the readings in order to assume ergodicity and obtain relevant statistical estimators. Fig. 5 (top row) shows the resulting local power spectrum after normalization. A very interesting observation drawn from these local power spectrum estimates is the fact that one vertex is clearly different than the others with high spectral energy in finer scales: this vertex corresponds to the weather station of Guiscriff (vertex #19 on Fig. 5, circled in red on Fig. 1). To assess whether this is a significant difference, we synthesize a WSS graph signal with the same prescribed power spectrum (see [START_REF] Girault | Signal Processing on Graphs -Contributions to an Emerging Field[END_REF] for details), and the same number of realizations (744). This yields the local power spectrum of Fig. 5 (middle row), where the local power spectrum of that vertex is not as dominant as in the temperature dataset. This difference can be explained from the data, where this particular weather station shows the highest short-time variation of temperatures (+5.3˚C, +8.7˚C, +9.5˚C in respectively 1, 2, and 3 hours), thus having the highest variance in the preprocessed data. This is one very important use of the local power spectrum approach: identifying the sources of non-stationarity. Indeed, the global power spectrum alone cannot be used for that very purpose since the correlation matrix Rx of a WSS signal does not show any obvious structure, and the very simple non-stationary case of increasing the variance on one vertex does not lead to a spectral correlation matrix where we can identify the vertex (see [START_REF] Girault | Signal Processing on Graphs -Contributions to an Emerging Field[END_REF] for details). Local power spectrum is therefore an essential tool to study non-stationarity. Finally, power spectrum estimators yields slightly different results as shown on Fig. 4, with our estimators closer to the simple estimator in the low frequencies, and smoother in the high frequencies. This suggest that compared to [START_REF] Perraudin | Stationary signal processing on graphs[END_REF], our method allows for more richness of the power spectrum in the lower part of the power spectrum. Finally, truncation of the local power spectrum yields similar results on the estimator with SGWT giving more stable estimates than Expo (see Fig. 5). CONCLUSIONS AND PERSPECTIVES This work on local stationarity yields highly interesting results, and in particular a definition of stationarity that is easy to interpret in the vertex domain. Moreover, sources of non-stationarity in the vertex domain become easier to pinpoint using local stationarity. Finally, we show that the local power spectrum we defined can be used to approximate the global power spectrum, and even approximate it without the use of the coarsest scales. 
Numerous perspectives of local stationarity are being investigated, among which we can cite work on the signals g_m to obtain better localization properties and/or better power spectrum estimation, further study of the local power spectrum, or how well this framework works for different classes of stochastic graph signals. Finally, we are looking into a local stationarity test extending the work of [START_REF] Borgnat | Testing stationarity with surrogates: a time-frequency approach[END_REF] based on our local power spectrum. Fig. 1: Molène graph. Circled in red: non-stationarity identified in Fig. 5. Fig. 2: Top: Spectra of g_m for m ∈ {0, . . . , M = 15} (in dB). Bold curves: sum of spectra (M_min ∈ {0, 2, 4, 6}). Crosses: eigenvalues of the Molène graph. Bottom: Energy of the atoms. Left: SGWT. Right: Expo. Fig. 3: Local power spectrum (in dB) of 744 realizations of the GMRF model (with a = 1), without (top) and with (middle) normalization. Bottom: PSD estimate with only the local power spectrum verifying m ≥ M_min (Blue: Simple, Black: Ground Truth). Left: SGWT. Right: Expo. Fig. 4: Comparison of PSD estimators on the GMRF model (left) and the temperature dataset (right). Fig. 5: Temperature local PSD (in dB) after preprocessing (top) and local PSD of a model (744 realizations of a WSS graph signal with prescribed PSD equal to an estimate of the temperatures) (middle). Bottom: PSD estimate from truncated local PSD (Blue: Simple). Left: SGWT. Right: Expo. The more complex case of a graph with multiple eigenvalues will be covered in a future paper. We swapped i and j compared to the reference to account for complex graph signals. Published under the title "Données horaires des 55 stations terrestres de la zone Large Molène sur un mois" on http://data.gouv.fr.
26,226
[ "3436", "989485", "1003988" ]
[ "301992", "301992", "301992" ]
01424804
en
[ "info" ]
2024/03/04 23:41:48
2017
https://inria.hal.science/hal-01424804/file/icassp-2017-demo.pdf
Benjamin Girault Shrikanth S Narayanan Antonio Ortega Paulo Gonçalves Éric Fleury GRASP: A MATLAB TOOLBOX FOR GRAPH SIGNAL PROCESSING The GraSP toolbox aims at processing and visualizing graphs and graph signals with ease. In the demo, we show those capabilities using several examples from the literature and from our own experiments. Index Terms— graph signal processing. THE GRASP TOOLBOX The emerging field of graph signal processing aims at studying signals on (possibly) irregular discrete supporting domains. Whereas classical signal processing relies on well-founded mathematical properties of Euclidean spaces, we here rely on the algebraic properties of graphs and the derivation of a graph Fourier transform and harmonic analysis of graph signals found in the literature [START_REF] Shuman | The Emerging Field of Signal Processing on Graphs: Extending High-Dimensional Data Analysis to Networks and Other Irregular Domains[END_REF], where the signal is a function associating a value to each vertex, and edges code relations between those values. The goal of GraSP, for Graph Signal Processing1, is to assemble in a unified framework a series of analysis and visualization tools for graphs and graph signals. Developed for the Matlab environment, a popular choice for the signal processing community, it benefits from its advanced plotting capabilities to yield an efficient and complete toolkit for graph signal processing. Moreover, the code is well documented, versatile, and easy to understand and contribute to. Graph and Plotting Several functions generate known graph structures, from the very simple cycle graph and grid graphs (underlying Euclidean domain), to the Barabási-Albert and Watts-Strogatz graphs (useful for simulation of social networks without communities). Some are weighted, such as the graph of Fig. 1 (Gaussian kernel of the Euclidean distance). Importing and exporting graph structures using CSV files is also available and allows interacting with other systems. This work was supported in part by NSF under grants CCF-1410009, CCF-1527874, CCF-1029373, and by Labex Milyon. This toolbox has been developed by B. Girault during his Ph.D. at École Normale Supérieure de Lyon. 1 The toolbox is free software (GPL-compatible license) available at http://grasp.gforge.inria.fr. Graphs are stored in a Matlab structure holding all pertinent data to efficiently perform graph signal processing. The centerpiece of the toolbox is the function grasp_show_graph, plotting a graph and a graph signal using a color scale (see Fig. 1). This function has been optimized for later modification of the signal, for example in animations. Examples of animations are given with a GUI iterating an operator, and through a function generating a GIF file from a sequence of graph signals (this can be, for example, a time-varying graph signal). Graph Signal Processing GraSP is able to compute the matrix of the graph Fourier transform based on the three different approaches found in the literature: from the standard Laplacian [START_REF] Shuman | The Emerging Field of Signal Processing on Graphs: Extending High-Dimensional Data Analysis to Networks and Other Irregular Domains[END_REF], the normalized Laplacian [START_REF] Shuman | The Emerging Field of Signal Processing on Graphs: Extending High-Dimensional Data Analysis to Networks and Other Irregular Domains[END_REF], or the graph shift [START_REF] Sandryhaila | Discrete Signal Processing on Graphs[END_REF]. Based on this matrix, the user can perform spectrum analysis of signals, spectrum filtering, or filter design, to cite only a few applications. The Matlab code of Fig. 3, which generates the graph and signal of Fig. 1, reads:
% Build a graph and a graph signal
g = grasp_plane_rnd(100);
g.A = grasp_adjacency_thresh(g, 0.01);
g = grasp_eigendecomposition(g);
x = grasp_heat_kernel(g, 2);
LaTeX Packages GraSP includes several LaTeX packages to directly plot graphs and signals within a LaTeX document using only CSV files. This achieves multiple goals. First of all, it allows one to quickly tweak how figures look, without going back and forth between Matlab and LaTeX. Second, it allows one to generate animations in Beamer presentations in a very simple manner, without having to generate each figure of the animation separately. Finally, the packages act as PGF/TikZ macros, effectively allowing one to use the full power of PGF/TikZ to tweak figures with additional graphics. These packages have been used in [START_REF] Girault | Signal Processing on Graphs -Contributions to an Emerging Field[END_REF] to generate all but one figure (the exception being a screenshot of Matlab). DEMO The demonstration shows the capabilities of the toolbox, especially when it comes to plotting. We show how quickly one can start using the toolbox and get results, with several showcases taken from the literature and our experiments. We also show how we build an environment for graph signal processing tools for Matlab with GraSP as a foundation, and how we hope to build a community from that environment. Requirements for the demo are very limited. Ideally, it would be shown on screen, or projected, but the demonstration can also be performed on a laptop. COMPARISON WITH THE STATE OF THE ART This abstract would not be complete without mentioning another toolbox released earlier, but developed during similar time frames: the GSPbox2 [START_REF] Perraudin | GSPBOX: A toolbox for signal processing on graphs[END_REF]. The goals of both toolboxes are identical in the sense that they implement functions to easily perform graph signal processing. GraSP implements two functions to convert the graph structures from one toolbox to the other, making them compatible. We now list several additional functionalities that each toolbox presents. We begin with what GSPbox offers that is missing from GraSP. GSPbox is capable of making clusters within a graph better identifiable (useful when communities are known in advance). It is also better equipped to plot signals and filters in the Fourier domain, with the use of graph frequencies. On the other hand, our toolbox is capable of using the third-party library GraphViz to plot a graph whose vertices do not have 2D coordinates. Several algorithms are available to achieve this (see the documentation of GraphViz). When it comes to plotting, we can also plot values on edges using colors (as opposed to thickness for GSPbox). Graph plotting has been optimized for animation, thus allowing efficient GUIs where signals may change.
We also provide a function to plot matrices where the entries of the matrix appear according to a colormap (instead of black and white for the built-in imshow). Finally, as shown in subsection 1.3, we provide LaTeX packages to draw graphs, signals on graphs, and matrices using LaTeX. Fig. 1: A graph and a low-pass graph signal depicted using Matlab (left) and the LaTeX package (right). Fig. 2: An example of a Matlab GUI taking advantage of the plotting efficiency of the toolbox to visualize the iteration of an operator on a graph signal. Fig. 3: Matlab code to generate Fig. 1. Fig. 4: LaTeX code to generate Fig. 1. https://lts2.epfl.ch/gsp/.
7,423
[ "3436", "989485", "989486", "6491", "3217" ]
[ "301992", "1079003", "301992", "301992", "1079003", "1079001", "1079003" ]
01283866
en
[ "info" ]
2024/03/04 23:41:48
2014
https://hal.science/hal-01283866/file/Liris-6807.pdf
Christian Wolf email: [email protected] Eric Lombardi Julien Mille Oya Celiktutan Mingyuan Jiu Emre Dogan Gonen Eren email: [email protected] Moez Baccouche Emmanuel Dellandréa Charles-Edmond Bichot Christophe Garcia Bülent Sankur Evaluation of video activity localizations integrating quality and quantity measurements Keywords: Performance evaluation, performance metrics, activity recognition and localization, competition à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction and related work Applications such as video surveillance, robotics, source selection, video indexing often require the recognition of actions and activities based on the motion of different actors in a video, for instance, people or vehicles. Certain applications may require assigning activities to one of the predefined classes, while others may focus on the detection of abnormal or infrequent unusual activities. This task is inherently more difficult than more traditional tasks like object recognition in static images, for a number of reasons. Activity recognition requires space-time segmentation and extraction of motion information from the video in addition to the color and texture information. Second, while object appearances in static scenes also vary under imaging conditions such as viewpoint, occlusion, illumination, the variability in the temporal component of human actions is even greater, as camera motion, action length, subject appearance and style must also be taken into account. Finally, the characteristics of human behavior are less well understood. Early work in this area had focused on classification of human activities, and the first works classified videos where one subject performed a single type of action. More recently, research has focused on more realistic and therefore challenging problems involving complex activities, including interactions with objects and/or containing multiple people and multiple activities. Detecting and localizing activities have therefore become as important as their classification. Evaluating detection and localization performance is inherently not straightforward and goes beyond simple measures like classification accuracy. Indeed, evaluation of algorithms for the detection and localization of acting subject(s) within a scene is a non-trivial task. Typically, a detection result is evaluated by comparing the spatial support of the detected entity (a bounding box or a list of bounding boxes corresponding to a region in space-time) with its ground-truth space-time support. The commonly used measures, Recall, Precision and F-Score, must be computed in terms of the overlap proportions of these two supports. However, these measures have a serious limitation: depending on the way they are calculated, they either convey information on (i) the correctly detected proportions of the spatial support of the entity of interest, i.e., a qualitative evaluation, or (ii) the correctly detected proportion of the set of entities, i.e., a number of entities, a quantitative evaluation measure. In other words, quantitative measures relate to the recall and precision figures of activities; qualitative measures relate to how reliably activities are detected, how much of their spatial/temporal supports are recovered. 
It is easy to see that (ii) depends on (i), as the amount of correctly recognized entities depends on the detection quality we require for a recognition to be considered as correct. This paper addresses these issues. The key contributions of the paper are the following: • A new evaluation procedure is proposed for action localization which separately measures detection quality and detection quantity, and which identifies the dependency between these two concepts. • Performance graphs are introduced that show the changes in quantity as a function of quality. The usefulness of these graphs to characterize the behavior of detection and localization algorithms is shown over recent algorithms. Figure 1: Samples frames from one of the videos of the LIRIS / ICPR HARL 2012 dataset, as shot from a camera mounted on a mobile robot. This example contains 3 actions : 2 discussion actions (one on the blackboard, one between two sitting people), and one person typing on a keyboard. Cluttering motion is produced by other people in the background (last row). • A single performance measure is proposed, which integrates out quality constraints and which enables the ranking of different algorithms. • Soft upper bounds for the ranking measure and for the performance graphs are estimated from experimental data containing multiple annotations. • Experiments show that the ranking measure is robust to annotator noise, that is variations among different annotators, while keeping a high discriminative power. • The LIRIS human activities dataset is introduced. It has been designed specifically for the problem of recognizing complex human actions from depth data in a realistic surveillance setting and in an office environment. It has already been used for the ICPR 2012 human activities recognition and localization competition 1 (HARL). Figure 1 shows some example frames from this dataset. • We briefly describe the entry algorithms in the ICPR 2012 1 http://liris.cnrs.fr/harl2012 2 HARL competition and we report the evaluation results of the proposed performance metric2 over these entries, as well as over other baseline algorithms. The rest of this section describes existing related metrics in the literature for activity recognition and the datasets which employ them. In Section 2, our main contributions, namely, the performance metric and the performance graphs are introduced. Section 3 describes the LIRIS / ICPR 2012 HARL dataset, and section 4 illustrates the application of the proposed evaluation metric to the competition entries. Section 5 concludes. Related metrics and datasets Standardized performance metrics and datasets are invaluable for experimental assessment and performance comparisons of different algorithms, to guide the selection of proper solutions in practical applications. Much work has been done in an effort to generate a standard testbed for action detection and recognition systems. Metrics -Arguably the most widely used measures for performance comparison of algorithms and datasets in the computer vision community are (i) Accuracy, as calculated from a confusion matrix, and (ii) Precision, Recall and the resulting F-measure. The former is only applicable to pure classification problems where detection and localization do not come into play. The latter measure both detection and recognition performance, and indirectly the localization performance. However they depend on certain quality constraints where a given detection must be sufficiently reliable in order to be taken into account. 
A measure related to the Precision, Recall and F-measure class is Receiver Operating Characteristics (ROC) curves. These curves plot the true positive rate (related to Recall) versus the false alarm rate (related to Precision) parametrically as a function of the detection threshold. While these curves are very useful to illustrate the behavior of a method's performance over a range of operating parameters, they have two limitations. First, they can only be applied in cases where the evaluated methods can be controlled in some way, or when a confidence measure is available for each detection. Second, ROCs are applicable to binary decision problems. Examples of cases where accuracy was used to reflect classification performance are the early datasets, such as KTH [START_REF] Schuldt | Recognizing human actions: a local svm approach[END_REF], Weizmann [START_REF] Zelnik-Manor | Weizmann eventbased analysis of video[END_REF], Hollywood [START_REF] Laptev | Irisa download data/software[END_REF], Hollywood-2 [START_REF] Laptev | Hollywood2: Human actions and scenes dataset[END_REF], Olympic Sports [START_REF] University | Olympic sports dataset[END_REF] and others. In these datasets, each video corresponds to a single action from some class, which needs to be recognized. Criteria of the Precision, Recall, F-measure variety measure correct detection performance (the number of items detected) in terms of Recall, and false alarm rate (the clutter generated by imprecise detection. ) The earliest attempts for standardized performance evaluation were the Video Analysis and Content Extraction project (VACE) [START_REF] Kasturi | Performance Evaluation Protocol for Text, Face, Hands, Person and Vehicle Detection & Tracking in Video Analysis and Content Extraction (VACE-II)[END_REF] and the Performance Evaluation of Tracking and Surveillance workshop series (PETS) [START_REF] Collins | An open source tracking testbed and evaluation web site[END_REF]. The aim of VACE project was detecting and tracking text, faces and vehicles in video sequences, where two performance metrics were used [START_REF] Kasturi | Framework for performance evaluation of face, text, and vehicle detection and tracking in video: Data, metrics, and protocol[END_REF]: a spatial frame-level measure and a spatio-temporal measure, based on the overlap between the detected object and the ground truth in the space and spatio-temporal domains, respectively. The PETS workshop series focused on object tracking as well as event recognition and crowd analysis. Performance metrics were defined in terms of the number of frames in which the object was tracked, the overlap between bounding boxes and the average chamfer distance. In the same vein, the TRECVid series [START_REF] Smeaton | Evaluation campaigns and trecvid[END_REF] proposed an evaluation protocol based on temporal alignment and the two measures, called Detection Cost Rate (DCR) and Detection Error Tradeoff (DET). While DCR was defined as a linear combination of missed detections and false alarms, the temporal alignment relied on the Hungarian algorithm to find a one-to-one mapping between the system output and ground truth. The ETISEO project (Evaluation du Traitement et de l'Interpretation de Sequences Video) evaluated the results with several criteria amongst which were object localization, object shape quality, tracking time, object ID persistence and object ID confusion. The results were given in the form of ROC curves. 
In the CLEAR project [START_REF] Mostefa | The chil audiovisual corpus for lecture and meeting analysis inside smart rooms[END_REF], the metrics used in VACE were improved by splitting accuracy and localization error into two separate measures for a detailed failure analysis. Finally, a recent survey on the performance evaluation of vision based human activity recognition can be found in [START_REF] Xu | Exploring techniques for vision based human activity recognition: Methods, systems, and evaluation[END_REF]. An interesting special case are action similarity based metrics, a principle introduced for the ASLAN dataset in [START_REF] Kliper-Gross | The action similarity labeling challenge[END_REF]. Instead of assigning each activity to one of a (possibly large) set of classes, pairs of activities taken and classified as same or not same. This approach has several advantages: the ambiguity inherent in partitioning a set into multiple classes is addressed; the test set can contain actions which are very different from the ones in the training set; and, finally, similarity search is an application in itself, for instance in retrieval scenarios. On the other hand, the recognition of a specific activity class may be required for certain applications, as for instance surveillance and user interfaces. A classification problem can of course be solved through similarity learning, as done in [START_REF] Kliper-Gross | The action similarity labeling challenge[END_REF]. However, depending on the specific task, no clear winner can be declared between direct classification and classification through similarity learning. For completeness we also mention a class of related problems, namely detection and recognition of continuous activities. Here, the unit of evaluation is the unsegmented whole video, in which continuous streams of activities can occur. Evaluation of this variant needs metrics adapted to the problem. In [START_REF] Ward | Evaluating performance in continuous context recognition using eventdriven error characterisation[END_REF], a measure is proposed based on alignment. It introduces six different error types: insertion, deletion, merge, fragmentation, underfill and overfill. These errors are consolidated into a segment error table, which can also be visualized in a diagram as a percentage of the total duration [START_REF] Ward | Activity recognition of assembly tasks using body-worn microphones and accelerometers[END_REF]. An improvement of this diagrams makes the performance measures invariant to class skew, as different activities by nature can have different duration times [START_REF] Ward | Performance metrics for activity recognition[END_REF]. In [START_REF] Minnen | Performance metrics and evaluation issues for continuous activity recognition[END_REF], several measures are calculated on different levels: frame-level, event-level and segment-level. In [START_REF] Van Kasteren | Effective performance metrics for evaluating activity recognition methods[END_REF] Recall, Precision and F-Score are calculated from lower level error measures like substitution, occurrence, timing and segmentation. However, these error metrics focus on temporal aspects ignoring the spatial location of activities. All the metrics described above necessarily need somehow to integrate the detection quality measures, which are in our case determined through spatial/temporal overlap of action bounding boxes, to obtain informed quantitative measures of the number of actions detected. 
Datasets -Human activity recognition in videos has a wide range of application areas such as biometrics, contentbased video analysis, security and surveillance, humancomputer interaction, forensics, and ambient intelligence. These different focuses have spawned several different types of datasets. The available datasets have been extensively studied in a recent survey [START_REF] Chaquet | A survey of video datasets for human action and activity recognition[END_REF]. Here, we only recall the most prominent ones. The earliest datasets have focused on simple periodic actions, e.g., running, walking, boxing, hand-clapping etc. with usually uniform background and static camera. Each video sequence included a single person performing only one action. Typical examples are the KTH dataset [START_REF] Schuldt | Recognizing human actions: a local svm approach[END_REF] and the Weizmann dataset [START_REF] Zelnik-Manor | Weizmann eventbased analysis of video[END_REF]. Presently, these datasets seem to be saturated in that the performances of the most recent methods reached or approach 100% accuracy. More complex actions and cluttered and dynamic backgrounds are part of the CAVIAR [START_REF]Caviar: Context aware vision using image-based active recognition[END_REF], ETISEO [START_REF]Etiseo video understanding evaluation[END_REF], UIUC [START_REF] Tran | Human activity recognition with metric learning[END_REF] and MSR [START_REF] Yuan | Discriminative video pattern search for efficient action detection[END_REF] action datasets, where the recordings took place in shopping centers, hallways, metro stations or in streets. More realistic datasets include videos of a series of actions or concurrent actions performed by one or more person. These activities or events are closer to the ones in real-world scenes and are generally collected for surveillance purposes. In this context, sample datasets that focus person-person interaction are CAVIAR [START_REF]Caviar: Context aware vision using image-based active recognition[END_REF], BEHAVE [START_REF] Fisher | Behave: Computer-assisted prescreening of video streams for unusual activities[END_REF], CASIA [START_REF] Biometrics | Casia action database for recognition[END_REF], i3DPost [START_REF] Surrey | CERTH-ITI, i3dpost multi-view human action datasets[END_REF], TV Human Interactions [START_REF] Group | Tv humacn interactions dataset[END_REF], UT-Interaction [START_REF] Ryoo | Ut-interaction dataset, icpr contest on semantic description of human activities (sdha)[END_REF], VideoWeb [START_REF] Group | Videoweb dataset[END_REF] datasets. Several datasets feature crowd behavior, for instance PETS 2009 [START_REF]Reading University Computational Vision Group[END_REF], ETISEO [START_REF]Etiseo video understanding evaluation[END_REF], or group activities, for instance BEHAVE [START_REF] Fisher | Behave: Computer-assisted prescreening of video streams for unusual activities[END_REF] and Collective Activity [START_REF] Choi | What are they doing? : Collective activity classification using spatio-temporal relationship among people[END_REF]. Personobject interactions were addressed by CASIA [START_REF] Biometrics | Casia action database for recognition[END_REF], where the object can be a car, door, telephone, baggage etc. 
Finally, daily activities in a natural kitchen environment are dealt by the University of Rochester Activities of Daily Living Dataset [START_REF] Messing | Activity recognition using the velocity histories of tracked keypoints[END_REF] and the relatively more challenging TUM Kitchen [START_REF] Tenorth | The tum kitchen data set of everyday manipulation activities for motion tracking and action recognition[END_REF] dataset. Multi-view datasets include several simultaneous views for each scene: BEHAVE [START_REF] Fisher | Behave: Computer-assisted prescreening of video streams for unusual activities[END_REF], CASIA [START_REF] Biometrics | Casia action database for recognition[END_REF], CAVIAR [START_REF]Caviar: Context aware vision using image-based active recognition[END_REF], ETISEO, [START_REF]Etiseo video understanding evaluation[END_REF], IXMAS [START_REF]Inria xmas motion acquisition sequences (ixmas)[END_REF], i3DPost [START_REF] Surrey | CERTH-ITI, i3dpost multi-view human action datasets[END_REF], MuHaVi [START_REF] University | Muhavi: Multicamera human action video data[END_REF], UCF-ARG [START_REF]Ucf aerial camera, rooftop camera and ground camera dataset[END_REF], VideoWeb [START_REF] Group | Videoweb dataset[END_REF] and Multiple Cameras Fall [START_REF] Auvinet | Multiple cameras fall dataset[END_REF]. Aerial views are handled by UCF Aerial [START_REF]Ucf aerial action dataset[END_REF] and UCF-ARG [START_REF]Ucf aerial camera, rooftop camera and ground camera dataset[END_REF]. Many datasets can be defined as "controlled" in that they are collected within the framework of a defined experimental setup. Uncontrolled databases, on the other hand, are collected without any constraints, and they are appropriately called sometimes "actions in the wild". Recently, datasets collected from Youtube, dailymotion and broadcast television channels, and movies have aroused a lot of interest. First, because they provide more realistic and challenging scenes, and second, due to the huge amount of web sources in contrast to the laborious process of building controlled databases. These datasets exhibit much larger variability as compared to the controlled datasets in their background, camera view angle, camera motion, resolution, illumination, environmental conditions, etc. and also include confounding factors such as randomness in the action rate, style, posture and clothing of the subjects. Prominent examples of wild datasets are ASLAN [START_REF] Kliper-Gross | The action similarity labeling challenge[END_REF], BE-HAVE [START_REF] Fisher | Behave: Computer-assisted prescreening of video streams for unusual activities[END_REF], HMDB51 [START_REF] Lab | Hmdb: A large video database for human motion recognition[END_REF], Hollywood [START_REF] Laptev | Irisa download data/software[END_REF], Hollywood-2 [START_REF] Laptev | Hollywood2: Human actions and scenes dataset[END_REF], Olympic Sports [START_REF] University | Olympic sports dataset[END_REF], TV Human Interaction [START_REF] Group | Tv humacn interactions dataset[END_REF], UCF Youtube [START_REF]Ucf youtube action dataset[END_REF], UCF Sports, UCF 50 [START_REF]Ucf sports action dataset[END_REF] and UCF 101 [START_REF] Soomro | THUMOS: The First International Workshop on Action Recognition with a Large Number of Classes[END_REF]. The recent introduction of low-cost depth cameras, e.g., Microsoft Kinect, Asus Xtion, Primesense Carmine and Capri, has created wide spread interest in activity recognition from depth sequences. 
Depth data potentially mitigates the limitations encountered in the presence of uncontrolled lighting, camera view variations, camera motion and complex colored backgrounds etc. Processing data in 3D also makes alternative representations possible, based on depth maps or point clouds. The downside is that the current technology only allows detecting objects within a short distance to the depth sensor, i.e., it is reliable within 3-4 meters. There is a considerable amount of publicly available 3D datasets, so-called "RGB-D" or "multi-modal" datasets, in the literature. Among these one can mention MSR Gesture 3D [START_REF]Msr action recognition datasets and codes[END_REF], MSRC-12 Kinect Gesture Dataset [START_REF] Fothergill | Instructing people for training gestural interactive systems[END_REF] and the ChaLearn Gesture Dataset [START_REF] Chalearn | Chalearn gesture dataset (cgd2011)[END_REF]. The latter focuses on gesture recognition and body sign language understanding. Basic actions such as jumping, hand clapping, stand up etc. are handled in the Berkeley Multimodal Human Action Database (MHAD) [START_REF] Ofli | Berkeley mhad: A comprehensive multimodal human action database[END_REF] as well as in the Flo-rence3D Dataset [START_REF] Seidenari | Recognizing actions from depth cameras as weakly aligned multi-part bag-of-poses[END_REF]. The recognition of daily activities is addressed in the Cornell Human Activities dataset [START_REF] Sung | Cornell activity datasets[END_REF], the RGBD-HuDaAct dataset [START_REF] Ni | Rgbd-hudaact: A color-depth video database for human daily activity recognition[END_REF] and the MSR Daily Activity 3D dataset [START_REF]Msr action recognition datasets and codes[END_REF]. Finally, person-person interactions are provided in the SBU-Kinect-Interaction dataset [START_REF] Yun | 3rd International Workshop on Human Activity Understanding from 3D Data (HAU3D-CVPRW)[END_REF]. Up to our knowledge, only two datasets exist, which contain spatial annotations in form of bounding boxes, and which are activity recognition datasets (as opposed to object tracking datasets with event detection components, like the afore mentioned PETS [START_REF] Collins | An open source tracking testbed and evaluation web site[END_REF] series and others). These two datasets are the Hollywood Localization Dataset (HLD [START_REF] Kläser | Human focused action localization in video[END_REF] and the Coffee and Cigarettes Dataset (CC) [START_REF] Laptev | Retrieving actions in movies[END_REF]. They both contain the starting and end frame number, as well as a single bounding box for a single frame of each activity. For the HLD it is the middle frame, whereas for the CC dataset it is the frame where the hand touches the head in the drinking and smoking activities. The limited spatial information is sufficient for the activities targeted by the two datasets. However, in our targeted and more complex scenarios, people move and the camera may move. In this configuration, continuous localization is important. 
The LIRIS Human Activities dataset described in this paper addresses and combines several issues, providing a realistic and complex dataset featuring the following aspects and degrees of difficulty: (i) multi-modality, since both RGB and D channels are available; (ii) human-human interactions, humanobject interactions and human-human-object interactions; (iii) a moving camera installed on a mobile robot; (iv) similar action classes which require integration of context; (v) full localization information with bounding boxes available for each individual frame of each activity. The performance metric We propose a new performance metric for algorithms that detect and recognize complex activities in realistic environments. The goals of these algorithms are: • To detect relevant human behavior in midst of motion clutter originating from unrelated background activity, e.g., other people walking past the scene or other irrelevant actions; • To recognize detected actions among the given action classes; • To localize actions temporally and spatially; • To be able to manage multiple actions in the scene occurring in parallel in space and in time. The ground truth data has been annotated by marking labeled bounding boxes in each frame of each action. In particular, we assume that the ground truth annotation has segmented action occurrences, grouping all frames and bounding boxes of any one action. In other words, an action consists of a list of bounding boxes, where each bounding box corresponds to a frame. Actions consist of consecutive frames, and no frame drops are allowed in the sequence. Detection results are assumed to be in the same format. This makes it possible to provide more meaningful Recall and Precision values -indeed, a Recall of 90% is easier to interpret if it precisely tells us that 90% of the actions have been correctly detected. Without this segmentation, performance measures would need to be computed on frame level and therefore would be ambiguous. In absence of segmented activities, the example of a Recall of 90% on frame level could be interpreted as anything of the following possibilities : • 90% of the action bounding boxes have been correctly detected on 100% of the activities; a very unlikely case; • 100% of the action bounding boxes have been correctly detected on 90% of the activities, a very unlikely case; • a mixture between the first two cases; this is the general case. The goal of the evaluation scheme is to measure a match between the annotated ground-truth and the outcome of an algorithm, i.e., between: • A list G of ground truth actions G v,a , where G v,a corresponds to the a th action in the v th video and where each action consists of a set of bounding boxes G v,a b marked with one and the same class. • A list D of detected actions D v,a , where D v,a corresponds to the a th action in the v th video and where each action consists of a set of bounding boxes D v,a b marked with one and the same class. The objective is to measure the degree of similarity between the two lists. The measure should penalize two aspects, first, information loss, which occurs if whole actions or their spatial or temporal parts of actions have not been detected, and second, information clutter due to false alarms or bounding box detections which are in excess of the ground-truth. 
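To fix ideas on this input format, here is a minimal Python sketch of one possible in-memory representation of such lists of localized actions; the class and field names are our own illustrative choices and do not describe the competition's actual file format.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

BBox = Tuple[int, int, int, int]          # (x, y, width, height) in pixels

@dataclass
class Action:
    label: str                            # action class
    boxes: Dict[int, BBox] = field(default_factory=dict)  # frame index -> bounding box

    def frames(self) -> List[int]:
        return sorted(self.boxes)         # consecutive frames, no drops allowed

# One list of actions per video, for ground truth (G) and detections (D)
Video = List[Action]
GroundTruth = List[Video]
Detections = List[Video]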
The proposed measure is inspired by a similarity measure used for object recognition in images [START_REF] Wolf | Object count/Area Graphs for the Evaluation of Object Detection and Segmentation Algorithms[END_REF], and is designed to satisfy the following goals: 1. The metric should provide a quantitative evaluation: it should indicate how many actions have been detected correctly, and how many false alarms have been created. 2. The metric should provide an indication of the quality of detection and should be easily interpretable. The two goals, namely, to be able to determine the number of actions in the scene, and to be able to measure their detection quality, are interrelated. Indeed, the number of actions we consider as detected depends on the quality threshold which we impose on any action in order for it to be considered as detected. A natural way to combine these two goals is first described briefly below, and then formalized in more detail in the rest of this section: the traditional measures, Precision and Recall, quantitative measures of detection performance, form the basis of the proposed metric. In our formulation, we employ these measures with two types of threshold that gauge the amount of overlap between a ground truth action and a detected action: 1. A threshold on pixel-level recall, which specifies the amount of overlap between the area of the detected action and the area of the ground-truth action; 2. A threshold on pixel-level precision, which specifies how much spurious detected area (not part of the ground truth) is allowed. The plots of precision and recall, which depend upon the quality parameters, i.e., the thresholds, visually describe the interrelationship of quantitative and qualitative aspects of an algorithm. These are similar to the performance graphs used in [START_REF] Wolf | Object count/Area Graphs for the Evaluation of Object Detection and Segmentation Algorithms[END_REF], which relate Recall and Precision to quality thresholds. Precision and Recall for localized activities The first measure, Recall, describes the number of correctly detected action occurrences with respect to the total number of action occurrences in the dataset. The second measure, Precision, penalizes false alarms, by measuring the proportion of correctly detected actions within the total number of detected actions: Recall(G, D) = (Number of correctly detected actions) / (Number of ground-truth actions), Precision(G, D) = (Number of correctly detected actions) / (Number of detected actions), (1) where G denotes the ground-truths and D the detections. In order to get a single measure, these measures are combined into the traditional F-score [START_REF] Van Rijsbergen | Information Retrieval[END_REF]. The rationale of considering the harmonic mean of precision and recall is that the smaller of the two performance values is emphasized: F-Score(G, D) = 2 · Precision(G, D) · Recall(G, D) / (Precision(G, D) + Recall(G, D)). (2) In our modified version, these criteria involve thresholds that qualify if and when an action can be considered as detected. Thus we can gauge how close the detected bounding boxes need to be to the ground-truth bounding boxes, and how close the detected temporal duration of an action needs to be to the actual duration in the ground truth. Other imperfections, such as multiple detections for a single ground truth action, can similarly be handled.
An intuitive way to express Recall and Precision in terms of matched detections is as follows: Recall(G, D) = (Σ_v Σ_a 1[G^{v,a} finds a match in D^v]) / (Σ_v |G^v|), Precision(G, D) = (Σ_v Σ_a 1[D^{v,a} finds a match in G^v]) / (Σ_v |D^v|) (3) where 1[ω] is the indicator function returning 1 if condition ω holds and 0 otherwise; v is a video index and a an activity index. Notice that both measures search for a match in a corresponding action list: Recall requires matching of each action in the ground truth to one of the actions in the detection list, whereas Precision requires matching of each action in the detected list to one of the actions in the ground-truth list. This is done in two steps by defining first the two functions β(a, S) and Υ(g, d): • For a given action a, β(a, S) gives the best match in the set S of actions, which can be detected actions or ground-truth actions; • For a pair of ground-truth action g and detected action d, Υ(g, d) determines whether the match between g and d satisfies our criteria on geometric and temporal overlaps. At this stage Υ can veto a match if it is of poor quality. The definitions in (3) can thus be refined as follows: Recall(G, D) = (Σ_v Σ_a 1[Υ(G^{v,a}, β(G^{v,a}, D^v))]) / (Σ_v |G^v|), Precision(G, D) = (Σ_v Σ_a 1[Υ(β(D^{v,a}, G^v), D^{v,a})]) / (Σ_v |D^v|) (4) Qualifying the best match β(a, S) is done by maximizing the normalized overlap O between two actions a and b over all respective frames, where O is defined as the Sørensen-Dice coefficient: O(a, b) = 2 • Area(a ∩ b) / (Area(a) + Area(b)) if Class(a) = Class(b), and 0 otherwise (5) Here Area(a) is the sum of the areas of the bounding boxes of action a and ∩ is the intersection operator returning the overlap of two actions. The overlap is calculated frame-wise and summed over all frames. More formally, for any given video v, the following two conditions hold: ∀ a ≠ a': β(G^{v,a}, D^v) ≠ β(G^{v,a'}, D^v) and ∀ a ≠ a': β(D^{v,a}, G^v) ≠ β(D^{v,a'}, G^v) (6) As a consequence, calculating β(a, S) maximizes the normalized overlap O as defined in (5) subject to constraints (6). These constraints preclude the matching of a single ground-truth action to multiple detected actions, and vice-versa. This maximization is made in a greedy way: O is calculated for all possible pairs (G^{v,a}, D^{v,a'}) and then the maximum value is searched, and the assignment chosen for the corresponding actions a and a'. These actions are then removed from their respective lists, and the algorithm proceeds iteratively searching for the next best match, since the video may contain more than one action type. Υ(g, d) decides whether a pair of ground truth action g and detected action d are sufficiently matched based on four criteria, two of which are spatial and two temporal. Here we have used a simplified notation by denoting the ground-truth action as g = G^{v,a} and the detected action as d = D^{v,a'}. We first describe these criteria intuitively and then formalize them in equation (7).
A detected action d can be matched to a ground truth action g if all of the following criteria are satisfied: Sufficient temporal frame-wise recall - the number of frames which are part of both actions is above an adequate proportion of the number of frames in the ground-truth set, i.e., a sufficiently long duration of the action has been correctly found; Sufficient temporal frame-wise precision - the number of frames which are part of both actions is above an adequate proportion of the number of frames in the detected set, i.e., the detected excess duration is small enough; Sufficient spatial pixel-wise recall - the size of the common areas between the bounding boxes is large enough with respect to the size of the bounding boxes in the ground-truth set, i.e., a sufficiently large portion of the ground truth rectangles is correctly found. In order to ignore temporal differences, this calculation is done frame-wise and includes only frames which are part of both actions, d and g; Sufficient spatial pixel-wise precision - the size of the common areas between the bounding boxes is large enough with respect to the size of the bounding boxes in the detected set, i.e., the space detected in excess is sufficiently small. In order to ignore temporal differences, this calculation is done frame-wise and includes only frames which are part of both actions, d and g; Correct classification - d and g have the same action class. We denote by d|_g the set of bounding boxes of the detected action d restricted to the frames which are also part of ground-truth action g, and analogously by g|_d the bounding boxes of g restricted to the frames of d. Then, the above criteria can be expressed as: Υ(g, d) = 1 if Area(g ∩ d) / Area(g|_d) > t_sr and Area(g ∩ d) / Area(d|_g) > t_sp and NoFrames(g ∩ d) / NoFrames(g) > t_tr and NoFrames(g ∩ d) / NoFrames(d) > t_tp and Class(g) = Class(d), and 0 otherwise (7) where NoFrames(a) is the number of frames in set a. The decision whether the two actions g and d are correctly matched therefore depends on the threshold values t_sr, t_sp, t_tr, t_tp, which threshold, respectively, spatial pixel-wise recall, spatial pixel-wise precision, temporal frame-wise recall, and temporal frame-wise precision. Quantity/Quality plots We have put forward the necessity of considering the quality of detection together with the quantity of detection as an inherent property of any method that assesses algorithms. In our work, the quantity-quality interrelationship manifests itself through the dependence of Recall and Precision on the thresholds t_sr, t_sp, t_tr and t_tp. For this reason, an integral part of the proposed performance evaluation framework is a set of graphs which illustrate this dependence, similar to the graphs proposed in [START_REF] Wolf | Object count/Area Graphs for the Evaluation of Object Detection and Segmentation Algorithms[END_REF]. For each algorithm to be assessed, a number of diagrams are created, each one showing the performance as a function of one of the quality measures, that is, dependence on one of the thresholds. The performance graphs are produced by varying one threshold (assigned to the x-axis) in the interval [0, 1], while the other three thresholds are kept at fixed lowest reasonable values, and plotting Recall, Precision and F-Score on the y-axis of the graphs. This results in 4 graphs, each containing 3 curves. Figures 7 and 8 in the experimental part (section 4) show examples of graphs obtained this way. These can be easily interpreted as recognition performance versus detection quality curves. Section 4.4 gives more details on how to read these diagrams based on examples of actual detection methods.
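The matching and acceptance test of equations (5)-(7) can be sketched in Python as follows, using the per-frame dictionary layout shown earlier (a "label" and a "boxes" mapping from frame index to an (x1, y1, x2, y2) rectangle); the function names are illustrative and not taken from the reference implementation.

def box_area(b):
    return max(0, b[2] - b[0]) * max(0, b[3] - b[1])

def box_intersection(b1, b2):
    w = min(b1[2], b2[2]) - max(b1[0], b2[0])
    h = min(b1[3], b2[3]) - max(b1[1], b2[1])
    return max(0, w) * max(0, h)

def overlap(g, d):
    """Equation (5): Sorensen-Dice overlap of two actions, summed over all frames."""
    if g["label"] != d["label"]:
        return 0.0
    common = set(g["boxes"]) & set(d["boxes"])
    inter = sum(box_intersection(g["boxes"][f], d["boxes"][f]) for f in common)
    total = sum(box_area(b) for b in g["boxes"].values()) + sum(box_area(b) for b in d["boxes"].values())
    return 2.0 * inter / total if total else 0.0

def greedy_match(gts, dets):
    """One-to-one assignment maximizing (5) under constraints (6), found greedily."""
    pairs, free_g, free_d = [], set(range(len(gts))), set(range(len(dets)))
    while free_g and free_d:
        score, i, j = max((overlap(gts[i], dets[j]), i, j) for i in free_g for j in free_d)
        if score <= 0.0:
            break
        pairs.append((i, j))
        free_g.remove(i)
        free_d.remove(j)
    return pairs

def accepted(g, d, t_sr, t_sp, t_tr, t_tp):
    """Equation (7): the four threshold tests plus class equality."""
    common = set(g["boxes"]) & set(d["boxes"])
    if not common or g["label"] != d["label"]:
        return False
    inter = sum(box_intersection(g["boxes"][f], d["boxes"][f]) for f in common)
    area_g = sum(box_area(g["boxes"][f]) for f in common)   # g restricted to frames shared with d
    area_d = sum(box_area(d["boxes"][f]) for f in common)   # d restricted to frames shared with g
    if area_g == 0 or area_d == 0:
        return False
    return (inter / area_g > t_sr and inter / area_d > t_sp and
            len(common) / len(g["boxes"]) > t_tr and
            len(common) / len(d["boxes"]) > t_tp)

The greedy loop mirrors the iterative best-match search described above; an optimal assignment could be substituted, but the metric itself only requires the one-to-one constraints of (6).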
Figure 2a shows a toy example involving an action of type Discussion covering a single frame. The ground truth bounding box, in blue, is labeled "1". Two different detection methods have been applied, resulting in two bounding boxes, in red, labeled "2" and "3", respectively. Since we are dealing with a single frame, temporal thresholds cannot be applied. The graphs for varying spatial thresholds are shown in Figures 2b and 2c, respectively. In this simple example, the Recall, Precision and F-Score graphs collapse into one, since the measures are 1 if the bounding box is considered as detected, and 0 otherwise. Since the bounding box "2" resulting from one of the methods is completely included in the ground-truth bounding box, varying the threshold t_sp (the required spatial pixel-wise precision) does not change the result: in fact, even t_sp = 1, which does not allow for any non-ground-truth pixels, still considers the bounding box to be correctly detected. Varying t_sr (the required spatial pixel-wise recall), on the other hand, results in a drop of performance at roughly t_sr = 0.25. In other words, once we require more than 1/4 of the ground truth bounding box to be detected, the bounding box "2" is not considered as detected anymore. This corresponds to what we observe in Figure 2a. Bounding box "3", produced by the other method, has only a small overlap with the ground-truth bounding box. Only a small part of the ground truth bounding box is detected, and in addition algorithm "3" produces a large spurious area. Consequently, we see an early drop in performance when varying either of the thresholds t_sr and t_sp. Ranking Ranking a set of detection algorithms according to a single performance measure should minimize the dependence on external parameters of the performance metric. This can be achieved by basing the final performance measure on an integration of the F-Score measure over the whole range of possible threshold values. In particular, four measures are created, each one measuring the performance while one of the thresholds is varied and the other ones are kept fixed at a very low value (generally ε = 0.1). In the following, we denote by F(t_sr, t_sp, t_tr, t_tp) the F-Score of equation (2) depending on the quality constraints. We get 4 integrals, each one showing the performance averaged over the range of one threshold while the three remaining variables (in both Precision and Recall) must satisfy a minimum quality level, their thresholds being set to a reasonably small value ε (we choose ε = 0.1). Thus, we get: I_sr = ∫_0^1 F(u, ε, ε, ε) du, I_sp = ∫_0^1 F(ε, u, ε, ε) du, I_tr = ∫_0^1 F(ε, ε, u, ε) du, I_tp = ∫_0^1 F(ε, ε, ε, u) du (8) In practice, we sample the Precision and Recall values in small steps and find the integrals numerically. The final value used for ranking is the mean over these four values: IntegratedPerformance = (1/4) (I_sr + I_sp + I_tr + I_tp) (9) This integrated performance measure relates to the areas under the curves in the graphs described in section 2.2. In section 4 it will be experimentally shown that this measure is quite invariant to changes in annotation styles.
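A rough numerical version of the integrals in equation (8) and the ranking value of equation (9), assuming a function f_score(t_sr, t_sp, t_tr, t_tp) that evaluates equation (2) for a given set of thresholds (the sampling step and all names are ours):

def integrated_performance(f_score, eps=0.1, step=0.05):
    """Equations (8)-(9): average the F-Score over each threshold in turn, the others fixed at eps."""
    ts = [i * step for i in range(int(round(1.0 / step)) + 1)]   # samples of [0, 1]
    i_sr = sum(f_score(t, eps, eps, eps) for t in ts) / len(ts)
    i_sp = sum(f_score(eps, t, eps, eps) for t in ts) / len(ts)
    i_tr = sum(f_score(eps, eps, t, eps) for t in ts) / len(ts)
    i_tp = sum(f_score(eps, eps, eps, t) for t in ts) / len(ts)
    return (i_sr + i_sp + i_tr + i_tp) / 4.0

# Toy example: an algorithm whose F-Score decays linearly as any threshold is raised.
print(integrated_performance(lambda sr, sp, tr, tp: max(0.0, 1.0 - sr - sp - tr - tp)))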
Confusion matrices The goal of the proposed performance metric is to go beyond classification, as the evaluated vision tasks also require detection and localization. However, it might be interesting to complete the traditional precision and recall measures with a confusion matrix which illustrates the pure classification performance of the evaluated methods. This can be done easily by associating a detected action with each ground-truth action using equations (5) and (7), while removing the class equality constraint from (7). The resulting pairs of ground-truth and detected actions can be used to calculate a confusion matrix (see figure 10 for examples). Note that the confusion matrix ignores actions which have not been detected, i.e., actions with no decision outcome. Therefore, unlike in classification tasks, the recognition rate (accuracy) cannot be determined from its diagonal. For this reason the confusion matrix must be accompanied by precision and recall values. The LIRIS / ICPR 2012 HARL dataset The LIRIS human activities dataset has been designed for recognizing complex and realistic actions in a set of videos, where each video may contain one or more actions concurrently. Table 1 shows the list of actions to be recognized. Some of them are interactions between two or more humans, like discussion, giving an item etc. Other actions are characterized as interactions between humans and objects, for instance talking on a telephone, leaving baggage unattended etc. Note that simple "actions" such as walking and running are not part of the events to be detected. The dataset therefore contains motion which is not necessarily relevant for the tasks at hand. The dataset is publicly available online on a dedicated web site (http://liris.cnrs.fr/voir/activities-dataset). It is organized into two different and independent sets, shot with two different cameras: D1/robot-kinect The videos of this set have been shot using a mobile robot of model Pekee II, shown in figure 3. During the tests the robot was controlled manually through a joystick. It was equipped with a consumer depth camera of type Primesense/MS Kinect, which delivers color images as well as 11-bit depth images, both at a spatial resolution of 640×480 pixels, at 25 frames per second (see figures 4a-c). In the proposed dataset the RGB information has been converted to grayscale. The Kinect module has been calibrated; the calibration information and its software are provided, allowing users to calculate the coordinates in the grayscale image for each pixel of the corresponding depth image. D2/fixed-camcorder The videos of this set have been shot with a consumer camcorder (a Sony DCR-HC51) mounted on a tripod. The camera was fixed (zero egomotion), and the videos have been shot at a spatial resolution of 720×576 pixels at 25 frames per second (see figure 4d). The two sets D1 and D2 are NOT completely independent, as most of the D2 videos are shots of the same scenes captured in D1, taken from a different viewpoint. Care has been taken to ensure that the dataset is as realistic as possible: • The actions have been performed by a group of 21 different people. • The actions have been shot from various viewpoints and in different settings to avoid the possibility of learning actions from background features. • Correlation between camera motion and activities has been avoided.
In order to make the dataset more challenging than previous datasets, the actions are less focused on low-level characteristics and defined more by semantics and context: • The discussion action can take place anywhere, either by people standing in some room or in an aisle without any support, or in front of a whiteboard or blackboard, or by people sitting on chairs. • The action enter or leave a room can involve opening a door and passing through or passing through an already open door. • Three actions involve very similar motion, the difference being the context : entering a room, unlocking a door and then entering a room and trying to enter a room without being able to open the door. • The action of an item being picked up or put down (into/from a box, drawer, desk etc.) is very similar to the action of a person leaving a baggage unattended (drop and leave), as both involve very similar human-object interactions. The difference is mainly defined through the context. • We took care to use different telephones in the action telephone conversation: classical office telephones, cell phones, wall mounted phones. • Actions like handshaking and giving an item can occur before, after or in the middle of other actions like discussion, typing on a keyboard etc. The acquisition conditions have not been artificially improved, which means that the following additional difficulties are present in the dataset : • Non-uniform lighting and lighting changes when doors open and close • The Kinect camera's gain control is rather slow compared to other cameras. This is not the case for the Sony camcorder. • The depth data delivered by the Kinect camera is disturbed by transparencies like windows etc. This is due to the data acquisition method (shape from structured light). • The data taken with the mobile robot is subject to vibrations when the robot accelerates or slows down. This reflects realistic conditions in a mobile robotics environment. The full data set contains 828 actions (subsets D1 and D2) by 21 different people. Each video may contain one or several people performing one or several actions. Example images for the different activity classes are shown in figure 5. All actions are localized in time and space, and ground truth bounding boxes are provided. Figure 6 shows a frame with annotated bounding boxes in a screen shot of the annotation/viewing tool provided with the dataset. Each video has been annotated by one of 10 annotators, and then verified by a different annotator to keep the annotations as coherent as possible. Results of the ICPR 2012 HARL competition The proposed performance metric was tested on six different detection and recognition algorithms. Four methods correspond to submissions of the ICPR 2012 HARL competition, which was held in conjunction with the International Conference on Pattern Recognition 2012. Two additional methods have been applied to the same dataset. The HARL competition took place during roughly 12 months from October 2011 to October 2012. The video frames of the competition dataset (described in section 3) were published in October 2011 and the ground-truth annotations of the training set were released in December 2011. The participants had 7 months to develop and train their system. In mid July 2012 annotations of the test set were published and in September 2012 the results had to be submitted to the competition committee. A special session dedicated to the HARL competition was held during the ICPR conference in November 2012. 
The competition has attracted great interest: 70 teams from all over the world registered for the competition and downloaded the dataset, which appeared to be more difficult than existing datasets at that time, as anticipated. The task of not only classifying, but also locating activities in space and in time is still a hard one. Four teams finally managed to solve the problem and to submit their results. We distinguished the six methods in the following way: the four participations were identified by their participation numbers 13, 49, 51 and 59, and the two additional methods were identified by letters A and B. Evaluated methods HARL participant No. 13 (subset D2) Participating team No. 13 came from Spain and submitted a run for dataset D2 (Sony color frames): Juan C. SanMiguel and Sergio Suja, Video Processing and Understanding Lab, Universidad Autonoma of Madrid, Spain. The description of the submitted method has not been disclosed for reasons related to the protection of intellectual property rights. HARL participant No. 49 (subset D1) Participating team No. 49 was a collaboration between institutions in Singapore and a US university, and they submitted a run for dataset D1 (Kinect frames). The submitted method (published in [START_REF] Ni | Integrating multi stage depth induced contextual information for human action recognition and localization[END_REF]) uses low-level features and mid-level features calculated on detected and tracked people using [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF] and detected objects specific to the dataset (doors, mailboxes etc.). The following features were calculated: (i) pose and appearance information on people and on objects; (ii) geometric contextual attributes on pairs of neighboring items, where each item may be a person or an object; (iii) scene type attributes obtained by clustering depth histograms. These heterogeneous features were integrated through a Bayesian network learned from the training set. HARL participant No. 51 (subset D2) Participating team No. 51 came from China and submitted a run for dataset D2 (Sony color frames): Yonghao He, Hao Liu, Wei Sui, Shiming Xiang and Chunhong Pan, Institute of Automation, Chinese Academy of Sciences, Beijing. The submitted method is an adaptation of existing work in [START_REF] Yuan | Discriminative subvolume search for efficient action detection[END_REF]. Space-time interest points were extracted and HoG-HoF (histogram of gradients and histogram of motion flow) features were extracted from the local patches [START_REF] Laptev | Learning realistic human actions from movies[END_REF]. Then 10 one-against-all SVM classifiers were trained for the 10 activity categories. Activities were detected and localized by shifting sub-volumes over the video and maximizing mutual information, as in [START_REF] Yuan | Discriminative subvolume search for efficient action detection[END_REF]. Adaptations concerned the reduction of search boundaries when maximizing mutual information and the calculation of proper step widths, which significantly increased performance compared with the original brute-force search. HARL participant No. 59 (subset D1) Participating team No. 59 came from India and was an academic / industrial collaboration. They submitted a run for dataset D1 (Kinect frames): Tanushyam Chattopadhyay, Sangheeta Roy and Aniruddha Sinha, Innovation Lab, Tata Consultancy Services, Kolkata; Dipti Prasad Mukherjee and Apurbaa Mallik, Indian Statistical Institute, Kolkata. The submitted method detects and classifies actions in the dataset, but does not localize them.
It segments and extracts "interesting" moving objects in the scene based on motion and entropy. HoF features were calculated according to [START_REF] Mukherjee | Recognizing human action at a distance in video by key poses[END_REF] in a hierarchical way using a pyramid. Method A (subset D1) To create a baseline, we calculated 3D (depth) features proposed in [START_REF] Kläser | A Spatio-Temporal Descriptor Based on 3D-Gradients[END_REF] and extracted them with dense sampling on sliding cuboids. Bounding boxes were estimated on the test set using the same pre-processing step as method B [START_REF] Ni | Multi-level depth and image fusion for human activity detection[END_REF]: people were tracked using the Dalal/Triggs detector [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], and candidate bounding boxes were created based on pairwise combinations of tracklets. From the acquired features, codebooks were trained through k-means clustering, and videos were represented as bags of words (BoW) on space-time sliding windows. For the recognition part, an SVM classifier was trained. Activities were detected by including a no-action class in the classifier, which was trained through bootstrapping. Method B (subset D1) Method B (published in [START_REF] Ni | Multi-level depth and image fusion for human activity detection[END_REF]) shares the features and the preprocessing steps with the winning entry of the HARL competition, entry No. 49. In particular, tracklets are created by the Dalal/Triggs detector and combined into larger bounding boxes. Instead of learning a belief network, each activity is modelled as a deformable parts model in the spirit of [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF] and learned using structured SVM. Results without localization Table 2 shows a preliminary evaluation of Recall and Precision values calculated according to equation (1) but without any localization information, i.e., ignoring the bounding box related information in the ground truth. Half of the participants submitted results for dataset D1 (Kinect) and the other half submitted runs for dataset D2 (color frames). The two additional methods were applied to dataset D1. Results on the two datasets cannot be directly compared, of course. As expected, the obtained results are better on the Kinect data than on the color frames, since the depth data is richer in information as compared to color data for these scenes. We consider a recall rate of 74% an excellent result for this difficult dataset with high intra-class variations. A precision value of 41% indicates that, roughly, for each correctly detected activity, a second incorrect activity has been detected. Note that no confusion matrices can be given if localization is not used. Methods 49 and B obtained the same performance in this setting, as the difference lies in the way activities are localized. Results with localization Table 3 gives performance measures which do use localization information from ground truth and detection. For a first experiment, all quality thresholds have been fixed to a low value of t_sr = t_sp = t_tr = t_tp = 0.1.
In other words, a ground-truth action g is matched to a detected action d if and only if • at least 10% of the ground-truth frames are detected (t_tr); • at least 10% of the detected frames are also in the ground-truth (t_tp); • at least 10% of the pixels of the ground-truth bounding boxes have been detected, only counting frames which appear in both ground truth and detection (t_sr); • at least 10% of the pixels of the detected bounding boxes are also in the ground-truth bounding boxes, only counting frames which appear in both ground truth and detection (t_sp). These conditions correspond to equation (7) in section 2. When localization information is taken into account, differences in performance between the algorithms become much more evident. While the differences were modest when no penalty was applied on localization, a clear winner now emerges in competition participant No. 49, only slightly topped by method B, which did not take part in the competition. Including these new constraints due to localization, Recall drops from 74% to 63%, and Precision drops from 41% to 33% for this participant. The performance of the winning entry, submitted for dataset D1, cannot be directly compared to the performance of methods No. 13 and No. 51, which were submitted for dataset D2. However, given the difference in F-scores, with 44% on one hand, and 5% and 3% on the other hand, it is safe to announce a clear winner of the contest. With an F-Score of 0.22, the baseline method based on a bag-of-words representation (method A) fares reasonably well compared to the winning methods based on more sophisticated models integrating spatio-temporal relationships through belief nets (method No. 49, F-Score of 0.44) and deformable parts models (method B, F-Score of 0.47). Dependence on quality The measures described above have been calculated using thresholds set to 0.1, which seems to be a good compromise given the high spatial and temporal variations of human activities. However, interesting information on the behavior of a detection algorithm can be obtained by calculating Recall and Precision measures over varying thresholds and creating plots, as explained in section 2.2. Figures 7 and 8 show these graphs. Each column corresponds to a method, and each of the four rows corresponds to a situation where only one of the four thresholds t_sr, t_sp, t_tr and t_tp is varied from 0 to 1, while the other three thresholds are kept at a fixed value of 0.1. Focusing our attention on the winning entry, method No. 49 shown in the rightmost column, we can deduce valuable information from the first diagram in the top row, where the threshold t_tr of the temporal frame-wise recall is varied. The highest performance is obtained for t_tr = 0: Recall = 65% and Precision = 36%. Note that the thresholding condition requires the thresholded quantity to be strictly larger than 0, as indicated in equation (7). When we increase the threshold in small steps up to t_tr = 1, we can see an almost linear drop of the Precision and Recall measures. At the right end of the diagram we see that we obtain a performance of 8% and 5%, respectively, for t_tr = 1, which gives the performance for the case where all frames of an activity need to be detected in order for the activity itself to be counted as detected.
A similar behavior can be seen when threshold t_tp (temporal frame-wise precision) is varied, illustrated in the second row of figure 7. At t_tp = 1, when we require that not a single spurious frame outside the ground-truth activity is detected, we still get performance measures of 30% and 16%, respectively. The last two rows of figure 7 illustrate the behavior when spatial overlap is considered. Both diagrams show performance figures approaching zero when the respective threshold approaches 1. This shows that it is extremely rare for a ground-truth activity to be consistently (spatially) included in the corresponding detected activity over all frames, as indicated in the third row, and it is extremely rare for a detected activity to be (spatially) included in the corresponding ground-truth activity over all frames, as indicated in the last row. These indications of behavior over varying quality constraints can be captured in a performance measure, as given in equation (8) in section 2.3. Intuitively, the measures are means of the F-Score over the threshold variations. They are given in Table 4 for the different participants of the competition. Soft upper bounds on performance The upper bound for any of the performance measures (Precision, Recall and F-Score) is in principle 1. However, ground-truth annotations are subjective and inherently imprecise, so a totally precise localization resulting in Recall = Precision = F-Score = 1 may not be expected for any method. It is therefore interesting to estimate "soft" upper bounds on the performance measures corresponding to the average agreement score of human annotators (inter-subject agreement), which is defined as the expected value of the performance measures when different test subjects do the localization task. To estimate these bounds, we selected a subset of 9 videos containing 9 actions and had these actions annotated by 7 different people. From this pool of annotations, pairs of annotations were selected where the first one was used as ground truth and the second one as (virtual) detection. Table 5 shows means of the F-Score obtained for sets of such pairs. The last row shows the mean over all possible 21 combinations of pairs of this set of 7 annotators. The first seven rows give the different means for different annotators, where each row corresponds to a mean over 6 runs (one annotator against the 6 other annotators). The different columns correspond to the F-Score for different quality constraints, as given in equation (2), as well as the integrated performance, as given in equation (9). The given figures may seem quite low, especially for thresholds equal to 0.8. These low detection rates among different manual annotations can be explained by the fact that the threshold is enforced jointly in the temporal and the spatial domain. It is very difficult to create similar annotations in all aspects, i.e., not to cut away relevant parts or not to add irrelevant parts, both temporally and spatially. We can also see that while classical performance measures (measures that do not take into account the quality factors as in Eqs. 1-2) seem to vary across annotators, the proposed integrated performance measure stays quite stable over different annotators. We claim that this invariance to subtle changes in annotations is a major advantage of this new metric.
This does not mean that the measure loses its discriminative power in comparing different methods, as can be seen in Table 4: for the winning entry, performance is measured as 33%, whereas the other two entries are measured as 3% and 2%, respectively. The large difference in the methods' performance is illustrated by the curves given in figures 7 and 8. "Soft" upper bounds have also been calculated for the performance curves proposed in section 2.2. These curves are shown in figure 9, where each row corresponds to a variation in one of the four thresholds, in the same way as shown in figure 7. In figure 9, each point of a plot corresponds to a mean calculated over the 21 different values obtained by taking all possible pairwise combinations of annotations. Note that the plots of Recall, Precision and F-Score are identical for each diagram. This is due to the fact that all annotators have annotated the same activities and that no annotator added a new false activity. In other words, the differences in annotations are only in the coordinates and number of the bounding boxes, not in the number of found activities. Confusion matrices Figure 10 shows confusion matrices for the methods for which localization information (bounding boxes) has been submitted. These matrices contain information complementary to the Recall and Precision values, but otherwise they cannot be used as indicators of the detection performance of a method. In particular, calculating the difference between the ground-truth class and the detected class requires the assignment of a detected activity to each ground-truth activity, which can only be done through localization information, i.e., bounding boxes. Unmatched activities are not included in these matrices, which therefore lack any information on the number of missed activities or false alarms. Considering the matrix for the winning entry, the bottom matrix in the figure, we see that the actions Handshaking and Give Item are frequently confused, which is not surprising given the similar motion involved in both actions. The activities Telephone call and Pick up / Put down Object are also sometimes confused, which may possibly be explained by the context model used in the method. Both activities take place in similar contexts, and picking up and putting down a telephone has been annotated as Pick up / Put down Object. Further confusions are Discussion and Give Item, which both involve a group of people standing close together and interacting. Implementation and tools An open-source implementation of the proposed performance metric for Windows, Linux and Mac OS is available online. The software can calculate Recall, Precision and F-Score for fixed (selectable) thresholds as well as the integrated performance, and can plot and export performance curves and confusion matrices. Two versions are available: one with a graphical user interface and one with a scriptable command line interface. It comes with software for viewing ground-truth annotations superimposed on videos, as well as software for creating new annotations. Conclusion This paper has introduced a new performance metric for evaluating human activity detection, recognition and localization algorithms. Taking localization information into account is a non-trivial task, as the evaluation needs to decide for each activity whether it has been successfully detected, based on detection quality constraints.
The inherent dependency between performance and quality has been identified, and a set of quantity/quality curves has been introduced to describe the detection and localization behavior of a computer vision algorithm. The proposed integrated performance measure is a new way to compare and rank detection and localization methods. Its advantages are two-fold: • the measure is independent of quality constraints on detection, i.e., it is independent of arbitrary thresholds on spatial and temporal overlap; • experiments described in this paper have shown that the measure is less sensitive to annotator variance than the classical measures, while at the same time allowing discrimination between changes in the performance of the algorithms. The paper also describes the LIRIS human activities dataset as a new standard dataset, which allows activity recognition algorithms to be benchmarked on realistic and difficult data. The proposed performance metric has been tested on the LIRIS dataset and on the detection methods submitted to the ICPR HARL 2012 competition.
Figure 2: An example of evaluation plots for a single frame taken from the ICPR HARL competition: (a) a single frame with the ground-truth bounding box (labeled "1", in blue) and two bounding boxes corresponding, respectively, to two different methods (labeled "2" and "3", in red); (b) evaluation curves for the method labeled "3"; (c) evaluation curves for the method labeled "2".
Table 1: The behavior classes in the dataset. Some of the actions are human-human interactions (HH) or human-object interactions (HO).
DI - Discussion of two or several people (HH)
GI - A person gives an item to a second person (HH, HO)
BO - An item is picked up or put down (into/from a box, drawer, desk etc.) (HO)
EN - A person enters or leaves a room (-)
ET - A person tries to enter unsuccessfully (-)
LO - A person unlocks a room and then enters (-)
UB - A person leaves baggage unattended (HO)
HS - Handshaking of two people (HH)
KB - A person types on a keyboard (HO)
TE - A person talks on a telephone (HO)
Figure 3: The Pekee II mobile robot in our setup with the Kinect module during the shooting of the dataset.
Figure 4: The dataset has been shot with two different cameras, a Kinect camera and a color camera. (a) Kinect grayscale image; (b) Kinect depth image; (c) Kinect color-coded depth image; (d) color images from the Sony camcorder.
Figure 5: Example frames for various activity classes (D1/Kinect grayscale shown only).
Figure 6: The videos are annotated: each action is localized through a set of bounding boxes over a contiguous sequence of frames.
Figure 8: Performance curves for the two additional methods under the constraint of localization data. They are obtained by varying a single quality constraint and keeping the other ones at the 0.1 level. From top to bottom, the following constraints are varied: temporal recall, temporal precision, spatial recall, spatial precision. Each panel plots Recall, Precision and F-Score against one of the thresholds t-tr, t-tp, t-sr and t-sp, for Method A (D1) and Method B (D1).
Figure 9: Estimation of soft upper bounds on the proposed performance measures: mean performance curves calculated on pairs of different ground truth annotations.
Figure 10: Confusion matrices for the five methods providing localization data. They are calculated on actions satisfying quality constraints only. Rows are ground-truth classes, columns are detected classes. The five matrices correspond to Participant Nr. 13 (subset D2), Participant Nr. 51 (subset D2), Participant Nr. 49 (subset D1), Method A (subset D1) and Method B (subset D1), over the classes DI, GI, BO, EN, ET, LO, UB, HS, KB and TE.
Table 2: Results without localization. The bounding boxes of the annotation are not used.
No.  Set  Recall  Precision  F-Score
49   D1   0.74    0.41       0.53
59   D1   0.08    0.17       0.11
A    D1   0.34    0.24       0.28
B    D1   0.74    0.41       0.53
13   D2   0.35    0.66       0.46
51   D2   0.30    0.46       0.36
Table 3: Results with fixed quality constraints: all thresholds are set to 0.1. No localization information has been submitted for method No. 59.
No.  Set  Recall  Precision  F-Score
49   D1   0.63    0.33       0.44
59   D1   N/A     N/A        N/A
A    D1   0.27    0.18       0.22
B    D1   0.67    0.36       0.47
13   D2   0.04    0.08       0.05
51   D2   0.03    0.04       0.03
Table 4: Results integrated over all quality constraints: for each column of type I* a single threshold is varied and the others are fixed. The total is the mean value over these indicators. No localization information has been submitted for method No. 59.
Table 5: Estimation of soft upper bounds on the proposed performance measures: mean performance values calculated on different ground-truth annotations (t* signifies all four thresholds collectively set at the given value). In contrast to classical measures as in Eq. 7, integrated performance stays quite stable over different annotators.
(Footnote) The term metric used in the context of performance evaluation is only loosely related to the mathematical meaning of the term metric. In particular, the triangular inequality is not supposed to hold for metrics in this context.
73,067
[ "3860", "12850", "7702", "5387", "946597", "7701", "4094", "3989" ]
[ "403930", "403930", "403930", "403930", "208035", "403930", "403930", "489164", "489164", "403930", "403930", "403930", "403930", "208035" ]
01489458
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01489458/file/978-3-642-38541-4_17_Chapter.pdf
Panagiotis Garefalakis Panagiotis Papadopoulos Ioannis Manousakis Kostas Magoutis email: [email protected] Strengthening Consistency in the Cassandra Distributed Key-Value Store Introduction The ability to perform large-scale data analytics over huge data sets has in the past decade proved to be a competitive advantage in a wide range of industries (retail, telecom, defence, etc.). In response to this trend, the research community and the IT industry have proposed a number of platforms to facilitate large-scale data analytics. Such platforms include a new class of databases, often referred to as NoSQL data stores, which trade the expressive power and strong semantics of long-established SQL databases for the specialization, scalability, high availability, and often relaxed consistency of their simpler designs. Companies such as Amazon [START_REF] Decandia | Dynamo: Amazon's Highly Available Key-value Store[END_REF] and Google [START_REF] Chang | Bigtable: A Distributed Storage System for Structured Data[END_REF] and open-source communities such as Apache [START_REF] Lakshman | Cassandra: A decentralized structured storage system[END_REF] have adopted and advanced this trend. Many of these systems achieve availability and fault-tolerance through data replication. Google's BigTable [START_REF] Chang | Bigtable: A Distributed Storage System for Structured Data[END_REF] is an early approach that helped define the space of NoSQL key-value data stores. Amazon's Dynamo [START_REF] Decandia | Dynamo: Amazon's Highly Available Key-value Store[END_REF] is another approach that offered an eventually consistent replication mechanism with tunable consistency levels. Dynamo's open-source variant Cassandra [START_REF] Lakshman | Cassandra: A decentralized structured storage system[END_REF] combined Dynamo's consistency mechanisms with a BigTable-like data schema. Cassandra uses consistent hashing to ensure a good distribution of key ranges (data partitions, or shards) to storage nodes. Cassandra works well with applications that share its relaxed semantics (such as maintaining customer carts in online stores [START_REF] Decandia | Dynamo: Amazon's Highly Available Key-value Store[END_REF]) but is not a good fit for more traditional applications requiring strong consistency. We recently decided to embark on a re-design of Cassandra that preserves some of its features (such as its data partitioning based on consistent hashing) but replaces others with the aim of strengthening consistency. Our design does not utilize multiple masters on concurrent updates to a shard or techniques such as hinted handoff [START_REF] Decandia | Dynamo: Amazon's Highly Available Key-value Store[END_REF]. Instead, service availability requires that a single master per shard (part of a self-organized replication group) be available and its identity known to I/O coordinators. We reduce intervals of unavailability by aggressively publishing configuration updates. Furthermore, we improve performance by using client-coordinated I/O, avoiding a forwarding step in Cassandra's original I/O path. In summary, our re-design centers on: -Replacing Cassandra's data replication mechanism with the highly available Oracle Berkeley DB Java Edition (JE) High Availability (HA) key-value storage engine (hereafter abbreviated as BDB). Our design simplifies Cassandra while at the same time strengthening its data consistency guarantees.
-Enhancing Cassandra's membership protocol with a highly available Paxosbased directory accessible to clients. In this way, replica group reconfigurations are rapidly propagated to clients, reducing periods of unavailability. The resulting system is simpler to reason about and backwards-compatible with original Cassandra applications. While we expect that dropping the eventual consistency model may result in reduced availability in certain cases, we try to make up by focusing on reducing recovery time of the I/O path after a failure. Fig. 2. System components and their interactions The rest of the paper is organized as follows: In Section 2 we describe the overall design and in Section 3 we provide details of our implementation and preliminary results. In Section 4 we describe related work and in Section 5 directions of ongoing and future work. Finally in section 6 we conclude. Design Our system architecture is depicted in Figure 1. We preserve the Thrift-based client API for compatibility with existing Cassandra applications. We also maintain Cassandra's ring-based consistent hashing mechanism (where keys and storage nodes both map onto a circular ring [START_REF] Decandia | Dynamo: Amazon's Highly Available Key-value Store[END_REF]) but modify it to map each key to a BDB replication group (RG) instead of a single node. BDB implements a B+tree indexed key-value store via master-based replication of a transaction log, using Paxos for reconfiguration. In our setup, all accesses go through the master (ensuring order) while writes are considered durable when in memory and acknowledged by all replicas. Periodically, replicas flush their memory buffers to disk. These settings offer a strong level of consistency with a slightly weaker (but sufficient for practical purposes) notion of durability [START_REF] Birman | and others: Overcoming CAP with Consistent Soft-State Replication[END_REF]. Each node in an RG runs a software stack comprising a modified Cassandra with an embedded BDB (left of Figure 2). On a master, Cassandra is active and serves read/write requests; on a follower, Cassandra is inactive until elected master (election is performed by BDB and its result communicated to Cassandra via an upcall). The ring state is stored on a Configuration Manager (or CM, right of Figure 2). The CM complements Cassandra's original metadata service which uses a gossip-based protocol [START_REF] Lakshman | Cassandra: A decentralized structured storage system[END_REF]. It combines a partitioner (a module that chooses tokens for new RGs on the ring) with a primary-backup viewstamp replication [START_REF] Mazieres | Paxos Made Practical[END_REF] scheme where a group of nodes (termed cohorts) exchange state updates over the network. The CM can be thought of as a highly-available alternative to Cassandra's seed nodes. It contains information about all RGs, such as addresses and status (master or follower), and corresponding tokens. Any change in the status of RGs (new RG inserted in the ring or existing RG changes master) is reported to the CM via RPC. The CM is queried by clients to identify the current master of an RG (by token). We improve data consistency over original Cassandra by prohibiting multimaster updates. For a client to successfully issue an I/O operation, it must have access to the master node of the corresponding RG. Causes of unavailability include RG reconfiguration actions after failures and delays in the new ring state propagating to clients. 
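As a rough illustration of the lookup path described above, and not of the project's actual code, a client can hash a row key onto the ring, map the resulting token to a replication group, and then ask the Configuration Manager for that group's current master; all class and function names in the sketch below are invented, and the MD5-based token computation merely mimics Cassandra-style consistent hashing.

import bisect
import hashlib

class RingDirectory:
    """Client-side view of the ring: sorted replication-group (RG) tokens."""
    def __init__(self, rg_tokens):                 # e.g. {"rg-1": 0, "rg-2": 2**127}
        self.ring = sorted((token, rg) for rg, token in rg_tokens.items())
        self.tokens = [token for token, _ in self.ring]

    def token_for(self, row_key):
        digest = hashlib.md5(row_key).digest()     # 128-bit token space
        return int.from_bytes(digest, "big")

    def replication_group(self, row_key):
        # Walk clockwise to the first RG token at or after the key's token.
        idx = bisect.bisect_left(self.tokens, self.token_for(row_key)) % len(self.ring)
        return self.ring[idx][1]

def route_request(ring, cm_lookup, row_key):
    """cm_lookup(rg_id) stands in for the CM RPC that returns the RG's current master."""
    rg = ring.replication_group(row_key)
    return rg, cm_lookup(rg)

# Example: two replication groups splitting the token space in half.
ring = RingDirectory({"rg-1": 0, "rg-2": 2**127})
print(route_request(ring, {"rg-1": "10.0.0.1", "rg-2": "10.0.0.2"}.get, b"user:42"))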
Our implementation supports faster client updates by either eager notifications by the CM [START_REF] Burrows | The Chubby Lock Service for Loosely-coupled Distributed Systems[END_REF] or by integrating with the CM. Additionally, clients can explicitly request RG reconfiguration actions if they suspect partial failure (i.e., a master visible to the RG but not to the client). Our partitioner subdivides the ring to a fixed number of key ranges and assigns node tokens to key-range boundaries. This method has previously been shown to exhibit advantages over alternative approaches [START_REF] Decandia | Dynamo: Amazon's Highly Available Key-value Store[END_REF]. Each key range in our system corresponds to a different BDB database, the total number of key ranges on the ring being a configuration parameter. Finally, data movement (streaming) between storage nodes takes place when bootstrapping a new RG. Implementation and preliminary results Our implementation replaces the original Cassandra storage backend with Oracle Berkeley DB JE HA. One of the challenges we faced was bridging Cassandra's rich data model (involving column families, column qualifiers, and versions [START_REF] Lakshman | Cassandra: A decentralized structured storage system[END_REF]) with BDB's simple key-value get/put interface where both key and value are opaque one-dimensional entities. Our first approach mapped each Cassandra cell (row key, column family, column qualifier) to a separate BDB entry by concatenating all row attributes into a larger unique key. The problem we faced with this model was the explosion in the number of BDB entries and the associated (indexing, lookup, etc.) overhead. Our second approach maps the Cassandra row-key to a BDB key (one-to-one) and stores in the BDB value a serialized HashMap of the column structure. Accessing a row requires a lookup for the row and subsequent lookup in the HashMap structure to locate the appropriate data cell. Our current implementation following this approach performs well in the general case, with the exception of frequent updates/appends to large rows (the entire row has to be retrieved, modified, then written back to BDB). This is a case where Cassandra's native no-overwrite storage backend is more efficient by writing the update directly to storage, avoiding the read-modify-write cycle. Our Configuration Manager (CM) uses a specially developed Cassandra partitioner to maintain RG identities, master and follower IPs, RG tokens, and the key ranges on the ring. We decided to use actual rather than elastic IP addresses due to the long reassignment delays we observed with the latter on certain Cloud environments. Each RG stores its identifier and token in a special BDB table so that a newly elected RG master can retrieve it and identify itself to the CM. The CM exports two RPC APIs to storage nodes: register/deregister RG, new master for RG; and one to both storage nodes and clients: get ring info. The CM achieves high availability of the ring state via viewstamp replication [START_REF] Mazieres | Paxos Made Practical[END_REF][START_REF] Oki | B: Viewstamped Replication: A New Primary Copy Method to Support Highly-Available Distributed Systems[END_REF]. 
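The row mapping of the second approach can be pictured with a small sketch in which the row key becomes the store key and the whole column map is serialized into the value, so that updating a single column is a read-modify-write of the full row; a plain Python dict stands in for the replicated Berkeley DB handle and all names are illustrative.

import pickle

def put_column(store, row_key, column, value):
    """Read-modify-write: fetch the serialized column map, update one column, write the row back."""
    raw = store.get(row_key)
    columns = pickle.loads(raw) if raw is not None else {}
    columns[column] = value
    store[row_key] = pickle.dumps(columns)     # the whole row is rewritten on every update

def get_column(store, row_key, column):
    raw = store.get(row_key)
    return pickle.loads(raw).get(column) if raw is not None else None

# Example with a dict standing in for the replicated BDB database.
store = {}
put_column(store, b"user:42", "name", b"alice")
put_column(store, b"user:42", "city", b"heraklion")
print(get_column(store, b"user:42", "city"))

This keeps point lookups to a single get, but, as noted above, it penalizes frequent updates or appends to large rows, since the entire serialized map travels through the read-modify-write cycle.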
Preliminary results with the Yahoo Cloud Serving Benchmark (YCSB) over a cluster of six Cassandra nodes (single-replica RGs) on Flexiant VMs with 2 CPUs, 2GB memory, and a 20GB remotely-mounted disk indicate improvements of 26% and 30% in average response time and throughput respectively, compared to original Cassandra (Table 1 summarizes our results). This benefit is primarily due to client-coordination of requests. Our ongoing evaluation will further focus on system availability under failures and scalability with larger configurations. Related Work Our system is related to several existing distributed NoSQL key-value stores [START_REF] Decandia | Dynamo: Amazon's Highly Available Key-value Store[END_REF][START_REF] Chang | Bigtable: A Distributed Storage System for Structured Data[END_REF][START_REF] Lakshman | Cassandra: A decentralized structured storage system[END_REF] implementing a wide range of semantics, some of them using the Paxos algorithm [START_REF] Lamport | Paxos made simple[END_REF] as a building block [START_REF] Burrows | The Chubby Lock Service for Loosely-coupled Distributed Systems[END_REF][START_REF] Lee | Petal: Distributed Virtual Disks[END_REF][START_REF] Maccormick | Niobe: A Practical Replication Protocol[END_REF]. Most NoSQL systems rely on some form of relaxed consistency to maintain data replicas and reserve Paxos for the implementation of a global state module [START_REF] Lee | Petal: Distributed Virtual Disks[END_REF][START_REF] Maccormick | Niobe: A Practical Replication Protocol[END_REF] for storing infrequently updated configuration metadata or to provide a distributed lock service [START_REF] Burrows | The Chubby Lock Service for Loosely-coupled Distributed Systems[END_REF]. Exposing storage metadata information to clients has been proposed in the past [START_REF] Decandia | Dynamo: Amazon's Highly Available Key-value Store[END_REF][START_REF] Lee | Petal: Distributed Virtual Disks[END_REF][START_REF]Oracle NoSQL Database: An Oracle White Paper[END_REF], although the scalability of updates to that state has been a challenge. Perhaps the closest approaches to ours are Scatter [START_REF] Glendenning | Scalable Consistency in Scatter[END_REF], ID-Replication [START_REF] Shafaat | and others: ID-Replication for Structured Peer-to-Peer Systems[END_REF], and Oracle's NoSQL database [START_REF]Oracle NoSQL Database: An Oracle White Paper[END_REF]. All these systems use consistent hashing and self-managing replication groups. Scatter and ID-Replication target planetary-scale rather than enterprise data services and thus focus more on system behavior under high churn than on the speed at which clients are notified of configuration changes. Just as we do, Oracle NoSQL leverages the Oracle Berkeley DB (BDB) JE HA storage engine and maintains information about data partitions and replica groups across all clients. A key difference from our system is that whereas Oracle NoSQL piggybacks state updates in response to data operations, our clients have direct access to ring state in the CM, receive immediate notification after failures, and can request reconfiguration actions if they suspect a partial failure. We are aware of an HA monitor component that helps Oracle NoSQL clients locate RG masters after a failure, but were unable to find detailed information on how it operates. Future Work Integrating the CM service into Cassandra clients (making each client a participant in the viewstamp replication protocol) raises scalability issues.
We plan to investigate the scalability of our approach as well as the availability of the resulting system under a variety of scenarios. Another research challenge is in provisioning storage nodes for replication groups to be added to a growing cluster. Assuming that storage nodes come in the form of virtual machines (VMs) with local or remote storage on Cloud infrastructure, we need to ensure that nodes in an RG fail independently (easier to reason about in a private rather than a public Cloud setting). Elasticity is another area we plan to focus on. A brute-force approach of streaming a number of key ranges (databases) to a newly joining RG is a starting point, but our focus will be on alternatives that exploit replication mechanisms [START_REF] Lorch | The SMART Way to Migrate Replicated Stateful Services[END_REF]. Conclusions In this short note we described a re-design of the Apache Cassandra NoSQL system aiming to strengthen its consistency while preserving its key distribution mechanism. Replacing its eventually-consistent replication protocol by the Oracle Berkeley DB JE HA component simplifies the system while making it applicable to a wider range of applications. A new membership protocol further increases system robustness. A first prototype of the system is ready for evaluation while the development of more advanced functionality is currently underway. This work was supported by the CumuloNimbo (FP7-257993) and PaaSage (FP7-317715) EU projects.
Fig. 1. System architecture
Table 1. YCSB read-only workload
                         Throughput (ops/sec)   Read latency (average, ms)   Read latency (99 percentile, ms)
Original Cassandra       317                    3.1                          4
Client-coordinated I/O   412                    2.3                          3
15,549
[ "1004029", "1004030", "1004031", "1004032" ]
[ "307779", "307779", "307779", "307779" ]
01489460
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01489460/file/978-3-642-38541-4_1_Chapter.pdf
Matthew Brook email: [email protected] Craig Sharp email: [email protected] Graham Morgan email: [email protected] Semantically Aware Contention Management for Distributed Applications Keywords: replication, contention management, causal ordering Distributed applications that allow replicated state to deviate in favour of increasing throughput still need to ensure such state achieves consistency at some point. This is achieved via compensating conflicting updates or undoing some updates to resolve conflicts. When causal relationships exist across updates that must be maintained then conflicts may result in previous updates also needing to be undone or compensated for. Therefore, an ability to manage contention across the distributed domain to pre-emptively lower conflicts as a result of causal infringements without hindering the throughput achieved from weaker consistency is desirable. In this paper we present such a system. We exploit the causality inherent in the application domain to improve overall system performance. We demonstrate the effectiveness of our approach with simulated benchmarked performance results. Introduction A popular technique to reduce shared data access latency across computer networks requires clients to replicate state locally; data access actions (reads and/or writes) become quicker as no network latency will be involved. An added benefit of such an approach is the ability to allow clients to continue processing when disconnected from a server. This is of importance in the domains of mobile networks and rich Internet clients where lack of connectivity may otherwise inhibit progress. State shared at the client side still requires a level of consistency to ensure correct execution. This level is usually guaranteed by protocols implementing eventual consistency. In such protocols, reconciling conflicting actions that are a result of clients operating on out-of-date replicas must be achieved. In this paper we assume a strict case of conflict that includes out-of-date reads. Such scenarios are typical for rich Internet clients where eventual agreement regarding data provenance during runtime can be important. Common approaches to reconciliation advocate compensation or undoing previous actions. Unfortunately, the impact of either of these reconciliation techniques has the potential to invalidate the causality at the application level within clients (semantic causality): all tentative actions not yet committed to shared state but carried out on the local replica may have assumed previous actions were successful, but now require reconciliation. This requires tentative actions to be rolled back. For applications requiring this level of client-local causality, the impact of rolling back tentative actions has a significant impact on performance; they must rollback execution to where the conflict was discovered. Eventually consistent applications exhibiting strong semantic causality that need to rollback in the presence of conflicts are similar in nature to transactions. Transactions, although offering stronger guarantees, abort (rollback state changes) if they can't be committed. In recent years transactional approaches have been used for regulating multi-threaded accesses to shared data objects. A contention manager has been shown to improve performance in the presence of semantic causality across a multi-threaded execution. A contention manager determines which transactions should abort based on some defined strategy relating to the execution environment. 
In this paper we present a contention management scheme for distributed applications where maintaining semantic causality is important. We extend our initial idea [START_REF] Abushnagh | Liana: A Framework that Utilizes Causality to Schedule Contention Management across Networked Systems[END_REF] by dynamically adapting to possible changes in semantic causality at the application layer. In addition, we extend our initial idea of a single server based approach to encompass n-tier architectures more typical of current server side architectures. In section 2 we describe background and related work, highlighting the notion of borrowing techniques from transactional memory to benefit distributed applications. In section 3 we describe the design of our client/server interaction scenario. In section 4 we describe our contention management approach with enhanced configurability properties. In section 5 we present results from our simulation demonstrating the benefits our approach can bring to the system described in section 3. Background and Related Work Our target application is typically a rich Internet client that maintains replicas of shared states at the client side and wishes to maintain semantic causality. Such an application could relate to e-commerce, collaborative document editing or any application where the provenance of interaction must be accurately captured. Optimistic replication Optimistic protocols allow for a deviation in replica state to promote overall system throughput. They are ideal for those applications that can tolerate inconsistent state in favour of instant information retrieval (e.g., search engines, messaging services). The guarantee afforded to the shared state is eventual consistency [START_REF] Saito | Optimistic Replication[END_REF], [START_REF] Vogels | Eventually Consistent[END_REF]. Popular optimistic solutions such as Cassandra [START_REF] Lakshman | Cassandra: A Decentralized Structured Storage System[END_REF] and Dynamo [START_REF] Decandia | Dynamo: Amazon's Highly Available Key-Value Store[END_REF] may be capable of recognising causal infringement, but do not provision rollback schemes to enforce semantic causality requirements at the application layer within their design. They are primarily designed, and work best, for scalable deployment over large clusters of servers. They are not designed for distributed clients to maintain replication of shared state. However, earlier academic work did consider client-based replication. Bayou [START_REF] Terry | Managing Update Conflicts in Bayou, a Weakly Connected Replicated Storage System[END_REF] and Icecube [START_REF] Kermarrec | The IceCube Approach to the Reconciliation of Divergent Replicas[END_REF] [9] do attempt to maintain a degree of causality, but only at the application programmer's discretion. In such systems the application programmer may specify the extent of causality, preventing a total rollback and restart. This has the advantage of exploiting application domains to improve availability and timeliness, but does complicate the programming of such systems as the boundary between protocol and application overlap. In addition, the programmer may not be able to predict the semantic causality accurately. Transactions Transactions offer a general platform to build techniques for maintaining causality within replication accesses at the client side that does not require tailoring based on application semantics. 
Unfortunately, they impose a high overhead to an application that negates the scalable performance expected from optimistic replication: maintain ordering guarantees for accesses at the server and clients complete with persistent fault-tolerance. Transactional memory [START_REF] Herlihy | Transactional Memory: Architectural Support for Lock-free Data Structures[END_REF] approaches found in multi-threaded programming demonstrate fewer of such guarantees (e.g., persistence). In addition, unlike typical transactions in database processing multi-threaded programs present a high degree of semantic causality (threads executing and repeatedly sharing state). Therefore, it is no surprise to learn that they have been shown to help improve overall system throughput by judicial use of a contention manager [START_REF] Scherer | Contention Management in Dynamic Software Transactional Memory[END_REF] [2] [START_REF] Herlihy | Software Transactional Memory for Dynamic-sized Data Structures[END_REF]. Although no single contention manager works for all application types [START_REF] Guerraoui | Polymorphic Contention Management[END_REF], dynamism can be used to vary the contention strategy. Contribution In our previous work we successfully borrowed the concept of contention management from transactional memory and described a contention manager that may satisfy our rich Internet client setting [START_REF] Abushnagh | Liana: A Framework that Utilizes Causality to Schedule Contention Management across Networked Systems[END_REF]. We derived our own approach based on probability of data object accesses by clients. Unfortunately, the approach had limitations: (1) it was static and could not react to changes in application behaviour (i.e., probability of object accesses changing); (2) it worked on a centralised server (i.e., we could not utilise scalability at the server side). In this paper we propose solutions to both these problems and present a complete description of our eventually consistent protocol (which we didn't present earlier) with the dynamic version of our semantically aware contention manager. System Design Our contention management protocol is deployed within a three-tier server side architecture as illustrated in Fig. 1. The load balancer initially determines which application server to direct client requests to. A sticky session is then established such that all messages from one client are always directed to the same application server. Commu-nication channels exhibit FIFO qualities but message loss is possible. We model this type of communication channel to reflect a typical TCP connection with the possibility of transient connectivity of clients, a requirement in many rich Internet client applications. Data accesses are performed locally on a replication of the database state at the client side. In our evaluation we used a complete replica, but this could be partial based on a client's ability to gain replica state on initialisation of a client. Periodically clients inform the application server of these data accesses. An application server receives these access notifications and determines whether these accesses are valid given the current state as maintained at the database. Should the updates be valid, the database state is updated to reflect the changes the client has made locally. However, if the update results in an irreconcilable conflict then the client is notified. 
When the client learns that a previous action was not successful, the client rolls back to the point of execution where this action took place and resumes execution from this point. Fig. 1. System Design Clients Each client maintains a local replica of the data set maintained by the database. All client actions enacted on the shared data are directed to their local replica. The client uses a number of logical clocks to aid in managing their execution and rollback:  Client data item clock (CDI)exists for each data item and identifies the current version of the data item's state held by a client. The value is used to identify when a client's view of the data item is out-of-date. This value is incremented by the client when updating a data item locally or when a message is received from an application server informing the client of a conflict.  Client session clock (CSC)this value is attached to every request sent to an application server. When a client rolls back this value is incremented. This allows the application server to ignore messages belonging to out of date sessions.  Client action clock (CAC)this value is incremented each time a message is sent to an application server. This allows the application servers to recognize missing messages from clients. The result of an action that modifies a data item in the local replicated state results in a message being sent to the application servers. This message contains the data item state, the CDI of the data item, the CSC and the CAC. An execution log is maintained and each client message is added to it. This execution log allows client rollback. A message arriving from the application server indicates that a previous action, say A n , was not possible or client messages are missing. All application server messages contain a session identifier. If this identifier is the same or lower than the client's CSC then the application server message is ignored (as the client has already rolled backthe application server may send out multiple copies of the rollback message). However, if the session identifier is higher than the client's CSC the client must update their own CSC to match the new value and rollback. If the message from the application server was sent due to missing client messages then only an action clock and session identifier will be present (we call this the missed message request). On receiving this message type, the client should rollback to the action point using their execution log. However, if the application server sent the message because of a conflicting action then this message will contain the latest state of the data that A n operated on and the new logical clock value (we call this the irreconcilable message request). On receiving such a message the client halts execution and rolls back to attempt execution from A n . Although a client will have to rollback when requested by the application server, the receiving of an application server message also informs the client that all their actions prior to A n were successful. As such, the client can reduce the size of their execution log to reflect this. Application Server The role of an application server is to manage the causal relationship between a client's actions and ensure a client's local replica is eventually consistent. The application server manages three types of logical clock to inform the client when to rollback:  Session identifier (SI)this is the application server's view of a client's CSC. Therefore, the application server maintains an SI for each client. 
This is used to disregard messages from out of date sessions from clients. The SI is incremented by one each time a client is requested to rollback.  Action clock (AC)this is the application server's view of client's CAC. Therefore, the application server maintains an AC for each client. This is used to identify missing messages from a client. Every action honoured by the application server on behalf of the client results in the AC for that client being set to the CAC belonging to the client.  Logical clock (LC)this value is stored with the data item state at the database. The value is requested by the application sever when an update message is received from a client. The application server determines if a client has operated on an out-of-date version using this value. If the action from the client was valid then the application server updates the value at the database. Requests made to the database are considered transactional; handling transactional failure is beyond the scope of this paper (we propose the use of the technique described in [START_REF] Kistijantoro | Enhancing an Application Server to Support Available Components[END_REF] to handle such issues). A message from a client, say C 1 , may not be able to be honoured by the application server due to one of the following reasons:  Stale sessionthe application server's SI belonging to C 1 is less than the CSC in C 1 's message.  Lost messagethe CAC in C 1 's message is two or more greater than the application server's AC for C 1 .  Stale datathe LC for the data item the client has updated is greater than the CDI in C 1 's message. When the application server has to rebut a client's access, a rollback message is sent to that client. Preparation of the rollback message depends on the state of the client as perceived by the application server. An application server can recognize a client (C 1 ) in one of two modes:  Progressthe last message received from C 1 could be honoured.  Stalledthe last message received from C 1 could not be honoured or was ignored. If C 1 is in the progress state then the application server will create a new rollback message and increment the SI for C 1 by one. If the problem was due to a lost message then the AC value for C 1 is incremented by one (to indicate that rollback is required to just after the last successful action) and is sent together with C 1 's updated SI value (this is the missed message request mentioned in section 3.1). If the problem was due to an irreconcilable action the message sent to the client will contain the latest LC for the data item the action attempted to access (retrieved from the database), and the application server's SI value for C 1 (this is the irreconcilable message request mentioned in section 3.1). The application server moves C 1 to the stalled state and records the rollback message sent to C 1 (this is called the authoritative rollback message). If C 1 is in the stalled state all the client's messages are responded to with C 1 's current authoritative rollback message. The exception is if the received message contains a CSC value equal to C 1 's SI value held by the application server. If such a message is received then the CAC value contained in the message is compared with the AC value of C 1 held by the application server. If it is greater (i.e., the required message from C 1 is missing) the application server increments C 1 's SI by one and constructs a new authoritative rollback message to be used in response to C 1 . 
If the CAC value in the message is equivalent to the AC value of C 1 as held by the application server, and the application server can honour this message (logical clock values are valid), then C 1 's state is moved to progress and the authoritative rollback message is discarded. If the message cannot be honoured (it is irreconcilable), then the application server increments the SI for C 1 by one and uses this together with the contents of the received message to create a fresh authoritative rollback message, sending this to the client. 3.3 Database The database manages the master copy of the shared data set. The data set comprises of data items and their associated logical clock values. The data item reflects the state while the logical clock indicates versioning information. The logical clock value is requested by application servers to be used in determining when a client's update message is irreconcilable. The database accepts requests to retrieve logical clock values for data items or to update the state and logical clock values (as a result of a successful action as determined by an application server). We assume application servers and databases interact in standard transactional ways. System Properties The system design described so far can be reasoned about in the following manner:  Liveness -Clients progress until an application server informs them that they must rollback (via an authoritative rollback message). If this message is lost in transit the client will continue execution, sending further access notification to the application server. The application server will continue to send the rollback message in response until the client responds appropriately. If the client message that is a direct response to the authoritative rollback message goes missing the application server will eventually realize this due to receiving client messages with the appropriate SI values but with CAC values that are too high. This will cause the application server to respond with an authoritative rollback message.  Causality -A client always rolls back to where an irreconcilable action (or missing action due to message loss) was discovered by the application server. Therefore, all actions that are reconciled at the application server and removed from a client's execution log maintain the required causality. Those tentative actions in the execution log are in a state of reconciliation and may require rolling back.  Eventually Consistent -If a client never receives a message from an application server then either: (i) all client requests are honoured and states are mutually consistent; or (ii) all application server or client messages are lost. Therefore, as long as sufficient connectivity between client and application servers exists, the shared data will become eventually consistent. The system design provides opportunity for clients to progress independently of the application server in the presence of no message loss and no irreconcilable issues on the shared data. In reality, there will be a number of irreconcilable actions and as such the burden of rolling back is much more substantial than other eventually consistent optimistic approaches. This does, however, provide the benefit of not requiring any application level dependencies in the protocol itself; the application developer does not need to specify any exception handling facility to satisfy rollback. 
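To make the server-side checks of Sections 3.1 and 3.2 concrete, the following Python sketch shows one way the three rejection conditions (stale session, lost message, stale data) could be expressed. All identifiers are ours and hypothetical, and the exact clock comparisons in the authors' implementation may differ; this is a reading of the text, not their code.

    # Minimal sketch (assumed names) of an application server validating a
    # client update message against its per-client clocks and the item's
    # logical clock (LC) held in the database.
    from dataclasses import dataclass

    @dataclass
    class ServerView:
        si: int    # session identifier (server's view of the client's CSC)
        ac: int    # action clock (server's view of the client's CAC)

    @dataclass
    class UpdateMsg:
        csc: int   # client session clock
        cac: int   # client action clock
        cdi: int   # client's version of the updated data item
        item: str
        state: bytes

    def validate(view: ServerView, msg: UpdateMsg, item_lc: int) -> str:
        if msg.csc != view.si:
            return "stale_session"   # message belongs to a session the server no longer tracks
        if msg.cac >= view.ac + 2:
            return "lost_message"    # an earlier client message is missing
        if item_lc > msg.cdi:
            return "stale_data"      # irreconcilable: the client replica was out of date
        view.ac = msg.cac            # honour the action; the caller updates state and LC
        return "commit"

Any result other than "commit" would lead the application server to prepare (or resend) the appropriate authoritative rollback message as described above.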
Semantic Contention Management We now describe our contention management scheme and how it is applied to the system design presented in the previous section. The aim of the contention management scheme is to attain a greater performance in the form of fewer irreconcilable differences without hindering overall throughput. Like all contention management schemes, we exploit a degree of predictability to achieve improved performance. We assume that causality across actions is reflected in the order in which a client accesses shared data items. The diagram in Fig. 2 illustrates this assumption. Fig. 2. Relating client actions progressing to data items In the simple graph shown in Figure 2 we represent three data items (a, b and c) as vertices with two edges connecting a to b and c. The edges of the graph represent the causal relationship between the two connected data items. So if a client performs a successful action on data item a there is a higher than average probability that the focus of the next action from the same client will be either data item b or c. Each application server manages their own graph configuration representing the data items stored within the database. Because of this graphs will diverge across application servers. This is of no concern, as an application server must reflect the insession causality of its own clients, not the clients of others. We extend the system design described in the previous section by adding the following constructs to support the contention management framework:  Volatility value (VV)a value associated to each vertex of the graph indicating the relative popularity for the given data item. The volatility for a data item in the graph is incremented when a client's action is successful. The volatility for the data item that was the focus of the action is incremented by one and the neighbouring data items (those that are connected by outgoing arcs of the original data item) volatilities are incremented by one. Periodically, the application server will decrement these values to reflect the deterioration of the volatility for nodes that are no longer experiencing regular data access.  Delta queue (DQ)for those actions that could not be honoured by the application server due to irreconcilable conflicts (out-of-date logical clock values) a backoff period is generated as the sum of the volatility for the related data. These related data items include the original data item for which the conflict occurred along with the data items with the highest volatilities up to three hops away in the graph. This client is now considered to be in a stalled state and is placed in the delta queue for the generated backoff period. The backoff period is measured in milliseconds given a value generated from the volatility values.  Enhanced authoritative rollback messagewhen a backoff period expires for a client residing in the delta queue, an enhanced authoritative rollback message is sent to the client. This is an extension of the authoritative rollback message described in the system design that includes a partial state update for the client. This partial state update includes the latest state and logical clock values for the conflicting data item and the data items causally related to the original conflicting access. Based on the assumption of causality as reflected in the graph configuration, the aim here to pre-emptively update the client. 
As a result, future update messages will have a higher chance of being valid (this cannot be guaranteed due to simultaneous accesses made by other clients). The approach we have taken is a server side enhancement. This decision was taken to alleviate clients from active participation in the contention management framework. The client needs only to be able to handle the enhanced authoritative rollback message that requires additional state updates to the client's local replica. As each application server manages their graph structure representing the data items, should a single application server crash, client requests can be directed to another working application server will little loss. Clients that originally had sessions belonging to the crashed application server will require directing to a different application server and there will be some additional conflicts and overhead due to the lost session data. Graph Reconfiguration To satisfy the changing probabilities of causal data access over time our static graph approach requires only minor modifications. We introduce two new values that an application server maintains for each client:  Happens Before Value (HBV)the vertex representing a data item a client last successfully accessed.  Happens After Value (HAV)the vertex representing a data item a client successfully accessed directly after HBV. If there does not exist a link between HBV and HAV then one is created. Unfortunately, if we were to continue in this manner we may well end up with a fully connected graph, unnecessarily increasing the load in the overall system (e.g., increased sized enhanced authoritative rollback message). Therefore, to allow for the deletion of edges as well as the creation of edges we make use of an additional value to record the popularity of traversal associated to each edge in the graph:  Edge Popularity Value (EPV) -The cumulative number of times, across all clients, a causal occurrence has occurred between a HBV and HAV. If there already exists a link between HBV and HAV then the associated edge's EPV is incremented by one. This provides a scheme within which the most popular edges will maintain the highest values. However, this may not reflect the current popularity of the causal relations, therefore, the EPVs purpose is to prune the graph. Periodically, the graph is searched and EPVs below a certain threshold result in the removal of their edges. After pruning the graph all remaining edges are reset to the value 0. Periodic pruning and resetting of EPVs provides our scheme with a basic reconfiguration process to more appropriately reflect current semantic causal popularity. We acknowledge that this process of reconfiguration will incur a performance cost relative to the number of data items (vertices) and edges present in the graph. The decision on the time between periodic reconfiguration will be based on a number of factors: (i) the relative performance cost of the reconfiguration; (ii) the number of client requests within the period. If the number of requests is low but reconfiguration too frequent then edges may be removed that are still popular. Therefore, we dynamically base our reconfiguration timings on changes in load. An interesting observation of reconfiguration is it also presents a window of opportunity to alter the data items present. If this was a typical e-commerce site with items for sale then they may be introduced as graph reconfiguration occurs. 
This has two benefits: (i) introduction of items may well alter the causal relationships dramatically (e.g., timed flash sales) and so waiting for reconfiguration would not result in unnecessary overhead as graph values change significantly; (ii) one can apply some application level analysis on the effect new items have on existing data items. Evaluation Three approaches were evaluated to determine performance in terms of server side conflicts and throughput: (1) the basic protocol as described in the system design with no contention management; (2) the enhanced protocol with contention management but without graph reconfigurations; (3) the enhanced protocol with both contention management and graph reconfiguration. To create an appropriate simulation scenario we rely on a pattern of execution for rich Internet clients similar to that described in [START_REF] Clarke | E-Commerce with Rich Clients and Flexible Transactions[END_REF] (ecommerce end client sales). Simulation Environment We produced a discrete event simulation using the SimJava [15] framework. We modeled variable client numbers, a load balancer, three application servers and a database as processes. Graph layouts are randomly created and client accesses are pre-generated. The initial graph layouts include vertices with and without edges. In the dynamic scenario such a vertex may at some point become connected, but not in the static graph. In the dynamic graph periodic reconfiguration occurred every thirty seconds with a relaxed threshold of one. This period was determined over experimentation and was found to provide reasonable balance between accurate causality representation and overhead induced by reconfiguration. The relaxed threshold simply indicated edges that had shown any causal interest would be guaranteed a starting presence in the graph after reconfiguration. We simulated message delay between client and application servers (load balancer) as a random variable with a normal distribution between 1 -50 milliseconds. Each client performs 200 data accesses then leaves the system. Each experiment was run five times to generate the average figures presented. The arrival rate of client messages to the application server was set as ten messages per second for each client process. The simulation was modeled with a 2% message loss probability. Database read and writes were 3 and 6 microseconds respectively. 5.2 Evaluation 1 -Irreconcilable Client Updates (Conflicts) Fig. 3. Irreconcilable conflicts for varying graph sizes The graphs in figure 3 show that the inclusion of contention management lowers conflicts. The results also show the added benefit of graph reconfiguration over a static graph. In addition, reconfiguration appears to approach a stable state as the contention increases. Reconfiguration allows for the system to adapt to the changing client interactions resulting in the graph more accurately reflecting semantic causality over time. Without reconfiguration the conflicts continue to rise rather than stabilize. What has little impact on the results is the number of data items represented in the graph. This is due to the predictability exhibited in the client accesses: if clients accessed data at random we would expect that graph size mattered, as there would naturally be less conflicts. The results presented here indicate that backing off clients and updating their replicas in a predictive manner actually improves performance: conflicts lowered and throughput is increased. 
Evaluation 2 -Throughput of successful client actions (commits) Fig. 4. Throughput measured as commits per second for varying graph sizes In terms of throughput alone, this is seen as a significant 30% improvement when combined with reconfiguration. Therefore, we conclude that causality at the application layer can be exploited to improve performance for those applications where causality infringement requires rollback of local replica state. Conclusion We have described an optimistic replication scheme that makes use of dynamic contention management. We base our contention manager on the popularity of data accesses and the possible semantic causal relation this may hint at within the application layer. Our approach is wholly server based, requiring no responsibility for managing contention from the client side (apart from affording rollback). Our approach suits applications where causality is important and irreconcilable accesses of shared state may cause a client to roll back accesses tentatively carried out on a local replica. Such scenarios occur in rich Internet clients where provenance of data access is to be maintained or where actions of a client's progress must be rolled back in the context of achieving a successful client session. We describe our approach in the context of n-tier architectures, typical in application server scenarios. Our evaluation, via simulation, demonstrates how overall throughput is improved by reducing irreconcilable actions on shared state. In particular, we show how adapting to changes in causal relationships during runtime based solely on access patterns of clients provides the greatest improvements in throughput. This is the first time runtime adaptability of causality-informed contention management has been demonstrated in a complete solution exhibiting eventual consistency guarantees. As such, we believe that this is not only a useful contribution to the literature, but opens new avenues of research by bringing the notion of contention management to replication protocols. We acknowledge that our approach is focussed on a particular application type: applications that always roll back to where conflict was detected. However, we believe that advocating contention management as an aid to performance for eventually consistent replicated state in general would be beneficial and worthy of future exploration. Future work will focus on peer-to-peer based evaluation and creating contention management schemes suitable for mobile environments (where epidemic models of communication are favoured). A further opportunity of exploration will be in taking the semantic awareness properties of this work back to transactional memory systems themselves.
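To make the contention-management mechanism of Section 4 concrete, the Python sketch below models the volatility values, the delta-queue backoff and the EPV-based pruning. It is our illustration, not the authors' code: class and method names are invented, and the backoff here sums all items within three hops rather than only the most volatile ones as the paper describes.

    # Illustrative sketch (assumed names) of the graph-based contention manager.
    from collections import defaultdict

    class ContentionGraph:
        def __init__(self):
            self.edges = defaultdict(dict)       # item -> {neighbour: EPV}
            self.volatility = defaultdict(int)   # item -> VV

        def record_success(self, prev_item, item):
            # bump the accessed item and its outgoing neighbours (volatility values)
            self.volatility[item] += 1
            for n in self.edges[item]:
                self.volatility[n] += 1
            # record the observed happens-before edge (HBV -> HAV) for reconfiguration
            if prev_item is not None:
                self.edges[prev_item][item] = self.edges[prev_item].get(item, 0) + 1

        def related(self, item, hops=3):
            # the conflicting item plus everything reachable within `hops` edges
            seen, frontier = {item}, {item}
            for _ in range(hops):
                frontier = {n for f in frontier for n in self.edges[f]} - seen
                seen |= frontier
            return seen

        def backoff_ms(self, item):
            # delta-queue backoff: sum of volatilities of the related items
            return sum(self.volatility[i] for i in self.related(item))

        def decay(self):
            # periodic deterioration of volatility for items no longer accessed
            for i in self.volatility:
                self.volatility[i] = max(0, self.volatility[i] - 1)

        def reconfigure(self, threshold=1):
            # prune unpopular edges, then reset surviving EPVs to zero
            for src in list(self.edges):
                self.edges[src] = {n: 0 for n, epv in self.edges[src].items()
                                   if epv >= threshold}

In this sketch, a client whose update conflicts would be parked in the delta queue for backoff_ms(item) milliseconds before the enhanced authoritative rollback message is sent.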
34,163
[ "1004033", "1004034", "1004035" ]
[ "252912", "252912", "252912" ]
01489462
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01489462/file/978-3-642-38541-4_3_Chapter.pdf
Tobias Gebhardt email: [email protected] Hans P Reiser email: [email protected] Network Forensics for Cloud Computing Keywords: Cloud Computing, Network Forensics, Incident Investigation Computer forensics involves the collection, analysis, and reporting of information about security incidents and computer-based criminal activity. Cloud computing causes new challenges for the forensics process. This paper addresses three challenges for network forensics in an Infrastructure-as-a-Service (IaaS) environment: First, network forensics needs a mechanism for analysing network traffic remotely in the cloud. This task is complicated by dynamic migration of virtual machines. Second, forensics needs to be targeted at the virtual resources of a specific cloud user. In a multi-tenancy environment, in which multiple cloud clients share physical resources, forensics must not infringe the privacy and security of other users. Third, forensic data should be processed directly in the cloud to avoid a costly transfer of huge amounts of data to external investigators. This paper presents a generic model for network forensics in the cloud and defines an architecture that addresses above challenges. We validate this architecture with a prototype implementation based on the OpenNebula platform and the Xplico analysis tool. Motivation Cloud computing has become highly popular over the past decade. Many organizations nowadays use virtual resources offered by external cloud providers as part of their IT infrastructure. As a second significant trend, IT-based services are getting more and more into the focus of criminal activities [START_REF] Grobauer | Understanding Cloud Computing Vulnerabilities[END_REF][START_REF] Somorovsky | All your clouds are belong to us: security analysis of cloud management interfaces[END_REF]. As a result, technology for computer forensics has become increasingly important. Cloud computing imposes new challenges for this technology. This paper addresses some of these challenges that concern network forensics in an Infrastructure-asa-Service (IaaS) model. Computer forensics is the science of collecting evidence on computer systems regarding incidents such as malicious criminal activities. Noblett et al. define it as "the science of acquiring, preserving, retrieving, and presenting data that has been processed electronically and stored on computer media" [START_REF] Noblett | Recovering and examining computer forensic evidence[END_REF]. Traditionally, forensics has been associated with the collection of evidence for supporting or refuting a hypothesis before the court. In this paper, we use the term in its broader sense, which includes collecting information for purposes such as internal incident investigation and intrusion analysis. Network forensics is the investigation of network traffic of a live system. This means that for network forensics it is necessary to capture and analyse the network traffic of the system under investigation. If an organization runs a service on its own local IT infrastructure, the responsible administrator (or any investigator with physical access to the infrastructure) can easily apply local forensics measures. Today, a great variety of forensics frameworks are available for this purpose [START_REF] Glavach | Cyber Forensics in the Cloud[END_REF]. With cloud computing, a paradigm shift takes place. Virtualization and multi-tenancy create novel challenges. 
If a cloud user wants to investigate an incident on a virtual resource, it is usually not possible for him to use traditional forensics tools that require direct access to networks and physical machines [START_REF] Hoopes | Virtualization for Security[END_REF]. Even if such tools can be used at the physical facilities of the cloud provider, this easily creates privacy and security issues: Investigations typically target a specific system, which might be running in a virtual machine on a physical host shared with other completely unrelated systems. These other systems should not be affected by the investigation. As an additional complication, due to dynamic elasticity and migration of virtual resources, the geographical location of systems under investigation is no longer constant [START_REF] Mather | Cloud Security and Privacy -An Enterprise Perspecive on Risks and Compliance[END_REF]. These issues create a new research field called cloud forensics [START_REF] Biggs | Cloud computing: The impact on digital forensic investigations[END_REF][START_REF] Birk | Technical Challenges of Forensic Investigations in Cloud Computing Environments[END_REF]. The aim of this work is to propose a solution for some of these challenges. Its focus is on network forensics for the Infrastructure-as-a-Service (IaaS) model. Specifically, this paper makes the following contributions: -It defines a generic model for network forensics in IaaS. -It defines an architecture for "Forensics-as-a-Service" in a cloud management infrastructure. This architecture offers an API that authorized subjects can use to remotely control the forensics process at the cloud provider. Both data acquisition and data analysis can be handled directly at the cloud provider. -It describes and evaluates a prototype implementation of this architecture for the OpenNebula cloud management infrastructure. The prototype includes a daemon running on all cloud hosts for collecting network traffic, filtered for a specific system under observation, and the integration of an existing network forensics analysis tool as a cloud-based service. The paper is structured as follows. The next section discusses related work on computer forensics. Section 3 describes the system model and security assumptions in our approach. Section 4 presents our generic forensics model. Section 5 focuses on the forensics architecture and describes details of the prototype for OpenNebula. Section 6 evaluates this prototype, and finally Section 7 presents our conclusions. Computer forensics is a field that has undergone thorough investigation in various directions over the past decades. Within the scope of this paper, we discuss related work in the areas of network forensics and cloud computing forensics. Network Forensics The term network forensics was coined by Ranum [START_REF] Ranum | Network forensics and traffic monitoring[END_REF]. In his paper, the author describes a forensics system called "Network Flight Recorder". This system offers functionality similar to an Intrusion Detection System (IDS), but it makes it possible to analyse data from the past. Some existing approaches cover only parts of the forensics process. For example, practical tools such as WireShark3 and tcpdump4 support only the acquisition of data, without providing mechanisms for further analysis and reporting. These tools assume that you can run them on the host under investigation and have no dedicated support for remote forensics in the cloud. 
Similarly, intrusion detection systems focus on the detection and reporting of attacks as they happen, without providing specific evidence gathering functionality [START_REF] Scarfone | Guide to intrusion detection and prevention systems[END_REF]. In the context of network forensics, intrusion detection systems can be used as a trigger point for forensic investigations, as they create events upon the detection of suspicious behaviour. Several publications in the area of network forensics target the question of how to manage and store forensic data efficiently. For example, Pilli et al. [START_REF] Pilli | Data reduction by identification and correlation of TCP/IP attack attributes for network forensics[END_REF] focus on reducing the file size of captured data. They consider only TCP/IP headers and additionally reduce file size with a filter. Other publications focus on the analysis step of the forensics process. For example, Haggerty et al. [START_REF] Haggerty | FORWEB: file fingerprinting for automated network forensics investigations[END_REF] describe a service that helps to identify malicious digital pictures with a file fingerprinting service. Such research is orthogonal to the contribution of our paper. Some existing approaches address the full forensics process. For example, Almulhem et al. [START_REF] Almulhem | Experience with engineering a network forensics system[END_REF] describe the architecture of a network forensics system that captures, records and analyses network packets. The system combines networkbased modules for identifying and marking suspect packets, host-based capturing modules installed on the hosts under observation, and a network-based logging module for archiving data. A disadvantage of this approach is that the host-based capturing module cannot be trusted after an attack has compromised the host. Shanmugasundaram et al. [START_REF] Shanmugasundaram | ForNet: A distributed forensics network[END_REF] have proposed a similar approach that suffers from the same disadvantage. Wang et al. [START_REF] Wang | Design and implementation of a network forensics system for Linux[END_REF] propose an approach that adds a capturing tool on a host at the moment it should be inspected. Again, this approach suffers from the same integrity weakness, as data is captured by tools running on a possibly compromised system. All of these approaches imply the assumption that there is a single entity that has permission to perform forensic investigations over all data. While this is not a problem if all data is owned by a single entity, this model is not appropriate for a multi-tenancy cloud architecture. Cloud Forensics Cloud forensics is defined as a junction of the research areas of cloud computing and digital forensics [START_REF] Ruan | Cloud Forensics[END_REF]. In the recent years, cloud forensics has been identified as an area that is still faced with important open research problems [START_REF] Beebe | Digital forensic research: The good, the bad and the unaddressed[END_REF][START_REF] Catteddu | Cloud Computing -Benefits, risks and recommendations for information security[END_REF]. Zafarullah et al. [START_REF] Zafarullah | Digital forensics for Eucalyptus[END_REF] proposed an approach to analyse log files in an IaaS environment. They use a centralized approach in which the cloud service provider is responsible for forensic investigations. 
In contrast, in our approach we want to offer forensics services to cloud users to investigate problems and incidents in their own virtual resources. The works of Birk et al. [START_REF] Birk | Technical Challenges of Forensic Investigations in Cloud Computing Environments[END_REF] and Grobauer et al. [START_REF] Grobauer | Towards incident handling in the cloud[END_REF] share same aspects of our approach, as they propose a cloud API for forensics data. However, both publications list this idea together with other high level approaches, without presenting many details. Our forensics model is a more specific approach that also discusses aspects of a real prototype implementation. System Model and Security Assumptions This paper considers an IaaS model in which a cloud provider executes virtual machines on behalf of a cloud client. The client has full control over the software running within the virtual machines. The cloud provider manages the physical machines, and the client has no direct access to them. Multiple clients can share a single physical machine. Client virtual machines can be compromised by malicious attacks. This work proposes an architecture in which cloud clients (or authorized third parties) can autonomously perform forensic investigations targeted at cloud-based virtual machines, based on support provided by the cloud provider. For this purpose, we assume that only the cloud provider and the cloud infrastructure are trusted, whereas client virtual machines are untrusted. Having untrusted client virtual machines means that we make no assumptions on their behaviour. An attacker that compromises a virtual machine can fully modify the virtual machine's behaviour. We therefore do not want to collect forensic data with the help of processes running within the client virtual machine, as an attacker can manipulate these. We make no assumptions on how the attacker gains access to the client virtual machine. For example, the attacker might use access credentials he obtained from the cloud user by some means, or he might be able to completely take over the virtual machine by exploiting some vulnerability of the software running within the virtual machine. Even if the adversary has full control over an arbitrary number of client virtual machines, we assume that the virtualization infrastructure guarantees isolation of virtual machines. This means that the attacker has no access to the virtual machine monitor (VMM) itself or to other VMs running on the same physical host. We assume that the cloud provider and the cloud infrastructure can justifiably be trusted. This means that they always behave according to their specification. Under this model, the possibility that an attacker compromises the cloud infrastructure (i.e., the VMM and the cloud management system) is not considered. This might be regarded as a strong assumption, but it is the usual assumption that is made by practically all cloud users. In addition, several research projects have shown that the trust into the cloud infrastructure can be further justified by technical mechanisms. For example, Garfinkel et al. [START_REF] Garfinkel | Terra: a virtual machine-based platform for trusted computing[END_REF] introduce a trusted virtual machine monitor (TVMM). Santos et al. [START_REF] Santos | Towards trusted cloud computing[END_REF] address the problem of creating a trustworthy cloud infrastructure by giving the cloud user an opportunity to verify the confidentiality and integrity of his own data and VMs. Doelitzscher et al. 
[START_REF] Doelitzscher | Incident Detection for Cloud Environments[END_REF] designed a security audit system for IaaS. This means that trust into a cloud infrastructure can be enforced not only by contracts (SLAs), but also by technical mechanisms. The exact details of such mechanisms are beyond the scope of this paper. Network Forensics Architecture for the Cloud In this chapter we define our model for network forensics in IaaS environments and describe a generic network forensics architecture. Forensics Process Model The model for our forensics process is shown in Fig. 1. Five horizontal layers interact with a management component, which is needed as central point of control. The layers of the model are adopted from the process flow of the NIST forensics model [START_REF] Kent | SP800-86: Guide to Integrating Forensic Techniques into Incident Response[END_REF]. The layers represent independent tasks regarding the investigation of an incident. The tasks are executed in a distributed multi-tenant environment, as described before in Section 3. The first layer is the Data Collection layer. All network data can be captured at this point. The management component interactions with the data collection layer for starting and stopping the capture of network traffic. The data collection also needs to be coordinated with migration of virtual machines in the IaaS environment. For this purpose, the management component coordinates a continuous capture of network traffic for a migrating virtual machine. On top of the data collection layer resides the Separation layer. The task of this layer is to separate data by cloud users. At the output of the separation layer, each data set must contain data of only a single cloud client. Optionally, the separation layer can additionally provide client-specific compression and filtering of network traffic to reduce the size of the collected forensics data. The third layer is called Aggregation layer. It combines data from multiple sources belonging to the same cloud client. Data is collected at multiple physical locations if a cloud client uses multiple virtual machines (e.g., for replication or load balancing), or if a virtual machine migrates between multiple locations. All network data is combined into a single data set at this layer. The next layer is the Analysis layer. This layer receives pre-processed data sets from the aggregation layer and starts the analysis starts of the investigation. The management layer configures the transmission of collected data from aggregation to analysis. Typically, the analysis is run as a service within the same cloud. The top layer is the Reporting layer in which the analysis results and consequences are presented. Network Forensics Architecture Our network forensics architecture directly translates the conceptional blocks of the model into separate services. -The management layer is realized as part of the cloud management infrastructure. This infrastructure typically offers a central point of access for cloud clients. The infrastructure is augmented with an interface for configuring and controlling the forensics process. This component also handles authorization of forensics requests. A cloud client can control forensic mechanisms targeted at his own virtual machines, and this control privilege can also be delegated to a third party. -The data collection layer is realized in the architecture by a process that executes on the local virtual machine monitor of each physical host. 
In a typical virtualization infrastructure, all network traffic is accessible at the VMM level. For example, if using the Xen hypervisor, all network traffic is typically handled by the Dom0 system, and thus all traffic can be captured at this place. The management layer needs to determine (and dynamically update upon reconfigurations) the physical hosts on which virtual machines of a specific client are running, and start/stop the data collection on the corresponding hosts. -The separation layer is responsible for filtering data per cloud user. It is possible to investigate multiple clients on the same physical hosts, but all investigations must be kept independent. This layer separates network traffic into data sets each belonging to a single cloud client, and drops all traffic not pertaining to a client under investigation. To monitor the network data of a specific cloud client, a means of identification of the corresponding traffic is required. In our architecture, we use a lookup table from virtual machine ID to MAC address and use this address for filtering. -The aggregation layer is realized by a simple component that can collect and combine data from the separation layer at multiple locations. -For analysis layer, our architecture potentially supports multiple possibilities. A simple approach is to transfer the output of the aggregation layer to the investigator, who then can locally use any existing analysis tools. In practice, the disadvantage of such approach is the high transfer cost (in terms of time and money). A better option is to run the analysis within the cloud. For this purpose, the cloud user can deploy the analysis software as a service within the cloud. -The task of the reporting layer is to produce reports on the analysis results that are suitable for further distribution. This step is identical to other forensics frameworks. Prototype Implementation We have implemented a prototype according to the architecture described in the previous section. As a basis for this implementation, we have selected and extended existing projects in order to create a prototype system that is usable for network forensics in IaaS clouds. The basic idea of the prototype is the following: A cloud user has an account at a cloud provider to manage VMs. This user should be able to start and to stop the forensics process for his VMs and access analysis results using a web-based system inside the cloud. The user's central point of interaction is the management software for IaaS. The user is able to work with the VMs he owns, to monitor or to change their status and parameters. We extend the management software by adding API commands for controlling network forensics actions. Two established systems for cloud management are Eucalyptus5 and OpenNebula6 [START_REF] Sempolinski | A Comparison and Critique of Eucalyptus, Open-Nebula and Nimbus[END_REF]. Both are open source systems and have an active community. We have chosen OpenNebula for our prototype, as it supports more different VMMs and we want our prototype to be usable independent of specific VMM products. Our goal is not to develop new analysis tools, but instead make existing tools and approaches applicable to cloud-based systems. Because of this, we did not want to implement a custom integration of the analysis part into OpenNebula. Instead, the idea is to create an interface for existing analysis software and run it as a service in the cloud. 
With this approach it is easy to replace it with other software if requirements for the analysis steps change. PyFlag and Xplico are both network forensics analysis frameworks that could be used for analysis within our work. Cohen et al. have presented PyFlag in 2008 [START_REF] Cohen | PyFlag -an advanced network forensic framework[END_REF], but apparently it is not being maintained any more. Xplico has a more active community and, thanks to the helpful author Gianluca Costa, a developer version of Xplico is available. In the prototype, we have implemented the Management layer of our architecture as an extension to the OpenNebula API. The Aggregation and Analysis layers have been implemented by a cloud-based service running Xplico, and currently the Reporting functionality is limited to the visualization of the Xplico output. The Data Collection and Separation layers have been realized by a custom implementation called nfdaemon. The original cloud environment as provided by OpenNebula is shown in Fig. 2. Our modified environment with additional forensics components is shown in Fig. 3. Fig. 3. The user controls forensics processes via our extended OpenNebula management component; the analysis software (Xplico) runs as a service within the cloud and offers the user a direct interface for accessing the analysis results. The cloud user is interacting with OpenNebula, e.g. through the OpenNebula CLI. Actions to start, stop or restart virtual machines are examples of actions that are already implemented by the OpenNebula project. Besides these actions, we added the actions startnf and stopnf. The first one, startnf, triggers the process of starting a network forensics session. The VM-ID from OpenNebula is used as a parameter for the action to identify a particular VM. stopnf does the opposite and stops the process. An nfdaemon (network forensics daemon) needs to be running on each VMM (see Fig. 3). The main tasks of nfdaemon are collecting data, separating data per VM, and transferring it to the aggregation and analysis system. The communication interfaces are shown in Fig. 4. The control FIFO channel is the interface for interaction between nfdaemon and the OpenNebula management system. The nfdaemon waits for input on this channel. Each command sent by the management system triggers actions of nfdaemon. For filtering network data pertaining to a specific virtual machine, a mechanism for translating the VM-ID (which is used for identifying VMs by OpenNebula) to a MAC network address is needed. The information about the corresponding MAC address is obtained from OpenNebula. The Data Collection layer internally uses tcpdump to capture data. The Separation layer is realized on the tcpdump output on the basis of filtering by MAC addresses. The monitored data is periodically written into PCAP files ("Packet CAPture", a widely used format for network traffic dumps). The nfdaemon periodically transfers PCAP files to the aggregation and analysis system. In our prototype, Xplico handles all analysis. This tool already contains an interface for receiving PCAP files over the network (PCAP-over-IP). This standard interface, however, is insufficient for our prototype, as it accepts connections from any source. We wanted to make sure that the only data used for analysis is traffic collected by nfdaemon. Therefore, we implemented a data source authentication mechanism based on TLS, using a private key stored within nfdaemon.
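The capture-and-separate role of nfdaemon can be pictured with the following Python sketch. It illustrates the idea only: the interface name, spool directory and transfer hook are placeholders we introduce, and the real daemon is additionally driven by start/stop commands arriving over its control FIFO.

    # Illustrative per-VM capture loop (assumed paths and names): filter frames
    # by the VM's MAC address with tcpdump, roll the dump periodically, and hand
    # each PCAP file to a transfer routine (e.g. PCAP-over-IP secured with TLS).
    import os
    import subprocess
    import time

    def capture_vm_traffic(mac, interface="br0", out_dir="/var/spool/nfdaemon",
                           interval_s=60, transfer=lambda path: None):
        os.makedirs(out_dir, exist_ok=True)
        seq = 0
        while True:
            pcap = os.path.join(out_dir, "vm-%s-%d.pcap" % (mac.replace(":", ""), seq))
            # capture only traffic to/from this VM (separation per cloud client)
            proc = subprocess.Popen(
                ["tcpdump", "-i", interface, "-w", pcap, "ether", "host", mac])
            time.sleep(interval_s)      # collect one interval's worth of traffic
            proc.terminate()
            proc.wait()
            transfer(pcap)              # ship the file to the aggregation/analysis VM
            seq += 1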
PCAP files from multiple virtual machines of the same cloud user can be aggregated within the same Xplico analysis. Evaluation As a first step towards evaluating our approach, we have performed basic functionality tests using a vulnerable web application running within an OpenNebula-based cloud environment. The web application was vulnerable to remote code execution due to inappropriate input validation in a CGI script. Using our architecture, we could analyse attacks targeted at this web application using Xplico in the same way as we were able to analyse the same attack on a local system. These basic tests convinced us that we can use our approach for forensic investigations in the cloud. For a real-world deployment of our approach, however, we wanted to analyse two additional questions: What is the performance impact of such analysis, and what are the overall benefits and implications of our approach? Performance Evaluation The purpose of the following performance evaluation is to verify whether our modified cloud system suffers from significant performance degradation when the network forensics functionality is activated. The setup for measuring the performance impact uses a running nfdaemon on each VMM in a cloud environment. This process consumes computation and communication resources when capturing, processing, and transferring network traffic. The following experiment quantifies this performance impact by comparing two configurations, with and without nfdaemon. The measurements have been done on hosts with 2.8 GHz Opteron 4280 CPUs and 32 GB RAM, connected via switched Gigabit Ethernet. [START_REF] Mather | Cloud Security and Privacy -An Enterprise Perspective on Risks and Compliance[END_REF] VMs are used in parallel on one VMM for each scenario. Each VM hosts a web service that executes a computationally intensive task: the web service chooses two random values, calculates their greatest common divisor, and returns the result. The calculation is repeated 1000 times for each client request. Clients iteratively call functions of this web service. The clients and the Xplico analysis run on a separate host. The clients and the service calls are realized with the tool ab. For each VM, the web service is called 2,500 times with 25 concurrent requests. The time for each request and a summary of every 2,500 calls are stored in text files. This procedure is repeated 60 times, once every two minutes, so that 150,000 measurement values are collected for each VM. Fig. 5 shows the results of these measurements, with network forensics turned on and off. The results show a measurable but moderate impact on the performance of the VMs. The performance reduction between the two runs ranges from 2% to 17%; on average, the difference between active and inactive forensic data collection is 9%. These measurements show that it is possible to transfer the concept of the network forensics service to a real-life scenario. Discussion of Results The measurements show the overhead for monitoring a single physical host. For a real cloud deployment, the question of scalability to a large number of hosts arises. Furthermore, the business model for the cloud provider and the benefits for the cloud client need to be discussed. Regarding the scalability of our approach, we note that the observed overhead does not depend on the number of physical machines of the cloud provider.
If a cloud client uses a single virtual machine in a large cloud, the previously shown overhead exists only on the physical machine hosting the client VM. Our approach can, however, be used to observe multiple client VMs simultaneously. In this case, a single analysis VM can become a bottleneck. A distributed approach in which multiple VMs are used to analyse forensic data could alleviate this problem, but this is beyond the scope of this paper. For cloud providers, offering Forensics-as-a-Service represents a potential business model. On the one hand, the client uses additional virtual resources for the analysis of forensics data. This utilization can easily be measured by existing accounting mechanisms. On the other hand, the overhead caused by the nfdaemon cannot be accounted for by existing means. Therefore, we propose to estimate this overhead by measuring the amount of forensic data transferred to the aggregation and analysis component. For the cloud user, the benefits of our approach are twofold: Most importantly, the user regains the possibility to perform network-based forensic investigations on his services, even if they are running on a remote cloud. This way, the lack of control caused by cloud computing is reduced. Second, the client needs resources for performing the analysis of forensic data. With our approach, he can dynamically allocate cloud resources for the duration of the analysis. Conclusion The growing use of cloud-based infrastructure creates new challenges for computer forensics, and efficient means for remote forensics in the cloud are needed. The location of a virtual resource is often hard to determine and may even change over time. Furthermore, forensics needs to be limited to the specific system under observation in multi-tenant environments. In this paper, we have analysed these problems with a special focus on network forensics for the Infrastructure-as-a-Service (IaaS) model in the cloud. We have defined a generic model for network forensics in the cloud. Based on this model, we have developed a system architecture for network forensics and a prototype implementation of this architecture. The implementation integrates forensics support into the OpenNebula cloud management infrastructure. Our approach solves three basic issues: -It provides a remote network forensics mechanism to cloud clients. Network data acquisition and processing can be controlled remotely, independent of the physical location of virtual machines. If virtual machines migrate, the data acquisition transparently follows the location of the migrating VM. -It ensures separation of users in a multi-tenant environment. The mechanism of acquiring data is limited to network traffic of the user's virtual machines, without infringing the privacy and security of others. -It avoids the cost of transferring captured network data to external investigation tools by implementing the analysis step as a cloud-internal service under the control of the investigator. An evaluation shows that the additional computing power needed for running our data collection and processing service in parallel to an existing service is at an acceptable level. All in all, with the results of this work an organization will be able to use network forensics for a service hosted on an IaaS cloud infrastructure. This contribution eliminates a disadvantage that cloud-based services have compared to traditional services running locally on the organization's internal IT infrastructure.
Fig. 1. The forensics process for the cloud is modelled by five basic layers, controlled by a central management block.
Fig. 2. In an OpenNebula-based IaaS environment, the user interacts with a central OpenNebula management instance, which in turn controls the execution of user VMs on various physical hosts.
Fig. 4. The nfdaemon receives commands via the command FIFO. It stores its state in a local database, manages tcpdump processes, and interacts with the remote OpenNebula and Xplico components.
Fig. 5. Measurements show that ongoing data acquisition (NF) has only a minor impact on the running system.
Footnote URLs: http://www.wireshark.org http://www.tcpdump.org http://www.eucalyptus.com http://www.opennebula.org http://code.google.com/p/pyflag/ http://www.xplico.org http://httpd.apache.org/docs/2.0/programs/ab.html
32,515
[ "1004036", "1004037" ]
[ "488905", "98761" ]
01489466
en
[ "info" ]
2024/03/04 23:41:48
2013
https://inria.hal.science/hal-01489466/file/978-3-642-38541-4_6_Chapter.pdf
Ana-Maria Oprescu email: [email protected] Spyros Voulgaris email: [email protected] Haralambie Leahu email: [email protected] Strategies for Generating and Evaluating Large-Scale Powerlaw-Distributed P2P Overlays des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Many real-world large-scale networks demonstrate a power-law degree distribution, that is, a very small fraction of the nodes has a high degree (i.e., number of neighbors), while the vast majority of nodes has a small degree. In nature, such networks typically emerge over time, rather than being instantiated on the spot based on a blueprint. Providing researchers from different disciplines with a framework that allows them to control the self-emerging process of power-law networks, could substantially help them in studying and better understanding such networks, as well as deploying them at will to serve new applications (e.g., bio-inspired algorithms for peer-to-peer systems). There are several algorithms to generate power-law networks, however little has been done for a self-emerging method for building such networks [START_REF] Batagelj | Efficient generation of large random networks[END_REF][START_REF] Cohen | Scale-free networks are ultrasmall[END_REF][START_REF] Dangalchev | Generation models for scale-free networks[END_REF][START_REF] Caldarelli | Scale-free networks from varying vertex intrinsic fitness[END_REF]. In this work, we first investigate existing research with an emphasis on the decentralization properties of proposed algorithms. Next, we select one approach that looks promising for straightforward decentralization. We identify several limitations within the existing approach and we present a novel algorithm that has been tailored specifically to the needs of a large P2P network. Starting from a given, static distribution of random values among the P2P network nodes, we control the emerging power-law overlay . We summarize related research conducted on power-law generation in Section 2, where we assess the degree to which such approaches may be decentralized. In Section 3.1 we identify several limitations (both theoretical and empirical) with an existing sequential approach and proceed to present a novel algorithm to alleviate the respective issues. In Section 4 we show how the decentralized algorithm may be implemented in a P2P network and present our evaluation results. We summarize our findings in Section 5. Related work There is a vast literature on properties and characteristics of scale-free and smallworld networks. The research behind such literature is focused on the observation of aforementioned topologies and their behavior (like finding the λ value) rather than construction methodologies. However, there are several important generative mechanisms which produce specific models of power-law networks. It started with the Erdös and Rényi random-graph theory and continued with the Watts and Strogatz model, which was the first to generate a small-world topology from a regular graph, by random rewiring of edges. Drawbacks of this initial model are its degree homogeneity and static number of nodes. These limitations can be addressed by scale-free networks, but the clustering coefficient becomes an issue. In turn, the clustering coefficient can be controlled through the employed generative mechanism. However, generating a random scale-free network having a specific λ value is not trivial. 
Moreover, most existing algorithms to generate scale-free networks are centralized and their decentralization, again, far from trivial. We present several types of generative models. Preferential attachment This model, also known as the "rich-get-richer" model, combines preferential attachment and growth. It assumes a small initial set of m 0 nodes, with m 0 > 2, forming a connected network. The remaining nodes are added one at a time. Each new node is attached to m existing nodes, chosen with probabilities proportional to their degrees. This model is referred to as the Barabási-Albert (BA) model [START_REF] Author | Unknown title[END_REF], though it was proposed by Derek J. de Solla Price [START_REF] De Solla Price | A general theory of bibliometric and other cumulative advantage processes[END_REF] in 1965 and Yule in 1925 [12]. The degree distribution is proven to be P (k) ∼ k -3 . Dorogovtsev and Mendes [START_REF] Dorogovtsev | Structure of growing networks with preferential linking[END_REF] have extended the model to a linear preference function, i.e., instead of a preference function f [START_REF] Dangalchev | Generation models for scale-free networks[END_REF] introduced the two-level network model, by considering the neighbor connectivity as a second "attractiveness" discriminator, f Da (i) = k i + c × j k j , where c ∈ [0, 1]. The global view required at each node attachment renders this algorithm difficult to decentralize. BA (i) = k i they use f DM (i) = k i + D, D ≥ 0. Dangalchev Preferential attachment with accelerated growth This model [START_REF] Dorogovtsev | Effect of the accelerating growth of communications networks on their structure[END_REF] extends the previous model with a separate mechanism to add new links between existing nodes, hence accelerating the growth of the number of links in the network (much like the Internet). This algorithm inherits the difficulties of the basic preferential attachment with respect to decentralization. Non-linear preferential attachment Krapivsky, Redner, and Leyvraz propose a model [START_REF] Krapivsky | Connectivity of growing random networks[END_REF] that produces scale-free networks as long as f KRL (i) ∼ k i ; k → ∞, where f KRL (i) = k γ i . This algorithm inherits the difficulties of the basic preferential attachment with respect to decentralization. Deterministic static models Dangalchev proposed two such networks, the k-control and the k-pyramid, where the latter can be extended to a growth model. Ravasz and Barabási [START_REF] Albert | Emergence of scaling in random networks[END_REF] explored hierarchical (fractal-like) networks in an effort to meet both the power-law degree distribution of scale-free networks and the high clustering coefficient of many real networks. Their model starts with a complete q-node graph which is copied q -1 times (q > 2); the root of the initial graph (selected arbitrarily from the q nodes) is connected with all the leaves at the lowest level; these copy-connect steps can be repeated indefinitely. Such networks have degree distribution P (k) ∼ k ln q ln(q-1) . Cohen and Havlin [START_REF] Cohen | Scale-free networks are ultrasmall[END_REF] use a very simple model which delivers an ultra-small world for λ > 2; it assumes an origin node (the highest degree site) and connects it to next highest degree sites until the expected number of links is reached. Since loops occur only in the last layer, the clustering coefficient is intuitively high for a large number of nodes. 
According to [START_REF] Dorogovtsev | Pseudofractal scale-free web[END_REF], some deterministic scale-free networks have a clustering coefficient distribution C(q) ∼ q -1 , where q is the degree. This implies wellconnected neighborhoods of small degree nodes. This algorithm seems promising with respect to decentralization, except for the initial phase of complete q-node connectedness. Fitness-driven model This was introduced by Caldarelli [START_REF] Caldarelli | Scale-free networks from varying vertex intrinsic fitness[END_REF] and proves how scale-free networks can be constructed using a power-law fitness function and an attaching rule which is a probability function depending on the fitness of both vertices. Moreover, it shows that even non-scale-free fitness distributions can generate scale-free networks. Recently, the same type of model with infinite mean fitness-distribution was treated in [START_REF] Flegel | Canonical fitness model for simple scale-free graphs[END_REF]. This power-law network generative algorithm seems the most promising with respect to decentralization. Decentralizable algorithms for building scale-free networks We are interested in analyzing approaches that are feasible to decentralize. We first look at an existing model, presented by Caldarelli in [START_REF] Caldarelli | Scale-free networks from varying vertex intrinsic fitness[END_REF], for which we introduce an analytical and empirical verification. We then present an improved model to build scale-free networks, which we also analyze and verify empirically. Our model maintains the property of easy decentralization. Caldarelli's fitness-driven model In this model, power-law networks are generated using a "recipe" that consists of two main ingredients: a fitness density, ρ(x), and a vicinity function, f (x i , x j ). The fitness density is used to assign each node a fitness value, while the vicinity function is used to decide, based on the fitness values, whether a link should be placed between two nodes. One instance of this model assumes each node to have a fitness value x i drawn from a Pareto distribution with density ρ(x) ∼ x -γ . For each node i, a link to another node j is drawn with probability f (x i , x j ) = xixj x 2 M , where x M is the maximum fitness value currently in the network. This model looks very appealing for a self-emerging approach to power-law network generation, since it requires very little information to be globally available. Using epidemic dissemination techniques [START_REF] Demers | Epidemic Algorithms for Replicated Database Maintenance[END_REF], the maximum fitness value currently existing in the network, x M , may be easily propagated throughout the network. According to [START_REF] Caldarelli | Scale-free networks from varying vertex intrinsic fitness[END_REF], this approach leads to a network with a power-law degree distribution, that should have the same exponent as the non-truncated Pareto distribution of the fitness values. However, our initial set of experiments show that Caldarelli's approach rarely converges for very large networks. Figure 1 presents the data collected from four different experiments. Each experiment corresponds to a different degree distribution exponent and was repeated for two network sizes: 10,000 nodes and 100,000 nodes. For each experiment we constructed 100 different graphs, each with the same fitness distribution and different random seeds for the neighbor selection. 
We remark that for a desired power-law degree distribution with exponent γ = 3 and larger values of N (100K), the obtained degree distribution exponent does not converge to its desired value. Also, for γ = 4 and a network of 10K nodes, the general approximation function used to determine the degree distribution exponent could not be applied here. We investigated the issue further, by constructing the histogram corresponding to the degree distribution. Figure 5 shows the histograms obtained for each experiment. We remark that the histograms do not coincide with a power-law distribution. A second set of experiments evaluated how well the algorithm controlled the emerging degree distribution exponent, γ. We increased the control γ in steps of 0.1 and ran the algorithm ten times for each value on a network of 100,000 nodes. We collected the estimated value of the emerging degree distribution exponent and the percentage of isolated nodes (i.e., nodes of degree zero). Both types of results are plotted in Figures 3a and3b. We note that in the Caldarelli model a large number of nodes remain isolated. We verified the empirical results by revisiting the assumptions made in [START_REF] Caldarelli | Scale-free networks from varying vertex intrinsic fitness[END_REF]. We localized a possible problem with the way the vicinity function is integrated. Intuitively, the problem is that while the X's (fitnesses) are independent r.v. (by assumption), their maximum (x M ) is dependent on all of them, hence can not be pulled out of the integral. To explain this formally, using the Law of Large V k = Number of nodes of degree k n ≈ 1 n ρ P -1 (k/n) P (P -1 (k/n)) , where P -1 denotes the inverse of P , which is the probability that a node u with fitness x will be linked with any other node v (P (x ) := E[p(X u , X v )|X u = x]). This approximation, in conjunction with the assumption ρ(x) ∼ x -γ would provide the power-law behavior V k ∼ k -γ , as claimed in [START_REF] Caldarelli | Scale-free networks from varying vertex intrinsic fitness[END_REF]. However, this is not the case since x M is a random variable dependent on all fitnesses (thus, also on X u and X v ). Hence P (x) is not linear, but is a rather intricate expression of x and an analytical expression for the inverse of P is infeasible. Even worse, if ρ(x) ∼ x -γ , the squared-maximum x 2 M will grow to infinity at rate n 2/(γ-1) (by Fisher-Tippet-Gnedenko Theorem), so that D = (n -1)E[p(X 1 , X 2 )], will tend to 0 when γ < 3 (the resulting graph will have a very large fraction of isolated nodes) and will grow to infinity for n → ∞, when γ > 3; however, as explained before, this last fact is impossible in a power-law graph in which all nodes are linked with the same probability; see equation (3). Improved model Here we present a novel model for a power-law graph with n nodes. It addresses the limitations found with the Caldarelli model by avoiding certain mathematical pitfalls. Our assumptions differ from the Caldarelli model in that we consider a truncated Pareto distribution, with density function ρ(x) ∼ x -2 , for x ∈ (l, b n ). We emphasize that, unlike Caldarelli, we start with a fixed distribution exponent. Another considerable difference is the truncation and its upper bound b n → ∞. The upper bound will depend on the density ρ(x) and on the desired outcome graph degree distribution exponent, denoted by γ in Caldarelli's model. The global variable x M from Caldarelli's model will be replaced in our model by b n . 
We summarize the mathematical model below: (II) Every pair of nodes (u, v) will be linked with a probability given by p(X u , X v ), where we define p(x 1 , x 2 ) := x 1 x 2 b 2 n η , (1) with η > 0 depending (again) on the desired outcome. For appropriate choices of the upper-bound b n (see details below), performing steps (I) and (II) will result in a power-law graph with index γ := 1 + (1/η), satisfying (for large k ≤ n) V k := Number of nodes of degree k n ≈ γ -1 k γ ; in other words, if a power-law degree-distribution with exponent γ > 1 is desired, then one must choose η = (γ -1) -1 in step (II), while the upper-bound b n must be chosen according to the following rules: (i) For γ ∈ (1, 2) we choose b n := γ-1 2-γ n γ-1 γ , which gives an expected degree D ≈ γ -1 2 -γ 2 γ n 2-γ γ . (ii) For γ = 2 we choose b n := (n/2) log(n) and obtain for the expected degree D ≈ log(n) 2 . (iii) For γ > 2 we choose b n := γ-1 γ-2 n γ-1 2 which yields D ≈ γ -1 γ -2 . In the model described by steps (I) and (II), the probability that a node u, having fitness x, will be linked with any other node v is given by P (x) := E[p(X u , X v )|X u = x] = x η b 2η n ∞ 0 z η ρ(z) dz. (2) The expected degree of a node of fitness x is (n -1)P (x). The (unconditional) probability of having the edge (u, v) is π n := E[p(X u , X v )] = E[P (X u ) ] and the expected degree of a node is D := (n -1)π n . For the choices (i)-(iii), the expected degree of a node of fitness x will be approximately x η , for large enough n. At this point, it should be noted that power-law graphs with index γ > 1 (regardless of how they are generated) in which every two nodes are linked with the same probability π n , enjoy the following property: If γ > 2 then the expected degree D must remain bounded as the number of nodes n grows arbitrarily large. When γ = 2 the expected degree D may grow to infinity with the number of nodes, but no faster than log(n). Finally, when γ ∈ (1, 2) the expected degree D may again grow to infinity with the number of nodes, but no faster than n 2-γ . To justify the above claims, one may express the total expected number N of edges in the graph in two ways: first, since any two nodes are linked with the same probability π n , the expected number of edges E[N ] is given by n(n -1)π n /2. On the other hand, N is half of the sum of all degrees in the graph, hence D = (n -1)π n = 2E[N ] n = n-1 k=1 kE[V k ] ≤ c n-1 k=1 1 k γ-1 , (3) where V k denotes the number of nodes of degree k and c > 0 is some finite constant. Since the r.h.s. in (3) is bounded for γ > 2 and using the estimates n-1 k=1 1 k γ-1 ∼ n 2-γ , γ ∈ (1, 2), log(n), γ = 2, hence our claims are justified. The conclusion is that the power-law structure of a graph, in which every two nodes interact with the same probability, induces an upper-bound on the magnitude of the expected degree of the nodes. Comparing the expected degree estimates in (i)-(iii) with the maximal rates imposed by (3) reveals that our method maximizes the expected degree when γ ≥ 2. We also remark that the graph resulted at step (II) will have a certain fraction of isolated nodes which increases with γ. More precisely, for γ close to 1 this fraction will be very small (close to 0), while for very large γ it will approach 1/e 37%; when γ ∈ (2, 3) this fraction will stay between 14 -22%. 
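As an illustration of steps (I)-(II), the following centralized Python sketch samples truncated-Pareto fitnesses and links node pairs with the probability of Eq. (1). The exponent placement in rules (i)-(iii) is read here as b_n = (((γ-1)/(2-γ))n)^((γ-1)/γ) for γ<2, b_n = sqrt((n/2)ln n) for γ=2 and b_n = (((γ-1)/(γ-2))n)^((γ-1)/2) for γ>2, an interpretation consistent with the expected-degree estimates quoted above; the quadratic pair loop is meant for small n only and is not the decentralized construction of Section 4.

```python
# Centralized, quadratic-time sketch of steps (I)-(II) of the improved model, for small n.
import math
import random

def upper_bound(n, gamma):
    # Truncation bound b_n per the regime of the desired power-law index gamma (see text).
    if gamma < 2:
        return (((gamma - 1) / (2 - gamma)) * n) ** ((gamma - 1) / gamma)
    if gamma == 2:
        return math.sqrt((n / 2) * math.log(n))
    return (((gamma - 1) / (gamma - 2)) * n) ** ((gamma - 1) / 2)

def sample_fitness(b, l=1.0):
    """Inverse-CDF sample from the truncated Pareto density rho(x) ~ x^-2 on (l, b)."""
    u = random.random()
    return 1.0 / (1.0 / l - u * (1.0 / l - 1.0 / b))

def generate_graph(n, gamma, seed=0):
    random.seed(seed)
    eta = 1.0 / (gamma - 1.0)                      # step (II): p = (x_u x_v / b_n^2)^eta
    b = upper_bound(n, gamma)
    x = [sample_fitness(b) for _ in range(n)]      # step (I): fitness assignment
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < (x[u] * x[v] / b**2) ** eta:
                edges.append((u, v))
    return x, edges

if __name__ == "__main__":
    x, edges = generate_graph(n=2000, gamma=2.5)
    deg = [0] * len(x)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Mean degree should approach (gamma-1)/(gamma-2) as n grows.
    print("mean degree:", sum(deg) / len(deg), "isolated fraction:", deg.count(0) / len(deg))
```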
The existence of these isolated nodes in our model is a consequence of the upperbound established by (3) since, in general, by Jensen's Inequality it holds that E[deg(v) = 0] ≈ E[exp(-(n -1)P (X))] ≥ exp[-(n -1)π n ] = exp(-D), so whenever the expected degree D is bounded (recall that this is necessarily the case when γ > 2) the expected fraction of isolated nodes will be strictly positive. Similarly to the experiments conducted on the Caldarelli model, we also performed a set of extensive tests on our novel model. Results from the set of 100 experiments are collected in Figure 2. We remark that our model performs in a more stable fashion with respect to the emerging degree distribution exponent. We also notice that our model provides a better convergence with respect to the size of the network. Next, we analyzed how well our model controlled the emerging degree distribution exponent, γ, by performing the same set of averaging experiments as in Caldarelli's case. All results are collected in Figures 4a and4b. Our model outperforms Caldarelli's model both in terms of control over the emerging degree distribution exponent, γ, and in terms of the number of isolated nodes. Finally, we notice that the theoretically proven discontinuity at γ = 2 is illustrated by the experimental results. In this section, we have presented and experimentally evaluated a novel method for generating connected power-law graphs with any index γ > 1. In our proposed model, we correct the issues in [START_REF] Caldarelli | Scale-free networks from varying vertex intrinsic fitness[END_REF], by considering truncated (bounded) fitness-values and use a deterministic bound b n instead of a random one. While the lower bound (l = 1) is included in the model for technical purposes only, the upper bound b n is crucial and plays the role of a tuning parameter which allows one to obtain the desired power-law index γ as well as the correct behavior for the expected degree. In fact, the upper-bound b n is strongly related to the number of edges in the graph by means of the vicinity function defined in (1); namely, the larger the b n the smaller the number of edges in the graph. In general, increasing the magnitude of b n will damage the power-law behavior, while for γ > 2 decreasing b n will result in an asymptotically empty (still power-law) graph. Therefore, the model is extremely sensitive to the choice of the upper-bound b n when γ > 2. 4 Building Power-law Overlays Algorithm Building power-law overlays in the real world is a nontrivial task. Following the standard methodology, that is, applying the vicinity function on all possible pairs of nodes to decide which edges to place is impractical: It assumes either centralized membership management, or complete membership knowledge by each node. Neither of these scales well with the size of the overlay. Instead, we explicitly designed a solution in which nodes are not required to traverse the whole network to determine their links. They form links by considering a small partial view of the network. The key point, however, in this Figure 6 shows the programming model of our protocol, without including Cyclon. As most gossiping protocols, it is modelled by two threads. An active thread, taking care of all periodic behavior and sending invitations, and a passive thread receiving messages (invitations or responses to invitations) and reacting accordingly. Evaluation We implemented our algorithm in PeerNet, a branch of the popular PeerSim simulator written in Java. 
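Before turning to the evaluation, a round-based, single-process sketch of the gossip skeleton may help fix ideas: a uniform random sample over still-active nodes stands in for Cyclon, invitations are accepted only by nodes that have not yet reached their target degree, and the per-node target uses the estimate deg(x) ≈ x^η from Section 3.2. This is an illustration of the mechanism under those assumptions, not the PeerNet/PeerSim implementation evaluated below.

```python
# Round-based sketch of the invite/accept gossip skeleton with the degree-based
# termination condition; the random sample replaces Cyclon for illustration purposes.
import random

def gossip_overlay(fitnesses, b, eta, view_size=8, rounds=500, seed=0):
    rng = random.Random(seed)
    n = len(fitnesses)
    target = [round(x ** eta) for x in fitnesses]      # expected degree ~ x^eta (termination threshold)
    neighbors = [set() for _ in range(n)]
    active = set(range(n))
    for _ in range(rounds):
        for p in list(active):
            if len(neighbors[p]) >= target[p]:
                active.discard(p)                      # stop gossiping, refuse further invites
                continue
            pool = list(active - {p} - neighbors[p])
            for q in rng.sample(pool, min(view_size, len(pool))):
                v = (fitnesses[p] * fitnesses[q] / b**2) ** eta   # vicinity function of Eq. (1)
                accept = len(neighbors[q]) < target[q]            # passive-side ACCEPT/REJECT check
                if rng.random() < v and accept:                   # INVITE succeeds
                    neighbors[p].add(q)
                    neighbors[q].add(p)
        if not active:
            break
    return neighbors
```

The fitnesses and the bound b can be produced with the generator sketch given earlier, so the two fragments can be combined into a small end-to-end experiment.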
We consider a network consisting of a fixed set of N nodes. We assume that communication is reliable. Links are persistent and bidirectional (i.e., when x establishes a link to y, y gets a message to establish a link to x). A node's active thread operates in a periodic manner, and all nodes' periods are the same, yet they are triggered completely asynchronously with respect to each other. The behavior of the protocol depends on three main parameters. First, the target γ; second, the number of nodes in the network; and third, the random view size, that is, the number of random links a node is handed by Cyclon in each round. Figures 7 and8 show the results of our experiments for 10,000 and 100,000 nodes, respectively. The first row in each figure (i.e., Figure 7(a-c) and Figure 8(a-c)) shows the observed γ of the emerged overlay, as a function of the number of rounds elapsed since the beginning of the experiment, for three sample values of γ, namely, 1.4, 1.8, and 2.6. The four different lines in each plot correspond to four different random view sizes. In the case of 10K nodes, all four lines converge equally fast to the (approximate) target γ. For the larger network of 100K nodes, checking out more random nodes per round provides some advantage with respect to convergence time. Note that each graph shows a different target value of γ and the corresponding approximate value. Our formula for a node's expected degree is derived from the mathematical model presented in Section 3.2. However, it is based on the assumption of a large enough number of nodes and therefore we evaluate the error introduced by this approximation. We construct the histogram of all expected degrees (i.e., the expected degree distribution) and we use it to compute an approximate γ. In each Figure 7a-7c and 8a-8c we compare the target γ, the approximate γ and the γ values of the self-emerging overlays. The second row of the figures (i.e, Figure 7(d-f) and Figure 8(d-f)) shows the percentage of nodes that have not yet established as many links as their expected degree mandates, and are, therefore, still gossiping in search of new connections. We see that, particularly for the 10K network, the most of the nodes meet their termination criterion within the first few hundred rounds, which means they do not spend any network resources thereafter. Our formula for a node's expected degree is derived from the mathematical model presented in Section 3.2. However, it is based on the assumption of a large enough number of nodes and therefore we evaluate the error introduced by this approximation. We construct the histogram of all expected degrees, which corresponds to the expected degree distribution and use it to compute an approximate γ. In each Figure 7a-7c and 8a-8c we compare the target γ, the approximate γ and the γ values of the self-emerging overlays. Most importantly, though, the graphs of the second row show that the vast majority of the nodes reach their exact expected degree, contributing to the excellent γ approximation observed in the first row graphs. Finally, the third row graphs (i.e, Figure 7(g-i) and Figure 8(g-i)) show the number of nodes not contained in the largest cluster. For low values of γ the largest cluster is massive, containing virtually the whole set of nodes. This is expected, as nodes tend to have high degrees. For higher values of γ, though, which experience long tails of nodes with very low degrees, we see that the resulting overlay is split in many disconnected components. This does not mean Fig. 
2 : 2 Fig. 1: Caldarelli's model Fig. 3: Caldarelli's fitness-driven model Fig. 4 : 4 Fig. 4: Our model Fig. 5 : 5 Fig. 5: Caldarelli degree histograms for γ = 4 Fig. 7 : 7 Fig. 7: Statistics collected for different γ values and different random views. Active Thread (on node p) while true do // wait T time units S ← r random peers from Cyclon foreach q in S do v = VicinityFunc(fitness(p), fitness(q)) with prob v Send (q, "INVITE") Fig. 6: The generic gossiping skeleton for building power-law overlays. approach is the termination condition, that is, a criterion that lets a node decide when to stop looking for additional links. Our method exploits the analytic findings of the previous section. In a nutshell, each node periodically picks a few random other nodes, and feeds the two fitness values into the vicinity function to determine whether to set up a link or not. A node performs this repeatedly until it has satisfied its termination condition, that is, it has established a number of links equal to its expected degree, as computed by the respective formula. In more detail, our protocol works as follows. Nodes run an instance of Cyclon [START_REF] Voulgaris | Cyclon: Inexpensive membership management for unstructured p2p overlays[END_REF], a peer sampling service that provides each node with a regularly refreshed list of pointers to random other peers, in a fully decentralized manner and at negligible bandwidth cost. Upon being handed a number of random other peers, a node applies the vicinity function and decides if it wants to set up a link with one or more of them. It sends an Invite message to the respective peers, and awaits their responses. Upon receiving an Invite, a node checks if its degree has already reached its expected degree value. If not, it sends back an Accept message as a response to the invitation, and the two nodes establish a link with each other on behalf of the power-law overlay. When a node's termination condition is met, that is, the number of established links of that node has reached its expected degree, it refrains from further gossiping. That is, it stops considering new neighbors to send Invite messages to, and it responds to other nodes' invitations by a Reject message. Notably, a node also refrains from all Cyclon communication. This is particularly important for letting the network converge fast. By ceasing its Cyclon communication, a node is prompty and conveniently "forgotten" by the Cyclon overlay, letting the latter be populated exclusively by nodes that are still in search of additional links. Thus, Cyclon constitutes a good source of random other peers, as it picks random nodes out of a pool of peers that are willing to form additional links. Even in a network of hundreds of thousands of nodes, when a small nodes are isolated at an individual level (as confirmed by the graphs of the second rows), but that nodes are connected according to their expected degrees in smaller components. Making sure of connecting all these components in a single connected overlay is the subject of future work. Conclusions Self-emerging power-law networks are an important area of research. However, algorithms that generate such topologies in a controlled manner are still scarce. In this work, we investigated existing approaches to sequential power-law graphs generation and selected a model that allowed for straightforward decentralization. We then experimentally identified limitations with the selected model which have been supported by our theoretical findings. 
We presented a novel model, built on a thorough mathematical support, that addressed the lim-itations found with previous models. Under the same experimental settings, our results show that our proposed model significantly outperforms the initial one in different convergence aspects. Next, we implemented a prototype self-emerging power-law network based on our model and gossiping protocols. We show that the theoretical and sequential implementations of the novel model are closely followed in performance by the decentralized prototype. Furthermore, the theoretical bounds are observed throughout an extensive set of experiments. Such a result encourages us to consider the theoretical model already robust with respect to implementation approximations and to continue our research efforts having this model as a foundation. One interesting future research question, identified by our decentralized prototype evaluation, is how to alleviate the the problem of (many) disconnected components.
29,096
[ "1004039", "831879", "1004040" ]
[ "146643", "62433", "62433" ]
00122278
en
[ "phys" ]
2024/03/04 23:41:48
2007
https://hal.science/hal-00122278v4/file/anderson20070523.pdf
Laurent Sanchez-Palencia David Clément Pierre Lugan Philippe Bouyer Georgy V Shlyapnikov A Aspect Anderson Localization of Expanding Bose-Einstein Condensates in Random Potentials Keywords: numbers: 05.30.Jp, 03.75.Kk, 03.75.Nt, 05.60.Gg ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Disorder in quantum systems can have dramatic effects, such as strong Anderson localization (AL) of non-interacting particles in random media [1]. The main paradigm of AL is that the suppression of transport is due to a destructive interference of particles (waves) which multiply scatter from the modulations of a random potential. AL is thus expected to occur when interferences play a central role in the multiple scattering process [START_REF] Van Tiggelen | Wave Diffusion in Complex Media[END_REF]. In three dimensions, this requires the particle wavelength to be larger than the scattering mean free path, l, as pointed out by Ioffe and Regel [3]. One then finds a mobility edge at momentum k m = 1/l, below which AL can appear. In one and two dimensions, all single-particle quantum states are predicted to be localized [4,5,6], although for certain types of disorder one has an effective mobility edge in the Born approximation (see Ref. [7] and below). A crossover to the regime of AL has been observed in low dimensional conductors [START_REF] Imry | Introduction to Mesoscopic Physics[END_REF][START_REF] Gershenson | [END_REF], and recently, evidences of AL have been obtained for light waves in bulk powders [10] and in 2D disordered photonic lattices [11]. The subtle question is whether and how the interaction between particles can cause delocalization and transport, and there is a long-standing discussion of this issue for the case of electrons in solids [12]. Ultracold atomic gases can shed new light on these problems owing to an unprecedented control of interactions, a perfect isolation from a thermal bath, and the possibilities of designing controlled random [13,14,15,16,17] or quasirandom [18] potentials. Of particular interest are the studies of localization in Bose gases [19,20] and the interplay between interactions and disorder in Bose and Fermi gases [21,22]. Localization of expanding Bose-Einstein condensates (BEC) in random potentials has been reported in Refs. [15,16,17]. However, this effect is not related to AL, but rather to the fragmentation of the core of the BEC, and to single reflections from large modulations of the random potential in the tails [15]. Numerical calculations [15,23,24] confirm this scenario for parameters relevant to the experiments of Refs. [15,16,17]. In this Letter, we show that the expansion of a 1D interacting BEC can exhibit AL in a random potential without large or wide modulations. Here, in contrast to the situation in Refs. [15,16,17], the BEC is not significantly affected by a single reflection. For this weak disorder regime we have identified the following localization scenario on the basis of numerical calculations and the toy model described below. At short times, the disorder does not play a significant role, atom-atom interactions drive the expansion of the BEC and determine the long-time momentum distribution, D(k). According to the scaling theory [25], D(k) has a highmomentum cut-off at 1/ξ in , where ξ in = / √ 4mµ and µ are the initial healing length and chemical potential of the BEC, and m is the atom mass. 
When the density is significantly decreased, the expansion is governed by the scattering of almost non-interacting waves from the random potential. Each wave with momentum k undergoes AL on a momentum-dependent length L(k) and the BEC density profile will be determined by the superposition of localized waves. For speckle potentials the Fourier transform of the correlation function vanishes for k > 2/σ R , where σ R is the correlation length of the disorder, and the Born approach yields an effective mobility edge at 1/σ R . Then, if the high-momentum cut-off is provided by the momentum distribution D(k) (for ξ in > σ R ), the BEC is exponentially localized, whereas if the cut-off is provided by the correlation function of the disorder (for ξ in < σ R ) the localization is algebraic. These findings pave the way to observe AL in experiments similar to those of Refs. [15,16,17]. We consider a 1D Bose gas with repulsive short-range interactions, characterized by the 1D coupling constant g and trapped in a harmonic potential V ho (z) = mω 2 z 2 /2. The finite size of the trapped sample provides a low-momentum cut-off for the phase fluctuations, and for weak interactions (n ≫ mg/ 2 , where n is the 1D density), the gas forms a true BEC at low temperatures [26]. We treat the BEC wave function ψ(z, t) using the Gross-Pitaevskii equation (GPE). In the presence of a superimposed random potential V (z), this equation reads: i ∂ t ψ = -2 2m ∂ 2 z + V ho (z) + V (z) + g|ψ| 2 -µ ψ, ( 1 ) where ψ is normalized by dz|ψ| 2 = N , with N being the number of atoms. It can be assumed without loss of gener-ality that the average of V (z) over the disorder, V , vanishes, while the correlation function C(z) = V (z ′ )V (z ′ +z) can be written as C(z) = V 2 R c(z/σ R ) , where the reduced correlation function c(u) has unity height and width. So, V R = V 2 is the standard deviation, and σ R is the correlation length of the disorder. The properties of the correlation function depend on the model of disorder. Although most of our discussion is general, we mainly refer to a 1D speckle random potential [START_REF] Goodman | Statistical Properties of Laser Speckle Patterns[END_REF] similar to the ones used in experiments with cold atoms [13,14,15,16,17]. It is a random potential with a truncated negative exponential single-point distribution [START_REF] Goodman | Statistical Properties of Laser Speckle Patterns[END_REF]: P[V (z)] = exp[-(V (z) + V R )/V R ] V R Θ V (z) V R + 1 , ( 2 ) where Θ is the Heaviside step function, and with a correlation function which can be controlled almost at will [17]. For a speckle potential produced by diffraction through a 1D square aperture [17,[START_REF] Goodman | Statistical Properties of Laser Speckle Patterns[END_REF], we have C(z) = V 2 R c(z/σ R ); c(u) = sin 2 (u)/u 2 . ( 3 ) Thus the Fourier transform of C(z) has a finite support: C(k) = V 2 R σ R c(kσ R ); c(κ) = π/2(1-κ/2)Θ(1-κ/2), (4) so that C(k) = 0 for k > 2/σ R . This is actually a general property of speckle potentials, related to the way they are produced using finite-size diffusive plates [START_REF] Goodman | Statistical Properties of Laser Speckle Patterns[END_REF]. We now consider the expansion of the BEC, using the following toy model. Initially, the BEC is assumed to be at equilibrium in the trapping potential V ho (z) and in the absence of disorder. 
In the Thomas-Fermi regime (TF) where µ ≫ ω, the initial BEC density is an inverted parabola, n(z) = (µ/g)(1 -z 2 /L 2 TF )Θ(1 -|z|/L TF ), with L TF = 2µ/mω 2 being the TF half-length. The expansion is induced by abruptly switching off the confining trap at time t = 0, still in the absence of disorder. Assuming that the condition of weak interactions is preserved during the expansion, we work within the framework of the GPE (1). Repulsive atom-atom interactions drive the short-time (t 1/ω) expansion, while at longer times (t ≫ 1/ω) the interactions are not important and the expansion becomes free. According to the scaling approach [25], the expanding BEC acquires a dynamical phase and the density profile is rescaled, remaining an inverted parabola: ψ(z, t) = ψ[z/b(t), 0]/ b(t) exp {imz 2 ḃ(t)/2 b(t)}, (5) where the scaling parameter b(t) = 1 for t = 0, and b(t) ≃ √ 2ωt for t ≫ 1/ω [15]. We assume that the random potential is abruptly switched on at a time t 0 ≫ 1/ω. Since the atom-atom interactions are no longer important, the BEC represents a superposition of almost independent plane waves: ψ(z, t) = dk √ 2π ψ(k, t) exp(ikz). (6) The momentum distribution D(k) follows from Eq. ( 5). For t ≫ 1/ω, it is stationary and has a high-momentum cut-off at the inverse healing length 1/ξ in : D(k) = | ψ(k, t)| 2 ≃ 3N ξ in 4 (1 -k 2 ξ 2 in )Θ(1 -kξ in ), (7) with the normalization condition +∞ -∞ dkD(k) = N . According to the Anderson theory [1], k-waves will exponentially localize as a result of multiple scattering from the random potential. Thus, components exp(ikz) in Eq. ( 6) will become localized functions φ k (z). At large distances, φ k (z) decays exponentially, so that ln |φ k (z)| ≃ -γ(k)|z|, with γ(k) = 1/L(k) the Lyapunov exponent, and L(k) the localization length. The AL of the BEC occurs when the independent k-waves have localized. Assuming that the phases of the functions φ k (z), which are determined by the local properties of the random potential and by the time t 0 , are random, uncorrelated functions for different momenta, the BEC density is given by n 0 (z) ≡ |ψ(z)| 2 = 2 ∞ 0 dkD(k) |φ k (z)| 2 , (8) where we have taken into account that D(k) = D(-k) and |φ k (z)| 2 = |φ -k (z)| 2 . We now briefly outline the properties of the functions φ k (z) from the theory of localization of single particles. For a weak random potential, using the phase formalism [START_REF] Lifshits | Introduction to the Theory of Disordered Systems[END_REF] the state with momentum k is written in the form: φ k (z) = r(z) sin [θ(z)] ; ∂ z φ k = kr(z) cos [θ(z)] , (9) and the Lyapunov exponent is obtained from the relation γ(k) = -lim |z|→∞ log [r(z)] /|z| . If the disorder is sufficiently weak, then the phase is approximately kz and solving the Schrödinger equation up to first order in |∂ z θ(z)/k -1|, one finds [START_REF] Lifshits | Introduction to the Theory of Disordered Systems[END_REF], γ(k) ≃ ( √ 2π/8σ R )(V R /E) 2 (kσ R ) 2 c(2kσ R ), (10) where E = 2 k 2 /2m. Such a perturbative (Born) approximation assumes the inequality V R σ R ≪ ( 2 k/m)(kσ R ) 1/2 , (11) or equivalently γ(k) ≪ k. Typically, Eq. ( 11) means that the random potential does not comprise large or wide peaks. Deviations from a pure exponential decay of φ k are obtained using diagrammatic methods [START_REF] Gogolin | [END_REF], and one has |φ k (z)| 2 = π 2 γ(k) 2 ∞ 0 du u sinh(πu) × ( 12 ) 1 + u 2 1 + cosh(πu) 2 exp{-2(1 + u 2 )γ(k)|z|}, where γ(k) is given by Eq. (10). Note that at large distances (γ(k)|z| ≫ 1), Eq. 
( 12) reduces to |φ k (z)| 2 ≃ π 7/2 /64 2γ(k)|z| 3/2 exp{-2γ(k)|z|}. The localization effect is closely related to the properties of the correlation function of the disorder. For the 1D speckle potential the correlation function C(k) has a high-momentum cut-off 2/σ R , and from Eqs. ( 4) and ( 10) we find γ(k) = γ 0 (k)(1-kσ R )Θ(1-kσ R ); γ 0 (k) = πm 2 V 2 R σ R 2 4 k 2 . (13) Thus, one has γ(k) > 0 only for kσ R < 1 so that there is a mobility edge at 1/σ R in the Born approximation. Strictly speaking, on the basis of this approach one cannot say that the Lyapunov exponent is exactly zero for k > 1/σ R . However, direct numerical calculations of the Lyapunov exponent show that for k > 1/σ R it is at least two orders of magnitude smaller than γ 0 (1/σ R ) representing a characteristic value of γ(k) for k approaching 1/σ R . For σ R 1µm, achievable for speckle potentials [17] and for V R satisfying Eq. ( 11) with k ∼ 1/σ R , the localization length at k > 1/σ R exceeds 10cm which is much larger than the system size in the studies of quantum gases. Therefore, k = 1/σ R corresponds to an effective mobility edge in the present context. We stress that it is a general feature of optical speckle potentials, owing to the finite support of the Fourier transform of their correlation function. We then use Eqs. ( 7), ( 12) and ( 13) for calculating the density profile of the localized BEC from Eq. ( 8). Since the highmomentum cut-off of D(k) is 1/ξ in , and for the speckle potential the cut-off of γ(k) is 1/σ R , the upper bound of integration in Eq. ( 8) is k c = min{1/ξ in , 1/σ R }. As the density profile n 0 (z) is a sum of functions |φ k (z)| 2 which decay exponentially with a rate 2γ(k), the long-tail behavior of n 0 (z) is mainly determined by the components with the smallest γ(k), i.e. those with k close to k c , and integrating in Eq. ( 8) we limit ourselves to leading order terms in Taylor series for D(k) and γ(k) at k close to k c . For ξ in > σ R , the high-momentum cut-off k c in Eq. ( 8) is set by the momentum distribution D(k) and is equal to 1/ξ in . In this case all functions |φ k (z)| 2 have a finite Lyapunov exponent, γ(k) > γ(1/ξ in ), and the whole BEC wave function is exponentially localized. For the long-tail behavior of n 0 (z), from Eqs. ( 7), ( 8) and ( 12) we obtain: n 0 (z) ∝ |z| -7/2 exp{-2γ(1/ξ in )|z|}; ξ in > σ R . (14) Equation ( 14) assumes the inequality γ(1/ξ in )|z| ≫ 1, or equivalently γ 0 (k c )(1 -σ R /ξ in )|z| ≫ 1. For ξ in < σ R , k c is provided by the Lyapunov exponents of |φ k (z)| 2 so that they do not have a finite lower bound. Then the localization of the BEC becomes algebraic and it is only partial. The part of the BEC wave function, corresponding to the waves with momenta in the range 1/σ R < k < 1/ξ in , continues to expand. Under the condition γ 0 (k c )(1 -ξ 2 in /σ 2 R )|z| ≫ 1 for the asymptotic density distribution of localized particles, Eqs. ( 8) and ( 12) yield: n 0 (z) ∝ |z| -2 ; ξ in < σ R . (15) Far tails of n 0 (z) will be always described by the asymptotic relations (14) or (15), unless ξ in = σ R . In the special case of ξ in = σ R , or for ξ in very close to σ R and at distances where γ 0 (k c )|(1 -ξ 2 in /σ 2 R )z| ≪ 1, still assuming that γ 0 (k c )|z| ≫ 1 we find n 0 (z) ∝ |z| -3 . Since the typical momentum of the expanding BEC is 1/ξ in , according to Eq. ( 11), our approach is valid for V R ≪ µ(ξ in /σ R ) 1/2 . 
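A simple numerical illustration of Eq. (8) under the speckle dispersion of Eq. (13) is sketched below. It keeps only the leading exponential factor exp(-2γ(k)|z|) of Eq. (12), works in units of σ_R, and lumps the physical prefactor of γ_0 into a single free dimensionless parameter A; it is meant to visualize the exponential (ξ_in > σ_R) versus algebraic (ξ_in < σ_R) regimes, not to reproduce the quantitative results of the paper.

```python
# Dimensionless sketch of the localized profile n0(z) of Eq. (8); A is a free
# disorder-strength parameter, not a value taken from the paper.
import numpy as np

def localized_profile(z, xi_over_sigma, A=0.02, nk=2000):
    """n0(z) up to normalization; z in units of sigma_R, k in units of 1/sigma_R."""
    kc = min(1.0, 1.0 / xi_over_sigma)            # effective cut-off k_c = min(1/xi_in, 1/sigma_R)
    k = np.linspace(1e-4, kc, nk)
    D = 1.0 - (k * xi_over_sigma) ** 2            # momentum distribution of Eq. (7), up to a constant
    gamma = A * (1.0 - k) / k**2                  # Lyapunov exponent of Eq. (13), up to the factor A
    z = np.atleast_1d(np.abs(z))[:, None]
    return np.trapz(D * np.exp(-2.0 * gamma * z), k, axis=1)

# Example: exponential tails for xi_in > sigma_R versus algebraic tails for xi_in < sigma_R.
z = np.linspace(0, 400, 200)
n_exp = localized_profile(z, xi_over_sigma=2.0)   # xi_in = 2 sigma_R
n_alg = localized_profile(z, xi_over_sigma=0.5)   # xi_in = 0.5 sigma_R
```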
For a speckle potential, the typical momentum of the waves which become localized is 1/σ R and for ξ in < σ R the restriction is stronger: V R ≪ µ(ξ in /σ R ) 2 . These conditions were not fulfilled, neither in the experiments of Refs. [15,16,17], nor in the numerics of Refs. [15,23,24]. We now present numerical results for the expansion of a 1D interacting BEC in a speckle potential, performed on the basis of Eq. ( 1). The BEC is initially at equilibrium in the combined random plus harmonic potential, and the expansion of the BEC is induced by switching off abruptly the confining potential at time t = 0 as in Refs. [15,16,17,20]. The differences from the model discussed above are that the random potential is already present for the initial stationary condensate and that the interactions are maintained during the whole expansion. This, however, does not significantly change the physical picture. The properties of the initially trapped BEC have been discussed in Ref. [22] for an arbitrary ratio ξ in /σ R . For ξ in ≪ σ R , the BEC follows the modulations of the random potential, while for ξ in σ R the effect of the random potential can be significantly smoothed. In both cases, the weak random potential only slightly modifies the density profile [22]. At the same time, the expansion of the BEC is strongly suppressed compared to the non-disordered case. This is seen from the time evolution of the rms size of the BEC, ∆z = z 2 -z 2 , in the inset of Fig. 1. At large times, the BEC density reaches an almost stationary profile. The numerically obtained density profile in Fig. 1 shows an excellent agreement with a fit of n 0 (z) from Eqs. (7), [START_REF] Imry | Introduction to Mesoscopic Physics[END_REF] and (12), where a multiplying constant was the only fitting parameter. Note that Eq. ( 8) overestimates the density in the center of the localized BEC, where the contribution of waves with very small k is important. This is because Eq. ( 13) overestimates γ(k) in this momentum range, where the criterion ( 11) is not satisfied. ξ in =0.01L TF V R =0.05µ We have also studied the long-tail asymptotic behavior of the numerical data. For ξ in > σ R , we have performed fits of |z| -7/2 e -2γeff|z| to the data. The obtained γ eff are in excellent agreement with γ(1/ξ in ) following from the prediction of Eq. ( 14), as shown in Fig. 2a. For ξ in < σ R , we have fitted |z| -βeff to the data. The results are plotted in Fig. 2b and show that the long-tail behavior of the BEC density is compatible with a power-law decay with β eff ≃ 2, in agreement with the prediction of Eq. (15). In summary, we have shown that in weak disorder the expansion of an initially confined interacting 1D BEC can exhibit Anderson localization. Importantly, the high-momentum cut-off of the Fourier transform of the correlation function for 1D speckle potentials can change localization from exponential to algebraic. Our results draw prospects for the observation of Anderson localization of matter waves in experiments similar to those of Refs. [15,16,17]. For V R = 0.2µ, ξ in = 3σ R /2 and σ R = 0.27µm, we find the localization length L(1/ξ in ) ≃ 460µm. These parameters are in the range of accessibility of current experiments [17]. In addition, the localized density profile can be imaged directly, which allows one to distinguish between exponential and algebraic localization. Finally, we would like to raise an interesting problem for future studies. 
The expanding and then localized BEC is an excited Bose-condensed state as it has been made by switching off the confining trap. Therefore, the remaining small interaction between atoms should cause the depletion of the BEC and the relaxation to a new equilibrium state. The question is how the relaxation process occurs and to which extent it modifies the localized state. Figure 1 : 1 Figure 1: (color online) Density profile of the localized BEC in a speckle potential at t = 150/ω. Shown are the numerical data (black points), the fit of the result from Eqs. (7), (8) and (12) [red solid line], and the fit of the asymptotic formula (14) [blue dotted line]. Inset: Time evolution of the rms size of the BEC. The parameters are VR = 0.1µ, ξin = 0.01LTF, and σR = 0.78ξin. Figure 2 : 2 Figure 2: (color online) a) Lyapunov exponent γeff in units of 1/LTF for the localized BEC in a speckle potential, in the regime ξin > σR. The solid line is γ(1/ξin) from Eq. (13). b) Exponent of the power-law decay of the localized BEC in the regime ξin < σR. The parameters are indicated in the figure. We thank M. Lewenstein, S. Matveenko, P. Chavel, P. Leboeuf and N. Pavloff for useful discussions. This work was supported by the French DGA, IFRAF, Ministère de la Recherche (ACI Nanoscience 201), ANR (grants NTOR-4-42586, NT05-2-42103 and 05-Nano-008-02), and the Euro-pean Union (FINAQS consortium and grants IST-2001-38863 and MRTN-CT-2003-505032), the ESF program QUDEDIS, and the Dutch Foundation FOM. LPTMS is a mixed research unit 8626 of CNRS and University Paris-Sud.
18,965
[ "739093", "3074", "178479", "1269973" ]
[ "388153", "388153", "388153", "388153", "12", "5740", "388153" ]
01401552
en
[ "sdu" ]
2024/03/04 23:41:50
2016
https://theses.hal.science/tel-01401552v2/file/these_archivage_2507122o.pdf
La région arctique s'ouvre peu à peu aux activités humaines, en raison du réchauffement climatique et de la fonte des glaces, dûs en partie à l'effet de polluants à courte durée de vie (aérosols, ozone). Dans le futur, les émissions de ces polluants liées à la navigation et à l'extraction de ressources en Arctique pourraient augmenter, et devenir prépondérantes comparées à la source historique liée au transport de pollution depuis les moyennes latitudes. Dans cette thèse, j'effectue des simulations régionales de la troposphère arctique avec le modèle WRF-Chem, combiné à de nouveaux inventaires des émissions de pollution locales en Arctique (navigation et torches pétrolières). Deux cas d'étude issus de campagnes de mesure par avion sont analysés. Premièrement, j'étudie un évènement de transport d'aérosols depuis l'Europe au printemps 2008, afin d'améliorer les connaissances sur cette source majeure de pollution Arctique. Deuxièmement, je détermine l'impact des émissions de la navigation en Norvège en été 2012, où la navigation Arctique est actuellement la plus intense. J'utilise ces cas d'étude pour valider la pollution modélisée et améliorer WRF-Chem en Arctique. J'effectue avec ce modèle amélioré des simulations des impacts actuels (2012) et futurs (2050) de la navigation et des torches pétrolières en Arctique sur la qualité de l'air et le bilan radiatif. Les résultats indiquent que les torches sont et devraient rester une source majeure d'aérosols de carbone suie réchauffants en Arctique. La navigation est une source de pollution importante en été ; et en 2050, la navigation de diversion à travers l'Arctique pourrait devenir une source majeure de pollution locale. Introduction L'Arctique est la région du monde qui se réchauffe le plus rapidement ; les températures de surface y augmented plus de deux fois plus vite que la moyenne globale (IPCC, 2013b). Ce réchauffement est dû principalement à l'effet des gaz à effet de serre « bien mélangés », comme le CO 2 et le méthane, mais aussi à l'effet d'espèces à plus courtes durées de vie : les aérosols et l'ozone [START_REF] Shindell | Role of tropospheric ozone increases in 20th-century climate change[END_REF]. Les émissions locales de pollution en Arctique sont supposées faibles. Pour cette raison, des études précédentes indiquent que la source principale de pollution à l'ozone et aux aérosols en Arctique au 20 ème siècle est le transport de polluants depuis les moyennes latitudes [START_REF] Barrie | Arctic air pollution: An overview of current knowledge[END_REF], tandis que la cause principale du réchauffement Arctique est le réchauffement des moyennes latitudes suivi du transport de chaleur vers l'Arctique [START_REF] Shindell | Local and remote contributions to Arctic warming[END_REF]. Le réchauffement de l'Arctique et la fonte des glaces qui lui est associée (IPCC, [START_REF] Kirtman | Near-term Climate Change: Projections and Predictability, book section 11[END_REF] pourraient progressivement permettre le développement industriel de cette région, en particulier des activités liées au traffic maritime et à l'extraction de ressources. Ceci pourrait entraîner une croissance importante des émissions locales de polluants à courte durée de vie et de leurs précurseurs en Arctique (IPCC, 2014), alors que dans le reste de la planète ces émissions devraient diminuer (IPCC, 2013a). 
Since aerosols and ozone are very sensitive to local emission sources, the impact of sources related to shipping and resource extraction in the Arctic could become significant compared to remote pollution sources, and become a major cause of future warming in this region. These questions are particularly important for policymakers, who need to know whether reducing emissions of short-lived pollutants and their precursors could help limit Arctic and global warming, and improve air quality [START_REF] Penner | Short-lived uncertainty?[END_REF]. The answer to these questions remains uncertain, however, for two main reasons. First, global atmospheric models reproduce the concentrations of short-lived pollutants in the Arctic relatively poorly, especially those of aerosols [START_REF] Koch | Evaluation of black carbon estimations in global aerosol models[END_REF]. This is probably related to uncertainties in removal by precipitation and clouds [START_REF] Huang | Importance of deposition processes in simulating the seasonality of the Arctic black carbon aerosol[END_REF]. Second, dedicated and accurate emission inventories are needed to model the impact of shipping and resource extraction emissions in the Arctic. Such inventories were recently developed by Peters et al. (2011), for Arctic oil and gas extraction, and by [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF], for Arctic shipping. These inventories have previously been combined with global models to perform the first estimates of the current and future impacts of local pollution sources in the Arctic (e.g. Ødemark et al., 2012; Dalsøren et al., 2013; Browse et al., 2013). However, newer inventories developed more recently by [START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF] and [START_REF] Klimont | Global scenarios of air pollutants and methane[END_REF] indicate that these earlier studies may have underestimated the importance of pollution from local sources. This thesis takes a new approach, combining recent inventories of emissions from the petroleum industry and from shipping in the Arctic region [START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF][START_REF] Klimont | Global scenarios of air pollutants and methane[END_REF] with a regional coupled meteorology-chemistry-aerosol model, WRF-Chem (Weather Research and Forecasting with chemistry, [START_REF] Grell | Fully coupled "online" chemistry within the WRF model[END_REF]; Fast et al., 2006). Simulations are performed with WRF-Chem from the local scale (the scale of pollution plumes) to the regional scale, and the results of these simulations are compared with new datasets from airborne measurement campaigns in the Arctic: POLARCAT-France (Polar Study using Aircraft, Remote Sensing, Surface Measurements and Models, Climate, Chemistry, Aerosols and Transport, [START_REF] Law | Arctic Air Pollution: New Insights from POLARCAT-IPY[END_REF]) in spring 2008, and ACCESS (Arctic Climate Change, Economy, and Society, Roiger et al., 2015) in July 2012. 
The main objectives of this thesis are the following: • Quantify the current and future impacts of local Arctic emissions, in terms of air quality and radiative budget, relative to pollution from long-range transport. • Contribute to improving knowledge of pollution transport from the mid-latitudes to the Arctic, and estimates of the current impacts of local Arctic pollution. • Evaluate model performance and improve the representation of Arctic aerosols and ozone in WRF-Chem. The thesis is organized as follows. Chapter 1 presents the scientific context of this thesis, and describes climate warming and air pollution in the Arctic, as well as the importance of local pollution sources. Chapter 2 is devoted to tropospheric aerosols and ozone, and presents their main sources and sinks and their main processes. Introduction The Arctic is the fastest warming region in the world, with surface temperatures rising more than twice as fast as the global average (IPCC, 2013b). Arctic warming is mostly due to the effect of well-mixed greenhouse gases, such as CO 2 and methane, combined with the effect of shorter-lived species: aerosols and ozone [START_REF] Shindell | Role of tropospheric ozone increases in 20th-century climate change[END_REF]. Studies indicate that, during the 20 th century, aerosol and ozone pollution in the Arctic was mostly due to transport from the mid-latitudes [START_REF] Barrie | Arctic air pollution: An overview of current knowledge[END_REF], while Arctic climate change was mostly due to warming in the mid-latitudes followed by heat transport to the Arctic [START_REF] Shindell | Local and remote contributions to Arctic warming[END_REF]. Future Arctic warming and the associated decline in sea ice (IPCC, [START_REF] Kirtman | Near-term Climate Change: Projections and Predictability, book section 11[END_REF]) will increasingly open the region to human activity, especially shipping and resource extraction. Local Arctic emissions of air pollutants could rise dramatically as a result (IPCC, 2014), whereas global emissions of several short-lived pollutants and their precursors are expected to decrease (IPCC, 2013a). Since aerosols and ozone are very sensitive to local emissions, the impacts of local Arctic emissions could become significant compared to remote sources, and these rising local emissions could become a major driver of Arctic climate change. These questions are especially important for policymakers, who need to know if reducing emissions of short-lived pollutants and their precursors is the right course of action to curb Arctic and global warming, and improve local air quality [START_REF] Penner | Short-lived uncertainty?[END_REF]. Unfortunately, the answer to these questions is currently unclear, for two main reasons. First, global models struggle to represent short-lived pollutants, especially aerosols, in the Arctic [START_REF] Koch | Evaluation of black carbon estimations in global aerosol models[END_REF]. This is likely due to uncertainties in the treatment of aerosol removal by precipitation and clouds [START_REF] Huang | Importance of deposition processes in simulating the seasonality of the Arctic black carbon aerosol[END_REF]. Second, assessing the impact of Arctic shipping and Arctic resource extraction requires accurate emission inventories, which were not available until recently. Such inventories were developed by Peters et al. 
(2011) for Arctic oil and gas extraction and [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF] for Arctic shipping. These emission inventories were combined with global models to perform the first assessments of the current and future impacts of local emissions in the Arctic (e.g. Ødemark et al., 2012; Dalsøren et al., 2013; Browse et al., 2013). However, new inventories developed recently by [START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF] and [START_REF] Klimont | Global scenarios of air pollutants and methane[END_REF] suggest that earlier studies could have been underestimating the magnitude of Arctic pollution from local sources. In this thesis, a new approach is taken, combining the recent Arctic emission inventories for shipping and oil and gas extraction by [START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF] and [START_REF] Klimont | Global scenarios of air pollutants and methane[END_REF] with a regional meteorology-chemistry-aerosol-transport model, WRF-Chem (Weather Research and Forecasting with chemistry, [START_REF] Grell | Fully coupled "online" chemistry within the WRF model[END_REF]; Fast et al., 2006). Simulations are performed from the local (pollution plume) scale to the regional scale, and model results are compared to new airborne measurement datasets from the Arctic. First, a case of aerosol transport from Europe to the Arctic in spring 2008 is analyzed using measurements from the POLARCAT-France aircraft campaign. Second, the impacts of shipping emissions in northern Norway are investigated using measurements from the ACCESS aircraft campaign in July 2012, in order to assess the current impacts of Arctic shipping in terms of air quality and radiative effects, and to evaluate new Arctic shipping emission inventories (Marelle et al., 2016). Third, insights gained from these case studies are used to identify the important processes controlling short-lived pollution in the Arctic, and to improve the model for Arctic studies. The updated version of WRF-Chem is used in Chapter 5 to perform quasi-hemispheric simulations to assess the current (2012) and future (2050) impacts of local emissions from shipping and oil- and gas-related flaring in the Arctic, relative to the impacts of anthropogenic emissions transported from the mid-latitudes and emissions from biomass burning (Marelle et al., in preparation). Chapter 1 Climate change and air pollution in the Arctic Global air pollution and climate change Human activities have an increasing impact on the global environment. Specifically, the combustion of fossil fuels and biomass associated with industrialization significantly alters atmospheric composition, with two main consequences: air pollution and climate change. Air pollution Air pollution is defined as the introduction into the atmosphere of a compound with harmful effects on human health or on the environment. Air quality has long been a problem in populated cities. The deleterious effects of air pollution in Rome were already mentioned by Seneca in 61 AD (Moral Letters to Lucilius, Letter CIV): "As soon as I escaped from the oppressive atmosphere of the city, and from that awful odour of reeking kitchens which, when in use, pour forth a ruinous mess of steam and soot, I perceived at once that my health was mending." Air pollution became a larger concern with the industrial revolution and the development of coal burning for industry, domestic heating, and later combustion engines and power generation. As the harmful effects of air pollution became obvious, countries implemented regulations, such as the 1875 Public Health Act in the UK. Other countries implemented similar rules and stricter controls in the 20 th century. 
In spite of these regulations, outdoor air pollution still leads to 3.3 million premature deaths per year worldwide (Lelieveld et al., 2015), by contributing to the development of respiratory diseases, cardiovascular diseases and cancer [START_REF] Who | Burden of disease from Ambient Air Pollution for[END_REF]. The health impacts of air pollution are mostly due to aerosols and ozone (e.g. Lelieveld et al., 2015). Aerosols, also called particulate matter, are defined as all airborne solid or liquid matter, excluding cloud droplets, ice crystals, and other hydrometeors. Aerosols can be emitted directly into the atmosphere (primary aerosols, such as ash, soot and desert dust), or can be formed in the atmosphere from precursor gases (secondary aerosols, such as sulfate). Ozone (O 3 ) is a trace gas that is naturally abundant in the stratosphere (altitudes ∼ 10 to 50 km), but it is also present in the troposphere (altitudes ∼ 0 to 10 km). Tropospheric ozone, which is considered a pollutant, can be chemically produced from precursor gases such as nitrogen oxides (NO x ) and volatile organic compounds (VOC) in the presence of solar radiation. O 3 and aerosols are presented in more detail in Chapter 2. Aside from its impacts on human health, ozone pollution can harm vegetation [START_REF] Reich | Effects of Low Concentrations of O3 on Net Photosynthesis, Dark Respiration, and Chlorophyll Contents in Aging Hybrid Poplar Leaves[END_REF], and reduce crop production and yields [START_REF] Dingenen | The global impact of ozone on agricultural crop yields under current and future air quality legislation[END_REF]. Aerosol pollution contributes to acid rain [START_REF] Cowling | Acid precipitation in historical perspective[END_REF], damaging soils, terrestrial ecosystems [START_REF] Johnson | Soil changes in forest ecosystems: evidence for and probable causes[END_REF] and aquatic ecosystems [START_REF] Muniz | Freshwater acidification: its effects on species and communities of freshwater microbes, plants and animals[END_REF]. In addition, acid rain contributes to the weathering of stone buildings and corrosion of metal structures [START_REF] Likens | Acid Rain[END_REF]. Aerosols and ozone can be produced by human activity, but also by natural sources, e.g. forest fires, desert dust storms, volcanic eruptions, lightning, or biogenic activity. Figure 1-1a shows the global distribution of aerosol optical depth in August 2014, which can be used as a proxy for aerosol burdens; Figure 1-1b shows the global distribution of total tropospheric ozone during summer, averaged over the period 1979-2000. These maps illustrate that the main regions of high aerosol and ozone pollution are urbanized areas such as Eastern Asia, as well as boreal and tropical forests (where large forest fires occur). Aerosol optical depths are also enhanced above deserts (where dust storms occur). Figure 1-1 also illustrates the inhomogeneous distribution of aerosols and ozone. Their abundances are highest close to emission regions, because of their relatively short lifetime in the troposphere (1 to 10 days for aerosols, Textor et al., 2006; 22 days for O 3 , [START_REF] Stevenson | Multimodel ensemble simulations of present-day and near-future tropospheric ozone[END_REF]). 
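To make the link between these lifetimes and transport distances concrete, the back-of-the-envelope sketch below multiplies the lifetimes quoted above by an assumed mean transport wind speed; the 10 m/s wind speed and the resulting distances are illustrative assumptions only, not values taken from the thesis.

```python
# Illustrative scale estimate (assumption: mean transport wind of ~10 m/s).
# Multiplies the tropospheric lifetimes quoted above by a wind speed to get
# an approximate distance travelled before strong removal.
WIND_SPEED_M_S = 10.0          # assumed typical mid-tropospheric wind speed
SECONDS_PER_DAY = 86400.0

lifetimes_days = {
    "aerosols (short end)": 1.0,
    "aerosols (long end)": 10.0,
    "tropospheric ozone": 22.0,
}

for species, tau in lifetimes_days.items():
    distance_km = WIND_SPEED_M_S * tau * SECONDS_PER_DAY / 1000.0
    print(f"{species}: ~{distance_km:,.0f} km before strong removal")
```

With intercontinental transport taking roughly 5 to 10 days, this simple scaling is consistent with the statement that follows: ozone survives such journeys comfortably, while only the longer-lived part of the aerosol population does.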
Since intercontinental transport times are about 5 to 10 days [START_REF] Wild | Intercontinental transport of ozone and its precursors in a threedimensional global CTM[END_REF], aerosols and ozone can still be transported over relatively long distances. Mitigating aerosol and ozone pollution therefore demands both local and international action. Global climate change Global mean surface temperatures have increased by ∼0.85 ∘ C since the beginning of the industrial era (IPCC, Hartmann et al., 2013). This global warming is also associated with increasing ocean heat content, increasing atmospheric water vapor concentrations, rising sea levels, decreasing snow and sea ice cover, and decreasing glacier and polar ice sheet mass (Figure 1-2; IPCC, [START_REF] Hartmann | Observations: Atmosphere and Surface, book section 2[END_REF]). It is now well established that this global warming is mainly caused by human activity. It is primarily due to the enhanced greenhouse effect from rising greenhouse gas (GHG) concentrations in the atmosphere, combined with the climate effect of increased aerosols (IPCC, [START_REF] Bindoff | Detection and Attribution of Climate Change: from Global to Regional[END_REF]). 1.1.2.1 The effect of greenhouse gases on climate The mean temperature of the Earth is primarily determined by the balance between incoming shortwave (SW) solar radiation and outgoing longwave (LW) terrestrial radiation (Figure 1-3). Schematically, solar radiation warms the Earth when it is absorbed by the Earth's surface, and the surface cools down by reemitting heat to the atmosphere and to space as infrared radiation, approximately following the Stefan-Boltzmann law for blackbody radiation, $E_s = \sigma T_s^4$ (1.1), where $T_s$ (K) is the temperature of the Earth's surface, $E_s$ (W m$^{-2}$) the energy radiated in the infrared by the Earth per unit surface and unit time, and $\sigma$ the Stefan-Boltzmann constant. This terrestrial infrared radiation is absorbed in the atmosphere by certain gases, called GHG, which reemit infrared radiation downward to the surface and upward into space. The solar radiation is often called shortwave radiation (wavelengths ∼0.1 to 4.0 µm), while the terrestrial and atmospheric infrared radiation is called longwave radiation (wavelengths ∼4.0 to 50 µm). In the lower atmosphere (the troposphere), temperature decreases with altitude, which means that greenhouse gases reemit longwave radiation to space at an atmospheric temperature $T_a < T_s$. Because of this, and following the Stefan-Boltzmann law, the amount of longwave radiation $E_a$ lost to space by the atmosphere is lower than $E_s$. Thus, in the presence of greenhouse gases, less energy is lost to space and more is trapped in the surface-troposphere system. This warming effect is called the greenhouse effect. The main greenhouse gases in the atmosphere by abundance are water vapor, carbon dioxide (CO 2 ), methane (CH 4 ), nitrous oxide (N 2 O) and ozone (O 3 ). Since the beginning of the industrial era, the global average tropospheric concentrations of CO 2 , CH 4 , N 2 O and O 3 have risen from 280 ppm, 722 ppb, 270 ppb and 237 ppb, to, respectively, 395 ppm, 1800 ppb, 325 ppb and 337 ppb, due primarily to human activity ([START_REF] Blasing | Recent greenhouse gas concentrations[END_REF]; IPCC, Hartmann et al., 2013). These enhanced greenhouse gas concentrations cause an enhanced greenhouse effect, warming the planet. 
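As a rough illustration of the greenhouse effect just described, the sketch below applies Equation 1.1 to a zero-dimensional energy balance; the solar constant and planetary albedo used here are standard textbook values and are assumptions, not numbers taken from this thesis.

```python
# Zero-dimensional energy-balance sketch of the greenhouse effect (illustrative).
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m-2 K-4
S0 = 1361.0           # solar constant, W m-2 (assumed standard value)
ALBEDO = 0.3          # planetary albedo (assumed standard value)

# Without greenhouse gases, the surface would radiate away exactly the absorbed
# solar flux: sigma * T^4 = S0 * (1 - albedo) / 4 (the factor 4 spreads the
# intercepted solar beam over the full sphere).
absorbed_flux = S0 * (1.0 - ALBEDO) / 4.0             # ~238 W m-2
T_effective = (absorbed_flux / SIGMA) ** 0.25
print(f"Effective emission temperature: {T_effective:.0f} K")   # ~255 K

# The observed global mean surface temperature is ~288 K, so the natural
# greenhouse effect accounts for roughly 33 K of surface warming.
print(f"Implied greenhouse warming: {288.0 - T_effective:.0f} K")
```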
This warming influence can be quantified in terms of radiative forcing (RF, in W m -2 ), defined as the change in net (down minus up) total (SW + LW) irradiance at the tropopause due to a change in the climate system (IPCC, 2013b). For example, the RF due to CO 2 is calculated as $RF_{CO_2} = F_{CO_2} - F_{noCO_2}$ (1.2) $= (F^{\downarrow}_{CO_2} - F^{\uparrow}_{CO_2}) - (F^{\downarrow}_{noCO_2} - F^{\uparrow}_{noCO_2})$ (1.3), where $F$ is the total (SW + LW) radiative flux. Based on this definition, $RF > 0$ means that the substance has a warming effect. In the framework of the IPCC, the radiative forcing of a GHG usually has a more restrictive meaning, and represents the change in irradiance due to changes in GHG concentrations from the preindustrial to the present-day period, once the stratospheric temperatures have adjusted to the change in irradiance: $RF_{CO_2} = F_{present\,CO_2} - F_{preindustrial\,CO_2}$. Figure 1-4 shows the radiative forcing of the main anthropogenic GHGs, as estimated by the IPCC [START_REF] Myhre | Anthropogenic and Natural Radiative Forcing[END_REF]. Figure 1-4 - Radiative forcing of climate by well-mixed greenhouse gases and ozone between 1750 and 2011, adapted from IPCC [START_REF] Myhre | Anthropogenic and Natural Radiative Forcing[END_REF]. For ozone, the figure shows the effective radiative forcing. This figure shows that, in terms of radiative forcing, CO 2 is the main anthropogenic greenhouse gas affecting the troposphere, followed by CH 4 and tropospheric O 3 . For ozone, the effective RF is shown, which is the RF corrected by the efficacy of the O 3 forcing. This efficacy is defined as the ratio of the rapid temperature response to the RF of O 3 divided by the rapid temperature response to the RF of CO 2 ($\lambda_{O_3}/\lambda_{CO_2}$, where $\lambda$ is the climate sensitivity defined in Section 1.1.2.3). CO 2 and CH 4 are often called well-mixed greenhouse gases, because their lifetimes are long compared to the global atmospheric mixing time (∼1 to 3 yr, IPCC, 2013b), while tropospheric O 3 is considered a short-lived (22 days) climate forcer. In addition to its greenhouse effect (LW effect, Section 2.1.7), O 3 can directly absorb solar radiation (SW effect, Section 2.1.7). Both effects are included in Figure 1-4, but the global effect of tropospheric ozone is estimated to be 80 % due to its LW effect (Stevenson et al., 2013). Increasing CO 2 also cools the stratosphere, while decreasing stratospheric ozone causes stratospheric cooling and weak tropospheric cooling; however, these stratospheric processes are outside the scope of this thesis. Aerosol effects on climate Human activities also change the Earth's climate by increasing the global burden of aerosols. Once in the atmosphere, aerosols can influence climate in several ways, presented in Figure 1-5. Figure 1-5 - Main radiative effects of atmospheric aerosols, based on [START_REF] Haywood | Estimates of the direct and indirect radiative forcing due to tropospheric aerosols: A review[END_REF]. First, aerosols can directly absorb solar radiation (warming effect), or scatter it back into space (cooling effect) [START_REF] Ångström | On the atmospheric transmission of sun radiation and on dust in the air[END_REF][START_REF] Haywood | The effect of anthropogenic sulfate and soot aerosol on the clear sky planetary radiation budget[END_REF]. 
Large aerosol particles can also directly absorb terrestrial infrared radiation (greenhouse warming effect) [START_REF] Haywood | Multi-spectral calculations of the direct radiative forcing of tropospheric sulphate and soot aerosols using a column model[END_REF]. These effects are called the direct aerosol radiative effects. Second, these direct effects modify the atmospheric profiles of temperature and relative humidity, which can inhibit or enhance cloud formation [START_REF] Hansen | Radiative forcing and climate response[END_REF]. These effects are called the semi-direct aerosol radiative effects. Third, aerosols, when co-located with clouds, have an impact on cloud formation, cloud optical properties, cloud height and cloud lifetime [START_REF] Twomey | The Influence of Pollution on the Shortwave Albedo of Clouds[END_REF][START_REF] Albrecht | Aerosols, Cloud Microphysics, and Fractional Cloudiness[END_REF]. These effects are called the indirect aerosol radiative effects, or are referred to as the radiative effects of cloud-aerosol interactions. Absorbing aerosols, such as black carbon (BC), also contribute to warming when deposited on snow and ice, by darkening the snow or ice surface and increasing snow-grain size, which increases absorption of solar radiation (snow-albedo effect) [START_REF] Warren | A Model for the Spectral Albedo of Snow. II: Snow Containing Atmospheric Aerosols[END_REF]. More details on these effects are given in Section 2.2.5. The climate effect of all of these aerosol processes can also be quantified in terms of radiative forcing. This is shown in Figure 1-6. Figure 1-6 - Radiative forcing of climate by aerosols between 1750 and 2011, adapted from IPCC [START_REF] Myhre | Anthropogenic and Natural Radiative Forcing[END_REF]. Figure 1-6 shows that the total radiative forcing of aerosols since 1750 is negative. This is mostly due to sulfate aerosols. Globally, aerosols thus have a cooling effect which counteracts part of the warming effect of greenhouse gases. The radiative effect of aerosols is also more uncertain than the radiative effect of greenhouse gases, due to uncertainties in aerosol sources, aging processes, sinks and forcing mechanisms. Climate sensitivity and feedbacks RF values presented in Figure 1-4 and Figure 1-6 represent the radiative imbalance of the troposphere due to anthropogenic influences on the climate system. This imbalance causes a change in global temperature $\Delta T_s$, which is often assumed to depend linearly on RF, $\Delta T_s = \lambda \cdot RF$, where $\lambda$ is the climate sensitivity. If everything other than temperature remained equal in the climate system, $\lambda$ would be approximately equal to $\lambda_0 = 0.3$ K W$^{-1}$ m$^2$ [START_REF] Forster | On aspects of the concept of radiative forcing[END_REF]. However, any rise in temperature leads to further changes, which can amplify or dampen this initial response. For example, rising temperatures lead to declining sea ice and snow cover. This melt uncovers the underlying surface, which has a lower albedo than snow or ice and absorbs more solar radiation. This process, called the surface albedo feedback, amplifies the response to RF. There are other climate feedbacks, such as the water vapor feedback and cloud feedbacks. A feedback that amplifies an initial warming perturbation is called a positive feedback, while a feedback that reduces this initial warming is called a negative feedback. 
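To connect the RF and climate-sensitivity definitions above with numbers, the short sketch below uses the widely used simplified CO 2 forcing fit RF = 5.35 ln(C/C0) W m-2 of Myhre et al. (1998); this fit is an outside assumption not given in the thesis, whereas the concentrations (280 and 395 ppm) and λ0 come from the text above.

```python
import math

# Worked example: CO2 radiative forcing and the no-feedback temperature response.
# The 5.35 W m-2 coefficient is the simplified fit of Myhre et al. (1998),
# an assumption not stated in this thesis.
C0_PPM = 280.0        # preindustrial CO2 (from the text)
C_PPM = 395.0         # present-day CO2 (from the text)
LAMBDA_0 = 0.3        # no-feedback climate sensitivity, K W-1 m2 (from the text)

rf_co2 = 5.35 * math.log(C_PPM / C0_PPM)
print(f"RF(CO2) ~ {rf_co2:.2f} W m-2")     # ~1.8 W m-2, consistent with Figure 1-4

delta_t_no_feedback = LAMBDA_0 * rf_co2
print(f"No-feedback warming: ~{delta_t_no_feedback:.1f} K")
# With the positive feedbacks discussed above, the effective sensitivity is
# roughly three times larger (IPCC best estimate quoted just below),
# amplifying this temperature response accordingly.
```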
The current understanding of the climate system indicates that the sum of feedbacks is positive and that the actual climate sensitivity $\lambda$ is larger than $\lambda_0$ (best estimate, $\lambda = 1.0 \pm 0.5$ K W$^{-1}$ m$^2$; IPCC, [START_REF] Flato | Evaluation of Climate Models, book section 9[END_REF]). 1.2 Arctic climate change: causes and future projections 1.2.1 What is the Arctic? Figure 1-7 - Several definitions of the Arctic boundary: the Arctic Circle (green), the 10 ∘ C summer isotherm (red), the tree line (yellow), the marine salinity boundary (black) and the location of permafrost (gray). Source: The Arctic System project, http://www.arcticsystem.no/en/arctic-inc/headquarters.html. The Arctic region is traditionally defined as the region of the Northern Hemisphere where the 24-hour long polar day and polar night occur. This corresponds to the area north of the Arctic Circle, located at a latitude of 66°33′N. The Arctic can also be defined as the area north of the tree line (the northern limit of tree growth), the area north of the 10 ∘ C summer surface isotherm, or other definitions based on the location of permafrost or on reduced ocean salinity. The Arctic boundaries corresponding to these definitions are shown in Figure 1-7. In atmospheric studies, the Arctic is often more broadly defined as the area from 60 ∘ N to 90 ∘ N. This last definition is used in this thesis. The Arctic is different from other regions because of several characteristics such as a high surface albedo due to sea ice, land ice and snow; long periods of darkness (polar night during winter) and sunlight (polar day during summer); high solar zenith angles; low temperatures; low relative humidities and strong atmospheric stability. The Arctic region is also especially sensitive to climate change (IPCC, 2013b). Current Arctic warming The Arctic surface is warming faster than the rest of the Earth (Figure 1-8(a)), a situation known as Arctic amplification. Arctic amplification was predicted using early climate model simulations [START_REF] Kellogg | Climatic feedback mechanisms involving the polar regions[END_REF][START_REF] Manabe | Transient Response of a Global Ocean-Atmosphere Model to a Doubling of Atmospheric Carbon Dioxide[END_REF], and is mainly due to the positive surface-albedo climate feedback, combined with changes in heat transport to the Arctic, in Arctic clouds and in Arctic water vapor caused by climate change [START_REF] Serreze | Processes and impacts of Arctic amplification: A research synthesis[END_REF]. The enhanced radiative effects of black carbon aerosols in the Arctic are also thought to contribute to Arctic amplification [START_REF] Serreze | Processes and impacts of Arctic amplification: A research synthesis[END_REF]. This warming has important consequences for the Arctic cryosphere. Figure 1-8(b) shows the evolution of September sea ice extent in the Arctic (when the yearly minimum cover is reached) between 1979 and 2015. Since satellite measurements began, September Arctic sea ice cover has declined at a rate of -13.4 % per decade (details in [START_REF] Stroeve | Arctic sea ice decline: Faster than forecast[END_REF]). Arctic warming is also causing 
the melt of the Greenland ice cap (-215 ± 59 Gt yr -1 ; IPCC, [START_REF] Vaughan | Observations: Cryosphere, book section 4[END_REF]), and of Arctic glaciers (-138 ± 33 Gt yr -1 , excluding Greenland; IPCC, [START_REF] Vaughan | Observations: Cryosphere, book section 4[END_REF]), contributing to sea level rise. Snow cover is also declining in the Northern Hemisphere, by 1 % to 2 % per year (IPCC, [START_REF] Vaughan | Observations: Cryosphere, book section 4[END_REF]). Causes of Arctic warming The Arctic is warming due to anthropogenic influences. [START_REF] Shindell | Role of tropospheric ozone increases in 20th-century climate change[END_REF] estimated that, during the 20 th century, surface temperatures in the Arctic increased mostly because of well-mixed greenhouse gases such as CO 2 and CH 4 (+1.65 ∘ C). However, short-lived climate forcers also appear to have played an important role (O 3 , +0.30 ∘ C; aerosols, including sulfate, black carbon, organic carbon, and nitrate in air, BC on snow, and indirect effects, -0.76 ∘ C). [START_REF] Shindell | Local and remote contributions to Arctic warming[END_REF] also found that aerosols and O 3 have a stronger Arctic warming effect per unit forcing than well-mixed GHGs (see also Sections 2.1.7 and 2.2.5). The literature review of [START_REF] Quinn | Short-lived pollutants in the Arctic: their climate impact and possible mitigation strategies[END_REF] indicates that the direct effect of aerosols on Arctic surface temperatures is -0.98 ∘ C, the indirect effect -0.70 ∘ C and the snow albedo effect (BC on snow) +0.043 ∘ C. However, based on a multi-model analysis, AMAP (2015) estimates that the direct effect of aerosols on Arctic surface temperatures is positive, +0.35 ∘ C (+0.40 ∘ C from BC in air, +0.22 ∘ C from BC in snow, and -0.27 ∘ C from OC and SO4). Results from AMAP (2015) also indicate a weaker effect of O 3 than previous studies, +0.12 ∘ C. Arctic warming is due to both local and remote forcings. Local forcings are due to the radiative effect of rising concentrations of greenhouse gases and aerosols within the Arctic, but Arctic warming can also be caused by remote forcings (located outside of the Arctic), which indirectly warm the Arctic through heat transport. [START_REF] Shindell | Local and remote contributions to Arctic warming[END_REF] estimated that most of the Arctic warming from 1880 to 2003 was caused by remote forcings, except during summer when local forcings made at least a similar contribution. However, [START_REF] Shindell | Climate response to regional radiative forcing during the twentieth century[END_REF] also showed that Arctic surface temperature was especially sensitive to local forcings, and dependent on the type of forcing. Future projections Future Arctic warming will depend on current and future action to limit climate change. Multi-model CMIP5 (Coupled Model Intercomparison Project Phase 5) future projections as part of the IPCC's 5th assessment report (IPCC, 2013b) indicate that the Arctic will continue to warm the most, but that the magnitude of this warming will depend strongly on future emission pathways. For the lowest emission scenario used in the framework of CMIP5 (RCP2.6 scenario, Representative Concentration Pathway 2.6 W m -2 ), Arctic temperatures are expected to rise by 2.2 ± 1.7 ∘ C in 2100 compared to 1986-2005. 
In the highest emission scenario (RCP8.5, Representative Concentration Pathway 8.5 W m -2 ), Arctic temperatures could rise by 8.3 ± 1.9 ∘ C (IPCC, [START_REF] Collins | Long-term Climate Change: Projections, Commitments and Irreversibility, book section 12[END_REF]). As a result, in the RCP8.5 scenario most CMIP5 models predict an ice-free Arctic Ocean during summer (less than 1 × 10 6 km 2 sea ice cover) by 2100. However, most models underestimate recent sea ice loss [START_REF] Stroeve | Arctic sea ice decline: Faster than forecast[END_REF], and the models which best reproduce past changes indicate that the Arctic could be seasonally ice-free in summer months before midcentury [START_REF] Wang | A sea ice free summer Arctic within 30 years: An update from CMIP5 models[END_REF]. Future Arctic warming is also expected to accelerate the loss of ice from ice sheets and glaciers, and contribute to sea-level rise. Under a medium-range warming scenario, Nick et al. (2013) predict that sea level will rise by 19 to 30 mm by 2200 due to the Greenland ice sheet alone. Arctic air pollution Arctic Haze The Arctic troposphere was long thought to be extremely clean, until the 1950s, when pilots observed a reduction of visibility in the springtime North American Arctic [START_REF] Greenaway | Experiences with Arctic flying weather[END_REF][START_REF] Mitchell | Visual range in the polar regions with particular reference to the Alaskan Arctic[END_REF]. Further analysis showed that this Arctic Haze, which builds up every winter and spring, was of anthropogenic origin [START_REF] Rahn | The Asian source of Arctic haze bands[END_REF]. It contains enhanced levels of aerosols, mostly composed of sulphate and sea salt, as well as organic matter, nitrate and black carbon [START_REF] Quinn | A 3-year record of simultaneously measured aerosol chemical and optical properties at Barrow, Alaska[END_REF]. Arctic haze also contains elevated levels of several trace gases, such as carbon monoxide (CO), NO x and VOCs [START_REF] Solberg | Carbonyls and nonmethane hydrocarbons at rural European sites from the mediterranean to the arctic[END_REF]. This peak in aerosol concentrations in late winter and early spring can be clearly seen in time series of aerosol measurements at Arctic surface stations. Figure 1-9 shows 1997-2004 and 1981-2003 sulfate and nitrate aerosol observations at Barrow (Alaska, USA) and Alert (Canada) [START_REF] Quinn | Arctic haze: current trends and knowledge gaps[END_REF]. This figure illustrates the strong seasonal variation of surface aerosol concentrations in the Arctic, reaching a maximum every year between January and April. These enhanced background levels at the surface (peaking below 2 km, [START_REF] Quinn | Arctic haze: current trends and knowledge gaps[END_REF]) are called "Arctic Haze". In addition to this background haze, the Arctic troposphere can be polluted by episodic transport events, which bring dense localized pollution plumes into the Arctic. These plumes should not strictly be called "Arctic Haze" but contribute to Arctic pollution (Brock et al., 2011). Arctic air pollution transported from the mid-latitudes In late winter and early spring, Eurasian pollution can be efficiently transported to the Arctic at low altitudes [START_REF] Rahn | Arctic Air Chemistry Proceedings of the Second Symposium Relative importances of North America and Eurasia as sources of arctic aerosol[END_REF], causing Arctic Haze. 
This strong influence of Eurasian emissions is due, in part, to the position of the Arctic front. Air masses traveling from the mid-latitudes to the Arctic usually rise along surfaces of constant potential temperature (isentropic transport). These surfaces form a dome-like barrier called the Arctic front, which isolates the lower Arctic troposphere from the mid-latitudes [START_REF] Klonecki | Seasonal changes in the transport of pollutants into the Arctic troposphere-model study[END_REF][START_REF] Stohl | Characteristics of atmospheric transport into the Arctic troposphere[END_REF]. In addition, these rising air masses are usually associated with precipitation, which removes pollutants from the atmosphere ("wet removal") during transport. In contrast, pollutants emitted north of the Arctic front can be easily transported to the Arctic surface. Figure 1-10 shows the position of the Arctic front during winter and during summer, as well as the main atmospheric pathways from the mid-latitudes to the Arctic. During winter and spring, the Arctic front can extend south down to 40 ∘ N over Europe and Russia due to the extensive snow cover and low temperatures there. Eurasian emissions within the Arctic front can then be transported into the lower Arctic troposphere. In winter and spring, pollution removal processes are also weaker in Eurasia and in the Arctic due to strong atmospheric stability and reduced precipitation [START_REF] Shaw | The Arctic Haze Phenomenon[END_REF][START_REF] Garrett | The role of scavenging in the seasonal transport of black carbon and sulfate to the Arctic[END_REF], causing the buildup of Arctic Haze. During summer, the Arctic front is located further north, and removal processes are stronger, isolating the Arctic atmosphere from pollution in the mid-latitudes. The main source regions of Arctic pollution are presented in Figure 1-11. This figure shows the result of an earlier multi-model analysis [START_REF] Quinn | Short-lived pollutants in the Arctic: their climate impact and possible mitigation strategies[END_REF], estimating the relative contributions of Europe, South Asia, East Asia, and North America to Arctic pollution at the surface and in the upper troposphere (250 hPa). These contributions were estimated for two aerosol components (BC and sulfate) and two trace gases (CO and ozone), by performing simulations with 20 % reductions in anthropogenic emissions of pollution precursors from each source region. (In Figure 1-11, arrow width is proportional to the multi-model mean percentage contribution from each region to the total from these four source regions; from [START_REF] Quinn | Short-lived pollutants in the Arctic: their climate impact and possible mitigation strategies[END_REF].) Figure 1-11 illustrates that, on average over the year, modeled surface aerosol and CO pollution in the Arctic (blue arrows) are mostly influenced by European emissions. However, Asian emissions play a more important role at higher altitudes (see also [START_REF] Fisher | Sources, distribution, and acidity of sulfate-ammonium aerosol in the Arctic in winter-spring[END_REF]; Wang et al., 2014a). Figure 1-11 also indicates that, in these simulations, Arctic ozone is mostly sensitive to North American emissions, but Wespes et al. (2012) estimated that European anthropogenic emissions could have a similar or larger importance. 
Other modeling studies have shown that, aside from anthropogenic emissions, Eurasian biomass burning could be a major source of Arctic pollution [START_REF] Stohl | Characteristics of atmospheric transport into the Arctic troposphere[END_REF][START_REF] Warneke | An important contribution to springtime Arctic aerosol from biomass burning in Russia[END_REF], but the scale of this contribution remains uncertain. Developing local sources of Arctic pollution Aerosols and ozone can be chemically destroyed or deposited during transport. These removal processes (described in Chapter 2) limit the efficiency of long-range transport of pollution from the mid-latitudes. Furthermore, pollution transport from the mid-latitudes occurs along rising isentropes, which tend to bring remote pollution to high altitudes in the Arctic [START_REF] Stohl | Characteristics of atmospheric transport into the Arctic troposphere[END_REF]. Local Arctic emissions are, by definition, directly emitted in the Arctic boundary layer and do not experience aging during transport. For this reason, local sources can influence Arctic surface pollution [START_REF] Sand | Arctic surface temperature change to emissions of black carbon within Arctic or midlatitudes[END_REF] and Arctic pollution burdens (Wang et al., 2014a) with much higher per-emission efficiency than remote sources. However, anthropogenic emissions in the Arctic are thought to be small compared to other regions. There are very few large cities north of the Arctic circle, the most populated being Murmansk in Russia (∼300,000 inhabitants); other large cities include Norilsk in Russia (∼175,000 inhabitants), and Tromsø and Bodø in Norway (∼70,000 and 50,000 inhabitants). There are some industrial sources of pollution north of the Arctic circle, such as mines and metal smelters in Norilsk, Russia (AMAP, 2006), and metal smelters in the Kola Peninsula, Russia (Prank et al., 2010). Oil- and gas-related activities in northern Russia and Norway (AMAP, 2006; Peters et al., 2011) are thought to be an important local source of Arctic pollution (especially for BC), and recent studies (Stohl et al., 2013; [START_REF] Huang | Russian anthropogenic black carbon: Emission reconstruction and Arctic black carbon simulation[END_REF]) indicate that these emissions might be higher than previously thought. Arctic shipping emissions are another noteworthy local source of pollution, emitting NO x , SO 2 (forming sulfate) and BC along with other pollutants. The Arctic Council's AMSA report (Arctic Marine Shipping Assessment, Arctic Council, 2009) found that, in 2004, about 6000 ships operated within the Arctic (latitude > 60 ∘ N). This traffic mostly takes place along the Norwegian Coast, in northwestern Russia, around Iceland, in southwestern Greenland and in the Bering Sea. Arctic shipping is made up of a combination of supply ships for Arctic communities, bulk transport of resources extracted within the Arctic region, fishing ships, passenger ships and cruise ships (Arctic Council, 2009). In addition to this local Arctic shipping, it has long been known that the routes through the Arctic Ocean are the shortest way from northern Europe and northwestern America to East Asia. The route along the Russian Arctic coast (the Northern Sea Route, NSR) is substantially shorter than the corresponding route through the Suez Canal [START_REF] Liu | The potential economic viability of using the Northern Sea Route (NSR) as an alternative route between Asia and Europe[END_REF]. 
The Arctic route from Northeastern America to Eastern Asia (the Northwest Passage, NWP) is 15 to 30 % shorter than the corresponding route through the Panama Canal [START_REF] Somanathan | The Northwest Passage: A simulation[END_REF]. These routes (shown in Figure 1-12) could be used to save trip distance and costs and might already be profitable, but they are not widely used yet due to the presence of sea ice, leading to additional costs, additional risks due to potential ice damage, and reduced vessel speeds [START_REF] Liu | The potential economic viability of using the Northern Sea Route (NSR) as an alternative route between Asia and Europe[END_REF][START_REF] Somanathan | The Northwest Passage: A simulation[END_REF]. The NSR and NWP could become more economically competitive along with Arctic sea ice decline. Models and observations indicate that the number of ice-free days per year along the NSR and NWP increased by 22 and 19 days between 1979-1988 and 1998 [START_REF] Shindell | Local and remote contributions to Arctic warming[END_REF][START_REF] Mokhov | Natural Processes in Polar Regions, Part 2, chapter Assessment of the Northern Sea Route perspectives under climate changes on the basis of simulations with the climate models ensemble[END_REF]. As a result, transit along these routes is increasing: a record number of 71 ships transited through the NSR in 2013 (Northern Sea Route information office, 2013). At the same time, decreasing sea ice extent also contributed to a rise in Arctic cruise tourism [START_REF] Stewart | Sea Ice in Canada's Arctic: Implications for Cruise Tourism[END_REF]. The ice-free shipping season is expected to continue to lengthen due to climate change [START_REF] Prowse | Implications of Climate Change for Economic Development in Northern Canada: Energy, Resource, and Transportation Sectors[END_REF]; Khon et al., 2009). This will allow increased traffic along the NSR and NWP, by opening these routes to ships with no hull ice strengthening (Smith and Stephenson, 2013). As a result, [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF] estimate that Arctic shipping emissions of NO x and BC could increase by a factor of 10 between 2004 and 2050 (high-growth scenario). Increased shipping access in the Arctic is also expected to facilitate resource extraction in this region [START_REF] Prowse | Implications of Climate Change for Economic Development in Northern Canada: Energy, Resource, and Transportation Sectors[END_REF]. The Arctic contains vast resources of minerals [START_REF] Lindholt | The Economy of the North, chapter Arctic natural resources in a global perspective[END_REF], oil, and gas [START_REF] Gautier | Assessment of Undiscovered Oil and Gas in the Arctic[END_REF]. Arctic oil and gas resources are already being exploited, and the Arctic is expected to remain an important producer of oil by 2050, while its relative importance in gas production could decrease due to its high extraction prices ([START_REF] Lindholt | The Arctic: No big bonanza for the global petroleum industry[END_REF]; Peters et al., 2011; projections shown in Figure 1-13). The oil and gas sector is expected to keep contributing to future local pollutant emissions in the Arctic (Peters et al., 2011), although current and future emission inventories from this source remain very uncertain. Local natural sources of aerosols, ozone and of their precursors in the Arctic are not well known. 
They include boreal wildfires; sea salt, dimethylsulfide (DMS, forming sulfate) and organic matter from oceans; mineral dust; ozone transported from the stratosphere; and NO x from soils and snow. Vegetation is sparse in the Arctic, which limits the formation of biogenic VOCs from plants; and the lack of local thunderstorms (e.g., [START_REF] Cecil | Gridded lightning climatology from TRMM-LIS and OTD: Dataset description[END_REF]) prevents NO x formation from lightning. These natural sources are not the focus of this thesis, but are included in the simulations presented in Chapters 4, 5 and 6 when estimates or emission models are available. Scientific challenges in modeling Arctic aerosols and ozone and their impacts In this thesis, aerosol and ozone pollution in the Arctic is studied using regional simulations of the Arctic troposphere, and new global and local emission inventories of Arctic pollution. Model results are used to analyze recent aircraft measurements in the Arctic. This approach (details in Chapter 3) was motivated by results from previous studies, which showed that modeling aerosol and ozone pollution in the Arctic was especially challenging. Modeling aerosol and ozone pollution from long-range transport Models do not represent aerosols well in the Arctic. [START_REF] Quinn | Short-lived pollutants in the Arctic: their climate impact and possible mitigation strategies[END_REF] showed that models often underpredicted sulfate at Arctic surface stations, and greatly underpredicted BC, while several models struggled to reproduce the seasonal cycle of surface aerosol concentrations. [START_REF] Quinn | Short-lived pollutants in the Arctic: their climate impact and possible mitigation strategies[END_REF] attributed this poor agreement to the treatment of aerosol aging and removal within models. [START_REF] Koch | Evaluation of black carbon estimations in global aerosol models[END_REF] and [START_REF] Schwarz | Global-scale black carbon profiles observed in the remote atmosphere and compared to models[END_REF] compared global models to different sets of aircraft observations of BC in the Arctic, and found that models underestimated BC at the surface but overestimated it aloft. A more recent intercomparison by [START_REF] Myhre | Anthropogenic and Natural Radiative Forcing[END_REF] also indicates that most models strongly underestimate surface BC observations in the Arctic, especially during winter and spring. Several studies [START_REF] Huang | Importance of deposition processes in simulating the seasonality of the Arctic black carbon aerosol[END_REF]Liu et al., 2012;[START_REF] Browse | The scavenging processes controlling the seasonal cycle in Arctic sulphate and black carbon aerosol[END_REF][START_REF] Kirtman | Near-term Climate Change: Projections and Predictability, book section 11[END_REF] showed that Arctic BC could be improved by the use of more complex wet removal schemes within models. However, implementing these schemes does not fully resolve model disagreement with measurements ([START_REF] Browse | The scavenging processes controlling the seasonal cycle in Arctic sulphate and black carbon aerosol[END_REF][START_REF] Kirtman | Near-term Climate Change: Projections and Predictability, book section 11[END_REF][START_REF] Wang | Global budget and radiative forcing of black carbon aerosol: Constraints from pole-to-pole (HIPPO) observations across the Pacific[END_REF]; Eckhardt et al., 2015). Models also show biases in simulated Arctic ozone and its precursors. 
These biases are attributed to uncertainties in emissions, errors in stratosphere-troposphere exchange and uncertainties related to the hydroxyl radical OH (these processes are described in Section 2.1). Modeling aerosol and ozone pollution from local Arctic sources Emissions from local Arctic sources are not well quantified, which makes investigating their impacts difficult. There are very few specific emission inventories focused on local Arctic sources, and existing inventories are known to be incomplete. The current and future Arctic shipping inventories of [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF] do not include fishing ships, which constitute a significant proportion of Arctic shipping [START_REF] Mckuin | Emissions and climate forcing from global and Arctic fishing vessels[END_REF]. In addition, these inventories are based on the AMSA shipping dataset, which might underestimate Arctic marine traffic (Arctic Council, 2009). Other shipping inventories ([START_REF] Dalsøren | Environmental impacts of the expected increase in sea transportation, with a particular focus on oil and gas scenarios for Norway and northwest Russia[END_REF]; Dalsøren et al., 2009; Peters et al., 2011) are also known to be biased towards specific ship types. Emissions from the oil and gas sector are also very uncertain, as most oil and gas activity in the Arctic is located in northern Russia, where very few observations are available to validate inventories. Peters et al. (2011) estimated current and future emissions from Arctic oil and gas activities, but recent inventories [START_REF] Huang | Russian anthropogenic black carbon: Emission reconstruction and Arctic black carbon simulation[END_REF][START_REF] Klimont | Global scenarios of air pollutants and methane[END_REF] indicate that this earlier estimate might be too low, especially in terms of BC emissions (Stohl et al., 2013). For these reasons, earlier studies based on the inventories of [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF], [START_REF] Dalsøren | Environmental impacts of the expected increase in sea transportation, with a particular focus on oil and gas scenarios for Norway and northwest Russia[END_REF] and Peters et al. (2011), could be underestimating the impacts of Arctic shipping emissions. Furthermore, models do not represent aerosol pollution in the Arctic well. This could have a strong impact on results when studies report relative impacts of local emissions over this uncertain background. Until recently, there were also no specific field measurements focused on Arctic shipping or Arctic resource extraction that could be used to study the impacts of local Arctic emissions, to assess model performance, and to validate inventories. Such a dataset is now available from the ACCESS aircraft campaign [START_REF] Roiger | Quantifying Emerging Local Anthropogenic Emissions in the Arctic Region: The ACCESS Aircraft Campaign Experiment[END_REF], which took place in northern Norway in summer 2012 and specifically targeted ships and oil and gas platforms in the Norwegian and Barents seas. Chapter 2 Tropospheric ozone and tropospheric aerosols in the Arctic Aerosols and ozone are responsible for most of the health impacts of air pollution. They are also short-lived climate forcers. 
The processes governing aerosol and ozone pollution in the Arctic are complex, but understanding these processes is critical in order to understand the impacts of Arctic aerosols and ozone on air quality and climate. This section presents the main chemical and physical processes governing ozone (Section 2.1) and aerosols (Section 2.2) in the troposphere, as well as their radiative impacts. Tropospheric ozone In this section, we only focus on the mechanisms that are most important for understanding ozone pollution in the Arctic [START_REF] Jacob | Introduction to Atmospheric Chemistry[END_REF]. Introduction: stratospheric and tropospheric ozone Ozone is a trace gas in the atmosphere, with mixing ratios ranging from 1 ppbv to 10 ppmv. The highest ozone concentrations are found in the stratosphere between 20 and 30 km altitude, a region known as the ozone layer (Figure 2-1; Bodeker et al., 2013). In the stratosphere, UV radiation (λ < 240 nm) can dissociate O 2 and form O 3 : O 2 + hν (λ < 240 nm) → O( 3 P) + O( 3 P) (2.1) O( 3 P) + O 2 + M → O 3 + M (2.2) where O( 3 P) is the oxygen atom in its triplet state and M is a third body, usually O 2 or N 2 . Reactions 2.1 and 2.2 produce O 3 , but O 3 can also dissociate in the presence of UV radiation to make atomic oxygen, which can react with O 3 to reform O 2 : O 3 + hν (λ < 320 nm) → O 2 + O( 1 D) (2.3) O( 1 D) + M → O( 3 P) + M (2.4) O 3 + O( 3 P) → 2 O 2 (2.5) where O( 1 D) is the oxygen atom in its singlet state. This cycle (2.1-2.5) is called the Chapman cycle [START_REF] Chapman | A theory of upper stratospheric ozone[END_REF]. A steady-state analysis of this system explains the location of the ozone layer, which is due to the simultaneous abundance of O 2 and UV radiation (reaction 2.1) at altitudes of 20 to 30 km [START_REF] Chapman | A theory of upper stratospheric ozone[END_REF]. At lower altitudes, UV radiation at λ < 240 nm is filtered by the overhead O 2 and O 3 , and reaction 2.1 cannot produce much O 3 . However, O 3 is also found at lower altitudes, in the troposphere (Figure 2-1). Since high-energy UV radiation does not penetrate into the troposphere, it was once thought that tropospheric O 3 was transported from the stratosphere. However, this theory could not explain the observations of enhanced O 3 at low altitudes and in polluted regions. [START_REF] Ripperton | Natural synthesis of ozone in the troposphere[END_REF]; [START_REF] Chameides | A photochemical theory of tropospheric ozone[END_REF]; [START_REF] Crutzen | Photochemical reactions initiated by and influencing ozone in unpolluted tropospheric air[END_REF] later proved that O 3 could also be produced locally in the troposphere from natural and anthropogenic compounds. 2.1.2 Chemical O 3 production in the troposphere from NO x and VOC The main chemical reaction producing O 3 in the troposphere is the photolysis of NO 2 : NO 2 + hν (λ < 424 nm) → NO + O( 3 P) (2.6) O 2 + O( 3 P) + M → O 3 + M (2.7) However, ozone can also react with NO to reform NO 2 : NO + O 3 → NO 2 + O 2 (2.8) Reactions 2.6-2.7 and 2.8 are fast, and correspond to a quick interconversion between NO and NO 2 . For this reason, we can define the NO x chemical family as the sum NO + NO 2 . This interconversion also means that ozone can only be produced in the troposphere from NO 2 if there is another source of NO 2 than reaction 2.8. 
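A useful consequence of reactions 2.6-2.8, not written out explicitly in the text, is the standard photostationary-state (Leighton) relationship sketched below; here j_NO2 denotes the NO 2 photolysis frequency and k_2.8 the rate constant of reaction 2.8.

```latex
% Steady-state (photostationary) sketch for reactions 2.6--2.8.
% j_{NO2} is the NO2 photolysis frequency, k_{2.8} the rate constant of
% reaction 2.8; this is the standard Leighton relationship, given here as
% an illustration rather than an equation from the thesis.
\begin{align*}
  \frac{d[\mathrm{O_3}]}{dt} &\approx j_{\mathrm{NO_2}}\,[\mathrm{NO_2}]
      - k_{2.8}\,[\mathrm{NO}]\,[\mathrm{O_3}] \approx 0\\[4pt]
  \Rightarrow\quad [\mathrm{O_3}] &\approx
      \frac{j_{\mathrm{NO_2}}\,[\mathrm{NO_2}]}{k_{2.8}\,[\mathrm{NO}]}
\end{align*}
```

This makes explicit why reactions 2.6-2.8 alone form a null cycle: net ozone production requires an additional route converting NO to NO 2 without consuming O 3 , which is exactly what the HO 2 and CH 3 O 2 reactions introduced in the following subsection provide.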
Oxidation of CO, CH 4 and other hydrocarbons as ozone sources Earlier studies identified that NO 2 could be produced in the troposphere during the oxidation of CO [START_REF] Levy | Photochemistry of the lower troposphere[END_REF]: CO + OH → CO 2 + H (2.9) H + O 2 + M → HO 2 + M (2.10) HO 2 + NO → NO 2 + OH (2.11) where OH is the hydroxyl radical and HO 2 the hydroperoxyl radical. Similarly to NO x , a HO x family can be defined as OH + HO 2 . OH can be produced in the troposphere by several photochemical reactions, notably: O 3 + hν (λ < 320 nm) → O 2 + O( 1 D) (2.12) O( 1 D) + H 2 O → 2 OH (2.13) Reaction 2.12 can occur from photons at wavelengths 300 nm < λ < 320 nm, which are found in the troposphere (photons at λ < 300 nm are filtered by overhead stratospheric O 3 and O 2 ). From reactions 2.12-2.13, OH production requires both solar radiation and water vapor. NO 2 can also be produced during the oxidation of CH 4 and other hydrocarbons, for example: CH 4 + OH → CH 3 + H 2 O (2.14) CH 3 + O 2 + M → CH 3 O 2 + M (2.15) CH 3 O 2 + NO → CH 3 O + NO 2 (2.16) The main chemical sinks of ozone in the troposphere are the following reactions: O 3 + hν → O 2 + O( 1 D) (2.12) O( 1 D) + H 2 O → 2 OH (2.13) O 3 + OH → O 2 + HO 2 (2.17) O 3 + HO 2 → OH + 2 O 2 (2.18) In addition, since there is an interconversion between O 3 , NO 2 and HO x , loss of HO x and NO 2 results in a net destruction of ozone. HO x are very reactive and several reactions can compete with ozone formation. The reaction of HO 2 with itself is an important sink of HO x : HO 2 + HO 2 → H 2 O 2 + O 2 (2.19) Hydrogen peroxide, H 2 O 2 , is soluble in water and can be removed by precipitation (wet deposition). Likewise, NO 2 can react to form nitric acid, HNO 3 , which can be efficiently removed by rain. During the day, when OH is available, NO 2 + OH + M → HNO 3 + M (2.20) At night, there is less OH and NO x is mostly found as NO 2 because NO 2 cannot be photolyzed. NO 2 can then react with O 3 : NO 2 + O 3 → NO 3 + O 2 (2.21) NO 3 + NO 2 + M → N 2 O 5 + M (2.22) N 2 O 5 + H 2 O (on aerosol) → 2 HNO 3 (2.23) Wet removal of HNO 3 formed by reactions 2.20-2.23 is the main sink of NO x . 2.1.4 Dry deposition of NO x and O 3 NO x and O 3 can also be removed by dry deposition, which constitutes an important sink for O 3 . Dry deposition is the process by which a molecule or a particle is transferred to the surface where it is removed. Dry deposition is especially fast over vegetation, due to uptake in plants during respiration and transpiration [START_REF] Erisman | Parametrization of surface resistance for the quantification of atmospheric deposition of acidifying pollutants and ozone[END_REF]. Dry deposition depends mostly on the strength of the exchanges between the surface and the rest of the atmosphere: to reach the surface, a compound has to be transported by turbulence to within a few centimeters of the surface, and then by molecular diffusion through the laminar boundary layer near the surface [START_REF] Wesely | Parameterization of surface resistances to gaseous dry deposition in regional-scale numerical models[END_REF]. The speed of these processes depends on the properties of the surface and on the state of the atmosphere. 2.1.5 Peroxyacetyl nitrate (PAN) as a NO x reservoir in the troposphere NO x have a relatively short lifetime of 0.5-2 days in the troposphere and therefore cannot be transported over long distances. However, research has shown that NO x could be transported at the hemispheric scale through the formation of a reservoir species, peroxyacetyl nitrate (PAN, CH 3 C(O)OONO 2 ) (e.g. 
[START_REF] Singh | Global distribution of peroxyacetyl nitrate[END_REF]. PAN is formed when peroxyacetyl radicals, produced during the oxidation of carbonyl compounds (e.g. acetaldehyde, CH3CHO), react with NO2, and its main sink is thermal decomposition:

PAN → CH3C(O)OO + NO2   (2.24)

The lifetime of PAN against thermolysis is 1 h at 295 K, and several months at 240 K [START_REF] Jacob | Heterogeneous chemistry and tropospheric ozone[END_REF]. As a result, PAN can be formed during high-altitude pollution transport, and can be decomposed to release NO2 in remote regions when reaching lower altitudes. PAN is thought to be an important source of surface and lower tropospheric ozone in the Arctic during summer (e.g. [START_REF] Mauzerall | Origin of tropospheric ozone at remote high northern latitudes in summer[END_REF].

The global budget of tropospheric ozone

Estimates of the global budget of tropospheric ozone have been compiled from models and observations (Wild, 2007). Chemical production is the largest source of tropospheric ozone, and chemical destruction its largest sink. The net effect of tropospheric chemistry is to produce ozone, but this production and the stratospheric source are balanced by dry deposition (note that the budget is not perfectly balanced because not all studies or models reported every statistic). Models can also be used to estimate the lifetime of ozone in the troposphere. Based on results from 26 models, [START_REF] Stevenson | Multimodel ensemble simulations of present-day and near-future tropospheric ozone[END_REF] report a mean lifetime of 22 days, but this average hides strong variations from a few days to a few months, depending on location, altitude and season.

Radiative effects of tropospheric ozone

Ozone has a strong absorption band in the infrared, near 9.6 µm. This wavelength overlaps with the spectrum of Earth's thermal radiation, and is located in a spectral region where other atmospheric gases do not absorb much (the 8 to 12 µm atmospheric window). This absorption band is responsible for most of the greenhouse effect due to ozone. The greenhouse (LW) effect of a greenhouse gas is stronger when there is a large temperature contrast between the surface and the altitude where the gas absorbs. As a result, the radiative effect of ozone is strongest near the tropopause [START_REF] Lacis | Radiative forcing of climate by changes in the vertical distribution of ozone[END_REF]. Ozone also has an indirect radiative effect by increasing OH, which decreases the methane lifetime. Furthermore, dry deposition of ozone via plant stomatal uptake damages plants, reducing primary productivity, which reduces the land carbon sink and indirectly increases CO2 forcing [START_REF] Sitch | Indirect radiative forcing of climate change through ozone effects on the land-carbon sink[END_REF].

Tropospheric ozone in the Arctic

This section presents Arctic O3 pollution and its origins, as well as Arctic-specific processes influencing O3 pollution and O3 radiative impacts.

Arctic ozone pollution

O3 concentrations measured at Arctic surface sites are usually between 10 and 50 ppbv (AMAP, 2015). Arctic surface O3 is often higher during winter and spring, due to stronger pollution transport from the mid-latitudes and weaker removal processes, but several sites exhibit a winter/spring minimum due to ozone depletion events caused by catalytic cycles involving halogens over snow and ice [START_REF] Bottenheim | Depletion of lower tropospheric ozone during Arctic spring: The Polar Sunrise Experiment[END_REF].
Previous studies do not agree on the exact relative contributions of different remote emission regions, but indicate that European anthropogenic emissions are an important source of ozone pollution year-round in the lower Arctic troposphere. At higher altitudes, emissions from North America and Asia are also thought to make important contributions ([START_REF] Quinn | Short-lived pollutants in the Arctic: their climate impact and possible mitigation strategies[END_REF]; Wespes et al., 2012), as well as boreal and agricultural fires. Ozone formed in polluted regions can be directly transported to the Arctic. During summer, Arctic ozone can also be produced locally from NOx transported as PAN [START_REF] Jacob | Summertime photochemistry of the troposphere at high northern latitudes[END_REF]. Stratosphere-troposphere transport is the main source of O3 in the upper troposphere, and is highest during spring.

Several specific factors influence ozone pollution in the Arctic. The Arctic experiences polar night during winter, and high solar zenith angles year-round, which reduce the amount of UV radiation available for photochemistry. However, the sun does not set during polar day in summer, and snow and ice have a very high UV albedo (∼0.96 for pure snow, [START_REF] Grenfell | Reflection of solar radiation by the Antarctic snow surface at ultraviolet, visible, and near-infrared wavelengths[END_REF]), which nearly doubles the available UV flux. Local emissions of NOx are thought to be low in the Arctic, due to the relative lack of industrial emissions, the lack of lightning activity, and the low soil NOx emissions. Nitrate photochemistry in snow is thought to be an important source of NOx at the Arctic surface [START_REF] Grannas | An overview of snow photochemistry: evidence, mechanisms and impacts[END_REF], but this source is still relatively poorly known and is not included in most atmospheric models. Sources of VOC are also rare in the Arctic, but NOx/VOC ratios are thought to be low [START_REF] Jacob | Summertime photochemistry of the troposphere at high northern latitudes[END_REF], and local O3 production is thus more sensitive to perturbations in NOx than to perturbations in VOC (e.g. …).

In the Arctic, the temperature contrast between the surface and the atmosphere is lower than in other regions, and the greenhouse effect of ozone is reduced. However, the SW radiative effect of ozone is higher in the Arctic due to the high UV albedo of snow and ice, and the longer path lengths for solar radiation. At the global scale, the radiative effect of tropospheric ozone is mostly due to its LW effect, but in the Arctic both effects may be approximately equal [START_REF] Berntsen | Effects of anthropogenic emissions on tropospheric ozone and its radiative forcing[END_REF].

Tropospheric aerosols

Aerosols are defined as all liquid or solid particles suspended in the atmosphere, excluding cloud droplets, ice crystals, and other hydrometeors. Atmospheric aerosols vary strongly in chemical composition, size, and spatial distribution.

Global aerosol sources

The main sources and types of aerosols at the global scale are the following:

• Black carbon is mostly made of elemental carbon atoms in graphite aggregates, but what constitutes BC is not well defined and can include other light-absorbing compounds (Petzold et al., 2013). In the present work, black carbon (BC) designates pure elemental carbon (EC), unless indicated otherwise.
• Organic aerosols are carbon-containing particles of organic origin (other than BC), which can come from anthropogenic or biogenic activity. Primary organic aerosols (POA) are directly emitted (e.g. non-volatile organic compounds, organic debris), while secondary organic aerosols (SOA) are formed from gas-phase reactions involving VOCs. The volatility of organic compounds is a continuum (volatile OC, semi-volatile OC, non-volatile OC), and some compounds can be present either in the gas phase or in the condensed phase, depending on environmental conditions.

• Inorganic soluble aerosols in the atmosphere are secondary aerosols such as sulfate (SO4^2-), nitrate (NO3^-) or ammonium (NH4^+), which are very hygroscopic.

• Other inorganic aerosols are relatively non-reactive and insoluble particles, such as mineral dust, ashes, industrial dust, and metals.

Aerosol composition in the Arctic is variable (Schmale et al., 2011; [START_REF] Frossard | Springtime Arctic haze contributions of submicron organic particles from European and Asian combustion sources[END_REF]; Brock et al., 2011), but measurements (e.g. Figure 2-4) usually indicate that fine aerosols in the Arctic contain mostly sulfate and organic matter, while coarse aerosols can also contain mineral dust and sea salt. The measurements shown in Figure 2-4 (Brock et al., 2011) were made in background (haze) conditions; they do not include dust or sea salt, which represent respectively 6 % and 4 % of fine particles, and 49 % and 23 % of coarse particles (number proportions).

Mixing state

Aerosols are rarely made of a single pure compound, and are often mixed. A simple representation is external mixing, in which each particle is assumed to be made of a single component. Another common model of aerosol mixing is internal mixing, which assumes that aerosols of a given size have the same (mixed) composition. Components of an aerosol can be well-mixed, or can be separated within the particle: core-shell mixing represents aerosols as separated between an internal refractory "core" (often made of BC) and a coating "shell" (secondary components often composed of sulfate, nitrate, ammonium, organic matter and water). The refractory part can also be represented as scattered within the secondary phase, in the form of randomly-distributed spherical inclusions. These different mixing states, or a combination of them, can be found in the atmosphere, but numerical models often make the assumption of a single mixing state. This assumption can have a strong influence on the calculated optical, microphysical and chemical properties of the aerosols ([START_REF] Chýlek | Scattering of electromagnetic waves by composite spherical particles: experiment and effective medium approximations[END_REF]; Wang et al., 2010).

Size distributions

Aerosol diameters range from 2 nm to 100 µm. Although aerosols are often non-spherical, their size can be characterized by an equivalent diameter D (e.g. the aerodynamic diameter, or the Stokes diameter). Aerosol optical, chemical and microphysical properties are very sensitive to particle size ([START_REF] Bohren | Absorption and Scattering of Light by Small Particles[END_REF]; [START_REF] Dusek | Size Matters More Than Chemistry for Cloud-Nucleating Ability of Aerosol Particles[END_REF]). Dust, ashes and sea salt are usually large particles (D > 1 µm), while black carbon, organic aerosols and soluble inorganic aerosols are often fine (D < 1 µm).
An aerosol population in a given volume of air contains particles of different diameters; this population can be described by a number size distribution, a function n(D) = dN/dlogD (cm^-3). Figure 2-6 shows (thin line and circles) the size distribution of aerosols from anthropogenic pollution measured in the Scandinavian Arctic in spring 2008 (Quennehen et al., 2012). The aerosol distribution presents two main modes, with modal diameters of ∼35 nm and ∼100 nm. Each of these modes can be approximated by a lognormal size distribution (2 modes, shown in Figure 2-6):

$$n_i(D) = \frac{dN_i(D)}{dD} = \frac{N_i}{\sqrt{2\pi}\,\ln(\sigma_i)\,D}\,\exp\!\left(-\frac{\ln^2(D/D_i)}{2\ln^2(\sigma_i)}\right) \qquad (2.25)$$

where dN_i is the number of particles in mode i with a diameter between D and D + dD, N_i the total number of particles in mode i, D_i the modal diameter, and σ_i its geometric standard deviation. Other approaches can be used to approximate a size distribution, such as the use of discrete size bins (sectional approach, also shown in Figure 2-6).

Aerosol size distributions in the atmosphere are usually distributed in a limited number of modes (typically 1 to 4). A mode centered between 0 and 10 nm is called a nucleation mode, 10 to 100 nm is the Aitken mode, and 0.1 to 1 µm is the accumulation mode. Larger modes (D > 1 µm) are called coarse modes. These names, and the existence of these modes, are due to the processes governing aerosol formation, growth and removal. For air quality applications, aerosol amounts are often given as Particulate Matter (PM) mass, for example PM10, representing the total mass of aerosols with aerodynamic diameters smaller than 10 µm. PM2.5 (D < 2.5 µm) is also often used.

Aerosol processes: from nucleation to removal

The main aerosol processes, including primary emissions, nucleation, coagulation, condensation, aqueous chemistry, cloud activation and removal, govern the evolution of the aerosol size distribution and composition. For example, sulfate mass can be added to existing particles by aqueous-phase chemistry in cloud droplets, where dissolved SO2 is oxidized, notably by hydrogen peroxide:

SO2 (g) ⇌ SO2·H2O   (2.32)
SO2·H2O → HSO3^- + H^+   (2.33)
HSO3^- + H2O2 (aq) + H^+ → SO4^2- + 2 H^+ + H2O   (2.34)

Condensation and aqueous chemistry conserve aerosol number but increase aerosol mass. Coagulation and condensation are more efficient for small particles (D < 100 nm), and explain the growth of aerosols up to the accumulation mode, hence the name of this mode.

Aerosol activation in clouds

Cloud droplets cannot form on their own from pure water in the atmosphere. This is because the equilibrium water vapor pressure over the curved surface of a small droplet is much greater than over a flat surface (Kelvin effect). For this reason, very small pure-water droplets are unstable even when relative humidity (RH) is much greater than 100 %. In the atmosphere, cloud droplet formation always involves condensation of water over a preexisting aerosol, for two reasons [START_REF] Andreae | Aerosol-cloud-precipitation interactions. Part 1. The nature and sources of cloud-active aerosols[END_REF]. First, the surface of a preexisting aerosol is less curved than the surface of a freshly formed pure-water cluster. Second, if the aerosol is hygroscopic, the equilibrium water vapor pressure over its surface is lowered even more (Raoult effect).

Aerosols are removed from the atmosphere by dry deposition and by wet removal (scavenging by cloud droplets and precipitation); wet removal is the dominant sink for most aerosol types at the global scale (Textor et al., 2006). In the Arctic during spring, [START_REF] Fisher | Sources, distribution, and acidity of sulfate-ammonium aerosol in the Arctic in winter-spring[END_REF] estimate that 85 to 91 % of BC deposition is due to wet removal.

Aerosol optical properties

Aerosols can interact with solar and terrestrial radiation by absorbing and scattering light.
This interaction is responsible for the direct radiative effect of aerosols, which contributes to their impact on climate. These aerosol/radiation interactions can be quantified by calculating the optical properties of the aerosols.

Scattering and absorption by a single particle

The interaction of an aerosol with solar and terrestrial radiation can be described as the sum of a scattering term (describing the change in direction of light due to the presence of the aerosol) and an absorption term (describing the transfer of energy from photons to the particle). This is illustrated in Figure 2-8. If the radiative flux per unit surface reaching a particle is F_0 (W m^-2), the flux F_a (W) absorbed by this particle is

F_a = σ_a F_0,   (2.35)

while the scattered flux is

F_s = σ_s F_0,   (2.36)

where σ_a and σ_s (m^2) are the single-particle absorption cross-section and scattering cross-section, and the sum σ_ext = σ_a + σ_s is called the extinction cross-section. These cross-sections can be calculated using Mie theory [START_REF] Mie | Beiträge zur Optik trüber Medien, speziell kolloidaler Metallösungen[END_REF], knowing the wavelength λ of the incident radiation, the complex refractive index of the aerosol m = n - ik, and the size of the aerosol, given as an adimensional size parameter x = πD/λ (aerosol diameter D).

Scattering phase function

Mie calculations show, in agreement with observations, that scattering by an aerosol is not isotropic. The angular distribution of light scattered by an aerosol can be described by the phase function p(θ) of the particle, representing the intensity F(θ) scattered in direction θ (relative to the incident beam), normalized by the total scattered intensity:

$$p(\theta) = \frac{F(\theta)}{\int F(\theta)\,\frac{d\Omega}{4\pi}} \qquad (2.37)$$

where Ω is the solid angle. Figure 2-9 shows the typical shape of the phase function for small aerosols (x << 1, Rayleigh regime) and for typical aerosols (x ∼ 1, Mie regime). It illustrates that typical aerosols scatter mostly in the forward direction, but that a portion of the radiation is backscattered.

Absorption depends on the imaginary part k of the refractive index and on mixing state assumptions: if the aerosol is not well-mixed (core-shell, random spherical inclusions), calculations of the effective refractive index are more complex [START_REF] Heller | Remarks on Refractive Index Mixture Rules[END_REF] and result in a lower imaginary part (-15 to 30 % for core-shell mixing, [START_REF] Schuster | Inferring black carbon content and specific absorption from Aerosol Robotic Network (AERONET) aerosol retrievals[END_REF]). Scattering also depends on the refractive index of the particle and on mixing state assumptions.

Optical properties of an aerosol population

The absorption and scattering cross-sections calculated by Mie theory for single particles can be integrated over the aerosol number size distribution to obtain the bulk absorption, scattering and extinction coefficients of an aerosol population, α_a, α_s and α_ext = α_a + α_s (m^-1). The backscattered fraction of α_s is noted β. The optical depth of an aerosol layer (altitudes z_1 to z_2, often from the surface to the top of the atmosphere, also noted AOD) is defined as

$$\tau(\lambda) = \int_{z_1}^{z_2} \alpha_{ext}(z', \lambda)\, dz' \qquad (2.42)$$

The single-scattering albedo of the aerosol population is

$$\omega(\lambda) = \frac{\alpha_s(\lambda)}{\alpha_{ext}(\lambda)} \qquad (2.43)$$

The bulk coefficients α_s, α_a and α_ext describe the interaction of an aerosol population with radiation, and can be used to calculate the radiative impacts of atmospheric aerosols.
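As an illustration of how Equations 2.25 and 2.42-2.43 are combined in practice, the Python sketch below integrates assumed single-particle cross-sections over a bimodal lognormal distribution (modal diameters of ∼35 nm and ∼100 nm, as in Figure 2-6) to obtain bulk coefficients, a single-scattering albedo and an optical depth. It is not taken from the thesis: the mode parameters, the constant scattering/absorption efficiencies and the layer depth are illustrative assumptions, and a real calculation would use size-dependent cross-sections from a Mie code.

```python
import numpy as np

# Illustrative sketch (not from the thesis): bulk optical properties of a
# bimodal lognormal aerosol population (Eq. 2.25), integrated as in Eqs. 2.42-2.43.
# All numerical values below are assumptions chosen for illustration only.

modes = [  # (N_i [cm-3], D_i [m], sigma_i) -- assumed values
    (1500.0, 35e-9, 1.6),
    (400.0, 100e-9, 1.5),
]

D = np.logspace(np.log10(5e-9), np.log10(5e-6), 500)   # diameter grid (m)

def dNdD(D, N, Dm, sigma):
    """Lognormal number distribution dN/dD (Eq. 2.25), per m of diameter."""
    return (N / (np.sqrt(2 * np.pi) * np.log(sigma) * D)
            * np.exp(-np.log(D / Dm) ** 2 / (2 * np.log(sigma) ** 2)))

n_D = sum(dNdD(D, *m) for m in modes)         # particles cm-3 m-1

# Crude stand-in for Mie cross-sections: geometric cross-section times
# constant efficiencies (assumption; Mie theory gives size-dependent values).
Q_sca, Q_abs = 1.0, 0.05
sigma_s = Q_sca * np.pi * D**2 / 4.0          # m2 per particle
sigma_a = Q_abs * np.pi * D**2 / 4.0

# Bulk coefficients (convert number concentration from cm-3 to m-3 with 1e6)
alpha_s = np.trapz(sigma_s * n_D, D) * 1e6    # m-1
alpha_a = np.trapz(sigma_a * n_D, D) * 1e6    # m-1
alpha_ext = alpha_s + alpha_a

omega = alpha_s / alpha_ext                   # single-scattering albedo (Eq. 2.43)
tau = alpha_ext * 1000.0                      # optical depth of a 1 km layer (Eq. 2.42)

print(f"alpha_ext = {alpha_ext:.2e} m-1, omega = {omega:.3f}, tau(1 km) = {tau:.3f}")
```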
Aerosol radiative effects

The radiative effects of aerosols are due to their direct interaction with radiation (direct radiative effects, due to scattering and absorption); to their impacts on cloud formation, properties and lifetime (indirect and semi-direct radiative effects); and to their effect on the albedo of snow.

Direct aerosol-radiation interactions: absorption and scattering by an aerosol layer

The interaction between an optically thin aerosol layer (optical depth τ, single-scattering albedo ω, backscatter fraction β, here equal to the upscatter fraction because the solar zenith angle is 0°) and solar radiation (radiative flux F_0) can be modeled as in Figure 2-10. The figure shows the incident flux F_0 reaching the aerosol layer above the surface, the flux absorbed in the layer F_a = F_0 (1 - ω)(1 - e^-τ), the backscattered flux F_bs = F_0 (1 - e^-τ) ω β, the directly transmitted flux F_t = F_0 e^-τ, the forward-scattered flux F_fs = F_0 (1 - β)(1 - e^-τ) ω, and the multiple reflections between the surface and the layer. Writing F_a = a F_0 for the absorbed flux and F_bs = r F_0 for the backscattered flux, with F_fs the forward-scattered flux and F_t the transmitted flux,

a = (1 - ω)(1 - e^-τ)   (2.44)
r = (1 - e^-τ) ω β   (2.45)

The total transmitted flux (F_t + F_fs = t F_0, with t = e^-τ + (1 - β)(1 - e^-τ) ω) can be reflected multiple times between the surface (surface albedo R_s) and the aerosol layer, and the resulting total upward flux leaving the system at TOA is

$$F^{\uparrow}_{aer} = F_0 \left( r + \frac{R_s t^2}{1 - r R_s} \right) \qquad (2.46)$$

Without the aerosol layer, the upward flux at TOA would be F^↑_noaer = F_0 R_s, which means that the radiative effect of the aerosol layer at TOA is

$$\Delta F = F^{\uparrow}_{aer} - F^{\uparrow}_{noaer} = F_0 \left( r + \frac{R_s t^2}{1 - r R_s} - R_s \right) \qquad (2.47)$$

The resulting radiative effects are the following:

• Within the aerosol layer, aerosols cause warming (F_a > 0).

• At the surface, any aerosol layer aloft reduces downward radiation and causes cooling (t F_0 < F_0), but aerosols at the surface can cause warming (F_a > 0).

• At TOA (radiative effect ΔF on the whole surface-atmosphere system): aerosols can cause cooling or warming depending on ω, β and R_s.

This "direct radiative effect" of aerosols is negative at the global scale, but it depends strongly on the surface albedo R_s [START_REF] Haywood | The effect of anthropogenic sulfate and soot aerosol on the clear sky planetary radiation budget[END_REF]. Over high-albedo surfaces in the Arctic, aerosol layers have a stronger warming effect and a weaker cooling effect, and even particles with weak absorbing properties cause a net warming at TOA [START_REF] Pueschel | Physical and radiative properties of Arctic atmospheric aerosols[END_REF]. For the same reason, weakly absorbing aerosol layers located above high-albedo clouds can also cause warming at TOA. The Arctic is very often covered by thick, low clouds [START_REF] Cesana | Ubiquitous low-level liquidcontaining Arctic clouds: New observations and climate model constraints from CALIPSO-GOCCP[END_REF]. For this reason, the direct effect of aerosols can either be increased due to the high cloud albedo if the aerosol layer is located above clouds, or decreased due to the reduced downwelling solar radiation (reduced F_0) if the aerosol layer is located below clouds. Absorbing aerosols such as BC, when present aloft, reduce the amount of SW radiation reaching the surface while having a warming effect within the layer. Since the Arctic troposphere is generally stably stratified, BC present in the planetary boundary layer can heat the surface, but BC aloft will usually cool the surface [START_REF] Flanner | Arctic climate sensitivity to local black carbon[END_REF].
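The single-layer model of Equations 2.44-2.47 is straightforward to evaluate numerically. The sketch below transcribes these equations in Python and compares ΔF over a dark (ocean-like) and a bright (snow-like) surface; the aerosol properties and albedo values are illustrative assumptions, not results from the thesis.

```python
import numpy as np

def toa_direct_effect(F0, tau, omega, beta, Rs):
    """Change in upward SW flux at TOA due to a thin aerosol layer (Eqs. 2.44-2.47).

    Positive values mean more SW leaving the Earth system (a cooling tendency),
    negative values mean less (a warming tendency).
    """
    a = (1.0 - omega) * (1.0 - np.exp(-tau))          # absorbed fraction (2.44), layer heating
    r = (1.0 - np.exp(-tau)) * omega * beta           # backscattered fraction (2.45)
    t = np.exp(-tau) + (1.0 - beta) * (1.0 - np.exp(-tau)) * omega  # transmitted fraction
    F_up_aer = F0 * (r + Rs * t**2 / (1.0 - r * Rs))  # (2.46)
    F_up_noaer = F0 * Rs
    return F_up_aer - F_up_noaer                      # Delta F (2.47); 'a' kept for reference

# Illustrative values (assumptions): moderately absorbing haze-like layer
F0, tau, omega, beta = 400.0, 0.1, 0.92, 0.25

for name, Rs in [("dark ocean", 0.06), ("snow/ice", 0.8)]:
    dF = toa_direct_effect(F0, tau, omega, beta, Rs)
    print(f"{name:10s}: Delta F (upward SW at TOA) = {dF:+.1f} W m-2")
```

With these assumed values, the layer increases the outgoing SW flux over a dark ocean but decreases it over snow, consistent with the strong dependence on surface albedo discussed above.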
Figure 2-10 only shows the interaction between aerosols and shortwave radiation. Aerosols can also interact with longwave radiation, but this direct LW effect is only significant for large particles (e.g. mineral dust) transported at high altitudes (Tegen, 1996). Most pollution aerosols are located in the accumulation mode, and the global direct LW effect of pollution aerosols is much lower than their SW effect [START_REF] Myhre | Anthropogenic and Natural Radiative Forcing[END_REF]. For this reason, the direct radiative effect of aerosols in the Arctic during winter is weak.

Aerosol effects on Arctic clouds and resulting radiative effects

In addition to their direct effects, aerosols have an effect on the radiative budget through their impacts on cloud properties (semi-direct and indirect aerosol radiative effects). These cloud/aerosol radiative effects are different in the Arctic than elsewhere on the globe, due to the particularities of Arctic meteorology and Arctic clouds.

Arctic clouds

The Arctic is often covered with low-altitude stratus clouds. At the global scale, stratus clouds usually have a cooling effect due to their high albedo (increasing the loss of solar radiation to space and reducing solar radiation at the surface) and their weak greenhouse effect (due to the weak temperature contrast between the surface and cloud tops). In the Arctic, the SW cooling effect of clouds is reduced, due to the lack of SW radiation during winter and spring, and due to the low contrast in albedo between clouds and snow- or ice-covered surfaces. As a result, the LW (greenhouse) effect of clouds dominates in the Arctic, and clouds have a net warming effect at the surface [START_REF] Shupe | Cloud Radiative Forcing of the Arctic Surface: The Influence of Cloud Properties, Surface Albedo, and Solar Zenith Angle[END_REF]. During the Arctic summer, snow and ice cover decreases and solar radiation increases, and Arctic clouds mostly cause surface cooling due to their high albedo.

Semi-direct aerosol radiative effects in the Arctic

An aerosol layer located at altitude cools the surface and warms the air aloft; this results in increased atmospheric stability. The cloud response to this increased stability is relatively complex, with several competing processes. First, this increased stability can reduce the cloud-top entrainment rate, increasing stratus cloud cover. Second, increased stability can also inhibit convection, reducing the amount of cumulus clouds, even if cumuli are relatively rare in the Arctic. Third, absorbing aerosol layers located at low altitudes can increase temperatures and lower relative humidities enough to evaporate low-level clouds (cloud burn-off, [START_REF] Hansen | Radiative forcing and climate response[END_REF]; Koch and Del Genio, 2010). In the Arctic, the direct aerosol radiative effects are very low during winter and spring, due to the lack of SW radiation during polar night. As a result, semi-direct effects (which are consequences of the direct effects) are also low during winter and spring. During the Arctic summer, direct and semi-direct effects are higher, but the sign of the semi-direct effect depends on the aerosol vertical distribution.
Aerosol enhancements at low altitudes have a warming (positive) semi-direct effect (cloud burn-off, decreased SW cloud cooling), and aerosol enhancements at higher altitudes have a cooling (negative) semi-direct effect (reduced cloud-top entrainment rates, increased cloud cover, increased SW cloud cooling). Because of this, semi-direct effects of aerosols from local Arctic emission sources (stronger at the surface) are more likely to cause warming than those of aerosols from remote sources (often transported at higher altitudes). This sensitivity of Arctic semi-direct effects to vertical aerosol distributions is discussed in detail in Flanner (2013).

Radiative effects of cloud-aerosol interactions in the Arctic

Increased aerosol concentrations usually lead to increased CCN concentrations, increasing cloud droplet number and decreasing droplet sizes. These changes in cloud microphysical properties increase the cloud albedo (SW cooling effect, [START_REF] Twomey | The Influence of Pollution on the Shortwave Albedo of Clouds[END_REF]), but also increase the LW emissivity of clouds (LW warming effect, [START_REF] Garrett | Increased Arctic cloud longwave emissivity associated with pollution from mid-latitudes[END_REF]), and increase the cloud lifetime by reducing precipitation [START_REF] Albrecht | Aerosols, Cloud Microphysics, and Fractional Cloudiness[END_REF]. These cloud-aerosol effects are called the indirect effects of aerosols. At the global scale, the indirect SW cooling effect outweighs the indirect LW warming effect, and cloud-aerosol interactions cause a net cooling [START_REF] Myhre | Anthropogenic and Natural Radiative Forcing[END_REF]. Arctic clouds are especially sensitive to aerosol-cloud interactions due to the relative lack of CCN in the Arctic (Mauritsen et al., 2011). In addition, as discussed in Section 2.2.5.2.2, the warming LW effect of Arctic clouds is stronger than their SW cooling effect, except during summer. As a result, cloud-aerosol interactions also cause warming in the Arctic during all seasons except summer, by increasing cloud optical depth, cloud emissivity, cloud cover and cloud lifetime. At Barrow (Alaska), [START_REF] Zhao | Effects of Arctic haze on surface cloud radiative forcing[END_REF] found that cloud-aerosol interactions had a warming effect in October-May and a cooling effect during June-September. The yearly-averaged effect of cloud-aerosol interactions at Barrow was found by [START_REF] Zhao | Effects of Arctic haze on surface cloud radiative forcing[END_REF] to be a weak warming.

Radiative effects of absorbing aerosols deposited on snow

In the Arctic, absorbing aerosols (e.g. BC, dust) can significantly contribute to local warming by being deposited on snow and ice [START_REF] Warren | A Model for the Spectral Albedo of Snow. II: Snow Containing Atmospheric Aerosols[END_REF]. Aerosol deposition on snow lowers the surface albedo, and leads to earlier snow melt, revealing the darker underlying surface. BC can also indirectly increase snow absorption by increasing the average snow grain size [START_REF] Flanner | Present-day climate forcing and response from black carbon in snow[END_REF]. These snow-albedo effects are also more sensitive to local sources emitted at the Arctic surface ([START_REF] Shindell | Local and remote contributions to Arctic warming[END_REF]; [START_REF] Flanner | Arctic climate sensitivity to local black carbon[END_REF]).
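As a rough, purely illustrative calculation of the surface-albedo mechanism described above (not taken from the thesis), the sketch below converts an assumed BC-induced albedo reduction into an instantaneous change in absorbed shortwave radiation at the surface; both the albedo change and the incoming flux are hypothetical values.

```python
# Illustrative back-of-the-envelope sketch (not from the thesis):
# extra SW absorbed by a snow surface after an assumed albedo reduction by BC.

sw_down = 250.0        # downwelling SW flux at the surface (W m-2), assumed spring value
albedo_clean = 0.85    # clean snow broadband albedo (assumed)
albedo_polluted = 0.83 # snow albedo after BC deposition (assumed reduction of 0.02)

extra_absorbed = sw_down * (albedo_clean - albedo_polluted)
print(f"extra SW absorbed by the surface: {extra_absorbed:.1f} W m-2")
```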
Chapter 3

Methods: modeling tools, emission inventories and Arctic measurements

One of the objectives of this thesis is to quantify the impacts of remote and local sources of pollution on Arctic aerosols and ozone. These impacts can be estimated using 3D atmospheric models predicting the state and composition of the atmosphere. In these models, estimated emissions of aerosols and trace gases are transported based on results from a meteorological model (e.g. wind, temperature), and are transformed based on physical (e.g. advection, diffusion, interaction with clouds) and chemical (e.g. gas-phase chemistry, dry deposition, aerosol aging) processes. Such atmospheric models can be used to calculate the effect of a source of pollutant emissions on the state and composition of the atmosphere. The impacts of a specific pollution source (or region) can be determined by performing simulations with and without pollutant emissions from this source, since this perturbation in emissions leads to a change in 3D modeled pollutant concentrations, meteorological properties and radiative budgets. This perturbation approach is used in Chapters 4, 5 and 6. This chapter presents the main atmospheric modeling tools used in this thesis, the air pollutant emission inventories used as input for model simulations, and the Arctic measurements of atmospheric constituents and meteorological properties used to validate model results and used as a basis for case studies analyzing Arctic pollution.

3.1 Modeling the air quality and radiative impacts of short-lived pollutants in the Arctic

Most of the atmospheric modeling in this thesis is performed using a regional meteorology-chemistry-aerosol model, WRF-Chem ([START_REF] Grell | Fully coupled "online" chemistry within the WRF model[END_REF]; Fast et al., 2006). Compared to global models, using a regional model has two main advantages. First, using smaller, regional domains is less computationally costly, allowing the use of higher grid resolutions or more detailed chemistry, aerosol and physics schemes. Second, regional models such as WRF-Chem can use ad hoc grids centered on the Arctic (i.e., polar stereographic grids), while global model simulations are typically run on latitude-longitude grids, where the North Pole is a singularity and the grid is strongly anisotropic near the poles. In order to preserve numerical stability, models run on latitude/longitude grids require special filtering at the poles [START_REF] Takacs | Filtering Techniques on a Stretched Grid General Circulation Model[END_REF], which does not scale well on parallel computers [START_REF] Skamarock | A Multiscale Nonhydrostatic Atmospheric Model Using Centroidal Voronoi Tesselations and C-Grid Staggering[END_REF]. WRF-Chem is called "online" (as opposed to "offline") because it performs meteorological and chemical calculations simultaneously. Online models such as WRF-Chem can take into account the aerosol/meteorology and trace gas/meteorology interactions, since calculated chemical and aerosol compositions can influence the meteorological fields (e.g. through changes in the radiation budget or clouds). Unless otherwise specified, these interactions are always included in the simulations presented in this thesis.
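The perturbation (emission "zero-out") approach described above amounts to differencing two otherwise identical simulations during post-processing. A minimal sketch is shown below; it is not code from the thesis, and the file names and variable name are hypothetical placeholders for WRF-Chem output.

```python
import xarray as xr

# Hypothetical sketch of the perturbation approach: the contribution of one
# emission source is estimated as the difference between a control run and a
# run with that source's emissions removed. File/variable names are placeholders.

ctrl = xr.open_dataset("wrfout_control.nc")        # run with all emissions
noship = xr.open_dataset("wrfout_no_shipping.nc")  # run without e.g. shipping emissions

# Surface-level ozone attributed to the removed source (WRF-Chem o3 is in ppmv)
o3_contribution = ctrl["o3"].isel(bottom_top=0) - noship["o3"].isel(bottom_top=0)

print(float(o3_contribution.mean()) * 1000.0, "ppbv on average at the surface")
```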
Regional meteorology-chemistry-aerosol modeling with WRF-Chem

In this thesis, regional chemical-transport simulations are performed with the fully coupled, online WRF-Chem model (Weather Research and Forecasting model, including chemistry; [START_REF] Grell | Fully coupled "online" chemistry within the WRF model[END_REF]; Fast et al., 2006), version 3.5.1. WRF-Chem is a regional atmospheric model based on the mesoscale meteorological model WRF-ARW (Advanced Research WRF, [START_REF] Skamarock | A description of the Advanced Research WRF Version[END_REF]. WRF-Chem is fully integrated within WRF, and uses the same grid, time step, advection scheme and physics schemes as WRF. WRF-Chem is a community model and is highly modular: meteorological, aerosol and gas-phase processes can be represented by different schemes of different complexities. WRF-Chem has been widely used to study air quality over emission regions, and has been extensively validated over Europe (e.g. [START_REF] Tuccella | Modeling of gas and aerosol with WRF/Chem over Europe: Evaluation and sensitivity study[END_REF]; Zhang et al., 2013a,b), Asia (e.g. Kumar et al., 2012; Quennehen et al., 2015) and North America (e.g. Fast et al., 2006; Tessum et al., 2015). In relation to this thesis, WRF-Chem has also been used in the past to study, for example, the impacts of shipping at high latitudes [START_REF] Mölders | Influence of ship emissions on air quality and input of contaminants in southern Alaska National Parks and Wilderness Areas during the 2006 tourist season[END_REF], to analyze aircraft observations of CO and aerosols (Fast et al., 2012), and to investigate pollution transport to the Arctic [START_REF] Sessions | An investigation of methods for injecting emissions from boreal wildfires using WRF-Chem during ARCTAS[END_REF].

Since WRF-Chem is a coupled meteorology-chemistry-aerosol model, it can be set up in a way that allows predicted aerosol and trace gas concentrations to influence modeled meteorology. In all of the simulations presented in this thesis, aerosols influence meteorological properties through their direct effect on radiation, and through their indirect effect on cloud properties, precipitation and cloud lifetime. The base model setup is presented in Table 3.1. This setup is mostly based on recommendations found in [START_REF] Peckham | Best Practices for Applying WRF-Chem[END_REF]; it is used in the study presented in Chapter 4. Later studies (Chapters 5 and 6) use different options, presented in Table 5.2 (Chapter 5) and Table 6.1 (Chapter 6); these changes were motivated by earlier results, and the reasons for these modifications are presented in each chapter.

Table 3.1 - WRF-Chem base setup in this thesis (as used in Chapter 4).
Chemistry & aerosol options:
  Gas-phase chemistry - CBM-Z (Zaveri and Peters, 1999)
  Aerosols - MOSAIC 8-bin [START_REF] Zaveri | Model for Simulating Aerosol Interactions and Chemistry (MOSAIC)[END_REF]
  Photolysis - Fast-J [START_REF] Wild | Fast-J: Accurate Simulation of In-and Below-Cloud Photolysis in Tropospheric Chemical Models[END_REF]

Meteorological options:
  Planetary boundary layer - MYJ [START_REF] Janjić | The Step-Mountain Eta Coordinate Model: Further Developments of the Convection, Viscous Sublayer, and Turbulence Closure Schemes[END_REF]
  Surface layer - Monin-Obukhov Janjic Eta scheme [START_REF] Janjić | The Step-Mountain Eta Coordinate Model: Further Developments of the Convection, Viscous Sublayer, and Turbulence Closure Schemes[END_REF]
  Land surface - Unified Noah land-surface model (Chen and Dudhia, 2001)
  Microphysics - Morrison [START_REF] Morrison | Impact of Cloud Microphysics on the Development of Trailing Stratiform Precipitation in a Simulated Squall Line: Comparison of One-and Two-Moment Schemes[END_REF]
  SW radiation - Goddard [START_REF] Chou | A Solar Radiation Parameterization (CLIRAD-SW) for Atmospheric Studies[END_REF]
  LW radiation - RRTM [START_REF] Mlawer | Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave[END_REF]
  Cumulus parameterization - Grell-3 (Grell and Dévényi, 2002)

3.1.1.1 WRF-Chem gas-phase chemistry and aerosol schemes

WRF-Chem aerosols are represented by the MOSAIC (Model for Simulating Aerosol Interactions and Chemistry, [START_REF] Zaveri | Model for Simulating Aerosol Interactions and Chemistry (MOSAIC)[END_REF] model. MOSAIC represents aerosol size distributions by eight discrete size bins between 39 nm and 10 µm. Within each size bin and each grid cell, MOSAIC calculates aerosol number concentrations, as well as the mass concentrations of SO4^2-, NO3^-, NH4^+, BC (EC), OA, Na^+, Cl^-, and "other inorganics" (OIN, including mineral dust). Aerosols are assumed to be internally mixed within each bin; because of this, EC can become instantaneously hydrophilic when emitted in grid cells already containing small water-soluble aerosols. Nucleation is based on the H2SO4-H2O scheme of [START_REF] Wexler | Modelling urban and regional aerosolsâ Part I -Model development[END_REF], and new particles are grown (as SO4^2- and NH4^+) to the lower bound of the MOSAIC 8-bin scheme (39 nm). Coagulation is calculated following the approach of [START_REF] Jacobson | Modeling coagulation among particles of different composition and size[END_REF]. MOSAIC includes aerosol/cloud interactions, and predicts aerosol activation in clouds, aqueous chemistry in clouds, and within- and below-cloud wet scavenging. Interstitial and cloud-borne aerosol particles are treated explicitly, and modeled aerosols can be activated or re-suspended depending on saturation, particle size, and aerosol composition, based on the parameterizations of Abdul-Razzak and Ghan (2000, 2002). Aqueous chemistry in clouds is based on [START_REF] Fahey | Optimizing model performance: variable size resolution in cloud chemistry modeling[END_REF], and includes oxidation of S(IV) by H2O2, O3 and other radicals, as well as non-reactive uptake of NH3, HNO3, HCl, and other trace gases.
In-cloud wet removal occurs when cloud droplets containing activated aerosols are converted to precipitation, and model-predicted precipitation can also remove a fraction of below-cloud aerosols by impaction. Dry deposition velocities are calculated using the resistance scheme of [START_REF] Wesely | Parameterization of surface resistances to gaseous dry deposition in regional-scale numerical models[END_REF]. In the version presented here, MOSAIC includes 176 advected aerosol species: 8 bins × 11 species (mass concentrations for 8 chemical species + 2 species for aerosol water + 1 bulk number concentration) × 2 (activated or interstitial aerosol). As a result, it is one of the most computationally costly mechanisms available in WRF-Chem, and cannot currently be used to perform high-resolution simulations over long periods and large domains.

In WRF-Chem, MOSAIC is coupled to 3 different gas-phase chemistry schemes of similar complexities: CBM-Z (Carbon Bond Mechanism, version Z; 73 species, 237 reactions; [START_REF] Zaveri | A new lumped structure photochemical mechanism for largescale applications[END_REF], SAPRC-99 (Statewide Air Pollution Research Center, 1999 version; 79 species, 235 reactions; [START_REF] Carter | Documentation of the SAPRC-99 chemical mechanism for VOC reactivity assessment. Final Report to California Air Resources[END_REF] and MOZART (Model for Ozone and Related chemical Tracers; 85 species and 196 reactions; Emmons et al., 2010). The base configuration uses CBM-Z chemistry, which is the gas-phase chemistry scheme recommended by the MOSAIC development team [START_REF] Peckham | Best Practices for Applying WRF-Chem[END_REF]. The CBM-Z/MOSAIC combination does not allow cloud-aerosol interactions and SOA formation to be included together. These processes were added to the SAPRC-99/MOSAIC WRF-Chem setup by Shrivastava et al. (2011), and for this reason SAPRC-99/MOSAIC is used in Chapter 6.

Aerosol optical properties are calculated by a Mie code ([START_REF] Mie | Beiträge zur Optik trüber Medien, speziell kolloidaler Metallösungen[END_REF]; Barnard et al., 2010). Mie calculations are performed assuming spherical aerosols and an average refractive index within each bin. This refractive index is calculated in this thesis as the volume average of the indices of the chemical components within each bin. Photolysis rates used in the gas-phase chemistry calculations are determined by the Fast-J scheme [START_REF] Wild | Fast-J: Accurate Simulation of In-and Below-Cloud Photolysis in Tropospheric Chemical Models[END_REF], and take into account the influence of hydrometeors and aerosols on actinic fluxes.

Initial and boundary conditions for trace gases and aerosols are taken from the global chemical-transport model MOZART-4 (Emmons et al., 2010); boundary conditions are updated every 6 h. WRF-Chem does not include stratospheric chemistry. In order to include realistic concentrations of chemical species in the stratosphere and upper troposphere, stratospheric mixing ratios of CO, O3, NO, NO2, HNO3, N2O5 and N2O are constrained by a zonal-mean climatology, following the approach used in MOZART-4 (Emmons et al., 2010). The species are fixed to climatological values at the model top (50 hPa), and relaxed to model values down to the tropopause. WRF-Chem does not include detailed chemistry, sources and sinks of CH4 and CO2; these species are set to a single global value based on measurements.
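The volume averaging of refractive indices mentioned above can be written as m_eff = Σ_i f_i m_i, where f_i is the volume fraction of component i in the bin. The sketch below illustrates this mixing rule; it is not code extracted from WRF-Chem, and the component volume fractions and refractive indices are assumed, roughly 550 nm-like values used only for illustration.

```python
import numpy as np

# Illustrative volume-average refractive index for one MOSAIC-like size bin.
# Component volume fractions and refractive indices are assumed values,
# not the ones hard-coded in WRF-Chem.

components = {
    #             volume fraction, complex refractive index (n - ik)
    "sulfate":  (0.55, 1.52 - 1e-7j),
    "organics": (0.30, 1.45 - 0.001j),
    "BC":       (0.05, 1.85 - 0.71j),
    "water":    (0.10, 1.33 - 0.0j),
}

fractions = np.array([v[0] for v in components.values()])
indices = np.array([v[1] for v in components.values()])

m_eff = np.sum(fractions * indices) / fractions.sum()   # volume-weighted average
print(f"effective refractive index: {m_eff.real:.3f} - {abs(m_eff.imag):.4f}i")
```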
Meteorological (WRF) setup

Table 3.1 also presents the options selected for the meteorological (WRF) part of the model. The choice of several options is constrained by the use of MOSAIC. The recommended microphysical scheme to use with MOSAIC is the Morrison 2-moment scheme [START_REF] Morrison | Impact of Cloud Microphysics on the Development of Trailing Stratiform Precipitation in a Simulated Squall Line: Comparison of One-and Two-Moment Schemes[END_REF]. The Morrison 2-moment scheme calculates cloud formation, cloud properties, and precipitation at the grid scale, but for simulations at horizontal resolutions coarser than 10 km, it is recommended to use an additional parameterization for sub-grid cumulus clouds. The Grell-3D cumulus scheme [START_REF] Grell | A generalized approach to parameterizing convection combining ensemble and data assimilation techniques[END_REF] was chosen because it was until recently the only scheme in WRF-Chem representing sub-grid cloud interactions with radiation and tracer convection. There are several possible options for planetary boundary layer (PBL) schemes, which compute turbulent vertical mixing and fluxes, and for surface layer schemes, which compute friction velocities and surface exchange coefficients. The Mellor-Yamada-Janjic (MYJ) scheme [START_REF] Janjić | The Step-Mountain Eta Coordinate Model: Further Developments of the Convection, Viscous Sublayer, and Turbulence Closure Schemes[END_REF] was chosen to represent the PBL, along with the associated Janjic Eta surface layer scheme [START_REF] Janjić | The Step-Mountain Eta Coordinate Model: Further Developments of the Convection, Viscous Sublayer, and Turbulence Closure Schemes[END_REF]. These schemes are among the most commonly used within WRF-Chem. The land surface is represented by the unified Noah land surface model (Noah-LSM, [START_REF] Chen | Coupling an Advanced Land Surface-Hydrology Model with the Penn State-NCAR MM5 Modeling System. Part I: Model Implementation and Sensitivity[END_REF]. Radiative transfer calculations in the atmosphere are performed by the Rapid Radiative Transfer Model (RRTM) in the longwave (terrestrial radiation, [START_REF] Mlawer | Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave[END_REF], and by the Goddard scheme in the shortwave (solar radiation, [START_REF] Chou | A Solar Radiation Parameterization (CLIRAD-SW) for Atmospheric Studies[END_REF]. Both radiation schemes are coupled with the aerosol optical properties calculated by the Mie code. [START_REF] Iacono | Assessment of Radiation Options in the Advanced Research WRF weather forecast model[END_REF] compared different WRF radiation schemes, including RRTM and Goddard, to surface measurements of SW and LW radiative fluxes in the USA. This comparison showed a good agreement for both schemes chosen here, although discrepancies (up to 50 W m^-2) were possible due to errors in modeled cloud fractions. Initial and boundary conditions for meteorology are specified using the NCEP GFS (National Center for Environmental Prediction, Global Forecast System) FNL (final) analysis; boundary conditions are updated every 6 h. In addition, WRF-Chem winds, temperature and humidity are nudged to FNL every 6 h in the free troposphere.
WRF dynamical setup, discretization and numerical integration

WRF-ARW integrates the fully compressible and non-hydrostatic Euler equations in flux form [START_REF] Ooyama | A Thermodynamic Foundation for Modeling the Moist Atmosphere[END_REF]). Details on the numerical schemes used in WRF-ARW are given in the technical description of the model by [START_REF] Skamarock | A description of the Advanced Research WRF Version[END_REF]. Briefly, the equations are formulated in Cartesian horizontal coordinates, and in a pressure-based terrain-following "eta" vertical coordinate [START_REF] Laprise | The Euler Equations of Motion with Hydrostatic Pressure as an Independent Variable[END_REF]. The spatial discretization uses an Arakawa C-grid staggering [START_REF] Skamarock | A description of the Advanced Research WRF Version[END_REF]. Time discretization is based on a Runge-Kutta 3rd-order time-split integration [START_REF] Skamarock | The Stability of Time-Split Numerical Methods for the Hydrostatic and the Nonhydrostatic Elastic Equations[END_REF], with a smaller time step for acoustic and gravity-wave modes. A 5th-order scheme is used for horizontal scalar and momentum advection, and a 3rd-order scheme for vertical advection. Advection schemes conserve mass, and use a monotonic flux limiter, following the recommendations of [START_REF] Wang | Evaluation of Scalar Advection Schemes in the Advanced Research WRF Model Using Large-Eddy Simulations of Aerosol-Cloud Interactions[END_REF]. Sub-grid-scale horizontal turbulent mixing is performed by a 2nd-order scheme, and vertical mixing is performed by the chosen PBL scheme. Gas-phase chemistry equations in the SAPRC-99 and CBM-Z schemes are solved by a RODAS3 Rosenbrock-type solver [START_REF] Sandu | Benchmarking stiff ode solvers for atmospheric chemistry problems II: Rosenbrock solvers[END_REF]. The integration of gas-particle partitioning in MOSAIC is done by a dedicated scheme called ASTEM (Adaptive Step Time-Split Euler Method), described in detail in [START_REF] Zaveri | Model for Simulating Aerosol Interactions and Chemistry (MOSAIC)[END_REF].

Aerosol and ozone radiative effects in WRF-Chem

Aerosols predicted by WRF-Chem/MOSAIC influence the modeled radiation budget in two ways. First, predicted aerosols are used to compute aerosol optical properties. These optical properties are passed to the RRTM and Goddard radiation modules, where they are used in radiative transfer calculations (direct radiative effect). As a result, this interaction also modifies the modeled meteorology and can affect cloud formation (semi-direct effect). Second, aerosol activation changes the cloud droplet number concentrations and cloud droplet radii in the Morrison microphysics scheme, and these properties are used in the radiation schemes to calculate cloud optical properties (first indirect aerosol effect). Aerosol activation in MOSAIC can also influence cloud lifetime by changing cloud properties and precipitation rates (second indirect aerosol effect). The radiation modules (Goddard and RRTM) do not use the WRF-Chem predicted atmospheric profiles of O3 in their calculations, and use climatological profiles instead. As part of this thesis, predicted O3 was coupled to SW and LW radiation in Chapter 6 to quantify its direct radiative effect (see Section 6.8).
Lagrangian modeling with FLEXPART-WRF

The Lagrangian particle dispersion model FLEXPART-WRF (Fast and Easter, 2006; Brioude et al., 2013) is used in this thesis to calculate pollution transport and dispersion from individual sources, and to identify the origin of measured pollution. FLEXPART-WRF is a version of the dispersion model FLEXPART (Stohl et al., 2005) modified to be driven by meteorological fields from WRF. In its "forward mode", FLEXPART-WRF uses meteorological fields from a WRF or WRF-Chem simulation to compute the transport and dispersion of a large number of particles released over time from a box source. Each particle is associated with a given mass of tracer, and after a given time the particles can be counted on a grid to estimate the tracer concentration from this source. FLEXPART-WRF can also be used in "backward mode" to estimate source-receptor relationships. In this case, particles released at a receptor point are transported backward in time using the meteorological fields from the WRF simulation. This calculation can be performed because the equations included in FLEXPART-WRF are symmetric in time. The model can then compute "potential emission sensitivities" (PES) on a grid, which represent the amount of time (in seconds) spent by backward-moving particles in each of the cells of an output grid. Footprint PES (FPES) are often defined as the PES integrated over the lowest atmospheric layers, typically 0-250 m. FPES can be multiplied by surface emission fluxes (in kg m^-3 s^-1) to estimate the concentrations (kg m^-3) at the receptor (origin) point due to this specific emission source. FLEXPART-WRF does not include gas-phase and aerosol chemistry, but can represent the exponential decay of a tracer based on its lifetime. Dry and wet removal can also be calculated based on WRF precipitation and given dry and wet deposition velocities. In this thesis, FLEXPART-WRF is used to identify the origins and transport pathways of pollution plumes measured in the Arctic (Chapter 4) and to compute plume dispersion and transport from shipping point sources in order to derive emissions from measurements (Chapter 5).

Air pollutant emissions from global and local Arctic pollution sources

In order to represent atmospheric composition, WRF-Chem simulations need air pollutant emissions as input. Anthropogenic and biomass burning emissions are usually taken from emission inventories, containing geographically distributed and time-resolved emissions of relevant pollutants. Natural emissions can often be calculated directly within the model during simulations, since they are usually tied to specific physical phenomena (e.g. NOx emissions from lightning). This section presents how anthropogenic, biomass burning and natural emissions are implemented in the WRF-Chem simulations performed in this thesis.

Figure 3-3 - ECLIPSEv5 emission scenarios (Stohl et al., 2015). NFC, CLE and MIT scenarios are represented in blue, along with the IPCC RCP scenario range in gray (Lamarque et al., 2010).

Global anthropogenic emissions from ECLIPSEv5 and HTAPv2

HTAPv2 emissions shown in Figure 3-1 include emissions from the energy, industry, residential, transport and agriculture sectors. ECLIPSEv5 emissions in Figure 3-2 include emissions from the same sectors, and additional emissions from the "waste processing" and "solvent" sectors (included as part of the residential and industrial sectors in HTAPv2). Ship emissions are not included in Figures 3-1 and 3-2, but are discussed in detail in Section 3.2.5. "Agricultural waste burning" emissions from ECLIPSEv5 are excluded from all simulations (and from Figure 3-2), to avoid double counting with biomass burning emissions discussed in Section 3.2.2.
Figures 3-1 and 3-2 illustrate that global yearly emission totals are very similar between the two inventories despite the different methodologies. However, the distribution of the emissions can be quite different, especially in remote regions (northern Russia, deserts, oceans). Another difference between the two inventories is that HTAPv2 is available at a higher resolution (0.1° × 0.1°) than ECLIPSEv5 (0.5° × 0.5°). For this reason, HTAPv2 is used in Chapters 4 and 5, where WRF-Chem simulations are performed with a horizontal grid spacing finer than 0.5° × 0.5° (∼50 km × 50 km). ECLIPSEv5 emissions include a recent estimate of Arctic oil and gas flaring emissions (Stohl et al., 2013; oil and gas emissions are discussed in Section 3.2.4). For this reason, ECLIPSEv5 emissions are used in Chapter 6 in simulations assessing the present and future impact of emissions from Arctic oil and gas extraction.

ECLIPSEv5 future projections are based on results from the GAINS (Greenhouse gas-Air pollution Interactions and Synergies) model [START_REF] Amann | Cost-effective control of air quality and greenhouse gases in Europe: Modeling and policy applications[END_REF], and include several possible scenarios. NFC (No Further Control) is a high-emission, business-as-usual scenario assuming that no new emission controls are implemented after 2005. CLE (Current Legislation) includes lower emissions, assuming that already committed future emission reductions will be implemented. MIT (MITigation) is a low-emission mitigation scenario including further additional mitigation of short-lived climate forcers (Stohl et al., 2015). The CLE scenario is used in the present work, because it represents mid-range, more probable future emissions. Figure 3-3 shows the evolution of global emissions of CO2, CH4, SO2, NOx and BC in the ECLIPSEv5 inventories between 1990 and 2050. 2050 CO2 and NOx emissions are larger than today in the CLE scenario, but, due to regulations, BC emissions decrease globally and SO2 emissions remain approximately the same.

ECLIPSEv5 and HTAPv2 give NMVOC emissions as a bulk total mass by emission sector, but gas-phase chemistry mechanisms within WRF-Chem include several individual NMVOC species. As a result, NMVOC emissions from inventories need to be speciated to mechanism species during emission preprocessing. This speciation is done in two steps. First, bulk anthropogenic VOCs from a given emission sector are disaggregated into individual VOC chemical species based on a detailed anthropogenic VOC inventory for the UK (Table 2.13 in [START_REF] Murrells | UK Emissions of Air Pollutants 1970 to 2007[END_REF]. These individual VOC species are then assigned to individual WRF-Chem VOC species, using a database compiled for this purpose by [START_REF] Carter | Development of an Improved Chemical Speciation Database for Processing Emissions of Volatile Organic Compounds for Air Quality Models[END_REF]. NOx emissions are given as a NO2-equivalent total in the ECLIPSEv5 and HTAPv2 inventories. For all anthropogenic emission sectors except shipping, NOx emissions are assigned as 90 % NO and 10 % NO2 [START_REF] Finlayson-Pitts | Chemistry of the Upper and Lower Atmosphere: Theory, Experiments, and Applica-tions[END_REF]. Shipping NOx emissions are assigned as 94 % NO and 6 % NO2 [START_REF] Epa | Analysis of commercial marine vessels emissions and fuel consumption data[END_REF].
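A toy illustration of this preprocessing step is sketched below; it is not the preprocessor used in the thesis. Bulk NOx is split into NO and NO2 with the sector-dependent fractions given above, and bulk NMVOC mass is distributed to mechanism species using a made-up speciation table standing in for the Murrells/Carter databases.

```python
# Hypothetical emission-preprocessing sketch (not the actual preprocessor):
# split bulk NOx and speciate bulk NMVOC for one grid cell and sector.

def split_nox(nox_mass_as_no2, sector):
    """Split a bulk NOx emission (expressed as NO2-equivalent mass) into NO and NO2."""
    f_no = 0.94 if sector == "shipping" else 0.90   # molar fractions from the text
    # 30/46 converts NO2-equivalent mass to NO mass (molar masses 30 and 46 g/mol)
    return {"NO": nox_mass_as_no2 * f_no * 30.0 / 46.0,
            "NO2": nox_mass_as_no2 * (1.0 - f_no)}

# Made-up speciation fractions standing in for the UK VOC profile / Carter mapping
VOC_SPECIATION = {"residential": {"PAR": 0.6, "TOL": 0.2, "OLE": 0.1, "HCHO": 0.1}}

def speciate_voc(voc_mass, sector):
    return {spc: voc_mass * f for spc, f in VOC_SPECIATION[sector].items()}

print(split_nox(10.0, "shipping"))        # kg of NO and NO2 from 10 kg NOx (as NO2)
print(speciate_voc(5.0, "residential"))   # kg of each mechanism species from 5 kg NMVOC
```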
In Chapters 4 and 5, Organic Carbon (OC) emissions from HTAPv2 are converted to WRF-Chem (MOSAIC) POA using a factor of 1.4 [START_REF] Turpin | Measuring and simulating particulate organics in the atmosphere: problems and prospects[END_REF].

Biomass burning emissions

Emissions from boreal biomass burning are an important source of Arctic pollution during summer [START_REF] Stohl | Characteristics of atmospheric transport into the Arctic troposphere[END_REF]. [START_REF] Warneke | An important contribution to springtime Arctic aerosol from biomass burning in Russia[END_REF] also showed that emissions from agricultural fires located at lower latitudes can be transported to the Arctic in spring. In order to represent this contribution, biomass burning emissions from the FINNv1 (Fire INventory from NCAR version 1; Wiedinmyer et al., 2011; Chapters 4 and 5) and FINNv1.5 ([START_REF] Wiedinmyer | Global Emissions of Trace Gases, Particulate Matter, and Hazardous Air Pollutants from Open Burning of Domestic Waste[END_REF]; Chapter 6) inventories are included in WRF-Chem simulations. FINN emissions are based on fire detections by the space-borne MODIS instrument, and combine MODIS-derived burned area and land cover type with emission factors to estimate daily resolved global fire emissions. FINN documentation does not include estimates of year-to-year variations of biomass burning emissions in the Arctic, but this information is available from another biomass burning inventory, GFED (Global Fire Emissions Database, Figure 3-…). Daily FINN emissions implemented in WRF-Chem are transformed into hourly emissions by applying a daily emission cycle, peaking at 1 pm local (solar) time. WRF-Chem simulations also include a fire plume rise model ([START_REF] Freitas | Including the sub-grid scale plume rise of vegetation fires in low resolution atmospheric transport models[END_REF]; [START_REF] Sessions | An investigation of methods for injecting emissions from boreal wildfires using WRF-Chem during ARCTAS[END_REF]), which takes into account the injection altitude of fire emissions due to pyroconvection, based on fire size, land use and WRF-Chem meteorology.

Natural emissions calculated online within WRF-Chem

Dust and sea salt aerosol emissions are calculated online within WRF-Chem. Dust emissions are based on the GOCART emission scheme [START_REF] Chin | Tropospheric Aerosol Optical Thickness from the GOCART Model and Comparisons with Satellite and Sun Photometer Measurements[END_REF], combining model-predicted 10 m wind speed, model-predicted soil water content and input maps of soil erodibility from GOCART. Sea salt emissions from oceans are based on [START_REF] Gong | Modeling sea-salt aerosols in the atmosphere: 1. Model development[END_REF], and also use 10 m wind speeds as the main input parameter. Biogenic emissions from vegetation are from an online version of MEGAN (Model of Emissions of Gases and Aerosols from Nature, Guenther et al., 2006) within WRF-Chem. MEGAN estimates biogenic emissions from solar radiation, WRF-Chem predicted temperature, and climatological input maps of leaf area index and vegetation types. In addition, soil NOx emissions developed for the POLMIP (POLARCAT model intercomparison) project (Emmons et al., 2015) are used in the simulations presented in Chapters 4 and 6. DMS emissions from the oceans are an important source of SO2 and sulfate in the Arctic during summer months over the open ocean. DMS emissions and chemistry were included in the studies presented in Chapters 5 and 6.
Lightning NO x emissions were included in the simulations presented in Chapter 6. Additional details on the model setup and emissions are given at the beginning of each chapter. Snow NO x emissions are still poorly understood and are therefore not included in the simulations presented in this thesis. Volcanic emissions are episodic, difficult to quantify, and not thought to be a major source of high-latitude SO 2 except during strong events; they were not included in this work either.

Local Arctic pollutant emissions from oil and gas extraction

Emissions from the petroleum extraction sector are included in global emission inventories (e.g. HTAPv2), but they are usually lumped with other emissions from the whole energy sector. Furthermore, global inventories might not be suited for regional applications in the Arctic if the location and magnitude of Arctic oil and gas emissions are not precisely implemented in inventories. For these reasons, it is preferable to use an inventory with a specific focus on the Arctic, such as the inventory of Peters et al. (2011), or the ECLIPSEv5 gas flaring emissions [START_REF] Klimont | Global scenarios of air pollutants and methane[END_REF], both shown in Figure 3-6. The ECLIPSEv5 oil and gas emissions shown here only represent emissions associated with gas flaring, which is the burning of excess gas for disposal at petroleum production and processing facilities. Emissions from Peters et al. (2011) are not directly comparable, since they also include other emissions associated with the petroleum sector, such as emissions from diesel engines, leaks, and venting. In spite of this, oil and gas related emission totals are 10-50 times higher in ECLIPSEv5 than in Peters et al. (2011) for all species except NO x . Specifically, gas flaring is the most important source of Arctic anthropogenic BC in the ECLIPSEv5 dataset (Stohl et al., 2013). This is mostly due to the high recent estimates of flared gas volumes in this region by [START_REF] Elvidge | A twelve year record of national and global gas flaring volumes estimated using satellite data: final report to the World Bank[END_REF][START_REF] Elvidge | A Fifteen Year Record of Global Natural Gas Flaring Derived from Satellite Data[END_REF]. Flared volumes were estimated using "night light" satellite measurements from DMSP-OLS (U.S. Air Force Defense Meteorological Satellite Program - Operational Linescan System). Flares were identified based on their emission spectrum, and locations were later confirmed flare-by-flare from satellite pictures. In addition, the BC emission factor for flares in ECLIPSEv5 (1.6 g Nm -3 of gas flared) is higher than previous values from laboratory studies (0.51 g Nm -3 ; [START_REF] Mcewen | Black carbon particulate matter emission factors for buoyancydriven associated gas flares[END_REF]). Emissions from flares are very uncertain due to the lack of dedicated field campaigns to measure emission factors in situ. However, another recent approach by [START_REF] Huang | Russian anthropogenic black carbon: Emission reconstruction and Arctic black carbon simulation[END_REF] estimates an even higher BC emission factor, 2.27 g Nm -3 . ECLIPSEv5 emissions indicate that previous studies based on Peters et al. (2011) emissions (e.g., Ødemark et al., 2012) could have been underestimating the atmospheric impacts of oil and gas activity in the Arctic.
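To illustrate how sensitive flaring BC emissions are to the assumed emission factor, the small calculation below applies the three emission factors quoted above to a hypothetical flared gas volume; the volume itself is a made-up round number used only for comparison.

```python
# BC emission factors for gas flares (g BC per Nm3 of gas flared), from the text
EF_BC = {"laboratory estimate": 0.51, "ECLIPSEv5": 1.6, "Huang et al.": 2.27}

flared_volume_nm3 = 1.0e9  # hypothetical 1 billion Nm3 of gas flared per year

for source, ef in EF_BC.items():
    bc_kt = flared_volume_nm3 * ef / 1.0e9  # grams -> kilotonnes
    print(f"{source}: {bc_kt:.2f} kt BC per 1e9 Nm3 flared")
```

For the same flared volume, the resulting BC emissions differ by more than a factor of four between the lowest and highest emission factors, which is why the choice of emission factor dominates the uncertainty discussed above.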
ECLIPSEv5 also contains estimates of future flaring emissions in the Arctic, based on projections from the GAINS model (Table 3.2). Using CO emissions as a proxy for flared volumes indicates that ECLIPSEv5 flaring increases slightly between 2010 and 2050 (+4.7 %). This is consistent with the projections from Peters et al. (2011) shown in Section 1.3.3 (Figure 1-13). For SO 2 and NMVOC, this increase is compensated by a decrease in emission factors. ECLIPSEv5 gas flaring emissions are used in Chapter 6 to estimate the current and future air quality and radiative impacts of the petroleum extraction sector in the Arctic.

Local Arctic emissions from shipping

Air pollutant emissions from marine traffic can be estimated using top-down or bottom-up approaches. In top-down approaches, ship fuel consumption is estimated based on fuel sales or on characteristics of the total fleet. Emission factors are applied to the fuel consumption (total or by ship type) to estimate emissions, which can be allocated geographically based on known ship routes. In bottom-up approaches, emissions are modeled for a single ship, based on this ship's speed and location, and on a technical description of the ship (e.g. engine type, size, fuel type). Emissions from single ships are then aggregated to produce emissions for the total fleet. The principle of this approach is presented in Figure 3-7.

Top-down approaches were used to produce early inventories (e.g. [START_REF] Corbett | Updated emissions from ocean shipping[END_REF]), but these inventories were uncertain due to simplifying assumptions when applying average emission factors for a large part of the fleet [START_REF] Eyring | Transport impacts on atmosphere and climate: Shipping[END_REF]. Most recent inventories are based on bottom-up approaches, which were made possible by the availability of detailed databases of ship activity, AMVER (Automated Mutual-assistance Vessel Rescue) and COADS (Comprehensive Ocean-Atmosphere Data Set, now ICOADS). AMVER is a tracking system used by merchant ships for search and rescue, and COADS/ICOADS is based on self-reporting of ship journeys by ship crews. Although using either COADS or AMVER datasets was an improvement compared to earlier approaches, [START_REF] Endresen | Emission from international sea transportation and environmental impact[END_REF] showed that the use of these two datasets produced very different regional shipping emissions. In addition, both datasets are known to be biased towards specific ship types, ship sizes and locations [START_REF] Endresen | Emission from international sea transportation and environmental impact[END_REF][START_REF] Eyring | Transport impacts on atmosphere and climate: Shipping[END_REF]. [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF] used an Arctic-specific ship activity dataset, AMSA, based on self-reporting (Arctic Council, 2009), to produce emissions for 2004. However, the AMSA dataset is also thought to underreport Arctic shipping (Arctic Council, 2009), and fishing ship emissions were not included by [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF], since fishing ships do not usually follow straight routes.
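The following sketch illustrates the bottom-up principle described above for a single ship segment. The cubic propeller law, installed power, design speed and emission factor are simplified placeholder assumptions; operational systems such as STEAM2 rely on detailed per-ship technical databases and more sophisticated load and resistance models.

```python
def ship_emission_kg(speed_kn, hours, design_speed_kn=15.0,
                     installed_power_kw=8000.0, ef_g_per_kwh=12.0):
    """Very simplified bottom-up emission estimate for one ship segment.

    Engine load is approximated with a cubic propeller law and multiplied by
    installed power, sailing time and an emission factor. All numerical values
    (design speed, installed power, NOx-like emission factor) are placeholders.
    """
    load = min((speed_kn / design_speed_kn) ** 3, 1.0)  # fraction of installed power
    energy_kwh = load * installed_power_kw * hours      # propulsion energy used
    return energy_kwh * ef_g_per_kwh / 1000.0           # grams -> kg

# One 6-hour segment at 12 knots for a hypothetical cargo ship
print(ship_emission_kg(speed_kn=12.0, hours=6.0))
```

Per-segment emissions computed this way are then summed over all reported positions of all ships to build a gridded fleet-wide inventory.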
Several recent emission inventories [START_REF] Jalkanen | A modelling system for the exhaust emissions of marine traffic and its application in the Baltic Sea area[END_REF][START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF] used AIS (Automatic Identification System) ship activity data. AIS is a real-time ship positioning system carried on board ships, mandatory for large ships (gross tonnage > 300 t) and voluntary for smaller ships. In order to be included in databases, AIS signals have to be received either by terrestrial stations, with a limited range from shore (∼50 km) but high temporal resolution (∼6 min), or by polar-orbiting satellites, with higher coverage but limited time resolution (∼20 min-2 h). AIS is known to be very representative in the Baltic Sea, where 90 % of ships are equipped [START_REF] Miola | Estimating air emissions from ships: Meta-analysis of modelling approaches and available data sources[END_REF]. Future emission projections are based on traffic growth estimates from Corbett et al. (2010) and IMO [START_REF] Buhaug | Second IMO GHG study[END_REF], and also take into account changes in emission factors due to new regulations and improved engine efficiencies. The resulting changes in emissions between 2012 and 2050, due to this combination of new regulations and increased traffic, are presented in Table 3.4. SO 2 emissions decrease strongly due to further reductions of fuel sulfur content to 0.1 % in SECAs in 2015, and to less strict worldwide sulfur content limits (0.5 %) expected at the latest for 2025 (Jonson et al., 2015). NO x emissions also decrease in 2050 in all scenarios, as older ships are replaced with new ships complying with IMO regulations. In this thesis, "high-growth" scenarios are used for future shipping emissions in Chapter 6, in order to estimate the upper bound of future shipping impacts in the absence of regulations, and because some earlier estimates (e.g. Browse et al., 2013) indicate that these future impacts could be limited.

Future emissions presented in Table 3.4 do not include the effect of a future diversion of international shipping through the Arctic (the reasons for this possible future large-scale diversion of shipping through the Arctic are presented in Section 1.3.3). The emissions from future diversion shipping have been estimated by Peters et al. (2011) and [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF]. The Arctic-wide future simulations presented in Chapter 6 use high-growth 2050 diversion shipping emissions from [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF], who assumed that 5 % of global shipping traffic would be diverted through the Arctic during summer and fall, when sea ice cover is low (July-November in 2050). The total yearly emissions associated with this future scenario are presented in Table 3.5, illustrating that, in 2050, emissions from Arctic diversion shipping could be much higher than emissions from local Arctic shipping (Table 3.4). [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF] also determine several possible future diversion routes through the Arctic Ocean, presented in Figure 3-9, including trans-polar routes. In this thesis (Chapter 6), diversion shipping emissions are assumed to be distributed equally between these routes.

Table 3.5 - Total Arctic (latitude > 60 ∘ N) diversion shipping emissions in 2050 (kton yr -1 ), estimated by Corbett et al. (2010) (High-Growth scenario).
NMVOC emissions in this table are calculated by assuming a VOC/CO ratio of 53.15 % [START_REF] Corbett | Updated emissions from ocean shipping[END_REF], and POA emissions by assuming a POA/OC ratio of 1.25 (Shrivastava et al., 2011).

Aerosol and ozone measurements in the Arctic

Surface and aircraft measurements of aerosols, ozone, and their precursors are used throughout this thesis to validate model results. WRF-Chem and FLEXPART-WRF simulations are also used to analyze airborne measurements of aerosols and ozone (model case studies), in order to learn about the origins and impacts of short-lived pollution observed in the Arctic. The main Arctic measurement datasets used in this thesis are presented in the following sections.

Surface measurements

Equivalent black carbon (EBC) is derived as EBC = 𝜎 𝑎𝑏𝑠 × MAC -1 , where 𝜎 𝑎𝑏𝑠 is the measured light-absorption coefficient in m -1 , and MAC is the mass-specific absorption coefficient at the same wavelength, in m 2 kg -1 . Both 𝜎 𝑎𝑏𝑠 measurements and assumptions about MAC introduce significant uncertainties in EBC calculations (Petzold et al., 2013). Additional details on the ACCESS campaign are given in [START_REF] Roiger | Quantifying Emerging Local Anthropogenic Emissions in the Arctic Region: The ACCESS Aircraft Campaign Experiment[END_REF].

Chapter 4

Transport of pollution from the mid-latitudes to the Arctic during POLARCAT-France

Motivation

The lower Arctic troposphere (altitudes < 3 km) is polluted each year in winter/spring, a phenomenon known as Arctic Haze [START_REF] Quinn | A 3-year record of simultaneously measured aerosol chemical and optical properties at Barrow, Alaska[END_REF]. This Arctic Haze contains enhanced aerosol, NO x and VOC concentrations, and was shown to be mostly due to long-range transport of anthropogenic pollution from Europe and Western Asia [START_REF] Rahn | Arctic Air Chemistry Proceedings of the Second Symposium Relative importances of North America and Eurasia as sources of arctic aerosol[END_REF]. Although the phenomenon is relatively well known, it is not well represented in models, which often underestimate aerosol concentrations at the surface [START_REF] Myhre | Anthropogenic and Natural Radiative Forcing[END_REF] and overestimate concentrations aloft [START_REF] Schwarz | Global-scale black carbon profiles observed in the remote atmosphere and compared to models[END_REF]. These biases were found to be due to uncertainties in the treatment of aerosol wet removal [START_REF] Browse | The scavenging processes controlling the seasonal cycle in Arctic sulphate and black carbon aerosol[END_REF], and although many models have updated their representation of these processes recently, significant difficulties remain (Eckhardt et al., 2015). In order to improve our understanding of Arctic pollution, several airborne measurement campaigns have been carried out in the Arctic. These studies also showed that aerosols contained in those plumes aged significantly by coagulation, condensation and wet removal processes during transport. However, these observation-based studies were unable to precisely quantify the magnitude of these aging processes, the contributions from different sources (anthropogenic, biomass burning), or the large-scale air quality and radiative impacts of these pollution aerosols in the Arctic.
In this chapter, WRF-Chem is combined with HTAPv2 anthropogenic emissions, FINNv1 biomass burning emissions and POLARCAT-France airborne measurements for a case study investigating long-range transport of pollution from Europe to the Arctic during the campaign. In the context of this thesis, an important objective of this study is to validate the representation of aerosol transport events from Europe to the Arctic in spring in WRF-Chem. These types of transport events are known to be an important source of Arctic aerosols in winter/spring, and need to be well reproduced by the model in order to perform other large-scale studies of Arctic aerosol pollution (see Chapter 6). This study also aims to improve our knowledge of these transport events, specifically:

• Identify pollution transport pathways and European sources of Arctic pollution in April 2008, and quantify aerosol wet removal during transport.

• Determine if WRF-Chem simulations can reproduce the complex vertical layering of pollution observed in the Arctic troposphere, in situ and by airborne LIDAR (LIght Detection And Ranging), during POLARCAT-France, and if these layers correspond to different sources, transport pathways and aging processes.

• Determine the regional impacts of the observed transport events in terms of aerosol concentrations and radiative effects in the European Arctic.

This study was published as Marelle et al. (2015), and the paper is reproduced below.

Introduction

Arctic haze, which is present during winter and spring, is a well-known phenomenon that includes elevated concentrations of anthropogenic aerosols transported to the Arctic region (e.g., [START_REF] Rahn | The Asian source of Arctic haze bands[END_REF][START_REF] Quinn | Arctic haze: current trends and knowledge gaps[END_REF]). It was identified for the first time in the 1950s, when pilots experienced reduced visibility in the springtime North American Arctic [START_REF] Greenaway | Experiences with Arctic flying weather[END_REF][START_REF] Mitchell | Visual range in the polar regions with particular reference to the Alaskan Arctic[END_REF]. Further analysis showed that Arctic haze aerosols are mostly composed of sulfate, as well as organic matter, nitrate, sea salt, and black carbon (e.g., [START_REF] Quinn | A 3-year record of simultaneously measured aerosol chemical and optical properties at Barrow, Alaska[END_REF]). Since local Arctic emissions are rather low, most air pollutants in the Arctic originate from transport from the mid-latitudes [START_REF] Barrie | Arctic air pollution: An overview of current knowledge[END_REF]. In late winter and early spring, Eurasian emissions can be efficiently transported at low levels into the Arctic [START_REF] Rahn | Arctic Air Chemistry Proceedings of the Second Symposium Relative importances of North America and Eurasia as sources of arctic aerosol[END_REF], when removal processes are particularly slow [START_REF] Shaw | The Arctic Haze Phenomenon[END_REF][START_REF] Garrett | The role of scavenging in the seasonal transport of black carbon and sulfate to the Arctic[END_REF], while Asian emissions have a larger influence in the upper troposphere [START_REF] Fisher | Sources, distribution, and acidity of sulfate-ammonium aerosol in the Arctic in winter-spring[END_REF].
Eurasian biomass burning emissions are thought to be major sources of Arctic pollution [START_REF] Stohl | Characteristics of atmospheric transport into the Arctic troposphere[END_REF][START_REF] Warneke | An important contribution to springtime Arctic aerosol from biomass burning in Russia[END_REF], but the magnitude of this contribution is still uncertain. Aerosols play a key role in the climate system, through their absorption and scattering of solar radiation (direct effect; e.g., [START_REF] Haywood | The effect of anthropogenic sulfate and soot aerosol on the clear sky planetary radiation budget[END_REF][START_REF] Charlson | Climate Forcing by Anthropogenic Aerosols[END_REF]), and through their impacts on cloud formation by modifying relative humidity and atmospheric stability (semi-direct effect; [START_REF] Ackerman | Reduction of Tropical Cloudiness by Soot[END_REF]) and by changing cloud properties, lifetime, and precipitation (indirect effects; [START_REF] Twomey | The Influence of Pollution on the Shortwave Albedo of Clouds[END_REF][START_REF] Albrecht | Aerosols, Cloud Microphysics, and Fractional Cloudiness[END_REF]). In the Arctic, several processes enhance the radiative impact of aerosols, including soot deposition on snow [START_REF] Flanner | Present-day climate forcing and response from black carbon in snow[END_REF], increased longwave emissivity in clouds in polluted conditions [START_REF] Garrett | Increased Arctic cloud longwave emissivity associated with pollution from mid-latitudes[END_REF], and the increased atmospheric heating effect of aerosols with weak absorbing properties over snow- or ice-covered surfaces [START_REF] Pueschel | Physical and radiative properties of Arctic atmospheric aerosols[END_REF][START_REF] Haywood | The effect of anthropogenic sulfate and soot aerosol on the clear sky planetary radiation budget[END_REF]. Modeling studies by [START_REF] Shindell | Climate response to regional radiative forcing during the twentieth century[END_REF] investigated the climate response to such regional radiative forcings.

In this study, we use the regional chemical transport model WRF-Chem ([START_REF] Grell | Fully coupled "online" chemistry within the WRF model[END_REF]; Fast et al., 2006). It has been successfully used in previous studies focused on the Arctic region ([START_REF] Sessions | An investigation of methods for injecting emissions from boreal wildfires using WRF-Chem during ARCTAS[END_REF]; Thomas et al., 2013) and to analyze airborne observations.

To compare simulations with airborne lidar measurements, modeled backscatter ratio profiles at the plane position are calculated by using the aerosol backscattering coefficient at 400 nm simulated by WRF-Chem. This coefficient is computed within WRF-Chem from the method of [START_REF] Toon | Algorithms for the calculation of scattering by stratified spheres[END_REF], using a bulk, volume-averaged refractive index derived from the modeled size distribution [START_REF] Bond | Limitations in the enhancement of visible light absorption due to mixing state[END_REF]. The backscattering coefficient is then estimated at 532 nm by using the simulated Angström exponent, and the effect of aerosol transmission is ignored because aerosol optical depths are low.

FLEXPART-WRF

We also use FLEXPART-WRF, a Lagrangian particle dispersion model (Brioude et al., 2013) adapted from the model FLEXPART (Stohl et al., 2005), to study air mass origins and transport processes using WRF meteorological forecasts.
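Returning briefly to the lidar comparison above, a minimal sketch of the wavelength conversion is given below: the modeled backscattering coefficient at 400 nm is scaled to 532 nm with the simulated Angström exponent, assuming a power-law wavelength dependence. This scaling form is a common assumption and may differ in detail from the actual model code.

```python
def backscatter_at_532(beta_400, angstrom_exp):
    """Convert a backscattering coefficient from 400 nm to 532 nm assuming a
    power-law wavelength dependence, beta ~ lambda**(-alpha), where alpha is
    the simulated Angstrom exponent (an assumed functional form)."""
    return beta_400 * (532.0 / 400.0) ** (-angstrom_exp)

# Example: beta(400 nm) = 2e-6 m-1 sr-1 and an Angstrom exponent of 1.3
print(backscatter_at_532(2.0e-6, 1.3))
```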
In this study, we use FLEXPART-WRF driven by the WRF simulations described above.

Meteorological context during the spring POLARCAT-France campaign

Long-range transport of aerosol from Europe to the Arctic is usually associated with specific synoptic meteorological conditions over Europe, causing large-scale meridional transport (e.g., [START_REF] Iversen | Arctic air pollution and large scale atmospheric flows[END_REF]). In order to investigate the origin and transport of aerosols, the relative influence of biomass burning and anthropogenic emissions along the flights is quantified using modeled tracer ratios. Values of these ratios along the three flight tracks are presented in the Supplement, Fig. S2. We used a threshold of 20 % to highlight the difference between air masses significantly influenced by biomass burning (BB) and air masses mostly influenced by anthropogenic emissions. This threshold excludes air masses weakly influenced (5 to 15 %) by BB on 10 and 11 April (as seen in Fig. S2) and identifies air masses significantly influenced by BB, up to 30-40 %. We used the same threshold of 20 % for anthropogenic plumes for consistency.

We evaluate model predictions of aerosol size distributions, which are known to be important for the optical properties [START_REF] Boucher | On Aerosol Direct Shortwave Forcing and the Henyey-Greenstein Phase Function[END_REF] presented in Sects. 4.2.6.3 and 4.2.7. It is also important to note that activation in clouds, which is outside the scope of the present study, is also sensitive to aerosol size distributions [START_REF] Dusek | Size Matters More Than Chemistry for Cloud-Nucleating Ability of Aerosol Particles[END_REF]. Plumes for which we compare modeled and measured size distributions are indicated by ticks in Fig. 4-7 (referring to the modeled aerosol peak). Four anthropogenic plumes (I, J, M, ...) are included in this comparison.

Impacts of European aerosol transport on the Arctic

Results presented so far give us confidence in the way this transport event is represented in our simulation in terms of meteorology, PM 2.5 levels, size distributions, spatial extent, and vertical structure of the plumes. We now investigate the regional impacts of this transport event in the European Arctic region. The sign of the aerosol radiative effect over bright surfaces depends on the balance between warming and cooling effects. In our case, modeled European plumes contained higher levels of black carbon (2.5 to 3 % of submicron aerosol mass) than the measured value (1.98 %) used in the study of Lund Myhre et al. (2007). The transport event studied here also featured a high-altitude anthropogenic plume that would have a local warming effect above the high-albedo low-level clouds. The inclusion of the semi-direct effect in our study might have also played a limited role. At the surface, the direct aerosol effect causes local cooling for all types of land surfaces, including snow and ice (-1.1 W m -2 DSRE on average, -2.75 W m -2 at noon over Scandinavia and Finland). However, we also show in Fig. 4-12 that BC was enhanced at the surface in anthropogenic plumes, which could lead to surface warming through the effects of BC deposited on snow. Black carbon deposition is not coupled to snow albedo in WRF-Chem 3.5.1; however, the global model study of [START_REF] Fisher | Sources, distribution, and acidity of sulfate-ammonium aerosol in the Arctic in winter-spring[END_REF] showed that in spring 2008 (April-May), significant levels of anthropogenic BC (1 to 5 mgC m -2 month -1 ) were deposited on snow in northern Europe, leading to a 1 to 2 % change in the regional albedo of snow and ice. This change in snow albedo was estimated to cause a radiative effect of +1.7 W m -2 in April-May (average value for the Arctic north of 60 ∘ N).
[START_REF] Fisher | Sources, distribution, and acidity of sulfate-ammonium aerosol in the Arctic in winter-spring[END_REF] did not show the geographical distribution of this forcing, which should be higher in Scandinavia and Finland, because the snow-albedo change from BC deposition is higher in their study in continental Eurasia than in the rest of the Arctic.

Summary and conclusions

In this study, we investigate an aerosol transport event from Europe to the European Arctic using measurements as well as regional chemical transport modeling. However, this event is particularly interesting because of the extensive seasonal snow cover present in northern Scandinavia during this period. We show that the event had a significant local atmospheric warming effect over snow and ice surfaces. The average 96 h TOA direct and semi-direct shortwave radiative effect from this event over snow and sea ice is found to be +0.58 W m -2 north of 60 ∘ N. At solar noon, in regions significantly influenced by European aerosols, larger warming is predicted, +3.3 W m -2 (TOA direct and semi-direct radiative effects) over the Scandinavian and Finnish snow cover north of 60 ∘ N. This result is of the same order of magnitude as values previously reported for aerosols in the western Arctic (Brock et al., 2011; [START_REF] Quinn | Arctic haze: current trends and knowledge gaps[END_REF]). These radiative effect values do not include the impacts of cloud-aerosol interactions, which could be significant due to the extensive cloud cover in northern Scandinavia during this transport event.

Aerosol transport to the Arctic

The case study presented in this chapter shows that WRF-Chem is able to reproduce a long-range aerosol transport event from Europe to the Arctic. The model is evaluated over the European source region and in the Arctic in terms of aerosol concentrations, plume locations and optical properties, showing good agreement with two main exceptions. First, WRF-Chem seems to underestimate OA concentrations and overestimate NO - 3 concentrations in the mid-latitudes and in the Arctic. Second, comparison of model results with in-situ size distribution observations and airborne LIDAR measurements suggests that WRF-Chem underestimates aerosol growth in some older biomass burning plumes measured in altitude (4 km). These two exceptions can be explained by the lack of SOA formation in these simulations, since recent studies indicate that biomass burning emissions are an important global source of SOA [START_REF] Shrivastava | Global transformation and fate of SOA: Implications of low-volatility SOA and gas-phase fragmentation reactions[END_REF]. SOA can be formed by the oxidation of VOCs by gaseous NO 3 , which could also explain part of the NO - 3 overestimation. For this reason, SOA formation is later included in the Arctic-wide simulations presented in Chapter 6. This study also helps to identify that the transport event observed during POLARCAT-France involved a complex mix of sources (anthropogenic emissions, biomass burning), source regions (central Europe and West Asia), and transport pathways (fast high-altitude transport and slower low-level transport). These processes produced several aerosol pollution layers at different altitudes in the Arctic.
Wet removal has a strong impact (> 50 %) on PM 10 for both low-altitude transport and transport in frontal systems (in "warm conveyor belt" circulations), and is thus a critical process controlling aerosol amounts reaching the Arctic in spring. This study also illustrates that WRF-Chem can be used to estimate direct and semi-direct radiative effects of pollution aerosols. The estimate of the aerosol direct and semi-direct radiative effect at TOA associated with this event (3.3 W m -2 at solar noon and over snow- and ice-covered land) is comparable with previous estimates of the direct aerosol radiative effect in spring in the American Arctic (3.3 W m -2 and 2.5 W m -2 , respectively, in Brock et al., 2011 and [START_REF] Quinn | Arctic haze: current trends and knowledge gaps[END_REF]). Since WRF-Chem temperature, wind speed and humidity are nudged to FNL, the semi-direct aerosol effect is probably damped in these runs, and values might be more representative of the direct effect alone. A related WRF-Chem study (Thomas et al., 2013) also showed that WRF-Chem was able to represent plume composition in the Arctic and in source regions, and that significant ozone production occurred during long-range transport of anthropogenic and biomass burning pollution plumes from the mid-latitudes during summer (6.5 ppbv and 3 ppbv, respectively).

Chapter 5

Current impacts of Arctic shipping in Northern Norway

Motivation

Shipping is thought to be an important local source of Arctic pollution (AMAP, 2006; Arctic Council, 2009). Ships emit CO 2 and several air pollutants, notably BC, SO 2 and NO x . Shipping emissions have a current net global cooling effect on climate [START_REF] Eyring | Transport impacts on atmosphere and climate: Shipping[END_REF], due to the direct and indirect effects of sulfate aerosols formed from SO 2 emissions. However, since SO 2 has a much shorter lifetime than CO 2 , the net long-term climate effect of shipping emissions is warming due to CO 2 . Arctic warming and the associated decline in sea ice are expected to unlock the Arctic Ocean to human activity, and trans-Arctic shipping routes could be widely used by mid-century (Smith and Stephenson, 2013). Currently, Arctic shipping is highest along the Norwegian and Western Russian coasts, and previous studies investigating the impacts of Arctic shipping emissions in this region ([START_REF] Dalsøren | Environmental impacts of the expected increase in sea transportation, with a particular focus on oil and gas scenarios for Norway and northwest Russia[END_REF]; Ødemark et al., 2012) found that these emissions had some influence on ozone concentrations, sulfate and BC burdens, and radiative effects. However, these earlier studies were based on emission inventories known to be incomplete (i.e. not representing fishing ships), to underestimate marine traffic, or to be biased towards specific ship types (i.e. large cargo ships). Recent AIS-based inventories address some of these limitations.

The WRF-Chem model setup used in this study (Table 5.2, described in Sect. 5.2.4.1) is similar to the base model setup described in Chapter 2, with two main exceptions.
First, the boundary layer and surface schemes were changed from MYJ+Janjić [START_REF] Janjić | The Step-Mountain Eta Coordinate Model: Further Developments of the Convection, Viscous Sublayer, and Turbulence Closure Schemes[END_REF] to MYNN+MM5 (Mellor-Yamada-Nakanishi-Niino, [START_REF] Nakanishi | An Improved Mellor-Yamada Level-3 Model: Its Numerical Stability and Application to a Regional Prediction of Advection Fog[END_REF]; Fifth-Generation Penn State/NCAR Mesoscale Model, [START_REF] Zhang | A High-Resolution Model of the Planetary Boundary Layer -Sensitivity Tests and Comparisons with SESAME-79 Data[END_REF]). The reason for this change is that PBL heights from WRF-Chem are used in FLEXPART-WRF simulations, and the MYJ scheme often diagnoses very low PBL heights. Second, a different version of CBM-Z/MOSAIC including DMS and methane sulphonic acid (MSA) chemistry was used, since DMS is an important source of particles in the Arctic during summer [START_REF] Ferek | Dimethyl sulfide in the Arctic atmosphere[END_REF], and oceanic concentrations of DMS are particularly high in Northern Norway during summer [START_REF] Lana | An updated climatology of surface dimethlysulfide concentrations and emission fluxes in the global ocean[END_REF]. An online DMS emission scheme based on [START_REF] Nightingale | In situ evaluation of air-sea gas exchange parameterizations using novel conservative and volatile tracers[END_REF] and [START_REF] Saltzman | Experimental determination of the diffusion coefficient of dimethylsulfide in water[END_REF] was also implemented in WRF-Chem for these simulations.

This study was published in Atmospheric Chemistry and Physics as Marelle, L., Thomas, J. L., Raut, J.-C., Law, K. S., Jalkanen, J.-P., Johansson, L., Roiger, A., Schlager, H., Kim, J., Reiter, A., and Weinzierl, B.: Air quality and radiative impacts of Arctic shipping emissions in the summertime in northern Norway: from the local to the regional scale (Marelle et al., 2016). The paper is reproduced in the following section.

Abstract

In this study, we quantify the impacts of shipping pollution on air quality and shortwave radiative effects.

Introduction

Shipping is an important source of air pollutants and their precursors, including carbon monoxide (CO), nitrogen oxides (NO x ), sulfur dioxide (SO 2 ), volatile organic compounds (VOCs) as well as organic carbon (OC) and black carbon (BC) aerosols [START_REF] Corbett | Emissions from Ships[END_REF][START_REF] Corbett | Updated emissions from ocean shipping[END_REF]. It is well known that shipping emissions have an important influence on air quality in coastal regions, often enhancing ozone (O 3 ) and increasing aerosol concentrations (e.g., [START_REF] Endresen | Emission from international sea transportation and environmental impact[END_REF]). [START_REF] Corbett | Mortality from Ship Emissions: A Global Assessment[END_REF] and [START_REF] Winebrake | Mitigating the Health Impacts of Pollution from Oceangoing Shipping: An Assessment of Low-Sulfur Fuel Mandates[END_REF] showed that aerosol pollution from ships might be linked to cardiopulmonary and lung diseases globally. Because of their negative impacts, shipping emissions are increasingly subject to environmental regulations.
In addition to its impacts on air quality, maritime traffic already contributes to climate change, by increasing the concentrations of greenhouse gases (CO 2 , O 3 ) and aerosols (SO 4 , OC, BC) [START_REF] Capaldo | Effects of ship emissions on sulphur cycling and radiative climate forcing over the ocean[END_REF][START_REF] Endresen | Emission from international sea transportation and environmental impact[END_REF]. The current radiative forcing of shipping emissions is negative and is dominated by the cooling influence of sulfate aerosols formed from SO 2 emissions [START_REF] Eyring | Transport impacts on atmosphere and climate: Shipping[END_REF]. However, due to the long lifetime of CO 2 compared to sulfate, shipping emissions warm the climate in the long term (after 350 years; [START_REF] Fuglestvedt | Shipping Emissions: From Cooling to Warming of Climate-and Reducing Impacts on Health[END_REF]). In the future, global shipping emissions of SO 2 are expected to decrease due to IMO regulations, while global CO 2 emissions from shipping will continue to grow due to increased traffic. This combination is expected to cause warming relative to the present day [START_REF] Fuglestvedt | Shipping Emissions: From Cooling to Warming of Climate-and Reducing Impacts on Health[END_REF]Dalsøren et al., 2013). In addition to their global impacts, shipping emissions are of particular concern in the Arctic, where they are projected to increase in the future as sea ice declines (for details on future sea ice, see e.g., Stroeve et al., 2011). Decreased summer sea ice, associated with warmer temperatures, is progressively opening the Arctic region to transit shipping, and projections indicate that new trans-Arctic shipping routes should be available by mid-century (Smith and Stephenson, 2013). Other shipping activities are also predicted to increase, including shipping associated with oil and gas extraction (Peters et al., 2011). Sightseeing cruises have increased significantly during the last decades (Eckhardt et al., 2013), although it is uncertain whether or not this trend will continue. Future Arctic shipping is expected to have important impacts on air quality in a now relatively pristine region (e.g., [START_REF] Granier | Ozone pollution from future ship traffic in the Arctic northern passages[END_REF]), and will influence both Arctic and global climate (Dalsøren et al., 2013; [START_REF] Lund | Global-Mean Temperature Change from Shipping toward 2050: Improved Representation of the Indirect Aerosol Effect in Simple Climate Models[END_REF]). In addition, it has recently been shown that routing international maritime traffic through the Arctic, as opposed to traditional routes through the Suez and Panama canals, will result in warming in the coming century and cooling over the long term (150 years). This is due to the opposing impacts of reduced SO 2 , linked to IMO regulations, and of reduced CO 2 and O 3 , associated with fuel savings from using these shorter Arctic routes [START_REF] Fuglestvedt | Climate Penalty for Shifting Shipping to the Arctic[END_REF]. In addition, sulfate is predicted to cause a weaker cooling effect for the northern routes [START_REF] Fuglestvedt | Climate Penalty for Shifting Shipping to the Arctic[END_REF]. Although maritime traffic is relatively minor at present in the Arctic compared to global shipping, even a small number of ships can significantly degrade air quality in regions where other anthropogenic emissions are low (Aliabadi et al., 2015; Eckhardt et al., 2013).
In this study, we aim to quantify the impacts of shipping along the Norwegian coast in July 2012, using airborne measurements from the ACCESS (Arctic Climate Change, Economy and Society) aircraft campaign [START_REF] Roiger | Quantifying Emerging Local Anthropogenic Emissions in the Arctic Region: The ACCESS Aircraft Campaign Experiment[END_REF]. Using measurements from this campaign, we investigate the local (i.e., at the plume scale) and regional impacts of shipping pollution on air quality and shortwave radiative effects along the coast of northern Norway.

The ACCESS aircraft campaign

Modeling tools

FLEXPART-WRF and WRF

Plume dispersion simulations are performed with FLEXPART-WRF for the four ships presented in Table 5.1, in order to estimate their emissions of NO x and SO 2 . FLEXPART-WRF (Brioude et al., 2013) is a version of the Lagrangian particle dispersion model FLEXPART (Stohl et al., 2005), driven by meteorological fields from the mesoscale weather forecasting model WRF [START_REF] Skamarock | A description of the Advanced Research WRF Version[END_REF]. In order to drive FLEXPART-WRF, a meteorology-only WRF simulation (MET) is performed.

Ship emissions can continue to rise after leaving the exhaust, due to their vertical momentum and buoyancy. This was taken into account in the FLEXPART-WRF simulations by calculating effective injection heights for each targeted ship, using a simple plume rise model [START_REF] Briggs | A Plume Rise Model Compared with Observations[END_REF]. This model takes into account ambient meteorological conditions.

WRF-Chem

In order to estimate the impacts of shipping on regional air quality and radiative effects, WRF-Chem simulations are performed. SO 2 emissions in the domain are dominated by emissions associated with smelting activities that occur on the Russian Kola Peninsula ([START_REF] Virkkula | The influence of Kola Peninsula, continental European and marine sources on the number concentrations and scattering coefficients of the atmospheric aerosol in Finnish Lapland[END_REF]; Prank et al., 2010); the Kola Peninsula emissions represent 79 % of the total HTAPv2 SO 2 emissions in the domain. Primary aerosol emissions from STEAM2 (BC, OC, SO 4 , and ash) are distributed into the eight MOSAIC aerosol bins in WRF-Chem, according to the mass size distribution measured in the exhaust of ships equipped with medium-speed diesel engines by [START_REF] Lyyränen | Aerosol characterisation in medium-speed diesel engines operating with heavy fuel oils[END_REF]. The submicron mode of this measured distribution is used to distribute primary BC, OC, and SO = 4 , while the coarse mode is used to distribute exhaust ash particles (represented as "other inorganics" in MOSAIC).

Ship emission evaluation

In this section, emissions of NO x and SO 2 are determined for the four ships sampled during ACCESS flights (shown in Table 5.1). We compare airborne measurements in ship plumes and concentrations predicted by FLEXPART-WRF plume dispersion simulations. In order to derive emission fluxes, good agreement between measured and modeled plume locations is required (discussed in Sect. 5.2.5.1). The methods, derived emission values for the four ships, and the comparison with STEAM2 emissions are presented in Sect. 5.2.5.2.

Figure 5-3 shows the comparison between maps of the measured NO x and plume locations predicted by FLEXPART-WRF. This figure also shows the typical meandering pattern of the plane during ACCESS, measuring the same ship plumes several times as they age, while moving further away from the ship [START_REF] Roiger | Quantifying Emerging Local Anthropogenic Emissions in the Arctic Region: The ACCESS Aircraft Campaign Experiment[END_REF].
For the Wilson Leer and Costa Deliziosa, the modeled plume of the Costa Deliziosa is, on average, located 4.7 km to the west of the measured plume. This displacement is small considering that, at the end of this flight leg, the plume was being sampled ∼ 80 km away from its source. This displacement is caused by biases in the simulation (MET) used to drive the plume dispersion model (-16 ∘ for wind direction, +14 % for wind speed). On 12 July 2012, the aircraft targeted emissions from the Wilson Nanjing ship (Fig. 5-3e and f), but also sampled the plume of another ship, the Alaed.

Ship plume representation

Ship emission derivation and comparison with STEAM2

In this section, we describe the method for deriving ship emissions of NO x and SO 2 using FLEXPART-WRF and measurements. This method relies on the fact that, in the FLEXPART-WRF simulations presented in Sect. 5.2.4.1, there is a linear relationship between the constant emission flux of the tracer chosen for the simulation and the tracer concentrations in the modeled plume. The only source of non-linearity that cannot be taken into account is changes in the emission source strength, which is assumed to be constant in time for the plumes sampled. Given that the ship and meteorological conditions were consistent during sampling (shown in the Supplement, Fig. S1), we expect that these effects would be very small. In our simulations, this constant emission flux is set to 𝐸 = 0.1 kg s -1 and is identical for all ships. This initial value 𝐸 is scaled for each ship by the ratio of the measured and modeled areas of the peaks in concentration corresponding to plume crossings, as shown in Fig. 5-4. Equation (1) shows how SO 2 emissions are derived by this method:

E_i = E \times \frac{\int_{t_{\mathrm{begin},i}}^{t_{\mathrm{end},i}} \big(\mathrm{SO}_2^{\mathrm{measured}}(t) - \mathrm{SO}_2^{\mathrm{background}}\big)\,\mathrm{d}t}{\int_{t_{\mathrm{begin},i}}^{t_{\mathrm{end},i}} \big(\mathrm{SO}_2^{\mathrm{modeled}}(t) - \mathrm{SO}_2^{\mathrm{modeled\ background}}\big)\,\mathrm{d}t} \qquad (1)

Therefore, we expect that the large discrepancy in NO x for one individual ship (the Costa Deliziosa) has only a small impact on the total regional emissions generated by STEAM2. The results presented later in Sect. 5.2.6.1 also indicate that STEAM2 likely performs better on average in the Norwegian Sea during ACCESS than for individual ships.

Comparison of STEAM2 to other shipping emission inventories for northern Norway

We compare in Table 5.5 the July emission totals for NO x , SO 2 , BC, OC and SO = 4 in northern Norway (latitudes 60.6 to 73 ∘ N, longitudes 0 to 31 ∘ W) for STEAM2 and four other shipping emission inventories used in previous studies investigating shipping impacts in the Arctic. We include emissions from the [START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF], Dalsøren et al. (2009) and [START_REF] Dalsøren | Environmental impacts of the expected increase in sea transportation, with a particular focus on oil and gas scenarios for Norway and northwest Russia[END_REF] inventories. Fishing ship emissions are included in [START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF]. These emissions could not be precisely distributed geospatially using earlier methodologies, since fishing ships do not typically follow a simple course [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF].
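Before moving on to the inventory comparison, the scaling in Eq. (1) can be written compactly as in the sketch below, which integrates the background-subtracted measured and modeled peaks over a plume crossing and rescales the assumed 0.1 kg s -1 tracer flux. The synthetic time series and variable names are purely illustrative, not the actual campaign data.

```python
import numpy as np

def derive_emission(e_sim, t, conc_obs, conc_mod,
                    background_obs=0.0, background_mod=0.0):
    """Scale the constant tracer emission flux used in the dispersion run
    (e_sim, kg/s) by the ratio of measured to modeled plume peak areas,
    following the linear-scaling idea of Eq. (1). Integration uses the
    trapezoidal rule over the plume crossing interval t."""
    area_obs = np.trapz(conc_obs - background_obs, t)
    area_mod = np.trapz(conc_mod - background_mod, t)
    return e_sim * area_obs / area_mod

# Example with synthetic 1-minute data over one plume crossing
t = np.arange(0, 600, 60.0)                            # seconds
obs = 2.0e-9 * np.exp(-0.5 * ((t - 300) / 80) ** 2)    # measured mixing ratio peak
mod = 1.5e-9 * np.exp(-0.5 * ((t - 320) / 90) ** 2)    # modeled tracer peak
print(derive_emission(0.1, t, obs, mod))               # rescaled emission flux (kg/s)
```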
[START_REF] Dalsøren | Environmental impacts of the expected increase in sea transportation, with a particular focus on oil and gas scenarios for Norway and northwest Russia[END_REF] emissions for coastal shipping in Norwegian waters are estimated based on Norwegian shipping statistics for the year 2000, and contain higher NO x , BC, and OC emissions, but less SO 2 , than the 2004 inventories. This comparison indicates that earlier ship emission inventories usually contain lower emissions in this region, which can be explained by the current growth in shipping traffic in northern Norway. This means that up-to-date emissions are required in order to assess the current impacts of shipping in this region.

Modeling the impacts of ship emissions along the Norwegian coast

In this section, WRF-Chem, using STEAM2 ship emissions, is employed to study the influence of ship pollution on atmospheric composition along the Norwegian coast, at both the local (i.e., at the plume scale) and regional scale. As shown in Fig. 5-4, shipping pollution measured during ACCESS is inhomogeneous, with sharp NO x and SO 2 peaks in thin ship plumes, emitted into relatively clean background concentrations. The measured concentrations are on spatial scales that can only be reproduced using very high-resolution WRF-Chem simulations (a few kilometers of horizontal resolution), but such simulations can only be performed for short periods and over small domains. Therefore, high-resolution simulations cannot be used to estimate the regional impacts of shipping emissions. In order to bridge the scale between measurements and model runs that can be used to make conclusions about the regional impacts of shipping pollution, we compare in Sect. 5.2.6.1 WRF-Chem simulations using STEAM2 ship emissions, at 3 km × 3 km resolution (CTRL3) and at 15 km × 15 km resolution (CTRL). Specifically, we show in Sect. 5.2.6.1 that both the CTRL3 and CTRL simulations reproduce the average regional influence of ships on NO x , O 3 , and SO 2 , compared to ACCESS measurements. In Sect. 5.2.6.2 we use the CTRL simulation to quantify the regional contribution of ships to surface pollution and shortwave radiative fluxes in northern Norway.

5.2.6.1 Model evaluation from the plume scale to the regional scale

It is well known that ship plumes contain fine-scale features that cannot be captured by most regional or global chemical transport models. This fine plume structure influences the processing of ship emissions, including O 3 and aerosol formation, which are non-linear processes that largely depend on the concentration of species inside the plume. Some models take into account the influence of the instantaneous mixing of ship emissions in the model grid box by including corrections to the O 3 production and destruction rates (Huszar et al., 2010) or take into account plume ageing before dilution by using corrections based on plume chemistry models (Vinken et al., 2011). Here, we take an alternative approach by running the model at a sufficient resolution to distinguish individual ships in the Norwegian Sea (CTRL3 run at 3 km × 3 km resolution), and at a lower resolution (CTRL run at 15 km × 15 km resolution). It is clear that a 3 km × 3 km horizontal resolution is not sufficiently small to capture all small-scale plume processes. However, by comparing the CTRL3 simulation to ACCESS measurements, we show in this section that this resolution is sufficient to reproduce the average influence of ship emissions observed during the flights. Most of the modeled background PM 1 during ship plume sampling is sea salt (54 % in the NOSHIPS3 simulation).
Because of this, comparing modeled and observed in-plume PM 1 directly would be mostly representative of background aerosols, especially sea salt, which is not the focus of this paper. Correlations between modeled (CTRL) and measured profiles are significant for NO x and O 3 (𝑟 2 = 0.82 and 0.90). However, the correlation is very low between measured and modeled SO 2 (𝑟 2 = 0.02), and it is not improved compared to the NOSHIPS simulation. Ships have the largest influence on NO x and SO 2 profiles, a moderate influence on O 3 and do not strongly influence PM 2.5 profiles along the ACCESS flights. However, this small increase in PM 2.5 corresponds to a larger relative increase in sulfate concentrations and in particle numbers in the size ranges typically activated as cloud condensation nuclei (shown in Fig. S4 in the Supplement). NO x concentrations are overestimated in the parts of the profile strongly influenced by shipping emissions. This is in agreement with the findings of Sect. 5.2.5.2, showing that STEAM2 NO x emissions were overestimated for the ships sampled during ACCESS. However, the CTRL simulation performs well on average, suggesting that the STEAM2 inventory is able to represent the average NO x emissions from ships along the northern Norwegian coast during the study period. The bias for SO 2 is very low compared to results from Eyring et al. (2007), which showed that global models significantly underestimated SO 2 in the polluted marine boundary layer in July. Since aerosols from ships contain mostly secondary sulfate formed from SO 2 oxidation, the validation of modeled SO 2 presented in Fig. 5-8 also gives some confidence in our aerosol results compared to earlier studies investigating the air quality and radiative impacts of shipping aerosols. We therefore use the 15 km × 15 km CTRL run for further analysis of the regional influence of ships on pollution and the shortwave radiative effect in this region in Sect. 5.2.6.2.

[START_REF] Dalsøren | Environmental impacts of the expected increase in sea transportation, with a particular focus on oil and gas scenarios for Norway and northwest Russia[END_REF] did not include the impact of international transit shipping along the Norwegian coast. Our estimated impact on O 3 in this region (6 % and 1.5 ppbv increase) is about half of the one determined by Ødemark et al. (2012) (12 % and 3 ppbv), for the total Arctic fleet in summer (June-August-September) 2004, using ship emissions for the year 2004 from Dalsøren et al. (2009). It is important to note that we expect lower impacts of shipping in studies based on earlier years, because of the continued growth of shipping emissions along the Norwegian coast (as discussed in Sect. 5.2.5.3 and illustrated in Table 5.5). However, stronger or lower emissions do not seem to completely explain the different modeled impacts. Ødemark et al. (2012) found that Arctic ships had a strong influence on surface O 3 in northern Norway for relatively low 2004 shipping emissions. This could be explained by the different processes included in both models, or by different meteorological situations in the two studies, which are based on two different meteorological years (2004 and 2012). However, it is also likely that the higher O 3 in the Ødemark et al. (2012) study could be caused, in part, by nonlinear effects associated with global models run at low resolutions. For example, Vinken et al. (2011) showed that instantly diluting ship NO x emissions into coarse model grid cells can lead to overestimated O 3 production.
We estimated the lifetime (residence time) of BC originating from ship emissions using the method presented in [START_REF] Fuglestvedt | Climate Penalty for Shifting Shipping to the Arctic[END_REF]. This residence time is defined as the ratio of the average BC burden from ships divided by the average BC emissions in STEAM2 during the simulation. Using this method, we find a BC lifetime of 1.4 days. This short lifetime can be explained by the negative sea level pressure anomalies over northern Norway during the ACCESS campaign [START_REF] Roiger | Quantifying Emerging Local Anthropogenic Emissions in the Arctic Region: The ACCESS Aircraft Campaign Experiment[END_REF], which indicate more rain and clouds than normal during summer. Given this short lifetime, BC is not efficiently transported away from the source region.

Shortwave radiative effect of ship emissions in northern Norway

The present-day climate effect of ship emissions is mostly due to aerosols, especially sulfate, which cool the climate through their direct and indirect effects [START_REF] Capaldo | Effects of ship emissions on sulphur cycling and radiative climate forcing over the ocean[END_REF]. However, large uncertainties still exist concerning the magnitude of the aerosol indirect effects [START_REF] Boucher | Clouds and Aerosols, book section 7[END_REF]. In this section, we determine the total shortwave radiative effect of ship emissions in this region.

Conclusions

The focus of this work, linking modeling and measurements, is to better quantify regional atmospheric impacts of shipping emissions, which previous studies have suggested are also significant (Dalsøren et al., 2013; Ødemark et al., 2012). However, since shipping emissions are highly variable and localized, quantifying impacts using global models can be challenging. Our approach used a regional chemical transport model at different scales, with high-resolution ship emissions, to evaluate model results against observations and estimate the regional impact of shipping emissions. In the future, additional work is needed in other regions and at different spatial scales (measurements and modeling) in order to investigate the impacts of shipping over the wider Arctic area.

Main insights from the study

Local sources of aerosol and ozone pollution in the Arctic are thought to be rather small, but are expected to grow along with future Arctic warming and sea-ice loss. In particular, the decline in sea ice should unlock the Arctic Ocean to human activity, and Arctic shipping emissions are expected to increase. Trans-Arctic shipping (Northern Sea Route Information Office, 2013) and Arctic cruise tourism [START_REF] Stewart | Sea Ice in Canada's Arctic: Implications for Cruise Tourism[END_REF] are already thought to be on the rise. Currently, Arctic shipping emissions are estimated to be highest along the Norwegian and Western Russian coasts. However, until the recent ACCESS aircraft campaign in summer 2012, there was no dedicated measurement dataset to study the impacts of these emissions. In this chapter, WRF-Chem simulations at 15 km × 15 km and 3 km × 3 km horizontal resolutions are combined with a new shipping emission inventory created for this study (by researchers at the Finnish Meteorological Institute) using the emission model STEAM2. These simulations are used to analyze measurements from the ACCESS aircraft campaign, in order to evaluate the current effect of shipping emissions in this region on aerosol and ozone concentrations and aerosol radiative effects.
This study shows that WRF-Chem simulations at a 15 km × 15 km horizontal resolution are able to reproduce meteorological conditions observed during the ACCESS campaign, as well as the average observed profiles of SO 2 , O 3 and NO x in the polluted marine boundary layer. SO 2 comparisons are improved compared to earlier model intercomparisons in similar conditions (Eyring et al., 2007). This improvement might be due, in part, to the detailed shipping emission inventory used and, in part, to the representation of the SO 2 source from DMS, which appears to constitute most of the background concentrations. Model results also indicate that WRF-Chem simulations compare relatively well with measurements, at both plume-resolving (3 km × 3 km) and non-plume-resolving (15 km × 15 km) scales. Modeled enhancements of PM 10 and O 3 from ships do not appear to be significantly affected by this change in resolution, but it is not certain from this study how well shipping impacts would be represented by simulations at even lower resolutions (100 km × 100 km in Chapter 6). For example, [START_REF] Cohan | Dependence of ozone sensitivity analysis on grid resolution[END_REF] found little sensitivity of O 3 production in an urban area to horizontal resolution when comparing runs at 4 km and 12 km resolutions, but showed that simulations at a 36 km resolution did not perform as well. This study also aimed to validate the new STEAM2 AIS shipping emission inventory of Jalkanen et al. (2012), by comparing it to emissions of NO x and SO 2 derived for individual ships measured during ACCESS. This comparison shows that, for individual ships, very large differences are possible between STEAM2 emissions and emissions calculated from measurements, although previous evaluations of STEAM2 for a larger number of ships (Beecken et al., 2015) indicate that these differences are reduced when integrated over a large fleet. This indicates that the ACCESS dataset (4 ships) is too small to draw conclusions for the whole STEAM2 inventory used in this Chapter (1366 individual ships). However, WRF-Chem simulations using STEAM2 agree relatively well with ACCESS average profiles in the polluted marine boundary layer, indicating that STEAM2 emissions in this region are qualitatively correct. STEAM2 July emission totals in this region are similar to emission totals from the [START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF] inventory (also based on AIS) used in Chapter 6. The results from this study also indicate that shipping in Northern Norway already has significant regional impacts on SO 2 , NO x , BC and SO 2- 4 concentrations, as well as strong radiative impacts due to cloud/aerosol interactions. The complex treatment of aerosol/cloud interactions included in WRF-Chem does not lead to a very different estimate of indirect aerosol radiative effect than the one made by Ødemark et al. (2012) using a simpler approach. BC concentrations at the surface increase by up to 40 % due to shipping emissions, but this does not appear to be associated with a strong positive radiative effect, probably because these increases in BC occur over the low-albedo open ocean, and often below clouds. BC from ships could have a higher direct radiative effect if emissions are located close to or within sea-ice. Shipping impacts on O 3 are relatively modest (approximately 6 %) and lower than previous estimates. 
This could be due, in part, to lower non-linear effects due to lower dilution of shipping NO x emissions in relatively fine (15 km × 15 km) model grids. However, this lower impact on O 3 could also be an artifact of the limited domain size, as our simulations only represent the effect of local emissions and do not include transport of ozone produced from shipping emissions in other parts of the Arctic. These results are put into a larger Arctic-wide context in Chapter 6.

Chapter 6. Current and future impacts of local Arctic sources of aerosols and ozone

Introduction and motivation

The Arctic is increasingly open to human activity, due to rapid Arctic warming associated with decreased sea ice extent and snow cover. Pollution from in-Arctic sources was previously thought to be low, but oil and gas extraction and marine traffic could already be important sources of short-lived pollutants (aerosols, ozone) in the Arctic. Arctic shipping has been shown to increase O 3 concentrations along the Northern Norwegian Coast during summer by 1.5 ppbv-3 ppbv ([START_REF] Dalsøren | Environmental impacts of the expected increase in sea transportation, with a particular focus on oil and gas scenarios for Norway and northwest Russia[END_REF]; Ødemark et al., 2012; Marelle et al., 2016), and could also significantly enhance summertime surface black carbon and sulfate (up to +50 % and +30 % respectively, Marelle et al., 2016) and black carbon and sulfate burdens in this region (Ødemark et al., 2012). The resulting radiative impact of Arctic shipping emissions is thought to be negative (Ødemark et al., 2012; Dalsøren et al., 2013; Marelle et al., 2016), due to the direct and indirect effect of sulfate aerosols. Oil and gas activity was shown by Ødemark et al. (2012) to increase the black carbon burden in Northern Russia, causing significant warming due to the effect of BC in air and BC deposited on snow and ice. Recent emission estimates for the Arctic oil and gas sector by [START_REF] Klimont | Global scenarios of air pollutants and methane[END_REF] and [START_REF] Huang | Russian anthropogenic black carbon: Emission reconstruction and Arctic black carbon simulation[END_REF] indicate that BC emissions from gas flaring in the Arctic could be much higher than previously thought, and that their impacts might have been underestimated in past studies (Stohl et al., 2013). In the future, global and Arctic shipping emissions are expected to increase due to enhanced traffic, except for sulfur emissions, which will decrease (by mass of fuel burned) due to new regulations [START_REF] Imo | Report of the Marine Environment Protection Committee on the Sixty-First Session[END_REF][START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF]. As a result, Dalsøren et al. (2013) found that BC and O 3 burdens due to Arctic shipping could increase between 2004 and 2030, but that sulfate burdens could decrease. Because of this, shipping in 2030 is expected to cause warming in the Arctic relative to the present day, due to the reduced cooling effect from reduced sulfate, and the increased warming effect of rising BC and O 3 . [START_REF] Fuglestvedt | Climate Penalty for Shifting Shipping to the Arctic[END_REF] investigated the impacts of shifting a fraction of global shipping through the Arctic Ocean (NSR and NWP), and showed that this would result in warming in the coming century and cooling over the long term (150 years).
This response is due to the opposite sign of impacts due to reduced SO 2 (stricter regulations) and reduced CO 2 and O 3 (fuel savings from using shorter Arctic routes). In addition, sulfate is predicted to cause a weaker cooling effect in the Arctic. [START_REF] Fuglestvedt | Climate Penalty for Shifting Shipping to the Arctic[END_REF] also showed that the response to increased Arctic shipping can be complex, since increased oxidant levels from Arctic shipping emissions could decrease the lifetime of Arctic CH 4 and SO 2 , and cause changes in concentrations and radiative effects further away from the Arctic. There is, to our knowledge, no study investigating the future impacts of oil and gas activity in the Arctic. It is also currently still unclear if these local sources can become significant in the future compared to other sources of Arctic pollution, such as long-range transport of anthropogenic pollution from the mid-latitudes, and emissions from biomass burning. In this Chapter, quasi-hemispheric WRF-Chem simulations are performed in order to quantify the impact of remote and local Arctic emission sources on aerosol and ozone concentrations, aerosol and ozone radiative effects, and aerosol deposition.

Methods

In this study, the WRF-Chem model is run with new inventories for local Arctic shipping [START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF] and gas flaring associated with oil and gas extraction [START_REF] Klimont | Global scenarios of air pollutants and methane[END_REF]. The objective of these simulations is to study the effect of these local emission sources on aerosol and ozone concentrations, on black carbon deposition and on the radiative impacts of aerosols and ozone in the Arctic. This study also aims to determine if these impacts could be significant relative to the impacts of remote emissions transported to the Arctic. These impacts are determined by performing 6-month long (March-August), quasi-hemispheric WRF-Chem simulations over the Arctic region. In order to represent both local Arctic emissions and emissions transported from the mid-latitudes, the simulation domain needs to include all sources of emissions potentially transported from the mid-latitudes to the Arctic. Based on the previous work of [START_REF] Stohl | Characteristics of atmospheric transport into the Arctic troposphere[END_REF], the model domain was selected to include sources of pollution potentially transported to the Arctic in less than 30 days (Figures 8, 9 and 10 in Stohl, 2006), a transport time larger than the mean ozone and aerosol lifetimes in the troposphere. The model setup is presented in Table 6.1. There are 4 major changes compared to earlier model setups presented in Table 4.1 and Table 5.2. First, simulations presented in this chapter use a version of MOSAIC coupled with a SOA formation mechanism, VBS-2 (Volatility Basis Set with 2 volatility species, Shrivastava et al., 2011). The VBS-2 mechanism treats the partitioning of OA between the volatile and the condensed phase using the "volatility basis set" approach [START_REF] Robinson | Rethinking Organic Aerosols: Semivolatile Emissions and Photochemical Aging[END_REF], and includes SOA formation from the oxidation of anthropogenic VOCs, biogenic VOCs and Semi-volatile and Intermediate-Volatility Organic Compounds (S/IVOCs, [START_REF] Robinson | Rethinking Organic Aerosols: Semivolatile Emissions and Photochemical Aging[END_REF]).
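To make the "volatility basis set" idea concrete, the sketch below shows the standard equilibrium partitioning calculation used in VBS-type approaches, in which the particle-phase fraction of each volatility bin depends on its effective saturation concentration C* and on the total organic aerosol mass. This is a generic, minimal illustration and not the actual WRF-Chem VBS-2 code; the two-bin configuration, saturation concentrations and organic loadings below are hypothetical values chosen only for demonstration.

```python
import numpy as np

# Generic volatility basis set (VBS) equilibrium partitioning sketch.
# Each volatility bin i has an effective saturation concentration Cstar[i] (ug m-3).
# At equilibrium, the particle-phase fraction of bin i is
#     xi_i = 1 / (1 + Cstar[i] / C_OA),
# where C_OA is the total organic aerosol mass, so the system is solved iteratively.
# Bin values and organic loadings below are hypothetical, for illustration only.

def vbs_partition(c_total, cstar, c_oa_init=1.0, tol=1e-6, max_iter=100):
    """Partition total organics (per bin, ug m-3) between gas and particle phases."""
    c_oa = c_oa_init
    for _ in range(max_iter):
        xi = 1.0 / (1.0 + cstar / c_oa)     # particle-phase fraction per bin
        c_oa_new = np.sum(c_total * xi)     # resulting organic aerosol mass
        if abs(c_oa_new - c_oa) < tol:
            break
        c_oa = c_oa_new
    return xi, c_oa

# Two volatility bins, loosely mimicking a "VBS-2"-like configuration (hypothetical)
cstar = np.array([1.0, 10.0])    # saturation concentrations (ug m-3)
c_total = np.array([2.0, 4.0])   # total (gas + particle) organics per bin (ug m-3)

xi, c_oa = vbs_partition(c_total, cstar)
print("particle-phase fractions per bin:", xi)
print("equilibrium organic aerosol mass (ug m-3):", c_oa)
```

In this formulation, lowering the total organic loading shifts mass back to the gas phase, which is the behavior the VBS approach is designed to capture.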
For reasons explained in Section 6.3, SOA formation from S/IVOCs was not included in these simulations. Second, the gas-phase chemistry mechanism was changed from CBM-Z to SAPRC-99, since SAPRC-99/MOSAIC is the only chemistry and aerosol mechanism in WRF-Chem 3.5.1 including both VBS-2 SOA and aerosol/cloud interactions. Third, LW and SW radiative calculations are now performed by the RRTMG scheme, which is coupled with WRF-Chem aerosol optical properties, and can be easily coupled with WRF-Chem predicted ozone (Sect. 6.3 and 6.8). Fourth, sub-grid (cumulus) clouds are represented here by the KF-CuP (Kain-Fritsch + cumulus potential scheme) parameterization (Berg et al., 2015). The KF-CuP scheme was developed to include aerosol/cloud and chemistry/cloud interactions in sub-grid clouds in MOSAIC, including tracer convection, wet removal, and aqueous chemistry. Taking into account these processes is especially critical in these low-resolution simulations, since sub-grid clouds are expected to make up a larger proportion of total clouds as resolution decreases. In the context of this thesis, the model was also updated and several bugs were corrected for these simulations (Sect. 6.3).

Table 6.1 - WRF-Chem setup for the quasi-hemispheric, 6-month long simulations.
Chemistry & aerosol options:
- Gas-phase chemistry: SAPRC-99 [START_REF] Carter | Documentation of the SAPRC-99 chemical mechanism for VOC reactivity assessment. Final Report to California Air Resources[END_REF]
- Aerosols: MOSAIC 8-bins [START_REF] Zaveri | Model for Simulating Aerosol Interactions and Chemistry (MOSAIC)[END_REF], including VBS-2 (SOA formation, Shrivastava et al., 2011) and aqueous chemistry
- Photolysis: Fast-J [START_REF] Wild | Fast-J: Accurate Simulation of In-and Below-Cloud Photolysis in Tropospheric Chemical Models[END_REF]
Meteorological options:
- Planetary boundary layer: MYJ [START_REF] Janjić | The Step-Mountain Eta Coordinate Model: Further Developments of the Convection, Viscous Sublayer, and Turbulence Closure Schemes[END_REF]
- Surface layer: Monin-Obukhov Janjic Eta scheme [START_REF] Janjić | The Step-Mountain Eta Coordinate Model: Further Developments of the Convection, Viscous Sublayer, and Turbulence Closure Schemes[END_REF]
- Land surface: Unified Noah land-surface model

Simulating Arctic-wide pollution requires representing both pollution transported from the mid-latitudes to the Arctic, and pollution from local sources. However, when quasi-hemispheric WRF-Chem simulations (without sub-grid cloud-aerosol interactions or SOA formation) were compared by Eckhardt et al. (2015) to results from several global models and to aerosol observations in the Arctic, this intercomparison revealed that WRF-Chem struggled to reproduce BC and SO 2- 4 concentrations at Arctic surface sites, as well as aerosol concentrations aloft during summer. The source of these discrepancies has been investigated here and three main sources of error have been identified and corrected in this Chapter:
• The computation of skin temperatures over the prescribed sea ice in the Noah Land Surface Module was found to produce unrealistically high temperatures (∼ 5 to 10 K) during the ice-melt season. These biases reduced atmospheric stability in the Arctic, and increased vertical mixing, bringing high altitude pollution to the surface.
These calculations were corrected to take into account the fact that, during ice melt, the skin temperature of sea ice cannot rise above the freezing temperature [START_REF] Deluc | Recherches sur les modifications de l'atmosphère[END_REF].
• WRF-Chem simulations presented in Eckhardt et al. (2015) did not take into account wet removal of aerosols from sub-grid clouds, which make up a large part of the total simulated cloud cover and total simulated precipitation amounts in low resolution simulations. Simulations presented in this chapter include these processes as represented by the KF-CuP cumulus scheme, recently developed by Berg et al. (2015).
• In earlier simulations, aerosol sedimentation was only performed in the first model level and only took into account the contribution of sedimentation to dry deposition, but not its role in bringing large particles from higher altitudes to the surface. An explicit size-resolved sedimentation scheme was developed for MOSAIC, using the same algorithm for calculating settling velocities as the one already included in MOSAIC for sedimentation at the surface.

Further analysis of the WRF-Chem simulations presented in Chapter 4 (discussed in Sect. 4.3) also revealed that the model tends to underestimate ozone concentrations over snow and ice and in the Arctic troposphere. In order to perform the new simulations presented in this Chapter, two main causes of this underestimation were identified and corrected:
• Dry deposition ([START_REF] Wesely | Parameterization of surface resistances to gaseous dry deposition in regional-scale numerical models[END_REF] in WRF-Chem) is known to be lower in winter and over snow- and ice-covered ground, due to reduced stomatal uptake of gases by plants, and enhanced atmospheric stability. In previous versions of the model, the predicted snow cover and the prescribed ice cover were not coupled to the dry deposition scheme for the CBM-Z and SAPRC-99 mechanisms. For this reason, the model only took into account reduced deposition over snow and ice over "permanently" snow- and ice-covered surfaces, e.g. mountain tops, the Greenland ice sheet. Here, WRF-Chem sea ice and snow cover were coupled to the dry deposition scheme, to force "wintertime" conditions ("Winter, snow on ground and near freezing" category in [START_REF] Wesely | Parameterization of surface resistances to gaseous dry deposition in regional-scale numerical models[END_REF]) when snow height and ice cover are above 10 cm and 15 % respectively.
• The Fast-J photolysis scheme implemented in WRF-Chem uses a single value for the broadband UV albedo at the surface (0.055). While UV-albedo does not vary much over most land types, this value should be much higher over bare snow or ice (approximately 0.85). The UV-albedo in Fast-J was corrected according to the satellite measurements from Tanskanen and Manninen (2007). UV-albedo values derived from satellite measurements for different land-cover types by Tanskanen and Manninen (2007) were mapped to WRF-Chem land use categories, and a weighted average of snow- or ice-covered albedos and snow- and ice-free albedos was calculated based on snow and ice covers in the model (a minimal sketch of this weighting, and of the snow/ice switches above, is given after this list).
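As referenced in the list above, the following is a minimal sketch of the snow/ice-dependent surface switches described in the text: capping the sea-ice skin temperature at the freezing point, selecting "wintertime" dry deposition conditions from snow height and ice fraction, and computing a snow/ice-weighted broadband UV albedo from the two values quoted above (0.055 and 0.85). This is an illustration of the logic, not the actual WRF-Chem, Noah or Fast-J code; the thresholds and albedo values come from the text, while the function names, the use of "or" to combine the two deposition thresholds, and the example inputs are assumptions made for this sketch.

```python
# Illustrative sketch of the snow/ice-dependent surface treatments described above.
# Not the actual WRF-Chem code; thresholds (10 cm snow, 15 % ice, freezing point)
# and albedo values (0.055 snow/ice-free, 0.85 bare snow/ice) are taken from the text.

T_FREEZE = 273.15  # K, freezing point used to cap the sea-ice skin temperature

def cap_seaice_skin_temperature(tsk, ice_fraction):
    """During ice melt, the sea-ice skin temperature cannot exceed the freezing point."""
    return min(tsk, T_FREEZE) if ice_fraction > 0.0 else tsk

def deposition_season(snow_height_m, ice_fraction):
    """Switch dry deposition to 'wintertime' conditions over snow/ice surfaces.

    The two thresholds are treated as alternative triggers here (an assumption)."""
    if snow_height_m > 0.10 or ice_fraction > 0.15:
        return "winter_snow_near_freezing"   # Wesely 'winter, snow on ground' category
    return "default"

def broadband_uv_albedo(snowice_fraction, albedo_snowice=0.85, albedo_bare=0.055):
    """Weighted average of snow/ice-covered and snow/ice-free broadband UV albedos."""
    f = min(max(snowice_fraction, 0.0), 1.0)
    return f * albedo_snowice + (1.0 - f) * albedo_bare

# Example: a grid cell with 20 cm of snow and 50 % ice cover
print(cap_seaice_skin_temperature(tsk=275.0, ice_fraction=0.5))  # capped to 273.15 K
print(deposition_season(snow_height_m=0.20, ice_fraction=0.5))   # winter category
print(broadband_uv_albedo(snowice_fraction=0.5))                 # ~0.45
```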
Other important model improvements developed for these runs include the coupling of model-predicted O 3 with RRTMG SW and LW radiation, as well as the coupling of KF-CuP cloud properties to RRTMG (following the approach of [START_REF] Alapaty | Introducing subgrid-scale cloud feedbacks to radiation for regional meteorological and climate modeling[END_REF]). KF-CuP was also coupled with "online" lightning NO x emissions developed by [START_REF] Barth | Simulations of Lightning-Generated NOx for Parameterized Convection in the WRF-Chem model[END_REF]. A simplified treatment of DMS chemistry from MOZART4 (Emmons et al., 2010; [START_REF] Chin | A global three-dimensional model of tropospheric sulfate[END_REF]) was implemented in SAPRC-99, as well as an "online" DMS emission scheme based on [START_REF] Nightingale | In situ evaluation of air-sea gas exchange parameterizations using novel conservative and volatile tracers[END_REF], [START_REF] Saltzman | Experimental determination of the diffusion coefficient of dimethylsulfide in water[END_REF] and [START_REF] Lana | An updated climatology of surface dimethlysulfide concentrations and emission fluxes in the global ocean[END_REF]. SOA formation from S/IVOC (Semi-volatile and Intermediate-Volatility Organic Compounds) was removed from our version of the MOSAIC/VBS-2 mechanism for two reasons. First, there is no inventory of S/IVOC emissions yet, and emissions have been previously estimated in WRF-Chem by multiplying POA or VOC emissions by a factor of 6.5 (Shrivastava et al., 2011), based on case studies for Mexico City (Hodzic et al., 2010). This factor is extremely uncertain, and recent studies [START_REF] Shrivastava | Global transformation and fate of SOA: Implications of low-volatility SOA and gas-phase fragmentation reactions[END_REF] indicate that it cannot be used to estimate global S/IVOC emissions. Second, the treatment of S/IVOC formation as currently implemented in WRF-Chem was found to be prohibitively computationally expensive for quasi-hemispheric simulations. The VBS-2 mechanism used in this Chapter still includes formation of SOA from the oxidation of biogenic and anthropogenic VOC.

Model validation

Simulations presented in Table 6.2 are performed using the setup, emissions and model updates presented above. The resulting modeled BC and O 3 concentrations from the 2012_BASE simulation are compared to measurements at Arctic surface stations (BC comparisons are shown in Figure 6-2). These updated simulations perform better than the earlier WRF-Chem runs analyzed in Eckhardt et al. (2015), which strongly overestimated BC during summer. Note that, due to the high uncertainty of EBC measurements in the Arctic (approximately a factor of 2), a close agreement is not expected. The model fails to reproduce strong BC peaks at Tiksi, Russia during spring, which are probably due to the influence of a local pollution source (possibly the town or port of Tiksi), which cannot be reproduced in a 100 km × 100 km resolution simulation. The high "Arctic Haze" BC concentrations in spring and lower pollution in summer are reproduced where observed, at Alert (Canada), Barrow (Alaska) and Zeppelin (Svalbard), and the model also reproduces biomass burning pollution at Barrow and Tiksi during summer. In terms of O 3 , the base simulation compares well to measurements during summer, although it seems to strongly overestimate biomass burning influence at Tiksi and to slightly underestimate measurements at Nord and Summit. During spring, the model does not reproduce O 3 depletion events at Barrow, Tiksi, Nord and Zeppelin.
These events are due to catalytic halogen reactions happening at the Arctic surface over snow and ice [START_REF] Bottenheim | Depletion of lower tropospheric ozone during Arctic spring: The Polar Sunrise Experiment[END_REF], and these chemical reactions are not currently included in WRF-Chem. However, simulated surface O 3 is mainly used in Section 6.6 to assess the impact of local emissions on photochemical O 3 production, which is largest during summer because of higher solar radiation and higher emissions from Arctic shipping.

Compared with airborne SP2 measurements of refractory BC (rBC) from the ACCESS flights, the model overestimates rBC at all altitudes, including at the surface, despite good agreement with surface EBC measurements at Zeppelin (Figure 6-2) in July 2012, in the region and at the time of the ACCESS flights. However, this overestimation appears to be larger in the free troposphere. This discrepancy could be caused by underestimated black carbon removal, because the model does not include BC removal due to ice nucleation, and only includes a simplified treatment of secondary activation during deep convection in subgrid-scale clouds. Since SP2 measurements do not span the full possible size distribution of BC, WRF-Chem BC contained in the SP2 size range (80 to 470 nm) had to be estimated from the modeled size distributions, introducing additional uncertainties. It is also possible that WRF-Chem BC (technically EC) and SP2 rBC do not exactly correspond, depending on how well emission inventories distinguish EC emissions from other light-absorbing compounds (Petzold et al., 2013). Overestimated convective uplift is another possible source of this high-altitude BC bias [START_REF] Allen | The vertical distribution of black carbon in CMIP5 models: Comparison to observations and the importance of convective transport[END_REF]. This bias could also be due, in part, to errors in mid-latitude emissions.

The main difference between simulations in 2012 and 2050 is the occurrence of diversion shipping in July and August 2050. As a result, Arctic shipping emissions become the main source of summertime surface O 3 , BC and NO - 3 pollution along diversion shipping lanes in the future. Shipping emissions also become an important source of other aerosol components (SO 2- 4 , NH + 4 , NO - 3 and OA); this is likely due to a combination of increased emissions (NO x , SO 2 , POA), increased secondary aerosol formation from higher OH, and changes to aerosol chemistry. Over the open Arctic Ocean, ∼ 50 % of the total modeled OH is due to shipping emissions. At lower latitudes (over North America and Russia), shipping emissions appear once again to cause small reductions in SO 2- 4 , NH + 4 and OA over land, and stronger reductions in SO 2 over the same region (0 to -20 %, with larger reductions of 0 to -50 % over Greenland and over the ice pack). However, the robustness of this result is still not certain. Diversion shipping emissions are responsible for most of the total surface BC along diversion shipping lanes during summer 2050, but do not appear to cause a strong increase in BC deposition at the surface. In our simulations, BC deposition is mainly due to wet removal, which also depends on BC concentrations aloft that are not very sensitive to shipping emissions in these simulations. The contribution of diversion shipping emissions to total BC deposition is high along the Northwest Passage, because of the lower background deposition in this region.
However, in agreement with Browse et al. (2013), diversion shipping emissions do not significantly enhance BC deposition over the Arctic as a whole.

Vertical distribution of Arctic aerosol and ozone pollution from remote and local sources

Results presented in the previous section indicate that local Arctic emissions already contribute to aerosol and ozone pollution at the surface, and that this contribution should grow in the future. Since local Arctic emissions are directly emitted at the Arctic surface, their impacts are expected to be the highest at low altitudes. However, the direct and indirect radiative effects of aerosols and ozone do not scale with surface concentrations, and are usually related to the total column burden. In addition, the radiative effect of BC and of O 3 in the Arctic is known to be very sensitive to vertical distributions [START_REF] Lacis | Radiative forcing of climate by changes in the vertical distribution of ozone[END_REF]. Figures 6-13 and 6-14 show the vertical distributions of BC and O 3 pollution from the different sources considered here. These figures indicate that pollution from mid-latitude anthropogenic emissions and biomass burning dominates at higher altitudes, and that these remote sources are responsible for the majority of the Arctic-wide burden of aerosol and ozone pollution. Shipping pollution is confined to very low altitudes, which is probably due to atmospheric stability and to the short residence time of Arctic shipping BC (1.4 days during summer). BC enhancements from Arctic shipping and Arctic flaring are often located below and within clouds, which should reduce aerosol direct effects from these sources (reduced SW radiation below clouds), but which could also lead to more efficient cloud-aerosol interactions (cloud albedo, cloud lifetime, cloud burn-off effects).

6.8 Radiative effects of aerosols and ozone in the Arctic

The direct and indirect radiative effects of aerosols and ozone are calculated offline using the RRTMG radiative transfer model. RRTMG, as implemented in WRF-Chem, takes as input predicted meteorological, cloud and surface properties, as well as predicted aerosol optical properties. Ozone and other absorbing gases are usually taken into account as climatological profiles or monthly resolved zonal means; in this work, RRTMG was modified to use model-predicted ozone instead.

6.8.1 Direct radiative effects of pollution aerosols and ozone in the Arctic

RRTMG is used to calculate top-of-atmosphere (TOA) direct radiative effects (DRE, instantaneous radiative effect before adjustments) from aerosols (shortwave) and ozone (shortwave + longwave), averaged over the Arctic region (north of 60°N). First, monthly-averaged 3D meteorological properties, cloud properties, size-resolved aerosol number and speciated, size-resolved aerosol mass were calculated for each simulation. Second, the total DRE of a compound, e.g. BC, was calculated by performing 24-hour RRTMG simulations (using solar zenith angles for the 15th of each month) with and without the monthly-averaged 3D field for this compound. Aerosol optical properties were calculated with and without, e.g., BC, and passed to RRTMG to compute the resulting change in TOA flux. The DREs of pollution from mid-latitude anthropogenic and biomass burning emissions are much larger than those of local Arctic sources. This is mainly a consequence of the larger emission amounts from these sources, leading to higher pollution burdens in the Arctic. Pollution from biomass burning and mid-latitude anthropogenic emissions is also located at higher altitudes, where O 3 and aerosols have a proportionally higher direct radiative effect.
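As an illustration of the paired with/without calculations described above, the sketch below computes a TOA direct radiative effect as the difference in net downward flux between two radiative transfer runs that differ only in whether the compound of interest (e.g. BC) is included, averaged over a 24-hour cycle. The flux values are hypothetical placeholders standing in for RRTMG output; in the actual workflow these fluxes come from offline RRTMG calls driven by the monthly-averaged WRF-Chem fields.

```python
import numpy as np

# Sketch of the TOA direct radiative effect (DRE) diagnostic described above:
# run the radiative transfer twice (with and without the compound, e.g. BC),
# and difference the net downward TOA fluxes over a 24-hour cycle.
# The flux arrays below are hypothetical placeholders, not RRTMG results.

def net_toa_flux(f_down, f_up):
    """Net downward flux at the top of the atmosphere (W m-2)."""
    return f_down - f_up

def direct_radiative_effect(f_down_with, f_up_with, f_down_without, f_up_without):
    """DRE = net TOA flux (with compound) - net TOA flux (without compound)."""
    return net_toa_flux(f_down_with, f_up_with) - net_toa_flux(f_down_without, f_up_without)

# Hypothetical hourly TOA SW fluxes (W m-2) for one grid cell over 24 hours
hours = np.arange(24)
f_down = 400.0 * np.clip(np.sin(np.pi * (hours - 3) / 18.0), 0, None)  # incoming SW, zero at night
f_up_no_bc = 0.30 * f_down     # reflected SW without BC
f_up_with_bc = 0.29 * f_down   # slightly less reflected SW when absorbing BC is present

dre_hourly = direct_radiative_effect(f_down, f_up_with_bc, f_down, f_up_no_bc)
print("24-hour mean DRE of BC (W m-2):", dre_hourly.mean())
```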
The cooling effect of SO 2- 4 + NO - 3 + NH + 4 + OA is mostly due to sulfate for anthropogenic emissions, and due to OA for biomass burning emissions. In contrast, the direct radiative effect of O 3 from Arctic shipping is negative (a net cooling at TOA). This cooling O 3 effect is surprising, since O 3 usually causes warming at TOA. This cooling is due to the LW (greenhouse) effect of O 3 , and appears to be caused by the surface temperature inversion in the Arctic. Figure 6-14 shows that O 3 pollution from Arctic shipping is confined in the lower atmosphere. In the model, temperatures in the lower Arctic troposphere are often warmer than ground skin temperatures, a situation known as a temperature inversion, which is commonly observed in the Arctic. As a result, enhanced ozone in these comparatively hot atmospheric layers increases LW emission and heat loss into space. In the 2012_BASE simulation, inversions occur over 70 % of the Arctic area (north of 66.6°N) during spring, 58 % during summer. Using clear-sky satellite measurements, Devasthale et al. (2010) estimated similarly high inversion frequencies, 88 to 92 % during winter, 69 to 86 % during summer. [START_REF] Rap | Satellite constraint on the tropospheric ozone radiative effect[END_REF] also showed that increasing O 3 at the surface in the Arctic and Antarctic was causing a negative LW effect (Figure 1b in [START_REF] Rap | Satellite constraint on the tropospheric ozone radiative effect[END_REF]), although they estimated that this effect would be compensated by an associated positive SW effect. This compensation does not occur in the runs presented here, where the positive SW effect is weaker than the negative LW effect. The SW effect might be lower here because O 3 enhancements are often located below clouds, or the LW effect might be larger due to stronger temperature inversions. All in all, this net O 3 cooling effect can be expected to depend strongly on the strength and occurrence of the surface inversion in the Arctic, and whether or not it is accurately represented in models.

As described in Sect. 6.8.1, the DRE of a given compound, e.g. BC in the base simulation, is computed as the change in net downward TOA flux between the paired calculations with and without that compound:

$$\mathrm{DRE}_{BC\_base} = F_{withBC\_base} - F_{noBC\_base} = \left(F^{\downarrow}_{withBC\_base} - F^{\uparrow}_{withBC\_base}\right) - \left(F^{\downarrow}_{noBC\_base} - F^{\uparrow}_{noBC\_base}\right) \qquad (6.1)$$

In summer 2050, diversion shipping emissions cause a strong increase in surface aerosol concentrations, but a relatively low aerosol DRE. Figure 6-13 shows that shipping BC in 2050 is confined in the lowest levels of the troposphere. In addition, the residence time of BC originating from Arctic shipping emissions is very low during summer, 1.4 days. Figure 6-13 also shows that BC from shipping is mostly located below and within clouds, further reducing its SW direct effect. The direct radiative effects of O 3 and scattering aerosols are low year-round for Arctic flaring emissions, and are also low during spring for Arctic shipping emissions. These effects also exhibit unexpected abrupt variations between months, which indicates that they are too small to be separated from model noise. For this reason, these values, shown in Figure 6-15, are not discussed in detail here. The only exception is the combined radiative effect of SO 2- 4 + NO - 3 + NH + 4 + OA from Arctic flares, which appears to be positive and rather large. This warming effect appears to be due to a large reduction in Arctic OA burdens when introducing Arctic flaring emissions, causing a reduction in direct OA cooling. This effect seems to be due to interactions between Arctic flaring emissions and nearby intense biomass burning emissions, but the exact mechanism for this interaction is unclear.

6.8.2 Semi-direct and indirect radiative effects
WRF-Chem calculates aerosol activation in clouds and the resulting effects on cloud properties, including cloud albedo and cloud lifetime. This effect is taken into account in both grid-scale clouds (Morrison microphysics scheme) and sub-grid clouds (KF-CuP). As a result, changes in aerosol concentrations, composition and size between 2 simulations, e.g. 2012_BASE and 2012_NOANTHRO, also cause changes in cloud properties (aerosol indirect effects). In addition, since predicted aerosol optical properties and ozone concentrations are coupled to radiation calculations in WRF-Chem, changes in emissions also have an influence on heating rates, temperature profiles and relative humidity profiles, causing additional changes in cloud formation, cloud properties and cloud lifetime (semi-direct effects). Here, RRTMG is used to calculate SW and LW indirect and semi-direct radiative effects (ISRE) at TOA, averaged over the Arctic region (north of 60°N). Calculations are similar to the ones presented in the previous Section 6.8.1, but here RRTMG simulations are performed by changing only cloud and meteorological properties between runs, while keeping aerosol and ozone concentrations set to the values from the 2012 or 2050 BASE simulations. Since model-predicted ozone is also coupled to radiation in the WRF-Chem simulations, this value also includes the "semi-direct" (cloud adjustment) effect of ozone, but this effect is expected to be low. The corresponding seasonally-averaged results for 2012 and 2050 are presented in Figure 6-16. As discussed in Section 6.5, cloud properties and precipitation appear to be chaotic in WRF-Chem; as a result the confidence in ISRE values is low, and is very low for the small emission perturbations due to Arctic shipping and Arctic flaring. Another weakness of this approach is that the simulations presented in this Chapter do not include the winter and fall seasons, when indirect effects are qualitatively different and when LW effects (due to changes in cloud LW emissivity, see [START_REF] Zhao | Effects of Arctic haze on surface cloud radiative forcing[END_REF]) tend to dominate. The sums of the ISRE due to anthropogenic and biomass burning emissions in spring 2012 (-2.0 W m -2 ) and summer 2012 (-4.0 W m -2 ) are comparable to previous results for the indirect effect by, e.g., [START_REF] Shindell | Local and remote contributions to Arctic warming[END_REF] (-0.25 to -1 W m -2 in spring; -1 to -2 W m -2 during summer). However, results for Arctic shipping and flaring appear to be very different than previous estimates, with stronger month-to-month variations. For example, calculations in Chapter 5 (Marelle et al., 2016) indicate that Arctic shipping emissions in Northern Norway have an Arctic-wide total (indirect + direct) radiative effect of -140 mW m -2 in July 2012, and Ødemark et al. (2012) estimate an indirect effect of -105 mW m -2 in summer 2012, varying relatively smoothly between months. AMAP (2015) found that the yearly averaged indirect effect of flaring BC in the Arctic was +5 to +30 mW m -2 . The results presented here for Arctic shipping and flaring emissions appear to be spurious effects of the chaotic cloud variability within the model.
In order to estimate the radiative effect due to the cloud response to small emission perturbations from local Arctic emissions, it is thus necessary to either perform longer simulations (several years at least) or to average ensemble model results to lower this noise. These points can be addressed as part of future work on assessing the climate impacts of Arctic pollution.

Conclusions and perspectives

This study investigates the impacts of local Arctic pollutant emissions on aerosols and ozone, in terms of surface concentrations, direct radiative impacts, and BC deposition in the Arctic. Recent work (Ødemark et al., 2012; [START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF]; [START_REF] Larsen | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part B: Regional Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel of Climate Change[END_REF]) indicates that local Arctic emissions from shipping and resource extraction are currently growing and could become an important source of pollution in the Arctic, relative to other, better-known sources (i.e. long-range transport). However, few previous studies investigated this question, and to our knowledge no previous study investigated simultaneously the current and future impacts of these sources in terms of air quality, radiative effects and BC deposition using a single methodology. In this study, 6-month long, quasi-hemispheric simulations are performed using the regional WRF-Chem model, combined with new emission inventories for Arctic shipping [START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF], and Arctic gas flaring from petroleum activities (ECLIPSEv5, [START_REF] Klimont | Global scenarios of air pollutants and methane[END_REF]). This work is also, to our knowledge, the first successful attempt at using a regional model to investigate Arctic-wide aerosol and ozone pollution and their radiative impacts. In order to perform these regional simulations, several components of the model were improved, including surface temperatures over sea-ice, cloud-aerosol interactions in sub-grid clouds, aerosol sedimentation, DMS chemistry, trace gas deposition over snow and ice, and photolysis (UV-albedo) over snow and ice. Updated simulations compare well to surface and airborne measurements of aerosols and ozone in the Arctic. The main findings of this study are the following:
• Current gas flaring from petroleum activities in the Arctic: Arctic gas flaring appears to be a major source of current surface BC year-round, and of BC deposition over snow and ice in spring. Flaring BC is also associated with a significant positive direct radiative effect of ∼ 25 mW m -2 (60°N-90°N average). However, this warming effect is still approximately two orders of magnitude lower than BC warming from anthropogenic emissions transported from the mid-latitudes to the Arctic, and than BC warming from biomass burning emissions. The radiative impact of increased BC deposition could not be investigated here due to the lack of a detailed snow-albedo model within WRF-Chem. However, the simulation dataset presented here could be used in future work to assess the radiative effects of local sources due to these processes.
• Future (2050) Arctic gas flaring: Simulations were performed using future anthropogenic emissions for 2050, and show that Arctic flaring should remain an important source of surface BC, BC deposition and BC direct radiative effects in the Arctic.
Furthermore, the relative influence from Arctic flaring BC is expected to increase due to anthropogenic BC emission reductions in the mid-latitudes (here, in the ECLIPSEv5 future CLE scenarios).
• Future (2050) Arctic shipping: Arctic shipping emissions in 2050 increase significantly during summer, due to the occurrence of diversion shipping in July-November (i.e. July-August here, since simulations were only performed for 6 months, March-August). In this "high-growth" diversion scenario, Arctic shipping becomes a major source of surface ozone and aerosols (locally, the main source of BC and NO - 3 ) during the Arctic summer. This strong increase in O 3 (8 to 15 ppbv) is also associated with a strong increase in surface OH concentrations (∼ a factor of 2) over the Arctic Ocean. However, diversion shipping does not significantly enhance BC deposition over snow and ice, likely because of limited atmospheric transport to the ice pack, due to efficient wet removal over the open Arctic Ocean (the residence time of BC is 1.4 days during summer). The simulations presented here do not estimate the effects of the reduced (diverted) international shipping emissions at lower latitudes in 2050, which have been shown by [START_REF] Fuglestvedt | Climate Penalty for Shifting Shipping to the Arctic[END_REF] to have some air quality and (short-term) climate benefits.

The results presented in this Chapter also put the findings of previous case studies (Chapters 4 and 5) in the broader context of Arctic-wide aerosol and ozone pollution. These results indicate that, even though emissions from local sources are relatively low compared to global totals, they already have significant local relative impacts in the Arctic in terms of surface aerosol and ozone concentrations, chemistry (through the effect of increasing OH), and direct aerosol and ozone radiative effects. This relative influence can also be expected to increase in the future. However, even for worst-case future local emission scenarios, mid-latitude anthropogenic emissions and biomass burning emissions can be expected to remain the main source of Arctic ozone and aerosol burdens in the troposphere, as well as the main source contributing to direct aerosol and ozone radiative effects in the Arctic. Future projections performed in this study only consider the consequences of changing anthropogenic emissions in 2050. In order to obtain a complete picture of the future influence of local emission sources, it is also necessary to consider the impacts of the changing climate, causing decreased sea ice and snow covers, changing long-range pollution transport from the mid-latitudes, and increasing dry deposition (due to vegetation) and wet removal (due to stronger precipitation and precipitation phase change from solid to liquid). [START_REF] Jiao | Changing black carbon transport to the Arctic from present day to the end of 21st century[END_REF] indicate that, by the end of the 21st century, the change in transport patterns and deposition processes could decrease the mean BC burden in the Arctic by 13.6 %. It is also necessary to quantify the future impacts on Arctic aerosols and ozone of changing natural or semi-natural emission sources such as biomass burning, biogenic activity, and the increasingly open Arctic Ocean.

General Conclusions

Summary

This thesis aims to improve our understanding of Arctic aerosol and ozone pollution from local and remote sources.
In this work, regional meteorology-chemistry-aerosol simulations are performed with the WRF-Chem model, using new emission inventories for local Arctic sources. These results indicate that regional modeling using WRF-Chem is an effective tool for investigating aerosol and ozone pollution in the Arctic. Throughout this thesis, several key results emerge. First, WRF-Chem is used to study a springtime event of pollution transport from Europe to the Arctic, sampled during the POLARCAT-France aircraft campaign. The findings show that aerosol amounts reaching the European Arctic in spring are strongly influenced by wet removal (> 50 % of aerosol mass), for both low-level and frontal transport. In terms of direct radiative effects, this event causes top-of-atmosphere (TOA) cooling due to the scattering effect of aerosols, but causes TOA warming over snow and ice due to the high surface albedo, and causes surface cooling over all surface types. Second, local Arctic emissions associated with shipping and resource extraction are expected to grow in the future. Several studies suggested that these sources could already be significant. In this thesis, WRF-Chem is used to investigate the impacts of the emerging source of Arctic pollution from shipping, by performing plume-scale and regional-scale simulations in Northern Norway in July 2012, where current Arctic shipping emissions are thought to be the highest. The model is combined with shipping emissions from a new bottom-up inventory, STEAM2, and simulations are used to analyze measurements of Arctic shipping pollution from the ACCESS aircraft campaign. These simulations show that current Arctic shipping has significant local impacts on NO x , SO 2 , SO 2- 4 , BC and O 3 concentrations at the surface during summer. WRF-Chem includes a relatively complex treatment of cloud/aerosol interactions, which is used to estimate the total (direct + semi-direct + indirect) radiative effect of aerosols from shipping emissions in this region in July 2012. These emissions cause a strong local TOA cooling due to the effect of cloud/aerosol interactions. On average, new inventories such as STEAM2 appear to represent Arctic shipping pollution reasonably well, but considerable uncertainty remains on emissions from individual ships. Additional measurements focused on Arctic shipping pollution are necessary in order to fully validate these new emission inventories. The WRF-Chem model can be run at fine, plume-resolving scales to analyze high-resolution measurements, or at large, quasi-hemispheric scales to investigate Arctic-wide aerosol and ozone pollution. In order to perform Arctic-wide simulations, a new model setup is defined for Arctic studies and the model is improved when key Arctic processes are missing. This new setup includes recent model developments (SOA formation, aerosol/cloud interactions in sub-grid scale clouds, lightning NO x emissions) as well as additional modules developed specifically for this thesis. I find that some of these additional processes appear to be critical in order to model Arctic ozone and aerosol pollution. For ozone, it is important to model the increase in UV-albedo over snow and ice and its impacts on photolysis rates. Simulations also need to take into account the reduced dry deposition velocity of ozone over snow- and ice-covered surfaces. In terms of Arctic aerosols, the representations of Arctic boundary layer structure and of the wet removal of aerosols by sub-grid scale clouds appear to be critical. Comparisons to ground-based and airborne measurements of aerosols and ozone in the Arctic are greatly improved in the updated simulations.
Finally, this updated version of WRF-Chem is used to investigate the current (2012) and future (2050) impacts of Arctic shipping and Arctic gas flaring emissions, in terms of air quality and radiative effects. Results show that Arctic flaring emissions are and should remain a major source of local black carbon aerosols, causing warming, and that Arctic shipping is already a significant source of aerosols and ozone during summer. In 2050, diversion shipping through the Arctic Ocean could become the main source of local NO - 3 , BC, and ozone pollution. I also find that the main direct radiative effect of Arctic ships appears to be longwave TOA ozone cooling. This cooling effect is due to the temperature inversion in the Arctic boundary layer. Direct shortwave radiative effects from ships are small, due to the short lifetime of shipping aerosols (1.4 days) and because short-lived pollution from Arctic ships is often located below clouds. As a result, it also appears that an accurate representation of surface temperatures, boundary layer structure and clouds is critical to correctly compute the direct radiative effect of aerosols and ozone from local Arctic emissions.

Perspectives

The work presented in this thesis improves our understanding of aerosol and ozone pollution in the Arctic. However, it is important to keep in mind that results presented in this thesis are based on relatively short simulations (at most 6 months), and do not consider the effect of year-to-year variability in, e.g., boreal fire activity, weather, or snow and sea ice cover, which can influence Arctic aerosols and ozone. In addition, the results presented in Chapter 6 indicate that modeled changes in aerosols due to small emission perturbations (e.g. adding Arctic shipping or Arctic flaring emissions) are relatively uncertain, due to the chaotic nature of modeled clouds and precipitation. In order to improve the robustness of these calculations, it is important to investigate these chaotic effects in order to properly separate signal from noise in modeled results. This could be carried out in the future, based on these simulations, by performing ensemble sensitivity simulations to, e.g., Arctic shipping and Arctic flaring, on a smaller Arctic simulation domain embedded (nested) in the domain presented in Chapter 6. Because of this variability, the radiative effect of cloud-aerosol interactions (indirect effects) due to local Arctic emissions is even less certain than their direct effects. Additionally, results presented in this thesis show that the direct aerosol and ozone radiative effects in the Arctic appear to depend strongly on modeled Arctic meteorology (vertical temperature profiles for LW radiative effects, cloud cover for SW effects). In order to better constrain these radiative effects, it is important to better validate model representations of Arctic meteorology, especially surface temperature inversions and Arctic clouds. Simulations presented in this thesis also indicate that improving the representation of the following processes in WRF-Chem could be important for Arctic studies:
• Organic aerosols: Simulations presented in Chapters 4 and 5 do not include any mechanism for secondary organic aerosol (SOA) formation.
The study presented in Chapter 6 includes a mechanism for "traditional" SOA formation from the oxidation of biogenic and anthropogenic VOC, but recent work [START_REF] Shrivastava | Global transformation and fate of SOA: Implications of low-volatility SOA and gas-phase fragmentation reactions[END_REF][START_REF] Peckham | Best Practices for Applying WRF-Chem[END_REF] indicates that non-traditional SOA formed from semivolatile organic compounds and intermediate volatility organic compounds (S/IVOC) constitutes another major source of OA. Results from Chapter 4 also show that not including SOA formation leads to underestimated aerosol concentrations in some biomass burning plumes in the Arctic, and, in general, to underestimated OA and overestimated NO - 3 in all plumes. SOA formation from S/IVOC was removed from the simulations presented in Chapter 6 because of the lack of a reliable global S/IVOC emission inventory. However, several recent approaches can be used to estimate these emissions (e.g., Hodzic et al., 2015, suggest using 60 % of POA emissions for SVOC, and 20 % of NMVOC emissions for IVOC). Simulation results presented in Chapter 6 indicate that biomass burning emissions are a strong source of OA in the Arctic, and that these OA enhancements cause a significant cooling (negative) direct radiative effect at TOA. The effect of non-traditional SOA formation on Arctic pollution, especially from biomass burning, could be investigated using WRF-Chem in the future. "Brown Carbon" aerosols (absorbing organic aerosols), which are not included in these runs, could also play an important role.
• BC wet removal: In the Arctic-wide WRF-Chem simulations performed in this thesis, BC concentrations at the surface are well represented, but BC appears to be overestimated in the Arctic mid- and upper troposphere during summer. This overestimation could be caused, in part, by the absence of a scheme for ice nucleation in mixed-phase clouds in these simulations, since BC aerosols are efficient ice nuclei. Secondary aerosol activation in liquid clouds could also be an important sink for high-altitude aerosols. This last process is included in our simulations, but in the KF-CuP sub-grid cloud scheme it is based on a simplifying assumption for the critical supersaturation. The effect of this assumption could be studied in the future.
• Snow and ice modeling: Simulations presented in this thesis do not include snow-NO x emissions, or halogen chemistry over snow and ice. Comparisons with surface measurements show that the lack of halogen chemistry is a source of strong discrepancies in O 3 during winter and spring at Arctic surface stations, as the model is unable to reproduce ozone depletion events. The effect of the lack of snow-NO x emissions is not clear from these simulations; however, including these processes could change the modeled response of Arctic surface O 3 to local and remote sources near the ice edge and over snow. The released version of WRF-Chem does not include a snow-albedo model, which would be needed to estimate the radiative impacts of BC deposition on snow; these impacts could be significant for Arctic flares ([START_REF] Flanner | Present-day climate forcing and response from black carbon in snow[END_REF]; AMAP, 2015).
• Stratospheric upper boundary condition: Simulation results indicate that the stratospheric O 3 source might be too high in our simulations. This upper boundary condition is based on a climatology for years 1996-2005.
It is not clear if using an updated climatology could improve these results, or if this overestimation is caused by other (e.g. transport) processes.

Last but not least, it is necessary to calculate the climate impacts of longer-lived greenhouse gases (CO 2 and CH 4 ) in order to obtain a complete picture of the climate effects of local Arctic emissions. For example, the warming effect of CO 2 from ships is known to outweigh the cooling effect of shipping aerosols in the long term [START_REF] Eyring | Transport impacts on atmosphere and climate: Shipping[END_REF]. The ECLIPSEv5 emission inventory used here also estimates that Arctic flares emit significant amounts of CH 4 , with strong potential warming effects (AMAP, 2015). In the simulations presented in Chapter 6, future (2050) projections only consider the effect of changing anthropogenic emissions in 2050. In order to improve these projections, it is also necessary to consider the effect of changing natural and biomass burning emissions, as well as the effect of future climate change on e.g. removal processes and long-range transport. Other long-term radiative impacts, such as the effects of increased Arctic O 3 due to ships on CH 4 lifetime and on the carbon sink (vegetation damaged by O 3 ), also need to be estimated. These effects, and other long-term or large-scale effects relevant for the study of Arctic climate change (e.g. climate feedbacks) cannot be easily studied using WRF-Chem. As a result, it appears that regional models such as WRF-Chem should be used in combination with global climate models in order to study Arctic climate change. On the one hand, global climate models have the capability to study these long-term effects, and to estimate the Arctic surface temperature response in these different scenarios. On the other hand, results from this thesis indicate that regional models are very useful tools in order to bridge the gap between measurements and regional air quality and climate (concentrations, radiative effects), and can also be used in detailed case studies to identify important processes and improve global models.
4.2.6.2 the magnitude of the wet scavenging of aerosols during their transport from Europe to the Arctic. The NODIRECT simulation is used in Sect. 4.2.7 to estimate the direct and semi-direct shortwave radiative effect (DSRE) of aerosols associated with this transport event. ( AODs) of observed layers were low (< 4 %) during POLARCAT-France (Adam deVilliers et al., 2010). The backscatter ratio is calculated following the definition in Sect. 4.2.3.1, where the molecular backscattering is estimated by an empirical formulation of the Rayleigh scattering[START_REF] Nicolet | On the molecular scattering in the terrestrial atmosphere : An empirical formula for its calculation in the homosphere[END_REF] using meteorological profiles from the CTL simulation. Figure 4-3 -Meteorological conditions simulated by WRF-Chem during the POLARCAT-France spring campaign period, represented by the 700 hPa geopotential height (contour lines) and 700 hPa wind vectors (30 m s -1 vector given for scale) on 6-11 April 2008 (12:00 UTC). The POLARCAT-France flight tracks on 9, 10, and 11 April 2008 are indicated in magenta. Figure 4-4 -Simulated BC column on 6-11 April 2008 (12:00 UTC). POLARCAT-France flight tracks are indicated in white, with a black border. Figure 4-5 -Time series of modeled (red) and measured (blue) (a-c) temperature, (d-f ) relative humidity, (g-i) wind speed, and (j-l) wind direction extracted along the POLARCAT-France flight tracks. The corresponding aircraft altitude is shown in black. Fig.S34 . Finally, white shading indicates air masses that are not attributed to a specific source using the methods described above and are referred to as unpolluted air.In the free troposphere, the model is able to reproduce the baseline PM 2.5 levels and the main peaks observed in European air masses for all three flights. The NMB for PM 2.5 for all three flights, excluding unpolluted air and boundary condition air, is +8.8 %. Peaks attributed to European anthropogenic emissions are reproduced, although the model cannot capture some small-scale features due to its resolution. At the end of the 9 April flight, two concentrated plumes were sampled in situ around 12:00 and 12:15 UTC. The model identifies these plumes as mixed (anthropogenic/biomass burning), meaning that significant (> 40 %) enhancements in modeled PM 2.5 at these times are due to biomass burning or anthropogenic European emissions. The first PM 2.5 peak is underestimated by the model (around 12:00 UTC), and the second plume (around 12:15 UTC) is located 1.5 km too low in altitude. This may be due to uncertainties in the injection height for fires or in the intensity and timing of the emissions.However, the issue does not appear to be system- Figure 4 4 Figure 4-8 -Modeled (red) and measured (blue) number size distributions of plumes labeled (I-O) in Fig. 4-7, influenced by (I, J, M, N) European anthropogenic and (K, L, O) mixed European anthropogenic and fire emissions. Modeled and observed size distributions corresponding to two consecutive samplings of the same plume during the same flight (I-J, M-N, L-O) were averaged together. Chapter 4 . 4 Figure 4-9 -Backward mode FLEXPART-WRF column-integrated PES (a and b), showing typical transport pathways for an anthropogenic plume (left, plume J, originating on 9 April 2008 at 11:19 UTC on the POLARCAT flight track) and a mixed anthropogenic/biomass burning plume (right, plume K, originating on 9 April 2008 at 12:19 UTC on the flight track). Numbers in white indicate the plume age, in days. 
Panels (c) and (d) show each plume's mean altitude with rms error bars showing vertical dispersion (blue) and the difference between the CTL PM 10 and the NOWETSCAV PM 10 along transport, indicating wet scavenging events (black). Figure 4 - 4 Figure 4-9c and d show the mean altitude Fig. S6 7 7 Fig.S6 7). This displacement is probably due to the cumulative effect of small errors on wind speed and wind direction over the 3 to 5 days of long-range transport. The model underestimates the PBR in the intense layer measured in situ Chapter 4 . 4 Figure 4-12 -Modeled vertical profiles of enhancements in (a) PM 2.5 , (b) BC, (c) OC, (d) SO = 4 , (e) NO - 3 , and (f ) NH + 4 PM 2.5 , due to anthropogenic (red) and fire (black) emissions within the WRF-Chem model domain, averaged in the Arctic (latitude > 66.6 ∘ N) and over the period from 00:00 UTC 8 April 2008 to 00:00 UTC 12 April 2008. Figure 4-13 -Model averages over the period from 00:00 UTC 8 April 2008 to 00:00 UTC 12 April 2008 of the (a) aerosol DSRE, at the TOA, in regions significantly affected by in-domain anthropogenic and fire emissions, (b) PM 2.5 column sensitivity to anthropogenic and biomass burning emissions, and (c) fractional snow and sea ice cover, (d) fractional cloud cover. In panel (a), regions not significantly affected by in-domain emissions are masked in gray. In panels (b-d), regions outside of the WRF-Chem domain are masked in gray. The Arctic Circle is indicated by a dashed line. leave it to future studies to draw broader conclusions about whether these results are representative of wider spatial and temporal scales.Brock et al. (2011) calculated a direct radiative effect of +3.3 W m -2 over snow at TOA for the average of 10 typical polluted profiles measured during the ARCPAC campaign, not taking the semidirect effect into account. Maximum modeled BC in WRF-Chem along the POLARCAT-France flight tracks is 150 ng m -3 (anthropogenic) and 260 ng m -3 (mixed fire/anthropogenic), which are comparable with the average BC values reported for anthropogenic (148 ng m -3 ) and fire plumes (312 ng m -3 ) inBrock et al. (2011). This means that, on average, the BC values for pollution-influenced plumes in our simulation are lower than values reported byBrock et al. (2011). Quinn et al. (2007) found a similar direct radiative effect value of +2.5 W m -2 over snow at TOA for the average polluted conditions encountered during the Arctic haze maximum at Barrow. Those results were obtained at solar noon, in clear sky conditions, over snow, and in polluted regions only, conditions that lead to a maximum 112 Chapter 4. Transport of pollution from the mid-latitudes to the Arctic during POLARCAT-France direct effect. Using a similar approach, we compute the DSRE in regions influenced by European pollution, close to noon (11:00 UTC), and above high snow covers (> 90 %). This results in an average DSRE of +1.9 W m -2 north of 60 ∘ N. If we exclude the snowpack in Russia, east of 42 ∘ E, the average DSRE in reaches +3.3 W m -2 . These values are in agreement with results from Brock et al. (2011) and Quinn et al. (2007). It should be noted that our retrievals are done in all-sky conditions and not exactly at local solar noon, introducing a slight low bias. Including the semidirect effect in our calculations might have introduced a warming bias, which would be limited by the nudging of WRF-Chem temperature, relative humidity, and wind speed towards FNL reanalyses in the free troposphere. 
We verified that differences in cloud cover between the NODIRECT and CTL simulations were limited in magnitude and extent, with only a few local points over the sea affected (below 10 % cloud cover change for the 8 to 12 April average), that mostly cancel each other out when regionally averaged. Lund Myhre et al. (2007) calculated the direct forcing of biomass burning aerosols transported from Europe to the Arctic in late April and early May 2006 from spaceborne AOD measurements. For those exceptionally intense plumes, they found that the cooling direct effect at TOA reached -35 W m -2 over the regions with the highest AOD in the Barents Sea, while the maximum warming direct effect over snow was limited to +5 W m -2 over Svalbard. Keeping in mind that our results are not directly comparable because of the different times of year and different averaging periods, we found a 4day average direct and semi-direct effect reaching maximum values of +2 W m -2 over snowcovered Scandinavia, and maximum cooling values of -5 W m -2 over the Norwegian Sea. Several reasons could explain this different balance model simulations for the first time. Specifically, an event involving long-range transport of biomass burning and anthropogenic aerosols from Europe to the Arctic in April 2008 is studied using the regional model WRF-Chem (eight-bin Chapter 4. Transport of pollution from the mid-latitudes to the Arctic during POLARCAT-France 113 MOSAIC aerosol scheme), to quantify impacts on aerosol concentrations and resulting direct shortwave radiative effects in the Scandinavian Arctic. Modeled aerosols are evaluated against ground-based observations from the EMEP network in European source regions, and using POLARCAT-France aircraft measurements aloft in the European Arctic. The model reproduces background PM 2.5 levels at EMEP groundbased stations in Europe (NMB = -0.9 %) and in Arctic polluted air masses measured by the ATR42 aircraft (NMB = +8.8 %). Comparison with EMEP measurements shows that the model overestimates concentrations of particulate NO - 3 (NMB = +107 %) and NH + 4 (NMB = +53 %) in source regions, probably because of overestimated NH 3 emissions and the lack of SOA formation, and may underestimate OC. Good agreement is found between simulated SO = 4 and EMEP measurements (NMB = -0.6 %). The model indicates that European biomass burning and anthropogenic emissions both had a significant influence on total aerosol mass concentrations (> 20 % of total PM 2.5 ) during portions of the POLARCAT-France spring campaign measurements analyzed in this study. Plumes influenced by biomass burning sources in the model are also found to be significantly influenced by anthropogenic emissions. These modeled mixed plumes contain elevated organic carbon and black carbon concentrations. They originated in eastern Europe and western Russia, and followed lowaltitude (below 2 km) transport pathways into the Arctic. Significant wet scavenging is predicted in the model during transport over Finland, reducing PM 10 levels by 55 %. Modeled high-altitude anthropogenic plumes, originating in central Europe, were rapidly uplifted (from 1 to 6 km in less than 24 h) by warm conveyor belt circulations over Poland and the North Sea. The model also predicts significant wet scavenging during transport of these anthropogenic plumes (PM 10 reduced by 74 %). 
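The scavenged fractions quoted above (55 % and 74 % of PM10) follow from comparing the control run with the simulation in which wet scavenging is disabled, sampled along the transport pathway of each plume. A minimal sketch of that diagnostic is given below; the trajectory values are placeholders and the actual extraction along FLEXPART-WRF plume positions is not reproduced here.

```python
import numpy as np

def scavenged_fraction(pm10_ctl, pm10_nowetscav):
    """Fraction of PM10 removed by wet scavenging, diagnosed as the relative
    difference between the no-scavenging and control simulations, sampled at
    the same times and positions along a plume trajectory."""
    pm10_ctl = np.asarray(pm10_ctl, dtype=float)
    pm10_nowetscav = np.asarray(pm10_nowetscav, dtype=float)
    removed = pm10_nowetscav - pm10_ctl
    return removed / np.maximum(pm10_nowetscav, 1e-12)

# Placeholder values (ug m-3) along a 5-point plume trajectory
ctl = [8.0, 6.5, 4.0, 3.0, 2.6]
noscav = [8.2, 8.0, 7.8, 7.9, 7.7]
frac = scavenged_fraction(ctl, noscav)
print(f"scavenged fraction at trajectory end: {100 * frac[-1]:.0f} % of PM10")
```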
Evaluation of the model against in situ measurements and lidar profiles below the aircraft shows that the model correctly represents the average vertical distribution of aerosols during this European transport event, as well as the magnitude of the aerosol optical properties. However, this comparison suggests that the model underrepresents the rate of aerosol growth processes, especially condensation, which has the largest impact on the older mixed plumes (3 to 5 days old). The model is used to investigate the average vertical structure of aerosol enhancements from European anthropogenic and biomass burning emissions in the Scandinavian Arctic. Anthropogenic emissions are shown to influence aerosols at both low (∼ 1.5 km) and higher altitudes (∼ 4.5 km), while biomass burning emissions influence aerosols between these altitudes (2.5 to 3 km). In anthropogenic plumes, BC and SO = 4 aerosol concentrations are proportionally more enhanced at lower altitudes, including at the surface. This transport event brought elevated aerosol concentrations north of the Arctic Circle for a rather short period of 4 days, from 8 to 12 April 2008. Due to the location of the polar front, these European aerosols did not mix significantly with local Arctic air further north. 4. 3 . 2 32 Ozone transport to the Arctic in these simulations, and in the related work ofThomas et al. (2013) Long-range ozone transport from the mid-latitudes was not investigated in the study presented in this Chapter, which is focused on aerosol pollution. Model representations of ozone were nonetheless evaluated by comparing simulation results to EMEP measurements of O 3 .This comparison is shown inFigure 4-14, and indicates that these WRF-Chem simulations underestimate observed O 3 in spring 2008. Tuccella et al. (2012) also found a similar underestimation in spring in WRF-Chem simulations over Europe, using a different model setup.They attributed this underestimation to the lack of time-varying boundary conditions in their runs. However, our simulations include time-varying boundary conditions from the MOZART4 model but still show these discrepancies. Later analysis (Chapters 5 and 6) and results by[START_REF] Peckham | Best Practices for Applying WRF-Chem[END_REF] indicate that this bias is most likely due to incorrect values for UV-albedo over snow and ice in the photolysis scheme, and to overestimated gaseous dry deposition over snow-and ice-covered surfaces in WRF-Chem. These errors were corrected in the simulations presented in Chapters 5 and 6. Figure 4 - 4 Figure 4-14 -Hourly mean O 3 measured at EMEP stations within the domain (in blue) and WRF-Chem O 3 extracted at the position of the stations (in red). The color shading indicates standard deviation between stations. shipping emission inventories based on AIS ship positioning developed by[START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF] andJalkanen et al. (2012) correspond to higher emissions in the Arctic (∼ ×2 for NO x ), which suggests that earlier studies could have been underestimating shipping impacts in this region. 
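Comparisons like the "∼ ×2 for NOx" figure above are typically obtained by summing gridded emission fluxes over an Arctic mask and converting to an annual total. The sketch below shows this for a regular latitude-longitude grid; the grid spacing and the placeholder flux field are assumptions used only to make the example self-contained.

```python
import numpy as np

EARTH_RADIUS_M = 6.371e6

def arctic_total_kt_per_yr(flux_kg_m2_s, lat_edges_deg, lon_edges_deg, lat_min=60.0):
    """Sum a gridded emission flux (kg m-2 s-1) north of lat_min and return an
    annual total in kt yr-1, using exact spherical grid-cell areas."""
    lat_edges = np.deg2rad(np.asarray(lat_edges_deg, dtype=float))
    dlon = np.deg2rad(np.diff(lon_edges_deg))                     # (nx,)
    band = EARTH_RADIUS_M ** 2 * np.diff(np.sin(lat_edges))       # (ny,) band areas per radian
    area = band[:, None] * dlon[None, :]                          # grid-cell areas (m2)
    lat_centers = 0.5 * (np.asarray(lat_edges_deg)[:-1] + np.asarray(lat_edges_deg)[1:])
    mask = (lat_centers >= lat_min)[:, None]
    total_kg_s = float((flux_kg_m2_s * area * mask).sum())
    return total_kg_s * 3.15576e7 / 1.0e6                         # kg s-1 -> kt yr-1

# Placeholder 1-degree NOx flux field
lat_edges = np.arange(-90.0, 91.0, 1.0)
lon_edges = np.arange(-180.0, 181.0, 1.0)
nox_flux = np.full((180, 360), 1.0e-13)                           # kg m-2 s-1, placeholder
print(f"Arctic NOx total: {arctic_total_kt_per_yr(nox_flux, lat_edges, lon_edges):.1f} kt/yr")
```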
Furthermore, previous studies were mostly based on calculations by global models, which often struggle to reproduce O 3[START_REF] Dalsøren | Environmental impacts of the expected increase in sea transportation, with a particular focus on oil and gas scenarios for Norway and northwest Russia[END_REF], and aerosols[START_REF] Myhre | Anthropogenic and Natural Radiative Forcing[END_REF] at high latitudes.In this Chapter, WRF-Chem simulations are combined with a new ship emission in-118 Chapter 5. Current impacts of Arctic shipping in Northern Norway ventory created by the STEAM2 model(Jalkanen et al., 2012), in order to evaluate the model representation of meteorological conditions and shipping pollution, to validate new shipping emission inventories and to quantify the current impacts of shipping emissions on atmospheric composition and radiative effects along the Norwegian coast. WRF-Chem simulation results and STEAM2 emissions are compared to new measurements from the ACCESS aircraft campaign[START_REF] Roiger | Quantifying Emerging Local Anthropogenic Emissions in the Arctic Region: The ACCESS Aircraft Campaign Experiment[END_REF], using flights targeting shipping pollution in the Arctic region. tive effect in northern Norway, using WRF-Chem (Weather Research and Forecasting with chemistry) simulations combined with high-resolution, real-time STEAM2 (Ship Traffic Emissions Assessment Model version 2) shipping emissions. STEAM2 emissions are evaluated using airborne measurements from the ACCESS (Arctic Climate Change, Economy and Society) aircraft campaign, which was conducted in the summer 2012, in two ways. First, emissions of nitrogen oxides (NO x ) and sulfur dioxide (SO 2 ) are derived for specific ships by combining in situ measurements in ship plumes and FLEXPART-WRF plume dispersion modeling, and these values are compared to STEAM2 emissions for the same ships. Second, regional WRF-Chem runs with and without STEAM2 ship emissions are performed at two different resolutions, 3 km × 3 km and 15 km × 15 km, and evaluated against measurements along flight tracks and average campaign profiles in the marine boundary layer and lower troposphere. These comparisons show that differences between STEAM2 emissions and calculated emissions can be quite large (-57 to +148 %) for individual ships, but that WRF-Chem simulations using STEAM2 emissions reproduce well the average NO 𝑥 , SO 2 and O 3 measured during ACCESS flights. The same WRF-Chem simulations show that the magnitude of NO 𝑥 and ozone (O 3 ) production from ship emissions at the surface is not very sensitive (< 5 %) to the horizontal grid resolution (15 or 3 km), while surface PM 10 particulate matter enhancements due to ships are moderately sensitive (15 %) to resolution. The 15 km resolution WRF-Chem simulations are used to estimate the regional impacts of shipping pollution in northern Norway. Our results indicate that ship emissions are an important source of pollution along the Norwegian coast, enhancing 15-day-averaged surface concentrations of NO 𝑥 (∼ +80 %), SO 2 (∼ +80 %), O 3 (∼ +5 %), black carbon (∼ +40 %), and PM 2.5 (∼ +10 %). The residence time of black carbon originating from shipping emissions is 1.4 days. Over the same 15-day period, ship emissions in northern Norway have a global shortwave (direct + semi-direct + indirect) radiative effect of -9.3 mW m -2 . et al. (2007) and Ødemark et al. 
(2012) have shown that shipping emissions also influence air quality and climate along the Norwegian and Russian coasts, where current Arctic ship traffic is the largest. Both studies (for years 2000 and 2004) were based on emission data sets constructed using ship activity data from the AMVER (Automated Mutual-Assistance VEssel Rescue system) and COADS (Comprehensive Ocean-Atmosphere Data Set) data sets. However, the AMVER data set is biased towards larger vessels (> 20 000 t) and cargo ships (Endresen et al., 2003), and both data sets have limited coverage in Europe (Miola and Ciuffo, 2011). More recently, ship emissions using new approaches have been developed that use ship activity data more representative of European maritime traffic, based on the AIS (Automatic Identification System) ship positioning system. These include the STEAM2 (Ship Traffic Emissions Assessment Model version 2) shipping emissions, described in Jalkanen et al. (2012) and an Arctic-wide emission inventory described in Winther et al. (2014). To date, quantifying the impacts of Arctic shipping on air quality and climate has also been largely based on global model studies, which are limited in horizontal resolution. In addition, there have not been specific field measurements focused on Arctic shipping that could be used to study the local influence of shipping emissions in the European Arctic and to validate model predicted air quality impacts. 2.3) took place in summer 2012 in northern Norway, and was primarily dedicated to the study of local pollution sources in the Arctic, including pollution originating from shipping. ACCESS measurements are combined with two modeling approaches, described in Sect. 5.2.4. First, we use the Weather Research and Forecasting (WRF) model to drive the Lagrangian particle dispersion model FLEXPART-WRF run in forward mode to predict the dispersion of ship emissions. FLEXPART-WRF results are used in combination with ACCESS air-craft measurements in Sect. 5.2.5 to derive emissions of NO x and SO 2 for specific ships sampled during ACCESS. The derived emissions are compared to emissions from the STEAM2 model for the same ships. Then, we perform simulations with the WRF-Chem model, including STEAM2 ship emissions, in order to examine in Sect. 5.2.6 ACCESS aircraft campaign took place in July 2012 from Andenes, Norway (69.3 ∘ N, 16.1 ∘ W); it included characterization of pollution originating from shipping (four flights) as well as other local Arctic pollution sources (details are available in the ACCESS campaign overview paper; Roiger et al., 2015). The aircraft (DLR Falcon 20) payload included a wide range of instruments measuring meteorological variables and trace gases, described in detail by Roiger et al. (2015). Briefly, O 3 was measured by UV (ultraviolet) absorption (5 % precision, 0.2 Hz), nitrogen oxide (NO), and nitrogen dioxide (NO 2 ) by chemiluminescence and photolytic conversion (10 % precision for NO, 15 % for NO 2 ; 1 Hz), and SO 2 by chemical ionization ion trap mass spectrometry (20 % precision; 0.3 to 0.5 Hz). Aerosol size distributions between 60 nm and 1 µm were measured using a Ultra-High Sensitivity Aerosol Spectrometer Airborne. The four flights focused on shipping pollution took place on 11, 12, 19, and 25 July 2012 and are shown in Fig. 5-1 a (details on the 11 and 12 July 2012 flights shown in Fig. 5-1b). The three flights on 11, 12, and 25 July 2012 sampled pollution from specific ships (referred to as singleplume flights). 
During these flights, the research aircraft repeatedly sampled relatively fresh emis- Figure 5 5 Figure 5-1 -WRF and WRF-Chem domain (a) outer domains used for the MET, CTRL, and NOSHIP runs. ACCESS flight tracks during 11, 12, 19a (a -denotes that this was the first flight that occurred on this day, flight 19b -the second flight was dedicated to hydrocarbon extraction facilities) and 25 July 2012 flights are shown in color. (b) Inner domain used for the CTRL3 and NOSHIPS3 simulations, with the tracks of the four ships sampled during the 11 and 12 July 2012 flights (routes extracted from the STEAM2 inventory). FLEXPART Fig. 5-1a. The domain (15 km × 15 km horizontal resolution with 65 vertical eta levels between the surface and 50 hPa) covers most of northern temperature and wind speed, as well as the volume flow rate and temperature at the ship exhaust, to calculate a plume injection height above the ship stack. Ambient temperature and wind speed values at each ship's position are obtained from the WRF simulation. We use an average of measurements by[START_REF] Lyyränen | Aerosol characterisation in medium-speed diesel engines operating with heavy fuel oils[END_REF] and[START_REF] Cooper | Exhaust emissions from high speed passenger ferries[END_REF] for the exhaust temperature of the four targeted ships (350 ∘ C). The volume flows at the exhaust are derived for each ship using CO 2 emissions from the STEAM2 ship emission model (STEAM2 emissions described in Sect. 5.2.4.3). Specifically, CO 2 emissions from STEAM2 for the four targeted ships are converted to an exhaust gas flow based on the average composition of ship exhaust gases measured by Cooper (2001) and Petzold et al. (2008). Average injection heights, including stack heights and plume rise, are found to be approximately 230 m for the Costa Deliziosa, 50 m for the Wilson Nanjing, 30 m for the Wilson Leer, and 65 m for the Alaed. In order to estimate the sensitivity of plume dispersion to these calculated injection heights, two other simulations are performed for each ship, where injection heights are decreased and increased by 50 %. Details of the FLEXPART-WRF runs and how they are used to estimate emissions are presented in Sect. 5.2.5. air quality and radiative effects in northern Norway, simulations are performed using the 3-D chemical transport model WRF-Chem (Weather Research and Forecasting model, including chemistry,[START_REF] Grell | Fully coupled "online" chemistry within the WRF model[END_REF] Fast et al., 2006). WRF-Chem has been used previously by[START_REF] Mölders | Influence of ship emissions on air quality and input of contaminants in southern Alaska National Parks and Wilderness Areas during the 2006 tourist season[END_REF] to quantify the influence of ship emissions on air quality in southern Alaska. Table5.2 summarizes all the WRF-Chem options and parameterizations used in the present study, detailed briefly below. The gas phase mechanism is the carbon bond mechanism, version Z (CBM-Z; Zaveri and Peters, 1999). The version of the mechanism used in this study includes dimethylsulfide (DMS) chemistry. Aerosols are represented by the 8 bin sectional MOSAIC (Model for Simulating Aerosol Interactions and Chemistry; Zaveri et al., 2008) mechanism. Aerosol optical properties are calculated by a Mie code within WRF-Chem, based on the simulated aerosol composition, concentrations, and size distributions. 
These optical properties are linked with the radiation modules (aerosol direct effect), and this interaction also modifies the modeled dynamics and can affect cloud formation (semi-direct effect). The simulations also include cloudaerosol interactions, representing aerosol activation in clouds, aqueous chemistry for activated aerosols, and wet scavenging within and below clouds. Aerosol activation changes the cloud droplet number concentrations and cloud droplet radii in the Morrison microphysics scheme, thus influencing cloud optical properties (first indirect aerosol effect). Aerosol activation in MOSAIC also influences cloud lifetime by changing precipitation rates (second indirect aerosol effect). Chemical initial and boundary conditions are taken from the global chemical-transport model MOZART-4 (model for ozone and related chemical tracers version 4; Emmons et al., 2010). In our simulations, the dry deposition routine for trace gases (Wesely, 1989) was modified to improve dry deposition on snow, following the recommendations of Ahmadov et al. (2015). The seasonal variation of dry deposition was also updated to include a more detailed dependence of dry deposition parameters on land use, latitude, and date, which was already in use in WRF-Chem for the MOZART-4 gas-phase mechanism. Anthropogenic emissions (except ships) are taken from the HTAPv2 (Hemispheric transport of air pollution version 2) inventory (0.1 ∘ × 0.1 ∘ resolution). Bulk VOCs are speciated for both ship-ping and anthropogenic emissions, based on Murrells et al. (2010). Ship VOC emissions are speciated using the "other transport" sector (transport emissions, excluding road transport) and anthropogenic VOC emissions are speciated using the average speciation for the remaining sectors. DMS emissions are calculated following the methodology of Nightingale et al. (2000) and Saltzman et al. (1993). The oceanic concentration of DMS in the Norwegian Sea in July, taken from Lana et al. (2011), is 5.8 × 10 -6 mol m -3 . Other biogenic emissions are calculated online by the MEGAN (Model of Emissions of Gases and Aerosols from Nature; Guenther et al., 2006) model within WRF-Chem. Sea salt emissions are also calculated online within WRF-Chem. The WRF-Chem simulations performed in this study are summarized in Table 5.3. The CTRL simulation uses the settings and emissions presented above, as well as ship emissions produced by the model STEAM2 (Sect. 5.2.4.3). The NOSHIPS simulation is similar to CTRL, but does not include ship emissions. The NO-SHIPS and CTRL simulations are carried out from 4 to 25 July 2012, over the 15 km × 15 km simulation domain presented in Fig. 5-1a. The CTRL3 and NOSHIPS3 simulations are similar to CTRL and NOSHIPS, but are run on a smaller 3 km × 3 km resolution domain, shown in Fig. 5-1b, from 10 to 13 July 2012. The CTRL3 and NOSHIPS3 simulations are not nudged to FNL and do not include a subgrid parameterization for cumulus due to their high resolution. Boundary conditions for CTRL3 and NOSHIPS3 are taken from the CTRL and NOSHIPS simulations (using one-way nesting within WRF-Chem) and are updated every hour. The CTRL and CTRL3 simulations are not nudged to the reanalysis fields in the boundary layer, in order to obtain a more realistic boundary layer structure. However, comparison with ACCESS meteorological measurements shows that on 11 July 2012 this leads to an over-126 Chapter 5. 
Current impacts of Arctic shipping in Northern Norway [START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF]. STEAM2 emissions were generated on a 5 km × 5 km grid every 30 min for the CTRL simulation, and on a 1 km × 1 km grid every 15 min for the CTRL3 simulation, and were regridded on the WRF-Chem simulation grids.Shipping emissions of NOx , SO 2 , black carbon, and organic carbon are presented in Fig. 5-2 for the 15 km × 15 km simulation domain (emissions totals during the simulation period are indicated within the figure panels). For comparison, the Figure 5 5 Figure 5-2 -(a, c, e, g) STEAM2 ship emissions and (b, d, f, h) HTAPv2 anthropogenic emissions (without ships) of (a, b) NO x , (c, d) SO 2 , (e, f ) BC, and (g, h) OC in kg km -2 over the CTRL and NOSHIPS WRF-Chem domain, during the simulation period (00:00 UTC 4 July 2012 to 00:00 UTC 26 July 2012). On panel (d), the location of the intense Kola Peninsula SO 2 emissions is highlighted by a gray box. The emissions totals for the simulation period are noted in each panel. STEAM2 emissions are based on AIS signals that are transmitted to base stations on shore that have a limited range of 50-90 km, which explains why the emissions presented in Fig. 5-2 only represent near-shore traffic. In addition, our study is focused on shipping emissions in northern Norway, therefore STEAM2 emissions were only generated along the Norwegian coast. As a result, ship emissions in the northern Baltic and along the northwestern Russian coast are not included in this study. However, these missing shipping emissions are much lower than other anthropogenic sources inside the model domain. In the CTRL and CTRL3 simulations, ship emissions are injected in altitude using the plume rise model presented in Sect. 5.2.4.1. Stack height and exhaust fluxes are unknown for most of the ships present in the STEAM2 emissions, which were not specifically targeted during ACCESS. For these ships, exhaust parameters for the Wilson Leer (∼ 6000 gross tonnage) are used as a compromise between the smaller fishing ships (∼ 40 % of Arctic shipping emissions; Winther et al., 2014), and larger ships like the ones targeted during ACCESS. In the CTRL3 simulation, the four ships targeted during ACCESS are usually alone in a 3 km × 3 km grid cell, which enabled us to treat these ships separately and to inject their emissions in altitude using individual exhaust parameters (Sect. 5.2.4.1). In the CTRL simulation, there are usually several ships in the same 15 km × 15 km grid cell, and the four targeted ships were treated in the same way together with all unidentified ships, using the exhaust parameters of the Wilson Leer and local meteorological conditions to estimate injection heights. This means that, for the Costa Deliziosa, Alaed and Wilson Nanjing, the plume rise model is used in CTRL with exhaust parameters from a smaller ship (the Wilson Leer ) than in CTRL3. Because of this, emission injection heights for these ships are lower in CTRL (0 to 30 m) than in CTRL3 (230 m for the Costa Deliziosa, 50 m for the Wilson Nanjing, 30 m for the Wilson Leer, and 65 m for the Alaed ). FLEXPART plume dispersion simulations driven by the MET simulation are performed for the four ships sampled during ACCESS (Sect. 5.2.4.1). The MET simulation agrees well with airborne meteorological measurements on both days (shown in the Supplement, Fig. 
S1) in terms of wind direction (mean bias of -16° on 11 July, +6° on 12 July) and wind speed (normalized mean bias of +14 % on 11 July).

Figure 5-3 - Left panels: ACCESS airborne NOx measurements between (a) 16:00 and 16:35 UTC, 11 July 2012 (flight leg at Z ∼ 49 m), (c) 16:52 and 18:08 UTC, 11 July 2012 (Z ∼ 165 m), and (e) 10:53 and 11:51 UTC, 12 July 2012 (Z ∼ 46 m). Right panels: corresponding FLEXPART-WRF plumes (relative air tracer mixing ratios): (b, d) Wilson Leer and Costa Deliziosa plumes and (f) Wilson Nanjing and Alaed plumes. FLEXPART-WRF plumes are shown for the closest model time step and vertical level.

The Wilson Leer and Costa Deliziosa plumes were sampled during two different runs at two altitudes on 11 July 2012, and are presented in Fig. 5-3a and b (z = 49 m) and Fig. 5-3c and d (z = 165 m). During the second altitude level on 11 July (Fig. 5-3c and d), the Wilson Leer was farther south and the Costa Deliziosa had moved further north. Therefore, the plumes are farther apart than during the first pass at 49 m. Modeled and measured plume locations agree well for the first run (z = 49 m). For the second run (z = 165 m), the modeled plume for the Costa Deliziosa …

Figure 5-4 - (a, c, d) NOx and (b, e) SO2 aircraft measurements (black) compared to FLEXPART-WRF air tracer mixing ratios interpolated along flight tracks, for the plumes of the (a, b) Costa Deliziosa and Wilson Leer on 11 July 2012 (first constant altitude level, Z ∼ 49 m, also shown in Fig. 5-3a) and (c, d, e) Wilson Nanjing and Alaed on 12 July 2012. Panel (d) shows the same results as panel (c) in detail. Since model results depend linearly on the emission flux chosen a priori for each ship, model results have been scaled so that peak heights are comparable to the measurements.

The SO2 emission flux of each ship is estimated from each plume crossing as

$$E_i = E_{\mathrm{tracer}} \times \frac{\int_{t_i^{\mathrm{begin}}}^{t_i^{\mathrm{end}}}\left(\mathrm{SO_2}(t)-\mathrm{SO_2^{background}}\right)\,\mathrm{d}t}{\int_{t_i^{\mathrm{begin}}}^{t_i^{\mathrm{end}}}\mathrm{Tracer}(t)\,\mathrm{d}t} \times \frac{M_{\mathrm{SO_2}}}{M_{\mathrm{air}}} \qquad (1)$$

In Eq. (1), SO2(t) is the measured SO2 mixing ratio (pptv), SO2 background is the background SO2 mixing ratio for each peak, Tracer(t) is the modeled tracer mixing ratio interpolated along the ACCESS flight track (pptv), t_i^begin and t_i^end are the beginning and end times of peak i (modeled or measured, in s), E_tracer is the emission flux chosen a priori for the FLEXPART-WRF air tracer (kg s-1), and M_SO2 and M_air are the molar masses of SO2 and air (g mol-1). This method produces a different SO2 emission flux value E_i (kg s-1) for each of the i = 1 to N peaks corresponding to all the crossings of a single ship plume by the aircraft. These N different estimates are averaged together to reduce the uncertainty in the estimated SO2 emissions. A similar approach is used to estimate NOx emissions. The background mixing ratios were determined by applying a 30 s running average to the SO2 and NOx measurements. Background values were then determined manually from the filtered time series. For each NOx peak, an individual background value was identified and used to determine the NOx enhancement for the same plume. For SO2, a single background value was used for each flight leg (constant altitude). In order to reduce sensitivity to the calculated emission injection heights, FLEXPART-WRF peaks that are sensitive to a ±50 % change in injection height are excluded from the analysis. Results are considered sensitive to injection heights if the peak area in tracer concentration changes by more than 50 % in the injection height sensitivity runs. Using a lower threshold of 25 % alters the final emission estimates by less than 6 %. Peaks sensitive to the calculated injection height typically correspond to samplings close to the ship, where the plumes are narrow.
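A compact implementation of the peak-integration method behind Eq. (1) is sketched below. It assumes that the measured SO2 and the FLEXPART-WRF tracer have already been interpolated onto a common time axis and that the peak windows [t_begin, t_end] have been identified beforehand; the a priori tracer emission flux and the synthetic Gaussian peaks are placeholders, not ACCESS data.

```python
import numpy as np

M_SO2, M_AIR = 64.066, 28.97   # molar masses, g mol-1

def emission_from_peaks(time_s, so2_pptv, so2_background_pptv, tracer_pptv,
                        peak_windows, e_tracer_kg_s):
    """Estimate the SO2 emission flux (kg s-1) from each plume crossing, using
    the ratio of measured to modeled peak areas as in Eq. (1), then average
    the per-peak estimates."""
    time_s = np.asarray(time_s, dtype=float)
    estimates = []
    for (t0, t1), bg in zip(peak_windows, so2_background_pptv):
        sel = (time_s >= t0) & (time_s <= t1)
        measured_area = np.trapz(np.asarray(so2_pptv)[sel] - bg, time_s[sel])
        modeled_area = np.trapz(np.asarray(tracer_pptv)[sel], time_s[sel])
        if modeled_area > 0.0:
            estimates.append(e_tracer_kg_s * measured_area / modeled_area
                             * M_SO2 / M_AIR)
    return float(np.mean(estimates)), estimates

# Minimal synthetic example: one ~60 s plume crossing
t = np.arange(0.0, 120.0, 1.0)
tracer = 500.0 * np.exp(-0.5 * ((t - 60.0) / 10.0) ** 2)       # pptv, modeled tracer
so2 = 40.0 + 800.0 * np.exp(-0.5 * ((t - 60.0) / 10.0) ** 2)   # pptv, measured SO2
mean_e, _ = emission_from_peaks(t, so2, so2_background_pptv=[40.0],
                                tracer_pptv=tracer, peak_windows=[(30.0, 90.0)],
                                e_tracer_kg_s=0.01)             # placeholder a priori flux
print(f"estimated SO2 emission: {mean_e * 86400:.0f} kg per day")
```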
An intense SO 2 peak most likely associated with the Costa Deliziosa and sampled around 17:25 UTC on 11 July 2012 is also excluded from the calculations, because this large increase in SO 2 in an older, diluted part of the ship plume suggests contamination from another source. SO 2 132 Chapter 5. Current impacts of Arctic shipping in Northern Norway emissions are not determined for the Wilson Leer and the Alaed, since SO 2 measurements in their plumes are too low to be distinguished from the background variability. For the same reason, only the higher SO 2 peaks (four peaks > 1ppbv) were used to derive emissions for the Wilson Nanjing. The number of peaks used to derive emissions for each ship is 𝑁 = 13 for the Costa Deliziosa, 𝑁 = 4 for the Wilson Leer, 𝑁 = 8 for the Wilson Nanjing (𝑁 = 4 for SO 2 ) and 𝑁 = 5 for the Alaed. The derived emissions of NO x (equivalent NO 2 mass flux in kg day -1 ) and SO 2 are given in Table 5.4. The emissions extracted from the STEAM2 inventory for the same ships during the same time period are also shown. STEAM2 SO 2 emissions are higher than the value derived for the Costa Deliziosa, and lower than the value derived for the Wilson Nanjing. NO x emissions from STEAM2 are higher than our calculations for all ships. In STEAM2, the NO x emission factor is assigned according to IMO MARPOL (marine pollution) Annex VI requirements (IMO, 2008) and engine revolutions per minute (RPM), but all engines subject to these limits must emit less NO x than this required value. For the Wilson Leer, two calculated values are reported: one calculated by averaging the estimates from the four measured peaks, and one value where an outlier value was removed before calculating the average. During the 11 July flight, the Wilson Leer was traveling south at an average speed of 4.5 m s -1 , with relatively slow tailwinds of 5.5 m s -1 . Because of this, the dispersion of this ship's plume on this day could be sensitive to small changes in modeled wind speeds, and calculated emissions are more uncertain.The most important difference between the inventory NO 𝑥 and our estimates is ∼ 150 % for the Costa Deliziosa. Reasons for large discrepancy in predicted and measured NO x emissions of Costa Deliziosa were investigated in more detail. A complete technical description of Costa Deliziosa was not available, but her sister vessel Costa Luminosa was described at length recently (RINA, 2010). The details of Costa Luminosa and Costa Deliziosa are practically identical and allow for in-depth analysis of emission modeling. With complete technical data, the STEAM2 SO 𝑥 and NO x emissions of Costa Deliziosa were estimated to be 2684 and 5243 kg d -1 , respectively, whereas our derived estimates indicate 2399 and 2728 kg d -1 (difference of +12 % for SO 𝑥 and +92 % for NO 𝑥 ). The good agreement for SO 𝑥 indicates that the power prediction at vessel speed reported in AIS and associated fuel flow is well predicted by STEAM2, but emissions of NO x are twice as high as the value derived from measurements. In case of Costa Deliziosa, the NO x emission factor of 10.5 g kW -1 h for a tier II compliant vessel with 500 RPM engine is assumed by STEAM2. Based on the measurement-derived value, a NO x emission factor of 5.5 g kW -1 h would be necessary, which is well below the tier II requirements. 
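The emission factor of 5.5 g kW-1 h-1 quoted above can be recovered from a measurement-derived daily NOx emission and an estimate of the engine work performed over the same day. A small sketch of that conversion is given below; the engine power and load are hypothetical values chosen so that the result lands near the quoted figure, and do not correspond to actual Costa Deliziosa engine data.

```python
def emission_factor_g_per_kwh(emission_kg_per_day, engine_power_kw, mean_load):
    """Convert a daily emission total into an emission factor (g per kWh),
    given the engine power in use and the mean engine load over the day."""
    work_kwh = engine_power_kw * mean_load * 24.0
    return emission_kg_per_day * 1000.0 / work_kwh

# Hypothetical example: 2728 kg NOx per day (measurement-derived value from the text)
# produced by an assumed ~20.7 MW of engine power at full load
ef = emission_factor_g_per_kwh(emission_kg_per_day=2728.0,
                               engine_power_kw=20700.0, mean_load=1.0)
print(f"implied NOx emission factor: {ef:.1f} g per kWh")
```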
It was reported recently (IPCO, 2015) that NO x emission reduction technology was installed on Costa Deliziosa, but it is unclear whether this technology was in place during the airborne measurement campaign in 2012. The case of Costa Deliziosa underlines the need for accurate and up-to-date technical data for ships when bottom-up emission inventories are constructed. It also necessitates the inclusion of the effect of emission abatement technologies in ship emission inventories. Furthermore, model predictions for individual vessels are complicated by external contributions, like weather and sea currents, affecting vessel performance. However, the STEAM2 emission model is based on AIS real-time positioning data, which has a much better coverage than activity data sets used to generate older shipping emission inventories (e.g., COADS and AMVER). These earlier data sets also have known biases for ships of specific sizes or types. In addition, components of the Chapter 5. Current impacts of Arctic shipping in Northern Norway 133 STEAM2 inventory, such as fuel consumption, engine loads, and emission factors have already been studied in detail in the Baltic Sea by[START_REF] Jalkanen | A modelling system for the exhaust emissions of marine traffic and its application in the Baltic Sea area[END_REF]Jalkanen et al. ( , 2012) ) andBeecken et al. (2015).Beecken et al. (2015) compared STEAM2 emission factors to measurements for ∼ 300 ships in the Baltic Sea. Their results showed that, while important biases were possible for individual ships, STEAM2 performed much better on average for a large fleet. In the Baltic Sea, STEAM2 NO x emission factors were found to be biased by +4 % for passenger ships, based on 29 ships, and -11 % for cargo ships, based on 118 ships. For SO 𝑥 , the biases were respectively +1 , and Corbett et al. (2010) inventories. The highest shipping emissions in the region of northern Norway are found in the STEAM2 and Winther et al. (2014) inventories, which are both based on 2012 AIS ship activity data (Sect. 5.2.4.3 for a description of the methodology used for STEAM2). We note that, except for OC, the emissions are higher in the Winther et al. (2014) inventory because of the larger geographical coverage: Winther et al. (2014) used both ground-based and satellite retrieved AIS signals, whereas the current study is restricted to data received by ground based AIS stations (capturing ships within 50 to 90 km of the Norwegian coastline). Despite lower coverage, the horizontal and temporal resolutions are better described in land-based AIS networks than satellite AIS data. The terrestrial AIS data used in this study is thus more comparable to the spatial extent and temporal resolution of the measurements collected close to the Norwegian coast. STEAM2 is the only inventory including sulfate emissions, which account for SO 2 to SO = 4 conversion in the ship exhaust. Ship emissions from Dalsøren et al. (2009) and Corbett et al. (2010) are based on ship activity data from 2004, when marine traffic was lower than in 2012. Furthermore, the gridded inventory from Corbett et al. (2010) does not include emissions from fishing ships, which represent close to 40 % of Arctic shipping emissions Figure 5 - 5 - 55 Figure 5-5 -Snapshots of model predicted surface NO x and O 3 from the CTRL3 (3 km) simulation (a, c, e, g) and the CTRL (15 km) simulation (b, d, f, h) during the flights on 11 and 12 July 2012. Model results for the CTRL3 simulation are shown over the full model domain. 
CTRL run results are shown over the same region for comparison. The aircraft flight tracks are indicated in blue. On panels (c) and (g), black arrows indicate several areas of O 3 titration due to high NO x from ships. (i, j) NO x and (k, l) O 3 2-day average surface enhancements (00:00 UTC 11 July 2012 to 00:00 UTC 13 July 2012) due to shipping emissions, (i, k) CTRL3 simulation, (j, l) CTRL simulation. The 2-day average enhancements of NO 𝑥 and O 3 over the whole area are given below each respective panel. resolve individual ship plumes and to reproduce some of the plume macroscopic properties. The CTRL and CTRL3 simulations (presented in Table 5.3) are then compared to evaluate if nonlinear effects are important for this study period and region. WRF-Chem results from CTRL and CTRL3 for surface (∼ 0 to 30 m) NO x and O 3 are shown in Fig. 5-5. On 11 and 12 July, the aircraft specif- Figure 5 5 Figure 5-6 -Time series of measured O 3 and NO x on 11 July 2012 compared to model results extracted along the flight track for the CTRL and CTRL3 runs. Observations are in black, the CTRL run is in red, and the CTRL3 run is in green. A 56 s averaging window is applied to the measured data for model comparison (approximately the time for the aircraft to travel 2 × 3 km). Flight altitude is given as a dashed gray line. After the first run at 49 m, a vertical profile was performed (16:35 to 16:45 UTC) providing information about the vertical structure of the boundary layer. Figure 5 - 7 - 57 Figure 5-7 -Observed background-corrected PM 1 enhancements in the plume of the Costa Deliziosa on 11 July 2012 (black squares), compared to modeled PM 1 enhancements in ship plumes (in red), extracted along the flight track (CTRL3 -NOSHIPS3 PM 1 ). A 56 s averaging window is applied to the measured data to simulate dilution in the model grid. Flight altitude is given as dashed black line. Fig. 5-8. Modeled vertical profiles of PM 2.5 are also shown in Fig. 5-8. This comparison allows us to estimate how well CTRL represents the average impact of shipping over a larger area and a longer period. Figure 5 - 5 Figure 5-8 shows that the NOSHIPS simulation significantly underestimates NO x and SO 2 , and moderately underestimates O 3 along the AC-CESS flights, indicating that ship emissions are needed to improve the agreement between the model and observations. In the CTRL simulation, NO x , SO 2 , and O 3 vertical structure and concentrations are generally well reproduced, with normalized mean biases of +14.2, -6.8, and Figure 5 5 Figure 5-8 -Average vertical profiles of (a) NO x , (b) SO 2 , (c) O 3 and (d) PM 2.5 observed during the four ACCESS ship flights (in black, with error bars showing standard deviations), and interpolated along the ACCESS flight tracks in the CTRL simulation (red line) and in the NOSHIPS simulation (blue line). For PM 2.5 only simulation results are shown. Figure 5 5 Figure 5-9 -15-day average (00:00 UTC 11 July 2012 to 00:00 UTC 26 July 2012) of (top) absolute and (bottom) relative surface enhancements (CTRL -NOSHIPS) in (a, d) SO 2 , (b, e) NO x , and (c, e) O 3 due to ship emissions in northern Norway from STEAM2. estimated that instant dilution of shipping NO x emissions in 2 ∘ × 2.5 ∘ model grids leads to a 1 to 2ppbv overestimation in ozone in the Norwegian and Barents seas during July 2005. This effect could explain a large part of the difference in O 3 enhancements from shipping between the simulations of Ødemark et al. 
(2012) (2.8 ∘ × 2.8 ∘ resolution) and the simulations presented in this paper (15 km × 15 km resolution). The impact of ships in northern Norway on surface PM 2.5 , BC, and SO = 4 during the same period is shown in Fig. 5-10. The impact on PM 2.5 is relatively modest, less than 0.5 µg m -3 . However, these values correspond to an important relative increase of ∼ 10 % over inland Norway and Sweden because of the low background PM 2.5 in this region. Over the sea surface, the relative effect of ship emissions is quite low because of higher sea salt aerosol background. Aliabadi et al. (2015) have observed similar increases in PM 2.5 (0.5 to 1.9 µg m -3 ) in air masses influenced by shipping pollution in the remote Canadian Arctic. In spite of the higher traffic in northern Norway, we find lower values than Aliabadi et al. (2015) because results in Fig. 5-10 are smoothed by the 15-day average. Impacts on surface sulfate and BC concentrations are quite large, reaching up to 20 and 50 %, respectively. We note that Eckhardt et al. (2013) found enhancements in summertime equivalent BC of 11 % in Svalbard from cruise ships alone. As expected, absolute SO = 4 and BC enhancements in our simulations are higher in the southern part of the domain, where ship emissions are the strongest. radiative effect of ships by calculating the difference between the top-of-atmosphere (TOA) upwards shortwave (0.125 to 10 µm wavelengths) radiative flux in the CTRL and the NOSHIPS simulations. Since the CTRL and NOSHIPS simulations take into account aerosol-radiation interactions and their feedbacks (the so-called direct and semi-direct effects) as well as cloudaerosol interactions (indirect effects), this quantity represents the sum of modeled direct, semidirect and indirect effects from aerosols associated with ship emissions. Yang et al. (2011) and Saide et al. (2012) showed that including cloud aerosol couplings in WRF-Chem improved significantly the representation of simulated clouds, indicating that the indirect effect was relatively well simulated using CBM-Z/MOSAIC chemistry within WRF-Chem. Our calculations do not include the effect of BC on snow, since this effect Figure 5 - 5 Figure 5-10 -15-day average (00:00 UTC 11 July 2012 to 00:00 UTC 26 July 2012) of (top) absolute and (bottom) relative surface enhancements (CTRL -NOSHIPS) in (a, d) PM 2.5 , (b, e) BC and (c, f ) SO = 4 due to ship emissions in northern Norway from STEAM2. mospheric impacts of ships in northern Norway in July 2012. The study relies on measurements from the ACCESS aircraft campaign, emissions evaluation, and regional modeling in order to evaluate both individual ship plumes and their regional-scale effects. STEAM2 emissions, which represent individual ships based on highresolution AIS ship positioning data, are compared with emissions for specific ships derived from measurements and plume dispersion mod-eling using FLEXPART-WRF. Regional WRF-Chem simulations run with and without ship emissions are performed at two different resolutions to quantify the surface air quality changes and radiative effects from ship emissions in northern Norway in July 2012. The most important conclusions from our study are 1. Validation of the STEAM2 emissionsemissions of NO x and SO 2 are determined for individual ships, by comparing airborne measurements with plume dispersion modeling results. These calculated emissions are compared with bottom-up emissions determined for the same ships by the STEAM2 emission model. 
Results show that STEAM2 overestimates NO x emissions for the four ships sampled during ACCESS. SO 2 emissions are also determined for two ships. Large biases are possible for individual ships in STEAM2, especially for ships for which there is incomplete technical data or where emission reduction techniques have been employed. Nevertheless, combining WRF-Chem simulations and STEAM2 emissions leads to reasonable predictions of NO x , SO 2 , and O 3 compared to ACCESS profiles in the lower troposphere (normalized mean biases of +14.2, -6.8, and -7.0 %, respectively). These results also indicate that shipping emissions comprise a significant source of NO x and SO 2 at low altitudes during the ACCESS flights, even though specific ship plume sampling near the surface was excluded from these profiles. Pollution sampled during these flights thus represents shipping pollution that had time to mix vertically in the marine boundary layer and is more representative of the regional pollution from shipping in northern Norway. These results are in agreement with the recent evaluation of STEAM2 in the Baltic Sea by Beecken et al. (2015), which showed that STEAM2 performed well for an average fleet (∼ 200 ships), despite biases for individual ships. 2. Regional model representation of ship plumes and their local-scale influence -WRF-Chem runs including shipping emissions from STEAM2 are performed at 15 km × 15 km and 3 km × 3 km horizontal resolutions, and compared with airborne measurements of NO x and ozone. The high-resolution simulation is better at reproducing measured NO 𝑥 peaks and suggests some ozone titration in ship plumes, but the NO x and ozone enhancements due to ships in both simulations are within less than 5 % of each other when averaged over the whole domain and simulation period. The 3 km × 3 km simulation also reproduces observed PM 1 enhancements in ship plumes. Surface PM 10 enhancements due to ships are 15 % higher in the 3 km × 3 km resolution simulation. 3. Average influence of ship pollution in July 2012 -the difference between runs with and without ship emissions are compared with campaign average profiles (excluding flights focused on oil platforms, smelters, and biomass burning emissions from outside the simulation domain). Including STEAM2 emissions reduces the mean bias between measured and modeled trace gases NO x , SO 2 , and O 3 . At the surface, ship emissions enhance 15-dayaveraged concentrations along the Norwegian coast by approximately 80 % for NO x , 80 % for SO 2 , 5 % for O 3 , 40 % for BC, and 10 % for PM 2.5 , suggesting that these emissions are already having an impact on atmospheric composition in this region. Regional model results presented in this study predict lower ozone production from ships compared to certain earlier studies using global models. However, it is known that global models run at low resolution tend to overestimate ozone production (underestimate ozone titration) from fresh ship emissions because of nonlinearities introduced when diluting concentrated emissions from ships into coarse model grid cells. 4. Influence on the radiative budget -northern Norwegian ship emissions contribute -9.3 mW m -2 to the global shortwave radiative budget of ship emissions, including semi-direct and indirect effects. These results are more significant than found previously in a study using a global model that did not explicitly resolve aerosol activation in clouds. 
This suggests that global models may be underestimating the radiative impacts of shipping in this region. Our study shows that local shipping emissions along the northern Norwegian coast already have a significant influence on regional air quality and aerosol shortwave radiative effects. As Arctic shipping continues to grow and new regulations are implemented, the magnitude of these impacts is expected to change. Due to the limited region (northern Norway) and the short timescale (15 days) considered here, it is not possible to assess the radiative effect of other climate forcers associated with shipping in northern Norway, including O3, which global model studies have suggested could be significant.

The quasi-hemispheric simulation domain is shown in Figure 6-1. It covers most of the Northern Hemisphere and, in order to be computationally feasible, simulations are run at a relatively low resolution (similar to the ones used in global models) of 100 km × 100 km. Modeled surface concentrations are compared in Figures 6-2 and 6-3 to measurements at Arctic ground stations (Alert, Canada; Barrow, Alaska; Tiksi, Russia; Nord, Greenland; Pallas, Finland; Summit, Greenland; and Zeppelin, Svalbard, Norway). The BC comparison shown in Figure 6-2 is significantly improved compared to WRF-Chem results presented in Eckhardt et al. (2015). Figure 6-4 compares mean modeled profiles with airborne measurements of rBC (measured by a single-particle soot photometer, SP2) and O3 (measured by UV absorption). The model agrees very well with O3 measurements up to ~4 km, but overestimates O3 above. This is likely due to uncertainties in the stratospheric upper boundary condition in WRF-Chem, or to overestimated stratosphere-troposphere exchange. Similar issues were found by Emmons et al. (2015) when comparing global models and WRF-Chem simulations to aircraft and radiosonde observations.

Figure 6-4 - Mean (a) SP2 rBC and (b) O3 profiles from the ACCESS aircraft campaign (black, July 2012, northern Norway) compared to WRF-Chem BC and O3 interpolated along ACCESS flights (base simulation). SP2 rBC measurements cover the size range 80 to 470 nm, and WRF-Chem 80 to 470 nm BC was calculated in two ways: 1) from the size distribution of the internally mixed particles in MOSAIC (thick continuous red line) and 2) by estimating the size of BC "cores" within each MOSAIC size bin (thin dotted red line).

6.5 Model internal variability and noise: issues when quantifying sensitivities to small emission perturbations with WRF-Chem

In this chapter, the effects on Arctic aerosols and ozone of local and remote sources of pollutant emissions are calculated by performing sensitivity WRF-Chem simulations with and without each source of emissions. In order to compute the model response to a perturbation in emissions, e.g. the atmospheric impacts of 2012 Arctic shipping emissions, results from the 2012_NOSHIPS simulation (without Arctic shipping emissions) are subtracted from the 2012_BASE simulation (including all 2012 emissions). This assumes that differences between both simulations are only driven by the addition of Arctic shipping emissions. For example, the absolute and relative changes in surface BC concentrations due to Arctic shipping emissions (2012_BASE - 2012_NOSHIPS and [2012_BASE - 2012_NOSHIPS]/2012_BASE) are shown in Figure 6-5.

Figure 6-5 - (a) Absolute and (b) relative differences in surface (~0-50 m) BC concentrations between BASE and NOSHIPS simulations in 2012, July average.

Figure 6-5a shows that the largest changes in surface BC due to Arctic shipping emissions are located in very polluted regions such as Siberia where boreal fires occur, India, and Eastern Asia.
Some of these regions are located far away from the Arctic, where little influence from Arctic shipping emissions is expected.

Figure 6-6 - Differences in (a) total cloud fraction, July 2012 average, and (b) July 2012 monthly rainfall between BASE and NOSHIPS simulations.

Figure 6-9 - Left column: seasonally integrated BC deposition in (top) spring (MAM) and (bottom) summer (JJA) 2012 in the base simulation. The four columns on the right show the relative contributions from each source to total BC deposition: mid-latitude anthropogenic emissions, biomass burning, Arctic flares and Arctic shipping. The 50 % snow and ice limit in 2012 is also shown as a green line on each panel.

Figure 6-10 - Same as Figure 6-7, for spring (MAM) 2050.

Figures 6-13 and 6-14 present the 6-month-averaged vertical distributions of BC and O3 enhancements due to remote and local sources in 2050. These figures also show the average cloud cover during the simulation, indicating strong surface cloudiness (30 to 40 %) in the Arctic. Results for 2050 are shown because the stronger enhancements from shipping give a clearer picture, but results in 2012 are qualitatively similar.

Figure 6-13 - 6-month averaged zonal mean BC enhancements in 2050 associated with each source; black contour lines indicate the 6-month averaged modeled cloud fraction.

Figure 6-14 - 6-month averaged zonal mean O3 enhancements in 2050 associated with each source; black contour lines indicate the 6-month averaged modeled cloud fraction.

The total BC direct radiative effect (DRE) in a given simulation is computed from the difference in net TOA radiative fluxes calculated with and without BC,

DRE_BC_base = (F↓_withBC_base - F↑_withBC_base) - (F↓_noBC_base - F↑_noBC_base),   (6.2)

where all fluxes are TOA fluxes. Since the downwelling TOA fluxes are identical in both calculations (F↓_withBC_base = F↓_noBC_base), this reduces to upwelling fluxes only:

DRE_BC_base = F↑_noBC_base - F↑_withBC_base.   (6.3)

The BC DRE from a specific source, e.g. 2012 Arctic shipping, was then determined by subtracting the total BC DRE in the sensitivity run, e.g. 2012_NOSHIPS, from the total BC DRE in the 2012_BASE simulation:

DRE_BC_ships = DRE_BC_base - DRE_BC_noships.   (6.4)

In order to separate the DRE from cross-effects on the DRE of changed cloud and meteorological properties between base and sensitivity simulations (due to indirect/semi-direct effects), all aerosol DRE calculations are performed using ozone, meteorological and cloud properties from the base simulations. Similarly, ozone DRE calculations (SW and LW) are performed using aerosol, meteorological and cloud properties from the base simulations.

Figure 6-15 - Average Arctic (latitude > 60°N) all-sky direct radiative effects (DRE) of scattering aerosols (SO4^2- + NO3^- + NH4^+ + OA), absorbing aerosols (BC) and ozone, from each source, at top-of-atmosphere. Note the differences in scales for DRE.

Figure 6-15 indicates that, even in 2050, the effect of biomass burning and mid-latitude anthropogenic emissions is approximately two orders of magnitude larger than the effect of local Arctic shipping and flaring emissions. The most significant radiative impacts of local Arctic emissions are BC warming from flaring emissions (~25 mW m-2 in 2012 and 2050), and O3 cooling from shipping emissions (~-20 mW m-2 in summer 2012, ~-30 mW m-2 in summer 2050).

Figure 6-16 - Average Arctic (latitude > 60°N) indirect and semi-direct radiative effects (ISRE) from each source in spring and summer 2012, at top-of-atmosphere. Note the differences in scales. The values for Arctic shipping and Arctic flaring are likely to be spurious results due to cloud variability within the model.
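Both the surface sensitivity maps and the DRE attribution described above come down to simple differences between post-processed model fields. The sketch below (Python/NumPy) illustrates the BASE - NOSHIPS differencing and the double difference of Eqs. (6.2)-(6.4); the function names, array shapes and numerical values are hypothetical illustrations, not output from the thesis simulations.

```python
import numpy as np

def source_contribution(base, sensitivity, eps=1e-12):
    """Absolute and relative (in % of BASE) change attributed to one emission
    source, e.g. BASE - NOSHIPS for Arctic shipping (Figure 6-5 style fields)."""
    absolute = base - sensitivity
    relative = 100.0 * absolute / np.maximum(np.abs(base), eps)
    return absolute, relative

def dre_bc(f_up_no_bc, f_up_with_bc):
    """Total BC direct radiative effect in one simulation (Eq. 6.3): upwelling
    TOA flux without BC minus upwelling TOA flux with BC (positive = warming)."""
    return f_up_no_bc - f_up_with_bc

def dre_bc_from_source(dre_base, dre_sensitivity):
    """BC DRE attributable to a single source (Eq. 6.4), e.g. Arctic shipping:
    DRE_BC_ships = DRE_BC_base - DRE_BC_noships."""
    return dre_base - dre_sensitivity

# Toy 2-D surface BC fields (µg m-3) and area-mean TOA fluxes (W m-2)
bc_base = np.full((4, 4), 0.10); bc_base[1, 1] = 0.14
bc_noships = np.full((4, 4), 0.10)
d_abs, d_rel = source_contribution(bc_base, bc_noships)
print(round(float(d_abs.max()), 3), round(float(d_rel.max()), 1))   # 0.04 µg m-3, ~28.6 %

dre_ships = dre_bc_from_source(dre_bc(240.10, 240.06), dre_bc(240.09, 240.062))
print(round(dre_ships, 3))   # ~0.012 W m-2 of BC warming attributed to shipping
```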
O 3 and BC measurements in the Arctic, and to airborne O 3 measurements, but indicate that our simulations might overestimate BC concentrations aloft and O 3 near the tropopause.Improving the representation of aerosol removal in ice clouds and of stratosphere-troposphere exchange could help to further improve model performance.The main findings from this study are the following: • Current Arctic shipping: Arctic shipping emissions already have a significant influence on surface O 3 and surface aerosol concentrations (BC, NO - 3 , NH + 4 , SO 2- 4 ) in Northern Chapter 6. Current and future impacts of local Arctic sources of aerosols and ozone 175 Europe during summer 2012. This confirms earlier results from the WRF-Chem case study in July 2012 presented in Chapter 4. Impacts on surface O 3 (15 to 35 % of total, 3 to 8 ppbv) are higher than previous estimates, and are also associated with a strong increase in surface OH concentrations (∼ a factor of 2 in the Norwegian Sea). Surprisingly, our results indicate that the main direct radiative effect of Arctic shipping emissions is LW O 3 cooling. This appears to be due to strong surface temperature inversions in the Arctic, and to the vertical distribution of Arctic shipping O 3 , which is in our simulations confined in atmospheric layers warmer than the ground surface. In order to confirm this finding, further work is needed to validate the model representation of surface and boundary layer temperatures in the Arctic. Direct radiative effects from Arctic shipping aerosols appear to be very limited, which can be due, in part, to the limited residence time of aerosols emitted in the Arctic marine boundary layer ∼ 1.4 days and to the location of Arctic aerosols within clouds, decreasing the available SW radiation. comparisons with airborne, ground-based, and satellite measurements in the Arctic show that the model is able to reproduce ozone and aerosol plumes from long-range pollution transport, local shipping pollution in the marine boundary layer, and seasonal (6-months simulations) aerosol and ozone pollution at the surface in the Arctic. First of all, WRF-Chem is used to investigate pollution transport from Europe to the Arctic, which is currently thought to be one of the main sources of Arctic aerosol and ozone pollution. Simulations are used to analyze airborne aerosol measurements from the POLARCAT-France spring campaign, which took place in Northern Europe in Spring 2008. The model reproduces the complex vertical distribution of European pollution aerosols observed in the Arctic during the campaign, and show that these observed aerosol layers were due to different emission types (industrial and urban emissions; agricultural fires), different geographical origins (central Europe; Western Russia and Ukraine) and different transport pathways (fast transport in altitude in frontal systems; slower stransport at low altitudes). Le but de cette thèse est d'améliorer les connaissances sur la pollution à l'ozone et aux aérosols en Arctique. Dans cette thèse, des simulations de météorologie-chimie-aérosols sont effectuées à l'aide du modèle WRF-Chem, combiné à de nouveaux inventaires des émissions locales en Arctique. Ces simulations sont utilisées pour analyser les mesures issues de campagnes aéroportées récentes consacrées au transport de pollution depuis les moyennes latitudes jusqu'en Arctique, et à la pollution liée aux bateaux et à l'extraction de pétrole et de gaz en Arctique. 
Les résultats indiquent que la modélisation régionale avec WRF-Chem est un outil adapté pour étudier la pollution à l'ozone et aux aérosols dans cette région. Les résultats de simulations WRF-Chem sont en bon accord avec les mesures par avion de la campagne POLARCAT-France au printemps 2008, en termes de quantité d'aérosols et de propriétés optique des aérosols. Les simulations parviennent à reproduire la distribution verticale complexe des aérosols de pollution européens observés en Arctique. Ces simulations montrent que les différentes couches d'aérosols observés pendant POLARCAT-France sont dues à différents types d'émission (émissions industrielles et urbaines ; feux agricoles), différentes régions géographiques (Europe centrale, ouest de la Russie et Ukraine) et différents mécanismes de transport (transport rapide en altitude dans des systèmes frontaux ; transport lent à basse altitude). Ces résultats confirment l'importance du dépôt humide, qui contrôle les quantités d'aérosols transportées jusqu'en Arctique (> 50 % de la masse totale déposée), tant pour le transport dans les systèmes frontaux que pour le transport à basse altitude. Les résultats indiquent aussi que la prise en compte des aérosols organiques secondaires (SOA, non pris en compte dans ces simulation initiales) semble être importante pour reproduire les quantités d'aérosols dans les panaches de feux, et la composition des panaches pour tous types de sources. En termes d'effets directs des aérosols, cet évènement cause un refroidissement au sommet de l'atmosphère (TOA), dû à la diffusion du rayonnement solaire par les aérosols, mais cause un réchauffement au TOA au dessus des surfaces couvertes de neige ou de glace, et un refroidissement de surface pour tous les types de surface. Le modèle est aussi combiné avec un nouvel inventaire d'émission des bateaux pour effectuer des simulations au nord de la Norvège en été 2012, pendant les dates de la campagne aéroportée ACCESS, dédiée aux émissions locales de pollution en Arctique. Ces simulations montrent que l'impact actuel des émissions de la navigation dans cette région est significatif sur les concentrations de surface en NO x , SO 2 , SO 2- 4 , BC et O 3 . WRF-Chem comprend un schéma relativement complexe d'intéractions aérosols/nuages, qui est utilisé pour estimer l'effet radiatif total (direct + semi-direct + indirect) des émissions des bateaux dans cette région en juillet 2012. L'effet principal de ces émissions est un fort refroidissement au TOA, dû aux interactions aérosols/nuages. Les nouveaux inventaires d'émission semblent bien représenter la pollution en Arctique, mais de nouvelles campagnes de mesures consacrées à la pollution des bateaux en Arctique sont nécessaires afin de valider précisément ces nouveaux inventaires. Cette thèse montre que le modèle WRF-Chem peut être utilisé à des échelles fines résolvant les panaches individuels pour analyser des mesures à hautes résolutions, ou à de grandes échelles quasi-hémisphériques pour étudier la pollution à l'échelle de l'Arctique. Afin d'effectuer ces simulations à grande échelle, j'ai défini une configuration du modèle pour les études arctiques, et amélioré le modèle quand certains processus clés n'étaient pas pris en compte. Cette configuration comprend des développements récents du modèle (formation de SOA, interactions aérosols/nuages dans les nuages sous-maille, émission de NO x par la foudre), ainsi que de nouveaux modules développés spécialement pour cette thèse. 
J'ai trouvé que certains de ces nouveaux processus étaient critiques en région arctique. Pour l'ozone, il semble important de prendre correctement en compte l'albedo UV au-dessus de la neige et de la glace et son influence sur les taux de photolyse, ainsi que le dépôt sec réduit au dessus de la neige et de la glace. Pour les aérosols en Arctique, la représentation de la couche limite arctique et le dépôt humide dans les nuages sous-maille semblent primordiaux. Ces améliorations du modèle entraînent de meilleurs résultats pour les aérosols et l'ozone, comparés à des mesures effectuées par avion et au sol en Arctique. La version améliorée du modèle est utilisée pour quantifier les impacts actuels (2012) et futurs (2050) de la navigation et des émissions liées au torchage de gaz en Arctique, en termes de qualité de l'air et d'effets radiatifs. Les résultats de simulation indiquent que les torches pétrolières sont et devraient rester une source majeure de carbone suie en Arctique, entraînant un effet réchauffant, et que la navigation en Arctique est déjà une source importante de carbone suie et d'ozone en été. En 2050, la navigation de diversion à travers l'Océan Arctique pourrait devenir la source principale de pollution locale. J'ai aussi déterminé que l'effet radiatif principal des bateaux en Arctique semblait être un refroidissement infra-rouge au TOA dû à l'ozone. Cet effet refroidissant est dû à l'inversion de température dans la couche limite arctique. Les effets radatifs directs dans l'UV et le visible sont faibles pour les bateaux, en raison du faible temps de vie des aérosols dans cette région (1.4 jours pour le carbone suie) et parce que la pollution liée aux bateaux est souvent située sous les nuages. En conséquence, il semble primordial de bien représenter les températures de peau, la structure de la couche limite et les nuages en Arctique pour prendre en compte correctement les effets radiatifs liés à ces sources locales de pollution. Chapter 2. Tropospheric ozone and tropospheric aerosols in the Arctic these additional reactions are a net sink of HO x and can decrease O 3 . Longer hydrocarbon chains R-H can also be oxidized by a mechanism similar to 2.14-2.16 to form RO 2 and NO 2 . Organic compounds present in the gas phase in the atmosphere are called Volatile Global emissions of NO x , CO, CH 4 and non-methane VOC (NMVOC)Tropospheric O 3 is produced from NO x , CO, CH 4 , and other (non-methane) VOC. At the global scale, the main source of NOx and CO is human activity (Table2.1). Natural emissions of NO x are due to soils, lightning and wildfires. VOC emissions are mostly natural, Organic Compounds (VOC). 2.1.2.3 16) Reactions 2.14-2.16 provide another source of NO 2 which can potentially produce O 3 . Furthermore, the methyl peroxy radical, CH 3 O 2 , can be oxidized to form methoxy radicals, CH 3 O, and formaldehyde, CH 2 O (details in [START_REF] Jacob | Heterogeneous chemistry and tropospheric ozone[END_REF] . Under high NO x , these pathways lead to the formation of additional HO 2 and CO, and one molecule of CH 4 can produce an average of 3.7 molecules of O 3 [START_REF] Crutzen | The changing photochemistry of the troposphere[END_REF] . Under low NO x however, due to methane sources from wetlands, and emissions of isoprene and terpenes by vegetation. Human VOC emissions include methane, several alkanes and alkenes, and aromatic compounds such as benzene and toluene. 
The non-methane VOC category represents a large variety of compounds with different reactivities and different impacts on ozone production. Table 2 2 Photochemical sinks of ozone, HO x and NO x in the troposphere Source NO x (TgN yr -1 ) (Tg yr -1 ) (Tg yr -1 ) (Tg yr -1 ) CO NMVOC CH 4 Anthropogenic, excluding agriculture 28.3 96 1071 213 Agriculture 3.7 204.5 - - Biomass burning 5.5 32.5 372.5 81.3 Soils 7.3 - - - Lightning 4 - - - Vegetation - - - 950 Oceans and geological sources - 54 49 4.9 Wetlands, other natural sources - 228.5 - - 2.1.3 .1 -Global NO x , CH 4 and NMVOC emissions by source, from IPCC [START_REF] Ciais | Carbon and Other Biogeochemical Cycles[END_REF] IPCC, 2013a) , Wiedinmyer et al. (2011) and Sindelarova et al. (2014) Table 2 2 .2 -Global tropospheric ozone budget from 17 model studies, from Wild (2007) Chem. prod. Chem. loss Prod -Loss Stratosphere Deposition (Tg yr -1 ) (Tg yr -1 ) (Tg yr -1 ) (Tg yr -1 ) (Tg yr -1 ) 4465 ± 514 4114 ± 409 396 ± 247 529 ± 105 949 ± 222 Table 2.2 shows an estimate of the global budget of tropospheric ozone, based on 17 model studies published between 2000 and 2007 There are also less liquid clouds and liquid precipitation in the Arctic during winter and spring, which decreases the efficiency of HNO 3 and H 2 O 2 wet removal. 2.1.8.2 Radiative impacts from Arctic ozone pollution [START_REF] Sillman | The relation between ozone, NO_x and hydrocarbons in urban and polluted rural environments[END_REF] for a presentation of the NO x and VOC sensitivity regimes). O 3 and NO x have a relatively long lifetime in the Arctic, due, in part, to lower dry deposition caused by the strong atmospheric stability over snow and ice (causing reduced atmosphere-surface exchanges) and the relative lack of vegetation (causing reduced uptake Chapter 2. Tropospheric ozone and tropospheric aerosols in the Arctic 49 by plants). Table 2.3. Aerosol emissions are mostly natural, due to sea salts emitted from oceans and mineral dust emitted from deserts. Less than 10 % of total aerosol emissions and production is of anthropogenic origin, but anthropogenic sources contribute significantly to the global budget of sulfate, nitrate, black carbon and organic aerosols. In Table 2 .3, biomass burning is presented separately because it can be due to both human and natural causes. Aerosols emitted directly as particles (e.g. dust, sea salt, ashes) are called primary aerosols, and aerosols formed from gas to particle conversion (involving complex microphysical and chemical processes) are called secondary aerosols (e.g. sulfate, nitrate, ammonium, secondary organic aerosols). 2.2.2 Aerosol properties: chemical composition, mixing state, size 2.2.2.1 Composition Table 2 .3 illustrates the chemical variety of aerosols, which can be separated into 5 main categories: black carbon, organic aerosols, inorganic soluble aerosols, sea salt, and other inorganics. Table 2 2 .3 -Global sources of aerosols by composition, from (1) IPCC (Boucher et al., 2013) (2) Delmas et al. (2005), (3) Spracklen et al. (2011), (4) Seinfeld and Pandis (2006), (5) Adams et al. (1999). 
POA are Primary Organic Aerosols, SOA Secondary Organic Aerosols Source Emissions & Secondary production (Tg yr -1 ) Natural Primary aerosols Sea spray 1400-1800 (1) including marine POA 2-20 (1) Mineral dust 1000 -4000 (1) Volcanic ash 33 (2) Biogenic POA 56 (2) Secondary aerosols Sulfate from oceanic DMS 90 (2) Sulfate from volcanic SO 2 SOA from biogenic VOC 21 (2) 20-380 (1) Nitrate from NO x 4 (2) Ammonium from NH 3 13.4 (5)* Anthropogenic Primary aerosols BC 3.6-6.0 (1) POA 6.3-15.3 (1) Secondary aerosols Sulfate from SO 2 SOA from VOC 120 (2) 100 (3) Nitrate from NO x 21.3 (4) Ammonium from NH 3 20.2 (5)* Biomass burning Primary aerosols BC 5.7 (2) POA 54 (2) Secondary aerosols SOA from VOC 3 (3) Sulfate and nitrate 90 (4) Table 2 2 𝑚 𝑚𝑖𝑥 , of the mixture is the volume average of the indices of the individual components 𝑖 (volume 𝑉 𝑖 ) of refractive index 𝑚 𝑖 , .4. Most components are non-absorbing at these wavelengths, except BC, dust, and (not shown here) other compounds such as brown carbon and metal oxides. An atmospheric aerosol often contains a mixture of absorbing and non-absorbing com- pounds (Section 2.2.2.2). The effective refractive index of this mixed aerosol depends on the Mie theory for individual particles can be integrated to define the bulk scattering, absorption and extinction coefficients (𝛼 𝑠 , 𝛼 𝑎 and 𝛼 𝑒𝑥𝑡 , in m -1 ) of an aerosol population of number size distribution 𝑑𝑁/𝑑𝐷. 𝛼 𝑠 (𝜆) = ∫︁ 𝐷𝑚𝑎𝑥 0 𝜎 𝑠 (𝜆, 𝐷) 𝑑𝑁 𝑑𝐷 𝑑𝐷 (2.39) 𝛼 𝑎 (𝜆) = ∫︁ 𝐷𝑚𝑎𝑥 0 𝜎 𝑎 (𝜆, 𝐷) 𝑑𝑁 𝑑𝐷 𝑑𝐷 (2.40) 𝛼 𝑒𝑥𝑡 (𝜆) = ∫︁ 𝐷𝑚𝑎𝑥 0 𝜎 𝑒𝑥𝑡 (𝜆, 𝐷) 𝑑𝑁 𝑑𝐷 𝑑𝐷 (2.41) Table 3 3 Year NO x NMVOC CO BC POA SO 2 NH 3 2010 52.1 1018 311 51.9 13.0 26.3 0.156 2050 54.3 937 325 54.3 13.6 26.5 0.163 (+4.3 %) (-7.9 %) (+4.7 %) (+4.7 %) (+4.7 %) (+0.78 %) (+4.23 %) .2 -Evolution of total Arctic gas flaring emissions (kton yr -1 ) between 2010 and 2050 in ECLIPSEv5 (CLE scenario). Table 3 . 3 3 -Total local Arctic (latitude > 60 ∘ N) shipping emissions (kton yr -1 ) in several recent shipping emission inventories. Inventory Year NO x NMVOC CO BC POA SO 2 (as NO 2 ) RCP8.5 2000 185.5 29.5 12.3 1.36 2.03 115 Corbett et al. (2010) 2004 95.7 N/A 9.1 0.431 1.84 66.5 HTAPv2 2010 146 7.61 14.2 0.346 6.46 88.6 RCP8.5 2010 206 33.8 14.1 1.56 2.33 130 Winther et al. (2014) 2012 225 8.57 27.4 1.18 2.49 66.4 Because of recent emission regulations and because of the better representativity of new AIS-based inventories, earlier shipping inventories cannot be expected to represent accurately current Arctic shipping. For this reason, the recent Winther et al. (2014) inventory is used in Chapter 6 to assess the impacts of Arctic-wide shipping emissions. Another AIS-based inventory, generated by the STEAM2 emission model (Jalkanen et al., 2012), is used in Chapter 5. STEAM2 emissions are based on high-resolution terrestrial AIS data, which are better suited than Winther et al. (2014) emissions for high-resolution WRF-Chem simulations and direct comparisons of WRF-Chem simulations with airborne measurements behind individual ships (as performed in Chapter 5). Table 3 3 .4 -Evolution of total local Arctic (latitude > 60 ∘ N) shipping emissions between 2010 and 2050 (kton yr -1 ), as estimated by Winther et al. (2014), Business As Usual (BAU) and High-Growth (HiG) scenarios. 
Year NO x NMVOC CO BC POA SO 2 (as NO 2 ) 2012 225 8.57 27.4 1.18 2.49 66.4 2050 BAU 179 10.6 17.0 1.31 2.04 29.1 (-20 %) (+24 %) (-28 %) (+11 %) (-18 %) (-56 %) 2050 HiG 215 12.7 20.4 1.56 2.44 33.9 (-4.4 %) (+49 %) (-26 %) (+32 %) (-2.2 %) (-49 %) There are several groundbased stations measuring ozone and aerosols in the Arctic, as part of the European Monitoring and Evaluation Programme (EMEP), the US Clean Air Sta- tus and Trends Network (CASTNET), and the World Meteorological Organization's Global Atmosphere Watch (WMO-GAW). Aerosol stations can measure the total PM 2.5 or PM 10 , or can include speciated measurements of e.g. NO -3 , SO 2-4 , NH + 4 , OA and BC. Long-term measurements of BC in the Arctic are derived from light absorption. BC concentrations derived from light absorption measurements, also called equivalent black carbon (EBC), are calculated using the relation 𝐸𝐵𝐶 Chapter 4. Transport of pollution from the mid-latitudes to the Arctic during POLARCAT-France 93 Table4.1 -Parameterizations and options used for the WRF-Chem simulations. Atmospheric process WRF-Chem option Planetary boundary layer MYJ (Janjić, 1994) Surface layer Monin-Obukhov Janjic Eta scheme (Janjić, 1994) Land surface Unified Noah land-surface model (Chen and Dudhia, 2001) Microphysics Morrison (Morrison et al., 2009) SW radiation Goddard (Chou and Suarez, 1999) LW radiation RRTM (Mlawer et al., 1997) Photolysis Fast-J (Wild et al., 2000) Cumulus parameterization Grell-3 (Grell and Dévényi, 2002) Gas-phase chemistry CBM-Z (Zaveri and Peters, 1999) Aerosol model MOSAIC eight bins (Zaveri et al., 2008) 4.2.3.3 Model calculations: WRF- Chem and FLEXPART-WRF 4.2.3.2 EMEP ground-based mea- 4.2.3.3.1 WRF-Chem surements Regional chemical transport model simulations are performed with the version 3.5.1 of the WRF- The EMEP network of ground-based measure- Chem (Weather Research and Forecasting, in- ments includes both aerosol PM 2.5 mass and cluding Chemistry) model to provide further in- aerosol chemical composition (available online sight into the POLARCAT-France spring aerosol from the EMEP database -http://www.nilu. measurements. WRF-Chem is a fully coupled, no/projects/ccc/). Stations from the EMEP online meteorological and chemical transport network are typically outside of urban centers mesoscale model and are intended to represent air free of recent pollution sources. We use the EMEP measure- ments of PM 2.5 , as well as chemical composition in SO = 4 , organic carbon (OC), black carbon (BC), NH + 4 , and NO -3 to evaluate model aerosols from 1 April to 11 April 2008, using data from stations with either daily or hourly data. Stations are excluded if they have less than 75 % data cover- age during this period, and OC and BC measure- ments are excluded because of the lack of spatial coverage of measurements (four stations for BC, five for OC). The locations of stations used for model comparison are shown in Fig. 4-1, includ- ing stations that measure PM 2.5 (33 stations) and stations that measure aerosol mass of SO = 4 , NH + 4 , and NO -3 (34, 31, and 28 stations, respectively). The average data coverage for selected stations is 98 %. Table 4 . 4 2 -Modeled PM 2.5 aerosol composition by source type along POLARCAT-France spring flights. BC, OC, and SS are black carbon, organic carbon, and sea salt, respectively. Flight Source type BC OC SO = 4 NH + 4 NO -3 SS (%) (%) (%) (%) (%) (%) 9 Apr 2008 Anthro. 2.5 7.0 24.1 20.6 40.2 5.6 Mixed fires + anthro. 3.2 12.6 35.0 20.1 26.0 3.2 10 Apr 2008 Anthro. 
2.3 5.5 21.7 20.9 42.4 7.3 11 Apr 2008 Anthro. 2.7 8.7 34.4 19.5 27.3 7.4 Mixed fires + anthro. 2.8 11.9 33.9 19.4 28.5 3.4 Table 4 4 Type of land surface DSRE at TOA (W m -2 ) Snow and ice cover > 90 % +0.58 Ocean -1.52 All -0.98 ropean Arctic and put our results into the con- text of other studies focusing on the same pe- riod in different locations within the Arctic. We summarize the other studies for comparison, but .3 -Four-day average DSRE at the TOA north of 60 ∘ N, over regions significantly influenced by European pollution (> 50 % of total PM 2.5 column due to in-domain anthropogenic and biomass burning emissions). Table 5 . 5 1 -Description of the ships sampled during the ACCESS flights on 11 and 12 July 2012. Ship name Vessel type Gross Fuel type tonnage (tons) Wilson Leer Cargo ship 2446 Marine gas oil Costa Deliziosa Passenger ship 92 720 Heavy fuel oil Wilson Nanjing Cargo ship 6118 Heavy fuel oil Alaed * Cargo ship 7579 Heavy fuel oil * Ship present in STEAM2, not targeted during the cam- paign.. Table 5 . 5 2 -Parameterizations and options used for the WRF and WRF-Chem simulations. Atmospheric process WRF-Chem option Planetary boundary layer MYNN (Nakanishi and Niino, 2006) Surface layer MM5 Similarity scheme, Carlson-Boland viscous sublayer (Zhang and Anthes, 1982; Carlson and Boland, 1978) Land surface Unified Noah land-surface model (Chen and Dudhia, 2001) Microphysics Morrison (Morrison et al., 2009) Shortwave radiation Goddard (Chou and Suarez, 1999) Longwave radiation RRTM (Mlawer et al., 1997) Cumulus parameterization Grell-3D (Grell and Dévényi, 2002) Photolysis Fast-J (Wild et al., 2000) Gas phase chemistry CBM-Z (Zaveri and Peters, 1999) Aerosol model MOSAIC 8 bins (Zaveri et al., 2008) Table 5 . 5 3 -Description of WRF and WRF-Chem simulations. Name Description Period Remarks MET WRF meteorological simulation, 4-25 July 2012 Nudged to FNL 15 km × 15 km resolution (d01) CTRL WRF-Chem simulation, HTAPv2 an- 4-25 July 2012 Nudged to FNL in the free tropo- thropogenic emissions, STEAM2 ship sphere only emissions, online MEGAN biogenic emissions, online DMS and sea salt emis- sions, 15 km × 15 km horizontal resolu- tion (d01) NOSHIPS CTRL without STEAM2 emissions, 4-25 July 2012 Nudged to FNL in the free tropo- 15 km × 15 km horizontal resolution sphere only (d01) CTRL3 CTRL setup and emissions, 3 km × 3 km 10-12 July 2012 Boundary conditions from CTRL horizontal resolution (d02) No nudging No cumulus parameterization NOSHIPS3 NOSHIPS setup and emissions, 10-12 July 2012 Boundary conditions from NO- 3 km × 3 km horizontal resolution SHIPS (d02) No nudging No cumulus parameterization estimation of marine boundary layer wind speeds (normalized mean bias = +38 %). Since wind speed is one of the most critical parameters in the FLEXPART-WRF simulations, we decided smaller ships. to drive FLEXPART-WRF with the MET sim- Emissions from STEAM2 are compared with ulation instead of using CTRL or CTRL3. In emissions derived from measurements for indi- the MET simulation, results are also nudged to vidual ships in Sect. 5.2.5. STEAM2 emissions FNL in the boundary layer in order to reproduce of CO, NO x , OC, BC (technically elemental car- wind speeds (normalized mean bias of +14 % on bon in STEAM2), sulfur oxides (SO 𝑥 ), SO 4 , and 11 July 2012). All CTRL, NOSHIPS, CTRL3, exhaust ashes are also used in the WRF-Chem NOSHIPS3 and MET simulations agree well with CTRL and CTRL3 simulations. 
SO 𝑥 are emitted meteorological measurements during the other as SO 2 in WRF-Chem, and NO x are emitted as ACCESS ship flights. 94 % NO, and 6 % NO 2 (EPA, 2000). VOC emis- sions are estimated from STEAM2 CO emissions 5.2.4.3 High-resolution ship emissions from STEAM2 using a bulk VOC / CO mass ratio of 53.15 %, the ratio used in the Arctic ship inventory from STEAM2 is a high-resolution, real-time bottom- up shipping emissions model based on AIS po- sitioning data (Jalkanen et al., 2012). STEAM2 calculates fuel consumption for each ship based on its speed, engine type, fuel type, vessel length, and propeller type. The model can also take into account the effect of waves, and distinguishes ships at berth, maneuvering ships, and cruising ships. Contributions from weather effects were not included in this study, however. The presence of AIS transmitters is mandatory for large ships (gross tonnage > 300 ton) and voluntary for Table 5 . 5 4 -NO x and SO 2 emissions estimated from FLEXPART-WRF and ACCESS measurements, compared with STEAM2 emissions. Values in parentheses indicate the relative difference between STEAM2 and calculated values. SO 2 emissions were not calculated for the Wilson Leer and Alaed since the measured SO 2 concentrations in the plumes were too low above background. The second value corresponds to STEAM2 calculations using complete technical data from the Costa Deliziosa sister ship Costa Luminosa. Ship name NO x calculated from measurements NO x from STEAM2 (kg day -1 ) from measurements SO 2 calculated SO 𝑥 from STEAM2 (kg day -1 ) (kg day -1 ) (kg day -1 ) Costa Deliziosa 2728 6767/5243 a (+148/+92 % a ) 2399 3285/2684 a (+37/+12 % a ) Wilson Leer 167/82 b 287 (+72/+250 % b ) NA 88 (NA) Wilson Nanjing 561 602 (+7 %) 504 219 (-57 %) Alaed 1362 1809 (+33 %) NA 1130 (NA) a b Value with outliers removed. Table 5 . 5 5 -July emission totals in northern Norway (60.6-73 ∘ N, 0 to 31 ∘ W) of NO x , SO 2 , BC, OC, and SO = 4 in different ship emission inventories. Inventory Year NO x (kt) SO 2 (kt) BC (t) OC (t) SO = 4 (t) STEAM2 2012 7.1 2.4 48.1 123.4 197.3 Winther et al. (2014) 2012 9.3 3.4 47.7 82.9 - Dalsøren et al. (2009) 2004 3.1 1.9 7.3 24.5 - Corbett et al. (2010) 2004 2.4 1.6 10.6 32.5 - Dalsøren et al. (2007) 2000 5.5 1.1 24. 479.3 - Local emissions from Arctic flares and Arctic ships are from the ECLIPSEv5 and[START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF] inventories, respectively. In 2050, additional Arctic "diversion shipping" emissions from[START_REF] Corbett | Arctic shipping emissions inventories and future scenarios[END_REF] are used. These simulations assume that diversion shipping occurs in July-November (July-August in these simulations ending on September Chapter 6. Current and future impacts of local Arctic sources of aerosols and ozone 151 , SO 2 and BC in Arctic shipping and Arctic flaring inventories in 2012 and 2050 are given in Table6.3. Arctic shipping mostly emits NO x and SO 2 , and flaring is an important source of BC. This Table shows that flaring emissions are relatively stable between 2012 and 2050 (in agreement with earlier findings byPeters et al., 2011), but that diversion shipping in summer 2050 causes a very strong increase in local emissions of BC, NO x and SO 2 . 
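To make the emission speciation quoted above concrete (SOx injected as SO2, NOx split into 94 % NO and 6 % NO2, VOC estimated as 53.15 % of CO), here is a minimal sketch of the mapping from STEAM2-style totals to the species used in the WRF-Chem runs. The function name and dictionary layout are hypothetical, and the NO/NO2 split is applied by mass because the text does not specify the convention; only the split values themselves come from the text.

```python
def speciate_ship_emissions(e):
    """Map bulk STEAM2-style emissions (e.g. kg/day) onto WRF-Chem species,
    following the splits quoted in the text: SOx -> SO2, NOx -> 94 % NO +
    6 % NO2 (mass split assumed), VOC = 0.5315 * CO."""
    return {
        "SO2": e["SOx"],
        "NO": 0.94 * e["NOx"],
        "NO2": 0.06 * e["NOx"],
        "VOC": 0.5315 * e["CO"],
        "BC": e["BC"],
        "OC": e["OC"],
        "SO4": e["SO4"],
    }

# NOx and SOx for the Wilson Nanjing are the STEAM2 values of Table 5.4;
# CO, BC, OC and SO4 here are made-up placeholders.
print(speciate_ship_emissions(
    {"SOx": 219.0, "NOx": 602.0, "CO": 100.0, "BC": 1.0, "OC": 2.0, "SO4": 3.0}))
```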
In summer 2050, SO2 emissions from Arctic shipping are higher than in 2012 because of this large increase in summertime marine traffic, although SO2 emissions per ship are lower due to expected reductions in emission factors (IMO and EU regulations).

Table 6.3 - Emission totals for Arctic local sources in spring (MAM) and summer (JJA) 2012 and 2050 (BC, NOx and SO2, in kton). Relative emission increases from 2012 to 2050 are given in parentheses.
2012, spring: Arctic ships: BC 0.210, NOx 28.4, SO2 10.5; Arctic flares: BC 12.7, NOx 8.75, SO2 6.49.
2012, summer: Arctic ships: BC 0.297, NOx 39.0, SO2 16.9; Arctic flares: BC 9.58, NOx 6.58, SO2 4.87.
2050, spring: Arctic ships: BC 0.325 (+55 %), NOx 31.1 (+9.5 %), SO2 7.22 (-31 %); Arctic flares: BC 13.4 (+5.5 %), NOx 9.20 (+5.1 %), SO2 6.55 (+0.92 %).
2050, summer: Arctic ships: BC 4.35 (+1400 %), NOx 445 (+1500 %), SO2 135 (+1200 %); Arctic flares: BC 10.1 (+5.4 %), NOx 6.91 (+5.0 %), SO2 4.92 (+1.0 %).

Land surface: Unified Noah (Chen and Dudhia, 2001); microphysics: Morrison (Morrison et al., 2009); SW radiation: RRTMG (Iacono et al., 2008); LW radiation: RRTMG (Iacono et al., 2008); cumulus parameterization: KF-CuP (Berg et al., 2015).

The model is run for 6 months, from March 1 to September 1, with an additional 15 days for model spin-up (February 15 to February 29, discarded for analysis). This period includes spring, when long-range pollution transport to the Arctic is relatively efficient, and summer, when local Arctic emissions from shipping are highest. The different simulations are presented in Table 6.2. All simulations use the setup presented above, and are forced by 2012 sea ice, SST and vegetation, as well as 2012 meteorological, chemical and stratospheric boundary conditions. The 2050 simulations also use 2012 biomass burning emissions from FINNv1.5. As a result, "2050" simulations only consider the impact of changing anthropogenic emissions in 2050, and do not estimate the effect of climate change and its consequences on e.g. sea ice, natural emissions, transport pathways, clouds and precipitation. In this chapter, "impacts in 2050" and similar expressions are used as a shorthand for "impacts of 2050 anthropogenic emissions". The emission inventories used in these simulations are presented in Chapter 3, Section 3.2. Diversion shipping is assumed to occur between July and November (July-August in these simulations, which end on September 1), in agreement with Corbett et al. (2010), and diversion shipping emissions are assumed to be equally divided between the NSR and the NWP routes. All future shipping emissions are based on worst-case "High-growth" projections (Corbett et al., 2010; Winther et al., 2014). The increase in Arctic shipping emissions is associated with a decrease in international shipping emissions elsewhere (partly diverted to the Arctic), whose benefits on air quality and climate are not presented here but are investigated in Fuglestvedt et al. (2014).

Table 6.2 - List of simulations.
2012_BASE: all 2012 emissions (ECLIPSEv5; Winther et al. (2014) Arctic shipping; RCP8.5 subarctic shipping; FINNv1.5 2012 biomass burning emissions) + natural (biogenic, sea salt, soil, lightning, dust, DMS) emissions.
2012_NOSHIPS: 2012_BASE, without shipping emissions north of 60°N.
2012_NOFLR: 2012_BASE, without gas flaring emissions north of 60°N.
2012_NOANTHRO: 2012_BASE, without anthropogenic emissions south of 60°N.
2012_NOFIRES: 2012_BASE, without biomass burning emissions.
2050_BASE: all 2050 emissions (ECLIPSEv5 CLE; Winther et al. (2014) High-growth Arctic shipping; Corbett et al. (2010) High-growth diversion Arctic shipping; RCP8.5 subarctic shipping) + FINNv1.5 2012 biomass burning emissions + natural 2012 emissions.
2050_NOSHIPS: 2050_BASE, without shipping emissions north of 60°N.
2050_NOFLR: 2050_BASE, without gas flaring emissions north of 60°N.
2050_NOANTHRO: 2050_BASE, without anthropogenic emissions south of 60°N.
2050_NOFIRES: 2050_BASE, without FINNv1.5 2012 biomass burning emissions.

6.3 Model updates for quasi-hemispheric Arctic simulations

Results from the case studies presented in Chapters 4 and 5 show that WRF-Chem is able to reproduce aerosol transport events from Europe to the Arctic, as well as aerosol and ozone pollution from local shipping in northern Norway.

Adding Arctic shipping emissions in simulations seems to reduce concentrations of several compounds at lower latitudes over North America and Russia: NOx (-20 to 0 %), SO2 (-20 to 0 %), SO4^2- (-10 to 0 %), NH4^+ (-10 to 0 %), and O3 (-10 to 0 %). These effects are however much smaller than the direct increase in surface pollution due to Arctic shipping, and these small localized concentration reductions away from emission regions could correspond to unphysical effects due to the internal variability in our model (see Sect. 6.5 for a discussion of model noise). The climate effects of shifting shipping activity from lower latitudes to the Arctic are discussed in Fuglestvedt et al. (2014). These results can also be compared with the regional simulations of Chapter 5 (Marelle et al., 2016): the contributions of Arctic shipping emissions to surface BC and SO4^2- are comparable between both simulations (here, 10-40 % for BC, 10-20 % for SO4^2-; in Chapter 5, 10-30 % for BC, 10-25 % for SO4^2-). However, the O3 enhancement over the Norwegian and Barents Seas is much stronger here (4 to 5 ppbv) than in previous simulations (1 to 1.5 ppbv in Roiger et al., 2015). In Figure 6-8, shipping O3 enhancements are also located quite far from the stronger shipping emission regions.
This suggests that simulation results presented in Chapter 5 could have been underestimating O 3 production from Arctic shipping emissions, since the limited simulation domain did not allow O 3 production further downwind from Northern Norway, or transport of Arctic shipping O 3 to Northern Norway from afar. This discrepancy could also be due, in part, to higher NO x emissions, here [START_REF] Winther | Emission inventories for ships in the arctic based on satellite sampled AIS data[END_REF] than in Chapter 5 (STEAM2). Overestimated O 3 production is also a known artifact of chemical-transport simulations run at low resolutions (Huszar et al., 2010;Vinken et al., 2011), due to the instant dilution of localized NO x emissions in large model grids. However, previous studies indicate that this dilution effect causes at most 1 to 2 ppbv overestimations in total Arctic O 3 (Vinken et al., 2011), which are significantly lower than the 3 to 8 ppbv enhancements in O 3 found here. There is also a good agreement between modeled and observed summer O 3 at Zeppelin (Svalbard, Norway), where shipping influence is high (Figure 6-3), which gives some confidence in these results.
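The "instant dilution" argument can be illustrated with a back-of-the-envelope calculation: the same hourly NOx release produces very different initial mixing ratios depending on the grid-cell size it is mixed into. The sketch below uses an emission rate of the order of the STEAM2 values in Table 5.4; the 50 m layer depth, the constant air density and the instantaneous-mixing assumption are simplifications of my own, so the numbers are only indicative.

```python
M_AIR, M_NO2 = 28.97, 46.0   # molar masses, g/mol
RHO_AIR = 1.2                # near-surface air density, kg/m3 (assumed constant)

def nox_ppbv(emission_kg_per_h, dx_km, layer_m=50.0, hours=1.0):
    """Instantaneous NOx mixing ratio (ppbv, expressed as NO2) if `hours` of
    emissions are mixed at once into one grid cell of width dx_km and depth layer_m."""
    volume = (dx_km * 1e3) ** 2 * layer_m          # cell volume, m3
    air_mass = RHO_AIR * volume                    # kg of air in the cell
    mass_ratio = emission_kg_per_h * hours / air_mass
    return mass_ratio * (M_AIR / M_NO2) * 1e9

# One hour of a ~600 kg/day NOx source (25 kg/h), coarse vs. fine grid
print(round(nox_ppbv(25.0, 100.0), 3))   # ~0.026 ppbv in a 100 km cell
print(round(nox_ppbv(25.0, 3.0), 1))     # ~29 ppbv in a 3 km cell
```

The three-orders-of-magnitude difference in initial NOx concentration is what pushes coarse-resolution chemistry toward net ozone production instead of the titration seen in concentrated plumes.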
338,075
[ "767320" ]
[ "88261" ]
01465880
en
[ "math" ]
2024/03/04 23:41:50
2016
https://theses.hal.science/tel-01465880v2/file/DDOC_T_2016_0135_HENRY.pdf
Keywords: Splitting trees, processus de branchement, processus de Crump-Mode-Jagers Splitting trees, branching processes, Crump-Mode-Jagers processes Dans cette thèse nous considérons une population branchante générale où les individus vivent et se reproduisent de manière i.i.d. La durée de vie de chaque individu est distribuée suivant une mesure de probabilité arbitraire et chacun d'eux donne naissance à taux exponentiel. L'arbre décrivant la dynamique de cette population est connu sous le nom de splitting tree. Ces arbres aléatoires ont été introduits par Geiger et Kersting en 1997. Dans un premier temps nous nous intéressons au processus stochastique qui compte le nombre d'individus vivants à un instant donné. Ce processus est connu sous le nom de processus de Crump-Mode-Jagers binaire homogène, et il est connu que ce processus, quand correctement renormalisé, converge presque sûrement en temps long vers une variable aléatoire (non dégénérée dans le cas surcritique). Grâce à l'étude du splitting tree sous-jacent à la population via les outils introduit par A. Lambert en 2010, nous montrons un théorème central limite pour cette convergence p.s. dans le cas surcritique. Dans un second temps, nous supposons que les individus subissent des mutations à taux exponentiel sous l'hypothèse d'infinité d'allèles. Cette procédure mène à une partition de la population à un instant donné par familles de même type. Nous nous intéressons alors au spectre de fréquence allélique de la population qui compte la fréquence des tailles de familles dans la population à un instant donnée. A l'aide d'un nouveau théorème permettant de calculer l'espérance de l'intégrale d'un processus stochastique contre un mesure aléatoire quand les deux objets présentent une structure particulière de dépendance, nous obtenons des formules pour calculer tout les moments joints du spectre de fréquence. En utilisant ces formules, et en adaptant la preuve de la première partie, nous obtenons également des théorèmes centraux limites en temps long pour le spectre de fréquence. Une dernière partie, indépendante des autres, s'intéresse à des questions statistiques sur des arbres de Galton-Watson conditionnés par leurs tailles. L'idée de base est que les processus de contours devraient être utilisés pour faire des statistiques sur des données hiérarchiques dans la mesure où ils ont déjà prouvés leur efficacité dans des cadres plus théoriques. Ce travail est un premier pas dans cette direction. Le but est ici d'estimer la variance de la loi de naissance rendue inaccessible par le conditionnement. On utilise le fait que le processus de contour d'un arbre de Galton-Watson conditionné converge vers une excursion Brownienne quand la taille de l'arbre grandit afin de construire des estimateurs de la variance à partir de forêts. On s'attache ensuite à étudier le comportement asymptotique de ces estimateurs. Dans une dernière partie, on illustre numériquement leurs comportements. Table des matières 3.2 The contour process of a Splitting tree is a Lévy process . . . . . . . . . . . . . . 3.2.1 The contour process of a finite tree . . . . . . . . . . . . . . . . . . . . . . 3.2.2 The law of the contour process of a splitting tree . . . . . . . . . . . . . . Introduction Cette thèse porte sur l'étude de certains d'objets aléatoires utilisés en dynamique et génétique des populations. La dynamique des populations s'intéresse essentiellement à l'étude des variations des effectifs d'individus dans une population au cours du temps. 
L'utilisation des mathématiques en dynamique des populations remonte au moins à 1826 lorsque Thomas Malthus les utilise dans son livre "Essay on the principle of population" [START_REF] Robert | An essay on the principle of population[END_REF]. Pour défendre sa thèse : "I said that a population, when unchecked, increased in a geometrical ratio", il introduit un modèle très simple : u n+1 = 2u n . Le terme croissance Matlhusienne viens de là. Quelques années plus tard, en 1838, à la suite des travaux de Malthus, Pierre François Verhulst introduit le modèle de croissance logistique [START_REF] François | Recherches mathématiques sur la loi d'accroissement de la population[END_REF] afin de prendre en compte les contraintes environnementales. Depuis, la variété des modèles et leurs complexités n'a cessé de croître, et en faire l'inventaire serait un travail qui dépasse le cadre de cette introduction. Une sophistication naturelle fut dès lors de prendre en compte l'aléa qui influe notamment dans de petites populations. Le modèle probabiliste le plus souvent cité en exemple est bien sûr le processus de Bienaymé-Galton-Watson [START_REF] Harris | The theory of branching processes[END_REF][START_REF] Athreya | Branching processes[END_REF] qui, bien qu'introduit il y a plus de 150 ans, est encore un sujet d'étude aujourd'hui. La génétique des populations, quant à elle, s'intéresse à l'apparition ou à la variation de la fréquence d'allèles au sein d'une population. Les modèles mathématiques pour la génétique des populations [START_REF] Ewens | Mathematical population genetics. I[END_REF][START_REF] Etheridge | Some mathematical models from population genetics[END_REF] adoptent un point de vue légèrement différent de ceux utilisés en dynamique des populations. En considérant souvent des populations de tailles fixées, ce domaine s'intéresse par exemple aux probabilités de fixation d'un allèle au sein de la population. On pourra penser au modèle de Wright-Fisher [START_REF] Wright | Evolution in mendelian populations[END_REF][START_REF] Ronald | The genetical theory of natural selection : a complete variorum edition[END_REF] dont le pendant en temps rétrograde, le modèle de Kingman [START_REF] Kingman | The coalescent[END_REF][START_REF] Kingman | On the genealogy of large populations[END_REF] a permis d'obtenir une expression explicite pour la loi du spectre de fréquence de la population échantillonnée connu sous le nom de formule d'échantillonage d'Ewens [START_REF] Ewens | The sampling theory of selectively neutral alleles[END_REF]. Ceci permet par exemple de construire des estimateurs pour le taux de mutation de la population. En biologie, le spectre de fréquence a également été utilisé pour détecter une sélection positive d'un gène dans une population en croissance [START_REF] Pardis | Detecting recent positive selection in the human genome from haplotype structure[END_REF][START_REF] Pardis | Genome-wide detection and characterization of positive selection in human populations[END_REF]. Dans cette thèse nous considérons un modèle plus sophistiqué que celui de Bienaymé-Galton-Watson afin de prendre en compte le temps et les durées de vies des individus. Nous considérons une population branchante générale. Comme dans le modèle de Galton-Watson, les individus vivent et se reproduisent de manière indépendante les uns des autres. Cependant, nous supposons que leurs durées de vies sont distribuées suivant une loi de probabilité fixée P V . 
Ensuite, chaque individu donne naissance à de nouveaux individus à taux fixé b durant sa vie, chaque nouvelle naissance donnant un unique nouvel individu (contrairement à d'autres modèles où des naissances simultanées sont possibles). En fonction des valeurs de b et de R + x P V (dx), il est connu [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF] que la population montre plusieurs régimes de croissance différents. Si b R + x P V (dx) = 1 (cas critique) ou b R + x P V (dx) < 1 (cas sous-critique) alors la population s'éteint presque sûrement. Si b R + x P V (dx) > 1 (cas surcritique), alors, avec un probabilité positive, la population ne s'éteindra jamais. De plus, en cas de non extinction, la croissance de la population se fait à vitesse exponentielle. Dans ce dernier cas, on peut montrer l'existence d'une constante α strictement positive, appelée paramètre Malthusien, correspondant au taux de croissance exponentiel de la population. Le modèle décrit plus haut est plus fin que celui de Galton-Watson (ou que son pendant Markovien en temps continu) car il prend en compte le vieillissement éventuel des individus. Il présente cependant deux défauts importants du point de vu biologique : -Les naissances sont Poissoniennes (donc "sans mémoire"). -Il n'y a pas d'interactions entre les individus. Une telle population peut naturellement être assimilée à un arbre dans lequel chaque branche représente un individu et dont la longueur représente la durée de vie de l'individu correspondant. Les branchements représentent alors les événements de naissances. L'arbre décrivant la dynamique de la population décrite plus haut est appelé un splitting tree [START_REF] Geiger | Depth-first search of random trees, and Poisson point processes[END_REF]. Comme souvent dans l'étude des arbres, il est commode de construire un opérateur inversible qui transforme l'arbre en une fonction réelle car ce sont des objets bien plus aisés à manipuler que les arbres. Les exemples les plus connus concernent les arbres de Galton-Watson (bien qu'ils soient définis pour n'importe quel arbre discret). On pourra penser au processus de Harris [START_REF] Pitman | Combinatorial stochastic processes[END_REF] ou encore à la marche de Łukasiewicz [START_REF] Flajolet | Analytic combinatorics[END_REF]. Dans le cas des splitting trees, il a été montré par A. Lambert en 2010 [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF] qu'il existe une transformation d'un splitting tree "fini" en un processus càdlàg possédant la très commode propriété d'être un processus de Lévy tué en zéro. Dans le cas d'un arbre infini, l'étude est permise par la troncature de l'arbre en deçà d'une date fixée. 
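Since the population model described here is fully specified by the lifetime distribution P_V and the birth rate b, it is straightforward to simulate. The sketch below (plain Python, standard library only) grows a splitting tree up to a time horizon T and evaluates the associated binary homogeneous Crump-Mode-Jagers process N_t; the exponential lifetime in the example is just one convenient choice of P_V, and the function names are mine, not the thesis notation.

```python
import random

def simulate_splitting_tree(b, sample_lifetime, T, rng=random.Random(0)):
    """Simulate a splitting tree up to time T: each individual lives an i.i.d.
    lifetime drawn by sample_lifetime(rng) and gives birth at the points of a
    rate-b Poisson process during its life (one child per birth event).
    Returns the (birth_time, death_time) pairs of all individuals born before T."""
    individuals = []
    stack = [(0.0, sample_lifetime(rng))]          # root individual, born at time 0
    while stack:
        birth, life = stack.pop()
        death = birth + life
        individuals.append((birth, death))
        t = birth
        while True:                                # Poisson birth times on (birth, death)
            t += rng.expovariate(b)
            if t >= death or t >= T:
                break
            stack.append((t, sample_lifetime(rng)))
    return individuals

def n_alive(individuals, t):
    """Binary homogeneous Crump-Mode-Jagers process N_t: individuals alive at time t."""
    return sum(1 for (birth, death) in individuals if birth <= t < death)

# Supercritical example: b = 2 and Exp(1) lifetimes, so b * E[V] = 2 > 1
pop = simulate_splitting_tree(2.0, lambda rng: rng.expovariate(1.0), T=6.0)
print([n_alive(pop, t) for t in range(7)])
```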
Ce point de vue a donné lieu à de nombreux travaux [START_REF] Dávila | Time reversal dualities for some random forests[END_REF][START_REF] Lambert | Splitting trees stopped when the first clock rings and Vervaat's transformation[END_REF][START_REF] Lambert | The coalescent in peripatric metapopulations[END_REF][START_REF] Lambert | The coalescent point process of branching trees[END_REF][START_REF] Lambert | Predicting the loss of phylogenetic diversity under nonstationary diversification models[END_REF], par exemple sur l'inférence ancestrale sous le modèle des splitting trees [START_REF] Lambert | The reconstructed tree in the lineage-based model of protracted speciation[END_REF][START_REF] Lambert | Phylogenetic analysis accounting for age-dependent death and sampling with applications to epidemics[END_REF]. D'autre travaux s'intéressent aux splitting trees avec mutations arrivant soit à la naissance des individus [START_REF] Richard | Processus de branchement non Markoviens et processus de Lévy[END_REF][START_REF] Richard | Splitting trees with neutral mutations at birth[END_REF][START_REF] Delaporte | Lévy processes with marked jumps I : Limit theorems[END_REF][START_REF] Delaporte | Lévy processes with marked jumps II : Application to a population model with mutations at birth[END_REF] soit de manière Poissonienne durant la vie des individus [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF][START_REF] Champagnat | Splitting trees with neutral Poissonian mutations II : Largest and oldest families[END_REF]. En particulier, cet outil s'est avéré utile dans l'étude du comportement asymptotique des processus de branchement construits à partir des splitting trees. Le plus simple d'entre eux est le processus (N t , t ∈ R + ) qui compte le nombre d'individus N t vivants dans l'arbre à l'instant t. Ce processus est connu sous le nom de processus de Crump-Mode-Jagers binaire homogène. Dans le cas surcritique, il a été montré [START_REF] Richard | Processus de branchement non Markoviens et processus de Lévy[END_REF] que la quantité N t E [N t | N t > 0] (1.1) converge presque sûrement quand t tend vers l'infini. M. Richard [START_REF] Richard | Limit theorems for supercritical age-dependent branching processes with neutral immigration[END_REF] a également étudié à l'aide de ces mêmes outils des processus branchement associés à des splitting trees avec immigration. Il montre notamment la convergence presque sûre du processus qui compte la population vivante à un instant t vers une variable aléatoire de loi gamma. De même, il regarde le comportement asymptotique des ratios des populations migrantes par rapport à la population totale sous divers modèles d'immigrations. Là encore, il obtient des lois des grands nombres. Bien que de nombreux résultats de convergence presque sûre aient été montrés, il semble qu'aucun théorème central limite associé à l'une de ces loi des grands nombres n'apparaisse dans la littérature. L'un des apports de cette thèse est un théorème central limite pour la convergence presque sûre du ratio (1.1). Une perspective pourrait être de regarder ces fluctuations dans le cadre des résultats de Mathieu Richard ou dans un cadre plus général. En effet, la preuve de ce théorème central limite semble pouvoir s'étendre à d'autres processus de branchements construit à partir de splitting trees. Dans cette thèse, nous étudions également un modèle avec mutations. 
Nous supposons que les individus vivants subissent des mutations à taux exponentiel θ sous l'hypothèse d'infinité d'allèles. Cette hypothèse suppose que chaque mutation remplace le type de l'individu touché par un type entièrement nouveau. Par ailleurs les types sont supposés se transmettre de parents à enfants. Ce mécanisme mène à une partition de la population par types. On note alors A(k, t) le nombre de familles (c'est-à-dire les ensembles d'individus partageant le même type) de taille k à l'instant t. La suite d'entiers (A(k, t)) k≥1 est appelée spectre de fréquence de la population au temps t. Cet objet, bien connu en biologie (c'est celui étudié par la formule d'échantillonnage d'Ewens [START_REF] Ewens | The sampling theory of selectively neutral alleles[END_REF]), a été introduit dans le cadre des splitting trees par N. Champagnat et A. Lambert dans [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF] où ils obtiennent une expression explicite pour E A(k, t)u Nt , ∀u ∈ (0, 1). Ils obtiennent ensuite la convergence presque sûre du spectre de fréquence quand correctement renormalisé. Dans un second travail [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations II : Largest and oldest families[END_REF], ils s'intéressent à la taille des plus grandes familles et à l'âge des plus anciennes. Mathieu Richard s'est également intéressé à des splitting trees avec mutations mais dans son modèle, il suppose que les mutations ont lieu à la naissance des individus. Il obtient le même type de résultats dans son cadre [START_REF] Richard | Splitting trees with neutral mutations at birth[END_REF]. Dans cette thèse nous obtenons des formules permettant de calculer tout les moments (joints ou non) du spectre de fréquence dans le modèle à mutations Poissoniennes. Ceci est fait à l'aide d'un nouveau théorème permettant de calculer l'espérance de l'intégrale d'un processus stochastique contre un mesure aléatoire quand les deux objets présentent une structure particulière de dépendance. Par ailleurs, ces formules nous permettent d'étendre la preuve du théorème central limite obtenu pour N t afin d'en obtenir un pour le spectre de fréquence. Dans cette thèse nous nous sommes également intéressé à un problème statistique sur des arbres de Galton-Watson conditionnés par leurs tailles. Les problèmes statistiques portant sur des données arborescentes sont en général délicats car l'espace dans lequel vivent ces objets est très grand. Ce travail repose sur l'idée que les processus de contours devraient être utilisés pour faire des statistiques sur ce type de donnée car ces outils ce sont déjà révélé efficace dans l'étude théorique des arbres aléatoires. Un arbre de Galton-Watson peut servir à décrire la généalogie d'une population. On suppose donnée une mesure de probabilité µ sur N de variance finie et on considère une population démarrant d'un unique individu. Puis on suppose que cet individu donne naissance à un nombre aléatoire d'enfants distribué selon µ. Alors chacun de ces enfants donne lui-même naissance à des nouveaux individus selon le même mécanisme indépendamment des autres. Notre but est d'estimer la variance de µ pour des arbres conditionnés par leurs tailles. Si le problème est simple dans le cadre des arbres de Galton-Watson non conditionnés [START_REF] Jagers | Branching processes with biological applications[END_REF], il est beaucoup plus délicat dans le cadre des arbres conditionnés. 
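At a fixed time t, the frequency spectrum (A(k, t), k >= 1) depends only on the partition of the living population into families carrying the same type, so it can be computed by a simple two-level count, as in the following minimal sketch (the list of types is an arbitrary illustration, not data from the thesis).

```python
from collections import Counter

def frequency_spectrum(types):
    """Allelic frequency spectrum under the infinite-alleles assumption:
    A(k) = number of families (sets of individuals sharing a type) of size k."""
    family_sizes = Counter(types).values()   # size of each family
    return Counter(family_sizes)             # how many families of each size

# Population of 7 individuals carrying 4 distinct types
print(frequency_spectrum(["a", "a", "b", "c", "c", "c", "d"]))
# Counter({1: 2, 2: 1, 3: 1}) -> two singleton families, one pair, one triple
```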
Indeed, the conditioning destroys the independence and the homogeneity (in law) of the random variables describing the numbers of children of the individuals. One can for instance show that it is not possible to estimate (from a single tree or from a forest) the mean of the law $\mu$ (an identifiability problem). Moreover, other results suggest that it is not possible to estimate $\sigma$ from a single conditioned tree (see for example [START_REF] Janson | Conditioned Galton-Watson trees do not grow[END_REF]). In this chapter, we construct estimators of $\sigma^{-1}$ from a forest $\mathcal{F} = (\tau_1,\dots,\tau_N)$ of independent trees such that each tree $\tau_i$ is a Galton-Watson tree conditioned to have $n_i$ nodes. In recent work [START_REF] Bharath | Inference for large tree-structured data[END_REF], the authors also seek to estimate $\sigma^{-1}$, but without using contour processes to build their estimators. The present document is divided into six chapters, each centred on a thematic unit. The first two chapters contain no original contributions; they are devoted to introductions to the tools used in the sequel. Chapter 4 introduces original tools used later on. Chapter 5 concerns the study of binary homogeneous Crump-Mode-Jagers processes. Chapter 6 deals with the allelic frequency spectrum of a splitting tree with neutral Poissonian mutations. The first section of Chapter 4 together with Chapter 5 come from the preprint [START_REF] Henry | Clts for general branching processes related to splitting trees[END_REF]. The last two sections of Chapter 4 together with Chapter 6 come from the publication [START_REF] Champagnat | Moments of a splitting tree with neutral poissonian mutations[END_REF] written in collaboration with Nicolas Champagnat. The last chapter is concerned with statistical questions on conditioned Galton-Watson trees; it is joint work with Romain Azaïs (Nancy) and Alexandre Genadot (Bordeaux).
Chapter 2
This chapter contains no original contribution. It is a pedagogical introduction to the fluctuation theory of Lévy processes without negative jumps. A Lévy process is defined as a càdlàg real-valued process $(Y_t, t\in\mathbb{R}_+)$ such that, for any sequence of times $0\le t_1 < t_2 < \dots < t_n$, the increments $Y_{t_2}-Y_{t_1},\dots,Y_{t_n}-Y_{t_{n-1}}$ are independent and stationary. The chapter is concerned with the behaviour of such a process when it exits a bounded interval. More precisely, given two real numbers $a<b$, we set
\[
\tau_b^+ = \inf\{t\ge0 \mid Y_t > b\} \quad\text{and}\quad \tau_a^- = \inf\{t\ge0 \mid Y_t < a\}.
\]
The final goal is to arrive at expressions as explicit as possible for quantities of the type $\mathbb{P}_x\left(\tau_a^- < \tau_b^+\right)$ and for the law of the pair $(Y_{\tau_b^+ -}, Y_{\tau_b^+})$ (since $Y$ may exit the interval $(a,b)$ by a jump). Finally, a short part at the end of the chapter recalls some classical results of renewal theory used in this thesis.
Chapter 3
This chapter contains no original contribution. In the same spirit as the previous one, its purpose is to introduce important tools used in this thesis. We first present splitting trees. As indicated in the previous section, the interest of this class of trees lies in the fact that they model a biological population in a rather general way. For us, these trees are of particular interest because the branching processes studied in this thesis can be written as functionals of these trees.
For instance, if $\mathbb{T}$ is a splitting tree and $N_t$ is the number of individuals alive in the tree at time $t$, then the stochastic process $(N_t, t\in\mathbb{R}_+)$ is a binary homogeneous Crump-Mode-Jagers process. The study of these processes is the object of Chapter 5. However, splitting trees are not the essential tool introduced in this chapter. As is often the case with trees, it is more convenient to transform them into objects that are simpler to handle while retaining all the information contained in the tree; this is, for instance, what the contour process does for Galton-Watson trees. In the setting of splitting trees, there also exists a contour process, introduced by A. Lambert [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]. When the population almost surely becomes extinct, this contour has the particularity of being a Lévy process killed at 0 whose Laplace exponent is given by
\[
\psi(\lambda) = \lambda - \int_{\mathbb{R}_+}\left(1-e^{-\lambda x}\right) b\,\mathbb{P}_V(\mathrm{d}x). \qquad (1.2)
\]
When the population does not become extinct, the study is carried out by considering the truncation of the tree below a fixed date. Another important object introduced in Chapter 4 is the coalescent point process (CPP). Its relation to splitting trees is similar to the relation between Kingman's coalescent and the Wright-Fisher model, in the sense that it describes the genealogical relations between the individuals alive in the tree at a given time.
Chapter 4
Chapter 4 is devoted to preliminary results which are thematically rather disconnected from the following chapters and which present, in our view, an interest of their own justifying a separate chapter. It is divided into three parts. The first part concerns the asymptotic behaviour of the scale function $W$ of the contour of a supercritical splitting tree, that is, of a Lévy process without negative jumps whose Laplace exponent is given by (1.2). The scale function is a function arising in the study of the fluctuations of the Lévy process [START_REF] Kyprianou | Fluctuations of Lévy processes with applications[END_REF]; it is characterized by its Laplace transform:
\[
\int_{\mathbb{R}_+} W(s)\,e^{-\beta s}\,\mathrm{d}s = \frac{1}{\psi(\beta)}.
\]
More precisely, in [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations II : Largest and oldest families[END_REF], N. Champagnat and A. Lambert show, in the supercritical case, the existence of a positive constant $\gamma$ such that
\[
e^{-\alpha t}\psi'(\alpha)W(t) - 1 = O\!\left(e^{-\gamma t}\right).
\]
The aim of this part is to obtain finer estimates on this $O\!\left(e^{-\gamma t}\right)$. More precisely, we prove the following result.
Proposition 1.3.1 (Asymptotic behaviour of W). There exists a positive, decreasing, càdlàg function $F$ such that
\[
W(t) = \frac{e^{\alpha t}}{\psi'(\alpha)} - e^{\alpha t}F(t), \quad t\ge0,
\]
satisfying
\[
\lim_{t\to\infty} e^{\alpha t}F(t) = \begin{cases} \dfrac{1}{b\,\mathbb{E}V-1} & \text{if } \mathbb{E}V<\infty,\\[4pt] 0 & \text{otherwise.}\end{cases}
\]
The proof of this result is based on the fact that the function $W$ can be rewritten in terms of the potential measure of the ascending ladder subordinator of a slight modification of our Lévy process. In our particular case, explicit computations concerning the law of this subordinator are possible, which allows a more precise study of the function $W$. This result is fundamental in order to prove the results of Chapter 5.
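As a sanity check, not carried out in the text, the statement of Proposition 1.3.1 can be verified by hand in the Markovian special case of a linear birth-death process, where births occur at rate $b$ and lifetimes are exponential with parameter $d$, $0<d<b$, so that $\mathbb{E}V = 1/d$ and $\alpha = b-d$:
\[
\psi(\lambda)=\lambda-\int_0^{\infty}\bigl(1-e^{-\lambda x}\bigr)\,b\,d\,e^{-dx}\,\mathrm{d}x
=\frac{\lambda(\lambda+d-b)}{\lambda+d},
\qquad \psi'(\alpha)=\frac{b-d}{b},
\]
\[
\frac{1}{\psi(\beta)}=\frac{d/(d-b)}{\beta}+\frac{b/(b-d)}{\beta-(b-d)}
\quad\Longrightarrow\quad
W(t)=\frac{b\,e^{(b-d)t}-d}{b-d}
=\frac{e^{\alpha t}}{\psi'(\alpha)}-\frac{d}{b-d},
\]
so that $F(t)=\frac{d}{b-d}\,e^{-\alpha t}$ and $e^{\alpha t}F(t)=\frac{d}{b-d}=\frac{1}{b\,\mathbb{E}V-1}$, in agreement with the stated limit.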
The second part is concerned with the computation of the expectation of an integral of the type
\[
\int_{\mathcal{X}} X_s\, N(\mathrm{d}s), \qquad (1.3)
\]
where $\mathcal{X}$ is a Polish space, $(X_s, s\in\mathcal{X})$ is a continuous process (or a càdlàg one when $\mathcal{X}$ is, for instance, $\mathbb{R}_+$), and $N$ is a random measure. Of course, the result is obvious when $N$ and $(X_s, s\in\mathcal{X})$ are independent. Our goal is to obtain a theorem allowing this expectation to be computed when $(X_s, s\in\mathcal{X})$ and $N$ exhibit a particular dependence structure. The theorems we obtain have very important applications in Chapter 6 of this thesis. For instance, we obtain the following results.
Theorem 1.3.2. Let $X$ be a continuous stochastic process from $\mathcal{X}$ to $\mathbb{R}_+$. Let $N$ be a random measure on $\mathcal{X}$ with finite intensity $\mu$. Suppose that $X$ is locally independent of $N$, that is, for every $x\in\mathcal{X}$ there exists a neighbourhood $V_x$ of $x$ such that $X_x$ is independent of $N(V_x\cap\bullet)$. Suppose moreover that there exists an integrable random variable $Y$ such that $|X_x|\le Y$ for all $x\in\mathcal{X}$ a.s. and $\mathbb{E}\left[Y N(\mathcal{X})\right]<\infty$. Then
\[
\mathbb{E}\left[\int_{\mathcal{X}} X_x\,N(\mathrm{d}x)\right] = \int_{\mathcal{X}} \mathbb{E}\left[X_x\right]\mu(\mathrm{d}x).
\]
In the càdlàg case, we also obtain the following.
Theorem 1.3.3. Let $X$ be a stochastic process from $[0,T]\times\mathcal{X}$ to $\mathbb{R}_+$ such that $X_{\cdot,x}$ is càdlàg for every $x$ and $X_{s,\cdot}$ is continuous for every $s$. Let $N$ be a random measure on $[0,T]\times\mathcal{X}$ with finite intensity $\mu$. If, for every $s$ in $[0,T]$, the family $(X_{s,x}, x\in\mathcal{X})$ is independent of the restriction of $N$ to $[0,s]$, and if there exists an integrable random variable $Y$ such that $|X_{s,x}|\le Y$ for all $x\in\mathcal{X}$ and $s\in[0,T]$ a.s. and $\mathbb{E}\left[YN(\mathcal{X})\right]<\infty$, then
\[
\mathbb{E}\left[\int_{[0,T]\times\mathcal{X}} X_{s,x}\,N(\mathrm{d}s,\mathrm{d}x)\right] = \int_{[0,T]\times\mathcal{X}} \mathbb{E}\left[X_{s,x}\right]\mu(\mathrm{d}s,\mathrm{d}x).
\]
This result is used in Chapter 6, where it allows the moments of the frequency spectrum to be studied. The idea behind these theorems is the following. To a random measure $N$ on $\mathcal{X}$, one can associate a family of probability measures $(\mathbb{P}_x)_{x\in\mathcal{X}}$; the measure $\mathbb{P}_x$ is called the Palm measure of $N$ at $x$. Campbell's formula [START_REF] Daley | An introduction to the theory of point processes[END_REF] expresses the mean of the integral (1.3) in terms of the intensity of $N$ and of the mean of $X$ under $\mathbb{P}_x$:
\[
\mathbb{E}\left[\int_{\mathcal{X}} X_x\,N(\mathrm{d}x)\right] = \int_{\mathcal{X}} \mathbb{E}_{\mathbb{P}_x}\left[X_x\right]\mu(\mathrm{d}x),
\]
where $\mathbb{E}_{\mathbb{P}_x}$ denotes the expectation under $\mathbb{P}_x$. In the setting of point measures, one can think of $\mathbb{P}_x$ as $\mathbb{P}$ conditioned on $N$ having an atom at $x$, namely $\mathbb{P}(\bullet \mid N(\{x\})>0)$. Hence, if $X$ satisfies the hypotheses of Theorem 1.3.2, its law under $\mathbb{P}_x$ is the same as under $\mathbb{P}$ and the theorem follows. However, it is not possible to give a rigorous meaning to the conditioning $\mathbb{P}(\bullet\mid N(\{x\})>0)$: for a Poisson point measure, for instance, this would require $\mu$ to have an atom at $x$, which is already very restrictive. The last part concerns the introduction of a new construction of the coalescent point process. The coalescent point process (CPP) is the coalescent process associated with the splitting-tree model, just as Kingman's coalescent is associated with the Wright-Fisher model. The CPP represents the genealogical relations between the lineages of the individuals alive at a fixed time $t$ in the splitting tree; we then speak of a CPP stopped at time $t$. It has been shown [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF] that it can be defined as a sequence $(H_i)_{i\ge0}$ of random variables such that $H_0 = t$ and the family $(H_i)_{i\ge1}$ is i.i.d. with law given by
\[
\mathbb{P}\left(H_i > s\right) = \frac{1}{W(s)}, \quad s\ge0,
\]
the sequence being stopped at the first $H_i > t$.
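A minimal simulation sketch of this construction is given below. It assumes, for concreteness, the linear birth-death special case (birth rate b, death rate d, b > d), for which the scale function W has the explicit form used in the code; the parameter values and function names are illustrative choices, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(0)
b, d, t = 2.0, 1.0, 3.0               # birth rate, death rate, observation time (b > d)

def W(s):
    # Scale function of the supercritical linear birth-death case (assumed special case).
    return (b * np.exp((b - d) * s) - d) / (b - d)

def sample_H():
    # P(H > s) = 1/W(s): inverse-transform sampling, solving W(H) = 1/U for H.
    u = rng.uniform()
    return np.log((d + (b - d) / u) / b) / (b - d)

def sample_cpp(t):
    """Coalescence depths (H_1, H_2, ...) of a CPP stopped at time t;
    the number of individuals alive at time t is len(depths) + 1."""
    depths = []
    while True:
        h = sample_H()
        if h > t:
            return depths
        depths.append(h)

sizes = np.array([len(sample_cpp(t)) + 1 for _ in range(10000)])
print("mean size:", sizes.mean(), " expected W(t):", W(t))
```

Since each draw exceeds t with probability 1/W(t), the resulting population size is geometric with parameter 1/W(t), which the printed empirical mean checks against W(t).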
Each random variable $H_i$ is then associated with an individual, and the first coalescence of this lineage is assumed to take place at depth $H_i$ in the past, with the lineage of the individual $j$ satisfying (see Figure 1.1)
\[
j = \max\{k < i \mid H_k > H_i\}.
\]
The last part of Chapter 4 is devoted to the following result, which gives a construction of a CPP by grafting independent CPPs onto another CPP (see Figure 1.2).
Proposition 1.3.4. Let $\left(P^{(i)}\right)_{i\ge1}$ be an i.i.d. sequence of CPPs with scale function $W$ stopped at time $a$, and let $\left(N_a^i\right)_{i\ge1}$ be their respective sizes. Let $\hat P$ be a CPP, independent of the preceding family, with scale function
\[
\hat W(t) := \frac{W(t+a)}{W(a)},
\]
stopped at time $t-a$, and let $\hat N_{t-a}$ be its size. Set $S_0 := 0$ and $S_i := \sum_{j=1}^{i} N_a^j$ for all $i\ge1$. Then the random vector $\left(H_k,\ 0\le k\le S_{\hat N_{t-a}}-1\right)$ defined, for every $k\ge0$, by
\[
H_k = \begin{cases} P^{(i+1)}_{k-S_i} & \text{if there exists } i\ge0 \text{ such that } S_i < k < S_{i+1},\\ \hat P_i + a & \text{if there exists } i\ge0 \text{ such that } k = S_i,\end{cases}
\]
is a coalescent point process with scale function $W$ stopped at time $t$.
Chapter 5
Chapter 5 is devoted to the binary homogeneous Crump-Mode-Jagers process $(N_t, t\in\mathbb{R}_+)$ counting the population alive in a supercritical splitting tree. In this setting,
\[
\frac{N_t}{W(t)} \xrightarrow[t\to\infty]{} \mathcal{E}, \quad \text{almost surely and in } L^2,
\]
and, conditionally on non-extinction, $\mathcal{E}$ follows an exponential distribution with parameter 1. The proof of this result can be found in Mathieu Richard's thesis [82, Proposition 2.1]; it relies on an almost sure convergence criterion for general Crump-Mode-Jagers processes established by O. Nerman [START_REF] Nerman | On the convergence of supercritical general (C-M-J) branching processes[END_REF] in the 1980s. In this chapter we give a new, elementary proof of the almost sure convergence of $\frac{N_t}{W(t)}$; this proof was published in [START_REF] Champagnat | Moments of a splitting tree with neutral poissonian mutations[END_REF]. The essential contribution of this chapter concerns the study of the fluctuations in the convergence established by the preceding theorem. More precisely, we establish the following central limit theorem, which appeared in the preprint [START_REF] Henry | Clts for general branching processes related to splitting trees[END_REF].
Theorem 1.4.2. In the supercritical case ($\alpha>0$), conditionally on non-extinction, the quantity
\[
\sqrt{W(t)}\left(\frac{N_t}{W(t)} - \mathcal{E}\right)
\]
converges in law, as $t$ tends to infinity, to a Laplace distribution with zero mean and variance $2-\psi'(\alpha)$.
To the best of our knowledge, this is the first time that a central limit theorem is established for a general Crump-Mode-Jagers process, whereas laws of large numbers for these processes are the object of numerous works. The proof of this theorem rests on the following ideas:
- a decomposition of $N_t$ as the sum of the contributions of the lineages of the individuals alive at an earlier time;
- a control of the dependencies between these lineages;
- an explicit expression for the mean squared error $\mathbb{E}\left[\left(\frac{N_t}{W(t)}-\mathcal{E}\right)^2\right]$ obtained through renewal methods;
- a fine control of errors of the type $\mathbb{E}\left[\left(\frac{N_t}{W(t)}-\mathcal{E}\right)^n\right]$ ($n=1,2,3$) thanks to precise estimates on the function $W(t)$.
Moreover, since the proof of this theorem is rather flexible, it can be extended to more involved branching processes counted with random characteristics, as soon as the underlying tree is a splitting tree. In Chapter 6, we use this method to prove central limit theorems of this type for such processes in the particular case of the frequency spectrum.
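The following Monte Carlo sketch illustrates the law of large numbers above in the Markovian special case of a linear birth-death process; the Markovian assumption, the rates, the horizon and the sample size are choices made here only for the purpose of the example.

```python
import numpy as np

rng = np.random.default_rng(1)
b, d, t_max = 2.0, 1.0, 5.0
alpha = b - d

def W(s):
    return (b * np.exp(alpha * s) - d) / alpha

def simulate_Nt(t_max, cap=20000):
    """Gillespie simulation of a linear birth-death process started from one individual;
    returns the population size at time t_max."""
    n, t = 1, 0.0
    while n > 0:
        t += rng.exponential(1.0 / ((b + d) * n))
        if t > t_max or n > cap:
            return n
        n += 1 if rng.uniform() < b / (b + d) else -1
    return 0

ratios = np.array([simulate_Nt(t_max) for _ in range(3000)]) / W(t_max)
surviving = ratios[ratios > 0]
print("P(survival) ~", len(surviving) / len(ratios), " (theory for large t: alpha/b =", alpha / b, ")")
print("mean of N_t/W(t) given survival ~", surviving.mean(), " (theory for large t: ~1)")
```

Conditioned on survival up to the horizon, the empirical mean of N_t/W(t) should be close to 1, in line with the exponential(1) limit.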
Chapter 6
In this chapter, we consider that our population also undergoes mutations. Mutations are assumed to occur in a Poissonian manner at rate $\theta$, independently from one individual to another. We further assume that each new mutation replaces the type of the individual it affects by a completely new type (infinite-alleles hypothesis). Moreover, types are assumed to be transmitted from parents to children. Finally, mutations are assumed to have no influence on the genealogy of the population (neutral mutations). This mutation mechanism leads to a partition of the population alive at a time $t$ into families carrying the same type (or allele). Our aim is to study the frequencies of the family sizes. More precisely, we denote by $A(k,t)$ the number of families of size $k$ at time $t$. The sequence of integers $A(1,t), A(2,t),\dots$ is called the frequency spectrum of the population alive at time $t$. In the study of this frequency spectrum, an important role is played by the clonal family. This family is defined, at a time $t$, as the set of the individuals alive at that time carrying the type carried by the ancestor at time 0. We denote by $Z_0(t)$ the number of clonal individuals at time $t$. This object was thoroughly studied by N. Champagnat and A. Lambert in [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF][START_REF] Champagnat | Splitting trees with neutral Poissonian mutations II : Largest and oldest families[END_REF]. To study this quantity, one idea is to consider the so-called clonal splitting tree, in which individuals are killed as soon as they undergo a mutation. In this way, the law of $(Z_0(t), t\in\mathbb{R}_+)$ in a splitting tree with mutations is the law of the process counting the population in a clonal splitting tree. It is easy to see that the lifetime of an individual in the clonal splitting tree is distributed as the minimum of an exponential random variable $E$ with parameter $\theta$ and a random variable $V$ with law $\mathbb{P}_V$. We then denote by $W_\theta$ the scale function associated with the clonal splitting tree. When $\theta>\alpha$, the tree is said to be clonal subcritical, meaning that the ancestral family eventually becomes extinct almost surely; accordingly, when $\alpha=\theta$ we speak of the clonal critical case, and when $\theta<\alpha$ of the clonal supercritical case (in which the clonal family survives with positive probability). This chapter is divided into two main parts. The first one studies the moments of the frequency spectrum through a new representation of the spectrum in integral form. More precisely, we formalize the occurrence of mutations by means of a Poisson random measure $\mathcal{N}$ on the tree, so that
\[
A(k,t) = \int_{[0,t]\times\mathbb{N}} \mathbb{1}_{B_{i,k}(a)}\,\mathcal{N}(\mathrm{d}a,\mathrm{d}i),
\]
where $B_{i,k}(a)$ is the event that the mutation carried by individual $i$ at time $t-a$ has a clonal descent of size exactly $k$ at time $t$. Combined with Theorem 1.3.3, this representation yields
\[
\mathbb{E}\left[A(k,t)\mid N_t>0\right] = W(t)\int_0^t \frac{e^{-\theta a}}{W_\theta(a)^2}\left(1-\frac{1}{W_\theta(a)}\right)^{k-1}\mathrm{d}a.
\]
Better still, it allows recursive formulas to be obtained for all the moments of the frequency spectrum. For instance, we obtain
\[
\mathbb{E}\left[A(k,t)^n \mid N_t>0\right] = \mathbb{E}\left[\int_0^t \theta\,N^{(t)}_{t-a} \sum_{n_1+\dots+n_{N^{(t)}_{t-a}}=n-1} \mathbb{E}\left[A(k,a)^{n_1}\mathbb{1}_{Z_0(a)=k}\mid N_a>0\right]\prod_{m=2}^{N^{(t)}_{t-a}} \mathbb{E}\left[A(k,a)^{n_m}\mid N_a>0\right]\mathrm{d}a\right],
\]
where $N^{(t)}_{t-a}$ follows a geometric distribution with parameter $\frac{W(a)}{W(t+a)}$. The proof of this result rests on the same ideas as the computation of $\mathbb{E}\left[A(k,t)\mid N_t>0\right]$. Indeed, one can for instance show that
\[
A(k,t)^2 = \int_{[0,t]\times\mathbb{N}} \mathbb{1}_{B_{i,k}(a)} \sum_{n=1}^{N^{(t)}_{t-a}} A^{(n)}(k,a)\,\mathcal{N}(\mathrm{d}a,\mathrm{d}i),
\]
where the $A^{(n)}(k,a)$ denote copies of $A(k,a)$ associated with the subtrees carried by the individuals alive at time $t-a$. This leads to considering joint moments of the form $\mathbb{E}\left[\prod_{i=1}^{N} A(k_i,t)^{n_i}\mid N_t>0\right]$, as well as the same moments restricted by an indicator on the value of $Z_0(t)$, which makes it possible to close the system of formulas.
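The expectation formula above is straightforward to evaluate numerically. The sketch below does so in the linear birth-death case, where lifetimes are exponential with parameter d and the clonal lifetime is therefore exponential with parameter d + theta; the closed form used for the scale functions holds only in that assumed special case, and all names and parameter values are illustrative.

```python
import numpy as np

b, d, theta = 2.0, 1.0, 1.5          # birth rate, death rate, mutation rate (illustrative)

def scale(s, death):
    # Scale function of a birth-death contour with exponential(death) lifetimes.
    a = b - death
    return (b * np.exp(a * s) - death) / a

def expected_spectrum(k, t, n_grid=20000):
    """E[A(k,t) | N_t > 0] = W(t) * int_0^t e^{-theta a} / W_theta(a)^2
       * (1 - 1/W_theta(a))^(k-1) da, evaluated by the trapezoidal rule."""
    a = np.linspace(0.0, t, n_grid)
    w_clonal = scale(a, d + theta)   # clonal lifetime: min(Exp(d), Exp(theta)) ~ Exp(d + theta)
    integrand = np.exp(-theta * a) / w_clonal**2 * (1.0 - 1.0 / w_clonal) ** (k - 1)
    da = a[1] - a[0]
    integral = da * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return scale(t, d) * integral

for k in (1, 2, 5):
    print(k, expected_spectrum(k, t=4.0))
```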
The study of these moments is the object of the publication [START_REF] Champagnat | Moments of a splitting tree with neutral poissonian mutations[END_REF] in collaboration with Nicolas Champagnat. The second part of the chapter is concerned with the long-time behaviour of the frequency spectrum. First, our moment formulas allow us to give an elementary proof of the law of large numbers for the frequency spectrum, which is originally due to N. Champagnat and A. Lambert.
Theorem 1.5.2. In the supercritical case ($\alpha>0$), for every positive integer $k$,
\[
\frac{A(k,t)}{W(t)} \longrightarrow c_k\,\mathcal{E} \quad\text{almost surely as } t\to\infty, \qquad\text{with } c_k = \int_0^{\infty} \frac{e^{-\theta a}}{W_\theta(a)^2}\left(1-\frac{1}{W_\theta(a)}\right)^{k-1}\mathrm{d}a.
\]
Finally, our moment formulas allow us to extend the proof of the central limit theorem of the previous chapter in order to obtain the same type of results for the frequency spectrum. For instance, we obtain the following theorem.
Theorem 1.5.3. Suppose that $\theta>\alpha>0$ and $\int_{[0,\infty)} e^{(\theta-\alpha)v}\,\mathbb{P}_V(\mathrm{d}v) > 1$. Then, conditionally on non-extinction, the following convergence in law holds:
\[
\left(e^{-\alpha\frac{t}{2}}\left(\psi'(\alpha)A(k,t) - e^{\alpha t}c_k\,\mathcal{E}\right)\right)_{k\in\mathbb{N}^*} \xrightarrow[t\to\infty]{(d)} \mathcal{L}(0,K),
\]
where $\mathcal{L}(0,K)$ is the infinite-dimensional Laplace distribution with covariance $K$ and zero mean.
We also obtain another result, potentially more interesting for applications, which allows the frequency spectrum to be approximated (in the long run) by a fraction of the total population.
Theorem 1.5.4. If $\theta>\alpha$, then, conditionally on non-extinction, the following convergence in law holds:
\[
\left(\psi'(\alpha)\,e^{-\alpha\frac{t}{2}}\left(A(k,t) - c_k N_t\right)\right)_{k\in\mathbb{N}^*} \xrightarrow[t\to\infty]{(d)} \mathcal{L}(0,M).
\]
The last chapter is devoted to statistical questions on Galton-Watson trees conditioned by their sizes [START_REF] Devroye | Simulating size-constrained galton-watson trees[END_REF][START_REF] Janson | Simply generated trees, conditioned Galton-Watson trees, random allocations and condensation[END_REF]. For instance, this particular model has recently been studied with applications in oncology [START_REF] Bharath | Inference for large tree-structured data[END_REF]. Our estimators are based on the fit between the contours of the trees of the forest and their mean limiting contour (Harris path). Let $\tau(n)$ be a Galton-Watson tree conditioned to have $n$ nodes. Denoting by $H[\tau(n)](t)$ the contour process (Harris path) of $\tau(n)$, it is well known (see [START_REF] Aldous | The Continuum Random Tree III[END_REF]) that, suitably rescaled, $H[\tau(n)]$ converges in law to a Brownian excursion scaled by a factor proportional to $\frac{1}{\sigma}$.
Theorem 1.6.1 (Aldous, 1991). As $n$ tends to infinity,
\[
\left(\frac{H[\tau(n)](2nt)}{\sqrt{n}},\ t\in[0,1]\right) \xrightarrow{(d)} \left(\frac{2}{\sigma}\,\mathbf{e}_t,\ t\in[0,1]\right)
\]
in $C([0,1],\mathbb{R})$, where $(\mathbf{e}_t,\ t\in[0,1])$ is a normalized Brownian excursion.
In this work we introduce two estimators. The first one, $\lambda_{ls}$, is based on the $L^2$-fit of the contour of the forest (the concatenation of the contours of the trees) with the mean limiting contour. More precisely, $\lambda_{ls}$ is defined by
\[
\lambda_{ls} = \operatorname*{argmin}_{\lambda\in\mathbb{R}_+} \left\| H[\mathcal{F}](\cdot) - \lambda\,\overline{H} \right\|_{L^2([0,N])}^2,
\]
with
\[
H[\mathcal{F}](t) = \sum_{i=1}^{N} \frac{1}{\sqrt{n_i}}\,H[\tau_i]\bigl(2n_i(t-i+1)\bigr)\,\mathbb{1}_{[i-1,i)}(t) \quad\text{and}\quad \overline{H}(t) = \mathbb{E}\left[\mathbf{e}_{t-\lfloor t\rfloor}\right], \qquad 0\le t\le N,
\]
where $\lfloor\cdot\rfloor$ denotes the integer part. A second estimator is constructed as follows. For each tree $\tau_i$ of the forest, consider the quantity
\[
\lambda[\tau_i] = \frac{\left\langle H[\tau_i](\cdot),\ \mathbb{E}[\mathbf{e}_\bullet]\right\rangle_{L^2([0,1])}}{\left\|\mathbb{E}[\mathbf{e}_\bullet]\right\|_{L^2([0,1])}},
\]
where $\langle\cdot,\cdot\rangle_{L^2([0,1])}$ is the scalar product in $L^2([0,1])$. In this way, $\lambda[\tau_i]$ is the projection of $H[\tau_i]$ onto the subspace of $L^2([0,1])$ spanned by $\mathbb{E}[\mathbf{e}_\bullet]$. This quantity measures the fit of the contour of $\tau_i$ with its mean limiting contour.
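As an illustration of Theorem 1.6.1, the sketch below simulates Poisson(1) Galton-Watson trees conditioned on their size, by rejection and cyclic rotation in the spirit of [START_REF] Devroye | Simulating size-constrained galton-watson trees[END_REF], and compares the scaled height of the node visited halfway through the depth-first exploration with its predicted limit. The height process is used here as a stand-in for the Harris path (both have the same Brownian-excursion limit under this scaling), and the closed form E[e_t] = sqrt(8/pi) * sqrt(t(1-t)) for the excursion marginal, as well as all parameter values, are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def conditioned_gw_offspring(n):
    """Depth-first offspring sequence of a Poisson(1) Galton-Watson tree
    conditioned on having n nodes (rejection on the total, then cycle lemma)."""
    while True:
        x = rng.poisson(1.0, size=n)
        if x.sum() == n - 1:
            break
    s = np.cumsum(x - 1)              # Lukasiewicz-type path, ends at -1
    k = int(np.argmin(s))             # rotate just after the first minimum
    return np.concatenate([x[k + 1:], x[:k + 1]])

def heights(offspring):
    """Heights of the nodes in depth-first order."""
    h, stack = [], []
    for c in offspring:
        h.append(len(stack))
        if c > 0:
            stack.append(c)
        else:
            while stack and stack[-1] == 1:
                stack.pop()
            if stack:
                stack[-1] -= 1
    return np.array(h)

n, reps = 400, 1000
mid = np.array([heights(conditioned_gw_offspring(n))[n // 2] for _ in range(reps)]) / np.sqrt(n)
# For Poisson(1) offspring, sigma = 1; the scaled height at time 1/2 should be close in mean
# to (2/sigma) * E[e_{1/2}] = 2*sqrt(2/pi), under the assumed excursion marginal formula.
print(mid.mean(), 2 * np.sqrt(2 / np.pi))
```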
The estimator is then constructed as the parameter $\lambda$ minimizing the discrepancy (in the Wasserstein sense) between the law expected in the limit, namely that of the random variable
\[
\lambda\,\frac{\left\langle \mathbf{e}_\bullet,\ \mathbb{E}[\mathbf{e}_\bullet]\right\rangle_{L^2([0,1])}}{\left\|\mathbb{E}[\mathbf{e}_\bullet]\right\|_{L^2([0,1])}} =: \lambda\Lambda_\infty,
\]
and the empirical measure $\widehat{\mathbb{P}} = \frac{1}{N}\sum_{i}\delta_{\lambda[\tau_i]}$. More precisely,
\[
\lambda_W = \operatorname*{argmin}_{\lambda>0}\ d_W\!\left(\mathbb{P}_{\lambda\Lambda_\infty},\ \widehat{\mathbb{P}}\right),
\]
where $\mathbb{P}_{\lambda\Lambda_\infty}$ is the law of $\lambda\Lambda_\infty$ and $d_W$ is the $L^2$ Wasserstein distance defined, for any probability measures $\mu$ and $\nu$ on $\mathbb{R}$, by
\[
d_W(\mu,\nu) = \inf\left\{ \int_{\mathbb{R}^2} |x-y|^2\,\gamma(\mathrm{d}x,\mathrm{d}y)\ \middle|\ \gamma\in\mathcal{M}_1(\mathbb{R}^2),\ \pi_1\gamma=\mu,\ \pi_2\gamma=\nu\right\},
\]
where $\mathcal{M}_1(\mathbb{R}^2)$ is the set of probability measures on $\mathbb{R}^2$, $\pi_i$ is the projection onto the $i$-th coordinate, and $\pi_i\gamma$ is the push-forward measure of $\gamma$ by $\pi_i$. The main theoretical results on our estimators are the absence of asymptotic bias together with an almost sure convergence of the following type.
Theorem 1.6.2. Let $(u_n)_{n\ge1}$ be a sequence of integers and $\mathcal{F} = (\tau_n)_{n\ge1}$ an infinite family of conditioned Galton-Watson trees with respective sizes $u_n$ and common birth distribution $\mu$. Then, for all $\varepsilon>0$, there exists $A\in\mathbb{N}$ such that
\[
\min_{n\ge1} u_n > A \ \Longrightarrow\ \mathbb{P}\left(\limsup_{N\to\infty}\left|\lambda_\bullet[\mathcal{F}_N] - \sigma^{-1}\right| < \varepsilon\right) = 1,
\]
where $\lambda_\bullet[\mathcal{F}_N]$ may be either $\lambda_{ls}[\mathcal{F}_N]$ or $\lambda_W[\mathcal{F}_N]$.
The first difficulty in obtaining these results is to show that the random variable $\Lambda_\infty$ has a density with respect to the Lebesgue measure; this is done using Malliavin calculus. We then rely on the theory of Bernstein-Kantorovich operators (which appear naturally in the computations) and on standard methods for transport distances. The last part of the chapter is devoted to numerical tests of our estimators and to the comparison of our results with competing estimators [START_REF] Bharath | Inference for large tree-structured data[END_REF].
Chapitre 2
Preliminaries I: Fluctuation of Lévy processes in a nutshell
The purpose of this chapter is to introduce the fluctuation theory of Lévy processes. Our motivation is that, as shown in Chapter 3, the contour process of a splitting tree (which describes our population dynamics) is almost a Lévy process with no negative jumps. Since this process carries much information on the underlying tree, many properties of splitting trees can be deduced thanks to the tools provided by the theory of Lévy processes. The theory of Lévy processes is rich, and the reader may object that a regular nutshell may not be large enough to contain a complete account of the fluctuations of Lévy processes. This is true, and the present chapter is not designed to be an exhaustive or fully rigorous treatment of this theory. Our goal is rather to give an intuitive treatment of it: the chapter is designed to go as straightforwardly as possible to the fluctuation identities used in the sequel of this manuscript. That is why most of the proofs are only sketched and many technical difficulties, which are not of core importance, are evaded. The following text is based on two excellent references by Jean Bertoin [START_REF] Bertoin | Lévy processes[END_REF] and Andreas E. Kyprianou [START_REF] Kyprianou | Fluctuations of Lévy processes with applications[END_REF]. We refer the readers interested in a full and rigorous treatment of this theory to these two books. Section 2.1 is devoted to recalling some elementary properties of Poisson random measures. Such measures naturally appear when working with Lévy processes. The results recalled in Section 2.1 play a central role in the other sections of this chapter, in particular the useful compensation formula.
Section 2.2 recalls basic facts on Lévy processes which are essential to go further. Section 2.3 explains the link between fluctuations of Lévy processes and excursions of Markov processes. Section 2.4 is a quick introduction to the theory of the excursions of Markov processes. This theory was developed by Itô in his famous work [START_REF] Itô | Poisson point processes and their application to Markov processes[END_REF][START_REF] Itô | Poisson point processes attached to Markov processes[END_REF]. His approach proved fruitful in many domains of probability. See for instance [START_REF] Pitman | Itô's excursion theory and its applications[END_REF] for applications to the study of Brownian motion and its functionals, or [START_REF] Gall | Itô's excursion theory and random trees[END_REF] for applications to the study of scaling limits of random trees. The interested reader can take a look at [START_REF] Watanabe | Itô's theory of excursion point processes and its developments[END_REF] or [START_REF] Blumenthal | Excursions of Markov processes. Probability and its Applications[END_REF] for introductions to the subject. Section 2.5 introduces the main tools used in order to study fluctuations of Lévy processes: the so-called ascending and descending ladder processes. Their study is another example of application of Itô's theory. The celebrated Wiener-Hopf factorization is the main result of Section 2.5. It allows the law of the ladder processes to be expressed in terms of the law of the underlying Lévy process. This result goes back to [START_REF] Greenwood | Fluctuation identities for Lévy processes and splitting at the maximum[END_REF]. Section 2.6 presents the main fluctuation identities used in the other chapters of this manuscript and shows how the ladder processes can be used to solve fluctuation problems. The last part, Section 2.7, is quite independent of the rest of the chapter and is devoted to a quick reminder on renewal theory, which is used in this thesis.
Some results on Poisson random measures
In this section we present two important results on Poisson random measures. The first one provides a way to characterize whether a random measure is Poissonian or not. The second is the celebrated compensation formula, which allows the expectation of an integral with respect to a Poisson random measure to be computed. We recall that a random measure is simply a random variable taking values in some measure space. Our first interest in such objects comes from the fact that, in the second part of the manuscript, we use them to model the mutation mechanism in a biological population. In the sequel, $(E,\mathcal{E},\eta)$ refers to a measure space such that $\eta$ is $\sigma$-finite. For a measurable subset $A$ and a random measure $N$, $N(A)$ is a real-valued random variable. In the sequel, $\sigma(N(A\cap\bullet))$ refers to the $\sigma$-field generated by the restriction of $N$ to the subset $A$. One can easily show that
\[
\sigma\left(N(A\cap\bullet)\right) = \sigma\left(\{N(A\cap B) \mid B\in\mathcal{E}\}\right).
\]
The interested reader can find a very good introduction to random measures in [START_REF] Daley | An introduction to the theory of point processes[END_REF]. Important examples of random measures are Poisson random measures. Such measures naturally appear in the theory of Lévy processes, which is the main subject of this chapter. Let us recall the definition. Definition 2.1.1.
A Poisson random measure on $E$ with intensity $\eta$ is a random measure $N$ satisfying:
- for any measurable set $A$ of $\mathcal{E}$, $N(A)$ has a Poisson distribution with parameter $\eta(A)$;
- for any disjoint measurable sets $A_1$ and $A_2$, the random variables $N(A_1)$ and $N(A_2)$ are independent.
The interested reader can find in [62] (Chapter 2) a quick introduction to Poisson random measures. A more exhaustive reference is [38]. We begin by recalling some useful basic results. These can be found in [62], Theorem 2.7.
Lemma 2.1.2. Let $N$ be a Poisson random measure on $E$ with intensity $\eta$. Let $f$ be a real-valued measurable function on $E$. Then the integral $\int_E f(x)\,N(\mathrm{d}x)$ is almost surely finite if and only if $\int_E 1\wedge|f(x)|\,\eta(\mathrm{d}x)$ is finite. In addition, if $f$ is positive, we have, for any positive real number $\lambda$,
\[
\mathbb{E}\left[e^{-\lambda\int_E f(x)\,N(\mathrm{d}x)}\right] = \exp\left(-\int_E\left(1-e^{-\lambda f(x)}\right)\eta(\mathrm{d}x)\right). \qquad (2.1)
\]
The next result provides a tool to show that a given random measure is Poissonian. This result plays an important role in the theory of excursions of Markov processes.
Proposition 2.1.3 (Poisson process characterization of space-time Poisson random measures). Let $N$ be a random measure on $\mathbb{R}_+\times E$. Let $\mathcal{F}_t$ be the $\sigma$-field generated by $N([0,t]\times E\cap\bullet)$. Then $N$ is a Poisson random measure if and only if the family of counting processes $(N^A, A\in\mathcal{E})$, defined by
\[
N^A_t = N([0,t]\times A), \quad \forall t\in\mathbb{R}_+,\ \forall A\in\mathcal{E},
\]
satisfies:
- for any measurable set $A$, $\left(N^A_t, t\in\mathbb{R}_+\right)$ is a Poisson process which is Markovian with respect to $(\mathcal{F}_t)_{t\in\mathbb{R}_+}$;
- for any two disjoint measurable sets $B_1$ and $B_2$, the processes $\left(N^{B_1}_t, t\in\mathbb{R}_+\right)$ and $\left(N^{B_2}_t, t\in\mathbb{R}_+\right)$ never jump simultaneously (almost surely).
The proof of this proposition relies on the fact that Poisson processes such as those above are independent. A statement of this fact can be found in [21], Section O.4, and a proof can be found in [START_REF] Revuz | Continuous Martingales and Brownian Motion. Grundlehren der mathematischen Wissenchaften A series of comprehensive studies in mathematics[END_REF], Section XII.1. To end this section, we recall, as stated in [62], the celebrated compensation formula for functionals of Poisson random measures. This formula appears to be extremely important in the theory of Lévy processes.
Theorem 2.1.4. Let $N$ be a Poisson random measure on $\mathbb{R}_+\times\mathbb{R}$ with intensity $\mathrm{d}s\,\eta(\mathrm{d}x)$, and let $\varphi : \mathbb{R}_+\times\mathbb{R}\times\Omega \to \mathbb{R}_+$ be a measurable function such that:
- for any $t\ge0$, $(\omega,x)\mapsto\varphi(t,x,\omega)$ is measurable with respect to $\sigma(N([0,t]\times\mathbb{R}\cap\bullet))\otimes\mathcal{B}(\mathbb{R})$;
- for any $x\in\mathbb{R}$, $t\mapsto\varphi(t,x,\omega)$ is left continuous for $\mathbb{P}$-almost all $\omega$.
Then, for any positive real number $t$,
\[
\mathbb{E}\left[\int_{[0,t]\times\mathbb{R}}\varphi(s,x)\,N(\mathrm{d}s,\mathrm{d}x)\right] = \int_{[0,t]\times\mathbb{R}}\mathbb{E}\left[\varphi(s,x)\right]\mathrm{d}s\,\eta(\mathrm{d}x). \qquad (2.2)
\]
In Chapter 4, we prove Theorem 4.2.2, which might be seen as an extension of the compensation formula to arbitrary random measures under, as expected, more restrictive hypotheses on $\varphi$. The compensation formula can then be obtained as a simple corollary of this result. Unfortunately, in this corollary, a.s. continuity of $x\mapsto\varphi(t,x)$ is required for all fixed $t$. In that sense, Theorem 4.2.2 is not a generalisation of the compensation formula.
A quick reminder on Lévy processes
Before going further, let us recall some basic facts about Lévy processes.
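A quick numerical sanity check of formula (2.1) can be carried out as follows; this is an illustration only, and the intensity, test function and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
c, lam, n_rep = 3.0, 0.7, 100000      # intensity c * Lebesgue on [0,1], Laplace parameter, replicates

def f(x):
    return x ** 2                      # any non-negative measurable test function

# Left-hand side of (2.1): E[exp(-lam * integral of f against the Poisson random measure)].
counts = rng.poisson(c, size=n_rep)                       # N([0,1]) ~ Poisson(c)
lhs = np.mean([np.exp(-lam * f(rng.uniform(0, 1, k)).sum()) for k in counts])

# Right-hand side: exp(-int_0^1 (1 - e^{-lam f(x)}) c dx), evaluated on a grid.
x = np.linspace(0, 1, 10001)
rhs = np.exp(-c * np.mean(1 - np.exp(-lam * f(x))))
print(lhs, rhs)
```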
Here, we say that a process $(X_t, t\in\mathbb{R}_+)$ is a Lévy process if it is a càdlàg random process with stationary and independent increments. From this definition, it is easily seen that the law of $X_t$ (for any positive $t$) is infinitely divisible (i.e. it can be written, for every $n$, as the $n$th convolution power of some probability measure). It is also easy to see that the function $\lambda\mapsto\mathbb{E}_0\left[e^{i\lambda X_1}\right]$ characterizes the law of the process.
Remark 2.2.1. In the sequel, we use the convention that $\mathbb{P}$ refers to the measure associated to the process started from 0. In any other case, it is denoted $\mathbb{P}_x$.
Since the law of $X_1$ is infinitely divisible, the Lévy-Khintchine representation theorem for infinitely divisible distributions tells us that there exists a triple $(a,\sigma,\Pi)$, where $a$ is a real number, $\sigma$ a positive real number, and $\Pi$ a measure supported by $\mathbb{R}\setminus\{0\}$ satisfying $\int_{\mathbb{R}}\left(1\wedge x^2\right)\Pi(\mathrm{d}x)<\infty$, such that
\[
\mathbb{E}\left[e^{i\lambda X_t}\right] = e^{-t\Psi(\lambda)}, \quad \forall t\in\mathbb{R}_+,
\]
with
\[
\Psi(\lambda) = i\lambda a + \frac{\sigma^2\lambda^2}{2} + \int_{|r|\ge1}\left(1-e^{i\lambda r}\right)\Pi(\mathrm{d}r) + \int_{|r|<1}\left(1-e^{i\lambda r}+i\lambda r\right)\Pi(\mathrm{d}r). \qquad (2.3)
\]
$\Psi$ is called the characteristic exponent of $X$. The interested reader can find a statement of this theorem in [62] (Theorem 1.3), and a proof in [START_REF] Sato | Lévy processes and infinitely divisible distributions[END_REF]. There exists a more precise and powerful result. Indeed, the celebrated Lévy-Itô decomposition theorem gives an interpretation of the triple $(a,\sigma,\Pi)$ in terms of the paths of the process $X$. More precisely, it can be shown that the law of such a Lévy process is the law of the sum of three simpler independent Lévy processes $X^{(1)}$, $X^{(2)}$ and $X^{(3)}$, where
- $X^{(1)}$ is a drifted Brownian motion with drift $a$ and diffusion coefficient $\sigma$,
- $X^{(2)}$ is a compound Poisson process with rate $\Pi\left(\mathbb{R}\setminus(-1,1)\right)$ and jump law given by the probability measure $\dfrac{\Pi\left(\bullet\cap\mathbb{R}\setminus(-1,1)\right)}{\Pi\left(\mathbb{R}\setminus(-1,1)\right)}$,
- $X^{(3)}$ is a square integrable martingale.
Each of the three terms in (2.3) corresponds to the characteristic exponent of one of these three processes. The Lévy-Itô decomposition is the subject of Chapter 2 in [62]. In the particular case where $X$ is spectrally positive, meaning that the process never experiences negative jumps, the exponential moments $\mathbb{E}\left[e^{-\beta X_1}\right]$ are finite for all positive $\beta$. In such a case, one can consider the so-called Laplace exponent of the process, denoted by $\psi$ here, and defined by
\[
\psi(\beta) = \log\mathbb{E}\left[e^{-\beta X_1}\right], \quad \forall\beta\ge0.
\]
For this, we refer to [62], Section 3.3.
Fluctuations
Figure 2.1 - The Lévy process X and its excursions (in colours) below its running maxima.
Figure 2.2 - The reflected process Y.
The idea that allows one to handle fluctuation problems is to decompose the path of $X$ in terms of its excursions below its running maximum and above its running minimum (see Figure 2.1). Let $\overline{X}$ be the running supremum of $X$, i.e. the process defined by
\[
\overline{X}_t = \sup_{s\in[0,t]} X_s, \quad \forall t\in\mathbb{R}_+.
\]
It appears that this process remains constant as soon as $X$ experiences an excursion below its maximum. Hence, the process $Y$ defined by
\[
Y_t = \overline{X}_t - X_t, \quad \forall t\in\mathbb{R}_+,
\]
only contains the information about these excursions (see Figure 2.2). The key fact is that this process remains Markovian when $X$ is Lévy (see Proposition IV.1 in [21]).
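Returning to the spectrally positive case, the sketch below simulates the process X_t = -t + (compound Poisson with rate b and exponential jumps), that is, a Lévy process whose Laplace exponent has the form (1.2), and checks psi(beta) = log E[e^{-beta X_1}] numerically. The exponential jump law and all parameter values are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(4)
b, d, beta = 2.0, 1.0, 0.8            # jump rate, Exp(d) jump sizes, test point beta

def X1(n):
    """n samples of X_1 for X_t = -t + compound Poisson(rate b, Exp(d) jumps)."""
    k = rng.poisson(b, size=n)
    return -1.0 + np.array([rng.exponential(1.0 / d, size=ki).sum() for ki in k])

psi_beta = beta - b * beta / (beta + d)   # psi(beta) = beta - int (1 - e^{-beta x}) b d e^{-d x} dx
print(np.log(np.mean(np.exp(-beta * X1(100000)))), psi_beta)
```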
From this, it follows that studying the excursions of $X$ below its running maximum boils down to studying the excursions of $Y$ away from 0 (see Figures 2.1 and 2.2).
Excursions of a Markov process away from zero
In this section, we consider a Markov process $Y$ (which plays the role of our reflected process), and we are interested in studying its excursions away from 0. Usually, such a study begins with a discussion of the behaviour of the process around 0. Since such a discussion leads to technicalities which are not central here, we completely avoid this question; we refer the reader to [21], in particular Section IV.1, for a treatment of these problems. In the sequel, we denote by $(\mathcal{G}_t)_{t\in\mathbb{R}_+}$ the natural filtration of $Y$, and $\theta_t$ denotes the canonical shift operator for random processes. The first step is to be able to quantify the time spent by the process at 0 up to a time $t$. However, in many cases, one cannot quantify it through the Lebesgue measure, since the set of times spent at 0 by the process is likely to have Lebesgue measure 0 (think of Brownian motion, for instance). It follows that we need to measure the time spent at 0 through a different time scale, which is called "local time". This takes the form of a random process.
Definition 2.4.1. A local time at 0 for $Y$ is a random process $L = (L_t, t\in\mathbb{R}_+)$ such that:
(i) $L$ is continuous, non-decreasing, adapted to $(\mathcal{G}_t)_{t\in\mathbb{R}_+}$, and increases only on the set $\{t\ge0 \mid Y_t = 0\}$;
(ii) for every $(\mathcal{G}_t)$-stopping time $T$ such that $Y_T = 0$ on $\{T<\infty\}$,
\[
\mathcal{L}_{\mathbb{P}(\bullet\mid T<\infty)}\left(Y\circ\theta_T,\ L\circ\theta_T - L_T\right) = \mathcal{L}_{\mathbb{P}}\left((Y,L)\right),
\]
and, moreover, conditionally on $\{T<\infty\}$, $(Y\circ\theta_T,\ L\circ\theta_T - L_T)$ is independent of $\mathcal{G}_T$.
We do not prove the existence of such a process here; the interested reader can find a proof in [21], Section IV.2, using martingale methods. With this idea in mind, we could roughly say that $L_t$ (for a positive real number $t$) is the amount of local time spent by $Y$ at 0 up to $t$ regular time. We can now introduce the first quantity of great importance, which is the right inverse of $L$,
\[
L^{-1}_s = \inf\{t\ge0 \mid L_t > s\}. \qquad (2.4)
\]
$L^{-1}$ is also called the ascending ladder time process associated to $X$. In the same manner as $L$, $L^{-1}_s$ can be interpreted as the quantity of regular time spent up to $s$ local time. In other words, if someone wants to get $s$ local time, he needs to wait $L^{-1}_s$ regular time, that is, $L_{L^{-1}_s} = s$. An important property of this process is the following: let $t_1 < t_2$ be two positive real numbers and suppose that $Y$ experiences an excursion between these two times, that is, $Y_{t_1} = Y_{t_2} = 0$ and $Y_t > 0$ for all $t\in(t_1,t_2)$. Then $L$ is constant on $(t_1,t_2)$. It follows that the jumps of the inverse local time $L^{-1}$ correspond exactly to the excursions of the reflected process $Y$, and their amplitudes are the durations of the corresponding excursions (see Figure 2.4).
Figure 2.4 - The inverse local time L^{-1}.
Of course, such remarks would be pointless if the law of the inverse local time were intractable. But it appears that $L^{-1}$ is a nice process.
Proposition 2.4.2. The inverse local time $L^{-1}$ is a (possibly killed) non-decreasing Lévy process (i.e. a subordinator). Hence, it can be written as
\[
L^{-1}_s = \begin{cases} ds + \displaystyle\int_{[0,s]\times\mathbb{R}_+} x\,J(\mathrm{d}v,\mathrm{d}x) & \text{if } s\le E,\\ \infty & \text{otherwise,}\end{cases}
\]
where $J$ is a Poisson random measure with some intensity $J(\mathrm{d}x)\mathrm{d}s$, $d$ is a positive real number, and $E$ is an independent exponential random variable.
Remark 2.4.3 (Killing). The parameter of the random variable $E$ is related to the probability of occurrence of an infinite excursion. Indeed, if an infinite excursion occurs, then the local time $L$ remains constant from the beginning of this excursion onwards. This implies that $L^{-1}$ jumps to infinity. In such a case, $L^{-1}$ is said to be killed.
Actually, E is the time of the first infinite excursion of Y . In the case where the probability of occurrence of an infinite excursion is zero, then L -1 is a (unkilled) subordinator which simply writes L -1 s = ds + [0,s]×R + x J (ds, dx), ∀s ∈ R + . Sketch of proof of Proposition 2.4.2. The proof simply gets back to the definition of a Lévy process in terms of its increments. We need to show that they are homogeneous in law and independents. We assume that L -1 s < ∞ a.s. This means that the random variable E in the above statement equals infinity almost surely. The converse may happen when 0 is transient for Y which means that the local time never reaches s. In this case, L -1 is a killed subordinator and {∞} is the cemetery state. First note that it follows from (2.4) of the inverse local time that L -1 u is the hitting time of (u, ∞) by L. In particular, L -1 u is a stopping time with respect to the natural filtration of Y . Now, using point (ii) of Definition 2.4.1, we have that L L -1 s +t -s, t ≥ 0 d = (L t , t ≥ 0) , ∀s ∈ R + . Since, the right inverse of the process in the l.h.s. of the last equality is given by L -1 u+s -L -1 s , u ∈ R + , it follows that L -1 u+s -L -1 s d = L -1 u , which gives the homogeneity of the increments. The independence is deduced from the fact that L L -1 u +t -s, t ≥ 0 is independent from G L -1 u (Definition 2.4.1, point (ii)). Remark 2.4.4 (Laplace exponent of a killed subordinator). The Laplace exponent of a killed subordinator takes a particular form. Indeed, assume that (Z t , t ∈ R + ) is a subordinator with triple (a, 0, Π). This implies that its Laplace exponent is given by γ(β) = -aβ - R + 1 -e -βx Π(dx), ∀β ∈ R + . A first remark is that γ(0) equals 0 in any case. Now, let e p be an independent exponential random variable with parameter p. The Laplace exponent of Z killed at rate p is given by γ κ (β) = -log E e -βZ 1 1 ep>1 = -p + γ(β). Now, -γ κ (0) = p which is the death rate of the killed version of Z. We can now make some remarks on L -1 . First, the drift part corresponds to the case where Y remains in 0 on a set of positive measure, in which case the local time is passing at a proportional speed w.r.t. to the regular time. On another side, the jump measure J already allows to get some informations about the excursions of Y . For instance, J ([0, s] × (a, ∞)) is the number of excursions with duration greater than a and its law is Poissonian (Definition 2.4.2). Hence, using that P (J ([0, s] × (a, ∞)) = 0) = e -sJ((a,∞)) , we have that the time of first excursion longer than a is exponentially distributed (which is not really a surprise). Another interesting example is the following : let S a be the (local) time of first excursion with duration greater than a. Hence, L -1 Sa-is the time (in the usual time scale) where this excursion begins. It follows that the number of excursions with duration greater than b < a before the first excursion with length greater than a is given by J ((0, S a ) × (b, a)). Now, since S a is measurable with respect to the σ-field generated by J ({R + × (a, ∞)} ∩ •), S a is independent from J ((0, s) × (b, a)), for all s > 0. This implies that P (J ((0, S a ) × (b, a)) = k) = R + J ((a, ∞)) e -sJ((a,∞)) P (J ((0, s) × (b, a)) = k) ds = J ((a, ∞)) J ((b, ∞)) 1 - J ((a, ∞)) J ((b, ∞)) k . (2.5) The next step is to show that the excursions themselves arrive according to a point process in some function space. 
First, note that the law of the path Y t+L -1 Sa- , t ∈ R + defines a probability measure, say η a , on the space E (a) of the excursions with length greater than a. More precisely, E (a) = {f ∈ D[0, ∞) | f (0) = 0, ∀t ∈ (0, a) f (t) = 0} , endowed with the Skorohod topology of D[0, ∞). Now, if one wants to define a measure on the whole space of excursions ∪ a>0 E (a) using the family (η a ) a>0 , he only needs some compatibility conditions (similar to those of Kolmogorov theorem). More precisely, for each η a , its restriction to a subspace E (b) needs to agree with η b . This family of measure does not satisfy this condition. However, a slight modification of this family, ηa := J ((a, ∞)) η a , ∀a ∈ R + , does the trick. Indeed, let a > b and A ∈ B E (a) , then η b (A) = P Y t+L -1 S b - , t ∈ R + ∈ A, L -1 S b -L -1 S b -> a = P Y t+L -1 Sa- , t ∈ R + ∈ A, L -1 s -L -1 s-< b, ∀s ∈ (0, S a ) . = P Y t+L -1 Sa- , t ∈ R + ∈ A, J ((0, S a ) × (b, ∞)) = 0 . But the event L -1 s -L -1 s-< b, ∀s ∈ (0, S a ) belongs to G L -1 Sa- . Hence, the two events in the above probability are independent conditionally on Y L -1 Sa-which is almost surely equal to 0. Finally, from (2.5), η b (A) = J ((a, ∞)) J ((b, ∞)) η a (A). Hence, there exists a measure denoted η on E := ∪ a>0 E (a) such that its restriction on each subspace E (a) coincides with ηa . Now, the main result is the following. Proposition 2.4.5. There exists a Poisson random measure H on R + × E with intensity λ ⊗ η such that, for all s > 0, we have H ({{s} × E} ∩ •) = δ θ L -1 s- Y 1 L -1 s--L -1 s >0 . Sketch of proof. It is easily seen that H : B(R + ) ⊗ B E (a) → R + , defined by H ([0, t] × A) = Card s ∈ [0, t] | L -1 s--L -1 s > 0 and Y L -1 s-+t , t ∈ R + ∈ A , ∀A ∈ B (E) , t ∈ R + , defines a random measure on R + × E. This measure is the number of excursions of Y up to time t that lie in A. In order to show that H is Poissonian, we use Theorem 2.1.3. For any measurable set A of B (E), the proof of the Poissonian nature of the counting process As usual, the behaviour at 0 of Y leads to technical difficulties which are, as usual, eluded (see [START_REF] Bertoin | Lévy processes[END_REF], theorem IV.10). N A t = H ([0, t] × A) , ∀t ∈ Ladder processes and their Laplace exponents In the preceding section, we have seen how the excursions of a Lévy process from its maximum can be described through a Poisson random measure H. This was done using the excursions from 0 of the reflected process. However, the knowledge of H is not enough to recover the whole trajectory of X. Indeed, it does not describe the behaviour of X when it reaches its maximum. Hence, we need the couple H, X to characterize X. Moreover, it appears that a slight modification of X makes it more user-friendly. Indeed, the time changed supremum process X L -1 s , s ∈ R + is also almost a subordinator. The only problem lies in the fact that, if the reflected process experiences an infinite excursion, L -1 jumps to infinity. This implies that X L -1 remains constant which is inconsistent with its Lévy nature. This is the motivation of the definition of the ascending height process H defined by H s = X L -1 s if L -1 s < ∞, ∞ else, for all s in R + . Indeed, we have that the 2-dimensional process ((L -1 s , H s ), s ∈ R + ) is also a (eventually killed) subordinator. This process is usually called the ascending ladder process. The proof of this fact lies on the same ideas as the proof Proposition 2.4.2. 
We do not write this proof here but the interested reader can find it in [START_REF] Bertoin | Lévy processes[END_REF] (Lemma 2, p.157). Now, the core point is that, in the case of Lévy processes with non negative jumps, the Laplace exponent of the ladder process can be explicitly expressed through the Laplace exponent of X. To prove that, we need to introduce the so-called descending ladder processes which plays a symmetric role as the ascending one but with the idea to decompose the path of X through the excursions from its minimum. More precisely, let (( L -1 s , H s ), s ∈ R + ) be the ascending ladder process of -X. This bivariate subordinator is called the descending ladder process of X. We are now able to state the main theorem of this chapter which links the characteristic exponent of X with the Laplace exponent of (( L -1 s , H s ), s ∈ R + ) and ((L -1 s , H s ), s ∈ R + ). This is the celebrated Wiener-Hopf factorization. Theorem 2.5.1. (Wiener-Hopf factorization) Let Ψ the characteristic exponent of X. Let κ and κ be the Laplace exponents of L -1 , H and L-1 , Ĥ (respectively). Then, for all p ∈ R + , p p -iθ + ψ(λ) = κ(p, 0) κ(p -iθ, -iλ) κ(p, 0) κ(p -iθ, -iλ) . (2.6) Note that in order to give a sense to the above equality, the function κ and κ needs to be analytically extended to {z | z ≤ 0}. Sketch of proof. Let e p be an exponential random variable with parameter p ∈ R + independent of X. Now, easy calculations show that E e iθep+iλXe p = p p -iθ + ψ(λ) . On the other hand, let G t be the time of last supremum of X up to time t, that is G t = sup s < t | X s = X s . Now we use that (G ep , X ep ) and (e p -G ep , X -X ep ) are independent random variables (see Lemma VI.6 in [START_REF] Bertoin | Lévy processes[END_REF]). This leads to E e iθep+iλXe p = E e iθGe p +iλXe p E e iθep-Ge p +iλXe p -Xe p . However, the duality Lemma for Lévy processes (see [START_REF] Kyprianou | Fluctuations of Lévy processes with applications[END_REF], Lemma 3.4) tells us that the time reversed process (X T -t -X T , t ∈ [0, T ]) has the law of (-X t , t ∈ [0, T ]). Hence, E e iθep+iλXe p = E e iθGe p +iλXe p E e iθG ep +iλX ep , where G and X are defined from -X as G and X are defined from X. It remains to show that E e iθGe p +iλXe p = κ(p, 0) κ(p -iθ, -iλ) . We work by path decomposition. We have that E e iθGe p +iλXe p = E ∞ 0 qe -qt e iθGt+iλXt dt = E ∞ 0 qe -qt e iθGt+iλXt 1 Xt=Xt dt + E ∞ 0 qe -qt e iθGt+iλXt 1 Xt =Xt dt. We treat the two terms in the last equality independently and begin with the second term. Now, assume we have two times t 1 and t 2 satisfying . From this, one can see that X t 1 = X t 1 , X t 2 = X t 2 , e iθGt+iλXt 1 Xt =Xt = R + ×E 1 L -1 s-<t<L -1 s e iθL -1 s-+iλX L -1 s-H(ds, de). Using this, one has ∞ 0 qe -qt e iθGt+iλXt 1 Xt =Xt dt = ∞ 0 qe -qt R + ×E 1 L -1 s-<t<L -1 s e iθL -1 s-+iλX L -1 s-H(ds, de)dt = R + ×E e iθL -1 s-+iλX L -1 s- e -qL -1 s--e -qL -1 s H(ds, de) = R + ×E e (iθ-q)L -1 s-+iλX L -1 s- 1 -e -qL(e) H(ds, de), where L(e) denotes the length of the excursion e (that is L -1 s -L -1 s-). From this, the compensation formula for Poisson functional (2.2) implies that E ∞ 0 qe -qt R + ×E 1 L -1 s-<t<L -1 s e iθL -1 s-+iλX L -1 s-H(ds, de)dt = R + ×E E e (iθ-q)L -1 s-+iλX L -1 s- 1 -e -qL(e) ds η(de). (2.7) Now, one has on one side, E e -qL -1 s = e sκ(q,0) . But on the other side, E e -qL -1 s = E e -qds-q [0,s]×R + x J (ds,dx) = E e -qds-q [0,s]×E L(e) H(ds,de) . Hence, using formula (2.1), one has E e -qL -1 s = e -qds exp - E 1 -e -sL(e) η(de) . 
Finally, E (1 -e -qL(e) ) η(de) = -κ(q, 0) -qd. This, in conjunction with (2.7), entails E ∞ 0 qe -qt e iθGt+iλXt 1 Xt =Xt dt = R + E e (iθ-q)L -1 s-+iλX L -1 s- ds (κ(q, 0) -qd) = R + e tκ(q-iθ,-iλ) ds (-κ(q, 0) -qd) = -κ(q, 0) -qd κ(q -iθ, -iλ) . Concerning the first term, it is easily seen that G t = L -1 Lt-and G t = t when X t = X t . This implies that E ∞ 0 qe -qt e iθGt+iλXt 1 Xt=Xt dt = E R + q exp (iθ -q)L -1 Lt + iλX L -1 L t 1 L -1 L t -=L -1 L t dt = E R + q exp (iθ -q)L -1 s + iλX L -1 s 1 L -1 s-=L -1 s dL -1 s , where we use that dL -1 is the push-forward measure of the Lesbegue measure by L to obtain the last equality. But on the set s ∈ R + | L -1 s-= L -1 s , dL -1 s = dds, hence E R + q exp (iθ -q)L -1 s + iλX L -1 s 1 L -1 s-=L -1 s dL -1 s = dE R + q exp (iθ -q)L -1 s + iλX L -1 s ds = qd κ(q -iθ, -iλ) . Finally, E e iθGe p +iλXe p = -κ(p, 0) κ(p -iθ, -iλ) . (2.8) Using the same computation on -X leads to E e iθG ep +iλX ep = -κ(p, 0) κ(p -iθ, -iλ) . (2.9) The Wiener-Hopf factorization for spectrally positive Lévy processes Here, we focus on spectrally positive Lévy processes which is the type of processes that appear in the sequel of this manuscript. A spectrally positive Lévy process is supposed to satisfy the condition Π(R -) = 0 meaning that the path of the process does not experience negative jumps. In this case, the Wiener-Hopf factorization of Theorem 2.5.1 takes a simpler form. Indeed, first note that when X is a spectrally positive Lévy process, -X satisfies the definition of a local time at 0 for the reflected process at the minimum. This implies, since L -1 s = inf {t ≥ 0 | X t < -s} , that the descending ladder process L -1 is nothing more than the hitting time of (-∞, -s] by X. It follows then, using the fact that the process e -αXt-tψ(α) , t ∈ R + is a martingale and Doob's optimal stopping theorem at the stopping time L -1 s that E e -ψ(α) L -1 s = e αs . Finally, since X L -1 s = s (using that X decreases only continuously), we have that E e -α L -1 s -β Ĥs = e -s(-φ(α)+β) , (2.10) where φ is the right inverse of ψ. Hence, κ(α, β) = β -φ(α). Now, (2.6) entails that κ(p + α, β) κ(p, 0) = φ(p) p ψ(β) -(α + p) φ(α + p) -β . Finally, κ(α, β) = ψ(β) -α φ(α) -β . (2.11) Fluctuation problems for spectrally positive Lévy processes The purpose of this section is to use the ladder processes in order to solve fluctuation problems. Large time behaviour In this section, we are interested in studying the asymptotic behaviour of X. In particular, does X drifts to ±∞ or not ? A first necessary condition for our Lévy process to drift to ∞ is that H t drifts to ∞ as t grows. It is clear from the homogeneity of its increments that a subordinator always drifts to infinity unless it is constant or killed. Hence, it follows that the finiteness of the overall supremum of X depends of the killing of H. This last fact can be seen from the value at 0 of its Laplace exponent. We know from (2.11) that the Laplace exponent of H is given by - ψ(β) β -φ(0) , (2.12) where we recall that ψ is the Laplace exponent of X and φ its right-inverse. Using that ψ(β) = -aβ + 1 2 σ 2 β 2 - R + 1 -e -βx + βx1 x<1 Π(dx), it is easily seen that ψ is twice differentiable on R + \{0}. One can then use this to show that ψ is convexe. Using its convexity, ψ has a positive zero if and only if ψ (0+) < 0. In that case, according to (2.12), the Laplace exponent of H takes value 0 at 0 (since φ(0) > 0). When, ψ (0+) ≥ 0, κ(0, 0) equals to ψ (0+). 
It follows, that H is a killed subordinator only when ψ (0+) > 0. Suppose, for now, that ψ (0+) > 0. We show that X drifts to -∞. From the remarks above, ψ (0+) > 0 implies that the overall maximum of X, say X ∞ , is a.s. finite (since H is killed). On the other hand, it follows from (2.10) that κ(0, 0) equals 0 meaning that H drifts to infinity. Hence, sup t≥0 X t < ∞ a.s. and inf t≥0 X t = -∞ a.s. Now, let x be a positive real number, we have P lim sup t→∞ X t ≥ - x 2 ≤ P   sup t>τ - -x X t ≥ - x 2   , where τ - a = inf {t ≥ 0 | X t < a} , ∀a ∈ R. But, by the strong Markov property, we have P   sup t>τ - -x X t ≥ - x 2   = E P X τ - -x X ∞ ≥ - x 2 ≤ P -x X ∞ ≥ - x 2 , since X τ - -x ≤ -x. Using the properties of the increment of X, the r.h.s. of the last inequality equals P X ∞ ≥ x 2 . To end, note that, since X ∞ , is a.s. finite, this last probability tends to 0 as x increases. Hence, P(lim sup t→∞ X t ≥ -x 2 ) equals 0. Using a similar method, on can show that X drifts to -∞ if ψ (0+) < 0, and oscillate if ψ (0+) = 0, that is lim sup t→∞ X t = ∞ and lim inf t→∞ X t = -∞, almost surely. Note that this last property gives no information on the recurrence of the process X. Exit problems for spectrally negative Lévy process Now, we are interested in the exit time of X from an interval. Let τ + a = inf {t > 0 | X t > a} and τ - a = inf {t > 0 | X t < a} the exit times upward and downward. Let a > b, we are interested, for any x ∈ (b, a), in the evaluation of the probability P x τ - b < τ + a . By the properties of homogeneity of the Lévy process, this boils to study the case b = 0. Moreover, we have P x τ - 0 < τ + a = P x-a τ - -a < τ + 0 . Now we use that the probability that the overall maximum X ∞ is lower than 0 starting from x -a is equal to the probability to hit -a before 0 and then that X ∞ is greater than 0. More, precisely P x-a X ∞ ≤ 0 = E P X τ - -a X ∞ ≤ 0 1 τ - -a <τ + 0 = P x-a τ - -a < τ + 0 P -a X ∞ ≤ 0 . Hence, we have that P x τ - 0 < τ + a = P x-a X ∞ ≤ 0 P -a X ∞ ≤ 0 . On the other hand, E e -αX∞ = ∞ 0 αe -αs P X ∞ ≤ s ds = ∞ 0 αe -αs P -s X ∞ ≤ 0 ds. Its now time to use what we know about the ascending ladder process. Let H be a subordinator with Laplace exponent given by κ(0, β) -κ(0, 0). This means that H has the same law as H with the difference that it is not killed. Let also E be an independent exponential random variable with parameter ψ (0+). Since H took at its killing time is equal to the overall supremum of the process X, we have E e -αX∞ = E e -αH E = ∞ 0 ψ (0+)e -ψ (0+)t E e -αHt dt = ∞ 0 ψ (0+)e -ψ (0+)t e tκ(0,β) dt = ψ (0+) α ψ(α) . Finally, P -s X ∞ ≤ 0 satisfies ∞ 0 e -αs ψ (0+) P -s X ∞ ≤ 0 ds = 1 ψ(α) . The function x → 1 ψ (0+) P -x X ∞ ≤ 0 is called the scale function of X and is denoted W . Now, we want to go a little further. Let τ be the exit time of the interval (-a, 0). We are interested in the law of the couple (X τ -, X τ ). As usual, there is no restriction in taking (-a, 0), but there are two reasons for this choice. First, it is in that case that the law of the couple is the easier to write. Second, our real interest lies in the overshoot and undershoot of the Lévy process (see Figure 2.5) over a fixed level which has exactly the law of (-X τ -, X τ ) when the chosen interval is (-a, 0) (this follows again from the homogeneity of X). Of course, since X is spectrally positive we have X τ -= X τ = -a almost surely on the event τ - -a < τ + 0 . 
It is more interesting to see what happens when τ + 0 < τ - -a because X can cross the border by jumping above. Now let A and B be two Borel sets such that A ⊂ (-a, 0) and B ⊂ [0, ∞). We have, for x in [0, a), P -x (X τ -∈ A, X τ ∈ B) = E -x [0,∞)×R 1 Xt≤0, X t ≥-a 1 X t-∈A 1 X t-+y∈B N (dt, dy) . Now, the compensation formula (2.2) entails that P -x (X τ -∈ A, X τ ∈ B) = [0,∞)×R E -x 1 Xt≤0, X t ≥-a 1 X t-∈A 1 X t-+y∈B dt Π(dy) = [0,∞) E -x [1 τ >t 1 Xt∈A Π(B -X t )] dt = A Π(B -y) U(-x, dy), where U is the mean occupation measure of X up to its first exit of (-a, 0). That is the measure defined by U(x, A) = R + P x (X t ∈ A, τ > t) dt, ∀A ∈ B ((-a, 0]) . Moreover, we have, for a positive real number x, Chapitre 2. Preliminaries I : Fluctuation of Lévy processes in a nutshell -a 0 { } Overshoot Undershoot Xτ X τ - Figure 2 .5 -Undershoot and overshoot of the process X at the exit time τ of the interval (-a, 0). R + P -x (X t ∈ A, τ > t) dt = R + P -x X t ∈ A, X t ≤ 0 -P -x X t ∈ A, X t < -a, X t ≤ 0 dt. (2.13) But the probability P -x X t ∈ A, X t ≤ 0, X t < -a rewrites P -x X t ∈ A, X t ≤ 0, X t < -a = P -x τ - -a < τ + 0 P -a X t ∈ A, X t ≤ 0 = W (x) W (a) P -a X t ∈ A, X t ≤ 0 . (2.14) From this point, we focus on the probability in the r.h.s of the last equality. Now, as in the proof of Theorem 2.5.1, let e p be an exponential random variable with parameter p > 0 independent of X. We have R e -pt P -x X t ∈ A, X t ≤ 0 dt = 1 p P -x X ep ∈ A, X ep ≤ 0 = 1 p P -x X ep -X ep + X ep ∈ A, X ep ≤ 0 . Now, using again the elements given in the proof of Theorem 2.5.1, X ep -X ep and X ep are independent. Moreover, X ep -X ep has the law of X ep , which is exponential with parameter φ(p) according to (2.9) and (2.10). Hence, denoting by P Xe p the law of X ep , we get R e -pt P -x X t ∈ A, X t ≤ 0 dt = R - φ(p) p e φ(p)y R 1 y+z-x∈A 1 z-x≤0 P Xe p (dz)dy. Now, using (2.8) and (2.11), we have that φ(p) p P Xe p converges weakly, as p goes to zero, to a measure whose Laplace transform is given by β-φ(0) ψ(β) . This corresponds to the measure W (dz) -φ(0)W (z)dz, where W (dz) the Stieljes measure associated to W . Consequently, R P -x X t ∈ A, X t ≤ 0 dt = R + e φ(0)y R 1 y+z-x∈A 1 z-x≤0 (W (dz) -φ(0)W (z)dz) dy. A simple change of variable leads to A e φ(0)y [x+y,x] e -φ(0)(x+z) (W (dz) -φ(0)W (z)dz) dy. Now, integrating by parts entails P -x X t ∈ A, X t ≤ 0 = A e φ(0)y W (x) -W (x + y)dy. Using this last equality in conjunction with (2.13) and (2.14) leads to U(-x, A) = A W (x)W (a + y) W (a) -W (x + y)dy, and to P -x (X τ -∈ A, X τ ∈ B) = A Π(B -y) W (x)W (a + y) W (a) -W (x + y) dy. We summarize these results in the following Theorem. Theorem 2.6.1. Let X be a spectrally positive Lévy process with Laplace exponent given by ψ. Let W be the unique increasing function satisfying R + e -βt W (t) dt = 1 ψ(β) , ∀β ∈ R + . Then, P x τ + a < τ - 0 = W (x -a) W (a) , ∀a ∈ R + , x ∈ (0, a). In addition, if τ = τ + a ∧ τ - 0 . The law of the overshoot O + and the undershoot O -of the process when crossing level a is given by P x O -∈ A, O + ∈ B = A Π(B + y) W (a -x)W (a -y) W (a) -W (a -x -y) dy, ∀x > 0, for any A in B((0, a]) and B in B(R + ). Reminder on renewal theory The purpose of this part is to recall some facts on renewal equations borrowed from [START_REF] Feller | An introduction to probability theory and its applications[END_REF]. Let h : R → R be a function bounded on finite intervals with support in R + and Γ a probability measure on R + . 
The equation
F(t) = ∫_{R_+} F(t-s) Γ(ds) + h(t),
called a renewal equation, is known to admit a unique solution which is finite on bounded intervals. Here, our interest is focused on the asymptotic behavior of F. We say that the function h is DRI (directly Riemann integrable) if, for any δ > 0, the quantities
δ Σ_{i=0}^{n} sup_{t ∈ [δi, δ(i+1))} h(t) and δ Σ_{i=0}^{n} inf_{t ∈ [δi, δ(i+1))} h(t)
converge, as n goes to infinity, to some real numbers I^δ_sup and I^δ_inf respectively, and
lim_{δ→0} I^δ_sup = lim_{δ→0} I^δ_inf < ∞.
In the sequel, we use the two following criteria for the DRI property.
Lemma 2.7.1. Let h be a function as defined previously. If h satisfies one of the next two conditions, then h is DRI:
1. h is non-negative, decreasing and classically Riemann integrable on R_+,
2. h is càdlàg and bounded by a DRI function.
We can now state the next result, which is constantly used in the sequel.
Theorem 2.7.2. Suppose that Γ is non-lattice and that h is DRI. Then
lim_{t→∞} F(t) = γ ∫_{R_+} h(s) ds, with γ := ( ∫_{R_+} s Γ(ds) )^{-1},
if the above integral is finite, and zero otherwise.
Remark 2.7.3. In particular, if we suppose that Γ is a measure with mass lower than 1, and that there exists a constant α ≥ 0 such that ∫_{R_+} e^{αt} Γ(dt) = 1, then one can perform the change of measure Γ̃(dt) = e^{αt} Γ(dt) in order to apply Theorem 2.7.2 to a new renewal equation and obtain the asymptotic behavior of F (see [START_REF] Feller | An introduction to probability theory and its applications[END_REF] for details). This method is also used in the sequel.

Chapitre 3 Preliminaries II : Splitting trees

The purpose of this chapter is to present splitting trees in a user-friendly fashion. Almost all the results presented in this chapter come from [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]. A splitting tree is a kind of planar rooted random tree which can be used to describe the dynamics of a biological population. In contrast with the well-known Galton-Watson trees, which only take into account the genealogical structure of the population, splitting trees contain information on the lifetimes of the individuals. For this purpose, individuals are not represented by the nodes of the tree but by its branches. A branch is supposed to have a length equal to the lifetime of the corresponding individual. Hence, splitting trees describe the dynamics of the population through time in a finer way. These trees have been introduced by Geiger and Kersting in [START_REF] Geiger | Depth-first search of random trees, and Poisson point processes[END_REF]. In their work, the authors introduce a contour process which can be roughly described as a height process in a depth-first exploration of the tree. The purpose of their paper is then to study this process and its links to Poisson point processes. Their method allowed one of the authors to study splitting trees under various conditionings [START_REF] Geiger | Contour processes of random trees[END_REF]. Later, in [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF], A. Lambert introduced a new contour process which appears to be (almost) a Lévy process. His new method proved fruitful for deriving properties of splitting trees and of their functionals (for instance, of some particular Crump-Mode-Jagers processes). Many of the results of this thesis were obtained thanks to the tools introduced in [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF].
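Before turning to splitting trees, here is a small numerical illustration of the renewal theorem recalled in Section 2.7 (Theorem 2.7.2). The choices Γ = Exp(1), h(t) = e^{-2t} and the discretization scheme below are ours and only meant as a sanity check, not as part of the thesis.

```python
import numpy as np

# Toy check of Theorem 2.7.2 (choices of Gamma and h are ours): Gamma = Exp(1)
# (non-lattice, mean 1) and h(t) = exp(-2t) (decreasing and integrable, hence DRI).
# The theorem predicts F(t) -> gamma * int h = 1 * 1/2.
dt, T = 0.01, 30.0
t = np.arange(0.0, T, dt)
h = np.exp(-2.0 * t)
g = np.exp(-t) * dt                 # Gamma(ds) discretized on the grid

F = np.zeros_like(t)
for i in range(len(t)):
    # F(t_i) = h(t_i) + sum_{0 < s_j <= t_i} F(t_i - s_j) Gamma(ds_j)
    F[i] = h[i] + np.dot(F[:i][::-1], g[1:i + 1])

print("F(30) ~", round(F[-1], 4), "   predicted limit:", 0.5)
```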
In Section 3.1, we describe a construction of the splitting trees based on the one given in [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]. Section 3.2 is dedicated to the construction of the contour process introduced in [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]. Note that, in contrary with [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF], we do not consider trees which can be "locally infinite". This leads to some simplifications. In Section 3.3, the contour process is used to derive basic properties of binary homogeneous Crump-Mode-Jagers processes used in the next chapters. Section 3.4 introduces the backward model associated to splitting trees. In any model of population dynamics, it is interesting to understand the links between the lineages of individuals alive at a fixed time. This is especially true for population genetics. For instance, the Kingman's famous coalescent model is derived from the Wright-Fisher model of population dynamics. The backward model associated to a splitting tree is the so-called coalescent point process. Its law is, once again, studied through the contour process. Construction The purpose of this section is to give the mathematical formalism underlying the theory of splitting trees. The first part describes in which space of trees a splitting tree belongs. The second part gives a characterization of the law of a splitting tree. The construction is based on Lambert [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]. The Ulam-Harris-Neveu set and the discrete genealogy From the mathematical point of view, a splitting tree is a random variable with value in a set of trees with branch length. This set is a subset of P(U × R) where U := n≥0 N n , where N 0 equals {∅}. U is the well known Ulam-Harris-Neveu set which means to describe the genealogical structure of the individuals in the tree. In the sequel, for any σ ∈ U, we denote, for any non-negative integer k, σ k the kth last ancestor of σ. That is ∀k ∈ N, σ k = (σ 1 , . . . , σ n-k ) . In this manner, σ 0 equals σ and σ 1 is the parent of σ. By the way, if σ = (σ 1 , . . . , σ n ) belongs to N n , for some integer n, and if k ≥ n, we assume that σ k equals ∅. ∅ is called the ancestor individual. Let P U (resp. P R ) be the canonical projection from U × R onto U (resp. R). For a tree T, P U (T) can be thought of as the underlying discrete genealogical tree. In the sequel, we denote by G this discrete genealogy. In order to be admissible as a tree, a subset T of U × R needs to have a discrete genealogy G satisfying some compatibility conditions. Compatiblity conditions on the discrete genealogy G : -The ancestor belongs to the tree : ∅ ∈ G. -If an individual belongs to the tree, so does its parent : ∀σ ∈ G, σ 1 ∈ G. -Individuals are well-ordered : ((σ 1 , . . . , σ n ) ∈ G and σ n > 1) ⇒ ((σ 1 , . . . , σ n -1) ∈ G). Now, we introduce the canonical order relation on a discrete tree in order to characterize the relationship between individuals. Let δ and σ be two elements of U, we write δ σ if δ is an ancestor of σ. That is (δ σ) ⇐⇒ ∃k ∈ N, σ k = δ . This relation defines a partial order on U. We also denote δ ∧ σ = sup {η ∈ G | η σ} ∩ {η ∈ G | η δ} , which is the last common ancestor of δ and σ. 
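The following toy Python snippet (the function names are ours, not the thesis') simply makes the Ulam-Harris-Neveu bookkeeping concrete: a label is a tuple of positive integers, σ_k is obtained by dropping the last k entries, and the last common ancestor δ ∧ σ, defined as the supremum of the common ancestors, is the longest common prefix.

```python
# Toy illustration of the Ulam-Harris-Neveu labels (function names are ours):
# an individual is a tuple of positive integers, the ancestor being the empty tuple ().

def kth_ancestor(sigma, k):
    """sigma_k in the text: drop the k last labels (gives the ancestor () when k >= n)."""
    return sigma[:max(len(sigma) - k, 0)]

def is_ancestor(delta, sigma):
    """delta precedes sigma in the partial order iff delta is a prefix of sigma."""
    return sigma[:len(delta)] == delta

def last_common_ancestor(delta, sigma):
    """delta ^ sigma: the longest common prefix of the two labels."""
    i = 0
    while i < min(len(delta), len(sigma)) and delta[i] == sigma[i]:
        i += 1
    return delta[:i]

sigma, delta = (2, 1, 3), (2, 2)
print(kth_ancestor(sigma, 1))              # (2, 1): the parent of sigma
print(is_ancestor((2,), sigma))            # True: (2,) is an ancestor of sigma
print(last_common_ancestor(delta, sigma))  # (2,): their last common ancestor
```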
Note that this last supremum is well-defined since if η 1 and η 2 are two elements of {η ∈ G | η σ}, then there exist two non-negative integers n 1 and n 2 such that η 1 = σ n 1 and η 2 = σ n 2 . Hence η 1 η 2 if and only if n 1 ≤ n 2 . Chronological trees We now describe more accurately the set of admissible trees. We desire to introduce a time structure. In the previous part, we described how the discrete genealogy of a subset T of U × R must be to make T admissible as a tree. We, now, describe the "time compatibility conditions". Time compatibility conditions : -Individuals are alive for all time between their birthdate and their date of death : ∀σ ∈ G, ∃B σ , D σ ∈ R + , B σ < D σ and (B σ , D σ ] = P R ({t ∈ R + | (σ, t) ∈ T}) . B σ is its birth date while D σ is its date of death. -Individuals are born during the lifetime of their parents : ∀σ ∈ G\{∅}, B σ ∈ (B σ 1 , D σ 1 ). -Individuals are born in the right order : ((σ 1 , . . . , σ n ) ∈ G and σ n > 1) ⇒ B (σ 1 ,...,σn-1) < B σ . -The ancestral individual born at time 0 : B ∅ = 0. The set of subsets T of U × R + satisfying these compatibility conditions as well as those on the discrete genealogy is called the set of admissible trees and is denoted by T . Chronological trees as measured metric spaces The purpose of this section is to give more structure to chronological trees. This allows us to define, for instance, Poisson random measures on a tree T. Topology on T Here we define a topology on the tree through a metric. The natural distance one may expect between two points of the tree is the length of the "shortest path" between these two points (see Figure 3.2). To do so we need to define the divergence point between points (δ, t) and (σ, s) of T. This point, denoted (δ, t) ∧ (σ, s), is defined by the relations (see Figure 3.1) P U ((δ, t) ∧ (σ, s)) = δ ∧ σ, P R ((δ, t) ∧ (σ, s)) = inf B σ i , B δ j | i, j ∈ N, σ ∧ δ ≺ σ i and σ ∧ δ ≺ δ j . Now, for two points (δ, t) and (σ, s), we set d ((δ, t), (σ, s)) = t + s -2P R ((δ, t) ∧ (σ, s)) . The function d defines the desired distance on the tree. U (T), there is a natural isometry ϕ σ from {(δ, t) ∈ T | δ = σ} to (B σ , D σ ]. Now, let O be an open set of T, and set λ(O) = σ∈P U (T) λ(ϕ σ (L(σ) ∩ O)), where L(σ) denotes the slice of the tree corresponding to σ, that is L(σ) = {σ} × (B σ , D σ ]. It is easy to see that λ defines a σ-additive functional on the topology of T, which uniquely extend to a measure on B(T). This defines a Lebesgue measure λ on T. These two objects (the metric and the measure) are useful to construct the contour process of the tree (see Section 3.2). The law of a splitting tree Let b a positive real number and P V be a probability measure on (0, ∞]. The purpose of this part is to introduce a probability measure P T on T . This measure is called the law of a splitting tree with lifespan measure bP V . Let us roughly describe this law through the dynamics of the population described by a splitting tree. The population start with a single individual. This individual gives births at exponential rate b. Each child is assumed to have a lifetime distributed according to P V , independent of its parent or its brotherhood. To end, children give birth according to the same mechanisms (and independently from the other individuals) and so on. More precisely, let E : U × T → T be the application defined by E(i, T) = {((σ 2 , . . . , σ n ), t -B i ) | ((σ 1 , . . . , σ n ), t) ∈ T and σ 1 = i} , ∀i ∈ N\{0}, ∀T ∈ T . 
This application returns the subtree of T induced by the ith child of the ancestor individual. Note that if i does not belong to G, E(i, T) equals ∅. Now, the law P T of a splitting tree T, with lifetime distribution P V and birth rate b, is the unique distribution such that -D ∅ is distributed according to P V , -conditionally on D ∅ , the random measure on (0, D ∅ ] defined by i∈N i∈G δ B i is a Poisson random measure with rate b, -for all i ∈ N ∩ G, the law of E(i, T) is P T . The readers interested in more details should look at [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]. A very important consequence of this last point is that a splitting tree presents a renewal structure. Indeed, any of the subtrees induced by the children of the roots is itself a splitting tree. The contour process of a Splitting tree is a Lévy process A very common method in trees analysis is to transform trees into more convenient objects. In the Galton-Watson case, for instance, one may think of the Harris path or the height process. This allows transforming a tree into a real-valued function which is easier to manipulate. Our purpose in this section is to introduce the same kind of object for spitting trees. This will reveal particularly powerful in the study of the properties of the splitting trees. In particular because the contour process appears to have a nice behaviour. The ideas and results come again from [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]. In order to go further, we need to introduce an exploration process of the tree. To do that, we must choose an order to explore the tree. The contour process of a finite tree Let T be an element of T which is finite in the sense that λ(T) is finite. Assume also that the total number of individuals is finite. This last hypothesis is not a restriction since a splitting tree with finite length must have a finite number of individuals (because lifetimes are i.i.d.). Exploring chronological trees We define a total order relation on a chronological tree by setting, for two elements (σ, t) and (δ, s) of T, (σ, t) ≤ (δ, s) ⇔      δ σ and P R ((δ, s) ∧ (σ, t)) ≥ s (C1) or ∃n ∈ N , σ n δ and t > B σ n . (C2) In a more informal way, the point of birth of the lineage of (δ, s) during the lifetime of the root split the tree in two connected components, then (σ, t) ≤ (δ, s) if (σ, t) belongs to the same component as (δ, s) but is not an ancestor of (δ, s) (see Figure 3.3). x Now, we have the tools needed to introduce the exploration process. Let ϕ, be the application defined by ϕ : T → [0, λ (T)), x → λ ({y | y ≤ x}) . The main result is the following. Proposition 3.2.1 (Lambert, [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]). ϕ is an increasing bijection. We give a new proof of this result which was hopped to be simpler... but just appears to be different. Proof. In order to get the result, we prove it for a slight modification of ϕ. More precisely, in this proof we assume that ϕ is defined on T = T ∪ {(∅, 0)} as follow : ϕ : T → [0, λ (T)], x → λ ({y | y ≤ x}) if y = (∅, 0), λ(T) else. The proof follows these steps : ϕ is strictly increasing (similar to the proof in [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF] ). ϕ is continuous with respect to the order topology on T induced by ≤. -T is connected w.r.t. the order topology. -The range of ϕ is [0, λ(T)]. 
Let (δ, t) < (σ, s), then there exists ε > 0 such that {y ∈ T | y ≤ (σ, s)}\{y ∈ T | y ≤ (δ, t)} ⊃ B((σ, t + ε), ε), if t = D σ , and {y ∈ T | y ≤ (σ, s)}\{y ∈ T | y ≤ (δ, t)} ⊃ B((σ 1 , t + ε), ε), if t = D σ . These imply that ϕ is strictly increasing (from the definition of λ). Let us consider from this point (and until the end of this proof) that T is endowed with the order topology induced by ≤. This topology is different from the topology induced by the distance d. We begin by showing that ϕ is continuous with respect to the order topology (which is trivially not the case with respect to d). Continuity of ϕ Let (σ, t) in T. Assume that (σ, t) is a branching point. That is there exists δ in G such that δ 1 equals σ and B δ = t. Let be a positive real number. Now, consider the segment ((σ, t -), (δ, B δ + )) = {(δ, s) | s ∈ (B δ , B δ + )} ∪ {(σ, s) | s ∈ (t -, t)} . The last equality holds for small enough. From this equality, it is easily seen that (for small enough), ϕ {((σ, t -), (δ, B δ + ))} = B(ϕ(σ, t), 2 ). Since {((σ, t -), (δ, B δ + )), > 0} is a complete neighbourhood system of (σ, t), ϕ is continuous at the branching points of the tree (we recall that this continuity holds only w.r.t. to the order topology). The other cases (leaf or simple point) are left to the reader. Connectedness of T Let A be a subset of T. We want to show that A admits a supremum. Define for all σ in G, M σ = inf (P R (A ∩ {σ} × (B σ , D σ ])) . Now, according to hypothesis made in the beginning of this section we have that G is finite. Hence, the set {(σ, M σ ) | σ ∈ P U (A), M σ > B σ } ∪ (σ 1 , M σ ) | σ ∈ P U (A), M σ = B σ is a totally ordered finite set which, hence, has a maximum (σ * , t * ). We claim that (σ * , t * ) is a supremum of A. This means that T is a complete lattice w.r.t. ≤. Moreover, if (σ, t) < (δ, s), it is easily seen (by considering for instance (σ, t -) or (δ, s + )) that there exists a third point x in T such that : (σ, t) < x < (δ, s) (continuum property). This fact, in conjunction with the fact that T is a complete lattice, implies that T is connected w.r.t. the order topology (see [START_REF] Willard | General topology[END_REF]). This is the end Finally, using that ϕ((∅, D ∅ )) = 0 and ϕ((∅, 0)) = λ(T), we have, since ϕ is continuous and T is connected, ϕ( T) = [0, λ(T)]. Note that the hypothesis that G is finite is fundamental to make things work. Indeed, as one can see in [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF], in the general case the bijection holds only with the local closure of T. A way to adapt this proof to the general case would require to use a compacification of the tree T (this is more or less what is done in [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF] ). The exploration process is now defined as the inverse of ϕ (see Figure 3.4). The contour of a finite tree We can now define the contour of a finite tree. Informally speaking, the contour process is a real valued process which can be seen as this : it begins at the top of the root and decreases with slope -1 while running back along the life of the root until it meets a birth. The contour process then jumps at the top of the life interval of the child born at this time and continues its exploration as before. If the exploration process does not encounter a birth when exploring the life interval of an individual, it goes back to its parent and continues the exploration from the birth-date of the just left individual (see Figure 3.5). 
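Before giving the formal definition of the contour, here is a minimal simulation sketch of the recursive characterization of the law P_T from Section 3.1.4 (birth rate b, i.i.d. lifetimes distributed as P_V). The truncation horizon, the choice P_V = Exp(1) and the function names below are ours; the code only records birth and death dates, which is enough to recover N_t.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch (ours, not from the thesis) of the recursive law of a splitting
# tree: the root lives V ~ P_V, gives birth on (0, D] at Poisson rate b, and every
# child starts an independent copy.  We truncate at a horizon T so that the
# recursion always stops, and we only keep the birth and death dates.
b, T = 1.0, 10.0
sample_V = lambda: rng.exponential(1.0)     # here P_V = Exp(1), an arbitrary choice

def splitting_tree(birth, horizon):
    """List of (birth_date, death_date) for all individuals born before horizon."""
    death = birth + sample_V()
    individuals = [(birth, death)]
    s = birth
    while True:
        s += rng.exponential(1.0 / b)               # next birth time of this individual
        if s >= min(death, horizon):
            break
        individuals += splitting_tree(s, horizon)   # independent subtree of the child
    return individuals

tree = splitting_tree(0.0, T)
alive_at_T = sum(1 for bd, dd in tree if dd > T)
print("born before T:", len(tree), "   alive at T (N_T):", alive_at_T)
```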
From the mathematical point of view, the contour process is just the height in the tree (that is, the biological time) of the exploration process at time t. More precisely, the contour process Y is defined by
Y_s = P_R( ϕ^{-1}(s) ), ∀s ∈ [0, λ(T)].
A useful feature is that the tree T is in bijection with the graph of the contour process. Indeed, let (s, Y_s) be a point of the graph; then the unique corresponding point in the tree is ϕ^{-1}(s) (see Figure 3.6).

The law of the contour process of a splitting tree

We recall that a splitting tree is a tree describing a population where individuals with (independent) lifetimes distributed according to a distribution P_V give birth at rate b. Moreover, the birth processes of different individuals are supposed independent. In this section, we are interested in the law of the contour process of a splitting tree T (see Section 3.1.2 for a more precise definition). The first problem is that a splitting tree may not have a finite length λ(T). This implies that the above definition does not apply. That is why we define, for every positive real number t, the contour Y^(t) of the tree truncated at time t. More precisely, let T^(t) = T ∩ (U × [0, t]) be the tree truncated at level t. This means that all parts of the tree above level t are removed. Now, since the number of children of each individual before time t must be finite (because a Poisson random measure with finite rate b is locally finite), the total length of the truncated tree must be finite. This implies that the contour process Y^(t) associated with T^(t) is well defined. Now, the main result of this chapter is the following.
Theorem 3.2.2 (Lambert [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]). Let (X^i)_{i≥1} be a sequence of i.i.d. Lévy processes with Laplace exponent
ψ(x) = x - ∫_{(0,∞]} ( 1 - e^{-rx} ) bP_V(dr), x ∈ R_+, (3.1)
such that X^1_0 = t ∧ V almost surely and X^i_0 = t almost surely, for all i > 1. Set τ^i_t = inf{s > 0 | X^i_s > t}, and S_i = Σ_{j=1}^{i} τ^j_t. Then, the process X defined by
X_s = Σ_{i≥1} X^i_{s-S_{i-1}} 1_{S_{i-1} ≤ s < S_i}, ∀s ∈ R_+,
killed at its first hitting of 0, has the same distribution as the process Y^(t). We say that Y^(t) is a spectrally positive Lévy process started from V ∧ t, reflected below t and killed at its first hitting of 0. Moreover, its Laplace exponent is given by (3.1).

The population counting process

In this section, we introduce the population counting process, which is the subject of Chapter 5. We first give its definition. Then, we show how to use the contour process in order to derive its first properties. As in the previous section, the results come from [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF].
Definition 3.3.1 (Binary homogeneous CMJ process). Let (N_t, t ∈ R_+) be the process defined by
N_t = Card( T ∩ (U × {t}) ), ∀t > 0.
This process is known as a binary homogeneous Crump-Mode-Jagers process.

The unidimensional marginal of (N_t, t ∈ R_+)

From Section 3.2.1, we easily see that
N_t = Card{ s ∈ R_+ | Y^(t)_s = t }.
This remark allows us to get, thanks to the theory of Lévy processes, a first piece of information on the process (N_t, t ∈ R_+). Indeed, let τ_t (resp. τ_0) be the hitting time of t (resp. of 0) by the contour process Y^(t). Now, for any positive integer k, the strong Markov property entails that
P( N_t = k | N_t > 0 ) = E[ P_{t∧V}( Card{ s ∈ R_+ | Y^(t)_s = t } = k | τ_t < τ_0 ) ] = P_t( Card{ s > 0 | Y^(t)_s = t } = k - 1 ).
Once again, the strong Markov property gives P t {Y (t) s = t | s > 0} = k -1 = P t (τ t < τ 0 ) P t {Y (t) s = t | s > 0} = k -2 = P t (τ t < τ 0 ) k-1 P t {Y (t) s = t | s > 0} = 0 = P t (τ t < τ 0 ) k-1 P t (τ 0 < τ t ) . Now, by Theorem 2.6.1 (see also Theorem 8.1 in [START_REF] Kyprianou | Fluctuations of Lévy processes with applications[END_REF]), we have that P t (τ t < τ 0 ) = 1 - 1 W (t) , where W is the scale function of the Lévy process whose Laplace exponent is given by (3.1). We recall that W is characterized by its Laplace transform, T L W (t) = (0,∞) e -rt W (r)dr = 1 ψ(t) , ∀t > α, (3.2) where α is the largest root of ψ. P (N t = k | N t > 0) = 1 W (t) 1 - 1 W (t) k-1 , ∀k ∈ N\{0}. (3.3) According to this, N t is geometrically distributed (conditionally on non-extinction) with parameter W (t). In particular E [N t | N t > 0] = W (t). (3.4) Hence, it would be worth to know the asymptotic behaviour of W in large time in order to get some hints on the behaviour of (N t , t ∈ R + ). To this goal, one can use Tauberian theorems for Laplace transform (see [START_REF] Kyprianou | Fluctuations of Lévy processes with applications[END_REF], Section 7.6). The results are the following Lemma 3.3.2. Lambert, [60] -if ψ (0+) > 0, then W (t) ∼ 1 ψ (0+) , -if ψ (0+) = 0, then W (t) ∼ 2t ψ (0+) , -if ψ (0+) < 0, then W (t) ∼ e αt ψ (α) . According to Champagnat-Lambert [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF], one can even go further and get In the sequel we refer to the supercritical case (resp. critical, subcritical) when ψ (0) < 0 (resp. ψ (0) = 0, ψ (0) > 0). Remark that, differentiating ψ in (3.1), this is equivalent to have bE[V ] > 1 (resp. bE[V ] = 1 , bE[V ] < 1 ). Extinction Let t be a positive real number, we have P (N t = 0) = E {P t∧V (τ t > τ 0 )} , which is equal, according to Theorem 2.6.1, to E W (t -t ∧ V ) W (t) = [0,t] W (t -v) W (t) P V (dv). Hence, P (N t > 0) = 1 - W P V (t) W (t) , (3.5) and EN t = W (t) -W P V (t), (3.6) Moreover, it is easily seen that P (Extinction) = lim t→∞ P (N t = 0) . Using this with Lemma 3.3.3, one can get in the critical and subcritical cases, P (Extinction) = 1. Similarly, using again Lemma 3.3.3, we have, in the supercritical case, lim t→∞ [0,t] W (t -v) W (t) P V (dv) = R + e -αt P V (dv). But according to (3.1), one has R + e -λv P V (dv) = 1 + ψ(λ) -λ b . (3.7) Finally, P (Extinction) = 1 - α b , (3.8) and P (NonEx) = α b . (3.9) Using the convexity of ψ, one can easily see that α > 0 if and only if ψ (0) < 0. In short, the only case where the population does not almost surely extinct is the supercritical one. Backward model : coalescent point process The purpose of this section is to analyse the genealogical model associated to splitting trees. Some previous works (for instance [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF][START_REF] Champagnat | Splitting trees with neutral Poissonian mutations II : Largest and oldest families[END_REF][START_REF] Champagnat | Birth and death processes with neutral mutations[END_REF]) show that some properties of a splitting tree are easier to study using the tree describing the genealogical relation between the lineages of the individuals alive at a time t. This is true in particular when one wants to study the genotype of individuals in the population (if we add mutations to the model). 
Indeed, the difference between two individuals in terms of genotype should depend only on the time past since their lineages has diverged. Hence, this particular genealogical tree, known as coalescent point processes (CPP), contains the essential informations to study, for instance, the allelic partition. In order to derive the law of that genealogical tree, we need to characterize the joint law of the times of coalescence between pairs of individuals in the population, which are the times since their lineages have split. In the sequel, let I t = {T ∩ U × {t}} denotes the set of individuals alive at fixed time t. This set is naturally ordered through the total order on T. We may refer to the ith individual in this order as the ith individual alive at time t (provided i ≤ N t ). This individual is denoted I t (i). Before defining the divergence time between the lineages of two individuals, let us define what a lineage is. Definition 3.4.1 (Lineage). The lineage of an individual alive at time t, or equivalently of a point (σ, t) in T, is defined by the set (see Figure 3.7) Lin((σ, t)) = (∪ n≥1 {(σ n , s) | s ≤ B σ n-1 }) ∪ {(σ, s) | s ≤ t} . (3.10) We refer the reader to Section 3.1 for the definition of σ n . The set Lin((σ, t)) corresponds to the set of successive (in the intuitive sense) points linking (σ, t) to (∅, 0)(see Figure 3.7). The time of coalescence C i,j between individuals i and j is the amount of time spent since their lineages have diverged (see Figure 3.8). It can be defined by C i,j = t -sup P R {Lin((I t (i), t)) ∩ Lin((I t (j), t))} . But one can show, for two individuals i and j (such that i ≤ j), that (see also Figure 3.9) C i,j = sup {C k,k+1 | k ∈ i, j } . Hence, all the coalescence times are characterized by the coalescence times of adjacent individuals. In the sequel, C i,i+1 is denoted H i . The so-called coalescent point process (CPP) is defined as the sequence (H i ) 0≤i≤Nt-1 . The CPP of the population is indeed described by this sequence, saying that a lineage coalesces with the first deeper branch on its left (see Figure 3.9). x The law of the CPP In order to derive the law of the sequence (H i ) 0≤i≤Nt-1 , we use once again the contour process of the splitting tree. The first step is to reword the coalescence times in terms of the contour. Its appears that the coalescence time H i between two adjacent individuals i and i + 1 is equal in distribution to the depth of the excursion of the contour below t between the visit of these two individuals. Let (σ, t) and (δ, t) be, respectively, the ith and (i+1)th individuals alive at time t (we assume that they exist). Now, the branching point (σ, t) ∧ (δ, t) between the lineages of these two individuals can be obtain as sup {Lin((σ, t))\Lin((δ, t))} . We are interested in the points explored by the exploration process between (σ, t) and (δ, t). These points are given by ϕ -1 ([ϕ((σ, t)), ϕ((δ, t))]) = Lin((σ, t))\Lin((δ, t)) {x ∈ T | sup {Lin((σ, t))\Lin((δ, t))} < x ≤ (δ, t)} , where ϕ denotes the exploration process defined in Subsection 3.2.1 and denotes the union of disjoint sets. Now let x in T such that sup {Lin((σ, t))\Lin((δ, t))} < x < (δ, t). Hence, we must have δ ∧ σ P U (x) and P R (x) > P R (sup {Lin((σ, t))\Lin((δ, t))}) . Otherwise, we would have x ≥ (δ, t) (see the definition of the order relation). This implies that P R (sup {Lin((σ, t))\Lin((δ, t))}) ≤ P R (ϕ -1 (s)), ∀s ∈ [ϕ((σ, t)), ϕ((δ, t))]. But the right hand side of the last inequality is the contour process. 
Finally, we get that the diverging time of the lineages of (σ, t) and (σ, t) is given by min Y (t) s | s ∈ [ϕ((σ, t)), ϕ((δ, t))] . This also implies that the coalescence time between those two individuals is the depth of the excursion of the contour process on the time interval [ϕ((σ, t)), ϕ((δ, t))]. Now, let H be the depth of an excursion below t of a Lévy process with Laplace exponent given by (3.1). It is easily seen that P (H > s) = P t τ - s < τ + t , ∀s ∈ R + , where τ - s and τ + t were defined in the beginning of Section 2.6.2. But Theorem 2.6.1 (see also [START_REF] Kyprianou | Fluctuations of Lévy processes with applications[END_REF][START_REF] Bertoin | Lévy processes[END_REF]) gives P t τ - s < τ + t = 1 W (s) , where W is the scale function of our Lévy process. Finally, we have Proposition 3.4.2. Let (X i ) i≥1 be an i.i.d. family of random variables with law given by P (X 1 > s) = 1 W (s) , ∀s ≥ 0. Chapitre 4 On some auxiliary results The purpose of this chapter is to state and prove three preliminary results which are crucial for the two forthcoming chapters. Since they are not only related to splitting trees and that they have their own mathematical interest, we decided to dedicate a chapter to these results. Section 4.1 gives precise asymptotic estimates on the scale function of the contour of a splitting tree. Some weaker results were already stated in [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF][START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF]. This original result can be found in [START_REF] Henry | Clts for general branching processes related to splitting trees[END_REF]. Section 4.2 is devoted to the proof of an extension of the Campbell formula concerning the expectation of the integral of a random process with respect to a random measure when both objects present some local independence properties. These results can also be seen as extension of the well known compensation formula for Poisson functional (see 2.1.4). These two formulas enable us to use a very elegant formalism to model the frequency spectrum of a splitting with neutral Poissonian mutations. This also allows us to obtain formulas for moments for the frequency spectrum in Chapter 6. The results of Section 4.2 can be found in [START_REF] Champagnat | Moments of a splitting tree with neutral poissonian mutations[END_REF]. Finally, Section 4.3 is devoted to an alternative construction of the CPP which plays an important role in the computation of the moments of the frequency spectrum (see Chapter 6). Asymptotic behavior of the scale function of the contour process Before stating and proving the result of this section, we make some reminders from Chapter 2 adapted in the context of our particular Lévy process. First, we recall that the law of a spectrally positive Lévy process (Y t , t ∈ R + ) is uniquely characterized by its Laplace exponent ψ, ψ Y (λ) = log E e -λY 1 , λ ∈ R + , which in our case take the form of (3.1) : ψ Y (λ) = x - (0,∞] 1 -e -rx bP V (dr), λ ∈ R + . We also assume that ψ (0+) < 0, so that the greatest zero of ψ is positive. Let α be this zero. This corresponds to the supercritical case for the splitting tree. In this section, we suppose that Y 0 = 0. For a such Lévy process, the local time at the reflected process (see Chapter 2) (L t , t ∈ R) can be chosen as L t = nt i=0 e i , t ∈ R + , where e i i≥0 is a family of i.i.d. 
exponential random variables with parameter 1, and n t := Card{0 < s ≤ t | Y s = sup u≤s Y u }, is the number of times Y reaches its running maximum up to time t. We recall that the ascending ladder process associated to Y is defined as H t = Y L -1 t , t ∈ R + , where L -1 t , t ∈ R + is the right-inverse of L. It is easily seen that H is a subordinator whose values are the successive new maxima of Y . Conversely, in our case, the process (inf s≤t Y s , t ∈ R + ) can be chosen as descending ladder time process Lt , t ∈ R + . The descending ladder process Ĥ is then defined from L as H was defined from L. The Wiener-Hopf factorization, given in Theorem 2.5.1, allows us to connect the characteristic exponent ψ Y of Y with the characteristic exponents of the bivariate Lévy processes ((L t , H t ), t ∈ R + ) and (( Lt , Ĥt ), t ∈ R + ), respectively denoted by κ and κ. In our particular case, where Y is spectrally negative, we have κ(γ, β) = γ-ψ Y (β) φ Y (γ)-β , γ, β ∈ R + , κ(γ, β) = φ Y (γ) + β, γ, β ∈ R + , where φ Y is the right-inverse of ψ Y . Taking γ = 0 allows us to recover the Laplace exponent ψ H of H from which we obtain the relation, ψ Y (λ) = (λ -φ Y (0)) ψ H (λ). (4.1) We have now all the notation to state and prove the main result of this section. Proposition 4.1.1 (Behavior of W ). In the supercritical case (α > 0), there exists a positive non-increasing càdlàg function F such that W (t) = e αt ψ (α) -e αt F (t), t ≥ 0, and lim t→∞ e αt F (t) = 1 bEV -1 if EV < ∞, 0 otherwise. Proof. Let Y be a spectrally negative Lévy process with Laplace exponent given by ψ (λ) = λ - R + 1 -e -λx e -αx b P V (dx). Asymptotic behavior of the scale function of the contour process It is known that Y has the law of the contour process of the supercritical splitting tree with lifespan measure P V conditioned to extinction (see [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]). In this case the largest root of ψ is zero, meaning that the process Y does not go to infinity and that φ Y (0) = 0. Elementary manipulations on Laplace transform show that the scale function W of Y is related to W by W (t) = e -αt W (t), t ∈ R + . Let H be the ascending ladder subordinator associated to the Lévy process Y . In the case where φ Y (0) = 0, and in this case only, the scale function W can be rewritten as (see [START_REF] Kyprianou | Fluctuations of Lévy processes with applications[END_REF] or use Laplace transform), W (t) = ∞ 0 P H x ≤ t dx. (4.2) In other words, if we denote by U the potential measure of H , W (t) = U [0, t]. Now, it is easily seen from (4.1) that the Laplace exponent ψ H of H takes the form, ψ H (λ) = ψ (α) - [0,∞] 1 -e -λr Υ(dr), where Υ(dr) = (r,∞) e -αv bP V (dv)dr = E e -αV 1 V >r bdr. Moreover, Υ(R + ) = 1 -ψ (α), which means that H is a compound Poisson process with jump rate 1 -ψ (α), jump distribution J(dr) := E[e -αV 1 V >r ] 1-ψ (α) dr, and killed at rate ψ (α). It is well known (or elementary by conditioning on the number of jumps at time x), that the law P H x of H x (x ∈ R + ) is given P H x (dt) = e -ψ (α)x k≥0 e -(1-ψ (α))x ((1 -ψ (α)) x) k k! J k (dt). Some calculations now lead to, U (dx) = k≥0 Υ k (dx). From this point, since Υ is a sub-probability, U (x) := U [0, x] satisfies the following defective renewal equation, U (x) = R + U (x -u)Υ(du) + 1 R + (x). Finally, since R + e αx Υ(dx) = e αx (U (R + ) -U (x)) -→ x→∞ 1 αµ , with µ = R + re αr Υ(dr) = 1 α (bEV -1) , if V is integrable. In the case where V is not integrable, the limit is 0. 
To end the proof, note using relation (4.2) and the fact that H is killed at rate ψ (α) that, W (t) = 1 ψ (α) -U (t, ∞). 4.2 A formula to compute the expectation of an integral with respect to a random measure In this section, we use notation and vocabulary from [START_REF] Daley | An introduction to the theory of point processes[END_REF]. Let X a be Polish space. We recall that a random measure is a measurable mapping from a probability space to the space M b (X ) of all boundedly finite measures on X , i.e. such that each bounded set has finite mass. The purpose of this section is to prove an extension of the Campbell formula (see Proposition 13.1.IV in [START_REF] Daley | An introduction to the theory of point processes[END_REF]), giving the expectation of an integral with respect to a random measure when the integrand has specific "local" independence properties w.r.t. to the measure. For this purpose, we need to introduce the notion of Palm measure related to a random measure N . So let N be a random measure on X with intensity measure µ. Let also (X x , x ∈ X ) be a continuous random process with value in R + . Since this section is devoted to prove relations concerning only the distributions of N and X, we can assume without loss of generality that our random elements X and N are defined (in the canonical way) on the space C (X ) × M b (X ) , where C(X ) denotes the space of continuous function on X . This space is Polish as a product of Polish spaces. We denote by F the corresponding product Borel σ-field. For the random measure N , the corresponding Campbell measure C N is the measure defined on σ (F × B (X )) by extension of the following relation on the semi-ring F × B (X ), C N (F × B) = E [1 F N (B)] , F ∈ F, B ∈ B (X ) . It is straightforward to see that C N is σ-finite and for each F in F the measure C N (F × •) is absolutely continuous with respect to µ. Then, from Radon-Nikodym's theorem, for each F ∈ F, there exist y ∈ X → P y (F ) in L 1 (µ) such that, C N (F × B) = B P y (F ) µ (dy) , uniquely defined up to its values on µ-null sets. Since our probability space is Polish, P can be chosen to be a probabilistic kernel, i.e. for all F in F, y ∈ X → P y (F ) is mesurable, and for all y in X , F ∈ F → P y (F ) is a probability measure. The probability measure P y is called the Palm measure of N at point y. Since X is continuous, it is B (X ) ⊗ F measurable, and it is easily deduced from this point that E X X x N (dx) = X E Px [X x ] µ(dx), (4.3) where E Px denotes the expectation w.r.t. P x . Formula (4.3) is the so-called Campbell formula. We can now state, the main results of this section which are the aforementioned extensions of the above formula. Theorem 4.2.1. Let X be a continuous process from X to R + . Let N be a random measure on X with finite intensity measure µ. Assume that X is locally independent from N , that is, for all x ∈ X , there exists a neighbourhood V x of x such that X x is independent from N (V x ∩ •). Suppose moreover that there exists an integrable random variable Y such that |X x | ≤ Y, ∀x ∈ X , a.s. and E [Y N (X )] < ∞. Then we have E X X x N (dx) = X E [X x ] µ (dx) . (4.4) However, the continuity condition of the preceding theorem is too restrictive for our purposes. We need a more specific result. Theorem 4.2.2. Let X be a process from [0, T ] × X to R + such that X .,x is càdlàg for all x and X s,. is continuous for all s. Let N be a random measure on [0, T ] × X with finite intensity measure µ. 
Assume that, for each s in [0, T ], the family (X s,x , x ∈ X ) is independent from the restriction of N on [0, s], that there exists an integrable random variable Y such that |X s,x | ≤ Y, ∀x ∈ X , ∀s ∈ [0, t], a.s. and that E [Y N (X )] < ∞. Then we have E [0,T ]×X X s,x N (ds, dx) = [0,T ]×X E [X s,x ] µ (ds, dx) . (4.5) Let 1, n denotes the set N ∩ [1, n]. Before going further, we recall that a dissecting system is a sequence {A n,j , j ∈ 1, K n } n≥0 of nested partitions of X , where (K n ) n≥0 is an increasing sequence of integers, such that lim n→∞ max j∈ 1,Kn diam A n,j = 0. In the spirit of the works of Kallenberg on the approximation of simple point processes, the proof of Theorems 4.2.1 is based on the following Theorem which can be find in [START_REF] Kallenberg | Random measures[END_REF] or in [START_REF] Meyer | Probability and potentials[END_REF] (Section WIII.9). Theorem 4.2.3 (Kallenberg [START_REF] Kallenberg | Random measures[END_REF]). Let µ and ν be two finite measures on the Polish space X , such that µ is absolutely continuous with respect to ν. Let f be the Radon-Nikodym derivative of µ w.r.t. ν. Then, for any dissecting system {A n,j , j ∈ 1, K n } n≥0 of X , we have lim n→∞ Kn j=1 µ (A n,j ) ν (A n,j ) 1 s∈A n,j = f (s), for µ-almost all s ∈ X . We can now prove our results. Proof of Theorem 4.2.1. Let {A n,j , j ∈ 1, K n } n≥0 be a dissecting system of X . We denote by A n (x) the element of the partition (A n,j ) 1≤j≤Kn which contain x. Let also T be a denumerable dense subset of X . We use lower and upper approximations of X. More precisely, let for all positive integer k and for all a un X , X (k) x : = inf {X s |s ∈ T ∩ A k (x)} = K k j=1 χ (k) j 1 x∈A j,k , X (k) x : = sup {X s |s ∈ T ∩ A k (x)} = K k j=1 χ (k) j 1 x∈A j,k , with χ (k) j = sup {X s |s ∈ A j,k ∩ T } and χ (k) j = inf {X s |s ∈ A j,k ∩ T } . Note that the supremum and infinimum are taken on T ∩ A k (a) to ensure that X (k) j and X(k) j are measurable, but the set T could be removed by continuity of X. We remark that, for any j, k, the measure E χ (k) j N (•) is absolutely continuous with respect to µ and it follows from Campbell's formula (4.3) that the Radon-Nikodym derivative is E Px χ (k) j . Thus, it follows from Theorem 4.2.3 that, µ-a.e., E Px χ (k) j = lim n→∞ E χ (k) j N (A n (x)) µ (A n (x)) . Then, since X (k) and X (k) are finite sums of such random variables, E Px X (k) x = lim n→∞ E X (k) x N (A n (x)) µ (A n (x)) , and E Px X (k) x = lim n→∞ E X (k) x N (A n (x)) µ (A n (x)) , outside a µ-null set which can be chosen independent of k by countability. Now, since X (k) x ≤ X x ≤ X (k) x , it follows that E Px X (k) x ≤ lim inf n→∞ E [X x N (A n (x))] E [N (A n (x))] ≤ lim sup n→∞ E [X x N (A n (x))] E [N (A n (x))] ≤ E Px X (k) x , µ -a.e.. Now, since X is continuous, X (k) x -→ k→∞ X x and X (k) x -→ k→∞ X x , it follows, from Lebesgue's Theorem, that E Px [X x ] = lim n→∞ E [X x N (A n (x))] E [N (A n (x))] , µ -a.e.. Now, since A n,j is a dissecting system, there exists an integer N such that, for all n > N , A n (x) ⊂ V x . That is, for n large enough, E [X x N (A n (x))] E [N (A n (x))] = EX x . Finally, E Px [X x ] = E [X x ] , µ -a.e.. And the conclusion comes from (4.3). Proof of Theorem 4.2.2. Clearly, we may assume without loss of generality that T = 1. Define, for all integer M , X M s,x = M -1 k=0 X k+1 M ,x 1 s∈[ k M , k+1 M ) . Since X .,x is càdlàg, this sequence of processes converges pointwise to (X s,x , s ∈ [0, 1]) for all ω. 
Then, by Lebesgue's theorem, E [0,1]×X X s,x N (ds, dx) = [0,1]×X E Ps,x [X s,x ] µ(ds, dx), = lim M →∞ M -1 k=0 [0,1] 1 s∈[ k M , k+1 M )×X E Ps,x X k+1 M ,x µ(ds, dx). Clearly, for fixed k, (s, x) → X k+1 M ,x is continuous on [ k M , k+1 M ] × X . Hence, Theorem 4.2.1 can be applied to k M , k+1 M × X → R + , (s, x) → X k+1 M ,x , to conclude the proof. A recursive construction of the CPP The purpose of this section is to given an alternative construction of the CPP. This construction comes from the joint work with N. Champagnat [START_REF] Champagnat | Moments of a splitting tree with neutral poissonian mutations[END_REF]. We recall that a CPP at time t can be seen as sequence (H i ) 0≤i≤Nt-1 where (H i ) i≥1 is an i.i.d. sequence of random variables with distribution given by P (H i > s) = 1 W (s) , stopped at its first value N t greater than t, and H 0 equals to t. We also recall that W is the scale function of the contour process of a splitting tree (see Section 3.2). The motivation of the construction given above comes from the fact that if a mutation (in a model with mutation) occurs on an individuals at some time, the future of the family carrying this mutation does not depend on the whole tree but only on the subtree induced by this individual. This fact can be equivalently studied through the CPP rather than in the tree directly. Here, we consider the CPP at some time t and we introduce a construction of this CPP which underlines this independence. Suppose we are given a sequence P (i) i≥1 of coalescent point processes stopped at time a with scale function W . Then, take an independent CPP P, where the law of the branches corresponds to the excess over a of a branch with scale function W conditioned to be higher than a. As stated in the next proposition, the tree build from the grafting of the P (i) above each branch of P is also a CPP with scale function W stopped at time t (see Figure 4.1). S i := i j=1 N j a , ∀i ≥ 1. Then the random vector H k , 0 ≤ k ≤ S Na-1 defined, for all k ≥ 0, by H k = P (i+1) k-S i if S i < k < S i+1 , for some i ≥ 0, Pi + a if k = S i , for some i ≥ 0, is a CPP with scale function W at time t. Proof. Note that H 0 = P0 + a. To prove the result, it is enough to show that the sequence (H k ) k≥1 is an i.i.d. sequence with the same law as H, given by P (H > s) = 1 W (s) , ∀s > 0. The independence follows from the construction. We details the computation for the joint law of (H l , H k ) and leave the easy extension to the general case to the reader. Let k > l be two positive integers, and let also s 1 , s 2 be two positive real numbers. We denote by S the random set {S i , i ≥ 1}. Hence, P (H k < s 1 , H l < s 2 ) =P (H < s 1 | H < a) P (H < s 2 | H < a) P (l / ∈ S, k / ∈ S) + P a + Ĥ < s 1 P (H < s 2 | H < a) P (l / ∈ S, k ∈ S) + P (H < s 1 | H < a) P a + Ĥ < s 2 P (l ∈ S, k / ∈ S) + P a + Ĥ < s 1 P a + Ĥ < s 2 P (l ∈ S, k ∈ S) , where Ĥ denotes a random variable with the law of the branches of P, i.e. such that P Ĥ > s = W (s) W (s + a) , ∀s > 0. Now, since the random variables S i are sums of geometric random variables, we get P (H k < s 1 , H l < s 2 ) = pP (H < s 1 | H < a) + (1 -p)P a + Ĥ < s 1 × pP (H < s 2 | H < a) + (1 -p)P a + Ĥ < s 2 , with p = P (k ∈ S). Moreover we have, P (H k ≤ s) = i≥1 P (H k ≤ s | k ∈ S i-1 , S i ) P (k ∈ S i-1 , S i ) + P (H k ≤ s | k = S i ) P (k = S i ) =P (H ≤ s | H < a) P   i≥1 {k ∈ S i-1 , S i }   + P (H ≤ s | H > a) P   i≥1 {k = S i }   . 
Since the S i 's are sums of geometric random variables of parameters Ŵ (t -a) -1 , they follow binomial negative distributions with parameters i and Ŵ (t -a) -1 . Hence, since P (S i = k) =    0, if k < i, i -1 k -1 Ŵ (t -a) -i 1 -Ŵ (t -a) -1 k-i , else, some elementary calculus leads to P   i≥1 {k = S i }   = P (H > a) , ∀k ∈ N. which ends the proof. Chapitre 5 On the population counting process (a.k.a. binary homogeneous CMJ processes) Introduction In this chapter, we consider the population counting process N t (giving the number of living individuals at time t) of a splitting tree. This process is a binary homogeneous Crump-Mode-Jagers (CMJ) process. Crump-Mode-Jagers processes are very general branching processes. Such processes are known to have many applications. For instance, in biology, they have recently been used to model spreading diseases (see [START_REF] Olofsson | A Crump-Mode-Jagers branching process model of prion loss in yeast[END_REF][START_REF] Ball | Stochastic monotonicity and continuity properties of functions defined on Crump-Mode-Jagers branching processes, with application to vaccination in epidemic modelling[END_REF]) or for questions in population genetics ( [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF][START_REF] Champagnat | Splitting trees with neutral Poissonian mutations II : Largest and oldest families[END_REF]). Another example of application appears in queuing theory (see [START_REF] Lambert | Scaling limits via excursion theory : interplay between Crump-Mode-Jagers branching processes and processor-sharing queues[END_REF] and [START_REF] Grishechkin | On a relationship between processor-sharing queues and Crump-Mode-Jagers branching processes[END_REF]). In the supercritical case, it is known that the quantity e -αt N t , where α is the Malthusian parameter of the population, converges almost surely. This result has been proved in [START_REF] Richard | Processus de branchement non Markoviens et processus de Lévy[END_REF] using Jagers-Nerman's theory of general branching processes counted by random characteristics. One of our goals in this chapter is to give a new proof of this result using only elementary probabilistic tools and relying on fluctuation analysis of the process. This proof comes from a joint work with Nicolas Champagnat on the frequency spectrum of a splitting tree [START_REF] Champagnat | Moments of a splitting tree with neutral poissonian mutations[END_REF] (see also Chapter 6). The other goal of this chapter is to investigate the behaviour of the error in the aforementioned convergence. This study comes from the preprint [START_REF] Henry | Clts for general branching processes related to splitting trees[END_REF]. Many papers studied the second order behaviour of converging branching processes. Early works investigate the Galton-Watson case. In [START_REF] Heyde | A rate of convergence result for the super-critical Galton-Watson process[END_REF] and [START_REF] Heyde | Some central limit analogues for supercritical Galton-Watson processes[END_REF], Heyde obtains rates of convergence and gets central limit theorems in the case of supercritical Galton-Watson when the limit has finite variance. Later, in [START_REF] Asmussen | Convergence rates for branching processes[END_REF], Asmussen obtained the polynomial convergence rates in the general case. In our model, the particular case when the individuals never die (i.e. 
P V = δ ∞ , implying that the population counting process is a Markovian Yule process) has already been studied. More precisely, Athreya showed in [START_REF] Balasundaram | Limit theorems for multitype continuous time Markov branching processes. II. The case of an arbitrary linear functional[END_REF], for a Markovian branching process Z with appropriate conditions, and such that e -αt Z t converges to some random variable W a.s., that the error Z t -e αt W √ Z t , converges in distribution to some Gaussian random variable. In the case of general CMJ processes, there was no similar result except a very recent work of Iksanov and Meiners [START_REF] Iksanov | Rate of convergence in the law of large numbers for supercritical general multi-type branching processes[END_REF] giving sufficient conditions for the error terms in the convergence of supercritical general branching processes to be o(t δ ) in a very general background (arbitrary birth point process). Although our model is more specific, we give more precise results. Indeed, we give an exact rate of convergence, e α 2 t , and characterize the limit. Moreover, we believe that our method can also apply to other general branching processes counted by random characteristics, as soon as the birth point process is Poissonian. Section 5.2 is devoted to the statement of the law of large numbers for N t and the associated central limit theorem. Section 5.3 is devoted to the new proof of the law of large numbers. Finally, the central limit theorem is proved in Section 5.4. Statement of the limit theorems We recall that we consider a general branching population where individuals live and reproduce independently. The lifetimes of the individuals are i.i.d. random variables distributed as a random variable V with law P V . Moreover, individuals give birth at a Poissonian rate b. We refer the reader to Chapter 3 for the details about the model. Let us also recall that the Laplace distribution with zero mean and variance σ 2 is the probability distribution whose characteristic function is given by λ ∈ R → 1 1 + 1 2 σ 2 λ 2 . It particular, it has a density given by x ∈ R → 1 2σ e -|x| σ . We denote this law by L 0, σ 2 . We also recall that, if G is a Gaussian random variable with zero mean and variance σ 2 and E is an exponential random variable with parameter 1 independent of G, then √ EG is Laplace L 0, σ 2 . In the sequel of this chapter (as well as in Chapter 6) we denote by P t the probability measure P(• | N t > 0) . Similarly, we denotes by P ∞ the measure P(• | Non-extinction). We can now state the main results of the chapter. First, let us recall the law of large number for N t . Theorem 5.2.1. In the supercritical case, that is bE [V ] > 1, there exists a random variable E, such that e -αt N t → t→∞ E ψ (α) , a.s. and in L 2 . In addition, under P ∞ , E is exponentially distributed with parameter one. Section 5.3 is devoted to a new proof of this theorem. In this chapter, we also prove the following theorem on the second order properties of the above convergence. Theorem 5.2.2. In the supercritical case, we have, under P ∞ , e -α 2 t ψ (α)N t -e αt E (d) -→ t→∞ L 0, 2 -ψ (α) . The proof of this theorem is the subject of Section 5.4. Note that, according to (3.2), we have ψ (x) = 1 - R + xe -xv bP V (dv), ∀x ∈ R + , (5.1) which implies that 2 -ψ (α) = 1 + R + ve -αv bP V (dv) > 0. Note that one can also see using (3.2) and the fact that we are in the supercritical case that R + e -αv P V (dv) = 1 - α b . 
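To make the two statements above concrete, one can specialize to exponential lifetimes, for which (N_t) is a Markovian linear birth-death process and the quantities α, ψ'(α) and W have explicit expressions. The sketch below is our own illustration (the parameter choices and function names are not part of the thesis): for P_V = Exp(d) one can check that α = b - d, ψ'(α) = α/b and W(t) = (b e^{αt} - d)/α, so that (3.4) reads E[N_t | N_t > 0] = W(t) and ψ'(α) e^{-αt} N_t should be close in law to an Exp(1) variable on the survival event for large t, as predicted by Theorem 5.2.1.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustration of Theorem 5.2.1 in the Markovian special case (our choice): with
# P_V = Exp(d) the process (N_t) is a linear birth-death process with individual
# birth rate b and death rate d, and
#   alpha = b - d,   psi'(alpha) = alpha / b,   W(t) = (b*exp(alpha*t) - d) / alpha.
b, d, t_end = 2.0, 1.0, 5.0
alpha, dpsi = b - d, (b - d) / b
W = lambda s: (b * np.exp(alpha * s) - d) / alpha

def N_at(t_end):
    """Gillespie simulation of the birth-death process started from one individual."""
    n, t = 1, 0.0
    while n > 0:
        t += rng.exponential(1.0 / ((b + d) * n))
        if t > t_end:
            break
        n += 1 if rng.random() < b / (b + d) else -1
    return n

samples = np.array([N_at(t_end) for _ in range(2000)])
alive = samples[samples > 0]
print("E[N_t | N_t > 0]   MC:", round(alive.mean(), 1), "   W(t):", round(W(t_end), 1))
scaled = dpsi * np.exp(-alpha * t_end) * alive
print("mean of psi'(a) e^{-a t} N_t on survival   MC:", round(scaled.mean(), 3),
      "  (Exp(1) mean = 1)")
```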
(5.2) An alternative proof of the law of large numbers The purpose of this section is to show the law of large numbers for N t . We recall once again that we are in the supercritical case (α > 0). This last hypothesis implies that W (t) ∼ e αt ψ (α) . The goal of this section is to prove the almost sure convergence of the population counting process. We first show that the convergence holds in probability, using the convergence of the process which counts at time t the number N ∞ t of individuals having infinite descent. More formally, recalling that a splitting tree is a subset of ∪ k≥0 N k × R + (see Section 3.1), an individual (u, t) in the tree T is said to have infinite descent at time t if for any T > t there exist ũ in n≥0 N n such that (uũ, T ) belong to T. Finally, to obtain the almost sure convergence, we show in Theorem 5.2.1 that N t can not fluctuate faster than a Yule process. Proposition 5.3.1. Let (N ∞ t , t ∈ R + ) be the number of alive individuals at time t having infinite descent. Then, under P ∞ , N ∞ is a Yule process with parameter α. Proof. Let T, t ∈ R + . Let, for T < t, N (T ) t be the number of individuals at time t who have alive children at time T . We extend this notation to t > T by setting N (T ) t = 0 in this case. Fix S a positive real number, we consider the quantity, sup t≤S N (T ) t -N ∞ t . There exists a (random) finite time T S such that N (T S ) S = N ∞ S . This means that the progeny of all the individuals alive at time S who have finite descent are extinct at time T S . Moreover, N (T S ) t = N ∞ t for all t < S, since, otherwise, there would exist an individual at time t who has alive descent at time T S but which does not have an infinite descent. Hence, for all T > T S , sup t≤S N (T ) t -N ∞ t = 0. In particular, as T → ∞, N (T ) converges to N ∞ a.s. for the Skorokhod topology of D [0, ∞) and N ∞ is a.s. càdlàg. Now, it remains to derive from N (T ) the law of the process N ∞ . A first remark is that N (T ) t is the number of alive individuals in the upper CPP in the construction given in Chapter 4, Section 4.3. Hence, applying Proposition 4.3.1, with a = T -t, gives that N (T ) t is the number of individuals in the CPP P (according to the notation of Proposition 4.3.1). Hence, it is geometrically distributed with parameter W (T ) W (T -t) . Now, we recursively apply this property on a sequence 0 < s 1 < s 2 < • • • < s n < T . By a recursive use of Proposition 4.3.1, we see that, under P T , the process N (T ) s l , 1 ≤ l ≤ n is a time inhomogeneous Markov chain with geometric initial distribution with parameter P t (H > T | H > T -s 1 ) , and the law of N (T ) s l given N (T ) s l-1 is the law of a sum of N (T ) s l-1 i.i.d. geometric random variable with parameter p l = P (H > T -s l-1 | H > T -s l ) , i.e. a binomial negative with parameters N (T ) s l-1 and 1 -p l . Hence, P t N (T ) s 1 = m 1 , . . . , N (T ) sn = m n = p 1 (1 -p 1 ) m 1 -1 n i=2 m i + m i-1 -1 m i p m i-1 i (1 -p i ) m i -1 . Moreover, we have, by Lemma 3.3.3, p 1 = W (T -s 1 ) W (T ) -→ t→∞ e -αs 1 , and p l = W (T -s l ) W (T -s l-1 ) -→ t→∞ e -α(s l -s l-1 ) . This leads to, P t N (T ) s 1 = m 1 , . . . , N (T ) sn = m n -→ t→∞ e -αs 1 1 -e -αs 1 m 1 -1 n i=2 m i + m i-1 -1 m i e -αm i-1 (s l -s l-1 ) 1 -e -α(s l -s l-1 ) m i -1 . Since the right hand side term corresponds to the finite dimensional distribution of a Yule process with parameter α, this concludes the proof. Because N ∞ is a Yule process, e -αt N ∞ t converges a.s. 
(under P ∞ ) to an exponential random variable of parameter 1, denoted E hereafter, when t goes to infinity (see for instance [START_REF] Athreya | Branching processes[END_REF]). Remark 5.3.2. Let N be a integer valued random variable. In the sequel we say that a random vector with random size (X i ) 1≤i≤N form an i.i.d. family of random variables independent of N , if and only if (X 1 , . . . , X N ) d = X1 , . . . , XN , where Xi i≥1 is a sequence of i.i.d. random variables distributed as X 1 independent of N . We are now able to prove the law of large numbers for N t . t O 1 O 2 O 3 O 4 O 5 Figure 5.1 -Reflected (below t) contour process with overshoot over t. Proof of Theorem 5.2.1. We first look at the quantity, E t e -2αt N ∞ t -ψ (α)N t 2 . First note that N ∞ t can always be written as a sum of Bernoulli trials, N ∞ t = Nt i=1 B (t) i , (5.3) corresponding to the fact that the ith individual has infinite descent or not. Now, by construction of the splitting tree, the descent of each individual alive at time t can be seen as a (sub-)splitting tree where the lifetime of the root follows a particular distribution (that is the law of the residual lifetime of the corresponding individual). We denote by O i the residual lifetime of the ith individual which correspond to the ith overshoot of the contour process above t (see Figure 5.1). In particular, these subtrees are dependent only through the residual lifetimes (O i ) 1≤i≤Nt of the individuals. Hence, the random variables B (t) i i≥2 are independent conditionally on the family (O i ) 1≤i≤Nt . In addition, the family (O i ) 1≤i≤Nt has independence properties under P t . This is the subject of the following lemma which is proved at the end of this section. Lemma 5.3.3. Under P t , the family (O i , i ∈ 1, N t ) forms a family of independent random variables, independent of N t , and, except O 1 , having the same distribution. The proof of this lemma is postponed at the end of this section. Hence, it follows that, under P t , (Proposition 5.3.1) that E ∞ [N ∞ t ] = e αt . Now, since E ∞ [N ∞ t ] = E t [N ∞ t ] P (N t > 0) P (Non-ex) , we have e αt = (p t (W (t) -1) + pt ) P (N t > 0) P (Non-ex) . We recall from Section 3.3 (see also [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]) that, P (Non-ex) = E e -αV , and P (N t > 0) = E W (t -V ) W (t) , where V is a random variable with law P V (i.e. the lifetime of a typical individual). It then follows, from Lesbegue's Theorem that, P (N t > 0) P (Non-ex) -1 = O e -βt , (5.4) with β = α ∧ γ where the constant γ is given by Lemma 3.3.3. Hence, p t e -αt W (t) = 1 + O e -βt . (5.5) Now, using (5. 3), we have E t [N ∞ t N t ] = E t [N t (N t -1)] p t + pt E t N t = 2W (t) 2 p t + O e αt , (5.6) where the second equality comes from the fact that N t is geometrically distributed with parameter W (t) -1 under P t . Recalling also that N ∞ t is geometrically distributed with parameter e -αt under P ∞ , it follows that E t N ∞ t -ψ (α)N t 2 = 2e 2αt P (Non-ex) P (N t > 0) -4ψ (α)W (t) 2 p t + 2ψ (α) 2 W (t) 2 + O e αt . Hence, it follows from (5.5), (5.6), (5.4) and Lemma 3.3.3, that E t e -2αt N ∞ t -ψ (α)N t 2 = O e -βt . (5.7) Let us define now, for all integer n, t n = 2 β log n. Then, by the previous estimation, it follows from Borel-Cantelli lemma and a Markov-type inequality that, lim n→∞ e -αtn N tn = ψ (α)E, a.s., (5.8) on the survival event. From this point, we need to control the fluctuation of N between the times (t n ) n≥1 . 
The births can be controlled by comparisons with a Yule process, but the deaths are harder to control. For this, we use that, by (5.8) Since Markov inequalities are not precise enough to go further, we need to compute exactly the probability, P tn Y t n+1 -tn -Y 0 > e αtn . From the branching and Markov properties, Y t n+1 -tn -Y 0 is a sum of a geometric number, with parameter W (t n ) -1 , of independent and i.i.d. geometric random variables supported on Z + with parameter e -b(t n+1 -tn) . Hence, Y t n+1 -tn -Y 0 is geometric supported on Z + with parameter e -b(t n+1 -tn) W (t n ) 1 -e -b(t n+1 -tn) 1 -1 W (tn) , and, we have P tn Y t n+1 -tn -Y 0 ≥ k = 1 - 1 W (t n ) e b(t n+1 -tn) -1 + 1 k . Using W (t n ) = O e αtn = O n 2α β , we have W (t n ) e b(t n+1 -tn) -1 = O n α 2β -1 . Finally, P tn Y t n+1 -tn -Y 0 > e αtn ≤ 1 - 1 1 + Cn α 2β -1 n α 2β , for some positive real constant C. Borel-Cantelli's Lemma then entails lim n→∞ sup s∈[tn,t n+1 ] e -αt n N tn -e -αs N s = 0, almost surely, which ends the proof of the almost sure convergence. Now, for the convergence in L 2 , we have that E t ψ (α)e -αt N t -E 2 ≤ 2E t e -2αt N ∞ t -ψ (α)N t 2 + 2E t e -αt N ∞ t -E 2 . The first term in the right hand side of the last inequality converges to 0 according to (5.7). For the second term, since N ∞ t and E vanish on the extinction event, we have lim t→∞ E t e -αt N ∞ t -E 2 = lim t→∞ E ∞ e -αt N ∞ t -E 2 . The conclusion comes from the fact that e -αt N ∞ t , t ∈ R + is a martingale uniformly bounded in L 2 . In the preceding proof, we postponed the demonstration of the independence of the residual lifetimes of the alive individuals at time t. We give its proof now, which is quite similar to the Proposition 5.5 of [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]. Proof of Lemma 5.3.3. Let Y (i) 0≤i≤Nt be a family of independent Lévy processes with Laplace exponent ψ(x) = x - (0,∞] 1 -e -rx Λ(dr), x ∈ R + , conditioned to hit (t, ∞) before 0, for i ∈ {0, . . . , N t -1}, and conditioned to hit 0 before (t, ∞) for i = N t . We also assume that, Y (0) 0 = t ∧ V, and Y (i) 0 = t, i ∈ {1, . . . , N t } . Now, denote by τ i the exit time of the ith process of (0, t) and T n = n-1 i=0 τ i , n ∈ {0, . . . , N t + 1} . Then, the process defined for all s ∈ [0, T Nt ] by Y s = Nt i=0 Y (i) s-T i 1 T i ≤s<T i+1 , has the law of the contour process of a splitting tree cut under t. Moreover, the quantity Y (i) τ i -Y (i) τ i - is the lifetime of the ith alive individual at time t. The family of residual lifetime (O i ) 1≤i≤Nt has then the same distribution as the sequence of the overshoots of the contour above u. Thus, the independence of the Lévy processes Y (i) ensures us that (O i , i ∈ 2, N t ) is an i.i.d family of random variables, and that O 1 is independent of the other O i 's. Proof of Theorem 5.2.2 In this section, we prove the central limit theorem associated to the law of large numbers for N t . The first step of the method is to obtain informations on the moments of the error in the a.s. convergence of the process. Using the renewal structure of the tree and formulae on the expectation of a random integral, we are able to express the moments of the error in terms of the scale function of a Lévy process. This process is known to be the contour process of the splitting tree as constructed in [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]. 
The asymptotic behaviours of the moments are then precisely studied thanks to the precise asymptotic results obtained on the scale function W in Proposition 4.1.1. The second ingredient is a decomposition of the splitting tree into subtrees whose laws are characterized by the overshoots of the contour process over a fixed level. Finally, the error term can be decomposed as the sum of the error made in each subtrees. Our controls on the moments ensure that the error in each subtree decreases fast enough compared to the growth of the population (see Section 5.4.2). The first section is devoted to the introduction of a useful lemma used in this work on the expectation of a random integral. Section 5.4.2 details the main lines of the method. Theorem 5.2.2 is finally proved in Section 5.4.3. Preliminaries : A lemma on the expectation of a random integral with respect to a Poisson random measure The purpose of this part is to state and prove a lemma concerning the expectation of a random integral. Lemma 5.4.1. Let ξ be a Poisson random measure on R + with intensity θλ(da) where θ is a positive real number and λ the Lebesgue measure. Let also X (i) u , u ∈ R + i≥1 be an i.i.d. sequence of non-negative càdlàg random processes independent of ξ. Let also Y be a random variable independent of ξ and of the family X (i) u , u ∈ R + i≥1 . If ξ u denotes ξ ([0, u]), then, for any t ≥ 0, E [0,t] X (ξu) u 1 Y >u ξ(du) = t 0 P (Y > u) θEX u du, where (X u , u ∈ R + ) = X (1) u , u ∈ R + . In addition, for any t ≤ s, we have E [0,t] X (ξv) v 1 Y >v ξ(dv) [0,s] X (ξu) u 1 Y >u ξ(du) = t 0 θE X 2 u P (Y > u) du + t 0 s 0 θ 2 EX u EX v P (Y > u, Y > v) dudv. Proof. Since the proof of the two formulas lies on the same ideas, we only give the proof of the second formula. First of all, let f : R 2 + → R + be a positive measurable deterministic function. We recall that, for a Poisson random measure, the measures of two disjoint measurable sets are independent random variables. That is, for A, B in the Borel σ-field of R + , ξ(A ∩ B c ) is independent from ξ(B), which leads to E [ξ(A)ξ(B)] = Eξ(A)Eξ(B) + Varξ(A ∩ B). Using the approximation of f by an increasing sequence of simple function, as in the construction of Lebesgue's integral, it follows from the Fubini-Tonelli theorem and the monotone convergence theorem, that E [0,t]×[0,s] f (u, v) ξ(du)ξ(dv) = t 0 θf (u, u) du + t 0 s 0 θ 2 f (u, v) dudv. Since the desired relation only depends on the law of our random objects, we can assume without loss of generality that ξ is defined on a probability space (Ω, F, P) and the family X (i) s , s ∈ R + i≥1 is defined on an other probability space Ω, F, P . Then, using a slight abuse of notation, we define ξ on Ω × Ω by ξ (ω,ω) = ξ ω , and similarly for the family X. Then, by Fubini-Tonneli Theorem, with the notation ξ v ω = ξ ω ([0, v]), E [0,t]×[0,s] X (ξv) v X (ξu) u ξ(du)ξ(dv) = Ω× Ω [0,t]×[0,s] X (ξ v ω ) v (ω)X (ξ u ω ) u (ω) ξ ω (du)ξ ω (dv) P ⊗ P (dω, dω) = Ω [0,t]×[0,s] Ω X (ξ v ω ) v (ω)X (ξ u ω ) u (ω) P (dω) ξ ω (du)ξ ω (dv) P(dω). But since the X (i) are identically distributed and ξ is a simple measure (purely atomic with mass one for each atom) we deduce that, if u and v are two atoms of ξ ω , ξ v ω = ξ u ω if and only if u = v, which implies that Ω X (ξ v ω ) v (ω)X (ξ u ω ) u (ω) P (dω) = EX u EX v , u = v, EX 2 u , u = v, ξ ω -a.e. The result follows readily, and the case with the indicator function of Y is left to the reader. Strategy of proof Now, we detail the main lines of the proof. 
Let (G n ) n≥1 be a sequence of geometric random variables with respective parameter 1 n , and (X i ) i≥1 a L 2 family of i.i.d. random variables with zero mean independent of (G n ) n≥1 . It is easy to show that the characteristic function of Z n := 1 √ n Gn i=1 X i , (5.10) is given by Ee iλZn = 1 + o n (1) 1 + λ 2 EX 2 1 + o n (1) , (5.11) from which we deduce that Z n converges in distribution to L(0, EX 2 1 ). If we suppose that the population counting process N is a Yule Markov process, it clearly follows from the branching and Markov properties that, for s < t, N t = Ns i=1 N i t-s , (5.12) where the family N i t-s i≥1 is an i.i.d. sequence of random variables distributed as N t-s and independent of N s . Moreover, since N s is geometrically distributed with parameter e -αs , taking the renormalized limit leads to, lim t→∞ e -αt N t =: E = e -αs Ns i=1 E i , where E 1 , . . . , E Ns is an i.i.d. family of exponential random variables with parameter one, and independent of N s . Hence, N t -e αt E = Ns i=1 N i t-s -e α(t-s) E i , is a geometric sum of centered i.i.d. random variables. This remark and (5.10) suggest the desired CLT in the Yule case. However, in the general case, we need to overcome some important difficulties. First of all, equation (5.12) is wrong in general. Nevertheless, a much weaker version of (5.12) can be obtained in the general case. To make this clear, if u < t are two positive real numbers, then the number of alive individual at time t is the sum of the contributions of each subtrees T (O i ) induced by each alive individuals at time u (see Figure 5.2). Provided there are individuals alive at time u, we denote by (O i ) 1≤i≤Nu the residual lifetimes (see Figure 5.2) of the alive individuals at time u indexed using that the ith individual is the ith individual visited by the contour process. Hence, N t = Nu i=1 N i t-u (O i ) , (5.13) where N i t-u (O i ) i≤Nu denote the population counting processes of the subtrees T(O i ) induced by each individual. The notation refers to the fact that each subtree has the law of a standard splitting tree with the only difference that the lifelength of the root is given by O i . More precisly, we define, for all i ≥ 1 and o ∈ R + , N i t-u (o) the population counting process of the splitting tree constructed from the same random objects as the ith subtree of Figure 5.2, where the life duration of the first individual is equal to o. Hence, from the independence properties between each individuals, N i t-u (o) , t ≥ u, o ≥ 0 i≥1 is a family of independent processes, independent of (O i ) 1≤i≤Nu , and N i t-u (o), t ≥ u has the law of the population counting process of a splitting tree but where the lifespan of the ancestor is o. Note that the lifespans of the other individuals are still distributed as V . From the discussion above, it follows that the family of processes N i t-u (O i ) , t ≥ u 1≤i≤Nu are dependent only through the residual lifetimes (O i ) 1≤i≤Nu and the law of (N t (O i ) , t ∈ R + ) under P u is the law of standard population counting process of splitting tree where the lifespan of the root is distributed as O i under P u . Unfortunately, the computation of (5.11) does not apply to (5.13). This issue is solved by the following lemma which is an improvement of Lemma 5.3.3 and whose proof is similar to one of Proposition 5.5 of [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF]. Lemma 5.4.2. 
Let u in R + , we denote by O i for i an integer between 1 and N u the residual lifetime of the ith individuals alive at time u. Then under P u , the family (O i , i ∈ 1, N u ) form a family of independent random variables, independent of N u , and, expect O 1 , having the same distribution, given by, for 2 ≤ i ≤ N t , P u (O i ∈ dx) = R + W (u -y) W (u) -1 bP (V -y ∈ dx) dy. (5.14) Moreover, it follows that the family (N s (O i ), s ∈ R + ) 1≤i≤Nu is an independent family of process, i.i.d. for i ≥ 2, and independent of N u . Proof. Let Y (i) 0≤i≤Nu a family of independent Lévy processes with Laplace exponent conditioned to hit (u, ∞) before hitting 0, for i ∈ {0, . . . , N u -1}, and conditioned to hit 0 first for i = N u . We also assume that, Y ψ(x) = x - (0,∞] 1 -e -rx Λ(dr), x ∈ R + , t O 1 O 2 O 3 O 4 O 5 = u ∧ V, (0) 0 and Y (i) 0 = u, i ∈ {1, . . . , N u } . Now, denote by τ i the exit time of the ith process out of (0, u) and T n = n-1 i=0 τ i , n ∈ {0, . . . , N u + 1} . Then, the process defined, for all s, by Y s = Nu i=0 Y (i) s-T i 1 T i ≤s<T i+1 , has the law of the contour process of a splitting tree cut above u. Moreover, the quantity Y τ i -Y τ i - is the lifetime of the ith alive individual at time t. The family of residual lifetimes (O i ) 1≤i≤Nu has then the same distribution as the sequence of the overshoots of the Y above u. Thus, the Markov property ensures us that (O i , i ∈ 2, N u ) is an i.i.d. family of random variables. The Markov property also ensures that O 1 is independent of the other O i 's. It remains to derive the law of O i . Let Y be a Lévy process with Laplace exponent ψ. We denote by τ + u the time of first passage of -Y above u and τ - 0 the time of first passage of -Y below 0. Then, for all i ≥ 2, P u (O i ∈ dx) = P 0 -Y τ - 0 ∈ dx | τ - 0 < τ + u . On the other hand, Theorem 2.6.1 gives for any measurable subsets A ⊂ [0, u], B ⊂ (0, -∞), P 0 -Y τ - 0 ∈ B, -Y τ - 0 -∈ A = A P -V (B -y) W (u -y) W (u) dy. The result follows easily from P τ - 0 < τ + u = 1 - 1 W (u) . Remark 5.4.3. It is important to note that the law of the residual lifetimes of the individuals considered above depends on the particular time u we choose to cut the tree. That is why, in the sequel, we may denote O (u) i for O i when we want to underline the dependence in time of the law of the residual lifetimes. In addition, as suggested by (5.11), we need to compute the expected quadratic error in the convergence of N t , E ψ (α)N t -e αt E 2 , which implies to compute EN t E. Although this moment is easy to obtain in the Markovian case, the method does not extend easily to the general case. One idea is to characterize it as a solution of a renewal equation in the spirit of the theory of general CMJ processes. To make this, we use the renewal structure of a splitting tree : the splitting trees can be constructed (see Chapter 3) by grafting i.i.d. splitting tree on a branch (a tree with a single individual) of length V ∅ distributed as V . Therefore, there exists a family N (i) t , t ∈ R + i≥1 of i.i.d. population counting processes with the same law as (N t , t ∈ R + ), and a Poisson random measure ξ on R + with intensity b da such that N t = [0,t] N (ξu) t-u 1 V ∅ >u ξ(du) + 1 V ∅ >t , a.s., (5.15) where ξ u = ξ ([0, u]). Another difficulty comes from the fact that, unlike (5.10), the quantities summed in (5.13) are time-dependent, which requires a careful analysis of the asymptotic behaviour of their moments. 
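As a side remark, the decomposition (5.15) also gives a direct recursive way to simulate N t , which can be used to check numerically the moment formulas of the next section. The following minimal Python sketch is an illustration only: the birth rate b, the horizon t and the exponential sampler for V are arbitrary choices made for this illustration, and the sanity check mentioned in the last comment uses E[N t ] = W (t)P (N t > 0) together with the explicit Markovian formula W (t) = (b e^{alpha t} - d)/alpha, which is an assumption of this special case and not a statement of the text.

import numpy as np

def simulate_N(t, b, sample_V, rng):
    # number of individuals alive at time t in a splitting tree rooted at a single newborn,
    # following N_t = int_[0,t] N^(xi_u)_(t-u) 1_{V>u} xi(du) + 1_{V>t}, xi Poissonian with rate b
    V = sample_V(rng)
    alive = 1 if V > t else 0
    horizon = min(V, t)
    n_births = rng.poisson(b * horizon)              # atoms of xi falling in (0, min(V, t))
    for u in rng.uniform(0.0, horizon, n_births):
        alive += simulate_N(t - u, b, sample_V, rng) # i.i.d. subtree grafted at time u
    return alive

rng = np.random.default_rng(1)
b, d, t = 1.0, 0.3, 6.0                              # supercritical: alpha = b - d > 0
sample_V = lambda rng: rng.exponential(1.0 / d)      # exponential lifetimes (illustration only)
runs = [simulate_N(t, b, sample_V, rng) for _ in range(2000)]
print(np.mean(runs))                                 # to be compared with W(t) P(N_t > 0)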
The calculus and the asymptotic analysis of these moments is made in Section 5.4.3 : In Lemma 5.4.4, we compute EN t E, and then with Lemmas 5.4.5 and 5.4.7, we study the asymptotic behaviour of the error of order 2 and 3 respectively. The second part of Section 5.4.3 is devoted to the study of the same questions for the population counting processes of the subtrees described in Figure 5.2 (when the lifetime of the root is not distributed as V ). Finally, Section 5.4.4 is devoted to the proof of Theorem 5.2.2. One of the difficulties in studying the behaviour of the moments is to get better estimates on the scale function W than those of Lemma 3.3.3. This is the subject of the next section. Preliminary moments estimates We begin the proof of Theorem 5.2.2 by computing moments, and analysing their asymptotic behaviours. A first part is devoted to the case of a splitting tree where the lifetime of the root is distributed as V whereas a second part study the case where the lifespan of the root is arbitrary (for instance, as the subtrees described by Figure 5.2). This section is devoted to the calculus of the expectation of N t -e αt E 2 . We start with the simple case where the initial individual has life-length distributed as V . Secondly, we study the asymptotic behavior of these moments. In the second part of this section, we prove similar result for arbitrary initial distributions. The expectations above are given with respect to P, however since N t and E vanish on the event {N t = 0}, we can easily recover the results with respect to P t by using (3.9) and (3.5) (see Corollary 5.4.6). Case V ∅ d = V We start with the computation of EN t E. Lemma 5.4.4 (Join moment of E and N t ). The function t → E [N t E] is the unique solution bounded on finite intervals of the renewal equation, f (t) = R + f (t -u)be -αu P (V > u) du + αbE [N • ] R + e -αv P (V > •, V > v) dv (t) + α R + e -αv P (V > t, V > v) dv, (5.16) and its solution is given by 1 + α b -e -αt W (t) -1 -e -αt W P V (t). Proof. As explained in Section 5.4.2, N t = [0,t] N (ξu) t-u 1 V ∅ >u ξ(du) + 1 V ∅ >t , where ξ a Poisson point process with rate b on the real line, N (i) i≥1 is a family of independent CMJ processes with the same law as N and V ∅ is the lifespan of the root. Moreover, the three objects N (u) , ξ and V ∅ are independent. It follows that, for s > t N t N s = [0,t]×[0,s] N (ξu) t-u N (ξv) s-v 1 V ∅ >u 1 V ∅ >v ξ(du)ξ(dv) + [0,t] N (ξu) t-u 1 V ∅ >u ξ(du)1 V ∅ >s + [0,s] N (ξu) s-u 1 V ∅ >u ξ(du)1 V ∅ >t + 1 V ∅ >t 1 V ∅ >s , and, using Lemma 5.4.1, EN t N s = [0,t] bE [N t-u N s-u ] P (V > u) du + [0,t]×[0,s] b 2 E [N t-u ] E [N s-v ] P (V > u, V > v) du dv + P (V > s) [0,t] bE [N t-u ] du + [0,s] bE [N s-u ] P (V > u, V > t) du + P (V > s) . Then, thanks to the estimate W (t) = O e αt (see Lemma 3.3.3 or 4.1.1) and the L 1 convergence of W (s) -1 N t N s to N t E as s goes to infinity (since, by Theorem 5.2.1, Ns W (s) converge in L 2 and using Cauchy-Schwarz inequality), we can exchange limit and integrals to obtain, lim s→∞ EN t N s W (s) = EN t E :=f (t) = [0,t] E [N t-u E] e -αu P (V > u) b du =:f G(t) + [0,t]×[0,∞) αbE [N t-u ] e -αv P (V > u, V > v) du dv =:ζ 1 (t) + [0,∞] αe -αv P (V > v, V > t) dv =:ζ 2 (t) , where we used that lim t→∞ W (t) -1 EN t = α b . Now, we need to solve the last equation to obtain the last part of the lemma. To do that, we compute the Laplace transform of each part of the equation. 
Note that, since W (t) = O e αt , it is easy to see that the Laplace transform of each term of (5.16) is well-defined as soon as λ > α (using Cauchy-Schwarz inequality for the first term). Now, using (3.7), T L e α• G(λ) = b R + e -λt P (V > t) dt = b R + e -λt (t,∞) P V (dv) dt = 1 λ R + 1 -e -λv bP V (dv) = 1 - ψ(λ) λ . (5.17) So, T L G(λ) = 1 - ψ(λ + α) λ + α . Then, T L ζ 1 (λ) = αT L EN . (λ)T L b R + e -αv P (V > •, V > v) dv (λ) = λ ψ(λ) -1 T L α R + e -αv P (V > •, V > v) dv (λ) =Lζ 2 (λ) . and, using (5.17), we get T L ζ 2 (λ) = α R + e -λt R + e -αv P (V > t, V > v) dv dt = 1 b ψ (λ + α) λ - ψ(λ) λ . Finally, we obtain, T L f (λ) = T L f (λ) 1 - ψ(λ + α) λ + α + λ ψ(λ) -1 1 b ψ (λ + α) λ - ψ(λ) λ + 1 b ψ (λ + α) λ - ψ(λ) λ . Hence, T L f (λ) = λ b 1 ψ(λ) - 1 ψ(λ + α) . Finally, using (3.2) and bT L (W P V ) (λ) = (ψ(λ) -b + λ) ψ(λ) , allows to inverse the Laplace transform of f and get the result. Lemma 5.4.4 allows us to compute the expected quadratic error. Lemma 5.4.5 (Quadratic error in the convergence of N t ). Let E the a.s. limit of ψ (α)e -αt N t . Then, lim t→∞ e -αt E ψ (α)N t -e αt E 2 = α b 2 -ψ (α) . Proof. Let µ := lim t→∞ e αt F (t), where F is defined in Proposition 4.1.1. We have, using Proposition 4.1.1 and (5.2), [0,t] W (t -u)P V (du) = e αt ψ (α) 1 - α b -µ - e αt ψ (α) (t,∞) e -αu P V (du) + [0,t] µ -e α(t-u) F (t -u) P V (du) = e αt ψ (α) 1 - α b -µ + o(1). Hence, the expression of EN t E given by Lemma 5.4.4 can be rewritten, thanks to Lemmas 4.1.1, as EN t E = 2αe αt bψ (α) - α b 1 ψ (α) + µ + o(1), (5.18) Using (3.3) and (3.5) in conjunction with Proposition 4.1.1, we also have e -αt EN 2 t = 2 αe αt bψ (α) 2 - 2αµ bψ (α) - α bψ (α) + o(1). (5.19) Hence, it finally follows from (5.18) and (5. [START_REF] David | Order statistics[END_REF]) that e -αt E ψ (α)N t -e αt E 2 = ψ (α) 2 e -αt EN 2 t -2ψ (α)EN t E + 2αe αt b = -2 αµ b ψ (α) - αψ (α) b + 2 α b 1 + ψ (α)µ + o(1) = α b 2 -ψ (α) + o(1). It is worth noting that, using (3.5) and the method above, we have the following result. Corollary 5.4.6. We have 1 P (N t > 0) = b α - bµψ (α) α e -αt + o(e -αt ), (5.20) which leads to E t N t E = 2e αt ψ (α) - 1 ψ (α) -3µ + o(1). (5.21) Our last estimate is the boundedness of the third moments. Lemma 5.4.7 (Boundedness of the third moment). The third moment of the error is asymptotically bounded, that is E e -α 2 t ψ (α)N t -e αt E 3 = O (1) . Proof. We define for all t ≥ 0, N ∞ t as the number of individuals alive at time t which have an infinite descent. According to Proposition 5.3.1, N ∞ is a Yule process under P ∞ . We have E ψ (α)N t -e αt E e α 2 t 3 ≤ 8E ψ (α)N t -N ∞ t e α 2 t 3 + 8E N ∞ t -e αt E e α 2 t 3 . Now, we know according to the proof of Theorem 5.2.1 (and this is easy to prove using the decomposition of Figure 5.2) that N ∞ can be decomposed as N ∞ t = Nt i=1 B (t) i , where B (t) i i≥1 is a family of independent Bernoulli random variables, which is i.i.d. for i ≥ 2, under P t . Hence, E t ψ (α)N t -N ∞ t e α 2 t 3 ≤ e -3 2 αt E t   Nt i=1 ψ (α) -B (t) i 4   3 4 . Since, it is known from the proof of Theorem 5.2.1 that EB (t) 2 = ψ (α) + O e -αt , it is straightforward that E t ψ (α)N t -N ∞ t e α 2 t   = e -2αt R + E ∞ P x(e αt -1) -e αt x 4 e -x dx. Finally, for a Poissonian random variable X with parameter ν, straightforward computations give that E (X -ν) 4 = 3ν 2 + ν, which allows us to end the proof. 
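To fix ideas, the constants appearing in Lemmas 5.4.4, 5.4.5 and 5.4.7 can be evaluated explicitly in the Markovian special case where V is exponential with parameter d, so that Λ(dr) = b d e^{-dr} dr; this specialisation is an assumption made only for the following short symbolic sketch in Python, which recovers ψ(x) = x(x + d - b)/(x + d), the Malthusian parameter α = b - d, ψ'(α) = α/b, and the limit variance (α/b)(2 - ψ'(α)) of Lemma 5.4.5.

import sympy as sp

b, d, x, r = sp.symbols('b d x r', positive=True)
# psi(x) = x - int_0^oo (1 - e^{-r x}) Lambda(dr), with Lambda(dr) = b d e^{-d r} dr here
psi = sp.simplify(x - sp.integrate((1 - sp.exp(-r * x)) * b * d * sp.exp(-d * r), (r, 0, sp.oo)))
alpha = b - d                                          # positive root of psi when b > d
print(sp.simplify(psi))                                # equals x*(x + d - b)/(x + d)
print(sp.simplify(psi.subs(x, alpha)))                 # 0, so alpha is indeed the Malthusian parameter
dpsi_at_alpha = sp.simplify(sp.diff(psi, x).subs(x, alpha))
print(dpsi_at_alpha)                                   # equals (b - d)/b, i.e. psi'(alpha) = alpha/b
print(sp.simplify((alpha / b) * (2 - dpsi_at_alpha)))  # limit variance of Lemma 5.4.5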
Case with arbitrary initial distribution P V ∅ In order to study the behavior of the sub-splitting trees involved in the decomposition described in Figure 5.2, we investigate the behaviour of a splitting tree where the ancestor lifelength is not distributed as V , but follows an arbitrary distribution. Let Ξ be a random variable in (0, ∞], giving to the life-length of the ancestor and by N (Ξ) the associated population counting process. Using the decomposition of N (Ξ) over the lifespan of the ancestor, as described in Section 5.4.2, we have N t (Ξ) = R + N (ξu) t-u 1 Ξ>u ξ(du) + 1 Ξ>t , (5.22) where N i i≥1 is a family of i.i.d. CMJ processes with the same law as N independent of Ξ and ξ, as described in section 5.4.2. Let, for all i ≥ 1, E i be E i := lim t→∞ ψ (α)e -αt N i t , a.s, (5.23) and, let E (Ξ) be the random variable defined by E (Ξ) := [0,∞] E (ξu) e -αu 1 Ξ>u ξ(du). (5.24) Lemma 5.4.8 (First moment). The first moment is asymptotically bounded, that is E ψ (α)N t (Ξ) -e αt E(Ξ) = O(1), uniformly with respect to the random variable Ξ. Proof. Using Lemma 5.4.1, (5.22) and (5.24) with have E ψ (α)N t (Ξ) -e αt E(Ξ) = [0,t] ψ (α)EN t-u -e α(t-u) EE e -αu P (Ξ > u) bdu, which leads using (3.6) and (3.9) to E ψ (α)N t (Ξ) -e αt E(Ξ) = [0,t] ψ (α)W (t -u) -ψ (α)W P V (t -u) - α b e α(t-u) =:I t-u e -αu P (Ξ > u) bdu. (5.25) We get using Proposition 4.1.1 and (5.2), I s =e αs -ψ (α)e αs F (s) -e αs 1 - α b + ψ (α) [0,s] e α(s-v) F (s -v)P V (dv) + e αs (s,∞) e -αv P V (dv) - α b e αs =e αs (s,∞) e -αv P V (dv) + o(1). Hence, (I s ) s≥0 is bounded. The result, now, follows from (5.25). Lemma 5.4.9 (L 2 convergence in the general case). ψ (α)e -αt N t (Ξ) converge a.s. and in L 2 to E (Ξ), and lim t→∞ e -αt E ψ (α)N t (Ξ) -e αt E(Ξ) 2 = α b 2 -ψ (α) R + e -αs P (Ξ > s) bds, where the convergence is uniform with respect to Ξ in (0, ∞]. In the particular case when Ξ follows the distribution of O (βt) 2 given by (5.14), we have, for 0 < β < 1 2 , lim t→∞ e αt E βt e -αt ψ (α)N t (O (βt) 2 ) -E(O (βt) 2 ) 2 = 2 -ψ (α) ψ (α). Proof. From (5.22) and (5.24), we have e -αt ψ (α)N t (Ξ) -E(Ξ) 2 = R + e -α(t-u) ψ (α)N (ξu) t-u -E (u) e -αu 1 Ξ>u ξ(du) + e -αt 1 Ξ>t 2 (5.26) and, using Lemma 5.4.1, E ψ (α)e -αt N t (Ξ) -E(Ξ) 2 =E R + ψ (α)e -α(t-u) N (ξu) t-u -E (u) e -αu 1 Ξ>u ξ(du) 2 + e -2αt P (Ξ > t) + 2e -αt E1 Ξ>t R + ψ (α)e -α(t-u) N (ξu) t-u -E (u) e -αu 1 Ξ>u ξ(du), = R + E ψ (α)e -α(t-u) N (ξu) t-u -E (u) 2 e -2αu P (Ξ > u) bdu + R + E ψ (α)e -α(t-u) N (ξu) t-u -E (u) E ψ (α)e -α(t-v) N (ξv) t-v -E (v) × e -α(u+v) P (Ξ > u, Ξ > v) bdu dv + e -2αt P (Ξ > t) + 2e -αt R + E ψ (α)e -α(t-u) N (ξu) t-u -E (u) e -αu P (Ξ > u, Ξ > t) bdu. Moreover, since, ψ (α)Ee -αt N t -E = O e -αt , this leads, using Lemma 5.4.8, to lim t→∞ e αt E e -αt ψ (α)N t (Ξ) -E(Ξ) 2 = α b 2 -ψ (α) R + e -αu P (Ξ > u) bdu. Now, we have from (5.14) and Lemma 3.3.3, lim u→∞ P u (O 2 > s) = lim u→∞ R + W (u -y) W (u) -1 P (V > s + y) bdy = R + e -αy P (V > s + y) bdy. It follows then from Lebesgue theorem that, lim t→∞ R + e -αs P βt (O 2 > s) bds = bψ (α) α . Lemma 5.4.10 (Boundedness in the general case.). The error of order 3 in asymptotically bounded, that is e -3 2 αt E ψ (α)N t (Ξ) -e αt E(Ξ) 3 = O (1) , uniformly w.r.t. Ξ. Proof. 
Rewriting N (Ξ) and E (Ξ) as in the proof of Lemma 5.4.9, we see that, e -3 2 t E ψ (α)N t (Ξ) -e αt E(Ξ) 3 = e -3 2 t E   [0,t] ψ (α)N (ξu) t-u -e α(t-u) E (u) 1 Ξ>u ξ(du) + ψ (α)1 Ξ>t 3   ≤ 8E [0,t] e -3 2 (t-u) ψ (α)N (ξu) t-u -e α(t-u) E (u) e -1 2 u 1 Ξ>u ξ(du) 3 + 8ψ (α)e -1 2 t P (Ξ > t) 3 We denote by I the first term of the r.h.s. of the last inequality, leading to I ≤ 8E [0,t] 3 3 i=1 e -1 2 (t-s i ) ψ (α)N (ξs i ) t-s i -e α(t-s i ) E (s i ) e -1 2 s i 1 Ξ>s i ξ(ds 1 )ξ(ds 2 )ξ(ds 3 ) ≤ 8E [0,t] 3 3 j=1 e -1 2 (t-s j ) ψ (α)N (ξ s j ) t-s j -e α(t-s j ) E (s j ) 3 3 i=1 e -1 2 s i 1 Ξ>s i ξ(ds 1 )ξ(ds 2 )ξ(ds 3 ) ≤ 24E [0,t] e -1 2 (t-u) ψ (α)N (ξu) t-u -e α(t-u) E (u) 3 e -1 2 u 1 Ξ>u ξ(du) [0,t] e -1 2 u ξ(du) 2 ≤ 24E [0,t] e -1 2 (t-u) ψ (α)N (ξu) t-u -e α(t-u) E (u) 3 e -1 2 u 1 Ξ>u µ(du), with µ(du) = [0,t] e -1 2 s ξ(ds) 2 ξ(du). Now, since µ is independent from the family N (i) and E (i) , an easy adaptation of the proof of Lemma 5.4.1, leads to e -3 2 t E ψ (α)N t (Ξ) -e αt E(Ξ) 3 ≤ 24E [0,t] E e -1 2 (t-u) ψ (α)N t-u -e α(t-u) E 3 e -1 2 u 1 Ξ>u µ(du) + 8ψ (α)e -1 2 t P (Ξ > t) Using Lemma 5.4.7 to bound E e -3 2 (t-u) N t-u -e α(t-u) E 3 , in the previous expression, finally leads to e -3 2 t E ψ (α)N t (Ξ) -e αt E(Ξ) 3 ≤ C E R + e -1 2 u ξ(du) 3 + 1 , for some real positive constant C. Proof of Theorem 5.2.2 We fix a positive real number u. From this point, we recall the decomposition of the splitting tree as described in Section 5.4.2 (see also Figure 5.2). We also recall that, for all i in {1, . . . , N u }, the process N i s (O i ) , s ∈ R + is the population counting process of the (sub-)splitting tree T (O i ). As explained in Section 5.4.2, it follows from the construction of the splitting tree, that, for all i in {1, . . . , N u }, there exists an i.i.d. family of processes N i,j j≥1 independent from N u with the same law as (N t , t ∈ R + ), and an i.i.d. family ξ (i) 1≤i≤Nu of random measure independent from N u and from N i,j j≥1 the family with same law as ξ, such that N i t (O i ) = [0,t] N i,j t-u 1 O i >u ξ (i) (du) + 1 O i >t , ∀t ∈ R + , ∀i ∈ {1, . . . , N u } . (5.27) As in (5.24), we define, for all i in {1, . . . , N u }, E (O i ) := [0,t] E i,ξ (i) u e -αu 1 O i >u ξ (i) (du), (5.28) where E i,j := lim t→∞ ψ (α)e -αt N i,j t . Hence, it follows from Lemma 5.4.9, that e -αt N i t (O i ) converges to E (O i ) in L 2 . Note also that, from Lemma 5.4.2, the family N i t (O i ) , t ∈ R + 2≤i≤Nu is i.i.d. and independent from N u under P u , as well as the family (E (O i )) 2≤i≤Nu (in the sense of Remark 5.3.2). Note that the law under P u of the processes of the family N i t (O i ) , t ∈ R + 2≤i≤Nu is the law of standard population counting processes where the lifespan of the root is distributed as O 2 under P u (except for the first one). Lemma 5.4.11 (Decomposition of E). We have the following decomposition of E, E = e -αu Nu i=1 E i (O i ) , a.s. Moreover, under P u , the random variables (E i (O i )) i≥1 (defined by (5.28)) are independent, independent of N u , and identically distributed for i ≥ 2. Proof. Step 1 : Decomposition of E. For all t in R + , we denote by N ∞ t the number of individuals alive at time t which have an infinite descent. For all i, we define, for all t ≥ 0, N ∞ t (O i ) from T (O i ) as N ∞ t was defined from the whole tree. Now, it is easily seen that N ∞ t = Nu i=1 N ∞ t-u (O i ) . Hence, if e -αt N ∞ t (O i ) converges a.s. 
to E (O i ), then lim t→∞ e -αt N ∞ t = lim t→∞ e -αu Nu i=1 e -α(t-u) N ∞ t-u (O i ) = e -αu Nu i=1 E (O i ) . So, it just remains to prove the a.s. convergence to get the desired result. Step 2 : a.s. convergence of N ∞ (O i ) to E (O i ). For this step, we fix i ∈ {1, . . . , N u }. In the same spirit as (5.27) (see also Section 5.4.2), it follows from the construction of the splitting tree T (O i ), that there exists, an i.i.d. (and independent of N u ) sequence of processes N j,∞ s , s ∈ R + j≥1 with the same law as (N ∞ t , t ∈ R + ) (under P), such that N ∞ t (O i ) = [0,t] N ξ (i) u ,∞ t-u 1 O i >u ξ (i) (du) + 1 O i =∞ , ∀t ≥ 0. Now, it follows from Theorem 5.2.1, that for all j, lim t→∞ e -αt N j,∞ t = E i,j , a.s., where E i,j was defined in the beginning of this section. Let C j := sup t∈R + e -αt N j,∞ t , ∀j ≥ 1, and C := sup t∈R + e -αt N ∞ t . Then, the family (C j ) j≥1 is i.i.d., since the processes N j,∞ j≥1 are i.i.d, with the same law as C. Hence, [0,t] e -α(t-u) N ξ (i) u ,∞ t-u e -αu 1 O i >u ξ (i) (du) ≤ [0,t] C ξ (i) u e -αu 1 O i >u ξ (i) (du). (5.29) It is easily seen that E [C] = P (NonEx) E ∞ [C] . Now, since, from Proposition 5.3.1, N ∞ t is a Yule process under P ∞ (and hence e -αt N ∞ t is a martingale), Doobs's inequalities entails that the random variable C is integrable. Hence, the right hand side of the (5.29) is a.s. finite, and we can apply Lesbegue Theorem to get lim t→∞ e -αt N ∞ t (O i ) = [0,t] E i,ξ (i) u e -αu 1 O i >u Γ(du) = E (O i ) , a.s., where the right hand side of the last equality is just the definition of E (O i ). We have now all the tools needed to prove the central limit theorem for N t . Proof of Theorem 5.2.2. Let u < t, two positive real numbers. From Lemma 5.4.11 and Section 5.4.2, we have N t = Nu i=1 N (i) t-u (O i ) and e αt E = Nu i=1 e α(t-u) E i (O i ) . Then, ψ (α)N t -e αt E e α 2 t = Nu i=1 ψ (α)N (i) t-u (O i ) -e α(t-u) E i (O i ) e α 2 (t-u) e α 2 u . (5.30) Using Lemma 5.4.2, we know that, under P u , N i t-u (O i ), t > u 1≤i≤Nu are independent processes, i.i.d. for i ≥ 2 and independent of N u . Let us denote by ϕ and φ the characteristic functions ϕ(λ) := E exp iλ ψ (α)N 2 t-u (O 2 ) -e α(t-u) E 2 (O 2 ) e α 2 (t-u) , λ ∈ R and φ(λ) := E exp iλ ψ (α)N 1 t-u (O 1 ) -e α(t-u) E 1 (O 1 ) e α 2 (t-u) , λ ∈ R. It follows from (5.30) and Lemma 5.4.2 that, E u exp iλ ψ (α)N t -e αt E e α 2 t = φ λ e α 2 u ϕ λ e α 2 u E u ϕ λ e α 2 u Nu Since N u is geometric with parameter W (u) -1 under P u , E u exp iλ ψ (α)N t -e αt E e α 2 t = φ λ e α 2 u ϕ λ e α 2 u W (u) -1 ϕ λ e α 2 u 1 -(1 -W (u) -1 ) ϕ λ e α 2 u Using Taylor formula for ϕ, we obtain, E u exp iλ ψ (α)N t -e αt E e α 2 t = φ λ e α 2 u 1 D(λ, t, u) where, D(λ, t, u) = W (u) -(W (u) -1) 1 + iλE ψ (α)N i t-u (O 2 ) -e α(t-u) E 2 (O 2 ) e α 2 (t-u) e α 2 u - λ 2 2 E   ψ (α)N i t-u (O 2 ) -e α(t-u) E 2 (O 2 ) e α 2 (t-u) e α 2 u 2   + R(λ, t, u) = 1 -iλ W (u) -1 e α 2 u E ψ (α)N i t-u (O 2 ) -e α(t-u) E 2 (O 2 ) e α 2 (t-u) + λ 2 2 W (u) -1 e αu E   ψ (α) N i t-u (O 2 ) -e α(t-u) E 2 (O 2 ) e α 2 (t-u) 2   -(W (u) -1)R(λ, t, u), with, for all > 0 and all λ in (-, ), |R(λ, t, u)| ≤ sup λ∈(-, ) ∂ 3 ∂λ 3 ϕ(λ) ≤ E   ψ (α)N i t-u (O 2 ) -e α(t-u) E 2 (O 2 ) e α 2 (t-u) 3   3 e -3 2 αu 6 ≤ C 3 e -3 2 u , (5.31) for some real positive constant C obtained using Lemma 5.4.10. From this point, we set u = βt with 0 < β < 1 2 . 
It follows then from the Lemmas 5.4.9 and 5.4.2, that lim t→∞ E βt   ψ (α)N i t-βt (O 2 ) -e α(t-βt) E 2 (O 2 ) e α 2 (t-βt) 2   = ψ (α) 2 -ψ (α) . (5.32) Moreover, we have from Lemma 5.4.8, and since β < 1 2 , lim t→∞ W (βt)e -α 2 t E ψ (α)N i t (O 2 ) -e αt E 2 (O 2 ) = 0. (5.33) Finally, the relations (5.31), (5.32) and (5.33) lead to lim t→∞ E βt exp iλ N t -e αt E e α 2 t = 1 1 + λ 2 2 (2 -ψ (α)) . To conclude, note that, E βt exp iλ N t -e αt E e α 2 t -E ∞ exp iλ N t -e αt E e α 2 t = E e iλ ψ (α)N t -e αt E e α 2 t 1 N βt >0 P (N βt > 0) - 1 NonEx P (NonEx) ≤ E 1 N βt >0 P (N βt > 0) - 1 NonEx P (NonEx) goes to 0 as t goes to infinity. This ends the proof of Theorem 5.2.2. Chapitre 6 On the frequency spectrum of a splitting tree with neutral Poissonian mutations Introduction The purpose of this chapter is to study splitting trees with neutral Poissonian mutations. We consider the same model as in the previous chapter, but we assume that individuals also experience mutations at Poisson rate. Each mutation leads to a totally new type replacing the previous type of the individual, this is the infinitely-many alleles assumption. Every time an individual gives birth to new individual, it transmits its type to his child. This mutation process is a way to model the occurrence of a new type in a population (such as a new species or a new phenotype in a given species). Our study concerns the allelic partition of the living population at a fixed time t, which is characterized by the frequency spectrum (A(k, t)) k≥1 of the population, where each integer A(k, t) is the number of families represented by k alive individuals at time t. A famous example is the Ewens sampling formula which gives the distribution of the frequency spectrum when the genealogy is given by the Kingman coalescent [START_REF] Ewens | Mathematical population genetics. I[END_REF]. Other works studied similar quantities in the case of Galton-Waston branching processes (see [START_REF] Bertoin | The structure of the allelic partition of the total population for Galton-Watson processes with neutral mutations[END_REF] or [START_REF] Griffiths | An infinite-alleles version of the simple branching process[END_REF]). The purpose of this chapter is to obtain explicit formulas for the moments of the frequency spectrum and then to use this formulas in order to extend the central limit theorem proved in Chapter 5 to the frequency spectrum. The model with Poissonian mutations was studied in Champagnat and Lambert [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF][START_REF] Champagnat | Splitting trees with neutral Poissonian mutations II : Largest and oldest families[END_REF], where many properties of the frequency spectrum and the clonal family (the family who carries the type of the first individuals at time 0) were obtained. The population counting process (N t , t ∈ R + ) and the frequency spectrum (A(k, t)) k≥1 belong to the class of general branching processes counted by random characteristics. 
This class of processes has been deeply studied by Jagers and Nerman, who give, for instance, criteria for the long-time convergence of such processes [START_REF] Jagers | Convergence of general branching processes and functionals thereof[END_REF][START_REF] Nerman | On the convergence of supercritical general (C-M-J) branching processes[END_REF][START_REF] Jagers | The growth and composition of branching populations[END_REF][START_REF] Jagers | Limit theorems for sums determined by branching and other exponentially growing processes[END_REF][START_REF] Taïb | Branching processes and neutral mutations[END_REF]. Using these tools, Richard and Lambert [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF][START_REF] Richard | Processus de branchement non Markoviens et processus de Lévy[END_REF] showed the almost sure convergence of N t , properly renormalized, to an exponential random variable in the supercritical case. The almost sure convergence of the ratios A(k, t)/N t was proved in [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF] using similar tools. From this, one easily deduces the a.s. convergence of A(k, t)/W (t), where we recall that W (t) is the average number of individuals alive at time t conditionally on N t > 0. This result was stated without proof in [START_REF] Champagnat | Birth and death processes with neutral mutations[END_REF]. An important tool is the so-called coalescent point process (CPP) : given the individuals alive at a fixed time t, the coalescent point process at time t is the tree describing the relations between the lineages of all individuals alive at time t. Here, the term lineage of an individual refers to the succession of individuals, from child to parent, backward in time until the ancestor of the population. Roughly speaking, the CPP is the genealogical tree of the lineages of the individuals. This tool goes back to Aldous and Popovic [START_REF] Aldous | A critical branching process model for biodiversity[END_REF], who introduced it for a Markovian model. Later, in [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF], Lambert established the general link between coalescent point processes and splitting trees. In this work, we use the representation of the CPP of a splitting tree as an i.i.d. sequence of random variables (H i ) i≥1 . More precisely, we use the new construction of the coalescent point process given in Chapter 4; thanks to Theorem 4.2.2, this allows us to obtain explicit recursive formulas for the moments of the frequency spectrum, valid for any choice of the parameters of the model. As an application, we prove the almost sure convergence of the frequency spectrum in the supercritical case, avoiding the use of the theory of general branching processes counted by random characteristics. Of course, these moment formulas can also provide much valuable information, for instance on the error in the aforementioned convergence. Another application is then to prove central limit theorems for the frequency spectrum (such as the one of Chapter 5). Section 6.2 is dedicated to the description of the model and the introduction of earlier results (essentially from [START_REF] Lambert | The contour of splitting trees is a Lévy process[END_REF][START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF]) used in the sequel.
In Section 6.3, we state results (Theorems 6.3.1 and 6.3.2) giving explicit formulas for the factorial moments of the frequency spectrum (A(k, t)) k≥1 expressed in terms of the lower order moments. A first example of the method in Subsection 6.3.1 focusing on the expectation of A(k, t). Although, the computation of this expectation was already known from [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF], we give here a much more simple proof. Subsection 6.3.2 is dedicated to the proofs of Theorems 6.3.1 and 6.3.2. We give the asymptotic behaviour of higher moments in Subsection 6.3.5. All these sections come from a joint work with Nicolas Champagnat published in [START_REF] Champagnat | Moments of a splitting tree with neutral poissonian mutations[END_REF]. In Section 6.4, we state the same kind of limit theorems as those for N t stated in Section 5.2. The following sections are devoted to the proofs of these results. Section 6.5 gives a new proof of the already known law of large numbers for the frequency spectrum (originally obtained in [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF]). Sections 6.6, 6.7 and 6.8 give the proof of the various CLT stated in Section 6.4. Splitting trees with neutral Poissonian mutations Here we define what we call a splitting tree with neutral mutation. Since a splitting tree T is a measured space (with a σ-finite measure λ), one can define on T a Poisson random measure with intensity λ. Hereafter we call mutations every atoms of this measure on T. However, since the only observable mutation at time t are the one which occurred on the lineage of the individuals alive at this time, we can define the occurrence of mutations directly on the CPP. So, let P be a Poisson random measure on (0, t) × N with intensity measure θλ ⊗ C where λ is the Lebesgue 6.2. Splitting trees with neutral Poissonian mutations measure on (0, t) and C is the counting measure on N. The mutation random measure on the CPP is then defined by N (da, di) = 1 H i >t-a 1 i<Nt P (di, da) , (6.1) where an atom at (a, i) means that the ith branch of the CPP experiences a mutation at time t -a. We assume that each mutation gives a totally new type to its holder (infinitly-many alleles model) and that the types are transmitted to offspring. This rule yields a partition of the population by type at a given time t. The distribution of the frequency of types in the population is called the frequency spectrum and is defined as the sequence (A(k, t)) k≥1 where A(k, t) is the number of types carried by exactly k individuals in the alive population at time t (or, for short, the number of families of size k at this time) excluding the family holding the original type of the root. In the study of the frequency spectrum, an important role is played by the family carrying the type of the root.The type of the ancestor individual at time 0 is said clonal. Moreover, at any time t, the set of individuals carrying this type is called the clonal family. We denote by Z 0 (t) the size of the clonal family at time t. To study this family it is easier to consider the clonal splitting tree constructed from the original splitting tree by cutting every branches beyond mutations. This clonal splitting tree is a standard splitting tree without mutations where individuals are killed as soon as they die or experience a mutation. 
The new lifespan law P V θ is then the minimum between an exponential random variable of parameter θ and an independent copy of V . As a splitting tree, one can study its contour process whose Laplace exponent is given, using simple manipulations on Laplace transforms, by ψ θ (x) = x - (0,∞] 1 -e -rx bP V θ (dr) = xψ(x + θ) x + θ . In the case where α -θ > 0 (resp. α -θ < 0, α -θ = 0) the clonal population is supercritical (resp. sub-critical, critical), and we talk about clonal supercritical (resp. sub-critical, critical) case. We denote by W θ the scale function of the Lévy process induced by this new tree, related to ψ θ as in (3.1). This leads to P (Z 0 (t) = k | Z 0 (t) > 0) = 1 W θ (t) 1 - 1 W θ (t) k-1 . Moreover, E [N t ] satisfies the renewal equation f (t) = P (V > t) + b t 0 f (t -s)P (V > s) ds, which, applied to the clonal splitting tree, allows obtaining after some easy calculations, P (Z 0 (t) > 0) P (N t > 0) = e -θt W (t) W θ (t) , from which one can deduce P (Z 0 (t) = k | N t > 0) = e -θt W (t) W θ (t) 2 1 - 1 W θ (t) k-1 , ∀k ≥ 1, (6.2) and P (Z 0 (t) = 0 | N t > 0) = 1 - e -θt W (t) W θ (t) . The main idea underlying our study is that the behaviour of any family in the CPP is the same as the clonal one but on a smaller time scale. For the rest of this chapter, unless otherwise stated, the notation P t refers to P (• | N t > 0) and P ∞ refers to the probability measure conditioned on the non-extinction event, denoted Non-Ex in the sequel. Finally, we recall the asymptotic behaviour of the scale functions W (t) and W θ (t), which is widely used in the sequel. Lemma 6.2.1. (Champagnat-Lambert [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations II : Largest and oldest families[END_REF]) Assume α > 0, there exists a positive constant γ such that e -αt ψ (α)W (t) -1 = O e -γt . In the case that θ < α (clonal supercritical case), W θ (t) ∼ t→∞ ψ θ (α -θ) -1 e (α-θ)t . In the case that θ > α (clonal sub-critical case), W θ (t) = θ ψ(θ) + O e -(θ-α)t . In the case where θ = α (clonal critical case), W α (t) ∼ t→∞ αt ψ (α) . From this lemma, one can obtain that the probability that the clonal family reaches a fixed size at time t decreases exponentially fast with t. Corollary 6.2.2. In the supercritical case (α > 0), for any positive integer k, P t (Z 0 (t) = k) = O e -δt , where δ is equal to θ (resp. 2α -θ) in the clonal critical and sub-critical cases (resp. supercritical case). Remark 6.2.3. Note that Lemma 6.2.1 implies in particular that, for any positive integer k, tW (t) k-1 = o W (t) k . Moments formulas of the frequency spectrum For two positive real numbers a < t, we denote by N (t) t-a the number of individuals alive at time ta who have descent alive at time t. In the CPP of the individuals alive at time t, N (t) t-a corresponds to the number of branches higher than t -a, that is {H i | i ∈ {0, . . . , N t -1}, H i > t -a}. In the sequel, we use the following notation for multi-indexed sums : let K, N be two positive integers and 1 , . . . , K some non-negative integers, then the notation n 1:K 1 +•••+n 1:K N = 1:K refers to the sum n 1 1 +•••+n 1 N = 1 ... n K 1 +•••+n K N = K . In order to lighten notation, we also use the convention that for any integer n and any negative integer k, n k = 0. We recall that P t is the conditional probability on the event {N t > 0} and that E t is the corresponding expectation. 
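Before stating the theorems, the following Monte Carlo sketch in Python may help to make the objects of this section concrete; it samples the mutation measure (6.1) on a CPP and computes the frequency spectrum, and its output can be compared with the expectation formula recalled in Subsection 6.3.1 below. It is an illustration only, under assumptions that are not statements of the text: lifetimes are exponential with parameter d, so that in this Markovian case W (s) = (b e^{alpha s} - d)/alpha with alpha = b - d and, for the clonal tree, W_theta(s) = (b e^{(alpha - theta)s} - (d + theta))/(alpha - theta); moreover the CPP is sampled through i.i.d. branch depths H with P (H > s) = 1/W (s), the sequence being stopped at the first depth exceeding t, the root branch having depth t.

import numpy as np
from collections import Counter
from scipy.integrate import quad

rng = np.random.default_rng(2)
b, d, theta, t, k, n_runs = 1.0, 0.3, 0.4, 5.0, 2, 400   # arbitrary illustration values
alpha = b - d
W = lambda s: (b * np.exp(alpha * s) - d) / alpha
W_theta = lambda s: (b * np.exp((alpha - theta) * s) - (d + theta)) / (alpha - theta)

def sample_cpp(rng):
    H = [t]                                     # branch 0: the ancestral lineage, of depth t
    while True:
        u = 1.0 - rng.random()                  # uniform on (0, 1]
        h = np.log((alpha / u + d) / b) / alpha # inverse of s -> 1/W(s), so P(H > s) = 1/W(s)
        if h >= t:
            return H
        H.append(h)

def frequency_spectrum(H, rng):
    # mutations fall on branch i at Poisson rate theta, at depths in [0, H[i])
    muts = [np.sort(rng.uniform(0.0, h, rng.poisson(theta * h))) for h in H]
    def type_of(j):                             # most recent mutation on the lineage of individual j
        s, i = 0.0, j
        while True:
            cand = muts[i][muts[i] >= s]
            if cand.size:
                return (i, cand[0])
            s = H[i]
            if i == 0:
                return None                     # no mutation on the lineage: ancestral type
            i -= 1
            while i > 0 and H[i] <= s:          # branch carrying the lineage at depth s
                i -= 1
    families = Counter(type_of(j) for j in range(len(H)))
    families.pop(None, None)                    # the ancestral family is excluded from A(., t)
    return Counter(families.values())           # family size -> number of families of that size

estimate = np.mean([frequency_spectrum(sample_cpp(rng), rng).get(k, 0) for _ in range(n_runs)])
integrand = lambda a: theta * np.exp(-theta * a) / W_theta(a) ** 2 * (1 - 1 / W_theta(a)) ** (k - 1)
print(estimate, W(t) * quad(integrand, 0.0, t)[0])   # Monte Carlo estimate of E_t[A(k,t)] vs closed formula

The helper type_of simply follows the lineage of an individual downwards, branch by branch, and returns the first (most recent) mutation it meets; under the infinitely-many-alleles assumption this mutation determines the type of the individual, and counting identical types yields the allelic partition and hence (A(k, t)) k≥1 .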
In the both following theorem, we know, according to Proposition 4.3.1, that the random variable N (t) t-a is geometrically distributed with parameter W (t+a) W (a) under P t . We can now state the main theorems of this section. Theorem 6.3.1. For any positive integers n and k, we have, E t A(k, t) n = E t        t 0 θN (t) t-a n 1 +•••+n N (t) t-a =n-1 E a A(k, a) n 1 1 Z 0 (a)=k N (t) t-a m=2 E a A(k, a) n m da        . We also have a similar result for the joint moments of the frequency spectrum. Theorem 6.3.2. Let n 1 , . . . , n N and k 1 , . . . , k N be positive integers. We have E t N i=1 A(k i , t) n i = N l=1 E t t 0 θN (t) t-a n 1:N 1 +•••+n 1:N N (t) t-a =n 1:N -δ 1:N,l E a N i=1 A(k i , a) n i 1 1 Z 0 (a)=k l × N (t) t-a m=2 E a N i=1 A(k i , a) n i m da , (6.3) where δ refers to the Kronecker symbol. In Subsection 6.3.3, we also give formulas for moments like E t A(k, t) n 1 Z 0 (t)= . An example : the expectation of A(k, t) Before going further, we point out that this section uses the recursive construction of the CPP given in Section 4.3. A nice application of this construction is the derivation of the expectation of A(k, t). Indeed, suppose that a mutation occurs on branch i at a time a. Then, by construction of the CPP, the future of this family depends only on what happens on the branches (H j , i ≤ j < τ ) (see Figure 6.1), where τ = inf {j > i | H j ≥ a} . In fact, this set of branches is also a CPP with scale function W stopped at a (we talk about sub-CPP), and the number of individuals carrying the mutation at time t is the number of clonal individuals in this sub-CPP. We recall that this expectation was first calculated in [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF], with a E t [A(k, t)] = W (t) t 0 θe -θa W θ (a) 2 1 - 1 W θ (a) k-1 da. Proof. Since A(k, t) is the number of types represented at time t by k individuals, it is equivalent to enumerate all the mutations and ask if they have exactly k clonal children at time t. This remark leads to the following integral representation of A(k, t) : A (k, t) = [0,t]×N 1 Z i 0 (a)=k N (da, di) , (6.4) where N is defined in (6.1), and Z i 0 (a) denotes the number of alive individuals at time t carrying the same type as the type carried at time t -a on the ith branch of the CPP of the individuals alive at time t (the notation comes from the fact that Z i 0 (a) corresponds to the size of the clonal family in the sub-CPP induced by the ith individual at time t -a, see Figure 4.1). From Proposition 4.3.1, it follows that 1 Z i 0 (a)=k satisfies the conditions of Theorem 4.2.2, so E t [A (k, t)] = t 0 θ P a (Z 0 (a) = k) E t N (t) t-a da = W (t) t 0 θe -θa W θ (a) 2 1 - 1 W θ (a) k-1 da, using (6.2). 6.3.2 Proof of Theorems 6.3.1 and 6.3.2 Let a and t be two positive real numbers such that a < t, and n a positive integer. We call k-mutation, a mutation represented by k alive individuals at time t in the splitting tree. Let A (i) (k, a) k≥1 be the frequency spectrum in the i-th subtree of construction provided by Proposition 4.3.1. To count the number of n-tuples in the set of k-mutations, we look along the tree and seek for mutations in the CPP. For each k-mutation encountered, we count the number of (n -1)-tuples made of younger k-mutations. The (n -1)-tuples should be enumerated by decomposition in each subtree in order to exploit the independence property of the subtrees of Proposition 4.3.1. 
Suppose that a mutation is encountered at a time a, then the number of (n -1)-tuples made of younger mutations is given by n 1 +•••+n N (t) t-a =n-1 N (t) t-a m=1 A (m) (k, a) n m . So the number A(k, t) n of n-tuples of k-mutations is given by A(k, t) n = [0,t]×N 1 Z i 0 (a)=k n 1 +•••+n N (t) t-a =n-1 N (t) t-a m=1 A (m) (k, a) n m N (da, di), (6.5) = ≥1 [0,t]×N 1 Z i 0 (a)=k n 1 +•••+n l =n-1 m=1 A (m) (k, a) n m 1 N (t) t-a = N (da, di), where Z i 0 (a) was defined in the proof of Theorem 6.3.3. Finally, using the independence provided by Proposition 4.3.1, it follows from Theorem 4.2.2 applied to all the integrals with respect to the random measures 1 N (t) t-a =k N (da, di), that E t A(k, t) n = E t [0,t]×N n 1 +•••+n N (t) t-a =n-1 E a A(k, a) n 1 1 Z 0 (a)=k N (t) t-a m=2 E a A(k, a) n m N (da, di) . Finally, using that the N (da, di) = 1 H i >t-a 1 i<Nt P(di, da) where P independent from the CPP (and, hence, from N (t) t-a ), it follows that E t A(k, t) n = E t [0,t] n 1 +•••+n N (t) t-a =n-1 E a A(k, a) n 1 1 Z 0 (a)=k × N (t) t-a m=2 E a A(k, a) n m N 1 H i >t-a 1 i<Nt C(di)θda, = E t [0,t] θN (t) t-a n 1 +•••+n N (t) t-a =n-1 E a A(k, a) n 1 1 Z 0 (a)=k N (t) t-a m=2 E a A(k, a) n m da, (6.6) which ends the proof of Theorem 6.3.1. The proof of Theorem 6.3.2 follows exactly the same lines, and we leave it to the reader. Joint moments of the frequency spectrum and 1 Z 0 (t)= In order to compute the terms of the form E t N i=1 A(k i , t) n i 1 Z 0 (t)= involved in (6.3), we need to extend the representation (6.5) of A(k,t) n to take into account the indicator function of {Z 0 (t) = }. To do this, when integrating w.r.t. N (da, di), we need to ask that the sum of the number of clonal individuals in each subtree for which the type at time t -a is the ancestral type, is equal to k. We begin with the case E A(k, t)1 Z 0 (t)= in order to highlight the ideas. In this case, we have the following result. Proposition 6.3.4. E t A(k, t)1 Z 0 (t)= =E t t 0 N (t) t-a -Z (t) 0 (a) P a (Z 0 (a) = k) 1 +•••+ Z (t) 0 (a) = Z (t) 0 (a) i=1 P a (Z 0 (a) = i ) θda + E t t 0 Z (t) 0 (a)P a (Z 0 (a) = k) 1 +•••+ Z (t) 0 (a)-1 = Z (t) 0 (a)-1 i=1 P a (Z 0 (a) = i ) θda. (6.7) Proof. Recalling that N (t) t-a refers to the size whole population in the lower tree P of the construction of Proposition 4.3.1, we similarly define Z (t) 0 (a) as the size of the clonal population in the same tree (with the convention that mutations that occur at time t -a, i.e. on the leaves of the tree P, do not affect Z (t) 0 (a)). It follows that A(k, t)1 Z 0 (t)= = [0,t]×N 1 Z j 0 (a)=k Z (t) 0 (a) -B j ! σ∈I 1 σ is ancestral 1 +•••+ Z (t) 0 (a)-B j = Z (t) 0 (a)-B j i=1 1 Z σ i 0 (a)= i N (da, dj), (6.8) where I is the set of injections from 1, . . . , Z (t) 0 (a) -B j to 1, . . . , N (t) t-a , B j is the indicator function of the event {the jth individual at time t -a is clonal} , and "σ is ancestral" denotes the event that the individuals σ 1 , . . . , σ Z (t) 0 (a)-B j at time t -a have the ancestral type. Now, using the same method as in the proof of Theorem 6.3.1 leads to E t A(k, t)1 Z 0 (t)= = E t [0,t]×N P a (Z 0 (a) = k) σ∈I 1 σ is ancestral 1 +•••+ Z (t) 0 (a)-B j = Z (t) 0 (a)-B j i=1 P a (Z 0 (a) = i ) N (da, dj) Z (t) 0 (a) -B j ! = E t [0,t]×N P a (Z 0 (a) = k) 1 +•••+ Z (t) 0 (a)-B j = Z (t) 0 (a)-B j i=1 P a (Z 0 (a) = i ) N (da, dj) = E t [0,t]×N P a (Z 0 (a) = k) 1 +•••+ Z (t) 0 (a)-B j = Z (t) 0 (a)-B j i=1 P a (Z 0 (a) = i ) 1 H j >t-a 1 j<Nt P(da, dj). 
Now, Z (t) 0 (a) is not independent from P, but we have that Z (t) 0 (a) is independent from P ([a, T ] ∩ •) for all a < T . Hence, Theorem 4.2.2 applies to X a := Z (t) 0 (t -a) and P defined for all measurable set A ⊂ [0, t] by P (A) = P (t -A) , and, as in (6.6), E t A(k, t)1 Z 0 (t)= = E t [0,t]×N P a (Z 0 (a) = k) 1 +•••+ Z (t) 0 (a)-B j = Z (t) 0 (a)-B j i=1 P a (Z 0 (a) = i ) 1 H j >t-a 1 j<Nt θda C(dj). Finally, integrating with respect to C(dj) leads to the result. This last proposition in not exactly a closed formula since its involves the law of the couple (N (t) t-a , Z (t) 0 (a)). To close the formula, we need an explicit formula for the joint generating function of N (t) t-a and Z (t) 0 (a). Let F (u, v) = E t u N (t) t-a v Z (t) 0 (a) , u, v ∈ [0, 1], which is given, thanks to Proposition 4.1 of [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF], by F (u, v) = u Ŵ (t -a, u) Ŵ (t -a) 1 - e -θ(t-a) Ŵ (t -a, u) v 1-v + Ŵθ (t -a, u) , (6.9) where Ŵ is the scale function of the lower CPP, P, defined in Proposition 4.3.1, Ŵ (t, u) := Ŵ (t) Ŵ (t) -u Ŵ (t) -1 , and Ŵθ (t, u) := e -θt Ŵ (t, u) + θ t 0 Ŵ (s, u)e -θs ds. Proposition 6.3.5. For all k ≥ 1 and l ≥ 0, E t A(k, t)1 Z 0 (t)= = t 0 P a (Z 0 (a) = k) l j=1 l -1 j -1 1 j! 1 - 1 W θ (a) l-j e -θa W (a) W θ (a) 2 P (Z 0 (a) = 0) j H j 1, 1 - e -θa W (a) W θ (a) θda + t 0 P a (Z 0 (a) = k) l j=1 l -1 j -1 1 j! 1 - 1 W θ (a) l-j e -θa W (a) W θ (a) 2 P (Z 0 (a) = 0) j G j 1 - e -θa W (a) W θ (a) θda, where H j (u, v) := v j ∂ j v ∂ u uF (u, v) -v j+1 ∂ j+1 v vE v Z (t) 0 (a) , and G j := v j-1 ∂ j v E t v Z (t) 0 (a) . Proof. Let A 1 and A 2 denote the two terms of the r.h.s. of (6.7). We detail the computations of A 1 . The case A 2 is similar. A 1 = E t t 0 N (t) t-a -Z (t) 0 (a) P a (Z 0 (a) = k) × Z (t) 0 (a)∧l j=1 Z (t) 0 (a) j 1 +•••+ j = j >0 j i=1 P a (Z 0 (a) = i ) P a (Z 0 (a) = 0) Z 0 (a)-j θda. Since, from (6.2), j i=1 P a (Z 0 (a) = i ) = j i=1 e -θa W (a) W θ (a) 2 1 - 1 W θ (a) i -1 = e -θa W (a) W θ (a) 2 j 1 - 1 W θ (a) l-j , we get A 1 =E t t 0 N (t) t-a -Z (t) 0 (a) P a (Z 0 (a) = k) Z (t) 0 (a)∧l j=1 Z (t) 0 (a) j l -1 j -1 × e -θa W (a) W θ (a) 2 j 1 - 1 W θ (a) l-j P a (Z 0 (a) = 0) Z 0 (a)-j θda = t 0 P a (Z 0 (a) = k) l j=1 l -1 j -1 1 j! 1 - 1 W θ (a) l-j e -θa W (a) W θ (a) 2 P a (Z 0 (a) = 0) j × E t N (t) t-a -Z (t) 0 (a) Z (t) 0 (a) (j) P a (Z 0 (a) = 0) Z (t) 0 (a) θda Finally, if we define, for all integer j, H j (u, v) := v j ∂ j v ∂ u uF (u, v) -v j+1 ∂ j+1 v vE v Z (t) 0 (a) , and G j := v j-1 ∂ j v E t v Z (t) 0 (a) , we get E t A(k, t)1 Z 0 (t)= = t 0 P a (Z 0 (a) = k) l j=1 l -1 j -1 1 j! 1 - 1 W θ (a) l-j e -θa W (a) W θ (a) 2 P (Z 0 (a) = 0) j H j 1, 1 - e -θa W (a) W θ (a) θda + t 0 P a (Z 0 (a) = k) l j=1 l -1 j -1 1 j! 1 - 1 W θ (a) l-j e -θa W (a) W θ (a) 2 P (Z 0 (a) = 0) j G j 1 - e -θa W (a) W θ (a) θda. These ideas also lead to the following formula, which is proved similarly. Corollary 6.3.6. Let n 1 , . . . , n N and k 1 , . . . , k N be positive integers. Let be a positive integer. 
We have E t N i=1 A(k i , t) n i 1 Z 0 (t)= = N κ=1 E t [0,t] N (t) t-a -Z (t) 0 (a) n 1:N 1 +•••+n 1:N N (t) t-a =n 1:N -δ 1:N,l 2 +•••+ Z (t) 0 (a)+1 = N (t) t-a m=Z (t) 0 (a)+2 E a N i=1 A(k i , a) n i m × Z (t) 0 (a)+1 m=2 E a N i=1 A(k i , a) n i m 1 Z 0 (a)= m E a N i=1 A(k i , a) n i 1 1 Z 0 (a)=kκ θda + N κ=1 E t [0,t] Z (t) 0 (a) n 1:N 1 +•••+n 1:N N (t) t-a =n 1:N -δ 1:N,l 2 +•••+ Z (t) 0 (a)+1 = N (t) t-a m=Z (t) 0 (a)+1 E a N i=1 A(k i , a) n i m × Z (t) 0 (a) m=2 E a N i=1 A(k i , a) n i m 1 Z 0 (a)= m 2 E a N i=1 A(k i , a) n i 1 1 Z 0 (a)=kκ θda. Proof. According to Section 6.3.2, we have the following integral representation. N i=1 A(k i , t) n i = N l=1 [0,t]×N 1 Z j 0 (a)=k l n 1:N 1 +•••+n 1:N N (t) t-a =n 1:N -δ 1:N,l N (t) t-a m=1 N i=1 A(k i , a) n j m N (da, dj). Now, using this equation in conjunction with the decomposition of 1 Z 0 (t)= used in Section 6.3.3, we have N i=1 A(k i , t) n i 1 Z 0 (t)= = N l=1 [0,t]×N 1 Z j 0 (a)=k l σ∈I 1 σ is ancestral × n 1:N 1 +•••+n 1:N N (t) t-a =n 1:N -δ 1:N,l 1 +•••+ Z (t) 0 (a)-B j = N (t) t-a m 1 =1 N i=1 A m 1 (k i , a) n i m 1 Z (t) 0 (a)-B j m 2 =1 1 Z σm 2 0 (a)= m 2 N (da, dj) Z (t) 0 (a) -B j ! . We refer the reader to the proof of Proposition 6.3.4 for the definitions of I,B j , and the event {σ is ancestral}. The definitions of A (m) (k, a) and Z (m) 0 (a) can be found in the beginning of this section. Now, we take the expectation in the last equality. Thanks to the method used in the proof of Proposition 6.3.4, we have E t N i=1 A(k i , t) n i 1 Z 0 (t)= = N κ=1 E t [0,t]×N σ∈I 1 σ is ancestral n 1:N 1 +•••+n 1:N N (t) t-a =n 1:N -δ 1:N,l 1 +•••+ Z (t) 0 (a)-B j = N (t) t-a m 1 =1 m 1 =σ,m 1 =i E a N i=1 A m 1 (k i , a) n i m 1 × Z (t) 0 (a)-B j m 2 =1 E a N i=1 A σm 2 (k i , a) n i σm 2 1 Z σm 2 0 (a)= m 2 E a N i=1 A i (k j , a) n j i 1 Z j 0 (a)=kκ N (da, dj) Z (t) 0 (a) -B j ! , where m 1 = σ means that m 1 / ∈ σ 1, . . . , Z (t)(a) 0 -B j . Now, following, as above, we get E t N i=1 A(k i , t) n i 1 Z 0 (t)= = N κ=1 E t [0,t]×N σ∈I 1 σ is ancestral n 1:N 1 +•••+n 1:N N (t) t-a =n 1:N -δ 1:N,l 1 +•••+ Z (t) 0 (a)-B j = N (t) t-a m 1 =Z (t) 0 (a)-B i +1 E a N i=1 A(k i , a) n i m 1 × Z (t) 0 (a)-B i +1 m 2 =2 E a N i=1 A(k i , a) n i m 2 1 Z 0 (a)= m 2 E a N i=1 A(k 1 , a) n 1 i 1 Z 0 (a)=kκ 1 H i >t-a 1 j<Nt θda C(di) Z (t) 0 (a) -B j ! . Then, the sum with σ can be removed since there is no term depending on σ. Finally, integrating with respect to C(di) leads to the result. Together with Theorems 6.3.1 and 6.3.2 and using the joint law of N t-a and Z (t) 0 (a) given in (6.9), these formulas give explicit recursion to compute each factorial moment of the frequency spectrum. Remark 6.3.7. Although, these formulas are quite heavy, an important interest lies in the method used to compute them. Indeed, this method should work to obtain the joint moments of A(k, t) with any quantity which can be expressed, at any time a, as the sum of contributions of each subtrees. For instance, since N t = N (t) t-a i=1 N i a , ∀a ∈ [0, t], where N i a is the number of individuals of the i-th subtrees at time a, we are able to compute the joint moments of N t and (A(k, t)) k≥1 . 
For example, using the integral representation (6.4) of A(k, t) and following the proof of Theorem 6.3.3 , we have that E t [A(k, t)N t ] = E t [0,t]×N N (t) t-a j=1 N j a 1 Z (i) 0 (a)=k N (da, di) = [0,t] θE t N (t) t-a N (t) t-a -1 E a [N a ] P a (Z 0 (a) = k) da + [0,t] θE t N (t) t-a E a N a 1 Z 0 (a)=k θda = [0,t] W (t) 2 1 - W (a) W (t) θe -θa W θ (a) 2 1 - 1 W θ (a) k-1 da + W (t) [0,t] θ E a N a 1 Z 0 (a)=k W (a) θda. (6.10) Application to the computation of the covariances of the frequency spectrum A quantity of particular interest is the limit covariance between two terms of the frequency spectrum. Proposition 6.3.8. Suppose that α > 0. Let k and l two positive integers, then, Cov t (A (k, t) , A (l, t)) = W (t) 2 c k c l + o W (t) 2 , where c k := ∞ 0 θe -θs W θ (s) 2 1 - 1 W θ (s) k-1 ds, ∀k ∈ N\{0}. Proof. In order to show how quantities in Theorem 6.3.2 can be manipulated, we detail the proof. Using Theorem 6.3.2, we obtain E t [A(k, t)A(l, t)] = t 0 θE t N (t) t-a N (t) t-a -1 (P a (Z 0 (a) = k) E a [A(l, a)] + P a (Z 0 (a) = ) E a [A(k, a)]) da + t 0 θE t N (t) t-a E a A(l, a)1 Z 0 (a)=k + E a A(k, a)1 Z 0 (a)= da. Recalling, from Proposition 4.3.1, that N (t) t-a is geometrically distributed with parameter W (a) W (t) under P t , E t N (t) t-a = W (t) W (a) and E t N (t) t-a N (t) t-a -1 = 2 W (t) 2 W (a) 2 1 - W (a) W (t) . Since E A(k, a)1 Z 0 (a)= ≤ E [A(k, a)] = O(W (a)), it follows by Lemma 3.3.3 and Theorem 6.3.3, that E t [A(k, t)A(l, t)] = 2 t 0 θ W (t) 2 W (a) 2 P a (Z 0 (a) = )E a [A(k, a)] + P a (Z 0 (a) = k)E a [A(l, a)] da + O (tW (t)) . By Theorem 6.3.3 and (6.2), the r.h.s. is equal to 2W (t) 2 t 0 θe -θa W θ (a) 2 1 - 1 W θ (a) k-1 a 0 θe -θs W θ (s) 2 1 - 1 W θ (s) l-1 ds + 1 - 1 W θ (a) l-1 a 0 θe -θs W θ (s) 2 1 - 1 W θ (s) k-1 ds da + O (tW (t)) =2W (t) 2 t 0 θe -θa W θ (a) 2 1 - 1 W θ (a) k-1 da t 0 θe -θa W θ (a) 2 1 - 1 W θ (a) l-1 da + O (tW (t)) . The proof ends thanks to Remark 6.2.3. Asymptotic behaviour of the moments of the frequency spectrum In this part, we study the long time behaviour of the moments of the frequency spectrum. From this point and until the end of this chapter, we suppose that the tree is supercritical, that is α > 0. Proposition 6.3.9. For any positive multi-integers n and k in N N , E t N i=1 A (k i , t) n i = W (t) |n| |n|! N i=1 n i ! N i=1 c n i k i + O tW (t) |n|-1 , (6.11) where the c k i 's are as defined in Proposition 6.3.8. Proof. Step 1 : Preliminaries and ideas. The proposition is proved by induction. Using the symmetry of the formula provided by Theorem 6.3.2, we may restrict to the study of the term l = 1 in (6.3). Hence, we want to study E t t 0 θN (t) t-a n 1:N 1 +•••+n 1:N N (t) t-a =n 1:N -δ 1:N,1 E a N i=1 A(k i , t) n i 1 1 Z 0 (a)=k 1 N (t) t-a -1 m=2 E a N i=1 A(k i , a) n i m da. (6.12) We recall that the terms of the multi-sum in the above formula correspond to the ways of allocating the mutations in the subtrees. The analysis relies on the fact that the growth of each term depends on the repartition of the mutations. In particular, the main term correspond to the case where all mutations are allocated to different subtrees. To capitalize on this fact, let M N (t) t-a the subset of M (N t t-a -1)×N (N) (the space of matrices of size N (t) t-a -1 × N with coefficients in N), such that each n in M N (t) t-a satisfies the relation N (t) t-a -1 m=1 n i m = n i -δ i,1 , ∀i ∈ N. (6.13) The notations n m and n i refer to the multi-integers n 1 m , . . . , n N m and n i 1 , . . . 
, n i N (t) t-a respectively. To simplify the analysis, we highlight three cases of interest : C 1 := n ∈ M N (t) t-a | ∀i, n i 1 = 0, ∀i ≥ 1, ∀m ≥ 2, n i m ≤ 1, and n i m = 1 ⇒ n k m = 0, ∀k = i . This set corresponds to the case where all the mutations are taken in different subtrees and are not taken in the tree where a mutation just occurs. In fact, this corresponds to the dominant term of (6.12) because as N (t) t-a tends to be large, the mutations tend to occur in different subtrees. Let also C 2 := n ∈ M N (t) t-a | ∀i, n i 1 = 0 \C 1 . Finally, let C 3 := n ∈ M N (t) t-a | N i=1 n i 1 > 0 . Step 2 : Uniform bound on the number of tuple of mutations in the subtrees. Assuming that the relation of Lemma 6.3.9 is true for any multi-integer n such that |n | = |n|-1, we have N (t) t-a m=1 E a N i=1 A(k i , a) n i m = N (t) t-a m=1 W (a) |nm| |n m |! N i=1 n i m ! N i=1 c n i m k i + O aW (a) |nm|-1 . (6.14) (6.15) Since there are at most |n| -1 multi-integers n m such that |n m | > 0 (because of the condition (6.13)), we can assume without loss of generality, up to reordering the indices, that n i m = 0, for all m ≥ |n|, and so all the terms with m > |n| in the product of (6.14) are equal to one. Hence, N (t) t-a -1 m=1 E a N i=1 A(k i , a) n i m ≤ C n W (a) |n|-1 , (6.16) for some constant C n depending only on the choice of n in M |n| . Moreover, since M |n| is finite, then N (t) t-a m=1 E a N i=1 A(k i , a) n i m ≤ CW (a) |n|-1 . (6.17) Step 3 : Analysis of C 1 . For n ∈ C 1 , and in this case only, the product N i=1 A(k i , a) n i m has only one term different from 1, and it follows from Theorem 6.3.3, that N (t) t-a -1 m=1 E N i=1 A(k i , a) n i m = W (a) |n|-1 N i=1 a 0 θe -θs W θ (s) 2 1 - 1 W θ (s) k i ds n i -δ i,1 . The corresponding contribution in (6.12) is I 1 := t 0 θW (a) |n|-1 P a (Z 0 (t) = k 1 ) N i=1 a 0 θe -θs W θ (s) 2 1 - 1 W θ (s) k i ds n i -δ i,1 E a N (t) t-a Card(C 1 ) da. Now, Card(C 1 ) is the number of way we can choose |n| -1 subtrees among the N (t) t-a -1 possible subtrees and choosing a way to allocate to each chosen subtree a mutation sizes k 1 , . . . , k N , i.e. Card(C 1 ) = N (t) t-a -1 |n| -1 (|n| -1)! N i=1 (n i -δ i,1 )! . Finally, I 1 = t 0 θW (a) |n|-1 P a (Z 0 (t) = k 1 ) N i=1 a 0 θe -θs W θ (s) 2 1 - 1 W θ (s) k i ds n i -δ i,1 E a N (t) t-a (|n|) N i=1 (n i -δ i,1 )! da, where (x) (|n|) is the falling factorial of order |n|. Since, N (t) t-a is geometrically distributed under P t with parameter W (t) W (a) , it follows that I 1 = |n|!W (t) |n| N i=1 (n i -δ i,1 )! t 0 θ e -θa W θ (a) 2 1 - 1 W θ (a) k 1 -1 N m=1 a 0 θe -θs W θ (s) 2 1 - 1 W θ (s) k i ds n i -δ i,1 da + O tW (t) |n|-1 Step 4 : Analysis of C 2 . We denote I 2 := E t t 0 N (t) t-a n∈C 2 P a (Z 0 (a) = k 1 ) N (t) t-a -1 m=1 E a N i=1 A(k i , a) n i m da. (6.18) Now, since Card(C 2 ) = O N (t) t-a |n|-2 , we have using estimation (6.17), I 2 ≤ t 0 N (t) t-a n∈C 2 CW (a) |n|-1 da ≤ C t 0 N (t) t-a |n|-1 W (a) |n|-1 da, for some positive real constant C. Using that N (t) t-a is geometrically distributed with parameter W (t) W (a) , it follows that there exists a positive real number Ĉ such that I 2 ≤ Ĉ t 0 W (t) W (a) |n|-1 W (a) |n|-1 da. Which imply that, I 2 = O tW (t) |n|-1 . Step 5 : Analysis of C 3 . 
In the case where there is a positive n i 1 (C 3 case), using that E a N i=1 A(k i , a) n i 1 1 Z 0 (a)=k l ≤ E a N i=1 A(k i , a) n i 1 , we have, t 0 N (t) t-a n∈C 3 E a N i=1 A(k i , a) n i 1 1 Z 0 (a)=k l N (t) t-a m=2 E a N i=1 A(k i , a) n i m da, ≤ t 0 N (t) t-a n∈C 3 E a N i=1 A(k i , a) n i 1 N (t) t-a m=2 E a N i=1 A(k i , a) n i m da, which is very similar to the the other steps. This term is O tW (t) |n|-1 because the condition i n i 1 > 0 reduces the number of terms in the multi-sum. Indeed, Card(C 3 ) = n 1:N -δ 1:N,1 j 1:N =0 s.t. i j i >0 n i 2 +•••+n i N (t) t-a =n i -δ i,l -j i 1 = n 1:N -δ 1:N,1 j 1:N =0 s.t. i j i >0 N i=1 N i=1 N (t) t-a -1 + n i -δ i,1 -j i (n i -δ i,1 -j i ) N i=1 (n i -δ i,1 -j i )! ≤C n 1:N -δ 1:N,1 j 1:N =0 s.t. i j i >0 N (t) t-a |n|-1-j i . Then, the expectation of the last quantity gives a polynomial of degree |n| -1 in W (t) W (a) . Using the same study as I 2 shows that this part is of order O tW (t) |n|-1 . Finally, summing over l ends the proof since the leading term is N l=1 |n|!W (t) |n| N i=1 (n i -δ i,1 )! t 0 θ e -θa W θ (a) 2 1 - 1 W θ (a) k l -1 N m=1 a 0 θe -θs W θ (s) 2 1 - 1 W θ (s) km ds nm-δ m,1 da, while the rest is a finite sum of O tW (t) |n|-1 -terms. By Lemma 6.2.1, c k = t 0 θe -θs W θ (s) 2 1 - 1 W θ (s) k-1 ds + O e -γt , where γ is equal to θ (resp. 2α -θ) in the clonal critical and subcritical cases (resp. supercritical case). Hence, we deduce (6.11). Limit theorems for the frequency spectrum Remark 6.3.10. Taking the behavior of P (Z 0 (a) = k) into account and using the Cauchy-Schwartz inequality for E A(k, a)1 Z 0 (a)= one could actually prove that the error term in (6.11) is of order O W (t) |n|-1 in the clonal sub-critical and super-critical cases, and O log t W (t) |n|-1 in the clonal critical case. Corollary 6.3.11. We have, conditionally on the nonextinction, lim t→∞ A(k, t) W (t) k≥1 = E (c k ) k≥1 in distribution, where E is an exponential random variable with parameter 1. Proof. From Lemma 6.3.9, we have lim t→∞ W (t) -|n| E t K i=1 A (k i , t) n i = |n|! N i=1 c n i k i . Since the finite dimensional law of a process with form E (c k ) k≥1 is fully determined by its moments, it follows from the multidimensional moment problem (see [START_REF] Kleiber | Multivariate distributions and the moment problem[END_REF]) and from the fact the events {N t > 0} increase to the event of nonextinction, that we have the claimed convergence. Limit theorems for the frequency spectrum The purpose of this section is to state the same kind of limit theorem as those obtained for N t in Chapter 5. We begin by the law of large number. This result was proved in [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF]. Theorem 6.4.1. We have, e -αt (A (k, t)) k≥1 -→ t→∞ E ψ (α) (c k ) k≥1 , a.s. and in L 2 , where E is the same random variable as in Theorem 5.2.1, and c k was defined in Proposition 6.3.8. Now, we can state central limit theorems related to this convergence. It can take several forms by before we recall that the Laplace distribution with zero mean and covariance matrix K is the probability distribution whose characteristic function is given, for all λ ∈ R n by 1 1 + 1 2 λ Kλ We denote this law by L (µ, K). We also recall that, if G is a Gaussian random vector with zero mean and covariance matrix K and E is an exponential random variable with parameter 1 independent of G, then √ EG is Laplace L (µ, K). We can now state the first CLT. Theorem 6.4.2. 
Suppose that θ > α and [0,∞) e (θ-α)v P V (dv) > 1 . Then, we have, under P ∞ , lim t→∞ e -α t 2 ψ (α)A(k, t) -e αt c k E k∈N d = L (0, K) , where K is some covariance matrix and the constants c k are defined in Proposition 6.3.8. The proof of this result can be found in Section 6.6. Remark 6.4.3. We are not able to compute explicitly the covariance matrix K in the general case due to our method of demonstration. However, all our other results give explicit formulas. In particular, the case where P V is exponential is given by the next theorem. The Yule case is also covered in the following theorem for d = 0 although it does not satisfy the hypothesis of Theorem 6.4.2. Theorem 6.4.4. Suppose that V is exponentially distributed with parameter d ∈ [0, b). In this case, α = b -d. We still suppose that α < θ, then lim t→∞ e -α t 2 ψ (α)A(k, t) -e αt c k E k∈N d = L (0, K) , w.r.t. P ∞ , where K is given by K l,k = M l,k + c k c l α b 1 -6 d α , and M l,k = 2ψ (α) ∞ 0 θe -θa W θ (a) 2 1 - 1 W θ (a) l-1 (E a [A(k, a)] -c k W (a)) + 1 - 1 W θ (a) k-1 (E a [A(l, a)] -c l W (a)) da -ψ (α) ∞ 0 θW (a) -1 E a (A(k, a) -c k N a ) 1 Z 0 (a)=l + (A(l, a) -c l N a ) 1 Z 0 (a)=k , (6.19) where W , W θ , ψ (α) are defined in the Section 6.2. The proof of this result can be found in Section 6.8. Note that an explicit formula for E t A(k, t) is given by 6.3.3. Explicit formulas for E t A(k, t)1 Z 0 (t)=l are given by 6.3.5, and E t N a 1 Z 0 (t)=k by 6.10. Remark 6.4.5. The condition on V in Theorem 6.4.2 is required only to ensure controls of the moments of the considered quantities. However, although the Yule case does not satisfy this condition (V = ∞ p.s.) it is included in this last theorem (d=0). This suggests that the condition on V may not be needed. The next theorem concerns the error between A(k, t) and c k N t . This case is easier to treat and we have an explicit expression of the covariance matrix of the limit. Theorem 6.4.6. Suppose that θ > α, then lim t→∞ ψ (α) e -α t 2 (A(k, t) -c k N t ) k∈N d = L (0, M ) , w.r.t. P ∞ , where M is defined in relation (6.19). The proof of this result can be found in Section 6.7. 6.5 Proof of Theorem 6.4.1 Proof. Using (6.10) and the bound E N a 1 Z 0 (a)=k ≤ E [N a ], it follows that E t (c k N t -A(k, t)) 2 = 2W (t) 2 ∞ t θe -θa W θ (a) 2 1 - 1 W θ (a) k-1 da 2 + O (W (t)) . Finally, it follows from Lemma 6.2.1 that E t e -2αt (c k N t -A(k, t)) 2 ∼ t→∞ Ce -γt , where γ is equal to θ (resp. 2α -θ) in the clonal critical and sub-critical cases (resp. supercritical case). From this point we follow the proof of Theorem 5.2.1, except that the Yule process used in (5.9) must be replaced by another Yule process corresponding to the a binary fission every time an individual experiences a birth or a mutation, i.e. the new Yule process has parameter b + θ. Indeed, the process A(k, t) can make a positive jump only in two cases : the first corresponding to the birth of an individual in a family of size k -1, the other one correspond to a mutation occurring on an individual in a family of size k + 1. Proof of Theorem 6.4.2 The proof of this theorem follows the same structure as Section 5.4. We refer the reader to this section for the details. It begins by some estimate on moments. Preliminary moments estimates We start by computing the moment in the case of a standard splitting tree. Case V ∅ L = V One of the main difficulties to extend the preceding proof to the frequency spectrum is to get estimates on E ψ (α)A(k, t) -e αt c k E n , for n = 2 or 3. 
We first study the renewal equation satisfied by EA(k, t)E similarly as in Lemma 5.4.4. Lemma 6.6.1 (Joint moment of E and A(k, t)). E [A(k, t)E] is the unique solution bounded on finite intervals of the renewal equation, f (t) = R + f (t -u)be -αu P (V > u) du + αE [A(k, .)] b R + e -αv P (V > ., V > v) dv (t) + αE [EX t ] , (6.20) with X t the number of families of size k alive at time t whose original mutation has taken place during the lifetime of the ancestor individual. Proof. We recall that A(k, t) is the number of non-ancestral families of size k at time t. Similarly as for N t , A(k, t) can be obtained as the sum of the contributions of all the trees grafted on the lifetime of the ancestor individual in addition to the mutations which take place on the ancestral branch, that is, A(k, t) = [0,t] A(k, t -u, ξ u )1 V ∅ >u ξ(du) + X t , where (A(k, t, i), t ∈ R + ) i≥1 is a family of independent processes having the same law as A(k, t). Now, taking the product A(k, t)N s and using the same arguments as in the proof of lemma 5.4.4 to take the limit in s leads to the result. In particular, the last term is obtained using that lim s→∞ E X t N s W (s) = E [X t E] . The result of Lemma 6.6.1 is quite disappointing since the presence of the mysterious process X t prevents any explicit resolution of equation (6.20). However, one may note that equation (6.20) is quite similar to equation (5.16) driving EN t E, so if the contribution of X t in the renewal structure of the process is small enough, one can expect the same asymptotic behaviour for EA(k, t)E as for EN t E. Moreover, we clearly have on X t the following a.s. estimate, X t ≤ [0,t] 1 Z (u) 0 (t-u)>0 1 V >u ξ(du), (6.21) where Z (i) 0 denote for the ancestral families on the ith trees grafted on the ancestral branch. Hence, if we take θ > α and we suppose V < ∞ a.s., one can expect that X t decreases very fast. These are the ideas the following Lemma is based on. Moreover, as it is seen in the proof of the following lemma, the hypothesis V < ∞ a.s. can be weakened. Lemma 6.6.2. Under the hypothesis of Theorem 6.4.2, for all k ≥ 1, there exists a constant γ k ∈ R such that, lim t→∞ EN t Ec k -EA(k, t)E = γ k . (6.22) Proof. Combining equations (5.16) and (6.20), we get that, EN t Ec k -EA(k, t)E = R + (EN t-u Ec k -EA(k, t -u)E) be -αu P (V > u) du + αb (c k EN . -E [A(k, .)]) R + e -αv P (V > ., V > v) dv (t) :=ξ (k) 1 (t) + c k P (V > t) -αE [X t E] :=ξ (k) 2 (t) , which is also a renewal equation. On one hand, using equations (3.4) and Theorem 6.3.3 imply that E t [c k N t -A(k, t)] = W (t) ∞ t θe -θs W θ (s) 2 1 - 1 W θ (s) k-1 ds, which leads using Lemma 6.2.1, to ξ 1 (t) =α R + (c k EN t-u -E [A(k, t -u)]) R + e -αv P (V > u, V > v) dvdu ≤ C [0,t] e (α-θ)t-u P (V > u) du [0,∞) e -αu du ≤ C α e -(θ-α)t t 0 e (θ-α)u P (V > u) du, (6.23) for some positive real constant C. The derivative of the r.h.s. of (6.23) is given by C α e -(θ-α)t e (θ-α)t P (V > t) -(α -θ) t 0 e (θ-α)u P (V > u) du , t > 0, (6.24) which is equal to C α e -(θ-α)t 1 - [0,t] e (θ-α)s P V (ds) , t > 0, using Stieljes integration by parts. Now, since, [0,∞) e (θ-α)s P V (ds) > 1, this shows that the right hand side of (6.23) is decreasing for t large enough. Moreover, it is straightforward to shows that the r.h.s. of (6.23) is also integrable. This implies that ξ (k) 1 is DRI (see Section 2.7 for the definition of DRI) from Lemma 2.7.1. On the other hand, it follows from (6.21) that X t E ≤ E [0,t] 1 Z (u) 0 (t-u)>0 1 V >t ξ(du). 
(6.25) Then, we obtain using Cauchy-Schwarz inequality, that E [X t E] ≤ 2α b E   [0,t] 1 Z (u) 0 (t-u)>0 1 V >t ξ(du) 2   1/2 . It follows that we need to investigate the behavior of E   (0,t) 1 Z (u) 0 (t-u)>0 1 V >t ξ(du) 2   , which is equal to t 0 P (Z 0 (t -u) > 0) P (V > t) bdu+ [0,t] 2 P (Z 0 (t -v) > 0) P (Z 0 (t -u) > 0) P (V > u, V > v) b 2 du dv, using Lemma 5.4.1. Then, since, from (6.2) and Lemma 6.2.1, P t-u (Z 0 (t -u) > 0) = e -θ(t-u) W (t -u) W θ (t -u) = O(e -(θ-α)(t-u) ), it follows, using that the right hand side of (6. c k (t) := t 0 θe -θa W θ (a) 2 1 - 1 W θ (a) k-1 da, (6.27) we have, from the proof of Proposition 6.3.8 and Lemma 4.1.1, ψ (α) 2 E t [A(k, t)A(l, t)] = 2e 2αt c k (t)c l (t) + e αt 4ψ (α)e αt F (t)c k (t)c l (t) + R ψ (α) + O (1) , (6.28) with R := -ψ (α) ∞ 0 2θ e -θa W (a) W θ (a) 2 1 - 1 W θ (a) l-1 a 0 e -θs W θ (s) 2 1 - 1 W θ (a) k-1 dsda -ψ (α) ∞ 0 2θ e -θa W (a) W θ (a) 2 1 - 1 W θ (a) k-1 a 0 e -θs W θ (s) 2 1 - 1 W θ (a) l-1 dsda + ψ (α) ∞ 0 θW (a) -1 E t A(k, t)1 Z 0 (a)=l + E t A(l, t)1 Z 0 (a)=k da, and F , µ are defined in Lemma 4.1.1. Now, using (5.20), we have E t E 2 -2 = -2µψ (α)e -αt + o(e -αt ), which leads to E t e -αt ψ (α)A(k, t) -Ec k e -αt ψ (α)A(l, t) -Ec l =E t e -2αt ψ (α) 2 A(k, t)A(l, t) -c l E t e -αt ψ (α)A(k, t)E -c k E t e -αt ψ (α)A(l, t)E + 2c k c l -2c k c l µψ (α)e -αt + o(e -αt ), =2 (c k (t) -c k ) (c l (t) -c l ) -4µψ (α)c k c l e -αt + Re -αt -2c k (t)c l + 2c l (t)c k -2c k c l ψ (α)e -αt E t N t E + ψ (α)c l e -αt E t [(c k N t -A(k, t)) E] + ψ (α)c k e -αt E t [(c l N t -A(l, t)) E] + o(e -αt ), Since, by Lemma 6.2.1 c k (t) = c k + O(e -θt ) = c k + o(e -αt ), it follows, combining (6.26), (5.21), and Lemma 6.6.2, that e αt E t e -αt ψ (α)A(k, t) -Ec k e -αt ψ (α)A(l, t) -Ec l =ψ (α) (c k γ l + c l γ k ) + c k c l 2e αt -2ψ (α)E t N t E + R -4µψ (α)c k c l + o(1) =ψ (α) (c k γ l + c l γ k ) + c k c l 1 ψ (α) + 3µ + R -4µψ (α)c k c l + o(1). The result follows readily from the fact that P (N t > 0) ∼ α b . Lemma 6.6.4 (Boundedness of the third moment). Let k 1 , k 2 , k 3 three positive integers, then E 3 i=1 e -α 2 t ψ (α)A(k i , t) -e αt Ec k i = O (1) . Proof. We have, E 3 i=1 ψ (α)A(k i , t) -e αt Ec k i e α 2 t ≤ 3 i=1   E   ψ (α)A(k i , t) -e αt Ec k i e α 2 t 3     1 3 . Hence, we only have to prove the Lemma for k 1 = k 2 = k 3 = k. Hence, E   ψ (α)A(k, t) -e αt Ec k e α 2 t 3   ≤ 8E ψ (α)A(k, t) -c k N t e α 2 t 3 + 8c k E ψ (α)N t -N ∞ t e α 2 t 3 + 8c k E N ∞ t -e αt E e α 2 t 3 . The last two terms have been treated in the proof of Lemma 5.4.7, and the boundedness of E ψ (α)A(k, t) -c k N t e α 2 t 3 , follows from the following Lemma 6.6.5 and Hölder's inequality. Lemma 6.6.5. For all k ≥ 1, E A(k, t) -c k N t e -α 2 t 4 , is bounded. Due to technicality, the proof of this lemma is postponed to the end of this chapter. On the event Γ u,t , we have a.s., A(k l , t) = Nu i=1 A (i) (k l , t -u, O i ), ∀l = 1, . . . , N, where the family A (i) (k l , t -u, O i ) i≥1 stand for the frequency spectrum for each subtree, which are independent from Lemma 5.3.3 (see also Section 5.4.2 and Figure 5.2). Hence, using Lemma 5.4.11, L k l t = Nu i=1 ψ (α)A (i) (k l , t -u, O i ) -e α(t-u) E i (O i )c k l e α 2 u e α 2 (t-u) . By Lemma 5.3.3, that the family A i (k l , t -u, O i ) 2≤i≤Nu is i.i.d. under P u . In the sequel, we denote, for all l and i ≥ 1, Ã(i) (k l , t -u, O i ) = ψ (α)A (i) (k l , t -u, O i ) -e α(t-u) E i (O i )c k l e α 2 (t-u) . 
As in the proof of Theorem 5.2.2, let ϕ K (ξ) := E exp i < Ã (K, t -u, O 2 ) , ξ > 1 Z 2 0 (t-u,O 2 )=0 , φK (ξ) := E exp i < Ã (K, t -u, O 1 ) , ξ > 1 Z 1 0 (t-u,O 1 )=0 . From this point, following closely the proof of Theorem 5.2.2, with β in 0, 1 2 ∧ (1 -α θ ) , the only difficulty is to handle the indicator function 1 Z 0 (t-u,O i )>0 in the Taylor development of ϕ K . We show how it can be done for one of the second order terms, and leave the rest of the details to the reader. It follows from Hölder's inequality that E   ψ (α)A (i) (k l , (1 -β)t, O i ) -e α((1-β)t) E i (O i )c k l e α 2 ((1-β)t) 2 1 Z 2 0 ((1-β)t,O 2 )>0   ≤ E   ψ (α)A (i) (k l , (1 -β)t, O i ) -e α(1-β)t E i (O i )c k l e α 2 (1-β)t 3   2 3 P Z 2 0 ((1 -β)t, O 2 ) > 0 1 3 , (6.30) from which it follows, using Lemma 6.6.8, that the r.h.s. of this last inequality is O P Z 2 0 (t -u, O 2 ) > 0 1 3 . Now, using (6.29) and Lemma 6.6.9, it is easily seen that lim t→∞ P Z 2 0 ((1 -β)t, O 2 ) > 0 = 0. Finally, using Lemma 6.6.3, we get lim t→∞ E   ψ (α)A (i) (k l , t -u, O i ) -e α(t-u) E i (O i )c k l e α 2 (t-u) 2 1 Z 2 0 (t-u,O 2 )=0   = ψ (α)a k,k . These allow us to conclude that lim t→∞ E βt e i<L (K) t ,ξ> 1 Γt = 1 1 + N i,j=1 M i,j ξ i ξ j , where K i,j is given by M i,j := ψ (α)a K i ,K j , with K is the multi-integer (k 1 , . . . , k N ), and the a l,k s are defined in Lemma 6.6.3. To end the proof, note that, E ∞ e i<L (K) t ,ξ> -E βt e i<L (K) t ,ξ> 1 Γ βt,t ≤ E 1 NonEx P (NonEx) - 1 N βt >0 1 Γ βt,t P (N βt > 0) → t→∞ 0, thanks to Lemma 6.6.9. 6.7 Proof of Theorem 6.4.6 Since all the ideas of the proof of this theorem have been developed in the last two section, we do not detail all the proof. The only step which needs clarification is the computation of the covariance matrix of the Laplace limit law M. According to the proof of Theorem 6.4.2, it is given by M i,j := lim t→∞ W (βt) e αβt E ψ (α)A (i) (k i , (1 -β)t, O i ) -ψ (α)c k i N (1-β)t e α 2 ((1-β)t) × ψ (α)A (i) (k j , (1 -β)t, O i ) -c k j N (1-β)t e α 2 ((1-β)t) 1 Z 2 0 ((1-β)t,O 2 )>0 , which is equal, thanks to (6.30) and an easy adaptation of Lemma 5.4.9, to M i,j = lim t→∞ bψ (α) α W (βt) e αβt e αt E e -αt A(k i , t) -c k i e -αt N t e -αt A(k j , t) -c k j e -αt N t . So it remains to get the limit of e αt E e -αt ψ (α)A(k, t) -ψ (α)c k e -αt N t e -αt ψ (α)A(l, t) -c l e -αt ψ (α)N t , as t goes to infinity. We recall that using the calculus made in the proof of Theorem 6.4.2, we have E t A(k, t)N t = 2W (t) 2 c k (t) -2W (t) [0,t] θP a (Z 0 (a) = k) da + W (t) [0,t] θW (a) -1 E a N a 1 Z 0 (a)=k da. ( 6 E [A(k, t)N s | F t ] = A(k, t)N t E [N s-t ] . So that, E [A(k, t)N s ] = E [A(k, t)N t ] (W (s -t) -P V W (s -t)) . By making a renormalization by e -αs and taking the limit as s goes to infinity, we get, E [A(k, t)E] = ψ (α)e -αt E [A(k, t)N t ] , since, in the Markovian case, it is known from [START_REF] Champagnat | Birth and death processes with neutral mutations[END_REF] that α b = ψ (α). Suppose first that d > 0. It follows that, E ψ (α)A(k, t) -e αt c k E ψ (α)A(l, t) -e αt c l E = ψ (α) 2 E t [A(k, t)A(l, t)] P (N t > 0) -c k ψ (α) 2 E t [A(l, t)N t ] P (N t > 0) -c l ψ (α) 2 E t [A(k, t)N t ] P (N t > 0) + 2ψ (α)e 2αt c k c l By (5.20), P (N t > 0) = ψ (α) + ψ (α) 2 µe -αt + o(e -αt ), so E ψ (α)A(k, t) -e αt c k E ψ (α)A(l, t) -e αt c l E = P (N t > 0) ψ (α) 2 E t [(A(k, t) -c k N t ) (A(l, t) -c l N t )]+c k c l ψ (α) 2e 2αt -ψ (α)E t N 2 t P (N t > 0) . 
Finally, since, using Proposition 4.1.1, lim t→∞ e -αt 2e 2αt -ψ (α)E t N 2 t P (N t > 0) = ψ (α) (1 -6µ) , it follows from (6.32), lim t→∞ E ψ (α)A(k, t) -e αt c k E ψ (α)A(l, t) -e αt c l E = ψ (α)M k,l + c k c l ψ (α) 2 (1 -6µ) = ψ (α)M k,l + c k c l ψ (α) 2 1 -6 d α , using that µ = 1 bEV -1 . In the Yule case, an easy adaptation of the preceding proof leads to lim t→∞ E ψ (α)A(k, t) -e αt c k E ψ (α)A(l, t) -e αt c l E = M k,l + c k c l . 6.9 Postponed estimates 6.9.1 Formula for the fourth moment of the error Lemma 6.9.1. E t (A(k, t) -c k N t ) 4 = 4 [0,t] θ W (t) W (a) E a 1 Z 0 (a)=k (A(k, a) -c k N a ) 3 da + 48 [0,t] θ W (t) 2 W (a) 2 1 - W (a) W (t) E a 1 Z 0 (a)=k N a A(k, a) E a [(c k N a -A(k, a))] da + 24 [0,t] θ W (t) 2 W (a) 2 1 - W (a) W (t) E a 1 Z 0 (a)=k N 2 a E a [(A(k, a) -c k N a )] da + 24 [0,t] θ W (t) 2 W (a) 2 1 - W (a) W (t) E a 1 Z 0 (a)=k A(k, a) 2 E a [(A(k, a) -c k N a )] da + 8 [0,t] θ W (t) 2 W (a) 2 1 - W (a) W (t) P a (Z 0 (a) = k) E a (A(k, a) -c k N a ) 3 da + 48 [0,t] θ W (t) 2 W (a) 2 1 - W (a) W (t) E a 1 Z 0 (a)=k A(k, a) E a (A(k, a) -c k N a ) 2 da + 72 [0,t] θ W (t) 3 W (a) 3 1 - W (a) W (t) 2 E a 1 Z 0 (a)=k (A(k, a) -c k N a ) E a [(A(k, a) -c k N a )] 2 da + 72 [0,t] θ W (t) 3 W (a) 3 1 - W (a) W (t) 2 P a (Z 0 (a) = k) E a (A(k, a) -c k N a ) 2 E a [A(k, a) -N a c k ] da + 96 [0,t] θ W (t) 4 W (a) 4 1 - W (a) W (t) 3 P a (Z 0 (a) = k) E a [(A(k, a) -c k N a )] 3 da + c 4 k E t N 4 t Proof. The proof of this Lemma lies on the calculation of the expectation of each term in the development of (A(k, t) -c k N t ) 4 . We begin by computing E t A(k, t) 4 . Using the formulas for the moments, we have A(k, t) 4 =4 [0,t]×N 1 Z i 0 (a)=k i N (t) t-a u 1:3 =1 3 j=1 i =j A (u j ) (k, a)N (da, di) =4 [0,t]×N 1 Z i 0 (a)=k A i (k, a)A i (k, a)A i (k, a)N (da, di) + 4 [0,t]×N 1 Z i 0 (a)=k N (t) t-a j 1 ,j 2 ,j 3 =1 j 1 =j 2 =j 3 =i A j 1 (k, a)A j 2 (k, a)A j 3 (k, a)N (da, di) + 12 [0,t]×N 1 Z i 0 (a)=k A i (k, a)A i (k, a) N (t) t-a j=1,j =i A j (k, a)N (da, di) + 4 [0,t]×N 1 Z i 0 (a)=k N (t) t-a j=1,j =i A j (k, a) 3 N (da, di) + 12 [0,t]×N 1 Z i 0 (a)=k A i (k, a) N (t) t-a j 1 ,j 2 =1,j 1 =j 2 =i A j 1 (k, a)A j 2 (k, a)N (da, di) + 24 [0,t]×N 1 Z i 0 (a)=k A i (k, a) N (t) t-a j 1 =1,j 1 =i A j 1 (k, a)A j 1 (k, a)N (da, di) + 12 [0,t]×N 1 Z i 0 (a)=k N (t) t-a j 1 ,j 2 =1,j 1 =j 2 =i A j 1 (k, a) 2 A j 2 (k, a)N (da, di). (6.33) The decomposition of the sum in form N (t) t-a u 1:3 =1 , has then been made to distinguish independence properties in our calculation. Actually, as soon as, i = j, A i (k, a) is independent from A j (k, a). It is essential to note that the expectation of these integrals with respect to the random measure N are all calculated thanks to Theorem 4.2.2. So, taking the expectation now leads to, E t A(k, t) 4 =4 [0,t] θE a N (t) t-a E a 1 Z 0 (a)=k A(k, a) 3 θda + 4 [0,t] θP a (Z 0 (a) = k) E a N (t) t-a (4) E a [A(k, a)] 3 da + 12 [0,t] θE a 1 Z 0 (a)=k A(k, a) 2 E a N (t) t-a (2) E a [A(k, a)] da + 4 [0,t] θP a (Z 0 (a) = k) E a N (t) t-a (2) E a A(k, a) 3 da + 12 [0,t] θE a 1 Z 0 (a)=k A(k, a) E a N (t) t-a (3) E a [A(k, a)] 2 da + 24 [0,t] θE a 1 Z 0 (a)=k A(k, a) E a N (t) t-a (2) E a A(k, a) 2 da + 12 [0,t] θP a (Z 0 (a) = k) E a N (t) t-a (3) E a A(k, a) 2 E a [A(k, a)] da. Using the same method for all the other terms and that, for any positive real number a lower than t, N t = N (t) t-a i=1 N (i) a , we get Lemma 6.9.1 by reassembling similar terms together. 
The last term is obtained using the geometric distribution of N t under P t . 6.9.2 Boundedness of the fourth moment Lemma 6.9.2. We begin the proof of the boundedness of the fourth moment by some estimates. E t [(A(k, t) -c k N t )] = O e -(θ-α)t , (i) E t (A(k, t) -c k N t ) 3 = O W (t) 2 , (ii) E t (A(k, t) -c k N t ) 2 = O (W (t)) , (iii) E t N n t = O(e nαt ), n ∈ N * , (iv) P t (Z 0 (t) = k) = O(e (α-θ)t ). (v) Proof. Relation (i) is easily obtained using the expectation of N t and A(k, t) and the behaviour of W provided by Proposition 4.1.1. The relation (iii) has been obtained in the proof of Theorem 5.2.1. The two last relations are easily obtained from (3.3), (6.2) and Lemma 6.2.1. The relation (ii) is obtained using the following estimation, E t (A(k, a) -c k N a ) 3 ≤ E t N a (A(k, a) -c k N a ) 2 . We begin the proof by computing the r.h.s. of the previous inequality using the same techniques as before. E A(k, t) 2 N t = 2 t 0 θ W (t) W (a) E N a A(k, a)1 Z 0 (a)=k da +4 t 0 θ W (t) 2 W (a) 2 1 - W (a) W (t) E N a 1 Z 0 (a)=k E [A(k, a)] da +4 t 0 θ W (t) 2 W (a) 2 1 - W (a) W (t) E A(k, a)1 Z 0 (a)=k E [N a ] da +4 t 0 θ W (t) 2 W (a) 2 1 - W (a) W (t) P a (Z 0 (a) = k) E [A(k, a)N a ] da +12 t 0 θ W (t) 3 W (a) 3 1 - W (a) W (t) 2 P a (Z 0 (a) = k) E [A(k, a)] E [N a ] da. 2E A(k, t)N 2 t = 2 t 0 θ W (t) W (a) E N 2 a 1 Z 0 (a)=k da +8 t 0 θ W (t) 2 W (a) 2 1 - W (a) W (t) E N a 1 Z 0 (a)=k E [N a ] da +4 t 0 θ W (t) 2 W (a) 2 1 - W (a) W (t) P a (Z 0 (a) = k) E N 2 a da +12 t 0 θ W (t) 3 W (a) 3 1 - W (a) W (t) 2 P a (Z 0 (a) = k) E [N a ] 2 da. Finally, E N t (A(k, t) -c k N t ) 2 = 2 t 0 θ W (t) W (a) E N a (A(k, a) -c k N a ) 1 Z 0 (a)=k da +4 t 0 θ W (t) 2 W (a) 2 1 - W (a) W (t) E N a 1 Z 0 (a)=k E [A(k, a) -c k N a ] da +4 t 0 θ W (t) 2 W (a) 2 1 - W (a) W (t) E (A(k, a) -c k N a ) 1 Z 0 (a)=k E [N a ] da +4 t 0 θ W (t) 2 W (a) 2 1 - W (a) W (t) P a (Z 0 (a) = k) E [N a (A(k, a) -c k N a )] da +12 t 0 θ W (t) 3 W (a) 3 1 - W (a) W (t) 2 P a (Z 0 (a) = k) E [N a ] E [A(k, a) -c k N a ] da +c 2 k E t N 3 t . Now, an analysis similar to the one of Lemma 6.6.5 leads to the result. Proof of Lemma 6.6.5. The ideas of the proof, is to analyses one to one every terms of the expression of E t (A(k, t) -c k N t ) 4 , given by Lemma 6.9.1 using Lemma 6.9.2 to show that they behave as O W (t) 2 . Since the ideas are the same for every terms, we just give a few examples. First of all, we consider [0,t] W (t) W (a) E a 1 Z 0 (a)=k (A(k, a) -c k N a ) 3 da. Using Lemma 6.9.2 (ii), we have [0,t] W (t) W (a) E a 1 Z 0 (a)=k (A(k, a) -c k N a ) 3 da = O W (t) 2 . Now take the term [0,t] W (t) 2 W (a) 2 E a 1 Z 0 (a)=k N 2 a E a [(A(k, a) -c k N a )] da, we have from Lemma 6.9.2 (i) and (iv), [0,t] W (t) 2 W (a) 2 E a 1 Z 0 (a)=k N 2 a E a [(A(k, a) -c k N a )] da ≤ [0,t] W (t) 2 W (a) 2 E a N 2 a e -(θ-α)a da = O W (t) 2 . Every term in W (t) or W (t) 2 are treated this way. Now, we consider the term in W (t) 4 which is I := 96 [0,t] W (t) 4 W (a) 4 P a (Z 0 (a) = k) E a [(A(k, a) -c k N a )] 3 da + 24W (t) 4 c 4 k , since N t is geometrically distributed under P t , and that E t N 4 t = 24W (t) 4 -36W (t) 3 + O(W (t) 2 ). 
(6.34) On the other hand, using the law of Z 0 (t) given by (6.2) and the expectation of A(k, t) given by Theorem 6.3.3 (under P t ), we have, 96 [0,t] W (t) 4 W (a) 4 P a (Z 0 (a) = k) E a [(A(k, a) -c k N a )] 3 da = -96W (t) 4 t 0 θe -θa W θ (a) 2 1 - 1 W θ (a) k-1 a 0 θe -θs W θ (s) 2 1 - 1 W θ (s) k-1 ds 3 da = -24W (t) 4 t 0 θe -θa W θ (a) 2 1 - 1 W θ (a) k-1 da 4 . Finally, I = 24W (t) 4 ∞ t θe -θa W θ (a) 2 1 - 1 W θ (a) k-1 da 4 = O W (t) 4 e -4θt = o(1). The last example is the most technical and relies with the term in W (t) 3 , which is, using (6.34) and Lemma 6.9.1, J :=72 [0,t] W (t) 3 W (a) 3 E a 1 Z 0 (a)=k (A(k, a) -c k N a ) E a [(A(k, a) -c k N a )] 2 da + 72 [0,t] W (t) 3 W (a) 3 P a (Z 0 (a) = k) E a (A(k, a) -c k N a ) 2 E a [A(k, a) -N a c k ] da -288 [0,t] W (t) 3 W (a) 3 P a (Z 0 (a) = k) E a [(A(k, a) -c k N a )] 3 da -36c 4 k W (t) 3 . On the other hand, using the calculus made in the proof of Theorem 6.4.2, we have E a (A(k, a) -c k N a ) 2 =4 [0,a] W (a) 2 W (s) 2 1 - W (s) W (a) P s (Z 0 (s) = k) E a (A(k, s) -c k N s ) ds + 2 [0,a] W (s) W (a) E a 1 Z 0 (s)=k (A(k, s) -c k N s ) ds + c 2 k W (a) 2 2 - 1 W (a) . Substituting this last expression in J leads to J = -144 [0,t] W (t) 3 W (a) 3 E a 1 Z 0 (a)=k (A(k, a) -c k N a ) [a,∞] P (Z 0 (a) = k) W (s) 2 E a [(A(k, s) -c k N s )] dsda + 144W (t) 3 [0,t] 1 W (a) E a 1 Z 0 (a)=k (A(k, a) -c k N a ) [a,t] 1 W (s) 2 P s (Z 0 (s) = k) E a [A(k, s) -N s c k ] da -144c 2 k [0,t] W (t) 3 W (a) P a (Z 0 (a) = k) E a [A(k, a) -N a c k ] da + 144 [0,t] W (t) 3 W (a) 3 P (Z 0 (a) = k) E a [A(k, a) -N a c k ] 3 da -288 [0,t] W (t) 3 W (a) 2 P a (Z 0 (a) = k) [0,a] 1 W (s) P s (Z 0 (s) = k) E a (A(k, s) -c k N s ) dsE a [A(k, a) -N a c k ] da + 72 [0,t] W (t) 3 W (a) P a (Z 0 (a) = k) c 2 k 2 - 1 W (a) E a [A(k, a) -N a c k ] da -288 [0,t] W (t) 3 W (a) 3 P a (Z 0 (a) = k) E a [(A(k, a) -c k N a )] 3 da -36c 4 k W (t) 3 . Using many times that, [0,t] θP (Z 0 (a) = k) W (s) 2 E a [(A(k, s) -c k N s )] ds = - [0,t] θe -θs W θ (s) 2 1 - 1 W θ (s) k-1 [s,∞] θe -θu W θ (u) 2 1 - 1 W θ (u) k-1 duds = c 2 k 2 - 1 2 [t,∞] θe -θs W θ (s) 2 1 - 1 W θ (s) k-1 ds 2 , thanks to (6.2), Theorem 6.3.3, and (3.6), we finally get J = -144 c 2 k -c k (t) 2 [0,t] W (t) 3 W (a) 3 E a 1 Z 0 (a)=k (A(k, a) -c k N a ) da + 36W (t) 3   c 2 k [t,∞] W (t) 3 W (a) 3 E a [A(k, a) -N a c k ] 3 da 2 - [t,∞] W (t) 3 W (a) 3 E a [A(k, a) -N a c k ] 3 da 4   + 144 (c k -c k (t)) 2 [0,t] W (t) 3 W (a) E a [A(k, a) -N a c k ] da + 36W (t) 3 (c k -c k (t)) 4 . This shows that J is O W (t) 2 . Chapitre 7 On the inference for size constrained Galton-Watson trees This chapter is dedicated to a joint work with Romain Azais from team Bigs (Inria Nancy) and Alexandre Genadot from team CQFD (Inria Bordeaux). It originally arose from the idea that contour processes should be used in order to perform statistics on tree shaped data. Indeed, such objects proved to be powerful in the theoretical study of trees, and are often more convenient to manipulate than trees. Many data are naturally modelled by an ordered tree structure : from blood vessels in biology to XML files in computer science [START_REF] Yang | Similarity evaluation on tree-structured data[END_REF] through the secondary structure of RNA in biochemistry. The statistical analysis of a dataset of hierarchical records is thus of a great interest. In particular, detecting differences in large tree structures is a complex and challenging issue. 
This question may be tackled via an edit distance, that is, the minimum number of elementary operations (insert or delete a node, for example) that must be performed to transform one tree into another. As a consequence, one may compute the distance matrix of a given tree dataset and thus apply any appropriate clustering method, which should solve the initial statistical problem. Nevertheless, this kind of strategy is not well adapted to exhibiting the evolution of a tree structure. The space of ordered trees cannot be represented in a Euclidean state space, and visualizing the main differences appearing in the history of tree data over time is often difficult. Of course, there are classical and easily computable indicators that at least partially sum up the dynamics of tree data: number of nodes, height, outdegree, number of leaves, etc. Each of them may be adapted to a given application. However, they all increase with the size of the tree and do not really capture the main structure of the data. The aim of this work is to introduce a real-valued quantity that describes the key structure of an ordered tree independently of its size. In probability theory, we often encounter trees that have been generated from independent and identically distributed numbers of offspring, which leads to the so-called Galton-Watson trees (sometimes also referred to as Bienaymé-Galton-Watson trees). Of course, these stochastic trees have random sizes. However, in many practical applications we are faced with random trees of a given size. We thus consider the class of Galton-Watson trees conditioned on having a certain number of nodes. This class is referred to as conditioned Galton-Watson trees. It is well known that several classes of random trees can be seen as conditioned Galton-Watson trees [26,56]: Motzkin trees from the uniform offspring distribution on the set {0, 1, 2}, Catalan trees from the offspring distribution (0.25, 0.5, 0.25) on {0, 1, 2}, Cayley trees from the Poisson offspring distribution, etc. We also refer the reader to [9, 3.1 Galton-Watson trees] for an enumeration of some specific parameterizations. In other words, conditioned Galton-Watson trees model a large variety of random hierarchical structures. In this paper we focus on conditioned critical Galton-Watson trees, that is to say, the expectation of the offspring distribution is 1. Conditioned Galton-Watson trees are simply critical Galton-Watson trees which are conditioned to have a fixed size. Our main goal is to estimate the variance of the birth distribution. As for the discrete genealogy of a splitting tree, the nodes of a Galton-Watson tree can be labelled according to the Ulam-Harris-Neveu notation (see Chapter 3). In standard Galton-Watson trees, the number of children of each node is distributed according to a probability measure µ on N. Moreover, if we denote by ζ_u the number of children of individual u in ∪_{n≥0} N^n, then the family (ζ_u)_{u ∈ ∪_n N^n} is assumed to be i.i.d. in the case of classical Galton-Watson trees. Hence, inferring the variance of the birth distribution is easy, using for instance the empirical variance, as illustrated in the sketch below. However, in the size-constrained case, one cannot expect to make estimations through standard statistical methods, since the independence and homogeneity properties of the family ζ_u have been broken by the conditioning.
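To make the contrast concrete, here is a minimal Python sketch, not part of the original text, that simulates an unconditioned critical Galton-Watson tree and applies the naive empirical variance estimator; a crude rejection step shows one (inefficient but exact) way to draw from the conditioned law GW_n(µ). The Catalan offspring distribution and all function names are illustrative choices.

```python
import random

# Illustrative critical offspring distribution (Catalan case): mean 1, variance 0.5.
MU = {0: 0.25, 1: 0.5, 2: 0.25}

def sample_offspring():
    return random.choices(list(MU), weights=list(MU.values()))[0]

def simulate_gw():
    """Simulate a GW(mu) tree; return the list of offspring counts (one per node)."""
    counts, queue = [], 1          # queue = number of nodes still waiting to reproduce
    while queue > 0:
        zeta = sample_offspring()
        counts.append(zeta)
        queue += zeta - 1
        if len(counts) > 10**6:    # guard against the (unlikely) very large tree
            raise RuntimeError("tree too large")
    return counts

def empirical_variance(counts):
    """Naive estimator: valid for unconditioned trees, biased under size conditioning."""
    m = sum(counts) / len(counts)
    return sum((c - m) ** 2 for c in counts) / len(counts)

def simulate_conditioned_gw(n, max_tries=10**5):
    """Crude rejection sampling from GW_n(mu): keep trees with exactly n nodes."""
    for _ in range(max_tries):
        counts = simulate_gw()
        if len(counts) == n:
            return counts
    raise RuntimeError("rejection sampling failed")

if __name__ == "__main__":
    random.seed(0)
    free = simulate_gw()
    cond = simulate_conditioned_gw(20)
    print("unconditioned tree:", len(free), "nodes, empirical variance", empirical_variance(free))
    print("conditioned tree  :", len(cond), "nodes, empirical variance", empirical_variance(cond))
```

Under the conditioning, the offspring counts returned by the rejection step are no longer i.i.d., which is precisely why the estimators developed below work with the Harris path instead.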
Such a problem has already been studied in a recent work by Bharath et al. [9], in which the authors use the knowledge of the asymptotic distribution of the height of a uniformly sampled node in order to make inferences from a forest of trees. Here we introduce two new estimators, based on the contour processes of a forest of Galton-Watson trees, which appear to behave better. Section 7.1 is devoted to an introduction to Galton-Watson trees conditioned on having a fixed size and to their contours. For discrete trees, there are many different contour processes which can be constructed from a tree. In this work we focus on the well-known Harris path. Basics on size-constrained Galton-Watson trees This section is devoted to an introduction to size-constrained Galton-Watson trees. The next subsection simply recalls the definition. Subsection 7.1.2 gives the definition of the Harris path. Subsection 7.1.3 recalls the well-known limit theorem which describes the asymptotic behaviour of the Harris path as the number of nodes in the tree increases. Definition Intuitively, a Galton-Watson tree can be seen as a tree encoding the dynamics of a population generated from some offspring distribution µ on N. A Galton-Watson tree τ with offspring distribution µ is a random rooted tree constructed recursively as follows. - The number of children ζ_∅ emanating from the root is a random variable with law µ. The first generation thus consists of ζ_∅ vertices. - Assume that the n-th generation of children has been constructed and consists of a set of vertices V_n ⊂ N^n (with the Ulam-Harris-Neveu labelling). Then, generation n + 1 is constructed such that {ζ_v : v ∈ V_n} is a collection of independent random variables with law µ. In the sequel, we use the notation GW(µ) for the law of a Galton-Watson tree with offspring distribution µ. The asymptotic behavior of Galton-Watson trees may exhibit different regimes depending on the average number of offspring per capita, µ̄ = Σ_{k≥0} k µ(k). - The subcritical case: µ̄ < 1. In this case, the number of vertices is almost surely finite with finite expectation. This means that the population becomes extinct almost surely and has a finite expected number of individuals. - The critical case: µ̄ = 1. The fact that the offspring distribution µ is critical also ensures the almost sure finiteness of a critical Galton-Watson tree, except when µ(1) = 1. When µ(1) < 1, contrary to the subcritical case, the expected number of individuals is infinite. - The supercritical case: µ̄ > 1. In this case, the number of vertices explodes with positive probability. We then write GW_n(µ) for the law of Galton-Watson trees conditioned on having n vertices. We always state our results assuming critical Galton-Watson processes. However, this is not really a restriction since, as noted in [78, 6.3 Brownian asymptotics for conditioned Galton-Watson trees], the measure GW_n(µ) is the same as GW_n(µ_θ) where, for an arbitrary θ > 0 such that g(θ) = Σ_k µ(k)θ^k is finite, µ_θ(k) = µ(k)θ^k/g(θ). Therefore, in some sense, conditioned non-critical Galton-Watson trees are critical ones. Remark 7.1.1. An important consequence of this last remark is that we cannot estimate the mean of the distribution µ from a conditioned Galton-Watson tree (or from a forest of such trees). From ordered trees to Harris paths In graph theory, a tree τ is a graph G = (V, E) that satisfies the following two conditions: G is connected and has no cycles.
In addition, a rooted tree is a tree in which one node has been distinguished as the root, denoted here by r(τ) (always drawn at the bottom of the tree in this chapter). In this case, the edges are assigned a natural orientation, away from the root towards the leaves. One obtains a directed rooted tree in which there exists a parent-child relationship: the parent of a node v is the first vertex met on the path to the root starting from v. The length of this path (in number of nodes) is called the height h(v) of v. The set c(v) of children of a vertex v is the set of nodes that have v as parent. An ordered or plane tree is a rooted tree in which an ordering has been specified for the set of children of each node, conventionally drawn from left to right. We recall from Chapter 3 that the Ulam-Harris-Neveu labelling provides a natural order. Tree structures can be traversed in many different ways, for example using the order introduced in Chapter 3. A depth-first search algorithm traversing a tree in this order is given in Algorithm 1 below (where, for a node v of τ, t[v] denotes the subtree containing v and its children). This algorithm is a variant of the classical depth-first search: it is a version that returns to the parent once all the descendants of a given child of the current node have been visited. In this case, each node v appears ζ_v + 1 times. The result is thus a sequence of length Σ_{v∈V} (ζ_v + 1) = #V + Σ_{v∈V} ζ_v = 2#V − 1, because the root is the only vertex not counted in Σ_{v∈V} ζ_v. The Harris walk H[τ] of an ordered rooted tree τ is defined from both the depth-first search returning to parent and the notion of height of nodes. H[τ] is defined as a sequence of integers indexed by the set {0, . . . , 2#V} as follows: - H[τ](0) = H[τ](2#V) = 0, - for 1 ≤ k ≤ 2#V − 1, H[τ](k) = h(v) + 1, where v is the k-th node in the pre-order (returning to parent) traversal of τ. The Harris process is then defined as the linear interpolation of the Harris walk (see the example in Figure 7.1). Note that, as displayed in Figure 7.2, the tree can be recovered from the contour, so that the correspondence is one to one. In the sequel, we denote by (H[τ](t), t ∈ [0, 2#V]) the linear interpolation of the Harris walk.
Function DFS(τ, l = ∅):
  Data: an ordered tree τ
  Result: vertices of τ in depth-first order (returning to parent)
  add r(τ) to l
  for v in c(r(τ)) do
    if r(t[v]) is not in l then
      call DFS(t[v], l)
    add again r(τ) to l
  return l
Algorithm 1: Recursive depth-first search.
Asymptotic behaviour of the Harris path We consider a tree τ_n with distribution GW_n(µ), where µ is some critical offspring distribution whose variance is denoted by σ². We focus on the asymptotic behavior of the Harris process. Theorem 7.1.2. When n goes to infinity, we have (H[τ_n](2nt)/√n, t ∈ [0, 1]) (d)→ (2/σ · e_t, t ∈ [0, 1]) as n → ∞, where e is a standard Brownian excursion, the convergence holding in law in the space C([0, 1], R). Let us simply recall that a standard Brownian excursion is a Brownian motion conditioned (for instance in the sense of h-transform) on being positive and on taking the value 0 at time 1. The density of e_t, for 0 ≤ t ≤ 1, is given in [80, XI.3 Bessel Bridges] and reads f_{e_t}(x) = √(2/π) · x²/(t(1−t))^{3/2} · exp(−x²/(2t(1−t))) 1_{R_+}(x). (7.1) From this, we can compute some simple functionals of the excursion. For instance, we have, ∀ 0 ≤ t ≤ 1, E[e_t] = (4/√(2π)) √(t(1−t)) and E[e_t²] = 3t(1−t).
(7.2) In the sequel, we denote by (E_t, t ∈ [0, 1]) the expectation of a normalized Brownian excursion, that is, E_t = E[e_t], ∀t ∈ [0, 1]. The easiest way to simulate a Brownian excursion is certainly via its identity in law with a three-dimensional Bessel bridge, which is simply the Euclidean norm of a three-dimensional Brownian bridge, (e_t, t ∈ [0, 1]) (d)= (√(Σ_{i=1}^3 (B^{(i)}_t − t B^{(i)}_1)²), t ∈ [0, 1]), (7.3) where B^{(1)}, B^{(2)} and B^{(3)} are three independent Brownian motions. The convergence presented in Theorem 7.1.2 also holds in expectation [25, Theorem 1]. Theorem 7.1.3. When n goes to infinity, we have, ∀ 0 ≤ t ≤ 1, E[H[τ_n](2nt)/√n] → (2/σ) E_t. Note that the quantity appearing in this theorem is σ^{-1}. For this practical reason, we decided to estimate σ^{-1}. Inferring σ^{-1} from a forest In this section we propose two methods to estimate σ^{-1}. After that, we study the behaviour of our estimators. Let τ_n be a size-constrained Galton-Watson tree, and H[τ_n] its Harris path. As usual, the idea should be to construct some operator T : C([0, 1]) → R_+ such that, as n increases, T(H[τ_n]) becomes close to σ^{-1} in some sense. However, the weak convergence given by Theorem 7.1.2 does not enable us to expect a strong convergence. Worse, it appears, according to [55], that one cannot construct on the same probability space a couple (τ_n, τ_{n+1}) such that τ_n is a subtree of τ_{n+1}. This suggests that we cannot hope to construct an estimator of σ^{-1} from only one tree. Our purpose is then to construct an efficient estimator of σ^{-1} from a forest. Adequacy of the Harris path with the expected contour Let τ_n ∼ GW_n(µ) with µ̄ = 1. We assume that the offspring distribution µ is unknown. By virtue of Theorem 7.1.3, the asymptotic average behavior of the normalized Harris process (n^{-1/2} H[τ_n](2nt), 0 ≤ t ≤ 1) is given by (2σ^{-1} E_t, 0 ≤ t ≤ 1), where σ^{-1} is obviously also unknown. We propose to estimate σ^{-1} by minimizing the L²-error defined by λ → ‖H[τ_n](2n·)/√n − 2λE‖²_2. The solution of this least-square problem is well known and is given by λ[τ_n] = ⟨H[τ_n](2n·), E⟩ / (2√n ‖E‖²_2). (7.4) Corollary 7.2.1. When n goes to infinity, λ[τ_n] converges in distribution to σ^{-1} Λ_∞, where the real random variable Λ_∞ is defined by Λ_∞ = ⟨e, E⟩ / ‖E‖²_2. Proof. The result directly follows from Theorem 7.1.2 because the functional x → ⟨x, E⟩ is continuous on C([0, 1]). □ Remark 7.2.2. The convergence in distribution stated in Corollary 7.2.1 seems quite unsatisfactory, because it means that λ[τ_n] is not a consistent estimator of σ^{-1}, and the least-square strategy thus looks inadequate. Nevertheless, one cannot expect a stronger convergence from the observation of only one stochastic process within a finite window of time. This is why one may only focus on the estimation of the parameter of interest σ^{-1} from a forest of conditioned Galton-Watson trees. This statistical framework is also considered in [9]. Computing λ[τ_n] is only a first step in the estimation of the inverse standard deviation from a large number of conditioned Galton-Watson trees. As a consequence, the distribution of the limit variable Λ_∞ is of primary importance. Proposition 7.2.3. The random variable Λ_∞ admits a density f_{Λ_∞} with respect to the Lebesgue measure. Furthermore, E[Λ_∞] = 1. (7.5) Proof.
The existence of a density was already known [START_REF] Louchard | Kac's formula, Levy's local time and Brownian excursion[END_REF][START_REF] Louchard | Tail estimates for the Brownian excursion area and other Brownian areas[END_REF] for the random variable Thanks to the Feynmann-Kac formula, the authors express this quantity in terms of Airy functions. Then, they inverse the Laplace transform via analytical methods. Unfortunately, their method does not extend to our case. Indeed, in their case, an expression of the double Laplace transform given above is derived from the Feynmann-Kac formula for standard Brownian motion which tells us that the function u(t, x) = E x f (B t ) exp t 0 B s ds , ∀(t, x) ∈ R + × R, is the solution of the PDE ∂ t u(t, x) = 1 2 ∆u(t, x) + xu(t, x) ∀x ∈ R, t ∈ R + , u(0, x) = f (x) ∀x ∈ R. In this case, taking the Laplace transform in time of u leads to an ODE whose solution can be express in term of Airy functions (see [START_REF] Janson | Brownian excursion area, wright's constants in graph enumeration, and other brownian areas[END_REF]). In our case, the PDE becomes inhomogeneous in time which makes such transformation useless. As a consequence, one cannot obtain informations by this method. That is why we propose a new method using Malliavin calculus and the representation of the Brownian excursion as a three-dimensional Bessel bridge (7.3) to show that Λ ∞ admits a density. We consider the probability space (C([0, 1], R 3 ), F, W), where C([0, 1], R 3 ) is endowed with the topology of uniform convergence, F is the corresponding Borel σ-field and W is the Wiener measure. Let T be the continuous linear operator defined by T : C([0, 1], R 3 ) → (C([0, 1], R 3 ), ϕ → (T ϕ(s) = ϕ s -sϕ 1 ) . Let also Γ be the following function, Γ : ϕ → 1 0 ϕ(s) 3 E s ds. where x denotes the Euclidian norm on R 3 . With these notations and (7.3), we have that the pushforward measure of W through the application F : ϕ → Γ(T ϕ), is the law of E 2 2 Λ ∞ . In other words, the random variable F is equal in distribution to E 2 2 Λ ∞ . Now for every ϕ in C([0, 1], R 3 ) such that Leb {t ∈ R + : ϕ(t) = 0} = 0, we have that Γ is Frechet differentiable at point ϕ (where Leb denotes the Lebesgue measure). Indeed, set D ϕ Γ : (C([0, 1], R 3 ) → R, h → 1 0 ϕ(s),h(s) ϕ(s) E s ds. Then, some straightforward manipulations give 1 0 ϕ(s) + h(s) -ϕ(s) - ϕ(s), h(s) ϕ(s) ds = 1 0   h(s) 2 + ϕ(s), h(s) 1 -ϕ(s)+h(s) ϕ(s) ϕ(s) + h(s) + ϕ(s)   ds. Now, Cauchy-Schwarz inequality entails 1 0 ϕ(s) + h(s) -ϕ(s) - ϕ(s), h(s) ϕ(s) ds ≤ 1 0       h(s) 2 + h(s) ϕ(s) -ϕ(s) + h(s) ϕ(s) + h(s) + ϕ(s)       ds ≤ h ∞ 1 0     h(s) + ϕ(s) -ϕ(s) + h(s) ϕ(s) + h(s) + ϕ(s)     ds. Now, since 1 0     h(s) + ϕ(s) -ϕ(s) + h(s) ϕ(s) + h(s) + ϕ(s)     ds is well-defined (because the integrand is bounded by 2) and goes to zero as h ∞ goes to zero, this prove that D ϕ Γ is the Frechet derivative of Γ at point ϕ. Now, since T is linear, we have that F is Frechet differentiable at every ϕ such that Leb {t ∈ R + : ϕ(t) = 0} = 0 and D ϕ F = D T ϕ Γ•T . We now show that F belongs to the Malliavin-Sobolev space D 1,2 (see [75, p. 25-27] for the definition of this space). Let h be an element of L 2 ([0, 1], R 3 ), it is easily seen that F (ω + • 0 h s ds) -F (ω) ≤ 1 0 t 0 h s ds + t 1 0 h s ds E t dt. 
But in the right-hand side of the last inequality, we have, using Jensen's inequality, ∫_0^1 [ (Σ_{i=1}^3 (∫_0^t h^{(i)}_s ds)²)^{1/2} + t (Σ_{i=1}^3 (∫_0^1 h^{(i)}_s ds)²)^{1/2} ] E_t dt ≤ ∫_0^1 (Σ_{i=1}^3 ∫_0^1 (h^{(i)}_s)² ds)^{1/2} (1 + t) E_t dt = ∫_0^1 ‖h‖_{L²([0,1],R³)} (1 + s) E_s ds. From this, using the results of [75, p. 35], we have that F belongs to the space D^{1,2}. Before going further, let us recall some facts about the Malliavin derivative. When working with the probability space (C([0, 1], R³), F, W), it is known (see Section 1.2.1 in [75]) that there exist strong connections between the Malliavin derivative and the Fréchet derivative for a random variable G of D^{1,2} defined from (C([0, 1], R³), F, W) to R. Since the Fréchet derivative D_ω G of G at point ω is a continuous linear form from C([0, 1], R³) into R, it can be identified with a triple (µ^ω_1, µ^ω_2, µ^ω_3) of σ-finite measures on R such that D_ω G(h) = Σ_{i=1}^3 ∫_{[0,1]} h^{(i)}_s µ^ω_i(ds), ∀h ∈ C([0, 1], R³). In this case, the Malliavin derivative of G is the random process belonging to L²([0, 1], R³) given by {(µ^ω_1((u, 1]), µ^ω_2((u, 1]), µ^ω_3((u, 1])), u ∈ [0, 1]}. In our case, since D_ϕ F(h) = ∫_0^1 ⟨h_s, (ϕ_s − sϕ_1)/‖ϕ_s − sϕ_1‖⟩ E_s ds − ∫_0^1 ⟨h_s, ∫_0^1 v(ϕ_v − vϕ_1)/‖ϕ_v − vϕ_1‖ E_v dv⟩ δ_1(ds), it follows that the Malliavin derivative of F is given by DF = ( ∫_0^1 ((ω_s − sω_1) E_s/‖ω_s − sω_1‖)(1_{s>u} − s) ds, u ∈ [0, 1] ) ∈ L²([0, 1], R³). Now, since DF is W-almost everywhere nonzero (in L²([0, 1], R³)), we obtain, using [75, Theorem 2.1.2], the existence of a density for the push-forward measure of W by F with respect to the Lebesgue measure. □ It should be noted that the weak limit of λ[τ_n] has mean equal to σ^{-1} by (7.5). Moreover, it can be shown that the random variable Λ_∞ is square integrable. Indeed, since the function E is bounded, we have 0 ≤ Λ_∞ ≤ C ∫_0^1 e_t dt, for some positive constant C. Now, it is known that the random variable ∫_0^1 e_t dt admits moments of all orders (see for instance [68]). The variance of Λ_∞ can then be evaluated numerically in order to compare our methods with other estimators. We use Monte Carlo simulations to produce a sample with the same law as Λ_∞ to achieve this task. This leads to Var(Λ_∞) ≈ 0.0690785 (a Monte Carlo sketch is given below). At this point, it is quite interesting to compare our approach to the one developed in [9]. As in the present paper, the authors of [9] construct estimators for the inverse standard deviation of the offspring distribution of a forest of conditioned critical Galton-Watson trees. Their strategy relies on the distance to the root of a uniformly sampled vertex v of the considered tree τ_n ∼ GW_n(µ), δ[τ_n] = h(v)/√n, where we recall that h(v) is the height of v in the tree. Using Theorem 7.1.2, it has been shown that δ[τ_n] converges in law, when the number of nodes n goes to infinity, towards σ^{-1} ∆_∞, where the random variable ∆_∞ follows the Rayleigh distribution with scale parameter 1 [9, Proposition 4], with density, ∀x ∈ R_+, f_{∆_∞}(x) = x exp(−x²/2). This was not noticed in [9], but we emphasize that δ[τ_n] is somewhat biased because E[∆_∞] = √(π/2) ≠ 1.
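The numerical values just quoted can be reproduced with a few lines of Python; the following sketch, not part of the original text, only uses the Bessel-bridge representation (7.3) and the Rayleigh limit of [9] — the grid size, sample size, and random seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 1000                                                     # time grid size (arbitrary)
t = np.linspace(0.0, 1.0, M + 1)
E_t = 4.0 / np.sqrt(2.0 * np.pi) * np.sqrt(t * (1.0 - t))    # E_t = E[e_t], see (7.2)
norm_E2 = np.mean(E_t ** 2)                                  # ||E||_2^2 via a Riemann sum

def excursion():
    """One Brownian excursion via the 3d Bessel bridge identity (7.3)."""
    dB = rng.normal(scale=np.sqrt(1.0 / M), size=(3, M))
    B = np.concatenate([np.zeros((3, 1)), np.cumsum(dB, axis=1)], axis=1)
    bridge = B - t * B[:, -1:]                               # three independent Brownian bridges
    return np.sqrt((bridge ** 2).sum(axis=0))

# Monte Carlo sample of Lambda_infty = <e, E> / ||E||_2^2
sample = np.array([np.mean(excursion() * E_t) / norm_E2 for _ in range(20000)])
print("mean of Lambda        ~", sample.mean())              # should be close to 1, by (7.5)
print("Var(Lambda)           ~", sample.var())               # close to the reported 0.069...
print("Var(sqrt(2/pi)*Delta) =", (4.0 - np.pi) / np.pi)      # Rayleigh case, = 0.2732...
```

Such a sample can also serve as a plug-in for the quantile function F_{Λ_∞}^{-1} needed by the Wasserstein estimator of Section 7.2.2.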
Nevertheless, one may avoid this issue by considering the quantity δ̃[τ_n] = √(2/π) δ[τ_n], which converges to σ^{-1} √(2/π) ∆_∞, which equals σ^{-1} on average. As a consequence, λ[τ_n] and δ̃[τ_n] are two quantities directly computable from the tree τ_n that may be used to estimate the inverse standard deviation. We propose to compare them through their respective asymptotic dispersions. A first comparison may be done by computing the variances of Λ_∞ and √(2/π) ∆_∞. One has Var(√(2/π) ∆_∞) ≈ 0.2732395 and Var(Λ_∞) ≈ 0.0690785. This difference in the dispersions is quite apparent in Figure 7.3, where the densities of √(2/π) ∆_∞ and Λ_∞ are displayed. Consequently, one may expect better results in terms of dispersion from our strategy. Estimation strategies In this section, we detail two ideas in order to estimate σ^{-1} from a forest of conditioned Galton-Watson trees. A forest is defined as a tuple of trees. Let N be a positive integer. In this section, we consider a forest F made of N independent trees τ_1, . . . , τ_N with respective sizes n_1, . . . , n_N and respective laws GW_{n_1}(µ), . . . , GW_{n_N}(µ). Least-square estimation This first strategy relies on the goodness of fit of the Harris path of the forest with the expected limiting contour. This adequacy is measured by means of an L² norm. More precisely, we denote by (H[F](t), t ∈ [0, N]) the Harris path of the forest F. This process is defined by ∀ 0 ≤ t ≤ N, H[F](t) = Σ_{i=1}^N (1/√n_i) H[τ_i](2n_i(t − i + 1)) 1_{[i−1,i)}(t), the Harris path of a forest being the concatenation of the Harris paths of each tree, in the natural order. We propose to estimate σ^{-1} by λ_ls[F], the value of λ that minimizes the L² error ‖H[F](·) − λH(· − ⌊·⌋)‖²_{L²}. That is, λ_ls[F] = argmin_{λ∈R_+} ‖H[F](·) − λH(· − ⌊·⌋)‖²_{L²}. As in (7.4), λ_ls[F] can be explicitly computed. Indeed, one can check that λ_ls[F] = ⟨H[F](·), H(· − ⌊·⌋)⟩ / ‖H(· − ⌊·⌋)‖²_{L²}. (7.6) Interestingly, λ_ls[F] is simply the average of the quantities λ[τ_i] (defined in (7.4)), λ_ls[F] = (1/N) Σ_{i=1}^N λ[τ_i]. Thus, according to Corollary 7.2.1 and Theorem 7.1.3, one can expect that λ_ls[F] tends to σ^{-1} in some sense, when both N and the n_i go to infinity, by virtue of the law of large numbers. Estimation by minimal Wasserstein distance In the preceding method, we did not use our knowledge of the limiting distribution of the random variables of type λ[τ_n]. In order to take this into account, one may want to test the goodness of fit between the empirical measure P̂ defined by P̂ = (1/N) Σ_{i=1}^N δ_{λ[τ_i]} (7.7) and the law of Λ_∞. Using Wasserstein metrics to align distributions is rather natural, since it corresponds to the transportation cost between two probability laws. In particular, this feature appears to be useful in a statistical framework [30,41]. In our case, P̂ is expected to look like (in some sense) σ^{-1} Λ_∞ in the limit of an infinite forest of infinite trees. That is why we propose to estimate σ^{-1} with the real number λ which minimizes the distance between P̂ and the law of λΛ_∞, denoted P_{λΛ_∞}. More precisely, our estimator λ_W[F] is defined by λ_W[F] = argmin_{λ>0} d_W(P̂, P_{λΛ_∞}), (7.8) where d_W denotes the Wasserstein distance of order 2.
The Wasserstein distance of order 2, denoted d W (ν 1 , ν 2 ), between two probability measures ν 1 and ν 2 can be defined (see for instance [START_REF] Del Barrio | Central limit theorems for the Wasserstein distance between the empirical and the true distributions[END_REF]) from their cumulative distribution functions F 1 and F 2 as follows, d W (ν 1 , ν 2 ) = 1 0 F -1 1 (t) -F -1 2 (t) 2 dt. (7.9) Let F be the cumulative function of the empirical measure P, while F λΛ∞ stands for the cumulative function of the random variable λΛ ∞ . As a consequence of (7.9), one has thanks to the fact that F -1 λΛ∞ = λF -1 Λ∞ . It follows that minimizing the Wasserstein distance boils down to solve a least-square minimization problem. Hence, it comes that λ W [F] = F -1 , F -1 Λ∞ F -1 Λ∞ 2 L 2 = 1 F -1 Λ∞ 2 L 2 N i=1 λ[τ (i) ] i N i-1 N F -1 Λ∞ (s)ds, where ( λ[τ (i) ]) 1≤i≤N denotes the order statistic associated to the family ( λ[τ i ]) 1≤i≤N . Remark 7.2.4. We point out the fact that there is no problem of definition in the above quantities because both F -1 and F -1 Λ∞ belong to L 2 ([0, 1]). In the first case, this follows from the fact that F -1 is bounded (because P has compact support). For F -1 Λ∞ , this comes from the uniform sampling principle which entails that for x ∈ R + and of Λ ∞ (dashed line) estimated from 100 000 simulated Brownian excursions. 1 0 F -1 Λ∞ (u) 2 du = E[Λ 2 ∞ ]. Main results Asymptotic regimes In this section, we study the asymptotic properties of our estimators. Before going further, let us introduce some notations. In the sequel, the set of integer sequences is denoted S. For any positive real number A, we denote by S A the subset of S defined by S A = u ∈ S | min i≥1 u i ≥ A In addition, for any sequence u in S and any positive integer N , u N is the multi-integer made of the N first components of u, that is u N = (u 1 , . . . , u N ) . Moreover, for any multi-integer n in ∪ n≥1 N n , we denote by (n) its number of components and by m(n) its minimal value, that is m(n) = min 1≤i≤ (n) n i . Somehow, in the forests we are about to consider, m(n) refers to the size of the smallest tree whereas (n) refers to the size of the forest. Now, let us introduce our probabilistic framework. Let (τ k n ) n,k≥1 be a family of conditioned Galton-Watson trees such that, for a given n, the family (τ k n ) k≥1 is i.i.d. GW n (µ). From this family, we define, for any mutli-integer n = (n 1 , . . . , n N ), the random forest F n made of the trees (τ 1 n 1 , . . . , τ N n N ). The idea of this construction is to consider increasing (in the sense of inclusion) sequences of random forests. Indeed, assume we are given a sequence (u n ) n≥1 of integer (corresponding to the size of our trees), then the N first trees of the forest F u N +1 are the same as the trees of the forest F u N . To be crystal clear, let us precise what we mean by saying that something converges as m(n) (or (n)) goes to infinity. Let f be an application from ∪ n≥1 N n into some metric space (E, d) (of course, what we are about to say trivially extend to any topological space). We say that f converges to some element e of E as m(n) ( (n), respectively) goes to infinity if ∀ε > 0, ∃A ∈ R + , ∀n ∈ ∪ n≥1 N n , m(n) > A ( (n) > A, respectively) ⇒ d(f (n), e) < ε. In this section, two asymptotic regimes are considered : when (n) goes to infinity (infinite forest regime) and when m(n) goes to infinity (infinite trees regime). 
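Before turning to the asymptotic study, note that the closed form obtained above makes λ_W[F] easy to evaluate once F^{-1}_{Λ∞} is approximated, for instance by the empirical quantile function of a large Monte-Carlo sample of Λ∞ (as is done later in the chapter). A hedged numerical sketch, with hypothetical names, could be:

```python
import numpy as np

def lambda_wasserstein(lambdas, lambda_inf_sample):
    """Order-statistics form of the Wasserstein estimator: `lambdas` holds the
    per-tree quantities lambda[tau_i]; `lambda_inf_sample` is a (large) Monte-Carlo
    sample of Lambda_infty approximating its quantile function."""
    lam_sorted = np.sort(np.asarray(lambdas))       # order statistics lambda[tau_(i)]
    N = len(lam_sorted)
    m = len(lambda_inf_sample)                      # should be much larger than N
    u = (np.arange(m) + 0.5) / m                    # fine uniform grid of (0, 1)
    Finv = np.quantile(lambda_inf_sample, u)        # empirical quantile function
    norm2 = np.mean(Finv ** 2)                      # ~ ||F^{-1}||^2_{L2([0,1])}
    # integral of F^{-1} over each block [(i-1)/N, i/N), approximated on the grid
    block_int = np.array([b.mean() / N for b in np.array_split(Finv, N)])
    return float(np.dot(lam_sorted, block_int) / norm2)
```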
In the following section Corollary 7.10 and Proposition 7.3.5 are concerned with the infinite trees regime (m(n) → ∞) whereas Proposition 7.3.3, Lemma 7.3.6, and Proposition 7.3.7 are concerned with the infinite forest regime ( (n) → ∞). Least square estimation This first result focuses on the regime of large trees. -→ σ -1 1 (n) (n) i=1 Λ ∞,i , (7.10) where the Λ ∞,i 's are N independent copies of Λ ∞ . Furthermore, when (n) is fixed and m(n) goes to infinity, we have E λ ls [F n ] -→ σ -1 . Proof. The first convergence is a direct consequence of the independence properties of the family (τ i n i ) 1≤i≤l(n) and the fact that each one converges to a random variable with law Λ ∞ by Corollary 7.2.1. Now, its remains to prove the second statement. Since the family (τ i n i ) 1≤i≤ (n) is made of independent random variables its follows from Theorem 7.1.3 and the definition (7.4) of λ[τ n i ] that the proof of this last statement boils down to prove that E   σ -1 1 (n) (n) i=1 Λ ∞,i   = σ -1 . Moreover, (7.5) and the law of large numbers entails that this same limit converges a.s. to σ -1 as (n) goes to infinity. The following result states a stronger convergence when (n) goes to infinity before m(n). The spirit of this result is that, given an increasing sequence of random forest, the least square estimator cannot be too far from σ -1 as soon as the size of the trees are large enough. It particular, due to the results of [START_REF] Janson | Conditioned Galton-Watson trees do not grow[END_REF], one cannot expect a stronger convergence. Proposition 7.3.3. We have, ∀ > 0, ∃ A ∈ N, ∀ u ∈ S A , P lim sup N →∞ λ ls [F u N ] -σ -1 < = 1. Proof. We begin the proof by showing that the family λ[τ Estimation by minimal Wasserstein distance As in the preceding section, we begin by looking at the convergence in m(n). Proposition 7.3.5. When m(n) goes to infinity (and (n) is fixed), we have λ W [F n ] (d) --→ 1 σ F -1 Λ∞ 2 2 (n) i=1 Λ ∞,(i) i (n) i-1 (n) F -1 Λ∞ (s)ds, where the Λ ∞,(i) 's are N independent copies of Λ ∞ sorted in increasing order. In addition, the limit is asymptotically unbiased, in the sense that, when (n) goes to infinity, 1 σ F -1 Λ∞ 2 2 E   (n) i=1 Λ ∞,(i) i (n) i-1 (n) F -1 Λ∞ (s)ds   -→ 1 σ . Proof. The convergence in distribution is straightforward from Corollary 7.2.1 and standard methods on order statistics. We now prove that the estimator is asymptotically unbiased. In order to lighten the notation, let us set N = (n). It is well known, since Λ ∞ has a density, that, for any 1 ≤ i ≤ N , one has (see for instance [START_REF] David | Order statistics[END_REF]) E Λ ∞,(i) = N N -1 i -1 ∞ 0 xF Λ∞ (x) i-1 (1 -F Λ∞ (x)) N -i f Λ∞ (x)dx. Hence, E N i=1 Λ ∞,(i) i N i-1 N F -1 Λ∞ (s)ds = N ∞ 0 xf Λ∞ (x) N i=1 N -1 i -1 F Λ∞ (x) i-1 (1 -F Λ∞ (x)) N -i 1 N 0 F -1 Λ∞ s + i -1 N ds dx. This rewrites thanks to the right inverse sampling principle as E N i=1 Λ ∞,(i) i N i-1 N F -1 Λ∞ (s)ds = 1 0 F -1 Λ∞ (x)K n F -1 Λ∞ (y) dy, where K n is defined for all function f in L 2 ([0, 1]) by K n (f ) (y) = N i=1 N -1 i -1 y i-1 (1 -y) N -i 1 N 0 f s + i -1 N ds, ∀y ∈ [0, 1]. The operators K n are known as Bernstein-Kantorovich operators which were introduce in 1930 by Kantorovich in order to extend the properties of Bernstein polynomials to non-continuous functions (see [START_REF] Kantorovitch | Sur certains développements suivant les polynômes de la forme de S. Bernstein[END_REF]). 
In particular, it is known that, for all f in L 2 ([0, 1]), K n (f ) converges strongly to f in L 2 ([0, 1]) (see [START_REF] Lorentz | Bernstein polynomials[END_REF] for an old but practical reference). Now, according to Cauchy-Schwarz inequality we have that 1 0 F -1 Λ∞ (x)K n F -1 Λ∞ (y) dy - 1 0 F -1 Λ∞ (y) 2 dy ≤ F -1 Λ∞ 2 L 2 1 0 K n (F -1 Λ∞ )(y) -F -1 Λ∞ (y) 2 dy. But since, K n (F -1 Λ∞ ) converges to F -1 Λ∞ in L 2 ([0, 1]), we finally obtain E N i=1 Λ ∞,(i) i N i-1 N F -1 Λ∞ (s)ds -→ N →∞ F -1 Λ∞ 2 L 2 , leading to the result. 2 In addition, we have the same kind of strong convergence result for this estimator. Its lies on the fact that the empirical measure P defined in (7.7) must be close (in Wasserstein distance) to the law of σ -1 Λ ∞ as soon as the trees are large enough. More precisely, we have the following lemma. Lemma 7.3.6. Let P be the law of σ -1 Λ ∞ . Let also P n be the empirical distribution defined for any multi-integer n by P n = 1 (n) (n) i=1 δ λ[τ i n i ] . Then, the following statement holds, ∀ > 0, ∃ A ∈ N, ∀ u ∈ S A , P lim sup where Π δ µ denotes the image measure of µ by Π δ . To obtain the desired result, we need to control each of the three terms in the right hand side of (7.16). -Third term. First, it is clear, for any probability measure µ, that Π δ is a transport of µ on Π δ µ which needs not to be optimal [11, 2. Generalities on Kantorovich transport distances]. Hence, d W (µ, Π δ µ) ≤ R |x -Π δ (x)| 2 µ(dx). It follows, since x → x 2 is integrable with respect to P, that δ can be chosen in order to have N →∞ 1 N N i=1 λ[τ i u i ] 2 1 | λ[τ i u i ]|>δ -E Λ 2 ∞ 1 |Λ∞|>δ < = 1. (7.18) This bound allows us to control the first term in the right hand side of (7.16) since d W (P n (ω) , Π δ P n (ω)) ≤ R |x -Π δ (x)| 2 P n (ω)(dx) ≤ 1 N N i=1 λ[τ i n i ](ω) 2 1 | λ[τ i n i ](ω)|>δ . Hence, it remains to control the second term. -Second term. Since Π δ P n (ω) and Π δ P are compactly supported measure, for any multiinteger n, we have the following duality formula for the first order Wasserstein distance (which we denote W 1 ), W 1 (Π δ P n (ω) , Π δ P) = sup N →∞ |P u N f k -Pf k | < = 1, (7.19) where Pf denotes R f (x)P(dx). Now, the density of (f k ) k≥1 entails that for any function f in Lip 1 ([-δ, δ]), one can finds a function f k such that f k -f ∞ < ε, for any positive ε. Hence, (7.19) holds for any function in C K on the same event. Moreover, since Π δ P n (ω) and Π δ P are compactly supported measures, Finally, using (7.17), (7.18) and (7.20) in (7.16) leads to the result. 2 Proposition 7.3.7. We have, ∀ > 0, ∃ A ∈ N, ∀ u ∈ S A , P lim sup N →∞ λ W [F u N ] - 1 σ < = 1. Proof. By the Cauchy-Schwarz inequality, the convergence of this estimator follows from the convergence of the Wasserstein distance in the following manner, λ W [F n ] - 1 σ = F [F n ] -1 -σ -1 F -1 Λ∞ , F -1 Λ∞ F -1 Λ∞ 2 2 ≤ F [F n ] -1 -σ -1 F -1 Λ∞ 2 F -1 Λ∞ 2 F -1 Λ∞ 2 2 = d W (P n , P) F -1 Λ∞ 2 . The result finally arises from the preceding Lemma. In order to test our estimation techniques on Galton-Watson forests, we need to make some numerical experiment. However, simulation of conditioned Galton-Watson tree is a difficult problem of independent importance. In this section, we briefly present an algorithm due to Devroye [START_REF] Devroye | Simulating size-constrained galton-watson trees[END_REF] allowing to achieve this aim. 
Note that, it is (with a direct rejection method) the only known (in the best of our knowledge) algorithm allowing to simulate size constrained Galton-Watson trees. C[τ n ](k) =      H[τ n ](i) -(k -b i ) if ∃ 0 ≤ i ≤ n -2, b i ≤ k < b i+1 -1, k -b i+1 + H[τ n ](i + 1) if ∃ 0 ≤ i ≤ n -2, b i+1 -1 ≤ k < b i+1 , H[τ n ](b n-1 ) -(k -b n-1 ) if b n-1 ≤ k ≤ b n . -From contour process to Harris path. The Harris path is only a small modification of the contour process, defined by H[τ n ](0) = H[τ n ](2n) = 0 and ∀ 1 ≤ k ≤ 2n -1, H[τ n ](k) = C[τ n ](k -1) + 1. Inference for a forest of binary size-constrained Galton-Watson trees The aim of this section is to analyze the finite-sample behavior of both estimators introduced in this chapter by means of numerical experiments. The theoretical study achieved in Section 7.3 shows that we can expect to obtain good numerical results, at least for large trees and/or a large forest. To this goal, we consider a forest of independent conditioned Galton-Watson trees with common critical birth distribution µ such that µ(k) = 0 for k ≥ 3. In such case, µ is entirely characterized by its variance σ 2 . Simulations of Galton-Watson trees GW n (µ) are performed with the method provided in Subsection 7.4.1. Let F = (τ i ) 1≤i≤N be a forest of N independent trees such that, for any 1 ≤ i ≤ N , τ i ∼ GW n i (µ) for some integer n i . From the Harris process of each tree τ i , one first computes the quantity λ τ i = H[τ i ](2n i •), E 2 √ n i E 2 2 , where E is known and defined in (7.2). Then, we propose to estimate σ -1 in the two following ways. Least Squares Wasserstein λ ls [F] = 1 N N i=1 λ τ i λ W [F] = 1 F -1 Λ∞ 2 2 N i=1 λ τ (i) i N i-1 N F -1 Λ∞ (s)ds Remark 7.4.1. In order to compute λ W [F], we need to be able to perform computations using the function F -1 Λ∞ . Unfortunately, in view of the theoretical study of Λ ∞ made in Subsection 7.2.1, one cannot expect to have an explicit expression for this function. In the following of this section, we use a numerical estimation of F -1 Λ∞ by Monte Carlo simulations. To achieve this goal, we perform simulations of Λ ∞ by simulating Brownian excursion thanks to (7.3). In order to ensure that the error made on F -1 Λ∞ does not propagate too much in our results, F -1 Λ is estimated with an important sample of simulations of Λ ∞ (exactly 10 6 simulations). The theoretical investigations of Section 7.3 establish that our estimators are unbiased in the "infinite trees" regime m(n) → ∞. Nevertheless, the problem is not as simple when working with finite trees. A clear illustration of this comes from the numerical evaluations of the average Harris processes of finite trees. Indeed, the numerical study of Figure 7.5 shows that the average Harris processes of small trees seem to be lower than the limiting Harris process. Hence, the quantities λ[τ i ] are expected to underestimate the target σ -1 . But any estimator based on the asymptotic behavior of conditioned Galton-Watson trees is expected to present such a bias. In particular, we state in our numerical experiments that the estimator proposed in [START_REF] Bharath | Inference for large tree-structured data[END_REF] presents the same bias. The natural question arising from the preceding comments is : how is the bias of a conditioned Galton-Watson tree related to its size and/or the unknown parameter σ ? 
The numerical study presented in Figure 7.6 shows that the quantity η(n) = σ -1 E[ λ[τ n ]] -1 , where τ n ∼ GW n (µ), seems close to be uncorrelated to σ at least when σ is large enough. This allows us to construct a bias corrector independent on the unknown standard deviation σ. In addition, the dependency on n may be modeled by the relation η(n) = 1 -(a √ n + b) -1 . The coefficients appearing in η may be estimated from simulated data, η(n) = 1 -(0.504273 √ n + 0.9754839) -1 (see Figure 7.6 again). The correction is obviously expected to be better for large values of σ. Finally, we construct the following corrected versions of the estimators λ ls [F] and λ W [F]. Corrected Least Squares Corrected Wasserstein λ c ls [F] = 1 N N i=1 η(#τ i ) λ τ i λ c W [F] = 1 F -1 Λ∞ 2 2 N i=1 η #τ (i) λ τ (i) i N i-1 N F -1 Λ∞ (s)ds Computing the estimators proposed in this chapter is not an easy task. According to Remark 7.4.1, this needs to perform an important number of simulations of Λ ∞ in order to get an accurate approximation of F -1 Λ∞ . Moreover, to be able to correct the bias highlighted above, one needs to perform many simulations of finite trees. Together with this work, we propose a Matlab toolbox which already includes these preliminary computations and allows to directly compute our estimators for a forest. This toolbox as well as its documentation and the scripts used in this chapter are available at the page : http ://agh.gforge.inria.fr. The study of Figure 7.7 shows that for values of σ greater than 0.5, the bias correction works properly. Moreover, it also shows that the estimator developed in [START_REF] Bharath | Inference for large tree-structured data[END_REF] present the same kind of bias as ours, which can also be corrected. In the case of small parameter σ, the bias correction is not as accurate. This was expected because the bias corrector does not fit as well to the bias curve for small small values of sigma as its does for greater values of σ. Since we have an estimation procedure which seems to work, the natural further study is to see how the quality of our estimators vary as the characteristics of the forest change. We begin by looking at the variations when the size of the trees increase. A priori, the sizes of the trees in the considered forest should not have influence on the dispersion of the estimators. Indeed, our estimation strategy is based on the approximation of the Harris path of a finite tree by its limit. As a consequence, the size parameter only governs the quality of this approximation. Whatever the sizes of the trees, the dispersion will be given by the variance of the limit distribution Λ ∞ . As expected Figure 7.8 shows that the dispersion of the estimators does not change as the sizes of the trees change when σ takes great values. Similarly, as shown in Figure 7.9, for small values of σ, the sizes of the trees do not influence the dispersion of the estimator. However, Figure 7.9 also shows that the sizes of the trees have a positive influence of the bias of the estimators. Finally, Figure 7.10 shows the variation of the quality of the Least-square estimator as the size of the forest changes. It appears to be consistent with the theoretical fluctuation intervals given by the central limit theorem. Theoretical tolerence interval at 95% Numerical simulations Theoretical tolerence interval at 50% I : Fluctuation of Lévy processes in a nutshell 2.1 Some results on Poisson random measure . . . . . . . . . . . . . . . . . . . . . . 
2.2 A quick reminder to Lévy processes . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Excursions of a Markov process away from zero . . . . . . . . . . . . . . . . . . . 2.5 Ladder processes and their Laplace exponents . . . . . . . . . . . . . . . . . . . . 2.6 Fluctuation problems for spectrally positive Lévy processes . . . . . . . . . . . . 2.6.1 Large time behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.6.2 Exit problems for spectrally negative Lévy process . . . . . . . . . . . . . 2.7 Reminder on renewal theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapitre 3 Preliminaries II : Splitting trees 3.1 Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.1 Chronological trees as measured metric spaces . . . . . . . . . . . . . . . . 3.1.2 The law of a splitting tree . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. 3 3 The population counting process . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4 Backward model : coalescent point process . . . . . . . . . . . . . . . . . . . . . . 3.4.1 The law of the CPP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapitre 4 On some auxiliary results 4.1 Asymptotic behavior of the scale function of the contour process . . . . . . . . . 4.2 A formula to compute the expectation of an integral with respect to a random measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 A recursive construction of the CPP . . . . . . . . . . . . . . . . . . . . . . . . . Chapitre 1 Theorem 1 . 5 . 1 . 151 Pour tout entier positive n et k, on a Figure 3 . 1 - 31 Figure 3.1 -Graphical representation of a Splitting tree. The vertical axis represents the biological time and the horizontal axis has no biological meaning. The vertical lines represent the individuals, their lengths correspond to their lifetimes. The dashed lines denote the filiations between individuals. Figure 3 . 2 - 32 Figure 3.2 -Geodesic path connecting x to y Figure 3 . 3 - 33 Figure 3.3 -In blue and red, the set {y ∈ T | y ≤ x}. The blue part corresponds to condition (C1) while the red part corresponds to condition (C2). Figure 3 . 4 - 34 Figure 3.4 -A graphical representation of the exploration process. The one-to-one correspondence is represented by corresponding colours. Figure 3 . 5 - 35 Figure 3.5 -The contour process of the finite tree of Figure 3.1. Figure 3 . 6 - 36 Figure 3.6 -One-to-one correspondence between the tree and the graph of the contour represented by corresponding colours. Lemma 3 . 3 . 3 . 333 [START_REF] Champagnat | Splitting trees with neutral Poissonian mutations I : Small families[END_REF] Thm. 3.21] If ψ (0+) < 0, then, there exist a positive constant γ such that,e -αt ψ (α)W (t) -1 = O e -γt .In Proposition 4.1.1, we characterize the O e -γt term of this Lemma. Figure 3 . 7 - 37 Figure 3.7 -The lineage of point x. t 0 3 C 0, 3 Figure 3 . 8 - 3338 Figure 3.8 -Coalescence time between individuals 0 and 3 represented as an arrow. Figure 4 . 1 - 41 Figure 4.1 -Grafting of trees. 4 tFigure 5 . 2 - 452 Figure 5.2 -Residual lifetimes with subtrees associated to living individuals at time u. Figure 5 . 3 - 53 Figure 5.3 -Reflected JCCP with overshoot over t. Independence is provided by the Markov property. Figure 6 . 1 - 61 Figure 6.1 -The future of a mutation only depends on a sub-tree of the genealogical tree. 
0 14 1 Figure 7 . 1 - 171 Figure 7.1 -Construction of the Harris path (right) from 0 to 2n = 14 as the contour of an ordered tree (left) with n = 7 nodes. Figure 7 . 2 - 72 Figure 7.2 -The ordered tree of Figure 7.1 in its Harris path (left) : each vertical axis represents a node of the original structure (right), (See[START_REF] Pitman | Growth of the Brownian forest[END_REF]). A common picture helping to see how to recover the tree from the contour is to imagine putting glue under the contour and then crushing the contour horizontally such that the inner parts of the contour which face each others are glued. 1 0 1 e s ds. In these papers the study is performed thanks to the analysis of the double Laplace transform 2 = 1 0F - 1 F - 1 2111 τ i ] , P λΛ∞ (s) -F -1 λΛ∞ (s) (s) -λF -1 Λ∞ (s) 2 ds, Figure 7 . 3 -Densities of 2 π 732 Figure 7.3 -Densities of Corollary 7 . 3 . 1 . 731 When m(n) goes to infinity, we haveλ ls [F n ] (d) N 1 . 1 →∞ d W (P u N , P) < = Proof.Let n be a multi-integer. Let Π δ being the canonical projection of R on [-δ, δ], for a positive real number δ. We haved W (P n (ω) , P) ≤ d W (P n (ω) , Π δ P n (ω)) + d W (Π δ P n (ω) , Π δ P) + d W (P, Π δ P) ,(7.16) d 3 . ( 7 . 17 )- 3717 W (P, Π δ P) ≤ E (Λ ∞ ) 2 1 |Λ∞|>δ < ε First term. On the other hand, following the same lines as in the proof of Proposition 7.3.3, one can shows that∀ > 0, ∃ A ∈ N, ∀ u ∈ S A , P lim sup φ∈Lip 1 ( 1 [-δ,δ]) R φ(x)(Π δ P n (ω) (dx) -Π δ P(dx)) , where Lip 1 ([-δ, δ]) denotes the set of 1-Lipschitz continuous function on [-δ, δ]. Since, [-δ, δ] is compact, Lip 1 ([-δ, δ]) is separable endowed with the uniform topology. This implies the existence of a countable family (f k ) k≥1 which is dense. Using again the method of the proof of Proposition 7.3.3, one can show ∀ > 0, ∃ A ∈ N, ∀ u ∈ S A , P lim sup d W (Π δ P n (ω) , Π δ P) ≤ C W 1 (Π δ P n (ω) , Π δ P), which implies ∀ > 0, ∃ A ∈ N, ∀ u ∈ S A , P lim sup N →∞ d W (Π δ P u N (ω) , Π δ P) < = 1. (7.20) 2 2 7. 4 4 Numerical simulations 7.4.1 Simulation of conditioned Galton-Watson trees Figure 7 . 4 - 74 Figure 7.4 -Illustration of the re-arrangement procedure. Figure 7 . 5 - 75 Figure 7.5 -Mean contours of binary conditioned Galton-Watson trees with size n and σ = 0.7 calculated from 2000 trees for each values of n. Figure 7 . 9 - 79 Figure 7.9 -Variation of the size of the tree for large σ (equal to 0.3) : tree sizes varying from 20 nodes (left), 50 nodes (center), to 100 nodes (right). Forests of 50 trees. Figure 7 . 7 Figure 7.10 -Least-square estimation of σ -1 for different sizes of forests. 2 < • • • < t n , les accroissements de Y , Y t 2 -Y t 1 , Y t 3 -Y t 2 , . . . , Y t n-1 -Y tn ,sont des variables aléatoires indépendantes dont les lois ne dépendent respectivement que des écarts t 2 -t 1 , . . . , t n -t n-1 . Le principal but du chapitre est d'obtenir les identités de fluctuations pour les processus de Lévy sans sauts négatifs utilisées dans la suite de ce manuscrit de manière aussi simple et directe que possible. est un CPP de fonction d'échelle W au temps t. P (1) P (2) P (3) P (4) 0 a t Figure 1.2 -Recollement de CPP. 0 1 2 3 4 5 6 7 8 9 10 12 13 14 15 Figure 1.1 -Processus ponctuel de coalescence à 16 individus. Les pointillés horizontaux représentent les coalescences. Ce résultat trouve deux applications. Dans le chapitre 5, il est utile dans la preuve de la loi des grands nombres associée au processus N t et dans la chapitre 6 il trouve son utilité dans les calculs des moments du spectre de fréquence. 
1.4 Chapitre 5 Le Chapitre 5 concerne les processus de Crump-Mode-Jagers binaires homogènes surcritiques. Dans ce chapitre nous nous intéressons au comportement en temps long du processus (N t , t ∈ R + ). Le théorème (déjà connu) suivant établit que, correctement renormalisé, le processus N t converge presque sûrement vers une variable aléatoire dont la loi est exponentielle conditionnellement à la non-extinction de la population. Theorem 1.4.1. Dans le cas surcritique (α > 0), il existe une variable aléatoire E telle que et A (n) (k, a), on arrive alors a obtenir le résultat. De la même manière, nous obtenons des formules pour les moments du type Chapitre 1. Introduction (t) t-a les lois de N a) est le nombre de famille de taille k au temps t dans le sous-arbre induit par le nème individu vivant au temps t -a, et N (t) t-a est le nombre d'individus vivant au temps t -a ayant une descendance vivante au temps t. En appliquant à nouveau le Théorème 1.3.3 et en déterminant avantage dans ce contexte est que les méthodes développées pour calculer les moments du spectre de fréquence s'étendent très bien au calcul des erreurs du type E [(A(k, t) -c k N t ) n ], ce qui était un point délicat dans le résultat précédent. Ceci nous permet d'avoir une expression explicite pour la matrice de covariance M . 1.6 Chapitre 7 Le Chapitre 7 est quelque peu déconnecté du reste du manuscrit. Il s'agit d'un travail en collaboration avec Romain Azaïs et Alexandre Genadot. Dans ce chapitre nous nous intéressons à un problème de statistique pour des arbres de Galton-Watson conditionnés par leurs tailles. Note but est d'estimer σ -1 , l'inverse de la variance de la loi de naissance µ, à partir d'une forêt F = (τ 1 , . . . , τ N ) d'arbres indépendants telle que chaque arbre τ i est un arbre de Galton-Watson conditionné à avoir n i noeuds. L'étude statistique d'arbre aléatoire semble être un problème naturel car de nombreuses données peuvent naturellement être représentée par des arbres (systèmes sanguins en biologie, ficher XML en informatiques,....). En particulier, les arbres de Galton-Watson conditionné apparaissent dans le nombreux problèmes 1 , t 2 ). Hence, the quantity of regular time one needs to wait to get L t 1 local time is t 1 , but to get L t 1 + ε (for any positive ε) is at least t 2 . Hence, L -1 experiences a jump at time L t 1 (see Figures 2.3 and 2.4). Moreover, since the local time increases again at time t 2 , the size of the jump is equal to t 2 -t 1 . R + lies on the same arguments as the proof of Theorem 2.4.2. More precisely, we use that a pure jump Lévy process making only jumps of size 1 is a Poisson process. Moreover, it is quite clear from the construction that (N A t , t ∈ R + ) and (N B t , t ∈ R + ) never jump simultaneously as soon as A is disjoint of B. This implies through the application of Theorem 2.1.3 that H is a Poissonian random measure. and ∀t ∈ (t 1 , t 2 ), X t = X t , meaning that X experiences an excursion from its maximum on the interval (t 1 , t 2 ). Hence, for all t in (t 1 , t 2 ), G t is constant and equal to L -1 Lt 1 -(see all the Figures above). Similarly, X t equals X L -1 L t 1 - the random variables (B ) 1≥i≥Nt are independent and identically distributed for i ≥ 2 (in the sense of Remark 5.3.2). Let us denote by pt the parameter of B (t) (t) 1 , and by p t the common parameter of the others i.i.d. Bernoulli random variables. 
It follows from (5.3) that E i t [N ∞ t ] = p t (W (t) -1) + pt and from the Yule nature of N ∞ under P ∞ , e -αt n+1 N t n+1 -e -αtn N tn is small, for n large. It then follows that if the quantity inf s∈[tn,t n+1 ] e -αtn N tn -e -αs N s , -αs N s -e -αt n+1 N t n+1 , must take very high positive value. More precisely, -αt n N tn -e -αs N s > ≤ P tn sup s∈[tn,t n+1 ] e -αs N s -e -αtn N tn > + P tn e -αtn N tn -e -αt n+1 N t n+1 + sup s∈[tn,t n+1 ] e -αt n+1 N t n+1 -e -αs N s > -αt n+1 N t n+1 > ≤ 2 P tn Y t n+1 -Y tn > e αtn + P tn e -αtn N tn -e -αt n+1 N t n+1 > . takes very low negative values, then sup s∈[tn,t n+1 ] P tn sup s∈[tn,t n+1 ] ≤ P tn sup s∈[tn,t n+1 ] e e e -αs N s -e -αtn N tn > + P tn sup s∈[tn,t n+1 ] e -αt n+1 N t n+1 -e -αs N s > + P tn e -αtn N tn -e -αt n+1 N t n+1 > Now, there exists a Yule process Y with parameter b such that Y 0 = N tn and for all s in [0, t n+1 -t n ], N tn -N s ≤ Y s-tn -Y 0 , a.s. (5.9) This Yule process can be constructed from the population at time t n by extending the lifetimes of all individuals to infinity, and constructing births from the same Poisson process as in the splitting tree. This leads to P tn sup s∈[tn,t n+1 ] e -αt n N tn -e -αs N s > ≤ P tn sup s∈[tn,t n+1 ] Y s-tn -Y 0 > e αtn + P tn sup s∈[tn,t n+1 ] Y t n+1 -tn -Y s-tn > e αtn + P tn e -αtn N tn -e [START_REF] Delaporte | Lévy processes with marked jumps II : Application to a population model with mutations at birth[END_REF] is DRI and Lemma 2.7.1, that ξ Lemma 6.6.3 (Quadratic error for the convergence of A(k, t).). Let k and l two positive integers. Then under the hypothesis of Theorem 6.4.2, there exists a family of real numbers (a k,l ) l,k≥1 such that,lim t→∞ e -αt E ψ (α)A(k, t) -e αt Ec k ψ (α)A(l, t) -e αt Ec l = α b a k,l ,where the sequence (c k ) k≥1 is defined in Proposition 6.3.8. (k) 2 is DRI. Finally, it comes from Theorem 2.7.2, that lim t→∞ EN t Ec k -EA(k, t)E = α ψ (α) R + ξ (k) 1 (s) + ξ (k) 2 (s)ds. (6.26) Using the preceding lemma, we can now get the quadratic error in the convergence of the fre- quency spectrum. Proof. Now, noting ) t∈R + , where F t is the σ-field generated by the tree truncated above t and the restriction of the mutation measure on [0, t). Then N t is Markovian with respect to F t and for all positive real numbers t ≤ s, 6.8. Markovian cases (F t .31) Moreover, (6.28) entails ψ (α) 2 E t A(k, t)A(l, t) = 2W (t) 2 c k (t)c l (t) + RW (t) + o(e -αt ), Subsection 7.1.2 gives the definition of the Harris path. Subsection 7.1.3 recalls the well known limit theorem which describes the asymptotic behaviour of the Harris path as the number of nodes in the tree increases. Section 7.2 is devoted to the introduction and the study of our estimators. The estimators are introduced in Subsection 7.2.2. Subsection 7.3.2 and 7.3.3 are dedicated to the theoretical study of our estimators. Finally, in Section 7.4 we apply our methods on simulations of conditioned Galton-Watson trees. where τ n is some tree with law GW n (µ). Its is known from[START_REF] Drmota | Reinforced weak convergence of stochastic processes[END_REF] Lemma 4] that, for any positive integer n and real number 0 < t < 1, From this last estimate, one can easily shows that E H[τn](2ns) √ Finally, the result follows from Theorem 7.1.3 and the dominated convergence Theorem. 2Remark 7.3.2. 
It is worth noting that the limit appearing in the right hand side of (7.10) is unbiased, that is ∫_0^1 E[H[τ_n](2ns)/√n] E_s ds → (2/σ) ∫_0^1 E_s^2 ds as n → ∞, ∀x ∈ R_+, P( H[τ_n^i](2nt)/√n > x ) ≤ C_t exp(-Dx√t). (7.11) Hence t ↦ E[H[τ_n](2nt)/√n] is uniformly bounded (w.r.t. n) by t ↦ C/(D√t), which is integrable on [0, 1]. On one hand, we know from [START_REF] Drmota | Reinforced weak convergence of stochastic processes[END_REF], Lemma 4, that, for any positive integers i, n and real number 0 < t < 1, Remark 7.3.4. According to the proof of the preceding theorem, it would be very interesting to have estimates on the rate of convergence in the result stated in Theorem 7.1.3. Indeed, this would enable us to have an estimate of the error in the convergence stated in Proposition 7.3.3, given in terms of the smallest trees in the increasing sequence of random forests. the family (λ[τ_n^k])_{n,k∈N} has uniformly bounded fourth moments. ∀x ∈ R_+, P( H[τ_n^i](2nt)/√n > x ) ≤ C_t exp(-Dx√t). (7.12) On the other hand, by Jensen's inequality, there exists a positive constant c such that E[ λ[τ_n^i]^4 ] ≤ c ∫_0^1 E[ (H[τ_n^i](2ns)/√n)^4 ] ds = 4c ∫_0^1 ∫_{R_+} x^3 P( H[τ_n^i](2ns)/√n > x ) dx ds.
Figure 7.6 - Bias of the least-square estimator for different values of σ with a fitted bias corrector function.
Figure 7.7 - Estimation and bias correction for forests of 10 trees with 20 nodes for σ equal to 0.3 (top, left), 0.5 (top, right), 0.7 (bottom, left) and 0.9 (bottom, right).
Figure 7.8 - Variation of the size of the tree for small σ (equal to 0.9): tree sizes varying from 20 nodes (left), 50 nodes (center), to 100 nodes (right). Forests of 50 trees.
Finally, set X_0 equal to t. Then, Arbitrary initial distribution case The following Lemmas are the counterpart of Lemmas 5.4.8, 5.4.9, and 5.4.10. They play the same role in the proof of Theorem 6.4.2. In the sequel, we denote by (A(k, t, Ξ))_{k≥1} the frequency spectrum of the splitting tree where the lifetime of the ancestral individual is Ξ, in the same manner as for N_t(Ξ) in the previous section. Lemma 6.6.6 (L^2 convergence in the general case). Consider the general frequency spectrum (A(k, t, Ξ))_{k≥1}; then, for all k, ψ'(α)e^{-αt} A(k, t, Ξ) converges to E(Ξ) (see 5.24) in L^2 as t goes to infinity, where the convergence is uniform w.r.t.
the random variable Ξ. In the case where Ξ is distributed as 2 , for 0 < β < 1 2 (see section 5.4.2), we get 2 ) -e αt E(O Lemma 6.6.7 (First moment). The first moments are asymptotically bounded, that is, for all k ≥ 1, uniformly with respect to the random variable Ξ. Lemma 6.6.8 (Boundedness in the general case.). Let k 1 , k 2 , k 3 three positive integers, then = O (1) , uniformly with respect to the random variable Ξ. We do not detail the proofs of these results since they are direct adaptations of the proofs of Lemmas 5.4.8, 5.4.9 and 5.4.10. Proof of the theorem The following result is based on the fact that, in the clonal sub-critical case, the lifetime of a family is expected to be small. It follows that, in the decomposition of Figure 5.2, one can expect that all the family of size k live in different subtrees as soon as t >> u. This is the point of the following lemma. Lemma 6.6.9. Suppose that α < θ. If we denote by Γ u,t the event, Γ u,t = {"there is no family in the population at time t which is older than u"} , then, for all β in (0, 1 -α θ ), we have Proof. The proof of this Lemma, as the calculation of the moments of A(k, t) relies on the representation of the genealogy of the living population at time t as a coalescent point process (see Section 3.4). Moreover, we denote by u the number of living individuals at time u who have alive descent at time t. According to Proposition 4.3.1, under P t , N (t) u is geometrically distributed with parameter W (t-u) W (t) . Now, 1 Γu,t can be rewritten as where Z i 0 (t -u) denotes the number of individuals alive at time t descending from the ith individual alive at time u and carrying its type (the clonal type of the sub-CPP). Moreover, using again Proposition 4.3.1, we know that that under P t , the family family of random variables distributed as Z 0 (t -u) under P t-u , and . Using (6.2), some calculus leads to, . Now, since, , taking u = βt, we obtain, using Lemma 6.2.1 and the desired result. Proof of Theorem 6.4.2. Fix 0 < u < t. Note that the event Γ u,t of Lemma 6.6.9 can be rewritten as where Z i 0 (t -u, O i ) denote the number of individuals alive at time t carrying the same type as the ith alive individual at time u, that is the ancestral family of the splitting constructed from the residual lifetime of the ith individual (see Section 5.4.2). Let K be a multi-integer, we denote by L (K) (resp. A(K, t)) the random vector L k 1 , . . . , L k N (resp. (A(k 1 , t), . . . , A(k N , t))) with . with These identities allow us to obtain Taking the limit as t goes to infinity leads to Markovian cases Theorem 5.2.2 for the Markovian case is already well known (see [START_REF] Athreya | Branching processes[END_REF]), however the allelic partition for such model has not been studied. We can get more information on the unknown covariance matrix K in the case where the life duration distribution is exponential. Our study also cover the case P V = δ ∞ (Yule case), although it does not fit the conditions required by the Theorem 6.4.2. The reason comes from our method of calculation for E [A(k, t)E]. Let us consider the filtration Finally, using (7.12) gives the desired bound, From this point we consider a sequence u of integers. This sequence corresponds to the sizes of the trees in our increasing sequence of random forests (F u N ) N ≥1 . We recall according to the definitions given in the beginning of this section that the random forest It is worth noting that this expectation depends only on the integer u i . 
Now, using the uniform bound on the fourth moment, it is easy to show using standard methods that --→ 0, when N goes to infinity. Moreover, using Theorem 7.1.3, we have that m i u i converges to σ -1 as u i goes to infinity, form which it follows that for any > 0, there exists an integer A such that whenever u i > A. Finally, letting all the u i 's be greater than A, we have that there exists a measurable set Ω u , with mass 1, such that, using (7.14) and (7.15), for all ω in this set, lim sup Numerical simulations The main idea of the algorithm is the following : assume that µ is supported on {0, . . . , K}, for an integer K. If N 0 denotes the number of individuals with no children, N 1 the number of individuals with 1 children and so on... Then, the sequence (N 0 , . . . , N K ) is distributed following a multinomial distribution with parameter n and (µ k ) 0≤k≤K conditioned to have -Simulation of numbers of children. The multinomial distribution of parameters (µ k ) 0≤k≤K and n may be defined by its probability mass function, Simulation of the multinomial distribution presents no difficulty. By rejection sampling, we simulate multinomial random variables until obtaining a sequence (N 0 ) 0≤k≤K satisfying We define the sequence (ζ i ) 1≤i≤n from Let (ξ i ) 1≤i≤n be a sequence obtained as a random permutation of (ζ i ) 1≤i≤n . A suitable technique for random shuffling is presented in [START_REF] Knuth | Seminumerical algorithms[END_REF]Algorithm P (p.139)]. The sequence (ξ i ) 1≤i≤n represents the vertices's numbers of children in the depth-first search order. -Computation of Łukasciewicz walk. Let L be the process defined by L(0) = 0 and, ∀ 0 ≤ k ≤ n -2, L(k + 1) = L(k) + ξ k+1 -1. Set l = 1 + argmin {L(k) : 0 ≤ k ≤ n -1}. Then there exists a tree τ n with n nodes whose Łukasciecwicz walk is defined by The computation of L[τ n ] from L is illustrated in Figure 7
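The simulation steps just described translate almost directly into code. The sketch below is an illustration only: the rejection condition written here, Σ_k (k-1) N_k = -1 (equivalently, the total number of children equals n-1), is the standard constraint for the offspring counts of an n-node tree, since the explicit condition is elided in this extraction; the indexing of the cyclic shift follows the usual cycle-lemma convention rather than the exact indices of the text, and the function name is hypothetical.

```python
import numpy as np

def conditioned_gw_lukasiewicz(n, mu, rng=None):
    """Returns the Lukasiewicz walk of a Galton-Watson tree with offspring law mu
    (a probability vector on {0, ..., K}) conditioned to have exactly n nodes."""
    rng = np.random.default_rng() if rng is None else rng
    K = len(mu) - 1
    while True:                                         # rejection step
        counts = rng.multinomial(n, mu)                 # (N_0, ..., N_K)
        if np.dot(np.arange(K + 1) - 1, counts) == -1:  # sum_k (k-1) N_k = -1
            break
    zeta = np.repeat(np.arange(K + 1), counts)          # numbers of children
    xi = rng.permutation(zeta)                          # random depth-first ordering
    x = xi - 1                                          # walk increments, sum = -1
    S = np.cumsum(x)
    j = int(np.argmin(S)) + 1                           # one past the first minimum
    x_rot = np.concatenate((x[j:], x[:j]))              # cycle-lemma rotation
    # resulting walk stays >= 0 before time n and equals -1 at time n
    return np.concatenate(([0], np.cumsum(x_rot)))
```

The contour and Harris processes can then be derived from this walk with the formulas recalled earlier in the section.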
293,923
[ "5168" ]
[ "211251" ]
01489780
en
[ "info" ]
2024/03/04 23:41:50
2017
https://hal.science/hal-01489780/file/final.pdf
Anass Nouri, Christophe Charrier, Olivier Lézoray
3D Blind Mesh Quality Assessment Index
Introduction
Nowadays, with the development of 3D scanners, 3D meshes have become one of the most emergent types of content. They are used in many fields and applications. The medical industry uses 3D meshes for organ analysis and for the detailed representation of chemical components. The architectural domain uses them for the modeling and visualization of buildings, bridges, etc. The automotive industry adopts 3D meshes for representing new concepts and designing cars. 3D meshes also find their place in cinema, video games, fashion and 3D printing, which has been a prominent application of the last decades. Other futuristic applications such as Holoportation [2] and mainstream 3D photography [START_REF] Kolev | Turning mobile phones into 3d scanners[END_REF] benefit from the flexibility of 3D meshes. In this context, it is evident that the quantity and frequency of exchanges of 3D meshes will increase exponentially. This leads to new challenges concerning the design of methods that assess their objective visual quality while taking into account some properties of the human visual system (HVS). A first approach for assessing the quality of 3D meshes is to perform subjective evaluations by seeking human opinions. However, this method is slow, tedious and inadequate for real-time applications. An alternative approach falls within the objective assessment of quality and aims at predicting the quality automatically. The goal is to design quality metrics that are correlated with the subjective scores provided by human observers. In the literature, such quality assessment approaches are classified into 3 categories: 1) full-reference (the original version of the distorted content is fully available for the comparison), 2) reduced-reference (partial information about the original content and the distorted one is available) and 3) no-reference (no information is available about the reference content) metrics. In the majority of real-time applications manipulating 3D meshes, the reference version (considered distortion-free) of the 3D mesh is not available, which makes the objective quality assessment of the 3D content more difficult. This capacity of assessing objects without their reference version is an easy task for humans, but this is far from being the case for machines and algorithms. Many quality assessment metrics for 3D meshes have been proposed in the state of the art; however, they are still limited due to their dependence on the reference version of the 3D mesh. Perceptually-based no-reference/blind quality assessment algorithms can play a significant role in several computer graphics applications, such as optimizing and assessing the performance of compression and restoration approaches, or dynamically adjusting the quality of a monitor or 3D TV, or the parameters of mesh processing methods in a transmission application. In this context, we propose a no-reference perceptual metric for the quality assessment of a 3D mesh based on saliency and roughness, called BMQI (Blind Mesh Quality Index). The paper is organized as follows. Section 2 describes the link between visual saliency, visual roughness and quality assessment. We present in the same section an overview of the pipeline of our approach. In section 3, we present the proposed metric with its associated details: multi-scale saliency detection method, roughness estimation, patch segmentation and regression stage.
In section 4, we present the considered subject-rated mesh datasets and analyze the correlation of the proposed metric with the human subjective scores. Finally, we conclude and outline some perspectives of this work in section 5.
The proposed metric
Visual saliency, roughness and quality assessment
The principal challenge met while designing this no-reference quality assessment metric was to select visual features that are able to quantify the structural deformations that the 3D mesh undergoes and that are correlated with human perception. To do this, we use multi-scale visual saliency and roughness maps. Visual saliency is an important characteristic of human visual attention. Its use in computer graphics applications like mesh quality assessment [START_REF] Nouri | Full-reference saliency-based 3d mesh quality assessment index[END_REF], optimal viewpoint selection [START_REF] Nouri | Multi-scale mesh saliency with local adaptive patches for viewpoint selection[END_REF] and simplification [START_REF] Ha | Mesh saliency[END_REF] has clearly demonstrated its correlation with human visual perception. We assume, as in [START_REF] Nouri | Full-reference saliency-based 3d mesh quality assessment index[END_REF], that the visual quality of a 3D mesh is degraded more when salient regions are distorted than when less salient or non-salient regions are. This characteristic was studied in [START_REF] Boulos | Perceptual effects of packet loss on H.264/AVC encoded videos[END_REF] [START_REF] Engelke | Linking distortion perception and visual saliency in h.264/avc coded video containing packet loss[END_REF]. Likewise, variations of 3D mesh roughness appear to be correlated with human perception [START_REF] Wang | Technical section: A fast roughness-based approach to the assessment of 3D mesh visual quality[END_REF]. Indeed, a roughness map points to regions that exhibit a strong visual masking effect: regions with high roughness magnitude show an important degree of visual masking since distortions are less visible on them. We show that local variations of saliency and roughness, combined together, succeed in assessing the visual quality of a distorted 3D mesh without the need of its reference version.
Method
Given a distorted 3D mesh, we start by computing a multi-scale saliency map MS with our method proposed in [START_REF] Nouri | Multi-scale mesh saliency with local adaptive patches for viewpoint selection[END_REF] and a roughness map R with the method proposed in [START_REF] Wang | Technical section: A fast roughness-based approach to the assessment of 3D mesh visual quality[END_REF]. Then, we adapt the approach of [START_REF] Simari | Fast and scalable mesh superfacets[END_REF] to our context and segment the 3D mesh into a number N_SF of superfacets. In our context, these superfacets play the role of local patches, since the human visual system (HVS) processes information locally. Once the segmentation is performed, we assign to each vertex v_i of a superfacet SF_j its respective values of saliency MS(v_i) and roughness R(v_i). Afterwards, we construct a feature vector of 4 attributes for each superfacet SF_j: φ_j = (µ_SFj, σ_SFj, δ_SFj, γ_SFj), with j ∈ [1, N_SF]
(1) where µ_SFj and σ_SFj represent respectively the local mean saliency and the local standard deviation of saliency over the superfacet SF_j, defined as: µ_SFj = (1/|SF_j|) Σ_{v_i ∈ SF_j} MS(v_i) (2) and σ_SFj = sqrt( (1/|SF_j|) Σ_{v_i ∈ SF_j} (MS(v_i) - µ_SFj)^2 ) (3), where |SF_j| is the cardinality (i.e., the number of vertices) of the superfacet SF_j. δ_SFj and γ_SFj denote respectively the local mean roughness and the local standard deviation of roughness, defined as: δ_SFj = (1/|SF_j|) Σ_{v_i ∈ SF_j} R(v_i) (4) and γ_SFj = sqrt( (1/|SF_j|) Σ_{v_i ∈ SF_j} (R(v_i) - δ_SFj)^2 ) (5). Finally, we perform a learning step using the constructed feature vectors. This is done using Support Vector Regression (SVR) [START_REF] Vapnik | The Nature of Statistical Learning Theory[END_REF], which is also used for scoring the visual quality of the 3D mesh. Figure 1 presents the block diagram of our approach.
Segmentation, learning and regression
Superfacets segmentation
One of the novelties of the proposed approach lies in the use of superfacets (the result of an over-segmentation of the mesh surface into regions whose borders fit the semantic entities of the mesh well) in the pipeline of a mesh quality assessment metric. To segment the mesh, we modified the approach of [START_REF] Simari | Fast and scalable mesh superfacets[END_REF], which, for a 3D mesh M and a desired number of superfacets, executes the following steps based on the farthest-point principle. Update of the centers: once the triangles have been assigned to the different superfacets, it is necessary to compute the new center of each superfacet. For this, the method computes the mean area of all triangles belonging to a superfacet and associates the new center with the triangle whose area is closest to this mean area. If the new center is different from the prior one, the algorithm stops; otherwise, the classification step is computed. Classification: for each triangle, the method computes, using Dijkstra's algorithm, the shortest paths between the centers of the defined superfacets and the triangles of the mesh. When a triangle is considered while computing the shortest path from a superfacet center, and if the currently computed distance is smaller than the previously stored one (obtained from the initialization step or from an expansion that started from a different center), then both the distance and the label associated with the considered triangle are updated (the superfacet that contains the triangle is fixed). Geodesic weight: given two adjacent faces f_i and f_j sharing an edge e_{i,j} with median point m_{i,j}, and with respective centroids c_i and c_j, the geodesic weight is defined as geo(f_i, f_j) = ||c_i - m_{i,j}|| + ||m_{i,j} - c_j||. This value is assigned to the weight w(f_i, f_j) of the edge e_{i,j} as follows: w(f_i, f_j) = geo(f_i, f_j) / d (6), where d is the length of the diagonal of the bounding box enclosing the 3D mesh.
Learning and regression
Although it is unusual for a human observer to associate a scalar quality score with a 3D mesh (observers rather classify the quality according to the perceived sensation, for example "good" or "bad" quality), the application context of quality metrics requires a single scalar reflecting the perceived quality. For this, we use the extension of Support Vector Machines (SVM) to regression: SVR. A short sketch of the per-superfacet feature extraction of equations (1)-(5) is given below, before the regression stage is detailed.
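The per-superfacet statistics of equations (2)-(5) reduce to means and standard deviations over each patch. A minimal sketch (not from the original paper; the array layout and function names are assumptions) is:

```python
import numpy as np

def superfacet_features(saliency, roughness, labels, n_sf):
    """Builds the 4-dimensional descriptor of equations (1)-(5) for each superfacet:
    mean / standard deviation of multi-scale saliency and of roughness over its
    vertices. `saliency` and `roughness` are per-vertex arrays; `labels[i]` gives
    the superfacet index of vertex i."""
    feats = np.zeros((n_sf, 4))
    for j in range(n_sf):
        idx = np.where(labels == j)[0]            # vertices of superfacet SF_j
        ms, r = saliency[idx], roughness[idx]
        # numpy's default std divides by |SF_j|, matching equations (3) and (5)
        feats[j] = [ms.mean(), ms.std(), r.mean(), r.std()]
    return feats
```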
The aim is to estimate a function f, with a maximal deviation of at most ε, reflecting the dependence between a feature vector x_i and its quality score y_i. Thus, for a feature vector x_i of a distorted 3D mesh M_i with a subjective quality score y_i, the regression function for an observation x to evaluate is defined as: f_SVR(x) = Σ_{x_i ∈ VS} α_i y_i K(x_i, x) + b (7), where VS is the set of support vectors, (x_i, y_i) is the learning set, α_i are the Lagrange coefficients obtained from a minimization process, and K(x_i, x) is the RBF (Radial Basis Function) kernel defined by K(x_i, x) = exp(-γ ||x_i - x||^2) (8). Indeed, the RBF function is often used as a kernel because it behaves like a similarity measure between two examples. The motivations for using SVR are as follows: 1. The regression solution involves only a small number of examples x_i (rapidity and efficiency). 2. The results of the regression depend on the kernel used; testing different kernels is beneficial insofar as the correlation rate between the objective and subjective scores may depend on the chosen kernel.
Experimental results
Datasets
In order to compare our no-reference metric with the full-reference methods proposed in the state of the art, we use 2 publicly available subject-rated mesh databases: 1) the Liris/Epfl General-Purpose database [START_REF] Lavoué | Perceptually driven 3D distance metrics with application to watermarking[END_REF] and 2) the Liris-Masking database [START_REF] Lavoué | A local roughness measure for 3D meshes and its application to visual masking[END_REF]. The first database contains 4 reference meshes. These are affected by two distortions: additive noise and smoothing. The distortions are applied with 3 strengths on 3 different regions of the surface mesh: 1) uniformly on the surface mesh, 2) specifically on rough or smooth regions and 3) specifically on transitional regions (between rough and smooth regions). In total, 22 distorted meshes are generated for each 3D reference mesh and evaluated by 12 human observers. Figure 2 shows some 3D meshes of the Liris/Epfl General-Purpose database with their respective normalized MOS (Mean Opinion Score). The second database, Liris-Masking, consists of 4 reference meshes. 6 distorted versions with additive noise are generated for each reference 3D mesh. Only rough and smooth regions are considered while distorting the 3D meshes, according to 3 strengths, in order to simulate the masking effect. The main goal of this database is to evaluate the capacity of the proposed metrics to detect the masking effect. 12 human observers have assessed the visual quality of the corpus. We present some 3D meshes of the Liris-Masking database with their respective normalized MOS in figure 3.
Performance and analysis
We begin by carrying out a learning step on the Liris-Masking database in order to determine the RBF kernel's parameters (γ and C, where C is the error penalty coefficient) with a four-part cross-validation. Each of these parts contains the distorted versions of one of the four reference meshes together with their subjective quality scores. The SVR regression was performed using the LIBSVM [START_REF] Chang | LIBSVM: A library for support vector machines[END_REF] library. The selected parameters of the RBF kernel for the Liris-Masking database are gamma = 0.002 and C = 32. For the Liris/Epfl General Purpose database, the selected parameters are gamma = 0.005 and C = 2. This selection protocol and the regression stage are sketched below.
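As an illustration of this protocol, the sketch below uses scikit-learn's SVR (an ε-SVR with RBF kernel built on LIBSVM) and scipy's Spearman correlation; the pooling of the per-superfacet descriptors into one vector per mesh is an assumption, since the exact pooling is not detailed here, and all names and grid values are illustrative.

```python
import numpy as np
from itertools import product
from scipy.stats import spearmanr
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneGroupOut

def pool_mesh_descriptor(superfacet_feats):
    """Pools the (N_SF, 4) superfacet descriptors of one mesh into a single vector
    (simple averaging here; the authors' exact pooling is not specified)."""
    return superfacet_feats.mean(axis=0)

def select_rbf_parameters(X, mos, groups, gammas, Cs):
    """Four-fold protocol described above: each fold holds out every distorted
    version of one reference mesh (`groups` gives the reference-mesh index of each
    sample); the (gamma, C) pair maximising the mean held-out SROCC is kept."""
    logo = LeaveOneGroupOut()
    best_pair, best_score = None, -np.inf
    for gamma, C in product(gammas, Cs):
        fold_scores = []
        for train, test in logo.split(X, mos, groups):
            model = SVR(kernel="rbf", gamma=gamma, C=C).fit(X[train], mos[train])
            rho, _ = spearmanr(model.predict(X[test]), mos[test])
            fold_scores.append(rho)
        score = float(np.mean(fold_scores))
        if score > best_score:
            best_pair, best_score = (gamma, C), score
    return best_pair, best_score

# Example: final regressor with the reported Liris-Masking parameters.
# model = SVR(kernel="rbf", gamma=0.002, C=32).fit(X, mos)
# predicted_quality = model.predict(X_new)
```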
To evaluate the performance of the proposed metric, we compute the Spearman Rank Ordered Correlation Coefficient (SROCC) between the predicted scores and the subjective human quality scores provided by the subject-rated databases (a short sketch of this per-model computation is given at the end of this discussion). We present in table 1 the SROCC values of our no-reference metric and the correlation values of 7 full-reference metrics from the state of the art on the Liris-Masking database. We can notice that our approach BMQI produces high correlation values for all the 3D meshes without the need of their reference version, contrary to the full-reference metrics. These results confirm that our metric succeeds very well in taking the visual masking effect into account. We do not show the correlation over the whole database, since the subjective evaluation protocol used while designing the Liris-Masking database established the referential range for the rating separately for each 3D mesh, and therefore the correlation values over the whole set of 3D meshes are not really meaningful [START_REF] Lavoué | A comparison of perceptuallybased metrics for objective evaluation of geometry processing[END_REF]. Table 2 shows the correlation values of our metric on the Liris/Epfl General Purpose database. We notice that the performances of BMQI on this database are not as good as those on the Liris-Masking database. Indeed, the distortions of the Liris/Epfl General Purpose database (noise addition and smoothing) are applied on 4 distinct regions of the surface mesh (uniform regions, rough regions, smooth regions, and transitional regions). This aims at reflecting the distortions associated with common mesh processing methods like simplification, compression and watermarking [START_REF] Lavoué | Perceptually driven 3D distance metrics with application to watermarking[END_REF], which makes the quality assessment more difficult. From the results presented in table 2, it seems that BMQI assesses the visual quality in a multi-distortion context with less precision than in a mono-distortion context, even if the correlation values of three groups of meshes (Dinosaur, Venus and RockerArm) are high. This is mainly due to the correlation value of the Armadillo sub-corpus, which is lower than the other correlation values. This can be explained by the generated multi-scale saliency map, which may not reflect well the distorted salient regions; thus, the objective quality scores are not consistent with the quality scores of the human observers. The number of superfacets and their size are 2 parameters that could influence the performance of the proposed metric, and a more precise setting of these parameters may improve the results. Finally, when we consider the whole corpus of the Liris/Epfl General Purpose database, our approach provides a correlation value that is relatively low in comparison to the full-reference metrics of the state of the art. This is related, on the one hand, to the low correlation value of the Armadillo subcorpus and, on the other hand, to the number of 3D meshes considered in the learning step, which is very small.
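For completeness, the per-model SROCC values reported in the tables can be obtained with a few lines (illustrative sketch; variable names are assumptions):

```python
import numpy as np
from scipy.stats import spearmanr

def srocc_per_model(predicted, mos, model_names):
    """SROCC computed separately for each reference mesh, as in Table 1:
    `model_names[i]` identifies the reference mesh of the i-th distorted sample."""
    scores = {}
    for name in np.unique(model_names):
        mask = np.asarray(model_names) == name
        rho, _ = spearmanr(np.asarray(predicted)[mask], np.asarray(mos)[mask])
        scores[name] = 100.0 * rho                 # reported as a percentage
    return scores
```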
Indeed, a corpus consisting of 88 meshes with their associated MOS is not sufficient to design an effective quality metric in a multi-distortion context. In light of these results, and given the capacity of our approach to assess the perceived quality of a distorted mesh without the need for its reference version, our no-reference metric nevertheless appears competitive with the full-reference methods. For example, on the Liris-Masking database our no-reference approach obtains better correlation rates than our previous full-reference approach [START_REF] Nouri | Full-reference saliency-based 3d mesh quality assessment index[END_REF]. This could be explained by the segmentation of the mesh into superfacets.

Performance on independent 3D meshes

We have also tested our no-reference metric on 3D meshes that do not belong to any database. This allows us to analyse the behaviour of our metric when assessing the visual quality of an arbitrary 3D mesh. Figure 4 presents two reference 3D meshes with their distorted versions. The distortions considered are additive noise and simplification. It is important to note that the simplification distortion was not considered in the learning process used to select the parameters of the RBF kernel, since neither subject-rated database includes this type of distortion. In these experiments, we use the parameters selected for the Liris/Epfl General-Purpose database (see the subsection Performance and analysis). From the top row of figure 4, we can notice that BMQI provides quality scores that are coherent with human perception. The reference mesh (figure 4(a)) obtains a perceived quality score of 6.13. Its noised version obtains a quality score of 6.25 and its simplified version (more visually distorted) obtains a quality score of 6.74 (note that a low score signifies good quality and vice versa). The same remarks apply to the second row of figure 4.

Conclusion

In this paper we have proposed an approach to address the difficult problem of the blind quality assessment of 3D meshes. This new index uses simple characteristics computed on multi-scale saliency and roughness maps in order to assess the perceived quality of a 3D mesh without the need for its reference version. The good performance in terms of correlation with human judgments shows that our measure is competitive with full-reference metrics. Our future work consists of enhancing both the multi-scale saliency maps and the learning/regression step. Indeed, we believe that the subject-rated databases need to be larger in order to provide optimal learning and thus a better prediction of quality in the presence of different types of distortion. Another improvement would consist in deriving the multi-scale aspect from the superfacet sizes.

Figure 1: Block diagram of our metric.
Figure 2: Examples of 3D meshes belonging to the Liris/Epfl General-Purpose database. The top row shows the 4 reference meshes. The second row presents 4 distorted 3D meshes: e) 3D mesh Armadillo affected with noise on rough regions (MOS=0.84), f) 3D mesh Dinosaur uniformly smoothed (MOS=0.43), g) 3D mesh RockerArm affected with noise on smooth regions (MOS=0.75) and h) 3D mesh Venus affected uniformly with noise (MOS=1).
Figure 3: Examples of 3D meshes belonging to the Liris-Masking database. The top row shows 2 reference 3D meshes.
The second row presents 2 distorted 3D meshes: c) 3D mesh Vase-Lion affected with noise on rough regions (MOS=0.20) and d) 3D mesh Bimba affected with noise on smooth regions (MOS=1.0).

Table 1: SROCC values (%) of different viewpoint-independent metrics on the Liris-Masking database.

                Full-reference                                              No-reference
Mesh        HD     RMS    3DWPM1   3DWPM2   MSDM2   FMPD    TPDM    SMQI    BMQI
Armadillo   48.6   65.7   58.0     48.6     88.6    88.6    88.6    88.6    94.3
Lion-vase   71.4   71.4   20.0     38.3     94.3    94.3    82.9    83.0    94.3
Bimba       25.7   71.4   20.0     37.1     100.0   100.0   100.0   100.0   100.0
Dinosaur    48.6   71.4   66.7     71.4     100.0   94.3    100.0   100.0   83.0

Table 2: SROCC values (%) of different viewpoint-independent metrics on the LIRIS/EPFL General-Purpose database.

Author Biography

Anass Nouri received both the M.Sc. degree in intelligent systems and imaging from the IBN TOFAIL-University, Kenitra, Morocco, and the M.Sc. degree in telecommunications and image processing from the University of Poitiers, France in 2013. He obtained his Ph.D. in computer science from the University of Normandie in 2016. From 2016 to 2017, he was an assistant professor in the Computer Science department of the national school of engineering of Caen (ENSICAEN). His current research interests include 3D mesh processing, quality assessment of 3D meshes/2D digital images, and computational vision.
22,087
[ "9153", "230" ]
[ "406734", "388352", "406734" ]
01489959
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01489959/file/978-3-642-38998-6_14_Chapter.pdf
Christopher Bailey email: [email protected] David W Chadwick email: [email protected] Rogério De Lemos email: [email protected] Kristy W S Siu Enabling the Autonomic Management of Federated Identity Providers Keywords: identity management, self-adaptive authorization, federated authorization, computing security, autonomic computing The autonomic management of federated authorization infrastructures (federations) is seen as a means for improving the monitoring and use of a service provider's resources. However, federations are comprised of independent management domains with varying scopes of control and data ownership. The focus of this paper is on the autonomic management of federated identity providers by service providers located in other domains, when the identity providers have been diagnosed as the source of abuse. In particular, we describe how an autonomic controller, external to the domain of the identity provider, exercises control over the issuing of privilege attributes. The paper presents a conceptual design and implementation of an effector for an identity provider that is capable of enabling cross-domain autonomic management. The implementation of an effector for a SimpleSAMLphp identity provider is evaluated by demonstrating how an autonomic controller, together with the effector, is capable of responding to malicious abuse. Introduction Autonomic computing is fast becoming a means of improving traditional methods for repairing, managing and evolving systems in a plethora of application domains [START_REF] Kephart | The Vision of Autonomic Computing[END_REF]. One particular interest within autonomic computing is solutions that enable autonomic management of entities within complex systems, such as the autonomic management of federated authorization infrastructures (federations). Federations can be represented as a network of identity providers (IdPs) that identify and authenticate subjects (users) in order to facilitate their access to remote service providers' (SPs) resources. One aspect of managing federated authorization infrastructures is how to respond to subjects whose interactions and usage of resources becomes abusive, or malicious, whilst being within the bounds of their access privileges. For example, in the case of Wikileaks, an army intelligence officer allegedly accessed (then subsequently leaked) hundreds of thousands of classified U.S. Department of Defence cables [START_REF] Adams | Private Memo Exposes US Fears over Wikileaks[END_REF]. Since each individual access was granted by the SP's access control system, it did not detect any abuse. Had it done so, and the system had been federated, then the SP would have faced a dilemma, since the user's privilege attributes would have been assigned by a trusted IdP, and not by itself. The SP consequently loses some control over exactly who the subjects are and how they are authorised. It is a challenging task for human administrators to monitor, and respond to these potential malicious events today. They may only resolve these by either 1) removing the trust they have placed in the IdP, 2) by personally requesting the IdP to limit the offending subjects' privilege attributes, or 3) by stopping all accesses by anyone with these privilege attributes (unless they can uniquely identify the particular user, which is not always the case in federated systems). This is clearly time consuming and unsatisfactory. 
Previous work [START_REF] Bailey | Self-Adaptive Authorization Framework for Policy Based RBAC/ABAC Models[END_REF] identified the need for autonomic management of (federated) authorization infrastructures, and described the Self-Adaptive Authorization Framework (SAAF). SAAF analyses subject behaviour via subject usage of authorization services (i.e., from authorization decisions). It considers various adaptation strategies against the IdPs' and SP's components within federations. There are several challenges when considering the autonomic management of IdPs. Whilst SPs own the resources where the malicious behaviour is identified, they do not own the subject's privilege attributes that confer access. These belong to the IdPs. Yet SPs are required to limit these privileges in order to prevent further malicious events within their own domain. Assuming a SP deploys an autonomic controller, the controller is normally restricted in its operation to the SP's domain whilst the IdP is outside this domain. Therefore, adaptation strategies can only be executed on the IdP with its permission. Without this, the autonomic controller will need to resort to high consequence adaptations within its own domain (such as removing all trust in the IdP). Increasing the likelihood that an IdP will permit the requested adaptations requires a secure and configurable solution, in which the IdP maintains ownership of its data, and can act on adaptation requests through varying means, which it ultimately controls. The contribution of this paper is to define and implement the enabling concepts of automated and semi-automated management of subjects' privilege attributes within IdP domains, by SP domains. We describe the enabling solution as an effector, to be deployed within an IdP's domain. An implementation of the effector is deployed as part of an extended SimpleSAMLphp [START_REF][END_REF] IdP. An instance of SAAF, the autonomic controller, is deployed as part of a SimpleSAMLphp SP. We show that the performance of this system is good. The rest of this paper is structured as follows. In Section 2, we review background and related work. In Section 3 we describe a conceptual design to the problem area. In Section 4 we detail an implementation of the conceptual design. Section 5 describes the experimental results. Finally, Section 6 concludes by summarising the work done so far, and indicating future directions of research. Background and Related Work This section details a brief review of background and current work, which motivates this research, within the areas of authorization infrastructures, identity management, and autonomic computing. 2.1 Federated Authorization Infrastructures Federated authorization infrastructures (federations) refer to a collection of distributed services and assets (such as privilege attributes and authorization policies) that enable the sharing and protection of organisational resources, across organisational domains [START_REF] Chadwick | Federated Identity Management[END_REF]. Organisations, known as SPs, share their resources with users authenticated by trusted third party organisations, known as IdPs. Authorization is given in conformance to an authorization model, such as the Attribute Based Access Control (ABAC) model [START_REF]Security Frameworks for open systems: Access control framework[END_REF]. 
ABAC authorization policies state the permissions (actions executable against a resource) assigned to various attribute types and values, which the IdPs are required to store and provide on behalf of their subjects. There are various technologies that exist to enable federations. X.509 [7] defines a distributed privilege management infrastructure built with attribute certificates, upon which SAML attribute assertions [START_REF] Gorrieri | Security Assertion Markup Language (SAML) Version 2[END_REF] were modelled. Shibboleth [START_REF] Morgan | Federated Security: The Shibboleth Approach[END_REF] uses the SAML standard to protect web services over a network, requiring users accessing Shibboleth protected resources to authenticate against their IdP in order for the latter to provide attribute assertions to the former. SimpleSAMLphp [START_REF][END_REF] is an alternative implementation of the same SAML standard. PERMIS [START_REF] Chadwick | PERMIS: A modular Authorization Infrastructure[END_REF] was originally an implementation of the X.509 privilege management infrastructure, but was subsequently enhanced to support SAML attribute assertions as well. OpenID Connect [START_REF] Sakimura | OpenID Connect Standard 1.0 -draft 18[END_REF] and IETF Abfab [START_REF] Howlett | Application Bridging for Federated Access Beyond Web (ABFAB) Architecture[END_REF] are two of the latest federation protocols, which are in the final stages of being standardised. Self-Adaptation and Authorization The Self-Adaptive Authorization Framework (SAAF) [START_REF] Bailey | Self-Adaptive Authorization Framework for Policy Based RBAC/ABAC Models[END_REF] is a solution for improving the monitoring and regulation of resource usage within federations, through autonomic management. SAAF adapts authorization assets (i.e., privilege attributes and authorization policies) in response to identifying malicious/abusive behaviour. Malicious behaviour is identified by the monitoring of subject usage in conformance to behaviour rules (defined at deployment) that classify malicious patterns of usage (e.g., high rate of access requests). The deployment of SAAF (Figure 1) comprises an autonomic controller, owned by a SP, monitoring the use of its authorization services in relation to its protected resources. This is achieved through a feedback control loop [START_REF] Brun | Engineering Self-Adaptive Systems through Feedback Loops[END_REF], adapting authorization assets to further prevent or mitigate malicious behaviour. Fig. 1. Autonomic management in federated authorization infrastructures In the case of adapting SP assets (authorization policies), the SAAF autonomic controller is trusted by the SP to carry out the necessary adaptations, implying strict control. However, a critical adaptation within SAAF is the adaptation of authorization assets belonging to IdPs where control is restricted (loose control). Related Work To the best of our knowledge no other works explore the role of autonomic controllers across different management domains, in particular, within the area of federated identity management. However, similar works exist which explore the autonomic management of complex systems. For example, an autonomic management framework [START_REF] Cheng | Toward an Autonomic Service Management Framework: A Holistic Vision of SOA, AON, and Autonomic Computing[END_REF] for web services describes autonomic controllers deployed at the point of service, enabling services to identify and resolve their own management problems. 
Our work differs in that autonomic controllers are not applicable for all types of services within a federated authorization infrastructure, as malicious behaviour identified by SPs cannot be identified by the source (IdPs). This requires external autonomic controllers to operate across management domains. Other papers explore the role of autonomic management and cooperation between differing services within a network [START_REF] Psaier | Runtime Behaviour Monitoring and Self-Adaptation in Service-Oriented Systems[END_REF], whereby trust and reputation is relied upon to increase the favourability of cooperation (in our case, adaptation). In comparison, our work provides a platform for autonomic management in which trust already exists for issuing of privilege attributes, as a fundamental component of federations. Managing Identity Providers This section details the conceptual design for enabling the autonomic management of identity providers. Conceptual Design The ability to manage IdPs relies specifically on the trust that an IdP has in the (autonomic controller of the) requesting SP. For example, a SP identifies malicious/abusive activity associated with a subject belonging to an IdP. The SP might request the IdP to remove the subject's identity attribute(s) which grant the subject access rights at the SP. However, these identity attributes may give the subject access rights at many SPs, and not only at the abused SP. In the latter case the IdP might easily decide to grant the removal request. In the former case the decision is more difficult and hinges partially on whether the IdP is more concerned about upsetting its subject or the many SPs that it has trust relationships with (and which the subject might similarly be abusing). If the request is refused the SP is left with several options: -allow the malicious activity to continue (for example, when the alternative options have a greater cost when compared to the malicious activity), or -ask the IdP to alter its attribute release / issuing policy so that it does not issue attribute assertions for this subject, or -remove access rights from this specific subject (challenging, as it depends on how subjects are identified, i.e., through persistent or transient IDs) or -remove access rights from all subjects who share the same set of identity attributes with the abusive subject, or -remove all trust from this particular IdP (for example, the IdP has refused numerous adaptation requests and the abusive behaviour continues). To avoid the last option being taken, it is in an IdP's interest to comply with requests for management changes in relation to either its attribute release policy or one of its subject's identity attributes, otherwise SPs may associate too much risk in using the IdP. It is for these complex reasons that we have defined the autonomic management to be about the IdP's output i.e., its assertions about a subject's privilege attributes, so that it is independent of the actual internal mechanisms employed by the IdP to achieve this. Autonomic controllers only depend on the final outcome, which is to control the privilege attributes that the IdP will assert for a particular subject in the future. The IdP therefore remains in control of the corrective action that is to be taken, and deciding how to achieve the desired objective. We therefore propose the following two definitions: Definition 1. 
We define the automated management of a subject's privilege attribute assertions within a federated identity management infrastructure as: the ability for an autonomic controller, situated in a SP's domain, to issue adaptations to an IdP's domain in order to immediately control the privilege attribute assertions that the IdP will issue for that subject when it subsequently requests access to the SP's resources. Definition 2. We define the semi-automated management of a subject's privilege attribute assertions within a federated identity management infrastructure as: a variant of definition 1, whereby the IdP's domain queues adaptations for a human controller to review, before execution. Fig. 2. Conceptual design Figure 2 details the conceptual components of a managed IdP, which are required both to provide information to the autonomic controller (within the domain of a SP), and to control and enable it to request changes to a subject's asserted privilege attributes. The effector is the enabler for adaptations concerning an IdP. The authorization service at the IdP authorizes the autonomic controller, via the effector, to change either the issuing policy (which controls the subject's attribute assertions) or the attribute repository (which holds the subject's attributes). The audit log provides the effector with mappings between the local IDs of subjects and the IDs presented to the SP in the security assertions. The authorization services at the SP utilise the subject's security assertions provided by the IdP's authenticating and issuing services. The autonomic controller requests adaptations against the IdP's effector, and receives state changes (i.e., subject no longer has privilege attribute 'x'), to confirm adaptations. Identity Provider We assume an IdP is capable of authenticating a user as being one of its subjects, and of providing attribute assertions about an authenticated subject to SPs. The IdP is capable of utilising supporting technologies that facilitate the storage and access of subject credentials/privilege attributes, for example, the Lightweight Directory Access Protocol (LDAP). These privilege attributes are assumed to be cryptographically secured and provided to trusted SPs as security assertions, following a standard protocol, such as SAML [START_REF] Gorrieri | Security Assertion Markup Language (SAML) Version 2[END_REF]. We also assume IdPs are able to log and audit security assertion assignments, as well as the authentications made through the IdP authentication services and any random, transient or session identifiers that are assigned to the subjects in the security assertions. Without these auditing capabilities, IdPs are unable to map session usage to actual subjects, in case they need to identify subjects when responding to notifications of malicious activity. Autonomic Controller and Service Provider The autonomic controller is capable of observing activity within the SP's resources to produce a state, specifically in relation to the accessing subjects and the use of subject privileges. The autonomic controller is able to classify malicious/abusive behaviour as behaviour rules. Behaviour rules are defined at deployment by sources of authority within the SP domain, and relevant to the SP's environment (i.e., academic / governmental). The autonomic controller is able to assess conformance to behaviour rules by observing subject usage, and respond when abusive behaviour has been identified. 
We make the assumption that the responses made by the autonomic controller are necessary, although the method in which abusive behaviour is identified, and the response chosen, is not covered by this paper. The autonomic controller is placed in the SP domain, as it is intrinsic to identification of malicious activity attributed through the subject's direct actions against the SP. In the case of managing IdPs, an autonomic controller's adaptations refer to the modification of privilege attribute assertions. Each request made by an autonomic controller specifies an abstract adaptation operation along with enabling information, such as the persistent ID to which malicious behaviour is attributed, and the privilege attributes used. Requests are made over a reliable communications protocol and are idempotent, meaning that the autonomic controller will expect to always get a response. The autonomic controller may continue to make the same request (until a timeout is reached) if a response is not received, without adapting the final state of the IdP's system. Upon timeout or a failure response the adaptation is classed as failed. Request-responses may be synchronous or asynchronous. Synchronous communications are used to implement the automated management of a subject's attribute asser-tions, whereas asynchronous communications are used to implement semi-automated management. Identity Provider Effector The IdP's effector is under the full control of the IdP administrator. He/she configures it to process adaptations requested by a SP's autonomic controller, either synchronously, or asynchronously. Communication flows between the IdP's effector and IdP software are made internally and rely on a host's operating system to ensure security. Communication between an autonomic controller and an IdP's effector are executed via secure communication, such as TLS/SSL, and require mutual authentication. The effector requires access to issuing policies, attribute repositories and audit logs, within the IdP. Access to issuing policies is required in order to adapt the policy controlling the subjects' privilege attributes asserted by the IdP (if allowed by the administrator). Access to logs is required to map between an identifier (persistent or transient) that the SP has received, and the internal identifier of the subject. Access to attribute repositories is needed to modify a subject's privilege attributes (if allowed). The effector supports a set of abstract adaptations that are necessary when managing an IdP. It is expected to translate these abstract adaptations into concrete adaptions that are supported by the underlying technology. For example, 'remove subject's privilege attribute assertion' may be translated into the relevant LDAP modify command in order to be executed against the LDAP directory, or into the appropriate Shibboleth attribute release policy to stop the SAML attribute assertion being created. The list of executable adaptations is as follows, and these are referred to as the effector operations: 1) Remove privilege attribute assertion from all subjects, 2) Remove privilege attribute assertion from identified subject, 3) Add privilege attribute assertion for all subjects, and 4) Add privilege attribute assertion for identified subject. A consequence of defining such a set of abstract operations is that it allows the IdP to utilise an authorization service to determine which operations to allow and which to deny, and then to determine how to implement the allowed ones. 
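To illustrate how the per-subject operations in the list above could be translated into concrete adaptations, the sketch below maps them onto LDAP modifications using the ldap3 library. This is an illustrative Python sketch only, not the effector implementation described later (which is a PHP web service); the server address, bind DN and attribute names are hypothetical, and the "all subjects" operations are assumed to be handled through policy changes instead, as described in the text.

# Illustrative translation of the abstract effector operations into LDAP
# modifications. Whether a request is actually executed is still governed by
# the IdP's own authorization (e.g. LDAP access control lists).
from ldap3 import Connection, Server, MODIFY_ADD, MODIFY_DELETE

server = Server("ldaps://idp.example.org")
conn = Connection(server, "cn=effector,dc=example,dc=org", "secret", auto_bind=True)

def remove_attribute_from_subject(subject_dn, attr_type, attr_value):
    # Operation 2: remove a privilege attribute assertion from one subject.
    return conn.modify(subject_dn, {attr_type: [(MODIFY_DELETE, [attr_value])]})

def add_attribute_for_subject(subject_dn, attr_type, attr_value):
    # Operation 4: add a (possibly reduced) privilege attribute for one subject.
    return conn.modify(subject_dn, {attr_type: [(MODIFY_ADD, [attr_value])]})

# Example: stop asserting permisRole=employee for an identified subject.
remove_attribute_from_subject("uid=jdoe,ou=people,dc=example,dc=org",
                              "permisRole", "employee")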
The addition of privilege attribute assertions is provided in order to specify a subject with reduced privileges (as a new attribute), where attributes exist within a hierarchy. For example, a Supervisor attribute inherits from an Employee attribute. Implementation This section describes the implementation of the effector for a SimpleSAMLphp [START_REF][END_REF] IdP, and shows how it can be integrated with an autonomic controller. Federated Authorization Infrastructure The effector together with a single SP and a single IdP are implemented as a SAML compliant federation. SimpleSAMLphp is used as the unifying technology to enable communication between the two providers. This is a basic federated authorization infrastructure to demonstrate the effector, however the effector could potentially be used in setups with multiple services and IdPs. The IdP is implemented on a single host machine, whereby an instance of Simple-SAMLphp is installed and configured to provide IdP services. An open LDAP server is installed to store subject privilege attributes and authentication information. Finally, an implementation of a SimpleSAMLphp IdP effector is installed, compliant with our conceptual design, to enable cross-domain management. The effector makes use of open LDAP's access control lists in order to manage the extent of adaptations a client is permitted to request. The SP is implemented across two host machines, one to host the SP's resources (resource host), and one to host an autonomic controller and authorization services (authorization host). The authorization host deploys an implementation of SAAF [START_REF] Bailey | Self-Adaptive Authorization Framework for Policy Based RBAC/ABAC Models[END_REF] and an instance of PERMIS [START_REF] Chadwick | PERMIS: A modular Authorization Infrastructure[END_REF], which is used to protect the SP's resources deployed on the resource host. PERMIS is capable of utilising ABAC authorization policies to provide the validation of SAML attribute assertions issued by IdPs, and access control decisions to the resource host. Extending SimpleSAMLphp To facilitate operations by the IdP's effector, we extended the logging capabilities of SimpleSAMLphp in order to always ensure the correct retrieval of a subject's LDAP distinguished (unique) name. SimpleSAMLphp stores its log information in a relational database (SQLite). In its original configuration, SimpleSAMLphp was only capable of mapping persistent IDs to subject attribute values. Additional information, such as attribute type, LDAP host, and LDAP search base, is needed in order to locate the actual subjects' LDAP entries for both transient and persistent IDs. Whilst some of this information e.g. LDAP host names, is available in the SimpleSAMLphp configuration file, it is not persistent to configuration changes. For this reason we decided to record all this additional information in the log DB, so that the effector is always able to identify the abusive subject's distinguished name. SimpleSAMLphp Effector The SimpleSAMLphp effector, shown in Figure 3, implements a subset of the effector component shown in Figure 2. It is a PHP web service hosted alongside the Simple-SAMLphp IdP service. It has access to the log database stored within the Simple-SAMLphp directory, which enables it to map between persistent and transient IDs and a subject's distinguished name. Web service clients, such as the SAAF controller, can access the effector providing they have been issued with a trusted client X.509 certificate. 
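The mapping from the IDs seen by the SP to the subject's LDAP entry, which the extended log database makes possible, might look like the following sketch. The table and column names are assumptions for illustration, not the actual SimpleSAMLphp schema:

# Sketch of resolving a persistent/transient ID received from the SP to the
# subject's LDAP distinguished name via the extended audit-log database.
import sqlite3
from ldap3 import Connection, Server

def resolve_subject_dn(persistent_id, sp_entity_id, logdb_path="idp-audit.sq3"):
    db = sqlite3.connect(logdb_path)
    row = db.execute(
        "SELECT attr_type, attr_value, ldap_host, search_base "
        "FROM issued_ids WHERE persistent_id = ? AND sp_entity_id = ?",
        (persistent_id, sp_entity_id),
    ).fetchone()
    db.close()
    if row is None:
        return None                      # unknown ID: report failure to the client
    attr_type, attr_value, ldap_host, search_base = row
    conn = Connection(Server(ldap_host), auto_bind=True)
    conn.search(search_base, f"({attr_type}={attr_value})", attributes=[])
    return conn.entries[0].entry_dn if conn.entries else None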
Mutual SSL/TLS authentication is required and the client's certificate distinguished name is used to identify the requesting client. Although the effector component conforms to the conceptual design described in Section 3, it is somewhat restricted due to the limited capabilities of Simple-SAMLphp. SimpleSAMLphp relies upon an attribute repository, such as LDAP, along with an attribute release / issuing policy which is represented by a PHP configu-ration file. However, the attribute release policy is constrained to stating only which attributes can be released to which SPs, regardless of the individual subject. As a result the effector adapts subject attributes held in the LDAP repository in order to achieve the per subject granularity. Modifying the privilege attribute assertions for all subjects is implemented by changing the SP's PERMIS credential validation policy rather than the SimpleSAMLphp attribute release policy. However, if the SP's authorization services do not provide credential validation policies, then adaptation of attribute release policies will be needed. When operating synchronously, the effector utilises the LDAP access control lists in order to authorize the subject level adaptation requests, notifying requesting clients of failure in case the client is unauthorized. When operating asynchronously, meaning manual review is required, the effector queues requests and notifies administrators via email when new requests are received. Human administrators then review the queued requests before allowing the effector to execute an adaptation and inform the client of success or failure. The effector is initialised once it receives a SOAP message request from a client. From here SOAP requests are processed in the following manner: 1) mutually authenticate the requesting client over TLS and obtain the requestor's distinguished name (DN) from its certificate, 2) verify the requested operation is valid, 3) retrieve the target subject's unique attribute mapping from the persistent/transient ID stored in the SimpleSAMLphp audit log database, 4) retrieve the subjects' DN(s) using the relevant LDAP host name and search base, 5) translate the requested operation into LDAP executable operations, 6) bind the requestor's DN to the relevant LDAP server, 7) execute the update operation against LDAP, providing the access control list allows it, 8) respond to the client with confirmation of the state changes. Experiments In this section, we discuss the deployment of the SimpleSAMLphp IdP and its effector in relation to a case of abuse identified with a SAAF controller. Adaptation Scenario The SimpleSAMLphp IdP is configured to issue persistent IDs with the release of privilege attributes for authenticated subjects. An LDAP directory is populated with subject authentication and privilege attributes. The effector is deployed, configured to run synchronously, and rely on an LDAP access control list to restrict the actions of a SAAF autonomic controller. The SP is configured to host a payroll web application that utilises a policy enforcement point (PEP). The PEP requires subjects to 1) authenticate against the subject's IdP, 2) obtain the subject's releasable privilege attributes in the form of a SAML assertion (via SimpleSAMLphp), and 3) utilise the SP's authorization services to provide an authorization decision. PERMIS is deployed with an authorization policy that states the IdP is trusted to assign the privilege attribute 'permis-Role=employee' to its subjects. 
This privilege attribute can be used to execute the permission of 'get employee payslip' on the payroll web application. The SAAF autonomic controller is deployed with a simple behaviour policy stating that no single subject belonging to the IdP may request access to any of the SP's resources, greater than 10 requests per minute. This is to stop automated attacks. SAAF profiles usage based on subjects' persistent IDs associated with the federated access requests. Should this rule be broken SAAF identifies the subject as committing abuse and can respond through various adaptation strategies. The best adaptation strategy is chosen based on a weighted decision problem solving algorithm, for example, considering the cost of realising the adaptation strategy against the cost of allowing abuse to continue. In this scenario, a subject registered with the IdP, requests access to 'get employee payslip' more than 10 times within a minute interval. Each time the subject requests access, PERMIS logs the request, detailing the subject's attributes from the subject's SAML assertion, the subject's persistent ID, and access decision given. The SAAF autonomic controller builds up the subject's pattern of access based on these logged events, checking conformance access against its behaviour policy. SAAF identifies that the stated behaviour rule has been broken, and reacts by requesting the Simple-SAMLphp effector to prevent the subject from using the privilege attribute of 'per-misRole=employee'. The SAAF autonomic controller encapsulates this request in a SOAP message, which is sent over a mutually authenticated HTTPS connection to the effector. It contains an operation (remove privilege attribute), the subject's persistent ID observed from the subject's SAML assertions, the SP's ID to identify where the persistent ID was used, attribute type (permisRole) and attribute value (employee). Providing the effector's response to the client indicates a successful adaptation (i.e., subject will no longer be issued permisRole=employee), the SAAF controller assumes the adaptation has been successful. However, if the response indicates an unsuccessful state, the offending subject is free to continue committing malicious behaviour. If the subject's behaviour continues, SAAF may take steps to remove the trustworthiness of the IdP in question, but this is not addressed here. Performance and Load Tests We have executed four types of load and performance tests. These tests are categorised as T1 -successful adaptation, T2 -invalid operation, T3 -invalid subject mapping, and T4 -LDAP error (either not authorized or unable to execute action). Tests were performed on two virtual machines (Debian 6.0.5 512MB memory hosted on a 2.4Ghz, 3GB memory MS Windows machine), as server and client, where threads on the client machine were used to depict multiple virtual clients. We measured the average response times within an interval of one second (reflecting the minimum SAAF autonomic controller adaptation cycle), issuing requests within a single initial burst until the interval was complete. On average, with the minimum load of one client (one SAAF) issuing one request per second, we found performance of T1 requests could be executed in 65ms, T2 in 49ms, T3 in 50ms, and finally T4 in 62ms. We identified that the maximum load (Figure 4) was reached with 18 clients executing one request within the one-second interval. 
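The behaviour rule that triggers these adaptation requests, at most 10 access requests per subject per minute, can be sketched as a sliding one-minute window over the persistent IDs logged with each access decision. This is an illustrative re-implementation for clarity, not SAAF's actual rule engine:

# Flag any subject whose persistent ID issues more than 10 requests within a
# one-minute window, the rule used in this adaptation scenario.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 10
recent = defaultdict(deque)   # persistent ID -> timestamps of recent requests

def record_request(persistent_id, timestamp):
    q = recent[persistent_id]
    q.append(timestamp)
    while q and timestamp - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS   # True -> trigger an adaptation strategy

# Example: the 11th request inside one minute breaks the rule.
abusive = any(record_request("persistent-id-123", t) for t in range(11))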
In practice we do not expect SAAF to create a high load on this effector, due to the nature in which it executes adaptation strategies. As more adaptation requests are made, this is likely to coincide with increased levels of malicious activity, causing the autonomic controller to resort to high-consequence adaptations that are out of scope of the effector, such as changing its local PERMIS policy.

Conclusion

This paper has presented an approach for enabling the autonomic management of federated identity providers (IdPs) across independent management domains. The motivation for this work is the fact that service provider (SP) domains can diagnose IdP domains as the source of malicious abuse. At the conceptual level, the basis of the proposed approach is the integration of an autonomic controller, positioned in the domain of a SP, with an effector, positioned in the domain of an IdP. We present the conceptual design of the effector, whilst satisfying key safeguards such as ensuring the IdP remains in complete control of its assets. This effector has been implemented and evaluated through the deployment of a federated authorization infrastructure, which incorporates a SimpleSAMLphp IdP. We have shown that an autonomic controller is able to manage, via the effector, an IdP's ability to assign privilege attributes to its subjects. Through performance and load testing, we have shown that the IdP's effector is capable of operating with multiple autonomic controllers when handling adaptation requests within an autonomic controller's minimum adaptation cycle. In the work described in this paper, it is recognised that the autonomic controller does not have strict control over the IdP, and relies on the IdP's goodwill. In order for control to be more effectively applied, it would be necessary to have a legal service agreement or similar between the SP and the IdP, whereby the IdP agrees to enact the SP's adaptation requests. In this way, the sphere of control exercised by the SP's autonomic controller would extend beyond the domain of the SP with which it is associated, to that of the IdPs to which the SP is contractually bound. Our future work aims to explore the requirements of service agreements between SPs and IdPs in order to ensure control when managing subjects' access rights between different domains.

Fig. 3. Effector for SimpleSAMLphp IdP
Fig. 4. Average (mean) response time, with standard error, against number of clients
33,870
[ "1004062", "996070", "1004063", "1004064" ]
[ "300739", "300739", "300739", "300739" ]
01489961
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01489961/file/978-3-642-38998-6_15_Chapter.pdf
Siri Fagernes email: [email protected] Alva L Couch email: [email protected] On the Effects of Omitting Information Exchange Between Autonomous Resource Management Agents Keywords: self-organization, resource management, autonomic computing, agent management, distributed agents We study the problem of information exchange between coordinated autonomous resource management agents. We limit information to that which can be directly observed by each agent. While exchanging all relevant observables leads to near optimal management, leaving out information leads to "hidden variable" problems that affect only part of the overall behavior. The patterns observed for hidden variables in simulation predict what will happen in a realistic situation when not all information is interchanged. Through simulation, we observe that leaving out information results in non-optimal behavior of the resource management model when resource needs are decreasing, although the partial information model performs very well when resource needs are increasing. Introduction Traditional approaches to autonomic management of larger distributed systems involves monitoring many components, and gathering information at a global level. Many solutions also involve creating representations of the system with near-complete knowledge of both entities and their interactions. The gathering of such knowledge is costly and generates significant network overhead. To achieve dynamic resource allocation in cloud computing, there has been an increased attention to the so-called elasticity of cloud data centres [START_REF] Sharma | Kingfisher: Cost-aware elasticity in the cloud[END_REF], [START_REF] Chacin | Utility driven elastic services[END_REF]. The primary goal is to run cloud data centres cost-efficiently; to fulfil SLAs without overprovisioning. This requires a system for dynamic resource allocation, which is able to adapt to varying demands over time. In our research we seek simpler solutions to autonomic management. Our primary goals have included decreasing information exchange among entities and decreasing the need for prior knowledge and learning. Previously we have demonstrated that timing of events in an autonomic system is crucial in to achieve efficient management [START_REF] Fagernes | On the combined behavior of autonomous resource management agents[END_REF], [START_REF] Fagernes | On alternation and information sharing among cooperating autonomous management agents[END_REF], [START_REF] Fagernes | Coordination and information exchange among resource management agents[END_REF]. In this study, our goal is to determine what type of information is most important to achieve sufficient management results, and whether certain types of information exchange are unnecessary. Related work The traditional approach to achieving autonomic management is based on control theory. It is based on control loops which monitor and give feedback to the managed system, in addition to making changes to the system based on the feedback. The control-theoretical approach is suited for managing closed systems, which are usually less vulnerable to unpredictable events and external forces influencing the system. It is not as successful in representing open systems, where we do not necessarily know the inner structure and relationships [START_REF] Dobson | A survey of autonomic communications[END_REF]. 
The control-theoretical approach involves the use of one or more autonomic controllers, which sense and gather information from the environment where they reside. If any global knowledge needs to be shared among the controllers, this is normally done through a knowledge plane (KP) [START_REF] Daniel F Macedo | A knowledge plane for autonomic context-aware wireless mobile ad hoc networks[END_REF], [START_REF] Mbaye | A collaborative knowledge plane for autonomic networks[END_REF], [START_REF] David D Clark | A knowledge plane for the internet[END_REF]. A KP should provide the system with knowledge about its goals and current states, and hence be responsible for gathering all necessary information and also generating new knowledge and responses. This approach involves much coordination and information overhead among the networked entities in the system being monitored. To achieve autonomic resource management based upon the above approaches, one normally uses adaptive middleware, which is placed between the application and the infrastructure [START_REF] Padala | Adaptive control of virtualized resources in utility computing environments[END_REF], [START_REF] Pacifici | Dynamic estimation of cpu demand of web traffic[END_REF], [START_REF] Adam | Service middleware for self-managing largescale systems[END_REF]. This middleware mediates between managed services and clients, and reconfigures services as needed to adapt to changing needs and contingencies. Cloud elasticity is defined as the ability of the infrastructure to rapidly change the amount of resources allocated to a service to meet varying demands on the service while enforcing SLAs [START_REF] Ali-Eldin | An adaptive hybrid elasticity controller for cloud infrastructures[END_REF]. The goal is to ensure the fulfilment of the SLAs with the least amount of overprovisioning. A common approach is to build controllers based on predictions of future load [START_REF] Ali-Eldin | An adaptive hybrid elasticity controller for cloud infrastructures[END_REF]. [START_REF] Sharma | Kingfisher: Cost-aware elasticity in the cloud[END_REF] proposes system integrating cost-awareness and elasticity mechanisms like replication and migration. The system optimizes cost versus resource demand using integer linear programming. [13] models a cloud service using queueing theory and designs a closed system consisting of two adaptive proactive controllers to control the QoS of a service. Predictions on future load is used as a basis for estimating the optimal resource provisioning. In this paper, we study an approach to elasticity based upon autonomous, distributed agents. This differs from the middleware approach in that the agents are autonomous and distributed, and do not mediate between clients and services; they simply observe what is happening and adapt the service accordingly. We avoid the use of a centralized planner, to increase both potential scalability and robustness, and seek instead to define autonomous, independent agents whose minimal interactions accomplish management. 
Models and variations The work presented in this paper is a continuance of the work presented in [START_REF] Fagernes | On the combined behavior of autonomous resource management agents[END_REF], [START_REF] Fagernes | On alternation and information sharing among cooperating autonomous management agents[END_REF] and [START_REF] Fagernes | Coordination and information exchange among resource management agents[END_REF], which again is based on the work presented in [START_REF] Alva | Dynamics of resource closure operators[END_REF] and [START_REF] Alva | Combining learned and highly-reactive management[END_REF]. The original closure model [START_REF] Alva | Dynamics of resource closure operators[END_REF] consists of a single closure operator Q controlling a resource variable R. The resource level determines the performance of the system, which is determined based on the response time of the service the system delivers. Decisions on resource adjustments (increase/decrease) are made based on iterative feedback on the perceived value of the service. Initial studies [START_REF] Alva | Dynamics of resource closure operators[END_REF], [START_REF] Alva | Combining learned and highly-reactive management[END_REF] showed that a simple management scheme with minimal available information could achieve close-to-optimal performance. In [START_REF] Fagernes | On the combined behavior of autonomous resource management agents[END_REF], [START_REF] Fagernes | On alternation and information sharing among cooperating autonomous management agents[END_REF] and [START_REF] Fagernes | Coordination and information exchange among resource management agents[END_REF] we extended the original model to a two-operator model. The aim with this research was to investigate the efficiency of management when two resource variables in the same system need to be updated without access to full system information. Single closure model The single closure model represents a system that delivers a service S. The resource usage is modelled by a resource variable R, which in this scenario represents a number of virtual servers, and has an associated cost C. The system or service performance is measured by the response time P , which is affected by the system load L. The total value of the service is V . The system load L is defined as an arrival rate of requests, and the system performance P is defined as the request completion rate. The system dynamics are as follows: -Cost increases as R increases. A linear relationship between C and R is plausible, i.e. C = αR. -Performance P increases as R increases, and decreases as the load L increases. The system performance P (in requests handled per second, a rate) has a baseline performance B (the quiescent request completion rate, or the performance when there is no load affecting the system). B is a constant value. A plausible estimate of system performance P is then the baseline performance minus corrections for load and resource usage, P = B -L R . This has the rough shape of a realistic performance curve, though realistic curves are described by much more complex equations. The definition of P is a statement that as L increases, performance decreases; B is the baseline performance for no load. The model is an ideal case. In real situations, there would be a baseline in which B is not affected; for certain levels of L, P would be flat. -A plausible concept of value is βP , which is β(B -L R ), i.e., there is higher value for higher throughput. β is a constant of proportionality. 
Again, this is an approximation of reality, and not an exact measure. -Without loss of generality, we set α = 1, β = 1, and B = 200 (requests/second). While in a practical situation, α and β would be determined by policy, the shape of the overall optimization problem is exactly the same as if they were just set to 1. Based upon this, we obtain a total net value N = V -C = B -L R -R, where N represents some monetary value. The model is based on a scenario of diminishing returns, in which as resources are added, there is a point where adding resources increases cost more than value. In our scenario, in increasing the resource usage without the increase in other parameters, the total net value produced will be lower. As V = B -L R gets closer to B, resource utilization does not justify value. That means that total net value N = V -C = B -L/R-R has a local minimum that is also the global minimum. Different hardware architectures determine different baseline performance values B, which do not affect the method for assuring optimal system performance. To maximize N = V -C, we estimate the derivative dN/dR and try to achieve the resource level which corresponds to dN/dR = 0, through a simple hill-climbing strategy. If dN/dR > 0, we increase R; if dN/dR < 0, we decrease R. We chose this simple model as an approximation of reality that allows us to compare our algorithms with optima that we can compute. In reality, in our scheme, the optimum values for R are not known at runtime. It is often the case that in a practical situation, the reward curve follows a pattern of diminishing returns. For example, in adding resources, the user cannot necessarily perceive the difference. In our model, this is quantified by balancing cost and value, so that diminishing returns become apparent. This differs from the standard model of assuring fixed setpoints for performance (as defined in SLOs or SLAs), in that there is a balance between cost and value rather than a specific goal. In our model, the setpoints are determined dynamically; if cost and value change, and the setpoint is invalidated, and our model instantly adjusts to a new global optimum, which has the character of a new setpoint. Two closure operators In earlier studies ( [START_REF] Fagernes | On the combined behavior of autonomous resource management agents[END_REF], [START_REF] Fagernes | On alternation and information sharing among cooperating autonomous management agents[END_REF], [START_REF] Fagernes | Coordination and information exchange among resource management agents[END_REF]) we extended the closure model above to apply to the scenario of two closure operators, each controlling a separate resource variable influencing the same system. In this model the system delivers a service with a total response time P = P 1 + P 2 , where P 1 (P 2 ) is the individual response time of the part of the system controlled by closure Q 1 (Q 2 ). In this case the overall value is V = P = P 1 + P 2 = B -L R1 -L R2 . Both closures receive the same feedback, which means that they are less able to identify the effects of their own actions. The two-operator scenario is illustrated in Figure 1a and the corresponding closure model in Figure 1b. "Gatekeeper" nodes are responsible for gathering statistics on load and value, while "closure" nodes Q 1 and Q 2 control state based upon what the gatekeepers measure. This sets values for resources R 1 and R 2 in the system being managed. 
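A minimal sketch of this two-closure setup is given below, with each closure applying the same hill-climbing rule to its own resource variable. For clarity the true partial derivative dN/dR_i = L/R_i² - 1 is used in place of the windowed estimates discussed in the next section, and the load, increment size and iteration count are arbitrary illustration values:

# Two-closure hill climb: each closure nudges its own resource by a fixed
# increment in the direction of its (here, exact) estimate of dN/dRi.
B, L, step = 200.0, 2000.0, 1.0
R1, R2 = 10.0, 60.0

for t in range(300):
    dN_dR1 = L / R1**2 - 1.0          # closure Q1's view of the slope
    dN_dR2 = L / R2**2 - 1.0          # closure Q2's view of the slope
    R1 += step if dN_dR1 > 0 else -step
    R2 += step if dN_dR2 > 0 else -step

N = B - L / R1 - L / R2 - R1 - R2     # net value: value minus cost of both resources
print(R1, R2, N)                      # both settle near the optimum sqrt(L) ≈ 44.7

In the simulator the derivatives are not known; each closure must estimate its slope from observed history, which is where the information-exchange choices below come in.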
A typical web service contains "front end" and "back end" components that frequently compete for resources. Information Exchange The main motivation in this study was to determine which type of information that has the strongest effect on the precision of the closure model. The crucial part in the simulator is the estimation of dV /dR, which is the basis for estimating dN/dR. dV /dR is estimated through a linear curve fitting process, using available information observed in the environment. It is natural to assume that the more information used in this process, the better estimate we will obtain. In this study we compare different estimators of "slopes" based on selected variables that are observed over time. In this section the different slope-estimates will be explained, both for the single-and two-operator model. Information exchange in the univariate case The interpolation functions in the simulator are all based on a linear fit of the history of values of V and R to give an estimate of dV /dR. Since dC/dR is 1, dN/dR is dV /dR -1. For the univariate scenario we have tested two different fitting functions: 1. Minimum information: We assume knowledge of the inverse nature of the relationship between V and R, such that we use linear interpolation to estimate a in V = a 1 R + b. 2. Full information: Additional information of the system load L is required to make a linear fit of V = a L R + b, which includes everything observable in our model. Information exchange -two closure operators We performed the same study for the model with two closure operators. In the model we tested, the system consists of two separate parts, each of which has an individual response time P 1 (P 2 ), but each of the closures receives the same value feedback based on the overall response time P = P 1 + P 2 . This makes it challenging to estimate their individual influence on the system based on changes in their own resource variable. In the multivariate scenario, three different slope estimators were tested. 1. Independent optimization: fits V to a 1 Ri + b for each of the closures i. This requires information about V and R i for each closure. (I.e. does not require information about the other closure.) 2. Knowledge of other resource use: fits V to a 1 R1 + b 1 R2 + c. This requires information of V and both resource values R 1 and R 2 . 3. Full knowledge of resources and loads: fits V to a L R1 + b L R2 + c. This requires information about V, R 1 , R 2 and L. Experiments and Results In this section the experiment setup will be explained, along with the main findings. We ran simulations on both the single-operator model and the two-operator model. For each of the model we ran simulations with different information exchange, two different levels for the single operator scenario, and three different dV/dR estimation methods for the two-operator scenario. The decision-making process depends on received feedback on the perceived value V of the current system performance. The change in V is estimated through a sliding window computation. The "measurement window" is the number of measurements utilized in each prediction. This window has a finite size in measurements, where each measurement is done at a different time step. At each measurement step, the earliest measurement is discarded and replaced by a current measurement. Larger windows incorporate more history, which makes predictions more accurate in stable situations and less accurate in changing conditions. 
Smaller windows are more reactive, and adapt better to changes in situation. To check how the amount of available history affected the precision of the models, we varied the size of the measurement window w = 3, 5, 10. System load was sinusoidal, L(t) = 1000sin((t/p) * 2π) + 2000, which made the load vary periodically between 1000 and 3000. Many realistic sites observe roughly sinusoidal load variation based upon day/night cycles. We recorded resource usage, response time and net value for all simulations. When the closures do not exchange full information, i.e. when the closures do not use information about system load and the resource level of the other operators, we observe what we refer to as a hidden variable problem. The results from both the single-and two-closure operator model show that the precision of fit of the model compared to the theoretical values have been lower in certain parts of the data. In simulation results (like Figure 2), when the load L increases, the closure model produces estimates of R that are quite close to optimal. However, when load decreases, the optimality of the solution is lower. The R estimates oscillate around the optimum, but move quite far away from the optimum curve. Increasing the measurement window did not have any positive effect, as seen in Figure 3 and Figure 5. Up-hill, the resource usage curve is shifted to the right of the optimum curve, while the oscillations increase downhill compared to when the input window is smaller. This is evidence that the actual resource usage varies farther from the optimum curve for larger input windows. To mitigate the oscillation problem, we must add "full information" (Figure 5). For the single operator model, this includes information about the inverse relationship between V and R, and the value of system load L. In this case the net value tracks the optimum value quite closely. Clearly, this performs better than the partial information model. For the two-closure model, full information includes all information for the single operator model, plus information about R 1 and R 2 ; which means that the agents exchange information about their current resource levels. The simulations show that heavy oscillation in resource usage is present when we do not provide full information. The estimator that results in the worst performance, is independent optimization (Figure 6), in which the two operators optimize separately without exchanging any information. The model performs better when the resource demand increases, but resource allocations oscillate extensively when the resource demand decreases (Figure 6a). As seen in Figure 6b, this affects net value, which varies far from the theoretical best (the dotted line). Adding information about the second operator improves performance somewhat (Figure 7). There is still more oscillation when load decreases (Figure 7a), but significantly less compared to the results for independent optimization. The improvement is more obvious when comparing net value N in the two cases (Figure 7b). Adding additional information about system load more or less removes the oscillation effect (Figure 8). The resource usage generated by the simulator tracks the theoretical optimum with high precision (Figure 8a), and the curve representing total net value is almost precisely the theoretical optimum. To obtain an understanding of what generates the oscillations in the experiments, we studied how the model estimates dN/dR. 
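A sketch of how such a windowed estimate can be formed for the single-operator case is shown below: over a window of w recent observations, V is fitted either to a·(1/R) + b (minimum information) or to a·(L/R) + b (full information), and the fitted slope gives the estimate of dV/dR at the current resource level. The observation arrays are placeholders and w is a simulation parameter:

# Sliding-window slope estimation under the sinusoidal load described above.
import numpy as np

w, p, B, R = 5, 100, 200.0, 40.0
t = np.arange(w)
L = 1000 * np.sin((t / p) * 2 * np.pi) + 2000        # observed loads in the window
Rs = np.array([38.0, 39.0, 40.0, 41.0, 40.0])        # resource levels in the window
V = B - L / Rs                                       # observed values in the window
ones = np.ones(w)

# Minimum information: fit V = a*(1/R) + b, so dV/dR is estimated as -a / R**2
a_min, _ = np.linalg.lstsq(np.column_stack([1 / Rs, ones]), V, rcond=None)[0]
dVdR_min = -a_min / R**2

# Full information: fit V = a*(L/R) + b, so dV/dR is estimated as -a * L / R**2
a_full, _ = np.linalg.lstsq(np.column_stack([L / Rs, ones]), V, rcond=None)[0]
dVdR_full = -a_full * L[-1] / R**2

# The hill climb then increments R when dN/dR = dV/dR - 1 is positive.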
For the scenario illustrated in Figure 2, we plotted the values for dN/dR estimated by the simulator. Figure 9a shows the estimated dN/dR-values (solid line) for a part of the simulation when the load L increases, while Figure 9b displays the same values when load decreases. The dashed line shown in both figures represents the theoretical value of dN/dR, which is L R 2 -1. The horizontal line in each figure determines whether resources will be incremented (for dN/dR above the line) or decremented (for dN/dR below the line). Figure 9b shows a delay in tracking the correct dN/dR-values, which is caused by the sliding window. As data on the change enters the window, it takes a few steps of measurement for the prediction to change. This creates greater and greater differences from optimal that the fixed resource increment size never corrects, even when resources are being updated in the proper direction. The delay causes a deviation from the theoretical values, and this deviation increases as the simulation progresses. This suggests that that implementing adjustable increments based upon the relative magnitude of the estimated dN/dR could solve the oscillation problem. Conclusions We discovered a phenomenon of heavy oscillation in our closure model. For the model to estimate the optimal resource level throughout the simulations with minimum error, information about system load is crucial. Removing that information from the model does not significantly affect the case when load is increasing, but when load decreases, not accounting for its decrease causes oscillation away from the optimum. Thus the load is a "hidden variable" thatwhen exposed -improves adaptive behavior. The oscillations around optimum disappear when full information is used in the decision-making process. Full information is defined as current resource values in both controllers and system load. In the single-operator scenario, even minimum information gives total net value quite close to the theoretical maximum. For the two-operator scenario, the downhill-oscillation effect is significantly worse for the independent optimization-method, which is the interpolation method that does not use information about both operators. The hidden variable effect is stronger when each agent makes decisions without taking other agents into account. The oscillation disappears when we add load information into the decision mechanism. Finally, the oscillations seem to be caused by a combination of the fixed window size and the fact that resources are always changed by a fixed amount. Detailed analysis of dN/dR values suggests that a varying resource increment size based upon the relative magnitudes of dN/dR may solve the oscillation problem without adding additional information. Fig. 1 : 1 Fig. 1: The two-closure model Fig. 2 :Fig. 3 : 23 Fig. 2: Single operator model. Resource usage and net value for the minimum information scenario. w = 3. Fig. 4 :Fig. 5 : 45 Fig. 4: Single operator model. Resource usage and net value for the minimum information scenario. w = 10. Fig. 6 :Fig. 7 : 67 Fig. 6: Results using independent optimization. Fig. 8 : 8 Fig. 8: Results using full knowledge of resources and loads. dN/dR-values when load increases. dN/dR-values when load decreases. Fig. 9 : 9 Fig. 9: The estimation of dN/dR-values in the single operator scenario, minimum information.
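Following up on the suggestion at the end of the conclusions, an adjustable increment could be as small a change as the sketch below; the gain and the step bounds are arbitrary illustrative choices and would need tuning against the simulator.

def adaptive_step(dN_dR_est, gain=2.0, max_step=5.0, min_step=0.25):
    """Scale the resource update by the magnitude of the estimated dN/dR.

    Far from the optimum (large |dN/dR|) the closure moves in large steps;
    close to it the steps shrink, which is the behaviour proposed in the
    conclusions as a fix for the downhill oscillations.
    """
    magnitude = min(max_step, max(min_step, gain * abs(dN_dR_est)))
    return magnitude if dN_dR_est > 0.0 else -magnitude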
23,921
[ "1004065", "1004066" ]
[ "470812", "248373" ]
01489968
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01489968/file/978-3-642-38998-6_5_Chapter.pdf
Vaibhav Bajpai Jürgen Schönwälder Understanding the Impact of Network Infrastructure Changes Using Large-Scale Measurement Platforms des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Understanding the Impact of Network Infrastructure Changes using Large-Scale Measurement Platforms Vaibhav Bajpai and Jürgen Schönwälder Computer Science, Jacobs University Bremen, Germany {v.bajpai,j.schoenwaelder}@jacobs-university.de Abstract. A number of large-scale measurement platforms have emerged in the last few years. These platforms have deployed thousands of probes within the access, backbone networks and at residential gateways. Their primary goal is to measure the performance of broadband access networks and to help regulators sketch better policy decisions. We want to expand the goal further by using large-scale measurement platforms to understand the impact of network infrastructure changes. Research Statement The curiosity to understand the performance of the Internet from the user's vantage point led to the development of techniques to remotely probe broadband access networks. Dischinger et al. in [START_REF] Dischinger | Characterizing Residential Broadband Networks[END_REF], for instance, inject packet trains and use the responses received from residential gateways to infer broadband link characteristics. This led to the development of a number of software-based solutions such as netalyzr [START_REF] Kreibich | Netalyzr: Illuminating the Edge Network[END_REF], that require explicit interactions with the broadband customer. Recently, the requirement for accurate measurements, coupled with efforts initiated by regulators to define data-driven standards, has led to the deployment of a number of large-scale measurement platforms that perform measurements using dedicated hardware probes not only from within ISP networks but also directly from home gateways. In a recent study, sponsered by the FCC, Sundaresan et al. [START_REF] Sundaresan | Broadband Internet Performance: A View from the Gateway[END_REF] have used measurement data from a swarm of deployed SamKnows probes to investigate the throughput and latency of access network links across multiple ISPs in the United States. They have analyzed this data together with data from their own Bismark platform to investigate different traffic shaping policies enforced by ISPs and to understand the bufferbloat phenomenon. The empirical findings of this study have recently been repraised by Canadi et al. in [START_REF] Canadi | Revisiting Broadband Performance[END_REF] where they use crowdsourced data from speedtest.net to compare both results. The primary aim of all these activities is to measure the performance and reliability of broadband access networks and facilitate the regulators with research findings to help them make policy decisions [START_REF] Schulzrinne | Large-Scale Measurement of Broadband Performance: Use Cases, Architecture and Protocol Requirements[END_REF]. Using a large-scale measurement platform we want to take this further and study the impact of network infrastructure changes. We want to define metrics, implement measurement tests and data analysis tools that help us answer questions of the form: -How does the performance of IPv6 compare to that of IPv4 in the real world? -Can we identify a Carrier-Grade NAT (CGN) from a home gateway? -Can we identify multiple layers of NAT from a home gateway? -How much do web services centralize on Content Delivery Network (CDN)s? 
-To what extend does the network experience depend on regionalization? In the past, we have performed an evaluation of IPv6 transitioning technologies to identify how well current applications and protocols interoperate with them [START_REF] Bajpai | Flow-Based Identification of Failures Caused by IPv6 Transition Mechanisms[END_REF]. We are now participating in the Leone1 project, whose primary goal is to define metrics and implement tests that can asses the end-user's Quality of Experience (QoE) from measurements running on SamKnows probes. Proposed Approach SamKnows specializes in the deployment of hardware-based probes that perform measurements to assess the performance of broadband access networks. The probes function by performing active measurements when the user is not aggressively using the network. RIPE Atlas is another independent measurement infrastructure deployed by the RIPE NCC. It consists of hardware probes distributed around the globe that perform RTT and traceroute measurements to preconfigured destinations alongside DNS queries to DNS root servers. Measurement Lab (M-Lab) [START_REF] Dovrolis | Measurement Lab: Overview and an Invitation to the Research Community[END_REF] is an open, distributed platform to deploy Internet measurement tools. The measurement results are stored on Google's infrastructure. The tools vary from measuring TCP throughput and available bandwidth to emulating clients to identify end-user traffic differentiation policies [START_REF] Dischinger | Glasnost: Enabling End Users to Detect Traffic Differentiation[END_REF][START_REF] Kanuparthy | ShaperProbe: End-to-End Detection of ISP Traffic Shaping using Active Methods[END_REF] to performing reverse traceroute lookups from arbitrary destinations [START_REF] Katz-Bassett | Reverse Traceroute[END_REF]. It will only be possible to answer the aforementioned research questions with access to a large-scale measurement platform. As partners of the Leone consortium, we will leverage the infrastructure of our partners. We will define metrics targeted to our research questions and complement them by implementing measurement tests. The developed tests will be deployed in our partner's networks, but may also become part of SamKnows global infrastructure, which has several thousand deployed probes and will continue to grow during the project's lifetime. The collected data will be conglomerated from multiple Measurement Agent (MA)s and analyzed to uncover information needed to help us answer these questions. This requires to develop data analysis algorithms that can integrate data from different data sources such as address block allocations from Regional Internet Registry (RIR)s or prefix and path information from BGP route views. In this pursuit, we have started with a study to assess how the user experience is effected by the deployment of IPv6. Preliminary Results The function getaddrinfo(...) resolves a service name to a list of endpoints in an order that prioritizes an IPv6-upgrade path [START_REF] Thaler | Default Address Selection for Internet Protocol Version 6[END_REF]. The order can dramatically reduce the application's responsiveness where IPv6 connectivity is broken, because the attempt to connect over an IPv4 endpoint will take place only when the IPv6 connection attempt has timed out, which can be in the order of seconds. This degraded user experience can be subverted by implementing the happy eyeballs algorithm [START_REF] Wing | Happy Eyeballs: Success with Dual-Stack Hosts[END_REF]. 
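The ordering itself is easy to inspect on any dual-stacked client. The short Python check below lists the endpoints in the order the resolver returns them; the host name in the example is a placeholder and the helper is only illustrative.

import socket

def resolved_endpoints(host, port=80):
    """Return (family, address) pairs in the order getaddrinfo() yields them.

    Under the default address selection rules cited above, AF_INET6
    endpoints are normally listed before AF_INET ones, which is exactly
    the ordering that stalls applications when IPv6 connectivity is broken.
    """
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return [(family.name, sockaddr[0])
            for family, _type, _proto, _cname, sockaddr in infos]

# Example: print(resolved_endpoints("www.example.org"))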
The algorithm recommends that a host, after resolving the service name, tries a TCP connect(...) to the first endpoint (usually IPv6). However, instead of waiting for a timeout, it only waits for 300ms, after which it must initiate another TCP connect(...) to an endpoint with a different address family and start a competition to pick the one that completes first. We have developed happy, a simple TCP happy eyeballs probing tool that uses TCP connection establishment time as a parameter to measure the algorithm's effectiveness. It uses non-blocking connect(...) calls to concurrently establish connections to all endpoints of a service. In order to develop data-analysis tools, we have prepared an internal test-bed of multiple MAs. The MAs have different flavors of IPv4 and IPv6 connectivity ranging from native IPv4, native IPv6, IPv6 tunnel broker endpoints, Teredo and tunnelled IPv4. We used the top 100 DNS names compiled by he.net2 and ran happy on them. A preliminary result comparing the mean time to establish a TCP connection to each of the services from one of the MA is shown in Fig. 1. The initial results show higher connection times over IPv6. Furthermore, on a Teredo MA, an application will never use IPv6 except when IPv4 connectivity is broken, because the Teredo IPv6 prefix has a low priority in the address selection algorithm [START_REF] Thaler | Default Address Selection for Internet Protocol Version 6[END_REF]. It also appears that several services show very similar performance. These services resolve to a set of endpoints that belong to the same allocated address blocks. Digging through the whois information for each of the endpoints from their RIR demonstrates that major portions of the services map to address blocks owned by organizations such as Google and Akamai Technologies. Conclusion We have performed a preliminary study on how IPv6 deployment may affect the QoE of Internet users. Using a large-scale measurement platform we want to take this further, and define new metrics, measurement tests and data analysis tools that help us understand the impact of network infrastructure changes. Fig. 1 . 1 Fig. 1. Mean time to establish TCP connections to a list of web services. The MA is a virtual machine hosted at greatnet.de. It has IPv4 connectivity via LamdaNet Communications [AS13237] and IPv6 connectivity via Teredo. This work was supported by the European Community's Seventh Framework Programme (FP7/2007-2013) grant no. 317647 (Leone) http://leone-project.eu http://bgp.he.net/ipv6-progress-report.cgi
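To reproduce the flavour of such a measurement without the full tool, the sketch below times a blocking connect() to every resolved endpoint of a service. It deliberately simplifies what happy does (no non-blocking sockets, no racing of address families) and every name in it is illustrative.

import socket
import time

def tcp_connect_times(host, port=80, timeout=3.0):
    """Measure TCP connection establishment time per resolved endpoint.

    Endpoints are tried one after another with a blocking connect(), so
    this only illustrates the metric (connect time per address family),
    not the concurrent happy-eyeballs logic of the real prober.
    """
    results = []
    endpoints = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    for family, stype, proto, _cname, sockaddr in endpoints:
        sock = socket.socket(family, stype, proto)
        sock.settimeout(timeout)
        start = time.monotonic()
        try:
            sock.connect(sockaddr)
            elapsed_ms = (time.monotonic() - start) * 1000.0
            results.append((family.name, sockaddr[0], round(elapsed_ms, 1)))
        except OSError:
            results.append((family.name, sockaddr[0], None))   # unreachable endpoint
        finally:
            sock.close()
    return results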
9,683
[ "1004080" ]
[ "264916", "264916" ]
01489971
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01489971/file/978-3-642-38998-6_8_Chapter.pdf
Franka Schuster Andreas Paul Hartmut König Towards Learning Normality for Anomaly Detection in Industrial Control Networks des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Motivation Currently operators of industrial control systems (ICS) aim at optimally integrating their systems into corporate infrastructures to reduce costs. For this purpose, they started to involve common information and communication technologies (ICT) in their supervisory control and data acquisition (SCADA) systems. This results in the exposition of industrial control networks to common ICT vulnerabilities and indirect connections to public networks. Simultaneously, interoperability between devices of various vendors and different automation levels is driven forward by the introduction of open standards on control and field level of ICS. For instance, Industrial Ethernet is widely used in industrial control networks, although it lacks essential security features, such as authentication and encryption. These trends affect security of industrial networks as well as critical infrastructures, such as power plants. For complementing existing active security measures, we investigate in the use of intrusion detection in this field. In [START_REF] Schuster | A Distributed Intrusion Detection System for Industrial Automation Networks[END_REF] we proposed a network-based intrusion detection system consisting of multiple autonomous components, called SCADA Intrusion Detectors (SCIDs), as illustrated in Figure 1. In contrast to common ICT networks, industrial control networks are usually characterized by a well-defined setup of devices, communication relations, and data exchanged [START_REF] Hadziosmanović | Challenges and Opportunities in Securing Industrial Control Systems[END_REF] [START_REF] Barbosa | Intrusion Detection in SCADA Networks[END_REF]. Consequently, a model of normal traffic can be defined comparatively well. Thus, each detector shall perform anomaly detection based on a model of the individual normal network traffic of its observation domain. In this realm, we investigate in the application of machine learning methods on ICS traffic to enable sophisticated anomaly detection in this field. The main contributions of this paper are: [START_REF] Schuster | A Distributed Intrusion Detection System for Industrial Automation Networks[END_REF] we identify the requirements for intrusion detection in ICS; [START_REF] Hadziosmanović | Challenges and Opportunities in Securing Industrial Control Systems[END_REF] we present an approach for self-learning intrusion detection whose characteristics meet these requirements; (3) on the basis of this approach, we discuss the challenges and future aspects of research for realizing such a tailored self-learning intrusion detection for ICS. The remainder of the paper is organized as follows: In Section 2 we reason the main characteristics that an intrusion detection for ICS should have. After a discussion of the drawbacks of existing works dealing with intrusion detection in this field in Section 3, we introduce an approach in Section 4 that meets all characteristics presented in Section 2. For outlining research aspects involved, we go in Section 5 into the details of each step. We conclude the paper with an outlook on our future research in this field. Problem Definition The principal problem addressed is the development of a learning approach in accordance with the intended intrusion detection. 
Consequently, as a prerequisite the significant criteria of the intrusion detection need to be clarified. In the following the applied detection process is reasoned by the particular characteristics of automation networks. Network-based Analysis. Industrial control systems, especially in critical infrastructures, have to meet real-time constraints and high availability requirements. They rely on the flawless operation of their devices. Especially on field level of industrial control networks, embedded devices, such as programmable logic controllers (PLCs) and peripheral devices, are used. These systems are dedicated to perform only a specific automation task. Due to their limited resources of computing power and memory, it is often impossible to run further applications on these systems. Moreover, vendors as well as operators usually oppose any manipulation of these devices. Against this background, host-based intrusion detection is not applicable for automation devices in practice. Networkbased intrusion detection, in contrast, can be integrated comparatively easily into existing automation networks. Even on control and field level this kind of intrusion detection can be placed without any manipulation of existing devices, e.g., by listening to a mirror port of a network switch. Deep Packet Inspection. In high-speed networks flow-level monitoring is increasingly outracing packet-based analysis because the monitoring systems lack processing power and storage capacity necessary for a deep packet inspection for the corresponding data rates [START_REF] Hofstede | Real-Time and Resilient Intrusion Detection: A Flow-Based Approach[END_REF]. While flow-level analysis is intended to handle huge data rates by abstracting from detailed packet data, this limitation is neither required nor suitable for analyzing automation data due to the following reasons. First, packet-based analysis of automation traffic does not require extensive resources, because the amount of data is far below [START_REF] Barbosa | Difficulties in Modeling SCADA Traffic: A Comparative Analysis[END_REF] what a conventional deep packet inspection can process (>100 Mbit/s). Second, flowbased monitoring generally omits payload analysis which is essential for detecting protocol-specific attacks, such as Man-in-the-Middle attacks on Profinet IO [START_REF] Åkerberg | Exploring Security in PROFINET IO[END_REF] or false data injection in general [START_REF] Liu | False Data Injection Attacks Against State Estimation in Electric Power Grids[END_REF] [START_REF] Gao | On SCADA Control System Command and Response Injection and Intrusion Detection[END_REF]. Without a deep packet inspection it is not possible to distinguish between packet types of the automation protocol used (e.g., read requests from write requests) and thus communication cannot be analyzed regarding anomal packet sequences between automation devices. For these reasons, packet-based analysis is preferred to flow-level analysis. Nevertheless, we also analyze packet data during packet-based analysis that is usually considered in flow-based analysis, i.e., the monitoring of communication relations. N-gram Anomaly Detection. Today the specifications of most Ethernetbased ICS protocols are officially available. With protocol-specific knowledge, an attacker may launch attacks either by interferring normal protocol sequences by additional packets (sequence-based attacks) or by manipulating data of packets within a legitimate sequence (content-based attacks), or both. 
Whereas singlepacket anomaly detection can help to detect content-based attacks, such as false data injection attacks, identification of sequence-based attacks requires to monitor sequences of packets. Examples for such sequence-based attacks are the aforementioned Man-in-the-Middle attack on a Profinet IO setup [START_REF] Åkerberg | Exploring Security in PROFINET IO[END_REF] or Denial-of-Service attacks by packet flooding on the Modbus TCP protocol [START_REF] Nai Fovino | An Experimental Investigation of Malware Attacks on SCADA Systems[END_REF] and DNP3 over TCP [START_REF] Jin | An Event Buffer Flooding Attack in DNP3 Controlled SCADA Systems[END_REF]. Since these attacks can be triggered by packets that satisfy the protocol-specific packet formats and contain ordinary data, these attacks cannot be detected by a single-packet analysis. The feasibility to deploy n-gram analysis in real environments is addressed in [START_REF] Hadziosmanović | N-gram Against the Machine: On the Feasibility of the N-gram Network Analysis for Binary Protocols[END_REF], where the homogeneity of ICS traffic is outlined to be a key issue for a high detection capability and a low rate of false positives. In our approach each gram refers to the result of the deep packet inspection of a network packet. Consequently, an n-gram characterizes a specific sequence of n monitored packets. While learning sequences of packets as n-grams, a packet that, if considered isolated, looks ordinary can be identified as anomal in an unusual packet sequence. Unsupervised learning. Machine learning can be subdivided into supervised and unsupervised learning. In supervised learning the input data for learning are labeled, e.g., assigned to classes. The task is to learn a model to predict the class for new data. In line with this, supervised learning for intrusion detection is done by learning normal data as well as attacks in order to apply misuse detection. The input data of unsupervised learning, in contrast, are unlabeled and the aim is to find a model for a representation of the input data. Associated with this approach is the idea of anomaly detection: a model representing normality is learned from unlabeled normal data to be able to identify attacks as a kind of anomalies. Both approaches have been subject for research in networkbased intrusion detection. If unknown attacks can be expected, however, it has been empirically shown in [START_REF] Laskov | Learning Intrusion Detection: Supervised or Unsupervised?[END_REF] that unsupervised methods are more qualified for practical purposes, because their detection capability is similar to supervised learning, while they do not require the tedious preparation of labelling the input data. Consequently, unsupervised learning is the most promising method also for learning normal traffic for anomaly detection in ICS. Related Work After reasoning why network-based anomaly detection using a learned model of normal traffic is the most suitable kind of intrusion detection for ICS, we will focus the discussion of related work on similar approaches. In [START_REF] Carcano | State-Based Network Intrusion Detection Systems for SCADA Protocols: A Proof of Concept[END_REF] [14] contributions to a state-based IDS are presented. The system performs anomaly detection based on a decision whether the monitored system enters a critical state. For this purpose, a central virtual image of the physical state of the whole system is set up and regularly updated. 
The presented algorithm in [START_REF] Rrushi | Detecting Anomalies in Process Control Networks[END_REF] uses deep packet inspection and estimates if a network packet has anomal effect on a memory variable of an ICS device. This approach, however, requires both detailed understanding of the used ICS network protocol and extensive knowledge about variables stored in the RAM variable memory of all monitored PLCs of the ICS. Other approaches apply Artificial Neural Networks to perform anomaly detection in ICS. The authors of [START_REF] Linda | Neural Network based Intrusion Detection System for Critical Infrastructures[END_REF] also focus on n-gram anomaly detection, where each gram refers to the attribute extraction from a network packet. In [START_REF] Gao | On SCADA Control System Command and Response Injection and Intrusion Detection[END_REF] a backpropagation algorithm is used to build the neural network for a network-based intrusion detection system. Although these works provide relevant contributions, both are based on supervised learning that depends on labeled input data, i.e., requires normal as well as attack data. The anomaly detection process in [START_REF] Yang | Anomaly-based Intrusion Detection for SCADA Systems[END_REF] relies on pattern matching. It combines Autoassociative Kernel Regression for model generation with a binary hypothesis technique called Sequential Probability Ratio Test while detection. The proposed kind of model generation, however, relies on the assumption that security violations are reflected by a change in system usage, which is subject of the detection. This obviously limits the detection capability. Research in flow-based anomaly detection for ICS is motivated in [START_REF] Barbosa | Intrusion Detection in SCADA Networks[END_REF]. The model generation focuses on finding relations between network flows based on clustering and correlation. Here, we argue the limitation of a detection that only focuses on flow data analysis, as we have explained in the previous section. Learning Approach The model generation and anomaly detection of our approach focuses on the following data: -Communication relations: Communication relations refer to network flows. These are sequences of packets from a source to a destination device using a certain protocol. For monitoring communication relations between ICS devices, flow data, i.e., source and destination addresses as well as the used protocol, have to be determined. This also allows to gather flow characteristics, such as byte or packet number transmitted in a certain time interval. -Integrity of ICS application data: Beyond the monitoring of communication relations, the actual data exchanged are subject of the monitoring. For this purpose, the payload of network packets is inspected and protocolspecific data are analyzed regarding anomalies. -Consistency of the packet exchange: Based on identified flows and knowledge about the used protocol, the type of each packet within a flow can be determined. Thus, the order of packets exchanged in a communication relation can be evaluated. The principle for learning this information from the industrial control network is depicted in Figure 2. Initially, network packets are captured and decoded by a deep packet inspection (DPI) sensor that is capable to analyze packets of the used ICS protocols. From each packet a set of attributes, so-called features, is extracted. 
For the later application of mathematical operations in the machine learning stage, this set of features, i.e., list of original packet attributes, are mapped to a vector of real numbers (feature vector ). In this process of feature conversion a suitable numerical representation of the feature values has to be found with respect to the feature types (e.g., categorical or continuous) and dependencies between the features. In the next step the current feature vector is aggregated with the feature vectors of the n -1 previously monitored network packets. The resulting n-gram represents an input instance for the machine learning algorithm applied. Finally all information mentioned above is concentrated within the n-grams as input for the machine learning procedure. The approach can be applied in two ways: (1) either learning is realized protocol-related, so that for each ICS protocol used in the network a separate model of normality is learned based on a protocol-specific set of features, or (2) a common feature set for all supported ICS protocols is defined and used to learn a shared model. The comparison of both strategies is an interesting issue for future investigation, which represents aspect 1 for future research. In the following we will denote further aspects using consecutive numbering. The discussion in this paper, however, focuses on the first strategy, since it promises a more tailored learning and anomaly detection for the respective protocols. Challenges of Learning for ICS Intrusion Detection Development and implementation of a self-learning anomaly detection for ICS induces a set of challenges. We address these challenges based on the steps of the approach introduced in Section 4. If necessary we exemplify our explanations using the example of the ICS protocol Profinet IO. Understanding Automation Protocols The development of a deep packet inspection for application on control and field level of industrial control networks requires to understand the protocols spoken on these levels. The aim is to extract the data of each network packet and to map them to a representation for further analysis. For such a packet-based analysis, a protocol-specific decoding of network packets has to be implemented. Here, it also has to be regarded that different ICS protocols expect different protocol stacks for transportation. For instance, whereas Profinet IO bypasses the network and transport layer, Modbus TCP requires the regular TCP/IP stack. Thus, the specification of the respective protocol has to be analyzed in advance regarding transportation stack and message formats. This work has to be done individually for each ICS protocol that the intrusion detection shall be capable to support. Nevertheless, the effort for realizing such a protocol analysis is well spent, because it also allows a vulnerability assessment and the derivation of possible attacks. This is also fruitful for a later evaluation of the implemented anomaly detection. Since traffic capturing and decoding is an essential feature of packet-based intrusion detection systems in general, existing solutions [18] can be extended by the respective protocol knowledge to appropriate sensors. Feature Selection The challenging part in this step is to decide which ICS protocol data are worth and suitable to be learned. 
Since our approach shall not be limited to learning the traffic of a certain protocol, but rather be applicable for a wide range of Ethernetbased ICS protocols, we abstract from the various protocol data formats and focus our explanation here on data that are usually part of Ethernet-based ICS protocols: -source and destination addresses -unique identifiers of sender and receiver (e.g., MAC addresses), -protocol type -the identifier of the ICS protocol; in case of Profinet value 0x8892 encoded in the Ethernet frame's EtherType field, -packet type -the type of protocol-specific packet that the Ethernet frame conveys; in case of Profinet this can be for example a DCP request or DCP response packet for device identification or an alarm packet [START_REF] Neumann | Ethernet-based Real-time Communications with PROFINET IO[END_REF], -packet data -the ICS application data, e.g., parameters for cyclic control. From each monitored frame at the respective network interface, the deep packet inspection sensor constructs a protocol-specific object containing the mentioned information in form of a set of features. While this object in detail depends on the protocol-specific fields, here a generic high-level description of this object is chosen, which in the following is referred to as feature object. struct featureObj { timestamp tstamp; (1) identifier source; (2) identifier destination; [START_REF] Barbosa | Intrusion Detection in SCADA Networks[END_REF] enum event type type; (4) union event data event; (5) } The feature tstamp holds a value used to recover the temporal order of packets, whereas source and destination encode addresses of the sending device and the one receiving the packet. The categorical feature type defines one of the standard packet types of the ICS protocol. The remaining feature event contains all packet-type-specific data, e.g., parameters for control or feedback data transmission between ICS devices. Features (2-3) help to identify (unidirectional) flows and (bidirectional) communication relations between automation devices. Based on features (1-4), the concrete sequence of packet types exchanged between these devices can be monitored. This is sufficient for detecting, for instance, the Man-in-the-Middle attack presented in [START_REF] Åkerberg | Exploring Security in PROFINET IO[END_REF] as anomaly from learned normal sequences. By the use of detailled features contained in the complex feature [START_REF] Barbosa | Difficulties in Modeling SCADA Traffic: A Comparative Analysis[END_REF], an anomaly detection based on learned normal operation data can be realized. If, for instance, a parameter like a valve pressure measurement contained in (5) normally varies within the boundaries of interval [a, b]; a, b ∈ R then a value x ∈ R with (x << a) or (x >> b) can be detected as anomaly from the learned normal interval. Feature Conversion for Learning Machine learning deals with finding characteristics in provided training data and generalizing from these characteristics for evaluation of new, unseen data instances in test data. For this purpose, machine learning relies on the use of mathematical constructs and algorithms. Hence, data instances for learning have to be converted into numerical representations and summarized in a feature vector. Finding an optimal representation for the input data as numerical data is actually a key issue for successful application of machine learning in general. The conversion applied depends on the type of feature. 
Each feature that has been extracted from a packet is either of categorical, identifying, or continuous type. Table 1 summarizes the attribution and provides some example values for features contained in the introduced feature object. The process of converting a set of features into a numerical representation basically consists of two steps: (1) mapping each feature to a value in real space, i.e., a vector in R n ; n ∈ N, and (2) scaling the real values of features to lie in similar range. Scaling has two advantages for learning: First, if all features are represented by real values in similar range, no features in greater numerical ranges can dominate those in smaller numerical ranges while the application of mathematical operations during learning. Second, numerical problems during calculation are avoided. For instance, some machine learning methods [START_REF] Schölkopf | Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond[END_REF] apply kernels that depend on the inner products of feature vectors which might be difficult to determine in case of highly varying feature spaces. Mapping Categorical Features. The main criterion of a categorical feature is the dimension n ∈ N of possible categories. The conversion can either be realized by just defining a real number for each category (e.g., integers 1 to n) or by constructing an n-dimensional vector whose values at position i ∈ {1 . . . n} are defined as v(i) = 1, feature value is of the k-th category 0, otherwise ; k ∈ {1 . . . n} . (1) In terms of the introduced feature object, if feature type can be one of the values {read request, write request, alarm}, then conversion of feature value read request would result in (1, 0, 0), feature value alarm correspondingly in (0, 0, 1). Comparing both ways of converting a categorical ICS feature in the context of the machine learning method applied is another relevant issue of study (aspect 2). Mapping Identifying Features. These are features holding addresses or other identifiers, such as device names, that are a typical aid for ICS operators to distinguish and locate automation devices. Learning concrete values of an identifying feature, e.g., the integer value of a MAC address' byte sequence, is not useful. It would result in a model of normality, in which new MAC addresses with similar integer values like normal addresses would be also considered as normal. Even they, however, explicitely identify a new device, which is, in terms of homogenous ICS traffic, an anomal event. Instead, identifying features have to be converted in a way that the applied learning method results in a model that only characterizes the identifiers as normal that have explicitely seen during learning phase. Thus, a better way of converting an identifying feature is to allow a fixed maximum number n ∈ N of devices in the monitoring domain and to store each seen identifier in the training data in a list of length n. Then, conversion of a specific identifier is realized like conversion of a categorical feature, where the list position of the identifier is handled like a category (see Formula 1). For illustration: If in a monitored network packet in training or test data the identifying feature source contains MAC address y while MAC addresses (x, y, z) have already been seen, then y would be converted to (0, 1, 0). Mapping Continuous Features. In the realm of ICS, most operation parameters are provided as real numbers. 
This, for instance, can be a measured value as part of a feedback packet from a peripheral device to a PLC. Such a parameter would be part of complex feature event. In contrast to categorical and identifying features, the concrete value of this feature has to be learned in order to identify anomalies in ICS application data. Some machine learning methods rely on discrete input data. Consequently, continuous features have to be discretized for these algorithms. In [START_REF] Dougherty | Supervised and Unsupervised Discretization of Continuous Features[END_REF] a comprehensive overview about existing approaches and an empirical comparison of discretization for continuous attributes is provided. In terms of this work, unsupervised discretization methods are suitable for our approach. More recent discussions in this field can be found in [START_REF] Liu | Discretization: An Enabling Technique[END_REF] and [START_REF] Peng | Study on Comparison of Discretization Methods[END_REF]. If continuous features in the form of real numbers are accepted as input for the applied machine learning method (e.g., methods in [START_REF] Schölkopf | Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond[END_REF]) the mapping step during conversion can obviously be omitted. Here, scaling is crucial for successful learning. Nevertheless, discretization may also be supportive of learning even if the learning algorithm would accept continuous data. This, however, can only be evaluated as a kind of preprocessing in strong connection with the concrete learning algorithm applied (aspect 3). Scaling. In the presented ways for converting categorical and identifying attributes the kind of mapping applied implicitely involves also scaling, since the length of the resulting vector is always maximum 1 (It can also be 0 in case the value is not defined for the categorical respectively identifying feature, which results in conversion to a null vector ). If continuous features are not discretized for learning, they have at least to be scaled. This involves the transformation of real values into a smaller interval. It can be realized by defining the minimum and maximum value accepted for this feature and linear scaling to a smaller interval, such as [0, 1] or [-1, +1]. The same scaling method, however, should be applied for the respective feature during training and anomaly detection phase. For example, if the feature param1 in abstract feature event has been scaled from [0, 10000] to [0, 1] during training, then during anomaly detection a value 3659 for param1 has to be scaled to 0.3659. Finding the right scaling approach, i.e., scaling range for each feature and ratio across features, is a further aspect of study to optimize self-learning anomaly detection for ICS (aspect 4). Feature Dependencies. It might be the case that there are dependencies between features for some ICS protocols which have to be explicitly regarded during conversion. One approach for handling dependencies is to construct a complex feature value from values of depending features and to learn this aggregated feature value instead of the individual ones. For example, if the range of feature param3 in complex feature event by protocol specification depends on the value of feature type, then this could be expressed as follows during conversion: A simple method is to map param3 to a real number, so that its concrete value only affects the less significant digits, whereas the value of type dictates the more significant ones. 
Thus, instead of just scaling, real value 7 would be mapped beforehand to real number x007 where x represents a digit based on the k-th category for categorical value type. As illustrated by this simple example, exploring sophisticated dependency-based conversion methods is a further subject of investigation (aspect 5). Evaluation of Learning Algorithms Besides the optimal choice of parameter n (aspect 6) also the choice of the unsupervised learning algorithm for generating the normal traffic model, based on the n-grams of converted features, is the most relevant aspect towards the implementation of a sophisticated self-learning anomaly detection for ICS (aspect 7). We plan to evaluate the algorithms regarding their behaviour on ICS data: -efficiency of the algorithm, i.e, the number of learning examples necessary for a certain detection accuracy on new, unseen data, -stability of the learned model to variations of input parameters, -scaling of the learning effort with the number of training instances and input features. In this context, it will be interesting to find out whether a specific learning algorithm can distinctly outperform the other ones or if the learning success will be similar among different algorithms. Another aspect of study will be the prevention of overfitting in the learning (aspect 8). Final Remarks In this work we have presented an approach for learning normal ICS traffic to support anomaly-based intrusion detection in this field. In contrast to existing methods, our approach combines the learning of communication relations, ICS operation data as well as exchanged packet sequences. By explaining the steps of the approach, we identified eight aspects that affect the quality of learning the addressed information. In general, the application of machine learning techniques for ICS security is a very promising field. For successfully applying machine learning and anomaly detection in ICS networks, however, the identified aspects for optimizing the learning have to be explicitly evaluated with regard to ICS traffic characteristics. We address ourselves to this task. So we plan to apply several machine learning algorithms as part of our SCADA intrusion detector in order to investigate in the identified aspects for proper learning. Analysis will first focus on monitoring a Profinet IO network. For this purpose, we have implemented a Profinet-specific deep packet inspection sensor. In [START_REF] Paul | Towards the Protection of Industrial Control Systems -Conclusions of a Vulnerability Analysis of Profinet IO[END_REF] we have identified numerous vulnerabilities and possible attacks on the protocol which will help us to prove the detection accuracy of our approach. Fig. 1 : 1 Fig. 1: Multiple SCIDs monitoring an example ICS Fig. 2 : 2 Fig. 2: Schematic depiction of the learning approach Table 1 : 1 Feature types and example values Feature Type Example values tstamp continuous 1253306964; 1361483071 source identifying ac:de:48:00:00:80; 00:80:41:ae:fd:7e destination identifying d6:6c:51:84:af:bc; 84:2b:2b:92:41:a8 type categorical read request; write request; alarm event (complex) (param1=3659, param2=0.85, param3=7)
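To make the conversion and aggregation steps above more tangible, the Python sketch below strings them together for a featureObj-like record; the category list, the identifier limit, the scaling bounds and the list-based n-gram window are assumptions of this illustration, not the authors' implementation.

from collections import deque

PACKET_TYPES = ["read request", "write request", "alarm"]   # assumed category set
MAX_DEVICES = 64                                            # assumed identifier list length
seen_devices = []                                           # identifiers seen while training

def one_hot(index, length):
    """Formula (1): 1 at the matching position, 0 elsewhere (null vector if unknown)."""
    vec = [0.0] * length
    if 0 <= index < length:
        vec[index] = 1.0
    return vec

def encode_identifier(identifier):
    """Map an identifying feature (e.g. a MAC address) to its list position."""
    if identifier not in seen_devices and len(seen_devices) < MAX_DEVICES:
        seen_devices.append(identifier)
    index = seen_devices.index(identifier) if identifier in seen_devices else -1
    return one_hot(index, MAX_DEVICES)

def scale(value, lo, hi):
    """Linear scaling of a continuous feature to [0, 1]."""
    return (min(max(value, lo), hi) - lo) / (hi - lo)

def to_feature_vector(pkt):
    """Convert one featureObj-like dict into a numeric feature vector."""
    return (encode_identifier(pkt["source"])
            + encode_identifier(pkt["destination"])
            + one_hot(PACKET_TYPES.index(pkt["type"]), len(PACKET_TYPES))
            + [scale(pkt["event"]["param1"], 0.0, 10000.0)])

window = deque(maxlen=3)                                    # n = 3 packets per n-gram

def next_ngram(pkt):
    """Aggregate the current vector with the n-1 previously seen ones."""
    window.append(to_feature_vector(pkt))
    if len(window) < window.maxlen:
        return None                                         # not enough history yet
    return [value for vec in window for value in vec]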
31,184
[ "1004084", "1004085", "1004086" ]
[ "454903", "454903", "454903" ]
01489972
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01489972/file/978-3-642-38998-6_9_Chapter.pdf
Michal Kováčik email: [email protected] Michal Kajan email: [email protected] Martin Žádník Detecting IP Spoofing by Modelling History of IP Address Entry Points ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction The source IP address spoofing is inherently used during attempts to hijack network sessions [START_REF] Wanner | Session Hijacking in Windows Networks[END_REF] or to scan a target in a stealth mode [START_REF] Lyon | Nmap Network Scanning. Insecure[END_REF]. But most commonly, spoofing plays an important role during denial-of-service attacks (DoS) or distributed denial-of-service attacks (DDoS). The goal of these attacks is to exhaust network or host resources by flooding the victim with an overwhelming number of packets. As a result, the service provided by the victim becomes unavailable. The spoofing is used to: generate large amount of new connections, -hide the true source identity and render filtering the source of an attack very hard, -amplify and/or reflect the attack to the victim. Therefore, there should be counter-measures to prevent IP spoofing or at least procedures to trace back the true source. Although many research has been done in this area none of the proposed solutions became deployed widely. This is of no surprise since the spoofing might be largely mitigated by installing ingress filtering in the stub networks, yet this is rather an exception than a rule. In our work, we propose an algorithm to detect occurrence of the flows with spoofed IP addresses. We consider network operator (Tier-1, Tier-2) with peering interconnections to other large networks. The scheme works upon Net-Flow v5 [START_REF] Systems | NetFlow Services Solutions Guide[END_REF] data collected from the entry points in the operator network. The scheme is based on the following assumptions: there is a set of specific source IP addresses that should not appear in the packets entering the network, -large portion of the communication is symmetric (i.e. it takes the same path from source to destination and vice versa), -network traffic originating from a certain network enters the observed network via stable set of points, -the number of new source IP addresses is stable. It is straight-forward to build a classifier based on the first assumption. Unfortunately, the assumption covers only a limited set of traffic with potentially spoofed addresses. The second assumption allows to verify legitimate traffic but cannot detect spoofing itself. The third assumption allows to report on which link and from which source prefix the spoofing occurs whereas the last assumption allows to report the destination prefix of the traffic with the spoofed source. The proposed scheme provides several outputs which may serve as an additional information for anomaly or attack detection methods as well as a basis for filtering decisions or post mortem forensics. The rest of the paper is organized as follows. Section 2 discusses related work on IP spoofing prevention and traceback. Section 3 proposes an algorithm for IP spoofing detection with the use of NetFlow data. Section 4 provides information about the deployment and achieved results. The last section sums up the paper and discusses further research directions. Related work As previously mentioned, IP spoofing plays an important role in some types of attacks and a lot of research interest has been paid to study methods for preventing or tracing back spoofed IP addresses. 
A basic preventive method suggests an ingress filtering [START_REF] Ferguson | Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing[END_REF] in customer or source ISP networks where the pool of legitimate source IP addresses is well known. In order to allow filtering in transit or destination networks, the information about legitimate source IP addresses must be passed from a source towards the destination networks. In [START_REF] Li | Learning the valid incoming direction of IP packets[END_REF], authors propose a new protocol to spread this information to routers along the path which may build a filtering table for each ingress interface accordingly. The SPM [START_REF] Bremler-Barr | Spoofing prevention method[END_REF], which is an alternative method designing a scheme in which participants (involved autonomous systems -AS) authenticate their packets by a source AS key. In fact, a host may spoof an IP address from within the same subnet in all these schemes. To address this problem, Shen et al. [START_REF] Shen | A two-level source address spoofing prevention based on automatic signature and verification mechanism[END_REF] extended SPM into the intranet where a host tags its packets to authenticate them for a network gateway. Xie et al. [START_REF] Xie | An Authentication Based Source Address Spoofing Prevention Method Deployed in IPv6 Edge Network[END_REF] proposed authentication of a host connecting to the Internet by an established authentication protocol. TCP SYN cookies, improved in [START_REF] Zuquete | Improving the functionality of SYN cookies[END_REF], may be considered as an IP-spoofing prevention although the method works only for TCP SYN flood attacks. A seminal work of Savage et al. [START_REF] Savage | Practical network support for IP traceback[END_REF] (Probabilistic Packet Marking -PPM) started out research in the field of packet marking for tracing back the source of spoofed packets. These marking methods aim at encoding ID of routers along the path into the packet. Other works extended probabilistic packet marking by authentication and upstream router map [START_REF] Song | Advanced and authenticated marking schemes for IP traceback[END_REF] or by dynamic marking probability [START_REF] Peng | Adjusted Probabilistic Packet Marking for IP Traceback[END_REF]. A similar approach was proposed in [START_REF] Belenky | IP traceback with deterministic packet marking[END_REF], but instead of the full path only an address of the interface at the first router is encoded into the packet. Strayer et al. [START_REF] Strayer | SPIE-IPv6: Single IPv6 Packet Traceback[END_REF] developed an alternative approach. Rather than to store path in the packets, a router along the path stores information about a packet seen in its Bloom filter. The traceback is performed by querying relevant routers if their local Bloom filter contains the packet. Other method based on ICMP packets was proposed in [START_REF] Dan | On Design and Evaluation of "Intention-Driven" ICMP Traceback[END_REF]. The routers along the path generate with some probability an ICMP packet containing the previous and the next hop of a packet. Such information is sent to the destination which can eventually recover the whole path of the spoofed packets. Although the previously stated methods are related to our work we do not aim at preventing nor tracing back spoofed packets. 
Our goal is to detect spoofed packets, identify their destination (potential victim or reflector) and a set of links the spoofed packets are entering the network. The detection of spoofed packets has only been researched in [START_REF] Jin | Hop-count filtering: an effective defense against spoofed DDoS traffic[END_REF][START_REF] Wang | Defense against spoofed IP traffic using hop-count filtering[END_REF][START_REF] Peng | Detecting distributed denial of service attacks using source ip address monitoring[END_REF]. The first two detection methods are based on detecting variances in TTL (Time To Live). In [START_REF] Wang | Defense against spoofed IP traffic using hop-count filtering[END_REF] the authors have discussed TTL issues which constitute a problematic estimation of initial TTL (consider NAT, change of routes, etc.) and a possibility to spoof TTL value. In [START_REF] Peng | Detecting distributed denial of service attacks using source ip address monitoring[END_REF] the authors suggest to detect spoofing periods based on a significant increase of new source IP addresses. Such an algorithm cannot detect spoofing used during reflector attacks since there is only one new spoofed source IP address, i.e., the address of the final victim. We utilize this algorithm as a part of our scheme. We build our detection scheme on the network processes that are out of the control of a spoofing source. As a result, the scheme is able to work upon Net-Flow v5 records. NetFlow v5 is widely spread monitoring protocol supported by routers and other stand alone monitoring probes and exporters [START_REF]INVEA-TECH: Flowmon[END_REF]5]. Unlike preventive or tracing methods, our detection method does not require any modification of a packet, no specific protocol nor any modification to the routers. The scheme only assumes that each or the majority of border links of the target network is monitored via NetFlow v5. Detection algorithm The core idea of the algorithm is to detect the source IP addresses that are not expected to appear in arriving packets on a particular link (entry point in the destination network). The detection is based upon filtering and modeling of the arriving traffic (a new set of flow records is processed periodically every 5 minutes). The proposed scheme is depicted on Figure 1. The algorithm starts with the filtering of source IP addresses that should never appear at the entry points. These addresses fall either into a set of so called bogon prefixes [START_REF]The bogon reference[END_REF] (e.g., private networks, loopback, etc.) or into a set of prefixes belonging to the destination network itself (in our case approx. 60 prefixes belonging to /16-/24 networks). The intuition is that the randomly spoofed source IP addresses may fall into these prefixes. In such a case, the flow is filtered out and reported as spoofed. The next step of the algorithm is to reduce the set of loaded flows by removing those which are not important for the detection process. In our case, the transit flows are excluded. These flows traverse via the monitored network but their source and destination addresses belong to other networks. Therefore these flows are out of the detection scope. The transit flow can be recognized easily (the same flow key appears in the incoming as well as in the outgoing traffic) and it is filtered out. Next, we assume that most of the traffic is transferred over a symmetric path. To this end, the symmetric filter builds a routing model by observing outgoing flows on each link. 
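A minimal sketch of such a routing model is given below; the flow-record fields, the /24 truncation helper and the dictionary-of-sets layout are assumptions of this illustration rather than the detector's actual data structures, and record expiry is omitted for brevity.

from collections import defaultdict
from ipaddress import ip_network

def prefix24(address):
    """Truncate an IPv4 address to its /24 prefix."""
    return str(ip_network(address + "/24", strict=False))

class SymmetricFilter:
    """Per-link sets of source prefixes expected on the reverse path."""

    def __init__(self):
        self.expected = defaultdict(set)        # entry link -> set of /24 prefixes

    def learn_outgoing(self, flow):
        # The destination of an outgoing flow becomes an expected source
        # prefix for incoming traffic on the same link.
        self.expected[flow["link"]].add(prefix24(flow["dst"]))

    def is_symmetric(self, flow):
        # Incoming flows matched by the model are treated as legitimate.
        return prefix24(flow["src"]) in self.expected[flow["link"]]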
If the traffic with a particular destination is routed over a particular link it is very likely that the same link is used for the reverse path. The model contains a set of the source IP prefixes derived from the destination IP addresses of the outgoing traffic for each link. It is necessary to process all outgoing flows prior to the incoming flows. The incoming flows verified by the model are considered to be legitimate (and are filtered out) whereas the incoming flows taking the asymmetric path may be spoofed and must be classified by the next filter. Also, it may not be sufficient to model expected source prefixes due to a following issue. A path from an external point A to some internal points B and C may differ since these points may be located in different parts of the network. Table 1 shows a histogram of the number of links the source prefixes is observed on. It may be observed that the majority of prefixes occur only on a single link which offers the filter a potential for successful filtering based only on the source prefixes. Further, the flows are classified by the history-based filter. The filter builds a model of arriving source prefixes (we discuss the length of the prefix in Section 4). There are several issues that must be taken into account when the filter utilizes the model to filter out the spoofed flows. It must deal with load balancing (the source prefix may occur on multiple links at the same time) and route flapping (the link for the source prefix may change frequently). These issues are addressed by the parameters and the characteristics of the model. For each source prefix, the model stores the distribution among the links, exponentially weighted moving average of the flows belonging to each prefix and the time of the last update. Table 2 depicts an example of a single record (columns 2 -4) for a sequence of intervals. The number of received flows in the current interval is represented by the last column. The spoofing is detected if there is a deviance from both characteristics -a change of the distribution and a large increase of the received flows (as depicted in the second row in Table 2). A logical expression 1 describes the detection. where a t corresponds to the average number of flows at time interval t and e i t is the distribution of the prefix over the links i = 0 . . . L. The coefficient k corresponds to the increased ratio of the average number of flows whereas the threshold H corresponds to the average per-link distribution deviance and L is the number of links. If the spoofing is detected the matching prefix and the link with the largest increase in received flows is reported. The last model is based on [START_REF] Peng | Detecting distributed denial of service attacks using source ip address monitoring[END_REF]. It receives all flows entering the network except the transit flows. The model tracks the number of received flows per each destination (CESNET) prefix. We follow the implementation in [START_REF] Peng | Detecting distributed denial of service attacks using source ip address monitoring[END_REF] and use CUSUM to detect the increased number of new source prefixes. The detector triggers an alert if the threshold is reached and the triggered destination prefix is reported. a t > ka t-1 ∧ L i=0 |e i t -e i t-1 |/L > H, (1) Evaluation The monitoring data all originate from the CESNET network. CESNET is connected to other AS with seven links. All these seven links are monitored and reported NetFlow data are processed. 
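Before turning to the data, it is worth noting that the per-prefix test of Expression (1) reduces to a few lines of code. The sketch below is one reading of it with an assumed record layout; the defaults k = 3 and H = 0.4 simply mirror the thresholds chosen later in the evaluation.

def spoofing_suspected(prev, curr, num_links, k=3.0, H=0.4):
    """Expression (1): flag a source prefix whose flow volume jumps while
    its distribution over the entry links changes at the same time.

    prev and curr are consecutive per-interval records for one prefix:
      {"avg": EWMA of received flows, "dist": [share on link 0 .. L-1]}
    """
    volume_jump = curr["avg"] > k * prev["avg"]
    dist_change = sum(abs(c - p) for c, p in zip(curr["dist"], prev["dist"])) / num_links
    return volume_jump and dist_change > H

# Illustrative example with three entry points
prev = {"avg": 1000.0, "dist": [1.0, 0.0, 0.0]}
curr = {"avg": 3500.0, "dist": [0.2, 0.8, 0.0]}
print(spoofing_suspected(prev, curr, num_links=3))   # True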
Evaluation
The monitoring data all originate from the CESNET network. CESNET is connected to other autonomous systems via seven links. All seven links are monitored, and the reported NetFlow data are processed. In this study we utilize data from the period 11.3.2013 00:00 to 17.3.2013 23:59. Each 5-minute interval contains approximately 19 million flows collected from all entry points. There are no DoS or DDoS attacks in the data set reported by our analysis tools or by Warden. The detection results of the first filter (detecting bogon and CESNET prefixes) are depicted in Figures 2 and 3, which show the occurrence of spoofed addresses from the specific ranges in the inbound traffic. The large number of detected bogons confirms the lack of ingress filtering in other networks. The most often reported prefixes belong to the private network ranges. Spoofing is detected if the observed value differs from the average (EWMA) by three times the standard deviation, or if the value exceeds a fixed threshold of 13000 flows. There are rare cases when the CESNET prefixes appear in the source addresses of arriving packets. The investigation of these cases revealed misconfigurations of external routers which, upon arrival of particular packets, return them to their source via the default path. Due to the low number of these anomalies, the detection threshold for CESNET prefixes is relatively low. We set the fixed threshold to thirty detected spoofed addresses, which is twice the maximum value observed per 5-minute interval over the long term (a week). Hence the spoofing detector tolerates these anomalies. Subsequently, the transit filter reduces the processed set of flows by 25% on average. The results of the symmetric filter are presented in Figure 4. The model reaches its stable state after it overcomes a learning phase during the first several intervals. The large portion of symmetric communication in our traffic allows the filter to mark a large number of flows (approx. 85%) as legitimate and filter them out. The number of records stored in the model depends on the prefix length (see Table 3). In all our experiments we utilize the /24 prefix length to achieve moderate memory requirements and low processing overhead. Additionally, the number of records in the symmetric as well as in the history-based model depends on the length of the considered history (see Fig. 5). We keep all records that are no older than 60 minutes. It can be observed that the model size stabilizes after an initial growth and decrease. The decrease is caused by the increased number of new flows arriving in the first interval, when the symmetric filter has not yet built its model. The number of asymmetric flows remains too high for any manual inspection. To this end, the history-based filter matches the flows against the derived model of arriving source IP prefixes. Based on observations of the collected data, we set the detection thresholds to H = 40% and k = 3. Moreover, we introduce an activity threshold for each prefix. This threshold disables matching of the traffic against prefixes that are not yet mature enough to be trusted. We have found that even an activity threshold of 5 minutes decreases the number of false positives to zero. Such a setup is aligned with the standard behavior of our network traffic. Figure 6 depicts the situation when the activity threshold is not utilized. After a learning phase in the first few intervals, the detector reports only a small number of source prefixes. The second peak is caused by the expiration of the prefixes learned during the first interval, when the symmetric filter had not yet filtered out a large portion of otherwise symmetric flows.
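The new-source-prefix model evaluated next uses a CUSUM change detector in the spirit of [START_REF] Peng | Detecting distributed denial of service attacks using source ip address monitoring[END_REF]. A minimal Python sketch of one simple variant is given below; the drift, the alert threshold and the adaptation rate are illustrative assumptions, not the exact parameterisation of our implementation.

class CusumDetector:
    """One detector instance per destination (CESNET) prefix."""
    def __init__(self, drift=0.0, threshold=1000.0):
        self.drift = drift          # assumed allowance subtracted at each step
        self.threshold = threshold  # assumed alert level
        self.mean = None            # running estimate of the normal rate
        self.s = 0.0                # cumulative sum of positive deviations

    def update(self, new_prefixes):
        # new_prefixes = number of previously unseen source prefixes in this interval
        if self.mean is None:
            self.mean = float(new_prefixes)
            return False
        # accumulate only increases above the expected rate (plus drift)
        self.s = max(0.0, self.s + new_prefixes - self.mean - self.drift)
        alert = self.s > self.threshold
        # slowly adapt the estimate of the normal rate
        self.mean = 0.9 * self.mean + 0.1 * new_prefixes
        return alert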
At last, we evaluate the model of new source IP prefixes. The detector runs in parallel to the symmetric and the history-based detectors. The number of new source IP prefixes per selected destination prefix is depicted in Figure 7. Of course, the number of new prefixes decreases with each new interval as the model learns new addresses. The average value stabilizes at approx. 130 000 across all destination prefixes in total. A learned prefix is removed from the model after a week of inactivity. The CUSUM detects an increase of new prefixes with respect to the average value. The outcome of the detector is binary: either the destination prefix receives a significant number of new source prefixes or it does not. The filter detectors are connected in a pipeline in order to reduce the number of flows that must be inspected by the history-based filter. Although the number of alerts is relatively small, we do not expect an operator to inspect the corresponding flows. Rather, we envision that the flows are inspected only in cases when more than one detector agrees that the interval contains flows with spoofed IP addresses.
Conclusion
The paper proposed a new scheme to detect source IP address spoofing. The scheme detects spoofing by analysing NetFlow data collected at the entry points of the network. The scheme is based on four assumptions related to the symptoms of IP spoofing in the network traffic. An offline evaluation of the scheme was done on real data collected from the CESNET2 network. The results showed the effectiveness of each assumption. The experiments with the parameters of the algorithm revealed the behavior of the detection scheme and provided a hint on setting up the scheme in other networks. Our future research will focus on proposing further filters to improve the accuracy of the whole scheme. For example, if we used the IPFIX protocol as the input data, it would be possible to use TTL to create another filter based on [START_REF] Jin | Hop-count filtering: an effective defense against spoofed DDoS traffic[END_REF]. We also work on an NfSen plugin that implements the proposed detection scheme, and we plan to run it online.
Fig. 1. Algorithm scheme.
Fig. 2. Number of flows detected by the bogon prefix filter.
Fig. 3. Number of flows detected by the CESNET filter.
Fig. 4. Number of flows detected by the symmetric filter as potentially spoofed.
Fig. 5. Network model size vs. length of kept history.
Fig. 6. Number of source prefixes reported by the history-based filter.
Fig. 7. Number of source prefixes reported per destination prefix by the model of new source prefixes.
Table 1. Histogram of the number of links an incoming source prefix is observed on.
Number of entry points | 1 | 2 | 3 | 4
Number of source prefixes (/24) | 758103 | 33697 | 48 | 1
Table 2. History model for a single prefix containing the distribution for 3 entry points (e1, e2, e3), the average number of flows (a) and the number of flows received in the current interval (r).
Time [5 min] | e1 [%] | e2 [%] | e3 [%] | a [flows] | r [flows]
1 | 100 | 0 | 0 | 1000 | 1000
2 | 20 | 80 | 0 | 1750 | 4000
3 | 100 | 0 | 0 | 1562 | 1000
4 | 0 | 100 | 0 | 1421 | 1000
5 | 50 | 50 | 0 | 1316 | 1000
Table 3. Network model size vs. prefix length, based on a single 5-minute interval.
Prefix length | /8 | /16 | /24 | /32
Records in model | 4 556 | 227 074 | 1 442 638 | 2 420 396
Acknowledgement
This work was supported by the research programme MSM 0021630528, the grant BUT FIT-S-11-1, the grant VG20102015022 and the IT4Innovations Centre of Excellence CZ.1.05/1.1.00/02.0070.
21,228
[ "1004087", "1004088", "994069" ]
[ "160209", "160209", "458961" ]
01490174
en
[ "math" ]
2024/03/04 23:41:50
2017
https://hal.science/hal-01490174/file/new-cdf.pdf
Christophe Chesneau email: [email protected] Hassan S Bakouch email: [email protected] A new cumulative distribution function based on m existing ones Keywords: Cumulative distribution function transformations, Probability density function, Statistical distributions, Hazard rate function. 2000 MSC: 60E05, 62E15
In this note, we present a new cumulative distribution function using sums and products of m existing cumulative distribution functions. Properties of such a function are justified, and using it we propose many distributions that exhibit various shapes for their probability density and hazard rate functions.
Introduction
In the literature, several transformations exist to obtain a new cumulative distribution function (cdf) from other well-known cdf(s). The most famous of them is the power transformation introduced by [START_REF] Gupta | Modeling failure time data by Lehman alternatives[END_REF]. Using a cdf F(x), the considered cdf is G(x) = (F(x))^α, α ≥ 1. For extensions and applications, see [START_REF] Gupta | Generalized Exponential Distributions[END_REF], [START_REF] Nadarajah | The Exponentiated Type Distributions[END_REF] and [START_REF] Nadarajah | The exponentiated Gumbel distribution with climate application[END_REF], and the references therein. Another popular transformation is the quadratic rank transmutation map (QRTM) introduced by [START_REF] Shaw | The alchemy of probability distributions: beyond Gram-Charlier expansions, and a skew-kurtotic-normal distribution from a rank transmutation map[END_REF], where the considered cdf is G(x) = (1 + λ)F(x) - λ(F(x))^2, λ ∈ [-1, 1]. Recent developments can be found in [START_REF] Aryal | On the transmuted extreme value distribution with application[END_REF][START_REF] Aryal | Transmuted log-logistic distribution[END_REF], [START_REF] Khan | Transmuted modified Weibull distribution: A generalization of the modified Weibull probability distribution[END_REF] and [START_REF] Khan | Characterizations of the transmuted inverse Weibull distribution[END_REF], and the references therein. Modern ideas include the DUS transformation proposed by [START_REF] Kumar | A Method of Proposing New Distribution and its Application to Bladder Cancer Patients Data[END_REF]: G(x) = (e^{F(x)} - 1)/(e - 1), the SS transformation introduced by [START_REF] Kumar | A New Distribution Using Sine Function-Its Application to Bladder Cancer Patients Data[END_REF]: G(x) = sin((π/2) F(x)), and the MG transformation studied by [START_REF] Kumar | Life Time Distributions: Derived from some Minimum Guarantee Distribution[END_REF]: G(x) = e^{1 - 1/F(x)}. An interesting approach is also given by the M transformation developed by [START_REF] Kumar | The New Probability Distribution: An Aspect to a Life Time Distribution[END_REF], where, using two cdfs F_1(x) and F_2(x), the considered cdf is G(x) = (F_1(x) + F_2(x))/(1 + F_1(x)). In particular, [START_REF] Kumar | The New Probability Distribution: An Aspect to a Life Time Distribution[END_REF] showed that the M transformation has useful applications in data analysis. With specific cdfs F_1(x) and F_2(x), it can fit real data better than some commonly used distributions. In this study, we propose a generalized version of the M transformation, called the GM transformation. It is constructed from sums and products of m cdfs with m ≥ 1. In comparison to the M transformation, it offers more possibilities for the cdf, mainly thanks to more flexibility in the denominator term.
Then new distributions are derived, with the associated probability density function (pdf) and hazard rate function (hrf). In particular, some graphs of such functions related to new distributions based on the Weibull distribution and on the Cauchy distribution combined with the normal distribution are given, showing a wide variety of shapes, curves and asymmetries. The note is organized as follows. In Section 2, we present our new transformation. Sections 3 and 4 apply it with specific well-known distributions, defining the associated pdfs and hrfs with some plots. Section 5 is devoted to the proof of our theorem.
GM transformation
Let m ≥ 1 be an integer, F_1(x), . . . , F_m(x) be m cdfs of continuous distribution(s) with common support, and δ_1, . . . , δ_m be m binary numbers, i.e. δ_k ∈ {0, 1} for any k ∈ {1, . . . , m}. We introduce the following transformation of F_1(x), . . . , F_m(x):

G(x) = ( ∑_{k=1}^{m} F_k(x) ) / ( m - 1 + ∏_{k=1}^{m} (F_k(x))^{δ_k} ),    (1)

with the imposed value δ_m = 0 in the special case where m = 1. The support of G(x) is the common support of F_1(x), . . . , F_m(x). The role of δ_1, . . . , δ_m is to activate or not the chosen cdfs in the product in the denominator. For example, taking m = 2, δ_1 = 1 and δ_2 = 1, the function (1) becomes G(x) = (F_1(x) + F_2(x))/(1 + F_1(x)F_2(x)). Taking m = 3, δ_1 = 1, δ_2 = 1 and δ_3 = 0, the function (1) becomes G(x) = (F_1(x) + F_2(x) + F_3(x))/(2 + F_1(x)F_2(x)); F_3(x) is excluded from the denominator. The following result motivates the interest of (1).
Theorem 1. The function G(x) in (1) possesses the properties of a cdf.
The proof of Theorem 1 is given in Section 5. Let us now present some immediate examples. Taking m = 1 (so δ_1 = 0), we obtain the simple cdf G(x) = F_1(x). The choice δ_1 = . . . = δ_m = 0 gives a uniform mixture of cdfs: G(x) = (1/m) ∑_{k=1}^{m} F_k(x). Finally, for m = 2, δ_1 = 1 and δ_2 = 0, we obtain the M transformation introduced by [START_REF] Kumar | The New Probability Distribution: An Aspect to a Life Time Distribution[END_REF]: G(x) = (F_1(x) + F_2(x))/(1 + F_1(x)). For this reason, we will call (1) the GM transformation (a Generalization of the M transformation). To the best of our knowledge, it is new in the literature. New cdfs can also be derived by combining the GM transformation with existing transformations. Some of them, using only one cdf, are described below.
• For any cdf F of a continuous distribution with support equal to R, [0, +∞) or (-∞, 0), and any real numbers β_1, . . . , β_m, where β_k > 0 for any k ∈ {1, . . . , m}, the GM transformation includes the following cdf: G(x) = ( ∑_{k=1}^{m} F(β_k x) ) / ( m - 1 + ∏_{k=1}^{m} (F(β_k x))^{δ_k} ).
• Combining the GM transformation and the power transformation introduced by [START_REF] Gupta | Modeling failure time data by Lehman alternatives[END_REF], for any cdf F of a continuous distribution and any real numbers α_1, . . . , α_m, where α_k ≥ 1 for any k ∈ {1, . . . , m}, we obtain the cdf: G(x) = ( ∑_{k=1}^{m} (F(x))^{α_k} ) / ( m - 1 + ∏_{k=1}^{m} (F(x))^{δ_k α_k} ).
• Combining the GM transformation and the transformation using the QRTM introduced by [START_REF] Shaw | The alchemy of probability distributions: beyond Gram-Charlier expansions, and a skew-kurtotic-normal distribution from a rank transmutation map[END_REF], for any cdf F of a continuous distribution and any real numbers λ_1, . . . , λ_m, where λ_k ∈ [-1, 1] for any k ∈ {1, . . . , m}, we obtain the cdf: G(x) = ( ∑_{k=1}^{m} [(1 + λ_k)F(x) - λ_k(F(x))^2] ) / ( m - 1 + ∏_{k=1}^{m} ((1 + λ_k)F(x) - λ_k(F(x))^2)^{δ_k} ).
Other interesting combinations are possible according to the problem.
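As a quick numerical illustration of definition (1), the following Python sketch builds the GM transformation from arbitrary component cdfs and checks that the result behaves like a cdf (non-decreasing, with limits 0 and 1); the Weibull components chosen here are arbitrary examples, not distributions singled out by the theory.

import numpy as np
from scipy import stats

def gm_cdf(cdfs, deltas):
    # GM transformation (1): cdfs is a list of callables F_k, deltas a list of 0/1 flags
    # selecting which F_k enter the product in the denominator.
    m = len(cdfs)
    def G(x):
        num = sum(F(x) for F in cdfs)
        den = m - 1 + np.prod([F(x) ** d for F, d in zip(cdfs, deltas)], axis=0)
        return num / den
    return G

# Example: m = 2 with two Weibull cdfs, both activated in the denominator,
# i.e. G(x) = (F_1(x) + F_2(x)) / (1 + F_1(x) F_2(x)).
F1 = stats.weibull_min(c=1.5, scale=1.0).cdf
F2 = stats.weibull_min(c=3.0, scale=2.0).cdf
G = gm_cdf([F1, F2], [1, 1])

x = np.linspace(0.0, 10.0, 1001)
g = G(x)
assert np.all(np.diff(g) >= -1e-12)                # non-decreasing
assert abs(g[0]) < 1e-6 and abs(g[-1] - 1) < 1e-3  # limits 0 and 1

With deltas = [1, 0] the same helper reproduces the M transformation of [START_REF] Kumar | The New Probability Distribution: An Aspect to a Life Time Distribution[END_REF].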
Thanks to their adaptability, with a specific F (x), these cdfs are of interest from the theoretical and applied aspects. A particular case with some related new distributions If we chose F 1 (x) = . . . = F m (x) = F (x) and δ 1 , . . . , δ m such that m k=1 δ k = q with q ∈ {0, . . . , m}, the GM transformation yields the following cdf: G(x) = mF (x) m -1 + (F (x)) q . Let f be an associated pdf to F . Then an associated pdf to G is given by g(x) = m(m -1 -(q -1)(F (x)) q )f (x) (m -1 + (F (x)) q ) 2 . The associated hrf is given by h(x) = m(m -1 -(q -1)(F (x)) q )f (x) (m -1 + (F (x)) q )(m -1 + (F (x)) q -mF (x)) . Remark 1. For this special case, note that G is still a cdf for any real numbers m > 1 and q such that q ∈ [0, m). The case m = 2 and q = 1 corresponds to a particular case of the M transformation studied in [START_REF] Kumar | The New Probability Distribution: An Aspect to a Life Time Distribution[END_REF]. New distributions arise from these functions. Some of them with potential interest are presented below. • Considering the uniform distribution on [0, 1], we have F (x) = x1 [0,1] (x) + 1 (1,+∞) (x), G(x) = mx m -1 + x q 1 [0,1] (x) + 1 (1,+∞) (x), g(x) = m(m -1 -(q -1)x q ) (m -1 + x q ) 2 1 [0,1] (x) and h(x) = m(m -1 -(q -1)x q ) (m -1 + x q )(m -1 + x q -mx) 1 [0,1] (x). • Considering the exponential distribution with parameter λ > 0, we have F (x) = (1 -e -λx )1 [0,+∞) (x), G(x) = m(1 -e -λx ) m -1 + (1 -e -λx ) q 1 [0,+∞) (x), g(x) = mλ(m -1 -(q -1)(1 -e -λx ) q )e -λx (m -1 + (1 -e -λx ) q ) 2 1 [0,∞) (x) and h(x) = mλ(m -1 -(q -1)(1 -e -λx ) q )e -λx (m -1 + (1 -e -λx ) q )((1 -e -λx ) q + me -λx -1) 1 [0,∞) (x). • Considering the logistic distribution with parameters µ ∈ R and s > 0, we have F (x) = 1 + e -( x-µ s ) -1 , x ∈ R, G(x) = m 1 + e -( x-µ s ) -1 m -1 + 1 + e -( x-µ s ) -q , g(x) = m m -1 -(q -1) 1 + e -( x-µ s ) -q e -( x-µ s ) s 1 + e -( x-µ s ) 2 m -1 + 1 + e -( x-µ s ) -q 2 and h(x) = m m -1 -(q -1) 1 + e -( x-µ s ) -q e -( x-µ s ) s 1 + e -( x-µ s ) 2 m -1 + 1 + e -( x-µ s ) -q m -1 + 1 + e -( x-µ s ) -q -m 1 + e -( x-µ s ) -1 . F (x) = 1 π arctan x-x0 a + 1 2 , x ∈ R, G(x) = m 1 π arctan x-x0 a + 1 2 m -1 + 1 π arctan x-x0 a + 1 2 q and g(x) = ma m -1 -(q -1) 1 π arctan x-x0 a + 1 2 q π((x -x 0 ) 2 + a 2 ) m -1 + 1 π arctan x-x0 a + 1 2 q 2 . • Considering the normal distribution with parameters µ ∈ R and σ > 0, we have F (x) = x -∞ 1 √ 2πσ 2 e -(t-µ) 2 2σ 2 dt = Φ(x), x ∈ R, G(x) = mΦ(x) m -1 + (Φ(x)) q , g(x) = (m -1 -(q -1)(Φ(x)) q ) e -(x-µ) 2 2σ 2 √ 2πσ 2 (m -1 + (Φ(x)) q ) 2 and h(x) = m(m -1 -(q -1)(Φ(x)) q )e -(x-µ) 2 2σ 2 √ 2πσ 2 (m -1 + (Φ(x)) q )(m -1 + (Φ(x)) q -mΦ(x) ) . • Considering the Weibull distribution with parameters k > 0 and λ > 0, we have F (x) = 1 -e -( x λ ) k 1 [0,+∞) (x), G(x) = m 1 -e -( x λ ) k m -1 + 1 -e -( x λ ) k q 1 [0,+∞) (x), (2) g(x) = m k λ x λ k-1 m -1 -(q -1) 1 -e -( x λ ) k q e -( x λ ) k m -1 + 1 -e -( x λ ) k q 2 1 [0,∞) (x) (3) and h(x) = m k λ x λ k-1 m -1 -(q -1) 1 -e -( x λ ) k q e -( x λ ) k m -1 + 1 -e -( x λ ) k q 1 -e -( x λ ) k q + me -( x λ ) k - 1 1 [0,∞) (x). ( 4 ) For this case, particularly rich, we denote the associated distribution by GM W (m, q, k, λ). The case m = 2 and q = 1 corresponds to the distribution M W (k, λ) introduced by [START_REF] Kumar | The New Probability Distribution: An Aspect to a Life Time Distribution[END_REF]. 
Our distribution has the advantage to offer more flexibility thanks to the additional parameters m and q, opening the door to many applications in data analysis. In order to illustrate the potential of applicability of GM W (m, q, k, λ), some graphs of the associated cdf, pdf and hrf are presented in Figures 1, 2 and 3 showing various shapes, curves and asymmetries. Figure 1: Some cdfs G(x) = G(x, m, q, k, λ) (2) associated to the distribution GM W (m, q, k, λ). g(x, 5, 4, 3, 1) Figure 2: Some pdfs g(x) = g(x, m, q, k, λ) (3) associated to the distribution GM W (m, q, k, λ). Figure 3: Some hrfs h(x) = h(x, m, q, k, λ) (4) associated to the distribution GM W (m, q, k, λ). Another case with some related new distributions If we chose m = 2 and δ 1 = δ 2 = 1, then the GM transformation is reduced to the following form G(x) = F 1 (x) + F 2 (x) 1 + F 1 (x)F 2 (x) . The main difference with G and the cdf proposed by [START_REF] Kumar | The New Probability Distribution: An Aspect to a Life Time Distribution[END_REF] is the function F 2 in the denominator, leading new cdf. The associated pdf is given by g(x) = f 1 (x)(1 -(F 2 (x)) 2 ) + f 2 (x)(1 -(F 1 (x)) 2 ) (1 + F 1 (x)F 2 (x)) 2 . The associated hrf is given by h(x) = f 1 (x)(1 -(F 2 (x)) 2 ) + f 2 (x)(1 -(F 1 (x)) 2 ) (1 + F 1 (x)F 2 (x))(1 -F 1 (x))(1 -F 2 (x)) . New distributions can arise from the expressions above and some of them are presented below. • Considering the cdf F 1 of the power distribution with parameters α > 0 and the cdf F 2 of the power distribution with parameters β > 0. Then we have F 1 (x) = x α 1 [0,1] (x) + 1 (1,+∞) (x), F 2 (x) = x β 1 [0,1] (x) + 1 (1,+∞) (x), G(x) = x α + x β 1 + x α+β 1 [0,1] (x) + 1 (1,+∞) (x), g(x) = αx α-1 (1 -x 2β ) + βx β-1 (1 -x 2α ) (1 + x α+β ) 2 1 [0,1] (x) and h(x) = αx α-1 (1 -x 2β ) + βx β-1 (1 -x 2α ) (1 + x α+β )(1 -x α )(1 -x β ) 1 [0,1] (x). • Considering the cdf F 1 of the Weibull distribution with parameters k 1 > 0 and λ 1 > 0 and the cdf F 2 of the Weibull distribution with parameters k 2 > 0 and λ 2 > 0 . Then we have F 1 (x) = 1 -e -x λ 1 k 1 1 [0,+∞) (x), F 2 (x) = 1 -e -x λ 2 k 2 1 [0,+∞) (x), G(x) = 2 -e -x λ 1 k 1 -e -x λ 2 k 2 2 -e -x λ 1 k 1 -e -x λ 2 k 2 + e -x λ 1 k 1 -x λ 2 k 2 1 [0,+∞) (x), g(x) = k1 λ1 x λ1 k1-1 e -x λ 1 k 1 1 -1 -e -x λ 2 k 2 2 2 -e -x λ 1 k 1 -e -x λ 2 k 2 + e -x λ 1 k 1 -x λ 2 k 2 2 1 [0,+∞) (x) + k2 λ2 x λ2 k2-1 e -x λ 2 k 2 1 -1 -e -x λ 1 k 1 2 2 -e -x λ 1 k 1 -e -x λ 2 k 2 + e -x λ 1 k 1 -x λ 2 k 2 2 1 [0,+∞) (x) 6 and h(x) = k1 λ1 x λ1 k1-1 e -x λ 1 k 1 1 -1 -e -x λ 2 k 2 2 2 -e -x λ 1 k 1 -e -x λ 2 k 2 + e -x λ 1 k 1 -x λ 2 k 2 e -x λ 1 k 1 e -x λ 2 k 2 1 [0,+∞) (x) + k2 λ2 x λ2 k2-1 e -x λ 2 k 2 1 -1 -e -x λ 1 k 1 2 2 -e -x λ 1 k 1 -e -x λ 2 k 2 + e -x λ 1 k 1 -x λ 2 k 2 e -x λ 1 k 1 e -x λ 2 k 2 1 [0,+∞) (x). • Considering the cdf F 1 of the Cauchy distribution with parameters 0 and 1 and the cdf F 2 of the normal distribution with parameters µ ∈ R and σ > 0. Then we have F 1 (x) = 1 π arctan(x) + 1 2 , F 2 (x) = x -∞ 1 √ 2πσ 2 e -(t-µ) 2 2σ 2 dt = Φ(x), x ∈ R, G(x) = 1 π arctan(x) + 1 2 + Φ(x) 1 + 1 π arctan(x) + 1 2 Φ(x) , (5) g(x) = 1 π(x 2 +1) 1 -(Φ(x)) 2 + 1 √ 2πσ 2 e -(x-µ) 2 2σ 2 1 -1 π arctan(x) + 1 2 2 1 + 1 π arctan(x) + 1 2 Φ(x) 2 (6) and h(x) = 1 π(x 2 +1) 1 -(Φ(x)) 2 + 1 √ 2πσ 2 e -(x-µ) 2 2σ 2 1 -1 π arctan(x) + 1 2 2 1 + 1 π arctan(x) + 1 2 Φ(x) 1 2 -1 π arctan(x) Φ(-x) . ( 7 ) Some graphs of these three functions for arbitrary values of (µ, σ) are given in Figures 4, 5 and 6. 
Again, we see different kinds of shapes, curves and asymmetries, which can be of interest for the statistician in a analysis data context. Proofs Proof of Theorem 1. For any k ∈ {1, . . . , m}, let f k (x) be a pdf associated to the cdf F k (x). Recall that F k (x) is continuous with F k (x) ∈ [0, 1], lim x→+∞ F k (x) = 1, lim x→-∞ F k (x) = 0 and f k (x) = F k (x) almost everywhere with f k (x) ≥ 0. Let us now investigate the sufficient conditions for G(x) to be a cdf. • Since (F k (x)) δ k > 0, we have G(x) ≥ 0. On the other hand, using the inequality: m k=1 (1 -x k ) ≥ 1 - m k=1 x k , x k ∈ [0, 1], with x k = 1 -(F k (x)) δ k ∈ [0, 1] and observing that (F k (x)) δ k ≥ F k (x), we obtain m k=1 (F k (x)) δ k ≥ 1 - m k=1 (1 -(F k (x)) δ k ) = 1 -m + m k=1 (F k (x)) δ k ≥ 1 -m + m k=1 F k (x). Hence G(x) ≤ 1. • Let us prove that G (x) ≥ 0. For any derivable function u(x), note that ((u(x)) δ k ) = δ k u (x) since δ k ∈ {0, 1}. Therefore we have G (x) = A(x) B(x) almost everywhere, where A(x) = m k=1 f k (x) m -1 + m k=1 (F k (x)) δ k - m k=1 F k (x)    m k=1 δ k f k (x) m u=1 u =k (F u (x)) δu    and B(x) = m -1 + m k=1 (F k (x)) δ k 2 . We have B(x) > 0. Let us now investigate the sign of A(x). The following decomposition holds: A(x) = A 1 (x) + A 2 (x), where Figure 4 : 4 Figure 4: Some cdfs G(x) = G(x, µ, σ) (5) with various values for µ and λ. Figure 5 :Figure 6 : 56 Figure 5: Some pdfs g(x) = g(x, µ, σ) (6) with various values for µ and λ. F•F k (x) and m-1+ m k=1 (F k (x)) δ k are continuous functions with m-1+ m k=1 (F k (x)) δ k = 0, G(x) is a continuous function of x. Let us prove that G(x) ∈ [0, 1]. Owing to m k=1 k (x) ≥ 0 and m -1 + m k=1 A 1 (( 1 -•FF 1 111 δ k )f k (x) m -1 + m k=1 (F k (x)) δ k .Since A 2 (x) ≥ 0 as a sum of positive terms, let us focus on the sign of A 1 (x). Observe that,if δ k = 1, we have F k (x) (x)) δu . If δ k = 0, the k-th term in the sum of A 1 (x) is zero. Therefore we can write A 1 (x) = m k=1 δ k f k (x) (x)) δu ≤ 1, we have m -1 -(x)) δu ≥ 0, implying that A 1 (x) ≥ 0. Therefore A(x) ≥ 0, so G (x) ≥ 0. Let us now investigate lim x→-∞ G(x) and lim x→+∞ G(x). If m ≥ 2, we have m -1 + m k=1 (F k (x)) δ k ≥ m -1 > 0. Since lim k (x) = 0, we have lim x→-∞ G(x) = 0. If m = 1, recall that we have imposed δ m = 0, so lim (x) = 0. On the other hand, for any m ≥ 1, we have lim x→+∞
16,370
[ "15258" ]
[ "105", "444421" ]
01490176
en
[ "info" ]
2024/03/04 23:41:50
2017
https://inria.hal.science/hal-01490176/file/mishraIJDAR.pdf
Anand Mishra email: [email protected] Alahari Karteek email: [email protected] C V Jawahar ⋆⋆ ⋆⋆⋆ Karteek ⋆⋆ Alahari ⋆⋆⋆ C V Jawahar email: [email protected] Unsupervised refinement of color and stroke features for text binarization Color and strokes are the salient features of text regions in an image. In this work, we use both these features as cues, and introduce a novel energy function to formulate the text binarization problem. The minimum of this energy function corresponds to the optimal binarization. We minimize the energy function with an iterative graph cut based algorithm. Our model is robust to variations in foreground and background as we learn Gaussian mixture models for color and strokes in each iteration of the graph cut. We show results on word images from the challenging ICDAR 2003/2011, born-digital image and street view text datasets, as well as full scene images containing text from ICDAR 2013 datasets, and compare our performance with state-of-the-art methods. Our approach shows significant improvements in performance under a variety of performance measures commonly used to assess text binarization schemes. In addition, our method adapts to diverse document images, like text in videos, handwritten text images. Introduction . The performance of subsequent steps like character segmentation and recognition is highly dependent on the success of binarization. Document image binarization has been an active area of research for many years [START_REF] Stathis | An evaluation technique for binarization algorithms[END_REF][START_REF] Howe | A Laplacian energy for document binarization[END_REF][START_REF] Howe | Document binarization with automatic parameter tuning[END_REF][START_REF] Valizadeh | Binarization of degraded document image based on feature space partitioning and classification[END_REF][START_REF] Lazzara | Efficient multiscale Sauvola's binarization[END_REF][START_REF] Mishra | An MRF model for binarization of natural scene text[END_REF][START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF]. It, however, is not a solved problem in light of the challenges posed by text in video sequences, born-digital (web and email) images, old historic manuscripts and natural scenes where the state-of-the-art recognition performance is still poor. In this context of a variety of imaging systems, designing a powerful text binarization algorithm can be considered a major step towards robust text understanding. Recent interest of the community by organizing binarization contests like DIBCO [START_REF] Pratikakis | ICDAR 2013 document image binarization contest (DIBCO 2013)[END_REF], H-DIBCO [START_REF] Pratikakis | ICFHR 2012 competition on handwritten document image binarization[END_REF][START_REF] Ntirogiannis | ICFHR2014 competition on handwritten document image binarization (H-DIBCO 2014)[END_REF] at major international document image analysis conferences further highlights its importance. In this work, we focus on binarization of natural scene text images. These images contain numerous degradations which are not usually present in machine-printed ones, e.g., uneven lighting, blur, complex background, and perspective distortion. A few sample images from the popular datasets we use are shown in Fig. 1. Our proposed method is targeted to such cases, and also to historical handwritten document images. 
Our method is inspired by the success of interactive graph cut [START_REF] Boykov | Interactive graph cuts for optimal boundary and region segmentation of objects in ND images[END_REF] and GrabCut [START_REF] Rother | GrabCut: Interactive foreground extraction using iterated graph cuts[END_REF] algorithms for foreground-background segmentation of natural scenes. We formulate the binarization problem in an energy minimization framework, where text is foreground and anything else is background, and define a novel energy (cost) function such that the quality of the binarization is inversely related to the energy value. We minimize this energy function to find the optimal binarization using an iterative graph cut scheme. The graph cut method needs to be initialized with foreground and background seeds. To make the binarization fully automatic, we initialize the seeds by obtaining character-like strokes. At each iteration of graph cut, the seeds and the binarization are refined. This makes it more powerful than a oneshot graph cut algorithm. Moreover, we use two cues to distinguish text regions from background: (i) color, and (ii) stroke width. We model foreground and background colors, as well as stroke widths in a Gaussian mixture Markov random field framework [START_REF] Blake | Interactive image segmentation using an adaptive GMMRF model[END_REF], to make the binarization robust to variations in foreground and background. The contributions of this work are threefold: firstly, we propose a principled framework for the text binarization problem, which is initialized with character-like strokes in an unsupervised manner. The use of color and stroke width features together in an optimization framework for text binarization is an important factor in our work. Secondly, we present a comprehensive evaluation of the proposed binarization method on multiple text datasets. We evaluate the performance using various measures, such as pixel-level and atom-level scores, recognition accuracy, and compare it with the state-ofthe-art methods [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF][START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF][START_REF] Kittler | Threshold selection based on a simple image statistic[END_REF][START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF][START_REF] Kasar | Font and background color independent text binarization[END_REF][START_REF] Niblack | An introduction to digital image processing[END_REF][START_REF] Sauvola | Adaptive document image binarization[END_REF][START_REF] Wolf | Binarization of low quality text using a Markov random field model[END_REF] as well as the top-performing methods in the ICDAR robust reading competition [START_REF] Karatzas | ICDAR 2013 robust reading competition[END_REF]. To our knowledge, text binarization methods have not been evaluated in such a rigorous setting in the past, and are restricted to only a few hundred images or one category of document images (e.g., handwritten documents or scene text). In contrast, we evaluate on more than 2000 images including scene text, video text, born-digital and handwritten text images. Additionally, we also perform qualitative analysis on 6000 images containing video text of several Indian scripts. Interestingly, the performance of existing binarization methods varies widely across the datasets, whereas our results are consistently compelling. 
In fact, our binarization improves the recognition results of an open source OCR [START_REF]Tesseract OCR[END_REF] by more than 10% on various public benchmarks. Thirdly, we show the utility of our method in binarizing degraded historical documents. On a benchmark dataset of handwritten images, our method achieves comparable performance to the H-DIBCO 2012 competition winner and a state-of-the-art method [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF], which is specifically tuned for handwritten images. The code for our method and the performance measures we use is available on our project website [24]. The remainder of the paper is organized as follows. We discuss related work in Section 2. In Section 3, the binarization task is formulated as a labeling problem, where we define the energy function such that its minimum corresponds to the target binary image. This section also briefly introduces the graph cut method. Section 4 explains the terms of the cost function in detail. In Section 5, we discuss our automatic GMM initialization strategy. Section 6 gives details of the datasets, evaluation protocols, and performance measures used in this work. Experimental settings, results, discussions, and comparisons with various classical as well as modern binarization techniques are provided in Section 7, followed by a summary in Section 8. Related Work Early methods for text binarization were mostly designed for clean, scanned documents. In the context of images taken from street scenes, video sequences and historical handwritten documents, binarization poses many additional challenges. A few recent approaches aimed to address them for scene text binarization [START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF][START_REF] Thillou | Color binarization for complex camera-based images[END_REF][START_REF] Kita | Binarization of color characters in scene images using k-means clustering and support vector machines[END_REF], handwritten text binarization [START_REF] Howe | A Laplacian energy for document binarization[END_REF][START_REF] Howe | Document binarization with automatic parameter tuning[END_REF] and degraded printed text binarization [START_REF] Lu | Document image binarization using background estimation and stroke edges[END_REF]. In this section we review such literature as well as other works related to binarization (specifically text binarization), and argue for the need for better techniques. We group text binarization approaches into three broad categories: (i) classical binarization, (ii) energy minimization based methods, and (iii) others. Classical binarization methods. They can be further categorized into: global (e.g., Otsu [START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF], Kittler [START_REF] Kittler | Threshold selection based on a simple image statistic[END_REF]) and local (e.g., Sauvola [START_REF] Sauvola | Adaptive document image binarization[END_REF], Niblack [START_REF] Niblack | An introduction to digital image processing[END_REF]) approaches. Global approaches compute a binarization threshold based on global statistics of the image such as intra-class variance of text and background regions, whereas local approaches compute the threshold from local statistics of the image such as mean and variance of pixel intensities in patches. 
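To make the local family concrete, a minimal Python sketch of a Niblack-style local threshold is given below; the window size and the constant k are typical illustrative values rather than settings prescribed by [START_REF] Niblack | An introduction to digital image processing[END_REF].

import numpy as np
from scipy.ndimage import uniform_filter

def niblack_binarize(gray, window=25, k=-0.2):
    # Local threshold T = local mean + k * local standard deviation.
    gray = gray.astype(np.float64)
    mean = uniform_filter(gray, size=window)
    sq_mean = uniform_filter(gray * gray, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    threshold = mean + k * std
    return gray <= threshold   # True for text pixels when the text is darker than the background

A global method such as Otsu would instead pick a single threshold for the whole image from its gray-level statistics.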
The reader is encouraged to refer to [START_REF] Stathis | An evaluation technique for binarization algorithms[END_REF] for more details of these methods. Although most of these methods perform satisfactorily for many cases, they suffer from problems like: (i) manual tuning of parameters, (ii) high sensitivity to the choice of parameters, and (iii) failure to handle images with uneven lighting, noisy background, similar foreground-background colors. Energy minimization based methods. Several methods have been proposed for text binarization problems in this paradigm over the last decade [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF][START_REF] Mishra | An MRF model for binarization of natural scene text[END_REF][START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF][START_REF] Wolf | Binarization of low quality text using a Markov random field model[END_REF][START_REF] Kuk | Feature based binarization of document images degraded by uneven light condition[END_REF][START_REF] Peng | Markov random field based binarization for hand-held devices captured document images[END_REF][START_REF] Zhang | An improved scene text extraction method using conditional random field and optical character recognition[END_REF][START_REF] Pan | Text localization in natural scene images based on conditional random field[END_REF][START_REF] Hebert | Discrete CRF based combination framework for document image binarization[END_REF]. Here, the binarization task is posed as an optimization problem, typically modeled using Markov random fields (MRFs). In [START_REF] Wolf | Binarization of low quality text using a Markov random field model[END_REF], Wolf and Doermann applied simulated annealing to minimize the resulting cost function. The method proposed in [START_REF] Kuk | Feature based binarization of document images degraded by uneven light condition[END_REF], authors first classified a document into text, near text and background regions, and then performed a graph cut to produce the binary image. An MRF based binarization for camera-captured document images was proposed in [START_REF] Peng | Markov random field based binarization for hand-held devices captured document images[END_REF], where a thresholding based technique is used to produce an initial binary image which is refined with a graph cut scheme. The energy function in [START_REF] Peng | Markov random field based binarization for hand-held devices captured document images[END_REF] also uses stroke width as cues, and achieves good performance on printed document images. However, it needs an accurate estimation of stroke width, which is not always trivial in the datasets we use (see Fig. 2). Following a similar pipeline of thresholding followed by labeling with a conditional random field (CRF) model, Zhang et al. [START_REF] Zhang | An improved scene text extraction method using conditional random field and optical character recognition[END_REF] and Pan et al. [START_REF] Pan | Text localization in natural scene images based on conditional random field[END_REF] proposed text extraction methods. These methods however rely on the performance of the thresholding step. Also, being a supervised method, they require large training data with pixel-level annotations for learning a text vs non-text classifier. Hebert et al. 
[START_REF] Hebert | Discrete CRF based combination framework for document image binarization[END_REF] proposed a scheme where six classical binarization approaches are combined in a CRF framework. Unlike these methods [START_REF] Peng | Markov random field based binarization for hand-held devices captured document images[END_REF][START_REF] Zhang | An improved scene text extraction method using conditional random field and optical character recognition[END_REF][START_REF] Pan | Text localization in natural scene images based on conditional random field[END_REF][START_REF] Hebert | Discrete CRF based combination framework for document image binarization[END_REF], our framework does not require thresholding as a first step and proceeds with stroke as well as color initializations which are refined iteratively in an unsupervised manner. Howe [START_REF] Howe | A Laplacian energy for document binarization[END_REF] used the Laplacian of image intensity in the energy term for document binarization, and later improved it with a method for automatic parameter selection in [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF]. These approaches were designed for handwritten images, and fail to cope up with variations in scene text images, e.g., large changes in stroke width and foreground-background colors within a single image. Adopting a similar framework, Milyaev et al. [START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF] have proposed a scene text binarization technique, where they obtain an initial estimate of binarization with [START_REF] Niblack | An introduction to digital image processing[END_REF], and then use Laplacian of image intensity to compute the unary term of the energy function. Other methods. Binarization has also been formulated as a text extraction problem [START_REF] Kasar | Font and background color independent text binarization[END_REF][START_REF] Gatos | Text detection in indoor/outdoor scene images[END_REF][START_REF] Ezaki | Text detection from natural scene images: towards a system for visually impaired persons[END_REF][START_REF] Gomez | A fast hierarchical method for multi-script and arbitrary oriented scene text extraction[END_REF][START_REF] Epshtein | Detecting text in natural scenes with stroke width transform[END_REF]. Gatos et al. [START_REF] Gatos | Text detection in indoor/outdoor scene images[END_REF] presented a method with four steps: denoising with a low-pass Wiener filter, rough estimation of text and background, using the estimates to compute local thresholds, and post-processing to eliminate noise and preserve strokes. Epshtein et al. [START_REF] Epshtein | Detecting text in natural scenes with stroke width transform[END_REF] presented a novel operator called the stroke width transform. It computes the stroke width at every pixel of the input image. A set of heuristics were then applied for text extraction. Kasar et al. [START_REF] Kasar | Font and background color independent text binarization[END_REF] proposed a method which extracts text based on candidate bounding boxes in a Canny edge image. Ezaki et al. [START_REF] Ezaki | Text detection from natural scene images: towards a system for visually impaired persons[END_REF] applied Otsu binarization [START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF] on different image channels, and then used morphological operators as post processing. 
Feild and Learned-Miller [START_REF] Feild | Scene text recognition with bilateral regression[END_REF] proposed a bilateral regression based binarization method. This method uses color clustering as a starting point to fit a regression model, and generates multiple hypotheses of text regions. Histogram of gradient features [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF] computed for English characters are then used to prune these hypotheses. Tian et al. [START_REF] Tian | Scene text segmentation with multi-level maximally stable extremal regions[END_REF] proposed a binarization technique which computes MSER [START_REF] Matas | Robust wide baseline stereo from maximally stable extremal regions[END_REF] on different color channels to obtain many connected components, and then prune them based on text vs non-text classifier to produce the binarization output. Most of these approaches are either supervised methods requiring large labeled training data, or use multiple heuristics which can not be easily generalized to the diverse datasets we use. In contrast to the binarization techniques in literature, we propose a method which models color as well as stroke width distributions of foreground (text) and background (non-text) using Gaussian mixture models, and perform inference using an iterative graph cut algorithm to obtain clean binary images. We evaluate publicly available implementations of many existing meth- [START_REF] Karatzas | ICDAR 2013 robust reading competition[END_REF], and (b) part of a handwritten document image taken from H-DIBCO 2012 [START_REF] Pratikakis | ICFHR 2012 competition on handwritten document image binarization[END_REF]. We note that stroke width within text is not always constant, and varies smoothly. ods on multiple benchmarks, and compare with them in Section 7. This paper is an extension of our initial work [START_REF] Mishra | An MRF model for binarization of natural scene text[END_REF] which appeared at ICDAR 2011, with the following additions: (i) we initialize candidate text regions using character-like strokes, and refine them in an iterative scheme, instead of relying on a heuristically-designed auto-seeding method, (ii) we incorporate a novel stroke based term in the original color based energy function, and compute its relative importance with respect to color based terms automatically, and (iii) we perform extensive experiments on several recent benchmarks, including handwritten image datasets H-DIBCO 2012/2014, video texts, born-digital images, and ICDAR 2011/2013 datasets. Iterative Graph Cut based Binarization We formulate the binarization problem in a labeling framework as follows. The binary output of a text image containing n pixels can be expressed as a vector of random variables X = {X 1 , X 2 , ..., X n }, where each random variable X i takes a label x i ∈ {0, 1} based on whether it is text (foreground) or non-text (background). Most of the heuristic-based algorithms take the decision of assigning label 0 or 1 to x i based on the pixel value at that location, or local statistics computed in a neighborhood. In contrast, we formulate the problem in a more principled framework where we represent image pixels as nodes in a conditional random field (CRF) and associate a unary and pairwise cost for labeling pixels. 
We then solve the problem by minimizing a linear combination of two energy functions E c and E s given by: [START_REF] Chen | Broken and degraded document images binarization[END_REF] such that its minimum corresponds to the target binary image. Here x = {x 1 , x 2 , ..., x n } is the set of labels of all the pixels. The model parameters θ c and θ s are learned from the foreground/background color and stroke width distributions respectively. The vector z c contains the color values of all the pixels in RGB color space, and the vector z s contains pixel intensity and stroke width at every pixel. 1 The weights w 1 and w 2 are automatically computed from the text image. To this end, we use two image properties, edge density (ρ 1 ) and stroke width consistency (ρ 2 ). They are defined as the fraction of edge pixels and standard deviation of stroke widths in the image respectively. We observe that stroke cues are more reliable when we have sufficient edge pixels (i.e., edge density ρ 1 is high), and when the standard deviation of stroke widths is low (i.e., stroke width consistency ρ 2 is low). Based on this, we compute the relative weights ( ŵ1 , ŵ2 ) between color and stroke terms as follows: ŵ2 = ρ1 ρ2 , ŵ1 = |1-ŵ2 |. We then normalize these weights to obtain w 1 and w 2 as follows: E all (x, θ, z) = w 1 E c (x, θ c , z c ) + w 2 E s (x, θ s , z s ), w 1 = ŵ1 ŵ1 + ŵ2 , (2) w 2 = ŵ2 ŵ1 + ŵ2 , (3) giving more weight to the stroke width based term when the extracted strokes are more reliable, and vice-versa. For simplicity, we will denote θ c and θ s as θ and z c , and z s as z from now. It should be noted that the formulation of stroke width based term E s and color based term E c are analogous. Hence, we will only show the formulation of color based energy term in the subsequent text. It is expressed as: E(x, θ, z) = i E i (x i , θ, z i ) + (i,j)∈N E ij (x i , x j , z i , z j ), (4) where, N denotes the neighborhood system defined in the CRF, and E i and E ij correspond to data and smoothness terms respectively. The data term E i measures the degree of agreement of the inferred label x i to the observed image data z i . The smoothness term measures the cost of assigning labels x i , x j to adjacent pixels, essentially imposing spatial smoothness. The unary term is given by: E i (x i , θ, z i ) = -log p(x i |z i ), (5) where p(x i |z i ) is the likelihood of pixel i taking label x i . The smoothness term is the standard Potts model [START_REF] Boykov | Interactive graph cuts for optimal boundary and region segmentation of objects in ND images[END_REF]: E ij (x i , x j , z i , z j ) = λ [x i = x j ] dist(i, j) exp β(z i -z j ) 2 , (6) where the scalar parameter λ controls the degree of smoothness, dist(i, j) is the Euclidean distance between neighboring pixels i and j. The smoothness term imposes the cost only for those adjacent pixels which have different labels, i.e., [x i = x j ]. The constant β allows discontinuity-preserving smoothing, and is given by: β = 1/2E[(z i -z j ) 2 ] , where E[a] is expected value of a [START_REF] Rother | GrabCut: Interactive foreground extraction using iterated graph cuts[END_REF]. The problem of binarization is now to find the global minima of the energy function E all , i.e., x * = arg min x E all (x, θ, z). 
(7) The global minima of this energy function can be efficiently computed by graph cut [START_REF] Boykov | An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision[END_REF] as it satisfies the criteria of submodularity [START_REF] Kolmogorov | What energy functions can be minimized via graph cuts?[END_REF]. To this end, a weighted graph G = (V, E) is formed where each vertex corresponds to an image pixel, and edges link adjacent pixels. Two additional vertices source (s) and sink (t) are added to the graph. All the other vertices are connected to them with weighted edges. The weights of all the edges are defined in such a way that every cut of the graph is equivalent to some label assignment to nodes. Here, a cut of the graph G is a partition of the set of vertices V into two disjoint sets S and T , and the cost of the cut is defined as the sum of the weights of edges going from vertices belonging to the set S to T [START_REF] Kolmogorov | What energy functions can be minimized via graph cuts?[END_REF][START_REF] Boros | Pseudo-boolean optimization[END_REF]. The minimum cut of such a graph corresponds to the global minima of the energy function, which can be computed efficiently [START_REF] Boykov | An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision[END_REF]. In [START_REF] Boykov | Interactive graph cuts for optimal boundary and region segmentation of objects in ND images[END_REF], θ corresponds to the parameters of the image foreground and background histograms. These histograms are constructed directly from the foreground and background seeds obtained with user interaction. However, the foreground/background distribution in the challenging cases we target (see images in Fig. 1) cannot be captured effectively by such histograms. Instead, we represent each pixel color (and stroke width) with a Gaussian mixture model (GMM). In this regard, we are inspired by the success of GrabCut [START_REF] Rother | GrabCut: Interactive foreground extraction using iterated graph cuts[END_REF] for object segmentation. The foreground and background GMMs in GrabCut are initialized by user interaction. We avoid any user interaction by initializing GMMs with character-like strokes obtained using a method described in Section 5. Color and Stroke Width Potentials The color of each pixel is generated from one of the 2c GMMs [START_REF] Reynolds | Gaussian mixture models[END_REF] (c each for foreground and background) with a mean µ and a covariance Σ. 2 In other words, each foreground color pixel is generated from the following distribution: p(z i |x i , θ, k i ) = N (z, θ; µ(x i , k i ), Σ(x i , k i )), (8) where N denotes a Gaussian distribution, x i ∈ {0, 1} and k i ∈ {1, ..., c}. To model the foreground color using this distribution, an additional vector k = {k 1 , k 2 , ..., k n } is introduced where each k i takes one of the c GMM components. Similarly, background color is modeled from one of the c GMM components. Further, the overall likelihood can be assumed to be independent of the pixel position, and thus expressed as: p(z|x, θ, k) = i p(z i |x i , θ, k i ), (9) = i π i |Σ i | exp -zi T Σ -1 i zi 2 , (10) where π i = π(x i , k i ) is Gaussian mixture weighting co- efficient, Σ i = Σ(x i , k i ) and zi = (z i -µ(x i , k i )) . Due to Fig. 3. Overview of the proposed method. Given an input image containing text, we first obtain character-like strokes using the method described in Section 5. 
GMMs for foreground (text) and background (non-text) are learnt from these initial seeds. We learn two types of GMMs: one using RGB color values and another using stroke width and intensity values. Unary and pairwise costs are computed for every pixel, and are appropriately weighted (see Section 3). An s-t graph is constructed with these costs. The min cut of this graph produces an initial binary image, which is used to refine the seeds, and the GMMs. The GMM refinement and graph cut steps are repeated a few times to obtain the final binary image. (Best viewed in pdf.) the introduction of GMMs the data term in (4) becomes dependent on its assignment to a GMM component, and is given by: E i (x i , k i , θ, z i ) = -log p(z i |x i , θ, k i ). (11) In order to make the energy function robust to low contrast color images we introduce a novel term into the smoothness function which measures the "edginess" of pixels as: E ij (x i , x j , z i , z j ) = λ 1 (i,j)∈N Z ij + λ 2 (i,j)∈N G ij , (12) where, Z ij = [x i = x j ] exp(-β c ||z i -z j || 2 ) and G ij = [x i = x j ] exp(-β g ||g i -g j || 2 ). Here g i denotes the magnitude of gradient (edginess) at pixel i. Two neighboring pixels with similar edginess values are more likely to belong to the same class with this constraint. The constants λ 1 and λ 2 determine the relative strength of the color and edginess difference terms with respect to the unary term, and are fixed to 25 empirically. The parameters β c and β g are automatically computed from the image as follows: β c = 1 ξ (i,j)∈N (z i -z j ) 2 , ( 13 ) β g = 1 ξ (i,j)∈N (g i -g j ) 2 , ( 14 ) where ξ = 2(4wh -3w -3h + 2) is the total number of edges in the 8-neighborhood system N with w and h denoting the width and the height of the image respectively. In summary, both the color and stroke width of foreground and background regions are modeled as GMMs. To initialize these GMMs, we obtain characterlike strokes from the given image as described in the following section. GMM Initialization Initializing GMMs can play a crucial role as it is hard to recover from a poor random initialization. In this work we propose to obtain initial seeds from character-like strokes. The idea of obtaining character-like strokes is similar in spirit to the work of Epshtein et al. [START_REF] Epshtein | Detecting text in natural scenes with stroke width transform[END_REF]. However, unlike [START_REF] Epshtein | Detecting text in natural scenes with stroke width transform[END_REF], our method is robust to incorrect strokes as we refine the initializations iteratively by learning new color and stroke GMMs in each iteration. Alternative techniques can also be used for initialization, such as other binarization techniques [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF][START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF][START_REF] Wolf | Binarization of low quality text using a Markov random field model[END_REF]. In Section 7.1 we investigate these alternatives empirically. Obtaining character-like strokes. We begin by extracting an edge image with the Canny edge operator, and then find character-like strokes with the following two-step approach. We first automatically detect the polarity of the image (see Section 7). 
If the average gray pixel value in the vertical strip at the center of an image is greater than the average value in the boundary region, we assign a polarity of one (i.e., light text on dark background), otherwise we assign a polarity of zero (i.e., dark text on light background). In the case of images with polarity one, we subtract 180 • from the original gradient orientation. We then detect the strokes in the second step. Let u be an edge pixel with gradient orientation θ. For every such edge pixel u in the image, we trace a line segment along the gradient orientation θ until we find an edge pixel v, whose gradient orientation is (180 •θ) ± 5 i.e., the opposite direction approximately. We mark pixels u and v as traversed, and the line segment uv as a character-like stroke. We repeat this process for all the non-traversed edge pixels, and mark all the corresponding line segments as character-like strokes. We use these character-like strokes as initial foreground seeds. Pixels with no strokes are used as background seeds. Fig. 4 shows an example image and the corresponding character-like strokes obtained with the method described. We initialize two types of GMMs: one with color values, and other with stroke width and pixel intensity values, for both foreground and background, from these initial seeds. Note that unlike our previous work [START_REF] Mishra | An MRF model for binarization of natural scene text[END_REF], (i) we do not use any heuristics to discard some of the strokes, and instead refine this candidate set of strokes over iterations, (ii) background seeds do not need to be explicitly computed, rather, pixels with no strokes are initialized as potential background. Once the GMMs are initialized, we compute unary and pairwise terms from [START_REF] Pratikakis | ICFHR 2012 competition on handwritten document image binarization[END_REF] and [START_REF] Ntirogiannis | ICFHR2014 competition on handwritten document image binarization (H-DIBCO 2014)[END_REF] for both color and stroke based terms. With the terms in the energy function (1) now defined, iterative graph cut based inference is performed to minimize [START_REF] Chen | Broken and degraded document images binarization[END_REF]. At each iteration, the initializations are refined, new GMMs are learned from them, and the relative weights between color and stroke terms are recomputed. This makes the algorithm adapt to variations in foreground and background. The overview of our proposed method is illustrated in Fig. 3 and Algorithm 1. Datasets and Performance Measures To conduct a comprehensive evaluation of the proposed binarization method, we use four scene text, a born-digital text, a video text and two handwritten image datasets. These are summarized in Table 1. In this section, we briefly describe the datasets and their available annotations. ICDAR cropped word datasets. ICDAR 2003 and ICDAR 2011 robust reading datasets were originally introduced for tasks like text localization, cropped word recognition, and scene character recognition. We use the cropped words from these datasets for evaluating binarization performance. The test sets of these two datasets contain 1110 and 1189 word images respectively [START_REF] Sosa | ICDAR 2003 robust reading competitions[END_REF][START_REF] Shahab | ICDAR 2011 robust reading competition challenge 2: Reading text in scene images[END_REF][START_REF]ICDAR 2003 dataset[END_REF][START_REF]ICDAR 2011 dataset[END_REF]. 
Datasets and Performance Measures
To conduct a comprehensive evaluation of the proposed binarization method, we use four scene text, a born-digital text, a video text and two handwritten image datasets. These are summarized in Table 1. In this section, we briefly describe the datasets and their available annotations.
ICDAR cropped word datasets. ICDAR 2003 and ICDAR 2011 robust reading datasets were originally introduced for tasks like text localization, cropped word recognition, and scene character recognition. We use the cropped words from these datasets for evaluating binarization performance. The test sets of these two datasets contain 1110 and 1189 word images respectively [START_REF] Sosa | ICDAR 2003 robust reading competitions[END_REF][START_REF] Shahab | ICDAR 2011 robust reading competition challenge 2: Reading text in scene images[END_REF][START_REF]ICDAR 2003 dataset[END_REF][START_REF]ICDAR 2011 dataset[END_REF]. Pixel-level annotations for both these datasets are provided by Kumar et al. [START_REF] Kumar | Benchmarking recognition results on camera captured word image data sets[END_REF]. Note that pixel-level annotations are available only for 716 images of the ICDAR 2011 dataset. We show pixel-level and atom-level results for only these annotated images for this dataset, and refer to this subset as ICDAR 2011-S. However, we show recognition results on all the 1189 images of ICDAR 2011. The ICDAR 2003 dataset also contains a training set of 1157 word images. Pixel-level annotations for these images are provided by [START_REF] Milyaev | Image binarization for end-to-end text understanding in natural images[END_REF]. We use 578 word images from this set, chosen randomly, to validate our choice of parameters (see Section 7.1). We refer to this subset as our validation set for all our experiments.
ICDAR 2013 full scene image dataset. It is composed of outdoor and indoor scene images containing text. There are 233 images in all, with their corresponding ground truth pixel-level text annotations [START_REF] Karatzas | ICDAR 2013 robust reading competition[END_REF].
ICDAR born-digital image dataset (BDI) 2011. Images are often used in emails or websites to embed textual information. These images are known as born-digital text images. As noted in the ICDAR 2011 competitions [START_REF] Karatzas | ICDAR 2011 robust reading competitionchallenge 1: Reading text in born-digital images (web and email)[END_REF], born-digital images (i) are inherently low-resolution, and (ii) often suffer from compression artefacts and severe anti-aliasing. Thus, a method designed for scene text images may not work for these. Considering this, a dataset known as ICDAR born-digital image (BDI) was introduced as part of the ICDAR 2011 competitions. It contains 916 word images, and their corresponding pixel-level annotations provided by Kumar et al. [START_REF] Kumar | Benchmarking recognition results on camera captured word image data sets[END_REF].
Street view text. The street view text (SVT) dataset contains images harvested from Google Street View. As noted in [START_REF] Wang | Word spotting in the wild[END_REF], most of the images come from business signage and exhibit a high degree of variability in appearance and resolution. We show binarization results on the cropped words of SVT-word, which contains 647 word images, and evaluate it with pixel-level annotations available publicly [START_REF] Kumar | Benchmarking recognition results on camera captured word image data sets[END_REF].
Video script identification dataset (CVSI). The CVSI dataset is composed of images from news videos of various Indian languages. It contains 6000 text images from ten scripts, namely English, Hindi, Bengali, Oriya, Gujarati, Punjabi, Kannada, Tamil, Telugu and Arabic, commonly used in India. This dataset was originally introduced for script identification [START_REF]ICDAR 2015 Competition on Video Script Identification[END_REF], and does not include pixel-level annotations. We use it solely for qualitative evaluation of binarization methods.
H-DIBCO 2012/2014. Although our binarization scheme is designed for scene text images, it can also be applied to handwritten images. To demonstrate this we test our method on the H-DIBCO 2012 [START_REF] Pratikakis | ICFHR 2012 competition on handwritten document image binarization[END_REF] and 2014 [START_REF] Ntirogiannis | ICFHR2014 competition on handwritten document image binarization (H-DIBCO 2014)[END_REF] datasets. They contain 14 and 10 degraded handwritten images respectively, with their corresponding ground truth pixel-level annotations.
Performance Measures
Although binarization is a highly researched problem, the task of evaluating the performance of proposed solutions has received less attention [START_REF] Clavelli | A framework for the assessment of text extraction algorithms on complex colour images[END_REF]. Due to the lack of well-defined performance measures or ground truth, some of the previous works perform only a qualitative evaluation [START_REF] Lopresti | Locating and recognizing text in WWW images[END_REF][START_REF] Karatzas | Colour text segmentation in web images based on human perception[END_REF]. This subjective evaluation provides only a partial view of performance. A few others measure binarization accuracy in terms of OCR performance [START_REF] Kumar | NESP: Nonlinear enhancement and selection of plane for optimal segmentation and recognition of scene word images[END_REF]. While improving text recognition performance can be considered as an end goal of binarization, relying on OCR systems, which depend on many factors (e.g., character classification, statistical language models) and not just the quality of text binarization, is not ideal. Thus, OCR-level evaluation can only be considered as an indirect performance measure for rating binarization methods [START_REF] Clavelli | A framework for the assessment of text extraction algorithms on complex colour images[END_REF].
A well-established practice in document image binarization competitions at ICDAR is to evaluate binarization at the pixel level [START_REF] Pratikakis | ICDAR 2013 document image binarization contest (DIBCO 2013)[END_REF]. This evaluation is more precise than the previous two measures, but has a few drawbacks: (i) pixel-level ground truth for large scale datasets is difficult to acquire, (ii) defining pixel-accurate ground truth can be subjective due to aliasing and blur, and (iii) a small error in ground truth can alter the ranking of binarization performance significantly, as studied in [START_REF] Smith | An analysis of binarization ground truthing[END_REF]. To address these issues, Clavelli et al. [START_REF] Clavelli | A framework for the assessment of text extraction algorithms on complex colour images[END_REF] proposed a measure for text binarization based on an atom-level assessment. An atom is defined as the minimum unit of text segmentation which can be recognized on its own. This performance measure does not require pixel-accurate ground truth, and measures various characteristics of binarization methods, such as producing broken text or merging characters. In order to provide a comprehensive analysis, we evaluate binarization methods on these three measures, i.e., pixel-level, atom-level, and recognition (OCR) accuracy. (Source code for all the performance measures used in this work is available on our project website [24].)
Pixel-level evaluation. Given a ground truth image annotated at the pixel level and the result of a binarization method, each pixel in the output image is classified as one of the following: (i) true positive if it is a text pixel in both the output and the ground truth image, (ii) false positive if it is a text pixel in the output image but a background pixel in the ground truth, (iii) false negative if it is a background pixel in the output image but a text pixel in the ground truth, or (iv) true negative if it is a background pixel in both the output and the ground truth images. With these in hand we compute precision, recall and f-score for every image, and then report mean values of these measures over all the images in the dataset to compare binarization methods.
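A minimal sketch of this pixel-level measure, assuming text is the positive class and that the output and the ground truth are given as boolean masks:

```python
import numpy as np

def pixel_level_scores(output, gt):
    """Pixel-level precision, recall and f-score for one image. `output` and
    `gt` are boolean masks with True marking text (the positive class)."""
    tp = np.logical_and(output, gt).sum()
    fp = np.logical_and(output, ~gt).sum()   # text in output, background in GT
    fn = np.logical_and(~output, gt).sum()   # background in output, text in GT
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
    return precision, recall, f
```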
Atom-level evaluation. Each connected component in the binary output image is classified as one of the six categories [START_REF] Clavelli | A framework for the assessment of text extraction algorithms on complex colour images[END_REF], using the following two criteria: (i) the connected component and the skeleton of the ground truth have at least θ_min pixels in common (a skeleton, also known as a morphological skeleton, is a medial axis representation of a binary image computed with morphological operators [START_REF] Gonzalez | Digital Image Processing[END_REF]); (ii) if the connected component comprises pixels that do not overlap with the text area in the ground truth, their number should not exceed θ_max. The threshold θ_min is chosen as 90% of the total pixels in the skeleton, and the threshold θ_max is either half of the maximum thickness of connected components in the image or five, whichever is lower, as suggested by Clavelli et al. [START_REF] Clavelli | A framework for the assessment of text extraction algorithms on complex colour images[END_REF]. Each connected component in the output image is classified into one of the following categories.
- whole (w): the connected component overlaps with one skeleton of the ground truth, and both criteria are satisfied.
- background (b): the connected component does not overlap with any of the skeletons of the ground truth.
- fraction (f): the connected component overlaps with one skeleton of the ground truth, and only criterion (ii) is satisfied.
- multiple (m): the connected component overlaps with many skeletons of the ground truth, and only criterion (i) is satisfied.
- fraction and multiple (fm): the connected component overlaps with many skeletons of the ground truth, and only criterion (ii) is satisfied.
- mixed (mi): the connected component overlaps with many skeletons of the ground truth, and neither criterion (i) nor criterion (ii) is satisfied.
The number of connected components in the above categories is normalized by the number of ground truth connected components for every image to obtain scores (denoted by w, b, f, m, fm, mi). Then the mean values of these scores over the entire dataset can be used to compare binarization methods. Higher values (maximum = 1) for w, and lower values (minimum = 0) for all the other categories, are desired. Further, to represent atom-level performance with a single measure, we compute:
atom-score $= \frac{1}{\frac{1}{w} + b + f + m + fm + mi}$.  (15)
The atom-score is computed for each image, and the mean over all the images in the dataset is reported. The desired mean atom-score for a binarization method is 1, denoting an ideal binarization output.
OCR-level evaluation. We use two well-known off-the-shelf OCRs: Tesseract [START_REF]Tesseract OCR[END_REF] and ABBYY FineReader 8.0 [60]. Tesseract is an open source OCR whereas ABBYY FineReader 8.0 is a commercial OCR product. We report word recognition accuracy, which is defined as the number of correctly recognized words normalized by the total number of words in the dataset. Following the ICDAR competition protocols [START_REF] Shahab | ICDAR2011 robust reading competition challenge 2: Reading text in scene images[END_REF], we do not perform any edit distance based correction with lexicons, and report case-sensitive word recognition accuracy.
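A small sketch of the atom-score of Eq. (15) as reconstructed above, assuming the per-image category fractions have already been computed:

```python
def atom_score(w, b, f, m, fm, mi):
    """Single-image atom-score as in Eq. (15); the arguments are the per-image
    category fractions, already normalized by the number of GT components."""
    if w <= 0:
        return 0.0                 # no whole components recovered
    return 1.0 / (1.0 / w + b + f + m + fm + mi)

# An ideal output (every GT component recovered exactly once, nothing else):
# w = 1 and all other fractions 0, which gives an atom-score of 1.
assert abs(atom_score(1.0, 0, 0, 0, 0, 0) - 1.0) < 1e-12
```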
Experimental Analysis
Given a color or grayscale image containing text, our goal is to binarize it such that the pixels corresponding to text and non-text are assigned labels 0 and 1 respectively. In this section, we perform a comprehensive evaluation of the proposed binarization scheme on the datasets presented in Section 6. We compare our method with classical as well as modern top-performing text binarization approaches with all the performance measures defined in Section 6.1.
Implementation details
We use publicly available implementations of several binarization techniques for comparison. The global thresholding methods Otsu [START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF] and Kittler [START_REF] Kittler | Threshold selection based on a simple image statistic[END_REF] are parameter-independent. For the local thresholding methods Niblack [START_REF] Niblack | An introduction to digital image processing[END_REF] and Sauvola [START_REF] Sauvola | Adaptive document image binarization[END_REF], we choose the parameters by cross-validating on the ICDAR validation set. For more recent methods like [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF][START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF][START_REF] Kasar | Font and background color independent text binarization[END_REF][START_REF] Feild | Scene text recognition with bilateral regression[END_REF] we use the original implementations provided by the authors. For the methods proposed in [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF][START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF][START_REF] Feild | Scene text recognition with bilateral regression[END_REF], we use the parameter settings suggested by the corresponding authors. The method in [START_REF] Kasar | Font and background color independent text binarization[END_REF] is originally designed for full scene images, and uses heuristics on candidate character bounding boxes. We modify these heuristics, i.e., the maximum allowed height for a character candidate bounding box is changed from 80% of image height to 99% of image height, thereby adapting the method for cropped word images.
Polarity check. Most of the binarization methods in the literature produce white text on black background for images with light text on dark background. Since the ground truth typically contains black text on a white background, we perform an automatic polarity check before evaluating the method as follows. If the average gray pixel value of the middle part of a given word image is greater than the average gray pixel value of the boundary, then we assign reverse polarity, i.e., light text on dark background, to it, and invert the corresponding output image before comparing it with the ground truth. Note that our method produces black text on white background irrespective of the polarity of the word image, and hence does not require this inversion. It should be noted that handwritten images are always assumed to be dark text on light background. Further, we delay the polarity check till the end for full scene images, and obtain the binary images corresponding to both polarities, i.e., the original image as well as the image where 180° is subtracted from the original gradient orientations. We compute the standard deviation of stroke width in both these binary images, and choose the one with the lower standard deviation as the final binary image.
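The polarity check described above can be sketched as follows; the width of the central strip and the thickness of the boundary region are not specified in the text, so the values used here are assumptions:

```python
import numpy as np

def detect_polarity(gray, strip_frac=0.33, border=3):
    """Return 1 for light text on a dark background, 0 otherwise, by comparing
    the mean gray value of a central vertical strip with that of the boundary."""
    h, w = gray.shape
    half = max(1, int(w * strip_frac / 2))
    center = gray[:, w // 2 - half : w // 2 + half]
    boundary = np.concatenate([gray[:border, :].ravel(), gray[-border:, :].ravel(),
                               gray[:, :border].ravel(), gray[:, -border:].ravel()])
    return 1 if center.mean() > boundary.mean() else 0
```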
We now provide empirical evidence for the choice of parameters, such as the number of iterations, the GMM initialization method, the number of GMMs and the weights $\lambda_1$ and $\lambda_2$ in our method.
Number of iterations. We refine the initial strokes and color cues obtained by our unsupervised automatic initialization scheme. This is performed using iterative graph cuts. To illustrate the refinement of these two cues over iterations, we studied the pixel-level f-score on the validation set. This result is shown in Fig. 5. We observe that the pixel-level f-score improves with iterations till the seventh, and then remains unchanged. We also show qualitative results over iterations of graph cut in Fig. 6. We note that the iterative refinement using graph cut helps in improving the pixel-level performance. Based on this study, we fix the number of iterations to 8 in all our experiments.
GMM initialization. We initialize GMMs by character-like strokes (see Section 5). However, these GMMs can also be initialized using any binarization method. To study its impact, we performed the following experiment. We initialize foreground and background GMMs from three of the best-performing binarization methods in the literature: Otsu [START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF], Wolf [START_REF] Wolf | Binarization of low quality text using a Markov random field model[END_REF] and Howe [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF], and study the word recognition performance on the validation set. We also studied the effect of user-assisted initialization of foreground and background GMMs. We refer to this as manual initialization (MI). In Fig. 7 we show the word recognition performance of Tesseract on the validation set in two settings: (i) when the above binarization techniques are used, and the binary images are fed to the OCR (lighter gray bars), and (ii) when these methods are used for GMM initialization, followed by our iterative graph cut based scheme for binarization, and then the output images are fed to the OCR (darker red bars). We observe that our binarization method improves the word recognition performance irrespective of the initialization used. This is primarily due to the fact that our method iteratively refines the initial seeds by using color and stroke cues, which improves the binarization, and subsequently the recognition performance. Further, the variant of our method using manual initialization achieves a high recognition performance on this dataset. This shows that the proposed technique can also prove handy for user-assisted binarization as in [START_REF] Lu | Directed assistance for ink-bleed reduction in old documents[END_REF][START_REF] Lu | Interactive degraded document binarization: An example (and case) for interactive computer vision[END_REF].
Other parameters. We estimate the parameters of our method, i.e., the number of color and stroke GMMs (c), and the relative weights between the color and edginess terms ($\lambda_1$ and $\lambda_2$), using grid search on the validation set, and fix them for all our experiments. We vary the number of color and stroke GMMs from 5 to 20 in steps of 5, and compute the validation accuracy (pixel-level f-score). We observe only a small change (± 0.02) in f-score for different numbers of color and stroke GMMs. We fix the number of color and stroke GMMs to 5 in all our experiments. We use a similar strategy for choosing $\lambda_1$ and $\lambda_2$, and vary these two parameters from 5 to 50 in steps of 5. We compute the pixel-level f-score on the validation set for all these pairs, and choose the one with the best performance, which results in 25 for both $\lambda_1$ and $\lambda_2$. Our method is implemented in C++, and it takes about 0.8s on a cropped word image of size 60 × 180.
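As an illustration of the GMM initialization step studied above, the sketch below fits the two color GMMs from seed masks and derives unary costs from them. scikit-learn is used purely for convenience; the paper's implementation is in C++, and the sketch uses the mixture marginal rather than the per-component assignment of Eq. (11).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def color_unaries(img, fg_seeds, bg_seeds, n_components=5):
    """Fit foreground/background color GMMs from (non-empty) seed masks and
    return per-pixel unary costs as negative log-likelihoods. `img` is an
    HxWx3 float array; the mixture marginal is used here for simplicity."""
    pixels = img.reshape(-1, 3)
    fg_gmm = GaussianMixture(n_components=n_components).fit(pixels[fg_seeds.ravel()])
    bg_gmm = GaussianMixture(n_components=n_components).fit(pixels[bg_seeds.ravel()])
    cost_fg = -fg_gmm.score_samples(pixels).reshape(img.shape[:2])
    cost_bg = -bg_gmm.score_samples(pixels).reshape(img.shape[:2])
    return cost_fg, cost_bg   # used as terminal capacities in the s-t graph
```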
Here "Ours (color)", "Ours (stroke)" and "Ours (color+stroke)" refer to the proposed iterative graph cut, where only the color, only the stroke, and the color+stroke terms are used respectively. "Ours (MI)" refers to our method with manual initialization of GMMs, and serves as an upper bound. 3. Atom-level evaluation. We show the fractions of connected components classified as whole, background, mixed, fraction, and multiple categories as well as the atom-score. Here "Ours (color)", "Ours (stroke)" and "Ours (color+stroke)" refer to the proposed iterative graph cut, where only the color, only the stroke, and the color+stroke terms are used respectively. "Ours (MI)" refers to our method with manual initialization of GMMs, and serves as an upper bound. Method Method Quantitative Evaluation Pixel-level evaluation. We show these results in Table 2 as mean precision, recall and f-score on three datasets. Values of these performance measures vary from 0 to 1, and a high value is desired for a good binarization method. We observe that our approach with color only and color+stroke based terms achieves reasonably high f-score on all the datasets. The classical method [START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF] performs better at pixel-level than many other works, and is comparable to ours on the ICDAR 2003 dataset, and poorer on the other two datasets. Atom-level evaluation. Recall that in this evaluation each connected component in the output image is classified as one of the following categories: whole, background, fraction, multiple, mixed or fractionmultiple (see Section 6.1). Evaluation according to these categories is shown in Table 3. We do not show fractionmultiple scores as they are insignificant for all the binarization techniques. Further, we also evaluate binarization methods based on the atom-score. An ideal binarization method should achieve 1 for the atom-score and the whole category, whereas 0 for all other categories. Note that these measures are considered more reliable than pixel-level measures [START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF][START_REF] Clavelli | A framework for the assessment of text extraction algorithms on complex colour images[END_REF]. We observe that our method with color only and color+stroke based terms achieve the best atom-scores. On ICDAR 2003 and ICDAR 2011 datasets, our method is ranked first based on the atom-score, and improves by 3% and 4% respectively with respect to the next best method [START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF]. On SVT our method is ranked second. Other recent methods [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF][START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF][START_REF] Feild | Scene text recognition with bilateral regression[END_REF] perform well on a few selected images, but fall short in comparison, when tested on multiple datasets. OCR-level evaluation. OCR results on the IC-DAR 2003 and 2011 datasets are summarized in Table 4. We observe that our method improves the performance of OCRs by more than 10% on both these datasets. 
For example, on the ICDAR 2003 dataset, Tesseract [START_REF]Tesseract OCR[END_REF] achieves a word recognition accuracy of 47.93% without any binarization, whereas when our binarization is applied to these images prior to recognition, the accuracy improves to 56.14%. Our binarization method improves the OCR performance over Otsu by about 5%. Note that all these results are based on case-sensitive evaluation, and we do not perform any edit distance based corrections. It should also be noted that the aim of this work is to obtain clean binary images, and to evaluate binarization methods on this performance measure. Hence, we dropped recent word recognition methods which bypass binarization [START_REF] Mishra | Top-down and bottom-up cues for scene text recognition[END_REF][START_REF] Jaderberg | Deep features for text spotting[END_REF][START_REF] Novikova | Large-lexicon attribute-consistent text recognition in natural images[END_REF][START_REF] Shi | Scene text recognition using part-based treestructured character detection[END_REF] from this comparison.
Qualitative Evaluation
We compare our proposed approach with other binarization methods in Fig. 8. Sample images with uneven lighting, hardly distinguishable foreground/background colors, and noisy foreground colors are shown in this figure. We observe that our approach produces clearly readable binary images with less noise compared to [START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF][START_REF] Feild | Scene text recognition with bilateral regression[END_REF]. The global thresholding method [START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF] performs reasonably well on some examples, but fails unpredictably in cases of high variations in text intensities (e.g., rows 2-3, 7-10). Our method is successful even in such cases and produces clean binary images.
Results on other types of text images
We also evaluate our binarization method on other types of text images, such as text in videos, born-digital text, handwritten text, and full scene images containing text. For text in videos, we qualitatively evaluate binarization methods on the CVSI dataset [START_REF]ICDAR 2015 Competition on Video Script Identification[END_REF]. A selection of our results on this dataset is shown in Fig. 9. Despite the low resolution, the performance of our method is encouraging on this dataset. Since our method uses generic text features like color and stroke, which are independent of language, it generalizes to multiple languages as shown in the figure. We report results on the BDI dataset in Table 5. Our method performs reasonably well, but is inferior to [START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF] as it suffers from oversmoothing due in part to the extremely low resolution of images in this dataset. This limitation is discussed further in Section 8. We evaluate on handwritten images of H-DIBCO 2012 [START_REF] Pratikakis | ICFHR 2012 competition on handwritten document image binarization[END_REF] and H-DIBCO 2014 [START_REF] Ntirogiannis | ICFHR2014 competition on handwritten document image binarization (H-DIBCO 2014)[END_REF], and compare the results with other methods for this task. Quantitative results on these datasets are summarized in Table 8.
We observe that our proposed method outperforms modern and classical binarization methods, and is comparable to the H-DIBCO 2012 competition winner [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF]. On H-DIBCO 2014, our method is marginally inferior to the winning method. Moreover, we achieve a noticeable improvement by adding the stroke-based term on these datasets, highlighting its importance for handwritten images. We show qualitative results for a couple of examples in Fig. 10. We observe that despite color bleeding and high variations in pixel intensities and strokes, our method produces a clean binary result. The significance of the stroke-based term is also highlighted for these examples.
Fig. 9. Results on the CVSI dataset. We show results on images (left to right) with Devanagari, Telugu, Oriya and Gujarati scripts. Since our method does not use any language-specific information, it is applicable to this dataset, containing English, Arabic, and eight Indian scripts.
Binarization of natural scene images containing text is a challenging problem. It was considered as one of the challenges in the ICDAR 2013 competitions [START_REF] Karatzas | ICDAR 2013 robust reading competition[END_REF]. Our original work [START_REF] Mishra | An MRF model for binarization of natural scene text[END_REF] was designed for cropped word images. We now modify our automatic seeding strategy (cf. Section 5) to suit full scene images as well. We evaluate our method on ICDAR 2013, and compare it with the top-performing methods from the competition for the text segmentation task, as shown in Table 6. We compare with the winner method, as well as the first three runner-ups of the competition. Our method with color and stroke terms performs well on this dataset, and stands third in this competition, being marginally inferior to the winner, and comparable to the first runner-up method.
Comparison with other energy formulations
Energy functions for the binarization task can also be formulated with connected components (CC) or maximally stable extremal regions (MSER) as nodes in the corresponding graph.
Connected component labeling with CRF.
We first obtain connected components by thresholding the scene text image using Niblack binarization [START_REF] Niblack | An introduction to digital image processing[END_REF] with the parameter setting in [START_REF] Pan | Text localization in natural scene images based on conditional random field[END_REF]. We then learn an SVM on the ICDAR 2003 training set to classify each component as a text or non-text region. Each connected component is represented by its normalized width, normalized height, aspect ratio, shape difference, occupy ratio, compactness, contour gradient and average run-length, as in [START_REF] Pan | Text localization in natural scene images based on conditional random field[END_REF]. We then define an energy function composed of a unary term for every CC (computed from the SVM text/non-text classification score), and a pairwise term between two neighboring CCs (a truncated sum of squares of the following features: centroid distance, color difference, scale ratio and shape difference). Once the energy function is formulated, we construct a graph representing it, and perform graph cut to label the CCs.
MSER labeling with CRF. We replace the first step of the method described for connected components with MSER, thus defining a graph on MSER nodes, and pose the task as an MSER labeling problem.
Comparison of our method (color+stroke) with these two approaches is shown in Table 7 on the ICDAR 2003 test set. We observe that our method outperforms these two variants, whose performance relies extensively on the initialization used. Moreover, extending these two approaches to diverse datasets, such as handwritten text and text in videos, is not trivial, due to their demand for large pixel-level annotated training sets. On the contrary, our method assigns binary labels to pixels in an unsupervised manner, without the need for such expensive annotation.
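A minimal sketch of the unary and pairwise costs of this connected-component baseline; the mapping from SVM score to cost and the truncation threshold are not given in the text, so both are assumptions here:

```python
import numpy as np

def cc_unary_costs(svm_score):
    """Unary costs for a component from its SVM text/non-text score. A sigmoid
    plus negative log is used purely as one plausible choice of mapping."""
    p_text = 1.0 / (1.0 + np.exp(-svm_score))
    return -np.log(p_text + 1e-9), -np.log(1.0 - p_text + 1e-9)

def cc_pairwise_cost(pair_features, trunc=1.0):
    """Truncated sum of squares over the pairwise features listed above
    (centroid distance, color difference, scale ratio, shape difference),
    assumed to be pre-normalized; the truncation threshold is an assumption."""
    d = np.asarray(pair_features, dtype=float)
    return min(float(np.sum(d ** 2)), trunc)
```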
Summary
In this work we proposed a novel binarization technique, and evaluated it on state-of-the-art datasets. Many existing methods have restricted their focus to small datasets containing only a few images [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF][START_REF] Mishra | An MRF model for binarization of natural scene text[END_REF][START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF][START_REF] Feild | Scene text recognition with bilateral regression[END_REF]. They show impressive performance on them, but this does not necessarily generalize to the large and varied datasets we consider in this paper. Our method performs consistently well on all the datasets, as we do not make assumptions specific to images. We compare recognition results on public ICDAR benchmarks, where the utility of our work is even more evident. The proposed method integrated with an open source OCR [START_REF]Tesseract OCR[END_REF] outperforms other binarization techniques (see Table 4). Additionally, on a dataset of video text images of multiple scripts, our results are promising, and on two benchmark datasets of handwritten images we achieve results comparable to the state of the art [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF][START_REF] Ntirogiannis | ICFHR2014 competition on handwritten document image binarization (H-DIBCO 2014)[END_REF].
Comparison with other energy minimization based methods. Some other binarization techniques in the literature are based on an energy minimization framework [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF][START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF][START_REF] Wolf | Binarization of low quality text using a Markov random field model[END_REF][START_REF] Kuk | Feature based binarization of document images degraded by uneven light condition[END_REF][START_REF] Peng | Markov random field based binarization for hand-held devices captured document images[END_REF]. Our method falls in this category, but differs significantly in the energy formulation and the minimization technique used. We compare our method empirically with [START_REF] Howe | Document binarization with automatic parameter tuning[END_REF][START_REF] Milyaev | Fast and accurate scene text understanding with image binarization and off-the-shelf OCR[END_REF][START_REF] Wolf | Binarization of low quality text using a Markov random field model[END_REF] in Tables 2, 3 and 4. Two other energy minimization based methods [START_REF] Kuk | Feature based binarization of document images degraded by uneven light condition[END_REF][START_REF] Peng | Markov random field based binarization for hand-held devices captured document images[END_REF] were dropped from the experimental comparison as their implementation was not available when this paper was written. Our method outperforms these approaches. The robustness of our method can be attributed to the proposed iterative graph cut based algorithm, which minimizes an energy function composed of color and stroke based terms. There have been attempts to solve the natural image segmentation problem using unsupervised iterative graph cut based methods. Jahangiri and Heesch [START_REF] Jahangiri | Modified grabcut for unsupervised object segmentation[END_REF] have proposed a method for high contrast natural image segmentation using active contours for initializing the foreground region. In [START_REF] Khattab | Multi-label automatic grabcut for image segmentation[END_REF][START_REF] Khattab | Color image segmentation based on different color space models using automatic grabcut[END_REF] the authors use clustering techniques to initialize foreground regions. Our method falls in this category of unsupervised image segmentation, but differs significantly from these approaches in the initialization scheme, and uses text-specific information, i.e., character-like strokes, to initialize the foreground regions.
Further improvements. Oversmoothing is one of the limitations of our method, and is pronounced in the case of low resolution images where inter-character gaps and holes within characters like 'o' and 'a' are only a few pixels, i.e., three to four pixels. Such limitations can be handled with techniques like cooperative graph cuts [START_REF] Jegelka | Submodularity beyond submodular energies: coupling edges in graph cuts[END_REF]. Further, a noisy automatic initialization may be hard to recover from. Improved initialization or image enhancement techniques can be investigated in future work.
Fig. 1. Sample images we consider in this work. Due to large variations in foreground and background colors, most of the popular binarization techniques in the literature tend to fail on such images (as shown in Section 7).
Fig. 2. (a) A scene text image from the ICDAR 2013 dataset [START_REF] Karatzas | ICDAR 2013 robust reading competition[END_REF], and (b) part of a handwritten document image taken from H-DIBCO 2012 [START_REF] Pratikakis | ICFHR 2012 competition on handwritten document image binarization[END_REF]. We note that stroke width within text is not always constant, and varies smoothly.
Fig. 4. (a) Input image. (b) Character-like strokes obtained using the method presented in Section 5. Darker regions in (b) represent parts with lower stroke width.
Fig. 5. The pixel-level f-score on the subset of ICDAR 2003 training images, used as validation set, at each iteration of graph cut.
Fig. 6. Illustration of binarization results with different numbers of iterations of graph cut. Here, we show the original image and the results with 1, 3, 5 and 8 iterations (from left to right).
Fig. 7. Impact of GMM initialization techniques. We show the word recognition accuracy of Tesseract on the ICDAR 2003 validation set. Here, lighter (gray) bars show recognition results after applying binarization techniques [5, 17, 21], and darker (red) bars show recognition results of the proposed iterative graph cut based method, with the corresponding binarization techniques used as initialization for GMMs. We also show recognition results when initialization is performed from character-like strokes (char-like strokes) and manually (MI).
Fig. 8. Comparison of binarization results. From left to right: input image, Otsu [17], Wolf and Doermann [21], Kasar et al. [18], Milyaev et al. [9], bilateral regression [37], Howe [5] and our method (Ours), which uses color and stroke cues. Other classical techniques [16][19][20] show poor performance on these images.
Fig. 10. Results on sample images from the H-DIBCO 2012 dataset. (a) Input image, and results of our binarization technique: (b) with only the color based term, (c) with color and stroke based terms. We observe that the color+stroke based term shows significant improvement over the color only term.
Algorithm 1. Overall procedure of the proposed binarization scheme.
procedure
  Input: Color or grayscale image
  Output: Binary image
  Initialize:
  1. Number of GMM components: 2c for color and 2c for stroke GMMs.
  2. maxIT: maximum number of iterations.
  3. Seeds and GMMs (Section 5).
  4. iteration ← 1
  CRF optimization:
  while iteration ≤ maxIT do
    5. Learn color and stroke GMMs from seeds (Section 5)
    6. Compute color (Ec) and stroke (Es) based terms (Sections 3 & 4)
    7. Construct s-t graph representing the energy (Sections 3 & 4)
    8. Perform s-t mincut
    9. Refine seeds (Section 5)
    10. iteration ← iteration + 1
  end while
end procedure

Table 1. Datasets used in our experiments.
Dataset  No. of images  Type  Available annotations
ICDAR 2003 word [45]  1110  Scene text  Pixel, text
ICDAR 2011 word [46]  1189  Scene text  Pixel, text
ICDAR 2013 scene text [22]  233  Scene text  Pixel, text
ICDAR BDI 2011 [47]  916  Born-digital  Pixel, text
Street view text [48]  647  Scene text  Pixel, text
CVSI 2015 [49]  6000  Video text  -
H-DIBCO 2012 [11]  14  Handwritten  Pixel
H-DIBCO 2014 [12]  10  Handwritten  Pixel

Table 4. Word recognition accuracy (in %): open vocabulary setting. Results shown here are case sensitive, and without minimum edit distance based correction. *No binarization implies that color images are used directly to obtain the corresponding OCR result.
Method  ICDAR 2003 (Tesseract / ABBYY)  ICDAR 2011 (Tesseract / ABBYY)
No binarization*  47.93 / 46.51  47.94 / 46.00
Otsu [17]  51.71 / 49.10  55.92 / 53.99
Kittler [16]  44.55 / 43.25  48.84 / 48.61
Sauvola [20]  19.73 / 17.60  26.24 / 26.32
Niblack [19]  15.59 / 14.45  22.20 / 21.27
Kasar [18]  33.78 / 32.75  12.95 / 12.11
Wolf [21]  46.52 / 44.90  50.04 / 48.78
Milyaev [9]  22.70 / 21.87  22.07 / 22.54
Howe [5]  42.88 / 41.50  43.99 / 41.04
Bilateral [37]  50.99 / 47.35  45.16 / 43.06
Ours (color)  52.25 / 49.81  59.97 / 55.00
Ours (stroke)  47.93 / 46.00  55.75 / 54.60
Ours (color+stroke)  56.14 / 52.97  62.57 / 58.11

Table 5. Results on the BDI dataset.
Method  precision  recall  f-score
Otsu [17]  0.77  0.92  0.83
Kittler [16]  0.57  0.88  0.63
Sauvola [20]  0.54  0.94  0.75
Niblack [19]  0.59  0.94  0.71
Kasar [18]  0.55  0.65  0.58
Milyaev [9]  0.48  0.68  0.61
Howe [5]  0.43  0.93  0.52
Bilateral [37]  0.75  0.86  0.79
Ours (color)  0.67  0.88  0.74
Ours (stroke)  0.65  0.80  0.72
Ours (color + stroke)  0.70  0.90  0.80

Table 6. Results on the ICDAR 2013 text segmentation challenge. We compare our method with the top-performing methods in the competition [START_REF] Karatzas | ICDAR 2013 robust reading competition[END_REF].
Method  precision  recall  f-score
The winner (USTB-FuSTAR)  87.21  78.84  82.81
1st runner-up (I2R-NUS)  87.95  73.88  80.31
2nd runner-up (I2R-NUS-FAR)  82.56  73.67  77.86
3rd runner-up (OCTYMIST)  81.82  70.42  75.69
Ours (color)  77.65  71.82  74.89
Ours (stroke)  72.33  68.23  70.45
Ours (color + stroke)  81.00  77.72  79.89

Table 7. Comparison with variants, where connected components (CC) and MSER are labeled directly with CRF, instead of labeling our binary segments.
Method  precision  recall  f-score
CC labeling  0.73  0.63  0.65
MSER labeling  0.76  0.82  0.79
Ours (color+stroke)  0.82  0.91  0.86

Table 8. Pixel-level f-score on handwritten images from H-DIBCO 2012 and H-DIBCO 2014.
Method  H-DIBCO 2012  H-DIBCO 2014
Otsu [17]  0.75  0.89
Kittler [16]  0.71  0.73
Sauvola [20]  0.14  0.18
Niblack [19]  0.19  0.25
Kasar [18]  0.74  0.78
Wolf [21]  0.78  0.82
Milyaev [9]  0.84  0.92
H-DIBCO 2012 winner [5]  0.89  0.96
H-DIBCO 2014 winner  -  0.97
Ours (Color)  0.84  0.91
Ours (Stroke)  0.78  0.85
Ours (Color+Stroke)  0.90  0.95

Other color spaces such as CMYK or HSV can also be used.
The stroke-based term is computed similarly with stroke width and intensity of each pixel generated from one of the 2c GMMs.
We thank the authors for providing the implementation of their methods.
Acknowledgements. This work was partially supported by the Indo-French project no. 5302-1, EVEREST, funded by CEFIPRA. Anand Mishra was supported by Microsoft Corporation and Microsoft Research India under the Microsoft Research India PhD fellowship award.
From India to the Black Sea: an overlooked trade route? (Slightly revised version with addenda)
Abstract. Some Hellenistic sources and Pliny the Elder briefly describe a trade route linking the Pontic area and India. Eastern commodities were carried by various middlemen using both land and river routes. The Caspian Sea was even crossed. The existence of this itinerary, which is documented by too few texts and very few archaeological remains, has been called into question by some scholars. On the basis of several literary sources so far overlooked if not missed, I argue that the "northern road" played a continuous role in the so-called Indo-Mediterranean trade, along with the better known Indian Ocean routes.
Was the Black Sea the ending point of a trade route from Central Asia? P. Lévêque's afterword to the proceedings of a conference held at Vani (Georgia) in 1999, entitled "La genèse de la route de la soie", left no doubt about his opinion 1; yet the road linking the Pontic area to Central Asia and India is tremendously elusive, being documented by a few classical texts, much debated, and scanty archaeological remains. Such poor evidence led some scholars to call into question the very existence of the "northern route". However, several so far neglected classical texts help support the opposite opinion, as this paper aims to show.
Written evidence for the India-Black Sea trading route
The existence of a trade route running from India to the Euxine is documented by a small set of Greek and Latin texts, repeatedly quoted by nearly all scholars addressing this issue. The present paper will be no exception, for the sake of clarity. In the course of the eleventh book of his Geography, Strabo describes the rivers flowing across Hyrcania, on the south-east coast of the Caspian Sea, drawing directly and indirectly on three authorities: Aristobulus, who accompanied Alexander on all campaigns and wrote a history of the Asian expedition; Eratosthenes, the well-known Alexandrian scientist; and Patrocles, a man appointed by Seleucus I as governor of Sogdiana and Bactriana about 285 B.C. Between 285 and 282, Patrocles was despatched as a commander of a fleet by Seleucus I and Antiochus I to reconnoitre the Caspian Sea 2. At that time Hyrcania was a Seleucid satrapy.
Aristobulus [FGrH 139 F20] says that the Oxos [Amu Darya] is the largest of the rivers he has seen in Asia except those in India. And he further says that it is navigable (both him and Eratosthenes [III B 67 Berger] taking this statement from Patrocles 3 [FGrH 712 F5]), and that large quantities of Indian wares are brought down on it to the Hyrcanian Sea [i. e., the Caspian Sea], and thence on that sea are transported to Albania 4 and brought down on the Kyros River [i. e., the Kura, or Mtkvari] and through the region that comes next after it to the Euxine [καὶ πολλὰ τῶν Ἰνδικῶν φορτίων κατάγειν εἰς τὴν Ὑρκανίαν θάλατταν, ἐντεῦθεν δ' εἰς τὴν Ἀλβανίαν περαιοῦσθαι καὶ διὰ τοῦ Κύρου καὶ τῶν ἑξῆς τόπων εἰς τὸν Εὔξεινον καταφέρεσθαι]. (Strabo, 11, 7, 3; transl. H. L. Jones 5)
1 Lévêque 313-314. 2 On Patrocles' explorations of the Caspian Sea, see Gisinger 2267-2270; Williams (commentary on F5). 3 Contrary to H. L. Jones' interpretation, Tarn (b) 489, convincingly states that the participle labôn (φησὶ δὲ καὶ εὔπλουν εἶναι καὶ οὗτος καὶ Ἐρατοσθένης παρὰ Πατροκλέους λαβών) agrees only with Eratosthenês: "Aristobulus is probably only cited here as an authority for the Oxus being euplous, for it is unlikely that he wrote late enough to use Patrocles". Yet there is no reason to suggest that this local traffic was unknown to Aristobulus. Also see Jacoby 514-515; Gisinger 2268; Bosworth 373; Radt 277. 4 Albania lay along the lower to middle course of the Kyros. It can be more or less equated with modern Azerbaïjan. According to Strabo, 11, 4, 2, the twelve mouths of the Kyros were shallow or silted. Appian, Roman History, 12, 103, describes them as "navigable".
If we suppose the existence of two sources, Aristobulus and Patrocles, their account of this trade circuit may derive from hearsay information and/or personal observation. In particular, it is likely that Patrocles noticed boats of a local type plying between each side of the Caspian Sea. This trade connection enabled "large quantities of Indian commodities" to reach Asia Minor and, probably, the Mediterranean world, prior to the later development of Indian Ocean sea routes. One can confidently number aromatics among these unnamed goods, especially pepper, which was imported to the Greek world for its healing properties as early as the late 5th century B.C.; a cinnamon flower has even been recovered from the 7th century B.C. rubbish dumps at the Heraion of Samos 6. Strabo's report contains no details on the first overland sections of this trade route, from (northwest) India to the Amu Darya / Oxos. Neither does he locate the harbour on the Caspian Sea. Some stages, however, can be guessed with reasonable certainty, thanks to the following excerpt:
The voyage from Amisos to Kolchis is thus toward equinoctial east (which is known by the winds, seasons, crops, and the sunrise itself), as also are the pass to the Caspian and the route from there to Baktra [ὡς δ' αὕτως καὶ ἡ ἐπὶ τὴν Κασπίαν ὑπέρβασις καὶ ἡ ἐφεξῆς ὁδὸς μέχρι Βάκτρων]. (Strabo, 2, 1, 11 = Eratosth. III A 11; transl. D. Roller; see also Strabo, 2, 1, 5)
Baktra (Balkh), located not far from the upper course of the Amu Darya, was a place of transit for Indian commodities, which were certainly carried along the so-called "vieille route de l'Inde", from Taxila (Takshaçila) to Baktra, via Bamiyan, Kapici, Pushkarâvatî and Udabhânda 7. As for Taxila, this city acted as a node in networks of circulation, being connected, for instance, with the Gangetic area 8. Much more problematic is the Oxos flowing into the Caspian Sea instead of the Aral Sea 9. As suggested by some scholars, the Oxos may have had a branch called Uzboy leading into the Caspian.
5 A slightly different version of the same account appears in Strabo's Prolegomena: "And further, the River Oxos, which divides Bactriana from Sogdiana, is so easily navigable, they say, that the Indian merchandise packed over the mountains to it is easily brought down to the Hyrcanian Sea, and thence, on the rivers, to the successive regions beyond as far as the Pontus [ὥστε τὸν Ἰνδικὸν φόρτον ὑπερκομισθέντα εἰς αὐτὸν ῥᾳδίως εἰς τὴν Ὑρκανίαν κατάγεσθαι καὶ τοὺς ἐφεξῆς τόπους μέχρι τοῦ Πόντου διὰ τῶν ποταμῶν]." (Strabo, 2, 1, 15; transl. H. L. Jones; also see Strabo, 2, 1, 3). 6 Amigues (b) 369-375. Indian aromatics could also be introduced via the Strait of Hormuz and Mesopotamia, as Amigues (b) 375 rightly points out. 7 Foucher 13-53, and particularly 47-53. A fragment of Ctesias' Indika (F 45 (6) in D. Lenfant's edition) may have some ties with this itinerary: "Ctesias describes a gemstone called pantarba, which, when it was thrown into the river (i. e.
the Indus), was retrieved clinging together gems and precious stones that belonged to a Bactrian dealer." (transl. Nichols; also see Nichols 23-25). Note that the participle ὑπερκομισθέντα (above, n. 5) may point to a passage over mountains. 8 Fillliozat 13; Sen xiii-xiv. 9 Jacoby 514-515; Tarn (a) 10-12; Tarn (b) 491; Callieri 539-540; Williams (commentary on F5a). The Amu Darya may have gone through several shifts in the historic times, by this has not been established. Tarn (b) 113 has imagined another more complicated scenario: "Patrocles sent to explore the Caspian mistook the mouth of the Atrek, seen from the sea, for that of the Oxus, and believing that the Oxus flowed into the Caspian, reported to Antiochus I that such a trade route could easily be made; in due course his report was turned into a statement that it existed." (also Gisinger 2268) This now dried up channel, which split off the Amu Darya south of the delta in former times, could be understood to be this Oxos pouring into the Caspian Sea 10 . In the face of these difficulties, however, E. H. Warmington prudently concludes "that after a journey down the river wares were carried by land to the Caspian and then across or round it." 11 With respect to the western and final section of the Black Sea-India route, Strabo lists several transhipment points: It [i. e., the Phasis, today's Rioni] is navigated as far as Sarapana [ἀναπλεῖται δὲ μέχρι Σαραπανῶν ἐρύματος] 12 , a fortress capable of admitting the population even of a city. From here people go by land to the Kyros in four days by a wagon-road [δι' ἁμαξιτοῦ]. On the Phasis is situated a city bearing the same name, an emporium of the Colchi, which is protected on one side by the river, on another by a lake, and on another by the sea. Thence people go to Amisos and Sinopê by sea ... (Strabo, 11, 2, 17; transl. H. L. Jones) These data are partially echoed by Pliny the Elder who, although showing himself less prolific than Strabo, seems to have benefited from more recent sources. He quotes indeed the first century polymath Varro, whose direct involvement in the third Mithridatic war (74-63 B.C.) as a legate is debated 13 . Varro further adds that exploration under the leadership of Pompey ascertained that a seven days' journey from India into the Bactrian country reaches the river Bactrus, a tributary of the Oxus, and that Indian merchandize can be conveyed from the Bactrus across the Caspian to the Cyrus and thence with not more Some scholars deny for no reason that Varro obtained fresh information and think that he paraphrased a geographical treatise, maybe that of Eratosthenes 16 . In fact, whereas Pliny's account recalls Strabo in pointing to Indian merchandise being imported to the Pontus, two discrepancies can be found. First Strabo's sources are not aware of the Bactrus river (Balkh-ab, Balḫāb), which nowadays no longer merges with the Amu Darya and empties into the ground 17 ; second, in the final section of the route, from the middle course of the Cyrus to Phasis, goods are said to be transported 10 See Callieri 540-541 (with references). Archaeological fieldwork has shown that the Uzboy valley hosted human settlements between the 6 th /5 th centuries B. C. and the 4 th century A. D. (see, however, Williams [commentary of F5a]). 11 Warmington 27. 12 Today's Shorapani (Barrington Atlas 88 B2). Sarapana is referred to in Strabo, 11, 3, 4 as a pass from Colchis to Iberia; also see Procopius, 2, 29, 18; 8, 13, 15; 8, 16, 17. For further details, see Furtwängler 267. 
Note that there used to be a more northern branch beginning at the Caspian Sea and ending at the Lake Meotis (Sea of Azov), which was under the sway of the Upper Aorsoi: "The upper Aorsoi (...) ruled over most of the Caspian coast; and consequently they could import on camels the Indian and Babylonian merchandise, receiving it in their turn from the Armenians and the Medes, and also, owing to their wealth, could wear gold ornaments." (Strabo, 11, 5, 8; transl. H. L. Jones). See See, e. g., Olshausen. 14 Bactrum is an emendation by Detlefsen (manuscripts give iacrum or iachrum); Tarn (b) 488 suggests emendating subvectos to subvectas, to make it agree with merces. 15 Compare with Solinus, 19, 4-6, basically a paraphrase of Pliny with slight variations (see Callieri 539). 16 On this issue, see André & Filliozat 11-12; Lordkipanidze (a) 116; Callieri 538-539. 17 See Tomaschek. by land (terreno itinere), while Strabo apparently points to a riverine traffic (ἀναπλεῖται ) between Phasis and Sarapana. Further texts, which do not add much to what has been gathered from previous documents, are of lesser interest. Both texts describe Phasis as a city into which merchants flock. First Arrian (Periplus M. Eux. 10) claims that 400 auxiliaries were garrisoned in Phasis, and refers to merchants staying there. Second, according to a late antiquity periplus (6 th -7 th century A.D.), "sixty peoples are said to descend [i. e., to Phasis], using different languages. And they say that among them come together certain barbarians from India and Bactria (εἰς ταύτην δὲ καταβαίνειν λόγος φωναῖς διαφόροις χρώμεν' ἐξήκοντ' ἔθνη, ἐν οἷς τινας λέγουσιν ἀπὸ τῆς Ἰνδικῆς καὶ Βακτριανῆς <γῆς> συναφικνεῖσθαι βαρβάρους)." (Periplus of the Pontus Euxinus, p. 127 Diller = Pseudo-Skymnos F20 Marcotte 18 ). The scholarly debate on the "northern route" With so little documentary material, either written or, as we will see, archaeological, the "northern road" pales in comparison with the Indian Ocean sea routes: not only are the latter documented by many more texts, but they have also benefited from a leap forward in archaeological research over the past decades. Hence a spectrum of opinions among scholars: while some did not hesitate to doubt the existence of this northern route, others admitted it with varying degrees of conviction 19 . The existence of the "supposed Oxo-Caspian route" was more or less dismissed by W. W. Tarn 20 . His demonstration is based on somewhat farfetched arguments. For instance, he claims that Patrocles was the only source of the three major documents (Pliny; Strabo 2, 1, 15; 11, 7, 3; the other texts are ignored). In other words, neither Aristobulus nor Varro are acknowledged as true sources. Moreover, the existence of the trade route itself is denied. Patrocles, Tarn argues, did not observe any actual traffic but just deemed this voyage as being feasible; he then goes on to imagine that Patrocles, mirroring the "mercantile sensitivity" 21 of his sovereign, said to king Antiochus I: "You can easily (radiôs) make a trade route from Bactria across the Caspian Sea if you like". There is no need here to dissect Tarn's arguments one by one. Suffice it to say that most of them rest on nothing but his own conviction, such as the view that "Eratosthenes [= Strabo, 2, 1, 15] has altered the whole sense (i. e., of Patrocles' report [= Strabo 11, 7, 3]) by turning 'easily' into 'many goods'". 
Yet one of his objections deserves more attention: Tarn recalls a passage by Strabo proving that the Caspian Sea was not sailed: However, neither the country itself [i. e., Hyrcania] nor the sea that is named after it [i. e., the Hyrcanian Sea / Caspian Sea] has received proper attention, the sea being both without vessels and unused [ἂπλους τε οὖσα καὶ ἀργός]." (Strabo, 11, 7, 2; transl. H. L. Jones) 22 . 18 According to Marcotte 256, however, this passage is unlikely to derive from Pseudo-Skymnos, who wrote his account of the world between 133 B.C. (or 127/6) and 110 B.C. (Marcotte 7-16). Thus the source of the author of the Periplus Ponti Euxini author remains unknown. 19 For a good review, see Callieri 538-540. 20 Tarn (b) 488-490. Tarn's judgment has been adopted by various scholars: see, e. g. Karttunen 337, n. 9; Lasserre 139, n. 5 ("la prétendue route fluviale des Indes") ; Waugh 190 (presenting himself as a "skeptic"). See also the scholars mentioned by Callieri 539-540 and below, p. 5-6. 21 Kosmin 202. 22 Tarn (a) 26. In reality, Strabo blamed the Hyrcanians for not properly exploiting the important resources of their sea, as attested by the absence of large ships. On the other hand, he certainly paid less or little attention to small crafts manned by fishermen or freight carriers, the existence of which seems to me beyond doubt 23 . The point is that Tarn, having in mind the great roads of the long distance trade operated by Greeks and Romans (e. g., the trans-Asian land road described by Maes Titianos 24 ), was to some degree scornful of interconnected small scale networks, which actually were the building blocks of many interregional circuits: On the whole it appears to me that we are safe in saying that whatever trade came down the Oxus and across the Caspian Sea was entirely in native hands during the whole period of Greek knowledge of this river; and that it was of no great extent. 25 Yet the conclusion he drew some time later was even less qualified: "There is no evidence at all that, in Greek times, any such trade-route from India ever existed" 26 . E. H. Warmington, being of the opposite opinion, did not call into question the extant textual evidence 27 . Wisely leaving open the question raised by the current course of the Oxos/Amu Darya, he just observed that Indian wares could have been carried from this river to the Caspian Sea by land. Some scholars adopted this view, but opinions as to how important the role played by this circuit was vary. Considering the lack of documents and Ptolemy's relatively mediocre knowledge of this part of the world, K. Karttunen is inclined to belittle its importance: "In any case this route was hardly important for Indian trade" 28 . Quite the reverse D. Schur, on the basis of literary evidence (Pliny, Tacitus ...), claims that Nero's foreign policy in the Caspian Sea and Hyrcania, following in the footsteps of Seleucus and Antiochus, comprised economic goals ( "Kaspische Handelsplänen"). Controlling the Oxo-Caspian road was part of Nero's wide plan to secure distant commerce routes for the Roman Empire -a similar "Südostpolitik" was conducted in Arabia and Ethiopia [Nubia]-29 . E. H. Warmington rather believes that "the Romans left the trade in the hands of middlemen, perhaps in order to avoid offending Parthia, contenting themselves (...) with obtaining influence among the tribes" 30 . W. W. Tarn, due to the lack of archaeological remains, drew his conclusion solely on the basis of literary evidence. Interestingly, D. 
Braund, an archaeologist focusing on the ancient Black Sea and Georgia, agrees more or less with his critical analysis: Much has been written about a trade-route by which goods could pass from India through Central Asia across the Caspian, and thence from Iberia across the Surami Ridge to Colchis and the Black Sea. The notion was encouraged by Patrocles, who reported to Antiochus I of Syria on the region of the Caspian Sea. Strabo expresses no view on the matter, but reports the statement of others. Pliny, summarizing Varro, is still more restrained: he states only that this route was deemed to be a possibility 31 . However, as Tarn sharply observes, Varro's evidence tells against the existence of such a route, for such a route can hardly have functioned significantly, if its feasibility was still in question in Pompey's day and if its activity was deemed no more than potential by Varro, who had visited this region 32 . He then goes on to discuss the archaeological evidence: Despite some ancient and much modern talk of a trade-route between the Black Sea and India, the inescapable fact is that the Surami Ridge constituted a significant obstacle to trade and movement. Archaeology reinforces Strabo's account of the difficulty of its passage. For example, from the archaic and classical periods, only a very few fragments of fine Greek pottery have been found east of that ridge, while it is relatively commonplace in Colchis, to the west. (...). And even with extensive use of available waterways, the distance from the Caspian to the Black Sea was long enough to deter much trade. Only light, precious items are likely to have found their way to the Black Sea in such a manner. 33 23 For further objections -including archaeological arguments -, see Callieri 541. 24 Tarn (a) 26. 25 Tarn (a) 28; "There is no good evidence ... for an important trade route by the Oxus, though some trade undoubtedly came that way." (Tarn (a) 28). 26 Tarn (b) 490. 27 Warmington 26 (nor does Haussig 79). See also Callieri 539; Bosworth 373; Karttunen 337, n. 94. 28 Karttunen 337. 29 Schur 67; 80-83. Mc Laughlin 90-92 and 201, on the basis of epigraphic evidence, states that as time progressed Rome was more involved in controlling Colchis and extending her authority to the small kingdom of Iberia. Wisseman 193, pursuing Schur's ideas, argues that Rome intended to gain greater influence in the western arm of this trade route (Colchis, Iberia, Albania, and Armenia). 30 Warmington 28. In contrast, O. Lordkipanidze, an archaeologist who wrote extensively on ancient Georgia, does not share Tarn's scepticism. He believes that Patrocles paid attention to an actual trade road, about which he managed to collect information; he also admits that Pliny's account derives from Varro who accompanied Pompey in the Third Mithridatic war, and subsequently that Strabo and Pliny documented the same route. This activity would have preceded the Hellenistic period, as shown by coins of Amisos recovered in Colchis and ancient Nissa, in Parthia 34 . Pottery and coins ranging from the 6 th century to the 3 rd -2 nd centuries B.C. are also commonly found in Colchis along the Rioni, where it is navigable 35 . Going eastwards, however, less material has been unearthed in Iberia, and even less in Albania -note that neither he nor D. Braund take the Bagram treasure into account 36 -.
Thus, Lordkipanidze concludes, the eastern branch of the "northern route" was not as intensively used as its Transcaucasian section: Daraus kann man ersehen, dass der beschriebene Handels-und Transitweg von Indien zum Schwarzen Meer (…), nicht regelmässig und intensiv genug funktionierte, d.h. man hatte ihn nur in einzelnen Fällen benutzt. … Ein Abschnitt dieses Weges, und zwar die Phasis (Rioni-Kwirila) Magistrale muss ober doch recht regelmässig funktioniert haben. 37 31 Braund's interpretation of posse in Pliny's text is as restrictive as questionable: Varro points to a virtual voyage, he argues. Posse, however, may well relate to the possibility of a very short overland voyage, meaning that it was possible to carry Indian wares from the Kyros to Phasis within five days (V non amplius dierum). Other scholars are less skeptic : see, e. g., Dreher 203 and 206 ("Bei all dem scheint Pompeius auch an die Sicherung der Handelswege, besonders dessen von Indien her zum Schwarzen Meer, gedacht zu haben"). 32 Braund (a) 40-41. 33 Braund (a) 40-41. 34 Lordkipanidze (a) 114-117. 35 See Callieri 538 on a Greco-Bactrian coin found at Tbilisi. 36 Unlike Callieri 537. Raschke 746, n. 435, reports a hoard unearthed in "Soviet Albania" which contained Parthian and Greco-Bactrian coins. According to Furtwängler & alii 170 "the discovery of shells of the species Cyprea moneta in the archaeological material of Georgia is quite common, starting from the Early Iron Age. They were probably imported from the coasts of the Indian Ocean." 37 Lordkipanidze (a) 116-119. Also see Lordkipanidze (b) 28-31: "This (i.e. archaeological finds) would seem to suggest that the presumed trade route from India to the Black Sea, attested by Strabo and Pliny, on the whole functioned irregularly, being used casually." The quest for further evidence For now little hope can be entertained of the discovery of decisive archaeological elements to throw light on this road 38 . In other words, one must look for overlooked or missed pieces of literary evidence to possibly enhance our knowledge. In an article published some time after his monograph on ancient Georgia, D. Braund again tackled the problem of the "Northern route". Dealing with a cloth called in Greek sardonikon (a kind of linen produced in Colchis) 39 , he suggested that this designation might derive from the toponym Sardô, a mountain in India mentioned by Ctesias 40 and echo a trade connection. Here D. Braund appears less hostile to the existence of the "northern route", though with limited enthusiasm: It seems to me that we must seek a balance. The tradition of trade between India and Colchis is supported by several authorities. Moreover, a glance at the map shows that it must have been a possibility. But on the other hand, some of our ancient authorities retained a doubt about its reality [i. e., Pliny the Elder; see above, n. 30]. Moreover, as far as I am aware, archaeology provides no significant support to the tradition of such a route. 41 In the face of this scarcity of archaeological traces, E. de la Vaissière similarly turns to literary testimonies: La seule véritable preuve de l'existence d'un commerce relativement régulier entre l'Asie centrale et la mer Noire se trouve dans les lapidaires gréco-romains: ceux-ci connaissent plusieurs variétés de pierres bleues, dont la meilleure, dite cyanos scythique, est importée de la mer Noire et correspond certainement au lapis-lazuli du Badakhstan 42 [= Pliny the Elder, 37, 119 43 ]. 
La pyrite contenue dans le lapis-lazuli correspond exactement à cette description. Il y a donc eu une diffusion régulière du lapis, nécessairement à travers la Sogdiane, en direction de la steppe puis de la mer Noire 44 . Incidentally, E. de la Vaissière has omitted to mention the so-called Black Sea beryls, which are likely to have been Indian beryls transhipped through one of the Pontic trade centres 45 (for other "Pontic" items probably imported from India, see below, p. 11). In the course of my research work, I came accross three texts which may strengthen the view that such a trade system is not fictitious. These will be discussed in order of relevance. The first excerpt is taken from the Description of the inhabited world by Dionysius of Alexandria, also known as Dionysius Periegetes, a contemporary of Hadrian (regn. 117-138 A. D.). He was the author of a description (periegesis) of the world in hexameter verse. Before describing the Caspian Sea region, 38 For a more optimistic point of view, see Callieri 542. For further archaeological evidence, see addendum 2, and below, n. 57. 39 Herodotus, 2, 105: "The Colchian kind [of linen] is called by the Greeks sardonikon." 40 Braund (b) 293-294. 41 Braund (b) 292. See also Braund (b) 293: in his Medea (lines 483-487), Seneca "presented the palace of Aetes in Colchis crammed with goods taken from India". 42 Many other examples of producing regions and transhipment points being mixed up are known to us: see, e. g., Pliny the Elder, 12, 32 (Arabian saccharon [cane sugar]); Statius, Silv. 4, 9, 12 (Nile valley pepper). 43 Note, however, that Pliny does not explicitly mention the Pontus: "The best kind is the Scythian, then comes the Cyprian and lastly there is the Egyptian ..." (transl. D. E. Eichholz). In addition, the cyanus must be identified with azurite; the true lapis lazuli was called sappiri (see Pliny the Elder, 37, 120). 44 De la Vaissière 45-47, with further references relating to archaeological remains in the Black Sea area. 45 Pliny the Elder, 37, 76-79: "Beryls are produced in India an are rarely found elsewhere (...). In our part of the world beryls, it is thought, are sometimes found in the neighbourhood of the Black Sea." Dionysius embarks upon a digression to explain the sources of his geographical information: unlike those who describe the Caspian Sea from personal experience, Dionysus declares that his knowledge derives from the goddesses of the inspiration of literature, science, and the arts, namely the Muses. In other words, a poet taught by the Muses is not compelled to travel around the world as merchants do 46 : Easily could I describe this sea [= the Caspian Sea], also to you, although I have not seen its channels far away, nor have I traversed it with a ship. For I do not make my living upon black ships, nor does my family engage in commerce, nor do I go to the Ganges, as many do, through the Erythraean Sea, not caring for their lives, in order to gain indescribable wealth. Nor do I have dealings with the Hyrcanians, nor do I search after the Caucasian ridges of the Erythraean Arians. But the mind of the Muses convey me etc. (Dionysius Periegetes, 707-716; transl. D. D. Greaves) Certainly there is not much originality in invoking the Muses. These lines echo, as observed by nearly all commentators, a passage from Hesiod's Works and Days 47 . 
On the other hand, this commonplace -the mind of the Muses -is treated in a very personal way here: in contrasting his position to that of merchants, Dionysius, instead of referring to the traders sailing the Mediterranean Sea, who were certainly familiar to his audience, mentions those engaged in the long distance eastern commerce. This is certainly an oblique reference to the pivotal role of Alexandria -Dionysius' city of birth -in Rome's trade with India 48 . Moreover, in so doing, Dionysius hints at three trading centres of the eastern world: first the Ganges, linked to Egypt and Alexandria via the Indian Ocean -called the Erythraean Sea -sea roads 49 ; second Ariana, a renowned source of lapis-lazuli linked to northwestern Indian ports 50 ; finally there are the Hyrcanians. Their name may point to the "northern route", though admittedly this testimony does not ascertain whether this circuit was used in the early 2 nd century A.D.: that Dionysius paraphrases one of his sources is all but implausible 51 . The second text, dating back to the sixth century A.D., seems far less ambiguous. The facts related by Procopius take place during the war between Justinian and Khosrow I (regn. 531 -579). At some point in the course of events, the Roman army proceeded to Doubios -today's Dvin, in Armenia 52 -, which Procopius describes as follows: Now Doubios is a land excellent in every respect and especially blessed with a healthy climate and abundance of good water; and from Theodosiopolis it is removed a journey of eight days. In that region there are plains suitable for riding, and many very populous villages are situated in very close proximity to one another, and numerous merchants conduct their business in them. For from India and the neighbouring regions of Iberia and from practically all the nations of Persia and some of those under Roman sway they bring in merchandise and carry on their dealings with each other there [ἔκ τε γὰρ Ἰνδῶν καὶ τῶν πλησιοχώρων Ἰβήρων πάντων τε ὡς εἰπεῖν τῶν ἐν Πέρσαις ἐθνῶν καὶ Ῥωμαίων 46 See Greaves 109-111. 47 See, e. g., See Schneider 560-562. 49 See the Periplus of the Erythraean Sea, 63; Strabo, 15, 1, 4. 50 See Schneider 554-555. 51 I do not, however, adhere to Lightfoot's excessive opinion: "The area (Hyrcania) was explored by Patrocles, but although Strabo reports that the region was better known than it used to be (2.5.12) a journey there looks more like a fantastic foil in a priamel than a serious proposition for a classical traveller." (Lightfoot 421). 52 See Kettenhofen. τινῶν τὰ φορτία ἐσκομιζόμενοι ἐνταῦθα ἀλλήλοις ξυμβάλλουσι]. (Procopius, History of the wars, 2, 25, 1-3; transl. by H. B. Dewing) This passage recalls the above quoted passage (above, n. 12) in which Strabo presents Armenians and Medes as middlemen receiving Indian and Babylonian commodities before they supply the Aorsoi with them (ἐνεπορεύοντο καμήλοις τὸν Ἰνδικὸν φόρτον καὶ τὸν Βαβυλώνιον παρά τε Ἀρμενίων καὶ Μήδων διαδεχόμενοι ). Similarly Procopius gives evidence of Indian wares conveyed to Doubios by Indian merchants; some were bartered or purchased by Iberian traders, who would transport these back home. Although Procopius does not explicitly say that they were re-exported, one can reasonably assume that a certain quantity of Indian commodities -probably pepper and other spices -reached some Black Sea ports. In any case, Procopius attests that India was connected to the Transcaucasian area by a land route (also see addendum 1). 
Regrettably the actual itinerary remains speculative. The last piece of evidence comes from Persius, a Roman satirical poet of the Neronian period (34-62 A.D.). His fifth satyr, dedicated to his teacher Cornutus, praises philosophy as the source of inner freedom, which such people as merchants were not able to enjoy, being on the dependency of their greed -actually a common place of the time-. At some point Persius imagines a merchant dashing to the Pontus to load various commodities: You are snoring lazily in the morning: "Up you get," says Avarice; "come, up with you!" -You do not budge: "Up, up with you!", she cries again. -"O, I can't!" you say.-"Rise, rise, I tell you! "-"O dear, what for? "-"What for? Why, to fetch salt fish from Pontus, beaver oil, tow, ebony, frankincense and glossy Coan fabrics; be the first to take the fresh pepper off the camel's back before he has had his drink; do some bartering, and then forswear yourself [En saperdas aduehe Ponto, / castoreum, stuppas, hebenum, tus, lubrica Coa. / Tolle recens primus piper e sitiente camelo]." (Persius, Satur. 5, 132-136 -translation by G. G. Ramsay) Satirical poetry was not intended to convey positive and accurate facts. On the contrary, such documents are liable to confusions and mistakes. The main issue to emerge here is whether Persius refers solely to the Pontus, or implicitly mixes up several places of trade. For instance, according to the German editor Jahn, the camel is an allusion to pepper imported from India to Alexandriaactually camels never reached Alexandria, for the caravan routes, either from Myos Hormos or from Berenikê, ended at Coptos -: "Piper Indicum ex India camelis potissimum Alexandriam asportabatur." 53 I take the view, however, that Persius did not compose these lines in an ambiguous and inconsistent way. In other words, he gave a list of commodities available in the Pontic area, and accordingly the reader would normally understand that this array of commodities was fetched from the Pontus. This seems to me corroborated by the fact that the Pontic region was famous not only for salted fish (saperdas), but also beaver-oil (castoreum) 54 ; in addition the stuppa (the coarse part of flax 55 ) is likely to belong to the same area, for, according to Strabo 56 , the Colchians grew flax in abundance. I am thus convinced that Persius' list relates to the Pontus, to which the merchant is strongly advised to head. 53 Jahn 203 (also see Jahn 202: "Diversas merces de consilio miscet"). 54 See Pliny the Elder, 8, 109; Virgil, Georg. 1, 57-59 ( India mittit ebur, molles sua tura Sabaei,/ at Chalybes nudi ferrum uirosaque Pontus /castorea, Eliadum palmas Epiros equarum). 55 Pliny the Elder,19,17;Sextus Pompeius Festus,De Verborum Significatione,317,31. 56 Strabo, 11, 2, 17 (λίνον τε ποιεῖ πολὺ καὶ κάνναβιν καὶ κηρὸν καὶ πίτταν). If the reader accepts this premise, it appears therefore that some Indian wares reached the Black Sea during the early Roman Empire -which could be corroborated by a hoard found in Georgia 57 -. The trade place hinted at by Persius was perhaps located where an overland route ended, as suggested by the camels (see addendum 3). Phasis, standing at the extremity of such a road (above, p. 3), could be a suitable candidate 58 . The imports possibly of Indian origin are the following ones.  Coan fabrics. The name Coa usually applies to a fine and light clothing from the island of Cos 59 . 
This fabric was woven from the raw silk of the bombyx, whose cocoons produced short threads, differing thus from genuine silk. Coae vestes are mentioned during the Roman Imperial period, being "regarded as luxury clothing for demi-mondaines (e. g., Hor. Sat. 1, 2, 101; Tib. 2, 3, 57)" 60 . However, when Tibullus lists the Coae vestes among the most highly esteemed luxuries of his time -emeralds, purple clothes, Erythraean pearls -, the name Coan fabric may apply to genuine silk (serica) 61 . If Persius resorted to a similar assimilation, then this text points to silk reaching the Mediterranean world via the northern road and the Black Sea.  Ebony. The Greek word ἔβενος and its Latin counterpart hebenus designate the true ebony obtained from Diospyros ebenum in "Ethiopia" (i. e., Nubia, East Africa) and India as well. It also applies to a lower quality of wood produced by a different species (Dalbergia sissoo, or "seesham"), which is very common in the Pendjab and is also found in southern Iran and the rest of the subcontinent 62 . This material was exported from north-west India by sea-route from Barygaza to the Arab-Persian Gulf 63 . It may have also been sent to the Pontic area.  Pepper. Two distinct species grow in India: the long pepper (Piper longum) and the black pepper (Piper nigrum). The latter is native to south India, while the former occurs from the foothills of the Himalaya to south India 64 . The high volume of Roman consumption is well known and the importance of Indian Ocean sea routes has been brought to light following the excavations at Berenikê and Myos Hormos 65 . Persius gives evidence for Indian pepper being conveyed to the Eastern Pontus and the Mediterranean by an alternative itinerary.  Frankincense. This name presents difficulties of which it is not easy to give a satisfactory explanation. Conclusion The" northern route" is so poorly documented that, in my opinion, this set of literary documentsnot mentioned in previous academic literature, to the best of my knowledge -should not be ignored or despised. They tend to confirm the existence of this road along which eastern commodities would be carried to the Pontic area, in some cases via the Caspian Sea. These texts, ranging from the late 4 th century B. C. (Aristobulus) to the 6 th century A. D. (Procopius) give evidence for a most likely continuous trade activity, even if shifts probably occurred in the course of time. A significant conclusion is that the boom in the Indian Ocean trade, which followed the annexation of Egypt by the Roman power, did not put an end to this trade traffic. K. Ruffing, in a study devoted to the two main routes of the eastern commerce of Rome -the first one ending at Alexandria and the other at Antioch -observed that they somewhat complemented each other: "Schliessliech waren die beiden Hauptrouten nicht voneinander so unabhängig, wie est auf den ersten Blick scheint" 69 . The long existence of the "northern route" may be understood in a similar way: this trade circuit supplied the northern regions of the Roman world and the Pontic area with Indian wares through an efficient and thousand-year-old trade network, and worked independently of the southern supply chain. As such, the Pontus acted as a third end point of the eastern routes. Addenda  Addendum 1 (on Doubios / Dvin, above, p. 8). 
See Preiser-Kapelle 3: "As also Procopius indicates, from Dvin routes both to the north through Georgia and beyond the Caucasus as well as to the south to Azerbaijan and Media in the interior of the Sasanian Empire would connect to the 'Silk roads.'" 66 Strabo, 16, 3, 3 = Aristobulus, FGrH 139 F57. On this circuit, see, e. g., Young 92-94. 67 A kind of spice produced in India (Pliny, 12, 48; 16, 135) and praised in the Roman world at Pliny's time (Pliny, 37, 204). 68 See also Thorley 215: "The evidence does therefore seem sufficient to establish the existence of this Caspian route in Republican times. There is no direct reference to trade along it in the Empire period, although there are reasons to believe it continued to be used. It seems from its name that radix pontica, the drug rhubarb (Celsus, De med. 5, 23, 3), which was a Chinese export, may have reached the West by this route." The Latin sources, however, gives no clue about the origin of the radix pontica. 69 Ruffing 375.  Addendum 2 (on recent archaeological finds [above, p. 6]). See Shortland and Schroeder 961-963, about beads unearthed in the Pichvnari necropolis on the Black Sea coast of Georgia. Some "are made of a plant ash-based glass, which at that time was not produced in the Mediterranean. The very high alumina composition of two of the beads suggests that they may be from India, the only place where such beads are common. They therefore represent part of the trade from the subcontinent, and part of the reason why Colchis, forming as it did a major port and a staging post to the trade routes to Central Asia and beyond, was such a wealthy place in the fifth and fourth centuries BC. Further work now needs to be undertaken, looking for more such trade objects both in Colchis and in Central Asia and India to further establish the level and nature of long-distance trade through this important and interesting junction between East and West." Remains of silk have been recovered at Dedoplis Gora in Georgia. See Kvavadze 214: "In this case silk thread was probably used to make the textile glitter and beautiful. It should be noted that this is one of the first discoveries of silk in Georgia. So far the earliest information about silk in Georgia comes from the archaeological material from Armazi where a piece of silk fabric was found, which has been dated to the second century A. D. (Isakadze 1970). Silk originated in China and the "Silk Road" which established the trade contacts between China and the Mediterranean, Asia and Europe began in the first or second century B. C. (Wild 1984). The discovered silk fibres in the Dedoplis Gora layers confirm the theoretical assumption that both the Caucasus (Babaev 1998) and Georgia (Abesadze 1957; Isakadze 1970) were involved in this trade from the beginning. It is believed that at that time China sold only silk thread, and silk textile was exported much later (Wild 1984). The results of the investigation in Dedoplis Gora completely confim this concept." A set of archaological remains, and especially ivory hairpins found in the Oxus Valley, caused E. V. Rtveladze to draw the following conclusion: "The diffusion of the above mentioned objects can also imply that Parthian, Bactrian and Indian merchants had set up trading stations along the Oxus that were used for shipment of ivory and other articles on their way from India to Bactria and Margiana. 
From here these goods were shipped to Chorasmia along the Oxus, and from Margiana they were transported along the Great Indian Road across the southern Caucasus and the Euxine Pontus to the northern Black Sea region. The finds at Olbia of carved ivory bearing the image of a Parthian nobleman and imitations of Greco-Bactrian coins along the northern Black Sea coast, and Sanabares' coins minted in Margiana found in the Kura valley in Georgia are links in a chain and testifies to the movement of goods along the Great Indian Road."  Addendum 3 (on camels, above, p. 9). See Peters & von den Driesch 662: "For the 1 st Millennium BC, different sources of information confirm the presence of Bactrian camels in many parts of Asia including the Near East and Asia Minor. It is worth mentioning its increasing economic importance in Iran, and its appearance in Mesopotamia (...), e.g. Bactrian camels depicted on the Black Obelisk during the reign of Shalmaneser I11 (859-824 BC). Its occurrence (and consumption) in Hellenistic Chorasmia (Khorezm), to the south of the Aral Sea in the lower Amudarya (Oxus) region, has been noted by Calkin (1966). Bactrian camels played an important role for the opening of western trade routes from China to the Black Sea in Han (206 BC-24 AD) and Tang (618-907 AD) times (...), and this might explain why their remains have occasionally been found in Greek colonial towns on the northern shore of the Black Sea (Calkin, 1960; Bokonyi, 1969). Using data in Chinese literature, Schafer (1950) convincingly argued that the exploitation of Bactrian camels on a larger scale in northern China began during the western Chou dynasty (11 th century-771 BC) and increased as trade with western Asia increased. Camels reached Central Europe in the Imperial Roman Period, as occasional finds from sites such as Vindonissa-Windisch (...), Vienna (...), Abodiacum-Epfach (...), Vemania-Isny (...) and Augusta Vindelicum-Augsburg illustrate. For reasons of climate, it is probable that Bactrian camels were involved. However, the few isolated bones do not allow an identification to the species level, and the presence of the dromedary a priori cannot be excluded." Pierre Schneider Université d'Artois -Maison de l'Orient et de la Méditerranée [email protected] https://sites.google.com/site/mererythree/ . André & Filliozat 70-71 rightly emphasize that neither Strabo nor Pliny explicitly state that the Oxos empties into the Caspian Sea. than five days' portage by land can reach Phasis in Pontus [adicit idem Pompei ductu exploratum, in Bactros septem diebus ex India perveniri ad Bactrum flumen quod in Oxum influat, et ex eo per Caspium in Cyrum subvectos, et V non amplius dierum terreno itinere ad Phasim in Pontum Indicas posse devehi merces 14 ]. (Pliny the Elder, 6, 52; transl. H. Rackham) 15 True frankincense was produced in South Arabia and not in India. A certain quantity of Arabian aromatics travelled by an overland road to Mesopotamia via the city of Gerrha -a 57 De Romanis 179, n. 78: "The significant number (350) of CL CAESARES denarii found in Georgia has to be explained with the import of items brought from central Asia:Str. 11.7.3." 58 On Phasis in the early second century A. D., see Arrian, Peripl. Eux. 9 (Arrian says that he had a wall built for merchants' [ἐμπορικῶν ἀνθρώπων] security). On how Roman authority over Colchis and Iberia, and trade activities were interrelated, see Mc Laughlin 91-92; Thorley 215. 
Ah, ruin to all who gather the emeralds green or with Tyrian purple dye the snowy sheepskin. The stuffs of Cos and the bright pearl from out of the Red sea sow greed in lasses" (compare with Seneca, De benefic. 7, 9). Also see Tibullus, 2, 3, 53-54: "Let her wear the gossamer robe which some woman of Cos has woven and laid it out in golden tracks. [Illa gerat vestes tenues, quas femina Coa / Texuit, auratas disposuitque vias]." (transl. G. P. Goold); the Coan fabric interwoven with gold clearly recalls the silken texture mixed with gold (see A. Peckridou-Gorecki).62 Theophrastus, H. P., 4, 4, 6: "The ebony is also peculiar to this country [India]; of this there are two kinds, one with good handsome wood [Diospyros ebenum], the other inferior [Dalbergia sissoo]. The better sort is rare, but the inferior one is common." (transl. A. Hort). See Amigues (a) 223-224.63 Periplus of the Erythraean Sea, 36. See Casson 181 ; 259.64 Amigues (c) 238-239.65 SeeVan der Veen 44-46. trading post in north-east Arabia -, according to Strabo drawing on Aristobulus 66 . Some may have been carried northward as far as the Black Sea. Here it is worth recalling Strabo's report about the Aorsoi: the "Babylonian wares" received by Armenians and Medes for re-export to the northern Pontic area may well have been Arabian aromatics. Alternatively tus may vaguely refer to a kind of aromatic gum imported from India: Pliny (12, 71) speaks of an Indian myrrh of low quality and Philostratus (V. A., 3, 4) says that both "frankincense" and pepper grow on the southern face of the Caucasus (= Himalaya). Incidentally, let us mention two additional documents reporting eastern spices seemingly transhipped to the Mediterranean world by an overland northern route: 1) according to Dioscorides, the best kardamômon, collected in India and Arabia, was imported from Commagene, Armenia and Bosphorus [καρδάμωμον ἄριστον τὸ ἐκ τῆς Κομμαγηνῆς καὶ Ἀρμενίας καὶ Βοσπόρου κομιζόμενον• γεννᾶται δὲ καὶ ἐν Ἰνδίᾳ καὶ Ἀραβίᾳ]; 2) a Plautian character gives a lady frankincense from Arabia and amomum 67 from the Pontus as presents (Plautus, Truc. 539-540 [ex Arabia tibi /attuli tus, Ponto amomum]) 68 . 59 See Pliny the Elder, 4, 62; 11, 76-77; 24, 108. 60 See Hurschmann. Also see Propertius, 4, 2, 23; 2, 1, 5; Tibullus, 2, 5, 38. 61 Tibullus, 2, 4, 27-30: "
47,401
[ "749240" ]
[ "56711", "90438" ]
00149050
en
[ "math" ]
2024/03/04 23:41:50
2009
https://hal.science/hal-00149050/file/stablefinalhal.pdf
Bénédicte Haas email: [email protected] Jim Pitman email: [email protected] Matthias Winkel email: [email protected] Spinal partitions and invariance under re-rooting of continuum random trees * Keywords: AMS 2000 subject classifications: 60J80 Markov branching model, discrete tree, Poisson-Dirichlet distribution, fragmentation process, continuum random tree, spinal decomposition, random re-rooting We develop some theory of spinal decompositions of discrete and continuous fragmentation trees. Specifically, we consider a coarse and a fine spinal integer partition derived from spinal tree decompositions. We prove that for a two-parameter Poisson-Dirichlet family of continuous fragmentation trees, including the stable trees of Duquesne and Le Gall, the fine partition is obtained from the coarse one by shattering each of its parts independently, according to the same law. As a second application of spinal decompositions, we prove that among the continuous fragmentation trees, stable trees are the only ones whose distribution is invariant under uniform re-rooting. Introduction Starting from a rooted combinatorial tree T [n] with n leaves labelled by [n] = {1, . . . , n}, we call the path from the root to the leaf labelled 1 the spine of T [n] . Deleting each edge along the spine of T [n] defines a graph whose connected components we call bushes. If as well as cutting each edge on the spine, we cut each edge connected to a spinal vertex, each bush is further decomposed into subtrees. We thus obtain two nested partitions of {2, . . . , n}, which naturally extend to partitions of [n] by adding the singleton {1}. We call these partitions of [n] the coarse spinal partition and the fine spinal partition derived from T [n] . The aim of this paper is to develop some theory of spinal decompositions of fragmentation trees that arise as genealogical trees of fragmentation processes. We focus on Markovian partition-valued fragmentation processes of the following two types. In a setting of discrete time and partitions of [n], we postulate that each non-singleton block splits at each time, which leads to Markov branching models [START_REF] Aldous | Probability distributions on cladograms[END_REF][START_REF] Ford | Probabilities on cladograms: introduction to the alpha model[END_REF][START_REF] Haas | Continuum tree asymptotics of discrete fragmentations and applications to phylogenetic models[END_REF]. In a setting of continuous time and partitions of N we postulate a self-similarity condition, which leads to self-similar continuum random trees [START_REF] Haas | The genealogy of self-similar fragmentations with negative index as a continuum random tree[END_REF][START_REF] Haas | Continuum tree asymptotics of discrete fragmentations and applications to phylogenetic models[END_REF]. Before giving an overview of this paper in Section 1.3, we formally introduce the discrete setting in Section 1.1 and the continuous setting in Section 1.2. Discrete fragmentations We start by introducing a convenient formalism for the kind of combinatorial trees arising naturally in the context of fragmentation processes. Let B be a finite non-empty set, and write #B for the number of elements of B. Following standard terminology, a partition of B is a collection Π B = {B 1 , . . . , B k } of non-empty disjoint subsets of B whose union is B. To introduce a new terminology convenient for our purpose, we make the following recursive definition. 
A fragmentation of B (sometimes called a hierarchy or a total partition [START_REF] Schroeder | Vier combinatorische Probleme[END_REF][START_REF] Stanley | Enumerative combinatorics[END_REF]) is a collection T B of non-empty subsets of B such that (i) B ∈ T B , and (ii) if #B ≥ 2, there is a partition Π B = {B 1 , . . . , B k } of B into k ≥ 2 blocks such that T B = {B} ∪ T B 1 ∪ • • • ∪ T B k (1) where T B i is a fragmentation of B i for each 1 ≤ i ≤ k. Necessarily B i ∈ T B , each child B i of B with #B i > 1 has further children, and so on, until the set B is broken down into singletons. We use the same notation T B for both • such a collection of subsets of B, and • the tree whose vertices are these subsets of B, and whose edges are defined by the parent/child relation implicitly determined by the collection of subsets of B. To emphasize the tree structure we may call T B a fragmentation tree. Thus B is the first branch point of T B , and each singleton subset of B is a leaf of T B , see Figure 1. It is often convenient to plant T B by adding a root vertex and an edge between the root and the first branch point B. We denote by T B the collection of all fragmentation trees labelled by B. (Figure 1: fragmentations of [5] represented as trees with nodes labelled by subsets of [5].) For each non-empty subset A of B, the restriction to A of T B , denoted T A,B , is the fragmentation tree whose first branch point is A, whose leaves are the singleton subsets of A, and whose tree structure is defined by restriction of T B . That is, T A,B is the fragmentation T A,B = {C ∩ A : C ∩ A ≠ ∅, C ∈ T B } ∈ T A , corresponding to a reduced subtree as discussed by Aldous [START_REF] Aldous | The continuum random tree[END_REF]. Given a rooted combinatorial tree with no single-child vertices and whose leaves are labelled by a finite set B, there is a corresponding fragmentation tree T B , where each vertex of the combinatorial tree is associated with the set of leaves in the subtree above that vertex. So the fragmentation trees defined here provide a convenient way to both label the vertices of a combinatorial tree, and to encode the tree structure in the labelling. A random fragmentation model is an assignment of a probability distribution on T B for a random fragmentation tree T B with first branch point B for each finite subset B of N. We assume throughout this paper that the model is exchangeable, meaning that the distribution of Π B , the partition of B generated by the branching of T B at its root, is of the form P(Π B = {B 1 , . . . , B k }) = p(#B 1 , . . . , #B k ) (2) for all partitions {B 1 , . . . , B k } with k ≥ 2 blocks, and some symmetric function p of compositions of positive integers, called a splitting probability rule. The model is called a Markov branching model if, in addition, conditionally given Π B = {B 1 , . . . , B k }, the restricted trees T B 1 ,B , . . . , T B k ,B are independent with the same distributions as T B 1 , . . . , T B k ; it is called (sampling) consistent if, for every non-empty A ⊆ B, the restriction T A,B is distributed as T A . See also [START_REF] Aldous | Probability distributions on cladograms[END_REF][START_REF] Ford | Probabilities on cladograms: introduction to the alpha model[END_REF][START_REF] Haas | Continuum tree asymptotics of discrete fragmentations and applications to phylogenetic models[END_REF]. Continuous self-similar fragmentations We denote by P the set of partitions of N and equip it with the distance d(π, π ′ ) = 2 -n(π,π ′ ) , where n(π, π ′ ) is the largest integer n such that the restrictions of partitions π, π ′ to [n] coincide.
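Before developing the continuous setting further, the discrete definitions of Section 1.1 can be made concrete with a short sketch. The following Python snippet is not part of the paper; the function names are illustrative and the check of conditions (i)-(ii) is only as thorough as small examples require. It encodes a fragmentation of a finite set B as a collection of subsets, verifies the recursive definition, and computes the restriction T A,B = {C ∩ A : C ∩ A ≠ ∅, C ∈ T B }.

```python
from itertools import chain

# A fragmentation (hierarchy) of a finite set B, as in Section 1.1: a collection
# of subsets of B containing B itself and, recursively, the blocks produced by
# splitting every set of size >= 2 into k >= 2 parts, down to the singletons.

def is_fragmentation(t, B):
    """Check conditions (i) and (ii) for a candidate collection t of subsets of B."""
    t = {frozenset(c) for c in t}
    B = frozenset(B)
    if B not in t:                      # condition (i)
        return False
    for C in t:
        if len(C) <= 1:
            continue
        # the children of C are the maximal proper subsets of C belonging to t
        children = [D for D in t if D < C and not any(D < E < C for E in t)]
        # condition (ii): they must partition C into at least two non-empty blocks
        if len(children) < 2:
            return False
        if set(chain.from_iterable(children)) != set(C):
            return False
        if sum(len(D) for D in children) != len(C):
            return False
    return True

def restrict(t, A):
    """Restriction T_{A,B} = {C n A : C n A nonempty, C in T_B} (the reduced subtree)."""
    A = frozenset(A)
    return {frozenset(C) & A for C in t if frozenset(C) & A}

# Example: fragmentation of {1,...,5} splitting first into {1,2} | {3,4,5}, then {3} | {4,5}.
T = [{1, 2, 3, 4, 5}, {1, 2}, {3, 4, 5}, {1}, {2}, {3}, {4, 5}, {4}, {5}]
print(is_fragmentation(T, {1, 2, 3, 4, 5}))          # True
print(sorted(map(sorted, restrict(T, {1, 4, 5}))))   # [[1], [1, 4, 5], [4], [4, 5], [5]]
```

On this example the restriction to {1, 4, 5} is again a fragmentation, namely the reduced subtree spanned by the leaves 1, 4 and 5, as in the definition of T A,B above.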
Following Bertoin [START_REF] Bertoin | Self-similar fragmentations[END_REF], a continuous-time P-valued Markov process (Π(t), t ≥ 0) is called a selfsimilar fragmentation process with index a ∈ R if it is càdlàg and • Π(0) = {N}, i.e. Π starts from the trivial partition with a unique block; • Π is exchangeable, i.e. its distribution is invariant under permutations of N; • given Π(t) = π, the post-t process (Π(t + s), s ≥ 0) has the same law as the process whose state at time s ≥ 0 is the partition of N whose blocks are those of π i ∩ Π (i) (|π i | a s), i ≥ 1, where (π i , i ≥ 1) is the sequence of blocks of π in order of least elements, (|π i |, i ≥ 1) is the sequence of their asymptotic frequencies and (Π (i) , i ≥ 1) is a sequence of i.i.d. copies of Π. We recall that Kingman's theorem [START_REF] Kingman | The representation of partition structures[END_REF] on exchangeable partitions ensures that for every t ≥ 0, the asymptotic frequencies |π i | = lim n→∞ n -1 #(π i ∩ [n] ) of all blocks π i of Π(t) exist a.s.. Bertoin [START_REF] Bertoin | Homogeneous fragmentation processes[END_REF] shows that actually a.s. for every t, these asymptotic frequencies exist. In [START_REF] Bertoin | Self-similar fragmentations[END_REF], Bertoin proved that the distribution of a self-similar fragmentation is entirely characterized by three parameters: the index of self-similarity a, a coefficient c ≥ 0 that measures the rate of erosion and a dislocation measure on S ↓ =    (s i ) i≥1 : s 1 ≥ s 2 ≥ . . . ≥ 0, i≥1 s i ≤ 1    , with no atom at (1, 0, ...) and that integrates 1s 1 . This measure ν describes the sudden dislocations of blocks, in the sense that a block B ⊂ N splits in some blocks B 1 , B 2 , . . . with relative asymptotic frequencies s ∈ S ↓ at rate |B| a ν(ds). When the index a = 0, this fragmentation rate does not depend on the size of the blocks and the fragmentation processes is then said to be homogeneous. A crucial point is that a self-similar fragmentation with parameters a, c and ν can always be constructed measurably from a homogeneous fragmentation with same coefficient c and measure ν, using time-changes, and vice-versa. We refer to Bertoin's book [START_REF] Bertoin | Random fragmentation and coagulation processes[END_REF] and the above mentioned papers [START_REF] Bertoin | Homogeneous fragmentation processes[END_REF][START_REF] Bertoin | Self-similar fragmentations[END_REF] for details on these time-changes and background on homogeneous and self-similar fragmentations. In this paper, we focus on self-similar fragmentations without erosion (c = 0), which are non-trivial (ν(S ↓ ) = 0) and do not lose mass at sudden dislocations, i.e. ν   i≥1 s i < 1   = 0. We call (a, ν) the characteristic pair of such a process. A family of combinatorial trees with edge lengths R [n] , n ≥ 1, with n exchangeably labelled leaves, is naturally associated to a self-similar fragmentation process Π by considering the evolution of Π restricted to the first n integers. Specifically, R [n] consists of all blocks B that occur in the evolution of Π ∩ [n]; an edge between the root and the first branch point [n] has as its length the first dislocation time of Π ∩ [n], and similarly for subtrees with two or more leaves; the edge below leaf j has as its length the time between the last relevant dislocation time of Π ∩ [n] and the time when {j} becomes a singleton for Π, which may be infinite. 
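As a concrete (and deliberately simplistic) illustration of how the trees R [n] are read off from Π restricted to [n], the following sketch simulates a homogeneous fragmentation (index a = 0) of [n] for the toy dislocation measure ν = δ (1/2,1/2) : at each dislocation, the elements of a block are painted into two parts by independent fair coin flips, as in Kingman's paintbox. This measure is finite and is not one of the measures studied in the paper; it is chosen only because the restricted dynamics can then be simulated exactly. All names in the code are hypothetical.

```python
import random

# Toy homogeneous fragmentation of [n] (index a = 0), dislocation measure
# nu = delta_{(1/2, 1/2)}: a dislocation paints the elements of a block into two
# parts by independent fair coin flips.  Every non-singleton block waits an
# Exp(1) time (the total mass of nu) before each dislocation attempt; attempts
# that fail to separate the block are invisible in the restriction to [n].

def fragment(n, seed=0):
    rng = random.Random(seed)
    alive = [(frozenset(range(1, n + 1)), 0.0)]   # (block of [n], birth time)
    history = []                                  # (block, birth, split time) for blocks of size >= 2
    while alive:
        block, birth = alive.pop()
        if len(block) == 1:
            continue                              # leaf edge of R_[n]: infinite length here
        t = birth
        while True:
            t += rng.expovariate(1.0)             # next dislocation attempt
            left = frozenset(i for i in block if rng.random() < 0.5)
            right = block - left
            if left and right:                    # visible split of this block
                history.append((block, birth, t))
                alive.extend([(left, t), (right, t)])
                break
    return history

# Each recorded triple is an internal vertex of R_[5]; the edge below the block
# has length (split time - birth time), and T_[5] is obtained by forgetting times.
for block, birth, split in sorted(fragment(5), key=lambda rec: rec[1]):
    print(sorted(block), "born", round(birth, 3), "split at", round(split, 3))
```

In this homogeneous toy example the block of Π containing a fixed integer never becomes a singleton, so the edge below each leaf of R [n] has infinite length, in line with the remark above that these lengths may be infinite.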
This gives a consistent family of trees, in the sense that the subtree of R [n] spanned by [k] is R [k] , for all k ≤ n, where superfluous (i.e. multiplicity 2) vertices are removed and associated edges merged, their lengths summed up. By exchangeability, the same is true in distribution for uniformly chosen k distinct leaves of R [n] , relabelled by [k]. The coupling of self-similar fragmentations using time-changes entails that the distribution of the combinatorial shapes (say T [n] ) of R [n] , n ≥ 1, depends only on the dislocation measure ν, and not on the index a. So without loss of generality, we may focus on a = 0, the case of homogeneous fragmentations, when working with the shapes T [n] , n ≥ 1. Furthermore, (T [n] , n ≥ 1) defines a consistent Markov branching model as in the previous subsection. Reciprocally, each consistent Markov branching model can be constructed similarly from some homogeneous fragmentation (possibly with erosion). See [START_REF] Haas | Continuum tree asymptotics of discrete fragmentations and applications to phylogenetic models[END_REF]. When the index a is negative, small fragments vanish quickly and it is well-known that the whole fragmentation Π then reaches in finite time the trivial partition composed exclusively of singletons. See e.g. [START_REF] Bertoin | Random fragmentation and coagulation processes[END_REF]. In terms of trees, this implies that the height of R [n] is bounded uniformly in n. Using the consistency property and Aldous's results [START_REF] Aldous | The continuum random tree[END_REF], it is then possible to define the projective limit T of the family (R [n] , n ≥ 1) and equip it with a probability measure µ, the mass measure, that arises as limit of the empirical measures on the leaves of R [n] , n ≥ 1. Implicitly, the tree T is rooted. The pair (T , µ) is a continuum random tree (CRT) and was studied in [START_REF] Haas | The genealogy of self-similar fragmentations with negative index as a continuum random tree[END_REF] using Aldous's formalism of trees as compact metric subsets of l 1 , cf. [START_REF] Aldous | The continuum random tree[END_REF][START_REF] Aldous | The continuum random tree. II. An overview[END_REF][START_REF] Aldous | The continuum random tree[END_REF]]. An alternative formalism can be considered, via the set of equivalence classes of compact rooted R-trees endowed with the Gromov-Hausdorff distance, as developed in [START_REF] Evans | Rayleigh processes, real trees, and root growth with re-grafting[END_REF][START_REF] Evans | Subtree prune and regraft: a reversible real tree-valued Markov process[END_REF]. We will not go further into details here and refer to the above-mentioned papers for rigorous definitions and statements. We shall call the CRT (T , µ) a self-similar fragmentation CRT with parameters (a, ν). A fundamental property of (T , µ) is that a version of (R [n] , n ≥ 1) can be obtained from a random sampling L 1 , L 2 , ... picked independently according to µ, conditional on (T , µ), by considering for each n the subtree of T spanned by the root and leaves L 1 , ..., L n . Consider then the forest F T (t) obtained by removing in T all vertices at distance less than t from the root and define the random partition Π ′ (t) by letting i and j be in the same block of Π ′ (t) if and only if L i and L j are in the same connected component of F T (t), t ≥ 0. Clearly the process Π ′ is distributed as Π. 
We shall often suppose in the following that the fragmentation process we are working with is constructed in such a manner from some self-similar fragmentation CRT. Examples of self-similar fragmentation CRTs are the Brownian CRT of Aldous [START_REF] Aldous | The continuum random tree[END_REF][START_REF] Aldous | The continuum random tree. II. An overview[END_REF][START_REF] Aldous | The continuum random tree[END_REF] and, more generally, the stable Lévy trees with index β ∈ (1, 2] of Duquesne and Le Gall [START_REF] Duquesne | Random trees, Lévy processes and spatial branching processes[END_REF][START_REF] Duquesne | Probabilistic and fractal aspects of Lévy trees[END_REF]. For details on their fragmentation properties, see Bertoin [START_REF] Bertoin | Self-similar fragmentations[END_REF] for the Brownian case (i.e. when β = 2) and Miermont [START_REF] Miermont | Self-similar fragmentations derived from the stable tree. I. Splitting at heights[END_REF] for the other stable cases. The parameters of these CRTs are recalled later in the paper. Contents and organization of the paper The structure and contents of this paper are as follows. In Section 2, we study the coarse and fine spinal partitions of some Markov branching model (T [n] , n ≥ 1) constructed consistently from a self-similar fragmentation process. These partitions of [n] are consistent as n varies, which leads to a nested pair of partitions of N. Restricted to N\{1}, they are jointly exchangeable. In particular, they possess asymptotic frequencies a.s. The decreasing rearrangements of these frequencies are called the coarse spinal mass partitions and fine spinal mass partitions. By decomposing the trees along the spine, we then show that when the parameters a and ν of the fragmentation are known and ν is infinite, we can reconstruct the whole self-similar fragmentation process from the sequence of shapes (T [n] , n ≥ 1) (Proposition 2). Next, the main result of this section (Theorem 6) states that under some factorization property of the dislocation measure ν (Definition 2), the fine spinal mass partition derived from the sequence of shapes (T [n] , n ≥ 1) is obtained from the coarse one by shattering each of its fragments in an i.i.d. manner. In particular, this result applies to a family of fragmentations whose dislocation measures are built from Poisson-Dirichlet partitions (Section 3). The stable fragmentations studied by Miermont [START_REF] Miermont | Self-similar fragmentations derived from the stable tree. I. Splitting at heights[END_REF], built from the stable trees of Duquesne and Le Gall with index in [START_REF] Aldous | The continuum random tree[END_REF][START_REF] Aldous | The continuum random tree. II. An overview[END_REF], belong to this family. As a consequence, we obtain an extensive description, in terms of Poisson-Dirichlet partitions (Corollary 10), of spinal decompositions of stable trees. The stable trees (T , µ) are known to possess an interesting symmetry property of invariance under uniform re-rooting (see [START_REF] Aldous | The continuum random tree. II. An overview[END_REF][START_REF] Duquesne | Probabilistic and fractal aspects of Lévy trees[END_REF]15]). Informally, this means that taking a leaf at random according to µ and considering T rooted at this random leaf, gives a CRT with the same distribution as the original CRT with its original root. 
In Section 4, we give a new proof of this invariance, using combinatorial methods, and show that, up to a scaling factor, stable trees are the only self-similar fragmentation CRTs that are invariant under uniform re-rooting (Theorem 11). To finish this introduction, let us mention that studies on spinal decompositions of various trees exist in the literature. See e.g. Aldous-Pitman [START_REF] Aldous | Tree-valued Markov chains derived from Galton-Watson processes[END_REF] (for Galton-Watson trees), Duquesne-Le Gall [START_REF] Duquesne | Probabilistic and fractal aspects of Lévy trees[END_REF] (for stable and Lévy trees). In the fragmentation context, decomposing the trees/processes along the spine is a useful tool, which has been used to obtain results on large time asymptotics [START_REF] Bertoin | Discretization methods for homogeneous fragmentations[END_REF], small time asymptotics [START_REF] Haas | Fragmentation processes with an initial mass converging to infinity[END_REF] and discrete approximations [START_REF] Haas | Continuum tree asymptotics of discrete fragmentations and applications to phylogenetic models[END_REF]. Spinal partitions of fragmentation trees Decompose a combinatorial fragmentation tree T [n] with leaves labelled by [n] along the spine from the root to leaf 1 into a collection of bushes by deleting each edge along the spine. By adding a conventional root edge to its base, each bush is identified with an element of T B for some B ⊆ [n], where T B is the collection of rooted combinatorial trees with #B leaves labelled by B. Each such B is associated with a unique vertex on the spine of T [n] . We list these sets of leaf labels B in order of the corresponding spinal vertices to obtain an ordered exchangeable random partition of {2, . . . , n}. The first set B in this list is the set of elements of [n] not in the block containing 1 after the first fragmentation event involving [n]. If after the first fragmentation of [n] the block [n] -B containing 1 is of size 2 or more, the next set is what remains of [n] -B after deleting the block containing 1 in the next fragmentation of [n] -B, and so on, until the last set which is the singleton {1}. If as well as cutting each edge on the spine, we cut each edge connected to a spinal vertex, each bush is further decomposed into subtrees. We thus obtain two nested exchangeable random partitions of {2, . . . , n}, which naturally extend to partitions of [n] by adding the singleton {1}, the coarse and fine spinal partitions derived from T [n] . We can include the spinal order in the coarse spinal partition to form the coarse spinal composition. Assuming that the trees T [n] , n ≥ 1, are constructed consistently from a homogeneous fragmentation process with values in the partitions of N, both partitions of [n] are consistent as n varies. Thus the coarse and fine spinal partitions may be regarded as a nested pair of random partitions of N. These partitions have natural interpretations in terms of associated partitionvalued self-similar fragmentations processes (Π(t), t ≥ 0), of any index a, in which the sequence (T [n] , n ≥ 1) is embedded. For each pair of integers i and j let the splitting time D i,j be the first time t that i and j fall in distinct blocks of Π(t). Let i, j ≥ 2. By construction, i and j fall in the same block of the coarse spinal partition if and only if D 1,i = D 1,j , whereas i and j fall in the same block of the fine spinal partition if and only if D i,j > D 1,i (this clearly implies D 1,i = D 1,j ). 
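The two criteria just stated can be turned directly into a procedure that computes both spinal partitions of {2, . . . , n} from the splitting times D i,j . The sketch below is illustrative only; it assumes that the table D comes from a genuine fragmentation, so that the splitting times satisfy the ultrametric-type inequality D i,k ≥ min(D i,j , D j,k ), which makes the fine relation transitive within each coarse block. The example values of D are made up by hand.

```python
from collections import defaultdict

# Coarse and fine spinal partitions of {2,...,n} from splitting times D[i, j]:
#   i, j in the same coarse block  iff  D[1, i] == D[1, j]
#   i, j in the same fine block    iff  D[i, j] > D[1, i]
# D is indexed by sorted pairs (i, j) with i < j.

def spinal_partitions(n, D):
    by_time = defaultdict(set)
    for i in range(2, n + 1):
        by_time[D[(1, i)]].add(i)                   # group by time of separation from 1
    coarse = [by_time[t] for t in sorted(by_time)]  # spinal order = increasing split time
    fine = []
    for block in coarse:                            # refine each coarse block
        remaining = set(block)
        while remaining:
            i = remaining.pop()
            cell = {i} | {j for j in remaining
                          if D[tuple(sorted((i, j)))] > D[(1, i)]}
            remaining -= cell
            fine.append(cell)
    return coarse, fine

# Hand-made example, n = 5: at time 1.0 the root block splits into {1,2}, {3,4}
# and {5}; the block {1,2} splits at time 1.5 and {3,4} at time 2.0.
D = {(1, 2): 1.5, (1, 3): 1.0, (1, 4): 1.0, (1, 5): 1.0,
     (2, 3): 1.0, (2, 4): 1.0, (2, 5): 1.0,
     (3, 4): 2.0, (3, 5): 1.0, (4, 5): 1.0}
coarse, fine = spinal_partitions(5, D)
print(coarse)   # [{3, 4, 5}, {2}]  -- the singleton {1} is added by convention
print(fine)     # blocks {3, 4}, {5} and {2}, up to order within each coarse block
```

Here {3, 4} and {5} sit in the same coarse block (they split away from 1 at the same time) but in different fine blocks, since 3 and 5 are separated at that very time.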
Assuming further that Π is constructed by random sampling of leaves L 1 , L 2 , . . . from some CRT (T , µ) according to µ, i and j fall in the same block of the coarse spinal partition if and only if the paths from L i and L j to the root first meet the spine of T , i.e. the path from the root to L 1 , at the same point. Besides, i and j fall in the same block of the fine spinal partition if and only if the path from L i to L j does not intersect the spine. The coarse spinal decomposition of T is the collection of equivalence classes for the random equivalence relation x ∼ y if and only if the paths from x and y to the root first meet the spine at the same point on the spine. Note that the whole spine itself carries no µ-mass, and spinal non-branchpoints (an uncountable set of singletons in this decomposition of T ) will be excluded from further consideration. The restriction of T to a typical equivalence class is a bush which can be further decomposed into trees by deleting the point on the spine, and then giving each connected component its own root where it used to be connected to the spine. The resulting random partition of T into subtrees is the fine spinal decomposition of T . We measure the size of each component of one of these partitions by its µ-mass, to obtain coarse and fine spinal mass partitions of (T , µ), which we may regard as two random elements of S ↓ . The following proposition summarizes some basic properties of these random partitions, which follow easily from the above discussion. Proposition 1 The coarse and fine spinal partitions derived from the sequence of shapes (T [n] , n ≥ 1) embedded in (T , µ) have the following properties. (i) The singleton block {1} belongs to both partitions of N, while the restrictions of these partitions to N\{1} are jointly exchangeable. (ii) The sequence of ranked limiting frequencies of each partition of N is the sequence of ranked µ-masses of the corresponding mass partition of (T , µ). We now offer a more detailed study of these two partitions, first considering the coarse spinal partition (and composition), then the fine one and its relation to the coarse one. Obviously, the fine spinal partition is identical to the coarse one if and only if the trees T [n] are binary for all n ≥ 1. The coarse spinal partition Assume throughout this section that the trees T [n] , n ≥ 1, are constructed consistently from a homogeneous fragmentation process, as when T [n] is derived from an associated continuum tree (T , µ) as the shape of the subtree spanned by L i , i ∈ [n] for L 1 , L 2 , . . . an exchangeable sample of leaves with directing measure µ. To ease notation we work with T [n+1] instead of T [n] . Let B n,1 , B n,2 , . . . , B n,Kn , {1} be the sets of leaves of the bushes derived from the coarse spinal decomposition of T [n+1] , in order of the corresponding spinal vertices. Then (B n,1 , B n,2 , . . . , B n,Kn ) is the restriction to {2, . . . , n + 1} of an exchangeable ordered random partition of {2, 3, . . .}, as studied in [START_REF] Donnelly | Consistent ordered sampling distributions: characterization and convergence[END_REF][START_REF] Gnedin | The representation of composition structures[END_REF]. Let C n := (#B n,1 , #B n,2 , . . . , #B n,Kn ). (3) It follows easily from sampling consistency of the sequence (T [n] , n ≥ 1) that (C n , n ≥ 1) is a regenerative composition structure, as defined in [START_REF] Gnedin | Regenerative composition structures[END_REF]. 
That is to say, (C n , n ≥ 1) is a sampling consistent sequence of random compositions C n of n, with the property that conditionally given the first part of C n is of size i < n, the remaining parts of C n define a random composition of ni with the same distribution as C n-i . Let S n,k := n - k j=1 #B n,j where B n,j is empty for j > K n . So (S n,k + 1, k ≥ 0) is the sequence of sizes of the fragment containing 1 as it undergoes successive fragmentations according to T [n+1] , starting with S n,0 = n and terminating with S n,k = 0 for k ≥ K n , where K n is the total number of fragmentation events experienced by the block containing 1 in T [n+1] . According to Gnedin and Pitman [START_REF] Gnedin | Regenerative composition structures[END_REF], there is the following almost sure convergence of random sets with respect to the Hausdorff metric on closed subsets of [0, 1]: {S n,k /n, k ≥ 0} a.s. -→ n→∞ {exp(-ξ t ), t ≥ 0} cl (4) where the left side is the random discrete set of values S n,k rescaled onto [0, 1], and the right side is the closure of the range of the exponential of some subordinator (ξ t , t ≥ 0). The random interval partition of [0, 1] defined by interval components of the complement of the closed range of 1e -ξ has a natural interpretation in terms of the associated CRT (T , µ): the lengths of these intervals are the strictly positive masses of components in the coarse spinal decomposition of (T , µ), in the order they appear along the spine from the root to leaf 1. We will therefore call this interval partition the coarse spinal interval partition of [0, 1] derived from (T , µ). In terms of the associated homogeneous fragmentation, the lengths of these intervals are the total masses of fragments thrown off by the mass process of the fragment containing 1, put in the order they split away from this tagged fragment. Otherwise said, exp(-ξ) is the mass process of the fragment containing 1. Since the fragmentation process has zero erosion and no sudden loss of mass, the subordinator ξ has no drift and no killing. Bertoin [START_REF] Bertoin | Homogeneous fragmentation processes[END_REF] proved that the Lévy measure of ξ is then given by Λ(dx) = exp(-x) i≥1 ν(-log s i ∈ dx), x > 0. ( 5 ) Proposition 2 Let (Π(t), t ≥ 0) be a self-similar fragmentation process, with index a ∈ R and dislocation measure ν with infinite total mass. Then the entire process (Π(t), t ≥ 0) can be constructed from the consistent sequence (T [n] , n ≥ 1) of combinatorial shapes of trees derived from Π. Proof. In view of the time-change relation between fragmentations of different indices, it suffices to consider the homogeneous case. Given the consistent family of trees (T [n] , n ≥ 1), we first use (4) to recover the closure of the range of exp(-ξ), hence also the closure of the range of ξ, the subordinator describing the evolution of the mass fragment containing 1. Since the dislocation measure has infinite mass, so does the Lévy measure of ξ. Then it is well known that the entire sample path of ξ can be measurably reconstructed from its range, up to a constant factor on the time scale (see e.g. [START_REF] Greenwood | Construction of local time and Poisson point processes from nested arrays[END_REF]). Since the distribution of ξ is determined by that of (Π(t), t ≥ 0), this constant is known. Let Π n = (Π n (t), t ≥ 0) be the restriction of (Π(t), t ≥ 0) to [n]. 
The path of ξ, and its construction ( 4) from (T [n] , n ≥ 1), determine almost surely for each n the sequence of random times t when transitions of Π n occur which change the block of Π n containing 1, and at each of these times t the block of Π n (t) containing 1 can be read from T [n] . By exchangeability, the same reconstruction can evidently be done almost surely for the block of Π n (t) containing j, for each 1 ≤ j ≤ n. But this information determines the entire path of (Π n (t), t ≥ 0), for each n, hence that of (Π(t), t ≥ 0). Corollary 3 If in the setting of Proposition 2 we have a < 0, then an associated (a, ν)fragmentation CRT (T , µ) can also be constructed from (T [n] , n ≥ 1) on the same probability space. Proof. While the construction of a self-similar fragmentation CRT in [START_REF] Haas | The genealogy of self-similar fragmentations with negative index as a continuum random tree[END_REF] from a self-similar partition-valued fragmentation process is carried out explicitly only "in distribution", it is not hard to give an almost sure construction, e.g. via Aldous's sequential construction in l 1 (see e.g. [START_REF] Aldous | The continuum random tree[END_REF] p.252). This yields an increasing sequence of trees with edge lengths R [n] that converges in distribution, hence almost surely, with respect to the Hausdorff metric on closed subsets of l 1 . The almost sure convergence of empirical measures on the leaves of R [n] to a mass measure µ is then given by [3, Lemma 7] (convergence of measures is weak convergence). We record now an explicit distributional result for the coarse spinal partition of T [n+1] , which can either be read from [START_REF] Gnedin | Regenerative composition structures[END_REF] or derived directly. Recall that n + 1 -#B n,1 is the size of the fragment containing 1 at the first branch point of T [n+1] . Let Σ(ds) := ∞ j=1 ν(s j ∈ ds) and let Λ be the Lévy measure of (ξ t , t ≥ 0), which, according to [START_REF] Aldous | Brownian bridge asymptotics for random pmappings[END_REF], is the image of sΣ(ds) via s →log s. Then by embedding in the homogeneous fragmentation, we see that P(#B n,1 = m) = Φ(n : m)/Φ(n) (1 ≤ m ≤ n) (6) where Φ(n) is the total rate of fragmentations with some effect on partitions of [n + 1], and Φ(n : m) the rate of such fragmentations in which 1 ends up in a block of size n + 1m. These rates are easily evaluated as follows: Φ(n : m) = n m 1 0 s n+1-m (1 -s) m Σ(ds) = n m ∞ 0 e -(n-m)x (1 -e -x ) m Λ(dx) (7) and Φ(n) = n m=1 Φ(n : m) = 1 0 (1 -s n )sΣ(ds) = ∞ 0 (1 -e -nx )Λ(dx). ( 8 ) From this and [START_REF] Gnedin | Regenerative composition structures[END_REF], we get the exchangeable partition probability function (EPPF) of the coarse spinal partition {B n,1 , B n,2 , . . . , B n,Kn }, i.e. the probabilities p(n 1 , ...n k ) = P({B n,1 , . . . , B n,Kn } = π), for each particular partition π of {2, ..., n + 1} in sets of sizes n 1 , ..., n k , ∀n ≥ 1, ∀(n 1 , ...n k ) partition of n. For an explicit formula, see [START_REF] Gnedin | Regenerative composition structures[END_REF], especially formulae ( 26),( 6),( 3) and ( 4). Various further properties of the coarse spinal partition can also be read from [START_REF] Gnedin | Regenerative composition structures[END_REF]. The fine spinal partition We start by observing some basic symmetry properties of this partition. Proposition 4 (i) Conditionally given the sizes of components of the fine spinal decomposition of T [n+1] , say n 1 , . . . 
, n k with k i=1 n i = n, the corresponding collection of subtrees of T [n+1] , modulo relabelling by [n 1 ], . . . , [n k ], is a collection of independent copies of T [n 1 ] , . . . , T [n k ] . (ii) Conditionally given the fine spinal mass partition of a self-similar fragmentation CRT (T , µ) with parameters (a, ν), the corresponding collection of subtrees T of T , with each T of mass m equipped with m -1 µ restricted to T , modulo isomorphism and multiplication of edge lengths by m a , is a collection of independent copies of (T , µ). Proof. Part (i) follows easily from the defining Markov (fragmentation/branching) property of T [n] . For part (ii), consider Π a partition-valued (a, ν)-fragmentation constructed from (T , µ). Let Π (i) (t) denote the block of Π(t) containing i, i ≥ 1, and recall that D 1,i denotes the first time at which 1 and i belong to distinct blocks. For all t ≥ 0, the collection of blocks (Π (i) (D 1,i +t), i ≥ 1) induces a partition of N. In the terminology of Bertoin ([10], Definition 3.4), the sequence (D 1,i , i ≥ 1) is a stopping line, and as such, satisfies the extended branching property ([10], Lemma 3.14), which ensures that given (Π (i) (D 1,i ), i ≥ 1), the processes (Π (i) (D 1,i + t), t ≥ 0), i ≥ 1, evolve respectively as (m i Π (i) (m a i t), t ≥ 0), where m i is the asymptotic frequency of Π (i) (D 1,i ), i ≥ 1, and the Π (i) s are i.i.d. copies of Π. Now, coming back to the CRT (T , µ), each component of its fine spinal partition corresponds to a fragmentation (Π (i) (D 1,i + t), t ≥ 0) for some i and obviously, can be measurably reconstructed from this fragmentation (see the proof of Corollary 3). Conditionally given the masses m i , i ≥ 1, the subtrees of the fine spinal partition are therefore independent, distributed (modulo isomorphisms) respectively as (m -a i T , m i µ (m -a i ) ), i ≥ 1, where m -a i T means that the edge lengths of T have been multiplied by m -a i and µ (m -a i ) is the image of µ by this transformation. Part (ii) of the proposition is a natural generalization of the spinal decomposition of the Brownian CRT described in [START_REF] Aldous | Brownian bridge asymptotics for random pmappings[END_REF]. When the Brownian CRT is encoded in a Brownian excursion, this corresponds to a path decomposition whereby a single excursion is decomposed into a countably infinite collection of independent copies of itself. In view of this symmetry property of the fine spinal partition, it is natural to look for some more explicit description of this decomposition, such as its EPPF or the distribution on S ↓ of the corresponding mass partition. While such descriptions are known for the Brownian CRT, and more generally for all binary self-similar fragmentation CRTs according to the previous section, they appear to be difficult to obtain in general. But searching for conditions which simplify the structure of the fine spinal partition of (T , µ) leads naturally to consideration of further symmetry properties, and then to interesting examples with these properties for which explicit computations can be made. Consider first the fine partition of the set of leaves in some block of the coarse spinal decomposition of T [n+1] . By recursive arguments, it is enough to discuss the fine partition of the first block of the spinal decomposition. 
For each s ∈ S ↓ let P s denote the probability measure governing an exchangeable random partition Π of N whose ranked frequencies are equal to s, and for a measure ν on S ↓ let P ν (•) = S ↓ P s (•)ν(ds) be the corresponding distribution of Π as a mixture of Kingman's paintbox partitions. For each n the distribution of Π n is determined by the formula P ν (Π n = {B 1 , . . . , B k }) = p ν (#B 1 , . . . , #B k ) for every partition {B 1 , . . . , B k } of [n] into k ≥ 1 parts, for some function p ν (n 1 , . . . , n k ) of compositions (n 1 , . . . , n k ) of positive integers n. We refer here to [START_REF] Pitman | Combinatorial stochastic processes[END_REF] or [START_REF] Bertoin | Random fragmentation and coagulation processes[END_REF] for a specific formula for p ν (n 1 , . . . , n k ). In particular, p ν (1, 1) = S ↓ (1-i≥1 s 2 i )ν(ds). Note that p ν (n 1 , . . . , n k ) < ∞ for all n 1 , . . . , n k ∈ N, k ≥ 2, if and only if p ν (1, 1) < ∞, i.e. if and only if S ↓ (1 -s 1 )ν(ds) < ∞. Definition 1 The function p ν is called the exchangeable partition rate function (EPRF) associated with ν. If ν is a probability measure, then so is P ν , and p ν is known as an exchangeable partition probability function (EPPF). Note that we have the addition rule p ν (n 1 , . . . , n k ) = p ν (n 1 + 1, n 2 , . . . , n k ) + . . . + p ν (n 1 , . . . , n k-1 , n k + 1) + p ν (n 1 , . . . , n k , 1). The following lemma presents a basic decomposition in some generality. Lemma 5 Let ν be a dislocation measure on S ↓ with associated EPRF p ν . Then for every k ≥ 2 and every composition n 1 , . . . , n k of n ≥ 2 into at least two parts, p ν (n 1 , . . . , n k ) = g(n, n 1 )p ν(n,n 1 ) (n 2 , . . . , n k ) (9) for some function g(n, n 1 ) and some family of probability measures ν(n, n 1 ) on S ↓ indexed by 1 ≤ n 1 ≤ n -1. Proof. Let Π be a homogeneous fragmentation with dislocation measure ν. The result is obtained by conditioning on the size of the block B 1 containing 1. We (have to) take g(n, n 1 ) as the total rate associated with the formation of a particular block B 1 of n 1 out of n elements. Then n-1 n 1 -1 g(n, n 1 ) = Φ(n -1 : nn 1 ) as in [START_REF] Basdevant | Ruelle's probability cascades seen as a fragmentation process[END_REF], so that Φ(n -1) = n-1 n 1 =1 n -1 n 1 -1 g(n, n 1 ) = P ν (Π n = {[n]}) = S ↓   1 - ∞ j=1 s n j   ν(ds), (10) as in [START_REF] Bertoin | Homogeneous fragmentation processes[END_REF], is the total rate of formation of partitions of [n] with at least 2 parts. Then p ν(n,n 1 ) (n 2 , . . . , n k ) is the conditional probability, given the particular set B 1 , that the remaining nn 1 elements are partitioned as they must be to make a particular partition of [n] into blocks of sizes n 1 , . . . , n k . To be more precise, we can take ν(n, n 1 )(ds) = 1 g(n, n 1 ) S ↓ i≥1 r n 1 i (1 -r i ) n-n 1 δ ri /(1-r i ) (ds)ν(dr), where ri is the vector r with component r i omitted. By Kingman's paintbox representation and conditioning on the colour i of the first block, we then get for all partitions with block sizes (n 1 , . . . , n k ) in order of least element p ν(n,n 1 ) (n 2 , . . . , n k ) = S ↓ p s (n 2 , . . . , n k ) ν(n, n 1 )(ds) = 1 g(n, n 1 ) S ↓ i≥1 r n 1 i (1 -r i ) n-n 1 p ri /(1-r i ) (n 2 , . . . , n k )ν(dr) = 1 g(n, n 1 ) p ν (n 1 , . . . , n k ), where by convention p s = p δs . 
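Kingman's paintbox construction used here is straightforward to simulate. The sketch below is our own illustrative code (the specific mass partition s and the number of trials are arbitrary choices): it samples Π_n under P_s, treating any leftover mass 1 − Σ_i s_i as dust producing singletons, and uses it to check p_s(1, 1) = 1 − Σ_i s_i² by Monte Carlo.

```python
import random
from collections import defaultdict

def paintbox_partition(s, n, rng=random):
    """Kingman paintbox: exchangeable partition of {1,...,n} with ranked frequencies s."""
    cum, total = [], 0.0
    for x in s:
        total += x
        cum.append(total)
    blocks, dust = defaultdict(set), []
    for i in range(1, n + 1):
        u = rng.random()
        j = next((k for k, c in enumerate(cum) if u < c), None)
        if j is None:
            dust.append({i})        # u fell in the dust of mass 1 - sum(s): singleton block
        else:
            blocks[j].add(i)
    return list(blocks.values()) + dust

# Monte Carlo check of p_s(1, 1) = 1 - sum_i s_i^2 for a fixed paintbox s
s, trials = [0.5, 0.3, 0.2], 200_000
hits = sum(len(paintbox_partition(s, 2)) == 2 for _ in range(trials))
print(hits / trials, 1 - sum(x * x for x in s))   # both close to 0.62
```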
This discussion simplifies greatly for measures ν with the special symmetry property introduced in the following definition: Definition 2 Let ν be a measure on S ↓ , and let ν be a probability measure on S ↓ . Say that ν has ν as its factor, if ν(n, n 1 ) in (9) can be chosen identically equal to ν for every 1 ≤ n 1 < n, that is p ν (n 1 , . . . , n k ) = g(n, n 1 )p ν (n 2 , . . . , n k ) (11) for every composition n 1 , . . . , n k of n ≥ 2 into at least 2 parts, and some function g(n, n 1 ). Note that ν may be sigma-finite, but that ν is always assumed to be a probability measure. It is obvious that if ν has factor ν, then ν is unique. A rich class of measures ν which admit a factor ν is the class of Poisson-Dirichlet measures considered in the next section. It is an open problem [START_REF] Pitman | Combinatorial stochastic processes[END_REF]Problem 3.7], even for probability measures, to describe all measures ν on S ↓ which admit a factor ν. Note that all binary dislocation measures trivially admit a factor, as well as ordered Dirichlet(a, . . . , a) including the Dirac mass at (1/m, . . . , 1/m). The latter are just the remaining members of the Ewens-Pitman two-parameter family. Following the formalism of [START_REF] Pitman | Coalescents with multiple collisions[END_REF]Corollary 13], given two random elements V and V ′ of S ↓ , and a probability distribution ν on S ↓ , say that V ′ is a ν-fragmentation of V if the joint distribution of V and V ′ is the same as if V ′ is derived from V by shattering each fragment of V independently in proportions determined by ν. Theorem 6 Let ν be a dislocation measure on S ↓ , let (T , µ) be some self-similar CRT derived from fragmentation according to ν, and let ν be a probability distribution on S ↓ . Then the following two conditions are equivalent: (i) the measure ν has ν as a factor; (ii) the fine spinal mass partition of (T , µ) is a ν-fragmentation of the coarse spinal mass partition of (T , µ). Proof. According to [START_REF] Pitman | Coalescents with multiple collisions[END_REF]Lemma 35], the fine spinal partition is a ν-fragmentation of the coarse spinal partition if and only if, for all n ≥ 1, in passing from the coarse spinal partition of [n] generated by T [n] to the fine one, within each block of the coarse partition the fine partition is distributed according to P ν , independently between blocks of the coarse partition. But due to the fragmentation property of the trees T [n] , n ≥ 1, this amounts to the relation [START_REF] Bertoin | Discretization methods for homogeneous fragmentations[END_REF] between ν and ν. Poisson-Dirichlet fragmentations We now turn to a particular family of fragmentation processes, namely the Poisson-Dirichlet fragmentations, characterized by dislocation measures of type PD * (α, θ), 0 < α < 1, θ > -2α, as defined below by [START_REF] Ford | Probabilities on cladograms: introduction to the alpha model[END_REF]. This family generalizes the family of previously studied stable fragmentations ( [START_REF] Miermont | Self-similar fragmentations derived from the stable tree. I. Splitting at heights[END_REF], [START_REF] Miermont | Self-similar fragmentations derived from the stable tree. II. Splitting at nodes[END_REF]), constructed from the stable trees (T β , µ β ) with index β, 1 < β < 2. 
These stable CRTs were introduced and studied by Duquesne and Le Gall [START_REF] Duquesne | Random trees, Lévy processes and spatial branching processes[END_REF], [START_REF] Duquesne | Probabilistic and fractal aspects of Lévy trees[END_REF] to which we refer for a rigorous construction. Roughly, T β arises as the limit in distribution as n → ∞ of rescaled critical Galton-Watson trees T n , conditioned to have n vertices, with edge-lengths n 1/β-1 , and an offspring distribution (η k , k ≥ 0) such that η k ∼ Ck -1-β as k → ∞. It is endowed with a (random) probability measure µ β which is the limit as n → ∞ of the empirical measure on the vertices of T n . Miermont [START_REF] Miermont | Self-similar fragmentations derived from the stable tree. I. Splitting at heights[END_REF] shows that the partition-valued process constructed by random sampling of leaves L 1 , L 2 , . . . from (T β , µ β ) according to µ β is a self-similar fragmentation with index 1/β -1, and dislocation measure ν β defined for all non-negative measurable function f on S ↓ by S ↓ f (s)ν β (ds) = β 2 Γ(2 -1/β) Γ(2 -β) E T f ∆ 1 T , ∆ 2 T , ... (12) (and no erosion). Here T = ∞ i=1 ∆ i where ∆ 1 > ∆ 2 > • • • are the points of a Poisson process on (0, ∞) with intensity (βΓ(1-1/β)) -1 x -1/β-1 dx. Besides, cutting the stable tree T β at nodes (see [START_REF] Miermont | Self-similar fragmentations derived from the stable tree. II. Splitting at nodes[END_REF]), Miermont obtained a self-similar fragmentation with index 1/β and the same dislocation measure ν β . Definition and factorization property For 0 ≤ α < 1, θ > -α, let PD(α, θ) denote the two-parameter Poisson-Dirichlet distribution on S ↓ , defined as the distribution of the decreasing rearrangement of its size-biased presentation, which is W 1 , (1 -W 1 )W 2 , (1 -W 1 )(1 -W 2 )W 3 , . . . (13) for W i which are independent beta(1α, iα + θ) variables. The formula for the corresponding EPPF is [START_REF] Pitman | Combinatorial stochastic processes[END_REF]Th.3.2.] p PD(α,θ) (n 1 , ..., n k ) = α k-1 [1 + θ/α] k-1 [1 + θ] n-1 k i=1 [1 -α] n i -1 (14) for every composition (n 1 , ..., n k ) of n, where [x] n = Γ(x + n)/Γ(x) is a rising factorial. It is evident by inspection of this formula and (11) that the probability measure PD(α, θ) admits PD(α, θ + α) as a factor for every 0 < α < 1 and θ > -α. Following Miermont [START_REF] Miermont | Self-similar fragmentations derived from the stable tree. I. Splitting at heights[END_REF] we now consider the rescaled measure PD * (α, θ) := Γ(1 + θ/α) Γ(1 + θ) PD(α, θ) (15) which is defined in the first instance for 0 < α < 1 and -α < θ. It is known [START_REF] Pitman | Combinatorial stochastic processes[END_REF]Corollary 3.9.] that for 0 < α < 1 there is the absolute continuity relation PD * (α, θ)(ds) = (S α (s)) θ/α PD(α, 0)(ds) [START_REF] Evans | Rayleigh processes, real trees, and root growth with re-grafting[END_REF] where S α (s) is the α-diversity which is almost surely associated to a sequence s = (s 1 , s 2 , . . .) with distribution PD(α, 0) by the formula S α (s) := Γ(1 -α) lim j→∞ js α j . The PD(α, θ) distribution is recovered from ( 16) for -α < θ by normalization as in (15). The α-diversity S α , which has a Mittag-Leffler distribution (see e.g. [31, formula 0.43]), appears variously disguised in different contexts, for example as a local time variable, or again as S α = T -α for a positive stable variable T of index α. 
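The EPPF (14) is easy to evaluate with log-gamma functions, and the factorization property just noted, that PD(α, θ) admits PD(α, θ + α) as a factor, can then be checked numerically: the ratio p_PD(α,θ)(n_1, ..., n_k) / p_PD(α,θ+α)(n_2, ..., n_k) should depend only on n and n_1. The sketch below is our own code (the parameter values are arbitrary); the main text resumes below with the construction of PD(α, 0) from a stable subordinator.

```python
from math import lgamma, exp, log

def log_rising(x, n):
    """log of the rising factorial [x]_n = Gamma(x + n) / Gamma(x)."""
    return lgamma(x + n) - lgamma(x)

def eppf_pd(alpha, theta, parts):
    """EPPF (14) of PD(alpha, theta) at block sizes parts = (n_1, ..., n_k)."""
    n, k = sum(parts), len(parts)
    lp = ((k - 1) * log(alpha) + log_rising(1 + theta / alpha, k - 1)
          - log_rising(1 + theta, n - 1)
          + sum(log_rising(1 - alpha, ni - 1) for ni in parts))
    return exp(lp)

# factor property: the ratio below depends only on the totals n and n_1, not on how
# the remaining n - n_1 elements are partitioned
alpha, theta, n1 = 0.6, 0.4, 3
for rest in [(2, 2), (3, 1), (1, 1, 1, 1), (4,)]:
    ratio = eppf_pd(alpha, theta, (n1,) + rest) / eppf_pd(alpha, theta + alpha, rest)
    print(rest, round(ratio, 12))    # the same value on every line
```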
Indeed, if such a T is constructed as T = ∞ i=1 ∆ i where ∆ 1 > ∆ 2 > • • • are the points of a Poisson process on (0, ∞) with intensity α(Γ(1 -α)) -1 x -α-1 dx, then (∆ 1 /T, ∆ 2 /T, . . .) = d PD(α, 0) and, according to [31, formula 4.45], S α (∆ 1 /T, ∆ 2 /T, . . .) = T -α a.s. so that for every non-negative measurable function f of s = (s 1 , s 2 , . . .) ∈ S ↓ , S ↓ f (s)PD * (α, θ)(ds) = E T -θ f (∆ 1 /T, ∆ 2 /T, . . .) . (17) Lemma 7 For each 0 < α < 1, let PD * (α, θ) be the measure defined on S ↓ for each real θ by either [START_REF] Evans | Rayleigh processes, real trees, and root growth with re-grafting[END_REF] or [START_REF] Evans | Subtree prune and regraft: a reversible real tree-valued Markov process[END_REF]. Then for -2α < θ, this measure PD * (α, θ) is also the unique measure with no mass at (1, 0, 0, . . .) whose EPRF is given for k ≥ 2 by p PD * (α,θ) (n 1 , . . . , n k ) = α k-1 Γ(k + θ/α) Γ(n + θ) k i=1 [1 -α] n i -1 (18) and for k = 1 by the same formula for -α < θ, and by ∞ for -2α < θ ≤ -α. Basic integrability properties of this extended family of Poisson-Dirichlet measures are S ↓ PD * (α, θ)(ds) < ∞ ⇔ θ > -α; (19) S ↓ (1 -s 1 )PD * (α, θ)(ds) < ∞ ⇔ θ > -2α. (20) For each choice of (α, θ) with θ > -2α the measure PD * (α, θ) has the probability distribution PD(α, θ + α) as its factor. Proof. Following Miermont [START_REF] Miermont | Self-similar fragmentations derived from the stable tree. I. Splitting at heights[END_REF], we observe from ( 14) and ( 15) that the formula [START_REF] Ford | Probabilities on cladograms: introduction to the alpha model[END_REF] holds in the first instance for all θ > -α, and that the right side of ( 18) is analytic in θ for θ > -2α. It follows easily that [START_REF] Ford | Probabilities on cladograms: introduction to the alpha model[END_REF] holds for all such θ. The fact [START_REF] Gnedin | Regenerative composition structures[END_REF] is elementary. As for (20), we have seen in Section 2.2 that this integrability condition holds if and only if the expressions in [START_REF] Ford | Probabilities on cladograms: introduction to the alpha model[END_REF] are finite for every choice of n 1 , • • • , n k with k ≥ 2, and this is clear by inspection of [START_REF] Ford | Probabilities on cladograms: introduction to the alpha model[END_REF]. The infinite measure PD * (α, -α) was already used and studied by Basdevant [START_REF] Basdevant | Ruelle's probability cascades seen as a fragmentation process[END_REF] in the context of Ruelle's probability cascades. Remarks. 1). For 0 < α < 1, θ > -α, the EPPF [START_REF] Duquesne | Probabilistic and fractal aspects of Lévy trees[END_REF] gives P PD(α,θ) (Π n = {[n]}) = 1 - [1 -α] n-1 [1 + θ] n-1 (21) and hence P PD * (α,θ) (Π n = {[n]}) = Γ(1 + θ/α) Γ(1 + θ) 1 - [1 -α] n-1 [1 + θ] n-1 (22) in the first instance for 0 < α < 1, θ > -α, and then by analytic continuation for 0 < α < 1, θ > -2α, with values of the right side defined by continuity for θ = -α or θ = -1. To see that the left side of ( 22) is analytic in this range, observe that for each n this function of (α, θ) is just a finite sum of the functions in [START_REF] Ford | Probabilities on cladograms: introduction to the alpha model[END_REF] weighted by combinatorial coefficients. 2). 
From the fact ( 13) that a size-biased pick from PD(α, θ) has beta(1α, α + θ) distribution, we can write down s ∞ j=1 PD(α, θ)(s j ∈ ds) = Γ(1 + θ) Γ(1 -α)Γ(α + θ) s -α (1 -s) α+θ-1 ds (0 < s < 1) and hence for -2α < θ by analytic continuation s ∞ j=1 PD * (α, θ)(s j ∈ ds) = α Γ(2 + θ/α) Γ(1 -α)Γ(1 + α + θ) s -α (1 -s) α+θ-1 ds (0 < s < 1). (23) The image of this measure by the change of variable x =log s is the corresponding Lévy measure Λ α,θ (dx) = α Γ(2 + θ/α) Γ(1 -α)Γ(1 + α + θ) e -x(1-α) (1 -e -x ) α+θ-1 dx (0 < x < ∞). (24) From Theorem 6 we now deduce: Corollary 8 For each 0 < α < 1, θ > -2α, let (T α,θ , µ) be some CRT derived from fragmentation process with dislocation measure PD * (α, θ). The sequence of discrete fragmentation trees (T [n] , n ≥ 1) embedded in (T α,θ , µ) is governed by fragmentations of [n] according the EPPF obtained by normalization of formula (18) by formula [START_REF] Greenwood | Construction of local time and Poisson point processes from nested arrays[END_REF]. The fine spinal mass partition of (T α,θ , µ) is a PD(α, α + θ)-fragmentation of the coarse spinal mass partition of (T α,θ , µ), which is derived from the range of 1e -ξ for the pure jump subordinator ξ with Lévy measure [START_REF] Haas | Fragmentation processes with an initial mass converging to infinity[END_REF] and Laplace exponent Φ α,θ (z) =        αΓ(2 + θ/α) (α + θ)Γ(1 -α) (1 + θ)Γ(1 -α) Γ(2 + θ) - (z + 1 + θ)Γ(z + 1 -α) Γ(z + 2 + θ) , θ = -α, α Γ(1 -α) Γ ′ (z + 1 -α) Γ(z + 1 -α) - Γ ′ (1 -α) Γ(1 -α) , θ = -α. (25) Last, for θ ∈ (-2α, -α) we have an interesting regime where Proposition 4 applies along with the asymptotic theory of consistent Markov branching models in [START_REF] Haas | Continuum tree asymptotics of discrete fragmentations and applications to phylogenetic models[END_REF]. Specifically, Corollary 9 For θ ∈ (-2α, -α), let (T [n] , n ≥ 1) be a Markov branching model derived from a self-similar fragmentation with dislocation measure PD * (α, θ). Adding unit edge lengths to T [n] , there is the convergence in probability |α + θ|Γ(1 -α) αΓ(2 + θ/α) × T [n] n |θ+α| → T (θ+α,PD * (α,θ)) (26) for the Gromov-Hausdorff topology, where the limit is a self-similar fragmentation CRT of index θ + α and dislocation measure PD * (α, θ). Proof. Note from [START_REF] Haas | Loss of mass in deterministic and random fragmentations[END_REF] that PD * (α, θ)(s 1 ≤ 1 -ε) ∼ αΓ(2 + θ/α) |α + θ|Γ(1 -α)Γ(1 + α + θ) ε α+θ as ε ↓ 0. Then Theorem 2 of [START_REF] Haas | Continuum tree asymptotics of discrete fragmentations and applications to phylogenetic models[END_REF] applies (Λ α,θ clearly also satisfies ∞ x ρ Λ α,θ (ds) < ∞ for some ρ > 0), which gives (26). Stable fragmentations The case 1/2 < α < 1 is of special interest. Then -2α < -1 < -α, so we can take θ = -1 in [START_REF] Haas | Loss of mass in deterministic and random fragmentations[END_REF], and then the Lévy measure ( 24) is of the form Λ(dx) = c b (1 -e -x ) -b-1 e -bx dx, (27) for some constant c b > 0 and b = 1α. It is known [START_REF] Gnedin | Regenerative composition structures[END_REF] that if ξ is a subordinator with this Lévy measure, for any b ∈ (0, 1), then the closure of the range of e -ξ is reversible and identical in law with the zero set of a Bessel bridge of dimension 2 -2b. The corresponding distribution of ranked lengths of intervals is then known to be PD(b, b) ([31, Corollary 4.9]). Miermont [29, p. 
444] found the same Lévy measure, up to a scaling constant, for the subordinator associated with the self-similar fragmentation of index α -1 ∈ (-1/2, 0) that he derived from the stable CRT T β of index β = 1/α ∈ (1, 2). Here we have reversed this line of reasoning, and constructed T β directly from combinatorial considerations, without relying on the relation between the height process of T β and the stable process of index β, which was the basis of the work of Duquesne and Le Gall [START_REF] Duquesne | Random trees, Lévy processes and spatial branching processes[END_REF][START_REF] Duquesne | Probabilistic and fractal aspects of Lévy trees[END_REF]. As byproducts of this argument, we have a number of refinements of earlier work on T β , which we summarize in the following corollary of previous results. Corollary 10 For each α ∈ (1/2, 1), corresponding to β = 1/α ∈ (1, 2) the dislocation measure PD * (α, -1) derived from the two-parameter Poisson-Dirichlet family as in [START_REF] Ford | Probabilities on cladograms: introduction to the alpha model[END_REF] has PD(α, α-1) as a factor. Let (T [n] , n = 1, 2 . . .) be a consistent family of combinatorial trees governed by fragmentation according to PD * (α, -1). Then 1. The tree T [n] is identical in law to the combinatorial tree with n leaves derived by sampling according the mass measure in the stable tree T β of index β, and T β may be constructed from the sequence of combinatorial trees (T [n] , n ≥ 1), as indicated in [START_REF] Haas | Continuum tree asymptotics of discrete fragmentations and applications to phylogenetic models[END_REF]Theorem 2], or Corollary 3. 2. The distribution of the coarse spinal mass partition of T β is PD(1α, 1α). 3. The coarse spinal interval partition of [0, 1] derived from T β is exchangeable, with the same distribution as the collection of excursion intervals of a Bessel bridge of dimension 2α. The (1-α)-diversity of this interval partition is a multiple of the height of a leaf picked at random from the mass measure of T β . This height has the same tilted Mittag-Leffler distribution as the local time at 0 of the Bessel bridge of dimension 2α. 4. The corresponding fine spinal mass partition of T β is a PD(α, α -1)-fragmentation of the coarse spinal mass partition. 5. The unconditional distribution of the fine spinal mass partition of T β is PD(α, 1α). 6. The conditional distribution of the coarse spinal mass partition of T β given the fine one is provided by the operator of PD(γ, γ) coagulation, as defined in [START_REF] Pitman | Coalescents with multiple collisions[END_REF], for γ = (1α)/α. 7. Conditionally given the fine spinal mass partition of T β , the corresponding collection of subtrees obtained by removing the spine, modulo isomorphism and rescaling trees T of mass m to m -(1-α) T , is a collection of independent copies of T β . Proof. All but items 5 and 6 follow immediately from the previous development. Those two items are read from item 4 by the more general coagulation/fragmentation duality relation for the PD family provided by [START_REF] Pitman | Coalescents with multiple collisions[END_REF]Theorem 12]. For more information about the distribution of random partitions in the PD family, see [START_REF] Pitman | The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator[END_REF] and [START_REF] Gnedin | Asymptotic laws for compositions derived from transformed subordinators[END_REF]. 
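Items 4 and 5 of Corollary 10 can be probed by simulation using the size-biased stick-breaking presentation (13). The sketch below is our own code, not from the paper; the truncation level of the stick-breaking, the number of trials and the summary statistic E[Σ_i s_i²] are illustrative choices. It fragments a PD(1 − α, 1 − α) mass partition by independent PD(α, α − 1) partitions and compares E[Σ_i s_i²] of the result with a direct PD(α, 1 − α) sample; both should be close to (1 − α)/(2 − α).

```python
import random

def pd_masses(alpha, theta, m=100, rng=random):
    """First m masses of PD(alpha, theta) in size-biased order, via the sticks (13)."""
    masses, rest = [], 1.0
    for i in range(1, m + 1):
        w = rng.betavariate(1 - alpha, theta + i * alpha)
        masses.append(rest * w)
        rest *= 1.0 - w
    return masses          # the untracked remainder 'rest' is small for large m

def sum_sq(ms):
    return sum(x * x for x in ms)

alpha, trials = 0.7, 400   # Corollary 10 needs alpha in (1/2, 1)
direct = sum(sum_sq(pd_masses(alpha, 1 - alpha)) for _ in range(trials)) / trials
two_stage = 0.0
for _ in range(trials):
    coarse = pd_masses(1 - alpha, 1 - alpha)                      # coarse spinal masses
    two_stage += sum(c * c * sum_sq(pd_masses(alpha, alpha - 1))  # PD(alpha, alpha-1) shattering
                     for c in coarse)
two_stage /= trials
print(direct, two_stage, (1 - alpha) / (2 - alpha))   # all three close to 0.2308
```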
In the limiting case when β ↑ 2, the above results reduce to the description of the interval partition derived from the spinal decomposition of the Brownian CRT, which is well known to be distributed like the partition generated by excursions of a Brownian bridge. See [START_REF] Aldous | Brownian bridge asymptotics for random pmappings[END_REF] for applications of this decomposition to the asymptotics of random mappings. The structure of the fine spinal partition of T β for 1 < β < 2 has no analogue for β = 2, because in the Brownian tree all splits are binary. Invariance under uniform re-rooting It is of particular interest to consider fragmentation trees with additional symmetry properties. A well-known property of the stable tree T β with index β ∈ (1, 2], established by Aldous [START_REF] Aldous | The continuum random tree. II. An overview[END_REF] for the Brownian CRT with β = 2, and by Duquesne and Le Gall [START_REF] Duquesne | Probabilistic and fractal aspects of Lévy trees[END_REF], Prop. 4.8, for β ∈ (1, 2), is invariance under uniform re-rooting. See also [15]. Let us first introduce the discrete analogue of this property. Definition 3 Let (T [n] , n ≥ 1) be a consistent Markov branching model. We say that the Markov branching model is invariant under uniform re-rooting if for all n ≥ 1, T [n] has the same law as T^(root↔1) [n] . Theorem 11 (i) [START_REF] Aldous | The continuum random tree. II. An overview[END_REF][START_REF] Duquesne | Probabilistic and fractal aspects of Lévy trees[END_REF] For all β ∈ (1, 2], the stable tree (T β , µ β ) is invariant under uniform re-rooting. (ii) Let (T , µ) be a self-similar fragmentation CRT with parameters (a, ν) and suppose it is invariant under uniform re-rooting. Then there exists some β ∈ (1, 2] and some constant C > 0 such that ν = Cν β and a = 1/β -1. Remark. According to [15], a stronger invariance result is available for the height functions H of stable trees (and more generally Lévy trees), which is that H^[u] , as defined in [START_REF] Gall | Conditioned Brownian trees[END_REF], is distributed as H for each fixed u ∈ [0, 1]. See also [START_REF] Gall | Conditioned Brownian trees[END_REF] for the Brownian CRT. The rest of this section is devoted to the proof of Theorem 11. Spinal decomposition and proof of Theorem 11 (i) The first step is to consider the spinal decomposition of trees invariant under uniform re-rooting: one consequence of this invariance is that the coarse spinal interval partition of [0, 1] derived from the tree is reversible (in fact an exchangeable interval partition of [0, 1], see [START_REF] Gnedin | Regenerative composition structures[END_REF]). The class of trees with this property is significantly restricted by the following proposition. Proposition 12 Let (T [n] , n ≥ 1) be a sequence of combinatorial trees associated with some self-similar fragmentation CRT (T , µ) with dislocation measure ν, and let ξ be the subordinator describing the evolution of the mass fragment containing 1 in an associated homogeneous fragmentation process (cf. Section 2.1). (i) The coarse spinal composition of n derived from T [n+1] (as defined in (3)) is reversible for each n if and only if ξ has a Lévy measure of the form Λ(dx) = c(1 - e^{-x})^{-b-1} e^{-bx} dx, (29) for some 0 < b < 1 and some constant c > 0. (ii) There cannot exist a self-similar fragmentation CRT with a Lévy measure of this form when b > 1/2. Proof. Part (i) is read from [START_REF] Gnedin | Regenerative composition structures[END_REF], Theorem 10.1, just using the regenerative property of the coarse spinal compositions. For part (ii), combining (5), Λ(dx) = e^{-x} Σ_{i≥1} ν(-log s_i ∈ dx), x > 0, with the form (29) of Λ [START_REF] Miermont | Self-similar fragmentations derived from the stable tree. I. Splitting at heights[END_REF], the change of variables z = e^{-x} (so x = -log z and dx = -dz/z) gives Σ_{i≥1} ν(s_i ∈ dz) = c(1 - z)^{-b-1} z^{b-2} dz, z ∈ (0, 1).
Since ν is supported by decreasing sequences with ∞ i=1 s i = 1, s i ≤ 1/i for all i ≥ 1. In particular, ν(s 1 ∈ dz) = c(1 -z) -b-1 z b-2 dz, z ∈ (1/2, 1). ( 30 ) Using the fact that for z ∈ (0, 1/2) z -b (1 -z) b-2 > (1 -z) -b-1 z b-1 ⇐⇒ (1 -z) 2b-1 > z 2b-1 ⇐⇒ b > 1/2, we see that for b > 1/2 (0,1) (1 -z)ν(s 1 ∈ dz) ≥ c (1/2,1) (1 -z) -b z b-2 dz = c (0,1/2) z -b (1 -z) b-2 dz > c (0,1/2) (1 -z) -b-1 z b-1 dz ≥ i≥2 (0,1) zν(s i ∈ dz) by [START_REF] Miermont | Self-similar fragmentations derived from the stable tree. II. Splitting at nodes[END_REF]. On the other hand, we have (0,1) (1 -z)ν(s 1 ∈ dz) = (0,1) zν   i≥2 s i ∈ dz   = S ↓ i≥2 s i ν(ds) = i≥2 S ↓ s i ν(ds) = i≥2 (0,1) zν(s i ∈ dz), which contradicts the inequality obtained in the preceding calculation. The Lévy measure associated with the tagged fragment of some fragmentation tree invariant under uniform re-rooting is therefore of the form (29) for some 0 < b ≤ 1/2. We recall that the Lévy measures associated to β-stable trees are of this form for b = 1 -1/β (see Section 3.2 for 1 < β < 2 and [START_REF] Bertoin | Self-similar fragmentations[END_REF] for β = 2) which covers the range (0, 1/2] when β varies in [START_REF] Aldous | The continuum random tree[END_REF][START_REF] Aldous | The continuum random tree. II. An overview[END_REF]. Theorem 11 (i). Let (T β , µ β ) be some stable CRT with index β ∈ [START_REF] Aldous | The continuum random tree[END_REF][START_REF] Aldous | The continuum random tree. II. An overview[END_REF]. According to the previous proposition, its coarse spinal interval partition of [0,1] is reversible. We then conclude with Items 4 and 7 of Corollary 10. Proof of Characterization of the dislocation measure and Proof of Theorem 11(ii) In general the Lévy measure does not characterize the dislocation measure of the fragmentation tree, i.e. two different dislocation measures may lead to the same Lévy measure Λ, see Haas [START_REF] Haas | Loss of mass in deterministic and random fragmentations[END_REF] for an example. However, this complication no longer arises when the set of fragmentation trees is restricted to the ones invariant under uniform re-rooting. Proposition 13 Let (T , µ) be a self-similar fragmentation CRT with parameters (a, ν) and suppose it is invariant under uniform re-rooting. Then the dislocation measure ν can be reconstructed from the Lévy measure Λ associated to the tagged fragment. Proof. Consider Π 0 , the homogeneous fragmentation constructed from Π by time-changes. The probabilities p n describe the ordered sizes of blocks of Π 0 n at the first time when it differs from [n]. Let D 0 1,i , 2 ≤ i ≤ n, be the first time in this homogeneous fragmentation at which 1 and i belong to separate fragments. Let (λ 0 (t), t ≥ 0) be the decreasing process of masses of fragments containing 1. The law of λ 0 = exp(-ξ) is determined by Λ, as well as that of (λ 0 , D 0 1,2 , ..., D 0 1,n ) since P(D 0 1,2 > s 2 , ..., D 0 1,n > s n | λ 0 ) = λ 0 (s 2 )...λ 0 (s n ), for all sequences of times (s 2 , ..., s n ). In particular, knowing Λ, we know the probabilities P(D 0 1,2 < min 3≤i≤n D 0 1,i ) = p n (1, n -1). In the particular case when n = 3, this gives p 3 (1, 2) and then p 3 (1, 1, 1), since 3p 3 (1, 2) + p 3 (1, 1, 1) = 1. Remark. It is not hard to see, with a specific example, that in general Λ does not characterize the probabilities p 4 (n 1 , ..., n k ), n 1 + ... + n k = 4. Proof of Proposition 13. 
The dislocation measure is determined, up to a scaling constant, by the probabilities p n (n 1 , ..., n k ), ∀n ≥ 2, ∀(n 1 , ..., n k ) composition of n with k ≥ 2. The scaling constant is then obtained from Λ, using [START_REF] Aldous | Brownian bridge asymptotics for random pmappings[END_REF]. The goal here is therefore to check that under the re-rooting assumption, all the probabilities p n can be recovered from Λ. Suppose Λ is known. We proceed by induction on n. For n = 2, p 2 (1, 1) = 1. For n = 3, the probabilities p 3 are known, by Lemma 16. Suppose now that the p m 's are known, ∀m ≤ n - 1. By Lemma 16, p n (1, n - 1) is also known. Then, by Lemma 15, ∀(n 2 , ..., n k ) composition of n - 2, p n (2, n 2 , ..., n k ) p 2 (1, 1) = p n (1, n - 1) p n-1 (1, n 2 , ..., n k ), which gives p n (2, n 2 , ..., n k ). The probabilities p n (n 1 , ..., n k ) with n 1 ≥ 3 are obtained in the same manner, by induction on n 1 , thanks to Lemma 15 (note that p n 1 (1, n 1 - 1) ≠ 0, ∀n 1 ). Therefore, for all compositions (n 1 , ..., n k ) ≠ (1, ..., 1), k ≥ 2, of n, we have p n (n 1 , ..., n k ), since there is at least one n i ≠ 1 and, by symmetry, one can suppose it is n 1 . It remains to get p n (1, ..., 1), which can be done by using the equality [START_REF] Pitman | Combinatorial stochastic processes[END_REF]. Proof of Theorem 11 (ii). By Corollary 14, since the law of the CRT (T , µ) is invariant under uniform re-rooting, there exists some β ∈ (1, 2] and some constant C such that ν = Cν β . (The Corollary 14 invoked here combines Proposition 12 with Proposition 13: together they show that if (T , µ) is invariant under uniform re-rooting, then ν = Cν β for some β ∈ (1, 2] and some C > 0.) It remains to prove that the index of self-similarity is a = 1/β - 1. Up to now, we only used the combinatorial properties of reduced trees encoded in the dislocation measure ν, and not the further structure of the CRT (T , µ) that involves the edge lengths and depends on the scaling parameter a. To conclude that a = 1/β - 1, we must consider edge lengths. Given the CRT (T , µ) rooted at ρ and the leaves L 1 , L 2 , the reduced tree R(T , L 1 , L 2 ) can be described by the edge-lengths D 1,2 , D 1 - D 1,2 , D 2 - D 1,2 , where D 1,2 is the separation time of 1 and 2 in Π and D i the first time at which the block containing i is reduced to a singleton, i = 1, 2. By invariance under re-rooting, D 1,2 must have the same law as D 1 - D 1,2 . We already know that this is true for the index 1/β - 1, from Duquesne-Le Gall's Theorem 4.8 in [START_REF] Duquesne | Probabilistic and fractal aspects of Lévy trees[END_REF]. Using time-changes relating Π and its homogeneous counterpart Π 0 (these time-changes are given specifically in [START_REF] Bertoin | Self-similar fragmentations[END_REF]), we have D 1,2 = ∫_0^{D^0_{1,2}} |Π^0_{(1)}(t)|^{-a} dt and D 1 - D 1,2 = ∫_{D^0_{1,2}}^{D^0_1} |Π^0_{(1)}(t)|^{-a} dt = ∫_0^{D^0_1 - D^0_{1,2}} |Π^0_{(1)}(t + D^0_{1,2})|^{-a} dt, where D^0_{1,2} is the first separation time of 1 and 2 in Π 0 . By the strong Markov property of Π (see [START_REF] Bertoin | Self-similar fragmentations[END_REF]), |Π^0_{(1)}(t + D^0_{1,2})| has the same distribution as |Π^0_{(1)}(D^0_{1,2})| · |Π̃^0_{(1)}(t)|, where Π̃ 0 is an independent copy of Π 0 . Therefore, D 1 - D 1,2 = |Π^0_{(1)}(D^0_{1,2})|^{-a} D̃ 1 , where D̃ 1 has the same distribution as D 1 and is independent of |Π^0_{(1)}(D^0_{1,2})|^{-a} . Assuming that D 1,2 has the same distribution as D 1 - D 1,2 and taking expectations in the preceding identity, we obtain E[|Π^0_{(1)}(D^0_{1,2})|^{-a}] E[D 1 ] = E[D 1 - D 1,2 ] = E[D 1,2 ] = E[D 1 ](1 - E[|Π^0_{(1)}(D^0_{1,2})|^{-a}]). For a < 0 we may cancel the common factor E[D 1 ] < ∞ (D 1 is an exponential functional of a subordinator). It remains to notice that the function f(a) = E[|Π^0_{(1)}(D^0_{1,2})|^{-a}] is a strictly monotone function with limit 0 at -∞ and 1 at 0, so that the equation f(a) = 1 - f(a) has a unique solution a, which has to be the index a = 1/β - 1.
Figure 1: Two fragmentations of [7] represented as trees with nodes labelled by subsets of [7].
Figure 2: A fragmentation tree T [7] and its re-rooted counterpart T^(root↔1) [7].
Figure 3: This configuration always happens with positive probability.
Figure 4: By the invariance under re-rooting assumption, these two configurations are equally likely to occur.
For every finite B with #B ≥ 2 there is a partition Π B of B into at least two parts B 1 , . . . , B k , called the children of B. The family (T B ) is called • Markovian (or a Markov branching model) if, given Π B = {B 1 , . . . , B k }, the k subtrees of T B above B are independent and distributed as T B 1 , . . . , T B k , for all partitions {B 1 , . . . , B k } of B; • consistent if for every A ⊂ B, the restriction to A of T B is distributed like T A ; • binary if every A ∈ T B has either 0 or 2 children with probability one, for all B. Now we take B = [n]. The collection of vertices at graph distance m ≥ 0 above the first branch point form a partition of a subset of [n] that we extend to a partition Π^(n)_m of [n] by adding a singleton {j} for each leaf j at height below m. We refer to (Π^(n)_m , m ≥ 0) as the partition-valued discrete fragmentation process associated with T [n] . For a tree T [n] with leaves labelled by [n], let T^(root↔1) [n] denote the tree with leaves labelled by [n] obtained by re-rooting T [n] at 1 and re-labelling the original root by 1 (see Figure 2). Note that due to the exchangeability of leaf labels, leaf 1 is indeed a uniformly picked leaf of the de-labelled combinatorial tree shape, so that invariance under uniform re-rooting is in fact a property of de-labelled combinatorial tree shapes. Acknowledgements. We thank Grégory Miermont for many interesting discussions. This research is supported in part by EPSRC grant GR/T26368/01 and N.S.F. grant DMS-0405779. Definition 4 Let (T , µ) be a CRT rooted at ρ and, conditionally on (T , µ), let (L 1 , L 2 , ...) be a sample of leaves i.i.d. with distribution µ. Let then T^[L 1 ] denote the tree T re-rooted at L 1 . We say that (T , µ) is invariant under uniform re-rooting if for all n ≥ 1, the law of the reduced subtree R(T , L 1 , ..., L n ) of T spanned by the root ρ and L 1 , ..., L n is invariant under re-rooting at L 1 , i.e. R(T^[L 1 ] , ρ, L 2 , ..., L n ) has the same law as R(T , L 1 , L 2 , ..., L n ), as an identity in law of combinatorial tree shapes with assignment of edge lengths.
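The discrete re-rooting operation T [n] ↦ T^(root↔1) [n] described above is a purely combinatorial manipulation: reverse the parent pointers along the path from the old root to leaf 1, then exchange the two labels. The sketch below is our own illustrative code and tree encoding, not from the paper.

```python
def reroot_at_leaf1(parent, root, leaf1):
    """Return the parent map of T^(root<->1): re-root at leaf 1, relabel old root as leaf 1."""
    path = [leaf1]                     # path from leaf 1 up to the old root
    while path[-1] != root:
        path.append(parent[path[-1]])
    new_parent = dict(parent)
    new_parent[leaf1] = None           # leaf 1's vertex becomes the new root ...
    for child, par in zip(path[:-1], path[1:]):
        new_parent[par] = child        # ... and the pointers along this path are reversed
    swap = {leaf1: root, root: leaf1}  # exchange the two labels
    return {swap.get(v, v): (None if p is None else swap.get(p, p))
            for v, p in new_parent.items()}

parent = {'r': None, 'a': 'r', 'L1': 'a', 'b': 'a', 'L2': 'b', 'L3': 'b'}
print(reroot_at_leaf1(parent, 'r', 'L1'))
# {'L1': 'a', 'a': 'r', 'r': None, 'b': 'a', 'L2': 'b', 'L3': 'b'}
```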
Clearly, the invariance under uniform re-rooting of (T , µ) implies the invariance under uniform re-rooting of the sequence (T [n] , n ≥ 1) of combinatorial trees associated with (T , µ). We will see that the converse is false (see the arguments after [START_REF] Stanley | Enumerative combinatorics[END_REF]). Remark. In [START_REF] Aldous | The continuum random tree. II. An overview[END_REF], [15] a different formalism is used for the definition of invariance under uniform re-rooting, via height functions of ordered CRTs. Briefly, assuming that the CRT (T , µ) can be encoded into a continuous real-valued function H on [0, 1], with H(0) = H(1) = 0, such that • µ is the measure induced by the projection of the Lebesgue measure on this quotient space then the invariance under uniform re-rooting is defined via with the convention u+ x = u+ x-1 when u+ x > 1. It was proved in [START_REF] Haas | The genealogy of self-similar fragmentations with negative index as a continuum random tree[END_REF] that the structures of the combinatorial subtrees R(T , L 1 , ..., L n ), n ≥ 1, derived from some self-similar fragmentation CRT (T , µ) can be enriched with a consistent "uniform" order so as to encode the fragmentation CRTs into a continuous height function as described above, provided the dislocation measure is infinite. In that context, it is not hard to check that the height function definition and Definition 4 above are equivalent. Details are left to the reader. The goal of this section is twofold: first to give a combinatorial proof, different from that given in [START_REF] Aldous | The continuum random tree. II. An overview[END_REF], [START_REF] Duquesne | Probabilistic and fractal aspects of Lévy trees[END_REF], [15], of the fact that the stable trees are invariant under uniform re-rooting; second to prove that among the self-similar fragmentation CRTs, the stable trees are the only ones, up to a scaling factor, to satisfy this invariance property. For the Brownian CRT (T 2 , µ 2 ), we recall that the partition-valued process constructed by random sampling of leaves L 1 , L 2 , . . . according to µ 2 is a self-similar fragmentation with index a = -1/2 and dislocation measure ν 2 defined by ν 2 (s 1 + s 2 = 1) = 0 and (see [START_REF] Bertoin | Self-similar fragmentations[END_REF]). The dislocation measure ν β associated to the stable tree T β , 1 < β < 2 is given by [START_REF] Donnelly | Consistent ordered sampling distributions: characterization and convergence[END_REF] and its self-similar index is 1/β -1. In order to prove Proposition 13, we first set up two lemmas. In the rest of this subsection, the CRT (T , µ) with parameters (a, ν) is fixed and supposed to be invariant under uniform re-rooting. A sample of leaves L i , i ≥ 1 is given and we consider the associated partition-valued fragmentation Π. We call p n the probabilities where t n is the first time when Π n differs from [n] and (n 1 , ..., n k ) denotes any composition of n with k ≥ 2 (in other words, the probabilities p n are the EPPFs obtained by conditioning P ν on {Π n = {[n]}} in the proof of Lemma 5). Note in particular that where the sum is over all compositions of n, see [START_REF] Pitman | Combinatorial stochastic processes[END_REF]Exercise 2.1.3]. Lemma 15 For all compositions (n 1 , ..., n k ) of n with k ≥ 2 with the convention, when n 1 = 1, that the probabilities involving expressions with a term n 1 -1 = 0 are all equal to 1. Proof. 
Consider the following fragmentation scheme : the first time at which the block {1, ..., n} splits, it splits in blocks {1, ..., n 1 }, {n 1 + 1, ..., n 1 + n 2 }, ..., {n 1 + ... + n k-1 + 1, ..., n}; then the first of these blocks splits in {1}, {2, ..., n 1 }. We are not really interested in the further evolution of {2, ..., n 1 }, {n 1 + 1, ..., n 1 + n 2 }, ..., {n 1 + ... + n k-1 + 1, ..., n}, let us just say that it is in a configuration which happens with a (strictly) positive probability, say r n (n 1 , ..., n k ) (e.g. evolutions as in Figure 3). Consider then the discrete tree with leaf labels obtained from this fragmentation scheme. The probability that the tree with n leaves R(T , L 1 , L 2 , ..., L n ) has this labelled shape is exactly Now, look at the same tree rooted at L 1 , i.e. R(T L 1 , ρ, L 2 , ..., L n ), cf. Figure 4. Starting from the root L 1 , the corresponding fragmentation scheme evolves as follows : {ρ, 2, ..., n} first splits in {2, ..., n 1 }, {ρ, n 1 + 1, ..., n}. Then {ρ, n 1 + 1, ..., n} splits in {ρ}, {n 1 + 1, ..., n 1 + n 2 }, ..., {n 1 + ... + n k-1 + 1, ..., n}. And the blocks {2, ..., n 1 }, {n 1 + 1, ..., n 1 + n 2 }, ..., {n 1 + ... + n k-1 + 1, ..., n} then all split according to the same configuration as in the previous scheme. By invariance under uniform re-rooting, the subtree R(T [L 1 ] , ρ, L 2 , ..., L n ) is distributed as R(T , L 1 , L 2 , ..., L n ), and therefore, the probability that R(T [L 1 ] , ρ, L 2 , ..., L n ) has this labelled shape is p n (n 1 -1, n 2 + ... + n k + 1)p n-n 1 +1 (1, n 2 , ..., n k )r n (n 1 , ..., n k ). ( By invariance under re-rooting, the probabilities in [START_REF] Pitman | The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator[END_REF] and [START_REF] Schroeder | Vier combinatorische Probleme[END_REF] are equal. This yields [START_REF] Pitman | Coalescents with multiple collisions[END_REF], since r n (n 1 , ..., n k ) = 0. Remark. It is easy to check that the probabilities p n associated to the stable trees, which are obtained by normalization of formula ( 18) by ( 22) with θ = -1, satisfy the relations [START_REF] Pitman | Coalescents with multiple collisions[END_REF].
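The identity of Lemma 15 can also be checked numerically for the stable trees, whose first-split probabilities p n are obtained, as in the Remark above, by normalising the rate (18) with θ = -1 over the proper set partitions of [n]. The sketch below is our own code; the way the identity is written out (p n (n 1 , ..., n k ) p n 1 (1, n 1 - 1) = p n (n 1 - 1, n - n 1 + 1) p n-n 1 +1 (1, n 2 , ..., n k )) is our reconstruction from the proof above, and the parameter choices are arbitrary.

```python
from math import lgamma, exp, log

def set_partitions(elements):
    """All set partitions of a list of distinct elements (standard recursion)."""
    if len(elements) == 1:
        yield [elements]
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def eprf_star(alpha, theta, parts):
    """Unnormalised splitting rate (18) of PD*(alpha, theta) for k >= 2 blocks."""
    n, k = sum(parts), len(parts)
    lp = ((k - 1) * log(alpha) + lgamma(k + theta / alpha) - lgamma(n + theta)
          + sum(lgamma(1 - alpha + ni - 1) - lgamma(1 - alpha) for ni in parts))
    return exp(lp)

def first_split_eppf(alpha, theta, n):
    """p_n: probability of each particular first split of [n], by normalising (18)."""
    Z = sum(eprf_star(alpha, theta, [len(b) for b in pi])
            for pi in set_partitions(list(range(n))) if len(pi) >= 2)
    return lambda parts: eprf_star(alpha, theta, parts) / Z

alpha, theta = 0.7, -1.0               # the stable tree with beta = 1/alpha
n, parts = 6, (3, 2, 1)                # split of [6] with n_1 = 3
n1, rest = parts[0], parts[1:]
p_n, p_n1, p_m = (first_split_eppf(alpha, theta, m) for m in (n, n1, n - n1 + 1))
lhs = p_n(parts) * p_n1((1, n1 - 1))
rhs = p_n((n1 - 1, n - n1 + 1)) * p_m((1,) + rest)
print(lhs, rhs)                        # agree, as expected for a re-rooting invariant tree
```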
74,496
[ "830393" ]
[ "60", "57500", "246706" ]
01158169
en
[ "info" ]
2024/03/04 23:41:50
2017
https://hal.science/hal-01158169v2/file/jsc_slat_final.pdf
Xiaohao Cai email: [email protected] Raymond Chan email: [email protected] Mila Nikolova (CMLA) email: [email protected] Tieyong Zeng email: [email protected] A Three-Stage Approach for Segmenting Degraded Color Images: Smoothing, Lifting and Thresholding (SLaT) Keywords: Mumford-Shah model, convex variational models, multiphase color image segmentation, color spaces Introduction IMAGE segmentation is a fundamental and challenging task in image processing and computer vision. It can serve as a preliminary step for object recognition and interpretation. The goal of image segmentation is to group parts of the given image with similar characteristics together. These characteristics include, for example, edges, intensities, colors and textures. For a human observer, image segmentation seems obvious, but consensus among different observers is seldom found. The problem is much more difficult to solve by a computer. A nice overview of region-based and edge-based segmentation methods is given in [16]. In our work we investigate the image segmentation problem for color images corrupted by different types of degradations: noise, information loss and blur. Let Ω ⊂ R^2 be a bounded open connected set, and f : Ω → R^d with d ≥ 1 be a given vector-valued image. For example, d = 1 for gray-scale images and d = 3 for the usual RGB (red-green-blue) color images. One has d > 3 in many cases such as in hyperspectral imaging [START_REF] Plaza | Recent advances in techniques for hyperspectral image processing[END_REF] or in medical imaging [START_REF] Townsend | Multimodality imaging of structure and function[END_REF]. In this paper, we are mainly concerned with color images (i.e. d = 3), though our approach can be extended to higher-dimensional vector-valued images. Without loss of generality, we restrict the range of f to [0, 1]^3 and hence f ∈ L^∞(Ω)^3. In the literature, various studies have been carried out and many techniques have been considered for image segmentation [START_REF] Shi | Normalized cuts and image segmentation[END_REF]14,[START_REF] Grady | Random walks for image segmentation. Pattern Analysis and Machine Intelligence[END_REF][START_REF] Levinshtein | Turbopixels: fast superpixels using geometric flows[END_REF][START_REF] Li | A multiphase image segmentation method based on fuzzy region competition[END_REF][START_REF] Storath | Fast partitioning of vector-valued images[END_REF][START_REF] Tai | Soft color segmentation and its applications[END_REF]. For gray-scale images, i.e. d = 1, Mumford and Shah proposed in [START_REF] Mumford | Boundary detection by minimizing functionals[END_REF][START_REF] Mumford | Optimal approximations by piecewise smooth functions and associated variational problems[END_REF] an energy minimization problem for image segmentation which finds optimal piecewise smooth approximations. More precisely, this problem was formulated in [START_REF] Mumford | Optimal approximations by piecewise smooth functions and associated variational problems[END_REF] as E_MS(g, Γ) := (λ/2) ∫_Ω (f - g)^2 dx + (µ/2) ∫_{Ω\Γ} |∇g|^2 dx + Length(Γ), (1) where λ and µ are positive parameters, and g : Ω → R is continuous in Ω \ Γ but may be discontinuous across the sought-after boundary Γ. Here, the length of Γ can be written as H^1(Γ), the one-dimensional Hausdorff measure in R^2.
Model [START_REF] Bar | Mumford and Shah model and its applications to image segmentation and image restoration[END_REF] has attractive properties even though finding a globally optimal solution remains an open problem and it is an active area of research. A recent overview can be found in [START_REF] Bar | Mumford and Shah model and its applications to image segmentation and image restoration[END_REF]. For image segmentation, the Chan-Vese model [14] pioneered a simplification of functional [START_REF] Bar | Mumford and Shah model and its applications to image segmentation and image restoration[END_REF] where Γ partitions the image domain into two constant segments and thus ∇g = 0 on Ω \ Γ . More generally, for K constant regions Ω i , i ∈ {1, . . . , K}, the multiphase piecewise constant Mumford-Shah model [START_REF] Vese | A multiphase level set framework for image segmentation using the Mumford and Shah model[END_REF] reads as E PCMS {Ω i , c i } K i=1 = λ 2 K i=1 Ω i (f -c i ) 2 dx + 1 2 K i=1 Per(Ω i ), (2) where Per(Ω i ) is the perimeter of Ω i in Ω, all Ω i 's are pairwise disjoint and Ω = K i=1 Ω i . The Chan-Vese model where K = 2 in (2) has many applications for two-phase image segmentation. Model ( 2) is a nonconvex problem, so the obtained solutions are in general local minimizers. To overcome the problem, convex relaxation approaches [START_REF] Bresson | Fast global minimization of the active contour/snake model[END_REF][START_REF] Chambolle | A convex approach to minimal partitions[END_REF]12,[START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF], graph cut method [START_REF] Grady | Reformulating and optimizing the Mumford-Shah functional on a graph -faster, lower energy solution[END_REF] and fuzzy membership functions [START_REF] Li | A multiphase image segmentation method based on fuzzy region competition[END_REF] were proposed. After [14], many approaches have decomposed the segmentation process into several steps and here we give a brief overview of recent work in this direction. The paper [START_REF] Kay | Color image segmentation by the vector-valued Allen-Cahn phase-field model: a multigrid solution[END_REF] performs a simultaneous segmentation of the input image into arbitrarily many pieces using a modified version of model ( 1) and the final segmented image results from a stopping rule using a multigrid approach. In [START_REF] Li | A level set method for image segmentation in the presence of intensity inhomogeneity with application to MRI[END_REF], after involving a bias field estimation, a level set segmentation method dealing with images with intensity inhomogeneity was proposed and applied to MRI images. In [START_REF] Cardelino | A contrario selection of optimal partitions for image segmentation[END_REF], an initial hierarchy of regions was obtained by greedy iterative region merging using model [START_REF] Benninghoff | Efficient image segmentation and restoration using parametric curve evolution with junctions and topology changes[END_REF]; the final segmentation is obtained by thresholding this hierarchy using hypothesis testing. 
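Before continuing with related work, note that a discrete version of the piecewise-constant energy (2) is easy to write down and evaluate for a candidate labelling: the data term sums squared deviations from each region's mean colour, and the perimeter term can be approximated by counting label changes between neighbouring pixels. The sketch below is our own illustrative discretization, not the one used later in the paper.

```python
import numpy as np

def pcms_energy(f, labels, lam):
    """Discrete piecewise-constant Mumford-Shah (Potts) energy of a labelling.

    f: (H, W, d) image, labels: (H, W) integer phases.  The constants c_i are the
    per-phase mean colours; since sum_i Per(Omega_i) counts every interface twice,
    (1/2) sum_i Per(Omega_i) is approximated by the number of label changes between
    4-neighbours.
    """
    data = 0.0
    for i in np.unique(labels):
        mask = labels == i
        c_i = f[mask].mean(axis=0)                 # optimal constant for phase i
        data += ((f[mask] - c_i) ** 2).sum()
    cut = ((labels[1:, :] != labels[:-1, :]).sum()
           + (labels[:, 1:] != labels[:, :-1]).sum())
    return 0.5 * lam * data + float(cut)

# toy example: a noisy two-colour image and its ground-truth two-phase labelling
rng = np.random.default_rng(0)
f = np.zeros((40, 40, 3)); f[:, 20:] = [1.0, 0.5, 0.0]
f = f + 0.05 * rng.standard_normal(f.shape)
labels = np.repeat((np.arange(40) >= 20).astype(int)[None, :], 40, axis=0)
print(pcms_energy(f, labels, lam=10.0))
```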
The paper [START_REF] Benninghoff | Efficient image segmentation and restoration using parametric curve evolution with junctions and topology changes[END_REF] first determined homogeneous regions in the noisy image with a special emphasis on topological changes; then each region was restored using model [START_REF] Benninghoff | Efficient image segmentation and restoration using parametric curve evolution with junctions and topology changes[END_REF]. Further multistage methods extending model (2) can be found in [START_REF] Tai | Wavelet frame based multiphase image segmentation[END_REF] where wavelet frames were used, and in [START_REF] Cai | Multiclass segmentation by iterated ROF thresholding[END_REF] which was based on iterative thresholding of the minimizer of the ROF functional [START_REF] Rudin | Nonlinear total variation based noise removal algorithms[END_REF], just to cite a few. In the discrete setting, the piecewise constant Mumford-Shah model (2) amounts to the classical Potts model [START_REF] Potts | Some generalized order-disorder transformations[END_REF]. The use of this kind of functionals for image segmentation was pioneered by Geman and Geman in [19]. In [START_REF] Storath | Fast partitioning of vector-valued images[END_REF], a coupled Potts model was used for direct partitioning of images using a convergent minimization scheme. In [START_REF] Cai | A two-stage image segmentation method using a convex variant of the Mumford-Shah model and thresholding[END_REF], a conceptually different two-stage method for the segmentation of gray-scale images was proposed. In the first stage, a smoothed solution g is extracted from the given image f by minimizing a non-tight convexification of the Mumford-Shah model [START_REF] Bar | Mumford and Shah model and its applications to image segmentation and image restoration[END_REF]. The segmented image was obtained in the second stage by applying a thresholding technique to g. This approach was extended in [11] to images corrupted by Poisson and Gamma noises. Since the basic concept of our method in this paper is similar, we will give more details on [START_REF] Cai | A two-stage image segmentation method using a convex variant of the Mumford-Shah model and thresholding[END_REF]11] in Section 2. Extending or conceiving segmentation methods for color images is not a simple task since one needs to discriminate segments with respect to both luminance and chrominance information. The two-phase Chan-Vese model [14] was generalized to deal with vector-valued images in [13] by combining the information in the different channels using the data fidelity term. Many methods are applied in the usual RGB color space [START_REF] Cai | Variational image segmentation model coupled with image restoration achievements[END_REF]13,16,[START_REF] Jung | Multiphase image segmentation via Modica-Mortola phase transition[END_REF][START_REF] Kay | Color image segmentation by the vector-valued Allen-Cahn phase-field model: a multigrid solution[END_REF][START_REF] Martin | A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics[END_REF][START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF][START_REF] Storath | Fast partitioning of vector-valued images[END_REF], among others. It is often mentioned that the RGB color space is not well adapted to segmentation because for real-world images the R, G and B channels can be highly correlated. 
In [START_REF] Rotaru | Color image segmentation in HSI space for automotive applications[END_REF], RGB images are transformed into the HSI (hue, saturation, and intensity) color space in order to perform segmentation. In [START_REF] Benninghoff | Efficient image segmentation and restoration using parametric curve evolution with junctions and topology changes[END_REF] a general segmentation approach was developed for gray-value images and further extended to color images in the RGB, the HSV (hue, saturation, and value) and the CB (chromaticity-brightness) color spaces. However, a study on this point in [START_REF] Paschos | Perceptually uniform color spaces for color texture analysis: an empirical evaluation[END_REF] has shown that the Lab (perceived lightness, red-green and yellow-blue) color space defined by the CIE (Commission Internationale de l'Eclairage) is better adapted for color image segmentation than the RGB and the HSI color spaces. In [START_REF] Cardelino | A contrario selection of optimal partitions for image segmentation[END_REF] RGB input images were first converted to Lab space. In [START_REF] Wang | A global/local affinity graph for image segmentation[END_REF] color features were described using the Lab color space and texture using histograms in RGB space. A careful examination of the methods that transform a given RGB image to another color space (HSI, CB, Lab, ...) before performing the segmentation task shows that these algorithms are always applied only to noise-free RGB images (though these images unavoidably contain quantization and compression noise). For instance, this is the case of [START_REF] Benninghoff | Efficient image segmentation and restoration using parametric curve evolution with junctions and topology changes[END_REF][START_REF] Cardelino | A contrario selection of optimal partitions for image segmentation[END_REF][START_REF] Rotaru | Color image segmentation in HSI space for automotive applications[END_REF][START_REF] Wang | A global/local affinity graph for image segmentation[END_REF], among others. One of the main reasons is that if the input RGB image is degraded, the degradation would be hard to control after a transformation to another color space [START_REF] Paschos | Perceptually uniform color spaces for color texture analysis: an empirical evaluation[END_REF]. Our goal is to develop an image segmentation method with the following properties:
(a) it works on vector-valued (color) images possibly corrupted with noise, blur and missing data;
(b) it is initialization independent and non-supervised (the number of segments is not fixed in advance);
(c) it obtains an image adapted for segmentation using convex methods;
(d) the segmentation is done at the last stage, so there is no need to solve the previous stage when the required number of segments is changed;
(e) it takes into account perceptual edges between colors and between intensities, so as to detect vector-valued objects with edges and also objects without edges.
Contributions. The main contribution of this paper is to propose a segmentation method having all these properties. Goals (a)-(d) lead us to explore possible extensions of the methods [START_REF] Cai | A two-stage image segmentation method using a convex variant of the Mumford-Shah model and thresholding[END_REF], [11] to vector-valued (color) images. Goal (e) requires finding a way to use information from perceptual color spaces even though our input images are corrupted; see goal (a). Let V1 and V2 be two color spaces.
Our method has the following three steps:
1) Let the given degraded image be in V1. The convex variational model [START_REF] Cai | A two-stage image segmentation method using a convex variant of the Mumford-Shah model and thresholding[END_REF], [11] is applied in parallel to each channel of V1. This yields a restored smooth image. We show that the model has a unique solution.
2) The second stage consists of color dimension lifting: we transform the smooth color image obtained at Stage 1 to a secondary color space V2 that provides us with complementary information. Then we combine these images into a new vector-valued image composed of all the channels from the color spaces V1 and V2.
3) According to the desired number of phases K, we apply a multichannel thresholding to the combined V1-V2 image to obtain a segmented image.
We call our method "SLaT" for Smoothing, Lifting and Thresholding. Unlike the methods that perform segmentation in a different color space like [START_REF] Benninghoff | Efficient image segmentation and restoration using parametric curve evolution with junctions and topology changes[END_REF][START_REF] Cardelino | A contrario selection of optimal partitions for image segmentation[END_REF][START_REF] Rotaru | Color image segmentation in HSI space for automotive applications[END_REF][START_REF] Vandenbroucke | Color image segmentation by pixel classification in an adapted hybrid color space. application to soccer image analysis[END_REF][START_REF] Wang | A global/local affinity graph for image segmentation[END_REF], we can deal with degraded images thanks to Stage 1, which yields a smooth image that we can transform to another color space. We will fix V1 to be the RGB color space since one usually has RGB color images. We use the Lab color space [START_REF] Luong | Color in computer vision[END_REF] as the secondary color space V2 since it is often recommended for color segmentation [START_REF] Cardelino | A contrario selection of optimal partitions for image segmentation[END_REF], 16, [START_REF] Paschos | Perceptually uniform color spaces for color texture analysis: an empirical evaluation[END_REF]. The crucial importance of the dimension lifting Stage 2 is illustrated in Fig. 1: it provides additional information on the color image so that in all cases we can obtain very good segmentation results. The number of phases K is needed only in Stage 3. Its value can reasonably be selected based on the RGB image obtained at Stage 1. Extensive numerical tests on synthetic and real-world images have shown that our method outperforms state-of-the-art variational segmentation methods like [START_REF] Li | A multiphase image segmentation method based on fuzzy region competition[END_REF][START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF][START_REF] Storath | Fast partitioning of vector-valued images[END_REF] in terms of segmentation quality, speed and parallelism of the algorithm, and the ability to segment images corrupted by different kinds of degradation. Outline. In Section 2, we briefly review the models in [START_REF] Cai | A two-stage image segmentation method using a convex variant of the Mumford-Shah model and thresholding[END_REF], [11]. Our SLaT segmentation method is presented in Section 3. In Section 4, we provide experimental results on synthetic and real-world images. Concluding remarks are given in Section 5.
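To make the three-step structure concrete, here is a minimal, self-contained Python/NumPy sketch of a SLaT-style pipeline. It is not the authors' implementation: the convex variational smoothing of Stage 1 is replaced by a Gaussian filter, the Lab lifting of Stage 2 by a naive luminance/opponent-color stand-in, and Stage 3 by a few Lloyd (K-means) iterations; all function names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stage1_smooth(f, sigma=2.0):
    # crude stand-in for the channel-wise convex smoothing of Stage 1;
    # each channel is processed independently and could run in parallel
    return np.stack([gaussian_filter(f[..., i], sigma) for i in range(f.shape[-1])], axis=-1)

def stage2_lift(g):
    # stand-in for the dimension lifting of Stage 2: append a crude luminance
    # and two opponent-color channels, then rescale every channel to [0, 1]
    r, gr, b = g[..., 0], g[..., 1], g[..., 2]
    extra = np.stack([(r + gr + b) / 3.0, r - gr, 0.5 * (r + gr) - b], axis=-1)
    lifted = np.concatenate([g, extra], axis=-1)
    flat = lifted.reshape(-1, lifted.shape[-1])
    return ((flat - flat.min(0)) / np.maximum(flat.max(0) - flat.min(0), 1e-12)).reshape(lifted.shape)

def stage3_threshold(g_star, K, n_iter=20, seed=0):
    # stand-in for the multichannel thresholding of Stage 3: plain Lloyd iterations
    X = g_star.reshape(-1, g_star.shape[-1])
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)].copy()
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(0)
    return labels.reshape(g_star.shape[:2])

# toy usage: a degraded RGB image would go in place of the random array
f = np.random.rand(64, 64, 3)
phases = stage3_threshold(stage2_lift(stage1_smooth(f)), K=2)
```

Note that, as in the method described above, the number of phases K enters only in the last call, so different values of K can be tried without redoing the first two stages.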
2 Review of the Two-stage Segmentation Methods in [START_REF] Cai | A two-stage image segmentation method using a convex variant of the Mumford-Shah model and thresholding[END_REF], [11]

The methods in [START_REF] Cai | A two-stage image segmentation method using a convex variant of the Mumford-Shah model and thresholding[END_REF], [11] for the segmentation of gray-scale images are motivated by the observation that one can obtain a good segmentation by properly thresholding a smooth approximation of the given image. Thus in their first stage, these methods solve a minimization problem of the form

$$\inf_{g \in W^{1,2}(\Omega)} \left\{ \frac{\lambda}{2}\int_\Omega \Phi(f, g)\,dx + \frac{\mu}{2}\int_\Omega |\nabla g|^2\,dx + \int_\Omega |\nabla g|\,dx \right\}, \qquad (3)$$

where Φ(f, g) is the data fidelity term, and µ and λ are positive parameters. We note that model (3) is a convex non-tight relaxation of the Mumford-Shah model in [START_REF] Bar | Mumford and Shah model and its applications to image segmentation and image restoration[END_REF]. Paper [START_REF] Cai | A two-stage image segmentation method using a convex variant of the Mumford-Shah model and thresholding[END_REF] considers Φ(f, g) = (f - Ag)^2, where A is a given blurring operator; when f is degraded by Poisson or Gamma noise, the statistically justified choice Φ(f, g) = Ag - f log(Ag) is used in [11]. Under a weak assumption, the functional in (3) has a unique minimizer, say ḡ, which is a smooth approximation of f. The second stage is to use the K-means algorithm [START_REF] Kanungo | An efficient k-means clustering algorithm: analysis and implementation[END_REF] to determine the thresholds for segmentation. These methods have important advantages: they can segment degraded images and the minimizer ḡ is unique. Further, the segmentation stage being independent of the optimization problem (3), one can change the number of phases K without solving (3) again.

3 SLaT: Our Segmentation Method for Color Images

Let f = (f_1, . . . , f_d) be a given color image with channels f_i : Ω → R, i = 1, . . . , d. For f an RGB image, d = 3. This given image f is typically a blurred and noisy version of an original unknown image. It can also be incomplete: we denote by Ω_0^i the open nonempty subset of Ω where the given f_i is known for channel i. Our SLaT segmentation method consists of the three stages described next.

First Stage: Recovery of a Smooth Image

First, we restore each channel f_i of f separately by minimizing the functional E below,

$$E(g_i) = \frac{\lambda}{2}\int_\Omega \omega_i \cdot \Phi(f_i, g_i)\,dx + \frac{\mu}{2}\int_\Omega |\nabla g_i|^2\,dx + \int_\Omega |\nabla g_i|\,dx, \quad i = 1, \dots, d, \qquad (4)$$

where |·| stands for the Euclidean norm and ω_i(·) is the characteristic function of Ω_0^i, i.e.

$$\omega_i(x) = \begin{cases} 1, & x \in \Omega_0^i, \\ 0, & x \in \Omega \setminus \Omega_0^i. \end{cases} \qquad (5)$$

For Φ in (4) we consider the following options: (i) Φ(f, g) = (f - Ag)^2, the usual choice; (ii) Φ(f, g) = Ag - f log(Ag) if the data are corrupted by Poisson noise. Theorem 1 below proves the existence and the uniqueness of the minimizer of (4). In view of (4) and (5), we define the linear operator (ω_i A) by

$$(\omega_i A) : u(x) \in L^2(\Omega) \mapsto \omega_i(x)(Au)(x) \in L^2(\Omega). \qquad (6)$$

Theorem 1. Let Ω be a bounded connected open subset of R^2 with a Lipschitz boundary. Let A : L^2(Ω) → L^2(Ω) be bounded and linear. For i ∈ {1, . . . , d}, assume that f_i ∈ L^2(Ω) and that Ker(ω_i A) ∩ Ker(∇) = {0}, where Ker stands for null-space. Then (4) with either Φ(f_i, g_i) = (f_i - Ag_i)^2 or Φ(f_i, g_i) = Ag_i - f_i log(Ag_i) has a unique minimizer ḡ_i ∈ W^{1,2}(Ω).

Proof. First consider Φ(f_i, g_i) = (f_i - Ag_i)^2.
Using (6), E(g_i) defined in (4) can be rewritten as

$$E(g_i) = \frac{\lambda}{2}\int_\Omega \big(\omega_i \cdot f_i - (\omega_i A)g_i\big)^2 dx + \frac{\mu}{2}\int_\Omega |\nabla g_i|^2\,dx + \int_\Omega |\nabla g_i|\,dx. \qquad (7)$$

Noticing that ω_i · f_i ∈ L^2(Ω) and that (ω_i A) : L^2(Ω) → L^2(Ω) is linear and bounded, existence and uniqueness in this case follow from arguments analogous to those for the Poisson fidelity treated next. Now consider Φ(f_i, g_i) = Ag_i - f_i log(Ag_i). Then

$$E(g_i) = \frac{\lambda}{2}\int_\Omega \Big(\omega_i \cdot (Ag_i) - (\omega_i \cdot f_i)\log(Ag_i)\Big)\,dx + \frac{\mu}{2}\|\nabla g_i\|^2_{L^2(\Omega)} + \|\nabla g_i\|_{L^2(\Omega)}. \qquad (8)$$

1) Existence: Since W^{1,2}(Ω) is a reflexive Banach space and E(g_i) is convex and lower semi-continuous, by [17, Proposition 1.2] we need to prove that E(g_i) is coercive on W^{1,2}(Ω), i.e. that E(g_i) → +∞ as ‖g_i‖_{W^{1,2}(Ω)} := ‖g_i‖_{L^2(Ω)} + ‖∇g_i‖_{L^2(Ω)} → +∞. The function Ag_i ↦ (Ag_i - f log Ag_i) is strictly convex with a minimizer pointwise satisfying Ag_i = f ∈ [0, 1], hence Φ(f_i, g_i) ≥ 0. Thus ‖∇g_i‖_{L^2(Ω)} is upper bounded by E(g_i) > 0 for any g_i ∈ W^{1,2}(Ω) and f ≠ 0. Using the Poincaré inequality, see [18], we have

$$\|g_i - g_{i\Omega}\|_{L^2(\Omega)} \le C_1 \|\nabla g_i\|_{L^2(\Omega)} \le C_1 E(g_i), \qquad (9)$$

where C_1 > 0 is a constant and g_{iΩ} = (1/|Ω|) ∫_Ω g_i dx. Let us set C_2 := 1 - (1/e)‖f_i‖_∞. We have C_2 > 0 because ‖f_i‖_∞ ≤ 1. Recall the fact that t/e ≥ log t for any t > 0, which can easily be verified by showing that t/e - log t is convex for t > 0 with minimum at e. Hence we have

$$\omega_i \cdot \Phi(f_i, g_i) \ge (\omega_i A)g_i - \tfrac{1}{e}(\omega_i \cdot f_i)Ag_i = \omega_i \cdot \big(1 - \tfrac{1}{e} f_i\big)Ag_i \ge C_2\,(\omega_i A)g_i,$$

which should be understood pointwise. Hence,

$$\|(\omega_i A)g_i\|_{L^1(\Omega)} \le \frac{2}{C_2 \lambda}\, E(g_i). \qquad (10)$$

Let C_3 := ‖(ω_i A)1‖_{L^1(Ω)}, where 1(x) = 1 for any x ∈ Ω. Using Ker(∇) = {u ∈ L^2(Ω) : u = c·1 a.e. in Ω, c ∈ R} together with the assumption Ker(ω_i A) ∩ Ker(∇) = {0}, one has C_3 > 0. Using (10) together with the fact that g_{iΩ} > 0 yields

$$|g_{i\Omega}|\,\|(\omega_i A)1\|_{L^1(\Omega)} = |g_{i\Omega}|\,C_3 = \|\omega_i \cdot (A 1\, g_{i\Omega})\|_{L^1(\Omega)} \le \frac{2}{C_2 \lambda}\, E(g_i),$$

and thus |g_{iΩ}| ≤ (2/(C_2 C_3 λ)) E(g_i). Applying the triangle inequality in (9) gives ‖g_i‖_{L^2(Ω)} - |g_{iΩ}| ≤ C_1 ‖∇g_i‖_{L^2(Ω)}. Hence

$$\|g_i\|_{L^2(\Omega)} \le |g_{i\Omega}| + C_1\|\nabla g_i\|_{L^2(\Omega)} \le \Big(\frac{2}{C_2 C_3 \lambda} + C_1\Big) E(g_i).$$

Comparing with (9) yet again shows that we have obtained

$$\|g_i\|_{W^{1,2}(\Omega)} = \|g_i\|_{L^2(\Omega)} + \|\nabla g_i\|_{L^2(\Omega)} \le \Big(\frac{2}{C_2 C_3 \lambda} + 1 + C_1\Big) E(g_i).$$

Therefore, E is coercive.
2) Uniqueness: Suppose ḡ_i^1 and ḡ_i^2 are both minimizers of E(g_i). The convexity of E and the strict convexity of Ag_i ↦ (Ag_i - f log Ag_i) entail Aḡ_i^1 = Aḡ_i^2 on Ω_0^i and ∇ḡ_i^1 = ∇ḡ_i^2. Further, the assumption on Ker(ω_i A) ∩ Ker(∇) shows that ḡ_i^1 = ḡ_i^2.
The condition Ker(ω_i A) ∩ Ker(∇) = {0} is mild: it means that Ker(ω_i A) does not contain constant images.
The discrete model. In the discrete setting, Ω is an array of pixels, say of size M × N, and our model (4) reads as

$$E(g_i) = \frac{\lambda}{2}\Psi(f_i, g_i) + \frac{\mu}{2}\|\nabla g_i\|^2_F + \|\nabla g_i\|_{2,1}, \quad i = 1, \dots, d. \qquad (11)$$

Here Ψ(f_i, g_i) := Σ_{j∈Ω} [ω_i · (f_i - Ag_i)^2]_j, or Ψ(f_i, g_i) := Σ_{j∈Ω} [ω_i · (Ag_i - f_i log(Ag_i))]_j. The operator ∇ = (∇_x, ∇_y) is discretized using backward differences with Neumann boundary conditions. Further, ‖·‖_F is the Frobenius norm, so ‖∇g_i‖^2_F = Σ_{j∈Ω} ((∇_x g_i)^2_j + (∇_y g_i)^2_j), and ‖∇g_i‖_{2,1} is the usual discretization of the TV semi-norm, given by ‖∇g_i‖_{2,1} = Σ_{j∈Ω} √((∇_x g_i)^2_j + (∇_y g_i)^2_j).
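As a concrete reading of the discrete model (11), the following sketch evaluates E(g_i) for one channel with either fidelity choice. The mask omega encodes the known pixels, the operator A is taken as the identity purely for simplicity, and all names are our own illustrative choices.

```python
import numpy as np

def grad_backward(g):
    # backward differences with Neumann boundary conditions (first row/column set to 0)
    gx = np.zeros_like(g); gy = np.zeros_like(g)
    gx[1:, :] = g[1:, :] - g[:-1, :]
    gy[:, 1:] = g[:, 1:] - g[:, :-1]
    return gx, gy

def discrete_energy(f, g, omega, lam, mu=1.0, fidelity="l2", eps=1e-12):
    """Discrete energy (11) for one channel; A = identity here for simplicity."""
    Ag = g                                    # stand-in: no blur operator
    if fidelity == "l2":
        psi = (omega * (f - Ag) ** 2).sum()
    else:                                     # Poisson-type fidelity
        psi = (omega * (Ag - f * np.log(np.maximum(Ag, eps)))).sum()
    gx, gy = grad_backward(g)
    frob = (gx ** 2 + gy ** 2).sum()          # ||grad g||_F^2
    tv = np.sqrt(gx ** 2 + gy ** 2).sum()     # ||grad g||_{2,1}
    return 0.5 * lam * psi + 0.5 * mu * frob + tv

# toy usage: roughly 60% of the pixels observed
rng = np.random.default_rng(0)
f = rng.random((32, 32)); omega = (rng.random((32, 32)) < 0.6).astype(float)
print(discrete_energy(f, f.copy(), omega, lam=10.0))
```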
For each i, the minimizer ḡ_i can be computed easily using different methods, for example the primal-dual algorithm [START_REF] Chambolle | A first-order primal-dual algorithm for convex problems with applications to imaging[END_REF], [15], the alternating direction method of multipliers (ADMM) [START_REF] Boyd | Distributed optimization and statistical learning via the alternating direction method of multipliers[END_REF], or the split-Bregman algorithm [20]. Then we rescale each ḡ_i onto [0, 1] to obtain (ḡ_1, . . . , ḡ_d) ∈ [0, 1]^d.

Second Stage: Dimension Lifting with a Secondary Color Space

For ease of presentation, in the following we assume V1 is the RGB color space. The goal in color segmentation is to recover segments both in the luminance and in the chromaticity of the image. It is well known that the R, G and B channels can be highly correlated. For instance, the R, G and B channels of the output of Stage 1 for the noisy pyramid image in Fig. 1 are depicted in Fig. 2 (a)-(c). One can hardly expect to make a meaningful segmentation based on these channels; see the result in Fig. 1 (b), as well as Fig. 8 where other contemporary methods are compared. Stage 1 provides us with a restored smooth image ḡ. In Stage 2, we perform dimension lifting in order to acquire additional information on ḡ from a different color space that will help the segmentation in Stage 3. The choice is delicate. Popular choices of less-correlated color spaces include HSV, HSI, CB and Lab, as described in the Introduction. The Lab color space was created by the CIE with the aim to be perceptually uniform [START_REF] Luong | Color in computer vision[END_REF], in the sense that the numerical difference between two colors is proportional to the perceived color difference. This is an important property for color image segmentation, see e.g. [START_REF] Cardelino | A contrario selection of optimal partitions for image segmentation[END_REF], 16, [START_REF] Paschos | Perceptually uniform color spaces for color texture analysis: an empirical evaluation[END_REF]. For this reason, in the following we use Lab as the additional color space. Here the L channel correlates with perceived lightness, while the a and b channels correlate approximately with red-green and yellow-blue, respectively. As an example, we show in Fig. 2 (d)-(f) the L, a and b channels of the smooth ḡ obtained in Stage 1 for the noisy pyramid image in Fig. 1 (a). From Fig. 2 one can see that the collection of 6 channels gives different information with respect to a further segmentation. The result in Fig. 1 (c) shows that this additional color space helps the segmentation significantly.

Let ĝ denote the Lab transform of ḡ. In order to compare ĝ with ḡ ∈ [0, 1]^3, we rescale the channels of ĝ on [0, 1], which yields an image denoted by ḡ^t ∈ [0, 1]^3. By stacking together ḡ and ḡ^t we obtain a new vector-valued image ḡ* with 2d = 6 channels:

ḡ* := (ḡ_1, ḡ_2, ḡ_3, ḡ^t_1, ḡ^t_2, ḡ^t_3).

Our segmentation in Stage 3 is done on this ḡ*.

Remark 1. The transformation from RGB to the Lab color space is based on the intermediate CIE XYZ tristimulus values. The transformation of ḡ (in RGB color space) to its XYZ tristimulus values (X, Y, Z) is given, pixelwise, by a linear transform (X, Y, Z)ᵀ = H ḡ. Then the Lab transform ĝ of ḡ, see e.g. [29, Chapter 1], is defined in terms of (X, Y, Z) as

ĝ_1 = 116 (Y/Y_r)^{1/3} - 16 if Y/Y_r > 0.008856, and ĝ_1 = 903.3 (Y/Y_r) otherwise,
ĝ_2 = 500 (ρ(X/X_r) - ρ(Y/Y_r)),
ĝ_3 = 200 (ρ(Y/Y_r) - ρ(Z/Z_r)),

where ρ(x) = x^{1/3} if x > 0.008856 and ρ(x) = (7.787x + 16)/116 otherwise, and X_r, Y_r and Z_r are the XYZ tristimulus values of the reference white point. The cube root function compresses some values more than others and the transform corresponds to the CIE chromaticity diagram. The transform takes into account the observation that the human eye is more sensitive to changes in chroma than to changes in lightness. As mentioned before, the Lab space is perceptually uniform [START_REF] Paschos | Perceptually uniform color spaces for color texture analysis: an empirical evaluation[END_REF]. So the Lab channels provide important complementary information to the restored RGB image ḡ. Following an aggregation approach, we use all channels of the two color spaces.
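To illustrate the lifting step, here is a small NumPy sketch of an RGB → XYZ → Lab conversion along the lines of Remark 1, followed by the 6-channel stacking. The sRGB matrix H, the D65 white point and the rescaling convention are our assumed textbook values (the paper itself relies on Matlab's makecform('srgb2lab') for this step), so this is a sketch rather than the authors' exact transform.

```python
import numpy as np

# assumed standard sRGB (D65) -> XYZ matrix and reference white point
H = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.95047, 1.0, 1.08883])     # X_r, Y_r, Z_r

def rho(x):
    # piecewise cube-root function used in the Lab definition of Remark 1
    return np.where(x > 0.008856, np.cbrt(x), (7.787 * x + 16.0) / 116.0)

def rgb_to_lab(g):
    xyz = g.reshape(-1, 3) @ H.T / WHITE      # per-pixel (X/Xr, Y/Yr, Z/Zr)
    fx, fy, fz = rho(xyz[:, 0]), rho(xyz[:, 1]), rho(xyz[:, 2])
    L = np.where(xyz[:, 1] > 0.008856, 116.0 * np.cbrt(xyz[:, 1]) - 16.0, 903.3 * xyz[:, 1])
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return np.stack([L, a, b], axis=-1).reshape(g.shape)

def lift(g_bar):
    """Stack the restored RGB image and its rescaled Lab transform into 6 channels."""
    lab = rgb_to_lab(g_bar)
    flat = lab.reshape(-1, 3)
    lab_t = (flat - flat.min(0)) / np.maximum(flat.max(0) - flat.min(0), 1e-12)
    return np.concatenate([g_bar, lab_t.reshape(g_bar.shape)], axis=-1)

# toy usage
g_bar = np.random.rand(16, 16, 3)
g_star = lift(g_bar)          # shape (16, 16, 6)
```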
Third Stage: Thresholding

Given the vector-valued image ḡ* ∈ [0, 1]^{2d} for d = 3 from Stage 2, we now want to segment it into K segments. Here we design a properly adapted strategy to partition vector-valued images into K segments. It is based on the K-means algorithm [START_REF] Kanungo | An efficient k-means clustering algorithm: analysis and implementation[END_REF] because of its simplicity and good asymptotic properties. According to the value of K, the algorithm clusters all points of {ḡ*(x) : x ∈ Ω} into K Voronoi-shaped cells, say Σ_1 ∪ Σ_2 ∪ · · · ∪ Σ_K = Ω. Then we compute the mean vector c_k ∈ R^6 on each cell Σ_k by

$$c_k = \frac{\int_{\Sigma_k} \bar g^*\,dx}{\int_{\Sigma_k} dx}, \qquad k = 1, \dots, K. \qquad (12)$$

We recall that each entry c_k[i] for i = 1, . . . , 6 is a value belonging to {R, G, B, L, a, b}, respectively. Using {c_k}_{k=1}^K, we separate ḡ* into K phases by

$$\Omega_k := \Big\{ x \in \Omega : \|\bar g^*(x) - c_k\|_2 = \min_{1 \le j \le K} \|\bar g^*(x) - c_j\|_2 \Big\}, \qquad k = 1, \dots, K. \qquad (13)$$

It is easy to verify that {Ω_k}_{k=1}^K are disjoint and that ∪_{k=1}^K Ω_k = Ω. The use of the ℓ2 distance here follows from our model (4) as well as from the properties of the Lab color space [START_REF] Luong | Color in computer vision[END_REF][START_REF] Cardelino | A contrario selection of optimal partitions for image segmentation[END_REF].

The SLaT Algorithm

We summarize our three-stage segmentation method for color images in Algorithm 1. Like the Mumford-Shah model, our model (4) has two parameters λ and µ. Extensive numerical tests have shown that we can fix µ = 1. We choose λ empirically; the method is quite stable with respect to this choice.

Algorithm 1: Three-stage Segmentation Method (SLaT) for Color Images
Input: given color image f ∈ V1 and color space V2.
Output: phases Ω_k, k = 1, . . . , K.
1: Stage one: compute ḡ_i, the minimizer in (4), rescale it on [0, 1] for i = 1, 2, 3, and set ḡ = (ḡ_1, ḡ_2, ḡ_3) in V1.
2: Stage two: compute the transform of ḡ in V2 and rescale it to obtain ḡ^t = (ḡ^t_1, ḡ^t_2, ḡ^t_3); form ḡ* = (ḡ_1, ḡ_2, ḡ_3, ḡ^t_1, ḡ^t_2, ḡ^t_3).
3: Stage three: choose K, apply the K-means algorithm to obtain {c_k}_{k=1}^K in (12), and find the segments Ω_k, k = 1, . . . , K, using (13).

We emphasize that our method is quite suitable for parallelism since {ḡ_i}_{i=1}^3 in Stage 1 can be computed in parallel.

4 Experimental Results

In this section, we compare our SLaT method with three state-of-the-art variational color segmentation methods [START_REF] Li | A multiphase image segmentation method based on fuzzy region competition[END_REF][START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF][START_REF] Storath | Fast partitioning of vector-valued images[END_REF]. Method [START_REF] Li | A multiphase image segmentation method based on fuzzy region competition[END_REF] uses fuzzy membership functions to approximate the piecewise constant Mumford-Shah model [START_REF] Benninghoff | Efficient image segmentation and restoration using parametric curve evolution with junctions and topology changes[END_REF]. Method [START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF] uses a primal-dual algorithm to solve a convex relaxation of model (2) with a fixed code book. Method [START_REF] Storath | Fast partitioning of vector-valued images[END_REF] uses an ADMM algorithm to solve model (2) (without the phase number K) with structured Potts priors. These methods were originally designed to work on color images with degradation such as noise, blur and information loss.
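Before turning to the experiments, here is a small sketch of Stage 3 as described by (12) and (13): it alternates the mean update (12) with the nearest-centroid assignment (13) on the 6-channel image. This is the same Lloyd-type loop as in the earlier pipeline sketch, isolated here to mirror the two equations; the function name and the simple random initialization are ours, whereas the paper uses Matlab's kmeans.

```python
import numpy as np

def slat_stage3(g_star, K, n_iter=30, seed=0):
    """Multichannel thresholding of Stage 3 on the 2d-channel image g_star."""
    X = g_star.reshape(-1, g_star.shape[-1])
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), size=K, replace=False)].copy()    # initial c_k
    for _ in range(n_iter):
        dist = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1)
        phase = dist.argmin(1)                                 # assignment, cf. (13)
        for k in range(K):
            if np.any(phase == k):
                c[k] = X[phase == k].mean(0)                   # mean vector, cf. (12)
    return phase.reshape(g_star.shape[:2]), c

# toy usage on a 6-channel image from Stage 2 (random here for illustration)
g_star = np.random.rand(32, 32, 6)
phases, centers = slat_stage3(g_star, K=4)
```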
The codes we used were provided by the authors, and the parameters in the codes were chosen by trial and error to give the best results of each method. For our model (4), we fix µ = 1 and only vary λ. In the segmented figures below, each phase is represented by the average intensity of that phase in the image. All the results were tested on a MacBook with a 2.4 GHz processor, 4 GB RAM, and Matlab R2014a. We present the tests on two synthetic and seven real-world images given in Fig. 3 (images (v)-(ix) are chosen from the Berkeley Segmentation Dataset and Benchmark¹). The images are all in the RGB color space. We considered combinations of three different forms of image degradation: noise, information loss, and blur. The Gaussian and Poisson noisy images are all generated using the Matlab function imnoise. For Gaussian noisy images, the Gaussian noise we added is of mean 0 and variance 0.001 or 0.1. To apply the Poisson noise, we linearly stretch the given image f to [1, 255] first, then linearly stretch the noisy image back to [0, 1] for testing. The mean of the Poisson distribution is 10. For the information loss case, we deleted 60% of the pixel values randomly. The blur in the test images was obtained by a vertical motion blur of length 10 pixels. In Stage 1 of our method, the primal-dual algorithm [START_REF] Chambolle | A first-order primal-dual algorithm for convex problems with applications to imaging[END_REF], [15] and the split-Bregman algorithm [20] are adopted to solve (4) for Φ(f, g) = Ag - f log(Ag) and Φ(f, g) = (f - Ag)^2, respectively. We terminate the iterations when

$$\frac{\|g_i^{(k)} - g_i^{(k+1)}\|_2}{\|g_i^{(k+1)}\|_2} < 10^{-4} \quad \text{for } i = 1, 2, 3,$$

or when the maximum iteration number 200 is reached. In Stage 2, the transformation from RGB to the Lab color space is implemented by the Matlab built-in function makecform('srgb2lab'). In Stage 3, given the user-defined number of phases K, the thresholds are determined automatically by the Matlab K-means function kmeans. Since ḡ* is calculated prior to the choice of K, users can try different K and segment the image without re-computing ḡ*.

Segmentation of Synthetic Images

Example 1. Six-phase segmentation. Fig. 4 gives the result on a six-phase synthetic image containing five overlapping circles with different colors. The image is corrupted by Gaussian noise, information loss, and blur, see Fig. 4 (A), (B), and (C) respectively. From the figures, we see that method Li et al. [START_REF] Li | A multiphase image segmentation method based on fuzzy region competition[END_REF] and method Storath et al. [START_REF] Storath | Fast partitioning of vector-valued images[END_REF] both fail for the three experiments, while method Pock et al. [START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF] fails for the case of information loss. Table 1 shows the segmentation accuracy by giving the ratio of the number of correctly segmented pixels to the total number of pixels. The best ratios are printed in bold face. From the table, we see that our method gives the highest accuracy for the cases of information loss and blur. For denoising, method [START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF] is 0.02% better. Table 2 gives the iteration numbers of each method and the CPU time cost. We see that our method outperforms the others compared. Moreover, with a parallel implementation the time can be reduced roughly by a factor of 3.

Example 2. Four-phase segmentation. Our next test is on a four-phase synthetic image containing four rectangles with different colors, see Fig. 5. The variable illumination in the figure makes the segmentation very challenging. The results show that in all cases (noise, information loss and blur) all three competing methods [START_REF] Li | A multiphase image segmentation method based on fuzzy region competition[END_REF][START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF][START_REF] Storath | Fast partitioning of vector-valued images[END_REF] fail, while our method gives extremely good results. Table 2 shows further that the time cost of our method is the least.

Segmentation of Real-world Color Images

In this section, we compare our method with the three competing methods for 7 real-world color images in two-phase and multiphase segmentations, see Figs. 6-12. Moreover, for the images from the Berkeley Segmentation Dataset and Benchmark used in Figs. 8-12, the segmentation results by humans are shown in Fig. 13 as ground truth for visual comparison purposes. We see from the figures that our method is far superior to the competing methods, and our results are consistent with the segmentations provided by humans. The timing of the methods given in Table 2 shows that in most cases our method gives the lowest timing. Again, we emphasize that our method is easily parallelizable. All presented experiments clearly show that all goals listed in the Introduction are fulfilled.
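For reference, here is a short sketch of how the three degradations described above could be reproduced in NumPy/SciPy (Gaussian noise, Poisson noise after stretching to [1, 255], 60% random pixel loss, and a vertical motion blur of length 10). The exact parameters of the paper's imnoise-based pipeline may differ; all names are ours.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(0)

def add_gaussian(f, var=0.001):
    return np.clip(f + rng.normal(0.0, np.sqrt(var), f.shape), 0.0, 1.0)

def add_poisson(f):
    stretched = 1.0 + 254.0 * (f - f.min()) / max(f.max() - f.min(), 1e-12)
    noisy = rng.poisson(stretched).astype(float)
    return (noisy - noisy.min()) / max(noisy.max() - noisy.min(), 1e-12)

def drop_pixels(f, frac=0.6):
    omega = (rng.random(f.shape[:2]) >= frac).astype(float)   # 1 = observed pixel
    return f * omega[..., None], omega

def vertical_motion_blur(f, length=10):
    return uniform_filter1d(f, size=length, axis=0)

# toy usage
f = np.random.rand(64, 64, 3)
g1 = add_gaussian(f); g2 = add_poisson(f); g3, mask = drop_pixels(f); g4 = vertical_motion_blur(f)
```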
Fig. 1 Segmentation results for a noisy image (a): without the dimension lifting in Stage 2 (b), and with Stage 2 (c).

Fig. 2 Channels comparison for the restored (smoothed) ḡ in Stage 1 used in Fig. 1. (a)-(c): the R, G and B channels of ḡ; (d)-(f): the L, a and b channels of ḡ^t, the Lab transform of ḡ. Both ḡ and ḡ^t were used to obtain the result in Fig. 1 (c).

Fig. 3 Images used in our tests.

Fig. 4 Results of methods [START_REF] Li | A multiphase image segmentation method based on fuzzy region competition[END_REF], [START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF], [START_REF] Storath | Fast partitioning of vector-valued images[END_REF], and our SLaT on (A), (B) and (C), respectively.

Fig. 5 Four-phase synthetic image segmentation (size: 256 × 256). (A): given Gaussian noisy image with mean 0 and variance 0.001; (B): given Gaussian noisy image with 60% information loss; (C): given blurry image with Gaussian noise; (A1-A4), (B1-B4) and (C1-C4): results of methods [START_REF] Li | A multiphase image segmentation method based on fuzzy region competition[END_REF], [START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF], [START_REF] Storath | Fast partitioning of vector-valued images[END_REF], and our SLaT on (A), (B) and (C), respectively.

Table 1 Comparison of percentage of correct pixels for the 6-phase synthetic image (Fig. 4). Methods 1-3 are the methods in [START_REF] Li | A multiphase image segmentation method based on fuzzy region competition[END_REF], [START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF], [START_REF] Storath | Fast partitioning of vector-valued images[END_REF], respectively.

         Method 1   Method 2   Method 3   Our SLaT method
(A)      70.11%     99.53%     82.55%     99.51%
(B)      13.90%     16.92%     85.04%     99.25%
(C)      28.08%     98.58%     74.77%     98.88%
Average  37.36%     71.68%     80.79%     99.21%
Table 2 Iteration numbers and CPU time in seconds. Methods 1-3 are the methods in [START_REF] Li | A multiphase image segmentation method based on fuzzy region competition[END_REF], [START_REF] Pock | A convex relaxation approach for computing minimal partitions[END_REF], [START_REF] Storath | Fast partitioning of vector-valued images[END_REF], respectively.

Fig.        Method 1      Method 2      Method 3      Our SLaT method
            iter.  time   iter.  time   iter.  time   iter. for {g_i}, i=1,2,3   time
4    (A)    200    5.03   150    6.02   20     4.40   (92, 86, 98)               2.53
     (B)    200    5.65   150    4.01   16     3.65   (98, 95, 106)              2.73
     (C)    200    6.54   150    4.03   17     4.18   (97, 95, 94)               2.48
5    (A)    200   13.92   150   13.89   17    16.89   (54, 54, 51)               5.47
     (B)    200   13.16   150   14.32   14    13.62   (101, 92, 88)              7.74
     (C)    200   17.68   150   16.37   16    15.75   (154, 147, 142)            9.89
6    (A)    200   10.58   150    7.53   19    11.62   (50, 73, 93)               5.11
     (B)    200    9.59   150    7.36   20    14.64   (84, 105, 115)             6.43
     (C)    200   10.39   150    7.39   19     9.76   (200, 200, 200)           17.75
7    (A)    200   44.26   150   66.01   19   106.35   (97, 106, 109)            25.13
     (B)    200   52.12   150   54.76   20   110.68   (148, 161, 171)           38.30
     (C)    200   44.51   150   55.09   18   101.09   (116, 125, 124)           30.00
8    (A)    200   17.76   150   19.02   16    25.08   (80, 83, 99)              20.99
     (B)    200   18.41   150   16.45   16    28.33   (109, 114, 129)           22.45
     (C)    200   18.02   150   18.21   15    31.93   (127, 120, 144)           30.92
9    (A)    200   18.47   150   19.62   15    27.56   (47, 42, 62)              10.98
     (B)    200   17.35   150   16.63   15    26.63   (86, 85, 93)              15.93
     (C)    200   18.07   150   17.61   15    23.13   (48, 48, 52)              15.02
10   (A)    200   24.57   150   31.64   20    56.28   (101, 95, 94)             27.29
     (B)    200   27.15   150   28.92   21    63.02   (154, 142, 131)           27.89
     (C)    200   26.54   150   29.79   20    55.45   (161, 147, 141)           33.79
11   (A)    200   26.62   150   32.87   17    87.13   (35, 35, 36)              14.23
     (B)    200   24.77   150   26.39   16    60.98   (102, 102, 103)           18.99
     (C)    200   25.26   150   31.16   18    77.73   (48, 50, 58)              18.15
12   (A)    200   32.23   150   41.91   19    47.12   (106, 102, 108)           21.93
     (B)    200   34.83   150   44.70   20    53.48   (116, 116, 117)           23.95
     (C)    200   35.01   150   49.93   19    49.14   (67, 65, 63)              21.04
Average     200   22.17   150   25.25   18    41.69   (99, 99, 104)             17.67

¹ https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/
5 Conclusions

In this paper we proposed a three-stage image segmentation method for color images. At the first stage of our method, a convex variational model is used in parallel on each channel of the color image to obtain a smooth color image. Then in the second stage we transform this smooth image to a secondary color space, so as to obtain additional information on the image in the less-correlated color space. In the last stage, multichannel thresholding is used to threshold the combined image from the two color spaces. The new three-stage method, named SLaT for Smoothing, Lifting and Thresholding, has the ability to segment images corrupted by noise, by blur, or when some pixel information is lost. Experimental results on RGB images coupled with the Lab secondary color space demonstrate that our method gives much better segmentation results for images with degradation than some state-of-the-art segmentation models, both in terms of quality and CPU time cost. Our future work includes finding an automatic way to determine λ, and possibly an improved model (4) that can better promote geometry. It is also interesting to optimize the channels from the selected color spaces, and to analyze the effect in color image segmentation.

Acknowledgment The authors thank G. Steidl and M. Bertalmío for constructive discussions. The work of X. Cai is partially supported by the Wellcome Trust, the Isaac Newton Trust, and KAUST Award No. KUK-I1-007-43. The work of R. Chan is partially supported by HKRGC GRF Grant No. CUHK300614, CUHK14306316, CRF Grant No. CUHK2/CRF/11G, CRF Grant C1007-15G, and AoE Grant AoE/M-05/12. The work of M. Nikolova is partially supported by HKRGC GRF Grant No. CUHK300614, and by the French Research Agency (ANR) under grant No. ANR-14-CE27-001 (MIRIAM). The work of T. Zeng is partially supported by NSFC 11271049, 11671002, RGC 211911, 12302714 and RFGs of HKBU.
Florence Allard-Poesi

REPRESENTATIONS AND INFLUENCE PROCESSES IN GROUPS: TOWARDS A SOCIO-COGNITIVE PERSPECTIVE ON COGNITION IN ORGANIZATION

Keywords: Cognition, Social Representations, Social influence, Group dynamics

INTRODUCTION

The cognitive perspective, which has been criticized for its methodological bias and conceptual limits (Von [START_REF] Krogh | An essay on corporate epistemology[END_REF]; [START_REF] Schneider | Cognition in organizational analysis: Who's Minding the Store?[END_REF]; [START_REF] Laroche | L'approche cognitive de la stratégie d'entreprise[END_REF]), assumes implicitly that a collective representation (generally called a shared or collective cognitive schema) exists in organizations, a variable that could explain numerous organizational phenomena such as action, change, crisis and performance [START_REF] Johnson | Strategic change and the management process[END_REF]. What does this collective cognitive structure consist of? What are its properties compared to an individual cognitive schema? How does it emerge from the supposedly different representations held by the organizational members? We first examine these questions in light of the cognitive paradigm. After having underlined the methodological and conceptual limits of this paradigm and referred to some empirical studies whose results question this approach, we argue for the adoption of a socio-cognitive perspective. In particular, this perspective leads to a new conception of social influence processes and of the socio-cognitive dynamics in groups, which in turn offers interesting insights into collective cognition in organizations. While not considered exclusively as a cognitive or behavioural conformity mechanism, social influence can lead not only to conformity in groups but also to normalization, polarization, innovation and change, all of which appear as so many different emergence processes and forms of a collective cognitive schema in organizations. We contend that such a perspective on cognition reflects more adequately how people behave and think in organizations. It also sheds new light on decision-making, organizational change and learning. However, a socio-cognitive perspective on organizational life requires new methodological approaches and methods.

THE INDIVIDUAL / COLLECTIVE CONNECTION ACCORDING TO THE COGNITIVE PARADIGM

From individual to organizational cognition

The cognitive perspective on organizations comes from an Anglo-Saxon research current in cognitive psychology, which assumes that the individual has cognitive structures or schemas which enable him to understand events and situations [START_REF] Walsh | Negotiated belief structures and decision performance: an empirical investigation[END_REF][START_REF] Codol | Vingt ans de cognition sociale[END_REF][START_REF] Gioia | Scripts in organizational behavior[END_REF][START_REF] Harris | Organizational culture and individual sensemaking: A schema based perspective[END_REF]. These cognitive structures are usually defined as general frames used by the individual to impose structure, and thus meaning, on information, situations or experiences [START_REF] Harris | Organizational culture and individual sensemaking: A schema based perspective[END_REF].
The basic unit of a cognitive structure is a "cognem" (the smallest cognitive unit), defined either as a concept, a piece of knowledge, an object, an opinion, a proposition, a belief, an attribute or trait of concrete or abstract objects, or an item of information. These units are regarded as being linked together and integrated into sets. The set of all possible cognems and their relationships concerning a specific domain or object, forms a representation (a term used especially by French-speaking psychologists) or a cognitive structure, organization or schema [START_REF] Codol | Vingt ans de cognition sociale[END_REF][START_REF] Codol | Note terminologique sur l'emploi de quelques expressions concernant les activités et processus cognitifs en psychologie sociale[END_REF]. The meaning of an object or a concept is then encoded in the pattern of relationships occurring in its corresponding schema. Assigning an object, an experience or a person to its corresponding schema consequently permits the perception, interpretation and conceptualization of the environment [START_REF] Harris | Organizational culture and individual sensemaking: A schema based perspective[END_REF][START_REF] Lord | An information processing approach to social perceptions, leadership and behavioral measurement in organizations[END_REF], as well as judgements and actions [START_REF] Montgomery | Process and structure in human decision making[END_REF][START_REF] Kiesler | Managerial response to changing environment: Perspectives on problem sensing from social cognition[END_REF][START_REF] Lord | An information processing approach to social perceptions, leadership and behavioral measurement in organizations[END_REF] in a very economical way. However, these cognitive schemas are the source of numerous biases. They affect information-processing [START_REF] Lord | An information processing approach to social perceptions, leadership and behavioral measurement in organizations[END_REF][START_REF] Kiesler | Managerial response to changing environment: Perspectives on problem sensing from social cognition[END_REF][START_REF] Louis | Surprise and sense-making: what newcomers experience when entering unfamiliar organizational settings[END_REF], judgement [START_REF] Walton | Managers' Prototypes of financial terms[END_REF], attitude [START_REF] Calder | Attitudinal processes in organizations[END_REF], decision processes [START_REF] Montgomery | Process and structure in human decision making[END_REF][START_REF] Isenberg | Thinking and managing: A verbal protocol analysis of managerial problem solving[END_REF] and behaviour [START_REF] Gioia | Scripts in organizational behavior[END_REF][START_REF] Klimoski | Team Mental Model: Construct or Metaphor[END_REF]. Researchers in organizational studies have increasingly adopted this cognitive perspective developed at the individual level, and have incorporated the notion of representation into their theories [START_REF] Klimoski | Team Mental Model: Construct or Metaphor[END_REF]. 
[START_REF] Stubbart | Managerial Cognition: A Missing Link in Strategic Management Research[END_REF] argues that this perspective makes it possible to go beyond a rational perspective of choice in strategy-making and decision processes; [START_REF] Porac | Competitive groups as cognitive communities: The case of Scottish Knitwear manufacturers[END_REF] use the notion of mental models as a framework for understanding how strategists interpret their competitive environments in order to explain competitive strategy-making; [START_REF] Walton | Managers' Prototypes of financial terms[END_REF] demonstrates that managers in the same sector use similar schemas of success, indicating that they attach some core meanings to this notion. At a general level researchers consider that bringing the implicit assumptions held by organizational members to the surface, is a first step towards understanding organizational action and managerial decision-making [START_REF] Walton | Managers' Prototypes of financial terms[END_REF], the individual's cognitive performance [START_REF] Isenberg | Thinking and managing: A verbal protocol analysis of managerial problem solving[END_REF] and judgements and behaviours in organizations [START_REF] Conlon | Absence schema and the managerial judgement[END_REF][START_REF] Gioia | Communication and Cognition in Appraisal: A tale of two Paradigms[END_REF][START_REF] Gioia | Cognition-behavior connections: attribution and verbal behavior in leader/subordinates interactions[END_REF]. Although the cognitive perspective has been traditionally applied at the individual level, the growing interest it has attracted in organizational studies has led to its extension to the group and organizational levels of analysis. The cognitive approach has thus been applied to the organizational level in a cognitive paradigm that sees the organization as the result of social constructions based on its members' collective cognitive schema [START_REF] Allaire | Theories of Organizational culture[END_REF][START_REF] Smircich | Concepts of culture and organizational analysis[END_REF]. In this perspective, organizations become frames of reference or networks of subjective meanings which are shared to varying degrees by their members. These systems of shared ideas or beliefs influence organizational members so that their behaviours fit the organizational goals and expectations, and so that organizational action becomes possible [START_REF] Allaire | Theories of Organizational culture[END_REF]. 
This notion of "system of shared ideas / beliefs" has been defined in various ways in the literature: as beliefs, understandings that represent credible relations between objects, properties, or ideas [START_REF] Sproull | Beliefs in Organizations[END_REF]; as an ideology, a coherent set of beliefs that binds some people together and explains their world in terms of cause and effect relationships [START_REF] Beyer | Ideologies, values and decision making in organizations[END_REF]et al., 1988;[START_REF] Starbuck | Congealing oil: inventing ideologies to justify ideologies out[END_REF]; as a dominant logic, a mental map developed through experience [START_REF] Bettis | The dominant logic: Retrospective and extension[END_REF][START_REF] Prahalad | The dominant logic: a new linkage between diversity and performance[END_REF]; as an interpretative scheme, a schema that maps the experience of the world, identifying both its relevant aspects and how we are to understand them [START_REF] Poole | Influence modes, schema change, and organizational transformation[END_REF][START_REF] Bartunek | Changing interpretative schemes and organizational restructuring: The example of a religious order[END_REF][START_REF] Bartunek | First order, second order and third order change and organizational development intervention: A cognitive approach[END_REF]; as a cognitive system, a set of mental maps that persist over time although organizational members come and go [START_REF] Hedberg | How organizations learn and unlearn[END_REF] etc. Such systems of shared beliefs are supposed to emerge from shared experiences and interactions among organizational members [START_REF] Harris | Organizational culture and individual sensemaking: A schema based perspective[END_REF][START_REF] Porac | Competitive groups as cognitive communities: The case of Scottish Knitwear manufacturers[END_REF]). On the one hand, individual schemas may become similar as the result of shared experiences and exposure to social cues regarding other people's constructions of reality [START_REF] Harris | Organizational culture and individual sensemaking: A schema based perspective[END_REF][START_REF] Porac | Competitive groups as cognitive communities: The case of Scottish Knitwear manufacturers[END_REF][START_REF] Kiesler | Managerial response to changing environment: Perspectives on problem sensing from social cognition[END_REF]. On the other hand, common schemas in organizations emerge from interactions and communication among organizational members [START_REF] Porac | Competitive groups as cognitive communities: The case of Scottish Knitwear manufacturers[END_REF][START_REF] Shrivastava | Organizational frames of references[END_REF][START_REF] Schneider | Basic Assumptions Themes in Organizations[END_REF][START_REF] Ashforth | Climate formations: Issues and extensions[END_REF][START_REF] Feldman | The development and enforcement of group norms[END_REF][START_REF] Pfeffer | Management as Symbolic Action: The Creation and Maintenance of Organizational Paradigms[END_REF]. Referring to Festinger's social comparison theory (e.g. 
[START_REF] Gioia | Scripts in organizational behavior[END_REF][START_REF] Pfeffer | A Social Information Processing Approach to Job Attitudes and Task Design[END_REF][START_REF] Ashforth | Climate formations: Issues and extensions[END_REF][START_REF] Pfeffer | Management as Symbolic Action: The Creation and Maintenance of Organizational Paradigms[END_REF][START_REF] Sproull | Beliefs in Organizations[END_REF][START_REF] Feldman | The development and enforcement of group norms[END_REF], and to normative influence theory (e.g. Bettenhausen and Murningham, 1985;[START_REF] Schein | Coming to a New Awareness of Organizational Culture[END_REF], researchers regard interactions and communication processes (and especially those involved in decision-making processes - [START_REF] Shrivastava | Organizational frames of references[END_REF] as resulting in conformity and shared thinking in organizations. Socialization processes and managers' symbolic actions, which more specifically activate these influence processes, are then crucial to the development and maintenance of shared mental models in organizations [START_REF] Louis | Surprise and sense-making: what newcomers experience when entering unfamiliar organizational settings[END_REF][START_REF] Pfeffer | Management as Symbolic Action: The Creation and Maintenance of Organizational Paradigms[END_REF][START_REF] Nystrom | Managing beliefs in organizations[END_REF][START_REF] Sproull | Beliefs in Organizations[END_REF][START_REF] Beyer | Ideologies, values and decision making in organizations[END_REF]. In order that newcomers should develop adequate schemas, managers must pay attention to these socialization processes [START_REF] Sproull | Beliefs in Organizations[END_REF][START_REF] Nystrom | Managing beliefs in organizations[END_REF]. The public and symbolic rewarding of the desired beliefs and behaviours, and the use of co-workers to indoctrinate newcomers, can help to maintain shared beliefs in organizations. Similarly, if organizations are organized through the development of shared understandings, management's task is to develop such understandings within the organization. This may be accomplished through the use of symbols, ceremonies and language [START_REF] Pfeffer | Management as Symbolic Action: The Creation and Maintenance of Organizational Paradigms[END_REF]. Once created and socially shared, these symbols and perceptions of reality can motivate and mobilize organizational members to act, thus ensuring continuous support for the organization [START_REF] Pfeffer | Management as Symbolic Action: The Creation and Maintenance of Organizational Paradigms[END_REF][START_REF] Shrivastava | Organizational frames of references[END_REF]. However, collective representations are enduring and resistant to change. They will affect other organizational variables and processes such as decision processes and strategy-making, organizational action, performance, structure, change and learning. 
For instance, collective representations polarize attention to specific problems and provide guidelines for interpreting environmental information [START_REF] Shrivastava | Organizational frames of references[END_REF][START_REF] Beyer | Ideologies, values and decision making in organizations[END_REF][START_REF] Sapienza | Believing is seeing,: How Culture Influences the Decisions Top Managers Make in[END_REF], thus facilitating choices and implementation processes [START_REF] Klimoski | Team Mental Model: Construct or Metaphor[END_REF][START_REF] Beyer | Ideologies, values and decision making in organizations[END_REF]. At a more general level, common representations influence strategic actions and thus organizational performance [START_REF] Sapienza | Believing is seeing,: How Culture Influences the Decisions Top Managers Make in[END_REF][START_REF] Thomas | Strategic sensemaking and organizational performance: Linkages among scanning, interpretation, actions and outcomes[END_REF]. These systems of shared ideas may result in cognitive rigidities and inadequate actions in a changing environment, leading to crisis and low performance [START_REF] Hall | The natural logic of management policy making: its implication for the survival of an organization[END_REF][START_REF] Hall | A system pathology of an organization: the rise and fall of the Old Saturday Evening Post[END_REF]Fahey andNarayanan, 1989: Starbuck, 1982;Huff and Schwenk, in Huff, 1990). Managers are then encouraged to become more aware and to call in question their basic assumptions, when engaged in a decision-making process [START_REF] Beyer | Ideologies, values and decision making in organizations[END_REF][START_REF] Schneider | Basic Assumptions Themes in Organizations[END_REF]. By using dialectical and devil's advocacy approaches in their decision-making processes [START_REF] Beyer | Ideologies, values and decision making in organizations[END_REF][START_REF] Sproull | Beliefs in Organizations[END_REF][START_REF] Schweiger | Group approachs for improving strategic decision-making: a comparative analysis of dialectical inquiry, devil's advocacy and consensus[END_REF], managers can evaluate the extent to which their beliefs are facilitating performance. They will then avoid decision biases and increase their creativity [START_REF] Smircich | Strategic Management in an Enacted World[END_REF]. At the organizational level, shared representations in organizations may inhibit change processes and implementation [START_REF] Schwenk | Linking cognitive, organizational and political factors in explaning strategic change[END_REF][START_REF] Poole | Influence modes, schema change, and organizational transformation[END_REF][START_REF] Gioia | Symbolism and strategic change in academia: The dynamics of sensemaking and influence[END_REF][START_REF] Nystrom | Managing beliefs in organizations[END_REF]. Organizational change implies a redefinition of the organization's mission and goals, or a substantial change in its properties so as to reflect new orientations [START_REF] Gioia | Symbolism and strategic change in academia: The dynamics of sensemaking and influence[END_REF]. It involves the development of new understandings of the organizational goals and their dissemination among organizational members, so that their new schemas fit current organizational experiences [START_REF] Poole | Influence modes, schema change, and organizational transformation[END_REF]. 
Previous common schemas, structural and political factors may affect the development of new representations among managers [START_REF] Schwenk | Linking cognitive, organizational and political factors in explaning strategic change[END_REF][START_REF] Lyles | Top management, strategy and organizational knowledge structures[END_REF]. These representations also have to be efficiently disseminated among organizational members [START_REF] Lyles | Top management, strategy and organizational knowledge structures[END_REF][START_REF] Schwenk | Linking cognitive, organizational and political factors in explaning strategic change[END_REF]. Here managers have to be aware of the previously existing schemas if their mission is to be successful [START_REF] Bartunek | First order, second order and third order change and organizational development intervention: A cognitive approach[END_REF]. They can associate the perceived crisis with the representations held by organizational members, in order to reveal their inadequacy and to propose new ones. They can also use direct and indirect modes of influence, in order to disseminate their new representations among the organizational members [START_REF] Gioia | Scripts in organizational behavior[END_REF][START_REF] Poole | Influence modes, schema change, and organizational transformation[END_REF][START_REF] Bartunek | First order, second order and third order change and organizational development intervention: A cognitive approach[END_REF]. Promoting organizational learning requires similar actions. At a very general level, organizational learning may be understood as the process by which organizational members share their schemas in order to form a collective map or an organizational knowledge structure. This system of shared ideas in turn affects members' schemas and representations [START_REF] Shrivastava | A typology of Organizational learning systems[END_REF][START_REF] Hedberg | How organizations learn and unlearn[END_REF][START_REF] Lyles | Top management, strategy and organizational knowledge structures[END_REF], and guides individual behaviours and organizational actions [START_REF] Lee | A system for organizational learning using cognitive maps[END_REF][START_REF] Fiol | Consensus, diversity and learning in organizations[END_REF][START_REF] Fiol | Organizational learning[END_REF]. However, previously held representations act as filters for an organization and constrain its learning capacity. Unlearning these old knowledge structures is then necessary, if new ones are to emerge [START_REF] Hedberg | How organizations learn and unlearn[END_REF][START_REF] Bettis | The dominant logic: Retrospective and extension[END_REF]. Dialectical and devil's advocacy approaches in decision-making processes, as well as debates, exchange and diffusion of ideas [START_REF] Koenig | L'Apprentissage Organisationnel: Repérage des Lieux[END_REF][START_REF] Shrivastava | Organizational frames of references[END_REF][START_REF] Smircich | Strategic Management in an Enacted World[END_REF] can facilitate these unlearning processes. The new system of shared ideas leads to actions and environmental responses that will be interpreted according to the new collective schema. These new interpretations in turn affect these representations, resulting in a more or less profound change in the "theories in use" [START_REF] Schön | Organizational Learning[END_REF] or "dominant logic" [START_REF] Bettis | The dominant logic: Retrospective and extension[END_REF] of the organization. 
Up to now, the notion of shared ideas and beliefs in organizations has been regarded as crucial to the understanding of decision processes, organizational action and performance, change and learning. However, this central assumption of the cognitive paradigm is being seriously questioned today by empirical studies on the one hand, and by conceptual and methodological problems on the other.

The need for revising the notion of collective representations in organization

Although the cognitive perspective offers new and important insights on organizational life, it does little to account for the emergence and properties of a collective representation in organizations. From a relatively clear concept at the individual level, we seem to have ended up with a very slack and loose concept at the organizational level. Relying on an extensive literature review of group mental models, [START_REF] Klimoski | Team Mental Model: Construct or Metaphor[END_REF] demonstrate the ambiguities of this notion in organization studies. What form do such collective cognitive structures assume (verbal, spatial, concrete, abstract, images, beliefs)? What is the content (linked to the task, the environment, the others)? What does "sharing" mean: to what extent do group members have to share their individual representations so that one can conclude that a collective representation exists? This last question appears even more critical if we consider recent empirical studies that evaluate the extent to which individual representations coincide in organizations [START_REF] Hugues | The diversity of individual level managerial mental models of competition[END_REF][START_REF] Allard-Poesi | From individual causal maps to a collective causal map: An exploratory Study[END_REF][START_REF] Bougon | Cognitions in organizations: an analysis of the Utrecht Jazz Orchestra[END_REF][START_REF] Ford | Decision maker's beliefs about causes and effects of structure[END_REF][START_REF] Fotoni-Paradissopoulos | Assessing the boardmembers' roles: A cognitive model for strategic decision-making[END_REF]. Altogether, this research reveals that, although there may be consensus about some strategic domain or core beliefs among managers, homogeneity or congruence in individual representations in organizational settings cannot be taken for granted [START_REF] Gray | Organizations as constructions and destruction of meaning[END_REF]. Beyond these ambiguities in the concept of collective representation, one may also question the way cognitive phenomena are understood globally in organizational studies. There is often no coincidence between the unit of analysis, which is supposed to be organizational, and measurements that rely mainly on aggregates of individual measurements (see for instance [START_REF] James | Comment: Organizations do not cognize[END_REF][START_REF] Bougon | Cognitions in organizations: an analysis of the Utrecht Jazz Orchestra[END_REF]). This tends to result in anthropomorphic bias [START_REF] Schneider | Cognition in organizational analysis: Who's Minding the Store?[END_REF] or cross-level fallacies, i.e. the generalization of interindividual relationships to universal or intercollective ones [START_REF] Rousseau | Issues of level in organizational research: Multi-level and cross-level perspectives[END_REF][START_REF] Glick | Response: Organizations are not central tendencies: Shadow Boxing in the Dark, Round 2[END_REF].
On the other hand, research tends to ignore measurement problems and to freely apply the idea of cognitive schema or representation to individuals, groups, organizations or institutions, and this leads to reification bias [START_REF] Laroche | L'approche cognitive de la stratégie d'entreprise[END_REF][START_REF] Spender | Workplace Knowledge: the Individual and Collective Dimensions[END_REF]. These difficulties come from the implicit assumption of a conceptual isomorphism between the individual and the collective levels of cognition. In fact, it is being assumed that the same functional relationships can be used to represent the two constructs, which are supposed to have the same position in a nomological network [START_REF] Rousseau | Issues of level in organizational research: Multi-level and cross-level perspectives[END_REF]. According to [START_REF] Schneider | Cognition in organizational analysis: Who's Minding the Store?[END_REF], we have to search for such a multi-level equivalence. But its merit and its relevance for the study of collective cognition in organizations can be legitimately questioned. Influence and the political processes at stake in organizational settings may endow a collective representation with much greater complexity than a schema resulting from an average (cf. [START_REF] Bougon | Cognitions in organizations: an analysis of the Utrecht Jazz Orchestra[END_REF]), or from more sophisticated aggregates of individual measurements (cf. [START_REF] Ginsberg | Connecting Diversification to Performance: A sociocognitive approach[END_REF][START_REF] Dunn | A Sociocognitive Network Approach to Organizational Analysis[END_REF][START_REF] Weick | Organizations as cognitive maps, Charting ways to success and failure[END_REF][START_REF] Walsh | The role of negotiated belief structures in strategy making[END_REF] et al., 1988). These mathematical artefacts do not take into account the potentially emergent properties of collectivities [START_REF] Schneider | Cognition in organizational analysis: Who's Minding the Store?[END_REF][START_REF] Stubbart | What is managerial and organizational cognition?[END_REF]. Questioning the implicit isomorphism between the individual and the collective levels of cognition may enable us to capture the distinctive nature and the emergent properties of cognition at the organizational level [START_REF] Stubbart | What is managerial and organizational cognition?[END_REF]. Similar critiques can be formulated against the conceptualizations of the emergence and development of collective representations in organizations. It has been argued that similarity in organizational members' representations occurs because they experience similar contexts, problems and constraints. Such a conceptualization relies on a determinist view of cognitive phenomena, which could be said to be a contradiction in terms [START_REF] Lauriol | Approches cognitives de la décision et concept de Représentations Sociales[END_REF][START_REF] Codol | Vingt ans de cognition sociale[END_REF] (Moscovici, 1984a). On the other hand, common representations are seen as emerging through mechanisms whereby the individual conforms to the group. Such a conceptualization of influence relies on the idea that organizations are sets of levels (individual, group, organization): the individuals are constrained by the group, which is more or less influenced by other organizational factors.
This perspective thus tends to ignore the bidirectionality of influence processes [START_REF] Gioia | Symbolism and strategic change in academia: The dynamics of sensemaking and influence[END_REF], and to reproduce anthropomorphic and reification biases. Altogether, then, it appears that the concept of collective representation not only suffers from lack of clarity but may also be too restrictive to comprehend the nature of collective cognition in organizations: Is there a collective cognitive representation in organizations? What are its properties compared with individual representation, and how does it emerge from the supposedly different representations held by organizational members? The socio-cognitive perspective here offers an interesting alternative approach to the organization as a system of ideas.
THE SOCIO-COGNITIVE PERSPECTIVE ON COLLECTIVE REPRESENTATIONS: KEY ELEMENTS
The rejection of the individual/social dichotomy and the notion of Social Representation
A number of research orientations coming from various domains in the social sciences today converge to produce a vision of reality as a "consensual construction elaborated in interactions and communication" [START_REF] Jodelet | Les représentations sociales[END_REF][START_REF] Allaire | Theories of Organizational culture[END_REF]: from anthropology, [START_REF] Sperber | L'étude anthropologique des représentations: Problèmes et perspectives[END_REF]; from sociology, [START_REF] Berger | The social construction of reality[END_REF]; from socio-linguistics, [START_REF] Cicourel | Cognitive sociology: Language and meaning in social interaction[END_REF]; from linguistics, [START_REF] Harré | Rituals, rhetoric and social cognition[END_REF]; and from social and cognitive psychology, [START_REF] Moscovici | Des représentations collectives aux représentations sociales: Eléments pour une histoire[END_REF] (1988; 1984a), [START_REF] Moscovici | The phenomenon of Social Representations[END_REF], [START_REF] Hewstone | Représentations sociales et causalité[END_REF], [START_REF] Forgas | What is social about social cognition[END_REF][START_REF] Hedberg | How organizations learn and unlearn[END_REF]. Despite differences in their methodological approaches and units of analysis (society for the first of these, the group or the individual for the others), all these research streams reject the individual/social or cognition/social dichotomy. They advocate instead the study of "social cognition", thus implying a change in the research object from individual/organization to interaction. This interactive view of the cognitive and social aspects has been developed in particular by Moscovici and his colleagues in social psychology. Referring to the works of Piaget and Freud, which question on the one hand the prevailing influence of the social on the individual, and on the other the strict impermeability and the equivalence of these levels, [START_REF] Moscovici | Notes towards a description of Social Representations[END_REF] (1989; 1984a; 1984b) reintroduces the concept of Social Representation. Going beyond the numerous definitions and debates surrounding this notion (see [START_REF] Jodelet | Les représentations sociales[END_REF][START_REF] Moscovici | Notes towards a description of Social Representations[END_REF][START_REF] Jahoda | Critical Notes and Reflections on "Social Representations[END_REF]), Social Representations can be defined as "processes and products of a social and cognitive elaboration of reality" (Codol, 1989, our translation).
This concept must be understood not in terms of its social foundations, but in the sense that its content and working rules depend on interindividual processes: "Representations may be termed 'social' less on account of whether their foundations are individuals or groups, than because they are worked out during processes of exchange and interaction" (Codol, 1984, p. 251). It is in this sense that Social Representations must be distinguished from the concepts of ideology, belief, interpretative system, cognitive system, and so on, which are used in organizational cognition research that sees the foundations of these cognitive systems as mainly organizational, and their mode of transmission as unilateral (from the organization to its members). A distinction is also made relative to the concepts of cognitive schema, cognitive organization or structure, as used by cognitive psychologists for whom cognitive processes are essentially intra-individual [START_REF] Moscovici | Notes towards a description of Social Representations[END_REF][START_REF] Codol | Note terminologique sur l'emploi de quelques expressions concernant les activités et processus cognitifs en psychologie sociale[END_REF].
Towards Social Cognition
Extricating ourselves from the individual/social dichotomy and reconsidering in this light the concept of Social Representations make it possible to put an end to some of the unquestioned assumptions of the cognitive and social fields:
-The social aspects do not only constrain: we must recognize that in one way or another, representations are generated and modified, and that the individual contributes to these processes. Vergès et al. (1987, p. 51) underline that "The social actor is a true creator... He is admittedly limited by his pre-constructs, but he creates his own discourse. What he says is singular. It is a true emergence. Thus, the individual is not a reflection; he constructs his own schemas and objects" (our translation).
-Cognition is not only an intra-individual process: it is determined by elements that are fundamentally social. It is contextualized and often has a social end, in particular when it is expressed in discourse [START_REF] Harré | Rituals, rhetoric and social cognition[END_REF]. "Our knowledge is socially structured and transmitted from the first days of our life and they are coloured by the values, motivations and norms of our social environment in adulthood" (Forgas, 1981a, p. 2). And if individuals continually construct and reconstruct their representations, they do not do it alone, but in interaction with others (Windish, 1989, p. 180).
Recent empirical works on the cognitive development of children [START_REF] Bruner | Language and thought in infancy[END_REF][START_REF] Doise | Individual and collective conflicts of centrations in cognitive development[END_REF][START_REF] Doise | On the social nature of cognition[END_REF] on the one hand, and on the systems of representation in group situations ([START_REF] Abric | A theorical and experimental approach to the study of social representations in a situation of interaction[END_REF] 1989; et al., 1975; [START_REF] Codol | On the system of Representations in a group situation[END_REF] 1984) on the other, demonstrate this permeability of the social and cognitive dimensions.
As cognitive phenomena cannot be reduced to intra-individual processes [START_REF] Forgas | What is social about social cognition[END_REF][START_REF] Hedberg | How organizations learn and unlearn[END_REF], and inversely, interactions are influenced by the representations held by group members ([START_REF] Codol | On the system of Representations in a group situation[END_REF] 1984; [START_REF] Abric | A theorical and experimental approach to the study of social representations in a situation of interaction[END_REF] 1989), a collective representation or schema appears to be inseparable from interactions. These interactions have to be understood as the process and result of the social construction of reality [START_REF] Jodelet | Les représentations sociales[END_REF]. Such a conceptualization leads to the examination of a collective cognitive schema not as a phenomenon in itself, but as something "always in the making, in the context of interrelations and actions ... that are themselves always in the making" (Moscovici, 1988, p. 219). This means shifting the emphasis from the phenomenon in itself to communication [START_REF] Moscovici | Notes towards a description of Social Representations[END_REF]. Communication, through the influence processes it activates, enables individual representations to converge, and something individual to become social [START_REF] Moscovici | Notes towards a description of Social Representations[END_REF][START_REF] Jodelet | Les représentations sociales[END_REF]. However, again, social influence must not be understood here in a unilateral way, as a conforming process. The recognition that the social and cognitive dimensions interact calls for a reconceptualization of the process of social influence.
Social influence and its contribution to the understanding of collective cognition
Social influence has in fact long been regarded as a conforming process: the individual modifies his or her behaviour or attitude to those of the group [START_REF] Levine | Conformité et obéissance[END_REF]. The spectacular experiments of Asch in the fifties demonstrated in particular that the individual tends to conform to the majority response, even if it expresses opinions contrary to objective physical evidence [START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF]. This conformity process has been theoretically explained by informational influence theory [START_REF] Levine | Conformité et obéissance[END_REF][START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF] or normative influence theory [START_REF] Marc | L'interaction sociale[END_REF][START_REF] Levine | Conformité et obéissance[END_REF]. According to Moscovici and his colleagues, such a perspective on influence relies on too restrictive an approach to the relationships between the source and the target of influence [START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF][START_REF] Moscovici | Studies in Social influence I: Those who are absent are in the right: Convergence and Polarization of answers in the course of Social Interactions[END_REF][START_REF] Doms | Innovation et Influence des minorités[END_REF].
The majority (or the source of authority) is considered as the norm-sender and is identified with the group, and the minority is seen as the norm-receiver and is equivalent to the "deviants" [START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF][START_REF] Doms | Innovation et Influence des minorités[END_REF][START_REF] Levine | Conformité et obéissance[END_REF]. In this perspective, the minority can only accept or reject what is imposed by the group. Such a conception of social influence thus relies on the idea that interactions have to lead to conformity if the group is to survive [START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF]. However, "Far from being only an element of solidarity and psychological equilibrium, conformity may become, in a long run, a source of instability and conflict". Thus "the analysis of social influence which finds its expression in social change is just as legitimate a concern" (Moscovici and Faucheux, 1972, p. 155). In this perspective, we need a theory that does not see social influence as a unilateral process, but understands it as dynamic and symmetrical [START_REF] Doise | Les décisions de groupe[END_REF][START_REF] Doise | Conflict and consensus, a general theory of collective decisions[END_REF][START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF] (Moscovici et al., 1969).
A socio-cognitive perspective on Social Influence
Social influence as a conflict and negotiation process
If we regard every group member as both source and target of influence, and accept that influence processes can result in innovation and change, how do we conceptualize social influence? What are the key variables and mechanisms underlying this process? The presence of a minority in a group, which strives to introduce or create new ways of thinking or behaving, or to modify existing beliefs, is a key variable that may allow for innovation and change [START_REF] Doms | Innovation et Influence des minorités[END_REF][START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF][START_REF] Levine | Conformité et obéissance[END_REF]. By introducing divergent viewpoints concerning the same social object under discussion, a nomic and active minority in fact creates a socio-cognitive conflict. This conflict can admittedly lead to a rupture in the group, but in most cases individuals feel compelled to eliminate divergencies and to make concessions. Social influence is then no longer tantamount to an information exchange process intended to reduce environmental uncertainty, as social comparison theory assumes; it is a process of conflict and negotiation. Social influence and its outcomes correspond to different forms of socio-cognitive conflict and to different ways of treating this conflict: it is necessary to talk of social influences [START_REF] Doms | Innovation et Influence des minorités[END_REF][START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF][START_REF] Doise | Conflict and consensus, a general theory of collective decisions[END_REF].
In this perspective, the role of contextual factors has to be taken into account: the adequacy of opinions expressed relative to current thinking [START_REF] Paicheler | Polarization of attitudes in homogeneous and heterogeneous groups[END_REF][START_REF] Paicheler | Argumentation, Négociation et polarisation[END_REF], the norms induced by the task [START_REF] Moscovici | Studies in social influence IV: Minority influence in a context of original judgement[END_REF], the interaction style of the participants (rigid vs. flexible) and the way conflict between divergent answers is solved [START_REF] Mugny | When rigidity does not fail: Individualization and psychologisation as resistances to the diffusion of minority innovations[END_REF]. All these factors will have a key role in the outcome of influence, and thus in the emergence of a consensual view of reality. As influence processes and their results depend on the intensity and forms of the socio-cognitive conflicts during interactions, every type of influence will correspond to a specific way of treating the conflict. In this light, conformity appears as one possible result of influence among others in the group dynamics. Depending on the various socio-cognitive dynamics occurring between group members -and especially on the type of conflict created during interactions, the kind of participative mode adopted by the group (formal vs. informal), the consistency of the viewpoints expressed by members, their implications and the way conflict is solved (control, rejection, avoidance or negotiation) -the forms of consensus achieved and thus the collective representation developed, will be quite different [START_REF] Doise | Conflict and consensus, a general theory of collective decisions[END_REF][START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF].
Conformity, normalization and polarization as forms of socio-cognitive conflict and negotiation processes
Conformity as conflict control or rejection
Conformity is defined as "a change in the individual or subgroup behaviour or opinions towards legitimate rules and expectations of the group, irrespective of initial differences" (Moscovici and Faucheux, 1972, p. 166). This process is liable to emerge when the minority has no counter-norm to invoke. In this case, the majority members have no reason to make concessions, as the minority lacks internal consistency; the minority will be either converted or rejected [START_REF] Levine | Conformité et obéissance[END_REF]. However such a process requires very specific conditions. In particular, the group has to be "closed" and to present the characteristics of Asch's groups: there is a "correct" answer to the task, the answers of the minority and the majority diverge significantly, the majority is interindividually consistent, communication of judgements only is permissible between interacting participants (they cannot discuss their viewpoints) and the social constraint does not appear to be intentional -which distinguishes conformity from obedience - [START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF]. However, when the majority members diversify some of their responses, the majority appears less consistent and less committed to its judgements. In this case, the minority feels less obliged to accept the majority answer, and is more liable to move towards a "compromise" response.
Normalization as conflict-avoidance
"Normalization defines the pressure each exerts on the other during an interaction, with the aim of reaching either a judgement acceptable to all individuals or one which approaches complete acceptability" (Moscovici and Faucheux, 1972, p. 171). This is accomplished by suppressing differences and levelling off at the lowest common denominator [START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF]. This mechanism is liable to arise when the participants are equal in their capacities and competencies so that no-one can legitimately impose their viewpoint on the others, when they are not very involved in the issue or committed to any position concerning it, and/or when the object of the judgement has little significance or is unknown to most people in the group (for instance judgements of weight, colour, smell etc., as opposed to something charged with value). In such cases, as nobody feels legitimated to adhere rigidly to their opinion, participants will avoid extreme positions and will adopt judgements approximating those of the others. A tacit negotiation takes place, and answers are coordinated so that conflict is avoided. The group members' answers converge towards an averaging response as opposed to an extreme one [START_REF] Moscovici | Social influence, conformity bias, and the study of active minority[END_REF][START_REF] Doise | Conflict and consensus, a general theory of collective decisions[END_REF][START_REF] Moscovici | The group as polarizer of attitudes[END_REF]. This leads to a "groupal and non-critical thinking", a "shared illusion of unanimity that comes from the self-censure of everybody and that increases because of the assumption that 'who doesn't disagree agrees' " (Doise and Moscovici, 1984, p. 215, our translation). In this perspective, anything that reduces the intensity and frequency of the interactions will tend -since it also reduces the opportunities for group members to express divergent viewpoints and lessens their involvement in the decision process -to produce what [START_REF] Doise | Conflict and consensus, a general theory of collective decisions[END_REF] call "normalized participation" between group members. This participative mode, which is liable to emerge in a context of formal relationships, will lead to conflict-avoidance and, consequently, to a compromise consensus. However, if sufficient divergencies are expressed and group members commit themselves to the decision-making process, interactions will produce a change, a polarized answer.
Polarization as conflict-creation and resolution
Thus, if all participants express themselves freely in the group, influence processes will result not in an averaging of the members' initial positions, but in a specific answer. This collective result is produced by true collaboration between group members; it is close to the values they initially shared and tends to be more extreme than that produced by an averaging of the initial individual positions [START_REF] Doise | Conflict and consensus, a general theory of collective decisions[END_REF].
Far from being a particular phenomenon reduced to very specific conditions, polarization -which has been found in the field of attitudes ([START_REF] Moscovici | The group as polarizer of attitudes[END_REF]; Myers and Bishop, 1971), in judgements about facts, perceptions of persons, jury and ethical decisions, and risk taking (see [START_REF] Lamm | The group polarization phenomenon[END_REF] for a review) -appears to be a general process [START_REF] Doise | Les décisions de groupe[END_REF]. In fact every decision process, be it individual or collective, leads the individuals involved to look for persuasive arguments which enable them to justify their choices. In seeking these arguments, they become involved in the task they have to undertake [START_REF] Moscovici | The group as polarizer of attitudes[END_REF]. In a context of free interactions, the expression of divergent viewpoints, opinions, judgements and ideas in the group will result in conflict -in Lewinian terms, the group "thaws". This conflict further increases the participants' commitment to the task. According to [START_REF] Doise | Conflict and consensus, a general theory of collective decisions[END_REF], as the group strives to reach agreement by making reciprocal concessions, the individuals express preferences and alternatives in order to influence each other. One of the most economical ways of reaching agreement is to increase the common basis for argumentation between the participants. As a result of the sharing and of the controversies and new combinations of elements, even people who did not know each other before will find some ideas and meanings that they hold in common. The common elements which they discover will then serve as a basis for consensus. However, this common basis does not result from an elementary arithmetical sum of positive and negative arguments, but from series of exchanges, debates and influences. By way of these processes the dimensions held in common are revealed and become more noticeable, so that the whole cognitive field becomes more organized for each group member (see [START_REF] Moscovici | The group as polarizer of attitudes[END_REF][START_REF] Codol | On the system of Representations in a group situation[END_REF] 1984 for empirical demonstrations). Various empirical studies tend to confirm such a theoretical approach to the polarization process: the group members' involvement in the task ([START_REF] Vinokur | Novel argumentation and attitude change: the case of polarization following group discussion[END_REF]; Burnstein and Vinokur, 1973; [START_REF] Neve | Phénomènes de polarisation des décisions de groupe: Etude expérimentale des effets de l'implication[END_REF]), the intensity of the socio-cognitive conflict due in particular to the heterogeneity of the initial individual positions ([START_REF] Moscovici | The group as polarizer of attitudes[END_REF]; Paicheler, 1976; [START_REF] Forgas | Polarization and moderation of person perception judgements as function of group interaction style[END_REF]), the participative mode -formal vs. informal - ([START_REF] Doise | Conflict and consensus, a general theory of collective decisions[END_REF][START_REF] Forgas | Polarization and moderation of person perception judgements as function of group interaction style[END_REF]), all these play a crucial role in setting up the social relationships between the group members (see Appendix 1 for a recapitulation of these studies).
And these relationships are a key factor in the perceptions and judgements of the participants, in their cognitive activities as well as in the result of the influence processes [START_REF] Doise | Conflict and consensus, a general theory of collective decisions[END_REF]. Thus, the reconceptualization of social influence allows us to recognize different socio-cognitive processes and phenomena in groups: conformity, normalization and polarization all represent different emergence processes and forms for the collective representations in the organization. Up to now, we have been considering social influence in its interindividual and group manifestations. But social influence also results in a more or less pronounced cognitive restructuring at the individual level.
Forms of Social influence and intra-individual effects: Compliance vs. conversion
The opinions, interests and representations expressed during group discussions belong to the level of social responses. It is necessary however to distinguish between what belongs to the private (or latent) level, that is to say the cognitive structure underlying the social response, and the public or manifest level [START_REF] Paicheler | Suivisme et conversion[END_REF]. Such a distinction enables us to appreciate that there can be a public consensus in a group without private acceptance, which corresponds to compliance behaviour. On the other hand, social influence may lead to private but not public acceptance, which reveals conversion behaviour. Compliance and conversion behaviours, which have been revealed empirically in the field of perceptual judgements ([START_REF] Moscovici | Influence of a consistent minority on the responses of a majority in a color perception task[END_REF][START_REF] Moscovici | Toward a theory of conversion behavior[END_REF] et al., 1981) and of opinions and attitudes (Mugny, Pierrhumbert and Zubel, 1972; and see Appendix 2 below), force us to take into account the potential discrepancy between what is thought and what is said in social life. The limits to the majority influence in compliance and the power of the minority influence in conversion have to be considered, the more so because the question of the externalization of the private/latent level to the social/public one has not yet been much investigated [START_REF] Paicheler | Suivisme et conversion[END_REF]. This set of conceptual and empirical elements concerning social influence processes in groups (recapitulated in Appendices 1 and 2) offers new insights regarding our conception of the individual/collective connection, and hence for our conceptualization of collective representations in organizations. A collective representation is no longer limited to an intersection point or an average of static individual schemas nor to a conforming social response. We can assume that a collective representation in an organization may exhibit different forms, different emergence processes and different relationships with the individual level of cognition, depending in particular on the participative mode or the relationships between organizational members:
-A collective representation may correspond to a majority response (in conformity). In that case there is no correspondence between the group response and some of the members' (the deviants) individual representations. However, these may benefit from the conversion behaviour of other members at the private level.
-A collective representation may be equivalent to an average position, to which nobody really adheres (in normalization).
-Or it may be equivalent to a new position which has been developed by means of real collaborative decision work between group members, and which implies a real cognitive restructuring not only at the social but also at the private level.
Further, the distinction between the manifest/public and latent/private levels underlines the potentially evolutionary nature of collective cognitive representations over time, especially if the minority beliefs are expressed consistently at the public level, which may lead to innovation and change in the organization.
DISCUSSION AND CONCLUSION
On the nature of collective representations in organizations
Table 1 summarizes the cognitive and the socio-cognitive perspectives on collective representations in organizations.
Table 1. The cognitive and the socio-cognitive perspectives on collective representations in organizations. The cognitive perspective: collective representations emerge through members' sharing of similar experiences, problems and situations, and through interactions which lead to conformity by way of informational and normative influence; they guide processes, actions, change and learning in the organization. The socio-cognitive perspective: collective representations are a continuous collective construction through interactions among individuals that imply cognitive and social dimensions; they are part of the organizing process, manifested and constructed in and through interactions; their contents and working rules depend on interindividual processes; they are more or less enduring, depending on contextual factors and the socio-cognitive dynamics during the interactions; they are part of the decision, change and learning processes in the organization; and they result from influence processes and the way conflict is solved during interactions, which may lead to conformity, normalization or innovation.
At a general level the socio-cognitive perspective highlights the mutually permeable character of the social and cognitive fields. In this perspective, collective representations have to be understood as expressed and constructed in and through interactions between organizational members. They are not an enduring cultural phenomenon to be activated during interactions. Social influence, by allowing individual representations to converge and letting something individual become something social, has a crucial role in this process. Social influence, regarded as a form of conflict and negotiation, may in fact result in conformity in members' answers, in normalization or change. These processes and results will lead to different types of collective representations: more or less organized, and more or less tightly coupled to the members' representations. Unlike the cognitive approach, this theoretical framework thus suggests that:
-Various mental representations may exist in organizations: between interacting subgroups, as has already been evidenced by some empirical studies on culture [START_REF] Rentsch | Climate and Culture: Interaction and qualitative differences in organizational meanings[END_REF][START_REF] Jermier | Organizational subcultures in a soft bureaucracy: Resistance behind the myth and facade of an official culture[END_REF], and among members of the same board (cf. [START_REF] Hugues | The diversity of individual level managerial mental models of competition[END_REF][START_REF] Allard-Poesi | From individual causal maps to a collective causal map: An exploratory Study[END_REF][START_REF] Fotoni-Paradissopoulos | Assessing the boardmembers' roles: A cognitive model for strategic decision-making[END_REF]);
-These representations are continuously changing, especially when minority beliefs are expressed consistently at the public level and when a negotiation process occurs between these deviants and the majority;
-Organizational members develop different forms of collective representations, depending on the socio-cognitive processes that take place during their interactions.
These phenomena depend not only on the representations previously held by organizational members, but also on their involvement in the task, on their participative mode during a decision process, and on the norms induced by their tasks and by the social context. These dimensions will result in various forms of socio-cognitive conflict, leading to different kinds of influence processes and different forms for the collective representations. Such a conceptualization of cognition seems to reflect more adequately what everyone experiences in organizations, namely different ideas about the decisions to take; influence among members of the same board, among departments or among main coalitions; conflict-avoidance during a decision process, as well as continuous change in the views held by organizational members, which can lead to innovation and change. These ideas are certainly not new, but they depart from the cognitive perspective that has led us to think of organizations as communities of thinking. The socio-cognitive perspective allows us to recognize that in everyday organizational life people do not always mention their opinions, do change their views, do influence each other, and do continually reconstruct their representations. Such an approach to cognition not only gives a more accurate picture of organizational life, but also sheds new light on change and learning.
A socio-cognitive perspective on decision processes, change and learning in organizations
A socio-cognitive perspective on decision processes in organizations
The socio-cognitive perspective goes beyond the pure cognitive or political approaches to strategic decision-making, which seem too restrictive [START_REF] Laroche | From decision to action in organizations: decision-making as a social representation[END_REF][START_REF] Walsh | The role of negotiated belief structures in strategy making[END_REF]. Such approaches tend to consider decision-making as an outcome of either cognitive or socio-political variables. When both the political and cognitive dimensions are taken into account, they are envisaged in terms of sets of variables which influence each other causally and diachronically: the socio-political structure of a decision-making group is regarded as affecting the interactions between group members, which in turn will influence the development of a shared knowledge structure (see, for instance, [START_REF] Ward | Socio-cognitive analysis of group decision-making among consumers[END_REF]; Whitney and Smith, 1985; [START_REF] Walsh | Negotiated belief structures and decision performance: an empirical investigation[END_REF][START_REF] Schwenk | Linking cognitive, organizational and political factors in explaning strategic change[END_REF]). These approaches may be termed "socio-cognitive", as they take into account the cognitive and the social dimensions that are at stake during the decision process. However, they depart from the socio-cognitive perspective presented here: they do not consider that social and cognitive dimensions interact during the decision-making process itself. These dimensions should not be regarded as given, or one may miss important aspects of the decision process, and even misinterpret its outcomes. For instance, the final agreement of a decision group may result from interactions dominated by highly formal relationships between participants who were not particularly committed to the decision process. Or it may be the product of an informal participative mode between involved group members, which has produced conflicts and negotiation processes. Thus, depending on its actual emergence process, a final consensus may or may not reveal private agreement on the part of group members. This dimension may also have an impact on the future implementation of the decision [START_REF] Whitney | Effects of Group cohesiveness on attitude polarization of knowledge in a strategic planning context[END_REF], and on organizational performance [START_REF] Bourgeois | Performance and consensus[END_REF]. As far as the interpretation of consensus in decision-making groups is concerned, [START_REF] Fiol | Consensus, diversity and learning in organizations[END_REF] notes that research on the link between consensus and performance has led to conflicting results (see, for instance, [START_REF] Bourgeois | Performance and consensus[END_REF][START_REF] Schweiger | Group approachs for improving strategic decision-making: a comparative analysis of dialectical inquiry, devil's advocacy and consensus[END_REF]). Fiol suggests that different dimensions of consensus should be taken into account, in order to clarify the link between these variables. In our opinion, it may be more fruitful to try to specify what kind of consensus was obtained: is it a compromise response, which does not actually reflect any individual representation?
In this case, it is understandable that the decision will be poorly implemented. Or does the consensus obtained reveal the majority viewpoints? If this is the case, some passive resistance from the minority is to be expected. Or does the decision result from a true collaborative effort between the participants, who have expressed their conflicting views and have striven to reach an agreement through negotiation, argumentation and counter-argumentation? In this case, group members have involved themselves in the decision process, and the final consensus reflects, at least in part, their representations of the situation. So they will certainly feel more motivated to implement the decision. The socio-cognitive perspective, which specifies the conditions, processes and outcomes of interactions between group members, sheds new light on the socio-cognitive dynamics involved in a decision-making process. It may also have interesting managerial implications. Contrary to the arguments of the cognitive approach, organizations cannot be envisaged in terms of a system of shared ideas and beliefs among their members. Organizations imply diversity in meanings and representations. Far from inhibiting consensus in decision-making, such cognitive diversity enables new collective representations to emerge, in so far as participants are involved in the decision-making process and express their divergent viewpoints. This implies not only promoting diversity in thinking, but also encouraging the free expression of opinions and judgements in the organization. This may be accomplished with the help of self-organized groups, by asking for voluntary participation in a task, and by promoting informal relationships and low-level decisions and autonomy in the organization.
A socio-cognitive perspective on organizational change and learning
The socio-cognitive approach also enables us to go beyond a cognitive perspective on organizational change and learning. Although referring to Berger and Luckmann's notion of "a socially constructed reality", the cognitive perspective understands organizational representations as variables that influence and are influenced by political, organizational and structural dimensions in a causal manner (see, for instance, Bartunek's model of change in organizational schemas, structure and actions, 1984; Schwenk's model of organizational change, 1989). At a very general level, organizational learning and change are understood as transition processes, from one state of equilibrium in the collective cognitive schemes to another. The notion of social representations challenges these assumptions. As noted above, social representations should be examined not as phenomena in themselves but as always in the making [START_REF] Moscovici | Notes towards a description of Social Representations[END_REF]. If some of their elements are perhaps more stable than others - elements which are called core elements due to their cognitive centrality, as opposed to peripheral ones which are more related to actions and are unstable [START_REF] Flament | Structure et dynamiques des représentations sociales[END_REF] - representations, which are intrinsically linked to interactions in a social group, are continuously changing.
In this construction process the degree of change may be more or less pronounced, according to the degree of inconsistency between the practices of group members and their representations, and to the influence of subgroups or individuals whose representations are different [START_REF] Flament | Structure et dynamiques des représentations sociales[END_REF]. In this perspective, change and learning can be envisaged as continually at work in organizations. The main issue then may not be why change and learning occur in organizations, but to what extent they do: To what extent do change and learning processes occur in individual and collective representations in organizations? What is their content like, and their structure? How do these changes relate to practice, and to the socio-cognitive dynamics that take place among organizational members? As regards these change processes, the theory of social influence challenges the idea of a unilateral influence flowing from the top management group to other organizational stakeholders who are supposed to receive it passively. If instead influence is regarded as symmetric, the idea that top management first constructs a new collective representation and then tries to disseminate it to other organizational members, appears artificial. A socio-cognitive perspective on influence compels us to recognize that managers not only influence other organizational members, but are also influenced by them, simply through everyday interactions. Managers should be aware here that conflict and negotiation processes, as opposed to conflict-avoidance and control, can generate new ideas and representations to which other organizational members may adhere. Further, if the representations of organizational members are continually changing, rather than remaining stable and resistant to change, solutions such as firing managers and hiring new ones to promote change in the organization (cf. [START_REF] Nystrom | Managing beliefs in organizations[END_REF]) are questionable. Even though such solutions may have a powerful symbolic value and may promote the idea that change is happening in the organization, they cannot be legitimized by cognitive rigidities: people are not so stupid. The socio-cognitive perspective also offers new insights on some aspects of organizational learning. That organizational learning should not be conceived as the sum of individual learning is well established [START_REF] Fiol | Organizational learning[END_REF][START_REF] Hedberg | How organizations learn and unlearn[END_REF][START_REF] Fiol | Consensus, diversity and learning in organizations[END_REF]. It is also sometimes argued that individual learning is a necessary condition for organizational learning [START_REF] Hedberg | How organizations learn and unlearn[END_REF]. The theory of social influence presented here seriously questions this assumption, as it shows that the whole may be more than the sum of the parts (for instance, when a group polarizes). In this perspective, it could be envisaged that a group learns before (or simultaneously with) its members. The socio-cognitive approach also recognizes the key role of minority behaviour in conflict-creation and influence processes, as has been underlined by some researchers on innovation processes and diffusion [START_REF] Bouwen | Organizational learning and innovation[END_REF][START_REF] Garud | A socio-cognitive model of technology evolution: The case of Cochlear implants[END_REF].
The socio-cognitive approach may also help us to understand more about innovation processes in organizations -why, for instance, some innovative ideas are diffused and implemented while others are not. Here again, promoting informal relationships, low-level decision-making and autonomy, and seeking voluntary participation in a project, may encourage the free expression of divergent ideas and negotiation processes. These, in turn, may foster the group members' involvement in the task, and lead to a true collaborative effort which allows new representations to emerge. The socio-cognitive perspective not only sheds new light on organizational life; it also makes us reconsider the way managerial and organizational cognition scholars apprehend cognition in organizations, namely in terms of the research design and methodological approaches used on the one hand, and in terms of the methods adopted to elicit and analyse the representations on the other.
The study of collective representations in organizations
The understanding of collective representations in organizations implies that we first try to describe these representations in various settings and in all their complexity. The comparison of these descriptions may help us to comprehend collective representations and to find regularities, so as to build a general theory [START_REF] Moscovici | The phenomenon of Social Representations[END_REF]. What kind of methodological approaches can be used to describe collective representations in organizations?
A socio-cognitive approach to cognition
If we regard collective representations as non-reproducible in laboratory settings, then a field observation approach seems a relevant alternative [START_REF] Deconchy | Laboratory experiment and social field experimentation: An ambiguous distinction[END_REF]. Collective representations can then be studied longitudinally, in situ, and in the context of various types of interactions: decision-making, informal meetings, day-to-day interactions, and so on [START_REF] Moscovici | The phenomenon of Social Representations[END_REF]. The socio-cognitive approach also leads to the adoption of an interactionist perspective on organizations, whereby the organization is viewed as a continuous collective construction evolving and developing in interactions. Interactions, and communication in particular, activate cognitive and social dynamics which allow organizational members to develop realities and representations of these realities. Small groups (for instance decision-making groups, SMED, etc.), which permit an in-depth analysis of these dynamics, can be regarded as a relevant level of analysis for the study of collective representations in organizations. Attention must be paid here to the initial conditions and the processes occurring during the group interactions. Did group members have very different representations of reality before the group work started? Did they commit themselves to the task? What are the norms induced by the task and the social context? What kind of relationships exist between these people? And so on. The specification of these dimensions makes it possible to interpret the collective representation developed and to establish the validity of the results obtained, particularly if different groups are studied and compared [START_REF] Yin | Case study research, design and methods, Applied social research methods[END_REF]. In such contexts, how can collective representations be investigated?
Interactions as the unit of analysis
Collective representations have been understood as being always in the making, and as occurring in interactions that are themselves also always in the making [START_REF] Moscovici | Notes towards a description of Social Representations[END_REF]. In this perspective, the study of collective representations - or, more generally, of social cognition - can no longer be limited to the investigation of the "shared" beliefs or other aggregates of individual measurements: the socio-cognitive theory of social influence convincingly demonstrates that "the whole is different from the sum of its parts". The rejection of the individual/social or cognitive/social dichotomy requires us to consider interactions as the unit of analysis. Particular attention must be paid here to communication, which, through the influence processes that it activates, allows something individual to become social [START_REF] Jodelet | Les représentations sociales[END_REF]. More precisely, the points of agreement and disagreement expressed during the group interactions can be seen as a manifestation of the development of a collective representation of reality. The main grounds serving as a basis for consensus in the group can then be discovered. The collective representation may also be inferred from the group decisions and behaviours [START_REF] Langfield-Smith | Exploring the need for a shared cognitive map[END_REF]. However, as influence processes during the interactions affect the individual representations differently, a study of the kind and the form of the collective representations developed calls for an investigation of the individual representations over time. Does the collective representation developed during the interactions result from conformity or normalization behaviour? Or does it reveal a true collaborative effort between the group members, which has given rise to a cognitive restructuring at the individual and public levels? Such questions mean that we need to investigate the similarity of group member representations over time, and to compare the beliefs on which participants agreed during the group work with those reappropriated at the individual level. It is then possible to determine the influence of the interactions on the individual representations and the kind of collective representations achieved during the group interactions. In sum, a study of collective representations requires a multi-level and processual approach.
Methodological approaches to eliciting and analysing individual and collective representations
Various types of methods may be suggested for eliciting and studying individual and collective representations: cognitive mapping (Axelrod, 1976; Cossette and [START_REF] Cossette | Mapping an idiosyncrasic Schema[END_REF]), content analysis [START_REF] Weber | Basic content analysis[END_REF], argument mapping, semiotic analysis [START_REF] Huff | Mapping Strategic Thought[END_REF], among others. However, if we want to study these representations accurately and to compare the individual and collective levels elicited, some guidelines may be useful. First, individual representations have to be elicited from discourses produced by non-intrusive techniques. This can increase the validity of the representations obtained.
For instance, these may be established by in-depth interviews conducted in such a way as to avoid suggesting anything to the respondent which might become part of his representations [START_REF] Cossette | Mapping an idiosyncrasic Schema[END_REF][START_REF] Cossette | Les cartes cognitives et organizations[END_REF]. Other techniques, which anchor the interviewer's questions to the respondent's previous answers, may also be used (cf. [START_REF] Cossette | La vision stratégique du propriétaire-dirigeant de PME: une étude de cartographie cognitive[END_REF][START_REF] Laukkanen | Comparative Cause Mapping of Management Cognitions, A computer Data Base Method for Natural Data[END_REF]). Secondly, as representations have "practical aims" [START_REF] Jodelet | Les représentations sociales[END_REF], the domain or object represented must be meaningful to the respondent. In the same vein, the established representation can be presented to the respondent to allow him to assess the extent to which it properly represents his own vision. It must also be guaranteed that the data will remain confidential. These guidelines should ensure that the representation obtained is "credible" to the subject [START_REF] Cossette | Les cartes cognitives et organizations[END_REF]. Apart from these elements, which have already been addressed elsewhere (cf. [START_REF] Cossette | Les cartes cognitives et organizations[END_REF][START_REF] Cossette | Mapping an idiosyncrasic Schema[END_REF][START_REF] Laukkanen | Comparative Cause Mapping of Management Cognitions, A computer Data Base Method for Natural Data[END_REF][START_REF] Eden | Messing about in problems: An informal structured approach to their identification and management[END_REF] 1992), the method used to establish the individual representations must also be applicable to the interactions. The comparison of the beliefs contained in individual representations and those expressed during the group interaction will then be possible. Here it is worth noting that, even in social psychology studies, few techniques of representation at the individual level have been developed or applied to interaction data. The ambiguity of discourse, especially in a small-group context, calls for the development and testing of specific processing techniques and coding methods. These could make it possible to investigate the content of the beliefs expressed as well as the form of communication (agreements, disagreements), so that the common ground developed during the interactions can be revealed. The research of [START_REF] Fiol | Consensus, diversity and learning in organizations[END_REF] on communication during a joint-venture process can be regarded as a good example of such an undertaking. Once the individual and collective representations have been elicited, their analysis also calls for some precautions. As far as the comparison of individual representations is concerned (over time, between subjects), the analysis must be applied to both the content and the structure of the representations: "After all, what we think is not so distinct from how we think" (Moscovici, 1984b, p. 87). Secondly, quantitative and qualitative methods can be used to evaluate similarities in individual representations.
However, quantitative approaches -such as multidimensional scaling [START_REF] Walton | Managers' Prototypes of financial terms[END_REF], factor and cluster analysis [START_REF] Doise | Representations Sociales et Analyse des données[END_REF], distance and similarity measures (Markoczy et al., 1992; [START_REF] Daniels | Comparing cognitive maps[END_REF]) -may be criticized for relying on an "atomistic" perspective of representation [START_REF] Jenkins | Creating and Comparing Strategic Causal Maps: Issues in Mapping across Multiple Organizations[END_REF] and for their lack of validity. More qualitative methods are thus needed. For instance, independent judges can be asked to evaluate the similarity of pairs of representations [START_REF] Daniels | Comparing cognitive maps[END_REF]. Or, the researcher can select some core elements in the individual representations, and compare their sub-representations by different subjects (see, for example, domain map analysis for causal mapping, [START_REF] Laukkanen | Comparative Cause mapping of Organizational Cognitions[END_REF]). Or, the dimensions or the core elements around which the individuals organize their representations can be elicited by cluster analysis, and then characterized in order to compare them between subjects. These methods make it possible not only to compare the individual representations over time and between subjects, but also to evaluate the extent to which the subjects develop a common ground for representing their world. Once these common dimensions, and more specifically the beliefs shared by a majority, a minority or by all group members before and after the interactions, have been noted, they can be compared with the common elements emerging during the interactions. The representations developed during the interactions can then be characterized. For instance, by comparing the beliefs on which a majority of subjects agreed during the interactions with those shared after the interactions, we can evaluate the extent to which the consensus obtained during the interaction was a surface or a latent one. The study of the emergence process and of the nature of collective representations in organizations also requires that the dynamics of the content and the form of the discussions be investigated further. How do people collectively make sense of their world? Do they anchor their thinking in examples and situations which they have experienced, or in some more abstract elements [START_REF] Klimoski | Team Mental Model: Construct or Metaphor[END_REF]? Do they deal with one problem after another, or do they think recursively? On what aspects of reality do they easily come to an agreement -on the interpretation of reality or on ways of modifying it? How are conflicts expressed and solved? Do people avoid divergencies? Or do they negotiate? With a few exceptions [START_REF] Fiol | Consensus, diversity and learning in organizations[END_REF][START_REF] Langfield-Smith | Exploring the need for a shared cognitive map[END_REF][START_REF] Sawy | Triggers, Templates and Twitches in the Tracking of Emerging Strategic Issues[END_REF], not many researchers have investigated these dimensions. Here again, specific methods and coding rules are needed.
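To make the kind of quantitative comparison mentioned above more concrete, the short sketch below is an illustrative assumption rather than a method taken from the studies cited: it codes two individual causal maps as signed links over a shared pool of concepts (the concept names, function names and example maps are invented for the illustration) and computes the proportion of concept pairs on which the two maps disagree, a deliberately crude analogue of the distance and similarity measures discussed above.

from itertools import product

def causal_map(links, concepts):
    # Build a map {(cause, effect): sign} over every ordered pair of concepts;
    # sign is +1 for a positive causal link, -1 for a negative one, 0 for no link.
    m = {(c, e): 0 for c, e in product(concepts, repeat=2)}
    for cause, effect, sign in links:
        m[(cause, effect)] = sign
    return m

def map_distance(map_a, map_b):
    # Proportion of concept pairs on which the two maps disagree
    # (0 = identical maps, 1 = disagreement on every possible link).
    pairs = list(map_a)
    disagreements = sum(1 for p in pairs if map_a[p] != map_b[p])
    return disagreements / len(pairs)

# Hypothetical concepts and maps for two managers.
concepts = ["market pressure", "price cuts", "profitability"]
manager_1 = causal_map([("market pressure", "price cuts", +1),
                        ("price cuts", "profitability", -1)], concepts)
manager_2 = causal_map([("market pressure", "price cuts", +1),
                        ("price cuts", "profitability", +1)], concepts)

print(map_distance(manager_1, manager_2))  # about 0.11: disagreement on one of the nine possible links

On this hypothetical example the two managers differ on a single link, so the distance is low; whether such a figure indicates a meaningful divergence in their representations is, as argued above, a matter for qualitative judgement rather than for the index itself.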
Thus, to sum up, a socio-cognitive approach to cognition in organizations appears to call for (a) field observation approaches focusing on small groups as the level of analysis; (b) the investigation of both individual representations and the content and form of their interactions over time; and (c) the use of non-intrusive elicitation techniques and methods which take into account the content and structure of the representations. If organizational life is largely made up of interactions, then such approaches should be able to shed new light on aspects that have hitherto been neglected in organization studies.
A concluding comment
I have tried to demonstrate here that the socio-cognitive approach can help to enhance our understanding of cognitive (or rather, socio-cognitive) life in organizations. It even prevents us from adopting an "all-cognitive approach", which cannot be beneficial [START_REF] Laroche | L'approche cognitive de la stratégie d'entreprise[END_REF]. More generally, this perspective challenges the way we conceptualize issues related to cognition in organizations. It not only questions some implicit assumptions regarding the collective and individual levels of analysis we adhere to, but also pushes us to re-examine other dichotomies such as structure/process, dynamic/static, diversity/uniformity and cognitive/organizational outcomes, that are not always helpful. The socio-cognitive perspective thus provides an interesting framework for capturing and comprehending everyday ideas and behaviours in organizations.
[Table: collective representations from a socio-cognitive perspective (recovered table content): linked to processes, actions, change and learning in the organization; members' sharing of similar experiences, problems and situations; interactions which lead to continuous collective construction through interactions among individuals that imply cognitive and social dimensions; part of the organizing process; manifested and constructed in and through interactions; contents and working rules depend on interindividual processes; more or less enduring, depending on contextual factors and the socio-cognitive dynamics during the interactions; part of the decision, change and learning processes in the organization; influence processes and the way conflict is solved during interactions, which may lead to conformity by way of informational and normative influence; conformity, normalization or innovation.]
At a general level the socio-cognitive perspective highlights the mutually permeable character of the social and cognitive fields. In this perspective, collective representations have to be understood as expressed and constructed in and through interactions between organizational members. They are not an enduring cultural phenomenon to be activated during interactions. Social influence, by allowing individual representations to converge and letting something individual become something social, has a crucial role in this process. Social influence, regarded as a form of conflict and negotiation, may in fact result in conformity in members' answers, in normalization or change. These processes and results will lead to different types of collective representations: more or less organized, and more or less tightly coupled to the members' representations. Unlike the cognitive approach, this theoretical framework thus suggests that:
This paper is supported by a grant of the Fondation Nationale Pour l'Enseignement de la Gestion des Entreprises. Earlier versions of the manuscript were presented at the 4ième Conférence Internationale; we thank participants for their comments and remarks.
Special thanks to Guje Sevon, Mauri Laukkanen and Nancy Adler for their helpful comments and support.
86,527
[ "2437" ]
[ "57129" ]
00149062
en
[ "info" ]
2024/03/04 23:41:50
2007
https://ens-lyon.hal.science/ensl-00149062/file/formulas.pdf
Uffe Flarup email: [email protected] Pascal Koiran email: [email protected] Laurent Lyaudet email: [email protected] On the expressive power of planar perfect matching and permanents of bounded treewidth matrices Valiant introduced some 25 years ago an algebraic model of computation along with the complexity classes VP and VNP, which can be viewed as analogues of the classical classes P and NP. They are defined using non-uniform sequences of arithmetic circuits and provides a framework to study the complexity for sequences of polynomials. Prominent examples of difficult (that is, VNP-complete) problems in this model includes the permanent and hamiltonian polynomials. While the permanent and hamiltonian polynomials in general are difficult to evaluate, there have been research on which special cases of these polynomials admits efficient evaluation. For instance, Barvinok has shown that if the underlying matrix has bounded rank, both the permanent and the hamiltonian polynomials can be evaluated in polynomial time, and thus are in VP. Courcelle, Makowsky and Rotics have shown that for matrices of bounded treewidth several difficult problems (including evaluating the permanent and hamiltonian polynomials) can be solved efficiently. An earlier result of this flavour is Kasteleyn's theorem which states that the sum of weights of perfect matchings of a planar graph can be computed in polynomial time, and thus is in VP also. For general graphs this problem is VNP-complete. In this paper we investigate the expressive power of the above results. We show that the permanent and hamiltonian polynomials for matrices of bounded treewidth both are equivalent to arithmetic formulas. Also, arithmetic weakly skew circuits are shown to be equivalent to the sum of weights of perfect matchings of planar graphs. Introduction Our focus in this paper is on easy special cases of otherwise difficult to evaluate polynomials, and their connection to various classes of arithmetic circuits. In particular we consider the permanent and hamiltonian polynomials for matrices of bounded treewidth, and sum of weights of perfect matchings for planar graphs. It is a widely believed conjecture that the permanent is hard to evaluate. Indeed, in Valiant's framework [START_REF] Valiant | Completeness classes in algebra[END_REF][START_REF] Valiant | Reducibility by algebraic projections[END_REF] the permanent is complete for the class VNP. This is an algebraic analogue of his ♯P-completeness result for the permanent [START_REF] Valiant | The complexity of computing the permanent[END_REF]. For a book-length treatment of Valiant's algebraic complexity theory one may consult [START_REF] Bürgisser | Completeness and Reduction in Algebraic Complexity Theory[END_REF]. The same results (♯P-completeness in the boolean framework, and VNP-completeness in the algebraic framework) also apply to the hamiltonian polynomial. The sum of weights of perfect matchings in an (undirected) graph G is yet another example of a presumably hard to compute polynomial since it reduces to the permanent when G is bipartite. However, all three polynomials are known to be easy to evaluate in special cases. In particular, the permanent and hamiltonian polynomials can be evaluated in a polynomial number of arithmetic operations for matrices of bounded treewidth [START_REF] Courcelle | On the fixed parameter complexity of graph enumeration problems definable in monadic second-order logic[END_REF]. 
An earlier result of this flavour is Kasteleyn's theorem [START_REF] Kasteleyn | Graph theory and crystal physics[END_REF] which states that the sum of weights of perfect matchings of a planar graph can be computed in a polynomial number of arithmetic operations. One can try to turn these three efficient algorithms into general-purpose evaluation algorithms by means of reductions (this is the approach followed by Valiant in [START_REF] Valiant | Holographic algorithms[END_REF], where he exhibits polynomial time algorithms for several problems which previously only had exponential time algorithms, by means of holographic reductions to perfect matchings of planar graphs). For instance, in order to evaluate a polynomial P one can try to construct a matrix of bounded treewidth A such that: (i) The entries of A are variables of P , or constants. (ii) The permanent of A is equal to P . The same approach can be tried for the hamiltonian and the sum of weights of perfect matchings in a planar graph. The goal of this paper is to assess the power of these polynomial evaluation methods. It turns out that the three methods are all universal -that is, every polynomial can be expressed as the sum of weights of perfect matchings in a planar graph, and as a permanent and hamiltonian of matrices of bounded treewidth. From a complexity-theoretic point of view, these methods are no longer equivalent. Our main findings are that: -The permanents and hamiltonians of matrices of polynomial size and bounded treewidth have the same expressive power, namely, the power of polynomial size arithmetic formulas. This is established in Theorem 1. -The sum of weights of perfect matchings in polynomial size planar graphs has at least the same power as the above two representations, and in fact it is more powerful under a widely believed conjecture. Indeed, this representation has the same power as polynomial size (weakly) skew arithmetic circuits. This is established in Theorem 7. We recall that (weakly) skew arithmetic circuits capture the complexity of computing the determinant [START_REF] Toda | Classes of arithmetic circuits capturing the complexity of computing the determinant[END_REF]. It is widely believed that the determinant cannot be expressed by polynomial size arithmetic formulas. Our three methods therefore capture (presumably proper) subsets of the class VP of easy to compute polynomial families. By contrast, if we drop the bounded treewidth or planarity assumptions, the class VNP is captured in all three cases. Various notions of graph "width" have been defined in the litterature besides treewidth (there is for instance pathwidth, cliquewidth, rankwidth...). They should be worth studying from the point of view of their expressive power. Also, Barvinok [START_REF] Barvinok | Two algorithmic results for the traveling salesman problem[END_REF] has shown that if the underlying matrix has bounded rank, both the permanent and the hamiltonian polynomials can be evaluated in a polynomial number of arithmetic operations. A proper study of the expressive power of permanents and hamiltonians of bounded rank along the same line as in this paper remains to be done. Definitions 2.1 Arithmetic circuits Definition 1. An arithmetic circuit is a finite, acyclic, directed graph. Vertices have indegree 0 or 2, where those with indegree 0 are referred to as inputs. A single vertex must have outdegree 0, and is referred to as output. Each vertex of indegree 2 must be labeled by either + or ×, thus representing computation. 
Vertices are commonly referred to as gates and edges as arrows. By interpreting the input gates either as constants or variables it is easy to prove by induction that each arithmetic circuit naturally represents a polynomial. In this paper various subclasses of arithmetic circuits will be considered: For weakly skew circuits we have the restriction that for every multiplication gate, at least one of the incoming arrows is from a subcircuit whose only connection to the rest of the circuit is through this incoming arrow. For skew circuits we have the restriction that for every multiplication gate, at least one of incoming arrows is from an input gate. For formulas all gates (except output) have outdegree 1. Thus, reuse of partial results is not allowed. For a detailed description of various subclasses of arithmetic circuits, along with examples, we refer to [START_REF] Malod | Characterizing Valiant's Algebraic Complexity Classes[END_REF]. Definition 2. The size of a circuit is the total number of gates in the circuit. The depth of a circuit is the length of the longest path from an input gate to the output gate. A family (f n ) belongs to the complexity class VP if f n can be computed by a circuit C n of size polynomial in n, and if moreover the degree of f n is bounded by a polynomial function of n. Treewidth Treewidth for undirected graphs is most commonly defined as follows: Definition 3. Let G = V, E be a graph. A k-tree-decomposition of G is: (i) A tree T = V T , E T . (ii) For each t ∈ V T a subset X t ⊆ V of size at most k + 1. (iii) For each edge (u, v) ∈ E there is a t ∈ V T such that {u, v} ⊆ X t . (iv) For each vertex v ∈ V the set {t ∈ V T |v ∈ X T } forms a (connected) subtree of T . The treewidth of G is then the smallest k such that there exists a k-tree-decomposition for G. There is an equivalent definition of treewidth in terms of certain graph grammars called HR algebras [START_REF] Courcelle | Graph Grammars, Monadic Second-Order Logic And The Theory Of Graph Minors[END_REF]: Definition 4. A graph G has a k-tree-decomposition iff there exist a set of source labels of cardinality k +1 such that G can be constructed using a finite number of the following operations: (i) ver a , loop a , edge ab (basic constructs: create a single vertex with label a, a single vertex with label a and a looping edge, two vertices labeled a and b connected by an edge) (ii) ren a↔b (G) (rename all labels a as labels b and rename all labels b as labels a) (iii) f org a (G) (forget all labels a) (iv) G 1 // G 2 (composition of graphs: any two vertices with the same label are identified as a single vertex) Example 1. Cycles are known to have treewidth 2. Here we show that they have treewidth at most 2 by constructing G, a cycle of length l ≥ 3, using {a, b, c} as the set of source labels. First we construct G 1 by the operation edge ab . For 1 < i < l we construct G i by operations f org c (ren b↔c (G i-1 // edge bc ). Finally G is then constructed by the operation G l-1 // edge ab . The treewidth of a directed graph is defined as the treewidth of the underlying undirected graph. The treewidth of an (n × n) matrix M = (m i,j ) is defined as the treewidth of the directed graph G M = V M , E M , w where V M = {1, . . . , n}, (i, j) ∈ E M iff m i,j = 0, and w(i, j) = m i,j . Notice that G M can have loops. Loops do not affect the treewidth of G M but are important for the characterization of the permanent and hamiltonian polynomials. 
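To make Definition 3 and the matrix-to-graph convention above concrete, here is a minimal Python sketch (the helper names are ours, not from the paper): it checks the conditions of a k-tree-decomposition for a candidate decomposition and builds the directed graph G M underlying a matrix, using Example 1's width-2 decomposition of a cycle as a test case.

```python
# Minimal sketch (helper names are ours, not from the paper): check the
# conditions of Definition 3 for a candidate tree decomposition, and build
# the directed graph G_M underlying a square matrix.

def is_tree_decomposition(graph_edges, bags, tree_edges):
    """graph_edges: pairs (u, v) of G; bags: dict tree node -> set X_t;
    tree_edges: pairs of tree nodes forming T (assumed to be a tree)."""
    vertices = {x for e in graph_edges for x in e}
    # (iii) every edge of G is contained in at least one bag
    if not all(any({u, v} <= bag for bag in bags.values()) for u, v in graph_edges):
        return False
    # (iv) for every vertex, the bags containing it induce a connected subtree
    for v in vertices:
        nodes = {t for t, bag in bags.items() if v in bag}
        if not nodes:
            return False
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            t = stack.pop()
            if t in seen:
                continue
            seen.add(t)
            for a, b in tree_edges:
                if a == t and b in nodes and b not in seen:
                    stack.append(b)
                elif b == t and a in nodes and a not in seen:
                    stack.append(a)
        if seen != nodes:
            return False
    return True

def width(bags):
    return max(len(bag) for bag in bags.values()) - 1

def graph_of_matrix(m):
    """Edges (i, j) of G_M: one directed edge whenever m[i][j] != 0."""
    n = len(m)
    return [(i, j) for i in range(n) for j in range(n) if m[i][j] != 0]

# Example 1 revisited: a cycle of length l admits a decomposition of width 2,
# with bags {0, i, i+1} arranged along a path of tree nodes 1, ..., l-2.
l = 6
cycle = [(i, (i + 1) % l) for i in range(l)]
bags = {i: {0, i, i + 1} for i in range(1, l - 1)}
tree = [(i, i + 1) for i in range(1, l - 2)]
print(is_tree_decomposition(cycle, bags, tree), width(bags))   # True 2

print(graph_of_matrix([[1, 2], [0, 5]]))   # [(0, 0), (0, 1), (1, 1)]
```

This sketch only verifies a given decomposition; it does not compute the treewidth of a graph, which is a hard problem in general, and the results below simply assume that a bounded-width decomposition is available.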
Permanent and hamiltonian polynomials
In this paper we take a graph theoretic approach to deal with permanent and hamiltonian polynomials. The reason for this is that a natural way to define the treewidth of a matrix is by the treewidth of the underlying graph; see also e.g. [START_REF] Makowsky | Polynomials of bounded treewidth[END_REF].
Definition 5. A cycle cover of a directed graph is a subset of the edges, such that these edges form disjoint, directed cycles (loops are allowed). Furthermore, each vertex in the graph must be in one (and only one) of these cycles. The weight of a cycle cover is the product of weights of all participating edges.
Definition 6. The permanent of an (n × n) matrix M = (m i,j ) is the sum of weights of all cycle covers of G M .
The permanent of M can also be defined by the formula per(M) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} m_{i,\sigma(i)}. The equivalence with Definition 6 is clear since any permutation can be written down as a product of disjoint cycles, and this decomposition is unique. There is a natural way of representing polynomials by permanents. Indeed, if the entries of M are variables or constants from some field K, f = per(M) is a polynomial with coefficients in K (in Valiant's terminology, f is a projection of the permanent polynomial). In the next section we study the power of this representation in the case where M has bounded treewidth. The hamiltonian polynomial ham(M) is defined similarly, except that we only sum over cycle covers consisting of a single cycle (hence the name).
Matrices of bounded treewidth
In this section we work with directed graphs. All paths and cycles are assumed to be directed, even if this word is omitted. In [START_REF] Courcelle | On the fixed parameter complexity of graph enumeration problems definable in monadic second-order logic[END_REF] it is shown that the permanent and hamiltonian polynomials are in VP for matrices of bounded treewidth. Here we show that both the permanent and hamiltonian polynomials for matrices of bounded treewidth are equivalent to arithmetic formulas. This is an improvement on the result of [START_REF] Courcelle | On the fixed parameter complexity of graph enumeration problems definable in monadic second-order logic[END_REF] since the set of polynomial families representable by polynomial size arithmetic formulas is a (probably strict) subset of VP.
Theorem 1. Let (f n ) be a family of polynomials with coefficients in a field K. The three following properties are equivalent:
- (f n ) can be represented by a family of polynomial size arithmetic formulas.
- There exists a family (M n ) of polynomial size, bounded treewidth matrices such that the entries of M n are constants from K or variables of f n , and f n = per(M n ).
- There exists a family (M n ) of polynomial size, bounded treewidth matrices such that the entries of M n are constants from K or variables of f n , and f n = ham(M n ).
Remark 1. By the VNP-completeness of the hamiltonian, if we drop the bounded treewidth assumption on M n we capture exactly the VNP families instead of the families represented by polynomial size arithmetic formulas. The same property holds true for the permanent if the characteristic of K is different from 2. Theorem 1 follows immediately from Theorems 2, 3, 5 and 6.
Theorem 2. Every arithmetic formula can be expressed as the permanent of a matrix of treewidth at most 2 and size at most (n + 1) × (n + 1) where n is the size of the formula. All entries in the matrix are either 0, 1, or variables of the formula.
Proof.
The first step is to construct a directed graph that is a special case of a series-parallel (SP) graph, in which there is a connection between weights of directed paths and the value computed by the formula. The overall idea behind the construction is quite standard, see e.g. [START_REF] Malod | Characterizing Valiant's Algebraic Complexity Classes[END_REF]. SP graphs in general can between any two adjacent vertices have multiple directed edges. But we construct an SP graph in which there is at most one directed edge from any vertex u to any vertex v. This property will be needed in the second step, in which a connection between cycle covers and the permanent of a given matrix will be established. SP graphs have distinguished source and sink vertices, denoted by s and t. By SW (G) we denote the sum of weights of all directed paths from s to t, where the weight of a path is the product of weights of participating edges. Let ϕ be a formula of size e. For the first step of the proof we will by induction over e construct a weighted, directed SP graph G such that val(ϕ) = SW (G). For the base case ϕ = w we construct vertices s and t and connect them by a directed edge from s to t with weight w. Assume ϕ = ϕ 1 + ϕ 2 and let G i be the graph associated with ϕ i by the induction hypothesis. Introduce one new vertex s and let G be the union of the three graphs {s} , G 1 and G 2 in which we identify t 1 with t 2 and denote it t, add an edge of weight 1 from s to s 1 , and add an edge of weight 1 from s to s 2 . By induction hypothesis the resulting graph G satisfies SW (G) = 1•SW (G 1 )+1•SW (G 2 ) = val(ϕ 1 )+val(ϕ 2 ) . Between any two vertices u and v there is at most one directed edge from u to v. We introduced one new vertex, but since t 1 was identified with t 2 the number of vertices used equals |V 1 | + |V 2 | ≤ size(ϕ 1 ) + 1 + size(ϕ 2 ) + 1 = size(ϕ) + 1. Assume ϕ = ϕ 1 * ϕ 2 . We construct G by making the disjoint union of G 1 and G 2 in which we identify t 1 with s 2 , identify s 1 as s in G and identify t 2 as t in G. For every directed path from s 1 to t 1 in G 1 and for every directed path from s 2 to t 2 in G 2 we can find a directed path from s to t in G of weight equal to the product of the weights of the paths in G 1 and G 2 , and since all (s, t) paths in G are of this type we get SW (G) = SW (G 1 ) • SW (G 2 ). The number of vertices used equals |V 1 | + |V 2 | -1 ≤ size(ϕ 1 ) + size(ϕ 2 ) + 1 < size(ϕ) + 1. For the second step of the proof we need to construct a graph G ′ such that there is a relation between cycle covers in G ′ and directed paths from s to t in G. We construct G ′ by adding an edge of weight 1 from t back to s, and loops of weight 1 at all vertices different from s and t. Now, for every (s, t) path in G we can find a cycle in G ′ visiting the corresponding nodes. For nodes in G ′ not in this cycle, we include them in a cycle cover by the loops of weight 1. Because there is at most one directed edge from any vertex u to any vertex v in G ′ we can find a matrix M of size at most (n + 1) × (n + 1) such that G M = G ′ and per(M ) = val(ϕ). The graph G ′ can be constructed using an HR algebra with only 3 source labels, and thus have treewidth at most 2. For the base case the operation edge ab is sufficient. For the simulation of addition of formulas the following grammar operations provide the desired construction: ren a↔c (f org a (edge ac // (loop a // G 1 )) // f org a (edge ac // (loop a // G 2 ))). 
For simulating multiplication of formulas we use the following grammar operations: f org c (ren b↔c (G 1 ) // ren a↔c (loop a // G 2 )). Finally, the last step in obtaining G ′ is to make a composition with the graph edge ab . ⊓ ⊔
Theorem 3. Every arithmetic formula of size n can be expressed as the hamiltonian of a matrix of treewidth at most 6 and size at most (2n + 1) × (2n + 1). All entries in the matrix are either 0, 1, or variables of the formula.
Proof. The first step is to produce the graph G as shown in Theorem 2. The next step is to show that the proof of universality for the hamiltonian polynomial in [START_REF] Malod | Polynômes et coefficients[END_REF] can be done with treewidth at most 6. Their construction for universality of the hamiltonian polynomial introduces |V G | -1 new vertices to G in order to produce G ′ , along with appropriate directed edges (all of weight 1). The proof is sketched in Figure 1. The additional vertices t i and edges make it possible to visit any subset of vertices of G with a directed path of weight 1 from t to s using all t i 's. Hence, any path from s to t in G can be followed by a path from t to s to obtain a hamiltonian cycle of the same weight. If one just needs to show universality, then it is not important exactly which of the vertices t i has an edge to a given vertex among the s i . But in order to show bounded treewidth one carefully needs to take into account which of the vertices t i has an edge to a particular vertex s i . We show such a construction with bounded treewidth, by giving an HR algebra which can express a graph similar to the one in Figure 1 using 7 source labels. Series composition is done using the following operations (also see Figure 2): f org e (f org f (f org g ( ren d↔f (ren b↔e (G 1 )) // ren c↔g (ren a↔e (G 2 )) // edge ef // edge eg // edge f g ))). Labels a, b, c and d in Figures 2 and 3 play the roles of s, t, t 1 and t n respectively in Figure 1. The above construction does not take into account that G 1 and/or G 2 are graphs generated from the base case. For base cases vertices c and d are replaced by a single vertex. However, it is clear that the above construction can be modified to work for these simpler cases as well. For parallel composition an additional vertex was introduced. It can be done using the following operations (also see Figure 3): f org e (f org f (f org g ( ren a↔e (ren c↔g ( f org a (f org c (edge ag // edge cg // ren d↔f (edge ae // edge ac // G 1 ))) // f org a (f org c (edge af // edge cf // edge ae // edge ac // G 2 )) )) ))). The final step in the construction, after all series and parallel composition have been done, is to connect vertices a and c and connect vertices b and d. ⊓ ⊔
The decision version of the hamiltonian cycle problem for graphs of bounded cliquewidth is shown to be polynomial time solvable in [START_REF] Espelage | How to solve NP-hard graph problems on clique-width bounded graphs in polynomial time[END_REF]. Though bounded treewidth implies bounded cliquewidth we are mainly interested in studying the evaluation version. Evaluation of the hamiltonian and permanent polynomials was shown in [START_REF] Courcelle | On the fixed parameter complexity of graph enumeration problems definable in monadic second-order logic[END_REF] to be in VP for matrices of bounded treewidth. They give efficient algorithms for a much broader class of problems, but the proof we give here is more direct and gives a more precise characterization. By Definition 6, computing the permanent of a matrix M amounts to computing the sum of the weights of all cycle covers of G M . In our algorithm we need to consider partial covers, which are generalizations of cycle covers.
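As a concrete illustration of Definition 6 before turning to partial covers, here is a brute-force Python sketch (our own naming, feasible only for very small n since it enumerates all n! permutations): every permutation whose selected entries all correspond to edges of G M is a cycle cover, and summing the weights of these covers yields per(M).

```python
# Brute-force sketch of Definition 6 (only feasible for tiny n, since all n!
# permutations are enumerated). Each permutation whose graph uses only edges
# of G_M is exactly a cycle cover of G_M; its weight is the product of the
# chosen entries. Helper names are ours, not from the paper.

from itertools import permutations

def permanent(m):
    n = len(m)
    total = 0
    for sigma in permutations(range(n)):
        w = 1
        for i in range(n):
            w *= m[i][sigma[i]]
            if w == 0:          # this permutation uses a missing edge of G_M
                break
        total += w
    return total

M = [[1, 2, 0],
     [0, 1, 3],
     [4, 0, 1]]
# Cycle covers of G_M: the three loops (weight 1*1*1), and the 3-cycle
# 0 -> 1 -> 2 -> 0 (weight 2*3*4); all other permutations hit a zero entry.
print(permanent(M))  # 1 + 24 = 25
```

The dynamic programming over a tree decomposition developed below avoids this factorial blow-up by instead combining the weights of partial covers bag by bag.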
Definition 7. A partial cover of a directed graph is a union of paths and cycles such that every vertex of the graph belongs to at most one path (and to none of the cycles), or to at most one cycle (and to none of the paths). The weight of a partial cover is the product of the weights of all participating edges. More generally, for any set S of edges the weight w(S) of S is defined as the product of the weights of the elements of S. In contrast to cycle covers, for a partial cover there is no requirement that all vertices be covered. The following theorem from [START_REF] Bodlaender | NC-algorithms for graphs with small treewidth[END_REF] is a standard tool in the design of parallel algorithms for graphs of bounded treewidth (see also [START_REF] Bodlaender | Parallel Algorithms with Optimal Speedup for Bounded Treewidth[END_REF] and [START_REF] Mackworth | Parallel and Distributed Finite Constraint Satisfaction[END_REF]). Theorem 4. Let G = V, E be a graph of treewidth k with n vertices. Then there exists a tree-decomposition T, (X t ) t∈VT of G of width 3k + 2 such that T = V T , E T is a binary tree of depth at most 2⌈log 5 4 (2n)⌉. We also need the following standard lemma: Proof. We construct the formula from the circuit by duplicating entire subcircuits whenever we reuse a gate. The formula constructed in this way also has height d. In the produced formula the number of gates having distance j to the root is at most twice the number of gates having distance j -1 to the root, so the formula has at most d i=0 2 i = 2 • 2 d -1 gates. ⊓ ⊔ Theorem 5. The permanent of a n × n matrix M of bounded treewidth k can be expressed as a formula of size O(n O(1) ). Proof. We show how to construct a circuit of depth O(log(n)), which can then be expressed as a formula of size O(n O(1) ) using Lemma 1. Consider the graph G = G M and apply Theorem 4 to obtain a balanced, binary tree-decomposition T of bounded width k ′ . For each node t of T , we denote by T t the subtree of T rooted at t, and we denote by X(T t ) the set of vertices of G which belong to X u for at least one of the nodes u of T t . We denote by G t the subgraph of G induced by the subset of vertices X(T t ). Consider a partial cover C of G t . Any given edge (u, v) ∈ X 2 t is either used or unused by C. Likewise, any given vertex of X t has indegree 0 or 1 in C, and outdegree 0 or 1. We denote by λ t = I t (C) the list of all these data for every edge (u, v) ∈ X 2 t and every element of X t . By abuse of language, we will say that an edge in X 2 t is used by λ t if it is used by one partial cover satisfying I t (C) = λ t (or equivalently, by all partial cover satisfying I t (C) = λ t ). We will compute for each possible list λ t a weight w λt , defined as the sum of the weights of all partial covers C of G t satisfying the following three properties: (i) the two endpoints of all paths of C belong to X t ; (ii) all uncovered vertices belong to X t ; (iii) I t (C) = λ t . Note that the number of weights to be computed at each node of T is bounded by a constant (which depends on k ′ ). When t is the root of T we can easily compute the permanent of M from the weights w λt : it is equal to the sums of the w λt over all λ t which assign indegree 1 and outdegree 1 to all vertices of X t . Also, when t is a leaf of T we can compute the weights in a constant number of arithmetic operations since G t has at most k ′ vertices in this case. It therefore remains to explain how to compute the weights w λt when t is not a leaf. 
Our algorithm for this proceeds in a bottom-up manner: we will compute the weights for t from the weights already computed for its left child (denoted l) and its right child (denoted r). The idea is that we can obtain a partial cover of G t by taking the union of a partial cover of G l and of a partial cover of G r , and adding some additional edges. Conversely, a partial cover of G t induces a partial cover of G l and a partial cover of G r . In order to avoid counting many times the same partial cover, we must define the considered partial covers of G l and G r to ensure that the partial cover of G t induces a unique partial cover of G l and a unique partial cover of G r . We will say that (λ l , λ r ) is compatible with λ t if and only if the following holds: -no edge in X 2 t is used in λ l or λ r ; -for every vertex x ∈ X t at most one of λ t , λ l , λ r assigns indegree 1 to x; -for every vertex x ∈ X t at most one of λ t , λ l , λ r assigns outdegree 1 to x; -every vertex x ∈ X l \X t has indegree 1 and outdegree 1 in λ l ; -every vertex x ∈ X r \X t has indegree 1 and outdegree 1 in λ r . We now have to prove two things. If there is a partial cover C of G t which satisfies the properties (i) and (ii) such that I t (C) = λ t then it induces a partial cover C l of G l and a partial cover C r of G r such that C l and C r satisfy (i) and (ii), I l (C) = λ l , I r (C) = λ r , and (λ l , λ r ) is compatible with λ t . Conversely, if (λ l , λ r ) is compatible with λ t , and C l and C r are partial covers of G l and G r satisfying (i), (ii), I l (C) = λ l , and I r (C) = λ r , then there exists a unique partial cover C of G t containing C l and C r such that I t (C) = λ t . Consider a partial cover C of G t which satisfies the properties (i) and (ii) defined above. We can assign to C a unique triple (C l , C r , S) defined as follows. First, we define S as the set of edges of C ∩ X 2 t . Then we define C l as the set of edges of C which have their two endpoints in X(T l ), and at least one of them outside of X t . Finally, we define C r as the set of edges of C which have their two endpoints in X(T r ), and at least one of them outside of X t . Note that w(C) = w(C l )w(C r )w(S) since (C l , C r , S) forms a partition of the edges of C. Moreover, C l is a partial cover of G l and properties (i) and (ii) are satisfied: the endpoints of the paths of C l and the uncovered vertices of X(T l ) all belong to X l ∩ X t . Likewise, C r is a partial cover of X(T r ) and properties (i) and (ii) are satisfied. If I l (C) = λ l and I r (C) = λ r , it is clear that (λ l , λ r ) is compatible with λ t . Any other partition of C in three parts with one partial cover of G l , one partial cover of G r , and a subset of edges in X 2 t would have an edge of X 2 t used by C l or C r . Hence (λ l , λ r ) would not be compatible with λ t . Suppose now (λ l , λ r ) is compatible with λ t , and C l and C r are partial covers of G l and G r satisfying (i), (ii), I l (C) = λ l , and I r (C) = λ r . We define S λt as the set of edges of X 2 t which are used by λ t . It is clear that S λt , C l and C r are disjoint. Consider C = S λt ∪ C l ∪ C r . Since (λ l , λ r ) is compatible with λ t , C is a partial cover satisfying (i) and (ii). It is also clear that C is the only partial cover containing C l and C r such that I t (C) = λ t . These considerations lead to the formula w(λ t ) = (λ l ,λr) w(λ l )w(λ r )w(S λt ). The sum runs over all pairs (λ l , λ r ) that are compatible with λ t . 
The weight w(λ t ) can therefore be computed in a constant number of arithmetic operations. Since the height of T is O(log(n)) the above algorithm can be executed on a circuit of height O(log(n)) as well. ⊓ ⊔ Theorem 6. The hamiltonian of a n × n matrix M of bounded treewidth k can be expressed as a formula of size O(n O( 1) ). Proof. The proof is very similar to that of Theorem 5. The only difference is that we only consider partial cycle covers consisting exclusively of paths, and at the root of T the partial cycle covers of the two children must be combined into a hamiltonian cycle. ⊓ ⊔ Perfect matchings of planar graphs In this section we work with undirected graphs. Definition 8. A perfect matching of a graph G = V, E is a subset E ′ of E such that every vertex in V is incident to exactly one edge in E ′ . The weight of a perfect matching E ′ is the product of weights of all edges in E ′ . By SP M (G) we denote the sum of weights of all perfect matchings of G. In 1967 Kasteleyn showed in [START_REF] Kasteleyn | Graph theory and crystal physics[END_REF] that SP M (G) can be computed efficiently if G is planar. His observations was that for planar graphs SP M (G) could be expressed as a special kind of Pfaffian. For general non-planar graphs computing SP M (G) is VNP-complete (or ♯P-complete in the boolean model of computation). This follows from the fact that for a bipartite graph G = (U, V, E), SP M (G) is equal to the permanent of its "bipartite adjacency matrix" M (if |U | = |V | = n, M is a n × n matrix and m ij is equal to the weight of the edge between u i and v j ). Theorem 7. Let (f n ) be a family of polynomials with coefficients in a field K. The three following properties are equivalent: -(f n ) can be computed by a family of polynomial size weakly skew circuits. -(f n ) can be computed by a family of polynomial size skew circuits. -There exists a family (G n ) of polynomial size planar graphs with edges weighted by constants from K or variables of f n such that f n = SP M (G n ). The equivalence of (i) and (ii) is etablished in [START_REF] Malod | Characterizing Valiant's Algebraic Complexity Classes[END_REF] and [START_REF] Toda | Classes of arithmetic circuits capturing the complexity of computing the determinant[END_REF]. In [START_REF] Malod | Characterizing Valiant's Algebraic Complexity Classes[END_REF] the complexity class VDET is defined as the class of polynomial families computed by polynomial size (weakly) skew circuits, and it is shown that the determinant is VDET-complete. We have therefore shown that computing SP M (G) for a planar graph G is equivalent to computing the determinant. Previously it was known that SP M (G) could be reduced to computing Pfaffians [START_REF] Kasteleyn | Graph theory and crystal physics[END_REF]. The equivalence of (iii) with (i) and (ii) follows immediately from Theorem 8 and Theorem 9. Theorem 8. The output of every skew circuit of size n can be expressed as SP M (G) where G is a weighted, planar, bipartite graph with O(n 2 ) vertices. The weight of each edge of G is equal to 1, to -1, or to an input variable of the circuit. Proof. Let ϕ be a skew circuit; that is, for each multiplication gate at least one of the inputs is an input gate of ϕ (w.l.o.g. we assume it is exactly one). Furthermore, by making at most a linear amount of duplication we can assume all input gates have outdegree 1. 
Thus, every input gate of ϕ is either input to exactly one addition gate or input to exactly one multiplication gate (ignoring the trivial special case where ϕ only consist of a single gate), and throughout the proof we will distinguish between these two types of input gates. Consider a drawing of ϕ in which all input gates which are input to an addition gate, are placed on a straight line, and all other gates are drawn on the same side of that line. Assume all arrows in the circuit are drawn as straight lines. This implies at most a quadratic number of places where two arrows cross each other in the plane. By using the planar crossover widget from Figure 4 we replace these crossings by planar subgraphs, introducing at most a quadratic amount of extra gates. For each multiplication gate we have that exactly one of the input gates is an input gate of ϕ, so these input gates can be placed arbitrarily close the the multiplication gate in which they are used. Thus we obtain a planar skew circuit ϕ ′ computing the same value as ϕ. Consider a topological ordering of the gates in ϕ ′ in which input gates that are input to multiplication gates have numbers less than 1, and input gates that are input to addition gates have the numbers 1 through i (where i is the number of input gates that are input to addition gates). Let m be the number of the output gate in this topological ordering of ϕ ′ . Steps 1 through For each step i < m ′ ≤ m an addition or multiplication gate are handled as shown in Figure 6. White vertices indicate vertices that are already present in the graph, whereas black vertices indicate new vertices that are introduced during that step. For multiplications the edge weight w denote the value of the input gate of ϕ ′ , which is input to that multiplication gate. Finally, the output gate of ϕ ′ is handled in a special way. Correctness can be shown by induction using the following observation. For each step 1 ≤ m ′ < m in the construction of G the following properties will hold for the graph generated so far: The labels ♯1, ♯2, . . . , ♯m ′ have been assigned to m ′ distinct vertices. For all 1 ≤ j ≤ m ′ if the vertex with label ♯j is removed (along with all adjacent edges), then SP M of the remaining graph equals the value computed at gate with topological number j in ϕ ′ . It is clear that the graph produced during the initialization (Figure 5) has this property. For the remaining vertices in the topological ordering we either have to simulate an addition gate (♯c = ♯a + ♯b) or a multiplication gate (♯c = ♯a • w). For each new labeled vertex added in this way we can see that it simulates the corresponding gate correctly, without affecting the simulation done by other labeled vertices in the graph. Bipartiteness of G can be shown by putting the vertex labeled s as well as vertices labeled ♯i, 1 ≤ i ≤ m, on one side of the partition, and all other vertices on the other side of the partition. ⊓ ⊔ Remark 2. The theorem can be proven for weakly skew circuits as well without the result from [START_REF] Toda | Classes of arithmetic circuits capturing the complexity of computing the determinant[END_REF] stating that weakly skew circuits are equivalent to skew circuits. Consider the graph G\{s, t}. One can show that this graph has a single perfect matching of weight 1. For simulation of a multiplication gate, instead of adding a single edge of weight w, one can add an entire subcircuit constructed in the above manner. Theorem 9. 
For any weighted, planar graph G with n vertices, SP M (G) can be expressed as the output of a skew circuit of size O(n O (1) ). Inputs to the skew circuit are either constants or weights of the edges of G. Proof. The result will be established by computation of Pfaffians and is shown by combining results from [START_REF] Kasteleyn | Graph theory and crystal physics[END_REF] and [START_REF] Mahajan | The combinatorial approach yields an NC algorithm for computing Pfaffians[END_REF]. Let H be a weighted graph and -→ H an oriented version of H. Then the Pfaffian is defined as: P f ( -→ H ) = M sgn(M) w(M), where M ranges over all perfect matchings of -→ H . The Pfaffian depends on how the edges of -→ H are oriented, since the sign of a perfect matching depends on this orientation (details on how the sign depends on the orientation are not needed for this proof). It is known from Kasteleyn's work [START_REF] Kasteleyn | Graph theory and crystal physics[END_REF] that all planar graphs have a Pfaffian orientation of the edges (and that such an orientation can be found in polynomial time). A Pfaffian orientation is an orientation of the edges such that each term in the above sum has positive sign sgn(M). So for planar graphs computing SP M (G) reduces to computing Pfaffians (which can be done in polynomial time). A Pfaffian orientation of G does not depend on the weights of the edges, it only depends on the planar layout of G. In our reduction to a skew circuit we can therefore assume that a Pfaffian orientation -→ G is given along with G, thus the problem of computing SP M (G) by a skew circuit is reduced to computing P f ( -→ G) by a skew circuit. From Theorem 12 in [START_REF] Mahajan | The combinatorial approach yields an NC algorithm for computing Pfaffians[END_REF] we have that P f ( -→ G) can be expressed as SW (G ′ ) where G ′ is a weighted, acyclic, directed graph with distinguished source and sink vertices denoted s and t (recall SW (G ′ ) from Theorem 2). The size of G ′ is polynomial in the size of -→ G . The last step is to reduce G ′ to a polynomial size skew circuit representing the same polynomial. Consider a topological ordering of the vertices of G ′ . The vertex s is replaced by an input gate with value 1. For a vertex v of indegree 1, assume u is the vertex such that there is a directed edge from u to v in G ′ , and assume the weight of this edge is w. We then replace v by a multiplication gate, where one arrow leading to this gate comes from the subcircuit representing u, and the other arrow leading to this gate comes from a new input gate with value w. Vertices of indegree d > 1 are replaced by a series of d -1 addition gates, adding weights of all paths leading here, similar to what is done for vertices of indegree 1. The circuit produced in this way clearly represent the same polynomial, and it is a skew circuit because for every multiplication gate at least one of the arrows leading to that gate comes from an input gate. ⊓ ⊔ Fig. 1 . 1 Fig. 1. Universality of the hamiltonian polynomial Fig. 2 . 2 Fig. 2. Series composition (simulating multiplication) Fig. 3 . 3 Fig. 3. Parallel composition (simulating addition) Lemma 1 . 1 Let ϕ be a circuit of depth d. Then there exists a formula of depth d and size O(2 d ) representing the same polynomial. Fig. 4 . 4 Fig. 4. Planar crossover widget for skew circuits Fig. 5 . 5 Fig. 5. Initialization for input gates which are input to addition gates Fig. 6 . 6 Fig. 6. I) Non-output addition II) Non-output multiplication III) Output add. 
IV) Output mult.
5 Acknowledgements
This work was done while U. Flarup was visiting the ENS Lyon during the spring semester of 2007. This visit was partially made possible by funding from Ambassade de France in Denmark, Service de Coopération et d'Action Culturelle, Ref.: 39/2007-CSU 8.2.1.
39,244
[ "840155", "171878", "840156" ]
[ "37461", "35418", "35418" ]
01490700
en
[ "shs" ]
2024/03/04 23:41:50
2012
https://hal.science/hal-01490700/file/PROS-016%20full%20paper.pdf
Florence Allard-Poesi email: [email protected] Hervé Laroche email: [email protected] Unmasking Spies in the Corporation: When the police order of discourse erupts into managerial conversations On January 3, 2011, three managers from the French car manufacturer Renault were accused of having received large sums of money, allegedly for having sold proprietary data to a foreign company. They were offered the opportunity to quit discreetly, the alternative being a formal complaint and subsequent police investigation. However, the affair quickly went public and gained extensive media coverage. Renault officially filed a complaint and the CEO was forced to go on TV on a major channel. The three managers denied any kind of misbehavior and in return filed a complaint against Renault. The police investigation found no evidence of any kind against the three executives; instead, it revealed that the whole affair was probably a scam designed by a member of the manufacturer's security service. Significant amounts of money had been spent on collecting fake evidence about secret bank accounts allegedly possessed by the three managers in Switzerland and other countries. Renault's CEO reappeared on TV to make public apologies. The three managers were either reintegrated or compensated. In this research we analyze the interviews of January 3, 2011, when the three managers each had an 'unofficial', face-to-face conversation with a high-ranked executive. These conversations were recorded in their entirety. Later, they were leaked to the press and published as audio files. As it was the first time that the managers had been confronted with the accusation, these conversations could have been opportunities for company executives 1 to have a free discussion with the managers, to collect information, and to make sense jointly of what happened. The managers and the executives had no direct hierarchical links and the conversations took place outside organizational formal structures, two conditions that previous research on sensemaking [START_REF] Weick | Sensemaking in organizations[END_REF][START_REF] Balogun | Organizational restructuring and middle manager sensemaking[END_REF] and conversations [START_REF] Ford | Organizational change as shifting conversations[END_REF][START_REF] Westley | Middle managers and strategy: Microdynamics of inclusion[END_REF]Jarzabowski and Seidl, 2008) sees as favoring the emergence of new discourses and interpretations. Unexpectedly, the conversations turned out to be highly dominated by the executives and closely resembled police interrogation, leaving little (if no) room for the managers to suggest alternative interpretations. How did such an external, police 'order of discourse' erupt into these managerial conversations? And to what extent did this emergent discourse contribute to the asymmetries between the executives and the managers? 1 For the purpose of clarity, "managers" will refer to the three accused organization members, while "executives" will refer to their interrogators. 
Following recent research adopting a critical and microscopic approach to analysing conversations [START_REF] Samra-Fredericks | Strategizing as lived experience and strategists' everyday efforts to shape strategic directions[END_REF][START_REF] Samra-Fredericks | Strategic practices, 'discourse' and the everyday interactional constitution of 'power effects[END_REF][START_REF] Rasmussen | Enabling selves to conduct themselves safely: Safety committee discourse as governmentality in practice[END_REF], we contend that conversations always rely on some prior forms of knowledge and discourses, thereby incorporating or reproducing power relationships between participants. From this perspective, the emergence of new discourses in the organization is better conceptualized as 'traces' of (external) structures [START_REF] Fairclough | Critical Discourse analysis, organisational discourse and organisational change[END_REF] rather than of their suspending or suppressing. Relying on conversational analysis [START_REF] Heritage | Conversation analysis and institutional talk[END_REF] and discursive psychology [START_REF] Potter | Discourse analysis as a way of analysing naturally occurring talk[END_REF][START_REF] Edwards | Script formulations: An analysis of event descriptions in conversation[END_REF]1995), we analyzed the overall organization of the conversations, as well as the discursive tactics used by the executives to accomplish asymmetries. Two interviews exhibited a highly dominant pattern, while in the third interview an apparently more 'open' pattern was enacted. Although different, these two patterns were oriented towards the same objective: that of dismissing the managers. A turn-by-turn analysis of conversation extracts show that, in both patterns, the executives relied on the sorts of discursive tactics used in police interrogation. The paper is organized around the following four sections. Firstly, we review previous works on conversations and show how the conceptualization of how new ideas emerge, and discourse in organizational conversations, has evolved in recent research. Secondly, we describe the research context and the methods used to analyze the conversations. Thirdly, we present the overall structure of the conversations, conduct a turn-by-turn analysis of selected sequences, and then identify some of the tactics used by the executives. Finally, we discuss the contribution of discursive tactics to the eruption of external orders of discourse in organizations, as well as a number of circumstances that might favor such an 'emergence'. Conversations and Organizing in Organizational Analysis Conversations can be defined as "what is said and listened to between people" (Berger & Luckman, 1966, in Ford, 1999: 483), "a complex, information-rich mix of auditory, visual, olfactory and tactile events that may be used in conjunction with, or as substitute for, what is spoken" (Ford, 1999: 484). It refers to those "interpersonal interactions, […] in which people interact with each other through verbal statements but also through glances, gestures and positioning" (Goffman, 1967: 1, in Mengis and Eppler, 2008: 1287). Conversations may take place in formal committees such as recurrent meetings, workshops (Hoon, 2008;[START_REF] Jarzabkowski | The role of meetings in the social practice of strategy[END_REF], and informal encounters between organizational members (Hoon, 2008). 
Departing from a functionalist concept 2 of conversations, we consider them as both a means and a medium through which organizing unfolds. Conversations enable people to agree on decisions and actions to undertakeand thus to coordinatetheir actions (Weick, 1995: 99). These agreements do not take place in a socio-political vacuum, but rather in a context of meanings, beliefs, status, roles, authority relations, routines, and procedures that are instantiated and reconstituted as participants interact. From this so-called 'constructivist' perspective, research that adopts a 'mesoscopic' approach to conversations in organizations has developed a dialectical and tensional view of conversations. On the one hand, conversations are seen as a privileged medium through which organizational members develop new meanings and behaviors that may give rise to new systems of roles and rules in the organization [START_REF] Ford | The role of conversations in producing intentional change in organizations[END_REF][START_REF] Weick | Sensemaking in organizations[END_REF]. On the other hand, research considers that, when interacting, organizational members build on and re-enact existing routines and formal and informal hierarchies, thereby reproducing the organizational structure (Weick, 1990;Giddens, 1991). Recent studies investigating the micro-discursive activities of participants during conversations mitigate such a tensional or dialectical view of conversation and organizing. In so far as conversations always rely on previous discourses, conversations cannot escape the (discursive) power of 'structures'. Here, we briefly review past research adopting a 'tensional' approach, before detailing the contributions of microscopic studies on conversations. A 'tensional' (or dialectical) perspective on conversations: Conversations as a vehicle for both organizational reproduction and change Following Weick, research on sensemaking processes in organizations underscores that sensemaking takes place mainly through "talk, discourse, and conversation" (Weick, 1995: 41), and through conversation in particular, new language and understandings can be developed. The diversity of experiences and actors' interpretations create a polyphony (Hazen, 1993) that is liable to give way to new understandings, though this does not imply that these understandings will be shared (Langfield-Smith, 1992;Maitlis, 2005;Doise & Moscovici, 1994). As people strive to share their feelings, intentions, and thinking through face-to-face communication, they give rise to "vivid, unique intersubjective meanings" (Weick, 1995: 75). 2 Where conversations are seen as instruments for transmitting information and making decisions. While conversations are a privileged medium through which managers may develop new understandings of their experiences and of the world around them [START_REF] Westley | Middle managers and strategy: Microdynamics of inclusion[END_REF][START_REF] Weick | Sensemaking in organizations[END_REF][START_REF] Balogun | Organizational restructuring and middle manager sensemaking[END_REF], such new understandings cannot develop without first altering or suspending existing prevailing orders of discourse (Ford, 1999: 491). According to Fairclough (2010: 358), the organization's order of discourse may be defined as a relatively stable and durable configuration of discourses, genres, and styles that might be complementary or conflicting. Whereas discourses are particular ways of representing the world (e.g. 
the discourse of strategy), styles designate ways of being (e.g. a charismatic leader) and genres ways of interacting with others (e.g. participation). Referring back to [START_REF] Foucault | L'archéologie du savoir[END_REF], Fairclough underlines that, although linguistic and semiotic systems can generate an infinite number of texts (i.e. the discoursal elements of social events), "the actual range of variation is socially delimited and structured, i.e. through the ways semiotic systems interact with other social structures and systems" (p. 358). According to Ford (1999: 496), orders of discourse designate underlying social conventions regarding the production, distribution, and consumption of discourse that will structure conversational patterns, e.g. "who gets to speak (voice), on what, and when". The order of discourse in an organization depends on the type of discourses incorporated, the genres and styles that accompany them 3 (e.g. the manager as 'in control' of his destiny and environment in the orthodox discourse of strategy, see Knights and Morgan, 1991), and on the way organizations progressively appropriate, modify, and connect/disconnect these available discourses, thereby reflecting or reconstituting its particular structures and power relationships (see [START_REF] Taylor | Finding the organizations in the communication: Discourse as action and sensemaking[END_REF]. Here, previous works underline that organizational roles and statuses should be either "suspended" or "defined dynamically" (Mengis andEppler, 2008: 1303) for new understandings or behaviors to appear (Ford, 1999: 491). From this perspective, [START_REF] Balogun | Organizational restructuring and middle manager sensemaking[END_REF] show that lateral social interactions (as opposed to vertical interactions) between managers are crucial in the development of equivalent understandings of their roles during planned radical change. Conversations, when prevalent structures are suspended or relaxed, can transform existing structures of power and roles in the organization [START_REF] Ford | Organizational change as shifting conversations[END_REF]. The creation of new meanings in strategic conversations between a superior and a subordinate occurs only when the superior "allows the 3 According to [START_REF] Fairclough | Critical Discourse analysis, organisational discourse and organisational change[END_REF], genres and styles include not only discoursal aspects, but also bodily habits and dispositions, which distinguish these notions from those of discourse that only has a discoursal aspect. Following Foucault and Knights and Morgan (1991), we consider that 'discourses' are also distinctive in the genre and style that they instantiate for those who speak. subordinate considerable degree of freedom" (Westley 1990: 346). Similarly, freer and more creative thought is favored by certain strategic practices (respectively, awaydays or strategic episodes and the distancing of the centre/periphery) that suspend the rules and routines of the organization. [START_REF] Jarzabkowski | The role of meetings in the social practice of strategy[END_REF] view 'free discussion", i.e. the suspension of authority on both the content and the processes of discussion, as a condition for the emergence of variations in strategic orientations. 
However, although informal and seemingly freed from hierarchy, conversations can "potentially enact formal structures of domination" (Westley, 1990: 340) by reflecting, at least in part, the structure of the roles in place in the organization [START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF] and framing the agenda and the selection of initiatives by subordinates (Hoon, 2008). Beyond hierarchical aspects, conversations also rely on previous discourses, genres, and styles, so that they are, to a certain extent, always 'pre-structured' [START_REF] Taylor | Finding the organizations in the communication: Discourse as action and sensemaking[END_REF][START_REF] Alvesson | Varieties of discourse: On the study of organizations through discourse analysis[END_REF]. The micro-activities of conversations activate and reenact both the society (e.g. the discourse of 'professional management') and the organization within which the interactions take place. This leads us to a more nuanced picture of conversations in organizations, where the suspension of existing structures and related 'orders' of discourse appears fragile and transitory, if not illusory. A complexified approach to conversations: Where conversations cannot escape the power of 'structures' Without contradicting previous perspectives on conversations, research adopting a microscopic approach to conversations provides a more nuanced and subtle portrayal of what 're-constitution' or 're-enactment'/'emergence' in or through conversation may mean. While adopting a microscopic approach to conversations, these studies demonstrate that social structures and power relationships never completely disappear. Relying on conversational analysis and ethnomethodology in particular, recent empirical research on naturally occurring talks in organization studies sheds light on the discursive devices and 'forms of knowledge' (or discourse) activated by participants as they strive to gain a favorable position vis-à-vis the other during conversations, and how such micro-discursive activities might reconstitute a combination of macro-discourses, styles and genres (i.e. an order of discourse). Combining ethnography with an ethnomethodological and conversational analysis, [START_REF] Samra-Fredericks | Strategizing as lived experience and strategists' everyday efforts to shape strategic directions[END_REF] explains how, although all participants are at the same hierarchical level and no one can a priori impose his/her view on the other, a strategist is able to redefine both the strategic orientations of a firm and the status and roles of other strategists through 'minormoves' made during conversations (e.g. mitigating, questioning, use of metaphors and of typified forms of knowledge and related 'moral orders ', what Samra-Fredericks (2005) later called "the discourse of strategy"). Such processes among peers can also prevent change, as suggested by [START_REF] Kärreman | Making Newsmakers: Conversational identity at work[END_REF]. Analyzing in depth a meeting held by a Swedish newspaper about news bills and their sales effects, the authors show that, despite the absence of formal authority, participants reconstitute the organization's 'order of discourse' and 'deviant' voices are silenced in one way or another (e.g. jokes, request for 'good news', etc.). 
Skillful managers can resort to more complex patterns of discourse by mixing partnership, competition, and authority to introduce organizational change, as analyzed by [START_REF] Rasmussen | Enabling selves to conduct themselves safely: Safety committee discourse as governmentality in practice[END_REF]. To summarize, while initial research on sensemaking and organizational change underlines that organizational structures and orders of discourse have to be 'suspended' for new understandings and behaviors to emerge, recent studies adopting a micro-analytic approach suggest that such a 'suspension' is never fully realized, as one always relies on some prior forms of knowledge and discourses. This means that associating 'free discussion' with 'emergence', and 'hierarchical relations' with 'reproduction', does not adequately reflect the variety and complexity of conversational dynamics taking place in organizations. Conversations do not happen in a discursive vacuum but rely instead on other 'discourses', 'styles', and 'genres', thereby modifying or reproducing the organization's order of discourse. Conversations, as social practices, are the discursive traces of structures, in that emergence "require[s] reference to these structural" aspects (Fairclough, 2010: 368). Following this perspective, our intention here is to take a step toward a better understanding of the structural dimension of 'emergence' in conversations. Taking Fairclough's warning seriously, how do external discourses erupt into the organization? How do they contribute to symmetrical/asymmetrical relationships between organizational members? Research setting The interviews with the three managers, which took place at Renault's headquarters on 3 rd January, 2011, offer relevant material through which to explore this question for two reasons. Firstly, the managers were summoned to have an unofficial, informal conversation with a high-ranking executive. Although they knew each other quite well, the managers had no direct subordinate relationship with the executives. This set of conditions is favorable for a 'free' or symmetrical [START_REF] Heritage | Conversation analysis and institutional talk[END_REF] discussion. Secondly, the interviews were the first ones conducted with the managers regarding the 'affair', and so they could have been unique opportunities for the participants to share information and jointly make sense of what was happening. When the managers met the executives, the investigation about possible misbehavior had been developing for approximately four months. It started with an anonymous letter associating one of the three managers with acts of bribery, and alluding to another one. None of the managers had ever been requested to provide information on the matter, either directly or indirectly. The investigation was conducted by the security department, with the approbation and under the monitoring of top management. It appeared later that, at the time of the interviews, the evidence possessed by Renault had been obtained through an unidentified agent, known only by one member of the security department. This agent had previously worked for Renault on similar cases, one of them leading to the resignation of a manager two years previously. No substantial evidence (e.g. documents, files, testimonies) was provided by the agent; instead, available information consisted of the names of banks, account numbers, and origins, destinations, and amounts of money transferred. 
Nevertheless, it was suspected that the managers had sold proprietary information relating to projects involving electric vehicle technologies for the benefit of Chinese interests. Two months later, it became clear that first Renault and then the French police had failed to find any substantial evidence against the managers. In the meantime, the interviews had been leaked and published in the press (as well as, later, a meeting between members of the security department, headquarters staff, and a lawyer). All three managers had operational positions. Two of them, Balthazard and Rochette, had senior management positions (Rochette being the Balthazard's deputy), while the youngest one, Tenenbaum, was considered a highly promising manager. Among the three executives, two of them, Husson and Coudriou, held high-level staff positions as heads of Legal Affairs and Managerial Human Resources, respectively. The third executive, Pelata, was the Director General of the company (ranking second and reporting directly only to the Chairman and CEO, Carlos Ghosn). It has to be noted that the manager who was interviewed by Pelata was not working under his direct command. Research methods Following [START_REF] Miller | Building bridges. The possibility of analytic dialogue between ethnography, conversation analysis and Foucault[END_REF] and [START_REF] Heritage | Conversation analysis and institutional talk[END_REF], we consider that some analytical bridges can be constructed between Foucauldian discourse analysis, such as those elaborated by [START_REF] Fairclough | Critical Discourse analysis, organisational discourse and organisational change[END_REF], and conversational analyses. As mentioned earlier, we define an order of discourse as a way of representing the world, of being (i.e. a style), and of interacting with the other (i.e. a genre), a dimension that may be studied more particularly during conversations. We employed a set of methods inspired by conversational analysis and discursive psychology (also called discourse analysis, see Silverman, 2006: 223). While driven by different analytical focuses and objectives, conversational analysis (CA) and discursive psychology (DP) share a set of assumptions regarding talk-in-interaction. Firstly, both approaches consider that talk is a medium for social action, so that "the analysis of discourse becomes the analysis of what people do" (Potter, 2004: 201). Rather than explaining people's talk by inferring to their underlying beliefs, values, states of mind, or implicit goals, CA and DP describe what people are actually doing when talking, for it is through these actions that people fabricate the context of their interactions and display mutual understanding (or misunderstanding). Secondly, CA and DP approaches are reluctant to embrace the classical micro-macro distinction, arguing that social realities and interactions between people are constituted through talk-in-interaction. Institutions (and consequently organizations), exemplified by asymmetrical relationships, prototypical descriptions, or the constraint of people's actions, are envisioned as situated constructions that are made up, attended to, and made relevant by participants during their conversations4 [START_REF] Potter | Discourse analysis as a way of analysing naturally occurring talk[END_REF]: "'Context' and identity have to be treated as inherently locally produced, incrementally developed and, by extension, as transformable at any moment. 
[…] Analysts who wish to depict the distinctively 'institutional' character of some stretch of talk must […] demonstrate that the participants constructed their conduct over its courseturn by responsive turnso as progressively to constitute… the occasion of their talk, together with their own roles in it, as having some institutional character" (Drew and Heritage, 1992: 21, in Silverman, 2006: 221). By extension, CA and DP prefer to analyze naturally occurring talk as the locus of the social construction of institutions and interactions. We follow these analytical commitments to analyze the three interviews between the managers and executives at Renault. These interviews were audio-recorded and published on two journal websites in April 2011. Their durations range from 25 to 40 minutes. We transcribed the interviews by following a simplified format rather than the detailed prescriptions recommended by CA, firstly because the conversations and subsequent analyses were held in French and it would have made little sense to translate language-specific details such as intonations or voice raising, and secondly, our research objective was not to conduct an analysis as detailed as those conducted in CA. Rather, our intention was to characterize the discursive tactics used and relationships constructed by participants during their conversations, in that their symmetry or asymmetry may reflect the genres and styles of the orders of discourse during the interactions. In order to characterize the discursive tactics used and relationships constructed by participants during conversations, we conducted a three-step analysis. Overall structural organization. Following [START_REF] Heritage | Conversation analysis and institutional talk[END_REF] recommendations for CA, the overall structural organization of the three conversations was first analyzed. We looked for typical phases or sections in terms of the tasks that the participants were doing. Though focused on a single topic and a single objective, the interviews clearly showed similarities and differences. Identification of sets of similar sequences and variations. With this in mind, the two authors looked independently for sequences that might show similarities and differences in terms of tasks and sub-goals (cf. [START_REF] Edwards | Facts, norms and dispositions: practical uses of the modal verb would in police interrogations[END_REF]. Only sequences that appear at least twice in the dataset and those that depart from these were retained for subsequent analysis. According to CA and DP, looking for similarities and variations is essential for catching the goals and tasks that participants accomplish through their talk. The two authors converged on six sets of sequences, each set comprising from seventeen to four sequences (or extracts). Two sets of sequences (containing only two sequences each) were identified by the second author, and the two authors agreed to keep these two sets in the final analysis. This analysis clearly demonstrated that the three conversations were dominated highly by the executives. Each set of sequences was consequently labeled according to the main activity or sub-goals pursued by the executives (e.g. claiming to have information or, on the contrary, recognizing the organization's ignorance regarding what really happened). The analysis also showed that these asymmetries were not accomplished through the same discursive tactics. Analysis of the discursive tactics used. 
In order to gain a better understanding on how such authoritative behaviors were accomplished, we analyzed the discursive tactics used in those sequences, defining these tactics 5 as discursive actions oriented towards the accomplishment of a particular sub-goal. Here, we relied on previous studies conducted on police interrogation [START_REF] Shuy | The Language of Confession. Interrogation and Deception[END_REF][START_REF] Leo | Police Interrogation and American Justice[END_REF]Proteau, 2009) and interviews [START_REF] Haworth | The dynamics of power and resistance in police interview discourse[END_REF][START_REF] Edwards | Facts, norms and dispositions: practical uses of the modal verb would in police interrogations[END_REF]. Such a framework was inspired by the data themselves, as one of the executives made reference to a French police series, and the lexicon used in the interviews belonged to those of legal and police institutions (i.e. investigation, criminal court, suspect, confess, etc.). For each set of sequences, one or two extracts were chosen and analyzed in detail, with the aim of characterizing the tactics used by the executives to orient the path taken toward the identified goal. In the next section, we present an overview of conversational patterns in the three interviews, before then analyzing the main sequences identified and the discursive tactics used by the executives. Results Overview of conversational patterns In this section we try to "build an overall map of the interaction in terms of typical phases or sections" (Heritage, 2004: 227-229) for each interview, and point out some similarities and differences between the three interviews. Table 1 provides a comparative summary of the structure of the interviews. In the three interviews, we found a similar underlying structure organized around the following phases (in this order, and indicated in italics in Table 1): greetings; incrimination via spying and bribery; suspension procedure; offering a "choice" (complaint and trial versus confession and resignation); leaving; and security procedures. Each interview also exhibits a short sequence of professional camaraderie (even two in the Pelata-Rochette interview). 5 These tactics may be accomplished through various discursive devices (see [START_REF] Edwards | Script formulations: An analysis of event descriptions in conversation[END_REF]1995;[START_REF] Potter | Talking cognition: mapping and making the terrain[END_REF][START_REF] Whittle | Bankers in the dock: Moral storytelling in action[END_REF]. Discursive devices refer to linguistic styles, phrases, tropes, and figures of speech (Whittle and Mueller, 2011: 111) such as rhetorical contrast, identity ascription, the use of modal verbs, etc. For instance, [START_REF] Edwards | Facts, norms and dispositions: practical uses of the modal verb would in police interrogations[END_REF] shows that in order to negate an accusation during police interrogationwhat we call here a discursive tactic -suspects usually use the modal verb 'would' (i.e. Why would I do such a thing?), which helps them refer to their supposedly habitual and 'normal' behavior and reject the accusation. However, these phases are sometimes fragmented and repeated. More importantly, other sections occur in between. Most of these sections revolve around the substance of incrimination (spying and bribery), but they take a different turn in each interview. 
In the Husson-Tenenbaum interview, they typically involve repetitive sequences of denials versus a rejection of the denials and more accusations (four phases). In the Coudriou-Balthazard interview, they typically show Balthazard asking questions about the incriminating facts and Coudriou answering in an elusive or laconic manner (five phases). In the Pelata-Rochette interview, the sections are more like a joint investigation (two phases) and cooperation (two phases), in that both participants explore different forms of conjecture, evaluate them, and then discuss the conditions of their cooperation. While displaying similarities and differences, the three conversations culminate in the same resultthe suspension of the manager. In the following section we focus primarily on the particular sections of the interviews that exhibit differences between conversations. Discursive tactics Focusing our attention on the phases that exhibited differences in the conversations, we looked for similarities and differences in the sequences of these phases and this way identified different sets of sub-goals and activities carried out by the executives, namely claiming to have information while maintaining ambiguity vs. recognizing ignorance while providing details; showing confidence via institutional reference vs. looking for more information; closing off vs. opening up alternatives; and passive, minimal emotion demonstrating vs. active emotion demonstrating. We now analyze in depth the discursive tactics used in these sequences by the executives. Claiming to have information while maintaining ambiguity vs. recognizing ignorance while providing details In a significant number of sequences in Coudriou-Balthazar (ten sequences) and Husson Tennenbaum's (seven sequences) interviews, the executive asserts that Renault holds information incriminating the manager. In these sequences, the executive is careful not to provide any detail about what the organization really knows, so that the accusation remains vague and ambiguous. The following extract is an exemplar of such talk. Husson (H) has just accused Tennenbaum (T) of having committed a 'serious offence', implying bribery and 'foreign interests': ploy refers to any attempt to make the suspect believe that the police possess incriminating evidence. While an evidence ploy may imply confronting the suspect with testimonial or material ('scientific') evidence, Husson does not provide a cue about the 'evidence' they hold. In addition, his terse answers ("We know", "You're guilty") convey both confidence and ambiguity. Although surprising in the organizational context, the eruption of the police order of discourse is confirmed in the next turns (lines 55 to 58). Husson, again, attacks Tenenbaum's denials when asserting that Tenenbaum is playing the suspect in Commissaire Moulin (l. 55), a popular French detective series of the 1980s. Husson portrays Tenenbaum's denials as 'playing the suspect who denies everything', a discursive device known as reflexive conceptualization, which refers to "instances when speakers explicitly refer on the meaning of their own (or others') talk […], which enables speakers to 'cancel, substitute or renew prior segments of their talk'" (Auburn, 2005: 701, in Whittle andMueller, 2011: 126). Here, reflexive conceptualization is a powerful device employed to discourage further denials from Tenenbaum ("That's fair enough"). 
The repeated use of attacks on denials and evidence ploy tactics, together with the vagueness of Husson's answers, are effective means for putting pressure on the manager, a third tactic widely used in police interrogation [START_REF] Leo | Police Interrogation and American Justice[END_REF]Proteau, 2009). Pressure tactics refer to a set of both physical or material and discursive techniques such as social isolation, sensory deprivation, escalation, or repetition. In the sequences between Husson and Tenenbaum, pressure is accomplished by repeating the 'We know' answers and the evasiveness of his responses, which may convey the feeling of being trapped in a Kafkaesque situation. In summary, Husson is able both to claim that the organization holds evidence and to remain vague about what they know. These activities are accomplished through the use of three tactics used in police interrogation: evidence ploys, attacks on denials and pressure. These three tactics are accompanied and sustained by a fourth discursive tactic, namely the use of brief answers that offer no opportunity for Tenenbaum to glean more information. With subtle variations, similar tactics and effects were found in the subsequent parts of the conversation, in Coudriou-Balthazard's interview, and, very marginally, in the Pelata-Rochette conversation (one sequence only). The analysis of the conversation between Pelata (P) and Rochette (R) shows that Pelata gives Rochette much more information about what Renault actually knows. Contrary to Husson, Pelata never states that the organization knows everything about the case. On the contrary, he repeatedly asks Rochette for further Referring to these institutional aspects is also a way for participants to display confidence, to show their expertise, and to gain the upper hand, temporarily at least, on the other protagonist. By mentioning the CEO and the 'law department', Coudriou also evokes negative consequences for the manager, a point that appears very clearly in lines 152-153 ("criminal court", " complaint", "we can take you to court"). Again, Coudriou uses repetition as a way to dramatize and increase the pressure on the manager [START_REF] Leo | Police Interrogation and American Justice[END_REF]. Such a dramatization of the consequences for the manager is also known as the 'bad scenario' tactic in police interrogation, a tactic that we will discuss in detail later on. Pelata does not display such a confident attitude: He asks Rochette precise questions (six sequences) and at the same time provides him with more information. Extract 4a (which quickly follows extract 2) and 4b are significant examples. Extract 4 a. 32 P: Did Michel Balthazard ever ask you to get him some plans or some er…? 33 R: Are you joking? 34 P: I don't know. Extract 4b 127 P: You've not been asked by er people from here or there from, inside Renault to er to 128 pass information that looked strange to you (clearing throat)? 129 R: No, wait, really I'm at least a little bit aware er a little bit aware of what I'm doing and 130 of the [stakes?], here. Plus I think I'm rather loyal. No, no, no, no, no, no. In both extracts (l. 32 and l. 127-128), Pelata asks Rochette precise 'yes-no' questions regarding his past actions. The questions suggest that Rochette could have given away information without any intention of misbehaving, but rather by obeying his superior (l. 32) or inadvertently performing such an act (l. 127). 
Such a suggestion may be related to the tactic known in police interrogation as good scenarios construction. Constructing good scenarios refers to the suggestion of possible reasons or scenarios that downplay the responsibility of the suspect: attributing blameworthiness to social circumstances, redefining the action in a way that minimizes the suspect's culpability, and displacing the locus of responsibility from the suspect to outside aspects (Leo, 2008: 153-154). Bad scenarios, on the contrary, exaggerate the seriousness of the act or its consequences (p. 154). Both tactics are aimed at pushing the suspect to confess. In the Pelata-Rochette conversation, however, the tactic is not successful because Rochette is appalled by Pelata's questions (e.g. l. 33 "Are you joking?") and even offended (l. 129 "no wait, really I'm at least aware of what I'm doing"). Rochette interprets the scenarios as so outrageous that Pelata is forced to retreat (l. 34. "I don't know'). The good/bad scenarios tactic is used more particularly in the following sets of sequences. Closing off vs. opening up alternatives The contrast between the confident and accusatory attitude of Husson and Coudriou and that of investigator adopted by Pelata is at its strongest when the executives envisage what is going to happen next. Extract 5a follows extract 3, where Coudriou underlines the seriousness of the case by referring to legal and institutional aspects and possible legal consequences. While Husson and Extract 5a C: […] Er, the decision about all this is not made 154 yet. So it can go very, very far, so you realize that on the private side, I looked at that, I 155 looked again at your er personal situation, your kids, your wife, all this, it's only half fun, 156 hey? You're 56, you're young. You have three kids. You're a big name in the company. 160 C: So er you realize the implications of all this on the private side. So there's, there's 161 another option, hey? If you decide to, to resign, this could also be an option. After this 162 letter, you have plenty of time to think about it. For us, today we can er not sue, we can, 163 we can decide to stop here and decide that, well that you think about it, you know what. While maintaining that the decision has not been taken yet, Coudriou dramatizes the consequences (l. 153-154 "so it can go very far") and then refers to Balthazard's private situation. Although not being very specific, the reference to Balthazard's family frames the consequences as dramatic (l. 155-156 "your kids, your wife, all this, it's only half fun hey?"; l. 156 "You have three kids"). He then goes one step further by referring to Balthazard's reputation (l. 156 "You're a big name in the company", l. 158 "You're one of the biggest names in the company"). In so doing, Coudriou not only increases the pressure on Balthazard, but also flatters him and indirectly calls for his sense of honor because being a 'big name' conveys the idea of notoriety, integrity, and competence (in France, at least). In constructing such a bad scenario, Coudriou is clearly closing off any alternative interpretations of what has happened and of what will happen. Balthazard seems stunned (l. 157 "I just don't get it" repeated over l. 157 and 159). After having developed a very bad scenario (extract 3, l. 153 "take you to the court"), Coudriou suggests what should appear as a 'good' situation (l. 160-161. "So there's, there's another option, hey?"). 
In both scenarios, however, Balthazard is constructed as 'guilty', yet he vehemently rejects this closed interpretation (extract 5b, l.164). The contrasting sets of activities and tactics used by the executives are also accompanied by different 'emotional tactics', a trait also found in previous police interrogation studies [START_REF] Rafaeli | Emotional Contrast Strategies as Means of Social Influence: Lessons from Criminal Interrogators and Bill Collectors[END_REF][START_REF] Leo | Police Interrogation and American Justice[END_REF]. Extract 5b Passive, minimal emotion demonstrating vs. active emotion demonstrating During the interviews, the managers express a variety of emotions such as astonishment, confusion, dismay, and indignation. However, although they are colleagues from the same company, some of them long-time acquaintances, the executives conducting the interviews do not engage in a display of emotions in return. Most of the time they ignore the expression of emotion and just carry on or, as analyzed earlier, take the emotion as part of a denial act that is to be expected and rejected. In each interview there are only rare occasions when the executive responds to the manager and engages in a brief sequence involving a common display of emotion. In extract 7 (Husson-Tenenbaum) this occurs with a minimal level of emotion sharing, while in extract 8 (Pelata-Rochette) more empathy is expressed. Husson's "too bad" (l. 272) is received as an understatement by Tenenbaum, who is about to lose his job and faces the possibility of a trial. Husson acknowledges this by repeating Tenenbaum's utterance (l. 274). This passive way of acknowledging the dismay expressed by Tenenbaum can be interpreted as a demonstration of coldness that is part of a "bad cop" tactic [START_REF] Rafaeli | Emotional Contrast Strategies as Means of Social Influence: Lessons from Criminal Interrogators and Bill Collectors[END_REF], aiming at exerting greater pressure [START_REF] Leo | Police Interrogation and American Justice[END_REF]. By contrast, in extract 8, Pelata takes a more active part in the sharing of emotions. Discussion How do external discourses erupt into conversations in organizations? An in-depth analysis of three conversations between executives and managers at Renault shows that, through the use of various police interrogation tactics, the executives were able to accomplish activities that transformed a supposedly 'free', symmetrical conversation oriented towards joint sensemaking into an asymmetrical conversation oriented towards the laying off of the managers. These results shed light on the 'structural' aspects of 'emergence' in conversations in three respects. Firstly, they suggest an enlarged conception of the order of discourse. The eruption of the police order of discourse into the conversations is not due solely to the introduction of 'police' lexicons and related views of the world, but relies heavily on the use of good cop/bad cop tactics, that is, specific ways of behaving and relating (i.e. a style and genre) 6 . This enlarged conception of the order of discourse echoes Samra-Fredericks ' (2003; 2005) previous studies on managerial conversations, which showed how the incorporation of the discourse of strategy relies heavily on the discursive minor moves (i.e. asking questions, use of metaphors etc.) made by one strategista set of discursive acts that could be associated with a strategic 'genre'. 
This, of course, does not imply that a discourse is systematically associated with a stable style and genre. Following [START_REF] Foucault | Politics and the study of discourse[END_REF], [START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF], for instance, showed how the discourse of strategy implies various kinds of interactions moving from a top-down, disciplinary process to participative relationships. From a similar perspective, police discourses may imply a variety of relationships moving from accusatory interrogation to respectful interview (see Hoon, 2010;[START_REF] Shuy | The Language of Confession. Interrogation and Deception[END_REF], thereby implying different orders of discourse. The view that an order of discourse implies a specific genre and style increases our sensitivity towards the variety of discoursesand the asymmetries they carry. Secondly, our analysis supports Fairclough's (2010) critical view of the notion of 'free discussion' and emergence. While new discourses can be introduced in formal meetings (cf. Rasmussen, 2010) via hierarchical relationships, 'free discussions' do not necessarily equate with symmetrical relationships and joint sensemaking. Our research shows that when conversations take place outside the hierarchical structure and the prevailing orders of discourse, the 'emergence' expected to happen by some scholars (Jarzabowski and Seidl, 2008;Wegner, 1990) turns out to be highly structured by another order of discourse. Even more strikingly, this 'emergence' looks more like an 'eruption': Far from being the result of a jointly built, progressive process, it is rather brutally imposed and enacted by a range of discursive tactics -an 'imposition' that greatly contributes to the asymmetries observed. A third, interesting aspect of our contribution here, we believe, is that it suggests that the substitute order of a discourse can be imported. An intriguing question is, then, how and why does a certain 'order of discourse' happen to penetrate an organization? In the case we 6 It is difficult to set a clear difference between genre and style here: behaving like a 'cop' (i.e. the style) implies an accusatory, suspicious attitude towards the managers (i.e. a genre), who in turn are constructed as 'suspects'. studied, it is not surprising that once a possible case of spying and bribery has been identified, legal issues come into play. What is striking, though, is that the Renault executives (and perhaps also the top management team) incorporated this order of discourse by deciding not to go to the police and instead acted like police interrogators. One could have imagined that, categorizing the managers' behavior as criminal, the executives would have handed the matter to the police and judicial system, while handling the managerial and strategic aspects of the case on their side. Renault's top management later argued that it was for purposes of discretion and confidentiality that they decided not to go to the police. If we admit this, it does not imply, however, that they should have handled the case in the same way as the police would (supposedly) have done. We suggest two alternative explanations for the eruption of an imported order of discourse, although considerable additional investigation is certainly required. The key question is to what extent the triggering event (i.e. 
alleged evidence of spying and bribery by managers) constituted a breach in the flow of events [START_REF] Weick | Sensemaking in organizations[END_REF]Weick, Sutcliffe & Obsfeldt, 2005). The first explanation is that, when organizational members are confronted with an interruption that cannot be dealt with by employing routine solutions, an imported order of discourse provides organizational members with an alternative repertoire of legitimate ways of acting, relating, and talking. We might hypothesize that the stronger the interruption, the more an imported order of discourse is likely to be adopted. The order of discourse that is likely to be introduced depends on the knowledge and skills of the participants in the conversation (cf. [START_REF] Samra-Fredericks | Strategizing as lived experience and strategists' everyday efforts to shape strategic directions[END_REF][START_REF] Samra-Fredericks | Strategic practices, 'discourse' and the everyday interactional constitution of 'power effects[END_REF] and the various influence processes that take place between alternative 'orders' of discourse introduced. Following Janis' (1972) seminal works on decision processes, one may assume that Renault's security department's influence on the executives, together with the isolation of top management from external influence, contributed both to the absence of alternative discourses or dissenting voices and to the prevalence of the police order of discourse. The popularity of detective series and films might finally explain the ease with which the executives appropriated the police 'order' of discourse and were able to play the good/bad cops (cf. [START_REF] Zimbardo | On the ethics of intervention in human psychological research: With special reference to the Stanford prison experiment[END_REF]. In the case we studied, it is not clear, however, that the trigger was necessarily such a strong event. Partial evidence of prior similar events, and of similar ways of handling the cases, has subsequently been published in the press, although without such an abundance of first-hand, high-quality information. If we admit that our case had precedents, then the adoption of the police discourse appears as the result of an intra-organizational (micro-level) institutionalization process [START_REF] Elsbach | Intra-organizational institutions[END_REF]. The use of a police order of discourse is therefore part of an integrated, legitimized process for handling crises. Repeated interactions on a variety of cases and among a limited group of members from within the security department, the department of legal affairs, and top management established taken-for-granted beliefs and ways of acting, encompassing specific ways of gaining information (i.e. resorting to external, undercover sources), sharing information (i.e. with selected members the top management team), and making decisions (i.e. taking some liberties with the compliance committee and related official procedures). In addition, preferences for speed and discretion in handling such cases, expectations of submission from the suspects, and prior (apparent) successes also established norms about decisions that should be implemented (i.e. resorting to discursive police sub-tactics and presenting suspects with a choice between legal complaint and discreet resignation early in the process). While the first explanation portrays organizations as fragile under exceptional circumstances, the second explanation views organizations in a 'Dr Jekyll and Mr Hyde' manner. 
The case suggests, eventually, that, whatever the correct explanation, the eruption and dominance of only one 'order of discourse' may imply damaging consequences for everybody. Coudriou close off alternative versions of the future (four sequences in the Husson-Tenenbaum conversation; three in Coudriou-Balthazard) and construct the manager as necessarily guilty (six in Husson-Tenenbaum's conversation; one sequence in Coudriou-Balthazard), Pelata progressively evolves from two to three different interpretations and envisages Rochette's innocence (two sequences). 157 B: No, but I just don't get it. 158 C: You're one of the biggest names in the company. 159 B: I don't get it. I don't get it. I don't get it. 164 B: What, no! I don't know! 165 C: Michel… 166 B You make me laugh, I don't know. 167 C: I can understand, I can understand that you deny it. I can understand. 168 B: But I'm not denying! I don't know! This is crazy! 169 C: You think about it, Michel. 170 B: This is crazy! Coudriou takes Balthazard's denials (l. 164, 1. 166) as evidence of his guilt, as 'normal' behavior for a guilty person (l. 167. "I can understand"). In the same way as Husson in extract 1, Coudriou uses a reflexive conceptualization device as a way to reject Balthazard's denials and closes off any alternative interpretation of what could have happened. Well er I, I, I… it's worse than, than too bad. 274 H: Yes it's worse than too bad, yes. .hh Look, frankly er I never thought you wanted to see me this morning to tell me this. 216 P: Yeah. Not really a pleasure. 217 R: No, I know, I know. Table 1 - 1 Overview of the structure of the interviews Husson (H) -Tenenbaum (T) Coudriou (C) -Balthazard Pelata (P) -Rochette (R) (B) greetings greetings greetings incrimination + suspension incrimination incrimination procedure + "choice" denials (T) / rejection of questions (B) / elusive or joint investigation denials (H) laconic answers (C) incrimination + "choice" suspension procedure + suspension procedure + "choice" "choice" camaraderie episode denials (T) / rejection of questions (B) / elusive or joint investigation denials (H) laconic answers (C) suspension procedure suspension procedure suspension procedure (explaining again) (explaining again) (explaining, reading, signing) denials (T) / rejection of questions and denials (B) / cooperation denials + repeated elusive or laconic answers incrimination (H) (C) "choice" "choice" "choice" suspension procedure suspension procedure (reading, signing) (reading) questions and denials (B) / cooperation elusive or laconic answers + "choice" (C) suspension procedure (signing) questions and denials (B) / elusive or laconic answers + "choice" (C) camaraderie episode camaraderie episode camaraderie episode denials (T) / rejection of "choice" denials + incrimination (H) leaving and security leaving and security leaving and security procedure procedure procedure Table 2 . 2 Main activities of the executives and the tactics used for their accomplishment 'bad cop' tactics 'good cop' tactics Claiming to have information while Recognizing ignorance while providing maintaining ambiguity details Attacking denials (through reflexive Establishing a rapport and gaining suspect's conceptualization device) trust (through the establishment of a give and Evidence ploys take relationship) Pressure through repetition Showing confidence via institutional Looking for more information via references questioning Pressure through dramatization, use of legal Constructing 'good scenarios' (i.e. 
that and institutional references, and repetition downplay the suspects' responsibility) Constructing a bad scenario (that suggests bad consequences for the suspect) Closing off interpretations and enacting Opening up interpretations guiltiness Pressure via dramatization Constructing good and bad scenarios Constructing bad, then less bad scenarios Appealing to personal relationship Attacking denials (via reflexive conceptualization) Passive, minimal emotional demonstrating Active emotion sharing Pressure through coldness Displaying sympathy and understanding for the suspect Such a constructionist view of context and institutions does not imply that all aspects of talk-in-interaction are context-dependent. CA insists that social interactions also embody a common set of socially shared and structured procedures (i.e. the needs to listen, to display understanding, to respond to summons, etc.) that allow mutual understandings and for which participants are held accountable (see[START_REF] Silverman | Interpreting Qualitative Data[END_REF]. information (eight sequences), thereby recognizing the organization's ignorance. Extract 2 follows Pelata's exposure of the reason for the interview: the organization has discovered that Rochette has committed industrial espionage. The case implies M. Balthazard, and Rochette is Balthazard's deputy manager. Extract 2. 19 R: I, yes, I, I'm shaken, I, I (?)… Now I'm sorry I'm not er not quite awake this morning, 20 but I don't understand, but well… 21 P: Look, there's, there's er it's a case of bribery, that is that… a er a foreign company er er 22 puts some money for you on an account and er in exchange for er things we would in 23 fact like to know, right, that we can figure out a little, though we don't er, we don't know 24 in all the details and we would like to know. Rochette expresses surprise (l. 19 "I, I'm shaken") and says he does not understand what Pelata is talking about (l. 19-20), an invitation for further explanation. Contrary to Husson ("Yes you do"), Pelata provides some details about what the organization knows (l. 21 and 22) and, more importantly, what it does not know (l. 23-24). In so doing, he not only asks for Rochette's collaboration ("we would like to know" l.22 and 24), but he also establishes a give and take relationship (Proteau, 2009). Consequently, he looks to establish a rapport and gain the suspect's trust, a police interrogation tactic that aims at 'softening up the suspect' (Leo, 2008: 121-122). This tactic refers to the interrogator's effort to "create the illusion that he and the suspect will be engaged in a simple information exchange that does not implicate the suspect and is designed to assist police to solve the crime" (p. 122). Pelata's numerous hesitations convey the idea that he feels concerned by Rochette's situation and concurs to portray the conversation as a "joint problem-solving exercise". Husson and Pelata's different orientations regarding the 'suspects' and what the organization knows are also present in the following sets of sequences. Showing confidence via institutional reference vs. looking for more information In these sets of sequences, while Coudriou (two sequences) and Husson (two sequences) show confidence in relying on institutional aspects, Pelata repeatedly asks Rochette for more information (six sequences). In order to make their point that the organization knows what happened, Husson and Coudriou refer to institutional and legal aspects. Extract 3 is illustrative of such activities. 
Facing vague accusations and the announcement of his laying off, Balthazard (B) asks for more information, as he feels "completely lost". Coudriou (C) does not answer his question and instead describes the different steps up to 11 January, when a formal interview will take place. He advises Balthazard to "really think about it" by then. Extract 3. 142 C: Er, the company will make the decisions. These decisions, I'm sure you realize that 143 Carlos Ghosn, the chairman, is in the loop, the CEO is in the loop, Odile is also in the 144 loop. Of course, I'm not acting er on my own… 150 C: I'm telling you, I'm telling you, our people in the law department, all this is perfectly 151 clean legally. So, the consequences er will be, will be our choice, in agreement with the 152 company and the law department and er, actually, this may even go to a criminal court -153 we can file a complaint, we can take you to court. In this extract, Coudriou underlines the seriousness of the case via multiple institutional and legal references. He first mentions that the company will make the decision (l. 142). To make his point clearer, he outlines that the CEO and other high-level executives participate in the decision process (l. 143). These references, and consequently the seriousness of the case, are well understood by an already confused Balthazard (l. 146,148). Coudriou takes this confusion as an opportunity to reaffirm the 'seriousness' of the situation and to show how confident he is by mentioning the law department (l.150) and affirming that "this is perfectly clean legally" (l. 150-151). All of these institutional aspects are summed up when he states that "the consequences will be our choice in agreement with the company and law department" (l. 151-152). Such a repeated use of legal and institutional references has been found in police interrogation studies (see Young, 2010;[START_REF] Haworth | The dynamics of power and resistance in police interview discourse[END_REF]. Young (2010) shows that policemen refer to their colleagues through their occupational title as a way to dramatize the situation and put pressure on the suspect. [START_REF] Haworth | The dynamics of power and resistance in police interview discourse[END_REF] demonstrates that such institutional reference is a device used alternatively by the policemen and the suspect, depending on the topic discussed and the relative expertise of the participants on the subject. 344 R: For me in any case I'll go to court, that's for sure (PP clearing throat). I for sure want 345 to be cleared of all this (?). From l. 332 to l. 343, Pelata describes three different scenarios ("three options, see?"). In the first one, Rochette has been "taken into" the scheme and is innocent (l. 332-334), but Pelata does not seem to take this option at face value (l. 334 "I don't really know what for. Right"). In the second and third scenarios, Rochette is guilty (l. 334-335 "Either it's, either you, either you lie to me") and so Rochette would have to resign or, if he doesn't, they will go to the criminal court and he will be fired (l. 341-343). In exposing the scenarios, Pelata hesitates significantly (numerous 'er') and seeks to refer to their personal relationship: Instead of mentioning Rochette's guilt, Pelata refers to his lying (or not lying) to him. As in extract 2, Pelata seems to want to appeal to their rapport and mutual trust. 
If Pelata wants Rochette to confess, or at least to resign, his tactics are not successful, as Rochette affirms that "in any case [he]'ll go to court, that's for sure" (l. 344). 218 P: As you can imagine… 219 R: I know and I think that it's… It pisses me off because you must also have been 220 disappointed when it happened… Pelata expresses his own emotion about shocking Rochette in such a way (l. 216) and insists that Rochette should be aware of this point (l. 218), thus demonstrating sympathy and understanding for the suspect [START_REF] Leo | Police Interrogation and American Justice[END_REF]). Pelata's discomfort is in turn acknowledged by Rochette ("I know", l. 217 and l. 219). The two men therefore agree on the strong emotional loading of the situation and on the mutual understanding of their feelings. Then Rochette makes an even more intimate remark that involves the mutual respect between them, and the high price he takes in this respect (l. 219-220). A striking feature of Rochette's utterance is the symmetry between the feelings he attributes to Pelata ("disappointed", l. 220) and his own feelings ("it pisses me off", l. 219), while the words he uses are at such different language levels. The findings are summarized in Table 2, which shows that the three executives used a wide variety of police interrogation sub-tactics. Some relate to 'bad cop' interrogation tactics, while others can be labeled as 'good cop' tactics. 'Bad cop' tactics refer to a set of maneuvers used to convey negative and unsupportive emotions (Rafaeli and Sutton, 1991: 758), including accusations, attacks on denials, evidence ploys, pressure, repetition, escalation, and repeated confrontation (Leo, 2008: 148;Yoong, 2010: 695). These devices aim to weaken the suspect's resistance so that he starts confessing. 'Good cop' tactics refer to strategies used to convey positive feelings, including displaying sympathy and understanding for the suspect, suggesting more benign motivations for the suspect's crime, establishing a rapport, and gaining mutual trust [START_REF] Leo | Police Interrogation and American Justice[END_REF][START_REF] Rafaeli | Emotional Contrast Strategies as Means of Social Influence: Lessons from Criminal Interrogators and Bill Collectors[END_REF]. All in all, the two interviews exhibit a 'bad cop' dominating pattern, while the third one is mostly on the "good cop" side. At first view there is little use of the contrasting strategy ('good cop/bad cop') that is supposed to produce high emotional contrasts for the target individual, and therefore strongly drive him towards compliance [START_REF] Rafaeli | Emotional Contrast Strategies as Means of Social Influence: Lessons from Criminal Interrogators and Bill Collectors[END_REF]. However, the use of different scenarios (bad vs. less bad, bad vs. good) may be interpreted as variation around this 'good/bad cop' strategy.
65,202
[ "2437" ]
[ "57129", "59542" ]
01490702
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490702/file/978-3-642-39256-6_10_Chapter.pdf
Hiroaki Kikuchi Jun Sakuma Bloom Filter Bootstrap: Privacy-Preserving Estimation of the Size of an Intersection This paper proposes a new privacy-preserving scheme for estimating the size of the intersection of two given secret subsets. Given the inner product of two Bloom filters (BFs) of the given sets, the proposed scheme applies Bayesian estimation under the assumption of a beta distribution as the a priori distribution of the size to be estimated. The BF keeps the communication complexity low and the Bayesian estimation improves the estimation accuracy. A possible application of the proposed protocol is to epidemiological datasets regarding two attributes, Helicobacter pylori infection and stomach cancer. Assuming that information related to Helicobacter pylori infection and stomach cancer is collected separately, the protocol demonstrates that a χ2-test can be performed without disclosing the contents of the two confidential databases. Introduction With the rapid development of database systems and online services, large amounts of information are being collected and accumulated from various data sources independently and simultaneously. Privacy-preserving data mining (PPDM) has been attracting significant attention as a technology that could enable us to perform data analysis over multiple databases containing sensitive information without violating subjects' privacy. In this paper, we investigate the problem of set intersection cardinality. Given two private sets, the goal of this problem is to evaluate the cardinality of the intersection without disclosing the sets to each other. Set intersection cardinality has been extensively studied as a building block of PPDM, including association rule mining [START_REF] Vaidya | Privacy preserving association rule mining in vertically partitioned data[END_REF], model and attribute selection [START_REF] Sakuma | Privacy-preserving evaluation of generalization error and its application to model and attribute selection[END_REF], and other aspects [START_REF] Clifton | Tools for privacy preserving distributed data mining[END_REF]. Our major application of this problem is epidemiological analysis, including privacy-preserving cohort studies. We wish to perform cohort studies over multiple independently collected medical databases, which are not allowed to disclose identifying information about patients. Consider two databases developed independently by two organizations. One organization collects individual medical information, including patient ID, patient name, patient address, presence or absence of disease 1, disease 2, and so on. The other organization collects individual genome information from research participants, including participant ID, participant name, participant address, presence or absence of genome type 1, genome type 2, and so on. The objective of a cohort study may be to investigate the association between the outbreak of a specific disease and genomes. For this analysis, the analyst makes use of four-cell contingency tables; each cell counts the number of patients who have (do not have) a specific disease and have (do not have) a specific genome type. If both databases are private, the set intersection cardinality may be used for evaluating the count of each cell without sharing database content. In this study, we consider the following four requirements for practical situations. Requirement 1. The time and communication complexity should be linear with respect to the number of records n.
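As a minimal sketch of this use case, assuming toy record identifiers, an arbitrary cohort size N, and plain Python sets standing in for the secure protocol described later, the four cells of the contingency table and the resulting χ2 statistic can be computed as follows.

def chi_squared_2x2(n_a: int, n_b: int, n_ab: int, n_total: int) -> float:
    """Chi-squared statistic (df = 1) for the four-cell table, where
    n_a = |S_A| (e.g. patients with the disease), n_b = |S_B| (e.g. patients
    with the genome type), n_ab = |S_A ∩ S_B|, n_total = cohort size."""
    a = n_ab                 # disease and genome type
    b = n_a - n_ab           # disease only
    c = n_b - n_ab           # genome type only
    d = n_total - a - b - c  # neither
    num = n_total * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# Toy stand-ins for the two confidential databases (assumed values).
S_A = {"p01", "p02", "p03", "p04"}         # e.g. stomach cancer patients
S_B = {"p02", "p03", "p05", "p06", "p07"}  # e.g. patients with genome type 1
N = 20                                     # assumed total cohort size

stat = chi_squared_2x2(len(S_A), len(S_B), len(S_A & S_B), N)
print(f"chi-squared = {stat:.3f} (compare with 3.841: 5% significance, df = 1)")

In the privacy-preserving setting, len(S_A & S_B) would be replaced by the output of the set intersection cardinality protocol, so the cell counts are obtained without either database being disclosed.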
This is because statistical analysis, including cohort studies, usually treats databases with a large number of records. Requirement 2. The time and communication complexity should be independent of the size of the ID space. In the use case described above, both organizations independently collect information from individuals. Thus, unique IDs are not given to records. Instead, the protocol must generate a unique ID for each record with the combination of individual attributes, such as the name and address. Because the space required for the combination of such user attributes is often much larger than the number of individuals, this requirement is important. Requirement 3. The protocol should be designed considering the asymmetry of computational capabilities of organizations. Assume that a research institute that holds genome information provides epidemiological analysis services upon request to hospitals that hold medical information. In such a case, it is expected that the computational capabilities of the hospitals are poor. Therefore, a reasonable solution can be the outsourcing of computation; the research institute offers servers with high computational power and the hospital outsources most of the computation required for the analysis to the research institute. This example indicates that the protocol of set intersection cardinality should be designed considering the asymmetry of computational capabilities. Requirement 4. The outputs of the protocol may be random shares. This requirement implicitly suggests that the set intersection cardinality may be used as a part of a larger-scale protocol. If the outputs of the protocol are random shares, these can be seamlessly used for inputs to other privacypreserving protocols. In this paper, we propose a set intersection cardinality protocol that satisfies these requirements. Related Work Let S A and S B be private inputs of the set intersection cardinality. Let n A and n B be the cardinalities of S A and S B , respectively. Agrawal et al. [START_REF] Rakesh Agrawal | Information sharing across private databases[END_REF] presented a set intersection cardinality protocol using commutative encryption under DDH (Decisional Diffie-Hellman) assumption. The time complexity of this protocol is O(n A + n B ); this is linear in the size of the databases and is independent of the size of the ID space. However, this protocol assumes that the two parties have nearly the same computation power. Furthermore, the protocol cannot output random shares. De Cristofaro and Tsudik [START_REF] De | Practical private set intersection protocols with linear complexity[END_REF] introduced an extension of [START_REF] Rakesh Agrawal | Information sharing across private databases[END_REF]. It also requires O(n) computation by both parties. Freedman et al. [START_REF] Michael | Efficient private matching and set intersection[END_REF] proposed a set intersection protocol using oblivious polynomial evaluation. This protocol can be converted to the set intersection cardinality with a slight modification, and achieves O(n B +log log n A ) time/communication complexity. Furthermore, the time complexity is independent of the ID space size and random shares can be output. This protocol also assumes that both parties have equal computational power. All the above protocols guarantee exact outputs. Kantarcioglu et al. 
[START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF] approach the set intersection cardinality differently. Their protocol maps the input set onto a binary vector using a Bloom filter (BF) [START_REF] Broder | Network applications of bloom filters: A survey[END_REF], and the set intersection cardinality is statistically estimated from the scalar product of the two binary vectors. With this approach, the results become approximations, although the computation cost is expected to be greatly reduced. The dimensionality of the vector used in this protocol is equal to the ID space size; this does not meet Requirement 2. In [START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF], a technique to shorten large IDs using hash functions was used with their protocol. As shown later by our theoretical analysis, given an error rate ϵ, the optimal range of hash functions for n elements is O(n^2). This indicates that such naive ID generation can be too inefficient for practical use. Camenisch and Zaverucha [START_REF] Camenisch | Private intersection of certified sets[END_REF] have introduced the certified set intersection cardinality problem. This protocol considers asymmetry in the security assumptions of the parties, but does not consider asymmetry in their computational capability. Ravikumar et al. used TF-IDF measures to estimate the scalar product in [START_REF] Ravikumar | A secure protocol for computing string distance metrics[END_REF]. As for epidemiological studies, Lu et al. studied the secure construction of contingency tables in [START_REF] Lu | Secure construction of contingency tables from distributed data[END_REF]. Thus, to our knowledge, no set intersection cardinality protocol satisfies the four requirements above, which should be met for practical privacy-preserving data analysis, especially for outsourcing models. Our Contribution In this manuscript, we present a protocol that satisfies the four requirements. Considering the first and second requirements, the sets are independently mapped onto BFs, and then the set intersection cardinality is statistically estimated from the scalar product of the two binary vectors representing the BFs. As discussed later, the size of the BF in [START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF] must be O(n^2) to control the false positive rate; this does not meet Requirement 2. Our protocol therefore uses a number of BFs of size O(n). The set intersection cardinality is obtained by iteratively applying Bayesian estimation to the scalar products of the BFs. In the proposed protocol, the scalar product protocol is used as a building block. Modular exponentiation is performed only by one party, and this fits well with the outsourcing model (Requirement 3). In addition, the outputs can naturally be made random shares (Requirement 4). We demonstrate our protocol with an epidemiological dataset regarding two attributes, Helicobacter pylori infection and stomach cancer. Assuming that information related to Helicobacter pylori infection and stomach cancer is collected separately, we demonstrate that a χ2-test can be performed without disclosing the contents of the two databases. Preliminary Bloom Filter A BF is a simple space-efficient data structure for representing a set to support membership queries [START_REF] Broder | Network applications of bloom filters: A survey[END_REF].
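A minimal plaintext sketch of this pipeline is given below, assuming salted SHA-256 digests as the k hash functions and arbitrarily chosen parameters m and k; it maps two sets onto BF bit vectors and computes their scalar product, the quantity from which the intersection cardinality is estimated. The secure computation of this product and the Bayesian estimation step belong to the protocol proper and are not shown here.

import hashlib

def bloom_filter(elements, m: int, k: int):
    """Return the m-bit BF of a set as a 0/1 list (0-based positions), using
    k salted SHA-256 digests as stand-ins for the hash functions H_1..H_k."""
    bits = [0] * m
    for a in elements:
        for j in range(k):
            digest = hashlib.sha256(f"{j}|{a}".encode()).hexdigest()
            bits[int(digest, 16) % m] = 1
    return bits

def inner_product(b1, b2) -> int:
    """b(S_1) . b(S_2) = |B(S_1) ∩ B(S_2)|, the number of positions set in both."""
    return sum(u * v for u, v in zip(b1, b2))

# Example with illustrative parameters (m and k chosen arbitrarily here).
S_A = {f"record{i}" for i in range(0, 60)}
S_B = {f"record{i}" for i in range(40, 100)}   # true intersection size is 20
m, k = 512, 3
y = inner_product(bloom_filter(S_A, m, k), bloom_filter(S_B, m, k))
print(f"matching bits Y = {y}; true |S_A ∩ S_B| = {len(S_A & S_B)}")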
Recently, BFs have been used not only for database applications but also for network problems including detecting malicious addresses, packet routing, and the measurement of traffic statistics. A BF for representing a set S = {a 1 , . . . , a n } of n elements is an array of m bits, initially all set to 0. The BF uses k independent hash functions H 1 , . . . , H k such that H i : {0, 1} * → {1, . . . , m}. The hash functions map each element in the map to a random number uniformly chosen from {1, . . . , m}. Let B(S) be a set i = { 1 if i ∈ B(S), 0 if i ̸ ∈ B(S) , for i = 1, . . . , m. For example, the hash functions that map an element a as H 1 (a) = 2, H 2 (a) = 7 characterize a BF with m = 8, B(a) = {2, 7}. Alternatively, b(a) = (0, 1, 0, 0, 0, 0, 1, 0). We can use either the set or vector representation of BF, depending on the cryptographic building blocks used. Note the following relationship between the set and vector representations, b(S 1 ) • b(S 2 ) = |B(S 1 ) ∩ B(S 2 )|. To test if a is an element of set S, we can verify that ∀i = 1, . . . , k H i (a) ∈ B(S), (1) which holds if a ∈ S. However, it also holds, with a small probability, even if a ̸ ∈ S. That is, BFs suffer from false positives. According to [START_REF] Broder | Network applications of bloom filters: A survey[END_REF], after all the elements of S are hashed into the BF, the probability that element i does not belong to B(S), i.e., that the i-th bit of b(S) is 0, is p = ( 1 -1 m ) kn ≈ e -kn/m . We therefore have a probability of false positives given by p ′ = ( 1 -(1 -1 m ) kn ) k ≈ ( 1 -e -kn/m ) k . If k is sufficiently small for given m and n, Equation (1) is likely to hold only for the element of S. Conversely, with too large a value for k, the BF is mostly occupied by 1 values. In [START_REF] Broder | Network applications of bloom filters: A survey[END_REF][START_REF] Fan | Summary cache: a scalable wide-area web cache sharing protocol[END_REF], the optimal BF was found for k * = ln 2 • (m/n), which minimized the false-positive probability. Cryptographic Primitives Secure Scalar Product. The scalar product of two vectors is performed securely by using a public-key encryption algorithm in Algorithm 1. Algorithm 1 Secure Scalar Product Input: Alice has an n-dimensional vector x = (x1, . . . , xn). Bob has an n-dimensional vector y = (y1, . . . , yn). Output: Alice has sA and Bob has sB such that sA + sB = x • y. 1. Alice generates a homomorphic public-key pair and sends the public key to Bob. Security Model. We assume that the parties are honest-but-curious, which is known as semi-honest model, with parties that own private datasets following protocols properly but trying to learn additional information about the datasets from received messages. We also assume the Decisional Diffie-Hellman hypothesis (DDH), that is, a distribution of (g a , g b , g ab ) is indistinguishable from a distribution of (g a , g b , g c ), where a, b, c are uniformly chosen from Z q . 3 Difficulties in ID-less Datasets Problem Definition We are considering the problem of a two-party protocol that can evaluate the size of the intersection of two sets without revealing the sets themselves. Let A and B be parties owing subsets S A and S B , respectively. For an agreed threshold t, they each wish to know if X = |S A∩B | = |S A ∩ S B | ≥ t (2) is true, without revealing S A or S B to the other party. Here, X is a random variable describing the size of the intersection S A∩B . 
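The Bloom filter construction described above can be sketched in a few lines. This is only a plaintext illustration (the class name, the salted SHA-256 hashing used to simulate the k hash functions, and the example sets are our own choices, not part of the protocol); it shows the set/vector duality and the identity b(S_A) · b(S_B) = |B(S_A) ∩ B(S_B)| that the secure protocol later evaluates under encryption.

```python
import hashlib
import math

class BloomFilter:
    """Minimal Bloom filter giving both the set view B(S) and the vector view b(S)."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [0] * m                      # vector representation b(S)

    def _positions(self, item):
        # H_1..H_k simulated by salting one SHA-256 digest with the index i
        return [int(hashlib.sha256(f"{i}|{item}".encode()).hexdigest(), 16) % self.m
                for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def contains(self, item):
        # membership test of Equation (1); may return false positives
        return all(self.bits[pos] == 1 for pos in self._positions(item))

def matching_bits(bf_a, bf_b):
    # b(S_A) . b(S_B) = |B(S_A) ∩ B(S_B)|, the quantity the secure protocol evaluates
    return sum(x & y for x, y in zip(bf_a.bits, bf_b.bits))

if __name__ == "__main__":
    n, m = 100, 400
    k = max(1, round(math.log(2) * m / n))       # k* = ln 2 * (m / n)
    S_A = {f"user{i}" for i in range(n)}
    S_B = {f"user{i}" for i in range(60, 60 + n)}    # true overlap: 40 elements
    bf_A, bf_B = BloomFilter(m, k), BloomFilter(m, k)
    for a in S_A:
        bf_A.add(a)
    for b in S_B:
        bf_B.add(b)
    print("matching bits y =", matching_bits(bf_A, bf_B))
    print("approx. false positive rate:", (1 - math.exp(-k * n / m)) ** k)
```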
Note that we are not interested in learning the intersection itself, but only in evaluating its size, because the size alone is already useful in many privacy-preserving applications. For example, an epidemiological study might test whether the difference between two subsets is statistically significant. The difference between X = |S_{A∩B}| and t may even be confidential in some applications. Naïve ID Generation Consider a dataset of n elements with multiple attributes, such as name, sex, age and address, but with no unique identity assigned. Instead, the elements are uniquely specified by attributes, e.g., name and birthday. Let A be a set of attributes A = {a_1, ..., a_n}. The simplest way to generate a pseudo identity is to use a hash function h : {0,1}* -> {1, ..., ℓ}. Using this hash function, we assign h(a_i) to the i-th element. For efficiency reasons, we assume the range is sufficiently large that we can neglect the occurrence of a collision h(a_i) = h(a_j) for some i ≠ j. Letting h_A be the set of all pseudo identities, defined as h_A = {h(a_i) | a_i ∈ A}, we can detect any collision of identities by testing whether |h_A| = n. If the size ℓ of the ID space increases, collisions can be avoided, but the computational cost increases accordingly with ℓ. Clearly ℓ ≥ n, but finding the optimal size is not trivial. To resolve the tradeoff between accuracy and performance, let us assume an optimal ℓ that is just large enough to uniquely determine the given set of n elements. This problem is equivalent to the well-known "birthday paradox": among a set of n randomly chosen people, what is the probability that some pair of them share the same birthday? When identities (birthdays) are chosen with a uniform probability of 1/ℓ, the probability that all n identities are unique is given by ∏_{j=1}^{n-1} (1 - j/ℓ) ≈ ∏_{j=1}^{n-1} e^{-j/ℓ} = e^{-n(n-1)/2ℓ} ≈ e^{-n^2/2ℓ}. Therefore, given the probability ϵ with which the n hash values should all be distinct, we have n^2/(2ℓ) = ln ϵ^{-1}, from which the solution of our problem follows: the optimal range of hash functions for n elements is ℓ = n^2/(2 ln ϵ^{-1}), for which the n elements have distinct identities with probability ϵ. Kantarcioglu's Scheme In [START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF], Kantarcioglu, Nix and Vaidya proposed the following cryptographic protocol using BFs in an approximate algorithm for the threshold scalar (dot) product. Let Y be a random variable representing the number of matching bits in the two BFs of S_A and S_B, that is, Y = |B(S_A) ∩ B(S_B)|. There is a positive correlation between X, the true size of the intersection S_{A∩B}, and Y, which enables us to predict X from Y; and Y can be obtained from the BFs in a secure way. Based on the properties of BFs [START_REF] Broder | Network applications of bloom filters: A survey[END_REF], Equation (2) is equivalent to Z_A + Z_B + Z_{AB} ≥ Z_A Z_B (1/m)(1 - 1/m)^{-kt}, (3) where Z_A, Z_B and Z_{AB} are counts derived from the two BFs (the exact definitions are given in [START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF]); the comparison itself is carried out securely over additively randomized shares of these quantities, testing a condition of the form (Z_A + u_1) + (Z_B + u_2) ≥ (v_1 + v_2). According to their experimental results [START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF], their approximation algorithm using BFs with m = 3,000, k = 2, and n = 20,000 ran in 4 minutes, whereas an exact version required 27 minutes.
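Before turning to the limitations of this scheme, the hash-range bound derived in the Naïve ID Generation discussion above can be checked with a short sketch; the values of n, ϵ and the number of trials below are hypothetical choices, and ϵ is the target probability that all n pseudo identities are distinct, as in the derivation.

```python
import math
import random

def optimal_id_range(n, eps):
    """Hash range l = n^2 / (2 ln eps^-1): n random pseudo identities are all
    distinct with probability about eps (the bound derived above)."""
    return math.ceil(n * n / (2 * math.log(1.0 / eps)))

def empirical_uniqueness(n, l, trials=1000):
    ok = 0
    for _ in range(trials):
        ids = [random.randrange(l) for _ in range(n)]
        ok += (len(set(ids)) == n)
    return ok / trials

if __name__ == "__main__":
    n, eps = 2000, 0.9                 # hypothetical dataset size and target probability
    l = optimal_id_range(n, eps)       # grows as O(n^2)
    print("n =", n, " eps =", eps, " -> l =", l)
    print("empirical Pr[all IDs distinct] ~", empirical_uniqueness(n, l))
```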
Difficulties in ID-less Datasets In [START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF], Kantarcioglu et al. claim that, as long as the BF size m is much smaller than the dimensionality of the plain ID-space vectors, their method is much faster than a typical implementation of a secure scalar (dot) product protocol. Their experimental results also show that the accuracy of the approximation increases as m increases. We will show that these properties do not hold in our target, the ID-less dataset model, where the two datasets have no consistent identities and hence the n elements are specified by some unique attribute(s). 1. (Accuracy) In their scheme, the size of the intersection is approximated from the expected probability of common bits in the BFs. The accuracy is expected to improve as m increases. However, this does not hold for large m, because the vectors become too sparse. To keep the vectors suitably dense, the number of hash functions k must be increased, which is not trivial. In [START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF], only the experimental behavior for a few parameter settings is shown, with no guarantee on accuracy. 2. (Performance) The size of the BF, m, grows up to n^2 in ID-less datasets. As discussed in Section 3.2, the range of the hash function should be as large as n^2 in order to minimize the probability of failing to uniquely identify elements. This is too large for computing the intersection, since schemes for private set intersection running with O(n) complexity are known, e.g., [START_REF] Rakesh Agrawal | Information sharing across private databases[END_REF], [START_REF] De | Practical private set intersection protocols with linear complexity[END_REF]. 3. (Overhead) Their scheme requires secure multiplication in addition to the scalar product; this is not necessary in private set intersection. In a later section, we present our scheme, which overcomes the above limitations. Table 1 gives a summary of the comparison between the scheme in [START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF] and the proposed scheme. Proposed Scheme Probability Distribution of Matching Bits in BFs Suppose that, given S_{A∩B} = S_A ∩ S_B, the random variable X for the cardinality of S_{A∩B}, and an instance x = X, we wish to estimate the number of matching 1 bits in the two BFs, i.e., y = |B(S_A) ∩ B(S_B)|. The quantity y is equal to the number of 1 values in the conjunction of the two BF vectors. This subsection presents the mathematical properties of BFs, which will be used to estimate X in the subsequent subsection. An element a in S_A ∪ S_B belongs to either S_{A∩B} or S_A ∪ S_B - S_{A∩B}. The former case always ensures that a ∈ B(S_A) ∩ B(S_B). Therefore, the probability that a certain bit in the conjunction of the BFs is 0 after the k random bits of each of the x common elements are set to 1 is q_X = (1 - 1/m)^{kx}. In the latter case, an element in S_A ∪ S_B - S_{A∩B} does not always produce a 1 in both BFs; it can do so only through a false positive. That is, an element a in S_A can have the same hash value H_i(a) = H_j(b) as some element b ≠ a in S_B. The probability that a certain bit is 0 in the BF contribution of S_A - S_{A∩B} is q_A = (1 - 1/m)^{k(n_A - x)}. Similarly, the probability that a certain bit is 0 in the BF contribution of S_B - S_{A∩B} is q_B = (1 - 1/m)^{k(n_B - x)}.
Therefore, the probability that a certain bit is set to 1 in both BFs by elements of S_A ∪ S_B - S_{A∩B} is given by the product of the complements of the two events, namely (1 - q_A)(1 - q_B) = 1 - q_A - q_B + q_A q_B. Because a bit of the conjunction of the BFs is 1 either through an element of S_{A∩B} or through elements of S_A ∪ S_B - S_{A∩B}, the probability θ of a bit being 1 is the disjunction of the two events, namely θ = 1 - q_X (1 - (1 - q_A)(1 - q_B)) = 1 - (1 - 1/m)^{k n_A} - (1 - 1/m)^{k n_B} + (1 - 1/m)^{k(n_A + n_B - x)}. (4) Consequently, the conditional probability of Y = |B(S_A) ∩ B(S_B)| being y, given x = |S_A ∩ S_B|, is given by the binomial distribution B(m, θ) of m independent binary events with success probability θ. That is, Pr(Y = y | X = x) = C(m, y) θ^y (1 - θ)^{m-y}. (5) Bayesian Estimation of X Given the known parameter values and the likelihood Pr(Y | X) of Equation (5), we wish to identify the posterior distribution Pr(X | Y) using Bayes' rule. One possible solution is an approximation based on the likelihood value from a single observation, as described by Kantarcioglu et al. [START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF]. Their scheme suffers from a complexity of O(m); that is, a secure scalar product requires m ciphertexts, and m is greater than n. Moreover, the accuracy achieved is inadequate. Instead, we use recursive Bayesian estimation over several small BFs. That is more efficient, because each individual BF used to perform the secure scalar product between two BFs is smaller. Moreover, the iteration over multiple BFs improves the accuracy of the estimation. Given the properties of the beta distribution, the iteration process can be performed with lightweight overhead. As the conjugate prior distribution of Equation (5), we assume a beta distribution Be(α, β), which gives Pr(θ) = θ^{α-1}(1 - θ)^{β-1} / ∫_0^1 θ^{α-1}(1 - θ)^{β-1} dθ. The initial prior distribution is Be(1, 1), which yields the uniform distribution Pr(θ) = 1. Using Bayes' theorem, we obtain the posterior probability of θ given y as Pr(θ | y) = Pr(θ)Pr(y | θ) / ∫ Pr(θ)Pr(y | θ) dθ ∝ Pr(θ)Pr(y | θ) ∝ θ^{α-1+y}(1 - θ)^{β-1+m-y}, which is again a beta distribution Be(α′, β′) with new parameters α′ = α + y and β′ = β + m - y. Helicobacter pylori infection is considered to be an event that occurs to each individual independently. Modeling such a situation with the binomial distribution is therefore reasonable; the beta distribution, the natural conjugate prior of the binomial distribution, is used as the prior distribution in our protocol mainly for its mathematical convenience. The initial prior was set to the non-informative uniform distribution in the experiments. Nonetheless, it is difficult to exclude subjectivity from the setting of prior distributions, and the obtained experimental results need to be examined carefully. The mean of the beta distribution is E[θ] = α/(α + β). We can therefore estimate θ, when the BFs of the two sets have y matching bits, as θ̂ = α′/(α′ + β′) = (1 + y)/(2 + m). After estimating θ, the size of the intersection is obtained via the inverse mapping of Equation (4), as x = n_A + n_B - (1/k) log_{1-1/m}( θ̂ - 1 + (1 - 1/m)^{k n_B} + (1 - 1/m)^{k n_A} ). (6) The inverse mapping can be evaluated locally in the final stage, without encryption, so it does not matter that Equation (6) may appear complicated to evaluate.
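A minimal sketch of Equations (4) to (6) follows: the forward model θ(x), the mean of the Beta posterior from a single observation y, and the local inversion giving an estimate of x. It is plaintext only (no encryption), the function names are our own, and the parameter values are the illustrative ones used in the accuracy experiments (n_A = n_B = 100, m = 400, k = 3, x = 40).

```python
import math

def theta_from_x(x, n_a, n_b, m, k):
    """Equation (4): probability that a given bit is 1 in both Bloom filters."""
    r = 1.0 - 1.0 / m
    return 1.0 - r ** (k * n_a) - r ** (k * n_b) + r ** (k * (n_a + n_b - x))

def posterior_mean_theta(y, m, alpha=1.0, beta=1.0):
    """Mean of the Beta posterior after one observation of y matching bits out of m."""
    return (alpha + y) / (alpha + beta + m)

def x_from_theta(theta, n_a, n_b, m, k):
    """Equation (6): invert the forward model; evaluated locally, no encryption."""
    r = 1.0 - 1.0 / m
    inner = theta - 1.0 + r ** (k * n_a) + r ** (k * n_b)   # must stay positive
    return n_a + n_b - math.log(inner, r) / k

if __name__ == "__main__":
    n_a = n_b = 100; m = 400; k = 3; true_x = 40
    theta = theta_from_x(true_x, n_a, n_b, m, k)
    y = round(theta * m)               # an "average" draw of Y ~ B(m, theta)
    est = x_from_theta(posterior_mean_theta(y, m), n_a, n_b, m, k)
    print("y =", y, " estimated x =", round(est, 2), " true x =", true_x)
```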
"Bootstrap" of BFs To improve the accuracy, there are two approaches. (1) Enlarge the size of BF, m, and the estimate θ, 5(2) Estimate θ from multiple observations of Y 1 , Y 2 , . . . , Y s . Using a BF with more bits m could decrease the false positives in the membership test with the cost increasing as m. It is of interest that the value of m does not play a significant role in estimating of the intersection size, as we had expected. We will now show the mathematical properties that explain this observation. )), we illustrate the change of variance with respect to m in Fig. 1. Since the variance determines the standard deviation, which provides a confidence interval for the estimation, we can predict the accuracy via the reduction in variance. Fig. 1 shows that the variance of θ decreases slightly as m increases. However, the reduction in variance is not significant, given the increased cost of the required ciphertexts. For example, a BF with m = 100 requires 10 times more ciphertexts than that for an element in S with n = |S| = 10. (2) Variance from "Bootstrap" s Small BFs. Let y 1 , y 2 , . . . , y s be the sequence of matching bits in s independent BFs for S A and S B . Recursive Bayesian estimation based on the sequence gives the posterior probability P r(θ|y 1 , . . . , y s ) for the beta distribution Be(α ′ , β ′ ) defined by α ′ = α + s ∑ i=1 y i , β ′ = β - s ∑ i=1 y i + sm. The estimation of θ is provided from the mean of the beta distribution, namely θ = α + ∑ s i=1 y i α + β + sm (7) Fig. 2 illustrates the reduction in the variance of θ. It implies that the bootstrapping reduces the confidence interval for the estimation of θ significantly with increasing s. Proposed Scheme We give the procedure for estimating the size of the intersection without revealing each set in Algorithm 2. At Step 1, both parties A and B compute BFs for their n-element sets S A and S B with parameters, size of BF m and the number of hash function k such that k = (m/n) ln 2. For tradeoff between efficiency and accuracy, k = 1 and m = n/ ln 2 can be used. Since this process can be performed locally and the hash function performs very efficiently, we consider the overhead is negligible. Both parties participate the secure scalar product protocol (Algorithm 1), which is the most significant part in computation. The scalar product of two BFs, y, gives the number of common 1's bit in BFs, which can be divided into two integers, making the SFE possible to approximate θ in Algorithm 2 Bloom Filter Bootstrap BF B(S A , S B ) Input: SA, SB of n elements, m (size of the BF), k (number of hash functions). Output: x (estimate of the size of the intersection of sA and SB). 3. Estimate θ using Equation ( 7). 4. Identify x using Equation [START_REF] Fan | Summary cache: a scalable wide-area web cache sharing protocol[END_REF]. Equation ( 7) without revealing any y i . Step 4 is performed in public (or locally) after θ reaches at convergence. Instead of extend the size of BF, we perform the secure scalar product protocols multiple times to get the sequence of y 1 , y 2 , . . . , y s , which will be used to predict the θ in Bayesian estimation. Both parties iterate the test until the expected accuracy is given. The confidence interval is given by the standard deviation of estimated value. Security The following theorem shows the security of Algorithm 2. Theorem 1. Suppose A and B behaves in the semi-honest model. Let S A and S B be inputs for Bloom Filter Bootstrap. 
Then, after execution of Bloom Filter Bootstrap, A and B learns random shares of y i for i = 1, . . . , s; nothing but y i and what can be inferred from y i is learned by both A and B. Sketch of the proof. Message exchange occurs only in step 2, so the security of step 2 is proved. Since step 2 is multiple invocation of the scalar product protocol, the security is reduced to that of the scalar product protocol. By following the security proof in [START_REF] Goethals | On private scalar product computation for privacy-preserving data mining[END_REF], the security of Bloom Filter Bootstrap is immediately proved. Note that computation in step 4 is performed by A without communication with B, the security is not compromised by execution of these steps. Complexity We examine the complexities of our proposed scheme in terms of computation and communication costs. When these quantities are almost identical, we unify these by simply n. Protocols are compared in Table 2. In comparison with [START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF], we assume the ID-less model, where the size of BF can increase up to n 2 . Table 2 shows that the computational cost for A is linear to ms, while the cost for B is 0 (no modular exponentiation is required). Hence, it is preferable for outsourcing solution to our Requirement 3, where hospitals do not have powerful computational resources and become B in our protocol. The protocols are classified into three groups. The first group is the scheme based on Oblivious Polynomial Evaluation. Scheme FNP [START_REF] Michael | Efficient private matching and set intersection[END_REF] is designed to reveal not only the size of intersection but also the elements in the intersection. We show the performance for comparison purpose. The second class, consisting of AES [START_REF] Rakesh Agrawal | Information sharing across private databases[END_REF] and CT [START_REF] De | Practical private set intersection protocols with linear complexity[END_REF], is classified as Oblivious Pseudo-Random Functions (OPRF). AES depends on the commutative one-way function, while CT uses the RSA (Fig. 3 in [START_REF] De | Practical private set intersection protocols with linear complexity[END_REF]) and the blind RSA (Fig. 4) encryptions. The privacy of scheme (Fig. 3 in [START_REF] De | Practical private set intersection protocols with linear complexity[END_REF]) is proved as the view of honestbut-curious party is indistinguishable under the One-More Gap Diffie-Hellman assumption in the random oracle model. The last class is based on BF and Secure Scalar Product schemes. KNV [START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF] uses a single BF with large size, while ours iterates s independent BFs with small size. The sizes are shown in Table. Accuracy Evaluation Simulation with DBLP dataset We evaluate the accuracy of the proposed scheme using a public dataset of author names, DBLP 6 . Four pairs of datasets S A and S B with n A = n B = 100 were chosen from DBLP with the intersection sizes x = 20, 40, 60, 80. Table 3 shows the experimental results for the estimation of x, for x = 20, 40, 60, and 80, where we used a BF with of size m = 400, a number of hash functions k = 3, and iterated the estimation s times. The results show that our scheme estimates the intersection within an error of ±1. 
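The accuracy behaviour reported above can be reproduced, without the cryptographic layer, by simulating the bootstrap in plaintext: build s independent small BFs, take the s scalar products y_1, ..., y_s, apply Equation (7) and then Equation (6). The sketch below does that on synthetic sets; the salted hashing scheme and the element names are our own choices, not the DBLP preprocessing used in the paper.

```python
import hashlib
import math
import statistics

def bf_vector(S, m, k, salt):
    """One of the s independent Bloom filters; `salt` selects the hash family."""
    bits = [0] * m
    for item in S:
        for i in range(k):
            h = int(hashlib.sha256(f"{salt}|{i}|{item}".encode()).hexdigest(), 16)
            bits[h % m] = 1
    return bits

def bootstrap_estimate(S_A, S_B, m, k, s, alpha=1.0, beta=1.0):
    """Plaintext stand-in for Algorithm 2: s scalar products y_1..y_s,
    Equation (7) for theta, Equation (6) for x."""
    ys = []
    for t in range(s):
        a, b = bf_vector(S_A, m, k, t), bf_vector(S_B, m, k, t)
        ys.append(sum(x & y for x, y in zip(a, b)))          # y_t = b(S_A).b(S_B)
    theta = (alpha + sum(ys)) / (alpha + beta + s * m)       # Equation (7)
    r = 1.0 - 1.0 / m
    n_a, n_b = len(S_A), len(S_B)
    inner = theta - 1.0 + r ** (k * n_a) + r ** (k * n_b)
    return n_a + n_b - math.log(inner, r) / k, ys            # Equation (6)

if __name__ == "__main__":
    n = 100
    S_A = {f"author{i}" for i in range(n)}
    S_B = {f"author{i}" for i in range(60, 60 + n)}          # true intersection: 40
    m = math.ceil(n / math.log(2))                           # m = n / ln 2, k = 1
    x_hat, ys = bootstrap_estimate(S_A, S_B, m, k=1, s=20)
    print("mean y =", round(statistics.mean(ys), 1), " estimated x =", round(x_hat, 1))
```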
The numbers of matching bits in the BFs, Y , are distributed according to the binominal distribution. Note that all BFs estimate a size of the intersection close to the actual size of 40, but the differences are unstable. Optimal BF design The accuracy of estimation depends on the size of BF, m, and the number of hash function, k, and the iteration of testing, s. In order to clarify the strategy for optimal accuracy, we examine the Mean Absolute Error (MAE) with respect to m and k. Figure 3 shows MAE in terms of m from 40 through 280, where n A = n B = 100, x = 20, k = 1 and s = 20. Figure 4 shows MAE with respect to k = 1, . . . , 6 where m = 200. The MAE decreases as m increases, while the computational/communicational overhead increases accordingly. On the other hand, the increase of k does not reduce MAE. A possible reason for the source of the error might be the restriction of m and k. As we discussed in Section 4.3, the optimal size for the BF is not trivial. Since large m increases the computational cost at secure scalar product, we conclude minimize k, i.e., k = 1 and optimize m = n/ ln 2. The accuracy can be improved by iteration of small BF tests rather than increasing the size of BFs. In fact, Figure 5 demonstrates the reduction of variance of observation of E[Y ], indicated by bar plot, when s = 10. The solid line represents the distribution of Y , which is widely distributed than that of E[Y ]. It is known as Central Limit Theorem [START_REF] Pagano | Principles of biostatistics[END_REF], that as s increases, the amount of sampling variation decreases. Figure 6 shows that the variance of estimated probability θ reduces as the iteration s increases. The experiment shows even small s = 10 gives conversion of probability θ. The selection of optimal s can be made based on the variance of the prediction of θ. As we have showed in Section 4.3, the variance of beta distribution decreases with s, which determines the accuracy of approximation. Finally, we obtain the estimate of intersection size, x, by Equation 6. We illustrate the distribution of θ and the corresponding estimation of x. Performance We implemented the proposed scheme in Java, JDK 1. Privacy-Preserving Risk Analysis of H. pylori Helicobacter pylori, or H. pylori, is a bacterium that is found in the stomachs of two-thirds of the world's population. Epidemiology studies have shown that individuals infected with H. pylori have an increased risk of cancer of the stomach [START_REF] Atherton | The pathogenesis of helicobacter pylori-induced gastro-duodenal diseases[END_REF][START_REF] Kuipers | Pathogenesis of helicobacter pylori infection[END_REF]. Although H. pylori has been classified as a cancer-causing agent, it is not known how H. pylori infection increases the risk of cancer of the stomach. Some researchers have estimated that the risk of cancer the noncardiac region of the stomach is nearly six times higher for H. pylori-infected individuals than for uninfected people [START_REF] Helicobacter | Gastric cancer and helicobacter pylori: a combined analysis of 12 case control studies nested within prospective cohorts[END_REF]. Some cohort studies revealed that the risk of gastric cardiac cancer among H. pylori-infected individuals was about one-third of that among uninfected individuals. The source of uncertainty is that the number of gastric cancers in the cohort study was too small to make a definitive statement. Cancer is a highly confidential matter and people will not reveal that they have it. 
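For reference, the parameter choices argued in the BF-design discussion above (k = 1, m = n/ln 2) and the way the posterior variance shrinks with the number of rounds s can be summarized in a small helper sketch; the average number of matching bits used below is a hypothetical placeholder, not a measured value.

```python
import math

def bf_parameters(n):
    """Parameter choice argued above: k = 1 and m = n / ln 2."""
    return 1, math.ceil(n / math.log(2))

def posterior_std_theta(mean_y, m, s, alpha=1.0, beta=1.0):
    """Standard deviation of the Beta posterior of theta after s rounds of
    size-m BFs, assuming each round reports about mean_y matching bits."""
    a = alpha + s * mean_y
    b = beta + s * (m - mean_y)
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

if __name__ == "__main__":
    k, m = bf_parameters(2000)
    print("k =", k, " m =", m)
    for s in (1, 5, 10, 20):                 # the confidence interval shrinks with s
        print(f"s = {s:2d}  std(theta) ~ {posterior_std_theta(700, m, s):.5f}")  # 700 is a placeholder
```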
Our proposed methodology addresses the problem of epidemiological studies that must preserve the privacy of the patients. The cryptographic protocol allows several small cohorts to be aggregated and analyzed for more certain evidence of an increase or reduction of risk. Given two datasets, one of patients with cancer and one of individuals infected with H. pylori, the proposed protocol determines the size of the intersection of the two sets without revealing any entries in the datasets. With a secure hash function, the proposed scheme identifies a patient from their personal attributes. Contingency Tables The epidemiological study aims to determine whether an H. pylori-infected individual has an increased risk of gastric cancer. The evidence is expressed by the relative risk (RR), the probability of disease among exposed individuals divided by the probability of disease among the unexposed. Suppose that a sample of N individuals is arranged in the form of the 2 × 2 contingency table in Table 5; the relative risk of H. pylori is estimated by RR = Pr(cancer | H. pylori) / Pr(cancer | unexposed) = [a/(a + b)] / [c/(c + d)] ≈ ad/(bc), where we assume the disease is rare in both groups, a ≪ b and c ≪ d, and hence a + b ≈ b and c + d ≈ d. To examine whether H. pylori infection increases the risk of cancer, i.e., RR > 1, we test the null and alternative hypotheses. H_0: The proportion of patients with cancer among individuals infected with H. pylori is equal to the proportion of patients with cancer among those uninfected. H_A: The proportions of patients with cancer are not identical in the two populations. The chi-square test compares the observed frequencies in each category of the contingency table, O, with the expected frequencies given that the null hypothesis is true, E. To perform the test, we calculate the sum χ^2 = Σ_{i=1}^{k} (O_i - E_i)^2 / E_i = (N - 1)(|ad - bc| - N/2)^2 / [(a + c)(b + d)(a + b)(c + d)], where k is the number of cells in the table. The probability distribution of this sum is approximated by a χ^2 distribution with (2 - 1)(2 - 1) = 1 degree of freedom. Alternatively, by taking its square root, we may treat χ as approximately normally distributed with mean 0 and standard deviation 1 under the null hypothesis. Datasets In our experiment, we have two datasets collected by independent agencies. The Japanese Ministry of Health and Welfare (MHW) carried out a medical examination in 2001 in a small village in Chiba Prefecture; the resulting dataset records which individuals are infected with H. pylori, but their cancer status is not known. Hypothesis Testing Our proposed algorithm estimates the size of the intersection of the two datasets, thus allowing the estimation of the relative risk associated with H. pylori. Official statistics show that the population of Chiba Prefecture in 2003 was 6,056,462 (3,029,486 male). The dataset in Table 6 provides n_A = 7,401 records of patients with cancer, and the dataset in Table 7 provides n_B = 2,629 individuals infected with H. pylori. We apply a BF of size m = 14,000 with k = 1 and s = 10 to the two datasets and obtain the scalar product y = b(CAN) · b(PYL), with mean µ(y) = 1023.9 over the rounds. Based on Bayes' theorem, we estimate the probability θ from Equation (7); the estimate, the resulting intersection size, and the relative-risk and chi-square computations are reported together with Tables 6-8, and the chi-square statistic is far too high to retain the null hypothesis. Therefore, we reject the null hypothesis at the 0.05 level of confidence. In an experiment on an Intel Xeon E5620 (2.40 GHz, 16 GB memory), the processing of the BFs takes 17,030 seconds (about 4.7 hours), while the naive ID generation requires a scalar product of dimension n^2 = 4.9 × 10^7, which is estimated to take 223 hours. Conclusions We have proposed an efficient algorithm for the estimation of the size of the intersection of two private sets.
The proposed scheme gives a Bayesian estimation of the intersection size based on the mathematical properties of the number of matching bits in two BFs. A well-known secure scalar product protocol enables us to evaluate the number of matching bits in a privacy-preserving way and to test hypothesizes that are useful in epidemiological studies. We have shown the properties of the accuracy of estimation for various parameters and the experimental results for the DBLP public dataset. One of our main results is that the bootstrap approach, iterating small BFs several times, is better than using a single large BF. The extension of scalar product protocol to multiple parties can be done by replacing the Step 3 as that Bob forwards n ciphertexts computed with his secret vector as E(x 1 ) y1 , . . . , E(x n ) yn to Carol who then perform the original Step 3 as c = E(x 1 ) y1z1 • • • E(x n ) ynzn /E(s B ). The extension of Bloom filter to multiple parities is not trivial and one of our future work. representing a BF defined by B(S) = ∪ a∈S B(a) such that B(a) = {H 1 (a), . . . , H k (a)}. Now let b be an m-dimensional vector, (b 1 , . . . , b m ), which is an alternative representation of the BF, defined by b 2 . 2 Alice sends to Bob n ciphertexts E(x1), . . . , E(xn), encrypted with her public key. 3. Bob chooses sB at random, computes c = E(x1) y 1 • • • E(xn) yn /E(sB) and sends c to Alice. 4. Alice uses her secret key to decrypt c to obtain sA = D(c) = x1y1 + • • • + xnyn -sB Fig. 1 . 14 0Fig. 2 .( 1 ) 11421 Fig. 1. (1) Distribution of the variance of θ, V ar[θ], with respect to m, the size of the BF, for n = 10, k = 3, and y = 14 1 . 1 A computes BF b(SA) for SA and B computes BF b(SB). 2. A and B jointly perform Algorithm 1 to obtain yi = b(SA) • b(SB) for i = 1, . . . , s. 6 DBLP, A Citation Network Dataset, V1, (http://arnetminer.org/citation). Fig. 3 .Fig. 4 .Fig. 5 .Fig. 6 . 3456 Fig. 3. Mean Absolute Error (MAE) with respect to the size of BF, m 6, with BigInteger class. As additive homomorphic public key algorithm, we use Paillier cryptosystem with 1024 bit key. With platform of commodity PC, Intel Core (TM) i7-663DQM, 2 GHz, 4 GB, running Windows 7 (64 bit), the encryption runs in t e = 15.7 [s], the decryption takes t d = 21.5 [s] in average. The secure scalar product of 64-bit vectors (n A = n B = 64, x = 5) is performed in 5.28 [s], i.e., 82.5 [ms/element]. With this platform, the processing time to deal with the problem in [11], n = 2000, k = 1, and m = n/ ln 2 = 2885, is 4 minute and 125 second. Table 1 . 1 Comparison between[START_REF] Kantarcioglu | An efficient approximate protocol for privacy-preserving association rule mining[END_REF] and ours item [11] Proposed approximation Equation (3) Equation (7), (6) priori distribution - Beta distribution BF size (m) large (n 2 ) small (n/ ln 2) accuracy no guarantee improved with Bayesian estimation from s tests Table 2 . 2 Complexity Comparison of protocols FNP[7] AES[1] CT[5] KNV[11] Proposed primitives OPE commutative enc. (blind) RSA SSP w. BF SSP w. BF comp. at A nA log log nB nA + nB 2nA + 1 m ms BF size - - - n 2 ≥ m > kn m = n/ ln 2 comp. at B nB + nA log log nB 2nA + nB nA + nB + 1 0 0 complexity O(nA log log nB) O(n) O(n) O(n 2 ) O(n) comm. cost nA + nB nA + nB 2nA + nB m + 1 ms + 1 OPE (Oblivious Polynomial Evaluation), SSP (Secure Scalar Product) Table 3 . 
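As a self-contained sketch of Algorithm 1 with an additively homomorphic scheme, the following toy version uses the Paillier cryptosystem mentioned above, but with demonstration-sized primes rather than a 1024-bit key. It is not the Java implementation benchmarked in the paper, and the reconstruction at the end is only there to check that the shares add up to the scalar product.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic). The primes are far too
# small for real use; the paper's implementation used a 1024-bit key.
p, q = 104729, 104723
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
mu = pow(lam, -1, n)                       # modular inverse; requires Python 3.8+

def enc(msg):                              # uses only the public key (n, g)
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, msg, n2) * pow(r, n, n2)) % n2

def dec(c):                                # only Alice, who holds (lam, mu)
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

def secure_scalar_product(x_vec, y_vec):
    """Algorithm 1: Alice holds x, Bob holds y; outputs random shares of x.y."""
    cts = [enc(x) for x in x_vec]                  # step 2: Alice sends E(x_i)
    s_B = random.randrange(n)                      # step 3: Bob picks his share,
    c = pow(enc(s_B), -1, n2)                      #         divides by E(s_B)
    for ct, y in zip(cts, y_vec):
        c = (c * pow(ct, y, n2)) % n2              #         and multiplies E(x_i)^y_i
    s_A = dec(c)                                   # step 4: Alice decrypts her share
    return s_A, s_B

if __name__ == "__main__":
    x = [0, 1, 1, 0, 1, 0, 1, 1]                   # Alice's BF vector
    y = [0, 1, 0, 0, 1, 1, 1, 0]                   # Bob's BF vector
    s_A, s_B = secure_scalar_product(x, y)
    print("(s_A + s_B) mod n =", (s_A + s_B) % n,
          " plain x.y =", sum(a * b for a, b in zip(x, y)))
```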
3 Results of estimating X for various intersection sizes, x, for the dataset (nA = nB = 100, m = 400, k = 3) x 20 40 60 80 E[Y ] 125.24 141.45 160.98 184.11 σ(Y ) 6.78 5.92 5.34 5.15 E(θ) 0.31 0.35 0.40 0.46 x 19.523 38.869 58.969 79.411 Table 4 . 4 Results of estimating X for various BF sizes, m for the dataset (nA = nB = 100, x = 40) m 200 400 600 800 k 1 3 4 6 E[Y ] 46.62 141.45 189.64 283.66 σ(Y ) 3.146 5.923 6.436 7.488 E(θ) 0.24 0.35 0.32 0.35 x 39.490 38.869 39.604 39.227 Table 5 . 5 2 × 2 Contingency table for H. pylori and stomach cancer H. pylori Cancer No cancer total Yes a b a + b No c d c + d total a + c b + d N Table 6 . 6 Chiba Cancer Center dataset CAN year male female total 2003 2,330 1,134 3,464 2004 2,610 1,242 3,852 2005 2,559 1,205 3,763 total 7,500 3,581 11,081 Table 7 . 7 MHW dataset of H. pylori infections PYL year male female total 2001 2,671 5,206 7,877 The Chiba Cancer Center has performed an epidemiology study of causes and effects of cancer conditions since 1975 in Chiba Prefecture, Japan. Table6shows the statistics for three years from 2003, used in this study. The dataset contains private attributes, including name, gender, birthday, mailing address, ZIP code, and medical treatments, e.g., patient ID, days of operations, day of death, type of cancers, and degree of tumor differentiation. 2. Individuals infected with H. pylori PYL. 1. Patients with gastric cancer CAN. Table 7 contains n B = 2629 individuals infected with H. Table 8 . 8 Experimental results for CAN and PYL H. pylori Cancer No cancer total Yes 80 2,549 2,629 No 7,321 2,990,050 2,997,371 total 7,401 2,992,599 3,000,000 7 From Equation (6), x = 81.1702, while the exact size of the intersection is 80. The number of individuals who are infected with H. pylori but do not have is therefore n a -x = 2549. The other values can be obtained similarly. Finally, the numbers of individuals are summarized in Table8.An estimate of the relative risk of having cancer among H. pylori-infected individuals is therefore , 000 -1(80 • 222, 964 -2, 549 • 7, 321 -3, 000, 000/2) √ 7, 401 • 2, 992, 599 • 2, 629 • 230, 285 = 28.71 > N (.05/2) = 1.960, ) as θ = α + α + β + sm ∑ s y i = 0.073142. RR = 80 • 222, 964 2, 549 • 7, 321 = 12.81. The chi-square test of the null hypothesis yields χ = √ 3, 000 ⋆ This work was done when author was in Tokai University. In Section 2.2 (Computation and Communicational cost). In Section 3, they assume that the vector of 20000 elements, whose density was 10 %, that is, the vector contains 2000 1's (= n), and it performs 20000-dimensional vector's scalar product for exact match and m = 3000 BF for their scheme. In Section 3.1, Figure 1(b). We do not consider the number of hash functions k because there are some constraints between m and k, such as kn < m and k = (ln 2)m/n for minimizing false positives. The number is referred from statistics in Chiba prefecture. There are potential individuals infected by H. Pylori who was not counted in the table.
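The relative-risk and chi-square figures reported with Table 8 can be checked directly from the four cell counts. The sketch below reproduces them up to rounding; it is a plaintext verification of the published numbers, not part of the protocol.

```python
import math

def relative_risk(a, b, c, d):
    """Exact RR and the rare-outcome approximation ad/(bc) quoted in the text."""
    exact = (a / (a + b)) / (c / (c + d))
    return exact, (a * d) / (b * c)

def chi_statistic(a, b, c, d):
    """Continuity-corrected chi statistic for the 2x2 table, as used above."""
    N = a + b + c + d
    num = math.sqrt(N - 1) * (abs(a * d - b * c) - N / 2)
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

if __name__ == "__main__":
    a, b = 80, 2_549            # H. pylori infected: with / without cancer (Table 8)
    c, d = 7_321, 2_990_050     # uninfected: with / without cancer (Table 8)
    exact, approx = relative_risk(a, b, c, d)
    print("RR (exact) ~", round(exact, 2), "  RR (ad/bc) ~", round(approx, 2))
    print("chi ~", round(chi_statistic(a, b, c, d), 2), " (> 1.96 rejects H0 at the 5% level)")
```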
44,874
[ "1004130", "1004131" ]
[ "489019", "1061308" ]
01490703
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490703/file/978-3-642-39256-6_11_Chapter.pdf
Bechara Al Bouna email: [email protected]@cs.purdue.edu Chris Clifton Qutaibah Malluhi email: [email protected] Using Safety Constraint for Transactional Dataset Anonymization Keywords: In this paper, we address privacy breaches in transactional data where individuals have multiple tuples in a dataset. We provide a safe grouping principle to ensure that correlated values are grouped together in unique partitions that enforce l-diversity at the level of individuals. We conduct a set of experiments to evaluate privacy breach and the anonymization cost of safe grouping. Introduction Data outsourcing is on the rise, and the emergence of cloud computing provides additional benefits to outsourcing. Privacy regulations pose a challenge to outsourcing, as the very flexibility provided makes it difficult to prevent against trans-border data flows, protection and separation of clients, and other constraints that may be required to outsource data. An alternative is encrypting the data [START_REF] Hacıgümüş | Executing SQL over encrypted data in the database-service-provider model[END_REF]; while this protects privacy, it also prevents beneficial use of the data such as value-added services by the cloud provider (e.g., address normalization), or aggregate analysis of the data (and use/sale of the analysis) that can reduce the cost of outsourcing. Generalization-based data anonymization [START_REF] Samarati | Protecting respondents' identities in microdata release[END_REF][START_REF] Sweeney | k-anonymity: a model for protecting privacy[END_REF][START_REF] Machanavajjhala | l-diversity: Privacy beyond k-anonymity[END_REF][START_REF] Li | t-closeness: Privacy beyond k-anonymity and l-diversity[END_REF] provides a way to protect privacy while allowing aggregate analysis, but doesn't make sense in an outsourcing environment where the client wants to be able to retrieve the original data values. An alternative is to use bucketization, as in the anatomy [START_REF] Xiao | Anatomy: Simple and effective privacy preservation[END_REF], fragmentation [START_REF] Ciriani | Combining fragmentation and encryption to protect privacy in data storage[END_REF], or slicing [START_REF] Li | Slicing: A new approach for privacy preserving data publishing[END_REF] models. Such a database system has been developed [START_REF] Erhan | Query processing in private data outsourcing using anonymization[END_REF][START_REF] Nergiz | Updating outsourced anatomized private databases[END_REF]. The key idea is that identifying and sensitive information are stored in separate tables, with the join key encrypted. To support analysis at the server, data items are grouped into buckets; the mapping between buckets (but not between items in the bucket) is exposed to the server. An example is given in Figure 1 where attribute DrugName is Fig. 1: Table Prescription anonymized sensitive: Figure 1b is an anatomized version of table prescription with attributes separated into P rescription QIT and P rescription SN T . The bucket size and grouping of tuples into buckets ensures privacy constraints (such as k-anonymity [START_REF] Samarati | Protecting respondents' identities in microdata release[END_REF][START_REF] Sweeney | k-anonymity: a model for protecting privacy[END_REF] or l-diversity [START_REF] Machanavajjhala | l-diversity: Privacy beyond k-anonymity[END_REF]) are satisfied. Complications arise when extending this approach to transactional datasets. 
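As a concrete illustration of the anatomy-style split shown in Figure 1, the following toy sketch separates a handful of hypothetical prescription tuples into a quasi-identifier table and a sensitive table linked only by a group id, and rejects buckets that are not l-diverse. It is a didactic sketch only, not the outsourced-database system cited above; all tuple values are invented.

```python
from collections import Counter

def anatomize(rows, group_size, l):
    """Toy anatomy-style split: rows -> (QIT, SNT) linked only by a group id.
    Each bucket holds group_size tuples and must be l-diverse on the drug name."""
    qit, snt = [], []
    for start in range(0, len(rows), group_size):
        gid = start // group_size
        bucket = rows[start:start + group_size]
        drugs = Counter(r["drug"] for r in bucket)
        if max(drugs.values()) * l > len(bucket):
            raise ValueError(f"bucket {gid} is not {l}-diverse")
        for r in bucket:
            qit.append({"patient": r["patient"], "gid": gid})
            snt.append({"gid": gid, "drug": r["drug"]})
    return qit, snt

if __name__ == "__main__":
    rows = [  # hypothetical prescription tuples
        {"patient": "P1", "drug": "Retinoic Acid"},
        {"patient": "P2", "drug": "Azelaic Acid"},
        {"patient": "P3", "drug": "Clindamycin"},
        {"patient": "P4", "drug": "Retinoic Acid"},
    ]
    qit, snt = anatomize(rows, group_size=2, l=2)
    print(qit)
    print(snt)
```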
Even with generalization-based approaches, it has been shown that transactions introduce new challenges. While approaches as (X, Y )privacy [START_REF] Wang | Anonymizing sequential releases[END_REF] and k m -anonymity [START_REF] Terrovitis | Privacy-preserving anonymization of set-valued data[END_REF] include restrictions on the correlation of quasi-identifying values and can be used to model transactional data [START_REF] Burghardt | Anonymous search histories featuring personalized advertisement -balancing privacy with economic interests[END_REF], they still face limitations when applied to bucketization approaches. We give examples of this based on Figure 1b. The anonymized table satisfies the (X,Y)-privacy and (2,2)-diversity privacy constraints [START_REF] Machanavajjhala | l-diversity: Privacy beyond k-anonymity. ACMTransactions on Knowledge Discovery from Data[END_REF]; given the 2-diverse table, an adversary should at best be able to link a patient to a drug with probability 1/2. Inter-group dependencies occur when an adversary knows certain facts about drug use, e.g., Retinoic Acid is a maintenance drug taken over a long period of time. As P1 is the only individual who appears in all groups where Retinoic Acid appears, it is likely that P1 is taking this drug. Note that this fact can either be background knowledge, or learned from the data. Intra-group dependencies occur where the number of transactions for a single individual within a group results in an inherent violation of l-diversity (this would most obviously occur if all transactions in a group were for the same individual.) By considering this separately for transactional data, rather than simply looking at all tuples for an individual as a single "data instance", we gain some flexibility. We present a method to counter such privacy violations while preserving data utility. Our contributions can be summarized as follows: -An in-depth study of privacy violation due to correlation of individuals' related tuples in bucketization techniques. -A safe grouping technique to eliminate privacy violation. Our safe grouping technique ensures that quasi-identifier and sensitive partitions respect l-diversity privacy constraint. The approach is based on knowing (or learning) the correlations, and forming buckets with a common antecedent to the correlation. This protects against inter-group dependencies. Identifiers are then suppressed where necessary (in an outsourcing model, this corresponds to encrypting just the portion of the tuple in the identifier table) to ensure the privacy constraint is met (including protection against intra-group correlation.) In the next section, we present our adversary model. Section 3 gives further background on prior work and its limitations in dealing with this problem. In Section 4, we define the basic notations and key concepts used in the rest of the paper. A definition of correlation-based privacy violation in transactional datasets is given in Section 5. In Section 6, we present our a safe grouping constraint that forms the basis of our anonymization method. Section 7 gives a safe grouping algorithm. A set of experiments to evaluate both the practical efficiency and the loss of data utility (suppression/encryption) is given in Section 8. We conclude with a discussion of next steps to move this work toward practical use. Adversary Model In our adversary model, we assume that the adversary has knowledge of the transactional nature of the dataset. 
We also assume that he/she has outside information on correlations between sensitive data items that leads to a high probability that certain sets of items would belong to the same individual. This is illustrated in the introduction (example 1) where the fact that the drug Retinoic Acid is known to be taken for a long period of time makes it possible to link it to patient P1. We do not care about the source of such background information; it may be public knowledge, or it may be learned from the anatomized data itself. (We view learning such knowledge from the data as beneficial aggregate analysis of the data.) In [START_REF] Wang | Anonymizing sequential releases[END_REF], the authors consider that any transaction known by the adversary could reveal additional information that might be used to uncover a sensitive linking between a quasi-identifier and a sensitive value. They define (X,Y)-privacy to ensure on one hand that each value of X is linked to at least k different values of Y , and on the other hand, no value of Y can be inferred from a value of X with confidence higher than a designated threshold. A similar approach proposed in [START_REF] Terrovitis | Privacy-preserving anonymization of set-valued data[END_REF] in which the authors extend k-anonymity with k m -anonymity requiring that each combination of at most m items appears in at least k transactions, where m is the maximum number of items per transaction that could be known by the adversary. (Also related is the problem of trail re-identification [START_REF] Malin | Trail Re-identification and Unlinkability in Distributed Databases[END_REF].) As demonstrated in the example in Figure 1b, these techniques are limited when it comes to bucketization, as more subtle in intra and intra group correlations may lead to a breach of l-diversity. In [START_REF] Li | Slicing: A new approach for privacy preserving data publishing[END_REF] the authors proposed a slicing technique to provide effective protection against membership disclosure, but it still remains vulnerable to identity disclosure. An adversary with knowledge of the transactional nature of the dataset may still be able to associate an individual identifier to correlated sensitive values. The authors in [START_REF] Jiang | Multiple sensitive association protection in the outsourced database[END_REF] discuss privacy violations in the anatomy privacy model [START_REF] Xiao | Anatomy: Simple and effective privacy preservation[END_REF] due to functional dependencies (FDs). In their approach, they propose to create QI-groups on the basis of a FD tree while grouping tuples based on the sensitive attribute to form l-diverse groups. Unfortunately, dealing with FDs' is not sufficient, as less strict dependencies can still pose a threat. In [START_REF] Raymond | Can the utility of anonymized data be used for privacy breaches?[END_REF], the authors consider correlation as foreground knowledge that can be mined from anonymized data. They use the possible worlds model to compute the probability of associating an individual to a sensitive value based on a global distribution. In [START_REF] Kifer | Attacks on privacy and definetti's theorem[END_REF], a Naïve Bayesian model is used to compute association probability. They used exchangeability [START_REF] Aldous | Exchangeability and related topics[END_REF] and DeFinetti's theorem [START_REF] Ressel | De Finetti-type theorems: an analytical approach[END_REF] to model and compute patterns from the anonymized data. 
Both papers address correlation in its general form where the authors show how an adversary can violate l-diversity privacy constraint through an estimation of such correlations in the anonymized data. As it is a separate matter, we consider that correlations due to transactions where multiple tuples are related to the same individual ensure that particular sensitive values can be linked to a particular individual when correlated in the same group (i.e., bucket). We go beyond this, ad-dressing any correlation (either learned from the data or otherwise known) that explicitly violates the targeted privacy goal. Formalization Given a table T with a set of attributes {A 1 , ..., A b }, t[A i ] refers to the value of attribute A i for the tuple t. Let U be the set of individuals of a specific population, ∀u ∈ U we denote by T u the set of tuples in T related to the individual u. Attributes of a table T that we deal with in this paper are divided as follows; -Identifier (A id ) represents an attribute that can be used (possibly with external information available to the adversary) to identify the individual associated with a tuple in a table. We distinguish two types of identifiers; sensitive and nonsensitive. For instance, the attribute Social Security Number is a sensitive identifier ; as such it must be suppressed (encrypted). Nonsensitive identifiers are viewed as public information, and include both direct identifiers such as Patient ID in Figure 4, and quasi-identifiers that in combination may identify an individual (such as <Gender, Birthdate, Zipcode>, which uniquely identifies many individuals.) -Sensitive attribute (A s ) contains sensitive information that must not be linkable to an individual, but does not inherently identify an individual. In our example (Table 1a), the attribute DrugName is considered sensitive and should not be linked to an individual. Definition 1 (Equivalence class / QI-group). [START_REF] Samarati | Protecting respondents' identities in microdata release[END_REF] A quasi-identifier group ( QI-group) is defined as a subset of tuples of T = m j=1 QI j such that, for any 1 ≤ j 1 = j 2 ≤ m, QI j1 ∩ QI j2 = φ. Note that for our purposes, this can include direct identifiers as well as quasi-identifiers; we stick with the QI-group terminology for compatibility with the broader anonymization literature. To express correlation in transactional data we use the following notation cd id : A id 1 , ..., A id n A s where A id i is a nonsensitive identifying attribute and A s is a sensitive attribute, and cd id is a correlation dependency between attributes A id 1 , ..., A id n on one hand, and A s on the other. Next, we present a formal description of the privacy violation that can be caused due to such correlations. Definition 2 (l-diversity). [13] a table T is said to be l-diverse if each of the QI-groups QI j (1 ≤ j ≤ m) is l-diverse; i.e., QI j satisfies the condition c j (v s )/|QI j | ≤ 1/l where -m is the total number of QI-groups in T -v s is the most frequent value of A s -c j (v s ) is the number of tuples of v s in QI j -|QI j | Correlation-Based Privacy Violation Inter-group correlation occurs when transactions for a single individual are placed in multiple QI-groups (as with P1, P3, and P4 in Figure 1a). The problem arises when the values in different groups are related (as would happen with association rules); this leads to an implication that the values belong to the same individual. Formally: Definition 1 (Inter QI-group Correlation). 
Given a correlation dependency of the form cd id : A id A s over T * , we say that a privacy violation might exists if there are correlated values in a subset QI j (1 ≤ j ≤ m) of T * such that v id ∈ π A id QI 1 ∩ ... ∩ π A id QI m and |π A s QI 1 ∩ ... ∩ π A s QI m | < l where v id ∈ A id is an individual identifying value, l is the privacy constant and an adversary knows of that correlation. The example shown in Figure 1, explains how an adversary with prior knowledge of the correlation, in this case that Retinoic Acid must be taken multiple times, is able to associate the drug to the patient Roan (P1) due to their correlation in several QI-groups. (The same would also apply to different drugs that must be taken together.) An intra-group violation can arise if several correlated values are contained in the same QI-group. Here the problem is that this gives a count of tuples that likely belong to the same individual, which may limit it to a particular individual in the group. Figure 2 is an example of Intra QI-group Correlation, formally defined as follows: Fig. 2: Intra QI-group correlation Lemma 1 (Intra QI-group Correlation). Given a QI-group QI j (1 ≤ j ≤ m) in T * that is l-diverse, we say that a privacy violation might occur if individual's related tuples are correlated in QI j such that f (QI j , u) + c j (v s ) > |QI j | where v s is the most frequent A s value in QI j , c j (v s ) is the number of tuples t ∈ QI j with t[A s ] = v s , u is the individual who has the most frequent tuples in QI j , f (QI j , u) is a function that returns the number of u 's related tuples in QI j -|QI j | is the size of QI j (number of tuples contained in QI j ) Proof. Let r be the number of remaining sensitive values in QI j , r = |QI j | -c j (v s ). If f (QI j , u) + c j (v s ) > |QI j |, this means that f (QI j , u) > |QI j | -c j (v s ) and therefore f (QI j , u) > r. That is, there are e tuples related to individual u such that f (QI j , u) = e to be associated to r sensitive values of QI j where e > r. According to the pigeon-hole principle, at least a tuple t of T u will be associated to the sensitive value v s which leads to a privacy violation. It would be nice if this lemma was "if and only if", giving criteria where a privacy violation would NOT occur. Unfortunately, this requires making assumptions about the background knowledge available to an adversary (e.g., if an adversary knows that one individual is taking a certain medication, they may be able to narrow the possibilities for other individuals). This is an assumption made by all k-anonymity based approaches, but it becomes harder to state when dealing with transactional data. Let us go back to Figure 2, an adversary is able to associate both drugs (Retinoic Acid and Azelaic Acid) to patient Roan (P1) due to the correlation of their related tuples in the same QI-group. In the following, we provide an approach that deals with such privacy violations. Safe Grouping for Transactional Data As we have shown in the previous section, bucketization techniques do not cope well with correlation due to transactional data where an individual might be represented by several tuples that could lead to identify his/her sensitive values. In order to guarantee safety, we present in this section our safe grouping safety constraint . Safety Constraint (Safe Grouping). Given a correlation dependency in the form of cd id : A id A s , safe grouping is satisfied iff 1. 
∀u ∈ U , the subset T u of T is contained in one and only one quasi identifier group QI j (1 ≤ j ≤ m) such that QI j respects l-diversity and contains at least k subsets T u 1 , ..., T u k where u 1 , ..., u k are k distinct individuals of the population and, 2. P r(u i 1 |QI j ) = P r(u i 2 |QI j ) ≤ 1/l where u i 1 , u i 2 , i 1 = i 2 are two distinct individuals in QI j with (1 ≤ i ≤ k) and P r(u i |QI j ) is the probability of u i in QI j . Safe grouping ensures that individual tuples are grouped in one and only one QI-group that is at the same time l-diverse, respects a minimum diversity for identity attribute values, and every subset T u in QI j are of equal number of tuples. Figure 3 describes a quasi identifier group (QI 1 ) that respects safe grouping where on one hand, we assume that there are no other QI-groups containing P 1 and P 3 and on the other hand, two tuples from T P 1 are anonymized to guarantee that P r(P 1|QI 1 ) = P r(P 3|QI 1 ) ≤ 1/2. Note that we have suppressed some data in order to meet the constraint; this is in keeping with the model in [START_REF] Erhan | Query processing in private data outsourcing using anonymization[END_REF] where some data is left encrypted, and only "safe" data is revealed. Lemma 1. Let QI j for (1 ≤ j ≤ m) be a QI-group that includes k individuals, if QI j satisfies safe grouping then k is at least equal to l Proof. Consider an individual u in QI j , according to the safe grouping, P r(u|QI j ) ≤ 1/l. Or P r(u|QI j ) is equal to f (QI j , u)/|QI j | where f (QI j , u) = |QI j |/k represents the number of individual's u related tuples in j . Hence, 1/k ≤ 1/l and k ≥ l Corollary 1 (Correctness). Given an anonymized table T * that respects safe grouping, and a correlation dependency of the form cd id : A id A s , an adversary cannot correctly associate an individual u to a sensitive value v s with a probability P r(A s = v s , u|T * ) greater than 1/l. Proof. Safe grouping guarantees that individual's u related tuples T u are contained in one and only one QI-group (QI j ), which means that possible association of u to v s is limited to the set of correlated values that are contained in QI j . Hence, P r(A s = v s , u|T * ) can be written as P r(A s = v s , u|QI j ). On the other hand, P r(A s = v s , u|QI j ) = P r(A s =vs,u) k i=1 P r(A s =vs,u i ) where k is the number of individuals in QI j and P r(A s = v s , u i ) is the probability of associating individual u i to a sensitive value v s . Recall that safe grouping guarantees that for a given individual u i , P r(A s = v s , u i ) is at the most equal to 1/l . Summarizing, P r(A s = v s , u|QI j ) is at the most equal to 1/k where k ≥ l according to Lemma 1. We can estimate3 , for example, P r(A s = RetinoicAcid, A id = P 1|T * ) to be 4/5 where it is possible to associate Roan (P1) to Retinoic Acid in 4 of 5 QI-groups as shown in Figure 1b. However, as you can notice from Figure 3, safe grouping guarantees that P r(A s = RetinoicAcid, A id = P 1|T * ) remains limited to the possible association of values in QI 1 and thus bounded by l-diversity. The safe grouping constraint is restrictive, but may be necessary. While we do not have a formal proof that it is optimal, we can find examples where any straightforward relaxation can result in a privacy violation (we do not elaborate due to space constraints.) We note that using safe grouping, we do not intend to replace anatomy. 
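The safety constraint just stated can be checked mechanically on a candidate grouping. The sketch below assumes each QI-group is given as a list of (individual, sensitive value) pairs, a representation we choose for illustration, and verifies the conditions: at least k individuals per group, l-diversity of the group, equal per-individual representation with probability at most 1/l, and each individual's tuples falling in one and only one group.

```python
from collections import Counter, defaultdict

def is_safe_grouping(groups, l, k):
    """Check the safety constraint above. `groups` maps a group id to a list of
    (individual, sensitive_value) tuples."""
    groups_of = defaultdict(set)                      # individual -> group ids
    for gid, tuples in groups.items():
        individuals = Counter(u for u, _ in tuples)
        sensitive = Counter(v for _, v in tuples)
        for u in individuals:
            groups_of[u].add(gid)
        if len(individuals) < k:                      # at least k individuals
            return False
        if max(sensitive.values()) * l > len(tuples): # l-diversity of the group
            return False
        # equal representation, each individual with probability <= 1/l
        if len(set(individuals.values())) != 1 or max(individuals.values()) * l > len(tuples):
            return False
    # each individual's tuples are contained in one and only one QI-group
    return all(len(gids) == 1 for gids in groups_of.values())

if __name__ == "__main__":
    groups = {  # toy grouping in the spirit of Figure 3
        1: [("P1", "Retinoic Acid"), ("P1", "Azelaic Acid"),
            ("P3", "Clindamycin"), ("P3", "Doxycycline")],
        2: [("P2", "Azelaic Acid"), ("P2", "Clindamycin"),
            ("P4", "Retinoic Acid"), ("P4", "Doxycycline")],
    }
    print(is_safe_grouping(groups, l=2, k=2))   # True for this example
```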
In fact, we preserve table decomposition as described in the original anatomy model by separating a table T into two subtables (T QIT , T SN T ) while providing a safe grouping of tuples on the basis of the attributes related by a correlation dependency. Fig. 3: Table Prescription respecting our safety constraint 7 Safe Grouping Algorithm In this section, we provide an algorithm to enforce ensure safe grouping for transactional data. The algorithm guaranties the safe grouping of a table T based on an identity attribute correlation dependency cd id : A id A s (A id ∈ T QIT and A s ∈ T SN T ). The main idea behind the algorithm is to create k buckets based on the attribute (A id ) defined on the left hand side of a correlation dependency in a reasonable time. The safe grouping algorithm takes a table T , a correlation dependency A id A s , a constant l to ensure diversity, and a constant k representing the number of individuals (individuals' related tuples) to be stored in a QI-group. It ensures a safe grouping on the basis of the attribute A id . In Step 2, the algorithm hashes the tuples in T based on their A id values and sorts the resulting buckets. For any individual, all their values will end up in the same bucket. In the group creation process from steps 4-17, the algorithm creates a QI-group with k individuals. If the QI-group respects l-diversity the algorithm adds it to the list of QI-groups and enforces the safety constraint in step 8 by anonymizing tuples in T QIT including values that are frequently correlated in the QI-group. In other terms, it makes sure that individuals' related tuples in the QI-group are of equal number. Algorithm 1 SafeGrouping Require: a table T , cd id : A id A s , l, k, minConf , maxConf and exp Ensure: safe grouping for T 1: TQIT = ∅; TSNT = ∅; gn = 0; i = 0, j = 0; 2: Hash the tuples in T by their A id values (one bucket per A id value) 3: Sort the set of Buckets based on their number of tuples. 4: while there are k groups QI ∈ Buckets do 5: if QI is l-diverse then 6: gn = gn + 1 7: QIgn = QI 8: Enforce safety constraint on QIgn 9: Remove for each random value vs of A s ∈ QIj do 24: insert tuple (j, vs) into TSNT 25: end for 26: end for If l-diversity for the QI-group in question is not met, the algorithm enforces it by anonymizing tuples related to the most frequent sensitive value in the QI-group. After the l-diversity enforcement process, the algorithms verifies whether the group contains k buckets, and if not anonymizes (which could mean generalizing, suppressing, or encrypting the values, depending on the target model.) From steps 19 to 26 the algorithm anatomizes the tables based on the QI-groups created. It stores random sensitive attribute values in the T SN T table. While safe grouping provides safety, its ability to preserve data utility is limited to the number of distinct values of A id attribute. We now present a set of experiments to evaluate the efficiency of our approach, both in terms of computation and more importantly, loss of data utility. We implemented the safe grouping code in Java based on the Anonymization Toolbox [START_REF] Kantarcioglu | Anonymization toolbox[END_REF], and conducted experiments with an Intel XEON 2.4GHz PC with 2GB RAM. Evaluation dataset In keeping with much work on anonymization, we use the Adult Dataset from the UCI Machine Learning Repository [START_REF] Asuncion | UCI machine learning repository[END_REF]. To simulate real identifiers, we made use of a U.S. 
state voter registration list containing the attributes Birthyear, Gender, Firstname, and Lastname. We combined the adult dataset with the voters list such that every individual in the voters list is associated with multiple tuples from the adult dataset, simulating a longitudinal dataset from multiple census years. We have constructed this dataset to have a correlation dependency of the form Firstname, Lastname → Occupation, where Occupation is a sensitive attribute, Firstname and Lastname are identifying attributes, and the remaining attributes are presumed to be quasi-identifiers. We say that an individual is likely to stay in the same occupation across multiple censuses. Note that this is not an exact longitudinal dataset; the number of tuples varies between individuals (simulating a dataset where some individuals move into or out of the census area). The generated dataset is of size 48836 tuples with 21201 distinct individuals. In the next section, we present and discuss results from running our safe grouping algorithm.
Evaluation Results
We defined a set of measurements to evaluate the efficiency of safe grouping. These measurements can be summarized as follows:
- Evaluating the privacy breach in a naive anatomization. We note that the same test could be performed on the slicing technique [START_REF] Li | Slicing: A new approach for privacy preserving data publishing[END_REF], as the authors of that approach do not deal with identity disclosure;
- Determining the anonymization cost, represented by the loss metric, which captures the fraction of tuples that must be (partially or totally) generalized, suppressed, or encrypted in order to satisfy the safe grouping; and
- Comparing the computational cost of our safe grouping algorithm to anatomy [START_REF] Xiao | Anatomy: Simple and effective privacy preservation[END_REF].
Evaluating Privacy
After naive anatomization over the generated dataset, we identified 5 explicit violations due to intra QI-group correlations, where values of A_id are correlated within a QI-group. On the other hand, in order to determine the number of violations due to inter QI-group correlation, we first calculate the possible associations of A_id and A_s values across a naively anatomized table. This is summarized in the following equation for values v_id and v_s, respectively:
G(v_id, v_s) = Σ_{j=1}^{m} f_j(v_id, v_s) / Σ_{j=1}^{m} g_j(v_id)
where f_j(v_id, v_s) = 1 if v_id and v_s are associated in QI_j and 0 otherwise, and g_j(v_id) = 1 if v_id exists in QI_j and 0 otherwise. At this point, a violation occurs for significant A_id values if:
1. G(v_id, v_s) > 1/l. This represents a frequent association between v_id and v_s, where v_id is more likely to be associated with v_s in the QI-groups to which it belongs; and
2. |π_{A_s}(QI_1) ∩ ... ∩ π_{A_s}(QI_m)| < l, where QI_1, ..., QI_m are the QI-groups to which v_id belongs.
After applying the above test to the anatomized dataset, we identified 167 and 360 inter QI-group correlation violations for l = 2 and l = 3, respectively. We note that a much deeper study of violations due to data correlation can be found in [START_REF] Raymond | Can the utility of anonymized data be used for privacy breaches?[END_REF][8][10].
Evaluating Anonymization Cost
We evaluate our proposed anonymization algorithms to determine the loss metric (LM), representing the number of tuples in T and T_QIT that need to be suppressed in order to achieve the safety constraint.
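The inter-group test above is easy to mechanize. The sketch below is illustrative only: QI-groups are assumed to be lists of (v_id, v_s) pairs, and, as in anatomy, a v_id is treated as "associated" with every sensitive value occurring in its group. It flags pairs that satisfy both violation conditions for A_id values appearing in at least alpha groups.

```python
from collections import defaultdict

def inter_group_violations(qi_groups, l, alpha=2):
    """Flag (v_id, v_s) pairs with G(v_id, v_s) > 1/l whose covering QI-groups
    share fewer than l distinct sensitive values; only v_id values occurring in
    at least `alpha` QI-groups are considered significant."""
    groups_of = defaultdict(set)        # v_id -> indices of QI-groups containing it
    sens_of_group = {}                  # group index -> its set of sensitive values
    for j, group in enumerate(qi_groups):
        sens_of_group[j] = {v_s for _, v_s in group}
        for v_id, _ in group:
            groups_of[v_id].add(j)
    violations = []
    for v_id, js in groups_of.items():
        if len(js) < alpha:
            continue                    # not a significant v_id value
        projections = [sens_of_group[j] for j in js]
        common = set.intersection(*projections)
        for v_s in set.union(*projections):
            g = sum(1 for p in projections if v_s in p) / len(js)   # G(v_id, v_s)
            if g > 1.0 / l and len(common) < l:
                violations.append((v_id, v_s, g))
    return violations
```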
Figure 3 shows an anonymized version of table Prescription where the grouping is safe and has a loss metric equal to LM(Prescription) = 2/13. We investigate the anonymization cost for a correlation dependency cd_id: Firstname, Lastname → Occupation using the safe grouping algorithm. We anonymize the dataset with k = 7, 8, 9 and l = 2, 3, 4, 5, 6, 7, for which the dataset satisfies the eligibility condition (see [START_REF] Machanavajjhala | l-diversity: Privacy beyond k-anonymity. ACMTransactions on Knowledge Discovery from Data[END_REF]). At each execution, we compute the LM. The results are shown in Figure 4. From Figure 4, we can see that the LM increases with l, and for (k = 9, l = 7) the computed loss metric LM is high; notice that the number of tuples to anonymize in order to preserve l-diversity reaches 35%. Nonetheless, for small values of l an acceptable value of LM is computed. Anonymizing datasets using safe grouping can thus be seen as a trade-off between cost and privacy: for small values of l, the LM stays below 10%, leading to a relatively small anonymization cost. Another aspect to consider is how to define k w.r.t. l to guarantee a minimum LM. Note that for transactional data, it is possible for k (the number of individuals, not transactions, in a group) to be smaller than l; however, this makes satisfying the privacy criteria difficult, leading to a substantial amount of anonymized data. The experiments show that high data utility can be preserved as long as k is somewhat greater than l.
Evaluating Computation Cost
We now give the processing time to perform safe grouping compared to anatomy. Figure 4d shows the computation time of both safe grouping and anatomy over a non-transactional dataset with different k values. Theoretically, the worst case of safe grouping could be much higher; but in practice, for small values of l, safe grouping has better performance than anatomy. Furthermore, as k increases, the safe grouping computation time decreases due to the reduced I/O access needed to test QI-groups' l-diversity.
Conclusion
In this paper, we proposed a safe grouping method to cope with the defects of bucketization techniques in handling correlated values in a transactional dataset. Our safe grouping algorithm creates partitions with an individual's related tuples stored in one and only one group, eliminating these privacy violations. We showed, using a set of experiments, that there is a trade-off to be made between privacy and utility. This trade-off is quantified based on the number of tuples to be anonymized using the safe grouping algorithm. Finally, we investigated the computation time of safe grouping and showed that, despite the exponential worst-case growth of safe grouping, for a small range of values of l safe grouping outperforms anatomy while providing stronger privacy guarantees.
Fig. 4: Safe grouping evaluation in transactional datasets (4a, 4b and 4c)
(|QI_j| denotes the size, i.e., the number of tuples, of QI_j.)
Definition 3 (Anatomy). Given a table T, we say that T is anatomized if it is separated into a quasi-identifier table (T_QIT) and a sensitive table (T_SNT) as follows:
- T_QIT has a schema (A_1, ..., A_d, GID), where A_i (1 ≤ i ≤ d) is either a non-sensitive identifying or quasi-identifying attribute and GID is the group id of the QI-group.
- T_SNT has a schema (GID, A^s_{d+1}), where A^s_{d+1} is the sensitive attribute in T.
Table 1:
Notations
T: a table containing individuals' related tuples
t_i: a tuple of T
u: an individual described in T
T_u: a set of tuples related to individual u
A: an attribute of T
A_id: an identifying attribute of T
A_s: a sensitive attribute of T
QI_j: a quasi-identifier group
T*: anonymized version of table T
(Note: Pr(A_s = RetinoicAcid, A_id = P1 | T*) as calculated remains an estimation; a much deeper treatment of how to calculate the exact probability of values correlated across QI-groups can be found in [START_REF] Raymond | Can the utility of anonymized data be used for privacy breaches?[END_REF] and [START_REF] Kifer | Attacks on privacy and definetti's theorem[END_REF].)
(Note: Significance is measured in this case based on the support of A_id values and their correlation across QI-groups. For instance, v_id is considered significant if it exists in at least α QI-groups, where α is a predefined constant greater than 2.)
Acknowledgements
This publication was made possible by NPRP grant 09-256-1-046 from the Qatar National Research Fund. The statements made herein are solely the responsibility of the authors.
32,505
[ "1004132", "1004133", "1004134" ]
[ "257464", "147250", "257464" ]
01490706
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490706/file/978-3-642-39256-6_14_Chapter.pdf
Rohit Jain email: [email protected] Sunil Prabhakar Access Control and Query Verification for Untrusted Databases Keywords: Access Control, Cloud Computing, Query Verification, Private Outsourcing With the advent of Cloud Computing, data are increasingly being stored and processed by untrusted third-party servers on the Internet. Since the data owner lacks direct control over the hardware and the software running at the server, there is a need to ensure that the data are not read or modified by unauthorized entities. Even though a simple encryption of the data before transferring it to the server ensures that only authorized entities who have the private key can access the data, it has many drawbacks. Encryption alone does not ensure that the retrieved query results are trustworthy (e.g., retrieved values are the latest values and not stale). A simple encryption can not enforce access control policies where each entity has access rights to only a certain part of the database. In this paper, we provide a solution to enforce access control policies while ensuring the trustworthiness of the data. Our solution ensures that a particular data item is read and modified by only those entities who have been authorized by the data owner to access that data item. It provides privacy against malicious entities that somehow get access to the data stored at the server. Our solutions allow easy change in access control policies under the lazy revocation model under which a user's access to a subset of the data can be revoked so that the user can not read any new values in that subset of the data. Our solution also provides correctness and completeness verification of query results in the presence of access control policies. We implement our solution in a prototype system built on top of Oracle with no modifications to the database internals. We also provide an empirical evaluation of the proposed solutions and establish their feasibility. Introduction Access control mechanisms are an important part of a database system with which the data owner limits a user's access to a subset of the data. In a typical setting, the database server enforces access control policies by rewriting user queries to limit access to the authorized subset. When the data owner wants to revoke or grant a user, access to a certain part of the data, the data owner does that by informing the server. Traditionally, the server is assumed to be trustworthy and the data owner assumes that the access control policies will be faithfully enforced by the server. However, this assumption is not reasonable when the database is hosted at a third-party server, e.g., cloud, as the data owner lacks control over the hardware and software running at the server. Even when the server is trusted, there is a threat from a malicious insider or an intruder. Another important problem that arises when the database systems are hosted at an untrusted server is to verify the trustworthiness of query execution. Much work has been done [START_REF] Jain | Trustworthy data from untrusted databases[END_REF][START_REF] Li | Dynamic authenticated index structures for outsourced databases[END_REF][START_REF] Mykletun | Authentication and integrity in outsourced databases[END_REF][START_REF] Narasimha | Authentication of outsourced databases using signature aggregation and chaining[END_REF] towards verifying correctness and completeness of query results. 
However, most of these solutions do not work in the presence of access control rules, as they leak information that is outside the query range and outside the scope of the user's authorization. In this paper, we provide solutions that ensure that a data item in the database is read and modified only by authorized users, and none other (including the server). The data encrypted by our solution is still queriable. Our solution provides mechanisms to verify the trustworthiness of query results in the presence of access control rules. For this, we extend our previous work [START_REF] Jain | Trustworthy data from untrusted databases[END_REF] on ensuring the trustworthiness of data retrieved from an untrusted database that can be modified by multiple entities. The contributions of this work are: -A novel mechanism to enforce access control rules without trusting the server -Solutions that allow users to verify the correctness and completeness of query results in the presence of access control rules -A demonstration of the feasibility of the solution through a prototype in Oracle, and its evaluation The rest of this paper is organized as follows. Section 5 discusses some related work. Section 2 describes our model and presents some preliminary tools that are necessary for this work. Section 3 presents our solutions. A discussion of the implementation of the solution and an empirical evaluation is presented in Section 4. Finally, Section 6 concludes the paper. Preliminaries In this section, we start by explaining the different entities involved in our model. Then, we explain Merkle Hash Trees and Merkle B+ Tree which we use for building our solutions, and also discuss their use to verify the correctness and completeness of query results. Model There are three main entities involved: Alice, the database owner; Bob, the (untrusted) database server that will host the database; and Carol, the user(s) that will access this data (may include Alice) from the server. Users are authorized by Alice and can independently authenticate themselves with the server. A user can read or write data to the parts of the database she is authorized to. Figure 1 shows the various entities in this model. Alice wants to ensure that the data are accessed by only those entities that were authorized by her. Alice and Carol want to ensure that the query results were indeed correct and complete in presence of access control policies. An acceptable solution should allow Alice to grant or revoke access to a user at any point in time, without much work. The solution will disable Bob from being able to read the encrypted data. However, Alice and Carol should still be able to execute queries and run updates on the encrypted data. Note that our assumptions about Bob are minimal. In most settings, the server is likely to be at least semi-honest -i.e., it will not maliciously compromise data privacy by not following access control rules, or compromize data integrity by maliciously modifying the data or query results. However, due to poor implementation, failures, over commitment of resources, or other reasons, some loss of data or breach of privacy may occur. Given the lack of direct control over the server, Alice should not assume that Bob is infallible. Lazy Revocation Model: As mentioned before, simple encryption can ensure that only authorized users can read or write the data. However, this introduces many problems. One such problem is related to dynamic access control rules. 
In a simple encryption method, when a user's access is revoked from a subset of the data, the data have to be re-encrypted. This can be a very costly process due to network usage and computation for encryption. To alleviate this burden, we consider the Lazy Revocation Model. Under this model, when a user is granted access to a subset of the database, the user can read or write to that subset. If the user's access is revoked from that subset, the data are not re-encrypted immediately. Instead, the new values in that subset are encrypted with a new version of the key so that the evicted user can no longer read the new values in that subset. Since the user had access to the old data before eviction, it can be assumed that the user had cached that data, hence it is not important to re-encrypt old values. We will consider the lazy revocation model for access control policies. Correctness and Completeness We begin by discussing the use of Merkle Hash Trees (MHT) to prove correctness. And then further discuss a variant, the MB-tree, which we use to prove completeness. We will use an MB-tree as a building block for our overall solution. Correctness requires that any data item in the query result are indeed part of the database and is not a fabricated value. An MHT can be used to establish the correctness of query results. An MHT is a binary tree with labeled nodes. We represent the label for node n as Φ(n). For an internal node, n, with children n lef t and n right , Φ(n) is defined as: Φ(n) = h(Φ(n lef t )||Φ(n right )) (1) where || is concatenation and h is a one-way hash function. Table 1 explains the symbols used in this paper. Labels for leaf nodes are computed as the hash of the tuple value represented by that leaf. The root label is called 'Proof'. Initially, an MHT is created on top of the database table. Alice stores only the root hash value (P roof ) to authenticate future query results. To prove the correctness of a tuple, i.e., to verify that a tuple existed in the database, Alice can ask Bob for some extra data (called Verification Object (VO)) from the MHT and recompute the root hash label. If the computed root hash label is the same as that she stored initially, she is convinced about the correctness of the tuple. Completeness requires that all data items that should have been part of the query result are indeed present in the query result. Correctness and completeness combined establish the correntness of read-only queries. MHT can be extended to use B+ trees instead of a binary tree [START_REF] Li | Dynamic authenticated index structures for outsourced databases[END_REF]. MB-trees can be used to verify both correctness and completeness. To prove completeness of a range query, Bob provides extra data with which Alice can verify that tuple values just preceeding, and just following (in sorted order) the query results were indeed outside the query range. Alice can also verify that no data is missing from the query result and the returned values are indeed part of the database. For more details, please refer to [START_REF] Jain | Trustworthy data from untrusted databases[END_REF][START_REF] Li | Dynamic authenticated index structures for outsourced databases[END_REF]. Figure 2 shows a sample MB-tree structure built on the attribute A of Table 2. As an example, consider a query σ 30<A<60 . The result for this query would include t 3 , t 4 , and t 5 . 
To verify the correctness and completeness of the query results, the server sends VO to the user which includes the tuples just preceeding and just following the query ranges (i.e., t 2 and t 6 ). The VO also includes any node labels that are required to compute the root label (i.e., h(t 1 ) and H 7 ). Using VO, the user can generate the P roof . If the computed proof matches with the proof value computed by Alice, the user is assured that the query results were correct and complete. Access Control: In the presence of access control rules, traditionally, the query range is divided into multiple parts to ensure that each sub range is accessible to the user. In that case, each sub range can be verified individually. However, for verification, the server has to reveal the tuples bordering each sub-range. These bordering tuples may not be accessible to the user, leading to information leakage. In the next section, we will discuss our proposed solutions to enforce access control rules while still allowing query verification and privacy from the server or hackers. Fig. 2. An MB-tree on attribute A of Table 2 Updates: MHT or MB-tree work for static databases. When the data can be modified by users without prior knowledge of the data owner, as is the case in our model, MB-tree cannot be used directly. [START_REF] Jain | Trustworthy data from untrusted databases[END_REF] proposes solutions with which authorized users can executed transactions at the server without being vetted by the data owner. These transactions can read or write data. This is done by engaging the server in a protocol that requires the server to declare the database state on which the transaction was executed and the database state that the transaction produced. The user can verify that the transaction read values from the declared consistent state to produce the next consistent state. However, this solution does not enforce access control rules. In the next section, we provide a solution that allows the user run and verify transactions in the presence of access control rules. Access Control As mentioned before, access control rules allow the data owner to restrict a user's access to a certain part of the database. The database owner may also want to hide the data from the server as well, while still allowing the users to read and query the data. The difficulty introduced by using access control rules is two fold. Firstly, verification algorithms have to be modified to assure the user that the partial database table visible to the user is indeed correct and complete. Secondly, the data have to be encrypted so the user sees only the allowed data, and, the data remain private from the server or an intruder. The server should still be able to run queries on this data. In this section, we provide our solutions to these problems. In this paper, we consider fine-grained access control policies. We assume that the access control policies expose a user to a subset of each database table (this is the approach adopted by some commercial systems like Oracle VPD). In particular, we consider the following system for defining access control rules: R = {r i |0 ≤ i ≤ k} is a set of ranges on an attribute that partitions the data into k disjoint subsets. Each user is allowed access (read and write) to a part of the database table defined by a subset of R, i.e., the user, Carol, can access tuples {t i |t i ∈ ∪r i l }, where {r i l } ⊂ R is the set of ranges accessible to Carol. 
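For concreteness, here is a minimal sketch of the label computation of Equation (1) and of the client-side root recomputation. It is not the prototype described later; SHA-256 and the per-level VO layout (one list of child labels per level, with None marking the slot of the label computed so far) are assumptions of the sketch.

```python
import hashlib

def h(data: bytes) -> bytes:
    """One-way hash used for MHT / MB-tree labels (SHA-256 chosen for illustration)."""
    return hashlib.sha256(data).digest()

def leaf_label(tuple_value: bytes) -> bytes:
    """A leaf label is the hash of the tuple value it represents."""
    return h(tuple_value)

def node_label(child_labels) -> bytes:
    """Phi(n) = h(Phi(child_1) || ... || Phi(child_f)) for an internal node."""
    return h(b"".join(child_labels))

def verify_proof(vo_levels, proof: bytes) -> bool:
    """Recompute the root label bottom-up and compare it with the owner's Proof.
    `vo_levels[0]` holds the ordered leaf labels of the node on the verification
    path (query results and boundary tuples hashed by the client); each later
    level holds that node's siblings' labels from the VO, with None marking the
    slot of the label computed so far."""
    label = node_label(vo_levels[0])
    for children in vo_levels[1:]:
        label = node_label([label if c is None else c for c in children])
    return label == proof
```

If the server drops, alters, or fabricates a tuple, some leaf label changes and the recomputed root no longer matches the Proof the data owner stored at construction time.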
Verification in Presence of Access Control
Given a range query, not all tuples that satisfy the query may be accessible to the user. In such cases, verification using the regular MB-tree VO will not work. Also, verification of query results usually involves reading extra tuple values [START_REF] Li | Dynamic authenticated index structures for outsourced databases[END_REF][START_REF] Narasimha | Authentication of outsourced databases using signature aggregation and chaining[END_REF]. These tuples may not be accessible to the verifier due to the access control rules. In such cases, the verifier will not be able to verify a query or a transaction. Suitable adjustments to the authentication data structures are required to enable the verification of a query in the presence of access control rules. To solve this problem, we modify the MB-tree as follows: each node, n, is extended with an access control bitmap, B_n, in which the i-th bit is "on" if there is a tuple in the subtree that belongs to the range r_i. Node labels are computed using Equation 2:
Φ(n) = h(B_child_1 || Φ(n_child_1) || ... || B_child_k || Φ(n_child_k))    (2)
The VO now contains the nearest tuple value just preceding and the nearest tuple value just following the query range such that these tuples are accessible to the user. The VO also contains all the tree nodes required to prove the correctness of these tuples and to prove that the tuples that were left out of the query results were indeed inaccessible to the user. As an example, consider the following access control ranges on attribute A: r_1: [0, 35], r_2: [36, 64], r_3: [65, 100]. Under these access control ranges, each access control bitmap will have three bits, one for each access control range. Figure 3 shows an augmented MB-tree, as described above, built on Table 2. When a user who is authorized to access r_1 and r_3 executes a range query σ_{25<A<50}, tuples t_2 and t_3 will form the query result. Tuple t_4 will not be part of the query result as it is not accessible to the user. To verify the correctness and completeness of the query result, the user has to verify that the missing tuples were indeed inaccessible to her. The user will also have to verify that the nearest tuple just before the query range was indeed t_1 and the nearest tuple just following the query range (and also accessible to the user) is indeed t_7. To prove the completeness of the query result, the VO of this query will include t_1 and t_7. To prove that the omitted tuples (i.e., t_4, t_5, and t_6) were indeed inaccessible to the user, the VO will also include the bitmaps B_6 and B_11. Using B_6, the user can be convinced that tuples t_5 and t_6 were indeed inaccessible to her. Similarly, using B_11 the user can be assured that tuple t_4 was inaccessible to her. As in the case of the regular MB-tree, the VO will include all other necessary labels required to calculate the root label.
(Figure 3 shows the augmented MB-tree with per-node access control bitmaps, e.g., B1: 111, B2: 011, B3: 110.)
Enforcing Privacy for Access Control
In this subsection, we present our solutions to encrypt the database so that a user can read/write only the subset of the data that she has been authorized to access. The server (or any intruder) cannot read the data. As mentioned before, in this work we consider the lazy revocation model. Under this model, once a range r_i is removed from a user's accessible ranges, future tuples in r_i are encrypted using a new key. All remaining and future users who can access r_i will be distributed the new key.
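The bitmap machinery of Equation (2) reduces to a couple of small operations on the verifier's side; the sketch below is illustrative only (the byte layout and the bit-per-range encoding are assumptions, not the prototype's format).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def augmented_label(children) -> bytes:
    """Equation (2): Phi(n) = h(B_c1 || Phi(c1) || ... || B_ck || Phi(ck)).
    `children` is an ordered list of (bitmap_bytes, label_bytes) pairs."""
    return h(b"".join(bmp + lbl for bmp, lbl in children))

def subtree_inaccessible(bitmap: int, user_ranges) -> bool:
    """True if none of the user's authorized range indices has its bit set,
    i.e. every tuple under the node may legitimately be omitted for this user."""
    return all(not (bitmap >> i) & 1 for i in user_ranges)

# Example (assuming r_i maps to bit i-1): a user authorized for r_1 and r_3 passes
# user_ranges=(0, 2); a subtree whose bitmap is 0b010 (only r_2 present) is then
# provably inaccessible to her and may be dropped from the query result.
```

With this check, the verifier never needs tuples outside her authorization; the remaining question, addressed by the key regression scheme discussed next, is how the per-range keys are versioned so that a revoked user cannot read new data.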
Any pre-existing tuples in r_i are not necessarily re-encrypted with the new key. To decrypt the data in the range r_i, the user may need the current or previous keys of that range. Only the data owner decides which ranges are accessible to the user. The Key Regression scheme [START_REF] Fu | Key regression: Enabling efficient key distribution for secure distributed storage[END_REF] provides a mechanism for versioning encryption keys for symmetric-key encryption. Given a version of the key, the user can compute all previous versions of the key. However, future versions of the key cannot be derived from the current key. At the start, all data items in an access control range are encrypted using the first version of the key. Each user authorized to access the range is given that key. When a user is evicted from the range, the key is updated to a newer version. All future data items in the range are then encrypted using the new version of the key. Since users cannot generate the new version of the key, the evicted user cannot read future tuples in the range. A Key Regression scheme is defined using four algorithms. Algorithm setup is used by the data owner to set up the initial state. Algorithm wind is used to generate the next state. Algorithm unwind is used to derive the previous state, and keyder is used to generate the symmetric key for a given state. The tuples are encrypted using the symmetric key. We consider a particular key regression scheme that uses RSA to generate states. Consider an RSA scheme with private key <p, q, d>, public key <N, e>, and security parameter k, such that p and q are two k-bit prime numbers, N = pq, and ed = 1 (mod ϕ(N)), where ϕ(N) = (p - 1)(q - 1). For each range r_i, a secret random number S_i ∈ Z*_N is selected as the initial state. Algorithms wind, unwind, and keyder are defined in Algorithms 1, 2, and 3, respectively.
Algorithm 1 wind(N, e, d, S_i)
  nextS_i = S_i^d (mod N)
  return nextS_i
Algorithm 2 unwind(N, e, S_i)
  prevS_i = S_i^e (mod N)
  return prevS_i
Algorithm 3 keyder(S_i)
  K_i = SHA1(S_i)
  return K_i
For each range r_i in the range set R, the data owner generates a secret state S_i ∈ Z*_N. The user stores the current states for each range it has access to. Using the current state of a range, the user can compute the corresponding symmetric key to encrypt or decrypt the data in that range. Whenever a range is added to or removed from a user's accessible ranges, the state corresponding to that range is moved to the next state and all users who still have access to that range are informed about the new version of the state. If a tuple in a range is encrypted using a newer key, the user requests the new state from the data owner. Due to encryption, the server cannot execute range queries. To be able to execute range queries on the encrypted data, we use bucketization to divide the data into multiple buckets. Range queries are then suitably modified to search among these bucket ranges.
Bucketization: Bucketization involves partitioning the attribute domain into multiple equi-width or equi-depth partitions. Attribute values are then converted from specific values in the domain to bucket labels. Table 3 is an example of equi-width bucketization of Table 2, where each partition has width 10. Using equi-width bucketization reveals the density in each bucket. Equi-depth bucketization, on the other hand, requires frequent adjustments (which require communication with the user) when the database is updated frequently.
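Before moving on, the three key regression primitives above transcribe almost directly into code (signatures lightly simplified; SHA-1 is as in the quoted scheme, and the big-endian byte encoding of the state is an illustrative assumption):

```python
import hashlib

def wind(N, d, S_i):
    """Owner-only step to the next state: S_{i+1} = S_i^d mod N (needs the private exponent d)."""
    return pow(S_i, d, N)

def unwind(N, e, S_i):
    """User step back to the previous state: S_{i-1} = S_i^e mod N (public exponent only)."""
    return pow(S_i, e, N)

def keyder(S_i):
    """Symmetric key for a state: K_i = SHA1(S_i)."""
    return hashlib.sha1(S_i.to_bytes((S_i.bit_length() + 7) // 8, "big")).digest()
```

On a revocation the owner calls wind and hands the new state only to the remaining users; a user holding the current state can still unwind as far back as needed, derive keyder of any older version, and decrypt pre-existing tuples. The bucket labels described next are what make the resulting ciphertexts range-searchable at the server.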
[START_REF] Ceselli | Modeling and assessing inference exposure in encrypted databases[END_REF][START_REF] Hore | A privacy-preserving index for range queries[END_REF] show that only limited information can be deduced due to bucketization. As shown in Table 3, the attribute A * represents the bucket labels after bucketization, and the encrypted tuple value is kept in a separate attribute. User queries are now executed on A * . To verify the correctness and completeness of query results, our augmented MB-tree can be built on top of the bucketized field, A * . The verification process will remain the same, except now tuples will be inserted in the tree according to A * . Thus, combining the solutions proposed in Subsections 3.1 and 3.2, the data owner and the users can be convinced that the data were not maliciously modified, and the data were accessed by the user that had appropriate authorizations. Experiments To demonstrate the feasibility and evaluate the efficiency of the proposed solutions, we implement our solutions on top of Oracle. The solutions are implemented in the form of database procedures using Pl/SQL and no internal modifications were done on the database. While we expect that the ability to modify the database internals or to exploit the index system will lead to a much more efficient implementation, our current goal is to establish the feasibility of our approach and to demonstrate the ease with which our solution can be adopted for any generic DBMS. Users are implemented using Python. Setup: We create a synthetic database with one table uTable containing one million tuples of application data. uTable is composed of a table with two attributes (T upleID and A). The table is populated with random values of A between -10 7 and 10 7 . When tuples are encrypted, the ciphertext is stored in attribute EncA. Table 4 describes the different tables and indexes used in our prototype. An MB-tree is created on attribute A (integer). We consider three transactions implemented as stored procedures, namely Insert, Delete, and Select. Insert creates a new tuple with a given value of attribute A. Delete deletes the tuples which have a given value of attribute A and Select is a range query over attribute A. The experiments were run on an Intel Xeon 2.4GHz machine with 12GB RAM and a 7200RPM disk with a transfer rate of 3Gb/s, running Oracle 11g on Linux. We run Oracle with a strict serializable isolation level. We use a standard block size of 8KB. Implementation Details: The MB-tree has been implemented in the form of a database table -each node in the MB-tree is represented by a tuple in the MB-tree table (uTableMBT). Ideally, the MB-tree should be maintained as a B+ index trees of the database. However, that requires internal modifications to the index system of the database. We leave that for future work. Each MB-tree node, identified by a unique id, stores uTable tuples in the range [key min, key max). level denotes the height of the node from the leaf level, i.e., leaf nodes have level = 0, and the root has the highest level. The keys field stores the keys of the node, and the children and childLabels fields store the corresponding child ids and labels respectively. Label stores the label of the node. When access control mechanisms are in place, two more attributes, accessBitmap and childAcessBitmaps, are added to store the access control bitmap of the node and access control bitmaps of the child nodes respectively. Results We now present the results of our experiments. 
To provide a base case for comparison, we compare the performance of our solutions with a regular MB-tree based solution [START_REF] Li | Dynamic authenticated index structures for outsourced databases[END_REF], where access control rules are not supported. This solution leaks information for transaction verification. Furthermore, this solution does not provide privacy against a malicious server. We analyze the costs of construction for the authentication data structures, execution of a transaction, and verification of a transaction. The fanout for the authentication structure is chosen so as to ensure that each tree node is contained within a single disk block. In each experiment, time is measured in seconds, storage and IO is measured as the number of blocks read or written as reported by Oracle.The reported times and IO are the total time and IO for the entire workload. Each experiment was executed 3 times to reduce the error -average values are reported. In the plots, N ormal represents the solution from [START_REF] Li | Dynamic authenticated index structures for outsourced databases[END_REF], AC represents our solution where access control bitmaps are added to the nodes to support access control rules, and AC + Enc represents our solution that encrypts the tuple values and uses bucketization. AC and AC + Enc both allow query verification in the presence of access control rules. AC + Enc also provides privacy against the server or an intruder. When bucketization is used, we divide the data into 1000 buckets. We use 200 access control ranges. Construction Cost: First, we consider the overhead of constructing (bulk loading) the proposed data structures. To support access control rules, our solution requires augmenting MB-tree nodes with additional values that store the access control bitmaps. To provide privacy from the server, key regression is used that allows different versions of the encryption key. This requires storing additional attibutes to store the ciphertext and the key version. Figures 4(a) and 4(b) show the effect of data size on construction time and storage overhead, respectively. As expected, the storage cost is higher for our solutions. However, the construction time does not change significantly as the additional computation required for encryption is done by the user, keeping the computation cost for the server similar to just maintaining the MB-tree. Insert Cost: We study the performance as the number of Insert transactions is increased. For this experiment no verification is performed. Figures 5(a) and 5(b) show the results. As expected, our solution incurs a higher overhead for IO as it requires keeping additional data. These costs increase linearly with the number of transactions. Surprisingly, this does not translate into a significant increase in the running time. This represents the computational overhead of hashing and concatenations which dominates the cost. Delete operation shows similar costs (not presented due to lack of space). Search Cost: Search cost is influenced by both the size of the result (larger results will be more expensive to verify) and number of access control rules as that requires verifying that the tuples that were dropped from the query result were indeed not accessible to the user. To evaluate the performance of our solution for range queries (Search), we run 100 Search transactions for different ranges (thereby with different result set size) and verify all transactions. 
sub-range that is accessible to the user is returned as query result. For verification, the server has to return the right and left most paths of each sub-range. However, in our solution, an access control bitmap is enough to verify that the sub-range is not accessible. This decreases the VO size and computation cost. As shown in the figure 6(a), our solution performs slightly better than MB-tree as our solution requires lesser VO size. As the result set size increases, the verification object size increases which results in an increase in verification time. The performance of our solution is comparable to that of an MB-tree alone. Verification Cost: Our solution changes the Verification Object significant as our solution does not require bordering tuples outside the accessible range. However, since the node labels now include access control bitmaps, it increases the VO size. We now demonstrate the change in VO size in our solutions. To demonstrate the overhead of insert verification, we run 1000 Insert transactions and verify them. Average VO size is reported in figure 7(a). As expected, the VO size is higher for our solutions as it requires additional information, like access control bitmaps and key versions. To demonstrate the overhead of search query verification on the system, we run 1000 Search transactions with varying ranges and varying access control ranges. The average VO size is reported in Figure 7(b). As discussed before, in a normal MB-tree, to support access control, a query range has to be divided into multiple sub-ranges so that the query accesses only the part of the data that are accessible to the user. For each sub-range, the VO includes the tuple just before and just following the sub-range. VO also includes all necessary nodes that are required to verify that the bordering tuples indeed existed the database. However, in our solution, this is not necessary. Each node contains information if the descendant tuples are accessible or not. Hence, VO does not always require the bordering tuples. Figure 7(b), that shows the effects of our solution on the VO size validates this. VO size for AC is smaller than the normal MB-tree. VO size for AC + Enc is comparable to MB-tree. This is due to the ciphertexts. Overall, we observe that our solutions are efficient and provide mechanisms for access control with reasonable overheads and perform better than current solutions in some cases. Related Work Much work has been done towards providing mechanisms to verify the correctness and completeness of query results from an untrusted database server [START_REF] Jain | Trustworthy data from untrusted databases[END_REF][START_REF] Li | Dynamic authenticated index structures for outsourced databases[END_REF][START_REF] Mykletun | Authentication and integrity in outsourced databases[END_REF][START_REF] Narasimha | Authentication of outsourced databases using signature aggregation and chaining[END_REF][START_REF] Devanbu | Authentic third-party data publication[END_REF][START_REF] Pang | Verifying completeness of relational query results in data publishing[END_REF]. 
While some of the earlier work only considered correctness of query results [START_REF] Devanbu | Authentic third-party data publication[END_REF][START_REF] Pang | Authenticating query results in edge computing[END_REF], later work consider both correctness and completeness [START_REF] Li | Dynamic authenticated index structures for outsourced databases[END_REF][START_REF] Narasimha | Authentication of outsourced databases using signature aggregation and chaining[END_REF]. Some of these work have also considered data updates from multiple sources [START_REF] Jain | Trustworthy data from untrusted databases[END_REF][START_REF] Narasimha | Authentication of outsourced databases using signature aggregation and chaining[END_REF]. [START_REF] Jain | Trustworthy data from untrusted databases[END_REF] proposes a solution that uses Merkle B-Trees as authentication data structure, and allows multiple entities to independently run transactions on the untrusted database. Most of these works do not consider issues related to data privacy and access control. In these works, the user requires additional data items for verification, leading to information leakage. Some work has been done towards providing verification for correctness and completeness in the presence of access control rules [START_REF] Chen | Access control friendly query verification for outsourced data publishing[END_REF][START_REF] Pang | Verifying completeness of relational query results in data publishing[END_REF][START_REF] Kundu | Structural signatures for tree data structures[END_REF][START_REF] Bertino | Selective and authentic third-party distribution of xml documents[END_REF][START_REF] Miklau | Controlling access to published data using cryptography[END_REF]. While [START_REF] Pang | Verifying completeness of relational query results in data publishing[END_REF] supports one-dimensional range queries and data updates, [START_REF] Chen | Access control friendly query verification for outsourced data publishing[END_REF] supports multi-dimensional range queries and does not handle updates. Both these solutions do not provide privacy against the server. [13] provides a tree based solution for verifying correctness of query results without information leakage. However, this solution does not provide mechanisms for verifying completeness. [START_REF] Bertino | Selective and authentic third-party distribution of xml documents[END_REF][START_REF] Miklau | Controlling access to published data using cryptography[END_REF] focus on the access control problems with data authenticity for XML data. These solutions provide solutions for data privacy against users but not the server or an intruder. The server or any intruder would have full access to the data leading to breach of privacy. [START_REF] Hacigümüs | Providing database as a service[END_REF] proposes solutions to provide privacy against the server. Data are encrypted before sending it to the server. The data are encrypted in such a way that user queries can still be executed on the encrypted data. However, this solution does not provide access control mechanisms. 
Much work has been done towards key management [START_REF] Tzeng | A time-bound cryptographic key assignment scheme for access control in a hierarchy[END_REF][START_REF] Akl | Cryptographic solution to a problem of access control in a hierarchy[END_REF][START_REF] Atallah | Dynamic and efficient key management for access hierarchies[END_REF][START_REF] Fu | Key regression: Enabling efficient key distribution for secure distributed storage[END_REF][START_REF] Kallahalla | Plutus: Scalable secure file sharing on untrusted storage[END_REF]. [START_REF] Fu | Key regression: Enabling efficient key distribution for secure distributed storage[END_REF][START_REF] Kallahalla | Plutus: Scalable secure file sharing on untrusted storage[END_REF] consider the lazy revocation model under which following the revocation of user membership from a group, the content publisher encrypts future content in that group with a new cryptographic key and the new key is distributed to only current group members. The content publisher does not immediately re-encrypt all preexisting content since the evicted member could have already cached that content. [START_REF] Fu | Key regression: Enabling efficient key distribution for secure distributed storage[END_REF] proposes a key derivation mechanism with which a user can derive old encryption keys using the current keys, however, it does not allow a user to derive future keys. When a user is evicted from the group, all future updates are encrypted using a newer version of the key. This saves a lot of computation and I/O cost whenever access control rules are changed. [START_REF] Tzeng | A time-bound cryptographic key assignment scheme for access control in a hierarchy[END_REF][START_REF] Akl | Cryptographic solution to a problem of access control in a hierarchy[END_REF][START_REF] Atallah | Dynamic and efficient key management for access hierarchies[END_REF] propose key management solutions for access hierarchies. [START_REF] Tzeng | A time-bound cryptographic key assignment scheme for access control in a hierarchy[END_REF] proposes a solution to not only restrict a user's access to a subset of the data, but also restricts the user's access to a limited time. In this paper, we propose solutions to solve both problems collectivelyour solutions provide mechanisms to ensure trustworthiness of query results while ensuring that access control policies are enforced, and it also provides mechanisms for encrypting the data that ensures that a data item is accessed (read and/or write) by only those entities that were authorized to access it. Conclusion In this paper, we considered the problem of implementing access control policies on an untrusted database server, while ensuring that the query results are trustworthy. With our solution, the data owner can be assured that the data will be read by only those users that were authorized by her apriori. Furthermore, the data owner and the users can be assured of the trustworthiness of the query results without violating the access control policies. We demonstrate that the solutions can be implemented over an existing database system (Oracle) without making any changes to the internals of the DBMS. Our results show that the solutions do not incur heavy costs and are comparable to current solutions for query verification (that do not support access control rules). We believe that the efficiency of the solutions can be further improved by modifying the internals and exploiting the index structures to get better disk performance. 
We plan to explore these issues in future work.
Fig. 1. The various entities involved: the database owner (Alice), the database server (Bob), and authorized users (Carol).
Fig. 3. Augmented MB-tree to allow Access Control.
Fig. 4. Construction time and storage overhead.
Fig. 5. Insert time and I/O overhead.
Fig. 6.
Fig. 7. Verification Object Size.
Table 1. Symbol Table.
t_i: the i-th tuple of a relation
h(x): the value of a one-way hash function over x
Φ(n): label of node n in the MB-tree or MHT
H_i: label of the i-th node in the MB-tree
a||b: concatenation of a and b
VO: a verification object
Proof: root label of the MB-tree
R: a set of ranges that partitions the data
r_i: r_i ∈ R
S_i, K_i: state and key for range r_i
Enc_k(x): encryption of x using symmetric key k
B_n: access control bitmap for node n
Table 2. Sample Data Table (tupleID, A): (1, 23), (2, 29), (3, 35), (4, 48), (5, 59), (6, 63), (7, 65), (8, 70).
Table 3. Bucketized Data Table (tupleID, A*, Enc(A)): (1, [20-30), Enc_K1(23)), (2, [20-30), Enc_K1(29)), (3, [30-40), Enc_K1(35)), (4, [40-50), Enc_K2(48)), (5, [50-60), Enc_K2(59)), (6, [60-70), Enc_K2(63)), (7, [60-70), Enc_K3(65)), (8, [70-80), Enc_K3(70)).
Table 4. Relations and Indexes in the database.
uTable: attributes TupleID, A, EncA(2), keyVersion(2); index on A
uTableMBT: attributes id, level, Label, keys, children, childLabels, key_min, key_max, accessBitmap(1), childAccessBitmap(1); indexes on id and (key_min, key_max, level)
AccessControlRanges: attributes id, key_min, key_max
AccessControlRules: attributes AccessControlRule_id, User_id
(1) Used when supporting Access Controls.
Acknowledgements: We thank Walid Aref for many discussions and valuable comments. The work in this paper is supported by National Science Foundation grants IIS-1017990 and IIS-09168724.
39,598
[ "1004142", "1004143" ]
[ "147250", "147250" ]
01490709
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490709/file/978-3-642-39256-6_17_Chapter.pdf
Boxiang Dong email: [email protected] Ruilin Liu Wendy Wang Result Integrity Verification of Outsourced Frequent Itemset Mining Keywords: Cloud computing, data mining as a service, integrity verification The data-mining-as-a-service (DM aS) paradigm enables the data owner (client) that lacks expertise or computational resources to outsource its mining tasks to a third-party service provider (server). Outsourcing, however, raises a serious security issue: how can the client of weak computational power verify that the server returned correct mining result? In this paper, we focus on the problem of frequent itemset mining, and propose efficient and practical probabilistic verification approaches to check whether the server has returned correct and complete frequent itemsets. Introduction Cloud computing, an emerging trend of provisioning scalable computing services, provides the opportunity that data mining is offered as an outsourced service. Though the data-mining-as-a-service (DM aS) paradigm is advantageous to achieve sophisticated data analysis in a cost effective way, end users hesitate to place full trust in Cloud computing. This raises serious security concerns. One of the main security issues is the integrity of the mining result. There are many possible reasons for the service provider to return incorrect answers. For instance, the service provider would like to improve its revenue by computing with less resources while charging for more. Therefore, it is important to provide efficient mechanisms to verify the result integrity of outsourced data mining computations. In this paper, we focus on frequent itemset mining, an important data mining problem, as the main outsourced data mining service. We aim to address the particular problem of verifying whether the server has returned correct and complete frequent itemsets. By correctness, we mean that all itemsets returned by the server are frequent. By completeness, we mean that no frequent itemset is missing in the server's result. The key idea of our verification methods is to construct a set of (in)frequent itemsets from real items, and use these (in)frequent itemsets as evidence to check the integrity of the server's mining result. We remove real items from the original dataset to construct artificial infrequent itemsets, and insert copies of items that exist in the dataset to construct artificial frequent items. A nice property of our verification approach is that the number of required evidence (in)frequent itemsets is independent from the size of the dataset as well as the number of real frequent itemsets. This is advantageous as our verification approach will be especially suitable for verification of frequent mining on large datasets. Compared with the verification techniques based on fake items (e.g, [START_REF] Wong | An audit environment for outsourcing of frequent itemset mining[END_REF]), our verification techniques are more robust to catch the untrusted server that may try to escape verification by utilizing additional background knowledge such as the item frequency distribution information in the outsourced data. Our experimental results show that our verification approach can achieve strong correctness/completeness guarantee with small overhead. The paper is organized as follows. We discuss related work in Section 2 and preliminaries in Section 3. We present our EF and EI construction mechanisms for completeness and correctness verification in Section 4 and 5 respectively. 
In Section 6 we describe the post-processing procedures at the client side. In Section 7, we evaluate the performance of our approach. We conclude in Section 8. Related Work The problem of verifiable computation was tackled previously by using interactive proofs [START_REF] Goldwasser | The knowledge complexity of interactive proof systems[END_REF], probabilistically checkable proofs [START_REF] Arora | Proof verification and the hardness of approximation problems[END_REF], zero-knowledge proofs [START_REF] Yan | Zero-knowledge proofs of retrievability[END_REF], and non-interactive verifiable computing [START_REF] Gennaro | Non-interactive verifiable computing: outsourcing computation to untrusted workers[END_REF]. Unfortunately, this body of theory is impractical, due to the complexity of the algorithms and difficulty to use general-purpose cryptographic techniques in practical data mining problems. In the last decade, intensive efforts have been put on the security issues of the database-as-a-service (DaS) paradigm (e.g., [START_REF] Hacigümüş | Executing sql over encrypted data in the database-service-provider model[END_REF][START_REF] Pang | Verifying completeness of relational query results in data publishing[END_REF]). The main focus is the integrity (i.e., correctness and completeness) of result of range query evaluation. Only until recently some attention was paid to the security issues of the datamining-as-a-service (DM aS) paradigm [START_REF] Tai | k-support anonymity based on pseudo taxonomy for outsourcing of frequent itemset mining[END_REF]. However, most of these work only focus on how to encrypt the data to protect data confidentiality and pattern privacy, while we focus on integrity verification of mining result. There is surprisingly very little research [START_REF] Wong | An audit environment for outsourcing of frequent itemset mining[END_REF][START_REF] Liu | Audio: An integrity auditing framework of outlier-mining-as-a-service systems[END_REF] on result verification of outsourced data mining computations in the DM aS paradigm. Among these work, [START_REF] Wong | An audit environment for outsourcing of frequent itemset mining[END_REF] is the one the most related to ours. It proposed a result verification scheme for outsourced frequent itemset mining. Its basic idea is to insert some fake items that do not exist in the original dataset into the outsourced data; these fake items construct a set of fake (in)frequent itemsets. Then by checking the fake (in)frequent itemsets, the client can verify the correctness and completeness of the mining answer by the server. Though effective, this method assumes that the server has no background knowledge of the items in the outsourced data, and thus it has equal probability to cheat on the fake and real itemsets. We argue that using fake items cannot catch the malicious server that may have some background knowledge of the outsourced dataset. For example, if the server knows that there are k unique items in the original dataset, let k ′ (k ′ > k) be the number of items in the outsourced dataset. The probability that an item is real is k/k ′ . If the number of artificial items is relatively small compared with the number of real items, the server has a high probability to identify a real item. 
Furthermore, the verification approach in [START_REF] Wong | An audit environment for outsourcing of frequent itemset mining[END_REF] still preserves the frequency of items, which may enable the server to identify the real/artificial items by the frequency-based attack (e.g, [START_REF] Wong | Security in outsourcing of association rule mining[END_REF][START_REF] Wang | Efficient secure query evaluation over encrypted xml databases[END_REF]). Our approach is much more challenging than using fake items (as in [START_REF] Wong | An audit environment for outsourcing of frequent itemset mining[END_REF]), since insertion/deletion of real items may modify the true frequent itemsets. Our goal is to minimize the undesired change on the real frequent itemsets, while provide quantifiable correctness/completeness guarantee of the returned result. Preliminaries Frequent Itemset Mining. Given a transaction dataset D that consists of n transactions, let I be the set of unique items in D. The support of the itemset I ⊆ I (denoted as sup D (I)) is the number of transactions in D that contain I. An itemset I is frequent if its support is no less than a support threshold min sup [START_REF] Agrawal | Fast algorithms for mining association rules large databases[END_REF]. The (in)frequent itemsets behave the following two monotone properties: (1) any superset of an infrequent itemset must be infrequent, and (2) any subset of a frequent itemset must be frequent. Untrusted Server and Verification Goal. Due to many reasons (e.g., code bugs, software misconfiguration, and inside attack), a service provider may return incorrect data mining results. In this paper, we consider the server that possesses the background knowledge of the outsourced dataset, including the domain of items and their frequency information, and tries to escape from verification by utilizing such information. We formally define the correctness and completeness of the frequent itemset mining result. Let F be the real frequent itemsets in the outsourced database D, and F S be the result returned by the server. We define the precision P of F S as P = |F ∩F S | |F S | (i.e., the percentage of returned frequent itemsets that are correct), and the recall R of F S as R = |F ∩F S | |F | (i.e., the percentage of correct frequent itemsets that are returned). Our aim is to catch any answer that does not meet the predefined precision/recall requirement with high probability. Formally, given a dataset D, let F s be the set of frequent itemsets returned by the server. Let pr R and pr P be the probability to catch F s of recall R ≤ α 1 and precision P ≤ α 2 , where α 1 , α 2 ∈ [0, 1] are given thresholds. We say a verification method M can verify (α 1 , β 1 )-completeness ((α 2 , β 2 )-correctness, resp.) if pr R ≥ β 1 ( pr P ≥ β 2 , resp.), where β 1 ∈ [0, 1] (β 2 ∈ [0, 1], resp. ) is a given threshold. Our goal is to find a verification mechanism that can verify (α 1 , β 1 )-completeness and (α 2 , β 2 )-correctness. Construction of Evidence Frequent Itemsets (EF s) Our key idea of completeness verification is that the client uses a set of frequent itemsets as the evidence, and checks whether the server misses any evidence frequent itemset in its returned result. If it does, the incomplete answer by the server is caught with 100% certainty. Otherwise, the client believes that the server returns incomplete answer with a probability. 
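Concretely, the client-side audit reduces to two set-membership checks over the returned collection (the detection probability is quantified next). The sketch below is illustrative only, assuming itemsets are handed around as Python frozensets; it also covers the evidence infrequent itemsets used for correctness in the next section.

```python
def audit_result(server_itemsets, evidence_frequent, evidence_infrequent):
    """Client-side audit of the server's frequent-itemset answer:
    every evidence frequent itemset (EF) must appear in the answer, and
    no evidence infrequent itemset (EI) may appear."""
    returned = {frozenset(s) for s in server_itemsets}
    missing_ef = [ef for ef in evidence_frequent if frozenset(ef) not in returned]
    leaked_ei = [ei for ei in evidence_infrequent if frozenset(ei) in returned]
    return {
        "completeness_evidence_ok": not missing_ef,  # any missing EF proves incompleteness
        "correctness_evidence_ok": not leaked_ei,    # any returned EI proves incorrectness
        "missing_ef": missing_ef,
        "leaked_ei": leaked_ei,
    }
```

Passing the audit gives only probabilistic assurance: a cheating server escapes detection only if it happens to return every EF and none of the EIs.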
In particular, the probability pr R of catching the incomplete frequent itemsets F S of recall R by ℓ evidence frequent itemsets (EF s) is pr R = 1-R ℓ . Clearly, to satisfy (α 1 , β 1 )-completeness (i.e., pr R ≥ β 1 ), it must be true that ℓ ≥ ⌈log α1 (1 -β 1 )⌉. Further analysis can show that to catch a server that fails to return a small fraction of frequent itemsets with high completeness probability does not need large number of EF s. For instance, when α 1 = 0.95 and β 1 = 0.95, only 58 EF s are sufficient. Apparently the number of required EF s is independent from the size of the dataset as well as the number of real frequent itemsets. Therefore our verification approach is especially suitable for large datasets. We propose the MiniGraph approach to construct EF s. The basic idea of the MiniGraph approach is to construct itemsets that are guaranteed to be infrequent in the original dataset D. To construct these itemsets quickly without doing any mining, we construct the itemsets that contain at least one infrequent 1-itemset. The MiniGraph approach consists of the following steps: Step 1: Pick the shortest infrequent itemset (can be 1-itemset) of the largest support as I s . Step 2: Find transactions D s ⊆ D that contain I s . Construct the MiniGraph G from D s , with the root of G representing I s , and each non-root node in G representing a transaction in D s . There is an edge from node N i to node N j if the transaction of node N j is the maximum subset of the transaction of node N i in D (i.e., no other transactions in D that contain the transaction of node N i ). Step 3: Mark all nodes at the second level of G as candidates. For each candidate, all of its subset itemsets that contain I s will be picked as EF s. If the total number of candidates is less than ℓ = ⌈log α1 (1 -β 1 )⌉, we add the next infrequent 1-itemset of the largest frequency as another I s , and repeat Step 1 -3, until we either find ℓ EF s or there is no infrequent 1-itemset left. Step 4: For each EF , construct (min sup -s) copies as artificial transactions, where s is the support of EF in D. The time complexity of the MiniGraph approach is O(|D|). Construction of Evidence Infrequent Itemsets (EIs) Our basic idea of correctness verification is that the client uses a set of infrequent itemsets as the evidence, and checks whether the server returns any evidence infrequent itemset. If it does, the incorrect answer by the server is caught with 100% certainty. Otherwise, the client believes that the server returns the incorrect answer with a probability. In particular, the probability pr P of catching the incorrect frequent itemsets with precision P by using ℓ EIs is pr P = 1 -P ℓ . To satisfy (α 2 , β 2 )-correctness (i.e., pr P ≥ β 2 ), it must satisfy that ℓ ≥ ⌈log α2 (1 -β 2 )⌉. As pr P and pr R (Section 4) are measured in the similar way, we have the same observation of the number of EIs as the number of EF s. Our EI construction method will identify a set of real frequent itemsets and change them to be infrequent by removing items from the transactions that contain them. Our goal is to minimize the number of items that are removed. Next, we explain the details. Step 1: Pick Evidence Infrequent Itemsets (EIs). First, we exclude items that are used as I s for EF construction from the set of 1-itemset candidates. This ensures that no itemset will be required to be EI and EF at the same time. Second, we insert all infrequent 1-itemsets into the evidence repository R. 
If |R| ≥ ℓ = ⌈log α2 (1 − β 2 )⌉, we terminate the EI construction. Otherwise, we compute h, the minimal value such that the binomial coefficient C(m − |R|, h) ≥ ℓ − |R|, where m is the number of unique items in D. Third, we compute k, the minimal value such that C(k, h) ≥ ℓ − |R|. We pick the first k frequent 1-itemsets S following their frequency in ascending order, and construct all h-itemset candidates S h that contain h items from S. The h-itemset candidates of non-zero support in D are inserted into R. To efficiently find the itemsets I that have non-zero support in D, we make use of a simpler version of the FP-tree [START_REF] Han | Mining frequent patterns without candidate generation: A frequent-pattern tree approach[END_REF] to store D in a compressed way. More details of this data structure are omitted due to space limits.

Step 2: Pick Transactions for Item Removal. We aim at transforming the frequent EIs picked by Step 1 (i.e., the artificial infrequent EIs), denoted AI, into infrequent itemsets. To achieve this, we pick a set of transactions D′ ⊆ D such that for each frequent itemset I ∈ AI, sup D′ (I) ≥ sup D (I) − min sup + 1.

Step 3: Pick Item Instances for Removal. We decide which items in the transactions picked by Step 2 will be removed. To minimize the total number of removed items, we prefer to remove the items that are shared among patterns in AI. Therefore, we sort the items in AI by their frequency in AI in descending order, and follow this order to pick the items to be removed. The time complexity of the EI construction approach is O(|EI||D| + k!|T|), where k is the number of frequent 1-itemsets used for the construction of EIs, and T is the FP-tree constructed for checking the existence of itemsets in D. Normally k << m, where m is the number of items in D, and |T| << |D|.

Post-Processing

There are two types of side effects introduced by the EFs and EIs that need to be compensated: (1) EFs may introduce artificial frequent itemsets that do not exist in D; and (2) EIs may make some real frequent itemsets disappear. Removal of the artificial frequent itemsets is straightforward: since the client is aware of the seed item I s that is contained in all EFs, it only needs to remove all returned frequent itemsets that contain I s. To recover the missing real frequent itemsets, the client maintains locally all AIs when it constructs the EIs. During post-processing, the client adds these AIs back to F S as frequent itemsets.

Experiments

In this section, we experimentally evaluate our verification methods. All experiments were executed on a MacBook Pro with a 2.4GHz CPU and 4GB memory, running Mac OS X 10.7.3. We implemented a prototype of our algorithm in Java. We evaluated our algorithm on two types of datasets: (1) a dense dataset, in which most transactions are of similar length and contain > 75% of the items; and (2) a sparse dataset, in which the transactions have a skewed length distribution. We use the NCDC dataset1 (500 items, 365 transactions) as the dense dataset, and the Retail dataset2 (16470 items, 88162 transactions) as the sparse dataset. Due to its density, the NCDC dataset has a large number of frequent 1-itemsets, while the Retail dataset has a large number of infrequent 1-itemsets. We use the Apriori algorithm [START_REF] Agrawal | Fast algorithms for mining association rules large databases[END_REF], a classic frequent itemset mining algorithm, as the main mining algorithm, with the implementation available at http://www.borgelt.net/apriori.html. Robustness.
We measure the robustness of our probabilistic approach by studying the probability that the incorrect/incomplete frequent itemsets can be caught by using artificial EIs/EF s. We use the Retail dataset and vary α 1 and α 2 values to control the amount of mistakes that the server can make on the mining result. For each α 1 (α 2 , resp.) value, we randomly modify (1 -α 1 ) ((1 -α 2 ), resp.) percent of frequent (infrequent, resp.) itemsets (including both true and artificial ones) to be infrequent (frequent, resp.). Then with various β 1 and β 2 values, we construct artificial tuples to satisfy (α 1 , β 1 )-completeness and (α 2 , β 2 )correctness. Detection of any missing EF or the presence of any EIs will be recorded as a successful trial of catching the server. We repeat 1,000 times and record the percentage of trials (as detection probability) that the server is caught, with α 1 , α 2 ∈ [0.7, 0.9] and β 1 , β 2 ∈ [0.7, 0.9]. It shows that the detection probability for the completeness and correctness verification is always higher than β 1 and β 2 respectively. This proves the robustness of our probabilistic approach. The results are omitted due to limited space. Completeness Verification First, we measure the EF construction time for various α 1 and β 1 values. The result in Figure 1 (a) shows that EF construction time grows when α 1 and β 1 grow, since the MiniGraph approach has to search for more I s to construct more EF s for higher completeness guarantee. Second, we measure the amount of inserted artificial transactions and compare it with the size of the database. In particular, let t be the number of artificial transactions to be inserted, we measure the ratio r = t m , where m is the number of real transactions in D. As shown in Figure 1 (b), for Retail dataset, the inserted artificial transactions only take a small portion of the original database. For example, when β 1 ≤ 0.99, the ratio is less than 3%. Even for large values such as α 1 = β 1 = 0.999, the ratio is no more than 25% . Correctness Verification. First, we measure the EI construction time on N CDC dataset. The performance result is shown in Figure 2 (a). It is not surprising that it needs more time to construct EIs for higher α 2 and β 2 values. With a closer look of the result, when β 2 = 0.9 and 0.99, EI construction is very fast (no more than 1 second), since all EIs are real infrequent itemsets and there is no need to remove any item. However, when β 2 grows to 0.999, the construction time jumps to 400 -600 seconds, since now the algorithm needs to find frequent itemset candidates to be EIs as well as the items to be removed. We also measure the EI construction time of Retail dataset. It does not increase much when β 2 increases from 0.9 to 0.999, since all EIs are real infrequent 1itemsets. Second, we measure the amount of item instances that are removed by EI construction. In particular, let t be the number of item instances to be removed, we measure the ratio r = t |D| . The result of N CDC dataset is shown in Figure 2 (b). It can be seen that the number of item instances to be removed is a negligible portion (no more than 0.045%) of N CDC dataset. There is no item that is removed from Retail dataset, as it has a large number of infrequent 1itemsets, which provides sufficient number of EI candidates. This shows that we can achieve high correctness guarantee to catch small errors by slight change of the dataset. 
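The robustness experiment above can be mimicked in a few lines. The sketch below is our own illustration (not the paper's Java prototype): it draws ℓ = ⌈log α (1 − β)⌉ evidence itemsets at random, lets a simulated server return only an α-fraction of all frequent itemsets, and counts how often at least one evidence itemset is missing. Depending on rounding conventions, the computed ℓ may differ by one from the figures quoted earlier in the text.

```python
# Minimal simulation sketch: with ell = ceil(log_alpha(1 - beta)) evidence
# frequent itemsets, a server that returns only an alpha-fraction of the
# frequent itemsets (chosen at random) should be caught with probability
# at least beta, since the catch probability is 1 - alpha**ell.

import math
import random

def required_evidence(alpha, beta):
    return math.ceil(math.log(1 - beta, alpha))

def simulate_detection(alpha, beta, n_frequent=2000, trials=500, seed=7):
    rng = random.Random(seed)
    ell = required_evidence(alpha, beta)
    caught = 0
    for _ in range(trials):
        # Evidence itemsets are ell random members of the frequent collection.
        evidence = set(rng.sample(range(n_frequent), ell))
        # The cheating server keeps a random alpha-fraction of the itemsets.
        kept = set(rng.sample(range(n_frequent), int(alpha * n_frequent)))
        if not evidence <= kept:          # at least one evidence itemset missing
            caught += 1
    return ell, caught / trials

if __name__ == "__main__":
    for alpha, beta in [(0.9, 0.9), (0.8, 0.95), (0.95, 0.99)]:
        ell, rate = simulate_detection(alpha, beta)
        print(f"alpha={alpha}, beta={beta}: ell={ell}, "
              f"empirical detection rate={rate:.3f} (target >= {beta})")
```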
Conclusion

In this paper, we present our methods that verify the completeness of outsourced frequent itemset mining. We propose a lightweight verification approach that constructs evidence (in)frequent itemsets. In particular, we remove a small set of items from the original dataset and insert a small set of artificial transactions into the dataset to construct evidence (in)frequent itemsets. Our experiments show the efficiency and effectiveness of our approach. An interesting direction to explore is to design verification approaches that can provide deterministic correctness/completeness guarantees without extensive computational overhead.

Fig. 1. Ratio of Artificial Transactions and Mining Overhead (Retail dataset)

1 National Climatic Data Center of U.S. Department of Commerce: http://lwf.ncdc.noaa.gov/oa/climate/rcsg/datasets.html
2 Frequent Itemset Mining Dataset Repository: http://fimi.ua.ac.be/data/.
21,703
[ "1004149", "1004150", "1004151" ]
[ "244058", "244058", "244058" ]
01490710
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490710/file/978-3-642-39256-6_18_Chapter.pdf
Minh Sang Le Tran email: [email protected] Bjørnar Solhaug Ketil Stølen email: [email protected] An Approach to Select Cost-Effective Risk Countermeasures Security risk analysis should be conducted regularly to maintain an acceptable level of security. In principle, all risks that are unacceptable according to the predefined criteria should be mitigated. However, risk mitigation comes at a cost, and only the countermeasures that cost-efficiently mitigate risks should be implemented. This paper presents an approach to integrate the countermeasure cost-benefit assessment into the risk analysis and to provide decision makers with the necessary decision support. The approach comes with the necessary modeling support, a calculus for reasoning about the countermeasure cost and effect, as well as means for visualization of the results to aid decision makers. Introduction Security risk analysis concludes with a set of recommended options for mitigating unacceptable risks [START_REF]Risk management -Principles and guidelines[END_REF]. The required level of security and the acceptable level of risk should be defined by the risk criteria. However, deciding which countermeasures to eventually implement depends also on the trade-off between benefit and spending. No matter the criteria and the mitigating effect of the countermeasures, risk mitigation should ensure return on investment in security [START_REF] Birch | Risk analysis for information systems[END_REF]. Currently there exists little methodic support for systematically capturing and analyzing the necessary information for such decision making as an integrated part of the security risk analysis process. The contribution of this paper is an approach to integrate the assessment of countermeasures and their cost and effect into the risk analysis process. The approach comes with the necessary modeling support, a calculus for reasoning about risks, countermeasures, costs and effects within the risk models, as well as support for decision making. A formal foundation is provided to ensure rigorous analysis and to prove the soundness of the calculus. The approach is generic in the sense that it can be instantiated by several established risk modeling techniques. The reader is referred to the full technical report [START_REF] Tran | An approach to select cost-effective risk countermeasures exemplified in CORAS[END_REF] for the formal foundation, the soundness proofs and other details. The report demonstrates the instantiation in CORAS [START_REF] Lund | Model-Driven Risk Analysis: The CORAS Approach[END_REF] with an example from the eHealth domain. In Section 2 we present our approach, including the method, the modeling support and the analysis techniques. Section 3 gives a small example. Related work is presented in Section 4, before we conclude in Section 5. Our approach (see Fig. 1) takes a risk model resulting from a risk assessment and the associated risk acceptance criteria as input, and delivers a recommended countermeasure alternative as output. Hence, the approach assumes that the risk assessment has already been conducted, i.e. that risks have been identified, estimated and evaluated and that the overall risk analysis process is ready to proceed with the risk treatment phase. We moreover assume that the risk analysis process complies with the ISO 31000 risk management standard [START_REF]Risk management -Principles and guidelines[END_REF], in which risk countermeasure is the final phase. Our process consists of three main steps. 
In Step 1, the risk model is annotated with relevant information including the countermeasures, their cost, their reduction effect (i.e. effect on risk value), as well as possible effect dependencies (i.e. countervailing effects among countermeasures). In Step 2, we perform countermeasure analysis by enumerating all countermeasure alternatives (i.e. combinations of countermeasures to address risks) and reevaluating the risk picture for each alternative. This analysis makes use of the annotated risk model and a calculus for propagating and aggregating the reduction effect and effect dependency along the risk paths of the model. Step 3 performs synergy analysis for selected risks based on decision diagrams. The output is a recommended countermeasure alternative. Fig. 2 presents the underlying concepts of our approach. A Risk Model is a structured way of representing unwanted incidents, their causes and consequences using graphs, trees or block diagrams. An unwanted incident is an event that harms or reduces the value of an asset, and a risk is the likelihood of an unwanted incident and its consequence for a specific asset [START_REF]Risk management -Principles and guidelines[END_REF]. A Countermeasure mitigates risk by reducing its likelihood and/or consequence. The Expenditure includes the expenditure of countermeasure implementation, maintenance and so on for a defined period of time. The Effects Relation captures the extent to which a countermeasure mitigates risks. The Effects Relation could be the reduction of likelihood, and/or the reduction of consequence of a risk. The Dependency relation captures the countervailing effect among countermeasures that must be taken into account in order to understand the combined effect of identified countermeasures. The Calculus provides a mechanism to reason about the annotated risk model. Using the Calculus, we can perform countermeasure analysis on annotated risk models to calculate the residual risk value for each individual risk. A Decision Diagram facilitates the decision making process based on the countermeasure analysis. As already explained, the input required by our approach is the result of a risk assessment in the form of a risk model, and the corresponding risk accep- tance criteria. To ensure that our approach is compatible with established risk modeling techniques, we only require that the risk model can be understood as a risk graph. A risk graph [START_REF] Braendeland | Modular analysis and modelling of risk scenarios with dependencies[END_REF] is a common abstraction of several established risk modeling techniques such as Fault Tree Analysis (FTA), Event Tree Analysis (ETA), Attack Trees, Cause-Consequence Diagrams, Bayesian networks, and CORAS risk diagrams. Hence, our approach complies with these risk modeling techniques, and can be instantiated by them. A risk graph is a finite set of vertices and relations (see Fig. 3(a)). Each vertex v represents a threat scenario, i.e. a sequence of events that may lead to an unwanted incident, and can be assigned a likelihood f, and a consequence co. The likelihood can be either probabilities or frequencies, but here we use only the latter. A leads-to relation from v 1 to v 2 means that the former threat scenario may lead to the latter. The positive real numbers decorating the relations capture statistical dependencies between scenarios, such as conditional probabilities. 
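To make the concepts above concrete, here is a small data-structure sketch in Python. It is our own simplification and not the calculus of the paper (the full calculus is given in the technical report): vertices carry a frequency and a consequence, leads-to relations carry a conditional-dependency factor, and a countermeasure carries an expenditure plus likelihood-reduction effects. The naive propagation rule and all names below are illustrative assumptions.

```python
# A sketch of an annotated risk graph with a deliberately naive propagation
# rule: the incoming frequency of a vertex is the sum over its predecessors
# of (possibly countermeasure-reduced source frequency) * (conditional factor
# on the leads-to relation).

from dataclasses import dataclass, field

@dataclass
class Vertex:
    name: str
    frequency: float      # e.g. occurrences per ten years
    consequence: float    # monetary loss per occurrence

@dataclass
class Countermeasure:
    name: str
    expenditure: float
    likelihood_reduction: dict = field(default_factory=dict)  # vertex -> factor in [0, 1]

@dataclass
class RiskGraph:
    vertices: dict        # name -> Vertex
    leads_to: dict        # (src, dst) -> conditional factor

    def incoming_frequency(self, dst, applied=()):
        total = 0.0
        for (src, d), factor in self.leads_to.items():
            if d != dst:
                continue
            f = self.vertices[src].frequency
            for cm in applied:
                f *= cm.likelihood_reduction.get(src, 1.0)
            total += f * factor
        return total

if __name__ == "__main__":
    g = RiskGraph(
        vertices={"v1": Vertex("v1", 20.0, 0.0), "v2": Vertex("v2", 5.0, 0.0)},
        leads_to={("v1", "incident"): 0.3, ("v2", "incident"): 0.5},
    )
    g.vertices["incident"] = Vertex("incident", 0.0, 5000.0)
    cm = Countermeasure("cm", expenditure=1000.0, likelihood_reduction={"v1": 0.5})
    print("without cm:", g.incoming_frequency("incident"))          # 8.5
    print("with cm   :", g.incoming_frequency("incident", [cm]))    # 5.5
```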
Detailing of Step 1 - Annotate Risk Model

This step annotates the input risk model with the information required for further analysis. There are four types of annotation, as follows. Countermeasures are represented as rectangles; in Fig. 3(b) there is one countermeasure, namely cm. An expenditure is expressed within square brackets following the countermeasure name (e in Fig. 3(b)); this is an estimate of the expense of ensuring the mitigation provided by the countermeasure, including the expense of implementation, maintenance, and so on. An effects relation, illustrated in Fig. 3(b), annotates the reduction effect (on likelihood and/or consequence) that a countermeasure has on the element of the risk model it mitigates. A dependency relation, illustrated in Fig. 3(c), annotates the countervailing effect among the countermeasures it connects.

Detailing of Step 2 - Countermeasure Analysis

The countermeasure analysis is conducted for every risk of the annotated risk model. The analysis enumerates all possible countermeasure combinations, called countermeasure alternatives (or alternatives for short), and evaluates the residual risk value (i.e. the residual frequency and consequence value) with respect to each alternative to determine the most effective one. The residual risk value is obtained by propagating the reduction effect along the risk model. Starting from the leftmost threat scenarios (i.e. scenarios that have only outgoing leads-to relations), the frequencies assigned to threat scenarios are propagated to the right using the formal calculus. The reader is referred to [START_REF] Tran | An approach to select cost-effective risk countermeasures exemplified in CORAS[END_REF] for the full formal calculus and for the soundness proofs. During the propagation, the frequencies assigned to leads-to relations, the reduction effects, and the effect dependencies are taken into account. Finally, the propagation stops at the rightmost threat scenarios (i.e. scenarios that have only incoming leads-to relations). Based on the results of the propagation, the residual risk value is computed. A Decision Diagram, as depicted in Fig. 4 for two different risks, is a directed graph used to visualize the outcome of a countermeasure analysis. A node in the diagram represents a risk state, which is a triplet of a likelihood, a consequence, and a countermeasure alternative for the risk being analyzed. The frequency and consequence are the X and Y coordinates, respectively, of the node. The countermeasure alternative is annotated on the path from the initial state S 0 (representing the situation where no countermeasure has yet been applied). Notice that we ignore all states whose residual risks are greater than those of S 0, since it is useless to implement such countermeasures.

Detailing of Step 3 - Synergy Analysis

The synergy analysis aims to recommend a cost-effective countermeasure alternative for mitigating all risks. It is based on the decision diagrams of the individual risks (generated in Step 2), the risk acceptance criteria, and the overall cost (OC) of each countermeasure alternative. The OC is calculated as follows:

OC(ca) = Σ r∈R rc(r) + Σ cm∈ca cost(cm)    (1)

Here, ca is a countermeasure alternative; R is the set of risks; rc() is a function that yields the loss (in monetary value) due to the risk taken as argument (based on its likelihood and consequence); cost() is a function that yields the expenditure of the countermeasure taken as argument. The synergy analysis is decomposed into the following three substeps:

Step 3A Identify countermeasure alternatives: Identify the set of countermeasure alternatives CA for which all risks are acceptable with respect to the risk acceptance criteria.
CA can be identified by exploiting the decision diagrams.

Step 3B Evaluate countermeasure alternatives: If there is no countermeasure alternative for which all risks fulfill the risk acceptance criteria (CA = ∅), do either of the following:
• identify new countermeasures and go to Step 1, or
• adjust the risk acceptance criteria and go to Step 3A.
Otherwise, if there is at least one such countermeasure alternative (CA ≠ ∅), calculate the overall cost of each ca ∈ CA.

Step 3C Select cost-effective countermeasure alternative: If there is at least one countermeasure alternative ca ∈ CA for which OC(ca) is acceptable (for the customer company in question), select the cheapest one and terminate the analysis. Otherwise, identify more (cheaper and/or more effective) countermeasures and go to Step 1.

The above procedure may of course be detailed further based on various heuristics. For example, in many situations, with respect to Step 3A, if we already know that a countermeasure alternative ca is contained in CA, then we do not have to consider any other countermeasure alternative ca′ such that ca ⊆ ca′. However, we do not go into these issues here.

Example

In the following we give a small example of the synergy analysis based on our eHealth assessment [START_REF] Tran | An approach to select cost-effective risk countermeasures exemplified in CORAS[END_REF]. The scenario is on remote patient monitoring, where one of the identified risks is loss of monitored data (LMD). Table 1 is input from Step 2, namely the result of the analysis of seven treatment alternatives given three identified treatments. The corresponding decision diagram is depicted in Fig. 4(a). The shaded area to the lower left represents the acceptable risk levels, whereas the upper right represents the unacceptable levels. Notice that while the treatment alternatives for LMD reduce only the consequence, some of the alternatives for loss of integrity of monitored data (LID) also reduce the frequency. The results of the synergy analysis of three risks are depicted in Table 2. Their respective treatment alternatives that yield acceptable risk levels are shown in the middle columns, whereas their combinations are shown in the first column. The last column shows the overall costs as calculated in Step 3. If also the costs are acceptable, the cheapest alternative should be selected.

Related Work

In risk management, the decision among different risk mitigation alternatives has been emphasized in many studies [START_REF]Risk Characterization of Microbiological Hazards in Food: Guidelines[END_REF][START_REF] Norman | Risk Analysis and Security Countermeasure Selection[END_REF][START_REF] Stoneburner | Risk Management Guide for Information Technology Systems[END_REF]. The guideline in [START_REF] Stoneburner | Risk Management Guide for Information Technology Systems[END_REF] proposes cost-benefit analysis to optimally allocate resources and implement cost-effective controls after identifying all possible countermeasures. This encompasses determining the impact of implementing (and not implementing) the mitigations, and their estimated costs. Another guideline [START_REF]Risk Characterization of Microbiological Hazards in Food: Guidelines[END_REF] provides a semi-quantitative risk assessment: the probability and impact of risks are put into categories which are assigned scores, and the differences between the total score for all risks before and after a proposed risk reduction strategy indicate the relative efficiency of the strategies and the effectiveness of their costs.
It also suggests that the economic costs for baseline risks should be evaluated. However, the proposed methods for conducting the evaluation have not been designed to assess cost of treatments, but rather cost of risks. Norman [START_REF] Norman | Risk Analysis and Security Countermeasure Selection[END_REF] advocates the use of Decision Matrix to agree on countermeasure alternative. A Decision Matrix is a simple spreadsheet consisting of countermeasures and their mitigated risks. The approach, however, is not clearly defined, and the spreadsheets are complicated to implement and follow. Meanwhile, our proposal is graphical and backed up with a formal definition and reasoning. Butler [START_REF] Butler | Security attribute evaluation method: a cost-benefit approach[END_REF] proposes the Security Attribute Evaluation Method (SAEM) to evaluate alternative security designs in four steps: benefit assessment, threat index evaluation, coverage assessment, and cost analysis. This approach, however, focuses mostly on the consequence of risks rather than cost of countermeasures, whereas our approach captures both. Chapman and Leng [START_REF] Chapman | Cost-effective responses to terrorist risks in constructed facilities[END_REF] describe a decision methodology to measure the economic performance of risk mitigation alternatives. It focuses on the costdifference aspect, but does not consider the benefit-difference (i.e. level of risks reduced) among alternatives. Houmb et al. [START_REF] Houmb | SecInvest : Balancing security needs with financial and business constraints[END_REF] introduce SecInvest, a security investment support framework which derives a security solution fitness score to compare alternatives and decide whether to invest or to take the associated risk. SecInvest relies on a trade-off analysis which employs existing risk assessment techniques. SecInvest ranks alternatives with respect to their cost and effect, trade-off parameters, and investment opportunities. However, this approach does not provide a systematic way to assess the effects of alternatives on risks, and does not take into account the dependency among countermeasures in an alternative. Beresnevichiene et al. [START_REF] Beresnevichiene | Decision support for systems security investment[END_REF] propose a methodology incorporating a multi-attribute utility evaluation and mathematical system modeling to assist decision makers in the investment on security measures. It can be employed in existing risk assessment methods, including ours, to evaluate the residual risk. Conclusion We have presented an approach to select a cost-effective countermeasure alternative to mitigate risks. The approach requires input in the form of risk models represented as risk graphs. The approach analyses risk countermeasures with respect to different aspects such as the mitigating effect, how countermeasures affect others, and how much countermeasures cost. We have developed a formal calculus extending the existing calculus for risk graphs. The extended calculus can be used to propagate likelihoods and consequences along risk graphs, thereby facilitating a quantitative countermeasure analysis on individual risks, and a synergy analysis on all the risks. The outcome is a list of countermeasure alternatives quantitatively ranked according to the their overall cost. These alternatives are represented not only in tabular format, but also graphically in the form of decision diagrams. 
The approach is generic in the sense that it can be instantiated by several existing risk assessment techniques.

Fig. 1. Three-steps approach
Fig. 3. A risk graph (a) and its extended annotations: Effect relation (b), and Dependency relation (c)
Fig. 4. Decision diagrams of two risks in the eHealth scenario

Table 1. Analysis for the risk Loss of monitored data. Each treatment alternative S is shown in the first column (Risk State), followed by how many of the three treatments (Ensure sufficient QoS from network provider; Implement redundant network connection; Implement redundant handheld) it applies. The Frequency column is the number of occurrences in ten years. Both the Frequency and Consequence columns are valued after considering the treatments.
Risk State | Treatments applied (of 3) | Frequency | Consequence
S0 | 0 | 26.4 | 5000
S1 | 1 | 21.36 | 5000
S2 | 1 | 12.96 | 5000
S3 | 2 | 7.92 | 5000
S4 | 1 | 12.96 | 5000
S5 | 2 | 7.92 | 5000
S6 | 2 | 10.08 | 5000
S7 | 3 | 5.04 | 5000

Table 2. Results from synergy analysis of three risks. The LID, LMD and DAS columns give the individual-risk treatment alternative selected for each risk.
Combined treatment alternative | LID | LMD | DAS | Overall Cost
{UBA,SCO,IRH,IRN,USW} | S3 | S3 | S3 | 101740
{UBA,SCO,IRH,IRN,EQS,USW} | S3 | S7 | S3 | 102340
{UBA,IRH,IRN,USW} | S2 | S3 | S3 | 104500
{UBA,IRH,IRN,EQS,USW} | S2 | S7 | S3 | 105100
{UBA,SCO,IRH,IRN} | S3 | S3 | S2 | 108740
{UBA,SCO,IRH,IRN,EQS} | S3 | S7 | S2 | 109340
{UBA,IRH,IRN} | S2 | S3 | S2 | 111500
{UBA,IRH,IRN,EQS} | S2 | S7 | S2 | 112100

Acknowledgments: This work has received funding from the European Commission via the NESSoS NoE (256980) and the RASEN project (316853), and from the Research Council of Norway via the DIAMONDS project (201579/S10).
19,306
[ "1004152", "1004153", "1004154" ]
[ "304024", "86695", "86695", "50791" ]
01490712
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490712/file/978-3-642-39256-6_20_Chapter.pdf
Meixing Le Krishna Kant email: [email protected] Sushil Jajodia email: [email protected] Rule Enforcement with Third Parties in Secure Cooperative Data Access Keywords: Authorization rule, Trusted third party, Join Path In this paper, we consider the scenario where a set of parties need to cooperate with one another. To safely exchange the information, a set of authorization rules is given to the parties. In some cases, a trusted third party is required to perform the expected operations. Since interactions with the third party can be expensive and there maybe risk of data exposure/misuse, it is important to minimize their use. We formulate the minimization problem and show the problem is in N P -hard. We then propose a greedy algorithm to find close to optimal solutions. Introduction In many cases, enterprises need to interact with one another cooperatively to provide rich services. For instance, an e-commerce company needs to obtain data from a shipping company to know the status and cost of a shipping order, and the shipping company requires the order information from the e-commerce company. Furthermore, the e-commerce company may have to exchange data with warehouses and suppliers to get the information about the products. In such an environment, information needs to be exchanged in a controlled way so that the desired business requirements can be met but other private information is never leaked. For example, a shipping company has all the information about its customers. However, only the information about the customers that deal with the e-commerce company in question should be visible to the e-commerce company. The information about the remaining customers should not be released to the e-commerce company. In addition, the data from shipping company may include other information such as which employee is delivering the order, and such information should not be released to the e-commerce company. Therefore, we need a mechanism to define the data access privileges in the cooperative data access environment. We assume that each enterprise manages its own data and all data is stored in a standard relational form such as BCNF, but it is possible to extend the model to work with other data forms. The data access privileges of the enterprises are regulated by a set of authorization rules. Each authorization rule is defined either on the original tables belonging to an enterprise or over the lossless joins of the data from several different parties. Using join operations, an authorization rule only releases the matched information from the parties. For instance, if the e-commerce company can only access the join result of its data and the shipping company's data, then only the tuples about the shipping orders from the e-commerce company can be visible to the e-commerce company. In addition, the attributes such as "delivery person" are never released to the e-commerce company, so suitable projection operations are applied on the join results in authorization rules to further restrict the access privileges. Hence, the requirement of selective data sharing can be achieved. Selection operations are not considered in the authorization rules. Under such a scenario, an enterprise may be given an authorization rule on the join result of several relational tables. To obtain the join result, it is required to have one party that has the privileges to access all the basic relations and perform the required join operations. 
However, due to the access restrictions laid down by the authorization rules, it is possible that no party is capable of receiving all required data. Therefore, we may have to introduce a trusted third party to perform join operations. Third parties may be expensive to use and the data given to them could be at greater risk of exposure than the data maintained by original parties. In this paper, we focus on the problem of using third parties minimally in order to deliver the information regulated by the given authorization rules. We model the cost of using third party as the amount of data being transferred to the third party, and prove that finding the minimal amount of data to implement a given rule is N P -hard. Therefore, we propose efficient greedy algorithm and evaluate its performance against brute force algorithm. The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 defines the problem and introduces some concepts. Section 4 discusses minimizing the cost of using third parties. Finally, Section 5 concludes the discussion. Related Work In previous works, researchers proposed models [START_REF] Capitani Di Vimercati | Controlled information sharing in collaborative distributed query processing[END_REF] for controlling the cooperative data release. There is also a mechanism [START_REF] Le | Rule configuration checking in secure cooperative data access[END_REF] to check if an authorization rule can be enforced among cooperative parties. In addition, many classical works discuss query processing in centralized and distributed systems [START_REF] Bernstein | Query processing in a system for distributed databases (SDD-1)[END_REF][START_REF] Kossmann | The state of the art in distributed query processing[END_REF][START_REF] Chaudhuri | An overview of query optimization in relational systems[END_REF]. However, these works do not deal with constraints from the data owners, which make our problem quite different. There are works such as Sovereign joins [START_REF] Agrawal | Sovereign joins[END_REF] to provide third party join services, we can think this as one possible third party service model in our work. Such a service receives encrypted relations from the participating data providers, and sends the encrypted results to the recipients. Because of the risks associated with third parties, secure multiparty computation (SMC) mechanisms have been developed that ensures no party needs to know about the information of other parties [START_REF] Kissner | Privacy-preserving set operations[END_REF][START_REF] Mishra | A glance at secure multiparty computation for privacy preserving data mining[END_REF][START_REF] Chow | Two-party computation model for privacy-preserving queries over distributed databases[END_REF]. However, the generic solution of a SMC problem can be very complicated depending on the input data and does not scale in practice. Therefore, we consider using the third party to implement the rules. Problem definition and concepts We assume the possible join schema is given and all joins are lossless so that a join attribute is always a key attribute of some relations, and only select-project-join queries are considered. An authorization rule denoted as r t is a triple [A t , J t , P t ], where J t is called the join path of the rule which indicates the join over the relational tables, A t is the authorized attribute set which is the authorized attributes on the join path, and P t is the party authorized to access the data. 
For instance, an example rule could be (R.K, R.X, S.Y ), (R R.K S) → P t , where R.K is the key attribute of both R and S, and join path is R S. We assume that a trusted third party (T P ) is not among the existing cooperative parties and can receive information from any cooperative party. We assume that the T P always performs required operations honestly, and does not leak information to any other party. In our model, we assume the trusted third party works as a service. That is, each time we want to enforce a rule, we need to send all relevant information to the third party, and the third party is only responsible for returning the join results. After that, the third party does not retain any information about the completed join requests. We say an authorization rule can be enforced only if there is a way to obtain all the information regulated by the rule. With the existence of a third party, we can always enforce a rule by sending relevant information from cooperative parties to T P . We aim to minimize the amount of data to be sent to the third party. To find the minimal amount of data to be sent, we can just select rules from the given authorization rules. It is because each rule defines a relational table and we can quantify the amount of information using the data in the tables. We say that a rule is Relevant to another if the join path of a rule contains a proper subset of relations of the join path of the other rule. All the rules being selected must be relevant to the target rule denoted as r t , which is the rule to be enforced. If a relevant rule of r t is not relevant to any other relevant rules of r t with longer join paths on the same party, we call it a Candidate Rule. We only choose from candidate rules to decide the data that needs to be sent to the T P . Minimizing cost In this section, we consider the problem of choosing the proper candidate rules to minimize the amount of information sent to the third party. In our cost model, the amount of information is quantified by sum of the number of attributes picked from each rule multiplied by the number of tuples in that selected rule. Thus, we want to minimize Cost = k i=1 π(r i ) * w(r i ), where r i is a selected rule, k is the number of selected rules, and π(r i ) is the number of attributes selected to be sent, and w(r i ) is the number of tuples in r i . Rules with same number of tuples We first assume the candidate rules have the same w(r i ) value. To find the candidate rules that can provide enough information to enforce r t , we map each attribute in r t to only one candidate rule so that all of these attributes can be covered. Once we get such a mapping, we have one solution including the selected rules and projections on desired attributes. Among these solutions, we want the minimal cost solution according to our model. Since we assume all the candidate rules have the same number of tuples, it seems that the total cost of each candidate solution should always be the same. However, it is not true because the join attributes appearing in different relations are merged into one attribute in the join results. We can consider the example in Figure 1. The boxes in the figure show the attribute set of the rules, and the join paths and rule numbers are indicated above the boxes. There are four cooperating parties indicated by P i and one T P , and the three basic relations are joining over the same key attributes R.K. 
Among the four candidate rules, if we select r2 and r3 to retrieve the attributes R.X and S.Y (non-key attributes), we need to send R.K and S.K, their join attributes, to the third party as well. In contrast, if we choose r1, then we only need to send 3 attributes, since R.K and S.K are merged into one attribute in r1. Thus, choosing a candidate rule with a longer join path may reduce the number of attributes actually transferred. Fewer rules mean fewer overlapping join attributes to be sent (e.g., R.K in r1 and T.K in r4 are overlapping join attributes). In addition, selecting fewer rules results in fewer join operations performed at the third party. Since we assume the numbers of tuples in the candidate rules are the same, the problem becomes identifying the minimal number of candidate rules that can be composed to cover the target attribute set.

Theorem 1. Identifying the minimal number of candidate rules that cover the target attribute set is NP-hard, since the unweighted set covering problem can be reduced to it.

Proof. Consider a set of elements U = {A1, A2, ..., An} (called the universe), and a set of subsets S = {S1, S2, ..., Sm} where each Si is a set of elements from U. The unweighted set covering problem is to find the minimal number of Si so that all the elements in U are covered. We can turn it into our rule selection problem. For this we start with the attribute set {A0, A1, A2, ..., An}, where A0 is the key attribute of some relation R and the Ai's are non-key attributes of R. For each Si ∈ S, we construct a candidate rule ri on R with the attribute set Si ∪ {A0} and assign it to a separate cooperative party. Therefore, if we could find the minimal set of rules to enforce some target rule rt in polynomial time, the set covering problem could also be solved in polynomial time.

Fig. 1. An example of choosing candidate rules: the target rule rt = {R.K, R.X, S.K, S.Y, T.K, T.Z} over the join path R ⋈ S ⋈ T (joining on R.K) is enforced at the trusted third party, while the candidate rules are r1 = {R.K, R.X, S.Y} over R ⋈ S at P1, r2 = {R.K, R.X} over R at P2, r3 = {S.K, S.Y} over S at P3, and r4 = {T.K, T.Z} over T at P4.

Rules with different numbers of tuples

In general, the numbers of tuples in the relations/join paths are different, and they depend on the length of the join paths and the join selectivity among the different relations. Join selectivity [START_REF] Kossmann | The state of the art in distributed query processing[END_REF] is the ratio of tuples that agree on the join attributes between different relations, and it can be estimated using the historical and statistical data of these relations. In classical query optimization, a large number of works assume such values are known when generating query plans, and we also assume that this is the case. Therefore, we can assign each candidate rule r i a cost cst i = w(J i ) * π(r i ), where π(r i ) is the per-tuple cost of choosing rule r i , and w(J i ) is the number of tuples in join path J i . The problem is similar to (but not identical to) the weighted set covering problem. In our problem, once some attributes are covered by previously chosen rules, the subsequently chosen rules should project out these attributes so as to reduce cost. Therefore, our cost function is given in equation (1) below, where S i is the attribute set of rule r i and U is the target attribute set. Basically, the equation says that if the key attribute of a rule has already been covered, then one more attribute is added to the cost of choosing this rule.
cost(C) = Σ i=1..k w(S i) · π(S i), where
π(S i) = |S i ∩ (U \ (S 1 ∪ ... ∪ S i−1))| if key(S i) ∉ S 1 ∪ ... ∪ S i−1, and
π(S i) = |S i ∩ (U \ (S 1 ∪ ... ∪ S i−1))| + 1 if key(S i) ∈ S 1 ∪ ... ∪ S i−1.    (1)

Corollary 1. Finding the minimal amount of information sent to the third party to enforce a target rule is NP-hard.

Proof. Based on Theorem 1: if we had a polynomial algorithm to find the minimal amount of information with rules of different costs, we could assign the same cost to each candidate rule and thereby solve the unweighted version of the problem.

In the weighted set covering problem, the best known greedy algorithm finds the most effective subset by calculating the number of missing elements it contributes divided by the cost of the subset. In our case, we also want to select the attributes with the least cost from the available subsets. Similar to the weighted set covering algorithm, which selects the subset S i with the minimal ratio w(S i) / |S i \ U|, we select the rule with the minimal value of w(S i) · π(S i) / |S i ∩ (U \ (S 1 ∪ ... ∪ S i−1))|, where π(S i) is defined in equation (1). In our problem, each additional selected rule means the third party needs to perform one more join operation, and possibly one more join attribute needs to be transferred to the third party. Therefore, when selecting a candidate rule, we examine the number of attributes this rule can provide and the cost of retrieving these attributes. In the second case of (1), the cost of one extra attribute is added. However, if the selected rule can provide many attributes to the uncovered set, the cost of this additional attribute can be amortized. This makes the algorithm prefer rules providing more attributes, and results in fewer selected rules, which is consistent with our goal. We present our greedy algorithm in Algorithm 1.

Algorithm 1: Selecting Minimal Relevant Data For Third Party
Require: The set R of candidate rules of rt on cooperative parties
Ensure: Find minimal amount of data being sent to TP to enforce rt
1: for each candidate rule r i ∈ R do
2:   Do projection on r i according to the attributes in rt
3:   Assign r i its estimated number of tuples t i
4: The set of selected rules C ← ∅
5: Target attribute set U ← merged attribute set of rt
6: while U ≠ ∅ do
7:   Find a rule r i ∈ R that minimizes α = w(S i) · π(S i) / |S i ∩ U|
8:   R ← R \ r i
9:   for each attribute A i ∈ (r i ∩ U) do
10:     cost(A i ) ← w(S i )
11:     U ← U \ {A i }

We evaluated the effectiveness of our greedy algorithm against a brute-force algorithm via preliminary simulations. In this simulation evaluation, we use a join schema with 8 parties. The number of tuples in a rule is defined as a function of the join path length, namely w(J i ) = 1024 / 2^(length(J i ) − 1). In other words, we assume that as the join path length increases by one, the number of tuples in the result decreases by half. We tested with randomly generated target rules with join path lengths of 4 and 7. Figure 2 shows the comparison between the two algorithms. In fact, the two algorithms generate almost the same results. In Figure 2, the legend "BruteForce4" indicates that the target rule has a join path length of 4 and the brute-force algorithm is used. Among these solutions, in less than 2% of the cases the two algorithms produce different answers. In addition, the maximal difference between them is just 5%. The results also indicate that the join path length of the target rule affects the costs, but the two algorithms give similar solutions independent of the join path length.

Conclusions and future work

In this paper, we considered a set of authorization rules for cooperative data access among different parties. A trusted third party may be required to perform the expected join operations so as to enforce a given rule. We discussed the minimal amount of data to be sent to the third party. As the problem is NP-hard, we proposed a greedy algorithm to generate solutions that are close to the optimal ones. In the future, we will look into the problem of how to combine the third parties with the existing parties to generate optimal safe query plans.
18,039
[ "1004157", "1004158", "978046" ]
[ "452410", "452410", "452410" ]
01490713
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490713/file/978-3-642-39256-6_21_Chapter.pdf
Ronald Petrlic email: [email protected] Stephan Sekula email: [email protected] Unlinkable content playbacks in a multiparty DRM system We present a solution to the problem of privacy invasion in a multiparty digital rights management scheme. (Roaming) users buy content licenses from a content provider and execute it at any nearby content distributor. Our approach, which does not need any trusted third party-in contrast to most related work on privacy-preserving DRM-is based on a re-encryption scheme that runs on any mobile Android device. Only a minor security-critical part needs to be performed on the device's smartcard which could, for instance, be a SIM card. Introduction Mobile users access digital content provided in the cloud from anywhere in the world. Music streaming services like Spotify enjoy popularity among users. The lack of bulky storage on mobile devices is compensated for by such services by streaming the content to the users's devices. Content is downloaded on demand and can be used only during playback. Thus, paying users are able to access huge amounts of content. There exist certain price models that allow the playback for a certain number of times, until a specific day (e.g., movie rentals), etc. In such a scenario, we have content providers (CPs) that sell licenses to users and there are content distributors (CDs) that provide the content. Users can access content from CDs that are closest or provide best service at the moment. This bears advantages for roaming users as they can choose local distributors. Such scenarios are called multiparty DRM systems in the literature. A drawback of today's DRM systems is that CPs/CDs can build content usage profiles of their users as they learn which user plays back content at a certain time, etc. Here we contribute with this paper. We suggest a privacy-preserving multiparty DRM system. In such a system, users anonymously buy content and anonymously playback the content. Moreover, neither CPs nor CDs can link content playbacks to each other and thus cannot build usage profiles under a pseudonym-as the past has shown that profiles under a pseudonym, assumed to be unrelatable to users, can be related to users given external information and thus, inverting user privacy again [START_REF] Narayanan | Robust de-anonymization of large sparse datasets[END_REF]. One major advantage of our approach compared to related work on privacy-preserving DRM is that we do not need a trusted third party (TTP) that checks licenses. In [START_REF] Mishra | Privacy rights management in multiparty multilevel DRM system[END_REF] a scenario where a content owner provides its content to users via local distributors is presented-similar to our scenario. Users buy licenses for content from a license server (trusted third party). Once a license is bought, the user gets in possession of the decryption key which allows him to access the content as often as desired. Differentiated license models are not intended-however, if license enforcement additionally took place on the client-side, such models could be implemented. As content download and license buying are done anonymously, none of the parties can build user profiles. [START_REF] Win | A privacy preserving content distribution mechanism for DRM without trusted third parties[END_REF] presents a privacy-preserving DRM scheme for multiparty scenarios without a TTP. A user anonymously requests a token set from the content owner that allows anonymous purchase of content licenses from content providers (CPs). 
A drawback is that CPs are able to build usage profiles of content executions under a pseudonym. [START_REF] Petrlic | Privacy-preserving DRM for cloud computing[END_REF] presents a DRM scenario that allows users to anonymously buy content from any CP and execute it at any computing center within the cloud. The users' permission to execute the content is checked before every single execution. Their solution is resistant against profile building. The authors suggest employing a re-encryption scheme based on secret sharing and homomorphic encryption to achieve unlinkability of executions. The approach is extended in [START_REF] Petrlic | Proxy re-encryption in a privacy-preserving cloud computing DRM scheme[END_REF] by employing an adapted version of proxy re-encryption [START_REF] Ateniese | Improved proxy re-encryption schemes with applications to secure distributed storage[END_REF]. The scheme makes explicit use of a service provider as TTP. The approach towards privacy-preserving DRM in [START_REF] Joshi | Towards practical privacy-preserving digital rights management for cloud computing[END_REF] also requires a TTP for license checking before execution. It makes use of a number of cryptographic primitives such as proxy re-encryption, ring signatures and an anonymous recipient scheme to provide unlinkability of executions. System Model Our multiparty DRM scenario involves CPs, CDs, and users. The focus is on mobile users with different content access devices (CADs) accessing content. As devices have different hardware trust anchors-e.g., smartphones are equipped with SIM cards, tablet computers have trusted platform modules (TPMs), etc.we subsume those trust anchors under the term smartcards in the following. 1The CP takes the role of, e.g., a film studio or music label that produces content. Users interact with the CP by buying a license that allows playback of the content-under certain terms that are mediated. The user's smartcard is used to check whether the user is still allowed to access the content. Then, a nearby CD is contacted and the CD streams the content to the user. The CD can have contracts with different CPs, which allows the user to access content by different CPs from a single source-as it is the case with state-of-the-art streaming servers as well. The CD might get paid for providing its services by the CPs (or even the users). We do not cover this aspect in the paper at hand. We assume that CPs and CDs are honest-but-curious, i.e., they follow the protocol but try to find out as much as possible to track users. Users are assumed as active adversaries, i.e. trying to break the protocol to execute content without a license. Our protocol is not based on any TTP checking licenses. DRM requirements: We identify the CP, CD, and the user as stakeholders. The requirements are: Content provider: Req. I: Support for different license models, Req. II: Protection of the content (confidentiality), and Req. III: Enforcement of licenses. User: Req. IV: Profile building (under a pseudonym) must not be possible for any involved party. To achieve Req. IV, the the following aspects must be met: Anonymous content (license) buying towards content provider, and anonymous content execution towards content distributor, Unlinkability of content (license) purchases towards the content provider, and Unlinkability of content executions towards the content distributor. 
Privacy-preserving multiparty DRM system System Initialization: Let G 1 and G 2 be cyclic groups with the same prime order q, the security parameter n = ||q||, <g> = G 1 , and Z = e(g, g) ∈ G 2 . Users are equipped with smartcards (SCs) that are programmed and shipped by trustworthy SC providers that install a private key sk sc and the corresponding digital certificate cert sc on every smartcard. The private key and certificate are shared by all SCs since they are used for anonymous authentication towards the CP during the process of purchasing content. Authentication of SCs is required so that only legitimate SCs can be used to purchase content, however, CPs must not be able to recognize SCs. Moreover, the current time of production of the SC is set as the SC's timestamp ts. Content offered by the CP is encrypted using a symmetric encryption algorithm such as AES [START_REF]Advanced Encryption Standard (AES)[END_REF] and a separate content key ck i for each content i. The user employs an anonymous payment scheme with his/her bank to get supplied with payment tokens pt. Content Purchase: We assume that the connection between user and CP is anonymized (e.g., by using an anonymization network such as Tor [START_REF] Dingledine | TOR: the second-generation onion router[END_REF]). The user initiates the content purchase via his/her content access device (CAD) by authenticating towards the SC with his/her PIN and initiating the TLS [START_REF]The Transport Layer Security (TLS) Protocol[END_REF] handshake with the CP. The SC executes the KG algorithm as in [START_REF] Ateniese | Improved proxy re-encryption schemes with applications to secure distributed storage[END_REF] to generate a temporary key pair 2 (pk-tmp sc = (Z a1 , g a2 ), sk-tmp sc = (a 1 , a 2 )), where a 1 , a 2 ∈ Z q are chosen randomly. During the TLS handshake, CP challenges CAD's SC with a nonce r and asks for SC's certificate. CAD forwards r to SC which signs r and pk-tmp sc with SC's private key sk sc . The signature and SC's certificate cert sc , as well as pk-tmp sc are forwarded to CAD and CAD forwards them, together with the content-id i of the content i to be bought, as well as the payment token pt to pay for the license. From this moment on, the communication between CAD and CP is authenticated and encrypted via TLS. CP verifies the response by checking the signature. This way, CAD's SC has anonymously authenticated towards CP, meaning CP knows that pk-tmp sc is from an authentic SC and the corresponding sk-tmp sc does not leave the SC. CP creates the license for content i. This license includes a license identifier id, a timestamp ts, the content-id i , the license terms, and CP's certificate cert cp . Note that the license terms depend on the license model. The license is encrypted under SC's pk-tmp sc . Moreover, the content key ck i for content i is encrypted under pk-tmp sc as well. The license, the signature of the license, the content-id i and the encrypted content key (ck i ) pk-tmpsc are forwarded to CAD. CAD stores (ck i ) pk-tmpsc and forwards the license and the signature to SC. The SC verifies the license's signature and decrypts the license with sk-tmp sc . Then it checks whether the id was not used before and whether ts is newer than the current ts on the SC-both to prevent replay attacks. The SC's ts is then set to the newer ts of the license. 3 Finally, the license is stored under the content-id i on the SC. 
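As an illustration of the smartcard-side bookkeeping described above (and reused during content execution in the next step), here is a minimal Python sketch. It is our own simplification: signature and certificate verification are elided, the license term shown is a simple "execute at most n times" counter, and all class and field names are made up.

```python
# Sketch of SC-side replay protection (identifier unused, timestamp strictly
# newer, SC timestamp advanced) plus storage and enforcement of a simple
# per-content license term.

from dataclasses import dataclass

@dataclass
class License:
    license_id: str
    ts: int                 # issuance timestamp from the CP
    content_id: str
    remaining_plays: int    # example license term: execute at most n times

class Smartcard:
    def __init__(self, initial_ts=0):
        self.current_ts = initial_ts
        self.seen_ids = set()
        self.licenses = {}            # content_id -> License

    def _fresh(self, identifier, ts):
        """Replay protection: identifier unused and timestamp strictly newer."""
        if identifier in self.seen_ids or ts <= self.current_ts:
            return False
        self.seen_ids.add(identifier)
        self.current_ts = ts          # the SC timestamp is advanced
        return True

    def store_license(self, lic: License) -> bool:
        if not self._fresh(lic.license_id, lic.ts):
            return False              # replayed or stale purchase response
        self.licenses[lic.content_id] = lic
        return True

    def authorize_playback(self, content_id, cd_cert_id, cd_cert_ts) -> bool:
        lic = self.licenses.get(content_id)
        if lic is None or not self._fresh(cd_cert_id, cd_cert_ts):
            return False
        if lic.remaining_plays <= 0:
            return False              # license terms exhausted
        lic.remaining_plays -= 1      # update terms before releasing anything
        return True                   # here the SC would emit the re-encryption key

if __name__ == "__main__":
    sc = Smartcard()
    sc.store_license(License("lic-1", ts=100, content_id="movie-42", remaining_plays=2))
    print(sc.authorize_playback("movie-42", "cert-7", 101))   # True
    print(sc.authorize_playback("movie-42", "cert-7", 101))   # False (replayed certificate)
    print(sc.authorize_playback("movie-42", "cert-8", 102))   # True
    print(sc.authorize_playback("movie-42", "cert-9", 103))   # False (plays exhausted)
```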
Content Execution: To playback the purchased content, the user first selects a CD of his choice (this choice could be automated as well, e.g., dependent of the region the user currently is in). We assume that the connection between user and CD is anonymized (e.g., by using Tor [START_REF] Dingledine | TOR: the second-generation onion router[END_REF]). The CAD establishes a TLS connection [START_REF]The Transport Layer Security (TLS) Protocol[END_REF] with CD-CD authenticates towards CAD with its certificate. CAD afterwards requests a new certificate from CD. CD creates a new key-pair using KG as in [START_REF] Ateniese | Improved proxy re-encryption schemes with applications to secure distributed storage[END_REF]: (pk-j cd = (Z a1 , g a2 ), sk-j cd = (a 1 , a 2 )), where a 1 , a 2 ∈ Z q are chosen randomly and j denotes the j th request to the CD. The pk-j cd is included in the newly generated certificate cert-j cd , as well as a unique certificate id and the current timestamp ts. CD self-signs the certificate 4 . The certificate is forwarded to the CAD. The user authenticates towards the SC with his PIN entered on the CAD and the SC then forwards the list of available contentids to CAD. The user chooses the content-id i to be executed and forwards it, together with cert-j cd to SC. SC checks whether the signature of cert-j cd is valid, whether CD was certified by a known CA, whether the certificate id was not used before and whether the ts is newer than the current ts on the SC. If these tests pass, the new ts from the certificate is set on the SC. It is important to note, that SC checks whether the certificate really belongs to a CD. If this was not the case, the user might be able to launch an attack by including a self-signed certificate that he has generated himself. Hence, if SC would not verify that the certificate belonged to a CD, the user might acquire a re-encryption key from SC that allowed him to decrypt the content key, granting him unlimited access to the content. Furthermore, SC checks whether the license terms still allow the content to be played back. If this is the case, the terms are updated. Then, SC generates the re-encryption key rk pk-tmpsc→pk-j cd by using the RG algorithm as in [START_REF] Ateniese | Improved proxy re-encryption schemes with applications to secure distributed storage[END_REF], taking as input CD's public key g a2 ∈ pk-j cd , and its own private key a 1 ∈ sk-tmp sc (as created during the content purchase). The re-encryption key is then forwarded to CAD. CAD re-encrypts the encrypted content key (ck i ) pk-tmpsc by employing the R algorithm as in [START_REF] Ateniese | Improved proxy re-encryption schemes with applications to secure distributed storage[END_REF] with rk pk-tmpsc→pk-j cd as input to retrieve (ck i ) pk-j cd -i.e., the encrypted content key under CD's public key. The re-encrypted content key is then forwarded to CD and CD decrypts the ciphertext using the D algorithm as in [START_REF] Ateniese | Improved proxy re-encryption schemes with applications to secure distributed storage[END_REF] with its private key a 2 ∈ sk-j cd as input to retrieve ck i . The content-retrieved from CP-can now be decrypted by CD using ck i and the symmetric scheme as employed during system initialization. Eventually, the content is provided, for example, streamed, to the user's CAD. Authorization Categories [START_REF] Perlman | Privacy-preserving DRM[END_REF]: There might be content that should not be accessible to everybody, such as X-rated content. 
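Before turning to authorization categories and the evaluation, the algebra behind KG, RG, R and D in the purchase and execution steps can be illustrated with a toy model that works "in the exponent": group elements g^x and Z^x are represented only by their exponents modulo a prime q, and the pairing e(g^x, g^y) = Z^(xy) becomes a multiplication of exponents. This is a didactic sketch of an AFGH-style re-encryption flow under our own simplifications; it offers no cryptographic security and is not the jPBC-based implementation evaluated below.

```python
import secrets

q = 2**127 - 1          # a Mersenne prime standing in for the prime group order (toy choice)

def rand_exp() -> int:
    return secrets.randbelow(q - 1) + 1

def keygen():
    """KG: sk = (a1, a2), pk = (Z^a1, g^a2); in this toy model both are kept as bare exponents."""
    a1, a2 = rand_exp(), rand_exp()
    return (a1, a2), (a1, a2)            # (sk, pk)

def encrypt(m: int, pk_z_a1: int):
    """CP encrypts the content key (message exponent m) under the SC's component Z^a1."""
    k = rand_exp()
    return (k, (m + pk_z_a1 * k) % q)    # stands for (g^k, M * Z^{a1 k})

def rekey(sk_a1: int, pk_g_a2: int) -> int:
    """RG: the SC combines its a1 with the CD's g^{a2'} to form the re-encryption key g^{a1 a2'}."""
    return (sk_a1 * pk_g_a2) % q

def reencrypt(ct, rk: int):
    """R: the pairing e(g^k, g^{a1 a2'}) = Z^{a1 a2' k}; the second component is untouched."""
    c1, c2 = ct
    return ((rk * c1) % q, c2)

def decrypt(ct, sk_a2: int) -> int:
    """D: the CD strips its a2' with a modular inverse and recovers the message exponent."""
    c1, c2 = ct
    z_a1k = (c1 * pow(sk_a2, -1, q)) % q     # (Z^{a1 a2' k})^{1/a2'} = Z^{a1 k}
    return (c2 - z_a1k) % q

# Content purchase: CP encrypts the content key under the SC's temporary key pk-tmp_sc.
sc_sk, sc_pk = keygen()
content_key = rand_exp()
ct_sc = encrypt(content_key, sc_pk[0])

# Content execution: the SC issues a re-encryption key towards the CD's fresh key pair pk-j_cd.
cd_sk, cd_pk = keygen()
rk = rekey(sc_sk[0], cd_pk[1])
ct_cd = reencrypt(ct_sc, rk)
assert decrypt(ct_cd, cd_sk[1]) == content_key
```

In the real scheme the exponents never leave the SC and the CD, and the operations are carried out in pairing-friendly groups, for example via the jPBC library mentioned in the evaluation.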
Before initially obtaining a SC, the user provides certain information to the SC provider (e.g., his passport). The SC provider will then securely5 store the required information on the user's SC. If we assume that the user's SC now contains information like the user's date of birth or home country, it can check whether or not the user is allowed to access content. This means that if the user requests access to, for instance, X-rated content, the SC checks the user's date of birth and according to this information either allows or denies access to the queried content. Evaluation and Discussion Performance Analysis: The user's CAD performs the re-encryption of the content key. CP and SC are involved in a challenge-response protocol for authentication of SC which is not too expensive. Further, CP has to encrypt the content key using SC's public key and the content using the content key. The latter is a symmetric encryption executed only once per content. Additionally, CD decrypts the re-encrypted content key as well as the content obtained from CP. The required generation of keys is not expensive. We show that current smartphones are easily capable of executing the required tasks by implementing a demo application on an Android smartphone. We have implemented the reencryption using the jPBC (Java Pairing Based Cryptography) library 6 . The app that has been developed re-encrypts 128 Bytes of data-the length of a symmetric key to be encrypted-in 302 ms on a Samsung Galaxy Nexus (2 × 1.5 GHz) running Android 4.2. Due to a lack of a proper SC7 , we could not implement the re-encryption key generation algorithm RG as in [START_REF] Ateniese | Improved proxy re-encryption schemes with applications to secure distributed storage[END_REF]. Thus, to show the practicability of the implementation, we must refer to [START_REF] Ullmann | Password authenticated key agreement for contactless smart cards[END_REF]. The authors have implemented elliptic curve scalar point multiplications and additions for a smartcard in C and Assembler-which are needed in our approach as well. As the authors conclude, the standard Javacard API (version 2.2.2) cannot be used as the available EC Diffie-Hellman key exchange only provides the hashed version of the key derivation function. [START_REF] Ullmann | Password authenticated key agreement for contactless smart cards[END_REF] However, we need the immediate result of the key derivation function, i.e., the result of the EC point multiplication. Our own implementation of the EC point multiplication on the smartcard's CPU did not yield practicable results-as the efficient cryptographic co-processors could not be utilized due to proprietary code. Evaluation of Requirements: CP is able to provide different kinds of rights to users for content playback. Our system allows for the most popular models like flatrate, execute at most n-times, execute until a certain date, etc., and thus we meet Req. I. CP distributes its content only in encrypted form. Thus, none of the parties not in possession of the content decryption key is able to access the content and our protocol meets Req. II. Smartcards, as trusted devices, are used in our protocol to enforce licenses. Thus, if the SC's check of a license fails, the re-encryption key is not generated and the user is not able to execute the content. A replay attack with an "old" CD certificate fails as the SC does not accept the ts-since it is older than the current one stored on the SC. 
The SC's property of tamper-resistance is required since we assumed users to be active adversaries. Thus, we meet Req. III. Concerning Req. IV we have: (1) Users anonymously pay for content (licenses), i.e., they do not need to register with CP/CD and need not provide their payment details, which is why they stay anonymous during their transactions with CP and CD. (2) All SCs use the same certificate for anonymous authentication towards CP, thus CP cannot link different purchases made with the same SC. SC's public key pk-tmp sc is newly generated for each content (license) purchase-preventing CP from linking purchases to each other. Moreover, the anonymous payment scheme provides unlinkability of individual payments. Furthermore, we assumed the connection between user and CP to be anonymized via Tor. Thus, unlinkability of content (license) purchases is achieved. (3) The user only provides the re-encrypted content key to CD. Content i is only encrypted once during initialization with ck i and thus, ck i does not contain any information connected to the user or the user's CAD. As a new re-encryption key is generated for each content execution, the encrypted content key "looks" different for CD each time and hence, CD cannot link any pair (ck i ) pk-j cd , (ck i ) pk-k cd , for j ≠ k, to each other. Further, we assumed the connection between user and CD to be anonymized via Tor. Therefore, multiple transactions executed by the user are unlinkable for the CD. Moreover, even if an attacker gets access to the user's CAD, he does not learn which content has been bought and executed. The list of available content is only revealed by the SC after authentication with the proper PIN and the CAD application does not keep track of executed content. Thus, to sum it up, profile building (even under a pseudonym) is possible neither for CP nor for CD. Comparison to related work: In Tab. 1 we compare our proposed scheme to related work in the field of privacy-preserving digital rights management. Need for TTP: One of the main advantages of our scheme compared to related work is that it does not need a trusted third party which is involved in the license checking process as in [START_REF] Petrlic | Proxy re-encryption in a privacy-preserving cloud computing DRM scheme[END_REF][START_REF] Joshi | Towards practical privacy-preserving digital rights management for cloud computing[END_REF] during each content execution. In [START_REF] Mishra | Privacy rights management in multiparty multilevel DRM system[END_REF], the license server constitutes the TTP. However, it is not involved in the protocol for each single content execution but only once, when retrieving the license. Need for trusted hardware: In our protocol a smartcard performs the license checking. Trusted hardware is not needed by other protocols that rely on some TTP. A trusted platform module (TPM) is needed in the protocol presented in [START_REF] Win | A privacy preserving content distribution mechanism for DRM without trusted third parties[END_REF] to securely store tokens at the user's computing platform. Support for differentiated license models: The protocols presented here and in [START_REF] Petrlic | Proxy re-encryption in a privacy-preserving cloud computing DRM scheme[END_REF][START_REF] Joshi | Towards practical privacy-preserving digital rights management for cloud computing[END_REF] allow for differentiated license models.
The protocol presented in [START_REF] Mishra | Privacy rights management in multiparty multilevel DRM system[END_REF] does not allow such flexibility-once a license is bought for some content, it may be executed by the user as often as desired. The authors of [START_REF] Win | A privacy preserving content distribution mechanism for DRM without trusted third parties[END_REF] do not clearly state whether differentiated license models are intended. From the protocol's point of view, it should be possible to implement, e.g., execute at most n times-models as a token set provided by the content owner. Such token sets could include n tokens. Further, licenses that allow only a single content execution could be mapped to each token by the content provider 8 later on. Unlinkability of content executions: All of the approaches covered here, except for [START_REF] Win | A privacy preserving content distribution mechanism for DRM without trusted third parties[END_REF], provide unlinkability of content executions and thus, prevent any party from building a content usage profile (under a pseudonym). Computational efficiency: In terms of computational overhead, our proposed scheme is very efficient, as discussed above. The scheme presented in [START_REF] Joshi | Towards practical privacy-preserving digital rights management for cloud computing[END_REF] makes use of a number of different cryptographic primitives and thus performs less well. In [START_REF] Petrlic | Proxy re-encryption in a privacy-preserving cloud computing DRM scheme[END_REF], the entire content is re-encrypted for each content execution. Efficient standard cryptographic primitives are used in [START_REF] Mishra | Privacy rights management in multiparty multilevel DRM system[END_REF][START_REF] Win | A privacy preserving content distribution mechanism for DRM without trusted third parties[END_REF]. Flexibility in choosing content distributor: All the schemes presented in this overview provide users with the possibility to freely choose the CDs. In other two-party DRM scenarios, such flexibility is typically not provided. Conclusion We have come up with a privacy-preserving multiparty DRM concept. Users anonymously buy content licenses from a CP and anonymously execute the content at any CD by, for example, streaming the content from CDs nearby. 8 Content distributor in our scenario.
Table 1: Comparison of our scheme to related work in terms of properties.
Properties | Paper at hand | [7] | [5] | [2] | [3]
Need for TTP | no | yes | yes | yes | no
Need for trusted hardware | yes | no | no | no | yes
Support for differentiated license models | yes | yes | yes | no | yes
Unlinkability of content executions | yes | yes | yes | yes | no
Computational efficiency | good | medium | bad | good | good
Flexibility in choosing content distributor | yes | yes | yes | yes | yes
SIM cards are smartcards and TPMs are a special form of smartcards as well. A new temporary key pair is used for each content purchase. Note that the SC does not have an internal clock and thus cannot keep track of (authenticated) time. The time can only be set via new and verified licenses. Secure storage in this context especially means integrity-protection. http://gas.dia.unisa.it/projects/jpbc/ According to the specifications, the NXP JCOP card 4.1, V2.2.1 can be used to implement the needed functionality. Anonymity in this context means that none of the involved parties is able to build a content usage profile-not even under a pseudonym.
In contrast to related work on privacy-preserving DRM, our approach does not require a trusted third party. We implemented our concept on a state-of-the-art smartphone and proved its practicability for a multiparty DRM scenario in a mobile environment in which a user buys a license allowing the playback of, e.g., some TV show. Roaming in different regions, the user is free to choose the nearest streaming server (content distributor) and hence gets the best throughput. This work was partially supported by the German Research Foundation (DFG) within the Collaborative Research Centre "On-The-Fly Computing" (SFB 901). The extended version of this paper can be found at arXiv:1304.8109.
25,142
[ "1004159", "1004160" ]
[ "74348", "74348" ]
01490714
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490714/file/978-3-642-39256-6_22_Chapter.pdf
Emre Uzun email: [email protected] Vijayalakshmi Atluri email: [email protected] Jaideep Vaidya email: [email protected] Shamik Sural email: [email protected] Analysis of TRBAC with Dynamic Temporal Role Hierarchies The temporal role based access control (TRBAC) models support the notion of temporal roles, user-to-role and permission-to-role assignment, as well as allow role enabling. In this paper, we argue that role hierarchies can be temporal in nature with a dynamism that allows it to have a different structure in different time intervals; and safety analysis of such extensions is crucial. Towards this end, we propose the temporal role based access control model extended with dynamic temporal role hierarchies, denoted as TRBACRH, and offer an approach to perform its safety analysis. We also present an administrative model to govern changes to the proposed role hierarchy. Introduction The temporal extension of the role based access control (TRBAC) model assumes one or more of the following features: temporal User to Role Assignments, temporal Permission to Role Assignments, role enabling, and role hierarchies [START_REF] Bertino | TRBAC: A temporal role based access control model[END_REF][START_REF] Joshi | A generalized temporal role based access control model[END_REF]. In this paper, we introduce dynamic temporal role hierarchies for TRBAC. Role Hierarchies (RH), or sometimes called Role to Role Assignments (RRA), are one of the three basic relations that are defined in RBAC along with URA and PRA [START_REF] Sandhu | Role-based access control models[END_REF]. Whether the basis for RH in an enterprise is either functional or administrative, it simply allows higher level (senior) roles inherit the permissions assigned to the lower level (junior) roles. In this paper, we argue that the role hierarchies can be temporal in nature, i.e., they may change with time. Although role hierarchies in prior temporal extensions of RBAC have been specified, they do not allow temporal constraints to be specified on RH that not only restrict the time during which the hierarchy is valid, but also change its structure by shifting the position of the roles in the hierarchy. Essentially this means that a senior level role cannot always inherit the permissions of a junior level role. Also, a role may change its level in the hierarchy, for example, a junior level role may be elevated to a higher level role during certain time periods. To capture this dynamic structure, we enhance the traditional definition of TRBAC with Dynamic Temporal Role Hierarchy (DTRH), and we denote the resulting model, TRBAC RH , a temporal role based access control model with dynamic temporal role hierarchies. Although enterprises usually specify a static hierarchy, DTRH comes into play in some temporary or periodical Consider a manufacturing company with two different production plants, one having the headquarters of the company. The company has a CEO and a General Manager (GM) who works at both the plants; an Accounting Manager (AM), a Manufacturing Manager (MM), and a Human Resources Manager (HR) for each plant. Although CEO works at the headquarters, GM works in both of the plants in different days of the week. As in Figures 1(a) and 1(b), when he is present at a plant, he manages the operations and audits the actions of the AM of that plant. 
However, when he is at the other plant, MM has the responsibility to audit the operations of AM without completely assuming the GM role, which is considered to have many additional permissions. Since the hierarchical relationships among the roles change, this situation can be specified by DTRH, by simply having a policy which moves MM to the second level, on top of AM, only on the days when GM is away. Nevertheless, it is still possible to represent the scenario in this example using a static role hierarchy. However, the lack of temporal role hierarchies forces system administrators to create a dummy role, like "Manager and Auditor" (MA), that does not represent a regular job function. Also, this role should have the required permission and hierarchy assignments that MM needs. Moreover, MM should be assigned to two separate roles (MM and MA) which are enabled and disabled in regular time intervals. Clearly, creation of such redundant dummy roles increases the administrative burden [START_REF] Guo | The role hierarchy mining problem: Discovery of optimal role hierarchies[END_REF]. Role delegation is another way of handling such scenarios [START_REF] Crampton | Delegation in role-based access control[END_REF][START_REF] Zhang | PBDM: a flexible delegation model in rbac[END_REF][START_REF] Barka | A role-based delegation model and some extensions[END_REF][START_REF] Zhang | A rule-based framework for role-based delegation and revocation[END_REF][START_REF] Joshi | Fine-grained role-based delegation in presence of the hybrid role hierarchy[END_REF]. Users are delegated to the necessary roles of the users that are away. Although this process seems more practical than dealing with dummy roles, some complications are possible. The delegatees might not be allowed to assume all of the permissions of the role that is delegated to them. At this point, we have to note that [START_REF] Zhang | PBDM: a flexible delegation model in rbac[END_REF] and [START_REF] Joshi | Fine-grained role-based delegation in presence of the hybrid role hierarchy[END_REF] provide schemes for partial delegation, using either temporary dummy roles or blocking of some permissions in the delegated role. Even though our example scenario can be modeled using role delegation without imposing significant overhead, employing temporal role hierarchies still has an advantage, as it lends itself to safety analysis, which none of the role delegation studies address. The main contribution of this paper is to perform safety analysis of TRBAC RH . Whether temporal role hierarchies are handled by specifying DTRH, by dummy roles, or by delegation, none of the prior work on safety analysis considers RBAC models with temporal constraints on role hierarchies. The safety analysis of TRBAC RH leads us to expand the set of possible safety questions. As discussed above, having DTRH can reduce redundancy and facilitate the administration in various dynamic work environments. Since we have a dynamic hierarchy, which is controlled by an administrative model (Section 3.2), the implicit role assignments require much more attention than before. There is no problem of this sort in the case of static role hierarchies; with a dynamic hierarchy, however, a simple manipulation of the hierarchy could create a security breach, which should be detected in advance to prevent any such occurrence. Therefore, we need to examine new security questions in the analysis of systems with dynamic temporal role hierarchies.
A possible safety question can be: "Will a user u ever get implicitly assigned to role r in the future?" A liveness question can be: "Will a user u ever lose any role that he is implicitly assigned in the future?" Finally, a mutual exclusion question can be: "Will users u 1 and u 2 ever get implicitly assigned to role r at the same time slot in the future?" We define the TRBAC RH by extending the definitions of TRBAC with the dynamic temporal role hierarchies, as well as its administrative model. We also propose an approach to perform safety analysis on this model to answer potential safety questions discussed above. For our analysis, we adopt the TRBAC safety analysis approach recently proposed by Uzun et al. [START_REF] Uzun | Analyzing temporal role based access control models[END_REF]. Specifically, we decompose the TRBAC RH analysis problem into multiple RBAC analysis problems and simply employ existing RBAC analysis techniques to solve the TRBAC RH analysis. Preliminaries Temporal Role Based Access Control Model: Temporal RBAC was first proposed by Bertino et al. [START_REF] Bertino | TRBAC: A temporal role based access control model[END_REF] to be an RBAC model with the capability of role enabling and disabling via periodical and duration constraints. Joshi et al. [START_REF] Joshi | A generalized temporal role based access control model[END_REF] extended this model to have temporal capabilities on user to role and role to permission assignments along with some other components like constraints, role triggers and role hierarchies. In both of these models, the time notion is embedded using Calendar expression which is composed of periodicity and duration expressions. Uzun et al. [START_REF] Uzun | Analyzing temporal role based access control models[END_REF] provide a simplified version of the temporal models of [START_REF] Bertino | TRBAC: A temporal role based access control model[END_REF] and [START_REF] Joshi | A generalized temporal role based access control model[END_REF] in order to provide strategies to perform safety analysis on TRBAC. The main difference between this model and the models by [START_REF] Bertino | TRBAC: A temporal role based access control model[END_REF] and [START_REF] Joshi | A generalized temporal role based access control model[END_REF] is the simplified calendar expression, which only has periodicity constraints. Since we base our temporal role hierarchies on the TRBAC model by [START_REF] Uzun | Analyzing temporal role based access control models[END_REF], we now give some of its components and notation. Let U , R, P RM S be finite sets of users, roles and permissions, respectively, of a traditional RBAC system. Although the P A relation, P A ⊆ P RM S × R is defined the same way as in RBAC [START_REF] Sandhu | Role-based access control models[END_REF], U A relation is defined in a different way, considering the temporal nature of the model. The unit time is represented by discrete time slots. Let T MAX be a positive integer. A time slot of Times is a pair (a, a + 1), where a is an integer, and 0 ≤ a < a + 1 ≤ T MAX . We use the term time interval, for a consecutive series of time slots. A schedule s over T MAX is a set of time slots. The model has the periodicity property (just like the preceding TRBAC models) which is provided by having schedules that repeat themselves in every T M AX time slots. 
This temporal notion is embedded into two different components of the model: TUA ⊆ (U × R × S) is the temporal user to role assignment relation and RS ⊆ (R × S) is the role-status relation which controls the role enabling and disabling. A tuple (u, r, s) ∈ TUA represents that user u is a member of the role r only during the time intervals of schedule s. A tuple (r, s) ∈ RS imposes that role r is enabled only during the time intervals of s and therefore it can only be assumed at these times. Thus, a user u can assume role r at time t ∈ [0, T MAX ] provided that (u, r, s 1 ) ∈ TUA, (r, s 2 ) ∈ RS, and t ∈ (s 1 ∩ s 2 ), for some schedules s 1 and s 2 . The administrative model for TRBAC is used to change these two temporal components. More specifically, the administrative rules t can assign, t can revoke, can enable and can disable is used to assign / revoke roles to users, and enable / disable roles, respectively. Applying these rules change the assignments along with their schedules. Static and Temporal Role Hierarchies: A Role Hierarchy relationship (r 1 ≥ r 2 ) between roles r 1 and r 2 means that r 1 is superior to r 2 , so that any user who has r 1 assigned, can inherit the permissions assigned to r 2 . In traditional RBAC, this assignment is, naturally, static [START_REF] Sandhu | Role-based access control models[END_REF]. However, presence of a temporal dimension brings some additional flexibility on how these hierarchies work. Previously proposed models for temporal role hierarchies [START_REF] Joshi | Hybrid role hierarchy for generalized temporal role based access control model[END_REF][START_REF] Joshi | A generalized temporal role based access control model[END_REF] focus on the permission and activation inheritance through the role hierarchies in the presence of role enabling and disabling. Particularly, the role hierarchy is still static, but the temporal constraints on the role enabling determines whether the role hierarchy will provide inheritance for a role at a given time. Three types of hierarchy relations for temporal domain are proposed: Inheritance Only Hierarchy (≥), Activation Only Hierarchy (≽) and General Inheritance Hierarchy (≫). Lastly, a Hybrid Hierarchy exists when the pairwise relations among different roles are of different types. Interested readers may consult [START_REF] Joshi | Hybrid role hierarchy for generalized temporal role based access control model[END_REF][START_REF] Joshi | A generalized temporal role based access control model[END_REF] for details. Dynamic Temporal Role Hierarchies in TRBAC The flexibility to have a different hierarchy structure at different time intervals makes Dynamic Temporal Role Hierarchy different than the Temporal Role Hierarchy in [START_REF] Joshi | A generalized temporal role based access control model[END_REF]. In order to represent this additional capability, we provide a new Role to Role Relation called dynamic temporal role hierarchy policy, and an administrative model to make modifications on it, like the RRA97 relation of ARBAC97 [START_REF] Sandhu | The ARBAC97 model for role-based administration of roles: preliminary description and outline[END_REF]. Dynamic Temporal Role Hierarchy Policies A TRBAC policy with the presence of dynamic temporal role hierarchies, denoted as TRBAC RH , and is defined as follows: Let S be the set of all possible schedules over T MAX . 
A TRBAC RH policy over T MAX is a tuple M = ⟨U, R, PRMS , TUA, PA, RS , DT RH⟩ where DT RH ⊆ (R×R×S × {weak, strong}) is the temporal role hierarchy relation. In our model, DT RH is represented as a collection of dynamic temporal role hierarchy policies, which are tuples consisted of a pair of roles associated with a schedule that denotes the time slots that the policy is valid. In our model, we have dynamic temporal role hierarchy for inheritance only relation DT RH I , for activation only relation DT RH A and for general inheritance relation DT RH IA . For notational simplicity, we use DT RH, when we refer to any one of them. Definition 1. A dynamic temporal role hierarchy policy (r 1 ≥ s,weak r 2 ) ∈ DT RH I between roles r 1 and r 2 is an inheritance-only weak temporal relation, that is valid in the time slots specified by a schedule s. Under this policy, a user u who can activate r 1 can inherit permissions of r 2 at time t if (1) (u, r 1 , s 1 ) ∈ T U A (2) (r 1 , s 2 ) ∈ RS and (3) t ∈ (s 1 ∩ s 2 ∩ s), provided that there exists schedules s 1 and s 2 that determine the time slots that u is assigned to r 1 and r 1 is enabled, respectively. Definition 2. A dynamic temporal role hierarchy policy (r 1 ≽ s,weak r 2 ) ∈ DT RH A between roles r 1 and r 2 is an activation-only weak temporal relationship, that is valid in the time slots specified by a schedule s. Under this policy, a user u can activate r 2 at time t if (1) (u, r 1 s 1 ) ∈ T U A (2) (r 2 , s 2 ) ∈ RS and (3) t ∈ (s 1 ∩ s 2 ∩ s), provided that there exists schedules s 1 and s 2 that determine the time slots that u is assigned to r 1 , and r 2 is enabled, respectively. Definition 3. A dynamic temporal role hierarchy policy (r 1 ≫ s,weak r 2 ) ∈ DT RH IA between roles r 1 and r 2 is a general weak temporal relationship, that is valid in the time slots specified by a schedule s. Under this policy, a user u can activate r 2 at time t, or inherit permissions of r 2 if (1) (u, r 1 , s 1 ) ∈ T U A (2) (r 2 , s 2 ) ∈ RS and (3) t ∈ (s 1 ∩ s 2 ∩ s), provided that there exists schedules s 1 , and s 2 that determine the time slots that u is assigned to r 1 and r 2 is enabled, respectively. In the above three definitions, the relations become strong, (i.e: r 1 ≥ s,strong r 2 ) ∈ DT RH I , (r 1 ≽ s,strong r 2 ) ∈ DT RH A and (r 1 ≫ s,strong r 2 ) ∈ DT RH IA ), when (2) is replaced with (r 1 , s 2 ), (r 2 , s 3 ) ∈ RS and ( 3) is replaced with t ∈ (s 1 ∩s 2 ∩s 3 ∩s) where s 3 is the schedule that determine the time slots that r 2 is enabled. Now, let us give an example about how these policies work. Consider that we have temporal access control system with three roles, r 1 , r 2 and r 3 and T M AX = 3. Suppose that we have the following DT RH and RS policies defined: (1) (r 1 , (0, 2)) ∈ RS (2) (r 2 , (0, 1)) ∈ RS (3) (r 3 , (1, 3)) ∈ RS (4) (r 1 ≥ (0,3),strong r 2 ) ∈ DT RH I (5) (r 1 ≥ (0,3),weak r 3 ) ∈ DT RH I . According to these policies, a user who has r 1 assigned can inherit permissions of r 2 only in the time interval (0, 1), because r 2 is not enabled in (1, 3) and the role hierarchy relation is strong. However, u can inherit permissions of r 3 in (0, 2), even if r 3 is not enabled in (0, 1), since the relation is weak. A hybrid relation in a dynamic temporal role hierarchy, DT RH H , may contain all of the tuples defined in the above definitions, and each relation among different roles is determined using the type of that specific relation. 
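To make the weak/strong distinction concrete, the following Python sketch (our own illustration with invented names) encodes TUA, RS and the inheritance-only DTRH policies of the three-role example above as sets of time-slot indices, and reproduces the stated conclusions: with a strong policy both the senior and the junior role must be enabled at slot t, whereas a weak policy only requires the senior role to be enabled.

```python
T_MAX = 3
# Role enabling as slot sets: (r1,(0,2)) -> {0,1}, (r2,(0,1)) -> {0}, (r3,(1,3)) -> {1,2}.
RS = {"r1": {0, 1}, "r2": {0}, "r3": {1, 2}}
# DTRH_I policies: (senior, junior, schedule, strength); schedule (0,3) covers all three slots.
DTRH_I = [("r1", "r2", {0, 1, 2}, "strong"),
          ("r1", "r3", {0, 1, 2}, "weak")]
TUA = {("u", "r1"): {0, 1, 2}}                  # user u is assigned r1 over the whole period

def inherits(user, senior, junior, t):
    """Inheritance-only semantics of Definition 1 (weak) and its strong variant."""
    s1 = TUA.get((user, senior), set())
    for sr, jr, s, strength in DTRH_I:
        if sr != senior or jr != junior:
            continue
        ok = t in s1 and t in RS.get(senior, set()) and t in s
        if strength == "strong":
            ok = ok and t in RS.get(junior, set())   # strong also requires the junior role enabled
        if ok:
            return True
    return False

# Reproduces the worked example: r2's permissions only at slot 0, r3's at slots 0 and 1.
assert [t for t in range(T_MAX) if inherits("u", "r1", "r2", t)] == [0]
assert [t for t in range(T_MAX) if inherits("u", "r1", "r3", t)] == [0, 1]
```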
Dynamic temporal role hierarchy policies (r 1 ≥ s,weak r 2 ) ∈ DT RH satisfy the following properties for a given schedule s: (1) Reflexive: (r 1 ≥ s,weak r 1 ) ∈ DT RH, (2) Transitive: If (r 1 ≥ s,weak r 2 ), (r 2 ≥ s,weak r 3 ) ∈ DT RH, then (r 1 ≥ s,weak r 3 ) ∈ DT RH. (3) Asymmetric: If (r 1 ≥ s,weak r 2 ) ∈ DT RH then (r 2 ≥ s,weak r 1 ) ̸ ∈ DT RH. These properties apply for both strong and the other types of relations (≽, ≫) as well. Administrative Model for TRBAC with Dynamic Temporal Role Hierarchies Administrative models are required for RBAC systems in order to govern the modifications on the access control policies [START_REF] Sandhu | The ARBAC97 model for role-based administration of roles: preliminary description and outline[END_REF]. Without these models, the access control policies are considered static as DT RH, which is a static policy unless there is an administrative model to allow for modifications. Uzun et al. [START_REF] Uzun | Analyzing temporal role based access control models[END_REF] present an administrative model for TRBAC. In this section, we propose an extension to that model, which makes it cover TRBAC RH . This extension is composed of a rule called t can modify, similar in semantics to the can modify in [START_REF] Sandhu | The ARBAC97 model for role-based administration of roles: preliminary description and outline[END_REF], but with additional capabilities for temporal dimension. This rule updates the valid time slots of the dynamic temporal role hierarchy policies. Also, in contrast to precondition structures that have been proposed in the literature for other administrative rules (like can assign), it has two sets of preconditions, one for senior and one for junior role in order to protect the integrity of the hierarchy. The rule is composed of eight parameters that should be satisfied to execute the rule. Let t be the time slot that the rule is required to be executed. ( 1) admin denotes the administrative role that a user must belong in order to execute the rule. (2) s rule is a schedule that denotes the time slots in which the rule is executable. In order to satisfy, t ⊆ s rule . (3) s hierarchy is a schedule that denotes the time slots of the hierarchy policy that the rule is authorized to modify. ( 4) type ∈ {strong, weak} denotes the type of the hierarchy relation. ( 5) r sr is the senior role of the hierarchy policy. ( 6) r jr is the junior role of the hierarchy policy. ( 7) SR(P os, N eg) denotes the positive and negative preconditions of the senior role r sr . The preconditions are satisfied in the following way: Let ŝ denote the time slots that are intended to be modified by the rule (ŝ ⊆ s hierarchy ). For each r ∈ P os, there must be a role hierarchy policy (r ≥ ŝ,type r sr ) ∈ DT RH and for each r ∈ N eg, there must not be a hierarchy policy (r ≥ ŝ,type r sr ) ∈ DT RH. (8) JR(P os, N eg) denotes the positive and negative preconditions of the junior role r jr . The preconditions are satisfied in the following way. Let ŝ denote the time slots that are intended to be modified by the rule (ŝ ⊆ s hierarchy ). For each r ∈ P os, there must be a role hierarchy policy (r jr ≥ ŝ,type r) ∈ DT RH and for each r ∈ N eg, there must not be a hierarchy policy (r jr ≥ ŝ,type r) ∈ DT RH. Under these parameters, a tuple (admin, s rule , SR(P os, N eg), JR(P os, N eg), s hierarchy , r sr , r jr , type) ∈ t can modify allows to update the role hierarchy relation r sr ≥ s,type r jr as follows: Let ŝ be a schedule over T MAX with ŝ ⊆ s hierarchy . 
Then, if this rule can be executed at time t, and the preconditions are satisfied w.r.t. schedule ŝ, then the tuple r sr ≥ s,type r jr is updated to r sr ≥ s∪ŝ,type r jr or r sr ≥ s\ŝ,type r jr , depending on the intended modification. This definition is for inheritance only hierarchies, but it also applies to activation only and general inheritance hierarchies, by replacing ≥ with ≽ and ≫. Toward Safety Analysis of TRBAC Systems with Dynamic Temporal Role Hierarchies An important aspect of any access control model is its safety analysis. This is necessary to answer security questions such as those given in Section 1. In this section, we examine how safety analysis of the TRBAC model with DTRH can be carried out. The basic idea is to use the decomposition approach proposed in [START_REF] Uzun | Analyzing temporal role based access control models[END_REF], which reduces the TRBAC safety problem to multiple RBAC safety sub-problems and handles each sub-problem separately using an RBAC safety analyzer that has been proposed in the literature. Here, we need to make two assumptions: (1) The administrative model for the dynamic temporal role hierarchy given in Section 3.2 cannot completely be decomposed into a traditional RBAC. The underlying reason is the precondition structure of dynamic temporal role hierarchies, which does not exist in the ARBAC97 role hierarchy component RRA97. Decomposing TRBAC into multiple RBAC safety sub-problems relaxes the schedule components, but the precondition requirements remain in effect. Since there is no known RBAC safety analyzer that can handle preconditions in role hierarchies, we assume that the administrative model for the dynamic temporal role hierarchy contains rules with no precondition requirement for safety analysis purposes. (2) The safety questions and the structure of the analysis in [START_REF] Uzun | Analyzing temporal role based access control models[END_REF] are based on checking the presence of a particular role (or roles) being assigned to a particular user. There is no permission level control available in the model. Hence, we restrict our safety analysis to Activation-Only hierarchies. So, we assume DT RH = DT RH A . Moreover, we assume all relationships are strong. The decomposition that we utilize is the Role Schedule Approach of [START_REF] Uzun | Analyzing temporal role based access control models[END_REF]. In this approach the sub-problems are constructed using the role schedules of the administrative rules. In TRBAC, the administrative rules for role assignment and role enabling have two separate schedules: a rule schedule and a role schedule. The rule schedule is similar to the s rule of t can modify and determines the periods in which the rule is valid. Similarly, the role schedule is similar to the s hierarchy of t can modify and determines the time slots in which the rule is authorized to modify the T U A and RS policies. The key observation that makes this decomposition possible is the independency among different time slots, and the periodic behavior of the model. Particularly, if we are interested in the safety analysis of a time slot t, then we only need to consider the administrative rules that are authorized to modify time slot t of the T U A and RS relations. Furthermore, since the system is periodic, for any long run analysis, we can safely assume that the validity constraints of the rules (s rule ) will be enforced implicitly even if they are ignored.
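The per-slot projection behind the role-schedule decomposition can be sketched as a simple filter: for each time slot t, keep only the hierarchy policies whose schedule covers t and only the administrative rules whose hierarchy schedule covers t. The Python fragment below is our own illustration; following the assumption above, preconditions are omitted from the rule tuples, and the names are not taken from the paper.

```python
def project_policies(dtrh, t):
    """DTRH_t: hierarchy policies valid at slot t, stripped of their schedules."""
    return {(sr, jr) for (sr, jr, schedule, strength) in dtrh if t in schedule}

def project_rules(t_can_modify, t):
    """t_can_modify_t: rules authorized to modify slot t, reduced to (admin, r_sr, r_jr, type)."""
    return {(admin, sr, jr, typ)
            for (admin, s_rule, s_hierarchy, sr, jr, typ) in t_can_modify
            if t in s_hierarchy}

def decompose(dtrh, t_can_modify, t_max):
    """One non-temporal RBAC sub-problem per time slot; each can be fed to an RBAC analyzer."""
    return [(project_policies(dtrh, t), project_rules(t_can_modify, t))
            for t in range(t_max)]

# Tiny example with T_MAX = 3.
dtrh = [("MM", "AM", {1, 2}, "strong")]
rules = [("security_admin", {0, 1, 2}, {1}, "MM", "AM", "strong")]
for t, (policies, admin_rules) in enumerate(decompose(dtrh, rules, 3)):
    print(t, policies, admin_rules)
```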
For detailed discussion, readers may refer to [START_REF] Uzun | Analyzing temporal role based access control models[END_REF]. In TRBAC RH , we perform similar operations to generate sub-problems. Our model has the same property of independency among the time slots. First, we define the dynamic temporal role hierarchy policies for a single time slot t, denoted as DT RH t ⊆ DT RH, to be the collection of role hierarchy policies in the system that satisfies DT RH t = {(r i ≽ s1,strong r j ) | t ∈ s 1 } for all r i , r j ∈ R, where s 1 is the schedule of the respective role hierarchy policy. The dynamic role hierarchy policies in DT RH t then reduce to (r i ≽ strong r j ). These policies become non-temporal role hierarchy policies for the time slot t. Similarly, the administrative rules for a specific time slot t are defined as: (admin, s rule , SR(∅, ∅), JR(∅, ∅), s hierarchy , r sr , r jr , type) ∈ t can modify t , t ∈ s hierarchy . Hence, the set t can modify t contains the administrative rules that are authorized to modify the t th time slot of the dynamic temporal role hierarchy policies. In other words, if one is interested in the safety analysis for time slot t, then no administrative rule other than the rules in t can modify t is authorized to modify time slot t. The administrative rules for t are reduced to (admin, r sr , r jr , type), which belong to the can modify ⊆ admin × 2 R relation of RRA97 [START_REF] Sandhu | The ARBAC97 model for role-based administration of roles: preliminary description and outline[END_REF]. The two sets defined above, DT RH t and t can modify t , enable safety analysis using the Role Schedule Approach in [START_REF] Uzun | Analyzing temporal role based access control models[END_REF]. When decomposed, there are k RBAC safety sub-problems, where k is the number of time slots. Any RBAC safety analyzer that is capable of handling role hierarchies and the can modify relation can be used to analyze these sub-problems. Repeating this operation for each of these k sub-problems will yield the safety analysis of the TRBAC RH system. The computational complexity of this process depends linearly on the computational complexity of the RBAC safety analyzer and on k. Conclusion and Future Work In this paper, we introduced the concept of dynamic temporal role hierarchy, which can be viewed as a different role hierarchy at different times. We develop an administrative model for RBAC with dynamic temporal role hierarchies along with a road map for its safety analysis. Currently we are implementing safety analysis using the RBAC analysis tools available. Implementing the administrative model of dynamic temporal role hierarchies in the safety analysis will require new tools to fully capture the capabilities of the model. This will be our future work. Fig. 1. Role Hierarchies on Different Days of the Week. Acknowledgements: This work is partially supported by the National Science Foundation under grant numbers CNS-0746943 and CNS-1018414.
27,343
[ "986163", "986166", "986164", "1004161" ]
[ "489037", "489037", "489037", "301693" ]
01490715
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490715/file/978-3-642-39256-6_2_Chapter.pdf
Joachim Biskup email: [email protected] Marcel Preuß email: [email protected] Database Fragmentation with Encryption: Under Which Semantic Constraints and A Priori Knowledge Can Two Keep a Secret? Keywords: A Priori Knowledge, Confidentiality Constraint, Fragmentation, Inference-Proofness, Logic, Outsourcing, Semi-Honest Server Database outsourcing to semi-honest servers raises concerns about the confidentiality of sensitive information. To hide such information, an existing approach splits data among two supposedly mutually isolated servers by means of fragmentation and encryption. This approach is modelled logic-orientedly and then proved to be confidentiality preserving, even if an attacker employs some restricted but nevertheless versatile class of a priori knowledge to draw inferences. Finally, a method to compute a secure fragmentation schema is developed. Introduction Database outsourcing faces two directly conflicting goals: it should both reduce storage and processing costs by storing data on external servers and provably comply with confidentiality requirements -in particular with privacy concerns -in spite of storing data externally [START_REF]An Empirical Study of the Energy Consumption in Automotive Assembly[END_REF]. A basic solution presented in [START_REF] Aggarwal | Two can keep a secret: A distributed architecture for secure database services[END_REF][START_REF] Ganapathy | Distributing data for secure database services[END_REF] aims at resolving this conflict by means of the combined usage of fragmentation and encryption: a client's database relation is losslessly decomposed into (at least) two vertical fragments each of which is maintained by a different semi-honest server; sensitive data is split into harmless parts, either by breaking an association or by separating an encrypted piece of data from the cryptographic key employed; moreover, the servers are (postulated to be) mutually isolated and each attacker is assumed to have access to at most one server. Consequently, due to splitting, each attacker (identified with a server) only has access to non-sensitive data and, due to losslessness, an authorized user (identified with the client) can still reconstruct the original data while, due to isolation, only authorized users can do so.
Fig. 1. A relational instance containing sensitive data items and associations together with a possible fragmentation with encryption.
Example 1. We consider the relational instance about medical data shown in the upper half of Fig. 1. Suppose that social security numbers (SSN) should be hidden, as well as associations between a patient identified by his name (Name) and an illness treated (Illness), between a patient (Name) and a person who caused an illness (HurtBy), and between an illness (Illness) and a person having caused that illness (HurtBy), respectively. The lower half of Fig. 1 exhibits a possible fragmentation with encryption: The sensitive association between Name and Illness is "broken" by separating the attribute Name in the fragment F 1 from the attribute Illness in the fragment F 2 . The sensitive associations between Name and HurtBy and between Illness and HurtBy are made "invisible" by using encryption for the attribute HurtBy such that ciphertexts are stored in fragment F 1 and corresponding keys in fragment F 2 . The sensitive attribute SSN is similarly treated by encryption.
The newly introduced tuple identifiers (tid) ensure the losslessness of the vertical decomposition (see, e.g., [START_REF] Abiteboul | Foundations of Databases[END_REF]). At first glance two semi-honest servers seem to "keep the secrets" declared in a confidentiality policy. However, a second thought raises some doubts on the actual achievements: though each server only stores data that is non-sensitive per se, an attacker might still be able to infer sensitive information by exploiting his a priori knowledge obtained from further sources. In particular, this a priori knowledge might comprise semantic constraints to be satisfied by the relation being decomposed and individual fact data stemming from the "outside world". Example 2. Suppose an attacker has access to the fragment F 1 and knows a priori that Doctor White is a psychiatrist only treating patients suffering from the Borderline-syndrome. The attacker can then conclude that patient Hellmann suffers from the illness Borderline-syndrome, thereby violating the requirement that associations between a patient and an illness treated should be hidden. Moreover, if this attacker additionally knows that all patients suffering from the Borderline-syndrome have hurt themselves, the attacker can conclude that patient Hellmann has been hurt by Hellmann, thereby revealing an association between a patient and a person who caused an illness. The first violation is enabled by a priori knowledge connecting a fact shown in the visible fragment with a fact in the hidden fragment, namely by means of the constant symbols White and Borderline. Similarly, the second violation is caused by a priori knowledge that connects two concepts across the decomposition, namely the concept of a patient and the concept of a hurt creator, where a concept will be formally represented by a variable ranging over the domain of an attribute. Such connections might "transfer information" between the visible fragment and the hidden fragment. In other words, an attacker a priori knowing such connections might infer hidden information from visible information. Next, we introduce a more abstract example in a more formal way. Example 3. The client maintains a relational schema with relational symbol R, attribute set A R = {a 1 , a 2 , a 3 , a 4 } and the functional dependency a 2 → a 3 as a semantic constraint. Confidentiality interests are expressed by a set C = {{a 1 , a 3 }, {a 4 }} of two confidentiality constraints: {a 1 , a 3 } is intended to require to hide the associations between values of the attributes a 1 and a 3 , and {a 4 } requires to hide single values of attribute a 4 . The a priori knowledge comprises the functional dependency and a sentence expressing the following: "for some specific values b and c for the attributes a 2 and a 3 , resp., there exist a value X 1 for attribute a 1 and a value X 4 for attribute a 4 such that the tuple (a 1 : X 1 , a 2 : b , a 3 : c , a 4 : X 4 ) is an element of the relational instance r". Furthermore, fragment F 1 has attribute set A F1 = {tid, a 1 , a 2 , a 4 } and fragment F 2 attribute set A F2 = {tid, a 3 , a 4 } such that the common attribute a 4 is encrypted. Let fragment F 1 exhibit a tuple (tid : no , a 1 : a , a 2 : b , a 4 : ran), where no is a tuple identifier and ran results from encryption. 
Combining the a priori knowledge with the tuple exhibited, an attacker might infer that the value a for attribute a 1 is associated with the value c for attribute a 3 , thereby violating the confidentiality constraint {a 1 , a 3 }. Thus fragment F 1 is not inference-proof under the given assumptions. In contrast, fragment F 2 is harmless. The a priori knowledge relates the fragments F 1 and F 2 by means of both the functional dependency using variables and the association fact about a and b dealing with constant symbols. Though taken alone, each of these items might be harmless, their combination turns out to be potentially harmful. The next example indicates that for the same underlying situation one fragmentation satisfying required confidentiality constraints might be better than another one. Example 4. Modifying Example 3 such that A F1 = {tid, a 1 , a 4 } and A F2 = {tid, a 2 , a 3 , a 4 } would block the harmful inference. For, intuitively, the crucial fact about the association of a with b does not span across the decomposition. More generally, we will investigate the following problems in this article: -Given a fragmentation, identify conditions on the a priori knowledge to provably disable an attacker to infer sensitive information. -Given some a priori knowledge, determine a fragmentation such that an attacker cannot infer sensitive information. Our solutions will be based on a logic-oriented modelling of the fragmentation approach presented in [START_REF] Aggarwal | Two can keep a secret: A distributed architecture for secure database services[END_REF][START_REF] Ganapathy | Distributing data for secure database services[END_REF] within the more general framework of Controlled Interaction Execution, CIE, as surveyed in [START_REF] Biskup | Inference-usability confinement by maintaining inference-proof views of an information system[END_REF]. This framework assists a database owner in ensuring that each of his interaction partners can only obtain a dedicated inference-proof view on the owner's data: each of these views does not contain information to be kept confidential from the respective partner, even if this partner tries to employ inferences by using his a priori knowledge and his general awareness of the protection mechanism. Our main achievements can be summarized as follows and will be elaborated in the remainder as indicated: -We formalize the fragmentation approach of [START_REF] Aggarwal | Two can keep a secret: A distributed architecture for secure database services[END_REF][START_REF] Ganapathy | Distributing data for secure database services[END_REF] (Sect. 2). -We provide a logic-oriented modelling of that approach (Sect. 3). -We exhibit sufficient conditions to achieve confidentiality (Sect. 4). -We propose a method to compute a suitable fragmentation (Sect. 5). These results extend the previous work [START_REF] Biskup | On the Inference-Proofness of Database Fragmentation Satisfying Confidentiality Constraints[END_REF] in which a more simple approach to fragmentation proposed in [START_REF] Ciriani | Keep a few: Outsourcing data while maintaining confidentiality[END_REF] -splitting a relational instance into one externally stored part and one locally-held part without resorting to encryption -is formally analyzed to be inference-proof. 
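Before the formalization in the following sections, the inference problem of Examples 2 and 3 can be mimicked by a toy forward-chaining step over the visible fragment and the a priori knowledge. The Python sketch below is our own simplified illustration of Example 2; the rule encoding is an assumption made for readability.

```python
# Visible fragment F1 (Example 2): the attacker sees patient names and doctors, not illnesses.
f1 = [{"tid": 1, "Name": "Hellmann", "Doctor": "White"}]

# A priori knowledge, written as simple conditional rules over attribute values.
def apply_apriori(tuple_f1):
    inferred = {}
    if tuple_f1["Doctor"] == "White":            # White only treats Borderline patients
        inferred["Illness"] = "Borderline"
    if inferred.get("Illness") == "Borderline":  # Borderline patients have hurt themselves
        inferred["HurtBy"] = tuple_f1["Name"]
    return inferred

for t in f1:
    hidden = apply_apriori(t)
    # The attacker now associates Name with Illness and with HurtBy,
    # violating the confidentiality requirements the fragmentation was meant to enforce.
    print(t["Name"], hidden)   # -> Hellmann {'Illness': 'Borderline', 'HurtBy': 'Hellmann'}
```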
In particular, that previous work is extended by a more detailed formal modelling of fragmentation including encryption of values, a more expressive class of sentences representing an attacker's a priori knowledge and a method to compute an inference-proof fragmentation. Confidentiality by Fragmentation In this section, we briefly formalize and extend the approach to fragmentation proposed in [START_REF] Aggarwal | Two can keep a secret: A distributed architecture for secure database services[END_REF][START_REF] Ganapathy | Distributing data for secure database services[END_REF]. All data is represented within a single relational instance r over a relational schema R|A R |SC R with relational symbol R and the set A R = {a 1 , . . . , a n } of attributes, for simplicity assumed to have the same type given by the infinite set U of values. Moreover, the set SC R contains some semantic (database) constraints, which must be satisfied by the relational instance r. The idea for achieving confidentiality basically lies in splitting the original instance r vertically (i.e., by projections on subsets of A R ) into two fragment instances f 1 and f 2 each of which is stored on exactly one of the two external servers instead of r. Those confidentiality requirements which cannot be satisfied by just splitting instance r are satisfied by encrypting the values of some attributes. Each "encrypted attribute" is contained in f 1 -storing ciphertexts -as well as in f 2 -storing globally unique cryptographic keys. We assume an encryption function Enc : U × U → U satisfying the group properties to achieve perfect (information-theoretic) security. A value of U might be used not only as a plaintext but also as a cryptographic key and a ciphertext. The decryption function is defined by Dec(e, κ) = v iff Enc(v, κ) = e. Definition 1 (Fragmentation). Given a relational schema R|A R |SC R , a vertical fragmentation (F, E) of R|A R |SC R contains a set E ⊆ A R of so-called "encrypted attributes" and a set F = { F 1 |A F1 |SC F1 , F 2 |A F2 |SC F2 } in which F 1 |A F1 |SC F1 and F 2 |A F2 |SC F2 are relational schemas called fragments of (F, E), both containing the distinguished attribute a tid ∉ A R for tuple identifiers. Moreover, for i ∈ {1, 2}, it holds that (i) A Fi := {a tid } ∪ ĀFi with ĀFi ⊆ A R , (ii) SC Fi := {a tid → ĀFi } with a tid → ĀFi being a functional dependency declaring a tid as a primary key, (iii) ĀF1 ∪ ĀF2 = A R and ĀF1 ∩ ĀF2 = E. Given a relational instance r over R|A R |SC R , the fragment instances f 1 and f 2 over F 1 |A F1 |SC F1 and F 2 |A F2 |SC F2 are created by inserting exactly both the tuples ν 1 into f 1 and ν 2 into f 2 for each tuple µ ∈ r. Thereby, (a) ν 1 [a tid ] = ν 2 [a tid ] = v µ s.t. v µ is a globally unique tuple identifier, (b) ν i [a] = µ[a] for i ∈ {1, 2} and for each attribute a ∈ ( ĀFi \ E), (c) ν 1 [a] := Enc(µ[a], κ) and ν 2 [a] := κ for each a ∈ E s.t. κ is a cryptographic key being random but globally unique for each value of each tuple. W.l.o.g. we suppose that A R := {a 1 , . . . , a h , a h+1 , . . . , a k , a k+1 , . . . , a n } is the set of attributes of R|A R |SC R and that the columns of the instances r, f 1 and f 2 are rearranged as visualized in Fig. 2. The columns h + 1, . . . , k differ in the interpretation of the values stored in the instances r, f 1 and f 2 : although each of the tuples µ ∈ r, ν 1 ∈ f 1 and ν 2 ∈ f 2 assigns values to the attributes a h+1 , . . .
, a k , µ[a j ] is a plaintext value, ν 1 [a j ] is a ciphertext value and ν 2 [a j ] is a cryptographic key. In contrast, for a 1 , . . . , a h (a k+1 , . . . , a n , respectively) corresponding tuples of r and f 1 (r and f 2 ) share the same combination of values. To enable an authorized user having access to both fragment instances f 1 and f 2 to query all information contained in the original instance r, fragmentation ensures that in f 1 and f 2 exactly those two tuples ν 1 ∈ f 1 and ν 2 ∈ f 2 corresponding to a tuple of r share the same unique tuple ID (item (a) of Def. 1). Thus, if ν 1 [a tid ] = ν 2 [a tid ], the two tuples ν 1 ∈ f 1 and ν 2 ∈ f 2 can be recomposed to a tuple of r with the help of a binary recomposition operation matching them on this tuple ID. As the goal is to achieve confidentiality by fragmentation, a formal declaration of confidentiality requirements is indispensable. In [START_REF] Aggarwal | Two can keep a secret: A distributed architecture for secure database services[END_REF][START_REF] Ganapathy | Distributing data for secure database services[END_REF] this is obtained by defining a set of so-called confidentiality constraints on schema level. Definition 2 (Confidentiality Constraint). A confidentiality constraint c over a relational schema R|A R |SC R is a non-empty subset c ⊆ A R . Semantically, a confidentiality constraint c claims that each combination of values allocated to the set c ⊆ A R of attributes in the original instance r over schema R|A R |SC R should neither be contained completely in the unencrypted part of f 1 nor be contained completely in the unencrypted part of f 2 . Definition 3 (Confidentiality of Fragmentation). Let R|A R |SC R be a relational schema, (F, E) a fragmentation of R|A R |SC R according to Def. 1 and C a set of confidentiality constraints over R|A R |SC R according to Def. 2. (F, E) is confidential w.r.t. C iff c ⊈ (A F1 \ E) and c ⊈ (A F2 \ E) for each c ∈ C. A Logic-Oriented View on Fragmentation In this section we will present a logic-oriented modelling of fragmentation, for conciseness mostly focussing on the attacker's point of view resulting from his knowledge of the fragment instance f 1 , which is supposed to be known to him. To set up the universe of discourse, we start by defining the set P of predicate symbols of a language L of first-order logic with equality. First, to model the attacker's knowledge about the fragment instance f 1 , we need the predicate symbol F 1 ∈ P with arity k + 1 = |A F1 | (including the additional tuple ID attribute plus k original attributes (cf. Fig. 2)). Second, to capture the attacker's awareness of the fragmentation, in particular his partial knowledge about the hidden original instance r and the separated second fragment instance f 2 , we additionally use the predicate symbols R with arity n = |A R | and F 2 with arity n -h + 1 = |A F2 |. Additionally, the distinguished predicate symbol ≡ ∉ P is available in L for expressing equality. We employ the binary function symbols E and D for modelling the attacker's knowledge about the encryption function Enc and the inverse decryption function Dec. Finally, we denote tuple values by elements of the set Dom of constant symbols, which will be employed as the universe of (Herbrand) interpretations for L as well.
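Definitions 1-3 translate almost directly into code. The following Python sketch (our own illustration) builds f 1 and f 2 according to Definition 1 with a toy additive one-time-pad style Enc over integers (chosen only because it satisfies the group properties assumed above) and checks the confidentiality condition of Definition 3 for the fragmentation of Example 3.

```python
import secrets

M = 2**32  # toy value domain U = Z_M; Enc(v, k) = v + k mod M satisfies the group properties

def enc(v, k): return (v + k) % M
def dec(e, k): return (e - k) % M

def fragment(r, A_F1, A_F2, E):
    """Builds fragment instances f1, f2 according to Definition 1 (values are integers here)."""
    f1, f2 = [], []
    for tid, mu in enumerate(r):
        nu1, nu2 = {"tid": tid}, {"tid": tid}
        for a in A_F1 - E - {"tid"}:
            nu1[a] = mu[a]
        for a in A_F2 - E - {"tid"}:
            nu2[a] = mu[a]
        for a in E:                       # the ciphertext goes to f1, the key to f2
            kappa = secrets.randbelow(M)
            nu1[a], nu2[a] = enc(mu[a], kappa), kappa
        f1.append(nu1); f2.append(nu2)
    return f1, f2

def is_confidential(A_F1, A_F2, E, constraints):
    """Definition 3: no constraint may be fully contained in an unencrypted fragment part."""
    return all(not c <= (A_F1 - E) and not c <= (A_F2 - E) for c in constraints)

# Example 3: A_R = {a1, a2, a3, a4}, E = {a4}, C = {{a1, a3}, {a4}}.
A_F1, A_F2, E = {"tid", "a1", "a2", "a4"}, {"tid", "a3", "a4"}, {"a4"}
C = [{"a1", "a3"}, {"a4"}]
print(is_confidential(A_F1, A_F2, E, C))      # True: the splitting satisfies Definition 3
r = [{"a1": 10, "a2": 20, "a3": 30, "a4": 40}]
f1, f2 = fragment(r, A_F1, A_F2, E)
assert dec(f1[0]["a4"], f2[0]["a4"]) == 40    # authorized recomposition via the shared tid
```

As Example 3 illustrates, satisfying Definition 3 alone does not yet guarantee inference-proofness once a priori knowledge is taken into account; that is exactly the gap the logic-oriented modelling below addresses.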
In compliance with CIE (e.g., [START_REF] Biskup | Controlled query evaluation with open queries for a decidable relational submodel[END_REF][START_REF] Biskup | A sound and complete model-generation procedure for consistent and confidentiality-preserving databases[END_REF]) this set is assumed to be fixed and infinite. Further, we have an infinite set Var of variables. As usual, the formulas contained in L are constructed inductively using the quantifiers ∀ and ∃ and the connectives ¬, ∧, ∨ and ⇒. Closed formulas, i.e., formulas without free occurrences of variables, are called sentences. This syntactic specification is complemented with a semantics which reflects the characteristics of databases by means of so-called DB-Interpretations according to [START_REF] Biskup | Controlled query evaluation with open queries for a decidable relational submodel[END_REF][START_REF] Biskup | A sound and complete model-generation procedure for consistent and confidentiality-preserving databases[END_REF]: Definition 4 (DB-Interpretation). Given the language L described above, an interpretation I over a universe U is a DB-Interpretation for L iff (i) Universe U := I(Dom) = Dom, (ii) I(v) = v ∈ U for every constant symbol v ∈ Dom, (iii) I(E)(v, κ) = e iff Enc(v, κ) = e, for all v, κ, e ∈ U, (iv) I(D)(e, κ) = v iff Dec(e, κ) = v, for all v, κ, e ∈ U, (v) every P ∈ P with arity m is interpreted by a finite relation I(P ) ⊂ U m , (vi) the predicate symbol ≡ / ∈ P is interpreted by I(≡) = {(v, v) | v ∈ U}. If item (v) is instantiated by taking the instances r, f 1 and f 2 as interpretations of P =R, F 1 , and F 2 , respectively, the resulting DB-Interpretation I r,f1,f2 -or just I r for short if f 1 and f 2 are derived from r according to Def. 1 -is called induced by r (and f 1 and f 2 ). The notion of satisfaction/validity of formulas in L by a DB-Interpretation is the same as in usual first-order logic. A set S ⊂ L of sentences implies/entails a sentence Φ ∈ L (written as S |= DB Φ) iff each DB-Interpretation I satisfying S (written as I |= M S) also satisfies Φ (written as I |= M Φ). Considering an attacker knowing the fragment instance f 1 , the attacker's positive knowledge about the tuples explicitly recorded in f 1 can be simply modelled logic-orientedly by adding an atomic sentence F 1 (ν[a tid ], ν[a 1 ], . . . , ν[a k ]) for each tuple ν ∈ f 1 . As the original instance r -and so its fragment instance f 1 -is assumed to be complete1 , each piece of information expressible in L which is not contained in r (f 1 , resp.) is considered to be not valid by Closed World Assumption (CWA). The concept of DB-Interpretations fully complies with the semantics of complete relational instances. Accordingly, an attacker knows that each of the infinite combinations of values (v tid , v 1 , . . . , v k ) ∈ Dom k+1 not con- tained in any tuple of f 1 leads to a valid sentence ¬F 1 (v tid , v 1 , . . . , v k ). As this negative knowledge is not explicitly enumerable, it is expressed implicitly by a so-called completeness sentence (cf. [START_REF] Biskup | Controlled query evaluation with open queries for a decidable relational submodel[END_REF]) having a universally quantified variable X j for each attribute a j ∈ A F1 (sentence (2) of Def. 5 below). This completeness sentence expresses that every constant combination (v tid , v 1 , . . . , v k ) ∈ Dom k+1 (substituting the universally quantified variables X tid , X 1 , . . . X k ) either appears in f 1 or satisfies the sentence ¬F 1 (v tid , v 1 , . . . , v k ). 
By construction, this completeness sentence is satisfied by any DB-Interpretation induced by f 1 . Example 6. For the medical example, the knowledge implicitly taken to be not valid by CWA can be expressed as the following completeness sentence: (∀X t )(∀X S )(∀X N )(∀X H )(∀X D ) [ (X t ≡ 1 ∧ X S ≡ e 1 S ∧ X N ≡ Hellmann ∧ X H ≡ e 1 H ∧ X D ≡ White) ∨ (X t ≡ 2 ∧ X S ≡ e 2 S ∧ X N ≡ Dooley ∧ X H ≡ e 2 H ∧ X D ≡ Warren) ∨ (X t ≡ 3 ∧ X S ≡ e 3 S ∧ X N ≡ McKinley ∧ X H ≡ e 3 H ∧ X D ≡ Warren) ∨ (X t ≡ 4 ∧ X S ≡ e 4 S ∧ X N ≡ McKinley ∧ X H ≡ e 4 H ∧ X D ≡ Warren) ∨ ¬F 1 (X t , X S , X N , X H , X D ) ] Based on the explanations given so far, an attacker's knowledge about the fragment instance f 1 can be formalized logic-orientedly as follows: Definition 5 (Logic-Oriented View on f 1 ). Given a fragment instance f 1 over F 1 |A F1 |SC F1 according to Def. 1 with A F1 = {a tid , a 1 , . . . , a k }, the positive knowledge contained in f 1 is modelled in L by the set of sentences db + f1 := {F 1 (ν[a tid ], ν[a 1 ], . . . , ν[a k ]) | ν ∈ f 1 } . (1) The implicit negative knowledge contained in f 1 is modelled in L by the singleton set db - f1 containing the completeness sentence (∀X tid ) . . . (∀X k )   ν∈f1   aj ∈A F 1 (X j ≡ ν[a j ])   ∨ ¬F 1 (X tid , X 1 , . . . , X k )   . ( 2 ) Moreover the functional dependency a tid → {a 1 , . . . , a k } ∈ SC F1 is modelled in L by the singleton set fd F1 containing the sentence (∀X tid ) (∀X 1 ) . . . (∀X k ) (∀X 1 ) . . . (∀X k ) [ F 1 (X tid , X 1 , . . . , X k ) ∧ F 1 (X tid , X 1 , . . . , X k ) ⇒ (X 1 ≡ X 1 ) ∧ . . . ∧ (X k ≡ X k ) ] . (3) Overall the logic-oriented view on f 1 in L is db f1 := db + f1 ∪ db - f1 ∪ fd F1 . Proposition 1. Under the assumptions of Def. 5, the sentences (1), ( 2) and ( 3) of db f1 are satisfied by the DB-interpretation I r , i.e., I r |= M db f1 . Proof. Direct consequence of the definitions. An attacker is assumed to know the process of fragmentation as well as the schemas R|A R |SC R and F 2 |A F2 |SC F2 of the instances kept hidden from him. Thus he can infer that for each tuple ν 1 ∈ f 1 there are tuples ν 2 ∈ f 2 and µ ∈ r satisfying the equation ν 1 ν 2 = µ. So, an attacker knows all values assigned to the set (A F1 \ E) ∩ A R of unencrypted attributes in µ from his knowledge of ν 1 , whereas in general he only knows the existence of values for the remaining attributes (sentence (4) of Def. 6 below). Similarly, the attacker is not able to infer the cleartext values assigned to the attributes of E in µ: by the group properties of the encryption function, each ciphertext considered might be mapped to each possible cleartext without knowing the specific key hidden in fragment f 2 . Next, an attacker knows that a tuple ν 2 ∈ f 2 can only exist if also corresponding tuples ν 1 ∈ f 1 and µ ∈ r satisfying the equation ν 1 ν 2 = µ exist (sentence (5) of Def. 6 below). According to the if-part of sentence [START_REF] Biskup | A sound and complete model-generation procedure for consistent and confidentiality-preserving databases[END_REF] this requirement analogously holds for the existence of each tuple of r. The only-if-part of sentence [START_REF] Biskup | A sound and complete model-generation procedure for consistent and confidentiality-preserving databases[END_REF] describes the fact that the (hypothetical) knowledge of both tuples ν 1 ∈ f 1 and ν 2 ∈ f 2 with ν 1 [a tid ] = ν 2 [a tid ] would enable the attacker to reconstruct the tuple µ ∈ r satisfying µ = ν 1 ν 2 completely. 
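The recomposition just mentioned can be sketched as a direct continuation of the illustrative code above (again only an illustration, reusing the hypothetical fragment layout and the enc/dec helpers introduced there):

def recompose(nu1, nu2):
    # The recomposition of a tuple of r from matching fragment tuples: only someone who
    # sees both fragments, i.e. ciphertext and key for every encrypted attribute, can do this.
    assert nu1["tid"] == nu2["tid"]          # item (a) of Def. 1: shared tuple identifier
    mu = {}
    for a in A_R:
        if a in E:
            mu[a] = dec(nu1[a], nu2[a])      # decrypt: ciphertext from f1, key from f2
        elif a in A_F1_bar:
            mu[a] = nu1[a]                   # cleartext stored only in f1
        else:
            mu[a] = nu2[a]                   # cleartext stored only in f2
    return mu

# For every instance r: r == [recompose(nu1, nu2) for nu1, nu2 in zip(*fragment(r))]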
Based on the one-to-one correspondence between each tuple µ ∈ r and a tuple ν 1 ∈ f 1 (ν 2 ∈ f 2 , resp.), observing that two different tuples ν 1 , ν 1 ∈ f 1 are equal w.r.t. the values allocated to the unencrypted attributes of (A F1 \ E) ∩ A R , an attacker can reason that there are also two tuples µ, µ ∈ r which are equal w.r.t. the values allocated to these attributes, but differ in at least one of the values allocated to A R \ (A F1 \ E) (sentence (7) (sentence [START_REF] Ciriani | Combining fragmentation and encryption to protect privacy in data storage[END_REF] in case of f 2 ) of Def. 6 below). Otherwise, the instance r would have duplicates. Summarizing, and for now neglecting semantic constraints, an attacker's logic-oriented view on the (hidden) instances r and f 2 can be modelled as follows: Definition 6 (Fragmentation Logic-Oriented). Let (F, E) be a fragmentation of a relational schema R|A R |SC R with instance r and let f 1 and f 2 be the corresponding fragment instances over the fragments F 1 |A F1 |SC F1 ∈ F and F 2 |A F2 |SC F2 ∈ F according to Def. 1. The knowledge about r and f 2 deduced from the knowledge of f 1 is expressed by (∀X tid ) (∀X 1 ) . . . (∀X h ) (∀X h+1 ) . . . (∀X k ) F 1 (X tid , X 1 , . . . , X h , X h+1 , . . . , X k ) ⇒ (∃Y h+1 ) . . . (∃Y k ) (∃Z k+1 ) . . . (∃Z n ) F 2 (X tid , Y h+1 , . . . , Y k , Z k+1 , . . . , Z n ) ∧ R (X 1 , . . . , X h , D (X h+1 , Y h+1 ) , . . . , D (X k , Y k ) , Z k+1 , . . . , Z n ) ; (4) the knowledge about r and f 1 deduced from the knowledge of f 2 is expressed by (∀X tid ) (∀X h+1 ) . . . (∀X k ) (∀X k+1 ) . . . (∀X n ) F 2 (X tid , X h+1 , . . . , X k , X k+1 , . . . , X n ) ⇒ (∃Y 1 ) . . . (∃Y h ) (∃Z h+1 ) . . . (∃Z k ) F 1 (X tid , Y 1 , . . . , Y h , Z h+1 , . . . , Z k ) ∧ R (Y 1 , . . . , Y h , D (Z h+1 , X h+1 ) , . . . , D (Z k , X k ) , X k+1 , . . . , X n ) ; (5) the knowledge about f 1 and f 2 deduced from the knowledge of r as well as the knowledge about r deduced from f 1 and f 2 is expressed by (∀X 1 ) . . . (∀X h ) (∀X h+1 ) . . . (∀X k ) (∀X k+1 ) . . . (∀X n ) R (X 1 , . . . , X h , X h+1 , . . . , X k , X k+1 , . . . , X n ) ⇔ (∃Z tid ) (∃Y h+1 ) . . . (∃Y k ) F 2 (Z tid , Y h+1 , . . . , Y k , X k+1 , . . . , X n ) ∧ F 1 (Z tid , X 1 , . . . , X h , E (X h+1 , Y h+1 ) , . . . , E (X k , Y k )) ; (6) the knowledge about inequalities in r based on f 1 is expressed by (∀X tid ) (∀X tid ) (∀X 1 ) . . . (∀X h ) (∀X h+1 ) . . . (∀X k ) ∀X h+1 . . . (∀X k ) F 1 (X tid , X 1 , . . . , X h , X h+1 , . . . , X k ) ∧ F 1 X tid , X 1 , . . . , X h , X h+1 , . . . , X k ∧ (X tid ≡ X tid ) ⇒ (∃Y h+1 ) . . . (∃Y n ) (∃Z h+1 ) . . . (∃Z n ) R (X 1 , . . . , X h , Y h+1 , . . . , Y k , Y k+1 , . . . , Y n ) ∧ R (X 1 , . . . , X h , Z h+1 , . . . , Z k , Z k+1 , . . . , Z n ) ∧ n j=h+1 (Y j ≡ Z j ) ; (7) and the knowledge about inequalities in r based on f 2 is expressed by (∀X tid ) (∀X tid ) (∀X h+1 ) . . . (∀X k ) ∀X h+1 . . . (∀X k ) (∀X k+1 ) . . . (∀X n ) F 2 (X tid , X h+1 , . . . , X k , X k+1 , . . . , X n ) ∧ F 2 X tid , X h+1 , . . . , X k , X k+1 , . . . , X n ∧ (X tid ≡ X tid ) ⇒ (∃Y 1 ) . . . (∃Y k ) (∃Z 1 ) . . . (∃Z k ) R (Y 1 , . . . , Y h , Y h+1 , . . . , Y k , X k+1 , . . . , X n ) ∧ R (Z 1 , . . . , Z h , Z h+1 , . . . , Z k , X k+1 , . . . , X n ) ∧ k j=1 (Y j ≡ Z j ) . 
(8) This view on r and f 2 is referred to as the set of sentences db R containing the sentences (4), ( 5), ( 6), ( 7) and [START_REF] Ciriani | Combining fragmentation and encryption to protect privacy in data storage[END_REF]. Strictly speaking, db R alone does not provide any knowledge about the relational instance r; instead, only the combination of db f1 and db R describes the knowledge about r that is available to an attacker. The essential part of this insight is formally captured by the following proposition. Proposition 2. Under the assumptions of Def. 6, the sentences (4), ( 5), ( 6), ( 7) and ( 8) of db R are satisfied by the DB-Interpretation I r , i.e., I r |= M db R . Proof. Omitted. See the informal explanations before Definition 6. Note that -in contrast to sentence (6) -the equivalence does not hold for the sentences ( 4) and ( 5), as it can be shown by a straightforward example. Finally, we have to model the confidentiality policy logic-orientedly. A confidentiality constraint c ⊆ A R claims that each combination of (cleartext-)values allocated to the attributes of c should not be revealed to an attacker completely. To specify this semantics more precisely, it is assumed that c only protects those combinations of values which are explicitly allocated to the attributes of c in a tuple of r. In contrast, an attacker may get to know that a certain combination of values is not allocated to the attributes of c in any tuple of r. The wish to protect a certain combination of values (v i1 , . . . , v i ) ∈ Dom |c| is modelled as a "potential secret" in the form of a sentence (∃X) R(t 1 , . . . , t n ) in which t j := v j holds for each j ∈ {i 1 , . . . , i } and all other terms are existentially quantified variables. To protect each of the infinitely many combinations, regardless of whether it is contained in a tuple of r or not, we use a single open formula with free variables X i1 , . . . , X i like an open query as follows. Definition 7 (Confidentiality Policy). Let C be a set of confidentiality constraints over schema R|A R |SC R according to Def. 2. Considering a confidentiality constraint c i ∈ C with c i = {a i1 , . . . , a i } ⊆ {a 1 , . . . , a n } = A R and the set A R \ c i = {a i +1 , . . . , a in }, constraint c i is modelled as a potential secret Ψ i (X i ) := (∃X i +1 ) . . . (∃X in ) R(X 1 , . . . , X n ) , which is a formula in the language L . Thereby X i = (X i1 , . . . , X i ) is the vector of free variables contained in Ψ i (X i ). The set containing exactly one potential secret Ψ i (X i ) constructed as above for every confidentiality constraint c i ∈ C is called potsec(C). Moreover, the expansion ex(potsec(C)) contains all ground substitutions over Dom of all formulas in potsec(C). Example 7. For our example, c 2 = {Name, Illness} is modelled as Ψ 2 (X 2 ) := (∃X S )(∃X H )(∃X D )R(X S , X N , X I , X H , X D ) with free variables X 2 = (X N , X I ). Inference-Proofness of Fragmentation Until now the logic-oriented modelling of an attacker's view only comprises knowledge the attacker can deduce from the outsourced fragment instance f 1 , which is supposed to be visible to him. Additionally, however, the attacker might also employ a priori knowledge to draw harmful inferences. Example 8. As in Example 2, suppose the attacker knows that Doctor White is a psychiatrist only treating patients suffering from the Borderline-syndrome: (∀X S )(∀X N )(∀X I )(∀X H )[R(X S , X N , X I , X H , White) ⇒ (X I ≡ Borderline)] . 
This knowledge enables the attacker to conclude that patient Hellmann suffers from the illness Borderline-syndrome, thereby violating confidentiality constraint c 2 = {Name, Illness}. Moreover, let the attacker additionally know that all patients suffering from the Borderline-syndrome have hurt themselves: (∀X S )(∀X N )(∀X H )(∀X D )[R(X S , X N , Borderline, X H , X D ) ⇒ (X N ≡ X H )] . The attacker can then draw the conclusion that patient Hellmann has been hurt by Hellmann, thereby violating c 3 = {Name, HurtBy}. Following the framework of CIE [START_REF] Biskup | Inference-usability confinement by maintaining inference-proof views of an information system[END_REF], we aim at achieving a sophisticated kind of confidentiality taking care of an attacker's (postulated) a priori knowledge. This a priori knowledge is modelled as a finite set prior of sentences in L containing only R and ≡ as predicate symbols. Moreover, we always assume that the semantic constraints SC R declared in the relational schema are publicly known, i.e., SC R ⊆ prior . Intuitively, we then would like to guarantee that a fragmentation is inference-proof in the sense that -from the attacker's point of view -each of the potential secrets might not be true in the original relational instance r. More formally: for each potential secret Ψ i (v i ) ∈ ex(potsec(C)) there should exist an alternative instance r over R|A R |SC R that witnesses the nonentailment db f1 ∪ db R ∪ prior |= DB Ψ i (v i ). Clearly, deciding on non-entailment, equivalently finding a suitable witness, is computationally infeasible in general. Accordingly, we will have to restrict on approximations and special cases. Regarding approximations, we might straightforwardly require for the witness r that for at least one m ∈ {i 1 , . . . , i } the value v m appearing in the potential secret must not occur under the attribute a m . Accordingly, we could try to substitute v m in the original instance r by a newly selected constant symbol v * to obtain r . However, we also have to preserve indistinguishability of r and r by the attacker, and thus m has to be chosen such that a m / ∈ (A F1 \ E). Furthermore, to fully achieve indistinguishability, the alternative instance r has to coincide with the original instance r on the part visible in fragment f 1 , i.e., I r |= M db f1 , and modifying the original instance r into the alternative r should preserve satisfaction of the a priori knowledge, i.e., I r |= M prior . Regarding special cases, we will adapt two useful properties known from relational database theory [START_REF] Abiteboul | Foundations of Databases[END_REF]. Genericity of a sentence in L perceives constant symbols as being atomic and uninterpreted. Intuitively, all knowledge about a constant symbol arises from its occurrences in the relational instance r. Clearly, sentences with "essential" occurrences of constant symbols will not be generic. But in general "essential" occurrences of constant symbols are difficult to identify. Moreover, renaming v m by v * should not modify the fragment f 1 that is visible to the attacker. Typedness restricts the occurrences of a variable within a sentence to a single attribute (column), and thus prevents a "transfer of information" from a visible attribute to a hidden one. 
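Before stating the main result, the inference chain of Example 8 can be replayed mechanically. The following toy sketch hard-codes the two a priori sentences as forward rules over the cleartext part of f 1 from Fig. 1; it is an illustration of the attack, not part of the formal model.

# Cleartext part of f1 as visible to the attacker (cf. Fig. 1); encrypted columns omitted.
f1_clear = [
    {"tid": 1, "Name": "Hellmann", "Doctor": "White"},
    {"tid": 2, "Name": "Dooley",   "Doctor": "Warren"},
    {"tid": 3, "Name": "McKinley", "Doctor": "Warren"},
    {"tid": 4, "Name": "McKinley", "Doctor": "Warren"},
]

def infer(f1_clear):
    # Replay the two a priori sentences of Example 8 as simple forward rules.
    leaked = []
    for nu in f1_clear:
        if nu["Doctor"] == "White":                               # rule 1: White only treats Borderline
            leaked.append((nu["Name"], "Illness", "Borderline"))  # violates c2 = {Name, Illness}
            leaked.append((nu["Name"], "HurtBy", nu["Name"]))     # rule 2: Borderline patients hurt themselves; violates c3
    return leaked

# infer(f1_clear) yields the harmful conclusions about patient Hellmann.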
We will now state our main result about the achievements of fragmentation with encryption regarding preservation of confidentiality against an attacker who only has access to one of the fragment instances, here exemplarily to fragment instance f 1 . Facing the challenges discussed above, this main result exhibits a sufficient condition for confidentiality. An inference-proof fragmentation of the running example in terms of Theorem 1 is presented in Example 9 of Sect. 5. Theorem 1 (Inference-Proofness on Schema Level). Let R|A R |SC R be a relational schema with A R = {a 1 , . . . , a n } and (F, E) be a fragmentation with fragment F 1 |A F1 |SC F1 ∈ F that is confidential w.r.t. a set C of confidentiality constraints. Moreover, let SC R ⊆ prior be a set of sentences in L containing only R and ≡ as predicate symbols, satisfying the following restrictions: -Untyped dependencies with constants: each Γ ∈ prior is in the syntactic form of (∀x)(∃y)[ j=1,...,p ¬A j ∨ A p+1 ] with A l being an atom of the form R(t l,1 , . . . , t l,n ) or (t p+1,1 ≡ t p+1,2 ) and t j,i is a variable or a constant symbol; moreover, w.l.o.g., equality predicates may only occur positively, and there might also be a conjunction of positively occurring R-atoms. -Satisfiability: prior is DB-satisfiable and each Γ ∈ prior is not DB-tautologic (and thus: each Γ ∈ prior is range-restricted and does not contain an existentially quantified variable in the negated atoms (premises)). -Compatibility with (F, E) and C: there is a subset M ⊆ {h + 1, . . . , n} s.t. (1) M ∩ {i 1 , . . . , i } = ∅ for each c i ∈ C with c i = (a i1 , . . . , a i ); (2) for each Γ ∈ prior there exists a partitioning X Γ 1 ∪ X Γ 2 = Var s.t. (i) for each atom R(t 1 , . . . , t n ) of Γ • for all j ∈ {1, . . . , n} \ M term t j can either be a (quantified) variable of X Γ 1 or a constant symbol of Dom, • for all j ∈ M term t j must be a (quantified) variable of X Γ 2 , (ii) for each atom (X i ≡ X j ) of Γ either X i , X j ∈ X Γ 1 or X i , X j ∈ X Γ 2 , (iii) for each atom (X i ≡ v) of Γ with v ∈ Dom variable X i is in X Γ 1 . Then, inference-proofness is achieved: For each instance r over R|A R |SC R with fragment instance f 1 such that I r |= M prior and for each potential secret Proof (sketch). Consider any Ψ i (v i ) ∈ ex(potsec(C)) with v i = (v i1 , . . . , v i ). Then c i := {a i1 , . . . , a i } ∈ C, and thus by the assumptions there is an attribute a m ∈ c i with m ∈ M ; moreover, either a m ∈ E or a m ∈ ( ĀF2 \ E). Ψ i (v i ) ∈ ex(potsec(C)) we have db f1 ∪ db R ∪ prior |= DB Ψ i (v i ), i.e., Starting the construction of r and thus of the induced I r , to ensure I r |= M db f1 according to Proposition 1, we define f 1 := f 1 and I r (F 1 ) := f 1 . Continuing the construction of I r , we select a constant symbol v * = v m from the infinite set U that does not occur in the finite active domain of π M (r) and define a bijection ϕ : U → U such that ϕ(v m ) = v * and no value of π M (r) is mapped to v m . Then we extend ϕ to a tuple transformation ϕ * that maps a value v for an attribute a j ∈ A R with j ∈ M to ϕ(v) and each value for an attribute a j ∈ A R with j / ∈ M to itself, and define r := ϕ * [r]. Accordingly, the predicate symbol R is interpreted by I r (R) := r . The instance r and its fragment instance f 1 together uniquely determine the corresponding fragment instance f 2 -whose constructability is guaranteed by the group properties of Enc -and thus we define I r (F 2 ) := f 2 . 
By the selection of v * and the definition of ϕ, we immediately have I r |= M Ψ i (v i ), and thus I r complies with property (b). Furthermore, by the construction and according to Proposition 2, I r |= M db R . Finally, we outline the argument to verify the remaining part of property (a), namely I r |= M prior . We consider the following Γ ∈ prior (other cases are treated similarly): (∀x)(∃y)[ Otherwise, for all j ∈ {1, . . . , p} we have I σ r |= M ¬R(t j,1 , . . . , t j,n ) and thus for each tuple µ j := (σ (t j,1 ), . . . , σ (t j,n )) we have µ j ∈ r . Since r := ϕ * [r], for all j ∈ {1, . . . , p} there exists µ j ∈ r such that ϕ * [µ j ] = µ j . Now exploiting the properties of the set M -essentially, for each term exactly one case of the definition of ϕ * applies -we can construct a variable substitution σ : x → Dom such that µ j = (σ(t j,1 ), . . . , σ(t j,n )) and, accordingly, I σ r |= M ¬R(t j,1 , . . . , t j,n ). Since I r |= M Γ , there exists a variable substitution τ : y → Dom such that I σ|τ r |= M R(t p+1,1 , . . . , t p+1,n ), i.e., µ p+1 := (σ|τ (t p+1,1 ), . . . , σ|τ (t p+1,n )) ∈ r. By the definition of r , we have µ p+1 := ϕ * [µ p+1 ] ∈ r . Exploiting the properties of M and using τ , we can construct a variable substitution τ : y → Dom such that µ p+1 = (σ |τ (t p+1,1 ), . . . , σ |τ (t p+1,n )). Theorem 1 provides a sufficient condition for inference-proofness on schema level, i.e., for each relational instance satisfying the a priori knowledge prior . In some situations, however, a security officer might aim at only achieving inferenceproofness of a fixed particular relational instance r. Such a situation could be captured by a corollary. Essentially, if we know r and thus also f 1 in advance, we can inspect the usefulness of each implicational sentence Γ ∈ prior of form (∀x)(∃y)[ j=1,...,p ¬A j ∨A p+1 ] to derive harmful information for the specific situation. If r already satisfies (∀x)[ j=1,...,p ¬A j ], then we can completely discard Γ from the considerations. More generally, we could only consider the effects of Γ for those variable substitutions σ of x that make [ j=1,...,p ¬A j ] false for r. Hence, I Creation of an Appropriate Fragmentation If an attacker is supposed to have a priori knowledge, a fragmentation has to comply with this knowledge to guarantee inference-proofness in terms of Theorem 1. i ∈ {1, 2}, X I , X D ∈ X Γi 1 and X S , X N , X H ∈ X Γi 2 . In the following, an Integer Linear Program (ILP) (see [START_REF] Korte | Combinatorial Optimization: Theory and Algorithms[END_REF]) computing a confidential fragmentation complying with an attacker's a priori knowledge is developed to solve this problem with the help of generic algorithms solving ILPs. 2As the optimization goal the set of "encrypted attributes" is chosen to be minimized to reduce the costs for processing queries over the fragmented database as proposed in [START_REF] Aggarwal | Two can keep a secret: A distributed architecture for secure database services[END_REF][START_REF] Ganapathy | Distributing data for secure database services[END_REF]. Other optimization goals are conceivable, too. Given the attribute set A R of an original schema R|A R |SC R , a set C of confidentiality constraints and a set prior in terms of Theorem 1, the ILP presented in the following computes the attribute sets ĀF1 and ĀF2 as well as the set E of "encrypted attributes" of a fragmentation being confidential w.r.t to C and complying with prior . 
The ILP contains the following binary decision variables: -A variable a i j , for both i ∈ {1, 2} and for each a j ∈ A R . If a i j = 1, attribute a j ∈ A R is in ĀFi ; if a i j = 0, attribute a j ∈ A R is not in ĀFi . -A variable a e j for each a j ∈ A R . If a e j = 1, attribute a j ∈ A R is an "encrypted attribute"; if a e j = 0, attribute a j ∈ A R is a "cleartext attribute". -A variable m j for each a j ∈ A R . If m j = 1, the index of attribute a j is in M ; if m j = 0, the index of attribute a j is not in M . -A variable X Γ for each variable X contained in a sentence Γ ∈ prior . If X Γ = 1, variable X is in X Γ 1 ; if X Γ = 0, variable X is in X Γ 2 . For each Γ ∈ prior the set Var Γ j is assumed to contain X Γ if Γ is built over an atom R(t 1 , . . . , t n ) with t j being the variable X (note that each variable might occur in different columns). Moreover, the set const(Γ ) is assumed to contain the index j, if Γ is built over an atom R(t 1 , . . . , t n ) with t j being a constant. Then, the ILP computing an appropriate fragmentation is defined as follows: Minimize the number of "encrypted attributes", i.e., min: n j=1 a e j s.t. the following constraints are fulfilled: -"Cleartext attributes" in exactly one fragment, "encrypted ones" in both: a 1 j + a 2 j = 1 + a e j for each a j ∈ A R -For i ∈ {1, 2}, fragment F i |A Fi |SC Fi fulfills all confidentiality constraints: aj ∈c a i j ≤ |c| -1 + aj ∈c a e j for each c ∈ C and each i ∈ {1, 2} -M ⊆ {h + 1, . . . , n}, i.e., M is a subset of attributes in A F2 : m j ≤ a 2 j for each a j ∈ A R -M overlaps with the indices of the attributes of each c ∈ C: aj ∈c m j ≥ 1 for each c ∈ C -For each formula Γ ∈ prior : • In each R(t 1 , . . . , t n ) of Γ : for each t j being a constant with j / ∈ M : m j = 0 for each j ∈ const(Γ ) • Partitioning of variables into X Γ 1 and X Γ 2 : X Γ = 1 -m j for j ∈ {1, . . . , n} with Var Γ j = ∅ and each X Γ ∈ Var Γ j • In each atom (X i ≡ X j ): variables X i , X j belong to the same partition: X Γ i = X Γ j for each atom (X i ≡ X j ) • In each atom (X ≡ v): variable X belongs to partition X Γ 1 : X Γ = 1 for each atom (X ≡ v) -Each decision variable of ILP is binary: 0 ≤ x ≤ 1 for each integer decision variable x of this ILP If the ILP solver outputs a feasible solution, an inference-proof fragmentation can be determined by constructing the sets ĀF1 , ĀF2 and E of Def. 1 according to the allocation of the corresponding decision variables of the ILP. Note that availability requirements such as storing a particular subset of attributes within the same (or even a particular) fragment or keeping the values of a particular attribute as cleartext values can be simply modelled by adding appropriate constraints, i.e., (in-)equations, to the ILP. Conclusion and Future Work Motivated by the question, whether splitting of data vertically over two semihonest servers guarantees confidentiality, the fragmentation model introduced in [START_REF] Aggarwal | Two can keep a secret: A distributed architecture for secure database services[END_REF][START_REF] Ganapathy | Distributing data for secure database services[END_REF] is formalized, then modelled logic-orientedly and subsequently analyzed w.r.t. its inference-proofness. This analysis considers an attacker employing his a priori knowledge to draw harmful inferences and provides a sufficient condition to decide whether a given combination of a fragmentation and a priori knowledge is inference-proof w.r.t. a given confidentiality policy. 
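To make the preceding ILP formulation concrete, the core of it can be written down with a generic solver front end. The paper's prototype used the lp_solve solver (see the footnote); purely as an illustration, the sketch below uses the PuLP Python library instead and encodes the fragment-assignment, confidentiality and M-related constraints for the running example. The constraints tying the variable partitions of the sentences in prior to M are omitted for brevity.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

attrs = ["SSN", "Name", "Illness", "HurtBy", "Doctor"]
C = [{"SSN"}, {"Name", "Illness"}, {"Name", "HurtBy"}, {"Illness", "HurtBy"}]

prob = LpProblem("inference_proof_fragmentation", LpMinimize)
a1 = LpVariable.dicts("a1", attrs, cat=LpBinary)   # attribute placed in fragment F1
a2 = LpVariable.dicts("a2", attrs, cat=LpBinary)   # attribute placed in fragment F2
ae = LpVariable.dicts("ae", attrs, cat=LpBinary)   # attribute encrypted
m  = LpVariable.dicts("m",  attrs, cat=LpBinary)   # attribute (index) belongs to M

prob += lpSum(ae[a] for a in attrs)                # minimize the number of encrypted attributes

for a in attrs:
    prob += a1[a] + a2[a] == 1 + ae[a]             # cleartext in exactly one fragment, encrypted in both
    prob += m[a] <= a2[a]                          # M ranges only over attributes of F2

for c in C:
    for ai in (a1, a2):                            # each fragment satisfies every constraint
        prob += lpSum(ai[a] for a in c) <= len(c) - 1 + lpSum(ae[a] for a in c)
    prob += lpSum(m[a] for a in c) >= 1            # M overlaps every constraint

prob.solve()
print("encrypted:", [a for a in attrs if ae[a].value() == 1])
print("cleartext in F1:", [a for a in attrs if a1[a].value() == 1 and ae[a].value() == 0])
print("cleartext in F2:", [a for a in attrs if a2[a].value() == 1 and ae[a].value() == 0])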
Additionally, a generic ILP formulation computing such an inference-proof fragmentation is developed. As Theorem 1 only states a sufficient condition for inference-proofness, there might be a more relaxed, most desirably even necessary definition of a priori knowledge still guaranteeing inference-proofness. A full characterization of inference-proofness could also provide a basis for deciding on the existence of a secure fragmentation for a given setting. Theorem 1 might be also enhanced in the spirit of k-anonymity by a more sophisticated definition of confidentiality guaranteeing that an "invisible value" cannot be narrowed down to a set of possible values of a certain cardinality. A further analysis of confidentiality assuming that commonly used encryption functions such as AES or RSA (which do not satisfy the group properties) come into operation is desirable, too. Although a formal analysis based on probability theory and complexity theory is indispensable to guarantee profound statements, we expect these encryption functions to be "sufficiently secure" in practice. In this article and previously in [START_REF] Biskup | On the Inference-Proofness of Database Fragmentation Satisfying Confidentiality Constraints[END_REF] each one of two existing approaches to achieve confidentiality by vertical fragmentation is analyzed. As a third approach -using an arbitrary number of fragments which are all supposed to be known to an attacker -is presented in [START_REF] Ciriani | Combining fragmentation and encryption to protect privacy in data storage[END_REF], a formal analysis of this approach in the spirit of Theorem 1 might be another challenging task for future work. Example 5 . 5 The fragmentation depicted in Fig. 1 is confidential w.r.t. the set C = {c 1 , c 2 , c 3 , c 4 } of confidentiality constraints such that c 1 = {SSN}, c 2 = {Name, Illness}, c 3 = {Name, HurtBy}, and c 4 = {Illness, HurtBy}. there exists an alternative instance r over R|A R |SC R s.t. (a) I r |= M db f1 ∪ db R ∪ prior , and (b) I r |= M Ψ i (v i ). σ |τ r |= M R(t p+1,1 , . . . , t p+1,n ) and thus I σ |τ r |= M Γ . F This work has been supported by the DFG under grant SFB 876/A5. R SSN Name Illness HurtBy Doctor 1234 Hellmann Borderline Hellmann White 2345 Dooley Laceration McKinley Warren 3456 McKinley Laceration Dooley Warren 3456 McKinley Concussion Dooley Warren F 1 tid 1 2 3 4 SSN e 1 S e 2 S e 3 S e 4 S Name Hellmann Dooley McKinley McKinley HurtBy e 1 H e 2 H e 3 H e 4 H Doctor White Warren Warren Warren F 2 tid 1 2 3 4 SSN κ 1 S κ 2 S κ 3 S κ 4 S HurtBy κ 1 H κ 2 H κ 3 H κ 4 H Illness Borderline Laceration Laceration Concussion ¬R(t j,1 , . . . , t j,n ) ∨ R(t p+1,1 , . . . , t p+1,n )] ,where {t j,1 , . . . , t j,n } ⊆ x ∪ Dom for j ∈ {1, . . . , p} and {t p+1,1 , . . . , t p+1,n } ⊆ x ∪ y ∪ Dom. To demonstrate I r |= M Γ , we inspect any variable substitution σ : x → Dom. If there exists j ∈ {1, . . . , p} such that I σ r |= M ¬R(t j,1 , . . . , t j,n ), we are done. j=1,...,p Hence, an algorithm computing a fragmentation should not only determine an arbitrary fragmentation being confidential in terms of Def. 3. The algorithm should rather consider all of these fragmentations and select one complying with the user's a priori knowledge (if such a fragmentation exists). Reconsidering the a priori knowledge presented in Example 8, this knowledge does not compromise confidentiality if the fragmentation known from Fig.1is modified as depicted in Fig.3. 
Example 9. In terms of Theorem 1, for an attacker knowing f 1 the set M can be chosen to contain the indices of SSN, HurtBy and Name, and for both sentences Γ 1 and Γ 2 of Example 8 the set of variables can be partitioned s.t., for both i ∈ {1, 2}, X I , X D ∈ X Γi 1 and X S , X N , X H ∈ X Γi 2 .
Fig. 3. Inference-proof fragmentation w.r.t. a priori knowledge of Example 8:
F 1 : tid | SSN | Illness | HurtBy | Doctor
1 | e1S | Borderline | e1H | White
2 | e2S | Laceration | e2H | Warren
3 | e3S | Laceration | e3H | Warren
4 | e4S | Concussion | e4H | Warren
F 2 : tid | SSN | HurtBy | Name
1 | κ1S | κ1H | Hellmann
2 | κ2S | κ2H | Dooley
3 | κ3S | κ3H | McKinley
4 | κ4S | κ4H | McKinley
Footnote 1: Though not explicitly stated in [1][2], in this article we follow the usual intuitive semantics of complete instances.
Footnote 2: For our prototype implementation "lp_solve" turned out to be an appropriate and fast ILP solver (see http://lpsolve.sourceforge.net/).
Barsha Mitra email: [email protected] Shamik Sural email: [email protected] Vijayalakshmi Atluri email: [email protected] Jaideep Vaidya email: [email protected] Toward Mining of Temporal Roles Keywords: TRBAC, Role Enabling Base, Temporal role mining, NPcomplete, Greedy heuristic In Role-Based Access Control (RBAC), users acquire permissions through their assigned roles. Role mining, the process of finding a set of roles from direct user-permission assignments, is essential for successful implementation of RBAC. In many organizations it is often required that users are given permissions that can vary with time. To handle such requirements, temporal extensions of RBAC like Temporal-RBAC (TRBAC) and Generalized Temporal Role-Based Access Control (GTRBAC) have been proposed. Existing role mining techniques, however, cannot be used to process the temporal element associated with roles in these models. In this paper, we propose a method for mining roles in the context of TRBAC. First we formally define the Temporal Role Mining Problem (TRMP), and then show that the TRMP problem is NP-complete and present a heuristic approach for solving it. Introduction Role-Based Access Control (RBAC) [START_REF] Sandhu | Role-based access control models[END_REF] has emerged as the de-facto standard for enforcing authorized access to data and resources. Roles play a pivotal part in the working of RBAC. In order to implement RBAC, one of the key challenges is to identify a correct set of roles. The process of defining the set of roles is known as Role Engineering [START_REF] Coyne | Role engineering[END_REF]. It can be of two types: top-down and bottom-up. The top-down approach [START_REF] Roeckle | Process-oriented approach for role-finding to implement role-based security administration in a large industrial organization[END_REF] analyzes and decomposes the business processes into smaller units in order to identify the permissions required to carry out the specific tasks. The bottom-up [START_REF] Vaidya | The role mining problem: Finding a minimal descriptive set of roles[END_REF] approach uses the existing user-permission assignments to determine the roles. Role mining is a bottom-up role engineering technique. It assumes that the user-permission assignments are available in the form a boolean matrix called the UPA matrix. A 1 in cell (i, j) of the UPA denotes that user i is assigned permission j. Role mining takes the UPA matrix as input and produces as output a set of roles, a UA matrix representing which roles are assigned to each user and a PA matrix representing which permissions are included in each role. In many organizations, there is a need for restricting permissions to users only for a specified period of time. In such cases, the available user-permission assignments have temporal information associated with them. The roles that are derived from these temporal user-permission assignments will also have limited temporal duration. The traditional RBAC model is incapable of supporting such temporal constraints associated with roles. In order to capture this temporal aspect of roles, several extensions of RBAC have been proposed like Temporal-RBAC (TRBAC) [START_REF] Bertino | TRBAC: A temporal role-based access control model[END_REF] and Generalized Temporal Role-Based Access Control (GTRBAC) [START_REF] Joshi | A generalized temporal role-based access control model[END_REF]. The TRBAC model supports periodic enabling and disabling of roles. 
This implies that a role can be enabled during a certain set of time intervals and remains disabled for the rest of the time. The set of time intervals during which each role can be enabled is specified in a Role Enabling Base (REB). To the best of our knowledge, none of the existing role mining techniques take into consideration such temporal information while computing the set of roles, and hence cannot be applied for mining roles in TRBAC or GTRBAC models. In this paper, we propose an approach for role mining in the context of the TRBAC model. The problem of finding an optimal and correct set of roles from an existing set of temporal user-permission assignments has been named as the Temporal Role Mining Problem (TRMP), and the process of finding such a set is termed as Temporal Role Mining. We first formally define TRMP and analyze its complexity. We then propose an approach for solving TRMP that works in two phases: i) enumerating a candidate set of roles and ii) selecting a minimal set of roles using a greedy heuristic from the candidate role set and assigning them to the appropriate users so that each user gets his required set of permissions for only the set of time intervals specified in the original temporal user-permission assignments. Our experimental results show how the number of the final set of roles obtained using our approach varies with the number of permissions and also the number of distinct time intervals present. The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 presents some preliminaries related to RBAC and TRBAC. Section 4 defines the problem and analyzes its complexity. Section 5 describes our heuristic approach to solve TRMP. In Section 6, we present experimental results and finally conclude in Section 7 along with directions for future work. Related Work The problem of finding an optimal set of roles from a given set of user-permission assignments is known as the Role Mining Problem (RMP). In [START_REF] Vaidya | The role mining problem: Finding a minimal descriptive set of roles[END_REF] and [START_REF] Vaidya | The role mining problem: A formal perspective[END_REF], the authors formally define RMP and show the problem to be NP-complete. They also map RMP to the Minimum Tiling Problem of databases and use a known heuristic algorithm for finding the minimum tiling of a database to solve RMP. Vaidya et al. [START_REF] Vaidya | Role miner: Mining roles using subset enumeration[END_REF] present an unsupervised approach called RoleMiner which is based on clustering users having similar permissions into groups. The authors present two algorithms: CompleteMiner and FastMiner to enumerate the set of candidate roles. Lu et al. [START_REF] Lu | Optimal boolean matrix decomposition: Application to role engineering[END_REF] model the problem of Optimal Boolean Matrix Decomposition (OBMD) using binary integer programming. This enables them to directly apply a wide range of heuristics for binary integer programming to solve the OBMD problem. Since solving RMP essentially involves optimally decomposing the boolean matrix UPA into two boolean matrices, UA and PA, RMP is modeled as a boolean matrix decomposition problem. In [START_REF] Ene | Fast exact and heuristic methods for role minimization problems[END_REF], it is shown that the role minimization problem is equivalent to the Minimum Biclique Cover (MBC) problem. 
MBC being an NP-hard problem, the authors present a greedy heuristic to find the minimum biclique cover of a bipartite graph which can be used to solve RMP. In [START_REF] Colantonio | Mining stable roles in RBAC[END_REF], [START_REF] Colantonio | Taming role mining complexity in RBAC[END_REF] the authors propose a three-step methodology to reduce the complexity of role mining as well as the administration cost by restricting the process of role mining only to stable user-permission assignments, i.e., user-permission assignments that belong to roles having weight above a predefined threshold value. The unstable assignments are used to create singlepermission roles. Other role mining approaches include role mining with noisy data [START_REF] Molloy | Mining roles with noisy data[END_REF], where the input data is first cleansed to remove the noise before generating candidate roles, role mining based on weights [START_REF] Ma | Role mining based on weights[END_REF] in which a certain weight is associated with each permission depending on its importance, mining roles having low structural complexity and semantic meaning [START_REF] Molloy | Mining roles with multiple objectives[END_REF], and Visual Role Mining (VRM) [START_REF] Colantonio | Visual role mining: A picture is worth a thousand roles[END_REF], which enumerates roles based on a visual analysis of the graphical representation of the user-permission assignments. Xu and Stoller [START_REF] Xu | Algorithms for mining meaningful roles[END_REF] propose algorithms for role mining which optimize a number of policy quality metrics. Verde et al. present an approach in [START_REF] Verde | Role engineering: From theory to practice[END_REF] to make role mining applicable to large datasets and hence scalable. None of the above-mentioned approaches consider the presence of temporal elements in user-permission assignments. We study the problem of role mining in the context of a temporal extension of RBAC, namely, TRBAC [START_REF] Bertino | TRBAC: A temporal role-based access control model[END_REF]. Preliminaries In this section, we present some preliminaries related to RBAC and TRBAC. Role-Based Access Control According to the NIST standard [START_REF] Ferraiolo | Proposed NIST standard for role-based access control[END_REF], the RBAC model consists of the following components: Definition 1. RBAC -U SERS, ROLES, OP S, and OBJS are respectively the set of users, roles, operations, and objects. -U A ⊆ U SERS × ROLES, a many-to-many mapping of user-to-role assignment. -The set of permissions, P RM S. P RM S ⊆ {(op, obj)|op ∈ OP S ∧ obj ∈ OBJS}. -P A ⊆ ROLES × P RM S, a many-to-many mapping of role-to-permission assignment. assigned users(R) = {u ∈ U SERS|(u, R) ∈ U A}, the mapping of role R ∈ ROLES onto a set of users. assigned permissions(R) = {p ∈ P RM S|(p, R) ∈ P A}, the mapping of role R ∈ ROLES onto a set of permissions. Temporal Role-Based Access Control The TRBAC model allows periodic enabling and disabling of roles. Temporal dependencies among such actions are expressed using role triggers. The enabling or disabling of roles is expressed using simple event expressions or prioritized event expressions. Role status expressions, having the form enabled R or ¬enabled R, describe whether a role R is currently enabled or not. Event expressions, role status expressions and role triggers together build up the Role Enabling Base (REB ), which contains various temporal constraints related to the enabling and disabling of roles. 
The model also allows runtime requests to be issued by an administrator to dynamically change the status of a role, so as to be able to react to emergency situations. In order to represent the set of time intervals for which a role can be enabled, the TRBAC model uses the notion of periodic expressions [START_REF] Bertino | TRBAC: A temporal role-based access control model[END_REF]. Periodic time can be represented as [begin, end], P , where P is a periodic expression representing an infinite set of periodic time instants and [begin, end] is a time interval that imposes an upper and a lower bound on the instants of P . The representation of periodic expression is based on the notion of Calender. A calender is a finite set of consecutive time intervals. Let C d , C 1 , ..., C n be a set of calenders. A periodic expression P is defined as: P = n i=1 O i • C i r • C d (1) where O 1 = all, O i ∈ 2 N ∪ {all}, C i C i-1 for i = 2, ..., n, C d C n , and r ∈ N. The symbol denotes sub-calender relationship. The first part of the periodic expression P before the symbol represents the starting points of the set of time intervals denoted by P and the second part denotes the duration of each time interval in terms of a natural number r and calender C d . Mining Roles having Temporal Constraints In this section, we discuss how the user-permission assignments having associated temporal information can be represented, then formally define the temporal role mining problem, and finally present an analysis of its complexity. Temporal UPA Matrix The temporal role mining process takes as input a temporal user-permission assignment relation, which describes the sets of time intervals for which one or more permissions are assigned to each user. Such a user-permission assignment relation can be directly available in an organization or can be derived from the access logs. We represent these temporal user-permission assignments using a Temporal UPA (TUPA) matrix. The rows of the matrix represent the users and the columns represent the permissions. Each cell (u i , p j ) of the matrix contains either a zero or a set T ij of time intervals for which user u i is assigned permission p j . Each set of time intervals T ij is represented using a periodic expression of the form of Eqn. 1. Table 1 shows an example TUPA matrix. In this matrix, user u 1 is assigned permission p 1 for two different sets of time intervals: everyday from 8 am to 9 am and from 10 am to 11 am. u 1 is also assigned p 3 for a single set of time intervals: everyday from 8 am to 9 am. Similarly, u 2 is assigned p 2 everyday from 6 am to 7 am and from 8 am to 10 am. u 2 is also assigned p 3 everyday from 8 am to 9 am. Finally, u 3 is assigned only p 2 for a single set of time intervals: everyday from 9 am to 10 am. It may be observed that, in the TUPA matrix, a user can be assigned different permissions for the same or different sets of time intervals and also the same permission can be assigned to different users for the same or different sets of time intervals. In general, the sets of time intervals that are not equal can be either overlapping or disjoint. Problem Definition The TUPA matrix is given as input to the temporal role mining process. 
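As an illustration of this input, the TUPA of Table 1 could be held in memory as follows; for readability the periodic expressions of Eqn. 1 are flattened to simple daily (start hour, end hour) pairs, which is a simplification assumed only for these sketches.

# TUPA of Table 1: each non-zero entry maps a (user, permission) pair to its set of
# daily time intervals; zero entries are simply absent from the dictionary.
TUPA = {
    ("u1", "p1"): {(8, 9), (10, 11)},
    ("u1", "p3"): {(8, 9)},
    ("u2", "p2"): {(6, 7), (8, 10)},
    ("u2", "p3"): {(8, 9)},
    ("u3", "p2"): {(9, 10)},
}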
The output of the process is a set of roles ROLES, a UA matrix, which is a boolean matrix denoting the roles assigned to each user, a PA matrix, which is a boolean matrix denoting the permissions included in each role and a Role Enabling Base (REB), containing, for each role, a set of time intervals during which the role can be enabled. The permissions are thus available to the users through the assigned roles only during the sets of time intervals during which the corresponding roles are enabled as specified in the REB. We say that the output is consistent with the input TUPA matrix if each user, on being assigned a subset of the set of mined roles, gets a set of permissions for a set of time intervals as he was originally assigned in the given TUPA matrix. Thus, the Temporal Role Mining Problem (TRMP) can be formally defined as: Definition 2. TRMP Given a set of users USERS, a set of permissions PRMS and a temporal userpermission assignment TUPA, find a set of roles ROLES, a user-role assignment UA, a role-permission assignment PA, and an REB consistent with the TUPA, such that the total number of roles is minimized. Complexity Analysis In this subsection, we provide a formal analysis of the complexity of TRMP. Before proceeding with the formal analysis, we first formulate the decision version of TRMP. The Decision-TRMP problem (DTRMP) can be defined as: Definition 3. DTRMP Given a set of users USERS, a set of permissions PRMS, a temporal userpermission assignment TUPA and a positive integer k, is there a set of roles ROLES, a user-role assignment UA, a role-permission assignment PA and an REB, consistent with the TUPA, such that |ROLES |≤ k ? Given a certificate consisting of a set ROLES, a UA, a PA and an REB, it can be verified in polynomial time whether |ROLES|≤ k and whether the ROLES, UA, PA and REB are consistent with the TUPA by finding out the set of time intervals during which each user gets a particular permission through one or more roles assigned to him and comparing with the TUPA. Thus DTRMP is in NP. Next we show that a known NP-complete (or NP-hard) problem is polynomial time reducible to DTRMP. For this, we select RMP. The Decision RMP, which has been shown to be NP-complete [START_REF] Vaidya | The role mining problem: Finding a minimal descriptive set of roles[END_REF], can be stated as: Definition 4. Decision RMP Given a set of users U RM P , a set of permissions P RM P , a user-permission assignment UPA and a positive integer k, is there a set of roles R, a user-role assignment U A RM P and a role-permission assignment P A RM P consistent with the UPA, such that |R| ≤ k ? Given an instance of the Decision RMP, U RM P and P RM P are respectively mapped to USERS and PRMS using identity transformations. UPA is mapped to TUPA where each zero entry of UPA is mapped to a zero entry of TUPA and each non-zero entry of UPA is assigned a fixed set of time intervals, say T 0 in the TUPA. This reduction is in polynomial time. To complete the proof, it is to be shown that the output instance of Decision RMP (consisting of R, UA RM P and PA RM P ) is such that |R | is less than or equal to k if and only if the output instance of DTRMP (consisting of ROLES, UA, PA and REB) is such that |ROLES| has a value less than or equal to k. Given an instance of the Decision RMP, let R, UA RM P and PA RM P constitute the output instance such that |R | ≤ k. 
Now, the output instance of DTRMP can be constructed from the output instance of the Decision RMP as follows: R denotes the set of roles ROLES, UA RM P denotes the UA and PA RM P denotes the PA. The REB is constructed by associating the same set of time intervals corresponding to each of the roles in ROLES. Since the set of time intervals during which each role in ROLES can be enabled are the same, so if the output instance of the Decision RMP is consistent with the given UPA, then the output instance of the DTRMP is also consistent with the given TUPA. Therefore, the output instance of DTRMP constructed from the output instance of Decision RMP is a valid solution of DTRMP. Similarly, it can be shown that given an output instance of DTRMP, a valid solution to Decision RMP can be constructed. Thus, DTRMP produces as output a set of roles ROLES having size k or less, a UA, a PA and an REB if and only if Decision RMP gives a set of roles R of size k or less, a UA RM P and a PA RM P . Therefore, DTRMP is NP-complete. Heuristic Approach for Solving TRMP Since TRMP has been shown to be NP-complete in Section 4, we present a heuristic approach for solving it in this section. It works in two phases: -Candidate Role Generation: This phase enumerates the set of candidate roles from an input TUPA matrix. -Role Selection: This phase selects the least possible number of roles from the candidate roles using a greedy heuristic so that the generated UA, PA and REB together is consistent with the TUPA matrix. Candidate Role Generation A TUPA matrix is given as input to the candidate role generation phase. Each non-zero entry of TUPA represents a triple u i , p j , T ij . This implies that user u i is assigned permission p j for the set of time intervals T ij . Let us denote the set of all such triples by U T . A role is a collection of permissions which is enabled during a certain set of time intervals and can be assigned to a specific set of users. So, each triple of U T can be considered as a role consisting of a single user, a single permission and a set of time intervals. We call such roles as unit roles. In the first phase, a set U nitRoles of all such unit roles is initially constructed. Before going into the details of the successive steps, we show how the creation of roles depends on the interrelationships among the sets of time intervals for which permissions are assigned to users. Let a user u 1 be assigned two permissions, namely, p 1 for the set of time intervals T 11 and p 2 for the set of time intervals T 12 . Now, three scenarios may arise. -If T 11 = T 12 , then a role r will be created containing p 1 and p 2 . r will be enabled during T 11 . -If T 11 ∩ T 12 = φ, i.e., T 11 and T 12 are disjoint, then two roles: r 1 containing p 1 , and r 2 containing p 2 will be created. r 1 will be enabled during T 11 and r 2 will be enabled during T 12 . -If T 11 ∩ T 12 = φ, i.e., T 11 and T 12 have a non-empty intersection, then three roles will be created: r 1 containing p 1 and p 2 which will be enabled during T 11 ∩T 12 , r 2 containing only p 1 which will be enabled during T 11 -(T 11 ∩T 12 ), and r 3 containing only p 2 which will be enabled during T 12 -(T 11 ∩ T 12 ). Either one of r 2 and r 3 might be superfluous depending on whether T 11 ⊂ T 12 or T 12 ⊂ T 11 . Thus, it is seen that depending upon how the various sets of time intervals are related to one another, they may have to be split differently while creating roles. 
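The splitting just described boils down to intersection and difference on sets of time intervals. The helpers below sketch these two operations over the simplified half-open (start, end) pairs used in the earlier TUPA encoding; handling full periodic expressions would need a richer interval type.

def intersect(T1, T2):
    # Pairwise intersection of two sets of half-open (start, end) intervals.
    out = set()
    for (s1, e1) in T1:
        for (s2, e2) in T2:
            s, e = max(s1, s2), min(e1, e2)
            if s < e:
                out.add((s, e))
    return out

def difference(T1, T2):
    # T1 minus T2: subtract every interval of T2 from every interval currently kept.
    out = set(T1)
    for (s2, e2) in T2:
        nxt = set()
        for (s1, e1) in out:
            if e1 <= s2 or e2 <= s1:     # disjoint: keep unchanged
                nxt.add((s1, e1))
            else:
                if s1 < s2:              # keep the uncovered left piece
                    nxt.add((s1, s2))
                if e2 < e1:              # keep the uncovered right piece
                    nxt.add((e2, e1))
        out = nxt
    return out

# Third scenario above: with T11 = {(8, 9)} and T12 = {(8, 10)},
# intersect(T11, T12) == {(8, 9)} and difference(T12, T11) == {(9, 10)}.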
Moreover, since (T 11 ∩ T 12 ) ⊆ T 11 , (T 11 ∩ T 12 ) ⊆ T 12 and each of T 11 and T 12 is a subset of itself, the set of time intervals during which a role can be enabled is a subset of the common sets of time intervals associated with the permissions included in that role. Since the sets of time intervals associated with the permissions are the non-zero TUPA matrix entries, the set of time intervals during which a role can be enabled can be considered to be a subset of the nonzero TUPA matrix entries. Therefore, each role can be considered to be a subset of one or more triples of U T . After constructing the set of unit roles, a set of initial roles is next constructed as follows: For each user, if a common set of time intervals exists among all the permissions assigned to him, then the user, the permissions and the common set of time intervals are put together to form a role. If after creation of this role, corresponding to any one or more permissions, there remains any set of time intervals that is not included in the role, separate roles are created for each of those permissions by including the corresponding user and the remaining set of time intervals. If no common set of time intervals exists among all the permissions assigned to the user, then separate roles are created by combining the user, each of the permissions and the corresponding sets of time intervals. We call the set of all the initial roles as InitialRoles. In the final step of phase 1, a generated set of roles is constructed by performing pairwise intersection between the members of InitialRoles. We call this set as GeneratedRoles. For any two roles i and j of InitialRoles, let u be the user associated with i and u be the user associated with j. If both i and j have one or more common permissions and the associated sets of time intervals have a non-empty intersection, then the following roles are created: a role r 1 containing the users u and u , the permissions common to both i and j, and the common set of time intervals between i and j. a role r 2 containing user u, the permissions common to both i and j, and the remaining set of time intervals (if any, after creating r 1 ) associated with role i. a role r 3 containing user u , the permissions common to both i and j, and the remaining set of time intervals (if any, after creating r 1 ) associated with role j. a role r 4 containing user u, permissions of i which are not present in j, and the set of time intervals associated with i. a role r 5 containing user u , permissions of j which are not present in i, and the set of time intervals associated with j. Roles r 2 and r 3 are created by combining the sets of time intervals that get split as a result of creating role r 1 with the appropriate users and permissions. After creating the generated set of roles, the candidate set of roles CandidateRoles is created by taking the union of U nitRoles, InitialRoles and GeneratedRoles. If the sets of time intervals associated with any two roles in CandidateRoles are the same and either their permission sets are identical or the user sets associated with them are identical, then the two roles are merged to create a single role by either taking union of their user sets or that of their permission sets respectively. completes phase 1. Role Selection The set of candidate roles created in the Candidate Role Generation phase is given as input to the Role Selection phase. 
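The pairwise-intersection step of phase 1 can be sketched as follows, reusing the interval helpers above; roles are represented as (users, permissions, intervals) triples with sets for the first two components, and the final merging of duplicate roles is left out of this sketch.

from itertools import combinations

def pairwise_generate(initial_roles):
    # initial_roles: list of roles (users, perms, intervals); builds the roles r1..r5
    # described above for every pair of initial roles with common permissions and
    # overlapping sets of time intervals.
    generated = []
    for (U1, P1, T1), (U2, P2, T2) in combinations(initial_roles, 2):
        common_p = P1 & P2
        common_t = intersect(T1, T2)
        if not common_p or not common_t:
            continue
        generated.append((U1 | U2, common_p, common_t))              # r1: shared part
        generated.append((U1, common_p, difference(T1, common_t)))   # r2: leftover intervals of i
        generated.append((U2, common_p, difference(T2, common_t)))   # r3: leftover intervals of j
        generated.append((U1, P1 - common_p, T1))                    # r4: permissions only in i
        generated.append((U2, P2 - common_p, T2))                    # r5: permissions only in j
    # drop degenerate roles whose permission or interval set became empty
    return [role for role in generated if role[1] and role[2]]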
In this phase, a minimal cardinality subset of CandidateRoles is selected so that each user after being assigned a subset of the set of selected roles, gets each of the permissions assigned to him for only those sets of time intervals as specified in the TUPA matrix. As already mentioned, each role can be considered to be a subset of the set U T . A set of such roles can be considered to be a set of subsets of U T . We say that a role r covers a triple u i , p j , T ij of U T if r contains the user u i , permission p j and either a proper or an improper subset of the set of time intervals T ij . Each role covers one or more triples of the set U T and each triple of U T can be covered by more than one role (if a set of time intervals corresponding to a non-zero TUPA entry gets split into two or more sets of intervals during role creation). So, there arises a need to distinguish between fully covered and partially covered triples. Definition 5. Fully Covered, Partially Covered Let T im be a set of time intervals corresponding to the triple t = u i , p m , T im and r k be a role such that r k = ({u i , u j }, {p m , p n }, {T k }). If T im ⊆ T k , then r k fully covers the triple t. If T k ⊂ T im , then r k partially covers t. The task of the role selection phase is to select the minimum number of roles to fully cover all the triples of U T . It uses the following greedy heuristic: at each stage, the role that fully covers the maximum number of uncovered triples is selected. If more than one role fully covers the maximum number of triples, the tie is broken by selecting the role that partially covers the maximum number of triples. This is done because the sets of time intervals remaining after a number of sets of time intervals get partially covered may be covered by a single role if the corresponding user and permission sets are the same. At each stage, after selecting a role, the fully covered triples of U T are marked appropriately and the partially covered ones are updated. After all the triples are fully covered, it is checked whether any two or more of the selected roles can be merged. If so, the appropriate roles are merged into a single role. In the merging step at the end of the candidate role generation phase, the merged roles are not compared with each other for further merging. It may so happen that two or more roles created as a result of merging at the end of the previous phase can be merged further to form a single role. If these roles are selected by the role selection phase, then they are merged in this final merging step. If not, then no merging is required. This final merging step can further reduce the number of roles. 
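A direct transcription of this greedy heuristic is sketched below (again over the simplified interval representation and the role triples used in the earlier sketches, and without the final merging step).

def select_roles(candidate_roles, UT):
    # UT maps each (user, permission) pair to the set of intervals still to be covered;
    # candidate roles are (users, perms, intervals) triples.
    remaining = {key: set(T) for key, T in UT.items()}
    selected = []
    while any(remaining.values()):
        def score(role):
            users, perms, T = role
            full = partial = 0
            for (u, p), need in remaining.items():
                if need and u in users and p in perms:
                    if not difference(need, T):
                        full += 1                  # role fully covers this triple
                    elif intersect(need, T):
                        partial += 1               # role only partially covers it
            return (full, partial)                 # ties on full coverage broken by partial coverage
        best = max(candidate_roles, key=score)
        selected.append(best)
        users, perms, T = best
        for (u, p), need in remaining.items():
            if u in users and p in perms:
                remaining[(u, p)] = difference(need, T)
    return selected

# Termination is guaranteed as long as candidate_roles contains the unit roles, since a
# unit role always fully covers whatever remains of its own triple.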
Algorithm 1 Enumerate Candidate Roles Require: P (u): the set of permissions assigned to user u Require: U (i): the set of users associated with role i Require: T (P (u)): the common set of time intervals among all the permissions assigned to user u Require: Tup: the set of time intervals for which user u is assigned permission p Require: T (x): the set of time intervals during which role x can be enabled 1: Initialize U nitRoles, InitialRoles, GeneratedRoles, CandidateRoles as empty sets 2: for each triple ({u}, {p}, {Tup}) corresponding to a non-zero entry of TUPA do 3: U nitRoles ← U nitRoles ∪ ({u}, {p}, {Tup}) 4: end for 5: for each user u ∈ T U P A do 6: if T (P (u)) = φ then 7: InitialRoles ← InitialRoles ∪ ({u}, {P (u)}, {T (P (u))}) 8: for each p ∈ P (u) do 9: InitialRoles ← InitialRoles ∪ ({u}, {p}, {Tup -T (P (u))}) 17: for each role j ∈ InitialRoles do 18: if {i ∩ j} = φ and T (i) ∩ T (j) = φ then 19: {i ∩ j} denotes the common set of permissions of roles i and j 20: GeneratedRoles ← GeneratedRoles ∪ ({u, u }, {i ∩ j}, {T (i) ∩ T (j)}) ∪ ({u}, {i ∩ j}, {T (i) -(T (i) ∩ T (j))}) ∪ ({u }, {i ∩ j}, {T (j) -(T (i) ∩ T (j))}) ∪ ({u}, {i -(i ∩ j)}, {T (i)}) ∪ ({u }, {j -(i ∩ j)}, {T (j)}) u and u are the users associated with roles i and j respectively 21: end if 22: end for 23: end for 24: CandidateRoles ← U nitRoles ∪ InitialRoles ∪ GeneratedRoles 25: for each i, j ∈ CandidateRoles do 26: if T (i) = T (j) then 27: if i = j then permission sets of i and j are same 28: CandidateRoles ← {CandidateRoles -i -j} ∪ ({U (i) ∪ U (j)}, {i}, {T (i)}) 29: else 30: if U (i) = U (j) then 31: CandidateRoles ← {CandidateRoles -i -j} ∪ ({U (i)}, {i ∪ j}, {T (i)}) 32: end if 33: end if 34: end if 35: end for The algorithm for enumerating the set of candidate roles is given in Algorithm 1. The sets U nitRoles, InitialRoles, GeneratedRoles and CandidateRoles are initialized as empty sets in line 1. The set U nitRoles is created from the set of triples corresponding to the non-zero entries of the TUPA matrix in lines 2 -4. Lines 5 -14 create the set of initial roles. The set of generated roles is created in lines 15 -23 by performing pairwise intersection between the members of InitialRoles. The set CandidateRoles is created in line 24 by taking union of U nitRoles, InitialRoles and GeneratedRoles. Finally, for each candidate role, it is checked whether it can be merged with any other candidate role, and if possible, the two roles are merged (lines 25 -35). The algorithm for selecting the minimal set of roles from CandidateRoles is shown in Algorithm 2. Select Final Roles takes as input the set CandidateRoles. It keeps track of the number of triples of U T that remain uncovered, using entry count. entry count is initialized to the number of triples of U T (line 1). The while loop of lines 2 -9 selects a role that fully covers the maximum number of uncovered triples in each iteration (line 3) until all the triples are fully covered. If there is a tie, the role that partially covers the maximum number of uncovered triples is selected (line 5). U T is updated in line 7 by marking appropriately the triples which get fully covered and by modifying the ones which get partially covered. entry count is next updated (line 8). Finally, when no uncovered triples are left the set of selected roles is checked to determine if any of the roles can be merged. If so, the appropriate roles are merged (line 10). Lastly, the UA, PA and REB are constructed from the set of selected roles (line 11). 
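As a companion to the listing above, the candidate-generation phase can be sketched in Python as follows. This is a hedged illustration, not a transcription of Algorithm 1: the TUPA matrix is assumed to be a dictionary mapping each user to a permission-to-interval-set dictionary, intervals are hashable values, and the final merging of candidate roles (lines 25-35) is left out.

def unit_roles(tupa):
    return [(frozenset([u]), frozenset([p]), frozenset(iv))
            for u, perms in tupa.items() for p, iv in perms.items()]

def initial_roles(tupa):
    roles = []
    for u, perms in tupa.items():
        common = set.intersection(*(set(iv) for iv in perms.values()))
        if common:
            roles.append((frozenset([u]), frozenset(perms), frozenset(common)))
            for p, iv in perms.items():
                rest = set(iv) - common
                if rest:                 # leftover intervals get their own role
                    roles.append((frozenset([u]), frozenset([p]), frozenset(rest)))
        else:                            # no common interval: one role per permission
            for p, iv in perms.items():
                roles.append((frozenset([u]), frozenset([p]), frozenset(iv)))
    return roles

def generated_roles(initial):
    out = []
    for k, (ui, pi, ti) in enumerate(initial):
        for uj, pj, tj in initial[k + 1:]:
            cp, ct = pi & pj, ti & tj
            if cp and ct:
                u, v = next(iter(ui)), next(iter(uj))   # initial roles carry a single user
                out.append((frozenset([u, v]), cp, ct))
                if ti - ct: out.append((frozenset([u]), cp, ti - ct))
                if tj - ct: out.append((frozenset([v]), cp, tj - ct))
                if pi - cp: out.append((frozenset([u]), pi - cp, ti))
                if pj - cp: out.append((frozenset([v]), pj - cp, tj))
    return out

The candidate set is then the union of the three lists, followed by the merging step described in the text.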
Algorithm 2 Select Final Roles Require: entry count: the number of uncovered triples of U T Require: f ully covered: the number of triples that are fully covered in a particular iteration 1: entry count ← |U T | 2: while entry count > 0 do 3: select a role that fully covers the maximum number uncovered triples 4: if there is a tie then 5: select the role that partially covers the maximum number of triples 6: end if 7: update U T 8: entry count ← entry count -f ully covered 9: end while 10: merge roles, if possible, in the set of selected roles 11: create UA, PA and REB from the set of selected roles Illustrative Example We illustrate how our approach works using the TUPA matrix given in Table 2 which is a simplified representation of Table 1. In this table, each non-zero TUPA matrix entry is a set of one or two time intervals. The proposed approach is, however, generic enough to handle complex sets of time intervals represented in the form of periodic expressions as mentioned in Section 3. A. Candidate Role Generation Algorithm 1 constructs the set U nitRoles as follows. U nitRoles = {r 1 = ({u 1 }, {p 1 }, {8 am -9 am}), r 2 = ({u 1 }, {p 1 }, {10 am -11 am}), r 3 = ({u 1 }, {p 3 }, {8 am-9 am}), r 4 = ({u 2 }, {p 2 }, {6 am-7 am}), r 5 = B. Role Selection The set CandidateRoles is given as input to Algorithm 2. In the first iteration, both r 1 and r 5 fully cover the maximum number of uncovered triples, i.e., 2, and neither of them partially covers any of the triples. This tie is broken by selecting r 1 . As a result, triples u 1 , p 1 , {8am -9am} and u 1 , p 3 , {8am -9am} are fully covered. In the next iteration, there is a tie among all the remaining candidate roles as each of them fully covers 1 triple. Among these roles, only r 4 and r 6 each covers 1 triple partially. This tie is broken by selecting r 4 . Now the triple u 2 , p 3 , {8am -9am} gets fully covered and the triple u 2 , p 2 , {8am -10am} after getting partially covered becomes u 2 , p 2 , {9am -10am} . Each of the remaining triples is fully covered by selecting the roles r 6 , r 2 and r 3 one by one. After sorting the roles according to their indices and renaming r 6 to r 5 , the resulting UA, PA and REB are shown in Tables 3, 4 and 5, respectively. Table 3. UA Matrix r1 r2 r3 r4 r5 u1 1 1 0 0 0 u2 0 0 1 1 1 u3 0 0 0 0 1 Experimental Results We test the performance of the proposed temporal role mining algorithm on a number of synthetically generated TUPA matrices. Instead of directly creating random TUPA matrices, we first create UA, PA and REB randomly and then combine them to obtain the random TUPA matrix. For all the datasets, the number of users is fixed at 100. The REB is created by varying the number of distinct time intervals from 1 to 3. When the number of distinct time intervals is 2, we consider three cases that may arise: (i) one time interval is contained in the other (2C) (ii) the two time intervals overlap, but neither is contained in the other (2O) and (iii) the two time intervals are disjoint (2D). When the number of distinct time intervals is 3, we consider 5 scenarios: (i) two intervals overlap, neither one is contained in the other and the third one is disjoint (2O1D) (ii) all the three intervals are disjoint (3D) (iii) one interval is contained in the other and the third is disjoint (2C1D) (iv) two intervals are contained in the third one (3C) and (v) all the three intervals overlap, but no interval is contained in any of the remaining intervals (3O). 
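The synthetic-data step described above ("first create UA, PA and REB randomly and then combine them") is essentially a Boolean matrix product in which each user-permission cell collects the enabling intervals of every connecting role. A small sketch, with the array shapes and the interval representation being our assumptions:

import numpy as np

def combine(ua, pa, reb):
    # ua: m x k 0/1 array, pa: k x n 0/1 array,
    # reb[r]: set of enabling time intervals of role r
    m, k = ua.shape
    n = pa.shape[1]
    tupa = [[set() for _ in range(n)] for _ in range(m)]
    for i in range(m):
        for r in range(k):
            if ua[i, r]:
                for j in range(n):
                    if pa[r, j]:
                        tupa[i][j] |= reb[r]   # user i holds permission j while role r is enabled
    return tupa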
For generating the data, the number of roles is taken to be one-tenth of the number of permissions, except in two cases: (i) when the number of permissions is 10, the number of roles is taken as 2 for both one and two distinct time intervals, and (ii) when the number of distinct time intervals is three and the number of permissions is 10 or 20, the number of roles is taken as 3. To achieve a high confidence level, we created 20 datasets for each parameter setting, and the final number of roles reported is the mean and mode of the outputs of the 20 runs. Table 6 shows the variation of the number of roles with the number of permissions when there is only one distinct time interval for all the user-permission assignments. In this scenario, the temporal role mining problem reduces to non-temporal role mining. The results of Table 6 indicate the correctness of our approach, since the number of roles obtained in each case is the same as the number of roles with which the dataset was generated. Table 7 shows the variation of the number of roles with the number of permissions when the number of distinct time intervals is two. This table shows that as the number of permissions increases, the number of roles also increases. The increase in the number of roles is attributed to the splitting of time intervals during role creation. The increase is relatively small for case 2C because, if a user is assigned a permission for both time intervals, he is in effect assigned the permission for a single time interval, namely the one which contains the other. Still, the number of roles generated is larger than that obtained for one distinct time interval, because a single user can acquire different permissions during either one of the two distinct time intervals, resulting in the splitting of the time intervals during role creation. Case 2O generates a large number of roles, as the two time intervals get split during role creation. Finally, for case 2D, the time intervals do not get split during role creation, so the number of roles is comparatively smaller than in case 2O. In Table 8, we show the variation of the number of roles with the number of permissions for all five cases mentioned above for three distinct time intervals. Here also it is seen that the number of roles increases with the number of permissions. Case 3O generates the maximum number of roles, as overlap among three time intervals results in the maximum amount of time-interval splitting. Cases 3C and 3O generate more roles than cases 2C and 2O respectively, showing that the number of roles generated increases with the number of distinct time intervals. Case 2O1D generates fewer roles than case 3O, as overlap between two time intervals results in less splitting than overlap among three time intervals. The number of roles obtained in case 3D is less than in the remaining four cases, since those cases involve time-interval splitting during role creation, which is completely absent in case 3D. Cases 2O1D and 3O generate more roles than cases 2C1D and 3C respectively, as overlap among time intervals causes a greater amount of splitting than containment of time intervals within one another. Our results show that, as the number of distinct time intervals increases and there exists some overlap among them, the number of roles finally produced also increases, due to the splitting of time intervals during role creation.
The effect of overlap is more significant than that of permissions. Conclusions and Future Directions Temporal role mining is essential for creating roles in systems that assign permissions to users for varying sets of time intervals. In this paper, we have formally defined the Temporal Role Mining Problem (TRMP) and proved it to be NPcomplete. We have proposed an approach for mining roles from temporal userpermission assignments. Our approach first creates a candidate set of roles and then selects a minimal subset of the candidate role set using a greedy heuristic to cover all the requisite assignments. Future work in this area would include designing of other heuristics that can further reduce the number of roles finally obtained. Different optimization metrics besides the number of roles may be defined that would generate more meaningful roles having temporal constraints. An approximate solution approach could be designed that would allow a certain amount of inaccuracy in terms of the time duration for which some users would acquire certain permissions through one or more roles assigned to him. ({u 2 } 2 , {p 2 }, {8 am-10 am}), r 6 = ({u 2 }, {p 3 }, {8 am-9 am}), r 7 = ({u 3 }, {p 2 }, {9 am -10 am})}The set InitialRoles is next constructed by considering each user of the TUPA matrix one at a time.InitialRoles = {r 8 = ({u 1 }, {p 1 , p 3 }, {8 am -9 am}), r 9 = ({u 1 }, {p 1 }, {10 am -11 am}), r 10 = ({u 1 }, {p 3 }, {8 am -9 am}), r 11 = ({u 2 }, {p 2 }, {6 am -7 am}), r 12 = ({u 2 }, {p 2 , p 3 }, {8 am-9 am}), r 13 = ({u 2 }, {p 2 }, {9 am-10 am}), r 14 = ({u 2 }, {p 3 }, {8 am -9 am}), r 15 = ({u 3 }, {p 2 }, {9 am -10 am})}By performing pairwise intersection between the members of InitialRoles, the set of generated roles is obtained.GeneratedRoles = {r 16 = ({u 1 , u 2 }, {p 3 }, {8 am-9 am}), r 17 = ({u 1 }, {p 1 }, {8 am-9 am}), r 18 = ({u 2 }, {p 2 }, {8 am-9 am}), r 19 = ({u 2 , u 3 }, {p 2 }, {9 am-10am})}Finally, after taking union of the three sets of roles created, merging the roles and renaming them, the set CandidateRoles is obtained.CandidateRoles = {r 1 = ({u 1 }, {p 1 , p 3 }, {8 am -9 am}), r 2 = ({u 1 }, {p 1 }, {10 am-11 am}), r 3 = ({u 2 }, {p 2 }, {6 am-7 am}), r 4 = ({u 2 }, {p 2 , p 3 }, {8 am-9 am}), r 5 = ({u 1 , u 2 }, {p 3 }, {8 am -9 am}), r 6 = ({u 2 , u 3 }, {p 2 }, {9 am -10 am}), r 7 = ({u 2 }, {p 2 }, {8 am -10 am})} Table 1 . 1 An Example TUPA Matrix p1 p2 p3 u1 [1/1/2010, ∞], all.Days+ 0 [1/1/2010, ∞], all.Days+ {8}.Hours 1.Hours , {8}.Hours 1.Hours [1/1/2010, ∞], all.Days+ {10}.Hours 1.Hours u2 0 [1/ 1/2010, ∞], all.Days+ [1/1/2010, ∞], all.Days+ {6}.Hours 1.Hours , {8}.Hours 1.Hours [1/1/2010, ∞], all.Days+ {8}.Hours 2.Hours u3 0 [1/1/2010, ∞], all.Days+ 0 {9}.Hours 1.Hours Table 2 . 2 Simplified Representation of the TUPA Matrix given in Table1 Table 4 . 4 PA Matrix Table 5 . 5 REBRoleEnabling Time Interval r1 all.Days + {8}.Hours 1.Hours r2 all.Days + {10}.Hours 1.Hours r3 all.Days + {6}.Hours 1.Hours r4 all.Days + {8}.Hours 1.Hours r5 all.Days + {9}.Hours 1.Hours Table 6 . 6 No. of Roles (Mean|Mode) vs. No. of Permissions when the No. of Distinct Time Intervals is 1 Number of Permissions Number of Roles (Mean|Mode) 10 2.0 |2 20 2.0 |2 30 3.0 |3 40 4.0 |4 Table 7 . 7 No. of Roles (Mean|Mode) vs. No. of Permissions when the No. of Distinct Time Intervals is 2 Number of Roles (Mean|Mode) Number of Permissions 2C 2O 2D 10 2.7 |2 2.8 |2 2.8 |2 20 2.9 |2 3.2 |2 2.7 |2 30 5.5 |3 8.4 |12 5.6 |7 40 7.4 |7 21.8 |23 9.9 |12 Table 8 . 8 No. 
of Roles (Mean|Mode) vs. No. of Permissions when the No. of Distinct Time Intervals is 3

                          Number of Roles (Mean|Mode)
Number of Permissions     2O1D       3D        2C1D       3C        3O
10                        6.2 |7     5.6 |6    5.7 |3     6.6 |6    7.8 |8
20                        7.4 |6     5.2 |6    5.9 |7     7.0 |6    7.7 |3
30                        7.3 |7     6.5 |7    6.7 |7     6.3 |6    9.9 |15
40                        13.3 |12   9.0 |8    11.6 |10   8.5 |4    19.9 |23

Acknowledgement: This work is partially supported by the National Science Foundation under grant numbers CNS-0746943 and 1018414.
41,513
[ "1004167", "986165", "978120", "978061" ]
[ "301693", "301693", "357848", "357848" ]
01490719
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490719/file/978-3-642-39256-6_6_Chapter.pdf
Haibing Lu Yuan Hong email: [email protected] Yanjiang Yang email: [email protected] Lian Duan email: [email protected] Nazia Badar email: [email protected] Towards User-Oriented RBAC Model Keywords: Access Control, Role Mining, Sparseness, Binary, Optimization de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Role-based access control (RBAC) restricts system access to authorized users by assigning permissions to roles and then assigning roles to users. RBAC has become a de facto access control model, due to its many advantages, including the convenience of authorization allocation and the reduction of the system administrative workload. Enterprises still employing their old access control systems want to migrate to RBAC. To accomplish the migration, the first phase is to define a good role set. While the role defining problem is seemingly straightforward, it has been recognized as one of the costliest phases in the implementation of RBAC and poses a great challenge to the system engineers. The difficulty comes from the fact that a RBAC system engineer usually has little knowledge on the semantic meanings of user responsibilities and business processes within an enterprise. Role mining has proven to be an effective (machine-operated) means of discovering a good role set. Its key idea is to utilize data mining technologies to extract patterns from existing permission assignments of the old access control system, which are then used to establish roles. This greatly facilitates the implementation of RBAC (by migrating from the old access control system). In the literature, role mining has been extensively studied. In a nutshell, the existing literature investigates role mining with different objectives, including minimization of the number of roles, minimization of the administration cost, minimization of the complexity of the role hierarchy structure, and others. However, we find that none of the existing works has ever considered to improve enduser experience (of the underlying RBAC system), which should be one ultimate goal for any practical information system. Needless to say, users' experience/perception of a system represents system usability and directly affects the eventual success of the system in practice. As such, we argue that user-friendliness should be an essential criterion for evaluating the quality of role mining. In this paper, we study user-oriented role mining, being the first to explore the role mining problem from the end-user perspective. Our daily experiences tell us that end users often prefer fewer role assignments; as long as a user acquires all the needed permissions, the fewer roles she has to bear, the better usability she may feel upon the system. That is, from the end-user perspective, a good RBAC system should have as sparse user-role assignments as possible. This coincides with an advantage of RBAC: recall that one reason accounting for the wide acceptance of RBAC is that it allows users to carry very few roles while enjoying their (potentially many) access rights. However, on the flip side, if we create a unique role for every user, in which case user-role assignments are trivially the most sparse, then the resultant RBAC system would contain too many roles. This absolutely contradicts to a premise of RBAC, which is to map permission roles with functional roles within an organization. 
As such, a user-oriented RBAC solution should not compromise other advantages of RBAC. To this end, we propose to limit the maximum number of roles a user can take on top of regular role mining. Such a strategy would well balance user friendliness and other system quality factors such as administrative overhead. While the idea is clear, the added constraint poses extra challenges to role mining, considering that role mining in general has already been a hard problem. Towards tackling the obstacle, we make the following contributions: (1) we formulate user-oriented role mining as two specific problems, i.e., user-oriented exact RMP (Role Mining Problem) and user-oriented approximate RMP; (2) in searching for efficient solutions to the formulated problems, we examine several typical role mining algorithms and reveal that they do not meet our needs; (3) in view of the weaknesses of the existing algorithms, we present an efficient algorithm, tailored to the user-oriented role mining problems; (4) to investigate the effectiveness of our algorithm, we conduct experiments on benchmark datasets and obtain promising experimental results. The remainder of the paper is organized as follows. Section 2 reviews existing role mining works in the literature. Section 3 presents the user-oriented role mining problem. Section 4 presents optimization models. The heuristic algorithm is provided in Section 5. Experimental results on benchmark access control data sets are reported in Section 6. Section 7 concludes the paper. Related Work The concept of role engineering was introduced in 1995 by Coyne [START_REF] Coyne | Role engineering[END_REF]. It aims to define an architectural structure that is complete, correct and efficient to specify the organization's security policies and business functions. Coyne's approach is a top-down process oriented strategy for role definition. With the top-down approach, one starts from requirements and successively refine the definitions to reflect the business functions [START_REF] Neumann | A scenario-driven role engineering process for functional rbac roles[END_REF]. Top-down approaches are only suitable for small size enterprises. For medium or large size cases, the bottom-up approach that utilizes the existing user-permission assignments to formulate roles is preferred. In particular, data mining techniques are often employed in the bottom-up approach to identify promising roles. Kuhlmann et al. [START_REF] Kuhlmann | Role mining -revealing business roles for security administration using data mining technology[END_REF] coined the concept of role mining using data mining. In [START_REF] Schlegelmilch | Role mining with orca[END_REF], an algorithm ORCA is proposed to build a hierarchy of permission clusters using data mining technologies. However, overlap between roles is not allowed in ORCA, which contradicts to normal practice in real applications. Vaidya et al. [START_REF] Vaidya | Roleminer: mining roles using subset enumeration[END_REF] propose a subset enumeration approach, which can effectively overcome this limitation. An inherent issue with all of the above approaches is that there is no formal notion for goodness of role. Vaidya et al. [START_REF] Vaidya | The role mining problem: finding a minimal descriptive set of roles[END_REF] propose to use the number of roles to evaluate the goodness of a role set. 
They [START_REF] Vaidya | Edge-rmp: Minimizing administrative assignments for role-based access control[END_REF] also introduce to use the administrative task as an evaluative criterion. Role hierarchy is another important evaluative criterion, as it is closely related to the semantic meanings of roles. Related works on role hierarchy include [START_REF] Molloy | Mining roles with semantic meanings[END_REF]. Ma et al. [START_REF] Ma | Role mining based on weights[END_REF] also use weights to combine multiple objectives. Our work in this paper strengthens this line of research by being the first to incorporate user experience/perception as an extra evaluative criterion in role mining. Beside the above works on finding a role set with respect to different criteria, there are other interesting works with different flavors. Lu et al. [START_REF] Lu | Optimal boolean matrix decomposition: Application to role engineering[END_REF][START_REF] Lu | Constraint-aware role mining via extended boolean matrix decomposition. Dependable and Secure Computing[END_REF] present an optimization framework for role engineering. They even extend their work to incorporate negative authorizations in [START_REF] Lu | Extended boolean matrix decomposition[END_REF]. Frank et al. [START_REF] Frank | A class of probabilistic models for role engineering[END_REF] provide a probabilistic model to analyze the relevance of different kinds of business information for defining roles that can both explain the given user-permission assignments and describe the meanings from the business perspective. They [START_REF] Frank | A probabilistic approach to hybrid role mining[END_REF] also introduce a model to take the attributes of users into account. Studies on role mining in the presence of noisy data are presented in [START_REF] Vaidya | Role mining in the presence of noise[END_REF]. Conflict Resolution In this section, we study and formulate user-oriented role mining. As we have discussed in the introduction, from users' perspective a user-friendly RBAC system should assign as few roles as possible to each user; no user is happy with being overwhelmed by assuming too many roles (titles). Ideally, a user would wish to carry only one role given that the role provides with him all the necessary access privileges for him to work and function smoothly. Indeed, in reality most organization's systems are designed that way. For example, in a school system, the majority of people carry only one role among STU-DENT, FACULTY, STAFF, and VISITOR. In a software company, most employees are either ACCOUNTANT, ENGINEER, or MANAGER. Thus, user-oriented role mining is characterized with the fact that user-permission assignments should be sparse. This gives rise to the definition of user-oriented role mining, as stated below. Definition 1. (User-Oriented Role Mining) User-Oriented Role Mining is to discover a RBAC solution with respect to some evaluation criterions, such that after role assignments users get the same permissions as before and the resultant user-role assignments are sparse. To concretize user-oriented role mining, we have two questions to answer: [START_REF] Coyne | Role engineering[END_REF] what evaluation criterion should be used to evaluate the goodness of a RBAC solution? [START_REF] Ene | Fast exact and heuristic methods for role minimization problems[END_REF] what level of spareness of user-role assignments is appropriate? Towards answering these questions, we examine existing role evaluation criteria for some insights. 
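Before looking at evaluation criteria, note that the two requirements in Definition 1, namely that users keep exactly the permissions they had and that user-role assignments stay sparse, are both mechanically checkable for any proposed role set. The short sketch below is only an illustration; the dictionary representation and the sparsity threshold t are our choices, and a concrete bound on roles per user is introduced later in the paper.

def satisfies(upa, ua, pa, t):
    # upa: {user: set(perms)}, ua: {user: set(roles)}, pa: {role: set(perms)}
    for u, perms in upa.items():
        derived = set()
        for r in ua.get(u, set()):
            derived |= pa[r]
        if derived != perms:          # effective permissions must be unchanged
            return False
    return all(len(roles) <= t for roles in ua.values())   # and assignments must stay sparse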
Existing Evaluation Criteria Role mining is typically formulated as certain optimization problems with objectives and constraints. As summarized by Molloy et al. at [START_REF] Molloy | Mining roles with semantic meanings[END_REF], there are five main factors which can be used to evaluate the goodness of a RBAC solution. They are the number of roles |R|, the complexity of user-role assignments |U A|, the complexity of role-permission assignments |P A|, the number of direct user-permission assignments |DU P A|, and the complexity of reduced role hierarchy |t reduce(RH)|. Among them, |R|, |U A| , and |P A| are routine notations in RBAC, and no further exposition is needed on them. Direct user-permission assignments DU P A imply that roles of one single user are treated as special roles. |DU P A| is the amount of such direct user-permission assignments in a deployed RBAC solution. Role hierarchy RH ⊆ R × R represents the partial order over roles R. t reduce(RH) denotes the transitive reduction of the role hierarchy. Almost all role mining evaluative criteria can be generally described with the weighted structural complexity measure introduced in [START_REF] Molloy | Mining roles with semantic meanings[END_REF], which sums up the above five factors, with possibly different weights for each factor. Definition 2. (Weighted Structural Complexity Measure) Given a weight vector W =< w r , w u , w p , w d , w h >, where w r , w u , w p , w d , w h ∈ Q + ∪ {0}1 , the weighted structural complexity of an RBAC state γ, which is denoted as wsc(γ, W ) is computed as: wsc(γ, W ) = w r * |R| + w u * |U A| + w p * |P A| +w d * |DU P A| + w h * |t reduce(RH)| Role mining in general involves minimizing wsc(γ, W ). However, a minimization implicating all factors not only is too complex, but may not lead to a good RBAC system, as they may counteract with each other in the minimization. Depending on the objective to achieve, a specific role mining task often chooses to minimize a subset of factors relevant to the underlying objective. In particular, minimizing the number of roles |R| might be the most studied role mining problem, where in the weighted structural complexity measure, w r is a positive number while others are all 0. Such a specific role mining problem is referred to as basic RMP (Role Mining Problem), and it has been proven equivalent to the classic set cover problem. It is true that the number of roles can describe the complexity of a RBAC solution in a certain way. However, sometimes a RBAC solution minimizing the number of roles might not be able to capture the internal organizational structure within an company. We shall show this by a toy example with six users, three permissions, and user-permission assignments such that u1 : {p1, p2, p3}, u2 : {p1, p2, p3}, u3 : {p1, p2, p3}, u4 : {p1}, u5 : {p2}, u6 : {p3}. Under the objective of minimizing |R|, it gives a solution such that each individual permission is a role. As a result, there are three roles in total and u1, u2 and u3 are assigned to three roles to cover their permissions. However, even without the semantic information of permissions and the knowledge of user responsibilities, by simply observing the data set, one would conjecture that the permission set of {p1, p2, p3} should be one role, as the permission set is shared by three out of six users. However, if we incorporate the consideration of the size of the total user-role assignments, it can lead us to the right track. 
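For concreteness, the measure can be evaluated with a one-line function; the call below plugs in the |R|-minimizing solution of the toy example just discussed (three single-permission roles, so |UA| = 12 and |PA| = 3). The function and the weights shown are illustrative only.

def wsc(n_roles, n_ua, n_pa, n_dupa=0, n_rh=0, w=(1, 1, 1, 1, 1)):
    wr, wu, wp, wd, wh = w
    return wr * n_roles + wu * n_ua + wp * n_pa + wd * n_dupa + wh * n_rh

# |R|-minimizing solution of the toy example: 3 roles, |UA| = 12, |PA| = 3
print(wsc(3, 12, 3))                      # 18 with unit weights
print(wsc(3, 12, 3, w=(1, 1, 0, 0, 0)))   # 15 when only |R| and |UA| are charged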
With the goal of minimizing |U A|, each user will get one role and each unique set of user permissions is treated as a role. As a result, the role of {u1, u2, u3} is discovered. This example suggests the importance of incorporating the complexity of |U A| as a part of the role mining goal. The sum of user-role assignments and role-permission assignments of |U A| + |P A| is commonly viewed as the representation of the system administrative cost. The minimization of |U A| + |P A| is called edge RMP [START_REF] Vaidya | Edge-rmp: Minimizing administrative assignments for role-based access control[END_REF]. However, as far as an end-user is concerned, |U A| is the only part that she can experience of a RBAC system. A user would not care how complex P A is. For example, a student would not care about how many permissions her STUDENT role actually contains, and all she cares is the simplicity of executing her role (or roles). Other evaluative criteria, such as minimizing the complexity of the resultant role hierarchy [START_REF] Molloy | Mining roles with semantic meanings[END_REF], are also more from the system administrator perspective, rather than the end-user perspective. By examining existing evaluative criterion, we found that the only factor in the Weighted Structural Complexity Measure that matters to end-users is the size of userrole assignments |U A|. However, if the user-oriented RMP is defined as the minimization of |U A|, the trivially optimized solution is to create a unique role for every user. That absolutely contradicts to the premise of role mining, which is to map permission roles with functional roles. As such, a user-oriented RMP solution should balance the user-friendliness and the overall quality of the system. Among five factors in the weighted structural complexity measure, the number of roles |R| would be the best representative of the succinctness and goodness of a RBAC solution and is also the most studied criterion. So we propose to define the user-oriented RMP as the minimization of the combination of the number of roles and the size of user-role assignments, w r * |R| + w u * |U A|. User-Oriented RMP User-Oriented Exact RMP Given m users, n permissions, user-permission assignments U P A m×n , and the evaluation criteria of w r * |R| + w u * |U A|, the user-oriented exact RMP is to find U A m×k and P A k×n to completely reconstruct U P A while minimizing the evaluation criterion. It can be formulated as below: min w r * |R| + w u * |U A m×k | s.t. { U P A m×n = U A m×k ⊗ P A k×n ( 1 ) where ⊗ is the Boolean product operator [START_REF] Lu | Optimal boolean matrix decomposition: Application to role engineering[END_REF]. However, directly working on this formulation has two difficulties. First, it is not easy to determine weights of w r and w u in practice, as a role mining engineer may not have a global sense on the importance of |R| and |U A|. Second, it is difficult to solve an optimization problem with a complex objective function. It would be relatively easier to solve an optimization problem with an objective of either |R| or |U A|. In light of them, we redefine the user-oriented exact RMP as the following. Problem 1 (User-Oriented Exact RMP). 
Given m users, n permissions, user-permission assignments U P A m×n and a positive number t, it is to discover a role set P A k×n and the user-role assignments U A m×k such that: (1) the number of roles k is minimized, (2) the role assignments U A and the permission-role assignments P A accurately and completely reconstruct the existing user-permission assignments U P A, and (3) no user gets more than t roles. Mathematically, it can be described in an optimization form as follows. min k s.t.      U A m×k ⊗ P A k×n = U P A m×n ∑ j U A(i, j) ≤ t, ∀i U A ∈ {0, 1} m×k , P A ∈ {0, 1} k×n (2) It is not difficult for a role mining engineer to find out the maximum roles a user can have. For example, it could be achieved through discussions with company operators and an investigation of the general organizational structure of the company. When the maximum roles each user can have is limited to a small number, |U A| is naturally enforced to be small. Another property is that Equation (2) can be easily converted to Equation (1) with the method of Lagrange multipliers. If we move the constraint ∑ j U A(i, j) ≤ t to the objective function by adding ∑ j U A(i, j) -t as a penalty component, Equation 2 becomes min |R| + ∑ i λ i ( ∑ j U A(i, j) -t) s.t. { U P A m×n = U A m×k ⊗ P A k×n (3) where λ i is the Lagrange multiplier for the constraint of ∑ j U A(i, j) ≤ t. Further, we could assume that all Lagrange multipliers have the same value λ. Then the equation is changed to the following. min w r * |R| + λ|U A| -λ * t s.t. { U P A m×n = U A m×k ⊗ P A k×n (4) Since λ * t is a constant, it can be dropped from the objective function. Now the resultant optimization problem is the same as Equation 1. The effect of adjusting the Lagrange multiplier (penalty parameter) λ is equivalent to adjusting k, the maximum roles a user can have. User-Oriented Approximate RMP Role mining with exact coverage of permission assignments is only suitable when the given permission assignments contain no error. The recent research results on the role mining on noisy data [START_REF] Molloy | Mining roles with noisy data[END_REF][START_REF] Vaidya | Role mining in the presence of noise[END_REF]. suggest that when the given user-permission assignments contain noise, it is not necessary to enforce a complete reconstruction, as it causes the over-fitting problem. In such cases, approximate role mining may return better results. So in this paper, we also consider the user-oriented approximate RMP, which is defined as below. Problem 2 (User-Oriented Approximate RMP). Given m users, n permissions, userpermission assignments U P A m×n , a positive integer number t and a positive fractional number δ, it is to discover a role set P A k×n and the user-role assignments U A m×k such that: (1) the number of roles k is minimized, (2) the role assignments U A and the role set P A reconstruct the existing user-permission assignments U P A with the error rate less than δ, and (3) no user gets more than t roles. The problem can be roughly described in the following optimization form. min k s.t.      ||U A m×k ⊗ P A k×n -U P A m×n || 1 ≤ δ • ∑ ij U P A ij ∑ j U A(i, j) ≤ t, ∀i U A ∈ {0, 1} m×k , P A ∈ {0, 1} k×n (5) NP-hardness Recall that the basic RMP is to minimize the number of role while the resultant RBAC solution completely reconstructs the given user-permission assignments. The user-oriented exact RMP is a generalization of the basic RMP. 
If we make the number of the maximum roles each user can have be a large enough number, so that the sparseness constraint does not take effect, then the user-oriented exact RMP becomes the basic RMP. The basic RMP is known to be NP-hard, as it can be reduced to the classic NP-hard set cover problem [START_REF] Vaidya | The role mining problem: finding a minimal descriptive set of roles[END_REF]. Therefore, the user-oriented exact RMP is NP-hard. Similarly, the user-oriented approximate RMP is a generalization of the approximate RMP, which is NP-hard. Thus, it is also NP-hard. Optimization Model Among many existing role mining approaches, the optimization approach has been favored by researchers, due to the existence of many public and commercial optimization software packages. The user-oriented RMP problems can be formulated by optimization models as well, which enables an engineer to directly adopt an existing software package. We formulate the user-oriented exact RMP first, which can be viewed as a variant of the basic RMP with a constraint that each user cannot have more than t roles. Suppose we have located a set of q candidate roles, represented by a binary matrix CR ∈ {0, 1} q×n , where CR kj = 1 means candidate role k contains permission j. Then the user-oriented exact RMP is reduced to finding the minimum roles from CR to completely reconstruct existing user-permission assignments while no one can have more than t roles. The problem can be formulated as the following ILP. minimize ∑ k d k                ∑ q k=1 U A ik CR kj ≥ 1, if U P Aij = 1 ∑ q k=1 U A ik CR kj = 0, if U P Aij = 0 d k ≥ U Aij, ∀i, j ∑ j U Aij ≤ t ∀i d k , U Aij ∈ {0, 1} (6) In the model, U A ik and d are variables. The detailed description of the model is given as follows: -Binary variable U A ik determines whether candidate role k is assigned to user i and binary variable d k determines whether candidate role k is selected. So the objective function ∑ k d k represents the number of selected roles. -The first constraint enforces that if user i has permission j, at least one role containing permission j has to be assigned to user i. -The second constraint enforces that if user i has no permission j, no role containing permission j can be assigned to user i. -The third constraint d k ≥ U A ij ensures d k to be 1 as long as one user has role k. -∑ j U A ij ≤ t enforces that a user cannot have more than t role assignments. The user-oriented approximate RMP can be viewed as a variant of the approximate RMP with the constraint that no user can have more than t roles. Similarly, we simplify the problem by locating a candidate role set CR. At the basis of the ILP formulation for the approximate RMP, an ILP formulation for the user-oriented approximate RMP is presented as follows. minimize ∑ t d k                              ∑ q k=1 U A ik CR kj + Vij ≥ 1, if U P Aij = 1 ∑ q k=1 U A ik CR kj -Vij = 0, if U P Aij = 0 M Uij -Vij ≥ 0, ∀i, j Uij ≤ Vij, ∀i, j ∑ i ∑ j Uij ≤ δ • ∑ ij U P Aij d k ≥ U A ik , ∀i, k ∑ j U Aij ≤ t ∀i dj, U A ik , Uij ∈ {0, 1}, Vij ≥ 0 (7) In the model, U A ik , V ij , U ij and d k are variables and M is a large enough constant. The detailed descriptions of the model are given as follows: -In the first two constraints, V ij acts as an auxiliary variable. Without V ij , the constraints would enforce the exact coverage as the ILP model for the user-oriented exact RMP. With the existence of V ij , the exact coverage constraint is relaxed. 
The value of V ij indicates whether the constraint for element (i, j) is violated. -The third and fourth constraints convert V ij to a binary value U ij . If V ij is 1, which means the constraint for element (i, j) is violated, U ij has to be 1; otherwise U ij is 0. The fifth constraint ∑ i,j U ij ≤ δ • ∑ ij U P A ij enforces the error rate to be less than δ. -The constraint of d k ≥ U A ik ∀i, k enforces d k to be 1 as long as a user is assigned to role K. So the objective function represents the number of roles being selected. -∑ j U A ij ≤ t ensures no user gets more than t role assignments. Although the optimization framework allows us to directly adopt fruitful optimization research results, the ILP in general is NP-hard. Existing algorithms and software packages for general ILP problems only work for small-scale problems. For mid to large size RMP problems, specially designed efficient heuristics are still required. Heuristic Algorithm In this section, we propose a tailored algorithm for the two user-oriented RMP variants formulated above. It is a heuristic solution, employing an iterative approach to discover roles. The key of our algorithm is a dynamic role generation strategy. Lately, we happened to notice that the idea of dynamic role generation was briefly mentioned in [START_REF] Molloy | Mining roles with multiple objectives[END_REF], but no further details were seen. User-Oriented Exact RMP The user-oriented exact RMP is to find a minimum set of roles to accurately and completely reconstruct the existing user-permission assignments with the constraint that no user can have more than t roles. Before coming to the details of our algorithm, we start by introducing a preprocessing stage that helps to reduce the problem complexity. In the preprocessing stage, there are two steps. The first step is to reduce the data size by removing users with the same permission assignments. This step is also employed in other role mining methods, such as [START_REF] Vaidya | Roleminer: mining roles using subset enumeration[END_REF]. To do so, we group all users who have the exact same set of permissions, which can be done in a single pass over the data by maintaining a hash table of the sets of permissions gradually discovered. The second step identifies a subset of users U ′ who have permissions that no other people have. These user-permissions assignments, {U A i: |i ∈ U ′ }, will be included into the final role set. In other words, these users only get one role, which are themselves. Our argument is that if a user has a permission that is exclusively for herself, she must have at least one role containing that permission, and that role is not shared by other people. As such, from the end-user perspective, why not simply package all permissions of that user as one role and assign the only role to her? Therefore, the number of role assignments is significantly reduced while without increasing the total role number. This preprocessing step can significantly reduce the data size as well and simplify the subsequent role mining task. Note that after the two preprocessing steps, in the remaining data, all user permissions assignments are unique and every permission is assigned to at least two users. The general structure of our algorithm is to iteratively choose a candidate role and assign it to users until all existing permission assignments are covered while the constraint that no user gets more than t roles is carefully respected. 
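The iterative structure just described can be outlined as follows. This is a simplified sketch under several assumptions of ours: permissions are kept per user in sets, the candidate pool is regenerated from the remaining assignments in every iteration, the candidate that can be assigned to the most users is picked (the strategy discussed next), and the special handling for users who reach t-1 roles with permissions still uncovered is omitted.

def mine_roles(upa, t):
    # upa: {user: frozenset(permissions)}, t: maximum number of roles per user
    remaining = {u: set(p) for u, p in upa.items()}   # still-uncovered permissions
    assigned = {u: [] for u in upa}
    roles = []
    while any(remaining.values()):
        candidates = {frozenset(p) for p in remaining.values() if p}
        def usable_by(role):   # users who may take the role without gaining extra permissions
            return sum(1 for u in upa
                       if role <= upa[u] and role & remaining[u] and len(assigned[u]) < t)
        best = max(candidates, key=usable_by)
        if usable_by(best) == 0:
            break              # t-constraint edge case; handled specially in the full algorithm
        roles.append(best)
        for u in upa:
            if best <= upa[u] and best & remaining[u] and len(assigned[u]) < t:
                assigned[u].append(best)
                remaining[u] -= best
    return roles, assigned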
We mention that such an idea of iterative role assignment has also been used in many other role mining methods such as the Lattice Model [START_REF] Ene | Fast exact and heuristic methods for role minimization problems[END_REF], the FastMinder [START_REF] Vaidya | Roleminer: mining roles using subset enumeration[END_REF] and the optimization-based heuristic [START_REF] Lu | Optimal boolean matrix decomposition: Application to role engineering[END_REF]. The distinguishing element of our algorithm is the way of generating candidate roles. The core of our algorithm is a dynamic role generation strategy. All of the other role mining algorithms generate a static set of candidate roles. Given n permissions, there are 2 n possible roles. If we consider too many candidate roles, the computing time is expensive. Conversely, if we consider only a very limited set of candidate roles, we might not be able to find a good role set. To avoid the extreme cases, our strategy is dynamic candidate roles generation. Specifically, rather than generating a static set of roles at the start of the algorithm, we generate a small set of promising roles at each iteration of the algorithm and the role set is updated according to the remaining userpermission assignments as the algorithm proceeds. There are two advantages: (i) we do not need to maintain and consider a large candidate role set all the time; (ii) the candidate role pool always keeps the potentially interesting roles. In particular, we always consider the remaining user-permission assignments as potentially interesting roles. For instance, consider Table 1 as the existing user-permission assignments. Our algorithm treats the permission assignments for each user as a candidate role. So in this case, there are three candidate roles: cr1 (0 0 1 1 1 1), cr2 (0 0 1 1 0 0), and cr3 (1 1 1 1 0 0). p1 p2 p3 p4 p5 p6 u1 0 0 1 1 1 1 u2 0 0 1 1 0 0 u3 1 1 1 1 0 0 Table 1. Existing Assignments p1 p2 p3 p4 p5 p6 u1 0 0 0 0 1 1 u2 0 0 0 0 0 0 u3 1 1 0 0 0 0 Table 2. Remaining Assignments Suppose now cr2 is chosen and it is assigned to all of the three users. Then the remaining permission assignments become Table 2, and they are treated as candidate roles for the next step in the algorithm. So the candidate roles are updated to be the following: cr1 (0 0 0 0 1 1) and cr2 (1 1 0 0 0 0). With the candidate roles being defined, we need to figure out two things: (1) how to select a candidate role at each step? (2) how to enforce the constraint that no user can have more than t roles. For the first question, there are some studies and discussions in the literature. Here are some well known strategies. Vaidya et al. [START_REF] Vaidya | Roleminer: mining roles using subset enumeration[END_REF] chooses the candidate role which covers the most remaining permission assignments. Ene et al. [START_REF] Ene | Fast exact and heuristic methods for role minimization problems[END_REF] selects the candidate role with the least number of permissions. We have tested both of them and found out they do not work well in our case. As such, we use an alternative strategy: we choose the candidate role which covers the most users. In other words, the selected candidate role can be assigned to the most users. 
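This "assignable to the most users" rule can be checked directly on the Table 1 data; the snippet below is a tiny illustration whose labels follow the table.

upa = {"u1": {"p3", "p4", "p5", "p6"},
       "u2": {"p3", "p4"},
       "u3": {"p1", "p2", "p3", "p4"}}
candidates = [frozenset(p) for p in upa.values()]
best = max(candidates, key=lambda r: sum(r <= perms for perms in upa.values()))
print(sorted(best))   # ['p3', 'p4'] -- the role that can be assigned to all three users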
In fact, our strategy is justified by that the need for a permission set to become a role comes from the fact that they are Algorithm 1 User-Oriented Exact RMP Input: U P A, t Output: U A, P A 1: U A ← ∅, P A ← ∅, U P A ′ ← ∅; 2: CRoles ← U P A; 3: while U P A ′ ̸ = U P A do 4: Call RSelector; 5: Call CGenerator; 6: end while shared by many people. Suppose that {p1, p2} are possessed by three people, while {p1, p2, ..., p10} are possed by only one person. It is more reasonable to make {p1, p2} as a role than {p1, p2, ..., p10}. To illustrate this candidate role selection strategy, look at Table 1 again. Among those three candidate roles, (0 0 1 1 0 0) can be assigned to three people, so it is chosen. To enforce the constraint that no user gets more than t roles, we make some special arrangement, when a user U i has been covered by t -1 roles and still has uncovered permissions. In such a case, we either need to create a role to cover all remaining permissions of U i or revoke roles that have been assigned to U i . Suppose we create a new role which consists of all remaining permissions of U i and assign it to U i . Then, we may need to check if the new role can be repetitively used by other users, otherwise it is costly. If no one else can take the new role, we make all permissions of U i as a single role. Thus we can revoke all roles that have been assigned to U i and assign the sole role to the user. In this way, at the same cost of adding one role, the role assignments for the user are significantly reduced, which is exactly the goal of this work. Based on this idea, the following steps are implemented to enforce that no user gets more than t roles. When a user U i has been covered by t -1 roles and still has uncovered permissions, we stop choosing roles from the candidate role set. Instead, we first treat the uncovered permissions of U i as a candidate role, and evaluate its suitability by checking if some other user who has been assigned less than t -1 roles will take this candidate role. If so, it means that the role can be repetitively used, then we include the role into the final role set and assign it to users. Otherwise, we discard it, and then make all permissions of the user U i as a role, assign the role to U i and delete all of the other role assignments to U i . The complete algorithm is stated in Algorithms 1-3 (U A i: denotes the ith row of U A, which represents the role assignments to user i; U P A i: denotes the ith row of U P A, which represents the permissions assigned to user i). User-Oriented Approximate RMP The user-oriented approximate RMP is the same as the user-oriented exact RMP, except that the complete reconstruction is not required. The above algorithm for the useroriented exact RMP can be easily modified for the the user-oriented approximate RMP by changing the termination condition from U P A ̸ = U P A ′ to ||U P A -U P A ′ || > δ. Consequently, the algorithm stops early and avoids covering too much noisy information. Algorithm 2 RSelector Input: U P A, U P A ′ , CRoles, U A, t Output: r, U A, P A, U P A ′ Computational Complexity Analysis The key of the above user-oriented role mining algorithm is the continuous updating of candidate roles. At each iteration of the algorithm, a candidate role is chosen and the role coverage is determined. The total computations then depend on the number of iterations. Consider a user-permission dataset with m users and n permissions. 
According to our algorithm, at each iteration, at least one user's permissions are completely covered. So the maximum required iterations are m. At each iteration step, each candidate role is compared against each remaining user. As the number of candidate roles is less than m, the number of remaining users is less than m and each user (or role) has up to n permission, so the incurred computations at each iteration cannot be over m 2 n. Therefore the computation complexity of our algorithm is upper bounded by m 3 n. Experiments and Results Experiments are conducted on benchmark access control datasets. They are americas small, apj, healthcare, domino, firewall1 and firewall2, which can be found at the HP website 2 . americas small and apj are user profiles from Cisco firewalls. healthcare was obtained from the US Veteran's Administration. The domino graph is from a set of user and access profiles for a Lotus Domino server. firewall1 and firewall2 are results of running an analysis algorithm on Checkpoint firewalls. Descriptions on the data sets including the number of users, the number of permissions, and the size of user-permission assignments are given in Table 3. More detailed descriptions can be found in [START_REF] Ene | Fast exact and heuristic methods for role minimization problems[END_REF]. The first experiment evaluates the user-oriented exact RMP. We want to know whether our Dynamic algorithm can effectively enforce the sparseness constraint and whether the output of the algorithm is comparable to the optimal RBAC solution without the sparsity constraint. To find the answers, we run the Dynamic algorithm on those real data sets with different sparsity constraints. We compare our results with the benchmark role mining algorithm, Lattice [START_REF] Ene | Fast exact and heuristic methods for role minimization problems[END_REF]. As far as we know, Lattice has the best reported result with respect to the minimization of the number of roles and the minimization of the system administrative cost. The experimental results are reported in Tables 4567. In these tables, δ denotes the error rate. The exact RMP requires the error rate to be 0. So we only look at the portion of the results with δ = 0. Other parameters are: t denotes the maximum number of role assignments enforced in our algorithm, |U A| denotes the size of user-permission assignments and |P A| denoting the size of permission-role assignments. Note that δ and t has no effect on the Lattice algorithm, as Lattice returns an exact RBAC solution and the solution is unique. In the results, the value at the row of Lattice and the column of t is the maximum number of roles that a user has in the RBAC solution returned by the Lattice algorithm. In the results, when t decreases, the size of U A decreases accordingly. However, the value of |U A| + |P A| changes in an opposite direction. This matches our expectation. Specifically, when t is a small value, each user gets few role assignments. Thus, we need roles with more permissions, so each user can still get enough permission assignments. When the sparseness constraint becomes more strict, the number of required roles increases. As a result, the value of |P A| increases accordingly. Furthermore, we are pleased to see that even with the sparseness constraint being enforced, the complexity of the RBAC solution returned by Dynamic is still comparable to that of Lattice. 
For example, in For instance, in Table 5, the value of |U A| for Dynamic with δ of 0 and t of 2 is 3477, while that for Lattice is 4782. The second experiment is to study the user-oriented approximate RMP. We want to know how the RBAC solution varies with the error rate. We run the Dynamic algorithm by varying the value of δ from 0.05 to 0.20. Results are reported in Tables 4567. We observe that when the complexity of the RBAC solutions decrease drastically when δ increases. For instance, in Table 4, with t of 8, only 39 roles are required to cover the 95 percent of permission assignments (i.e., δ = 0.05), while 80 roles are required for the complete coverage (i.e., δ = 0). In terms of the coverage of permission assignments, those 39 roles appear more promising than the remaining 41 roles. In cases where data noise is believed to exist, the approximate version of Dynamic appears to be more useful. To summarize, the two experiments have demonstrated the effectiveness of our useroriented RMP approach. We highlight that one primary advantage of Dynamic is that it allows a RBAC engineer to tune the sparsity constraint to reflect the real need. This feature is not supported by any existing role mining method. More importantly, the overall system complexity of the resultant solution is comparable to that of the optimal solution without any sparsity constraint. Conclusion In this paper, we studied the role mining problem from the end-user perspective. Unlike other existing role mining approaches which primarily aim to reduce the administrative workload, our approach strives to incorporate better user experience into the role decision process. As end-users prefer simple role assignments, we add a sparseness constraint that mandates the maximum number of roles a user can have to the role mining process. The number usually can be determined in practice by a brief study on the general business processes of an organization. Basing on this rationale, we formulated user-oriented role mining as two specific problems. One is the user-oriented exact RMP, which is obliged to completely reconstruct given permission assignments while obeying the sparseness constraint. It is applicable for scenarios where the given dataset has no noise. The other is the user-oriented approximate RMP, which tolerates a certain amount of deviation from the complete reconstruction. It suits for datasets containing noises. We studied existing role mining methods, and found that some of them can be applied to our problems with simple modification. For better efficiency, we also designed new algorithms tailored to our problems, which are based on a dynamic candidate role generation strategy. Experimental results demonstrate the effectiveness of our approach in discovering a user-oriented RBAC solution while without increasing the overall administrative workload too much. Future work can go along two directions. One is to study the feasibility of employing some statistical measures such as Bayesian information criterion to facilitate the role mining process. The motivation is that sometimes the accurate sparseness constraint (the maximum role that a user can have) is not available. We could employ some statistical criteria to choose the RBAC model with a good balance of model complexity and describability. The other direction is to consider the dynamic sparseness constraint. In this work, we assume that the same sparseness constraint is enforced to everyone. 
However, it might be the case that some user requires many role assignments due to some need. In such cases, a more practical role mining approach is to minimize the sparsity of the whole user-role assignments rather than enforcing a sparseness constraint for every user. Table 3 . 3 Data Description Table 8 , 8 when t is 2, Dynamic returns a RBAC solution with only 18 roles, while Lattice returns a solution with 15 roles and the maxi-δ t |R| |U A| |U A| + |P A| δ t |R| |U A| |U A| + |P A| Dynamic 0.00 2 90 365 7100 Dynamic 0.00 2 259 3477 25229 4 85 454 6890 4 256 3722 23610 6 84 600 6897 6 249 3890 22194 8 80 1516 6638 8 246 4269 20119 0.05 2 37 250 5688 0.05 2 224 3283 23174 4 48 361 5685 4 184 3566 21349 6 41 529 5523 6 183 3780 20109 8 39 1416 5671 8 185 4160 18406 0.10 2 30 250 4619 0.10 2 205 3220 21421 4 27 330 4297 4 154 3453 19870 6 26 439 3868 6 157 3715 18451 8 14 1464 2970 8 147 4042 16318 0.15 2 17 250 2661 0.15 2 180 3096 19929 4 9 422 1762 4 138 3383 17966 6 10 563 1810 6 137 3604 16630 8 7 1334 2055 8 127 4036 14698 0.20 2 11 250 1881 0.20 2 171 3096 18505 4 8 426 1655 4 130 3384 16530 6 6 1131 1839 6 123 3673 14971 8 6 1131 1839 8 117 3915 14238 Lattice 0.00 9 66 874 1953 Lattice 0.00 10 192 4782 9830 Table 4. fire1 Table 5. americas small δ t |R| |U A| |U A| + |P A| Dynamic 0.00 2 11 325 1499 δ t |R| |U A| |U A| + |P A| 0.05 2 11 325 1499 Dynamic 0.00 2 23 79 716 0.10 2 7 285 1092 0.05 2 17 71 695 0.15 2 7 285 1092 0.10 2 14 64 680 0.20 2 7 285 1092 0.20 2 10 53 657 Lattice 0.00 3 10 434 1110 Lattice 0.00 3 20 110 713 Table 6. fire2 Table 7 . 7 domino mum role assignments of 4. Another observation is that the |U A| value of the solutions returned by Dynamic can be much less than that of the solutions returned by Lattice. Table 9 . apj 9 Q + is the set of all non-negative rational numbers http://www.hpl.hp.com/personal/Robert Schreiber/
43,344
[ "978062", "1004168", "978064", "978066", "1004169" ]
[ "316705", "357848", "452406", "412849", "357848" ]
01490720
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490720/file/978-3-642-39256-6_8_Chapter.pdf
Zhan Wang email: [email protected] Kun Sun Sushil Jajodia email: [email protected] Jiwu Jing email: [email protected] TerraCheck: Verification of Dedicated Cloud Storage Keywords: Dedicated Storage, Cloud Security, Verification Occupied Area When hardware resources are shared between mutually distrustful tenants in the cloud, it may cause information leakage and bring difficulties to regulatory control. To address these concerns, cloud providers are starting to offer hardware resources dedicated to a single user. Cloud users have to pay more for such dedicated tenancy; however, they may not be able to detect the unexpected misuse of their dedicated storage due to the abstraction layer of the cloud. In this paper, we propose TerraCheck to help cloud users verify if their dedicated storage devices have been misused to store other users' data. TerraCheck detects the malicious occupation of the dedicated device by monitoring the change of the shadow data that are residual bits intentionally left on the disk and are invisible by the file system. When the cloud providers share the dedicated disk with other users, such misuses can be detected since the shadow data will be overwritten and become irretrievable. We describe the theoretical framework of TerraCheck and show experimentally that TerraCheck works well in practice. Introduction Cloud service significantly reduces costs by multiplexing hardware resources among users [START_REF] Kurmus | A comparison of secure multi-tenancy architectures for filesystem storage clouds[END_REF]. The co-resident data belonging to different users may lead to information leakage, which has become a major security concern for cloud users. For instance, a malicious VM is capable of retrieving the encryption keys [START_REF] Zhang | Cross-VM side channels and their use to extract private keys[END_REF] from a victim VM hosted on the same physical machine. Sensitive information can be compromised through the covert communication channels based on the shared CPU cache [START_REF] Xu | An exploration of L2 cache covert channels in virtualized environments[END_REF], memory bus [START_REF] Wu | Whispers in the hyper-space: High-speed covert channel attacks in the cloud[END_REF], hard disks [START_REF] Ristenpart | Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds[END_REF][START_REF] Wang | Disk storage isolation and verification in cloud[END_REF] and so on in the cloud. Cloud providers are starting to offer physically isolated resources in order to lower the entry barrier of cloud adoption for processing sensitive data. For instance, in Amazon cloud [1], Dedicated Instances are a form of EC2 instances launched within the Amazon Virtual Private Cloud (Amazon VPC) that runs on hardware dedicated to a single customer. A dedicated instance ensures that the resources, such as CPU, memory, disk storage and network, are isolated physically at the hardware level. Unsurprisingly, the cloud users have to pay more for the dedicated resources than the regular ones. Although the dedication property is guaranteed by the Service Level Agreement (SLA), a misbehaved cloud provider may fail to meet the isolation requirement due to either accidental configuration error or intentionally reassigning the unallocated resources to other users. As a consequence, the dedicated resource, for example the storage device, will store the data belonging to unexpected users and cause information leakage. 
Because the cloud users usually can only see a logical view of their resources due to the abstraction layer or the business model of cloud computing [START_REF] Jhawar | Fault tolerance management in iaas clouds[END_REF], they may not be aware of or not be able to detect the violation of the desired dedicated configuration before the security breaches occur. In this paper, we propose TerraCheck to help cloud users verify if the unallocated disk space has been occupied by undesired users without the cooperation of the cloud provider. We assume that the cloud providers are honest-but-greedy, i.e., trustworthy for managing user's data without violating the data privacy but greedy for allocating the storage resources not being in use by the dedicated user to other tenants. To detect the greedy allocation, TerraCheck places shadow data on the unallocated disk space and verifies the dedication by detecting the change of the shadow information. Shadow data are portions of the residue left behind when files are deleted from the disk. As such, this data cannot be accessed directly by the file system, but can be recovered using forensic techniques. We group the set of residual bits related to the same original file as "shadow chunk". We record the hash value and physical disk address of each shadow chunk as verification metadata. To verify the integrity of each shadow chunk, we utilize disk forensics techniques to retrieve shadow chunks according to the prior recorded disk addresses. If the shadow chunks cannot be recovered entirely, it indicates that the unallocated disk space has been overwritten and the dedication property is violated. Our shadow chunk method has two advantages, comparing to simply stuffing the unallocated disk space with void files. First, it makes the cheating behavior of the honest-but-greedy cloud provider very costly. The retrieval of the shadow chunks relies on the physical disk address of each chunk. If the misbehaved cloud providers move the shadow data to some non-dedicated devices and make the shadow data still retrievable, they must map the prior recorded disk address to the addresses of the new device. Instead, accessing files relies on the file system and can be redirected to another device with less effort. Second, shadow data will not affect the normal use of the dedicated device. The attested disk area filled by the shadow chunks remains available for allocation in the view of the file system. However, if the attested disk area is filled by files, it cannot be occupied by the dedicated user immediately. We present two schemes for verifying the dedication property of cloud storage. The basic TerraCheck scheme can detect the unexpected occupation of a dedicated storage device with high accuracy by checking the retrievability of every chunk. With sampling, our advanced probabilistic TerraCheck scheme can discover 10% unexpected occupation of the dedicated storage device with 95% probability, by randomly challenging 29 chunks. Therefore, a smaller chunk achieves low computational cost, but results in the large storage of metadata. Furthermore, with the help of Bloom filter with 1% false positive rate, the size of verification metadata can be reduced 5.5 times. The rest of the paper is organized as follows. In Section 2, we describe the threat model and general assumptions. Section 3 presents the requirements and operations of the dedication verification. Section 4 describes both the basic and advanced probabilistic TerraCheck schemes. 
Section 5 implements the two schemes and evaluates both the computational and storage costs. Section 6 overviews the related work. Section 7 concludes this paper. Threat Model and Assumptions The dedication property of cloud storage is guaranteed by the terms in the SLA. However, a misbehaved cloud provider may fail to meet such a dedication requirement due to either accidental configuration errors or intentionally being greedy with the unallocated storage resources: First, a configuration error may allocate dedicated storage space to undesired tenants. For instance, in an Amazon dedicated instance, the dedication property is enabled by the "Dedicated" attribute configured at launch time. The "Dedicated" attribute may be silently disabled (e.g., for software update, server migration or testing). Second, a cloud provider may intentionally place infrequently accessed data, such as archive data, on the unoccupied disk space that is supposed to belong to one specific customer. We consider the misbehaved cloud providers as honest-but-greedy. Honest means that the cloud providers are not motivated to corrupt users' data or violate data privacy, with respect to their business reputation. However, the cloud providers may be greedy for allocating the storage not in use by the dedicated user to other tenants. Although the honest-but-greedy cloud providers are only interested in the large amount of unused disk space belonging to a dedicated user, they cannot control the behavior of the co-resident tenants once the cloud provider accidentally allocates the unoccupied space to another tenant. Co-resident tenants may threaten the security and privacy of the existing user data, such as exploiting covert channels to retrieve encryption keys [START_REF] Zhang | Cross-VM side channels and their use to extract private keys[END_REF] and other sensitive information [START_REF] Ristenpart | Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds[END_REF] or violating the access control policy [START_REF] Wang | Disk storage isolation and verification in cloud[END_REF]. We also consider that the cloud providers are economically rational. Misbehaved cloud providers will not defeat our verification mechanism by paying a higher storage overhead. For example, the cloud provider can intercept the write call or monitor the process of placing the shadow data. However, by doing so, the cloud provider has to store the shadow data somewhere else in order to be able to respond to the verification challenge correctly and occupy the dedicated storage at the same time. We assume the usage of the dedicated storage is well-planned by the user. For example, the user allocates a determined amount of dedicated disk space to each VM. This is a common practice [START_REF] Jhawar | Fault tolerance management in iaas clouds[END_REF] of resource management in the cloud. When the user launches a small number of VMs, only part of the dedicated storage is allocated. The rest of the dedicated storage should be protected from being exploited by other users for both security and performance reasons. We refer to this part of the disk space as the attested area. The disk space being in use by the dedicated user is called the occupied area. Additionally, the attested area may scale up and down based on the occupation of the dedicated disk. TerraCheck requires a small amount of trusted disk space for storing verification metadata on the occupied area.
We assume the occupied area is trusted, since an honest-but-greedy cloud provider is trustworthy for managing user data. System Model In TerraCheck, both the occupied and the attested disk spaces are assigned to the user by the cloud provider and are under the management of the file system. The occupied area is the disk space that stores the user's data; the attested area is the empty disk space that is available to be allocated by the user. TerraCheck only focuses on the verification of dedicated storage assigned by the cloud provider rather than the physical disk, since the users of dedicated storage cannot control the disk space that doesn't belong to them. The verification of dedicated storage that solely occupies a physical disk is another research topic, which can be addressed by existing co-residency checking techniques [START_REF] Wang | Disk storage isolation and verification in cloud[END_REF][17]. We first formalize the model of TerraCheck. Suppose a user C pays for and possesses a dedicated disk with a capacity of s in the cloud. The dedicated disk is divided into two areas as shown in Fig. 1. The occupied area, with a capacity of s a , has been allocated by C for storing the data associated with running VMs or as general purpose storage. We consider the occupied area to be trusted by C to execute TerraCheck and store the verification metadata. The attested area, with a capacity of s u , remains unallocated, where s u = s - s a . The attested area is the verification target of TerraCheck. When C needs more disk space and increases the size of the occupied area, the size of the attested area will shrink accordingly. The goal of TerraCheck is to verify if the attested area has been maliciously taken by other users or the cloud provider. TerraCheck consists of four major procedures, as shown in Fig. 1. First, it places shadow chunks on the attested area of the target disk. The shadow chunks are deleted files which cannot be accessed from the file system directly by system calls. Instead, shadow chunks can be recovered by disk forensics techniques as long as they have not been overwritten. Second, it generates metadata, such as the hash value of the shadow chunks, for monitoring the alteration of shadow chunks. The metadata are stored on the occupied area, which has been allocated for storing the data associated with running VMs or as general purpose storage. Third, TerraCheck challenges the shadow chunks by using disk forensic techniques to recover them. Lastly, it compares the forensics results with the verification metadata. If any one of the shadow chunks has been altered and cannot be recovered, a violation of the dedication property is detected. Verification Requirements A solution for verifying the dedicated storage should satisfy the following technical requirements. -Accuracy. The verification mechanism should ensure that users can trust the result of the verification. When the misbehaved cloud providers break the dedication property by reassigning the dedicated storage to undesired tenants, the user should be able to detect such a violation with a high probability. -Efficiency. The verification procedure should be fast, without obviously interrupting the disk activities against the allocated part of the disk. Moreover, the metadata used for verification should be small; otherwise, it is unacceptable to use the same amount or more disk space to store the original shadow data on the local disk.
When the dedicated user occupies or releases more disk space, for example, for running more VMs or shutting down existing VMs, the disk area to be attested varies. Every time the customer needs to scale the disk space up or down, the affected shadow chunks should be as few as possible. System Operations TerraCheck consists of five basic operations. ChunkGen generates the shadow chunks and places them on the attested area. MetaGen generates the verification metadata and stores them on the occupied area. ChalGen generates the information of the challenged chunks. Retrieve executes the forensics of the challenged chunks and calculates their hash values. The Verify operation compares the result of Retrieve with the verification metadata recorded by MetaGen and makes the decision of the dedication verification. Table 1 summarizes all the variables used in this paper. -ChunkGen(n, l k , t h , t f ) → K = {k 1 , k 2 , ..., k n }: TerraCheck fills the attested area with a set of chunks K = {k 1 , k 2 , ..., k n }, where n * l k = s u . Each chunk k i has a header tag t h and a footer tag t f to represent the start and the end of a chunk, respectively. The total length of the header and the footer, l t h + l t f , is less than l k . This algorithm takes the number of chunks, the length of each chunk, the header t h and the footer t f as inputs and first generates n temporary files F = {f 1 , f 2 , ..., f n }. Every file f i in F starts with t h , ends with t f , and the rest of it is filled with random bits. Every file f i has the same length l k . All the files in F are stored on the attested area and then deleted from the file system. The bits left on the attested area associated with each file f i are the set of chunks K = {k 1 , k 2 , ..., k n }. Each chunk contains three parts -the header, the footer, and a random body. -MetaGen(n, t h , t f , img AA , h) → {meta DB , ⊥}: It takes the number of chunks, the header and footer tag information, the disk image of the attested area and a hash function as inputs, and returns the verification metadata or the abortion symbol ⊥. h : {0, 1}* → {0, 1} m denotes a fixed hash function that outputs an m-bit hash value. The MetaGen algorithm retrieves the chunks from img AA by matching t h and t f and calculates the hash value of each chunk. The resulting verification metadata meta DB = {(id ki , b i , e i , h(k i )) | i ∈ {1, 2, ..., n}, k i ∈ K} is stored on the occupied area. -Verify(result, chal) → {"success", "failure"}: The Verify algorithm takes result and chal as inputs and compares the hash value in result with that in chal. If the two hash values match, it outputs "success" and otherwise outputs "failure". TerraCheck Schemes We propose two schemes. The basic TerraCheck can accurately verify the violation of the dedication, with a high computation and storage overhead. The advanced TerraCheck can detect the violation of the dedication with a high probability, while reducing the verification overhead dramatically. Basic Scheme Our goal is to make sure that the attested area hasn't been allocated to other users. Our basic TerraCheck scheme consists of four phases. -Initial. In the initial phase, the attested area is filled with all zeros. This operation prevents the existing content on the disk from affecting our placement results. -Placement. We place the shadow chunks on the attested area by using the ChunkGen and MetaGen algorithms. If MetaGen → ⊥, a failure has occurred and TerraCheck should be restarted from the initial phase. Otherwise, MetaGen generates valid verification metadata meta DB .
-Verification is a procedure to patrol the dedicated storage device and collect evidence of undesired occupation by calling the Challenge, Retrieve and Verify algorithms until each shadow chunk placed in the attested area has been checked. The Verification phase is stopped once the Verify algorithm returns a "failure" for any chunk. The dedication property is preserved if all the chunks pass the examination. -Update is executed when the size of the attested area is subject to change. It is difficult to predict the set of affected chunks since the allocation of disk space depends on the disk scheduling. Therefore, both the shadow chunks and their associated verification metadata become useless and subject to deletion. The initial phase and placement phase should be restarted with the new attested area. The basic TerraCheck can successfully check the dedication property with high accuracy. If n - t shadow chunks are recoverable, it means that t chunks are altered, so that around t * l k out of the s u disk space has been allocated or corrupted maliciously. Theoretically, we can detect the alteration of any number of chunks with 100% certainty. However, the basic TerraCheck scheme has two main limitations: -Computational Cost. The verification phase has to read through the whole attested area and calculate the hash value for every shadow chunk. -Update Operation. When the size of the attested area has to be changed, TerraCheck should be restarted from the initial phase against the new attested area. Advanced Scheme To mitigate the limitations of the basic TerraCheck scheme, we propose a probabilistic TerraCheck scheme. To reduce the computational cost, we randomly sample the chunks during the Verification procedure. To provide an efficient update operation, we introduce multiple regions within the attested area, which we call attested regions. The attested region is the smallest unit for C to scale up the size of the occupied area. For example, C may plan to attach a certain amount of disk space to a newly launched VM. When the size of the occupied area is shrunk due to the termination of a VM, a new attested region will be created. Each attested region contains multiple shadow chunks. The shadow chunk is the smallest unit for challenge and verification. In addition, we use a Bloom filter to reduce the storage needed for the verification metadata. Attested Region We introduce attested regions for conveniently scaling up and down the size of the attested area. The attested area is divided into multiple attested regions. The size of an attested region depends on how a user uses the dedicated disk. For example, if the user uses the disk as attached secondary storage for running VMs, and each VM is attached to a fixed amount of disk space, that amount is an optimal size for each attested region. When an attested region is to be deleted, the related verification metadata are deleted and excluded from the TerraCheck procedure. The attested region can also serve the purpose of preventing the cloud provider from manipulating the allocation status of the dedicated storage. That is, only the user of the dedicated storage can extend the size of the occupied area by generating more attested regions. Probabilistic Verification Sampling greatly reduces the computational cost, while still achieving a high detection probability. We now analyze the probabilistic guarantees offered by a scheme that supports chunk sampling. Suppose the client probes p chunks during the Challenge phase.
Clearly, if the cloud provider destroys chunks other than those probed, the cloud provider will not be caught. Assume now that t chunks are tampered with and become unrecoverable, so that at least s t = t * l k of disk space is maliciously allocated. If the total number of chunks is n, the probability that at least one of the probed chunks matches at least one of the tampered chunks is ρ = 1 - (n-t)/n · (n-t-1)/(n-1) · ... · (n-p+1-t)/(n-p+1). Since (n-t-i)/(n-i) ≥ (n-t-i-1)/(n-i-1), it follows that ρ ≥ 1 - ((n-t)/n)^p. When t is a fraction of the chunks, user C can detect misbehaviors by asking for a constant number of chunks, independently of the total number of chunks. As shown in Fig. 2, if t = 1% of n, then TerraCheck asks for 459 chunks, 300 chunks and 230 chunks in order to achieve a detection probability of at least 99%, 95% and 90%, respectively. When the number of corrupted chunks goes up to 10% of the total chunks, the violation can be detected with 95% probability by only challenging 29 chunks. As the number of corrupted chunks increases, the number of chunks required to be checked decreases. The sampling is overwhelmingly better than scanning all chunks in the basic TerraCheck scheme. Therefore, we can challenge a fixed number of chunks to achieve a certain accuracy. The size of each chunk will determine the computation cost. When the size of each chunk is small, the overhead for retrieving all challenged chunks from the dedicated disk is low. Advanced Operations To establish efficient TerraCheck, we need to refine both the MetaGen and ChalGen algorithms. MetaGen(n, t h , t f , img AA , h) → {meta DB , ⊥}: The resulting verification metadata is meta DB = {(id ARx , id ki , b i , e i , h(k i )) | i ∈ {1, 2, ..., n}, k i ∈ K}. It lists the ID of the enclosing attested region, the ID of a chunk and the boundary of each chunk on the disk, such as the start block number b i and the end block number e i of chunk k i , and the hash value of each chunk h(k i ). Each chunk can be retrieved from the raw disk based on the start and end block number and the ID of the attested region, without the help of the file system. ChalGen(meta DB ) → chal: It randomly generates a challenge chal based on meta DB . chal = (id ARr , id kr , b r , e r , h(k r )) ∈ meta DB is the chunk to be examined. Our advanced TerraCheck scheme consists of the same phases as the basic TerraCheck. Advanced operations will be involved in the related phases, and the update phase should be modified as follows. -Update. Since the attested area is further divided into attested regions, when a user needs to extend or shrink the disk space for the occupied area, only a limited number of attested regions are deleted or added, so that TerraCheck against the rest of the chunks remains valid. When the occupied area scales up, the metadata related to the erased attested regions will be deleted. The rest of the metadata is still available for TerraCheck. Reducing Metadata Storage In the basic TerraCheck scheme, the size of meta DB for storing the verification metadata is linear in the number of shadow chunks. The number of chunks could be very large if the user wants to achieve a lower computational cost, as we discussed in the probabilistic verification. In order to reduce the amount of storage for verification metadata in TerraCheck, we take advantage of a Bloom filter to store the metadata for verification.
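As a concrete illustration of the sampling analysis above, the following Python sketch computes the detection probability and the number of chunks that need to be probed for a target probability. This is not part of the authors' implementation; the chunk count n used below is an arbitrary illustrative value, and small differences from the figures quoted in the text can arise from rounding or from using the bound instead of the exact expression.

```python
from math import ceil, log

def detection_probability(n, t, p):
    """Exact probability that probing p of n chunks hits at least one of the
    t corrupted chunks (sampling without replacement):
    rho = 1 - prod_{i=0}^{p-1} (n - t - i) / (n - i)."""
    prob_miss = 1.0
    for i in range(p):
        prob_miss *= (n - t - i) / (n - i)
    return 1.0 - prob_miss

def chunks_to_probe(n, t, target_rho):
    """Smallest p for which the lower bound 1 - ((n - t) / n)**p
    reaches the target detection probability."""
    return ceil(log(1.0 - target_rho) / log((n - t) / n))

if __name__ == "__main__":
    n = 10000  # total number of shadow chunks (illustrative)
    for frac, rho in [(0.01, 0.99), (0.01, 0.95), (0.01, 0.90), (0.10, 0.95)]:
        t = int(frac * n)
        p = chunks_to_probe(n, t, rho)
        print(f"{frac:.0%} corrupted, target {rho:.0%}: probe {p} chunks "
              f"(exact probability {detection_probability(n, t, p):.4f})")
```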
A Bloom filter [START_REF] Bloom | Space/time trade-offs in hash coding with allowable errors[END_REF] is a space-efficient data structure for representing a set in order to support membership queries. A Bloom filter is suitable where one would like to keep or send a list for verification, but a complete list would require too much space. We use a Bloom filter to represent a set S = {x 1 , x 2 , ..., x n } of n elements as an array of m bits, initially all set to 0. It uses k independent hash functions h 1 , h 2 , ..., h k with range [1, m]. For mathematical convenience, we make the natural assumption that these hash functions map each item in the universe to a random number over the range {1, ..., m}. For each element x ∈ S, the bits at positions h i (x) are set to 1 for 1 ≤ i ≤ k. A location can be set to 1 multiple times. To check if an item y is a member of S, we check whether all h i (y) are 1. If not, then clearly y is not a member of S. If all h i (y) are 1, we assume that y is in S. We know that a Bloom filter may yield a false positive, where it suggests that an element x is in S even though it is not. The probability of a false positive for an element not in the set, or the false positive rate, can be estimated, given our assumption that hash functions are perfectly random. After all the elements of S are hashed into the Bloom filter, the probability that a specific bit is still 0 is PR zero = (1 - 1/m)^(kn) ≈ e^(-kn/m). The probability of a false positive is (1 - PR zero )^k. A Bloom filter with an optimal value for the number of hash functions can improve storage efficiency. We modify our TerraCheck model to utilize a Bloom filter to reduce the storage cost of the verification metadata. -BF-MetaGen(t h , t f , img AA , h) → {meta F ILT ER , ⊥}: The algorithm takes the header and footer tag information, the disk image of the attested area and a hash function as inputs, and returns the verification metadata or an abortion. meta F ILT ER is a Bloom filter which contains the hash value of every shadow chunk. -BF-Verify(result, meta F ILT ER ) → {"success", "failure"}: It takes result and meta F ILT ER as inputs and checks if the hash value in result is valid and associated with any chunk. If the hash value can be found in meta F ILT ER , the algorithm outputs "success" and otherwise "failure". Implementation and Evaluation We implement and evaluate both the basic and the advanced TerraCheck schemes. All experiments are conducted on a Dell PowerEdge460 server with an Intel Core i5 CPU running at 3.10GHz, and with 4096 MB of RAM. The system runs Ubuntu 12.04 (LTS) and is configured with the Xen hypervisor. The dedicated storage device is a Western Digital SATA 7200 rpm hard disk with 1TB capacity and 64MB cache. For evaluation purposes, we used SHA-1 as the hash function h. The random values used for challenging the chunks in the advanced TerraCheck are generated using the function proposed by Shoup [START_REF] Dent | The Cramer-Shoup encryption scheme is plaintext aware in the standard model[END_REF]. All data represent the mean of 20 trials. We implement the large attested area in the basic TerraCheck scheme, and each attested region in the advanced TerraCheck scheme, as a logical volume. The occupied area may involve multiple logical volumes. LVM (Logical Volume Management) technology is exploited to automate the update operation when the size of the occupied disk space varies. We rely on the retrievability of the shadow chunks on each logical volume to check the dedication property.
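To make the Bloom-filter representation of the verification metadata concrete, here is a minimal, self-contained Python sketch of a BF-MetaGen/BF-Verify pair. It is illustrative only and not the authors' implementation (their evaluation uses the apgbmf tool, described below); the filter size, the choice of k = 7 hash positions and the SHA-based position derivation are assumptions made for this example.

```python
import hashlib
from math import exp

class ChunkBloomFilter:
    """Toy Bloom filter over chunk digests; k bit positions per item are
    derived by salting SHA-256 with a counter byte (illustrative choice)."""
    def __init__(self, m_bits=8 * 1024 * 1024, k=7):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def bf_metagen(chunks, bloom):
    """BF-MetaGen analogue: insert the hash value of every shadow chunk."""
    for chunk in chunks:
        bloom.add(hashlib.sha1(chunk).digest())

def bf_verify(recovered_chunk, bloom):
    """BF-Verify analogue: 'success' iff the recovered chunk's hash is in the filter."""
    return "success" if hashlib.sha1(recovered_chunk).digest() in bloom else "failure"

def false_positive_rate(k, n, m):
    """Estimate (1 - e^(-k*n/m))^k used in the text."""
    return (1.0 - exp(-k * n / m)) ** k

if __name__ == "__main__":
    bf = ChunkBloomFilter()
    chunks = [b"chunk-%d" % i for i in range(1000)]   # stand-ins for shadow chunks
    bf_metagen(chunks, bf)
    print(bf_verify(chunks[0], bf))                   # expected: success
    print(bf_verify(b"overwritten data", bf))         # failure (up to false positives)
    print(false_positive_rate(k=7, n=1000, m=8 * 1024 * 1024))
```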
We utilize Scalpel [START_REF] Richard | Scalpel: A frugal, high performance file carver[END_REF], which is an open source file recovery utility with an emphasis on speed and memory efficiency, to retrieve the shadow chunks based on their header tag and footer tag. To perform file recovery, Scalpel makes two sequential passes over each disk image. The first pass reads the entire disk image, searches for the headers and footers, and builds a database of the locations of these headers and footers. The second pass retrieves the files from the disk image based on the location information of the header and footer. Scalpel is file system-independent and will carve files from FATx, NTFS, ext2 and ext3, or raw partitions. We evaluate both the computation overhead and the storage cost during each phase of TerraCheck and demonstrate compliance with the requirements of both accuracy and efficiency identified in Section 3.1. Initial Phase. During the initial phase, the attested area is filled with all zeros. The time for this phase is determined by, and linear in, the size of the attested area s u . It takes about 10 seconds for cleaning 1 GB of the attested area. Both basic TerraCheck and advanced TerraCheck have the same performance in this phase. Placement Phase. There are two steps for placing the chunks. The first step is to generate and store the chunks on the attested area. The cost of this operation is determined by the chunk size and the size of the attested area. On our testbed, it takes 12 seconds to store 100MB of shadow chunks. The second step is to generate the metadata. It takes 8.198 seconds for Scalpel to scan 1 GB of the attested area in the first pass and store the location information. Verification Phase. The basic TerraCheck examines all the chunks based on the verification metadata recorded in meta DB . Therefore, the time for generating the challenge can be ignored. The advanced TerraCheck randomly challenges the chunks. The generation of a random number takes less than 0.1 ms. The challenged chunks are retrieved from the attested area based on the start and end locations recorded in the verification metadata. Therefore, the performance is determined by the disk access time. Tab. 2 shows the disk access time in our experiment. After retrieving the challenged chunks, TerraCheck compares the hash value of each retrieved chunk with the verification information. In basic TerraCheck, all the chunks residing on the attested area should be checked, which requires the time for calculating the hash values of all the chunks. The advanced TerraCheck scheme randomly challenges the chunks to achieve the detection of undesired disk occupation. We simulate the behavior in which a proportion of the attested area is altered. For instance, if a random 1% of an attested area with 10000 chunks is altered, such a situation could be detected with a 90% probability by challenging 217 chunks on average, which is close to the theoretical result. Update Phase. For the basic TerraCheck scheme, the performance of the update is the same as the overhead of executing the initial and placement phases. The performance of the advanced TerraCheck scheme depends on the change of the size of the attested area. When the occupied area is extended, the advanced TerraCheck scheme only needs to update meta DB by deleting the items of the affected chunks. When the occupied area is shrunk, more attested regions should be created on the attested area. The generation of each attested region takes about 400 ms regardless of the size of the attested region.
Therefore, the TerraCheck scheme can scale with a low overhead when users update the size of the attested area frequently. Reducing Metadata Storage. apgbmf [2] was originally used to manage Bloom filters for restricting password generation in the APG password generation software [START_REF] Spafford | Opus: Preventing weak password choices 17[END_REF]. We use apgbmf version 2.2.3 as a standalone Bloom filter management tool. We consider each hash value of a shadow chunk as an item of a password dictionary in the context of apgbmf. We create a Bloom filter for this hash value dictionary. During the verification phase of TerraCheck, if a recovered chunk is unaltered, its hash value will pass the Bloom filter, i.e., the hash value is, with high probability, one of the hash values associated with an original shadow chunk. When we allow a 1% false positive rate, the storage cost with the Bloom filter is reduced by a factor of 5.5, as shown in Fig. 3. When the number of chunks is more than 10 million, the metadata only requires 36 MB as compared to 200 MB without using a Bloom filter. Related Work Cloud service providers [1, 13] are starting to offer physically isolated resources to lower the entry barrier for enterprises to adopt cloud computing and storage. For instance, in the Amazon cloud [1], Dedicated Instances are a form of EC2 instances launched within the Amazon Virtual Private Cloud, which run on hardware dedicated to a single customer. Some research has been done to guarantee the exclusive occupation of dedicated resources for security reasons. The side channel based on the CPU L2 cache has been used to verify the exclusive use of a physical machine [START_REF] Zhang | Homealone: Co-residency detection in the cloud via side-channel analysis[END_REF]. Ristenpart et al. [START_REF] Ristenpart | Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds[END_REF] propose to use the existing side channels to verify the co-residency of VMs. [START_REF] Garfinkel | Terra: a virtual machine-based platform for trusted computing[END_REF] allows application designers to build secure applications in the same way as on a dedicated closed platform by using a trusted virtual machine monitor. However, it requires the modification of a commercial hypervisor. Researchers have also investigated techniques to verify various security properties claimed in SLAs. Dijk et al. [START_REF] Dijk | Hourglass schemes: How to prove that cloud files are encrypted[END_REF] prove that files are stored with encryption at the cloud server side by imposing a resource requirement on the process of translating files from plaintext to ciphertext. Proof of Retrievability (PoR) [START_REF] Dodis | Proofs of retrievability via hardness amplification[END_REF] aims to verify if the files are available in the cloud storage at any time. However, PoR cannot verify where the files are located. RAFT [START_REF] Bowers | How to tell if your cloud files are vulnerable to drive crashes[END_REF] can verify that a file is stored with sufficient redundancy by measuring the response time for accessing "well-collected" file blocks. Another work [START_REF] Benson | Do you know where your cloud files are?[END_REF] proposes a mechanism to verify that the cloud storage provider replicates the data in multiple geo-locations, by measuring the network latency.
[START_REF] Wang | Disk storage isolation and verification in cloud[END_REF] proposes a method to verify the disk storage isolation of conflict-of-interest files so that the Chinese Wall security policy [START_REF] Brewer | The chinese wall security policy[END_REF] can be successfully enforced in a cloud storage environment. Conclusion In this paper, we propose TerraCheck to help cloud users verify the exclusive use of their dedicated cloud storage resources. TerraCheck places shadow chunks on the dedicated disk and detects the change of the shadow information by taking advantage of disk forensics techniques. We further improve the computational efficiency by randomly challenging the chunks and reduce the storage by applying a Bloom filter.

Fig. 1. Overview of TerraCheck

The verification metadata meta DB lists the ID of a chunk and the boundary of each chunk on the disk, such as the start block number b i and the end block number e i of chunk k i , and the hash value of each chunk h(k i ). Each chunk can be retrieved from the raw disk based on the start and end block number without the help of the file system. Let |meta DB | be the number of items in meta DB . If |meta DB | ≠ n, it indicates that some chunks either cannot be recovered from the disk image of the attested area or a mismatched header or footer is involved among the chunks. In this case, MetaGen fails and outputs the abortion symbol ⊥.
-ChalGen(meta DB , id ki ) → chal: This algorithm generates a challenge chal based on meta DB and the ID of the queried chunk. chal = (id ki , b i , e i , h(k i )) ∈ meta DB is the chunk to be examined.
-Retrieve(chal, h) → result: It takes the challenge and the hash function as inputs and calculates the hash value after retrieving the chunk based on the information specified in chal. It returns the hash value of the chunk in chal.

Fig. 3. Comparison of the Storage Cost with/without Bloom Filter (1% False Positive Rate Allowed)

Table 1. Summary of Operation Parameters
Variable - Meaning
C - The cloud user who possesses the dedicated device and executes dedication verification
n - The number of shadow chunks placed on the attested disk area
l k - Length of each shadow chunk
t h - Header tag of each chunk
t f - Footer tag of each chunk
K - The set of shadow chunks
s u - Size of unallocated disk space
id ki - ID of shadow chunk i
F - The set of files for generating shadow chunks
img AA - Disk image of attested area
meta DB - File for storing verification metadata
b i - Starting disk address of chunk i on attested area
e i - Ending disk address of chunk i on attested area
id ARx - ID of attested region x
meta F ILT ER - File for storing Bloom filter

Table 2. Time for Retrieving Chunks
Chunk Size:    512KB  1MB    2MB    4MB    8MB    16MB
Retrieve Time: 13 ms  15 ms  20 ms  29 ms  48 ms  86 ms

Acknowledgement This material is based upon work supported by the National Science Foundation under grant CT-20013A, by US Army Research Office under MURI grant W911NF-09-1-0525 and DURIP grant W911NF-11-1-0340, and by the Office of Naval Research under grant N0014-11-1-0471.
37,941
[ "1004170", "1004171", "978046", "1004172" ]
[ "303369", "452410", "452410", "452410", "303369" ]
01490721
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490721/file/978-3-642-39256-6_9_Chapter.pdf
Changyu Dong email: [email protected] Liqun Chen email: [email protected] Jan Camenisch Giovanni Russello email: [email protected] Fair Private Set Intersection with a Semi-trusted Arbiter A private set intersection (PSI) protocol allows two parties to compute the intersection of their input sets privately. Most of the previous PSI protocols only output the result to one party and the other party gets nothing from running the protocols. However, a mutual PSI protocol in which both parties can get the output is highly desirable in many applications. A major obstacle in designing a mutual PSI protocol is how to ensure fairness. In this paper we present the first fair mutual PSI protocol which is efficient and secure. Fairness of the protocol is obtained in an optimistic fashion, i.e. by using an offline third party arbiter. In contrast to many optimistic protocols which require a fully trusted arbiter, in our protocol the arbiter is only required to be semi-trusted, in the sense that we consider it to be a potential threat to both parties' privacy but believe it will follow the protocol. The arbiter can resolve disputes without knowing any private information belongs to the two parties. This feature is appealing for a PSI protocol in which privacy may be of ultimate importance. Introduction An interesting problem in secure computation is private set intersection (PSI). Namely, how to enable two mutually untrusted parties to compute jointly the intersection of their private input sets. PSI has many potential applications in private data mining, online recommendation services, online dating services, medical databases and so on. There have been many protocols proposed to solve the PSI problem [START_REF] Freedman | Efficient private matching and set intersection[END_REF][START_REF] Dachman-Soled | Efficient robust private set intersection[END_REF][START_REF] Hazay | Efficient set operations in the presence of malicious adversaries[END_REF][START_REF] Hazay | Efficient protocols for set intersection and pattern matching with security against malicious and covert adversaries[END_REF][START_REF] Jarecki | Efficient oblivious pseudorandom function with applications to adaptive ot and secure computation of set intersection[END_REF][START_REF] Cristofaro | Practical private set intersection protocols with linear complexity[END_REF][START_REF] Cristofaro | Linear-complexity private set intersection protocols secure in malicious model[END_REF][START_REF] Kissner | Privacy-preserving set operations[END_REF][START_REF] Camenisch | Private intersection of certified sets[END_REF][START_REF] Kim | Mutual private set intersection with linear complexity[END_REF]. The majority of them are single-output protocols, i.e. only one party obtains the intersection and the other party gets nothing. However, there are many motivating scenarios in which both parties want to know the intersection. Several examples have been given in [START_REF] Cristofaro | Practical private set intersection protocols with linear complexity[END_REF] to demonstrate the need for such mutual PSI protocols: -Two real estate companies would like to identify customers (e.g., homeowners) who are double-dealing, i.e., have signed exclusive contracts with both companies to assist them in selling their properties. -A government agency needs to make sure that employees of its industrial contractor have no criminal records. 
Neither the agency nor the contractor is willing to disclose their respective data-sets (list of convicted felons and employees, respectively), but both would like to know the intersection, if any. A mutual PSI protocol must be fair, i.e. if one party knows the intersection, the other party should also know it. However, fairness is hard to achieve in cryptographic protocols (see Section 2 for a brief overview). To achieve fairness efficiently, most fair cryptographic protocols are optimistic, which requires help from an offline arbiter who is a trusted third party. The arbiter only participates if one party unfairly aborts the protocol, and can recover the output from the protocol for the honest party. Incorporating optimistic fairness in PSI protocols is not easy for two reasons: firstly, although there is a generic structure, there is no generic construction for optimistic fair protocols. Secondly, the arbiter usually has to get access to some private information and therefore has to be fully trusted. However, in reality it is hard to find such a fully trusted third party. Think about the examples above: an independent entity, e.g. an auditing service provider, could be well qualified to resolve the disputes; however, giving a third party access to private data may raise privacy concerns. We can find more cases in which the two parties may trust a third party for fairly resolving disputes, but may not trust it for privacy. In this paper, we present the first fair mutual PSI protocol. The protocol has built-in support for optimistic fairness and does not require setup assumptions such as certified input sets. In addition, the third party acting as the arbiter can resolve disputes without knowing the private inputs or the output of the PSI protocol. Hence we can significantly reduce the trust placed on the arbiter. This makes the protocol more flexible in terms of practical usage, as any third party can become an arbiter as long as they are believed to be able to correctly carry out instructions. Related Work Private Set Intersection (PSI) protocols allow two parties, each with a private set, to securely compute the intersection of their sets. The problem was first introduced by Freedman et al. in [START_REF] Freedman | Efficient private matching and set intersection[END_REF]. Their protocol is based on oblivious polynomial evaluation. Dachman-Soled et al. [START_REF] Dachman-Soled | Efficient robust private set intersection[END_REF] and Hazay and Nissim [START_REF] Hazay | Efficient set operations in the presence of malicious adversaries[END_REF] followed the oblivious polynomial evaluation approach and proposed protocols which are more efficient in the presence of malicious adversaries. Hazay and Lindell [START_REF] Hazay | Efficient protocols for set intersection and pattern matching with security against malicious and covert adversaries[END_REF] proposed another approach for PSI which is based on oblivious pseudorandom function evaluation. This approach is further improved by Jarecki and Liu [START_REF] Jarecki | Efficient oblivious pseudorandom function with applications to adaptive ot and secure computation of set intersection[END_REF]. De Cristofaro et al. [START_REF] Cristofaro | Practical private set intersection protocols with linear complexity[END_REF][START_REF] Cristofaro | Linear-complexity private set intersection protocols secure in malicious model[END_REF] proposed PSI protocols with linear communication and computational complexities.
Huang et al. [START_REF] Huang | Private set intersection: Are garbled circuits better than custom protocols?[END_REF] presented a PSI protocol based on garbled circuits, and showed that in the semi-honest model the protocol can be very efficient. There are also protocols based on commutative encryption [START_REF] Agrawal | Information sharing across private databases[END_REF][START_REF] Vaidya | Secure set intersection cardinality with application to association rule mining[END_REF]. All of the above protocols are single-output, i.e. one party gets the output and the other party gets nothing. This is a traditional way to simplify protocol design in the malicious model because it removes the need for fairness, i.e. how to prevent the adversary from aborting the protocol prematurely after obtaining the output (and before the other party obtains it) [START_REF] Goldreich | Foundations of Cryptography: Volume II Basic Applications[END_REF]. Nevertheless, there have been a few mutual PSI protocols which are designed to output the intersection to both parties. Kissner and Song [START_REF] Kissner | Privacy-preserving set operations[END_REF] proposed the first mutual PSI protocol. The protocol itself does not guarantee fairness, but relies on the assumption that the homomorphic encryption scheme they use has a fair threshold decryption protocol. However, unless there is an online trusted third party, it is also non-trivial to achieve fairness in threshold decryption protocols. On the other hand, if an online trusted third party is available, the PSI functionality can be trivially computed by giving the input sets to the trusted party. Camenisch and Zaverucha [START_REF] Camenisch | Private intersection of certified sets[END_REF] sketched a mutual PSI protocol which requires the input sets to be signed and certified by a trusted party. Their mutual PSI protocol is obtained by weaving two symmetric instances of a single-output PSI protocol with certified input sets. Fairness is obtained by incorporating an optimistic fair exchange scheme. However, this protocol does not work in general cases where inputs are not certified, because it is hard to force the two parties to use the same inputs in the two instances. Another mutual PSI protocol is proposed by Kim et al. [START_REF] Kim | Mutual private set intersection with linear complexity[END_REF], but they specifically state that fairness is not considered in their security model. Fairness is a long-discussed topic in cryptographic protocols. Cleve [START_REF] Cleve | Limits on the security of coin flips when half the processors are faulty (extended abstract)[END_REF] showed that complete fairness is impossible in two-party protocols in the malicious model. However, partial fairness can be achieved. Partial fairness means that one party can have an unfair advantage, but the advantage is computationally insignificant. Many protocols achieve partial fairness by using the gradual release approach [START_REF] Blum | How to exchange (secret) keys[END_REF][START_REF] Ben-Or | A fair protocol for signing contracts[END_REF][START_REF] Pinkas | Fair secure two-party computation[END_REF]. However, this approach is inherently inefficient. The optimistic approach, which uses an offline trusted third party, has been widely used to obtain fairness efficiently. It is called optimistic because it cannot prevent the unfair behaviour but later the trusted third party can recover the output for the honest party.
There has been a long line of research in this direction [START_REF] Asokan | Optimistic protocols for fair exchange[END_REF][START_REF] Asokan | Optimistic fair exchange of digital signatures (extended abstract)[END_REF][START_REF] Bao | Efficient and practical fair exchange protocols with off-line ttp[END_REF][START_REF] Ateniese | Efficient verifiable encryption (and fair exchange) of digital signatures[END_REF][START_REF] Cachin | Optimistic fair secure computation[END_REF][START_REF] Micali | Simple and fast optimistic protocols for fair electronic exchange[END_REF][START_REF] Dodis | Optimistic fair exchange in a multi-user setting[END_REF]. Previously, the trusted third party in an optimistic fair protocol which requires non-trivial computation on the inputs needed to be fully trusted and could get the output or inputs of the protocol if one party raised a dispute. This might not be desirable when the output or inputs should be kept strictly private. There are also other approaches for achieving partial fairness efficiently, but usually they work only for a specific problem. For example, the concurrent signatures protocol [START_REF] Chen | Concurrent signatures[END_REF] allows two parties to produce and exchange two ambiguous signatures until an extra piece of information (called a keystone) is released by one of the parties. The two parties obtain the signature from the other party concurrently when the keystone is released, and therefore fairness is achieved. Kamara et al. [START_REF] Kamara | Salus: A system for efficient server-aided multi-party computation[END_REF] proposed a new computation model in which a non-colluding server is involved. Fairness can be achieved in this model if there is a semi-trusted server, but the server has to be online during the computation. In our protocol we also require a semi-trusted server, but it can be offline most of the time. Building Blocks Homomorphic Encryption A semantically secure homomorphic public key encryption scheme is used as a building block in the protocol. There are two types of homomorphic encryption, additive and multiplicative. The additive homomorphic property can be stated as follows: (1) given two ciphertexts E pk (m 1 ), E pk (m 2 ), E pk (m 1 + m 2 ) = E pk (m 1 ) • E pk (m 2 ); (2) given a ciphertext E pk (m 1 ) and a constant c, E pk (c • m 1 ) = E pk (m 1 )^c. The multiplicative homomorphic property can be stated as follows: (1) given two ciphertexts E pk (m 1 ), E pk (m 2 ), E pk (m 1 • m 2 ) = E pk (m 1 ) • E pk (m 2 ); (2) given a ciphertext E pk (m 1 ) and a constant c, E pk (m 1 ^c) = E pk (m 1 )^c. The Freedman-Nissim-Pinkas (FNP) protocol Our starting point is the PSI protocol in the semi-honest model proposed by Freedman et al. [START_REF] Freedman | Efficient private matching and set intersection[END_REF], which is based on oblivious polynomial evaluation. In this protocol, one party A has an input set X and another party B has an input set Y such that |X| = |Y| = n. The two parties interact as follows: 1. A chooses a key pair (pk, sk) for an additive homomorphic encryption scheme and makes the public key pk available to B. 2. A defines a polynomial Q(y) = (y - x 1 )(y - x 2 ) . . . (y - x n ) = ∑_{i=0}^{n} d i y^i, where each element x i ∈ X is a root of Q(y).
A then encrypts each coefficient d i using the public key chosen in the last step and sends the encrypted coefficients E pk (d i ) to B. 3. For each element y j ∈ Y , B evaluates Q(y j ) obliviously using the homomorphic property: E pk (Q(y j )) = ∏_{i=0}^{n} E pk (d i )^(y j ^i). B also encrypts y j using A's public key. B then chooses a random r j and uses the homomorphic property again to compute E pk (r j • Q(y j ) + y j ) = E pk (Q(y j ))^(r j ) • E pk (y j ). B sends each E pk (r j • Q(y j ) + y j ) to A. 4. A decrypts each ciphertext received from B. If y j ∈ X ∩ Y , then Q(y j ) = 0, thus the decryption will be y j , which is also an element in X; otherwise, the decryption will be a random value. By checking whether the decryption is in X, A can output X ∩ Y while learning nothing about the elements that are in Y but not in X. Zero Knowledge Proof A zero knowledge proof protocol allows a prover to prove the validity of a statement without leaking any other information. The protocol presented in Section 3.2 is secure against semi-honest adversaries. However, in the presence of malicious adversaries we have to prevent the adversaries from deviating from the protocol. We enforce this by requiring each party to use zero knowledge proofs to convince the other party that it follows the protocol correctly. We will name the protocols as PK(...) and use the notation introduced in [START_REF] Camenisch | A framework for practical universally composable zero-knowledge protocols[END_REF] to present the protocols in the rest of the paper: K {ω i ∈ I*(m ωi )} n i=1 : ∃ {χ j ∈ I*(m χj )} m j=1 : φ(ω 1 , ..., ω n , χ 1 , ..., χ m ). In short, the prover is proving the knowledge of ω 1 , ..., ω n and the existence of χ 1 , ..., χ m such that these values satisfy a certain predicate φ(ω 1 , ..., ω n , χ 1 , ..., χ m ). Each ω i and χ j belongs to some integer domain I*(m ωi ) and I*(m χj ). Each predicate is a boolean formula built from atomic predicates of discrete logarithms y = ∏_{i=1}^{n} g i ^(F i (ω 1 ,...,ω n )), where F i is an integer polynomial. All quantities except ω 1 , ..., ω n are assumed to be publicly known. For example, the following means that, given a certain group structure and a tuple (α, β, g, h), the prover can prove in zero knowledge that it knows the discrete logarithm x of α and that there exists some s such that β = h^x g^s : K x ∈ Z q : ∃ s ∈ Z q : α = g^x ∧ β = h^x g^s. Verifiable Encryption In a nutshell, a verifiable encryption scheme is a public key encryption scheme accompanied by an efficient zero knowledge proof that the plaintext satisfies certain properties [START_REF] Camenisch | Practical verifiable encryption and decryption of discrete logarithms[END_REF]. It has numerous applications in key escrow, secret sharing and optimistic fair exchange. In optimistic fair exchange protocols, a convention is to let a party create a verifiable escrow of a data item. The escrow is essentially an encryption of the escrowed item under the offline arbiter's public key. A piece of public data called a label is attached so that the arbiter can verify the decryption against the label to ensure certain properties hold. It also allows efficient zero knowledge proofs of correct decryption to be constructed. Perfectly Hiding Commitment In our protocol, we also use a perfectly hiding commitment scheme [START_REF] Pedersen | Non-interactive and information-theoretic secure verifiable secret sharing[END_REF] in zero knowledge proof protocols.
Generally speaking, a commitment scheme is a protocol between two parties, the committer and the receiver. The committer can commit to a value v by generating a commitment com(v) and sending it to the receiver. The commitment can be used as input to zero knowledge proof protocols. The commitment has two properties: hiding, which means it is infeasible for the receiver to find v; and binding, which means it is infeasible for the committer to find another value v' ≠ v such that com(v') = com(v). The strength of hiding and binding can be perfect or computational. In our case, we want a perfectly hiding commitment scheme, which means the receiver cannot recover the committed value, even with unbounded computational power. Overview of the Protocol

Fig. 1. Overview of the Fair PSI protocol

In this section, we give a high level view of the protocol as depicted in Fig. 1. The protocol has two sub-protocols: a PSI protocol to compute the set intersection between A and B and a dispute resolution protocol. Note that in our protocol all encryptions are in exponential form, i.e. rather than directly encrypting a message m, we encrypt g^m, where g is a generator of a certain group. This modification is necessary to allow zero knowledge proofs, and it does not affect the correctness or security of the encryption schemes. With this modification, oblivious polynomial evaluation is still possible if we use a multiplicative homomorphic encryption scheme rather than an additive one. The polynomial is moved to the exponent and the evaluation is done by operations on exponents. This is a standard technique in homomorphic encryption. For example, given E pk (g^a ), E pk (g^b ) and x, we can evaluate ax + b obliviously and get E pk (g^(ax+b) ) by computing (E pk (g^a ))^x • E pk (g^b ). Having polynomial evaluation results in the exponent is sufficient for our protocol, as the parties only need to test whether, for a certain y, Q(y) is 0. This can be done effectively because Q(y) = 0 iff g^Q(y) = 1. -Setup: Choose a homomorphic encryption scheme E and a verifiable encryption scheme E, and publish the public parameters. The offline arbiter R also generates a key pair for E and publishes the public key through a CA. -Private Set Intersection: A and B are the parties who engage in the computation of the set intersection, and each has a private input set, X and Y respectively. In our protocol we require that A's set contains at least one random dummy element in each protocol execution. The sizes of X and Y are also required to be different. Namely, |X| = n' and |Y| = n such that n' > n. The requirements are placed to protect A's polynomial (see Remark 1). A and B each also generates a random key pair for E and sends the public key to the other. They also negotiate a message authentication code (MAC) key k. This key is used by both parties to ensure that the messages in the protocol execution come from the other party. A general method to achieve this is using a MAC algorithm.
1. A generates a polynomial based on A's set X as described in Section 3.2. If $d_{n'}$ is zero, A regenerates the random dummy elements in X and the polynomial until $d_{n'}$ is not zero. A encrypts all the coefficients as $E_{pk_A}(g^{d_0}), \ldots, E_{pk_A}(g^{d_{n'}})$ and sends the ciphertexts to B. A then runs a zero knowledge proof protocol $PK_{poly}$ to prove that the polynomial is indeed correctly constructed.
2. For each element $y_j \in Y$, B evaluates the polynomial using the homomorphic property. Unlike in the FNP protocol, which evaluates to $E_{pk_A}(r_j \cdot Q(y_j) + y_j)$, in our protocol B also uses another random blinding factor $r'_j$ to blind the result, so the polynomial evaluates to $E_{pk_A}(g^{r_j \cdot Q(y_j) + r'_j + y_j})$. B sends all ciphertexts to A. B then encrypts all the blinding factors $r'_j$ using R's public key with a label L as $\hat{E}^{L}_{pk_R}(g^{r'_j})$. L includes a session ID and a hash value of all communication in the protocol execution so far (see Remark 2). B sends the encrypted blinding factors to A, and uses $PK_{prop}$ to prove that (1) the polynomial evaluation is properly done and (2) the encryption of the blinding factors is properly done.
3. A decrypts each $E_{pk_A}(g^{r_j \cdot Q(y_j) + r'_j + y_j})$ and then encrypts each $g^{r_j \cdot Q(y_j) + r'_j + y_j}$ using B's public key. Each ciphertext $E_{pk_B}(g^{r_j \cdot Q(y_j) + r'_j + y_j})$ is sent to B, and A must prove to B that the ciphertext is a correct re-encryption of the corresponding $E_{pk_A}(g^{r_j \cdot Q(y_j) + r'_j + y_j})$. B then decrypts each ciphertext and checks whether there is a $g^{y_j + r'_j}$ that matches the decryption $g^{r_j \cdot Q(y_j) + r'_j + y_j}$; if so, $y_j$ is in $X \cap Y$.
4. B then sends $g^{r'_1}, \ldots, g^{r'_n}$ and proves they are correct with regard to the encryptions sent in step 2. A will then be able to test all combinations $g^{x_i + r'_j}$ to see whether there is a match with a decryption $g^{r_j \cdot Q(y_j) + r'_j + y_j}$ it obtained in step 3; if so, $x_i$ is in $X \cap Y$. If B does not send $g^{r'_1}, \ldots, g^{r'_n}$ or fails to prove they are valid, A can raise a dispute with R by sending a dispute resolution request.
- Dispute Resolution:
1. A sends all messages sent and received in the first two steps of the PSI protocol execution to R. R verifies the transcript by checking the consistency between the messages and the label. If the transcript ends before the end of step 2 of the PSI protocol, R simply aborts, as neither party gets any advantage.
2. A then encrypts each $g^{r_j \cdot Q(y_j) + r'_j + y_j}$ using B's public key. The ciphertext $E_{pk_B}(g^{r_j \cdot Q(y_j) + r'_j + y_j})$ is sent to R, and A must prove to R that the ciphertext is a correct re-encryption of the corresponding $E_{pk_A}(g^{r_j \cdot Q(y_j) + r'_j + y_j})$ in the transcript.
3. R decrypts $\hat{E}^{L}_{pk_R}(g^{r'_1}), \ldots, \hat{E}^{L}_{pk_R}(g^{r'_n})$ and sends $g^{r'_1}, \ldots, g^{r'_n}$ to A, so that A can learn the intersection $X \cap Y$.
4. R also sends all $E_{pk_B}(g^{r_j \cdot Q(y_j) + r'_j + y_j})$ to B.
Remark 1: In the initialisation stage of the PSI protocol, we require A to randomise its set X by adding at least one random and secret dummy element, and to make sure $|X| > |Y|$. This is to protect A's privacy. The plaintext in each $E_{pk_B}(g^{r_j \cdot Q(y_j) + r'_j + y_j})$ needs to be released to B in the PSI protocol. As $r_j$ and $r'_j$ are chosen by B, B might be able to recover $g^{Q(y_j)}$. B can recover A's polynomial if it can obtain at least $n'$ pairs $(g^{Q(y_j)}, y_j)$. In any execution of the protocol, B can recover at most $n$ pairs. Because $n' > n$, the attack is not possible. Randomising the polynomial in each execution prevents B from pooling information gathered from different executions to recover A's polynomial.
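To see why the matching in steps 2-4 of the PSI protocol identifies exactly the intersection, it can be useful to trace the exponent arithmetic on its own. The sketch below is our own illustration with toy parameters and invented names; the encryption layer and all zero knowledge proofs are deliberately stripped away, leaving only the blinded values $g^{r_j \cdot Q(y_j) + r'_j + y_j}$, B's test against $g^{y_j + r'_j}$, and A's test against $g^{x_i + r'_j}$ once the $g^{r'_j}$ are revealed.

```python
# Exponent arithmetic behind steps 2-4: blinding with r_j and r'_j, B's match
# against g^(y_j + r'_j), and A's match against g^(x_i + r'_j).
# Illustration only: toy group, encryption and proofs omitted, our own naming.
import random

P, Q_ORD, G = 2039, 1019, 4                       # safe prime p = 2q + 1

def poly_eval(coeffs, y):                         # Q(y) mod q, low degree first
    return sum(d * pow(y, i, Q_ORD) for i, d in enumerate(coeffs)) % Q_ORD

X = [3, 7, 11, 830]                               # 830 plays the random dummy element
Y = [5, 7, 11]

coeffs = [1]                                      # coefficients of prod (z - x) mod q
for x in X:
    new = [0] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        new[i] = (new[i] - c * x) % Q_ORD
        new[i + 1] = (new[i + 1] + c) % Q_ORD
    coeffs = new

# Step 2: B blinds each evaluation with fresh r_j and r'_j
r = [random.randrange(1, Q_ORD) for _ in Y]
rp = [random.randrange(1, Q_ORD) for _ in Y]
released = [pow(G, (r[j] * poly_eval(coeffs, y) + rp[j] + y) % Q_ORD, P)
            for j, y in enumerate(Y)]             # g^(r_j*Q(y_j) + r'_j + y_j)

# Step 3: B recognises y_j in the intersection when g^(y_j + r'_j) matches
print("B learns:", [y for j, y in enumerate(Y)
                    if pow(G, (y + rp[j]) % Q_ORD, P) == released[j]])   # [7, 11]

# Step 4: once g^(r'_j) is revealed, A tests g^(x_i + r'_j) for every pair (i, j)
g_rp = [pow(G, e, P) for e in rp]
print("A learns:", sorted({x for x in X for j in range(len(Y))
                           if (pow(G, x, P) * g_rp[j]) % P == released[j]}))
# expected: [7, 11]; in this toy group a spurious match is possible but unlikely
```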
Remark 2: We let B encrypt the blinding factors with a label L in step 2. The label L serves two purposes: (1) to ensure the timeliness of dispute resolution. A session ID is attached to each protocol execution and B uses it as an input when generating the label. We assume a standard format and semantics of the session ID have been agreed by all parties beforehand, so that R can verify the identities of the two parties involved and that the protocol execution is within a certain time window. (2) To ensure the integrity of the messages in the first two steps of the protocol. As only A can raise a dispute resolution, B needs to ensure that A cannot get any advantage by modifying critical messages, e.g. the encrypted coefficients and the polynomial evaluation results. By using the hash of past communication as an input for the label, B can ensure that. This is because the ciphertext with the label is encrypted under R's public key, so it cannot be modified without R's private key, and any modification to the messages will invalidate the label so R can detect it.
Remark 3: In our protocol B adds an additional blinding factor $r'_j$ when evaluating A's polynomial. This is because, if we follow the FNP protocol and do not add this blinding factor, then there is no good way to deal with the case in which A aborts after decrypting all $E_{pk_A}(g^{r_j \cdot Q(y_j) + y_j})$. In this case, to maintain fairness, B needs R to recover the set intersection. A would have to provide a verifiable encryption of its private key $sk_A$ in order for R to decrypt $E_{pk_A}(g^{r_j \cdot Q(y_j) + y_j})$ for B. But that would violate A's privacy because, given the private key, R can also recover A's polynomial coefficients from the transcript. Our design is better because now R only gets random numbers $g^{r'_1}, \ldots, g^{r'_n}$, which contain no information about either party's set.
Remark 4: In the last step of the dispute resolution protocol, R sends the $E_{pk_B}(g^{r_j \cdot Q(y_j) + r'_j + y_j})$ to B. This is needed because, from the transcript, R cannot tell whether A has sent them to B or not. It is possible that A unfairly aborts the protocol after finishing step 2 and then uses R to recover the result. We add this step to make sure B also receives the output in this case. Because this is the only case in which A can gain an advantage by unfairly aborting the protocol, we do not need a dispute resolution protocol for B.
A Concrete Construction
Verifiable Encryption
As a setup requirement, the arbiter R must have a key pair of a verifiable encryption scheme. In the second step of the PSI protocol, B must encrypt the blinding factors $r'_1, r'_2, \ldots, r'_n$ under R's public key. The encryption scheme used by R is the Cramer-Shoup encryption [START_REF] Cramer | A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack[END_REF] with a small modification. The system works in this way:
- Setup: On input $1^k$, output two prime numbers $p, q$ such that $q$ divides $p - 1$, and a cyclic group G with two generators $g, h$ such that G is the unique order-$q$ subgroup of $\mathbb{Z}_p^*$. Choose $u_1, u_2, v_1, v_2, w \xleftarrow{R} \mathbb{Z}_q$. Compute $a = g^{u_1}h^{u_2}$, $b = g^{v_1}h^{v_2}$, $c = g^w$. Then publish $(a, b, c)$ along with $G, q, g, h$ as the public key and retain $(u_1, u_2, v_1, v_2, w)$ as the private key.
-Encryption: To encrypt a message m, calculate the following: • The only modification we made to the original Cramer-Shoup encryption is that L is added as an ingredient of σ. All security properties of the Cramer-Shoup encryption are inherited. A Homomorphic Encryption Scheme At the core of our construction is a semantically secure homomorphic encryption scheme. Our choice is the ElGamal [START_REF] Cramer | A secure and optimally efficient multi-authority election scheme[END_REF] encryption scheme. This allows us to construct efficient zero knowledge proofs needed in the protocol. To simplify design, we share certain parameters between E and E. The scheme is described as follows: -Setup: Use the same group G and generator g as in section 5.1. Choose x R ← Z q and compute g x . The public key is pk = (G, g, g x , q) and the private key is sk = x. -Encryption: Choose r R ← Z q and output the ciphertext c(m) = (g r , m(g x ) r ). -Decryption: The ciphertext is decrypted as m(g x ) r • (g r ) -x = mg rx-rx = m. ElGamal is multiplicative homomorphic, so it is suitable in our protocol. As mentioned before we will convert the plaintext m to g m before encryption, so that oblivious polynomial evaluation is possible using ElGamal. Zero Knowledge Proof Protocols P K poly : Proof of Correct Construction of a Polynomial In step 1 of the PSI protocol, A has to prove to B that the polynomial is constructed correctly. Namely, A has to convince B that it knows the polynomial and the polynomial has no more than n roots. For each coefficient d i , the ciphertext is E pk A (g di ) = (g ti , g di g x A ti ) = (α di , α di ), where t i is a random number in Z q . To prove it knows the polynomial, A runs the following protocol: u d i ∈ Z q : ∃t i ∈ Z q : α di = g ti ∧ α di = g di (g x A ) ti As the maximum degree of the polynomial is determined beforehand and can be verified by counting the number of encrypted coefficients received, then for a polynomial of degree n , the only case that it can have more than n roots is when all coefficients are zero. To show the coefficients are not all zero, we require A to prove that d n is not zero by running ∃t n , t n ∈ Z q : α d n = g t n ∧ α d n = (g x A ) t n ∧ t n = t n Intuitively, t n = t n + d n /x A and therefore t n = t n iff d n = 0. So by verifying t n = t n , B can be convinced that d n = 0. To prove the inequality of discrete logarithms, we can use the protocol proposed in [START_REF] Camenisch | Practical verifiable encryption and decryption of discrete logarithms[END_REF]. P K prop : Proof of Proper Polynomial Evaluation and Encryption In step 2 of the PSI protocol, B must prove that each E pk A (g rj •Q(yj )+r j +yj ) is a proper ciphertext for g rj •Q(yj )+r j +yj , and also each E L pk R (g r j ) is a proper encryption under R's public key and the label L. Recall that for an encrypted coefficient d i , E pk A (g di ) = (g ri , g di g x A ri ) = (α di , α di ). Then for each term d i y i j of the polynomial, the ciphertext computed using the homomorphic property from E pk A (g di ) is E pk A (g diy i j ) = ((α di ) y i j , (α di ) y i j ). Similarly, for each r j • Q(y j ), the ciphertext is E pk A (g rj •Q(yj ) ) = (( n i=0 (α di ) rj y i j ), ( n i=0 (α di ) rj y i j )) B also encrypts g r j +yj by itself, and the ciphertext E pk A (g r j +yj ) = (g r j , g r j g yj g x A r j ). 
The ciphertext of the whole can be obtained by multiplying the corresponding components of the two: E pk A (g rj •Q(yj )+r j +yj ) = (α, β) = (( n i=0 (α di ) rj y i j ) • g r j , ( n i=0 (α di ) rj y i j ) • g r j g yj g x A r j ) For each E L pk R (g r j ), the ciphertext is (e 1j , e 2j , e 3j , e 4j ), such that e 1j = g zj , e 2j = h zj , e 3j = c zj g r j ,e 4j = a zj b zj σ where z j R ← Z q and σ = H(e 1j , e 2j , e 3j , L). The proof has two steps. In the first step, B commits to y j and r j y i j for each y j ∈ Y and 0 ≤ i ≤ n . We use the Pedersen Commitment Scheme [START_REF] Pedersen | Non-interactive and information-theoretic secure verifiable secret sharing[END_REF] here. This commitment scheme is known to be perfectly hiding and computationally binding. It is a discrete logarithm based scheme, that enables us to re-use the parameters used for the encryption schemes. We use the same group G, and parameters q, g, h as in section 5.1. To commit to v, choose a random s and create com(v) = g v h s . So we have com(y j ) = g yj h sj , and com(a j,i ) = g rj y i j h si for each a j,i = r j y i j . Then starting from i = 1, B must prove that the value committed in com(a j,i ) is the product of the values committed in com(a j,i-1 ) and com(y j ). To do this, we use the protocol from [START_REF] Gennaro | Simplified vss and fact-track multiparty computations with applications to threshold cryptography[END_REF] which proves a committed value in γ i is the product of two other values committed in δ, γ i-1 : ∃y j , a j,i-1 , a j,i , s j , s i-1 , s i ∈ Z q : γ i = g aj,i h si ∧ δ = g yj h sj ∧ γ i-1 = g aj,i-1 h si-1 The protocol is correct because a j,i = a j,i-1 • y j . Now A has a series of correct commitments of a geometric sequence a j,i = r j y i j for 0 ≤ i ≤ n . In the second step, B runs the following protocol for each 0 ≤ j ≤ n: u r j , y j ∈ Z q : ∃a j,0 , ..., a j,n , r j , z j ∈ Z q : δ = g yj h sj n i=0 γ i = g aj,i h si ∧α = ( n i=0 (α di ) aj,i ) • g r j ∧ β = ( n i=0 (α di ) aj,i ) • g r j g yj g x A r j ∧e 1j = g zj ∧ e 2j = h zj ∧ e 3j = c zj g r j ∧ e 4j = a zj b zj σ B proves in the first two lines that it knows y j , r j , also each exponent a j,i in α and β match the value committed in γ i , y j in β matches the value committed in δ, r j matches the value encrypted in e 3j , and (α, β) is a proper ciphertext of the polynomial evaluation result. In the last line, B proves that the verifiable encryption is correct. P K re-enc : Proof of Correct Re-encryption In step 3 of the PSI protocol and step 2 of the dispute resolution protocol, A must prove that each value sent is the correct ciphertext E pK B (g rj •Q(yj )+r j +yj ). A generates the ciphertext by first decrypting E pK A (g rj •Q(yj )+r j +yj ), and then re-encrypting the result using B's public key. The two ciphertexts are E pk A (g rj •Q(yj )+r j +yj ) = (g tj , g rj •Q(yj )+r j +yj g x A tj ) = (g tj , m j g x A tj ) E pk B (g rj •Q(yj )+r j +yj ) = (g t j , g rj •Q(yj )+r j +yj g x B t j = (g t j , m j g x B t j )) where t j , t j are random numbers. The protocol is then: ∃x A , t j ∈ Z q : pk A = g x A ∧ α = m j (g tj ) x A ∧ β = g t j ∧ γ = m j (g x B ) t j The proof shows that the two ciphertexts are correct and encrypt the same plaintext. P K dec : Proof of Correct Decryption In step 4 of the PSI protocol, B needs to prove that each g r j is a correct decryption of E L pk R (g r j ). 
For each E L pk R (g r j ), the ciphertext is (e 1j , e 2j , e 3j , e 4j ), such that e 1j = g zj , e 2j = h zj , e 3j = c zj g r j ,e 4j = a zj b zj σ where z j R ← Z q and σ = H(e 1j , e 2j , e 3j , L). What B needs to show is that it knows z j and z j is used consistently in all ciphertext compoents. ∃z j ∈ Z q : e 1j = g zj ∧ e 2j = h zj ∧ e 3j = c zj g r j ∧ e 4j = a zj (b σ ) zj If g r j is not the correct decryption, then B cannot find a z j that satisfies the relation. Complexity Analysis Now we give an account of the complexity of the protocol. The computational and communication complexity of the zero knowledge proof protocol is linear in the number of statements to be proved, so we separate it from the main protocol. In the PSI protocol, A needs to perform 3n exponentiations to encrypt the coefficients in step 1, and 3n exponentiations to decrypt and re-encrypt the polynomial evaluation results in step 3, B needs 2(n n + 2n) exponentiations to evaluate the polynomial obliviously and 3n exponentiations for the verifiable encryption in step 2. The messages sent in the protocol consist of 2n + 9n group elements. In the dispute resolution protocol, R needs 6n exponentiations to verify and decrypt the ciphertexts of the verifiable encryption. The total traffic generated includes 5n group elements, plus the transcript sent in step 1. In total, the computational complexity is O(nn ) and the communication complexity is O(n + n ). The complexity of the zero-knowledge proof protocols: P K poly is O(n ), P K prop is O(nn ), P K re-enc is O(n), and P K dec is O(n). The complexity of our protocol is similar to other PSI protocols in the malicious model [START_REF] Dachman-Soled | Efficient robust private set intersection[END_REF][START_REF] Hazay | Efficient set operations in the presence of malicious adversaries[END_REF]. 6 Security Analysis Security Model The basic security requirements of our protocol are correctness, privacy and fairness. Informally, correctness means an honest party is guaranteed that the output it receives is correct with regard to the actual input and the functionality realised by the protocol; privacy means no party should learn more than its prescribed output from the execution of the protocol; fairness means a dishonest party should receive its output if and only if the honest party also receives its output. We define a security model to capture the above security requirements in terms of the simulation paradigm [START_REF] Goldreich | Foundations of Cryptography: Volume II Basic Applications[END_REF]. We model the parties A, B and R as probabilistic interactive Turing machines. A functionality is denoted as f : X A × X B → Y A × Y B , In our protocol, the functionality to be computed by A and B is the set intersection. The model is similar to the one used in the optimistic fair secure computation protocol [START_REF] Cachin | Optimistic fair secure computation[END_REF]. Generally speaking the protocol is executed in a real world model where the participants may be corrupted and controlled by an adversary. To show the protocol is secure, we define an ideal process which satisfies all the security requirements. In the ideal process, there is an incorruptible trusted party which helps in the computation of the functionality, e.g. in our case the set intersection. The protocol is said to be secure if for every adversary in the real world model there is also an adversary in the ideal world model who can simulate the real world adversary. The real world. 
The protocol has three participants A, B and R. All participants have the public parameters of the protocol including the function f ∩ , the security parameter κ, R's public key and other cryptographic parameters to be used. A has a private input X, B has a private input Y and R has an input ∈ { , ⊥}. The participants of the protocol can be corrupted by an adversary. The adversary can corrupt up to two parties in the protocol. We use C to denote the adversary. The adversary can behave arbitrarily, e.g. substitute local input, abort the protocol prematurely, and deviate from the protocol specification. At the end of the execution, an honest party outputs whatever prescribed in the protocol, a corrupted party has no output, and an adversary outputs its view. For a fixed adversary C, and input X, Y , the joint output of A, B, R, C is denoted by O ABRC (X, Y ) which is the random variable consisted of all the outputs as stated. The ideal process. In the ideal process, there is an incorruptible trust party T , and parties Ā, B, R. Ā has input X, B has input Y and R has an input ∈ { , ⊥}. The operation is as follows: - Simulatability. The security definition is in terms of simulatability: Definition 1. Let f ∩ be the set intersection functionality. We say a protocol Π securely computes f ∩ if for every real-world adversary C, there exists an adversary C in the ideal process such that for all X ∈ X A , for all Y ∈ X B , the joint distribution of all outputs of the ideal process is computationally indistinguishable from the outputs in the real world, i.e., O Ā, B, R, C (X, Y ) c ≈ O ABRC (X, Y ) The design of the ideal process captures the security we want to achieve from the real protocol. Our assumption is that in real world, we can find a semi-trusted arbiter that can be trusted for fairly resolving disputes, but not for privacy. Then by incorporating such an arbiter in a two-party private set intersection protocol, we can achieve fairness, correctness and privacy. In the ideal process, if R follows the protocol and does not collude with Ā or B then all security properties are guaranteed. In this case, Ā and B will always get the correct intersection with regard to the actual input to the protocol, and know nothing more than that. On the other hand, if R is corrupted and colludes with Ā or B, then fairness is not guaranteed. However, even in this case privacy is guaranteed. That is, the corrupted parties will not get more information about the honest party's set other than the intersection. Security Proof We are now ready to state and prove the security of our protocol. The protocol uses zero knowledge proof protocols as subprotocols. As they are obtained by using existing secure protocols and standard composition techniques, they are consequently secure and we omit the security proofs of them. To prove the main theorem below, we work in a hybrid model in which the real protocol is replaced with a hybrid protocol such that every invocation of the subprotocols is replaced by a call to an ideal functionality computed by a trusted party. In our case we need ideal functionalities for zero knowledge proofs and certification authority. If the subprotocols are secure, then by the composition theorem [START_REF] Canetti | Security and composition of multiparty cryptographic protocols[END_REF] the output distribution of the hybrid execution is computationally indistinguishable from the output distribution of the real execution. 
Thus, it suffices to show that the ideal execution is indistinguishable from the hybrid execution. Theorem 1. If the encryption E and E are semantically secure, and the associated proof protocols are zero knowledge proof, the optimistic fair mutual private set intersection protocol securely computes f ∩ . Because of limited space, below we only sketch the proof. The detailed proof will appear in the full version. Proof. Let's first consider the cases that the adversary C corrupts two parties. Case 1: C corrupts and controls A and B. This is a trivial case because C has full knowledge on X, Y and if the encryption scheme used by R is semantically secure, a simulator can always be constructed. Case 2: C corrupts and controls A and R. We construct a simulator S in the ideal process that corrupts and controls Ā and R. It uses the adversary C as a subroutine and we will show the simulatability holds in this case. .., r n , computes E P K A (g rj •Q(y j )+r j +y j ) and encrypts all blinding factors using R's public key. It also generates commitments for each y j and r j y i j . S sends all commitments and ciphertexts to C and also emulates the ideal computation of P K prop by sending "accept" to C. Depends on C's reply, executes step 5, 6 or 7. In the next three steps, S will send an instruction to T when it is ready to output, then T sends the delayed output to B In the joint output, the honest parties' outputs are always the same. All we need to check is whether the view of the simulator is indistinguishable from the view of an adversary in the hybrid execution. The difference between a simulation and a hybrid execution is that in the simulation S uses Y which is not the same as Y . However, this does not affect the distribution of the views. From how Y is constructed we can see that Y contains the correct intersection (Y ∩ X ⊆ Y ). For those elements in the intersection, they produce the same distributions in the simulation (using Y ) and the hybrid execution (using Y ). For any elements y j ∈ Y and y j ∈ Y not in the intersection, the commitments produced should be indistinguishable because the commitment scheme is perfectly hiding. Also g rj •Q(yj )+r i +y j and g rj •Q(y j )+r i +yj are uniformly random because Q(y j ) and Q(y j ) are both non-zero, and so are the ciphertexts of them. The blinding factors and their ciphertexts are uniformly random in both the simulation and the hybrid execution. Therefore the two views are indistinguishable. Case 3: C corrupts and controls B and R. We construct a simulator S in the ideal process that corrupts and controls B and R. It uses the adversary C as a subroutine. 1. S is given B and R's inputs, S invokes an ideal functionality CA to obtain R's key pair, then invokes C and plays the role of A. 2. S generates a key pair pk A /sk A and gives the public key to C. 3. S generates a random set X such that |X | = n , then constructs a polynomial using elements in X . S encrypts the coefficients, sends them to C and simulates the ideal computation of P K poly by sending "accept" to C. 4. S receives the commitments and ciphertexts from C, then receives inputs to the ideal computation of P K prop , including (y j , r j ), 0 ≤ j ≤ n. If the ciphertexts are not properly produced, S instructs B to send ⊥ to T , otherwise S extract Y and instructs B to send Y to T and instructs R to send b B = to T , and receives X ∩ Y from T . 5. S constructs another set X such that X ∩ Y ⊆ X and |X | = n . 
S then constructs another polynomial Q , and evaluates the polynomial using (y j , g r j ) to construct E pk B (g rj •Q (yj )+r j +yj ). The ciphertexts are sent to C, S also simulates the ideal computation of P K re-enc by sending "accept" to C. Depends on C's reply, executes step 6,7 or 8. In the next three steps, S will send an instruction to T when it is ready to output, then T sends the delayed output to Ā The difference between a simulation and a hybrid execution is that the simulator uses X and X rather than the honest party's input X. Using X does not affect the distribution of the view if E is semantically secure, because the ciphertexts generated using A's public key are indistinguishable. Using X also does not affect the distribution of the view. For the two sets X and X, two polynomials are constructed from them Q and Q. We also know X ∩Y = X ∩Y , so Q (y j ) = 0 iff Q(y j ) = 0 for any y j ∈ Y . For each g rj •Q (yj )+r j +yj and g rj •Q(yj )+r j +yj , if Q (y j ) = 0 then Q(y j ) = 0 so the distribution of the two depends only on y j and r j , if Q (y j ) = 0 then Q(y j ) = 0 and both Q (y j ) and Q (y j ) are uniformly random, so g rj •Q (yj )+r j +yj and g rj •Q(yj )+r j +yj are also uniformly random. Therefore the distributions of the views are indistinguishable. For cases that C corrupts only one party, proofs can be constructed similarly. In the case that R is corrupted, R is not involved in the protocol because A and B are honest, so it is trivial to construct a simulator. In the case that A or B is corrupted, the simulator can be constructed as in case 2 step 1 -4 or case 3 step 1 -5, except now R is honest and always sends to T . The view from the simulation is still indistinguishable. Conclusion and Future Work In this paper, we have presented a fair mutual PSI protocol which allows both parties to obtain the output. The protocol is optimistic which means fairness is obtained by using an offline third party arbiter. To address the possible privacy concerns raised by introducing a third party, the protocol is designed to enable the arbiter to resolve dispute blindly without knowing any private information from the two parties. We have analysed and shown that the protocol is secure. The communication and computation complexity of our protocol are both O(nn ). The main overhead comes from the oblivious polynomial evaluation and the large accompanying zero knowledge proof. We would like to investigate PSI protocols based on other primitives, e.g. [START_REF] Cristofaro | Practical private set intersection protocols with linear complexity[END_REF][START_REF] Cristofaro | Linear-complexity private set intersection protocols secure in malicious model[END_REF], to see whether efficiency can be improved. Another area we would like to investigate is whether the protocol structure that we use to obtain fairness can be made general so that it can be applied to other secure computation protocols. e 1 = 1 g z , e 2 = h z , e 3 = c z m where z R ← Z q . • σ = H(e 1 , e 2 , e 3 , L) , where H is a hash function and L is the label. • e 4 = a z b zσ • The ciphertext is (e 1 , e 2 , e 3 , e 4 ). -Decryption: To decrypt, compute σ = H(e 1 , e 2 , e 3 , L), then verify e u1 1 e u2 2 (e v1 1 e v2 2 ) σ = e 4 . If the verification succeeds, then decrypt m = e 3 /(e w 1 ) ( 3 ) 3 Ā sends X or ⊥ to T, then B sends Y or ⊥ to T, then R sends two messages b A ∈ Y A ∪ { , ⊥} and b B ∈ Y B ∪ { , ⊥} to T . The actual input X and Y may be different from X and Y if the party is malicious. 
-T sends private delayed output to Ā and B. T 's reply to Ā depends on Ā and B's messages and b A . T 's reply to B depends on Ā and B's messages and b B .• T to Ā: (1) If b A = , Ā sends X and B sends Y , T sends X ∩ Y to Ā. (2) Else if b A = , but Ā or B sends ⊥, T sends ⊥ to Ā Else if b A = , T sends b A to Ā. • T to B: (1) If b B = , Ā sends X and B sends Y , T sends X ∩ Y to B.(2) Else if b B = , but Ā or B sends ⊥, T sends ⊥ to B. (3) Else if b A = , T sends b B to B . Honest parties in the ideal process behave as follows: Ā and B send their input to T and R sends b a = and b B = . The ideal process adversary C controls the behaviours of corrupted parties. It gets the input of a corrupted party and may substitute them. It also gets T 's answer to corrupted parties. For a fixed adversary C, and input X, Y , the joint output of Ā, B, R, C in the ideal process is denoted by O Ā B R C (X, Y ). 5 . 5 If C instructs both A and R to abort, then S instructs R to send b B = ⊥ to T , then outputs whatever C outputs and terminates. 6. If C instructs A to abort and instructs R to send n ciphertexts, S decrypts them using B's private key, constructs a set by testing whether any elements in Y match the decryption results. Then S collects all matching elements, put them in a set and instructs R to send the set as b B . Then S outputs whatever C outputs and terminates. 7. If C instructs A to send n ciphertexts, then S extracts a set of elements from the reply and engages in the ideal computation of P K re-enc . If the reply is correct, S instructs R to send b B = to T and sends g r 1 , ..., g r n to C. If the reply is not correct and C instructs R to abort, S instructs R to send b B = ⊥ to T . If the reply is not correct and C instructs R to send n ciphertexts, S extracts a set of elements from the cipehrtexts and instructs R to send the set as b B to T . Then it outputs whatever C outputs and terminates. 6 . 6 If C instructs B to send the blinding factors, then S instructs R to send b A = , outputs whatever C outputs and terminates. 7. If C instructs both B and R to abort, then S instructs R to send b A = ⊥, outputs whatever C outputs and terminates. 8. If C instructs B to abort and R to send n blinding factors, use the blinding factors to extract a set, and then instructs R to send the extracted set as b A to T . S then outputs whatever C outputs and terminates. 1. S is given A and R's inputs, S invokes an ideal functionality CA to obtain R's key pair, then invokes C and plays the role of B. 2. S generates a public/private key pair pk B /sk B and gives the public key to C. 3. S receives the encrypted coefficients E pk A (d i ) from C. S also receives d i , 0 ≤ i ≤ n for the ideal computation of P K poly , where d i is a coefficient of the polynomial. If the polynomial is not correctly constructed, then S instructs Ā to send ⊥ to T and terminates the execution. If the polynomial is correct, S extracts input X from the coefficients, instructs Ā to send X to T and instructs R to send b A = to T . S then receives the intersection X ∩ Y from T . 4. S then constructs Y from the intersection received in last step by adding random dummy elements until |Y | = n. Then It generates a set of random blinding factors r 1 , r 2 , . In our protocol described in 4, we have a different requirement on the size of the input sets. This is due to the fact that the FNP protocol is a single output PSI protocol and ours is a mutual PSI protocol. 
For the sake of simplicity, we neglect the optimisations to polynomial evaluation made in that paper, namely the balanced allocation scheme and Horner's rule.
Acknowledgements. We would like to thank the anonymous reviewers. Changyu Dong is supported by a Science Faculty Starter Grant from the University of Strathclyde.
52,436
[ "986114", "1004173", "997995", "1004174" ]
[ "13192", "112148", "106019", "160272" ]
01490734
en
[ "shs" ]
2024/03/04 23:41:50
2015
https://hal.science/hal-01490734/file/Dancing%20in%20the%20Dark%20-%20Scandinavian%20Journal%20of%20Management%202015.pdf
Florence Allard-Poesi email: [email protected] Dancing in the Dark: Making Sense of Managerial Roles during Strategic Conversations Keywords: sensemaking, strategic conversation, managerial roles, contradictions, discourse Highlights - This article explores how managers make sense of their strategic roles when confronted with contradictory expectations from top management. Relying discourse analysis (DA), we analyze extracts of conversations between a director and a team of managers as they strive to elaborate a strategic project for a large association within the social sector. Our research complements prior research on managerial roles in showing that the sensemaking of managerial roles relies on the construction and contestation of scripted descriptions of the organization and its environment; 2/ demonstrating how the managers and the director both contribute to the fabric of contradicted versions of the managerial roles and 3/ how participants' will to power contribute to the "dance" observed. Introduction This article aims to understand how managers make sense of their strategic roles during conversations. Research in strategic management has emphasized the key role that middle managers play, particularly in terms of strategic renewal. Middle managers need to make sense of the strategic orientations given by top management, that is to interpret and enact these orientations through the creation of the adequate structures, systems and personnel. They also have to make sense of experiences and information from the field and possibly champion these strategic orientations [START_REF] Mantere | Strategic practices as enablers and disablers of championing activity[END_REF][START_REF] Regnér | Strategy creation in practice: Adaptive and creative learning dynamics[END_REF]. These contributions of middle managers to strategizing, which may be referred to as their strategic role, depend on their understanding of who they are in the organization and what is expected of them, i. e. on how they make sense of these strategic roles. Following an interactionist perspective, a role may be defined as an "intermediary translation device between oneself and others" (Simpson & Caroll, 2008, p. 33-34) of how one should act in a particular situation. Roles are intermediaries between personal identity (i.e. the more or less temporary stabilization of one's own definition of "who I am", [START_REF] Alvesson | Identity regulation as organizational control: Producing the appropriate individual[END_REF] and others, be they specific persons (i.e. the boss, some colleagues) or more generalized others 1 including professional or occupational identities (i.e. the more or less temporary stabilization of some abstract and institutionalized conception of one's own profession). This definition calls attention to the discursive and political dimensions of managers' roles. Roles, write Simpson and Carroll (2008, p. 33), "sit as boundary object [s] in the middle of intersubjective interactions" and "translate [s] meanings backwards and forwards between actors (p. 34)". They are the object of continuous negotiation between individual strivings and 1 While managers' contribution to strategizing can be regarded as part of their occupation as managers, we prefer to talk about the manager's strategic role (and of managers' roles) rather than managers' occupations, considering that being a manager conveys a much more ambiguous, unstable and contextual definition compared to what is usually referred to as an occupation (e.g. 
doctor, firefighter…, see [START_REF] Bechky | Gaffers, goffers, and grips: Role-based coordination in temporary organizations[END_REF][START_REF] Bechky | Making organizational theory work: Institutions, occupations, and negotiated orders[END_REF]. external prescriptions, personal conceptions and organizational or institutional discourses [START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF]. As such, roles are the locus of power struggles and the dynamics of control and resistance (see [START_REF] Laine | Struggling over subjectivity: A discursive analysis of strategic development in an engineering group[END_REF][START_REF] Thomas | Managing Organizational Change: Negotiating Meaning and Power-Resistance Relations[END_REF], where power, following a conversation analytic view of a Foucauldian conception [START_REF] Foucault | The Subject and Power[END_REF], is understood as relational and exercised in talk-in-interactions (Samra-Fredericks, 2005, p. 811; [START_REF] Heritage | Ethnomethodology[END_REF]. In this perspective, the sensemaking of roles in organizations does not rely exclusively on the actions (i.e. the decision taken, the communicative practices) of those who are in a superior hierarchical position, but depends on both the superior's and the subordinates' communicative actions [START_REF] Schneider | Power as interactional accomplishment: An ethnomethodological perspective on the regulation of communicative practice in organizations[END_REF] through which they "create, assemble, produce and reproduce the social structure through which they orient" (Heritage, 1987, p. 231). While the managers' identity construction processes has received much attention in the last decade (see [START_REF] Alvesson | Identity regulation as organizational control: Producing the appropriate individual[END_REF][START_REF] Alvesson | Identity matters: reflections on the construction of identity scholarship in organization studies[END_REF][START_REF] Ybema | Articulating identities[END_REF], and while recent strategy-as-practice research has contributed to the understanding of manager roles in the sensemaking and enactment of strategy (see [START_REF] Rouleau | Micro-practices of strategic sensemaking and sensegiving: how middle managers interpret and sell change every day[END_REF][START_REF] Balogun | Organizational restructuring and middle manager sensemaking[END_REF][START_REF] Mantere | Strategic practices as enablers and disablers of championing activity[END_REF]2008), the actual construction of the managers' strategic role has been neglected. We consider this problematic in so far that while managers are the maître d'oeuvre2 of strategy, they often face ambiguous if not contradictory expectations from top managers [START_REF] Lüscher | Organizational change and managerial sensemaking: working through paradox[END_REF][START_REF] Alvesson | Good visions, bad micro-management and ugly ambiguity: contradictions of (non-)leadership in a knowledge-intensive organization[END_REF]. As "Who we think we are (identity) as organizational actors shapes what we enact and how we interpret" (Weick, Sutcliffe & Obstfeld, 2005, p. 416), managers may be incapable of making sense of the strategic orientations and how to act, if they do not know what role they have in the strategic process and in the organization (see [START_REF] Balogun | Organizational restructuring and middle manager sensemaking[END_REF]. 
Lack of clarity of participants' roles in a strategic project may also encourage useless struggles for territory, thereby impeding structuring and collective sensemaking (see Pattriotta & Spedale, 2009;[START_REF] Bechky | Making organizational theory work: Institutions, occupations, and negotiated orders[END_REF]. In a similar way, managers may find it difficult to commit themselves to any one course of action [START_REF] Maitlis | Sensemaking in crisis and change: Inspiration and insights from Weick[END_REF] and so retrench themselves in a passive or cynical attitude regarding the top management's strategic initiatives [START_REF] Mccabe | Strategy-as-Power: Ambiguity, contradiction and the exercice of power in a UK building society[END_REF][START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF]. This article aims to understand how managers make sense of their strategic roles when confronted with contradictory expectations from top management. Considering conversations and interactions as the privileged medium through which people negotiate and make sense of their roles (Balogun & Johnson, 2004, p. 545), we rely here on [START_REF] Edwards | Discursive Psychology[END_REF]'s version of Discourse Analysis, a variant of Conversational Analysis, to analyze conversations between a director and a team of managers who are in the process of elaborating a strategic project in a large French association within the social sector. Our research contributes to prior research on managerial roles in three related ways. First, it shows that managers and the director make sense of the managers' strategic role by relying on descriptions of oneself and others (e.g. [START_REF] Simpson | Reviewing 'role' in processes of identity construction[END_REF][START_REF] Edwards | Script formulations: An analysis of event descriptions in conversation[END_REF]1995), and on the construction and contestation of scripted descriptions of the organization and its environment. Second, while previous research has underscored that different organizational actors may hold different discourses about their roles in the strategy process [START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF][START_REF] Laine | Struggling over subjectivity: A discursive analysis of strategic development in an engineering group[END_REF], our research shows how the same actors may develop, and oscillate, between different and contradictory conceptions of their roles during the same meeting; thereby engaging in a sort of dance that contributes to the lack of clarity (the dark) in the definition of the managers' role. Third, while prior research has shown that power struggles during conversations may lead to trench warfare between actors and the loss of sensemaking of the task at hand (Pattriotta & Spedale, 2009), our analysis show how participants may also oscillate among contradictory concepts of their roles as the result of their will to power during the conversation, leading them to lose control over their argumentation. On the whole, the research shows how sensemaking of managerial roles evolves in, and is shaped by, discrete conversations between top management and middle managers in strategy meetings. It contributes to the understanding of how political and interpretative dynamics drive the sensemaking of managers' strategic roles in the organization during conversations among the actors. This article is organized around four sections. 
First, we briefly review previous work on how managers make sense of their strategic roles. Second, we describe the context within which this research took place, and, the methods of data collection and analysis that were used. We then present the discourses of the director and the managers concerning the strategic roles of the managers, and conduct a detailed analysis of two sequences of conversations in which the managers and the director oscillate between different conceptions of their roles. Finally, we discuss the research contributions. Making Sense of Manager's Strategic Roles Three complementary strands of research may contribute to our understanding of how managers make sense of their strategic roles. Managers' strategic roles: A reaction to top managers' sensegiving A first research strand concerns the top managers' efforts to shape or frame other managers' understandings of their roles, in particular during strategic change. In this perspective, top managers are seen first as engaged in sensemaking activities so as to make sense of the strategic orientations and the organizational structure supporting this strategy, and second as committing to sensegiving activities so as to convince the managers and the other organizational members to embrace their vision (cf. Gioia & Chittippeddi, 1991). In an in-depth investigation of the effects of top management's discourse on managers' understanding of their roles in the strategy process, [START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF], and [START_REF] Laine | Struggling over subjectivity: A discursive analysis of strategic development in an engineering group[END_REF] show two contrasting reactions from managers and other organizational actors. Whether promoting a participative or a hierarchical, disciplinary (non-participative) concept of the strategy process, managers generally adhere to the discourse promoted by top managers. However, a few managers do resist top manager's expectations, in particular when these are understood as an attempt from top managers to reinforce their hegemony [START_REF] Laine | Struggling over subjectivity: A discursive analysis of strategic development in an engineering group[END_REF] or to keep managers in a rather passive or subordinate role of execution [START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF]. Far from always taking on the expected role of a passive transmitter of corporate strategy, some managers even develop counter-conceptions through which they reaffirm their roles as strategic innovators [START_REF] Laine | Struggling over subjectivity: A discursive analysis of strategic development in an engineering group[END_REF] or promote a more collective vision of the strategic process [START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF]. These results confirm previous research on identity regulation [START_REF] Alvesson | Identity regulation as organizational control: Producing the appropriate individual[END_REF], which outlined that organizational members may either endorse or resist identity regulation3 attempts from the organization. While this research investigates both the top management's and the other's managers sensegiving/sensemaking, a second research strand focuses on the middle managers' sensemaking process during strategic change. 
Managers' strategic roles: managers' sensemaking Contrasting with the first approach, this research underscores that the discourses and practices from top management during strategic change may be unclear or even contradictory so that managers do not only react to top managers' expectations; they have to make sense of their roles as change unfolds. [START_REF] Balogun | Organizational restructuring and middle manager sensemaking[END_REF] show that the top management's decision to engage in a major organizational restructuring in a UK electricity provider lacked operational content so that the managers at the head of the different divisions had to make sense of the restructuring and engage in lengthy negotiation in order to mutually define their roles in the new structure. This process took place mainly through their informal interactions and outside the control of senior managers. Other researchers have pointed out the contradictory messages addressed to middle managers during strategic change. [START_REF] Lüscher | Organizational change and managerial sensemaking: working through paradox[END_REF] show the difficulty that production team managers have in making sense of their new roles when executive managers are asking them, for example, to "build effective teams while ensuring productivity". Confronted with what the authors call the "paradox of performing", should managers take the time "to deal with conflicts" in their team or rather "keep the team focused" on productivity (p. 232)? This ambiguity is that much stronger when the discourse (valuing risk-taking and initiative, for instance) enters into conflict with practices (that reward conformity to the plans defined by top management, see [START_REF] Alvesson | Good visions, bad micro-management and ugly ambiguity: contradictions of (non-)leadership in a knowledge-intensive organization[END_REF]. Confronted with unclear or contradictory discourses regarding their strategic roles, managers must make their own sense of what is expected of them. These processes result in different understandings ranging from the adoption of a contradictory or schizoid concept of their roles, incorporating the antagonisms of the organizational discourse [START_REF] Lüscher | Organizational change and managerial sensemaking: working through paradox[END_REF], from the rejection of the strategic discourse as a whole [START_REF] Mccabe | Strategy-as-Power: Ambiguity, contradiction and the exercice of power in a UK building society[END_REF] or to a selective reconstruction of certain aspects and the rejection of others (see [START_REF] Humphreys | Narratives of organizational identity and identification: a case study of hegemony and resistance[END_REF][START_REF] Clarke | Working identities? Antagonistic identities resources and managerial identity[END_REF]). Manager's strategic roles: co-constructed sensemaking through conversations involving both top and the other managers A third, emerging research strand investigates the process underlying these different sensemaking dynamics. 
Activating both interpretative and social-political dynamics [START_REF] Rouleau | Micro-practices of strategic sensemaking and sensegiving: how middle managers interpret and sell change every day[END_REF], conversations that take place during formal meetings, work sessions or informal encounters are the privileged medium through which both sensemaking and organizational structuring occur (see [START_REF] Crevani | Leadership, not leaders: On the study of leadership as practices and interactions[END_REF]Pattriotta & Spedale, 2009;[START_REF] Bechky | Making organizational theory work: Institutions, occupations, and negotiated orders[END_REF][START_REF] Weick | The collapse of sensemaking in organizations: the Mann Gulch disaster[END_REF]1990). [START_REF] Bechky | Gaffers, goffers, and grips: Role-based coordination in temporary organizations[END_REF] showed how, through polite admonishing, joking and thanking, role expectations in a film project can be smoothly communicated and negotiated, allowing coordination among participants to take place. Outlining the socio-political aspect of conversations, Westley considers that they "potentially enact formal structures of domination" (our emphasis, Westley, 1990, p. 340). Conversations are, in fact, the locus of power struggles related to differences in hierarchical status [START_REF] Westley | Middle managers and strategy: microdynamics of inclusion[END_REF][START_REF] Thomas | Managing Organizational Change: Negotiating Meaning and Power-Resistance Relations[END_REF]), competencies, or rhetorical skills (Samra-Fredericks, 2005;2003) among actors. Unless the conversation turns out to be a monologue, subordinates react to the superior's communicative practices influencing, in return, the superior's behaviors. In fact, power relationships are not shaped exclusively by the superior's behaviors but also result from "the way in which participants design their interactions, because it can have the effect of placing them in a relationship where discourse strategies of greater or lesser power are differentially available to each of them" [START_REF] Hutchby | Power in discourse: The case of arguments on a British talk radio Show[END_REF], p. 482, in Schneider, 2007, p. 188). With few exceptions, this co-constructive dimension of sensemaking has received little attention until recently. Articulating sociopolitical and interpretive dimensions, [START_REF] Thomas | Managing Organizational Change: Negotiating Meaning and Power-Resistance Relations[END_REF] underscored that the enactment of hierarchical relationships relies on communicative practices whereby senior managers display authority and try to impose their views on middle managers (during a workshop held as part of a change program at a telecommunication company). In this configuration, sensemaking is fragmented as the senior managers and the middle managers stick to their views and contradict each other without searching for a common ground to build on; alternatively, senior and the middle managers may incrementally build on the others' views, even if it is in opposition to one's own. These results confirm Pattriotta and Spedale's study (2009; 2011) which shows how what they call the "interaction order"i. e. the relational patterns that emerge out of the flow of exchanges among actors during face-to-face interactions affects sensemaking. 
Detailed analysis of the meetings held during the set-up phase of a consultancy task force in an European oil company, led the authors to outline how the participants' moves to position themselves favorably in the project (at the expense of others) and the leader's aggressive attempt to regain control over the conversation contribute to a conflicting and fragmented interaction order that is reenacted from one meeting to another. In the absence of a working consensus of the participants' roles in the task force, the participants cannot develop a common definition of what the project itself means, leading to further loss of meaning of participants' roles in the project. In sum, this research strand displays how the participants' discursive behaviors during conversations enact or reenact structuring processes that may reflect (or suspend, [START_REF] Westley | Middle managers and strategy: microdynamics of inclusion[END_REF]) the actors' roles in the organization and thus constitute a more or less favorable platform for joint sensemaking (as opposed to a fragmented sensemaking, [START_REF] Maitlis | The social processes of organizational sensemaking[END_REF] or collapse of sensemaking, [START_REF] Weick | The collapse of sensemaking in organizations: the Mann Gulch disaster[END_REF]. While bringing insights into the incidence of structuring on sensemaking (or lack of), the research discussed above says little about how organizational actors facing contradictory expectations from top management make sense of their roles, or said differently, how sensemaking effects the structuring process. Capitalizing on this research, we aim to develop a micro-processual analysis of sensemaking of the managers' strategic roles during conversations between the managers and the director as they strive to develop a strategic project for a large association within the social sector. Our intention is also to contribute to the understanding of how interpretative and socio-political dynamics articulate and combine during conversations in organizations. Research Design and Methods Research setting The research took place over 2½ years in a large French association within the social sector. Created in 1946, the association is a departmental organization comprised of 160 employees. As for most associations within this sector, its financial resources (annual amount of 10 million Euros) come largely (90%) from general councils and, less so (10%) from the PJJ (Protection Judiciaire de la Jeunesse / Judicial Youth Protection), a service under the auspices of the Minister of Justice. The association's mission is defined as the protection, reception, education, social and professional insertion for children, adolescents and young adults in difficulty, danger or delinquency. These children are received by the association at the behest of the PJJ or the social services of the ASE (Aide Sociale à l'Enfance/Child Protection Agency). The association has three centers dedicated to this mission which are located a few kilometers from one another. 
Each serves a different population and proposes different services: -A social and educational center (Maison d'Enfant à Caractère Social -MECS A) that accommodates 60 children between the ages of 6 and 14 for a period of 3 to 4 years with the goal of helping them assimilate socially and academically; -An educational and professional center (MECS B) that accommodates 65 adolescents from the ages of 14 to 21 years old with the objective of professional training and social and professional insertion; -A state approved delinquent center for educational orientation and action (MECS C) that can house approximately 30 adolescents for a period of one year with the goal of guidance, educational help and social and professional insertion. The association also has at its disposal the service of the MO (Milieu Ouvert -Open Milieu), a service that provides social investigation and support to families and youth in difficulty. The structure of the association has three hierarchical levels. The main office of the association (the "Bureau"), is elected by the board of directors, which names the director. The director heads up the four different operational entities (MECS A, B, C and MO); each entity is managed by a center manager. The director also is responsible for supervising the central administrative department that regroups the financial and human resource services of the association. Each center is then organized into services. There are 12 service managers in all and a technical advisor who reports to the administrative department. At the request of the director, all 18 managers -including the 4 center managers, the 12 service managers, the technical advisor and the director-, actively participated in the action-research described here. Participatory action-research The director of the association contacted the two researchers to help the managerial team define the organization's strategy. We progressively understood that defining a strategic project for the organization was a means for the organization to meet several challenges. On the one hand, the general council was asking the associations in the social sector to be accountable regarding their financial resources and to report on the quality of the services provided. This pressure for accountability, which was regarded by the members of the association as creating competition among associations, was accompanied by new legal obligations, including new procedures to guarantee greater transparency and dialogue vis-à-vis the families and the youth. The director and the managers of the association felt that the organization should develop strategic answers that met these new demands for cost control and better quality for the services provided. On the other hand, they were convinced that the organization was ill prepared to meet these challenges. The different centers of the association had developed independently from one other, and the managers were not in the habit of working together or with the director. They had to clarify and converge on a workable definition of their roles as director and managers in the strategic project. The board of the association, which was mainly composed of retired notables and industrials, while lacking competencies to understand and help the organization to develop adequate strategic answers, put increased pressure on the director so that he initiated strategic change. The objective of the action-research was to help the team develop a strategic project for the association. 
This presupposed that the managers and the director would also agree on their respective roles in the project (see also [START_REF] Patriotta | Making sense through face: identity and social interaction in a consultancy task force[END_REF][START_REF] Bechky | Making organizational theory work: Institutions, occupations, and negotiated orders[END_REF]). This research borrows its methods from participatory action-research, in which the problem to resolve, and the research design, are defined with the actors in the field [START_REF] Reason | Three approaches to participative inquiry[END_REF][START_REF] Whyte | Participatory action research[END_REF]. While not without difficulties, this research design seems to be a pertinent alternative for understanding actors' sensemaking [START_REF] Lüscher | Organizational change and managerial sensemaking: working through paradox[END_REF], as it implies that the researchers experience, from the inside, the contradictions, ambiguity and uncertainties that organizational actors are facing.

Led by the two researchers, the study relied on full-day collective work sessions as well as smaller group sessions. These sessions took place at intervals of a few weeks, allowing the researchers to synthesize the work accomplished up until that point and giving the sub-groups time to conduct specific research (for example, to study the evolution of the legislative and regulatory context, the competition, etc.). The research was conducted in two phases:
- During the first phase (year 1, with 6 days of collective work), a strategic diagnosis was developed. This included the following meetings:
- During the first meeting, we collected each participant's view of the main issues and challenges that the organization was facing through semi-structured, written questionnaires. Basic concepts and vocabulary of strategy (e.g. the organization's mission, suppliers, competitors, clients/users, resources and competencies, SWOT analysis) were introduced and illustrated.
- The second meeting was devoted to a report and discussion about the main issues and problems of the association as identified by the participants in their individual questionnaires (completed during the first meeting).
- During the third and fourth meetings, a strategic diagnosis of the institutional environment and the association's competitors was carried out.
- The fifth meeting was dedicated to the identification of the association's mission, involving a debate on the profile of the youth that the organization should take in and the social needs that it should meet.
- During the sixth meeting, four strategic orientations revolving around the notion of "quality service provider" were identified: 1/ the definition, communication and fulfillment of common procedures and rules, 2/ the adaptation of the service provided to meet the needs of the families and youth, 3/ including their latent needs, and 4/ cost control.
- The second phase (year 2, with 6 meetings) was based on the work of the sub-groups, which enabled the specification of these directions and the development of concrete action plans. The project was thus formalized and presented to the board at the end of the second year of intervention.
These meetings were generally divided into two parts: 1/ a report on the work done during the previous meeting and 2/ a collective work session on a particular aspect of the strategic diagnosis or plan.
The researchers guided the collective work sessions through open-ended questions (see Extract 2 analyzed below) that encouraged participants to express their views (e.g. "what is your opinion about …?"), to elaborate on them (e.g. "what do you mean by …? Could you develop your idea about …?") and to check that they agreed on the strategic diagnosis or the actions to implement (e.g. "would you agree on this point? On that action?").

Data collection

Our analysis is built on two data sources:
- Notes taken during interviews with the director conducted by one of the researchers before the beginning of the study;
- The near-complete transcription of the work sessions that were focused on the strategic diagnosis, representing a total of more than 25 hours of tape recording. (The passages where one or the other researcher presented a summary of the work done in the previous meetings were not transcribed, nor were the passages of the first meeting during which theoretical and methodological elements were presented and discussed with the participants.)
As our intention was to focus on content during the conversations (rather than on the details of the discursive devices used by participants, as in conversational analysis), we transcribed the conversations following a simplified format. Because of the high number of participants at the work sessions, we were not always able to identify each speaker. When this happened, we attributed the remarks heard to "Manager X". In addition, certain passages could not be transcribed because some participants were talking at the same time, or several conversations were being held simultaneously. However, in general, and without detailing all points of view of each of the participants (certain views not necessarily being expressed in the meetings themselves), the numerous conflicts and divergent opinions expressed at the meetings lead us to believe that the transcriptions give us ample and pertinent cues with which to better understand how participants make sense during these conversations. Our understanding of the context was also based on the numerous informal discussions held during breaks and lunches and on internal documents. The documents written by the participants in their workgroups between two meetings, the different elements of diagnosis and the directions developed during the collective sessions completed our data.

Data Analysis

We focus our analysis on the first six meetings, where we, as researchers, encountered difficulties both in making sense of the problems of the organization and in building consensus among the managers around the identification of those problems (and thus around the actions to implement). From the very first (individual) meeting with the director, we were struck by the contradictory manner in which he defined the problem to be resolved and the roles he gave to the managers in their strategic functions. On the one hand, he diagnosed "a breakdown of the strategic functioning" that he related to "a lack of motivation from the employees" and "a lack of commitment of the managers in the strategic function". On the other hand, when we suggested a research design aimed at helping the team to collectively define the strategy of the organization, the director objected, saying that "the strategy and the projects of the association are not to be played with", that "the mission of the association is clearly defined in its statutes and cannot be debated" and that "the strategic function is handed over to the managers". We also observed a similar alternation between a participative and a hierarchical, non-participative concept of strategy in the group of managers during the first phase of the strategic diagnosis (see Extract 2 below).
It was our feeling that, as action-researchers, we might be incapable of building a consensus around the role of the managers in the strategy process, as the discussions were marked by a continuous coming and going between agreement and sometimes aggressive contestation about both the definition of the managers' role and the strategic process itself: who was going to do what, and how? It was this observation that gave us the metaphor of "dancing in the dark". In order to understand how participants perform this dance, we systematically examined 1/ the discourses of the director and the managers (in the transcripts) on the strategic roles of managers, and 2/ the sequences of conversations where participants swung from one version to another. In these sequences, we noticed that they anchored their argumentation in detailed descriptions of the organization's functioning, of its strategic positioning and of its environment.

In order to analyze these extracts of conversations we relied on Discourse Analysis (DA) as developed by [START_REF] Edwards | Script formulations: An analysis of event descriptions in conversation[END_REF]1995;2006) and [START_REF] Edwards | Discursive Psychology[END_REF], an approach and method that focuses on the way in which people describe their own and others' experiences. Following Conversational Analysis' analytical commitments, DA considers that talk is a medium for social action, so that "the analysis of discourse becomes the analysis of what people do" (Potter, 2004: 201). Rather than explaining people's talk through reference to their underlying beliefs, values, states of mind, or implicit goals, DA describes what people are actually doing when talking, for it is through these actions that people fabricate the context of their interactions and display mutual understanding (or misunderstanding). In this perspective, institutions (and consequently organizations), exemplified by asymmetrical relationships, prototypical descriptions, or constraints on people's actions, are envisioned as situated constructions that are made up, attended to, and made relevant by participants during their conversations [START_REF] Potter | Discourse analysis as a way of analysing naturally occurring talk[END_REF]. (This clearly distinguishes DA from Fairclough's Critical Discourse Analysis, for which social reality is made of different layers of discourse, from macro Discourse to micro, situated conversations, so that one cannot analyze a conversation without taking into account the institutional discourses, or texts, that participants draw on when talking. According to DA, institutional realities are enacted in, and through, conversations and do not pre-exist these talks.) In other words, the organizational or institutional dimensions are seen as enacted in, and through, the participants' talks-in-interactions. According to [START_REF] Edwards | Script formulations: An analysis of event descriptions in conversation[END_REF]1995;2006), one should pay particular attention to the descriptions people make of their world when talking to each other, for it is through these descriptions that they perform particular actions during their conversations, i.e. suggesting a particular interpretation, rejecting an anticipated interpretation, blaming the other, accounting for their own or the others' actions or interpretations, etc. In particular, people often describe events or experiences as instances of generalized patterns (or as exceptions to these). Through these scripted descriptions [START_REF] Edwards | Script formulations: An analysis of event descriptions in conversation[END_REF], they establish a normative base against which some events or actions can be considered normal, adequate or usual while others will be regarded as pathological, anomalous or as requiring an account.
In this analysis, the details provided by the participants cannot, as such, be classified as referring to, or deviating from, this norm: one should pay attention to the ways in which participants, during the very course of the conversation, assemble details and produce a particular interactional upshot (e.g. blaming, justifying, accounting for the event in question and for the very act of reporting). Scripts are, in fact, characterized by their variability and flexibility, so that their functions cannot be identified without a careful analysis of the surrounding talk. Following these analytical commitments, we analyze the way in which participants made sense of the managers' strategic role during the conversations through descriptions of the organizational functioning and, as will be demonstrated, of the organization's strategy and environment. Our analysis focuses on the particular moments of conversations where participants swing from one version of the managers' role to another, in order to understand how and why they operate such moves.

In these analyses, we considered whether we, as action-researchers, could have contributed to the contradictions and oscillations observed. The detailed analysis of extracts of conversations where the director and the managers swung from one version of the managers' role to another shows that we had limited, if any, influence on the unfolding of the oscillation observed. (While, for instance, in Extract 1, Researcher 1 tried to interrupt the confrontation between the director and the manager, the participants went on without considering the researcher's attempt; and when, in Extract 2, the researcher invited participants to agree on a definition of the organization's target, they did not hesitate to reject the proposed definition.) These elements, together with the contradictory way in which the director defined the managers' roles at the beginning of the research process, led us to believe that, while not totally lacking in influence, the researchers actually acted more in a midwife capacity, bringing contradictions to the surface rather than creating those contradictions.

Dancing in the Dark

We first briefly present how the director and the managers define, in contradictory and confusing terms, the managers' strategic roles, so that we, as researchers, also felt "in the dark". We then analyze two extracts of conversations where the director and the managers oscillate from one version of their role to another through their construction of various scripts of the organization and its environment; an oscillation that resembles a sort of dance.

In the Dark: setting up contradictory roles

In broad terms, the director and the managers construct two versions of the managers' strategic roles:
- A non-participatory version where the "strategic function is handed over to the managers" and their roles mainly consist of meeting their missions as defined by the state and the board. Although some center managers agree with the director on this first version, they disagree with his management style. While they ask for more autonomy (a laissez-faire management style), the director defends his centralizing management style as a way to ensure the financial viability of the organization.
- A participatory version where, working together and transversally, managers may find innovative answers to the complex problems of the youth in their care.
A non-participatory concept of managers' strategic role

Sometimes the director and a few of the managers would defend a disciplinary, non-participatory conception of strategy [START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF]. During our first interviews and the first collective meetings, the director defended the idea that the strategy, as defined by the board and himself, was handed over to the managers, so that their roles in the strategic process mainly consisted of implementing the strategic orientations. During the second meeting, for instance, the managers discussed "the centralization of management", considering it one of the most important organizational problems to be resolved. The director responded to this by saying that the centers, and thus their managers, are not autonomous (see Extract 1 analyzed below); that this absence of autonomy extends to the strategic level: it is not up to the centers to define strategic directions, which remain a policy prerogative, meaning the prerogative of the association's board. While all the managers were opposed to the "centralization of the head office", two center managers agreed with this non-participative version of their strategic roles, i.e. that their roles essentially consist of satisfying the mission that was given to their particular establishment. When the group worked on defining the global mission of the association during the fifth meeting, they questioned the pertinence of this work for the association as a whole and, indirectly, their participation in the strategic project.

Center Manager (MO): I ask myself if this level [of the association as a whole] is relevant enough to work on […] Working on the strategic level of the association and in its mission, I wonder if it is the best place to act, the most effective […]

This does not mean that they agree with the centralized management style of the director. While accepting their role of passive transmitters of the directives of the head office to the operational level, they also want more autonomy and room for maneuver in fulfilling their missions; a call that led to an open conflict with the director during the second meeting (see Extract 1 analyzed below).

A participatory concept of managers' strategic role

Contrary to the previous, rather passive, concept of the managers' strategic role is a vision that is more participatory and collective. During our second interview, the director reformulated the problem he encountered as "the breakdown of strategic function", meaning a lack of involvement of the managers in the elaboration of a strategic function. He then agreed to have the managers participate in the development of the strategic project. At the end of the fifth meeting, when the group was striving to define the mission of the organization, he even went as far as talking about the "strategic will" of the group of managers. The director also defended the idea that managers are not simple executors of a given strategy responding to the mandates of the state, but that they are capable of working together, "transversally", and of innovating. When we completed the strategic diagnosis of the association during the third meeting and defined its mission as that of "a provider of social services" responding to the mandates of the state, the director protested:

Director: Not only, not only! We are looking to develop new answers; we do not only apply existing mandates. […] We don't just execute solutions.
Even if there are limitations, we are also innovators.

Here, the director is defending a participatory concept of strategy; a vision that is also defended and expressed by some managers through the notion of "transversal work" during the fifth meeting. This transversal work (i.e. working "at the level of the association as a whole", "with all its centers and services") is justified by the complex problems of the young people who are seen in the centers (see also Extract 2 analyzed below).

Center Manager (MO): […] That is what interests me, working in that sphere at the level of the association as a whole, to internally work to find a solution, with all its centers and services, and modalities that could be put into place to help those [the young] that we see in MO for whom we have no ready answers.

This work is also justified by the very notion of "association", which implies filling a void left by the state and satisfying, through new services, needs that are not covered by the state.

Service Manager (MECS B): I believe that there is a niche to take on in terms of the current difficulties that youth are facing; a lot of institutions are getting to the end of their rope. So, there is a spot to be filled and there we can be innovators of new projects and new types of support to be offered.

Dancing: oscillating between contradictory roles

In order to understand how participants develop such contradictory concepts of their roles and oscillate between them, we analyze in detail two sequences where participants swing from one version to another.

Extract 1. Between a non-participative and a participative concept of the manager's role.

During the second meeting, the researchers reported on the main problems identified by the participants through the individual questionnaires collected in the first meeting. While participants agreed with the definitions of the first problems presented, a strong conflict appeared between the director and a manager concerning the centralization of the head office (see Extract 1 below), which had been identified as a major issue for the organization by a vast majority of participants. In order to clarify its meaning, Researcher 1 asked participants to define and illustrate the notion of centralization. Without specifically asking for more participation in the strategic process, a manager then recalls an episode (see the beginning of Extract 1) that he presents as exemplar of a centralized decision process, thereby indirectly calling for more autonomy. He is, however, forced to step back, and so reframes the centralization problem in terms of a lack of consultation or dialogue between the center managers and the director, in effect asking for more participation in the strategy process. At first, the director defends his centralized, non-participatory management style and does not accept the manager's call for more autonomy. While he does not explicitly mention the managers' strategic role in the process, he defends a decision taken without referring to either the manager or the team, i.e. a non-participatory concept of the managers' strategic role. He then takes the call for dialogue as an opportunity to up the ante, that is, to extend the manager's interpretation to imply that the lack of dialogue is the managers' fault and that they should participate in the strategic process; changing from a non-participative to a participative concept of strategy.
Extract 1A (words in CAPITALS indicate the speech became especially loud):
1 Center Manager (MO): I am going to try to give you an example of centralism that shows the state
2 of the relationships among management, the head office and the Board. I was asked to study the
3 implementation of a new service, a service in family mediation. It was first initiated by a social

1/ The manager narrates an episode that he presents as exemplar of the centralism of the head office (l. 1). In describing this particular event, he mixes specific actions ("I was asked"; "it was first initiated", l. 2-3; "I started with", "he expressed", l. 6-7) with generalized aspects regarding the situation ("we are onto something important", l. 4) and his ability to deal with it ("it is clear we were listening", l. 6; "his overall knowledge of the situation", l. 7-8). In describing the incident, he outlines that it has generalized implications for the organization and its protagonists (l. 1: "an example of centralism that shows the state of the relationship"), so that the description has the characteristics of a script [START_REF] Edwards | Script formulations: An analysis of event descriptions in conversation[END_REF]1995). In presenting the incident as exemplar of some general characteristics of the organizational functioning (as opposed to something uncommon), the manager protects his description against potential refutation (of it being a rare occurrence) as well as protecting himself against accusations of bias or prejudice (Edwards, 1995, p. 325). The credibility of the manager, because he is asking for more autonomy and so has an interest in the reporting of the incident, could be in doubt. The script here is a powerful device for managing this issue of the credibility of both the account and the speaker, so that the call for more autonomy in decision-making appears legitimate ("And I have a problem in proposing a new service with instructions not to take financial risks because ...", l. 8-9).

2a/ The director counters this move first by suggesting an alternative interpretation of the script (l. 11-14). He complements the manager's description and gives some information about the financial situation of the organization. In so doing, he accepts the manager's description of himself as being a centralizing director but, at the same time, presents himself as a responsible top manager who must ensure the financial viability of the organization as a whole. Complementing the manager's description, the director outlines that his centralizing management style is legitimate.

2b/ In the next turn, although the manager does not contest these elements, he implies that he is not happy with other aspects of the episode ("I am not contesting that" [our emphasis], l. 18), leading the director to operate a second, more serious countermove through which he contests the manager's version (that they could not do any project, l. 21) and suggests an alternative script (l. 21: "It is not the answer that was given"). This way, he not only denies that he is refusing autonomy to the managers but also sheds doubt on the credibility of the manager's account.

3a/ The manager is then forced to step back ("it is not the answer I was given", l. 22) but provides new information ("a study for the financial risk of the project was not done before giving the answer that financial risk should not be taken", l. 26-28)
and so develops an alternative script in which the refusal of the project is seen not as a lack of autonomy but as a lack of participation and dialogue in the decision process ("for me, that is centralism", l. 27-28). While initially asking for more autonomy, he now argues for more participation and dialogue in the decision process (l. 26-28: "the example of centralism in that case is that a study on the financial risk of the project was not done before giving the answer that financial risk should not be taken. For me, that is centralism"). This new interpretation of what centralization means may be seen as a sign of the participants' "will to power". Introduced by Luhman and Boje (2001, p. 164) into the analysis of discourse, the will to power designates the "struggle of the individual to actively reinterpret and re-story meaning from one event to the next" that is aimed at "benefiting some over others". In our view, the manager's wish to make his point pushes him to redefine his interpretation of the episode; a script that argues for a participative concept of the managers' role in the strategy process. The new version of the managers' role is confirmed in the following exchange, which took place soon afterwards.

4/ Surprisingly, the director not only agrees with the description of the organization as implying centralization but pushes the argument further. He complements the manager's script and reciprocates the argument ("a phenomenon of reproducing the mechanism of centralization both ways", l. 35; "the questions asked in the centers are not addressed to management but go directly to the technical services in the head office", l. 39-40). Also, the director not only recognizes that he is a centralizing manager but is able to reciprocate the blame when he argues that this functioning suits the managers because they don't have to manage the problem ("We don't manage the problem, and in the end, that is fine with us", l. 43). The director's will to power is manifested here when he reinterprets the centralization issue as a generalized organizational functioning, so that the problem is not due to his own management style; a reinterpretation through which he amalgamates the centralization of the head office with passivity and nonparticipation from the managers. The director illustrates his point by referring to his own experience (l. 46-48) as a center manager, when he did not take care of the budget or the accounts. In considering his past behavior as a manager as exemplar of such passive, nonparticipatory, irresponsible behaviors, he is able to up the ante while at the same time protecting his description from a charge of bias. Again, the description of the managers' role is anchored in a generalized description of organizational functioning.

On the whole, the analysis of the extract shows that participants 1/ change from a rather nonparticipatory to a participatory version of the managers' role; and 2/ anchor these versions and claims in scripted descriptions of the organization from which they present themselves as both credible and responsible or knowledgeable managers (cf. [START_REF] Edwards | Script formulations: An analysis of event descriptions in conversation[END_REF]1995;2006). These various versions and descriptions are progressively constructed in relation to the counter-versions and descriptions elaborated by the other participants during the conversations (cf. Edwards, 1995, p. 329).
However, through these continuous adaptations of their scripts, they lose sight of their initial positions regarding the managers' role in the strategic process and end up with a version that contradicts the one initially defended. On the one hand, the manager's stepping back and suggesting a new interpretation contradicts his initial claim for more autonomy and, on the other, the director's contestation followed by his agreement with the manager's labelling of his management as centralizing may be seen as the participants' will to power during the conversation. In our view, the director's and the manager's will to make their point during the very course of the conversation greatly contributes to the sensemaking of the manager's role observed. In order to counter the other's argument, they refer to facts, complement the version of the episode with new facts, or agree with the other's version and "up the ante". But far from mastering the unfolding of the conversations, these last two moves may lead them both to suggest a version of the managers' role that contradicts their initial version; as if their will to power in the conversation, and their wish to escape responsibility for the issue at hand, release so much energy that they lose control of their argumentation. Of course, those changes would not be a problem in themselves if they did not happen again and again, rendering the sensemaking of the managers' role unstable and equivocal during the entire diagnosis phase of the research. This also impedes the creation of a common ground for making sense of the strategic orientations to be followed by the organization (see also [START_REF] Patriotta | Making sense through face: identity and social interaction in a consultancy task force[END_REF]).

While dealing with the organization's mission, the sequence of the second extract (fifth meeting) is, in fact, similar to Extract 1. The swinging from one version to another is initiated not by the director but by another manager, and operates between a participatory and a non-participatory concept of the managerial role. Interestingly, the sensemaking of the managers' roles in the strategic process is this time related to a discussion concerning the mission of the organization and its environment.

Extract 2. Between participation through transversal work and lack of participation and innovation.

During the third and fourth meetings, the group completed a strategic diagnosis of the association. During the fifth meeting, as we were discussing the organization's mission and main target, the issue of the managers' role in the strategy process was again put on the table. At the beginning of the meeting, participants defined the organization as a "service provider" yet progressively recognized that it was also able to innovate and to propose new services in response to the unmet needs of the youth in question. For some managers, this "innovator" positioning implies that they would work together, transversally, so as to propose complex solutions to the complex problems they encounter; others expressed some reluctance, as this positioning would also require them to welcome "the most difficult" cases. The director takes this reluctance as an opportunity to present the managers as passive, incapable or unwilling to participate in the strategic process.
The conversation had initially constructed the managers as capable of working transversally to propose collective answers to the problems of the youth, and thus as capable of participating in the strategic process; it ends up with an opposite version where they are presented as unable or unwilling to do so.

Extract 2A:
1 Service Manager (MECS A): to go back to what we were saying before [that the managers are able
2 to work transversally and to propose new solutions to the problems of the youth], I am attracted by the
3 idea of youth in the most difficult situations, because there is the feeling that it comes from
4 the complexity of difficulties. There is pathology involved, social, delinquency, etc and I am drawn to
5 the idea that we [the association] are adapted to that as we are sufficiently generalist and
6 complex. Complex problems, complex answers. [...]
8 Researcher 1: does it seem to you, in particular for those who weren't in favor of [the expression of]
9 "youth in the most difficult situations", does this idea (given that the term may not be the best, but it

1/ Manager (MECS A) initially defends a collective and participatory concept of the managers' role in the strategy process where, through transversal work, the managers could propose new solutions to the problems of the youth. In order to justify this, she describes in general terms the situation of the young (l. 4) and relates it to the organization's capabilities (l. 5-6) and mission, which is defined as providing complex answers to the complex problems of the youth (l. 6). Through this script, she defines not only the managers' team as capable of working transversally and innovating, but also herself as a strategist when she establishes a one-to-one correspondence between the organization's target and its capabilities. In order to, at least temporarily, stop the oscillation observed between the different versions of the managers' roles, Researcher 1 asks the group if this definition of the organization's mission ("the most complex association", l. 10) might federate the group (l. 13-14).

2/ Managers express their reticence by laughing (l. 12). Manager X contests this expression ("what bothers me is the term 'the most'", l. 15)

3/ and suggests a less extreme formulation ("One can say in great difficulty", l. 19). Referring to facts, he explains that the UER (Reinforced Education Centers), a specific social service of the state, are designed to take in the young in great difficulty (l. 23) and that they have more human and financial resources to accomplish this mission ("the question of means does not surface for the UER", l. 28; "In the UER, there is one educator per youth", l. 33). Through this comparison, the manager questions the first manager's scripted description of the organization as capable of taking in the young in great difficulty. He does not contest the organization's capability to work transversally as such, but reinterprets it as a question of available means compared to those of the UER; a reinterpretation behind which one may see either an anticipatory argument for obtaining more financial resources or a refusal to take in the most difficult cases.

4/ The director agrees that the UER take the most difficult cases, but he complements the description by exaggerating their capabilities compared to those of the association; the UER are innovating, they say placement structures must be created and they do it (l. 26-28).
When manager X counters that public services have more means, the director responds by shouting that the financial means of the association have not been reduced since 1946 (l. 32). He also supplements his description of the UER and contrasts their high capabilities and efficient functioning with those of the associations ("They don't have seminars that last, I don't know how many days, on the definition of youth in difficulty!", l. 24-25; "it always comes back to the pertinence of our organizations! We often say that in budgetary meetings", l. 34-35). He also reports on a dialogue with the general council about the budget of the association and presents it as exemplar of what the general council demands of the association. Through this script, he points out the importance of the ability to propose projects, and the lack of "strategic will" of the managers of the association ("if there isn't the [strategic] will", l. 36-37). The director's descriptions of the UER and of his dialogue with the general council are used to characterize the group of managers as unwilling and incapable of proposing projects and innovative solutions (and, indirectly, as unable to work transversally). Through this reinterpretation, the director not only constructs the group as unwilling to work transversally (which contradicts what manager A argued for at the beginning of the extract) but also rejects the responsibility for this lack; a move behind which one may again see the director's will to power.

To summarize, this section shows that the participants move from a participatory and innovating concept of the managers' strategic role to a rather passive and non-participatory one. The oscillation is anchored not in a scripted description of the organization's functioning, as in Extract 1, but in a description of the situation of the youth and of the capacity of the organization to develop adequate answers to these problems. Also, as observed in Extract 1, the different versions of the managers' role and of the organization's capabilities are progressively constructed in relation, and in opposition, to the version elaborated by the other participant, through reference either to the terms used to describe the situation ("complex" as opposed to "the most complex") or to facts (i.e. the financial means of the UER compared to those of the organization, the dialogue with the general council). While carefully adapting or responding to one another's scripts, participants end up with a version of the managers' role that contradicts the one initially defended. As in the first extract, behind these moves the participants' (and in particular the director's) will to power is apparent during the very course of the conversation; that is, a will to reject responsibility for the issue at hand that goes as far as trying to make their point heard at the expense of the point itself. The oscillation is also highly related to the lack of agreement among the managers about the youth that the organization should take in (the most difficult ones or not) and about the strategic orientations to follow (that of an innovator or that of a service provider).

Extract 2B, which directly followed Extract 2A, shows how an external reference permits the managers and the director to reconcile themselves, at least temporarily, around a consensual dimension of their roles as strategists. It also provides some cues about the reasons for the dance observed.
Extract 2B:
40 Service Manager (MECS B): maybe that is why we are reticent, because there may be judgments
41 that are not explicit and the fear that, in terms of the association's policy, is that it will be
42 translated as "take in the most disturbed young as long as that is your vocation", maybe the fear at
43 the strategic level, that the board doesn't take into account, is that it will necessitate a total
44 reorganization of intake methods. It may not be said but there are issues like this. And if there are, it
45 must be said. Because if there are fears about this, it is in the strategic sphere and we must find
46 the method to communicate a certain amount of clear and coherent messages to the board.
47 Director: I believe that, yes, there is something like that, it is what I tried to say this morning…the
48 discomfort in talking about these very disturbed youth. I believe that brings us back to the fear of
49 being in difficulty in general and that goes along a little bit with what you just said. If we enter into
50 this particular niche, will it end up being us, ourselves, who will be in great difficulty because we
51 just don't have the proper means, etc.

In this second part of the extract, the MECS B service manager recognizes the managers' reservations ("that is why we are reticent", l. 40) and, indirectly, their lack of strategic will ("the fear that, in terms of the association's policy", l. 41; "maybe the fear at the strategic level", l. 42), but relates it, on the one hand, to the "most disturbed youth" that these innovations would target (l. 42) and, on the other, to the risk that the board will not take into account the vast reorganization that this would imply (l. 43-44). Although speculative, the description of the managers' fear permits the manager to position himself as both knowledgeable and courageous ("if there are, it must be said", l. 44; "we must find the method to communicate …", l. 45-46). This script is different from the previous ones in that it does not put the managers in opposition to the director but contrasts the strategic group (which now seems to include both the managers and the director) with the board, thereby affirming the existence of a competent and proactive strategic group capable of taking and communicating a firm position vis-à-vis the board (l. 46-47). The director, as a member of the strategic group (and who sometimes feels his position threatened by the board), agrees (l. 47). Here the reference to the board creates, by contrast, the strategic group, which includes both the managers and the director; an inclusion that for the moment overcomes the direct confrontation between them as well as the dance previously observed.

One should not conclude that this conversation puts an end to the oscillation. In referring back to the morning's discussion about "these very disturbed youth" (l. 48), the director also comes back to their exchange about the lack of means (l. 49-51). While agreeing with the manager at that point, he also wants to have the last word; in doing so his agreement appears quite ambiguous. From here, one may hypothesize that the discussions around the organization's capability and functioning construct a strategic team that is defined in terms of an opposition between the managers and the director, thereby exacerbating the participants' will to power during collective work.
Yet, alternatively, a discussion about the relationship between the group and the board seems to move participants' attention towards a more inclusive and contextualized view of the team that temporarily suspends the opposition between the director and the managers. On the whole, how participants make sense of the issue at hand influences their understanding of the relationship between the managers and the director, in oppositional terms or not; an understanding that, in turn, activates or suspends participants' will to power and political struggles during the meeting.

Discussion

This research contributes to the understanding of how managers make sense of their strategic roles during strategic conversations when confronted with contradictory expectations from top management. This contribution is threefold.

First, complementing prior research on role sensemaking and on [START_REF] Edwards | Script formulations: An analysis of event descriptions in conversation[END_REF]1995) work on scripts, our results demonstrate that sensemaking activities around managerial roles are anchored in scripted descriptions of the organization and its environment. Role construction is an intermediary "within the relational processes of meaning construction" (Simpson & Carroll, 2008, p. 33), so that altering how the relation is understood, and the terms of the relation, can change the meaning of the role. While recent research has explored the discursive tactics and practices used by participants during conversations to convince the audience (i.e. [START_REF] Thomas | Managing Organizational Change: Negotiating Meaning and Power-Resistance Relations[END_REF][START_REF] Samra-Fredericks | Strategizing as lived experience and strategists' everyday efforts to shape strategic directions[END_REF]Patriotta & Spedale, 2009;[START_REF] Bechky | Making organizational theory work: Institutions, occupations, and negotiated orders[END_REF]), much less is known about the actual content of these conversations and its incidence on sensemaking. Our research confirms that participants make sense of their roles in relation to themselves or others, as Simpson and Carroll's (2008) definition suggests. In particular, while the group changes from one version of the managers' role to another during the first five meetings, it arrives at some consensus when the director and the managers both refer to the board. This external reference creates the "strategic group", thereby temporarily putting an end to the confrontation between the managers and the director. This confirms that the relative positioning of the actors vis-à-vis the other, and thus the interpretive frame through which they "see" this other, are crucial elements driving the sensemaking process [START_REF] Weick | Organizing and the process of sensemaking[END_REF]. Complementing Carroll and Simpson's view, our research also shows that the sensemaking process of a managerial role is highly related to that of the organization and its environment. When making sense of the managers' role in the strategy process, participants do not exclusively refer to the director or to the board, but always situate the managers' role within a generalized description of the organization and/or of its environment.
These generalized descriptions often refer to, or carry within themselves, conflicting demands presented as coming from the general council (being accountable for their financial resources and improving service quality) or from the social sector (meeting new, unmet social needs as in the past versus meeting the state's demands only). Further research is needed here on how organizational members enact or ignore potential contradictions emanating from their environment and its history. Following Edwards (1994, p. 325), these scripted descriptions may be interpreted as a way to depoliticize the issue of role in the organization. They help participants establish their account as unbiased and build a picture of themselves as neutral reporters of facts, as opposed to that of a self-serving or self-interested person. Referring back to events, incidents and episodes, the question of managerial roles becomes a rational and technical problem (as opposed to an issue of power and personal interest), so that the use of scripted descriptions permits managers to appear unbiased, but also competent, knowledgeable, responsible and apt to think strategically. Scripts are constructed as a platform of legitimacy from which to claim a particular version of the managers' strategic role. Complementing [START_REF] Edwards | Script formulations: An analysis of event descriptions in conversation[END_REF]1995), our results show that participants in an organizational context construct scripted descriptions not only of their particular personal experiences or events, but also of the organization and its environment. Further research is needed on the content of conversations, as it is believed that, in sensemaking processes, what is said is complementary to how it is said (the discursive devices and tactics used, see [START_REF] Edwards | Script formulations: An analysis of event descriptions in conversation[END_REF]1995) and that it greatly contributes to the power struggle during the conversation. Longitudinal studies on the interactions between managers and top managers are also needed in order to get a deeper understanding of how contradictions evolve over time and of their consequences on the managers' and staff's behaviors and engagement within the organization.

Second, our research shows how the managers and the director both contribute to the fabric of contradictory versions of the managerial roles. Previous research on strategy discourses (e.g. [START_REF] Laine | Struggling over subjectivity: A discursive analysis of strategic development in an engineering group[END_REF][START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF]) has underscored that managers, confronted with contradictory expectations from top managers, reject or adopt top managers' versions, or develop specific versions that selectively incorporate some elements and reject others. Contrasting with these results, our analysis of the conversations shows that managers are not just reacting to the sensegiving efforts of top management (e.g. [START_REF] Gioia | Sensemaking and sensegiving in strategic change initiation[END_REF]), but that they actively participate in the sensemaking of their roles and in the contradictions observed in their conversations with the director. These elements also demonstrate that differences in sensemaking may not only be due to differences in hierarchical positions [START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF], functions [START_REF] Laine | Struggling over subjectivity: A discursive analysis of strategic development in an engineering group[END_REF][START_REF] Suominen | Consuming strategy: the art and practice of managers' everyday strategy usage[END_REF], or audiences [START_REF] Vaara | Post-acquisition integration as sensemaking: glimpses of ambiguity, confusion, hypocrisy, and politicization[END_REF], but that the same actors may hold different views during a conversation as the result of their confrontation with other participants. These results, along with others (see [START_REF] Balogun | Organizational restructuring and middle manager sensemaking[END_REF]2005), invite us to distance ourselves from the portrayal of strategic change as made up of successive phases of sensemaking and sensegiving, mainly dominated or directed by top managers. They also encourage us to investigate sensemaking through naturally occurring talk rather than interviews or documentary data (see also [START_REF] Silverman | Interpreting qualitative data[END_REF]). This co-constructive view of the sensemaking of managers' strategic role also complements extant literature on identity work (Alvesson & Willmott, 2003; [START_REF] Watson | Managing identity: Identity work, personal predicaments and structural circumstances[END_REF]). While defining self-identity in constructive and "relational" terms, this research views identity work as the individual's work on social identities that are publicly available through discourses [START_REF] Watson | Managing identity: Identity work, personal predicaments and structural circumstances[END_REF] or promoted by the organization ([START_REF] Alvesson | Good visions, bad micro-management and ugly ambiguity: contradictions of (non-)leadership in a knowledge-intensive organization[END_REF]; Alvesson & Willmott, 2003), so that identity work implies a relationship that is asymmetrical and ideational.
Though not contesting the relevance of such processes in a world where the individual is submitted to multiple and often contradictory discourses on how one should be or behave [START_REF] Watson | Managing identity: Identity work, personal predicaments and structural circumstances[END_REF], our results differ from identity work research by showing 1/ how roles (as intermediaries between self-identity and others) are constructed through conversations between organizational members (through (small d) discourses as opposed to (Big D) Discourses, [START_REF] Alvesson | Varieties of discourse: On the study of organizations through discourse analysis[END_REF][START_REF] Bechky | Making organizational theory work: Institutions, occupations, and negotiated orders[END_REF]) and 2/ how their various discursive moves may contribute to the contradictory versions of their roles (vs. relying on available Discourses only).

Finally, going into the details of a strategic conversation, the research outlines the discursive actions through which participants fabricate conflicting versions of the managerial role and oscillate between them. Although the specific subject matter and actors change, both extracts display a similar organization that can be summarized as follows:
1. Manager A describes one's role as strategic manager and/or the director's role through a scripted description of the organization (version 1);
2. Manager B, or the director, counters manager A's script by referring to facts (not version 1);
3. Manager A steps back. An alternative version of the manager's role and its related script is suggested through the addition of other facts; the new version contradicts the one initially suggested (not version 1 becomes version 2);
4. The director agrees with this new version of the manager's role but pushes the argument further, which leads him to contradict himself (version 2 ++ is version 1).
Although needing replication, these results shed light on the politics of interpretation. In our view, the participants' will to power during the very course of the conversation contributed to the discursive fabric of contradictory versions of managerial roles, to the oscillation between those versions and to the resulting dance observed. Following the dance metaphor, it seems that participants' will to have the last word during the conversation is sometimes so strong that they lose control over their argumentation; that will to power sometimes releases so much energy that their argumentation just slides away from them. While power is usually associated with control over others as well as over the course of the conversation [START_REF] Samra-Fredericks | Strategizing as lived experience and strategists' everyday efforts to shape strategic directions[END_REF], and with manipulation (Rouleau & Balogun, 2011, p. 976), our research confirms Haworth's (2006) analysis that taking control of the conversation might be done to one's own detriment. "Power and control can always be challenged by the use of discursive strategies, regardless of the subject matter, the status of participants, or any other factor. However […], they might not be wise in this context, and can in fact lead to weakening the challenger's position in the wider sense […] Thus it is just as important to know when to relinquish power and control in this context as it is to maintain it" (Haworth, 2006, p. 755). This result echoes Patriotta and Spedale's (2009) analysis of conversations in a consultancy task force where power struggles between consultants over their roles in the project led to entrenched warfare and a loss of sensemaking. However, further research is needed to better understand how and why conversations sometimes lead to positional warfare (as observed in Patriotta & Spedale, 2009) or to the dance in the dark observed in our research.
In summary, our research shows that the fabric of contradictory concepts of managers' strategic roles relies on situated constructions of scripted descriptions of the organization and its environment; a sensemaking process that is driven by the participants' will to power during the very course of a conversation and that is far from being mastered by participants. In our view, this invites us to explore the uncontrolled dimensions of power and sensemaking in organizational life, a dimension that is today underinvestigated in management and strategy research, even in its strategy-as-practice approach.

Although contributing to our understanding of sensemaking during conversations, our research has limitations. First, the extracts of conversation reported in this research have been translated from French to English. Some important nuances may have been lost in the reporting of the events and other meanings added (cf. [START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF]). In order to limit these possible distortions in interpretation, the research and the analysis were conducted entirely in French and only then translated into English. Second, it was not possible to know the way in which the different strategic roles constructed by the managers during the conversations were actually played out in their centers (cf. [START_REF] Lüscher | Organizational change and managerial sensemaking: working through paradox[END_REF]). Even when we were able to pick up on the ambivalence of certain actors in the course of the group conversations, we did not seek to appreciate the degree to which they continued to individually defend contradictory conceptions of their role (cf. [START_REF] Vaara | Post-acquisition integration as sensemaking: glimpses of ambiguity, confusion, hypocrisy, and politicization[END_REF]). However, the use of naturally occurring talk (as opposed to interview data) is a step towards a deeper understanding of how managers make sense of their roles as they interact with one another in the organization. In our view, further research is needed to appreciate how the roles constructed in interviews, strategic conversations (cf. this research) and day-to-day interactions (cf. [START_REF] Rouleau | Micro-practices of strategic sensemaking and sensegiving: how middle managers interpret and sell change every day[END_REF][START_REF] Rouleau | Middle Managers, Strategic Sensemaking, and Discursive Competence[END_REF]) will continue to converge or differ. Finally, for lack of space, the interpretation suggested did not analyze in depth our own contribution to the dance. As participatory action-research members, we sought to understand the different viewpoints expressed by participants. Through questioning, looking for clarification and further justifications, we may have exacerbated the contradictions expressed by participants. They may also have "instrumentalized" us, using us to ensure that their points were heard by the director. At the same time, as we were actively looking for some common ground among participants (see Extract 2) on which to develop the strategic project, we may have undermined some differences between interpretations.
While it is clear that we did contribute to the sensemaking process, the size of the group (18 managers), on the one hand, and our permanent feeling of confusion during the first meetings, on the other, incline us to believe that we were more interpreters (amplifying or reducing) than choreographers of the dance described here.

Notes
The maître d'oeuvre, literally the "Master of Works", acts as a bridge, for example, between the architect and the end client and building companies.
Identity regulation refers to the organization's discourses and practices that seek to shape the worker's identity (see [START_REF] Alvesson | Identity regulation as organizational control: Producing the appropriate individual[END_REF]).
Longitudinal studies on the interactions between managers and top managers are also needed in order to get a deeper understanding of how contradictions evolve over time and of their consequences for the managers' and staff's behaviors and engagement within the organization. Second, our research shows how the managers and the director both contribute to the fabric of contradictory versions of the managerial roles. Previous research on strategy discourses (e.g. [START_REF] Laine | Struggling over subjectivity: A discursive analysis of strategic development in an engineering group[END_REF][START_REF] Mantere | On the problem of participation in strategy: a critical discursive perspective[END_REF]) has underscored that managers, confronted with contradictory expectations from top managers, reject or adopt top managers' versions, or
85,702
[ "2437" ]
[ "57129" ]
01490901
en
[ "shs", "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490901/file/978-3-642-40358-3_12_Chapter.pdf
Marijn Janssen Leif Skiftenes Flak Øystein Saebø email: [email protected] Government Architecture: Concepts, Use and Impact Keywords: Enterprise architecture, design, design science, governance, government, government architecture, public value Introduction Current developments within the government sector, such as access to open data, shared services, cloud computing and data integration between private and public organizations, increase the importance of government architecture (GA). Governments from across the world have embraced the concept of GA [START_REF] Peristera | Towards an enterprise architecture for public administration using a top-down approach[END_REF][START_REF] Weerakkody | Integration and Enterprise Architecture Challenges in E-Government: A European Perspective[END_REF][START_REF] Hjort-Madsen | Enterprise Architecture Implementation and Management: A Case Study on Interoperability[END_REF], which is often inspired by information systems architecture [START_REF] Zachman | A Framework for Information Systems Architecture[END_REF], information architecture [START_REF] Periasamy | Information architecture practice: Research-based recommendations for the practitioner[END_REF] or Enterprise Architecture (EA) [START_REF] Hjort-Madsen | Enterprise Architecture Implementation and Management: A Case Study on Interoperability[END_REF][START_REF] Richardson | A Principle-Based Enterprise Architecture: Lessons From Texaco and Star Enterprise[END_REF]. The term 'information systems' refers to the various socio-technical elements, 'information' refers to information as a fourth production factor, whereas "enterprise" refers to the scope of the architecture dealing with multiple departments and organizations rather than with a certain organizational part or individual components and/or projects [START_REF] Ross | Creating a strategic IT architecture competency: Learning in stages[END_REF]. In government various terms are used, including enterprise architecture [START_REF] Hjort-Madsen | Enterprise Architecture Implementation and Management: A Case Study on Interoperability[END_REF], national enterprise architecture [START_REF] Janssen | Analyzing Enterprise Architecture in National Governments: The Cases of Denmark and the Netherlands[END_REF] or national or domain reference architecture [START_REF] Janssen | Socio-political Aspects of Interoperability and Enterprise Architecture in Egovernment[END_REF]. We avoid the word enterprise to prevent any association with business, and prefer the wording government architecture to refer to the scope of the government in contrast to the enterprise. Governments have adopted a variety of models and often developed their own customized frameworks and applications that fit their country or organizational situations best, resulting in a variety of GAs [START_REF] Janssen | Analyzing Enterprise Architecture in National Governments: The Cases of Denmark and the Netherlands[END_REF]. Over time, these GAs have developed in their own directions, and adopting their own vocabularies may result in conceptual unclearness and indistinct concepts [START_REF] Janssen | Analyzing Enterprise Architecture in National Governments: The Cases of Denmark and the Netherlands[END_REF]. 
There is no "one-size-fits-all" architectural method that is equally effective and a contingency approach is often taken [START_REF] Riege | A Contingency Approach to Enterprise Architecture Method Engineering[END_REF]. Much can be learned from each other, but learning is made difficult by the various meanings that are given to the same concepts. Having a clear vocabulary is necessary to advance our understanding of the field and to understand how the various research efforts and conceptualizations are related to each other. The concept of architecture is ambiguous and lacks a common agreed upon definitions [START_REF] Corneliussen | IT Architecturing: Reconceptualizing Current Notions of Architecture in IS Research[END_REF]. Smolander [START_REF] Smolander | Four metaphors of architecture in software organizations: finding out the meaning of architecture in practice[END_REF] argues that a plausible reason for why it is difficult to define architecture as a concept is that the source domain i.e. building architecture is equally ill-defined and that the meaning of architecture changes according to the type of stakeholder, situation under study, and the phase of the project. A common understanding and methodological consistency seems far from being developed [START_REF] Simon | An Exploration of Enterprise Architecture Research[END_REF]. There is a body of literature comparing different approaches and frameworks with each other [START_REF] Simon | An Exploration of Enterprise Architecture Research[END_REF][START_REF] Schekkerman | How to Survive in the Jungle of Enterprise Architecture Framework: Creating or Choosing an Enterprise Architecture Framework[END_REF][START_REF] Leist | Evaluation of current architecture frameworks[END_REF]. Aier et al. [2008] and Simon et al. [START_REF] Simon | An Exploration of Enterprise Architecture Research[END_REF] provide an overview of EA literature, compare different frameworks and approaches found in the literature along the following criteria: the understanding of enterprise architecture (i.e., the degree of consideration of architectural layers); the representation of enterprise architecture (modelling languages, tool support); and the use of enterprise architecture (e.g., documentation, analysis, and planning). In general, an architecture is the conceptual description of the set of elements and the relationships between them [START_REF] Armour | A big-picture look at Enterprise Architecture[END_REF] aimed at creating a coherent and consistent set of relationships among (sub)systems [START_REF] Doucet | Coherency Management: Using Enterprise Architecture for Alignment, Agility, and Assurance[END_REF]. A commonly used definition is that of the architecture working group as described in the IEEE Std 1471-2000 "Architecture is the fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principle guiding its design and evolution" [START_REF]Architecture_Working_Group: IEEE Std 1471-2000 Recommended Practice Architectural Description of Software-Intensive Systems[END_REF]. According to this definition architecture consists of the following elements; architectural principles, implementation guidelines, system structure and components. Architectural principles are the foundation for making necessary design decisions and guide the development. 
Implementation guidelines focus on how organizations can adopt and implement their own architectures, whereas system structure and components focus on the components of the system and their relationship. This paper is a first attempt to provide clarity concerning the concepts, use and impact of GAs. EA can be used within a single organization, but in GA needs a scope that goes beyond a single organization and domain. Due to the wide variety or stakeholders, domains and diversity of government the complexity of GA may exceed the complexity of traditional EA. Further, as GA can be described as the backbone of modern public value creation and production it is seen as essential to ensure that GA is developed to leverage public value. We therefore develop a conceptual model including the central concepts of GA, direct and indirect benefits from GA and public value drivers. This model can be extended and refined in further research. There is a limited amount of existing conceptual research on GA and there is no uniformity among definitions or GA methodologies. This heterogeneity can be attributed to the abstract and diverse character [START_REF] Simon | An Exploration of Enterprise Architecture Research[END_REF][START_REF] Schekkerman | How to Survive in the Jungle of Enterprise Architecture Framework: Creating or Choosing an Enterprise Architecture Framework[END_REF][START_REF] Leist | Evaluation of current architecture frameworks[END_REF]. To get a grip on this, we adopted a com-bination of deductive and inductive approaches to address our research problem, in line with Simon et al. [START_REF] Simon | An Exploration of Enterprise Architecture Research[END_REF] we will utilize applied research to overcome potential gaps between theoretical foundations and the application of EA management. We applied inductive reasoning by starting from specific observations in Norway and the Netherlands to broader generalizations and theories, in other words; moving from the specific to the general. We opted for investigating two different situations in which GA are developed and in use, namely the Netherlands and Norway. This allowed us to observe a variety of different conceptualizations of direct and indirect benefits relating to GA. We explored the situation by first analysing publicly available data from both countries, by reading reports, presentations, project documents and websites. Thereafter, our findings were discussed by key personnel working in these GAs projects. Using literature we then sought for patterns and common elements in these observations resulting in our conceptualization. From the usage patterns we derived the main GA components, which are defined based on insights from the literature. Finally a model is developed which shows the contribution how the GA components create value. The paper is organized as follows. Next, GA usage pattern is derived, resulting in conceptualization of main concepts, which are illustrated by observations from our two countries. Finally we conceptualize relationships between impacts and value of GA before we conclude. 2 Government Architecture usage patterns GA development can be characterized by elements that are used to influence the development of the architectural landscape. GA is intended to direct and help developers in their design activities. Architecture influences the design decisions and the investments of an organization and in turn is influenced by behavior and design decisions. 
A GA actually emerges as a result of implementing individual projects. As such, architecture and design are closely linked as architecture aims at guiding designers in their design efforts. Design science, as conceptualized by Simon [START_REF] Simon | The Sciences of the Artificial[END_REF], focuses on creation of artefacts to solve real-world problems. Design science research combines a focus on the IT artefact with a high priority on relevance in the application domain, which is also the intention of GA. Typical GA artefacts include framework, tools, principles, patterns, basic facilities and shared services [START_REF] Janssen | Socio-political Aspects of Interoperability and Enterprise Architecture in Egovernment[END_REF]. These are used to influence new design projects at the conceptual level of implementation level. At the conceptual level the initial architecture of a project is influenced, whereas already available facilities and shared services can be used when implementing the design. The information systems (IS) community have recognized the importance of design science research to improve the effectiveness and utility of the artefact in the context of solving real-world business problems [START_REF] Hevner | Design science in information systems research[END_REF]. Design science research in IS addresses what are considered to be wicked problems [START_REF] Rittel | Planning problems are wicked problems[END_REF][START_REF] Hevner | Chapter 2 Design Science Research in Information Systems[END_REF] that can be characterized by unstable requirements and constraints based on ill-defined environmental contexts, complex interactions among components, inherent flexibility to change design processes as well as design artefacts, a critical dependence upon human cognitive abilities (e.g., creativity) to produce effective solutions, and a critical dependence upon human social abilities (e.g., teamwork) to produce effective solutions. GA is aimed at tackling a broad range of issues as the architecture aims at guiding a variety of design projects ranging from integrated service provisioning to social media platforms. Complexity is at the heart of the architecting challenges. When projects fail, one of the reasons is typically that the system or situation was more complex than originally expected. Many of the architecture methods, models principles, rules, standards and so on are aimed at simplifying the situation. GA cannot be viewed as a isolated instrument as it needs governance to be effective [e.g. 7]. GA is shaped by the interaction among stakeholders and influenced by contemporary developments. The organizations can adapt their GA strategy according to the path dependencies and anticipated or desired benefits. Whereas the initial focus might be on reducing administrative costs in the Netherlands and interoperability in Norway current developments like cloud computing and open (linked) data influence these developments. This results in expanding GA to be able to deal with new contemporarily challenges. The GA exhibits emergent phenomena like new standards, technology, innovations and players entering the field and there is no central control or invisible hand. In both countries the GAs are aimed at guiding and directing the development of ICT-projects in the government. This is a generic pattern that provides some commonalities. An important distinction is the use of the project start architecture (PSA) and GA [START_REF] Berg | Building an Enterprise Architecture Practice. 
Tools, Tips, Best Practices, Ready-to-Use Insights[END_REF]. Whereas GA refers to the government domain or organization as a whole, start architecture refers to the initial architecture developed for a certain project. PSA is derived from the EA and provides guidance for project-level decision-making. GA influences the design decisions and the investment behaviour of an organization and is in turn influenced by behaviour and design decisions. Further, GA influences the design decisions and the system architecture that will be developed by a project. The usage of architecture is about balancing the use of the architecture in design projects with providing leeway to the designers to deal with the inherent complexity they are working in. Too much freedom results in heterogeneity, whereas too little freedom may result in mechanistic views, reduced creativity, inappropriateness of dealing with uncertainties and solutions that are not appropriate for the given situations. Figure 2 provides an overview of the EA elements (in the box) and how they are used when developing a PSA. This framework is created by relating the main process steps (grey blocks) used in a design project to the elements of GA used in each of these steps. When a project is initiated, the project requirements will have to comply with the requirements as posed in the GA, such as the level of security and privacy. Thereafter, the EA framework will provide the structure for developing the PSA. The PSA will be filled in based on the input provided by the EA elements: principles, guidelines and standards. Elements that are not filled in have to be complemented by the project. If new elements can be added this might result in an update of the GA, and the GA has to be adapted continuously. Also the realization of a new system or the evaluation might result in new insights, which can be fed back into the GA. Once the process is completed the PSA is finished. The above usage patterns are based on how GA should be used to guide development projects, although we found that many times this usage pattern was more dynamic and ad-hoc. In a similar vein the GA can be used to guide modification of daily activities. This follows a similar process in which the PSA is updated and guided by the common architecture elements. Conceptual clarity is needed; hence we will further discuss and define the elements of GA by applying the elements from the framework and discussing practice in the Netherlands and Norway. The Netherlands was a frontrunner in the field, and in 2004 the Ministry of Government Reforms initiated the development of a national GA aiming to reduce red tape, whereas at a later stage the emphasis shifted toward interoperability, due to the focus of the EU on interoperability. A second version was released in 2007 and a third version in 2009, focusing on managers and administrators. Whereas the second version contained a large number of principles, this number was reduced in the third version. Norway has had action plans for ICT in government or eGovernment since the early 2000s. Architecture became part of these plans in 2006, as part of a Government proposition. A central part of the Norwegian architecture is a number of suggested common public ICT components, with the idea that functionality required by the majority of services should be developed once and made publicly available for re-use. Norway lacks an explicit focus on GA at the national level. 
However, several typical components of GA have been focused upon but the step of organizing it in a GA has not yet been formally initiated. Frameworks Zachman [START_REF] Zachman | A Framework for Information Systems Architecture[END_REF] introduced the concept of architecture frameworks that provide multiple views on information systems. Frameworks are used for describing and understanding EA [START_REF] Kaisler | Enterprise Architecting: Critical problems[END_REF]. The frameworks model(s) chosen determine what aspects can be captured at what level of abstraction. In EA the use of frameworks has been given much attention and a variety can be found [START_REF] Peristera | Towards an enterprise architecture for public administration using a top-down approach[END_REF][START_REF] Schekkerman | How to Survive in the Jungle of Enterprise Architecture Framework: Creating or Choosing an Enterprise Architecture Framework[END_REF][START_REF] Guijarro | Interoperability frameworks and enterprise architectures in e-government[END_REF], although many of them cannot be qualified as architecture frameworks. A framework often is realized as a matrix that visualizes the relationship between the various elements in each domain [START_REF] Janssen | Socio-political Aspects of Interoperability and Enterprise Architecture in Egovernment[END_REF]. In the Netherlands GA is developed by adopting one part of the Zachman model. The architecture is driven by requirement for EU, Dutch government, businesses and citizens. The model is primarily used as a way to structure and interrelated architecture principles and best practices. The web-based version contains hyperlinks to these principles and practices. The framework is generic due to the need for covering all public organizations. As the national EA is generic there are a number of domain architectures, which are derived from the NEA and provide more details and are customized to the domain. In Norway a three level conceptual framework from 2006 is supplemented recently by a proposed set of core national common components, including guidelines for how to use and administrate these common components. In addition, Norway refers to EU and European frameworks. In summary, the framework is used to specify how information technology is related to the overall business processes and outcomes of organizations, describing relationships among technical, organizational, and institutional components. This view on EA is expressed by providing codified understanding of elements. Definition: Architecture Frameworks structures and interrelates architecture elements to allow design of the elements independently and at the same time ensuring coherency among elements. Architectural principles The use of architectural principles for designing service systems are commonly used in the design of systems [START_REF] Van Bommel | Giving Meaning to Enterprise Architectures: Architecture Principles with ORM and ORC On the Move to Meaningful Internet Systems[END_REF][START_REF] Richardson | A principles-based enterprise architecture: Lessons from Texaco and Star Enterprise[END_REF]. Principles are particularly useful when it comes to solving ill-structured or 'complex' problems, which cannot be formulated in explicit and quantitative terms, and which cannot be solved by known and feasible computational techniques [START_REF] Simon | The Sciences of the Artificial[END_REF]. 
Principles are commonly used for guiding stakeholders in the design of complex information systems [START_REF] Van Bommel | Giving Meaning to Enterprise Architectures: Architecture Principles with ORM and ORC On the Move to Meaningful Internet Systems[END_REF][START_REF] Richardson | A principles-based enterprise architecture: Lessons from Texaco and Star Enterprise[END_REF]. Principles are often based on the experiences of the architects, which they have gained during many years of information systems development [e.g., 29]. Similarly, Gibb [START_REF] Gibb | Towards the Engineering of Requirements[END_REF] suggested that principles are the result of engineers reflecting on the experiences gained from previous engineering projects, sometimes combined with professional codes of conduct and practical constrains. Principles have been defined in various ways and they have been used interchangeably with other problem solving notions, including laws, patterns, rules and axioms [START_REF] Maier | The art of systems architecting[END_REF]. The Open Group have defined design principles as "general rules and guidelines, that are intended to be enduring and seldom amended, that inform and support the way in which an organization sets about fulfilling its mission" [START_REF]TOGAF: The Open Group Architecture Framework[END_REF]. The disadvantage of this definition is that it does not make any differentiation with guidelines, which are more indicative and do not have to be followed. Ideally, principles should be unrelated to the specific technology or persons [START_REF] Perks | Guide to Enterprise IT Architecture[END_REF]. Principles should emphasize "doing the right thing" but should not prescribe 'how' is should be accomplished. Principles are normative in nature. In the Netherlands the principles are used to ensure that everybody is guided by the same starting points and adopt the same approaches when developing new systems. This should warrant that requirements like flexibility, interoperability, security and maintenance are met. Norway has developed seven high level architectural principles as part of government propositions, to guide the design of service systems. Moreover, a number of national common components and core set of registries are re-used to avoid duplicated development and arrange for consistency. Definition: Principles are normative and directive statements that guide in decision making when designing new systems. Architectural guidelines Guidelines are aimed at supporting architects, commonly shaped as statements or other indications of policy or procedure by which to determine a course of action. Similar to principles they are aimed at transferring the knowledge obtained by experience to others. Whereas principles have to be followed, guidelines do not need to be completely followed and allow for discretion in its interpretation. Furthermore, guidelines might result in the need to make trade-offs, e.g. open access vs. security. Open access might make it more difficult to ensure security and security might prefer restricted access. Guidelines can be viewed as recommended practice (e.g. use of open source software) that allows some discretion or leeway in its interpretation and use (not always open source can provide a suitable solution). Interestingly, neither of the countries had explicit mentions of requirements in the available material on EA. 
In the Netherlands, some 'principles' are in fact guidelines, whereas Norway has no explicit guidelines but refers to EU and other nations' guidelines. Definition: Guidelines are rules of thumb for determining courses of action, allowing leeway in their interpretation. Standards Standards management is viewed as a new direction of EA business efforts [START_REF] Simon | An Exploration of Enterprise Architecture Research[END_REF], whereas it has been given considerable attention by governments. The EU framework initially was focussed on standard setting and interoperability and only at a later stage included architecture elements. There are a variety of types of standards, like open standards or technical standards. In general, standards are definite rules or measures established by some authority determining what a thing should be, often accompanied by some criteria to qualify whether standards are met or not [34]. Standards are aimed at ensuring quality and that different elements are able to interoperate with each other. Standards specify or define policies that are subsequently adopted by a large number of members. Standards are essential for facilitating GA and enable organizations to influence the actions of units without explicitly prescribing how to handle internal information-processing activities [START_REF] Boh | Using Enterprise Architecture Standards in Managing Information Technology[END_REF]. In GA standards are essential for the interaction between public government organizations and their interaction with external entities, by defining interaction interfaces between various systems. There exists a wide variety of standards providing organizing logic for applications, data and infrastructure logic [START_REF] Ross | Develop Long-Term Competitiveness through IT Assets[END_REF], including standards on [START_REF] Boh | Using Enterprise Architecture Standards in Managing Information Technology[END_REF]:
• Physical infrastructure management: standards on underlying technologies required to run organisations, like computers, networks, servers and database management
• Human IT infrastructure management: standards on human IT resources such as organisational IT skills, expertise, competence and knowledge
• Integrating business applications: standards to define strategic directions for managing applications and the integration between them
• Enterprise data integration: standards focusing on the integration of critical data elements and databases for cross-organisational integration, and defining data elements
In the Netherlands some standards are referred to in the GA framework, whereas other standards are put on a comply-or-explain list. This means that designers should adhere to these standards, and if they do not they have to explain this in detail. In Norway a general "Catalogue of standards" (recommended or mandatory) for the public sector is available through a designated web portal. A broadly composed Standardization council maintains the catalogue. Definition: Standards are a set of well-defined policies and specifications used as rules to form unifying practices across projects and organizations. Conceptualizing government enterprise architecture impact In business there is limited knowledge about the effect of EA practices [START_REF] Schmidt | Outcomes and success factors of enterprise IT architecture management: empirical insight from the international financial services industry[END_REF] and we found that the same applies to the government domain. 
The patterns in the previous (sub)sections show the process how GA is used and updated, but not how this process contributes to the creation of public value. Therefore we investigated the overall aims and benefits of the GA. The GA elements are used to create value and these elements are shown on the left in figure 2. The process we induced from the two countries is aimed at creating observable direct or indirect benefits, which are shown in the middle of Figure 2. Direct benefits include better interoperability, reuse, flexibility/agility and information quality. Indirect benefits include better communication, decision-making and fit between organization and technology. The main goal of government is to create a wide-range of public values for their citizens. Hence, these observable direct and indirect benefits of GEA should contribute to the creation of public values. Public values are a "good, service or outcome which supports, meets or conforms with one or more of an individual or group's values" [START_REF] Bannister | Citizen centricity: a model of IS value in public administration[END_REF] are an "important (but often taken for granted) motivation for strategy and implementation of eGovernment projects" [START_REF] Rose | E-Government value priorities of Danish local authority managers[END_REF]. Rose and Persson [START_REF] Rose | E-Government value priorities of Danish local authority managers[END_REF] define three primary values, administrative efficiency, services improvement and citizens engagements which are shown on the right side in Figure 2. Administrative efficiency represents the search for value, expressed by efficiency, effectiveness and economy [START_REF] Rose | E-Government value priorities of Danish local authority managers[END_REF], and are deconstructed into target variables such as return on investments, net present value and increases capacity and throughput. Service improvements derive from customer orientation, focusing on how to use ICT to provide better services to the public [START_REF] Rose | E-Government value priorities of Danish local authority managers[END_REF], including issues like better access to services and information, online access to services, and cost-savings for citizens and other external stakeholders. Values related to citizen engagement combines ideals on community empowerment with democratic values such as citizens access to information [START_REF] Rose | E-Government value priorities of Danish local authority managers[END_REF]. Citizen engagement values relate to the engagement, empowerment and use of eGovernment services for citizens´ involvement, and citizens´ role in the design and development on eGovernment services provided by the public. Based on the conceptualizing of GA introduced in section 4 above, and the benefits and public value drivers introduced here, we conceptualize intended GA effects. The GA elements result in direct and indirect benefits that ultimately should contribute to the generation of public values. Conclusions We sought for patterns and common elements of GA in this research. The practices in two countries were investigated and it was found that GA consists of frameworks, principles, guidelines and standards to guide design project and deal with the complexity. These elements are used to direct and guide initiatives occurring at all levels of government. We inductively derived a generic pattern on how GA was used in government. 
GA frameworks, principles, guidelines and standards were identified as the main concepts used in both countries. Although these four concepts might look clear at first glance, they were not in practice. For instance, in the Netherlands principles include statements ranging from a very high conceptual level down to technology-specific statements explicitly telling how to do things. Defining the common elements of GA can result in a vocabulary which enables easier communication between stakeholders. The definitions proposed in this paper should be viewed as a first step to better define the concepts and can be further refined by investigating more practices. Furthermore, the number of main concepts can be expanded in further research, as GA might serve other purposes and concepts might change over time. Based on the use of the four concepts a model was developed showing how GA benefits help in the realization of public values. In the situation studied the architectural efforts are focussed on creating benefits like better interoperability, reuse, flexibility/agility and information quality and indirect benefits like better communication, decision-making and fit between organization and technology. Although we acknowledge the importance of these outcomes, these are primarily the benefits viewed from the IT perspective, whereas the motivation for initiating the architectural efforts was the creation of public values. Therefore we argued that these direct and indirect benefits should result in three types of public values: administrative efficiency, services improvement and citizen engagement. The resulting conceptual model provides a starting point for conceptualizing the impact of GA and should be further refined and tested in further research. Although our aim is to provide conceptual clarity among the concepts, GA is not a uniform concept and can have various interpretations and purposes. The comparison between countries revealed similarities and differences. The Netherlands, being a front-runner in this area, has more years of experience with GA efforts than Norway. Nevertheless it was rather surprising to see that Norway still has no formal description of GA, at least not at the national level. In the Netherlands the GA focus has shifted over the years, suggesting that concepts might also change over time. Further research is needed to explain differences among countries and to better understand the consequences of the differences. This can help to determine which architectural concepts are essential and which are supportive for creating public values.
Fig. 1. Common architecture usage patterns
Fig. 2. Conceptualizing GA and its impact
34,039
[ "985668", "994159", "1004202" ]
[ "333368", "301147", "301147" ]
01490907
en
[ "shs", "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490907/file/978-3-642-40358-3_17_Chapter.pdf
Johanna Sefyrin email: [email protected] Katarina L Gidlund email: [email protected] Karin Danielsson Öberg Annelie Ekelin email: [email protected] Representational Practices in Demands Driven Development of Public Sector Keywords: demands driven development, public sector, e-services, representationalism, feminist theory Introduction This paper concerns representational practices in demands driven development of public sector, and the problems they involve. The term demands driven development refers to a movement in public sector towards a closer cooperation with the citizens, primarily with regard to the development of public e-services. This development currently takes place in Sweden as well as in other countries [START_REF]Ministerial Declaration on eGovernment[END_REF][START_REF]the European eGovernment Action Plan[END_REF]. The expected benefit of a closer cooperation with citizens is to make public sector more efficient and thus minimize public costs [START_REF]the European eGovernment Action Plan[END_REF]. It is supposed that if the citizens are somehow involved in the development of these services, they will also be more inclined to use them [START_REF]the European eGovernment Action Plan[END_REF]. Our objective is to explore representational practices through the analysis of practitioners' talk about demands driven development. There are several interrelated problems with representational practices; one is that they invoke questions of truthfulness, in terms of, for instance, whether a person really can represent a group of other persons [START_REF] Barad | Meeting the Universe Halfway. Quantum physics and the entanglement of matter and meaning[END_REF][START_REF] Butler | Gender Trouble. Feminism and the Subversion of Identity[END_REF]. Second, categories open to representation are heterogeneous and multifaceted, thus making it problematic to talk about for instance 'women'. Third, categories are produced within existing dominant power relations [START_REF] Butler | Gender Trouble. Feminism and the Subversion of Identity[END_REF]. Fourth, some are defined as inside the available categories whilst others fall outside, and some might not fit into any category and hence become invisible [START_REF] Butler | Bodies that Matter. On the Discursive Limits of "Sex[END_REF]. We will get back to these problems and discuss them more in depth in the following sections of the paper. Our main focus in the paper is to problematize representationalism in demands driven development, not to explore alternatives. The question of who participates in participatory public projects such as demands driven development, and on what grounds, determines much of the legitimacy for these projects in the wider democratic system [START_REF] Karlsson | Democratic Legitimacy and Recruitment Strategies in eParticipation Projects[END_REF]. In the Swedish guidelines for demands driven development, it is stated that "A difficult question is how to find users who are representative for a target group and whose demands and wishes covers the demands of the whole target group. Additionally asking everybody is too costly. The point of departure should be that it is always better to have asked some than not to have asked at all. One does not get a comprehensive image of the demands, but at least some general demands can be found" [5:20]. 
Consequently the demands that are enunciated in demands driven development projects will depend very much on these users, whether they are representative of other users or not. The issue of representation is obviously central here, but in what sense is it possible for these participating citizens to be representative for someone else? The paper is structured as follows: first the theoretical points of departure are presented, followed by a presentation of the research project. After that the research approach is described and after that the practitioners' talk about representation in demands driven development is analyzed. The paper is concluded with a discussion of the analysis. Theoretical Points of Departure The feminist science and technology studies (STS) scholar Karen Barad [START_REF] Barad | Meeting the Universe Halfway. Quantum physics and the entanglement of matter and meaning[END_REF] argues that the idea of autonomous preexisting objects and subjects with inherent attributes lies behind the problems with representational practices, political as well as linguistic and epistemological. In her view representationalism is the idea that there exist two kinds of entities; first of all reality or real entities that can be represented, and second (mental or cognitive) representations of those entities. In a representationalist world view these are understood as separated and thus independent of each other. Sometimes a third party is included; someone who does the representing (an individual). Barad underscores that the problem is that entities -subjects, objects, various categories and so forth -are understood as autonomous from and unaffected by practices of representing. In such an individualist metaphysics gender, ethnicity, age, sexuality and class are understood as rather static properties located in individuals, which thus are possible to represent [START_REF] Barad | Meeting the Universe Halfway. Quantum physics and the entanglement of matter and meaning[END_REF]. Judith Butler, another feminist science scholar [START_REF] Butler | Gender Trouble. Feminism and the Subversion of Identity[END_REF], makes a similar analysis and argues that the (feminist) subject -'women' -is not homogeneous, and furthermore does not simply exist as an independent being, waiting to be represented, but is rather produced by the discursive practices and structures that are supposedly doing the representing. With such an analysis, an independent subject does not exist, but subjects are instead produced by the practices of which they are part, including the practices which purport to do the representing (more about this below). This argument is based on a constructivist point of departure in which practices such as representing not only describes an independently existing world, but also (re)produces the world. For instance in demands driven development projects in public sector groups of citizens are chosen as important. In this process several practices contribute to constructing these citizens as specific citizens, such as how these groups are defined and described -women, adolescents, immigrants, parents, elderly, students who are parents -how they are distinguished from other groups of citizens (for instance what defines an immigrant as opposed to a Swede?), on what grounds the demands of a specific group are important in a specific development project, and the conditions under which these citizens are allowed to formulate demands. Representationalism causes several major and interrelated problems. 
The first of these is that since reality or the entities therein are not identical to the representations, the question of the truthfulness of these representations becomes central. Representations are supposed to work as a mediator between separately existing entities -that is, between entities in reality, and the person who does the representing. This generates questions of the correspondence between reality and the representations. For instance, can a person who participates in demands driven development accurately represent a whole group of individuals, for instance mothers [START_REF] Barad | Meeting the Universe Halfway. Quantum physics and the entanglement of matter and meaning[END_REF]? Second, categories of subjects and objects are not homogeneous, but categories such as women, men, Swedes, immigrants, adolescents and heterosexuals consist of a number of heterogeneous subjects. The problem is that when someone talks about women it is hard to know what this means, that is, what kind of woman this refers to, since women come in many forms -old, young, heterosexual, lesbian, middle class, working class, single, in relations, white, black and so forth. Talk about women furthermore indicates that women would be alike, when this is not the case [START_REF] Butler | Gender Trouble. Feminism and the Subversion of Identity[END_REF]. Third, subjects and objects are produced by the practices, discourses and structures of which they are part [START_REF] Barad | Meeting the Universe Halfway. Quantum physics and the entanglement of matter and meaning[END_REF][START_REF] Butler | Gender Trouble. Feminism and the Subversion of Identity[END_REF][START_REF] Foucault | The Will to Knowledge[END_REF]. Consequently there are no detached or independent entities which can be represented, and there are no independent entities which can do the representing, but instead these subjects and objects are produced by a variety of representational practices. Through these practices power is exercised, i.e. exclusionary practices of power are exercised in order to define what is being represented. For instance when a child is born in Sweden, and if the parents are heterosexual and married it is assumed that the man who is married to the child's mother also is the father, while if the man is not married to the mother but they 'only' live together, a fatherhood inquiry is conducted by the municipality in which the parents live. In other words legislative and administrative practices contribute to constructing fatherhood as self-evident only as long as the parents are married and heterosexual. Fourth, this means that the categories that are available for representation are formed within dominant practices and power relations, through the performance of boundaries that include and exclude. This recognition places questions of who are included in the category of women, who fall outside, and who does not fit into any category at the center of attention [START_REF] Butler | Gender Trouble. Feminism and the Subversion of Identity[END_REF][START_REF] Butler | Bodies that Matter. On the Discursive Limits of "Sex[END_REF]. The above example about fatherhood also exemplifies how legislative and administrative practices reproduce specific family norms, and thus contribute to marginalize other family configurations -such as same sex and single parent families. 
Posthumanist Alternatives to Individualism The alternative to representationalism is to focus not on preexisting subjects and objects, but on how these entities are produced in practices, something which many feminist scholars have focused on. Donna Haraway [10:328] writes: "Gender is a verb, not a noun. Gender is always about the production of subjects in relation to other subjects, and in relation to artifacts. Gender is about material-semiotic production of these assemblages, these human-artifact assemblages that are people. People are always already in assemblage with the world" Many of these researchers have extended the argument also to other identity formations such as sexuality [START_REF] Butler | Gender Trouble. Feminism and the Subversion of Identity[END_REF][START_REF] Butler | Bodies that Matter. On the Discursive Limits of "Sex[END_REF], class [START_REF] Trautner | Doing Gender, Doing Class: The Performance of Sexuality in Exotic Dance Clubs[END_REF], ethnicity [START_REF] Essers | Muslim businesswomen doing boundary work: The negotiations of Islam, gender and ethnicity within entrepreneurial contexts[END_REF], religion [ibid.], and age [START_REF] Nikander | Doing change and continuity: age identity and the micro-macro divide[END_REF]. This kind of research emphasizes how these practices are fundamentally entangled, and how they both enable and constrain specific subject formations. With such a point of departure gender -and other identity formations -should not be understood as individual traits which causes specific gendered doings, but instead gendered subject emerges or materializes as a result of specific performances and enactments. The construction of subjects is not done once and for all, but is rather an ongoing process of materialization which produce subjects, and of which subjects are parts. Butler [START_REF] Butler | Bodies that Matter. On the Discursive Limits of "Sex[END_REF] argues that the construction of sex as materiality is not done once and for all, but is in itself a process located in time, a kind of temporal unfolding or becoming, which works through the reiteration of norms. For the purposes of this paper we extend the argument to include not only the materialization of bodily formations, but also other identity formations, and understand subjects as a process of constant unfolding, that is, as something that is never set or fixed. With this view subjects are continuously evolving sociomaterial configurations of a variety of identity producing sociomaterial practices, which constantly vary and change. Identity producing sociomaterial practices are for instance linguistic and semiotic practices of talking, writing, and imaging, but also legislative practices of defining specific groups such as laws against discrimination. This displacement from pre-given subjects and objects, to entities as the result and part of intra-acting sociomaterial practices has several implications related to the problems with representationalism mentioned above. Obviously the existence of subjects and objects cannot be taken for granted, but instead they are produced, reproduced and reconfigured in each new situation, as the sociomaterial practices of which they are part change. For representational practices this means that categories cannot be taken for granted, but must be construed anew, as locally situated, contingent and temporary categories, in each new situation. 2.2 Alternative Practices? 
The previous discussion about how identity formations such as gender, age, ethnicity, and class should be understood as results and parts of ongoing productive materialdiscursive practices has several implications for demands driven development. First of all, it becomes virtually impossible to represent anyone else, that is, a category of subjects such as women, on the basis that subjects are intersections (or configurations) of a variety of identity producing practices, which are constantly reconfigured. Second, in other words, it is practically impossible to configure stable and coherent categories, since subjects are fluid and volatile, and consequently categories based on some sort of subject formation(s) would be unstable and differentiated. If someone would consciously try to represent a category of subjects -such as colleagues on a department -the result would be a representation of what that subject believes about the category, and thus based on prejudices. The consequence is that a subject can only participate in demands driven development as her (temporary) self, who could be similar with others in a similar position, but this cannot be taken for granted. Furthermore, based on how categories are formed within dominant material-discursive practices, and how there always is a risk that some individuals fall outside of categories, actors working with demands driven development would have to be very careful with how categories are constructed. From the above follows that it becomes impossible to use preexisting and generic categories of subjects, such as the common demographic categories mentioned (gender, age, ethnicity, and class). The Research Project This paper is written within the frame of a now finished project about demands driven development of public sector, conducted in Sweden. The overall project objective was to deepen the understanding of demands driven development processes in public sector from the perspectives of organizational culture and design methods. The project lasted for a year, and was run by four researchers situated at three different and geographically dispersed universities in Sweden. Furthermore a group of about thirty practitioners working in Swedish government agencies were involved in the project as key actors. These practitioners were women and men with different job descriptions, experiences and educational backgrounds, but they all worked with and were responsible for the implementation of demands driven development of public sector in their respective organizations. This project can be understood as the continuation of an earlier project with a similar research focus [START_REF] Lindblad-Gidlund | Behovsdriven utveckling av offentliga e-tjänster -Samverkan kring utmaningar och möjligheter [Demands driven development of public e-services -Cooperation around challenges and possibilities[END_REF]; two of the researchers and all of the practitioners took part in this earlier project. Within the project empirical material was gathered in several ways, such as through surveys, interviews and workshops with the practitioners, but policy documents were also used. Workshops with the practitioners constituted a central activity in the project; totally five workshops were arranged in Stockholm. These workshops had different themes, all based on the suggestions of the practitioners: representativity, accessibility, crowd sourcing, scenario based decision making, and critical design. 
Most parts of the workshops were filmed by a media specialist who also took part in the project. The workshops had the following structure; they took place between 9.30 and 15.00, and included a lecture of one or more invited speaker(s) (a researcher and/or a practitioner) who talked about the current topic. Furthermore the topic was treated through assignments in smaller groups, and discussions in the whole group. Methodological Approach The project was developed in rather close relation to the practitioners, who had chosen topics for four of the five workshop themes, and the objective was to develop new knowledge in close cooperation between researchers and practitioners. The first workshop considered representativity in demands driven design. Seven practitioners from various government agencies participated in the workshop. During the workshop an invited researcher in political science was giving a lecture about representational issues and recruitment strategies. This topic was discussed in the whole group, and the participants were also divided into two focus groups that were asked to discuss representation in relation to the recruitment strategies used in their own organizations. The participants in these focus groups were asked to do a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis of the recruitment strategies they practiced. This means that even though the topic of representation was initiated by the practitioners, the researchers developed and expanded this topic in a specific way. Consequently, the talk that is analyzed below is far from spontaneous talk, but instead it can be understood as -at least in part -the result of the survey and the lecture about representational issues. The empirical material that is the basis for this paper consists of notes and recorded and transcribed group discussions from the workshop, while results from the survey are used only briefly. The practitioners and the government agencies in which they work are anonymized in the paper. This material was analyzed with the help of the theoretical framework described above, in which the problems with representationalism is said to consist of four different but interrelated parts; first of all representational practices raise questions of truthfulness, in terms for instance if a person really can represent a group of other persons. Second, categories are heterogeneous rather than homogeneous. Third, categories are produced within existing dominant power relations, including representational practices. Fourth, since categories are the result of boundary making, some are defined as inside existing categories whilst others fall outside, and some might not fit into any category and are hence become invisible. If these are translated into the context of demands driven development the issue of truthfulness concerns whether a specific person or group do or can represent someone else, like a (larger) group. The issue of heterogeneous categories concerns how groups of individuals are heterogeneous rather than homogeneous. The issue of how categories are produced within existing power relation, concerns how categories of persons are made, and by whom. Finally the issue of whether some falls outside of existing categories concerns whether some individuals are forgotten or not included. 
These four issues of representation are used as an analytical framework, in order to explore how the practitioners talked about these issues during the workshop, something which indicates how they are understood and handled. However, that they talk about representation in a specific way does not necessarily mean that this is also how they act in their daily work practices. In other words we searched for how instances of these four representational issues turned up in the notes and the transcriptions of the discussions. Furthermore we searched the survey results for indications of how representational problems were understood and handled. Representational Practices in Demands Driven Development The survey result indicates that the practices for working with demands driven development, that is, for involving citizens in the development of various e-services, are very different. Some work with personas (hypothetical users [START_REF] Pruitt | Personas: Practice and Theory[END_REF]), some work with so called behavioral groups, some have established networks of clients with which they cooperate, some work with focus groups, and some are happy just to find any participants for their development projects. Based on these different practices, the representationalist problematic becomes very different. Below a number of citations from the transcribed group discussions, and from notes taken during the workshop about representations, are presented. These citations are selected because they in some sense touched upon the topics described in the analytical framework: the issue of truthfulness of representations, the issue of heterogeneous groups, the issue of how categories available for representations are produced within various power relations, and the issue of how some are excluded from available categories, and thus become invisible. 5.1 The Issue of Truthfulness A. … that's how we use to do when we have specific services that are aimed at [a specific group of users] … then we chose among those who are professional users of a service, then we chose those who are large, who have very many cases … those who are in between and those who do not use it at all in order to get the whole perspective … (from transcriptions) In this first example one of the practitioners discussed how they in her organization usually choose participants when they work with the development of a specific service. Their clients are businesses of different sizes, and so the users are working in and with those businesses. Thus they usually select among these the businesses that are large and use the service very much, the businesses that use the service sometimes, and those who do not use the service at all. This indicates that they try to select users for their development projects in order to reach a specific representativeness -or in other words in accordance with the use patterns of the whole group. The strategy for achieving accordance with this heterogeneous group is to try to find users who represent the extremes and the medium of businesses; those who are frequent users, those who are medium frequent users, and those who are non-users of the service. B. … if 
The practitioner believed they would only want to participate if they could represent themselves and pursue the questions that are interesting for them, but not if they would have to work for their professional colleagues in other organizations. This is an aspect of representation that is related to whether participants want to participate in their own interests or in the whole group's -but the assumption is still that someone can represent a whole group. Nevertheless, with this statement ("now I want you to put on a general representation hat") the practitioner pointed to an indistinctness in representational practices: how is one to know whether someone represents him/herself, and not someone else, like a professional society? Can someone represent both at once? Do they? The rather frequent occurrence of these and similar enunciations indicates that the practitioners are well aware of the problematic of letting someone speak for or represent an entire group. On the other hand, both the group discussions and the results from the survey indicate that several of the practitioners are rather happy to have any participants in their supposedly demands driven development processes ("We are happy to have any [participants]" (from survey)), and in those cases, issues of whether these are representative of an entire group seemed to become subordinate. 5.2 The Issue of Heterogeneous Categories C. Anne: … well, immigrant women then we might have that group … perhaps you don't think twice … kind of like we do now, we talk a little about … we look at an assignment which concerns people who want to come here and … from a third country .. it's like a lump, it's like four billion people that can be very heterogeneous … but we kind of lump them together in that way, I do not know whether it is the right way to lump them… (from transcriptions) In this quotation one of the practitioners was talking about the problematic practice of lumping large numbers of heterogeneous individuals into one group. The practitioner mentioned immigrant women, and then went on to talk about four billion people from a third country (the term 'third country' refers to countries outside of the European Union (EU), the European Free Trade Association (EFTA), the European Economic Area, and the candidates to the EU [START_REF]Universitets-och högskolerådet[END_REF]). The practitioner also seemed hesitant about whether this was right, or a good way of lumping these people together ("I do not know whether it is the right way to lump them…"). With this statement this practitioner raised the issue of how groups are heterogeneous, but also how they are produced in practices, and she questioned whether these practices are right. Indirectly the statement thus concerns the power of practices to produce groups, and how this power is used. 5.3 The Issue of How Power Relations Produce Categories D. Anne: And the two threats we were talking about was that the one who represent might not understand that s/he represents and the other that we might not have been enough clear with that … have we thought right about this strategic group, or have we chosen someone because "Oh, now we got a woman too, and a man…" Karin: But wasn't it too that … I was thinking also that these … what was it you said here, that you don't understand that you represent something larger, but it might be two different things… Eva: Yes but sometimes you don't.
Then you anyway represent yourself, but I include you, your perspective because you are an immigrant woman, then I include your perspective but you are not expected to talk for everyone… (from transcriptions) This discussion among the practitioners touched upon several issues relating to how power relations are involved when groups are constructed. Anne talked about how someone might not understand that s/he is representing a group, and that this might have to do with that 'we' have not been enough clear with that. 'We' in this case refers to Anne and her colleagues at the agency in which she works, or more generally practitioners working with demands driven development of public sector. Anne also talked about the risk that 'we' have not thought right about a particular group. Our interpretation is that to 'think right' about a group means to construct the group in a way that makes it representative for a larger group. To be able to think right or wrong when constructing groups clearly concerns the power to construct groups, whether the result is considered right or not. Karin responded by talking about the risk that participants do not understand that they represent someone else, like a larger group, while Eva talked about how participants are sometimes included because they belong to a specific group, like immigrant women, even though they are expected to talk only for themselves. Our interpretation is that Eva with this statement raises the issue of how someone is chosen because s/he belongs to a marginalized group -in this case immigrant women -but she nevertheless does not want this immigrant woman to represent anyone else, but only to represent herself. As in previous quotations this refers to the ambiguity for a participant to know whether s/he is supposed to talk for her/himself, or for a larger group. This recurring issue concerns the importance to inform the participants of the expectations of their participation, something which is central to the practitioners because this is part of their responsibilities. This statement also concerns power in terms of how someone is chosen to be included because they supposedly belong to a specific -marginalized -group. E. Eva: But I think also that perhaps one does not want to … that one should believe that one represents, that I have chosen you because you represent a Chinese person, but I don't want you to believe that you … I want you to speak for yourself, I want you to tell your story … I don't even want you to try to speak for all Chinese … Karin: Because if you do you have yet another source of error when you come to the analysis phase cause then it's not only this person's opinions in the first hand but it's others or his whole imaginary image with prejudice filters and knowledge filters and that you have absolutely no control of … (from transcriptions) This short transcription concerns the power to construct categories ("I have chosen you because …."), but also how this is a contingent power. Eva said that she had chosen a Chinese person, but she did not want that Chinese person to represent anyone else than her-/himself. This statement indicates that she can never be sure of whether the Chinese person understands her-/himself as representing anyone else, or if s/he acts as a representative for anyone else. This can be understood as a way for participants who are categorized to resist and transgress the categories imposed on them. 
Karin responded by talking about how persons presumably representing others can only speak for others based on what they believe about others, that is, based on prejudices and limited knowledge. This is yet another aspect of the (im)possibility of correctly representing others that these practitioners have to deal with. Consequently, the practitioners cannot know whether the participants in their development projects in their own views represent themselves or others, and if participants in their own views represent others, they do so based on prejudices and limited knowledge. The issue of the production of categories was mentioned on several occasions, indicating an awareness of the problematic. No one specifically mentioned the word power, but since practices for selecting participants were discussed, as well as the consequences of these practices for representational problems, power was still an issue. The Issue of Invisible or Excluded Subjects The issue of whether someone is left out or forgotten -that is, those that are excluded by the existing categories -is rather invisible in the material; we could only find one instance of this. This example comes from the common discussions (based on the notes), in which one of the groups brought up focus groups as a possibility to find and include minorities. It was not specified whom the term minorities might refer to, and this indicates that they were aware of the existence of minorities that might not be included. The example is not entirely self-evident, but it is what comes closest to the existence of individuals who live outside of existing categories. Discussion The objective of this paper was to explore representation through the analysis of practitioners' talk about demands driven development. The practitioners were discussing how they work with representation in their development projects, and how they sometimes only had very few participants -as few as one -and were happy about this. Despite this they also discussed in a rather sophisticated way the problems with representational practices. In other words the practitioners seemed to agree with Butler [START_REF] Butler | Gender Trouble. Feminism and the Subversion of Identity[END_REF] and Barad [START_REF] Barad | Meeting the Universe Halfway. Quantum physics and the entanglement of matter and meaning[END_REF] about the problems with representational practices; but nevertheless they seemed to keep on working with representational practices in their development projects. Our empirical material gives no indications as to the reasons for this, but clues to the reasons for this situation may be found elsewhere. One reason is probably the dominant position of representationalism in our society [START_REF] Barad | Meeting the Universe Halfway. Quantum physics and the entanglement of matter and meaning[END_REF]. Representationalist practices are everywhere; in everyday linguistic practices words are taken to correspond to objects, subjects or some aspect of reality. In research and knowledge making, individuals are regularly classified into women and men, upper, middle and working class, Swedes and immigrants, living in rural or urban areas, various groups based on age, or employed and unemployed, and studied and described based on these classifications. And in representational politics politicians are expected to speak for (or represent) various interests or perspectives. These representational practices are an important, routinized and taken-for-granted part of our everyday lives.
Representationalism and its individualist foundation is not only an understanding of the relation between reality, knowledge and humans; it is also regularly produced and reproduced in these and other practices, of which the practitioners' work with demands driven development is part. There also seems to be a lack of alternatives to representational practices -or are there alternatives? Barad [START_REF] Barad | Meeting the Universe Halfway. Quantum physics and the entanglement of matter and meaning[END_REF] discusses how representationalism did not always hold this dominant position, and how there are alternatives. It might be that the alternatives are made invisible, because the current state of affairs serves central interests, and because influential actors gain some sort of advantage. Representational practices are common in political systems such as representational democracies, but also in numerous statistical practices [START_REF] King | Sidney: Designing Social Inquiry[END_REF][START_REF] Edling | Kvantitativa metoder. Grundläggande analysmetoder för samhälls-och beteendevetare [Quantitative methods. Basic analytic methods for social and behavioral scientists[END_REF] -in research, marketing, surveys, and the production of government statistics -which categorize individuals and then describe them based on these categorizations. Additionally, representationalism is an integral part of positivist research practices in which subjects and objects of study are presumed to exist as stable and autonomous phenomena, independent of researchers and research practices [START_REF] Orlikowski | Studying Information Technology in Organizations: Research Approaches and Assumptions[END_REF][START_REF] Harding | Sciences from Below. Feminisms, Postcolonialities, and Modernities[END_REF][START_REF] Barad | Meeting the Universe Halfway. Quantum physics and the entanglement of matter and meaning[END_REF]. Positivist research practices are closely linked to natural and technological sciences such as mathematics, physics, chemistry, biology, engineering science and medical science; sciences which are and have been important for creating the wealth and socioeconomic welfare of current industrialized countries [START_REF] Harding | Sciences from Below. Feminisms, Postcolonialities, and Modernities[END_REF]. Taken together, these are deeply institutionalized sociomaterial practices which take place in several different areas of society, and which are not that easy to change, because they are an integral part of how our society works, and quite a lot of individuals make their livelihood from the organizations, companies and institutions that are involved in these practices.
37,888
[ "1004205", "1004206", "1004207" ]
[ "300975", "300975", "548056", "136146" ]
01490908
en
[ "shs", "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490908/file/978-3-642-40358-3_18_Chapter.pdf
Carl-Mikael Lönn Elin Uppström email: [email protected] Process Management Challenges in Swedish Public Sector: a Bottom Up Initiative Keywords: Process management, public administration processes, egovernment Public administration is under pressure to work more effectively and increase effectiveness with regards to internal administrative processes as well as level of service towards citizens. This paper identifies process management challenges encountered in Swedish municipalities and provides concrete examples of consequences of these challenges by using a bottom up approach. It is done by using a common public service process and a mobile solution as platform for discussions with municipal officials working with the process. To categorise the challenges they are grouped into six core categories of business process management, which provide a picture of challenges that municipalities face today. Results show that Swedish municipalities face challenges in all categories and that it was not possible to design a generic process for the analysed service. To initiate work with improving process maturity in local governments the bottom up approach used was found successful. Introduction The public sector is struggling to become more effective. A driving vision to accomplish this goal is articulated as e-government and refers to improving public sector by rationalizing administration and improving the information and quality of services to citizens [START_REF] Moon | The Evolution of E-Government among Municipalities: Rhetoric or Reality[END_REF]. E-government utilizes information and communication technology to transform public administrations structures and processes [START_REF] Beynon-Davies | Constructing electronic government: the case of the UK inland revenue[END_REF] to increase efficiency and effectiveness. Sweden is considered as being in the forefront of e-government and is one of the highest ranked countries in [START_REF]UN E-Government Survey 2012: E-Government for the People[END_REF] e-government development index. There are many policies and guidelines communicated from higher-level authorities that underline the importance of improved process management and increased collaboration in public sector [START_REF] Safad | The 24/7 Agency Criteria for 24/7 Agencies in the Networked Public Administration[END_REF][START_REF] Regeringskansliet | Handlingsplan för eFörvaltning[END_REF][START_REF] Regeringskansliet | Statens offentliga utredningar. Strategi för myndigheternas arbete med e-förvaltning[END_REF][START_REF] Regeringskansliet | It i människans tjänst -en digital agenda för Sverige[END_REF][START_REF] Eu | Ministerial Declaration on eGovernment[END_REF]. With a total of 290 municipalities in Sweden the municipal government stands for 70 % of the total public administration and is thereby considered a very important part in the realization of the e-government vision [START_REF] Regeringskansliet | Handlingsplan för eFörvaltning[END_REF]. Swedish municipalities are autonomous and therefore choose to govern their own e-government initiatives; however they still have to comply with the same laws, policies and guidelines. This paper builds from experiences from a research project aiming to develop a fully integrated mobile complaint and problem reporting system (CPR system) [START_REF] Nilsson | Modeling IT-mediated value encounters between citizens and local government[END_REF]. 
In the project, processes for management of complaint and problems in Swedish municipalities were analysed with the aim to design a generic process model to be used in the new CPR system. The objective of this paper is to highlight the process management related challenges encountered in the work with the municipalities. This objective is guided by the following research question: Which challenges do municipalities face in their process management? A bottom up approach is used were the complaint and problem management process and the CPR system is used as a platform for discussion and to identify process management related challenges. There are many well-known and investigated benefits from working according to process-logic [START_REF] Davenport | The New Industrial Engineering: Information Technology and Business Process Redesign[END_REF][START_REF] Kohlbacher | The Effects of Process Orientation on Customer Satisfaction[END_REF]. Research focusing on process challenges for the Swedish public sector has limited empirical foundation. This paper presents process challenges in the Swedish public sector from the view of the people working with the processes and by the researchers' analysis of the processes. It unpacks the challenges by giving concrete examples of how these challenges leads to inefficiencies in the daily work of municipal workers. The challenges are analysed from a business process management (BPM) perspective by categorisation of the challenges according to [START_REF] Rosemann | The Six Core Elements of Business Process Management[END_REF] "six core elements of BPM". The paper contributes with a clear view of current challenges encountered within municipalities and what needs to be addressed in their process management work. The paper is organized as follows: The next section gives a description of related research. Section 3 describes the research method and data collection techniques used. Section 4 presents the results and section 5 concludes the paper and directions for future work are presented. Previous Research Process Management Business processes describes how business functions and they are central in Business Process Management [START_REF] Ter Hofstede | Modern Business Process Automation: YAWL and its Support Environment[END_REF]. Business process management (BPM) encompasses knowledge and practices from several areas [START_REF] Rosemann | The Six Core Elements of Business Process Management[END_REF] thereby creating a holistic management approach to business processes. [14, p. 4] defines Business process management as "supporting business processes using methods, techniques, and software to design, enact, control, and analyse operational processes involving humans, organizations, applications, documents and other sources of information". The objective with business process management is to improve the performance of organizations by making them more effective and reduce costs. This is enabled by constantly analysing and improving business processes. Improving the quality of public sector business processes can for example be done with the help of verifications tools [START_REF] Corradini | Designing quality business processes for e-government digital services[END_REF][START_REF] Corradini | Improving e-Government Business Processes Applying Formal Verification[END_REF]. 
BPM promotes agility and flexibility in which organizations can quicker adapt to changing market conditions [START_REF] Gong | From policy implementation to business process management: Principles for creating fexibility and agility[END_REF]. Since BPM is a holistic management approach it includes all phases in a business process lifecycle. The BPM lifecycle proposed by [START_REF] Van Der Aalst | Business process management: a survey[END_REF] contains the phases design, configuration, enactment and diagnosis. [START_REF] Rosemann | The Six Core Elements of Business Process Management[END_REF] Identifies "six core elements of BPM" that organizations need to regard in order to succeed with BPM: strategic alignment, governance, methods, information technology, people and culture. The goal with strategic alignment is to promote business performance by harmonizing organisational priorities and business processes. BPM governance handles roles and responsibilities at different levels of BPM. Also important in BPM governance is the decision making and reward processes that have impact on process associated actions. Methods related to BPM are tools and techniques that facilitate actions throughout all different stages of a process lifecycle, process projects and programs within an organization. Information technology relate to hardware, software and information systems that are aimed at supporting BPM. An example is process aware systems. People are the BPM knowledge within the human capital of an organization that constantly improves and applies their process abilities and knowledge to improve business performances. Culture has a strong influence on BPM achievements and it concerns attitudes and behaviors formed by mutual values and beliefs regarding process orientations. [START_REF] Rosemann | The Six Core Elements of Business Process Management[END_REF] Business Process Management Challenges To the researchers knowledge there is limited empirical research that highlights process management challenges that Swedish municipalities face. Research on process management challenges has been made in other countries and for other domains. [ [START_REF] Bandara | Major Issues in Business Process Management: An Expert Perspective[END_REF] and [START_REF] Sadiq | Major Issues in Business Process Management: A Vendor Perspective[END_REF] are two related studies to the [START_REF] Indulska | Major Issues in Business Process Management: An Australian Perspective[END_REF] study that report upon BPM challenges and group them after Strategic, Tactical and Operational challenges. [START_REF] Bandara | Major Issues in Business Process Management: An Expert Perspective[END_REF] Reports BPM challenges identified through interviews with domain expert from the industry and [START_REF] Sadiq | Major Issues in Business Process Management: A Vendor Perspective[END_REF] reports BPM challenges identified through interviews with BPM vendors. [START_REF] Palmberg | Experiences of implementing process management: a multiple-case study[END_REF] Reports on experience from implementing process management in three private organisations in Sweden and calls for further research in this area. There are studies focusing on process management related challenges in the public sector. [START_REF] Weerakkody | E-Government: The Need for Effective Process Management in the Public Sector[END_REF] Explore process management and integration issues in a local council in the United Kingdom. 
Through a case study approach they study a public service process (the student loan application process) by conducting semi-structured interviews with four local authority staff, one representative from a partner organization and one citizen. In their research they identify technical and organizational challenges and the most important challenge for the local council was shortage of coordination and integration between the different stakeholders. There are also studies that investigate challenges in limited parts of BPM, for example business process modelling challenges and issues studied by [START_REF] Indulska | Business process modeling: current issues and future challenges[END_REF] and challenges related to BPM methods studied by [START_REF] Van Der Aalst | Challenges in business process management: Veri fication of bus iness processes using Petri nets[END_REF]. More recent studies investigate limited parts of BPM in different countries. [START_REF] Valenca | Understanding the adoption of BPM governance in Brazilian public sector[END_REF] Investigates success factors and obstacles affecting BPM governance in local governments in Brazil. Main obstacles identified are lack of competence, BPM awareness and a non-process oriented culture. [START_REF] Niehaves | From bureaucratic to quasi-market environments: on the coevolution of public sector business process management[END_REF] States that local government in Germany lack BPM capabilities and therefore have problems with adjusting to a constantly changing environment and that they are under a financial pressure. [START_REF] Lönn | Configurable process models for the Swedish public sector[END_REF] Presents an initiative on configurable process models and reports on difficulties for a municipality in Sweden to fully utilize the configurable approach due to a number of reasons connected to low process maturity. 3 Research Method To answer the research question a case study research has been conducted. Case study research is a studying "a contemporary phenomenon within a real life context when the boundary between the phenomenon and its context is not clearly evident" [29, p. 13]. Case studies can be used in descriptive and exploratory research [START_REF] Myers | Qualitative Research in Business & Management[END_REF]. In this study the contemporary phenomenon is process management and the context is complaints and problem management within Swedish municipalities. The researchers have studied the phenomenon through the context of complaint and management handling at five different municipalities, thus following the case study methodology of viewing a phenomenon as tightly bound with the context were it is studied. Challenges related to process management are captured through the participants' descriptions of their view of the complaints and problem management process and the researcher's findings from being active participants in the analysis of the processes and when designing a generic process model. The complaint and problem management process is a suitable process to study since almost all municipalities in Sweden offer the service; it is also one of the most common e-service provided by Swedish municipalities [START_REF] Juell-Skielse | Enhancing Complaint and Problem Management: Designing and Evaluation of an M-Service Using Pictures and Positioning[END_REF]. 
Different units within a municipality handle complaints and problems reported by citizens, and the handling of complaints and problems often spans several administrative units. Data Collection and Analysis To collect data, ten workshops with municipalities were performed; all municipalities volunteered to participate. All of the workshops were performed at municipality facilities with municipal officials participating. The number of municipal officials that participated during the workshops varied between 2 and 10 persons, and the number of researchers participating varied between two and three. The municipal officials that participated had different roles relevant for the process that was analysed, e.g. IT managers, assistants, administrators, register clerks. A high level official at each municipality made the selection of participants. Prior to the workshop the researchers had communicated the scope and aim of the workshops and asked the responsible high level official to include suitable persons. The agenda for each municipality was structured in the same way. First all participants presented themselves. The researchers presented their research area and their intentions with the meeting. Secondly the research project was presented and the advantages and disadvantages of the solution were discussed. Then the current municipal process for complaint and problem management was analysed by letting the officials describe their work tasks and their order, from receiving a complaint or problem until the case is finished. The identified tasks were written on post-its or on a white board, and notes were taken by the researchers. Thereafter the agenda ended with analysing the to-be process for complaints and problems with the app included as a new input channel. More workshops were conducted in some of the municipalities because it was more time consuming to go through the agenda there. Table 1 shows a compilation of the involved municipalities and the number of workshops carried out at each municipality. During the workshops, discussions regarding issues related to business processes were brought to life. The participants made statements about perceived challenges related to business process management. Often the statements fuelled a discussion which led to new statements. The researchers were passive during these discussions to minimize the risk of leading the participants. The challenges that the researchers encountered when analysing and modelling the processes were also noted. Apart from the workshops, documents (requirement specification for the CPR system, municipal routine descriptions) and artifacts (case management systems and the front end app) were also used as data. In order to analyse the collected data, all notes from each workshop were first discussed and compared by the researchers. The researchers then tried to model the processes in Visio following the notation of YAWL [START_REF] Ter Hofstede | Modern Business Process Automation: YAWL and its Support Environment[END_REF]. All data material related to process management challenges was then compiled and analysed, and challenges were derived through the researchers' interpretations. To create a better understanding and overall view, the challenges were then categorized using the labels of the six core elements of business process management [START_REF] Rosemann | The Six Core Elements of Business Process Management[END_REF].
The reason for using this categorisation is that it encompasses all elements needed to provide a holistic understanding of BPM [START_REF] Rosemann | The Six Core Elements of Business Process Management[END_REF]. Results In this section the results from the empirical study are presented. All challenges found and to which BPM core element the challenges relate to are compiled in table 2. Then, each challenge is described and examples of how these challenges leads to inefficiencies in the daily work of municipal workers are given. If nothing else is explicitly written, a challenge is described if it was encountered in three or more municipalities. Strategic Alignment The organizational structure of the municipalities in isolated silos is not aligned with a process oriented approach. M4 explained that the social service unit functions as a world of its own. This causes insufficient insight and lack of relevant knowledge of other units regarding responsibilities and how they work. "No one sees the entirety, everyone only see their part" (M4). For example in M1 it happens that a submitted complaint is received by a unit not responsible for solving a complaint of that nature and the receiver doesn't know to what unit it should be forwarded. This has a negative impact on the effectiveness of handling the issue. Functional silos also resulted in difficulties to perform process design. In four out of the five different municipalities involved in this research it was not possible to design a single complaint and problem management process. That was due to considerable differences in the work routines between the different administrative units. Initiatives taken are not aligned with process management. E-service for complaints and problem reporting is designed in a way that could easily lead to misinterpretations by citizens. For example in M1 e-service for submitting complaints and problems the citizen needs to state the administrative unit that the issue will be sent to. The purpose is to simplify the administration of cases within the municipality. If the citizen chooses the wrong administrative unit when reporting, a case can "theoretically spin around a lot in the system between different administrative units" (M1). In M5, they have implemented a customer service that is intended to work as a single point of contact where citizens can submit all types of cases. The customer service is not integrated with the municipality's complaint and problem management process, "it is a completely separate process" (M5). Management and politicians agendas are not always compatible and in Sweden elections are held every fourth year; theoretically a new political agenda might be the result of each local government election. Change of political agenda is something that can affect all municipalities in Sweden since the same political system applies. To what extent it poses a challenge is due to the differences in agendas and municipalities' ability to change. Only M3 and M5 explicitly mention this challenge by stating that management and political agendas don't always match. Legal aspects and legislation changes complicate the routines at the municipalities. For example in M1, if a complaint about a mistreatment is submitted through their eservice the complaint is registered automatically in the system integrated with the eservice. This particular system doesn't meet the legal requirements of archiving cases that belong to the social welfare office. 
Therefore a mistreatment complaint needs to be manually registered in another case management system that meets the legal requirements. Clearer laws regarding the municipalities' responsibilities to handle, for example, pupils being bullied at school mean that such issues have to be archived correctly and therefore entered into a specific system. In M4 a new law that relates to care results in the municipality needing new fields in their case management system. Governance A lack of clearly defined and communicated responsibilities is a challenge that causes inefficiencies in municipalities' process management. M1 exemplifies this: a register clerk forwards a submitted complaint or problem to the administrator that shall send a reply to the citizen. When the register clerk forwards the complaint or problem, a confirmation that the message has been received, with the administrator's contact information, is sent to the citizen. If it is unclear which administrator should answer the citizen and the issue is forwarded to the wrong administrator, then the citizen may contact the wrong administrator. An issue might even be closed but continue to move around in the organization, or an issue might be overlooked because of unclear responsibilities. 28.1 % of the issues in M2 are initially sent to the wrong administrator. No process owners have been present in any of the workshops with the municipalities, nor have any of the municipal workers mentioned that a specific person is responsible for the complaint and problem management process. However, the researchers have noticed that some people take a lot of responsibility for the process; unofficial process ambassadors exist. How information is managed is a challenge that causes uncertainties. M5 is experiencing uncertainties in what needs to be documented, and it is perceived as hard to get an overview when people store documents in their desk or in a file folder. M4 experiences that information is not spread throughout the entire municipality. Method During the workshops the researchers did not encounter any challenges related to methods, nor did any of the municipalities mention anything about methods related to BPM. This could in itself be seen as a challenge, i.e. a lack of methods for BPM. During the discussions about the complaint and problem management process none of the municipalities provided any reference process models. Information Technology All of the municipalities use multiple IT systems that are most often not integrated, which causes manual tasks and overhead work. In M1 some cases have to be finished in one system and then registered manually in another system, and sometimes cases are double registered. M1 exemplifies this by describing how a registration of an illegal building is sent in by a user and registered automatically in the wrong system. The case is then finished in that system and instead needs to be manually initiated in the correct system. Another example from M1 is that when a submitted case concerns a problem that needs to be fixed by an external provider, an order cannot be created directly in the system, but needs to be created in a separate system. M1 describes their system for complaints and problems as an intermediate system and wishes for integration with other backend systems. In order to extract statistics on issue handling within M2, problems need to be registered in two different systems. M3 doesn't use the full potential of their systems; instead they buy new systems when functionality is needed.
M3 states that "it is completely preposterous to buy new systems instead of developing what you already have". In M5 says that "we don't utilize our system to the fullest", instead we "buy different systems for different purposes and they have never been integrated". Due to legacy systems and proprietary systems municipalities have trouble integrating systems. M1 and M2 describes that many municipalities have a closed environment; it makes it harder to integrate the system. M3 describes that they have old systems and that the providers doesn't provides specifications such as XML schemas. Legacy systems still running within municipalities doesn't have functional support for automation of tasks and the interfaces are difficult to use for infrequent users thus creating problem for administrative workers that do not receive issues frequently. "Our system is difficult to use and therefore the usage is not spread throughout the organization, only very few people use it" (M2). In M3, one of their IT systems is not compatible with windows 7 so they have to log on to a remote desktop, which creates "another window among many windows". Also, what the researchers realized when they got access to the different municipality IT systems is that many of the systems are not process oriented and does not support the work routines associated with the complaint and problem management handling. People Ad hoc routines and uncertainties in work routines are two people related challenges. Many administrative workers prefer to give answers directly to citizens contacting them and skip the registration of the issue in the case management system. It is not communicated through any system what issues are solved, instead messages about solved issues are e-mailed to one person in the organization, no one else knows about the status of issues (M2). In M3 not all administrators are using the case management system and if they answer something directly to a person they do not register it in the system. M5 stated that the have shortcomings in their administrative routines, they perceive uncertainties in how things should be handled. Deficiencies in routines are also emphasized through the following quotes: "I am guessing that we have a number of complaints, but they are probably stuck somewhere" (M5). Citizens use communication channels that are not intended for submitting complaints and problems, e.g. in M1 citizens sends e-mails directly to the municipality workers e-mail instead of using channels that are connected to the case management system. Some citizens that frequently submit issues learn the administrators e-mail addresses and sends e-mails directly to them. "Some cases fall outside the system and it creates manual work" (M1). Municipalities don't put effort into educating and gain approval by the ones affected when buying new IT systems. In M4 it is not communicated how new technology is supposed to be used. "People feel burned" (M4). M3, "the management commitment, the people and education are what are required for an IT system not to be a system on the shelf." In M2 only a few persons work with their case management systems because the usage is not spread throughout the organization. Culture Responsiveness and resistance to changes are challenges in municipalities. M4 worries that they might not be able to handle the change the CPR system will bring, since they perceive that there are constant changes and they cannot handle changes all the time. 
There is evidence of people in the municipalities seeing process management as something with potential: "That everything is connected is important, that you see the complete picture and not only the work that you do" (M2). They see the need but have difficulties responding to it. "The negative thing about this integrated application is that we have to get resources to handle issues from this new channel. We cannot create an information flow that we are not prepared to handle and we should not implement the channel before we are ready. We are not there yet, we are not ready." (M2). Conclusion This paper has identified challenges within all of the six core elements of BPM: strategic alignment, governance, methods, information technology, people and culture [START_REF] Rosemann | The Six Core Elements of Business Process Management[END_REF]. The results show that municipalities are still organized by functions and therefore the organizational structures of municipalities are not aligned with a process oriented approach. Poorly defined and communicated responsibilities are another governance challenge that causes inefficiencies in municipalities' process management. No methods supporting process management exist. Municipalities are also facing challenges related to information technology such as the usage of multiple IT systems, IT system integration, and legacy and proprietary IT systems. Challenges related to the people working with the process, such as using various ad hoc routines and not being sufficiently educated, were also found. Notable challenges found related to culture are responsiveness to change and resistance to change. For the researchers, many of these challenges contributed to the difficulty of creating a uniform view of the process and of determining a best practice process for complaint and problem management. IT (a process and an artifact) was used as an enabler (door opener) to stimulate and fuel the exposure of the challenges. It was a successful approach for starting a change dialogue because the municipal workers had a willingness to collaborate and change, and also showed openness to adopting the new IT artifact. The results imply that process management maturity in Swedish municipalities is low. Municipalities are struggling with taking the first small steps in re-organizing according to process-logic, despite visions from politicians and despite Sweden being ranked high in e-government development. Similar empirical evidence has been identified in previous research [START_REF] Indulska | Major Issues in Business Process Management: An Australian Perspective[END_REF][START_REF] Bandara | Major Issues in Business Process Management: An Expert Perspective[END_REF][START_REF] Sadiq | Major Issues in Business Process Management: A Vendor Perspective[END_REF][START_REF] Weerakkody | E-Government: The Need for Effective Process Management in the Public Sector[END_REF][START_REF] Indulska | Business process modeling: current issues and future challenges[END_REF][START_REF] Van Der Aalst | Challenges in business process management: Verification of business processes using Petri nets[END_REF][START_REF] Valenca | Understanding the adoption of BPM governance in Brazilian public sector[END_REF][START_REF] Niehaves | From bureaucratic to quasi-market environments: on the coevolution of public sector business process management[END_REF]. For academia this paper informs about the low maturity within Swedish municipalities with regard to process management.
It complements existing studies and provides a contemporary view of process management challenges. The paper also provides valuable insights into challenges that need to be addressed and can be used to position future research. This paper strengthens findings regarding low BPM maturity in local government found in other countries; examples are [START_REF] Niehaves | From bureaucratic to quasi-market environments: on the coevolution of public sector business process management[END_REF] in Germany and [START_REF] Valenca | Understanding the adoption of BPM governance in Brazilian public sector[END_REF] in Brazil. For practice this paper contributes by providing an understanding of the identified challenges, giving examples of how business process challenges lead to inefficiencies in the daily work of municipal workers. The approach of using an IT artefact to fuel discussion and implement change was deemed usable when rationalising processes and improving process maturity within the public sector. This approach should therefore be taken into consideration by public management when initiating change within municipalities. Business process initiatives within municipalities should take a holistic perspective of process management since challenges were found in all core elements of BPM. The researchers also see the need for municipalities to get bottom-up support by complementing e-government policies and guidelines with instructions on how they can be implemented at a local level, e.g. instructions on how single public processes can be transformed to promote e-government. Today e-government is promoted by a top-down approach where abstract policies and guidelines are communicated towards municipalities and they have to implement them on their own. Limitations of this study that affect the transferability of the findings are that the municipalities were not randomly selected and that only Swedish municipalities were studied. However, the author argues that it is reasonable that the results are transferable to other local governments in other countries due to similarities in the results of earlier related studies (see above). Also, this paper does not consider cross-sectional variance, that is, how the challenges are influenced by differences between the municipalities, and the study is limited to the investigation of BPM related challenges. We propose that future research focus on strategies for how to overcome the identified challenges and present a structured framework with challenges and success factors. Also, building collaborative solutions together with private partners should be investigated.

Table 1. Participating municipalities
Municipality    Abbreviation    Number of workshops
Municipality 1  M1              3
Municipality 2  M2              3
Municipality 3  M3              1
Municipality 4  M4              1
Municipality 5  M5              2

Table 2. Process Management Challenges
BPM Core Element        Challenge
Strategic Alignment     Organisational structure, initiatives, legal aspects, politicians and management agendas
Governance              Responsibilities, process owners, information management
Methods                 No methods
Information Technology  Multiple IT systems, IT system integration, legacy systems, proprietary systems
People                  Routines, education
Culture                 Responsiveness to change, resistance to change

Acknowledgements. The author would like to thank the Swedish Governmental Agency for Innovation Systems and NordForsk for funding the research project.
34,663
[ "1004208", "1004209" ]
[ "300563", "300563" ]
01490909
en
[ "shs", "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01490909/file/978-3-642-40358-3_19_Chapter.pdf
Gertraud Peinel email: [email protected] Thomas Rose email: [email protected] Business Processes and Standard Operating Procedures: Two Coins with Similar Sides Keywords: Process Management, Business Processes, Standard Operating Procedures, Emergency Management Introduction We conducted several research projects in cooperation with emergency and relief organisations. Their pivotal objective has been the improvement of planning processes in terms of quality and time. Results of planning processes typically become manifest in Standard Operating Procedures (SOP), be it in medical or emergency management domains. Since SOPs appear rather close to business processes, the suitability of business process management (BPM) tools seemed obvious. Hence, the question arose of how to utilize BPM methods and tools for designing SOPs. Once embarking on this endeavour, one typically faces the following core challenges: - Does planning of emergency and relief services differ from planning processes in business environments in terms of objectives, methodology or interests? - Do formal process models offer benefits to the conceptual design of standard operating procedures of emergency services? - What changes to business process management means are instrumental for planning of emergency services, or can prevailing tools be used directly? - What is the added value of business process analytics once used for assessing the quality and effectiveness of planned procedures? This paper structures our findings along several dimensions in order to expose similarities and differences of both worlds, e.g., standards for business process management versus the prevalent folklore of emergency organisations. To make a long story short: we argue that standard business process means are currently not suitable for emergency management planning. This paper is organized as follows. After an introduction about business process management, we introduce some pivotal characteristics of the emergency domain and give an overview of our projects in this domain. In the fourth section we present our experiences regarding the relationship between business process management and emergency management. Following an overview table with driving issues, we elaborate our findings in detail. Finally, we draw our conclusion for the sake of planning counter measures. SOPs and Business Process Management A Standard Operating Procedure describes 'a set of written instructions that document a routine or repetitive activity followed by an organization' [START_REF]United States Environmental Protection Agency: Guidance for Preparing Standard Operating Procedures (SOPs) In: Office of Environmental Information[END_REF] and is a 'written, numbered organizational directive that establishes a standard course of action' [START_REF] Cook | Standard operating procedures and guidelines[END_REF]. The concept of SOPs is mainly used in emergency management, military, and medical contexts.
Similarities between these SOPs and the definition of business processes are obvious, taking into account that a process is defined as a 'set of partially ordered activities intended to reach a goal' [START_REF] Hammer | Reengineering the Corporation-A manifesto for Business Revolution[END_REF] or that 'a process is thus a specific ordering of work activities across time and place with a beginning, an end and clearly identified inputs and outputs: a structure for action' [START_REF] Lindsay | Business processes-attempts to find a definition[END_REF]. Consequently, several security related projects have directly applied process management means to standard operating procedures for emergency management. Many of these projects sought a partial automation of control flows for the execution of standard operating procedures [START_REF] Rüppel | Improving emergency management by formal dynamic process-modelling[END_REF][START_REF] Khalilbeigi | Towards Computer Support of Paper Workflows in Emergency Management[END_REF][START_REF] Paulheim | Improving usability of integrated emergency response systems: the SoKNOS approach[END_REF][START_REF] Becker | Effiziente Entscheidungsunterstützung im Krisenfall durch interaktive Standard Operating Procedures[END_REF]. That is, standard operating procedures served as blueprints for workflows, which can be directly translated into executable activities and at the same time impose control structures. Following this line of approaches, automation and assisted procedures become the focus of attention. Other approaches strived for seamless information flows by integrating information management and data streams inside as well as between command centres [START_REF] Kittel | Gaining Flexibility and Compliance in Rescue Processes with BPM[END_REF][START_REF] Soini | Toward adaptable communication and enhanced collaboration in global crisis management using process modeling[END_REF][START_REF] Ziebermayr | A Proposal for the Application of Dynamic Workflows in Disaster Management: A Process Model Language Customized for Disaster Management[END_REF] or between command centres and rescue units during a crisis [START_REF] De Leoni | Mobile process management through web services[END_REF][START_REF] Franke | Design of a Collaborative Disaster Response Process Management System[END_REF][START_REF] Franke | Reference Process Models and Systems for Inter-Organizational Ad-Hoc Coordination-Supply Chain Management in Humanitarian Operations[END_REF]. In the course of their projects all of them realized that the identification of the procedures of rescue organisations is essential for any research contribution that aims to improve the support of rescue workers and their organisations [START_REF] Kunze | Nutzung von Sensornetzwerken und mobilen Informationsgeräten für die Situationserfassung und die Prozessunterstützung bei Massenanfällen von Verletzten[END_REF]. But unfortunately, they found no adequate means for domain experts to grasp, to describe, or to formally model their courses of action themselves, whether for daily routines or even large-scale crises. Once third-party experts with a modelling background are involved in the formalisation of standard operating procedures, commercial process management environments become instrumental. Yet the question of the adequacy of the process model remains a concern, i.e. how close the model with its abstractions comes to reality, taking into account the domain's inherent complexity and variety.
Our research intention has been to support emergency experts in their own modelling endeavours, as also promoted in other domains like health care [START_REF] Sedlmayr | Proaktive Assistenz zur kontextabhängigen und zielorientierten Unterstützung bei der Indikationsstellung und Anwendung von Behandlungsmaßnahmen in der Intensivmedizin[END_REF] and engineering [START_REF] Fuenffinger | Management von Prozesswissen in projekthaften Prozessen[END_REF]; that is, emergency experts are to voice their standard operating procedures in terms of process models. But we encountered several obstacles and setbacks when trying to establish process modelling as a planning method in the emergency management domain. Research Path Our experience is founded on an array of projects with emergency related organisations, including fire brigades as well as emergency management authorities, but also police and rescue organisations. These projects revolved around central aspects of emergency management: 1. Utilizing BPM tools for SOP capture -Modelling the operational concept for cross-regional support of mass casualty incidents (MCI) with prevailing BPM methods [START_REF] Rose | Process management support for emergency management procedures[END_REF] This project delivered a proof of the usefulness of business process means for emergency preparation. An already defined operational concept for the treatment of mass casualties was translated into a formal process model, i.e. we transferred EM content into formal process models. This exercise demonstrated the virtues of formal process models, i.e. transparency across the organisation. Once the processes had been modelled by third-party experts, acceptance by domain experts was high. 2. User-centred process assistance -Risk management process support for small to medium-sized communities for planning of natural or man-made disasters (project ERMA) [START_REF] Peinel | Process-oriented Risk Management for Smaller Municipalities[END_REF] In the follow-up project we moved our perspective closer to the emergency management domain: we designed and implemented a process management modelling tool with a more intuitive interface supporting the terminology of the EM domain, but still founded in the look and feel of BPM tools (graphs with activities as nodes, while edges represented the control flow). Hence, user acceptance improved due to the new interface, which was custom-tailored to domain experts' terminology. 3. Re-engineering processes -Mobile Data Capture and Communication for Emergency Medical Services [START_REF] Soboll | Prozessmodellierung der mobilen Datenerfassung für den Rettungsdienst bei einer Großschadenslage[END_REF] Another project studied the impact of mobile devices on the processes, in particular once seamless information flows are possible. The process for capturing patient data was the central focus. This process was re-engineered on the basis of seamless communication flows. Once third-party experts modelled the processes, the actual notation language appeared of secondary importance. Here we came closer to the EM domain: we created a mobile interface hiding the process modelling endeavour and just presenting the process execution results, guiding rescue forces with a mobile interface. Still we faced the problem that the EM domain was unable to change the process itself. 4.
4. Planning support for emergency preparedness - Process support for emergency management organisations in the case of a long-lasting power blackout (project InfoStrom) [START_REF] Peinel | Deploying process management for emergency services -Lessons learnt and research required[END_REF][START_REF] Harand | Process Structures in Crises Management[END_REF]. The most recent approach was to move directly to the perspective of the EM domain: we elaborated a planning metaphor the experts already use and formalised it in order to provide IT support. We framed a checklist model with a corresponding editor working in a network environment and employed converters to translate the checklists into BPM tools.

Even though we succeeded in providing proof of concept that SOPs such as those for MCIs can be translated into business processes, the modelling methods and tools, whether commercial or self-implemented, hardly engendered enthusiasm among our emergency management partners. To reiterate: they liked the modelling results, but disapproved of the planning process, the tools and the terminology. We undertook critical follow-up research to find out why these projects fell a step short. Several indicators from the projects were identified as potential reasons and are listed in the following section. Specific attention has been devoted to classifying them along dimensions relating to both BPM and EM.

Juxtaposition of Process Management with Emergency Management Characteristics

Table 1 summarises the findings gained from our project experience. The left column spans the topics of interest encountered, while the two right columns relate specific properties and characteristics of the two domains: emergency management and business process management. It has to be noted that our findings are in part based on confessions and personal opinions of project partners working in emergency service agencies, rather than on a normative, scientific justification. We admit that the naming of the topics and the assignment of issues to topics may not be formally orthogonal and may even overlap. The differences we found are described in detail below.

1. Planning Philosophy:
a) Investigating the planning processes of German emergency services by analysing regulations, rules and directives, we surprisingly detected only rudimentary process structures [START_REF] Harand | Process Structures in Crises Management[END_REF]. Checklists or task skeletons prevail, sometimes even distributed across organisations. In none of the documents examined did we find any formalisation or common notation for presenting regulations or rules; they are kept mostly as unstructured text or checklists. Activities are hidden behind terms like "responsibilities" and "duties", which we believe cannot be converted one-to-one into the definitions of business processes or activities. Rarely had any list of activities been sorted into temporal or logical order. What we detected is that most operating procedures of our fire brigades are described by means of checklists, yet most of these are merely paper-based documentation.
b) The operating procedures we found in regulations and handbooks are mostly described in general terms and not adapted to specific disasters. When asked, the rescue forces, fire departments and police agencies in our projects argue that in the majority of cases a large event can be broken down into a set of smaller events, which in turn can be handled "as usual". As such, only specific events are pre-planned, with the aim of guaranteeing sufficient resources.
Anticipation of "what could happen" and "who will call when and why" currently takes place only if major events are planned, such as a soccer world cup or other happenings with a similarly high anticipated number of visitors. Another argument for not planning came from the emergency service of a commercial organisation (a harbour), which argued that it does not plan for events that might happen but have not happened yet, because this would waste time and resources and because no regulation forces it to plan more than evacuation paths and contact person details.
c) Also, the argument came up that one might be held responsible for damages or, worse, deaths if a plan can be proved wrong in hindsight or if an actor deviated from such a predefined SOP (see also the discussion of standard operating procedures and resulting liability by Bentivoglio [START_REF] Bentivoglio | SOPs and Liability[END_REF]). Such arguments turned out to be the reason for the often hesitant response of emergency units concerning formalised planning. Business actors, on the other hand, want to unveil problems affecting smooth operation in order to bring a product or service to the customer more effectively.
d) Another problem we experienced was that the responsible organisations do not want to expose their activities to other organisations; specifically, fire brigades and medical services are tight-lipped due to data privacy, and police units due to matters of secrecy. But to formalise procedures in order to unveil resource conflicts, partners have to disclose what they are doing, where, and with what. This would allow emergency services to discuss their courses of action and resolve potential conflicts (of goals, of resources, etc.). Even our proposal of careful selection or automatic filtering could not overcome the distrust; and we heard that the police, understandably, insulate their command and control systems in general.
e) The emergency services we talked to were interested in planning support before and in documentation of actions after the operation. Direct operation support (i.e. computer support during courses of action in an operation) is still out of their scope: they say that most actions are manual tasks, and that office automation is good for office workers but not for rescue organisations in the field. Simulation is done through training and exercises, and analysis is made by evaluating the results of these exercises. BPM software, on the other hand, is used for process planning, analysis and simulation, improvement and finally execution in a product or service environment. However, we were surprised when staff responsible for deployment documentation approached us with interest in mobile checklist support in order to initiate, track and finalise operation documentation in a timely manner.

2. Mode of Operation - Practice and Work:
a) In many BPM applications the automation of processes is the driving objective, because process automation by means of information technology saves time and costs. As indicated above, an overwhelming body of activities in emergency management is judged to be outside the scope of IT choreography. Hence, methods and tools for automation in the realm of process management currently do not affect emergency preparation and planning. Most BPM tools are geared towards the automatic execution of processes as workflows, and this intention strongly impacts the methodology of how and in which order to model.
Often, this forces users to enter details that are unnecessary for a non-automated planning process. For the same reason, user interfaces are packed with complex functions, most of them unnecessary for planning without execution. We did not find any tools allowing one to scale the interface up or down according to different application purposes. Consequently, our end users (firefighters) claimed that they were not able to use any of the BPM tools we proposed.
b) While companies analyse according to costs and resources, emergency management organisations try to improve their procedures according to tactical and operational goals and the time until these goals are reached [START_REF] Arsenova | Unterstützung der Prozessmodellierung im Notfallmanagement[END_REF][START_REF] Reijers | Workflow management systems+ swarm intelligence= dynamic task assignment for emergency management applications[END_REF]. But since their courses of action are not ruled by predominant repetition, they cannot stipulate, and consequently inspect, predetermined courses of action in advance. Hence, to date they rely on exercises and subsequent expert evaluation.
c) The few emergency plans we found concentrate on internal processes within the organisation, while external connections and relationships are somewhat neglected or only superficially mentioned. A clear "when, what, who, with whom, with which means" is mostly missing. Moreover, if processes have been identified at all, they merely revolve around the rather abstract observe-orient-decide-act cycle, with the aim of improving communication within this cycle. Commercial companies reveal quite a different attitude: they try to open up their operations in order to establish efficient and effective value networks interacting with partners, retailers, and consumers.

3. IT Technology, Implementation and Usage of Tools:
a) We experienced that thinking abstractly about procedures and then translating these procedures into a diagram of icons with computer software is still mostly too abstract for many local crisis managers. Formalisation and abstraction are normally not the business of "normal" fire department chiefs or of the staff responsible for planning. Not to forget that staff also come from voluntary organisations and thus normally follow a completely different occupation in everyday life. We often experienced that, while we can easily translate descriptions of courses of action into entities of the process world, many persons in this domain struggle with this net of concepts and its logical linking. Thus, any tool support can only partly overcome the missing routine of abstraction; we should also allot a moderator or translator to operate the tool.
b) We also found that most BPM tools are somewhat over-sophisticated, offering hundreds of functions for typical business operations: modelling, printing, sharing and loading, analysis, simulation, load paths, evaluation, documentation, import/export and so on. Taking into account that planning staff from, for example, fire brigades are not necessarily IT and modelling experts, learning how to use such tools (with a terminology completely different from that used in emergency services) is far too laborious and appears distant from the real work. Actually, some older staff appeared to be IT-averse or computer-illiterate, but this might change when the smartphone- and Internet-experienced youth succeed them in their positions.
4. Models and Concepts:
a) BPM tools describe a business process as a list of functionalities, triggered by events, executed by organisational units (referenced by roles and positions), passing and creating information objects, and linked by connectors that allow one to control the flow of functionalities, i.e. how functions are called (and, or, xor); EPC is an example [START_REF] Keller | Semantische Prozeßmodellierung auf der Grundlage "Ereignisgesteuerter Prozeßketten (EPK)[END_REF]. Apart from different naming (for example, our firefighters said they do not execute function(alitie)s or processes; they insisted on carrying out measures), they also use additional concepts such as a measure carrier and a measure carrier type with capabilities, which can be derived from the rank and are necessary for a position, with each measure following tactical and strategic goals [START_REF] Arsenova | Unterstützung der Prozessmodellierung im Notfallmanagement[END_REF]. These models are similar, but obviously not identical (see the sketch at the end of this topic). Further research has to unveil whether important and necessary information is lost when switching to such a standard business process model.
b) From an execution stance, concepts for dynamic control flow are required in the emergency management domain. One typical control flow element in emergency management is escalation (going from a lower alarm level to a higher one), with a complementary de-escalation. Emergency management planning covers different procedures according to different warning levels, be it flooding, storm, or rain with respect to gauge levels, wind speed, or precipitation rate. Procedures of a higher level often include the activities of lower levels, increased or extended specific measures, and possibly the replacement of resources or activities if a level is skipped. The same goes for de-escalation, where activities have to be "reversed" step by step (e.g. the evacuation of a hospital or rest homes). Currently, the modelling and execution of such "escalation" processes is, as far as we know, neither implemented nor investigated in research. Such dynamic control flows are typically not part of prevailing tools for business process management. Although they can in principle be implemented with so-called worklets for different instances of a sub-process [START_REF] Adams | Implementing dynamic flexibility in workflows using worklets[END_REF], more natural implementations are desirable.
c) The most decisive impediment of process modelling is the quest for completeness. A process model by its nature always claims a complete understanding of the intended course of action, without any discrepancy. Incomplete and partial models are not "valid" with regard to the philosophy of process modelling. Unfortunately, many courses of action in emergency management have to be prepared in a stepwise approach and call for customisation during the event [START_REF] Kittel | Gaining Flexibility and Compliance in Rescue Processes with BPM[END_REF], since "effective response to a crisis is a combination of anticipation and improvisation" [START_REF] Lalonde | Changing the Paradigm of Crisis Management: How to Put OD in the Process[END_REF]. Although process modelling has given birth to adaptive and ad-hoc workflows, incompleteness and flexibility are still open research issues.
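As a minimal, hypothetical sketch of the conceptual gap discussed in items a) and b), the following snippet represents measures, measure carriers, goals and warning levels directly; all class and attribute names are our own illustrative inventions and are not taken from any standard EM vocabulary or existing BPM meta-model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MeasureCarrier:
    """Who carries out a measure: a position with capabilities (illustrative placeholder)."""
    position: str                                       # e.g. "platoon leader"
    capabilities: List[str] = field(default_factory=list)

@dataclass
class Measure:
    """EM counterpart of a BPM 'function': carried out by a carrier and serving explicit goals."""
    name: str
    carrier: MeasureCarrier
    tactical_goals: List[str] = field(default_factory=list)
    min_level: int = 1                                  # lowest warning level at which it applies

@dataclass
class EscalationPlan:
    """Procedures per warning level; higher levels include the measures of lower ones."""
    measures: List[Measure]

    def measures_for_level(self, level: int) -> List[str]:
        return [m.name for m in self.measures if m.min_level <= level]

plan = EscalationPlan(measures=[
    Measure("monitor gauge levels", MeasureCarrier("duty officer"), ["situation awareness"], 1),
    Measure("pre-position sandbags", MeasureCarrier("platoon leader", ["logistics"]), ["limit damage"], 2),
    Measure("evacuate rest home", MeasureCarrier("incident commander"), ["protect life"], 3),
])
print(plan.measures_for_level(2))
# ['monitor gauge levels', 'pre-position sandbags']
```

Mapping Measure to a process function and MeasureCarrier to an organisational role is straightforward, but the goal attributes and the level-based inclusion of measures are not first-class concepts in typical control-flow-centric notations, which is exactly the kind of information loss item a) warns about.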
5. Organisational Issues:
a) Important differences revolve around the representation of the organisation and its units. While enterprises are mostly built upon permanent organisational units, emergency organisations rely on temporary units with changing locations, roles, capacities, and the like.
b) Also, organisational changes might happen due to changes of phase caused by triggers [START_REF] Hoogendoorn | Formal modelling and comparing of disaster plans[END_REF], and this is only partially supported by BPM or workflow tools [START_REF] Reijers | Workflow management systems+ swarm intelligence= dynamic task assignment for emergency management applications[END_REF]. Worse, some BPM tools have only a rudimentary organisational model (for example, the BPMN standard supports roles only as pools and swim-lanes [START_REF] Bpmn | Business Process Model and Notation (BPMN)[END_REF]). And if they do have an organisational model, these tools expect a permanent assignment of persons to roles and positions. Capacities or capabilities are rarely elaborated.
c) In many instances, specific roles or positions require, and also expect, specific capabilities of a person by law or directive. On the other hand, resources sometimes require certain capabilities, i.e. only specifically trained persons can use a specific resource (e.g., rescue diver, rescuer from heights and depths, special crane driver).
d) While commercial organisations can easily adopt new systems and technologies (thanks to education in schools and universities, qualification measures and training), emergency service agencies with voluntary staff often have problems training and educating their personnel beyond emergency-related issues. While companies select the person with the best qualification, voluntary organisations often have to live with the staff available, regardless of general knowledge and qualifications. A person in charge at a fire brigade told us off the record: "Well, we also have to work with a staff member who works full-time as a forklift operator and has a helper syndrome. How do I educate him to think in terms of processes?" And how are we to train them to use, and also to trust, a computer?

6. Terminology:
a) Most available process modelling tools stick to their own terminology, which is basically not changeable and not adaptable to other nomenclatures. This is unacceptable for somewhat military-oriented organisations like fire or police departments who, to make matters more difficult, also cultivate the use of acronyms for explicitness. They have to follow their own rules, legislation, and standards; a change of terminology could lead to confusion and failure, especially with respect to the command structure.
b) We concentrated our research specifically on processes intersecting with the processes of other organisations, in order to unveil communication needs concerning the use of the same resources (machines, places), mutual help, and the like. We focused on these intersections since they bear the most critical problems [START_REF] Jäger | VFH in Wiesbaden[END_REF][START_REF] Lasogga | Kooperation bei Großschadensereignissen[END_REF][START_REF] Schafer | Emergency management planning as collaborative community work[END_REF]. Thus, planners have to read and understand plans from other organisations, but ambulances, fire departments, and police forces use different terminologies, with, worse, also false friends. We obviously need translations and explanations of terms. Most BPM tools we know do not have any interface to something like glossaries or dictionaries.
7. Goals and Decisions:
a) The underlying meta-models and analysis services of prevailing process management tools are tailored to business activities, not emergency management activities. Thus, they mostly presuppose the business interests of their users. To give an example, these tools can analyse with respect to resources like time and money, but not with respect to the achievement of objectives or goals. Correspondingly, the underlying meta-models of the tools do not tackle goals or objectives adequately. But we learned that the fulfilment of tactical and operative goals is the core driver of operations in the emergency management domain. Although several criteria for process analysis appear crucial for rescue organisations, the ultimate question and criterion for process design remains unanswered: does my process adequately address the disaster? In order to address this question, strategic and operational goals have to be assessed and orchestrated, rather than restructuring control flows in courses of action. Unfortunately, the majority of methods and tools for process modelling do not support the elicitation of goals and objectives. At best, some tools support the representation of linkages between activities and the respective processes with goals, but they do not support the networking of goals and their dependencies (a sketch of such a goal network follows at the end of this topic). For better collaboration between emergency services, different goals and their consequences have to be unveiled [START_REF] Smith | Designing paper disasters: An authoring environment for developing training exercises in integrated emergency management[END_REF].
b) Since goals and their balanced consideration are of crucial importance, process modelling means have to be enhanced by means for unveiling design rationales [START_REF] Potts | Recording the reasons for design decisions[END_REF], which are important for the reuse and exchange of plans: who has changed which plan, and why? Independent of the lack of goal orientation, leveraging the quality of processes is the driver for modelling processes, i.e. different users should implement a comparable level of quality. This certainly also applies to rescue organisations, which is perfectly illustrated by the definition and use of standard operating procedures for fire brigades and standard medical services.
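The following snippet is a purely illustrative sketch of the goal networking asked for in item a): a small network of goals with dependencies and links to supporting measures. The goal names and the Goal structure are hypothetical and not drawn from any existing tool or standard.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set

@dataclass
class Goal:
    """An operational or strategic goal, possibly depending on other goals."""
    name: str
    depends_on: List[str] = field(default_factory=list)    # goals that must be satisfied first
    supported_by: List[str] = field(default_factory=list)  # measures/activities serving this goal

def unsupported_goals(goals: Dict[str, Goal]) -> List[str]:
    """Goals that no planned measure supports, neither directly nor via a goal they depend on."""
    def covered(name: str, seen: Optional[Set[str]] = None) -> bool:
        seen = seen if seen is not None else set()
        if name in seen:
            return False
        seen.add(name)
        goal = goals[name]
        return bool(goal.supported_by) or any(covered(dep, seen) for dep in goal.depends_on)
    return [name for name in goals if not covered(name)]

network = {
    "protect life":     Goal("protect life", depends_on=["evacuate area"]),
    "evacuate area":    Goal("evacuate area", supported_by=["evacuate rest home"]),
    "keep roads clear": Goal("keep roads clear"),   # no supporting measure planned yet
}
print(unsupported_goals(network))
# ['keep roads clear']
```

Questions of this kind (which goals lack supporting measures, which measures serve conflicting goals) are, as argued above, precisely what prevailing control-flow-centric analyses do not answer.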
Conclusion

Although process modelling for emergency planning has been utilised in many security-related projects, essential questions about the applicability of process modelling concepts to emergency management practice have not yet been researched, e.g. does the functional design of tools for process modelling coincide with the way of working and thinking of rescue organisations, and do the modelling concepts adequately address the objectives of rescue organisations? Our experience suggests the conclusion that business process management methods and tools cannot be directly applied to the emergency management domain because of mismatches among tool support, intentions, organisational practices and experts' folklore. To reiterate, the major impediments are:
- Available BPM tools support a change of terminology only fractionally and a change of model not at all.
- Available BPM tools typically target the automation of execution, and this governs both the modelling method and the user interfaces.
- The abstract world of process modelling is often incomprehensible to realistically and practically thinking rescue workers, firefighters, and police officers. Thus, they cannot use the BPM tools as they are for planning.

However, once courses of action are modelled by process experts, the processes become understandable and transparent for them. Process modelling thus earns both laurels and darts for emergency preparation. Transparency and the leveraged quality of courses of action are definitely a surplus of process modelling for rescue organisations. But what is needed is a more generic and more flexible approach to emergency management planning, one that incorporates its specific peculiarities. BPM tools should at least allow a change of terminology and a modification of the model, should invest more in goal and organisation modelling and its analysis, and should provide a scalable, user-oriented interface leaving complexity to BPM experts or advanced learners.

Table 1. Table of Different Characteristics

Topic                   | Emergency Services World                          | Business Process Modelling World
1. Planning philosophy  | a) Checklists prevail                             | Formal processes
                        | b) No planning of big events                      | Focus on large-scale procedures
                        | c) Liability questions                            | Transparency for quality
                        | d) Secrecy and privacy of information             | Interfaces with other organisations
                        | e) Concentration on planning and post-processing  | Concentration on planning, analysis and execution

Acknowledgment. This article was partly supported by the German Federal Ministry of Education and Research (BMBF) security research program and partly by the B-IT Foundation. Special thanks go to the fire brigades of the City of Cologne as well as Rhine-Erft and Siegen-Wittgenstein County.